Train and fine-tune models, and manage them from experimentation to production. For guides and examples, see https://docs.wandb.ai.
SDK v0.19.11
- 1: Actions
- 1.1: Classes
- 1.1.1: Artifact
- 1.1.2: ArtifactTTL
- 1.1.3: Error
- 1.1.4: Run
- 1.1.5: Settings
- 1.2: Functions
- 1.2.1: agent()
- 1.2.2: controller()
- 1.2.3: finish()
- 1.2.4: init()
- 1.2.5: login()
- 1.2.6: restore()
- 1.2.7: setup()
- 1.2.8: sweep()
- 1.2.9: teardown()
- 1.3: Legacy Functions
- 1.3.1: define_metric()
- 1.3.2: link_model()
- 1.3.3: log_artifact()
- 1.3.4: log_model()
- 1.3.5: log()
- 1.3.6: save()
- 1.3.7: unwatch()
- 1.3.8: use_artifact()
- 1.3.9: use_model()
- 1.3.10: watch()
- 2: Data Types
- 2.1: Audio
- 2.2: box3d()
- 2.3: Html
- 2.4: Image
- 2.5: Molecule
- 2.6: Object3D
- 2.7: Plotly
- 2.8: Table
- 2.9: Video
- 3: Launch Library Reference
- 3.1: create_and_run_agent()
- 3.2: launch_add()
- 3.3: launch()
- 3.4: LaunchAgent
- 3.5: load_wandb_config()
- 3.6: manage_config_file()
- 3.7: manage_wandb_config()
1 - Actions

Use during training to log experiments, track metrics, and save model artifacts.

1.1 - Classes

1.1.1 - Artifact

```python
class Artifact
```

Flexible and lightweight building block for dataset and model versioning.

Construct an empty W&B Artifact. Populate an artifact's contents with methods that begin with `add`. Once the artifact has all the desired files, you can call `wandb.log_artifact()` to log it.

**Args:**

 - `name` (str): A human-readable name for the artifact. Use the name to identify a specific artifact in the W&B App UI or programmatically. You can interactively reference an artifact with the `use_artifact` Public API. A name can contain letters, numbers, underscores, hyphens, and dots. The name must be unique across a project.
 - `type` (str): The artifact's type. Use the type of an artifact to both organize and differentiate artifacts. You can use any string that contains letters, numbers, underscores, hyphens, and dots. Common types include `dataset` or `model`. Include `model` within your type string if you want to link the artifact to the W&B Model Registry. Note that some types are reserved for internal use and cannot be set by users. Such types include `job` and types that start with `wandb-`.
 - `description` (str | None) = None: A description of the artifact. For Model or Dataset Artifacts, add documentation for your standardized team model or dataset card. View an artifact's description programmatically with the `Artifact.description` attribute or in the W&B App UI. W&B renders the description as markdown in the W&B App.
 - `metadata` (dict[str, Any] | None) = None: Additional information about an artifact. Specify metadata as a dictionary of key-value pairs. You can specify no more than 100 total keys.
 - `incremental`: Use the `Artifact.new_draft()` method instead to modify an existing artifact.
 - `use_as`: Deprecated.
 - `is_link`: Boolean indication of whether the artifact is a linked artifact (`True`) or a source artifact (`False`).

**Returns:**
An `Artifact` object.
---
### <kbd>method</kbd> `Artifact.__init__`

```python
__init__(
    name: 'str',
    type: 'str',
    description: 'str | None' = None,
    metadata: 'dict[str, Any] | None' = None,
    incremental: 'bool' = False,
    use_as: 'str | None' = None
) → None
```
---
### <kbd>property</kbd> Artifact.aliases

List of one or more semantically-friendly references or identifying "nicknames" assigned to an artifact version.

Aliases are mutable references that you can programmatically reference. Change an artifact's alias with the W&B App UI or programmatically. See Create new artifact versions for more information.

---
### <kbd>property</kbd> Artifact.collection

The collection this artifact was retrieved from.

A collection is an ordered group of artifact versions. If this artifact was retrieved from a portfolio / linked collection, that collection will be returned rather than the collection that an artifact version originated from. The collection that an artifact originates from is known as the source sequence.

---
### <kbd>property</kbd> Artifact.commit_hash

The hash returned when this artifact was committed.

---
### <kbd>property</kbd> Artifact.created_at

Timestamp when the artifact was created.

---
### <kbd>property</kbd> Artifact.description

A description of the artifact.

---
### <kbd>property</kbd> Artifact.digest

The logical digest of the artifact.

The digest is the checksum of the artifact's contents. If an artifact has the same digest as the current `latest` version, then `log_artifact` is a no-op.

---
### <kbd>property</kbd> Artifact.distributed_id

---
### <kbd>property</kbd> Artifact.entity

The name of the entity that the artifact collection belongs to.

If the artifact is a link, the entity will be the entity of the linked artifact.

---
### <kbd>property</kbd> Artifact.file_count

The number of files (including references).

---
### <kbd>property</kbd> Artifact.history_step

The nearest step at which history metrics were logged for the source run of the artifact.

**Examples:**

```python
run = artifact.logged_by()
if run and (artifact.history_step is not None):
    history = run.sample_history(
        min_step=artifact.history_step,
        max_step=artifact.history_step + 1,
        keys=["my_metric"],
    )
```
---
### <kbd>property</kbd> Artifact.id
The artifact's ID.
---
### <kbd>property</kbd> Artifact.incremental
---
### <kbd>property</kbd> Artifact.is_link
Boolean flag indicating if the artifact is a link artifact.
True: The artifact is a link artifact to a source artifact. False: The artifact is a source artifact.
---
### <kbd>property</kbd> Artifact.linked_artifacts
Returns a list of all the linked artifacts of a source artifact.
If the artifact is a link artifact (`artifact.is_link == True`), it will return an empty list. Limited to 500 results.
---
### <kbd>property</kbd> Artifact.manifest
The artifact's manifest.
The manifest lists all of its contents, and can't be changed once the artifact has been logged.
---
### <kbd>property</kbd> Artifact.metadata
User-defined artifact metadata.
Structured data associated with the artifact.
---
### <kbd>property</kbd> Artifact.name
The artifact name and version of the artifact.
A string with the format `{collection}:{alias}`. If fetched before an artifact is logged/saved, the name won't contain the alias. If the artifact is a link, the name will be the name of the linked artifact.
---
### <kbd>property</kbd> Artifact.project
The name of the project that the artifact collection belongs to.
If the artifact is a link, the project will be the project of the linked artifact.
---
### <kbd>property</kbd> Artifact.qualified_name
The entity/project/name of the artifact.
If the artifact is a link, the qualified name will be the qualified name of the linked artifact path.
---
### <kbd>property</kbd> Artifact.size
The total size of the artifact in bytes.
Includes any references tracked by this artifact.
---
### <kbd>property</kbd> Artifact.source_artifact
Returns the source artifact. The source artifact is the original logged artifact.
If the artifact itself is a source artifact (`artifact.is_link == False`), it will return itself.
---
### <kbd>property</kbd> Artifact.source_collection
The artifact's source collection.
The source collection is the collection that the artifact was logged from.
---
### <kbd>property</kbd> Artifact.source_entity
The name of the entity of the source artifact.
---
### <kbd>property</kbd> Artifact.source_name
The artifact name and version of the source artifact.
A string with the format `{source_collection}:{alias}`. Before the artifact is saved, contains only the name since the version is not yet known.
---
### <kbd>property</kbd> Artifact.source_project
The name of the project of the source artifact.
---
### <kbd>property</kbd> Artifact.source_qualified_name
The source_entity/source_project/source_name of the source artifact.
---
### <kbd>property</kbd> Artifact.source_version
The source artifact's version.
A string with the format `v{number}`.
---
### <kbd>property</kbd> Artifact.state
The status of the artifact. One of: "PENDING", "COMMITTED", or "DELETED".
---
### <kbd>property</kbd> Artifact.tags
List of one or more tags assigned to this artifact version.
---
### <kbd>property</kbd> Artifact.ttl
The time-to-live (TTL) policy of an artifact.
Artifacts are deleted shortly after a TTL policy's duration passes. If set to `None`, the artifact's TTL policies are deactivated and it will not be scheduled for deletion, even if there is a team default TTL. An artifact inherits a TTL policy from the team default if the team administrator defines a default TTL and there is no custom policy set on an artifact.
**Raises:**
- `ArtifactNotLoggedError`: Unable to fetch inherited TTL if the artifact has not been logged or saved.
---
### <kbd>property</kbd> Artifact.type
The artifact's type. Common types include `dataset` or `model`.
---
### <kbd>property</kbd> Artifact.updated_at
The time when the artifact was last updated.
---
### <kbd>property</kbd> Artifact.url
Constructs the URL of the artifact.
**Returns:**
- `str`: The URL of the artifact.
---
### <kbd>property</kbd> Artifact.use_as
Deprecated.
---
### <kbd>property</kbd> Artifact.version
The artifact's version.
A string with the format `v{number}`. If the artifact is a link artifact, the version will be from the linked collection.
---
### <kbd>method</kbd> `Artifact.add`

```python
add(
    obj: 'WBValue',
    name: 'StrPath',
    overwrite: 'bool' = False
) → ArtifactManifestEntry
```

Add a `wandb.WBValue` `obj` to the artifact.

**Args:**

 - `obj`: The object to add. Currently supports one of: Bokeh, JoinedTable, PartitionedTable, Table, Classes, ImageMask, BoundingBoxes2D, Audio, Image, Video, Html, Object3D.
 - `name`: The path within the artifact to add the object.
 - `overwrite`: If True, overwrite existing objects with the same file path if applicable.

**Returns:** The added manifest entry.

**Raises:**

 - `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
---
### <kbd>method</kbd> `Artifact.add_dir`

```python
add_dir(
    local_path: 'str',
    name: 'str | None' = None,
    skip_cache: 'bool | None' = False,
    policy: "Literal['mutable', 'immutable'] | None" = 'mutable',
    merge: 'bool' = False
) → None
```

Add a local directory to the artifact.

**Args:**

 - `local_path`: The path of the local directory.
 - `name`: The subdirectory name within an artifact. The name you specify appears in the W&B App UI nested by the artifact's `type`. Defaults to the root of the artifact.
 - `skip_cache`: If set to `True`, W&B will not copy/move files to the cache while uploading.
 - `policy`: "mutable" by default.
    - mutable: Create a temporary copy of the file to prevent corruption during upload.
    - immutable: Disable protection, rely on the user not to delete or change the file.
 - `merge`: If `False` (default), throws ValueError if a file was already added in a previous add_dir call and its content has changed. If `True`, overwrites existing files with changed content. Always adds new files and never removes files. To replace an entire directory, pass a name when adding the directory using `add_dir(local_path, name=my_prefix)` and call `remove(my_prefix)` to remove the directory, then add it again.

**Raises:**

 - `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
 - `ValueError`: Policy must be "mutable" or "immutable".
---
### <kbd>method</kbd> `Artifact.add_file`

```python
add_file(
    local_path: 'str',
    name: 'str | None' = None,
    is_tmp: 'bool | None' = False,
    skip_cache: 'bool | None' = False,
    policy: "Literal['mutable', 'immutable'] | None" = 'mutable',
    overwrite: 'bool' = False
) → ArtifactManifestEntry
```

Add a local file to the artifact.

**Args:**

 - `local_path`: The path to the file being added.
 - `name`: The path within the artifact to use for the file being added. Defaults to the basename of the file.
 - `is_tmp`: If true, then the file is renamed deterministically to avoid collisions.
 - `skip_cache`: If `True`, do not copy files to the cache after uploading.
 - `policy`: By default, set to "mutable". If set to "mutable", create a temporary copy of the file to prevent corruption during upload. If set to "immutable", disable protection and rely on the user not to delete or change the file.
 - `overwrite`: If `True`, overwrite the file if it already exists.

**Returns:** The added manifest entry.

**Raises:**

 - `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
 - `ValueError`: Policy must be "mutable" or "immutable".
---
### <kbd>method</kbd> `Artifact.add_reference`

```python
add_reference(
    uri: 'ArtifactManifestEntry | str',
    name: 'StrPath | None' = None,
    checksum: 'bool' = True,
    max_objects: 'int | None' = None
) → Sequence[ArtifactManifestEntry]
```

Add a reference denoted by a URI to the artifact.

Unlike files or directories that you add to an artifact, references are not uploaded to W&B. For more information, see Track external files.

By default, the following schemes are supported:

- http(s): The size and digest of the file will be inferred by the `Content-Length` and the `ETag` response headers returned by the server.
- s3: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
- gs: The checksum and size are pulled from the object metadata. If bucket versioning is enabled, then the version ID is also tracked.
- https, domain matching `*.blob.core.windows.net` (Azure): The checksum and size are pulled from the blob metadata. If storage account versioning is enabled, then the version ID is also tracked.
- file: The checksum and size are pulled from the file system. This scheme is useful if you have an NFS share or other externally mounted volume containing files you wish to track but not necessarily upload.

For any other scheme, the digest is just a hash of the URI and the size is left blank.

**Args:**

 - `uri`: The URI path of the reference to add. The URI path can be an object returned from `Artifact.get_entry` to store a reference to another artifact's entry.
 - `name`: The path within the artifact to place the contents of this reference.
 - `checksum`: Whether or not to checksum the resource(s) located at the reference URI. Checksumming is strongly recommended as it enables automatic integrity validation. Disabling checksumming will speed up artifact creation, but reference directories will not be iterated through, so the objects in the directory will not be saved to the artifact. We recommend setting `checksum=False` when adding reference objects, in which case a new version will only be created if the reference URI changes.
 - `max_objects`: The maximum number of objects to consider when adding a reference that points to a directory or bucket store prefix. By default, the maximum number of objects allowed for Amazon S3, GCS, Azure, and local files is 10,000,000. Other URI schemas do not have a maximum.

**Returns:** The added manifest entries.

**Raises:**

 - `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
---
### <kbd>method</kbd> `Artifact.checkout`

```python
checkout(root: 'str | None' = None) → str
```

Replace the specified root directory with the contents of the artifact.

WARNING: This will delete all files in `root` that are not included in the artifact.

**Args:**

 - `root`: The directory to replace with this artifact's files.

**Returns:** The path of the checked out contents.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
---
### <kbd>method</kbd> `Artifact.delete`

```python
delete(delete_aliases: 'bool' = False) → None
```

Delete an artifact and its files.

If called on a linked artifact, only the link is deleted, and the source artifact is unaffected.

Use `artifact.unlink()` instead of `artifact.delete()` to remove a link between a source artifact and a linked artifact.

**Args:**

 - `delete_aliases`: If set to `True`, deletes all aliases associated with the artifact. Otherwise, this raises an exception if the artifact has existing aliases. This parameter is ignored if the artifact is linked (a member of a portfolio collection).

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
---
### <kbd>method</kbd> `Artifact.download`

```python
download(
    root: 'StrPath | None' = None,
    allow_missing_references: 'bool' = False,
    skip_cache: 'bool | None' = None,
    path_prefix: 'StrPath | None' = None,
    multipart: 'bool | None' = None
) → FilePathStr
```

Download the contents of the artifact to the specified root directory.

Existing files located within `root` are not modified. Explicitly delete `root` before you call `download` if you want the contents of `root` to exactly match the artifact.

**Args:**

 - `root`: The directory W&B stores the artifact's files in.
 - `allow_missing_references`: If set to `True`, any invalid reference paths will be ignored while downloading referenced files.
 - `skip_cache`: If set to `True`, the artifact cache will be skipped when downloading and W&B will download each file into the default root or specified download directory.
 - `path_prefix`: If specified, only files with a path that starts with the given prefix will be downloaded. Uses unix format (forward slashes).
 - `multipart`: If set to `None` (default), the artifact will be downloaded in parallel using multipart download if individual file size is greater than 2GB. If set to `True` or `False`, the artifact will be downloaded in parallel or serially regardless of the file size.

**Returns:** The path to the downloaded contents.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
---
### <kbd>method</kbd> `Artifact.file`

```python
file(root: 'str | None' = None) → StrPath
```

Download a single file artifact to the directory you specify with `root`.

**Args:**

 - `root`: The root directory to store the file. Defaults to `./artifacts/self.name/`.

**Returns:** The full path of the downloaded file.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
 - `ValueError`: If the artifact contains more than one file.
---
### <kbd>method</kbd> `Artifact.files`

```python
files(names: 'list[str] | None' = None, per_page: 'int' = 50) → ArtifactFiles
```

Iterate over all files stored in this artifact.

**Args:**

 - `names`: The filename paths relative to the root of the artifact you wish to list.
 - `per_page`: The number of files to return per request.

**Returns:**
An iterator containing `File` objects.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
---
### <kbd>method</kbd> `Artifact.finalize`

```python
finalize() → None
```

Finalize the artifact version.

You cannot modify an artifact version once it is finalized because the artifact is logged as a specific artifact version. Create a new artifact version to log more data to an artifact. An artifact is automatically finalized when you log the artifact with `log_artifact`.

---
### <kbd>method</kbd> `Artifact.get`

```python
get(name: 'str') → WBValue | None
```

Get the WBValue object located at the artifact relative `name`.

**Args:**

 - `name`: The artifact relative name to retrieve.

**Returns:**
W&B object that can be logged with `wandb.log()` and visualized in the W&B UI.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact isn't logged or the run is offline.
---
### <kbd>method</kbd> `Artifact.get_added_local_path_name`

```python
get_added_local_path_name(local_path: 'str') → str | None
```

Get the artifact relative name of a file added by a local filesystem path.

**Args:**

 - `local_path`: The local path to resolve into an artifact relative name.

**Returns:** The artifact relative name.

---
### <kbd>method</kbd> `Artifact.get_entry`

```python
get_entry(name: 'StrPath') → ArtifactManifestEntry
```

Get the entry with the given name.

**Args:**

 - `name`: The artifact relative name to get.

**Returns:**
A W&B object.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact isn't logged or the run is offline.
 - `KeyError`: If the artifact doesn't contain an entry with the given name.

---
### <kbd>method</kbd> `Artifact.get_path`

```python
get_path(name: 'StrPath') → ArtifactManifestEntry
```

Deprecated. Use `get_entry(name)`.
---
### <kbd>method</kbd> `Artifact.is_draft`

```python
is_draft() → bool
```

Check if the artifact is not saved.

**Returns:**
Boolean. `False` if the artifact is saved. `True` if the artifact is not saved.

---
### <kbd>method</kbd> `Artifact.json_encode`

```python
json_encode() → dict[str, Any]
```

Returns the artifact encoded to the JSON format.

**Returns:**
A `dict` with `string` keys representing attributes of the artifact.
---
### <kbd>method</kbd> `Artifact.link`

```python
link(target_path: 'str', aliases: 'list[str] | None' = None) → Artifact | None
```

Link this artifact to a portfolio (a promoted collection of artifacts).

**Args:**

 - `target_path`: The path to the portfolio inside a project. The target path must adhere to one of the following schemas: `{portfolio}`, `{project}/{portfolio}`, or `{entity}/{project}/{portfolio}`. To link the artifact to the Model Registry, rather than to a generic portfolio inside a project, set `target_path` to the following schema: `{"model-registry"}/{Registered Model Name}` or `{entity}/{"model-registry"}/{Registered Model Name}`.
 - `aliases`: A list of strings that uniquely identifies the artifact inside the specified portfolio.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.

**Returns:** The linked artifact if linking was successful, otherwise None.
---
### <kbd>method</kbd> `Artifact.logged_by`

```python
logged_by() → Run | None
```

Get the W&B run that originally logged the artifact.

**Returns:** The W&B run that originally logged the artifact.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.

---
### <kbd>method</kbd> `Artifact.new_draft`

```python
new_draft() → Artifact
```

Create a new draft artifact with the same content as this committed artifact.

Modifying an existing artifact creates a new artifact version known as an "incremental artifact". The artifact returned can be extended or modified and logged as a new version.

**Returns:**
An `Artifact` object.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
---
### <kbd>method</kbd> `Artifact.new_file`

```python
new_file(
    name: 'str',
    mode: 'str' = 'x',
    encoding: 'str | None' = None
) → Iterator[IO]
```

Open a new temporary file and add it to the artifact.

**Args:**

 - `name`: The name of the new file to add to the artifact.
 - `mode`: The file access mode to use to open the new file.
 - `encoding`: The encoding used to open the new file.

**Returns:** A new file object that can be written to. Upon closing, the file is automatically added to the artifact.

**Raises:**

 - `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
---
### <kbd>method</kbd> `Artifact.remove`

```python
remove(item: 'StrPath | ArtifactManifestEntry') → None
```

Remove an item from the artifact.

**Args:**

 - `item`: The item to remove. Can be a specific manifest entry or the name of an artifact-relative path. If the item matches a directory, all items in that directory will be removed.

**Raises:**

 - `ArtifactFinalizedError`: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
 - `FileNotFoundError`: If the item isn't found in the artifact.
---
### <kbd>method</kbd> `Artifact.save`

```python
save(
    project: 'str | None' = None,
    settings: 'wandb.Settings | None' = None
) → None
```

Persist any changes made to the artifact.

If currently in a run, that run will log this artifact. If not currently in a run, a run of type "auto" is created to track this artifact.

**Args:**

 - `project`: A project to use for the artifact in the case that a run is not already in context.
 - `settings`: A settings object to use when initializing an automatic run. Most commonly used in testing harness.
---
### <kbd>method</kbd> `Artifact.unlink`

```python
unlink() → None
```

Unlink this artifact if it is currently a member of a promoted collection of artifacts.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
 - `ValueError`: If the artifact is not linked, in other words, it is not a member of a portfolio collection.

---
### <kbd>method</kbd> `Artifact.used_by`

```python
used_by() → list[Run]
```

Get a list of the runs that have used this artifact and its linked artifacts.

**Returns:**
A list of `Run` objects.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
---
### <kbd>method</kbd> `Artifact.verify`

```python
verify(root: 'str | None' = None) → None
```

Verify that the contents of an artifact match the manifest.

All files in the directory are checksummed and the checksums are then cross-referenced against the artifact's manifest. References are not verified.

**Args:**

 - `root`: The directory to verify. If None, the artifact will be downloaded to `./artifacts/self.name/`.

**Raises:**

 - `ArtifactNotLoggedError`: If the artifact is not logged.
 - `ValueError`: If the verification fails.

---
### <kbd>method</kbd> `Artifact.wait`

```python
wait(timeout: 'int | None' = None) → Artifact
```

If needed, wait for this artifact to finish logging.

**Args:**

 - `timeout`: The time, in seconds, to wait.

**Returns:**
An `Artifact` object.
1.1.3 - Error

```python
class Error
```

Base W&B Error.

---
### <kbd>method</kbd> `Error.__init__`

```python
__init__(message, context: Optional[dict] = None) → None
```
1.1.4 - Run

```python
class Run
```

A unit of computation logged by W&B. Typically, this is an ML experiment.

Call `wandb.init()` to create a new run. `wandb.init()` starts a new run and returns a `wandb.Run` object. Each run is associated with a unique ID (run ID). There is at most one active `wandb.Run` in any process.

For distributed training experiments, you can either track each process separately using one run per process or track all processes to a single run. See Log distributed training experiments for more information.

You can log data to a run with `wandb.log()`. Anything you log using `wandb.log()` is sent to that run. See Create an experiment or the `wandb.init` API reference page for more information.

There is another `Run` object in the `wandb.apis.public` namespace. Use that object to interact with runs that have already been created.

Finish active runs before starting new runs. Use a context manager (`with` statement) to automatically finish the run, or use `wandb.finish()` to finish a run manually. W&B recommends using a context manager to automatically finish the run.

**Attributes:**

 - `summary`: (Summary) Single values set for each `wandb.log()` key. By default, summary is set to the last value logged. You can manually set summary to the best value, like max accuracy, instead of the final value.

**Examples:**

Create a run with `wandb.init()`:

```python
import wandb

# Start a new run and log some data
# Use context manager (`with` statement) to automatically finish the run
with wandb.init(entity="entity", project="project") as run:
    run.log({"accuracy": acc, "loss": loss})
```
---
### <kbd>method</kbd> `Run.__init__`

```python
__init__(
    settings: 'Settings',
    config: 'dict[str, Any] | None' = None,
    sweep_config: 'dict[str, Any] | None' = None,
    launch_config: 'dict[str, Any] | None' = None
) → None
```
---
### <kbd>property</kbd> Run.config

Config object associated with this run.

---
### <kbd>property</kbd> Run.config_static

Static config object associated with this run.

---
### <kbd>property</kbd> Run.dir

The directory where files associated with the run are saved.

---
### <kbd>property</kbd> Run.disabled

True if the run is disabled, False otherwise.

---
### <kbd>property</kbd> Run.entity

The name of the W&B entity associated with the run.

Entity can be a username or the name of a team or organization.

---
### <kbd>property</kbd> Run.group

Name of the group associated with the run.

Setting a group helps the W&B UI organize runs. If you are doing distributed training, give all of the runs in the training the same group. If you are doing cross-validation, give all the cross-validation folds the same group.

---
### <kbd>property</kbd> Run.id

Identifier for this run.

---
### <kbd>property</kbd> Run.job_type

Name of the job type associated with the run.

---
### <kbd>property</kbd> Run.name

Display name of the run.

Display names are not guaranteed to be unique and may be descriptive. By default, they are randomly generated.

---
### <kbd>property</kbd> Run.notes

Notes associated with the run, if there are any.

Notes can be a multiline string and can also use markdown and latex equations inside `$$`, like `$x + 3$`.

---
### <kbd>property</kbd> Run.offline

True if the run is offline, False otherwise.

---
### <kbd>property</kbd> Run.path

Path to the run.

Run paths include entity, project, and run ID, in the format `entity/project/run_id`.

---
### <kbd>property</kbd> Run.project

Name of the W&B project associated with the run.

---
### <kbd>property</kbd> Run.project_url

URL of the W&B project associated with the run, if there is one.

Offline runs do not have a project URL.

---
### <kbd>property</kbd> Run.resumed

True if the run was resumed, False otherwise.

---
### <kbd>property</kbd> Run.settings

A frozen copy of the run's Settings object.

---
### <kbd>property</kbd> Run.start_time

Unix timestamp (in seconds) of when the run started.

---
### <kbd>property</kbd> Run.starting_step

The first step of the run.

---
### <kbd>property</kbd> Run.step

Current value of the step.

This counter is incremented by `wandb.log`.

---
### <kbd>property</kbd> Run.sweep_id

Identifier for the sweep associated with the run, if there is one.

---
### <kbd>property</kbd> Run.sweep_url

URL of the sweep associated with the run, if there is one.

Offline runs do not have a sweep URL.

---
### <kbd>property</kbd> Run.tags

Tags associated with the run, if there are any.

---
### <kbd>property</kbd> Run.url

The URL of the W&B run, if there is one.

Offline runs do not have a URL.
---
### <kbd>method</kbd> `Run.alert`

```python
alert(
    title: 'str',
    text: 'str',
    level: 'str | AlertLevel | None' = None,
    wait_duration: 'int | float | timedelta | None' = None
) → None
```

Create an alert with the given title and text.

**Args:**

 - `title`: The title of the alert; must be less than 64 characters long.
 - `text`: The text body of the alert.
 - `level`: The alert level to use, either: `INFO`, `WARN`, or `ERROR`.
 - `wait_duration`: The time to wait (in seconds) before sending another alert with this title.
---
### <kbd>method</kbd> `Run.define_metric`

```python
define_metric(
    name: 'str',
    step_metric: 'str | wandb_metric.Metric | None' = None,
    step_sync: 'bool | None' = None,
    hidden: 'bool | None' = None,
    summary: 'str | None' = None,
    goal: 'str | None' = None,
    overwrite: 'bool | None' = None
) → wandb_metric.Metric
```

Customize metrics logged with `wandb.log()`.

**Args:**

 - `name`: The name of the metric to customize.
 - `step_metric`: The name of another metric to serve as the X-axis for this metric in automatically generated charts.
 - `step_sync`: Automatically insert the last value of step_metric into `run.log()` if it is not provided explicitly. Defaults to True if step_metric is specified.
 - `hidden`: Hide this metric from automatic plots.
 - `summary`: Specify aggregate metrics added to summary. Supported aggregations include "min", "max", "mean", "last", "best", "copy" and "none". "best" is used together with the goal parameter. "none" prevents a summary from being generated. "copy" is deprecated and should not be used.
 - `goal`: Specify how to interpret the "best" summary type. Supported options are "minimize" and "maximize".
 - `overwrite`: If false, then this call is merged with previous `define_metric` calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.

**Returns:** An object that represents this call but can otherwise be discarded.
---
### <kbd>method</kbd> `Run.display`

```python
display(height: 'int' = 420, hidden: 'bool' = False) → bool
```

Display this run in Jupyter.
---
### <kbd>method</kbd> `Run.finish`

```python
finish(exit_code: 'int | None' = None, quiet: 'bool | None' = None) → None
```

Finish a run and upload any remaining data.

Marks the completion of a W&B run and ensures all data is synced to the server. The run's final state is determined by its exit conditions and sync status.

Run States:

- Running: Active run that is logging data and/or sending heartbeats.
- Crashed: Run that stopped sending heartbeats unexpectedly.
- Finished: Run completed successfully (`exit_code=0`) with all data synced.
- Failed: Run completed with errors (`exit_code!=0`).
- Killed: Run was forcibly stopped before it could finish.

**Args:**

 - `exit_code`: Integer indicating the run's exit status. Use 0 for success; any other value marks the run as failed.
 - `quiet`: Deprecated. Configure logging verbosity using `wandb.Settings(quiet=...)`.
---
### <kbd>method</kbd> `Run.finish_artifact`

```python
finish_artifact(
    artifact_or_path: 'Artifact | str',
    name: 'str | None' = None,
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    distributed_id: 'str | None' = None
) → Artifact
```

Finishes a non-finalized artifact as output of a run.

Subsequent "upserts" with the same distributed ID will result in a new version.

**Args:**

 - `artifact_or_path`: A path to the contents of this artifact, which can be in the following forms:
    - `/local/directory`
    - `/local/directory/file.txt`
    - `s3://bucket/path`
   You can also pass an Artifact object created by calling `wandb.Artifact`.
 - `name`: An artifact name. May be prefixed with entity/project. Valid names can be in the following forms:
    - name:version
    - name:alias
    - digest
   This will default to the basename of the path prepended with the current run id if not specified.
 - `type`: The type of artifact to log. Examples include `dataset` and `model`.
 - `aliases`: Aliases to apply to this artifact, defaults to `["latest"]`.
 - `distributed_id`: Unique string that all distributed jobs share. If None, defaults to the run's group name.

**Returns:**
An `Artifact` object.
method Run.get_project_url
get_project_url() → str | None
This method is deprecated and will be removed in a future release. Use run.project_url instead.
URL of the W&B project associated with the run, if there is one. Offline runs do not have a project URL.
method Run.get_sweep_url
get_sweep_url() → str | None
This method is deprecated and will be removed in a future release. Use run.sweep_url instead.
The URL of the sweep associated with the run, if there is one. Offline runs do not have a sweep URL.
method Run.get_url
get_url() → str | None
This method is deprecated and will be removed in a future release. Use run.url instead.
URL of the W&B run, if there is one. Offline runs do not have a URL.
method Run.link_artifact
link_artifact(
artifact: 'Artifact',
target_path: 'str',
aliases: 'list[str] | None' = None
) → Artifact | None
Link the given artifact to a portfolio (a promoted collection of artifacts).
Linked artifacts are visible in the UI for the specified portfolio.
Args:
artifact: The (public or local) artifact to link.
target_path: Takes one of the following forms: {portfolio}, {project}/{portfolio}, or {entity}/{project}/{portfolio}.
aliases: Optional alias(es) that will only be applied to this linked artifact inside the portfolio. The alias "latest" is always applied to the latest version of a linked artifact.
Returns: The linked artifact if linking was successful, otherwise None.
method Run.link_model
link_model(
path: 'StrPath',
registered_model_name: 'str',
name: 'str | None' = None,
aliases: 'list[str] | None' = None
) → Artifact | None
Log a model artifact version and link it to a registered model in the model registry.
Linked model versions are visible in the UI for the specified registered model.
This method will:
- Check if ’name’ model artifact has been logged. If so, use the artifact version that matches the files located at ‘path’ or log a new version. Otherwise log files under ‘path’ as a new model artifact, ’name’ of type ‘model’.
- Check if registered model with name ‘registered_model_name’ exists in the ‘model-registry’ project. If not, create a new registered model with name ‘registered_model_name’.
- Link version of model artifact ’name’ to registered model, ‘registered_model_name’.
- Attach aliases from ‘aliases’ list to the newly linked model artifact version.
Args:
path: (str) A path to the contents of this model, which can be in the following forms: /local/directory, /local/directory/file.txt, or s3://bucket/path.
registered_model_name: The name of the registered model to link the model to. A registered model is a collection of model versions linked to the model registry, typically representing a team's specific ML task. The entity that this registered model belongs to is derived from the run.
name: The name of the model artifact that files in 'path' will be logged to. Defaults to the basename of the path prepended with the current run ID if not specified.
aliases: Aliases that will only be applied to this linked artifact inside the registered model. The alias "latest" is always applied to the latest version of a linked artifact.
Raises:
AssertionError: If registered_model_name is a path, or if model artifact 'name' is of a type that does not contain the substring 'model'.
ValueError: If name has invalid special characters.
Returns: The linked artifact if linking was successful, otherwise None.
Examples:
run.link_model(
path="/local/directory",
registered_model_name="my_reg_model",
name="my_model_artifact",
aliases=["production"],
)
Invalid usage
run.link_model(
path="/local/directory",
registered_model_name="my_entity/my_project/my_reg_model",
name="my_model_artifact",
aliases=["production"],
)
run.link_model(
path="/local/directory",
registered_model_name="my_reg_model",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
method Run.log
log(
data: 'dict[str, Any]',
step: 'int | None' = None,
commit: 'bool | None' = None
) → None
Upload run data.
Use log to log data from runs, such as scalars, images, video, histograms, plots, and tables. See Log objects and media for code snippets, best practices, and more.
Basic usage:
import wandb
with wandb.init() as run:
run.log({"train-loss": 0.5, "accuracy": 0.9})
The previous code snippet saves the loss and accuracy to the run’s history and updates the summary values for these metrics.
Visualize logged data in a workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, such as in a Jupyter notebook, with the Public API.
Logged values don’t have to be scalars. You can log any W&B supported Data Type such as images, audio, video, and more. For example, you can use wandb.Table to log structured data. See the Log tables, visualize and query data tutorial for more details.
W&B organizes metrics with a forward slash (/) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:
run.log(
{
"train/accuracy": 0.9,
"train/loss": 30,
"validate/accuracy": 0.8,
"validate/loss": 20,
}
)
Only one level of nesting is supported; run.log({"a/b/c": 1}) produces a section named “a/b”.
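The grouping rule can be restated as plain string handling. This is a sketch of the documented naming behavior only; the actual section placement happens in the W&B UI:

```python
def section_for(metric_name: str) -> str:
    # The section is the text before the final slash; a metric
    # with no slash is not grouped into a named section here.
    head, sep, _leaf = metric_name.rpartition("/")
    return head if sep else ""

assert section_for("train/accuracy") == "train"
assert section_for("validate/loss") == "validate"
assert section_for("a/b/c") == "a/b"  # only one grouping level
```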
run.log is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.
By default, each call to log creates a new “step”. The step must always increase, and it is not possible to log to a previous step. You can use any metric as the X axis in charts. See Custom log axes for more details.
In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.
# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})
It is possible to use multiple log invocations to log to the same step with the step and commit parameters. The following are all equivalent:
# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})
# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})
# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)
Args:
data: A dict with str keys and values that are serializable Python objects, including: int, float, and str; any of the wandb.data_types; lists, tuples, and NumPy arrays of serializable Python objects; other dicts of this structure.
step: The step number to log. If None, an implicit auto-incrementing step is used. See the notes in the description.
commit: If true, finalize and upload the step. If false, accumulate data for the step. See the notes in the description. If step is None, the default is commit=True; otherwise, the default is commit=False.
sync: This argument is deprecated and does nothing.
Examples: For more detailed examples, see our guides to logging.
Basic usage
import wandb
run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
Incremental logging
import wandb
run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
Histogram
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
Image from NumPy
import numpy as np
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(
low=0,
high=256,
size=(100, 100, 3),
dtype=np.uint8,
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Video from NumPy
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
low=0,
high=256,
size=(10, 3, 100, 100),
dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})
Matplotlib plot
from matplotlib import pyplot as plt
import numpy as np
import wandb
run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y) # plot y = x^2
run.log({"chart": fig})
PR Curve
import wandb
run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
3D Object
import wandb
run = wandb.init()
run.log(
{
"generated_samples": [
wandb.Object3D(open("sample.obj")),
wandb.Object3D(open("sample.gltf")),
wandb.Object3D(open("sample.glb")),
]
}
)
Raises:
wandb.Error: If called before wandb.init.
ValueError: If invalid data is passed.
method Run.log_artifact
log_artifact(
artifact_or_path: 'Artifact | StrPath',
name: 'str | None' = None,
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
tags: 'list[str] | None' = None
) → Artifact
Declare an artifact as an output of a run.
Args:
artifact_or_path: A path to the contents of this artifact, which can be in the following forms: /local/directory, /local/directory/file.txt, or s3://bucket/path.
name: An artifact name. Defaults to the basename of the path prepended with the current run ID if not specified. Valid names can be in the following forms: name:version, name:alias, or digest.
type: The type of artifact to log. Common examples include dataset and model.
aliases: Aliases to apply to this artifact. Defaults to ["latest"].
tags: Tags to apply to this artifact, if any.
Returns: An Artifact object.
method Run.log_code
log_code(
root: 'str | None' = '.',
name: 'str | None' = None,
include_fn: 'Callable[[str, str], bool] | Callable[[str], bool]' = <function _is_py_requirements_or_dockerfile>,
exclude_fn: 'Callable[[str, str], bool] | Callable[[str], bool]' = <function exclude_wandb_fn>
) → Artifact | None
Save the current state of your code to a W&B Artifact.
By default, it walks the current directory and logs all files that end with .py.
Args:
root: The relative (to os.getcwd()) or absolute path to recursively find code from.
name: The name of the code artifact. By default, the artifact is named source-$PROJECT_ID-$ENTRYPOINT_RELPATH. There may be scenarios where you want many runs to share the same artifact; specifying name allows you to achieve that.
include_fn: A callable that accepts a file path and (optionally) root path and returns True when it should be included and False otherwise. This defaults to lambda path, root: path.endswith(".py").
exclude_fn: A callable that accepts a file path and (optionally) root path and returns True when it should be excluded and False otherwise. This defaults to a function that excludes all files within <root>/.wandb/ and <root>/wandb/ directories.
Examples: Basic usage
import wandb
with wandb.init() as run:
run.log_code()
Advanced usage
import wandb
with wandb.init() as run:
run.log_code(
root="../",
include_fn=lambda path: path.endswith(".py") or path.endswith(".ipynb"),
exclude_fn=lambda path, root: os.path.relpath(path, root).startswith(
"cache/"
),
)
Returns: An Artifact object if code was logged.
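The default predicates described above can be approximated as plain functions. This is a sketch of the documented defaults, not the SDK's own implementations:

```python
import posixpath

def default_include(path: str, root: str) -> bool:
    # Documented default: include only Python source files.
    return path.endswith(".py")

def default_exclude(path: str, root: str) -> bool:
    # Documented default: skip files under <root>/.wandb/ and <root>/wandb/.
    rel = posixpath.relpath(path, root)
    return rel.startswith((".wandb/", "wandb/"))

assert default_include("/repo/train.py", "/repo")
assert not default_include("/repo/README.md", "/repo")
assert default_exclude("/repo/wandb/run.log", "/repo")
assert not default_exclude("/repo/src/train.py", "/repo")
```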
method Run.log_model
log_model(
path: 'StrPath',
name: 'str | None' = None,
aliases: 'list[str] | None' = None
) → None
Logs a model artifact as an output of this run.
The name of the model artifact can only contain alphanumeric characters, underscores, and hyphens.
Args:
path: A path to the contents of this model, which can be in the following forms: /local/directory, /local/directory/file.txt, or s3://bucket/path.
name: A name to assign to the model artifact that the file contents will be added to. The string may contain only alphanumeric characters, dashes, underscores, and dots. Defaults to the basename of the path prepended with the current run ID if not specified.
aliases: Aliases to apply to the created model artifact. Defaults to ["latest"].
Returns: None
Raises:
ValueError: If name has invalid special characters.
Examples:
run.log_model(
path="/local/directory",
name="my_model_artifact",
aliases=["production"],
)
Invalid usage
run.log_model(
path="/local/directory",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
method Run.mark_preempting
mark_preempting() → None
Mark this run as preempting.
Also tells the internal process to immediately report this to server.
method Run.project_name
project_name() → str
This method is deprecated and will be removed in a future release. Use run.project instead.
Name of the W&B project associated with the run.
method Run.restore
restore(
name: 'str',
run_path: 'str | None' = None,
replace: 'bool' = False,
root: 'str | None' = None
) → None | TextIO
Download the specified file from cloud storage.
The file is placed into the current directory or the run directory. By default, the file is only downloaded if it doesn’t already exist.
Args:
name: The name of the file.
run_path: Optional path to a run to pull files from, e.g. username/project_name/run_id. Required if wandb.init has not been called.
replace: Whether to download the file even if it already exists locally.
root: The directory to download the file to. Defaults to the current directory, or the run directory if wandb.init was called.
Returns: None if it can’t find the file, otherwise a file object open for reading.
Raises:
wandb.CommError: If W&B can’t connect to the W&B backend.
ValueError: If the file is not found, or if run_path can’t be found.
method Run.save
save(
glob_str: 'str | os.PathLike',
base_path: 'str | os.PathLike | None' = None,
policy: 'PolicyName' = 'live'
) → bool | list[str]
Sync one or more files to W&B.
Relative paths are relative to the current working directory.
A Unix glob, such as “myfiles/*”, is expanded at the time save is called, regardless of the policy. In particular, new files are not picked up automatically.
A base_path may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str, and the directory structure beneath it is preserved.
When given an absolute path or glob and no base_path, one directory level is preserved, as in the examples below.
Args:
glob_str: A relative or absolute path or Unix glob.
base_path: A path used to infer a directory structure; see examples.
policy: One of live, now, or end.
- live: upload the file as it changes, overwriting the previous version
- now: upload the file once now
- end: upload the file when the run ends
Returns: Paths to the symlinks created for the matched files.
For historical reasons, this may return a boolean in legacy code.
import wandb
wandb.init()
wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.
wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.
wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.
wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.
wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
# of "files/".
method Run.status
status() → RunStatus
Get sync info from the internal backend, about the current run’s sync status.
method Run.to_html
to_html(height: 'int' = 420, hidden: 'bool' = False) → str
Generate HTML containing an iframe displaying the current run.
method Run.unwatch
unwatch(
models: 'torch.nn.Module | Sequence[torch.nn.Module] | None' = None
) → None
Remove PyTorch model topology, gradient, and parameter hooks.
Args:
models: Optional list of PyTorch models that have had watch called on them.
method Run.upsert_artifact
upsert_artifact(
artifact_or_path: 'Artifact | str',
name: 'str | None' = None,
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
distributed_id: 'str | None' = None
) → Artifact
Declare (or append to) a non-finalized artifact as output of a run.
Note that you must call run.finish_artifact() to finalize the artifact. This is useful when distributed jobs need to all contribute to the same artifact.
Args:
artifact_or_path: A path to the contents of this artifact, which can be in the following forms: /local/directory, /local/directory/file.txt, or s3://bucket/path.
name: An artifact name. May be prefixed with “entity/project”. Defaults to the basename of the path prepended with the current run ID if not specified. Valid names can be in the following forms: name:version, name:alias, or digest.
type: The type of artifact to log. Common examples include dataset and model.
aliases: Aliases to apply to this artifact. Defaults to ["latest"].
distributed_id: Unique string that all distributed jobs share. If None, defaults to the run’s group name.
Returns: An Artifact object.
method Run.use_artifact
use_artifact(
artifact_or_name: 'str | Artifact',
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
use_as: 'str | None' = None
) → Artifact
Declare an artifact as an input to a run.
Call download or file on the returned object to get the contents locally.
Args:
artifact_or_name: The name of the artifact to use. May be prefixed with the name of the project the artifact was logged to, or with the entity and project. If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms: name:version or name:alias.
type: The type of artifact to use.
aliases: Aliases to apply to this artifact.
use_as: This argument is deprecated and does nothing.
Returns: An Artifact object.
Examples:
import wandb
run = wandb.init(project="<example>")
# Use an artifact by name and alias
artifact_a = run.use_artifact(artifact_or_name="<name>:<alias>")
# Use an artifact by name and version
artifact_b = run.use_artifact(artifact_or_name="<name>:v<version>")
# Use an artifact by entity/project/name:alias
artifact_c = run.use_artifact(
artifact_or_name="<entity>/<project>/<name>:<alias>"
)
# Use an artifact by entity/project/name:version
artifact_d = run.use_artifact(
artifact_or_name="<entity>/<project>/<name>:v<version>"
)
method Run.use_model
use_model(name: 'str') → FilePathStr
Download the files logged in a model artifact name.
Args:
name: A model artifact name. ’name’ must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms: model_artifact_name:version or model_artifact_name:alias.
Raises:
AssertionError: If model artifact name is of a type that does not contain the substring ‘model’.
Returns:
path: Path to the downloaded model artifact file(s).
Examples:
run.use_model(
name="my_model_artifact:latest",
)
run.use_model(
name="my_project/my_model_artifact:v0",
)
run.use_model(
name="my_entity/my_project/my_model_artifact:<digest>",
)
Invalid usage
run.use_model(
name="my_entity/my_project/my_model_artifact",
)
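The naming rules shown in the examples above can be sketched as a small validator. This is a hypothetical helper illustrating the documented constraint that a bare path without a :version, :alias, or :digest suffix is invalid; it is not the SDK's own check:

```python
def is_valid_model_name(name: str) -> bool:
    # A usable name must end in :version, :alias, or :<digest>;
    # a bare entity/project/name path is rejected (see "Invalid usage").
    base = name.rsplit("/", 1)[-1]
    return ":" in base

assert is_valid_model_name("my_model_artifact:latest")
assert is_valid_model_name("my_entity/my_project/my_model_artifact:v0")
assert not is_valid_model_name("my_entity/my_project/my_model_artifact")
```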
method Run.watch
watch(
models: 'torch.nn.Module | Sequence[torch.nn.Module]',
criterion: 'torch.F | None' = None,
log: "Literal['gradients', 'parameters', 'all'] | None" = 'gradients',
log_freq: 'int' = 1000,
idx: 'int | None' = None,
log_graph: 'bool' = False
) → None
Hook into given PyTorch model to monitor gradients and the model’s computational graph.
This function can track parameters, gradients, or both during training.
Args:
models: A single model or a sequence of models to be monitored.
criterion: The loss function being optimized (optional).
log: Specifies whether to log “gradients”, “parameters”, or “all”. Set to None to disable logging. (default=“gradients”)
log_freq: Frequency (in batches) to log gradients and parameters. (default=1000)
idx: Index used when tracking multiple models with wandb.watch. (default=None)
log_graph: Whether to log the model’s computational graph. (default=False)
Raises:
ValueError: If wandb.init has not been called, or if any of the models are not instances of torch.nn.Module.
1.1.5 - Settings
class Settings
Settings for the W&B SDK.
This class manages configuration settings for the W&B SDK, ensuring type safety and validation of all settings. Settings are accessible as attributes and can be initialized programmatically, through environment variables (WANDB_ prefix), and with configuration files.
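For example, the same settings can be supplied via WANDB_-prefixed environment variables. A minimal sketch using commonly documented variable names:

```shell
# Roughly equivalent to Settings(project=..., entity=..., mode=...)
export WANDB_PROJECT=my-project
export WANDB_ENTITY=my-team
export WANDB_MODE=offline
```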
The settings are organized into three categories:
1. Public settings: Core configuration options that users can safely modify to customize W&B’s behavior for their specific needs.
2. Internal settings: Settings prefixed with ‘x_’ that handle low-level SDK behavior. These settings are primarily for internal use and debugging. While they can be modified, they are not considered part of the public API and may change without notice in future versions.
3. Computed settings: Read-only settings that are automatically derived from other settings or the environment.
Args:
allow_offline_artifacts (bool): Flag to allow table artifacts to be synced in offline mode.
allow_val_change (bool): Flag to allow modification of Config values after they’ve been set.
anonymous (Optional[Literal[“allow”, “must”, “never”]]): Controls anonymous data logging. Possible values are:
- “never”: requires you to link your W&B account before tracking the run, so you don’t accidentally create an anonymous run.
- “allow”: lets a logged-in user track runs with their account, but lets someone who is running the script without a W&B account see the charts in the UI.
- “must”: sends the run to an anonymous account instead of to a signed-up user account.
api_key (Optional[str]): The W&B API key.
azure_account_url_to_access_key (Optional[Dict[str, str]]): Mapping of Azure account URLs to their corresponding access keys for Azure integration.
base_url (str): The URL of the W&B backend for data synchronization.
code_dir (Optional[str]): Directory containing the code to be tracked by W&B.
config_paths (Optional[Sequence[str]]): Paths to files to load configuration from into the Config object.
console (Literal[“auto”, “off”, “wrap”, “redirect”, “wrap_raw”, “wrap_emu”]): The type of console capture to be applied. Possible values are:
- “auto”: Automatically selects the console capture method based on the system environment and settings.
- “off”: Disables console capture.
- “redirect”: Redirects low-level file descriptors for capturing output.
- “wrap”: Overrides the write methods of sys.stdout/sys.stderr. Will be mapped to either “wrap_raw” or “wrap_emu” based on the state of the system.
- “wrap_raw”: Same as “wrap” but captures raw output directly instead of through an emulator. Derived from the wrap setting and should not be set manually.
- “wrap_emu”: Same as “wrap” but captures output through an emulator. Derived from the wrap setting and should not be set manually.
console_multipart (bool): Whether to produce multipart console log files.
credentials_file (str): Path to file for writing temporary access tokens.
disable_code (bool): Whether to disable capturing the code.
disable_git (bool): Whether to disable capturing the git state.
disable_job_creation (bool): Whether to disable the creation of a job artifact for W&B Launch.
docker (Optional[str]): The Docker image used to execute the script.
email (Optional[str]): The email address of the user.
entity (Optional[str]): The W&B entity, such as a user or a team.
organization (Optional[str]): The W&B organization.
force (bool): Whether to pass the force flag to wandb.login().
fork_from (Optional[RunMoment]): Specifies a point in a previous execution of a run to fork from. The point is defined by the run ID, a metric, and its value. Only the metric ‘_step’ is supported.
git_commit (Optional[str]): The git commit hash to associate with the run.
git_remote (str): The git remote to associate with the run.
git_remote_url (Optional[str]): The URL of the git remote repository.
git_root (Optional[str]): Root directory of the git repository.
heartbeat_seconds (int): Interval in seconds between heartbeat signals sent to the W&B servers.
host (Optional[str]): Hostname of the machine running the script.
http_proxy (Optional[str]): Custom proxy servers for http requests to W&B.
https_proxy (Optional[str]): Custom proxy servers for https requests to W&B.
identity_token_file (Optional[str]): Path to file containing an identity token (JWT) for authentication.
ignore_globs (Sequence[str]): Unix glob patterns relative to files_dir specifying files to exclude from upload.
init_timeout (float): Time in seconds to wait for the wandb.init call to complete before timing out.
insecure_disable_ssl (bool): Whether to disable SSL verification.
job_name (Optional[str]): Name of the Launch job running the script.
job_source (Optional[Literal[“repo”, “artifact”, “image”]]): Source type for Launch.
label_disable (bool): Whether to disable automatic labeling features.
launch (bool): Flag to indicate if the run is being launched through W&B Launch.
launch_config_path (Optional[str]): Path to the launch configuration file.
login_timeout (Optional[float]): Time in seconds to wait for login operations before timing out.
mode (Literal[“online”, “offline”, “dryrun”, “disabled”, “run”, “shared”]): The operating mode for W&B logging and synchronization.
notebook_name (Optional[str]): Name of the notebook if running in a Jupyter-like environment.
program (Optional[str]): Path to the script that created the run, if available.
program_abspath (Optional[str]): The absolute path from the root repository directory to the script that created the run. The root repository directory is defined as the directory containing the .git directory, if it exists; otherwise, it’s the current working directory.
program_relpath (Optional[str]): The relative path to the script that created the run.
project (Optional[str]): The W&B project ID.
quiet (bool): Flag to suppress non-essential output.
reinit (Union[Literal[“default”, “return_previous”, “finish_previous”, “create_new”], bool]): What to do when wandb.init() is called while a run is active. Options are:
- “default”: Use “finish_previous” in notebooks and “return_previous” otherwise.
- “return_previous”: Return the most recently created run that is not yet finished. This does not update wandb.run; see the “create_new” option.
- “finish_previous”: Finish all active runs, then return a new run.
- “create_new”: Create a new run without modifying other active runs. Does not update wandb.run and top-level functions like wandb.log. Because of this, some older integrations that rely on the global run will not work.
relogin (bool): Whether to force a new login attempt.
resume (Optional[Literal[“allow”, “must”, “never”, “auto”]]): Specifies the resume behavior for the run. The available options are:
- “must”: Resumes from an existing run with the same ID. If no such run exists, it will result in failure.
- “allow”: Attempts to resume from an existing run with the same ID. If none is found, a new run will be created.
- “never”: Always starts a new run. If a run with the same ID already exists, it will result in failure.
- “auto”: Automatically resumes from the most recent failed run on the same machine.
resume_from (Optional[RunMoment]): Specifies a point in a previous execution of a run to resume from. The point is defined by the run ID, a metric, and its value. Currently, only the metric ‘_step’ is supported.
resumed (bool): Indication from the server about the state of the run. This is different from resume, a user-provided flag.
root_dir (str): The root directory to use as the base for all run-related paths. Used to derive the wandb directory and the run directory.
run_group (Optional[str]): Group identifier for related runs. Used for grouping runs in the UI.
run_id (Optional[str]): The ID of the run.
run_job_type (Optional[str]): Type of job being run (e.g., training, evaluation).
run_name (Optional[str]): Human-readable name for the run.
run_notes (Optional[str]): Additional notes or description for the run.
run_tags (Optional[Tuple[str, …]]): Tags to associate with the run for organization and filtering.
sagemaker_disable (bool): Flag to disable SageMaker-specific functionality.
save_code (Optional[bool]): Whether to save the code associated with the run.
settings_system (Optional[str]): Path to the system-wide settings file.
show_colors (Optional[bool]): Whether to use colored output in the console.
show_emoji (Optional[bool]): Whether to show emoji in the console output.
show_errors (bool): Whether to display error messages.
show_info (bool): Whether to display informational messages.
show_warnings (bool): Whether to display warning messages.
silent (bool): Flag to suppress all output.
start_method (Optional[str]): Method to use for starting subprocesses.
strict (Optional[bool]): Whether to enable strict mode for validation and error checking.
summary_timeout (int): Time in seconds to wait for summary operations before timing out.
summary_warnings (int): Maximum number of summary warnings to display.
sweep_id (Optional[str]): Identifier of the sweep this run belongs to.
sweep_param_path (Optional[str]): Path to the sweep parameters configuration.
symlink (bool): Whether to use symlinks for run directories.
sync_tensorboard (Optional[bool]): Whether to synchronize TensorBoard logs with W&B.
table_raise_on_max_row_limit_exceeded (bool): Whether to raise an exception when table row limits are exceeded.
username (Optional[str]): Username of the user.
property Settings.colab_url
The URL to the Colab notebook, if running in Colab.
property Settings.deployment
property Settings.files_dir
Absolute path to the local directory where the run’s files are stored.
property Settings.is_local
property Settings.log_dir
The directory for storing log files.
property Settings.log_internal
The path to the file to use for internal logs.
property Settings.log_symlink_internal
The path to the symlink to the internal log file of the most recent run.
property Settings.log_symlink_user
The path to the symlink to the user-process log file of the most recent run.
property Settings.log_user
The path to the file to use for user-process logs.
property Settings.model_extra
Get extra fields set during validation.
Returns:
A dictionary of extra fields, or None if config.extra is not set to "allow".
property Settings.model_fields_set
Returns the set of fields that have been explicitly set on this model instance.
Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.
property Settings.project_url
The W&B URL where the project can be viewed.
property Settings.resume_fname
The path to the resume file.
property Settings.run_mode
The mode of the run. Can be either “run” or “offline-run”.
property Settings.run_url
The W&B URL where the run can be viewed.
property Settings.settings_workspace
The path to the workspace settings file.
property Settings.sweep_url
The W&B URL where the sweep can be viewed.
property Settings.sync_dir
The directory for storing the run’s files.
property Settings.sync_file
Path to the append-only binary transaction log file.
property Settings.sync_symlink_latest
Path to the symlink to the most recent run’s transaction log file.
property Settings.timespec
The time specification for the run.
property Settings.wandb_dir
Full path to the wandb directory.
classmethod Settings.catch_private_settings
catch_private_settings(values)
Check if a private field is provided and assign to the corresponding public one.
This is a compatibility layer to handle previous versions of the settings.
method Settings.update_from_dict
update_from_dict(settings: 'Dict[str, Any]') → None
Update settings from a dictionary.
1.2 - Functions
1.2.1 - agent()
function agent
agent(
sweep_id: str,
function: Optional[Callable] = None,
entity: Optional[str] = None,
project: Optional[str] = None,
count: Optional[int] = None
) → None
Start one or more sweep agents.
The sweep agent uses the sweep_id
to know which sweep it is a part of, what function to execute, and (optionally) how many agents to run.
Args:
sweep_id
: The unique identifier for a sweep. A sweep ID is generated by W&B CLI or Python SDK.function
: A function to call instead of the “program” specified in the sweep config.entity
: The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.project
: The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled “Uncategorized”.count
: The number of sweep config trials to try.
1.2.2 - controller()
function controller
controller(
sweep_id_or_config: Optional[Union[str, Dict]] = None,
entity: Optional[str] = None,
project: Optional[str] = None
) → _WandbController
Public sweep controller constructor.
Examples:
import wandb
tuner = wandb.controller(...)
print(tuner.sweep_config)
print(tuner.sweep_id)
tuner.configure_search(...)
tuner.configure_stopping(...)
1.2.3 - finish()
function finish
finish(exit_code: 'int | None' = None, quiet: 'bool | None' = None) → None
Finish a run and upload any remaining data.
Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.
Run States:
- Running: Active run that is logging data and/or sending heartbeats.
- Crashed: Run that stopped sending heartbeats unexpectedly.
- Finished: Run completed successfully (exit_code=0) with all data synced.
- Failed: Run completed with errors (exit_code!=0).
Args:
exit_code
: Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed.quiet
: Deprecated. Configure logging verbosity usingwandb.Settings(quiet=...)
.
1.2.4 - init()
function init
init(
entity: 'str | None' = None,
project: 'str | None' = None,
dir: 'StrPath | None' = None,
id: 'str | None' = None,
name: 'str | None' = None,
notes: 'str | None' = None,
tags: 'Sequence[str] | None' = None,
config: 'dict[str, Any] | str | None' = None,
config_exclude_keys: 'list[str] | None' = None,
config_include_keys: 'list[str] | None' = None,
allow_val_change: 'bool | None' = None,
group: 'str | None' = None,
job_type: 'str | None' = None,
mode: "Literal['online', 'offline', 'disabled'] | None" = None,
force: 'bool | None' = None,
anonymous: "Literal['never', 'allow', 'must'] | None" = None,
reinit: "bool | Literal[None, 'default', 'return_previous', 'finish_previous', 'create_new']" = None,
resume: "bool | Literal['allow', 'never', 'must', 'auto'] | None" = None,
resume_from: 'str | None' = None,
fork_from: 'str | None' = None,
save_code: 'bool | None' = None,
tensorboard: 'bool | None' = None,
sync_tensorboard: 'bool | None' = None,
monitor_gym: 'bool | None' = None,
settings: 'Settings | dict[str, Any] | None' = None
) → Run
Start a new run to track and log to W&B.
In an ML training pipeline, you could add wandb.init()
to the beginning of your training script as well as your evaluation script, and each piece would be tracked as a run in W&B.
wandb.init()
spawns a new background process to log data to a run, and it also syncs data to https://wandb.ai by default, so you can see your results in real-time. When you’re done logging data, call wandb.finish()
to end the run. If you don’t call run.finish()
, the run will end when your script exits.
Run IDs must not contain any of the following special characters / \ # ? % :
Args:
entity
: The username or team name the runs are logged to. The entity must already exist, so ensure you create your account or team in the UI before starting to log runs. If not specified, the run will default to your default entity. To change the default entity, go to your settings and update the “Default location to create new projects” under “Default team”.project
: The name of the project under which this run will be logged. If not specified, we use a heuristic to infer the project name based on the system, such as checking the git root or the current program file. If we can’t infer the project name, the project will default to"uncategorized"
.dir
: The absolute path to the directory where experiment logs and metadata files are stored. If not specified, this defaults to the./wandb
directory. Note that this does not affect the location where artifacts are stored when callingdownload()
.id
: A unique identifier for this run, used for resuming. It must be unique within the project and cannot be reused once a run is deleted. For a short descriptive name, use thename
field, or for saving hyperparameters to compare across runs, useconfig
.name
: A short display name for this run, which appears in the UI to help you identify it. By default, we generate a random two-word name, making it easy to cross-reference runs from tables to charts. Keeping these run names brief enhances readability in chart legends and tables. For saving hyperparameters, we recommend using the config
field.notes
: A detailed description of the run, similar to a commit message in Git. Use this argument to capture any context or details that may help you recall the purpose or setup of this run in the future.tags
: A list of tags to label this run in the UI. Tags are helpful for organizing runs or adding temporary identifiers like “baseline” or “production.” You can easily add, remove tags, or filter by tags in the UI. If resuming a run, the tags provided here will replace any existing tags. To add tags to a resumed run without overwriting the current tags, userun.tags += ["new_tag"]
after callingrun = wandb.init()
.config
: Setswandb.config
, a dictionary-like object for storing input parameters to your run, such as model hyperparameters or data preprocessing settings. The config appears in the UI in an overview page, allowing you to group, filter, and sort runs based on these parameters. Keys should not contain periods (.
), and values should be smaller than 10 MB. If a dictionary,argparse.Namespace
, orabsl.flags.FLAGS
is provided, the key-value pairs will be loaded directly intowandb.config
. If a string is provided, it is interpreted as a path to a YAML file, from which configuration values will be loaded intowandb.config
.config_exclude_keys
: A list of specific keys to exclude fromwandb.config
.config_include_keys
: A list of specific keys to include inwandb.config
.allow_val_change
: Controls whether config values can be modified after their initial set. By default, an exception is raised if a config value is overwritten. For tracking variables that change during training, such as a learning rate, consider usingwandb.log()
instead. By default, this isFalse
in scripts andTrue
in Notebook environments.group
: Specify a group name to organize individual runs as part of a larger experiment. This is useful for cases like cross-validation or running multiple jobs that train and evaluate a model on different test sets. Grouping allows you to manage related runs collectively in the UI, making it easy to toggle and review results as a unified experiment.job_type
: Specify the type of run, especially helpful when organizing runs within a group as part of a larger experiment. For example, in a group, you might label runs with job types such as “train” and “eval”. Defining job types enables you to easily filter and group similar runs in the UI, facilitating direct comparisons.mode
: Specifies how run data is managed, with the following options:"online"
(default): Enables live syncing with W&B when a network connection is available, with real-time updates to visualizations."offline"
: Suitable for air-gapped or offline environments; data is saved locally and can be synced later. Ensure the run folder is preserved to enable future syncing."disabled"
: Disables all W&B functionality, making the run’s methods no-ops. Typically used in testing to bypass W&B operations.
force
: Determines if a W&B login is required to run the script. IfTrue
, the user must be logged in to W&B; otherwise, the script will not proceed. IfFalse
(default), the script can proceed without a login, switching to offline mode if the user is not logged in.anonymous
: Specifies the level of control over anonymous data logging. Available options are:"never"
(default): Requires you to link your W&B account before tracking the run. This prevents unintentional creation of anonymous runs by ensuring each run is associated with an account."allow"
: Enables a logged-in user to track runs with their account, but also allows someone running the script without a W&B account to view the charts and data in the UI."must"
: Forces the run to be logged to an anonymous account, even if the user is logged in.
reinit
: Shorthand for the “reinit” setting. Determines the behavior ofwandb.init()
when a run is active.resume
: Controls the behavior when resuming a run with the specifiedid
. Available options are:"allow"
: If a run with the specifiedid
exists, it will resume from the last step; otherwise, a new run will be created."never"
: If a run with the specifiedid
exists, an error will be raised. If no such run is found, a new run will be created."must"
: If a run with the specifiedid
exists, it will resume from the last step. If no run is found, an error will be raised."auto"
: Automatically resumes the previous run if it crashed on this machine; otherwise, starts a new run.True
: Deprecated. Use"auto"
instead.False
: Deprecated. Use the default behavior (leavingresume
unset) to always start a new run. Ifresume
is set,fork_from
andresume_from
cannot be used. Whenresume
is unset, the system will always start a new run.
resume_from
: Specifies a moment in a previous run to resume a run from, using the format{run_id}?_step={step}
. This allows users to truncate the history logged to a run at an intermediate step and resume logging from that step. The target run must be in the same project. If anid
argument is also provided, theresume_from
argument will take precedence.resume
,resume_from
andfork_from
cannot be used together, only one of them can be used at a time. Note that this feature is in beta and may change in the future.fork_from
: Specifies a point in a previous run from which to fork a new run, using the format{id}?_step={step}
. This creates a new run that resumes logging from the specified step in the target run’s history. The target run must be part of the current project. If anid
argument is also provided, it must be different from thefork_from
argument, an error will be raised if they are the same.resume
,resume_from
andfork_from
cannot be used together, only one of them can be used at a time. Note that this feature is in beta and may change in the future.save_code
: Enables saving the main script or notebook to W&B, aiding in experiment reproducibility and allowing code comparisons across runs in the UI. By default, this is disabled, but you can change the default to enable on your settings page.tensorboard
: Deprecated. Usesync_tensorboard
instead.sync_tensorboard
: Enables automatic syncing of W&B logs from TensorBoard or TensorBoardX, saving relevant event files for viewing in the W&B UI. (Default
:False
)monitor_gym
: Enables automatic logging of videos of the environment when using OpenAI Gym.settings
: Specifies a dictionary orwandb.Settings
object with advanced settings for the run.
Raises:
Error
: if some unknown or internal error happened during the run initialization.AuthenticationError
: if the user failed to provide valid credentials.CommError
: if there was a problem communicating with the WandB server.UsageError
: if the user provided invalid arguments.KeyboardInterrupt
: if user interrupts the run.
Returns:
A Run
object.
Examples:
wandb.init()
returns a run object, and you can also access the run object with wandb.run
:
import wandb
config = {"lr": 0.01, "batch_size": 32}
with wandb.init(config=config) as run:
run.config.update({"architecture": "resnet", "depth": 34})
# ... your training code here ...
1.2.5 - login()
function login
login(
anonymous: Optional[Literal['must', 'allow', 'never']] = None,
key: Optional[str] = None,
relogin: Optional[bool] = None,
host: Optional[str] = None,
force: Optional[bool] = None,
timeout: Optional[int] = None,
verify: bool = False,
referrer: Optional[str] = None
) → bool
Set up W&B login credentials.
By default, this will only store credentials locally without verifying them with the W&B server. To verify credentials, pass verify=True
.
Args:
anonymous
: Set to “must”, “allow”, or “never”. If set to “must”, always log a user in anonymously. If set to “allow”, only create an anonymous user if the user isn’t already logged in. If set to “never”, never log a user anonymously. Default set to “never”.key
: The API key to use.relogin
: If true, will re-prompt for API key.host
: The host to connect to.force
: If true, will force a relogin.timeout
: Number of seconds to wait for user input.verify
: Verify the credentials with the W&B server.referrer
: The referrer to use in the URL login request.
Returns:
bool: True if key is configured.
Raises:
AuthenticationError
: Ifapi_key
fails verification with the server.UsageError
: Ifapi_key
cannot be configured and no tty.
1.2.6 - restore()
function restore
restore(
name: 'str',
run_path: 'str | None' = None,
replace: 'bool' = False,
root: 'str | None' = None
) → None | TextIO
Download the specified file from cloud storage.
The file is placed into the current directory or the run directory. By default, the file is only downloaded if it doesn’t already exist.
Args:
name
: The name of the file.run_path
: Optional path to a run to pull files from, i.e.username/project_name/run_id
if wandb.init has not been called, this is required.replace
: Whether to download the file even if it already exists locally.root
: The directory to download the file to. Defaults to the current directory or the run directory if wandb.init was called.
Returns: None if it can’t find the file, otherwise a file object open for reading.
Raises:
wandb.CommError
: If W&B can’t connect to the W&B backend.ValueError
: If the file is not found or can’t find run_path.
1.2.7 - setup()
function setup
setup(settings: 'Settings | None' = None) → _WandbSetup
Prepares W&B for use in the current process and its children.
You can usually ignore this as it is implicitly called by wandb.init()
.
When using wandb in multiple processes, calling wandb.setup()
in the parent process before starting child processes may improve performance and resource utilization.
Note that wandb.setup()
modifies os.environ
, and it is important that child processes inherit the modified environment variables.
See also wandb.teardown()
.
Args:
settings
: Configuration settings to apply globally. These can be overridden by subsequentwandb.init()
calls.
Example:
import multiprocessing
import wandb
def run_experiment(params):
with wandb.init(config=params):
# Run experiment
pass
if __name__ == "__main__":
# Start backend and set global config
wandb.setup(settings={"project": "my_project"})
# Define experiment parameters
experiment_params = [
{"learning_rate": 0.01, "epochs": 10},
{"learning_rate": 0.001, "epochs": 20},
]
# Start multiple processes, each running a separate experiment
processes = []
for params in experiment_params:
p = multiprocessing.Process(target=run_experiment, args=(params,))
p.start()
processes.append(p)
# Wait for all processes to complete
for p in processes:
p.join()
# Optional: Explicitly shut down the backend
wandb.teardown()
1.2.8 - sweep()
function sweep
sweep(
sweep: Union[dict, Callable],
entity: Optional[str] = None,
project: Optional[str] = None,
prior_runs: Optional[List[str]] = None
) → str
Initialize a hyperparameter sweep.
Search for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.
Make note of the unique identifier, sweep_id
, that is returned. At a later step provide the sweep_id
to a sweep agent.
See Sweep configuration structure for information on how to define your sweep.
Args:
sweep
: The configuration of a hyperparameter search. (or configuration generator). If you provide a callable, ensure that the callable does not take arguments and that it returns a dictionary that conforms to the W&B sweep config spec.entity
: The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.project
: The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled ‘Uncategorized’.prior_runs
: The run IDs of existing runs to add to this sweep.
Returns:
sweep_id
: str. A unique identifier for the sweep.
1.2.9 - teardown()
function teardown
teardown(exit_code: 'int | None' = None) → None
Waits for W&B to finish and frees resources.
Completes any runs that were not explicitly finished using run.finish()
and waits for all data to be uploaded.
It is recommended to call this at the end of a session that used wandb.setup()
. It is invoked automatically in an atexit
hook, but this is not reliable in certain setups such as when using Python’s multiprocessing
module.
1.3 - Legacy Functions
1.3.1 - define_metric()
function wandb.define_metric
wandb.define_metric(
name: 'str',
step_metric: 'str | wandb_metric.Metric | None' = None,
step_sync: 'bool | None' = None,
hidden: 'bool | None' = None,
summary: 'str | None' = None,
goal: 'str | None' = None,
overwrite: 'bool | None' = None
) → wandb_metric.Metric
Customize metrics logged with wandb.log()
.
Args:
name
: The name of the metric to customize.step_metric
: The name of another metric to serve as the X-axis for this metric in automatically generated charts.step_sync
: Automatically insert the last value of step_metric intorun.log()
if it is not provided explicitly. Defaults to True if step_metric is specified.hidden
: Hide this metric from automatic plots.summary
: Specify aggregate metrics added to summary. Supported aggregations include “min”, “max”, “mean”, “last”, “best”, “copy” and “none”. “best” is used together with the goal parameter. “none” prevents a summary from being generated. “copy” is deprecated and should not be used.goal
: Specify how to interpret the “best” summary type. Supported options are “minimize” and “maximize”.overwrite
: If false, then this call is merged with previousdefine_metric
calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.
Returns: An object that represents this call but can otherwise be discarded.
1.3.2 - link_model()
function wandb.link_model
wandb.link_model(
path: 'StrPath',
registered_model_name: 'str',
name: 'str | None' = None,
aliases: 'list[str] | None' = None
) → Artifact | None
Log a model artifact version and link it to a registered model in the model registry.
Linked model versions are visible in the UI for the specified registered model.
This method will:
- Check if ’name’ model artifact has been logged. If so, use the artifact version that matches the files located at ‘path’ or log a new version. Otherwise log files under ‘path’ as a new model artifact, ’name’ of type ‘model’.
- Check if registered model with name ‘registered_model_name’ exists in the ‘model-registry’ project. If not, create a new registered model with name ‘registered_model_name’.
- Link version of model artifact ’name’ to registered model, ‘registered_model_name’.
- Attach aliases from ‘aliases’ list to the newly linked model artifact version.
Args:
path
: (str) A path to the contents of this model, can be in the following forms:/local/directory
/local/directory/file.txt
s3://bucket/path
registered_model_name
: The name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team’s specific ML Task. The entity that this registered model belongs to will be derived from the run.name
: The name of the model artifact that files in ‘path’ will be logged to. This will default to the basename of the path prepended with the current run id if not specified.aliases
: Aliases that will only be applied on this linked artifact inside the registered model. The alias “latest” will always be applied to the latest version of an artifact that is linked.
Raises:
AssertionError
: If registered_model_name is a path or if model artifact ’name’ is of a type that does not contain the substring ‘model’.ValueError
: If name has invalid special characters.
Returns: The linked artifact if linking was successful, otherwise None.
Examples:
run.link_model(
path="/local/directory",
registered_model_name="my_reg_model",
name="my_model_artifact",
aliases=["production"],
)
Invalid usage
run.link_model(
path="/local/directory",
registered_model_name="my_entity/my_project/my_reg_model",
name="my_model_artifact",
aliases=["production"],
)
run.link_model(
path="/local/directory",
registered_model_name="my_reg_model",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
1.3.3 - log_artifact()
function wandb.log_artifact
wandb.log_artifact(
artifact_or_path: 'Artifact | StrPath',
name: 'str | None' = None,
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
tags: 'list[str] | None' = None
) → Artifact
Declare an artifact as an output of a run.
Args:
artifact_or_path
: A path to the contents of this artifact, can be in the following forms/local/directory
/local/directory/file.txt
s3://bucket/path
name
: An artifact name. Defaults to the basename of the path prepended with the current run id if not specified. Valid names can be in the following forms:- name:version
- name:alias
- digest
type
: The type of artifact to log. Common examples includedataset
andmodel
aliases
: Aliases to apply to this artifact, defaults to["latest"]
tags
: Tags to apply to this artifact, if any.
Returns:
An Artifact
object.
1.3.4 - log_model()
function wandb.log_model
wandb.log_model(
path: 'StrPath',
name: 'str | None' = None,
aliases: 'list[str] | None' = None
) → None
Logs a model artifact as an output of this run.
The name of the model artifact can only contain alphanumeric characters, underscores, and hyphens.
Args:
path
: A path to the contents of this model, can be in the following forms/local/directory
/local/directory/file.txt
s3://bucket/path
name
: A name to assign to the model artifact that the file contents will be added to. The string must contain only alphanumeric characters, dashes, underscores, and dots. This will default to the basename of the path prepended with the current run id if not specified.aliases
: Aliases to apply to the created model artifact, defaults to["latest"]
Returns: None
Raises:
ValueError
: if name has invalid special characters.
Examples:
run.log_model(
path="/local/directory",
name="my_model_artifact",
aliases=["production"],
)
Invalid usage
run.log_model(
path="/local/directory",
name="my_entity/my_project/my_model_artifact",
aliases=["production"],
)
1.3.5 - log()
function wandb.log
wandb.log(
data: 'dict[str, Any]',
step: 'int | None' = None,
commit: 'bool | None' = None
) → None
Upload run data.
Use log
to log data from runs, such as scalars, images, video, histograms, plots, and tables. See Log objects and media for code snippets, best practices, and more.
Basic usage:
import wandb
with wandb.init() as run:
run.log({"train-loss": 0.5, "accuracy": 0.9})
The previous code snippet saves the loss and accuracy to the run’s history and updates the summary values for these metrics.
Visualize logged data in a workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, such as in a Jupyter notebook, with the Public API.
Logged values don’t have to be scalars. You can log any W&B supported Data Type such as images, audio, video, and more. For example, you can use wandb.Table
to log structured data. See Log tables, visualize and query data tutorial for more details.
W&B organizes metrics with a forward slash (/
) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:
run.log(
{
"train/accuracy": 0.9,
"train/loss": 30,
"validate/accuracy": 0.8,
"validate/loss": 20,
}
)
Only one level of nesting is supported; run.log({"a/b/c": 1})
produces a section named “a/b”.
run.log
is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.
By default, each call to log
creates a new “step”. The step must always increase, and it is not possible to log to a previous step. You can use any metric as the X axis in charts. See Custom log axes for more details.
In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.
# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})
It is possible to use multiple log
invocations to log to the same step with the step
and commit
parameters. The following are all equivalent:
# Normal usage:
run.log({"train-loss": 0.5, "accuracy": 0.8})
run.log({"train-loss": 0.4, "accuracy": 0.9})
# Implicit step without auto-incrementing:
run.log({"train-loss": 0.5}, commit=False)
run.log({"accuracy": 0.8})
run.log({"train-loss": 0.4}, commit=False)
run.log({"accuracy": 0.9})
# Explicit step:
run.log({"train-loss": 0.5}, step=current_step)
run.log({"accuracy": 0.8}, step=current_step)
current_step += 1
run.log({"train-loss": 0.4}, step=current_step)
run.log({"accuracy": 0.9}, step=current_step)
Args:
data
: A dict with str keys and values that are serializable Python objects, including: int, float, and str; any of the wandb.data_types; lists, tuples, and NumPy arrays of serializable Python objects; other dicts of this structure.step
: The step number to log. IfNone
, then an implicit auto-incrementing step is used. See the notes in the description.commit
: If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. Ifstep
isNone
, then the default iscommit=True
; otherwise, the default iscommit=False
.sync
: This argument is deprecated and does nothing.
Examples: For more detailed examples, see our guides to logging.
Basic usage
import wandb
run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
Incremental logging
import wandb
run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
Histogram
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
Image from NumPy
import numpy as np
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(
low=0,
high=256,
size=(100, 100, 3),
dtype=np.uint8,
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Video from NumPy
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
low=0,
high=256,
size=(10, 3, 100, 100),
dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})
Matplotlib plot
from matplotlib import pyplot as plt
import numpy as np
import wandb
run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y) # plot y = x^2
run.log({"chart": fig})
PR Curve
import wandb
run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
3D Object
import wandb
run = wandb.init()
run.log(
{
"generated_samples": [
wandb.Object3D(open("sample.obj")),
wandb.Object3D(open("sample.gltf")),
wandb.Object3D(open("sample.glb")),
]
}
)
Raises:
wandb.Error
: if called beforewandb.init
ValueError
: if invalid data is passed
Examples:
# Basic usage
import wandb
run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
# Incremental logging
import wandb
run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
# Histogram
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
# Image from numpy
import numpy as np
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
# Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(
low=0, high=256, size=(100, 100, 3), dtype=np.uint8
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
# Video from numpy
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8
)
run.log({"video": wandb.Video(frames, fps=4)})
# Matplotlib Plot
from matplotlib import pyplot as plt
import numpy as np
import wandb
run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y) # plot y = x^2
run.log({"chart": fig})
# PR Curve
import wandb
run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
# 3D Object
import wandb
run = wandb.init()
run.log(
{
"generated_samples": [
wandb.Object3D(open("sample.obj")),
wandb.Object3D(open("sample.gltf")),
wandb.Object3D(open("sample.glb")),
]
}
)
For more detailed examples, see our guides to logging.
1.3.6 - save()
function wandb.save
wandb.save(
glob_str: 'str | os.PathLike',
base_path: 'str | os.PathLike | None' = None,
policy: 'PolicyName' = 'live'
) → bool | list[str]
Sync one or more files to W&B.
Relative paths are relative to the current working directory.
A Unix glob, such as “myfiles/*”, is expanded at the time save
is called regardless of the policy
. In particular, new files are not picked up automatically.
A base_path
may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str
, and the directory structure beneath it is preserved.
When given an absolute path or glob and no base_path, one directory level is preserved, as in the examples below.
Args:
glob_str
: A relative or absolute path or Unix glob.
base_path
: A path to use to infer a directory structure; see examples.
policy
: One of live, now, or end.
- live: upload the file as it changes, overwriting the previous version
- now: upload the file once now
- end: upload file when the run ends
Returns: Paths to the symlinks created for the matched files.
For historical reasons, this may return a boolean in legacy code.
import wandb
wandb.init()
wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.
wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.
wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.
wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.
wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
# of "files/".
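The path rules above can be sketched with plain path arithmetic. This is an illustration of the documented behavior using a hypothetical helper `run_folder`, not the library's implementation:

```python
import os.path

def run_folder(file_path, base_path=None):
    """Sketch of where a saved file lands inside the run (illustration only)."""
    if base_path is not None:
        # The directory structure beneath base_path is preserved.
        return os.path.relpath(file_path, base_path)
    if os.path.isabs(file_path):
        # Absolute path with no base_path: one directory level is preserved.
        return os.path.join(
            os.path.basename(os.path.dirname(file_path)),
            os.path.basename(file_path),
        )
    # Relative paths are kept relative to the current working directory.
    return file_path

print(run_folder("these/are/myfiles/a.txt"))  # these/are/myfiles/a.txt
print(run_folder("these/are/myfiles/a.txt", base_path="these"))  # are/myfiles/a.txt
print(run_folder("/User/username/Documents/run123/log.txt"))  # run123/log.txt
```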
1.3.7 - unwatch()
function wandb.unwatch
wandb.unwatch(
models: 'torch.nn.Module | Sequence[torch.nn.Module] | None' = None
) → None
Remove PyTorch model topology, gradient, and parameter hooks.
Args:
models
: Optional list of PyTorch models that have had watch called on them.
1.3.8 - use_artifact()
function wandb.use_artifact
wandb.use_artifact(
artifact_or_name: 'str | Artifact',
type: 'str | None' = None,
aliases: 'list[str] | None' = None,
use_as: 'str | None' = None
) → Artifact
Declare an artifact as an input to a run.
Call download
or file
on the returned object to get the contents locally.
Args:
artifact_or_name
: The name of the artifact to use. May be prefixed with the name of the project the artifact was logged to ("&lt;project&gt;/" or "&lt;entity&gt;/&lt;project&gt;/"). If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms:
- name:version
- name:alias
type
: The type of artifact to use.
aliases
: Aliases to apply to this artifact.
use_as
: This argument is deprecated and does nothing.
Returns:
An Artifact
object.
Examples:
import wandb
run = wandb.init(project="<example>")
# Use an artifact by name and alias
artifact_a = run.use_artifact(artifact_or_name="<name>:<alias>")
# Use an artifact by name and version
artifact_b = run.use_artifact(artifact_or_name="<name>:v<version>")
# Use an artifact by entity/project/name:alias
artifact_c = run.use_artifact(
artifact_or_name="<entity>/<project>/<name>:<alias>"
)
# Use an artifact by entity/project/name:version
artifact_d = run.use_artifact(
artifact_or_name="<entity>/<project>/<name>:v<version>"
)
1.3.9 - use_model()
function wandb.use_model
wandb.use_model(name: 'str') → FilePathStr
Download the files logged in a model artifact name.
Args:
name
: A model artifact name. ’name’ must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms:
- model_artifact_name:version
- model_artifact_name:alias
Raises:
AssertionError
: if model artifactname
is of a type that does not contain the substring ‘model’.
Returns:
path
: path to downloaded model artifact file(s).
Examples:
run.use_model(
name="my_model_artifact:latest",
)
run.use_model(
name="my_project/my_model_artifact:v0",
)
run.use_model(
name="my_entity/my_project/my_model_artifact:<digest>",
)
Invalid usage
run.use_model(
name="my_entity/my_project/my_model_artifact",
)
1.3.10 - watch()
function wandb.watch
wandb.watch(
models: 'torch.nn.Module | Sequence[torch.nn.Module]',
criterion: 'torch.F | None' = None,
log: "Literal['gradients', 'parameters', 'all'] | None" = 'gradients',
log_freq: 'int' = 1000,
idx: 'int | None' = None,
log_graph: 'bool' = False
) → None
Hook into given PyTorch model to monitor gradients and the model’s computational graph.
This function can track parameters, gradients, or both during training.
Args:
models
: A single model or a sequence of models to be monitored.
criterion
: The loss function being optimized (optional).
log
: Specifies whether to log “gradients”, “parameters”, or “all”. Set to None to disable logging. (default=“gradients”)
log_freq
: Frequency (in batches) to log gradients and parameters. (default=1000)
idx
: Index used when tracking multiple models with wandb.watch. (default=None)
log_graph
: Whether to log the model’s computational graph. (default=False)
Raises:
ValueError: If wandb.init
has not been called or if any of the models are not instances of torch.nn.Module
.
2 - Data Types
Defines Data Types for logging interactive visualizations to W&B.
2.1 - Audio
class Audio
W&B class for audio clips.
Attributes:
data_or_path
(string or numpy array): A path to an audio file or a numpy array of audio data.
sample_rate
(int): Sample rate, required when passing in a raw numpy array of audio data.
caption
(string): Caption to display with audio.
method Audio.__init__
__init__(
data_or_path: Union[str, pathlib.Path, list, ForwardRef('np.ndarray')],
sample_rate: Optional[int] = None,
caption: Optional[str] = None
)
Accept a path to an audio file or a numpy array of audio data.
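For example, raw audio can be synthesized with NumPy and logged together with its sample rate. This is a sketch; the commented run.log call assumes an active run:

```python
import numpy as np

# One second of a 440 Hz sine tone at a 16 kHz sample rate.
sample_rate = 16000
t = np.linspace(0, 1, sample_rate, endpoint=False)
samples = 0.5 * np.sin(2 * np.pi * 440 * t)

# With an active run:
# import wandb
# run = wandb.init()
# run.log({"tone": wandb.Audio(samples, sample_rate=sample_rate, caption="440 Hz sine")})
```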
2.2 - box3d()
function box3d
box3d(
center: 'npt.ArrayLike',
size: 'npt.ArrayLike',
orientation: 'npt.ArrayLike',
color: 'RGBColor',
label: 'Optional[str]' = None,
score: 'Optional[numeric]' = None
) → Box3D
Returns a Box3D.
Args:
center
: The center point of the box as a length-3 ndarray.
size
: The box’s X, Y and Z dimensions as a length-3 ndarray.
orientation
: The rotation transforming global XYZ coordinates into the box’s local XYZ coordinates, given as a length-4 ndarray [r, x, y, z] corresponding to the non-zero quaternion r + xi + yj + zk.
color
: The box’s color as an (r, g, b) tuple with 0 &lt;= r,g,b &lt;= 1.
label
: An optional label for the box.
score
: An optional score for the box.
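The orientation quaternion can be built from an axis-angle rotation: a rotation by theta about a unit axis (x, y, z) corresponds to [cos(theta/2), x·sin(theta/2), y·sin(theta/2), z·sin(theta/2)]. A sketch using a hypothetical helper; the commented wandb.box3d call assumes an active run context:

```python
import math
import numpy as np

def axis_angle_quaternion(axis, theta):
    """Return [r, x, y, z] for a rotation of theta radians about a unit axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([math.cos(theta / 2)], math.sin(theta / 2) * axis))

# A 90-degree yaw about the Z axis.
orientation = axis_angle_quaternion([0, 0, 1], math.pi / 2)

# box = wandb.box3d(
#     center=(0, 0, 0.5),
#     size=(1, 1, 1),
#     orientation=orientation,
#     color=(1, 0, 0),
#     label="crate",
# )
```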
2.3 - Html
class Html
W&B class for logging HTML content to W&B.
Args:
data
: HTML to display in wandb
inject
: Add a stylesheet to the HTML object. If set to False the HTML will pass through unchanged.
method Html.__init__
__init__(
data: Union[str, pathlib.Path, ForwardRef('TextIO')],
inject: bool = True,
data_is_not_path: bool = False
) → None
Creates a W&B HTML object.
It can be initialized by providing a path to a file:
with wandb.init() as run:
run.log({"html": wandb.Html("./index.html")})
Alternatively, it can be initialized by providing literal HTML, in either a string or IO object:
with wandb.init() as run:
run.log({"html": wandb.Html("<h1>Hello, world!</h1>")})
Args: data: A string that is a path to a file with the extension “.html”, or a string or IO object containing literal HTML.
inject
: Add a stylesheet to the HTML object. If set to False the HTML will pass through unchanged.
data_is_not_path
: If set to False, the data will be treated as a path to a file.
2.4 - Image
class Image
A class for logging images to W&B.
See https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes for more information on modes.
Args:
data_or_path
: Accepts a numpy array of image data, or a PIL image. The class attempts to infer the data format and converts it.
mode
: The PIL mode for an image. Most common are “L”, “RGB”, “RGBA”.
caption
: Label for display of the image.
When logging a torch.Tensor
as a wandb.Image
, images are normalized. If you do not want to normalize your images, convert your tensors to a PIL Image.
Examples:
# Create a wandb.Image from a numpy array
import numpy as np
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
# Create a wandb.Image from a PILImage
import numpy as np
from PIL import Image as PILImage
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(
low=0, high=256, size=(100, 100, 3), dtype=np.uint8
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
# log .jpg rather than .png (default)
import numpy as np
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}", file_type="jpg")
examples.append(image)
run.log({"examples": examples})
method Image.__init__
__init__(
data_or_path: 'ImageDataOrPathType',
mode: Optional[str] = None,
caption: Optional[str] = None,
grouping: Optional[int] = None,
classes: Optional[Union[ForwardRef('Classes'), Sequence[dict]]] = None,
boxes: Optional[Union[Dict[str, ForwardRef('BoundingBoxes2D')], Dict[str, dict]]] = None,
masks: Optional[Union[Dict[str, ForwardRef('ImageMask')], Dict[str, dict]]] = None,
file_type: Optional[str] = None,
normalize: bool = True
) → None
Initialize a wandb.Image object.
Args:
data_or_path
: Accepts numpy array/pytorch tensor of image data, a PIL image object, or a path to an image file.
If a numpy array or pytorch tensor is provided, the image data will be saved to the given file type. If the values are not in the range [0, 255] or all values are in the range [0, 1], the image pixel values will be normalized to the range [0, 255] unless normalize
is set to False.
- pytorch tensor should be in the format (channel, height, width)
- numpy array should be in the format (height, width, channel)
mode
: The PIL mode for an image. Most common are “L”, “RGB”, “RGBA”. Full explanation at https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes
caption
: Label for display of the image.
grouping
: The grouping number for the image.
classes
: A list of class information for the image, used for labeling bounding boxes and image masks.
boxes
: A dictionary containing bounding box information for the image. See https://docs.wandb.ai/ref/python/data-types/boundingboxes2d/
masks
: A dictionary containing mask information for the image. See https://docs.wandb.ai/ref/python/data-types/imagemask/
file_type
: The file type to save the image as. This parameter has no effect if data_or_path is a path to an image file.
normalize
: If True, normalize the image pixel values to fall within the range of [0, 255]. Normalize is only applied if data_or_path is a numpy array or pytorch tensor.
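The normalization behavior described for normalize can be sketched roughly as follows. This is a simplified illustration of the documented rule, not the library's actual code:

```python
import numpy as np

def to_uint8(pixels):
    """Map image data into the displayable [0, 255] uint8 range (simplified)."""
    if pixels.dtype == np.uint8:
        return pixels  # already display-ready
    if pixels.min() >= 0 and pixels.max() <= 1:
        # Values in [0, 1] are scaled up to [0, 255].
        return (pixels * 255).astype(np.uint8)
    # Otherwise clamp into the displayable range.
    return np.clip(pixels, 0, 255).astype(np.uint8)

print(to_uint8(np.array([[0.0, 0.5], [0.75, 1.0]])))
```

Pass normalize=False (or convert to a PIL Image first) to skip this step.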
Examples:
Create a wandb.Image from a numpy array
import numpy as np
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Create a wandb.Image from a PILImage
import numpy as np
from PIL import Image as PILImage
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(
low=0, high=256, size=(100, 100, 3), dtype=np.uint8
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Log .jpg rather than .png (default)
import numpy as np
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(
pixels, caption=f"random field {i}", file_type="jpg"
)
examples.append(image)
run.log({"examples": examples})
method Image.guess_mode
guess_mode(
data: Union[ForwardRef('np.ndarray'), ForwardRef('torch.Tensor')],
file_type: Optional[str] = None
) → str
Guess what type of image the np.array is representing.
2.5 - Molecule
class Molecule
W&B class for 3D Molecular data.
Args:
data_or_path
: (pathlib.Path, string, io) Molecule can be initialized from a file name or an io object.
caption
: (string) Caption associated with the molecule for display.
method Molecule.__init__
__init__(
data_or_path: Union[str, pathlib.Path, ForwardRef('TextIO')],
caption: Optional[str] = None,
**kwargs: str
) → None
2.6 - Object3D
class Object3D
W&B class for 3D point clouds.
Args:
data_or_path
: (numpy array, pathlib.Path, string, io) Object3D can be initialized from a file or a numpy array.
Examples: The shape of the numpy array must be one of either
[[x y z], ...] nx3
[[x y z c], ...] nx4 where c is a category with supported range [1, 14]
[[x y z r g b], ...] nx6 where r, g, b are color values
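For instance, an nx6 colored point cloud can be assembled like this (a sketch assuming r, g, b channels in the 0-255 range; the commented run.log call assumes an active run):

```python
import numpy as np

# 500 random points inside a unit cube.
xyz = np.random.rand(500, 3)
# Color each point by height: red high, blue low (channels in [0, 255]).
rgb = np.stack(
    [255 * xyz[:, 2], np.zeros(500), 255 * (1 - xyz[:, 2])], axis=1
)
point_cloud = np.hstack([xyz, rgb])  # shape (500, 6)

# import wandb
# run = wandb.init()
# run.log({"point_cloud": wandb.Object3D(point_cloud)})
```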
method Object3D.__init__
__init__(
data_or_path: Union[ForwardRef('np.ndarray'), str, pathlib.Path, ForwardRef('TextIO'), dict],
caption: Optional[str] = None,
**kwargs: Optional[Union[str, ForwardRef('FileFormat3D')]]
) → None
2.7 - Plotly
class Plotly
W&B class for Plotly plots.
Args:
val
: Matplotlib or Plotly figure.
method Plotly.__init__
__init__(
val: Union[ForwardRef('plotly.Figure'), ForwardRef('matplotlib.artist.Artist')]
)
classmethod Plotly.get_media_subdir
get_media_subdir() → str
classmethod Plotly.make_plot_media
make_plot_media(
val: Union[ForwardRef('plotly.Figure'), ForwardRef('matplotlib.artist.Artist')]
) → Union[wandb.sdk.data_types.image.Image, ForwardRef('Plotly')]
method Plotly.to_json
to_json(
run_or_artifact: Union[ForwardRef('LocalRun'), ForwardRef('Artifact')]
) → dict
2.8 - Table
class Table
The Table class used to display and analyze tabular data.
Unlike traditional spreadsheets, Tables support numerous types of data: scalar values, strings, numpy arrays, and most subclasses of wandb.data_types.Media
. This means you can embed Images
, Video
, Audio
, and other sorts of rich, annotated media directly in Tables, alongside other traditional scalar values.
This class is the primary class used to generate the Table Visualizer in the UI: https://docs.wandb.ai/guides/data-vis/tables.
Attributes:
columns
(List[str]): Names of the columns in the table. Defaults to [“Input”, “Output”, “Expected”].
data
(List[List[any]]): 2D row-oriented array of values.
dataframe
(pandas.DataFrame): DataFrame object used to create the table. When set, data and columns arguments are ignored.
optional
(Union[bool, List[bool]]): Determines if None values are allowed. Defaults to True. A single bool value applies to all columns specified at construction time; a list of bool values applies to each respective column and should be the same length as columns.
allow_mixed_types
(bool): Determines if columns are allowed to have mixed types (disables type validation). Defaults to False.
method Table.__init__
__init__(
columns=None,
data=None,
rows=None,
dataframe=None,
dtype=None,
optional=True,
allow_mixed_types=False,
log_mode: Optional[Literal['IMMUTABLE', 'MUTABLE', 'INCREMENTAL']] = 'IMMUTABLE'
)
Initializes a Table object.
The rows argument is available for legacy reasons and should not be used. The Table class uses data to mimic the Pandas API.
Args:
columns
: (List[str]) Names of the columns in the table. Defaults to [“Input”, “Output”, “Expected”].
data
: (List[List[any]]) 2D row-oriented array of values.
dataframe
: (pandas.DataFrame) DataFrame object used to create the table. When set, data and columns arguments are ignored.
optional
: (Union[bool, List[bool]]) Determines if None values are allowed. Defaults to True. A single bool value applies to all columns specified at construction time; a list of bool values applies to each respective column and should be the same length as columns.
allow_mixed_types
: (bool) Determines if columns are allowed to have mixed types (disables type validation). Defaults to False.
log_mode
: Optional[str] Controls how the Table is logged when mutations occur. Options:
- “IMMUTABLE” (default): Table can only be logged once; subsequent logging attempts after the table has been mutated will be no-ops.
- “MUTABLE”: Table can be re-logged after mutations, creating a new artifact version each time it’s logged.
- “INCREMENTAL”: Table data is logged incrementally, with each log creating a new artifact entry containing the new data since the last log.
method Table.add_column
add_column(name, data, optional=False)
Adds a column of data to the table.
Args:
name
: (str) - the unique name of the column
data
: (list | np.array) - a column of homogeneous data
optional
: (bool) - if null-like values are permitted
method Table.add_computed_columns
add_computed_columns(fn)
Adds one or more computed columns based on existing data.
Args:
fn
: A function which accepts one or two parameters, ndx (int) and row (dict), which is expected to return a dict representing new columns for that row, keyed by the new column names.
ndx
is an integer representing the index of the row. Only included if include_ndx
is set to True
.
row
is a dictionary keyed by existing columns
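The fn contract can be illustrated in plain Python by applying such a function over row dictionaries. This is a sketch of the behavior, not the Table internals:

```python
rows = [
    {"input": 2, "output": 4},
    {"input": 3, "output": 10},
]

def fn(ndx, row):
    # Return a dict of new columns for this row, keyed by new column names.
    return {"row_index": ndx, "error": row["output"] - row["input"] ** 2}

for ndx, row in enumerate(rows):
    row.update(fn(ndx, row))

print(rows)
```

With a wandb.Table, the same kind of function would be passed to table.add_computed_columns(fn).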
method Table.add_data
add_data(*data)
Adds a new row of data to the table.
The maximum number of rows in a table is determined by wandb.Table.MAX_ARTIFACT_ROWS.
The length of the data should match the number of columns in the table.
method Table.add_row
add_row(*row)
Deprecated; use add_data instead.
method Table.cast
cast(col_name, dtype, optional=False)
Casts a column to a specific data type.
This can be one of the normal python classes, an internal W&B type, or an example object, like an instance of wandb.Image or wandb.Classes.
Args:
col_name
(str): The name of the column to cast.
dtype
(class, wandb.wandb_sdk.interface._dtypes.Type, any): The target dtype.
optional
(bool): If the column should allow Nones.
method Table.get_column
get_column(name, convert_to=None)
Retrieves a column from the table and optionally converts it to a NumPy object.
Args:
name
: (str) - the name of the column
convert_to
: (str, optional) - “numpy”: will convert the underlying data to numpy object
method Table.get_dataframe
get_dataframe()
Returns a pandas.DataFrame
of the table.
method Table.get_index
get_index()
Returns an array of row indexes for use in other tables to create links.
2.9 - Video
class Video
A class for logging videos to W&B.
Args:
data_or_path
: Video can be initialized with a path to a file or an io object. The format must be “gif”, “mp4”, “webm” or “ogg”, and must be specified with the format argument. Video can also be initialized with a numpy tensor, which must be either 4 dimensional or 5 dimensional, with axes (time, channel, height, width) or (batch, time, channel, height, width).
caption
: Caption associated with the video for display.
fps
: The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string, or bytes.
format
: Format of video, necessary if initializing with a path or io object.
Examples: Log a numpy array as a video
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8)
run.log({"video": wandb.Video(frames, fps=4)})
method Video.__init__
__init__(
data_or_path: Union[str, pathlib.Path, ForwardRef('np.ndarray'), ForwardRef('TextIO'), ForwardRef('BytesIO')],
caption: Optional[str] = None,
fps: Optional[int] = None,
format: Optional[Literal['gif', 'mp4', 'webm', 'ogg']] = None
)
Initialize a W&B Video object.
Args:
data_or_path
: Video can be initialized with a path to a file or an io object, or with a numpy tensor. The numpy tensor must be either 4 dimensional or 5 dimensional, with axes (number of frames, channel, height, width) or (batch, number of frames, channel, height, width). The format parameter must be specified when initializing with a numpy array or io object.
caption
: Caption associated with the video for display.
fps
: The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string, or bytes.
format
: Format of video, necessary if initializing with a numpy array or io object. This parameter will be used to determine the format to use when encoding the video data. Accepted values are “gif”, “mp4”, “webm”, or “ogg”. If no value is provided, the default format will be “gif”.
Examples: Log a numpy array as a video
import numpy as np
import wandb
with wandb.init() as run:
    # axes are (number of frames, channel, height, width)
    frames = np.random.randint(
        low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8
    )
    run.log({"video": wandb.Video(frames, format="mp4", fps=4)})
3 - Launch Library Reference
A collection of launch APIs for W&B.
3.1 - create_and_run_agent()
function create_and_run_agent
create_and_run_agent(
api: wandb.apis.internal.Api,
config: Dict[str, Any]
) → None
3.2 - launch_add()
function launch_add
launch_add(
uri: Optional[str] = None,
job: Optional[str] = None,
config: Optional[Dict[str, Any]] = None,
template_variables: Optional[Dict[str, Union[float, int, str]]] = None,
project: Optional[str] = None,
entity: Optional[str] = None,
queue_name: Optional[str] = None,
resource: Optional[str] = None,
entry_point: Optional[List[str]] = None,
name: Optional[str] = None,
version: Optional[str] = None,
docker_image: Optional[str] = None,
project_queue: Optional[str] = None,
resource_args: Optional[Dict[str, Any]] = None,
run_id: Optional[str] = None,
build: Optional[bool] = False,
repository: Optional[str] = None,
sweep_id: Optional[str] = None,
author: Optional[str] = None,
priority: Optional[int] = None
) → public.QueuedRun
Enqueue a W&B launch experiment, using either a source uri, job, or docker_image.
Arguments:
uri
: URI of experiment to run. A wandb run uri or a Git repository URI.
job
: string reference to a wandb.Job, e.g. wandb/test/my-job:latest
config
: A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args”.
template_variables
: A dictionary containing values of template variables for a run queue. Expected format: {"VAR_NAME": VAR_VALUE}
project
: Target project to send launched run to.
entity
: Target entity to send launched run to.
queue_name
: The name of the queue to enqueue the run to.
priority
: The priority level of the job, where 1 is the highest priority.
resource
: Execution backend for the run: W&B provides built-in support for the “local-container” backend.
entry_point
: Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
name
: Name under which to launch the run.
version
: For Git-based projects, either a commit hash or a branch name.
docker_image
: The name of the docker image to use for the run.
resource_args
: Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args.
run_id
: Optional string indicating the id of the launched run.
build
: Optional flag, defaults to False; requires queue_name to be set. If True, an image is built, a job artifact is created, and a reference to that job artifact is pushed to the queue.
repository
: Optional string to control the name of the remote repository, used when pushing images to a registry.
project_queue
: Optional string to control the name of the project for the queue. Primarily used for backward compatibility with project scoped queues.
Example:
import wandb
from wandb.sdk.launch import launch_add

project_uri = "https://github.com/wandb/examples"
params = {"alpha": 0.5, "l1_ratio": 0.01}
# Run W&B project and create a reproducible docker environment
# on a local host
launch_add(uri=project_uri, config={"overrides": {"run_config": params}})
Returns:
an instance of wandb.api.public.QueuedRun
which gives information about the queued run, or if wait_until_started
or wait_until_finished
are called, gives access to the underlying Run information.
Raises:
wandb.exceptions.LaunchError
if unsuccessful
3.3 - launch()
function launch
launch(
api: wandb.apis.internal.Api,
job: Optional[str] = None,
entry_point: Optional[List[str]] = None,
version: Optional[str] = None,
name: Optional[str] = None,
resource: Optional[str] = None,
resource_args: Optional[Dict[str, Any]] = None,
project: Optional[str] = None,
entity: Optional[str] = None,
docker_image: Optional[str] = None,
config: Optional[Dict[str, Any]] = None,
synchronous: Optional[bool] = True,
run_id: Optional[str] = None,
repository: Optional[str] = None
) → AbstractRun
Launch a W&B launch experiment.
Arguments:
job
: string reference to a wandb.Job, e.g. wandb/test/my-job:latest
api
: An instance of a wandb Api from wandb.apis.internal.
entry_point
: Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
version
: For Git-based projects, either a commit hash or a branch name.
name
: Name under which to launch the run.
resource
: Execution backend for the run.
resource_args
: Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args.
project
: Target project to send launched run to.
entity
: Target entity to send launched run to.
config
: A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args”.
synchronous
: Whether to block while waiting for a run to complete. Defaults to True. Note that if synchronous is False and backend is “local-container”, this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated. If synchronous is True and the run fails, the current process will error out as well.
run_id
: ID for the run (to ultimately replace the :name: field).
repository
: string name of repository path for remote registry
Example:
import wandb
from wandb.sdk.launch import launch

job = "wandb/jobs/Hello World:latest"
params = {"epochs": 5}
# Run W&B project and create a reproducible docker environment
# on a local host
api = wandb.apis.internal.Api()
launch(api, job=job, config={"overrides": {"run_config": params}})
Returns:
an instance of wandb.launch.SubmittedRun exposing information (e.g. run ID) about the launched run.
Raises:
wandb.exceptions.ExecutionError
: If a run launched in blocking mode is unsuccessful.
3.4 - LaunchAgent
class LaunchAgent
Launch agent class which polls the given run queues and launches runs for wandb launch.
method LaunchAgent.__init__
__init__(api: wandb.apis.internal.Api, config: Dict[str, Any])
Initialize a launch agent.
Arguments:
api
: Api object to use for making requests to the backend.
config
: Config dictionary for the agent.
property LaunchAgent.num_running_jobs
Return the number of jobs not including schedulers.
property LaunchAgent.num_running_schedulers
Return just the number of schedulers.
property LaunchAgent.thread_ids
Returns a list of running thread ids for the agent.
method LaunchAgent.check_sweep_state
check_sweep_state(
launch_spec: Dict[str, Any],
api: wandb.apis.internal.Api
) → None
Check the state of a sweep before launching a run for the sweep.
method LaunchAgent.fail_run_queue_item
fail_run_queue_item(
run_queue_item_id: str,
message: str,
phase: str,
files: Optional[List[str]] = None
) → None
method LaunchAgent.finish_thread_id
finish_thread_id(
thread_id: int,
exception: Optional[Union[Exception, wandb.sdk.launch.errors.LaunchDockerError]] = None
) → None
Removes the job from our list for now.
method LaunchAgent.get_job_and_queue
get_job_and_queue() → Optional[wandb.sdk.launch.agent.agent.JobSpecAndQueue]
classmethod LaunchAgent.initialized
initialized() → bool
Return whether the agent is initialized.
method LaunchAgent.loop
loop() → None
Loop infinitely to poll for jobs and run them.
Raises:
KeyboardInterrupt
: if the agent is requested to stop.
classmethod LaunchAgent.name
name() → str
Return the name of the agent.
method LaunchAgent.pop_from_queue
pop_from_queue(queue: str) → Any
Pops an item off the run queue to run as a job.
Arguments:
queue
: Queue to pop from.
Returns: Item popped off the queue.
Raises:
Exception
: if there is an error popping from the queue.
method LaunchAgent.print_status
print_status() → None
Prints the current status of the agent.
method LaunchAgent.run_job
run_job(
job: Dict[str, Any],
queue: str,
file_saver: wandb.sdk.launch.agent.run_queue_item_file_saver.RunQueueItemFileSaver
) → None
Set up project and run the job.
Arguments:
job
: Job to run.
method LaunchAgent.task_run_job
task_run_job(
launch_spec: Dict[str, Any],
job: Dict[str, Any],
default_config: Dict[str, Any],
api: wandb.apis.internal.Api,
job_tracker: wandb.sdk.launch.agent.job_status_tracker.JobAndRunStatusTracker
) → None
method LaunchAgent.update_status
update_status(status: str) → None
Update the status of the agent.
Arguments:
status
: Status to update the agent to.
3.5 - load_wandb_config()
function load_wandb_config
load_wandb_config() → Config
Load wandb config from WANDB_CONFIG environment variable(s).
The WANDB_CONFIG environment variable is a json string that can contain multiple config keys. The WANDB_CONFIG_[0-9]+ environment variables are used for environments where there is a limit on the length of environment variables. In that case, we shard the contents of WANDB_CONFIG into multiple environment variables numbered from 0.
Returns: A dictionary of wandb config values.
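The sharding scheme can be sketched as follows: concatenate the numbered shards in order and parse the result as JSON. This is an illustration of the description above, not the library's exact code:

```python
import json

def read_sharded_config(env):
    """Reassemble WANDB_CONFIG from WANDB_CONFIG_0, WANDB_CONFIG_1, ..."""
    if "WANDB_CONFIG" in env:
        return json.loads(env["WANDB_CONFIG"])
    parts = []
    i = 0
    while f"WANDB_CONFIG_{i}" in env:
        parts.append(env[f"WANDB_CONFIG_{i}"])
        i += 1
    return json.loads("".join(parts))

# A config JSON split across two shards:
env = {"WANDB_CONFIG_0": '{"lr": 0.01, "epo', "WANDB_CONFIG_1": 'chs": 10}'}
print(read_sharded_config(env))
```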
3.6 - manage_config_file()
function manage_config_file
manage_config_file(
path: str,
include: Optional[List[str]] = None,
exclude: Optional[List[str]] = None,
schema: Optional[Any] = None
)
Declare an overridable configuration file for a launch job.
If a new job version is created from the active run, the configuration file will be added to the job’s inputs. If the job is launched and overrides have been provided for the configuration file, this function will detect the overrides from the environment and update the configuration file on disk. Note that these overrides will only be applied in ephemeral containers. include
and exclude
are lists of dot separated paths within the config. The paths are used to filter subtrees of the configuration file out of the job’s inputs.
For example, given the following configuration file:
model:
  name: resnet
  layers: 18
training:
  epochs: 10
  batch_size: 32
Passing include=['model']
will only include the model
subtree in the job’s inputs. Passing exclude=['model.layers']
will exclude the layers
key from the model
subtree. Note that exclude
takes precedence over include
.
.
is used as a separator for nested keys. If a key contains a .
, it should be escaped with a backslash, e.g. include=[r'model\.layers']
. Note the use of r
to denote a raw string when using escape chars.
Args:
- `path` (str): The path to the configuration file. The path must be relative and must not contain upwards traversal, i.e. `..`.
- `include` (List[str]): A list of keys to include in the configuration file.
- `exclude` (List[str]): A list of keys to exclude from the configuration file.
- `schema` (dict | Pydantic model): A JSON Schema or Pydantic model describing which attributes will be editable from the Launch drawer. Accepts either an instance of a Pydantic BaseModel class or the BaseModel class itself.

Raises:
- `LaunchError`: If the path is not valid, or if there is no active run.
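The include/exclude semantics described above (dot-separated subtree paths, with `exclude` taking precedence over `include`) can be sketched as a standalone filter. This is an illustration of the documented behavior, not wandb's internal implementation:

```python
def filter_config(config, include=None, exclude=None):
    """Illustrative sketch of include/exclude subtree filtering.

    Paths are dot-separated keys into a nested dict; exclude wins over include.
    """
    def keep(path):
        # Drop anything at or below an excluded path.
        if exclude and any(path == e or path.startswith(e + ".") for e in exclude):
            return False
        if include is None:
            return True
        # Keep paths at, below, or above an included path (ancestors must
        # survive so the included subtree remains reachable).
        return any(path == i or path.startswith(i + ".") or i.startswith(path + ".")
                   for i in include)

    def walk(node, path):
        if not isinstance(node, dict):
            return node
        out = {}
        for key, value in node.items():
            child = f"{path}.{key}" if path else key
            if keep(child):
                out[key] = walk(value, child)
        return out

    return walk(config, "")

config = {
    "model": {"name": "resnet", "layers": 18},
    "training": {"epochs": 10, "batch_size": 32},
}
print(filter_config(config, include=["model"], exclude=["model.layers"]))
# {'model': {'name': 'resnet'}}
```

The filter mirrors the yaml example above: including `model` drops the `training` subtree, and excluding `model.layers` then removes `layers` from what remains.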
3.7 - manage_wandb_config()
function manage_wandb_config
manage_wandb_config(
include: Optional[List[str]] = None,
exclude: Optional[List[str]] = None,
schema: Optional[Any] = None
)
Declare wandb.config as an overridable configuration for a launch job.
If a new job version is created from the active run, the run config (`wandb.config`) will become an overridable input of the job. If the job is launched and overrides have been provided for the run config, the overrides will be applied to the run config when `wandb.init` is called.

`include` and `exclude` are lists of dot-separated paths within the config. The paths are used to filter subtrees of the config out of the job's inputs.

For example, given the following run config contents:

```yaml
model:
  name: resnet
  layers: 18
training:
  epochs: 10
  batch_size: 32
```

Passing `include=['model']` will include only the `model` subtree in the job's inputs. Passing `exclude=['model.layers']` will exclude the `layers` key from the `model` subtree. Note that `exclude` takes precedence over `include`.

`.` is used as the separator for nested keys. If a key contains a `.`, it should be escaped with a backslash, e.g. `include=[r'model\.layers']`. Note the use of `r` to denote a raw string when using escape characters.
Args:
- `include` (List[str]): A list of subtrees to include in the configuration.
- `exclude` (List[str]): A list of subtrees to exclude from the configuration.
- `schema` (dict | Pydantic model): A JSON Schema or Pydantic model describing which attributes will be editable from the Launch drawer. Accepts either an instance of a Pydantic BaseModel class or the BaseModel class itself.

Raises:
- `LaunchError`: If there is no active run.
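The backslash-escaping rule for keys that contain a literal `.` can be illustrated with a small path splitter. This helper is hypothetical (not part of the wandb API); it only demonstrates how an escaped path like `r'model\.layers'` names a single key rather than a nested one:

```python
import re

def split_config_path(path):
    """Split a dot-separated config path into keys, treating a
    backslash-escaped dot (r'\.') as part of the key, not a separator."""
    # Split on dots NOT preceded by a backslash, then unescape the rest.
    parts = re.split(r"(?<!\\)\.", path)
    return [p.replace("\\.", ".") for p in parts]

print(split_config_path("model.layers"))    # ['model', 'layers']
print(split_config_path(r"model\.layers"))  # ['model.layers']
```

The first call addresses the nested key `layers` under `model`; the second addresses a single top-level key literally named `model.layers`.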