This guide provides recommendations on how to integrate W&B into your Python training script or notebook for hyperparameter search optimization.
Original training script
Suppose you have a Python script that trains a model (see below). Your goal is to find the hyperparameters that maximize the validation accuracy (val_acc).
In your Python script, you define two functions: train_one_epoch and evaluate_one_epoch. The train_one_epoch function simulates training for one epoch and returns the training accuracy and loss. The evaluate_one_epoch function simulates evaluating the model on the validation data set and returns the validation accuracy and loss.
You define a configuration dictionary (config) that contains hyperparameter values such as the learning rate (lr), batch size (batch_size), and number of epochs (epochs). The values in the configuration dictionary control the training process.
Next you define a function called main that mimics a typical training loop. For each epoch, the accuracy and loss are computed on the training and validation data sets.
This code is a mock training script. It does not train a model, but simulates the training process by generating random accuracy and loss values. The purpose of this code is to demonstrate how to integrate W&B into your training script.
train.py
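The original listing is missing from this extraction. The following is a minimal sketch consistent with the description above; the function names and config keys come from the text, while the specific random formulas and values are illustrative assumptions:

```python
import random

# Hyperparameter values that control the training process (illustrative defaults).
config = {"lr": 0.0001, "batch_size": 16, "epochs": 5}

def train_one_epoch(epoch, lr, batch_size):
    """Simulate training for one epoch; return mock training accuracy and loss."""
    acc = 0.25 + (epoch / 30) + (random.random() / 10)
    loss = 0.2 + (1 - ((epoch - 1) / 10 + random.random() / 5))
    return acc, loss

def evaluate_one_epoch(epoch):
    """Simulate evaluating on the validation set; return mock accuracy and loss."""
    acc = 0.1 + (epoch / 20) + (random.random() / 10)
    loss = 0.25 + (1 - ((epoch - 1) / 10 + random.random() / 6))
    return acc, loss

def main():
    # Mimic a typical training loop over the configured number of epochs.
    for epoch in range(1, config["epochs"] + 1):
        train_acc, train_loss = train_one_epoch(epoch, config["lr"], config["batch_size"])
        val_acc, val_loss = evaluate_one_epoch(epoch)
        print(f"epoch {epoch}: train_acc={train_acc:.3f} train_loss={train_loss:.3f} "
              f"val_acc={val_acc:.3f} val_loss={val_loss:.3f}")

if __name__ == "__main__":
    main()
```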
Add W&B to your training script
Update your training script to include W&B. How you integrate W&B into your Python script or notebook depends on how you manage sweeps. To use the W&B Python SDK to start, stop, and manage sweeps, follow the instructions in the Python script or notebook tab. To use the W&B CLI instead, follow the instructions in the CLI tab.
- CLI
- Python script or notebook
Create a YAML file that defines the hyperparameters to optimize and the metric to optimize. W&B uses this file to determine which hyperparameters to vary during the sweep and which metric to optimize. Add the name of your Python script to the program key in the YAML file on line 1.

The following YAML file corresponds to the original training script shown earlier. The training script varies the batch_size, lr, and epochs hyperparameters. The YAML file defines the same hyperparameters and specifies the values to try for each one on lines 8 to 14. The training script also computes the validation accuracy metric, val_acc. The YAML file specifies that the sweep should maximize val_acc on line 5. For more information on how to create a W&B Sweep configuration, see Define sweep configuration.

The sweep agent selects a value from the values list and passes it to wandb.config in the training script. For example, if you define the batch_size parameter with the values [16, 32, 64], the sweep agent selects one of those values and passes it to the training script as wandb.config.batch_size.

config.yaml
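The config.yaml listing itself is missing from this extraction. A sketch consistent with the surrounding description (the search method, sweep name, and the specific candidate values for epochs and lr are illustrative assumptions; batch_size values come from the text):

```yaml
program: train.py
method: random
name: sweep
metric:
  goal: maximize
  name: val_acc
parameters:
  batch_size:
    values: [16, 32, 64]
  epochs:
    values: [5, 10, 15]
  lr:
    min: 0.0001
    max: 0.1
```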
After you define your sweep configuration in a YAML file, add W&B to your training script to read in the YAML file and log the metric you want to optimize for. Within your training script, add the following code snippets to integrate W&B:

- Import the W&B Python SDK (wandb).
- Initialize a run with wandb.init().
- Read the YAML configuration file with a Python package such as yaml.
- Pass the configuration object to the config parameter of wandb.init().
- Retrieve the hyperparameter values from wandb.Run.config so that your script uses the values defined in the YAML file instead of hard-coded values. W&B flattens configuration values, so you can access nested values with dot notation or bracket notation as though they were top-level keys.
- Log the metric that you want to optimize with wandb.Run.log().

Lines 9 and 10 show how to fetch the hyperparameter values from the wandb.Run.config object. Line 17 shows how to log the metric you are optimizing for (val_acc) to W&B.

train.py
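The integrated train.py listing is missing here. A sketch of what it might look like, reusing the mock training functions from the original script (the exact listing, and therefore the line numbers cited above, may differ):

```python
import random

import yaml
import wandb

def train_one_epoch(epoch, lr, batch_size):
    # Mock training step: returns simulated accuracy and loss.
    return 0.25 + epoch / 30 + random.random() / 10, 1.2 - epoch / 10

def evaluate_one_epoch(epoch):
    # Mock validation step: returns simulated accuracy and loss.
    return 0.1 + epoch / 20 + random.random() / 10, 1.25 - epoch / 10

def main():
    # Read the YAML configuration file and pass it to wandb.init().
    with open("config.yaml") as f:
        run = wandb.init(config=yaml.safe_load(f))

    lr = run.config["lr"]                  # fetch hyperparameter values
    batch_size = run.config["batch_size"]  # from the run config

    for epoch in range(1, run.config["epochs"] + 1):
        train_acc, train_loss = train_one_epoch(epoch, lr, batch_size)
        val_acc, val_loss = evaluate_one_epoch(epoch)
        run.log({
            "epoch": epoch,
            "train_acc": train_acc,
            "train_loss": train_loss,
            "val_loss": val_loss,
            "val_acc": val_acc,  # the metric named in the sweep configuration
        })

if __name__ == "__main__":
    main()
```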
W&B flattens configuration values passed to wandb.init(config=).

Normally, you access nested values in a configuration object with dot notation or bracket notation. For example, consider the following nested configuration:

sample.yaml
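The sample.yaml listing is absent from this extraction. A minimal nested file consistent with the access example in the text (key and value names taken from the surrounding prose):

```yaml
key2:
  nested_key1: nested_value1
```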
You then read in the file with yaml and pass the configuration to wandb.init(config=). You can then access nested_value1 with yaml_sample["key2"]["nested_key1"] or yaml_sample.key2.nested_key1.

When you pass a configuration to wandb.init(config=), W&B flattens the values. This means that you access nested values as though they were top-level keys. For example, consider the following YAML file:

config.yaml
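This config.yaml fragment is also missing. A sketch consistent with the goal access pattern described in the text (the name key is an assumption based on the metric used throughout this guide):

```yaml
metric:
  goal: maximize
  name: val_acc
```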
After you read in the file and pass the configuration to wandb.init(config=), access the goal value with run.config["goal"] instead of run.config["metric"]["goal"] or run.config.metric.goal.

In your shell, set a maximum number of runs for the sweep agent to try. This is optional. In this example, we set the maximum number to 5.

Next, initialize the sweep with the wandb sweep command. Provide the name of the YAML file. Optionally provide the name of the project for the project flag (--project). This returns a sweep ID. For more information on how to initialize sweeps, see Initialize sweeps.

Copy the sweep ID and replace sweepID in the following code snippet to start the sweep job with the wandb agent command. For more information, see Start sweep jobs.
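The shell snippets referenced above are missing from this extraction. Assuming the sweep configuration is saved as config.yaml, they might look like the following (the project name is an assumed placeholder; keep sweepID as the placeholder to replace with your own sweep ID):

```shell
# Optional: cap how many runs the agent tries (5 in this example).
NUM=5

# Initialize the sweep; this prints a sweep ID.
wandb sweep --project my-first-sweep config.yaml

# Start the sweep job, replacing sweepID with the ID printed above.
wandb agent --count $NUM sweepID
```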
Logging metrics to W&B in a sweep

You must log the metric you define and are optimizing for in both your sweep configuration and with wandb.Run.log(). For example, if you define the metric to optimize as val_acc within your sweep configuration, you must also log val_acc to W&B. If you do not log the metric, W&B does not know what to optimize for.

The following is an incorrect example of logging the metric to W&B: the metric optimized for in the sweep configuration is val_acc, but the code logs val_acc within a nested dictionary under the key validation. You must log the metric directly, not within a nested dictionary.
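To make the failure mode concrete, here is a small stand-in for run.log() (a plain function, not the real wandb API) showing why the nested form hides the key the sweep looks for:

```python
# Stand-in for wandb.Run.log(): the sweep finds the optimized metric
# by its top-level key in the logged metrics.
summary = {}

def log(metrics):
    summary.update(metrics)

val_acc = 0.92

# Incorrect: val_acc is buried under "validation", so there is no
# top-level "val_acc" key for the sweep to optimize.
log({"validation": {"val_acc": val_acc}})
print("val_acc" in summary)  # False

summary.clear()

# Correct: log the metric directly at the top level.
log({"val_acc": val_acc})
print("val_acc" in summary)  # True
```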