How can we use Meta Research's Hydra to build scalable and reproducible machine learning experiment pipelines?

In this tutorial, we explore Hydra, an advanced configuration management framework originally developed and open-sourced by Meta Research. We first define structured configurations using Python dataclasses, which lets us manage experimental parameters in a clean, modular, and reproducible way. As we work through the tutorial, we compose configurations, apply runtime overrides, and simulate a multirun experiment with hyperparameter sweeps.

# Install Hydra (quietly) into the current environment, e.g., a fresh Colab runtime
import subprocess
import sys
subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", "hydra-core"])


import hydra
from hydra import compose, initialize_config_dir
from omegaconf import OmegaConf, DictConfig
from dataclasses import dataclass, field
from typing import List, Optional
import os
from pathlib import Path

We first install Hydra and import the core modules required for structured configuration, dynamic composition, and file handling. This setup ensures that our environment is ready to run the full tutorial seamlessly, for example on Google Colab.

@dataclass
class OptimizerConfig:
   _target_: str = "torch.optim.SGD"
   lr: float = 0.01
  
@dataclass
class AdamConfig(OptimizerConfig):
   _target_: str = "torch.optim.Adam"
   lr: float = 0.001
   betas: tuple = (0.9, 0.999)
   weight_decay: float = 0.0


@dataclass
class SGDConfig(OptimizerConfig):
   _target_: str = "torch.optim.SGD"
   lr: float = 0.01
   momentum: float = 0.9
   nesterov: bool = True


@dataclass
class ModelConfig:
   name: str = "resnet"
   num_layers: int = 50
   hidden_dim: int = 512
   dropout: float = 0.1


@dataclass
class DataConfig:
   dataset: str = "cifar10"
   batch_size: int = 32
   num_workers: int = 4
   augmentation: bool = True


@dataclass
class TrainingConfig:
   model: ModelConfig = field(default_factory=ModelConfig)
   data: DataConfig = field(default_factory=DataConfig)
   optimizer: OptimizerConfig = field(default_factory=AdamConfig)
   epochs: int = 100
   seed: int = 42
   device: str = "cuda"
   experiment_name: str = "exp_001"

We use Python dataclasses to define clean, type-safe configurations for the model, data, and optimizer settings. This structure lets us manage complex experimental parameters in a modular, readable way while ensuring consistency across runs.
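As a quick sanity check (a minimal sketch using the OmegaConf API directly, not one of the tutorial's demos), we can wrap one of these dataclasses with OmegaConf.structured and watch the resulting config reject values that violate the declared types:

from omegaconf import OmegaConf
from omegaconf.errors import ValidationError

model_cfg = OmegaConf.structured(ModelConfig())
print(model_cfg.num_layers)   # 50, taken from the dataclass default

try:
    model_cfg.dropout = "high"   # dropout is declared as float, so this fails
except ValidationError as err:
    print(f"Rejected invalid value: {err}")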

def setup_config_dir():
   config_dir = Path("./hydra_configs")
   config_dir.mkdir(exist_ok=True)
  
   main_config = """
defaults:
 - model: resnet
 - data: cifar10
 - optimizer: adam
 - _self_


epochs: 100
seed: 42
device: cuda
experiment_name: exp_001
"""
   (config_dir / "config.yaml").write_text(main_config)
  
   model_dir = config_dir / "model"
   model_dir.mkdir(exist_ok=True)
  
   (model_dir / "resnet.yaml").write_text("""
name: resnet
num_layers: 50
hidden_dim: 512
dropout: 0.1
""")
  
   (model_dir / "vit.yaml").write_text("""
name: vision_transformer
num_layers: 12
hidden_dim: 768
dropout: 0.1
patch_size: 16
""")
  
   data_dir = config_dir / "data"
   data_dir.mkdir(exist_ok=True)
  
   (data_dir / "cifar10.yaml").write_text("""
dataset: cifar10
batch_size: 32
num_workers: 4
augmentation: true
""")
  
   (data_dir / "imagenet.yaml").write_text("""
dataset: imagenet
batch_size: 128
num_workers: 8
augmentation: true
""")
  
   opt_dir = config_dir / "optimizer"
   opt_dir.mkdir(exist_ok=True)
  
   (opt_dir / "adam.yaml").write_text("""
_target_: torch.optim.Adam
lr: 0.001
betas: [0.9, 0.999]
weight_decay: 0.0
""")
  
   (opt_dir / "sgd.yaml").write_text("""
_target_: torch.optim.SGD
lr: 0.01
momentum: 0.9
nesterov: true
""")
  
   return str(config_dir.absolute())

We programmatically create a configuration directory containing YAML files for the model, dataset, and optimizer groups. This lets us demonstrate how Hydra composes a final configuration from separate files, keeping experiments flexible and clear.
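Because each optimizer YAML carries a _target_ field, the composed node can be turned directly into a live object. The following minimal sketch, which assumes PyTorch is available in the environment (as it is on Colab), uses hydra.utils.instantiate to build whichever optimizer the config names:

import torch.nn as nn
from hydra.utils import instantiate

config_dir = setup_config_dir()
model = nn.Linear(8, 2)   # stand-in model so the optimizer has parameters to manage
with initialize_config_dir(version_base=None, config_dir=config_dir):
    cfg = compose(config_name="config", overrides=["optimizer=sgd"])
    # instantiate() reads _target_ (here torch.optim.SGD) and calls it with the
    # remaining config fields plus the keyword arguments we pass in
    optimizer = instantiate(cfg.optimizer, params=model.parameters())
    print(type(optimizer).__name__)   # -> SGD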

@hydra.main(version_base=None, config_path="hydra_configs", config_name="config")
def train(cfg: DictConfig) -> float:
   print("=" * 80)
   print("CONFIGURATION")
   print("=" * 80)
   print(OmegaConf.to_yaml(cfg))
  
   print("n" + "=" * 80)
   print("ACCESSING CONFIGURATION VALUES")
   print("=" * 80)
   print(f"Model: {cfg.model.name}")
   print(f"Dataset: {cfg.data.dataset}")
   print(f"Batch Size: {cfg.data.batch_size}")
   print(f"Optimizer LR: {cfg.optimizer.lr}")
   print(f"Epochs: {cfg.epochs}")
  
   best_acc = 0.0
   for epoch in range(min(cfg.epochs, 3)):
       acc = 0.5 + (epoch * 0.1) + (cfg.optimizer.lr * 10)
       best_acc = max(best_acc, acc)
       print(f"Epoch {epoch+1}/{cfg.epochs}: Accuracy = {acc:.4f}")
  
   return best_acc

We implement a training function that leverages Hydra's configuration system to print, access, and use nested configuration values. By simulating a short training loop, we show how Hydra cleanly integrates experiment configuration into a real-world workflow.
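Note that @hydra.main is designed for scripts rather than notebooks: if this code were saved as a standalone file (say train.py, a hypothetical name) and train() were called at the bottom, the same function would become a command-line app, with overrides and sweeps available straight from the shell:

python train.py model=vit optimizer=sgd optimizer.lr=0.1
python train.py --multirun optimizer=adam,sgd optimizer.lr=0.001,0.01

The first command launches a single run with overrides; the second uses Hydra's multirun mode to launch one job per combination of the comma-separated values. Inside a notebook we rely on the compose API instead, which is exactly what the demos below do.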

def demo_basic_usage():
   print("n" + "πŸš€ DEMO 1: Basic Configurationn")
   config_dir = setup_config_dir()
   with initialize_config_dir(version_base=None, config_dir=config_dir):
       cfg = compose(config_name="config")
       print(OmegaConf.to_yaml(cfg))


def demo_config_override():
   print("n" + "πŸš€ DEMO 2: Configuration Overridesn")
   config_dir = setup_config_dir()
   with initialize_config_dir(version_base=None, config_dir=config_dir):
       cfg = compose(
           config_name="config",
           overrides=[
               "model=vit",
               "data=imagenet",
               "optimizer=sgd",
               "optimizer.lr=0.1",
               "epochs=50"
           ]
       )
       print(OmegaConf.to_yaml(cfg))


def demo_structured_config():
   print("n" + "πŸš€ DEMO 3: Structured Config Validationn")
   from hydra.core.config_store import ConfigStore
   cs = ConfigStore.instance()
   cs.store(name="training_config", node=TrainingConfig)
   with initialize_config_dir(version_base=None, config_dir=setup_config_dir()):
       cfg = compose(config_name="config")
       print(f"Config type: {type(cfg)}")
       print(f"Epochs (validated as int): {cfg.epochs}")


def demo_multirun_simulation():
   print("n" + "πŸš€ DEMO 4: Multirun Simulationn")
   config_dir = setup_config_dir()
   experiments = [
       ["model=resnet", "optimizer=adam", "optimizer.lr=0.001"],
       ["model=resnet", "optimizer=sgd", "optimizer.lr=0.01"],
       ["model=vit", "optimizer=adam", "optimizer.lr=0.0001"],
   ]
   results = {}
   for i, overrides in enumerate(experiments):
       print(f"n--- Experiment {i+1} ---")
       with initialize_config_dir(version_base=None, config_dir=config_dir):
           cfg = compose(config_name="config", overrides=overrides)
           print(f"Model: {cfg.model.name}, Optimizer: {cfg.optimizer._target_}")
           print(f"Learning Rate: {cfg.optimizer.lr}")
           results[f"exp_{i+1}"] = cfg
   return results


def demo_interpolation():
   print("n" + "πŸš€ DEMO 5: Variable Interpolationn")
   cfg = OmegaConf.create({
       "model": {"name": "resnet", "layers": 50},
       "experiment": "${model.name}_${model.layers}",
       "output_dir": "/outputs/${experiment}",
       "checkpoint": "${output_dir}/best.ckpt"
   })
   print(OmegaConf.to_yaml(cfg))
   print(f"nResolved experiment name: {cfg.experiment}")
   print(f"Resolved checkpoint path: {cfg.checkpoint}")

We demonstrate Hydra's advanced capabilities, including configuration overrides, structured config validation, multirun simulation, and variable interpolation. Each demo shows how Hydra accelerates experimentation, cuts down on manual setup, and improves research reproducibility.
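One reproducibility habit worth pairing with these features (a small sketch, not one of the demos above) is to snapshot the fully composed configuration next to each run's outputs with OmegaConf.save, so the exact experiment can be reconstructed later:

from omegaconf import OmegaConf

config_dir = setup_config_dir()
with initialize_config_dir(version_base=None, config_dir=config_dir):
    cfg = compose(config_name="config", overrides=["model=vit", "optimizer=sgd"])

# write the resolved composition to disk alongside the run's artifacts
OmegaConf.save(cfg, "resolved_config.yaml")
print(Path("resolved_config.yaml").read_text())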

if __name__ == "__main__":
   demo_basic_usage()
   demo_config_override()
   demo_structured_config()
   demo_multirun_simulation()
   demo_interpolation()
   print("n" + "=" * 80)
   print("Tutorial complete! Key takeaways:")
   print("βœ“ Config composition with defaults")
   print("βœ“ Runtime overrides via command line")
   print("βœ“ Structured configs with type safety")
   print("βœ“ Multirun for hyperparameter sweeps")
   print("βœ“ Variable interpolation")
   print("=" * 80)

We execute all the demos in sequence to see Hydra in action, from basic composition to multirun simulation. Finally, we print the key takeaways, highlighting how Hydra enables scalable and elegant experiment management.

In summary, we learned how Hydra, originally open-sourced by Meta Research, simplifies and strengthens experiment management through its powerful composition system. We explored structured configs, variable interpolation, and multirun capabilities that make large-scale machine learning workflows more flexible and maintainable. Armed with this knowledge, you can integrate Hydra into your own research or development workflow, bringing reproducibility, efficiency, and clarity to every experiment you run.



