As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and for accelerating training by several orders of magnitude. All the PyTorch examples and the `GaudiTrainer` script work out of the box with distributed training. There are two ways to launch them:
```bash
python gaudi_spawn.py \
    --world_size number_of_hpu_you_have --use_mpi \
    path_to_script.py --args1 --args2 ... --argsN
```

where `--argX` is an argument of the script to run in a distributed way.
Examples are given for question answering and for text classification.
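For instance, launching a question-answering script on 8 HPUs could look like the following. This is a hedged sketch: the script name `run_qa.py` and its arguments are illustrative placeholders, not a verified command.

```shell
# Illustrative only: script name and arguments are placeholders
python gaudi_spawn.py \
    --world_size 8 --use_mpi \
    run_qa.py \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --output_dir /tmp/qa_output
```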
```python
from optimum.habana.distributed import DistributedRunner
from optimum.utils import logging

world_size = 8  # Number of HPUs to use (1 or 8)

# Define the distributed runner
distributed_runner = DistributedRunner(
    command_list=["scripts/train.py --args1 --args2 ... --argsN"],
    world_size=world_size,
    use_mpi=True,
    multi_hls=False,
)

# Start the job
ret_code = distributed_runner.run()
```

`DistributedRunner` has the following signature:

```python
DistributedRunner(
    command_list=[],
    world_size=1,
    use_mpi=False,
    use_deepspeed=False,
    use_env=False,
    map_by="socket",
    multi_hls=False,
)
```
`DistributedRunner` sets up the training hardware configuration and runs the distributed training command. Depending on its arguments, it covers the following setups:

- Multi-node configuration setup for mpirun.
- Single-card setup.
- Single-node multi-card configuration setup.
- Single-node multi-card configuration setup for DeepSpeed.
- Single-node multi-card configuration setup for mpirun.
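As a rough illustration of how these setups relate to the constructor arguments shown above, here is a hypothetical dispatch function. `select_setup` is not part of the library; it only mirrors a plausible argument-to-setup mapping, assuming `multi_hls` selects the multi-node path and `world_size == 1` selects the single-card path.

```python
def select_setup(world_size=1, use_mpi=False, use_deepspeed=False, multi_hls=False):
    """Hypothetical helper mapping DistributedRunner-style arguments to the
    configuration setups listed above. Not part of optimum.habana."""
    if multi_hls:
        # Several machines: multi-node configuration for mpirun
        return "multi-node (mpirun)"
    if world_size == 1:
        # One HPU: single-card setup
        return "single-card"
    if use_deepspeed:
        return "single-node multi-card (DeepSpeed)"
    if use_mpi:
        return "single-node multi-card (mpirun)"
    return "single-node multi-card"

print(select_setup(world_size=8, use_mpi=True))  # single-node multi-card (mpirun)
```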