Online DPO Trainer
Overview
Online DPO was proposed in Direct Language Model Alignment from Online AI Feedback by Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, and Mathieu Blondel.
The abstract from the paper is the following:
Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF), that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, thus the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation in several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable, via instruction prompts to the LLM annotator.
This post-training method was contributed by Michael Noukhovitch, Shengyi Costa Huang, Quentin Gallouédec, and Edward Beeching.
Quick start
This example demonstrates how to train a model using the online DPO method. We use Qwen2-0.5B-Instruct as the base model and PairRMJudge as the judge, with prompts from the UltraFeedback dataset (trl-lib/ultrafeedback-prompt).
Below is the script to train the model:
# train_online_dpo.py
from datasets import load_dataset
from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
judge = PairRMJudge()
train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")
training_args = OnlineDPOConfig(output_dir="Qwen2-0.5B-OnlineDPO")
trainer = OnlineDPOTrainer(
model=model, judge=judge, args=training_args, processing_class=tokenizer, train_dataset=train_dataset
)
trainer.train()
Execute the script using the following command:
accelerate launch train_online_dpo.py
Distributed across 8 GPUs, training takes approximately 1 hour. You can check training progress on the reward graph: an increasing trend in the rewards of both the chosen and rejected completions indicates that the model is improving and generating better responses over time.
To see how the trained model performs, you can use the Transformers Chat CLI.
$ transformers chat trl-lib/Qwen2-0.5B-OnlineDPO
<quentin_gallouedec>:
What is the best programming language?
<trl-lib/Qwen2-0.5B-OnlineDPO>:
The best programming language depends on your specific needs and priorities. Some people prefer imperative programming languages (like Haskell or Lisp), while others prefer functional programming languages (like Scala or Python). It's important to consider your work style, programming environment, and project requirements when choosing a programming language.
Expected dataset type
Online DPO only requires a prompt-only dataset (unlike offline DPO, which expects a preference dataset). The OnlineDPOTrainer supports both conversational and standard dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.
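For reference, here is a sketch of what a single prompt-only example looks like in each format (the prompt texts are illustrative):
# Standard format: the prompt is a plain string
standard_example = {"prompt": "The sky is"}
# Conversational format: the prompt is a list of chat messages;
# the trainer applies the tokenizer's chat template automatically
conversational_example = {"prompt": [{"role": "user", "content": "What color is the sky?"}]}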
Usage tips
Use a reward model
Instead of a judge, you can choose to use a reward model. See Reward Bench for a leaderboard of public models you can use. Below is a code example showing how to replace a judge with the trl-lib/Qwen2-0.5B-Reward model:
- from trl import PairRMJudge
+ from transformers import AutoModelForSequenceClassification
- judge = PairRMJudge()
+ reward_model = AutoModelForSequenceClassification.from_pretrained("trl-lib/Qwen2-0.5B-Reward", num_labels=1)
+ reward_tokenizer = AutoTokenizer.from_pretrained("trl-lib/Qwen2-0.5B-Reward")
trainer = OnlineDPOTrainer(
...
- judge=judge,
+ reward_funcs=reward_model,
+ reward_processing_classes=reward_tokenizer,
...
)
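Putting the diff together, a minimal sketch of the full reward-model setup (reusing the model, tokenizer, dataset, and training arguments from the quick start) could look like this:
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer
from trl import OnlineDPOConfig, OnlineDPOTrainer
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
# The reward model and its tokenizer replace the PairRM judge
reward_model = AutoModelForSequenceClassification.from_pretrained("trl-lib/Qwen2-0.5B-Reward", num_labels=1)
reward_tokenizer = AutoTokenizer.from_pretrained("trl-lib/Qwen2-0.5B-Reward")
train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")
training_args = OnlineDPOConfig(output_dir="Qwen2-0.5B-OnlineDPO")
trainer = OnlineDPOTrainer(
    model=model,
    reward_funcs=reward_model,
    reward_processing_classes=reward_tokenizer,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()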
Encourage EOS token generation
When using a reward model, we may want the model to generate completions within a given length. During training, the model will generate completions up to the maximum length specified in the max_new_tokens argument of OnlineDPOConfig. If you want to penalize the model for not generating an EOS token before reaching the maximum length, you can use the missing_eos_penalty argument of OnlineDPOConfig:
training_args = OnlineDPOConfig(..., max_new_tokens=128, missing_eos_penalty=1.0)
Logging Completions
To better understand your model’s behavior during training, you can log sample completions periodically using the LogCompletionsCallback.
from trl import LogCompletionsCallback

trainer = OnlineDPOTrainer(..., eval_dataset=eval_dataset)
completions_callback = LogCompletionsCallback(trainer, num_prompts=8)
trainer.add_callback(completions_callback)
This callback logs the model’s generated completions directly to Weights & Biases.
Example script
We provide an example script to train a model using the online DPO method. The script is available at examples/scripts/dpo_online.py.
To test the online DPO script with the Qwen2.5 0.5B model on the UltraFeedback dataset, run the following command:
python examples/scripts/dpo_online.py \
    --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
    --judge pair_rm \
    --dataset_name trl-lib/ultrafeedback-prompt \
    --learning_rate 5.0e-7 \
    --output_dir Qwen2.5-0.5B-Online-DPO-PairRM \
    --warmup_ratio 0.1 \
    --push_to_hub
Logged metrics
While training and evaluating, we record the following reward metrics. An example tracked run is available on Weights & Biases.
- objective/kl: The mean Kullback-Leibler (KL) divergence between the current model and the reference model.
- objective/entropy: The mean entropy of the model, indicating the randomness of the actions chosen by the model.
- objective/non_score_reward: The mean reward from non-score-related sources, basically beta * kl.sum(1), where beta is the KL penalty coefficient and kl is the per-token KL divergence.
- objective/rlhf_reward: The mean RLHF reward, which is scores - non_score_reward. The rlhf_reward is the ultimate objective of online DPO training. If training works as intended, this metric should keep going up.
- objective/scores: The mean scores returned by the reward model.
- objective/scores_margin: The mean score margin (according to the external reward model) between the chosen and rejected completions.
- rewards/chosen: The mean reward (according to online DPO's implicit reward model) of the chosen completions.
- rewards/rejected: The mean reward (according to online DPO's implicit reward model) of the rejected completions.
- rewards/accuracies: The accuracies of online DPO's implicit reward model.
- rewards/margins: The mean reward margin (according to online DPO's implicit reward model) between the chosen and rejected completions.
- logps/chosen: The mean log probabilities of the chosen completions.
- logps/rejected: The mean log probabilities of the rejected completions.
- val/contain_eos_token: The fraction of completions which contain an EOS token.
- beta: The parameter that controls the weight of the loss term representing the deviation from the reference model. Typically fixed, but can be made dynamic by passing a list to OnlineDPOConfig (see the sketch below).
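As a minimal sketch of a dynamic beta schedule (the values are illustrative, not recommended settings), pass a list of floats: beta is then selected per epoch, and the last value is reused for the remaining epochs.
from trl import OnlineDPOConfig

# One beta per epoch; the last value is reused for any remaining epochs
training_args = OnlineDPOConfig(output_dir="Qwen2-0.5B-OnlineDPO", beta=[0.1, 0.05, 0.03])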
Benchmark experiments
To validate that the online DPO implementation works, we ran experiments with the Pythia 1B, 2.8B, and 6.9B models on a single node of 8 x H100s. Here are the commands we used to run the experiments. We take the SFT / RM models directly from The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization.
# 1B Online DPO experiment
accelerate launch --config_file examples/accelerate_configs/multi_gpu.yaml \
examples/scripts/dpo_online.py \
--model_name_or_path trl-lib/pythia-1b-deduped-tldr-sft \
--reward_model_path trl-lib/pythia-1b-deduped-tldr-rm \
--dataset_name trl-lib/tldr \
--learning_rate 5.0e-7 \
--output_dir pythia-1b-deduped-tldr-online-dpo \
--beta 0.1 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--num_train_epochs 3 \
--max_new_tokens 53 \
--warmup_ratio 0.1 \
--missing_eos_penalty 1.0 \
--save_steps 0.1 \
--push_to_hub
# 2.8B Online DPO experiment
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \
examples/scripts/dpo_online.py \
--model_name_or_path trl-lib/pythia-2.8b-deduped-tldr-sft \
--reward_model_path trl-lib/pythia-2.8b-deduped-tldr-rm \
--dataset_name trl-lib/tldr \
--learning_rate 5.0e-7 \
--output_dir pythia-2.8b-deduped-tldr-online-dpo \
--beta 0.1 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--num_train_epochs 3 \
--max_new_tokens 53 \
--warmup_ratio 0.1 \
--missing_eos_penalty 1.0 \
--save_steps 0.1 \
--push_to_hub
# 6.9B Online DPO experiment
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \
examples/scripts/dpo_online.py \
--model_name_or_path trl-lib/pythia-6.9b-deduped-tldr-sft \
--reward_model_path trl-lib/pythia-6.9b-deduped-tldr-rm \
--dataset_name trl-lib/tldr \
--learning_rate 5.0e-7 \
--output_dir pythia-6.9b-deduped-tldr-online-dpo \
--beta 0.1 \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 4 \
--num_train_epochs 3 \
--max_new_tokens 53 \
--warmup_ratio 0.1 \
--missing_eos_penalty 1.0 \
--gradient_checkpointing \
--save_steps 0.1 \
--push_to_hub
Checkpoints and experiment tracking are available at:
To evaluate, we use vLLM to load the checkpoints and GPT-4o mini as a judge model to evaluate the generated TL;DR against the reference TL;DR. For more information on how to use judges, see Judges.
$ python examples/scripts/evals/judge_tldr.py --model_name_or_path trl-lib/pythia-1b-deduped-tldr-sft --judge_model gpt-4o-mini --num_examples 1000
Model win rate: 33.00%
$ python examples/scripts/evals/judge_tldr.py --model_name_or_path trl-lib/pythia-6.9b-deduped-tldr-sft --judge_model gpt-4o-mini --num_examples 1000
Model win rate: 41.50%
$ python examples/scripts/evals/judge_tldr.py --model_name_or_path trl-lib/pythia-1b-deduped-tldr-online-dpo --judge_model gpt-4o-mini --num_examples 1000
Model win rate: 62.60%
$ python examples/scripts/evals/judge_tldr.py --model_name_or_path trl-lib/pythia-6.9b-deduped-tldr-online-dpo --judge_model gpt-4o-mini --num_examples 1000
Model win rate: 74.20%
We can then plot the RLHF scaling chart.
import matplotlib.pyplot as plt
results = {
"SFT": {1.0e9: 0.21, 2.8e9: 0.27, 6.9e9: 0.316},
"online-dpo": {1.0e9: 0.542, 2.8e9: 0.746, 6.9e9: 0.796},
"offline-dpo": {1.0e9: 0.422, 2.8e9: 0.517, 6.9e9: 0.701},
}
plt.plot(results["SFT"].keys(), results["SFT"].values(), label="SFT", marker="o")
plt.plot(results["online-dpo"].keys(), results["online-dpo"].values(), label="Online-dpo with RM judge", marker="o")
plt.plot(results["offline-dpo"].keys(), results["offline-dpo"].values(), label="Offline-dpo", marker="o")
plt.axhline(y=0.5, color="black", linestyle="-.", label="Human reference summary")
plt.xscale("log")
plt.xlabel("Model size")
plt.ylabel("Win rate against reference summaries\n(according to GPT-4-0613)")
plt.title("DPO scaling by model size")
plt.legend()
plt.xlim(5e8, 1.2e10)
plt.xticks([1e9, 3e9, 1e10], ["1B", "3B", "10B"])
plt.grid(True, which="both", ls="--", c="0.7")
plt.tight_layout()
plt.show()
The online DPO checkpoints achieve increasingly higher win rates as model size scales up. This is a good sign that the online DPO implementation is working as intended.
OnlineDPOTrainer
class trl.OnlineDPOTrainer
< source >( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str] ref_model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, NoneType] = None reward_funcs: typing.Union[str, transformers.modeling_utils.PreTrainedModel, typing.Callable[[list, list], list[float]], list[typing.Union[str, transformers.modeling_utils.PreTrainedModel, typing.Callable[[list, list], list[float]]]], NoneType] = None judge: typing.Optional[trl.trainer.judges.BasePairwiseJudge] = None args: typing.Optional[trl.trainer.online_dpo_config.OnlineDPOConfig] = None data_collator: typing.Optional[transformers.data.data_collator.DataCollator] = None train_dataset: typing.Union[datasets.arrow_dataset.Dataset, torch.utils.data.dataset.IterableDataset, NoneType] = None eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, torch.utils.data.dataset.IterableDataset, dict[str, typing.Union[datasets.arrow_dataset.Dataset, torch.utils.data.dataset.IterableDataset]], NoneType] = None processing_class: typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.processing_utils.ProcessorMixin, NoneType] = None reward_processing_classes: typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, list[transformers.tokenization_utils_base.PreTrainedTokenizerBase], NoneType] = None peft_config: typing.Optional[ForwardRef('PeftConfig')] = None compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalPrediction], dict]] = None callbacks: typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None optimizers: tuple = (None, None) preprocess_logits_for_metrics: typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None reward_model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, NoneType] = None reward_processing_class: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None )
Parameters
- model (Union[str, nn.Module, PreTrainedModel]) — Model to be trained. Can be either:
  - A string, being the model id of a pretrained model hosted inside a model repo on huggingface.co, or a path to a directory containing model weights saved using save_pretrained, e.g., './my_model_directory/'. The model is loaded using from_pretrained with the keyword arguments in args.model_init_kwargs.
  - A PreTrainedModel object. Only causal language models are supported.
- ref_model (transformers.PreTrainedModel or torch.nn.Module or None) — The reference model to use for training. If None is specified, the reference model will be created from the model.
- judge (BasePairwiseJudge) — The judge to use for pairwise comparison of model completions.
- reward_funcs (Union[RewardFunc, list[RewardFunc]], optional, defaults to None) — Reward functions to be used for computing the rewards. To compute the rewards, we call all the reward functions with the prompts and completions and sum the rewards. Can be either:
  - A single reward function: can be a string (path to a model), a PreTrainedModel, or a custom callable function.
  - A list of reward functions: must all be of compatible types.
  Note: Only one of judge or reward_funcs should be provided.
- args (OnlineDPOConfig) — The online DPO config arguments to use for training.
- data_collator (transformers.DataCollator) — The data collator to use for training. If None is specified, the default data collator (DPODataCollatorWithPadding) will be used, which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences.
- train_dataset (Dataset or IterableDataset) — The dataset to use for training.
- eval_dataset (Dataset, IterableDataset or dict[str, Union[Dataset, IterableDataset]]) — The dataset to use for evaluation.
- processing_class (PreTrainedTokenizerBase or ProcessorMixin, optional, defaults to None) — Processing class used to process the data. If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.
- reward_processing_classes (Union[PreTrainedTokenizerBase, list[PreTrainedTokenizerBase]], optional, defaults to None) — Processing classes corresponding to the reward functions specified in reward_funcs. Can be either:
  - A single processing class: used when reward_funcs contains only one reward function.
  - A list of processing classes: must match the order and length of the reward functions in reward_funcs.
  If set to None, the tokenizer for each model-based reward function is automatically loaded using from_pretrained.
- peft_config (~peft.PeftConfig, optional, defaults to None) — PEFT configuration used to wrap the model. If None, the model is not wrapped.
- compute_metrics (Callable[[EvalPrediction], dict], optional) — The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary mapping strings to metric values.
- callbacks (list[transformers.TrainerCallback]) — The callbacks to use for training.
- optimizers (tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]) — The optimizer and scheduler to use for training.
- preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]) — The function to use to preprocess the logits before computing the metrics.
Initialize OnlineDPOTrainer.
Deprecated since version 0.22.0: The following parameters are deprecated and will be removed in a future version:
- reward_model: Use reward_funcs instead. For example, change reward_model=model to reward_funcs=model.
- reward_processing_class: Use reward_processing_classes instead. For example, change reward_processing_class=tokenizer to reward_processing_classes=tokenizer.
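As a sketch of the callable form of reward_funcs (the function name and the length-based heuristic are made up for illustration, and the callable is assumed to receive the prompt and completion lists positionally, per the Callable[[list, list], list[float]] type above), a custom reward function can stand in for a judge or a reward model. The model, tokenizer, dataset, and training arguments are reused from the quick start:
# Hypothetical custom reward function: returns one float score per completion
def reward_short_completions(prompts, completions):
    # Illustrative heuristic: prefer concise completions
    return [1.0 if len(completion) < 200 else 0.0 for completion in completions]

trainer = OnlineDPOTrainer(
    model=model,
    reward_funcs=reward_short_completions,  # used instead of judge= or a reward model
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)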
train
< source >( resume_from_checkpoint: typing.Union[str, bool, NoneType] = None trial: typing.Union[ForwardRef('optuna.Trial'), dict[str, typing.Any], NoneType] = None ignore_keys_for_eval: typing.Optional[list[str]] = None **kwargs )
Parameters
- resume_from_checkpoint (str or bool, optional) — If a str, local path to a saved checkpoint as saved by a previous instance of Trainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of Trainer. If present, training will resume from the model/optimizer/scheduler states loaded here.
- trial (optuna.Trial or dict[str, Any], optional) — The trial run or the hyperparameter dictionary for hyperparameter search.
- ignore_keys_for_eval (list[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.
- kwargs (dict[str, Any], optional) — Additional keyword arguments used to hide deprecated arguments.
Main training entry point.
save_model
Will save the model, so you can reload it using from_pretrained(). Will only save from the main process.
push_to_hub
< source >( commit_message: typing.Optional[str] = 'End of training' blocking: bool = True token: typing.Optional[str] = None revision: typing.Optional[str] = None **kwargs )
Parameters
- commit_message (str, optional, defaults to "End of training") — Message to commit while pushing.
- blocking (bool, optional, defaults to True) — Whether the function should return only when the git push has finished.
- token (str, optional, defaults to None) — Token with write permission to overwrite Trainer's original args.
- revision (str, optional) — The git revision to commit from. Defaults to the head of the "main" branch.
- kwargs (dict[str, Any], optional) — Additional keyword arguments passed along to ~Trainer.create_model_card.
Upload self.model and self.processing_class to the 🤗 model hub on the repo self.args.hub_model_id.
OnlineDPOConfig
class trl.OnlineDPOConfig
< source >( output_dir: typing.Optional[str] = None overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False eval_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: typing.Optional[float] = 0 torch_empty_cache_steps: typing.Optional[int] = None learning_rate: float = 5e-07 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs: typing.Union[dict[str, typing.Any], str, NoneType] = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: str = 'passive' log_level_replica: str = 'warning' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: float = 10 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps: float = 500 save_total_limit: typing.Optional[int] = None save_safetensors: typing.Optional[bool] = True save_on_each_node: bool = False save_only_model: bool = False restore_callback_states_from_checkpoint: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False use_ipex: bool = False bf16: typing.Optional[bool] = None fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 ddp_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: typing.Union[str, list[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[float] = None dataloader_num_workers: int = 0 dataloader_prefetch_factor: typing.Optional[int] = None past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: typing.Optional[bool] = True label_names: typing.Optional[list[str]] = None load_best_model_at_end: typing.Optional[bool] = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False fsdp: typing.Union[list[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params: int = 0 fsdp_config: typing.Union[dict[str, typing.Any], str, NoneType] = None fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None accelerator_config: typing.Union[dict, str, NoneType] = None parallelism_config: typing.Optional[ForwardRef('ParallelismConfig')] = None deepspeed: typing.Union[dict, str, NoneType] = None label_smoothing_factor: float = 0.0 optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch_fused' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: typing.Optional[str] 
= 'length' report_to: typing.Union[NoneType, str, list[str]] = None ddp_find_unused_parameters: typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None ddp_broadcast_buffers: typing.Optional[bool] = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: typing.Optional[bool] = None hub_always_push: bool = False hub_revision: typing.Optional[str] = None gradient_checkpointing: bool = True gradient_checkpointing_kwargs: typing.Union[dict[str, typing.Any], str, NoneType] = None include_inputs_for_metrics: bool = False include_for_metrics: list = <factory> eval_do_concat_batches: bool = True fp16_backend: str = 'auto' push_to_hub_model_id: typing.Optional[str] = None push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: int = 1800 torch_compile: bool = False torch_compile_backend: typing.Optional[str] = None torch_compile_mode: typing.Optional[str] = None include_tokens_per_second: typing.Optional[bool] = False include_num_input_tokens_seen: typing.Optional[bool] = False neftune_noise_alpha: typing.Optional[float] = None optim_target_modules: typing.Union[NoneType, str, list[str]] = None batch_eval_metrics: bool = False eval_on_start: bool = False use_liger_kernel: typing.Optional[bool] = False liger_kernel_config: typing.Optional[dict[str, bool]] = None eval_use_gather_object: typing.Optional[bool] = False average_tokens_across_devices: typing.Optional[bool] = True reward_model_path: typing.Optional[str] = None judge: typing.Optional[str] = None max_new_tokens: int = 64 max_length: int = 512 temperature: float = 0.9 top_p: float = 1.0 top_k: typing.Optional[int] = None min_p: typing.Optional[float] = None repetition_penalty: float = 1.0 generation_kwargs: typing.Optional[dict] = None use_transformers_paged: bool = False cache_implementation: typing.Optional[str] = None missing_eos_penalty: typing.Optional[float] = None beta: list = <factory> loss_type: str = 'sigmoid' disable_dropout: bool = True use_vllm: bool = False vllm_model_impl: str = 'vllm' vllm_guided_decoding_regex: typing.Optional[str] = None vllm_gpu_memory_utilization: typing.Optional[float] = 0.55 vllm_mode: str = 'server' vllm_server_base_url: typing.Optional[str] = None vllm_server_host: str = '0.0.0.0' vllm_server_port: int = 8000 vllm_server_timeout: float = 240.0 vllm_tensor_parallel_size: int = 1 ds3_gather_for_generation: bool = True model_init_kwargs: typing.Optional[dict[str, typing.Any]] = None reward_weights: typing.Optional[list[float]] = None dataset_num_proc: typing.Optional[int] = None gpu_memory_utilization: typing.Optional[float] = None )
Parameters
- reward_model_path (str or None, optional, defaults to None) — Path to the reward model. Either judge or reward_model_path must be set, but not both.
- judge (str or None, optional, defaults to None) — Name of the judge to use. Either judge or reward_model_path must be set, but not both.
- max_new_tokens (int, optional, defaults to 64) — Maximum number of tokens to generate per completion.
- max_length (int, optional, defaults to 512) — Maximum total length of the sequence (prompt + completion) used to compute log probabilities. If the sequence exceeds this limit, the leftmost tokens will be truncated to preserve as much of the completion as possible.
- temperature (float, optional, defaults to 0.9) — Temperature for sampling. The higher the temperature, the more random the completions.
- missing_eos_penalty (float or None, optional, defaults to None) — Penalty applied to the score when the model fails to generate an EOS token. This is useful to encourage the model to generate completions shorter than the maximum length (max_new_tokens). The penalty must be a positive value. This parameter only works when using reward_funcs and not when using judge.
- beta (float or list[float], optional, defaults to 0.1) — Parameter controlling the deviation from the reference model. A higher β means less deviation from the reference model. For the IPO loss (loss_type="ipo"), β is the regularization parameter denoted by τ in the paper. If a list of floats is provided, then β is selected for each new epoch and the last β is used for the rest of the epochs.
- loss_type (str, optional, defaults to "sigmoid") — Type of loss to use. Possible values are "sigmoid" and "ipo".
- dataset_num_proc (int or None, optional, defaults to None) — Number of processes to use for processing the dataset.
- disable_dropout (bool, optional, defaults to True) — Whether to disable dropout in the model and reference model.
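As a brief sketch of how these options combine (the output_dir and other values are illustrative), either judge or reward_model_path is set, but never both:
from trl import OnlineDPOConfig

# Judge-based configuration
config_with_judge = OnlineDPOConfig(
    output_dir="my-online-dpo-run",  # illustrative
    judge="pair_rm",                 # either judge ...
    max_new_tokens=64,
    temperature=0.9,
    beta=0.1,
)

# Reward-model-based configuration
config_with_rm = OnlineDPOConfig(
    output_dir="my-online-dpo-run",
    reward_model_path="trl-lib/Qwen2-0.5B-Reward",  # ... or reward_model_path, not both
    missing_eos_penalty=1.0,  # only effective with a reward model
)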
Parameters that control generation
- top_p (float, optional, defaults to 1.0) — Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1.0 to consider all tokens.
- top_k (int or None, optional, defaults to None) — Number of highest probability vocabulary tokens to keep for top-k-filtering. If None, top-k-filtering is disabled and all tokens are considered.
- min_p (float or None, optional, defaults to None) — Minimum token probability, which will be scaled by the probability of the most likely token. It must be a value between 0.0 and 1.0. Typical values are in the 0.01-0.2 range.
- repetition_penalty (float, optional, defaults to 1.0) — Float that penalizes new tokens based on whether they appear in the prompt and the generated text so far. Values > 1.0 encourage the model to use new tokens, while values < 1.0 encourage the model to repeat tokens.
- use_transformers_paged (bool, optional, defaults to False) — Whether to use the transformers paged implementation for generation. If set to True, the transformers paged implementation will be used for generation instead of the default padded implementation. This parameter is only effective when use_vllm is set to False.
- cache_implementation (str or None, optional, defaults to None) — Implementation of the cache method for faster generation when use_vllm is set to False.
- generation_kwargs (dict[str, Any] or None, optional, defaults to None) — Additional keyword arguments to pass to GenerationConfig (if using transformers) or SamplingParams (if using vLLM) when sampling completions. This can be used to further customize the generation behavior, such as setting suppress_tokens, num_beams, etc. If it contains keys that conflict with the other generation parameters (like min_p, top_p, etc.), they will override them.
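As an illustrative sketch (the values are not recommendations), these sampling controls can be combined, with generation_kwargs passing anything not covered by a dedicated field:
from trl import OnlineDPOConfig

# Entries in generation_kwargs override overlapping dedicated parameters
training_args = OnlineDPOConfig(
    output_dir="my-online-dpo-run",  # illustrative
    max_new_tokens=128,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
    generation_kwargs={"num_beams": 1},
)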
Parameters that control generation acceleration powered by vLLM
- use_vllm (bool, optional, defaults to False) — Whether to use vLLM for generating completions. If set to True, the trainer will use vLLM for generation instead of the default model.generate(). Requires vllm to be installed.
- vllm_model_impl (str, optional, defaults to "vllm") — Model implementation to use for vLLM. Must be one of "transformers" or "vllm". "transformers": use the transformers backend for the model implementation. "vllm": use the vllm library for the model implementation.
- vllm_mode (str, optional, defaults to "server") — Mode to use for vLLM integration when use_vllm is set to True. Must be one of "server" or "colocate".
  - "server": the trainer will send generation requests to a separate vLLM server. Make sure a TRL vLLM server is running (start with trl vllm-serve).
  - "colocate": vLLM will run in the same process and share the training GPUs. This avoids the need for a separate server but may cause resource contention with training.
- vllm_guided_decoding_regex (str or None, optional, defaults to None) — Regex for vLLM guided decoding. If None (default), guided decoding is disabled.
Parameters that control the vLLM server (only used when `vllm_mode` is `"server"`)
- vllm_server_base_url (str or None, optional, defaults to None) — Base URL for the vLLM server (e.g., "http://localhost:8000"). If provided, vllm_server_host and vllm_server_port are ignored.
- vllm_server_host (str, optional, defaults to "0.0.0.0") — Host of the vLLM server to connect to. Ignored if vllm_server_base_url is provided.
- vllm_server_port (int, optional, defaults to 8000) — Port of the vLLM server to connect to. Ignored if vllm_server_base_url is provided.
- vllm_server_timeout (float, optional, defaults to 240.0) — Total timeout duration in seconds to wait for the vLLM server to be up. If the server is not up after the timeout, a ConnectionError is raised.
Parameters that control colocated vLLM execution (only used when `vllm_mode` is `"colocate"`)
- vllm_gpu_memory_utilization (float, optional, defaults to 0.55) — Control the GPU memory utilization for vLLM. This setting only applies when vllm_mode is set to "colocate". If you are using vllm_mode="server", this parameter must be passed separately when launching the vLLM server via the --vllm_gpu_memory_utilization flag.
- vllm_tensor_parallel_size (int, optional, defaults to 1) — Control the tensor parallel size for vLLM. This setting only applies when vllm_mode is set to "colocate". If you are using vllm_mode="server", this parameter must be passed separately when launching the vLLM server via the --vllm_tensor_parallel_size flag.
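As a hedged sketch of the two vLLM modes (the URL and memory fraction are illustrative), server mode points the trainer at a running trl vllm-serve instance, while colocate mode runs vLLM inside the training process:
from trl import OnlineDPOConfig

# Server mode: generation requests go to a separate vLLM server
# (started beforehand with trl vllm-serve)
server_args = OnlineDPOConfig(
    output_dir="my-online-dpo-run",  # illustrative
    use_vllm=True,
    vllm_mode="server",
    vllm_server_base_url="http://localhost:8000",
)

# Colocate mode: vLLM shares the training GPUs, no separate server needed
colocate_args = OnlineDPOConfig(
    output_dir="my-online-dpo-run",
    use_vllm=True,
    vllm_mode="colocate",
    vllm_gpu_memory_utilization=0.55,  # fraction of GPU memory reserved for vLLM
    vllm_tensor_parallel_size=1,
)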
Other parameters
- ds3_gather_for_generation (bool, optional, defaults to True) — This setting applies to DeepSpeed ZeRO-3. If enabled, the policy model weights are gathered for generation, improving generation speed. However, disabling this option allows training models that exceed the VRAM capacity of a single GPU, albeit at the cost of slower generation. Disabling this option is not compatible with vLLM generation.
- model_init_kwargs (dict[str, Any] or None, optional, defaults to None) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the model from a string.
Configuration class for the OnlineDPOTrainer.
This class includes only the parameters that are specific to Online DPO training. For a full list of training arguments, please refer to the TrainingArguments documentation. Note that default values in this class may differ from those in TrainingArguments.
Using HfArgumentParser, we can turn this class into argparse arguments that can be specified on the command line.
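A minimal sketch of that pattern (the printed fields are just examples) parses an OnlineDPOConfig from the command line:
from transformers import HfArgumentParser
from trl import OnlineDPOConfig

# Exposes OnlineDPOConfig fields (e.g. --output_dir, --learning_rate, --beta) as CLI arguments
parser = HfArgumentParser(OnlineDPOConfig)
(training_args,) = parser.parse_args_into_dataclasses()
print(training_args.output_dir, training_args.learning_rate)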