bird-of-paradise committed
Commit a3ae39b · verified · 1 Parent(s): 0747405

Add an advanced implementation for distributed training


This implementation includes several key features for more advanced practitioners:

- FSDP Compatibility: Designed from the ground up to run on multi-GPU systems using PyTorch's Fully Sharded Data Parallel (FSDP).
- Hybrid Optimization (MuonW): Implements a robust "MuonW" approach, using Muon for matrix parameters while falling back to the well-tested AdamW optimizer for all other parameters (e.g., embeddings, biases, and other non-matrix tensors).
- Advanced Metric Tracking: Includes a `get_post_step_metrics` method for detailed, real-time monitoring of the optimizer's state, crucial for debugging and research at scale (see the usage sketches below).
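
A minimal, hypothetical usage sketch (not part of the committed files) of how the notebook's `MuonW` class could be set up for an FSDP-wrapped model follows. The helper name `build_muonw` and the embed/head name filter are illustrative assumptions; `MuonW` itself, the `param_names`/`use_muon` group keys, and `record_update_metrics` are taken from the notebook.

```python
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def build_muonw(model: nn.Module, lr: float = 0.01, weight_decay: float = 0.1):
    # Split parameters: Muon for 2D+ matrices, AdamW fallback for embeddings/heads/biases.
    muon_params, muon_names, other_params, other_names = [], [], [], []
    for name, p in model.named_parameters():
        if p.ndim >= 2 and "embed" not in name.lower() and "head" not in name.lower():
            muon_params.append(p); muon_names.append(name)
        else:
            other_params.append(p); other_names.append(name)
    param_groups = [
        {"params": muon_params, "param_names": muon_names, "use_muon": True},
        {"params": other_params, "param_names": other_names, "use_muon": False},
    ]
    return MuonW(param_groups, lr=lr, weight_decay=weight_decay, record_update_metrics=True)

# Assumed ordering: wrap with FSDP first, then build the optimizer from the sharded parameters.
# fsdp_model = FSDP(model)
# optimizer = build_muonw(fsdp_model)
```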

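A second, equally hypothetical sketch shows the per-step metric flow, assuming `optimizer` is a `MuonW` built with `record_update_metrics=True` and `fsdp_model` is the FSDP-wrapped module; the helper name `log_optimizer_metrics` and the `log_every` cadence are illustrative, not the OLMo trainer API.

```python
def log_optimizer_metrics(optimizer, fsdp_model, step: int, log_every: int = 100):
    collect = (step % log_every == 0)
    # Clips gradients for groups configured with max_grad_norm / max_grad_norm_ratio and,
    # when `collect` is True, also gathers per-parameter min/max/avg/norm metrics.
    metrics = optimizer.clip_grads_and_collect_metrics(
        global_step=step, collect_param_metrics=collect
    )
    optimizer.step()
    if collect:
        # Adds update/<param>.norm and update/<param>.max, reduced to rank 0 under distributed FSDP training.
        metrics.update(optimizer.get_post_step_metrics(fsdp_model))
    # "total_grad_norm" is present on every step, even when collect is False.
    return metrics
```
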
Files changed (2)
  1. MuonForOLMo.ipynb +1182 -0
  2. README.md +13 -1
MuonForOLMo.ipynb ADDED
@@ -0,0 +1,1182 @@
1
+ {
2
+ "nbformat": 4,
3
+ "nbformat_minor": 0,
4
+ "metadata": {
5
+ "colab": {
6
+ "provenance": []
7
+ },
8
+ "kernelspec": {
9
+ "name": "python3",
10
+ "display_name": "Python 3"
11
+ },
12
+ "language_info": {
13
+ "name": "python"
14
+ }
15
+ },
16
+ "cells": [
17
+ {
18
+ "cell_type": "code",
19
+ "execution_count": 6,
20
+ "metadata": {
21
+ "id": "cCXb6F65XhI_"
22
+ },
23
+ "outputs": [],
24
+ "source": [
25
+ "import logging\n",
26
+ "from abc import ABCMeta, abstractmethod\n",
27
+ "from dataclasses import dataclass, replace\n",
28
+ "from math import cos, pi, sqrt\n",
29
+ "from typing import Any, Dict, List, Optional, Tuple, Union\n",
30
+ "\n",
31
+ "import torch\n",
32
+ "import torch.distributed as dist\n",
33
+ "import torch.nn as nn\n",
34
+ "from torch.distributed.fsdp import FullyShardedDataParallel\n",
35
+ "from torch.distributed.fsdp import FullyShardedDataParallel as FSDP\n",
36
+ "from torch.optim.optimizer import Optimizer as OptimizerBase\n",
37
+ "\n",
38
+ "#from . import LayerNormBase\n",
39
+ "#from .config import OptimizerType, SchedulerConfig, SchedulerType, TrainConfig\n",
40
+ "#from .torch_util import get_default_device, is_distributed\n",
41
+ "\n",
42
+ "\"\"\" Simulate import from .torch_util \"\"\"\n",
43
+ "\n",
44
+ "import gc\n",
45
+ "import os\n",
46
+ "from typing import Optional, TypeVar\n",
47
+ "\n",
48
+ "import torch\n",
49
+ "import torch.distributed as dist\n",
50
+ "\n",
51
+ "T = TypeVar(\"T\")\n",
52
+ "\n",
53
+ "\n",
54
+ "def is_distributed() -> bool:\n",
55
+ " return dist.is_available() and dist.is_initialized()\n",
56
+ "\n",
57
+ "def get_default_device() -> torch.device:\n",
58
+ " if torch.cuda.is_available() and torch.cuda.is_initialized():\n",
59
+ " return torch.device(\"cuda\")\n",
60
+ " elif torch.backends.mps.is_available():\n",
61
+ " return torch.device(\"mps\")\n",
62
+ " else:\n",
63
+ " return torch.device(\"cpu\")\n",
64
+ "\n",
65
+ "\n",
66
+ "\"\"\" end of simulation \"\"\"\n",
67
+ "\n",
68
+ "\n",
69
+ "\n",
70
+ "\n",
71
+ "__all__ = [\n",
72
+ " \"Optimizer\",\n",
73
+ " \"LionW\",\n",
74
+ " \"AdamW\",\n",
75
+ " \"MuonW\",\n",
76
+ " \"Scheduler\",\n",
77
+ " \"CosWithWarmup\",\n",
78
+ " \"LinearWithWarmup\",\n",
79
+ " \"InvSqrtWithWarmup\",\n",
80
+ " \"MaxScheduler\",\n",
81
+ " \"ConstantScheduler\",\n",
82
+ " \"CosLinearEnvelope\",\n",
83
+ " \"BoltOnWarmupScheduler\",\n",
84
+ " \"build_optimizer\",\n",
85
+ " \"build_scheduler\",\n",
86
+ "]\n",
87
+ "\n",
88
+ "\n",
89
+ "log = logging.getLogger(__name__)"
90
+ ]
91
+ },
92
+ {
93
+ "cell_type": "code",
94
+ "source": [
95
+ "class Optimizer(OptimizerBase):\n",
96
+ " def __init__(self, *args, record_update_metrics: bool = False, selective_updates: bool = False, **kwargs):\n",
97
+ " super().__init__(*args, **kwargs)\n",
98
+ " self._record_update_metrics = record_update_metrics\n",
99
+ " self._collecting_metrics = False\n",
100
+ " self._selective_updates = selective_updates\n",
101
+ "\n",
102
+ " def _clean_param_name(self, name: str) -> str:\n",
103
+ " return name.replace(\"_fsdp_wrapped_module.\", \"\")\n",
104
+ "\n",
105
+ " @torch.no_grad()\n",
106
+ " def clip_grads_and_collect_metrics(\n",
107
+ " self,\n",
108
+ " global_step: int,\n",
109
+ " collect_param_metrics: bool = True,\n",
110
+ " process_group: Optional[dist.ProcessGroup] = None,\n",
111
+ " device: Optional[torch.device] = None,\n",
112
+ " ) -> Dict[str, torch.Tensor]:\n",
113
+ " \"\"\"\n",
114
+ " Clips gradients for every group that has the field `max_grad_norm`.\n",
115
+ " At the same time collect metrics for each parameter and its gradient.\n",
116
+ " \"\"\"\n",
117
+ " self._collecting_metrics = collect_param_metrics\n",
118
+ " device = get_default_device() if device is None else device\n",
119
+ "\n",
120
+ " # NOTE (epwalsh): during distributed training we're making an assumption that the order of\n",
121
+ " # the param groups and the params within each group are the same across all ranks.\n",
122
+ " # This is justified since we initialize the parameter groups in every rank by iterating over\n",
123
+ " # `module.parameters()` or `module.named_modules()` / `module.named_parameters()`, each of which\n",
124
+ " # provides a consistent order.\n",
125
+ " # For each parameter (with a gradient) we'll collect:\n",
126
+ " # - min, max, avg, norm of the param itself\n",
127
+ " # - min, max, avg, norm of the param's gradient\n",
128
+ " # - min, max, avg, norm of any additional per-parameter optimizer state metrics returned from\n",
129
+ " # `self.get_state_for_param()`.\n",
130
+ " # Afterwards we'll reduce these all over all ranks.\n",
131
+ " per_param_min_metrics: List[torch.Tensor] = []\n",
132
+ " per_param_max_metrics: List[torch.Tensor] = []\n",
133
+ " per_param_sum_metrics: List[torch.Tensor] = []\n",
134
+ " per_param_norm_metrics: List[torch.Tensor] = []\n",
135
+ " per_param_numel_metrics: List[torch.Tensor] = []\n",
136
+ "\n",
137
+ " per_param_min_metric_names: List[str] = []\n",
138
+ " per_param_max_metric_names: List[str] = []\n",
139
+ " per_param_avg_metric_names: List[str] = []\n",
140
+ " per_param_norm_metric_names: List[str] = []\n",
141
+ "\n",
142
+ " dst_rank = 0\n",
143
+ " if process_group is not None:\n",
144
+ " dst_rank = dist.get_global_rank(process_group, 0)\n",
145
+ "\n",
146
+ " #######################################################################\n",
147
+ " # part 1: collect metrics locally\n",
148
+ " #######################################################################\n",
149
+ " for group in self.param_groups:\n",
150
+ " for name, p in zip(group[\"param_names\"], group[\"params\"]):\n",
151
+ " name = self._clean_param_name(name)\n",
152
+ " # Always need to collect the norm of gradients for clipping, even if we're not collecting\n",
153
+ " # other metrics.\n",
154
+ " tensors: List[Optional[torch.Tensor]] = [p.grad]\n",
155
+ " prefixes: List[str] = [f\"grad/{name}\"]\n",
156
+ " if collect_param_metrics:\n",
157
+ " state = self.get_state_for_param(p)\n",
158
+ " sorted_state_keys = sorted([k for k in state.keys()])\n",
159
+ " tensors.extend([p] + [state[key] for key in sorted_state_keys])\n",
160
+ " prefixes.extend([f\"param/{name}\"] + [f\"{key}/{name}\" for key in sorted_state_keys])\n",
161
+ " assert len(tensors) == len(prefixes)\n",
162
+ "\n",
163
+ " # Get min, max, avg, and norm for all `tensors` associated with the parameter.\n",
164
+ " for x, prefix in zip(tensors, prefixes):\n",
165
+ " # grad or state tensors could be none for params that have their shards completely on\n",
166
+ " # other ranks.\n",
167
+ " if x is not None and x.numel() > 0:\n",
168
+ " if collect_param_metrics:\n",
169
+ " x_abs = x.abs()\n",
170
+ " per_param_min_metrics.append(x_abs.min().unsqueeze(0).to(dtype=torch.float32))\n",
171
+ " per_param_max_metrics.append(x_abs.max().unsqueeze(0).to(dtype=torch.float32))\n",
172
+ " per_param_sum_metrics.append(x.sum().unsqueeze(0).to(dtype=torch.float32))\n",
173
+ " per_param_numel_metrics.append(\n",
174
+ " torch.tensor([x.numel()], device=device, dtype=torch.float32)\n",
175
+ " )\n",
176
+ " per_param_norm_metrics.append(\n",
177
+ " torch.linalg.vector_norm(x, 2.0, dtype=torch.float32).unsqueeze(0)\n",
178
+ " )\n",
179
+ " else:\n",
180
+ " if collect_param_metrics:\n",
181
+ " per_param_min_metrics.append(\n",
182
+ " torch.tensor([float(\"inf\")], device=device, dtype=torch.float32)\n",
183
+ " )\n",
184
+ " per_param_max_metrics.append(torch.tensor([0.0], device=device, dtype=torch.float32))\n",
185
+ " per_param_sum_metrics.append(torch.tensor([0.0], device=device, dtype=torch.float32))\n",
186
+ " per_param_numel_metrics.append(torch.tensor([0.0], device=device, dtype=torch.float32))\n",
187
+ " per_param_norm_metrics.append(torch.tensor([0.0], device=device, dtype=torch.float32))\n",
188
+ " if collect_param_metrics:\n",
189
+ " per_param_min_metric_names.append(f\"{prefix}.min\")\n",
190
+ " per_param_max_metric_names.append(f\"{prefix}.max\")\n",
191
+ " per_param_avg_metric_names.append(f\"{prefix}.avg\")\n",
192
+ " per_param_norm_metric_names.append(f\"{prefix}.norm\")\n",
193
+ "\n",
194
+ " assert (\n",
195
+ " len(per_param_min_metrics)\n",
196
+ " == len(per_param_min_metric_names)\n",
197
+ " == len(per_param_max_metrics)\n",
198
+ " == len(per_param_max_metric_names)\n",
199
+ " == len(per_param_sum_metrics)\n",
200
+ " == len(per_param_numel_metrics)\n",
201
+ " == len(per_param_avg_metric_names)\n",
202
+ " )\n",
203
+ " assert len(per_param_norm_metrics) == len(per_param_norm_metric_names)\n",
204
+ "\n",
205
+ " def is_grad_norm_metric(metric_name: str) -> bool:\n",
206
+ " return metric_name.startswith(\"grad/\") and metric_name.endswith(\".norm\")\n",
207
+ "\n",
208
+ " #######################################################################\n",
209
+ " # part 2: reduce metrics over ranks\n",
210
+ " #######################################################################\n",
211
+ " param_group_sharded = False\n",
212
+ " for group in self.param_groups:\n",
213
+ " param_group_sharded = param_group_sharded or group.get(\"sharded\", False)\n",
214
+ "\n",
215
+ " total_grad_norm: torch.Tensor\n",
216
+ " per_param_avg_metrics: List[torch.Tensor] = []\n",
217
+ " if is_distributed() and param_group_sharded:\n",
218
+ " # Reduce metrics across all ranks. Note that we can use a `reduce` for most cases\n",
219
+ " # instead of an `all_reduce`, but we need `all_reduce` for norms so that all ranks\n",
220
+ " # get the right value for gradient norms so they can clip correctly.\n",
221
+ " # Reduce mins.\n",
222
+ " if per_param_min_metrics:\n",
223
+ " all_mins = torch.cat(per_param_min_metrics).to(device)\n",
224
+ " dist.reduce(all_mins, dst_rank, op=dist.ReduceOp.MIN, group=process_group)\n",
225
+ " per_param_min_metrics = all_mins.split(1)\n",
226
+ " # Reduce maxs.\n",
227
+ " if per_param_max_metrics:\n",
228
+ " all_maxs = torch.cat(per_param_max_metrics).to(device)\n",
229
+ " dist.reduce(all_maxs, dst_rank, op=dist.ReduceOp.MAX, group=process_group)\n",
230
+ " per_param_max_metrics = all_maxs.split(1)\n",
231
+ " # Reduce sums or just norms.\n",
232
+ " all_norms = torch.cat(per_param_norm_metrics).to(device) ** 2.0\n",
233
+ " if per_param_sum_metrics and per_param_numel_metrics:\n",
234
+ " all_sums = torch.cat(per_param_sum_metrics).to(device)\n",
235
+ " all_numels = torch.cat(per_param_numel_metrics).to(device)\n",
236
+ " all_sums_norms_numels = torch.cat(\n",
237
+ " [all_sums.unsqueeze(0), all_norms.unsqueeze(0), all_numels.unsqueeze(0)], dim=0\n",
238
+ " )\n",
239
+ " dist.all_reduce(all_sums_norms_numels, op=dist.ReduceOp.SUM, group=process_group)\n",
240
+ " all_sums, all_norms, all_numels = all_sums_norms_numels.split(1)\n",
241
+ " # Get averages.\n",
242
+ " # NOTE: could get infs for non-rank0 processes but that's okay.\n",
243
+ " per_param_avg_metrics = (all_sums / all_numels).squeeze(0).split(1)\n",
244
+ " else:\n",
245
+ " dist.all_reduce(all_norms, op=dist.ReduceOp.SUM, group=process_group)\n",
246
+ " grad_norm_metric_mask = torch.tensor(\n",
247
+ " [float(is_grad_norm_metric(n)) for n in per_param_norm_metric_names], device=all_norms.device\n",
248
+ " )\n",
249
+ " total_grad_norm = (all_norms * grad_norm_metric_mask).sum() ** 0.5\n",
250
+ " per_param_norm_metrics = (all_norms ** (0.5)).squeeze(0).split(1)\n",
251
+ " else:\n",
252
+ " total_grad_norm = (\n",
253
+ " torch.cat(\n",
254
+ " [\n",
255
+ " m\n",
256
+ " for m, n in zip(per_param_norm_metrics, per_param_norm_metric_names)\n",
257
+ " if is_grad_norm_metric(n)\n",
258
+ " ]\n",
259
+ " )\n",
260
+ " ** 2.0\n",
261
+ " ).sum() ** 0.5\n",
262
+ " per_param_avg_metrics = [x / n for x, n in zip(per_param_sum_metrics, per_param_numel_metrics)]\n",
263
+ "\n",
264
+ " assert len(per_param_avg_metrics) == len(per_param_avg_metric_names)\n",
265
+ "\n",
266
+ " # Collect all metrics into a single dict.\n",
267
+ " all_metrics: Dict[str, torch.Tensor] = {}\n",
268
+ " if collect_param_metrics:\n",
269
+ " for metric_name, metric in zip(per_param_min_metric_names, per_param_min_metrics):\n",
270
+ " all_metrics[metric_name] = metric.squeeze(0)\n",
271
+ " for metric_name, metric in zip(per_param_max_metric_names, per_param_max_metrics):\n",
272
+ " all_metrics[metric_name] = metric.squeeze(0)\n",
273
+ " for metric_name, metric in zip(per_param_avg_metric_names, per_param_avg_metrics):\n",
274
+ " all_metrics[metric_name] = metric.squeeze(0)\n",
275
+ "\n",
276
+ " for metric_name, metric in zip(per_param_norm_metric_names, per_param_norm_metrics):\n",
277
+ " all_metrics[metric_name] = metric.squeeze(0)\n",
278
+ " all_metrics[\"total_grad_norm\"] = total_grad_norm\n",
279
+ "\n",
280
+ " #######################################################################\n",
281
+ " # part 3: clip grads\n",
282
+ " #######################################################################\n",
283
+ " num_grads_clipped = 0\n",
284
+ " num_eligible_grads = 0\n",
285
+ " for group in self.param_groups:\n",
286
+ " if (max_norm_ratio := group.get(\"max_grad_norm_ratio\")) is not None:\n",
287
+ " num_clipped = self._do_adaptive_clipping(\n",
288
+ " group, max_norm_ratio, global_step, all_metrics, collect_param_metrics=collect_param_metrics\n",
289
+ " )\n",
290
+ " elif (max_norm := group.get(\"max_grad_norm\")) is not None:\n",
291
+ " num_clipped = self._do_global_fixed_clipping(\n",
292
+ " group, max_norm, all_metrics, collect_param_metrics=collect_param_metrics\n",
293
+ " )\n",
294
+ " else:\n",
295
+ " # No clipping needed.\n",
296
+ " continue\n",
297
+ " num_eligible_grads += len(group[\"params\"])\n",
298
+ " if num_clipped is not None:\n",
299
+ " num_grads_clipped += num_clipped\n",
300
+ "\n",
301
+ " if collect_param_metrics:\n",
302
+ " if num_eligible_grads > 0:\n",
303
+ " clipping_rate = torch.tensor(num_grads_clipped / num_eligible_grads, device=\"cpu\")\n",
304
+ " else:\n",
305
+ " clipping_rate = torch.tensor(0.0, device=\"cpu\")\n",
306
+ " all_metrics[\"clipping_rate\"] = clipping_rate\n",
307
+ "\n",
308
+ " # total_grad_norm is computed at all steps, even when collect_param_metrics is set to False\n",
309
+ " return all_metrics\n",
310
+ "\n",
311
+ " @torch.no_grad()\n",
312
+ " def _do_adaptive_clipping(\n",
313
+ " self,\n",
314
+ " group: Dict[str, Any],\n",
315
+ " max_norm_ratio: float,\n",
316
+ " global_step: int,\n",
317
+ " all_metrics: Dict[str, torch.Tensor],\n",
318
+ " collect_param_metrics: bool = True,\n",
319
+ " device: Optional[torch.device] = None,\n",
320
+ " ) -> Optional[int]:\n",
321
+ " \"\"\"\n",
322
+ " Do adaptive gradient clipping on a param group.\n",
323
+ "\n",
324
+ " If ``collect_param_metrics`` is ``True`` this will return the total number of gradients clipped.\n",
325
+ " \"\"\"\n",
326
+ " device = get_default_device() if device is None else device\n",
327
+ " num_grads_clipped = 0\n",
328
+ " # We'll use the bigger of beta1 and beta2 to update the exponential average of the norm of\n",
329
+ " # the gradient (a scalar), not to be confused with the exponential average of the gradient.\n",
330
+ " # TODO (epwalsh): handle optimizers that don't have betas.\n",
331
+ " beta1, beta2 = group[\"betas\"]\n",
332
+ " beta = max(beta1, beta2)\n",
333
+ " for name, p in zip(group[\"param_names\"], group[\"params\"]):\n",
334
+ " name = self._clean_param_name(name)\n",
335
+ " grad_norm = all_metrics.get(f\"grad/{name}.norm\")\n",
336
+ " if grad_norm is None:\n",
337
+ " continue\n",
338
+ "\n",
339
+ " # Get or initialize the exponential average of grad norm.\n",
340
+ " # TODO: The way we have it right now, every rank tracks the `grad_norm_exp_avg` of every parameter,\n",
341
+ " # even parameters for which the corresponding local shard is empty. This has the potential to\n",
342
+ " # cause some issues with the optimizer, as we ran into with https://github.com/allenai/LLM/pull/372.\n",
343
+ " # So we should consider changing how we do this at some point so that we don't add any state\n",
344
+ " # to parameters for which the local shard is empty. That would probably add extra distributed\n",
345
+ " # communication, at least on steps where we have to log (i.e. when `collect_param_metrics=True`).\n",
346
+ " state = self.state[p]\n",
347
+ " grad_norm_exp_avg = state.get(\"grad_norm_exp_avg\")\n",
348
+ " if grad_norm_exp_avg is None:\n",
349
+ " grad_norm_exp_avg = grad_norm.clone().to(device)\n",
350
+ " # We don't want to add anything to `state` until `state` has been initialized, otherwise\n",
351
+ " # this will crash some optimizers which rely on checking `len(state)`. The downside here\n",
352
+ " # is that we won't start tracking `grad_norm_exp_avg` until the 2nd training step.\n",
353
+ " if global_step > 1:\n",
354
+ " state[\"grad_norm_exp_avg\"] = grad_norm_exp_avg\n",
355
+ "\n",
356
+ " max_allowed_norm = max_norm_ratio * grad_norm_exp_avg\n",
357
+ " clip_coef = max_allowed_norm / (grad_norm + 1e-6)\n",
358
+ "\n",
359
+ " # Clip the gradients and update the exponential average.\n",
360
+ " # Note that multiplying by the clamped coefficient is meaningless when it is\n",
361
+ " # equal to 1, but it avoids the host-device sync that would result from `if clip_coef_clamped < 1`.\n",
362
+ " clip_coef_clamped = torch.clamp(clip_coef, max=1.0)\n",
363
+ " if p.grad is not None:\n",
364
+ " # p.grad could be none for some ranks when using FSDP.\n",
365
+ " p.grad.detach().mul_(clip_coef_clamped.to(p.grad.device, p.grad.dtype))\n",
366
+ "\n",
367
+ " # Update the exponential average of the norm of the gradient with the clipped norm of the gradient.\n",
368
+ " grad_norm_exp_avg.lerp_((grad_norm * clip_coef_clamped).to(grad_norm_exp_avg.device), 1 - beta)\n",
369
+ " # Alternative: update with the *unclipped* norm of the gradient.\n",
370
+ " # grad_norm_exp_avg.lerp_(grad_norm.to(grad_norm_exp_avg.device), 1 - beta)\n",
371
+ "\n",
372
+ " if collect_param_metrics:\n",
373
+ " # Can't avoid host-device sync here.\n",
374
+ " if clip_coef_clamped < 1.0:\n",
375
+ " num_grads_clipped += 1\n",
376
+ " all_metrics[f\"grad_norm_exp_avg/{name}\"] = grad_norm_exp_avg\n",
377
+ " return num_grads_clipped if collect_param_metrics else None\n",
378
+ "\n",
379
+ " @torch.no_grad()\n",
380
+ " def _do_global_fixed_clipping(\n",
381
+ " self,\n",
382
+ " group: Dict[str, Any],\n",
383
+ " max_norm: float,\n",
384
+ " all_metrics: Dict[str, torch.Tensor],\n",
385
+ " collect_param_metrics: bool = True,\n",
386
+ " device: Optional[torch.device] = None,\n",
387
+ " ) -> Optional[int]:\n",
388
+ " \"\"\"\n",
389
+ " Do global fixed gradient clipping on a param group.\n",
390
+ "\n",
391
+ " If ``collect_param_metrics`` is ``True`` this will return the total number of gradients clipped.\n",
392
+ " \"\"\"\n",
393
+ " device = get_default_device() if device is None else device\n",
394
+ " total_grad_norm = all_metrics[\"total_grad_norm\"]\n",
395
+ " clip_coef = max_norm / (total_grad_norm.to(device) + 1e-6)\n",
396
+ " clip_coef_clamped = torch.clamp(clip_coef, max=1.0)\n",
397
+ " num_grads_clipped: Optional[int] = None\n",
398
+ " if collect_param_metrics:\n",
399
+ " # Can't avoid host-device sync here.\n",
400
+ " if clip_coef_clamped < 1.0:\n",
401
+ " num_grads_clipped = len(group[\"params\"])\n",
402
+ " for p in group[\"params\"]:\n",
403
+ " # Clip the gradients.\n",
404
+ " # Note that multiplying by the clamped coefficient is meaningless when it is\n",
405
+ " # equal to 1, but it avoids the host-device sync that would result from `if clip_coef_clamped < 1`.\n",
406
+ " if p.grad is not None:\n",
407
+ " # p.grad could be none for some ranks when using FSDP.\n",
408
+ " p.grad.detach().mul_(clip_coef_clamped.to(p.grad.device, p.grad.dtype))\n",
409
+ " return num_grads_clipped\n",
410
+ "\n",
411
+ " def get_post_step_metrics(\n",
412
+ " self, module: nn.Module, process_group: Optional[dist.ProcessGroup] = None\n",
413
+ " ) -> Dict[str, torch.Tensor]:\n",
414
+ " del module, process_group\n",
415
+ " return {}\n",
416
+ "\n",
417
+ " def get_state_for_param(self, param: nn.Parameter) -> Dict[str, Optional[torch.Tensor]]:\n",
418
+ " del param\n",
419
+ " return {}"
420
+ ],
421
+ "metadata": {
422
+ "id": "o9dFXoh2YSVn"
423
+ },
424
+ "execution_count": 7,
425
+ "outputs": []
426
+ },
427
+ {
428
+ "cell_type": "code",
429
+ "source": [
430
+ "class MuonW(Optimizer):\n",
431
+ " \"\"\"\n",
432
+ " Distributed implementation of Muon optimizer with weight decay.\n",
433
+ "\n",
434
+ " Muon applies orthogonalization to matrix parameter(2D+) updates using\n",
435
+ " Newton-Schulz orthogonalization iterations to compute the zeroth power. For non-matrix\n",
436
+ " parameters(embeddings, heads, bias), it uses AdamW as a backup.\n",
437
+ "\n",
438
+ " \"\"\"\n",
439
+ "\n",
440
+ " def __init__(\n",
441
+ " self,\n",
442
+ " params,\n",
443
+ " lr=0.01,\n",
444
+ " betas=(0.95, 0.95), # Muon uses single momentum param\n",
445
+ " weight_decay=0.0,\n",
446
+ " ns_steps=5,\n",
447
+ " nesterov=True,\n",
448
+ " eps=1e-8, # For AdamW backup\n",
449
+ " record_update_metrics=False,\n",
450
+ " selective_updates=False,\n",
451
+ " device=None,\n",
452
+ " ):\n",
453
+ " if isinstance(params, (list, tuple)) and len(params) > 0 and isinstance(params[0], dict):\n",
454
+ " # User provided param groups\n",
455
+ " for param_group in params:\n",
456
+ " if 'use_muon' not in param_group:\n",
457
+ " param_group['use_muon'] = True\n",
458
+ " else:\n",
459
+ " # Convert single params list to a param group\n",
460
+ " params = [{'params': params, 'use_muon': True}]\n",
461
+ "\n",
462
+ " defaults = dict(\n",
463
+ " lr=lr,\n",
464
+ " betas=betas,\n",
465
+ " weight_decay=weight_decay,\n",
466
+ " ns_steps=ns_steps,\n",
467
+ " nesterov=nesterov,\n",
468
+ " eps=eps,\n",
469
+ " use_muon=True, # Default to using Muon\n",
470
+ " )\n",
471
+ " super().__init__(\n",
472
+ " params,\n",
473
+ " defaults,\n",
474
+ " record_update_metrics=record_update_metrics,\n",
475
+ " selective_updates=selective_updates\n",
476
+ " )\n",
477
+ " self._device = device\n",
478
+ " self._update_norms = None\n",
479
+ " self._update_maxs = None\n",
480
+ " self._update_param_names = None\n",
481
+ "\n",
482
+ " def zeropower_via_newtonschulz5(self, G, steps: int):\n",
483
+ " \"\"\"\n",
484
+ " Newton-Schulz iteration to compute the zeroth power / orthogonalization of G.\n",
485
+ " \"\"\"\n",
486
+ " assert G.ndim >= 2\n",
487
+ " a, b, c = (3.4445, -4.7750, 2.0315)\n",
488
+ " X = G.bfloat16()\n",
489
+ " if G.size(-2) > G.size(-1):\n",
490
+ " X = X.mT\n",
491
+ "\n",
492
+ " # Ensure spectral norm is at most 1\n",
493
+ " X = X / (X.norm(dim=(-2, -1), keepdim=True) + 1e-7)\n",
494
+ " # Perform the NS iterations\n",
495
+ " for _ in range(steps):\n",
496
+ " A = X @ X.mT\n",
497
+ " B = b * A + c * A @ A\n",
498
+ " X = a * X + B @ X\n",
499
+ "\n",
500
+ " if G.size(-2) > G.size(-1):\n",
501
+ " X = X.mT\n",
502
+ " return X\n",
503
+ "\n",
504
+ " def get_state_for_param(self, param: nn.Parameter) -> Dict[str, Optional[torch.Tensor]]:\n",
505
+ " \"\"\"Return optimizer state for a parameter.\"\"\"\n",
506
+ " state = self.state[param]\n",
507
+ " if not state:\n",
508
+ " return {}\n",
509
+ "\n",
510
+ " result = {}\n",
511
+ " if 'momentum_buffer' in state:\n",
512
+ " result['momentum_buffer'] = state['momentum_buffer']\n",
513
+ " if 'exp_avg' in state:\n",
514
+ " result['exp_avg'] = state['exp_avg']\n",
515
+ " if 'exp_avg_sq' in state:\n",
516
+ " result['exp_avg_sq'] = state['exp_avg_sq']\n",
517
+ "\n",
518
+ " return result\n",
519
+ "\n",
520
+ " @torch.no_grad()\n",
521
+ " def step(self, closure=None):\n",
522
+ " \"\"\"Perform a single optimization step.\"\"\"\n",
523
+ " if closure is not None:\n",
524
+ " with torch.enable_grad():\n",
525
+ " closure()\n",
526
+ "\n",
527
+ " device = get_default_device() if self._device is None else self._device\n",
528
+ " update_norms = []\n",
529
+ " update_maxs = []\n",
530
+ " update_param_names = []\n",
531
+ "\n",
532
+ " collecting_metrics = self._collecting_metrics and self._record_update_metrics\n",
533
+ "\n",
534
+ " for group in self.param_groups:\n",
535
+ " lr = group['lr']\n",
536
+ " weight_decay = group['weight_decay']\n",
537
+ " beta1, beta2 = group['betas']\n",
538
+ " ns_steps = group['ns_steps']\n",
539
+ " nesterov = group['nesterov']\n",
540
+ " eps = group['eps']\n",
541
+ " use_muon = group['use_muon']\n",
542
+ "\n",
543
+ " for name, p in zip(group[\"param_names\"], group[\"params\"]):\n",
544
+ " name = self._clean_param_name(name)\n",
545
+ "\n",
546
+ " if p.grad is None:\n",
547
+ " if collecting_metrics:\n",
548
+ " update_param_names.append(name)\n",
549
+ " update_norms.append(torch.tensor([0.0], device=device))\n",
550
+ " update_maxs.append(torch.tensor([0.0], device=device))\n",
551
+ " continue\n",
552
+ "\n",
553
+ " # Apply weight decay\n",
554
+ " #mask = p.grad != 0 if self._selective_updates else 1\n",
555
+ " mask = (p.grad != 0) if self._selective_updates else torch.ones_like(p, dtype=torch.bool)\n",
556
+ " p.mul_(1 - mask * (lr * weight_decay))\n",
557
+ "\n",
558
+ " grad = p.grad\n",
559
+ " state = self.state[p]\n",
560
+ "\n",
561
+ " # Determine whether to use Muon or AdamW for this parameter\n",
562
+ " # We use Muon for matrix parameters unless explicitly disabled\n",
563
+ " should_use_muon = use_muon and p.ndim >= 2 and not ('embed' in name.lower() or 'head' in name.lower())\n",
564
+ "\n",
565
+ " if should_use_muon:\n",
566
+ " # --- Muon Update Logic ---\n",
567
+ "\n",
568
+ " # Initialize momentum buffer if needed\n",
569
+ " if 'momentum_buffer' not in state:\n",
570
+ " state['momentum_buffer'] = torch.zeros_like(grad)\n",
571
+ " momentum_buffer = state['momentum_buffer']\n",
572
+ "\n",
573
+ " # Update momentum\n",
574
+ " momentum_buffer.lerp_(grad, mask * (1 - beta1))\n",
575
+ "\n",
576
+ " # Compute update\n",
577
+ " if nesterov:\n",
578
+ " update = momentum_buffer * beta1 + grad * (1 - beta1)\n",
579
+ " else:\n",
580
+ " update = momentum_buffer.clone()\n",
581
+ "\n",
582
+ " if isinstance(mask, torch.Tensor):\n",
583
+ " update.mul_(mask)\n",
584
+ "\n",
585
+ " # Handle conv filters\n",
586
+ " orig_shape = update.shape\n",
587
+ " if update.ndim == 4:\n",
588
+ " update = update.view(update.shape[0], -1)\n",
589
+ "\n",
590
+ " # Apply Newton-Schulz\n",
591
+ " update = self.zeropower_via_newtonschulz5(update, steps=ns_steps)\n",
592
+ "\n",
593
+ " # Scale update\n",
594
+ " update *= max(1, grad.size(-2) / grad.size(-1)) ** 0.5\n",
595
+ "\n",
596
+ " # Reshape if needed\n",
597
+ " if len(orig_shape) == 4:\n",
598
+ " update = update.view(orig_shape)\n",
599
+ "\n",
600
+ " else:\n",
601
+ " # --- AdamW Update Logic ---\n",
602
+ "\n",
603
+ " # Initialize momentum buffers if needed\n",
604
+ " if 'exp_avg' not in state:\n",
605
+ " state['exp_avg'] = torch.zeros_like(grad)\n",
606
+ " state['exp_avg_sq'] = torch.zeros_like(grad)\n",
607
+ " state['step'] = 0\n",
608
+ "\n",
609
+ " # Update step count\n",
610
+ " state['step'] += 1\n",
611
+ " step = state['step']\n",
612
+ "\n",
613
+ " # Update momentum buffers\n",
614
+ " state['exp_avg'].lerp_(grad, mask * (1 - beta1))\n",
615
+ " state['exp_avg_sq'].mul_(1 - mask * (1 - beta2)).addcmul_(grad, grad, value=1 - beta2)\n",
616
+ "\n",
617
+ " # Bias correction\n",
618
+ " bias_correction1 = 1 - beta1 ** step\n",
619
+ " bias_correction2 = 1 - beta2 ** step\n",
620
+ "\n",
621
+ " # Compute AdamW update\n",
622
+ " denom = (state['exp_avg_sq'].sqrt() / math.sqrt(bias_correction2)).add_(eps)\n",
623
+ " update = state['exp_avg'] / bias_correction1 / denom\n",
624
+ "\n",
625
+ " if isinstance(mask, torch.Tensor):\n",
626
+ " update.mul_(mask)\n",
627
+ "\n",
628
+ " # Apply update\n",
629
+ " p.add_(update, alpha=-lr)\n",
630
+ "\n",
631
+ " # Collect metrics\n",
632
+ " if collecting_metrics:\n",
633
+ " update_param_names.append(name)\n",
634
+ " update_norms.append(torch.linalg.vector_norm(update, 2.0, dtype=torch.float32).unsqueeze(0))\n",
635
+ " update_maxs.append(update.abs().max().unsqueeze(0))\n",
636
+ "\n",
637
+ " # Store metrics\n",
638
+ " if collecting_metrics:\n",
639
+ " self._update_norms = update_norms\n",
640
+ " self._update_maxs = update_maxs\n",
641
+ " self._update_param_names = update_param_names\n",
642
+ "\n",
643
+ " return None\n",
644
+ "\n",
645
+ " def get_post_step_metrics(\n",
646
+ " self, module: nn.Module, process_group: Optional[dist.ProcessGroup] = None\n",
647
+ " ) -> Dict[str, torch.Tensor]:\n",
648
+ " \"\"\"Get metrics about the optimization step.\"\"\"\n",
649
+ " if not (self._record_update_metrics and self._collecting_metrics):\n",
650
+ " return {}\n",
651
+ "\n",
652
+ " device = get_default_device() if self._device is None else self._device\n",
653
+ " dst_rank = 0\n",
654
+ " if process_group is not None:\n",
655
+ " dst_rank = dist.get_global_rank(process_group, 0)\n",
656
+ "\n",
657
+ " param_names = self._update_param_names\n",
658
+ " update_norms = self._update_norms\n",
659
+ " update_maxs = self._update_maxs\n",
660
+ "\n",
661
+ " if param_names is None or update_norms is None or update_maxs is None:\n",
662
+ " return {}\n",
663
+ "\n",
664
+ " # Reduce metrics if needed\n",
665
+ " if is_distributed() and isinstance(module, FullyShardedDataParallel):\n",
666
+ " # Reduce norms\n",
667
+ " all_norms = torch.cat(update_norms).to(device) ** 2.0\n",
668
+ " dist.reduce(all_norms, dst_rank, op=dist.ReduceOp.SUM, group=process_group)\n",
669
+ " update_norms = (all_norms ** (0.5)).squeeze(0).split(1)\n",
670
+ "\n",
671
+ " # Reduce maxs\n",
672
+ " all_maxs = torch.cat(update_maxs).to(device)\n",
673
+ " dist.reduce(all_maxs, dst_rank, op=dist.ReduceOp.MAX, group=process_group)\n",
674
+ " update_maxs = all_maxs.split(1)\n",
675
+ "\n",
676
+ " # Collect metrics\n",
677
+ " metrics = {}\n",
678
+ " for param_name, update_norm, update_max in zip(param_names, update_norms, update_maxs):\n",
679
+ " metrics[f\"update/{param_name}.norm\"] = update_norm.squeeze(0)\n",
680
+ " metrics[f\"update/{param_name}.max\"] = update_max.squeeze(0)\n",
681
+ "\n",
682
+ " # Reset stored metrics\n",
683
+ " self._update_norms = None\n",
684
+ " self._update_maxs = None\n",
685
+ " self._update_param_names = None\n",
686
+ "\n",
687
+ " return metrics"
688
+ ],
689
+ "metadata": {
690
+ "id": "UgBBhlu8YSOD"
691
+ },
692
+ "execution_count": 9,
693
+ "outputs": []
694
+ },
695
+ {
696
+ "cell_type": "code",
697
+ "source": [],
698
+ "metadata": {
699
+ "id": "apYTNxvcYSFf"
700
+ },
701
+ "execution_count": null,
702
+ "outputs": []
703
+ },
704
+ {
705
+ "cell_type": "markdown",
706
+ "source": [
707
+ "## testing suit"
708
+ ],
709
+ "metadata": {
710
+ "id": "C7qri20wY61B"
711
+ }
712
+ },
713
+ {
714
+ "cell_type": "code",
715
+ "source": [
716
+ "# Quick debug test to see if Muon is actually updating\n",
717
+ "import torch\n",
718
+ "import torch.nn as nn\n",
719
+ "\n",
720
+ "model = nn.Linear(10, 5, bias=False)\n",
721
+ "optimizer = MuonW([{'params': model.parameters(), 'param_names': ['weight']}], lr=0.1)\n",
722
+ "\n",
723
+ "# Initial weight\n",
724
+ "init_weight = model.weight.data.clone()\n",
725
+ "\n",
726
+ "# Create gradient\n",
727
+ "x = torch.randn(32, 10)\n",
728
+ "y = model(x)\n",
729
+ "loss = y.sum()\n",
730
+ "loss.backward()\n",
731
+ "\n",
732
+ "print(f\"Gradient norm: {model.weight.grad.norm():.4f}\")\n",
733
+ "\n",
734
+ "# Step\n",
735
+ "optimizer.step()\n",
736
+ "\n",
737
+ "# Check update\n",
738
+ "weight_change = (model.weight.data - init_weight).norm()\n",
739
+ "print(f\"Weight change: {weight_change:.4f}\")\n",
740
+ "\n",
741
+ "if weight_change < 1e-6:\n",
742
+ " print(\"WARNING: Weights barely changed - check Newton-Schulz implementation\")"
743
+ ],
744
+ "metadata": {
745
+ "colab": {
746
+ "base_uri": "https://localhost:8080/"
747
+ },
748
+ "id": "JsLd9EUbYfMw",
749
+ "outputId": "447510b5-446c-48da-b10f-5ee35d1e137e"
750
+ },
751
+ "execution_count": 12,
752
+ "outputs": [
753
+ {
754
+ "output_type": "stream",
755
+ "name": "stdout",
756
+ "text": [
757
+ "Gradient norm: 40.4564\n",
758
+ "Weight change: 0.0680\n"
759
+ ]
760
+ }
761
+ ]
762
+ },
763
+ {
764
+ "cell_type": "code",
765
+ "source": [
766
+ "import math\n",
767
+ "\n",
768
+ "import torch\n",
769
+ "import torch.nn as nn\n",
770
+ "import torch.nn.functional as F\n",
771
+ "import numpy as np\n",
772
+ "from typing import Dict, Optional\n",
773
+ "import unittest\n",
774
+ "from unittest.mock import MagicMock, patch\n",
775
+ "\n",
776
+ "# Mock the required imports for testing\n",
777
+ "class MockOptimizer:\n",
778
+ " \"\"\"Mock base optimizer for testing\"\"\"\n",
779
+ " def __init__(self, params, defaults, **kwargs):\n",
780
+ " self.param_groups = []\n",
781
+ " self.state = {}\n",
782
+ " self._collecting_metrics = False\n",
783
+ " self._record_update_metrics = False\n",
784
+ "\n",
785
+ " if isinstance(params, (list, tuple)) and len(params) > 0 and isinstance(params[0], dict):\n",
786
+ " for group in params:\n",
787
+ " param_group = {**defaults, **group}\n",
788
+ " self.param_groups.append(param_group)\n",
789
+ " else:\n",
790
+ " self.param_groups = [{'params': list(params), **defaults}]\n",
791
+ "\n",
792
+ " def _clean_param_name(self, name):\n",
793
+ " return name.replace(\"_fsdp_wrapped_module.\", \"\")\n",
794
+ "\n",
795
+ "def get_default_device():\n",
796
+ " return torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
797
+ "\n",
798
+ "def is_distributed():\n",
799
+ " return False\n",
800
+ "\n",
801
+ "# Insert your MuonW class here (copy from document 4)\n",
802
+ "# For testing purposes, inherit from MockOptimizer instead of Optimizer\n",
803
+ "\n",
804
+ "class TestMuonW(unittest.TestCase):\n",
805
+ " \"\"\"Test cases for MuonW optimizer\"\"\"\n",
806
+ "\n",
807
+ " def setUp(self):\n",
808
+ " \"\"\"Set up test fixtures\"\"\"\n",
809
+ " torch.manual_seed(42)\n",
810
+ " self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
811
+ "\n",
812
+ " def test_matrix_param_uses_muon(self):\n",
813
+ " \"\"\"Test that matrix parameters use Muon update\"\"\"\n",
814
+ " # Create a simple model with matrix parameter\n",
815
+ " model = nn.Linear(10, 5)\n",
816
+ " model.to(self.device)\n",
817
+ "\n",
818
+ " # Add parameter names\n",
819
+ " params = [{'params': model.parameters(),\n",
820
+ " 'param_names': ['weight', 'bias']}]\n",
821
+ "\n",
822
+ " optimizer = MuonW(params, lr=0.01)\n",
823
+ "\n",
824
+ " # Create dummy loss and backward\n",
825
+ " x = torch.randn(32, 10, device=self.device)\n",
826
+ " y = model(x)\n",
827
+ " loss = y.sum()\n",
828
+ " loss.backward()\n",
829
+ "\n",
830
+ " # Check initial state\n",
831
+ " weight_state_before = model.weight.data.clone()\n",
832
+ "\n",
833
+ " # Step\n",
834
+ " optimizer.step()\n",
835
+ "\n",
836
+ " # Verify weight changed (Muon was applied)\n",
837
+ " assert not torch.allclose(weight_state_before, model.weight.data)\n",
838
+ "\n",
839
+ " # Check that momentum buffer was created for weight\n",
840
+ " assert 'momentum_buffer' in optimizer.state[model.weight]\n",
841
+ "\n",
842
+ " print(\"✓ Matrix parameters use Muon update\")\n",
843
+ "\n",
844
+ " def test_scalar_param_uses_adamw(self):\n",
845
+ " \"\"\"Test that scalar parameters use AdamW update\"\"\"\n",
846
+ " class ModelWithScalar(nn.Module):\n",
847
+ " def __init__(self):\n",
848
+ " super().__init__()\n",
849
+ " self.weight = nn.Parameter(torch.randn(5, 10)) # Fixed: shape should be (out_features, in_features)\n",
850
+ " self.scalar = nn.Parameter(torch.randn(())) # scalar\n",
851
+ "\n",
852
+ " def forward(self, x):\n",
853
+ " return F.linear(x, self.weight) * self.scalar\n",
854
+ "\n",
855
+ " model = ModelWithScalar().to(self.device)\n",
856
+ "\n",
857
+ " params = [{'params': model.parameters(),\n",
858
+ " 'param_names': ['weight', 'scalar']}]\n",
859
+ "\n",
860
+ " optimizer = MuonW(params, lr=0.01)\n",
861
+ "\n",
862
+ " # Forward and backward\n",
863
+ " x = torch.randn(32, 10, device=self.device)\n",
864
+ " y = model(x)\n",
865
+ " loss = y.sum()\n",
866
+ " loss.backward()\n",
867
+ "\n",
868
+ " # Step\n",
869
+ " optimizer.step()\n",
870
+ "\n",
871
+ " # Check that scalar parameter has AdamW state\n",
872
+ " scalar_state = optimizer.state[model.scalar]\n",
873
+ " assert 'exp_avg' in scalar_state\n",
874
+ " assert 'exp_avg_sq' in scalar_state\n",
875
+ " assert 'step' in scalar_state\n",
876
+ "\n",
877
+ " print(\"✓ Scalar parameters use AdamW update\")\n",
878
+ "\n",
879
+ " def test_embedding_uses_adamw(self):\n",
880
+ " \"\"\"Test that embedding layers use AdamW by default\"\"\"\n",
881
+ " model = nn.Embedding(100, 16).to(self.device)\n",
882
+ "\n",
883
+ " params = [{'params': model.parameters(),\n",
884
+ " 'param_names': ['embedding.weight']}]\n",
885
+ "\n",
886
+ " optimizer = MuonW(params, lr=0.01)\n",
887
+ "\n",
888
+ " # Create dummy gradient\n",
889
+ " idx = torch.randint(0, 100, (32,), device=self.device)\n",
890
+ " y = model(idx)\n",
891
+ " loss = y.sum()\n",
892
+ " loss.backward()\n",
893
+ "\n",
894
+ " # Step\n",
895
+ " optimizer.step()\n",
896
+ "\n",
897
+ " # Check that embedding has AdamW state (not Muon)\n",
898
+ " embed_state = optimizer.state[model.weight]\n",
899
+ " assert 'exp_avg' in embed_state\n",
900
+ " assert 'exp_avg_sq' in embed_state\n",
901
+ "\n",
902
+ " print(\"✓ Embedding parameters use AdamW update\")\n",
903
+ "\n",
904
+ " def test_weight_decay(self):\n",
905
+ " \"\"\"Test that weight decay is applied correctly\"\"\"\n",
906
+ " model = nn.Linear(10, 5, bias=False).to(self.device)\n",
907
+ "\n",
908
+ " params = [{'params': model.parameters(),\n",
909
+ " 'param_names': ['weight']}]\n",
910
+ "\n",
911
+ " weight_decay = 0.1\n",
912
+ " optimizer = MuonW(params, lr=0.01, weight_decay=weight_decay)\n",
913
+ "\n",
914
+ " # Store initial weight\n",
915
+ " initial_weight = model.weight.data.clone()\n",
916
+ "\n",
917
+ " # Create zero gradient (to isolate weight decay effect)\n",
918
+ " model.weight.grad = torch.zeros_like(model.weight)\n",
919
+ "\n",
920
+ " # Step\n",
921
+ " optimizer.step()\n",
922
+ "\n",
923
+ " # Check weight decay was applied: new_weight = old_weight * (1 - lr * wd)\n",
924
+ " expected = initial_weight * (1 - 0.01 * weight_decay)\n",
925
+ " assert torch.allclose(model.weight.data, expected, rtol=1e-5)\n",
926
+ "\n",
927
+ " print(\"✓ Weight decay applied correctly\")\n",
928
+ "\n",
929
+ " def test_nesterov_momentum(self):\n",
930
+ " \"\"\"Test Nesterov momentum option\"\"\"\n",
931
+ " # Test with Nesterov=True\n",
932
+ " model1 = nn.Linear(10, 5, bias=False).to(self.device)\n",
933
+ " model2 = nn.Linear(10, 5, bias=False).to(self.device)\n",
934
+ "\n",
935
+ " # Same initialization\n",
936
+ " model2.weight.data.copy_(model1.weight.data)\n",
937
+ "\n",
938
+ " params1 = [{'params': model1.parameters(), 'param_names': ['weight']}]\n",
939
+ " params2 = [{'params': model2.parameters(), 'param_names': ['weight']}]\n",
940
+ "\n",
941
+ " opt1 = MuonW(params1, lr=0.01, nesterov=True)\n",
942
+ " opt2 = MuonW(params2, lr=0.01, nesterov=False)\n",
943
+ "\n",
944
+ " # Same gradients\n",
945
+ " grad = torch.randn_like(model1.weight)\n",
946
+ " model1.weight.grad = grad.clone()\n",
947
+ " model2.weight.grad = grad.clone()\n",
948
+ "\n",
949
+ " opt1.step()\n",
950
+ " opt2.step()\n",
951
+ "\n",
952
+ " # Updates should be different\n",
953
+ " assert not torch.allclose(model1.weight.data, model2.weight.data)\n",
954
+ "\n",
955
+ " print(\"✓ Nesterov momentum works differently from standard momentum\")\n",
956
+ "\n",
957
+ " def test_conv_filters(self):\n",
958
+ " \"\"\"Test that conv filters are handled correctly\"\"\"\n",
959
+ " model = nn.Conv2d(3, 16, kernel_size=3).to(self.device)\n",
960
+ "\n",
961
+ " params = [{'params': model.parameters(),\n",
962
+ " 'param_names': ['conv.weight', 'conv.bias']}]\n",
963
+ "\n",
964
+ " optimizer = MuonW(params, lr=0.01)\n",
965
+ "\n",
966
+ " # Forward and backward\n",
967
+ " x = torch.randn(4, 3, 32, 32, device=self.device)\n",
968
+ " y = model(x)\n",
969
+ " loss = y.sum()\n",
970
+ " loss.backward()\n",
971
+ "\n",
972
+ " initial_weight = model.weight.data.clone()\n",
973
+ "\n",
974
+ " # Step\n",
975
+ " optimizer.step()\n",
976
+ "\n",
977
+ " # Check weight was updated\n",
978
+ " assert not torch.allclose(initial_weight, model.weight.data)\n",
979
+ "\n",
980
+ " # Check state exists\n",
981
+ " assert 'momentum_buffer' in optimizer.state[model.weight]\n",
982
+ "\n",
983
+ " print(\"✓ Conv filters handled correctly\")\n",
984
+ "\n",
985
+ " def test_multiple_param_groups(self):\n",
986
+ " \"\"\"Test optimizer with multiple parameter groups\"\"\"\n",
987
+ " model = nn.Sequential(\n",
988
+ " nn.Linear(10, 20),\n",
989
+ " nn.ReLU(),\n",
990
+ " nn.Linear(20, 5)\n",
991
+ " ).to(self.device)\n",
992
+ "\n",
993
+ " # Different learning rates for different layers\n",
994
+ " params = [\n",
995
+ " {'params': model[0].parameters(), 'lr': 0.01, 'param_names': ['layer0.weight', 'layer0.bias']},\n",
996
+ " {'params': model[2].parameters(), 'lr': 0.001, 'param_names': ['layer2.weight', 'layer2.bias']}\n",
997
+ " ]\n",
998
+ "\n",
999
+ " optimizer = MuonW(params)\n",
1000
+ "\n",
1001
+ " # Forward and backward\n",
1002
+ " x = torch.randn(32, 10, device=self.device)\n",
1003
+ " y = model(x)\n",
1004
+ " loss = y.sum()\n",
1005
+ " loss.backward()\n",
1006
+ "\n",
1007
+ " # Store initial weights\n",
1008
+ " w0_init = model[0].weight.data.clone()\n",
1009
+ " w2_init = model[2].weight.data.clone()\n",
1010
+ "\n",
1011
+ " # Step\n",
1012
+ " optimizer.step()\n",
1013
+ "\n",
1014
+ " # Both should be updated\n",
1015
+ " assert not torch.allclose(w0_init, model[0].weight.data)\n",
1016
+ " assert not torch.allclose(w2_init, model[2].weight.data)\n",
1017
+ "\n",
1018
+ " print(\"✓ Multiple parameter groups work correctly\")\n",
1019
+ "\n",
1020
+ " def test_zero_grad_handling(self):\n",
1021
+ " \"\"\"Test that parameters with zero gradients are handled correctly\"\"\"\n",
1022
+ " model = nn.Linear(10, 5).to(self.device)\n",
1023
+ "\n",
1024
+ " params = [{'params': model.parameters(),\n",
1025
+ " 'param_names': ['weight', 'bias']}]\n",
1026
+ "\n",
1027
+ " optimizer = MuonW(params, lr=0.01)\n",
1028
+ "\n",
1029
+ " # Set zero gradient\n",
1030
+ " model.weight.grad = torch.zeros_like(model.weight)\n",
1031
+ " model.bias.grad = torch.zeros_like(model.bias)\n",
1032
+ "\n",
1033
+ " initial_weight = model.weight.data.clone()\n",
1034
+ "\n",
1035
+ " # Step should not crash\n",
1036
+ " optimizer.step()\n",
1037
+ "\n",
1038
+ " # With zero grad and no weight decay, parameters shouldn't change much\n",
1039
+ " # (only numerical errors from Newton-Schulz on zero matrix)\n",
1040
+ " assert torch.allclose(initial_weight, model.weight.data, atol=1e-6)\n",
1041
+ "\n",
1042
+ " print(\"✓ Zero gradients handled correctly\")\n",
1043
+ "\n",
1044
+ "def test_distributed_mock():\n",
1045
+ " \"\"\"Test distributed functionality using mocks\"\"\"\n",
1046
+ " print(\"\\nTesting distributed functionality with mocks...\")\n",
1047
+ "\n",
1048
+ " with patch('torch.distributed.is_initialized', return_value=True):\n",
1049
+ " with patch('torch.distributed.get_global_rank', return_value=0):\n",
1050
+ " with patch('torch.distributed.reduce') as mock_reduce:\n",
1051
+ " # This simulates distributed metric collection\n",
1052
+ " model = nn.Linear(10, 5)\n",
1053
+ " params = [{'params': model.parameters(),\n",
1054
+ " 'param_names': ['weight', 'bias']}]\n",
1055
+ "\n",
1056
+ " optimizer = MuonW(params, lr=0.01, record_update_metrics=True)\n",
1057
+ " optimizer._collecting_metrics = True\n",
1058
+ "\n",
1059
+ " # Create gradient\n",
1060
+ " model.weight.grad = torch.randn_like(model.weight)\n",
1061
+ " model.bias.grad = torch.randn_like(model.bias)\n",
1062
+ "\n",
1063
+ " # Step\n",
1064
+ " optimizer.step()\n",
1065
+ "\n",
1066
+ " # Check if metrics were collected\n",
1067
+ " assert optimizer._update_norms is not None\n",
1068
+ " assert optimizer._update_param_names is not None\n",
1069
+ "\n",
1070
+ " print(\"✓ Distributed mock test passed\")\n",
1071
+ "\n",
1072
+ "def run_convergence_test():\n",
1073
+ " \"\"\"Test that the optimizer actually optimizes a simple problem\"\"\"\n",
1074
+ " print(\"\\nRunning convergence test...\")\n",
1075
+ "\n",
1076
+ " torch.manual_seed(42)\n",
1077
+ " device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
1078
+ "\n",
1079
+ " # Simple regression problem\n",
1080
+ " X = torch.randn(100, 10, device=device)\n",
1081
+ " true_w = torch.randn(10, 1, device=device)\n",
1082
+ " y = X @ true_w + 0.1 * torch.randn(100, 1, device=device)\n",
1083
+ "\n",
1084
+ " model = nn.Linear(10, 1, bias=False).to(device)\n",
1085
+ " params = [{'params': model.parameters(), 'param_names': ['weight']}]\n",
1086
+ " optimizer = MuonW(params, lr=0.1) # Increased learning rate for better convergence\n",
1087
+ "\n",
1088
+ " losses = []\n",
1089
+ " for epoch in range(200): # More epochs for convergence\n",
1090
+ " # Forward\n",
1091
+ " pred = model(X)\n",
1092
+ " loss = F.mse_loss(pred, y)\n",
1093
+ " losses.append(loss.item())\n",
1094
+ "\n",
1095
+ " # Backward\n",
1096
+ " model.zero_grad() # Use model.zero_grad() instead\n",
1097
+ " loss.backward()\n",
1098
+ "\n",
1099
+ " # Update\n",
1100
+ " optimizer.step()\n",
1101
+ "\n",
1102
+ " # Check that loss decreased - relaxed threshold\n",
1103
+ " assert losses[-1] < losses[0] * 0.7, f\"Loss didn't decrease enough: {losses[0]:.4f} -> {losses[-1]:.4f}\"\n",
1104
+ "\n",
1105
+ " print(f\"✓ Convergence test passed: {losses[0]:.4f} -> {losses[-1]:.4f}\")\n",
1106
+ "\n",
1107
+ "if __name__ == \"__main__\":\n",
1108
+ " print(\"Running MuonW Optimizer Tests\")\n",
1109
+ " print(\"=\" * 50)\n",
1110
+ "\n",
1111
+ " # Run unit tests\n",
1112
+ " suite = unittest.TestLoader().loadTestsFromTestCase(TestMuonW)\n",
1113
+ " runner = unittest.TextTestRunner(verbosity=0)\n",
1114
+ " result = runner.run(suite)\n",
1115
+ "\n",
1116
+ " # Run additional tests\n",
1117
+ " test_distributed_mock()\n",
1118
+ " run_convergence_test()\n",
1119
+ "\n",
1120
+ " print(\"\\n\" + \"=\" * 50)\n",
1121
+ " if result.wasSuccessful():\n",
1122
+ " print(\"All tests passed! ✅\")\n",
1123
+ " else:\n",
1124
+ " print(f\"Some tests failed. Failures: {len(result.failures)}, Errors: {len(result.errors)}\")"
1125
+ ],
1126
+ "metadata": {
1127
+ "colab": {
1128
+ "base_uri": "https://localhost:8080/"
1129
+ },
1130
+ "id": "CrWv9OuRYfHl",
1131
+ "outputId": "4a2ce32e-d9b8-43f3-ec0d-9c4f10a770ec"
1132
+ },
1133
+ "execution_count": 13,
1134
+ "outputs": [
1135
+ {
1136
+ "output_type": "stream",
1137
+ "name": "stderr",
1138
+ "text": [
1139
+ "----------------------------------------------------------------------\n",
1140
+ "Ran 8 tests in 0.021s\n",
1141
+ "\n",
1142
+ "OK\n"
1143
+ ]
1144
+ },
1145
+ {
1146
+ "output_type": "stream",
1147
+ "name": "stdout",
1148
+ "text": [
1149
+ "Running MuonW Optimizer Tests\n",
1150
+ "==================================================\n",
1151
+ "✓ Conv filters handled correctly\n",
1152
+ "✓ Embedding parameters use AdamW update\n",
1153
+ "✓ Matrix parameters use Muon update\n",
1154
+ "✓ Multiple parameter groups work correctly\n",
1155
+ "✓ Nesterov momentum works differently from standard momentum\n",
1156
+ "✓ Scalar parameters use AdamW update\n",
1157
+ "✓ Weight decay applied correctly\n",
1158
+ "✓ Zero gradients handled correctly\n",
1159
+ "\n",
1160
+ "Testing distributed functionality with mocks...\n",
1161
+ "✓ Distributed mock test passed\n",
1162
+ "\n",
1163
+ "Running convergence test...\n",
1164
+ "✓ Convergence test passed: 20.7094 -> 0.0136\n",
1165
+ "\n",
1166
+ "==================================================\n",
1167
+ "All tests passed! ✅\n"
1168
+ ]
1169
+ }
1170
+ ]
1171
+ },
1172
+ {
1173
+ "cell_type": "code",
1174
+ "source": [],
1175
+ "metadata": {
1176
+ "id": "Xa9ABULwYfAi"
1177
+ },
1178
+ "execution_count": null,
1179
+ "outputs": []
1180
+ }
1181
+ ]
1182
+ }
README.md CHANGED
@@ -16,7 +16,7 @@ datasets:
16
 
17
  # Understanding the Muon Optimizer: Theory and Implementation
18
  ## 📘 Contents
19
-
20
  1. [Introduction to Muon](#introduction)
21
  2. [The Problem: Skewed Singular Values](#1-the-problem-skewed-singular-value-distributions)
22
  3. [Newton-Schulz Orthogonalization](#3-the-newton-schulz-iteration)
@@ -33,6 +33,18 @@ datasets:
33
  The included [Colab notebook](./Muon.ipynb) allows you to run all experiments and implement Muon from scratch.
34
 
35
 
36
 
37
  ## Introduction
38
 
 
16
 
17
  # Understanding the Muon Optimizer: Theory and Implementation
18
  ## 📘 Contents
19
+ 0. [Try It Yourself -- base and advanced implementations](#-try-it-yourself)
20
  1. [Introduction to Muon](#introduction)
21
  2. [The Problem: Skewed Singular Values](#1-the-problem-skewed-singular-value-distributions)
22
  3. [Newton-Schulz Orthogonalization](#3-the-newton-schulz-iteration)
 
33
  The included [Colab notebook](./Muon.ipynb) allows you to run all experiments and implement Muon from scratch.
34
 
35
 
36
+ ### 🚀 Advanced Implementation: Distributed Training with FSDP
37
+ For users looking to apply Muon in a large-scale, distributed training environment, the included [Colab notebook](./MuonForOLMo.ipynb) provides a more advanced, standalone implementation. This version is adapted from the code in my pending [Pull Request](https://github.com/allenai/OLMo/pull/882) to the Allen Institute for AI's OLMo repository.
38
+
39
+ This implementation includes several key features for more advanced practitioners:
40
+
41
+ - FSDP Compatibility: Designed from the ground up to run on multi-GPU systems using PyTorch's Fully Sharded Data Parallel.
42
+
43
+ - Hybrid Optimization (MuonW): Implements a robust "MuonW" approach, using Muon for matrix parameters while falling back to the well-tested AdamW optimizer for all other parameters (e.g., embeddings, biases, and other non-matrix tensors).
44
+ - Advanced Metric Tracking: Includes a `get_post_step_metrics` method for detailed, real-time monitoring of the optimizer's state, crucial for debugging and research at scale.
45
+
46
+ ➡️ Open the [Notebook](./MuonForOLMo.ipynb)
47
+
48
 
49
  ## Introduction
50