STOP EVERYTHING NOW - we might finally have a radical architecture improvement over Transformers!!! 🚨
A lone scientist just proposed Tiny Recursive Model (TRM), and it is literally the most impressive model that I've seen this year.
➡️ Tiny Recursive Model is 7M parameters
➡️ On ARC-AGI, it beats flagship models like Gemini-2.5-pro
Consider how wild this is: Gemini-2.5-pro must be over 10,000x bigger and had 1,000x as many authors 😂 (Alexia is the sole author of the paper)
What's this sorcery? In short: it's a very tiny Transformer, but it loops over itself at two different frequencies, updating two latent variables: one for the proposed answer and one for the reasoning.
@AlexiaJM started from the Hierarchical Reasoning Model paper, published a few months ago, which had already shown breakthrough improvements on ARC-AGI for its small size (27M)
The Hierarchical Reasoning Model introduced one main feature:
🔎 Deep supervision: in their model, one part (here, one layer) runs at high frequency, while another runs at lower frequency, only once every n steps.
They used a recurrent architecture where these layers repeat many times; but to make it work, they had to make many approximations, including not fully backpropagating the loss through all the recurrent steps.
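Here's a minimal sketch of that approximation, assuming hypothetical update functions `f_fast` and `f_slow` (the real HRM code is more involved): every step except the last runs without a gradient graph, so the loss only backpropagates through the final update.

```python
import torch

def one_step_grad_recurrence(f_fast, f_slow, x, z_fast, z_slow, n_steps=16, period=4):
    # HRM-style approximation (sketch, hypothetical names): run the
    # recurrence without tracking gradients, then replay only the final
    # step with gradients, so backprop never unrolls the full loop.
    with torch.no_grad():
        for step in range(n_steps - 1):
            z_fast = f_fast(x, z_fast, z_slow)      # runs every step
            if (step + 1) % period == 0:
                z_slow = f_slow(z_fast, z_slow)     # runs every `period` steps
    z_fast = f_fast(x, z_fast, z_slow)              # only this step keeps the graph
    z_slow = f_slow(z_fast, z_slow)
    return z_fast, z_slow
```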
Alexia studied what was useful and what wasn't, and cleaned up the architecture as follows:
Why use a recurrent architecture when you can just make it a loop? ➡️ She made the network recursive, looping over itself.
Why use 2 latent variables? ➡️ She provides a crystal-clear explanation: the one that changes frequently is the reasoning, and the one that changes at low frequency is the proposed answer. ➡️ She runs ablation studies to validate that 2 is indeed optimal.
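To make the two-frequency recursion concrete, here's a hedged PyTorch sketch; the names, shapes, and update rules are mine for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class TinyRecursiveSketch(nn.Module):
    # Illustrative only: one tiny network reused recursively, carrying
    # two latents -- z (reasoning, updated often) and y (answer, updated rarely).
    def __init__(self, dim, inner_steps=6, outer_steps=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.to_answer = nn.Linear(2 * dim, dim)
        self.inner_steps, self.outer_steps = inner_steps, outer_steps

    def forward(self, x, y, z):
        # x: input embedding, y: proposed-answer latent, z: reasoning latent
        for _ in range(self.outer_steps):
            for _ in range(self.inner_steps):
                # High-frequency update: refine the reasoning latent z
                z = z + self.net(torch.cat([x, y, z], dim=-1))
            # Low-frequency update: refresh the proposed answer y from z
            y = y + self.to_answer(torch.cat([y, z], dim=-1))
        return y, z

# Usage: all three tensors share the same hidden size.
x = torch.randn(4, 64)
y, z = torch.zeros_like(x), torch.zeros_like(x)
y, z = TinyRecursiveSketch(dim=64)(x, y, z)
```

The key design choice: one shared tiny network is reused at every step, so "depth" comes from recursion rather than from parameters.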
This new setup is a much more elegant way to process reasoning than generating huge chains of tokens as all flagship models currently do.
This might be the breakthrough we've been waiting for!
YOLO models are known to be super fast for real-time computer vision, but they have a downside: they rely on NMS (non-maximum suppression) post-processing, which adds latency and makes accuracy sensitive to its thresholds 🥲
Transformer-based models, on the other hand, are not as computationally efficient 🥲
Isn't there something in between? Enter RT-DETR!
The authors combined a CNN backbone and a multi-stage hybrid encoder (mixing convolutions and attention) with a transformer decoder. In the paper, the authors also claim that inference speed can be adjusted by changing the number of decoder layers, without retraining altogether. They find that the model beats the previous state-of-the-art in both speed and accuracy. 🤩
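If you want to try it, here's a minimal inference sketch with 🤗 Transformers, assuming the PekingU/rtdetr_r50vd checkpoint on the Hub (check the model card for the recommended usage):

```python
import torch
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor

# Assumed checkpoint name; other RT-DETR weights exist on the Hub.
processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")

image = Image.open("street.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map logits + boxes back to the image's coordinate space.
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```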
Findings so far are that removing one or two layers has a relatively moderate impact on quality. Perplexity (PPL) and KL divergence (KLD) suffer quite a lot, as expected given that pruning changes the logits distribution, but the drop in inference quality, as reflected by test scores, is less pronounced.
Another interesting side effect, at least with Qwen3-30B-A3B, is that pruning 3 or more layers makes the model forget English and reply in Chinese, though still with reasonable answers!
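For reference, an experiment like this can be sketched in a few lines with 🤗 Transformers; the repo id and the layer attribute path below are assumptions based on LLaMA/Qwen-style models, so adjust per architecture:

```python
import torch.nn as nn
from transformers import AutoModelForCausalLM

# Assumed repo id; LLaMA/Qwen-style models expose their decoder stack
# as `model.model.layers` (an assumption -- verify for your model).
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-30B-A3B")

k = 2  # how many layers to drop; *which* layers to drop is itself a design choice
layers = list(model.model.layers)
model.model.layers = nn.ModuleList(layers[: len(layers) - k])  # drop the last k
model.config.num_hidden_layers -= k  # keep the config consistent with the new depth
```

After pruning, you would re-measure PPL/KLD against the unpruned model and rerun the test suites to see the quality gap described above.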