arXiv:2507.22832

Tapping into the Black Box: Uncovering Aligned Representations in Pretrained Neural Networks

Published on Jul 30

Abstract

ReLU networks learn interpretable linear models whose decision boundaries can be extracted to reveal perceptually aligned features in ImageNet-pretrained architectures.

AI-generated summary

In this paper we argue that ReLU networks learn an implicit linear model that we can actually tap into. We describe that alleged model formally and show that we can approximately pull its decision boundary back to the input space with a certain simple modification to the backward pass. The resulting gradients (called excitation pullbacks) reveal high-resolution, input- and target-specific features with remarkable perceptual alignment on a number of popular ImageNet-pretrained deep architectures. This strongly suggests that neural networks do, in fact, rely on learned interpretable patterns that can be recovered after training. Our findings may therefore have profound implications for knowledge discovery and the development of dependable artificial systems.
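For intuition, below is a minimal PyTorch sketch of one way "a simple modification to the backward pass" could be realized: the hard 0/1 gradient mask of ReLU is replaced with a soft gate derived from each unit's pre-activation, and the input gradient computed through the modified network is taken as the pullback. The gating rule, the `temperature` parameter, and the ResNet-50/target-class choices here are illustrative assumptions, not the authors' exact procedure; excitation pullbacks as defined in the paper may use a different modification.

```python
import torch
import torch.nn.functional as F
from torchvision import models


class SoftGateReLU(torch.autograd.Function):
    """ReLU whose backward pass is gated by a soft function of the
    pre-activation instead of the hard indicator 1[x > 0].

    This is only an illustrative stand-in for the paper's 'simple
    modification to the backward pass'; the exact rule behind
    excitation pullbacks may differ."""

    @staticmethod
    def forward(ctx, x, temperature=1.0):
        ctx.save_for_backward(x)
        ctx.temperature = temperature
        return F.relu(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Soft gate: sigmoid of the pre-activation, so weakly excited
        # units still pass a small amount of gradient.
        gate = torch.sigmoid(x / ctx.temperature)
        return grad_output * gate, None


class SoftGateModule(torch.nn.Module):
    def __init__(self, temperature=1.0):
        super().__init__()
        self.temperature = temperature

    def forward(self, x):
        return SoftGateReLU.apply(x, self.temperature)


def replace_relu_with_softgate(module, temperature=1.0):
    """Recursively swap nn.ReLU modules for the soft-gated version."""
    for name, child in module.named_children():
        if isinstance(child, torch.nn.ReLU):
            setattr(module, name, SoftGateModule(temperature))
        else:
            replace_relu_with_softgate(child, temperature)


# Usage sketch: compute an input- and target-specific pullback
# for an ImageNet-pretrained model.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
replace_relu_with_softgate(model)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input
target_class = 282  # hypothetical target, e.g. "tiger cat" in ImageNet

logits = model(image)
logits[0, target_class].backward()
pullback = image.grad  # input-space gradient from the modified backward pass
```

In this sketch the forward pass (and hence the model's predictions) is unchanged; only the gradient routing differs, which is what lets the same pretrained weights yield sharper, target-specific input-space attributions.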

