Tapping into the Black Box: Uncovering Aligned Representations in Pretrained Neural Networks
Abstract
In this paper we argue that ReLU networks learn an implicit linear model that we can actually tap into. We describe this alleged model formally and show that its decision boundary can be approximately pulled back to the input space via a simple modification of the backward pass. The resulting gradients, called excitation pullbacks, reveal high-resolution, input- and target-specific features with remarkable perceptual alignment across a number of popular ImageNet-pretrained deep architectures. This strongly suggests that neural networks do, in fact, rely on learned interpretable patterns that can be recovered after training. Our findings may therefore have significant implications for knowledge discovery and the development of dependable artificial systems.
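The abstract only alludes to the backward-pass modification, so the sketch below is a minimal, hypothetical illustration of the general mechanism it describes: caching ReLU activations on the forward pass, altering how gradients flow through those units on the backward pass, and reading off the resulting input gradient for a chosen target class. The ResNet-50 backbone, the PyTorch hook setup, and in particular the excitation-based gating rule are assumptions made for illustration only; they are not the paper's actual definition of excitation pullbacks.

```python
import torch
import torchvision.models as models

# Any ImageNet-pretrained ReLU architecture; ResNet-50 is just an example choice.
model = models.resnet50(weights="IMAGENET1K_V1").eval()

activations = {}  # forward activations of each ReLU, keyed by module name

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

def modify_relu_grad(name):
    def hook(module, grad_input, grad_output):
        act = activations[name]
        # Assumed gating rule (illustrative only): rescale the backward signal by
        # how strongly each unit was excited, instead of the usual hard 0/1 gate.
        gate = act / (act.max() + 1e-12)
        return (grad_output[0] * gate,)
    return hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.ReLU):
        module.inplace = False  # full backward hooks require out-of-place ReLU
        module.register_forward_hook(save_activation(name))
        module.register_full_backward_hook(modify_relu_grad(name))

# Stand-in for a preprocessed 224x224 ImageNet image.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
target_class = 282  # any ImageNet class index

logits = model(x)
logits[0, target_class].backward()
pullback = x.grad  # input-space, target-specific attribution map
```

Only the overall structure (forward caching plus a custom backward gate, followed by an input gradient for a chosen class) is meant to carry over; the specific gating rule would come from the paper's formal construction rather than the ad-hoc normalization used above.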