Each record below lists, in order: abs (abstract page URL), Download PDF, OpenReview link, title, url, authors, detail_url, tags, abstract.
https://proceedings.mlr.press/v202/ren23a.html
https://proceedings.mlr.press/v202/ren23a/ren23a.pdf
https://openreview.net/forum?id=UkG4Nn634P
Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts
https://proceedings.mlr.press/v202/ren23a.html
Qihan Ren, Huiqi Deng, Yunuo Chen, Siyu Lou, Quanshi Zhang
https://proceedings.mlr.press/v202/ren23a.html
ICML 2023
In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts are less likely to be encoded by the BNN. It has been observed and studied that a relatively small set of interactive concepts usually emerges in the knowledge representation of a sufficiently trained neural network, and such concepts can faithfully explain the network output. Based on this, our study proves that compared to standard deep neural networks (DNNs), it is less likely for BNNs to encode complex concepts. Experiments verify our theoretical findings. Note that the tendency to encode less complex concepts does not necessarily imply weak representation power, considering that complex concepts exhibit low generalization power and high adversarial vulnerability. The code is available at https://github.com/sjtu-xai-lab/BNN-concepts.
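In the line of work this abstract builds on, the "interactive concepts" are commonly quantified by Harsanyi-style interaction effects over subsets of input variables, with the order $|S|$ of a subset serving as a concept's complexity. The sketch below assumes that definition (it is not spelled out in the abstract) and computes interaction strengths grouped by order for a toy model:

```python
from itertools import chain, combinations

import numpy as np

def subsets(variables):
    """All subsets of an iterable of variable indices."""
    s = list(variables)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def interaction_effects(model, x, baseline, variables):
    """Harsanyi-style interaction I(S) = sum_{T subseteq S} (-1)^{|S|-|T|} v(T),
    where v(T) is the model output with variables in T kept at x and the rest
    masked to a baseline. The order |S| measures the concept's complexity."""
    def v(T):
        masked = baseline.copy()
        masked[list(T)] = x[list(T)]
        return model(masked)

    v_cache = {T: v(T) for T in subsets(variables)}
    effects = {}
    for S in subsets(variables):
        effects[S] = sum(
            (-1) ** (len(S) - len(T)) * v_cache[T] for T in subsets(S)
        )
    return effects

# Toy usage: a small nonlinear "network" over 4 input variables.
rng = np.random.default_rng(0)
x, baseline = rng.normal(size=4), np.zeros(4)
model = lambda z: float(np.tanh(z[0] * z[1]) + z[2] - 0.5 * z[3] ** 2)
effects = interaction_effects(model, x, baseline, range(4))
# Group interaction strength by order (= concept complexity).
for order in range(5):
    strength = sum(abs(v) for S, v in effects.items() if len(S) == order)
    print(order, round(strength, 4))
```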
https://proceedings.mlr.press/v202/ren23b.html
https://proceedings.mlr.press/v202/ren23b/ren23b.pdf
https://openreview.net/forum?id=qZ1reqm5BM
Escaping saddle points in zeroth-order optimization: the power of two-point estimators
https://proceedings.mlr.press/v202/ren23b.html
Zhaolin Ren, Yujie Tang, Na Li
https://proceedings.mlr.press/v202/ren23b.html
ICML 2023
Two-point zeroth order methods are important in many applications of zeroth-order optimization arising in robotics, wind farms, power systems, online optimization, and adversarial robustness to black-box attacks in deep neural networks, where the problem can be high-dimensional and/or time-varying. Furthermore, such problems may be nonconvex and contain saddle points. While existing works have shown that zeroth-order methods utilizing $\Omega(d)$ function evaluations per iteration (with $d$ denoting the problem dimension) can escape saddle points efficiently, it remains an open question whether zeroth-order methods based on two-point estimators can escape saddle points. In this paper, we show that by adding an appropriate isotropic perturbation at each iteration, a zeroth-order algorithm based on $2m$ (for any $1 \leq m \leq d$) function evaluations per iteration can not only find $\epsilon$-second order stationary points polynomially fast, but do so using only $\tilde{O}(\frac{d}{m\epsilon^{2}\bar{\psi}})$ function evaluations, where $\bar{\psi} \geq \tilde{\Omega}(\sqrt{\epsilon})$ is a parameter capturing the extent to which the function of interest exhibits the strict saddle property.
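As a concrete illustration of the scheme described above, the sketch below runs zeroth-order descent with a two-point gradient estimator averaged over $m$ random directions ($2m$ evaluations per step) and an added isotropic perturbation. Step sizes, the perturbation radius, and the test function are illustrative choices, not the paper's algorithm or parameters:

```python
import numpy as np

def perturbed_two_point_zo(f, x0, steps=2000, m=1, mu=1e-3, lr=1e-2,
                           perturb_radius=1e-2, seed=0):
    """Zeroth-order descent with a 2m-evaluation gradient estimator per step
    and an isotropic perturbation added each iteration (generic sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(steps):
        # Two-point estimator averaged over m random unit directions (2m evals).
        g = np.zeros(d)
        for _ in range(m):
            u = rng.normal(size=d)
            u /= np.linalg.norm(u)
            g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        g *= d / m
        # Isotropic perturbation helps escape saddle points.
        xi = rng.normal(size=d)
        xi *= perturb_radius / np.linalg.norm(xi)
        x = x - lr * g + xi
    return x

# Example: f has a strict saddle at the origin and minimizers away from it.
f = lambda z: z[0] ** 2 - z[1] ** 2 + 0.25 * z[1] ** 4
print(perturbed_two_point_zo(f, x0=np.array([1e-6, 1e-6])))
```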
https://proceedings.mlr.press/v202/ren23c.html
https://proceedings.mlr.press/v202/ren23c/ren23c.pdf
https://openreview.net/forum?id=lO5sAPGWqv
Dimension-independent Certified Neural Network Watermarks via Mollifier Smoothing
https://proceedings.mlr.press/v202/ren23c.html
Jiaxiang Ren, Yang Zhou, Jiayin Jin, Lingjuan Lyu, Da Yan
https://proceedings.mlr.press/v202/ren23c.html
ICML 2023
Certified_Watermarks is the first method to provide a watermark certificate against $l_2$-norm watermark removal attacks, by leveraging randomized smoothing techniques developed for certified robustness to adversarial attacks. However, randomized smoothing suffers from the hardness of certified robustness in high-dimensional space against $l_p$-norm attacks for large $p$ ($p>2$). The certified watermark method based on randomized smoothing is no exception, i.e., it fails to provide meaningful certificates in high-dimensional space against $l_p$-norm watermark removal attacks ($p>2$). By leveraging mollifier theory, this paper proposes a mollifier smoothing method whose smooth classifier admits a dimension-independent certified radius, addressing the certified watermark problem against $l_p$-norm watermark removal attacks ($1 \leq p \leq \infty$) for high parameter dimension $d$. Based on partial differential equation (PDE) theory, an approximation of mollifier smoothing is developed to alleviate the inefficiency of sampling and prediction in randomized smoothing as well as of numerical integration in mollifier smoothing, while maintaining the certified watermark against $l_p$-norm watermark removal attacks ($1 \leq p \leq \infty$).
https://proceedings.mlr.press/v202/reneau23a.html
https://proceedings.mlr.press/v202/reneau23a/reneau23a.pdf
https://openreview.net/forum?id=LVARH5wXM9
Feature Programming for Multivariate Time Series Prediction
https://proceedings.mlr.press/v202/reneau23a.html
Alex Daniel Reneau, Jerry Yao-Chieh Hu, Ammar Gilani, Han Liu
https://proceedings.mlr.press/v202/reneau23a.html
ICML 2023
We introduce the concept of programmable feature engineering for time series modeling and propose a feature programming framework. This framework generates large amounts of predictive features for noisy multivariate time series while allowing users to incorporate their inductive bias with minimal effort. The key motivation of our framework is to view any multivariate time series as a cumulative sum of fine-grained trajectory increments, with each increment governed by a novel spin-gas dynamical Ising model. This fine-grained perspective motivates the development of a parsimonious set of operators that summarize multivariate time series in an abstract fashion, serving as the foundation for large-scale automated feature engineering. Numerically, we validate the efficacy of our method on several synthetic and real-world noisy time series datasets.
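The cumulative-increment view lends itself to a simple illustration: the sketch below differences a multivariate series into trajectory increments and summarizes them with a few rolling operators. The operator names and window are made up for illustration; the paper's operator set and its spin-gas Ising motivation go well beyond this:

```python
import numpy as np

def rolling(A, window, op):
    """Apply a summary operator over trailing windows of the rows of A."""
    out = np.full_like(A, np.nan, dtype=float)
    for t in range(window - 1, A.shape[0]):
        out[t] = op(A[t - window + 1:t + 1], axis=0)
    return out

def increment_features(X, window=8):
    """Treat a multivariate series X (T x d) as a cumulative sum of
    fine-grained increments and summarize those increments with a few
    simple operators (illustrative sketch only)."""
    dX = np.diff(X, axis=0)                          # trajectory increments
    return {
        "cum_increment": np.cumsum(dX, axis=0),      # recovers X up to X[0]
        "rolling_mean": rolling(dX, window, np.mean),
        "rolling_abs_sum": rolling(np.abs(dX), window, np.sum),
        "rolling_sign_balance": rolling(np.sign(dX), window, np.mean),
    }

# Toy usage on a noisy 3-dimensional series.
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(200, 3)), axis=0)
features = increment_features(X)
print({k: v.shape for k, v in features.items()})
```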
https://proceedings.mlr.press/v202/rezaei23a.html
https://proceedings.mlr.press/v202/rezaei23a/rezaei23a.pdf
https://openreview.net/forum?id=VpyyUM64AQ
Run-off Election: Improved Provable Defense against Data Poisoning Attacks
https://proceedings.mlr.press/v202/rezaei23a.html
Keivan Rezaei, Kiarash Banihashem, Atoosa Chegini, Soheil Feizi
https://proceedings.mlr.press/v202/rezaei23a.html
ICML 2023
In data poisoning attacks, an adversary tries to change a model’s prediction by adding, modifying, or removing samples in the training data. Recently, ensemble-based approaches for obtaining provable defenses against data poisoning have been proposed where predictions are done by taking a majority vote across multiple base models. In this work, we show that merely considering the majority vote in ensemble defenses is wasteful as it does not effectively utilize available information in the logits layers of the base models. Instead, we propose Run-Off Election (ROE), a novel aggregation method based on a two-round election across the base models: In the first round, models vote for their preferred class, and then a second, run-off election is held between the top two classes from the first round. Based on this approach, we propose DPA+ROE and FA+ROE defense methods based on the Deep Partition Aggregation (DPA) and Finite Aggregation (FA) approaches from prior work. We evaluate our methods on MNIST, CIFAR-10, and GTSRB and obtain improvements in certified accuracy of up to $3\%$-$4\%$. Also, by applying ROE on a boosted version of DPA, we gain improvements of around $12\%$-$27\%$ compared to the current state-of-the-art, establishing a new state-of-the-art in (pointwise) certified robustness against data poisoning. In many cases, our approach outperforms the state-of-the-art, even when using 32 times less computational power.
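A sketch of the two-round aggregation rule as described (base-model logits in, a single predicted class out); tie-breaking details and the DPA/FA base-model construction are not covered:

```python
import numpy as np

def run_off_election(logits):
    """Two-round aggregation across base models.

    logits: array of shape (n_models, n_classes).
    Round 1: each base model votes for its top class; keep the two classes
    with the most votes. Round 2: each model votes for whichever of the two
    finalists it assigns the higher logit. (Sketch of the aggregation rule
    described in the abstract; tie-breaking here is argmax order.)"""
    logits = np.asarray(logits)
    n_models, n_classes = logits.shape
    # Round 1: plurality vote.
    first_choices = logits.argmax(axis=1)
    votes = np.bincount(first_choices, minlength=n_classes)
    top2 = np.argsort(-votes)[:2]
    # Round 2: head-to-head between the two finalists.
    second_round = logits[:, top2].argmax(axis=1)
    runoff_votes = np.bincount(second_round, minlength=2)
    return int(top2[runoff_votes.argmax()])

# Example with 5 base models and 4 classes.
rng = np.random.default_rng(1)
print(run_off_election(rng.normal(size=(5, 4))))
```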
https://proceedings.mlr.press/v202/richards23a.html
https://proceedings.mlr.press/v202/richards23a/richards23a.pdf
https://openreview.net/forum?id=oke1MUPK2l
Learning Control-Oriented Dynamical Structure from Data
https://proceedings.mlr.press/v202/richards23a.html
Spencer M. Richards, Jean-Jacques Slotine, Navid Azizan, Marco Pavone
https://proceedings.mlr.press/v202/richards23a.html
ICML 2023
Even for known nonlinear dynamical systems, feedback controller synthesis is a difficult problem that often requires leveraging the particular structure of the dynamics to induce a stable closed-loop system. For general nonlinear models, including those fit to data, there may not be enough known structure to reliably synthesize a stabilizing feedback controller. In this paper, we discuss a state-dependent nonlinear tracking controller formulation based on a state-dependent Riccati equation for general nonlinear control-affine systems. This formulation depends on a nonlinear factorization of the system of vector fields defining the control-affine dynamics, which always exists under mild smoothness assumptions. We propose a method for learning this factorization from a finite set of data. On a variety of simulated nonlinear dynamical systems, we empirically demonstrate the efficacy of learned versions of this controller in stable trajectory tracking. Alongside our learning method, we evaluate recent ideas in jointly learning a controller and stabilizability certificate for known dynamical systems; we show experimentally that such methods can be frail in comparison.
https://proceedings.mlr.press/v202/richemond23a.html
https://proceedings.mlr.press/v202/richemond23a/richemond23a.pdf
https://openreview.net/forum?id=GuVJ0hoHOl
The Edge of Orthogonality: A Simple View of What Makes BYOL Tick
https://proceedings.mlr.press/v202/richemond23a.html
Pierre Harvey Richemond, Allison Tam, Yunhao Tang, Florian Strub, Bilal Piot, Felix Hill
https://proceedings.mlr.press/v202/richemond23a.html
ICML 2023
Self-predictive unsupervised learning methods such as BYOL or SimSiam have shown impressive results, and counter-intuitively, do not collapse to trivial representations. In this work, we aim at exploring the simplest possible mathematical arguments towards explaining the underlying mechanisms behind self-predictive unsupervised learning. We start with the observation that those methods crucially rely on the presence of a predictor network (and stop-gradient). With simple linear algebra, we show that when using a linear predictor, the optimal predictor is close to an orthogonal projection, and we propose a general framework based on orthonormalization that enables us to interpret and build intuition for why BYOL works. In addition, this framework demonstrates the crucial role of the exponential moving average and stop-gradient operator in BYOL as an efficient orthonormalization mechanism. We use these insights to propose four new closed-form predictor variants of BYOL to support our analysis. Our closed-form predictors outperform BYOL with a standard trainable linear predictor at 100 and 300 epochs (top-1 linear accuracy on ImageNet).
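To make the "optimal linear predictor is close to an orthogonal projection" observation concrete, here is a hedged sketch that computes a least-squares closed-form predictor from online and target embeddings and measures how far it is from a symmetric, idempotent (i.e., orthogonal-projection) matrix. This is only one natural closed-form choice; the paper's four predictor variants are not reproduced:

```python
import numpy as np

def least_squares_predictor(Z_online, Z_target):
    """Closed-form linear predictor W minimizing ||Z_online @ W - Z_target||_F
    (one natural choice; not necessarily any of the paper's four variants)."""
    return np.linalg.pinv(Z_online) @ Z_target

def projection_gap(W):
    """Distance of W from being an orthogonal projection (symmetric and
    idempotent) -- a rough diagnostic inspired by the abstract's claim, not
    the paper's own measurement."""
    return float(np.linalg.norm(W @ W - W) + np.linalg.norm(W - W.T))

rng = np.random.default_rng(0)
Z_online = rng.normal(size=(512, 64))                   # online-network embeddings
Z_target = Z_online @ rng.normal(size=(64, 64)) * 0.1   # stand-in target embeddings
W = least_squares_predictor(Z_online, Z_target)
print(W.shape, projection_gap(W))
```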
https://proceedings.mlr.press/v202/rio23a.html
https://proceedings.mlr.press/v202/rio23a/rio23a.pdf
https://openreview.net/forum?id=T27zdeulEK
Multi-Agent Best Arm Identification with Private Communications
https://proceedings.mlr.press/v202/rio23a.html
Alexandre Rio, Merwan Barlier, Igor Colin, Marta Soare
https://proceedings.mlr.press/v202/rio23a.html
ICML 2023
We address multi-agent best arm identification with privacy guarantees. In this setting, agents collaborate by communicating to find the optimal arm. To avoid leaking sensitive data through messages, we consider two notions of privacy that withhold different kinds of information: differential privacy and $(\epsilon, \eta)$-privacy. For each privacy definition, we propose an algorithm based on a two-level successive elimination scheme. We provide theoretical guarantees for the privacy level, accuracy and sample complexity of our algorithms. Experiments on various settings support our theoretical findings.
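A simplified, hedged sketch of the flavor of algorithm described: successive elimination in which each communicated empirical mean is perturbed with Laplace noise (the basic differential-privacy mechanism for bounded rewards). The paper's two-level scheme, its $(\epsilon, \eta)$-privacy variant, and its exact confidence bounds are not reproduced:

```python
import numpy as np

def private_successive_elimination(means, eps=1.0, rounds=40,
                                   pulls_per_round=500, seed=0):
    """Each round, every surviving arm is pulled; the communicated empirical
    mean of Bernoulli rewards is perturbed with Laplace noise of scale
    1/(eps * pulls_per_round) (sensitivity of a bounded mean), and arms whose
    upper confidence bound falls below the best lower bound are eliminated.
    Confidence radii ignore the added noise; this is a sketch, not the paper's
    two-level algorithm."""
    rng = np.random.default_rng(seed)
    k = len(means)
    active = list(range(k))
    sums, n = np.zeros(k), np.zeros(k)
    for r in range(1, rounds + 1):
        for arm in active:
            rewards = rng.binomial(1, means[arm], size=pulls_per_round)
            noisy_mean = rewards.mean() + rng.laplace(scale=1.0 / (eps * pulls_per_round))
            sums[arm] += noisy_mean * pulls_per_round
            n[arm] += pulls_per_round
        est = np.divide(sums, np.maximum(n, 1))
        radius = np.sqrt(np.log(4 * k * r ** 2) / (2 * np.maximum(n, 1)))
        best_lcb = max(est[a] - radius[a] for a in active)
        active = [a for a in active if est[a] + radius[a] >= best_lcb]
        if len(active) == 1:
            break
    return active

print(private_successive_elimination(means=[0.2, 0.4, 0.5, 0.8]))
```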
https://proceedings.mlr.press/v202/rittler23a.html
https://proceedings.mlr.press/v202/rittler23a/rittler23a.pdf
https://openreview.net/forum?id=UOyczqYEU7
A Two-Stage Active Learning Algorithm for k-Nearest Neighbors
https://proceedings.mlr.press/v202/rittler23a.html
Nicholas Rittler, Kamalika Chaudhuri
https://proceedings.mlr.press/v202/rittler23a.html
ICML 2023
$k$-nearest neighbor classification is a popular non-parametric method because of desirable properties like automatic adaption to distributional scale changes. Unfortunately, it has thus far proved difficult to design active learning strategies for the training of local voting-based classifiers that naturally retain these desirable properties, and hence active learning strategies for $k$-nearest neighbor classification have been conspicuously missing from the literature. In this work, we introduce a simple and intuitive active learning algorithm for the training of $k$-nearest neighbor classifiers, the first in the literature which retains the concept of the $k$-nearest neighbor vote at prediction time. We provide consistency guarantees for a modified $k$-nearest neighbors classifier trained on samples acquired via our scheme, and show that when the conditional probability function $\mathbb{P}(Y=y|X=x)$ is sufficiently smooth and the Tsybakov noise condition holds, our actively trained classifiers converge to the Bayes optimal classifier at a faster asymptotic rate than passively trained $k$-nearest neighbor classifiers.
https://proceedings.mlr.press/v202/ro23a.html
https://proceedings.mlr.press/v202/ro23a/ro23a.pdf
https://openreview.net/forum?id=1pMC4ScIXn
Lowering the Pre-training Tax for Gradient-based Subset Training: A Lightweight Distributed Pre-Training Toolkit
https://proceedings.mlr.press/v202/ro23a.html
Yeonju Ro, Zhangyang Wang, Vijay Chidambaram, Aditya Akella
https://proceedings.mlr.press/v202/ro23a.html
ICML 2023
Training data and model sizes are increasing exponentially. One way to reduce training time and resources is to train with a carefully selected subset of the full dataset. Prior work uses the gradient signals obtained during a warm-up or “pre-training” phase over the full dataset, for determining the core subset; if the pre-training phase is too small, the gradients obtained are chaotic and unreliable. As a result, the pre-training phase itself incurs significant time/resource overhead, and prior work has not gone beyond hyperparameter search to reduce pre-training time. Our work explicitly aims to reduce this $\textbf{pre-training tax}$ in gradient-based subset training. We develop a principled, scalable approach for pre-training in a distributed setup. Our approach is $\textit{lightweight}$ and $\textit{minimizes communication}$ between distributed worker nodes. It is the first to utilize the concept of model-soup based distributed training $\textit{at initialization}$. The key idea is to minimally train an ensemble of models on small, disjoint subsets of the data; we further employ data-driven sparsity and data augmentation for local worker training to boost ensemble diversity. The centralized model, obtained at the end of pre-training by merging the per-worker models, is found to offer stabilized gradient signals to select subsets, on which the main model is further trained. We have validated the effectiveness of our method through extensive experiments on CIFAR-10/100, and ImageNet, using ResNet and WideResNet models. For example, our approach is shown to achieve $\textbf{15.4$\times$}$ pre-training speedup and $\textbf{2.8$\times$}$ end-to-end speedup on CIFAR10 and ResNet18 without loss of accuracy. The code is at https://github.com/moonbucks/LiPT.git.
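The merging step at the end of pre-training can be illustrated with a uniform "model soup" over per-worker parameters; the sketch below assumes all workers start from a shared initialization (so that weight averaging is meaningful) and omits the sparsity and augmentation used to boost ensemble diversity:

```python
import numpy as np

def merge_worker_models(worker_params):
    """Uniform "model soup": average each parameter tensor across workers.
    Assumes the workers were minimally trained from a shared initialization on
    small disjoint data subsets, so that weight averaging is meaningful."""
    keys = worker_params[0].keys()
    return {k: np.mean([p[k] for p in worker_params], axis=0) for k in keys}

# Toy usage: three workers that drifted slightly from a common initialization.
rng = np.random.default_rng(0)
base = {"w1": rng.normal(size=(32, 16)), "b1": np.zeros(16),
        "w2": rng.normal(size=(16, 10)), "b2": np.zeros(10)}
workers = [{k: v + 0.01 * np.random.default_rng(s).normal(size=v.shape)
            for k, v in base.items()} for s in range(3)]
soup = merge_worker_models(workers)
print({k: v.shape for k, v in soup.items()})
```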
https://proceedings.mlr.press/v202/rodri-guez-galvez23a.html
https://proceedings.mlr.press/v202/rodri-guez-galvez23a/rodri-guez-galvez23a.pdf
https://openreview.net/forum?id=YJ3ytyemn1
The Role of Entropy and Reconstruction in Multi-View Self-Supervised Learning
https://proceedings.mlr.press/v202/rodri-guez-galvez23a.html
Borja Rodrı́guez Gálvez, Arno Blaas, Pau Rodriguez, Adam Golinski, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella
https://proceedings.mlr.press/v202/rodri-guez-galvez23a.html
ICML 2023
The mechanisms behind the success of multi-view self-supervised learning (MVSSL) are not yet fully understood. Contrastive MVSSL methods have been studied through the lens of InfoNCE, a lower bound of the Mutual Information (MI). However, the relation between other MVSSL methods and MI remains unclear. We consider a different lower bound on the MI consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens. Through this ER bound, we show that clustering-based methods such as DeepCluster and SwAV maximize the MI. We also re-interpret the mechanisms of distillation-based approaches such as BYOL and DINO, showing that they explicitly maximize the reconstruction term and implicitly encourage a stable entropy, and we confirm this empirically. We show that replacing the objectives of common MVSSL methods with this ER bound achieves competitive performance, while making them stable when training with smaller batch sizes or smaller exponential moving average (EMA) coefficients.
https://proceedings.mlr.press/v202/rodriguez-sanchez23a.html
https://proceedings.mlr.press/v202/rodriguez-sanchez23a/rodriguez-sanchez23a.pdf
https://openreview.net/forum?id=dA6biC3XgO
RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents
https://proceedings.mlr.press/v202/rodriguez-sanchez23a.html
Rafael Rodriguez-Sanchez, Benjamin Adin Spiegel, Jennifer Wang, Roma Patel, Stefanie Tellex, George Konidaris
https://proceedings.mlr.press/v202/rodriguez-sanchez23a.html
ICML 2023
We introduce RLang, a domain-specific language (DSL) for communicating domain knowledge to an RL agent. Unlike existing RL DSLs that ground to $\textit{single}$ elements of a decision-making formalism (e.g., the reward function or policy), RLang can specify information about every element of a Markov decision process. We define precise syntax and grounding semantics for RLang, and provide a parser that grounds RLang programs to an algorithm-agnostic $\textit{partial}$ world model and policy that can be exploited by an RL agent. We provide a series of example RLang programs demonstrating how different RL methods can exploit the resulting knowledge, encompassing model-free and model-based tabular algorithms, policy gradient and value-based methods, hierarchical approaches, and deep methods.
https://proceedings.mlr.press/v202/roh23a.html
https://proceedings.mlr.press/v202/roh23a/roh23a.pdf
https://openreview.net/forum?id=PvaPDkAbaL
Improving Fair Training under Correlation Shifts
https://proceedings.mlr.press/v202/roh23a.html
Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
https://proceedings.mlr.press/v202/roh23a.html
ICML 2023
Model fairness is an essential element for Trustworthy AI. While many techniques for model fairness have been proposed, most of them assume that the training and deployment data distributions are identical, which is often not true in practice. In particular, when the bias between labels and sensitive groups changes, the fairness of the trained model is directly influenced and can worsen. We make two contributions for solving this problem. First, we analytically show that existing in-processing fair algorithms have fundamental limits in accuracy and group fairness. We utilize the notion of correlation shifts between labels and groups, which can explicitly capture the change of the above bias. Second, we propose a novel pre-processing step that samples the input data to reduce correlation shifts and thus enables the in-processing approaches to overcome their limitations. We formulate an optimization problem for adjusting the data ratio among labels and sensitive groups to reflect the shifted correlation. A key benefit of our approach lies in decoupling the roles of pre- and in-processing approaches: correlation adjustment via pre-processing and unfairness mitigation on the processed data via in-processing. Experiments show that our framework effectively improves existing in-processing fair algorithms w.r.t. accuracy and fairness, both on synthetic and real datasets.
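As a rough illustration of the pre-processing idea, the sketch below resamples the data so that the empirical joint distribution of (label, sensitive group) matches a user-chosen target. The paper instead solves an explicit optimization for the adjusted ratios; the target here is supplied by hand:

```python
import numpy as np

def resample_to_target_joint(y, g, target_joint, seed=0):
    """Resample (with replacement) so that the empirical joint distribution of
    (label y, sensitive group g) matches target_joint, e.g. one reflecting the
    deployment-time correlation. High-level illustration of adjusting the data
    ratio among labels and groups; not the paper's optimization."""
    rng = np.random.default_rng(seed)
    y, g = np.asarray(y), np.asarray(g)
    n = len(y)
    chosen = []
    for (label, group), prob in target_joint.items():
        pool = np.flatnonzero((y == label) & (g == group))
        m = int(round(prob * n))
        if len(pool) and m:
            chosen.append(rng.choice(pool, size=m, replace=True))
    return np.concatenate(chosen)

# Toy usage: enforce independence between a binary label and binary group.
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.5, size=1000)
g = (rng.random(1000) < 0.2 + 0.6 * y).astype(int)   # group correlated with y
target = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
idx = resample_to_target_joint(y, g, target)
print(round(float(np.corrcoef(y[idx], g[idx])[0, 1]), 3))  # near zero
```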
https://proceedings.mlr.press/v202/rowland23a.html
https://proceedings.mlr.press/v202/rowland23a/rowland23a.pdf
https://openreview.net/forum?id=6EVUnWGBMU
The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation
https://proceedings.mlr.press/v202/rowland23a.html
Mark Rowland, Yunhao Tang, Clare Lyle, Remi Munos, Marc G Bellemare, Will Dabney
https://proceedings.mlr.press/v202/rowland23a.html
ICML 2023
We study the problem of temporal-difference-based policy evaluation in reinforcement learning. In particular, we analyse the use of a distributional reinforcement learning algorithm, quantile temporal-difference learning (QTD), for this task. We reach the surprising conclusion that even if a practitioner has no interest in the return distribution beyond the mean, QTD (which learns predictions about the full distribution of returns) may offer performance superior to approaches such as classical TD learning, which predict only the mean return, even in the tabular setting.
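A hedged sketch of tabular QTD for policy evaluation: each state keeps $m$ quantile estimates at levels $\tau_i = (2i-1)/(2m)$, updated with the standard quantile-TD rule, and the value estimate is the average of the learned quantiles. Hyperparameters and the toy chain are illustrative:

```python
import numpy as np

def qtd_policy_evaluation(transitions, n_states, m=32, gamma=0.99,
                          alpha=0.05, n_sweeps=20):
    """Tabular quantile temporal-difference (QTD) learning sketch.

    theta[x, i] estimates the tau_i-quantile of the return from state x. The
    update averages over the m bootstrap target quantiles; the mean-return
    estimate is just the average of the learned quantiles."""
    taus = (2 * np.arange(1, m + 1) - 1) / (2 * m)
    theta = np.zeros((n_states, m))
    for _ in range(n_sweeps):
        for (x, r, x_next, done) in transitions:
            target = r if done else r + gamma * theta[x_next]   # m targets
            # Indicator: is each target below the current quantile estimate?
            below = np.atleast_1d(target)[None, :] < theta[x][:, None]
            theta[x] += alpha * (taus - below.mean(axis=1))
    return theta.mean(axis=1)   # per-state value estimates

# Toy usage: logged steps from a 2-state chain with noisy rewards.
rng = np.random.default_rng(0)
transitions = []
for _ in range(200):
    x = 0
    for t in range(10):
        r = rng.normal(loc=1.0 if x == 0 else -1.0)
        x_next = int(rng.integers(0, 2))
        transitions.append((x, r, x_next, t == 9))
        x = x_next
print(qtd_policy_evaluation(transitions, n_states=2))
```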
https://proceedings.mlr.press/v202/ruan23a.html
https://proceedings.mlr.press/v202/ruan23a/ruan23a.pdf
https://openreview.net/forum?id=qw1UFuvGNR
Robust Satisficing MDPs
https://proceedings.mlr.press/v202/ruan23a.html
Haolin Ruan, Siyu Zhou, Zhi Chen, Chin Pang Ho
https://proceedings.mlr.press/v202/ruan23a.html
ICML 2023
Despite being a fundamental building block for reinforcement learning, Markov decision processes (MDPs) often suffer from ambiguity in model parameters. Robust MDPs are proposed to overcome this challenge by optimizing the worst-case performance under ambiguity. While robust MDPs can provide reliable policies with limited data, their worst-case performances are often overly conservative, and so they do not offer practical insights into the actual performance of these reliable policies. This paper proposes robust satisficing MDPs (RSMDPs), where the expected returns of feasible policies are softly-constrained to achieve a user-specified target under ambiguity. We derive a tractable reformulation for RSMDPs and develop a first-order method for solving large instances. Experimental results demonstrate that RSMDPs can prescribe policies to achieve their targets, which are much higher than the optimal worst-case returns computed by robust MDPs. Moreover, the average and percentile performances of our model are competitive among other models. We also demonstrate the scalability of the proposed algorithm compared with a state-of-the-art commercial solver.
https://proceedings.mlr.press/v202/rucker23a.html
https://proceedings.mlr.press/v202/rucker23a/rucker23a.pdf
https://openreview.net/forum?id=Ozx3OCAfDf
Infinite Action Contextual Bandits with Reusable Data Exhaust
https://proceedings.mlr.press/v202/rucker23a.html
Mark Rucker, Yinglun Zhu, Paul Mineiro
https://proceedings.mlr.press/v202/rucker23a.html
ICML 2023
For infinite action contextual bandits, smoothed regret and reduction to regression results in state-of-the-art online performance with computational cost independent of the action set: unfortunately, the resulting data exhaust does not have well-defined importance-weights. This frustrates the execution of downstream data science processes such as offline model selection. In this paper we describe an online algorithm with an equivalent smoothed regret guarantee, but which generates well-defined importance weights: in exchange, the online computational cost increases, but only to order smoothness (i.e., still independent of the action set). This removes a key obstacle to adoption of smoothed regret in production scenarios.
https://proceedings.mlr.press/v202/rudner23a.html
https://proceedings.mlr.press/v202/rudner23a/rudner23a.pdf
https://openreview.net/forum?id=4zGDjvg0vA
Function-Space Regularization in Neural Networks: A Probabilistic Perspective
https://proceedings.mlr.press/v202/rudner23a.html
Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, Andrew Gordon Wilson
https://proceedings.mlr.press/v202/rudner23a.html
ICML 2023
Parameter-space regularization in neural network optimization is a fundamental tool for improving generalization. However, standard parameter-space regularization methods make it challenging to encode explicit preferences about desired predictive functions into neural network training. In this work, we approach regularization in neural networks from a probabilistic perspective and show that by viewing parameter-space regularization as specifying an empirical prior distribution over the model parameters, we can derive a probabilistically well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training. This method—which we refer to as function-space empirical Bayes (FS-EB)—includes both parameter- and function-space regularization, is mathematically simple, easy to implement, and incurs only minimal computational overhead compared to standard regularization techniques. We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection, highly-calibrated predictive uncertainty estimates, successful task adaptation from pre-trained models, and improved generalization under covariate shift.
https://proceedings.mlr.press/v202/rugamer23a.html
https://proceedings.mlr.press/v202/rugamer23a/rugamer23a.pdf
https://openreview.net/forum?id=pkzQcry2G8
A New PHO-rmula for Improved Performance of Semi-Structured Networks
https://proceedings.mlr.press/v202/rugamer23a.html
David Rügamer
https://proceedings.mlr.press/v202/rugamer23a.html
ICML 2023
Recent advances to combine structured regression models and deep neural networks for better interpretability, more expressiveness, and statistically valid uncertainty quantification demonstrate the versatility of semi-structured neural networks (SSNs). We show that techniques to properly identify the contributions of the different model components in SSNs, however, lead to suboptimal network estimation, slower convergence, and degenerated or erroneous predictions. In order to solve these problems while preserving favorable model properties, we propose a non-invasive post-hoc orthogonalization (PHO) that guarantees identifiability of model components and provides better estimation and prediction quality. Our theoretical findings are supported by numerical experiments, a benchmark comparison as well as a real-world application to COVID-19 infections.
https://proceedings.mlr.press/v202/ruhe23a.html
https://proceedings.mlr.press/v202/ruhe23a/ruhe23a.pdf
https://openreview.net/forum?id=DNAJdkHPQ5
Geometric Clifford Algebra Networks
https://proceedings.mlr.press/v202/ruhe23a.html
David Ruhe, Jayesh K Gupta, Steven De Keninck, Max Welling, Johannes Brandstetter
https://proceedings.mlr.press/v202/ruhe23a.html
ICML 2023
We propose Geometric Clifford Algebra Networks (GCANs) for modeling dynamical systems. GCANs are based on symmetry group transformations using geometric (Clifford) algebras. We first review the quintessence of modern (plane-based) geometric algebra, which builds on isometries encoded as elements of the $\mathrm{Pin}(p,q,r)$ group. We then propose the concept of group action layers, which linearly combine object transformations using pre-specified group actions. Together with a new activation and normalization scheme, these layers serve as adjustable geometric templates that can be refined via gradient descent. Theoretical advantages are strongly reflected in the modeling of three-dimensional rigid body transformations as well as large-scale fluid dynamics simulations, showing significantly improved performance over traditional methods.
https://proceedings.mlr.press/v202/runje23a.html
https://proceedings.mlr.press/v202/runje23a/runje23a.pdf
https://openreview.net/forum?id=wRgJwQKLmC
Constrained Monotonic Neural Networks
https://proceedings.mlr.press/v202/runje23a.html
Davor Runje, Sharath M Shankaranarayana
https://proceedings.mlr.press/v202/runje23a.html
ICML 2023
Wider adoption of neural networks in many critical domains such as finance and healthcare is being hindered by the need to explain their predictions and to impose additional constraints on them. Monotonicity constraint is one of the most requested properties in real-world scenarios and is the focus of this paper. One of the oldest ways to construct a monotonic fully connected neural network is to constrain signs on its weights. Unfortunately, this construction does not work with popular non-saturated activation functions as it can only approximate convex functions. We show this shortcoming can be fixed by constructing two additional activation functions from a typical unsaturated monotonic activation function and employing each of them on a part of the neurons. Our experiments show this approach of building monotonic neural networks has better accuracy when compared to other state-of-the-art methods, while being the simplest one in the sense of having the least number of parameters, and not requiring any modifications to the learning procedure or post-learning steps. Finally, we prove it can approximate any continuous monotone function on a compact subset of $\mathbb{R}^n$.
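One way to see the flavor of the construction: with non-negative weights, any increasing activation keeps a fully connected network monotone, and mixing an activation with its point reflection lets the network express both convex and concave pieces. The sketch below illustrates only this reflection idea and is not necessarily the paper's exact construction with its two additional activation functions:

```python
import numpy as np

def monotone_mlp(x, layers, rho=np.tanh):
    """Minimal monotone fully connected network sketch: weights are made
    non-negative via abs(), and hidden units use either the base monotone
    activation rho or its point reflection -rho(-t). Both are increasing, so
    the whole network is non-decreasing in every input coordinate."""
    h = np.atleast_2d(x)
    for i, (W, b) in enumerate(layers):
        z = h @ np.abs(W) + b
        if i < len(layers) - 1:
            half = z.shape[1] // 2
            z = np.concatenate([rho(z[:, :half]), -rho(-z[:, half:])], axis=1)
        h = z
    return h

# Toy usage: random parameters, check monotonicity along one coordinate.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 16)), rng.normal(size=16)),
          (rng.normal(size=(16, 16)), rng.normal(size=16)),
          (rng.normal(size=(16, 1)), rng.normal(size=1))]
grid = np.linspace(-3, 3, 50)
inputs = np.stack([grid, np.zeros(50), np.zeros(50)], axis=1)
outputs = monotone_mlp(inputs, layers).ravel()
print(bool(np.all(np.diff(outputs) >= -1e-9)))   # True: non-decreasing
```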
https://proceedings.mlr.press/v202/rust23a.html
https://proceedings.mlr.press/v202/rust23a/rust23a.pdf
https://openreview.net/forum?id=SP6w4sVCyp
Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models
https://proceedings.mlr.press/v202/rust23a.html
Phillip Rust, Anders Søgaard
https://proceedings.mlr.press/v202/rust23a.html
ICML 2023
Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.
https://proceedings.mlr.press/v202/rustamov23a.html
https://proceedings.mlr.press/v202/rustamov23a/rustamov23a.pdf
https://openreview.net/forum?id=tIJMebbcRF
Intrinsic Sliced Wasserstein Distances for Comparing Collections of Probability Distributions on Manifolds and Graphs
https://proceedings.mlr.press/v202/rustamov23a.html
Raif M. Rustamov, Subhabrata Majumdar
https://proceedings.mlr.press/v202/rustamov23a.html
ICML 2023
Collections of probability distributions arise in a variety of applications ranging from user activity pattern analysis to brain connectomics. In practice these distributions can be defined over diverse domain types including finite intervals, circles, cylinders, spheres, other manifolds, and graphs. This paper introduces an approach for detecting differences between two collections of distributions over such general domains. To this end, we propose the intrinsic slicing construction that yields a novel class of Wasserstein distances on manifolds and graphs. These distances are Hilbert embeddable, allowing us to reduce the distribution collection comparison problem to a more familiar mean testing problem in a Hilbert space. We provide two testing procedures, one based on resampling and another based on combining p-values from coordinate-wise tests. Our experiments in various synthetic and real data settings show that the resulting tests are powerful and the p-values are well-calibrated.
https://proceedings.mlr.press/v202/ryabinin23a.html
https://proceedings.mlr.press/v202/ryabinin23a/ryabinin23a.pdf
https://openreview.net/forum?id=zZ5vSCvAxT
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
https://proceedings.mlr.press/v202/ryabinin23a.html
Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov
https://proceedings.mlr.press/v202/ryabinin23a.html
ICML 2023
Many deep learning applications benefit from using large models with billions of parameters. Training these models is notoriously expensive due to the need for specialized HPC clusters. In this work, we consider alternative setups for training large models: using cheap “preemptible” instances or pooling existing resources from multiple regions. We analyze the performance of existing model-parallel algorithms in these conditions and find configurations where training larger models becomes less communication-intensive. Based on these findings, we propose SWARM Parallelism (Stochastically Wired Adaptively Rebalanced Model Parallelism), a model-parallel training algorithm designed for poorly connected, heterogeneous and unreliable devices. SWARM creates temporary randomized pipelines between nodes that are rebalanced in case of failure. We empirically validate our findings and compare SWARM Parallelism with existing large-scale training approaches. Finally, we combine our insights with compression strategies to train a large Transformer language model with 1B shared parameters ($\approx$13B before sharing) on preemptible T4 GPUs with less than 200 Mb/s network.
https://proceedings.mlr.press/v202/ryali23a.html
https://proceedings.mlr.press/v202/ryali23a/ryali23a.pdf
https://openreview.net/forum?id=XQuXfVR5Wx
Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
https://proceedings.mlr.press/v202/ryali23a.html
Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer
https://proceedings.mlr.press/v202/ryali23a.html
ICML 2023
Modern hierarchical vision transformers have added several vision-specific components in the pursuit of supervised classification performance. While these components lead to effective accuracies and attractive FLOP counts, the added complexity actually makes these transformers slower than their vanilla ViT counterparts. In this paper, we argue that this additional bulk is unnecessary. By pretraining with a strong visual pretext task (MAE), we can strip out all the bells-and-whistles from a state-of-the-art multi-stage vision transformer without losing accuracy. In the process, we create Hiera, an extremely simple hierarchical vision transformer that is more accurate than previous models while being significantly faster both at inference and during training. We evaluate Hiera on a variety of tasks for image and video recognition. Our code and models are available at https://github.com/facebookresearch/hiera.
https://proceedings.mlr.press/v202/rychener23a.html
https://proceedings.mlr.press/v202/rychener23a/rychener23a.pdf
https://openreview.net/forum?id=srcA0Dzooj
End-to-End Learning for Stochastic Optimization: A Bayesian Perspective
https://proceedings.mlr.press/v202/rychener23a.html
Yves Rychener, Daniel Kuhn, Tobias Sutter
https://proceedings.mlr.press/v202/rychener23a.html
ICML 2023
We develop a principled approach to end-to-end learning in stochastic optimization. First, we show that the standard end-to-end learning algorithm admits a Bayesian interpretation and trains a posterior Bayes action map. Building on the insights of this analysis, we then propose new end-to-end learning algorithms for training decision maps that output solutions of empirical risk minimization and distributionally robust optimization problems, two dominant modeling paradigms in optimization under uncertainty. Numerical results for a synthetic newsvendor problem illustrate the key differences between alternative training schemes. We also investigate an economic dispatch problem based on real data to showcase the impact of the neural network architecture of the decision maps on their test performance.
https://proceedings.mlr.press/v202/saad23a.html
https://proceedings.mlr.press/v202/saad23a/saad23a.pdf
https://openreview.net/forum?id=msZrQQAlBA
Sequential Monte Carlo Learning for Time Series Structure Discovery
https://proceedings.mlr.press/v202/saad23a.html
Feras Saad, Brian Patton, Matthew Douglas Hoffman, Rif A. Saurous, Vikash Mansinghka
https://proceedings.mlr.press/v202/saad23a.html
ICML 2023
This paper presents a new approach to automatically discovering accurate models of complex time series data. Working within a Bayesian nonparametric prior over a symbolic space of Gaussian process time series models, we present a novel structure learning algorithm that integrates sequential Monte Carlo (SMC) and involutive MCMC for highly effective posterior inference. Our method can be used both in “online” settings, where new data is incorporated sequentially in time, and in “offline” settings, by using nested subsets of historical data to anneal the posterior. Empirical measurements on real-world time series show that our method can deliver 10x–100x runtime speedups over previous MCMC and greedy-search structure learning algorithms targeting the same model family. We use our method to perform the first large-scale evaluation of Gaussian process time series structure learning on a prominent benchmark of 1,428 econometric datasets. The results show that our method discovers sensible models that deliver more accurate point forecasts and interval forecasts over multiple horizons as compared to widely used statistical and neural baselines that struggle on this challenging data.
https://proceedings.mlr.press/v202/saad23b.html
https://proceedings.mlr.press/v202/saad23b/saad23b.pdf
https://openreview.net/forum?id=fmLW8Eq3VQ
Active Ranking of Experts Based on their Performances in Many Tasks
https://proceedings.mlr.press/v202/saad23b.html
El Mehdi Saad, Nicolas Verzelen, Alexandra Carpentier
https://proceedings.mlr.press/v202/saad23b.html
ICML 2023
We consider the problem of ranking $n$ experts based on their performances on $d$ tasks. We make a monotonicity assumption stating that for each pair of experts, one outperforms the other on all tasks. We consider the sequential setting where in each round the learner has access to noisy evaluations of an actively chosen expert-task pair, given the information available up to the current round. Given a confidence parameter $\delta \in (0, 1)$, we provide strategies allowing us to recover the correct ranking of experts and develop a bound on the total number of queries made by our algorithm that holds with probability at least $1-\delta$. We show that our strategy is adaptive to the complexity of the problem (our bounds are instance dependent), and develop matching lower bounds up to a poly-logarithmic factor. Finally, we adapt our strategy to the relaxed problem of best expert identification and provide numerical simulations consistent with our theoretical results.
https://proceedings.mlr.press/v202/saberi23a.html
https://proceedings.mlr.press/v202/saberi23a/saberi23a.pdf
https://openreview.net/forum?id=0YGT8Bqpoa
Sample Complexity Bounds for Learning High-dimensional Simplices in Noisy Regimes
https://proceedings.mlr.press/v202/saberi23a.html
Seyed Amir Hossein Saberi, Amir Najafi, Abolfazl Motahari, Babak Khalaj
https://proceedings.mlr.press/v202/saberi23a.html
ICML 2023
In this paper, we propose sample complexity bounds for learning a simplex from noisy samples. A dataset of size $n$ is given which includes i.i.d. samples drawn from a uniform distribution over an unknown arbitrary simplex in $\mathbb{R}^K$, where samples are assumed to be corrupted by a multi-variate additive Gaussian noise of an arbitrary magnitude. We prove the existence of an algorithm that with high probability outputs a simplex having an $\ell_2$ distance of at most $\varepsilon$ from the true simplex (for any $\varepsilon>0$). Also, we theoretically show that in order to achieve this bound, it is sufficient to have $n\ge\tilde{\Omega}\left(K^2/\varepsilon^2\right)e^{\Omega\left(K/\mathrm{SNR}^2\right)}$ samples, where $\mathrm{SNR}$ stands for the signal-to-noise ratio and is defined as the ratio of the maximum component-wise standard deviation of the simplex (signal) to that of the noise vector. This result solves an important open problem in this area of research, and shows that as long as $\mathrm{SNR}\ge\Omega\left(\sqrt{K}\right)$ the sample complexity of the noisy regime has the same order as that of the noiseless case. Our proofs are a combination of the so-called sample compression technique in (Ashtiani et al., 2018), mathematical tools from high-dimensional geometry, and Fourier analysis. In particular, we have proposed a general Fourier-based technique for recovery of a more general class of distribution families from additive Gaussian noise, which can be further used in a variety of other related problems.
https://proceedings.mlr.press/v202/sachidananda23a.html
https://proceedings.mlr.press/v202/sachidananda23a/sachidananda23a.pdf
https://openreview.net/forum?id=3gLPC1VtDz
Global Selection of Contrastive Batches via Optimization on Sample Permutations
https://proceedings.mlr.press/v202/sachidananda23a.html
Vin Sachidananda, Ziyi Yang, Chenguang Zhu
https://proceedings.mlr.press/v202/sachidananda23a.html
ICML 2023
Contrastive Learning has recently achieved state-of-the-art performance in a wide range of unimodal and multimodal tasks. Many contrastive learning approaches use mined hard negatives to make batches more informative during training but these approaches are inefficient as they increase epoch length proportional to the number of mined negatives and require frequent updates of nearest neighbor indices or mining from recent batches. In this work, we provide an alternative to hard negative mining, Global Contrastive Batch Sampling (GCBS), an efficient approximation to the batch assignment problem that upper bounds the gap between the global and training losses, $\mathcal{L}^{Global} - \mathcal{L}^{Train}$, in contrastive learning settings. Through experimentation we find GCBS improves state-of-the-art performance in sentence embedding and code-search tasks. Additionally, GCBS is easy to implement as it requires only a few additional lines of code, does not maintain external data structures such as nearest neighbor indices, is more computationally efficient than the most minimal hard negative mining approaches, and makes no changes to the model being trained. Code is available at https://github.com/vinayak1/GCBS.
https://proceedings.mlr.press/v202/sadiev23a.html
https://proceedings.mlr.press/v202/sadiev23a/sadiev23a.pdf
https://openreview.net/forum?id=JCpQcyjI7W
High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance
https://proceedings.mlr.press/v202/sadiev23a.html
Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
https://proceedings.mlr.press/v202/sadiev23a.html
ICML 2023
During the recent years the interest of optimization and machine learning communities in high-probability convergence of stochastic optimization methods has been growing. One of the main reasons for this is that high-probability complexity bounds are more accurate and less studied than in-expectation ones. However, SOTA high-probability non-asymptotic convergence results are derived under strong assumptions such as boundedness of the gradient noise variance or of the objective’s gradient itself. In this paper, we propose several algorithms with high-probability convergence results under less restrictive assumptions. In particular, we derive new high-probability convergence results under the assumption that the gradient/operator noise has bounded central $\alpha$-th moment for $\alpha \in (1,2]$ in the following setups: (i) smooth non-convex / Polyak-Lojasiewicz / convex / strongly convex / quasi-strongly convex minimization problems, (ii) Lipschitz / star-cocoercive and monotone / quasi-strongly monotone variational inequalities. These results justify the usage of the considered methods for solving problems that do not fit standard functional classes studied in stochastic optimization.
https://proceedings.mlr.press/v202/saha23a.html
https://proceedings.mlr.press/v202/saha23a/saha23a.pdf
https://openreview.net/forum?id=itr9tFVFZm
End-to-end Differentiable Clustering with Associative Memories
https://proceedings.mlr.press/v202/saha23a.html
Bishwajit Saha, Dmitry Krotov, Mohammed J Zaki, Parikshit Ram
https://proceedings.mlr.press/v202/saha23a.html
ICML 2023
Clustering is a widely used unsupervised learning technique involving an intensive discrete optimization problem. Associative Memory models or AMs are differentiable neural networks defining a recursive dynamical system, which have been integrated with various deep learning architectures. We uncover a novel connection between the AM dynamics and the inherent discrete assignment necessary in clustering to propose a novel unconstrained continuous relaxation of the discrete clustering problem, enabling end-to-end differentiable clustering with AM, dubbed ClAM. Leveraging the pattern completion ability of AMs, we further develop a novel self-supervised clustering loss. Our evaluations on varied datasets demonstrate that ClAM benefits from the self-supervision, and significantly improves upon both the traditional Lloyd’s k-means algorithm, and more recent continuous clustering relaxations (by up to 60% in terms of the Silhouette Coefficient).
https://proceedings.mlr.press/v202/saig23a.html
https://proceedings.mlr.press/v202/saig23a/saig23a.pdf
https://openreview.net/forum?id=dopVDyZSEW
Learning to Suggest Breaks: Sustainable Optimization of Long-Term User Engagement
https://proceedings.mlr.press/v202/saig23a.html
Eden Saig, Nir Rosenfeld
https://proceedings.mlr.press/v202/saig23a.html
ICML 2023
Optimizing user engagement is a key goal for modern recommendation systems, but blindly pushing users towards increased consumption risks burn-out, churn, or even addictive habits. To promote digital well-being, most platforms now offer a service that periodically prompts users to take breaks. These, however, must be set up manually, and so may be suboptimal for both users and the system. In this paper, we study the role of breaks in recommendation, and propose a framework for learning optimal breaking policies that promote and sustain long-term engagement. Based on the notion that recommendation dynamics are susceptible to both positive and negative feedback, we cast recommendation as a Lotka-Volterra dynamical system, where breaking reduces to a problem of optimal control. We then give an efficient learning algorithm, provide theoretical guarantees, and empirically demonstrate the utility of our approach on semi-synthetic data.
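A toy simulation in the spirit of the Lotka-Volterra framing: engagement grows under recommendation and is depressed by a fatigue variable, which itself builds with engagement and decays during breaks. Variable names, coefficients, and the periodic break policy are all illustrative assumptions; the paper derives optimal breaking policies via optimal control rather than fixing a schedule:

```python
import numpy as np

def simulate_engagement(days=200, dt=0.1, break_every=20, break_length=3):
    """Toy Lotka-Volterra-style dynamics: e = engagement, f = fatigue.
    Engagement grows when the platform recommends (u = 1) and is depressed by
    fatigue; fatigue builds with engagement and decays during breaks (u = 0).
    All coefficients and the periodic break schedule are illustrative."""
    e, f = 1.0, 0.5
    history = []
    for step in range(int(days / dt)):
        day = step * dt
        on_break = (day % break_every) < break_length
        u = 0.0 if on_break else 1.0
        de = 0.6 * u * e - 0.8 * e * f
        df = 0.4 * e * f - 0.5 * f
        e, f = max(e + dt * de, 1e-6), max(f + dt * df, 1e-6)
        history.append(e)
    return np.array(history)

no_breaks = simulate_engagement(break_length=0)
with_breaks = simulate_engagement()
print(round(float(no_breaks.mean()), 3), round(float(with_breaks.mean()), 3))
```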
https://proceedings.mlr.press/v202/saito23a.html
https://proceedings.mlr.press/v202/saito23a/saito23a.pdf
https://openreview.net/forum?id=d65OJgQp5Q
Multi-class Graph Clustering via Approximated Effective $p$-Resistance
https://proceedings.mlr.press/v202/saito23a.html
Shota Saito, Mark Herbster
https://proceedings.mlr.press/v202/saito23a.html
ICML 2023
This paper develops an approximation to the (effective) $p$-resistance and applies it to multi-class clustering. Spectral methods based on the graph Laplacian and its generalization to the graph $p$-Laplacian have been a backbone of non-Euclidean clustering techniques. The advantage of the $p$-Laplacian is that the parameter $p$ induces a controllable bias on cluster structure. The drawback of $p$-Laplacian eigenvector based methods is that the third and higher eigenvectors are difficult to compute. Thus, instead, we are motivated to use the $p$-resistance induced by the $p$-Laplacian for clustering. For $p$-resistance, small $p$ biases towards clusters with high internal connectivity while large $p$ biases towards clusters of small “extent,” that is, a preference for smaller shortest-path distances between vertices in the cluster. However, the $p$-resistance is expensive to compute. We overcome this by developing an approximation to the $p$-resistance. We prove upper and lower bounds on this approximation and observe that it is exact when the graph is a tree. We also provide theoretical justification for the use of $p$-resistance for clustering. Finally, we provide experiments comparing our approximated $p$-resistance clustering to other $p$-Laplacian based methods.
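For intuition, the classical $p = 2$ case is easy to compute exactly via the Laplacian pseudoinverse; the sketch below does just that. The paper's approximation for general $p$ and its clustering pipeline are not reproduced:

```python
import numpy as np

def effective_resistance(adjacency, u, v):
    """Classical effective resistance (the p = 2 case of the p-resistance the
    paper approximates): R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), where L^+ is
    the pseudoinverse of the graph Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    L_pinv = np.linalg.pinv(L)
    e = np.zeros(len(A))
    e[u], e[v] = 1.0, -1.0
    return float(e @ L_pinv @ e)

# Toy usage: path graph 0-1-2 with unit edges: R(0, 2) = 2, R(0, 1) = 1.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(effective_resistance(path, 0, 2), effective_resistance(path, 0, 1))
```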
https://proceedings.mlr.press/v202/saito23b.html
https://proceedings.mlr.press/v202/saito23b/saito23b.pdf
https://openreview.net/forum?id=rK45lWksZ9
Off-Policy Evaluation for Large Action Spaces via Conjunct Effect Modeling
https://proceedings.mlr.press/v202/saito23b.html
Yuta Saito, Qingyang Ren, Thorsten Joachims
https://proceedings.mlr.press/v202/saito23b.html
ICML 2023
We study off-policy evaluation (OPE) of contextual bandit policies for large discrete action spaces where conventional importance-weighting approaches suffer from excessive variance. To circumvent this variance issue, we propose a new estimator, called OffCEM, that is based on the conjunct effect model (CEM), a novel decomposition of the causal effect into a cluster effect and a residual effect. OffCEM applies importance weighting only to action clusters and addresses the residual causal effect through model-based reward estimation. We show that the proposed estimator is unbiased under a new assumption, called local correctness, which only requires that the residual-effect model preserves the relative expected reward differences of the actions within each cluster. To best leverage the CEM and local correctness, we also propose a new two-step procedure for performing model-based estimation that minimizes bias in the first step and variance in the second step. We find that the resulting OffCEM estimator substantially improves bias and variance compared to a range of conventional estimators. Experiments demonstrate that OffCEM provides substantial improvements in OPE especially in the presence of many actions.
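A hedged sketch of a conjunct-effect style estimator consistent with the description above: importance weights are applied at the level of the logged action's cluster, and a reward model handles the residual within-cluster effect. The exact OffCEM formula and its two-step fitting procedure are in the paper; this is one plausible instantiation:

```python
import numpy as np

def offcem_style_estimate(contexts, actions, rewards, cluster_of,
                          pi_e, pi_0, q_hat, n_actions):
    """Cluster-level importance weighting plus a model-based term for the
    residual effect (a plausible reading of the abstract, not a verbatim
    reproduction of OffCEM or its two-step fitting)."""
    n = len(actions)
    values = np.zeros(n)
    for i, (x, a, r) in enumerate(zip(contexts, actions, rewards)):
        # Importance weight on the logged action's cluster only.
        c = cluster_of[a]
        pe_c = sum(pi_e(x, b) for b in range(n_actions) if cluster_of[b] == c)
        p0_c = sum(pi_0(x, b) for b in range(n_actions) if cluster_of[b] == c)
        w = pe_c / max(p0_c, 1e-12)
        # Model-based direct term plus weighted residual correction.
        direct = sum(pi_e(x, b) * q_hat(x, b) for b in range(n_actions))
        values[i] = w * (r - q_hat(x, a)) + direct
    return float(values.mean())

# Toy usage: 4 actions in 2 clusters, hypothetical logging and target policies.
rng = np.random.default_rng(0)
n, n_actions = 500, 4
cluster_of = {0: 0, 1: 0, 2: 1, 3: 1}
pi_0 = lambda x, a: 0.25
pi_e = lambda x, a: [0.1, 0.1, 0.4, 0.4][a]
q_hat = lambda x, a: 0.2 * a
contexts = rng.normal(size=n)
actions = rng.integers(0, n_actions, size=n)
rewards = 0.2 * actions + rng.normal(scale=0.1, size=n)
print(offcem_style_estimate(contexts, actions, rewards, cluster_of,
                            pi_e, pi_0, q_hat, n_actions))  # roughly 0.42
```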
https://proceedings.mlr.press/v202/sakaue23a.html
https://proceedings.mlr.press/v202/sakaue23a/sakaue23a.pdf
https://openreview.net/forum?id=mY8KXsRNQv
Rethinking Warm-Starts with Predictions: Learning Predictions Close to Sets of Optimal Solutions for Faster $\text{L}$-/$\text{L}^\natural$-Convex Function Minimization
https://proceedings.mlr.press/v202/sakaue23a.html
Shinsaku Sakaue, Taihei Oki
https://proceedings.mlr.press/v202/sakaue23a.html
ICML 2023
An emerging line of work has shown that machine-learned predictions are useful to warm-start algorithms for discrete optimization problems, such as bipartite matching. Previous studies have shown time complexity bounds proportional to some distance between a prediction and an optimal solution, which we can approximately minimize by learning predictions from past optimal solutions. However, such guarantees may not be meaningful when multiple optimal solutions exist. Indeed, the dual problem of bipartite matching and, more generally, $\text{L}$-/$\text{L}^\natural$-convex function minimization have arbitrarily many optimal solutions, making such prediction-dependent bounds arbitrarily large. To resolve this theoretically critical issue, we present a new warm-start-with-prediction framework for $\text{L}$-/$\text{L}^\natural$-convex function minimization. Our framework offers time complexity bounds proportional to the distance between a prediction and the set of all optimal solutions. The main technical difficulty lies in learning predictions that are provably close to sets of all optimal solutions, for which we present an online-gradient-descent-based method. We thus give the first polynomial-time learnability of predictions that can provably warm-start algorithms regardless of multiple optimal solutions.
https://proceedings.mlr.press/v202/sakhi23a.html
https://proceedings.mlr.press/v202/sakhi23a/sakhi23a.pdf
https://openreview.net/forum?id=xN7I0tRuPZ
PAC-Bayesian Offline Contextual Bandits With Guarantees
https://proceedings.mlr.press/v202/sakhi23a.html
Otmane Sakhi, Pierre Alquier, Nicolas Chopin
https://proceedings.mlr.press/v202/sakhi23a.html
ICML 2023
This paper introduces a new principled approach for off-policy learning in contextual bandits. Unlike previous work, our approach does not derive learning principles from intractable or loose bounds. We analyse the problem through the PAC-Bayesian lens, interpreting policies as mixtures of decision rules. This allows us to propose novel generalization bounds and provide tractable algorithms to optimize them. We prove that the derived bounds are tighter than their competitors, and can be optimized directly to confidently improve upon the logging policy offline. Our approach learns policies with guarantees, uses all available data and does not require tuning additional hyperparameters on held-out sets. We demonstrate through extensive experiments the effectiveness of our approach in providing performance guarantees in practical scenarios.
https://proceedings.mlr.press/v202/salgia23a.html
https://proceedings.mlr.press/v202/salgia23a/salgia23a.pdf
https://openreview.net/forum?id=MOCvBgWzug
Provably and Practically Efficient Neural Contextual Bandits
https://proceedings.mlr.press/v202/salgia23a.html
Sudeep Salgia
https://proceedings.mlr.press/v202/salgia23a.html
ICML 2023
We consider the neural contextual bandit problem. In contrast to the existing work which primarily focuses on ReLU neural nets, we consider a general set of smooth activation functions. Under this more general setting, (i) we derive non-asymptotic error bounds on the difference between an overparameterized neural net and its corresponding neural tangent kernel, (ii) we propose an algorithm with a provable sublinear regret bound that is also efficient in the finite regime as demonstrated by empirical studies. The non-asymptotic error bounds may be of broader interests as a tool to establish the relation between the smoothness of the activation functions in neural contextual bandits and the smoothness of the kernels in kernel bandits.
https://proceedings.mlr.press/v202/salgia23b.html
https://proceedings.mlr.press/v202/salgia23b/salgia23b.pdf
https://openreview.net/forum?id=9dZqmxZh4z
Distributed Linear Bandits under Communication Constraints
https://proceedings.mlr.press/v202/salgia23b.html
Sudeep Salgia, Qing Zhao
https://proceedings.mlr.press/v202/salgia23b.html
ICML 2023
We consider distributed linear bandits where $M$ agents learn collaboratively to minimize the overall cumulative regret incurred by all agents. Information exchange is facilitated by a central server, and both the uplink and downlink communications are carried over channels with fixed capacity, which limits the amount of information that can be transmitted in each use of the channels. We investigate the regret-communication trade-off by (i) establishing information-theoretic lower bounds on the required communications (in terms of bits) for achieving a sublinear regret order; (ii) developing an efficient algorithm that achieves the minimum sublinear regret order offered by centralized learning using the minimum order of communications dictated by the information-theoretic lower bounds. For sparse linear bandits, we show a variant of the proposed algorithm offers better regret-communication trade-off by leveraging the sparsity of the problem.
https://proceedings.mlr.press/v202/salinas23a.html
https://proceedings.mlr.press/v202/salinas23a/salinas23a.pdf
https://openreview.net/forum?id=UE8EkguNmO
Optimizing Hyperparameters with Conformal Quantile Regression
https://proceedings.mlr.press/v202/salinas23a.html
David Salinas, Jacek Golebiowski, Aaron Klein, Matthias Seeger, Cedric Archambeau
https://proceedings.mlr.press/v202/salinas23a.html
ICML 2023
Many state-of-the-art hyperparameter optimization (HPO) algorithms rely on model-based optimizers that learn surrogate models of the target function to guide the search. Gaussian processes are the de facto surrogate model due to their ability to capture uncertainty. However, they make strong assumptions about the observation noise, which might not be warranted in practice. In this work, we propose to leverage conformalized quantile regression which makes minimal assumptions about the observation noise and, as a result, models the target function in a more realistic and robust fashion which translates to quicker HPO convergence on empirical benchmarks. To apply our method in a multi-fidelity setting, we propose a simple, yet effective, technique that aggregates observed results across different resource levels and outperforms conventional methods across many empirical tasks.
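A generic conformalized quantile regression sketch in the split-conformal style: fit lower and upper quantile regressors, then widen them by the calibration quantile of the conformity scores. The HPO loop and the multi-fidelity aggregation described above are not shown, and the quantile learner here is an arbitrary choice:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def conformalized_quantile_intervals(X_train, y_train, X_cal, y_cal, X_test,
                                     alpha=0.1):
    """Split-conformal CQR: fit lower/upper quantile regressors, compute
    conformity scores on a calibration set, and widen the interval by their
    (1 - alpha)-quantile so that marginal coverage holds."""
    lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2)
    hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2)
    lo.fit(X_train, y_train)
    hi.fit(X_train, y_train)
    scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
    k = int(np.ceil((1 - alpha) * (len(y_cal) + 1))) - 1
    q = np.sort(scores)[min(k, len(scores) - 1)]
    return lo.predict(X_test) - q, hi.predict(X_test) + q

# Toy usage with heteroscedastic noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.2 * np.abs(X[:, 0]))
lower, upper = conformalized_quantile_intervals(
    X[:300], y[:300], X[300:500], y[300:500], X[500:])
coverage = np.mean((y[500:] >= lower) & (y[500:] <= upper))
print(round(float(coverage), 3))   # close to 0.9 by construction
```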
https://proceedings.mlr.press/v202/salman23a.html
https://proceedings.mlr.press/v202/salman23a/salman23a.pdf
https://openreview.net/forum?id=mSKJS7YbwU
Raising the Cost of Malicious AI-Powered Image Editing
https://proceedings.mlr.press/v202/salman23a.html
Hadi Salman, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, Aleksander Madry
https://proceedings.mlr.press/v202/salman23a.html
ICML 2023
We present an approach to mitigating the risks of malicious image editing posed by large diffusion models. The key idea is to immunize images so as to make them resistant to manipulation by these models. This immunization relies on injection of imperceptible adversarial perturbations designed to disrupt the operation of the targeted diffusion models, forcing them to generate unrealistic images. We provide two methods for crafting such perturbations, and then demonstrate their efficacy. Finally, we discuss a policy component necessary to make our approach fully effective and practical—one that involves the organizations developing diffusion models, rather than individual users, to implement (and support) the immunization process.
https://proceedings.mlr.press/v202/sander23a.html
https://proceedings.mlr.press/v202/sander23a/sander23a.pdf
https://openreview.net/forum?id=dolp65Z6re
Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective
https://proceedings.mlr.press/v202/sander23a.html
Michael Eli Sander, Joan Puigcerver, Josip Djolonga, Gabriel Peyré, Mathieu Blondel
https://proceedings.mlr.press/v202/sander23a.html
ICML 2023
The top-$k$ operator returns a $k$-sparse vector, where the non-zero values correspond to the $k$ largest values of the input. Unfortunately, because it is a discontinuous function, it is difficult to incorporate in neural networks trained end-to-end with backpropagation. Recent works have considered differentiable relaxations, based either on regularization or perturbation techniques. However, to date, no approach is fully differentiable and sparse. In this paper, we propose new differentiable and sparse top-$k$ operators. We view the top-$k$ operator as a linear program over the permutahedron, the convex hull of permutations. We then introduce a $p$-norm regularization term to smooth out the operator, and show that its computation can be reduced to isotonic optimization. Our framework is significantly more general than the existing one and allows for example to express top-$k$ operators that select values in magnitude. On the algorithmic side, in addition to pool adjacent violator (PAV) algorithms, we propose a new GPU/TPU-friendly Dykstra algorithm to solve isotonic optimization problems. We successfully use our operators to prune weights in neural networks, to fine-tune vision transformers, and as a router in sparse mixture of experts.
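The abstract reduces the smoothed top-$k$ computation to isotonic optimization solved with PAV-style algorithms. The snippet below is a minimal sketch of the classical pool-adjacent-violators algorithm for least-squares isotonic regression, shown only to illustrate that subroutine; it is not the paper's top-$k$ operator or its Dykstra variant.

# Classical pool-adjacent-violators (PAV) for L2 isotonic regression.
import numpy as np

def pav(y):
    """Project y onto the set of non-decreasing sequences (least squares)."""
    merged = []                                   # each block: [mean, weight]
    for v in y:
        merged.append([float(v), 1.0])
        # Pool adjacent blocks while they violate monotonicity.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, w2 = merged.pop()
            m1, w1 = merged.pop()
            merged.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for mean, w in merged:
        out.extend([mean] * int(w))
    return np.array(out)

print(pav([3.0, 1.0, 2.0, 5.0, 4.0]))  # -> [2.  2.  2.  4.5 4.5]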
https://proceedings.mlr.press/v202/sander23b.html
https://proceedings.mlr.press/v202/sander23b/sander23b.pdf
https://openreview.net/forum?id=DIkGgI9baJ
TAN Without a Burn: Scaling Laws of DP-SGD
https://proceedings.mlr.press/v202/sander23b.html
Tom Sander, Pierre Stock, Alexandre Sablayrolles
https://proceedings.mlr.press/v202/sander23b.html
ICML 2023
Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently, in particular with the use of massive batches and aggregated data augmentations for a large number of training steps. These techniques require much more computing resources than their non-private counterparts, shifting the traditional privacy-accuracy trade-off to a privacy-accuracy-compute trade-off and making hyper-parameter search virtually impossible for realistic scenarios. In this work, we decouple privacy analysis and experimental behavior of noisy training to explore the trade-off with minimal computational requirements. We first use the tools of Renyi Differential Privacy (RDP) to highlight that the privacy budget, when not overcharged, only depends on the total amount of noise (TAN) injected throughout training. We then derive scaling laws for training models with DP-SGD to optimize hyper-parameters with more than a $100\times$ reduction in computational budget. We apply the proposed method on CIFAR-10 and ImageNet and, in particular, strongly improve the state-of-the-art on ImageNet with a $+9$ points gain in top-1 accuracy for a privacy budget $\varepsilon=8$.
https://proceedings.mlr.press/v202/sangani23a.html
https://proceedings.mlr.press/v202/sangani23a/sangani23a.pdf
https://openreview.net/forum?id=3MlWDiBcpr
Discrete Continuous Optimization Framework for Simultaneous Clustering and Training in Mixture Models
https://proceedings.mlr.press/v202/sangani23a.html
Parth Vipul Sangani, Arjun Shashank Kashettiwar, Pritish Chakraborty, Bhuvan Reddy Gangula, Durga S, Ganesh Ramakrishnan, Rishabh K Iyer, Abir De
https://proceedings.mlr.press/v202/sangani23a.html
ICML 2023
We study a new framework of learning mixture models via automatic clustering called PRESTO, wherein we optimize a joint objective function on the model parameters and the partitioning, with each model tailored to perform well on its specific cluster. In contrast to prior work, we do not assume any generative model for the data. We convert our training problem to a joint parameter estimation cum subset selection problem, subject to a matroid span constraint. This allows us to reduce our problem into a constrained set function minimization problem, where the underlying objective is monotone and approximately submodular. We then propose a new joint discrete-continuous optimization algorithm that achieves a bounded approximation guarantee for our problem. We show that PRESTO outperforms several alternative methods. Finally, we study PRESTO in the context of resource-efficient deep learning, where we train smaller resource-constrained models on each partition and show that it outperforms existing data partitioning and model pruning/knowledge distillation approaches, which, in contrast to PRESTO, require large initial (teacher) models.
https://proceedings.mlr.press/v202/santurkar23a.html
https://proceedings.mlr.press/v202/santurkar23a/santurkar23a.pdf
https://openreview.net/forum?id=7IRybndMLU
Whose Opinions Do Language Models Reflect?
https://proceedings.mlr.press/v202/santurkar23a.html
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto
https://proceedings.mlr.press/v202/santurkar23a.html
ICML 2023
Language models (LMs) are increasingly being used in open-ended contexts, where the opinions they reflect in response to subjective queries can have a profound impact, both on user satisfaction, and shaping the views of society at large. We put forth a quantitative framework to investigate the opinions reflected by LMs – by leveraging high-quality public opinion polls. Using this framework, we create OpinionQA, a dataset for evaluating the alignment of LM opinions with those of 60 US demographic groups over topics ranging from abortion to automation. Across topics, we find substantial misalignment between the views reflected by current LMs and those of US demographic groups: on par with the Democrat-Republican divide on climate change. Notably, this misalignment persists even after explicitly steering the LMs towards particular groups. Our analysis not only confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs, but also surfaces groups whose opinions are poorly reflected by current LMs (e.g., 65+ and widowed individuals).
https://proceedings.mlr.press/v202/saran23a.html
https://proceedings.mlr.press/v202/saran23a/saran23a.pdf
https://openreview.net/forum?id=ZDcKFyL1F9
Streaming Active Learning with Deep Neural Networks
https://proceedings.mlr.press/v202/saran23a.html
Akanksha Saran, Safoora Yousefi, Akshay Krishnamurthy, John Langford, Jordan T. Ash
https://proceedings.mlr.press/v202/saran23a.html
ICML 2023
Active learning is perhaps most naturally posed as an online learning problem. However, prior active learning approaches with deep neural networks assume offline access to the entire dataset ahead of time. This paper proposes VeSSAL, a new algorithm for batch active learning with deep neural networks in streaming settings, which samples groups of points to query for labels at the moment they are encountered. Our approach trades off between uncertainty and diversity of queried samples to match a desired query rate without requiring any hand-tuned hyperparameters. Altogether, we expand the applicability of deep neural networks to realistic active learning scenarios, such as applications relevant to HCI and large, fractured datasets.
https://proceedings.mlr.press/v202/sarnthein23a.html
https://proceedings.mlr.press/v202/sarnthein23a/sarnthein23a.pdf
https://openreview.net/forum?id=6NUs7C2uRo
Random Teachers are Good Teachers
https://proceedings.mlr.press/v202/sarnthein23a.html
Felix Sarnthein, Gregor Bachmann, Sotiris Anagnostidis, Thomas Hofmann
https://proceedings.mlr.press/v202/sarnthein23a.html
ICML 2023
In this work, we investigate the implicit regularization induced by teacher-student learning dynamics in self-distillation. To isolate its effect, we describe a simple experiment where we consider teachers at random initialization instead of trained teachers. Surprisingly, when distilling a student into such a random teacher, we observe that the resulting model and its representations already possess very interesting characteristics; (1) we observe a strong improvement of the distilled student over its teacher in terms of probing accuracy. (2) The learned representations are data-dependent and transferable between different tasks but deteriorate strongly if trained on random inputs. (3) The student checkpoint contains sparse subnetworks, so-called lottery tickets, and lies on the border of linear basins in the supervised loss landscape. These observations have interesting consequences for several important areas in machine learning: (1) Self-distillation can work solely based on the implicit regularization present in the gradient dynamics without relying on any dark knowledge, (2) self-supervised learning can learn features even in the absence of data augmentation and (3) training dynamics during the early phase of supervised training do not necessarily require label information. Finally, we shed light on an intriguing local property of the loss landscape: the process of feature learning is strongly amplified if the student is initialized closely to the teacher. These results raise interesting questions about the nature of the landscape that have remained unexplored so far. Code is available at https://github.com/safelix/dinopl.
https://proceedings.mlr.press/v202/sasso23a.html
https://proceedings.mlr.press/v202/sasso23a/sasso23a.pdf
https://openreview.net/forum?id=ZwjSECgl6p
Posterior Sampling for Deep Reinforcement Learning
https://proceedings.mlr.press/v202/sasso23a.html
Remo Sasso, Michelangelo Conserva, Paulo Rauber
https://proceedings.mlr.press/v202/sasso23a.html
ICML 2023
Despite remarkable successes, deep reinforcement learning algorithms remain sample inefficient: they require an enormous amount of trial and error to find good policies. Model-based algorithms promise sample efficiency by building an environment model that can be used for planning. Posterior Sampling for Reinforcement Learning is such a model-based algorithm that has attracted significant interest due to its performance in the tabular setting. This paper introduces Posterior Sampling for Deep Reinforcement Learning (PSDRL), the first truly scalable approximation of Posterior Sampling for Reinforcement Learning that retains its model-based essence. PSDRL combines efficient uncertainty quantification over latent state space models with a specially tailored incremental planning algorithm based on value-function approximation. Extensive experiments on the Atari benchmark show that PSDRL significantly outperforms previous state-of-the-art attempts at scaling up posterior sampling while being competitive with a state-of-the-art (model-based) reinforcement learning method, both in sample efficiency and computational efficiency.
https://proceedings.mlr.press/v202/sato23a.html
https://proceedings.mlr.press/v202/sato23a/sato23a.pdf
https://openreview.net/forum?id=xQm5d5T30v
Graph Neural Networks can Recover the Hidden Features Solely from the Graph Structure
https://proceedings.mlr.press/v202/sato23a.html
Ryoma Sato
https://proceedings.mlr.press/v202/sato23a.html
ICML 2023
Graph Neural Networks (GNNs) are popular models for graph learning problems. GNNs show strong empirical performance in many practical tasks. However, the theoretical properties have not been completely elucidated. In this paper, we investigate whether GNNs can exploit the graph structure from the perspective of the expressive power of GNNs. In our analysis, we consider graph generation processes that are controlled by hidden (or latent) node features, which contain all information about the graph structure. A typical example of this framework is kNN graphs constructed from the hidden features. In our main results, we show that GNNs can recover the hidden node features from the input graph alone, even when all node features, including the hidden features themselves and any indirect hints, are unavailable. GNNs can further use the recovered node features for downstream tasks. These results show that GNNs can fully exploit the graph structure by themselves, and in effect, GNNs can use both the hidden and explicit node features for downstream tasks. In the experiments, we confirm the validity of our results by showing that GNNs can accurately recover the hidden features using a GNN architecture built based on our theoretical analysis.
https://proceedings.mlr.press/v202/sato23b.html
https://proceedings.mlr.press/v202/sato23b/sato23b.pdf
https://openreview.net/forum?id=26P1PGCFyc
Existence and Estimation of Critical Batch Size for Training Generative Adversarial Networks with Two Time-Scale Update Rule
https://proceedings.mlr.press/v202/sato23b.html
Naoki Sato, Hideaki Iiduka
https://proceedings.mlr.press/v202/sato23b.html
ICML 2023
Previous results have shown that a two time-scale update rule (TTUR) using different learning rates, such as different constant rates or different decaying rates, is useful for training generative adversarial networks (GANs) in theory and in practice. Moreover, not only the learning rate but also the batch size is important for training GANs with TTURs and they both affect the number of steps needed for training. This paper studies the relationship between batch size and the number of steps needed for training GANs with TTURs based on constant learning rates. We theoretically show that, for a TTUR with constant learning rates, the number of steps needed to find stationary points of the loss functions of both the discriminator and generator decreases as the batch size increases and that there exists a critical batch size minimizing the stochastic first-order oracle (SFO) complexity. Then, we use the Fréchet inception distance (FID) as the performance measure for training and provide numerical results indicating that the number of steps needed to achieve a low FID score decreases as the batch size increases and that the SFO complexity increases once the batch size exceeds the measured critical batch size. Moreover, we show that measured critical batch sizes are close to the sizes estimated from our theoretical results.
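For readers unfamiliar with the quantity being minimized, the following generic formulation (an illustration, not the paper's exact statement) defines the stochastic first-order oracle complexity and the critical batch size: with batch size $b$ and $K(b)$ the number of steps needed to reach an $\epsilon$-approximate stationary point,
\[
\mathrm{SFO}(b) = b\,K(b), \qquad b^\star \in \operatorname*{arg\,min}_{b>0} b\,K(b),
\]
so that below $b^\star$ a larger batch reduces $K(b)$ fast enough to pay for its extra gradient evaluations, while above $b^\star$ it does not.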
https://proceedings.mlr.press/v202/sauer23a.html
https://proceedings.mlr.press/v202/sauer23a/sauer23a.pdf
https://openreview.net/forum?id=PZahJfBVNB
StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis
https://proceedings.mlr.press/v202/sauer23a.html
Axel Sauer, Tero Karras, Samuli Laine, Andreas Geiger, Timo Aila
https://proceedings.mlr.press/v202/sauer23a.html
ICML 2023
Text-to-image synthesis has recently seen significant progress thanks to large pretrained language models, large-scale training data, and the introduction of scalable model families such as diffusion and autoregressive models. However, the best-performing models require iterative evaluation to generate a single sample. In contrast, generative adversarial networks (GANs) only need a single forward pass. They are thus much faster, but they currently remain far behind the state-of-the-art in large-scale text-to-image synthesis. This paper aims to identify the necessary steps to regain competitiveness. Our proposed model, StyleGAN-T, addresses the specific requirements of large-scale text-to-image synthesis, such as large capacity, stable training on diverse datasets, strong text alignment, and controllable variation vs. text alignment tradeoff. StyleGAN-T significantly improves over previous GANs and outperforms distilled diffusion models - the previous state-of-the-art in fast text-to-image synthesis - in terms of sample quality and speed.
https://proceedings.mlr.press/v202/savchenko23a.html
https://proceedings.mlr.press/v202/savchenko23a/savchenko23a.pdf
https://openreview.net/forum?id=DH11pt7S2t
Facial Expression Recognition with Adaptive Frame Rate based on Multiple Testing Correction
https://proceedings.mlr.press/v202/savchenko23a.html
Andrey Savchenko
https://proceedings.mlr.press/v202/savchenko23a.html
ICML 2023
In this paper, we consider the problem of the high computational complexity of video-based facial expression recognition. A novel sequential procedure is proposed with an adaptive frame rate selection in a short video fragment to speed up decision-making. We automatically adjust the frame rate and process fewer frames with a low frame rate for more straightforward videos and more frames for complex ones. To determine the frame rate at which an inference is sufficiently reliable, the Benjamini-Hochberg procedure from multiple comparisons theory is employed to control the false discovery rate. The main advantages of our method are an improvement of the trustworthiness of decision-making by maintaining only one hyper-parameter (false acceptance rate) and its applicability with arbitrary neural network models used as facial feature extractors without the need to re-train these models. An experimental study on datasets from ABAW and EmotiW challenges proves the superior performance (1.5-40 times faster) of the proposed approach compared to processing all frames and existing techniques with early exiting and adaptive frame selection.
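The multiple-testing step referenced above is the standard Benjamini-Hochberg procedure. Below is a minimal sketch of that procedure; the threshold and inputs are illustrative, and this is not the authors' full adaptive frame-rate pipeline.

# Standard Benjamini-Hochberg procedure for controlling the false discovery rate.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.1):
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Largest k (1-indexed) with p_(k) <= (k / m) * alpha.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # last index satisfying the bound
        reject[order[: k + 1]] = True
    return reject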
https://proceedings.mlr.press/v202/saxena23a.html
https://proceedings.mlr.press/v202/saxena23a/saxena23a.pdf
https://openreview.net/forum?id=Tw7pgl861K
Off-Policy Average Reward Actor-Critic with Deterministic Policy Search
https://proceedings.mlr.press/v202/saxena23a.html
Naman Saxena, Subhojyoti Khastagir, Shishir Kolathaya, Shalabh Bhatnagar
https://proceedings.mlr.press/v202/saxena23a.html
ICML 2023
The average reward criterion is relatively less studied as most existing works in the Reinforcement Learning literature consider the discounted reward criterion. A few recent works present on-policy average reward actor-critic algorithms, but the average reward off-policy actor-critic setting remains relatively unexplored. In this work, we present both on-policy and off-policy deterministic policy gradient theorems for the average reward performance criterion. Using these theorems, we also present an Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) Algorithm. We first provide an asymptotic convergence analysis using the ODE-based method. Subsequently, we provide a finite time analysis of the resulting stochastic approximation scheme with a linear function approximator and obtain an $\epsilon$-optimal stationary policy with a sample complexity of $\Omega(\epsilon^{-2.5})$. We compare the average reward performance of our proposed ARO-DDPG algorithm and observe better empirical performance compared to state-of-the-art on-policy average reward actor-critic algorithms over MuJoCo-based environments.
https://proceedings.mlr.press/v202/schar23a.html
https://proceedings.mlr.press/v202/schar23a/schar23a.pdf
https://openreview.net/forum?id=K2RnHMiUrW
Gibbsian Polar Slice Sampling
https://proceedings.mlr.press/v202/schar23a.html
Philip Schär, Michael Habeck, Daniel Rudolf
https://proceedings.mlr.press/v202/schar23a.html
ICML 2023
Polar slice sampling (Roberts & Rosenthal, 2002) is a Markov chain approach for approximate sampling of distributions that behaves provably well with respect to the dimension, but is difficult, if not impossible, to implement efficiently. By updating the directional and radial components of chain iterates separately, we obtain a family of samplers that mimic polar slice sampling, and yet can be implemented efficiently. Numerical experiments in a variety of settings indicate that our proposed algorithm outperforms the two most closely related approaches, elliptical slice sampling (Murray et al., 2010) and hit-and-run uniform slice sampling (MacKay, 2003). We prove the well-definedness and convergence of our methods under suitable assumptions on the target distribution.
https://proceedings.mlr.press/v202/schlaginhaufen23a.html
https://proceedings.mlr.press/v202/schlaginhaufen23a/schlaginhaufen23a.pdf
https://openreview.net/forum?id=tQ4hdUoM1E
Identifiability and Generalizability in Constrained Inverse Reinforcement Learning
https://proceedings.mlr.press/v202/schlaginhaufen23a.html
Andreas Schlaginhaufen, Maryam Kamgarpour
https://proceedings.mlr.press/v202/schlaginhaufen23a.html
ICML 2023
Two main challenges in Reinforcement Learning (RL) are designing appropriate reward functions and ensuring the safety of the learned policy. To address these challenges, we present a theoretical framework for Inverse Reinforcement Learning (IRL) in constrained Markov decision processes. From a convex-analytic perspective, we extend prior results on reward identifiability and generalizability to both the constrained setting and a more general class of regularizations. In particular, we show that identifiability up to potential shaping (Cao et al., 2021) is a consequence of entropy regularization and may generally no longer hold for other regularizations or in the presence of safety constraints. We also show that to ensure generalizability to new transition laws and constraints, the true reward must be identified up to a constant. Additionally, we derive a finite sample guarantee for the suboptimality of the learned rewards, and validate our results in a gridworld environment.
https://proceedings.mlr.press/v202/schnaus23a.html
https://proceedings.mlr.press/v202/schnaus23a/schnaus23a.pdf
https://openreview.net/forum?id=YzSeC2HsMz
Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks
https://proceedings.mlr.press/v202/schnaus23a.html
Dominik Schnaus, Jongseok Lee, Daniel Cremers, Rudolph Triebel
https://proceedings.mlr.press/v202/schnaus23a.html
ICML 2023
In this work, we propose a novel prior learning method for advancing generalization and uncertainty estimation in deep neural networks. The key idea is to exploit scalable and structured posteriors of neural networks as informative priors with generalization guarantees. Our learned priors provide expressive probabilistic representations at large scale, like Bayesian counterparts of pre-trained models on ImageNet, and further produce non-vacuous generalization bounds. We also extend this idea to a continual learning framework, where the favorable properties of our priors are desirable. Major enablers are our technical contributions: (1) the sums-of-Kronecker-product computations, and (2) the derivations and optimizations of tractable objectives that lead to improved generalization bounds. Empirically, we exhaustively show the effectiveness of this method for uncertainty estimation and generalization.
https://proceedings.mlr.press/v202/schroder23a.html
https://proceedings.mlr.press/v202/schroder23a/schroder23a.pdf
https://openreview.net/forum?id=BurklMMas2
Deterministic equivalent and error universality of deep random features learning
https://proceedings.mlr.press/v202/schroder23a.html
Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro
https://proceedings.mlr.press/v202/schroder23a.html
ICML 2023
This manuscript considers the problem of learning a random Gaussian network function using a fully connected network with frozen intermediate layers and trainable readout layer. This problem can be seen as a natural generalization of the widely studied random features model to deeper architectures. First, we prove Gaussian universality of the test error in a ridge regression setting where the learner and target networks share the same intermediate layers, and provide a sharp asymptotic formula for it. Establishing this result requires proving a deterministic equivalent for traces of the deep random features sample covariance matrices which can be of independent interest. Second, we conjecture the asymptotic Gaussian universality of the test error in the more general setting of arbitrary convex losses and generic learner/target architectures. We provide extensive numerical evidence for this conjecture, which requires the derivation of closed-form expressions for the layer-wise post-activation population covariances. In light of our results, we investigate the interplay between architecture design and implicit regularization.
https://proceedings.mlr.press/v202/schulze-buschoff23a.html
https://proceedings.mlr.press/v202/schulze-buschoff23a/schulze-buschoff23a.pdf
https://openreview.net/forum?id=C5lAsw1GCg
The Acquisition of Physical Knowledge in Generative Neural Networks
https://proceedings.mlr.press/v202/schulze-buschoff23a.html
Luca M. Schulze Buschoff, Eric Schulz, Marcel Binz
https://proceedings.mlr.press/v202/schulze-buschoff23a.html
ICML 2023
As children grow older, they develop an intuitive understanding of the physical processes around them. Their physical understanding develops in stages, moving along developmental trajectories which have been mapped out extensively in previous empirical research. Here, we investigate how the learning trajectories of deep generative neural networks compare to children’s developmental trajectories using physical understanding as a testbed. We outline an approach that allows us to examine two distinct hypotheses of human development – stochastic optimization and complexity increase. We find that while our models are able to accurately predict a number of physical processes, their learning trajectories under both hypotheses do not follow the developmental trajectories of children.
https://proceedings.mlr.press/v202/schwarz23a.html
https://proceedings.mlr.press/v202/schwarz23a/schwarz23a.pdf
https://openreview.net/forum?id=bBXCCSoVQZ
Modality-Agnostic Variational Compression of Implicit Neural Representations
https://proceedings.mlr.press/v202/schwarz23a.html
Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin
https://proceedings.mlr.press/v202/schwarz23a.html
ICML 2023
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR). Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism. This allows the specialisation of a shared INR network to each data item through subnetwork selection. After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression. Variational Compression of Implicit Neural Representations (VC-INR) shows improved performance given the same representational capacity pre-quantisation while also outperforming previous quantisation schemes used for other INR techniques. Our experiments demonstrate strong results over a large set of diverse modalities using the same algorithm without any modality-specific inductive biases. We show results on images, climate data, 3D shapes and scenes as well as audio and video, introducing VC-INR as the first INR-based method to outperform codecs as well-known and diverse as JPEG 2000, MP3 and AVC/HEVC on their respective modalities.
https://proceedings.mlr.press/v202/schwarzer23a.html
https://proceedings.mlr.press/v202/schwarzer23a/schwarzer23a.pdf
https://openreview.net/forum?id=2sjm6AH1jB
Bigger, Better, Faster: Human-level Atari with human-level efficiency
https://proceedings.mlr.press/v202/schwarzer23a.html
Max Schwarzer, Johan Samir Obando Ceron, Aaron Courville, Marc G Bellemare, Rishabh Agarwal, Pablo Samuel Castro
https://proceedings.mlr.press/v202/schwarzer23a.html
ICML 2023
We introduce a value-based RL agent, which we call BBF, that achieves super-human performance in the Atari 100K benchmark. BBF relies on scaling the neural networks used for value estimation, as well as a number of other design choices that enable this scaling in a sample-efficient manner. We conduct extensive analyses of these design choices and provide insights for future work. We end with a discussion about updating the goalposts for sample-efficient RL research on the ALE. We make our code and data publicly available at https://github.com/google-research/google-research/tree/master/bigger_better_faster.
https://proceedings.mlr.press/v202/sclocchi23a.html
https://proceedings.mlr.press/v202/sclocchi23a/sclocchi23a.pdf
https://openreview.net/forum?id=touFUfiNAM
Dissecting the Effects of SGD Noise in Distinct Regimes of Deep Learning
https://proceedings.mlr.press/v202/sclocchi23a.html
Antonio Sclocchi, Mario Geiger, Matthieu Wyart
https://proceedings.mlr.press/v202/sclocchi23a.html
ICML 2023
Understanding when the noise in stochastic gradient descent (SGD) affects generalization of deep neural networks remains a challenge, complicated by the fact that networks can operate in distinct training regimes. Here we study how the magnitude of this noise $T$ affects performance as the size of the training set $P$ and the scale of initialization $\alpha$ are varied. For gradient descent, $\alpha$ is a key parameter that controls if the network is 'lazy' ($\alpha\gg1$) or instead learns features ($\alpha\ll1$). For classification of MNIST and CIFAR10 images, our central results are: *(i)* obtaining phase diagrams for performance in the $(\alpha,T)$ plane. They show that SGD noise can be detrimental or instead useful depending on the training regime. Moreover, although increasing $T$ or decreasing $\alpha$ both allow the net to escape the lazy regime, these changes can have opposite effects on performance. *(ii)* Most importantly, we find that the characteristic temperature $T_c$ where the noise of SGD starts affecting the trained model (and eventually performance) is a power law of $P$. We relate this finding with the observation that key dynamical quantities, such as the total variation of weights during training, depend on both $T$ and $P$ as power laws. These results indicate that a key effect of SGD noise occurs late in training, by affecting the stopping process whereby all data are fitted. Indeed, we argue that due to SGD noise, nets must develop a stronger 'signal', i.e. larger informative weights, to fit the data, leading to a longer training time. A stronger signal and a longer training time are also required when the size of the training set $P$ increases. We confirm these views in the perceptron model, where signal and noise can be precisely measured. Interestingly, exponents characterizing the effect of SGD depend on the density of data near the decision boundary, as we explain.
https://proceedings.mlr.press/v202/sedlmayer23a.html
https://proceedings.mlr.press/v202/sedlmayer23a/sedlmayer23a.pdf
https://openreview.net/forum?id=2djCNo5yXQ
A Fast Optimistic Method for Monotone Variational Inequalities
https://proceedings.mlr.press/v202/sedlmayer23a.html
Michael Sedlmayer, Dang-Khoa Nguyen, Radu Ioan Bot
https://proceedings.mlr.press/v202/sedlmayer23a.html
ICML 2023
We study monotone variational inequalities that can arise as optimality conditions for constrained convex optimization or convex-concave minimax problems and propose a novel algorithm that uses only one gradient/operator evaluation and one projection onto the constraint set per iteration. The algorithm, which we call fOGDA-VI, achieves a $o(\frac{1}{k})$ rate of convergence in terms of the restricted gap function as well as the natural residual for the last iterate. Moreover, we provide a convergence guarantee for the sequence of iterates to a solution of the variational inequality. These are the best theoretical convergence results for numerical methods for (only) monotone variational inequalities reported in the literature. To empirically validate our algorithm we investigate a two-player matrix game with mixed strategies of the two players. Concluding, we show promising results regarding the application of fOGDA-VI to the training of generative adversarial nets.
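For context, the sketch below implements the classical optimistic gradient scheme (one operator evaluation and one projection per iteration) on a matrix game with mixed strategies, the experimental setting mentioned in the abstract. It is the textbook method, not fOGDA-VI itself, and the step size and payoff matrix are assumptions.

# Classical optimistic gradient on a matrix game (not fOGDA-VI itself).
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def ogda_matrix_game(A, steps=2000, gamma=0.1):
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n            # mixed strategies
    Fx_prev, Fy_prev = A @ y, -A.T @ x               # previous operator value
    for _ in range(steps):
        Fx, Fy = A @ y, -A.T @ x                     # monotone VI operator
        x = project_simplex(x - gamma * (2 * Fx - Fx_prev))
        y = project_simplex(y - gamma * (2 * Fy - Fy_prev))
        Fx_prev, Fy_prev = Fx, Fy
    return x, y

# Rock-paper-scissors: both strategies converge towards the uniform mixture.
x, y = ogda_matrix_game(np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]]))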
https://proceedings.mlr.press/v202/segovia-martin23a.html
https://proceedings.mlr.press/v202/segovia-martin23a/segovia-martin23a.pdf
https://openreview.net/forum?id=WlIJAlWou5
Double-Weighting for Covariate Shift Adaptation
https://proceedings.mlr.press/v202/segovia-martin23a.html
José I. Segovia-Martín, Santiago Mazuelas, Anqi Liu
https://proceedings.mlr.press/v202/segovia-martin23a.html
ICML 2023
Supervised learning is often affected by a covariate shift in which the marginal distributions of instances (covariates $x$) of training and testing samples $p_\text{tr}(x)$ and $p_\text{te}(x)$ are different but the label conditionals coincide. Existing approaches address such covariate shift by either using the ratio $p_\text{te}(x)/p_\text{tr}(x)$ to weight training samples (reweighted methods) or using the ratio $p_\text{tr}(x)/p_\text{te}(x)$ to weight testing samples (robust methods). However, the performance of such approaches can be poor under support mismatch or when the above ratios take large values. We propose a minimax risk classification (MRC) approach for covariate shift adaptation that avoids such limitations by weighting both training and testing samples. In addition, we develop effective techniques that obtain both sets of weights and generalize the conventional kernel mean matching method. We provide novel generalization bounds for our method that show a significant increase in the effective sample size compared with reweighted methods. The proposed method also achieves enhanced classification performance in both synthetic and empirical experiments.
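The sketch below implements the conventional reweighted baseline that the abstract contrasts with: training samples are weighted by an estimate of $p_\text{te}(x)/p_\text{tr}(x)$. It is not the proposed double-weighting MRC method, and the probabilistic-classifier density-ratio estimator and the logistic downstream model are just illustrative stand-ins.

# Conventional importance-reweighted baseline under covariate shift.
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio(X_tr, X_te):
    """Crude probabilistic-classifier estimate of p_te(x)/p_tr(x)."""
    X = np.vstack([X_tr, X_te])
    z = np.concatenate([np.zeros(len(X_tr)), np.ones(len(X_te))])
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p = clf.predict_proba(X_tr)[:, 1]
    return (p / (1 - p)) * (len(X_tr) / len(X_te))

def reweighted_fit(X_tr, y_tr, X_te):
    w = density_ratio(X_tr, X_te)
    model = LogisticRegression(max_iter=1000)        # any downstream classifier
    model.fit(X_tr, y_tr, sample_weight=w)           # importance-weighted risk
    return model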
https://proceedings.mlr.press/v202/seidl23a.html
https://proceedings.mlr.press/v202/seidl23a/seidl23a.pdf
https://openreview.net/forum?id=oeRMR0La70
Enhancing Activity Prediction Models in Drug Discovery with the Ability to Understand Human Language
https://proceedings.mlr.press/v202/seidl23a.html
Philipp Seidl, Andreu Vall, Sepp Hochreiter, Günter Klambauer
https://proceedings.mlr.press/v202/seidl23a.html
ICML 2023
Activity and property prediction models are the central workhorses in drug discovery and materials sciences, but currently, they have to be trained or fine-tuned for new tasks. Without training or fine-tuning, scientific language models could be used for such low-data tasks through their announced zero- and few-shot capabilities. However, their predictive quality at activity prediction is lacking. In this work, we envision a novel type of activity prediction model that is able to adapt to new prediction tasks at inference time, via understanding textual information describing the task. To this end, we propose a new architecture with separate modules for chemical and natural language inputs, and a contrastive pretraining objective on data from large biochemical databases. In extensive experiments, we show that our method CLAMP yields improved predictive performance on few-shot learning benchmarks and zero-shot problems in drug discovery. We attribute the advances of our method to the modularized architecture and to our pre-training objective.
https://proceedings.mlr.press/v202/seidman23a.html
https://proceedings.mlr.press/v202/seidman23a/seidman23a.pdf
https://openreview.net/forum?id=gpbBUE8uhp
Variational Autoencoding Neural Operators
https://proceedings.mlr.press/v202/seidman23a.html
Jacob H Seidman, Georgios Kissas, George J. Pappas, Paris Perdikaris
https://proceedings.mlr.press/v202/seidman23a.html
ICML 2023
Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems. A natural way of modeling functional data is by learning operators between infinite dimensional spaces, leading to discretization invariant representations that scale independently of the sample grid resolution. Here we present Variational Autoencoding Neural Operators (VANO), a general strategy for making a large class of operator learning architectures act as variational autoencoders. For this purpose, we provide a novel rigorous mathematical formulation of the variational objective in function spaces for training. VANO first maps an input function to a distribution over a latent space using a parametric encoder and then decodes a sample from the latent distribution to reconstruct the input, as in classic variational autoencoders. We test VANO with different model set-ups and architecture choices for a variety of benchmarks. We start from a simple Gaussian random field where we can analytically track what the model learns and progressively transition to more challenging benchmarks including modeling phase separation in Cahn-Hilliard systems and real world satellite data for measuring Earth surface deformation.
https://proceedings.mlr.press/v202/seifner23a.html
https://proceedings.mlr.press/v202/seifner23a/seifner23a.pdf
https://openreview.net/forum?id=ZtvnhohkVk
Neural Markov Jump Processes
https://proceedings.mlr.press/v202/seifner23a.html
Patrick Seifner, Ramses J Sanchez
https://proceedings.mlr.press/v202/seifner23a.html
ICML 2023
Markov jump processes are continuous-time stochastic processes with a wide range of applications in both natural and social sciences. Despite their widespread use, inference in these models is highly non-trivial and typically proceeds via either Monte Carlo or expectation-maximization methods. In this work we introduce an alternative, variational inference algorithm for Markov jump processes which relies on neural ordinary differential equations, and is trainable via back-propagation. Our methodology learns neural, continuous-time representations of the observed data, that are used to approximate the initial distribution and time-dependent transition probability rates of the posterior Markov jump process. The time-independent rates of the prior process are in contrast trained akin to generative adversarial networks. We test our approach on synthetic data sampled from ground-truth Markov jump processes, experimental switching ion channel data and molecular dynamics simulations. Source code to reproduce our experiments is available online.
https://proceedings.mlr.press/v202/sellier23a.html
https://proceedings.mlr.press/v202/sellier23a/sellier23a.pdf
https://openreview.net/forum?id=1e2xR04JVl
Bayesian online change point detection with Hilbert space approximate Student-t process
https://proceedings.mlr.press/v202/sellier23a.html
Jeremy Sellier, Petros Dellaportas
https://proceedings.mlr.press/v202/sellier23a.html
ICML 2023
In this paper, we introduce a variant of Bayesian online change point detection with a reduced-rank Student-t process (TP) and dependent Student-t noise, as a nonparametric time series model. Our method builds and improves upon the state-of-the-art Gaussian process (GP) change point model benchmark of Saatci et al. (2010). The Student-t process generalizes the concept of a GP and hence yields a more flexible alternative. Additionally, unlike a GP, the predictive variance explicitly depends on the training observations, while the use of an entangled Student-t noise model preserves analytical tractability. Our approach also uses a Hilbert space reduced-rank representation of the TP kernel, derived from an eigenfunction expansion of the Laplace operator (Solin & Sarkka, 2020), to alleviate its computational complexity. Improvements in prediction and training time are demonstrated with real-world datasets.
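The reduced-rank construction referenced above (Solin & Sarkka, 2020) can be sketched for a 1-D squared-exponential kernel as follows; the Student-t process and change point machinery of the paper are not shown, and the domain half-width, number of basis functions and kernel hyperparameters are assumptions.

# Hilbert-space reduced-rank kernel approximation for a 1-D RBF kernel.
import numpy as np

def hilbert_features(x, m=32, L=5.0):
    """Laplace eigenfunctions phi_j(x) on [-L, L] with Dirichlet boundaries."""
    j = np.arange(1, m + 1)
    return np.sin(np.pi * j * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def rbf_spectral_density(omega, lengthscale=1.0, variance=1.0):
    return variance * np.sqrt(2 * np.pi) * lengthscale * np.exp(
        -0.5 * (lengthscale * omega) ** 2)

def approx_kernel(x1, x2, m=32, L=5.0, lengthscale=1.0, variance=1.0):
    j = np.arange(1, m + 1)
    sqrt_eig = np.pi * j / (2 * L)               # sqrt of Laplacian eigenvalues
    S = rbf_spectral_density(sqrt_eig, lengthscale, variance)
    Phi1, Phi2 = hilbert_features(x1, m, L), hilbert_features(x2, m, L)
    return (Phi1 * S) @ Phi2.T                   # k(x1, x2) ~ Phi diag(S) Phi^T

x = np.linspace(-3.0, 3.0, 5)
K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
K_approx = approx_kernel(x, x)                   # close to K_exact for this m, L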
https://proceedings.mlr.press/v202/sellke23a.html
https://proceedings.mlr.press/v202/sellke23a/sellke23a.pdf
https://openreview.net/forum?id=DpXAjLEG9G
Incentivizing Exploration with Linear Contexts and Combinatorial Actions
https://proceedings.mlr.press/v202/sellke23a.html
Mark Sellke
https://proceedings.mlr.press/v202/sellke23a.html
ICML 2023
We advance the study of incentivized bandit exploration, in which arm choices are viewed as recommendations and are required to be Bayesian incentive compatible. Recent work of Sellke-Slivkins (Operations Research 2022) has shown that for the special case of independent arms, after collecting enough initial samples, the popular Thompson sampling algorithm becomes incentive compatible. This was generalized to the combinatorial semibandit in Hu-Ngo-Slivkins-Wu (NeurIPS 2022). We give an analog of this result for linear bandits, where the independence of the prior is replaced by a natural convexity condition. This opens up the possibility of efficient and regret-optimal incentivized exploration in high-dimensional action spaces. In the semibandit model, we also improve the sample complexity for the pre-Thompson sampling phase of initial data collection.
https://proceedings.mlr.press/v202/senetaire23a.html
https://proceedings.mlr.press/v202/senetaire23a/senetaire23a.pdf
https://openreview.net/forum?id=RPzQOi1Cyf
Explainability as statistical inference
https://proceedings.mlr.press/v202/senetaire23a.html
Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei
https://proceedings.mlr.press/v202/senetaire23a.html
ICML 2023
A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model’s parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our model is akin to amortized interpretability methods, where a neural network is used as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularized maximum likelihood for our general model. Using our framework, we identify imputation as a common issue of these models. We propose new datasets with ground truth selection which allow for the evaluation of feature importance maps and show experimentally that multiple imputation provides more reasonable interpretations.
https://proceedings.mlr.press/v202/seo23a.html
https://proceedings.mlr.press/v202/seo23a/seo23a.pdf
https://openreview.net/forum?id=DwOUndjwiV
Multi-View Masked World Models for Visual Robotic Manipulation
https://proceedings.mlr.press/v202/seo23a.html
Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, Pieter Abbeel
https://proceedings.mlr.press/v202/seo23a.html
ICML 2023
Visual robotic manipulation research and applications often use multiple cameras, or views, to better perceive the world. How else can we utilize the richness of multi-view data? In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation. Specifically, we train a multi-view masked autoencoder which reconstructs pixels of randomly masked viewpoints and then learn a world model operating on the representations from the autoencoder. We demonstrate the effectiveness of our method in a range of scenarios, including multi-view control and single-view control with auxiliary cameras for representation learning. We also show that the multi-view masked autoencoder trained with multiple randomized viewpoints enables training a policy with strong viewpoint randomization and transferring the policy to solve real-robot tasks without camera calibration and an adaptation procedure. Video demonstrations are available at: https://sites.google.com/view/mv-mwm.
https://proceedings.mlr.press/v202/severo23a.html
https://proceedings.mlr.press/v202/severo23a/severo23a.pdf
https://openreview.net/forum?id=9L6f6Y7GUS
One-Shot Compression of Large Edge-Exchangeable Graphs using Bits-Back Coding
https://proceedings.mlr.press/v202/severo23a.html
Daniel Severo, James Townsend, Ashish J Khisti, Alireza Makhzani
https://proceedings.mlr.press/v202/severo23a.html
ICML 2023
We present a one-shot method for compressing large labeled graphs called Random Edge Coding. When paired with a parameter-free model based on Pólya’s Urn, the worst-case computational and memory complexities scale quasi-linearly and linearly with the number of observed edges, making it efficient on sparse graphs, and requires only integer arithmetic. Key to our method is bits-back coding, which is used to sample edges and vertices without replacement from the edge-list in a way that preserves the structure of the graph. Optimality is proven under a class of random graph models that are invariant to permutations of the edges and of vertices within an edge. Experiments indicate Random Edge Coding can achieve competitive compression performance on real-world network datasets and scales to graphs with millions of nodes and edges.
https://proceedings.mlr.press/v202/shah23a.html
https://proceedings.mlr.press/v202/shah23a/shah23a.pdf
https://openreview.net/forum?id=jIMwqkngZQ
ModelDiff: A Framework for Comparing Learning Algorithms
https://proceedings.mlr.press/v202/shah23a.html
Harshay Shah, Sung Min Park, Andrew Ilyas, Aleksander Madry
https://proceedings.mlr.press/v202/shah23a.html
ICML 2023
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms. We begin by formalizing this goal as one of finding distinguishing feature transformations, i.e., input transformations that change the predictions of models trained with one learning algorithm but not the other. We then present ModelDiff, a method that leverages the datamodels framework (Ilyas et al., 2022) to compare learning algorithms based on how they use their training data. We demonstrate ModelDiff through three case studies, comparing models trained with/without data augmentation, with/without pre-training, and with different SGD hyperparameters.
https://proceedings.mlr.press/v202/shamsian23a.html
https://proceedings.mlr.press/v202/shamsian23a/shamsian23a.pdf
https://openreview.net/forum?id=MWzQgOtaFi
Auxiliary Learning as an Asymmetric Bargaining Game
https://proceedings.mlr.press/v202/shamsian23a.html
Aviv Shamsian, Aviv Navon, Neta Glazer, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya
https://proceedings.mlr.press/v202/shamsian23a.html
ICML 2023
Auxiliary learning is an effective method for enhancing the generalization capabilities of trained models, particularly when dealing with small datasets. However, this approach may present several difficulties: (i) optimizing multiple objectives can be more challenging, and (ii) how to balance the auxiliary tasks to best assist the main task is unclear. In this work, we propose a novel approach, named AuxiNash, for balancing tasks in auxiliary learning by formalizing the problem as a generalized bargaining game with asymmetric task bargaining power. Furthermore, we describe an efficient procedure for learning the bargaining power of tasks based on their contribution to the performance of the main task and derive theoretical guarantees for its convergence. Finally, we evaluate AuxiNash on multiple multi-task benchmarks and find that it consistently outperforms competing methods.
https://proceedings.mlr.press/v202/shao23a.html
https://proceedings.mlr.press/v202/shao23a/shao23a.pdf
https://openreview.net/forum?id=RYD1UMgTdk
Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models
https://proceedings.mlr.press/v202/shao23a.html
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen
https://proceedings.mlr.press/v202/shao23a.html
ICML 2023
Large language models can perform various reasoning tasks by using chain-of-thought prompting, which guides them to find answers through step-by-step demonstrations. However, the quality of the prompts depends on the demonstrations given to the models, and creating many of them by hand is costly. We introduce Synthetic prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples by itself, and selects effective demonstrations to elicit better reasoning. Our method alternates between a backward and forward process to generate new examples. The backward process generates a question that matches a sampled reasoning chain, so that the question is solvable and clear. The forward process produces a more detailed reasoning chain for the question, improving the quality of the example. We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks, and show that it outperforms existing prompting techniques.
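A very rough sketch of the backward-forward loop described above is given below. The llm_generate helper is hypothetical (no real API is implied), and the prompt templates and selection heuristic are illustrative assumptions rather than the authors' prompts.

# Rough sketch of a backward-forward example-synthesis loop.
# llm_generate is a hypothetical text-completion helper, not a real API.
import random

def llm_generate(prompt: str) -> str:
    raise NotImplementedError("plug in your language-model call here")

def synthesize_examples(seed_examples, n_rounds=10):
    pool = list(seed_examples)                # (question, reasoning_chain) pairs
    for _ in range(n_rounds):
        demos = random.sample(pool, k=min(3, len(pool)))
        # Backward: compose a reasoning chain, then ask for a question that the
        # chain answers, so the question is solvable and clear.
        chain = llm_generate("Write a step-by-step reasoning chain similar to:\n"
                             + "\n\n".join(c for _, c in demos))
        question = llm_generate("Write a question that this reasoning answers:\n"
                                + chain)
        # Forward: re-solve the question to obtain a more detailed chain.
        refined = llm_generate("Solve step by step:\n" + question)
        pool.append((question, refined))
    return pool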
https://proceedings.mlr.press/v202/shao23b.html
https://proceedings.mlr.press/v202/shao23b/shao23b.pdf
https://openreview.net/forum?id=qEcJpq2Kjr
Complementary Attention for Multi-Agent Reinforcement Learning
https://proceedings.mlr.press/v202/shao23b.html
Jianzhun Shao, Hongchang Zhang, Yun Qu, Chang Liu, Shuncheng He, Yuhang Jiang, Xiangyang Ji
https://proceedings.mlr.press/v202/shao23b.html
ICML 2023
In cooperative multi-agent reinforcement learning, centralized training with decentralized execution (CTDE) shows great promise for a trade-off between independent Q-learning and joint action learning. However, vanilla CTDE methods, which assume a fixed number of agents, can hardly adapt to real-world scenarios where dynamic team compositions typically suffer from dramatically varying partial observability. Specifically, agents with extensive sight ranges are prone to be affected by trivial environmental substrates, dubbed the "distracted attention" issue; ones with limited observation can hardly sense their teammates, degrading the cooperation quality. In this paper, we propose Complementary Attention for Multi-Agent reinforcement learning (CAMA), which applies a divide-and-conquer strategy on input entities accompanied by the complementary attention of enhancement and replenishment. Concretely, to tackle the distracted attention issue, the attention of highly contributing entities is enhanced by the execution-related representation extracted via action prediction with an inverse model. For better out-of-sight-range cooperation, the lowly contributing ones are compressed to brief messages with a conditional mutual information estimator. Our CAMA facilitates stable and sustainable teamwork, which is justified by the impressive results reported on the challenging StarCraftII, MPE, and Traffic Junction benchmarks.
https://proceedings.mlr.press/v202/shapira-weber23a.html
https://proceedings.mlr.press/v202/shapira-weber23a/shapira-weber23a.pdf
https://openreview.net/forum?id=7IbLWa0anE
Regularization-free Diffeomorphic Temporal Alignment Nets
https://proceedings.mlr.press/v202/shapira-weber23a.html
Ron Shapira Weber, Oren Freifeld
https://proceedings.mlr.press/v202/shapira-weber23a.html
ICML 2023
In time-series analysis, nonlinear temporal misalignment is a major problem that forestalls even simple averaging. An effective learning-based solution for this problem is the Diffeomorphic Temporal Alignment Net (DTAN), that, by relying on a diffeomorphic temporal transformer net and the amortization of the joint-alignment task, eliminates drawbacks of traditional alignment methods. Unfortunately, existing DTAN formulations crucially depend on a regularization term whose optimal hyperparameters are dataset-specific and usually searched via a large number of experiments. Here we propose a regularization-free DTAN that obviates the need to perform such an expensive, and often impractical, search. Concretely, we propose a new well-behaved loss that we call the Inverse Consistency Averaging Error (ICAE), as well as a related new triplet loss. Extensive experiments on 128 UCR datasets show that the proposed method outperforms contemporary methods despite not using a regularization. Moreover, ICAE also gives rise to the first DTAN that supports variable-length signals. Our code is available at https://github.com/BGU-CS-VIL/RF-DTAN.
https://proceedings.mlr.press/v202/sharifnassab23a.html
https://proceedings.mlr.press/v202/sharifnassab23a/sharifnassab23a.pdf
https://openreview.net/forum?id=hGI6SjQ7Ty
Toward Efficient Gradient-Based Value Estimation
https://proceedings.mlr.press/v202/sharifnassab23a.html
Arsalan Sharifnassab, Richard S. Sutton
https://proceedings.mlr.press/v202/sharifnassab23a.html
ICML 2023
Gradient-based methods for value estimation in reinforcement learning have favorable stability properties, but they are typically much slower than Temporal Difference (TD) learning methods. We study the root causes of this slowness and show that Mean Square Bellman Error (MSBE) is an ill-conditioned loss function in the sense that its Hessian has large condition-number. To resolve the adverse effect of poor conditioning of MSBE on gradient based methods, we propose a low complexity batch-free proximal method that approximately follows the Gauss-Newton direction and is asymptotically robust to parameterization. Our main algorithm, called RANS, is efficient in the sense that it is significantly faster than the residual gradient methods while having almost the same computational complexity, and is competitive with TD on the classic problems that we tested.
https://proceedings.mlr.press/v202/sharrock23a.html
https://proceedings.mlr.press/v202/sharrock23a/sharrock23a.pdf
https://openreview.net/forum?id=4fBTKlUfTW
Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates
https://proceedings.mlr.press/v202/sharrock23a.html
Louis Sharrock, Christopher Nemeth
https://proceedings.mlr.press/v202/sharrock23a.html
ICML 2023
In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner in order to ensure convergence to the target measure at a suitable rate. In this paper, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms with no need to tune a learning rate.
https://proceedings.mlr.press/v202/shaul23a.html
https://proceedings.mlr.press/v202/shaul23a/shaul23a.pdf
https://openreview.net/forum?id=VWkHEQasPA
On Kinetic Optimal Probability Paths for Generative Models
https://proceedings.mlr.press/v202/shaul23a.html
Neta Shaul, Ricky T. Q. Chen, Maximilian Nickel, Matthew Le, Yaron Lipman
https://proceedings.mlr.press/v202/shaul23a.html
ICML 2023
Recent successful generative models are trained by fitting a neural network to an a-priori defined tractable probability density path taking noise to training examples. In this paper we investigate the space of Gaussian probability paths, which includes diffusion paths as an instance, and look for an optimal member in some useful sense. In particular, minimizing the Kinetic Energy (KE) of a path is known to make particles’ trajectories simple, hence easier to sample, and empirically improve performance in terms of likelihood of unseen data and sample generation quality. We investigate Kinetic Optimal (KO) Gaussian paths and offer the following observations: (i) We show the KE takes a simplified form on the space of Gaussian paths, where the data is incorporated only through a single, one dimensional scalar function, called the data separation function. (ii) We characterize the KO solutions with a one dimensional ODE. (iii) We approximate data-dependent KO paths by approximating the data separation function and minimizing the KE. (iv) We prove that the data separation function converges to $1$ in the general case of arbitrary normalized dataset consisting of $n$ samples in $d$ dimension as $n/\sqrt{d}\rightarrow 0$. A consequence of this result is that the Conditional Optimal Transport (Cond-OT) path becomes kinetic optimal as $n/\sqrt{d}\rightarrow 0$. We further support this theory with empirical experiments on ImageNet.
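For reference, the Gaussian probability-path setup the abstract works in can be written as follows (the standard conditional path and kinetic energy, as in flow matching; this is background notation, not the paper's kinetic-optimal characterization):
\[
p_t(x \mid x_1) = \mathcal{N}\big(x;\, \mu_t(x_1),\, \sigma_t^2 I\big), \qquad
u_t(x \mid x_1) = \frac{\dot\sigma_t}{\sigma_t}\,\big(x - \mu_t(x_1)\big) + \dot\mu_t(x_1),
\]
\[
\mathrm{KE} = \int_0^1 \mathbb{E}_{x \sim p_t}\,\big\|u_t(x)\big\|^2 \, dt .
\]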
https://proceedings.mlr.press/v202/shekhar23a.html
https://proceedings.mlr.press/v202/shekhar23a/shekhar23a.pdf
https://openreview.net/forum?id=QT5Cphscf2
Sequential Changepoint Detection via Backward Confidence Sequences
https://proceedings.mlr.press/v202/shekhar23a.html
Shubhanshu Shekhar, Aaditya Ramdas
https://proceedings.mlr.press/v202/shekhar23a.html
ICML 2023
We present a simple reduction from sequential estimation to sequential changepoint detection (SCD). In short, suppose we are interested in detecting changepoints in some parameter or functional $\theta$ of the underlying distribution. We demonstrate that if we can construct a confidence sequence (CS) for $\theta$, then we can also successfully perform SCD for $\theta$. This is accomplished by checking if two CSs — one forwards and the other backwards — ever fail to intersect. Since the literature on CSs has been rapidly evolving recently, the reduction provided in this paper immediately solves several old and new change detection problems. Further, our “backward CS”, constructed by reversing time, is new and potentially of independent interest. We provide strong nonasymptotic guarantees on the frequency of false alarms and detection delay, and demonstrate numerical effectiveness on several problems.
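A minimal sketch of the detection rule behind the reduction is below, using a crude stitched-Hoeffding interval as a stand-in for a proper confidence sequence and checking only the midpoint split; the actual method intersects a forward CS with backward CSs (built by reversing time) over all candidate split points. All helper names are illustrative.

```python
import numpy as np

def cs_interval(x, alpha=0.05):
    """Crude anytime-valid interval for a [0, 1]-bounded mean via a union bound over
    times; a loose stand-in for a real confidence sequence."""
    t = len(x)
    rad = np.sqrt(np.log(2.0 * t * (t + 1) / alpha) / (2.0 * t))
    m = float(np.mean(x))
    return m - rad, m + rad

def detect_change(stream, alpha=0.05):
    """Sketch of the CS-based changepoint rule: alarm when the interval for the
    prefix ("forward" CS) and the interval for the suffix ("backward" CS) are disjoint."""
    data = []
    for t, x in enumerate(stream, start=1):
        data.append(x)
        if t < 4:
            continue
        k = t // 2                                   # for simplicity, only the midpoint split
        lo_f, hi_f = cs_interval(data[:k], alpha)
        lo_b, hi_b = cs_interval(data[k:], alpha)
        if hi_f < lo_b or hi_b < lo_f:               # the two CSs fail to intersect
            return t                                 # changepoint declared at time t
    return None
```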
https://proceedings.mlr.press/v202/shekhovtsov23a.html
https://proceedings.mlr.press/v202/shekhovtsov23a/shekhovtsov23a.pdf
https://openreview.net/forum?id=9GjM8UzCYN
Cold Analysis of Rao-Blackwellized Straight-Through Gumbel-Softmax Gradient Estimator
https://proceedings.mlr.press/v202/shekhovtsov23a.html
Alexander Shekhovtsov
https://proceedings.mlr.press/v202/shekhovtsov23a.html
ICML 2023
Many problems in machine learning require an estimate of the gradient of an expectation in discrete random variables with respect to the sampling distribution. This work is motivated by the development of the Gumbel-Softmax family of estimators, which use a temperature-controlled relaxation of discrete variables. The state of the art in this family, the Gumbel-Rao estimator, uses extra internal sampling to reduce the variance, which may be costly. We analyze this estimator and show that it possesses a zero-temperature limit with a surprisingly simple closed form. The limit estimator, called ZGR, has favorable bias and variance properties, and it is easy to implement and computationally inexpensive. It decomposes as the average of the straight-through (ST) estimator and the DARN estimator, two basic estimators that do not perform well on their own. We demonstrate that the simple ST–ZGR family of estimators practically dominates the whole GR family in the bias-variance trade-off, while also outperforming state-of-the-art unbiased estimators.
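For reference, here is the straight-through (ST) ingredient mentioned above, for a Bernoulli variable in PyTorch; this is only one component of the decomposition, not the ZGR estimator itself.

```python
import torch

def bernoulli_st(logits):
    """Straight-through (ST) sample of a Bernoulli: hard 0/1 value in the forward
    pass, gradient of the sigmoid probability in the backward pass. Only the ST
    component referenced in the abstract, not the full ZGR estimator."""
    p = torch.sigmoid(logits)
    z_hard = torch.bernoulli(p)
    return p + (z_hard - p).detach()   # forward: z_hard; backward: d p / d logits
```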
https://proceedings.mlr.press/v202/shen23a.html
https://proceedings.mlr.press/v202/shen23a/shen23a.pdf
https://openreview.net/forum?id=0TJgsYhgGg
Towards Understanding and Improving GFlowNet Training
https://proceedings.mlr.press/v202/shen23a.html
Max W Shen, Emmanuel Bengio, Ehsan Hajiramezanali, Andreas Loukas, Kyunghyun Cho, Tommaso Biancalani
https://proceedings.mlr.press/v202/shen23a.html
ICML 2023
Generative flow networks (GFlowNets) are a family of algorithms that learn a generative policy to sample discrete objects $x$ with non-negative reward $R(x)$. Learning objectives guarantee that the GFlowNet samples $x$ from the target distribution $p^*(x) \propto R(x)$ when the loss is globally minimized over all states or trajectories, but it is unclear how well they perform with practical limits on training resources. We introduce an efficient evaluation strategy to compare the learned sampling distribution to the target reward distribution. As flows can be underdetermined given training data, we clarify the importance of learned flows to generalization and to matching $p^*(x)$ in practice. We investigate how to learn better flows, and propose (i) prioritized replay training of high-reward $x$, (ii) relative edge flow policy parametrization, and (iii) a novel guided trajectory balance objective, and show how it can solve a substructure credit assignment problem. We substantially improve sample efficiency on biochemical design tasks.
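For context, a sketch of the standard (unguided) trajectory balance objective for a single trajectory; the guided variant proposed in the paper modifies this and is not reproduced here.

```python
import torch

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Standard trajectory balance (TB) loss for one trajectory tau = (s_0 -> ... -> x):
        (log Z + sum_t log P_F(s_{t+1}|s_t) - log R(x) - sum_t log P_B(s_t|s_{t+1}))^2
    log_pf, log_pb: 1-D tensors of per-step log-probabilities; log_Z: learned scalar.
    Shown as the baseline objective; the paper's guided TB differs."""
    return (log_Z + log_pf.sum() - log_reward - log_pb.sum()) ** 2
```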
https://proceedings.mlr.press/v202/shen23b.html
https://proceedings.mlr.press/v202/shen23b/shen23b.pdf
https://openreview.net/forum?id=jWFRFz7yIc
On Balancing Bias and Variance in Unsupervised Multi-Source-Free Domain Adaptation
https://proceedings.mlr.press/v202/shen23b.html
Maohao Shen, Yuheng Bu, Gregory W. Wornell
https://proceedings.mlr.press/v202/shen23b.html
ICML 2023
Due to privacy, storage, and other constraints, there is a growing need for unsupervised domain adaptation techniques in machine learning that do not require access to the data used to train a collection of source models. Existing methods for multi-source-free domain adaptation (MSFDA) typically train a target model using pseudo-labeled data produced by the source models, focusing on improving the pseudo-labeling techniques or proposing new training objectives. Instead, we aim to analyze the fundamental limits of MSFDA. In particular, we develop an information-theoretic bound on the generalization error of the resulting target model, which illustrates an inherent bias-variance trade-off. We then provide insights on how to balance this trade-off from three perspectives, including domain aggregation, selective pseudo-labeling, and joint feature alignment, which leads to the design of novel algorithms. Experiments on multiple datasets validate our theoretical analysis and demonstrate the state-of-the-art performance of the proposed algorithm, especially on some of the most challenging datasets, including Office-Home and DomainNet.
https://proceedings.mlr.press/v202/shen23c.html
https://proceedings.mlr.press/v202/shen23c/shen23c.pdf
https://openreview.net/forum?id=b79OvQq0Cs
On Penalty-based Bilevel Gradient Descent Method
https://proceedings.mlr.press/v202/shen23c.html
Han Shen, Tianyi Chen
https://proceedings.mlr.press/v202/shen23c.html
ICML 2023
Bilevel optimization enjoys a wide range of applications in hyper-parameter optimization, meta-learning and reinforcement learning. However, bilevel problems are difficult to solve and recent progress on scalable bilevel algorithms mainly focuses on bilevel optimization problems where the lower-level objective is either strongly convex or unconstrained. In this work, we tackle the bilevel problem through the lens of the penalty method. We show that under certain conditions, the penalty reformulation recovers the solutions of the original bilevel problem. Further, we propose the penalty-based bilevel gradient descent algorithm and establish its finite-time convergence for the constrained bilevel problem without lower-level strong convexity. The experimental results showcase the efficiency of the proposed algorithm.
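In assumed notation, the penalty reformulation the abstract refers to takes roughly the following form (the paper's exact penalty term and constants may differ):

```latex
% Bilevel problem with upper-level objective f and lower-level objective g:
\[
  \min_{x}\; f\big(x, y^*(x)\big)
  \quad \text{s.t.} \quad y^*(x) \in \arg\min_{y}\; g(x, y).
\]
% Single-level penalized surrogate with penalty parameter gamma > 0, using the
% lower-level value function; solved by (alternating) gradient descent on x and y:
\[
  \min_{x, y}\; f(x, y) + \gamma \Big( g(x, y) - \min_{y'} g(x, y') \Big).
\]
```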
https://proceedings.mlr.press/v202/shen23d.html
https://proceedings.mlr.press/v202/shen23d/shen23d.pdf
https://openreview.net/forum?id=wZsnZkviro
Non-autoregressive Conditional Diffusion Models for Time Series Prediction
https://proceedings.mlr.press/v202/shen23d.html
Lifeng Shen, James Kwok
https://proceedings.mlr.press/v202/shen23d.html
ICML 2023
Recently, denoising diffusion models have led to significant breakthroughs in the generation of images, audio and text. However, it remains an open question how to adapt their strong modeling ability to time series. In this paper, we propose TimeDiff, a non-autoregressive diffusion model that achieves high-quality time series prediction with the introduction of two novel conditioning mechanisms: future mixup and autoregressive initialization. Similar to teacher forcing, future mixup allows parts of the ground-truth future to be used for conditioning, while autoregressive initialization helps better initialize the model with basic time series patterns such as short-term trends. Extensive experiments are performed on nine real-world datasets. Results show that TimeDiff consistently outperforms existing time series diffusion models, and also achieves the best overall performance across a variety of strong existing baselines (including transformers and FiLM).
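A hypothetical sketch of what a "future mixup" conditioning signal could look like is given below; the mask distribution and exactly which quantities are mixed are assumptions for illustration, not details taken from the paper.

```python
import torch

def future_mixup(pred_future, true_future, p=0.5):
    """Hypothetical 'future mixup' conditioning: during training, randomly reveal
    parts of the ground-truth future and fill the rest with a preliminary model
    prediction. Mask distribution and mixing recipe are illustrative assumptions."""
    mask = (torch.rand_like(true_future) < p).float()
    return mask * true_future + (1.0 - mask) * pred_future
```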
https://proceedings.mlr.press/v202/shen23e.html
https://proceedings.mlr.press/v202/shen23e/shen23e.pdf
https://openreview.net/forum?id=2C8Y6iao2I
Cross-Modal Fine-Tuning: Align then Refine
https://proceedings.mlr.press/v202/shen23e.html
Junhong Shen, Liam Li, Lucio M. Dery, Corey Staten, Mikhail Khodak, Graham Neubig, Ameet Talwalkar
https://proceedings.mlr.press/v202/shen23e.html
ICML 2023
Fine-tuning large-scale pretrained models has led to tremendous progress in well-studied modalities such as vision and NLP. However, similar gains have not been observed in many other modalities due to a lack of relevant pretrained models. In this work, we propose ORCA, a general cross-modal fine-tuning framework that extends the applicability of a single large-scale pretrained model to diverse modalities. ORCA adapts to a target task via an align-then-refine workflow: given the target input, ORCA first learns an embedding network that aligns the embedded feature distribution with the pretraining modality. The pretrained model is then fine-tuned on the embedded data to exploit the knowledge shared across modalities. Through extensive experiments, we show that ORCA obtains state-of-the-art results on 3 benchmarks containing over 60 datasets from 12 modalities, outperforming a wide range of hand-designed, AutoML, general-purpose, and task-specific cross-modal methods. We highlight the importance of data alignment via a series of ablation studies and exemplify ORCA’s utility in data-limited regimes.
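A schematic sketch of an align-then-refine workflow of this kind is shown below, with toy data, a linear-kernel MMD as a stand-in alignment distance, and placeholder modules; ORCA's actual architecture and alignment metric may differ.

```python
import torch
import torch.nn as nn

def mmd(a, b):
    """Simple linear-kernel MMD between two feature batches; a stand-in distance."""
    return ((a.mean(0) - b.mean(0)) ** 2).sum()

target_x = torch.randn(256, 32)           # target-modality inputs (toy)
pretrain_feats = torch.randn(256, 64)     # features from the pretraining modality (toy)
embedder = nn.Linear(32, 64)              # stage 1: embedding network to align
body = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))  # "pretrained" body (toy)

# Stage 1 (align): match the embedded target distribution to the pretraining features.
opt = torch.optim.Adam(embedder.parameters(), lr=1e-3)
for _ in range(100):
    loss = mmd(embedder(target_x), pretrain_feats)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2 (refine): fine-tune embedder + pretrained body on the embedded target task.
labels = torch.randint(0, 10, (256,))
opt = torch.optim.Adam(list(embedder.parameters()) + list(body.parameters()), lr=1e-4)
for _ in range(100):
    loss = nn.functional.cross_entropy(body(embedder(target_x)), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```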
https://proceedings.mlr.press/v202/shen23f.html
https://proceedings.mlr.press/v202/shen23f/shen23f.pdf
https://openreview.net/forum?id=PhSiZuHiVn
Auxiliary Modality Learning with Generalized Curriculum Distillation
https://proceedings.mlr.press/v202/shen23f.html
Yu Shen, Xijun Wang, Peng Gao, Ming Lin
https://proceedings.mlr.press/v202/shen23f.html
ICML 2023
Driven by the needs of real-world applications, Auxiliary Modality Learning (AML) offers the possibility to utilize more information from auxiliary data during training, while requiring data from only one or a few modalities at test time, to save overall computational cost and reduce the amount of input data for inference. In this work, we formally define “Auxiliary Modality Learning” (AML), systematically classify types of auxiliary modalities (in visual computing) and architectures for AML, and analyze their performance. We also analyze the conditions under which AML works well from the optimization and data distribution perspectives. To guide the choices required to achieve optimal performance with AML, we propose a novel method to assist in choosing the best auxiliary modality and estimating an upper bound on performance before executing AML. In addition, we propose a new AML method using generalized curriculum distillation to enable more effective curriculum learning. Our method achieves the best performance compared to other SOTA methods.
https://proceedings.mlr.press/v202/shenfeld23a.html
https://proceedings.mlr.press/v202/shenfeld23a/shenfeld23a.pdf
https://openreview.net/forum?id=Hk3m8Nh7mn
TGRL: An Algorithm for Teacher Guided Reinforcement Learning
https://proceedings.mlr.press/v202/shenfeld23a.html
Idan Shenfeld, Zhang-Wei Hong, Aviv Tamar, Pulkit Agrawal
https://proceedings.mlr.press/v202/shenfeld23a.html
ICML 2023
We consider solving sequential decision-making problems in the scenario where the agent has access to two supervision sources: $\textit{reward signal}$ and a $\textit{teacher}$ that can be queried to obtain a $\textit{good}$ action for any state encountered by the agent. Learning solely from rewards, or reinforcement learning, is data inefficient and may not learn high-reward policies in challenging scenarios involving sparse rewards or partial observability. On the other hand, learning from a teacher may sometimes be infeasible. For instance, the actions provided by a teacher with privileged information may be unlearnable by an agent with limited information (i.e., partial observability). In other scenarios, the teacher might be sub-optimal, and imitating their actions can limit the agent’s performance. To overcome these challenges, prior work proposed to jointly optimize imitation and reinforcement learning objectives but relied on heuristics and problem-specific hyper-parameter tuning to balance the two objectives. We introduce Teacher Guided Reinforcement Learning (TGRL), a principled approach to dynamically balance following the teacher’s guidance and leveraging RL. TGRL outperforms strong baselines across diverse domains without hyperparameter tuning.
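The generic combined objective that joint imitation-plus-RL approaches optimize can be written as below (assumed notation; TGRL's contribution is how the balancing coefficient is adjusted dynamically, which is not captured here):

```latex
% Reward maximization regularized toward the teacher policy with coefficient alpha:
\[
  J(\pi) \;=\; \mathbb{E}_{\pi}\Big[\textstyle\sum_t r(s_t, a_t)\Big]
  \;-\; \alpha \, \mathbb{E}_{s \sim \pi}\Big[ D_{\mathrm{KL}}\big(\pi(\cdot \mid s) \,\|\, \pi_{\mathrm{teacher}}(\cdot \mid s)\big) \Big].
\]
```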
https://proceedings.mlr.press/v202/sheng23a.html
https://proceedings.mlr.press/v202/sheng23a/sheng23a.pdf
https://openreview.net/forum?id=RRntzKrBTp
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
https://proceedings.mlr.press/v202/sheng23a.html
Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Beidi Chen, Percy Liang, Christopher Re, Ion Stoica, Ce Zhang
https://proceedings.mlr.press/v202/sheng23a.html
ICML 2023
The high computational and memory requirements of large language model (LLM) inference make it feasible only with multiple high-end accelerators. Motivated by the emerging demand for latency-insensitive tasks with batched processing, this paper initiates the study of high-throughput LLM inference using limited resources, such as a single commodity GPU. We present FlexGen, a high-throughput generation engine for running LLMs with limited GPU memory. FlexGen can be flexibly configured under various hardware resource constraints by aggregating memory and computation from the GPU, CPU, and disk. By solving a linear programming problem, it searches for efficient patterns to store and access tensors. FlexGen further compresses the weights and the attention cache to 4 bits with negligible accuracy loss. These techniques enable FlexGen to have a larger space of batch size choices and thus significantly increase maximum throughput. As a result, when running OPT-175B on a single 16GB GPU, FlexGen achieves significantly higher throughput compared to state-of-the-art offloading systems, reaching a generation throughput of 1 token/s for the first time with an effective batch size of 144. On the HELM benchmark, FlexGen can benchmark a 30B model with a 16GB GPU on 7 representative sub-scenarios in 21 hours. The code is available at https://github.com/FMInference/FlexGen.
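As an illustration of the 4-bit compression idea mentioned above, here is a sketch of simple group-wise quantization; FlexGen's exact recipe (group size, zero-point handling) may differ.

```python
import numpy as np

def quantize_4bit_groupwise(w, group_size=64):
    """Group-wise 4-bit quantization sketch: each group of values is mapped to
    integers in [0, 15] with its own scale and offset. Assumes w.size is a
    multiple of group_size. Dequantize via q * scale + lo."""
    w = np.asarray(w, dtype=np.float32).reshape(-1, group_size)
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0 + 1e-12
    q = np.clip(np.round((w - lo) / scale), 0, 15).astype(np.uint8)
    return q, scale, lo
```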
https://proceedings.mlr.press/v202/sherman23a.html
https://proceedings.mlr.press/v202/sherman23a/sherman23a.pdf
https://openreview.net/forum?id=DF6ypWrepg
Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation
https://proceedings.mlr.press/v202/sherman23a.html
Uri Sherman, Tomer Koren, Yishay Mansour
https://proceedings.mlr.press/v202/sherman23a.html
ICML 2023
We study reinforcement learning with linear function approximation and adversarially changing cost functions, a setup that has mostly been considered under simplifying assumptions such as full-information feedback or exploratory conditions. We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback, featuring a combination of mirror descent and least-squares policy evaluation in an auxiliary MDP used to compute exploration bonuses. Our algorithm obtains an $\widetilde O(K^{6/7})$ regret bound, improving significantly over the previous state of the art of $\widetilde O (K^{14/15})$ in this setting. In addition, we present a version of the same algorithm under the assumption that a simulator of the environment is available to the learner (but otherwise no exploratory assumptions are made), and prove that it obtains a state-of-the-art regret of $\widetilde O (K^{2/3})$.
https://proceedings.mlr.press/v202/shevchenko23a.html
https://proceedings.mlr.press/v202/shevchenko23a/shevchenko23a.pdf
https://openreview.net/forum?id=eStrtvtXiN
Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods
https://proceedings.mlr.press/v202/shevchenko23a.html
Aleksandr Shevchenko, Kevin Kögler, Hamed Hassani, Marco Mondelli
https://proceedings.mlr.press/v202/shevchenko23a.html
ICML 2023
Autoencoders are a popular model in many branches of machine learning and lossy data compression. However, their fundamental limits, the performance of gradient methods and the features learnt during optimization remain poorly understood, even in the two-layer setting. In fact, earlier work has considered either linear autoencoders or specific training regimes (leading to vanishing or diverging compression rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders trained in the challenging proportional regime in which the input dimension scales linearly with the size of the representation. Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods; their structure is also unveiled, thus leading to a concise description of the features obtained via training. For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders. Finally, while the results are proved for Gaussian data, numerical simulations on standard datasets display the universality of the theoretical predictions.
https://proceedings.mlr.press/v202/shi23a.html
https://proceedings.mlr.press/v202/shi23a/shi23a.pdf
https://openreview.net/forum?id=JSZmoN03Op
Large Language Models Can Be Easily Distracted by Irrelevant Context
https://proceedings.mlr.press/v202/shi23a.html
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H. Chi, Nathanael Schärli, Denny Zhou
https://proceedings.mlr.press/v202/shi23a.html
ICML 2023
Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In this work, we investigate the distractibility of large language models, i.e., how the model prediction can be distracted by irrelevant context. In particular, we introduce Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant information in the problem description. We use this benchmark to measure the distractibility of different prompting techniques for large language models, and find that the model is easily distracted by irrelevant information. We also identify several approaches for mitigating this deficiency, such as decoding with self-consistency and adding to the prompt an instruction that tells the language model to ignore the irrelevant information.
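The two mitigations mentioned in the abstract can be combined as in the sketch below: an "ignore irrelevant information" instruction plus self-consistency decoding; `llm_generate` and `extract_answer` are hypothetical helpers, not a real API.

```python
from collections import Counter

def self_consistency(question, llm_generate, extract_answer, k=16):
    """Sample k reasoning paths at nonzero temperature and return the majority
    answer (self-consistency), with an instruction prepended to the prompt telling
    the model to ignore irrelevant information. Helper functions are hypothetical."""
    prompt = "Solve the problem. Feel free to ignore irrelevant information.\n" + question
    answers = [extract_answer(llm_generate(prompt, temperature=0.7)) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]
```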
https://proceedings.mlr.press/v202/shi23b.html
https://proceedings.mlr.press/v202/shi23b/shi23b.pdf
https://openreview.net/forum?id=MYer35ydij
Everyone’s Preference Changes Differently: A Weighted Multi-Interest Model For Retrieval
https://proceedings.mlr.press/v202/shi23b.html
Hui Shi, Yupeng Gu, Yitong Zhou, Bo Zhao, Sicun Gao, Jishen Zhao
https://proceedings.mlr.press/v202/shi23b.html
ICML 2023
User embeddings (vectorized representations of a user) are essential in recommendation systems. Numerous approaches have been proposed to construct a representation for the user in order to find similar items for retrieval tasks, and they have proven effective in industrial recommendation systems. Recently, people have discovered the power of using multiple embeddings to represent a user, with the hope that each embedding represents the user’s interest in a certain topic. With multi-interest representations, it is important to model the user’s preference over the different topics and how that preference changes with time. However, existing approaches either fail to estimate the user’s affinity to each interest or unreasonably assume that every interest of every user fades at an equal rate with time, thus hurting the performance of candidate retrieval. In this paper, we propose the Multi-Interest Preference (MIP) model, an approach that not only produces multiple interests for users by using the user’s sequential engagement more effectively, but also automatically learns a set of weights to represent the preference over each embedding so that candidates can be retrieved from each interest proportionally. Extensive experiments have been done on various industrial-scale datasets to demonstrate the effectiveness of our approach.
https://proceedings.mlr.press/v202/shi23c.html
https://proceedings.mlr.press/v202/shi23c/shi23c.pdf
https://openreview.net/forum?id=3HMO9iSBdy
A Near-Optimal Algorithm for Safe Reinforcement Learning Under Instantaneous Hard Constraints
https://proceedings.mlr.press/v202/shi23c.html
Ming Shi, Yingbin Liang, Ness Shroff
https://proceedings.mlr.press/v202/shi23c.html
ICML 2023
In many applications of Reinforcement Learning (RL), it is critically important that the algorithm performs safely, such that instantaneous hard constraints are satisfied at each step, and unsafe states and actions are avoided. However, existing algorithms for “safe” RL are often designed under constraints that either require expected cumulative costs to be bounded or assume all states are safe. Thus, such algorithms could violate instantaneous hard constraints and traverse unsafe states (and actions) in practice. Hence, in this paper, we develop the first near-optimal safe RL algorithm for episodic Markov Decision Processes with unsafe states and actions under instantaneous hard constraints and the linear mixture model. It achieves a regret $\tilde{O}(\frac{d H^3 \sqrt{d K}}{\Delta_c})$ that nearly matches the state-of-the-art regret in the setting with only unsafe actions and that in the unconstrained setting, and is safe at each step, where $d$ is the feature-mapping dimension, $K$ is the number of episodes, $H$ is the episode length, and $\Delta_c$ is a safety-related parameter. We also provide a lower bound $\tilde{\Omega}(\max\{d H \sqrt{K}, \frac{H}{\Delta_c^2}\})$, which indicates that the dependency on $\Delta_c$ is necessary. Further, both our algorithm design and regret analysis involve several novel ideas, which may be of independent interest.
https://proceedings.mlr.press/v202/shi23d.html
https://proceedings.mlr.press/v202/shi23d/shi23d.pdf
https://openreview.net/forum?id=fn2NFlYLBL
Improving the Model Consistency of Decentralized Federated Learning
https://proceedings.mlr.press/v202/shi23d.html
Yifan Shi, Li Shen, Kang Wei, Yan Sun, Bo Yuan, Xueqian Wang, Dacheng Tao
https://proceedings.mlr.press/v202/shi23d.html
ICML 2023
To mitigate the privacy leakages and communication burdens of Federated Learning (FL), decentralized FL (DFL) discards the central server and each client only communicates with its neighbors in a decentralized communication network. However, existing DFL suffers from high inconsistency among local clients, which results in severe distribution shift and inferior performance compared with centralized FL (CFL), especially on heterogeneous data or sparse communication topologies. To alleviate this issue, we propose two DFL algorithms named DFedSAM and DFedSAM-MGS to improve the performance of DFL. Specifically, DFedSAM leverages gradient perturbation to generate local flat models via Sharpness Aware Minimization (SAM), which searches for models with uniformly low loss values. DFedSAM-MGS further boosts DFedSAM by adopting Multiple Gossip Steps (MGS) for better model consistency, which accelerates the aggregation of local flat models and better balances communication complexity and generalization. Theoretically, we present improved convergence rates $\small \mathcal{O}\big(\frac{1}{\sqrt{KT}}+\frac{1}{T}+\frac{1}{K^{1/2}T^{3/2}(1-\lambda)^2}\big)$ and $\small \mathcal{O}\big(\frac{1}{\sqrt{KT}}+\frac{1}{T}+\frac{\lambda^Q+1}{K^{1/2}T^{3/2}(1-\lambda^Q)^2}\big)$ in non-convex setting for DFedSAM and DFedSAM-MGS, respectively, where $1-\lambda$ is the spectral gap of gossip matrix and $Q$ is the number of MGS. Empirically, our methods can achieve competitive performance compared with CFL methods and outperform existing DFL methods.
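For context, here is a sketch of a single Sharpness-Aware Minimization (SAM) step, the local ingredient DFedSAM builds on; the decentralized gossip averaging and the MGS variant are not shown.

```python
import torch

def sam_step(model, loss_fn, batch, rho=0.05, lr=0.1):
    """One SAM step: ascend to the worst-case perturbation w + eps within an
    L2 ball of radius rho, then descend using the gradient at the perturbed point.
    loss_fn(model, batch) is a user-supplied loss; this is a local-update sketch only."""
    params = list(model.parameters())
    grads = torch.autograd.grad(loss_fn(model, batch), params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / norm for g in grads]
    with torch.no_grad():                       # perturb to w + eps
        for p, e in zip(params, eps):
            p.add_(e)
    grads_p = torch.autograd.grad(loss_fn(model, batch), params)  # gradient at perturbed point
    with torch.no_grad():                       # undo perturbation, then descend
        for p, e, g in zip(params, eps, grads_p):
            p.sub_(e)
            p.sub_(lr * g)
```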
https://proceedings.mlr.press/v202/shi23e.html
https://proceedings.mlr.press/v202/shi23e/shi23e.pdf
https://openreview.net/forum?id=dovQpb7Qda
UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers
https://proceedings.mlr.press/v202/shi23e.html
Dachuan Shi, Chaofan Tao, Ying Jin, Zhendong Yang, Chun Yuan, Jiaqi Wang
https://proceedings.mlr.press/v202/shi23e.html
ICML 2023
Real-world data contains a vast amount of multimodal information, among which vision and language are the two most representative modalities. Moreover, increasingly heavy models, e.g., Transformers, have attracted researchers’ attention to model compression. However, how to compress multimodal models, especially vision-language Transformers, is still under-explored. This paper proposes Unified and Progressive Pruning (UPop) as a universal vision-language Transformer compression framework, which incorporates 1) unified search of multimodal subnets in a continuous optimization space from the original model, which enables automatic assignment of pruning ratios among compressible modalities and structures; and 2) progressive search and retraining of the subnet, which maintains convergence between search and retraining to attain higher compression ratios. Experiments on various tasks, datasets, and model architectures demonstrate the effectiveness and versatility of the proposed UPop framework. The code is available at https://github.com/sdc17/UPop.
https://proceedings.mlr.press/v202/shi23f.html
https://proceedings.mlr.press/v202/shi23f/shi23f.pdf
https://openreview.net/forum?id=PKrDN23Rke
Sequence Modeling with Multiresolution Convolutional Memory
https://proceedings.mlr.press/v202/shi23f.html
Jiaxin Shi, Ke Alexander Wang, Emily Fox
https://proceedings.mlr.press/v202/shi23f.html
ICML 2023
Efficiently capturing the long-range patterns in sequential data sources salient to a given task, such as classification and generative modeling, poses a fundamental challenge. Popular approaches in this space trade off among the memory burden of brute-force enumeration and comparison, as in transformers; the computational burden of complicated sequential dependencies, as in recurrent neural networks; and the parameter burden of convolutional networks with many or large filters. We instead take inspiration from wavelet-based multiresolution analysis to define a new building block for sequence modeling, which we call a MultiresLayer. The key component of our model is the multiresolution convolution, capturing multiscale trends in the input sequence. Our MultiresConv can be implemented with shared filters across a dilated causal convolution tree. Thus it garners the computational advantages of convolutional networks and the principled theoretical motivation of wavelet decompositions. Our MultiresLayer is straightforward to implement, requires significantly fewer parameters, and maintains at most an $O(N \log N)$ memory footprint for a length-$N$ sequence. Yet, by stacking such layers, our model yields state-of-the-art performance on a number of sequence classification and autoregressive density estimation tasks using the CIFAR-10, ListOps, and PTB-XL datasets.
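A sketch of the shared-filter dilated causal convolution idea in PyTorch is given below, reusing one depthwise filter across scales with doubling dilation; the paper's MultiresLayer (for instance, how the scales are mixed) may differ in detail.

```python
import torch
import torch.nn.functional as F

def multires_causal_conv(x, h, depth=4):
    """Wavelet-style multiresolution causal convolution sketch: the SAME short
    depthwise filter h is reused at every scale with dilation 1, 2, 4, ..., and the
    per-scale outputs are summed. x: (batch, channels, length); h: (channels, 1, kernel)."""
    out = torch.zeros_like(x)
    y = x
    for level in range(depth):
        d = 2 ** level
        pad = (h.shape[-1] - 1) * d                                  # left-pad for causality
        y = F.conv1d(F.pad(y, (pad, 0)), h, groups=x.shape[1], dilation=d)
        out = out + y
    return out

# Toy usage
x = torch.randn(2, 8, 128)        # (batch, channels, length)
h = torch.randn(8, 1, 4) / 4.0    # one short filter per channel, shared across all scales
y = multires_causal_conv(x, h, depth=5)
```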
https://proceedings.mlr.press/v202/shi23g.html
https://proceedings.mlr.press/v202/shi23g/shi23g.pdf
https://openreview.net/forum?id=mlCogIjF0S
Statistical Inference on Multi-armed Bandits with Delayed Feedback
https://proceedings.mlr.press/v202/shi23g.html
Lei Shi, Jingshen Wang, Tianhao Wu
https://proceedings.mlr.press/v202/shi23g.html
ICML 2023
Multi-armed bandit (MAB) algorithms have been increasingly used to complement or integrate with A/B tests and randomized clinical trials in e-commerce, healthcare, and policymaking. Recent developments incorporate possible delayed feedback. While the existing MAB literature often focuses on maximizing the expected cumulative reward (or, equivalently, regret minimization), few efforts have been devoted to establishing valid statistical inference approaches to quantify the uncertainty of learned policies. We attempt to fill this gap by providing a unified statistical inference framework for policy evaluation in which a target policy is allowed to differ from the data-collecting policy, and our framework allows the delay to be associated with the treatment arms. We present an adaptively weighted estimator that, on one hand, incorporates the arm-dependent delay mechanism to achieve consistency, and on the other hand mitigates the variance inflation across stages due to vanishing sampling probabilities. In particular, our estimator does not critically depend on the ability to estimate the unknown delay mechanism. Under appropriate conditions, we prove that our estimator converges to a normal distribution as the number of time points goes to infinity, which provides guarantees for large-sample statistical inference. We illustrate the finite-sample performance of our approach through Monte Carlo experiments.
https://proceedings.mlr.press/v202/shi23h.html
https://proceedings.mlr.press/v202/shi23h/shi23h.pdf
https://openreview.net/forum?id=V5ULNu7Kix
Provably Efficient Offline Reinforcement Learning with Perturbed Data Sources
https://proceedings.mlr.press/v202/shi23h.html
Chengshuai Shi, Wei Xiong, Cong Shen, Jing Yang
https://proceedings.mlr.press/v202/shi23h.html
ICML 2023
Existing theoretical studies on offline reinforcement learning (RL) mostly consider a dataset sampled directly from the target task. In practice, however, data often come from several heterogeneous but related sources. Motivated by this gap, this work aims at rigorously understanding offline RL with multiple datasets that are collected from randomly perturbed versions of the target task instead of from itself. An information-theoretic lower bound is derived, which reveals a necessary requirement on the number of involved sources in addition to that on the number of data samples. Then, a novel HetPEVI algorithm is proposed, which simultaneously considers the sample uncertainties from a finite number of data samples per data source and the source uncertainties due to a finite number of available data sources. Theoretical analyses demonstrate that HetPEVI can solve the target task as long as the data sources collectively provide a good data coverage. Moreover, HetPEVI is demonstrated to be optimal up to a polynomial factor of the horizon length. Finally, the study is extended to offline Markov games and offline robust RL, which demonstrates the generality of the proposed designs and theoretical analyses.
https://proceedings.mlr.press/v202/shi23i.html
https://proceedings.mlr.press/v202/shi23i/shi23i.pdf
https://openreview.net/forum?id=hhszTjRaBx
On the Complexity of Bayesian Generalization
https://proceedings.mlr.press/v202/shi23i.html
Yu-Zhe Shi, Manjie Xu, John E. Hopcroft, Kun He, Joshua B. Tenenbaum, Song-Chun Zhu, Ying Nian Wu, Wenjuan Han, Yixin Zhu
https://proceedings.mlr.press/v202/shi23i.html
ICML 2023
We examine concept generalization at a large scale in the natural visual spectrum. Established computational modes (i.e., rule-based or similarity-based) have primarily been studied in isolation, focusing on confined and abstract problem spaces. In this work, we study these two modes when the problem space scales up and when the complexity of concepts becomes diverse. At the representational level, we investigate how the complexity varies when a visual concept is mapped to the representation space. Prior literature has shown that two types of complexities (Griffiths & Tenenbaum, 2003) build an inverted-U relation (Donderi, 2006; Sun & Firestone, 2021). Leveraging Representativeness of Attribute (RoA), we computationally confirm that models use attributes with high RoA to describe visual concepts, and that the description length follows an inverted-U relation with increasing visual complexity. At the computational level, we examine how the complexity of the representation affects the shift between rule- and similarity-based generalization. We hypothesize that category-conditioned visual modeling estimates the co-occurrence frequency between visual and categorical attributes, thus potentially serving as the prior for the natural visual world. Experimental results show that representations with relatively high subjective complexity outperform those with relatively low subjective complexity in rule-based generalization, while the trend is the opposite in similarity-based generalization.
https://proceedings.mlr.press/v202/shi23j.html
https://proceedings.mlr.press/v202/shi23j/shi23j.pdf
https://openreview.net/forum?id=DBlWCsOy94
Understanding and Generalizing Contrastive Learning from the Inverse Optimal Transport Perspective
https://proceedings.mlr.press/v202/shi23j.html
Liangliang Shi, Gu Zhang, Haoyu Zhen, Jintao Fan, Junchi Yan
https://proceedings.mlr.press/v202/shi23j.html
ICML 2023
Previous research on contrastive learning (CL) has primarily focused on pairwise views to learn representations by attracting positive samples and repelling negative ones. In this work, we aim to understand and generalize CL from a point-set matching perspective, instead of the comparison between two points. Specifically, we formulate CL as a form of inverse optimal transport (IOT), which involves a bilevel optimization procedure for learning, where the outer minimization aims to learn the representations and the inner one learns the coupling (i.e., the matching probability matrix) between the point sets. In particular, by adjusting the relaxation degree of constraints in the inner minimization, we obtain three contrastive losses and show that the dominant contrastive loss in the literature, InfoNCE, falls into one of these losses. This reveals a new and more general algorithmic framework for CL. Additionally, the soft matching scheme in IOT induces a uniformity penalty that enhances representation learning, akin to CL’s uniformity. Results on vision benchmarks show the effectiveness of our derived loss family and the new uniformity term.
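For reference, the InfoNCE loss that the abstract says is recovered as a special case (standard form, shown only for context, not the paper's generalized IOT family):

```latex
% InfoNCE for an anchor z_i with positive z_j, similarity sim(.,.) and temperature tau:
\[
  \mathcal{L}_{\mathrm{InfoNCE}}(i, j)
  = -\log \frac{\exp\!\big(\mathrm{sim}(z_i, z_j)/\tau\big)}
               {\sum_{k \neq i} \exp\!\big(\mathrm{sim}(z_i, z_k)/\tau\big)}.
\]
```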