abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v202/markov23a.html
|
https://proceedings.mlr.press/v202/markov23a/markov23a.pdf
|
https://openreview.net/forum?id=Nqp8A5IDzq
|
Quantized Distributed Training of Large Models with Convergence Guarantees
|
https://proceedings.mlr.press/v202/markov23a.html
|
Ilia Markov, Adrian Vladu, Qi Guo, Dan Alistarh
|
https://proceedings.mlr.press/v202/markov23a.html
|
ICML 2023
|
Communication-reduction techniques are a popular way to improve scalability in data-parallel training of deep neural networks (DNNs). The recent emergence of large language models such as GPT has created the need for new approaches to exploit data-parallelism. Among these, fully-sharded data parallel (FSDP) training is highly popular, yet it still encounters scalability bottlenecks. One reason is that applying compression techniques to FSDP is challenging: as the vast majority of the communication involves the model’s weights, direct compression alters convergence and leads to accuracy loss. We present QSDP, a variant of FSDP which supports both gradient and weight quantization with theoretical guarantees, is simple to implement and has essentially no overheads. To derive QSDP we prove that a natural modification of SGD achieves convergence even when we only maintain quantized weights, and thus the domain over which we train consists of quantized points and is, therefore, highly non-convex. We validate this approach by training GPT-family models with up to 1.3 billion parameters on a multi-node cluster. Experiments show that QSDP preserves model accuracy, while completely removing the communication bottlenecks of FSDP, providing end-to-end speedups of up to 2.2x.
|
https://proceedings.mlr.press/v202/maronas23a.html
|
https://proceedings.mlr.press/v202/maronas23a/maronas23a.pdf
|
https://openreview.net/forum?id=OvFjnqxmuo
|
Efficient Transformed Gaussian Processes for Non-Stationary Dependent Multi-class Classification
|
https://proceedings.mlr.press/v202/maronas23a.html
|
Juan Maroñas, Daniel Hernández-Lobato
|
https://proceedings.mlr.press/v202/maronas23a.html
|
ICML 2023
|
This work introduces the Efficient Transformed Gaussian Process (ETGP), a new way of creating $C$ stochastic processes characterized by: 1) the $C$ processes are non-stationary, 2) the $C$ processes are dependent by construction without needing a mixing matrix, 3) training and making predictions is very efficient since the number of Gaussian Processes (GP) operations (e.g. inverting the inducing point’s covariance matrix) does not depend on the number of processes. This makes the ETGP particularly suited for multi-class problems with a very large number of classes, which are the problems studied in this work. ETGP exploits the recently proposed Transformed Gaussian Process (TGP), a stochastic process specified by transforming a Gaussian Process using an invertible transformation. However, unlike TGP, ETGP is constructed by transforming a single sample from a GP using $C$ invertible transformations. We derive an efficient sparse variational inference algorithm for the proposed model and demonstrate its utility in 5 classification tasks which include low/medium/large datasets and a different number of classes, ranging from just a few to hundreds. Our results show that ETGP, in general, outperforms state-of-the-art methods for multi-class classification based on GPs, and has a lower computational cost (around one order of magnitude smaller).
|
https://proceedings.mlr.press/v202/marro23a.html
|
https://proceedings.mlr.press/v202/marro23a/marro23a.pdf
|
https://openreview.net/forum?id=qO8YziH2hO
|
Computational Asymmetries in Robust Classification
|
https://proceedings.mlr.press/v202/marro23a.html
|
Samuele Marro, Michele Lombardi
|
https://proceedings.mlr.press/v202/marro23a.html
|
ICML 2023
|
In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is $\mathit{NP}$-hard, ensuring their robustness at training time is $\Sigma_2^P$-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classification approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry, by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is $\mathit{NP}$-hard, while attacking it is $\Sigma_2^P$-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.
|
https://proceedings.mlr.press/v202/marwah23a.html
|
https://proceedings.mlr.press/v202/marwah23a/marwah23a.pdf
|
https://openreview.net/forum?id=nEsNOPLpgb
|
Neural Network Approximations of PDEs Beyond Linearity: A Representational Perspective
|
https://proceedings.mlr.press/v202/marwah23a.html
|
Tanya Marwah, Zachary Chase Lipton, Jianfeng Lu, Andrej Risteski
|
https://proceedings.mlr.press/v202/marwah23a.html
|
ICML 2023
|
A burgeoning line of research has developed deep neural networks capable of approximating the solutions to high dimensional PDEs, opening related lines of theoretical inquiry focused on explaining how it is that these models appear to evade the curse of dimensionality. However, most theoretical analyses thus far have been limited to linear PDEs. In this work, we take a step towards studying the representational power of neural networks for approximating solutions to nonlinear PDEs. We focus on a class of PDEs known as nonlinear elliptic variational PDEs, whose solutions minimize an Euler-Lagrange energy functional $\mathcal{E}(u) = \int_\Omega L(x, u(x), \nabla u(x)) - f(x) u(x)dx$. We show that if composing a function with Barron norm $b$ with partial derivatives of $L$ produces a function of Barron norm at most $B_L b^p$, the solution to the PDE can be $\epsilon$-approximated in the $L^2$ sense by a function with Barron norm $O\left(\left(dB_L\right)^{\max\{p \log(1/ \epsilon), p^{\log(1/\epsilon)}\}}\right)$. By a classical result due to Barron (1993), this correspondingly bounds the size of a 2-layer neural network needed to approximate the solution. Treating $p, \epsilon, B_L$ as constants, this quantity is polynomial in dimension, thus showing neural networks can evade the curse of dimensionality. Our proof technique involves neurally simulating (preconditioned) gradient descent in an appropriate Hilbert space, which converges exponentially fast to the solution of the PDE, and such that we can bound the increase of the Barron norm at each iterate. Our results subsume and substantially generalize analogous prior results for linear elliptic PDEs over a unit hypercube.
|
https://proceedings.mlr.press/v202/mashkaria23a.html
|
https://proceedings.mlr.press/v202/mashkaria23a/mashkaria23a.pdf
|
https://openreview.net/forum?id=teVdwyjrVn
|
Generative Pretraining for Black-Box Optimization
|
https://proceedings.mlr.press/v202/mashkaria23a.html
|
Satvik Mehul Mashkaria, Siddarth Krishnamoorthy, Aditya Grover
|
https://proceedings.mlr.press/v202/mashkaria23a.html
|
ICML 2023
|
Many problems in science and engineering involve optimizing an expensive black-box function over a high-dimensional space. In the offline model-based optimization (MBO) setting, we assume access to a fixed, offline dataset for pretraining and a small budget for online function evaluations. Prior approaches seek to utilize the offline data to approximate the function or its inverse but are not sufficiently accurate far from the data distribution. We propose BONET, a generative framework for pretraining a novel model-based optimizer using offline datasets. In BONET, we train an autoregressive model on fixed-length trajectories derived from an offline dataset. We design a sampling strategy to synthesize trajectories from offline data using a simple heuristic of rolling out monotonic transitions from low-fidelity to high-fidelity samples. Empirically, we instantiate BONET using a causally masked Transformer (Radford et al., 2019) and evaluate it on Design-Bench (Trabucco et al., 2022), where we rank the best on average, outperforming state-of-the-art baselines.
|
https://proceedings.mlr.press/v202/mate23a.html
|
https://proceedings.mlr.press/v202/mate23a/mate23a.pdf
|
https://openreview.net/forum?id=priTMs7n6e
|
Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation
|
https://proceedings.mlr.press/v202/mate23a.html
|
Aditya Mate, Bryan Wilder, Aparna Taneja, Milind Tambe
|
https://proceedings.mlr.press/v202/mate23a.html
|
ICML 2023
|
We consider the task of evaluating policies of algorithmic resource allocation through randomized controlled trials (RCTs). Such policies are tasked with optimizing the utilization of limited intervention resources, with the goal of maximizing the benefits derived. Evaluation of such allocation policies through RCTs proves difficult, notwithstanding the scale of the trial, because the individuals’ outcomes are inextricably interlinked through resource constraints controlling the policy decisions. Our key contribution is to present a new estimator leveraging our proposed novel concept, which involves retrospective reshuffling of participants across experimental arms at the end of an RCT. We identify conditions under which such reassignments are permissible and can be leveraged to construct counterfactual trials, whose outcomes can be accurately ascertained, for free. We prove theoretically that such an estimator is more accurate than common estimators based on sample means – we show that it returns an unbiased estimate and simultaneously reduces variance. We demonstrate the value of our approach through empirical experiments on synthetic, semisynthetic as well as real case study data and show improved estimation accuracy across the board.
|
https://proceedings.mlr.press/v202/maurais23a.html
|
https://proceedings.mlr.press/v202/maurais23a/maurais23a.pdf
|
https://openreview.net/forum?id=xPU9F90HjM
|
Multi-Fidelity Covariance Estimation in the Log-Euclidean Geometry
|
https://proceedings.mlr.press/v202/maurais23a.html
|
Aimee Maurais, Terrence Alsup, Benjamin Peherstorfer, Youssef Marzouk
|
https://proceedings.mlr.press/v202/maurais23a.html
|
ICML 2023
|
We introduce a multi-fidelity estimator of covariance matrices that employs the log-Euclidean geometry of the symmetric positive-definite manifold. The estimator fuses samples from a hierarchy of data sources of differing fidelities and costs for variance reduction while guaranteeing definiteness, in contrast with previous approaches. The new estimator makes covariance estimation tractable in applications where simulation or data collection is expensive; to that end, we develop an optimal sample allocation scheme that minimizes the mean-squared error of the estimator given a fixed budget. Guaranteed definiteness is crucial to metric learning, data assimilation, and other downstream tasks. Evaluations of our approach using data from physical applications (heat conduction, fluid dynamics) demonstrate more accurate metric learning and speedups of more than one order of magnitude compared to benchmarks.
|
https://proceedings.mlr.press/v202/mayekar23a.html
|
https://proceedings.mlr.press/v202/mayekar23a/mayekar23a.pdf
|
https://openreview.net/forum?id=sHlxJIWfZb
|
Communication-Constrained Bandits under Additive Gaussian Noise
|
https://proceedings.mlr.press/v202/mayekar23a.html
|
Prathamesh Mayekar, Jonathan Scarlett, Vincent Y. F. Tan
|
https://proceedings.mlr.press/v202/mayekar23a.html
|
ICML 2023
|
We study a distributed stochastic multi-armed bandit where a client supplies the learner with communication-constrained feedback based on the rewards for the corresponding arm pulls. In our setup, the client must encode the rewards such that the second moment of the encoded rewards is no more than $P$, and this encoded reward is further corrupted by additive Gaussian noise of variance $\sigma^2$; the learner only has access to this corrupted reward. For this setting, we derive an information-theoretic lower bound of $\Omega\left(\sqrt{\frac{KT}{\mathtt{SNR} \wedge1}} \right)$ on the minimax regret of any scheme, where $\mathtt{SNR}\coloneqq \frac{P}{\sigma^2}$, and $K$ and $T$ are the number of arms and time horizon, respectively. Furthermore, we propose a multi-phase bandit algorithm, $\mathtt{UE}\text{-}\mathtt{UCB}\text{++}$, which matches this lower bound to a minor additive factor. $\mathtt{UE}\text{-}\mathtt{UCB}\text{++}$ performs uniform exploration in its initial phases and then utilizes the upper confidence bound (UCB) bandit algorithm in its final phase. An interesting feature of $\mathtt{UE}\text{-}\mathtt{UCB}\text{++}$ is that the coarser estimates of the mean rewards formed during a uniform exploration phase help to refine the encoding protocol in the next phase, leading to more accurate mean estimates of the rewards in the subsequent phase. This positive reinforcement cycle is critical to reducing the number of uniform exploration rounds and closely matching our lower bound.
|
https://proceedings.mlr.press/v202/mazzetto23a.html
|
https://proceedings.mlr.press/v202/mazzetto23a/mazzetto23a.pdf
|
https://openreview.net/forum?id=k5d0DhHayB
|
Nonparametric Density Estimation under Distribution Drift
|
https://proceedings.mlr.press/v202/mazzetto23a.html
|
Alessio Mazzetto, Eli Upfal
|
https://proceedings.mlr.press/v202/mazzetto23a.html
|
ICML 2023
|
We study nonparametric density estimation in non-stationary drift settings. Given a sequence of independent samples taken from a distribution that gradually changes in time, the goal is to compute the best estimate for the current distribution. We prove tight minimax risk bounds for both discrete and continuous smooth densities, where the minimum is over all possible estimates and the maximum is over all possible distributions that satisfy the drift constraints. Our technique handles a broad class of drift models and generalizes previous results on agnostic learning under drift.
|
https://proceedings.mlr.press/v202/mbacke23a.html
|
https://proceedings.mlr.press/v202/mbacke23a/mbacke23a.pdf
|
https://openreview.net/forum?id=Jpbyykdtmt
|
PAC-Bayesian Generalization Bounds for Adversarial Generative Models
|
https://proceedings.mlr.press/v202/mbacke23a.html
|
Sokhna Diarra Mbacke, Florence Clerc, Pascal Germain
|
https://proceedings.mlr.press/v202/mbacke23a.html
|
ICML 2023
|
We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance. Our first result on the Wasserstein distance assumes the instance space is bounded, while our second result takes advantage of dimensionality reduction. Our results naturally apply to Wasserstein GANs and Energy-Based GANs, and our bounds provide new training objectives for these two. Although our work is mainly theoretical, we perform numerical experiments showing non-vacuous generalization bounds for Wasserstein GANs on synthetic datasets.
|
https://proceedings.mlr.press/v202/mckinzie23a.html
|
https://proceedings.mlr.press/v202/mckinzie23a/mckinzie23a.pdf
|
https://openreview.net/forum?id=pw5vm7tzeE
|
Robustness in Multimodal Learning under Train-Test Modality Mismatch
|
https://proceedings.mlr.press/v202/mckinzie23a.html
|
Brandon Mckinzie, Vaishaal Shankar, Joseph Yitan Cheng, Yinfei Yang, Jonathon Shlens, Alexander T Toshev
|
https://proceedings.mlr.press/v202/mckinzie23a.html
|
ICML 2023
|
Multimodal learning is defined as learning over multiple heterogeneous input modalities such as video, audio, and text. In this work, we are concerned with understanding how models behave as the types of modalities differ between training and deployment, a situation that naturally arises in many applications of multimodal learning to hardware platforms. We present a multimodal robustness framework to provide a systematic analysis of common multimodal representation learning methods. Further, we identify robustness shortcomings of these approaches and propose two intervention techniques leading to $1.5\times$-$4\times$ robustness improvements on three datasets, AudioSet, Kinetics-400 and ImageNet-Captions. Finally, we demonstrate that these interventions better utilize additional modalities, if present, to achieve competitive results of $44.2$ mAP on AudioSet 20K.
|
https://proceedings.mlr.press/v202/mehrabi23a.html
|
https://proceedings.mlr.press/v202/mehrabi23a/mehrabi23a.pdf
|
https://openreview.net/forum?id=wCoQtW29XE
|
A Model-free Closeness-of-influence Test for Features in Supervised Learning
|
https://proceedings.mlr.press/v202/mehrabi23a.html
|
Mohammad Mehrabi, Ryan A. Rossi
|
https://proceedings.mlr.press/v202/mehrabi23a.html
|
ICML 2023
|
Understanding the effect of a feature vector $x\in \mathbb{R}^d$ on the response value (label) $y\in \mathbb{R}$ is the cornerstone of many statistical learning problems. Ideally, it is desired to understand how a set of collected features combine together and influence the response value, but this problem is notoriously difficult, due to the high-dimensionality of data and limited number of labeled data points, among many others. In this work, we take a new perspective on this problem, and we study the question of assessing the difference of influence that two given features have on the response value. We first propose a notion of closeness for the influence of features, and show that our definition recovers the familiar notion of the magnitude of coefficients in the parametric model. We then propose a novel method to test for the closeness of influence in general model-free supervised learning problems. Our proposed test can be used with a finite number of samples with control on the type I error rate, no matter the ground truth conditional law $\mathcal{L}(Y|X)$. We analyze the power of our test for two general learning problems: i) linear regression, and ii) binary classification under a mixture of Gaussian models, and show that under the proper choice of score function, an internal component of our test, with a sufficient number of samples will achieve full statistical power. We evaluate our findings through extensive numerical simulations; specifically, we adopt the datamodel framework (Ilyas et al., 2022) for the CIFAR-10 dataset to identify pairs of training samples with different influence on the trained model via optional black box training mechanisms.
|
https://proceedings.mlr.press/v202/mei23a.html
|
https://proceedings.mlr.press/v202/mei23a/mei23a.pdf
|
https://openreview.net/forum?id=XqyXhjVRxR
|
Stochastic Gradient Succeeds for Bandits
|
https://proceedings.mlr.press/v202/mei23a.html
|
Jincheng Mei, Zixin Zhong, Bo Dai, Alekh Agarwal, Csaba Szepesvari, Dale Schuurmans
|
https://proceedings.mlr.press/v202/mei23a.html
|
ICML 2023
|
We show that the stochastic gradient bandit algorithm converges to a globally optimal policy at an $O(1/t)$ rate, even with a constant step size. Remarkably, global convergence of the stochastic gradient bandit algorithm has not been previously established, even though it is an old algorithm known to be applicable to bandits. The new result is achieved by establishing two novel technical findings: first, the noise of the stochastic updates in the gradient bandit algorithm satisfies a strong “growth condition” property, where the variance diminishes whenever progress becomes small, implying that additional noise control via diminishing step sizes is unnecessary; second, a form of “weak exploration” is automatically achieved through the stochastic gradient updates, since they prevent the action probabilities from decaying faster than $O(1/t)$, thus ensuring that every action is sampled infinitely often with probability $1$. These two findings can be used to show that the stochastic gradient update is already “sufficient” for bandits in the sense that exploration versus exploitation is automatically balanced in a manner that ensures almost sure convergence to a global optimum. These novel theoretical findings are further verified by experimental results.
|
https://proceedings.mlr.press/v202/melnychuk23a.html
|
https://proceedings.mlr.press/v202/melnychuk23a/melnychuk23a.pdf
|
https://openreview.net/forum?id=aa6ejr9t49
|
Normalizing Flows for Interventional Density Estimation
|
https://proceedings.mlr.press/v202/melnychuk23a.html
|
Valentyn Melnychuk, Dennis Frauen, Stefan Feuerriegel
|
https://proceedings.mlr.press/v202/melnychuk23a.html
|
ICML 2023
|
Existing machine learning methods for causal inference usually estimate quantities expressed via the mean of potential outcomes (e.g., average treatment effect). However, such quantities do not capture the full information about the distribution of potential outcomes. In this work, we estimate the density of potential outcomes after interventions from observational data. For this, we propose a novel, fully-parametric deep learning method called Interventional Normalizing Flows. Specifically, we combine two normalizing flows, namely (i) a nuisance flow for estimating nuisance parameters and (ii) a target flow for parametric estimation of the density of potential outcomes. We further develop a tractable optimization objective based on a one-step bias correction for efficient and doubly robust estimation of the target flow parameters. As a result, our Interventional Normalizing Flows offer a properly normalized density estimator. Across various experiments, we demonstrate that our Interventional Normalizing Flows are expressive and highly effective, and scale well with both sample size and high-dimensional confounding. To the best of our knowledge, our Interventional Normalizing Flows are the first proper fully-parametric, deep learning method for density estimation of potential outcomes.
|
https://proceedings.mlr.press/v202/melnyk23a.html
|
https://proceedings.mlr.press/v202/melnyk23a/melnyk23a.pdf
|
https://openreview.net/forum?id=K2gn1WiLAu
|
Reprogramming Pretrained Language Models for Antibody Sequence Infilling
|
https://proceedings.mlr.press/v202/melnyk23a.html
|
Igor Melnyk, Vijil Chenthamarakshan, Pin-Yu Chen, Payel Das, Amit Dhurandhar, Inkit Padhi, Devleena Das
|
https://proceedings.mlr.press/v202/melnyk23a.html
|
ICML 2023
|
Antibodies comprise the most versatile class of binding molecules, with numerous applications in biomedicine. Computational design of antibodies involves generating novel and diverse sequences, while maintaining structural consistency. Unique to antibodies, designing the complementarity-determining region (CDR), which determines the antigen binding affinity and specificity, creates its own unique challenges. Recent deep learning models have shown impressive results; however, the limited number of known antibody sequence/structure pairs frequently leads to degraded performance, particularly lacking diversity in the generated sequences. In our work we address this challenge by leveraging Model Reprogramming (MR), which repurposes pretrained models on a source language to adapt to the tasks that are in a different language and have scarce data - where it may be difficult to train a high-performing model from scratch or effectively fine-tune an existing pre-trained model on the specific task. Specifically, we introduce ReprogBert in which a pretrained English language model is repurposed for protein sequence infilling - thus considering cross-language adaptation using less data. Results on antibody design benchmarks show that our model on a low-resource antibody sequence dataset provides highly diverse CDR sequences, up to more than a two-fold increase in diversity over the baselines, without losing structural integrity and naturalness. The generated sequences also demonstrate enhanced antigen binding specificity and virus neutralization ability. Code is available at https://github.com/IBM/ReprogBERT
|
https://proceedings.mlr.press/v202/memarrast23a.html
|
https://proceedings.mlr.press/v202/memarrast23a/memarrast23a.pdf
|
https://openreview.net/forum?id=i4SDT6qIIl
|
Superhuman Fairness
|
https://proceedings.mlr.press/v202/memarrast23a.html
|
Omid Memarrast, Linh Vu, Brian D Ziebart
|
https://proceedings.mlr.press/v202/memarrast23a.html
|
ICML 2023
|
The fairness of machine learning-based decisions has become an increasingly important focus in the design of supervised machine learning methods. Most fairness approaches optimize a specified trade-off between performance measure(s) (e.g., accuracy, log loss, or AUC) and fairness metric(s) (e.g., demographic parity, equalized odds). This begs the question: are the right performance-fairness trade-offs being specified? We instead re-cast fair machine learning as an imitation learning task by introducing superhuman fairness, which seeks to simultaneously outperform human decisions on multiple predictive performance and fairness measures. We demonstrate the benefits of this approach given suboptimal decisions.
|
https://proceedings.mlr.press/v202/meng23a.html
|
https://proceedings.mlr.press/v202/meng23a/meng23a.pdf
|
https://openreview.net/forum?id=dWEKxwes02
|
A Model-Based Method for Minimizing CVaR and Beyond
|
https://proceedings.mlr.press/v202/meng23a.html
|
Si Yi Meng, Robert M. Gower
|
https://proceedings.mlr.press/v202/meng23a.html
|
ICML 2023
|
We develop a variant of the stochastic prox-linear method for minimizing the Conditional Value-at-Risk (CVaR) objective. CVaR is a risk measure focused on minimizing worst-case performance, defined as the average of the top quantile of the losses. In machine learning, such a risk measure is useful to train more robust models. Although the stochastic subgradient method (SGM) is a natural choice for minimizing the CVaR objective, we show that our stochastic prox-linear (SPL+) algorithm can better exploit the structure of the objective, while still providing a convenient closed form update. Our SPL+ method also adapts to the scaling of the loss function, which allows for easier tuning. We then specialize a general convergence theorem for SPL+ to our setting, and show that it allows for a wider selection of step sizes compared to SGM. We support this theoretical finding experimentally.
|
https://proceedings.mlr.press/v202/meng23b.html
|
https://proceedings.mlr.press/v202/meng23b/meng23b.pdf
|
https://openreview.net/forum?id=jbAjEhBuOZ
|
Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
|
https://proceedings.mlr.press/v202/meng23b.html
|
Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek Abdelzaher, Jiawei Han
|
https://proceedings.mlr.press/v202/meng23b.html
|
ICML 2023
|
Recent studies have revealed the intriguing few-shot learning ability of pretrained language models (PLMs): They can quickly adapt to a new task when fine-tuned on a small amount of labeled data formulated as prompts, without requiring abundant task-specific annotations. Despite their promising performance, most existing few-shot approaches that only learn from the small training set still underperform fully supervised training by nontrivial margins. In this work, we study few-shot learning with PLMs from a different perspective: We first tune an autoregressive PLM on the few-shot samples and then use it as a generator to synthesize a large amount of novel training samples which augment the original training set. To encourage the generator to produce label-discriminative samples, we train it via weighted maximum likelihood where the weight of each token is automatically adjusted based on a discriminative meta-learning objective. A classification PLM can then be fine-tuned on both the few-shot and the synthetic samples with regularization for better generalization and stability. Our approach FewGen achieves an overall better result across seven classification tasks of the GLUE benchmark than existing few-shot learning methods, improving no-augmentation methods by 5+ average points, and outperforming augmentation methods by 3+ average points.
|
https://proceedings.mlr.press/v202/merlis23a.html
|
https://proceedings.mlr.press/v202/merlis23a/merlis23a.pdf
|
https://openreview.net/forum?id=NPziNAQTm4
|
On Preemption and Learning in Stochastic Scheduling
|
https://proceedings.mlr.press/v202/merlis23a.html
|
Nadav Merlis, Hugo Richard, Flore Sentenac, Corentin Odic, Mathieu Molina, Vianney Perchet
|
https://proceedings.mlr.press/v202/merlis23a.html
|
ICML 2023
|
We study single-machine scheduling of jobs, each belonging to a job type that determines its duration distribution. We start by analyzing the scenario where the type characteristics are known and then move to two learning scenarios where the types are unknown: non-preemptive problems, where each started job must be completed before moving to another job; and preemptive problems, where job execution can be paused in favor of moving to a different job. In both cases, we design algorithms that achieve sublinear excess cost, compared to the performance with known types, and prove lower bounds for the non-preemptive case. Notably, we demonstrate, both theoretically and through simulations, how preemptive algorithms can greatly outperform non-preemptive ones when the durations of different job types are far from one another, a phenomenon that does not occur when the type durations are known.
|
https://proceedings.mlr.press/v202/mesnard23a.html
|
https://proceedings.mlr.press/v202/mesnard23a/mesnard23a.pdf
|
https://openreview.net/forum?id=4yoLVter71
|
Quantile Credit Assignment
|
https://proceedings.mlr.press/v202/mesnard23a.html
|
Thomas Mesnard, Wenqi Chen, Alaa Saade, Yunhao Tang, Mark Rowland, Theophane Weber, Clare Lyle, Audrunas Gruslys, Michal Valko, Will Dabney, Georg Ostrovski, Eric Moulines, Remi Munos
|
https://proceedings.mlr.press/v202/mesnard23a.html
|
ICML 2023
|
In reinforcement learning, the credit assignment problem is to distinguish luck from skill, that is, separate the inherent randomness in the environment from the controllable effects of the agent’s actions. This paper proposes two novel algorithms, Quantile Credit Assignment (QCA) and Hindsight QCA (HQCA), which incorporate distributional value estimation to perform credit assignment. QCA uses a network that predicts the quantiles of the return distribution, whereas HQCA additionally incorporates information about the future. Both QCA and HQCA have the appealing interpretation of leveraging an estimate of the quantile level of the return (interpreted as the level of "luck") in order to derive a "luck-dependent" baseline for policy gradient methods. We show theoretically that this approach gives an unbiased policy gradient estimate that can yield significant variance reductions over a standard value estimate baseline. QCA and HQCA significantly outperform prior state-of-the-art methods on a range of extremely difficult credit assignment problems.
|
https://proceedings.mlr.press/v202/metelev23a.html
|
https://proceedings.mlr.press/v202/metelev23a/metelev23a.pdf
|
https://openreview.net/forum?id=DwDQNKF4oy
|
Is Consensus Acceleration Possible in Decentralized Optimization over Slowly Time-Varying Networks?
|
https://proceedings.mlr.press/v202/metelev23a.html
|
Dmitry Metelev, Alexander Rogozin, Dmitry Kovalev, Alexander Gasnikov
|
https://proceedings.mlr.press/v202/metelev23a.html
|
ICML 2023
|
We consider decentralized optimization problems where one aims to minimize a sum of convex smooth objective functions distributed between nodes in the network. The links in the network can change from time to time. For the setting when the amount of changes is arbitrary, lower complexity bounds and corresponding optimal algorithms are known, and the consensus acceleration is not possible. However, in practice the magnitude of network changes may be limited. We derive lower complexity bounds for several regimes of the velocity of network changes. Moreover, we show how to obtain accelerated communication rates for a certain class of time-varying graphs using a specific consensus algorithm.
|
https://proceedings.mlr.press/v202/metelli23a.html
|
https://proceedings.mlr.press/v202/metelli23a/metelli23a.pdf
|
https://openreview.net/forum?id=dx5rPfq6Hr
|
Towards Theoretical Understanding of Inverse Reinforcement Learning
|
https://proceedings.mlr.press/v202/metelli23a.html
|
Alberto Maria Metelli, Filippo Lazzati, Marcello Restelli
|
https://proceedings.mlr.press/v202/metelli23a.html
|
ICML 2023
|
Inverse reinforcement learning (IRL) denotes a powerful family of algorithms for recovering a reward function justifying the behavior demonstrated by an expert agent. A well-known limitation of IRL is the ambiguity in the choice of the reward function, due to the existence of multiple rewards that explain the observed behavior. This limitation has been recently circumvented by formulating IRL as the problem of estimating the feasible reward set, i.e., the region of the rewards compatible with the expert’s behavior. In this paper, we make a step towards closing the theory gap of IRL in the case of finite-horizon problems with a generative model. We start by formally introducing the problem of estimating the feasible reward set, the corresponding PAC requirement, and discussing the properties of particular classes of rewards. Then, we provide the first minimax lower bound on the sample complexity for the problem of estimating the feasible reward set of order ${\Omega}\left( \frac{H^3SA}{\epsilon^2} \left( \log \left(\frac{1}{\delta}\right) + S \right)\right)$, where $S$ and $A$ are the number of states and actions respectively, $H$ is the horizon, $\epsilon$ is the desired accuracy, and $\delta$ is the confidence. We analyze the sample complexity of a uniform sampling strategy (US-IRL), proving a matching upper bound up to logarithmic factors. Finally, we outline several open questions in IRL and propose future research directions.
|
https://proceedings.mlr.press/v202/meyer23a.html
|
https://proceedings.mlr.press/v202/meyer23a/meyer23a.pdf
|
https://openreview.net/forum?id=2L435rBxrF
|
Quantum Policy Gradient Algorithm with Optimized Action Decoding
|
https://proceedings.mlr.press/v202/meyer23a.html
|
Nico Meyer, Daniel Scherer, Axel Plinge, Christopher Mutschler, Michael Hartmann
|
https://proceedings.mlr.press/v202/meyer23a.html
|
ICML 2023
|
Quantum machine learning implemented by variational quantum circuits (VQCs) is considered a promising concept for the noisy intermediate-scale quantum computing era. Focusing on applications in quantum reinforcement learning, we propose an action decoding procedure for a quantum policy gradient approach. We introduce a quality measure that enables us to optimize the classical post-processing required for action selection, inspired by local and global quantum measurements. The resulting algorithm demonstrates a significant performance improvement in several benchmark environments. With this technique, we successfully execute a full training routine on a 5-qubit hardware device. Our method introduces only negligible classical overhead and has the potential to improve VQC-based algorithms beyond the field of quantum reinforcement learning.
|
https://proceedings.mlr.press/v202/meyer23b.html
|
https://proceedings.mlr.press/v202/meyer23b/meyer23b.pdf
|
https://openreview.net/forum?id=WT70GgYdLI
|
Training Deep Surrogate Models with Large Scale Online Learning
|
https://proceedings.mlr.press/v202/meyer23b.html
|
Lucas Thibaut Meyer, Marc Schouler, Robert Alexander Caulk, Alejandro Ribes, Bruno Raffin
|
https://proceedings.mlr.press/v202/meyer23b.html
|
ICML 2023
|
The spatiotemporal resolution of Partial Differential Equations (PDEs) plays an important role in the mathematical description of the world’s physical phenomena. In general, scientists and engineers solve PDEs numerically by the use of computationally demanding solvers. Recently, deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs. Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training. This paper advocates that relying on a traditional static dataset to train these models does not allow the full benefit of the solver to be used as a data generator. It proposes an open source online training framework for deep surrogate models. The framework implements several levels of parallelism focused on simultaneously generating numerical simulations and training deep neural networks. This approach suppresses the I/O and storage bottleneck associated with disk-loaded datasets, and opens the way to training on significantly larger datasets. Experiments compare the offline and online training of four surrogate models, including state-of-the-art architectures. Results indicate that exposing deep surrogate models to more dataset diversity, up to hundreds of GB, can increase model generalization capabilities. The prediction accuracy of fully connected neural networks, the Fourier Neural Operator (FNO), and the Message Passing PDE Solver improves by 68%, 16% and 7%, respectively.
|
https://proceedings.mlr.press/v202/mguni23a.html
|
https://proceedings.mlr.press/v202/mguni23a/mguni23a.pdf
|
https://openreview.net/forum?id=VLoypBbG3t
|
MANSA: Learning Fast and Slow in Multi-Agent Systems
|
https://proceedings.mlr.press/v202/mguni23a.html
|
David Henry Mguni, Haojun Chen, Taher Jafferjee, Jianhong Wang, Longfei Yue, Xidong Feng, Stephen Marcus Mcaleer, Feifei Tong, Jun Wang, Yaodong Yang
|
https://proceedings.mlr.press/v202/mguni23a.html
|
ICML 2023
|
In multi-agent reinforcement learning (MARL), independent learning (IL) often shows remarkable performance and easily scales with the number of agents. Yet, using IL can be inefficient and runs the risk of failing to successfully train, particularly in scenarios that require agents to coordinate their actions. Using centralised learning (CL) enables MARL agents to quickly learn how to coordinate their behaviour but employing CL everywhere is often prohibitively expensive in real-world applications. Besides, using CL in value-based methods often needs strong representational constraints (e.g. individual-global-max condition) that can lead to poor performance if violated. In this paper, we introduce a novel plug & play IL framework named Multi-Agent Network Selection Algorithm (MANSA) which selectively employs CL only at states that require coordination. At its core, MANSA has an additional agent that uses switching controls to quickly learn the best states to activate CL during training, using CL only where necessary and vastly reducing the computational burden of CL. Our theory proves MANSA preserves cooperative MARL convergence properties, boosts IL performance and can optimally make use of a fixed budget on the number of CL calls. We show empirically in Level-based Foraging (LBF) and StarCraft Multi-agent Challenge (SMAC) that MANSA achieves fast, superior and more reliable performance while making 40% fewer CL calls in SMAC and using CL in only 1% of calls in LBF.
|
https://proceedings.mlr.press/v202/mhammedi23a.html
|
https://proceedings.mlr.press/v202/mhammedi23a/mhammedi23a.pdf
|
https://openreview.net/forum?id=rVtdWHPFxX
|
Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL
|
https://proceedings.mlr.press/v202/mhammedi23a.html
|
Zakaria Mhammedi, Dylan J Foster, Alexander Rakhlin
|
https://proceedings.mlr.press/v202/mhammedi23a.html
|
ICML 2023
|
We study the design of sample-efficient algorithms for reinforcement learning in the presence of rich, high-dimensional observations, formalized via the Block MDP problem. Existing algorithms suffer from either 1) computational intractability, 2) strong statistical assumptions that are not necessarily satisfied in practice, or 3) suboptimal sample complexity. We address these issues by providing the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level, with minimal statistical assumptions. Our algorithm, MusIK, combines exploration with representation learning based on multi-step inverse kinematics, a learning objective in which the aim is to predict the current action from the current observation and observations in the (potentially distant) future. MusIK is simple and flexible, and can efficiently take advantage of general-purpose function approximation. Our analysis of MusIK leverages several new techniques tailored to non-optimistic algorithms for reward-free exploration, which we anticipate will find broader use.
|
https://proceedings.mlr.press/v202/mhanna23a.html
|
https://proceedings.mlr.press/v202/mhanna23a/mhanna23a.pdf
|
https://openreview.net/forum?id=jqeMV8LrCB
|
Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function
|
https://proceedings.mlr.press/v202/mhanna23a.html
|
Elissa Mhanna, Mohamad Assaad
|
https://proceedings.mlr.press/v202/mhanna23a.html
|
ICML 2023
|
Zero-order (ZO) optimization is a powerful tool for dealing with realistic constraints. On the other hand, the gradient-tracking (GT) technique proved to be an efficient method for distributed optimization aiming to achieve consensus. However, it is a first-order (FO) method that requires knowledge of the gradient, which is not always possible in practice. In this work, we introduce a zero-order distributed optimization method based on a one-point estimate of the gradient tracking technique. We prove that this new technique converges with a single noisy function query at a time in the non-convex setting. We then establish a convergence rate of $O(\frac{1}{\sqrt[3]{K}})$ after $K$ iterations, which competes with that of $O(\frac{1}{\sqrt[4]{K}})$ of its centralized counterparts. Finally, a numerical example validates our theoretical results.
|
https://proceedings.mlr.press/v202/miao23a.html
|
https://proceedings.mlr.press/v202/miao23a/miao23a.pdf
|
https://openreview.net/forum?id=7W1uE3BjPO
|
Learning Instance-Specific Augmentations by Capturing Local Invariances
|
https://proceedings.mlr.press/v202/miao23a.html
|
Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, Hyunjik Kim
|
https://proceedings.mlr.press/v202/miao23a.html
|
ICML 2023
|
We introduce InstaAug, a method for automatically learning input-specific augmentations from data. Previous methods for learning augmentations have typically assumed independence between the original input and the transformation applied to that input. This can be highly restrictive, as the invariances we hope our augmentation will capture are themselves often highly input dependent. InstaAug instead introduces a learnable invariance module that maps from inputs to tailored transformation parameters, allowing local invariances to be captured. This can be simultaneously trained alongside the downstream model in a fully end-to-end manner, or separately learned for a pre-trained model. We empirically demonstrate that InstaAug learns meaningful input-dependent augmentations for a wide range of transformation classes, which in turn provides better performance on both supervised and self-supervised tasks.
|
https://proceedings.mlr.press/v202/michel23a.html
|
https://proceedings.mlr.press/v202/michel23a/michel23a.pdf
|
https://openreview.net/forum?id=5Purw053IP
|
Path Neural Networks: Expressive and Accurate Graph Neural Networks
|
https://proceedings.mlr.press/v202/michel23a.html
|
Gaspard Michel, Giannis Nikolentzos, Johannes F. Lutzeyer, Michalis Vazirgiannis
|
https://proceedings.mlr.press/v202/michel23a.html
|
ICML 2023
|
Graph neural networks (GNNs) have recently become the standard approach for learning with graph-structured data. Prior work has shed light on their potential, but also their limitations. Unfortunately, it was shown that standard GNNs are limited in their expressive power. These models are no more powerful than the 1-dimensional Weisfeiler-Leman (1-WL) algorithm in terms of distinguishing non-isomorphic graphs. In this paper, we propose Path Neural Networks (PathNNs), a model that updates node representations by aggregating paths emanating from nodes. We derive three different variants of the PathNN model that aggregate single shortest paths, all shortest paths and all simple paths of length up to $K$. We prove that two of these variants are strictly more powerful than the 1-WL algorithm, and we experimentally validate our theoretical results. We find that PathNNs can distinguish pairs of non-isomorphic graphs that are indistinguishable by 1-WL, while our most expressive PathNN variant can even distinguish between 3-WL indistinguishable graphs. The different PathNN variants are also evaluated on graph classification and graph regression datasets, where in most cases, they outperform the baseline methods.
|
https://proceedings.mlr.press/v202/miconi23a.html
|
https://proceedings.mlr.press/v202/miconi23a/miconi23a.pdf
|
https://openreview.net/forum?id=kvnoQvYFyB
|
Learning to acquire novel cognitive tasks with evolution, plasticity and meta-meta-learning
|
https://proceedings.mlr.press/v202/miconi23a.html
|
Thomas Miconi
|
https://proceedings.mlr.press/v202/miconi23a.html
|
ICML 2023
|
A hallmark of intelligence is the ability to autonomously learn new flexible, cognitive behaviors - that is, behaviors where the appropriate action depends not just on immediate stimuli (as in simple reflexive stimulus-response associations), but on contextual information that must be adequately acquired, stored and processed. While many meta-learning algorithms can design agents that autonomously learn new tasks, cognitive tasks add another level of learning and memory to typical “learning-to-learn” problems. Here we evolve neural networks, endowed with plastic connections and neuromodulation, over a sizable set of simple cognitive tasks adapted from a computational neuroscience framework. The resulting evolved networks can automatically modify their own connectivity to acquire a novel simple cognitive task, never seen during evolution, from stimuli and rewards alone, through the spontaneous operation of their evolved neural organization and plasticity system. Our results emphasize the importance of carefully considering the multiple learning loops involved in the emergence of intelligent behavior.
|
https://proceedings.mlr.press/v202/miliotou23a.html
|
https://proceedings.mlr.press/v202/miliotou23a/miliotou23a.pdf
|
https://openreview.net/forum?id=57OuafQmu8
|
Generative Decoding of Visual Stimuli
|
https://proceedings.mlr.press/v202/miliotou23a.html
|
Eleni Miliotou, Panagiotis Kyriakis, Jason D Hinman, Andrei Irimia, Paul Bogdan
|
https://proceedings.mlr.press/v202/miliotou23a.html
|
ICML 2023
|
Reconstructing natural images from fMRI recordings is a challenging task of great importance in neuroscience. The current architectures are bottlenecked because they fail to effectively capture the hierarchical processing of visual stimuli that takes place in the human brain. Motivated by that fact, we introduce a novel neural network architecture for the problem of neural decoding. Our architecture uses Hierarchical Variational Autoencoders (HVAEs) to learn meaningful representations of natural images and leverages their latent space hierarchy to learn voxel-to-image mappings. By mapping the early stages of the visual pathway to the first set of latent variables and the higher visual cortex areas to the deeper layers in the latent hierarchy, we are able to construct a latent variable neural decoding model that replicates the hierarchical visual information processing. Our model achieves better reconstructions compared to the state of the art and our ablation study indicates that the hierarchical structure of the latent space is responsible for that performance.
|
https://proceedings.mlr.press/v202/min23a.html
|
https://proceedings.mlr.press/v202/min23a/min23a.pdf
|
https://openreview.net/forum?id=UDzgqDZc7Q
|
Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation
|
https://proceedings.mlr.press/v202/min23a.html
|
Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu
|
https://proceedings.mlr.press/v202/min23a.html
|
ICML 2023
|
We study multi-agent reinforcement learning in the setting of episodic Markov decision processes, where many agents cooperate via communication through a central server. We propose a provably efficient algorithm based on value iteration that can simultaneously allow asynchronous communication and guarantee the benefit of cooperation with low communication complexity. Under linear function approximation, we prove that our algorithm enjoys a $\tilde{\mathcal{O}}(d^{3/2}H^2\sqrt{K})$ regret upper bound with $\tilde{\mathcal{O}}(dHM^2)$ communication complexity, where $d$ is the feature dimension, $H$ is the horizon length, $M$ is the total number of agents, and $K$ is the total number of episodes. We also provide a lower bound showing that an $\Omega(dM)$ communication complexity is necessary to improve the performance through collaboration.
|
https://proceedings.mlr.press/v202/min23b.html
|
https://proceedings.mlr.press/v202/min23b/min23b.pdf
|
https://openreview.net/forum?id=mozREzZ5oK
|
Directed Chain Generative Adversarial Networks
|
https://proceedings.mlr.press/v202/min23b.html
|
Ming Min, Ruimeng Hu, Tomoyuki Ichiba
|
https://proceedings.mlr.press/v202/min23b.html
|
ICML 2023
|
Real-world data can be multimodally distributed, e.g., data describing the opinion divergence in a community, the interspike interval distribution of neurons, and the oscillators’ natural frequencies. Generating multimodally distributed real-world data has become a challenge to existing generative adversarial networks (GANs). For example, it is often observed that Neural SDEs have demonstrated successful performance mainly in generating unimodal time series datasets. In this paper, we propose a novel time series generator, named directed chain GANs (DC-GANs), which inserts a time series dataset (called a neighborhood process of the directed chain or input) into the drift and diffusion coefficients of the directed chain SDEs with distributional constraints. DC-GANs can generate new time series of the same distribution as the neighborhood process, and the neighborhood process will provide the key step in learning and generating multimodally distributed time series. The proposed DC-GANs are examined on four datasets, including two stochastic models from social sciences and computational neuroscience, and two real-world datasets on stock prices and energy consumption. To the best of our knowledge, DC-GANs are the first work that can generate multimodal time series data and consistently outperforms state-of-the-art benchmarks with respect to measures of distribution, data similarity, and predictive ability.
|
https://proceedings.mlr.press/v202/min23c.html
|
https://proceedings.mlr.press/v202/min23c/min23c.pdf
|
https://openreview.net/forum?id=OPwwby2wOt
|
An Information-Theoretic Analysis of Nonstationary Bandit Learning
|
https://proceedings.mlr.press/v202/min23c.html
|
Seungki Min, Daniel Russo
|
https://proceedings.mlr.press/v202/min23c.html
|
ICML 2023
|
In nonstationary bandit learning problems, the decision-maker must continually gather information and adapt their action selection as the latent state of the environment evolves. In each time period, some latent optimal action maximizes expected reward under the environment state. We view the optimal action sequence as a stochastic process, and take an information-theoretic approach to analyze attainable performance. We bound per-period regret in terms of the entropy rate of the optimal action process. The bound applies to a wide array of problems studied in the literature and reflects the problem’s information structure through its information-ratio.
|
https://proceedings.mlr.press/v202/min23d.html
|
https://proceedings.mlr.press/v202/min23d/min23d.pdf
|
https://openreview.net/forum?id=63rNiH4mgG
|
On the Convergence of Gradient Flow on Multi-layer Linear Models
|
https://proceedings.mlr.press/v202/min23d.html
|
Hancheng Min, Rene Vidal, Enrique Mallada
|
https://proceedings.mlr.press/v202/min23d.html
|
ICML 2023
|
In this paper, we analyze the convergence of gradient flow on a multi-layer linear model with a loss function of the form $f(W_1W_2\cdots W_L)$. We show that when $f$ satisfies the gradient dominance property, proper weight initialization leads to exponential convergence of the gradient flow to a global minimum of the loss. Moreover, the convergence rate depends on two trajectory-specific quantities that are controlled by the weight initialization: the imbalance matrices, which measure the difference between the weights of adjacent layers, and the least singular value of the weight product $W=W_1W_2\cdots W_L$. Our analysis exploits the fact that the gradient of the overparameterized loss can be written as the composition of the non-overparameterized gradient with a time-varying (weight-dependent) linear operator whose smallest eigenvalue controls the convergence rate. The key challenge we address is to derive a uniform lower bound for this time-varying eigenvalue that leads to improved rates for several multi-layer network models studied in the literature.
|
https://proceedings.mlr.press/v202/mishkin23a.html
|
https://proceedings.mlr.press/v202/mishkin23a/mishkin23a.pdf
|
https://openreview.net/forum?id=Th1vXHbzZ6
|
Optimal Sets and Solution Paths of ReLU Networks
|
https://proceedings.mlr.press/v202/mishkin23a.html
|
Aaron Mishkin, Mert Pilanci
|
https://proceedings.mlr.press/v202/mishkin23a.html
|
ICML 2023
|
We develop an analytical framework to characterize the set of optimal ReLU neural networks by reformulating the non-convex training problem as a convex program. We show that the global optima of the convex parameterization are given by a polyhedral set and then extend this characterization to the optimal set of the non-convex training objective. Since all stationary points of the ReLU training problem can be represented as optima of sub-sampled convex programs, our work provides a general expression for all critical points of the non-convex objective. We then leverage our results to provide an optimal pruning algorithm for computing minimal networks, establish conditions for the regularization path of ReLU networks to be continuous, and develop sensitivity results for minimal ReLU networks.
|
https://proceedings.mlr.press/v202/mishne23a.html
|
https://proceedings.mlr.press/v202/mishne23a/mishne23a.pdf
|
https://openreview.net/forum?id=grhjD5an7A
|
The Numerical Stability of Hyperbolic Representation Learning
|
https://proceedings.mlr.press/v202/mishne23a.html
|
Gal Mishne, Zhengchao Wan, Yusu Wang, Sheng Yang
|
https://proceedings.mlr.press/v202/mishne23a.html
|
ICML 2023
|
The hyperbolic space is widely used for representing hierarchical datasets due to its ability to embed trees with small distortion. However, this property comes at a price of numerical instability such that training hyperbolic learning models will sometimes lead to catastrophic NaN problems, encountering unrepresentable values in floating point arithmetic. In this work, we analyze the limitations of two popular models for the hyperbolic space, namely, the Poincaré ball and the Lorentz model. We find that, under the 64-bit arithmetic system, the Poincaré ball has a relatively larger capacity than the Lorentz model for correctly representing points. However, the Lorentz model is superior to the Poincaré ball from the perspective of optimization, which we theoretically validate. To address these limitations, we identify one Euclidean parametrization of the hyperbolic space which can alleviate these issues. We further extend this Euclidean parametrization to hyperbolic hyperplanes and demonstrate its effectiveness in improving the performance of hyperbolic SVM.
|
https://proceedings.mlr.press/v202/mitchell23a.html
|
https://proceedings.mlr.press/v202/mitchell23a/mitchell23a.pdf
|
https://openreview.net/forum?id=UiAyIILXRd
|
DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature
|
https://proceedings.mlr.press/v202/mitchell23a.html
|
Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D Manning, Chelsea Finn
|
https://proceedings.mlr.press/v202/mitchell23a.html
|
ICML 2023
|
The increasing fluency and widespread usage of large language models (LLMs) highlight the desirability of corresponding tools aiding detection of LLM-generated text. In this paper, we identify a property of the structure of an LLM’s probability function that is useful for such detection. Specifically, we demonstrate that text sampled from an LLM tends to occupy negative curvature regions of the model’s log probability function. Leveraging this observation, we then define a new curvature-based criterion for judging if a passage is generated from a given LLM. This approach, which we call DetectGPT, does not require training a separate classifier, collecting a dataset of real or generated passages, or explicitly watermarking generated text. It uses only log probabilities computed by the model of interest and random perturbations of the passage from another generic pre-trained language model (e.g., T5). We find DetectGPT is more discriminative than existing zero-shot methods for model sample detection, notably improving detection of fake news articles generated by 20B parameter GPT-NeoX from 0.81 AUROC for the strongest zero-shot baseline to 0.95 AUROC for DetectGPT.
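For intuition, here is a minimal sketch of the perturbation-discrepancy score, assuming the log-probabilities of the original passage and of its perturbed rewrites have already been computed by the scored model; dividing by the perturbation spread is just one way to stabilize the raw gap.

```python
import numpy as np

# Minimal sketch: higher scores suggest the passage sits near a local maximum
# of the model's log-probability, i.e., that it was likely model-generated.
def perturbation_discrepancy(logp_original, logp_perturbed):
    logp_perturbed = np.asarray(logp_perturbed, dtype=float)
    gap = logp_original - logp_perturbed.mean()
    return gap / (logp_perturbed.std() + 1e-8)   # normalized variant of the gap

# Toy numbers: perturbing model-generated text drops its log-probability a lot,
# perturbing human-written text changes it much less.
print(perturbation_discrepancy(-120.0, [-131.0, -128.5, -133.2, -129.9]))
print(perturbation_discrepancy(-140.0, [-141.0, -139.2, -142.1, -140.4]))
```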
|
https://proceedings.mlr.press/v202/mittal23a.html
|
https://proceedings.mlr.press/v202/mittal23a/mittal23a.pdf
|
https://openreview.net/forum?id=LCAjuPNJP0
|
Diffusion Based Representation Learning
|
https://proceedings.mlr.press/v202/mittal23a.html
|
Sarthak Mittal, Korbinian Abstreiter, Stefan Bauer, Bernhard Schölkopf, Arash Mehrjou
|
https://proceedings.mlr.press/v202/mittal23a.html
|
ICML 2023
|
Diffusion-based methods, represented as stochastic differential equations on a continuous-time domain, have recently proven successful as non-adversarial generative models. Training such models relies on denoising score matching, which can be seen as multi-scale denoising autoencoders. Here, we augment the denoising score matching framework to enable representation learning without any supervised signal. GANs and VAEs learn representations by directly transforming latent codes to data samples. In contrast, the introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective and thus encodes the information needed for denoising. We illustrate how this difference allows for manual control of the level of details encoded in the representation. Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements on state-of-the-art models on semi-supervised image classification. We also compare the quality of learned representations of diffusion score matching with other methods like autoencoder and contrastively trained systems through their performances on downstream tasks. Finally, we also ablate with a different SDE formulation for diffusion models and show that the benefits on downstream tasks are still present on changing the underlying differential equation.
|
https://proceedings.mlr.press/v202/mo23a.html
|
https://proceedings.mlr.press/v202/mo23a/mo23a.pdf
|
https://openreview.net/forum?id=lYZOjMvxws
|
Disentangled Multiplex Graph Representation Learning
|
https://proceedings.mlr.press/v202/mo23a.html
|
Yujie Mo, Yajie Lei, Jialie Shen, Xiaoshuang Shi, Heng Tao Shen, Xiaofeng Zhu
|
https://proceedings.mlr.press/v202/mo23a.html
|
ICML 2023
|
Unsupervised multiplex graph representation learning (UMGRL) has received increasing interest, but few works have simultaneously focused on extracting both common and private information. In this paper, we argue that effective and robust UMGRL requires extracting complete and clean common information, as well as private information that is more complementary and less noisy. To achieve this, we first investigate disentangled representation learning for the multiplex graph to capture complete and clean common information, and design a contrastive constraint to preserve the complementarity and remove the noise in the private information. Moreover, we theoretically show that the common and private representations learned by our method are provably disentangled and contain more task-relevant and less task-irrelevant information, benefiting downstream tasks. Extensive experiments verify the superiority of the proposed method on different downstream tasks.
|
https://proceedings.mlr.press/v202/mo23b.html
|
https://proceedings.mlr.press/v202/mo23b/mo23b.pdf
|
https://openreview.net/forum?id=SkC2ZATroO
|
A Unified Audio-Visual Learning Framework for Localization, Separation, and Recognition
|
https://proceedings.mlr.press/v202/mo23b.html
|
Shentong Mo, Pedro Morgado
|
https://proceedings.mlr.press/v202/mo23b.html
|
ICML 2023
|
The ability to accurately recognize, localize and separate sound sources is fundamental to any audio-visual perception task. Historically, these abilities were tackled separately, with several methods developed independently for each task. However, given the interconnected nature of source localization, separation, and recognition, independent models are likely to yield suboptimal performance as they fail to capture the interdependence between these tasks. To address this problem, we propose a unified audio-visual learning framework (dubbed OneAVM) that integrates audio and visual cues for joint localization, separation, and recognition. OneAVM comprises a shared audio-visual encoder and task-specific decoders trained with three objectives. The first objective aligns audio and visual representations through a localized audio-visual correspondence loss. The second tackles visual source separation using a traditional mix-and-separate framework. Finally, the third objective reinforces visual feature separation and localization by mixing images in pixel space and aligning their representations with those of all corresponding sound sources. Extensive experiments on MUSIC, VGG-Instruments, VGG-Music, and VGGSound datasets demonstrate the effectiveness of OneAVM for all three tasks, audio-visual source localization, separation, and nearest neighbor recognition, and empirically demonstrate a strong positive transfer between them.
|
https://proceedings.mlr.press/v202/mo23c.html
|
https://proceedings.mlr.press/v202/mo23c/mo23c.pdf
|
https://openreview.net/forum?id=xN4eYXdY64
|
Pruning via Sparsity-indexed ODE: a Continuous Sparsity Viewpoint
|
https://proceedings.mlr.press/v202/mo23c.html
|
Zhanfeng Mo, Haosen Shi, Sinno Jialin Pan
|
https://proceedings.mlr.press/v202/mo23c.html
|
ICML 2023
|
Neural pruning, which involves identifying the optimal sparse subnetwork, is a key technique for reducing the complexity and improving the efficiency of deep neural networks. To address the challenge of solving neural pruning at a specific sparsity level directly, we investigate the evolution of optimal subnetworks with continuously increasing sparsity, which can provide insight into how to transform an unpruned dense model into an optimal subnetwork with any desired level of sparsity. In this paper, we propose a novel pruning framework, coined Sparsity-indexed ODE (SpODE), that provides explicit guidance on how to best preserve model performance while ensuring an infinitesimal increase in model sparsity. On top of this, we develop a pruning algorithm, termed Pruning via Sparsity-indexed ODE (PSO), that enables effective pruning via traveling along the SpODE path. Empirical experiments show that PSO achieves either better or comparable performance compared to state-of-the-art baselines across various pruning settings.
|
https://proceedings.mlr.press/v202/moayeri23a.html
|
https://proceedings.mlr.press/v202/moayeri23a/moayeri23a.pdf
|
https://openreview.net/forum?id=C6zz7ivXyM
|
Text-To-Concept (and Back) via Cross-Model Alignment
|
https://proceedings.mlr.press/v202/moayeri23a.html
|
Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi
|
https://proceedings.mlr.press/v202/moayeri23a.html
|
ICML 2023
|
We observe that the mapping from an image’s representation in one model to its representation in another can be learned surprisingly well with just a linear layer, even across diverse models. Building on this observation, we propose text-to-concept, where features from a fixed pretrained model are aligned linearly to the CLIP space, so that text embeddings from CLIP’s text encoder become directly comparable to the aligned features. With text-to-concept, we convert fixed off-the-shelf vision encoders to surprisingly strong zero-shot classifiers for free, with accuracy at times even surpassing that of CLIP, despite being much smaller models trained on a small fraction of the data used for CLIP. We show other immediate use-cases of text-to-concept, like building concept bottleneck models with no concept supervision, diagnosing distribution shifts in terms of human concepts, and retrieving images satisfying a set of text-based constraints. Lastly, we demonstrate the feasibility of concept-to-text, where vectors in a model’s feature space are decoded by first aligning to the CLIP space before being fed to a GPT-based generative model. Our work suggests existing deep models, with presumably diverse architectures and training, represent input samples relatively similarly, and a two-way communication across model representation spaces and to humans (through language) is viable.
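A minimal sketch of the alignment step under assumed details (least-squares fit, no bias term, random arrays standing in for real features): fit one linear map from the fixed encoder's features to CLIP image embeddings, then score classes with CLIP text embeddings by cosine similarity.

```python
import numpy as np

def fit_alignment(feats, clip_embs):
    # feats: (n, d_model), clip_embs: (n, d_clip) for the same n images.
    W, *_ = np.linalg.lstsq(feats, clip_embs, rcond=None)
    return W                                   # (d_model, d_clip) linear map

def zero_shot_logits(feats, W, text_embs):
    z = feats @ W                              # features aligned into CLIP space
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return z @ t.T                             # cosine similarity per class

# Toy shapes only: 1000 paired images, a 512-d encoder, 768-d CLIP space, 10 classes.
rng = np.random.default_rng(0)
W = fit_alignment(rng.standard_normal((1000, 512)), rng.standard_normal((1000, 768)))
print(zero_shot_logits(rng.standard_normal((4, 512)), W, rng.standard_normal((10, 768))).shape)
```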
|
https://proceedings.mlr.press/v202/mohamadi23a.html
|
https://proceedings.mlr.press/v202/mohamadi23a/mohamadi23a.pdf
|
https://openreview.net/forum?id=3UXsGzUJc5
|
A Fast, Well-Founded Approximation to the Empirical Neural Tangent Kernel
|
https://proceedings.mlr.press/v202/mohamadi23a.html
|
Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland
|
https://proceedings.mlr.press/v202/mohamadi23a.html
|
ICML 2023
|
Empirical neural tangent kernels (eNTKs) can provide a good understanding of a given network’s representation: they are often far less expensive to compute and applicable more broadly than infinite-width NTKs. For networks with $O$ output units (e.g. an $O$-class classifier), however, the eNTK on $N$ inputs is of size $NO \times NO$, taking $\mathcal O\big( (N O)^2\big)$ memory and up to $\mathcal O\big( (N O)^3 \big)$ computation to use. Most existing applications have therefore used one of a handful of approximations yielding $N \times N$ kernel matrices, saving orders of magnitude of computation, but with limited to no justification. We prove that one such approximation, which we call "sum of logits," converges to the true eNTK at initialization. Our experiments demonstrate the quality of this approximation for various uses across a range of settings.
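The "sum of logits" idea can be sketched in a few lines: replace the $O$-dimensional output with the scalar $g(x,\theta)=\sum_o f_o(x,\theta)$ and build the $N \times N$ Gram matrix of its parameter gradients. The tiny model and finite-difference gradients below only keep the sketch self-contained; in practice one would use autodiff.

```python
import numpy as np

def logits(theta, X):
    # Tiny two-layer network with 3 output logits, parameters flattened in theta.
    W1 = theta[:20].reshape(5, 4)
    W2 = theta[20:].reshape(3, 5)
    return np.tanh(X @ W1.T) @ W2.T            # (N, 3)

def sum_logit_grads(theta, X, eps=1e-5):
    base = logits(theta, X).sum(axis=1)        # g(x, theta) = sum of logits
    G = np.zeros((X.shape[0], theta.size))
    for j in range(theta.size):
        t = theta.copy(); t[j] += eps
        G[:, j] = (logits(t, X).sum(axis=1) - base) / eps
    return G

rng = np.random.default_rng(0)
theta = 0.3 * rng.standard_normal(35)
X = rng.standard_normal((8, 4))
G = sum_logit_grads(theta, X)
eNTK_approx = G @ G.T                          # N x N instead of (N*O) x (N*O)
print(eNTK_approx.shape)
```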
|
https://proceedings.mlr.press/v202/mohtashami23a.html
|
https://proceedings.mlr.press/v202/mohtashami23a/mohtashami23a.pdf
|
https://openreview.net/forum?id=DGSmVHmOrv
|
Special Properties of Gradient Descent with Large Learning Rates
|
https://proceedings.mlr.press/v202/mohtashami23a.html
|
Amirkeivan Mohtashami, Martin Jaggi, Sebastian U Stich
|
https://proceedings.mlr.press/v202/mohtashami23a.html
|
ICML 2023
|
When training neural networks, it has been widely observed that a large step size is essential in stochastic gradient descent (SGD) for obtaining superior models. However, the effect of large step sizes on the success of SGD is not well understood theoretically. Several previous works have attributed this success to the stochastic noise present in SGD. However, we show through a novel set of experiments that the stochastic noise is not sufficient to explain good non-convex training, and that instead the effect of a large learning rate itself is essential for obtaining the best performance. We demonstrate the same effects also in the noise-less case, i.e., for full-batch GD. We formally prove that GD with a large step size, on certain non-convex function classes, follows a different trajectory than GD with a small step size, which can lead to convergence to a global minimum instead of a local one. Our settings provide a framework for future analysis which allows comparing algorithms based on behaviors that cannot be observed in the traditional settings.
|
https://proceedings.mlr.press/v202/molinaro23a.html
|
https://proceedings.mlr.press/v202/molinaro23a/molinaro23a.pdf
|
https://openreview.net/forum?id=S4fEjmWg4X
|
Neural Inverse Operators for Solving PDE Inverse Problems
|
https://proceedings.mlr.press/v202/molinaro23a.html
|
Roberto Molinaro, Yunan Yang, Björn Engquist, Siddhartha Mishra
|
https://proceedings.mlr.press/v202/molinaro23a.html
|
ICML 2023
|
A large class of inverse problems for PDEs are only well-defined as mappings from operators to functions. Existing operator learning frameworks map functions to functions and need to be modified to learn inverse maps from data. We propose a novel architecture termed Neural Inverse Operators (NIOs) to solve these PDE inverse problems. Motivated by the underlying mathematical structure, NIO is based on a suitable composition of DeepONets and FNOs to approximate mappings from operators to functions. A variety of experiments are presented to demonstrate that NIOs significantly outperform baselines and solve PDE inverse problems robustly, accurately and are several orders of magnitude faster than existing direct and PDE-constrained optimization methods.
|
https://proceedings.mlr.press/v202/monchot23a.html
|
https://proceedings.mlr.press/v202/monchot23a/monchot23a.pdf
|
https://openreview.net/forum?id=oZ0owWGDKv
|
Input uncertainty propagation through trained neural networks
|
https://proceedings.mlr.press/v202/monchot23a.html
|
Paul Monchot, Loic Coquelin, Sébastien Julien Petit, Sébastien Marmin, Erwan Le Pennec, Nicolas Fischer
|
https://proceedings.mlr.press/v202/monchot23a.html
|
ICML 2023
|
When physical sensors are involved, such as image sensors, the uncertainty over the input data is often a major component of the output uncertainty of machine learning models. In this work, we address the problem of input uncertainty propagation through trained neural networks. We do not rely on a Gaussian distribution assumption of the output or of any intermediate layer. Instead, we propagate a Gaussian Mixture Model (GMM), which offers much more flexibility, using the Split&Merge algorithm. This paper’s main contribution is the computation of a Wasserstein criterion to control the Gaussian splitting procedure, for which theoretical guarantees of convergence on the output distribution estimates are derived. The methodology is tested against a wide range of datasets and networks. It shows robustness and genericity, and offers highly accurate output probability density function estimation while maintaining a reasonable computational cost compared with the standard Monte Carlo (MC) approach.
|
https://proceedings.mlr.press/v202/montanari23a.html
|
https://proceedings.mlr.press/v202/montanari23a/montanari23a.pdf
|
https://openreview.net/forum?id=TdGIfSZDV3
|
Compressing Tabular Data via Latent Variable Estimation
|
https://proceedings.mlr.press/v202/montanari23a.html
|
Andrea Montanari, Eric Weiner
|
https://proceedings.mlr.press/v202/montanari23a.html
|
ICML 2023
|
Data used for analytics and machine learning often take the form of tables with categorical entries. We introduce a family of lossless compression algorithms for such data that proceed in four steps: (i) Estimate latent variables associated to rows and columns; (ii) Partition the table in blocks according to the row/column latents; (iii) Apply a sequential (e.g. Lempel-Ziv) coder to each of the blocks; (iv) Append a compressed encoding of the latents. We evaluate this approach on several benchmark datasets, and study optimal compression in a probabilistic model for tabular data, whereby latent values are independent and table entries are conditionally independent given the latent values. We prove that the model has a well defined entropy rate and satisfies an asymptotic equipartition property. We also prove that classical compression schemes such as Lempel-Ziv and finite-state encoders do not achieve this rate. On the other hand, the latent estimation strategy outlined above achieves the optimal rate.
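A rough end-to-end sketch of the four steps, with deliberately simple stand-ins: k-means on one-hot row/column profiles plays the role of the latent-variable estimator, and zlib plays the role of the sequential coder. None of these specific choices come from the paper.

```python
import zlib
import numpy as np

def kmeans_labels(V, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    C = V[rng.choice(len(V), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((V[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([V[lab == j].mean(0) if np.any(lab == j) else C[j] for j in range(k)])
    return lab

def compress_table(T, k_rows=4, k_cols=4):
    onehot = np.eye(int(T.max()) + 1)[T]                                 # (rows, cols, values)
    row_lab = kmeans_labels(onehot.reshape(T.shape[0], -1), k_rows)      # (i)  row latents
    col_lab = kmeans_labels(onehot.transpose(1, 0, 2).reshape(T.shape[1], -1), k_cols)
    blocks = [zlib.compress(                                             # (ii) blocks,
                  T[np.ix_(row_lab == r, col_lab == c)]                  # (iii) coded sequentially
                  .astype(np.uint8).tobytes())
              for r in range(k_rows) for c in range(k_cols)]
    latents = zlib.compress(row_lab.astype(np.uint8).tobytes()
                            + col_lab.astype(np.uint8).tobytes())        # (iv) append latents
    return latents, blocks

T = np.random.default_rng(0).integers(0, 5, size=(200, 30))
latents, blocks = compress_table(T)
print(len(latents) + sum(len(b) for b in blocks), "compressed bytes")
```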
|
https://proceedings.mlr.press/v202/monzio-compagnoni23a.html
|
https://proceedings.mlr.press/v202/monzio-compagnoni23a/monzio-compagnoni23a.pdf
|
https://openreview.net/forum?id=3HHh17GBMO
|
An SDE for Modeling SAM: Theory and Insights
|
https://proceedings.mlr.press/v202/monzio-compagnoni23a.html
|
Enea Monzio Compagnoni, Luca Biggio, Antonio Orvieto, Frank Norbert Proske, Hans Kersting, Aurelien Lucchi
|
https://proceedings.mlr.press/v202/monzio-compagnoni23a.html
|
ICML 2023
|
We study the SAM (Sharpness-Aware Minimization) optimizer which has recently attracted a lot of interest due to its increased performance over more classical variants of stochastic gradient descent. Our main contribution is the derivation of continuous-time models (in the form of SDEs) for SAM and two of its variants, both for the full-batch and mini-batch settings. We demonstrate that these SDEs are rigorous approximations of the real discrete-time algorithms (in a weak sense, scaling linearly with the learning rate). Using these models, we then offer an explanation of why SAM prefers flat minima over sharp ones – by showing that it minimizes an implicitly regularized loss with a Hessian-dependent noise structure. Finally, we prove that SAM is attracted to saddle points under some realistic conditions. Our theoretical results are supported by detailed experiments.
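For reference, the discrete-time SAM step being modeled looks roughly like the following generic sketch, shown here on a toy quadratic loss rather than the authors' setup.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascend to a nearby "sharp" point
    return w - lr * grad_fn(w + eps)              # descend using the perturbed gradient

# Toy usage on the quadratic loss 0.5 * w^T A w.
A = np.diag([1.0, 10.0])
grad = lambda w: A @ w
w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w, grad)
print(w)
```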
|
https://proceedings.mlr.press/v202/morishita23a.html
|
https://proceedings.mlr.press/v202/morishita23a/morishita23a.pdf
|
https://openreview.net/forum?id=a6PvWIHFsF
|
Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic
|
https://proceedings.mlr.press/v202/morishita23a.html
|
Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa
|
https://proceedings.mlr.press/v202/morishita23a.html
|
ICML 2023
|
We study a synthetic corpus based approach for language models (LMs) to acquire logical deductive reasoning ability. The previous studies generated deduction examples using specific sets of deduction rules. However, these rules were limited or otherwise arbitrary. This can limit the generalizability of acquired deductive reasoning ability. We rethink this and adopt a well-grounded set of deduction rules based on formal logic theory, which can derive any other deduction rules when combined in a multistep way. We empirically verify that LMs trained on the proposed corpora, which we name $\textbf{FLD}$ ($\textbf{F}$ormal $\textbf{L}$ogic $\textbf{D}$eduction), acquire more generalizable deductive reasoning ability. Furthermore, we identify the aspects of deductive reasoning ability on which deduction corpora can enhance LMs and those on which they cannot. Finally, on the basis of these results, we discuss the future directions for applying deduction corpora or other approaches for each aspect. We release the code, data, and models.
|
https://proceedings.mlr.press/v202/morris23a.html
|
https://proceedings.mlr.press/v202/morris23a/morris23a.pdf
|
https://openreview.net/forum?id=rZN3mc5m3C
|
WL meet VC
|
https://proceedings.mlr.press/v202/morris23a.html
|
Christopher Morris, Floris Geerts, Jan Tönshoff, Martin Grohe
|
https://proceedings.mlr.press/v202/morris23a.html
|
ICML 2023
|
Recently, many works studied the expressive power of graph neural networks (GNNs) by linking it to the $1$-dimensional Weisfeiler-Leman algorithm ($1\text{-}\mathsf{WL}$). Here, the $1\text{-}\mathsf{WL}$ is a well-studied heuristic for the graph isomorphism problem, which iteratively colors or partitions a graph’s vertex set. While this connection has led to significant advances in understanding and enhancing GNNs’ expressive power, it does not provide insights into their generalization performance, i.e., their ability to make meaningful predictions beyond the training set. In this paper, we study GNNs’ generalization ability through the lens of Vapnik-Chervonenkis (VC) dimension theory in two settings, focusing on graph-level predictions. First, when no upper bound on the graphs’ order is known, we show that the bitlength of GNNs’ weights tightly bounds their VC dimension. Further, we derive an upper bound for GNNs’ VC dimension using the number of colors produced by the $1\text{-}\mathsf{WL}$. Second, when an upper bound on the graphs’ order is known, we show a tight connection between the number of graphs distinguishable by the $1\text{-}\mathsf{WL}$ and GNNs’ VC dimension. Our empirical study confirms the validity of our theoretical findings.
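For readers unfamiliar with it, the $1\text{-}\mathsf{WL}$ heuristic itself is short; below is a plain-Python sketch of color refinement (our own illustration, not tied to the paper's experiments).

```python
# 1-WL color refinement: iteratively relabel each vertex by its current color
# together with the multiset of its neighbors' colors, until stable.
def one_wl_colors(adj, max_iters=None):
    n = len(adj)
    colors = [0] * n
    for _ in range(max_iters or n):
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in range(n)
        ]
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        new_colors = [relabel[sig] for sig in signatures]
        if new_colors == colors:
            break
        colors = new_colors
    return colors

# Toy usage: a 4-cycle and a path on 4 vertices get different color histograms.
cycle = [[1, 3], [0, 2], [1, 3], [0, 2]]
path = [[1], [0, 2], [1, 3], [2]]
print(sorted(one_wl_colors(cycle)), sorted(one_wl_colors(path)))
```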
|
https://proceedings.mlr.press/v202/moskovitz23a.html
|
https://proceedings.mlr.press/v202/moskovitz23a/moskovitz23a.pdf
|
https://openreview.net/forum?id=8Hwfncc2Km
|
ReLOAD: Reinforcement Learning with Optimistic Ascent-Descent for Last-Iterate Convergence in Constrained MDPs
|
https://proceedings.mlr.press/v202/moskovitz23a.html
|
Ted Moskovitz, Brendan O’Donoghue, Vivek Veeriah, Sebastian Flennerhag, Satinder Singh, Tom Zahavy
|
https://proceedings.mlr.press/v202/moskovitz23a.html
|
ICML 2023
|
In recent years, reinforcement learning (RL) has been applied to real-world problems with increasing success. Such applications often require placing constraints on the agent’s behavior. Existing algorithms for constrained RL (CRL) rely on gradient descent-ascent, but this approach comes with a caveat. While these algorithms are guaranteed to converge on average, they do not guarantee last-iterate convergence, i.e., the current policy of the agent may never converge to the optimal solution. In practice, it is often observed that the policy alternates between satisfying the constraints and maximizing the reward, rarely accomplishing both objectives simultaneously. Here, we address this problem by introducing Reinforcement Learning with Optimistic Ascent-Descent (ReLOAD), a principled CRL method with guaranteed last-iterate convergence. We demonstrate its empirical effectiveness on a wide variety of CRL problems including discrete MDPs and continuous control. In the process, we establish a benchmark of challenging CRL problems.
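The difference between plain and optimistic gradient descent-ascent that motivates last-iterate guarantees is easy to see on a toy bilinear saddle problem; the sketch below is generic and is not the ReLOAD algorithm itself, which operates on constrained MDPs.

```python
# Toy comparison on min_x max_y f(x, y) = x * y, whose saddle point is (0, 0):
# plain simultaneous gradient descent-ascent spirals away from it, while the
# optimistic variant's last iterate decays toward it.
def run(optimistic, steps=500, eta=0.1):
    x, y = 1.0, 1.0
    gx_prev, gy_prev = y, x                    # gradients of f(x, y) = x * y
    for _ in range(steps):
        gx, gy = y, x
        if optimistic:
            x -= eta * (2 * gx - gx_prev)      # optimistic descent step in x
            y += eta * (2 * gy - gy_prev)      # optimistic ascent step in y
        else:
            x -= eta * gx
            y += eta * gy
        gx_prev, gy_prev = gx, gy
    return abs(x) + abs(y)

print("plain GDA      :", run(False))          # grows
print("optimistic GDA :", run(True))           # decays toward the saddle point
```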
|
https://proceedings.mlr.press/v202/moulin23a.html
|
https://proceedings.mlr.press/v202/moulin23a/moulin23a.pdf
|
https://openreview.net/forum?id=LctoTBcGUf
|
Optimistic Planning by Regularized Dynamic Programming
|
https://proceedings.mlr.press/v202/moulin23a.html
|
Antoine Moulin, Gergely Neu
|
https://proceedings.mlr.press/v202/moulin23a.html
|
ICML 2023
|
We propose a new method for optimistic planning in infinite-horizon discounted Markov decision processes based on the idea of adding regularization to the updates of an otherwise standard approximate value iteration procedure. This technique allows us to avoid contraction and monotonicity arguments typically required by existing analyses of approximate dynamic programming methods, and in particular to use approximate transition functions estimated via least-squares procedures in MDPs with linear function approximation. We use our method to recover known guarantees in tabular MDPs and to provide a computationally efficient algorithm for learning near-optimal policies in discounted linear mixture MDPs from a single stream of experience, and show it achieves near-optimal statistical guarantees.
|
https://proceedings.mlr.press/v202/muca-cirone23a.html
|
https://proceedings.mlr.press/v202/muca-cirone23a/muca-cirone23a.pdf
|
https://openreview.net/forum?id=0rnA1l6WAc
|
Neural signature kernels as infinite-width-depth-limits of controlled ResNets
|
https://proceedings.mlr.press/v202/muca-cirone23a.html
|
Nicola Muca Cirone, Maud Lemercier, Cristopher Salvi
|
https://proceedings.mlr.press/v202/muca-cirone23a.html
|
ICML 2023
|
Motivated by the paradigm of reservoir computing, we consider randomly initialized controlled ResNets defined as Euler-discretizations of neural controlled differential equations (Neural CDEs), a unified architecture which encompasses both RNNs and ResNets. We show that in the infinite-width-depth limit and under proper scaling, these architectures converge weakly to Gaussian processes indexed on some spaces of continuous paths and with kernels satisfying certain partial differential equations (PDEs) varying according to the choice of activation function $\varphi$, extending the results of Hayou (2022); Hayou & Yang (2023) to the controlled and homogeneous case. In the special homogeneous case where $\varphi$ is the identity, we show that the equation reduces to a linear PDE and the limiting kernel agrees with the signature kernel of Salvi et al. (2021a). We name this new family of limiting kernels neural signature kernels. Finally, we show that in the infinite-depth regime, finite-width controlled ResNets converge in distribution to Neural CDEs with random vector fields which, depending on whether the weights are shared across layers, are either time-independent and Gaussian or behave like a matrix-valued Brownian motion.
|
https://proceedings.mlr.press/v202/muckley23a.html
|
https://proceedings.mlr.press/v202/muckley23a/muckley23a.pdf
|
https://openreview.net/forum?id=iUspLfxpWC
|
Improving Statistical Fidelity for Neural Image Compression with Implicit Local Likelihood Models
|
https://proceedings.mlr.press/v202/muckley23a.html
|
Matthew J. Muckley, Alaaeldin El-Nouby, Karen Ullrich, Herve Jegou, Jakob Verbeek
|
https://proceedings.mlr.press/v202/muckley23a.html
|
ICML 2023
|
Lossy image compression aims to represent images in as few bits as possible while maintaining fidelity to the original. Theoretical results indicate that optimizing distortion metrics such as PSNR or MS-SSIM necessarily leads to a discrepancy in the statistics of original images from those of reconstructions, in particular at low bitrates, often manifested by the blurring of the compressed images. Previous work has leveraged adversarial discriminators to improve statistical fidelity. Yet these binary discriminators adopted from generative modeling tasks may not be ideal for image compression. In this paper, we introduce a non-binary discriminator that is conditioned on quantized local image representations obtained via VQ-VAE autoencoders. Our evaluations on the CLIC2020, DIV2K and Kodak datasets show that our discriminator is more effective for jointly optimizing distortion (e.g., PSNR) and statistical fidelity (e.g., FID) than the PatchGAN of the state-of-the-art HiFiC model. On CLIC2020, we obtain the same FID as HiFiC with 30-40% fewer bits.
|
https://proceedings.mlr.press/v202/muller23a.html
|
https://proceedings.mlr.press/v202/muller23a/muller23a.pdf
|
https://openreview.net/forum?id=1DP5fR3iTr
|
PFNs4BO: In-Context Learning for Bayesian Optimization
|
https://proceedings.mlr.press/v202/muller23a.html
|
Samuel Müller, Matthias Feurer, Noah Hollmann, Frank Hutter
|
https://proceedings.mlr.press/v202/muller23a.html
|
ICML 2023
|
In this paper, we use Prior-data Fitted Networks (PFNs) as a flexible surrogate for Bayesian Optimization (BO). PFNs are neural processes that are trained to approximate the posterior predictive distribution (PPD) through in-context learning on any prior distribution that can be efficiently sampled from. We describe how this flexibility can be exploited for surrogate modeling in BO. We use PFNs to mimic a naive Gaussian process (GP), an advanced GP, and a Bayesian Neural Network (BNN). In addition, we show how to incorporate further information into the prior, such as allowing hints about the position of optima (user priors), ignoring irrelevant dimensions, and performing non-myopic BO by learning the acquisition function. The flexibility underlying these extensions opens up vast possibilities for using PFNs for BO. We demonstrate the usefulness of PFNs for BO in a large-scale evaluation on artificial GP samples and three different hyperparameter optimization testbeds: HPO-B, Bayesmark, and PD1. We publish code alongside trained models at https://github.com/automl/PFNs4BO.
|
https://proceedings.mlr.press/v202/muller23b.html
|
https://proceedings.mlr.press/v202/muller23b/muller23b.pdf
|
https://openreview.net/forum?id=y6sCx3eJpw
|
Achieving High Accuracy with PINNs via Energy Natural Gradient Descent
|
https://proceedings.mlr.press/v202/muller23b.html
|
Johannes Müller, Marius Zeinhofer
|
https://proceedings.mlr.press/v202/muller23b.html
|
ICML 2023
|
We propose energy natural gradient descent, a natural gradient method with respect to a Hessian-induced Riemannian metric as an optimization algorithm for physics-informed neural networks (PINNs) and the deep Ritz method. As a main motivation we show that the update direction in function space resulting from the energy natural gradient corresponds to the Newton direction modulo an orthogonal projection on the model’s tangent space. We demonstrate experimentally that energy natural gradient descent yields highly accurate solutions with errors several orders of magnitude smaller than what is obtained when training PINNs with standard optimizers like gradient descent or Adam, even when those are allowed significantly more computation time.
|
https://proceedings.mlr.press/v202/munk23a.html
|
https://proceedings.mlr.press/v202/munk23a/munk23a.pdf
|
https://openreview.net/forum?id=qlAtMW9jIh
|
Uncertain Evidence in Probabilistic Models and Stochastic Simulators
|
https://proceedings.mlr.press/v202/munk23a.html
|
Andreas Munk, Alexander Mead, Frank Wood
|
https://proceedings.mlr.press/v202/munk23a.html
|
ICML 2023
|
We consider the problem of performing Bayesian inference in probabilistic models where observations are accompanied by uncertainty, referred to as "uncertain evidence." We explore how to interpret uncertain evidence, and by extension the importance of proper interpretation as it pertains to inference about latent variables. We consider a recently-proposed method "distributional evidence" as well as revisit two older methods: Jeffrey’s rule and virtual evidence. We devise guidelines on how to account for uncertain evidence and we provide new insights, particularly regarding consistency. To showcase the impact of different interpretations of the same uncertain evidence, we carry out experiments in which one interpretation is defined as "correct." We then compare inference results from each different interpretation, illustrating the importance of careful consideration of uncertain evidence.
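A small worked example (our own, not from the paper) makes the distinction concrete: the same "80% sure that E=1" statement gives different posteriors over H depending on whether it is read as a Jeffrey's-rule constraint on the marginal of E or as virtual evidence with likelihood ratio 0.2 : 0.8.

```python
import numpy as np

# Toy binary model H -> E with P(H=1)=0.3, P(E=1|H=1)=0.9, P(E=1|H=0)=0.2.
p_h = np.array([0.7, 0.3])                    # P(H=0), P(H=1)
p_e_given_h = np.array([[0.8, 0.2],           # P(E|H=0)
                        [0.1, 0.9]])          # P(E|H=1)

# Jeffrey's rule: fix the new marginal over E to q = (0.2, 0.8) and mix the
# conditionals P(H|E=e) accordingly.
joint = p_h[:, None] * p_e_given_h            # P(H, E)
p_h_given_e = joint / joint.sum(axis=0)       # columns: P(H|E=e)
q_e = np.array([0.2, 0.8])
jeffrey = p_h_given_e @ q_e

# Virtual evidence: treat the uncertainty as a likelihood ratio 0.2 : 0.8 on E
# and condition on that virtual observation instead.
lik_h = p_e_given_h @ q_e                     # P(virtual observation | H)
virtual = p_h * lik_h / np.dot(p_h, lik_h)

print("Jeffrey :", jeffrey)
print("Virtual :", virtual)                   # generally a different posterior
```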
|
https://proceedings.mlr.press/v202/murata23a.html
|
https://proceedings.mlr.press/v202/murata23a/murata23a.pdf
|
https://openreview.net/forum?id=4weSHLFgtZ
|
GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration
|
https://proceedings.mlr.press/v202/murata23a.html
|
Naoki Murata, Koichi Saito, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon
|
https://proceedings.mlr.press/v202/murata23a.html
|
ICML 2023
|
Pre-trained diffusion models have been successfully used as priors in a variety of linear inverse problems, where the goal is to reconstruct a signal from noisy linear measurements. However, existing approaches require knowledge of the linear operator. In this paper, we propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown. GibbsDDRM constructs a joint distribution of the data, measurements, and linear operator by using a pre-trained diffusion model for the data prior, and it solves the problem by posterior sampling with an efficient variant of a Gibbs sampler. The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine-tuning. In experiments, it achieved high performance on both blind image deblurring and vocal dereverberation tasks, despite the use of simple generic priors for the underlying linear operators.
|
https://proceedings.mlr.press/v202/murata23b.html
|
https://proceedings.mlr.press/v202/murata23b/murata23b.pdf
|
https://openreview.net/forum?id=MxWGsrJL32
|
DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning
|
https://proceedings.mlr.press/v202/murata23b.html
|
Tomoya Murata, Taiji Suzuki
|
https://proceedings.mlr.press/v202/murata23b.html
|
ICML 2023
|
Differentially private optimization for a nonconvex smooth objective is considered. In the previous work, the best known utility bound is $\widetilde O(\sqrt{d}/(n\varepsilon_\mathrm{DP}))$ in terms of the squared full gradient norm, which is achieved by Differentially Private Gradient Descent (DP-GD) as an instance, where $n$ is the sample size, $d$ is the problem dimensionality and $\varepsilon_\mathrm{DP}$ is the differential privacy parameter. To improve the best known utility bound, we propose a new differentially private optimization framework called DIFF2 (DIFFerential private optimization via gradient DIFFerences) that constructs a differentially private global gradient estimator with possibly quite small variance based on communicated gradient differences rather than gradients themselves. It is shown that DIFF2 with a gradient descent subroutine achieves the utility of $\widetilde O(d^{2/3}/(n\varepsilon_\mathrm{DP})^{4/3})$, which can be significantly better than the previous one in terms of the dependence on the sample size $n$. To the best of our knowledge, this is the first fundamental result to improve the standard utility $\widetilde O(\sqrt{d}/(n\varepsilon_\mathrm{DP}))$ for nonconvex objectives. Additionally, a more computation- and communication-efficient subroutine is combined with DIFF2 and its theoretical analysis is also given. Numerical experiments are conducted to validate the superiority of the DIFF2 framework.
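To convey the core idea only (this is a loose sketch with hypothetical constants, not the DIFF2 algorithm as specified in the paper): privatize the change in the gradient between successive rounds, which typically has a much smaller norm than the gradient itself, and fold it into a running global estimate.

```python
import numpy as np

def clip(v, c):
    return v * min(1.0, c / (np.linalg.norm(v) + 1e-12))

def private_gradient_update(grads_now, grads_prev, prev_estimate,
                            clip_c=0.1, sigma=0.5, rng=np.random.default_rng(0)):
    # Clip and average per-sample gradient *differences*, then add Gaussian noise.
    diffs = [clip(g - gp, clip_c) for g, gp in zip(grads_now, grads_prev)]
    noise = sigma * clip_c / len(diffs) * rng.standard_normal(diffs[0].shape)
    return prev_estimate + np.mean(diffs, axis=0) + noise

# Toy usage with two per-sample gradients in R^2.
g_now = [np.array([0.9, -0.4]), np.array([1.1, -0.6])]
g_prev = [np.array([1.0, -0.5]), np.array([1.0, -0.5])]
print(private_gradient_update(g_now, g_prev, prev_estimate=np.array([1.0, -0.5])))
```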
|
https://proceedings.mlr.press/v202/murphy23a.html
|
https://proceedings.mlr.press/v202/murphy23a/murphy23a.pdf
|
https://openreview.net/forum?id=81RIPI742h
|
Efficiently predicting high resolution mass spectra with graph neural networks
|
https://proceedings.mlr.press/v202/murphy23a.html
|
Michael Murphy, Stefanie Jegelka, Ernest Fraenkel, Tobias Kind, David Healey, Thomas Butler
|
https://proceedings.mlr.press/v202/murphy23a.html
|
ICML 2023
|
Identifying a small molecule from its mass spectrum is the primary open problem in computational metabolomics. This is typically cast as information retrieval: an unknown spectrum is matched against spectra predicted computationally from a large database of chemical structures. However, current approaches to spectrum prediction model the output space in ways that force a tradeoff between capturing high resolution mass information and tractable learning. We resolve this tradeoff by casting spectrum prediction as a mapping from an input molecular graph to a probability distribution over chemical formulas. We further discover that a large corpus of mass spectra can be closely approximated using a fixed vocabulary constituting only 2% of all observed formulas. This enables efficient spectrum prediction using an architecture similar to graph classification - GrAFF-MS - achieving significantly lower prediction error and greater retrieval accuracy than previous approaches.
|
https://proceedings.mlr.press/v202/mussi23a.html
|
https://proceedings.mlr.press/v202/mussi23a/mussi23a.pdf
|
https://openreview.net/forum?id=XyzhpYy2G4
|
Dynamical Linear Bandits
|
https://proceedings.mlr.press/v202/mussi23a.html
|
Marco Mussi, Alberto Maria Metelli, Marcello Restelli
|
https://proceedings.mlr.press/v202/mussi23a.html
|
ICML 2023
|
In many real-world sequential decision-making problems, an action does not immediately reflect on the feedback and spreads its effects over a long time frame. For instance, in online advertising, investing in a platform produces an instantaneous increase of awareness, but the actual reward, i.e., a conversion, might occur far in the future. Furthermore, whether a conversion takes place depends on: how fast the awareness grows, its vanishing effects, and the synergy or interference with other advertising platforms. Previous work has investigated the Multi-Armed Bandit framework with the possibility of delayed and aggregated feedback, without a particular structure on how an action propagates in the future, disregarding possible dynamical effects. In this paper, we introduce a novel setting, the Dynamical Linear Bandits (DLB), an extension of the linear bandits characterized by a hidden state. When an action is performed, the learner observes a noisy reward whose mean is a linear function of the hidden state and of the action. Then, the hidden state evolves according to linear dynamics, affected by the performed action too. We start by introducing the setting, discussing the notion of optimal policy, and deriving an expected regret lower bound. Then, we provide an optimistic regret minimization algorithm, Dynamical Linear Upper Confidence Bound (DynLin-UCB), that suffers an expected regret of order $\widetilde{\mathcal{O}} \Big( \frac{d \sqrt{T}}{(1-\overline{\rho})^{3/2}} \Big)$, where $\overline{\rho}$ is a measure of the stability of the system, and $d$ is the dimension of the action vector. Finally, we conduct a numerical validation on a synthetic environment and on real-world data to show the effectiveness of DynLin-UCB in comparison with several baselines.
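A minimal simulation of the interaction model described above (all matrices are toy placeholders): the reward is linear in the action and in a hidden state, and the hidden state evolves linearly while being pushed by the action, so an action keeps paying off (or interfering) in later rounds.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_state = 3, 2
A = 0.8 * np.eye(d_state)                      # stable hidden-state dynamics
B = rng.standard_normal((d_state, d))          # how actions push the state
theta = rng.standard_normal(d)                 # instantaneous reward weights
omega = rng.standard_normal(d_state)           # hidden-state reward weights

h = np.zeros(d_state)
for t in range(5):
    a = rng.standard_normal(d); a /= np.linalg.norm(a)      # some policy's action
    reward = theta @ a + omega @ h + 0.1 * rng.standard_normal()
    h = A @ h + B @ a                                        # delayed effect of the action
    print(f"t={t} reward={reward:+.3f}")
```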
|
https://proceedings.mlr.press/v202/nabati23a.html
|
https://proceedings.mlr.press/v202/nabati23a/nabati23a.pdf
|
https://openreview.net/forum?id=qGcVul46dk
|
Representation-Driven Reinforcement Learning
|
https://proceedings.mlr.press/v202/nabati23a.html
|
Ofir Nabati, Guy Tennenholtz, Shie Mannor
|
https://proceedings.mlr.press/v202/nabati23a.html
|
ICML 2023
|
We present a representation-driven framework for reinforcement learning. By representing policies as estimates of their expected values, we leverage techniques from contextual bandits to guide exploration and exploitation. Particularly, embedding a policy network into a linear feature space allows us to reframe the exploration-exploitation problem as a representation-exploitation problem, where good policy representations enable optimal exploration. We demonstrate the effectiveness of this framework through its application to evolutionary and policy gradient-based approaches, leading to significantly improved performance compared to traditional methods. Our framework provides a new perspective on reinforcement learning, highlighting the importance of policy representation in determining optimal exploration-exploitation strategies.
|
https://proceedings.mlr.press/v202/nabli23a.html
|
https://proceedings.mlr.press/v202/nabli23a/nabli23a.pdf
|
https://openreview.net/forum?id=4k8saM66EH
|
DADAO: Decoupled Accelerated Decentralized Asynchronous Optimization
|
https://proceedings.mlr.press/v202/nabli23a.html
|
Adel Nabli, Edouard Oyallon
|
https://proceedings.mlr.press/v202/nabli23a.html
|
ICML 2023
|
This work introduces DADAO: the first decentralized, accelerated, asynchronous, primal, first-order algorithm to minimize a sum of $L$-smooth and $\mu$-strongly convex functions distributed over a given network of size $n$. Our key insight is based on modeling the local gradient updates and gossip communication procedures with separate independent Poisson Point Processes. This allows us to decouple the computation and communication steps, which can be run in parallel, while making the whole approach completely asynchronous. This leads to communication acceleration compared to synchronous approaches. Our new method employs primal gradients and does not use a multi-consensus inner loop nor other ad-hoc mechanisms such as Error Feedback, Gradient Tracking, or a Proximal operator. By relating the inverse of the smallest positive eigenvalue of the Laplacian matrix $\chi_1$ and the maximal resistance $\chi_2\leq \chi_1$ of the graph to a sufficient minimal communication rate between the nodes of the network, we show that our algorithm requires $\mathcal{O}(n\sqrt{\frac{L}{\mu}}\log(\frac{1}{\epsilon}))$ local gradients and only $\mathcal{O}(n\sqrt{\chi_1\chi_2}\sqrt{\frac{L}{\mu}}\log(\frac{1}{\epsilon}))$ communications to reach a precision $\epsilon$, up to logarithmic terms. Thus, we simultaneously obtain an accelerated rate for both computations and communications, leading to an improvement over state-of-the-art works; our simulations further validate the strength of our relatively unconstrained method.
|
https://proceedings.mlr.press/v202/nagaraj23a.html
|
https://proceedings.mlr.press/v202/nagaraj23a/nagaraj23a.pdf
|
https://openreview.net/forum?id=06djx2x2Rf
|
Multi-User Reinforcement Learning with Low Rank Rewards
|
https://proceedings.mlr.press/v202/nagaraj23a.html
|
Dheeraj Mysore Nagaraj, Suhas S Kowshik, Naman Agarwal, Praneeth Netrapalli, Prateek Jain
|
https://proceedings.mlr.press/v202/nagaraj23a.html
|
ICML 2023
|
We consider collaborative multi-user reinforcement learning, where multiple users have the same state-action space and transition probabilities but different rewards. Under the assumption that the reward matrix of the $N$ users has a low-rank structure – a standard and practically successful assumption in the collaborative filtering setting – we design algorithms with significantly lower sample complexity compared to the ones that learn the MDP individually for each user. Our main contribution is an algorithm which explores rewards collaboratively with $N$ user-specific MDPs and can learn rewards efficiently in two key settings: tabular MDPs and linear MDPs. When $N$ is large and the rank is constant, the sample complexity per MDP depends logarithmically over the size of the state-space, which represents an exponential reduction (in the state-space size) when compared to the standard “non-collaborative” algorithms. Our main technical contribution is a method to construct policies which obtain data such that low rank matrix completion is possible (without a generative model). This goes beyond the regular RL framework and is closely related to mean field limits of multi-agent RL.
|
https://proceedings.mlr.press/v202/nagler23a.html
|
https://proceedings.mlr.press/v202/nagler23a/nagler23a.pdf
|
https://openreview.net/forum?id=HFBBLgPL8x
|
Statistical Foundations of Prior-Data Fitted Networks
|
https://proceedings.mlr.press/v202/nagler23a.html
|
Thomas Nagler
|
https://proceedings.mlr.press/v202/nagler23a.html
|
ICML 2023
|
Prior-data fitted networks (PFNs) were recently proposed as a new paradigm for machine learning. Instead of training the network on an observed training set, a fixed model is pre-trained offline on small, simulated training sets from a variety of tasks. The pre-trained model is then used to infer class probabilities in-context on fresh training sets with arbitrary size and distribution. Empirically, PFNs achieve state-of-the-art performance on tasks with similar size to the ones used in pre-training. Surprisingly, their accuracy further improves when passed larger data sets during inference. This article establishes a theoretical foundation for PFNs and illuminates the statistical mechanisms governing their behavior. While PFNs are motivated by Bayesian ideas, a purely frequentist interpretation of PFNs as pre-tuned, but untrained, predictors explains their behavior. A predictor’s variance vanishes if its sensitivity to individual training samples does, and the bias vanishes only if it is appropriately localized around the test feature. The transformer architecture used in current PFN implementations ensures only the former. These findings shall prove useful for designing architectures with favorable empirical behavior.
|
https://proceedings.mlr.press/v202/naik23a.html
|
https://proceedings.mlr.press/v202/naik23a/naik23a.pdf
|
https://openreview.net/forum?id=CtKKqTZFuV
|
Do Machine Learning Models Learn Statistical Rules Inferred from Data?
|
https://proceedings.mlr.press/v202/naik23a.html
|
Aaditya Naik, Yinjun Wu, Mayur Naik, Eric Wong
|
https://proceedings.mlr.press/v202/naik23a.html
|
ICML 2023
|
Machine learning models can make critical errors that are easily hidden within vast amounts of data. Such errors often run counter to rules based on human intuition. However, rules based on human knowledge are challenging to scale or to even formalize. We thereby seek to infer statistical rules from the data and quantify the extent to which a model has learned them. We propose a framework SQRL that integrates logic-based methods with statistical inference to derive these rules from a model’s training data without supervision. We further show how to adapt models at test time to reduce rule violations and produce more coherent predictions. SQRL generates up to 300K rules over datasets from vision, tabular, and language settings. We uncover up to 158K violations of those rules by state-of-the-art models for classification, object detection, and data imputation. Test-time adaptation reduces these violations by up to 68.7% with relative performance improvement up to 32%. SQRL is available at https://github.com/DebugML/sqrl.
|
https://proceedings.mlr.press/v202/naiman23a.html
|
https://proceedings.mlr.press/v202/naiman23a/naiman23a.pdf
|
https://openreview.net/forum?id=t1ZPGMHyWL
|
Sample and Predict Your Latent: Modality-free Sequential Disentanglement via Contrastive Estimation
|
https://proceedings.mlr.press/v202/naiman23a.html
|
Ilan Naiman, Nimrod Berman, Omri Azencot
|
https://proceedings.mlr.press/v202/naiman23a.html
|
ICML 2023
|
Unsupervised disentanglement is a long-standing challenge in representation learning. Recently, self-supervised techniques achieved impressive results in the sequential setting, where data is time-dependent. However, the latter methods employ modality-based data augmentations and random sampling or solve auxiliary tasks. In this work, we propose to avoid that by generating, sampling, and comparing empirical distributions from the underlying variational model. Unlike existing work, we introduce a self-supervised sequential disentanglement framework based on contrastive estimation with no external signals, while using common batch sizes and samples from the latent space itself. In practice, we propose a unified, efficient, and easy-to-code sampling strategy for semantically similar and dissimilar views of the data. We evaluate our approach on video, audio, and time series benchmarks. Our method presents state-of-the-art results in comparison to existing techniques. The code is available at https://github.com/azencot-group/SPYL.
|
https://proceedings.mlr.press/v202/nasr23a.html
|
https://proceedings.mlr.press/v202/nasr23a/nasr23a.pdf
|
https://openreview.net/forum?id=smrYWkIV9J
|
Effectively Using Public Data in Privacy Preserving Machine Learning
|
https://proceedings.mlr.press/v202/nasr23a.html
|
Milad Nasr, Saeed Mahloujifar, Xinyu Tang, Prateek Mittal, Amir Houmansadr
|
https://proceedings.mlr.press/v202/nasr23a.html
|
ICML 2023
|
Differentially private (DP) machine learning techniques are notorious for their degradation of model utility (e.g., they degrade classification accuracy). A recent line of work has demonstrated that leveraging public data can improve the trade-off between privacy and utility when training models with DP guarantees. In this work, we further explore the potential of using public data in DP models, showing that utility gains can in fact be significantly higher than what was shown in prior works. Specifically, we introduce DOPE-SGD, a modified DP-SGD algorithm that leverages public data during its training. DOPE-SGD uses public data in two complementary ways: (1) it uses advanced augmentation techniques that leverage public data to generate synthetic data that is effectively embedded in multiple steps of the training pipeline; (2) it uses a modified gradient clipping mechanism (which is a standard technique in DP training) to change the origin of gradient vectors using the information inferred from available public and synthetic data, thereby boosting utility. We also introduce a technique to ensemble intermediate DP models by leveraging the post-processing property of differential privacy to further improve the accuracy of the predictions. Our experimental results demonstrate the effectiveness of our approach in improving the state-of-the-art in DP machine learning across multiple datasets, network architectures, and application domains. For instance, assuming access to $2,000$ public images, and for a privacy budget of $\varepsilon=2,\delta=10^{-5}$, our technique achieves an accuracy of $75.1\%$ on CIFAR10, significantly higher than the $68.1\%$ achieved by the state of the art.
|
https://proceedings.mlr.press/v202/nasr-esfahany23a.html
|
https://proceedings.mlr.press/v202/nasr-esfahany23a/nasr-esfahany23a.pdf
|
https://openreview.net/forum?id=4KeshY2gzB
|
Counterfactual Identifiability of Bijective Causal Models
|
https://proceedings.mlr.press/v202/nasr-esfahany23a.html
|
Arash Nasr-Esfahany, Mohammad Alizadeh, Devavrat Shah
|
https://proceedings.mlr.press/v202/nasr-esfahany23a.html
|
ICML 2023
|
We study counterfactual identifiability in causal models with bijective generation mechanisms (BGM), a class that generalizes several widely-used causal models in the literature. We establish their counterfactual identifiability for three common causal structures with unobserved confounding, and propose a practical learning method that casts learning a BGM as structured generative modeling. Learned BGMs enable efficient counterfactual estimation and can be obtained using a variety of deep conditional generative models. We evaluate our techniques in a visual task and demonstrate its application in a real-world video streaming simulation task.
|
https://proceedings.mlr.press/v202/nath23a.html
|
https://proceedings.mlr.press/v202/nath23a/nath23a.pdf
|
https://openreview.net/forum?id=WtXN9bQqWl
|
Discovering Object-Centric Generalized Value Functions From Pixels
|
https://proceedings.mlr.press/v202/nath23a.html
|
Somjit Nath, Gopeshh Subbaraj, Khimya Khetarpal, Samira Ebrahimi Kahou
|
https://proceedings.mlr.press/v202/nath23a.html
|
ICML 2023
|
Deep Reinforcement Learning has shown significant progress in extracting useful representations from high-dimensional inputs, albeit using hand-crafted auxiliary tasks and pseudo rewards. Automatically learning such representations in an object-centric manner geared towards control and fast adaptation remains an open research problem. In this paper, we introduce a method that tries to discover meaningful features from objects, translating them to temporally coherent ‘question’ functions and leveraging the subsequent learned general value functions for control. We compare our approach with state-of-the-art techniques alongside other ablations and show competitive performance in both stationary and non-stationary settings. Finally, we also investigate the discovered general value functions and through qualitative analysis show that the learned representations are not only interpretable but also centered around objects that are invariant to changes across tasks, facilitating fast adaptation.
|
https://proceedings.mlr.press/v202/nauman23a.html
|
https://proceedings.mlr.press/v202/nauman23a/nauman23a.pdf
|
https://openreview.net/forum?id=HKfSTYLJh7
|
On Many-Actions Policy Gradient
|
https://proceedings.mlr.press/v202/nauman23a.html
|
Michal Nauman, Marek Cygan
|
https://proceedings.mlr.press/v202/nauman23a.html
|
ICML 2023
|
We study the variance of stochastic policy gradients (SPGs) with many action samples per state. We derive a many-actions optimality condition, which determines when many-actions SPG yields lower variance as compared to a single-action agent with proportionally extended trajectory. We propose Model-Based Many-Actions (MBMA), an approach leveraging dynamics models for many-actions sampling in the context of SPG. MBMA addresses issues associated with existing implementations of many-actions SPG and yields lower bias and comparable variance to SPG estimated from states in model-simulated rollouts. We find that MBMA bias and variance structure matches that predicted by theory. As a result, MBMA achieves improved sample efficiency and higher returns on a range of continuous action environments as compared to model-free, many-actions, and model-based on-policy SPG baselines.
|
https://proceedings.mlr.press/v202/navon23a.html
|
https://proceedings.mlr.press/v202/navon23a/navon23a.pdf
|
https://openreview.net/forum?id=SCU1xlr9Y4
|
Equivariant Architectures for Learning in Deep Weight Spaces
|
https://proceedings.mlr.press/v202/navon23a.html
|
Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, Haggai Maron
|
https://proceedings.mlr.press/v202/navon23a.html
|
ICML 2023
|
Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction. Unfortunately, the unique symmetry structure of deep weight spaces makes this design very challenging. If successful, such architectures would be capable of performing a wide range of intriguing tasks, from adapting a pre-trained network to a new domain to editing objects represented as functions (INRs or NeRFs). As a first step towards this goal, we present here a novel network architecture for learning in deep weight spaces. It takes as input a concatenation of weights and biases of a pre-trained MLP and processes it using a composition of layers that are equivariant to the natural permutation symmetry of the MLP’s weights: Changing the order of neurons in intermediate layers of the MLP does not affect the function it represents. We provide a full characterization of all affine equivariant and invariant layers for these symmetries and show how these layers can be implemented using three basic operations: pooling, broadcasting, and fully connected layers applied to the input in an appropriate manner. We demonstrate the effectiveness of our architecture and its advantages over natural baselines in a variety of learning tasks.
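As a much-simplified illustration of the three building blocks named above (pooling, broadcasting, fully connected maps), here is a DeepSets-style permutation-equivariant linear layer together with a numerical equivariance check; the paper's layers handle the richer neuron-permutation symmetry of MLP weight spaces, which this toy does not.

```python
import numpy as np

def equivariant_layer(X, A, B):
    # X: (n, d) set of n items; permuting the rows of X permutes the output rows.
    pooled = X.mean(axis=0, keepdims=True)                      # pooling
    return X @ A + np.repeat(pooled @ B, X.shape[0], axis=0)    # FC maps + broadcast

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3))
perm = rng.permutation(6)
out1 = equivariant_layer(X, A, B)[perm]
out2 = equivariant_layer(X[perm], A, B)
print(np.allclose(out1, out2))                 # True: the layer commutes with permutations
```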
|
https://proceedings.mlr.press/v202/nayak23a.html
|
https://proceedings.mlr.press/v202/nayak23a/nayak23a.pdf
|
https://openreview.net/forum?id=SaZvBUk2q4
|
Scalable Multi-Agent Reinforcement Learning through Intelligent Information Aggregation
|
https://proceedings.mlr.press/v202/nayak23a.html
|
Siddharth Nayak, Kenneth Choi, Wenqi Ding, Sydney Dolan, Karthik Gopalakrishnan, Hamsa Balakrishnan
|
https://proceedings.mlr.press/v202/nayak23a.html
|
ICML 2023
|
We consider the problem of multi-agent navigation and collision avoidance when observations are limited to the local neighborhood of each agent. We propose InforMARL, a novel architecture for multi-agent reinforcement learning (MARL) which uses local information intelligently to compute paths for all the agents in a decentralized manner. Specifically, InforMARL aggregates information about the local neighborhood of agents for both the actor and the critic using a graph neural network and can be used in conjunction with any standard MARL algorithm. We show that (1) in training, InforMARL has better sample efficiency and performance than baseline approaches, despite using less information, and (2) in testing, it scales well to environments with arbitrary numbers of agents and obstacles. We illustrate these results using four task environments, including one with predetermined goals for each agent, and one in which the agents collectively try to cover all goals.
|
https://proceedings.mlr.press/v202/nazari23a.html
|
https://proceedings.mlr.press/v202/nazari23a/nazari23a.pdf
|
https://openreview.net/forum?id=VAEZ4nrqAk
|
Geometric Autoencoders - What You See is What You Decode
|
https://proceedings.mlr.press/v202/nazari23a.html
|
Philipp Nazari, Sebastian Damrich, Fred A Hamprecht
|
https://proceedings.mlr.press/v202/nazari23a.html
|
ICML 2023
|
Visualization is a crucial step in exploratory data analysis. One possible approach is to train an autoencoder with a low-dimensional latent space. Large network depth and width can help unfold the data. However, such expressive networks can achieve low reconstruction error even when the latent representation is distorted. To avoid such misleading visualizations, we propose first a differential geometric perspective on the decoder, leading to insightful diagnostics for an embedding’s distortion, and second a new regularizer mitigating such distortion. Our “Geometric Autoencoder” avoids stretching the embedding spuriously, so that the visualization captures the data structure more faithfully. It also flags areas where little distortion could not be achieved, thus guarding against misinterpretation.
|
https://proceedings.mlr.press/v202/neklyudov23a.html
|
https://proceedings.mlr.press/v202/neklyudov23a/neklyudov23a.pdf
|
https://openreview.net/forum?id=E3BW8pG64Y
|
Action Matching: Learning Stochastic Dynamics from Samples
|
https://proceedings.mlr.press/v202/neklyudov23a.html
|
Kirill Neklyudov, Rob Brekelmans, Daniel Severo, Alireza Makhzani
|
https://proceedings.mlr.press/v202/neklyudov23a.html
|
ICML 2023
|
Learning the continuous dynamics of a system from snapshots of its temporal marginals is a problem which appears throughout natural sciences and machine learning, including in quantum systems, single-cell biological data, and generative modeling. In these settings, we assume access to cross-sectional samples that are uncorrelated over time, rather than full trajectories of samples. In order to better understand the systems under observation, we would like to learn a model of the underlying process that allows us to propagate samples in time and thereby simulate entire individual trajectories. In this work, we propose Action Matching, a method for learning a rich family of dynamics using only independent samples from its time evolution. We derive a tractable training objective, which does not rely on explicit assumptions about the underlying dynamics and does not require back-propagation through differential equations or optimal transport solvers. Inspired by connections with optimal transport, we derive extensions of Action Matching to learn stochastic differential equations and dynamics involving creation and destruction of probability mass. Finally, we showcase applications of Action Matching by achieving competitive performance in a diverse set of experiments from biology, physics, and generative modeling.
|
https://proceedings.mlr.press/v202/nettasinghe23a.html
|
https://proceedings.mlr.press/v202/nettasinghe23a/nettasinghe23a.pdf
|
https://openreview.net/forum?id=Wf3MKy02wv
|
Extending Conformal Prediction to Hidden Markov Models with Exact Validity via de Finetti’s Theorem for Markov Chains
|
https://proceedings.mlr.press/v202/nettasinghe23a.html
|
Buddhika Nettasinghe, Samrat Chatterjee, Ramakrishna Tipireddy, Mahantesh M Halappanavar
|
https://proceedings.mlr.press/v202/nettasinghe23a.html
|
ICML 2023
|
Conformal prediction is a widely used method to quantify the uncertainty of a classifier under the assumption of exchangeability (e.g., IID data). We generalize conformal prediction to the Hidden Markov Model (HMM) framework where the assumption of exchangeability is not valid. The key idea of the proposed method is to partition the non-exchangeable Markovian data from the HMM into exchangeable blocks by exploiting de Finetti’s Theorem for Markov Chains discovered by Diaconis and Freedman (1980). The permutations of the exchangeable blocks are viewed as randomizations of the observed Markovian data from the HMM. The proposed method provably retains all desirable theoretical guarantees offered by the classical conformal prediction framework in both exchangeable and Markovian settings. In particular, while the lack of exchangeability introduced by Markovian samples constitutes a violation of a crucial assumption for classical conformal prediction, the proposed method views it as an advantage that can be exploited to improve the performance further. Detailed numerical and empirical results that complement the theoretical conclusions are provided to illustrate the practical feasibility of the proposed method.
|
https://proceedings.mlr.press/v202/nguyen23a.html
|
https://proceedings.mlr.press/v202/nguyen23a/nguyen23a.pdf
|
https://openreview.net/forum?id=TowCaiz7Ui
|
ClimaX: A foundation model for weather and climate
|
https://proceedings.mlr.press/v202/nguyen23a.html
|
Tung Nguyen, Johannes Brandstetter, Ashish Kapoor, Jayesh K Gupta, Aditya Grover
|
https://proceedings.mlr.press/v202/nguyen23a.html
|
ICML 2023
|
Recent data-driven approaches based on machine learning aim to directly solve a downstream forecasting or projection task by learning a data-driven functional mapping using deep neural networks. However, these networks are trained using curated and homogeneous climate datasets for specific spatiotemporal tasks, and thus lack the generality of currently used computationally intensive physics-informed numerical models for weather and climate modeling. We develop and demonstrate ClimaX, a flexible and generalizable deep learning model for weather and climate science that can be trained using heterogeneous datasets spanning different variables, spatio-temporal coverage, and physical groundings. ClimaX extends the Transformer architecture with novel encoding and aggregation blocks that allow effective use of available compute and data while maintaining general utility. ClimaX is pretrained with a self-supervised learning objective on climate datasets derived from CMIP6. The pretrained ClimaX can then be fine-tuned to address a breadth of climate and weather tasks, including those that involve atmospheric variables and spatio-temporal scales unseen during pretraining. Compared to existing data-driven baselines, we show that this generality in ClimaX results in superior performance on benchmarks for weather forecasting and climate projections, even when pretrained at lower resolutions and compute budgets. Our source code is available at https://github.com/microsoft/ClimaX.
|
https://proceedings.mlr.press/v202/nguyen23b.html
|
https://proceedings.mlr.press/v202/nguyen23b/nguyen23b.pdf
|
https://openreview.net/forum?id=ZhPSR06M5C
|
Provable Reset-free Reinforcement Learning by No-Regret Reduction
|
https://proceedings.mlr.press/v202/nguyen23b.html
|
Hoai-An Nguyen, Ching-An Cheng
|
https://proceedings.mlr.press/v202/nguyen23b.html
|
ICML 2023
|
Reinforcement learning (RL) has so far seen limited real-world application. One key challenge is that typical RL algorithms heavily rely on a reset mechanism to sample proper initial states; these reset mechanisms, in practice, are expensive to implement due to the need for human intervention or heavily engineered environments. To make learning more practical, we propose a generic no-regret reduction to systematically design reset-free RL algorithms. Our reduction turns the reset-free RL problem into a two-player game. We show that achieving sublinear regret in this two-player game would imply learning a policy that has both sublinear performance regret and a sublinear total number of resets in the original RL problem. This means that the agent eventually learns to perform optimally and avoid resets. To demonstrate the effectiveness of this reduction, we design an instantiation for linear Markov decision processes, which is the first provably correct reset-free RL algorithm.
|
https://proceedings.mlr.press/v202/nguyen23c.html
|
https://proceedings.mlr.press/v202/nguyen23c/nguyen23c.pdf
|
https://openreview.net/forum?id=eWAvwKajx2
|
Revisiting Over-smoothing and Over-squashing Using Ollivier-Ricci Curvature
|
https://proceedings.mlr.press/v202/nguyen23c.html
|
Khang Nguyen, Nong Minh Hieu, Vinh Duc Nguyen, Nhat Ho, Stanley Osher, Tan Minh Nguyen
|
https://proceedings.mlr.press/v202/nguyen23c.html
|
ICML 2023
|
Graph Neural Networks (GNNs) have been shown to be inherently susceptible to the problems of over-smoothing and over-squashing. These issues limit the ability of GNNs to model complex graph interactions by restricting how effectively they can take distant information into account. Our study reveals the key connection between the local graph geometry and the occurrence of both of these issues, thereby providing a unified framework for studying them at a local scale using the Ollivier-Ricci curvature. Specifically, we demonstrate that over-smoothing is linked to positive graph curvature while over-squashing is linked to negative graph curvature. Based on our theory, we propose the Batch Ollivier-Ricci Flow, a novel rewiring algorithm capable of simultaneously addressing both over-smoothing and over-squashing.
|
https://proceedings.mlr.press/v202/nguyen23d.html
|
https://proceedings.mlr.press/v202/nguyen23d/nguyen23d.pdf
|
https://openreview.net/forum?id=ElgoXPdI5l
|
Deep Clustering with Incomplete Noisy Pairwise Annotations: A Geometric Regularization Approach
|
https://proceedings.mlr.press/v202/nguyen23d.html
|
Tri Nguyen, Shahana Ibrahim, Xiao Fu
|
https://proceedings.mlr.press/v202/nguyen23d.html
|
ICML 2023
|
The recent integration of deep learning and pairwise similarity annotation-based constrained clustering—i.e., deep constrained clustering (DCC)—has proven effective for incorporating weak supervision into massive data clustering: Less than 1% of pair similarity annotations can often substantially enhance the clustering accuracy. However, beyond empirical successes, there is a lack of understanding of DCC. In addition, many DCC paradigms are sensitive to annotation noise, but performance-guaranteed noisy DCC methods have been largely elusive. This work first takes a deep look into a recently proposed logistic loss function for DCC and characterizes its theoretical properties. Our result shows that the logistic DCC loss ensures the identifiability of data membership under reasonable conditions, which may shed light on its effectiveness in practice. Building upon this understanding, a new loss function based on geometric factor analysis is proposed to defend against noisy annotations. It is shown that even under unknown annotation confusions, the data membership can still be provably identified under our proposed learning criterion. The proposed approach is tested over multiple datasets to validate our claims.
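For intuition, here is one plausible reading (my sketch, not necessarily the exact loss analyzed in the paper) of a logistic pairwise-constrained clustering loss: the probability that two items share a cluster is the inner product of their soft membership vectors, scored with the log loss against the (possibly noisy) annotation.
```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def logistic_dcc_loss(logits_i, logits_j, same_cluster):
    """Pairwise logistic loss on soft cluster memberships (illustrative form)."""
    p_same = float(softmax(logits_i) @ softmax(logits_j))       # in (0, 1)
    p_same = np.clip(p_same, 1e-7, 1 - 1e-7)
    return -(same_cluster * np.log(p_same) + (1 - same_cluster) * np.log(1 - p_same))

print(logistic_dcc_loss(np.array([2.0, -1.0, 0.3]), np.array([1.5, -0.5, 0.0]), same_cluster=1))
```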
|
https://proceedings.mlr.press/v202/nguyen23e.html
|
https://proceedings.mlr.press/v202/nguyen23e/nguyen23e.pdf
|
https://openreview.net/forum?id=6s64XSlhJC
|
Self-Attention Amortized Distributional Projection Optimization for Sliced Wasserstein Point-Cloud Reconstruction
|
https://proceedings.mlr.press/v202/nguyen23e.html
|
Khai Nguyen, Dang Nguyen, Nhat Ho
|
https://proceedings.mlr.press/v202/nguyen23e.html
|
ICML 2023
|
The max sliced Wasserstein (Max-SW) distance is widely known as a remedy for the less discriminative projections of the sliced Wasserstein (SW) distance. In applications that have various independent pairs of probability measures, amortized projection optimization is utilized to predict the “max” projecting directions given two input measures, instead of using projected gradient ascent multiple times. Despite being efficient, Max-SW and its amortized version cannot guarantee the metricity property due to the sub-optimality of the projected gradient ascent and the amortization gap. Therefore, we propose to replace Max-SW with the distributional sliced Wasserstein distance with von Mises-Fisher (vMF) projecting distribution (v-DSW). Since v-DSW is a metric with any non-degenerate vMF distribution, its amortized version can guarantee metricity when performing amortization. Furthermore, current amortized models are not permutation invariant and symmetric. To address the issue, we design amortized models based on self-attention architecture. In particular, we adopt efficient self-attention architectures to make the computation linear in the number of supports. With the two improvements, we derive self-attention amortized distributional projection optimization and show its appealing performance in point-cloud reconstruction and its downstream applications.
|
https://proceedings.mlr.press/v202/nguyen23f.html
|
https://proceedings.mlr.press/v202/nguyen23f/nguyen23f.pdf
|
https://openreview.net/forum?id=kjEKnLhZQV
|
Building Neural Networks on Matrix Manifolds: A Gyrovector Space Approach
|
https://proceedings.mlr.press/v202/nguyen23f.html
|
Xuan Son Nguyen, Shuo Yang
|
https://proceedings.mlr.press/v202/nguyen23f.html
|
ICML 2023
|
Matrix manifolds, such as manifolds of Symmetric Positive Definite (SPD) matrices and Grassmann manifolds, appear in many applications. Recently, by applying the theory of gyrogroups and gyrovector spaces, which is a powerful framework for studying hyperbolic geometry, some works have attempted to build principled generalizations of Euclidean neural networks on matrix manifolds. However, due to the lack of many concepts in gyrovector spaces for the considered manifolds, e.g., the inner product and gyroangles, techniques and mathematical tools provided by these works are still limited compared to those developed for studying hyperbolic geometry. In this paper, we generalize some notions in gyrovector spaces for SPD and Grassmann manifolds, and propose new models and layers for building neural networks on these manifolds. We show the effectiveness of our approach in two applications, i.e., human action recognition and knowledge graph completion.
|
https://proceedings.mlr.press/v202/ngweta23a.html
|
https://proceedings.mlr.press/v202/ngweta23a/ngweta23a.pdf
|
https://openreview.net/forum?id=oupdxuURWD
|
Simple Disentanglement of Style and Content in Visual Representations
|
https://proceedings.mlr.press/v202/ngweta23a.html
|
Lilian Ngweta, Subha Maity, Alex Gittens, Yuekai Sun, Mikhail Yurochkin
|
https://proceedings.mlr.press/v202/ngweta23a.html
|
ICML 2023
|
Learning visual representations with interpretable features, i.e., disentangled representations, remains a challenging problem. Existing methods demonstrate some success but are hard to apply to large-scale vision datasets like ImageNet. In this work, we propose a simple post-processing framework to disentangle content and style in learned representations from pre-trained vision models. We model the pre-trained features probabilistically as linearly entangled combinations of the latent content and style factors and develop a simple disentanglement algorithm based on the probabilistic model. We show that the method provably disentangles content and style features and verify its efficacy empirically. Our post-processed features yield significant domain generalization performance improvements when the distribution shift occurs due to style changes or style-related spurious correlations.
|
https://proceedings.mlr.press/v202/ni23a.html
|
https://proceedings.mlr.press/v202/ni23a/ni23a.pdf
|
https://openreview.net/forum?id=IKCk6th595
|
MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
|
https://proceedings.mlr.press/v202/ni23a.html
|
Fei Ni, Jianye Hao, Yao Mu, Yifu Yuan, Yan Zheng, Bin Wang, Zhixuan Liang
|
https://proceedings.mlr.press/v202/ni23a.html
|
ICML 2023
|
Recently, diffusion models have shone as a promising backbone for the sequence modeling paradigm in offline reinforcement learning (RL). However, these works mostly lack generalization ability across tasks with reward or dynamics changes. To tackle this challenge, in this paper we propose a task-oriented conditioned diffusion planner for offline meta-RL (MetaDiffuser), which treats the generalization problem as a conditional trajectory generation task with contextual representation. The key is to learn a context-conditioned diffusion model which can generate task-oriented trajectories for planning across diverse tasks. To enhance the dynamics consistency of the generated trajectories while encouraging trajectories to achieve high returns, we further design a dual-guided module in the sampling process of the diffusion model. The proposed framework is robust to the quality of collected warm-start data from the testing task and flexible enough to incorporate different task representation methods. The experiment results on MuJoCo benchmarks show that MetaDiffuser outperforms other strong offline meta-RL baselines, demonstrating the outstanding conditional generation ability of the diffusion architecture.
|
https://proceedings.mlr.press/v202/ni23b.html
|
https://proceedings.mlr.press/v202/ni23b/ni23b.pdf
|
https://openreview.net/forum?id=Gj3zN9zs4v
|
LEVER: Learning to Verify Language-to-Code Generation with Execution
|
https://proceedings.mlr.press/v202/ni23b.html
|
Ansong Ni, Srini Iyer, Dragomir Radev, Veselin Stoyanov, Wen-Tau Yih, Sida Wang, Xi Victoria Lin
|
https://proceedings.mlr.press/v202/ni23b.html
|
ICML 2023
|
The advent of large language models trained on code (code LLMs) has led to significant progress in language-to-code generation. State-of-the-art approaches in this area combine LLM decoding with sample pruning and reranking using test cases or heuristics based on the execution results. However, it is challenging to obtain test cases for many real-world language-to-code applications, and heuristics cannot well capture the semantic features of the execution results, such as data type and value range, which often indicate the correctness of the program. In this work, we propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results. Specifically, we train verifiers to determine whether a program sampled from the LLMs is correct or not based on the natural language input, the program itself and its execution results. The sampled programs are reranked by combining the verification score with the LLM generation probability, and marginalizing over programs with the same execution results. On four datasets across the domains of table QA, math QA and basic Python programming, LEVER consistently improves over the base code LLMs (4.6% to 10.9% with code-davinci-002) and achieves new state-of-the-art results on all of them.
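The reranking step described above can be sketched in a few lines; the field names (gen_logprob, verifier_prob, exec_result) are illustrative assumptions rather than the paper's API, but the logic follows the abstract: score each candidate by generation probability times verifier probability, marginalize scores over candidates that produce the same execution result, and return a program from the winning group.
```python
import math
from collections import defaultdict

def lever_rerank(candidates):
    """Aggregate scores per execution result, pick the best result, then the best program."""
    score_by_result = defaultdict(float)
    for c in candidates:
        score_by_result[c["exec_result"]] += math.exp(c["gen_logprob"]) * c["verifier_prob"]
    best_result = max(score_by_result, key=score_by_result.get)
    pool = [c for c in candidates if c["exec_result"] == best_result]
    return max(pool, key=lambda c: math.exp(c["gen_logprob"]) * c["verifier_prob"])

samples = [
    {"program": "ans = 2 + 2",  "gen_logprob": -1.2, "verifier_prob": 0.9, "exec_result": "4"},
    {"program": "ans = 2 * 2",  "gen_logprob": -1.5, "verifier_prob": 0.8, "exec_result": "4"},
    {"program": "ans = 2 ** 3", "gen_logprob": -0.9, "verifier_prob": 0.3, "exec_result": "8"},
]
print(lever_rerank(samples)["program"])   # the two candidates returning "4" reinforce each other
```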
|
https://proceedings.mlr.press/v202/ni23c.html
|
https://proceedings.mlr.press/v202/ni23c/ni23c.pdf
|
https://openreview.net/forum?id=XqtpvTnJum
|
Continual Vision-Language Representation Learning with Off-Diagonal Information
|
https://proceedings.mlr.press/v202/ni23c.html
|
Zixuan Ni, Longhui Wei, Siliang Tang, Yueting Zhuang, Qi Tian
|
https://proceedings.mlr.press/v202/ni23c.html
|
ICML 2023
|
Large-scale multi-modal contrastive learning frameworks like CLIP typically require a large number of image-text samples for training. However, these samples are always collected continuously in real scenarios. This paper discusses the feasibility of continual CLIP training using streaming data. Unlike continual learning based on self-supervised learning methods for pure images, which is empirically robust against catastrophic forgetting, CLIP’s performance degeneration in the continual setting is significant and non-negligible. By analyzing the changes in the model’s representation space during continual CLIP training from a spatial geometry perspective, we explore and summarize these spatial variations as Spatial Disorder (SD), which can be divided into Intra-modal Rotation and Inter-modal Deviation. Moreover, we empirically and theoretically demonstrate how SD leads to a performance decline for CLIP on cross-modal retrieval tasks. To alleviate SD, we propose a new continual vision-language representation learning framework, Mod-X: Maintain off-diagonal information-matriX. By selectively aligning the off-diagonal information distribution of contrastive matrices, Mod-X improves the capability of the multi-modal model by maintaining the multi-modal representation space alignment on the old data domain while continuously fitting the new training data domain. Experiments on commonly used datasets with different scales and scopes have demonstrated the effectiveness of our method.
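One plausible reading of "selectively aligning the off-diagonal information distribution" (my sketch, not the paper's exact objective) is a KL term that pulls the new model's row-normalized similarity matrix toward the frozen old model's, restricted to off-diagonal entries.
```python
import numpy as np

def off_diagonal_alignment_loss(sim_new, sim_old, tau=0.07):
    """KL between old and new row-wise similarity distributions, off-diagonal entries only."""
    def row_softmax(S):
        z = S / tau
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    p_old, p_new = row_softmax(sim_old), row_softmax(sim_new)
    mask = 1.0 - np.eye(sim_new.shape[0])                 # drop the diagonal (the usual CLIP targets)
    kl = mask * p_old * (np.log(p_old + 1e-9) - np.log(p_new + 1e-9))
    return float(kl.sum() / len(sim_new))

rng = np.random.default_rng(0)
sim_old = rng.normal(size=(8, 8))
sim_new = sim_old + 0.1 * rng.normal(size=(8, 8))         # drifted similarities after an update
print(off_diagonal_alignment_loss(sim_new, sim_old))
```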
|
https://proceedings.mlr.press/v202/nie23a.html
|
https://proceedings.mlr.press/v202/nie23a/nie23a.pdf
|
https://openreview.net/forum?id=mFE1SBf6av
|
Attributing Image Generative Models using Latent Fingerprints
|
https://proceedings.mlr.press/v202/nie23a.html
|
Guangyu Nie, Changhoon Kim, Yezhou Yang, Yi Ren
|
https://proceedings.mlr.press/v202/nie23a.html
|
ICML 2023
|
Generative models have enabled the creation of content that is indistinguishable from content taken from nature. Open-source development of such models has raised concerns about the risks of their misuse for malicious purposes. One potential risk mitigation strategy is to attribute generative models via fingerprinting. Current fingerprinting methods exhibit a significant tradeoff between robust attribution accuracy and generation quality, while lacking design principles to improve this tradeoff. This paper investigates the use of latent semantic dimensions as fingerprints, from which we can analyze the effects of design variables, including the choice of fingerprinting dimensions, strength, and capacity, on the accuracy-quality tradeoff. Compared with previous SOTA, our method requires minimal computation and is more applicable to large-scale models. We use StyleGAN2 and the latent diffusion model to demonstrate the efficacy of our method.
|
https://proceedings.mlr.press/v202/nie23b.html
|
https://proceedings.mlr.press/v202/nie23b/nie23b.pdf
|
https://openreview.net/forum?id=fBDP40MrQS
|
A Framework for Adapting Offline Algorithms to Solve Combinatorial Multi-Armed Bandit Problems with Bandit Feedback
|
https://proceedings.mlr.press/v202/nie23b.html
|
Guanyu Nie, Yididiya Y. Nadew, Yanhui Zhu, Vaneet Aggarwal, Christopher John Quinn
|
https://proceedings.mlr.press/v202/nie23b.html
|
ICML 2023
|
We investigate the problem of stochastic, combinatorial multi-armed bandits where the learner only has access to bandit feedback and the reward function can be non-linear. We provide a general framework for adapting discrete offline approximation algorithms into sublinear $\alpha$-regret methods that only require bandit feedback, achieving $\mathcal{O}\left(T^\frac{2}{3}\log(T)^\frac{1}{3}\right)$ expected cumulative $\alpha$-regret dependence on the horizon $T$. The framework only requires the offline algorithms to be robust to small errors in function evaluation. The adaptation procedure does not even require explicit knowledge of the offline approximation algorithm — the offline algorithm can be used as a black-box subroutine. To demonstrate the utility of the proposed framework, we apply it to multiple problems in submodular maximization, adapting approximation algorithms for cardinality and for knapsack constraints. The new CMAB algorithms for knapsack constraints outperform a full-bandit method developed for the adversarial setting in experiments with real-world data.
|
https://proceedings.mlr.press/v202/nikankin23a.html
|
https://proceedings.mlr.press/v202/nikankin23a/nikankin23a.pdf
|
https://openreview.net/forum?id=9n9NJ4qMV6
|
SinFusion: Training Diffusion Models on a Single Image or Video
|
https://proceedings.mlr.press/v202/nikankin23a.html
|
Yaniv Nikankin, Niv Haim, Michal Irani
|
https://proceedings.mlr.press/v202/nikankin23a.html
|
ICML 2023
|
Diffusion models have exhibited tremendous progress in image and video generation, exceeding GANs in quality and diversity. However, they are usually trained on very large datasets and are not naturally adapted to manipulate a given input image or video. In this paper we show how this can be resolved by training a diffusion model on a single input image or video. Our image/video-specific diffusion model (SinFusion) learns the appearance and dynamics of the single image or video, while utilizing the conditioning capabilities of diffusion models. It can solve a wide array of image/video-specific manipulation tasks. In particular, our model can learn from few frames the motion and dynamics of a single input video. It can then generate diverse new video samples of the same dynamic scene, extrapolate short videos into long ones (both forward and backward in time) and perform video upsampling. Most of these tasks are not realizable by current video-specific generation methods.
|
https://proceedings.mlr.press/v202/nikdan23a.html
|
https://proceedings.mlr.press/v202/nikdan23a/nikdan23a.pdf
|
https://openreview.net/forum?id=JSTp7NiuYi
|
SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks at the Edge
|
https://proceedings.mlr.press/v202/nikdan23a.html
|
Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh
|
https://proceedings.mlr.press/v202/nikdan23a.html
|
ICML 2023
|
We provide an efficient implementation of the backpropagation algorithm, specialized to the case where the weights of the neural network being trained are sparse. Our algorithm is general, as it applies to arbitrary (unstructured) sparsity and common layer types (e.g., convolutional or linear). We provide a fast vectorized implementation on commodity CPUs, and show that it can yield speedups in end-to-end runtime experiments, both in transfer learning using already-sparsified networks, and in training sparse networks from scratch. Thus, our results provide the first support for sparse training on commodity hardware.
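The reason sparsity helps the backward pass can be shown in a few lines (a schematic of the idea only — the paper's contribution is a fast vectorized CPU kernel, which this numpy/scipy sketch does not reproduce): the input gradient is a sparse transposed product, and the weight gradient only needs to be evaluated at the positions of existing nonzero weights.
```python
import numpy as np
from scipy.sparse import random as sparse_random

def sparse_linear_forward_backward(W, x, grad_out):
    """Forward and backward of y = W @ x with work proportional to nnz(W)."""
    y = W @ x                                   # forward: O(nnz)
    grad_x = W.T @ grad_out                     # input gradient: O(nnz)
    rows, cols = W.nonzero()
    grad_W_values = grad_out[rows] * x[cols]    # weight gradient only at nonzero positions
    return y, grad_x, grad_W_values

rng = np.random.default_rng(0)
W = sparse_random(64, 128, density=0.05, format="csr", random_state=0)   # 5% dense weights
x, grad_out = rng.normal(size=128), rng.normal(size=64)
y, grad_x, grad_W_values = sparse_linear_forward_backward(W, x, grad_out)
print(y.shape, grad_x.shape, grad_W_values.shape)   # grad_W has one entry per nonzero weight
```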
|
https://proceedings.mlr.press/v202/nikulin23a.html
|
https://proceedings.mlr.press/v202/nikulin23a/nikulin23a.pdf
|
https://openreview.net/forum?id=NRQ5lC8Dit
|
Anti-Exploration by Random Network Distillation
|
https://proceedings.mlr.press/v202/nikulin23a.html
|
Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, Sergey Kolesnikov
|
https://proceedings.mlr.press/v202/nikulin23a.html
|
ICML 2023
|
Despite the success of Random Network Distillation (RND) in various domains, it was shown to be insufficiently discriminative to be used as an uncertainty estimator for penalizing out-of-distribution actions in offline reinforcement learning. In this paper, we revisit these results and show that, with a naive choice of conditioning for the RND prior, it becomes infeasible for the actor to effectively minimize the anti-exploration bonus, and discriminativity is not an issue. We show that this limitation can be avoided with conditioning based on Feature-wise Linear Modulation (FiLM), resulting in a simple and efficient ensemble-free algorithm based on Soft Actor-Critic. We evaluate it on the D4RL benchmark, showing that it is capable of achieving performance comparable to ensemble-based methods and outperforming ensemble-free approaches by a wide margin.
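For readers unfamiliar with the mechanism, here is a generic random-network-distillation penalty in miniature (deliberately not the FiLM-conditioned variant the paper proposes, and with a linear predictor chosen purely for brevity): a frozen random target network, a predictor trained on the dataset, and a prediction-error bonus that can be subtracted as an anti-exploration penalty for out-of-distribution inputs.
```python
import numpy as np

rng = np.random.default_rng(0)

class RNDPenalty:
    """Frozen random target features + trained linear predictor; the squared
    prediction error serves as the anti-exploration penalty."""
    def __init__(self, dim_in, dim_out=32, lr=0.1):
        self.W_target = rng.normal(size=(dim_in, dim_out)) / np.sqrt(dim_in)   # frozen
        self.W_pred = np.zeros((dim_in, dim_out))                              # trained
        self.lr = lr

    def _target(self, x):
        return np.tanh(x @ self.W_target)            # fixed nonlinear random features

    def penalty(self, x):
        err = x @ self.W_pred - self._target(x)
        return float((err ** 2).mean())

    def update(self, x):
        err = x @ self.W_pred - self._target(x)      # fit the predictor on dataset state-actions
        self.W_pred -= self.lr * (x.T @ err) / len(x)

rnd = RNDPenalty(dim_in=8)
data = rng.normal(size=(256, 8))                     # in-distribution samples
for _ in range(300):
    rnd.update(data)
print(rnd.penalty(data[:32]), rnd.penalty(5.0 * rng.normal(size=(32, 8))))   # OOD inputs typically score higher
```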
|
https://proceedings.mlr.press/v202/ning23a.html
|
https://proceedings.mlr.press/v202/ning23a/ning23a.pdf
|
https://openreview.net/forum?id=0OcEWSMnSh
|
Input Perturbation Reduces Exposure Bias in Diffusion Models
|
https://proceedings.mlr.press/v202/ning23a.html
|
Mang Ning, Enver Sangineto, Angelo Porrello, Simone Calderara, Rita Cucchiara
|
https://proceedings.mlr.press/v202/ning23a.html
|
ICML 2023
|
Denoising Diffusion Probabilistic Models have shown impressive generation quality, although their long sampling chain leads to high computational costs. In this paper, we observe that a long sampling chain also leads to an error accumulation phenomenon, which is similar to the exposure bias problem in autoregressive text generation. Specifically, we note that there is a discrepancy between training and testing, since the former is conditioned on the ground truth samples, while the latter is conditioned on the previously generated results. To alleviate this problem, we propose a very simple but effective training regularization, consisting of perturbing the ground truth samples to simulate the inference-time prediction errors. We empirically show that, without affecting the recall and precision, the proposed input perturbation leads to a significant improvement in sample quality while reducing both the training and the inference times. For instance, on CelebA 64x64, we achieve a new state-of-the-art FID score of 1.27, while saving 37.5% of the training time. The code is available at https://github.com/forever208/DDPM-IP
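The training-time recipe is simple enough to sketch; the construction below is my reading of the abstract (perturbing the ground-truth sample before the forward diffusion step, with an assumed perturbation strength gamma), not a verbatim reproduction of the paper's code.
```python
import numpy as np

def ddpm_ip_training_pair(x0, alpha_bar_t, gamma=0.1, rng=np.random.default_rng(0)):
    """Build one (noisy input, regression target) pair with input perturbation:
    the clean sample is slightly perturbed so training inputs resemble the
    imperfect inputs seen during sampling, while the target stays the true noise."""
    eps = rng.standard_normal(x0.shape)              # noise the network must predict
    xi = rng.standard_normal(x0.shape)               # extra perturbation simulating prediction error
    x0_perturbed = x0 + gamma * xi
    x_t = np.sqrt(alpha_bar_t) * x0_perturbed + np.sqrt(1.0 - alpha_bar_t) * eps
    return x_t, eps

x0 = np.zeros((3, 8, 8))                             # a toy "image"
x_t, target = ddpm_ip_training_pair(x0, alpha_bar_t=0.7)
print(x_t.shape, target.shape)
```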
|
https://proceedings.mlr.press/v202/nitanda23a.html
|
https://proceedings.mlr.press/v202/nitanda23a/nitanda23a.pdf
|
https://openreview.net/forum?id=xdT94ekVBh
|
Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems
|
https://proceedings.mlr.press/v202/nitanda23a.html
|
Atsushi Nitanda, Kazusato Oko, Denny Wu, Nobuhito Takenouchi, Taiji Suzuki
|
https://proceedings.mlr.press/v202/nitanda23a.html
|
ICML 2023
|
The entropic fictitious play (EFP) is a recently proposed algorithm that minimizes the sum of a convex functional and entropy in the space of measures — such an objective naturally arises in the optimization of a two-layer neural network in the mean-field regime. In this work, we provide a concise primal-dual analysis of EFP in the setting where the learning problem exhibits a finite-sum structure. We establish quantitative global convergence guarantees for both the continuous-time and discrete-time dynamics based on properties of a proximal Gibbs measure introduced in Nitanda et al. (2022). Furthermore, our primal-dual framework entails a memory-efficient particle-based implementation of the EFP update, and also suggests a connection to gradient boosting methods. We illustrate the efficiency of our novel implementation in experiments including neural network optimization and image synthesis.
|
https://proceedings.mlr.press/v202/noarov23a.html
|
https://proceedings.mlr.press/v202/noarov23a/noarov23a.pdf
|
https://openreview.net/forum?id=tyqL1bPl0L
|
The Statistical Scope of Multicalibration
|
https://proceedings.mlr.press/v202/noarov23a.html
|
Georgy Noarov, Aaron Roth
|
https://proceedings.mlr.press/v202/noarov23a.html
|
ICML 2023
|
We make a connection between multicalibration and property elicitation and show that (under mild technical conditions) it is possible to produce a multicalibrated predictor for a continuous scalar property $\Gamma$ if and only if $\Gamma$ is elicitable. On the negative side, we show that for non-elicitable continuous properties there exist simple data distributions on which even the true distributional predictor is not calibrated. On the positive side, for elicitable $\Gamma$, we give simple canonical algorithms for the batch and the online adversarial setting, that learn a $\Gamma$-multicalibrated predictor. This generalizes past work on multicalibrated means and quantiles, and in fact strengthens existing online quantile multicalibration results. To further counterbalance our negative result, we show that if a property $\Gamma^1$ is not elicitable by itself, but is elicitable conditionally on another elicitable property $\Gamma^0$, then there is a canonical algorithm that jointly multicalibrates $\Gamma^1$ and $\Gamma^0$; this generalizes past work on mean-moment multicalibration. Finally, as applications of our theory, we provide novel algorithmic and impossibility results for fair (multicalibrated) risk assessment.
|
https://proceedings.mlr.press/v202/nottingham23a.html
|
https://proceedings.mlr.press/v202/nottingham23a/nottingham23a.pdf
|
https://openreview.net/forum?id=Rm5Qi57C5I
|
Do Embodied Agents Dream of Pixelated Sheep: Embodied Decision Making using Language Guided World Modelling
|
https://proceedings.mlr.press/v202/nottingham23a.html
|
Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, Roy Fox
|
https://proceedings.mlr.press/v202/nottingham23a.html
|
ICML 2023
|
Reinforcement learning (RL) agents typically learn tabula rasa, without prior knowledge of the world. However, if initialized with knowledge of high-level subgoals and transitions between subgoals, RL agents could utilize this Abstract World Model (AWM) for planning and exploration. We propose using few-shot large language models (LLMs) to hypothesize an AWM, which is then verified through world experience, to improve the sample efficiency of RL agents. Our DECKARD agent applies LLM-guided exploration to item crafting in Minecraft in two phases: (1) the Dream phase, where the agent uses an LLM to decompose a task into a sequence of subgoals, the hypothesized AWM; and (2) the Wake phase, where the agent learns a modular policy for each subgoal and verifies or corrects the hypothesized AWM. Our method of hypothesizing an AWM with LLMs and then verifying the AWM based on agent experience not only increases sample efficiency over contemporary methods by an order of magnitude, but is also robust to and corrects errors in the LLM, successfully blending noisy internet-scale information from LLMs with knowledge grounded in environment dynamics.
|
https://proceedings.mlr.press/v202/nova23a.html
|
https://proceedings.mlr.press/v202/nova23a/nova23a.pdf
|
https://openreview.net/forum?id=Ga6nQOAb7A
|
Gradient-Free Structured Pruning with Unlabeled Data
|
https://proceedings.mlr.press/v202/nova23a.html
|
Azade Nova, Hanjun Dai, Dale Schuurmans
|
https://proceedings.mlr.press/v202/nova23a.html
|
ICML 2023
|
Large Language Models (LLMs) have achieved great success in solving difficult tasks across many domains, but such success comes with high computation cost and inference latency. As developers and third parties customize these models, the need to provide efficient inference has increased. Many efforts have attempted to reduce inference cost through model compression techniques such as pruning and distillation. However, these techniques either require labeled data or are time-consuming, as they require the compressed model to be retrained to regain accuracy. In this paper, we propose a gradient-free structured pruning framework that uses only unlabeled data. An evaluation on the GLUE and SQuAD benchmarks using BERT$_{BASE}$ and DistilBERT illustrates the effectiveness of the proposed approach. By only using the weights of the pre-trained model and unlabeled data, in a matter of a few minutes on a single GPU, up to 40% of the original FLOP count can be reduced with less than a 4% accuracy loss across all tasks considered.
|
https://proceedings.mlr.press/v202/novack23a.html
|
https://proceedings.mlr.press/v202/novack23a/novack23a.pdf
|
https://openreview.net/forum?id=dOynYNAeHl
|
CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets
|
https://proceedings.mlr.press/v202/novack23a.html
|
Zachary Novack, Julian Mcauley, Zachary Chase Lipton, Saurabh Garg
|
https://proceedings.mlr.press/v202/novack23a.html
|
ICML 2023
|
Open vocabulary models (e.g. CLIP) have shown strong performance on zero-shot classification through their ability to generate embeddings for each class based on their (natural language) names. Prior work has focused on improving the accuracy of these models through prompt engineering or by incorporating a small amount of labeled downstream data (via finetuning). However, there has been little focus on improving the richness of the class names themselves, which can pose issues when class labels are coarsely defined and uninformative. We propose Classification with Hierarchical Label Sets (or CHiLS), an alternative strategy for zero-shot classification specifically designed for datasets with implicit semantic hierarchies. CHiLS proceeds in three steps: (i) for each class, produce a set of subclasses, using either existing label hierarchies or by querying GPT-3; (ii) perform the standard zero-shot CLIP procedure as though these subclasses were the labels of interest; (iii) map the predicted subclass back to its parent to produce the final prediction. Across numerous datasets with underlying hierarchical structure, CHiLS leads to improved accuracy in situations both with and without ground-truth hierarchical information. CHiLS is simple to implement within existing zero-shot pipelines and requires no additional training cost. Code is available at: https://github.com/acmi-lab/CHILS.
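The three steps translate almost directly into code; the sketch below uses a hypothetical `encode_text` helper standing in for a CLIP text encoder (deterministic random unit vectors here, so the demo only exercises the mechanics, not real zero-shot accuracy).
```python
import numpy as np

def encode_text(prompt):
    """Stand-in for a CLIP text encoder: a random unit vector per prompt (stable within one run)."""
    local = np.random.default_rng(abs(hash(prompt)) % (2 ** 32))
    v = local.normal(size=16)
    return v / np.linalg.norm(v)

def chils_predict(image_emb, class_names, subclasses):
    prompts, parents = [], []
    for cls in class_names:                           # step (i): expand each class into subclasses
        for sub in subclasses[cls]:
            prompts.append(f"a photo of a {sub}")
            parents.append(cls)
    text_embs = np.stack([encode_text(p) for p in prompts])
    scores = text_embs @ image_emb                    # step (ii): standard zero-shot scoring
    return parents[int(np.argmax(scores))]            # step (iii): map best subclass to its parent

image_emb = encode_text("a photo of a poodle")        # toy "image" embedding
print(chils_predict(image_emb, ["dog", "furniture"],
                    {"dog": ["poodle", "beagle"], "furniture": ["chair", "sofa"]}))   # -> "dog"
```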
|
https://proceedings.mlr.press/v202/novikov23a.html
|
https://proceedings.mlr.press/v202/novikov23a/novikov23a.pdf
|
https://openreview.net/forum?id=m2S96Qf2R3
|
Few-bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction
|
https://proceedings.mlr.press/v202/novikov23a.html
|
Georgii Sergeevich Novikov, Daniel Bershatsky, Julia Gusak, Alex Shonenkov, Denis Valerievich Dimitrov, Ivan Oseledets
|
https://proceedings.mlr.press/v202/novikov23a.html
|
ICML 2023
|
Memory footprint is one of the main limiting factors for large neural network training. In backpropagation, one needs to store the input to each operation in the computational graph. Every modern neural network model has quite a few pointwise nonlinearities in its architecture, and such operations induce additional memory costs that, as we show, can be significantly reduced by quantization of the gradients. We propose a systematic approach to compute optimal quantization of the retained gradients of the pointwise nonlinear functions with only a few bits per element. We show that such approximation can be achieved by computing an optimal piecewise-constant approximation of the derivative of the activation function, which can be done by dynamic programming. The drop-in replacements are implemented for all popular nonlinearities and can be used in any existing pipeline. We confirm the memory reduction and unchanged convergence on several open benchmarks.
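A few-bit backward pass of this kind can be illustrated with a uniform-bin variant (the paper instead finds the optimal piecewise-constant boundaries by dynamic programming; the bin range and bit width below are assumptions): at forward time only a small bin index per element is saved, and at backward time the derivative is looked up from a per-bin table.
```python
import numpy as np

def gelu_grad(x):
    """Derivative of the tanh-approximated GELU: the quantity we want to store cheaply."""
    c = np.sqrt(2.0 / np.pi)
    t = np.tanh(c * (x + 0.044715 * x ** 3))
    return 0.5 * (1.0 + t) + 0.5 * x * (1.0 - t ** 2) * c * (1.0 + 3 * 0.044715 * x ** 2)

def quantize_for_backward(x, bits=2, lo=-4.0, hi=4.0):
    """Save only a (2**bits)-level bin index per element; derivatives come from a small table."""
    levels = 2 ** bits
    edges = np.linspace(lo, hi, levels + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    table = gelu_grad(centers)                                  # one stored derivative per bin
    idx = np.clip(np.digitize(x, edges) - 1, 0, levels - 1).astype(np.uint8)
    return idx, table                                           # idx is all that backward needs

x = np.random.default_rng(0).normal(size=1000)
idx, table = quantize_for_backward(x, bits=2)
print(np.abs(table[idx] - gelu_grad(x)).mean())                 # error of the 2-bit approximation
```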
|
https://proceedings.mlr.press/v202/o-donoghue23a.html
|
https://proceedings.mlr.press/v202/o-donoghue23a/o-donoghue23a.pdf
|
https://openreview.net/forum?id=quQWA5m4r7
|
Efficient Exploration via Epistemic-Risk-Seeking Policy Optimization
|
https://proceedings.mlr.press/v202/o-donoghue23a.html
|
Brendan O’Donoghue
|
https://proceedings.mlr.press/v202/o-donoghue23a.html
|
ICML 2023
|
Exploration remains a key challenge in deep reinforcement learning (RL). Optimism in the face of uncertainty is a well-known heuristic with theoretical guarantees in the tabular setting, but how best to translate the principle to deep reinforcement learning, which involves online stochastic gradients and deep network function approximators, is not fully understood. In this paper we propose a new, differentiable optimistic objective that when optimized yields a policy that provably explores efficiently, with guarantees even under function approximation. Our new objective is a zero-sum two-player game derived from endowing the agent with an epistemic-risk-seeking utility function, which converts uncertainty into value and encourages the agent to explore uncertain states. We show that the solution to this game minimizes an upper bound on the regret, with the ‘players’ each attempting to minimize one component of a particular regret decomposition. We derive a new model-free algorithm which we call ‘epistemic-risk-seeking actor-critic’ (ERSAC), which is simply an application of simultaneous stochastic gradient ascent-descent to the game. Finally, we discuss a recipe for incorporating off-policy data and show that combining the risk-seeking objective with replay data yields a double benefit in terms of statistical efficiency. We conclude with some results showing good performance of a deep RL agent using the technique on the challenging ‘DeepSea’ environment, showing significant performance improvements even over other efficient exploration techniques, as well as improved performance on the Atari benchmark.
|