abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v202/oh23a.html
|
https://proceedings.mlr.press/v202/oh23a/oh23a.pdf
|
https://openreview.net/forum?id=dls28A9vYi
|
Provable Benefit of Mixup for Finding Optimal Decision Boundaries
|
https://proceedings.mlr.press/v202/oh23a.html
|
Junsoo Oh, Chulhee Yun
|
https://proceedings.mlr.press/v202/oh23a.html
|
ICML 2023
|
We investigate how pair-wise data augmentation techniques like Mixup affect the sample complexity of finding optimal decision boundaries in a binary linear classification problem. For a family of data distributions with a separability constant $\kappa$, we analyze how well the optimal classifier in terms of training loss aligns with the optimal one in test accuracy (i.e., Bayes optimal classifier). For vanilla training without augmentation, we uncover an interesting phenomenon named the curse of separability. As we increase $\kappa$ to make the data distribution more separable, the sample complexity of vanilla training increases exponentially in $\kappa$; perhaps surprisingly, the task of finding optimal decision boundaries becomes harder for more separable distributions. For Mixup training, we show that Mixup mitigates this problem by significantly reducing the sample complexity. To this end, we develop new concentration results applicable to $n^2$ pair-wise augmented data points constructed from $n$ independent data, by carefully dealing with dependencies between overlapping pairs. Lastly, we study other masking-based Mixup-style techniques and show that they can distort the training loss and make its minimizer converge to a suboptimal classifier in terms of test accuracy.
|
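To make the $n^2$ pair-wise construction mentioned in the abstract above concrete, here is a minimal Python sketch that builds all $n^2$ Mixup points from $n$ samples; the Beta($\alpha,\alpha$) mixing coefficient and the toy data are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of standard pairwise Mixup augmentation, illustrating the
# n^2 augmented points the abstract refers to. alpha and the toy data are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def mixup_all_pairs(X, y, alpha=1.0):
    """Build the n^2 pairwise Mixup points (lam*x_i + (1-lam)*x_j, lam*y_i + (1-lam)*y_j)."""
    n = X.shape[0]
    X_aug, y_aug = [], []
    for i in range(n):
        for j in range(n):
            lam = rng.beta(alpha, alpha)              # mixing coefficient
            X_aug.append(lam * X[i] + (1 - lam) * X[j])
            y_aug.append(lam * y[i] + (1 - lam) * y[j])
    return np.array(X_aug), np.array(y_aug)

# Toy binary linear classification data: n independent points -> n^2 mixed points.
n, d = 20, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)
X_aug, y_aug = mixup_all_pairs(X, y)
print(X_aug.shape)  # (400, 5) == (n^2, d)
```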
https://proceedings.mlr.press/v202/ohana23a.html
|
https://proceedings.mlr.press/v202/ohana23a/ohana23a.pdf
|
https://openreview.net/forum?id=bKxkWVTrxx
|
Shedding a PAC-Bayesian Light on Adaptive Sliced-Wasserstein Distances
|
https://proceedings.mlr.press/v202/ohana23a.html
|
Ruben Ohana, Kimia Nadjahi, Alain Rakotomamonjy, Liva Ralaivola
|
https://proceedings.mlr.press/v202/ohana23a.html
|
ICML 2023
|
The Sliced-Wasserstein distance (SW) is a computationally efficient and theoretically grounded alternative to the Wasserstein distance. Yet, the literature on its statistical properties – or, more accurately, its generalization properties – with respect to the distribution of slices, beyond the uniform measure, is scarce. To bring new contributions to this line of research, we leverage the PAC-Bayesian theory and a central observation that SW may be interpreted as an average risk, the quantity PAC-Bayesian bounds have been designed to characterize. We provide three types of results: i) PAC-Bayesian generalization bounds that hold on what we refer to as adaptive Sliced-Wasserstein distances, i.e. SW defined with respect to arbitrary distributions of slices (among which data-dependent distributions), ii) a principled procedure to learn the distribution of slices that yields maximally discriminative SW, by optimizing our theoretical bounds, and iii) empirical illustrations of our theoretical findings.
|
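As a companion to the "average risk" view in the abstract above, here is a minimal Monte Carlo sketch of the Sliced-Wasserstein distance between two empirical distributions under the uniform slice distribution; the sample sizes and number of slices are illustrative assumptions, and the paper's adaptive, learned slice distributions are not implemented here.

```python
# Monte Carlo estimate of SW_p^p as an average of 1D Wasserstein-p costs over
# random slices (uniform slice distribution). Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)

def sliced_wasserstein(X, Y, n_slices=500, p=2):
    """Estimate SW_p^p between two equal-size empirical distributions."""
    d = X.shape[1]
    total = 0.0
    for _ in range(n_slices):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)                 # uniform direction on the sphere
        x_proj = np.sort(X @ theta)                    # 1D projections
        y_proj = np.sort(Y @ theta)
        total += np.mean(np.abs(x_proj - y_proj) ** p) # closed-form 1D W_p^p
    return total / n_slices

X = rng.normal(size=(256, 3))
Y = rng.normal(loc=1.0, size=(256, 3))
print(sliced_wasserstein(X, Y) ** 0.5)                 # SW_2 estimate
```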
https://proceedings.mlr.press/v202/ohayon23a.html
|
https://proceedings.mlr.press/v202/ohayon23a/ohayon23a.pdf
|
https://openreview.net/forum?id=01ommd4uLX
|
Reasons for the Superiority of Stochastic Estimators over Deterministic Ones: Robustness, Consistency and Perceptual Quality
|
https://proceedings.mlr.press/v202/ohayon23a.html
|
Guy Ohayon, Theo Joseph Adrai, Michael Elad, Tomer Michaeli
|
https://proceedings.mlr.press/v202/ohayon23a.html
|
ICML 2023
|
Stochastic restoration algorithms allow one to explore the space of solutions that correspond to the degraded input. In this paper we reveal additional fundamental advantages of stochastic methods over deterministic ones, which further motivate their use. First, we prove that any restoration algorithm that attains perfect perceptual quality and whose outputs are consistent with the input must be a posterior sampler, and is thus required to be stochastic. Second, we illustrate that while deterministic restoration algorithms may attain high perceptual quality, this can be achieved only by filling up the space of all possible source images using an extremely sensitive mapping, which makes them highly vulnerable to adversarial attacks. Indeed, we show that enforcing deterministic models to be robust to such attacks profoundly hinders their perceptual quality, while robustifying stochastic models hardly influences their perceptual quality, and improves their output variability. These findings provide a motivation to foster progress in stochastic restoration methods, paving the way to better recovery algorithms.
|
https://proceedings.mlr.press/v202/okati23a.html
|
https://proceedings.mlr.press/v202/okati23a/okati23a.pdf
|
https://openreview.net/forum?id=Eni4D5gVBq
|
On the Within-Group Fairness of Screening Classifiers
|
https://proceedings.mlr.press/v202/okati23a.html
|
Nastaran Okati, Stratis Tsirtsis, Manuel Gomez Rodriguez
|
https://proceedings.mlr.press/v202/okati23a.html
|
ICML 2023
|
Screening classifiers are increasingly used to identify qualified candidates in a variety of selection processes. In this context, it has been recently shown that if a classifier is calibrated, one can identify the smallest set of candidates which contains, in expectation, a desired number of qualified candidates using a threshold decision rule. This lends support to focusing on calibration as the only requirement for screening classifiers. In this paper, we argue that screening policies that use calibrated classifiers may suffer from an understudied type of within-group unfairness—they may unfairly treat qualified members within demographic groups of interest. Further, we argue that this type of unfairness can be avoided if classifiers satisfy within-group monotonicity, a natural monotonicity property within each group. Then, we introduce an efficient post-processing algorithm based on dynamic programming to minimally modify a given calibrated classifier so that its probability estimates satisfy within-group monotonicity. We validate our algorithm using US Census survey data and show that within-group monotonicity can often be achieved at a small cost in terms of prediction granularity and shortlist size.
|
https://proceedings.mlr.press/v202/oko23a.html
|
https://proceedings.mlr.press/v202/oko23a/oko23a.pdf
|
https://openreview.net/forum?id=ORyo7fxcIA
|
Diffusion Models are Minimax Optimal Distribution Estimators
|
https://proceedings.mlr.press/v202/oko23a.html
|
Kazusato Oko, Shunta Akiyama, Taiji Suzuki
|
https://proceedings.mlr.press/v202/oko23a.html
|
ICML 2023
|
While efficient distribution learning is no doubt behind the groundbreaking success of diffusion modeling, its theoretical guarantees are quite limited. In this paper, we provide the first rigorous analysis of the approximation and generalization abilities of diffusion modeling for well-known function spaces. The highlight of this paper is that when the true density function belongs to the Besov space and the empirical score matching loss is properly minimized, the generated data distribution achieves the nearly minimax optimal estimation rates in the total variation distance and in the Wasserstein distance of order one. Furthermore, we extend our theory to demonstrate how diffusion models adapt to low-dimensional data distributions. We expect these results to advance the theoretical understanding of diffusion modeling and its ability to generate verisimilar outputs.
|
https://proceedings.mlr.press/v202/olivier23a.html
|
https://proceedings.mlr.press/v202/olivier23a/olivier23a.pdf
|
https://openreview.net/forum?id=3m9c6uYKK7
|
How Many Perturbations Break This Model? Evaluating Robustness Beyond Adversarial Accuracy
|
https://proceedings.mlr.press/v202/olivier23a.html
|
Raphael Olivier, Bhiksha Raj
|
https://proceedings.mlr.press/v202/olivier23a.html
|
ICML 2023
|
Robustness to adversarial attacks is typically evaluated with adversarial accuracy. While essential, this metric does not capture all aspects of robustness and in particular leaves out the question of how many perturbations can be found for each point. In this work, we introduce an alternative approach, adversarial sparsity, which quantifies how difficult it is to find a successful perturbation given both an input point and a constraint on the direction of the perturbation. We show that sparsity provides valuable insight into neural networks in multiple ways: for instance, it illustrates important differences between current state-of-the-art robust models that accuracy analysis does not reveal, and suggests approaches for improving their robustness. When applying broken defenses effective against weak attacks but not strong ones, sparsity can discriminate between the totally ineffective and the partially effective defenses. Finally, with sparsity we can measure increases in robustness that do not affect accuracy: we show for example that data augmentation can by itself increase adversarial robustness, without using adversarial training.
|
https://proceedings.mlr.press/v202/oprescu23a.html
|
https://proceedings.mlr.press/v202/oprescu23a/oprescu23a.pdf
|
https://openreview.net/forum?id=j5PYwPsNci
|
B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding
|
https://proceedings.mlr.press/v202/oprescu23a.html
|
Miruna Oprescu, Jacob Dorn, Marah Ghoummaid, Andrew Jesson, Nathan Kallus, Uri Shalit
|
https://proceedings.mlr.press/v202/oprescu23a.html
|
ICML 2023
|
Estimating heterogeneous treatment effects from observational data is a crucial task across many fields, helping policy and decision-makers take better actions. There has been recent progress on robust and efficient methods for estimating the conditional average treatment effect (CATE) function, but these methods often do not take into account the risk of hidden confounding, which could arbitrarily and unknowingly bias any causal estimate based on observational data. We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on the level of hidden confounding. We derive the B-Learner by adapting recent results for sharp and valid bounds of the average treatment effect (Dorn et al., 2021) into the framework given by Kallus & Oprescu (2023) for robust and model-agnostic learning of conditional distributional treatment effects. The B-Learner can use any function estimator such as random forests and deep neural networks, and we prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods. Semi-synthetic experimental comparisons validate the theoretical findings, and we use real-world data to demonstrate how the method might be used in practice.
|
https://proceedings.mlr.press/v202/orlanski23a.html
|
https://proceedings.mlr.press/v202/orlanski23a/orlanski23a.pdf
|
https://openreview.net/forum?id=VnGIZsmxDG
|
Measuring the Impact of Programming Language Distribution
|
https://proceedings.mlr.press/v202/orlanski23a.html
|
Gabriel Orlanski, Kefan Xiao, Xavier Garcia, Jeffrey Hui, Joshua Howland, Jonathan Malmaud, Jacob Austin, Rishabh Singh, Michele Catasta
|
https://proceedings.mlr.press/v202/orlanski23a.html
|
ICML 2023
|
Current benchmarks for evaluating neural code models focus on only a small subset of programming languages, excluding many popular languages such as Go or Rust. To ameliorate this issue, we present the BabelCode framework for execution-based evaluation of any benchmark in any language. BabelCode enables new investigations into the qualitative performance of models’ memory, runtime, and individual test case results. Additionally, we present a new code translation dataset called Translating Python Programming Puzzles (TP3) from the Python Programming Puzzles (Schuster et al., 2021) benchmark that involves translating expert-level python functions to any language. With both BabelCode and the TP3 benchmark, we investigate if balancing the distributions of 14 languages in a training dataset improves a large language model’s performance on low-resource languages. Training a model on a balanced corpus results in, on average, 12.34% higher $pass@k$ across all tasks and languages compared to the baseline. We find that this strategy achieves 66.48% better $pass@k$ on low-resource languages at the cost of only a 12.94% decrease to high-resource languages. In our three translation tasks, this strategy yields, on average, 30.77% better low-resource $pass@k$ while having 19.58% worse high-resource $pass@k$.
|
https://proceedings.mlr.press/v202/ortiz-jimenez23a.html
|
https://proceedings.mlr.press/v202/ortiz-jimenez23a/ortiz-jimenez23a.pdf
|
https://openreview.net/forum?id=CnHxxjqkMi
|
When does Privileged information Explain Away Label Noise?
|
https://proceedings.mlr.press/v202/ortiz-jimenez23a.html
|
Guillermo Ortiz-Jimenez, Mark Collier, Anant Nawalgaria, Alexander Nicholas D’Amour, Jesse Berent, Rodolphe Jenatton, Efi Kokiopoulou
|
https://proceedings.mlr.press/v202/ortiz-jimenez23a.html
|
ICML 2023
|
Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise. However, the reasons for its effectiveness are not well understood. In this study, we investigate the role played by different properties of the PI in explaining away label noise. Through experiments on multiple datasets with real PI (CIFAR-N/H) and a new large-scale benchmark ImageNet-PI, we find that PI is most helpful when it allows networks to easily distinguish clean from noisy data, while enabling a learning shortcut to memorize the noisy examples. Interestingly, when PI becomes too predictive of the target label, PI methods often perform worse than their no-PI baselines. Based on these findings, we propose several enhancements to the state-of-the-art PI methods and demonstrate the potential of PI as a means of tackling label noise. Finally, we show how we can easily combine the resulting PI approaches with existing no-PI techniques designed to deal with label noise.
|
https://proceedings.mlr.press/v202/orvieto23a.html
|
https://proceedings.mlr.press/v202/orvieto23a/orvieto23a.pdf
|
https://openreview.net/forum?id=M3Yd3QyRG4
|
Resurrecting Recurrent Neural Networks for Long Sequences
|
https://proceedings.mlr.press/v202/orvieto23a.html
|
Antonio Orvieto, Samuel L Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, Soham De
|
https://proceedings.mlr.press/v202/orvieto23a.html
|
ICML 2023
|
Recurrent Neural Networks (RNNs) offer fast inference on long sequences but are hard to optimize and slow to train. Deep state-space models (SSMs) have recently been shown to perform remarkably well on long sequence modeling tasks, and have the added benefits of fast parallelizable training and RNN-like fast inference. However, while SSMs are superficially similar to RNNs, there are important differences that make it unclear where their performance boost over RNNs comes from. We show that careful design of deep RNNs using standard signal propagation arguments can recover the impressive performance of deep SSMs on long-range reasoning tasks, while matching their training speed. To achieve this, we analyze and ablate a series of changes to standard RNNs including linearizing and diagonalizing the recurrence, using better parameterizations and initializations, and ensuring careful normalization of the forward pass. Our results provide new insights on the origins of the impressive performance of deep SSMs, and introduce an RNN block called the Linear Recurrent Unit (or LRU) that matches both their performance on the Long Range Arena benchmark and their computational efficiency.
|
https://proceedings.mlr.press/v202/ouyang23a.html
|
https://proceedings.mlr.press/v202/ouyang23a/ouyang23a.pdf
|
https://openreview.net/forum?id=9iNScYEBWZ
|
Improving Adversarial Robustness Through the Contrastive-Guided Diffusion Process
|
https://proceedings.mlr.press/v202/ouyang23a.html
|
Yidong Ouyang, Liyan Xie, Guang Cheng
|
https://proceedings.mlr.press/v202/ouyang23a.html
|
ICML 2023
|
Synthetic data generation has become an emerging tool to help improve the adversarial robustness in classification tasks, since robust learning requires a significantly larger amount of training samples compared with standard classification. Among various deep generative models, the diffusion model has been shown to produce high-quality synthetic images and has achieved good performance in improving the adversarial robustness. However, diffusion-type methods are generally slower in data generation as compared with other generative models. Although different acceleration techniques have been proposed recently, it is also of great importance to study how to improve the sample efficiency of synthetic data for the downstream task. In this paper, we first analyze the optimality condition of synthetic distribution for achieving improved robust accuracy. We show that enhancing the distinguishability among the generated data is critical for improving adversarial robustness. Thus, we propose the Contrastive-Guided Diffusion Process (Contrastive-DP), which incorporates the contrastive loss to guide the diffusion model in data generation. We validate our theoretical results using simulations and demonstrate the good performance of Contrastive-DP on image datasets.
|
https://proceedings.mlr.press/v202/oymak23a.html
|
https://proceedings.mlr.press/v202/oymak23a/oymak23a.pdf
|
https://openreview.net/forum?id=qorOnDor89
|
On the Role of Attention in Prompt-tuning
|
https://proceedings.mlr.press/v202/oymak23a.html
|
Samet Oymak, Ankit Singh Rawat, Mahdi Soltanolkotabi, Christos Thrampoulidis
|
https://proceedings.mlr.press/v202/oymak23a.html
|
ICML 2023
|
Prompt-tuning is an emerging strategy to adapt large language models (LLM) to downstream tasks by learning a (soft-)prompt parameter from data. Despite its success in LLMs, there is limited theoretical understanding of the power of prompt-tuning and the role of the attention mechanism in prompting. In this work, we explore prompt-tuning for one-layer attention architectures and study contextual mixture-models where each input token belongs to a context-relevant or -irrelevant set. We isolate the role of prompt-tuning through a self-contained prompt-attention model. Our contributions are as follows: (1) We show that softmax-prompt-attention is provably more expressive than softmax-self-attention and linear-prompt-attention under our contextual data model. (2) We analyze the initial trajectory of gradient descent and show that it learns the prompt and prediction head with near-optimal sample complexity and demonstrate how the prompt can provably attend to sparse context-relevant tokens. (3) Assuming a known prompt but an unknown prediction head, we characterize the exact finite sample performance of prompt-attention which reveals the fundamental performance limits and the precise benefit of the context information. We also provide experiments that verify our theoretical insights on real datasets and demonstrate how prompt-tuning enables the model to attend to context-relevant information.
|
https://proceedings.mlr.press/v202/ozdaglar23a.html
|
https://proceedings.mlr.press/v202/ozdaglar23a/ozdaglar23a.pdf
|
https://openreview.net/forum?id=LxkOVVGQYq
|
Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation
|
https://proceedings.mlr.press/v202/ozdaglar23a.html
|
Asuman E. Ozdaglar, Sarath Pattathil, Jiawei Zhang, Kaiqing Zhang
|
https://proceedings.mlr.press/v202/ozdaglar23a.html
|
ICML 2023
|
Offline reinforcement learning (RL) aims to find an optimal policy for sequential decision-making using a pre-collected dataset, without further interaction with the environment. Recent theoretical progress has focused on developing sample-efficient offline RL algorithms with various relaxed assumptions on data coverage and function approximators, especially to handle the case with excessively large state-action spaces. Among them, the framework based on the linear-programming (LP) reformulation of Markov decision processes has shown promise: it enables sample-efficient offline RL with function approximation, under only partial data coverage and realizability assumptions on the function classes, with favorable computational tractability. In this work, we revisit the LP framework for offline RL, and provide a new reformulation that advances the existing results in several aspects, relaxing certain assumptions and achieving optimal statistical rates in terms of sample size. Our key enabler is to introduce proper constraints in the reformulation, instead of using any regularization as in the literature, also with careful choices of the function classes and initial state distributions. We hope our insights bring into light the use of LP formulations and the induced primal-dual minimax optimization, in offline RL.
|
https://proceedings.mlr.press/v202/padmakumar23a.html
|
https://proceedings.mlr.press/v202/padmakumar23a/padmakumar23a.pdf
|
https://openreview.net/forum?id=EuUeVUS6UV
|
Extrapolative Controlled Sequence Generation via Iterative Refinement
|
https://proceedings.mlr.press/v202/padmakumar23a.html
|
Vishakh Padmakumar, Richard Yuanzhe Pang, He He, Ankur P Parikh
|
https://proceedings.mlr.press/v202/padmakumar23a.html
|
ICML 2023
|
We study the problem of extrapolative controlled generation, i.e., generating sequences with attribute values beyond the range seen in training. This task is of significant importance in automated design, especially drug discovery, where the goal is to design novel proteins that are better (e.g., more stable) than existing sequences. Thus, by definition the target sequences and their attribute values are out of the training distribution, posing challenges to existing methods that aim to directly generate the target sequence. Instead, in this work, we propose Iterative Controlled Extrapolation (ICE) which iteratively makes local edits to a sequence to enable extrapolation. We train the model on synthetically generated sequence pairs that demonstrate small improvement in the attribute value. Results on one natural language task (sentiment analysis) and two protein engineering tasks (ACE2 stability and AAV fitness) show that ICE outperforms state-of-the-art approaches despite its simplicity.
|
https://proceedings.mlr.press/v202/pal23a.html
|
https://proceedings.mlr.press/v202/pal23a/pal23a.pdf
|
https://openreview.net/forum?id=sdYsEseLu4
|
Locally Regularized Neural Differential Equations: Some Black Boxes were meant to remain closed!
|
https://proceedings.mlr.press/v202/pal23a.html
|
Avik Pal, Alan Edelman, Christopher Vincent Rackauckas
|
https://proceedings.mlr.press/v202/pal23a.html
|
ICML 2023
|
Neural Differential Equations have become an important modeling framework due to their ability to adapt to new problems automatically. Training a neural differential equation is effectively a search over a space of plausible dynamical systems. Controlling the computational cost for these models is difficult since it relies on the number of steps the adaptive solver takes. Most prior works have used higher-order methods to reduce prediction timings while greatly increasing training time or reducing both training and prediction timings by relying on specific training algorithms, which are harder to use as a drop-in replacement. In this manuscript, we use internal cost heuristics of adaptive differential equation solvers at stochastic time-points to guide the training towards learning a dynamical system that is easier to integrate. We “close the blackbox” and allow the use of our method with any sensitivity method. We perform experimental studies to compare our method to global regularization to show that we attain similar performance numbers without compromising on the flexibility of implementation. We develop two sampling strategies to trade-off between performance and training time. Our method reduces the number of function evaluations to 0.556x - 0.733x and accelerates predictions by 1.3x - 2x.
|
https://proceedings.mlr.press/v202/pal23b.html
|
https://proceedings.mlr.press/v202/pal23b/pal23b.pdf
|
https://openreview.net/forum?id=bm2qVX0h09
|
Controlled Differential Equations on Long Sequences via Non-standard Wavelets
|
https://proceedings.mlr.press/v202/pal23b.html
|
Sourav Pal, Zhanpeng Zeng, Sathya N. Ravi, Vikas Singh
|
https://proceedings.mlr.press/v202/pal23b.html
|
ICML 2023
|
Neural Controlled Differential equations (NCDE) are a powerful mechanism to model the dynamics in temporal sequences, e.g., applications involving physiological measures, where apart from the initial condition, the dynamics also depend on subsequent measures or even a different "control" sequence. But NCDEs do not scale well to longer sequences. Existing strategies adapt rough path theory, and instead model the dynamics over summaries known as log signatures. While rigorous and elegant, invertibility of these summaries is difficult, and limits the scope of problems where these ideas can offer strong benefits (reconstruction, generative modeling). For tasks where it is sensible to assume that the (long) sequences in the training data are a fixed length of temporal measurements – this assumption holds in most experiments tackled in the literature – we describe an efficient simplification. First, we recast the regression/classification task as an integral transform. We then show how restricting the class of operators (permissible in the integral transform), allows the use of a known algorithm that leverages non-standard Wavelets to decompose the operator. Thereby, our task (learning the operator) radically simplifies. A neural variant of this idea yields consistent improvements across a wide gamut of use cases tackled in existing works. We also describe a novel application on modeling tasks involving coupled differential equations.
|
https://proceedings.mlr.press/v202/pan23a.html
|
https://proceedings.mlr.press/v202/pan23a/pan23a.pdf
|
https://openreview.net/forum?id=nkals4A4Vs
|
Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark
|
https://proceedings.mlr.press/v202/pan23a.html
|
Alexander Pan, Jun Shern Chan, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Hanlin Zhang, Scott Emmons, Dan Hendrycks
|
https://proceedings.mlr.press/v202/pan23a.html
|
ICML 2023
|
Artificial agents have traditionally been trained to maximize reward, which may incentivize power-seeking and deception, analogous to how next-token prediction in language models (LMs) may incentivize toxicity. So do agents naturally learn to be Machiavellian? And how do we measure these behaviors in general-purpose models such as GPT-4? Towards answering these questions, we introduce Machiavelli, a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios that center on social decision-making. Scenario labeling is automated with LMs, which are more performant than human annotators. We mathematize dozens of harmful behaviors and use our annotations to evaluate agents’ tendencies to be power-seeking, cause disutility, and commit ethical violations. We observe some tension between maximizing reward and behaving ethically. To improve this trade-off, we investigate LM-based methods to steer agents towards less harmful behaviors. Our results show that agents can both act competently and morally, so concrete progress can currently be made in machine ethics–designing agents that are Pareto improvements in both safety and capabilities.
|
https://proceedings.mlr.press/v202/pan23b.html
|
https://proceedings.mlr.press/v202/pan23b/pan23b.pdf
|
https://openreview.net/forum?id=DZxkGYipRu
|
Beyond Homophily: Reconstructing Structure for Graph-agnostic Clustering
|
https://proceedings.mlr.press/v202/pan23b.html
|
Erlin Pan, Zhao Kang
|
https://proceedings.mlr.press/v202/pan23b.html
|
ICML 2023
|
Graph neural network (GNN)-based methods have achieved impressive performance on the node clustering task. However, they are designed around the homophily assumption, and clustering on heterophilic graphs is overlooked. Due to the lack of labels, it is impossible to first identify a graph as homophilic or heterophilic before a suitable GNN model can be found. Hence, clustering on real-world graphs with various levels of homophily poses a new challenge to the graph research community. To fill this gap, we propose a novel graph clustering method, which contains three key components: graph reconstruction, a mixed filter, and a dual graph clustering network. To be graph-agnostic, we empirically construct two graphs, one with high homophily and one with high heterophily, from each dataset. The mixed filter based on the new graphs extracts both low-frequency and high-frequency information. To reduce the adverse coupling between node attributes and topological structure, we separately map them into two subspaces in the dual graph clustering network. Extensive experiments on 11 benchmark graphs demonstrate our promising performance. In particular, our method dominates others on heterophilic graphs.
|
https://proceedings.mlr.press/v202/pan23c.html
|
https://proceedings.mlr.press/v202/pan23c/pan23c.pdf
|
https://openreview.net/forum?id=beHp3L9KXc
|
Better Training of GFlowNets with Local Credit and Incomplete Trajectories
|
https://proceedings.mlr.press/v202/pan23c.html
|
Ling Pan, Nikolay Malkin, Dinghuai Zhang, Yoshua Bengio
|
https://proceedings.mlr.press/v202/pan23c.html
|
ICML 2023
|
Generative Flow Networks or GFlowNets are related to Monte-Carlo Markov chain methods (as they sample from a distribution specified by an energy function), reinforcement learning (as they learn a policy to sample composed objects through a sequence of steps), generative models (as they learn to represent and sample from a distribution) and amortized variational methods (as they can be used to learn to approximate and sample from an otherwise intractable posterior, given a prior and a likelihood). They are trained to generate an object $x$ through a sequence of steps with probability proportional to some reward function $R(x)$ (or $\exp(-\mathcal{E}(x))$ with $\mathcal{E}(x)$ denoting the energy function), given at the end of the generative trajectory. Like for other RL settings where the reward is only given at the end, the efficiency of training and credit assignment may suffer when those trajectories are longer. With previous GFlowNet work, no learning was possible from incomplete trajectories (lacking a terminal state and the computation of the associated reward). In this paper, we consider the case where the energy function can be applied not just to terminal states but also to intermediate states. This is for example achieved when the energy function is additive, with terms available along the trajectory. We show how to reparameterize the GFlowNet state flow function to take advantage of the partial reward already accrued at each state. This enables a training objective that can be applied to update parameters even with incomplete trajectories. Even when complete trajectories are available, being able to obtain more localized credit and gradients is found to speed up training convergence, as demonstrated across many simulations.
|
https://proceedings.mlr.press/v202/pan23d.html
|
https://proceedings.mlr.press/v202/pan23d/pan23d.pdf
|
https://openreview.net/forum?id=bZTyDxBpx0
|
A Hybrid Quantum-Classical Approach based on the Hadamard Transform for the Convolutional Layer
|
https://proceedings.mlr.press/v202/pan23d.html
|
Hongyi Pan, Xin Zhu, Salih Furkan Atici, Ahmet Cetin
|
https://proceedings.mlr.press/v202/pan23d.html
|
ICML 2023
|
In this paper, we propose a novel Hadamard Transform (HT)-based neural network layer for hybrid quantum-classical computing. It implements the regular convolutional layers in the Hadamard transform domain. The idea is based on the HT convolution theorem, which states that the dyadic convolution between two vectors is equivalent to the element-wise multiplication of their HT representation. Computing the HT is simply the application of a Hadamard gate to each qubit individually, so the HT computations of our proposed layer can be implemented on a quantum computer. Compared to the regular Conv2D layer, the proposed HT-perceptron layer is computationally more efficient. Compared to a CNN with the same number of trainable parameters and 99.26% test accuracy, our HT network reaches 99.31% test accuracy with 57.1% fewer MACs on the MNIST dataset; and in our ImageNet-1K experiments, our HT-based ResNet-50 exceeds the accuracy of the baseline ResNet-50 by 0.59% center-crop top-1 accuracy using 11.5% fewer parameters with 12.6% fewer MACs.
|
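The abstract above relies on the HT convolution theorem; the sketch below numerically checks the underlying classical identity on toy vectors: the dyadic (XOR) convolution of two vectors equals the inverse Walsh-Hadamard transform of the element-wise product of their transforms. This is only a check of the classical identity, not an implementation of the proposed quantum-classical layer.

```python
# Numerical check of the Hadamard-transform (dyadic) convolution theorem.
# Toy vectors are illustrative; this is not the paper's HT-perceptron layer.
import numpy as np

def wht(x):
    """Unnormalized Walsh-Hadamard transform (length must be a power of two)."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def dyadic_conv(x, y):
    """z[k] = sum_i x[i] * y[i XOR k]."""
    n = len(x)
    return np.array([sum(x[i] * y[i ^ k] for i in range(n)) for k in range(n)])

rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)
lhs = dyadic_conv(x, y)
rhs = wht(wht(x) * wht(y)) / len(x)   # inverse WHT = forward WHT / n
print(np.allclose(lhs, rhs))          # True: convolution in HT domain is element-wise product
```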
https://proceedings.mlr.press/v202/panageas23a.html
|
https://proceedings.mlr.press/v202/panageas23a/panageas23a.pdf
|
https://openreview.net/forum?id=Rw8OOwatgy
|
Semi Bandit dynamics in Congestion Games: Convergence to Nash Equilibrium and No-Regret Guarantees.
|
https://proceedings.mlr.press/v202/panageas23a.html
|
Ioannis Panageas, Stratis Skoulakis, Luca Viano, Xiao Wang, Volkan Cevher
|
https://proceedings.mlr.press/v202/panageas23a.html
|
ICML 2023
|
In this work, we introduce a variant of online stochastic gradient descent and prove that it converges to Nash equilibria while simultaneously achieving sublinear regret for the class of congestion games in the semi-bandit feedback setting. Our proposed method admits convergence rates depending only polynomially on the number of players and the number of facilities, but not on the size of the action set, which can be exponentially large in terms of the number of facilities. Moreover, the running time of our method has polynomial-time dependence on the implicit description of the game. Our analysis exploits techniques from convex geometry, in particular Caratheodory’s theorem and recent advances in non-convex stochastic optimization. This work improves upon and answers an open question from (Cui et al., 2022).
|
https://proceedings.mlr.press/v202/panchal23a.html
|
https://proceedings.mlr.press/v202/panchal23a/panchal23a.pdf
|
https://openreview.net/forum?id=q5RHsg6VRw
|
Flash: Concept Drift Adaptation in Federated Learning
|
https://proceedings.mlr.press/v202/panchal23a.html
|
Kunjal Panchal, Sunav Choudhary, Subrata Mitra, Koyel Mukherjee, Somdeb Sarkhel, Saayan Mitra, Hui Guan
|
https://proceedings.mlr.press/v202/panchal23a.html
|
ICML 2023
|
In Federated Learning (FL), adaptive optimization is an effective approach to addressing the statistical heterogeneity issue but cannot adapt quickly to concept drifts. In this work, we propose a novel adaptive optimizer called Flash that simultaneously addresses both statistical heterogeneity and the concept drift issues. The fundamental insight is that a concept drift can be detected based on the magnitude of parameter updates that are required to fit the global model to each participating client’s local data distribution. Flash uses a two-pronged approach that synergizes client-side early-stopping training to facilitate detection of concept drifts and server-side drift-aware adaptive optimization to adjust the effective learning rate. We theoretically prove that Flash matches the convergence rate of state-of-the-art adaptive optimizers and further empirically evaluate the efficacy of Flash on a variety of FL benchmarks using different concept drift settings.
|
https://proceedings.mlr.press/v202/pandey23a.html
|
https://proceedings.mlr.press/v202/pandey23a/pandey23a.pdf
|
https://openreview.net/forum?id=2MaUpKBSju
|
Learn to Accumulate Evidence from All Training Samples: Theory and Practice
|
https://proceedings.mlr.press/v202/pandey23a.html
|
Deep Shankar Pandey, Qi Yu
|
https://proceedings.mlr.press/v202/pandey23a.html
|
ICML 2023
|
Evidential deep learning, built upon belief theory and subjective logic, offers a principled and computationally efficient way to turn a deterministic neural network uncertainty-aware. The resultant evidential models can quantify fine-grained uncertainty using the learned evidence. To ensure theoretically sound evidential models, the evidence needs to be non-negative, which requires special activation functions for model training and inference. This constraint often leads to inferior predictive performance compared to standard softmax models, making it challenging to extend them to many large-scale datasets. To unveil the real cause of this undesired behavior, we theoretically investigate evidential models and identify a fundamental limitation that explains the inferior performance: existing evidential activation functions create zero evidence regions, which prevent the model from learning from training samples falling into such regions. A deeper analysis of evidential activation functions based on our theoretical underpinning inspires the design of a novel regularizer that effectively alleviates this fundamental limitation. Extensive experiments over many challenging real-world datasets and settings confirm our theoretical findings and demonstrate the effectiveness of our proposed approach.
|
https://proceedings.mlr.press/v202/pang23a.html
|
https://proceedings.mlr.press/v202/pang23a/pang23a.pdf
|
https://openreview.net/forum?id=ICk7GJ1awE
|
Secure Federated Correlation Test and Entropy Estimation
|
https://proceedings.mlr.press/v202/pang23a.html
|
Qi Pang, Lun Wang, Shuai Wang, Wenting Zheng, Dawn Song
|
https://proceedings.mlr.press/v202/pang23a.html
|
ICML 2023
|
We propose the first federated correlation test framework compatible with secure aggregation, namely FED-$\chi^2$. In our protocol, the statistical computations are recast as frequency moment estimation problems, where the clients collaboratively generate a shared projection matrix and then use stable projection to encode the local information in a compact vector. As such encodings can be linearly aggregated, secure aggregation can be applied to conceal the individual updates. We formally establish the security guarantee of FED-$\chi^2$ by proving that only the minimum necessary information (i.e., the correlation statistics) is revealed to the server. We show that our protocol can be naturally extended to estimate other statistics that can be recast as frequency moment estimations. By accommodating Shannon's entropy in FED-$\chi^2$, we further propose the first secure federated entropy estimation protocol, FED-$H$. The evaluation results demonstrate that FED-$\chi^2$ and FED-$H$ achieve good performance with small client-side computation overhead in several real-world case studies.
|
https://proceedings.mlr.press/v202/panigrahi23a.html
|
https://proceedings.mlr.press/v202/panigrahi23a/panigrahi23a.pdf
|
https://openreview.net/forum?id=Rgnaj43Pk0
|
Task-Specific Skill Localization in Fine-tuned Language Models
|
https://proceedings.mlr.press/v202/panigrahi23a.html
|
Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora
|
https://proceedings.mlr.press/v202/panigrahi23a.html
|
ICML 2023
|
Pre-trained language models can be fine-tuned to solve diverse NLP tasks, including in few-shot settings. Thus fine-tuning allows the model to quickly pick up task-specific "skills," but there has been limited study of where these newly-learnt skills reside inside the massive model. This paper introduces the term skill localization for this problem and proposes a solution. Given the downstream task and a model fine-tuned on that task, a simple optimization is used to identify a very small subset of parameters ($\sim$0.01% of model parameters) responsible for ($>$95%) of the model’s performance, in the sense that grafting the fine-tuned values for just this tiny subset onto the pre-trained model performs almost as well as the fine-tuned model. While reminiscent of recent works on parameter-efficient fine-tuning, the novel aspects here are that: (i) No further retraining is needed on the subset (unlike, say, with lottery tickets). (ii) Notable improvements are seen over vanilla fine-tuning with respect to calibration of predictions in-distribution (40-90% error reduction) as well as quality of predictions out-of-distribution (OOD). In models trained on multiple tasks, a stronger notion of skill localization is observed, where the sparse regions corresponding to different tasks are almost disjoint, and their overlap (when it happens) is a proxy for task similarity. Experiments suggest that localization via grafting can assist certain forms of continual learning.
|
https://proceedings.mlr.press/v202/park23a.html
|
https://proceedings.mlr.press/v202/park23a/park23a.pdf
|
https://openreview.net/forum?id=Q6Y1cnHMee
|
Kernel Sufficient Dimension Reduction and Variable Selection for Compositional Data via Amalgamation
|
https://proceedings.mlr.press/v202/park23a.html
|
Junyoung Park, Jeongyoun Ahn, Cheolwoo Park
|
https://proceedings.mlr.press/v202/park23a.html
|
ICML 2023
|
Compositional data with a large number of components and an abundance of zeros are frequently observed in many fields recently. Analyzing such sparse high-dimensional compositional data naturally calls for dimension reduction or, more preferably, variable selection. Most existing approaches lack interpretability or cannot handle zeros properly, as they rely on a log-ratio transformation. We approach this problem with sufficient dimension reduction (SDR), one of the most studied dimension reduction frameworks in statistics. Characterized by the conditional independence of the data to the response on the found subspace, the SDR framework has been effective for both linear and nonlinear dimension reduction problems. This work proposes a compositional SDR that can handle zeros naturally while incorporating the nonlinear nature and spurious negative correlations among components rigorously. A critical consideration of sub-composition versus amalgamation for compositional variable selection is discussed. The proposed compositional SDR is shown to be statistically consistent in constructing a sub-simplex consisting of true signal variables. Simulation and real microbiome data are used to demonstrate the performance of the proposed SDR compared to existing state-of-the-art approaches.
|
https://proceedings.mlr.press/v202/park23b.html
|
https://proceedings.mlr.press/v202/park23b/park23b.pdf
|
https://openreview.net/forum?id=a5BIUnE7cR
|
Learning Affinity with Hyperbolic Representation for Spatial Propagation
|
https://proceedings.mlr.press/v202/park23b.html
|
Jin-Hwi Park, Jaesung Choe, Inhwan Bae, Hae-Gon Jeon
|
https://proceedings.mlr.press/v202/park23b.html
|
ICML 2023
|
Recent approaches to representation learning have successfully demonstrated the benefits of hyperbolic space, driven by an excellent ability to model hierarchical relationships. In this work, we demonstrate that the properties of hyperbolic geometry serve as a valuable alternative to learning hierarchical affinity for spatial propagation tasks. We propose a Hyperbolic Affinity learning Module (HAM) to learn spatial affinity by considering geodesic distance on the hyperbolic space. By simply incorporating our HAM into conventional spatial propagation tasks, we validate its effectiveness, capturing the pixel hierarchy of affinity maps in hyperbolic space. The proposed methodology can lead to performance improvements in explicit propagation processes such as depth completion and semantic segmentation.
|
https://proceedings.mlr.press/v202/park23c.html
|
https://proceedings.mlr.press/v202/park23c/park23c.pdf
|
https://openreview.net/forum?id=PBRArApxMh
|
TRAK: Attributing Model Behavior at Scale
|
https://proceedings.mlr.press/v202/park23c.html
|
Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry
|
https://proceedings.mlr.press/v202/park23c.html
|
ICML 2023
|
The goal of data attribution is to trace model predictions back to training data. Despite a long line of work towards this goal, existing approaches to data attribution tend to force users to choose between computational tractability and efficacy. That is, computationally tractable methods can struggle with accurately attributing model predictions in non-convex settings (e.g., in the context of deep neural networks), while methods that are effective in such regimes require training thousands of models, which makes them impractical for large models or datasets. In this work, we introduce TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale, differentiable models. In particular, by leveraging only a handful of trained models, TRAK can match the performance of attribution methods that require training thousands of models. We demonstrate the utility of TRAK across various modalities and scales: image classifiers trained on ImageNet, vision-language models (CLIP), and language models (BERT and mT5). We provide code for using TRAK (and reproducing our work) at https://github.com/MadryLab/trak .
|
https://proceedings.mlr.press/v202/park23d.html
|
https://proceedings.mlr.press/v202/park23d/park23d.pdf
|
https://openreview.net/forum?id=QanRglB0H7
|
Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization
|
https://proceedings.mlr.press/v202/park23d.html
|
Jungwuk Park, Dong-Jun Han, Soyeong Kim, Jaekyun Moon
|
https://proceedings.mlr.press/v202/park23d.html
|
ICML 2023
|
In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference. This is a difficult problem, and despite active studies in recent years, it remains a great challenge. In this paper, we take a simple yet effective approach to tackle this issue. We propose test-time style shifting, which shifts the style of the test sample (that has a large style gap with the source domains) to the nearest source domain that the model is already familiar with, before making the prediction. This strategy enables the model to handle any target domains with arbitrary style statistics, without additional model update at test-time. Additionally, we propose style balancing, which provides a great platform for maximizing the advantage of test-time style shifting by handling the DG-specific imbalance issues. The proposed ideas are easy to implement and successfully work in conjunction with various other DG schemes. Experimental results on different datasets show the effectiveness of our methods.
|
https://proceedings.mlr.press/v202/park23e.html
|
https://proceedings.mlr.press/v202/park23e/park23e.pdf
|
https://openreview.net/forum?id=Xx0TH4IKgQ
|
Towards Understanding Ensemble Distillation in Federated Learning
|
https://proceedings.mlr.press/v202/park23e.html
|
Sejun Park, Kihun Hong, Ganguk Hwang
|
https://proceedings.mlr.press/v202/park23e.html
|
ICML 2023
|
Federated Learning (FL) is a collaborative machine learning paradigm for data privacy preservation. Recently, a knowledge distillation (KD) based information sharing approach in FL, which conducts ensemble distillation on an unlabeled public dataset, has been proposed. However, despite its experimental success and usefulness, the theoretical analysis of the KD based approach has not been satisfactorily conducted. In this work, we build a theoretical foundation of the ensemble distillation framework in federated learning from the perspective of kernel ridge regression (KRR). To this end, we propose a KD based FL algorithm for KRR models which is related to some existing KD based FL algorithms, and analyze our algorithm theoretically. We show that our algorithm makes local prediction models as powerful as the centralized KRR model (which is a KRR model trained by all of the local datasets) in terms of the convergence rate of the generalization error if the unlabeled public dataset is sufficiently large. We also provide experimental results to verify our theoretical results on ensemble distillation in federated learning.
|
https://proceedings.mlr.press/v202/park23f.html
|
https://proceedings.mlr.press/v202/park23f/park23f.pdf
|
https://openreview.net/forum?id=M3IX2zAIdi
|
Learning Controllable Degradation for Real-World Super-Resolution via Constrained Flows
|
https://proceedings.mlr.press/v202/park23f.html
|
Seobin Park, Dongjin Kim, Sungyong Baik, Tae Hyun Kim
|
https://proceedings.mlr.press/v202/park23f.html
|
ICML 2023
|
Recent deep-learning-based super-resolution (SR) methods have been successful in recovering high-resolution (HR) images from their low-resolution (LR) counterparts, albeit on the synthetic and simple degradation setting: bicubic downscaling. On the other hand, super-resolution on real-world images demands the capability to handle complex downscaling mechanisms which produce different artifacts (e.g., noise, blur, color distortion) depending on the downscaling factor. To account for complex downscaling mechanisms in real-world LR images, there have been a few efforts in constructing datasets consisting of LR images with real-world downsampling degradation. However, making such datasets entails a tremendous amount of time and effort, thereby resorting to a very small number of downscaling factors (e.g., $\times$2, $\times$3, $\times$4). To remedy the issue, we propose to generate realistic SR datasets for unseen degradation levels by exploring the latent space of real LR images and thereby producing more diverse yet realistic LR images with complex real-world artifacts. Our quantitative and qualitative experiments demonstrate the accuracy of the generated LR images, and we show that the various conventional SR networks trained with our newly generated SR datasets can produce much better HR images.
|
https://proceedings.mlr.press/v202/park23g.html
|
https://proceedings.mlr.press/v202/park23g/park23g.pdf
|
https://openreview.net/forum?id=zzPMd0Ue4i
|
Differentially Private Sharpness-Aware Training
|
https://proceedings.mlr.press/v202/park23g.html
|
Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee
|
https://proceedings.mlr.press/v202/park23g.html
|
ICML 2023
|
Training deep learning models with differential privacy (DP) results in a degradation of performance. The training dynamics of models with DP show a significant difference from standard training, whereas understanding the geometric properties of private learning remains largely unexplored. In this paper, we investigate sharpness, a key factor in achieving better generalization, in private learning. We show that flat minima can help reduce the negative effects of per-example gradient clipping and the addition of Gaussian noise. We then verify the effectiveness of Sharpness-Aware Minimization (SAM) for seeking flat minima in private learning. However, we also discover that SAM is detrimental to the privacy budget and computational time due to its two-step optimization. Thus, we propose a new sharpness-aware training method that mitigates the privacy-optimization trade-off. Our experimental results demonstrate that the proposed method improves the performance of deep learning models with DP from both scratch and fine-tuning. Code is available at https://github.com/jinseongP/DPSAT.
|
https://proceedings.mlr.press/v202/park23h.html
|
https://proceedings.mlr.press/v202/park23h/park23h.pdf
|
https://openreview.net/forum?id=Ct2N6RWZpQ
|
Controllability-Aware Unsupervised Skill Discovery
|
https://proceedings.mlr.press/v202/park23h.html
|
Seohong Park, Kimin Lee, Youngwoon Lee, Pieter Abbeel
|
https://proceedings.mlr.press/v202/park23h.html
|
ICML 2023
|
One of the key capabilities of intelligent agents is the ability to discover useful skills without external supervision. However, the current unsupervised skill discovery methods are often limited to acquiring simple, easy-to-learn skills due to the lack of incentives to discover more complex, challenging behaviors. We introduce a novel unsupervised skill discovery method, Controllability-aware Skill Discovery (CSD), which actively seeks complex, hard-to-control skills without supervision. The key component of CSD is a controllability-aware distance function, which assigns larger values to state transitions that are harder to achieve with the current skills. Combined with distance-maximizing skill discovery, CSD progressively learns more challenging skills over the course of training as our jointly trained distance function reduces rewards for easy-to-achieve skills. Our experimental results in six robotic manipulation and locomotion environments demonstrate that CSD can discover diverse complex skills including object manipulation and locomotion skills with no supervision, significantly outperforming prior unsupervised skill discovery methods. Videos and code are available at https://seohong.me/projects/csd/
|
https://proceedings.mlr.press/v202/park23i.html
|
https://proceedings.mlr.press/v202/park23i/park23i.pdf
|
https://openreview.net/forum?id=pj3EXUk5Uj
|
Predictable MDP Abstraction for Unsupervised Model-Based RL
|
https://proceedings.mlr.press/v202/park23i.html
|
Seohong Park, Sergey Levine
|
https://proceedings.mlr.press/v202/park23i.html
|
ICML 2023
|
A key component of model-based reinforcement learning (RL) is a dynamics model that predicts the outcomes of actions. Errors in this predictive model can degrade the performance of model-based controllers, and complex Markov decision processes (MDPs) can present exceptionally difficult prediction problems. To mitigate this issue, we propose predictable MDP abstraction (PMA): instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space that only permits predictable, easy-to-model actions, while covering the original state-action space as much as possible. As a result, model learning becomes easier and more accurate, which allows robust, stable model-based planning or model-based RL. This transformation is learned in an unsupervised manner, before any task is specified by the user. Downstream tasks can then be solved with model-based control in a zero-shot fashion, without additional environment interactions. We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches in a range of benchmark environments. Our code and videos are available at https://seohong.me/projects/pma/
|
https://proceedings.mlr.press/v202/park23j.html
|
https://proceedings.mlr.press/v202/park23j/park23j.pdf
|
https://openreview.net/forum?id=Z1kD2xOpVl
|
Neural Stochastic Differential Games for Time-series Analysis
|
https://proceedings.mlr.press/v202/park23j.html
|
Sungwoo Park, Byoungwoo Park, Moontae Lee, Changhee Lee
|
https://proceedings.mlr.press/v202/park23j.html
|
ICML 2023
|
Modeling spatiotemporal dynamics with neural differential equations has become a major line of research that opens new ways to handle various real-world scenarios (e.g., missing observations, irregular times, etc.). Despite such progress, most existing methods still face challenges in providing a general framework for analyzing time series. To tackle this, we adopt stochastic differential games to suggest a new philosophy of utilizing interacting collective intelligence in time series analysis. For the implementation, we develop the novel gradient descent-based algorithm called deep neural fictitious play to approximate the Nash equilibrium. We theoretically analyze the convergence result of the proposed algorithm and discuss the advantage of cooperative games in handling noninformative observation. Throughout the experiments on various datasets, we demonstrate the superiority of our framework over all the tested benchmarks in modeling time-series prediction by capitalizing on the advantages of applying cooperative games. An ablation study shows that neural agents of the proposed framework learn intrinsic temporal relevance to make accurate time-series predictions.
|
https://proceedings.mlr.press/v202/park23k.html
|
https://proceedings.mlr.press/v202/park23k/park23k.pdf
|
https://openreview.net/forum?id=rsaAjVplzC
|
Accelerated Infeasibility Detection of Constrained Optimization and Fixed-Point Iterations
|
https://proceedings.mlr.press/v202/park23k.html
|
Jisun Park, Ernest K. Ryu
|
https://proceedings.mlr.press/v202/park23k.html
|
ICML 2023
|
As first-order optimization methods become the method of choice for solving large-scale optimization problems, optimization solvers based on first-order algorithms are being built. Such general-purpose solvers must robustly detect infeasible or misspecified problem instances, but the computational complexity of first-order methods for doing so has yet to be formally studied. In this work, we characterize the optimal accelerated rate of infeasibility detection. We show that the standard fixed-point iteration achieves $\mathcal{O}(1/k^2)$ and $\mathcal{O}(1/k)$ rates, respectively, on the normalized iterates and the fixed-point residual converging to the infimal displacement vector, while the accelerated fixed-point iteration achieves $\mathcal{O}(1/k^2)$ and $\tilde{\mathcal{O}}(1/k^2)$ rates. We then provide a matching complexity lower bound to establish that $\Theta(1/k^2)$ is indeed the optimal accelerated rate.
|
https://proceedings.mlr.press/v202/parmas23a.html
|
https://proceedings.mlr.press/v202/parmas23a/parmas23a.pdf
|
https://openreview.net/forum?id=rDMAJECBM2
|
Model-based Reinforcement Learning with Scalable Composite Policy Gradient Estimators
|
https://proceedings.mlr.press/v202/parmas23a.html
|
Paavo Parmas, Takuma Seno, Yuma Aoki
|
https://proceedings.mlr.press/v202/parmas23a.html
|
ICML 2023
|
In model-based reinforcement learning (MBRL), policy gradients can be estimated either by derivative-free RL methods, such as likelihood ratio gradients (LR), or by backpropagating through a differentiable model via reparameterization gradients (RP). Instead of using one or the other, the Total Propagation (TP) algorithm in prior work showed that a combination of LR and RP estimators averaged using inverse variance weighting (IVW) can achieve orders of magnitude improvement over either method. However, IVW-based composite estimators have not yet been applied in modern RL tasks, as it is unclear if they can be implemented scalably. We propose a scalable method, Total Propagation X (TPX), that improves over TP by changing the node used for IVW and employing coordinate-wise weighting. We demonstrate the scalability of TPX by applying it to the state-of-the-art visual MBRL algorithm Dreamer. The experiments showed that Dreamer fails with long simulation horizons, while TPX works reliably with only a fraction of additional computation. One key advantage of TPX is its ease of implementation, which will enable experimenting with IVW on many tasks beyond MBRL.
|
https://proceedings.mlr.press/v202/parulekar23a.html
|
https://proceedings.mlr.press/v202/parulekar23a/parulekar23a.pdf
|
https://openreview.net/forum?id=zAgouWgI7b
|
PAC Generalization via Invariant Representations
|
https://proceedings.mlr.press/v202/parulekar23a.html
|
Advait U Parulekar, Karthikeyan Shanmugam, Sanjay Shakkottai
|
https://proceedings.mlr.press/v202/parulekar23a.html
|
ICML 2023
|
Invariant representations are transformations of the covariates such that the best model on top of the representation is invariant across training environments. In the context of linear Structural Equation Models (SEMs), invariant representations might allow us to learn models with out-of-distribution guarantees, i.e., models that are robust to interventions in the SEM. To address the invariant representation problem in a finite-sample setting, we consider the notion of $\epsilon$-approximate invariance. We study the following question: If a representation is approximately invariant with respect to a given number of training interventions, will it continue to be approximately invariant on a larger collection of unseen intervened SEMs? Inspired by PAC learning, we obtain finite-sample out-of-distribution generalization guarantees for approximate invariance that hold probabilistically over a family of linear SEMs without faithfulness assumptions.
|
https://proceedings.mlr.press/v202/pashakhanloo23a.html
|
https://proceedings.mlr.press/v202/pashakhanloo23a/pashakhanloo23a.pdf
|
https://openreview.net/forum?id=yqUhEFPoDN
|
Stochastic Gradient Descent-Induced Drift of Representation in a Two-Layer Neural Network
|
https://proceedings.mlr.press/v202/pashakhanloo23a.html
|
Farhad Pashakhanloo, Alexei Koulakov
|
https://proceedings.mlr.press/v202/pashakhanloo23a.html
|
ICML 2023
|
Representational drift refers to over-time changes in neural activation accompanied by stable task performance. Despite being observed in the brain and in artificial networks, the mechanisms of drift and its implications are not fully understood. Motivated by recent experimental findings of stimulus-dependent drift in the piriform cortex, we use theory and simulations to study this phenomenon in a two-layer linear feedforward network. Specifically, in a continual online learning scenario, we study the drift induced by the noise inherent in Stochastic Gradient Descent (SGD). By decomposing the learning dynamics into the normal and tangent spaces of the minimum-loss manifold, we show that the former corresponds to a finite-variance fluctuation, while the latter can be considered an effective diffusion process on the manifold. We analytically compute the fluctuation and the diffusion coefficients for the stimuli representations in the hidden layer as functions of network parameters and input distribution. Further, consistent with experiments, we show that the drift rate is slower for a more frequently presented stimulus. Overall, our analysis yields a theoretical framework for a better understanding of the drift phenomenon in biological and artificial neural networks.
|
https://proceedings.mlr.press/v202/passaro23a.html
|
https://proceedings.mlr.press/v202/passaro23a/passaro23a.pdf
|
https://openreview.net/forum?id=QIejMwU0r9
|
Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs
|
https://proceedings.mlr.press/v202/passaro23a.html
|
Saro Passaro, C. Lawrence Zitnick
|
https://proceedings.mlr.press/v202/passaro23a.html
|
ICML 2023
|
Graph neural networks that model 3D data, such as point clouds or atoms, are typically desired to be $SO(3)$ equivariant, i.e., equivariant to 3D rotations. Unfortunately equivariant convolutions, which are a fundamental operation for equivariant networks, increase significantly in computational complexity as higher-order tensors are used. In this paper, we address this issue by reducing the $SO(3)$ convolutions or tensor products to mathematically equivalent convolutions in $SO(2)$ . This is accomplished by aligning the node embeddings’ primary axis with the edge vectors, which sparsifies the tensor product and reduces the computational complexity from $O(L^6)$ to $O(L^3)$, where $L$ is the degree of the representation. We demonstrate the potential implications of this improvement by proposing the Equivariant Spherical Channel Network (eSCN), a graph neural network utilizing our novel approach to equivariant convolutions, which achieves state-of-the-art results on the large-scale OC-20 and OC-22 datasets.
|
https://proceedings.mlr.press/v202/patel23a.html
|
https://proceedings.mlr.press/v202/patel23a/patel23a.pdf
|
https://openreview.net/forum?id=mi7pnouqLa
|
Federated Online and Bandit Convex Optimization
|
https://proceedings.mlr.press/v202/patel23a.html
|
Kumar Kshitij Patel, Lingxiao Wang, Aadirupa Saha, Nathan Srebro
|
https://proceedings.mlr.press/v202/patel23a.html
|
ICML 2023
|
We study the problems of distributed online and bandit convex optimization against an adaptive adversary. We aim to minimize the average regret on $M$ machines working in parallel over $T$ rounds with $R$ intermittent communications. Assuming the underlying cost functions are convex and can be generated adaptively, our results show that collaboration is not beneficial when the machines have access to the first-order gradient information at the queried points. This is in contrast to the case for stochastic functions, where each machine samples the cost functions from a fixed distribution. Furthermore, we delve into the more challenging setting of federated online optimization with bandit (zeroth-order) feedback, where the machines can only access values of the cost functions at the queried points. The key finding here is identifying the high-dimensional regime where collaboration is beneficial and may even lead to a linear speedup in the number of machines. We further illustrate our findings through federated adversarial linear bandits by developing novel distributed single and two-point feedback algorithms. Our work is the first attempt towards a systematic understanding of federated online optimization with limited feedback, and it attains tight regret bounds in the intermittent communication setting for both first and zeroth-order feedback. Our results thus bridge the gap between stochastic and adaptive settings in federated online optimization.
|
https://proceedings.mlr.press/v202/pearce-crump23a.html
|
https://proceedings.mlr.press/v202/pearce-crump23a/pearce-crump23a.pdf
|
https://openreview.net/forum?id=uY7F5bouCN
|
Brauer’s Group Equivariant Neural Networks
|
https://proceedings.mlr.press/v202/pearce-crump23a.html
|
Edward Pearce-Crump
|
https://proceedings.mlr.press/v202/pearce-crump23a.html
|
ICML 2023
|
We provide a full characterisation of all of the possible group equivariant neural networks whose layers are some tensor power of $\mathbb{R}^{n}$ for three symmetry groups that are missing from the machine learning literature: $O(n)$, the orthogonal group; $SO(n)$, the special orthogonal group; and $Sp(n)$, the symplectic group. In particular, we find a spanning set of matrices for the learnable, linear, equivariant layer functions between such tensor power spaces in the standard basis of $\mathbb{R}^{n}$ when the group is $O(n)$ or $SO(n)$, and in the symplectic basis of $\mathbb{R}^{n}$ when the group is $Sp(n)$.
|
https://proceedings.mlr.press/v202/pearce-crump23b.html
|
https://proceedings.mlr.press/v202/pearce-crump23b/pearce-crump23b.pdf
|
https://openreview.net/forum?id=MORCsaQjR7
|
How Jellyfish Characterise Alternating Group Equivariant Neural Networks
|
https://proceedings.mlr.press/v202/pearce-crump23b.html
|
Edward Pearce-Crump
|
https://proceedings.mlr.press/v202/pearce-crump23b.html
|
ICML 2023
|
We provide a full characterisation of all of the possible alternating group ($A_n$) equivariant neural networks whose layers are some tensor power of $\mathbb{R}^{n}$. In particular, we find a basis of matrices for the learnable, linear, $A_n$–equivariant layer functions between such tensor power spaces in the standard basis of $\mathbb{R}^{n}$. We also describe how our approach generalises to the construction of neural networks that are equivariant to local symmetries.
|
https://proceedings.mlr.press/v202/pei23a.html
|
https://proceedings.mlr.press/v202/pei23a/pei23a.pdf
|
https://openreview.net/forum?id=mXv2aVqUGG
|
Can Large Language Models Reason about Program Invariants?
|
https://proceedings.mlr.press/v202/pei23a.html
|
Kexin Pei, David Bieber, Kensen Shi, Charles Sutton, Pengcheng Yin
|
https://proceedings.mlr.press/v202/pei23a.html
|
ICML 2023
|
Identifying invariants is an important program analysis task with applications towards program understanding, bug finding, vulnerability analysis, and formal verification. Existing tools for identifying program invariants rely on dynamic analysis, requiring traces collected from multiple executions in order to produce reliable invariants. We study the application of large language models to invariant prediction, finding that models trained on source code and fine-tuned for invariant generation can perform invariant prediction as static rather than dynamic analysis. Using a scratchpad approach where invariants are predicted sequentially through a program gives the best performance, finding invariants statically of quality comparable to those obtained by a dynamic analysis tool with access to five program traces.
|
https://proceedings.mlr.press/v202/pei23b.html
|
https://proceedings.mlr.press/v202/pei23b/pei23b.pdf
|
https://openreview.net/forum?id=pRQOVucM8e
|
Dynamics-inspired Neuromorphic Visual Representation Learning
|
https://proceedings.mlr.press/v202/pei23b.html
|
Zhengqi Pei, Shuhui Wang
|
https://proceedings.mlr.press/v202/pei23b.html
|
ICML 2023
|
This paper investigates the dynamics-inspired neuromorphic architecture for visual representation learning following Hamilton’s principle. Our method converts weight-based neural structure to its dynamics-based form that consists of finite sub-models, whose mutual relations measured by computing path integrals amongst their dynamical states are equivalent to the typical neural weights. Based on the entropy reduction process derived from the Euler-Lagrange equations, the feedback signals interpreted as stress forces amongst sub-models push them to move. We first train a dynamics-based neural model from scratch and observe that this model outperforms traditional neural models on MNIST. We then convert several pre-trained neural structures into dynamics-based forms, followed by fine-tuning via entropy reduction to obtain the stabilized dynamical states. We observe consistent improvements in these transformed models over their weight-based counterparts on ImageNet and WebVision in terms of computational complexity, parameter size, testing accuracy, and robustness. Besides, we show the correlation between model performance and structural entropy, providing deeper insight into weight-free neuromorphic learning.
|
https://proceedings.mlr.press/v202/peifeng23a.html
|
https://proceedings.mlr.press/v202/peifeng23a/peifeng23a.pdf
|
https://openreview.net/forum?id=8YxuCY7BuH
|
Feature Directions Matter: Long-Tailed Learning via Rotated Balanced Representation
|
https://proceedings.mlr.press/v202/peifeng23a.html
|
Gao Peifeng, Qianqian Xu, Peisong Wen, Zhiyong Yang, Huiyang Shao, Qingming Huang
|
https://proceedings.mlr.press/v202/peifeng23a.html
|
ICML 2023
|
Long-tailed learning is one of the most challenging problems in visual recognition. Several studies aim to solve long-tailed classification from the perspective of feature learning. Recent work proposes to learn a balanced representation by fixing the linear classifier as an Equiangular Tight Frame (ETF), arguing that what matters in classification is the structure of the features rather than their directions. Holding a different view, in this paper we show that features with fixed directions may be harmful to the generalization of models, even if the structure is completely symmetric. To avoid this issue, we propose the Representation-Balanced Learning Framework (RBL), which introduces orthogonal matrices to learn directions while maintaining the geometric structure of the ETF. Theoretically, our contributions are two-fold: (1) we point out that the feature learning of RBL is insensitive to the training-set label distribution; it always learns a balanced representation space; (2) we provide a generalization analysis of the proposed RBL through training stability. To analyze the stability of parameters under the orthogonal constraint, we propose a novel training stability analysis paradigm, Two-Parameter Model Stability. Practically, our method is extremely simple to implement but shows great superiority on several benchmark datasets.
|
https://proceedings.mlr.press/v202/peltonen23a.html
|
https://proceedings.mlr.press/v202/peltonen23a/peltonen23a.pdf
|
https://openreview.net/forum?id=g0zYQRWmFR
|
Fair Neighbor Embedding
|
https://proceedings.mlr.press/v202/peltonen23a.html
|
Jaakko Peltonen, Wen Xu, Timo Nummenmaa, Jyrki Nummenmaa
|
https://proceedings.mlr.press/v202/peltonen23a.html
|
ICML 2023
|
We consider fairness in dimensionality reduction. Nonlinear dimensionality reduction yields low dimensional representations that let users visualize and explore high-dimensional data. However, traditional dimensionality reduction may yield biased visualizations overemphasizing relationships of societal phenomena to sensitive attributes or protected groups. We introduce a framework of fair neighbor embedding, the Fair Neighbor Retrieval Visualizer, which formulates fair nonlinear dimensionality reduction as an information retrieval task whose performance and fairness are quantified by information retrieval criteria. The method optimizes low-dimensional embeddings that preserve high-dimensional data neighborhoods without yielding biased association of such neighborhoods to protected groups. In experiments the method yields fair visualizations outperforming previous methods.
|
https://proceedings.mlr.press/v202/peng23a.html
|
https://proceedings.mlr.press/v202/peng23a/peng23a.pdf
|
https://openreview.net/forum?id=o7BOzuqFi2
|
The Ideal Continual Learner: An Agent That Never Forgets
|
https://proceedings.mlr.press/v202/peng23a.html
|
Liangzu Peng, Paris Giampouras, Rene Vidal
|
https://proceedings.mlr.press/v202/peng23a.html
|
ICML 2023
|
The goal of continual learning is to find a model that solves multiple learning tasks which are presented sequentially to the learner. A key challenge in this setting is that the learner may "forget" how to solve a previous task when learning a new task, a phenomenon known as catastrophic forgetting. To address this challenge, many practical methods have been proposed, including memory-based, regularization-based and expansion-based methods. However, a rigorous theoretical understanding of these methods remains elusive. This paper aims to bridge this gap between theory and practice by proposing a new continual learning framework called "Ideal Continual Learner" (ICL), which is guaranteed to avoid catastrophic forgetting by construction. We show that ICL unifies multiple well-established continual learning methods and gives new theoretical insights into the strengths and weaknesses of these methods. We also derive generalization bounds for ICL which allow us to theoretically quantify "how rehearsal affects generalization". Finally, we connect ICL to several classic subjects and research topics of modern interest, which allows us to make historical remarks and inspire future directions.
|
https://proceedings.mlr.press/v202/peng23b.html
|
https://proceedings.mlr.press/v202/peng23b/peng23b.pdf
|
https://openreview.net/forum?id=gfGLMZR27W
|
MolDiff: Addressing the Atom-Bond Inconsistency Problem in 3D Molecule Diffusion Generation
|
https://proceedings.mlr.press/v202/peng23b.html
|
Xingang Peng, Jiaqi Guan, Qiang Liu, Jianzhu Ma
|
https://proceedings.mlr.press/v202/peng23b.html
|
ICML 2023
|
Deep generative models have recently achieved superior performance in 3D molecule generation. Most of them first generate atoms and then add chemical bonds based on the generated atoms in a post-processing manner. However, there might be no corresponding bond solution for the temporally generated atoms as their locations are generated without considering potential bonds. We define this problem as the atom-bond inconsistency problem and claim it is the main reason for current approaches to generating unrealistic 3D molecules. To overcome this problem, we propose a new diffusion model called MolDiff which can generate atoms and bonds simultaneously while still maintaining their consistency by explicitly modeling the dependence between their relationships. We evaluated the generation ability of our proposed model and the quality of the generated molecules using criteria related to both geometry and chemical properties. The empirical studies showed that our model outperforms previous approaches, achieving a three-fold improvement in success rate and generating molecules with significantly better quality.
|
https://proceedings.mlr.press/v202/peng23c.html
|
https://proceedings.mlr.press/v202/peng23c/peng23c.pdf
|
https://openreview.net/forum?id=rFLtREMkTR
|
Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation
|
https://proceedings.mlr.press/v202/peng23c.html
|
Andi Peng, Aviv Netanyahu, Mark K Ho, Tianmin Shu, Andreea Bobu, Julie Shah, Pulkit Agrawal
|
https://proceedings.mlr.press/v202/peng23c.html
|
ICML 2023
|
Policies often fail at test-time due to distribution shifts—changes in the state and reward that occur when an end user deploys the policy in environments different from those seen in training. Data augmentation can help models be more robust to such shifts by varying specific concepts in the state, e.g. object color, that are task-irrelevant and should not impact desired actions. However, designers training the agent don’t often know which concepts are irrelevant a priori. We propose a human-in-the-loop framework to leverage feedback from the end user to quickly identify and augment task-irrelevant visual state concepts. Our framework generates counterfactual demonstrations that allow users to quickly isolate shifted state concepts and identify if they should not impact the desired task, and can therefore be augmented using existing actions. We present experiments validating our full pipeline on discrete and continuous control tasks with real human users. Our method better enables users to (1) understand agent failure, (2) improve sample efficiency of demonstrations required for finetuning, and (3) adapt the agent to their desired reward.
|
https://proceedings.mlr.press/v202/perets23a.html
|
https://proceedings.mlr.press/v202/perets23a/perets23a.pdf
|
https://openreview.net/forum?id=5EVk1RXh3O
|
Learning Hidden Markov Models When the Locations of Missing Observations are Unknown
|
https://proceedings.mlr.press/v202/perets23a.html
|
Binyamin Perets, Mark Kozdoba, Shie Mannor
|
https://proceedings.mlr.press/v202/perets23a.html
|
ICML 2023
|
The Hidden Markov Model (HMM) is one of the most widely used statistical models for sequential data analysis. One of the key reasons for this versatility is the ability of HMMs to deal with missing data. However, standard HMM learning algorithms rely crucially on the assumption that the positions of the missing observations within the observation sequence are known. In the natural sciences, where this assumption is often violated, special variants of the HMM, commonly known as Silent-state HMMs (SHMMs), are used. Despite their widespread use, these algorithms strongly rely on specific structural assumptions about the underlying chain, such as acyclicity, thus limiting the applicability of these methods. Moreover, even in the acyclic case, it has been shown that these methods can lead to poor reconstruction. In this paper we consider the general problem of learning an HMM from data with unknown missing observation locations. We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain, and can also be used with limited prior knowledge, unlike SHMMs. We evaluate and compare the algorithms in a variety of scenarios, measuring their reconstruction precision and robustness under model misspecification. Notably, we show that under proper specifications one can reconstruct the process dynamics as well as if the missing observations' positions were known.
|
https://proceedings.mlr.press/v202/perini23a.html
|
https://proceedings.mlr.press/v202/perini23a/perini23a.pdf
|
https://openreview.net/forum?id=pf3NihScj1
|
Estimating the Contamination Factor’s Distribution in Unsupervised Anomaly Detection
|
https://proceedings.mlr.press/v202/perini23a.html
|
Lorenzo Perini, Paul-Christian Bürkner, Arto Klami
|
https://proceedings.mlr.press/v202/perini23a.html
|
ICML 2023
| null |
https://proceedings.mlr.press/v202/pesce23a.html
|
https://proceedings.mlr.press/v202/pesce23a/pesce23a.pdf
|
https://openreview.net/forum?id=W6SAzouaKT
|
Are Gaussian Data All You Need? The Extents and Limits of Universality in High-Dimensional Generalized Linear Estimation
|
https://proceedings.mlr.press/v202/pesce23a.html
|
Luca Pesce, Florent Krzakala, Bruno Loureiro, Ludovic Stephan
|
https://proceedings.mlr.press/v202/pesce23a.html
|
ICML 2023
|
In this manuscript we consider the problem of generalized linear estimation on Gaussian mixture data with labels given by a single-index model. Our first result is a sharp asymptotic expression for the test and training errors in the high-dimensional regime. Motivated by the recent stream of results on the Gaussian universality of the test and training errors in generalized linear estimation, we ask ourselves the question: "when is a single Gaussian enough to characterize the error?". Our formula allows us to give sharp answers to this question, both in the positive and negative directions. More precisely, we show that the sufficient conditions for Gaussian universality (or lack thereof) crucially depend on the alignment between the target weights and the means and covariances of the mixture clusters, which we precisely quantify. In the particular case of least-squares interpolation, we prove a strong universality property of the training error and show it follows a simple, closed-form expression. Finally, we apply our results to real datasets, clarifying some recent discussions in the literature about Gaussian universality of the errors in this context.
|
https://proceedings.mlr.press/v202/petrov23a.html
|
https://proceedings.mlr.press/v202/petrov23a/petrov23a.pdf
|
https://openreview.net/forum?id=zv7X5ybgSQ
|
Certifying Ensembles: A General Certification Theory with S-Lipschitzness
|
https://proceedings.mlr.press/v202/petrov23a.html
|
Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip Torr, Adel Bibi
|
https://proceedings.mlr.press/v202/petrov23a.html
|
ICML 2023
|
Improving and guaranteeing the robustness of deep learning models has been a topic of intense research. Ensembling, which combines several classifiers to provide a better model, has been shown to be beneficial for generalisation, uncertainty estimation, calibration, and mitigating the effects of concept drift. However, the impact of ensembling on certified robustness is less well understood. In this work, we generalise Lipschitz continuity by introducing S-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles. Our results give precise conditions under which ensembles of robust classifiers are more robust than any constituent classifier, as well as conditions under which they are less robust.
|
https://proceedings.mlr.press/v202/pfrommer23a.html
|
https://proceedings.mlr.press/v202/pfrommer23a/pfrommer23a.pdf
|
https://openreview.net/forum?id=631FTQB0UB
|
The Power of Learned Locally Linear Models for Nonlinear Policy Optimization
|
https://proceedings.mlr.press/v202/pfrommer23a.html
|
Daniel Pfrommer, Max Simchowitz, Tyler Westenbroek, Nikolai Matni, Stephen Tu
|
https://proceedings.mlr.press/v202/pfrommer23a.html
|
ICML 2023
|
A common pipeline in learning-based control is to iteratively estimate a model of system dynamics, and apply a trajectory optimization algorithm - e.g. $\mathtt{iLQR}$ - on the learned model to minimize a target cost. This paper conducts a rigorous analysis of a simplified variant of this strategy for general nonlinear systems. We analyze an algorithm which iterates between estimating local linear models of nonlinear system dynamics and performing $\mathtt{iLQR}$-like policy updates. We demonstrate that this algorithm attains sample complexity polynomial in relevant problem parameters, and, by synthesizing locally stabilizing gains, overcomes exponential dependence in problem horizon. Experimental results validate the performance of our algorithm, and compare to natural deep-learning baselines.
|
https://proceedings.mlr.press/v202/pham23a.html
|
https://proceedings.mlr.press/v202/pham23a/pham23a.pdf
|
https://openreview.net/forum?id=HaPz0YuD78
|
A Scalable Frank-Wolfe-Based Algorithm for the Max-Cut SDP
|
https://proceedings.mlr.press/v202/pham23a.html
|
Chi Bach Pham, Wynita Griggs, James Saunderson
|
https://proceedings.mlr.press/v202/pham23a.html
|
ICML 2023
|
We consider the problem of solving large-scale instances of the Max-Cut semidefinite program (SDP), i.e., optimizing a linear function over $n\times n$ positive semidefinite (PSD) matrices with unit diagonal. When the cost matrix is PSD, we show how to exactly reformulate the problem as maximizing a smooth concave function over PSD matrices with unit trace. By applying the Frank-Wolfe method, we obtain a simple algorithm that is compatible with recent sampling-based techniques to solve SDPs using low memory. We demonstrate the practical performance of our method on $10^6\times 10^6$ instances of the max-cut SDP with costs having up to $5 \times 10^6$ non-zero entries. Theoretically, we show that our method solves problems with diagonally dominant costs to relative error $\epsilon$ in $O(n\epsilon^{-1})$ calls to a randomized approximate largest eigenvalue subroutine, each of which succeeds with high probability after $O(\log(n)\epsilon^{-1/2})$ matrix-vector multiplications with the cost matrix.
|
https://proceedings.mlr.press/v202/phan23a.html
|
https://proceedings.mlr.press/v202/phan23a/phan23a.pdf
|
https://openreview.net/forum?id=23uOLxPd34
|
Attention-Based Recurrence for Multi-Agent Reinforcement Learning under Stochastic Partial Observability
|
https://proceedings.mlr.press/v202/phan23a.html
|
Thomy Phan, Fabian Ritz, Philipp Altmann, Maximilian Zorn, Jonas Nüßlein, Michael Kölle, Thomas Gabor, Claudia Linnhoff-Popien
|
https://proceedings.mlr.press/v202/phan23a.html
|
ICML 2023
|
Stochastic partial observability poses a major challenge for decentralized coordination in multi-agent reinforcement learning but is largely neglected in state-of-the-art research due to a strong focus on state-based centralized training for decentralized execution (CTDE) and benchmarks that lack sufficient stochasticity like StarCraft Multi-Agent Challenge (SMAC). In this paper, we propose Attention-based Embeddings of Recurrence In multi-Agent Learning (AERIAL) to approximate value functions under stochastic partial observability. AERIAL replaces the true state with a learned representation of multi-agent recurrence, considering more accurate information about decentralized agent decisions than state-based CTDE. We then introduce MessySMAC, a modified version of SMAC with stochastic observations and higher variance in initial states, to provide a more general and configurable benchmark regarding stochastic partial observability. We evaluate AERIAL in Dec-Tiger as well as in a variety of SMAC and MessySMAC maps, and compare the results with state-based CTDE. Furthermore, we evaluate the robustness of AERIAL and state-based CTDE against various stochasticity configurations in MessySMAC.
|
https://proceedings.mlr.press/v202/phang23a.html
|
https://proceedings.mlr.press/v202/phang23a/phang23a.pdf
|
https://openreview.net/forum?id=O0iQkpQfFe
|
HyperTuning: Toward Adapting Large Language Models without Back-propagation
|
https://proceedings.mlr.press/v202/phang23a.html
|
Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen
|
https://proceedings.mlr.press/v202/phang23a.html
|
ICML 2023
|
Fine-tuning large language models for different tasks can be costly and inefficient, and even methods that reduce the number of tuned parameters still require full gradient-based optimization. We propose HyperTuning, a novel approach to model adaptation that uses a hypermodel to generate task-specific parameters for a fixed downstream model. We demonstrate a simple setup for hypertuning with HyperT5, a T5-based hypermodel that produces soft prefixes or LoRA parameters for a frozen T5 model from few-shot examples. We train HyperT5 in two stages: first, hyperpretraining with a modified conditional language modeling objective that trains a hypermodel to generate parameters; second, multi-task fine-tuning (MTF) on a large number of diverse language tasks. We evaluate HyperT5 on P3, MetaICL and Super-NaturalInstructions datasets, and show that it can effectively generate parameters for unseen tasks. Moreover, we show that using hypermodel-generated parameters as initializations for further parameter-efficient fine-tuning improves performance. HyperTuning can thus be a flexible and efficient way to leverage large language models for diverse downstream applications.
|
https://proceedings.mlr.press/v202/pinson23a.html
|
https://proceedings.mlr.press/v202/pinson23a/pinson23a.pdf
|
https://openreview.net/forum?id=ZFBf47ZNos
|
Linear CNNs Discover the Statistical Structure of the Dataset Using Only the Most Dominant Frequencies
|
https://proceedings.mlr.press/v202/pinson23a.html
|
Hannah Pinson, Joeri Lenaerts, Vincent Ginis
|
https://proceedings.mlr.press/v202/pinson23a.html
|
ICML 2023
|
We here present a stepping stone towards a deeper understanding of convolutional neural networks (CNNs) in the form of a theory of learning in linear CNNs. Through analyzing the gradient descent equations, we discover that the evolution of the network during training is determined by the interplay between the dataset structure and the convolutional network structure. We show that linear CNNs discover the statistical structure of the dataset with non-linear, ordered, stage-like transitions, and that the speed of discovery changes depending on the relationship between the dataset and the convolutional network structure. Moreover, we find that this interplay lies at the heart of what we call the "dominant frequency bias", where linear CNNs arrive at these discoveries using only the dominant frequencies of the different structural parts present in the dataset. We furthermore provide experiments that show how our theory relates to deep, non-linear CNNs used in practice. Our findings shed new light on the inner working of CNNs, and can help explain their shortcut learning and their tendency to rely on texture instead of shape.
|
https://proceedings.mlr.press/v202/plassier23a.html
|
https://proceedings.mlr.press/v202/plassier23a/plassier23a.pdf
|
https://openreview.net/forum?id=ytpEqHYSEy
|
Conformal Prediction for Federated Uncertainty Quantification Under Label Shift
|
https://proceedings.mlr.press/v202/plassier23a.html
|
Vincent Plassier, Mehdi Makni, Aleksandr Rubashevskii, Eric Moulines, Maxim Panov
|
https://proceedings.mlr.press/v202/plassier23a.html
|
ICML 2023
|
Federated Learning (FL) is a machine learning framework where many clients collaboratively train models while keeping the training data decentralized. Despite recent advances in FL, uncertainty quantification (UQ) remains only partially addressed. Among UQ methods, conformal prediction (CP) approaches provide distribution-free guarantees under minimal assumptions. We develop a new federated conformal prediction method that is based on quantile regression and takes privacy constraints into account. The method takes advantage of importance weighting to effectively address the label shift between agents and provides theoretical guarantees for both valid coverage of the prediction sets and differential privacy. Extensive experimental studies demonstrate that this method outperforms current competitors.
|
https://proceedings.mlr.press/v202/podina23a.html
|
https://proceedings.mlr.press/v202/podina23a/podina23a.pdf
|
https://openreview.net/forum?id=FREvWGzoRu
|
Universal Physics-Informed Neural Networks: Symbolic Differential Operator Discovery with Sparse Data
|
https://proceedings.mlr.press/v202/podina23a.html
|
Lena Podina, Brydon Eastman, Mohammad Kohandel
|
https://proceedings.mlr.press/v202/podina23a.html
|
ICML 2023
|
In this work we perform symbolic discovery of differential operators in a situation where there is sparse experimental data. This small data regime in machine learning can be made tractable by providing our algorithms with prior information about the underlying dynamics. Physics Informed Neural Networks (PINNs) have been very successful in this regime (reconstructing entire ODE solutions using only a single point or entire PDE solutions with very few measurements of the initial condition). The Universal PINN approach (UPINN) adds a neural network that learns a representation of unknown hidden terms in the differential equation. The algorithm yields both a surrogate solution to the differential equation and a black-box representation of the hidden terms. These hidden term neural networks can then be converted into symbolic equations using symbolic regression techniques like AI Feynman. In order to achieve convergence of the neural networks, we provide our algorithms with (noisy) measurements of both the initial condition as well as (synthetic) experimental data obtained at later times. We demonstrate strong performance of UPINNs even when provided with very few measurements of noisy data in both the ODE and PDE regime.
|
https://proceedings.mlr.press/v202/podkopaev23a.html
|
https://proceedings.mlr.press/v202/podkopaev23a/podkopaev23a.pdf
|
https://openreview.net/forum?id=pKz0SD05YC
|
Sequential Kernelized Independence Testing
|
https://proceedings.mlr.press/v202/podkopaev23a.html
|
Aleksandr Podkopaev, Patrick Blöbaum, Shiva Kasiviswanathan, Aaditya Ramdas
|
https://proceedings.mlr.press/v202/podkopaev23a.html
|
ICML 2023
|
Independence testing is a classical statistical problem that has been extensively studied in the batch setting when one fixes the sample size before collecting data. However, practitioners often prefer procedures that adapt to the complexity of a problem at hand instead of setting sample size in advance. Ideally, such procedures should (a) stop earlier on easy tasks (and later on harder tasks), hence making better use of available resources, and (b) continuously monitor the data and efficiently incorporate statistical evidence after collecting new data, while controlling the false alarm rate. Classical batch tests are not tailored for streaming data: valid inference after data peeking requires correcting for multiple testing which results in low power. Following the principle of testing by betting, we design sequential kernelized independence tests that overcome such shortcomings. We exemplify our broad framework using bets inspired by kernelized dependence measures, e.g., the Hilbert-Schmidt independence criterion. Our test is also valid under non-i.i.d. time-varying settings. We demonstrate the power of our approaches on both simulated and real data.
|
https://proceedings.mlr.press/v202/poiani23a.html
|
https://proceedings.mlr.press/v202/poiani23a/poiani23a.pdf
|
https://openreview.net/forum?id=GDVczeyqFa
|
Truncating Trajectories in Monte Carlo Reinforcement Learning
|
https://proceedings.mlr.press/v202/poiani23a.html
|
Riccardo Poiani, Alberto Maria Metelli, Marcello Restelli
|
https://proceedings.mlr.press/v202/poiani23a.html
|
ICML 2023
|
In Reinforcement Learning (RL), an agent acts in an unknown environment to maximize the expected cumulative discounted sum of an external reward signal, i.e., the expected return. In practice, in many tasks of interest, such as policy optimization, the agent usually spends its interaction budget by collecting episodes of fixed length within a simulator (i.e., Monte Carlo simulation). However, given the discounted nature of the RL objective, this data collection strategy might not be the best option. Indeed, the rewards taken in early simulation steps weigh exponentially more than future rewards. Taking a cue from this intuition, in this paper, we design an a-priori budget allocation strategy that leads to the collection of trajectories of different lengths, i.e., truncated. The proposed approach provably minimizes the width of the confidence intervals around the empirical estimates of the expected return of a policy. After discussing the theoretical properties of our method, we make use of our trajectory truncation mechanism to extend Policy Optimization via Importance Sampling (POIS, Metelli et al., 2018) algorithm. Finally, we conduct a numerical comparison between our algorithm and POIS: the results are consistent with our theory and show that an appropriate truncation of the trajectories can succeed in improving performance.
|
https://proceedings.mlr.press/v202/poli23a.html
|
https://proceedings.mlr.press/v202/poli23a/poli23a.pdf
|
https://openreview.net/forum?id=1sxiBaGEtg
|
Hyena Hierarchy: Towards Larger Convolutional Language Models
|
https://proceedings.mlr.press/v202/poli23a.html
|
Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Re
|
https://proceedings.mlr.press/v202/poli23a.html
|
ICML 2023
|
Recent advances in deep learning have relied heavily on the use of large Transformers due to their ability to learn at scale. However, the core building block of Transformers, the attention operator, exhibits quadratic cost in sequence length, limiting the amount of context accessible. Existing subquadratic methods based on low-rank and sparse approximations need to be combined with dense attention layers to match Transformers at scale, indicating a gap in capability. In this work, we propose Hyena, a subquadratic drop-in replacement for attention constructed by interleaving implicitly parametrized long convolutions and data-controlled gating. In challenging reasoning tasks on sequences of thousands to hundreds of thousands of tokens, Hyena improves accuracy by more than 50 points over operators relying on state-space models, transfer functions, and other implicit and explicit methods, matching attention-based models. We set a new state-of-the-art for dense-attention-free architectures on language modeling in standard datasets WikiText103 and The Pile, reaching Transformer quality with a 20% reduction in training compute required at sequence length 2k. Hyena operators are 2x faster than highly optimized attention at sequence length 8k, with speedups of 100x at 64k.
|
https://proceedings.mlr.press/v202/pollaci23a.html
|
https://proceedings.mlr.press/v202/pollaci23a/pollaci23a.pdf
|
https://openreview.net/forum?id=TuHgrnPHZq
|
Spurious Valleys and Clustering Behavior of Neural Networks
|
https://proceedings.mlr.press/v202/pollaci23a.html
|
Samuele Pollaci
|
https://proceedings.mlr.press/v202/pollaci23a.html
|
ICML 2023
|
Neural networks constitute a class of functions that are typically non-surjective, with high-dimensional fibers and complicated image. We prove two main results concerning the geometry of the loss landscape of a neural network. First, we provide an explicit effective bound on the sizes of the hidden layers so that the loss landscape has no spurious valleys, which guarantees the success of gradient descent methods. Second, we present a novel method for analyzing whether a given neural network architecture with monomial activation function can represent a target function of interest. The core of our analysis method is the study of a specific set of error values, and its behavior depending on different training datasets.
|
https://proceedings.mlr.press/v202/pooladian23a.html
|
https://proceedings.mlr.press/v202/pooladian23a/pooladian23a.pdf
|
https://openreview.net/forum?id=mxkGDxWOHS
|
Multisample Flow Matching: Straightening Flows with Minibatch Couplings
|
https://proceedings.mlr.press/v202/pooladian23a.html
|
Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, Ricky T. Q. Chen
|
https://proceedings.mlr.press/v202/pooladian23a.html
|
ICML 2023
|
Simulation-free methods for training continuous-time generative models construct probability paths that go between noise distributions and individual data samples. Recent works, such as Flow Matching, derived paths that are optimal for each data sample. However, these algorithms rely on independent data and noise samples, and do not exploit underlying structure in the data distribution for constructing probability paths. We propose Multisample Flow Matching, a more general framework that uses non-trivial couplings between data and noise samples while satisfying the correct marginal constraints. At small overhead costs, this generalization allows us to (i) reduce gradient variance during training, (ii) obtain straighter flows for the learned vector field, which allows us to generate high-quality samples using fewer function evaluations, and (iii) obtain transport maps with low cost in high dimensions, which has applications beyond generative modeling. Importantly, we do so in a completely simulation-free manner with a simple minimization objective. We show that our proposed methods improve sample consistency on downsampled ImageNet data sets, and lead to better low-cost sample generation.
|
https://proceedings.mlr.press/v202/pooladian23b.html
|
https://proceedings.mlr.press/v202/pooladian23b/pooladian23b.pdf
|
https://openreview.net/forum?id=2zIlKYx6iV
|
Minimax estimation of discontinuous optimal transport maps: The semi-discrete case
|
https://proceedings.mlr.press/v202/pooladian23b.html
|
Aram-Alexandre Pooladian, Vincent Divol, Jonathan Niles-Weed
|
https://proceedings.mlr.press/v202/pooladian23b.html
|
ICML 2023
|
We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb{R}^d$, on the basis of i.i.d. samples. All existing statistical analyses of this problem require the assumption that the transport map is Lipschitz, a strong requirement that, in particular, excludes any examples where the transport map is discontinuous. As a first step towards developing estimation procedures for discontinuous maps, we consider the important special case where the data distribution $Q$ is a discrete measure supported on a finite number of points in $\mathbb{R}^d$. We study a computationally efficient estimator initially proposed by (Pooladian & Niles-Weed, 2021), based on entropic optimal transport, and show in the semi-discrete setting that it converges at the minimax-optimal rate $n^{-1/2}$, independent of dimension. Other standard map estimation techniques both lack finite-sample guarantees in this setting and provably suffer from the curse of dimensionality. We confirm these results in numerical experiments, and provide experiments for other settings, not covered by our theory, which indicate that the entropic estimator is a promising methodology for other discontinuous transport map estimation problems.
|
https://proceedings.mlr.press/v202/prabhudesai23a.html
|
https://proceedings.mlr.press/v202/prabhudesai23a/prabhudesai23a.pdf
|
https://openreview.net/forum?id=G5vKSJVhJL
|
Test-time Adaptation with Slot-Centric Models
|
https://proceedings.mlr.press/v202/prabhudesai23a.html
|
Mihir Prabhudesai, Anirudh Goyal, Sujoy Paul, Sjoerd Van Steenkiste, Mehdi S. M. Sajjadi, Gaurav Aggarwal, Thomas Kipf, Deepak Pathak, Katerina Fragkiadaki
|
https://proceedings.mlr.press/v202/prabhudesai23a.html
|
ICML 2023
|
Current visual detectors, though impressive within their training distribution, often fail to parse out-of-distribution scenes into their constituent entities. Recent test-time adaptation methods use auxiliary self-supervised losses to adapt the network parameters to each test example independently and have shown promising results towards generalization outside the training distribution for the task of image classification. In our work, we find evidence that these losses are insufficient for the task of scene decomposition, without also considering architectural inductive biases. Recent slot-centric generative models attempt to decompose scenes into entities in a self-supervised manner by reconstructing pixels. Drawing upon these two lines of work, we propose Slot-TTA, a semi-supervised slot-centric scene decomposition model that at test time is adapted per scene through gradient descent on reconstruction or cross-view synthesis objectives. We evaluate Slot-TTA across multiple input modalities, images or 3D point clouds, and show substantial out-of-distribution performance improvements against state-of-the-art supervised feed-forward detectors, and alternative test-time adaptation methods. Project Webpage: http://slot-tta.github.io/
|
https://proceedings.mlr.press/v202/prinster23a.html
|
https://proceedings.mlr.press/v202/prinster23a/prinster23a.pdf
|
https://openreview.net/forum?id=ORxBEWMPAJ
|
JAWS-X: Addressing Efficiency Bottlenecks of Conformal Prediction Under Standard and Feedback Covariate Shift
|
https://proceedings.mlr.press/v202/prinster23a.html
|
Drew Prinster, Suchi Saria, Anqi Liu
|
https://proceedings.mlr.press/v202/prinster23a.html
|
ICML 2023
|
We study the efficient estimation of predictive confidence intervals for black-box predictors when the common data exchangeability (e.g., i.i.d.) assumption is violated due to potentially feedback-induced shifts in the input data distribution. That is, we focus on standard and feedback covariate shift (FCS), where the latter allows for feedback dependencies between train and test data that occur in many decision-making scenarios like experimental design. Whereas prior conformal prediction methods for this problem are in general either extremely computationally demanding or make inefficient use of labeled data, we propose a collection of methods based on the jackknife+ that achieve a practical balance of computational and statistical efficiency. Theoretically, our proposed JAW-FCS method extends the rigorous, finite-sample coverage guarantee of the jackknife+ to FCS. We moreover propose two tunable relaxations to JAW-FCS’s computation that maintain finite-sample guarantees: one using only $K$ leave-one-out models (JAW-$K$LOO) and a second building on $K$-fold cross validation+ (WCV+). Practically, we demonstrate that JAW-FCS and its computational relaxations outperform state-of-the-art baselines on a variety of real-world datasets under standard and feedback covariate shift, including for biomolecular design and active learning tasks.
|
https://proceedings.mlr.press/v202/puny23a.html
|
https://proceedings.mlr.press/v202/puny23a/puny23a.pdf
|
https://openreview.net/forum?id=oVwFwXO9Kg
|
Equivariant Polynomials for Graph Neural Networks
|
https://proceedings.mlr.press/v202/puny23a.html
|
Omri Puny, Derek Lim, Bobak Kiani, Haggai Maron, Yaron Lipman
|
https://proceedings.mlr.press/v202/puny23a.html
|
ICML 2023
|
Graph Neural Networks (GNN) are inherently limited in their expressive power. Recent seminal works (Xu et al., 2019; Morris et al., 2019b) introduced the Weisfeiler-Lehman (WL) hierarchy as a measure of expressive power. Although this hierarchy has propelled significant advances in GNN analysis and architecture developments, it suffers from several significant limitations. These include a complex definition that lacks direct guidance for model improvement and a WL hierarchy that is too coarse to study current GNNs. This paper introduces an alternative expressive power hierarchy based on the ability of GNNs to calculate equivariant polynomials of a certain degree. As a first step, we provide a full characterization of all equivariant graph polynomials by introducing a concrete basis, significantly generalizing previous results. Each basis element corresponds to a specific multi-graph, and its computation over some graph data input corresponds to a tensor contraction problem. Second, we propose algorithmic tools for evaluating the expressiveness of GNNs using tensor contraction sequences, and calculate the expressive power of popular GNNs. Finally, we enhance the expressivity of common GNN architectures by adding polynomial features or additional operations / aggregations inspired by our theory. These enhanced GNNs demonstrate state-of-the-art results in experiments across multiple graph learning benchmarks.
|
https://proceedings.mlr.press/v202/qi23a.html
|
https://proceedings.mlr.press/v202/qi23a/qi23a.pdf
|
https://openreview.net/forum?id=80IfYewOh1
|
Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining
|
https://proceedings.mlr.press/v202/qi23a.html
|
Zekun Qi, Runpei Dong, Guofan Fan, Zheng Ge, Xiangyu Zhang, Kaisheng Ma, Li Yi
|
https://proceedings.mlr.press/v202/qi23a.html
|
ICML 2023
|
Mainstream 3D representation learning approaches are built upon contrastive or generative modeling pretext tasks, where great improvements in performance on various downstream tasks have been achieved. However, we find these two paradigms have different characteristics: (i) contrastive models are data-hungry that suffer from a representation over-fitting issue; (ii) generative models have a data filling issue that shows inferior data scaling capacity compared to contrastive models. This motivates us to learn 3D representations by sharing the merits of both paradigms, which is non-trivial due to the pattern difference between the two paradigms. In this paper, we propose contrast with reconstruct (ReCon) that unifies these two paradigms. ReCon is trained to learn from both generative modeling teachers and cross-modal contrastive teachers through ensemble distillation, where the generative student is used to guide the contrastive student. An encoder-decoder style ReCon-block is proposed that transfers knowledge through cross attention with stop-gradient, which avoids pretraining over-fitting and pattern difference issues. ReCon achieves a new state-of-the-art in 3D representation learning, e.g., 91.26% accuracy on ScanObjectNN. Codes have been released at https://github.com/qizekun/ReCon.
|
https://proceedings.mlr.press/v202/qi23b.html
|
https://proceedings.mlr.press/v202/qi23b/qi23b.pdf
|
https://openreview.net/forum?id=zkdHgAKedJ
|
An Effective Meaningful Way to Evaluate Survival Models
|
https://proceedings.mlr.press/v202/qi23b.html
|
Shi-Ang Qi, Neeraj Kumar, Mahtab Farrokh, Weijie Sun, Li-Hao Kuan, Rajesh Ranganath, Ricardo Henao, Russell Greiner
|
https://proceedings.mlr.press/v202/qi23b.html
|
ICML 2023
|
One straightforward metric to evaluate a survival prediction model is based on the Mean Absolute Error (MAE) – the average of the absolute difference between the time predicted by the model and the true event time, over all subjects. Unfortunately, this is challenging because, in practice, the test set includes (right) censored individuals, meaning we do not know when a censored individual actually experienced the event. In this paper, we explore various metrics to estimate MAE for survival datasets that include (many) censored individuals. Moreover, we introduce a novel and effective approach for generating realistic semi-synthetic survival datasets to facilitate the evaluation of metrics. Our findings, based on the analysis of the semi-synthetic datasets, reveal that our proposed metric (MAE using pseudo-observations) is able to rank models accurately based on their performance, and often closely matches the true MAE – in particular, is better than several alternative methods.
|
https://proceedings.mlr.press/v202/qiang23a.html
|
https://proceedings.mlr.press/v202/qiang23a/qiang23a.pdf
|
https://openreview.net/forum?id=7haEvhb25X
|
Coarse-to-Fine: a Hierarchical Diffusion Model for Molecule Generation in 3D
|
https://proceedings.mlr.press/v202/qiang23a.html
|
Bo Qiang, Yuxuan Song, Minkai Xu, Jingjing Gong, Bowen Gao, Hao Zhou, Wei-Ying Ma, Yanyan Lan
|
https://proceedings.mlr.press/v202/qiang23a.html
|
ICML 2023
|
Generating desirable molecular structures in 3D is a fundamental problem for drug discovery. Despite considerable progress, existing methods usually generate molecules at atom resolution and ignore intrinsic local structures such as rings, which leads to poor quality of the generated structures, especially for large molecules. Fragment-based molecule generation is a promising strategy; however, it is nontrivial to adapt it to 3D non-autoregressive generation because of the underlying combinatorial optimization problems. In this paper, we utilize a coarse-to-fine strategy to tackle this problem, in which a Hierarchical Diffusion-based model (i.e., HierDiff) is proposed to preserve the validity of local segments without relying on autoregressive modeling. Specifically, HierDiff first generates coarse-grained molecule geometries via an equivariant diffusion process, where each coarse-grained node reflects a fragment in a molecule. The coarse-grained nodes are then decoded into fine-grained fragments by a message-passing process and a newly designed iterative refined sampling module. Lastly, the fine-grained fragments are assembled to derive a complete atomic molecular structure. Extensive experiments demonstrate that HierDiff consistently improves the quality of molecule generation over existing methods.
|
https://proceedings.mlr.press/v202/qiao23a.html
|
https://proceedings.mlr.press/v202/qiao23a/qiao23a.pdf
|
https://openreview.net/forum?id=Y7zfCfnax4
|
Collaborative Causal Inference with Fair Incentives
|
https://proceedings.mlr.press/v202/qiao23a.html
|
Rui Qiao, Xinyi Xu, Bryan Kian Hsiang Low
|
https://proceedings.mlr.press/v202/qiao23a.html
|
ICML 2023
|
Collaborative causal inference (CCI) aims to improve the estimation of the causal effect of treatment variables by utilizing data aggregated from multiple self-interested parties. Since their source data are valuable proprietary assets that can be costly or tedious to obtain, every party has to be incentivized to be willing to contribute to the collaboration, such as with a reward that is guaranteed to be fair and sufficiently more valuable than what the party could obtain by performing causal inference on its own. This paper presents a reward scheme designed using the unique statistical properties that are required by causal inference to guarantee certain desirable incentive criteria (e.g., fairness, benefit) for the parties based on their contributions. To achieve this, we propose a data valuation function to value parties’ data for CCI based on the distributional closeness of its resulting treatment effect estimate to that utilizing the aggregated data from all parties. Then, we show how to value the parties’ rewards fairly based on a modified variant of the Shapley value arising from our proposed data valuation for CCI. Finally, the Shapley fair rewards to the parties are realized in the form of improved, stochastically perturbed treatment effect estimates. We empirically demonstrate the effectiveness of our reward scheme using simulated and real-world datasets.
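As a rough illustration of the reward machinery the abstract builds on, the sketch below computes exact Shapley values for a generic coalition value function; the toy valuation is hypothetical and stands in for the paper's distribution-closeness data valuation.

```python
from itertools import combinations
from math import factorial

def shapley_values(parties, value):
    """Exact Shapley values for a coalition value function value(frozenset) -> float.
    In CCI, the value of a coalition would come from the proposed data valuation
    (closeness of its treatment-effect estimate to the grand-coalition estimate);
    here the value function is left abstract."""
    n = len(parties)
    phi = {p: 0.0 for p in parties}
    for p in parties:
        others = [q for q in parties if q != p]
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                s = frozenset(coal)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[p] += weight * (value(s | {p}) - value(s))
    return phi

# toy usage with a purely hypothetical valuation: value grows with coalition size
toy_value = lambda s: len(s) ** 0.5
print(shapley_values(["A", "B", "C"], toy_value))
```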
|
https://proceedings.mlr.press/v202/qiao23b.html
|
https://proceedings.mlr.press/v202/qiao23b/qiao23b.pdf
|
https://openreview.net/forum?id=BAcj3gDlWV
|
FREDIS: A Fusion Framework of Refinement and Disambiguation for Unreliable Partial Label Learning
|
https://proceedings.mlr.press/v202/qiao23b.html
|
Congyu Qiao, Ning Xu, Jiaqi Lv, Yi Ren, Xin Geng
|
https://proceedings.mlr.press/v202/qiao23b.html
|
ICML 2023
|
To reduce the difficulty of annotation, partial label learning (PLL) has been widely studied, where each example is ambiguously annotated with a set of candidate labels instead of the exact correct label. PLL assumes that the candidate label set contains the correct label, which induces disambiguation, i.e., identification of the correct label in the candidate label set, adopted in most PLL methods. However, this assumption is impractical as no one could guarantee the existence of the correct label in the candidate label set under real-world scenarios. Therefore, Unreliable Partial Label Learning (UPLL) is investigated where the correct label of each example may not exist in the candidate label set. In this paper, we propose a fusion framework of refinement and disambiguation named FREDIS to handle the UPLL problem. Specifically, with theoretical guarantees, disambiguation moves incorrect labels from the candidate label set to the non-candidate label set, while refinement, the opposite procedure, moves correct labels from the non-candidate label set to the candidate label set. Besides, we prove that the classifier trained by our framework could eventually approximate the Bayes optimal classifier. Extensive experiments on widely used benchmark datasets validate the effectiveness of our proposed framework.
|
https://proceedings.mlr.press/v202/qin23a.html
|
https://proceedings.mlr.press/v202/qin23a/qin23a.pdf
|
https://openreview.net/forum?id=LhFE049fh5
|
Nugget: Neural Agglomerative Embeddings of Text
|
https://proceedings.mlr.press/v202/qin23a.html
|
Guanghui Qin, Benjamin Van Durme
|
https://proceedings.mlr.press/v202/qin23a.html
|
ICML 2023
|
Embedding text sequences is a widespread requirement in modern language understanding. Existing approaches focus largely on constant-size representations. This is problematic, as the amount of information contained in text often varies with the length of the input. We propose a solution called Nugget, which encodes language into a representation based on a dynamically selected subset of input tokens. These nuggets are learned through tasks like autoencoding and machine translation, and intuitively segment language into meaningful units. We demonstrate Nugget outperforms related approaches in tasks involving semantic comparison. Finally, we illustrate these compact units allow for expanding the contextual window of a language model (LM), suggesting new future LMs that can condition on significantly larger amounts of content.
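A hypothetical sketch of the core idea, selecting a dynamic subset of token embeddings as the text's compact representation, is shown below; the scorer, the selection ratio, and the class name are illustrative, and the real model learns the selector through autoencoding and translation objectives rather than in isolation.

```python
import torch
import torch.nn as nn

class NuggetStyleSelector(nn.Module):
    """Score each contextual token embedding and keep the top-k as the compact representation."""
    def __init__(self, d_model, ratio=0.1):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)
        self.ratio = ratio

    def forward(self, token_embeddings):            # (seq_len, d_model)
        scores = self.scorer(token_embeddings).squeeze(-1)
        k = max(1, int(self.ratio * token_embeddings.size(0)))
        idx = scores.topk(k).indices.sort().values  # keep selected tokens in original order
        return token_embeddings[idx], idx

# toy usage
enc = torch.randn(120, 256)                         # encoder outputs for a 120-token text
nuggets, positions = NuggetStyleSelector(256)(enc)
print(nuggets.shape, positions[:5])
```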
|
https://proceedings.mlr.press/v202/qin23b.html
|
https://proceedings.mlr.press/v202/qin23b/qin23b.pdf
|
https://openreview.net/forum?id=2bUddKNVf8
|
BiBench: Benchmarking and Analyzing Network Binarization
|
https://proceedings.mlr.press/v202/qin23b.html
|
Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, Fisher Yu, Xianglong Liu
|
https://proceedings.mlr.press/v202/qin23b.html
|
ICML 2023
|
Network binarization emerges as one of the most promising compression approaches offering extraordinary computation and memory savings by minimizing the bit-width. However, recent research has shown that applying existing binarization algorithms to diverse tasks, architectures, and hardware in realistic scenarios is still not straightforward. Common challenges of binarization, such as accuracy degradation and efficiency limitation, suggest that its attributes are not fully understood. To close this gap, we present BiBench, a rigorously designed benchmark with in-depth analysis for network binarization. We first carefully scrutinize the requirements of binarization in actual production and define evaluation tracks and metrics for a comprehensive and fair investigation. Then, we evaluate and analyze a series of milestone binarization algorithms that function at the operator level and have had extensive influence. Our benchmark reveals that 1) the binarized operator has a crucial impact on the performance and deployability of binarized networks; 2) the accuracy of binarization varies significantly across different learning tasks and neural architectures; 3) binarization has demonstrated promising efficiency potential on edge devices despite the limited hardware support. The results and analysis also lead to a promising paradigm for accurate and efficient binarization. We believe that BiBench will contribute to the broader adoption of binarization and serve as a foundation for future research. The code for our BiBench is released at https://github.com/htqin/BiBench.
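For readers unfamiliar with what a "binarized operator" looks like, the following is a generic 1-bit linear layer with a straight-through estimator (STE), a textbook construction rather than any specific algorithm evaluated in BiBench.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients through only where |x| <= 1 (the usual clipped STE).
        return grad_out * (x.abs() <= 1).float()

class BinaryLinear(nn.Module):
    """Linear layer with 1-bit weights and a real-valued per-layer scaling factor."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        scale = self.weight.abs().mean()          # keeps the output magnitude roughly calibrated
        return nn.functional.linear(x, w_bin * scale)

# toy usage: gradients flow to the full-precision weights via the STE
layer = BinaryLinear(16, 4)
out = layer(torch.randn(8, 16))
out.sum().backward()
```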
|
https://proceedings.mlr.press/v202/qiu23a.html
|
https://proceedings.mlr.press/v202/qiu23a/qiu23a.pdf
|
https://openreview.net/forum?id=slM2r4bRD1
|
Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization
|
https://proceedings.mlr.press/v202/qiu23a.html
|
Zi-Hao Qiu, Quanqi Hu, Zhuoning Yuan, Denny Zhou, Lijun Zhang, Tianbao Yang
|
https://proceedings.mlr.press/v202/qiu23a.html
|
ICML 2023
|
In this paper, we aim to optimize a contrastive loss with individualized temperatures in a principled manner. The common practice of using a global temperature parameter $\tau$ ignores the fact that “not all semantics are created equal”, meaning that different anchor data may have different numbers of samples with similar semantics, especially when data exhibits long-tails. First, we propose a new robust contrastive loss inspired by distributionally robust optimization (DRO), providing us with an intuition about the effect of $\tau$ and a mechanism for automatic temperature individualization. Then, we propose an efficient stochastic algorithm for optimizing the robust contrastive loss with a provable convergence guarantee without using large mini-batch sizes. Theoretical and experimental results show that our algorithm automatically learns a suitable $\tau$ for each sample. Specifically, samples with frequent semantics use large temperatures to keep local semantic structures, while samples with rare semantics use small temperatures to induce more separable features. Our method not only outperforms prior strong baselines (e.g., SimCLR, CLIP) on unimodal and bimodal tasks with larger improvements on imbalanced data but also is less sensitive to hyper-parameters. To the best of our knowledge, this is the first methodical approach to optimizing a contrastive loss with individualized temperatures. Our proposed algorithms are implemented in the LibAUC library at https://libauc.org.
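The snippet below is a naive sketch of an InfoNCE-style loss with one learnable temperature per anchor; it conveys the idea of individualized temperatures but not the paper's DRO-derived objective or its provably convergent stochastic algorithm, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def per_sample_infonce(z1, z2, log_tau):
    """InfoNCE with an individualized temperature per anchor.

    z1, z2 : (n, d) L2-normalized embeddings of two views of the same n samples.
    log_tau: (n,) learnable per-sample log-temperatures.
    """
    n = z1.size(0)
    tau = log_tau.exp().clamp(min=1e-3).unsqueeze(1)   # (n, 1)
    sim = z1 @ z2.t()                                  # (n, n) cosine similarities
    logits = sim / tau                                 # each row scaled by its own temperature
    labels = torch.arange(n, device=z1.device)
    return F.cross_entropy(logits, labels)

# toy usage
n, d = 32, 128
z1 = F.normalize(torch.randn(n, d), dim=1)
z2 = F.normalize(z1 + 0.1 * torch.randn(n, d), dim=1)
log_tau = torch.zeros(n, requires_grad=True)
loss = per_sample_infonce(z1, z2, log_tau)
loss.backward()                                        # gradients reach the per-sample temperatures
```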
|
https://proceedings.mlr.press/v202/qiu23b.html
|
https://proceedings.mlr.press/v202/qiu23b/qiu23b.pdf
|
https://openreview.net/forum?id=WEHswL8qgO
|
Shortest Edit Path Crossover: A Theory-driven Solution to the Permutation Problem in Evolutionary Neural Architecture Search
|
https://proceedings.mlr.press/v202/qiu23b.html
|
Xin Qiu, Risto Miikkulainen
|
https://proceedings.mlr.press/v202/qiu23b.html
|
ICML 2023
|
Population-based search has recently emerged as a possible alternative to Reinforcement Learning (RL) for black-box neural architecture search (NAS). It performs well in practice even though it is not theoretically well understood. In particular, whereas traditional population-based search methods such as evolutionary algorithms (EAs) draw much power from crossover operations, it is difficult to take advantage of them in NAS. The main obstacle is believed to be the permutation problem: The mapping between genotype and phenotype in traditional graph representations is many-to-one, leading to a disruptive effect of standard crossover. This paper presents the first theoretical analysis of the behaviors of mutation, crossover and RL in black-box NAS, and proposes a new crossover operator based on the shortest edit path (SEP) in graph space. The SEP crossover is shown theoretically to overcome the permutation problem, and as a result, have a better expected improvement compared to mutation, standard crossover and RL. Further, it empirically outperforms these other methods on state-of-the-art NAS benchmarks. The SEP crossover therefore allows taking full advantage of population-based search in NAS, and the underlying theory can serve as a foundation for deeper understanding of black-box NAS methods in general.
|
https://proceedings.mlr.press/v202/qiu23c.html
|
https://proceedings.mlr.press/v202/qiu23c/qiu23c.pdf
|
https://openreview.net/forum?id=s5F1a6s1HS
|
Simple and Fast Group Robustness by Automatic Feature Reweighting
|
https://proceedings.mlr.press/v202/qiu23c.html
|
Shikai Qiu, Andres Potapczynski, Pavel Izmailov, Andrew Gordon Wilson
|
https://proceedings.mlr.press/v202/qiu23c.html
|
ICML 2023
|
A major challenge to out-of-distribution generalization is reliance on spurious features — patterns that are predictive of the class label in the training data distribution, but not causally related to the target. Standard methods for reducing the reliance on spurious features typically assume that we know what the spurious feature is, which is rarely true in the real world. Methods that attempt to alleviate this limitation are complex, hard to tune, and lead to a significant computational overhead compared to standard training. In this paper, we propose Automatic Feature Reweighting (AFR), an extremely simple and fast method for updating the model to reduce the reliance on spurious features. AFR retrains the last layer of a standard ERM-trained base model with a weighted loss that emphasizes the examples where the ERM model predicts poorly, automatically upweighting the minority group without group labels. With this simple procedure, we improve upon the best reported results among competing methods trained without spurious attributes on several vision and natural language classification benchmarks, using only a fraction of their compute.
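A minimal sketch of the last-layer retraining idea follows; the particular weighting (exponential in the ERM model's correct-class probability) and all names are assumptions for illustration, not necessarily the exact scheme used by AFR.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def afr_style_last_layer(features, labels, erm_head, beta=5.0, steps=500, lr=1e-2):
    """Retrain only the classification head with a weighted loss.

    features : (n, d) frozen penultimate-layer features from the ERM-trained model.
    erm_head : the original linear head, used to score how well ERM predicts each example.
    The weighting exp(-beta * p_correct) below is one plausible choice (an assumption),
    upweighting examples the ERM model predicts poorly without any group labels.
    """
    with torch.no_grad():
        p_correct = F.softmax(erm_head(features), dim=1).gather(1, labels[:, None]).squeeze(1)
        w = torch.exp(-beta * p_correct)
        w = w / w.sum()                        # poorly-predicted examples get larger weight
    head = nn.Linear(features.size(1), erm_head.out_features)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(steps):
        loss = (w * F.cross_entropy(head(features), labels, reduction="none")).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head

# toy usage on random features
feats, ys = torch.randn(256, 32), torch.randint(0, 2, (256,))
erm_head = nn.Linear(32, 2)
new_head = afr_style_last_layer(feats, ys, erm_head)
```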
|
https://proceedings.mlr.press/v202/quinzan23a.html
|
https://proceedings.mlr.press/v202/quinzan23a/quinzan23a.pdf
|
https://openreview.net/forum?id=JrSWhb7dzp
|
DRCFS: Doubly Robust Causal Feature Selection
|
https://proceedings.mlr.press/v202/quinzan23a.html
|
Francesco Quinzan, Ashkan Soleymani, Patrick Jaillet, Cristian R. Rojas, Stefan Bauer
|
https://proceedings.mlr.press/v202/quinzan23a.html
|
ICML 2023
|
Knowing the features of a complex system that are highly relevant to a particular target variable is of fundamental interest in many areas of science. Existing approaches are often limited to linear settings, sometimes lack guarantees, and in most cases, do not scale to the problem at hand, in particular to images. We propose DRCFS, a doubly robust feature selection method for identifying the causal features even in nonlinear and high dimensional settings. We provide theoretical guarantees, illustrate necessary conditions for our assumptions, and perform extensive experiments across a wide range of simulated and semi-synthetic datasets. DRCFS significantly outperforms existing state-of-the-art methods, selecting robust features even in challenging highly non-linear and high-dimensional problems.
|
https://proceedings.mlr.press/v202/radford23a.html
|
https://proceedings.mlr.press/v202/radford23a/radford23a.pdf
|
https://openreview.net/forum?id=Xr12kpEP3G
|
Robust Speech Recognition via Large-Scale Weak Supervision
|
https://proceedings.mlr.press/v202/radford23a.html
|
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine Mcleavey, Ilya Sutskever
|
https://proceedings.mlr.press/v202/radford23a.html
|
ICML 2023
|
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results without the need for any dataset specific fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
|
https://proceedings.mlr.press/v202/raffel23a.html
|
https://proceedings.mlr.press/v202/raffel23a/raffel23a.pdf
|
https://openreview.net/forum?id=mlsNi1IDjY
|
Shiftable Context: Addressing Training-Inference Context Mismatch in Simultaneous Speech Translation
|
https://proceedings.mlr.press/v202/raffel23a.html
|
Matthew Raffel, Drew Penney, Lizhong Chen
|
https://proceedings.mlr.press/v202/raffel23a.html
|
ICML 2023
|
Transformer models using segment-based processing have been an effective architecture for simultaneous speech translation. However, such models create a context mismatch between training and inference environments, hindering potential translation accuracy. We solve this issue by proposing Shiftable Context, a simple yet effective scheme to ensure that consistent segment and context sizes are maintained throughout training and inference, even with the presence of partially filled segments due to the streaming nature of simultaneous translation. Shiftable Context is also broadly applicable to segment-based transformers for streaming tasks. Our experiments on the English-German, English-French, and English-Spanish language pairs from the MUST-C dataset demonstrate that when applied to the Augmented Memory Transformer, a state-of-the-art model for simultaneous speech translation, the proposed scheme achieves an average increase of 2.09, 1.83, and 1.95 BLEU scores across each wait-k value for the three language pairs, respectively, with a minimal impact on computation-aware Average Lagging.
|
https://proceedings.mlr.press/v202/raghu23a.html
|
https://proceedings.mlr.press/v202/raghu23a/raghu23a.pdf
|
https://openreview.net/forum?id=WhRLdsDTBD
|
Sequential Multi-Dimensional Self-Supervised Learning for Clinical Time Series
|
https://proceedings.mlr.press/v202/raghu23a.html
|
Aniruddh Raghu, Payal Chandak, Ridwan Alam, John Guttag, Collin Stultz
|
https://proceedings.mlr.press/v202/raghu23a.html
|
ICML 2023
|
Self-supervised learning (SSL) for clinical time series data has received significant attention in recent literature, since these data are highly rich and provide important information about a patient’s physiological state. However, most existing SSL methods for clinical time series are limited in that they are designed for unimodal time series, such as a sequence of structured features (e.g., lab values and vitals signs) or an individual high-dimensional physiological signal (e.g., an electrocardiogram). These existing methods cannot be readily extended to model time series that exhibit multimodality, with structured features and high-dimensional data being recorded at each timestep in the sequence. In this work, we address this gap and propose a new SSL method — Sequential Multi-Dimensional SSL — where a SSL loss is applied both at the level of the entire sequence and at the level of the individual high-dimensional data points in the sequence in order to better capture information at both scales. Our strategy is agnostic to the specific form of loss function used at each level – it can be contrastive, as in SimCLR, or non-contrastive, as in VICReg. We evaluate our method on two real-world clinical datasets, where the time series contains sequences of (1) high-frequency electrocardiograms and (2) structured data from lab values and vitals signs. Our experimental results indicate that pre-training with our method and then fine-tuning on downstream tasks improves performance over baselines on both datasets, and in several settings, can lead to improvements across different self-supervised loss functions.
|
https://proceedings.mlr.press/v202/rahbar23a.html
|
https://proceedings.mlr.press/v202/rahbar23a/rahbar23a.pdf
|
https://openreview.net/forum?id=mt4j86X6Py
|
Recovery Bounds on Class-Based Optimal Transport: A Sum-of-Norms Regularization Framework
|
https://proceedings.mlr.press/v202/rahbar23a.html
|
Arman Rahbar, Ashkan Panahi, Morteza Haghir Chehreghani, Devdatt Dubhashi, Hamid Krim
|
https://proceedings.mlr.press/v202/rahbar23a.html
|
ICML 2023
|
We develop a novel theoretical framework for understanding Optimal Transport (OT) schemes respecting a class structure. For this purpose, we propose a convex OT program with a sum-of-norms regularization term, which provably recovers the underlying class structure under geometric assumptions. Furthermore, we derive an accelerated proximal algorithm with a closed-form projection and proximal operator scheme, thereby affording a more scalable algorithm for computing optimal transport plans. We provide a novel argument for the uniqueness of the optimum even in the absence of strong convexity. Our experiments show that the new regularizer not only results in a better preservation of the class structure in the data but also yields additional robustness to the data geometry, compared to previous regularizers.
|
https://proceedings.mlr.press/v202/raj23a.html
|
https://proceedings.mlr.press/v202/raj23a/raj23a.pdf
|
https://openreview.net/forum?id=9ZNtKRQUGR
|
Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions
|
https://proceedings.mlr.press/v202/raj23a.html
|
Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban, Umut Simsekli
|
https://proceedings.mlr.press/v202/raj23a.html
|
ICML 2023
|
Heavy-tail phenomena in stochastic gradient descent (SGD) have been reported in several empirical studies. Experimental evidence in previous works suggests a strong interplay between the heaviness of the tails and generalization behavior of SGD. To address these empirical phenomena theoretically, several works have made strong topological and statistical assumptions to link the generalization error to heavy tails. Very recently, new generalization bounds have been proven, indicating a non-monotonic relationship between the generalization error and heavy tails, which is more pertinent to the reported empirical observations. While these bounds do not require additional topological assumptions given that SGD can be modeled using a heavy-tailed stochastic differential equation (SDE), they can only apply to simple quadratic problems. In this paper, we build on this line of research and develop generalization bounds for a more general class of objective functions, which includes non-convex functions as well. Our approach is based on developing Wasserstein stability bounds for heavy-tailed SDEs and their discretizations, which we then convert to generalization bounds. Our results do not require any nontrivial assumptions; yet, they shed more light on the empirical observations, thanks to the generality of the loss functions.
|
https://proceedings.mlr.press/v202/rajeswar23a.html
|
https://proceedings.mlr.press/v202/rajeswar23a/rajeswar23a.pdf
|
https://openreview.net/forum?id=eSpbTG0TZN
|
Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels
|
https://proceedings.mlr.press/v202/rajeswar23a.html
|
Sai Rajeswar, Pietro Mazzaglia, Tim Verbelen, Alexandre Piché, Bart Dhoedt, Aaron Courville, Alexandre Lacoste
|
https://proceedings.mlr.press/v202/rajeswar23a.html
|
ICML 2023
|
Controlling artificial agents from visual sensory data is an arduous task. Reinforcement learning (RL) algorithms can succeed but require large amounts of interactions between the agent and the environment. To alleviate the issue, unsupervised RL proposes to employ self-supervised interaction and learning, for adapting faster to future tasks. Yet, as shown in the Unsupervised RL Benchmark (URLB; Laskin et al. 2021), whether current unsupervised strategies can improve generalization capabilities is still unclear, especially in visual control settings. In this work, we study the URLB and propose a new method to solve it, using unsupervised model-based RL, for pre-training the agent, and a task-aware fine-tuning strategy combined with a new proposed hybrid planner, Dyna-MPC, to adapt the agent for downstream tasks. On URLB, our method obtains 93.59% overall normalized performance, surpassing previous baselines by a staggering margin. The approach is evaluated through a large-scale empirical study, which we use to validate our design choices and analyze our models. We also show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation. Project website: https://masteringurlb.github.io/
|
https://proceedings.mlr.press/v202/ramakrishnan23a.html
|
https://proceedings.mlr.press/v202/ramakrishnan23a/ramakrishnan23a.pdf
|
https://openreview.net/forum?id=vOcOzRWpvm
|
SpotEM: Efficient Video Search for Episodic Memory
|
https://proceedings.mlr.press/v202/ramakrishnan23a.html
|
Santhosh Kumar Ramakrishnan, Ziad Al-Halah, Kristen Grauman
|
https://proceedings.mlr.press/v202/ramakrishnan23a.html
|
ICML 2023
|
The goal in episodic memory (EM) is to search a long egocentric video to answer a natural language query (e.g., “where did I leave my purse?”). Existing EM methods exhaustively extract expensive fixed-length clip features to look everywhere in the video for the answer, which is infeasible for long wearable-camera videos that span hours or even days. We propose SpotEM, an approach to achieve efficiency for a given EM method while maintaining good accuracy. SpotEM consists of three key ideas: 1) a novel clip selector that learns to identify promising video regions to search conditioned on the language query; 2) a set of low-cost semantic indexing features that capture the context of rooms, objects, and interactions that suggest where to look; and 3) distillation losses that address the optimization issues arising from end-to-end joint training of the clip selector and EM model. Our experiments on 200+ hours of video from the Ego4D EM Natural Language Queries benchmark and three different EM models demonstrate the effectiveness of our approach: computing only 10% – 25% of the clip features, we preserve 84% – 97% of the original EM model’s accuracy. Project page: https://vision.cs.utexas.edu/projects/spotem
|
https://proceedings.mlr.press/v202/ramasinghe23a.html
|
https://proceedings.mlr.press/v202/ramasinghe23a/ramasinghe23a.pdf
|
https://openreview.net/forum?id=FLhE8qzOmo
|
How much does Initialization Affect Generalization?
|
https://proceedings.mlr.press/v202/ramasinghe23a.html
|
Sameera Ramasinghe, Lachlan Ewen Macdonald, Moshiur Farazi, Hemanth Saratchandran, Simon Lucey
|
https://proceedings.mlr.press/v202/ramasinghe23a.html
|
ICML 2023
|
Characterizing the remarkable generalization properties of over-parameterized neural networks remains an open problem. A growing body of recent literature shows that the bias of stochastic gradient descent (SGD) and architecture choice implicitly leads to better generalization. In this paper, we show on the contrary that, independently of architecture, SGD can itself be the cause of poor generalization if one does not ensure good initialization. Specifically, we prove that any differentiably parameterized model, trained under gradient flow, obeys a weak spectral bias law which states that sufficiently high frequencies train arbitrarily slowly. This implies that very high frequencies present at initialization will remain after training, and hamper generalization. Further, we empirically test the developed theoretical insights using practical, deep networks. Finally, we contrast our framework with that supplied by the flat-minima conjecture and show that Fourier analysis grants a more reliable framework for understanding the generalization of neural networks.
|
https://proceedings.mlr.press/v202/rame23a.html
|
https://proceedings.mlr.press/v202/rame23a/rame23a.pdf
|
https://openreview.net/forum?id=EnW6Fsp1oP
|
Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization
|
https://proceedings.mlr.press/v202/rame23a.html
|
Alexandre Rame, Kartik Ahuja, Jianyu Zhang, Matthieu Cord, Leon Bottou, David Lopez-Paz
|
https://proceedings.mlr.press/v202/rame23a.html
|
ICML 2023
|
Foundation models are redefining how AI systems are built. Practitioners now follow a standard procedure to build their machine learning solutions: from a pre-trained foundation model, they fine-tune the weights on the target task of interest. So, the Internet is swarmed by a handful of foundation models fine-tuned on many diverse tasks: these individual fine-tunings exist in isolation without benefiting from each other. In our opinion, this is a missed opportunity, as these specialized models contain rich and diverse features. In this paper, we thus propose model ratatouille, a new strategy to recycle the multiple fine-tunings of the same foundation model on diverse auxiliary tasks. Specifically, we repurpose these auxiliary weights as initializations for multiple parallel fine-tunings on the target task; then, we average all fine-tuned weights to obtain the final model. This recycling strategy aims at maximizing the diversity in weights by leveraging the diversity in auxiliary tasks. Empirically, it improves the state of the art on the reference DomainBed benchmark for out-of-distribution generalization. Looking forward, this work contributes to the emerging paradigm of updatable machine learning where, akin to open-source software development, the community collaborates to reliably update machine learning models.
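The final averaging step described above can be sketched as plain parameter averaging over models that share an architecture and were fine-tuned on the target task from the recycled initializations; this snippet shows only that last ingredient of the recipe, with hypothetical toy models.

```python
import copy
import torch
import torch.nn as nn

def average_state_dicts(state_dicts):
    """Uniformly average the parameters (and buffers) of models with identical architectures."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# toy usage: three "parallel fine-tunings" of the same tiny model, merged into one
models = [nn.Linear(8, 2) for _ in range(3)]
final = nn.Linear(8, 2)
final.load_state_dict(average_state_dicts([m.state_dict() for m in models]))
```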
|
https://proceedings.mlr.press/v202/ramesh23a.html
|
https://proceedings.mlr.press/v202/ramesh23a/ramesh23a.pdf
|
https://openreview.net/forum?id=XMH3N8rteD
|
A Picture of the Space of Typical Learnable Tasks
|
https://proceedings.mlr.press/v202/ramesh23a.html
|
Rahul Ramesh, Jialin Mao, Itay Griniasty, Rubing Yang, Han Kheng Teoh, Mark Transtrum, James Sethna, Pratik Chaudhari
|
https://proceedings.mlr.press/v202/ramesh23a.html
|
ICML 2023
|
We develop information geometric techniques to understand the representations learned by deep networks when they are trained on different tasks using supervised, meta-, semi-supervised and contrastive learning. We shed light on the following phenomena that relate to the structure of the space of tasks: (1) the manifold of probabilistic models trained on different tasks using different representation learning methods is effectively low-dimensional; (2) supervised learning on one task results in a surprising amount of progress even on seemingly dissimilar tasks; progress on other tasks is larger if the training task has diverse classes; (3) the structure of the space of tasks indicated by our analysis is consistent with parts of the Wordnet phylogenetic tree; (4) episodic meta-learning algorithms and supervised learning traverse different trajectories during training but they fit similar models eventually; (5) contrastive and semi-supervised learning methods traverse trajectories similar to those of supervised learning. We use classification tasks constructed from the CIFAR-10 and Imagenet datasets to study these phenomena. Code is available at https://github.com/grasp-lyrl/picture_of_space_of_tasks.
|
https://proceedings.mlr.press/v202/ran23a.html
|
https://proceedings.mlr.press/v202/ran23a/ran23a.pdf
|
https://openreview.net/forum?id=0vQVC7WCVq
|
Policy Regularization with Dataset Constraint for Offline Reinforcement Learning
|
https://proceedings.mlr.press/v202/ran23a.html
|
Yuhang Ran, Yi-Chen Li, Fuxiang Zhang, Zongzhang Zhang, Yang Yu
|
https://proceedings.mlr.press/v202/ran23a.html
|
ICML 2023
|
We consider the problem of learning the best possible policy from a fixed dataset, known as offline Reinforcement Learning (RL). A common class of existing offline RL methods is policy regularization, which typically constrains the learned policy by the distribution or support of the behavior policy. However, distribution and support constraints are overly conservative since they both force the policy to choose similar actions as the behavior policy when considering particular states. This limits the learned policy’s performance, especially when the behavior policy is sub-optimal. In this paper, we find that regularizing the policy towards the nearest state-action pair can be more effective and thus propose Policy Regularization with Dataset Constraint (PRDC). When updating the policy in a given state, PRDC searches the entire dataset for the nearest state-action sample and then restricts the policy with the action of this sample. Unlike previous works, PRDC can guide the policy with proper behaviors from the dataset, allowing it to choose actions that do not appear in the dataset along with the given state. It is a softer constraint but still keeps enough conservatism from out-of-distribution actions. Empirical evidence and theoretical analysis show that PRDC can alleviate offline RL’s fundamentally challenging value overestimation issue with a bounded performance gap. Moreover, on a set of locomotion and navigation tasks, PRDC achieves state-of-the-art performance compared with existing methods. Code is available at https://github.com/LAMDA-RL/PRDC
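A conceptual sketch of the dataset-constraint regularizer is given below: the policy's action is pulled toward the action of the nearest (state, action) pair in the dataset, searched in a state-weighted joint space. The hyper-parameters, the brute-force search (a KD-tree or similar would be used in practice), and all names are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def prdc_style_regularizer(actor, states, dataset_s, dataset_a, beta=2.0):
    """Penalize the actor's action by its distance to the action of the nearest
    dataset (state, action) pair, searched in the concatenated (beta*s, a) space."""
    pi_a = actor(states)                                        # (b, act_dim)
    query = torch.cat([beta * states, pi_a], dim=1)             # (b, s_dim + act_dim)
    keys = torch.cat([beta * dataset_s, dataset_a], dim=1)      # (N, s_dim + act_dim)
    dists = torch.cdist(query.detach(), keys)                   # brute-force nearest neighbour
    nn_a = dataset_a[dists.argmin(dim=1)]                       # actions of the nearest pairs
    return ((pi_a - nn_a) ** 2).mean()

# toy usage: this term would be added to the usual policy-improvement loss
actor = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh())
s_batch = torch.randn(32, 4)
ds_s, ds_a = torch.randn(5000, 4), torch.rand(5000, 2) * 2 - 1
reg = prdc_style_regularizer(actor, s_batch, ds_s, ds_a)
reg.backward()
```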
|
https://proceedings.mlr.press/v202/ran23b.html
|
https://proceedings.mlr.press/v202/ran23b/ran23b.pdf
|
https://openreview.net/forum?id=vPCKyca1s7
|
SpENCNN: Orchestrating Encoding and Sparsity for Fast Homomorphically Encrypted Neural Network Inference
|
https://proceedings.mlr.press/v202/ran23b.html
|
Ran Ran, Xinwei Luo, Wei Wang, Tao Liu, Gang Quan, Xiaolin Xu, Caiwen Ding, Wujie Wen
|
https://proceedings.mlr.press/v202/ran23b.html
|
ICML 2023
|
Homomorphic Encryption (HE) is a promising technology to protect clients’ data privacy for Machine Learning as a Service (MLaaS) on public clouds. However, HE operations can be orders of magnitude slower than their counterparts for plaintexts and thus result in prohibitively high inference latency, seriously hindering the practicality of HE. In this paper, we propose SpENCNN, a fast HE-based neural network (NN) inference framework built upon the co-design of HE operation-aware model sparsity and single-instruction-multiple-data (SIMD)-friendly data packing, to improve NN inference latency. In particular, we first develop an encryption-aware HE-group convolution technique that can partition channels among different groups based on the data size and ciphertext size, and then encode them into the same ciphertext by novel group-interleaved encoding, so as to dramatically reduce the number of bottlenecked operations in HE convolution. We further tailor a HE-friendly sub-block weight pruning to reduce the costly HE-based convolution operation. Our experiments show that SpENCNN can achieve overall speedups of 8.37$\times$, 12.11$\times$, 19.26$\times$, and 1.87$\times$ for LeNet, VGG-5, HEFNet, and ResNet-20 respectively, with negligible accuracy loss. Our code is publicly available at https://github.com/ranran0523/SPECNN.
|
https://proceedings.mlr.press/v202/rangamani23a.html
|
https://proceedings.mlr.press/v202/rangamani23a/rangamani23a.pdf
|
https://openreview.net/forum?id=XbggSPNB9W
|
Feature learning in deep classifiers through Intermediate Neural Collapse
|
https://proceedings.mlr.press/v202/rangamani23a.html
|
Akshay Rangamani, Marius Lindegaard, Tomer Galanti, Tomaso A Poggio
|
https://proceedings.mlr.press/v202/rangamani23a.html
|
ICML 2023
|
In this paper, we conduct an empirical study of the feature learning process in deep classifiers. Recent research has identified a training phenomenon called Neural Collapse (NC), in which the top-layer feature embeddings of samples from the same class tend to concentrate around their means, and the top layer’s weights align with those features. Our study aims to investigate if these properties extend to intermediate layers. We empirically study the evolution of the covariance and mean of representations across different layers and show that as we move deeper into a trained neural network, the within-class covariance decreases relative to the between-class covariance. Additionally, we find that in the top layers, where the between-class covariance is dominant, the subspace spanned by the class means aligns with the subspace spanned by the most significant singular vector components of the weight matrix in the corresponding layer. Finally, we discuss the relationship between NC and Associative Memories (Willshaw et al., 1969).
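A simple probe consistent with the quantity discussed above is the ratio of within-class to between-class covariance traces of a layer's features, sketched below on synthetic features; the paper's analysis also covers subspace alignment with the layer weights, which this snippet does not attempt, and the feature arrays here are purely illustrative.

```python
import numpy as np

def within_between_ratio(features, labels):
    """tr(Sigma_W) / tr(Sigma_B) for one layer's features: a rough scalar probe of
    how collapsed the class representations are (smaller = more collapsed)."""
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        sw += ((fc - mu_c) ** 2).sum()
        sb += len(fc) * ((mu_c - global_mean) ** 2).sum()
    return sw / sb

# toy usage: features extracted at a shallow vs. a deep layer of a trained classifier
rng = np.random.default_rng(0)
shallow = np.concatenate([rng.normal(c * 1.0, 2.0, size=(100, 16)) for c in range(3)])
deep = np.concatenate([rng.normal(c * 5.0, 0.3, size=(100, 16)) for c in range(3)])
labels = np.repeat(np.arange(3), 100)
print(within_between_ratio(shallow, labels), ">", within_between_ratio(deep, labels))
```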
|
https://proceedings.mlr.press/v202/rathnam23a.html
|
https://proceedings.mlr.press/v202/rathnam23a/rathnam23a.pdf
|
https://openreview.net/forum?id=BEoppYS3SC
|
The Unintended Consequences of Discount Regularization: Improving Regularization in Certainty Equivalence Reinforcement Learning
|
https://proceedings.mlr.press/v202/rathnam23a.html
|
Sarah Rathnam, Sonali Parbhoo, Weiwei Pan, Susan Murphy, Finale Doshi-Velez
|
https://proceedings.mlr.press/v202/rathnam23a.html
|
ICML 2023
|
Discount regularization, using a shorter planning horizon when calculating the optimal policy, is a popular choice to restrict planning to a less complex set of policies when estimating an MDP from sparse or noisy data (Jiang et al., 2015). It is commonly understood that discount regularization functions by de-emphasizing or ignoring delayed effects. In this paper, we reveal an alternate view of discount regularization that exposes unintended consequences. We demonstrate that planning under a lower discount factor produces an identical optimal policy to planning using any prior on the transition matrix that has the same distribution for all states and actions. In fact, it functions like a prior with stronger regularization on state-action pairs with more transition data. This leads to poor performance when the transition matrix is estimated from data sets with uneven amounts of data across state-action pairs. Our equivalence theorem leads to an explicit formula to set regularization parameters locally for individual state-action pairs rather than globally. We demonstrate the failures of discount regularization and how we remedy them using our state-action-specific method across simple empirical examples as well as a medical cancer simulator.
|
https://proceedings.mlr.press/v202/ray-chowdhury23a.html
|
https://proceedings.mlr.press/v202/ray-chowdhury23a/ray-chowdhury23a.pdf
|
https://openreview.net/forum?id=JK0hiktaFV
|
Beam Tree Recursive Cells
|
https://proceedings.mlr.press/v202/ray-chowdhury23a.html
|
Jishnu Ray Chowdhury, Cornelia Caragea
|
https://proceedings.mlr.press/v202/ray-chowdhury23a.html
|
ICML 2023
|
We propose Beam Tree Recursive Cell (BT-Cell) - a backpropagation-friendly framework to extend Recursive Neural Networks (RvNNs) with beam search for latent structure induction. We further extend this framework by proposing a relaxation of the hard top-$k$ operators in beam search for better propagation of gradient signals. We evaluate our proposed models in different out-of-distribution splits in both synthetic and realistic data. Our experiments show that BT-Cell achieves near-perfect performance on several challenging structure-sensitive synthetic tasks like ListOps and logical inference while maintaining comparable performance in realistic data against other RvNN-based models. Additionally, we identify a previously unknown failure case for neural models in generalization to unseen number of arguments in ListOps. The code is available at: https://github.com/JRC1995/BeamTreeRecursiveCells.
|
https://proceedings.mlr.press/v202/ray-chowdhury23b.html
|
https://proceedings.mlr.press/v202/ray-chowdhury23b/ray-chowdhury23b.pdf
|
https://openreview.net/forum?id=6clNOzWjgY
|
Monotonic Location Attention for Length Generalization
|
https://proceedings.mlr.press/v202/ray-chowdhury23b.html
|
Jishnu Ray Chowdhury, Cornelia Caragea
|
https://proceedings.mlr.press/v202/ray-chowdhury23b.html
|
ICML 2023
|
We explore different ways to utilize position-based cross-attention in seq2seq networks to enable length generalization in algorithmic tasks. We show that a simple approach of interpolating the original and reversed encoded representations combined with relative attention allows near-perfect length generalization for both forward and reverse lookup tasks or copy tasks that had been generally hard to tackle. We also devise harder diagnostic tasks where the relative distance of the ideal attention position varies with timestep. In such settings, the simple interpolation trick with relative attention is not sufficient. We introduce novel variants of location attention building on top of Dubois et al. (2020) to address the new diagnostic tasks. We also show the benefits of our approaches for length generalization in SCAN (Lake & Baroni, 2018) and CFQ (Keysers et al.,2020). Our code is available on GitHub.
|
https://proceedings.mlr.press/v202/razon23a.html
|
https://proceedings.mlr.press/v202/razon23a/razon23a.pdf
|
https://openreview.net/forum?id=s8LWZVJb0f
|
Automated Search for Conjectures on Mathematical Constants using Analysis of Integer Sequences
|
https://proceedings.mlr.press/v202/razon23a.html
|
Ofir Razon, Yoav Harris, Shahar Gottlieb, Dan Carmon, Ofir David, Ido Kaminer
|
https://proceedings.mlr.press/v202/razon23a.html
|
ICML 2023
|
The discovery of formulas involving mathematical constants such as $\pi$ and $e$ had a great impact on various fields of science and mathematics. However, such discoveries have remained scarce, relying on the intuition of mathematicians such as Ramanujan and Gauss. Recent efforts to automate such discoveries, such as the Ramanujan Machine project, relied solely on exhaustive search and remain limited by the space of options that can be covered. Here we propose a fundamentally different method to search for conjectures on mathematical constants: through analysis of integer sequences. We introduce the Enumerated Signed-continued-fraction Massey Approve (ESMA) algorithm, which builds on the Berlekamp-Massey algorithm to identify patterns in integer sequences that represent mathematical constants. ESMA has found various known formulas and new conjectures for $e, e^2, \tan(1)$, and ratios of values of Bessel functions, many of which provide faster numerical convergence than their corresponding simple continued fraction forms. We also characterize the space of constants that ESMA can catch and quantify its algorithmic advantage in certain scenarios. Altogether, this work continues the development toward algorithm-augmented mathematical intuition, to help accelerate mathematical research.
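The kind of integer-sequence regularity ESMA hunts for can be illustrated with the simple continued fraction of $e$, whose coefficients follow the well-known pattern [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]; the snippet below only reproduces this classical example and does not implement ESMA or its Berlekamp-Massey machinery.

```python
from fractions import Fraction
from math import factorial

# High-precision rational approximation of e = sum over k of 1/k!
e_approx = sum(Fraction(1, factorial(k)) for k in range(40))

def continued_fraction(x, n_terms):
    """Simple continued fraction coefficients of a (rational approximation of a) constant."""
    coeffs = []
    for _ in range(n_terms):
        a = x.numerator // x.denominator
        coeffs.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return coeffs

# Prints the regular pattern [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...]
print(continued_fraction(e_approx, 16))
```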
|
https://proceedings.mlr.press/v202/refinetti23a.html
|
https://proceedings.mlr.press/v202/refinetti23a/refinetti23a.pdf
|
https://openreview.net/forum?id=CPKMwyiyDv
|
Neural networks trained with SGD learn distributions of increasing complexity
|
https://proceedings.mlr.press/v202/refinetti23a.html
|
Maria Refinetti, Alessandro Ingrosso, Sebastian Goldt
|
https://proceedings.mlr.press/v202/refinetti23a.html
|
ICML 2023
|
The uncanny ability of over-parameterised neural networks to generalise well has been explained using various "simplicity biases". These theories postulate that neural networks avoid overfitting by first fitting simple, linear classifiers before learning more complex, non-linear functions. Meanwhile, data structure is also recognised as a key ingredient for good generalisation, yet its role in simplicity biases is not yet understood. Here, we show that neural networks trained using stochastic gradient descent initially classify their inputs using lower-order input statistics, like mean and covariance, and exploit higher-order statistics only later during training. We first demonstrate this distributional simplicity bias (DSB) in a solvable model of a single neuron trained on synthetic data. We then demonstrate DSB empirically in a range of deep convolutional networks and visual transformers trained on CIFAR10, and show that it even holds in networks pre-trained on ImageNet. We discuss the relation of DSB to other simplicity biases and consider its implications for the principle of Gaussian universality in learning.
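A small illustration of a "lower-order statistics" predictor, one that sees only each class's mean and covariance, is sketched below; it is the kind of simple classifier the abstract argues SGD-trained networks resemble early in training, not code from the paper, and the data is synthetic.

```python
import numpy as np

class GaussianMomentClassifier:
    """Classifier that uses only each class's first two moments (mean and covariance)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            cov = np.cov(Xc, rowvar=False) + 1e-3 * np.eye(X.shape[1])  # small ridge for stability
            self.params_[c] = (Xc.mean(axis=0), np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, prec, logdet = self.params_[c]
            d = X - mu
            # Gaussian log-likelihood up to a constant: -0.5 * (Mahalanobis distance + log|cov|)
            scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, prec, d) + logdet))
        return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]

# toy usage on two synthetic Gaussian classes
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, (200, 5)), rng.normal(1.5, 1.0, (200, 5))])
y = np.repeat([0, 1], 200)
print((GaussianMomentClassifier().fit(X, y).predict(X) == y).mean())
```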
|
https://proceedings.mlr.press/v202/reid23a.html
|
https://proceedings.mlr.press/v202/reid23a/reid23a.pdf
|
https://openreview.net/forum?id=qw8zAw6mzJ
|
Simplex Random Features
|
https://proceedings.mlr.press/v202/reid23a.html
|
Isaac Reid, Krzysztof Marcin Choromanski, Valerii Likhosherstov, Adrian Weller
|
https://proceedings.mlr.press/v202/reid23a.html
|
ICML 2023
|
We present Simplex Random Features (SimRFs), a new random feature (RF) mechanism for unbiased approximation of the softmax and Gaussian kernels by geometrical correlation of random projection vectors. We prove that SimRFs provide the smallest possible mean square error (MSE) on unbiased estimates of these kernels among the class of weight-independent geometrically-coupled positive random feature (PRF) mechanisms, substantially outperforming the previously most accurate Orthogonal Random Features (ORFs) at no observable extra cost. We present a more computationally expensive SimRFs+ variant, which we prove is asymptotically optimal in the broader family of weight-dependent geometrical coupling schemes (which permit correlations between random vector directions and norms). In extensive empirical studies, we show consistent gains provided by SimRFs in settings including pointwise kernel estimation, nonparametric classification and scalable Transformers.
|