Fields per record: abs, Download PDF, OpenReview, title, url, authors, detail_url, tags, abstract.
https://proceedings.mlr.press/v202/shih23a.html
https://proceedings.mlr.press/v202/shih23a/shih23a.pdf
https://openreview.net/forum?id=vSqzYxbITL
Long Horizon Temperature Scaling
https://proceedings.mlr.press/v202/shih23a.html
Andy Shih, Dorsa Sadigh, Stefano Ermon
https://proceedings.mlr.press/v202/shih23a.html
ICML 2023
Temperature scaling is a popular technique for tuning the sharpness of a model distribution. It is used extensively for sampling likely generations and calibrating model uncertainty, and even features as a controllable parameter in many deployed large language models. However, autoregressive models rely on myopic temperature scaling that greedily optimizes the next token. To address this, we propose Long Horizon Temperature Scaling (LHTS), a novel approach for sampling from temperature-scaled joint distributions. LHTS is compatible with all likelihood-based models and optimizes for the long-horizon likelihood of samples. We derive a temperature-dependent LHTS objective and show that fine-tuning a model on a range of temperatures produces a single model capable of generation with a controllable long-horizon temperature parameter. We experiment with LHTS on image diffusion models and character/language autoregressive models, demonstrating its advantages over myopic temperature scaling in likelihood and sample quality, and improving accuracy on a multiple-choice analogy task by $10$%.
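The myopic baseline contrasted here is ordinary per-token temperature scaling. A minimal, hypothetical sketch of that baseline (NumPy; names and values are illustrative, not from the paper):

```python
# Standard (myopic) next-token temperature scaling: only the next-token
# distribution is sharpened or flattened, not the joint over the whole sequence.
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float, rng=None) -> int:
    rng = rng or np.random.default_rng()
    scaled = logits / temperature           # T < 1 sharpens, T > 1 flattens
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Example: toy vocabulary of three tokens; low temperature favours the top logit.
token_id = sample_next_token(np.array([2.0, 1.0, 0.1]), temperature=0.5)
```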
https://proceedings.mlr.press/v202/shilton23a.html
https://proceedings.mlr.press/v202/shilton23a/shilton23a.pdf
https://openreview.net/forum?id=nCukQnbhp5
Gradient Descent in Neural Networks as Sequential Learning in Reproducing Kernel Banach Space
https://proceedings.mlr.press/v202/shilton23a.html
Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
https://proceedings.mlr.press/v202/shilton23a.html
ICML 2023
The study of Neural Tangent Kernels (NTKs) has provided much needed insight into convergence and generalization properties of neural networks in the over-parametrized (wide) limit by approximating the network using a first-order Taylor expansion with respect to its weights in the neighborhood of their initialization values. This allows neural network training to be analyzed from the perspective of reproducing kernel Hilbert spaces (RKHS), which is informative in the over-parametrized regime, but a poor approximation for narrower networks as the weights change more during training. Our goal is to extend beyond the limits of NTK toward a more general theory. We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights as an inner product of two feature maps, respectively from data and weight-step space, to feature space, allowing neural network training to be analyzed from the perspective of reproducing kernel Banach space (RKBS). We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning in RKBS. Using this, we present a novel bound on uniform convergence in which the iteration count and learning rate play a central role, giving new theoretical insight into neural network training.
https://proceedings.mlr.press/v202/shim23a.html
https://proceedings.mlr.press/v202/shim23a/shim23a.pdf
https://openreview.net/forum?id=IYmUVGlOwG
SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning
https://proceedings.mlr.press/v202/shim23a.html
Dongseok Shim, Seungjae Lee, H. Jin Kim
https://proceedings.mlr.press/v202/shim23a.html
ICML 2023
As previous representations for reinforcement learning cannot effectively incorporate a human-intuitive understanding of the 3D environment, they usually suffer from sub-optimal performance. In this paper, we present Semantic-aware Neural Radiance Fields for Reinforcement Learning (SNeRL), which jointly optimizes semantic-aware neural radiance fields (NeRF) with a convolutional encoder to learn 3D-aware neural implicit representations from multi-view images. We introduce 3D semantic and distilled feature fields in parallel to the RGB radiance fields in NeRF to learn semantic and object-centric representations for reinforcement learning. SNeRL outperforms not only previous pixel-based representations but also recent 3D-aware representations in both model-free and model-based reinforcement learning.
https://proceedings.mlr.press/v202/shin23a.html
https://proceedings.mlr.press/v202/shin23a/shin23a.pdf
https://openreview.net/forum?id=YIWtM3GdZc
A Closer Look at the Intervention Procedure of Concept Bottleneck Models
https://proceedings.mlr.press/v202/shin23a.html
Sungbin Shin, Yohan Jo, Sungsoo Ahn, Namhoon Lee
https://proceedings.mlr.press/v202/shin23a.html
ICML 2023
Concept bottleneck models (CBMs) are a class of interpretable neural network models that predict the target response of a given input based on its high-level concepts. Unlike the standard end-to-end models, CBMs enable domain experts to intervene on the predicted concepts and rectify any mistakes at test time, so that more accurate task predictions can be made at the end. While such intervenability provides a powerful avenue of control, many aspects of the intervention procedure remain rather unexplored. In this work, we develop various ways of selecting intervening concepts to improve the intervention effectiveness and conduct an array of in-depth analyses as to how they evolve under different circumstances. Specifically, we find that an informed intervention strategy can reduce the task error more than ten times compared to the current baseline under the same number of interventions in realistic settings, and yet, this can vary quite significantly across different intervention granularities. We verify our findings through comprehensive evaluations, not only on the standard real datasets, but also on synthetic datasets that we generate based on a set of different causal graphs. We further discover some major pitfalls of the current practices which, if not properly addressed, raise concerns about the reliability and fairness of the intervention procedure.
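As background for the intervention procedure studied above, a minimal sketch of a test-time concept intervention in a CBM (the networks and the choice of which concepts to correct are hypothetical placeholders):

```python
# A CBM predicts concepts first, then the label from those concepts. An
# intervention replaces selected predicted concepts with expert-provided values.
import torch

def predict_with_intervention(concept_net, label_net, x, expert_concepts, intervene_idx):
    concepts = concept_net(x)                                       # predicted concepts
    concepts[:, intervene_idx] = expert_concepts[:, intervene_idx]  # expert correction
    return label_net(concepts)                                      # downstream prediction
```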
https://proceedings.mlr.press/v202/shin23b.html
https://proceedings.mlr.press/v202/shin23b/shin23b.pdf
https://openreview.net/forum?id=xOWLUSQCE4
MetricGAN-OKD: Multi-Metric Optimization of MetricGAN via Online Knowledge Distillation for Speech Enhancement
https://proceedings.mlr.press/v202/shin23b.html
Wooseok Shin, Byung Hoon Lee, Jin Sob Kim, Hyun Joon Park, Sung Won Han
https://proceedings.mlr.press/v202/shin23b.html
ICML 2023
In speech enhancement, MetricGAN-based approaches reduce the discrepancy between the $L_p$ loss and evaluation metrics by utilizing a non-differentiable evaluation metric as the objective function. However, optimizing multiple metrics simultaneously remains challenging owing to the problem of confusing gradient directions. In this paper, we propose an effective multi-metric optimization method in MetricGAN via online knowledge distillation—MetricGAN-OKD. MetricGAN-OKD, which consists of multiple generators and target metrics, related by a one-to-one correspondence, enables generators to learn with respect to a single metric reliably while improving performance with respect to other metrics by mimicking other generators. Experimental results on speech enhancement and listening enhancement tasks reveal that the proposed method significantly improves performance in terms of multiple metrics compared to existing multi-metric optimization methods. Further, the good performance of MetricGAN-OKD is explained in terms of network generalizability and correlation between metrics.
https://proceedings.mlr.press/v202/shin23c.html
https://proceedings.mlr.press/v202/shin23c/shin23c.pdf
https://openreview.net/forum?id=OxkESnZnN2
Improved Learning-Augmented Algorithms for the Multi-Option Ski Rental Problem via Best-Possible Competitive Analysis
https://proceedings.mlr.press/v202/shin23c.html
Yongho Shin, Changyeol Lee, Gukryeol Lee, Hyung-Chan An
https://proceedings.mlr.press/v202/shin23c.html
ICML 2023
In this paper, we present improved learning-augmented algorithms for the multi-option ski rental problem. Learning-augmented algorithms take ML predictions as an added part of the input and incorporate these predictions in solving the given problem. Due to their unique strength that combines the power of ML predictions with rigorous performance guarantees, they have been extensively studied in the context of online optimization problems. Even though the ski rental problem is one of the canonical problems in the field of online optimization, only deterministic algorithms were previously known for multi-option ski rental, with or without learning augmentation. We present the first randomized learning-augmented algorithm for this problem, surpassing previous performance guarantees given by deterministic algorithms. Our learning-augmented algorithm is based on a new, provably best-possible randomized competitive algorithm for the problem. Our results are further complemented by lower bounds for deterministic and randomized algorithms, and computational experiments evaluating our algorithms’ performance improvements.
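For background, the classic two-option ski rental problem admits a simple deterministic break-even rule (rent until cumulative rent would reach the purchase price, then buy), which achieves a competitive ratio of $2 - 1/B$. This sketch illustrates the problem setting only, not the paper's multi-option or learning-augmented algorithms:

```python
# Break-even rule for two-option ski rental: rent for the first days, buy once
# the cumulative rent would match the purchase price (if the season continues).
def break_even_cost(num_ski_days: int, buy_price: int, rent_price: int = 1) -> int:
    cost = 0
    break_even_day = buy_price // rent_price
    for day in range(1, num_ski_days + 1):
        if day >= break_even_day:      # buying now caps total cost at ~2x optimal
            return cost + buy_price
        cost += rent_price             # otherwise keep renting
    return cost                        # season ended before the break-even day

# Example: buy_price=10 -> cost 19 for a long season (optimal 10), ratio 2 - 1/10.
```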
https://proceedings.mlr.press/v202/shin23d.html
https://proceedings.mlr.press/v202/shin23d/shin23d.pdf
https://openreview.net/forum?id=pmUI642icm
One-shot Imitation in a Non-Stationary Environment via Multi-Modal Skill
https://proceedings.mlr.press/v202/shin23d.html
Sangwoo Shin, Daehee Lee, Minjong Yoo, Woo Kyung Kim, Honguk Woo
https://proceedings.mlr.press/v202/shin23d.html
ICML 2023
One-shot imitation learns a new task from a single demonstration, yet adopting it for complex tasks with the high domain diversity inherent in a non-stationary environment remains challenging. To tackle the problem, we explore the compositionality of complex tasks, and present a novel skill-based imitation learning framework enabling one-shot imitation and zero-shot adaptation; from a single demonstration for a complex unseen task, a semantic skill sequence is inferred and then each skill in the sequence is converted into an action sequence optimized for environmental hidden dynamics that can vary over time. Specifically, we leverage a vision-language model to learn a semantic skill set from offline video datasets, where each skill is represented in the vision-language embedding space, and adapt meta-learning with dynamics inference to enable zero-shot skill adaptation. We evaluate our framework with various one-shot imitation scenarios for extended multi-stage Meta-world tasks, showing its superiority in learning complex tasks, generalizing to dynamics changes, and extending to different demonstration conditions and modalities, compared to other baselines.
https://proceedings.mlr.press/v202/shin23e.html
https://proceedings.mlr.press/v202/shin23e/shin23e.pdf
https://openreview.net/forum?id=EvGOdASdHi
Context Consistency Regularization for Label Sparsity in Time Series
https://proceedings.mlr.press/v202/shin23e.html
Yooju Shin, Susik Yoon, Hwanjun Song, Dongmin Park, Byunghyun Kim, Jae-Gil Lee, Byung Suk Lee
https://proceedings.mlr.press/v202/shin23e.html
ICML 2023
Labels are typically sparse in real-world time series due to the high annotation cost. Recently, consistency regularization techniques have been used to generate artificial labels from unlabeled augmented instances. To fully exploit the sequential characteristic of time series in consistency regularization, we propose a novel method of data augmentation called context-attached augmentation, which adds preceding and succeeding instances to a target instance to form its augmented instance. Unlike the existing augmentation techniques that modify a target instance by directly perturbing its attributes, the context-attached augmentation generates instances augmented with varying contexts while maintaining the target instance. Based on our augmentation method, we propose a context consistency regularization framework, which first adds different contexts to a target instance sampled from a given time series and then shares unitary reliability-based cross-window labels across the augmented instances to maintain consistency. We demonstrate that the proposed framework outperforms the existing state-of-the-art consistency regularization frameworks through comprehensive experiments on real-world time-series datasets.
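A minimal sketch of the context-attached augmentation described above, keeping the target window intact and attaching surrounding context of varying length (window sizes and array layout are illustrative):

```python
# Each augmented view prepends/appends raw context around the unmodified target
# instance, instead of perturbing the target's own attributes.
import numpy as np

def context_attached_views(series: np.ndarray, start: int, length: int,
                           context_sizes=(8, 16, 32)):
    views = []
    for c in context_sizes:
        left = max(0, start - c)
        right = min(len(series), start + length + c)
        views.append(series[left:right])   # target instance plus attached context
    return views
```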
https://proceedings.mlr.press/v202/shirahmad-gale-bagi23a.html
https://proceedings.mlr.press/v202/shirahmad-gale-bagi23a/shirahmad-gale-bagi23a.pdf
https://openreview.net/forum?id=Kw90j2pNSt
Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting
https://proceedings.mlr.press/v202/shirahmad-gale-bagi23a.html
Shayan Shirahmad Gale Bagi, Zahra Gharaee, Oliver Schulte, Mark Crowley
https://proceedings.mlr.press/v202/shirahmad-gale-bagi23a.html
ICML 2023
Conventional supervised learning methods typically assume i.i.d. samples and are found to be sensitive to out-of-distribution (OOD) data. We propose Generative Causal Representation Learning (GCRL) which leverages causality to facilitate knowledge transfer under distribution shifts. While we evaluate the effectiveness of our proposed method in human trajectory prediction models, GCRL can be applied to other domains as well. First, we propose a novel causal model that explains the generative factors in motion forecasting datasets using features that are common across all environments together with features that are specific to each environment. Selection variables are used to determine which parts of the model can be directly transferred to a new environment without fine-tuning. Second, we propose an end-to-end variational learning paradigm to learn the causal mechanisms that generate observations from features. GCRL is supported by strong theoretical results that imply identifiability of the causal model under certain assumptions. Experimental results on synthetic and real-world motion forecasting datasets show the robustness and effectiveness of our proposed method for knowledge transfer under zero-shot and low-shot settings by substantially outperforming the prior motion forecasting models on out-of-distribution prediction.
https://proceedings.mlr.press/v202/shirzad23a.html
https://proceedings.mlr.press/v202/shirzad23a/shirzad23a.pdf
https://openreview.net/forum?id=3Ge74dgjjU
Exphormer: Sparse Transformers for Graphs
https://proceedings.mlr.press/v202/shirzad23a.html
Hamed Shirzad, Ameya Velingker, Balaji Venkatachalam, Danica J. Sutherland, Ali Kemal Sinop
https://proceedings.mlr.press/v202/shirzad23a.html
ICML 2023
Graph transformers have emerged as a promising architecture for a variety of graph learning and representation tasks. Despite their successes, though, it remains challenging to scale graph transformers to large graphs while maintaining accuracy competitive with message-passing networks. In this paper, we introduce Exphormer, a framework for building powerful and scalable graph transformers. Exphormer consists of a sparse attention mechanism based on two mechanisms: virtual global nodes and expander graphs, whose mathematical characteristics, such as spectral expansion, pseudorandomness, and sparsity, yield graph transformers with complexity only linear in the size of the graph, while allowing us to prove desirable theoretical properties of the resulting transformer models. We show that incorporating Exphormer into the recently-proposed GraphGPS framework produces models with competitive empirical results on a wide variety of graph datasets, including state-of-the-art results on three datasets. We also show that Exphormer can scale to datasets on larger graphs than shown in previous graph transformer architectures.
https://proceedings.mlr.press/v202/shoshan23a.html
https://proceedings.mlr.press/v202/shoshan23a/shoshan23a.pdf
https://openreview.net/forum?id=QwFkJ3QVii
Synthetic data for model selection
https://proceedings.mlr.press/v202/shoshan23a.html
Alon Shoshan, Nadav Bhonker, Igor Kviatkovsky, Matan Fintz, Gerard Medioni
https://proceedings.mlr.press/v202/shoshan23a.html
ICML 2023
Recent breakthroughs in synthetic data generation approaches have made it possible to produce highly photorealistic images which are hardly distinguishable from real ones. Furthermore, synthetic generation pipelines have the potential to generate an unlimited number of images. The combination of high photorealism and scale turns synthetic data into a promising candidate for improving various machine learning (ML) pipelines. Thus far, a large body of research in this field has focused on using synthetic images for training, by augmenting and enlarging training data. In contrast to using synthetic data for training, in this work we explore whether synthetic data can be beneficial for model selection. Considering the task of image classification, we demonstrate that when data is scarce, synthetic data can be used to replace the held-out validation set, thus allowing training on a larger dataset. We also introduce a novel method to calibrate the synthetic error estimation to fit that of the real domain. We show that such calibration significantly improves the usefulness of synthetic data for model selection.
https://proceedings.mlr.press/v202/shou23a.html
https://proceedings.mlr.press/v202/shou23a/shou23a.pdf
https://openreview.net/forum?id=jFPdftHG4F
Probabilistic Attention-to-Influence Neural Models for Event Sequences
https://proceedings.mlr.press/v202/shou23a.html
Xiao Shou, Debarun Bhattacharjya, Tian Gao, Dharmashankar Subramanian, Oktie Hassanzadeh, Kristin Bennett
https://proceedings.mlr.press/v202/shou23a.html
ICML 2023
Discovering knowledge about which types of events influence others, using datasets of event sequences without time stamps, has several practical applications. While neural sequence models are able to capture complex and potentially long-range historical dependencies, they often lack the interpretability of simpler models for event sequence dynamics. We provide a novel neural framework in such a setting - a probabilistic attention-to-influence neural model - which not only captures complex instance-wise interactions between events but also learns influencers for each event type of interest. Given event sequence data and a prior distribution on type-wise influence, we efficiently learn an approximate posterior for type-wise influence by an attention-to-influence transformation using variational inference. Our method subsequently models the conditional likelihood of sequences by sampling the above posterior to focus attention on influencing event types. We motivate our general framework and show improved performance in experiments compared to existing baselines on synthetic data as well as real-world benchmarks, for tasks involving prediction and influencing set identification.
https://proceedings.mlr.press/v202/shridharan23a.html
https://proceedings.mlr.press/v202/shridharan23a/shridharan23a.pdf
https://openreview.net/forum?id=eXtJRDCGye
Causal Bounds in Quasi-Markovian Graphs
https://proceedings.mlr.press/v202/shridharan23a.html
Madhumitha Shridharan, Garud Iyengar
https://proceedings.mlr.press/v202/shridharan23a.html
ICML 2023
We consider the problem of computing bounds for causal queries on quasi-Markovian graphs with unobserved confounders and discrete valued observed variables, where identifiability does not hold. Existing non-parametric approaches for computing such bounds use multilinear programming (MP) formulations that are often intractable for existing solvers when the degree of the polynomial objective is greater than two. Hence, one often has to resort to either fast approximate heuristics which are not guaranteed to contain the true query value, or more accurate but computationally intensive procedures. We show how to construct an equivalent MP with a polynomial objective of lower degree. In particular, the degree of the objective in the new MP is equal to only the number of C-components that are intervened upon, instead of the total number of C-components. As a result, we can compute exact bounds for significantly larger causal inference problems as compared to what is possible using existing techniques. We also propose a very efficient Frank-Wolfe heuristic that produces very high quality bounds, and scales to large multilinear problems of higher degree.
https://proceedings.mlr.press/v202/shrivastava23a.html
https://proceedings.mlr.press/v202/shrivastava23a/shrivastava23a.pdf
https://openreview.net/forum?id=RX70NHEPE0
Repository-Level Prompt Generation for Large Language Models of Code
https://proceedings.mlr.press/v202/shrivastava23a.html
Disha Shrivastava, Hugo Larochelle, Daniel Tarlow
https://proceedings.mlr.press/v202/shrivastava23a.html
ICML 2023
With the success of large language models (LLMs) of code and their use as code assistants (e.g. Codex used in GitHub Copilot), techniques for introducing domain-specific knowledge in the prompt design process become important. In this work, we propose a framework called Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals. The prompt proposals take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files (e.g. imports, parent class files). Our technique doesn’t require any access to the weights of the LLM, making it applicable in cases where we only have black-box access to the LLM. We conduct experiments on the task of single-line code auto-completion using code repositories taken from Google Code archives. We demonstrate that an oracle constructed from our prompt proposals gives a relative improvement of 36% over Codex, showing the quality of these proposals. Further, we show that when we train a model to predict a prompt proposal, we can achieve significant performance gains over Codex and other baselines. We release our code, data, and trained checkpoints at: https://github.com/shrivastavadisha/repo_level_prompt_generation.
https://proceedings.mlr.press/v202/shu23a.html
https://proceedings.mlr.press/v202/shu23a/shu23a.pdf
https://openreview.net/forum?id=DTM83ccsMA
CLIPood: Generalizing CLIP to Out-of-Distributions
https://proceedings.mlr.press/v202/shu23a.html
Yang Shu, Xingzhuo Guo, Jialong Wu, Ximei Wang, Jianmin Wang, Mingsheng Long
https://proceedings.mlr.press/v202/shu23a.html
ICML 2023
Out-of-distribution (OOD) generalization, where the model needs to handle distribution shifts from training, is a major challenge of machine learning. Contrastive language-image pre-training (CLIP) models have shown impressive zero-shot ability, but the further adaptation of CLIP on downstream tasks undesirably degrades OOD performance. This paper aims at generalizing CLIP to out-of-distribution test data on downstream tasks. We propose CLIPood, a fine-tuning method that can adapt CLIP models to OOD situations where both domain shifts and open classes may occur on the unseen test data. To exploit the semantic relations between classes from the text modality, CLIPood introduces a new training objective, margin metric softmax (MMS), with class adaptive margins for fine-tuning. To incorporate both the pre-trained zero-shot model and the fine-tuned task-adaptive model, CLIPood leverages a new optimization strategy, Beta moving average (BMA), to maintain a temporal ensemble weighted by the Beta distribution. Experiments on diverse datasets with different OOD scenarios show that CLIPood consistently outperforms existing generalization techniques.
https://proceedings.mlr.press/v202/si23a.html
https://proceedings.mlr.press/v202/si23a/si23a.pdf
https://openreview.net/forum?id=Skrk3StS2g
Semi-Autoregressive Energy Flows: Exploring Likelihood-Free Training of Normalizing Flows
https://proceedings.mlr.press/v202/si23a.html
Phillip Si, Zeyi Chen, Subham Sekhar Sahoo, Yair Schiff, Volodymyr Kuleshov
https://proceedings.mlr.press/v202/si23a.html
ICML 2023
Training normalizing flow generative models can be challenging due to the need to calculate computationally expensive determinants of Jacobians. This paper studies the likelihood-free training of flows and proposes the energy objective, an alternative sample-based loss based on proper scoring rules. The energy objective is determinant-free and supports flexible model architectures that are not easily compatible with maximum likelihood training, including semi-autoregressive energy flows, a novel model family that interpolates between fully autoregressive and non-autoregressive models. Energy flows feature competitive sample quality, posterior inference, and generation speed relative to likelihood-based flows; this performance is decorrelated from the quality of log-likelihood estimates, which are generally very poor. Our findings question the use of maximum likelihood as an objective or a metric, and contribute to a scientific study of its role in generative modeling. Code is available at https://github.com/ps789/SAEF.
https://proceedings.mlr.press/v202/siahkoohi23a.html
https://proceedings.mlr.press/v202/siahkoohi23a/siahkoohi23a.pdf
https://openreview.net/forum?id=CEFFBle9E6
Unearthing InSights into Mars: Unsupervised Source Separation with Limited Data
https://proceedings.mlr.press/v202/siahkoohi23a.html
Ali Siahkoohi, Rudy Morel, Maarten V. De Hoop, Erwan Allys, Gregory Sainton, Taichi Kawamura
https://proceedings.mlr.press/v202/siahkoohi23a.html
ICML 2023
Source separation involves the ill-posed problem of retrieving a set of source signals that have been observed through a mixing operator. Solving this problem requires prior knowledge, which is commonly incorporated by imposing regularity conditions on the source signals, or implicitly learned through supervised or unsupervised methods from existing data. While data-driven methods have shown great promise in source separation, they often require large amounts of data, which rarely exists in planetary space missions. To address this challenge, we propose an unsupervised source separation scheme for domains with limited data access that involves solving an optimization problem in the wavelet scattering covariance representation space—an interpretable, low-dimensional representation of stationary processes. We present a real-data example in which we remove transient, thermally-induced microtilts—known as glitches—from data recorded by a seismometer during NASA’s InSight mission on Mars. Thanks to the wavelet scattering covariances’ ability to capture non-Gaussian properties of stochastic processes, we are able to separate glitches using only a few glitch-free data snippets.
https://proceedings.mlr.press/v202/sieber23a.html
https://proceedings.mlr.press/v202/sieber23a/sieber23a.pdf
https://openreview.net/forum?id=1VxLNhSVMp
Quantitative Universal Approximation Bounds for Deep Belief Networks
https://proceedings.mlr.press/v202/sieber23a.html
Julian Sieber, Johann Gehringer
https://proceedings.mlr.press/v202/sieber23a.html
ICML 2023
We show that deep belief networks with binary hidden units can approximate any multivariate probability density under very mild integrability requirements on the parental density of the visible nodes. The approximation is measured in the $L^q$-norm for $q\in[1,\infty]$ ($q=\infty$ corresponding to the supremum norm) and in Kullback-Leibler divergence. Furthermore, we establish sharp quantitative bounds on the approximation error in terms of the number of hidden units.
https://proceedings.mlr.press/v202/simchi-levi23a.html
https://proceedings.mlr.press/v202/simchi-levi23a/simchi-levi23a.pdf
https://openreview.net/forum?id=a86SXRxhtA
Pricing Experimental Design: Causal Effect, Expected Revenue and Tail Risk
https://proceedings.mlr.press/v202/simchi-levi23a.html
David Simchi-Levi, Chonghuan Wang
https://proceedings.mlr.press/v202/simchi-levi23a.html
ICML 2023
When launching a new product, historical sales data is often not available, leaving price as a crucial experimental instrument for sellers to gauge market response. When designing pricing experiments, there are three fundamental objectives: estimating the causal effect of price (i.e., price elasticity), maximizing the expected revenue through the experiment, and controlling the tail risk of suffering a very large loss. In this paper, we reveal the relationship among these three objectives. Under a linear structural model, we investigate the trade-offs between causal inference and expected revenue maximization, as well as between expected revenue maximization and tail risk control. Furthermore, we propose an optimal pricing experimental design, which can flexibly adapt to different desired levels of trade-offs. Through the optimal design, we also explore the relationship between causal inference and tail risk control.
https://proceedings.mlr.press/v202/simchowitz23a.html
https://proceedings.mlr.press/v202/simchowitz23a/simchowitz23a.pdf
https://openreview.net/forum?id=MEnEJqyE4s
Statistical Learning under Heterogeneous Distribution Shift
https://proceedings.mlr.press/v202/simchowitz23a.html
Max Simchowitz, Anurag Ajay, Pulkit Agrawal, Akshay Krishnamurthy
https://proceedings.mlr.press/v202/simchowitz23a.html
ICML 2023
This paper studies the prediction of a target $\mathbf{z}$ from a pair of random variables $(\mathbf{x},\mathbf{y})$, where the ground-truth predictor is additive $\mathbb{E}[\mathbf{z} \mid \mathbf{x},\mathbf{y}] = f_\star(\mathbf{x}) +g_{\star}(\mathbf{y})$. We study the performance of empirical risk minimization (ERM) over functions $f+g$, $f \in \mathcal{F}$ and $g \in \mathcal{G}$, fit on a given training distribution, but evaluated on a test distribution which exhibits covariate shift. We show that, when the class $\mathcal{F}$ is "simpler" than $\mathcal{G}$ (measured, e.g., in terms of its metric entropy), our predictor is more resilient to heterogeneous covariate shifts in which the shift in $\mathbf{x}$ is much greater than that in $\mathbf{y}$. These results rely on a novel Hölder style inequality for the Dudley integral which may be of independent interest. Moreover, we corroborate our theoretical findings with experiments demonstrating improved resilience to shifts in "simpler" features across numerous domains.
https://proceedings.mlr.press/v202/simon23a.html
https://proceedings.mlr.press/v202/simon23a/simon23a.pdf
https://openreview.net/forum?id=aqnGHvzrqL
On the Stepwise Nature of Self-Supervised Learning
https://proceedings.mlr.press/v202/simon23a.html
James B Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J Fetterman, Joshua Albrecht
https://proceedings.mlr.press/v202/simon23a.html
ICML 2023
We present a simple picture of the training process of self-supervised learning methods with dual deep networks. In our picture, these methods learn their high-dimensional embeddings one dimension at a time in a sequence of discrete, well-separated steps. We arrive at this picture via the study of a linear toy model of Barlow Twins, applicable to the case in which the trained network is infinitely wide. We solve the training dynamics of our toy model from small initialization, finding that the model learns the top eigenmodes of a certain contrastive kernel in a discrete, stepwise fashion, and find a closed-form expression for the final learned representations. Remarkably, we see the same stepwise learning phenomenon when training deep ResNets using the Barlow Twins, SimCLR, and VICReg losses. This stepwise picture partially demystifies the process of self-supervised training.
https://proceedings.mlr.press/v202/sinclair23a.html
https://proceedings.mlr.press/v202/sinclair23a/sinclair23a.pdf
https://openreview.net/forum?id=B6FMRlnDQz
Hindsight Learning for MDPs with Exogenous Inputs
https://proceedings.mlr.press/v202/sinclair23a.html
Sean R. Sinclair, Felipe Vieira Frujeri, Ching-An Cheng, Luke Marshall, Hugo De Oliveira Barbalho, Jingling Li, Jennifer Neville, Ishai Menache, Adith Swaminathan
https://proceedings.mlr.press/v202/sinclair23a.html
ICML 2023
Many resource management problems require sequential decision-making under uncertainty, where the only uncertainty affecting the decision outcomes comes from exogenous variables outside the control of the decision-maker. We model these problems as Exo-MDPs (Markov Decision Processes with Exogenous Inputs) and design a class of data-efficient algorithms for them termed Hindsight Learning (HL). Our HL algorithms achieve data efficiency by leveraging a key insight: having samples of the exogenous variables, past decisions can be revisited in hindsight to infer counterfactual consequences that can accelerate policy improvements. We compare HL against classic baselines in the multi-secretary and airline revenue management problems. We also scale our algorithms to a business-critical cloud resource management problem – allocating Virtual Machines (VMs) to physical machines, and simulate their performance with real datasets from a large public cloud provider. We find that HL algorithms outperform domain-specific heuristics, as well as state-of-the-art reinforcement learning methods.
https://proceedings.mlr.press/v202/singer23a.html
https://proceedings.mlr.press/v202/singer23a/singer23a.pdf
https://openreview.net/forum?id=YrXq8eG1EY
Text-To-4D Dynamic Scene Generation
https://proceedings.mlr.press/v202/singer23a.html
Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, Yaniv Taigman
https://proceedings.mlr.press/v202/singer23a.html
ICML 2023
We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment. MAV3D does not require any 3D or 4D data and the T2V model is trained only on Text-Image pairs and unlabeled videos. We demonstrate the effectiveness of our approach using comprehensive quantitative and qualitative experiments and show an improvement over previously established internal baselines. To the best of our knowledge, our method is the first to generate 3D dynamic scenes given a text description. Generated samples can be viewed at make-a-video3d.github.io
https://proceedings.mlr.press/v202/singh23a.html
https://proceedings.mlr.press/v202/singh23a/singh23a.pdf
https://openreview.net/forum?id=BrOPvKsIXW
The Hessian perspective into the Nature of Convolutional Neural Networks
https://proceedings.mlr.press/v202/singh23a.html
Sidak Pal Singh, Thomas Hofmann, Bernhard Schölkopf
https://proceedings.mlr.press/v202/singh23a.html
ICML 2023
While Convolutional Neural Networks (CNNs) have long been investigated and applied, as well as theorized, we aim to provide a slightly different perspective into their nature — through the perspective of their Hessian maps. The reason is that the loss Hessian captures the pairwise interaction of parameters and therefore forms a natural ground to probe how the architectural aspects of CNNs get manifested in their structure and properties. We develop a framework relying on Toeplitz representation of CNNs, and then utilize it to reveal the Hessian structure and, in particular, its rank. We prove tight upper bounds (with linear activations), which closely follow the empirical trend of the Hessian rank and in practice also hold for more general settings. Overall, our work generalizes and further establishes the key insight that the Hessian rank grows as the square root of the number of parameters, even in CNNs.
https://proceedings.mlr.press/v202/singh23b.html
https://proceedings.mlr.press/v202/singh23b/singh23b.pdf
https://openreview.net/forum?id=bZXfHpbUFi
When do Minimax-fair Learning and Empirical Risk Minimization Coincide?
https://proceedings.mlr.press/v202/singh23b.html
Harvineet Singh, Matthäus Kleindessner, Volkan Cevher, Rumi Chunara, Chris Russell
https://proceedings.mlr.press/v202/singh23b.html
ICML 2023
Minimax-fair machine learning minimizes the error for the worst-off group. However, empirical evidence suggests that when sophisticated models are trained with standard empirical risk minimization (ERM), they often have the same performance on the worst-off group as a minimax-trained model. Our work makes this counter-intuitive observation concrete. We prove that if the hypothesis class is sufficiently expressive and the group information is recoverable from the features, ERM and minimax-fairness learning formulations indeed have the same performance on the worst-off group. We provide additional empirical evidence of how this observation holds on a wide range of datasets and hypothesis classes. Since ERM is fundamentally easier than minimax optimization, our findings have implications on the practice of fair machine learning.
https://proceedings.mlr.press/v202/sipka23a.html
https://proceedings.mlr.press/v202/sipka23a/sipka23a.pdf
https://openreview.net/forum?id=15NZ7EzZd8
Differentiable Simulations for Enhanced Sampling of Rare Events
https://proceedings.mlr.press/v202/sipka23a.html
Martin Sipka, Johannes C. B. Dietschreit, Lukáš Grajciar, Rafael Gomez-Bombarelli
https://proceedings.mlr.press/v202/sipka23a.html
ICML 2023
Simulating rare events, such as the transformation of a reactant into a product in a chemical reaction, typically requires enhanced sampling techniques that rely on heuristically chosen collective variables (CVs). We propose using differentiable simulations (DiffSim) for the discovery and enhanced sampling of chemical transformations without a need to resort to preselected CVs, using only a distance metric. Reaction path discovery and estimation of the biasing potential that enhances the sampling are merged into a single end-to-end problem that is solved by path-integral optimization. This is achieved by introducing multiple improvements over standard DiffSim, such as partial backpropagation and graph mini-batching, making DiffSim training stable and efficient. The potential of DiffSim is demonstrated in the successful discovery of transition paths for the Muller-Brown model potential as well as a benchmark chemical system - alanine dipeptide.
https://proceedings.mlr.press/v202/sitawarin23a.html
https://proceedings.mlr.press/v202/sitawarin23a/sitawarin23a.pdf
https://openreview.net/forum?id=ho3AhQgQ5o
Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems
https://proceedings.mlr.press/v202/sitawarin23a.html
Chawin Sitawarin, Florian Tramèr, Nicholas Carlini
https://proceedings.mlr.press/v202/sitawarin23a.html
ICML 2023
Decision-based attacks construct adversarial examples against a machine learning (ML) model by making only hard-label queries. These attacks have mainly been applied directly to standalone neural networks. However, in practice, ML models are just one component of a larger learning system. We find that by adding a single preprocessor in front of a classifier, state-of-the-art query-based attacks are up to 7× less effective at attacking a prediction pipeline than at attacking the model alone. We explain this discrepancy by the fact that most preprocessors introduce some notion of invariance to the input space. Hence, attacks that are unaware of this invariance inevitably waste a large number of queries to re-discover or overcome it. We, therefore, develop techniques to (i) reverse-engineer the preprocessor and then (ii) use this extracted information to attack the end-to-end system. Our preprocessor extraction method requires only a few hundred queries, and our preprocessor-aware attacks recover the same efficacy as when attacking the model alone. The code can be found at https://github.com/google-research/preprocessor-aware-black-box-attack.
https://proceedings.mlr.press/v202/skalse23a.html
https://proceedings.mlr.press/v202/skalse23a/skalse23a.pdf
https://openreview.net/forum?id=nqvBUsnC5L
Invariance in Policy Optimisation and Partial Identifiability in Reward Learning
https://proceedings.mlr.press/v202/skalse23a.html
Joar Max Viktor Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, Adam Gleave
https://proceedings.mlr.press/v202/skalse23a.html
ICML 2023
It is often very challenging to manually design reward functions for complex, real-world tasks. To solve this, one can instead use reward learning to infer a reward function from data. However, there are often multiple reward functions that fit the data equally well, even in the infinite-data limit. This means that the reward function is only partially identifiable. In this work, we formally characterise the partial identifiability of the reward function given several popular reward learning data sources, including expert demonstrations and trajectory comparisons. We also analyse the impact of this partial identifiability for several downstream tasks, such as policy optimisation. We unify our results in a framework for comparing data sources and downstream tasks by their invariances, with implications for the design and selection of data sources for reward learning.
https://proceedings.mlr.press/v202/slumbers23a.html
https://proceedings.mlr.press/v202/slumbers23a/slumbers23a.pdf
https://openreview.net/forum?id=MGDYQNXjCg
A Game-Theoretic Framework for Managing Risk in Multi-Agent Systems
https://proceedings.mlr.press/v202/slumbers23a.html
Oliver Slumbers, David Henry Mguni, Stefano B Blumberg, Stephen Marcus Mcaleer, Yaodong Yang, Jun Wang
https://proceedings.mlr.press/v202/slumbers23a.html
ICML 2023
In order for agents in multi-agent systems (MAS) to be safe, they need to take into account the risks posed by the actions of other agents. However, the dominant paradigm in game theory (GT) assumes that agents are not affected by risk from other agents and only strive to maximise their expected utility. For example, in hybrid human-AI driving systems, it is necessary to limit large deviations in reward resulting from car crashes. Although there are equilibrium concepts in game theory that take into account risk aversion, they either assume that agents are risk-neutral with respect to the uncertainty caused by the actions of other agents, or they are not guaranteed to exist. We introduce a new GT-based Risk-Averse Equilibrium (RAE) that always produces a solution that minimises the potential variance in reward accounting for the strategy of other agents. Theoretically and empirically, we show RAE shares many properties with a Nash Equilibrium (NE), establishing convergence properties and generalising to risk-dominant NE in certain cases. To tackle large-scale problems, we extend RAE to the PSRO multi-agent reinforcement learning (MARL) framework. We empirically demonstrate the minimum reward variance benefits of RAE in matrix games with high-risk outcomes. Results on MARL experiments show RAE generalises to risk-dominant NE in a trust dilemma game and that it reduces instances of crashing by 7x in an autonomous driving setting versus the best performing baseline.
https://proceedings.mlr.press/v202/sodhi23a.html
https://proceedings.mlr.press/v202/sodhi23a/sodhi23a.pdf
https://openreview.net/forum?id=gVAk5bYETD
On the Effectiveness of Offline RL for Dialogue Response Generation
https://proceedings.mlr.press/v202/sodhi23a.html
Paloma Sodhi, Felix Wu, Ethan R. Elenberg, Kilian Q Weinberger, Ryan Mcdonald
https://proceedings.mlr.press/v202/sodhi23a.html
ICML 2023
A common training technique for language models is teacher forcing (TF). TF attempts to match human language exactly, even though identical meanings can be expressed in different ways. This motivates use of sequence-level objectives for dialogue response generation. In this paper, we study the efficacy of various offline reinforcement learning (RL) methods to maximize such objectives. We present a comprehensive evaluation across multiple datasets, models, and metrics. Offline RL shows a clear performance improvement over teacher forcing while not inducing training instability or sacrificing practical training budgets.
https://proceedings.mlr.press/v202/soen23a.html
https://proceedings.mlr.press/v202/soen23a/soen23a.pdf
https://openreview.net/forum?id=mwYHi3V56U
Fair Densities via Boosting the Sufficient Statistics of Exponential Families
https://proceedings.mlr.press/v202/soen23a.html
Alexander Soen, Hisham Husain, Richard Nock
https://proceedings.mlr.press/v202/soen23a.html
ICML 2023
We introduce a boosting algorithm to pre-process data for fairness. Starting from an initial fair but inaccurate distribution, our approach shifts towards better data fitting while still ensuring a minimal fairness guarantee. To do so, it learns the sufficient statistics of an exponential family with boosting-compliant convergence. Importantly, we are able to theoretically prove that the learned distribution will have a representation rate and statistical rate data fairness guarantee. Unlike recent optimization based pre-processing methods, our approach can be easily adapted for continuous domain features. Furthermore, when the weak learners are specified to be decision trees, the sufficient statistics of the learned distribution can be examined to provide clues on sources of (un)fairness. Empirical results are presented to demonstrate the quality of the results on real-world data.
https://proceedings.mlr.press/v202/sokar23a.html
https://proceedings.mlr.press/v202/sokar23a/sokar23a.pdf
https://openreview.net/forum?id=skb34O7hFp
The Dormant Neuron Phenomenon in Deep Reinforcement Learning
https://proceedings.mlr.press/v202/sokar23a.html
Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, Utku Evci
https://proceedings.mlr.press/v202/sokar23a.html
ICML 2023
In this work we identify the dormant neuron phenomenon in deep reinforcement learning, where an agent’s network suffers from an increasing number of inactive neurons, thereby affecting network expressivity. We demonstrate the presence of this phenomenon across a variety of algorithms and environments, and highlight its effect on learning. To address this issue, we propose a simple and effective method (ReDo) that Recycles Dormant neurons throughout training. Our experiments demonstrate that ReDo maintains the expressive power of networks by reducing the number of dormant neurons and results in improved performance.
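As an illustration of the recycling idea (not the exact ReDo procedure), one can score each hidden unit by its normalized average activation on a batch and reinitialize units whose score is near zero; the threshold and layer handling below are hypothetical:

```python
import torch

@torch.no_grad()
def recycle_dormant_units(layer: torch.nn.Linear, activations: torch.Tensor, tau: float = 0.01):
    """activations: (batch, out_features) outputs of `layer` on a sample batch."""
    score = activations.abs().mean(dim=0)
    score = score / (score.mean() + 1e-8)     # normalize within the layer
    dormant = score < tau                     # units contributing almost nothing
    fresh = torch.empty_like(layer.weight)
    torch.nn.init.kaiming_uniform_(fresh)
    layer.weight[dormant] = fresh[dormant]    # reinitialize incoming weights only
    if layer.bias is not None:
        layer.bias[dormant] = 0.0
    return dormant
```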
https://proceedings.mlr.press/v202/sokota23a.html
https://proceedings.mlr.press/v202/sokota23a/sokota23a.pdf
https://openreview.net/forum?id=bpGXNJ3kkU
Abstracting Imperfect Information Away from Two-Player Zero-Sum Games
https://proceedings.mlr.press/v202/sokota23a.html
Samuel Sokota, Ryan D’Orazio, Chun Kai Ling, David J Wu, J Zico Kolter, Noam Brown
https://proceedings.mlr.press/v202/sokota23a.html
ICML 2023
In their seminal work, Nayyar et al. (2013) showed that imperfect information can be abstracted away from common-payoff games by having players publicly announce their policies as they play. This insight underpins sound solvers and decision-time planning algorithms for common-payoff games. Unfortunately, a naive application of the same insight to two-player zero-sum games fails because Nash equilibria of the game with public policy announcements may not correspond to Nash equilibria of the original game. As a consequence, existing sound decision-time planning algorithms require complicated additional mechanisms that have unappealing properties. The main contribution of this work is showing that certain regularized equilibria do not possess the aforementioned non-correspondence problem—thus, computing them can be treated as perfect-information problems. Because these regularized equilibria can be made arbitrarily close to Nash equilibria, our result opens the door to a new perspective to solving two-player zero-sum games and yields a simplified framework for decision-time planning in two-player zero-sum games, void of the unappealing properties that plague existing decision-time planning approaches.
https://proceedings.mlr.press/v202/son23a.html
https://proceedings.mlr.press/v202/son23a/son23a.pdf
https://openreview.net/forum?id=DZvwV3Z4Z8
Meta-SAGE: Scale Meta-Learning Scheduled Adaptation with Guided Exploration for Mitigating Scale Shift on Combinatorial Optimization
https://proceedings.mlr.press/v202/son23a.html
Jiwoo Son, Minsu Kim, Hyeonah Kim, Jinkyoo Park
https://proceedings.mlr.press/v202/son23a.html
ICML 2023
This paper proposes Meta-SAGE, a novel approach for improving the scalability of deep reinforcement learning models for combinatorial optimization (CO) tasks. Our method adapts pre-trained models to larger-scale problems in test time by suggesting two components: a scale meta-learner (SML) and scheduled adaptation with guided exploration (SAGE). First, SML transforms the context embedding for subsequent adaptation of SAGE based on scale information. Then, SAGE adjusts the model parameters dedicated to the context embedding for a specific instance. SAGE introduces locality bias, which encourages selecting nearby locations to determine the next location. The locality bias gradually decays as the model is adapted to the target instance. Results show that Meta-SAGE outperforms previous adaptation methods and significantly improves scalability in representative CO tasks. Our source code is available at https://github.com/kaist-silab/meta-sage.
https://proceedings.mlr.press/v202/song23a.html
https://proceedings.mlr.press/v202/song23a/song23a.pdf
https://openreview.net/forum?id=FmqFfMTNnv
Consistency Models
https://proceedings.mlr.press/v202/song23a.html
Yang Song, Prafulla Dhariwal, Mark Chen, Ilya Sutskever
https://proceedings.mlr.press/v202/song23a.html
ICML 2023
Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256.
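A hedged sketch of the one-step sampling interface described above, in which a trained consistency model maps pure noise directly to a data sample (`model`, `sigma_max`, and shapes are placeholders, not the released implementation):

```python
import torch

@torch.no_grad()
def one_step_sample(model, shape, sigma_max: float, device="cpu"):
    noise = torch.randn(shape, device=device) * sigma_max  # start from pure noise
    return model(noise, sigma_max)                          # map noise directly to data
```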
https://proceedings.mlr.press/v202/song23b.html
https://proceedings.mlr.press/v202/song23b/song23b.pdf
https://openreview.net/forum?id=ud9QEKm0Ym
LipsNet: A Smooth and Robust Neural Network with Adaptive Lipschitz Constant for High Accuracy Optimal Control
https://proceedings.mlr.press/v202/song23b.html
Xujie Song, Jingliang Duan, Wenxuan Wang, Shengbo Eben Li, Chen Chen, Bo Cheng, Bo Zhang, Junqing Wei, Xiaoming Simon Wang
https://proceedings.mlr.press/v202/song23b.html
ICML 2023
Deep reinforcement learning (RL) is a powerful approach for solving optimal control problems. However, RL-trained policies often suffer from the action fluctuation problem, where the consecutive actions significantly differ despite only slight state variations. This problem results in mechanical components’ wear and tear and poses safety hazards. The action fluctuation is caused by the high Lipschitz constant of actor networks. To address this problem, we propose a neural network named LipsNet. We propose the Multi-dimensional Gradient Normalization (MGN) method to constrain the Lipschitz constant of networks with multi-dimensional input and output. Benefiting from MGN, LipsNet achieves Lipschitz continuity, allowing smooth actions while preserving control performance by adjusting its Lipschitz constant. LipsNet addresses the action fluctuation problem at the network level rather than the algorithm level, so it can serve as the actor network in most RL algorithms, making it more flexible and user-friendly than previous works. Experiments demonstrate that LipsNet has good landscape smoothness and noise robustness, resulting in significantly smoother actions compared to a Multilayer Perceptron.
https://proceedings.mlr.press/v202/song23c.html
https://proceedings.mlr.press/v202/song23c/song23c.pdf
https://openreview.net/forum?id=rziLKupkl3
Deep Perturbation Learning: Enhancing the Network Performance via Image Perturbations
https://proceedings.mlr.press/v202/song23c.html
Zifan Song, Xiao Gong, Guosheng Hu, Cairong Zhao
https://proceedings.mlr.press/v202/song23c.html
ICML 2023
Image perturbation techniques are widely used to generate adversarial examples that attack networks, greatly decreasing their performance. Unlike existing works, in this paper we introduce a novel framework, Deep Perturbation Learning (DPL), which offers new insight into image perturbations by using them to enhance the performance of networks rather than decrease it. Specifically, we learn image perturbations that amend the data distribution of the training set to improve the performance of networks. This optimization w.r.t. the data distribution is non-trivial. To approach this, we tactfully construct a differentiable optimization target w.r.t. image perturbations via minimizing the empirical risk. Then we propose an alternating optimization of the network weights and perturbations. DPL can easily be adapted to a wide spectrum of downstream tasks and backbone networks. Extensive experiments demonstrate the effectiveness of our DPL on 6 datasets (CIFAR-10, CIFAR100, ImageNet, MS-COCO, PASCAL VOC, and SBD) over 3 popular vision tasks (image classification, object detection, and semantic segmentation) with different backbone architectures (e.g., ResNet, MobileNet, and ViT).
https://proceedings.mlr.press/v202/song23d.html
https://proceedings.mlr.press/v202/song23d/song23d.pdf
https://openreview.net/forum?id=N9F5wG0hEu
Latent Traversals in Generative Models as Potential Flows
https://proceedings.mlr.press/v202/song23d.html
Yue Song, T. Anderson Keller, Nicu Sebe, Max Welling
https://proceedings.mlr.press/v202/song23d.html
ICML 2023
Despite the significant recent progress in deep generative models, the underlying structure of their latent spaces is still poorly understood, thereby making the task of performing semantically meaningful latent traversals an open research challenge. Most prior work has aimed to solve this challenge by modeling latent structures linearly, and finding corresponding linear directions which result in ‘disentangled’ generations. In this work, we instead propose to model latent structures with a learned dynamic potential landscape, thereby performing latent traversals as the flow of samples down the landscape’s gradient. Inspired by physics, optimal transport, and neuroscience, these potential landscapes are learned as physically realistic partial differential equations, thereby allowing them to flexibly vary over both space and time. To achieve disentanglement, multiple potentials are learned simultaneously, and are constrained by a classifier to be distinct and semantically self-consistent. Experimentally, we demonstrate that our method achieves both more qualitatively and quantitatively disentangled trajectories than state-of-the-art baselines. Further, we demonstrate that our method can be integrated as a regularization term during training, thereby acting as an inductive bias towards the learning of structured representations, ultimately improving model likelihood on similarly structured data. Code is available at https://github.com/KingJamesSong/PDETraversal.
https://proceedings.mlr.press/v202/song23e.html
https://proceedings.mlr.press/v202/song23e/song23e.pdf
https://openreview.net/forum?id=eqTWOzheZT
FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks
https://proceedings.mlr.press/v202/song23e.html
Bingqing Song, Prashant Khanduri, Xinwei Zhang, Jinfeng Yi, Mingyi Hong
https://proceedings.mlr.press/v202/song23e.html
ICML 2023
Federated Learning (FL) is a distributed learning paradigm that allows multiple clients to learn a joint model by utilizing privately held data at each client. Significant research efforts have been devoted to developing advanced algorithms that deal with the situation where the data at individual clients have heterogeneous distributions. In this work, we show that data heterogeneity can be dealt with from a different perspective. That is, by utilizing a certain overparameterized multi-layer neural network at each client, even the vanilla FedAvg (a.k.a. the Local SGD) algorithm can accurately optimize the training problem: When each client has a neural network with one wide layer of size $N$ (where $N$ is the number of total training samples), followed by layers of smaller widths, FedAvg converges linearly to a solution that achieves (almost) zero training loss, without requiring any assumptions on the clients’ data distributions. To our knowledge, this is the first work that demonstrates such resilience to data heterogeneity for FedAvg when trained on multi-layer neural networks. Our experiments also confirm that neural networks of large size can achieve better and more stable performance for FL problems.
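For readers unfamiliar with the algorithm analyzed above, the following is a minimal sketch of vanilla FedAvg (Local SGD) on a toy least-squares problem with heterogeneous client data. The number of clients, local steps, and learning rate are illustrative assumptions, and the toy model is not the overparameterized network studied in the paper.

```python
# Minimal FedAvg / Local SGD sketch on heterogeneous toy clients.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, local_steps, rounds, lr = 5, 4, 10, 50, 0.02
w_true = rng.normal(size=d)

# each client draws inputs from a different distribution (data heterogeneity)
clients = []
for c in range(n_clients):
    Xc = rng.normal(loc=0.5 * c, size=(50, d))
    clients.append((Xc, Xc @ w_true))

w_global = np.zeros(d)
for _ in range(rounds):
    local_models = []
    for Xc, yc in clients:
        w = w_global.copy()
        for _ in range(local_steps):                  # local gradient steps
            w -= lr * Xc.T @ (Xc @ w - yc) / len(yc)
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)          # server averages client models

print("distance to the interpolating solution:", np.linalg.norm(w_global - w_true))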
https://proceedings.mlr.press/v202/song23f.html
https://proceedings.mlr.press/v202/song23f/song23f.pdf
https://openreview.net/forum?id=OcKwZhPwHA
RGE: A Repulsive Graph Rectification for Node Classification via Influence
https://proceedings.mlr.press/v202/song23f.html
Jaeyun Song, Sungyub Kim, Eunho Yang
https://proceedings.mlr.press/v202/song23f.html
ICML 2023
In real-world graphs, noisy connections are inevitable, which makes it difficult to obtain unbiased node representations. Among various attempts to resolve this problem, a method of estimating the counterfactual effects of these connectivities has recently attracted attention, which mainly uses influence functions for single graph elements (i.e., nodes and edges). However, in this paper we argue that there is a strongly interacting group effect between the influences of graph elements due to their connectivity. In the same vein, we observe that edge groups connected to the same training node exhibit significant differences in their influences; hence, no matter how negative each individual influence is, removing them all at once may have a rather negative effect as a group. Based on this motivation, we propose a new edge-removing strategy, Repulsive edge Group Elimination (RGE), that preferentially removes edges with no interference in groups. Empirically, we demonstrate that RGE consistently outperforms existing methods on various benchmark datasets.
https://proceedings.mlr.press/v202/song23g.html
https://proceedings.mlr.press/v202/song23g/song23g.pdf
https://openreview.net/forum?id=7viOT7Zs9G
Importance Weighted Expectation-Maximization for Protein Sequence Design
https://proceedings.mlr.press/v202/song23g.html
Zhenqiao Song, Lei Li
https://proceedings.mlr.press/v202/song23g.html
ICML 2023
Designing protein sequences with desired biological function is crucial in biology and chemistry. Recent machine learning methods use a surrogate sequence-function model to replace the expensive wet-lab validation. How can we efficiently generate diverse and novel protein sequences with high fitness? In this paper, we propose IsEM-Pro, an approach to generate protein sequences towards a given fitness criterion. At its core, IsEM-Pro is a latent generative model, augmented by combinatorial structure features from separately learned Markov random fields (MRFs). We develop a Monte Carlo Expectation-Maximization (MCEM) method to learn the model. During inference, sampling from its latent space enhances diversity while its MRF features guide the exploration toward high-fitness regions. Experiments on eight protein sequence design tasks show that IsEM-Pro outperforms the previous best methods by at least 55% on average fitness score and generates more diverse and novel protein sequences.
https://proceedings.mlr.press/v202/song23h.html
https://proceedings.mlr.press/v202/song23h/song23h.pdf
https://openreview.net/forum?id=uIzkbJgyqc
Sketching for First Order Method: Efficient Algorithm for Low-Bandwidth Channel and Vulnerability
https://proceedings.mlr.press/v202/song23h.html
Zhao Song, Yitan Wang, Zheng Yu, Lichen Zhang
https://proceedings.mlr.press/v202/song23h.html
ICML 2023
Sketching is one of the most fundamental tools in large-scale machine learning. It enables runtime and memory savings by randomly compressing the original large problem into lower dimensions. In this paper, we propose a novel sketching scheme for first-order methods in the large-scale distributed learning setting, such that the communication costs between distributed agents are saved while the convergence of the algorithms is still guaranteed. Given gradient information in a high dimension $d$, the agent passes the compressed information processed by a sketching matrix $R\in \mathbb{R}^{s\times d}$ with $s\ll d$, and the receiver de-compresses it via the de-sketching matrix $R^\top$ to “recover” the information in the original dimension. Using such a framework, we develop algorithms for federated learning with lower communication costs. However, such random sketching does not protect the privacy of local data directly. We show that the gradient leakage problem still exists after applying the sketching technique by presenting a specific gradient attack method. As a remedy, we prove rigorously that the algorithm will be differentially private by adding additional random noise to the gradient information, which results in a first-order approach for federated learning that is both communication-efficient and differentially private. Our sketching scheme can be further generalized to other learning settings and might be of independent interest itself.
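The compress/de-compress step described above can be pictured with a short, hypothetical example: the sender transmits R @ g in s dimensions and the receiver applies R^T to form an unbiased reconstruction. The Gaussian choice of R and the dimensions below are illustrative assumptions; the paper's guarantees concern the full iterative algorithm, not a single step.

```python
# Minimal sketch of gradient compression via a sketching matrix R and
# de-sketching with R^T (unbiased in expectation, since E[R^T R] = I).
import numpy as np

rng = np.random.default_rng(0)
d, s = 10_000, 500
g = rng.normal(size=d)                        # local gradient

R = rng.normal(size=(s, d)) / np.sqrt(s)      # sketching matrix, s << d
compressed = R @ g                            # what is actually communicated
recovered = R.T @ compressed                  # de-sketched at the receiver

cos = g @ recovered / (np.linalg.norm(g) * np.linalg.norm(recovered))
print(f"sent {s} of {d} numbers; cosine similarity of recovery: {cos:.3f}")
```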
https://proceedings.mlr.press/v202/song23i.html
https://proceedings.mlr.press/v202/song23i/song23i.pdf
https://openreview.net/forum?id=fPytFuT5bO
Sketching Meets Differential Privacy: Fast Algorithm for Dynamic Kronecker Projection Maintenance
https://proceedings.mlr.press/v202/song23i.html
Zhao Song, Xin Yang, Yuanyuan Yang, Lichen Zhang
https://proceedings.mlr.press/v202/song23i.html
ICML 2023
Projection maintenance is one of the core data structure tasks. Efficient data structures for projection maintenance have led to recent breakthroughs in many convex programming algorithms. In this work, we further extend this framework to the Kronecker product structure. Given a constraint matrix ${\sf A}$ and a positive semi-definite matrix $W\in \mathbb{R}^{n\times n}$ with a sparse eigenbasis, we consider the task of maintaining the projection in the form of ${\sf B}^\top({\sf B}{\sf B}^\top)^{-1}{\sf B}$, where ${\sf B}={\sf A}(W\otimes I)$ or ${\sf B}={\sf A}(W^{1/2}\otimes W^{1/2})$. At each iteration, the weight matrix $W$ receives a low rank change and we receive a new vector $h$. The goal is to maintain the projection matrix and answer the query ${\sf B}^\top({\sf B}{\sf B}^\top)^{-1}{\sf B}h$ with good approximation guarantees. We design a fast dynamic data structure for this task and it is robust against an adaptive adversary. Following the beautiful and pioneering work of [Beimel, Kaplan, Mansour, Nissim, Saranurak and Stemmer, STOC’22], we use tools from differential privacy to reduce the randomness required by the data structure and further improve the running time.
https://proceedings.mlr.press/v202/song23j.html
https://proceedings.mlr.press/v202/song23j/song23j.pdf
https://openreview.net/forum?id=cJh37mrFms
A Nearly-Optimal Bound for Fast Regression with $\ell_\infty$ Guarantee
https://proceedings.mlr.press/v202/song23j.html
Zhao Song, Mingquan Ye, Junze Yin, Lichen Zhang
https://proceedings.mlr.press/v202/song23j.html
ICML 2023
Given a matrix $A\in \mathbb{R}^{n\times d}$ and a vector $b\in \mathbb{R}^n$, we consider the regression problem with $\ell_\infty$ guarantees: finding a vector $x’\in \mathbb{R}^d$ such that $||x’-x^*||_\infty \leq \frac{\epsilon}{\sqrt{d}}\cdot ||Ax^*-b||_2\cdot ||A^\dagger||$ with $x^*$ being the optimal solution to the regression problem $\min_x ||Ax-b||_2$. One popular approach for solving the $\ell_2$ regression problem is via sketching: pick a structured random matrix $S\in \mathbb{R}^{m\times n}$ with $m\ll n$ such that $SA$ can be quickly computed, and solve the “sketched” regression problem $x’=\mathrm{argmin}_x ||SAx-Sb||_2$. In this paper, we show that in order to obtain such an $\ell_\infty$ guarantee for $\ell_2$ regression, one has to use sketching matrices that are dense. To the best of our knowledge, this is the first use case in which dense sketching matrices are necessary. On the algorithmic side, we prove that there exists a distribution of dense sketching matrices with $m=\epsilon^{-2}d\log^3(n/\delta)$ such that solving the sketched regression problem gives the $\ell_\infty$ guarantee, with probability at least $1-\delta$. Moreover, the matrix $SA$ can be computed in time $O(nd\log n)$. Our row count is nearly-optimal up to logarithmic factors, and significantly improves the result in [Price, Song and Woodruff, ICALP’17], in which $m=\Omega(\epsilon^{-2}d^{1+\gamma})$ for $\gamma\in (0, 1)$ is required. Moreover, we develop a novel analytical framework for $\ell_\infty$ guarantee regression that utilizes the Oblivious Coordinate-wise Embedding (OCE) property introduced in [Song and Yu, ICML’21]. Our analysis is much simpler and more general than that of [Price, Song and Woodruff, ICALP’17]. Leveraging this framework, we extend the $\ell_\infty$ guarantee regression result to dense sketching matrices for computing fast tensor products of vectors.
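As a small illustration of the sketch-and-solve recipe the abstract refers to, the hypothetical example below applies a dense Gaussian sketch to an overdetermined least-squares problem and compares the sketched solution with the exact one in the $\ell_\infty$ norm. The dimensions and the Gaussian choice of S are assumptions for illustration, not the distribution constructed in the paper.

```python
# Minimal sketch-and-solve example with a dense Gaussian sketching matrix.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5_000, 50, 1_000
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

S = rng.normal(size=(m, n)) / np.sqrt(m)                     # dense sketch, m << n
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)     # sketched solve
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)              # exact solve

print("l_inf distance to the exact solution:", np.max(np.abs(x_sketch - x_exact)))
```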
https://proceedings.mlr.press/v202/song23k.html
https://proceedings.mlr.press/v202/song23k/song23k.pdf
https://openreview.net/forum?id=JzZ2xAvCs8
Loss-Guided Diffusion Models for Plug-and-Play Controllable Generation
https://proceedings.mlr.press/v202/song23k.html
Jiaming Song, Qinsheng Zhang, Hongxu Yin, Morteza Mardani, Ming-Yu Liu, Jan Kautz, Yongxin Chen, Arash Vahdat
https://proceedings.mlr.press/v202/song23k.html
ICML 2023
We consider guiding denoising diffusion models with general differentiable loss functions in a plug-and-play fashion, enabling controllable generation without additional training. This paradigm, termed Loss-Guided Diffusion (LGD), can easily be integrated into all diffusion models and leverage various efficient samplers. Despite the benefits, the resulting guidance term is, unfortunately, an intractable integral and needs to be approximated. Existing methods compute the guidance term based on a point estimate. However, we show that such approaches have significant errors over the scale of the approximations. To address this issue, we propose a Monte Carlo method that uses multiple samples from a suitable distribution to reduce bias. Our method is effective in various synthetic and real-world settings, including image super-resolution, text or label-conditional image generation, and controllable motion synthesis. Notably, we show how our method can be applied to control a pretrained motion diffusion model to follow certain paths and avoid obstacles that are proven challenging to prior methods.
https://proceedings.mlr.press/v202/soulos23a.html
https://proceedings.mlr.press/v202/soulos23a/soulos23a.pdf
https://openreview.net/forum?id=YZoYYaawO2
Differentiable Tree Operations Promote Compositional Generalization
https://proceedings.mlr.press/v202/soulos23a.html
Paul Soulos, Edward J Hu, Kate Mccurdy, Yunmo Chen, Roland Fernandez, Paul Smolensky, Jianfeng Gao
https://proceedings.mlr.press/v202/soulos23a.html
ICML 2023
In the context of structure-to-structure transformation tasks, learning sequences of discrete symbolic operations poses significant challenges due to their non-differentiability. To facilitate the learning of these symbolic sequences, we introduce a differentiable tree interpreter that compiles high-level symbolic tree operations into subsymbolic matrix operations on tensors. We present a novel Differentiable Tree Machine (DTM) architecture that integrates our interpreter with an external memory and an agent that learns to sequentially select tree operations to execute the target transformation in an end-to-end manner. With respect to out-of-distribution compositional generalization on synthetic semantic parsing and language generation tasks, DTM achieves 100% while existing baselines such as Transformer, Tree Transformer, LSTM, and Tree2Tree LSTM achieve less than 30%. DTM remains highly interpretable in addition to its perfect performance.
https://proceedings.mlr.press/v202/sportisse23a.html
https://proceedings.mlr.press/v202/sportisse23a/sportisse23a.pdf
https://openreview.net/forum?id=nS2x7LOKZk
Are labels informative in semi-supervised learning? Estimating and leveraging the missing-data mechanism.
https://proceedings.mlr.press/v202/sportisse23a.html
Aude Sportisse, Hugo Schmutz, Olivier Humbert, Charles Bouveyron, Pierre-Alexandre Mattei
https://proceedings.mlr.press/v202/sportisse23a.html
ICML 2023
Semi-supervised learning is a powerful technique for leveraging unlabeled data to improve machine learning models, but it can be affected by the presence of “informative” labels, which occur when some classes are more likely to be labeled than others. In the missing data literature, such labels are called missing not at random. In this paper, we propose a novel approach to address this issue by estimating the missing-data mechanism and using inverse propensity weighting to debias any SSL algorithm, including those using data augmentation. We also propose a likelihood ratio test to assess whether or not labels are indeed informative. Finally, we demonstrate the performance of the proposed methods on different datasets, in particular on two medical datasets for which we design pseudo-realistic missing data scenarios.
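To make the inverse-propensity idea concrete, here is a hypothetical toy example where the labeling propensities are assumed known (the paper estimates them from data): class frequencies computed naively over labeled points are biased under informative labeling, while inverse-propensity weights recover the true proportions.

```python
# Minimal inverse-propensity-weighting sketch for informative (MNAR) labels.
import numpy as np

rng = np.random.default_rng(0)
propensity = np.array([0.9, 0.5, 0.1])          # P(labeled | class), assumed known here

y = rng.integers(0, 3, size=10_000)             # true classes
labeled = rng.random(10_000) < propensity[y]    # informative labeling mechanism

# naive class frequencies over labeled points are biased ...
naive = np.bincount(y[labeled], minlength=3) / labeled.sum()
# ... inverse-propensity weights debias them
w = 1.0 / propensity[y[labeled]]
ipw = np.bincount(y[labeled], weights=w, minlength=3) / w.sum()

print("true :", (np.bincount(y, minlength=3) / len(y)).round(3))
print("naive:", naive.round(3))
print("IPW  :", ipw.round(3))
```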
https://proceedings.mlr.press/v202/squires23a.html
https://proceedings.mlr.press/v202/squires23a/squires23a.pdf
https://openreview.net/forum?id=1VDuHddxtA
Linear Causal Disentanglement via Interventions
https://proceedings.mlr.press/v202/squires23a.html
Chandler Squires, Anna Seigal, Salil S Bhate, Caroline Uhler
https://proceedings.mlr.press/v202/squires23a.html
ICML 2023
Causal disentanglement seeks a representation of data involving latent variables that are related via a causal model. A representation is identifiable if both the latent model and the transformation from latent to observed variables are unique. In this paper, we study observed variables that are a linear transformation of a linear latent causal model. Data from interventions are necessary for identifiability: if one latent variable is missing an intervention, we show that there exist distinct models that cannot be distinguished. Conversely, we show that a single intervention on each latent variable is sufficient for identifiability. Our proof uses a generalization of the RQ decomposition of a matrix that replaces the usual orthogonal and upper triangular conditions with analogues depending on a partial order on the rows of the matrix, with partial order determined by a latent causal model. We corroborate our theoretical results with a method for causal disentanglement. We show that the method accurately recovers a latent causal model on synthetic and semi-synthetic data and we illustrate a use case on a dataset of single-cell RNA sequencing measurements.
https://proceedings.mlr.press/v202/srivastava23a.html
https://proceedings.mlr.press/v202/srivastava23a/srivastava23a.pdf
https://openreview.net/forum?id=cd97d9zV0s
Generating Language Corrections for Teaching Physical Control Tasks
https://proceedings.mlr.press/v202/srivastava23a.html
Megha Srivastava, Noah Goodman, Dorsa Sadigh
https://proceedings.mlr.press/v202/srivastava23a.html
ICML 2023
AI assistance continues to help advance applications in education, from language learning to intelligent tutoring systems, yet current methods for providing students feedback are still quite limited. Most automatic feedback systems either provide binary correctness feedback, which may not help a student understand how to improve, or require hand-coding feedback templates, which may not generalize to new domains. This can be particularly challenging for physical control tasks, where the rich diversity in student behavior and specialized domains make it challenging to leverage general-purpose assistive tools for providing feedback. We design and build CORGI, a model trained to generate language corrections for physical control tasks, such as learning to ride a bike. CORGI takes in as input a pair of student and expert trajectories, and then generates natural language corrections to help the student improve. We collect and train CORGI over data from three diverse physical control tasks (drawing, steering, and joint movement). Through both automatic and human evaluations, we show that CORGI can (i) generate valid feedback for novel student trajectories, (ii) outperform baselines on domains with novel control dynamics, and (iii) improve student learning in an interactive drawing task.
https://proceedings.mlr.press/v202/staerman23a.html
https://proceedings.mlr.press/v202/staerman23a/staerman23a.pdf
https://openreview.net/forum?id=xSzm4fFIIg
FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels
https://proceedings.mlr.press/v202/staerman23a.html
Guillaume Staerman, Cédric Allain, Alexandre Gramfort, Thomas Moreau
https://proceedings.mlr.press/v202/staerman23a.html
ICML 2023
Temporal point processes (TPP) are a natural tool for modeling event-based data. Among all TPP models, Hawkes processes have proven to be the most widely used, mainly due to their adequate modeling for various applications, particularly when considering exponential or non-parametric kernels. Although non-parametric kernels are an option, such models require large datasets. While exponential kernels are more data-efficient and relevant for specific applications where events immediately trigger more events, they are ill-suited for applications where latencies need to be estimated, such as in neuroscience. This work aims to offer an efficient solution to TPP inference using general parametric kernels with finite support. The developed solution consists of a fast $\ell_2$ gradient-based solver leveraging a discretized version of the events. After theoretically supporting the use of discretization, the statistical and computational efficiency of the novel approach is demonstrated through various numerical experiments. Finally, the method’s effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG). Given the use of general parametric kernels, results show that the proposed approach leads to a better estimation of pattern latency than the state-of-the-art.
https://proceedings.mlr.press/v202/stein23a.html
https://proceedings.mlr.press/v202/stein23a/stein23a.pdf
https://openreview.net/forum?id=kdrPtUAfNx
Partial Optimality in Cubic Correlation Clustering
https://proceedings.mlr.press/v202/stein23a.html
David Stein, Silvia Di Gregorio, Bjoern Andres
https://proceedings.mlr.press/v202/stein23a.html
ICML 2023
The higher-order correlation clustering problem is an expressive model, and recently, local search heuristics have been proposed for several applications. Certifying optimality, however, is NP-hard and practically hampered already by the complexity of the problem statement. Here, we focus on establishing partial optimality conditions for the special case of complete graphs and cubic objective functions. In addition, we define and implement algorithms for testing these conditions and examine their effect numerically, on two datasets.
https://proceedings.mlr.press/v202/steiner23a.html
https://proceedings.mlr.press/v202/steiner23a/steiner23a.pdf
https://openreview.net/forum?id=9v29agPZkj
MODeL: Memory Optimizations for Deep Learning
https://proceedings.mlr.press/v202/steiner23a.html
Benoit Steiner, Mostafa Elhoushi, Jacob Kahn, James Hegarty
https://proceedings.mlr.press/v202/steiner23a.html
ICML 2023
The size of deep neural networks has grown exponentially in recent years. Unfortunately, hardware devices have not kept pace with the rapidly increasing memory requirements. To cope with this, researchers have proposed various techniques including spilling, rematerialization, reduced precision training, model pruning, and so on. However, these approaches suffer from various limitations, such as increasing training time, affecting model accuracy, or requiring extensive manual modifications to the neural networks. We present MODeL, an algorithm that optimizes the lifetime and memory location of the tensors used to train neural networks. Our method automatically reduces the memory usage of existing neural networks without any of the drawbacks of other techniques. We formulate the problem as a joint integer linear program (ILP). We present several techniques to simplify the encoding of the problem, and enable our approach to scale to the size of state-of-the-art neural networks using an off-the-shelf ILP solver. We experimentally demonstrate that MODeL only takes seconds to allow the training of neural networks using 30% less memory on average.
https://proceedings.mlr.press/v202/straitouri23a.html
https://proceedings.mlr.press/v202/straitouri23a/straitouri23a.pdf
https://openreview.net/forum?id=tgm43aFDXD
Improving Expert Predictions with Conformal Prediction
https://proceedings.mlr.press/v202/straitouri23a.html
Eleni Straitouri, Lequn Wang, Nastaran Okati, Manuel Gomez Rodriguez
https://proceedings.mlr.press/v202/straitouri23a.html
ICML 2023
Automated decision support systems promise to help human experts solve multiclass classification tasks more efficiently and accurately. However, existing systems typically require experts to understand when to cede agency to the system or when to exercise their own agency. Otherwise, the experts may be better off solving the classification tasks on their own. In this work, we develop an automated decision support system that, by design, does not require experts to understand when to trust the system to improve performance. Rather than providing (single) label predictions and letting experts decide when to trust these predictions, our system provides sets of label predictions constructed using conformal prediction—prediction sets—and forcefully asks experts to predict labels from these sets. By using conformal prediction, our system can precisely trade off the probability that the true label is not in the prediction set, which determines how frequently our system will mislead the experts, and the size of the prediction set, which determines the difficulty of the classification task the experts need to solve using our system. In addition, we develop an efficient and near-optimal search method to find the conformal predictor under which the experts benefit the most from using our system. Simulation experiments using synthetic and real expert predictions demonstrate that our system may help experts make more accurate predictions and is robust to the accuracy of the classifier the conformal predictor relies on.
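The prediction sets mentioned above are built with standard split conformal prediction; the hypothetical sketch below calibrates a score threshold on held-out data so that the true label lies in the set with probability roughly 1 - alpha. The stand-in classifier and the choice of nonconformity score are illustrative assumptions; the paper additionally searches over alpha for the value that helps the expert most.

```python
# Minimal split conformal prediction sketch producing label prediction sets.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes, alpha = 1000, 5, 0.1

# stand-in classifier: noisy softmax scores that favour the true class
y_cal = rng.integers(0, n_classes, size=n_cal)
logits = rng.normal(size=(n_cal, n_classes))
logits[np.arange(n_cal), y_cal] += 2.0
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# nonconformity score: one minus the probability assigned to the true label
scores = 1.0 - probs[np.arange(n_cal), y_cal]
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)

def prediction_set(p):
    """Return all labels whose score falls below the calibrated threshold."""
    return np.where(1.0 - p <= q)[0]

print("prediction set for one point:", prediction_set(probs[0]))
```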
https://proceedings.mlr.press/v202/strimel23a.html
https://proceedings.mlr.press/v202/strimel23a/strimel23a.pdf
https://openreview.net/forum?id=NHfj50Wiry
Lookahead When It Matters: Adaptive Non-causal Transformers for Streaming Neural Transducers
https://proceedings.mlr.press/v202/strimel23a.html
Grant Strimel, Yi Xie, Brian John King, Martin Radfar, Ariya Rastrow, Athanasios Mouchtaris
https://proceedings.mlr.press/v202/strimel23a.html
ICML 2023
Streaming speech recognition architectures are employed for low-latency, real-time applications. Such architectures are often characterized by their causality. Causal architectures emit tokens at each frame, relying only on current and past signal, while non-causal models are exposed to a window of future frames at each step to increase predictive accuracy. This dichotomy amounts to a trade-off for real-time Automatic Speech Recognition (ASR) system design: profit from the low-latency benefit of strictly-causal architectures while accepting predictive performance limitations, or realize the modeling benefits of future-context models accompanied by their higher latency penalty. In this work, we relax the constraints of this choice and present the Adaptive Non-Causal Attention Transducer (ANCAT). Our architecture is non-causal in the traditional sense, but executes in a low-latency, streaming manner by dynamically choosing when to rely on future context and to what degree within the audio stream. The resulting mechanism, when coupled with our novel regularization algorithms, delivers comparable accuracy to non-causal configurations while improving significantly upon latency, closing the gap with their causal counterparts. We showcase our design experimentally by reporting comparative ASR task results with measures of accuracy and latency on both publicly accessible and production-scale, voice-assistant datasets.
https://proceedings.mlr.press/v202/stucchi23a.html
https://proceedings.mlr.press/v202/stucchi23a/stucchi23a.pdf
https://openreview.net/forum?id=KVFQ5fUmHg
Kernel QuantTree
https://proceedings.mlr.press/v202/stucchi23a.html
Diego Stucchi, Paolo Rizzo, Nicolò Folloni, Giacomo Boracchi
https://proceedings.mlr.press/v202/stucchi23a.html
ICML 2023
We present Kernel QuantTree (KQT), a non-parametric change detection algorithm that monitors multivariate data through a histogram. KQT constructs a nonlinear partition of the input space that matches pre-defined target probabilities and specifically promotes compact bins adhering to the data distribution, resulting in a powerful detection algorithm. We prove two key theoretical advantages of KQT: i) statistics defined over the KQT histogram do not depend on the stationary data distribution $\phi_0$, so detection thresholds can be set a priori to control false positive rate, and ii) thanks to the kernel functions adopted, the KQT monitoring scheme is invariant to the roto-translation of the input data. Consequently, KQT does not require any preprocessing step like PCA. Our experiments show that KQT achieves superior detection power than non-parametric state-of-the-art change detection methods, and can reliably control the false positive rate.
https://proceedings.mlr.press/v202/stucki23a.html
https://proceedings.mlr.press/v202/stucki23a/stucki23a.pdf
https://openreview.net/forum?id=vlaPdKdbGK
Topologically Faithful Image Segmentation via Induced Matching of Persistence Barcodes
https://proceedings.mlr.press/v202/stucki23a.html
Nico Stucki, Johannes C. Paetzold, Suprosanna Shit, Bjoern Menze, Ulrich Bauer
https://proceedings.mlr.press/v202/stucki23a.html
ICML 2023
Segmentation models predominantly optimize pixel-overlap-based loss, an objective that is actually inadequate for many segmentation tasks. In recent years, their limitations fueled a growing interest in topology-aware methods, which aim to recover the topology of the segmented structures. However, so far, existing methods only consider global topological properties, ignoring the need to preserve topological features spatially, which is crucial for accurate segmentation. We introduce the concept of induced matchings from persistent homology to achieve a spatially correct matching between persistence barcodes in a segmentation setting. Based on this concept, we define the Betti matching error as an interpretable, topologically and feature-wise accurate metric for image segmentations, which resolves the limitations of the Betti number error. Our Betti matching error is differentiable and efficient to use as a loss function. We demonstrate that it improves the topological performance of segmentation networks significantly across six diverse datasets while preserving the performance with respect to traditional scores. Our code is publicly available (https://github.com/nstucki/Betti-matching/).
https://proceedings.mlr.press/v202/su23a.html
https://proceedings.mlr.press/v202/su23a/su23a.pdf
https://openreview.net/forum?id=DdQ8ewIgY9
Towards Robust Graph Incremental Learning on Evolving Graphs
https://proceedings.mlr.press/v202/su23a.html
Junwei Su, Difan Zou, Zijun Zhang, Chuan Wu
https://proceedings.mlr.press/v202/su23a.html
ICML 2023
Incremental learning is a machine learning approach that involves training a model on a sequence of tasks, rather than all tasks at once. This ability to learn incrementally from a stream of tasks is crucial for many real-world applications. However, incremental learning is a challenging problem on graph-structured data, as many graph-related problems involve prediction tasks for each individual node, known as Node-wise Graph Incremental Learning (NGIL). This introduces non-independent and non-identically distributed characteristics in the sample data generation process, making it difficult to maintain the performance of the model as new tasks are added. In this paper, we focus on the inductive NGIL problem, which accounts for the evolution of graph structure (structural shift) induced by emerging tasks. We provide a formal formulation and analysis of the problem, and propose a novel regularization-based technique called Structural-Shift-Risk-Mitigation (SSRM) to mitigate the impact of the structural shift on catastrophic forgetting in the inductive NGIL problem. We show that the structural shift can lead to a shift in the input distribution for the existing tasks, and further lead to an increased risk of catastrophic forgetting. Through comprehensive empirical studies with several benchmark datasets, we demonstrate that SSRM is flexible and easy to adapt to improve the performance of state-of-the-art GNN incremental learning frameworks in the inductive setting.
https://proceedings.mlr.press/v202/suau23a.html
https://proceedings.mlr.press/v202/suau23a/suau23a.pdf
https://openreview.net/forum?id=2tuJDwSSP2
DUET: 2D Structured and Approximately Equivariant Representations
https://proceedings.mlr.press/v202/suau23a.html
Xavier Suau, Federico Danieli, T. Anderson Keller, Arno Blaas, Chen Huang, Jason Ramapuram, Dan Busbridge, Luca Zappella
https://proceedings.mlr.press/v202/suau23a.html
ICML 2023
Multiview Self-Supervised Learning (MSSL) is based on learning invariances with respect to a set of input transformations. However, invariance partially or totally removes transformation-related information from the representations, which might harm performance for specific downstream tasks that require such information. We propose 2D strUctured and EquivarianT representations (coined DUET), which are 2d representations organized in a matrix structure, and equivariant with respect to transformations acting on the input data. DUET representations maintain information about an input transformation, while remaining semantically expressive. Compared to SimCLR (Chen et al., 2020) (unstructured and invariant) and ESSL (Dangovski et al., 2022) (unstructured and equivariant), the structured and equivariant nature of DUET representations enables controlled generation with lower reconstruction error, while controllability is not possible with SimCLR or ESSL. DUET also achieves higher accuracy for several discriminative tasks, and improves transfer learning.
https://proceedings.mlr.press/v202/suh23a.html
https://proceedings.mlr.press/v202/suh23a/suh23a.pdf
https://openreview.net/forum?id=KqNX6VOqnJ
Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels
https://proceedings.mlr.press/v202/suh23a.html
Min-Kook Suh, Seung-Woo Seo
https://proceedings.mlr.press/v202/suh23a.html
ICML 2023
Although contrastive learning methods have shown prevailing performance on a variety of representation learning tasks, they encounter difficulty when the training dataset is long-tailed. Many researchers have combined contrastive learning and a logit adjustment technique to address this problem, but the combinations are done ad-hoc and a theoretical background has not yet been provided. The goal of this paper is to provide the background and further improve the performance. First, we show that the fundamental reason contrastive learning methods struggle with long-tailed tasks is that they try to maximize the mutual information between latent features and input data. As ground-truth labels are not considered in the maximization, they are not able to address imbalances between classes. Rather, we interpret the long-tailed recognition task as a mutual information maximization between latent features and ground-truth labels. This approach integrates contrastive learning and logit adjustment seamlessly to derive a loss function that shows state-of-the-art performance on long-tailed recognition benchmarks. It also demonstrates its efficacy in image segmentation tasks, verifying its versatility beyond image classification. Code is available at https://github.com/bluecdm/Long-tailed-recognition.
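The logit-adjustment component referenced in the abstract above has a simple standard form: class-prior log-frequencies are added to the logits inside the softmax cross-entropy. The sketch below shows only that standard adjustment; it is not the paper's full mutual-information-based loss, and the priors and temperature tau are illustrative assumptions.

```python
# Minimal logit-adjusted cross-entropy sketch for long-tailed recognition.
import numpy as np

def logit_adjusted_ce(logits, labels, class_priors, tau=1.0):
    adjusted = logits + tau * np.log(class_priors)     # shift logits by log-priors
    adjusted -= adjusted.max(axis=1, keepdims=True)    # numerical stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
priors = np.array([0.7, 0.2, 0.1])                     # long-tailed class priors
logits = rng.normal(size=(8, 3))
labels = rng.integers(0, 3, size=8)
print("logit-adjusted CE:", logit_adjusted_ce(logits, labels, priors))
```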
https://proceedings.mlr.press/v202/sui23a.html
https://proceedings.mlr.press/v202/sui23a/sui23a.pdf
https://openreview.net/forum?id=E8u45VW6Nj
Adversarial Learning of Distributional Reinforcement Learning
https://proceedings.mlr.press/v202/sui23a.html
Yang Sui, Yukun Huang, Hongtu Zhu, Fan Zhou
https://proceedings.mlr.press/v202/sui23a.html
ICML 2023
Reinforcement learning (RL) has made significant advancements in artificial intelligence. However, its real-world applications are limited due to differences between simulated environments and the actual world. Consequently, it is crucial to systematically analyze how each component of the RL system can affect the final model performance. In this study, we propose an adversarial learning framework for distributional reinforcement learning, which adopts the concept of influence measure from the statistics community. This framework enables us to detect performance loss caused by either the internal policy structure or the external state observation. The proposed influence measure is based on information geometry and has desirable properties of invariance. We demonstrate that the influence measure is useful for three diagnostic tasks: identifying fragile states in trajectories, determining the instability of the policy architecture, and pinpointing anomalously sensitive policy parameters.
https://proceedings.mlr.press/v202/sumers23a.html
https://proceedings.mlr.press/v202/sumers23a/sumers23a.pdf
https://openreview.net/forum?id=6vVkGnEpP7
Distilling Internet-Scale Vision-Language Models into Embodied Agents
https://proceedings.mlr.press/v202/sumers23a.html
Theodore Sumers, Kenneth Marino, Arun Ahuja, Rob Fergus, Ishita Dasgupta
https://proceedings.mlr.press/v202/sumers23a.html
ICML 2023
Instruction-following agents must ground language into their observation and action spaces. Learning to ground language is challenging, typically requiring domain-specific engineering or large quantities of human interaction data. To address this challenge, we propose using pretrained vision-language models (VLMs) to supervise embodied agents. We combine ideas from model distillation and hindsight experience replay (HER), using a VLM to retroactively generate language describing the agent’s behavior. Simple prompting allows us to control the supervision signal, teaching an agent to interact with novel objects based on their names (e.g., planes) or their features (e.g., colors) in a 3D rendered environment. Few-shot prompting lets us teach abstract category membership, including pre-existing categories (food vs toys) and ad-hoc ones (arbitrary preferences over objects). Our work outlines a new and effective way to use internet-scale VLMs, repurposing the generic language grounding acquired by such models to teach task-relevant groundings to embodied agents.
https://proceedings.mlr.press/v202/sun23a.html
https://proceedings.mlr.press/v202/sun23a/sun23a.pdf
https://openreview.net/forum?id=0uhNUQmtFk
Vector-Valued Control Variates
https://proceedings.mlr.press/v202/sun23a.html
Zhuo Sun, Alessandro Barp, Francois-Xavier Briol
https://proceedings.mlr.press/v202/sun23a.html
ICML 2023
Control variates are variance reduction tools for Monte Carlo estimators. They can provide significant variance reduction, but usually require a large number of samples, which can be prohibitive when sampling or evaluating the integrand is computationally expensive. Furthermore, there are many scenarios where we need to compute multiple related integrals simultaneously or sequentially, which can further exacerbate computational costs. In this paper, we propose vector-valued control variates, an extension of control variates which can be used to reduce the variance of multiple Monte Carlo estimators jointly. This allows for the transfer of information across integration tasks, and hence reduces the need for a large number of samples. We focus on control variates based on kernel interpolants and our novel construction is obtained through a generalised Stein identity and the development of novel matrix-valued Stein reproducing kernels. We demonstrate our methodology on a range of problems including multifidelity modelling, Bayesian inference for dynamical systems, and model evidence computation through thermodynamic integration.
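For context, a scalar control variate works as in the minimal sketch below: a function with known mean is subtracted from the integrand with an estimated coefficient, reducing the variance of the Monte Carlo estimate. The paper's contribution is the vector-valued, kernel-based generalization; this example only illustrates the basic mechanism and uses an arbitrary toy integrand.

```python
# Minimal scalar control-variate sketch: estimate E[exp(X)] for X ~ N(0, 1)
# using g(X) = X (known mean 0) as the control variate.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
f = np.exp(x)                              # integrand; true mean is exp(0.5)
g = x                                      # control variate with known mean 0

beta = np.cov(f, g)[0, 1] / np.var(g)      # (near-)optimal scalar coefficient
print("plain MC estimate:", f.mean())
print("CV estimate      :", (f - beta * g).mean())
print("true value       :", np.exp(0.5))
```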
https://proceedings.mlr.press/v202/sun23b.html
https://proceedings.mlr.press/v202/sun23b/sun23b.pdf
https://openreview.net/forum?id=Sx5qQHvWWD
MetaModulation: Learning Variational Feature Hierarchies for Few-Shot Learning with Fewer Tasks
https://proceedings.mlr.press/v202/sun23b.html
Wenfang Sun, Yingjun Du, Xiantong Zhen, Fan Wang, Ling Wang, Cees G. M. Snoek
https://proceedings.mlr.press/v202/sun23b.html
ICML 2023
Meta-learning algorithms are able to learn a new task using previously learned knowledge, but they often require a large number of meta-training tasks which may not be readily available. To address this issue, we propose a method for few-shot learning with fewer tasks, which we call MetaModulation. The key idea is to use a neural network to increase the density of the meta-training tasks by modulating batch normalization parameters during meta-training. Additionally, we modify parameters at various neural network levels, rather than just a single layer, to increase task diversity. To account for the uncertainty caused by the reduced number of training tasks, we propose a variational MetaModulation where the modulation parameters are treated as latent variables. We also introduce learning variational feature hierarchies by the variational MetaModulation, which modulates features at all layers and can take into account task uncertainty and generate more diverse tasks. The ablation studies illustrate the advantages of utilizing a learnable task modulation at different levels and demonstrate the benefit of incorporating probabilistic variants in few-task meta-learning. Our MetaModulation and its variational variants consistently outperform state-of-the-art alternatives on four few-task meta-learning benchmarks.
https://proceedings.mlr.press/v202/sun23c.html
https://proceedings.mlr.press/v202/sun23c/sun23c.pdf
https://openreview.net/forum?id=ZMP0Bki9aK
Revisiting Sampling for Combinatorial Optimization
https://proceedings.mlr.press/v202/sun23c.html
Haoran Sun, Katayoon Goshvadi, Azade Nova, Dale Schuurmans, Hanjun Dai
https://proceedings.mlr.press/v202/sun23c.html
ICML 2023
Sampling approaches like Markov chain Monte Carlo were once popular for combinatorial optimization, but the inefficiency of classical methods and the need for problem-specific designs curtailed ongoing development. Recent work has favored data-driven approaches that mitigate the need for hand-crafted heuristics, but these are often not usable as out-of-the-box solvers due to dependence on in-distribution training and limited scalability to large instances. In this paper, we revisit the idea of using sampling for combinatorial optimization, motivated by the significant recent advances of gradient-based discrete MCMC and new techniques for parallel neighborhood exploration on accelerators. Remarkably, we find that modern sampling strategies can leverage landscape information to provide general-purpose solvers that require no training and yet are competitive with state-of-the-art combinatorial solvers. In particular, experiments on cover vertex selection, graph partition and routing demonstrate better speed-quality trade-offs than current learning-based approaches, and sometimes even superior performance to commercial solvers and specialized algorithms.
https://proceedings.mlr.press/v202/sun23d.html
https://proceedings.mlr.press/v202/sun23d/sun23d.pdf
https://openreview.net/forum?id=kMWdo7dZUk
What Makes Entities Similar? A Similarity Flooding Perspective for Multi-sourced Knowledge Graph Embeddings
https://proceedings.mlr.press/v202/sun23d.html
Zequn Sun, Jiacheng Huang, Xiaozhou Xu, Qijin Chen, Weijun Ren, Wei Hu
https://proceedings.mlr.press/v202/sun23d.html
ICML 2023
Joint representation learning over multi-sourced knowledge graphs (KGs) yields transferable and expressive embeddings that improve downstream tasks. Entity alignment (EA) is a critical step in this process. Despite recent considerable research progress in embedding-based EA, how it works remains to be explored. In this paper, we provide a similarity flooding perspective to explain existing translation-based and aggregation-based EA models. We prove that the embedding learning process of these models actually seeks a fixpoint of pairwise similarities between entities. We also provide experimental evidence to support our theoretical analysis. We propose two simple but effective methods inspired by the fixpoint computation in similarity flooding, and demonstrate their effectiveness on benchmark datasets. Our work bridges the gap between recent embedding-based models and the conventional similarity flooding algorithm. It would improve our understanding of and increase our faith in embedding-based EA.
https://proceedings.mlr.press/v202/sun23e.html
https://proceedings.mlr.press/v202/sun23e/sun23e.pdf
https://openreview.net/forum?id=4JmgYXoBNd
Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming
https://proceedings.mlr.press/v202/sun23e.html
Chunlin Sun, Shang Liu, Xiaocheng Li
https://proceedings.mlr.press/v202/sun23e.html
ICML 2023
In this paper, we study the predict-then-optimize problem where the output of a machine learning prediction task is used as the input of some downstream optimization problem, say, the objective coefficient vector of a linear program. The problem is also known as predictive analytics or contextual linear programming. The existing approaches largely suffer from either (i) optimization intractability (a non-convex objective function)/statistical inefficiency (a suboptimal generalization bound) or (ii) requiring strong condition(s) such as no constraint or loss calibration. We develop a new approach to the problem called maximum optimality margin which designs the machine learning loss function by the optimality condition of the downstream optimization. The max-margin formulation enjoys both computational efficiency and good theoretical properties for the learning procedure. More importantly, our new approach only needs the observations of the optimal solution in the training data rather than the objective function, which makes it a new and natural approach to the inverse linear programming problem under both contextual and context-free settings; we also analyze the proposed method under both offline and online settings, and demonstrate its performance using numerical experiments.
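The predict-then-optimize pipeline discussed above can be pictured with a short, hypothetical example: a stand-in linear predictor maps a context to the cost vector of a downstream linear program, which is then solved with an off-the-shelf solver. The max-margin loss proposed in the paper is not implemented here; the predictor, constraints, and dimensions are illustrative assumptions.

```python
# Minimal predict-then-optimize sketch: predicted costs feed a downstream LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
context = rng.normal(size=3)
W = rng.normal(size=(4, 3))               # stand-in "learned" linear predictor
c_hat = W @ context                       # predicted objective coefficients

# downstream LP: minimize c_hat @ x subject to sum(x) = 1, x >= 0
res = linprog(c=c_hat, A_eq=np.ones((1, 4)), b_eq=[1.0], bounds=[(0, None)] * 4)
print("predicted costs :", c_hat.round(3))
print("optimal decision:", res.x.round(3))
```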
https://proceedings.mlr.press/v202/sun23f.html
https://proceedings.mlr.press/v202/sun23f/sun23f.pdf
https://openreview.net/forum?id=HvAFJsXPN9
Tensor Gaussian Process with Contraction for Multi-Channel Imaging Analysis
https://proceedings.mlr.press/v202/sun23f.html
Hu Sun, Ward Manchester, Meng Jin, Yang Liu, Yang Chen
https://proceedings.mlr.press/v202/sun23f.html
ICML 2023
Multi-channel imaging data is a prevalent data format in scientific fields such as astronomy and biology. The structured information and the high dimensionality of these 3-D tensor data make the analysis an intriguing but challenging topic for statisticians and practitioners. The low-rank scalar-on-tensor regression model, in particular, has received widespread attention and has been re-formulated as a tensor Gaussian Process (Tensor-GP) model with a multi-linear kernel in Yu et al. (2018). In this paper, we extend the Tensor-GP model by introducing an integrative dimensionality reduction technique, called tensor contraction, with a Tensor-GP for a scalar-on-tensor regression task with multi-channel imaging data. This is motivated by the solar flare forecasting problem with high-dimensional multi-channel imaging data. We first estimate a latent, reduced-size tensor for each data tensor and then apply a multi-linear Tensor-GP on the latent tensor data for prediction. We introduce an anisotropic total-variation regularization when conducting the tensor contraction to obtain a sparse and smooth latent tensor. We then propose an alternating proximal gradient descent algorithm for estimation. We validate our approach via extensive simulation studies and by applying it to the solar flare forecasting problem.
https://proceedings.mlr.press/v202/sun23g.html
https://proceedings.mlr.press/v202/sun23g/sun23g.pdf
https://openreview.net/forum?id=Asrg2we3dP
MABe22: A Multi-Species Multi-Task Benchmark for Learned Representations of Behavior
https://proceedings.mlr.press/v202/sun23g.html
Jennifer J. Sun, Markus Marks, Andrew Wesley Ulmer, Dipam Chakraborty, Brian Geuther, Edward Hayes, Heng Jia, Vivek Kumar, Sebastian Oleszko, Zachary Partridge, Milan Peelman, Alice Robie, Catherine E Schretter, Keith Sheppard, Chao Sun, Param Uttarwar, Julian Morgan Wagner, Erik Werner, Joseph Parker, Pietro Perona, Yisong Yue, Kristin Branson, Ann Kennedy
https://proceedings.mlr.press/v202/sun23g.html
ICML 2023
We introduce MABe22, a large-scale, multi-agent video and trajectory benchmark to assess the quality of learned behavior representations. This dataset is collected from a variety of biology experiments, and includes triplets of interacting mice (4.7 million frames video+pose tracking data, 10 million frames pose only), symbiotic beetle-ant interactions (10 million frames video data), and groups of interacting flies (4.4 million frames of pose tracking data). Accompanying these data, we introduce a panel of real-life downstream analysis tasks to assess the quality of learned representations by evaluating how well they preserve information about the experimental conditions (e.g. strain, time of day, optogenetic stimulation) and animal behavior. We test multiple state-of-the-art self-supervised video and trajectory representation learning methods to demonstrate the use of our benchmark, revealing that methods developed using human action datasets do not fully translate to animal datasets. We hope that our benchmark and dataset encourage a broader exploration of behavior representation learning methods across species and settings.
https://proceedings.mlr.press/v202/sun23h.html
https://proceedings.mlr.press/v202/sun23h/sun23h.pdf
https://openreview.net/forum?id=vD1R00hROK
Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape
https://proceedings.mlr.press/v202/sun23h.html
Yan Sun, Li Shen, Shixiang Chen, Liang Ding, Dacheng Tao
https://proceedings.mlr.press/v202/sun23h.html
ICML 2023
In federated learning (FL), a cluster of local clients, coordinated by the global server, cooperatively train one model with privacy protection. Due to the multiple local updates and the isolated non-iid datasets, clients are prone to overfit to their own optima, which deviate substantially from the global objective and significantly undermine performance. Most previous works focus only on enhancing the consistency between the local and global objectives to alleviate this prejudicial client drift from an optimization perspective, but their performance deteriorates markedly under high heterogeneity. In this work, we propose a novel and general algorithm, FedSMOO, which jointly considers the optimization and generalization targets to efficiently improve the performance in FL. Concretely, FedSMOO adopts a dynamic regularizer to steer the local optima towards the global objective, which is meanwhile revised by the global Sharpness Aware Minimization (SAM) optimizer to search for consistent flat minima. Our theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound. Extensive numerical studies are conducted on real-world datasets to verify its peerless efficiency and excellent generality.
https://proceedings.mlr.press/v202/sun23i.html
https://proceedings.mlr.press/v202/sun23i/sun23i.pdf
https://openreview.net/forum?id=JHodnaW5WZ
When and How Does Known Class Help Discover Unknown Ones? Provable Understanding Through Spectral Analysis
https://proceedings.mlr.press/v202/sun23i.html
Yiyou Sun, Zhenmei Shi, Yingyu Liang, Yixuan Li
https://proceedings.mlr.press/v202/sun23i.html
ICML 2023
Novel Class Discovery (NCD) aims at inferring novel classes in an unlabeled set by leveraging prior knowledge from a labeled set with known classes. Despite its importance, there is a lack of theoretical foundations for NCD. This paper bridges the gap by providing an analytical framework to formalize and investigate when and how known classes can help discover novel classes. Tailored to the NCD problem, we introduce a graph-theoretic representation that can be learned by a novel NCD Spectral Contrastive Loss (NSCL). Minimizing this objective is equivalent to factorizing the graph’s adjacency matrix, which allows us to derive a provable error bound and provide the sufficient and necessary condition for NCD. Empirically, NSCL can match or outperform several strong baselines on common benchmark datasets, which is appealing for practical usage while enjoying theoretical guarantees.
https://proceedings.mlr.press/v202/sun23j.html
https://proceedings.mlr.press/v202/sun23j/sun23j.pdf
https://openreview.net/forum?id=x9VwD7x64f
Learning Prescriptive ReLU Networks
https://proceedings.mlr.press/v202/sun23j.html
Wei Sun, Asterios Tsiourvas
https://proceedings.mlr.press/v202/sun23j.html
ICML 2023
We study the problem of learning an optimal policy from a set of discrete treatment options using observational data. We propose a piecewise linear neural network model that can balance strong prescriptive performance and interpretability, which we refer to as the prescriptive ReLU network, or P-ReLU. We show analytically that this model (i) partitions the input space into disjoint polyhedra, where all instances that belong to the same partition receive the same treatment, and (ii) can be converted into an equivalent prescriptive tree with hyperplane splits for interpretability. We demonstrate the flexibility of the P-ReLU network as constraints can be easily incorporated with minor modifications to the architecture. Through experiments, we validate the superior prescriptive accuracy of P-ReLU against competing benchmarks. Lastly, we present examples of prescriptive trees extracted from trained P-ReLUs using a real-world dataset, for both the unconstrained and constrained scenarios.
https://proceedings.mlr.press/v202/sun23k.html
https://proceedings.mlr.press/v202/sun23k/sun23k.pdf
https://openreview.net/forum?id=AHWSZhGXbs
All in a Row: Compressed Convolution Networks for Graphs
https://proceedings.mlr.press/v202/sun23k.html
Junshu Sun, Shuhui Wang, Xinzhe Han, Zhe Xue, Qingming Huang
https://proceedings.mlr.press/v202/sun23k.html
ICML 2023
Compared to Euclidean convolution, existing graph convolution methods generally fail to learn diverse convolution operators under limited parameter scales and depend on additional treatments of multi-scale feature extraction. The challenges of generalizing Euclidean convolution to graphs arise from the irregular structure of graphs. To bridge the gap between Euclidean space and graph space, we propose a differentiable method for regularization on graphs that applies permutations to the input graphs. The permutations constrain all nodes in a row regardless of their input order and therefore enable the flexible generalization of Euclidean convolution. Based on the regularization of graphs, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning. CoCN follows the local feature learning and global parameter sharing mechanisms of Convolutional Neural Networks. The whole model can be trained end-to-end and is able to learn both individual node features and the corresponding structure features. We validate CoCN on several node classification and graph classification benchmarks. CoCN achieves superior performance over competitive convolutional GNNs and graph pooling models. Code is available at https://github.com/sunjss/CoCN.
https://proceedings.mlr.press/v202/sun23l.html
https://proceedings.mlr.press/v202/sun23l/sun23l.pdf
https://openreview.net/forum?id=a0kGwNUwil
Momentum Ensures Convergence of SIGNSGD under Weaker Assumptions
https://proceedings.mlr.press/v202/sun23l.html
Tao Sun, Qingsong Wang, Dongsheng Li, Bao Wang
https://proceedings.mlr.press/v202/sun23l.html
ICML 2023
Sign Stochastic Gradient Descent (signSGD) is a communication-efficient stochastic algorithm that only uses the sign information of the stochastic gradient to update the model’s weights. However, the existing convergence theory of signSGD either requires increasing batch sizes during training or assumes the gradient noise is symmetric and unimodal. Error feedback has been used to guarantee the convergence of signSGD under weaker assumptions at the cost of communication overhead. This paper revisits the convergence of signSGD and proves that momentum can remedy signSGD under weaker assumptions than previous techniques; in particular, our convergence theory does not require the assumption of bounded stochastic gradients or increased batch size. Our results resonate with previous empirical observations that, unlike signSGD, signSGD with momentum maintains good performance even with small batch sizes. Another new result is that signSGD with momentum can achieve an improved convergence rate when the objective function is second-order smooth. We further extend our theory to signSGD with majority vote and federated learning.
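The update rule at the heart of the result above is compact enough to write out; the following minimal sketch runs signSGD with momentum on a noisy quadratic, using only the sign of an exponentially averaged stochastic gradient. Step size, momentum, and the toy objective are illustrative choices, not settings from the paper.

```python
# Minimal signSGD-with-momentum sketch on a noisy quadratic objective.
import numpy as np

rng = np.random.default_rng(0)
d, lr, beta, steps = 20, 0.01, 0.9, 2000
w = rng.normal(size=d)
m = np.zeros(d)

for _ in range(steps):
    grad = w + 0.5 * rng.normal(size=d)    # stochastic gradient of 0.5 * ||w||^2
    m = beta * m + (1 - beta) * grad       # momentum buffer
    w -= lr * np.sign(m)                   # sign-of-momentum update

print("final loss:", 0.5 * np.sum(w ** 2))
```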
https://proceedings.mlr.press/v202/sun23m.html
https://proceedings.mlr.press/v202/sun23m/sun23m.pdf
https://openreview.net/forum?id=dwn6o2pYJp
A Critical Revisit of Adversarial Robustness in 3D Point Cloud Recognition with Diffusion-Driven Purification
https://proceedings.mlr.press/v202/sun23m.html
Jiachen Sun, Jiongxiao Wang, Weili Nie, Zhiding Yu, Zhuoqing Mao, Chaowei Xiao
https://proceedings.mlr.press/v202/sun23m.html
ICML 2023
3D point clouds serve as a crucial data representation in numerous real-world applications such as autonomous driving, robotics, and medical imaging. While the advancements in deep learning have spurred the utilization of 3D point clouds, deep models are notoriously vulnerable to adversarial attacks. Various defense solutions have been proposed to build robust models against adversarial attacks. In this work, we pinpoint a major limitation of the leading empirical defense, adversarial training, when applied to 3D point cloud models: gradient obfuscation, which significantly hampers robustness against potent attacks. To bridge the gap, we propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks. Since PointDP does not rely on predefined adversarial examples for training, it can defend against a variety of threats. We conduct a comprehensive evaluation of PointDP across six representative 3D point cloud architectures, employing sixteen strong and adaptive attacks to manifest its foundational robustness. Our evaluation shows that PointDP achieves significantly better (i.e., 12.6%-40.3%) adversarial robustness than state-of-the-art methods under strong attacks bounded by different $\ell_p$ norms.
https://proceedings.mlr.press/v202/sun23n.html
https://proceedings.mlr.press/v202/sun23n/sun23n.pdf
https://openreview.net/forum?id=J4w91xRPBY
SDDM: Score-Decomposed Diffusion Models on Manifolds for Unpaired Image-to-Image Translation
https://proceedings.mlr.press/v202/sun23n.html
Shikun Sun, Longhui Wei, Junliang Xing, Jia Jia, Qi Tian
https://proceedings.mlr.press/v202/sun23n.html
ICML 2023
Recent score-based diffusion models (SBDMs) show promising results in unpaired image-to-image translation (I2I). However, existing methods, whether energy-based or statistics-based, provide no explicit form of the intermediate generative distributions being interfered with. This work presents a new score-decomposed diffusion model (SDDM) on manifolds to explicitly optimize the tangled distributions during image generation. SDDM derives manifolds to make the distributions of adjacent time steps separable and decomposes the score function or energy guidance into an image "denoising" part and a content "refinement" part. To refine the image at the same noise level, we equalize the refinement parts of the score function and energy guidance, which permits multi-objective optimization on the manifold. We also leverage the block adaptive instance normalization module to construct manifolds of lower dimension that remain concentrated around the perturbed reference image. SDDM outperforms existing SBDM-based methods with much fewer diffusion steps on several I2I benchmarks.
https://proceedings.mlr.press/v202/sun23o.html
https://proceedings.mlr.press/v202/sun23o/sun23o.pdf
https://openreview.net/forum?id=ghdyH3u8y3
A Neural PDE Solver with Temporal Stencil Modeling
https://proceedings.mlr.press/v202/sun23o.html
Zhiqing Sun, Yiming Yang, Shinjae Yoo
https://proceedings.mlr.press/v202/sun23o.html
ICML 2023
Numerical simulation of non-linear partial differential equations plays a crucial role in modeling physical science and engineering phenomena, such as weather, climate, and aerodynamics. Recent Machine Learning (ML) models trained on low-resolution spatio-temporal signals have shown new promise in capturing important dynamics in high-resolution signals, under the condition that the models can effectively recover the missing details. However, this study shows that significant information is often lost in the low-resolution down-sampled features. To address such issues, we propose a new approach, namely Temporal Stencil Modeling (TSM), which combines the strengths of advanced time-series sequence modeling (with the HiPPO features) and state-of-the-art neural PDE solvers (with learnable stencil modeling). TSM aims to recover the lost information from the PDE trajectories and can be regarded as a temporal generalization of classic finite volume methods such as WENO. Our experimental results show that TSM achieves the new state-of-the-art simulation accuracy for 2-D incompressible Navier-Stokes turbulent flows: it significantly outperforms the previously reported best results by 19.9% in terms of the highly-correlated duration time, and reduces the inference latency to 80%. We also show a strong generalization ability of the proposed method to various out-of-distribution turbulent flow settings, as well as lower resolution or 1-D / 3-D settings. Our code is available at https://github.com/Edward-Sun/TSM-PDE.
https://proceedings.mlr.press/v202/sun23p.html
https://proceedings.mlr.press/v202/sun23p/sun23p.pdf
https://openreview.net/forum?id=YzmK8afGvq
Feature Expansion for Graph Neural Networks
https://proceedings.mlr.press/v202/sun23p.html
Jiaqi Sun, Lin Zhang, Guangyi Chen, Peng Xu, Kun Zhang, Yujiu Yang
https://proceedings.mlr.press/v202/sun23p.html
ICML 2023
Graph neural networks aim to learn representations for graph-structured data and show impressive performance in node classification. Recently, many methods have studied the representations of GNNs from the perspective of optimization goals and spectral graph theory. However, the feature space that dominates representation learning has not been systematically studied in graph neural networks. In this paper, we propose to fill this gap by analyzing the feature space of both spatial and spectral models. We decompose graph neural networks into determined feature spaces and trainable weights, providing the convenience of studying the feature space explicitly using matrix space analysis. In particular, we theoretically find that the feature space tends to be linearly correlated due to repeated aggregations. In this case, the feature space is constrained by the poor representation of shared weights or the limited dimensionality of node attributes in existing models, leading to poor performance. Motivated by these findings, we propose 1) feature subspaces flattening and 2) structural principal components to expand the feature space. Extensive experiments verify the effectiveness of our proposed more comprehensive feature space, with comparable inference time to the baseline, and demonstrate its efficient convergence capability.
https://proceedings.mlr.press/v202/sun23q.html
https://proceedings.mlr.press/v202/sun23q/sun23q.pdf
https://openreview.net/forum?id=rwLwGPdzDD
Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning
https://proceedings.mlr.press/v202/sun23q.html
Yihao Sun, Jiaji Zhang, Chengxing Jia, Haoxin Lin, Junyin Ye, Yang Yu
https://proceedings.mlr.press/v202/sun23q.html
ICML 2023
For offline reinforcement learning (RL), model-based methods are expected to be data-efficient as they incorporate dynamics models to generate more data. However, due to inevitable model errors, straightforwardly learning a policy in the model typically fails in the offline setting. Previous studies have incorporated conservatism to prevent out-of-distribution exploration. For example, MOPO penalizes rewards through uncertainty measures from predicting the next states, which we have discovered are loose bounds of the ideal uncertainty, i.e., the Bellman error. In this work, we propose MOdel-Bellman Inconsistency penalized offLinE Policy Optimization (MOBILE), a novel uncertainty-driven offline RL algorithm. MOBILE conducts uncertainty quantification through the inconsistency of Bellman estimations under an ensemble of learned dynamics models, which can be a better approximator to the true Bellman error, and penalizes the Bellman estimation based on this uncertainty. Empirically, we verify that our proposed uncertainty quantification is significantly closer to the true Bellman error than that of the compared methods. Consequently, MOBILE outperforms prior offline RL approaches on most tasks of the D4RL and NeoRL benchmarks.
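A hedged sketch of how an ensemble-based Bellman-inconsistency penalty of this kind might be computed: each learned dynamics model yields one Bellman target, and their spread penalizes the reward. The model interfaces, value function, and penalty coefficient below are assumptions for illustration, not the authors' implementation.

import numpy as np

def bellman_inconsistency_penalty(ensemble, value_fn, s, a, gamma=0.99, beta=1.0):
    """Penalize rewards by the std of Bellman targets across an ensemble of learned dynamics models.

    ensemble : list of models, each mapping (state, action) -> (next_state, reward)
    value_fn : callable mapping next_state -> estimated value
    Returns the penalized reward to be used for conservative model-based policy optimization.
    """
    targets = []
    for model in ensemble:
        s_next, r = model(s, a)
        targets.append(r + gamma * value_fn(s_next))   # one Bellman estimate per ensemble member
    targets = np.asarray(targets)
    penalty = targets.std()                            # inconsistency across the ensemble
    return targets.mean() - beta * penalty

# toy usage with two hypothetical dynamics models and a quadratic value function
model_a = lambda s, a: (s + a, -float(np.sum(s ** 2)))
model_b = lambda s, a: (s + 0.9 * a, -float(np.sum(s ** 2)) + 0.1)
value_fn = lambda s: -float(np.sum(s ** 2))
s, a = np.ones(3), 0.1 * np.ones(3)
print(bellman_inconsistency_penalty([model_a, model_b], value_fn, s, a))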
https://proceedings.mlr.press/v202/sundararajan23a.html
https://proceedings.mlr.press/v202/sundararajan23a/sundararajan23a.pdf
https://openreview.net/forum?id=9PJ2V6qvQL
Inflow, Outflow, and Reciprocity in Machine Learning
https://proceedings.mlr.press/v202/sundararajan23a.html
Mukund Sundararajan, Walid Krichene
https://proceedings.mlr.press/v202/sundararajan23a.html
ICML 2023
Data is pooled across entities (individuals or enterprises) to create machine learning models, and sometimes, the entities that contribute the data also benefit from the models. Consider for instance a recommender system (e.g. Spotify, Instagram or YouTube), a health care app that predicts the risk for some disease, or a service built by pooling data across enterprises. In this work we propose a framework to study this value exchange, i.e., we model and measure contributions (outflows), benefits (inflows) and the balance between contributions and benefits (the degree of reciprocity). We show theoretically, and via experiments that under certain distributional assumptions, some classes of models are approximately reciprocal. These results only scratch the surface; we conclude with several open directions.
https://proceedings.mlr.press/v202/suriyakumar23a.html
https://proceedings.mlr.press/v202/suriyakumar23a/suriyakumar23a.pdf
https://openreview.net/forum?id=70sWtujQAU
When Personalization Harms Performance: Reconsidering the Use of Group Attributes in Prediction
https://proceedings.mlr.press/v202/suriyakumar23a.html
Vinith Menon Suriyakumar, Marzyeh Ghassemi, Berk Ustun
https://proceedings.mlr.press/v202/suriyakumar23a.html
ICML 2023
Machine learning models are often personalized with categorical attributes that define groups. In this work, we show that personalization with group attributes can inadvertently reduce performance at a group level – i.e., groups may receive unnecessarily inaccurate predictions by sharing their personal characteristics. We present formal conditions to ensure the fair use of group attributes in a prediction task, and describe how they can be checked by training one additional model. We characterize how fair use conditions can be violated due to standard practices in model development, and study the prevalence of fair use violations in clinical prediction tasks. Our results show that personalization often fails to produce a tailored performance gain for every group who reports personal data, and underscore the need to evaluate fair use when personalizing models with characteristics that are protected, sensitive, self-reported, or costly to acquire.
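A hedged sketch of the kind of check the abstract describes (train one additional model without the group attribute and compare group-level performance); the synthetic data, features, in-sample evaluation, and the informal "gain" criterion are illustrative assumptions, not the paper's formal fair-use conditions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)                      # self-reported group attribute
x = rng.normal(size=(n, 3))
y = (x[:, 0] + 0.2 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_personalized = np.column_stack([x, group])            # model that uses the group attribute
X_generic = x                                           # the one additional model that does not

m_p = LogisticRegression().fit(X_personalized, y)
m_g = LogisticRegression().fit(X_generic, y)

for g in (0, 1):
    idx = group == g
    acc_p = accuracy_score(y[idx], m_p.predict(X_personalized[idx]))
    acc_g = accuracy_score(y[idx], m_g.predict(X_generic[idx]))     # evaluated in-sample for brevity
    # informal red flag for group g: reporting the attribute does not improve that group's accuracy
    print(f"group {g}: personalized={acc_p:.3f} generic={acc_g:.3f} gain={acc_p - acc_g:+.3f}")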
https://proceedings.mlr.press/v202/susano-pinto23a.html
https://proceedings.mlr.press/v202/susano-pinto23a/susano-pinto23a.pdf
https://openreview.net/forum?id=zzOooeAqtT
Tuning Computer Vision Models With Task Rewards
https://proceedings.mlr.press/v202/susano-pinto23a.html
André Susano Pinto, Alexander Kolesnikov, Yuge Shi, Lucas Beyer, Xiaohua Zhai
https://proceedings.mlr.press/v202/susano-pinto23a.html
ICML 2023
Misalignment between model predictions and intended usage can be detrimental for the deployment of computer vision models. The issue is exacerbated when the task involves complex structured outputs, as it becomes harder to design procedures which address this misalignment. In natural language processing, this is often addressed using reinforcement learning techniques that align models with a task reward. We adopt this approach and show its surprising effectiveness to improve generic models pretrained to imitate example outputs across multiple computer vision tasks, such as object detection, panoptic segmentation, colorization and image captioning. We believe this approach has the potential to be widely useful for better aligning models with a diverse range of computer vision tasks.
https://proceedings.mlr.press/v202/suttle23a.html
https://proceedings.mlr.press/v202/suttle23a/suttle23a.pdf
https://openreview.net/forum?id=DVinvfAvIB
Beyond Exponentially Fast Mixing in Average-Reward Reinforcement Learning via Multi-Level Monte Carlo Actor-Critic
https://proceedings.mlr.press/v202/suttle23a.html
Wesley A Suttle, Amrit Bedi, Bhrij Patel, Brian M. Sadler, Alec Koppel, Dinesh Manocha
https://proceedings.mlr.press/v202/suttle23a.html
ICML 2023
Many existing reinforcement learning (RL) methods employ stochastic gradient iteration on the back end, whose stability hinges upon a hypothesis that the data-generating process mixes exponentially fast with a rate parameter that appears in the step-size selection. Unfortunately, this assumption is violated for large state spaces or settings with sparse rewards, and the mixing time is unknown, rendering the prescribed step size unusable. In this work, we propose an RL methodology attuned to the mixing time by employing a multi-level Monte Carlo estimator for the critic, the actor, and the average reward embedded within an actor-critic (AC) algorithm. This method, which we call Multi-level Actor-Critic (MAC), is developed specifically for infinite-horizon average-reward settings and neither relies on oracle knowledge of the mixing time in its parameter selection nor assumes its exponential decay; it is therefore readily applicable to applications with slower mixing times. Nonetheless, it achieves a convergence rate comparable to SOTA actor-critic algorithms. We experimentally show that these alleviated restrictions on the technical conditions required for stability translate to superior performance in practice for RL problems with sparse rewards.
https://proceedings.mlr.press/v202/suzuki23a.html
https://proceedings.mlr.press/v202/suzuki23a/suzuki23a.pdf
https://openreview.net/forum?id=T7kquivfZC
Tight and fast generalization error bound of graph embedding in metric space
https://proceedings.mlr.press/v202/suzuki23a.html
Atsushi Suzuki, Atsushi Nitanda, Taiji Suzuki, Jing Wang, Feng Tian, Kenji Yamanishi
https://proceedings.mlr.press/v202/suzuki23a.html
ICML 2023
Recent studies have experimentally shown that we can achieve effective and efficient graph embedding in non-Euclidean metric space, where graph embedding aims to obtain the vertices’ representations reflecting the graph’s structure in the metric space. Specifically, graph embedding in hyperbolic space has experimentally succeeded in embedding graphs with hierarchical-tree structure, e.g., data in natural languages, social networks, and knowledge bases. However, recent theoretical analyses have shown a much higher upper bound on non-Euclidean graph embedding’s generalization error than its Euclidean counterpart’s, where a high generalization error indicates that the incompleteness and noise in the data can significantly damage learning performance. It implies that the existing bound cannot guarantee the success of graph embedding in non-Euclidean metric space in a practical training data size, which can prevent non-Euclidean graph embedding’s application in real problems. This paper provides a novel upper bound of graph embedding’s generalization error by evaluating the local Rademacher complexity of the model as a function set of the distances of representation couples. Our bound clarifies that the performance of graph embedding in non-Euclidean metric space, including hyperbolic space, is better than the existing upper bounds suggest. Specifically, our new upper bound is polynomial in the metric space’s geometric radius $R$ and can be $O(\frac{1}{S})$ at the fastest, where $S$ is the training data size. Our bound is significantly tighter and faster than the existing one, which can be exponential in $R$ and $O(\frac{1}{\sqrt{S}})$ at the fastest. Specific calculations on example cases show that graph embedding in non-Euclidean metric space can outperform that in Euclidean space with much smaller training data than the existing bound has suggested.
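As a purely schematic contrast (not the paper's exact statement, with unspecified constants and dependencies absorbed into $\lesssim$ and a constant $c$ assumed for illustration), the two bound shapes described above can be written as
$$\text{existing bound:}\ \ \mathrm{GenErr} \lesssim \frac{e^{cR}}{\sqrt{S}}, \qquad \text{this work (at the fastest):}\ \ \mathrm{GenErr} \lesssim \frac{\mathrm{poly}(R)}{S}.$$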
https://proceedings.mlr.press/v202/sverdrup23a.html
https://proceedings.mlr.press/v202/sverdrup23a/sverdrup23a.pdf
https://openreview.net/forum?id=Fq03w1f6hy
Proximal Causal Learning of Conditional Average Treatment Effects
https://proceedings.mlr.press/v202/sverdrup23a.html
Erik Sverdrup, Yifan Cui
https://proceedings.mlr.press/v202/sverdrup23a.html
ICML 2023
Efficiently and flexibly estimating treatment effect heterogeneity is an important task in a wide variety of settings ranging from medicine to marketing, and there are a considerable number of promising conditional average treatment effect estimators currently available. These, however, typically rely on the assumption that the measured covariates are enough to justify conditional exchangeability. We propose the P-learner, motivated by the R- and DR-learner, a tailored two-stage loss function for learning heterogeneous treatment effects in settings where exchangeability given observed covariates is an implausible assumption, and we wish to rely on proxy variables for causal inference. Our proposed estimator can be implemented by off-the-shelf loss-minimizing machine learning methods, which in the case of kernel regression satisfies an oracle bound on the estimated error as long as the nuisance components are estimated reasonably well.
https://proceedings.mlr.press/v202/swamy23a.html
https://proceedings.mlr.press/v202/swamy23a/swamy23a.pdf
https://openreview.net/forum?id=NrJCNcI5WV
Inverse Reinforcement Learning without Reinforcement Learning
https://proceedings.mlr.press/v202/swamy23a.html
Gokul Swamy, David Wu, Sanjiban Choudhury, Drew Bagnell, Steven Wu
https://proceedings.mlr.press/v202/swamy23a.html
ICML 2023
Inverse Reinforcement Learning (IRL) is a powerful set of techniques for imitation learning that aims to learn a reward function that rationalizes expert demonstrations. Unfortunately, traditional IRL methods suffer from a computational weakness: they require repeatedly solving a hard reinforcement learning (RL) problem as a subroutine. This is counter-intuitive from the viewpoint of reductions: we have reduced the easier problem of imitation learning to repeatedly solving the harder problem of RL. Another thread of work has proved that access to the side-information of the distribution of states where a strong policy spends time can dramatically reduce the sample and computational complexities of solving an RL problem. In this work, we demonstrate for the first time a more informed imitation learning reduction where we utilize the state distribution of the expert to alleviate the global exploration component of the RL subroutine, providing an exponential speedup in theory. In practice, we find that we are able to significantly speed up the prior art on continuous control tasks.
https://proceedings.mlr.press/v202/swanson23a.html
https://proceedings.mlr.press/v202/swanson23a/swanson23a.pdf
https://openreview.net/forum?id=W6LjGzW8Kk
Von Mises Mixture Distributions for Molecular Conformation Generation
https://proceedings.mlr.press/v202/swanson23a.html
Kirk Swanson, Jake Lawrence Williams, Eric M Jonas
https://proceedings.mlr.press/v202/swanson23a.html
ICML 2023
Molecules are frequently represented as graphs, but the underlying 3D molecular geometry (the locations of the atoms) ultimately determines most molecular properties. However, most molecules are not static and at room temperature adopt a wide variety of geometries or $\textit{conformations}$. The resulting distribution on geometries $p(x)$ is known as the Boltzmann distribution, and many molecular properties are expectations computed under this distribution. Generating accurate samples from the Boltzmann distribution is therefore essential for computing these expectations accurately. Traditional sampling-based methods are computationally expensive, and most recent machine learning-based methods have focused on identifying $\textit{modes}$ in this distribution rather than generating true $\textit{samples}$. Generating such samples requires capturing conformational variability, and it has been widely recognized that the majority of conformational variability in molecules arises from rotatable bonds. In this work, we present VonMisesNet, a new graph neural network that captures conformational variability via a variational approximation of rotatable bond torsion angles as a mixture of von Mises distributions. We demonstrate that VonMisesNet can generate conformations for arbitrary molecules in a way that is both physically accurate with respect to the Boltzmann distribution and orders of magnitude faster than existing sampling methods.
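A minimal sketch of the output representation described above: drawing torsion angles from a mixture of von Mises distributions. The mixture weights, locations, and concentrations below are made-up illustrative values, not parameters predicted by VonMisesNet.

import numpy as np

def sample_von_mises_mixture(weights, locs, kappas, n_samples, rng=None):
    """Draw torsion-angle samples (in radians) from a mixture of von Mises distributions."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    components = rng.choice(len(weights), size=n_samples, p=weights)   # pick a mixture component
    return np.array([rng.vonmises(locs[c], kappas[c]) for c in components])

# toy usage: a bimodal torsion-angle distribution (e.g. anti vs. gauche conformers)
angles = sample_von_mises_mixture(
    weights=[0.7, 0.3],
    locs=[np.pi, np.pi / 3],
    kappas=[8.0, 4.0],
    n_samples=5,
    rng=np.random.default_rng(0),
)
print(np.round(angles, 3))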
https://proceedings.mlr.press/v202/syed23a.html
https://proceedings.mlr.press/v202/syed23a/syed23a.pdf
https://openreview.net/forum?id=gy5bhRzsex
Optimal randomized multilevel Monte Carlo for repeatedly nested expectations
https://proceedings.mlr.press/v202/syed23a.html
Yasa Syed, Guanyang Wang
https://proceedings.mlr.press/v202/syed23a.html
ICML 2023
The estimation of repeatedly nested expectations is a challenging task that arises in many real-world systems. However, existing methods generally suffer from high computational costs when the number of nestings becomes large. Fix any non-negative integer $D$ for the total number of nestings. Standard Monte Carlo methods typically cost at least $\mathcal{O}(\varepsilon^{-(2+D)})$ and sometimes $\mathcal {O}(\varepsilon^{-2(1+D)})$ to obtain an estimator up to $\varepsilon$-error. More advanced methods, such as multilevel Monte Carlo, currently only exist for $D = 1$. In this paper, we propose a novel Monte Carlo estimator called $\mathsf{READ}$, which stands for “Recursive Estimator for Arbitrary Depth.” Our estimator has an optimal computational cost of $\mathcal{O}(\varepsilon^{-2})$ for every fixed $D$ under suitable assumptions, and a nearly optimal computational cost of $\mathcal{O}(\varepsilon^{-2(1 + \delta)})$ for any $0 < \delta < \frac12$ under much more general assumptions. Our estimator is also unbiased, which makes it easy to parallelize. The key ingredients in our construction are an observation of the problem’s recursive structure and the recursive use of the randomized multilevel Monte Carlo method.
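A hedged sketch of the randomized multilevel Monte Carlo building block that READ applies recursively, shown here for a single nesting level, i.e. estimating $E_Y[g(E[X \mid Y])]$: a random level is drawn, the level-$l$ correction uses $2^l$ inner samples, and dividing by the level probability keeps the estimator unbiased (up to the truncation used only in this toy demo). The distributions, $g$, and the level-probability decay rate are illustrative assumptions, not the authors' construction.

import numpy as np

def randomized_mlmc_single_nesting(outer_sampler, inner_sampler, g, n_outer=2000, r=1.5, rng=None):
    """Randomized-MLMC estimate of E_Y[ g( E[X | Y] ) ].

    outer_sampler() -> y ;  inner_sampler(y, n) -> array of n conditional draws of X given y.
    """
    rng = np.random.default_rng() if rng is None else rng
    max_level = 12                                   # truncation only for this toy demo
    p = 2.0 ** (-r * np.arange(max_level + 1))
    p /= p.sum()
    total = 0.0
    for _ in range(n_outer):
        y = outer_sampler()
        level = rng.choice(max_level + 1, p=p)
        xs = inner_sampler(y, 2 ** level)
        fine = g(xs.mean())                          # estimate with 2**level inner samples
        coarse = 0.0 if level == 0 else g(xs[: 2 ** (level - 1)].mean())   # coupled coarse estimate
        total += (fine - coarse) / p[level]          # importance-weight the random level
    return total / n_outer

# toy problem: Y ~ N(0,1), X | Y ~ N(Y,1), g = square, so the target is E[Y^2] = 1
rng = np.random.default_rng(0)
est = randomized_mlmc_single_nesting(
    outer_sampler=lambda: rng.normal(),
    inner_sampler=lambda y, n: y + rng.normal(size=n),
    g=lambda m: m ** 2,
    rng=rng,
)
print(est)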
https://proceedings.mlr.press/v202/szot23a.html
https://proceedings.mlr.press/v202/szot23a/szot23a.pdf
https://openreview.net/forum?id=BYEsw113sz
Adaptive Coordination in Social Embodied Rearrangement
https://proceedings.mlr.press/v202/szot23a.html
Andrew Szot, Unnat Jain, Dhruv Batra, Zsolt Kira, Ruta Desai, Akshara Rai
https://proceedings.mlr.press/v202/szot23a.html
ICML 2023
We present the task of "Social Rearrangement", consisting of cooperative everyday tasks like setting up the dinner table, tidying a house or unpacking groceries in a simulated multi-agent environment. In Social Rearrangement, two robots coordinate to complete a long-horizon task, using onboard sensing and egocentric observations, and no privileged information about the environment. We study zero-shot coordination (ZSC) in this task, where an agent collaborates with a new partner, emulating a scenario where a robot collaborates with a new human partner. Prior ZSC approaches struggle to generalize in our complex and visually rich setting, and on further analysis, we find that they fail to generate diverse coordination behaviors at training time. To counter this, we propose Behavior Diversity Play (BDP), a novel ZSC approach that encourages diversity through a discriminability objective. Our results demonstrate that BDP learns adaptive agents that can tackle visual coordination, and zero-shot generalize to new partners in unseen environments, achieving 35% higher success and 32% higher efficiency compared to baselines.
https://proceedings.mlr.press/v202/taghibakhshi23a.html
https://proceedings.mlr.press/v202/taghibakhshi23a/taghibakhshi23a.pdf
https://openreview.net/forum?id=bkRrdYhd7U
MG-GNN: Multigrid Graph Neural Networks for Learning Multilevel Domain Decomposition Methods
https://proceedings.mlr.press/v202/taghibakhshi23a.html
Ali Taghibakhshi, Nicolas Nytko, Tareq Uz Zaman, Scott Maclachlan, Luke Olson, Matthew West
https://proceedings.mlr.press/v202/taghibakhshi23a.html
ICML 2023
Domain decomposition methods (DDMs) are popular solvers for discretized systems of partial differential equations (PDEs), with one-level and multilevel variants. These solvers rely on several algorithmic and mathematical parameters, prescribing overlap, subdomain boundary conditions, and other properties of the DDM. While some work has been done on optimizing these parameters, it has mostly focused on the one-level setting or special cases such as structured-grid discretizations with regular subdomain construction. In this paper, we propose multigrid graph neural networks (MG-GNN), a novel GNN architecture for learning optimized parameters in two-level DDMs. We train MG-GNN using a new unsupervised loss function, enabling effective training on small problems that yields robust performance on unstructured grids that are orders of magnitude larger than those in the training set. We show that MG-GNN outperforms popular hierarchical graph network architectures for this optimization and that our proposed loss function is critical to achieving this improved performance.
https://proceedings.mlr.press/v202/tai23a.html
https://proceedings.mlr.press/v202/tai23a/tai23a.pdf
https://openreview.net/forum?id=MA5yCdi2LK
Learning Mixtures of Gaussians with Censored Data
https://proceedings.mlr.press/v202/tai23a.html
Wai Ming Tai, Bryon Aragam
https://proceedings.mlr.press/v202/tai23a.html
ICML 2023
We study the problem of learning mixtures of Gaussians with censored data. Statistical learning with censored data is a classical problem with numerous practical applications; however, finite-sample guarantees for even simple latent variable models such as Gaussian mixtures are missing. Formally, we are given censored data from a mixture of univariate Gaussians $\sum_{i=1}^k w_i \mathcal{N}(\mu_i,\sigma^2)$, i.e., the sample is observed only if it lies inside a set $S$. The goal is to learn the weights $w_i$ and the means $\mu_i$. We propose an algorithm that takes only $\frac{1}{\varepsilon^{O(k)}}$ samples to estimate the weights $w_i$ and the means $\mu_i$ within $\varepsilon$ error.
https://proceedings.mlr.press/v202/takakura23a.html
https://proceedings.mlr.press/v202/takakura23a/takakura23a.pdf
https://openreview.net/forum?id=DCjUPvimM2
Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input
https://proceedings.mlr.press/v202/takakura23a.html
Shokichi Takakura, Taiji Suzuki
https://proceedings.mlr.press/v202/takakura23a.html
ICML 2023
Despite the great success of Transformer networks in various applications such as natural language processing and computer vision, their theoretical aspects are not well understood. In this paper, we study the approximation and estimation ability of Transformers as sequence-to-sequence functions with infinite dimensional inputs. Although inputs and outputs are both infinite dimensional, we show that when the target function has anisotropic smoothness, Transformers can avoid the curse of dimensionality due to their feature extraction ability and parameter sharing property. In addition, we show that even if the smoothness changes depending on each input, Transformers can estimate the importance of features for each input and extract important features dynamically. We then prove that Transformers achieve a convergence rate similar to that of the fixed-smoothness case. Our theoretical results support the practical success of Transformers for high dimensional data.
https://proceedings.mlr.press/v202/takamoto23a.html
https://proceedings.mlr.press/v202/takamoto23a/takamoto23a.pdf
https://openreview.net/forum?id=rkBXPvs8s0
Learning Neural PDE Solvers with Parameter-Guided Channel Attention
https://proceedings.mlr.press/v202/takamoto23a.html
Makoto Takamoto, Francesco Alesiani, Mathias Niepert
https://proceedings.mlr.press/v202/takamoto23a.html
ICML 2023
Scientific Machine Learning (SciML) is concerned with the development of learned emulators of physical systems governed by partial differential equations (PDE). In application domains such as weather forecasting, molecular dynamics, and inverse design, ML-based surrogate models are increasingly used to augment or replace inefficient and often non-differentiable numerical simulation algorithms. While a number of ML-based methods for approximating the solutions of PDEs have been proposed in recent years, they typically do not adapt to the parameters of the PDEs, making it difficult to generalize to PDE parameters not seen during training. We propose a Channel Attention guided by PDE Parameter Embeddings (CAPE) component for neural surrogate models and a simple yet effective curriculum learning strategy. The CAPE module can be combined with any neural PDE solver, allowing it to adapt to unseen PDE parameters. The curriculum learning strategy provides a seamless transition between teacher-forcing and fully auto-regressive training. We compare CAPE in conjunction with the curriculum learning strategy using a PDE benchmark and obtain consistent and significant improvements over the baseline models. The experiments also show several advantages of CAPE, such as its increased ability to generalize to unseen PDE parameters without large increases in inference time and parameter count. An implementation of the method and experiments are available at https://anonymous.4open.science/r/CAPE-ML4Sci-145B.
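A hedged sketch of what a parameter-conditioned channel-attention gate could look like inside a neural PDE surrogate: an embedding of the PDE parameters produces per-channel weights that rescale the hidden features. The layer sizes, activation choices, and the way the parameters enter are assumptions for illustration, not the CAPE architecture itself.

import torch
import torch.nn as nn

class ParameterGuidedChannelAttention(nn.Module):
    """Scale feature channels with weights produced from a PDE-parameter embedding (sketch)."""

    def __init__(self, n_channels, n_pde_params, hidden=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_pde_params, hidden),
            nn.GELU(),
            nn.Linear(hidden, n_channels),
            nn.Sigmoid(),                        # per-channel attention weights in (0, 1)
        )

    def forward(self, features, pde_params):
        # features: (batch, channels, x, y); pde_params: (batch, n_pde_params), e.g. viscosity, forcing
        weights = self.gate(pde_params)          # (batch, channels)
        return features * weights[:, :, None, None]

# toy usage on one hidden feature map of a surrogate model
layer = ParameterGuidedChannelAttention(n_channels=32, n_pde_params=2)
h = torch.randn(4, 32, 64, 64)
params = torch.tensor([[1e-3, 0.1]] * 4)
print(layer(h, params).shape)                    # torch.Size([4, 32, 64, 64])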
https://proceedings.mlr.press/v202/takemura23a.html
https://proceedings.mlr.press/v202/takemura23a/takemura23a.pdf
https://openreview.net/forum?id=0Rw09XcGS4
Contextual Conservative Interleaving Bandits
https://proceedings.mlr.press/v202/takemura23a.html
Kei Takemura
https://proceedings.mlr.press/v202/takemura23a.html
ICML 2023
The performance of a bandit algorithm is usually measured by the cumulative rewards of the actions chosen by the algorithm. However, in many real-world applications, the rewards in each round should be good enough for reasons such as safety and fairness. In this paper, we investigate the contextual conservative interleaving bandit problem, which has a performance constraint that requires the chosen actions to be not much worse than given baseline actions in each round. This work is the first to simultaneously consider the following practical situations: (1) multiple actions are chosen in a round, (2) the feature vectors associated with given actions depend on the round, and (3) the performance constraints in each round depend only on the actions chosen in that round. We propose a meta-algorithm, Greedy on Confidence Widths (GCW), that satisfies the performance constraints with high probability. GCW uses a standard bandit algorithm and achieves minimax optimal regret up to logarithmic factors if the algorithm used is also minimax optimal. We improve the existing analyses for the C${}^2$UCB algorithm and Thompson sampling to combine them with GCW. We show that these algorithms achieve near-optimal regret when the feasible sets of given actions are the bases of a matroid. Our numerical experiments on a real-world dataset demonstrate that GCW with the standard bandit algorithms efficiently improves performance while satisfying the performance constraints.
https://proceedings.mlr.press/v202/takeno23a.html
https://proceedings.mlr.press/v202/takeno23a/takeno23a.pdf
https://openreview.net/forum?id=z6UjervUws
Randomized Gaussian Process Upper Confidence Bound with Tighter Bayesian Regret Bounds
https://proceedings.mlr.press/v202/takeno23a.html
Shion Takeno, Yu Inatsu, Masayuki Karasuyama
https://proceedings.mlr.press/v202/takeno23a.html
ICML 2023
Gaussian process upper confidence bound (GP-UCB) is a theoretically promising approach for black-box optimization; however, the confidence parameter $\beta$ is considerably large in the theorem and chosen heuristically in practice. Then, randomized GP-UCB (RGP-UCB) uses a randomized confidence parameter, which follows the Gamma distribution, to mitigate the impact of manually specifying $\beta$. This study first generalizes the regret analysis of RGP-UCB to a wider class of distributions, including the Gamma distribution. Furthermore, we propose improved RGP-UCB (IRGP-UCB) based on a two-parameter exponential distribution, which achieves tighter Bayesian regret bounds. IRGP-UCB does not require an increase in the confidence parameter in terms of the number of iterations, which avoids over-exploration in the later iterations. Finally, we demonstrate the effectiveness of IRGP-UCB through extensive experiments.
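A hedged sketch of the randomized-confidence-parameter idea: at each iteration the UCB width uses a fresh draw of $\beta$ from a shifted (two-parameter) exponential distribution instead of a growing deterministic schedule. The kernel, offset, rate, and the toy objective are placeholders, not the constants prescribed by the paper's theory.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def randomized_ucb_acquisition(gp, candidates, rng, offset=1.0, rate=1.0):
    """UCB scores with a randomized confidence parameter beta ~ offset + Exp(rate)."""
    beta = offset + rng.exponential(1.0 / rate)      # shifted exponential draw, fixed over iterations' count
    mean, std = gp.predict(candidates, return_std=True)
    return mean + np.sqrt(beta) * std

# toy 1-D maximization loop
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) - 0.5 * x ** 2
X = rng.uniform(-2, 2, size=(3, 1))
y = f(X).ravel()
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4).fit(X, y)
    scores = randomized_ucb_acquisition(gp, candidates, rng)
    x_next = candidates[np.argmax(scores)].reshape(1, 1)
    X, y = np.vstack([X, x_next]), np.append(y, f(x_next).ravel())
print(X[np.argmax(y)], y.max())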
https://proceedings.mlr.press/v202/takeno23b.html
https://proceedings.mlr.press/v202/takeno23b/takeno23b.pdf
https://openreview.net/forum?id=ue4OeIMU29
Towards Practical Preferential Bayesian Optimization with Skew Gaussian Processes
https://proceedings.mlr.press/v202/takeno23b.html
Shion Takeno, Masahiro Nomura, Masayuki Karasuyama
https://proceedings.mlr.press/v202/takeno23b.html
ICML 2023
We study preferential Bayesian optimization (BO) where reliable feedback is limited to pairwise comparison called duels. An important challenge in preferential BO, which uses the preferential Gaussian process (GP) model to represent flexible preference structure, is that the posterior distribution is a computationally intractable skew GP. The most widely used approach for preferential BO is Gaussian approximation, which ignores the skewness of the true posterior. Alternatively, Markov chain Monte Carlo (MCMC) based preferential BO is also proposed. In this work, we first verify the accuracy of Gaussian approximation, from which we reveal the critical problem that the predictive probability of duels can be inaccurate. This observation motivates us to improve the MCMC-based estimation for skew GP, for which we show the practical efficiency of Gibbs sampling and derive the low variance MC estimator. However, the computational time of MCMC can still be a bottleneck in practice. Towards building a more practical preferential BO, we develop a new method that achieves both high computational efficiency and low sample complexity, and then demonstrate its effectiveness through extensive numerical experiments.
https://proceedings.mlr.press/v202/tan23a.html
https://proceedings.mlr.press/v202/tan23a/tan23a.pdf
https://openreview.net/forum?id=6bfF0RYvMy
Robust Explanation for Free or At the Cost of Faithfulness
https://proceedings.mlr.press/v202/tan23a.html
Zeren Tan, Yang Tian
https://proceedings.mlr.press/v202/tan23a.html
ICML 2023
Devoted to interpreting the explicit behaviors of machine learning models, explanation methods can identify implicit characteristics of models to improve trustworthiness. However, explanation methods are shown as vulnerable to adversarial perturbations, implying security concerns in high-stakes domains. In this paper, we investigate when robust explanations are necessary and what they cost. We prove that the robustness of explanations is determined by the robustness of the model to be explained. Therefore, we can have robust explanations for free for a robust model. To have robust explanations for a non-robust model, composing the original model with a kernel is proved as an effective way that returns strictly more robust explanations. Nevertheless, we argue that this also incurs a robustness-faithfulness trade-off, i.e., contrary to common expectations, an explanation method may also become less faithful when it becomes more robust. This argument holds for any model. We are the first to introduce this trade-off and theoretically prove its existence for SmoothGrad. Theoretical findings are verified by empirical evidence on six state-of-the-art explanation methods and four backbones.
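For reference, a minimal sketch of SmoothGrad, the explanation method whose robustness-faithfulness trade-off is proved here: input gradients are averaged over Gaussian-perturbed copies of the input. The tiny classifier, noise level, and sample count are illustrative.

import torch

def smoothgrad(model, x, target, n_samples=50, sigma=0.1):
    """Average input gradients over Gaussian-perturbed copies of x (SmoothGrad)."""
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, target]              # scalar score for the target class
        score.backward()
        grads += noisy.grad                          # accumulate the gradient w.r.t. the noisy input
    return grads / n_samples

# toy usage with a hypothetical tiny classifier
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
x = torch.randn(1, 3, 8, 8)
attribution = smoothgrad(model, x, target=3)
print(attribution.shape)                             # torch.Size([1, 3, 8, 8])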
https://proceedings.mlr.press/v202/tan23b.html
https://proceedings.mlr.press/v202/tan23b/tan23b.pdf
https://openreview.net/forum?id=0jSSVPrfcX
Provably Invariant Learning without Domain Information
https://proceedings.mlr.press/v202/tan23b.html
Xiaoyu Tan, Lin Yong, Shengyu Zhu, Chao Qu, Xihe Qiu, Xu Yinghui, Peng Cui, Yuan Qi
https://proceedings.mlr.press/v202/tan23b.html
ICML 2023
Typical machine learning applications assume the data follows independent and identically distributed (IID) assumptions. In practice, however, this assumption is frequently violated, leading to the Out-of-Distribution (OOD) generalization problem and a major drop in model robustness. To mitigate this issue, invariant learning techniques are leveraged to distinguish between spurious features and invariant features among all input features and to train the model purely on the basis of the invariant features. Numerous invariant learning strategies require the training data to contain domain information, such as an environment index or auxiliary information acquired from prior knowledge. However, acquiring such information is typically impossible in practice. In this study, we present TIVA for environment-independent invariant learning, which requires no environment-specific information in the training data. We discover and prove that, given certain mild data conditions, it is possible to train an environment partitioning policy based on attributes that are independent of the targets and then conduct invariant risk minimization. We compare our method with other baseline methods, demonstrating superior performance and excellent robustness under OOD across multiple benchmarks.
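A hedged sketch of the invariant-risk-minimization step that the abstract says is run after the environment partition has been learned, using the standard IRMv1-style gradient penalty; the data, the two-environment partition, and the penalty weight are illustrative, and the partitioning policy itself is not shown.

import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    """IRMv1-style penalty: squared gradient of the risk w.r.t. a fixed dummy classifier scale."""
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def invariant_risk(model, env_batches, penalty_weight=10.0):
    """Sum of per-environment risks plus the invariance penalty, given an environment partition."""
    total = 0.0
    for x, y in env_batches:                 # env_batches would come from the learned partition
        logits = model(x).squeeze(-1)
        total = total + F.binary_cross_entropy_with_logits(logits, y) \
                      + penalty_weight * irm_penalty(logits, y)
    return total

# toy usage with two inferred environments
model = torch.nn.Linear(5, 1)
envs = [(torch.randn(32, 5), torch.randint(0, 2, (32,)).float()) for _ in range(2)]
loss = invariant_risk(model, envs)
loss.backward()
print(float(loss))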
https://proceedings.mlr.press/v202/tang23a.html
https://proceedings.mlr.press/v202/tang23a/tang23a.pdf
https://openreview.net/forum?id=CRmRAuwfd4
Auto-Differentiation of Relational Computations for Very Large Scale Machine Learning
https://proceedings.mlr.press/v202/tang23a.html
Yuxin Tang, Zhimin Ding, Dimitrije Jankov, Binhang Yuan, Daniel Bourgeois, Chris Jermaine
https://proceedings.mlr.press/v202/tang23a.html
ICML 2023
The relational data model was designed to facilitate large-scale data management and analytics. We consider the problem of how to differentiate computations expressed relationally. We show experimentally that a relational engine running an auto-differentiated relational algorithm can easily scale to very large datasets, and is competitive with state-of-the-art, special-purpose systems for large-scale distributed machine learning.
https://proceedings.mlr.press/v202/tang23b.html
https://proceedings.mlr.press/v202/tang23b/tang23b.pdf
https://openreview.net/forum?id=dHHWXFbSR7
Regret-Minimizing Double Oracle for Extensive-Form Games
https://proceedings.mlr.press/v202/tang23b.html
Xiaohang Tang, Le Cong Dinh, Stephen Marcus Mcaleer, Yaodong Yang
https://proceedings.mlr.press/v202/tang23b.html
ICML 2023
By incorporating regret minimization, double oracle methods have demonstrated rapid convergence to Nash Equilibrium (NE) in normal-form games and extensive-form games, through algorithms such as online double oracle (ODO) and extensive-form double oracle (XDO), respectively. In this study, we further examine the theoretical convergence rate and sample complexity of such regret minimization-based double oracle methods, utilizing a unified framework called Regret-Minimizing Double Oracle. Based on this framework, we extend ODO to extensive-form games and determine its sample complexity. Moreover, we demonstrate that the sample complexity of XDO can be exponential in the number of information sets $|S|$, owing to the exponentially decaying stopping threshold of restricted games. To solve this problem, we propose the Periodic Double Oracle (PDO) method, which has the lowest sample complexity among regret minimization-based double oracle methods, being only polynomial in $|S|$. Empirical evaluations on multiple poker and board games show that PDO achieves significantly faster convergence than previous double oracle algorithms and reaches a competitive level with state-of-the-art regret minimization methods.
https://proceedings.mlr.press/v202/tang23c.html
https://proceedings.mlr.press/v202/tang23c/tang23c.pdf
https://openreview.net/forum?id=H6bzZQ8NM7
From Perception to Programs: Regularize, Overparameterize, and Amortize
https://proceedings.mlr.press/v202/tang23c.html
Hao Tang, Kevin Ellis
https://proceedings.mlr.press/v202/tang23c.html
ICML 2023
We develop techniques for synthesizing neurosymbolic programs. Such programs mix discrete symbolic processing with continuous neural computation. We relax this mixed discrete/continuous problem and jointly learn all modules with gradient descent, incorporating amortized inference, overparameterization, and a differentiable strategy for penalizing lengthy programs. Collectively, this toolbox improves the stability of gradient-guided program search and suggests ways of learning both how to parse continuous input into discrete abstractions and how to process those abstractions via symbolic code.
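A hedged sketch of the kind of discrete/continuous relaxation described above: a single program step is a softmax-weighted mixture over a fixed set of symbolic operations, trained end-to-end by gradient descent, with an entropy term standing in for a program-complexity penalty. The operation set, the entropy regularizer, and all hyperparameters are illustrative assumptions, not the paper's method.

import torch

# candidate symbolic operations the relaxed program step can choose from
OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

class SoftProgramStep(torch.nn.Module):
    """One relaxed program step: a softmax-weighted mixture over discrete operations."""
    def __init__(self):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(len(OPS)))

    def forward(self, a, b):
        weights = torch.softmax(self.logits, dim=0)
        return sum(w * op(a, b) for w, op in zip(weights, OPS))

# fit a single step to behave like multiplication, with a small entropy penalty pushing it toward a discrete choice
step = SoftProgramStep()
opt = torch.optim.Adam(step.parameters(), lr=0.1)
for _ in range(300):
    a, b = torch.rand(64), torch.rand(64)
    pred = step(a, b)
    entropy = -(torch.softmax(step.logits, 0) * torch.log_softmax(step.logits, 0)).sum()
    loss = torch.mean((pred - a * b) ** 2) + 1e-3 * entropy
    opt.zero_grad(); loss.backward(); opt.step()
print(torch.softmax(step.logits, 0))     # the weight should concentrate on the multiply op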