abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract
---|---|---|---|---|---|---|---|---
https://proceedings.mlr.press/v202/wang23x.html
|
https://proceedings.mlr.press/v202/wang23x/wang23x.pdf
|
https://openreview.net/forum?id=Ir3Hty3Aj2
|
NP-SemiSeg: When Neural Processes meet Semi-Supervised Semantic Segmentation
|
https://proceedings.mlr.press/v202/wang23x.html
|
Jianfeng Wang, Daniela Massiceti, Xiaolin Hu, Vladimir Pavlovic, Thomas Lukasiewicz
|
https://proceedings.mlr.press/v202/wang23x.html
|
ICML 2023
|
Semi-supervised semantic segmentation involves assigning pixel-wise labels to unlabeled images at training time. This is useful in a wide range of real-world applications where collecting pixel-wise labels is not feasible in time or cost. Current approaches to semi-supervised semantic segmentation work by predicting pseudo-labels for each pixel from a class-wise probability distribution output by a model. If this predicted probability distribution is incorrect, however, it leads to poor segmentation results which can have knock-on consequences in safety critical systems, like medical images or self-driving cars. It is, therefore, important to understand what a model does not know, which is mainly achieved by uncertainty quantification. Recently, neural processes (NPs) have been explored in semi-supervised image classification, and they have been a computationally efficient and effective method for uncertainty quantification. In this work, we move one step forward by adapting NPs to semi-supervised semantic segmentation, resulting in a new model called NP-SemiSeg. We experimentally evaluated NP-SemiSeg on the public benchmarks PASCAL VOC 2012 and Cityscapes, with different training settings, and the results verify its effectiveness.
|
https://proceedings.mlr.press/v202/wang23y.html
|
https://proceedings.mlr.press/v202/wang23y/wang23y.pdf
|
https://openreview.net/forum?id=NRJPnlZ1JI
|
GC-Flow: A Graph-Based Flow Network for Effective Clustering
|
https://proceedings.mlr.press/v202/wang23y.html
|
Tianchun Wang, Farzaneh Mirzazadeh, Xiang Zhang, Jie Chen
|
https://proceedings.mlr.press/v202/wang23y.html
|
ICML 2023
|
Graph convolutional networks (GCNs) are discriminative models that directly model the class posterior $p(y|\mathbf{x})$ for semi-supervised classification of graph data. While being effective, as a representation learning approach, the node representations extracted from a GCN often miss useful information for effective clustering, because the objectives are different. In this work, we design normalizing flows that replace GCN layers, leading to a generative model that models both the class conditional likelihood $p(\mathbf{x}|y)$ and the class prior $p(y)$. The resulting neural network, GC-Flow, retains the graph convolution operations while being equipped with a Gaussian mixture representation space. It enjoys two benefits: it not only maintains the predictive power of GCN, but also produces well-separated clusters, due to the structuring of the representation space. We demonstrate these benefits on a variety of benchmark data sets. Moreover, we show that additional parameterization, such as that on the adjacency matrix used for graph convolutions, yields additional improvement in clustering.
|
https://proceedings.mlr.press/v202/wang23z.html
|
https://proceedings.mlr.press/v202/wang23z/wang23z.pdf
|
https://openreview.net/forum?id=kMnEaXGuZr
|
Curriculum Co-disentangled Representation Learning across Multiple Environments for Social Recommendation
|
https://proceedings.mlr.press/v202/wang23z.html
|
Xin Wang, Zirui Pan, Yuwei Zhou, Hong Chen, Chendi Ge, Wenwu Zhu
|
https://proceedings.mlr.press/v202/wang23z.html
|
ICML 2023
|
There exist complex patterns behind the decision-making processes of different individuals across different environments. For instance, in a social recommender system, various user behaviors are driven by highly entangled latent factors from two environments, i.e., consuming environment where users consume items and social environment where users connect with each other. Uncovering the disentanglement of these latent factors for users can benefit in enhanced explainability and controllability for recommendation. However, in literature there has been no work on social recommendation capable of disentangling user representations across consuming and social environments. To solve this problem, we study co-disentangled representation learning across different environments via proposing the curriculum co-disentangled representation learning (CurCoDis) model to disentangle the hidden factors for users across both consuming and social environments. To co-disentangle joint representations for user-item consumption and user-user social graph simultaneously, we partition the social graph into equal-size sub-graphs with minimum number of edges being cut, and design a curriculum weighing strategy for subgraph training through measuring the complexity of subgraphs via Descartes’ rule of signs. We further develop the prototype-routing optimization mechanism, which achieves co-disentanglement of user representations across consuming and social environments. Extensive experiments for social recommendation demonstrate that our proposed CurCoDis model can significantly outperform state-of-the-art methods on several real-world datasets.
|
https://proceedings.mlr.press/v202/wang23aa.html
|
https://proceedings.mlr.press/v202/wang23aa/wang23aa.pdf
|
https://openreview.net/forum?id=iXYnIz4RRx
|
Data Efficient Neural Scaling Law via Model Reusing
|
https://proceedings.mlr.press/v202/wang23aa.html
|
Peihao Wang, Rameswar Panda, Zhangyang Wang
|
https://proceedings.mlr.press/v202/wang23aa.html
|
ICML 2023
|
The number of parameters in large transformers has been observed to grow exponentially. Despite notable performance improvements, concerns have been raised that such a growing model size will run out of data in the near future. As manifested in the neural scaling law, modern learning backbones are not data-efficient. To maintain the utility of the model capacity, training data should be increased proportionally. In this paper, we study the neural scaling law under the previously overlooked data scarcity regime, focusing on the more challenging situation where we need to train a gigantic model with a disproportionately limited supply of available training data. We find that the existing power laws underestimate the data inefficiency of large transformers. Their performance will drop significantly if the training set is insufficient. Fortunately, we discover another blessing - such a data-inefficient scaling law can be restored through a model reusing approach that warm-starts the training of a large model by initializing it using smaller models. Our empirical study shows that model reusing can effectively reproduce the power law under the data scarcity regime. When progressively applying model reusing to expand the model size, we also observe consistent performance improvement in large transformers. We release our code at: https://github.com/VITA-Group/Data-Efficient-Scaling.
|
https://proceedings.mlr.press/v202/wang23ab.html
|
https://proceedings.mlr.press/v202/wang23ab/wang23ab.pdf
|
https://openreview.net/forum?id=aaI18lTgjr
|
Deep Temporal Sets with Evidential Reinforced Attentions for Unique Behavioral Pattern Discovery
|
https://proceedings.mlr.press/v202/wang23ab.html
|
Dingrong Wang, Deep Shankar Pandey, Krishna Prasad Neupane, Zhiwei Yu, Ervine Zheng, Zhi Zheng, Qi Yu
|
https://proceedings.mlr.press/v202/wang23ab.html
|
ICML 2023
|
Machine learning-driven human behavior analysis is gaining attention in behavioral/mental healthcare, due to its potential to identify behavioral patterns that cannot be recognized by traditional assessments. Real-life applications, such as digital behavioral biomarker identification, often require the discovery of complex spatiotemporal patterns in multimodal data, which is largely under-explored. To fill this gap, we propose a novel model that integrates uniquely designed Deep Temporal Sets (DTS) with Evidential Reinforced Attentions (ERA). DTS captures complex temporal relationships in the input and generates a set-based representation, while ERA captures the policy network’s uncertainty and conducts evidence-aware exploration to locate attentive regions in behavioral data. Using child-computer interaction data as a testing platform, we demonstrate the effectiveness of DTS-ERA in differentiating children with Autism Spectrum Disorder and typically developing children based on sequential multimodal visual and touch behaviors. Comparisons with baseline methods show that our model achieves superior performance and has the potential to provide objective, quantitative, and precise analysis of complex human behaviors.
|
https://proceedings.mlr.press/v202/wang23ac.html
|
https://proceedings.mlr.press/v202/wang23ac/wang23ac.pdf
|
https://openreview.net/forum?id=5Z31keXfbQ
|
Active Learning based Structural Inference
|
https://proceedings.mlr.press/v202/wang23ac.html
|
Aoran Wang, Jun Pang
|
https://proceedings.mlr.press/v202/wang23ac.html
|
ICML 2023
|
In this paper, we propose a novel framework, Active Learning based Structural Inference (ALaSI), to infer the existence of directed connections from observed agents’ states over a time period in a dynamical system. With the help of deep active learning, ALaSI is competent in learning the representation of connections with a relatively small pool of prior knowledge. Moreover, based on information theory, the proposed inter- and out-of-scope message learning pipelines are remarkably beneficial to structural inference for large dynamical systems. We evaluate ALaSI on various large datasets including simulated systems and real-world networks, to demonstrate that ALaSI is able to outperform previous methods in precisely inferring the existence of connections in large systems under either supervised learning or unsupervised learning.
|
https://proceedings.mlr.press/v202/wang23ad.html
|
https://proceedings.mlr.press/v202/wang23ad/wang23ad.pdf
|
https://openreview.net/forum?id=1EWPr0ks8I
|
Better Diffusion Models Further Improve Adversarial Training
|
https://proceedings.mlr.press/v202/wang23ad.html
|
Zekai Wang, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, Shuicheng Yan
|
https://proceedings.mlr.press/v202/wang23ad.html
|
ICML 2023
|
It has been recognized that the data generated by the denoising diffusion probabilistic model (DDPM) improves adversarial training. After two years of rapid development in diffusion models, a question naturally arises: can better diffusion models further improve adversarial training? This paper gives an affirmative answer by employing the most recent diffusion model which has higher efficiency ($\sim 20$ sampling steps) and image quality (lower FID score) compared with DDPM. Our adversarially trained models achieve state-of-the-art performance on RobustBench using only generated data (no external datasets). Under the $\ell_\infty$-norm threat model with $\epsilon=8/255$, our models achieve $70.69\%$ and $42.67\%$ robust accuracy on CIFAR-10 and CIFAR-100, respectively, i.e. improving upon previous state-of-the-art models by $+4.58\%$ and $+8.03\%$. Under the $\ell_2$-norm threat model with $\epsilon=128/255$, our models achieve $84.86\%$ on CIFAR-10 ($+4.44\%$). These results also beat previous works that use external data. We also provide compelling results on the SVHN and TinyImageNet datasets. Our code is at https://github.com/wzekai99/DM-Improves-AT.
|
https://proceedings.mlr.press/v202/wang23ae.html
|
https://proceedings.mlr.press/v202/wang23ae/wang23ae.pdf
|
https://openreview.net/forum?id=tzxNDzYuMt
|
Polarity Is All You Need to Learn and Transfer Faster
|
https://proceedings.mlr.press/v202/wang23ae.html
|
Qingyang Wang, Michael Alan Powell, Eric W Bridgeford, Ali Geisa, Joshua T Vogelstein
|
https://proceedings.mlr.press/v202/wang23ae.html
|
ICML 2023
|
Natural intelligences (NIs) thrive in a dynamic world - they learn quickly, sometimes with only a few samples. In contrast, artificial intelligences (AIs) typically learn with a prohibitive number of training samples and computational power. What design principle difference between NI and AI could contribute to such a discrepancy? Here, we investigate the role of weight polarity: development processes initialize NIs with advantageous polarity configurations; as NIs grow and learn, synapse magnitudes update, yet polarities are largely kept unchanged. We demonstrate with simulation and image classification tasks that if weight polarities are adequately set a priori, then networks learn with less time and data. We also explicitly illustrate situations in which a priori setting the weight polarities is disadvantageous for networks. Our work illustrates the value of weight polarities from the perspective of statistical and computational efficiency during learning.
|
https://proceedings.mlr.press/v202/wang23af.html
|
https://proceedings.mlr.press/v202/wang23af/wang23af.pdf
|
https://openreview.net/forum?id=CcDKqUR546
|
Projected Tensor Power Method for Hypergraph Community Recovery
|
https://proceedings.mlr.press/v202/wang23af.html
|
Jinxin Wang, Yuen-Man Pun, Xiaolu Wang, Peng Wang, Anthony Man-Cho So
|
https://proceedings.mlr.press/v202/wang23af.html
|
ICML 2023
|
This paper investigates the problem of exact community recovery in the symmetric $d$-uniform $(d \geq 2)$ hypergraph stochastic block model ($d$-HSBM). In this model, a $d$-uniform hypergraph with $n$ nodes is generated by first partitioning the $n$ nodes into $K\geq 2$ equal-sized disjoint communities and then generating hyperedges with a probability that depends on the community memberships of $d$ nodes. Despite the non-convex and discrete nature of the maximum likelihood estimation problem, we develop a simple yet efficient iterative method, called the projected tensor power method, to tackle it. As long as the initialization satisfies a partial recovery condition in the logarithmic degree regime of the problem, we show that our proposed method can exactly recover the hidden community structure down to the information-theoretic limit with high probability. Moreover, our proposed method exhibits a competitive time complexity of $\mathcal{O}(n\log^2n/\log\log n)$ when the aforementioned initialization condition is met. We also conduct numerical experiments to validate our theoretical findings.
|
https://proceedings.mlr.press/v202/wang23ag.html
|
https://proceedings.mlr.press/v202/wang23ag/wang23ag.pdf
|
https://openreview.net/forum?id=9hw2qIEHJF
|
Estimating Possible Causal Effects with Latent Variables via Adjustment
|
https://proceedings.mlr.press/v202/wang23ag.html
|
Tian-Zuo Wang, Tian Qin, Zhi-Hua Zhou
|
https://proceedings.mlr.press/v202/wang23ag.html
|
ICML 2023
|
Causal effect identification is a fundamental task in artificial intelligence. A most ideal scenario for causal effect identification is that there is a directed acyclic graph as a prior causal graph encoding the causal relations of all relevant variables. In real tasks, however, the prior causal graph is usually not available, and some relevant variables may be latent as well. With observational data, we can only learn a partial ancestral graph (PAG), which contains some indeterminate causal relations. Since many causal graphs can correspond to one PAG, they are possibly associated with different causal effects. The aim of this paper is to estimate these possible causal effects via covariate adjustment given a PAG. This task is challenging because the number of causal graphs corresponding to a PAG grows super-exponentially with the number of variables. We propose a new graphical characterization for possible adjustment sets, and based on this, we develop the first method to determine the set of possible causal effects that are consistent with the given PAG without enumerating any causal graphs. Our method can output the same set as the enumeration method with super-exponentially less complexity. Experiments validate the effectiveness and tremendous efficiency improvement of the proposed method.
|
https://proceedings.mlr.press/v202/wang23ah.html
|
https://proceedings.mlr.press/v202/wang23ah/wang23ah.pdf
|
https://openreview.net/forum?id=ycZSQdo2F9
|
InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models
|
https://proceedings.mlr.press/v202/wang23ah.html
|
Yingheng Wang, Yair Schiff, Aaron Gokaslan, Weishen Pan, Fei Wang, Christopher De Sa, Volodymyr Kuleshov
|
https://proceedings.mlr.press/v202/wang23ah.html
|
ICML 2023
|
While diffusion models excel at generating high-quality samples, their latent variables typically lack semantic meaning and are not suitable for representation learning. Here, we propose InfoDiffusion, an algorithm that augments diffusion models with low-dimensional latent variables that capture high-level factors of variation in the data. InfoDiffusion relies on a learning objective regularized with the mutual information between observed and hidden variables, which improves latent space quality and prevents the latents from being ignored by expressive diffusion-based decoders. Empirically, we find that InfoDiffusion learns disentangled and human-interpretable latent representations that are competitive with state-of-the-art generative and contrastive methods, while retaining the high sample quality of diffusion models. Our method enables manipulating the attributes of generated images and has the potential to assist tasks that require exploring a learned latent space to generate quality samples, e.g., generative design.
|
https://proceedings.mlr.press/v202/wang23ai.html
|
https://proceedings.mlr.press/v202/wang23ai/wang23ai.pdf
|
https://openreview.net/forum?id=KoIqF3Dztr
|
A Robust Test for the Stationarity Assumption in Sequential Decision Making
|
https://proceedings.mlr.press/v202/wang23ai.html
|
Jitao Wang, Chengchun Shi, Zhenke Wu
|
https://proceedings.mlr.press/v202/wang23ai.html
|
ICML 2023
|
Reinforcement learning (RL) is a powerful technique that allows an autonomous agent to learn an optimal policy to maximize the expected return. The optimality of various RL algorithms relies on the stationarity assumption, which requires time-invariant state transition and reward functions. However, deviations from stationarity over extended periods often occur in real-world applications like robotics control, health care and digital marketing, resulting in suboptimal policies learned under stationary assumptions. In this paper, we propose a model-based doubly robust procedure for testing the stationarity assumption and detecting change points in offline RL settings with certain degree of homogeneity. Our proposed testing procedure is robust to model misspecifications and can effectively control type-I error while achieving high statistical power, especially in high-dimensional settings. Extensive comparative simulations and a real-world interventional mobile health example illustrate the advantages of our method in detecting change points and optimizing long-term rewards in high-dimensional, non-stationary environments.
|
https://proceedings.mlr.press/v202/wang23aj.html
|
https://proceedings.mlr.press/v202/wang23aj/wang23aj.pdf
|
https://openreview.net/forum?id=elL6uw9qOX
|
GEAR: A GPU-Centric Experience Replay System for Large Reinforcement Learning Models
|
https://proceedings.mlr.press/v202/wang23aj.html
|
Hanjing Wang, Man-Kit Sit, Congjie He, Ying Wen, Weinan Zhang, Jun Wang, Yaodong Yang, Luo Mai
|
https://proceedings.mlr.press/v202/wang23aj.html
|
ICML 2023
|
This paper introduces a distributed, GPU-centric experience replay system, GEAR, designed to perform scalable reinforcement learning (RL) with large sequence models (such as transformers). With such models, existing systems such as Reverb face considerable bottlenecks in memory, computation, and communication. GEAR, however, optimizes memory efficiency by enabling the memory resources on GPU servers (including host memory and device memory) to manage trajectory data. Furthermore, it facilitates decentralized GPU devices to expedite various trajectory selection strategies, circumventing computational bottlenecks. GEAR is equipped with GPU kernels capable of collecting trajectories using zero-copy access to host memory, along with remote-directed-memory access over InfiniBand, improving communication efficiency. Cluster experiments have shown that GEAR can achieve performance levels up to 6× greater than Reverb when training state-of-the-art large RL models. GEAR is open-sourced at https://github.com/bigrl-team/gear.
|
https://proceedings.mlr.press/v202/wang23ak.html
|
https://proceedings.mlr.press/v202/wang23ak/wang23ak.pdf
|
https://openreview.net/forum?id=BVvYe8q9CF
|
Effective and Efficient Structural Inference with Reservoir Computing
|
https://proceedings.mlr.press/v202/wang23ak.html
|
Aoran Wang, Tsz Pan Tong, Jun Pang
|
https://proceedings.mlr.press/v202/wang23ak.html
|
ICML 2023
|
In this paper, we present an effective and efficient structural inference approach by integrating a Reservoir Computing (RC) network into a Variational Auto-encoder-based (VAE-based) structural inference framework. With the help of Bi-level Optimization, the backbone VAE-based method follows the Information Bottleneck principle and infers a general adjacency matrix in its latent space; the RC net substitutes the partial role of the decoder and encourages the whole approach to perform further steps of gradient descent based on limited available data. The experimental results on various datasets including biological networks, simulated fMRI data, and physical simulations show the effectiveness and efficiency of our proposed method for structural inference, either with much fewer trajectories or with much shorter trajectories compared with previous works.
|
https://proceedings.mlr.press/v202/wang23al.html
|
https://proceedings.mlr.press/v202/wang23al/wang23al.pdf
|
https://openreview.net/forum?id=VLmf5fqWdf
|
Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning
|
https://proceedings.mlr.press/v202/wang23al.html
|
Tongzhou Wang, Antonio Torralba, Phillip Isola, Amy Zhang
|
https://proceedings.mlr.press/v202/wang23al.html
|
ICML 2023
|
In goal-reaching reinforcement learning (RL), the optimal value function has a particular geometry, called quasimetrics structure. This paper introduces Quasimetric Reinforcement Learning (QRL), a new RL method that utilizes quasimetric models to learn optimal value functions. Distinct from prior approaches, the QRL objective is specifically designed for quasimetrics, and provides strong theoretical recovery guarantees. Empirically, we conduct thorough analyses on a discretized MountainCar environment, identifying properties of QRL and its advantages over alternatives. On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance, across both state-based and image-based observations.
|
https://proceedings.mlr.press/v202/wang23am.html
|
https://proceedings.mlr.press/v202/wang23am/wang23am.pdf
|
https://openreview.net/forum?id=ikDXPA0BA2
|
Model-Free Robust Average-Reward Reinforcement Learning
|
https://proceedings.mlr.press/v202/wang23am.html
|
Yue Wang, Alvaro Velasquez, George K. Atia, Ashley Prater-Bennette, Shaofeng Zou
|
https://proceedings.mlr.press/v202/wang23am.html
|
ICML 2023
|
Robust Markov decision processes (MDPs) address the challenge of model uncertainty by optimizing the worst-case performance over an uncertainty set of MDPs. In this paper, we focus on the robust average-reward MDPs under the model-free setting. We first theoretically characterize the structure of solutions to the robust average-reward Bellman equation, which is essential for our later convergence analysis. We then design two model-free algorithms, robust relative value iteration (RVI) TD and robust RVI Q-learning, and theoretically prove their convergence to the optimal solution. We provide several widely used uncertainty sets as examples, including those defined by the contamination model, total variation, Chi-squared divergence, Kullback-Leibler (KL) divergence, and Wasserstein distance.
|
https://proceedings.mlr.press/v202/wang23an.html
|
https://proceedings.mlr.press/v202/wang23an/wang23an.pdf
|
https://openreview.net/forum?id=unWBARk7v2
|
Live in the Moment: Learning Dynamics Model Adapted to Evolving Policy
|
https://proceedings.mlr.press/v202/wang23an.html
|
Xiyao Wang, Wichayaporn Wongkamjan, Ruonan Jia, Furong Huang
|
https://proceedings.mlr.press/v202/wang23an.html
|
ICML 2023
|
Model-based reinforcement learning (RL) often achieves higher sample efficiency in practice than model-free RL by learning a dynamics model to generate samples for policy learning. Previous works learn a dynamics model that fits under the empirical state-action visitation distribution for all historical policies, i.e., the sample replay buffer. However, in this paper, we observe that fitting the dynamics model under the distribution for all historical policies does not necessarily benefit model prediction for the current policy since the policy in use is constantly evolving over time. The evolving policy during training will cause state-action visitation distribution shifts. We theoretically analyze how this distribution shift over historical policies affects the model learning and model rollouts. We then propose a novel dynamics model learning method, named Policy-adapted Dynamics Model Learning (PDML). PDML dynamically adjusts the historical policy mixture distribution to ensure the learned model can continually adapt to the state-action visitation distribution of the evolving policy. Experiments on a range of continuous control environments in MuJoCo show that PDML achieves significant improvement in sample efficiency and higher asymptotic performance combined with the state-of-the-art model-based RL methods.
|
https://proceedings.mlr.press/v202/wang23ao.html
|
https://proceedings.mlr.press/v202/wang23ao/wang23ao.pdf
|
https://openreview.net/forum?id=OQFR3p76OR
|
Learning to Bid in Repeated First-Price Auctions with Budgets
|
https://proceedings.mlr.press/v202/wang23ao.html
|
Qian Wang, Zongjun Yang, Xiaotie Deng, Yuqing Kong
|
https://proceedings.mlr.press/v202/wang23ao.html
|
ICML 2023
|
Budget management strategies in repeated auctions have received growing attention in online advertising markets. However, previous work on budget management in online bidding mainly focused on second-price auctions. The rapid shift from second-price auctions to first-price auctions for online ads in recent years has motivated the challenging question of how to bid in repeated first-price auctions while controlling budgets. In this work, we study the problem of learning in repeated first-price auctions with budgets. We design a dual-based algorithm that can achieve a near-optimal $\widetilde{O}(\sqrt{T})$ regret with full information feedback where the maximum competing bid is always revealed after each auction. We further consider the setting with one-sided information feedback where only the winning bid is revealed after each auction. We show that our modified algorithm can still achieve an $\widetilde{O}(\sqrt{T})$ regret with mild assumptions on the bidder’s value distribution. Finally, we complement the theoretical results with numerical experiments to confirm the effectiveness of our budget management policy.
|
https://proceedings.mlr.press/v202/wang23ap.html
|
https://proceedings.mlr.press/v202/wang23ap/wang23ap.pdf
|
https://openreview.net/forum?id=vO12TSO55f
|
Network Effects in Performative Prediction Games
|
https://proceedings.mlr.press/v202/wang23ap.html
|
Xiaolu Wang, Chung-Yiu Yau, Hoi To Wai
|
https://proceedings.mlr.press/v202/wang23ap.html
|
ICML 2023
|
This paper studies the multi-agent performative prediction (Multi-PP) games over multiplex networks. We consider a distributed learning setting where agents partially cooperate on an agent network, while during learning, the data samples drawn depend on the prediction models of the agent itself and neighboring agents on a population network. The dynamics of Multi-PP games is hence affected by the interplay between both networks. This paper concentrates on this Multi-PP game with the following contributions. Firstly, we analyze sufficient conditions for the existence of the performative stable equilibrium (PSE) and Nash equilibrium (NE) of the Multi-PP games. Secondly, we analyze the changes to the equilibrium induced by perturbed data distributions, and derive the closed-form solutions where the network topologies are explicit. Our results connect the existence of PSE/NE with strengths of agents’ cooperation, and the changes of equilibrium solutions across agents with their node centrality, etc. Lastly, we show that a stochastic gradient descent (SGD) based distributed learning procedure finds the PSE under the said sufficient condition. Numerical illustrations on the network effects in Multi-PP games corroborate our findings.
|
https://proceedings.mlr.press/v202/wang23aq.html
|
https://proceedings.mlr.press/v202/wang23aq/wang23aq.pdf
|
https://openreview.net/forum?id=cjWHQpEqaZ
|
Robustly Learning a Single Neuron via Sharpness
|
https://proceedings.mlr.press/v202/wang23aq.html
|
Puqian Wang, Nikos Zarifis, Ilias Diakonikolas, Jelena Diakonikolas
|
https://proceedings.mlr.press/v202/wang23aq.html
|
ICML 2023
|
We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial label noise. We give an efficient algorithm that, for a broad family of activations including ReLUs, approximates the optimal $L_2^2$-error within a constant factor. Notably, our algorithm succeeds under much milder distributional assumptions compared to prior work. The key ingredient enabling our results is a novel connection to local error bounds from optimization theory.
|
https://proceedings.mlr.press/v202/wang23ar.html
|
https://proceedings.mlr.press/v202/wang23ar/wang23ar.pdf
|
https://openreview.net/forum?id=Fm5wBnahGR
|
DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning
|
https://proceedings.mlr.press/v202/wang23ar.html
|
Zifeng Wang, Zheng Zhan, Yifan Gong, Yucai Shao, Stratis Ioannidis, Yanzhi Wang, Jennifer Dy
|
https://proceedings.mlr.press/v202/wang23ar.html
|
ICML 2023
|
Rehearsal-based approaches are a mainstay of continual learning (CL). They mitigate the catastrophic forgetting problem by maintaining a small fixed-size buffer with a subset of data from past tasks. While most rehearsal-based approaches exploit the knowledge from buffered past data, little attention is paid to inter-task relationships and to critical task-specific and task-invariant knowledge. By appropriately leveraging inter-task relationships, we propose a novel CL method, named DualHSIC, to boost the performance of existing rehearsal-based methods in a simple yet effective way. DualHSIC consists of two complementary components that stem from the so-called Hilbert Schmidt independence criterion (HSIC): HSIC-Bottleneck for Rehearsal (HBR) lessens the inter-task interference and HSIC Alignment (HA) promotes task-invariant knowledge sharing. Extensive experiments show that DualHSIC can be seamlessly plugged into existing rehearsal-based methods for consistent performance improvements, outperforming recent state-of-the-art regularization-enhanced rehearsal methods.
|
https://proceedings.mlr.press/v202/wang23as.html
|
https://proceedings.mlr.press/v202/wang23as/wang23as.pdf
|
https://openreview.net/forum?id=NbC9a9zS5K
|
Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments
|
https://proceedings.mlr.press/v202/wang23as.html
|
Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu
|
https://proceedings.mlr.press/v202/wang23as.html
|
ICML 2023
|
It is quite challenging to ensure the safety of reinforcement learning (RL) agents in an unknown and stochastic environment under hard constraints that require the system state not to reach certain specified unsafe regions. Many popular safe RL methods such as those based on the Constrained Markov Decision Process (CMDP) paradigm formulate safety violations in a cost function and try to constrain the expectation of cumulative cost under a threshold. However, it is often difficult to effectively capture and enforce hard reachability-based safety constraints indirectly with such constraints on safety violation cost. In this work, we leverage the notion of barrier function to explicitly encode the hard safety chance constraints, and given that the environment is unknown, relax them to our design of generative-model-based soft barrier functions. Based on such soft barriers, we propose a novel safe RL approach with bi-level optimization that can jointly learn the unknown environment and optimize the control policy, while effectively avoiding the unsafe region with safety probability optimization. Experiments on a set of examples demonstrate that our approach can effectively enforce hard safety chance constraints and significantly outperform CMDP-based baseline methods in system safe rates measured via simulations.
|
https://proceedings.mlr.press/v202/wang23at.html
|
https://proceedings.mlr.press/v202/wang23at/wang23at.pdf
|
https://openreview.net/forum?id=D2Oaj7v9YJ
|
LinSATNet: The Positive Linear Satisfiability Neural Networks
|
https://proceedings.mlr.press/v202/wang23at.html
|
Runzhong Wang, Yunhao Zhang, Ziao Guo, Tianyi Chen, Xiaokang Yang, Junchi Yan
|
https://proceedings.mlr.press/v202/wang23at.html
|
ICML 2023
|
Encoding constraints into neural networks is attractive. This paper studies how to introduce the popular positive linear satisfiability to neural networks. We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions. We further theoretically characterize the convergence property of the Sinkhorn algorithm for multiple marginals, and the underlying formulation is also derived. In contrast to the sequential decision e.g. reinforcement learning-based solvers, we showcase our technique in solving constrained (specifically satisfiability) problems by one-shot neural networks, including i) a neural routing solver learned without supervision of optimal solutions; ii) a partial graph matching network handling graphs with unmatchable outliers on both sides; iii) a predictive network for financial portfolios with continuous constraints. To our knowledge, there exists no one-shot neural solver for these scenarios when they are formulated as satisfiability problems. Source code is available at https://github.com/Thinklab-SJTU/LinSATNet.
|
https://proceedings.mlr.press/v202/wang23au.html
|
https://proceedings.mlr.press/v202/wang23au/wang23au.pdf
|
https://openreview.net/forum?id=dkYfm01yQp
|
Offline Meta Reinforcement Learning with In-Distribution Online Adaptation
|
https://proceedings.mlr.press/v202/wang23au.html
|
Jianhao Wang, Jin Zhang, Haozhe Jiang, Junyu Zhang, Liwei Wang, Chongjie Zhang
|
https://proceedings.mlr.press/v202/wang23au.html
|
ICML 2023
|
Recent offline meta-reinforcement learning (meta-RL) methods typically utilize task-dependent behavior policies (e.g., training RL agents on each individual task) to collect a multi-task dataset. However, these methods always require extra information for fast adaptation, such as offline context for testing tasks. To address this problem, we first formally characterize a unique challenge in offline meta-RL: transition-reward distribution shift between offline datasets and online adaptation. Our theory finds that out-of-distribution adaptation episodes may lead to unreliable policy evaluation and that online adaptation with in-distribution episodes can ensure adaptation performance guarantee. Based on these theoretical insights, we propose a novel adaptation framework, called In-Distribution online Adaptation with uncertainty Quantification (IDAQ), which generates in-distribution context using a given uncertainty quantification and performs effective task belief inference to address new tasks. We find a return-based uncertainty quantification for IDAQ that performs effectively. Experiments show that IDAQ achieves state-of-the-art performance on the Meta-World ML1 benchmark compared to baselines with/without offline adaptation.
|
https://proceedings.mlr.press/v202/wang23av.html
|
https://proceedings.mlr.press/v202/wang23av/wang23av.pdf
|
https://openreview.net/forum?id=SHJaI92vWC
|
Reachability-Aware Laplacian Representation in Reinforcement Learning
|
https://proceedings.mlr.press/v202/wang23av.html
|
Kaixin Wang, Kuangqi Zhou, Jiashi Feng, Bryan Hooi, Xinchao Wang
|
https://proceedings.mlr.press/v202/wang23av.html
|
ICML 2023
|
In Reinforcement Learning (RL), Laplacian Representation (LapRep) is a task-agnostic state representation that encodes the geometry of the environment. A desirable property of LapRep stated in prior works is that the Euclidean distance in the LapRep space roughly reflects the reachability between states, which motivates the usage of this distance for reward shaping. However, we find that LapRep does not necessarily have this property in general: two states having a small distance under LapRep can actually be far away in the environment. Such a mismatch would impede the learning process in reward shaping. To fix this issue, we introduce a Reachability-Aware Laplacian Representation (RA-LapRep), by properly scaling each dimension of LapRep. Despite the simplicity, we demonstrate that RA-LapRep can better capture the inter-state reachability as compared to LapRep, through both theoretical explanations and experimental results. Additionally, we show that this improvement yields a significant boost in reward shaping performance and benefits bottleneck state discovery.
|
https://proceedings.mlr.press/v202/wang23aw.html
|
https://proceedings.mlr.press/v202/wang23aw/wang23aw.pdf
|
https://openreview.net/forum?id=UlOHeXD4MD
|
PPG Reloaded: An Empirical Study on What Matters in Phasic Policy Gradient
|
https://proceedings.mlr.press/v202/wang23aw.html
|
Kaixin Wang, Daquan Zhou, Jiashi Feng, Shie Mannor
|
https://proceedings.mlr.press/v202/wang23aw.html
|
ICML 2023
|
In model-free reinforcement learning, recent methods based on a phasic policy gradient (PPG) framework have shown impressive improvements in sample efficiency and zero-shot generalization on the challenging Procgen benchmark. In PPG, two design choices are believed to be the key contributing factors to its superior performance over PPO: the high level of value sample reuse and the low frequency of feature distillation. However, through an extensive empirical study, we unveil that policy regularization and data diversity are what actually matters. In particular, we can achieve the same level of performance with low value sample reuse and frequent feature distillation, as long as the policy regularization strength and data diversity are preserved. In addition, we can maintain the high performance of PPG while reducing the computational cost to a similar level as PPO. Our comprehensive study covers all 16 Procgen games in both sample efficiency and generalization setups. We hope it can advance the understanding of PPG and provide insights for future works.
|
https://proceedings.mlr.press/v202/watson23a.html
|
https://proceedings.mlr.press/v202/watson23a/watson23a.pdf
|
https://openreview.net/forum?id=Lg6ia0e8fo
|
On Heterogeneous Treatment Effects in Heterogeneous Causal Graphs
|
https://proceedings.mlr.press/v202/watson23a.html
|
Richard A Watson, Hengrui Cai, Xinming An, Samuel Mclean, Rui Song
|
https://proceedings.mlr.press/v202/watson23a.html
|
ICML 2023
|
Heterogeneity and comorbidity are two interwoven challenges associated with various healthcare problems that greatly hampered research on developing effective treatment and understanding of the underlying neurobiological mechanism. Very few studies have been conducted to investigate heterogeneous causal effects (HCEs) in graphical contexts due to the lack of statistical methods. To characterize this heterogeneity, we first conceptualize heterogeneous causal graphs (HCGs) by generalizing the causal graphical model with confounder-based interactions and multiple mediators. Such confounders with an interaction with the treatment are known as moderators. This allows us to flexibly produce HCGs given different moderators and explicitly characterize HCEs from the treatment or potential mediators on the outcome. We establish the theoretical forms of HCEs and derive their properties at the individual level in both linear and nonlinear models. An interactive structural learning is developed to estimate the complex HCGs and HCEs with confidence intervals provided. Our method is empirically justified by extensive simulations and its practical usefulness is illustrated by exploring causality among psychiatric disorders for trauma survivors. Code implementing the proposed algorithm is open-source and publicly available at: https://github.com/richard-watson/ISL.
|
https://proceedings.mlr.press/v202/waudby-smith23a.html
|
https://proceedings.mlr.press/v202/waudby-smith23a/waudby-smith23a.pdf
|
https://openreview.net/forum?id=gKxXNAVZeF
|
Nonparametric Extensions of Randomized Response for Private Confidence Sets
|
https://proceedings.mlr.press/v202/waudby-smith23a.html
|
Ian Waudby-Smith, Steven Wu, Aaditya Ramdas
|
https://proceedings.mlr.press/v202/waudby-smith23a.html
|
ICML 2023
|
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP). Given bounded observations $(X_1, …, X_n)$ with mean $\mu^\star$ that are privatized into $(Z_1, …, Z_n)$, we present confidence intervals (CI) and time-uniform confidence sequences (CS) for $\mu^\star$ when only given access to the privatized data. To achieve this, we introduce a nonparametric and sequentially interactive generalization of Warner’s famous “randomized response” mechanism, satisfying LDP for arbitrary bounded random variables, and then provide CIs and CSs for their means given access to the resulting privatized observations. For example, our results yield private analogues of Hoeffding’s inequality in both fixed-time and time-uniform regimes. We extend these Hoeffding-type CSs to capture time-varying (non-stationary) means, and conclude by illustrating how these methods can be used to conduct private online A/B tests.
|
https://proceedings.mlr.press/v202/weber23a.html
|
https://proceedings.mlr.press/v202/weber23a/weber23a.pdf
|
https://openreview.net/forum?id=qKralclxZY
|
Global optimality for Euclidean CCCP under Riemannian convexity
|
https://proceedings.mlr.press/v202/weber23a.html
|
Melanie Weber, Suvrit Sra
|
https://proceedings.mlr.press/v202/weber23a.html
|
ICML 2023
|
We study geodesically convex (g-convex) problems that can be written as a difference of Euclidean convex functions. This structure arises in key applications such as matrix scaling, M-estimators of scatter matrices, and Brascamp-Lieb inequalities. In particular, we exploit this structure to make use of the Convex-Concave Procedure (CCCP), which helps us bypass potentially expensive Riemannian operations and leads to very competitive solvers. Importantly, unlike existing theory for CCCP that ensures convergence to stationary points, we exploit the overall g-convexity structure and provide iteration complexity results for global optimality. We illustrate our results by specializing them to a few concrete optimization problems that have been previously studied in the machine learning literature. We hope our work spurs the study of mixed Euclidean-Riemannian optimization algorithms.
|
https://proceedings.mlr.press/v202/wei23a.html
|
https://proceedings.mlr.press/v202/wei23a/wei23a.pdf
|
https://openreview.net/forum?id=GmUUB5HuOe
|
A Universal Unbiased Method for Classification from Aggregate Observations
|
https://proceedings.mlr.press/v202/wei23a.html
|
Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen
|
https://proceedings.mlr.press/v202/wei23a.html
|
ICML 2023
|
In conventional supervised classification, true labels are required for individual instances. However, it could be prohibitive to collect the true labels for individual instances, due to privacy concerns or unaffordable annotation costs. This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances. CFAO is a generalized learning framework that contains various learning problems, such as multiple-instance learning and learning from label proportions. The goal of this paper is to present a novel universal method of CFAO, which holds an unbiased estimator of the classification risk for arbitrary losses—previous research failed to achieve this goal. Practically, our method works by weighing the importance of each instance and each label in the group, which provides purified supervision for the classifier to learn. Theoretically, our proposed method not only guarantees the risk consistency due to the unbiased risk estimator but also can be compatible with arbitrary losses. Extensive experiments on various problems of CFAO demonstrate the superiority of our proposed method.
|
https://proceedings.mlr.press/v202/wei23b.html
|
https://proceedings.mlr.press/v202/wei23b/wei23b.pdf
|
https://openreview.net/forum?id=LOy8W3UGFK
|
NTK-approximating MLP Fusion for Efficient Language Model Fine-tuning
|
https://proceedings.mlr.press/v202/wei23b.html
|
Tianxin Wei, Zeming Guo, Yifan Chen, Jingrui He
|
https://proceedings.mlr.press/v202/wei23b.html
|
ICML 2023
|
Fine-tuning a pre-trained language model (PLM) emerges as the predominant strategy in many natural language processing applications. However, even fine-tuning the PLMs and doing inference are expensive, especially on edge devices with low computing power. Some general approaches (e.g. quantization and distillation) have been widely studied to reduce the compute/memory of PLM fine-tuning, while very few one-shot compression techniques are explored. In this paper, we investigate the neural tangent kernel (NTK)–which reveals the gradient descent dynamics of neural networks–of the multilayer perceptrons (MLP) modules in a PLM and propose to coin a lightweight PLM through NTK-approximating MLP fusion. To achieve this, we reconsider the MLP as a bundle of sub-MLPs, and cluster them into a given number of centroids, which can then be restored as a compressed MLP and surprisingly shown to well approximate the NTK of the original PLM. Extensive experiments of PLM fine-tuning on both natural language understanding (NLU) and generation (NLG) tasks are provided to verify the effectiveness of the proposed method MLP fusion. Our code is available at https://github.com/weitianxin/MLP_Fusion.
|
https://proceedings.mlr.press/v202/wei23c.html
|
https://proceedings.mlr.press/v202/wei23c/wei23c.pdf
|
https://openreview.net/forum?id=XoJIpLASZx
|
Boosting Graph Contrastive Learning via Graph Contrastive Saliency
|
https://proceedings.mlr.press/v202/wei23c.html
|
Chunyu Wei, Yu Wang, Bing Bai, Kai Ni, David Brady, Lu Fang
|
https://proceedings.mlr.press/v202/wei23c.html
|
ICML 2023
|
Graph augmentation plays a crucial role in achieving good generalization for contrastive graph self-supervised learning. However, mainstream Graph Contrastive Learning (GCL) often favors random graph augmentations, by relying on random node dropout or edge perturbation on graphs. Random augmentations may inevitably lead to semantic information corruption during the training, and force the network to mistakenly focus on semantically irrelevant environmental background structures. To address these limitations and to improve generalization, we propose a novel self-supervised learning framework for GCL, which can adaptively screen the semantic-related substructure in graphs by capitalizing on the proposed gradient-based Graph Contrastive Saliency (GCS). The goal is to identify the most semantically discriminative structures of a graph via contrastive learning, such that we can generate semantically meaningful augmentations by leveraging on saliency. Empirical evidence on 16 benchmark datasets demonstrates the exclusive merits of the GCS-based framework. We also provide rigorous theoretical justification for GCS’s robustness properties. Code is available at https://github.com/GCS2023/GCS.
|
https://proceedings.mlr.press/v202/wei23d.html
|
https://proceedings.mlr.press/v202/wei23d/wei23d.pdf
|
https://openreview.net/forum?id=ot445h4SVB
|
Set-membership Belief State-based Reinforcement Learning for POMDPs
|
https://proceedings.mlr.press/v202/wei23d.html
|
Wei Wei, Lijun Zhang, Lin Li, Huizhong Song, Jiye Liang
|
https://proceedings.mlr.press/v202/wei23d.html
|
ICML 2023
|
Reinforcement learning (RL) has made significant progress in areas such as Atari games and robotic control, where the agents have perfect sensing capabilities. However, in many real-world sequential decision-making tasks, the observation data could be noisy or incomplete due to the intrinsic low quality of the sensors or unexpected malfunctions; that is, the agent’s perceptions are rarely perfect. The current POMDP RL methods, such as particle-based and Gaussian-based, can only provide a probability estimate of hidden states rather than certain belief regions, which may lead to inefficient and even wrong decision-making. This paper proposes a novel algorithm called Set-membership Belief state-based Reinforcement Learning (SBRL), which consists of two parts: a Set-membership Belief state learning Model (SBM) for learning bounded belief state sets and an RL controller for making decisions based on SBM. We prove that our belief estimation method can provide a series of belief state sets that always contain the true states under the unknown-but-bounded (UBB) noise. The effectiveness of the proposed method is verified on a collection of benchmark tasks, and the results show that our method outperforms the state-of-the-art methods.
|
https://proceedings.mlr.press/v202/wei23e.html
|
https://proceedings.mlr.press/v202/wei23e/wei23e.pdf
|
https://openreview.net/forum?id=g0ofsq1NRL
|
Mitigating Memorization of Noisy Labels by Clipping the Model Prediction
|
https://proceedings.mlr.press/v202/wei23e.html
|
Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li
|
https://proceedings.mlr.press/v202/wei23e.html
|
ICML 2023
|
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks. Cross Entropy (CE) loss has been shown to be not robust to noisy labels due to its unboundedness. To alleviate this issue, existing works typically design specialized robust losses with the symmetric condition, which usually lead to the underfitting issue. In this paper, our key idea is to induce a loss bound at the logit level, thus universally enhancing the noise robustness of existing losses. Specifically, we propose logit clipping (LogitClip), which clamps the norm of the logit vector to ensure that it is upper bounded by a constant. In this manner, CE loss equipped with our LogitClip method is effectively bounded, mitigating the overfitting to examples with noisy labels. Moreover, we present theoretical analyses to certify the noise-tolerant ability of LogitClip. Extensive experiments show that LogitClip not only significantly improves the noise robustness of CE loss, but also broadly enhances the generalization performance of popular robust losses.
|
https://proceedings.mlr.press/v202/weilbach23a.html
|
https://proceedings.mlr.press/v202/weilbach23a/weilbach23a.pdf
|
https://openreview.net/forum?id=24wzmwrldX
|
Graphically Structured Diffusion Models
|
https://proceedings.mlr.press/v202/weilbach23a.html
|
Christian Dietrich Weilbach, William Harvey, Frank Wood
|
https://proceedings.mlr.press/v202/weilbach23a.html
|
ICML 2023
|
We introduce a framework for automatically defining and learning deep generative models with problem-specific structure. We tackle problem domains that are more traditionally solved by algorithms such as sorting, constraint satisfaction for Sudoku, and matrix factorization. Concretely, we train diffusion models with an architecture tailored to the problem specification. This problem specification should contain a graphical model describing relationships between variables, and often benefits from explicit representation of subcomputations. Permutation invariances can also be exploited. Across a diverse set of experiments we improve the scaling relationship between problem dimension and our model’s performance, in terms of both training time and final accuracy. Our code can be found at https://github.com/plai-group/gsdm.
|
https://proceedings.mlr.press/v202/welke23a.html
|
https://proceedings.mlr.press/v202/welke23a/welke23a.pdf
|
https://openreview.net/forum?id=ppgRPC14uI
|
Expectation-Complete Graph Representations with Homomorphisms
|
https://proceedings.mlr.press/v202/welke23a.html
|
Pascal Welke, Maximilian Thiessen, Fabian Jogl, Thomas Gärtner
|
https://proceedings.mlr.press/v202/welke23a.html
|
ICML 2023
|
We investigate novel random graph embeddings that can be computed in expected polynomial time and that are able to distinguish all non-isomorphic graphs in expectation. Previous graph embeddings have limited expressiveness and either cannot distinguish all graphs or cannot be computed efficiently for every graph. To be able to approximate arbitrary functions on graphs, we are interested in efficient alternatives that become arbitrarily expressive with increasing resources. Our approach is based on Lovász’ characterisation of graph isomorphism through an infinite dimensional vector of homomorphism counts. Our empirical evaluation shows competitive results on several benchmark graph learning tasks.
|
https://proceedings.mlr.press/v202/wen23a.html
|
https://proceedings.mlr.press/v202/wen23a/wen23a.pdf
|
https://openreview.net/forum?id=HpOVpztozV
|
A Conditional Normalizing Flow for Accelerated Multi-Coil MR Imaging
|
https://proceedings.mlr.press/v202/wen23a.html
|
Jeffrey Wen, Rizwan Ahmad, Philip Schniter
|
https://proceedings.mlr.press/v202/wen23a.html
|
ICML 2023
|
Accelerated magnetic resonance (MR) imaging attempts to reduce acquisition time by collecting data below the Nyquist rate. As an ill-posed inverse problem, many plausible solutions exist, yet the majority of deep learning approaches generate only a single solution. We instead focus on sampling from the posterior distribution, which provides more comprehensive information for downstream inference tasks. To do this, we design a novel conditional normalizing flow (CNF) that infers the signal component in the measurement operator’s nullspace, which is later combined with measured data to form complete images. Using fastMRI brain and knee data, we demonstrate fast inference and accuracy that surpasses recent posterior sampling techniques for MRI. Code is available at https://github.com/jwen307/mri_cnf
|
https://proceedings.mlr.press/v202/wen23b.html
|
https://proceedings.mlr.press/v202/wen23b/wen23b.pdf
|
https://openreview.net/forum?id=ml9EmtlMiy
|
Optimizing Mode Connectivity for Class Incremental Learning
|
https://proceedings.mlr.press/v202/wen23b.html
|
Haitao Wen, Haoyang Cheng, Heqian Qiu, Lanxiao Wang, Lili Pan, Hongliang Li
|
https://proceedings.mlr.press/v202/wen23b.html
|
ICML 2023
|
Class incremental learning (CIL) is one of the most challenging scenarios in continual learning. Existing work mainly focuses on strategies like memory replay, regularization, or dynamic architecture but ignores a crucial aspect: mode connectivity. Recent studies have shown that different minima can be connected by a low-loss valley, and ensembling over the valley shows improved performance and robustness. Motivated by this, we try to investigate the connectivity in CIL and find that the high-loss ridge exists along the linear connection between two adjacent continual minima. To dodge the ridge, we propose parameter-saving OPtimizing Connectivity (OPC) based on Fourier series and gradient projection for finding the low-loss path between minima. The optimized path provides infinite low-loss solutions. We further propose EOPC to ensemble points within a local bent cylinder to improve performance on learned tasks. Our scheme can serve as a plug-in unit, extensive experiments on CIFAR-100, ImageNet-100, and ImageNet-1K show consistent improvements when adapting EOPC to existing representative CIL methods. Our code is available at https://github.com/HaitaoWen/EOPC.
|
https://proceedings.mlr.press/v202/weng23a.html
|
https://proceedings.mlr.press/v202/weng23a/weng23a.pdf
|
https://openreview.net/forum?id=mSslPmao9h
|
Towards Learning Geometric Eigen-Lengths Crucial for Fitting Tasks
|
https://proceedings.mlr.press/v202/weng23a.html
|
Yijia Weng, Kaichun Mo, Ruoxi Shi, Yanchao Yang, Leonidas Guibas
|
https://proceedings.mlr.press/v202/weng23a.html
|
ICML 2023
|
Some extremely low-dimensional yet crucial geometric eigen-lengths often determine the success of some geometric tasks. For example, the height of an object is important to measure to check if it can fit between the shelves of a cabinet, while the width of a couch is crucial when trying to move it through a doorway. Humans have materialized such crucial geometric eigen-lengths in common sense since they are very useful in serving as succinct yet effective, highly interpretable, and universal object representations. However, it remains obscure and underexplored if learning systems can be equipped with similar capabilities of automatically discovering such key geometric quantities from doing tasks. In this work, we therefore for the first time formulate and propose a novel learning problem on this question and set up a benchmark suite including tasks, data, and evaluation metrics for studying the problem. We focus on a family of common fitting tasks as the testbed for the proposed learning problem. We explore potential solutions and demonstrate the feasibility of learning eigen-lengths from simply observing successful and failed fitting trials. We also attempt geometric grounding for more accurate eigen-length measurement and study the reusability of the learned geometric eigen-lengths across multiple tasks. Our work marks the first exploratory step toward learning crucial geometric eigen-lengths and we hope it can inspire future research in tackling this important yet underexplored problem.
|
https://proceedings.mlr.press/v202/weng23b.html
|
https://proceedings.mlr.press/v202/weng23b/weng23b.pdf
|
https://openreview.net/forum?id=bj4XknLFIN
|
Open-VCLIP: Transforming CLIP to an Open-vocabulary Video Model via Interpolated Weight Optimization
|
https://proceedings.mlr.press/v202/weng23b.html
|
Zejia Weng, Xitong Yang, Ang Li, Zuxuan Wu, Yu-Gang Jiang
|
https://proceedings.mlr.press/v202/weng23b.html
|
ICML 2023
|
Contrastive Language-Image Pretraining (CLIP) has demonstrated impressive zero-shot learning abilities for image understanding, yet limited effort has been made to investigate CLIP for zero-shot video recognition. We introduce Open-VCLIP, a simple yet effective approach that transforms CLIP into a strong zero-shot video classifier that can recognize unseen actions and events at test time. Our framework extends CLIP with minimal modifications to model spatial-temporal relationships in videos, making it a specialized video classifier, while striving for generalization. We formally show that training an Open-VCLIP is equivalent to continual learning with zero historical data. To address this problem, we propose Interpolated Weight Optimization, which utilizes the benefit of weight interpolation in both training and test time. We evaluate our method on three popular and challenging action recognition datasets following various zero-shot evaluation protocols and we demonstrate our approach outperforms state-of-the-art methods by clear margins. In particular, we achieve 87.9%, 58.3%, 81.1% zero-shot accuracy on UCF, HMDB and Kinetics-600 respectively, outperforming state-of-the-art methods by 8.3%, 7.8% and 12.2%. Code is released at https://github.com/wengzejia1/Open-VCLIP.
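Test-time weight interpolation of the kind the abstract builds on can be sketched in a few lines; this is a generic WiSE-FT-style merge and not the paper's Interpolated Weight Optimization, which also applies interpolation during training.

```python
def interpolate_state_dicts(pretrained, finetuned, alpha=0.5):
    """Blend pretrained and fine-tuned weights key by key.

    pretrained, finetuned : dicts mapping parameter names to tensors/arrays
    that support scalar multiplication and addition. alpha = 0 keeps the
    original (zero-shot) model, alpha = 1 keeps the fine-tuned one.
    """
    assert pretrained.keys() == finetuned.keys()
    return {k: (1 - alpha) * pretrained[k] + alpha * finetuned[k]
            for k in pretrained}
```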
|
https://proceedings.mlr.press/v202/whitehouse23a.html
|
https://proceedings.mlr.press/v202/whitehouse23a/whitehouse23a.pdf
|
https://openreview.net/forum?id=pMQciTD4vb
|
Fully-Adaptive Composition in Differential Privacy
|
https://proceedings.mlr.press/v202/whitehouse23a.html
|
Justin Whitehouse, Aaditya Ramdas, Ryan Rogers, Steven Wu
|
https://proceedings.mlr.press/v202/whitehouse23a.html
|
ICML 2023
|
Composition is a key feature of differential privacy. Well-known advanced composition theorems allow one to query a private database quadratically more times than basic privacy composition would permit. However, these results require that the privacy parameters of all algorithms be fixed before interacting with the data. To address this, Rogers et al. introduced fully adaptive composition, wherein both algorithms and their privacy parameters can be selected adaptively. They defined two probabilistic objects to measure privacy in adaptive composition: privacy filters, which provide differential privacy guarantees for composed interactions, and privacy odometers, time-uniform bounds on privacy loss. There are substantial gaps between advanced composition and existing filters and odometers. First, existing filters place stronger assumptions on the algorithms being composed. Second, these odometers and filters suffer from large constants, making them impractical. We construct filters that match the rates of advanced composition, including constants, despite allowing for adaptively chosen privacy parameters. En route we also derive a privacy filter for approximate zCDP. We also construct several general families of odometers. These odometers match the tightness of advanced composition at an arbitrary, preselected point in time, or at all points in time simultaneously, up to a doubly-logarithmic factor. We obtain our results by leveraging advances in martingale concentration. In sum, we show that fully adaptive privacy is obtainable at almost no loss.
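To make the notion of a privacy filter concrete, here is a toy filter based on basic composition only; the paper's contribution is filters that match advanced composition with exact constants under adaptively chosen parameters, which this sketch does not attempt. The class name and interface are illustrative.

```python
class BasicPrivacyFilter:
    """Toy privacy filter: admits adaptively chosen (epsilon, delta) requests
    as long as simple summation stays within a global budget."""

    def __init__(self, eps_budget, delta_budget):
        self.eps_budget, self.delta_budget = eps_budget, delta_budget
        self.eps_spent, self.delta_spent = 0.0, 0.0

    def request(self, eps, delta):
        # Return True (and charge the budget) iff the next query may run.
        if (self.eps_spent + eps <= self.eps_budget
                and self.delta_spent + delta <= self.delta_budget):
            self.eps_spent += eps
            self.delta_spent += delta
            return True
        return False

f = BasicPrivacyFilter(eps_budget=1.0, delta_budget=1e-5)
print(f.request(0.4, 0.0), f.request(0.4, 0.0), f.request(0.4, 0.0))  # True True False
```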
|
https://proceedings.mlr.press/v202/willette23a.html
|
https://proceedings.mlr.press/v202/willette23a/willette23a.pdf
|
https://openreview.net/forum?id=bBk09FBRox
|
Scalable Set Encoding with Universal Mini-Batch Consistency and Unbiased Full Set Gradient Approximation
|
https://proceedings.mlr.press/v202/willette23a.html
|
Jeffrey Willette, Seanie Lee, Bruno Andreis, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang
|
https://proceedings.mlr.press/v202/willette23a.html
|
ICML 2023
|
Recent work on mini-batch consistency (MBC) for set functions has brought attention to the need for sequentially processing and aggregating chunks of a partitioned set while guaranteeing the same output for all partitions. However, existing constraints on MBC architectures lead to models with limited expressive power. Additionally, prior work has not addressed how to deal with large sets during training when the full set gradient is required. To address these issues, we propose a Universally MBC (UMBC) class of set functions which can be used in conjunction with arbitrary non-MBC components while still satisfying MBC, enabling a wider range of function classes to be used in MBC settings. Furthermore, we propose an efficient MBC training algorithm which gives an unbiased approximation of the full set gradient and has a constant memory overhead for any set size for both train- and test-time. We conduct extensive experiments including image completion, text classification, unsupervised clustering, and cancer detection on high-resolution images to verify the efficiency and efficacy of our scalable set encoding framework. Our code is available at github.com/jeffwillette/umbc
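Mini-batch consistency is easiest to see with a sum-pooling (DeepSets-style) encoder, which returns identical outputs for any partition of the set; `phi` and `rho` below are stand-ins for learned networks, not the paper's UMBC architecture.

```python
import numpy as np

def phi(x):
    """Per-element feature map (stand-in for a learned network)."""
    return np.stack([x, x**2], axis=-1)

def rho(pooled, count):
    """Post-pooling head acting on the aggregated statistic (here: the mean)."""
    return pooled / count

def encode_streaming(chunks):
    """Mini-batch-consistent set encoding: process chunks one at a time and
    keep running sums, so the result is independent of the partition."""
    total, count = 0.0, 0
    for chunk in chunks:
        total = total + phi(chunk).sum(axis=0)
        count += len(chunk)
    return rho(total, count)

x = np.arange(10.0)
full = encode_streaming([x])                       # one big "batch"
parts = encode_streaming([x[:3], x[3:7], x[7:]])   # arbitrary partition
assert np.allclose(full, parts)                    # identical output: MBC holds
```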
|
https://proceedings.mlr.press/v202/williams23a.html
|
https://proceedings.mlr.press/v202/williams23a/williams23a.pdf
|
https://openreview.net/forum?id=7BxPT6X3sj
|
Flexible Phase Dynamics for Bio-Plausible Contrastive Learning
|
https://proceedings.mlr.press/v202/williams23a.html
|
Ezekiel Williams, Colin Bredenberg, Guillaume Lajoie
|
https://proceedings.mlr.press/v202/williams23a.html
|
ICML 2023
|
Many learning algorithms used as normative models in neuroscience or as candidate approaches for learning on neuromorphic chips learn by contrasting one set of network states with another. These Contrastive Learning (CL) algorithms are traditionally implemented with rigid, temporally non-local, and periodic learning dynamics that could limit the range of physical systems capable of harnessing CL. In this study, we build on recent work exploring how CL might be implemented by biological or neuromorphic systems and show that this form of learning can be made temporally local, and can still function even if many of the dynamical requirements of standard training procedures are relaxed. Thanks to a set of general theorems corroborated by numerical experiments across several CL models, our results provide theoretical foundations for the study and development of CL methods for biological and neuromorphic neural networks.
|
https://proceedings.mlr.press/v202/williams23b.html
|
https://proceedings.mlr.press/v202/williams23b/williams23b.pdf
|
https://openreview.net/forum?id=FJZlpWc9GN
|
Approximate Stein Classes for Truncated Density Estimation
|
https://proceedings.mlr.press/v202/williams23b.html
|
Daniel James Williams, Song Liu
|
https://proceedings.mlr.press/v202/williams23b.html
|
ICML 2023
|
Estimating truncated density models is difficult, as these models have intractable normalising constants and hard to satisfy boundary conditions. Score matching can be adapted to solve the truncated density estimation problem, but requires a continuous weighting function which takes zero at the boundary and is positive elsewhere. Evaluation of such a weighting function (and its gradient) often requires a closed-form expression of the truncation boundary and finding a solution to a complicated optimisation problem. In this paper, we propose approximate Stein classes, which in turn leads to a relaxed Stein identity for truncated density estimation. We develop a novel discrepancy measure, truncated kernelised Stein discrepancy (TKSD), which does not require fixing a weighting function in advance, and can be evaluated using only samples on the boundary. We estimate a truncated density model by minimising the Lagrangian dual of TKSD. Finally, experiments show the accuracy of our method to be an improvement over previous works even without the explicit functional form of the boundary.
|
https://proceedings.mlr.press/v202/wilming23a.html
|
https://proceedings.mlr.press/v202/wilming23a/wilming23a.pdf
|
https://openreview.net/forum?id=BdwGV6fwbK
|
Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables
|
https://proceedings.mlr.press/v202/wilming23a.html
|
Rick Wilming, Leo Kieslich, Benedict Clark, Stefan Haufe
|
https://proceedings.mlr.press/v202/wilming23a.html
|
ICML 2023
|
In recent years, the community of ’explainable artificial intelligence’ (XAI) has created a vast body of methods to bridge a perceived gap between model ’complexity’ and ’interpretability’. However, a concrete problem to be solved by XAI methods has not yet been formally stated. As a result, XAI methods are lacking theoretical and empirical evidence for the ’correctness’ of their explanations, limiting their potential use for quality-control and transparency purposes. At the same time, Haufe et al. (2014) showed, using simple toy examples, that even standard interpretations of linear models can be highly misleading. Specifically, high importance may be attributed to so-called suppressor variables lacking any statistical relation to the prediction target. This behavior has been confirmed empirically for a large array of XAI methods in Wilming et al. (2022). Here, we go one step further by deriving analytical expressions for the behavior of a variety of popular XAI methods on a simple two-dimensional binary classification problem involving Gaussian class-conditional distributions. We show that the majority of the studied approaches will attribute non-zero importance to a non-class-related suppressor feature in the presence of correlated noise. This poses important limitations on the interpretations and conclusions that the outputs of these XAI methods can afford.
|
https://proceedings.mlr.press/v202/wipf23a.html
|
https://proceedings.mlr.press/v202/wipf23a/wipf23a.pdf
|
https://openreview.net/forum?id=NUtErghzv4
|
Marginalization is not Marginal: No Bad VAE Local Minima when Learning Optimal Sparse Representations
|
https://proceedings.mlr.press/v202/wipf23a.html
|
David Wipf
|
https://proceedings.mlr.press/v202/wipf23a.html
|
ICML 2023
|
Although the variational autoencoder (VAE) represents a widely-used deep generative model, the underlying energy function when applied to continuous data remains poorly understood. In fact, most prior theoretical analysis has assumed a simplified affine decoder such that the model collapses to probabilistic PCA, a restricted regime whereby existing classical algorithms can also be trivially applied to guarantee globally optimal solutions. To push our understanding into more complex, practically-relevant settings, this paper instead adopts a deceptively sophisticated single-layer decoder that nonetheless allows the VAE to address the fundamental challenge of learning optimally sparse representations of continuous data originating from popular multiple-response regression models. In doing so, we can then examine VAE properties within the non-trivial context of solving difficult, NP-hard inverse problems. More specifically, we prove rigorous conditions which guarantee that any minimum of the VAE energy (local or global) will produce the optimally sparse latent representation, meaning zero reconstruction error using a minimal number of active latent dimensions. This is ultimately possible because VAE marginalization over the latent posterior selectively smooths away bad local minima as has been conjectured but not actually proven in prior work. We then discuss how equivalent-capacity deterministic autoencoders, even with appropriate sparsity-promoting regularization of the latent space, maintain bad local minima that do not correspond with such parsimonious representations. Overall, these results serve to elucidate key properties of the VAE loss surface relative to finding low-dimensional structure in data.
|
https://proceedings.mlr.press/v202/wollschlager23a.html
|
https://proceedings.mlr.press/v202/wollschlager23a/wollschlager23a.pdf
|
https://openreview.net/forum?id=DjwMRloMCO
|
Uncertainty Estimation for Molecules: Desiderata and Methods
|
https://proceedings.mlr.press/v202/wollschlager23a.html
|
Tom Wollschläger, Nicholas Gao, Bertrand Charpentier, Mohamed Amine Ketata, Stephan Günnemann
|
https://proceedings.mlr.press/v202/wollschlager23a.html
|
ICML 2023
|
Graph Neural Networks (GNNs) are promising surrogates for quantum mechanical calculations as they establish unprecedented low errors on collections of molecular dynamics (MD) trajectories. Thanks to their fast inference times they promise to accelerate computational chemistry applications. Unfortunately, despite low in-distribution (ID) errors, such GNNs might be horribly wrong for out-of-distribution (OOD) samples. Uncertainty estimation (UE) may aid in such situations by communicating the model’s certainty about its prediction. Here, we take a closer look at the problem and identify six key desiderata for UE in molecular force fields, three ’physics-informed’ and three ’application-focused’ ones. To overview the field, we survey existing methods from the field of UE and analyze how they fit to the set desiderata. By our analysis, we conclude that none of the previous works satisfies all criteria. To fill this gap, we propose Localized Neural Kernel (LNK) a Gaussian Process (GP)-based extension to existing GNNs satisfying the desiderata. In our extensive experimental evaluation, we test four different UE with three different backbones across two datasets. In out-of-equilibrium detection, we find LNK yielding up to 2.5 and 2.1 times lower errors in terms of AUC-ROC score than dropout or evidential regression-based methods while maintaining high predictive performance.
|
https://proceedings.mlr.press/v202/woo23a.html
|
https://proceedings.mlr.press/v202/woo23a/woo23a.pdf
|
https://openreview.net/forum?id=WfI3I8OjHS
|
The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond
|
https://proceedings.mlr.press/v202/woo23a.html
|
Jiin Woo, Gauri Joshi, Yuejie Chi
|
https://proceedings.mlr.press/v202/woo23a.html
|
ICML 2023
|
In this paper, we consider federated Q-learning, which aims to learn an optimal Q-function by periodically aggregating local Q-estimates trained on local data alone. Focusing on infinite-horizon tabular Markov decision processes, we provide sample complexity guarantees for both the synchronous and asynchronous variants of federated Q-learning. In both cases, our bounds exhibit a linear speedup with respect to the number of agents and sharper dependencies on other salient problem parameters. Moreover, existing approaches to federated Q-learning adopt an equally-weighted averaging of local Q-estimates, which can be highly sub-optimal in the asynchronous setting since the local trajectories can be highly heterogeneous due to different local behavior policies. Existing sample complexity scales inverse proportionally to the minimum entry of the stationary state-action occupancy distributions over all agents, requiring that every agent covers the entire state-action space. Instead, we propose a novel importance averaging algorithm, giving larger weights to more frequently visited state-action pairs. The improved sample complexity scales inverse proportionally to the minimum entry of the average stationary state-action occupancy distribution of all agents, thus only requiring the agents collectively cover the entire state-action space, unveiling the blessing of heterogeneity.
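The importance-averaging idea can be sketched directly on tabular Q-estimates: weight each agent's estimate of Q(s, a) by how often that agent visited (s, a). The plain visit-count normalization below is an illustrative assumption, not the paper's exact weighting.

```python
import numpy as np

def importance_average(q_tables, visit_counts, eps=1e-12):
    """Aggregate local Q-estimates entrywise, weighting each agent by its
    visit frequency for that state-action pair; equal weighting would ignore
    the heterogeneity of local behavior policies.

    q_tables, visit_counts : arrays of shape (num_agents, S, A).
    """
    weights = visit_counts / (visit_counts.sum(axis=0, keepdims=True) + eps)
    return (weights * q_tables).sum(axis=0)

# toy usage: two agents covering different parts of a 3-state, 2-action MDP
q = np.array([[[1.0, 0.0], [2.0, 0.0], [0.0, 0.0]],
              [[0.0, 3.0], [0.0, 0.0], [4.0, 5.0]]])
n = np.array([[[10, 0], [5, 0], [0, 0]],
              [[0, 8], [0, 0], [7, 9]]], dtype=float)
print(importance_average(q, n))
```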
|
https://proceedings.mlr.press/v202/woo23b.html
|
https://proceedings.mlr.press/v202/woo23b/woo23b.pdf
|
https://openreview.net/forum?id=pgcfCCNQXO
|
Learning Deep Time-index Models for Time Series Forecasting
|
https://proceedings.mlr.press/v202/woo23b.html
|
Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, Steven Hoi
|
https://proceedings.mlr.press/v202/woo23b.html
|
ICML 2023
|
Deep learning has been actively applied to time series forecasting, leading to a deluge of new methods, belonging to the class of historical-value models. Yet, despite the attractive properties of time-index models, such as being able to model the continuous nature of underlying time series dynamics, little attention has been given to them. Indeed, while naive deep time-index models are far more expressive than the manually predefined function representations of classical time-index models, they are inadequate for forecasting, being unable to generalize to unseen time steps due to the lack of inductive bias. In this paper, we propose DeepTime, a meta-optimization framework to learn deep time-index models which overcome these limitations, yielding an efficient and accurate forecasting model. Extensive experiments on real world datasets in the long sequence time-series forecasting setting demonstrate that our approach achieves competitive results with state-of-the-art methods, and is highly efficient. Code is available at https://github.com/salesforce/DeepTime.
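A bare-bones time-index forecaster, to contrast with historical-value models: fit a function of normalized time on the lookback window and evaluate it at future time indices. The sinusoidal features and ridge head below are deliberate simplifications of DeepTime's deep representation and meta-optimization, not its implementation.

```python
import numpy as np

def time_features(t, n_freqs=8):
    """Map normalized time indices t to sinusoidal features, a crude
    stand-in for a learned deep time-index representation."""
    k = np.arange(1, n_freqs + 1)
    return np.concatenate([np.ones((len(t), 1)),
                           np.sin(2 * np.pi * np.outer(t, k)),
                           np.cos(2 * np.pi * np.outer(t, k))], axis=1)

def fit_time_index_model(y, lam=1e-3):
    """Fit a ridge-regression head on time features of the lookback window,
    then forecast by evaluating the same function at future time indices."""
    n = len(y)
    X = time_features(np.arange(n) / n)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return lambda horizon: time_features(np.arange(n, n + horizon) / n) @ w

y = np.sin(np.linspace(0, 4 * np.pi, 200))
forecast = fit_time_index_model(y)(horizon=24)
```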
|
https://proceedings.mlr.press/v202/woodruff23a.html
|
https://proceedings.mlr.press/v202/woodruff23a/woodruff23a.pdf
|
https://openreview.net/forum?id=3onrj9ua4l
|
Sharper Bounds for $\ell_p$ Sensitivity Sampling
|
https://proceedings.mlr.press/v202/woodruff23a.html
|
David Woodruff, Taisuke Yasuda
|
https://proceedings.mlr.press/v202/woodruff23a.html
|
ICML 2023
|
In large scale machine learning, random sampling is a popular way to approximate datasets by a small representative subset of examples. In particular, sensitivity sampling is an intensely studied technique which provides provable guarantees on the quality of approximation, while reducing the number of examples to the product of the VC dimension $d$ and the total sensitivity $\mathfrak{S}$ in remarkably general settings. However, guarantees going beyond this general bound of $\mathfrak{S} d$ are known in perhaps only one setting, for $\ell_2$ subspace embeddings, despite intense study of sensitivity sampling in prior work. In this work, we show the first bounds for sensitivity sampling for $\ell_p$ subspace embeddings for $p\neq 2$ that improve over the general $\mathfrak{S} d$ bound, achieving a bound of roughly $\mathfrak{S}^{2/p}$ for $1\leq p<2$ and $\mathfrak{S}^{2-2/p}$ for $2<p<\infty$. We also show that the root leverage score sampling algorithm achieves a bound of roughly $d$ for $1\leq p<2$, and that a combination of leverage score and sensitivity sampling achieves an improved bound of roughly $d^{2/p}\mathfrak{S}^{2-4/p}$ for $2<p<\infty$.
|
https://proceedings.mlr.press/v202/woodworth23a.html
|
https://proceedings.mlr.press/v202/woodworth23a/woodworth23a.pdf
|
https://openreview.net/forum?id=FR2F4QzWFp
|
Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy
|
https://proceedings.mlr.press/v202/woodworth23a.html
|
Blake Woodworth, Konstantin Mishchenko, Francis Bach
|
https://proceedings.mlr.press/v202/woodworth23a.html
|
ICML 2023
|
We present an algorithm for minimizing an objective with hard-to-compute gradients by using a related, easier-to-access function as a proxy. Our algorithm is based on approximate proximal-point iterations on the proxy combined with relatively few stochastic gradients from the objective. When the difference between the objective and the proxy is $\delta$-smooth, our algorithm guarantees convergence at a rate matching stochastic gradient descent on a $\delta$-smooth objective, which can lead to substantially better sample efficiency. Our algorithm has many potential applications in machine learning, and provides a principled means of leveraging synthetic data, physics simulators, mixed public and private data, and more.
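A rough sketch of the proxy idea: spend one (expensive) gradient of the objective per outer step, then run cheap inner gradient steps on the shifted proxy plus a proximal term. The step sizes and the quadratic toy problem are illustrative choices, not the paper's recommended settings, and `grad_f` would be a stochastic gradient in practice.

```python
import numpy as np

def proxy_prox_step(grad_f, grad_h, x, eta=0.5, inner_steps=20, inner_lr=0.1):
    """One outer iteration of proximal-point-style optimization on a proxy.

    Uses a single (expensive) gradient of the objective f at x, then runs
    cheap inner gradient steps on the shifted proxy
        h(z) + <grad_f(x) - grad_h(x), z> + ||z - x||^2 / (2 * eta),
    which mimics a proximal step on f when f - h is smooth.
    """
    gf, gh = grad_f(x), grad_h(x)
    z = x.copy()
    for _ in range(inner_steps):
        g = grad_h(z) + (gf - gh) + (z - x) / eta
        z = z - inner_lr * g
    return z

# toy usage: f is a shifted quadratic, h the same quadratic without the shift
grad_f = lambda x: 2 * (x - 3.0)   # expensive objective gradient
grad_h = lambda x: 2 * x           # cheap proxy gradient
x = np.zeros(2)
for _ in range(30):
    x = proxy_prox_step(grad_f, grad_h, x)
print(x)  # approaches the minimizer of f at [3, 3]
```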
|
https://proceedings.mlr.press/v202/wu23a.html
|
https://proceedings.mlr.press/v202/wu23a/wu23a.pdf
|
https://openreview.net/forum?id=p32U4ulksI
|
SEGA: Structural Entropy Guided Anchor View for Graph Contrastive Learning
|
https://proceedings.mlr.press/v202/wu23a.html
|
Junran Wu, Xueyuan Chen, Bowen Shi, Shangzhe Li, Ke Xu
|
https://proceedings.mlr.press/v202/wu23a.html
|
ICML 2023
|
In contrastive learning, the choice of "view" controls the information that the representation captures and influences the performance of the model. However, leading graph contrastive learning methods generally produce views via random corruption or learning, which could lead to the loss of essential information and alteration of semantic information. An anchor view that maintains the essential information of input graphs for contrastive learning has been hardly investigated. In this paper, based on the theory of graph information bottleneck, we deduce the definition of this anchor view; put differently, the anchor view with essential information of input graph is supposed to have the minimal structural uncertainty. Furthermore, guided by structural entropy, we implement the anchor view, termed SEGA, for graph contrastive learning. We extensively validate the proposed anchor view on various benchmarks regarding graph classification under unsupervised, semi-supervised, and transfer learning and achieve significant performance boosts compared to the state-of-the-art methods.
|
https://proceedings.mlr.press/v202/wu23b.html
|
https://proceedings.mlr.press/v202/wu23b/wu23b.pdf
|
https://openreview.net/forum?id=1Hh1cIPJ7V
|
Causal Proxy Models for Concept-based Model Explanations
|
https://proceedings.mlr.press/v202/wu23b.html
|
Zhengxuan Wu, Karel D’Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts
|
https://proceedings.mlr.press/v202/wu23b.html
|
ICML 2023
|
Explainability methods for NLP systems encounter a version of the fundamental problem of causal inference: for a given ground-truth input text, we never truly observe the counterfactual texts necessary for isolating the causal effects of model representations on outputs. In response, many explainability methods make no use of counterfactual texts, assuming they will be unavailable. In this paper, we show that robust causal explainability methods can be created using approximate counterfactuals, which can be written by humans to approximate a specific counterfactual or simply sampled using metadata-guided heuristics. The core of our proposal is the Causal Proxy Model (CPM). A CPM explains a black-box model $\mathcal{N}$ because it is trained to have the same actual input/output behavior as $\mathcal{N}$ while creating neural representations that can be intervened upon to simulate the counterfactual input/output behavior of $\mathcal{N}$. Furthermore, we show that the best CPM for $\mathcal{N}$ performs comparably to $\mathcal{N}$ in making factual predictions, which means that the CPM can simply replace $\mathcal{N}$, leading to more explainable deployed models.
|
https://proceedings.mlr.press/v202/wu23c.html
|
https://proceedings.mlr.press/v202/wu23c/wu23c.pdf
|
https://openreview.net/forum?id=6uM4yf6D5l
|
Effective Neural Topic Modeling with Embedding Clustering Regularization
|
https://proceedings.mlr.press/v202/wu23c.html
|
Xiaobao Wu, Xinshuai Dong, Thong Thanh Nguyen, Anh Tuan Luu
|
https://proceedings.mlr.press/v202/wu23c.html
|
ICML 2023
|
Topic models have been prevalent for decades with various applications. However, existing topic models commonly suffer from the notorious topic collapsing: discovered topics semantically collapse towards each other, leading to highly repetitive topics, insufficient topic discovery, and damaged model interpretability. In this paper, we propose a new neural topic model, Embedding Clustering Regularization Topic Model (ECRTM). Besides the existing reconstruction error, we propose a novel Embedding Clustering Regularization (ECR), which forces each topic embedding to be the center of a separately aggregated word embedding cluster in the semantic space. This enables each produced topic to contain distinct word semantics, which alleviates topic collapsing. Regularized by ECR, our ECRTM generates diverse and coherent topics together with high-quality topic distributions of documents. Extensive experiments on benchmark datasets demonstrate that ECRTM effectively addresses the topic collapsing issue and consistently surpasses state-of-the-art baselines in terms of topic quality, topic distributions of documents, and downstream classification tasks.
|
https://proceedings.mlr.press/v202/wu23d.html
|
https://proceedings.mlr.press/v202/wu23d/wu23d.pdf
|
https://openreview.net/forum?id=qEoAywuHSO
|
Adaptive Compositional Continual Meta-Learning
|
https://proceedings.mlr.press/v202/wu23d.html
|
Bin Wu, Jinyuan Fang, Xiangxiang Zeng, Shangsong Liang, Qiang Zhang
|
https://proceedings.mlr.press/v202/wu23d.html
|
ICML 2023
|
This paper focuses on continual meta-learning, where few-shot tasks are heterogeneous and sequentially available. Recent works use a mixture model for meta-knowledge to deal with the heterogeneity. However, these methods suffer from parameter inefficiency caused by two reasons: (1) the underlying assumption of mutual exclusiveness among mixture components hinders sharing meta-knowledge across heterogeneous tasks. (2) they only allow increasing mixture components and cannot adaptively filter out redundant components. In this paper, we propose an Adaptive Compositional Continual Meta-Learning (ACML) algorithm, which employs a compositional premise to associate a task with a subset of mixture components, allowing meta-knowledge sharing among heterogeneous tasks. Moreover, to adaptively adjust the number of mixture components, we propose a component sparsification method based on evidential theory to filter out redundant components. Experimental results show ACML outperforms strong baselines, showing the effectiveness of our compositional meta-knowledge, and confirming that ACML can adaptively learn meta-knowledge.
|
https://proceedings.mlr.press/v202/wu23e.html
|
https://proceedings.mlr.press/v202/wu23e/wu23e.pdf
|
https://openreview.net/forum?id=Ht9r3P6Lts
|
Anchor Sampling for Federated Learning with Partial Client Participation
|
https://proceedings.mlr.press/v202/wu23e.html
|
Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao
|
https://proceedings.mlr.press/v202/wu23e.html
|
ICML 2023
|
Compared with full client participation, partial client participation is a more practical scenario in federated learning, but it may amplify some challenges in federated learning, such as data heterogeneity. The lack of inactive clients’ updates in partial client participation makes it more likely for the model aggregation to deviate from the aggregation based on full client participation. Training with large batches on individual clients is proposed to address data heterogeneity in general, but their effectiveness under partial client participation is not clear. Motivated by these challenges, we propose to develop a novel federated learning framework, referred to as FedAMD, for partial client participation. The core idea is anchor sampling, which separates partial participants into anchor and miner groups. Each client in the anchor group aims at the local bullseye with the gradient computation using a large batch. Guided by the bullseyes, clients in the miner group steer multiple near-optimal local updates using small batches and update the global model. By integrating the results of the two groups, FedAMD is able to accelerate the training process and improve the model performance. Measured by $\epsilon$-approximation and compared to the state-of-the-art methods, FedAMD achieves the convergence by up to $O(1/\epsilon)$ fewer communication rounds under non-convex objectives. Empirical studies on real-world datasets validate the effectiveness of FedAMD and demonstrate the superiority of the proposed algorithm: Not only does it considerably save computation and communication costs, but also the test accuracy significantly improves.
|
https://proceedings.mlr.press/v202/wu23f.html
|
https://proceedings.mlr.press/v202/wu23f/wu23f.pdf
|
https://openreview.net/forum?id=GwBsk5F1ti
|
Solving High-Dimensional PDEs with Latent Spectral Models
|
https://proceedings.mlr.press/v202/wu23f.html
|
Haixu Wu, Tengge Hu, Huakun Luo, Jianmin Wang, Mingsheng Long
|
https://proceedings.mlr.press/v202/wu23f.html
|
ICML 2023
|
Deep models have achieved impressive progress in solving partial differential equations (PDEs). A burgeoning paradigm is learning neural operators to approximate the input-output mappings of PDEs. While previous deep models have explored the multiscale architectures and various operator designs, they are limited to learning the operators as a whole in the coordinate space. In real physical science problems, PDEs are complex coupled equations with numerical solvers relying on discretization into high-dimensional coordinate space, which cannot be precisely approximated by a single operator nor efficiently learned due to the curse of dimensionality. We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs. Going beyond the coordinate space, LSM enables an attention-based hierarchical projection network to reduce the high-dimensional data into a compact latent space in linear time. Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space that approximates complex input-output mappings via learning multiple basis operators, enjoying nice theoretical guarantees for convergence and approximation. Experimentally, LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks covering both solid and fluid physics. Code is available at https://github.com/thuml/Latent-Spectral-Models.
|
https://proceedings.mlr.press/v202/wu23g.html
|
https://proceedings.mlr.press/v202/wu23g/wu23g.pdf
|
https://openreview.net/forum?id=jky0AfLHir
|
A Law of Robustness beyond Isoperimetry
|
https://proceedings.mlr.press/v202/wu23g.html
|
Yihan Wu, Heng Huang, Hongyang Zhang
|
https://proceedings.mlr.press/v202/wu23g.html
|
ICML 2023
|
We study the robust interpolation problem of arbitrary data distributions supported on a bounded space and propose a two-fold law of robustness. Robust interpolation refers to the problem of interpolating $n$ noisy training data points in $R^d$ by a Lipschitz function. Although this problem has been well understood when the samples are drawn from an isoperimetry distribution, much remains unknown concerning its performance under generic or even the worst-case distributions. We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ of the interpolating neural network with $p$ parameters on arbitrary data distributions. With this result, we validate the law of robustness conjecture in prior work by Bubeck, Li and Nagaraj on two-layer neural networks with polynomial weights. We then extend our result to arbitrary interpolating approximators and prove a Lipschitzness lower bound $\Omega(n^{1/d})$ for robust interpolation. Our results demonstrate a two-fold law of robustness: a) we show the potential benefit of overparametrization for smooth data interpolation when $n=poly(d)$, and b) we disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
|
https://proceedings.mlr.press/v202/wu23h.html
|
https://proceedings.mlr.press/v202/wu23h/wu23h.pdf
|
https://openreview.net/forum?id=tHLLq4S9Dr
|
Uncovering Adversarial Risks of Test-Time Adaptation
|
https://proceedings.mlr.press/v202/wu23h.html
|
Tong Wu, Feiran Jia, Xiangyu Qi, Jiachen T. Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal
|
https://proceedings.mlr.press/v202/wu23h.html
|
ICML 2023
|
Recently, test-time adaptation (TTA) has been proposed as a promising solution for addressing distribution shifts. It allows a base model to adapt to an unforeseen distribution during inference by leveraging the information from the batch of (unlabeled) test data. However, we uncover a novel security vulnerability of TTA based on the insight that predictions on benign samples can be impacted by malicious samples in the same batch. To exploit this vulnerability, we propose Distribution Invading Attack (DIA), which injects a small fraction of malicious data into the test batch. DIA causes models using TTA to misclassify benign and unperturbed test data, providing an entirely new capability for adversaries that is infeasible in canonical machine learning pipelines. Through comprehensive evaluations, we demonstrate the high effectiveness of our attack on multiple benchmarks across six TTA methods. In response, we investigate two countermeasures to robustify the existing insecure TTA implementations, following the principle of security by design. Together, we hope our findings can make the community aware of the utility-security tradeoffs in deploying TTA and provide valuable insights for developing robust TTA approaches.
|
https://proceedings.mlr.press/v202/wu23i.html
|
https://proceedings.mlr.press/v202/wu23i/wu23i.pdf
|
https://openreview.net/forum?id=rSOMtDM1mB
|
Stable Estimation of Heterogeneous Treatment Effects
|
https://proceedings.mlr.press/v202/wu23i.html
|
Anpeng Wu, Kun Kuang, Ruoxuan Xiong, Bo Li, Fei Wu
|
https://proceedings.mlr.press/v202/wu23i.html
|
ICML 2023
|
Estimating heterogeneous treatment effects (HTE) is crucial for identifying the variation of treatment effects across individuals or subgroups. Most existing methods estimate HTE by removing the confounding bias from imbalanced treatment assignments. However, these methods may produce unreliable estimates of treatment effects and potentially allocate suboptimal treatment arms for underrepresented populations. To improve the estimation accuracy of HTE for underrepresented populations, we propose a novel Stable CounterFactual Regression (StableCFR) to smooth the population distribution and upsample the underrepresented subpopulations, while balancing confounders between treatment and control groups. Specifically, StableCFR upsamples the underrepresented data using uniform sampling, where each disjoint subpopulation is weighted proportional to the Lebesgue measure of its support. Moreover, StableCFR balances covariates by using an epsilon-greedy matching approach. Empirical results on both synthetic and real-world datasets demonstrate the superior performance of our StableCFR on estimating HTE for underrepresented populations.
|
https://proceedings.mlr.press/v202/wu23j.html
|
https://proceedings.mlr.press/v202/wu23j/wu23j.pdf
|
https://openreview.net/forum?id=MocsSAUKlk
|
Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching
|
https://proceedings.mlr.press/v202/wu23j.html
|
Fang Wu, Siyuan Li, Xurui Jin, Yinghui Jiang, Dragomir Radev, Zhangming Niu, Stan Z. Li
|
https://proceedings.mlr.press/v202/wu23j.html
|
ICML 2023
|
The success of graph neural networks (GNNs) provokes the question about explainability: “Which fraction of the input graph is the most determinant of the prediction?” Particularly, parametric explainers prevail in existing approaches because of their more robust capability to decipher the black-box (i.e., target GNNs). In this paper, based on the observation that graphs typically share some common motif patterns, we propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs. It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node corresponding-based distance. Moreover, we note that present graph sampling or node-dropping methods usually suffer from the false positive sampling problem. To alleviate this issue, we design a new augmentation paradigm named MatchDrop. It takes advantage of MatchExplainer to fix the most informative portion of the graph and merely operates graph augmentations on the rest less informative part. Extensive experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer by outperforming all state-of-the-art parametric baselines with significant margins. Results also demonstrate that MatchDrop is a general scheme to be equipped with GNNs for enhanced performance. The code is available at https://github.com/smiles724/MatchExplainer.
|
https://proceedings.mlr.press/v202/wu23k.html
|
https://proceedings.mlr.press/v202/wu23k/wu23k.pdf
|
https://openreview.net/forum?id=q1WGm3hItW
|
Understanding Int4 Quantization for Language Models: Latency Speedup, Composability, and Failure Cases
|
https://proceedings.mlr.press/v202/wu23k.html
|
Xiaoxia Wu, Cheng Li, Reza Yazdani Aminabadi, Zhewei Yao, Yuxiong He
|
https://proceedings.mlr.press/v202/wu23k.html
|
ICML 2023
|
Improving the deployment efficiency of transformer-based language models has been challenging given their high computation and memory cost. While INT8 quantization has recently been shown to be effective in reducing both the memory cost and latency while preserving model accuracy, it remains unclear whether we can leverage INT4 (which doubles peak hardware throughput) to achieve further latency improvement. In this study, we explore the feasibility of employing INT4 weight and activation (W4A4) quantization for language models. Our findings indicate that W4A4 quantization introduces no to negligible accuracy degradation for encoder-only and encoder-decoder models, but causes a significant accuracy drop for decoder-only models. To materialize the performance gain using W4A4, we develop a highly-optimized end-to-end W4A4 encoder inference pipeline supporting different quantization strategies. Our INT4 pipeline is $8.5\times$ faster for latency-oriented scenarios and up to $3\times$ for throughput-oriented scenarios compared to the inference of FP16, and improves the SOTA BERT INT8 performance from FasterTransformer by up to $1.7\times$. We provide insights into the failure cases when applying W4A4 to decoder-only models, and further explore the compatibility of INT4 quantization with other compression methods, like pruning and layer reduction.
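For orientation, the numerics of symmetric INT4 (W4A4-style) quantization look like the sketch below; real deployments use per-group or per-channel scales and fused kernels, which this example omits.

```python
import numpy as np

def quantize_int4_symmetric(x):
    """Symmetric per-tensor INT4 quantization: map floats to integers in
    [-8, 7] with a single scale (here stored in an int8 container)."""
    max_abs = np.abs(x).max()
    scale = max_abs / 7.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(16, 16).astype(np.float32)
q, s = quantize_int4_symmetric(w)
err = np.abs(w - dequantize(q, s)).mean()   # error introduced by 4-bit rounding
```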
|
https://proceedings.mlr.press/v202/wu23l.html
|
https://proceedings.mlr.press/v202/wu23l/wu23l.pdf
|
https://openreview.net/forum?id=DDX1YCwOag
|
Towards Understanding Generalization of Macro-AUC in Multi-label Learning
|
https://proceedings.mlr.press/v202/wu23l.html
|
Guoqiang Wu, Chongxuan Li, Yilong Yin
|
https://proceedings.mlr.press/v202/wu23l.html
|
ICML 2023
|
Macro-AUC is the arithmetic mean of the class-wise AUCs in multi-label learning and is commonly used in practice. However, its theoretical understanding is far lacking. Toward solving it, we characterize the generalization properties of various learning algorithms based on the corresponding surrogate losses w.r.t. Macro-AUC. We theoretically identify a critical factor of the dataset affecting the generalization bounds: the label-wise class imbalance. Our results on the imbalance-aware error bounds show that the widely-used univariate loss-based algorithm is more sensitive to the label-wise class imbalance than the proposed pairwise and reweighted loss-based ones, which probably implies its worse performance. Moreover, empirical results on various datasets corroborate our theory findings. To establish it, technically, we propose a new (and more general) McDiarmid-type concentration inequality, which may be of independent interest.
|
https://proceedings.mlr.press/v202/wu23m.html
|
https://proceedings.mlr.press/v202/wu23m/wu23m.pdf
|
https://openreview.net/forum?id=SZldtLz7xg
|
Quantifying the Knowledge in GNNs for Reliable Distillation into MLPs
|
https://proceedings.mlr.press/v202/wu23m.html
|
Lirong Wu, Haitao Lin, Yufei Huang, Stan Z. Li
|
https://proceedings.mlr.press/v202/wu23m.html
|
ICML 2023
|
To bridge the gaps between topology-aware Graph Neural Networks (GNNs) and inference-efficient Multi-Layer Perceptron (MLPs), GLNN proposes to distill knowledge from a well-trained teacher GNN into a student MLP. Despite their great progress, comparatively little work has been done to explore the reliability of different knowledge points (nodes) in GNNs, especially their roles played during distillation. In this paper, we first quantify the knowledge reliability in GNN by measuring the invariance of their information entropy to noise perturbations, from which we observe that different knowledge points (1) show different distillation speeds (temporally); (2) are differentially distributed in the graph (spatially). To achieve reliable distillation, we propose an effective approach, namely Knowledge-inspired Reliable Distillation (KRD), that models the probability of each node being an informative and reliable knowledge point, based on which we sample a set of additional reliable knowledge points as supervision for training student MLPs. Extensive experiments show that KRD improves over the vanilla MLPs by 12.62% and outperforms its corresponding teacher GNNs by 2.16% averaged over 7 datasets and 3 GNN architectures. Codes are publicly available at: https://github.com/LirongWu/RKD.
|
https://proceedings.mlr.press/v202/wu23n.html
|
https://proceedings.mlr.press/v202/wu23n/wu23n.pdf
|
https://openreview.net/forum?id=2iNOg04g8R
|
Delay-agnostic Asynchronous Coordinate Update Algorithm
|
https://proceedings.mlr.press/v202/wu23n.html
|
Xuyang Wu, Changxin Liu, Sindri Magnússon, Mikael Johansson
|
https://proceedings.mlr.press/v202/wu23n.html
|
ICML 2023
|
We propose a delay-agnostic asynchronous coordinate update algorithm (DEGAS) for computing operator fixed points, with applications to asynchronous optimization. DEGAS includes novel asynchronous variants of ADMM and block-coordinate descent as special cases. We prove that DEGAS converges with both bounded and unbounded delays under delay-free parameter conditions. We also validate by theory and experiments that DEGAS adapts well to the actual delays. The effectiveness of DEGAS is demonstrated by numerical experiments on classification problems.
|
https://proceedings.mlr.press/v202/wu23o.html
|
https://proceedings.mlr.press/v202/wu23o/wu23o.pdf
|
https://openreview.net/forum?id=Qh0Gbq3lkh
|
Masked Trajectory Models for Prediction, Representation, and Control
|
https://proceedings.mlr.press/v202/wu23o.html
|
Philipp Wu, Arjun Majumdar, Kevin Stone, Yixin Lin, Igor Mordatch, Pieter Abbeel, Aravind Rajeswaran
|
https://proceedings.mlr.press/v202/wu23o.html
|
ICML 2023
|
We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct the trajectory conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns versatile networks that can take on different roles or capabilities, by simply choosing appropriate masks at inference time. For example, the same MTM network can be used as a forward dynamics model, inverse dynamics model, or even an offline RL agent. Through extensive experiments in several continuous control tasks, we show that the same MTM network – i.e. same weights – can match or outperform specialized networks trained for the aforementioned capabilities. Additionally, we find that state representations learned by MTM can significantly accelerate the learning speed of traditional RL algorithms. Finally, in offline RL benchmarks, we find that MTM is competitive with specialized offline RL algorithms, despite MTM being a generic self-supervised learning method without any explicit RL components. Code is available at https://github.com/facebookresearch/mtm.
|
https://proceedings.mlr.press/v202/wu23p.html
|
https://proceedings.mlr.press/v202/wu23p/wu23p.pdf
|
https://openreview.net/forum?id=jOLIFanYnt
|
Disentangled Multi-Fidelity Deep Bayesian Active Learning
|
https://proceedings.mlr.press/v202/wu23p.html
|
Dongxia Wu, Ruijia Niu, Matteo Chinazzi, Yian Ma, Rose Yu
|
https://proceedings.mlr.press/v202/wu23p.html
|
ICML 2023
|
To balance quality and cost, various domain areas of science and engineering run simulations at multiple levels of sophistication. Multi-fidelity active learning aims to learn a direct mapping from input parameters to simulation outputs at the highest fidelity by actively acquiring data from multiple fidelity levels. However, existing approaches based on Gaussian processes are hardly scalable to high-dimensional data. Deep learning-based methods often impose a hierarchical structure in hidden representations, which only supports passing information from low-fidelity to high-fidelity. These approaches can lead to the undesirable propagation of errors from low-fidelity representations to high-fidelity ones. We propose a novel framework called Disentangled Multi-fidelity Deep Bayesian Active Learning (D-MFDAL), which learns the surrogate models conditioned on the distribution of functions at multiple fidelities. On benchmark tasks of learning deep surrogates of partial differential equations including heat equation, Poisson’s equation and fluid simulations, our approach significantly outperforms state-of-the-art in prediction accuracy and sample efficiency.
|
https://proceedings.mlr.press/v202/wu23q.html
|
https://proceedings.mlr.press/v202/wu23q/wu23q.pdf
|
https://openreview.net/forum?id=vZh3aw4TaF
|
Tight Data Access Bounds for Private Top-$k$ Selection
|
https://proceedings.mlr.press/v202/wu23q.html
|
Hao Wu, Olga Ohrimenko, Anthony Wirth
|
https://proceedings.mlr.press/v202/wu23q.html
|
ICML 2023
|
We study the top-$k$ selection problem under the differential privacy model: $m$ items are rated according to votes of a set of clients. We consider a setting in which algorithms can retrieve data via a sequence of accesses, each either a random access or a sorted access; the goal is to minimize the total number of data accesses. Our algorithm requires only $O(\sqrt{mk})$ expected accesses: to our knowledge, this is the first sublinear data-access upper bound for this problem. Our analysis also shows that the well-known exponential mechanism requires only $O(\sqrt{m})$ expected accesses. Accompanying this, we develop the first lower bounds for the problem, in three settings: only random accesses; only sorted accesses; a sequence of accesses of either kind. We show that, to avoid $\Omega(m)$ access cost, supporting both kinds of access is necessary, and that in this case our algorithm’s access cost is optimal.
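The selection step itself (setting aside the data-access model that is the paper's focus) can be sketched with the standard peeling exponential mechanism, splitting the budget evenly across the $k$ picks; the budget split and score scaling below follow the textbook recipe, not this paper's algorithm.

```python
import numpy as np

def private_top_k(scores, k, eps, sensitivity=1.0, rng=None):
    """Select k items by iteratively applying the exponential mechanism
    ("peeling"), charging eps / k per selection under basic composition."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    remaining = list(range(len(scores)))
    chosen = []
    for _ in range(k):
        logits = (eps / (2 * k * sensitivity)) * scores[remaining]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        idx = rng.choice(len(remaining), p=p)   # sample proportionally to exp(score)
        chosen.append(remaining.pop(idx))
    return chosen

votes = [120, 95, 90, 40, 10, 5]
print(private_top_k(votes, k=2, eps=1.0))
```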
|
https://proceedings.mlr.press/v202/wu23r.html
|
https://proceedings.mlr.press/v202/wu23r/wu23r.pdf
|
https://openreview.net/forum?id=odBNsDyr4Q
|
The Implicit Regularization of Dynamical Stability in Stochastic Gradient Descent
|
https://proceedings.mlr.press/v202/wu23r.html
|
Lei Wu, Weijie J Su
|
https://proceedings.mlr.press/v202/wu23r.html
|
ICML 2023
|
In this paper, we study the implicit regularization of stochastic gradient descent (SGD) through the lens of dynamical stability (Wu et al., 2018). We start by revising existing stability analyses of SGD, showing how the Frobenius norm and trace of Hessian relate to different notions of stability. Notably, if a global minimum is linearly stable for SGD, then the trace of Hessian must be less than or equal to $2/\eta$, where $\eta$ denotes the learning rate. By contrast, for gradient descent (GD), the stability imposes a similar constraint but only on the largest eigenvalue of Hessian. We then turn to analyze the generalization properties of these stable minima, focusing specifically on two-layer ReLU networks and diagonal linear networks. Notably, we establish the equivalence between these metrics of sharpness and certain parameter norms for the two models, which allows us to show that the stable minima of SGD provably generalize well. By contrast, the stability-induced regularization of GD is provably too weak to ensure satisfactory generalization. This discrepancy provides an explanation of why SGD often generalizes better than GD. Note that the learning rate (LR) plays a pivotal role in the strength of stability-induced regularization. As the LR increases, the regularization effect becomes more pronounced, elucidating why SGD with a larger LR consistently demonstrates superior generalization capabilities. Additionally, numerical experiments are provided to support our theoretical findings.
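For reference, the two stability conditions compared above can be written side by side; the GD threshold is the classical descent-stability bound, and the SGD threshold is the trace bound stated in the abstract.

```latex
% H: Hessian at the minimum, \eta: learning rate.
\begin{align*}
  \text{GD linearly stable:}  &\quad \lambda_{\max}(H) \le \frac{2}{\eta}, \\
  \text{SGD linearly stable:} &\quad \operatorname{tr}(H) \le \frac{2}{\eta}.
\end{align*}
% Since \operatorname{tr}(H) \ge \lambda_{\max}(H) for a PSD Hessian, the SGD
% condition is the stricter one, i.e. the stronger flatness bias of SGD.
```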
|
https://proceedings.mlr.press/v202/wu23s.html
|
https://proceedings.mlr.press/v202/wu23s/wu23s.pdf
|
https://openreview.net/forum?id=3PZu2GPl64
|
Distributional Offline Policy Evaluation with Predictive Error Guarantees
|
https://proceedings.mlr.press/v202/wu23s.html
|
Runzhe Wu, Masatoshi Uehara, Wen Sun
|
https://proceedings.mlr.press/v202/wu23s.html
|
ICML 2023
|
We study the problem of estimating the distribution of the return of a policy using an offline dataset that is not generated from the policy, i.e., distributional offline policy evaluation (OPE). We propose an algorithm called Fitted Likelihood Estimation (FLE), which conducts a sequence of Maximum Likelihood Estimation (MLE) and has the flexibility of integrating any state-of-the-art probabilistic generative models as long as it can be trained via MLE. FLE can be used for both finite-horizon and infinite-horizon discounted settings where rewards can be multi-dimensional vectors. Our theoretical results show that for both finite-horizon and infinite-horizon discounted settings, FLE can learn distributions that are close to the ground truth under total variation distance and Wasserstein distance, respectively. Our theoretical results hold under the conditions that the offline data covers the test policy’s traces and that the supervised learning MLE procedures succeed. Experimentally, we demonstrate the performance of FLE with two generative models, Gaussian mixture models and diffusion models. For the multi-dimensional reward setting, FLE with diffusion models is capable of estimating the complicated distribution of the return of a test policy.
|
https://proceedings.mlr.press/v202/wu23t.html
|
https://proceedings.mlr.press/v202/wu23t/wu23t.pdf
|
https://openreview.net/forum?id=B5dh3UDjVs
|
$\pi$-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation
|
https://proceedings.mlr.press/v202/wu23t.html
|
Chengyue Wu, Teng Wang, Yixiao Ge, Zeyu Lu, Ruisong Zhou, Ying Shan, Ping Luo
|
https://proceedings.mlr.press/v202/wu23t.html
|
ICML 2023
|
Foundation models have achieved great advances in multi-task learning with a unified interface of unimodal and multimodal tasks. However, the potential of such multi-task learners has not been exploited during transfer learning. In this work, we present a universal parameter-efficient transfer learning method, termed Predict-Interpolate Tuning ($\pi$-Tuning), for vision, language, and vision-language tasks. It aggregates the parameters of lightweight task-specific experts learned from similar tasks to aid the target downstream task. The task similarities are predicted in a unified modality-independent space, yielding a scalable graph to demonstrate task relationships. $\pi$-Tuning has several appealing benefits. First, it flexibly explores both intra- and inter-modal transferability between similar tasks to improve the accuracy and robustness of transfer learning, especially in data-scarce scenarios. Second, it offers a systematical solution for transfer learning with multi-task prediction-and-then-interpolation, compatible with diverse types of parameter-efficient experts, such as prompt and adapter. Third, an extensive study of task-level mutual benefits on 14 unimodal and 6 multimodal datasets shows that $\pi$-Tuning surpasses fine-tuning and other parameter-efficient transfer learning methods both in full-shot and low-shot regimes. The task graph also enables an in-depth interpretable analysis of task transferability across modalities. The code will be available at https://github.com/TencentARC/pi-Tuning.
|
https://proceedings.mlr.press/v202/wu23u.html
|
https://proceedings.mlr.press/v202/wu23u/wu23u.pdf
|
https://openreview.net/forum?id=RZv1wqCOq9
|
Learning Functional Distributions with Private Labels
|
https://proceedings.mlr.press/v202/wu23u.html
|
Changlong Wu, Yifan Wang, Ananth Grama, Wojciech Szpankowski
|
https://proceedings.mlr.press/v202/wu23u.html
|
ICML 2023
|
We study the problem of learning functional distributions in the presence of noise. A functional is a map from the space of features to distributions over a set of labels, and is often assumed to belong to a known class of hypotheses $\mathcal{F}$. Features are generated by a general random process and labels are sampled independently from feature-dependent distributions. In privacy sensitive applications, labels are passed through a noisy kernel. We consider online learning, where at each time step, a predictor attempts to predict the actual (label) distribution given only the features and noisy labels in prior steps. The performance of the predictor is measured by the expected KL-risk that compares the predicted distributions to the underlying truth. We show that the minimax expected KL-risk is of order $\tilde{\Theta}(\sqrt{T\log|\mathcal{F}|})$ for finite hypothesis class $\mathcal{F}$ and any non-trivial noise level. We then extend this result to general infinite classes via the concept of stochastic sequential covering and provide matching lower and upper bounds for a wide range of natural classes.
|
https://proceedings.mlr.press/v202/wu23v.html
|
https://proceedings.mlr.press/v202/wu23v/wu23v.pdf
|
https://openreview.net/forum?id=jGYxcXSg8C
|
QuantumDARTS: Differentiable Quantum Architecture Search for Variational Quantum Algorithms
|
https://proceedings.mlr.press/v202/wu23v.html
|
Wenjie Wu, Ge Yan, Xudong Lu, Kaisen Pan, Junchi Yan
|
https://proceedings.mlr.press/v202/wu23v.html
|
ICML 2023
|
With the arrival of the Noisy Intermediate-Scale Quantum (NISQ) era and the rapid development of machine learning, variational quantum algorithms (VQA), including the Variational Quantum Eigensolver (VQE) and quantum neural networks (QNN), have received increasing attention, with wide potential applications in the foreseeable future. We study the problem of quantum architecture search (QAS) for VQA to automatically design parameterized quantum circuits (PQC). We devise a differentiable search algorithm based on Gumbel-Softmax, in contrast to peer methods that often require extensive circuit sampling and evaluation. Two versions of our algorithm are provided, namely macro search and micro search: macro search directly searches for the whole circuit, as in prior work, while the innovative micro search infers the sub-circuit structure from a small-scale problem and then transfers it to a large-scale one. We conduct intensive experiments on unweighted Max-Cut, ground-state energy estimation, and image classification. The superior performance shows the efficiency and capability of macro search, which requires little prior knowledge. Moreover, the experiments on micro search show the potential of our algorithm for large-scale QAS problems.
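The Gumbel-Softmax relaxation at the heart of a differentiable search can be sketched as follows: each circuit slot keeps learnable logits over candidate operations, and a soft one-hot sample mixes the candidates' effects so the logits receive gradients. The candidate "operations" here are stand-in tensors rather than a quantum simulator, and this illustrates only the relaxation, not the paper's full algorithm.

```python
import torch
import torch.nn.functional as F

# Differentiable selection among candidate operations via Gumbel-Softmax.
def gumbel_softmax_select(logits, candidate_outputs, tau=1.0, hard=False):
    """logits: (num_candidates,); candidate_outputs: stacked outputs, shape (num_candidates, ...)."""
    probs = F.gumbel_softmax(logits, tau=tau, hard=hard)            # differentiable (soft) sample
    return torch.tensordot(probs, candidate_outputs, dims=([0], [0]))

logits = torch.zeros(3, requires_grad=True)    # e.g., choose among three candidate gates
candidate_outputs = torch.randn(3, 4)          # stand-in for each candidate's effect on the state
mixed = gumbel_softmax_select(logits, candidate_outputs, tau=0.5)
mixed.sum().backward()                         # the architecture logits receive gradients
```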
|
https://proceedings.mlr.press/v202/wu23w.html
|
https://proceedings.mlr.press/v202/wu23w/wu23w.pdf
|
https://openreview.net/forum?id=QDxtrlPmfB
|
Discover and Cure: Concept-aware Mitigation of Spurious Correlation
|
https://proceedings.mlr.press/v202/wu23w.html
|
Shirley Wu, Mert Yuksekgonul, Linjun Zhang, James Zou
|
https://proceedings.mlr.press/v202/wu23w.html
|
ICML 2023
|
Deep neural networks often rely on spurious correlations to make predictions, which hinders generalization beyond training environments. For instance, models that associate cats with bed backgrounds can fail to predict the existence of cats in other environments without beds. Mitigating spurious correlations is crucial in building trustworthy models. However, the existing works lack transparency to offer insights into the mitigation process. In this work, we propose an interpretable framework, Discover and Cure (DISC), to tackle the issue. With human-interpretable concepts, DISC iteratively 1) discovers unstable concepts across different environments as spurious attributes, then 2) intervenes on the training data using the discovered concepts to reduce spurious correlation. Across systematic experiments, DISC provides superior generalization ability and interpretability compared to existing approaches. Specifically, it outperforms the state-of-the-art methods on an object recognition task and a skin-lesion classification task by 7.5% and 9.6%, respectively. Additionally, we offer theoretical analysis and guarantees to understand the benefits of models trained by DISC. Code and data are available at https://github.com/Wuyxin/DISC.
|
https://proceedings.mlr.press/v202/wu23x.html
|
https://proceedings.mlr.press/v202/wu23x/wu23x.pdf
|
https://openreview.net/forum?id=sEP5oUajXh
|
On the Training Instability of Shuffling SGD with Batch Normalization
|
https://proceedings.mlr.press/v202/wu23x.html
|
David Xing Wu, Chulhee Yun, Suvrit Sra
|
https://proceedings.mlr.press/v202/wu23x.html
|
ICML 2023
|
We uncover how SGD interacts with batch normalization and can exhibit undesirable training dynamics such as divergence. More precisely, we study how Single Shuffle (SS) and Random Reshuffle (RR)—two widely used variants of SGD—interact surprisingly differently in the presence of batch normalization: RR leads to much more stable evolution of training loss than SS. As a concrete example, for regression using a linear network with batch normalized inputs, we prove that SS and RR converge to distinct global optima that are “distorted” away from gradient descent. Thereafter, for classification we characterize conditions under which training divergence for SS and RR can, and cannot occur. We present explicit constructions to show how SS leads to distorted optima in regression and divergence for classification, whereas RR avoids both distortion and divergence. We validate our results empirically in realistic settings, and conclude that the separation between SS and RR used with batch normalization is relevant in practice.
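The two data-ordering schemes compared in the abstract differ only in how the per-epoch permutation is drawn; a small illustrative sketch follows (the least-squares objective is just a stand-in).

```python
import numpy as np

def epoch_order(n, scheme, rng):
    """Sample order for one epoch under Single Shuffle (SS) or Random Reshuffle (RR)."""
    if scheme == "SS":
        return np.random.default_rng(0).permutation(n)   # one fixed permutation, reused every epoch
    if scheme == "RR":
        return rng.permutation(n)                        # a fresh permutation every epoch
    raise ValueError(scheme)

# Toy usage: without-replacement SGD on a least-squares problem.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(32, 4)), rng.normal(size=32)
w = np.zeros(4)
for epoch in range(5):
    for i in epoch_order(len(y), "RR", rng):             # switch to "SS" to reuse a single ordering
        w -= 0.01 * (X[i] @ w - y[i]) * X[i]
```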
|
https://proceedings.mlr.press/v202/wu23y.html
|
https://proceedings.mlr.press/v202/wu23y/wu23y.pdf
|
https://openreview.net/forum?id=iSxFInGln2
|
dugMatting: Decomposed-Uncertainty-Guided Matting
|
https://proceedings.mlr.press/v202/wu23y.html
|
Jiawei Wu, Changqing Zhang, Zuoyong Li, Huazhu Fu, Xi Peng, Joey Tianyi Zhou
|
https://proceedings.mlr.press/v202/wu23y.html
|
ICML 2023
|
Cutting out an object and estimating its opacity mask, known as image matting, is a key task in image and video editing. Because the problem is highly ill-posed, additional inputs, typically user-defined trimaps or scribbles, are usually needed to reduce the uncertainty. Although effective, these inputs are either time-consuming to provide or only suitable for experienced users who know where to place the strokes. In this work, we propose a decomposed-uncertainty-guided matting (dugMatting) algorithm, which explores the explicitly decomposed uncertainties to efficiently and effectively improve the results. Based on the characteristics of these uncertainties, the epistemic uncertainty is reduced in the process of guiding interaction (which introduces prior knowledge), while the aleatoric uncertainty is reduced in modeling the data distribution (which introduces statistics for both data and possible noise). The proposed matting framework relieves the requirement for users to determine the interaction areas by using simple and efficient labeling. Extensive quantitative and qualitative results validate that the proposed method significantly improves the original matting algorithms in terms of both efficiency and efficacy.
|
https://proceedings.mlr.press/v202/wu23z.html
|
https://proceedings.mlr.press/v202/wu23z/wu23z.pdf
|
https://openreview.net/forum?id=nmVOTsQGR9
|
Personalized Federated Learning under Mixture of Distributions
|
https://proceedings.mlr.press/v202/wu23z.html
|
Yue Wu, Shuaicheng Zhang, Wenchao Yu, Yanchi Liu, Quanquan Gu, Dawei Zhou, Haifeng Chen, Wei Cheng
|
https://proceedings.mlr.press/v202/wu23z.html
|
ICML 2023
|
The recent trend towards Personalized Federated Learning (PFL) has garnered significant attention as it allows for the training of models that are tailored to each client while maintaining data privacy. However, current PFL techniques primarily focus on modeling the conditional distribution heterogeneity (i.e. concept shift), which can result in suboptimal performance when the distribution of input data across clients diverges (i.e. covariate shift). Additionally, these techniques often lack the ability to adapt to unseen data, further limiting their effectiveness in real-world scenarios. To address these limitations, we propose a novel approach, FedGMM, which utilizes Gaussian mixture models (GMM) to effectively fit the input data distributions across diverse clients. The model parameters are estimated by maximum likelihood estimation utilizing a federated Expectation-Maximization algorithm, which is solved in closed form and does not assume gradient similarity. Furthermore, FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification. Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
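One round of the federated EM idea can be sketched as follows: each client computes soft responsibilities and local sufficient statistics for a shared Gaussian mixture, and the server aggregates them and re-estimates the mixture in closed form. This omits FedGMM's personalization, privacy, and new-client adaptation details and is only an illustrative sketch.

```python
import numpy as np
from scipy.stats import multivariate_normal

def client_stats(X, weights, means, covs):
    """E-step on one client's data: responsibilities and local sufficient statistics."""
    resp = np.stack([w * multivariate_normal.pdf(X, m, c)
                     for w, m, c in zip(weights, means, covs)], axis=1)
    resp /= resp.sum(axis=1, keepdims=True)
    Nk = resp.sum(axis=0)                                 # effective counts per component
    Sx = resp.T @ X                                       # first-moment statistics
    Sxx = np.einsum("nk,nd,ne->kde", resp, X, X)          # second-moment statistics
    return Nk, Sx, Sxx

def server_update(stats_list):
    """Closed-form M-step from the sum of client statistics."""
    Nk = sum(s[0] for s in stats_list)
    Sx = sum(s[1] for s in stats_list)
    Sxx = sum(s[2] for s in stats_list)
    means = Sx / Nk[:, None]
    covs = Sxx / Nk[:, None, None] - np.einsum("kd,ke->kde", means, means)
    weights = Nk / Nk.sum()
    return weights, means, covs
```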
|
https://proceedings.mlr.press/v202/wu23aa.html
|
https://proceedings.mlr.press/v202/wu23aa/wu23aa.pdf
|
https://openreview.net/forum?id=24WcHqDM6O
|
Differentially Private Episodic Reinforcement Learning with Heavy-tailed Rewards
|
https://proceedings.mlr.press/v202/wu23aa.html
|
Yulian Wu, Xingyu Zhou, Sayak Ray Chowdhury, Di Wang
|
https://proceedings.mlr.press/v202/wu23aa.html
|
ICML 2023
|
In this paper we study the problem of (finite horizon tabular) Markov decision processes (MDPs) with heavy-tailed rewards under the constraint of differential privacy (DP). Compared with the previous studies for private reinforcement learning that typically assume rewards are sampled from some bounded or sub-Gaussian distributions to ensure DP, we consider the setting where reward distributions have only finite $(1+v)$-th moments with some $v \in (0,1]$. By resorting to robust mean estimators for rewards, we first propose two frameworks for heavy-tailed MDPs, i.e., one is for value iteration and another is for policy optimization. Under each framework, we consider both joint differential privacy (JDP) and local differential privacy (LDP) models. Based on our frameworks, we provide regret upper bounds for both JDP and LDP cases, and show that the moment of distributions and privacy budget have significant impact on regrets. Finally, we establish a lower bound of regret minimization for heavy-tailed MDPs in JDP model by reducing it to the instance-independent lower bound of heavy-tailed multi-armed bandits in DP model. We also show the lower bound for the problem in LDP by adopting some private minimax methods. Our results reveal that there are fundamental differences between the problem of private RL with sub-Gaussian and that with heavy-tailed rewards.
|
https://proceedings.mlr.press/v202/wu23ab.html
|
https://proceedings.mlr.press/v202/wu23ab/wu23ab.pdf
|
https://openreview.net/forum?id=yMs4kwihug
|
Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron
|
https://proceedings.mlr.press/v202/wu23ab.html
|
Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham M. Kakade
|
https://proceedings.mlr.press/v202/wu23ab.html
|
ICML 2023
|
This paper considers the problem of learning single ReLU neuron with squared loss (a.k.a., ReLU regression) in the overparameterized regime, where the input dimension can exceed the number of samples. We analyze a Perceptron-type algorithm called GLM-tron [Kakade et al. 2011], and provide its dimension-free risk upper bounds for high-dimensional ReLU regression in both well-specified and misspecified settings. Our risk bounds recover several existing results as special cases. Moreover, in the well-specified setting, we also provide an instance-wise matching risk lower bound for GLM-tron. Our upper and lower risk bounds provide a sharp characterization of the high-dimensional ReLU regression problems that can be learned via GLM-tron. On the other hand, we provide some negative results for stochastic gradient descent (SGD) for ReLU regression with symmetric Bernoulli data: if the model is well-specified, the excess risk of SGD is provably no better than that of GLM-tron ignoring constant factors, for each problem instance; and in the noiseless case, GLM-tron can achieve a small risk while SGD unavoidably suffers from a constant risk in expectation. These results together suggest that GLM-tron might be more preferable than SGD for high-dimensional ReLU regression.
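For readers unfamiliar with GLM-tron, the iteration analyzed here is short enough to state in code: a Perceptron-style update that passes predictions through the ReLU link but, unlike SGD on the squared loss, never differentiates through it. The step size and iteration count below are arbitrary.

```python
import numpy as np

def glm_tron(X, y, n_iters=100, lr=1.0):
    """GLM-tron for ReLU regression: w <- w + (lr/n) * sum_i (y_i - relu(w.x_i)) x_i."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        preds = np.maximum(X @ w, 0.0)          # ReLU link applied to the linear predictor
        w = w + lr * (y - preds) @ X / n        # error-driven update; no gradient of the ReLU
    return w
```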
|
https://proceedings.mlr.press/v202/xian23a.html
|
https://proceedings.mlr.press/v202/xian23a/xian23a.pdf
|
https://openreview.net/forum?id=iIuLNEnOue
|
Understanding Backdoor Attacks through the Adaptability Hypothesis
|
https://proceedings.mlr.press/v202/xian23a.html
|
Xun Xian, Ganghua Wang, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, Jie Ding
|
https://proceedings.mlr.press/v202/xian23a.html
|
ICML 2023
|
A poisoning backdoor attack is a rising security concern for deep learning. This type of attack can result in the backdoored model functioning normally most of the time but exhibiting abnormal behavior when presented with inputs containing the backdoor trigger, making it difficult to detect and prevent. In this work, we propose the adaptability hypothesis to understand when and why a backdoor attack works for general learning models, including deep neural networks, based on the theoretical investigation of classical kernel-based learning models. The adaptability hypothesis postulates that for an effective attack, the effect of incorporating a new dataset on the predictions of the original data points will be small, provided that the original data points are distant from the new dataset. Experiments on benchmark image datasets and state-of-the-art backdoor attacks for deep neural networks are conducted to corroborate the hypothesis. Our finding provides insight into the factors that affect the attack’s effectiveness and has implications for the design of future attacks and defenses.
|
https://proceedings.mlr.press/v202/xian23b.html
|
https://proceedings.mlr.press/v202/xian23b/xian23b.pdf
|
https://openreview.net/forum?id=8eml6N3JGo
|
Fair and Optimal Classification via Post-Processing
|
https://proceedings.mlr.press/v202/xian23b.html
|
Ruicheng Xian, Lang Yin, Han Zhao
|
https://proceedings.mlr.press/v202/xian23b.html
|
ICML 2023
|
To mitigate the bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but it often comes at the expense of model performance. Understanding such tradeoffs, therefore, underlies the design of fair algorithms. To this end, this paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems, under the most general multi-group, multi-class, and noisy setting. Specifically, we show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem. On the practical side, our findings lead to a simple post-processing algorithm that derives fair classifiers from score functions, which yields the optimal fair classifier when the score is Bayes optimal. We provide suboptimality analysis and sample complexity for our algorithm, and demonstrate its effectiveness on benchmark datasets.
|
https://proceedings.mlr.press/v202/xiang23a.html
|
https://proceedings.mlr.press/v202/xiang23a/xiang23a.pdf
|
https://openreview.net/forum?id=t0ozPUGnBs
|
UMD: Unsupervised Model Detection for X2X Backdoor Attacks
|
https://proceedings.mlr.press/v202/xiang23a.html
|
Zhen Xiang, Zidi Xiong, Bo Li
|
https://proceedings.mlr.press/v202/xiang23a.html
|
ICML 2023
|
Backdoor (Trojan) attack is a common threat to deep neural networks, where samples from one or more source classes embedded with a backdoor trigger will be misclassified to adversarial target classes. Existing methods for detecting whether a classifier is backdoor attacked are mostly designed for attacks with a single adversarial target (e.g., all-to-one attack). To the best of our knowledge, without supervision, no existing methods can effectively address the more general X2X attack with an arbitrary number of source classes, each paired with an arbitrary target class. In this paper, we propose UMD, the first Unsupervised Model Detection method that effectively detects X2X backdoor attacks via a joint inference of the adversarial (source, target) class pairs. In particular, we first define a novel transferability statistic to measure and select a subset of putative backdoor class pairs based on a proposed clustering approach. Then, these selected class pairs are jointly assessed based on an aggregation of their reverse-engineered trigger sizes for detection inference, using a robust and unsupervised anomaly detector we proposed. We conduct comprehensive evaluations on the CIFAR-10, GTSRB, and Imagenette datasets, and show that our unsupervised UMD outperforms SOTA detectors (even with supervision) by 17%, 4%, and 8%, respectively, in terms of the detection accuracy against diverse X2X attacks. We also show the strong detection performance of UMD against several strong adaptive attacks.
|
https://proceedings.mlr.press/v202/xiao23a.html
|
https://proceedings.mlr.press/v202/xiao23a/xiao23a.pdf
|
https://openreview.net/forum?id=7RIjvZfceF
|
Random Shuffle Transformer for Image Restoration
|
https://proceedings.mlr.press/v202/xiao23a.html
|
Jie Xiao, Xueyang Fu, Man Zhou, Hongjian Liu, Zheng-Jun Zha
|
https://proceedings.mlr.press/v202/xiao23a.html
|
ICML 2023
|
Non-local interactions play a vital role in boosting performance for image restoration. However, the local window Transformer has been preferred due to its efficiency for processing high-resolution images. This superiority in efficiency comes at the cost of sacrificing the ability to model non-local interactions. In this paper, we show that the local window Transformer can also model non-local interactions. This counterintuitive capability is based on the permutation-equivariance of self-attention. The basic principle is quite simple: by randomly shuffling the input, local self-attention also has the potential to model non-local interactions without introducing extra parameters. Our random shuffle strategy enjoys elegant theoretical guarantees in extending the local scope. The resulting Transformer, dubbed ShuffleFormer, is capable of processing high-resolution images efficiently while modeling non-local interactions. Extensive experiments demonstrate the effectiveness of ShuffleFormer across a variety of image restoration tasks, including image denoising, deraining, and deblurring. Code is available at https://github.com/jiexiaou/ShuffleFormer.
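The shuffle-attend-unshuffle idea can be sketched in a few lines: permute token positions, apply local window processing (a mean-pooling stand-in here, not real window attention), and invert the permutation. Shapes and the mixing operation are placeholders for illustration only.

```python
import torch

def shuffle_window_mix(x, window=4, generator=None):
    """x: (N, C) tokens, N divisible by `window`. Returns tokens in their original order."""
    n = x.shape[0]
    assert n % window == 0
    perm = torch.randperm(n, generator=generator)
    inv = torch.argsort(perm)
    xs = x[perm]                                           # random shuffle of positions
    xs = xs.view(n // window, window, -1)                  # group into local windows
    mixed = xs.mean(dim=1, keepdim=True).expand_as(xs)     # stand-in for window self-attention
    out = (xs + mixed).reshape(n, -1)
    return out[inv]                                        # unshuffle back to the original order
```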
|
https://proceedings.mlr.press/v202/xiao23b.html
|
https://proceedings.mlr.press/v202/xiao23b/xiao23b.pdf
|
https://openreview.net/forum?id=IYyhNudD9V
|
Communication-Efficient Federated Hypergradient Computation via Aggregated Iterative Differentiation
|
https://proceedings.mlr.press/v202/xiao23b.html
|
Peiyao Xiao, Kaiyi Ji
|
https://proceedings.mlr.press/v202/xiao23b.html
|
ICML 2023
|
Federated bilevel optimization has attracted increasing attention due to emerging machine learning and communication applications. The biggest challenge lies in computing the gradient of the upper-level objective function (i.e., hypergradient) in the federated setting due to the nonlinear and distributed construction of a series of global Hessian matrices. In this paper, we propose a novel communication-efficient federated hypergradient estimator via aggregated iterative differentiation (AggITD). AggITD is simple to implement and significantly reduces the communication cost by conducting the federated hypergradient estimation and the lower-level optimization simultaneously. We show that the proposed AggITD-based algorithm achieves the same sample complexity as existing approximate implicit differentiation (AID)-based approaches with much fewer communication rounds in the presence of data heterogeneity. Our results also shed light on the great advantage of ITD over AID in the federated/distributed hypergradient estimation. This differs from the comparison in the non-distributed bilevel optimization, where ITD is less efficient than AID. Our extensive experiments demonstrate the great effectiveness and communication efficiency of the proposed method.
|
https://proceedings.mlr.press/v202/xiao23c.html
|
https://proceedings.mlr.press/v202/xiao23c/xiao23c.pdf
|
https://openreview.net/forum?id=sHfSV8eYEp
|
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
|
https://proceedings.mlr.press/v202/xiao23c.html
|
Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han
|
https://proceedings.mlr.press/v202/xiao23c.html
|
ICML 2023
|
Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG, and LLaMA family. We demonstrate up to 1.56$\times$ speedup and 2$\times$ memory reduction for LLMs with negligible loss in accuracy. SmoothQuant enables serving 530B LLM within a single node. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs.
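The equivalent smoothing transformation described above is easy to state: divide each activation channel by a factor s and multiply the matching weight rows by s, leaving the matrix product unchanged while shrinking activation outliers. The particular choice of s below (balancing activation and weight ranges with a migration strength alpha) is an illustrative assumption rather than a definitive recipe, and the quantization step itself is omitted.

```python
import torch

def smooth(X, W, alpha=0.5, eps=1e-8):
    """X: (tokens, in_features) activations; W: (in_features, out_features) weights."""
    act_range = X.abs().amax(dim=0).clamp(min=eps)         # per-input-channel activation range
    w_range = W.abs().amax(dim=1).clamp(min=eps)           # per-input-channel weight range
    s = act_range ** alpha / w_range ** (1 - alpha)        # smoothing factor per channel
    X_s, W_s = X / s, W * s[:, None]                       # migrate difficulty from X to W
    return X_s, W_s

X, W = torch.randn(16, 64) * 10, torch.randn(64, 32)
X_s, W_s = smooth(X, W)
assert torch.allclose(X @ W, X_s @ W_s, atol=1e-3)          # mathematically equivalent output
```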
|
https://proceedings.mlr.press/v202/xiao23d.html
|
https://proceedings.mlr.press/v202/xiao23d/xiao23d.pdf
|
https://openreview.net/forum?id=jSkV9aP1Mi
|
On the Forward Invariance of Neural ODEs
|
https://proceedings.mlr.press/v202/xiao23d.html
|
Wei Xiao, Tsun-Hsuan Wang, Ramin Hasani, Mathias Lechner, Yutong Ban, Chuang Gan, Daniela Rus
|
https://proceedings.mlr.press/v202/xiao23d.html
|
ICML 2023
|
We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications by using invariance set propagation. Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system. This setup allows us to achieve output specification guarantees simply by changing the constrained parameters/inputs both during training and inference. Moreover, we demonstrate that our invariance set propagation through data-controlled neural ODEs not only maintains generalization performance but also creates an additional degree of robustness by enabling causal manipulation of the system’s parameters/inputs. We test our method on a series of representation learning tasks, including modeling physical dynamics and convexity portraits, as well as safe collision avoidance for autonomous vehicles.
|
https://proceedings.mlr.press/v202/xiao23e.html
|
https://proceedings.mlr.press/v202/xiao23e/xiao23e.pdf
|
https://openreview.net/forum?id=LrDkno4B3u
|
COMCAT: Towards Efficient Compression and Customization of Attention-Based Vision Models
|
https://proceedings.mlr.press/v202/xiao23e.html
|
Jinqi Xiao, Miao Yin, Yu Gong, Xiao Zang, Jian Ren, Bo Yuan
|
https://proceedings.mlr.press/v202/xiao23e.html
|
ICML 2023
|
Attention-based vision models, such as Vision Transformer (ViT) and its variants, have shown promising performance in various computer vision tasks. However, these emerging architectures suffer from large model sizes and high computational costs, calling for efficient model compression solutions. To date, pruning ViTs has been well studied, while other compression strategies that have been widely applied in CNN compression, e.g., model factorization, have been little explored in the context of ViT compression. This paper explores an efficient method for compressing vision transformers to enrich the toolset for obtaining compact attention-based vision models. Based on the new insight on the multi-head attention layer, we develop a highly efficient ViT compression solution, which outperforms the state-of-the-art pruning methods. For compressing DeiT-small and DeiT-base models on ImageNet, our proposed approach can achieve 0.45% and 0.76% higher top-1 accuracy even with fewer parameters. Our finding can also be applied to improve the customization efficiency of text-to-image diffusion models, with much faster training (up to $2.6\times$ speedup) and lower extra storage cost (up to $1927.5\times$ reduction) than the existing works.
|
https://proceedings.mlr.press/v202/xie23a.html
|
https://proceedings.mlr.press/v202/xie23a/xie23a.pdf
|
https://openreview.net/forum?id=e5qDTqs1MS
|
Improving Bi-level Optimization Based Methods with Inspiration from Humans’ Classroom Study Techniques
|
https://proceedings.mlr.press/v202/xie23a.html
|
Pengtao Xie
|
https://proceedings.mlr.press/v202/xie23a.html
|
ICML 2023
|
In humans’ classroom learning, many effective study techniques (e.g., the Feynman technique, peer questioning, etc.) have been developed to improve learning outcomes. We are interested in investigating whether these techniques can inspire the development of ML training strategies to improve bi-level optimization (BLO) based methods. Towards this goal, we develop a general framework, Skillearn, which consists of basic elements such as learners, interaction functions, learning stages, etc. These elements can be flexibly configured to create various training strategies, each emulating a study technique of humans. In case studies, we apply Skillearn to create new training strategies, by emulating the Feynman technique and peer questioning, which are two broadly adopted techniques in humans’ classroom learning. These training strategies are used for improving two BLO based applications including neural architecture search and data weighting. Experiments on various datasets demonstrate the effectiveness of our methods.
|
https://proceedings.mlr.press/v202/xie23b.html
|
https://proceedings.mlr.press/v202/xie23b/xie23b.pdf
|
https://openreview.net/forum?id=7maTHA7zua
|
Future-conditioned Unsupervised Pretraining for Decision Transformer
|
https://proceedings.mlr.press/v202/xie23b.html
|
Zhihui Xie, Zichuan Lin, Deheng Ye, Qiang Fu, Yang Wei, Shuai Li
|
https://proceedings.mlr.press/v202/xie23b.html
|
ICML 2023
|
Recent research in offline reinforcement learning (RL) has demonstrated that return-conditioned supervised learning is a powerful paradigm for decision-making problems. While promising, return conditioning is limited to training data labeled with rewards and therefore faces challenges in learning from unsupervised data. In this work, we aim to utilize generalized future conditioning to enable efficient unsupervised pretraining from reward-free and sub-optimal offline data. We propose Pretrained Decision Transformer (PDT), a conceptually simple approach for unsupervised RL pretraining. PDT leverages future trajectory information as a privileged context to predict actions during training. The ability to make decisions based on both present and future factors enhances PDT’s capability for generalization. Besides, this feature can be easily incorporated into a return-conditioned framework for online finetuning, by assigning return values to possible futures and sampling future embeddings based on their respective values. Empirically, PDT outperforms or performs on par with its supervised pretraining counterpart, especially when dealing with sub-optimal data. Further analysis reveals that PDT can extract diverse behaviors from offline data and controllably sample high-return behaviors by online finetuning. Code is available here.
|
https://proceedings.mlr.press/v202/xie23c.html
|
https://proceedings.mlr.press/v202/xie23c/xie23c.pdf
|
https://openreview.net/forum?id=M0bwbIl4Bl
|
DeSRA: Detect and Delete the Artifacts of GAN-based Real-World Super-Resolution Models
|
https://proceedings.mlr.press/v202/xie23c.html
|
Liangbin Xie, Xintao Wang, Xiangyu Chen, Gen Li, Ying Shan, Jiantao Zhou, Chao Dong
|
https://proceedings.mlr.press/v202/xie23c.html
|
ICML 2023
|
Image super-resolution (SR) with generative adversarial networks (GAN) has achieved great success in restoring realistic details. However, it is notorious that GAN-based SR models will inevitably produce unpleasant and undesirable artifacts, especially in practical scenarios. Previous works typically suppress artifacts with an extra loss penalty in the training phase. They only work for in-distribution artifact types generated during training. When applied in real-world scenarios, we observe that those improved methods still generate obviously annoying artifacts during inference. In this paper, we analyze the cause and characteristics of the GAN artifacts produced in unseen test data without ground-truths. We then develop a novel method, namely, DeSRA, to Detect and then “Delete” those SR Artifacts in practice. Specifically, we propose to measure a relative local variance distance from MSE-SR results and GAN-SR results, and locate the problematic areas based on the above distance and semantic-aware thresholds. After detecting the artifact regions, we develop a finetune procedure to improve GAN-based SR models with a few samples, so that they can deal with similar types of artifacts in more unseen real data. Equipped with our DeSRA, we can successfully eliminate artifacts from inference and improve the ability of SR models to be applied in real-world scenarios. The code will be available at https://github.com/TencentARC/DeSRA.
|
https://proceedings.mlr.press/v202/xie23d.html
|
https://proceedings.mlr.press/v202/xie23d/xie23d.pdf
|
https://openreview.net/forum?id=6lP80vBiI6
|
Semiparametrically Efficient Off-Policy Evaluation in Linear Markov Decision Processes
|
https://proceedings.mlr.press/v202/xie23d.html
|
Chuhan Xie, Wenhao Yang, Zhihua Zhang
|
https://proceedings.mlr.press/v202/xie23d.html
|
ICML 2023
|
We study semiparametrically efficient estimation in off-policy evaluation (OPE) where the underlying Markov decision process (MDP) is linear with a known feature map. We characterize the variance lower bound for regular estimators in the linear MDP setting and propose an efficient estimator whose variance achieves that lower bound. Consistency and asymptotic normality of our estimator are established under mild conditions, which merely requires the only infinite-dimensional nuisance parameter to be estimated at a $n^{-1/4}$ convergence rate. We also construct an asymptotically valid confidence interval for statistical inference and conduct simulation studies to validate our results. To our knowledge, this is the first work that concerns efficient estimation in the presence of a known structure of MDPs in the OPE literature.
|
https://proceedings.mlr.press/v202/xie23e.html
|
https://proceedings.mlr.press/v202/xie23e/xie23e.pdf
|
https://openreview.net/forum?id=5XtZGZ9o4Q
|
A Critical View of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment
|
https://proceedings.mlr.press/v202/xie23e.html
|
Hanchen Xie, Jiageng Zhu, Mahyar Khayatkhoei, Jiazhi Li, Mohamed E. Hussein, Wael Abdalmageed
|
https://proceedings.mlr.press/v202/xie23e.html
|
ICML 2023
|
Dynamics prediction, which is the problem of predicting future states of scene objects based on current and prior states, is drawing increasing attention as an instance of learning physics. To solve this problem, Region Proposal Convolutional Interaction Network (RPCIN), a vision-based model, was proposed and achieved state-of-the-art performance in long-term prediction. RPCIN only takes raw images and simple object descriptions, such as the bounding box and segmentation mask of each object, as input. However, despite its success, the model’s capability can be compromised under conditions of environment misalignment. In this paper, we investigate two challenging conditions for environment misalignment: Cross-Domain and Cross-Context by proposing four datasets that are designed for these challenges: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. The datasets cover two domains and two contexts. Using RPCIN as a probe, experiments conducted on the combinations of the proposed datasets reveal potential weaknesses of the vision-based long-term dynamics prediction model. Furthermore, we propose a promising direction to mitigate the Cross-Domain challenge and provide concrete evidence supporting such a direction, which provides dramatic alleviation of the challenge on the proposed datasets.
|
https://proceedings.mlr.press/v202/xing23a.html
|
https://proceedings.mlr.press/v202/xing23a/xing23a.pdf
|
https://openreview.net/forum?id=OuNhPgThxP
|
Controlling Type Confounding in Ad Hoc Teamwork with Instance-wise Teammate Feedback Rectification
|
https://proceedings.mlr.press/v202/xing23a.html
|
Dong Xing, Pengjie Gu, Qian Zheng, Xinrun Wang, Shanqi Liu, Longtao Zheng, Bo An, Gang Pan
|
https://proceedings.mlr.press/v202/xing23a.html
|
ICML 2023
|
Ad hoc teamwork requires an agent to cooperate with unknown teammates without prior coordination. Many works propose to abstract teammate instances into a high-level representation of types and then pre-train the best response for each type. However, most of them do not consider the distribution of teammate instances within a type. This could expose the agent to the hidden risk of type confounding. In the worst case, the best response for an abstract teammate type could be the worst response for all specific instances of that type. This work addresses the issue from the lens of causal inference. We first theoretically demonstrate that this phenomenon is due to the spurious correlation brought by uncontrolled teammate distribution. Then, we propose our solution, CTCAT, which disentangles such correlation through an instance-wise teammate feedback rectification. This operation reweights the interaction of teammate instances within a shared type to reduce the influence of type confounding. The effect of CTCAT is evaluated in multiple domains, including classic ad hoc teamwork tasks and real-world scenarios. Results show that CTCAT is robust to the influence of type confounding, a practical issue that directly harms the robustness of trained agents but went unnoticed in previous works.
|
https://proceedings.mlr.press/v202/xiong23a.html
|
https://proceedings.mlr.press/v202/xiong23a/xiong23a.pdf
|
https://openreview.net/forum?id=PzlO3SSk3A
|
Universal Morphology Control via Contextual Modulation
|
https://proceedings.mlr.press/v202/xiong23a.html
|
Zheng Xiong, Jacob Beck, Shimon Whiteson
|
https://proceedings.mlr.press/v202/xiong23a.html
|
ICML 2023
|
Learning a universal policy across different robot morphologies can significantly improve learning efficiency and generalization in continuous control. However, it poses a challenging multi-task reinforcement learning problem, as the optimal policy may be quite different across robots and critically depend on the morphology. Existing methods utilize graph neural networks or transformers to handle heterogeneous state and action spaces across different morphologies, but pay little attention to the dependency of a robot’s control policy on its morphology context. In this paper, we propose a hierarchical architecture to better model this dependency via contextual modulation, which includes two key submodules: (1) Instead of enforcing hard parameter sharing across robots, we use hypernetworks to generate morphology-dependent control parameters; (2) We propose a fixed attention mechanism that solely depends on the morphology to modulate the interactions between different limbs in a robot. Experimental results show that our method not only improves learning performance on a diverse set of training robots, but also generalizes better to unseen morphologies in a zero-shot fashion. The code is publicly available at https://github.com/MasterXiong/ModuMorph.
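The hypernetwork submodule described in the abstract can be pictured with a small sketch: a network maps a morphology context vector to the weights of a per-limb controller, so the control parameters depend on the morphology rather than being hard-shared. All dimensions and the linear controller form are placeholders.

```python
import torch
import torch.nn as nn

class HyperController(nn.Module):
    """Generates morphology-dependent controller parameters from a context vector."""
    def __init__(self, ctx_dim=16, obs_dim=8, act_dim=2):
        super().__init__()
        self.obs_dim, self.act_dim = obs_dim, act_dim
        self.hyper = nn.Linear(ctx_dim, obs_dim * act_dim + act_dim)  # emits W and b

    def forward(self, context, obs):
        params = self.hyper(context)
        W = params[: self.obs_dim * self.act_dim].view(self.act_dim, self.obs_dim)
        b = params[self.obs_dim * self.act_dim:]
        return obs @ W.T + b              # controller whose weights depend on the morphology

ctrl = HyperController()
action = ctrl(torch.randn(16), torch.randn(8))
```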
|
https://proceedings.mlr.press/v202/xiong23b.html
|
https://proceedings.mlr.press/v202/xiong23b/xiong23b.pdf
|
https://openreview.net/forum?id=BDYIci7bVs
|
Relevant Walk Search for Explaining Graph Neural Networks
|
https://proceedings.mlr.press/v202/xiong23b.html
|
Ping Xiong, Thomas Schnake, Michael Gastegger, Grégoire Montavon, Klaus Robert Muller, Shinichi Nakajima
|
https://proceedings.mlr.press/v202/xiong23b.html
|
ICML 2023
|
Graph Neural Networks (GNNs) have become important machine learning tools for graph analysis, and their explainability is crucial for safety, fairness, and robustness. Layer-wise relevance propagation for GNNs (GNN-LRP) evaluates the relevance of walks to reveal important information flows in the network, and provides higher-order explanations, which have been shown to be superior to the lower-order, i.e., node-/edge-level, explanations. However, identifying relevant walks by GNN-LRP requires exponential computational complexity with respect to the network depth, which we will remedy in this paper. Specifically, we propose polynomial-time algorithms for finding top-$K$ relevant walks, which drastically reduces the computation and thus increases the applicability of GNN-LRP to large-scale problems. Our proposed algorithms are based on the max-product algorithm—a common tool for finding the maximum likelihood configurations in probabilistic graphical models—and can find the most relevant walks exactly at the neuron level and approximately at the node level. Our experiments demonstrate the performance of our algorithms at scale and their utility across application domains, i.e., on epidemiology, molecular, and natural language benchmarks. We provide our codes under github.com/xiong-ping/rel_walk_gnnlrp.
|
https://proceedings.mlr.press/v202/xu23a.html
|
https://proceedings.mlr.press/v202/xu23a/xu23a.pdf
|
https://openreview.net/forum?id=ARDbU7beLp
|
Why do Nearest Neighbor Language Models Work?
|
https://proceedings.mlr.press/v202/xu23a.html
|
Frank F. Xu, Uri Alon, Graham Neubig
|
https://proceedings.mlr.press/v202/xu23a.html
|
ICML 2023
|
Language models (LMs) compute the probability of a text by sequentially computing a representation of an already-seen context and using this representation to predict the next word. Currently, most LMs calculate these representations through a neural network consuming the immediate previous context. Recently, however, retrieval-augmented LMs have been shown to improve over standard neural LMs by accessing information retrieved from a large datastore, in addition to their standard, parametric, next-word prediction. In this paper, we set out to understand why retrieval-augmented language models, and specifically k-nearest neighbor language models (kNN-LMs), perform better than standard parametric LMs, even when the k-nearest neighbor component retrieves examples from the same training set that the LM was originally trained on. To this end, we analyze various dimensions over which kNN-LM diverges from standard LMs, and investigate these dimensions one by one. Empirically, we identify three main reasons why kNN-LM performs better than standard LMs: using a different input representation for predicting the next tokens, approximate kNN search, and the importance of softmax temperature for the kNN distribution. Further, we incorporate some of these insights into the standard parametric LM, improving performance without the need for an explicit retrieval component. The code is available at https://github.com/frankxu2004/knnlm-why.
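The kNN-LM prediction rule being analyzed, including the softmax temperature the paper highlights, fits in a few lines. The retrieval step is elided; the distances, retrieved tokens, and interpolation weight below are toy values for illustration.

```python
import torch
import torch.nn.functional as F

def knn_lm_probs(lm_logits, neighbor_dists, neighbor_tokens, vocab_size, lam=0.25, temp=1.0):
    """Interpolate the parametric LM distribution with a distribution over retrieved neighbors."""
    p_lm = F.softmax(lm_logits, dim=-1)
    w = F.softmax(-neighbor_dists / temp, dim=-1)                     # temperature-scaled weights
    p_knn = torch.zeros(vocab_size).scatter_add_(0, neighbor_tokens, w)
    return (1 - lam) * p_lm + lam * p_knn                             # kNN-LM interpolation

probs = knn_lm_probs(torch.randn(100), torch.tensor([0.1, 0.5, 2.0]),
                     torch.tensor([3, 3, 7]), vocab_size=100)
```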
|
https://proceedings.mlr.press/v202/xu23b.html
|
https://proceedings.mlr.press/v202/xu23b/xu23b.pdf
|
https://openreview.net/forum?id=HltJfwwfhX
|
MixFlows: principled variational inference via mixed flows
|
https://proceedings.mlr.press/v202/xu23b.html
|
Zuheng Xu, Naitong Chen, Trevor Campbell
|
https://proceedings.mlr.press/v202/xu23b.html
|
ICML 2023
|
This work presents mixed variational flows (MixFlows), a new variational family that consists of a mixture of repeated applications of a map to an initial reference distribution. First, we provide efficient algorithms for i.i.d. sampling, density evaluation, and unbiased ELBO estimation. We then show that MixFlows have MCMC-like convergence guarantees when the flow map is ergodic and measure-preserving, and provide bounds on the accumulation of error for practical implementations where the flow map is approximated. Finally, we develop an implementation of MixFlows based on uncorrected discretized Hamiltonian dynamics combined with deterministic momentum refreshment. Simulated and real data experiments show that MixFlows can provide more reliable posterior approximations than several black-box normalizing flows, as well as samples of comparable quality to those obtained from state-of-the-art MCMC methods.
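Sampling from a MixFlow-style family reduces to drawing a reference point and applying the flow map a uniformly random number of times; the sketch below uses a simple volume-preserving shear purely as a stand-in for the actual ergodic, measure-preserving map.

```python
import numpy as np

def T(z):
    """A volume-preserving shear, standing in for the flow map."""
    x, v = z
    return np.array([x + 0.1 * v, v])

def mixflow_sample(n_steps, rng):
    z = rng.normal(size=2)                 # draw from the reference distribution
    k = rng.integers(0, n_steps + 1)       # mixture component = number of map applications
    for _ in range(k):
        z = T(z)
    return z

rng = np.random.default_rng(0)
samples = np.stack([mixflow_sample(10, rng) for _ in range(1000)])
```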
|
https://proceedings.mlr.press/v202/xu23c.html
|
https://proceedings.mlr.press/v202/xu23c/xu23c.pdf
|
https://openreview.net/forum?id=vpXYb9vqU4
|
Bit Allocation using Optimization
|
https://proceedings.mlr.press/v202/xu23c.html
|
Tongda Xu, Han Gao, Chenjian Gao, Yuanyuan Wang, Dailan He, Jinyong Pi, Jixiang Luo, Ziyu Zhu, Mao Ye, Hongwei Qin, Yan Wang, Jingjing Liu, Ya-Qin Zhang
|
https://proceedings.mlr.press/v202/xu23c.html
|
ICML 2023
|
In this paper, we consider the problem of bit allocation in Neural Video Compression (NVC). First, we reveal a fundamental relationship between bit allocation in NVC and Semi-Amortized Variational Inference (SAVI). Specifically, we show that SAVI with GoP (Group-of-Picture)-level likelihood is equivalent to pixel-level bit allocation with a precise rate & quality dependency model. Based on this equivalence, we establish a new paradigm of bit allocation using SAVI. Different from previous bit allocation methods, our approach requires no empirical model and is thus optimal. Moreover, as the original SAVI using gradient ascent only applies to single-level latents, we extend SAVI to multi-level settings such as NVC by recursively back-propagating through gradient ascent. Finally, we propose a tractable approximation for practical implementation. Our method can be applied to scenarios where performance outweighs encoding speed, and serves as an empirical bound on the R-D performance of bit allocation. Experimental results show that current state-of-the-art bit allocation algorithms still have room of $\approx 0.5$ dB PSNR for improvement compared with ours. Code is available at https://github.com/tongdaxu/Bit-Allocation-Using-Optimization.
|
https://proceedings.mlr.press/v202/xu23d.html
|
https://proceedings.mlr.press/v202/xu23d/xu23d.pdf
|
https://openreview.net/forum?id=na4JLS1Hh9
|
Regret Bounds for Markov Decision Processes with Recursive Optimized Certainty Equivalents
|
https://proceedings.mlr.press/v202/xu23d.html
|
Wenhao Xu, Xuefeng Gao, Xuedong He
|
https://proceedings.mlr.press/v202/xu23d.html
|
ICML 2023
|
The optimized certainty equivalent (OCE) is a family of risk measures that cover important examples such as entropic risk, conditional value-at-risk and mean-variance models. In this paper, we propose a new episodic risk-sensitive reinforcement learning formulation based on tabular Markov decision processes with recursive OCEs. We design an efficient learning algorithm for this problem based on value iteration and upper confidence bound. We derive an upper bound on the regret of the proposed algorithm, and also establish a minimax lower bound. Our bounds show that the regret rate achieved by our proposed algorithm has optimal dependence on the number of episodes and the number of actions.
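For reference, the general OCE form (stated here for a reward X with a concave utility u normalized so that u(0)=0 and u'(0)=1; the paper's exact sign conventions may differ) and the entropic-risk special case are:

```latex
\[
  \mathrm{OCE}_u(X) \;=\; \sup_{\lambda \in \mathbb{R}}
  \Big\{ \lambda + \mathbb{E}\big[\,u(X - \lambda)\,\big] \Big\}.
\]
% With the exponential utility u(t) = (1 - e^{-\beta t})/\beta, the supremum is
% attained at \lambda^* = -\frac{1}{\beta}\log \mathbb{E}[e^{-\beta X}], giving
\[
  \mathrm{OCE}_u(X) \;=\; -\tfrac{1}{\beta}\,\log \mathbb{E}\big[e^{-\beta X}\big],
\]
% i.e., the entropic risk measure; other utilities recover CVaR and mean-variance models.
```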
|
https://proceedings.mlr.press/v202/xu23e.html
|
https://proceedings.mlr.press/v202/xu23e/xu23e.pdf
|
https://openreview.net/forum?id=79b2zU6fGb
|
Probabilistic Categorical Adversarial Attack and Adversarial Training
|
https://proceedings.mlr.press/v202/xu23e.html
|
Han Xu, Pengfei He, Jie Ren, Yuxuan Wan, Zitao Liu, Hui Liu, Jiliang Tang
|
https://proceedings.mlr.press/v202/xu23e.html
|
ICML 2023
|
Studies on adversarial attacks and defenses have greatly improved the robustness of Deep Neural Networks (DNNs). Most advanced approaches have been overwhelmingly designed for continuous data such as images. However, these achievements are still hard to generalize to categorical data. To bridge this gap, we propose a novel framework, Probabilistic Categorical Adversarial Attack (or PCAA). It transforms the discrete optimization problem of finding categorical adversarial examples into a continuous problem that can be solved via gradient-based methods. We analyze the optimality (attack success rate) and time complexity of PCAA to demonstrate its significant advantage over current search-based attacks. More importantly, through extensive empirical studies, we demonstrate that the well-established defenses for continuous data, such as adversarial training and TRADES, can be easily accommodated to defend DNNs for categorical data.
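The continuous relaxation described above can be sketched as follows: keep a learnable distribution over categories per feature, optimize it with gradient ascent on the victim's loss, then discretize. The victim model, step counts, and discretization by argmax are placeholder choices for illustration, not the paper's PCAA procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def relaxed_categorical_attack(model, x_onehot, y, steps=50, lr=0.1):
    """x_onehot: (n_features, n_cats) one-hot input; y: scalar tensor with the true label."""
    logits = torch.log(x_onehot + 1e-6).clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        probs = F.softmax(logits, dim=-1)               # continuous relaxation over categories
        out = model(probs.flatten().unsqueeze(0))       # victim model on the relaxed input
        loss = -F.cross_entropy(out, y.unsqueeze(0))    # maximize the classification loss
        opt.zero_grad(); loss.backward(); opt.step()
    cats = F.softmax(logits, dim=-1).argmax(dim=-1)     # discretize back to categories
    return F.one_hot(cats, x_onehot.shape[-1]).float()

# Toy usage with a placeholder victim classifier over 4 features x 3 categories.
victim = nn.Sequential(nn.Linear(12, 16), nn.ReLU(), nn.Linear(16, 2))
x = F.one_hot(torch.tensor([0, 2, 1, 1]), 3).float()
x_adv = relaxed_categorical_attack(victim, x, torch.tensor(1))
```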
|