| title | authors | abstract | url | detail_url | abs | OpenReview | Download PDF | tags |
|---|---|---|---|---|---|---|---|---|
Environment Inference for Invariant Learning
|
Elliot Creager, Joern-Henrik Jacobsen, Richard Zemel
|
Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain-invariant learning, which identifies the key issue of learning which features are domain-specific versus domain-invariant. An important assumption in this area is that the training examples are partitioned into “domains” or “environments”. Our focus is on the more common setting where such partitions are not provided. We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels, and significantly outperforms ERM on worst-group performance in the Waterbirds dataset. Finally, we establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.
|
https://proceedings.mlr.press/v139/creager21a.html
|
https://proceedings.mlr.press/v139/creager21a.html
|
https://proceedings.mlr.press/v139/creager21a.html
|
http://proceedings.mlr.press/v139/creager21a/creager21a.pdf
|
ICML 2021
|
|
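Below is a minimal sketch (not the authors' code) of the environment-inference step described in the entry above, assuming a binary task, a fixed reference classifier whose detached logits are given, and an IRMv1-style invariance penalty; the names, hyper-parameters, and optimizer choice are illustrative.

```python
import torch
import torch.nn.functional as F

def infer_environments(ref_logits, y, steps=500, lr=0.01):
    """Softly split examples into two environments so that the IRMv1 penalty of a
    fixed reference classifier is maximised (a sketch of the EI step above).
    ref_logits: detached logits of the reference model; y: float labels in {0, 1}."""
    scale = torch.tensor(1.0, requires_grad=True)          # dummy classifier scale for IRMv1
    loss = F.binary_cross_entropy_with_logits(ref_logits * scale, y, reduction="none")
    env_logits = torch.zeros(len(y), requires_grad=True)   # soft assignment parameters
    opt = torch.optim.Adam([env_logits], lr=lr)
    for _ in range(steps):
        q = torch.sigmoid(env_logits)                      # P(example in environment 1)
        penalty = 0.0
        for w in (q, 1.0 - q):                             # the two inferred environments
            risk = (w * loss).sum() / w.sum()
            g = torch.autograd.grad(risk, [scale], create_graph=True)[0]
            penalty = penalty + g.pow(2)
        opt.zero_grad()
        (-penalty).backward(retain_graph=True)             # ascend the invariance penalty
        opt.step()
    return torch.sigmoid(env_logits).detach()              # soft environment assignments
```

The returned soft assignments (or a hard threshold of them) can then serve as the environment labels consumed by a downstream invariant-learning method.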
Mind the Box: $l_1$-APGD for Sparse Adversarial Attacks on Image Classifiers
|
Francesco Croce, Matthias Hein
|
We show that when the image domain $[0,1]^d$ is also taken into account, established $l_1$-projected gradient descent (PGD) attacks are suboptimal, as they do not consider that the effective threat model is the intersection of the $l_1$-ball and $[0,1]^d$. We study the expected sparsity of the steepest descent step for this effective threat model and show that the exact projection onto this set is computationally feasible and yields better performance. Moreover, we propose an adaptive form of PGD which is highly effective even with a small budget of iterations. Our resulting $l_1$-APGD is a strong white-box attack showing that prior works overestimated their $l_1$-robustness. Using $l_1$-APGD for adversarial training, we obtain a robust classifier with state-of-the-art $l_1$-robustness. Finally, we combine $l_1$-APGD and an adaptation of the Square Attack to $l_1$ into $l_1$-AutoAttack, an ensemble of attacks that reliably assesses adversarial robustness for the threat model of the $l_1$-ball intersected with $[0,1]^d$.
|
https://proceedings.mlr.press/v139/croce21a.html
|
https://proceedings.mlr.press/v139/croce21a.html
|
https://proceedings.mlr.press/v139/croce21a.html
|
http://proceedings.mlr.press/v139/croce21a/croce21a.pdf
|
ICML 2021
|
|
Parameterless Transductive Feature Re-representation for Few-Shot Learning
|
Wentao Cui, Yuhong Guo
|
Recent literature in few-shot learning (FSL) has shown that transductive methods often outperform their inductive counterparts. However, most transductive solutions, particularly the meta-learning-based ones, require inserting trainable parameters on top of some inductive baselines to facilitate transduction. In this paper, we propose a parameterless transductive feature re-representation framework that differs from all existing solutions in the following respects. (1) It is widely compatible with existing FSL methods, including meta-learning and fine-tuning based models. (2) The framework is simple and introduces no extra training parameters when applied to any architecture. We conduct experiments on three benchmark datasets by applying the framework to both representative meta-learning baselines and state-of-the-art FSL methods. Our framework consistently improves performance in all experiments and establishes new state-of-the-art FSL results.
|
https://proceedings.mlr.press/v139/cui21a.html
|
https://proceedings.mlr.press/v139/cui21a.html
|
https://proceedings.mlr.press/v139/cui21a.html
|
http://proceedings.mlr.press/v139/cui21a/cui21a.pdf
|
ICML 2021
|
|
Randomized Algorithms for Submodular Function Maximization with a $k$-System Constraint
|
Shuang Cui, Kai Han, Tianshuai Zhu, Jing Tang, Benwei Wu, He Huang
|
Submodular optimization has numerous applications such as crowdsourcing and viral marketing. In this paper, we study the problem of non-negative submodular function maximization subject to a $k$-system constraint, which generalizes many other important constraints in submodular optimization such as the cardinality constraint, matroid constraint, and $k$-extendible system constraint. The existing approaches for this problem are all based on deterministic algorithmic frameworks, and the best approximation ratio achieved by these algorithms (for a general submodular function) is $k+2\sqrt{k+2}+3$. We propose a randomized algorithm with an improved approximation ratio of $(1+\sqrt{k})^2$, while achieving nearly-linear time complexity significantly lower than that of the state-of-the-art algorithm. We also show that our algorithm can be further generalized to address a stochastic case where the elements can be adaptively selected, and achieve an approximation ratio of $(1+\sqrt{k+1})^2$ for the adaptive optimization case. The empirical performance of our algorithms is extensively evaluated in several applications related to data mining and social computing, and the experimental results demonstrate the superiority of our algorithms in terms of both utility and efficiency.
|
https://proceedings.mlr.press/v139/cui21b.html
|
https://proceedings.mlr.press/v139/cui21b.html
|
https://proceedings.mlr.press/v139/cui21b.html
|
http://proceedings.mlr.press/v139/cui21b/cui21b.pdf
|
ICML 2021
|
|
GBHT: Gradient Boosting Histogram Transform for Density Estimation
|
Jingyi Cui, Hanyuan Hang, Yisen Wang, Zhouchen Lin
|
In this paper, we propose a density estimation algorithm called Gradient Boosting Histogram Transform (GBHT), where we adopt the negative log-likelihood as the loss function to make the boosting procedure available for unsupervised tasks. From a learning theory viewpoint, we first prove fast convergence rates for GBHT under the smoothness assumption that the underlying density function lies in the space $C^{0,\alpha}$. Then, when the target density function lies in the space $C^{1,\alpha}$, we present an upper bound for GBHT which is smaller than the lower bound of its corresponding base learner, in the sense of convergence rates. To the best of our knowledge, we make the first attempt to theoretically explain why boosting can enhance the performance of its base learners for density estimation problems. In experiments, we not only conduct performance comparisons with the widely used KDE, but also apply GBHT to anomaly detection to showcase a further application of GBHT.
|
https://proceedings.mlr.press/v139/cui21c.html
|
https://proceedings.mlr.press/v139/cui21c.html
|
https://proceedings.mlr.press/v139/cui21c.html
|
http://proceedings.mlr.press/v139/cui21c/cui21c.pdf
|
ICML 2021
|
|
ProGraML: A Graph-based Program Representation for Data Flow Analysis and Compiler Optimizations
|
Chris Cummins, Zacharias V. Fisches, Tal Ben-Nun, Torsten Hoefler, Michael F P O’Boyle, Hugh Leather
|
Machine learning (ML) is increasingly seen as a viable approach for building compiler optimization heuristics, but many ML methods cannot replicate even the simplest of the data flow analyses that are critical to making good optimization decisions. We posit that if ML cannot do that, then it is insufficiently able to reason about programs. We formulate data flow analyses as supervised learning tasks and introduce a large open dataset of programs and their corresponding labels from several analyses. We use this dataset to benchmark ML methods and show that they struggle on these fundamental program reasoning tasks. We propose ProGraML - Program Graphs for Machine Learning - a language-independent, portable representation of program semantics. ProGraML overcomes the limitations of prior works and yields improved performance on downstream optimization tasks.
|
https://proceedings.mlr.press/v139/cummins21a.html
|
https://proceedings.mlr.press/v139/cummins21a.html
|
https://proceedings.mlr.press/v139/cummins21a.html
|
http://proceedings.mlr.press/v139/cummins21a/cummins21a.pdf
|
ICML 2021
|
|
Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning
|
Sebastian Curi, Ilija Bogunovic, Andreas Krause
|
In real-world tasks, reinforcement learning (RL) agents frequently encounter situations that are not present during training time. To ensure reliable performance, the RL agents need to exhibit robustness to such worst-case situations. The robust-RL framework addresses this challenge via a minimax optimization between an agent and an adversary. Previous robust RL algorithms are either sample inefficient, lack robustness guarantees, or do not scale to large problems. We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem while attaining near-optimal sample complexity guarantees. RH-UCRL is a model-based reinforcement learning (MBRL) algorithm that effectively distinguishes between epistemic and aleatoric uncertainty and efficiently explores both the agent and the adversary decision spaces during policy learning. We scale RH-UCRL to complex tasks via neural network ensemble models as well as neural network policies. Experimentally, we demonstrate that RH-UCRL outperforms other robust deep RL algorithms in a variety of adversarial environments.
|
https://proceedings.mlr.press/v139/curi21a.html
|
https://proceedings.mlr.press/v139/curi21a.html
|
https://proceedings.mlr.press/v139/curi21a.html
|
http://proceedings.mlr.press/v139/curi21a/curi21a.pdf
|
ICML 2021
|
|
Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability
|
Mihaela Curmei, Sarah Dean, Benjamin Recht
|
In this work, we consider how preference models in interactive recommendation systems determine the availability of content and users’ opportunities for discovery. We propose an evaluation procedure based on stochastic reachability to quantify the maximum probability of recommending a target piece of content to a user for a set of allowable strategic modifications. This framework allows us to compute an upper bound on the likelihood of recommendation with minimal assumptions about user behavior. Stochastic reachability can be used to detect biases in the availability of content and diagnose limitations in the opportunities for discovery granted to users. We show that this metric can be computed efficiently as a convex program for a variety of practical settings, and further argue that reachability is not inherently at odds with accuracy. We demonstrate evaluations of recommendation algorithms trained on large datasets of explicit and implicit ratings. Our results illustrate how preference models, selection rules, and user interventions impact reachability and how these effects can be distributed unevenly.
|
https://proceedings.mlr.press/v139/curmei21a.html
|
https://proceedings.mlr.press/v139/curmei21a.html
|
https://proceedings.mlr.press/v139/curmei21a.html
|
http://proceedings.mlr.press/v139/curmei21a/curmei21a.pdf
|
ICML 2021
|
|
Dynamic Balancing for Model Selection in Bandits and RL
|
Ashok Cutkosky, Christoph Dann, Abhimanyu Das, Claudio Gentile, Aldo Pacchiano, Manish Purohit
|
We propose a framework for model selection by combining base algorithms in stochastic bandits and reinforcement learning. We require a candidate regret bound for each base algorithm that may or may not hold. We select base algorithms to play in each round using a “balancing condition” on the candidate regret bounds. Our approach simultaneously recovers previous worst-case regret bounds, while also obtaining much smaller regret in natural scenarios when some base learners significantly exceed their candidate bounds. Our framework is relevant in many settings, including linear bandits and MDPs with nested function classes, linear bandits with unknown misspecification, and tuning confidence parameters of algorithms such as LinUCB. Moreover, unlike recent efforts in model selection for linear stochastic bandits, our approach can be extended to consider adversarial rather than stochastic contexts.
|
https://proceedings.mlr.press/v139/cutkosky21a.html
|
https://proceedings.mlr.press/v139/cutkosky21a.html
|
https://proceedings.mlr.press/v139/cutkosky21a.html
|
http://proceedings.mlr.press/v139/cutkosky21a/cutkosky21a.pdf
|
ICML 2021
|
|
ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases
|
Stéphane D’Ascoli, Hugo Touvron, Matthew L Leavitt, Ari S Morcos, Giulio Biroli, Levent Sagun
|
Convolutional architectures have proven extremely successful for vision tasks. Their hard inductive biases enable sample-efficient learning, but come at the cost of a potentially lower performance ceiling. Vision Transformers (ViTs) rely on more flexible self-attention layers, and have recently outperformed CNNs for image classification. However, they require costly pre-training on large external datasets or distillation from pre-trained convolutional networks. In this paper, we ask the following question: is it possible to combine the strengths of these two architectures while avoiding their respective limitations? To this end, we introduce gated positional self-attention (GPSA), a form of positional self-attention which can be equipped with a “soft” convolutional inductive bias. We initialise the GPSA layers to mimic the locality of convolutional layers, then give each attention head the freedom to escape locality by adjusting a gating parameter regulating the attention paid to position versus content information. The resulting convolutional-like ViT architecture, ConViT, outperforms the DeiT on ImageNet, while offering a much improved sample efficiency. We further investigate the role of locality in learning by first quantifying how it is encouraged in vanilla self-attention layers, then analysing how it is escaped in GPSA layers. We conclude by presenting various ablations to better understand the success of the ConViT. Our code and models are released publicly at https://github.com/facebookresearch/convit.
|
https://proceedings.mlr.press/v139/d-ascoli21a.html
|
https://proceedings.mlr.press/v139/d-ascoli21a.html
|
https://proceedings.mlr.press/v139/d-ascoli21a.html
|
http://proceedings.mlr.press/v139/d-ascoli21a/d-ascoli21a.pdf
|
ICML 2021
|
|
Consistent regression when oblivious outliers overwhelm
|
Tommaso D’Orsi, Gleb Novikov, David Steurer
|
We consider a robust linear regression model $y=X\beta^* + \eta$, where an adversary oblivious to the design $X\in \mathbb{R}^{n\times d}$ may choose $\eta$ to corrupt all but an $\alpha$ fraction of the observations $y$ in an arbitrary way. Prior to our work, even for Gaussian $X$, no estimator for $\beta^*$ was known to be consistent in this model except for quadratic sample size $n \gtrsim (d/\alpha)^2$ or for logarithmic inlier fraction $\alpha\ge 1/\log n$. We show that consistent estimation is possible with nearly linear sample size and inverse-polynomial inlier fraction. Concretely, we show that the Huber loss estimator is consistent for every sample size $n= \omega(d/\alpha^2)$ and achieves an error rate of $O(d/\alpha^2n)^{1/2}$ (both bounds are optimal up to constant factors). Our results extend to designs far beyond the Gaussian case and only require the column span of $X$ to not contain approximately sparse vectors (similar to the kind of assumption commonly made about the kernel space for compressed sensing). We provide two technically similar proofs. One proof is phrased in terms of strong convexity, extending work of [Tsakonas et al. ’14], and particularly short. The other proof highlights a connection between the Huber loss estimator and high-dimensional median computations. In the special case of Gaussian designs, this connection leads us to a strikingly simple algorithm based on computing coordinate-wise medians that achieves nearly optimal guarantees in linear time, and that can exploit sparsity of $\beta^*$. The model studied here also captures heavy-tailed noise distributions that may not even have a first moment.
|
https://proceedings.mlr.press/v139/d-orsi21a.html
|
https://proceedings.mlr.press/v139/d-orsi21a.html
|
https://proceedings.mlr.press/v139/d-orsi21a.html
|
http://proceedings.mlr.press/v139/d-orsi21a/d-orsi21a.pdf
|
ICML 2021
|
|
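A small self-contained simulation in the spirit of the entry above (our own toy setup, not the paper's experiments): a large fraction of responses is obliviously corrupted, and a fixed-threshold Huber fit is compared against ordinary least squares. The dimensions, inlier fraction, threshold, and corruption scale are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, delta=2.0):
    """Huber loss with a fixed transition threshold delta."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

rng = np.random.default_rng(0)
n, d, alpha = 5000, 20, 0.2                       # alpha = inlier fraction
X = rng.standard_normal((n, d))
beta_star = rng.standard_normal(d)
y = X @ beta_star
corrupt = rng.random(n) > alpha                   # oblivious corruptions on ~80% of rows
y[corrupt] += 100.0 * rng.standard_normal(corrupt.sum())

beta_huber = minimize(lambda b: huber(y - X @ b).sum(), np.zeros(d), method="L-BFGS-B").x
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.linalg.norm(beta_huber - beta_star), np.linalg.norm(beta_ols - beta_star))
```

With these settings the Huber estimate typically lands much closer to beta_star than the least-squares fit, illustrating the consistency phenomenon the abstract describes.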
Offline Reinforcement Learning with Pseudometric Learning
|
Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, Matthieu Geist
|
Offline Reinforcement Learning methods seek to learn a policy from logged transitions of an environment, without any interaction. In the presence of function approximation, and under the assumption of limited coverage of the state-action space of the environment, it is necessary to enforce the policy to visit state-action pairs close to the support of logged transitions. In this work, we propose an iterative procedure to learn a pseudometric (closely related to bisimulation metrics) from logged transitions, and use it to define this notion of closeness. We show its convergence and extend it to the function approximation setting. We then use this pseudometric to define a new lookup based bonus in an actor-critic algorithm: PLOFF. This bonus encourages the actor to stay close, in terms of the defined pseudometric, to the support of logged transitions. Finally, we evaluate the method on hand manipulation and locomotion tasks.
|
https://proceedings.mlr.press/v139/dadashi21a.html
|
https://proceedings.mlr.press/v139/dadashi21a.html
|
https://proceedings.mlr.press/v139/dadashi21a.html
|
http://proceedings.mlr.press/v139/dadashi21a/dadashi21a.pdf
|
ICML 2021
|
|
A Tale of Two Efficient and Informative Negative Sampling Distributions
|
Shabnam Daghaghi, Tharun Medini, Nicholas Meisburger, Beidi Chen, Mengnan Zhao, Anshumali Shrivastava
|
Softmax classifiers with a very large number of classes naturally occur in many applications such as natural language processing and information retrieval. The calculation of the full softmax is costly from the computational and energy perspective. There have been various sampling approaches to overcome this challenge, popularly known as negative sampling (NS). Ideally, NS should sample negative classes from a distribution that is dependent on the input data, the current parameters, and the correct positive class. Unfortunately, due to the dynamically updated parameters and data samples, there is no sampling scheme that is provably adaptive and samples the negative classes efficiently. Therefore, alternative heuristics like random sampling, static frequency-based sampling, or learning-based biased sampling, which primarily trade off either the sampling cost or the adaptivity of samples per iteration, are adopted. In this paper, we show two classes of distributions where the sampling scheme is truly adaptive and provably generates negative samples in near-constant time. Our implementation in C++ on CPU is significantly superior, both in terms of wall-clock time and accuracy, compared to the most optimized TensorFlow implementations of other popular negative sampling approaches on a powerful NVIDIA V100 GPU.
|
https://proceedings.mlr.press/v139/daghaghi21a.html
|
https://proceedings.mlr.press/v139/daghaghi21a.html
|
https://proceedings.mlr.press/v139/daghaghi21a.html
|
http://proceedings.mlr.press/v139/daghaghi21a/daghaghi21a.pdf
|
ICML 2021
|
|
SiameseXML: Siamese Networks meet Extreme Classifiers with 100M Labels
|
Kunal Dahiya, Ananye Agarwal, Deepak Saini, Gururaj K, Jian Jiao, Amit Singh, Sumeet Agarwal, Purushottam Kar, Manik Varma
|
Deep extreme multi-label learning (XML) requires training deep architectures that can tag a data point with its most relevant subset of labels from an extremely large label set. XML applications such as ad and product recommendation involve labels rarely seen during training but which nevertheless hold the key to recommendations that delight users. Effective utilization of label metadata and high-quality predictions for rare labels at the scale of millions of labels are thus key challenges in contemporary XML research. To address these, this paper develops the SiameseXML framework based on a novel probabilistic model that naturally motivates a modular approach melding Siamese architectures with high-capacity extreme classifiers, and a training pipeline that effortlessly scales to tasks with 100 million labels. SiameseXML offers predictions 2–13% more accurate than leading XML methods on public benchmark datasets. In live A/B tests on the Bing search engine, it offers significant gains in click-through rates, coverage, revenue and other online metrics over state-of-the-art techniques currently in production. Code for SiameseXML is available at https://github.com/Extreme-classification/siamesexml
|
https://proceedings.mlr.press/v139/dahiya21a.html
|
https://proceedings.mlr.press/v139/dahiya21a.html
|
https://proceedings.mlr.press/v139/dahiya21a.html
|
http://proceedings.mlr.press/v139/dahiya21a/dahiya21a.pdf
|
ICML 2021
|
|
Fixed-Parameter and Approximation Algorithms for PCA with Outliers
|
Yogesh Dahiya, Fedor Fomin, Fahad Panolan, Kirill Simonov
|
PCA with Outliers is the fundamental problem of identifying an underlying low-dimensional subspace in a data set corrupted with outliers. A large body of work is devoted to the information-theoretic aspects of this problem. However, from the computational perspective, its complexity is still not well-understood. We study this problem from the perspective of parameterized complexity by investigating how parameters like the dimension of the data, the subspace dimension, the number of outliers and their structure, and approximation error, influence the computational complexity of the problem. Our algorithmic methods are based on techniques of randomized linear algebra and algebraic geometry.
|
https://proceedings.mlr.press/v139/dahiya21b.html
|
https://proceedings.mlr.press/v139/dahiya21b.html
|
https://proceedings.mlr.press/v139/dahiya21b.html
|
http://proceedings.mlr.press/v139/dahiya21b/dahiya21b.pdf
|
ICML 2021
|
|
Sliced Iterative Normalizing Flows
|
Biwei Dai, Uros Seljak
|
We develop an iterative (greedy) deep learning (DL) algorithm which is able to transform an arbitrary probability distribution function (PDF) into the target PDF. The model is based on iterative Optimal Transport of a series of 1D slices, matching on each slice the marginal PDF to the target. The axes of the orthogonal slices are chosen to maximize the PDF difference using the Wasserstein distance at each iteration, which enables the algorithm to scale well to high dimensions. As special cases of this algorithm, we introduce two Sliced Iterative Normalizing Flow (SINF) models, which map from the data to the latent space (GIS) and vice versa (SIG). We show that SIG is able to generate high-quality samples of image datasets, which match the GAN benchmarks, while GIS obtains competitive results on density estimation tasks compared to NFs trained for density estimation, and is more stable, faster, and achieves higher p(x) when trained on small training sets. The SINF approach deviates significantly from the current DL paradigm, as it is greedy and does not use concepts such as mini-batching, stochastic gradient descent, and gradient back-propagation through deep layers.
|
https://proceedings.mlr.press/v139/dai21a.html
|
https://proceedings.mlr.press/v139/dai21a.html
|
https://proceedings.mlr.press/v139/dai21a.html
|
http://proceedings.mlr.press/v139/dai21a/dai21a.pdf
|
ICML 2021
|
|
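A toy sketch of a single sliced, quantile-matching iteration loosely in the spirit of the entry above; for simplicity it uses one random slice direction rather than the Wasserstein-maximizing axes the paper selects, so it illustrates the mechanism, not the SIG/GIS algorithms themselves.

```python
import numpy as np
from scipy.stats import norm

def sliced_gaussianization_step(X, seed=0):
    """Pick a direction, then replace the data's 1-D marginal along it with a standard
    normal via quantile (optimal-transport) matching; other directions are untouched."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    t = X @ w                                  # 1-D projection of every sample
    ranks = t.argsort().argsort()
    u = (ranks + 0.5) / len(t)                 # empirical CDF values in (0, 1)
    t_new = norm.ppf(u)                        # target N(0, 1) quantiles
    return X + np.outer(t_new - t, w)          # move samples only along w

X = np.random.default_rng(1).exponential(size=(2000, 3))
X_next = sliced_gaussianization_step(X)        # repeat over many slices to Gaussianize
```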
Convex Regularization in Monte-Carlo Tree Search
|
Tuan Q Dam, Carlo D’Eramo, Jan Peters, Joni Pajarinen
|
Monte-Carlo planning and Reinforcement Learning (RL) are essential to sequential decision making. The recent AlphaGo and AlphaZero algorithms have shown how to successfully combine these two paradigms to solve large-scale sequential decision problems. These methodologies exploit a variant of the well-known UCT algorithm to trade off the exploitation of good actions and the exploration of unvisited states, but their empirical success comes at the cost of poor sample-efficiency and high computation time. In this paper, we overcome these limitations by introducing the use of convex regularization in Monte-Carlo Tree Search (MCTS) to drive exploration efficiently and to improve policy updates. First, we introduce a unifying theory on the use of generic convex regularizers in MCTS, deriving the first regret analysis of regularized MCTS and showing that it guarantees an exponential convergence rate. Second, we exploit our theoretical framework to introduce novel regularized backup operators for MCTS, based on the relative entropy of the policy update and, more importantly, on the Tsallis entropy of the policy, for which we prove superior theoretical guarantees. We empirically verify the consequence of our theoretical results on a toy problem. Finally, we show how our framework can easily be incorporated in AlphaGo and we empirically show the superiority of convex regularization, w.r.t. representative baselines, on well-known RL problems across several Atari games.
|
https://proceedings.mlr.press/v139/dam21a.html
|
https://proceedings.mlr.press/v139/dam21a.html
|
https://proceedings.mlr.press/v139/dam21a.html
|
http://proceedings.mlr.press/v139/dam21a/dam21a.pdf
|
ICML 2021
|
|
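As a concrete illustration of the convex-regularization idea in the entry above (and only that; the paper's regularized backup operators, including the Tsallis-entropy variant, differ in detail), here is the maximum-entropy case, where the hard max backup becomes a log-sum-exp value with a softmax policy.

```python
import numpy as np

def maxent_value_backup(q_values, tau=1.0):
    """Log-sum-exp value backup: V = tau * log(sum_a exp(Q_a / tau)).
    tau controls how strongly the entropy regularizer smooths the max."""
    q = np.asarray(q_values, dtype=float)
    m = q.max()
    return m + tau * np.log(np.exp((q - m) / tau).sum())

def maxent_policy(q_values, tau=1.0):
    """Softmax policy associated with the regularized backup."""
    q = np.asarray(q_values, dtype=float) / tau
    p = np.exp(q - q.max())
    return p / p.sum()

print(maxent_value_backup([1.0, 2.0, 0.5], tau=0.5))
print(maxent_policy([1.0, 2.0, 0.5], tau=0.5))
```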
Demonstration-Conditioned Reinforcement Learning for Few-Shot Imitation
|
Christopher R. Dance, Julien Perez, Théo Cachet
|
In few-shot imitation, an agent is given a few demonstrations of a previously unseen task, and must then successfully perform that task. We propose a novel approach to learning few-shot-imitation agents that we call demonstration-conditioned reinforcement learning (DCRL). Given a training set consisting of demonstrations, reward functions and transition distributions for multiple tasks, the idea is to work with a policy that takes demonstrations as input, and to train this policy to maximize the average of the cumulative reward over the set of training tasks. Relative to previously proposed few-shot imitation methods that use behaviour cloning or infer reward functions from demonstrations, our method has the disadvantage that it requires reward functions at training time. However, DCRL also has several advantages, such as the ability to improve upon suboptimal demonstrations, to operate given state-only demonstrations, and to cope with a domain shift between the demonstrator and the agent. Moreover, we show that DCRL outperforms methods based on behaviour cloning by a large margin, on navigation tasks and on robotic manipulation tasks from the Meta-World benchmark.
|
https://proceedings.mlr.press/v139/dance21a.html
|
https://proceedings.mlr.press/v139/dance21a.html
|
https://proceedings.mlr.press/v139/dance21a.html
|
http://proceedings.mlr.press/v139/dance21a/dance21a.pdf
|
ICML 2021
|
|
Re-understanding Finite-State Representations of Recurrent Policy Networks
|
Mohamad H Danesh, Anurag Koul, Alan Fern, Saeed Khorram
|
We introduce an approach for understanding control policies represented as recurrent neural networks. Recent work has approached this problem by transforming such recurrent policy networks into finite-state machines (FSM) and then analyzing the equivalent minimized FSM. While this led to interesting insights, the minimization process can obscure a deeper understanding of a machine’s operation by merging states that are semantically distinct. To address this issue, we introduce an analysis approach that starts with an unminimized FSM and applies more-interpretable reductions that preserve the key decision points of the policy. We also contribute an attention tool to attain a deeper understanding of the role of observations in the decisions. Our case studies on 7 Atari games and 3 control benchmarks demonstrate that the approach can reveal insights that have not been previously noticed.
|
https://proceedings.mlr.press/v139/danesh21a.html
|
https://proceedings.mlr.press/v139/danesh21a.html
|
https://proceedings.mlr.press/v139/danesh21a.html
|
http://proceedings.mlr.press/v139/danesh21a/danesh21a.pdf
|
ICML 2021
|
|
Newton Method over Networks is Fast up to the Statistical Precision
|
Amir Daneshmand, Gesualdo Scutari, Pavel Dvurechensky, Alexander Gasnikov
|
We propose a distributed cubic regularization of the Newton method for solving (constrained) empirical risk minimization problems over a network of agents, modeled as an undirected graph. The algorithm employs an inexact, preconditioned Newton step at each agent’s side: the gradient of the centralized loss is iteratively estimated via a gradient-tracking consensus mechanism and the Hessian is subsampled over the local data sets. No Hessian matrices are exchanged over the network. We derive global complexity bounds for convex and strongly convex losses. Our analysis reveals an interesting interplay between sample and iteration/communication complexity: statistically accurate solutions are achievable in roughly the same number of iterations of the centralized cubic Newton method, with a communication cost per iteration of the order of $\widetilde{\mathcal{O}}\big(1/\sqrt{1-\rho}\big)$, where $\rho$ characterizes the connectivity of the network. This represents a significant improvement with respect to existing, statistically oblivious, distributed Newton-based methods over networks.
|
https://proceedings.mlr.press/v139/daneshmand21a.html
|
https://proceedings.mlr.press/v139/daneshmand21a.html
|
https://proceedings.mlr.press/v139/daneshmand21a.html
|
http://proceedings.mlr.press/v139/daneshmand21a/daneshmand21a.pdf
|
ICML 2021
|
|
BasisDeVAE: Interpretable Simultaneous Dimensionality Reduction and Feature-Level Clustering with Derivative-Based Variational Autoencoders
|
Dominic Danks, Christopher Yau
|
The Variational Autoencoder (VAE) performs effective nonlinear dimensionality reduction in a variety of problem settings. However, the black-box neural network decoder function typically employed limits the ability of the decoder function to be constrained and interpreted, making the use of VAEs problematic in settings where prior knowledge should be embedded within the decoder. We present DeVAE, a novel VAE-based model with a derivative-based forward mapping, allowing for greater control over decoder behaviour via specification of the decoder function in derivative space. Additionally, we show how DeVAE can be paired with a sparse clustering prior to create BasisDeVAE and perform interpretable simultaneous dimensionality reduction and feature-level clustering. We demonstrate the performance and scalability of the DeVAE and BasisDeVAE models on synthetic and real-world data and present how the derivative-based approach allows for expressive yet interpretable forward models which respect prior knowledge.
|
https://proceedings.mlr.press/v139/danks21a.html
|
https://proceedings.mlr.press/v139/danks21a.html
|
https://proceedings.mlr.press/v139/danks21a.html
|
http://proceedings.mlr.press/v139/danks21a/danks21a.pdf
|
ICML 2021
|
|
Intermediate Layer Optimization for Inverse Problems using Deep Generative Models
|
Giannis Daras, Joseph Dean, Ajil Jalal, Alex Dimakis
|
We propose Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models. Instead of optimizing only over the initial latent code, we progressively change the input layer obtaining successively more expressive generators. To explore the higher dimensional spaces, our method searches for latent codes that lie within a small l1 ball around the manifold induced by the previous layer. Our theoretical analysis shows that by keeping the radius of the ball relatively small, we can improve the established error bound for compressed sensing with deep generative models. We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN2 and PULSE for a wide range of inverse problems including inpainting, denoising, super-resolution and compressed sensing.
|
https://proceedings.mlr.press/v139/daras21a.html
|
https://proceedings.mlr.press/v139/daras21a.html
|
https://proceedings.mlr.press/v139/daras21a.html
|
http://proceedings.mlr.press/v139/daras21a/daras21a.pdf
|
ICML 2021
|
|
Measuring Robustness in Deep Learning Based Compressive Sensing
|
Mohammad Zalbagi Darestani, Akshay S Chaudhari, Reinhard Heckel
|
Deep neural networks give state-of-the-art accuracy for reconstructing images from few and noisy measurements, a problem arising for example in accelerated magnetic resonance imaging (MRI). However, recent works have raised concerns that deep-learning-based image reconstruction methods are sensitive to perturbations and are less robust than traditional methods: Neural networks (i) may be sensitive to small, yet adversarially-selected perturbations, (ii) may perform poorly under distribution shifts, and (iii) may fail to recover small but important features in an image. In order to understand the sensitivity to such perturbations, in this work, we measure the robustness of different approaches for image reconstruction, including trained and un-trained neural networks as well as traditional sparsity-based methods. We find, contrary to prior works, that both trained and un-trained methods are vulnerable to adversarial perturbations. Moreover, both trained and un-trained methods tuned for a particular dataset suffer very similarly from distribution shifts. Finally, we demonstrate that an image reconstruction method that achieves higher reconstruction quality also performs better in terms of accurately recovering fine details. Our results indicate that state-of-the-art deep-learning-based image reconstruction methods provide improved performance over traditional methods without compromising robustness.
|
https://proceedings.mlr.press/v139/darestani21a.html
|
https://proceedings.mlr.press/v139/darestani21a.html
|
https://proceedings.mlr.press/v139/darestani21a.html
|
http://proceedings.mlr.press/v139/darestani21a/darestani21a.pdf
|
ICML 2021
|
|
SAINT-ACC: Safety-Aware Intelligent Adaptive Cruise Control for Autonomous Vehicles Using Deep Reinforcement Learning
|
Lokesh Chandra Das, Myounggyu Won
|
We present SAINT-ACC (Safety-Aware Intelligent Adaptive Cruise Control), a novel ACC system designed to achieve simultaneous optimization of traffic efficiency, driving safety, and driving comfort through dynamic adaptation of the inter-vehicle gap based on deep reinforcement learning (RL). A novel dual-RL-agent approach is developed to seek and adapt the optimal balance between traffic efficiency and driving safety/comfort by effectively controlling the driving safety model parameters and the inter-vehicle gap based on macroscopic and microscopic traffic information collected from dynamically changing and complex traffic environments. Results obtained through over 12,000 simulation runs with varying traffic scenarios and penetration rates demonstrate that SAINT-ACC significantly enhances traffic flow, driving safety, and comfort compared with a state-of-the-art approach.
|
https://proceedings.mlr.press/v139/das21a.html
|
https://proceedings.mlr.press/v139/das21a.html
|
https://proceedings.mlr.press/v139/das21a.html
|
http://proceedings.mlr.press/v139/das21a/das21a.pdf
|
ICML 2021
|
|
Lipschitz normalization for self-attention layers with application to graph neural networks
|
George Dasoulas, Kevin Scaman, Aladin Virmaux
|
Attention based neural networks are state of the art in a large range of applications. However, their performance tends to degrade when the number of layers increases. In this work, we show that enforcing Lipschitz continuity by normalizing the attention scores can significantly improve the performance of deep attention models. First, we show that, for deep graph attention networks (GAT), gradient explosion appears during training, leading to poor performance of gradient-based training algorithms. To address this issue, we derive a theoretical analysis of the Lipschitz continuity of attention modules and introduce LipschitzNorm, a simple and parameter-free normalization for self-attention mechanisms that enforces the model to be Lipschitz continuous. We then apply LipschitzNorm to GAT and Graph Transformers and show that their performance is substantially improved in the deep setting (10 to 30 layers). More specifically, we show that a deep GAT model with LipschitzNorm achieves state of the art results for node label prediction tasks that exhibit long-range dependencies, while showing consistent improvements over their unnormalized counterparts in benchmark node classification tasks.
|
https://proceedings.mlr.press/v139/dasoulas21a.html
|
https://proceedings.mlr.press/v139/dasoulas21a.html
|
https://proceedings.mlr.press/v139/dasoulas21a.html
|
http://proceedings.mlr.press/v139/dasoulas21a/dasoulas21a.pdf
|
ICML 2021
|
|
Householder Sketch for Accurate and Accelerated Least-Mean-Squares Solvers
|
Jyotikrishna Dass, Rabi Mahapatra
|
Least-Mean-Squares (LMS) solvers comprise a class of fundamental optimization problems such as linear regression, and regularized regressions such as Ridge, LASSO, and Elastic-Net. Data summarization techniques for big data generate summaries called coresets and sketches to speed up model learning under streaming and distributed settings. For example, a recent NeurIPS 2019 work designs a fast and accurate Caratheodory set on the input data to boost the performance of existing LMS solvers. In retrospect, we explore the classical Householder transformation as a candidate for sketching and accurately solving LMS problems. We find it to be a simpler, memory-efficient, and faster alternative to the above strong baseline, one that has always been available. We also present a scalable algorithm based on the construction of distributed Householder sketches to solve the LMS problem across multiple worker nodes. We perform a thorough empirical analysis with large synthetic and real datasets to evaluate the performance of the Householder sketch and compare it with the NeurIPS 2019 baseline. Our results show that the Householder sketch speeds up existing LMS solvers in the scikit-learn library by up to 100x-400x. It is also 10x-100x faster than the above baseline, with similar numerical stability. The distributed algorithm demonstrates linear scalability with a near-negligible communication overhead.
|
https://proceedings.mlr.press/v139/dass21a.html
|
https://proceedings.mlr.press/v139/dass21a.html
|
https://proceedings.mlr.press/v139/dass21a.html
|
http://proceedings.mlr.press/v139/dass21a/dass21a.pdf
|
ICML 2021
|
|
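A single-machine sketch of the idea in the entry above, under our own simplifications: compress $(X, y)$ into the small summary $(R, Q^T y)$ via a Householder-based QR factorization, then solve LMS problems (ordinary least squares and ridge) directly from that summary. Sizes and the regularization strength are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
n, d = 100_000, 50
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Thin QR of X (LAPACK computes it with Householder reflections): X = Q R.
Q, R = np.linalg.qr(X, mode="reduced")
z = Q.T @ y                                   # (R, z) is a d x (d + 1) summary of (X, y)

beta_ols = solve_triangular(R, z)             # ordinary least squares from the summary
lam = 0.1
beta_ridge = np.linalg.solve(R.T @ R + lam * np.eye(d), R.T @ z)  # ridge reusing the same summary

print(np.allclose(beta_ols, np.linalg.lstsq(X, y, rcond=None)[0]))
```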
Byzantine-Resilient High-Dimensional SGD with Local Iterations on Heterogeneous Data
|
Deepesh Data, Suhas Diggavi
|
We study stochastic gradient descent (SGD) with local iterations in the presence of Byzantine clients, motivated by the federated learning. The clients, instead of communicating with the server in every iteration, maintain their local models, which they update by taking several SGD iterations based on their own datasets and then communicate the net update with the server, thereby achieving communication-efficiency. Furthermore, only a subset of clients communicates with the server at synchronization times. The Byzantine clients may collude and send arbitrary vectors to the server to disrupt the learning process. To combat the adversary, we employ an efficient high-dimensional robust mean estimation algorithm at the server to filter-out corrupt vectors; and to analyze the outlier-filtering procedure, we develop a novel matrix concentration result that may be of independent interest. We provide convergence analyses for both strongly-convex and non-convex smooth objectives in the heterogeneous data setting. We believe that ours is the first Byzantine-resilient local SGD algorithm and analysis with non-trivial guarantees. We corroborate our theoretical results with preliminary experiments for neural network training.
|
https://proceedings.mlr.press/v139/data21a.html
|
https://proceedings.mlr.press/v139/data21a.html
|
https://proceedings.mlr.press/v139/data21a.html
|
http://proceedings.mlr.press/v139/data21a/data21a.pdf
|
ICML 2021
|
|
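For intuition only, the snippet below uses a coordinate-wise trimmed mean, a much simpler robust aggregation rule than the high-dimensional filtering estimator analyzed in the entry above; it conveys the basic point that the server must not naively average client updates when some clients are Byzantine.

```python
import numpy as np

def trimmed_mean_aggregate(client_updates, trim_frac=0.2):
    """Drop the largest and smallest trim_frac of values in every coordinate,
    then average what remains; bounds the influence of arbitrary (Byzantine) vectors."""
    U = np.asarray(client_updates, dtype=float)        # shape: (num_clients, dim)
    k = int(trim_frac * U.shape[0])
    S = np.sort(U, axis=0)
    kept = S[k:U.shape[0] - k] if k > 0 else S
    return kept.mean(axis=0)

honest = np.random.default_rng(0).normal(0.0, 0.1, size=(8, 4))
byzantine = np.full((2, 4), 1e6)                        # colluding clients send garbage
updates = np.vstack([honest, byzantine])
print(trimmed_mean_aggregate(updates, trim_frac=0.2))   # stays near the honest mean
```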
Catformer: Designing Stable Transformers via Sensitivity Analysis
|
Jared Q Davis, Albert Gu, Krzysztof Choromanski, Tri Dao, Christopher Re, Chelsea Finn, Percy Liang
|
Transformer architectures are widely used, but training them is non-trivial, requiring custom learning rate schedules, scaling terms, residual connections, careful placement of submodules such as normalization, and so on. In this paper, we improve upon recent analysis of Transformers and formalize a notion of sensitivity to capture the difficulty of training. Sensitivity characterizes how the variance of activation and gradient norms changes in expectation when parameters are randomly perturbed. We analyze the sensitivity of previous Transformer architectures and design a new architecture, the Catformer, which replaces residual connections or RNN-based gating mechanisms with concatenation. We prove that Catformers are less sensitive than other Transformer variants and demonstrate that this leads to more stable training. On DMLab30, a suite of high-dimensional reinforcement learning tasks, Catformer outperforms other transformers, including Gated Transformer-XL—the state-of-the-art architecture designed to address stability—by 13%.
|
https://proceedings.mlr.press/v139/davis21a.html
|
https://proceedings.mlr.press/v139/davis21a.html
|
https://proceedings.mlr.press/v139/davis21a.html
|
http://proceedings.mlr.press/v139/davis21a/davis21a.pdf
|
ICML 2021
|
|
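A toy PyTorch block showing the single design change highlighted in the entry above, concatenation in place of a residual connection; the widths, normalization placement, and attention settings are our own choices rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CatBlock(nn.Module):
    """Attention sub-block whose output is concatenated with its input
    (so the feature width grows) instead of being added as a residual."""
    def __init__(self, d_in, d_out, n_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(d_in)
        self.attn = nn.MultiheadAttention(d_in, n_heads, batch_first=True)
        self.proj = nn.Linear(d_in, d_out)

    def forward(self, x):                       # x: (batch, seq, d_in)
        h = self.norm(x)
        h, _ = self.attn(h, h, h)
        h = self.proj(h)
        return torch.cat([x, h], dim=-1)        # width becomes d_in + d_out

block = CatBlock(d_in=64, d_out=32)
out = block(torch.randn(2, 10, 64))
print(out.shape)                                # torch.Size([2, 10, 96])
```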
Diffusion Source Identification on Networks with Statistical Confidence
|
Quinlan E Dawkins, Tianxi Li, Haifeng Xu
|
Diffusion source identification on networks is a problem of fundamental importance in a broad class of applications, including controlling the spread of rumors on social media, identifying a computer virus over cyber networks, or identifying the center of a disease outbreak in epidemiology. Though this problem has received significant recent attention, most known approaches are well-studied in only very restrictive settings and lack theoretical guarantees for more realistic networks. We introduce a statistical framework for the study of this problem and develop a confidence set inference approach inspired by hypothesis testing. Our method efficiently produces a small subset of nodes, which provably covers the source node with any pre-specified confidence level without restrictive assumptions on network structures. To our knowledge, this is the first diffusion source identification method with a practically useful theoretical guarantee on general networks. We demonstrate our approach via extensive synthetic experiments on well-known random network models, a large data set of real-world networks, as well as a mobility network between cities concerning the spread of COVID-19 in January 2020.
|
https://proceedings.mlr.press/v139/dawkins21a.html
|
https://proceedings.mlr.press/v139/dawkins21a.html
|
https://proceedings.mlr.press/v139/dawkins21a.html
|
http://proceedings.mlr.press/v139/dawkins21a/dawkins21a.pdf
|
ICML 2021
|
|
Bayesian Deep Learning via Subnetwork Inference
|
Erik Daxberger, Eric Nalisnick, James U Allingham, Javier Antoran, Jose Miguel Hernandez-Lobato
|
The Bayesian paradigm has the potential to solve core issues of deep neural networks such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors. The other weights are kept as point estimates. This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets. In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation. We propose a subnetwork selection strategy that aims to maximally preserve the model’s predictive uncertainty. Empirically, our approach compares favorably to ensembles and less expressive posterior approximations over full networks.
|
https://proceedings.mlr.press/v139/daxberger21a.html
|
https://proceedings.mlr.press/v139/daxberger21a.html
|
https://proceedings.mlr.press/v139/daxberger21a.html
|
http://proceedings.mlr.press/v139/daxberger21a/daxberger21a.pdf
|
ICML 2021
|
|
Adversarial Robustness Guarantees for Random Deep Neural Networks
|
Giacomo De Palma, Bobak Kiani, Seth Lloyd
|
The reliability of deep learning algorithms is fundamentally challenged by the existence of adversarial examples, which are incorrectly classified inputs that are extremely close to a correctly classified input. We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that for any $p \geq 1$, the $\ell^p$ distance of any given input from the classification boundary scales as the $\ell^p$ norm of the input divided by the square root of the input dimension. The results are based on the recently proved equivalence between Gaussian processes and deep neural networks in the limit of infinite width of the hidden layers, and are validated with experiments on both random deep neural networks and deep neural networks trained on the MNIST and CIFAR10 datasets. The results constitute a fundamental advance in the theoretical understanding of adversarial examples, and open the way to a thorough theoretical characterization of the relation between network architecture and robustness to adversarial perturbations.
|
https://proceedings.mlr.press/v139/de-palma21a.html
|
https://proceedings.mlr.press/v139/de-palma21a.html
|
https://proceedings.mlr.press/v139/de-palma21a.html
|
http://proceedings.mlr.press/v139/de-palma21a/de-palma21a.pdf
|
ICML 2021
|
|
High-Dimensional Gaussian Process Inference with Derivatives
|
Filip de Roos, Alexandra Gessner, Philipp Hennig
|
Although it is widely known that Gaussian processes can be conditioned on observations of the gradient, this functionality is of limited use due to the prohibitive computational cost of $\mathcal{O}(N^3 D^3)$ in data points $N$ and dimension $D$. The dilemma of gradient observations is that a single one of them comes at the same cost as $D$ independent function evaluations, so the latter are often preferred. Careful scrutiny reveals, however, that derivative observations give rise to highly structured kernel Gram matrices for very general classes of kernels (inter alia, stationary kernels). We show that in the \emph{low-data} regime $N < D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2D + (N^2)^3)$ (i.e., linear in the number of dimensions) and, in special cases, to $\mathcal{O}(N^2D + N^3)$. This reduction in complexity opens up new use-cases for inference with gradients especially in the high-dimensional regime, where the information-to-cost ratio of gradient observations significantly increases. We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.
|
https://proceedings.mlr.press/v139/de-roos21a.html
|
https://proceedings.mlr.press/v139/de-roos21a.html
|
https://proceedings.mlr.press/v139/de-roos21a.html
|
http://proceedings.mlr.press/v139/de-roos21a/de-roos21a.pdf
|
ICML 2021
|
|
Transfer-Based Semantic Anomaly Detection
|
Lucas Deecke, Lukas Ruff, Robert A. Vandermeulen, Hakan Bilen
|
Detecting semantic anomalies is challenging due to the countless ways in which they may appear in real-world data. While enhancing the robustness of networks may be sufficient for modeling simplistic anomalies, there is no good known way of preparing models for all the potential, unseen anomalies that may occur, such as the appearance of new object classes. In this paper, we show that a previously overlooked strategy for anomaly detection (AD) is to introduce an explicit inductive bias toward representations transferred over from some large and varied semantic task. We rigorously verify our hypothesis in controlled trials that utilize intervention, and show that it gives rise to surprisingly effective auxiliary objectives that outperform previous AD paradigms.
|
https://proceedings.mlr.press/v139/deecke21a.html
|
https://proceedings.mlr.press/v139/deecke21a.html
|
https://proceedings.mlr.press/v139/deecke21a.html
|
http://proceedings.mlr.press/v139/deecke21a/deecke21a.pdf
|
ICML 2021
|
|
Grid-Functioned Neural Networks
|
Javier Dehesa, Andrew Vidler, Julian Padget, Christof Lutteroth
|
We introduce a new neural network architecture that we call "grid-functioned" neural networks. It utilises a grid structure of network parameterisations that can be specialised for different subdomains of the problem, while maintaining smooth, continuous behaviour. The grid gives the user flexibility to prevent gross features from overshadowing important minor ones. We present a full characterisation of its computational and spatial complexity, and demonstrate its potential, compared to a traditional architecture, over a set of synthetic regression problems. We further illustrate the benefits through a real-world 3D skeletal animation case study, where it offers the same visual quality as a state-of-the-art model, but with lower computational complexity and better control accuracy.
|
https://proceedings.mlr.press/v139/dehesa21a.html
|
https://proceedings.mlr.press/v139/dehesa21a.html
|
https://proceedings.mlr.press/v139/dehesa21a.html
|
http://proceedings.mlr.press/v139/dehesa21a/dehesa21a.pdf
|
ICML 2021
|
|
Multidimensional Scaling: Approximation and Complexity
|
Erik Demaine, Adam Hesterberg, Frederic Koehler, Jayson Lynch, John Urschel
|
Metric Multidimensional scaling (MDS) is a classical method for generating meaningful (non-linear) low-dimensional embeddings of high-dimensional data. MDS has a long history in the statistics, machine learning, and graph drawing communities. In particular, the Kamada-Kawai force-directed graph drawing method is equivalent to MDS and is one of the most popular ways in practice to embed graphs into low dimensions. Despite its ubiquity, our theoretical understanding of MDS remains limited as its objective function is highly non-convex. In this paper, we prove that minimizing the Kamada-Kawai objective is NP-hard and give a provable approximation algorithm for optimizing it, which in particular is a PTAS on low-diameter graphs. We supplement this result with experiments suggesting possible connections between our greedy approximation algorithm and gradient-based methods.
|
https://proceedings.mlr.press/v139/demaine21a.html
|
https://proceedings.mlr.press/v139/demaine21a.html
|
https://proceedings.mlr.press/v139/demaine21a.html
|
http://proceedings.mlr.press/v139/demaine21a/demaine21a.pdf
|
ICML 2021
|
|
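For reference, the (non-convex) Kamada-Kawai stress objective discussed in the entry above, minimized here on a toy 4-cycle with an off-the-shelf gradient-based optimizer; this is the kind of local search the paper contrasts with its provable approximation algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def kamada_kawai_stress(flat_pos, D, dim=2):
    """Sum over pairs of ((|x_i - x_j| - d_ij) / d_ij)^2 for an embedding
    given as a flattened (n, dim) array and a target distance matrix D."""
    X = flat_pos.reshape(-1, dim)
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    iu = np.triu_indices(len(X), k=1)
    return (((dist[iu] - D[iu]) / D[iu]) ** 2).sum()

# Graph distances of a 4-cycle (nodes 0-1-2-3-0).
D = np.array([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]], dtype=float)
x0 = np.random.default_rng(0).standard_normal(8)
res = minimize(kamada_kawai_stress, x0, args=(D,), method="L-BFGS-B")
print(res.fun)     # stress of the local optimum reached by gradient-based search
```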
What Does Rotation Prediction Tell Us about Classifier Accuracy under Varying Testing Environments?
|
Weijian Deng, Stephen Gould, Liang Zheng
|
Understanding classifier decision under novel environments is central to the community, and a common practice is evaluating it on labeled test sets. However, in real-world testing, image annotations are difficult and expensive to obtain, especially when the test environment is changing. A natural question then arises: given a trained classifier, can we evaluate its accuracy on varying unlabeled test sets? In this work, we train semantic classification and rotation prediction in a multi-task way. On a series of datasets, we report an interesting finding, i.e., the semantic classification accuracy exhibits a strong linear relationship with the accuracy of the rotation prediction task (Pearson’s Correlation r > 0.88). This finding allows us to utilize linear regression to estimate classifier performance from the accuracy of rotation prediction which can be obtained on the test set through the freely generated rotation labels.
|
https://proceedings.mlr.press/v139/deng21a.html
|
https://proceedings.mlr.press/v139/deng21a.html
|
https://proceedings.mlr.press/v139/deng21a.html
|
http://proceedings.mlr.press/v139/deng21a/deng21a.pdf
|
ICML 2021
|
|
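A minimal sketch of how the linear relationship reported in the entry above can be used in practice; all accuracy numbers here are hypothetical placeholders, and the regression would really be fit on accuracies measured over many labelled meta test sets.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical (rotation accuracy, semantic accuracy) pairs measured on labelled test sets.
rotation_acc = np.array([[0.62], [0.70], [0.75], [0.81], [0.88]])
semantic_acc = np.array([0.48, 0.57, 0.63, 0.71, 0.80])

reg = LinearRegression().fit(rotation_acc, semantic_acc)

# On a new, unlabelled test set we can still measure rotation accuracy, because
# rotation labels are generated for free; the fit then predicts classifier accuracy.
new_rotation_acc = np.array([[0.78]])
print(reg.predict(new_rotation_acc))
```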
Toward Better Generalization Bounds with Locally Elastic Stability
|
Zhun Deng, Hangfeng He, Weijie Su
|
Algorithmic stability is a key characteristic to ensure the generalization ability of a learning algorithm. Among different notions of stability, \emph{uniform stability} is arguably the most popular one, which yields exponential generalization bounds. However, uniform stability only considers the worst-case loss change (or so-called sensitivity) by removing a single data point, which is distribution-independent and therefore undesirable. There are many cases that the worst-case sensitivity of the loss is much larger than the average sensitivity taken over the single data point that is removed, especially in some advanced models such as random feature models or neural networks. Many previous works try to mitigate the distribution independent issue by proposing weaker notions of stability, however, they either only yield polynomial bounds or the bounds derived do not vanish as sample size goes to infinity. Given that, we propose \emph{locally elastic stability} as a weaker and distribution-dependent stability notion, which still yields exponential generalization bounds. We further demonstrate that locally elastic stability implies tighter generalization bounds than those derived based on uniform stability in many situations by revisiting the examples of bounded support vector machines, regularized least square regressions, and stochastic gradient descent.
|
https://proceedings.mlr.press/v139/deng21b.html
|
https://proceedings.mlr.press/v139/deng21b.html
|
https://proceedings.mlr.press/v139/deng21b.html
|
http://proceedings.mlr.press/v139/deng21b/deng21b.pdf
|
ICML 2021
|
|
Revenue-Incentive Tradeoffs in Dynamic Reserve Pricing
|
Yuan Deng, Sebastien Lahaie, Vahab Mirrokni, Song Zuo
|
Online advertisements are primarily sold via repeated auctions with reserve prices. In this paper, we study how to set reserves to boost revenue based on the historical bids of strategic buyers, while controlling the impact of such a policy on the incentive compatibility of the repeated auctions. Adopting an incentive compatibility metric which quantifies the incentives to shade bids, we propose a novel class of reserve pricing policies and provide analytical tradeoffs between their revenue performance and bid-shading incentives. The policies are inspired by the exponential mechanism from the literature on differential privacy, but our study uncovers mechanisms with significantly better revenue-incentive tradeoffs than the exponential mechanism in practice. We further empirically evaluate the tradeoffs on synthetic data as well as real ad auction data from a major ad exchange to verify and support our theoretical findings.
|
https://proceedings.mlr.press/v139/deng21c.html
|
https://proceedings.mlr.press/v139/deng21c.html
|
https://proceedings.mlr.press/v139/deng21c.html
|
http://proceedings.mlr.press/v139/deng21c/deng21c.pdf
|
ICML 2021
|
|
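For context, a sketch of the classical exponential mechanism that the entry above takes as its starting point (not the paper's improved policies): a reserve is sampled with probability proportional to the exponentiated revenue score, so shading a bid only slightly shifts the distribution of future reserves. The revenue proxy, epsilon, sensitivity handling, and candidate grid are all illustrative assumptions.

```python
import numpy as np

def exponential_mechanism_reserve(bids, candidates, eps=1.0, seed=0):
    bids = np.asarray(bids, dtype=float)
    # Toy revenue score for a reserve r: r times the number of historical bids clearing it.
    revenue = np.array([r * (bids >= r).sum() for r in candidates])
    scores = eps * revenue / 2.0               # assumes unit sensitivity of the score
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return np.random.default_rng(seed).choice(candidates, p=probs)

historical_bids = [1.0, 2.5, 3.0, 4.2]
print(exponential_mechanism_reserve(historical_bids, candidates=[1.0, 2.0, 3.0, 4.0]))
```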
Heterogeneity for the Win: One-Shot Federated Clustering
|
Don Kurian Dennis, Tian Li, Virginia Smith
|
In this work, we explore the unique challenges—and opportunities—of unsupervised federated learning (FL). We develop and analyze a one-shot federated clustering scheme, kfed, based on the widely-used Lloyd’s method for $k$-means clustering. In contrast to many supervised problems, we show that the issue of statistical heterogeneity in federated networks can in fact benefit our analysis. We analyse kfed under a center separation assumption and compare it to the best known requirements of its centralized counterpart. Our analysis shows that in heterogeneous regimes where the number of clusters per device $(k’)$ is smaller than the total number of clusters over the network $k$, $(k’\le \sqrt{k})$, we can use heterogeneity to our advantage—significantly weakening the cluster separation requirements for kfed. From a practical viewpoint, kfed also has many desirable properties: it requires only one round of communication, can run asynchronously, and can handle partial participation or node/network failures. We motivate our analysis with experiments on common FL benchmarks, and highlight the practical utility of one-shot clustering through use-cases in personalized FL and device sampling.
|
https://proceedings.mlr.press/v139/dennis21a.html
|
https://proceedings.mlr.press/v139/dennis21a.html
|
https://proceedings.mlr.press/v139/dennis21a.html
|
http://proceedings.mlr.press/v139/dennis21a/dennis21a.pdf
|
ICML 2021
|
|
Kernel Continual Learning
|
Mohammad Mahdi Derakhshani, Xiantong Zhen, Ling Shao, Cees Snoek
|
This paper introduces kernel continual learning, a simple but effective variant of continual learning that leverages the non-parametric nature of kernel methods to tackle catastrophic forgetting. We deploy an episodic memory unit that stores a subset of samples for each task to learn task-specific classifiers based on kernel ridge regression. This does not require memory replay and systematically avoids task interference in the classifiers. We further introduce variational random features to learn a data-driven kernel for each task. To do so, we formulate kernel continual learning as a variational inference problem, where a random Fourier basis is incorporated as the latent variable. The variational posterior distribution over the random Fourier basis is inferred from the coreset of each task. In this way, we are able to generate more informative kernels specific to each task, and, more importantly, the coreset size can be reduced to achieve more compact memory, resulting in more efficient continual learning based on episodic memory. Extensive evaluation on four benchmarks demonstrates the effectiveness and promise of kernels for continual learning.
|
https://proceedings.mlr.press/v139/derakhshani21a.html
|
https://proceedings.mlr.press/v139/derakhshani21a.html
|
https://proceedings.mlr.press/v139/derakhshani21a.html
|
http://proceedings.mlr.press/v139/derakhshani21a/derakhshani21a.pdf
|
ICML 2021
|
|
Bayesian Optimization over Hybrid Spaces
|
Aryan Deshwal, Syrine Belakaria, Janardhan Rao Doppa
|
We consider the problem of optimizing hybrid structures (mixture of discrete and continuous input variables) via expensive black-box function evaluations. This problem arises in many real-world applications. For example, in materials design optimization via lab experiments, discrete and continuous variables correspond to the presence/absence of primitive elements and their relative concentrations, respectively. The key challenge is to accurately model the complex interactions between discrete and continuous variables. In this paper, we propose a novel approach referred to as Hybrid Bayesian Optimization (HyBO) by utilizing diffusion kernels, which are naturally defined over continuous and discrete variables. We develop a principled approach for constructing diffusion kernels over hybrid spaces by utilizing the additive kernel formulation, which allows additive interactions of all orders in a tractable manner. We theoretically analyze the modeling strength of additive hybrid kernels and prove that they have the universal approximation property. Our experiments on synthetic and six diverse real-world benchmarks show that HyBO significantly outperforms the state-of-the-art methods.
|
https://proceedings.mlr.press/v139/deshwal21a.html
|
https://proceedings.mlr.press/v139/deshwal21a.html
|
https://proceedings.mlr.press/v139/deshwal21a.html
|
http://proceedings.mlr.press/v139/deshwal21a/deshwal21a.pdf
|
ICML 2021
|
|
Navigation Turing Test (NTT): Learning to Evaluate Human-Like Navigation
|
Sam Devlin, Raluca Georgescu, Ida Momennejad, Jaroslaw Rzepecki, Evelyn Zuniga, Gavin Costello, Guy Leroy, Ali Shaw, Katja Hofmann
|
A key challenge on the path to developing agents that learn complex human-like behavior is the need to quickly and accurately quantify human-likeness. While human assessments of such behavior can be highly accurate, speed and scalability are limited. We address these limitations through a novel automated Navigation Turing Test (ANTT) that learns to predict human judgments of human-likeness. We demonstrate the effectiveness of our automated NTT on a navigation task in a complex 3D environment. We investigate six classification models to shed light on the types of architectures best suited to this task, and validate them against data collected through a human NTT. Our best models achieve high accuracy when distinguishing true human and agent behavior. At the same time, we show that predicting finer-grained human assessment of agents’ progress towards human-like behavior remains unsolved. Our work takes an important step towards agents that more effectively learn complex human-like behavior.
|
https://proceedings.mlr.press/v139/devlin21a.html
|
https://proceedings.mlr.press/v139/devlin21a.html
|
https://proceedings.mlr.press/v139/devlin21a.html
|
http://proceedings.mlr.press/v139/devlin21a/devlin21a.pdf
|
ICML 2021
|
|
Versatile Verification of Tree Ensembles
|
Laurens Devos, Wannes Meert, Jesse Davis
|
Machine-learned models must often abide by certain requirements (e.g., fairness or legal constraints). This has spurred interest in developing approaches that can provably verify whether a model satisfies certain properties. This paper introduces a generic algorithm called Veritas that enables tackling multiple different verification tasks for tree ensemble models like random forests (RFs) and gradient boosted decision trees (GBDTs). This generality contrasts with previous work, which has focused exclusively on either adversarial example generation or robustness checking. Veritas formulates the verification task as a generic optimization problem and introduces a novel search space representation. Veritas offers two key advantages. First, it provides anytime lower and upper bounds when the optimization problem cannot be solved exactly. In contrast, many existing methods have focused on exact solutions and are thus limited by the verification problem being NP-complete. Second, Veritas produces full (bounded suboptimal) solutions that can be used to generate concrete examples. We experimentally show that our method produces state-of-the-art robustness estimates, especially when executed with strict time constraints. This is exceedingly important when checking the robustness of large datasets. Additionally, we show that Veritas enables tackling more real-world verification scenarios.
|
https://proceedings.mlr.press/v139/devos21a.html
|
https://proceedings.mlr.press/v139/devos21a.html
|
https://proceedings.mlr.press/v139/devos21a.html
|
http://proceedings.mlr.press/v139/devos21a/devos21a.pdf
|
ICML 2021
|
|
On the Inherent Regularization Effects of Noise Injection During Training
|
Oussama Dhifallah, Yue Lu
|
Randomly perturbing networks during the training process is a commonly used approach to improving generalization performance. In this paper, we present a theoretical study of one particular way of random perturbation, which corresponds to injecting artificial noise to the training data. We provide a precise asymptotic characterization of the training and generalization errors of such randomly perturbed learning problems on a random feature model. Our analysis shows that Gaussian noise injection in the training process is equivalent to introducing a weighted ridge regularization, when the number of noise injections tends to infinity. The explicit form of the regularization is also given. Numerical results corroborate our asymptotic predictions, showing that they are accurate even in moderate problem dimensions. Our theoretical predictions are based on a new correlated Gaussian equivalence conjecture that generalizes recent results in the study of random feature models.
|
https://proceedings.mlr.press/v139/dhifallah21a.html
|
https://proceedings.mlr.press/v139/dhifallah21a.html
|
https://proceedings.mlr.press/v139/dhifallah21a.html
|
http://proceedings.mlr.press/v139/dhifallah21a/dhifallah21a.pdf
|
ICML 2021
|
|
Hierarchical Agglomerative Graph Clustering in Nearly-Linear Time
|
Laxman Dhulipala, David Eisenstat, Jakub Łącki, Vahab Mirrokni, Jessica Shi
|
We study the widely-used hierarchical agglomerative clustering (HAC) algorithm on edge-weighted graphs. We define an algorithmic framework for hierarchical agglomerative graph clustering that provides the first efficient $\tilde{O}(m)$ time exact algorithms for classic linkage measures, such as complete- and WPGMA-linkage, as well as other measures. Furthermore, for average-linkage, arguably the most popular variant of HAC, we provide an algorithm that runs in $\tilde{O}(n\sqrt{m})$ time. For this variant, this is the first exact algorithm that runs in subquadratic time, as long as $m=n^{2-\epsilon}$ for some constant $\epsilon > 0$. We complement this result with a simple $\epsilon$-close approximation algorithm for average-linkage in our framework that runs in $\tilde{O}(m)$ time. As an application of our algorithms, we consider clustering points in a metric space by first using $k$-NN to generate a graph from the point set, and then running our algorithms on the resulting weighted graph. We validate the performance of our algorithms on publicly available datasets, and show that our approach can speed up clustering of point datasets by a factor of 20.7–76.5x.
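As a rough illustration of the point-clustering application described above (k-NN graph construction followed by graph-based average-linkage HAC), the sketch below uses off-the-shelf scikit-learn routines; it does not implement the paper's near-linear-time algorithms, and all parameter choices are illustrative.

```python
# Minimal sketch: cluster points by building a k-NN graph and running
# average-linkage HAC restricted to that graph. Uses scikit-learn stand-ins,
# not the paper's near-linear-time algorithms.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
points = np.concatenate([rng.normal(0.0, 0.3, (100, 2)),
                         rng.normal(3.0, 0.3, (100, 2))])

# Sparse k-NN graph whose edge weights are Euclidean distances.
knn = kneighbors_graph(points, n_neighbors=10, mode="distance", include_self=False)

# Average-linkage HAC constrained to the k-NN connectivity structure.
hac = AgglomerativeClustering(n_clusters=2, linkage="average", connectivity=knn)
labels = hac.fit_predict(points)
print(np.bincount(labels))  # roughly [100, 100] on this toy data
```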
|
https://proceedings.mlr.press/v139/dhulipala21a.html
|
https://proceedings.mlr.press/v139/dhulipala21a.html
|
https://proceedings.mlr.press/v139/dhulipala21a.html
|
http://proceedings.mlr.press/v139/dhulipala21a/dhulipala21a.pdf
|
ICML 2021
|
|
Learning Online Algorithms with Distributional Advice
|
Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Ali Vakilian, Nikos Zarifis
|
We study the problem of designing online algorithms given advice about the input. While prior work had focused on deterministic advice, we only assume distributional access to the instances of interest, and the goal is to learn a competitive algorithm given access to i.i.d. samples. We aim to be competitive against an adversary with prior knowledge of the distribution, while also performing well against worst-case inputs. We focus on the classical online problems of ski-rental and prophet-inequalities, and provide sample complexity bounds for the underlying learning tasks. First, we point out that for general distributions it is information-theoretically impossible to beat the worst-case competitive-ratio with any finite sample size. As our main contribution, we establish strong positive results for well-behaved distributions. Specifically, for the broad class of log-concave distributions, we show that $\mathrm{poly}(1/\epsilon)$ samples suffice to obtain $(1+\epsilon)$-competitive ratio. Finally, we show that this sample upper bound is close to best possible, even for very simple classes of distributions.
|
https://proceedings.mlr.press/v139/diakonikolas21a.html
|
https://proceedings.mlr.press/v139/diakonikolas21a.html
|
https://proceedings.mlr.press/v139/diakonikolas21a.html
|
http://proceedings.mlr.press/v139/diakonikolas21a/diakonikolas21a.pdf
|
ICML 2021
|
|
A Wasserstein Minimax Framework for Mixed Linear Regression
|
Theo Diamandis, Yonina Eldar, Alireza Fallah, Farzan Farnia, Asuman Ozdaglar
|
Multi-modal distributions are commonly used to model clustered data in statistical learning tasks. In this paper, we consider the Mixed Linear Regression (MLR) problem. We propose an optimal transport-based framework for MLR problems, Wasserstein Mixed Linear Regression (WMLR), which minimizes the Wasserstein distance between the learned and target mixture regression models. Through a model-based duality analysis, WMLR reduces the underlying MLR task to a nonconvex-concave minimax optimization problem, which can be provably solved to find a minimax stationary point by the Gradient Descent Ascent (GDA) algorithm. In the special case of mixtures of two linear regression models, we show that WMLR enjoys global convergence and generalization guarantees. We prove that WMLR’s sample complexity grows linearly with the dimension of data. Finally, we discuss the application of WMLR to the federated learning task where the training samples are collected by multiple agents in a network. Unlike the Expectation-Maximization algorithm, WMLR directly extends to the distributed, federated learning setting. We support our theoretical results through several numerical experiments, which highlight our framework’s ability to handle the federated learning setting with mixture models.
|
https://proceedings.mlr.press/v139/diamandis21a.html
|
https://proceedings.mlr.press/v139/diamandis21a.html
|
https://proceedings.mlr.press/v139/diamandis21a.html
|
http://proceedings.mlr.press/v139/diamandis21a/diamandis21a.pdf
|
ICML 2021
|
|
Context-Aware Online Collective Inference for Templated Graphical Models
|
Charles Dickens, Connor Pryor, Eriq Augustine, Alexander Miller, Lise Getoor
|
In this work, we examine online collective inference, the problem of maintaining and performing inference over a sequence of evolving graphical models. We utilize templated graphical models (TGM), a general class of graphical models expressed via templates and instantiated with data. A key challenge is minimizing the cost of instantiating the updated model. To address this, we define a class of exact and approximate context-aware methods for updating an existing TGM. These methods avoid a full re-instantiation by using the context of the updates to only add relevant components to the graphical model. Further, we provide stability bounds for the general online inference problem and regret bounds for a proposed approximation. Finally, we implement our approach in probabilistic soft logic, and test it on several online collective inference tasks. Through these experiments we verify the bounds on regret and stability, and show that our approximate online approach consistently runs two to five times faster than the offline alternative while, surprisingly, maintaining the quality of the predictions.
|
https://proceedings.mlr.press/v139/dickens21a.html
|
https://proceedings.mlr.press/v139/dickens21a.html
|
https://proceedings.mlr.press/v139/dickens21a.html
|
http://proceedings.mlr.press/v139/dickens21a/dickens21a.pdf
|
ICML 2021
|
|
ARMS: Antithetic-REINFORCE-Multi-Sample Gradient for Binary Variables
|
Aleksandar Dimitriev, Mingyuan Zhou
|
Estimating the gradients for binary variables is a task that arises frequently in various domains, such as training discrete latent variable models. What has been commonly used is a REINFORCE based Monte Carlo estimation method that uses either independent samples or pairs of negatively correlated samples. To better utilize more than two samples, we propose ARMS, an Antithetic REINFORCE-based Multi-Sample gradient estimator. ARMS uses a copula to generate any number of mutually antithetic samples. It is unbiased, has low variance, and generalizes both DisARM, which we show to be ARMS with two samples, and the leave-one-out REINFORCE (LOORF) estimator, which is ARMS with uncorrelated samples. We evaluate ARMS on several datasets for training generative models, and our experimental results show that it outperforms competing methods. We also develop a version of ARMS for optimizing the multi-sample variational bound, and show that it outperforms both VIMCO and DisARM. The code is publicly available.
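The following sketch shows one simple Gaussian-copula construction of K mutually negatively correlated Bernoulli samples, in the spirit of the antithetic multi-sample scheme described above; the exact copula and estimator used by ARMS may differ, and the function name and parameters here are our own.

```python
# A hedged sketch: K Bernoulli(p) samples per dimension whose underlying
# Gaussians have pairwise correlation -1/(K-1), the most negative exchangeable
# value. This only illustrates antithetic multi-sample generation, not ARMS's
# exact copula or its gradient estimator.
import numpy as np
from scipy.stats import norm

def antithetic_bernoulli(p, K, rng):
    g = rng.standard_normal((K,) + p.shape)   # i.i.d. Gaussians
    z = g - g.mean(axis=0, keepdims=True)     # centering induces correlation -1/(K-1)
    z = z / np.sqrt(1.0 - 1.0 / K)            # restore unit marginal variance
    u = norm.cdf(z)                           # uniform marginals, negatively dependent
    return (u < p).astype(np.float64)         # Bernoulli(p) marginals

rng = np.random.default_rng(0)
p = np.full(5, 0.3)                           # probabilities of 5 binary latents
samples = antithetic_bernoulli(p, K=4, rng=rng)   # shape (4, 5)
print(samples.mean(axis=0))
```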
|
https://proceedings.mlr.press/v139/dimitriev21a.html
|
https://proceedings.mlr.press/v139/dimitriev21a.html
|
https://proceedings.mlr.press/v139/dimitriev21a.html
|
http://proceedings.mlr.press/v139/dimitriev21a/dimitriev21a.pdf
|
ICML 2021
|
|
XOR-CD: Linearly Convergent Constrained Structure Generation
|
Fan Ding, Jianzhu Ma, Jinbo Xu, Yexiang Xue
|
We propose XOR-Contrastive Divergence learning (XOR-CD), a provable approach for constrained structure generation, which remains difficult for state-of-the-art neural network and constraint reasoning approaches. XOR-CD harnesses XOR-Sampling to generate samples from the model distribution in CD learning and is guaranteed to generate valid structures. In addition, XOR-CD has a linear convergence rate towards the global maximum of the likelihood function within a vanishing constant in learning exponential family models. Constraint satisfaction enabled by XOR-CD also boosts its learning performance. Our real-world experiments on data-driven experimental design, dispatching route generation, and sequence-based protein homology detection demonstrate the superior performance of XOR-CD compared to baseline approaches in generating valid structures as well as capturing the inductive bias in the training set.
|
https://proceedings.mlr.press/v139/ding21a.html
|
https://proceedings.mlr.press/v139/ding21a.html
|
https://proceedings.mlr.press/v139/ding21a.html
|
http://proceedings.mlr.press/v139/ding21a/ding21a.pdf
|
ICML 2021
|
|
Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for a Holistic Approach
|
Tianyu Ding, Zhihui Zhu, Rene Vidal, Daniel P Robinson
|
The Dual Principal Component Pursuit (DPCP) method has been proposed to robustly recover a subspace of high-relative dimension from corrupted data. Existing analyses and algorithms of DPCP, however, mainly focus on finding a normal to a single hyperplane that contains the inliers. Although these algorithms can be extended to a subspace of higher co-dimension through a recursive approach that sequentially finds a new basis element of the space orthogonal to the subspace, this procedure is computationally expensive and lacks convergence guarantees. In this paper, we consider a DPCP approach for simultaneously computing the entire basis of the orthogonal complement subspace (we call this a holistic approach) by solving a non-convex non-smooth optimization problem over the Grassmannian. We provide geometric and statistical analyses for the global optimality and prove that it can tolerate as many outliers as the square of the number of inliers, under both noiseless and noisy settings. We then present a Riemannian regularity condition for the problem, which is then used to prove that a Riemannian subgradient method converges linearly to a neighborhood of the orthogonal subspace with error proportional to the noise level.
|
https://proceedings.mlr.press/v139/ding21b.html
|
https://proceedings.mlr.press/v139/ding21b.html
|
https://proceedings.mlr.press/v139/ding21b.html
|
http://proceedings.mlr.press/v139/ding21b/ding21b.pdf
|
ICML 2021
|
|
Coded-InvNet for Resilient Prediction Serving Systems
|
Tuan Dinh, Kangwook Lee
|
Inspired by a new coded computation algorithm for invertible functions, we propose Coded-InvNet, a new approach to designing resilient prediction serving systems that can gracefully handle stragglers or node failures. Coded-InvNet leverages recent findings in the deep learning literature such as invertible neural networks, Manifold Mixup, and domain translation algorithms, identifying interesting research directions that span machine learning and systems. Our experimental results show that Coded-InvNet can outperform existing approaches, especially when the compute resource overhead is as low as 10%. For instance, without knowing which of the ten workers is going to fail, our algorithm can design a backup task so that it can correctly recover the missing prediction result with an accuracy of 85.9%, significantly outperforming the previous SOTA by 32.5%.
|
https://proceedings.mlr.press/v139/dinh21a.html
|
https://proceedings.mlr.press/v139/dinh21a.html
|
https://proceedings.mlr.press/v139/dinh21a.html
|
http://proceedings.mlr.press/v139/dinh21a/dinh21a.pdf
|
ICML 2021
|
|
Estimation and Quantization of Expected Persistence Diagrams
|
Vincent Divol, Theo Lacombe
|
Persistence diagrams (PDs) are the most common descriptors used to encode the topology of structured data appearing in challenging learning tasks; think e.g. of graphs, time series or point clouds sampled close to a manifold. Given random objects and the corresponding distribution of PDs, one may want to build a statistical summary—such as a mean—of these random PDs, which is however not a trivial task as the natural geometry of the space of PDs is not linear. In this article, we study two such summaries, the Expected Persistence Diagram (EPD), and its quantization. The EPD is a measure supported on $\mathbb{R}^2$, which may be approximated by its empirical counterpart. We prove that this estimator is optimal from a minimax standpoint on a large class of models with a parametric rate of convergence. The empirical EPD is simple and efficient to compute, but possibly has a very large support, hindering its use in practice. To overcome this issue, we propose an algorithm to compute a quantization of the empirical EPD, a measure with small support which is shown to approximate with near-optimal rates a quantization of the theoretical EPD.
|
https://proceedings.mlr.press/v139/divol21a.html
|
https://proceedings.mlr.press/v139/divol21a.html
|
https://proceedings.mlr.press/v139/divol21a.html
|
http://proceedings.mlr.press/v139/divol21a/divol21a.pdf
|
ICML 2021
|
|
On Energy-Based Models with Overparametrized Shallow Neural Networks
|
Carles Domingo-Enrich, Alberto Bietti, Eric Vanden-Eijnden, Joan Bruna
|
Energy-based models (EBMs) are a simple yet powerful framework for generative modeling. They are based on a trainable energy function which defines an associated Gibbs measure, and they can be trained and sampled from via well-established statistical tools, such as MCMC. Neural networks may be used as energy function approximators, providing both a rich class of expressive models as well as a flexible device to incorporate data structure. In this work we focus on shallow neural networks. Building from the incipient theory of overparametrized neural networks, we show that models trained in the so-called ‘active’ regime provide a statistical advantage over their associated ‘lazy’ or kernel regime, leading to improved adaptivity to hidden low-dimensional structure in the data distribution, as already observed in supervised learning. Our study covers both the maximum likelihood and Stein Discrepancy estimators, and we validate our theoretical results with numerical experiments on synthetic data.
|
https://proceedings.mlr.press/v139/domingo-enrich21a.html
|
https://proceedings.mlr.press/v139/domingo-enrich21a.html
|
https://proceedings.mlr.press/v139/domingo-enrich21a.html
|
http://proceedings.mlr.press/v139/domingo-enrich21a/domingo-enrich21a.pdf
|
ICML 2021
|
|
Kernel-Based Reinforcement Learning: A Finite-Time Analysis
|
Omar Darwiche Domingues, Pierre Menard, Matteo Pirotta, Emilie Kaufmann, Michal Valko
|
We consider the exploration-exploitation dilemma in finite-horizon reinforcement learning problems whose state-action space is endowed with a metric. We introduce Kernel-UCBVI, a model-based optimistic algorithm that leverages the smoothness of the MDP and a non-parametric kernel estimator of the rewards and transitions to efficiently balance exploration and exploitation. For problems with $K$ episodes and horizon $H$, we provide a regret bound of $\widetilde{O}\left( H^3 K^{\frac{2d}{2d+1}}\right)$, where $d$ is the covering dimension of the joint state-action space. This is the first regret bound for kernel-based RL using smoothing kernels, which requires very weak assumptions on the MDP and applies to a wide range of tasks. We empirically validate our approach in continuous MDPs with sparse rewards.
|
https://proceedings.mlr.press/v139/domingues21a.html
|
https://proceedings.mlr.press/v139/domingues21a.html
|
https://proceedings.mlr.press/v139/domingues21a.html
|
http://proceedings.mlr.press/v139/domingues21a/domingues21a.pdf
|
ICML 2021
|
|
Attention is not all you need: pure attention loses rank doubly exponentially with depth
|
Yihe Dong, Jean-Baptiste Cordonnier, Andreas Loukas
|
Attention-based architectures have become ubiquitous in machine learning. Yet, our understanding of the reasons for their effectiveness remains limited. This work proposes a new way to understand self-attention networks: we show that their output can be decomposed into a sum of smaller terms—or paths—each involving the operation of a sequence of attention heads across layers. Using this path decomposition, we prove that self-attention possesses a strong inductive bias towards "token uniformity". Specifically, without skip connections or multi-layer perceptrons (MLPs), the output converges doubly exponentially to a rank-1 matrix. On the other hand, skip connections and MLPs stop the output from degeneration. Our experiments verify the convergence results on standard transformer architectures.
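A small numerical demo of the token-uniformity effect described above can be put together as follows; this is our illustration with randomly initialized single-head layers, not the paper's path-decomposition argument, and the rescaling step is only for numerical convenience.

```python
# Demo (ours): stacking pure softmax self-attention layers without skip
# connections drives the token matrix toward rank one, while skip connections
# preserve token diversity. The reported residual is scale-invariant.
import numpy as np

def attention_layer(X, rng):
    n, d = X.shape
    Wq, Wk, Wv = (rng.normal(0.0, d ** -0.5, (d, d)) for _ in range(3))
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)              # row-stochastic attention matrix
    return A @ X @ Wv

def rank1_residual(X):
    s = np.linalg.svd(X, compute_uv=False)
    return np.sqrt((s[1:] ** 2).sum() / (s ** 2).sum())   # 0 means exactly rank one

rng = np.random.default_rng(0)
X_pure = X_skip = rng.normal(size=(10, 16))
for _ in range(10):
    X_pure = attention_layer(X_pure, rng)
    X_pure = X_pure / np.linalg.norm(X_pure)          # rescale only, for stability
    X_skip = X_skip + attention_layer(X_skip, rng)
    X_skip = X_skip / np.linalg.norm(X_skip)
print(rank1_residual(X_pure), rank1_residual(X_skip))  # pure attention is far closer to rank 1
```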
|
https://proceedings.mlr.press/v139/dong21a.html
|
https://proceedings.mlr.press/v139/dong21a.html
|
https://proceedings.mlr.press/v139/dong21a.html
|
http://proceedings.mlr.press/v139/dong21a/dong21a.pdf
|
ICML 2021
|
|
How rotational invariance of common kernels prevents generalization in high dimensions
|
Konstantin Donhauser, Mingqi Wu, Fanny Yang
|
Kernel ridge regression is well-known to achieve minimax optimal rates in low-dimensional settings. However, its behavior in high dimensions is much less understood. Recent work establishes consistency for high-dimensional kernel regression for a number of specific assumptions on the data distribution. In this paper, we show that in high dimensions, the rotational invariance property of commonly studied kernels (such as RBF, inner product kernels and fully-connected NTK of any depth) leads to inconsistent estimation unless the ground truth is a low-degree polynomial. Our lower bound on the generalization error holds for a wide range of distributions and kernels with different eigenvalue decays. This lower bound suggests that consistency results for kernel ridge regression in high dimensions generally require a more refined analysis that depends on the structure of the kernel beyond its eigenvalue decay.
|
https://proceedings.mlr.press/v139/donhauser21a.html
|
https://proceedings.mlr.press/v139/donhauser21a.html
|
https://proceedings.mlr.press/v139/donhauser21a.html
|
http://proceedings.mlr.press/v139/donhauser21a/donhauser21a.pdf
|
ICML 2021
|
|
Fast Stochastic Bregman Gradient Methods: Sharp Analysis and Variance Reduction
|
Radu Alexandru Dragomir, Mathieu Even, Hadrien Hendrikx
|
We study the problem of minimizing a relatively-smooth convex function using stochastic Bregman gradient methods. We first prove the convergence of Bregman Stochastic Gradient Descent (BSGD) to a region that depends on the noise (magnitude of the gradients) at the optimum. In particular, BSGD quickly converges to the exact minimizer when this noise is zero (interpolation setting, in which the data is fit perfectly). Otherwise, when the objective has a finite sum structure, we show that variance reduction can be used to counter the effect of noise. In particular, fast convergence to the exact minimizer can be obtained under additional regularity assumptions on the Bregman reference function. We illustrate the effectiveness of our approach on two key applications of relative smoothness: tomographic reconstruction with Poisson noise and statistical preconditioning for distributed optimization.
|
https://proceedings.mlr.press/v139/dragomir21a.html
|
https://proceedings.mlr.press/v139/dragomir21a.html
|
https://proceedings.mlr.press/v139/dragomir21a.html
|
http://proceedings.mlr.press/v139/dragomir21a/dragomir21a.pdf
|
ICML 2021
|
|
Bilinear Classes: A Structural Framework for Provable Generalization in RL
|
Simon Du, Sham Kakade, Jason Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, Ruosong Wang
|
This work introduces Bilinear Classes, a new structural framework, which permits generalization in reinforcement learning in a wide variety of settings through the use of function approximation. The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable, and, notably, also includes new models, such as the Linear Q*/V* model in which both the optimal Q-function and the optimal V-function are linear in some known feature space. Our main result provides an RL algorithm which has polynomial sample complexity for Bilinear Classes; notably, this sample complexity is stated in terms of a reduction to the generalization error of an underlying supervised learning sub-problem. These bounds nearly match the best known sample complexity bounds for existing models. Furthermore, this framework also extends to the infinite dimensional (RKHS) setting: for the Linear Q*/V* model, linear MDPs, and linear mixture MDPs, we provide sample complexities that have no explicit dependence on the feature dimension (which could be infinite), but instead depend only on information-theoretic quantities.
|
https://proceedings.mlr.press/v139/du21a.html
|
https://proceedings.mlr.press/v139/du21a.html
|
https://proceedings.mlr.press/v139/du21a.html
|
http://proceedings.mlr.press/v139/du21a/du21a.pdf
|
ICML 2021
|
|
Improved Contrastive Divergence Training of Energy-Based Models
|
Yilun Du, Shuang Li, Joshua Tenenbaum, Igor Mordatch
|
Contrastive divergence is a popular method of training energy-based models, but is known to have difficulties with training stability. We propose an adaptation to improve contrastive divergence training by scrutinizing a gradient term that is difficult to calculate and is often left out for convenience. We show that this gradient term is numerically significant and in practice is important to avoid training instabilities, while being tractable to estimate. We further highlight how data augmentation and multi-scale processing can be used to improve model robustness and generation quality. Finally, we empirically evaluate stability of model architectures and show improved performance on a host of benchmarks and use cases, such as image generation, OOD detection, and compositional generation.
|
https://proceedings.mlr.press/v139/du21b.html
|
https://proceedings.mlr.press/v139/du21b.html
|
https://proceedings.mlr.press/v139/du21b.html
|
http://proceedings.mlr.press/v139/du21b/du21b.pdf
|
ICML 2021
|
|
Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation
|
Cunxiao Du, Zhaopeng Tu, Jing Jiang
|
We propose a new training objective named order-agnostic cross entropy (OaXE) for fully non-autoregressive translation (NAT) models. OaXE improves the standard cross-entropy loss to ameliorate the effect of word reordering, which is a common source of the critical multimodality problem in NAT. Concretely, OaXE removes the penalty for word order errors, and computes the cross entropy loss based on the best possible alignment between model predictions and target tokens. Since the log loss is very sensitive to invalid references, we leverage cross entropy initialization and loss truncation to ensure the model focuses on a good part of the search space. Extensive experiments on major WMT benchmarks demonstrate that OaXE substantially improves translation performance, setting new state of the art for fully NAT models. Further analyses show that OaXE indeed alleviates the multimodality problem by reducing token repetitions and increasing prediction confidence. Our code, data, and trained models are available at https://github.com/tencent-ailab/ICML21_OAXE.
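A minimal sketch of an order-agnostic cross entropy of this kind is given below, using the Hungarian algorithm to find the best position-to-token alignment before averaging the usual negative log-likelihoods; this is our own illustration (no cross-entropy initialization or loss truncation), not the released implementation.

```python
# Sketch (ours): score every (position, target token) pair by its negative
# log-probability, find the best one-to-one alignment, and average the aligned
# NLLs. The result is never larger than the ordered cross entropy.
import numpy as np
from scipy.optimize import linear_sum_assignment

def oaxe_loss(log_probs, target):
    """log_probs: (T, V) per-position log-probabilities; target: (T,) token ids."""
    cost = -log_probs[:, target]               # cost[i, j] = NLL of target[j] at position i
    rows, cols = linear_sum_assignment(cost)   # best position-to-token assignment
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
T, V = 6, 20
logits = rng.normal(size=(T, V))
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
target = rng.integers(0, V, size=T)
print(oaxe_loss(log_probs, target))            # order-agnostic loss
print(-log_probs[np.arange(T), target].mean()) # ordinary (order-sensitive) cross entropy
```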
|
https://proceedings.mlr.press/v139/du21c.html
|
https://proceedings.mlr.press/v139/du21c.html
|
https://proceedings.mlr.press/v139/du21c.html
|
http://proceedings.mlr.press/v139/du21c/du21c.pdf
|
ICML 2021
|
|
Putting the “Learning” into Learning-Augmented Algorithms for Frequency Estimation
|
Elbert Du, Franklyn Wang, Michael Mitzenmacher
|
In learning-augmented algorithms, algorithms are enhanced using information from a machine learning algorithm. In turn, this suggests that we should tailor our machine-learning approach for the target algorithm. We here consider this synergy in the context of the learned count-min sketch from (Hsu et al., 2019). Learning here is used to predict heavy hitters from a data stream, which are counted explicitly outside the sketch. We show that an approximately sufficient statistic for the performance of the underlying count-min sketch is given by the coverage of the predictor, or the normalized $L^1$ norm of keys that are filtered by the predictor to be explicitly counted. We show that machine learning models which are trained to optimize for coverage lead to large improvements in performance over prior approaches according to the average absolute frequency error. Our source code can be found at https://github.com/franklynwang/putting-the-learning-in-LAA.
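The coverage statistic described above reduces to a simple computation; the sketch below uses a toy stream and a hypothetical set of predicted heavy hitters of our own choosing.

```python
# Coverage: the fraction of total stream mass belonging to keys that the
# predictor routes to exact counting (outside the sketch).
from collections import Counter

def coverage(stream, predicted_heavy_hitters):
    counts = Counter(stream)
    total = sum(counts.values())
    covered = sum(counts[k] for k in predicted_heavy_hitters)
    return covered / total

stream = ["a"] * 50 + ["b"] * 30 + ["c"] * 15 + ["d"] * 5
print(coverage(stream, {"a", "b"}))   # 0.8: these two keys carry 80% of the mass
```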
|
https://proceedings.mlr.press/v139/du21d.html
|
https://proceedings.mlr.press/v139/du21d.html
|
https://proceedings.mlr.press/v139/du21d.html
|
http://proceedings.mlr.press/v139/du21d/du21d.pdf
|
ICML 2021
|
|
Estimating $α$-Rank from A Few Entries with Low Rank Matrix Completion
|
Yali Du, Xue Yan, Xu Chen, Jun Wang, Haifeng Zhang
|
Multi-agent evaluation aims at the assessment of an agent’s strategy on the basis of interaction with others. Typically, existing methods such as $\alpha$-rank and its approximation still require exhaustively comparing all pairs of joint strategies for an accurate ranking, which in practice is computationally expensive. In this paper, we aim to reduce the number of pairwise comparisons in recovering a satisfying ranking for $n$ strategies in two-player meta-games, by exploring the fact that agents with similar skills may achieve similar payoffs against others. Two situations are considered: the first one is when we can obtain the true payoffs; the other one is when we can only access noisy payoffs. Based on these formulations, we leverage low-rank matrix completion and design two novel algorithms for noise-free and noisy evaluations, respectively. For both of these settings, we theorize that $O(nr \log n)$ ($n$ is the number of agents and $r$ is the rank of the payoff matrix) payoff entries are required to achieve sufficiently good strategy evaluation performance. Empirical results on evaluating the strategies in three synthetic games and twelve real-world games demonstrate that strategy evaluation from a few entries can lead to performance comparable to algorithms with full knowledge of the payoff matrix.
|
https://proceedings.mlr.press/v139/du21e.html
|
https://proceedings.mlr.press/v139/du21e.html
|
https://proceedings.mlr.press/v139/du21e.html
|
http://proceedings.mlr.press/v139/du21e/du21e.pdf
|
ICML 2021
|
|
Learning Diverse-Structured Networks for Adversarial Robustness
|
Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama
|
In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so the models being used are still the classic ones from standard training (ST). Classic network architectures (NAs) are generally worse than searched NAs in ST, and the same should hold in AT. In this paper, we argue that NA and AT cannot be handled independently, since given a dataset, the optimal NA in ST would no longer be optimal in AT. That being said, AT is time-consuming itself; if we directly search NAs in AT over large search spaces, the computation will be practically infeasible. Thus, we propose diverse-structured network (DS-Net), to significantly reduce the size of the search space: instead of low-level operations, we only consider predefined atomic blocks, where an atomic block is a time-tested building block like the residual block. There are only a few atomic blocks and thus we can weight all atomic blocks rather than find the best one in a searched block of DS-Net, which is an essential tradeoff between exploring diverse structures and exploiting the best structures. Empirical results demonstrate the advantages of DS-Net, i.e., weighting the atomic blocks.
|
https://proceedings.mlr.press/v139/du21f.html
|
https://proceedings.mlr.press/v139/du21f.html
|
https://proceedings.mlr.press/v139/du21f.html
|
http://proceedings.mlr.press/v139/du21f/du21f.pdf
|
ICML 2021
|
|
Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning
|
Yaqi Duan, Chi Jin, Zhiyuan Li
|
This paper considers batch Reinforcement Learning (RL) with general value function approximation. Our study investigates the minimal assumptions to reliably estimate/minimize Bellman error, and characterizes the generalization performance by (local) Rademacher complexities of general function classes, which takes initial steps in bridging the gap between statistical learning theory and batch RL. Concretely, we view the Bellman error as a surrogate loss for the optimality gap, and prove the following: (1) In the double sampling regime, the excess risk of the Empirical Risk Minimizer (ERM) is bounded by the Rademacher complexity of the function class. (2) In the single sampling regime, sample-efficient risk minimization is not possible without further assumptions, regardless of the algorithm. However, with completeness assumptions, the excess risk of FQI and a minimax-style algorithm can again be bounded by the Rademacher complexity of the corresponding function classes. (3) Fast statistical rates can be achieved by using tools of local Rademacher complexity. Our analysis covers a wide range of function classes, including finite classes, linear spaces, kernel spaces, sparse linear features, etc.
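For readers less familiar with the complexity measure used above, the empirical Rademacher complexity of a function class $\mathcal{F}$ on a sample $S=(x_1,\dots,x_n)$ is the standard quantity below (our notation, stated only for reference).

```latex
\[
  \widehat{\mathfrak{R}}_S(\mathcal{F})
  \;=\;
  \mathbb{E}_{\sigma}\!\left[ \sup_{f \in \mathcal{F}}
    \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, f(x_i) \right],
  \qquad \sigma_i \overset{\text{i.i.d.}}{\sim} \mathrm{Uniform}\{-1,+1\}.
\]
% The paper bounds the excess risk of Bellman-error minimizers by such (local)
% Rademacher complexities of the value-function class.
```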
|
https://proceedings.mlr.press/v139/duan21a.html
|
https://proceedings.mlr.press/v139/duan21a.html
|
https://proceedings.mlr.press/v139/duan21a.html
|
http://proceedings.mlr.press/v139/duan21a/duan21a.pdf
|
ICML 2021
|
|
Sawtooth Factorial Topic Embeddings Guided Gamma Belief Network
|
Zhibin Duan, Dongsheng Wang, Bo Chen, Chaojie Wang, Wenchao Chen, Yewen Li, Jie Ren, Mingyuan Zhou
|
Hierarchical topic models such as the gamma belief network (GBN) have delivered promising results in mining multi-layer document representations and discovering interpretable topic taxonomies. However, they often assume in the prior that the topics at each layer are independently drawn from the Dirichlet distribution, ignoring the dependencies between the topics both at the same layer and across different layers. To relax this assumption, we propose sawtooth factorial topic embedding guided GBN, a deep generative model of documents that captures the dependencies and semantic similarities between the topics in the embedding space. Specifically, both the words and topics are represented as embedding vectors of the same dimension. The topic matrix at a layer is factorized into the product of a factor loading matrix and a topic embedding matrix, the transpose of which is set as the factor loading matrix of the layer above. Repeating this particular type of factorization, which shares components between adjacent layers, leads to a structure referred to as sawtooth factorization. An auto-encoding variational inference network is constructed to optimize the model parameter via stochastic gradient descent. Experiments on big corpora show that our models outperform other neural topic models on extracting deeper interpretable topics and deriving better document representations.
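To make the sawtooth sharing concrete, the shape-level sketch below reuses each layer's topic-embedding matrix as the factor-loading matrix of the layer above; the softmax placement, dimensions, and names are illustrative assumptions of ours, not the paper's exact parameterization.

```python
# Shape-level sketch (ours) of the sawtooth factorization: words and topics
# share an h-dimensional embedding space, each layer's topic matrix is a
# factor-loading matrix times the transposed topic embeddings, and each layer's
# topic embeddings are reused as the loadings of the layer above.
import numpy as np

def col_softmax(M):
    e = np.exp(M - M.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)           # each topic column sums to 1

rng = np.random.default_rng(0)
V, h, K = 1000, 50, [64, 32, 16]                      # vocab size, embedding dim, topics/layer
word_emb = rng.normal(size=(V, h))
topic_embs = [rng.normal(size=(k, h)) for k in K]

loadings = [word_emb] + topic_embs[:-1]               # sawtooth sharing across layers
topic_matrices = [col_softmax(L @ E.T) for L, E in zip(loadings, topic_embs)]
for l, Phi in enumerate(topic_matrices, start=1):
    print(f"layer {l}: Phi has shape {Phi.shape}")    # (1000, 64), (64, 32), (32, 16)
```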
|
https://proceedings.mlr.press/v139/duan21b.html
|
https://proceedings.mlr.press/v139/duan21b.html
|
https://proceedings.mlr.press/v139/duan21b.html
|
http://proceedings.mlr.press/v139/duan21b/duan21b.pdf
|
ICML 2021
|
|
Exponential Reduction in Sample Complexity with Learning of Ising Model Dynamics
|
Arkopal Dutt, Andrey Lokhov, Marc D Vuffray, Sidhant Misra
|
The usual setting for learning the structure and parameters of a graphical model assumes the availability of independent samples produced from the corresponding multivariate probability distribution. However, for many models the mixing time of the respective Markov chain can be very large and i.i.d. samples may not be obtained. We study the problem of reconstructing binary graphical models from correlated samples produced by a dynamical process, which is natural in many applications. We analyze the sample complexity of two estimators that are based on the interaction screening objective and the conditional likelihood loss. We observe that for samples coming from a dynamical process far from equilibrium, the sample complexity reduces exponentially compared to a dynamical process that mixes quickly.
|
https://proceedings.mlr.press/v139/dutt21a.html
|
https://proceedings.mlr.press/v139/dutt21a.html
|
https://proceedings.mlr.press/v139/dutt21a.html
|
http://proceedings.mlr.press/v139/dutt21a/dutt21a.pdf
|
ICML 2021
|
|
Reinforcement Learning Under Moral Uncertainty
|
Adrien Ecoffet, Joel Lehman
|
An ambitious goal for machine learning is to create agents that behave ethically: The capacity to abide by human moral norms would greatly expand the context in which autonomous agents could be practically and safely deployed, e.g. fully autonomous vehicles will encounter charged moral decisions that complicate their deployment. While ethical agents could be trained by rewarding correct behavior under a specific moral theory (e.g. utilitarianism), there remains widespread disagreement about the nature of morality. Acknowledging such disagreement, recent work in moral philosophy proposes that ethical behavior requires acting under moral uncertainty, i.e. to take into account when acting that one’s credence is split across several plausible ethical theories. This paper translates such insights to the field of reinforcement learning, proposes two training methods that realize different points among competing desiderata, and trains agents in simple environments to act under moral uncertainty. The results illustrate (1) how such uncertainty can help curb extreme behavior from commitment to single theories and (2) several technical complications arising from attempting to ground moral philosophy in RL (e.g. how can a principled trade-off between two competing but incomparable reward functions be reached). The aim is to catalyze progress towards morally-competent agents and highlight the potential of RL to contribute towards the computational grounding of moral philosophy.
|
https://proceedings.mlr.press/v139/ecoffet21a.html
|
https://proceedings.mlr.press/v139/ecoffet21a.html
|
https://proceedings.mlr.press/v139/ecoffet21a.html
|
http://proceedings.mlr.press/v139/ecoffet21a/ecoffet21a.pdf
|
ICML 2021
|
|
Confidence-Budget Matching for Sequential Budgeted Learning
|
Yonathan Efroni, Nadav Merlis, Aadirupa Saha, Shie Mannor
|
A core element in decision-making under uncertainty is the feedback on the quality of the performed actions. However, in many applications, such feedback is restricted. For example, in recommendation systems, repeatedly asking the user to provide feedback on the quality of recommendations will annoy them. In this work, we formalize decision-making problems with querying budget, where there is a (possibly time-dependent) hard limit on the number of reward queries allowed. Specifically, we focus on multi-armed bandits, linear contextual bandits, and reinforcement learning problems. We start by analyzing the performance of ‘greedy’ algorithms that query a reward whenever they can. We show that in fully stochastic settings, doing so performs surprisingly well, but in the presence of any adversity, this might lead to linear regret. To overcome this issue, we propose the Confidence-Budget Matching (CBM) principle that queries rewards when the confidence intervals are wider than the inverse square root of the available budget. We analyze the performance of CBM based algorithms in different settings and show that it performs well in the presence of adversity in the contexts, initial states, and budgets.
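A hedged sketch of the Confidence-Budget Matching idea in a simple multi-armed bandit follows: pull arms by UCB, but spend a reward query only when the pulled arm's confidence width exceeds the inverse square root of the remaining budget. The environment, constants, and exact confidence width are illustrative choices of ours.

```python
# Sketch (ours) of a CBM-style query rule for Bernoulli bandits.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])          # unknown Bernoulli arm means
T, budget, K = 5000, 500, len(means)
n, s = np.ones(K), np.zeros(K)             # query counts (init 1 to avoid /0) and reward sums

for t in range(1, T + 1):
    width = np.sqrt(2.0 * np.log(t + 1) / n)
    a = int(np.argmax(s / n + width))      # optimistic (UCB) arm choice
    if budget > 0 and width[a] > 1.0 / np.sqrt(budget):
        r = float(rng.random() < means[a])  # reward observed only when queried
        n[a] += 1.0
        s[a] += r
        budget -= 1

print("queries left:", budget, "| empirical means:", np.round(s / n, 2))
```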
|
https://proceedings.mlr.press/v139/efroni21a.html
|
https://proceedings.mlr.press/v139/efroni21a.html
|
https://proceedings.mlr.press/v139/efroni21a.html
|
http://proceedings.mlr.press/v139/efroni21a/efroni21a.pdf
|
ICML 2021
|
|
Self-Paced Context Evaluation for Contextual Reinforcement Learning
|
Theresa Eimer, André Biedenkapp, Frank Hutter, Marius Lindauer
|
Reinforcement learning (RL) has made a lot of advances for solving a single problem in a given environment, but learning policies that generalize to unseen variations of a problem remains challenging. To improve sample efficiency for learning on such instances of a problem domain, we present Self-Paced Context Evaluation (SPaCE). Based on self-paced learning, SPaCE automatically generates instance curricula online with little computational overhead. To this end, SPaCE leverages information contained in state values during training to accelerate and improve training performance as well as generalization capabilities to new tasks from the same problem domain. Nevertheless, SPaCE is independent of the problem domain at hand and can be applied on top of any RL agent with state-value function approximation. We demonstrate SPaCE’s ability to speed up learning of different value-based RL agents on two environments, showing better generalization capabilities and up to 10x faster learning compared to naive approaches such as round robin, as well as to SPDRL, the closest state-of-the-art approach.
|
https://proceedings.mlr.press/v139/eimer21a.html
|
https://proceedings.mlr.press/v139/eimer21a.html
|
https://proceedings.mlr.press/v139/eimer21a.html
|
http://proceedings.mlr.press/v139/eimer21a/eimer21a.pdf
|
ICML 2021
|
|
Provably Strict Generalisation Benefit for Equivariant Models
|
Bryn Elesedy, Sheheryar Zaidi
|
It is widely believed that engineering a model to be invariant/equivariant improves generalisation. Despite the growing popularity of this approach, a precise characterisation of the generalisation benefit is lacking. By considering the simplest case of linear models, this paper provides the first provably non-zero improvement in generalisation for invariant/equivariant models when the target distribution is invariant/equivariant with respect to a compact group. Moreover, our work reveals an interesting relationship between generalisation, the number of training examples and properties of the group action. Our results rest on an observation of the structure of function spaces under averaging operators which, along with its consequences for feature averaging, may be of independent interest.
|
https://proceedings.mlr.press/v139/elesedy21a.html
|
https://proceedings.mlr.press/v139/elesedy21a.html
|
https://proceedings.mlr.press/v139/elesedy21a.html
|
http://proceedings.mlr.press/v139/elesedy21a/elesedy21a.pdf
|
ICML 2021
|
|
Efficient Iterative Amortized Inference for Learning Symmetric and Disentangled Multi-Object Representations
|
Patrick Emami, Pan He, Sanjay Ranka, Anand Rangarajan
|
Unsupervised multi-object representation learning depends on inductive biases to guide the discovery of object-centric representations that generalize. However, we observe that methods for learning these representations are either impractical due to long training times and large memory consumption or forego key inductive biases. In this work, we introduce EfficientMORL, an efficient framework for the unsupervised learning of object-centric representations. We show that optimization challenges caused by requiring both symmetry and disentanglement can in fact be addressed by high-cost iterative amortized inference by designing the framework to minimize its dependence on it. We take a two-stage approach to inference: first, a hierarchical variational autoencoder extracts symmetric and disentangled representations through bottom-up inference, and second, a lightweight network refines the representations with top-down feedback. The number of refinement steps taken during training is reduced following a curriculum, so that at test time with zero steps the model achieves 99.1% of the refined decomposition performance. We demonstrate strong object decomposition and disentanglement on the standard multi-object benchmark while achieving nearly an order of magnitude faster training and test time inference over the previous state-of-the-art model.
|
https://proceedings.mlr.press/v139/emami21a.html
|
https://proceedings.mlr.press/v139/emami21a.html
|
https://proceedings.mlr.press/v139/emami21a.html
|
http://proceedings.mlr.press/v139/emami21a/emami21a.pdf
|
ICML 2021
|
|
Implicit Bias of Linear RNNs
|
Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K Fletcher
|
Contemporary wisdom based on empirical studies suggests that standard recurrent neural networks (RNNs) do not perform well on tasks requiring long-term memory. However, RNNs’ poor ability to capture long-term dependencies has not been fully understood. This paper provides a rigorous explanation of this property in the special case of linear RNNs. Although this work is limited to linear RNNs, even these systems have traditionally been difficult to analyze due to their non-linear parameterization. Using recently-developed kernel regime analysis, our main result shows that as the number of hidden units goes to infinity, linear RNNs learned from random initializations are functionally equivalent to a certain weighted 1D-convolutional network. Importantly, the weightings in the equivalent model cause an implicit bias to elements with smaller time lags in the convolution, and hence shorter memory. The degree of this bias depends on the variance of the transition matrix at initialization and is related to the classic exploding and vanishing gradients problem. The theory is validated with both synthetic and real data experiments.
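The finite-horizon check below (our own illustration, not the paper's infinite-width kernel analysis) shows that a linear RNN acts on an input sequence as a 1D convolution whose kernel entries $C A^k B$ shrink with the time lag when the transition matrix is stable, i.e. the network has short effective memory.

```python
# Numerical check (ours): a linear RNN  h_{t+1} = A h_t + B u_t,  y_t = C h_t
# is exactly a 1D convolution with kernel w[k] = C A^k B.
import numpy as np

rng = np.random.default_rng(0)
d, T = 32, 40
A = rng.normal(0.0, 0.4 / np.sqrt(d), (d, d))   # operator norm well below 1: fading memory
B = rng.normal(size=(d, 1))
C = rng.normal(size=(1, d))
u = rng.normal(size=T)

# Equivalent convolution kernel: w[k] multiplies the input at lag k+1.
w = np.array([(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(T)])

h, y_rnn = np.zeros((d, 1)), []
for t in range(T):
    y_rnn.append((C @ h).item())
    h = A @ h + B * u[t]
y_conv = [sum(w[k] * u[t - 1 - k] for k in range(t)) for t in range(T)]

print(np.allclose(y_rnn, y_conv))        # True: the RNN is exactly this convolution
print(np.round(np.abs(w[:8]), 4))        # kernel magnitudes decay with the lag
```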
|
https://proceedings.mlr.press/v139/emami21b.html
|
https://proceedings.mlr.press/v139/emami21b.html
|
https://proceedings.mlr.press/v139/emami21b.html
|
http://proceedings.mlr.press/v139/emami21b/emami21b.pdf
|
ICML 2021
|
|
Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs
|
Tolga Ergen, Mert Pilanci
|
Understanding the fundamental mechanism behind the success of deep neural networks is one of the key challenges in the modern machine learning literature. Despite numerous attempts, a solid theoretical analysis is yet to be developed. In this paper, we develop a novel unified framework to reveal a hidden regularization mechanism through the lens of convex optimization. We first show that the training of multiple three-layer ReLU sub-networks with weight decay regularization can be equivalently cast as a convex optimization problem in a higher dimensional space, where sparsity is enforced via a group $\ell_1$-norm regularization. Consequently, ReLU networks can be interpreted as high dimensional feature selection methods. More importantly, we then prove that the equivalent convex problem can be globally optimized by a standard convex optimization solver with a polynomial-time complexity with respect to the number of samples and data dimension when the width of the network is fixed. Finally, we numerically validate our theoretical results via experiments involving both synthetic and real datasets.
|
https://proceedings.mlr.press/v139/ergen21a.html
|
https://proceedings.mlr.press/v139/ergen21a.html
|
https://proceedings.mlr.press/v139/ergen21a.html
|
http://proceedings.mlr.press/v139/ergen21a/ergen21a.pdf
|
ICML 2021
|
|
Revealing the Structure of Deep Neural Networks via Convex Duality
|
Tolga Ergen, Mert Pilanci
|
We study regularized deep neural networks (DNNs) and introduce a convex analytic framework to characterize the structure of the hidden layers. We show that a set of optimal hidden layer weights for a norm regularized DNN training problem can be explicitly found as the extreme points of a convex set. For the special case of deep linear networks, we prove that each optimal weight matrix aligns with the previous layers via duality. More importantly, we apply the same characterization to deep ReLU networks with whitened data and prove the same weight alignment holds. As a corollary, we also prove that norm regularized deep ReLU networks yield spline interpolation for one-dimensional datasets which was previously known only for two-layer networks. Furthermore, we provide closed-form solutions for the optimal layer weights when data is rank-one or whitened. The same analysis also applies to architectures with batch normalization even for arbitrary data. Therefore, we obtain a complete explanation for a recent empirical observation termed Neural Collapse where class means collapse to the vertices of a simplex equiangular tight frame.
|
https://proceedings.mlr.press/v139/ergen21b.html
|
https://proceedings.mlr.press/v139/ergen21b.html
|
https://proceedings.mlr.press/v139/ergen21b.html
|
http://proceedings.mlr.press/v139/ergen21b/ergen21b.pdf
|
ICML 2021
|
|
Whitening for Self-Supervised Representation Learning
|
Aleksandr Ermolov, Aliaksandr Siarohin, Enver Sangineto, Nicu Sebe
|
Most of the current self-supervised representation learning (SSL) methods are based on the contrastive loss and the instance-discrimination task, where augmented versions of the same image instance ("positives") are contrasted with instances extracted from other images ("negatives"). For the learning to be effective, many negatives should be compared with a positive pair, which is computationally demanding. In this paper, we propose a different direction and a new loss function for SSL, which is based on the whitening of the latent-space features. The whitening operation has a "scattering" effect on the batch samples, avoiding degenerate solutions where all the sample representations collapse to a single point. Our solution does not require asymmetric networks and it is conceptually simple. Moreover, since negatives are not needed, we can extract multiple positive pairs from the same image instance. The source code of the method and of all the experiments is available at: https://github.com/htdt/self-supervised.
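A compact sketch of a whitening-based SSL loss in this spirit is given below: whiten the batch of embeddings so their covariance is approximately the identity (which rules out the collapsed solution), then pull the two views of each image together with an MSE loss. The ZCA-style whitening and the joint whitening of both views are simplifying assumptions of ours, not necessarily the paper's exact recipe.

```python
# Sketch (ours) of a whitening + positive-pair MSE objective.
import torch

def whiten(z, eps=1e-4):
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1) + eps * torch.eye(z.shape[1])
    evals, evecs = torch.linalg.eigh(cov)
    w = evecs @ torch.diag(evals.clamp(min=eps).rsqrt()) @ evecs.T   # cov^{-1/2} (ZCA)
    return z @ w

def whitening_mse_loss(z1, z2):
    z = whiten(torch.cat([z1, z2], dim=0))
    n = z1.shape[0]
    return ((z[:n] - z[n:]) ** 2).sum(dim=1).mean()

z1, z2 = torch.randn(64, 128), torch.randn(64, 128)   # embeddings of two augmented views
print(whitening_mse_loss(z1, z2))
```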
|
https://proceedings.mlr.press/v139/ermolov21a.html
|
https://proceedings.mlr.press/v139/ermolov21a.html
|
https://proceedings.mlr.press/v139/ermolov21a.html
|
http://proceedings.mlr.press/v139/ermolov21a/ermolov21a.pdf
|
ICML 2021
|
|
Graph Mixture Density Networks
|
Federico Errica, Davide Bacciu, Alessio Micheli
|
We introduce the Graph Mixture Density Networks, a new family of machine learning models that can fit multimodal output distributions conditioned on graphs of arbitrary topology. By combining ideas from mixture models and graph representation learning, we address a broader class of challenging conditional density estimation problems that rely on structured data. In this respect, we evaluate our method on a new benchmark application that leverages random graphs for stochastic epidemic simulations. We show a significant improvement in the likelihood of epidemic outcomes when taking into account both multimodality and structure. The empirical analysis is complemented by two real-world regression tasks showing the effectiveness of our approach in modeling the output prediction uncertainty. Graph Mixture Density Networks open appealing research opportunities in the study of structure-dependent phenomena that exhibit non-trivial conditional output distributions.
|
https://proceedings.mlr.press/v139/errica21a.html
|
https://proceedings.mlr.press/v139/errica21a.html
|
https://proceedings.mlr.press/v139/errica21a.html
|
http://proceedings.mlr.press/v139/errica21a/errica21a.pdf
|
ICML 2021
|
|
Cross-Gradient Aggregation for Decentralized Learning from Non-IID Data
|
Yasaman Esfandiari, Sin Yong Tan, Zhanhong Jiang, Aditya Balu, Ethan Herron, Chinmay Hegde, Soumik Sarkar
|
Decentralized learning enables a group of collaborative agents to learn models using a distributed dataset without the need for a central parameter server. Recently, decentralized learning algorithms have demonstrated state-of-the-art results on benchmark data sets, comparable with centralized algorithms. However, the key assumption to achieve competitive performance is that the data is independently and identically distributed (IID) among the agents which, in real-life applications, is often not applicable. Inspired by ideas from continual learning, we propose Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm where (i) each agent aggregates cross-gradient information, i.e., derivatives of its model with respect to its neighbors’ datasets, and (ii) updates its model using a projected gradient based on quadratic programming (QP). We theoretically analyze the convergence characteristics of CGA and demonstrate its efficiency on non-IID data distributions sampled from the MNIST and CIFAR-10 datasets. Our empirical comparisons show superior learning performance of CGA over existing state-of-the-art decentralized learning algorithms, as well as maintaining the improved performance under information compression to reduce peer-to-peer communication overhead. The code is available here on GitHub.
|
https://proceedings.mlr.press/v139/esfandiari21a.html
|
https://proceedings.mlr.press/v139/esfandiari21a.html
|
https://proceedings.mlr.press/v139/esfandiari21a.html
|
http://proceedings.mlr.press/v139/esfandiari21a/esfandiari21a.pdf
|
ICML 2021
|
|
Weight-covariance alignment for adversarially robust neural networks
|
Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales
|
Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks. However, existing SNNs are usually heuristically motivated, and often rely on adversarial training, which is computationally costly. We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training, and enjoys solid theoretical justification. Specifically, while existing SNNs inject learned or hand-tuned isotropic noise, our SNN learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness. We evaluate our method on a number of popular benchmarks, show that it can be applied to different architectures, and that it provides robustness to a variety of white-box and black-box attacks, while being simple and fast to train compared to existing alternatives.
|
https://proceedings.mlr.press/v139/eustratiadis21a.html
|
https://proceedings.mlr.press/v139/eustratiadis21a.html
|
https://proceedings.mlr.press/v139/eustratiadis21a.html
|
http://proceedings.mlr.press/v139/eustratiadis21a/eustratiadis21a.pdf
|
ICML 2021
|
|
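As a rough illustration of anisotropic noise injection in a stochastic layer, the sketch below draws pre-activation noise from a Gaussian whose covariance is aligned with W W^T. That alignment rule is an assumption made purely for illustration; the method itself learns the noise distribution by optimizing a learning-theoretic robustness bound, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_linear(x, W, b, noise_scale=0.1):
    """Forward pass of a linear layer that injects anisotropic Gaussian noise.
    The noise covariance is simply aligned with W @ W.T here; the actual method
    learns the covariance by optimizing a robustness bound."""
    cov = noise_scale * (W @ W.T) + 1e-6 * np.eye(W.shape[0])
    L = np.linalg.cholesky(cov)              # sample noise ~ N(0, cov)
    eps = L @ rng.normal(size=W.shape[0])
    return W @ x + b + eps

W = rng.normal(scale=0.1, size=(10, 32))     # a 32 -> 10 layer
b = np.zeros(10)
x = rng.normal(size=32)
print(stochastic_linear(x, W, b))            # a different noisy output on every call
```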
Data augmentation for deep learning based accelerated MRI reconstruction with limited data
|
Zalan Fabian, Reinhard Heckel, Mahdi Soltanolkotabi
|
Deep neural networks have emerged as very successful tools for image restoration and reconstruction tasks. These networks are often trained end-to-end to directly reconstruct an image from a noisy or corrupted measurement of that image. To achieve state-of-the-art performance, training on large and diverse sets of images is considered critical. However, it is often difficult and/or expensive to collect large amounts of training images. Inspired by the success of Data Augmentation (DA) for classification problems, in this paper, we propose a pipeline for data augmentation for accelerated MRI reconstruction and study its effectiveness at reducing the required training data in a variety of settings. Our DA pipeline, MRAugment, is specifically designed to utilize the invariances present in medical imaging measurements as naive DA strategies that neglect the physics of the problem fail. Through extensive studies on multiple datasets we demonstrate that in the low-data regime DA prevents overfitting and can match or even surpass the state of the art while using significantly fewer training data, whereas in the high-data regime it has diminishing returns. Furthermore, our findings show that DA improves the robustness of the model against various shifts in the test distribution.
|
https://proceedings.mlr.press/v139/fabian21a.html
|
https://proceedings.mlr.press/v139/fabian21a.html
|
https://proceedings.mlr.press/v139/fabian21a.html
|
http://proceedings.mlr.press/v139/fabian21a/fabian21a.pdf
|
ICML 2021
|
|
Poisson-Randomised DirBN: Large Mutation is Needed in Dirichlet Belief Networks
|
Xuhui Fan, Bin Li, Yaqiong Li, Scott A. Sisson
|
The Dirichlet Belief Network (DirBN) was recently proposed as a promising deep generative model to learn interpretable deep latent distributions for objects. However, its current representation capability is limited since its latent distributions across different layers are prone to forming similar patterns and can thus hardly exploit the multi-layer structure to form flexible distributions. In this work, we propose Poisson-randomised Dirichlet Belief Networks (Pois-DirBN), which allow large mutations for the latent distributions across layers to enlarge the representation capability. Based on our key idea of inserting Poisson random variables in the layer-wise connection, Pois-DirBN first introduces a component-wise propagation mechanism to enable latent distributions to have large variations across different layers. Then, we develop a layer-wise Gibbs sampling algorithm to infer the latent distributions, leading to a larger number of effective layers compared to DirBN. In addition, we integrate out latent distributions and form a multi-stochastic deep integer network, which provides an alternative view on Pois-DirBN. We apply Pois-DirBN to relational modelling and validate its effectiveness through improved link prediction performance and more interpretable latent distribution visualisations. The code can be downloaded at https://github.com/xuhuifan/Pois_DirBN.
|
https://proceedings.mlr.press/v139/fan21a.html
|
https://proceedings.mlr.press/v139/fan21a.html
|
https://proceedings.mlr.press/v139/fan21a.html
|
http://proceedings.mlr.press/v139/fan21a/fan21a.pdf
|
ICML 2021
|
|
Model-based Reinforcement Learning for Continuous Control with Posterior Sampling
|
Ying Fan, Yifei Ming
|
Balancing exploration and exploitation is crucial in reinforcement learning (RL). In this paper, we study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces theoretically and empirically. First, we show the first regret bound of PSRL in continuous spaces which is polynomial in the episode length to the best of our knowledge. With the assumption that reward and transition functions can be modeled by Bayesian linear regression, we develop a regret bound of $\tilde{O}(H^{3/2}d\sqrt{T})$, where $H$ is the episode length, $d$ is the dimension of the state-action space, and $T$ indicates the total time steps. This result matches the best-known regret bound of non-PSRL methods in linear MDPs. Our bound can be extended to nonlinear cases as well with feature embedding: using linear kernels on the feature representation $\phi$, the regret bound becomes $\tilde{O}(H^{3/2}d_{\phi}\sqrt{T})$, where $d_\phi$ is the dimension of the representation space. Moreover, we present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection. To capture the uncertainty in models, we use Bayesian linear regression on the penultimate layer (the feature representation layer $\phi$) of neural networks. Empirical results show that our algorithm achieves the state-of-the-art sample efficiency in benchmark continuous control tasks compared to prior model-based algorithms, and matches the asymptotic performance of model-free algorithms.
|
https://proceedings.mlr.press/v139/fan21b.html
|
https://proceedings.mlr.press/v139/fan21b.html
|
https://proceedings.mlr.press/v139/fan21b.html
|
http://proceedings.mlr.press/v139/fan21b/fan21b.pdf
|
ICML 2021
|
|
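The posterior-sampling ingredient, Bayesian linear regression on a fixed feature map, can be sketched as below. Phi stands in for penultimate-layer features phi(s, a); one model is drawn from the weight posterior per episode, and the MPC action-selection loop is omitted entirely, so this is only a sketch of the uncertainty-modelling step.

```python
import numpy as np

def blr_posterior(Phi, y, sigma2=0.1, prior_var=1.0):
    """Posterior mean and covariance of Bayesian linear regression y ~ Phi @ w + noise."""
    d = Phi.shape[1]
    precision = Phi.T @ Phi / sigma2 + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / sigma2
    return mean, cov

def sample_model(Phi, y, rng):
    """Posterior sampling: draw one plausible reward/dynamics model per episode."""
    mean, cov = blr_posterior(Phi, y)
    return rng.multivariate_normal(mean, cov)

rng = np.random.default_rng(0)
# stand-in for features phi(s, a) produced by the penultimate layer of a network
Phi = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = Phi @ true_w + 0.1 * rng.normal(size=200)   # observed rewards
w_sample = sample_model(Phi, y, rng)            # plan (e.g. with MPC) under this sampled model
print(np.linalg.norm(w_sample - true_w))        # sampled model is close to the true one
```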
SECANT: Self-Expert Cloning for Zero-Shot Generalization of Visual Policies
|
Linxi Fan, Guanzhi Wang, De-An Huang, Zhiding Yu, Li Fei-Fei, Yuke Zhu, Animashree Anandkumar
|
Generalization has been a long-standing challenge for reinforcement learning (RL). Visual RL, in particular, can be easily distracted by irrelevant factors in high-dimensional observation space. In this work, we consider robust policy learning which targets zero-shot generalization to unseen visual environments with large distributional shift. We propose SECANT, a novel self-expert cloning technique that leverages image augmentation in two stages to *decouple* robust representation learning from policy optimization. Specifically, an expert policy is first trained by RL from scratch with weak augmentations. A student network then learns to mimic the expert policy by supervised learning with strong augmentations, making its representation more robust against visual variations compared to the expert. Extensive experiments demonstrate that SECANT significantly advances the state of the art in zero-shot generalization across 4 challenging domains. Our average reward improvements over prior SOTAs are: DeepMind Control (+26.5%), robotic manipulation (+337.8%), vision-based autonomous driving (+47.7%), and indoor object navigation (+15.8%). Code release and video are available at https://linxifan.github.io/secant-site/.
|
https://proceedings.mlr.press/v139/fan21c.html
|
https://proceedings.mlr.press/v139/fan21c.html
|
https://proceedings.mlr.press/v139/fan21c.html
|
http://proceedings.mlr.press/v139/fan21c/fan21c.pdf
|
ICML 2021
|
|
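The second (cloning) stage can be sketched as supervised regression of student actions onto expert actions, with the expert seeing weakly augmented observations and the student seeing strongly augmented ones. Everything below is a toy stand-in: linear policies, Gaussian "augmentations", and synthetic observations; only the structure of the distillation step is meant to carry over.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_aug(obs):    # stand-in for a mild augmentation (e.g. random crop)
    return obs + 0.01 * rng.normal(size=obs.shape)

def strong_aug(obs):  # stand-in for an aggressive augmentation (e.g. jitter, cutout)
    return obs + 0.3 * rng.normal(size=obs.shape)

# frozen expert policy (first stage: trained with RL under weak augmentation)
W_expert = rng.normal(scale=0.1, size=(4, 64))
expert = lambda o: o @ W_expert.T

# student policy, trained by supervised cloning under strong augmentation
W_student = np.zeros((4, 64))
lr = 1e-2
for step in range(500):
    obs = rng.normal(size=(32, 64))            # stand-in for a batch of observations
    obs_s = strong_aug(obs)
    target = expert(weak_aug(obs))             # expert actions on weakly augmented views
    pred = obs_s @ W_student.T                 # student acts on strongly augmented views
    grad = 2 * (pred - target).T @ obs_s / len(obs)
    W_student -= lr * grad                     # gradient step on the cloning MSE
print(np.mean((W_student - W_expert) ** 2))    # cloned student ends up close to the expert
```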
On Estimation in Latent Variable Models
|
Guanhua Fang, Ping Li
|
Latent variable models have been playing a central role in statistics, econometrics, and machine learning, with applications to repeated observation studies, panel data inference, user behavior analysis, etc. In many modern applications, the inference based on latent variable models involves one or several of the following features: the presence of complex latent structure, the observed and latent variables being continuous or discrete, constraints on parameters, and data size being large. Therefore, solving an estimation problem for general latent variable models is highly non-trivial. In this paper, we consider a gradient-based method that uses a variance reduction technique to accelerate the estimation procedure. Theoretically, we show convergence results for the proposed method under general and mild model assumptions. The algorithm has better computational complexity compared with classical gradient methods and maintains nice statistical properties. Various numerical results corroborate our theory.
|
https://proceedings.mlr.press/v139/fang21a.html
|
https://proceedings.mlr.press/v139/fang21a.html
|
https://proceedings.mlr.press/v139/fang21a.html
|
http://proceedings.mlr.press/v139/fang21a/fang21a.pdf
|
ICML 2021
|
|
On Variational Inference in Biclustering Models
|
Guanhua Fang, Ping Li
|
Biclustering structures exist ubiquitously in data matrices and the biclustering problem was first formalized by John Hartigan (1972) to cluster rows and columns simultaneously. In this paper, we develop a theory for the estimation of general biclustering models, where the data is assumed to follow a certain statistical distribution with underlying biclustering structure. Due to the existence of latent variables, directly computing the maximum likelihood estimator is prohibitively difficult in practice and we instead consider the variational inference (VI) approach to solve the parameter estimation problem. Although variational inference methods generally have good empirical performance, there are very few theoretical results around VI. In this paper, we obtain a precise estimation bound for the variational estimator and show that it matches the minimax rate in terms of estimation error under mild assumptions in the biclustering setting. Furthermore, we study the convergence properties of the coordinate ascent variational inference algorithm, providing both local and global convergence results. Numerical results validate our new theories.
|
https://proceedings.mlr.press/v139/fang21b.html
|
https://proceedings.mlr.press/v139/fang21b.html
|
https://proceedings.mlr.press/v139/fang21b.html
|
http://proceedings.mlr.press/v139/fang21b/fang21b.pdf
|
ICML 2021
|
|
Learning Bounds for Open-Set Learning
|
Zhen Fang, Jie Lu, Anjin Liu, Feng Liu, Guangquan Zhang
|
Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where there exist test samples from the classes that are unseen during training. Although researchers have designed many methods from the algorithmic perspective, there are few methods that provide generalization guarantees on their ability to achieve consistent performance on different training samples drawn from the same distribution. Motivated by the transfer learning and probably approximately correct (PAC) theory, we make a bold attempt to study OSL by proving its generalization error: given training samples with size n, the estimation error will get close to order $O_p(1/\sqrt{n})$. This is the first study to provide a generalization bound for OSL, which we do by theoretically investigating the risk of the target classifier on unknown classes. According to our theory, a novel algorithm, called auxiliary open-set risk (AOSR), is proposed to address the OSL problem. Experiments verify the efficacy of AOSR. The code is available at github.com/AnjinLiu/Openset_Learning_AOSR.
|
https://proceedings.mlr.press/v139/fang21c.html
|
https://proceedings.mlr.press/v139/fang21c.html
|
https://proceedings.mlr.press/v139/fang21c.html
|
http://proceedings.mlr.press/v139/fang21c/fang21c.pdf
|
ICML 2021
|
|
Streaming Bayesian Deep Tensor Factorization
|
Shikai Fang, Zheng Wang, Zhimeng Pan, Ji Liu, Shandian Zhe
|
Despite the success of existing tensor factorization methods, most of them conduct a multilinear decomposition, and rarely exploit powerful modeling frameworks, like deep neural networks, to capture a variety of complicated interactions in data. More importantly, for highly expressive, deep factorization, we lack an effective approach to handle streaming data, which are ubiquitous in real-world applications. To address these issues, we propose SBTD, a Streaming Bayesian Deep Tensor factorization method. We first use Bayesian neural networks (NNs) to build a deep tensor factorization model. We assign a spike-and-slab prior over each NN weight to encourage sparsity and to prevent overfitting. We then use the multivariate delta method and moment matching to approximate the posterior of the NN output and calculate the running model evidence, based on which we develop an efficient streaming posterior inference algorithm in the assumed-density-filtering and expectation propagation framework. Our algorithm provides responsive incremental updates for the posterior of the latent factors and NN weights upon receiving newly observed tensor entries, and meanwhile identifies and inhibits redundant/useless weights. We show the advantages of our approach in four real-world applications.
|
https://proceedings.mlr.press/v139/fang21d.html
|
https://proceedings.mlr.press/v139/fang21d.html
|
https://proceedings.mlr.press/v139/fang21d.html
|
http://proceedings.mlr.press/v139/fang21d/fang21d.pdf
|
ICML 2021
|
|
PID Accelerated Value Iteration Algorithm
|
Amir-Massoud Farahmand, Mohammad Ghavamzadeh
|
The convergence rate of Value Iteration (VI), a fundamental procedure in dynamic programming and reinforcement learning, for solving MDPs can be slow when the discount factor is close to one. We propose modifications to VI in order to potentially accelerate its convergence behaviour. The key insight is the realization that the evolution of the value function approximations $(V_k)_{k \geq 0}$ in the VI procedure can be seen as a dynamical system. This opens up the possibility of using techniques from \emph{control theory} to modify, and potentially accelerate, this dynamics. We present such modifications based on simple controllers, such as PD (Proportional-Derivative), PI (Proportional-Integral), and PID. We present the error dynamics of these variants of VI, and provably (for certain classes of MDPs) and empirically (for more general classes) show that the convergence rate can be significantly improved. We also propose a gain adaptation mechanism in order to automatically select the controller gains, and empirically show the effectiveness of this procedure.
|
https://proceedings.mlr.press/v139/farahmand21a.html
|
https://proceedings.mlr.press/v139/farahmand21a.html
|
https://proceedings.mlr.press/v139/farahmand21a.html
|
http://proceedings.mlr.press/v139/farahmand21a/farahmand21a.pdf
|
ICML 2021
|
|
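The controlled update can be sketched on a small random MDP: treat the Bellman residual as the error signal and apply a PD correction to the value estimate. The gains below are hand-picked for illustration only (the paper also proposes automatic gain adaptation), and with k_p = 1, k_d = 0 the update reduces to standard value iteration.

```python
import numpy as np

def pd_value_iteration(P, R, gamma=0.95, k_p=1.0, k_d=0.15, iters=300):
    """Value iteration with a PD controller acting on the Bellman residual.
    k_p = 1, k_d = 0 recovers standard value iteration; the gains here are
    hand-picked for illustration."""
    V = np.zeros(R.shape[0])
    e_prev = np.zeros_like(V)
    for _ in range(iters):
        Q = R + gamma * np.einsum('axy,y->ax', P, V).T   # Q[s, a]
        e = Q.max(axis=1) - V                            # Bellman residual (T V)(s) - V(s)
        V = V + k_p * e + k_d * (e - e_prev)             # PD-controlled update
        e_prev = e
    Q = R + gamma * np.einsum('axy,y->ax', P, V).T
    return V, np.abs(Q.max(axis=1) - V).max()            # final sup-norm residual

rng = np.random.default_rng(0)
n_states, n_actions = 20, 4
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)                        # row-stochastic transitions P[a, s, s']
R = rng.random((n_states, n_actions))                    # rewards R[s, a]
V, residual = pd_value_iteration(P, R)
print(residual)
```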
Near-Optimal Entrywise Anomaly Detection for Low-Rank Matrices with Sub-Exponential Noise
|
Vivek Farias, Andrew A Li, Tianyi Peng
|
We study the problem of identifying anomalies in a low-rank matrix observed with sub-exponential noise, motivated by applications in retail and inventory management. State of the art approaches to anomaly detection in low-rank matrices apparently fall short, since they require that non-anomalous entries be observed with vanishingly small noise (which is not the case in our problem, and indeed in many applications). So motivated, we propose a conceptually simple entrywise approach to anomaly detection in low-rank matrices. Our approach accommodates a general class of probabilistic anomaly models. We extend recent work on entrywise error guarantees for matrix completion, establishing such guarantees for sub-exponential matrices, where in addition to missing entries, a fraction of entries are corrupted by (an also unknown) anomaly model. Viewing the anomaly detection as a classification task, to the best of our knowledge, we are the first to achieve the min-max optimal detection rate (up to log factors). Using data from a massive consumer goods retailer, we show that our approach provides significant improvements over incumbent approaches to anomaly detection.
|
https://proceedings.mlr.press/v139/farias21a.html
|
https://proceedings.mlr.press/v139/farias21a.html
|
https://proceedings.mlr.press/v139/farias21a.html
|
http://proceedings.mlr.press/v139/farias21a/farias21a.pdf
|
ICML 2021
|
|
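A generic entrywise residual test conveys the flavour of the approach: fit a low-rank approximation, then flag entries whose residual is far outside the estimated noise scale. The rank-r SVD fit, the Laplace noise, and the ad hoc threshold below are all illustrative assumptions, not the paper's estimator or its min-max optimal detection rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# ground-truth rank-3 matrix plus sub-exponential (Laplace) noise
n, d, r = 100, 80, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, d))
X = M + rng.laplace(scale=0.3, size=(n, d))

# plant a few anomalous entries
anomalies = [(5, 7), (40, 12), (77, 60)]
for i, j in anomalies:
    X[i, j] += 8.0

# rank-r fit via truncated SVD, then an entrywise residual test
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
resid = X - X_hat

# flag entries whose residual is far outside the typical noise scale
scale = np.median(np.abs(resid)) / np.log(2)     # rough Laplace scale estimate
flagged = np.argwhere(np.abs(resid) > 12 * scale)
print(flagged)                                   # the planted anomalies should be among these
```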
Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results
|
Gabriele Farina, Andrea Celli, Nicola Gatti, Tuomas Sandholm
|
We focus on the problem of finding an optimal strategy for a team of players that faces an opponent in an imperfect-information zero-sum extensive-form game. Team members are not allowed to communicate during play but can coordinate before the game. In this setting, it is known that the best the team can do is sample a profile of potentially randomized strategies (one per player) from a joint (a.k.a. correlated) probability distribution at the beginning of the game. In this paper, we first provide new modeling results about computing such an optimal distribution by drawing a connection to a different literature on extensive-form correlation. Second, we provide an algorithm that allows one to cap the number of profiles employed in the solution. Increasing the cap then yields an anytime algorithm. We find that often a handful of well-chosen such profiles suffices to reach optimal utility for the team. This enables team members to reach coordination through a simple and understandable plan. Finally, inspired by this observation and leveraging theoretical concepts that we introduce, we develop an efficient column-generation algorithm for finding an optimal distribution for the team. We evaluate it on a suite of common benchmark games. It is three orders of magnitude faster than the prior state of the art on games that the latter can solve, and it can also solve several games that were previously unsolvable.
|
https://proceedings.mlr.press/v139/farina21a.html
|
https://proceedings.mlr.press/v139/farina21a.html
|
https://proceedings.mlr.press/v139/farina21a.html
|
http://proceedings.mlr.press/v139/farina21a/farina21a.pdf
|
ICML 2021
|
|
Train simultaneously, generalize better: Stability of gradient-based minimax learners
|
Farzan Farnia, Asuman Ozdaglar
|
The success of minimax learning problems of generative adversarial networks (GANs) has been observed to depend on the minimax optimization algorithm used for their training. This dependence is commonly attributed to the convergence speed and robustness properties of the underlying optimization algorithm. In this paper, we show that the optimization algorithm also plays a key role in the generalization performance of the trained minimax model. To this end, we analyze the generalization properties of standard gradient descent ascent (GDA) and proximal point method (PPM) algorithms through the lens of algorithmic stability as defined by Bousquet & Elisseeff, 2002 under both convex-concave and nonconvex-nonconcave minimax settings. While the GDA algorithm is not guaranteed to have a vanishing excess risk in convex-concave problems, we show the PPM algorithm enjoys a bounded excess risk in the same setup. For nonconvex-nonconcave problems, we compare the generalization performance of stochastic GDA and GDmax algorithms where the latter fully solves the maximization subproblem at every iteration. Our generalization analysis suggests the superiority of GDA provided that the minimization and maximization subproblems are solved simultaneously with similar learning rates. We discuss several numerical results indicating the role of optimization algorithms in the generalization of learned minimax models.
|
https://proceedings.mlr.press/v139/farnia21a.html
|
https://proceedings.mlr.press/v139/farnia21a.html
|
https://proceedings.mlr.press/v139/farnia21a.html
|
http://proceedings.mlr.press/v139/farnia21a/farnia21a.pdf
|
ICML 2021
|
|
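The two update rules being compared, gradient descent ascent and the proximal point method, can be written out on the classic bilinear toy problem f(x, y) = xy, where the PPM update has a closed form. This only illustrates the update rules themselves; the paper's analysis concerns algorithmic stability and generalization rather than this convergence behaviour.

```python
import numpy as np

def gda_step(x, y, lr):
    """Simultaneous gradient descent ascent on f(x, y) = x * y."""
    return x - lr * y, y + lr * x

def ppm_step(x, y, lr):
    """Proximal point update on f(x, y) = x * y (closed form on this problem)."""
    return (x - lr * y) / (1 + lr**2), (y + lr * x) / (1 + lr**2)

x_g = y_g = x_p = y_p = 1.0
lr = 0.1
for _ in range(200):
    x_g, y_g = gda_step(x_g, y_g, lr)
    x_p, y_p = ppm_step(x_p, y_p, lr)
# GDA spirals away from the saddle point (0, 0); PPM contracts towards it
print(np.hypot(x_g, y_g), np.hypot(x_p, y_p))
```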
Unbalanced minibatch Optimal Transport; applications to Domain Adaptation
|
Kilian Fatras, Thibault Sejourne, Rémi Flamary, Nicolas Courty
|
Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity generally prevents their direct use on large scale datasets. Among the possible strategies to alleviate this issue, practitioners can rely on computing estimates of these distances over subsets of data, i.e. minibatches. While computationally appealing, we highlight in this paper some limits of this strategy, arguing it can lead to undesirable smoothing effects. As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behaviors. We discuss the associated theoretical properties, such as unbiased estimators, existence of gradients and concentration bounds. Our experimental study shows that in challenging problems associated to domain adaptation, the use of unbalanced optimal transport leads to significantly better results, competing with or surpassing recent baselines.
|
https://proceedings.mlr.press/v139/fatras21a.html
|
https://proceedings.mlr.press/v139/fatras21a.html
|
https://proceedings.mlr.press/v139/fatras21a.html
|
http://proceedings.mlr.press/v139/fatras21a/fatras21a.pdf
|
ICML 2021
|
|
Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach
|
Yingjie Fei, Zhuoran Yang, Zhaoran Wang
|
We study function approximation for episodic reinforcement learning with entropic risk measure. We first propose an algorithm with linear function approximation. Compared to existing algorithms, which suffer from improper regularization and regression biases, this algorithm features debiasing transformations in backward induction and regression procedures. We further propose an algorithm with general function approximation, which features implicit debiasing transformations. We prove that both algorithms achieve a sublinear regret and demonstrate a trade-off between generality and efficiency. Our analysis provides a unified framework for function approximation in risk-sensitive reinforcement learning, which leads to the first sublinear regret bounds in the setting.
|
https://proceedings.mlr.press/v139/fei21a.html
|
https://proceedings.mlr.press/v139/fei21a.html
|
https://proceedings.mlr.press/v139/fei21a.html
|
http://proceedings.mlr.press/v139/fei21a/fei21a.pdf
|
ICML 2021
|
|
Lossless Compression of Efficient Private Local Randomizers
|
Vitaly Feldman, Kunal Talwar
|
Locally Differentially Private (LDP) Reports are commonly used for collection of statistics and machine learning in the federated setting. In many cases the best known LDP algorithms require sending prohibitively large messages from the client device to the server (such as when constructing histograms over a large domain or learning a high-dimensional model). Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees. The practical implication of our result is that in typical applications every message can be compressed to the size of the server’s pseudo-random generator seed. From this general approach we derive low-communication algorithms for the problems of frequency estimation and high-dimensional mean estimation. Our algorithms are simpler and more accurate than existing low-communication LDP algorithms for these well-studied problems.
|
https://proceedings.mlr.press/v139/feldman21a.html
|
https://proceedings.mlr.press/v139/feldman21a.html
|
https://proceedings.mlr.press/v139/feldman21a.html
|
http://proceedings.mlr.press/v139/feldman21a/feldman21a.pdf
|
ICML 2021
|
|
Dimensionality Reduction for the Sum-of-Distances Metric
|
Zhili Feng, Praneeth Kacham, David Woodruff
|
We give a dimensionality reduction procedure to approximate the sum of distances of a given set of $n$ points in $R^d$ to any “shape” that lies in a $k$-dimensional subspace. Here, by “shape” we mean any set of points in $R^d$. Our algorithm takes an input in the form of an $n \times d$ matrix $A$, where each row of $A$ denotes a data point, and outputs a subspace $P$ of dimension $O(k^{3}/\epsilon^6)$ such that the projections of each of the $n$ points onto the subspace $P$ and the distances of each of the points to the subspace $P$ are sufficient to obtain an $\epsilon$-approximation to the sum of distances to any arbitrary shape that lies in a $k$-dimensional subspace of $R^d$. These include important problems such as $k$-median, $k$-subspace approximation, and $(j,l)$ subspace clustering with $j \cdot l \leq k$. Dimensionality reduction reduces the data storage requirement to $(n+d)k^{3}/\epsilon^6$ from nnz$(A)$. Here nnz$(A)$ could potentially be as large as $nd$. Our algorithm runs in time nnz$(A)/\epsilon^2 + (n+d)$poly$(k/\epsilon)$, up to logarithmic factors. For dense matrices, where nnz$(A) \approx nd$, we give a faster algorithm, that runs in time $nd + (n+d)$poly$(k/\epsilon)$ up to logarithmic factors. Our dimensionality reduction algorithm can also be used to obtain poly$(k/\epsilon)$ size coresets for $k$-median and $(k,1)$-subspace approximation problems in polynomial time.
|
https://proceedings.mlr.press/v139/feng21a.html
|
https://proceedings.mlr.press/v139/feng21a.html
|
https://proceedings.mlr.press/v139/feng21a.html
|
http://proceedings.mlr.press/v139/feng21a/feng21a.pdf
|
ICML 2021
|
|
Reserve Price Optimization for First Price Auctions in Display Advertising
|
Zhe Feng, Sebastien Lahaie, Jon Schneider, Jinchao Ye
|
The display advertising industry has recently transitioned from second- to first-price auctions as its primary mechanism for ad allocation and pricing. In light of this, publishers need to re-evaluate and optimize their auction parameters, notably reserve prices. In this paper, we propose a gradient-based algorithm to adaptively update and optimize reserve prices based on estimates of bidders’ responsiveness to experimental shocks in reserves. Our key innovation is to draw on the inherent structure of the revenue objective in order to reduce the variance of gradient estimates and improve convergence rates in both theory and practice. We show that revenue in a first-price auction can be usefully decomposed into a \emph{demand} component and a \emph{bidding} component, and introduce techniques to reduce the variance of each component. We characterize the bias-variance trade-offs of these techniques and validate the performance of our proposed algorithm through experiments on synthetic data and real display ad auctions data from a major ad exchange.
|
https://proceedings.mlr.press/v139/feng21b.html
|
https://proceedings.mlr.press/v139/feng21b.html
|
https://proceedings.mlr.press/v139/feng21b.html
|
http://proceedings.mlr.press/v139/feng21b/feng21b.pdf
|
ICML 2021
|
|
Uncertainty Principles of Encoding GANs
|
Ruili Feng, Zhouchen Lin, Jiapeng Zhu, Deli Zhao, Jingren Zhou, Zheng-Jun Zha
|
The compelling synthesis results of Generative Adversarial Networks (GANs) demonstrate rich semantic knowledge in their latent codes. To obtain this knowledge for downstream applications, encoding GANs has been proposed to learn encoders, such that real world data can be encoded to latent codes, which can be fed to generators to reconstruct those data. However, despite the theoretical guarantees of precise reconstruction in previous works, current algorithms generally reconstruct inputs with non-negligible deviations from inputs. In this paper we study this predicament of encoding GANs, which is indispensable research for the GAN community. We prove three uncertainty principles of encoding GANs in practice: a) the ‘perfect’ encoder and generator cannot be continuous at the same time, which implies that current framework of encoding GANs is ill-posed and needs rethinking; b) neural networks cannot approximate the underlying encoder and generator precisely at the same time, which explains why we cannot get ‘perfect’ encoders and generators as promised in previous theories; c) neural networks cannot be stable and accurate at the same time, which demonstrates the difficulty of training and trade-off between fidelity and disentanglement encountered in previous works. Our work may eliminate gaps between previous theories and empirical results, promote the understanding of GANs, and guide network designs for follow-up works.
|
https://proceedings.mlr.press/v139/feng21c.html
|
https://proceedings.mlr.press/v139/feng21c.html
|
https://proceedings.mlr.press/v139/feng21c.html
|
http://proceedings.mlr.press/v139/feng21c/feng21c.pdf
|
ICML 2021
|
|
Pointwise Binary Classification with Pairwise Confidence Comparisons
|
Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama
|
To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed. Among them, some consider using pairwise but not pointwise labels, when pointwise labels are not accessible due to privacy, confidentiality, or security reasons. However, as a pairwise label denotes whether or not two data points share a pointwise label, it cannot be easily collected if either point is equally likely to be positive or negative. Thus, in this paper, we propose a novel setting called pairwise comparison (Pcomp) classification, where we have only pairs of unlabeled data that we know one is more likely to be positive than the other. Firstly, we give a Pcomp data generation process, derive an unbiased risk estimator (URE) with theoretical guarantee, and further improve URE using correction functions. Secondly, we link Pcomp classification to noisy-label learning to develop a progressive URE and improve it by imposing consistency regularization. Finally, we demonstrate by experiments the effectiveness of our methods, which suggests Pcomp is a valuable and practically useful type of pairwise supervision besides the pairwise label.
|
https://proceedings.mlr.press/v139/feng21d.html
|
https://proceedings.mlr.press/v139/feng21d.html
|
https://proceedings.mlr.press/v139/feng21d.html
|
http://proceedings.mlr.press/v139/feng21d/feng21d.pdf
|
ICML 2021
|
|
Provably Correct Optimization and Exploration with Non-linear Policies
|
Fei Feng, Wotao Yin, Alekh Agarwal, Lin Yang
|
Policy optimization methods remain a powerful workhorse in empirical Reinforcement Learning (RL), with a focus on neural policies that can easily reason over complex and continuous state and/or action spaces. Theoretical understanding of strategic exploration in policy-based methods with non-linear function approximation, however, is largely missing. In this paper, we address this question by designing ENIAC, an actor-critic method that allows non-linear function approximation in the critic. We show that under certain assumptions, e.g., a bounded eluder dimension $d$ for the critic class, the learner finds a near-optimal policy in $\widetilde{O}(\mathrm{poly}(d))$ exploration rounds. The method is robust to model misspecification and strictly extends existing works on linear function approximation. We also develop some computational optimizations of our approach with slightly worse statistical guarantees, and an empirical adaptation building on existing deep RL tools. We empirically evaluate this adaptation, and show that it outperforms prior heuristics inspired by linear methods, establishing the value in correctly reasoning about the agent’s uncertainty under non-linear function approximation.
|
https://proceedings.mlr.press/v139/feng21e.html
|
https://proceedings.mlr.press/v139/feng21e.html
|
https://proceedings.mlr.press/v139/feng21e.html
|
http://proceedings.mlr.press/v139/feng21e/feng21e.pdf
|
ICML 2021
|
|
KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation
|
Haozhe Feng, Zhaoyang You, Minghao Chen, Tianye Zhang, Minfeng Zhu, Fei Wu, Chao Wu, Wei Chen
|
Conventional unsupervised multi-source domain adaptation (UMDA) methods assume all source domains can be accessed directly. However, this assumption neglects the privacy-preserving policy, where all the data and computations must be kept decentralized. There exist three challenges in this scenario: (1) Minimizing the domain distance requires the pairwise calculation of the data from the source and target domains, while the data on the source domain is not available. (2) The communication cost and privacy security limit the application of existing UMDA methods, such as the domain adversarial training. (3) Since users cannot govern the data quality, the irrelevant or malicious source domains are more likely to appear, which causes negative transfer. To address the above problems, we propose a privacy-preserving UMDA paradigm named Knowledge Distillation based Decentralized Domain Adaptation (KD3A), which performs domain adaptation through the knowledge distillation on models from different source domains. The extensive experiments show that KD3A significantly outperforms state-of-the-art UMDA approaches. Moreover, the KD3A is robust to the negative transfer and brings a 100x reduction of communication cost compared with other decentralized UMDA methods.
|
https://proceedings.mlr.press/v139/feng21f.html
|
https://proceedings.mlr.press/v139/feng21f.html
|
https://proceedings.mlr.press/v139/feng21f.html
|
http://proceedings.mlr.press/v139/feng21f/feng21f.pdf
|
ICML 2021
|
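A heavily simplified view of the decentralized distillation step: only source-trained models (not data) are available, so target-side pseudo-labels are formed by combining the sources' soft predictions, with less reliable sources down-weighted. The agreement-based weighting below is an illustrative stand-in, not the paper's knowledge-vote or consensus mechanisms, and the actual domain-adaptation losses are omitted.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_targets(source_probs):
    """Combine soft predictions of several source-domain models on unlabeled
    target data, down-weighting sources that disagree with the consensus."""
    consensus = source_probs.mean(axis=0)                        # (batch, classes)
    # agreement of each source with the consensus (higher = more trusted)
    agreement = np.exp(-((source_probs - consensus) ** 2).mean(axis=(1, 2)))
    weights = agreement / agreement.sum()
    return np.einsum('k,kbc->bc', weights, source_probs), weights

rng = np.random.default_rng(0)
n_sources, batch, n_classes = 3, 16, 5
logits = rng.normal(size=(n_sources, batch, n_classes))          # stand-in source predictions
source_probs = softmax(logits)
targets, weights = distillation_targets(source_probs)
print(weights)        # per-source weights; `targets` would supervise the target model
```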