text (stringlengths 8–3.91k) | label (int64 0–10)
---|---|
abstract: we study the regularity properties of gaussian fields defined
over spheres cross time. in particular, we consider two alternative spectral
decompositions for a gaussian field on s^d × r. for each decomposition, we
establish regularity properties through sobolev and interpolation spaces.
we then propose a simulation method and study its level of accuracy in
the l2 sense. the method turns out to be both fast and efficient.
msc 2010 subject classifications: primary 60g60, 60g17, 41a25; secondary 60g15, 33c55, 46e35, 33c45.
keywords and phrases: gaussian random fields, global data, big data,
space-time covariance, karhunen-loève expansion, spherical harmonics functions, schoenberg’s functions.
| 10 |
abstract
in this paper, we address the problem of sampling from, and reconstructing, a set stored
as a bloom filter. to the best of our knowledge, our work is the first to address this question.
we introduce a novel hierarchical data structure called bloomsampletree that helps us design
efficient algorithms to extract an almost uniform sample from the set stored in a bloom filter and
also allows us to reconstruct the set efficiently. in the case where the hash functions used in the
bloom filter implementation are partially invertible, in the sense that it is easy to calculate the
set of elements that map to a particular hash value, we propose a second, more space-efficient
method called hashinvert for the reconstruction. we study the properties of these two methods
both analytically as well as experimentally. we provide bounds on run times for both methods
and sample quality for the bloomsampletree based algorithm, and show through an extensive
experimental evaluation that our methods are efficient and effective.
| 8 |
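the abstract above takes a bloom filter as its starting point. as background, a minimal bloom filter (insert/query only, not the paper's bloomsampletree or hashinvert structures) can be sketched as follows; the bit-array size, hash count, and use of python's `hashlib` are illustrative choices, not taken from the paper.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array.

    Queries may return false positives but never false negatives.
    """

    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item: str):
        # Derive k positions by salting a SHA-256 digest with the index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))
```

note that only the bit array is stored: recovering the original set, which is the reconstruction problem the paper studies, needs either auxiliary structure or partially invertible hashes.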
abstract
a (δ ≥ k1, δ ≥ k2)-partition of a graph g is a vertex-partition (v1, v2) of g satisfying that
δ(g[vi]) ≥ ki for i = 1, 2. we determine, for all positive integers k1, k2, the complexity of deciding
whether a given graph has a (δ ≥ k1, δ ≥ k2)-partition.
we also address the problem of finding a function g(k1, k2) such that the (δ ≥ k1, δ ≥ k2)-partition
problem is np-complete for the class of graphs of minimum degree less than g(k1, k2) and polynomial for all graphs with minimum degree at least g(k1, k2). we prove that g(1, k) = k for k ≥ 3,
that g(2, 2) = 3 and that g(2, 3), if it exists, has value 4 or 5.
keywords: np-complete, polynomial, 2-partition, minimum degree.
| 8 |
abstract
we re-investigate a fundamental question: how effective is crossover in genetic algorithms in combining building blocks of good solutions? although this has been
discussed controversially for decades, we are still lacking a rigorous and intuitive answer. we provide such answers for royal road functions and onemax, where every bit is a building block. for the latter we show that using crossover makes every
(µ+λ) genetic algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate µ and λ.
crossover is beneficial because it effectively turns fitness-neutral mutations into improvements by combining the right building blocks at a later stage. compared to
mutation-based evolutionary algorithms, this makes multi-bit mutations more useful. introducing
crossover changes the optimal mutation rate on onemax from 1/n
to (1 + √5)/2 · 1/n ≈ 1.618/n. this holds both for uniform crossover and k-point
crossover. experiments and statistical tests confirm that our findings apply to a broad
class of building-block functions.
keywords
genetic algorithms, crossover, recombination, mutation rate, runtime analysis, theory.
| 9 |
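a minimal sketch of the setting the abstract analyses: a (2+1) genetic algorithm on onemax with uniform crossover and standard bit mutation at rate c/n. the population size, iteration cap, and seed below are arbitrary illustrative choices; this is a toy of the analysed scheme, not the paper's exact algorithm or proof.

```python
import random

def onemax(x):
    """Fitness: number of one-bits (every bit is a building block)."""
    return sum(x)

def ga_onemax(n=20, c=1.618, max_evals=50_000, seed=1):
    """(2+1) GA: uniform crossover of the two parents, then bitwise
    mutation with probability c/n; the child replaces the worse parent
    if it is no worse. Returns the number of evaluations used."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(2)]
    for evals in range(max_evals):
        p, q = pop
        child = [rng.choice((a, b)) for a, b in zip(p, q)]              # uniform crossover
        child = [1 - b if rng.random() < c / n else b for b in child]   # standard bit mutation
        worst = min(range(2), key=lambda i: onemax(pop[i]))
        if onemax(child) >= onemax(pop[worst]):
            pop[worst] = child
        if max(onemax(x) for x in pop) == n:
            return evals + 1
    return max_evals
```

on such a tiny instance the mutation rate 1.618/n from the abstract and the classic 1/n behave similarly; the factor-of-two speedup is an asymptotic statement.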
abstract
in hyperspectral images, some spectral bands suffer from low signal-to-noise ratio due to noisy acquisition and atmospheric
effects, thus requiring robust techniques for the unmixing problem. this paper presents a robust supervised spectral unmixing
approach for hyperspectral images. the robustness is achieved by writing the unmixing problem as the maximization of the
correntropy criterion subject to the most commonly used constraints. two unmixing problems are derived: the first problem
considers the fully-constrained unmixing, with both the non-negativity and sum-to-one constraints, while the second one deals with
the non-negativity constraint and the sparsity promotion of the abundances. the corresponding optimization problems are solved efficiently
using an alternating direction method of multipliers (admm) approach. experiments on synthetic and real hyperspectral images
validate the performance of the proposed algorithms for different scenarios, demonstrating that the correntropy-based unmixing
is robust to outlier bands.
| 9 |
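the correntropy criterion the abstract maximizes is, in its most common form, a mean of gaussian kernels of the residuals. a small stand-alone illustration follows (the kernel width σ is chosen arbitrarily); this is only the criterion itself, not the paper's admm solver or the unmixing constraints.

```python
import math

def correntropy(x, y, sigma=1.0):
    """Sample correntropy: mean Gaussian kernel of elementwise residuals.

    Large residuals (e.g. outlier bands) are smoothly saturated by the
    kernel, which is what makes correntropy-based fitting robust.
    """
    assert len(x) == len(y)
    return sum(math.exp(-(a - b) ** 2 / (2 * sigma ** 2))
               for a, b in zip(x, y)) / len(x)
```

identical signals give correntropy 1, and a single wildly corrupted band lowers it by at most 1/N, whereas that band can dominate a squared-error loss.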
abstract. binomial ideals are special polynomial ideals with many algorithmically
and theoretically nice properties. we discuss the problem of deciding if a given polynomial ideal is binomial. while the methods are general, our main motivation and
source of examples is the simplification of steady state equations of chemical reaction
networks. for homogeneous ideals we give an efficient, gröbner-free algorithm for
binomiality detection, based on linear algebra only. on inhomogeneous input the algorithm can only give a sufficient condition for binomiality. as a remedy we construct a
heuristic toolbox that can lead to simplifications even if the given ideal is not binomial.
| 0 |
abstract
repetitive scenario design (rsd) is a randomized approach to robust design based on iterating two phases: a
standard scenario design phase that uses n scenarios (design samples), followed by a randomized feasibility phase that
uses n_o test samples on the scenario solution. we give a full and exact probabilistic characterization of the number
of iterations required by the rsd approach for returning a solution, as a function of n, n_o, and of the desired levels
of probabilistic robustness in the solution. this novel approach broadens the applicability of the scenario technology,
since the user is now presented with a clear tradeoff between the number n of design samples and the ensuing
expected number of repetitions required by the rsd algorithm. the plain (one-shot) scenario design becomes just
one of the possibilities, sitting at one extreme of the tradeoff curve, in which one insists on finding a solution in a
single repetition: this comes at the cost of a possibly high n. other possibilities along the tradeoff curve use lower n
values, but possibly require more than one repetition.
keywords
scenario design, probabilistic robustness, randomized algorithms, random convex programs.
| 3 |
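the tradeoff described in the abstract can be made concrete with a back-of-the-envelope model: if each rsd iteration (design on n samples, then test on n_o samples) passes the feasibility test independently with probability p(n), the number of repetitions is geometric with mean 1/p(n). the pass probabilities below are invented for illustration; the paper's exact probabilistic characterization is more refined than this sketch.

```python
def expected_repetitions(pass_probability: float) -> float:
    """Mean of a geometric random variable: average number of RSD
    iterations until the scenario solution passes the feasibility test."""
    assert 0.0 < pass_probability <= 1.0
    return 1.0 / pass_probability

# Hypothetical tradeoff: more design samples n -> higher pass probability,
# hence fewer expected repetitions (one-shot design sits at p = 1).
tradeoff = {100: 0.25, 500: 0.8, 2000: 1.0}
expected = {n: expected_repetitions(p) for n, p in tradeoff.items()}
```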
abstract—active transport is sought in molecular communication to extend coverage, improve reliability, and mitigate
interference. one such active mechanism inherent to many liquid
environments is fluid flow. flow models are often over-simplified,
e.g., assuming one-dimensional diffusion with constant drift.
however, diffusion and flow are usually encountered in three-dimensional bounded environments where the flow is highly
non-uniform, such as in blood vessels or microfluidic channels.
for a qualitative understanding of the relevant physical effects
inherent to these channels, a systematic framework is provided
based on the péclet number and the ratio of transmitter-receiver
distance to duct radius. we review the relevant laws of physics
and highlight when simplified models of uniform flow and
advection-only transport are applicable. for several molecular
communication setups, we highlight the effect of different flow
scenarios on the channel impulse response.
| 7 |
abstract
in this paper, we develop new tools and connections for exponential time approximation. in this
setting, we are given a problem instance and a parameter α > 1, and the goal is to design an
α-approximation algorithm with the fastest possible running time. we show the following results:
1. an r-approximation for maximum independent set in o∗(exp(õ(n/(r log² r) + r log² r))) time,
2. an r-approximation for chromatic number in o∗(exp(õ(n/(r log r) + r log² r))) time,
3. a (2 − 1/r)-approximation for minimum vertex cover in o∗(exp(n/r^Ω(r))) time, and
4. a (k − 1/r)-approximation for minimum k-hypergraph vertex cover in o∗(exp(n/(kr)^Ω(kr))) time.
(throughout, õ and o∗ omit polyloglog(r) factors and factors polynomial in the input size, respectively.)
the best known time bounds for all problems were o∗(2^(n/r)) [bourgeois et al. 2009, 2011 &
cygan et al. 2008]. for maximum independent set and chromatic number, these bounds were
complemented by exp(n^(1−o(1))/r^(1+o(1))) lower bounds (under the exponential time hypothesis
(eth)) [chalermsook et al., 2013 & laekhanukit, 2014 (ph.d. thesis)]. our results show that
the natural-looking o∗(2^(n/r)) bounds are not tight for all these problems. the key to these
algorithmic results is a sparsification procedure that reduces a problem to its bounded-degree
variant, allowing the use of better approximation algorithms for bounded degree graphs. for
obtaining the first two results, we introduce a new randomized branching rule.
finally, we show a connection between pcp parameters and exponential-time approximation
algorithms. together with our independent set algorithm, this connection rules out the possibility
of substantially reducing the size of chan's pcp [chan, 2016]. it also implies that a (significant) improvement over our result would refute the gap-eth conjecture [dinur 2016 & manurangsi and
raghavendra, 2016].
| 8 |
abstract. this is a plea for the reopening of the building site for the
classification of finite simple groups in order to include the finite simple
hypergroups.
hypergroups were first introduced by frédéric marty, in 1934, at a
congress in stockholm, not to be confused with a later and quite different notion to which the same name was given, inopportunely.
i am well aware that, probably, quite a few mathematicians must have
already felt uncomfortable about the presence of the so-called sporadic
simple groups in the large tableau of the classification of finite simple
groups, and might have written about it, though i do not have any
reference to mention.
in what follows, i will try to explain, step by step, what a hypergroup
is, and, then, suggest a notion of simplicity for hypergroups, in a simple
and natural way, to match the notion in the case of groups, hoping it
will be fruitful.
examples and constructions are included.
| 4 |
abstract—mobile edge computing (mec) is expected to
be an effective solution to deliver 360-degree virtual reality
(vr) videos over wireless networks. in contrast to previous
computation-constrained mec frameworks, which reduce the
computation-resource consumption at the mobile vr device
by increasing the communication-resource consumption, we develop a communications-constrained mec framework to reduce communication-resource consumption by increasing the
computation-resource consumption and exploiting the caching
resources at the mobile vr device in this paper. specifically,
according to the task modularization, the mec server can only
deliver the components which have not been stored in the vr
device, and then the vr device uses the received components
and the corresponding cached components to construct the task,
resulting in low communication-resource consumption but high
delay. the mec server can also compute the task by itself to
reduce the delay; however, this consumes more communication resources due to the delivery of the entire task. therefore, we then
propose a task scheduling strategy to decide which computation
model the mec server should operate, in order to minimize
the communication-resource consumption under the delay constraint. finally, we discuss the tradeoffs between communications,
computing, and caching in the proposed system.
| 7 |
abstract. a span program is a linear-algebraic model of computation
which can be used to design quantum algorithms. for any boolean function there exists a span program that leads to a quantum algorithm
with optimal quantum query complexity. in general, finding such span
programs is not an easy task.
in this work, given a query access to the adjacency matrix of a simple graph g with n vertices, we provide two new span-program-based
quantum algorithms:
– an algorithm for testing if the graph is bipartite that uses o(n√n)
quantum queries;
– an algorithm for testing if the graph is connected that uses o(n√n)
quantum queries.
| 8 |
abstract—we consider the problem of decentralized hypothesis
testing in a network of energy harvesting sensors, where sensors
make noisy observations of a phenomenon and send quantized
information about the phenomenon towards a fusion center. the
fusion center makes a decision about the present hypothesis
using the aggregate received data during a time interval. we
explicitly consider a scenario under which the messages are sent
through parallel access channels towards the fusion center. to
avoid limited lifetime issues, we assume each sensor is capable
of harvesting all the energy it needs for the communication from
the environment. each sensor has an energy buffer (battery) to
save its harvested energy for use in other time intervals. our
key contribution is to formulate the problem of decentralized
detection in a sensor network with energy harvesting devices. our
analysis is based on a queuing-theoretic model for the battery
and we propose a sensor decision design method by considering
long term energy management at the sensors. we show how
the performance of the system changes for different battery
capacities. we then numerically show how our findings can be
used in the design of sensor networks with energy harvesting
sensors.
| 7 |
abstract. for a tree g, we study the changing behaviors in the homology groups hi (bn g)
as n varies, where bn g := π1 (uconf n (g)). we prove that the ranks of these homologies
can be described by a single polynomial for all n, and construct this polynomial explicitly in
terms of invariants of the tree g. to accomplish this we prove that the group ⊕n hi (bn g)
can be endowed with the structure of a finitely generated graded module over an integral
polynomial ring, and further prove that it naturally decomposes as a direct sum of graded
shifts of squarefree monomial ideals. following this, we spend time considering how our
methods might be generalized to braid groups of arbitrary graphs, and make various conjectures in this direction.
| 0 |
abstract
timing guarantees are crucial to cyber-physical applications that must bound the end-to-end
delay between sensing, processing and actuation. for example, in a flight controller for a multirotor drone, the data from a gyro or inertial sensor must be gathered and processed to determine
the attitude of the aircraft. sensor data fusion is followed by control decisions that adjust the
flight of a drone by altering motor speeds. if the processing pipeline between sensor input and
actuation is not bounded, the drone will lose control and possibly fail to maintain flight.
motivated by the implementation of a multithreaded drone flight controller on the quest
rtos, we develop a composable pipe model based on the system’s task, scheduling and communication abstractions. this pipe model is used to analyze two semantics of end-to-end time:
reaction time and freshness time. we also argue that end-to-end timing properties should be
factored in at the early stage of application design. thus, we provide a mathematical framework to derive feasible task periods that satisfy both a given set of end-to-end timing constraints
and the schedulability requirement. we demonstrate the applicability of our design approach
by using it to port the cleanflight flight controller firmware to quest on the intel aero board.
experiments show that cleanflight ported to quest is able to achieve end-to-end latencies within
the predicted time bounds derived by analysis.
1998 acm subject classification c.3 real-time and embedded systems
keywords and phrases real-time systems, end-to-end timing analysis, flight controller
| 3 |
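the reaction-time semantics analysed in the abstract can be illustrated with a classical conservative bound for register-based periodic pipelines: each hop adds at most one full period (worst-case sampling delay) plus its response time. this generic bound and the example stage timings are illustrative assumptions, not the paper's composable pipe model, which is more precise.

```python
def reaction_time_bound(stages):
    """Conservative end-to-end reaction-time bound for a sampling
    pipeline: each stage contributes one full period (worst-case
    sampling delay) plus its worst-case response time."""
    return sum(period + response for period, response in stages)

# hypothetical 3-stage pipeline (ms): gyro read -> sensor fusion -> motor output
pipeline = [(2.0, 0.5), (4.0, 1.0), (2.0, 0.5)]
bound = reaction_time_bound(pipeline)  # 10.0 ms worst-case reaction time
```

a designer can invert this: given an end-to-end deadline, search for the largest feasible periods whose bound still meets it, which is the spirit of the period-derivation framework in the abstract.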
abstract
the primary objective of this paper is to revisit and make a case for
the merits of r.a. fisher’s objections to the decision-theoretic framing of
frequentist inference. it is argued that this framing is congruent with the
bayesian approach but incongruent with frequentist inference. it provides the
bayesian approach with a theory of optimal inference, but it misrepresents
the theory of optimal frequentist inference by framing inferences solely in
terms of the universal quantifier ‘for all values of θ in Θ’, denoted by
∀θ∈Θ. this framing is at odds with the primary objective of model-based frequentist inference, which is to learn from data about the true
value θ∗; the one that gave rise to the particular data. the frequentist
approach relies on factual (estimation, prediction), as well as hypothetical
(testing) reasoning, both of which revolve around the existential quantifier
∃θ∗∈Θ. the paper calls into question the appropriateness of admissibility
and reassesses stein’s paradox as it relates to the capacity of frequentist
estimators to pinpoint θ∗. the paper also compares and contrasts loss-based errors with traditional frequentist errors, such as coverage, type
i and ii; the former are attached to θ, but the latter to the inference
procedure itself.
key words: decision theoretic framing; bayesian vs. frequentist
inference; stein’s paradox; james-stein estimator; loss functions; admissibility; error probabilities; risk functions
| 10 |
abstract
facial expression recognition (fer) has always
been a challenging issue in computer vision. the
different expressions of emotion and uncontrolled
environmental factors lead to inconsistencies in the
complexity of fer and variability between expression categories, which is often overlooked in
most facial expression recognition systems. in order
to solve this problem effectively, we present a
simple and efficient cnn model to extract facial
features, and propose a complexity perception
classification (cpc) algorithm for fer. the cpc
algorithm divides the dataset into an easy classification sample subspace and a complex classification
sample subspace by evaluating the complexity of
facial features that are suitable for classification.
the experimental results of our proposed algorithm
on fer2013 and ck+ datasets demonstrated the
algorithm’s effectiveness and superiority over other
state-of-the-art approaches.
| 2 |
abstract
in this paper, the efficient deployment and mobility of multiple unmanned aerial vehicles (uavs),
used as aerial base stations to collect data from ground internet of things (iot) devices, are investigated.
in particular, to enable reliable uplink communications for iot devices with a minimum total transmit
power, a novel framework is proposed for jointly optimizing the three-dimensional (3d) placement and
mobility of the uavs, device-uav association, and uplink power control. first, given the locations of
active iot devices at each time instant, the optimal uavs’ locations and associations are determined.
next, to dynamically serve the iot devices in a time-varying network, the optimal mobility patterns
of the uavs are analyzed. to this end, based on the activation process of the iot devices, the time
instances at which the uavs must update their locations are derived. moreover, the optimal 3d trajectory
of each uav is obtained in a way that the total energy used for the mobility of the uavs is minimized
while serving the iot devices. simulation results show that, using the proposed approach, the total
transmit power of the iot devices is reduced by 45% compared to a case in which stationary aerial
base stations are deployed. in addition, the proposed approach can yield a maximum of 28% enhanced
system reliability compared to the stationary case. the results also reveal an inherent tradeoff between
the number of update times, the mobility of the uavs, and the transmit power of the iot devices. in
essence, a higher number of updates can lead to lower transmit powers for the iot devices at the cost
of an increased mobility for the uavs.
| 7 |
abstract
in this work we study a weak order ideal associated with the coset
leaders of a non-binary linear code. this set allows the incremental
computation of the coset leaders and the definition of the set of leader
codewords. this set of codewords has some nice properties related to
the monotonicity of the weight compatible order on the generalized
support of a vector in f_q^n, which allow one to describe a test set, a trial set
and the set of zero neighbours of a linear code in terms of the leader
codewords.
| 7 |
abstract. we propose a novel method that uses convolutional neural networks
(cnns) for feature extraction. not limited to the conventional spatial-domain representation, we use a multilevel 2d discrete haar wavelet transform, where image representations are scaled to a variety of different sizes. these are then used to train
different cnns to select features. to be precise, we use 10 different cnns that select
a set of 10240 features, i.e. 1024/cnn. with this, 11 different handwritten scripts
are identified, where 1k words per script are used. in our test, we have achieved the
maximum script identification rate of 94.73% using multi-layer perceptron (mlp).
our results outperform the state-of-the-art techniques.
keywords: convolutional neural network, deep learning, multi-layer perceptron,
discrete wavelet transform, indic script identification
| 1 |
abstract
steady states of alternating-current (ac) circuits have been studied
in considerable detail. in 1982, baillieul and byrnes derived an upper
bound on the number of steady states in a loss-less ac circuit [ieee
tcas, 29(11): 724–737] and conjectured that this bound holds for ac
circuits in general. we prove this is indeed the case, among other results,
by studying a certain multi-homogeneous structure in an algebraisation.
| 5 |
abstract
this is the second in a series of papers on implementing a discontinuous galerkin (dg) method as an open source
matlab / gnu octave toolbox. the intention of this ongoing project is to offer a rapid prototyping package for
application development using dg methods. the implementation relies on fully vectorized matrix / vector operations
and is comprehensively documented. particular attention was paid to maintaining a direct mapping between discretization terms and code routines as well as to supporting the full code functionality in gnu octave. the present
work focuses on a two-dimensional time-dependent linear advection equation with space / time-varying coefficients,
and provides a general-order implementation of several slope limiting schemes for the dg method.
keywords: matlab, gnu octave, discontinuous galerkin method, slope limiting, vectorization, open source,
advection operator
| 5 |
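slope limiting, which the abstract above implements for dg discretizations, is easy to illustrate in scalar form with the classic minmod limiter. the sketch below is a generic textbook limiter written in python for illustration; the toolbox itself is matlab / gnu octave and implements higher-order dg limiting schemes.

```python
def minmod(*slopes: float) -> float:
    """Classic minmod limiter: if all candidate slopes share a sign,
    return the one smallest in magnitude; otherwise flatten to zero.
    This suppresses spurious oscillations near discontinuities."""
    if all(s > 0 for s in slopes):
        return min(slopes)
    if all(s < 0 for s in slopes):
        return max(slopes)
    return 0.0
```

in a dg or finite-volume scheme, `minmod` is applied per cell to the candidate slopes (e.g. the internal slope and the forward/backward differences to neighbouring cell means) before reconstruction.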
abstract. we study the problem of constructing phylogenetic trees for
a given set of species. the problem is formulated as that of finding a
minimum steiner tree on n points over the boolean hypercube of dimension d. it is known that an optimal tree can be found in linear time [1]
if the given dataset has a perfect phylogeny, i.e. cost of the optimal phylogeny is exactly d. moreover, if the data has a near-perfect phylogeny,
i.e. the cost of the optimal steiner tree is d + q, it is known [2] that an
exact solution can be found in running time which is polynomial in the
number of species and d, yet exponential in q. in this work, we give a
polynomial-time algorithm (in both d and q) that finds a phylogenetic
tree of cost d + o(q²). this provides the best guarantees known—namely,
a (1 + o(1))-approximation—for the case log(d) ≪ q ≪ √d, broadening
the range of settings for which near-optimal solutions can be efficiently
found. we also discuss the motivation and reasoning for studying such
additive approximations.
| 5 |
abstract—the visual focus of attention (vfoa) has been
recognized as a prominent conversational cue. we are interested
in estimating and tracking the vfoas associated with multi-party social interactions. we note that in this type of situation
the participants either look at each other or at an object of
interest; therefore their eyes are not always visible. consequently,
both gaze and vfoa estimation cannot be based on eye detection
and tracking. we propose a method that exploits the correlation
between eye gaze and head movements. both vfoa and gaze
are modeled as latent variables in a bayesian switching state-space model. the proposed formulation leads to a tractable
learning procedure and to an efficient algorithm that simultaneously tracks gaze and visual focus. the method is tested and
benchmarked using two publicly available datasets that contain
typical multi-party human-robot and human-human interactions.
index terms—visual focus of attention, eye gaze, head pose,
dynamic bayesian model, switching kalman filter, multi-party
dialog, human-robot interaction.
| 1 |
abstract. molecular fingerprints, i.e. feature vectors describing atomistic neighborhood configurations,
are an important abstraction and a key ingredient for data-driven modeling of potential energy surfaces
and interatomic forces. in this paper, we present the density-encoded canonically aligned fingerprint
(decaf) fingerprint algorithm, which is robust and efficient, for fitting per-atom scalar and vector
quantities. the fingerprint is essentially a continuous density field formed through the superimposition
of smoothing kernels centered on the atoms. rotational invariance of the fingerprint is achieved by
aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame
computed from a kernel minisum optimization procedure. we show that this approach is superior to
pca-based methods, especially when the atomistic neighborhood is sparse and/or contains symmetry.
we propose that the ‘distance’ between the density fields be measured using a volume integral of their
pointwise difference. this can be efficiently computed using optimal quadrature rules, which only require
discrete sampling at a small number of grid points. we also experiment on the choice of weight functions
for constructing the density fields, and characterize their performance for fitting interatomic potentials.
the applicability of the fingerprint is demonstrated through a set of benchmark problems.
keywords: active learning, gaussian process regression, quantum mechanics, molecular dynamics,
next generation force fields
| 5 |
abstract
in this paper we analyse the benefits of incorporating interval-valued fuzzy
sets into the bousi-prolog system. a syntax, a declarative semantics and an implementation for this extension are presented and formalised. we show, by
using potential applications, that fuzzy logic programming frameworks enhanced with them can correctly work together with lexical resources and
ontologies in order to improve their capabilities for knowledge representation
and reasoning.
keywords: interval-valued fuzzy sets, approximate reasoning, lexical knowledge resources, fuzzy logic programming, fuzzy prolog.
1. introduction and motivation
nowadays, lexical knowledge resources as well as ontologies of concepts
are widely employed for modelling domain independent knowledge [1, 2] or
| 2 |
abstract. the entropy of a finite probability space x measures the observable cardinality of large independent products x ⊗n of the probability
space. if two probability spaces x and y have the same entropy, there is an
almost measure-preserving bijection between large parts of x ⊗n and y ⊗n .
in this way, x and y are asymptotically equivalent.
it turns out to be challenging to generalize this notion of asymptotic
equivalence to configurations of probability spaces, which are collections of
probability spaces with measure-preserving maps between some of them.
in this article we introduce the intrinsic kolmogorov-sinai distance on
the space of configurations of probability spaces. concentrating on the
large-scale geometry we pass to the asymptotic kolmogorov-sinai distance.
it induces an asymptotic equivalence relation on sequences of configurations
of probability spaces. we will call the equivalence classes tropical probability
spaces.
in this context we prove an asymptotic equipartition property for configurations. it states that tropical configurations can always be approximated by homogeneous configurations. in addition, we show that the
solutions to certain information-optimization problems are lipschitz-continuous with respect to the asymptotic kolmogorov-sinai distance. it follows from these two statements that in order to solve an information-optimization problem, it suffices to consider homogeneous configurations.
finally, we show that spaces of trajectories of length n of certain stochastic processes, in particular stationary markov chains, have a tropical
limit.
| 7 |
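the starting point of the abstract, entropy as observable cardinality, can be made concrete: a space with entropy h behaves, in large independent products, like a uniform space of about 2^h points (the asymptotic equipartition property). a minimal entropy computation follows; base-2 logarithms are a conventional choice for illustration.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a finite probability space.

    2**entropy(p) is the 'observable cardinality': large independent
    products concentrate on about 2**(n*H) roughly equiprobable outcomes.
    """
    assert abs(sum(p) - 1.0) < 1e-9
    return -sum(q * math.log2(q) for q in p if q > 0)
```

two spaces with equal entropy, e.g. a fair coin and any biased three-outcome space with h = 1 bit, have asymptotically equivalent product spaces in the sense sketched in the abstract.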
abstract
in functional logic programs, rules are applicable independently of textual order, i.e., any
rule can potentially be used to evaluate an expression. this is similar to logic languages
and contrary to functional languages; e.g., haskell enforces a strict sequential interpretation of rules. however, in some situations it is convenient to express alternatives by means
of compact default rules. although default rules are often used in functional programs, the
non-deterministic nature of functional logic programs does not allow this concept to be transferred
directly from functional to functional logic languages in a meaningful way. in this paper we propose a new concept of default rules for curry that supports a programming style
similar to functional programming while preserving the core properties of functional logic
programming, i.e., completeness, non-determinism, and logic-oriented use of functions. we
discuss the basic concept and propose an implementation which exploits advanced features
of functional logic languages.
to appear in theory and practice of logic programming (tplp)
keywords: functional logic programming, semantics, program transformation
| 6 |
abstract. we develop new computational methods for studying potential counterexamples to the andrews–curtis conjecture, in particular,
akbulut–kurby examples ak(n). we devise a number of algorithms in
an attempt to disprove the most interesting counterexample ak(3). to
improve metric properties of the search space (which is a set of balanced
presentations of 1) we introduce a new transformation (called an acm-move here) that generalizes the original andrews-curtis transformations
and discuss details of a practical implementation. to reduce growth of
the search space we introduce a strong equivalence relation on balanced
presentations and study the space modulo automorphisms of the underlying free group. finally, we prove that automorphism-moves can be
applied to ak(n)-presentations. unfortunately, despite a lot of effort
we were unable to trivialize any of the ak(n)-presentations for n > 2.
keywords. andrews-curtis conjecture, akbulut-kurby presentations,
trivial group, conjugacy search problem, computations.
2010 mathematics subject classification. 20-04, 20f05, 20e05.
| 4 |
abstract. we show that in any q-gorenstein flat family of klt singularities, normalized volumes are lower semicontinuous with respect to the zariski topology. a quick
consequence is that smooth points have the largest normalized volume among all klt
singularities. using an alternative characterization of k-semistability developed by li,
liu and xu, we show that k-semistability is a very generic or empty condition in any
q-gorenstein flat family of log fano pairs.
| 0 |
abstract. the critical ideals of a graph are the determinantal ideals of the generalized laplacian
matrix associated to the graph. in this article we provide a set of minimal forbidden graphs for the set
of graphs with at most three trivial critical ideals. then we use these forbidden graphs to characterize
the graphs with at most three trivial critical ideals and clique number equal to 2 and 3.
| 0 |
abstract
when applied to training deep neural networks, stochastic gradient descent (sgd)
often incurs steady progression phases, interrupted by catastrophic episodes in
which loss and gradient norm explode. a possible mitigation of such events is to
slow down the learning process.
this paper presents a novel approach to control the sgd learning rate that uses
two statistical tests. the first one, aimed at fast learning, compares the momentum
of the normalized gradient vectors to that of random unit vectors and accordingly
gracefully increases or decreases the learning rate. the second one is a change
point detection test, aimed at the detection of catastrophic learning episodes; upon
its triggering the learning rate is instantly halved.
both abilities of speeding up and slowing down the learning rate allow the proposed approach, called salera, to learn as fast as possible but not faster. experiments on standard benchmarks show that salera performs well in practice, and
compares favorably to the state of the art.
machine learning (ml) algorithms require efficient optimization techniques, whether to solve convex
problems (e.g., for svms), or non-convex ones (e.g., for neural networks). in the convex setting,
the main focus is on the order of the convergence rate [nesterov, 1983, defazio et al., 2014]. in
the non-convex case, ml is still more of an experimental science. significant efforts are devoted
to devising optimization algorithms (and robust default values for the associated hyper-parameters)
tailored to the typical regime of ml models and problem instances (e.g. deep convolutional neural
networks for mnist [le cun et al., 1998] or imagenet [deng et al., 2009]) [duchi et al., 2010,
zeiler, 2012, schaul et al., 2013, kingma and ba, 2014, tieleman and hinton, 2012].
as the data size and the model dimensionality increase, mainstream convex optimization methods
are adversely affected. hessian-based approaches, which optimally handle convex optimization
problems however ill-conditioned they are, do not scale up and approximations are required [martens
et al., 2012]. overall, stochastic gradient descent (sgd) is increasingly adopted both in convex and
non-convex settings, with good performances and linear tractability [bottou and bousquet, 2008,
hardt et al., 2015].
within the sgd framework, one of the main issues is to know how to control the learning rate:
the objective is to reach a satisfactory learning speed without triggering any catastrophic event,
manifested by the sudden rocketing of the training loss and the gradient norm. finding "how much is
not too much" in terms of learning rate is a slippery game. it depends both on the current state of the
system (the weight vector) and the current mini-batch. often, the eventual convergence of sgd is
| 9 |
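the two-test learning-rate control described in the abstract above can be illustrated with a toy python controller. this is a minimal sketch, not the authors' actual statistical tests: the class name, the scaling factors `up`/`down`, and the explosion threshold `explode_factor` are all invented for illustration.

```python
class ToyLRController:
    """Toy learning-rate controller in the spirit of the abstract:
    gently raise or lower the rate, and halve it on a catastrophic
    loss explosion (a crude stand-in for change-point detection)."""

    def __init__(self, lr=0.1, up=1.02, down=0.98, explode_factor=3.0):
        self.lr = lr
        self.up, self.down = up, down
        self.explode_factor = explode_factor
        self.prev_loss = None

    def step(self, loss, making_progress):
        if self.prev_loss is not None and loss > self.explode_factor * self.prev_loss:
            self.lr *= 0.5            # catastrophic episode: halve instantly
        elif making_progress:
            self.lr *= self.up        # gradients agree: speed up gracefully
        else:
            self.lr *= self.down      # otherwise slow down gracefully
        self.prev_loss = loss
        return self.lr
```

the controller needs only the current loss and a boolean progress signal, so it can wrap any sgd loop.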
abstract
online matching has received significant attention over the last 15 years due to its close
connection to internet advertising. as the seminal work of karp, vazirani, and vazirani has an
optimal (1−1/e) competitive ratio in the standard adversarial online model, much effort has gone
into developing useful online models that incorporate some stochasticity in the arrival process.
one such popular model is the “known i.i.d. model” where different customer-types arrive
online from a known distribution. we develop algorithms with improved competitive ratios for
some basic variants of this model with integral arrival rates, including: (a) the case of general
weighted edges, where we improve the best-known ratio of 0.667 due to haeupler, mirrokni and
zadimoghaddam [12] to 0.705; and (b) the vertex-weighted case, where we improve the 0.7250
ratio of jaillet and lu [13] to 0.7299.
we also consider an extension of stochastic rewards, a variant where each edge has an
independent probability of being present. for the setting of stochastic rewards with non-integral
arrival rates, we present a simple optimal non-adaptive algorithm with a ratio of 1 − 1/e. for
the special case where each edge is unweighted and has a uniform constant probability of being
present, we improve upon 1 − 1/e by proposing a strengthened lp benchmark.
one of the key ingredients of our improvement is the following (offline) approach to bipartite-matching polytopes with additional constraints. we first add several valid constraints in order
to get a good fractional solution f; however, these give us less control over the structure of
f. we next remove all these additional constraints and randomly move from f to a feasible
point on the matching polytope with all coordinates being from the set {0, 1/k, 2/k, . . . , 1} for
a chosen integer k. the structure of this solution is inspired by jaillet and lu (mathematics of
operations research, 2013) and is a tractable structure for algorithm design and analysis. the
appropriate random move preserves many of the removed constraints (approximately with high
probability and exactly in expectation). this underlies some of our improvements and could be
of independent interest.
| 8 |
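the coordinate-wise idea behind the random move from a fractional solution f to a point with coordinates in {0, 1/k, ..., 1} can be sketched as follows. this is a simplified, hypothetical illustration — the actual construction moves to a structured point on the matching polytope rather than rounding each coordinate independently — showing only how a coordinate can be rounded to an adjacent grid point while being preserved exactly in expectation.

```python
import random

def round_to_grid(f, k, rng=random.random):
    """Round each coordinate of the fractional point f to the grid
    {0, 1/k, 2/k, ..., 1}, preserving each coordinate in expectation."""
    out = []
    for x in f:
        lo = int(x * k) / k           # grid point just below x
        hi = min(lo + 1 / k, 1.0)     # grid point just above x
        p = (x - lo) * k              # round up with this probability
        out.append(hi if rng() < p else lo)
    return out
```

e.g. a coordinate 0.3 with k = 2 becomes 0.5 with probability 0.6 and 0.0 with probability 0.4, so its expectation is exactly 0.3.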
abstract
conditions for geometric ergodicity of multivariate autoregressive conditional heteroskedasticity (arch) processes, with the so-called bekk (baba,
engle, kraft, and kroner) parametrization, are considered. we show for a
class of bekk-arch processes that the invariant distribution is regularly
varying. in order to account for the possibility of different tail indices of the
marginals, we consider the notion of vector scaling regular variation (vsrv),
closely related to non-standard regular variation. the characterization of the
tail behavior of the processes is used for deriving the asymptotic properties
of the sample covariance matrices.
| 10 |
abstraction/multi-formalism co-simulation
a.4 black-box co-simulation
a.5 real-time co-simulation
a.6 many simulation units: large scale co-simulation
| 3 |
abstract interpretation
francesco ranzato and francesco tapparo
dipartimento di matematica pura ed applicata, università di padova
via belzoni 7, 35131 padova, italy
[email protected] [email protected]
| 6 |
abstract
many metainterpreters found in the logic programming literature are nondeterministic in
the sense that the selection of program clauses is not determined. examples are the familiar
“demo” and “vanilla” metainterpreters. for some applications this nondeterminism is
convenient. in some cases, however, a deterministic metainterpreter, having an explicit
selection of clauses, is needed. such cases include (1) conversion of or-parallelism into
and-parallelism for “committed-choice” processors, (2) logic-based, imperative-language
implementation of search strategies, and (3) simulation of bounded-resource reasoning.
deterministic metainterpreters are difficult to write because the programmer must be
concerned about the set of unifiers of the children of a node in the derivation tree. we argue
that it is both possible and advantageous to write these metainterpreters by reasoning
in terms of object programs converted into a syntactically restricted form that we call
“chain” form, where we can forget about unification, except for unit clauses. we give two
transformations converting logic programs into chain form, one for “moded” programs
(implicit in two existing exhaustive-traversal methods for committed-choice execution),
and one for arbitrary definite programs. as illustrations of our approach we show examples
of the three applications mentioned above.
| 6 |
abstract— the paper addresses state estimation for discrete-time systems with binary (threshold) measurements by following a maximum a posteriori probability (map) approach
and exploiting a moving horizon (mh) approximation of the
map cost-function. it is shown that, for a linear system
and noise distributions with log-concave probability density
function, the proposed mh-map state estimator involves the
solution, at each sampling interval, of a convex optimization
problem. application of the mh-map estimator to dynamic
estimation of a diffusion field given pointwise-in-time-and-space
binary measurements of the field is also illustrated and, finally,
simulation results relative to this application are shown to
demonstrate the effectiveness of the proposed approach.
| 3 |
abstract—compressive sensing has been successfully used for
optimized operations in wireless sensor networks. however, raw
data collected by sensors may be neither originally sparse nor
easily transformed into a sparse data representation. this paper
addresses the problem of transforming source data collected by
sensor nodes into a sparse representation with a few nonzero
elements. our contributions that address three major issues
include: 1) an effective method that extracts population sparsity
of the data, 2) a sparsity ratio guarantee scheme, and 3) a
customized learning algorithm of the sparsifying dictionary. we
introduce an unsupervised neural network to extract an intrinsic
sparse coding of the data. the sparse codes are generated at
the activation of the hidden layer using a sparsity nomination
constraint and a shrinking mechanism. our analysis using real
data samples shows that the proposed method outperforms
conventional sparsity-inducing methods.
index terms—sparse coding, compressive sensing, sparse autoencoders, wireless sensor networks.
| 9 |
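the "shrinking mechanism" at the hidden layer described above can be sketched as a thresholded relu that zeroes weak activations, nominating only the strongest units. this is a minimal sketch assuming a plain fully-connected hidden layer; the function name and the `shrink` threshold are illustrative, not the paper's exact scheme.

```python
def sparse_codes(x, weights, bias, shrink=0.5):
    """Hidden-layer activations with a shrinking mechanism:
    relu activations below the shrink threshold are set to zero,
    so the resulting code has only a few nonzero elements."""
    codes = []
    for w_row, b in zip(weights, bias):
        a = sum(wi * xi for wi, xi in zip(w_row, x)) + b   # affine map
        a = max(a, 0.0)                                    # relu
        codes.append(a if a >= shrink else 0.0)            # shrink weak units
    return codes
```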
abstract
in this paper, we formulate the deep residual network (resnet) as a control problem
of the transport equation. in resnet, the transport equation is solved along its characteristics. based on this observation, the deep neural network is closely related to the control
problem of pdes on manifolds. we propose several models based on the transport equation, the hamilton-jacobi equation, and the fokker-planck equation. the discretization of these
pdes on point clouds is also discussed.
| 7 |
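the observation that resnet solves a transport equation along characteristics corresponds to the standard ode view of residual blocks: each block is one forward-euler step of dx/dt = f(x). a minimal sketch of that correspondence (the step size h and the toy dynamics are assumptions, not the paper's models):

```python
def euler_resnet(x0, f, depth, h):
    """View a resnet as forward-euler integration of dx/dt = f(x):
    each residual block computes x_{k+1} = x_k + h * f(x_k),
    i.e. identity plus a small learned correction."""
    x = x0
    for _ in range(depth):
        x = x + h * f(x)
    return x
```

with f(x) = x, ten blocks of step 0.1 multiply the input by 1.1 ten times, approximating the exact flow e^t at t = 1.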
abstract—in video surveillance, face recognition (fr) systems
are employed to detect individuals of interest appearing over
a distributed network of cameras. the performance of still-to-video fr systems can decline significantly because faces captured
in the unconstrained operational domain (od) over multiple
video cameras have a different underlying data distribution
compared to faces captured under controlled conditions in the
enrollment domain (ed) with a still camera. this is particularly
true when individuals are enrolled to the system using a single
reference still. to improve the robustness of these systems, it
is possible to augment the reference set by generating synthetic
faces based on the original still. however, without knowledge of
the od, many synthetic images must be generated to account
for all possible capture conditions. fr systems may therefore
require complex implementations and yield lower accuracy when
training on many less relevant images. this paper introduces an
algorithm for domain-specific face synthesis (dsfs) that exploits
the representative intra-class variation information available
from the od. prior to operation (during camera calibration), a
compact set of faces from unknown persons appearing in the od
is selected through affinity propagation clustering in the captured
condition space (defined by pose and illumination estimation).
the domain-specific variations of these face images are then
projected onto the reference still of each individual by integrating
an image-based face relighting technique inside the 3d morphable
model framework. a compact set of synthetic faces is generated
that resemble individuals of interest under capture conditions
relevant to the od. in a particular implementation based on
sparse representation classification, the synthetic faces generated
with the dsfs are employed to form a cross-domain dictionary
that accounts for structured sparsity where the dictionary blocks
combine the original and synthetic faces of each individual.
experimental results obtained with videos from the chokepoint
and cox-s2v datasets reveal that augmenting the reference
gallery set of still-to-video fr systems using the proposed dsfs
approach can provide a significantly higher level of accuracy
compared to state-of-the-art approaches, with only a moderate
increase in its computational complexity.
index terms—face recognition, single sample per person,
face synthesis, 3d face reconstruction, illumination transferring, sparse representation-based classification, video surveillance.
| 1 |
abstract
linear regression is one of the most prevalent
techniques in machine learning; however, it is
also common to use linear regression for its explanatory capabilities rather than label prediction. ordinary least squares (ols) is often used
in statistics to establish a correlation between
an attribute (e.g. gender) and a label (e.g. income) in the presence of other (potentially correlated) features. ols assumes a particular model
that randomly generates the data, and derives t-values — representing the likelihood of each real
value to be the true correlation. using t-values,
ols can release a confidence interval, which is
an interval on the reals that is likely to contain
the true correlation; and when this interval does
not intersect the origin, we can reject the null hypothesis as it is likely that the true correlation
is non-zero. our work aims at achieving similar guarantees on data under differentially private estimators. first, we show that for well-spread data, the gaussian johnson-lindenstrauss
transform (jlt) gives a very good approximation of t-values; secondly, when jlt approximates ridge regression (linear regression with
l2 -regularization) we derive, under certain conditions, confidence intervals using the projected
data; lastly, we derive, under different conditions,
confidence intervals for the “analyze gauss” algorithm (dwork et al., 2014).
| 8 |
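for reference, the (non-private) t-value that ols releases for a slope can be computed directly; the differentially private estimators above are judged against this textbook quantity. a sketch for simple regression y = a + b*x (the helper name is illustrative):

```python
import math

def slope_t_value(xs, ys):
    """t-value of the slope in simple ols regression y = a + b*x:
    the estimated slope divided by its standard error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(rss / (n - 2) / sxx)   # standard error of the slope
    return b / se
```

a large |t| lets one reject the null hypothesis that the true slope is zero.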
abstract
semidefinite programs have recently been developed for the problem
of community detection, which may be viewed as a special case of the
stochastic blockmodel. here, we develop a semidefinite program that can
be tailored to other instances of the blockmodel, such as non-assortative
networks and overlapping communities. we establish label recovery in
sparse settings, with conditions that are analogous to recent results for
community detection. in settings where the data is not generated by a
blockmodel, we give an oracle inequality that bounds excess risk relative to the best blockmodel approximation. simulations are presented for
community detection, for overlapping communities, and for latent space
models.
| 10 |
abstract
we study rank-1 l1-norm-based tucker2 (l1-tucker2) decomposition of 3-way tensors, treated as a
collection of n d × m matrices that are to be jointly decomposed. our contributions are as follows. i) we prove
that the problem is equivalent to combinatorial optimization over n antipodal-binary variables. ii) we derive the
first two algorithms in the literature for its exact solution. the first algorithm has cost exponential in n ; the second
one has cost polynomial in n (under a mild assumption). our algorithms are accompanied by formal complexity
analysis. iii) we conduct numerical studies to compare the performance of exact l1-tucker2 (proposed) with
standard hosvd, hooi, glram, pca, l1-pca, and tpca-l1. our studies show that l1-tucker2 outperforms
(in tensor approximation) all the above counterparts when the processed data are outlier corrupted.
| 8 |
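the reduction to combinatorial optimization over n antipodal-binary variables means the exact solution can, in principle, be found by exhausting the 2**n sign vectors — the exponential-cost route mentioned above. a generic sketch, where the `score` callable stands in for the unspecified tensor objective:

```python
from itertools import product

def best_sign_vector(score, n):
    """Exhaustive search over antipodal-binary vectors b in {-1,+1}^n,
    returning the maximizer of score(b); cost is exponential in n."""
    best_b, best_val = None, float("-inf")
    for b in product((-1, 1), repeat=n):
        v = score(b)
        if v > best_val:
            best_b, best_val = b, v
    return best_b, best_val
```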
abstract
a basic problem in spectral clustering is the following. if a solution obtained from the
spectral relaxation is close to an integral solution, is it possible to find this integral solution even
though they might be in completely different basis? in this paper, we propose a new spectral
clustering algorithm. it can recover a k-partition such that the subspace corresponding to the
span of its indicator vectors is o(√opt) close to the original subspace in spectral norm, with
opt being the minimum possible (opt ≤ 1 always). moreover, our algorithm does not impose
any restriction on the cluster sizes. previously, no algorithm was known which could find a
k-partition closer than o(k · opt).
we present two applications for our algorithm. first one finds a disjoint union of bounded
degree expanders which approximate a given graph in spectral norm. the second one is for
approximating the sparsest k-partition in a graph where each cluster has expansion at most
φk , provided φk ≤ o(λk+1 ), where λk+1 is the (k + 1)st eigenvalue of the laplacian matrix. this
significantly improves upon the previous algorithms, which required φk ≤ o(λk+1 /k).
| 8 |
abstract. we introduce quasi-prüfer ring extensions, in order
to relativize quasi-prüfer domains and also to take into account
some contexts in recent papers, where such extensions appear in a
hidden form. an extension is quasi-prüfer if and only if it is an inc
pair. the class of these extensions has nice stability properties.
we also define almost-prüfer extensions that are quasi-prüfer, the
converse being not true. quasi-prüfer extensions are closely linked
to finiteness properties of fibers. applications are given for fmc
extensions, because they are quasi-prüfer.
| 0 |
abstract. it is notoriously hard to correctly implement a multiparty
protocol which involves asynchronous/concurrent interactions and the
constraints on states of multiple participants. to assist developers in implementing such protocols, we propose a novel specification language to
specify interactions within multiple object-oriented actors and the side-effects on heap memory of those actors; a behavioral-type-based analysis
is presented for type checking. our specification language formalizes a
protocol as a global type, which describes the procedure of asynchronous
method calls, the usage of futures, and the heap side-effects with a first-order logic. to characterize runs of instances of types, we give a model-theoretic semantics for types and translate them into logical constraints
over traces. we prove protocol adherence: if a program is well-typed
w.r.t. a protocol, then every trace of the program adheres to the protocol, i.e., every trace is a model for the formula of its type.
| 6 |
abstract
discovery of an accurate causal bayesian network structure from observational
data can be useful in many areas of science. often the discoveries are made under
uncertainty, which can be expressed as probabilities. to guide the use of such
discoveries, including directing further investigation, it is important that those
probabilities be well-calibrated. in this paper, we introduce a novel framework to
derive calibrated probabilities of causal relationships from observational data. the
framework consists of three components: (1) an approximate method for generating
initial probability estimates of the edge types for each pair of variables, (2) the
availability of a relatively small number of the causal relationships in the network
for which the truth status is known, which we call a calibration training set, and
(3) a calibration method for using the approximate probability estimates and the
calibration training set to generate calibrated probabilities for the many remaining
pairs of variables. we also introduce a new calibration method based on a shallow
neural network. our experiments on simulated data support that the proposed
approach improves the calibration of causal edge predictions. the results also
support that the approach often improves the precision and recall of predictions.
| 2 |
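the calibration step can be illustrated with the simplest possible method, histogram binning, learned from a small calibration training set. this is a stand-in for the paper's shallow-neural-network calibrator; the function name and bin count are assumptions.

```python
def bin_calibrate(train_scores, train_truth, n_bins=4):
    """Histogram-binning calibration: map a raw probability estimate
    to the empirical frequency of true labels in its bin, as learned
    from a small calibration training set."""
    bins = [[] for _ in range(n_bins)]
    for s, t in zip(train_scores, train_truth):
        bins[min(int(s * n_bins), n_bins - 1)].append(t)
    freq = [sum(b) / len(b) if b else None for b in bins]

    def calibrate(s):
        f = freq[min(int(s * n_bins), n_bins - 1)]
        return f if f is not None else s   # empty bin: keep raw estimate
    return calibrate
```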
abstract
screening for ultrahigh dimensional features may encounter complicated issues
such as outlying observations, heteroscedasticity or heavy-tailed distribution, multicollinearity and confounding effects. standard correlation-based marginal screening
methods may be a weak solution to these issues. we contribute a novel robust joint
screener to safeguard against outliers and distribution mis-specification for both the
response variable and the covariates, and to account for external variables at the screening step. specifically, we introduce a copula-based partial correlation (cpc) screener.
we show that the empirical process of the estimated cpc converges weakly to a gaussian process and establish the sure screening property for the cpc screener under very
mild technical conditions; in particular, we need not require any moment condition, which is weaker
than existing alternatives in the literature. moreover, our approach allows for a diverging number of conditional variables from the theoretical point of view. extensive
simulation studies and two data applications are included to illustrate our proposal.
| 10 |
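to see why rank-based, copula-flavored statistics avoid moment conditions, consider a spearman-type rank correlation: it depends on the data only through ranks, so outliers and heavy tails cannot distort it. a minimal sketch — this is plain rank correlation, not the paper's copula-based partial correlation, and it assumes no ties:

```python
def rank_correlation(xs, ys):
    """Spearman-type rank correlation: pearson correlation computed
    on ranks, robust to outliers and requiring no moment conditions
    (assumes all values are distinct, so ranks are unambiguous)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    m = (n - 1) / 2                        # mean of ranks 0..n-1
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)
    return cov / var
```

note that replacing 40 by 1000 in a monotone sample leaves the statistic unchanged, unlike pearson correlation.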
abstract. we use the bass–jiang group for automorphism-induced
hnn-extensions to build a framework for the construction of tractable
groups with pathological outer automorphism groups. we apply
this framework to a strong form of a question of bumagin–wise
on the outer automorphism groups of finitely presented, residually
finite groups.
| 4 |
abstract
penalized estimation principle is fundamental to high-dimensional problems. in the literature, it has been extensively and successfully applied to various models with only structural
parameters. as a contrast, in this paper, we apply this penalization principle to a linear regression model with a finite-dimensional vector of structural parameters and a high-dimensional
vector of sparse incidental parameters. for the estimators of the structural parameters, we derive their consistency and asymptotic normality, which reveals an oracle property. however, the
penalized estimators for the incidental parameters possess only partial selection consistency but
not consistency. this is an interesting partial consistency phenomenon: the structural parameters are consistently estimated while the incidental ones cannot be. for the structural parameters,
also considered is an alternative two-step penalized estimator, which has fewer possible asymptotic distributions and thus is more suitable for statistical inferences. we further extend the
methods and results to the case where the dimension of the structural parameter vector diverges
with but slower than the sample size. a data-driven approach for selecting a penalty regularization parameter is provided. the finite-sample performance of the penalized estimators for the
structural parameters is evaluated by simulations and a real data set is analyzed.
keywords: structural parameters, sparse incidental parameters, penalized estimation, partial consistency,
oracle property, two-step estimation, confidence intervals
| 10 |
abstract
outlier detection is the identification of points in a dataset that do not conform to the norm. outlier
detection is highly sensitive to the choice of the detection algorithm and the feature subspace used by
the algorithm. extracting domain-relevant insights from outliers needs systematic exploration of these
choices since diverse outlier sets could lead to complementary insights. this challenge is especially acute
in an interactive setting, where the choices must be explored in a time-constrained manner.
in this work, we present remix, the first system to address the problem of outlier detection in an
interactive setting. remix uses a novel mixed integer programming (mip) formulation for automatically
selecting and executing a diverse set of outlier detectors within a time limit. this formulation incorporates
multiple aspects such as (i) an upper limit on the total execution time of detectors (ii) diversity in the
space of algorithms and features, and (iii) meta-learning for evaluating the cost and utility of detectors.
remix provides two distinct ways for the analyst to consume its results: (i) a partitioning of the
detectors explored by remix into perspectives through low-rank non-negative matrix factorization; each
perspective can be easily visualized as an intuitive heatmap of experiments versus outliers, and (ii) an
ensembled set of outliers which combines outlier scores from all detectors. we demonstrate the benefits
of remix through extensive empirical validation on real-world data.
| 2 |
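the mip formulation selects detectors under a time budget while trading off cost and utility; a greedy stand-in conveys the idea. remix solves this exactly as a mip — the triple format and the utility-per-cost rule below are illustrative assumptions, not its actual formulation.

```python
def select_detectors(detectors, budget):
    """Greedy stand-in for a budgeted detector-selection MIP:
    pick detectors by utility-per-cost ratio until the total
    execution-time budget is exhausted.
    each detector is a (name, cost, utility) triple."""
    chosen, spent = [], 0.0
    ranked = sorted(detectors, key=lambda d: d[2] / d[1], reverse=True)
    for name, cost, utility in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen
```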
abstract. interaction with services provided by an execution environment forms part of the behaviours exhibited by instruction sequences
under execution. mechanisms related to the kind of interaction in question have been proposed in the setting of thread algebra. like thread,
service is an abstract behavioural concept. the concept of a functional
unit is similar to the concept of a service, but more concrete. a state
space is inherent in the concept of a functional unit, whereas it is not
inherent in the concept of a service. in this paper, we establish the existence of a universal computable functional unit for natural numbers, along with
related results.
keywords: functional unit, instruction sequence.
1998 acm computing classification: f.1.1, f.4.1.
| 6 |
abstract
representations are internal models of the environment that can provide guidance to a
behaving agent, even in the absence of sensory information. it is not clear how representations are developed and whether or not they are necessary or even essential for
intelligent behavior. we argue here that the ability to represent relevant features of the
environment is the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it with a measure r.
to measure how r changes over time, we evolve two types of networks—an artificial
neural network and a network of hidden markov gates—to solve a categorization task
using a genetic algorithm. we find that the capacity to represent increases during evolutionary adaptation, and that agents form representations of their environment during
their lifetime. this ability allows the agents to act on sensorial inputs in the context
of their acquired representations and enables complex and context-dependent behavior.
we examine which concepts (features of the environment) our networks are representing, how the representations are logically encoded in the networks, and how they form
as an agent behaves to solve a task. we conclude that r should be able to quantify
| 9 |
abstract
an autonomous computer system (such as a robot) typically
needs to identify, locate, and track persons appearing in its
sight. however, most solutions have their limitations regarding efficiency, practicability, or environmental constraints. in
this paper, we propose an effective and practical system which
combines video and inertial sensors for person identification
(pid). persons who do different activities are easy to identify.
to show the robustness and potential of our system, we propose a walking person identification (wpid) method to identify persons walking at the same time. by comparing features
derived from both video and inertial sensor data, we can associate sensors in smartphones with human objects in videos.
results show that the correct identification rate of our wpid
method can reach up to 76% within 2 seconds.
index terms— artificial intelligence, computer vision,
gait analysis, inertial sensor, walking person identification.
1. introduction
humans navigate the world through five senses, including
taste, touch, smell, hearing, and sight. we sometimes rely
on one sense and sometimes on multiple senses. for computer systems, the optical sensor is perhaps the most essential
sensor, capturing information much as human eyes do. cameras are widely used for public safety and services in hospitals, shopping malls, streets, etc. on the other hand, booming
use of other sensors is seen in many iot applications due to
the advances in wireless communications and mems. in this
work, we like to raise one fundamental question: how can
we improve the perceptivity of computer systems by integrating multiple sensors? more specifically, we are interested in
fusing video and inertial sensor data to achieve person identification (pid), as is shown in fig. 1(b).
efficient pid is the first step toward surveillance, home
security, person tracking, no checkout supermarkets, and
human-robot conversation. traditional pid technologies are
usually based on capturing biological features like face, voice,
tooth, fingerprint, dna, and iris [1–3]. however, these techniques require intimate user information, cumbersome
registration and training processes, and user cooperation. also,
| 1 |
abstract—the increasing penetration of renewable energy in
recent years has led to more uncertainties in power systems.
in order to maintain system reliability and security, electricity
market operators need to keep certain reserves in the security-constrained economic dispatch (sced) problems. a new concept, deliverable generation ramping reserve, is proposed in
this paper. the prices of generation ramping reserves and
generation capacity reserves are derived in the affine adjustable
robust optimization framework. with the help of these prices,
the valuable reserves can be identified among the available
reserves. these prices provide crucial information on the values
of reserve resources, which are critical for the long-term flexibility
investment. the market equilibrium based on these prices is
analyzed. simulations on a 3-bus system and the ieee 118-bus
system are performed to illustrate the concept of ramping reserve
price and capacity reserve price. the impacts of the reserve credit
on market participants are discussed.
index terms—ramping reserve, capacity reserve, marginal
price, uncertainties, affinely adjustable robust optimization
| 3 |
abstract
in this short note we give a formula for the factorization number
f2 (g) of a finite rank 3 abelian p-group g. this extends a result in
our previous work [9].
| 4 |
abstract
in this paper, we determine a class of deep holes for gabidulin codes
with both rank metric and hamming metric. moreover, we give a necessary
and sufficient condition for deciding whether a word is not a deep hole for
gabidulin codes, by which we study the error distance of two special classes
of words to certain gabidulin codes.
keywords: gabidulin codes, rank metric, deep holes, covering radius, error rank
distance
| 7 |
abstract the present paper treats the problem of finding the asymptotic bounds for the globally optimal
locations of orthogonal stiffeners minimizing the compliance of a rectangular plate in elastostatic bending.
the essence of the paper is the utilization of a method of analysis of orthogonally stiffened rectangular plates
first presented by mazurkiewicz in 1962, and obtained herein in a closed form for several special cases in the
approximation of stiffeners having zero torsional rigidity. asymptotic expansions of the expressions for the
deflection field of a stiffened plate are used to derive limit-case globally optimal stiffening layouts for highly
flexible and highly rigid stiffeners. a central result obtained in this work is an analytical proof of the fact that
an array of flexible enough orthogonal stiffeners of any number, stiffening a simply-supported rectangular
plate subjected to any lateral loading, is best to be put in the form of exactly two orthogonal stiffeners, one in
each direction.
keywords elastic plate bending; orthogonal stiffeners; fredholm's 2nd kind integral equation;
asymptotic analysis; globally optimal positions
| 5 |
abstract
we introduce a pair of tools, rasa nlu and rasa core, which are open source
python libraries for building conversational software. their purpose is to make
machine-learning based dialogue management and language understanding accessible to non-specialist software developers. in terms of design philosophy, we aim
for ease of use, and bootstrapping from minimal (or no) initial training data. both
packages are extensively documented and ship with a comprehensive suite of tests.
the code is available at https://github.com/rasahq/
| 2 |
abstract—the blockchain technology has achieved tremendous success in open (permissionless) decentralized consensus
by employing proof-of-work (pow) or its variants, whereby
unauthorized nodes cannot gain disproportionate impact on
consensus beyond their computational power. however, pow-based systems incur high delay and low throughput, making
them ineffective in dealing with real-time applications. on the
other hand, byzantine fault-tolerant (bft) consensus algorithms with better delay and throughput performance have
been employed in closed (permissioned) settings to avoid sybil
attacks. in this paper, we present sybil-proof wireless network
coordinate based byzantine consensus (senate), which is
based on the conventional bft consensus framework yet works
in open systems of wireless devices where faulty nodes may
launch sybil attacks. as in a senate in the legislature where the
quota of senators per state (district) is a constant irrespective
of the population of the state, “senators” in senate are
selected from participating distributed nodes based on their
wireless network coordinates (wnc) with a fixed number of
nodes per district in the wnc space. elected senators then
participate in the subsequent consensus reaching process and
broadcast the result. thereby, senate is proof against sybil
attacks since pseudonyms of a faulty node are likely to be
adjacent in the wnc space and hence fail to be elected.
index terms—byzantine fault-tolerant consensus, sybil attack,
wireless network, permissionless blockchain, distance geometry
| 7 |
abstract—time-division duplex (tdd) based massive mimo
systems rely on the reciprocity of the wireless propagation channels when calculating the downlink precoders based on uplink
pilots. however, the effective uplink and downlink channels
incorporating the analog radio front-ends of the base station
(bs) and user equipments (ues) exhibit non-reciprocity due to
non-identical behavior of the individual transmit and receive
chains. when the downlink precoder is not aware of such channel
non-reciprocity (nrc), system performance can be significantly
degraded due to nrc induced interference terms. in this work,
we consider a general tdd-based massive mimo system where
frequency-response mismatches at both the bs and ues, as well
as the mutual coupling mismatch at the bs large-array system
all coexist and induce channel nrc. based on the nrc-impaired
signal models, we first propose a novel iterative estimation
method for acquiring both the bs and ue side nrc matrices and
then also propose a novel nrc-aware downlink precoder design
which utilizes the obtained estimates. furthermore, an efficient
pilot signaling scheme between the bs and ues is introduced
in order to facilitate executing the proposed estimation method
and the nrc-aware precoding technique in practical systems.
comprehensive numerical results indicate substantially improved
spectral efficiency performance when the proposed nrc estimation and nrc-aware precoding methods are adopted, compared
to the existing state-of-the-art methods.
index terms—beamforming, channel non-reciprocity, channel
state information, frequency-response mismatch, linear precoding, massive mimo, mutual coupling, time division duplexing
(tdd).
| 7 |
abstract
deep generative models are reported to be useful in broad applications including image generation. repeated inference between data space and latent space
in these models can denoise cluttered images and improve the quality of inferred
results. however, previous studies only qualitatively evaluated image outputs in
data space, and the mechanism behind the inference has not been investigated.
the purpose of the current study is to numerically analyze changes in activity patterns of neurons in the latent space of a deep generative model called a “variational
auto-encoder” (vae). we identify what kinds of inference dynamics the vae demonstrates
when noise is added to the input data. the vae embeds a dataset
with clear cluster structures in the latent space and the center of each cluster of
multiple correlated data points (memories) is referred to as the concept. our study
demonstrated that transient dynamics of inference first approaches a concept, and
then moves close to a memory. moreover, the vae revealed that the inference
dynamics approaches a more abstract concept to the extent that the uncertainty
of input data increases due to noise. it was demonstrated that by increasing the
number of the latent variables, the trend of the inference dynamics to approach
a concept can be enhanced, and the generalization ability of the vae can be improved.
| 9 |
abstract
a piecewise-deterministic markov process is a stochastic process whose behavior is governed by an
ordinary differential equation punctuated by random jumps occurring at random times. we focus on
the nonparametric estimation problem of the jump rate for such a stochastic model observed within a
long time interval under an ergodicity condition. we introduce an uncountable class (indexed by the
deterministic flow) of recursive kernel estimates of the jump rate and we establish their strong pointwise
consistency as well as their asymptotic normality. we propose to choose among this class the estimator
with the minimal variance, which is unfortunately unknown and thus remains to be estimated. we also
discuss the choice of the bandwidth parameters by cross-validation methods.
keywords: cross-validation · jump rate · kernel method · nonparametric estimation · piecewise-deterministic markov process
mathematics subject classification (2010): 62m05 · 62g20 · 60j25
| 10 |
abstract. large-scale collaborative analysis of brain imaging data, in psychiatry and neurology, offers a new source of statistical power to discover features
that boost accuracy in disease classification, differential diagnosis, and outcome
prediction. however, due to data privacy regulations or limited accessibility to
large datasets across the world, it is challenging to efficiently integrate distributed information. here we propose a novel classification framework through
multi-site weighted lasso: each site performs an iterative weighted lasso
for feature selection separately. within each iteration, the classification result
and the selected features are collected to update the weighting parameters for
each feature. this new weight is used to guide the lasso process at the next
iteration. only the features that help to improve the classification accuracy are
preserved. in tests on data from five sites (299 patients with major depressive
disorder (mdd) and 258 normal controls), our method boosted classification
accuracy for mdd by 4.9% on average. this result shows the potential of the
proposed new strategy as an effective and practical collaborative platform for
machine learning on large scale distributed imaging and biobank data.
| 5 |
abstract—autonomous aerial robots provide new possibilities
to study the habitats and behaviors of endangered species through
the efficient gathering of location information at temporal and
spatial granularities not possible with traditional manual survey
methods. we present a novel autonomous aerial vehicle system to
track and localize multiple radio-tagged animals. the simplicity
of measuring the received signal strength indicator (rssi) values
of very high frequency (vhf) radio-collars commonly used
in the field is exploited to realize a low cost and lightweight
tracking platform suitable for integration with unmanned aerial
vehicles (uavs). due to uncertainty and the nonlinearity of the
system based on rssi measurements, our tracking and planning
approaches integrate a particle filter for tracking and localization with
a partially observable markov decision process (pomdp) for
dynamic path planning. this approach allows autonomous navigation of a uav in a direction of maximum information gain
to locate multiple mobile animals and reduce exploration time;
and, consequently, conserve on-board battery power. we also
employ the concept of a search termination criteria to maximize
the number of located animals within power constraints of the
aerial system. we validated our real-time and online approach
through both extensive simulations and field experiments with
two mobile vhf radio-tags.
| 3 |
abstract
model distillation compresses a trained machine learning model, such as
a neural network, into a smaller alternative that can be easily
deployed in a resource-limited setting. unfortunately, this requires engineering two architectures: a student architecture smaller than the
teacher architecture but trained to emulate it. in this paper, we present a
distillation strategy that produces a student architecture that is a simple
transformation of the teacher architecture. recent model distillation methods allow us to preserve most of the performance of the trained model
after replacing convolutional blocks with a cheap alternative. in addition,
distillation by attention transfer provides student network performance
that is better than training that student architecture directly on data.
| 1 |
abstract
protocells are supposed to have played a key role in the self-organizing
processes leading to the emergence of life. existing models either (i) describe protocell architecture and dynamics, taking the existence of sets of
collectively self-replicating molecules for granted, or (ii) describe the emergence of the aforementioned sets from an ensemble of random molecules
in a simple experimental setting (e.g. a closed system or a steady-state
flow reactor) that does not properly describe a protocell. in this paper we
present a model that goes beyond these limitations by describing the dynamics of sets of replicating molecules within a lipid vesicle. we adopt the
simplest possible protocell architecture, by considering a semi-permeable
membrane that selects the molecular types that are allowed to enter or
exit the protocell and by assuming that the reactions take place in the
aqueous phase in the internal compartment. as a first approximation,
we ignore the protocell growth and division dynamics. the behavior of
catalytic reaction networks is then simulated by means of a stochastic
model that accounts for the creation and the extinction of species and
reactions. while this is not yet an exhaustive protocell model, it already
provides clues regarding some processes that are relevant for understanding the conditions that can enable a population of protocells to undergo
evolution and selection.
| 5 |
abstract
the problem of content delivery in caching networks is investigated for scenarios where multiple
users request identical files. redundant user demands are likely when the file popularity distribution
is highly non-uniform or the user demands are positively correlated. an adaptive method is proposed
for the delivery of redundant demands in caching networks. based on the redundancy pattern in the
current demand vector, the proposed method decides between the transmission of uncoded messages
or the coded messages of [1] for delivery. moreover, a lower bound on the delivery rate of redundant
requests is derived based on a cutset bound argument. the performance of the adaptive method is
investigated through numerical examples of the delivery rate of several specific demand vectors as well
as the average delivery rate of a caching network with correlated requests. the adaptive method is
shown to considerably reduce the gap between the non-adaptive delivery rate and the lower bound. in
some specific cases, using the adaptive method, this gap shrinks by almost 50% for the average rate.
| 7 |
abstract—in this note we deal with a new observer for
nonlinear systems of dimension n in canonical observability form.
we follow the standard high-gain paradigm, but instead of having
an observer of dimension n with a gain that grows up to power
n, we design an observer of dimension 2n − 2 with a gain that
grows only up to power 2.
| 3 |
abstract. the control problem of a linear discrete-time dynamical system
over a multi-hop network is explored. the network is assumed to be subject to
packet drops by malicious and nonmalicious nodes as well as random and malicious data corruption issues. we utilize asymptotic tail-probability bounds of
transmission failure ratios to characterize the links and paths of a network as
well as the network itself. this probabilistic characterization allows us to take
into account multiple failures that depend on each other, and coordinated malicious attacks on the network. we obtain a sufficient condition for the stability
of the networked control system by utilizing our probabilistic approach. we
then demonstrate the efficacy of our results in different scenarios concerning
transmission failures on a multi-hop network.
| 3 |
abstract
in this article, we advance divide-and-conquer strategies for solving the community detection problem in networks. we propose two algorithms which perform clustering on a number of small subgraphs
and finally patch the results into a single clustering. the main advantage of these algorithms is that
they bring down significantly the computational cost of traditional algorithms, including spectral clustering, semi-definite programs, modularity based methods, likelihood based methods etc., without losing
on accuracy and even improving accuracy at times. these algorithms are also, by nature, parallelizable.
thus, exploiting the facts that most traditional algorithms are accurate and the corresponding optimization problems are much simpler in small problems, our divide-and-conquer methods provide an omnibus
recipe for scaling traditional algorithms up to large networks. we prove consistency of these algorithms
under various subgraph selection procedures and perform extensive simulations and real-data analysis to
understand the advantages of the divide-and-conquer approach in various settings.
| 10 |
abstract
the main aim of this paper is to prove r-triviality for simple, simply connected
algebraic groups with tits index e_{8,2}^{78} or e_{7,1}^{78}, defined over a field k of arbitrary
characteristic. let g be such a group. we prove that there exists a quadratic
extension k of k such that g is r-trivial over k, i.e., for any extension f of
k, g(f )/r = {1}, where g(f )/r denotes the group of r-equivalence classes in
g(f ), in the sense of manin (see [23]). as a consequence, it follows that the
variety g is retract k-rational and that the kneser-tits conjecture holds for these
groups over k. moreover, g(l) is projectively simple as an abstract group for
any field extension l of k. in their monograph ([51]) j. tits and richard weiss
conjectured that for an albert division algebra a over a field k, its structure group
str(a) is generated by scalar homotheties and its u-operators. this is known to
be equivalent to the kneser-tits conjecture for groups with tits index e_{8,2}^{78}. we
settle this conjecture for albert division algebras which are first constructions, in the
affirmative. these results are obtained as corollaries to the main result, which
shows that if a is an albert division algebra which is a first construction and γ
its structure group, i.e., the algebraic group of the norm similarities of a, then
γ(f )/r = {1} for any field extension f of k, i.e., γ is r-trivial.
keywords: exceptional groups, algebraic groups, albert algebras, structure group, kneser-tits
conjecture
| 4 |
abstract
we consider a simple model of unreliable or crowdsourced data where there is an underlying set
of n binary variables, each “evaluator” contributes a (possibly unreliable or adversarial) estimate of the
values of some subset of r of the variables, and the learner is given the true value of a constant number
of variables. we show that, provided an α-fraction of the evaluators are “good” (either correct, or with
independent noise rate p < 1/2), then the true values of a (1 − ǫ) fraction of the n underlying variables
can be deduced as long as α > 1/(2 − 2p)^r. for example, if each “good” worker evaluates a random
set of 10 items and there is no noise in their responses, then accurate recovery is possible provided the
fraction of good evaluators is larger than 1/1024. this result is optimal in that if α ≤ 1/(2 − 2p)^r, the
large dataset can contain no information. this setting can be viewed as an instance of the semi-verified
learning model introduced in [3], which explores the tradeoff between the number of items evaluated by
each worker and the fraction of “good” evaluators. our results require the number of evaluators to be
extremely large, > n^r, although our algorithm runs in linear time, o_{r,ǫ}(n), given query access to the
large dataset of evaluations. this setting and results can also be viewed as examining a general class of
semi-adversarial csps with a planted assignment.
this extreme parameter regime, where the fraction of reliable data is small (inverse exponential in
the amount of data provided by each source), is relevant to a number of practical settings. for example,
settings where one has a large dataset of customer preferences, with each customer specifying preferences for a small (constant) number of items, and the goal is to ascertain the preferences of a specific
demographic of interest. our results show that this large dataset (which lacks demographic information)
can be leveraged together with the preferences of the demographic of interest for a constant number
of randomly selected items, to recover an accurate estimate of the entire set of preferences, even if the
fraction of the original dataset contributed by the demographic of interest is inverse exponential in the
number of preferences supplied by each customer. in this sense, our results can be viewed as a “data
prism” allowing one to extract the behavior of specific cohorts from a large, mixed, dataset.
| 7 |
abstract—facial beauty prediction (fbp) is a significant visual
recognition problem to make assessment of facial attractiveness
that is consistent to human perception. to tackle this problem, various data-driven models, especially state-of-the-art deep
learning techniques, were introduced, and benchmark dataset
become one of the essential elements to achieve fbp. previous
works have formulated the recognition of facial beauty as a
specific supervised learning problem of classification, regression
or ranking, which indicates that fbp is intrinsically a computation problem with multiple paradigms. however, most
fbp benchmark datasets were built under specific computational
constraints, which limits the performance and flexibility of the
computational model trained on the dataset. in this paper, we
argue that fbp is a multi-paradigm computation problem, and
propose a new diverse benchmark dataset, called scut-fbp5500,
to achieve multi-paradigm facial beauty prediction. the scut-fbp5500 dataset contains in total 5500 frontal faces with diverse
properties (male/female, asian/caucasian, ages) and diverse labels (face landmarks, beauty scores within [1, 5], beauty score
distribution), which allows different computational models with
different fbp paradigms, such as appearance-based/shape-based
facial beauty classification/regression model for male/female of
asian/caucasian. we evaluated the scut-fbp5500 dataset for
fbp using different combinations of feature and predictor,
and various deep learning methods. the results indicate the
improvement of fbp and the potential applications based on the
scut-fbp5500.
| 1 |
abstract
in general the different links of a broadcast channel may experience different fading dynamics
and, potentially, unequal or hybrid channel state information (csi) conditions. the faster the fading
and the shorter the fading block length, the more often the link needs to be trained and estimated at
the receiver, and the more likely that csi is stale or unavailable at the transmitter. disparity of link
fading dynamics in the presence of csi limitations can be modeled by a multi-user broadcast channel
with both non-identical link fading block lengths as well as dissimilar link csir/csit conditions.
this paper investigates a miso broadcast channel where some receivers experience longer coherence
intervals (static receivers) and have csir, while some other receivers experience shorter coherence
intervals (dynamic receivers) and do not enjoy free csir. we consider a variety of csit conditions
for the above mentioned model, including no csit, delayed csit, or hybrid csit. to investigate the
degrees of freedom region, we employ interference alignment and beamforming along with a product
superposition that allows simultaneous but non-contaminating transmission of pilots and data to different
receivers. outer bounds employ the extremal entropy inequality as well as a bounding of the performance
of a discrete memoryless multiuser multilevel broadcast channel. for several cases, inner and outer
bounds are established that either partially meet, or the gap diminishes with increasing coherence times.
| 7 |
abstract
in the last fifteen years, the high performance computing (hpc) community has called for parallel programming environments that reconcile generality, a higher level of abstraction, portability, and efficiency for
distributed-memory parallel computing platforms. the hash component
model appears as an alternative for addressing the hpc community's demand
for these requirements. this paper presents foundations that will
enable a parallel programming environment based on the hash model to
address the problems of “debugging”, performance evaluation and verification of formal properties of parallel program by means of a powerful,
simple, and widely adopted formalism: petri nets.
| 6 |
abstract
distributed actor languages are an effective means of constructing scalable reliable systems, and the erlang
programming language has a well-established and influential model. while the erlang model conceptually
provides reliable scalability, it has some inherent scalability limits and these force developers to depart from
the model at scale. this article establishes the scalability limits of erlang systems, and reports the work of
the eu release project to improve the scalability and understandability of the erlang reliable distributed
actor model.
we systematically study the scalability limits of erlang, and then address the issues at the virtual machine,
language and tool levels. more specifically: (1) we have evolved the erlang virtual machine so that it can
work effectively in large scale single-host multicore and numa architectures. we have made important
changes and architectural improvements to the widely used erlang/otp release. (2) we have designed and
implemented scalable distributed (sd) erlang libraries to address language-level scalability issues, and
provided and validated a set of semantics for the new language constructs. (3) to make large erlang systems
easier to deploy, monitor, and debug we have developed and made open source releases of five complementary
tools, some specific to sd erlang.
throughout the article we use two case studies to investigate the capabilities of our new technologies and
tools: a distributed hash table based orbit calculation and ant colony optimisation (aco). chaos monkey
experiments show that two versions of aco survive random process failure and hence that sd erlang
preserves the erlang reliability model. while we report measurements on a range of numa and cluster
architectures, the key scalability experiments are conducted on the athos cluster with 256 hosts (6144 cores).
even for programs with no global recovery data to maintain, sd erlang partitions the network to reduce
network traffic and hence improves performance of the orbit and aco benchmarks above 80 hosts. aco
measurements show that maintaining global recovery data dramatically limits scalability; however scalability
is recovered by partitioning the recovery data. we exceed the established scalability limits of distributed
erlang, and do not reach the limits of sd erlang for these benchmarks at this scale (256 hosts, 6144 cores).
| 6 |
abstract. pseudo-code is a great way of communicating ideas quickly and clearly while
giving readers no chance to understand the subtle implementation details (particularly the custom
toolchains and manual interventions) that actually make it work.
3. short and sweet. any limitations of your methods or proofs will be obvious to the careful reader,
so there is no need to waste space on making them explicit. however much work it takes colleagues
to fill in the gaps, you will still get the credit if you just say you have amazing experiments or
proofs (with a hat-tip to pierre de fermat: “cuius rei demonstrationem mirabilem sane detexi
hanc marginis exiguitas non caperet.”).
4. the deficit model. you’re the expert in the domain, only you can define what algorithms and
data to run experiments with. in the unhappy circumstance that your methods do not do well on
| 5 |
abstract. let g be the free two step nilpotent lie group on three generators and let
l be a sublaplacian on it. we compute the spectral resolution of l and prove that the
operators arising from this decomposition enjoy a tomas-stein type estimate.
| 4 |
abstract
| 6 |
abstract. let k be a field of characteristic 0 and consider exterior algebras of finite dimensional k-vector spaces. in this short paper we exhibit
principal quadric ideals in a family whose castelnuovo-mumford regularity is
unbounded. this negatively answers the analogue of stillman’s question for
exterior algebras posed by i. peeva. we show that these examples are dual to
modules over polynomial rings that yield counterexamples to a conjecture of
j. herzog on the betti numbers in the linear strand of syzygy modules.
| 0 |
abstract
recruitment market analysis provides valuable understanding of industry-specific economic growth and plays an important role for both employers and job seekers. with the
rapid development of online recruitment services, massive
recruitment data have been accumulated and enable a new
paradigm for recruitment market analysis. however, traditional methods for recruitment market analysis largely rely
on the knowledge of domain experts and classic statistical
models, which are usually too general to model large-scale
dynamic recruitment data, and have difficulties to capture
the fine-grained market trends. to this end, in this paper,
we propose a new research paradigm for recruitment market
analysis by leveraging unsupervised learning techniques for
automatically discovering recruitment market trends based
on large-scale recruitment data. specifically, we develop
a novel sequential latent variable model, named mtlvm,
which is designed for capturing the sequential dependencies
of corporate recruitment states and is able to automatically
learn the latent recruitment topics within a bayesian generative framework. in particular, to capture the variability of recruitment topics over time, we design hierarchical
dirichlet processes for mtlvm. these processes allow us to
dynamically generate the evolving recruitment topics. finally, we implement a prototype system to empirically evaluate our approach based on real-world recruitment data in
china. indeed, by visualizing the results from mtlvm, we
can successfully reveal many interesting findings, such as the
popularity of lbs related jobs reached the peak in the 2nd
half of 2014, and decreased in 2015.
| 2 |
abstract
this work studies the strong duality of non-convex matrix factorization problems: we show that
under certain dual conditions, these problems and their duals have the same optimum. this has been well
understood for convex optimization, but little was known for non-convex problems. we propose a novel
analytical framework and show that under certain dual conditions, the optimal solution of the matrix
factorization program is the same as its bi-dual and thus the global optimality of the non-convex program
can be achieved by solving its bi-dual which is convex. these dual conditions are satisfied by a wide
class of matrix factorization problems, although matrix factorization problems are hard to solve in full
generality. this analytical framework may be of independent interest to non-convex optimization more
broadly.
we apply our framework to two prototypical matrix factorization problems: matrix completion and
robust principal component analysis (pca). these are examples of efficiently recovering a hidden matrix
given limited reliable observations of it. our framework shows that exact recoverability and strong duality
hold with nearly-optimal sample complexity guarantees for matrix completion and robust pca.
| 8 |
abstract
the heat kernel is a type of graph diffusion that, like the
much-used personalized pagerank diffusion, is useful in identifying a community nearby a starting seed node. we present
the first deterministic, local algorithm to compute this diffusion and use that algorithm to study the communities that it
produces. our algorithm is formally a relaxation method for
solving a linear system to estimate the matrix exponential
in a degree-weighted norm. we prove that this algorithm
stays localized in a large graph and has a worst-case constant runtime that depends only on the parameters of the
diffusion, not the size of the graph. on large graphs, our experiments indicate that the communities produced by this
method have better conductance than those produced by
pagerank, although they take slightly longer to compute.
on a real-world community identification task, the heat kernel communities perform better than those from the pagerank diffusion.
| 8 |
abstract
event sequence, asynchronously generated with random
timestamp, is ubiquitous among applications. the precise and
arbitrary timestamp can carry important clues about the underlying dynamics, and makes event data fundamentally
different from time series, which are indexed with
fixed and equal time intervals. one expressive mathematical
tool for modeling events is the point process. the intensity functions of many point processes involve two components: the
background and the effect of the history. due to its inherent spontaneousness, the background can be treated as a time
series, while the other component needs to handle the history events. in
this paper, we model the background by a recurrent neural
network (rnn) with its units aligned with time series indexes while the history effect is modeled by another rnn
whose units are aligned with asynchronous events to capture the long-range dynamics. the whole model with event
type and timestamp prediction output layers can be trained
end-to-end. our approach takes an rnn perspective to point
process, and models its background and history effect. for
utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form
in point processes. meanwhile end-to-end training opens the
venue for reusing existing rich techniques in deep network for
point process modeling. we apply our model to the predictive
maintenance problem using a log dataset by more than 1000
atms from a global bank headquartered in north america.
| 2 |
abstract—in this paper, we present the role playing learning
(rpl) scheme for a mobile robot to navigate socially with its
human companion in populated environments. neural networks
(nn) are constructed to parameterize a stochastic policy that
directly maps sensory data collected by the robot to its velocity
outputs, while respecting a set of social norms. an efficient simulative learning environment is built with maps and pedestrian
trajectories collected from a number of real-world crowd data
sets. in each learning iteration, a robot equipped with the nn
policy is created virtually in the learning environment to play
the role of an accompanying pedestrian and navigate towards a goal in
a socially concomitant manner. thus, we call this process role
playing learning, which is formulated under a reinforcement
learning (rl) framework. the nn policy is optimized end-to-end using trust region policy optimization (trpo), with consideration of the imperfection of the robot's sensor measurements.
simulative and experimental results are provided to demonstrate
the efficacy and superiority of our method.
| 2 |
abstract:
this paper continues the research started in lepski and willer (2016). in the framework of
the convolution structure density model on rd , we address the problem of adaptive minimax
estimation with lp –loss over the scale of anisotropic nikol’skii classes. we fully characterize
the behavior of the minimax risk for different relationships between regularity parameters and
norm indexes in the definitions of the functional class and of the risk. in particular, we show
that the boundedness of the function to be estimated leads to an essential improvement of
the asymptotics of the minimax risk. we prove that the selection rule proposed in part i leads
to the construction of an optimally or nearly optimally (up to a logarithmic factor) adaptive
estimator.
ams 2000 subject classifications: 62g05, 62g20.
keywords and phrases: deconvolution model, density estimation, oracle inequality, adaptive estimation, kernel estimators, lp –risk, anisotropic nikol’skii class.
| 10 |
abstract: cumulative link models have been widely used for ordered categorical
responses. uniform allocation of experimental units is commonly used in practice,
but often suffers from a lack of efficiency. we consider d-optimal designs with
ordered categorical responses and cumulative link models. for a predetermined set
of design points, we derive the necessary and sufficient conditions for an allocation
to be locally d-optimal and develop efficient algorithms for obtaining approximate
and exact designs. we prove that the number of support points in a minimally
supported design only depends on the number of predictors, which can be much
less than the number of parameters in the model. we show that a d-optimal
minimally supported allocation in this case is usually not uniform on its support
points. in addition, we provide ew d-optimal designs as a highly efficient surrogate
to bayesian d-optimal designs. both of them can be much more robust than
uniform designs.
key words and phrases: approximate design, exact design, multinomial response,
cumulative link model, minimally supported design, ordinal data.
| 10 |
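The abstract above contrasts uniform allocation with d-optimal allocation, which maximizes the determinant of the fisher information matrix. The toy below illustrates that criterion on a simple linear model f(x) = (1, x) over three candidate design points rather than a cumulative link model; the model, points, and weights are illustrative assumptions, and the minimally supported design beating the uniform one mirrors the phenomenon the abstract describes.

```python
def d_criterion(points, weights):
    """det of the 2x2 information matrix M(w) = sum_i w_i f(x_i) f(x_i)^T
    for the linear regressor f(x) = (1, x)."""
    m00 = sum(weights)                                   # sum of allocation weights
    m01 = sum(w * x for x, w in zip(points, weights))    # cross term
    m11 = sum(w * x * x for x, w in zip(points, weights))
    return m00 * m11 - m01 * m01

points = [-1.0, 0.0, 1.0]
print(d_criterion(points, [1/3, 1/3, 1/3]))  # uniform allocation: det = 2/3
print(d_criterion(points, [0.5, 0.0, 0.5]))  # minimally supported on {-1, 1}: det = 1
```

Here the d-optimal allocation drops the middle point entirely, a two-point support for a two-parameter model, echoing the abstract's claim that the number of support points can be far below the number of design points.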
abstract
| 2 |
abstract. we define and study generalized nil-coxeter algebras associated
to coxeter groups. motivated by a question of coxeter (1957), we construct the first examples of such finite-dimensional algebras that are not the
‘usual’ nil-coxeter algebras: a novel 2-parameter type a family that we call
nc_a(n, d). we explore several combinatorial properties of nc_a(n, d), including its coxeter word basis, length function, and hilbert–poincaré series,
and show that the corresponding generalized coxeter group is not a flat deformation of nc_a(n, d). these algebras yield symmetric semigroup module
categories that are necessarily not monoidal; we write down their tannaka–
krein duality.
further motivated by the broué–malle–rouquier (bmr) freeness conjecture [j. reine angew. math. 1998], we define generalized nil-coxeter algebras nc_w over all discrete real or complex reflection groups w, finite or
infinite. we provide a complete classification of all such algebras that are
finite-dimensional. remarkably, these turn out to be either the usual nil-coxeter algebras or the algebras nc_a(n, d). this proves as a special case –
and strengthens – the lack of equidimensional nil-coxeter analogues for finite
complex reflection groups. in particular, generic hecke algebras are not flat
deformations of nc_w for w complex.
| 4 |
abstract
reinforcement learning (rl) is a promising
approach to solve dialogue policy optimisation. traditional rl algorithms, however, fail
to scale to large domains due to the curse
of dimensionality. we propose a novel dialogue management architecture, based on feudal rl, which decomposes the decision into
two steps: a first step in which a master policy
selects a subset of primitive actions, and a second step in which a primitive action is chosen
from the selected subset. the structural information included in the domain ontology is
used to abstract the dialogue state space, making the decisions at each step using different
parts of the abstracted state. this, combined
with an information sharing mechanism between slots, increases the scalability to large
domains. we show that an implementation
of this approach, based on deep-q networks,
significantly outperforms the previous state of the
art in several dialogue domains and environments, without the need for any additional reward signal.
| 2 |
abstract. learning the model parameters of a multi-object dynamical system from partial and
perturbed observations is a challenging task. despite recent numerical advancements in learning these
parameters, theoretical guarantees are extremely scarce. in this article, we study the identifiability
of these parameters and the consistency of the corresponding maximum likelihood estimate (mle)
under assumptions on the different components of the underlying multi-object system. in order to
understand the impact of the various sources of observation noise on the ability to learn the model
parameters, we study the asymptotic variance of the mle through the associated fisher information
matrix. for example, we show that specific aspects of the multi-target tracking (mtt) problem such
as detection failures and unknown data association lead to a loss of information which is quantified
in special cases of interest.
key words. identifiability, consistency, fisher information
ams subject classifications. 62f12, 62b10
| 10 |
abstract—massive multiple-input multiple-output (mimo) systems, which utilize a large number of antennas at the base
station, are expected to enhance network throughput by enabling
improved multiuser mimo techniques. to deploy many antennas
in reasonable form factors, base stations are expected to employ
antenna arrays in both horizontal and vertical dimensions, which
is known as full-dimension (fd) mimo. the most popular
two-dimensional array is the uniform planar array (upa),
where antennas are placed in a grid pattern. to exploit the
full benefit of massive mimo in frequency division duplexing
(fdd), the downlink channel state information (csi) should be
estimated, quantized, and fed back from the receiver to the
transmitter. however, it is difficult to accurately quantize the
channel in a computationally efficient manner due to the high
dimensionality of the massive mimo channel. in this paper, we
develop both narrowband and wideband csi quantizers for fd-mimo, taking the properties of realistic channels and the upa
into consideration. to improve quantization quality, we focus
on not only quantizing dominant radio paths in the channel,
but also combining the quantized beams. we also develop a
hierarchical beam search approach, which scans both vertical
and horizontal domains jointly with moderate computational
complexity. numerical simulations verify that the performance
of the proposed quantizers is better than that of previous csi
quantization techniques.
| 7 |
abstract
we initiate a thorough study of distributed property testing – producing algorithms for the
approximation problems of property testing in the congest model. in particular, for the
so-called dense graph testing model we emulate sequential tests for nearly all graph properties
having 1-sided tests, while in the general and sparse models we obtain faster tests for triangle-freeness, cycle-freeness and bipartiteness, respectively. in addition, we show a logarithmic lower
bound for testing bipartiteness and cycle-freeness, which holds even in the stronger local
model.
in most cases, aided by parallelism, the distributed algorithms have a much shorter running
time as compared to their counterparts from the sequential querying model of traditional property testing. the simplest property testing algorithms allow a relatively smooth transition
to the distributed model. for the more complex tasks we develop new machinery that may be
of independent interest.
| 8 |
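For intuition about the sequential tests the abstract above emulates, here is a minimal sketch of a classical 1-sided property tester for triangle-freeness: sample vertex triples and reject only on a witness triangle. The adjacency-set representation, sample budget, and seed are illustrative assumptions, and this is the sequential querying model, not the paper's congest-model algorithm.

```python
import random

def triangle_free_test(adj, n, samples=200, seed=0):
    """1-sided tester: always accepts triangle-free graphs; rejects graphs
    containing triangles with probability growing in `samples`."""
    rng = random.Random(seed)
    for _ in range(samples):
        u, v, w = rng.sample(range(n), 3)          # random vertex triple
        if v in adj[u] and w in adj[v] and u in adj[w]:
            return False                           # witness triangle: certain rejection
    return True                                    # no triangle seen: accept

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}   # contains triangle 0-1-2
path = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}             # triangle-free
print(triangle_free_test(triangle, 4), triangle_free_test(path, 4))
```

The 1-sidedness is what makes the distributed emulation clean: a rejection always comes with a concrete local witness.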
abstract. standard higher-order contract monitoring breaks tail recursion and leads to space leaks that can change a program’s asymptotic complexity; space-efficiency restores tail recursion and bounds the
amount of space used by contracts. space-efficient contract monitoring
for contracts enforcing simple type disciplines (a/k/a gradual typing) is
well studied. prior work establishes a space-efficient semantics for manifest contracts without dependency [11]; we adapt that work to a latent
calculus with dependency. we guarantee space efficiency when no dependency is used; we cannot generally guarantee space efficiency when
dependency is used, but instead offer a framework for making such programs space efficient on a case-by-case basis.
| 6 |
abstract
recent work on neural network pruning indicates that, at training time, neural
networks need to be significantly larger in size than is necessary to represent the
eventual functions that they learn. this paper articulates a new hypothesis to explain
this phenomenon. this conjecture, which we term the lottery ticket hypothesis,
proposes that successful training depends on lucky random initialization of a
smaller subcomponent of the network. larger networks have more of these “lottery
tickets,” meaning they are more likely to luck out with a subcomponent initialized
in a configuration amenable to successful optimization.
this paper conducts a series of experiments with xor and mnist that support
the lottery ticket hypothesis. in particular, we identify these fortuitously-initialized
subcomponents by pruning low-magnitude weights from trained networks. we then
demonstrate that these subcomponents can be successfully retrained in isolation
so long as the subnetworks are given the same initializations as they had at the
beginning of the training process. initialized as such, these small networks reliably
converge successfully, often faster than the original network at the same level
of accuracy. however, when these subcomponents are randomly reinitialized or
rearranged, they perform worse than the original network. in other words, large
networks that train successfully contain small subnetworks with initializations
conducive to optimization.
the lottery ticket hypothesis and its connection to pruning are a step toward
developing architectures, initializations, and training strategies that make it possible
to solve the same problems with much smaller networks.
| 2 |
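The pruning-and-reset procedure the abstract above describes can be sketched in a few lines: keep only the largest-magnitude trained weights, then reset the survivors to their original initial values. Everything here is a toy stand-in, the "training" is a random rescaling and the sizes are arbitrary, so this only illustrates the mask-and-reset mechanics, not the paper's experiments.

```python
import random

def magnitude_prune_mask(weights, keep_frac=0.3):
    """Keep the largest-magnitude fraction of trained weights (the 'winning ticket')."""
    k = max(1, int(len(weights) * keep_frac))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [abs(w) >= threshold for w in weights]

random.seed(0)
init_weights = [random.gauss(0, 1) for _ in range(10)]                   # initialization to restore
trained_weights = [w * random.uniform(0.1, 2.0) for w in init_weights]   # stand-in for training

mask = magnitude_prune_mask(trained_weights)
# reset surviving weights to their ORIGINAL initial values, as the hypothesis prescribes;
# zeroed entries correspond to pruned connections
ticket = [w0 if keep else 0.0 for w0, keep in zip(init_weights, mask)]
print(sum(mask), "weights survive:", ticket)
```

The key point the abstract makes is the reset step: retraining the same subnetwork from a fresh random initialization, rather than from `init_weights`, destroys the effect.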
abstract
this paper continues the functional approach to the p-versus-np problem, begun in [2]. here
we focus on the monoid rmp2 of right-ideal morphisms of the free monoid that have polynomial
input balance and polynomial time-complexity. we construct a machine model for the functions
in rmp2 , and evaluation functions. we prove that rmp2 is not finitely generated, and use this to
show separation results for time-complexity.
| 4 |
abstract
| 9 |
abstract
we address the fundamental network design problem of constructing approximate minimum
spanners. our contributions are for the distributed setting, providing both algorithmic and
hardness results.
our main hardness result shows that an α-approximation for the minimum directed k-spanner
problem for k ≥ 5 requires ω(n/(√α log n)) rounds using deterministic algorithms or
ω(√n/(√α log n)) rounds using randomized ones, in the congest model of distributed computing.
combined with the constant-round o(n^ε)-approximation algorithm in the local model of
[barenboim, elkin and gavoille, 2016], as well as a polylog-round (1 + ε)-approximation algorithm
in the local model that we show here, our lower bounds for the congest model imply a strict
separation between the local and congest models. notably, to the best of our knowledge,
this is the first separation between these models for a local approximation problem.
similarly, a separation between the directed and undirected cases is implied. we also prove
that the minimum weighted k-spanner problem for k ≥ 4 requires a near-linear number of rounds
in the congest model, for directed or undirected graphs. in addition, we show lower bounds
for the minimum weighted 2-spanner problem in the congest and local models.
on the algorithmic side, apart from the aforementioned (1 + ε)-approximation algorithm
for minimum k-spanners, our main contribution is a new distributed construction of minimum
2-spanners that uses only polynomial local computations. our algorithm has a guaranteed
approximation ratio of o(log(m/n)) for a graph with n vertices and m edges, which matches
the best known ratio for polynomial time sequential algorithms [kortsarz and peleg, 1994],
and is tight if we restrict ourselves to polynomial local computations. an algorithm with this
approximation factor was not previously known for the distributed setting. the number of rounds
required for our algorithm is o(log n log ∆) w.h.p, where ∆ is the maximum degree in the graph.
our approach allows us to extend our algorithm to work also for the directed, weighted, and
client-server variants of the problem. it also provides a congest algorithm for the minimum
dominating set problem, with a guaranteed o(log ∆) approximation ratio.
| 8 |
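As background for the spanner constructions in the abstract above, here is a minimal sketch of the classic sequential greedy spanner: keep an edge only if the spanner built so far does not already connect its endpoints within the stretch bound. This is the standard centralized baseline under an assumed unweighted adjacency-set representation, not the paper's distributed congest algorithm or its approximation guarantee.

```python
from collections import deque

def bfs_dist(adj, s, t):
    # unweighted shortest-path distance between s and t in the current spanner
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return float("inf")

def greedy_spanner(n, edges, stretch=2):
    """Keep edge (u, v) only if its endpoints are not already within
    `stretch` hops of each other in the spanner built so far."""
    adj = {v: set() for v in range(n)}
    kept = []
    for u, v in edges:
        if bfs_dist(adj, u, v) > stretch:
            adj[u].add(v)
            adj[v].add(u)
            kept.append((u, v))
    return kept

# complete graph on 4 vertices: a star centered at vertex 0 is a valid 2-spanner
edges = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(greedy_spanner(4, edges, stretch=2))  # → [(0, 1), (0, 2), (0, 3)]
```

By construction every discarded edge has a replacement path of at most `stretch` hops, so the output is a valid spanner; the minimization and distribution questions studied in the abstract are about doing much better than this simple baseline.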