text (string, lengths 8–3.91k) | label (int64, 0–10)
---|---
abstract. suppose an orientation-preserving action of a finite group
g on the closed surface σg of genus g > 1 extends over the 3-torus t 3 for
some embedding σg ⊂ t 3 . then |g| ≤ 12(g − 1), and this upper bound
12(g − 1) can be achieved for g = n2 + 1, 3n2 + 1, 2n3 + 1, 4n3 + 1, 8n3 +
1, n ∈ z+ . those surfaces in t 3 realizing the maximum symmetries
can be either unknotted or knotted. similar problems in the non-orientable
category are also discussed.
the connection with minimal surfaces in t 3 is addressed, and we identify
when the maximum symmetric surfaces above can be realized by minimal
surfaces.
| 4 |
abstract. hypergroups are lifted to power semigroups with negation, yielding a method of transferring
results from semigroup theory. this applies to analogous structures such as hypergroups, hyperfields,
and hypermodules, and permits us to transfer the general theory espoused in [19] to the hypertheory.
| 0 |
abstract—general video game playing (gvgp) aims at
designing an agent that is capable of playing multiple video
games with no human intervention. in 2014, the general video
game ai (gvgai) competition framework was created and
released with the purpose of providing researchers a common
open-source and easy-to-use platform for testing their ai
methods with a potentially infinite number of games created using the video
game description language (vgdl). the framework has been
expanded into several tracks during the last few years to meet the
demand of different research directions. the agents are required
either to play multiple unknown games, with or without access
to game simulations, or to design new game levels or rules.
this survey paper presents the vgdl, the gvgai framework,
existing tracks, and reviews the wide use of the gvgai framework
in research, education and competitions five years after its birth.
a future plan of framework improvements is also described.
| 2 |
abstract diagnosis for tccp using a linear
temporal logic
marco comini, laura titolo
dimi, università degli studi di udine, italy
(e-mail: [email protected])
| 6 |
abstract
we prove that every non-trivial valuation on an infinite superrosy field of
positive characteristic has divisible value group and algebraically closed residue
field. in fact, we prove the following more general result. let k be a field such
that for every finite extension l of k and for every natural number n > 0 the
index [l∗ : (l∗ )n ] is finite and, if char(k) = p > 0 and f : l → l is given
by f (x) = xp − x, the index [l+ : f [l]] is also finite. then either there is a
non-trivial definable valuation on k, or every non-trivial valuation on k has
divisible value group and, if char(k) > 0, it has algebraically closed residue
field. in the zero characteristic case, we get some partial results of this kind.
we also notice that minimal fields have the property that every non-trivial
valuation has divisible value group and algebraically closed residue field.
| 0 |
abstract
medium voltage direct-current based integrated power system is projected as
one of the solutions for powering the all-electric ship. it faces significant challenges in accurately energizing advanced loads, especially the pulsed power
load, which can be a rail gun, a high-power radar, or other state-of-the-art equipment. energy storage based on supercapacitors is proposed as a technique
for buffering the direct impact of pulsed power load on the power systems.
however, the high magnitude of the charging current of the energy storage can
pose a disturbance to both distribution and generation systems. this paper presents a fast switching device based real time control system that can
achieve a desired balance between maintaining the required power quality
and fast charging the energy storage in required time. test results are shown
to verify the performance of the proposed control algorithm.
keywords: medium voltage direct-current based integrated power system,
pulsed power load, power quality, disturbance metric, real time control
1. introduction
research related to navy shipboard power systems raises a critical concern
regarding system stability due to diverse loads. similar to microgrids,
navy shipboard power systems do not have a slack bus [1]. such a system can be viewed as
a microgrid always operating in islanding mode. compared with a typical terrestrial microgrid, the ratio between the overall load and generation is much
higher [2]. although new avenues such as zonal load architecture [3], high
preprint submitted to international journal of electrical power and energy systems, january 18, 2018
| 3 |
abstract. let v be a symplectic vector space of dimension 2n. given a partition λ with at most n
parts, there is an associated irreducible representation s[λ] (v ) of sp(v ). this representation admits
a resolution by a natural complex lλ• , which we call the littlewood complex, whose terms are
restrictions of representations of gl(v ). when λ has more than n parts, the representation s[λ] (v )
is not defined, but the littlewood complex lλ• still makes sense. the purpose of this paper is to
compute its homology. we find that either lλ• is acyclic or it has a unique non-zero homology group,
which forms an irreducible representation of sp(v ). the non-zero homology group, if it exists, can
be computed by a rule reminiscent of that occurring in the borel–weil–bott theorem. this result
can be interpreted as the computation of the “derived specialization” of irreducible representations
of sp(∞), and as such categorifies earlier results of koike–terada on universal character rings. we
prove analogous results for orthogonal and general linear groups. along the way, we will see two
topics from commutative algebra: the minimal free resolutions of determinantal ideals and koszul
homology.
| 0 |
abstract
we model an individual t2dm patient's blood glucose level (bgl) by a stochastic process with a discrete
number of states, mainly but not solely governed by the medication regimen (e.g. insulin injections). bgl
states otherwise change according to various physiological triggers, which render the process stochastic,
statistically unknown, yet assumed to be quasi-stationary. in order to express the incentive for staying
at a desired healthy bgl, we heuristically define a reward function which returns positive values for desirable
bg levels and negative values for undesirable ones. the state space consists of a sufficient number of
states to allow a memoryless assumption. this, in turn, allows us to formulate a markov decision
process (mdp), with the objective of maximizing the total reward accumulated over a long run. the
probability law is found by model-based reinforcement learning (rl) and the optimal insulin treatment
policy is retrieved from the mdp solution.
| 3 |
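the mdp formulation in the abstract above can be sketched with a toy model. the five bgl states, the transition matrices for the two actions ("no insulin" / "inject insulin"), and the reward values below are purely hypothetical illustrations, not taken from the paper; value iteration then recovers a treatment policy:

```python
import numpy as np

# hypothetical toy model: 5 bgl states (0 = severe hypo ... 4 = severe hyper)
# and 2 actions (0 = no insulin, 1 = inject insulin); numbers are illustrative
P = np.zeros((2, 5, 5))
P[0] = [[0.7, 0.3, 0.0, 0.0, 0.0],   # without insulin, bgl drifts upward
        [0.1, 0.6, 0.3, 0.0, 0.0],
        [0.0, 0.1, 0.5, 0.4, 0.0],
        [0.0, 0.0, 0.1, 0.5, 0.4],
        [0.0, 0.0, 0.0, 0.3, 0.7]]
P[1] = [[0.9, 0.1, 0.0, 0.0, 0.0],   # insulin pushes bgl downward
        [0.4, 0.5, 0.1, 0.0, 0.0],
        [0.1, 0.5, 0.3, 0.1, 0.0],
        [0.0, 0.3, 0.5, 0.2, 0.0],
        [0.0, 0.1, 0.4, 0.4, 0.1]]
# reward: positive in the healthy mid-range, negative at the extremes
R = np.array([-10.0, 0.0, 5.0, 0.0, -10.0])

gamma = 0.95
V = np.zeros(5)
for _ in range(500):                      # value iteration to a fixed point
    Q = R[None, :] + gamma * (P @ V)      # shape (2, 5): action-values
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=0)                 # greedy policy per state
print(policy)
```

as expected for this toy model, the greedy policy injects insulin in hyperglycemic states and withholds it in hypoglycemic ones.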
abstract smoothness conditions and propose
an orthonormal series estimator which attains the optimal rate of convergence. the performance of the estimator depends on the correct specification of a dimension parameter whose
optimal choice relies on smoothness characteristics of both the intensity and the error density. since a priori knowledge of such characteristics is too strong an assumption, we propose
a data-driven choice of the dimension parameter based on model selection and show that
the adaptive estimator either attains the minimax optimal rate or is suboptimal only by a
logarithmic factor.
| 10 |
abstract
| 9 |
abstract
we consider the problem of estimating a low-rank signal matrix from noisy measurements
under the assumption that the distribution of the data matrix belongs to an exponential family. in this setting, we derive generalized stein’s unbiased risk estimation (sure) formulas
that hold for any spectral estimators which shrink or threshold the singular values of the
data matrix. this leads to new data-driven spectral estimators, whose optimality is discussed using tools from random matrix theory and through numerical experiments. under
the spiked population model and in the asymptotic setting where the dimensions of the data
matrix tend to infinity, some theoretical properties of our approach are compared
to recent results on asymptotically optimal shrinkage rules for gaussian noise. it also leads
to new procedures for singular value shrinkage in finite-dimensional matrix denoising for
gamma-distributed and poisson-distributed measurements.
| 10 |
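the spectral estimators discussed in the abstract above all act by shrinking or thresholding singular values. as a minimal numpy illustration (gaussian noise and a fixed threshold, not the data-driven sure choice from the paper), singular value soft-thresholding already improves on the raw observation:

```python
import numpy as np

rng = np.random.default_rng(0)

# low-rank signal plus gaussian noise (an illustrative stand-in for the
# exponential-family observation models discussed in the abstract)
n, p, r = 50, 40, 3
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, p))  # rank-3 signal
Y = X0 + 0.1 * rng.standard_normal((n, p))

def svt(Y, tau):
    """singular value soft-thresholding: shrink each singular value by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Xhat = svt(Y, tau=1.0)
err_raw = np.linalg.norm(Y - X0)      # error of the noisy observation
err_svt = np.linalg.norm(Xhat - X0)   # error after shrinkage
print(err_svt < err_raw)
```

the threshold tau = 1.0 is hand-picked here; the point of the paper is precisely to choose such shrinkage rules in a data-driven way via sure.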
abstract. the k-restricted edge-connectivity of a graph g, denoted by
λk (g), is defined as the minimum size of an edge set whose removal leaves
exactly two connected components each containing at least k vertices.
this graph invariant, which can be seen as a generalization of a minimum edge-cut, has been extensively studied from a combinatorial point
of view. however, very little is known about the complexity of computing λk (g). very recently, the notion of a good edge
separation of a graph has been defined in the parameterized complexity community, which
happens to be essentially the same as the k-restricted edge-connectivity.
motivated by the relevance of this invariant from both combinatorial and
algorithmic points of view, in this article we initiate a systematic study of
its computational complexity, with special emphasis on its parameterized
complexity for several choices of the parameters. we provide a number
of np-hardness and w[1]-hardness results, as well as fpt-algorithms.
keywords: graph cut; k-restricted edge-connectivity; good edge separation; parameterized complexity; fpt-algorithm; polynomial kernel.
| 8 |
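a direct, exponential-time way to compute λk(g) straight from its definition, feasible only for tiny graphs, which is consistent with the hardness results the abstract announces:

```python
from itertools import combinations

def components(vertices, edges):
    """connected components of an undirected graph via depth-first search."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            seen.add(u)
            stack.extend(adj[u] - comp)
        comps.append(comp)
    return comps

def restricted_edge_connectivity(vertices, edges, k):
    """smallest edge set whose removal leaves exactly two components,
    each with at least k vertices (brute force, exponential in |E|)."""
    for size in range(1, len(edges) + 1):
        for cut in combinations(edges, size):
            rest = [e for e in edges if e not in cut]
            comps = components(vertices, rest)
            if len(comps) == 2 and all(len(c) >= k for c in comps):
                return size
    return None  # no such cut exists

# two triangles joined by a single bridge: the bridge is a valid cut for k = 2
V = list(range(6))
E = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
lam2 = restricted_edge_connectivity(V, E, 2)
print(lam2)
```

removing the bridge (2, 3) leaves two triangles of three vertices each, so λ2 = 1 for this graph.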
abstract
mobile visual search applications are emerging that enable
users to sense their surroundings with smart phones. however, because of the particular challenges of mobile visual
search, achieving a high recognition bitrate has become a
consistent target of previous related works. in this paper,
we propose a few-parameter, low-latency, and high-accuracy
deep hashing approach for constructing binary hash codes for
mobile visual search. first, we exploit the architecture of
the mobilenet model, which significantly decreases the latency of deep feature extraction by reducing the number of
model parameters while maintaining accuracy. second, we
add a hash-like layer into mobilenet to train the model on labeled mobile visual data. evaluations show that the proposed
system can exceed state-of-the-art accuracy performance in
terms of the map. more importantly, the memory consumption is much less than that of other deep learning models. the
proposed method requires only 13 mb of memory for the neural network and achieves a map of 97.80% on the mobile
location recognition dataset used for testing.
index terms— mobile visual search, supervised hashing, binary code, deep learning
1. introduction
with the proliferation of mobile devices, it is becoming possible to use mobile perception functionalities (e.g., cameras,
gps, and wi-fi) to perceive the surrounding environment [1].
among such techniques, mobile visual search plays a key role
in mobile localization, mobile media search, and mobile social networking. however, rather than simply porting traditional visual search methods to mobile platforms, for mobile
visual search, one must face the challenges of a large aural-visual variance of queries, stringent memory and computation
constraints, network bandwidth limitations, and the desire for
an instantaneous search experience.
this work was partially supported by the ccf-tencent open research
fund (no. agr20160113), the national natural science foundation of
china (no. 61632008), and the fundamental research funds for the central
universities (no. 2016rcgd32).
| 1 |
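the trained mobilenet and its hash-like layer are beyond a short sketch, but the final retrieval step described above can be illustrated in a few lines: binarize projected features with a sign function and rank database items by hamming distance. the random projection below is only a stand-in for the learned hash layer:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for deep features: a fixed random projection plays the role of the
# hash-like layer; in the paper this layer is learned inside mobilenet
d, bits = 128, 48
W = rng.standard_normal((d, bits))

def hash_codes(feats):
    """binarize projected features: the sign pattern is the binary hash code."""
    return (feats @ W > 0).astype(np.uint8)

def hamming(a, b):
    """number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

db = rng.standard_normal((100, d))               # database feature vectors
query = db[7] + 0.05 * rng.standard_normal(d)    # slightly perturbed copy of item 7

codes = hash_codes(db)
qcode = hash_codes(query[None, :])[0]
best = min(range(100), key=lambda i: hamming(codes[i], qcode))
print(best)
```

because the perturbation flips very few sign bits, the noisy query's nearest hamming neighbor is its source item, which is the property the learned hash codes exploit at scale.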
abstract
we review boltzmann machines extended for time-series. these models often have a recurrent
structure, and backpropagation through time (bptt) is used to learn their parameters. the per-step computational complexity of bptt in online learning, however, grows linearly with respect
to the length of the preceding time-series (i.e., the learning rule is not local in time), which limits the
applicability of bptt in online learning. we then review dynamic boltzmann machines (dybms),
whose learning rule is local in time. dybm’s learning rule relates to spike-timing dependent
plasticity (stdp), which has been postulated and experimentally confirmed for biological neural
networks.
| 9 |
abstract
this paper presents two realizations of linear quantum systems for covariance assignment corresponding to pure gaussian states. the
first one is called a cascade realization; given any covariance matrix corresponding to a pure gaussian state, we can construct a cascaded
quantum system generating that state. the second one is called a locally dissipative realization; given a covariance matrix corresponding
to a pure gaussian state, if it satisfies certain conditions, we can construct a linear quantum system that has only local interactions with
its environment and achieves the assigned covariance matrix. both realizations are illustrated by examples from quantum optics.
key words: linear quantum system, cascade realization, locally dissipative realization, covariance assignment, pure gaussian state.
| 3 |
abstract
this paper considers the synchronization problem for networks of coupled nonlinear dynamical systems under switching communication topologies. two types of nonlinear agent
dynamics are considered. the first one is non-expansive dynamics (stable dynamics with
a convex lyapunov function ϕ(·)) and the second one is dynamics that satisfies a global
lipschitz condition. for the non-expansive case, we show that various forms of joint connectivity for communication graphs are sufficient for networks to achieve global asymptotic
ϕ-synchronization. we also show that ϕ-synchronization leads to state synchronization provided that certain additional conditions are satisfied. for the globally lipschitz case, unlike
the non-expansive case, joint connectivity alone is not sufficient for achieving synchronization. a sufficient condition for reaching global exponential synchronization is established in
terms of the relationship between the global lipschitz constant and the network parameters.
we also extend the results to leader-follower networks.
| 3 |
abstracts
franck dernoncourt∗
mit
[email protected]
| 9 |
abstract. the subject of this work is quantum predicative programming — the study of developing programs intended for execution on
a quantum computer. we look at programming in the context of formal
methods of program development, or programming methodology. our
work is based on probabilistic predicative programming, a recent generalisation of the well-established predicative programming. it supports
the style of program development in which each programming step is
proven correct as it is made. we inherit the advantages of the theory,
such as its generality, simple treatment of recursive programs, time and
space complexity, and communication. our theory of quantum programming provides tools to write both classical and quantum specifications,
develop quantum programs that implement these specifications, and reason about their comparative time and space complexity all in the same
framework.
| 6 |
abstract we report on an extensive study of the current benefits and limitations of deep learning approaches
to robot vision and introduce a novel dataset used for
our investigation. to avoid the biases in currently available datasets, we consider a human-robot interaction
setting to design a data-acquisition protocol for visual
object recognition on the icub humanoid robot. considering the performance of off-the-shelf models trained
on off-line large-scale image retrieval datasets, we show
the necessity for knowledge transfer. indeed, we analyze different ways in which this last step can be done,
and identify the major bottlenecks in robotics scenarios.
by studying both object categorization and identification tasks, we highlight the key differences between object recognition in robotics and in image retrieval tasks,
for which the considered deep learning approaches have
been originally designed. in a nutshell, our results confirm, also in the considered setting, the remarkable improvements yielded by deep learning, while pointing to
specific open challenges that need to be addressed for
seamless deployment in robotics.
| 1 |
abstract—a source submits status updates to a network for
delivery to a destination monitor. updates follow a route through
a series of network nodes. each node is a last-come-first-served
queue supporting preemption in service. we characterize the
average age of information at the input and output of each node
in the route induced by the updates passing through. for poisson
arrivals to a line network of preemptive memoryless servers, we
show that average age accumulates through successive network
nodes.
| 7 |
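the single-node version of the model above (poisson arrivals, one memoryless server with preemption in service) is easy to simulate; for λ = μ = 1 the simulated average age should approach 1/λ + 1/μ = 2, the known value for this single-node model. the horizon and rates below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu, T = 1.0, 1.0, 50000.0   # arrival rate, service rate, horizon

t = 0.0          # current time
age = 0.0        # instantaneous age at time t
gen = None       # generation time of the update in service (None = idle)
area = 0.0       # integral of the age sawtooth over [0, t]
next_arr = rng.exponential(1 / lam)
next_dep = np.inf
while t < T:
    if next_arr < next_dep:              # arrival: preempts any update in service
        dt = next_arr - t
        area += age * dt + dt * dt / 2   # age grows at unit rate between events
        age += dt
        t = next_arr
        gen = t
        next_dep = t + rng.exponential(1 / mu)
        next_arr = t + rng.exponential(1 / lam)
    else:                                # departure: the freshest update is delivered
        dt = next_dep - t
        area += age * dt + dt * dt / 2
        t = next_dep
        age = t - gen                    # age resets to the delivered update's age
        gen, next_dep = None, np.inf

avg_age = area / t
print(avg_age)
```

the paper's interest is in how this average age accumulates through a line of such preemptive nodes; the sketch covers only the first hop.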
abstract—we use decision trees to build a helpdesk agent
reference network to facilitate the on-the-job advising of junior
or less experienced staff on how to better address
telecommunication customer fault reports. such reports generate
field measurements and remote measurements which, when
coupled with location data and client attributes, and fused with
organization-level statistics, can produce models of how support
should be provided. beyond decision support, these models can
help identify staff who can act as advisors, based on the quality,
consistency and predictability of dealing with complex
troubleshooting reports. advisor staff models are then used to
guide less experienced staff in their decision making; thus, we
advocate the deployment of a simple mechanism which exploits
the availability of staff with a sound track record at the helpdesk
to act as dormant tutors.
index terms— customer relationship management; decision
trees; knowledge flow graph
| 2 |
abstract
in multiagent systems, we often have a set of agents each of which has a preference ordering
over a set of items, and one would like to know these preference orderings for various tasks, for
example, data analysis, preference aggregation, voting, etc. however, we often have a large
number of items, which makes it impractical to ask the agents for their complete preference
ordering. in such scenarios, we usually elicit these agents' preferences by asking (a hopefully
small number of) comparison queries — asking an agent to compare two items. prior works on
preference elicitation focus on the unrestricted domain and the domain of single peaked preferences,
and show that preferences in the single peaked domain can be elicited with far fewer
queries than in the unrestricted domain. we extend this line of research and study preference
elicitation for single peaked preferences on trees which is a strict superset of the domain of single
peaked preferences. we show that the query complexity crucially depends on the number of
leaves, the path cover number, and the distance from path of the underlying single peaked tree,
whereas the other natural parameters like maximum degree, diameter, pathwidth do not play
any direct role in determining query complexity. we then investigate the query complexity for
finding a weak condorcet winner for preferences single peaked on a tree and show that this
task has a much lower query complexity than preference elicitation. here again we observe that the
number of leaves in the underlying single peaked tree and the path cover number of the tree
influence the query complexity of the problem.
| 8 |
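a standard baseline for eliciting a full ranking with comparison queries, binary-insertion sort, uses o(n log n) queries per agent in the unrestricted domain; the structured domains in the abstract above are precisely about beating this. the agent below is a hypothetical one whose true preference is plain numeric order:

```python
def elicit_ranking(items, prefers):
    """recover an agent's full preference order using only pairwise
    comparison queries; binary insertion gives O(n log n) queries."""
    queries = 0
    order = []                      # current partial ranking, most preferred first
    for x in items:
        lo, hi = 0, len(order)
        while lo < hi:              # binary search for x's position
            mid = (lo + hi) // 2
            queries += 1            # one comparison query to the agent
            if prefers(x, order[mid]):
                hi = mid
            else:
                lo = mid + 1
        order.insert(lo, x)
    return order, queries

# hypothetical agent whose true preference is plain numeric order
order, q = elicit_ranking(list(range(16)), lambda a, b: a < b)
print(order, q)
```

for n = 16 items this needs at most 16 · ⌈log2 16⌉ = 64 queries, far fewer than the 120 distinct pairs.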
abstract. scheduling theory is an old and well-established area in combinatorial optimization, whereas
the much younger area of parameterized complexity has only recently gained the attention of the community. our aim is to bring these two areas closer together by studying the parameterized complexity
of a class of single-machine two-agent scheduling problems. our analysis focuses on the case where the
number of jobs belonging to the second agent is considerably smaller than the number of jobs belonging
to the first agent, and thus can be considered as a fixed parameter k. we study a variety of combinations
of scheduling criteria for the two agents, and for each such combination we pinpoint its parameterized
complexity with respect to the parameter k. the scheduling criteria that we analyze include the total
weighted completion time, the total weighted number of tardy jobs, and the total weighted number of
just-in-time jobs. our analysis draws a borderline between tractable and intractable variants of these
problems.
| 8 |
abstract
we study the capacitated k-median problem for which existing constant-factor approximation algorithms are all pseudo-approximations that violate either the capacities or the upper
bound k on the number of open facilities. using the natural lp relaxation for the problem, one
can only hope to get the violation factor down to 2. li [soda’16] introduced a novel lp to go
beyond the limit of 2 and gave a constant-factor approximation algorithm that opens (1 + ε)k
facilities.
we use the configuration lp of li [soda’16] to give a constant-factor approximation for
the capacitated k-median problem in a seemingly harder configuration: we violate only the
capacities by a factor of 1 + ε. this result settles the problem as far as pseudo-approximation algorithms
are concerned.
| 8 |
abstract. a locally compact groupoid is said to have the weak containment property if its
full c ∗ -algebra coincides with its reduced one. although it is now known that this property is
strictly weaker than amenability, we show that the two properties are the same under a mild
exactness assumption. we then apply our result to obtain information about the corresponding
weak containment property for some semigroups.
| 4 |
abstract: in linear regression with fixed design, we propose two procedures that aggregate a data-driven collection of supports. the collection
is a subset of the 2^p possible supports and both its cardinality and its elements can depend on the data. the procedures satisfy oracle inequalities
with no assumption on the design matrix. then we use these procedures
to aggregate the supports that appear on the regularization path of the
lasso in order to construct an estimator that mimics the best lasso
estimator. if the restricted eigenvalue condition on the design matrix is
satisfied, then this estimator achieves optimal prediction bounds. finally,
we discuss the computational cost of these procedures.
| 10 |
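a rough sketch of the aggregation idea above: refit least squares on each candidate support and pick the support minimizing a penalized criterion. the bic-style penalty and the candidate collection below are illustrative stand-ins for the paper's procedures and their oracle-inequality guarantees:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 80, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[0, 3, 7]] = [2.0, -1.5, 1.0]           # true support {0, 3, 7}
y = X @ beta + 0.5 * rng.standard_normal(n)  # noise variance 0.25

def score(S, sigma2=0.25):
    """least-squares refit on support S plus a BIC-style penalty
    (an illustrative stand-in for the paper's aggregation criteria)."""
    if not S:
        return float(y @ y)
    Xs = X[:, sorted(S)]
    b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ b) ** 2))
    return rss + np.log(n) * sigma2 * len(S)

# a hypothetical data-driven collection of candidate supports, standing in
# for the supports collected along the lasso regularization path
candidates = [set(), {0}, {0, 3}, {0, 3, 7}, {0, 3, 7, 11}, set(range(10))]
best = min(candidates, key=score)
print(sorted(best))
```

the penalty trades residual fit against support size, so the selected support stays close to the true one rather than the largest candidate.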
abstract— the paper considers the problem of cooperative
estimation for a linear uncertain plant observed by a network of
communicating sensors. we take a novel approach by treating
the filtering problem from the viewpoint of the local sensors, while
the network interconnections are accounted for via uncertain-signal
modelling of the estimation performance of the other nodes.
that is, the information communicated between the nodes is
treated as the true plant information subject to perturbations,
and each node is endowed with certain beliefs about these
perturbations during the filter design. the proposed distributed
filter achieves a suboptimal h∞ consensus performance. furthermore, local performance of each estimator is also assessed
given additional constraints on the performance of the other
nodes. these conditions are shown to be useful in tuning the
desired estimation performance of the sensor network.
| 3 |
abstracting from these details, the fft and ifft take
up a significant amount of compute resources, which we address in section 5.
table 5: cufft convolution performance breakdown (k40m, ms) — fprop, bprop, and accgrad timings for layers l1–l5.
| 9 |
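the convolution layers timed above rest on the convolution theorem; a minimal numpy check (illustrative, not the cufft code) that fft-based convolution matches direct convolution:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(64)   # input signal
h = rng.standard_normal(9)    # filter taps

# direct (full) linear convolution
direct = np.convolve(x, h)

# FFT-based: zero-pad both to the full output length, multiply spectra,
# then invert -- the convolution theorem in action
n = len(x) + len(h) - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.allclose(direct, via_fft))
```

the gpu implementation follows the same pattern per layer (fprop, bprop, accgrad), which is why fft and ifft cost dominates the breakdown.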
abstraction for networks of control systems: a
dissipativity approach
| 3 |
abstract. completely random measures (crms) and their normalizations
are a rich source of bayesian nonparametric priors. examples include the beta,
gamma, and dirichlet processes. in this paper we detail two major classes
of sequential crm representations—series representations and superposition
representations—within which we organize both novel and existing sequential
representations that can be used for simulation and posterior inference. these
two classes and their constituent representations subsume existing ones that
have previously been developed in an ad hoc manner for specific processes.
since a complete infinite-dimensional crm cannot be used explicitly for computation, sequential representations are often truncated for tractability. we
provide truncation error analyses for each type of sequential representation,
as well as their normalized versions, thereby generalizing and improving upon
existing truncation error bounds in the literature. we analyze the computational complexity of the sequential representations, which in conjunction with
our error bounds allows us to directly compare representations and discuss
their relative efficiency. we include numerous applications of our theoretical
results to commonly-used (normalized) crms, demonstrating that our results
enable a straightforward representation and analysis of crms that has not
previously been available in a bayesian nonparametric context.
| 10 |
abstract. this note presents a discussion of the algebraic and combinatorial aspects
of the theory of pure o-sequences. various instances where pure o-sequences appear are
described. several open problems that deserve further investigation are also presented.
| 0 |
abstract
by taking a variety of realistic hardware imperfections into consideration, we propose an optimal
power allocation (opa) strategy to maximize the instantaneous secrecy rate of a cooperative wireless
network comprised of a source, a destination and an untrusted amplify-and-forward (af) relay. we
assume that either the source or the destination is equipped with a large-scale multiple-antenna
(lsma) system, while the other nodes are equipped with a single antenna. to prevent the untrusted relay from
intercepting the source message, the destination sends an intended jamming noise to the relay, which is
referred to as destination-based cooperative jamming (dbcj). given this system model, novel closed-form expressions are presented in the high signal-to-noise ratio (snr) regime for the ergodic secrecy
rate (esr) and the secrecy outage probability (sop). we further improve the secrecy performance
of the system by optimizing the associated hardware design. the results reveal that by beneficially
distributing the tolerable hardware imperfections across the transmission and reception radio-frequency
(rf) front ends of each node, the system’s secrecy rate may be improved. the engineering insight is
that equally sharing the total imperfections at the relay between the transmitter and the receiver provides
the best secrecy performance. numerical results illustrate that the proposed opa together with the most
appropriate hardware design significantly increases the secrecy rate.
| 7 |
abstract
feature learning with deep models has achieved impressive results for both data representation and classification
for various vision tasks. deep feature learning, however,
typically requires a large amount of training data, which
may not be feasible for some application domains. transfer
learning can be one of the approaches to alleviate this problem by transferring data from a data-rich source domain to a
data-scarce target domain. existing transfer learning methods typically perform one-shot transfer learning and often
ignore the specific properties that the transferred data must
satisfy. to address these issues, we introduce a constrained
deep transfer feature learning method that performs simultaneous transfer learning and feature learning by iteratively
performing transfer learning in a progressively improving feature space,
in order to better narrow the gap between the target domain and the source domain for effective transfer of
the data from source domain to target domain. furthermore, we propose to exploit the target domain knowledge
and incorporate such prior knowledge as a constraint during
transfer learning to ensure that the transferred data satisfies
certain properties of the target domain.
to demonstrate the effectiveness of the proposed constrained deep transfer feature learning method, we apply
it to thermal feature learning for eye detection by transferring from the visible domain. we also apply the proposed
method for cross-view facial expression recognition as a
second application. the experimental results demonstrate
the effectiveness of the proposed method for both applications.
| 1 |
abstract—we consider a finite-horizon linear-quadratic optimal control problem where only a limited number of control
messages may be sent from the controller to the
actuator. to restrict the number of control actions computed
and transmitted by the controller, we employ a threshold-based
event-triggering mechanism that decides whether or not a control
message needs to be calculated and delivered. due to the nature of
threshold-based event-triggering algorithms, finding the optimal
control sequence requires minimizing a quadratic cost function
over a non-convex domain. in this paper, we first provide
an exact solution to the non-convex problem mentioned above
by solving an exponential number of quadratic programs. to
reduce computational complexity, we then propose two efficient
heuristic algorithms based on greedy search and the alternating
direction method of multipliers (admm). later, we
consider a receding horizon control strategy for linear systems
controlled by event-triggered controllers, and we also provide
a complete stability analysis of receding horizon control that
uses finite horizon optimization in the proposed class. numerical
examples testify to the viability of the presented design technique.
index terms — optimal control; linear systems; eventtriggered control; receding horizon control
| 3 |
abstract
advancement in technology has generated abundant high-dimensional data that allows integration of multiple relevant studies. due to their huge computational advantage, variable screening methods based on marginal correlation have become promising
alternatives to the popular regularization methods for variable selection. however, all
these screening methods have so far been limited to a single study. in this paper, we consider
a general framework for variable screening with multiple related studies, and further
propose a novel two-step screening procedure using a self-normalized estimator for highdimensional regression analysis in this framework. compared to the one-step procedure
and rank-based sure independence screening (sis) procedure, our procedure greatly reduces false negative errors while keeping a low false positive rate. theoretically, we
show that our procedure possesses the sure screening property with weaker assumptions on signal strengths and allows the number of features to grow at an exponential
rate in the sample size. in addition, we relax the commonly used normality assumption
and allow sub-gaussian distributions. simulations and a real transcriptomic application illustrate the advantage of our method as compared to the rank-based sis method.
key words and phrases: multiple studies, partial faithfulness, self-normalized estimator, sure screening property, variable selection
| 10 |
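the marginal-correlation screening the abstract above builds on (plain single-study sis, not the proposed two-step multi-study procedure) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[2, 40, 300]] = [3.0, -2.5, 2.0]   # three active features
y = X @ beta + rng.standard_normal(n)

def sis(X, y, d):
    """sure independence screening: keep the d features with the largest
    absolute marginal correlation with the response."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return set(np.argsort(corr)[-d:])

kept = sis(X, y, d=20)
print({2, 40, 300} <= kept)
```

screening 500 features down to 20 while retaining the true support is the sure screening property; the paper's two-step procedure aims to keep this property across multiple related studies with weaker signal-strength assumptions.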
abstract
in the paper, a parallel tabu search algorithm for the resource constrained project scheduling problem is proposed. to deal with this np-hard combinatorial problem, many optimizations have been
performed. for example, a resource evaluation algorithm is selected by a heuristic and an effective
tabu list is designed. in addition to that, a capacity-indexed resource evaluation algorithm is
proposed and the gpu (graphics processing unit) version uses a homogeneous model to reduce the
required communication bandwidth. according to the experiments, the gpu version outperforms the
optimized parallel cpu version with respect to the computational time and the quality of solutions.
in comparison with other existing heuristics, the proposed solution often gives better quality solutions.
cite as: libor bukata, premysl sucha, zdenek hanzalek, solving the resource constrained project
scheduling problem using the parallel tabu search designed for the cuda platform, journal of parallel
and distributed computing, volume 77, march 2015, pages 58-68, issn 0743-7315, http://dx.doi.org/10.1016/j.jpdc.2014.11.005.
source code: https://github.com/ctu-iig/rcpspcpu, https://github.com/ctu-iig/rcpspgpu
keywords: resource constrained project scheduling problem, parallel tabu search, cuda,
homogeneous model, gpu
1. introduction
the resource constrained project scheduling
problem (rcpsp), which has a wide range of applications in logistics, manufacturing and project
management [1], is a universal and well-known
problem in the operations research domain. the
problem can be briefly described using a set of
| 2 |
abstract—this paper is concerned with optimization of
distributed broadband wireless communication (bwc) systems.
bwc systems contain a distributed antenna system (das)
connected to a base station with optical fiber. distributed bwc
systems have been proposed as a solution to the power constraint
problem in traditional cellular networks. so far, the research on
bwc systems has advanced on two separate tracks: design of
the system to meet the quality of service requirements (qos) and
optimization of location of the das. in this paper, we consider a
combined optimization of bwc systems. we consider uplink
communications in distributed bwc systems with multiple levels
of priority traffic with arrivals and departures forming renewal
processes. we develop an analysis that determines packet delay
violation probability for each priority level as a function of the
outage probability of the das through the application of results
from renewal theory. then, we determine the optimal locations of
the antennas that minimize the antenna outage probability. we
also study the trade-off between the packet delay violation
probability and the packet loss probability. this work will be helpful
in the design of distributed bwc systems.
index terms— queuing delay, multiple levels of priority
traffic, distributed antenna system (das), outage probability,
antenna placement.
| 1 |
abstract
like classical block codes, a locally repairable code also obeys the singleton-type bound (we call
a locally repairable code optimal if it achieves the singleton-type bound). in the breakthrough work of
[14], several classes of optimal locally repairable codes were constructed via subcodes of reed-solomon
codes. thus, the lengths of the codes given in [14] are upper bounded by the code alphabet size q.
recently, it was proved in [7], through an extension of the construction in [14], that the length of q-ary optimal locally
repairable codes can be q + 1. surprisingly, [2] presented a few examples of q-ary optimal locally
repairable codes of small distance and locality with code length achieving roughly q^2. very recently,
it was further shown in [8] that there exist q-ary optimal locally repairable codes with length bigger
than q + 1 and distance proportional to n. thus, it becomes an interesting and challenging problem to
construct new families of q-ary optimal locally repairable codes of length bigger than q + 1.
in this paper, we construct a class of optimal locally repairable codes of distance 3 and 4 with unbounded length (i.e., length of the codes is independent of the code alphabet size). our technique is
through cyclic codes with particular generator and parity-check polynomials that are carefully chosen.
| 7 |
abstract
let b = b (x) , x ∈ s2 be the fractional brownian motion indexed
by the unit sphere s2 with index 0 < h ≤ 1/2, introduced by istas [12].
we establish optimal upper and lower bounds for its angular power spectrum {dℓ , ℓ = 0, 1, 2, . . .}, and then exploit its high-frequency behavior to
establish the strong local nondeterminism of b.
| 10 |
abstract)∗
davide corona†
| 5 |
abstract
we give effective proofs of residual finiteness and conjugacy separability for finitely generated
nilpotent groups. in particular, we give precise asymptotic bounds for a function introduced
by bou-rabee that measures how large the quotients needed to separate non-identity
elements of bounded length from the identity must be, improving the work of bou-rabee. similarly, we give polynomial upper and lower bounds for an analogous function introduced by
lawton, louder, and mcreynolds that measures how large the quotients needed to
separate pairs of distinct conjugacy classes of bounded word length must be, using work of blackburn
and mal’tsev.
| 4 |
abstract—in a device-to-device (d2d) underlaid massive
mimo system, d2d transmitters reuse the uplink spectrum
of cellular users (cus), leading to cochannel interference. to
decrease pilot overhead, we assume pilot reuse (pr) among
d2d pairs. we first derive the minimum-mean-square-error
(mmse) estimation of all channels and give a lower bound
on the ergodic achievable rate of both cellular and d2d links.
to mitigate pilot contamination caused by pr, we then propose
a pilot scheduling and pilot power control algorithm based on
the criterion of minimizing the sum mean-square-error (mse)
of channel estimation of d2d links. we show that, with an
appropriate pr ratio and a well designed pilot scheduling scheme,
each d2d transmitter could transmit its pilot with maximum
power. in addition, we also maximize the sum rate of all d2d
links while guaranteeing the quality of service (qos) of cus, and
develop an iterative algorithm to obtain a suboptimal solution.
simulation results show that the effect of pilot contamination can
be greatly decreased by the proposed pilot scheduling algorithm,
and the pr scheme provides significant performance gains over
the conventional orthogonal training scheme in terms of system
spectral efficiency.
| 7 |
abstract
nitsche’s method is a popular approach to implement dirichlet-type boundary conditions in situations where
a strong imposition is either inconvenient or simply not feasible. the method is widely applied in the context
of unfitted finite element methods. from the classical (symmetric) nitsche’s method it is well-known that
the stabilization parameter in the method has to be chosen sufficiently large to obtain unique solvability
of discrete systems. in this short note we discuss an often used strategy to set the stabilization parameter
and describe a possible problem that can arise from this. we show that in specific situations error bounds
can deteriorate and give examples of computations where nitsche’s method yields large and even diverging
discretization errors.
keywords: nitsche’s method, unfitted/immersed finite element methods, penalty/stabilization parameter,
accuracy, stability, error analysis
| 5 |
abstract
we investigate the performance of the finite volume method in solving viscoplastic flows. the creeping
square lid-driven cavity flow of a bingham plastic is chosen as the test case and the constitutive equation
is regularised as proposed by papanastasiou [j. rheology 31 (1987) 385-404]. it is shown that the
convergence rate of the standard simple pressure-correction algorithm, which is used to solve the
algebraic equation system that is produced by the finite volume discretisation, severely deteriorates as
the bingham number increases, with a corresponding increase in the non-linearity of the equations. it
is shown that using the simple algorithm in a multigrid context dramatically improves convergence,
although the multigrid convergence rates are much worse than for newtonian flows. the numerical
results obtained for bingham numbers as high as 1000 compare favorably with reported results of other
methods.
keywords: bingham plastic, papanastasiou regularisation, lid-driven cavity, finite volume method,
simple, multigrid
this is the accepted version of the article published in: journal of non-newtonian fluid mechanics
195 (2013) 19–31, doi:10.1016/j.jnnfm.2012.12.008
c 2016. this manuscript version is made available under the cc-by-nc-nd 4.0 license http:
//creativecommons.org/licenses/by-nc-nd/4.0/
1. introduction
viscoplastic flows constitute an important branch of non-newtonian fluid mechanics, as many materials of industrial, geophysical, and biological importance are known to exhibit yield stress. in general,
yield-stress fluids are suspensions of particles or macromolecules, such as pastes, gels, foams, drilling
fluids, food products, and nanocomposites. a comprehensive review of viscoplasticity has been carried
out by barnes [1]. such materials behave as (elastic or inelastic) solids, below a certain critical shear
stress level, i.e. the yield stress, and as liquids otherwise. the flow field is thus divided into unyielded
(rigid) and yielded (fluid) regions.
| 5 |
abstract
for computer vision applications, prior works have shown the efficacy of reducing
numeric precision of model parameters (network weights) in deep neural networks.
activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. one way to reduce this
large memory footprint is to reduce the precision of activations. however, past
works have shown that reducing the precision of activations hurts model accuracy.
we study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. we reduce the precision of activation maps (along
with model parameters) and increase the number of filter maps in a layer, and find
that this scheme matches or surpasses the accuracy of the baseline full-precision
network. as a result, one can significantly improve the execution efficiency (e.g.
reduce dynamic memory footprint, memory bandwidth and computational energy)
and speed up the training and inference process with appropriate hardware support. we call our scheme wrpn - wide reduced-precision networks. we report
results and show that the wrpn scheme achieves better accuracy on the
ilsvrc-12 dataset than previously reported reduced-precision networks,
while being computationally less expensive.
| 9 |
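the abstract above is about reducing the numeric precision of activation maps. a generic sketch of uniform activation quantization is below: post-relu activations (assumed already scaled to [0, 1]) are snapped to a k-bit grid. this is a common reduced-precision recipe, not the exact wrpn scheme, and the straight-through gradient handling used in training is omitted.

```python
import numpy as np

def quantize_activations(a, bits):
    """Uniformly quantize non-negative activations to `bits` bits in
    [0, 1]. Generic sketch only: assumes inputs are pre-scaled."""
    a = np.clip(a, 0.0, 1.0)              # saturate out-of-range values
    levels = 2 ** bits - 1                # number of quantization steps
    return np.round(a * levels) / levels  # snap to the k-bit grid

x = np.array([0.0, 0.124, 0.5, 0.876, 1.3])
q = quantize_activations(x, bits=2)       # 2-bit grid: 0, 1/3, 2/3, 1
print(q)
```

storing `q` instead of full-precision activations is what shrinks the training-time memory footprint; wrpn compensates for the accuracy loss by widening the layers.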
abstract. let g be an abelian group and s be a g-graded noetherian algebra over a
commutative ring a ⊆ s0 . let i1 , . . . , is be g-homogeneous ideals in s, and let m be a
finitely generated g-graded s-module. we show that the shape of the nonzero g-graded betti
numbers of i1^{t1} · · · is^{ts} m exhibits an eventual linear behavior as the ti get large.
| 0 |
abstract. it is well-known that the factorization properties of a domain are
reflected in the structure of its group of divisibility. the main theme of this
paper is to introduce a topological/graph-theoretic point of view to the current understanding of factorization in integral domains. we also show that
connectedness properties in the graph and topological space give rise to a
generalization of atomicity.
| 0 |
abstract
the large-system performance of maximum-a-posteriori estimation is studied considering a general distortion
function when the observation vector is received through a linear system with additive white gaussian noise. the
analysis considers the system matrix to be chosen from the large class of rotationally invariant random matrices. we
take a statistical mechanical approach by introducing a spin glass corresponding to the estimator, and employing the
replica method for the large-system analysis. in contrast to earlier replica based studies, our analysis evaluates the
general replica ansatz of the corresponding spin glass and determines the asymptotic distortion of the estimator for
any structure of the replica correlation matrix. consequently, the replica symmetric as well as the replica symmetry
breaking ansatz with b steps of breaking is deduced from the given general replica ansatz. the generality of our
distortion function lets us derive a more general form of the maximum-a-posteriori decoupling principle. based on
the general replica ansatz, we show that for any structure of the replica correlation matrix, the vector-valued system
decouples into a bank of equivalent decoupled linear systems followed by maximum-a-posteriori estimators. the
structure of the decoupled linear system is further studied under both the replica symmetry and the replica symmetry
breaking assumptions. for b steps of symmetry breaking, the decoupled system is found to be an additive system
with a noise term given as the sum of an independent gaussian random variable with b correlated impairment
terms. the general decoupling property of the maximum-a-posteriori estimator leads to the idea of a replica simulator
which represents the replica ansatz through the state evolution of a transition system described by its corresponding
decoupled system. as an application of our study, we investigate large compressive sensing systems by considering
the ℓp norm minimization recovery schemes. our numerical investigations show that the replica symmetric ansatz for
ℓ0 norm recovery fails to give an accurate approximation of the mean square error as the compression rate grows,
and therefore, the replica symmetry breaking ansätze are needed in order to assess the performance precisely.
index terms
maximum-a-posteriori estimation, linear vector channel, decoupling principle, equivalent single-user system, compressive sensing, zero norm, replica method, statistical physics, replica symmetry breaking, replica simulator
the results of this manuscript were presented in parts at 2016 ieee information theory workshop (itw) [78] and 2017 ieee information
theory and applications workshop (ita) [79].
this work was supported by the german research foundation, deutsche forschungsgemeinschaft (dfg), under grant no. mu 3735/2-1.
ali bereyhi and ralf r. müller are with the institute for digital communications (idc), friedrich alexander university of erlangen-nürnberg
(fau), konrad-zuse-straße 5, 91052, erlangen, bavaria, germany (e-mails: [email protected], [email protected]).
hermann schulz-baldes is with the department of mathematics, fau, cauerstraße 11, 91058, erlangen, bavaria, germany (e-mail: schuba@
mi.uni-erlangen.de).
| 7 |
abstract
hi, robby, can you get my cup
from the cupboard?
| 1 |
abstract. we prove that the autonomous norm on the group of
compactly supported hamiltonian diffeomorphisms of the standard
r2n is bounded.
| 4 |
abstract
to help mitigate road congestion caused by the unrelenting growth of traffic demand, many transit
authorities have implemented managed lane policies. managed lanes typically run parallel to a freeway’s
standard, general-purpose (gp) lanes, but are restricted to certain types of vehicles. it was originally
thought that managed lanes would improve the use of existing infrastructure through incentivization of
demand-management behaviors like carpooling, but implementations have often been characterized by
unpredicted phenomena that are often detrimental to system performance. development of traffic models
that can capture these sorts of behaviors is a key step for helping managed lanes deliver on their promised
gains.
towards this goal, this paper presents several macroscopic traffic modeling tools we have used for
study of freeways equipped with managed lanes, or “managed lane-freeway networks.” the proposed
framework is based on the widely-used first-order kinematic wave theory. in this model, the gp and
the managed lanes are modeled as parallel links connected by nodes, where certain types of traffic may
switch between gp and managed lane links. two types of managed lane configuration are considered:
full-access, where vehicles can switch between the gp and the managed lanes anywhere; and separated,
where such switching is allowed only at certain locations called gates.
we incorporate two phenomena into our model that are particular to managed lane-freeway networks:
the inertia effect and the friction effect. the inertia effect reflects drivers’ inclination to stay in their lane
as long as possible and switch only if this would obviously improve their travel condition. the friction
effect reflects the empirically-observed driver fear of moving fast in a managed lane while traffic in the
adjacent gp links moves slowly due to congestion.
calibration of models of large road networks is difficult, as the dynamics depend on many parameters whose number grows with the network’s size. we present an iterative learning-based approach
to calibrating our model’s physical and driver-behavioral parameters. finally, we validate our model
and calibration methodology with case studies of simulations of two managed lane-equipped california
freeways.
| 3 |
abstract
compiler working with abstract functions modelled directly in the theorem
prover’s logic is defined and proven sound. then, this compiler is refined
to a concrete version that returns a target-language expression.
| 6 |
abstract
| 6 |
abstract—this paper proposes xml-defined network policies
(xdnp), a new high-level language based on xml notation,
to describe network control rules in software defined network
environments. we rely on existing openflow controllers, specifically floodlight, but the novelty of this project is to separate
complicated language- and framework-specific apis from policy
descriptions. this separation makes it possible to extend the
current work as a northbound higher level abstraction that
can support a wide range of controllers that are based on
different programming languages. by this approach, we believe
that network administrators can develop and deploy network
control policies easier and faster.
index terms—software defined networks; openflow; floodlight; sdn compiler; sdn programming languages; sdn abstraction.
| 6 |
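the abstract above describes expressing network control rules in xml rather than in controller-specific apis. a minimal sketch of that separation is below; the element and attribute names (`policy`, `match`, `action`, `tp_dst`, ...) are invented for illustration and are not the actual xdnp schema.

```python
import xml.etree.ElementTree as ET

# a hypothetical xml policy in the spirit of xdnp: match tcp port 23
# traffic and drop it (schema invented for this sketch)
policy_xml = """
<policy name="block-telnet">
  <match dl_type="ipv4" nw_proto="tcp" tp_dst="23"/>
  <action type="drop"/>
</policy>
"""

def parse_policy(xml_text):
    """Turn the xml policy into a plain dict that a controller-specific
    backend (floodlight or otherwise) could translate into flow rules."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.attrib["name"],
        "match": root.find("match").attrib,
        "action": root.find("action").attrib["type"],
    }

rule = parse_policy(policy_xml)
print(rule["name"], rule["action"], rule["match"]["tp_dst"])
```

keeping the policy in a neutral format is what lets one description target controllers written in different languages, which is the project's stated goal.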
abstract
in this paper we present linear time approximation schemes for several generalized matching problems on nonbipartite graphs. our results include o(m)-time algorithms for (1 − ε)-approximate maximum weight f -factor and (1 + ε)-approximate minimum weight f -edge cover. as a byproduct, we
also obtain direct algorithms for the exact cardinality versions of these problems running
in o(m√(f(v))) time.
the technical contributions of this work include an efficient method for maintaining relaxed complementary slackness in generalized matching problems and approximation-preserving
reductions between the f -factor and f -edge cover problems.
| 8 |
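the abstract above mentions reductions between the f-factor and f-edge cover problems. the classical link is complementation: m is an f-edge cover iff its complement e \ m has degree at most deg(v) − f(v) at every vertex, so a minimum f-edge cover is the complement of a maximum degree-bounded subgraph. the brute-force check below verifies this on a tiny graph; it is an illustration of the reduction, not the paper's algorithms.

```python
from itertools import combinations

def check_complement_duality(vertices, edges, f):
    """Exhaustively compute a minimum f-edge cover and a maximum
    subgraph with degree bound deg(v) - f(v), and check that their
    sizes sum to |E| (the complementation duality)."""
    deg = {v: sum(v in e for e in edges) for v in vertices}

    def degrees(sub):
        return {v: sum(v in e for e in sub) for v in vertices}

    min_cover = min(
        len(s) for r in range(len(edges) + 1)
        for s in combinations(edges, r)
        if all(degrees(s)[v] >= f[v] for v in vertices))
    max_bounded = max(
        len(s) for r in range(len(edges) + 1)
        for s in combinations(edges, r)
        if all(degrees(s)[v] <= deg[v] - f[v] for v in vertices))
    return min_cover, max_bounded, min_cover + max_bounded == len(edges)

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # 4-cycle plus a chord
f = {0: 1, 1: 1, 2: 1, 3: 1}                   # ordinary edge cover
print(check_complement_duality(V, E, f))
```

on this graph the minimum edge cover has 2 edges and the maximum bounded subgraph has 3, and 2 + 3 = 5 = |E|, as the duality predicts.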
abstract. we call an ideal in a polynomial ring robust if it can be minimally generated
by a universal gröbner basis. in this paper we show that robust toric ideals generated
by quadrics are essentially determinantal. we then discuss two possible generalizations to
higher degree, providing a tight classification for determinantal ideals, and a counterexample
to a natural extension for lawrence ideals. we close with a discussion of robustness of higher
betti numbers.
| 0 |
abstract
the major challenges of automatic track counting are distinguishing tracks and
material defects, identifying small tracks and defects of similar size, and detecting overlapping tracks. here we address the latter issue using wusem,
an algorithm which combines the watershed transform, morphological erosions
and labeling to separate regions in photomicrographs. wusem shows reliable
results when used in photomicrographs presenting almost isotropic objects. we
tested this method in two datasets of diallyl phthalate (dap) photomicrographs
and compared the results when counting manually and using the classic watershed. the mean automatic/manual efficiency ratio when using wusem in the
test datasets is 0.97 ± 0.11.
keywords: automatic counting, diallyl phthalate, digital image processing,
fission track dating
| 1 |
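the abstract above separates overlapping tracks by combining erosions, the watershed transform, and labeling. the numpy-only sketch below illustrates just the erosion-and-labeling part of that pipeline on synthetic data: two overlapping disks form one connected blob, and repeated binary erosion splits them so a component count recovers the true object count. this is a toy illustration, not the wusem algorithm (which also uses the watershed transform on real photomicrographs).

```python
import numpy as np
from collections import deque

def erode(img):
    """One step of binary erosion with a 3x3 square structuring element."""
    p = np.pad(img, 1, constant_values=False)
    out = np.ones(img.shape, dtype=bool)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def count_components(img):
    """4-connected component count via breadth-first flood fill."""
    img = img.astype(bool)
    seen = np.zeros(img.shape, dtype=bool)
    n = 0
    for y, x in zip(*np.nonzero(img)):
        if seen[y, x]:
            continue
        n += 1
        q = deque([(y, x)])
        seen[y, x] = True
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and img[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return n

# two overlapping disks merge into one blob; erosion splits them apart
yy, xx = np.mgrid[0:40, 0:60]
blob = ((yy - 20)**2 + (xx - 20)**2 <= 81) | ((yy - 20)**2 + (xx - 36)**2 <= 81)
eroded, steps = blob, 0
while count_components(eroded) != 2 and steps < 10:
    eroded = erode(eroded)
    steps += 1
print(count_components(blob), count_components(eroded), steps)
```

in wusem the markers obtained this way seed a watershed so the *original* track areas can be recovered, not just the count.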
abstract—this paper considers the massive connectivity application in which a large number of potential devices communicate
with a base-station (bs) in a sporadic fashion. the detection of
device activity pattern together with the estimation of the channel
are central problems in such a scenario. due to the large number
of potential devices in the network, the devices need to be assigned
non-orthogonal signature sequences. the main objective of this
paper is to show that by using random signature sequences and
by exploiting sparsity in the user activity pattern, the joint user
detection and channel estimation problem can be formulated as
a compressed sensing single measurement vector (smv) problem
or multiple measurement vector (mmv) problem, depending on
whether the bs has a single antenna or multiple antennas, and be
efficiently solved using an approximate message passing (amp)
algorithm. this paper proposes an amp algorithm design that
exploits the statistics of the wireless channel and provides an
analytical characterization of the probabilities of false alarm and
missed detection by using the state evolution. we consider two
cases depending on whether the large-scale component of the
channel fading is known at the bs and design the minimum
mean squared error (mmse) denoiser for amp according to the
channel statistics. simulation results demonstrate the substantial
advantage of exploiting the statistical channel information in
amp design; however, knowing the large-scale fading component
does not offer tangible benefits. for the multiple-antenna case,
we employ two different amp algorithms, namely the amp with
vector denoiser and the parallel amp-mmv, and quantify the
benefit of deploying multiple antennas at the bs.
index terms—device activity detection, channel estimation,
approximate message passing, compressed sensing, internet of
things (iot), machine-type communications (mtc)
| 7 |
abstract. we give a new, simple distributed algorithm for graph colouring in paths and
cycles. our algorithm is fast and self-contained, it does not need any globally consistent
orientation, and it reduces the number of colours from 10^100 to 3 in three iterations.
| 8 |
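the abstract above is about reducing a colouring with very many colours down to 3 on paths and cycles. the sequential toy below sketches the colour-reduction idea: starting from any proper colouring, each round recolours every node holding the current maximum colour to a free colour in {0, 1, 2}, which preserves properness and strictly lowers the maximum. this is not the paper's three-iteration distributed algorithm, just the underlying reduction step run centrally.

```python
def reduce_colours(path_colours):
    """Reduce a proper colouring of a path to colours {0, 1, 2}.
    Each round, nodes with the current maximum colour pick the
    smallest colour in {0, 1, 2} unused by their neighbours."""
    c = list(path_colours)
    n = len(c)
    while max(c) > 2:
        m = max(c)
        for i in range(n):
            if c[i] == m:
                nb = {c[j] for j in (i - 1, i + 1) if 0 <= j < n}
                c[i] = min({0, 1, 2} - nb)  # a neighbour blocks at most 2 colours
    return c

def is_proper(c):
    return all(c[i] != c[i + 1] for i in range(len(c) - 1))

start = [5, 9, 0, 7, 3, 8, 1, 6]   # a proper colouring using many colours
out = reduce_colours(start)
print(out, is_proper(out))
```

the recolouring is always possible because a path node has at most 2 neighbours, so at least one of the 3 colours is free; this is exactly why 3 colours suffice on paths and cycles.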
abstract. this paper provides an induction rule that can be used to prove properties of
data structures whose types are inductive, i.e., are carriers of initial algebras of functors.
our results are semantic in nature and are inspired by hermida and jacobs’ elegant algebraic formulation of induction for polynomial data types. our contribution is to derive,
under slightly different assumptions, a sound induction rule that is generic over all inductive types, polynomial or not. our induction rule is generic over the kinds of properties to
be proved as well: like hermida and jacobs, we work in a general fibrational setting and
so can accommodate very general notions of properties on inductive types rather than just
those of a particular syntactic form. we establish the soundness of our generic induction
rule by reducing induction to iteration. we then show how our generic induction rule can
be instantiated to give induction rules for the data types of rose trees, finite hereditary
sets, and hyperfunctions. the first of these lies outside the scope of hermida and jacobs’
work because it is not polynomial, and as far as we are aware, no induction rules have
been known to exist for the second and third in a general fibrational framework. our
instantiation for hyperfunctions underscores the value of working in the general fibrational
setting since this data type cannot be interpreted as a set.
| 6 |
abstract. step-indexed semantic interpretations of types were proposed as an alternative
to purely syntactic proofs of type safety using subject reduction. the types are interpreted
as sets of values indexed by the number of computation steps for which these values are
guaranteed to behave like proper elements of the type. building on work by ahmed, appel
and others, we introduce a step-indexed semantics for the imperative object calculus of
abadi and cardelli. providing a semantic account of this calculus using more ‘traditional’,
domain-theoretic approaches has proved challenging due to the combination of dynamically
allocated objects, higher-order store, and an expressive type system. here we show that,
using step-indexing, one can interpret a rich type discipline with object types, subtyping,
recursive and bounded quantified types in the presence of state.
| 6 |
abstract
it was shown recently that the k l1-norm principal components (l1-pcs) of a real-valued data matrix x ∈
r^{d×n} (n data samples of d dimensions) can be exactly calculated with cost o(2^{nk}) or, when advantageous,
o(n^{dk−k+1}) where d = rank(x), k < d [1], [2]. in applications where x is large (e.g., “big” data of large n
and/or “heavy” data of large d), these costs are prohibitive. in this work, we present a novel suboptimal algorithm
for the calculation of the k < d l1-pcs of x of cost o(nd min{n, d} + n^2(k^4 + dk^2) + dnk^3), which
is comparable to that of standard (l2-norm) pc analysis. our theoretical and experimental studies show that the
proposed algorithm calculates the exact optimal l1-pcs with high frequency and achieves higher value in the l1-pc
optimization metric than any known alternative algorithm of comparable computational cost. the superiority of the
calculated l1-pcs over standard l2-pcs (singular vectors) in characterizing potentially faulty data/measurements is
demonstrated with experiments on data dimensionality reduction and disease diagnosis from genomic data.
| 8 |
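the abstract above contrasts the exponential cost of exact l1-pc computation with the paper's cheaper suboptimal algorithm. for k = 1 the exact route is short enough to sketch: the first l1-pc maximizes the sum of absolute projections, and it is known to equal x b / ||x b|| for the best sign vector b in {±1}^n, found here by exhausting all 2^n candidates. this is the o(2^n) baseline, not the paper's proposed algorithm.

```python
import numpy as np
from itertools import product

def l1_pc_exact(X):
    """Exact first L1 principal component of a D x N matrix X via
    exhaustive search over sign vectors b, maximizing ||X b||_2."""
    D, N = X.shape
    best_val, best_b = -1.0, None
    for signs in product((-1.0, 1.0), repeat=N):
        b = np.array(signs)
        val = np.linalg.norm(X @ b)
        if val > best_val:
            best_val, best_b = val, b
    q = X @ best_b
    return q / np.linalg.norm(q)

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 8))          # D = 3 dimensions, N = 8 samples
q = l1_pc_exact(X)
score = np.sum(np.abs(q @ X))            # the L1 projection metric being maximized
print(score)
```

no unit vector can beat `q` on the l1 metric, which is easy to spot-check against random directions; the cost, however, doubles with every added sample, which is what motivates the paper.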
abstract—we present a structural clustering algorithm for
large-scale datasets of small labeled graphs, utilizing a frequent
subgraph sampling strategy. a set of representatives provides
an intuitive description of each cluster, supports the clustering process, and helps to interpret the clustering results. the
projection-based nature of the clustering approach allows us to
bypass dimensionality and feature extraction problems that arise
in the context of graph datasets reduced to pairwise distances
or feature vectors. while achieving high quality and (human)
interpretable clusterings, the runtime of the algorithm only grows
linearly with the number of graphs. furthermore, the approach is
easy to parallelize and therefore suitable for very large datasets.
our extensive experimental evaluation on synthetic and real
world datasets demonstrates the superiority of our approach over
existing structural and subspace clustering algorithms, both, from
a runtime and quality point of view.
| 8 |
abstract
| 6 |
abstract
as nuclear power expands, technical, economic, political, and environmental analyses of nuclear fuel cycles
by simulators increase in importance. to date, however, current tools are often fleet-based rather than
discrete and restrictively licensed rather than open source. each of these choices presents a challenge to
modeling fidelity, generality, efficiency, robustness, and scientific transparency. the cyclus nuclear fuel
cycle simulator framework and its modeling ecosystem incorporate modern insights from simulation science
and software architecture to solve these problems so that challenges in nuclear fuel cycle analysis can be
better addressed. a summary of the cyclus fuel cycle simulator framework and its modeling ecosystem are
presented. additionally, the implementation of each is discussed in the context of motivating challenges in
nuclear fuel cycle simulation. finally, the current capabilities of cyclus are demonstrated for both open
and closed fuel cycles.
keywords: nuclear fuel cycle, simulation, agent based modeling, nuclear engineering, object orientation,
systems analysis
| 5 |
abstract
this paper describes autonomous racing of rc race cars based on mathematical optimization. using a dynamical
model of the vehicle, control inputs are computed by receding horizon based controllers, where the objective is to
maximize progress on the track subject to the requirement of staying on the track and avoiding opponents. two
different control formulations are presented. the first controller employs a two-level structure, consisting of a path
planner and a nonlinear model predictive controller (nmpc) for tracking. the second controller combines both
tasks in one nonlinear optimization problem (nlp) following the ideas of contouring control. linear time varying
models obtained by linearization are used to build local approximations of the control nlps in the form of convex
quadratic programs (qps) at each sampling time. the resulting qps have a typical mpc structure and can be solved
in the range of milliseconds by recent structure exploiting solvers, which is key to the real-time feasibility of the
overall control scheme. obstacle avoidance is incorporated by means of a high-level corridor planner based on
dynamic programming, which generates convex constraints for the controllers according to the current position of
opponents and the track layout. the control performance is investigated experimentally using 1:43 scale rc race
cars, driven at speeds of more than 3 m/s and in operating regions with saturated rear tire forces (drifting). the
algorithms run at 50 hz sampling rate on embedded computing platforms, demonstrating the real-time feasibility
and high performance of optimization-based approaches for autonomous racing.
| 3 |
abstract: polarization-division multiplexed (pdm) transmission based on the nonlinear fourier
transform (nft) is proposed for optical fiber communication. the nft algorithms are generalized
from the scalar nonlinear schrödinger equation for one polarization to the manakov system
for two polarizations. the transmission performance of the pdm nonlinear frequency-division
multiplexing (nfdm) and pdm orthogonal frequency-division multiplexing (ofdm) are
determined. it is shown that the transmission performance in terms of q-factor is approximately
the same in pdm-nfdm and single polarization nfdm at twice the data rate and that the
polarization-mode dispersion does not seriously degrade system performance. compared with
pdm-ofdm, pdm-nfdm achieves a q-factor gain of 6.4 db. the theory can be generalized to
multi-mode fibers in the strong coupling regime, paving the way for the application of the nft
to address the nonlinear effects in space-division multiplexing.
© 2017 optical society of america
ocis codes: (060.2330) fiber optics communications,(060.4230) multiplexing, (060.4370) nonlinear optics, fibers
| 7 |
abstract. in this paper we propose an algorithm for the numerical
solution of arbitrary differential equations of fractional order. the algorithm is obtained by decomposing the differential
equation into a system of differential equations of integer order connected
with inverse forms of abel integral equations. the algorithm is used for
the solution of linear and non-linear equations.
| 5 |
abstract. the klee’s measure of n axis-parallel boxes in rd is the
volume of their union. it can be computed in time within o(n^{d/2}) in
the worst case. we describe three techniques to boost its computation:
one based on some type of “degeneracy” of the input, and two ones
on the inherent “easiness” of the structure of the input. the first technique benefits from instances where the maxima of the input is of small
size h, and yields a solution running in time within o(n log^{2d−2} h +
h^{d/2}) ⊆ o(n^{d/2}). the second technique takes advantage of instances
where no d-dimensional axis-aligned hyperplane intersects more than k
boxes in some dimension, and yields a solution running in time within
o(n log n + n k^{(d−2)/2}) ⊆ o(n^{d/2}). the third technique takes advantage of
instances where the intersection graph of the input has small treewidth ω.
it yields an algorithm running in time within o(n 4^ω log ω + n (ω log ω)^{d/2})
in general, and in time within o(n log n + n ω^{d/2}) if an optimal tree decomposition of the intersection graph is given. we show how to combine
these techniques in an algorithm which takes advantage of all three configurations.
| 8 |
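the abstract above concerns computing the klee measure, the volume of a union of axis-parallel boxes. a direct baseline for the d = 2 case is coordinate compression: the box edges partition the plane into a grid, and each grid cell is either fully inside or fully outside the union, so testing one midpoint per cell suffices. this quadratic-in-cells sketch is only a reference implementation, not any of the boosted algorithms of the paper.

```python
def union_area(boxes):
    """Area of the union of axis-parallel rectangles (x1, y1, x2, y2)
    by coordinate compression: every cell of the induced grid is
    uniformly covered or uncovered, so one midpoint test decides it."""
    xs = sorted({x for b in boxes for x in (b[0], b[2])})
    ys = sorted({y for b in boxes for y in (b[1], b[3])})
    area = 0.0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx = (xs[i] + xs[i + 1]) / 2
            cy = (ys[j] + ys[j + 1]) / 2
            if any(b[0] <= cx <= b[2] and b[1] <= cy <= b[3] for b in boxes):
                area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return area

# two 2x2 squares overlapping in a 1x1 region: 4 + 4 - 1 = 7
print(union_area([(0, 0, 2, 2), (1, 1, 3, 3)]))
```

the grid has o(n^2) cells in 2d, which already hints at why the general d-dimensional problem is attacked with sweeps and divide-and-conquer rather than full enumeration.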
abstract—using a drone as an aerial base station (abs)
to provide coverage to users on the ground is envisaged as
a promising solution for beyond fifth generation (beyond-5g)
wireless networks. while the literature to date has examined
downlink cellular networks with abss, we consider an uplink
cellular network with an abs. specifically, we analyze the use of
an underlay abs to provide coverage for a temporary event, such
as a sporting event or a concert in a stadium. using stochastic
geometry, we derive the analytical expressions for the uplink
coverage probability of the terrestrial base station (tbs) and
the abs. the results are expressed in terms of (i) the laplace
transforms of the interference power distribution at the tbs
and the abs and (ii) the distance distribution between the abs
and an independently and uniformly distributed (i.u.d.) abs-supported user equipment and between the abs and an i.u.d.
tbs-supported user equipment. the accuracy of the analytical
results is verified by monte carlo simulations. our results show
that varying the abs height leads to a trade-off between the
uplink coverage probability of the tbs and the abs. in addition,
assuming a quality of service of 90% at the tbs, an uplink
coverage probability of the abs of over 85% can be achieved,
with the abs deployed at or below its optimal height, typically
between 250 and 500 m for the considered setup.
| 7 |
abstract
many evolutionary and constructive heuristic approaches have been
introduced in order to solve the traveling thief problem (ttp). however, the accuracy of such approaches is unknown due to their inability
to find global optima. in this paper, we propose three exact algorithms
and a hybrid approach to the ttp. we compare these with state-of-the-art approaches to gather a comprehensive overview of the accuracy of
heuristic methods for solving small ttp instances.
| 8 |
abstract
the quest for algorithms that enable cognitive abilities is an important part of
machine learning. a common trait in many recently investigated cognitive-like
tasks is that they take into account different data modalities, such as visual and
textual input. in this paper we propose a novel and generally applicable form
of attention mechanism that learns high-order correlations between various data
modalities. we show that high-order correlations effectively direct the appropriate
attention to the relevant elements in the different data modalities that are required
to solve the joint task. we demonstrate the effectiveness of our high-order attention
mechanism on the task of visual question answering (vqa), where we achieve
state-of-the-art performance on the standard vqa dataset.
| 2 |
abstract
very important breakthroughs in data-centric machine learning algorithms led to impressive performance in ‘transactional’
point applications such as detecting anger in speech, alerts from a face recognition system, or ekg interpretation. non-transactional applications, e.g. medical diagnosis beyond the ekg results, require ai algorithms that integrate deeper and
broader knowledge into their problem-solving capabilities, e.g. integrating knowledge about the anatomy and physiology of the heart
with ekg results and additional patient findings. the same holds for military aerial interpretation, where knowledge about enemy
doctrines on force composition and spread helps immensely in situation assessment beyond image recognition of individual
objects.
an initiative is proposed to build a wikipedia for smart machines, meaning the target readers are not humans, but rather smart
machines. named rekopedia, the initiative's goal is to develop methodologies, tools, and automatic algorithms to convert the human
knowledge that we all learn in schools, universities and during our professional life into reusable knowledge structures that
smart machines can use in their inference algorithms. ideally, rekopedia would be an open source shared knowledge repository
similar to the well-known shared open source software code repositories.
the double deep learning approach advocates integrating data-centric machine self-learning techniques with machine-teaching techniques to leverage the power of both and overcome their corresponding limitations. for illustration, an outline of a
$15m project is described to produce reko knowledge modules for medical diagnosis of about 1,000 disorders.
ai applications that are based solely on data-centric machine learning algorithms are typically point solutions for transactional
tasks that do not lend themselves to automatic generalization beyond the scope of the data sets they are based on. today’s ai
industry is fragmented, and we are not establishing broad and deep enough foundations that will enable us to build higher level
‘generic’, ‘universal’ intelligence, let alone ‘super-intelligence’. we must find ways to create synergies between these fragments
and connect them with external knowledge sources, if we wish to scale faster the ai industry.
examples in the article are based on, or inspired by, real-life non-transactional ai systems i deployed over decades of an ai career
that benefit hundreds of millions of people around the globe. we are now in the second ai ‘spring’ after a long ai ‘winter’.
to avoid sliding again into an ai winter, it is essential that we rebalance the roles of data and knowledge. data is
important, but knowledge, both deep and commonsense, is equally important.
| 2 |
abstract
we demonstrate that the integrality gap of the natural cut-based lp relaxation for the directed steiner
tree problem is o(log k) in quasi-bipartite graphs with k terminals. such instances can be seen to
generalize set cover, so the integrality gap analysis is tight up to a constant factor. a novel aspect
of our approach is that we use the primal-dual method; a technique that is rarely used in designing
approximation algorithms for network design problems in directed graphs.
| 8 |
abstract
recent approaches based on artificial neural networks (anns) have shown promising results for named-entity recognition
(ner). in order to achieve high performance, anns need to be trained on a
large labeled dataset. however, labels
might be difficult to obtain for the dataset
on which the user wants to perform ner:
label scarcity is particularly pronounced
for patient note de-identification, which
is an instance of ner. in this work, we
analyze to what extent transfer learning
may address this issue. in particular,
we demonstrate that transferring an ann
model trained on a large labeled dataset to
another dataset with a limited number of
labels improves upon the state-of-the-art
results on two different datasets for patient
note de-identification.
| 2 |
abstract: we discuss relations between residual networks (resnet), recurrent neural networks (rnns) and
the primate visual cortex. we begin with the observation that a shallow rnn is exactly equivalent to a very deep
resnet with weight sharing among the layers. a direct implementation of such an rnn, although having orders
of magnitude fewer parameters, leads to a performance similar to the corresponding resnet. we propose 1) a
generalization of both rnn and resnet architectures and 2) the conjecture that a class of moderately deep rnns
is a biologically-plausible model of the ventral stream in visual cortex. we demonstrate the effectiveness of the
architectures by testing them on the cifar-10 dataset.
| 9 |
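the claimed equivalence is easy to check numerically. the sketch below assumes a simplified residual update h ← h + relu(W h) (an illustrative stand-in for the architectures discussed): iterating one weight-shared block T times, the shallow rnn view, computes exactly the same function as a T-layer resnet whose layers all share W:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1       # one shared residual transform
relu = lambda z: np.maximum(z, 0.0)

def shallow_rnn(x, steps):
    """Recurrent view: apply the same residual update `steps` times."""
    h = x
    for _ in range(steps):
        h = h + relu(W @ h)
    return h

def deep_resnet(x, layer_weights):
    """Feed-forward view: one residual layer per weight matrix."""
    h = x
    for Wt in layer_weights:
        h = h + relu(Wt @ h)
    return h

x = rng.standard_normal(8)
T = 5
# a T-layer resnet whose layers all share W equals the T-step rnn
assert np.allclose(shallow_rnn(x, T), deep_resnet(x, [W] * T))
```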
abstract. we study the ramification theory for actions involving group schemes, focusing on the tame ramification. we consider the notion of tame quotient stack introduced
in [aov] and the one of tame action introduced in [cept]. we establish a local slice
theorem for unramified actions and after proving some interesting lifting properties for
linearly reductive group schemes, we establish a slice theorem for actions by commutative group schemes inducing tame quotient stacks. roughly speaking, we show that
these actions are induced from an action of an extension of the inertia group on a finitely
presented flat neighborhood. we finally consider the notion of tame action and determine
how this notion is related to the one of tame quotient stack previously considered.
| 0 |
abstract. we propose a new statistical procedure able in some way to overcome the curse
of dimensionality without structural assumptions on the function to estimate. it relies on
a least-squares type penalized criterion and a new collection of models built from hyperbolic
biorthogonal wavelet bases. we study its properties in a unifying intensity estimation framework,
where an oracle-type inequality and adaptation to mixed smoothness are shown to hold. besides,
we describe an algorithm for implementing the estimator with a quite reasonable complexity.
| 10 |
abstract
the phenomenon of entropy concentration provides strong support for the maximum entropy method, maxent, for inferring a probability vector from information
in the form of constraints. here we extend this phenomenon, in a discrete setting, to
non-negative integral vectors not necessarily summing to 1. we show that linear constraints that simply bound the allowable sums suffice for concentration to occur even
in this setting. this requires a new, ‘generalized’ entropy measure in which the sum of
the vector plays a role. we measure the concentration in terms of deviation from the
maximum generalized entropy value, or in terms of the distance from the maximum
generalized entropy vector. we provide non-asymptotic bounds on the concentration
in terms of various parameters, including a tolerance on the constraints which ensures
that they are always satisfied by an integral vector. generalized entropy maximization
is not only compatible with ordinary maxent, but can also be considered an extension of it, as it allows us to address problems that cannot be formulated as maxent
problems.
| 10 |
abstract
our purpose in this study was to present an integral-transform approach
to the analytical solutions of the pennes' bioheat transfer equation and to apply it
to the calculation of temperature distribution in tissues in hyperthermia with
magnetic nanoparticles (magnetic hyperthermia).
the validity of our method was investigated by comparison with the
analytical solutions obtained by the green's function method for point and shell
heat sources and the numerical solutions obtained by the finite-difference
method for gaussian-distributed and step-function sources.
there was good agreement between the radial profiles of temperature
calculated by our method and those obtained by the green's function method.
there was also good agreement between our method and the finite-difference
method except for the central temperature for a step-function source that had
approximately a 0.3% difference. we also found that the equations describing
the steady-state solutions for point and shell sources obtained by our method
agreed with those obtained by the green’s function method. these results
appear to indicate the validity of our method.
in conclusion, we presented an integral-transform approach to the
bioheat transfer problems in magnetic hyperthermia, and this study
demonstrated the validity of our method. the analytical solutions presented in
this study will be useful for gaining some insight into the heat diffusion process
during magnetic hyperthermia, for testing numerical codes and/or more
| 5 |
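for context, here is a minimal explicit finite-difference sketch of the 1-d pennes bioheat equation with a step-function source, the kind of numerical reference the abstract compares its integral-transform solutions against; all parameter values are illustrative and not taken from the study:

```python
import numpy as np

# Explicit finite differences for the 1-D Pennes bioheat equation
#   rho*c dT/dt = k d2T/dx2 - w_b*c_b*(T - T_a) + Q(x)
# with illustrative (not study-specific) tissue parameters.
rho_c, k_t = 3.8e6, 0.5          # volumetric heat capacity, conductivity
wb_cb, T_a = 2.0e3, 37.0         # perfusion term, arterial temperature
n, L = 101, 0.1                  # grid points, domain length [m]
dx = L / (n - 1)
dt = 0.2 * rho_c * dx**2 / k_t   # well inside the explicit stability limit
x = np.linspace(0.0, L, n)
Q = np.where(np.abs(x - L / 2) < 0.0051, 1.0e5, 0.0)  # step-function source

T = np.full(n, T_a)
for _ in range(2000):
    lap = np.zeros(n)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt / rho_c * (k_t * lap - wb_cb * (T - T_a) + Q)
    T[0] = T[-1] = T_a           # boundaries held at arterial temperature
```

the symmetric source heats the tissue above the arterial baseline, with the peak at the domain centre.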
abstract—in this paper, we address the problem of the
distributed multi-target tracking with labeled set filters in the
framework of generalized covariance intersection (gci). our
analyses show that the label space mismatching (ls-dm) phenomenon, which means the same realization drawn from label
spaces of different sensors does not have the same implication,
is quite common in practical scenarios and may bring serious
problems. our contributions are two-fold. firstly, we provide a
principled mathematical definition of “label spaces matching (ls-m)” based on information divergence, which is also referred
to as the ls-m criterion. then, to handle ls-dm, we propose a
novel two-step distributed fusion algorithm, named as gci fusion
via label spaces matching (gci-lsm). the first step is to match
the label spaces from different sensors. to this end, we build a
ranked assignment problem and design a cost function consistent
with ls-m criterion to seek the optimal solution of matching
correspondence between label spaces of different sensors. the
second step is to perform the gci fusion on the matched label
space. we also derive the gci fusion with generic labeled multi-object (lmo) densities based on ls-m, which is the foundation
of labeled distributed fusion algorithms. simulation results for
gaussian mixture implementation highlight the performance of
the proposed gci-lsm algorithm in two different tracking
scenarios.
| 3 |
abstract. we answer a question of celikbas, dao, and takahashi by establishing the following characterization of gorenstein rings: a commutative
noetherian local ring (r, m) is gorenstein if and only if it admits an integrally
closed m-primary ideal of finite gorenstein dimension. this is accomplished
through a detailed study of certain test complexes. along the way we construct a test complex that detects finiteness of gorenstein dimension, but
not that of projective dimension.
| 0 |
abstraction, in which the high-level representations can
amplify aspects of the input that are important for discrimination. these techniques have been used amongst others
to identify network threats [17] or encrypted traffic on a
network [18] [19]. a convolutional neural network (cnn), is
a specialised architecture of ann that employs a convolution
operation in at least one of its layers [20] [21]. a variety of
substantiated cnn architectures have been used to great effect
in computer vision [22] and even natural language processing
(nlp), with empirically distinguished superiority in semantic
matching [23], compared to other models.
cryptoknight is developed in accordance with this
methodology. we introduce a scalable learning system that
can easily incorporate new samples through the synthesis of customisable cryptographic algorithms. its entirely
automated core architecture aims to minimise human
interaction, thus allowing the composition of an effective
model. we tested the framework on a number of externally
sourced applications utilising non-library linked functionality.
our experimental analysis indicates that cryptoknight is a
flexible solution that can quickly learn from new cryptographic execution patterns to classify unknown software. this
manuscript presents the following contributions:
• our unique convolutional neural network architecture
fits variable-length data to map an application’s time-invariant cryptographic execution.
• complemented by procedural synthesis, we address the
issue of this task’s disproportionate latent feature space.
• the realised framework, cryptoknight, has demonstrably
faster results compared to that of previous methodologies,
and is extensively re-trainable.
ii. r elated w ork
the cryptovirological threat model has rapidly evolved over
the last decade. a number of notable individuals and research
groups have attempted to address the problem of cryptographic
primitive identification. we will discuss the consequences of
their findings here and address intrinsic problems.
a. heuristics
heuristic methods [24] are often utilised to locate an
optimal strategy for capturing the most appropriate solution.
these measures have previously shown great success in cryptographic primitive identification. a joint project from eth
zürich and google, inc. [8] detailed the automated decryption
of encrypted network communication in memory, to identify
the location and time a subject binary interacted with decrypted input. from an execution trace which dynamically
extracted memory access patterns and control flow data, [8]
was able to identify the necessary factors required to retrieve
the relevant data in a new process. this implementation was
| 9 |
abstract
goal recognition is the problem of inferring the
goal of an agent, based on its observed actions. an
inspiring approach—plan recognition by planning
(prp)—uses off-the-shelf planners to dynamically
generate plans for given goals, eliminating the need
for the traditional plan library. however, the existing
prp formulation is inherently inefficient in online
recognition, and cannot be used with motion planners for continuous spaces. in this paper, we utilize a different prp formulation which allows for
online goal recognition, and for application in continuous spaces. we present an online recognition
algorithm, where two heuristic decision points may
be used to improve run-time significantly over existing work. we specify heuristics for continuous
domains, prove guarantees on their use, and empirically evaluate the algorithm over hundreds of
experiments in both a 3d navigational environment
and a cooperative robotic team task.
| 2 |
abstract
remote sensing satellite data offer the unique possibility to map land use land cover transformations by
providing spatially explicit information. however, detection of short-term processes and land use patterns
of high spatial-temporal variability is a challenging task.
we present a novel framework using multi-temporal terrasar-x data and machine learning techniques,
namely discriminative markov random fields with spatio-temporal priors, and import vector machines, in
order to advance the mapping of land cover characterized by short-term changes. our study region covers
a current deforestation frontier in the brazilian state of pará with land cover dominated by primary forests,
different types of pasture land and secondary vegetation, and land use dominated by short-term processes
such as slash-and-burn activities. the data set comprises multi-temporal terrasar-x imagery acquired
over the course of the 2014 dry season, as well as optical data (rapideye, landsat) for reference. results
show that land use land cover is reliably mapped, resulting in spatially adjusted overall accuracies of up to
79% in a five class setting, yet limitations for the differentiation of different pasture types remain.
the proposed method is applicable on multi-temporal data sets, and constitutes a feasible approach to map
land use land cover in regions that are affected by high-frequent temporal changes.
keywords: markov random fields (mrf), import vector machines (ivm), multi-temporal lulc
mapping, deforestation, amazon, sar
| 1 |
abstract
for undirected graphs g = (v, e) and g0 = (v0 , e0 ), say that g is a region intersection
graph over g0 if there is a family of connected subsets {r u ⊆ v0 : u ∈ v } of g0 such that
{u, v} ∈ e ⇐⇒ r u ∩ r v ≠ ∅.
we show if g0 excludes the complete graph k h as a minor for some h > 1, then every region
intersection graph g over g0 with m edges has a balanced separator with at most c h √m nodes,
where c h is a constant depending only on h. if g additionally has uniformly bounded vertex
degrees, then such a separator is found by spectral partitioning.
a string graph is the intersection graph of continuous arcs in the plane. string graphs are
precisely region intersection graphs over planar graphs. thus the preceding result implies that
every string graph with m edges has a balanced separator of size o(√m). this bound is optimal,
as it generalizes the planar separator theorem. it confirms a conjecture of fox and pach (2010),
and improves over the o(√m log m) bound of matoušek (2013).
| 8 |
abstract
cell search is the process for a user to detect its neighboring base stations (bss) and make a
cell selection decision. due to the importance of beamforming gain in millimeter wave (mmwave)
and massive mimo cellular networks, the directional cell search delay performance is investigated. a
cellular network with fixed bs and user locations is considered, so that strong temporal correlations
exist for the sinr experienced at each bs and user. for poisson cellular networks with rayleigh fading
channels, a closed-form expression for the spatially averaged mean cell search delay of all users is
derived. this mean cell search delay for a noise-limited network (e.g., mmwave network) is proved to
be infinite whenever the non-line-of-sight (nlos) path loss exponent is larger than 2. for interference-limited networks, a phase transition for the mean cell search delay is shown to exist in terms of the
number of bs antennas/beams m: the mean cell search delay is infinite when m is smaller than a
threshold and finite otherwise. beam-sweeping is also demonstrated to be effective in decreasing the
cell search delay, especially for the cell edge users.
| 7 |
abstract
| 5 |
abstract. we deduce properties of the koopman representation of a positive entropy probability measure-preserving action of a countable, discrete, sofic group. our main result may be regarded as a “representation-theoretic” version of sinaĭ’s factor theorem. we show that probability measure-preserving actions with
completely positive entropy of an infinite sofic group must be mixing and, if the group is nonamenable,
have spectral gap. this implies that if γ is a nonamenable group and γ ↷ (x, µ) is a probability measure-preserving action which is not strongly ergodic, then no action orbit equivalent to γ ↷ (x, µ) has completely
positive entropy. crucial to these results is a formula for entropy in the presence of a polish, but a priori
noncompact, model.
| 4 |
abstraction and systems language design
c. jasson casey∗ , andrew sutton† , gabriel dos reis† , alex sprintson∗
∗ department
| 6 |
abstract
we discuss actions of free groups on the circle with “ping-pong” dynamics; these are dynamics
determined by a finite amount of combinatorial data, analogous to schottky domains or markov
partitions. using this, we show that the free group fn admits an isolated circular order if and
only if n is even, in stark contrast with the case for linear orders. this answers a question from
[21]. inspired by work in [2], we also exhibit examples of “exotic” isolated points in the space of
all circular orders on f2 . analogous results are obtained for linear orders on the groups fn × z.
| 4 |
abstract
we present a procedure for computing the convolution of exponential signals without the need of
solving integrals or summations. the procedure requires the resolution of a system of linear equations
involving vandermonde matrices. we apply the method to solve ordinary differential/difference equations
with constant coefficients.
| 3 |
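one plausible reading of the procedure, sketched under stated assumptions: for distinct exponents s_i, the causal convolution of the signals e^{s_i t}u(t) is Σ_i c_i e^{s_i t}u(t), and the coefficients c_i solve a vandermonde system, so no integral is evaluated. the function name and the particular right-hand side below are illustrative, not necessarily the paper's exact formulation:

```python
import numpy as np

def conv_exponentials(s):
    """Coefficients c_i such that
       (e^{s_1 t}u) * ... * (e^{s_n t}u) = sum_i c_i e^{s_i t} u(t)
    for distinct exponents s_i, obtained by solving the Vandermonde
    system V c = (0, ..., 0, 1) with V[m, i] = s_i^m."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    V = np.vander(s, n, increasing=True).T   # V[m, i] = s_i^m
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(V, rhs)

# sanity check for two exponentials: the analytic convolution of
# e^{-t}u(t) and e^{-2t}u(t) is e^{-t} - e^{-2t}
s = [-1.0, -2.0]
c = conv_exponentials(s)
t = 1.3
closed_form = sum(ci * np.exp(si * t) for ci, si in zip(c, s))
assert np.isclose(closed_form, np.exp(-t) - np.exp(-2 * t))
```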
abstract—secure multi-party computation (mpc) enables a
set of mutually distrusting parties to cooperatively compute,
using a cryptographic protocol, a function over their private data. this paper presents wys*, a new domain-specific
language (dsl) implementation for writing mpcs. wys*
is a verified, domain-specific integrated language extension
(vdsile), a new kind of embedded dsl hosted in f*, a full-featured, verification-oriented programming language. wys*
source programs are essentially f* programs written against
an mpc library, meaning that programmers can use f*’s
logic to verify the correctness and security properties of their
programs. to reason about the distributed semantics of these
programs, we formalize a deep embedding of wys*, also in
f*. we mechanize the necessary metatheory to prove that the
properties verified for the wys* source programs carry over
to the distributed, multi-party semantics. finally, we use f*’s
extraction mechanism to extract an interpreter that we have
proved matches this semantics, yielding a verified implementation. wys* is the first dsl to enable formal verification of
source mpc programs, and also the first mpc dsl to provide
a verified implementation. with wys* we have implemented
several mpc protocols, including private set intersection, joint
median, and an mpc-based card dealing application, and have
verified their security and correctness.
| 6 |
abstract
antifragile systems grow measurably better in the
presence of hazards. this is in contrast to fragile
systems which break down in the presence of hazards, robust systems that tolerate hazards up to a
certain degree, and resilient systems that – like self-healing systems – revert to their earlier expected
behavior after a period of convalescence. the notion of antifragility was introduced by taleb for
economics systems, but its applicability has been
illustrated in biological and engineering domains
as well. in this paper, we propose an architecture
that imparts antifragility to intelligent autonomous
systems, specifically those that are goal-driven and
based on ai-planning. we argue that this architecture allows the system to self-improve by uncovering new capabilities obtained either through the
hazards themselves (opportunistic) or through deliberation (strategic). an ai planning-based case
study of an autonomous wheeled robot is presented.
we show that with the proposed architecture, the
robot develops antifragile behaviour with respect to
an oil spill hazard.
| 2 |
abstract
the objective of clustering is to discover natural groups in datasets and to identify geometrical structures which might reside there, without assuming any prior knowledge on the
characteristics of the data. the problem can be seen as detecting the inherent separations
between groups of a given point set in a metric space governed by a similarity function. the
pairwise similarities between all data objects form a weighted graph adjacency matrix which
contains all necessary information for the clustering process, which can consequently be formulated as a graph partitioning problem. in this context, we propose a new cluster quality measure
which uses the maximum spanning tree and allows us to compute the optimal clustering under
the min-max principle in polynomial time. our algorithm can be applied when a load-balanced
clustering is required.
| 8 |
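a minimal sketch of the spanning-tree idea behind the abstract, assuming a similarity matrix as input: build the maximum spanning tree (prim's algorithm) and cut its k - 1 weakest edges, so the surviving components are the clusters. the paper's min-max quality measure itself is not reproduced here:

```python
import numpy as np

def mst_clusters(similarity, k):
    """Cluster by cutting the k-1 weakest edges of the maximum
    spanning tree of the similarity graph; the remaining connected
    components are the clusters. Illustrative sketch only."""
    S = np.asarray(similarity, dtype=float)
    n = S.shape[0]
    in_tree = [0]
    edges = []                       # (weight, u, v) edges of the max spanning tree
    best = S[0].copy()
    parent = np.zeros(n, dtype=int)
    for _ in range(n - 1):           # Prim's algorithm, maximizing similarity
        cand = [i for i in range(n) if i not in in_tree]
        u = max(cand, key=lambda i: best[i])
        edges.append((best[u], parent[u], u))
        in_tree.append(u)
        for i in cand:
            if S[u, i] > best[i]:
                best[i] = S[u, i]
                parent[i] = u
    edges.sort()                     # discard the k-1 smallest-similarity tree edges
    keep = edges[k - 1:]
    root = list(range(n))            # union-find over the kept edges
    def find(a):
        while root[a] != a:
            root[a] = root[root[a]]
            a = root[a]
        return a
    for _, u, v in keep:
        root[find(u)] = find(v)
    return [find(i) for i in range(n)]
```

on a toy similarity matrix with two obvious groups, the sketch separates them cleanly.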
abstract. we investigate a possible connection between the fsz properties
of a group and its sylow subgroups. we show that the simple groups g2 (5)
and s6 (5), as well as all sporadic simple groups with order divisible by 5^6, are
not fsz, and that neither are their sylow 5-subgroups. the groups g2 (5)
and hn were previously established as non-fsz by peter schauenburg; we
present alternative proofs. all other sporadic simple groups and their sylow
subgroups are shown to be fsz. we conclude by considering all perfect groups
available through gap with order at most 10^6 , and show they are non-fsz
if and only if their sylow 5-subgroups are non-fsz.
| 4 |
abstract
while most scene flow methods use either variational optimization or a strong rigid motion assumption, we show for
the first time that scene flow can also be estimated by dense
interpolation of sparse matches. to this end, we find sparse
matches across two stereo image pairs that are detected
without any prior regularization and perform dense interpolation preserving geometric and motion boundaries by
using edge information. a few iterations of variational energy minimization are performed to refine our results, which
are thoroughly evaluated on the kitti benchmark and additionally compared to state-of-the-art on mpi sintel. for
application in an automotive context, we further show that
an optional ego-motion model helps to boost performance
and blends smoothly into our approach to produce a segmentation of the scene into static and dynamic parts.
| 1 |
abstract
we investigate the problem of testing the equivalence between two discrete histograms. a
k-histogram over [n] is a probability distribution that is piecewise constant over some set of k
intervals over [n]. histograms have been extensively studied in computer science and statistics.
given a set of samples from two k-histogram distributions p, q over [n], we want to distinguish
(with high probability) between the cases that p = q and ‖p − q‖1 ≥ ε. the main contribution
of this paper is a new algorithm for this testing problem and a nearly matching information-theoretic lower bound. specifically, the sample complexity of our algorithm matches our lower
bound up to a logarithmic factor, improving on previous work by polynomial factors in the
relevant parameters. our algorithmic approach applies in a more general setting and yields
improved sample upper bounds for testing closeness of other structured distributions as well.
| 10 |
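for contrast with the sample-optimal tester of the abstract, a naive plug-in baseline (illustrative names, threshold chosen ad hoc) simply compares empirical distributions:

```python
from collections import Counter

def plugin_l1_test(samples_p, samples_q, eps):
    """Naive plug-in baseline for testing p = q versus ||p - q||_1 >= eps:
    compare empirical distributions and threshold at eps / 2.
    The abstract's tester achieves far better sample complexity;
    this only illustrates the decision problem being solved."""
    m_p, m_q = len(samples_p), len(samples_q)
    cp, cq = Counter(samples_p), Counter(samples_q)
    l1 = sum(abs(cp[x] / m_p - cq[x] / m_q) for x in set(cp) | set(cq))
    return "equal" if l1 < eps / 2 else "far"
```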
abstract
haskell provides type-class-bounded and parametric polymorphism as opposed to subtype
polymorphism of object-oriented languages such as java and ocaml. it is a contentious
question whether haskell 98 without extensions, or with common extensions, or with
new extensions can fully support conventional object-oriented programming with encapsulation, mutable state, inheritance, overriding, statically checked implicit and explicit
subtyping, and so on.
in a first phase, we demonstrate how far we can get with object-oriented functional
programming, if we restrict ourselves to plain haskell 98. in the second and major phase,
we systematically substantiate that haskell 98, with some common extensions, supports
all the conventional oo features plus more advanced ones, including first-class lexically
scoped classes, implicitly polymorphic classes, flexible multiple inheritance, safe downcasts
and safe co-variant arguments. haskell indeed can support width and depth, structural
and nominal subtyping. we address the particular challenge to preserve haskell’s type
inference even for objects and object-operating functions. advanced type inference is a
strength of haskell that is worth preserving. many of the features we get “for free”: the
type system of haskell turns out to be a great help and a guide rather than a hindrance.
the oo features are introduced in haskell as the oohaskell library, non-trivially
based on the hlist library of extensible polymorphic records with first-class labels and
subtyping. the library sample code, which is patterned after the examples found in oo
textbooks and programming language tutorials, including the ocaml object tutorial,
demonstrates that oo code translates into oohaskell in an intuition-preserving way:
essentially expression-by-expression, without requiring global transformations.
oohaskell lends itself as a sandbox for typed oo language design.
keywords: object-oriented functional programming, object type inference, typed object-oriented language design, heterogeneous collections, ml-art, mutable objects, type-class-based programming, haskell, haskell 98, structural subtyping, duck typing, nominal subtyping, width subtyping, deep subtyping, co-variance
| 2 |
abstract
we refine the general methodology in [1] for the construction and analysis of essentially minimax estimators for a wide class
of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions with support size s comparable
with the number of observations n. specifically, we determine the “smooth” and “non-smooth” regimes based on the confidence
set and the smoothness of the functional. in the “non-smooth” regime, we apply an unbiased estimator for a suitable polynomial
approximation of the functional. in the “smooth” regime, we construct a general version of the bias-corrected maximum likelihood
estimator (mle) based on taylor expansion.
we apply the general methodology to the problem of estimating the kl divergence between two discrete probability measures
p and q from empirical data in a non-asymptotic and possibly large alphabet setting. we construct minimax rate-optimal estimators
for d(p‖q) when the likelihood ratio is upper bounded by a constant which may depend on the support size, and show that
the performance of the optimal estimator with n samples is essentially that of the mle with n ln n samples. our estimator is
adaptive in the sense that it does not require the knowledge of the support size nor the upper bound on the likelihood ratio. we
show that the general methodology results in minimax rate-optimal estimators for other divergences as well, such as the hellinger
distance and the χ2 -divergence. our approach refines the approximation methodology recently developed for the construction of
near minimax estimators of functionals of high-dimensional parameters, such as entropy, rényi entropy, mutual information and
ℓ1 distance in large alphabet settings, and shows that the effective sample size enlargement phenomenon holds significantly more
widely than previously established.
index terms
divergence estimation, kl divergence, multivariate approximation theory, taylor expansion, functional estimation, maximum
likelihood estimator, high dimensional statistics, minimax lower bound
| 10 |
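for orientation, the naive mle plug-in estimator that the bias-corrected constructions improve upon might look as follows; the add-one smoothing on q is an illustrative way to keep the likelihood ratio bounded, not the paper's choice:

```python
import math
from collections import Counter

def kl_plugin(samples_p, samples_q, support):
    """Plug-in (MLE) estimate of D(P||Q), with add-one smoothing on Q
    so the estimated likelihood ratio stays finite. This is the
    baseline the abstract's minimax-optimal estimators improve upon."""
    n, m = len(samples_p), len(samples_q)
    cp, cq = Counter(samples_p), Counter(samples_q)
    k = len(support)
    d = 0.0
    for x in support:
        p = cp[x] / n
        q = (cq[x] + 1) / (m + k)     # smoothed to avoid division by zero
        if p > 0:
            d += p * math.log(p / q)
    return d
```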