cesare-spinoso's picture
Upload 3 files
ac8f274
text,label
"abstract. we solve the simultaneous conjugacy problem in artin’s braid groups and,
more generally, in garside groups, by means of a complete, effectively computable, finite
invariant. this invariant generalizes the one-dimensional notion of super summit set to
arbitrary dimensions. one key ingredient in our solution is the introduction of a provable
high-dimensional version of the birman–ko–lee cycling theorem. the complexity of this
solution is a small degree polynomial in the cardinalities of our generalized super summit
sets and the input parameters. computer experiments suggest that the cardinality of this
invariant, for a list of order n independent elements of artin’s braid group bn , is generically
close to 1.",4
"abstract—limited capture range, and the requirement to
provide high quality initialization for optimization-based 2d/3d
image registration methods, can significantly degrade the performance of 3d image reconstruction and motion compensation
pipelines. challenging clinical imaging scenarios, which contain
significant subject motion such as fetal in-utero imaging, complicate the 3d image and volume reconstruction process.
in this paper we present a learning based image registration method capable of predicting 3d rigid transformations of
arbitrarily oriented 2d image slices, with respect to a learned
canonical atlas co-ordinate system. only image slice intensity
information is used to perform registration and canonical alignment, no spatial transform initialization is required. to find
image transformations we utilize a convolutional neural network
(cnn) architecture to learn the regression function capable of
mapping 2d image slices to a 3d canonical atlas space.
we extensively evaluate the effectiveness of our approach
quantitatively on simulated magnetic resonance imaging (mri),
fetal brain imagery with synthetic motion and further demonstrate qualitative results on real fetal mri data where our method
is integrated into a full reconstruction and motion compensation
pipeline. our learning based registration achieves an average
spatial prediction error of 7 mm on simulated data and produces
qualitatively improved reconstructions for heavily moving fetuses
with gestational ages of approximately 20 weeks. our model
provides a general and computationally efficient solution to the
2d/3d registration initialization problem and is suitable for realtime scenarios.",1
"abstract: a wireless charging station (wcs) transfers energy wirelessly to nodes within its charging range. this paper investigates
the impact of node speed on throughput of wcs overlaid mobile
ad-hoc networks (manet) when packet transmissions are constrained by energy status of each node. nodes in such a network
shows twofold charging pattern depending on their moving speeds.
a slow moving node outside wcs charging regions resorts to wait
energy charging from wcss for a long time while that inside wcs
charging regions consistently recharges the battery. a fast moving node waits and recharges directly contrary to the slow moving
node. reflecting these phenomena, we design a two-dimensional
markov chain of which the state dimensions respectively represent
remaining energy and distance to the nearest wcs normalized by
node speed. solving this enables to provide the following three impacts of speed on throughput. firstly, higher node speed improves
throughput by reducing the inter-meeting time between nodes and
wcss. secondly, such throughput improvement by higher speed is
replaceable with larger battery capacity. finally, we prove that the
throughput scaling is independent of node speed.
index terms: wireless power transfer, energy provision, wireless
charging station, node speed, battery capacity, node density, scaling law.",7
"abstract—we characterize the resolvability region for a large
class of point-to-point channels with continuous alphabets. in
our direct result, we prove not only the existence of good
resolvability codebooks, but adapt an approach based on the
chernoff-hoeffding bound to the continuous case showing that
the probability of drawing an unsuitable codebook is doubly
exponentially small. for the converse part, we show that our
previous elementary result carries over to the continuous case
easily under some mild continuity assumption.",7
"abstract
the paper provides simple formulas of bayesian filtering for the exact recursive computation of state conditional probability density
functions given quantized innovations signal measurements of a linear stochastic system. this is a topic of current interest because the
innovations signal should be white and therefore efficient in its use of channel capacity and in the design and optimization of the quantizer.
earlier approaches, which we reexamine and characterize here, have relied on assumptions concerning densities or approximations to yield
recursive solutions, which include the sign-of-innovations kalman filter and a particle filtering technique. our approach uses the kalman
filter innovations at the transmitter side and provides a point of comparison for the other methods, since it is based on the bayesian filter.
computational examples are provided.",3
"abstract—large-scale ad hoc analytics of genomic data is
popular using the r-programming language supported by 671
software packages provided by bioconductor. more recently,
analytical jobs are benefitting from on-demand computing and
storage, their scalability and their low maintenance cost, all of
which are offered by the cloud. while biologists and bioinformaticists can take an analytical job and execute it on their personal
workstations, it remains challenging to seamlessly execute the
job on the cloud infrastructure without extensive knowledge of
the cloud dashboard. how analytical jobs can not only with
minimum effort be executed on the cloud, but also how both
the resources and data required by the job can be managed is
explored in this paper. an open-source light-weight framework
for executing r-scripts using bioconductor packages, referred to
as ‘rbiocloud’, is designed and developed. rbiocloud offers a set
of simple command-line tools for managing the cloud resources,
the data and the execution of the job. three biological test cases
validate the feasibility of rbiocloud. the framework is publicly
available from http://www.rbiocloud.com.
keywords—cloud computing, r programming, bioconductor,
amazon web services, data analytics",5
"abstract—this paper employs equal-image-size source partitioning techniques to derive the capacities of the general discrete
memoryless wiretap channel (dm-wtc) under four different
secrecy criteria. these criteria respectively specify requirements
on the expected values and tail probabilities of the differences,
in absolute value and in exponent, between the joint probability
of the secret message and the eavesdropper’s observation and
the corresponding probability if they were independent. some of
these criteria reduce back to the standard leakage and variation
distance constraints that have been previously considered in the
literature. the capacities under these secrecy criteria are found to
be different when non-vanishing error and secrecy tolerances are
allowed. based on these new results, we are able to conclude that
the strong converse property generally holds for the dm-wtc
only under the two secrecy criteria based on constraining the tail
probabilities. under the secrecy criteria based on the expected
values, an interesting phase change phenomenon is observed as
the tolerance values vary.",7
"abstract
in sequential hypothesis testing, generalized
binary search (gbs) greedily chooses the
test with the highest information gain at each
step. it is known that gbs obtains the gold
standard query cost of o(log n) for problems
satisfying the k-neighborly condition, which
requires any two tests to be connected by a
sequence of tests where neighboring tests disagree on at most k hypotheses. in this paper, we introduce a weaker condition, splitneighborly, which requires that for the set of
hypotheses two neighbors disagree on, any
subset is splittable by some test. for four
problems that are not k-neighborly for any
constant k, we prove that they are splitneighborly, which allows us to obtain the optimal o(log n) worst-case query cost.",2
"abstract
this short note is to point the reader to notice that the proof of high dimensional
asymptotic normality of mle estimator for logistic regression under the regime
pn = o(n) given in paper: “maximum likelihood estimation in logistic regression
models with a diverging number of covariates. electronic journal of statistics, 6,
1838-1846.” is wrong.
keyword: high dimensional logistic regression; generalized linear models; asymptotic normality.",10
"abstract
this paper shows the complementary roles of mathematical and engineering points
of view when dealing with truss analysis problems involving systems of linear equations and inequalities. after the compatibility condition and the mathematical structure of the general solution of a system of linear equations is discussed, the truss
analysis problem is used to illustrate its mathematical and engineering multiple
aspects, including an analysis of the compatibility conditions and a physical interpretation of the general solution, and the generators of the resulting affine space.
next, the compatibility and the mathematical structure of the general solution of
linear systems of inequalities are analyzed and the truss analysis problem revisited
adding some inequality constraints, and discussing how they affect the resulting
general solution and many other aspects of it. finally, some conclusions are drawn.
key words: compatibility, cones, dual cones, linear spaces, polytopes,
simultaneous solutions, truss analysis.
pacs: d24, l60, 047",5
"abstract
machine learning (ml) has revamped every domain of life as it
provides powerful tools to build complex systems that learn and
improve from experience and data. our key insight is that to solve
a machine learning problem, data scientists do not invent a new
algorithm each time, but evaluate a range of existing models with
different configurations and select the best one. this task is laborious, error-prone, and drains a large chunk of project budget and
time. in this paper we present a novel framework inspired by programming by sketching[8] and partial evaluation[4] to minimize
human intervention in developing ml solutions. we templatize
machine learning algorithms to expose configuration choices as
holes to be searched. we share code and computation between different algorithms, and only partially evaluate configuration space
of algorithms based on information gained from initial algorithm
evaluations. we also employ hierarchical and heuristic based pruning to reduce the search space. our initial findings indicate that our
approach can generate highly accurate ml models. interviews with
data scientists show that they feel our framework can eliminate
sources of common errors and significantly reduce development
time.",2
"abstract. we show that a construction by aanderaa and cohen used in
their proof of the higman embedding theorem preserves torsion length.
we give a new construction showing that every finitely presented group
is the quotient of some c ′ (1/6) finitely presented group by the subgroup
generated by its torsion elements. we use these results to show there is a
finitely presented group with infinite torsion length which is c ′ (1/6), and
thus word-hyperbolic and virtually torsion-free.",4
"abstract
analyzing ordinal data becomes increasingly important in psychology, especially in
the context of item response theory. the generalized partial credit model (gpcm) is
probably the most widely used ordinal model and finds application in many large scale
educational assessment studies such as pisa. in the present paper, optimal test designs
are investigated for estimating persons’ abilities with the gpcm for calibrated tests
when item parameters are known from previous studies. we will derive that local
optimality may be achieved by assigning non-zero probability only to the first and last
category independently of a person’s ability. that is, when using such a design, the
gpcm reduces to the dichotomous 2pl model. since locally optimal designs require
the true ability to be known, we consider alternative bayesian design criteria using
weight distributions over the ability parameter space. for symmetric weight
distributions, we derive necessary conditions for the optimal one-point design of two
response categories to be bayes optimal. furthermore, we discuss examples of common
symmetric weight distributions and investigate, in which cases the necessary conditions
are also sufficient. since the 2pl model is a special case of the gpcm, all of these
results hold for the 2pl model as well.
key words: optimal design; bayesian design; partial credit model; 2pl model; rasch
model; item response theory.",10
"abstract
we consider the problem of approximating the output of a parametric electromagnetic field
model in the presence of a large number of uncertain input parameters. given a sufficiently
smooth output with respect to the input parameters, such problems are often tackled with
interpolation-based approaches, such as the stochastic collocation method on tensor-product or
isotropic sparse grids. due to the so-called curse of dimensionality, those approaches result in
increased or even forbidding computational costs. in order to reduce the growth in complexity with the number of dimensions, we employ a dimension-adaptive, hierarchical interpolation
scheme, based on nested univariate interpolation nodes. clenshaw-curtis and leja nodes satisfy the nestedness property and have been found to provide accurate interpolations when the
parameters follow uniform distributions. the dimension-adaptive algorithm constructs the approximation based on the observation that not all parameters or interactions among them are
equally important regarding their impact on the model’s output. our goal is to exploit this
anisotropy in order to construct accurate polynomial surrogate models at a reduced computational cost compared to isotropic sparse grids. we apply the stochastic collocation method
to two electromagnetic field models with medium- to high-dimensional input uncertainty. the
performances of isotropic and adaptively constructed, anisotropic sparse grids based on both
clenshaw-curtis and leja interpolation nodes are examined. all considered approaches are
compared with one another regarding the surrogate models’ approximation accuracies using a
cross-validation error metric.
keywords– dimension adaptivity, clenshaw-curtis, computational electromagnetics, electromagnetic field simulations, hierarchical interpolation, leja, sparse grids, stochastic collocation,
uncertainty quantification.",5
"abstract
we show that if the nearly-linear time solvers for laplacian matrices and their generalizations
can be extended to solve just slightly larger families of linear systems, then they can be used
to quickly solve all systems of linear equations over the reals. this result can be viewed either
positively or negatively: either we will develop nearly-linear time algorithms for solving all
systems of linear equations over the reals, or progress on the families we can solve in nearlylinear time will soon halt.",8
"abstract. in this survey, we outline two recent constructions of free commutative integro-differential
algebras. they are based on the construction of free commutative rota-baxter algebras by mixable
shuffles. the first is by evaluations. the second is by the method of gröbner-shirshov bases.",0
"abstract
in the (deletion-channel) trace reconstruction problem, there is an unknown n-bit source
string x. an algorithm is given access to independent traces of x, where a trace is formed by
deleting each bit of x independently with probability δ. the goal of the algorithm is to recover x
exactly (with high probability), while minimizing samples (number of traces) and running time.
previously, the best known algorithm for the trace reconstruction problem was due to holene 1/2 )) samples and running time for any fixed 0 < δ < 1.
stein et al. [hmpw08]; it uses exp(o(n
it is also what we call a “mean-based algorithm”, meaning that it only uses the empirical means
of the individual bits of the traces. holenstein et al. also gave a lower bound, showing that any
e
mean-based algorithm must use at least nω(log n) samples.
in this paper we improve both of these results, obtaining matching upper and lower bounds
for mean-based trace reconstruction. for any constant deletion rate 0 < δ < 1, we give a meanbased algorithm that uses exp(o(n1/3 )) time and traces; we also prove that any mean-based
algorithm must use at least exp(ω(n1/3 )) traces. in fact, we obtain matching upper and lower
bounds even for δ subconstant and ρ :=
1 − δ subconstant: when (log3 n)/n ≪ δ ≤ 1/2 the
bound is exp(−θ(δn)1/3 ), and when 1/ n ≪ ρ ≤ 1/2 the bound is exp(−θ(n/ρ)1/3 ).
our proofs involve estimates for the maxima of littlewood polynomials on complex disks.
we show that these techniques can also be used to perform trace reconstruction with random
insertions and bit-flips in addition to deletions. we also find a surprising result: for deletion
probabilities δ > 1/2, the presence of insertions can actually help with trace reconstruction.",8
"abstract
we develop the operational semantics of an untyped probabilistic λ-calculus with continuous distributions, and both hard and soft constraints, as a foundation for universal probabilistic programming languages such as church, anglican, and venture. our first contribution
is to adapt the classic operational semantics of λ-calculus to a continuous setting via creating
a measure space on terms and defining step-indexed approximations. we prove equivalence of
big-step and small-step formulations of this distribution-based semantics. to move closer to
inference techniques, we also define the sampling-based semantics of a term as a function from
a trace of random samples to a value. we show that the distribution induced by integration
over the space of traces equals the distribution-based semantics. our second contribution is
to formalize the implementation technique of trace markov chain monte carlo (mcmc) for
our calculus and to show its correctness. a key step is defining sufficient conditions for the
distribution induced by trace mcmc to converge to the distribution-based semantics. to
the best of our knowledge, this is the first rigorous correctness proof for trace mcmc for a
higher-order functional language, or for a language with soft constraints.",6
"abstract
we give upper bounds for the stanley depth of edge ideals of certain k–
partite clutters. in particular, we generalize a result of ishaq about the
stanley depth of the edge ideal of a complete bipartite graph. a result of
pournaki, seyed fakhari and yassemi implies that the stanley’s conjecture
holds for d-uniform complete d-partite clutters. here we give a shorter and
different proof of this fact.",0
"abstract—complexity analysis becomes a common task in
supervisory control. however, many results of interest are spread
across different topics. the aim of this paper is to bring several
interesting results from complexity theory and to illustrate their
relevance to supervisory control by proving new nontrivial results
concerning nonblockingness in modular supervisory control of
discrete event systems modeled by finite automata.
index terms—discrete event systems; finite automata; modular control; complexity.",3
"abstract
flavor (formal language for audio-visual object representation) has been created as a language
for describing coded multimedia bitstreams in a formal way so that the code for reading and writing
bitstreams can be automatically generated. it is an extension of c++ and java, in which the typing
system incorporates bitstream representation semantics. this allows describing in a single place both
the in-memory representation of data as well as their bitstream-level (compressed) representation.
flavor also comes with a translator that automatically generates standard c++ or java code from
the flavor source code so that direct access to compressed multimedia information by application
developers can be achieved with essentially zero programming. flavor has gone through many enhancements and this paper fully describes the latest version of the language and the translator. the
software has been made into an open source project as of version 4.1, and the latest downloadable
flavor package is available at http://flavor.sourceforge.net.",2
"abstract— in current power distribution systems, one of the
most challenging operation tasks is to coordinate the networkwide distributed energy resources (ders) to maintain the
stability of voltage magnitude of the system. this voltage control
task has been investigated actively under either distributed
optimization-based or local feedback control-based characterizations. the former architecture requires a strongly-connected
communication network among all ders for implementing
the optimization algorithms, a scenario not yet realistic in
most of the existing distribution systems with under-deployed
communication infrastructure. the latter one, on the other
hand, has been proven to suffer from loss of network-wide operational optimality. in this paper, we propose a game-theoretic
characterization for semi-local voltage control with only a
locally connected communication network. we analyze the
existence and uniqueness of the generalized nash equilibrium
(gne) for this characterization and develop a fully distributed
equilibrium-learning algorithm that relies on only neighbor-toneighbor information exchange. provable convergence results
are provided along with numerical tests which corroborate the
robust convergence property of the proposed algorithm.",3
"abstract-state machines by gurevich and huggins [5], norrish’s c ++ semantics in hol4 [12] and the operating-system verification projects vfiasco [7], l4.verified [10] and
robin [16] all use a semantics of c or c ++ data types that employs (untyped) byte sequences to encode
typed values for storing them in memory. an underspecified, partial function converts byte sequences
back into typed values.
we use the term underspecified data-type semantics to refer to such a semantics of data types that converts between typed values and untyped byte sequences while leaving the precise conversion functions
underspecified. with an underspecified data-type semantics, it is unknown during program verification
which specific bytes are written to memory.
the main ingredients of underspecified data-type semantics are two functions — to byte and
from byte — that convert between typed values and byte sequences. the function from byte is in general
partial, because not every byte sequence encodes a typed value. for instance, consider a representation
of integers that uses a parity bit: from byteint would be undefined for byte sequences with invalid parity.
underspecified data-type semantics are relevant for the verification of low-level systems code. this
includes code that needs to maintain hardware-controlled data structures, e.g., page directories, or that
contains its own memory allocator. type and memory safety of such low-level code depend on its
functional correctness and are undecidable in general. for this reason, type safety for such code can only
be established by logical reasoning and not by a conventional type system. as a consequence, this paper
focuses on data-type semantics instead of improving the type system for, e.g., c ++.
having to establish type correctness by verification is not as bad as it first sounds. with suitable
lemmas, type correctness can be proved automatically for those parts of the code that are statically type
∗ this work was in part funded by the european commission through pasr grant 104600, by the deutsche forschungsgemeinschaft through the quaos project, and by the swedish research council.",6
"abstract
a sup-interpretation is a tool which provides an upper bound on
the size of a value computed by some symbol of a program. supinterpretations have shown their interest to deal with the complexity
of first order functional programs. for instance, they allow to characterize all the functions bitwise computable in alogtime. this paper is
an attempt to adapt the framework of sup-interpretations to a fragment
of oriented-object programs, including distinct encodings of numbers
through the use of constructor symbols, loop and while constructs and
non recursive methods with side effects. we give a criterion, called
brotherly criterion, which ensures that each brotherly program computes objects whose size is polynomially bounded by the inputs sizes.",6
"abstract
a hopf galois structure on a finite field extension l/k is a pair (h, µ), where
h is a finite cocommutative k-hopf algebra and µ a hopf action. in this paper we
present an algorithm written in the computational algebra system magma which
gives all hopf galois structures on separable field extensions of a given degree
and several properties of those. we describe the results obtained for extensions
of degree up to 11. besides, we prove that separable extensions of degree p2 , for
p an odd prime, have at most one type of hopf galois structures.
keywords: galois theory, hopf algebra, computational system magma.",4
"abstract
in this paper we present a new width measure for a tree decomposition, minor-matching
hypertree width, µ-tw, for graphs and hypergraphs, such that bounding the width guarantees
that set of maximal independent sets has a polynomially-sized restriction to each decomposition bag. the relaxed conditions of the decomposition allow a much wider class of graphs
and hypergraphs of bounded width compared to other tree decompositions. we show that,
n
1
for fixed k, there are 2(1− k +o(1))( 2 ) n-vertex graphs of minor-matching hypertree width at
most k. a number of problems including maximum independence set, k-colouring, and
homomorphism of uniform hypergraphs permit polynomial-time solutions for hypergraphs
with bounded minor-matching hypertree width and bounded rank. we show that for any
given k and any graph g, it is possible to construct a decomposition of minor-matching hy3
pertree width at most o(k 3 ), or to prove that µ-tw(g) > k in time no(k ) . this is done by
presenting a general algorithm for approximating the hypertree width of well-behaved measures, and reducing µ-tw to such measure. the result relating the restriction of the maximal
independent sets to a set s with the set of induced matchings intersecting s in graphs, and
minor matchings intersecting s in hypergraphs, might be of independent interest.",8
"abstract: in this paper, we present two new forms of the write statement:
one of the form write(x); g where g is a statement and the other of the form
write(x); d where d is a module. the former is a generalization of traditional
write statement and is quite useful. the latter is useful for implementing
interactive modules.",6
"abstract—the paper studies the problem of achieving consensus in multi-agent systems in the case
where the dependency digraph γ has no spanning in-tree. we consider the regularization protocol that
amounts to the addition of a dummy agent (hub) uniformly connected to the agents. the presence
of such a hub guarantees the achievement of an asymptotic consensus. for the “evaporation” of the
dummy agent, the strength of its influences on the other agents vanishes, which leads to the concept
of latent consensus. we obtain a closed-form expression for the consensus when the connections of
the hub are symmetric; in this case, the impact of the hub upon the consensus remains fixed. on
the other hand, if the hub is essentially influenced by the agents, whereas its influence on them tends
to zero, then the consensus is expressed by the scalar product of the vector of column means of the
laplacian eigenprojection of γ and the initial state vector of the system. another protocol, which
assumes the presence of vanishingly weak uniform background links between the agents, leads to the
same latent consensus.
keywords: consensus, multi-agent system, decentralized control, regularization, eigenprojection, degroot’s iterative pooling, pagerank, laplacian matrix of a digraph.",3
"abstract)
alceste scalas",6
"abstract. following the model introduced by aguech, lasmar and mahmoud [probab. engrg. inform. sci. 21 (2007) 133–141], the weighted depth
of a node in a labelled rooted tree is the sum of all labels on the path connecting the node to the root. we analyze weighted depths of nodes with given
labels, the last inserted node, nodes ordered as visited by the depth first search
process, the weighted path length and the weighted wiener index in a random
binary search tree. we establish three regimes of nodes depending on whether
the second order behaviour of their weighted depths follows from fluctuations
of the keys on the path, the depth of the nodes, or both. finally, we investigate
a random distribution function on the unit interval arising as scaling limit for
weighted depths of nodes with at most one child.",8
"abstract
in this letter, we study multiuser communication systems enabled by an unmanned aerial vehicle
(uav) that is equipped with a directional antenna of adjustable beamwidth. we propose a fly-hoverand-communicate protocol where the ground terminals (gts) are partitioned into disjoint clusters
that are sequentially served by the uav as it hovers above the corresponding cluster centers. we
jointly optimize the uav’s flying altitude and antenna beamwidth for throughput optimization in
three fundamental multiuser communication models, namely uav-enabled downlink multicasting (mc),
downlink broadcasting (bc), and uplink multiple access (mac). our results show that the optimal uav
altitude and antenna beamwidth critically depend on the communication model considered.",7
"abstract— s trong digitalization and shifting from unidirectional
to bidirectional topology have transformed the electrical grid
into a cyber-physical energy system, i.e. smart grid, with strong
interdependency among various domains. it is mandatory to
develop a comprehensive and holistic validation approach for
such large scale system. however, a single research
infrastructure may not have sufficient expertise and equipment
for such test, without huge or eventually unfeasible investment.
in this paper, we propose another adequate approach:
connecting existing and established infrastructures with
complementary specialization and facilities into a crossinfrastructure holistic experiment. the proposition enables
testing of cpes assessment research in near real -world scenario
without significant investment while efficiently exploiting the
existing infrastructures. hybrid cloud based architecture is
considered as the support for such setup and the design of crossinfrastructure experiment is also covered.
index terms— cyber-physical energy s ystem, interoperability,
co-simulation, hardware-in-the-loop, cross-infrastructure.",3
"abstract
we present a novel approach to finding the k-sink on dynamic path networks with general edge
capacities. our first algorithm runs in o(n log n + k 2 log4 n) time, where n is the number of
vertices on the given path, and our second algorithm runs in o(n log3 n) time. together, they
improve upon the previously most efficient o(kn log2 n) time algorithm due to arumugam et
al. [1] for all values of k. in the case where all the edges have the same capacity, we again present
two algorithms that run in o(n + k 2 log2 n) time and o(n log n) time, respectively, and they
together improve upon the previously best o(kn) time algorithm due to higashikawa et al. [10]
for all values of k.
1998 acm subject classification f.2.2
keywords and phrases facility location, k-sink, parametric search, dynamic path network
digital object identifier 10.4230/lipics...",8
"abstract—this paper describes the design and development of
a decentralized firewall system powered by a novel malware detection engine. the firewall is built using blockchain technology.
the detection engine aims to classify portable executable (pe)
files as malicious or benign. file classification is carried out using
a deep belief neural network (dbn) as the detection engine. our
approach is to model the files as grayscale images and use the
dbn to classify those images into the aforementioned two classes.
an extensive data set of 10,000 files is used to train the dbn.
validation is carried out using 4,000 files previously unexposed
to the network. the final result of whether to allow or block a
file is obtained by arriving at a proof of work based consensus
in the blockchain network.
index terms—malware, blockchain consensus, portable executable, deep belief network, restricted boltzmann machine",2
"abstract
in this paper, we present a new estimator of the mean of a random vector,
computed by applying some threshold function to the norm. non asymptotic
dimension-free almost sub-gaussian bounds are proved under weak moment assumptions, using pac-bayesian inequalities.",10
"abstractions that combine spatial reasoning in the tabular structure with relational reasoning. spatial
reasoning allows the dsl programs to follow structured paths in the 2-dimensional table whereas relational
reasoning allows them to constrain those paths with predicates over cell values.
as shown schematically in fig. 2, the high-level structure of our synthesis algorithm is similar to prior
techniques that combine partitioning with unification (alur et al. 2015; gulwani 2011; yaghmazadeh et al. 2016) .
specifically, partitioning is used to classify the input-output examples into a small number of groups, each of
which can be represented using a conditional-free program in the dsl. in contrast, the goal of unification is
1 http://stackoverflow.com/questions/30952426/substract-last-cell-in-row-from-first-cell-with-number",6
"abstract
we show that, for the purpose of pricing swaptions, the swap rate and the corresponding forward rates can be
considered lognormal under a single martingale measure. swaptions can then be priced as options on a basket of
lognormal assets and an approximation formula is derived for such options. this formula is centered around a blackscholes price with an appropriate volatility, plus a correction term that can be interpreted as the expected tracking
error. the calibration problem can then be solved very efficiently using semidefinite programming.
keywords: semidefinite programming, libor market model, calibration, basket options.",5
"abstract
this paper attacks the following problem. we are given a large number n of
rectangles in the plane, each with horizontal and vertical sides, and also a number
r < n . the given list of n rectangles may contain duplicates. the problem is
to find r of these rectangles, such that, if they are discarded, then the intersection
of the remaining (n − r) rectangles has an intersection with as large an area as
possible. we will find an upper bound, depending only on n and r, and not on the
particular data presented, for the number of steps needed to run the algorithm on
(a mathematical model of) a computer. in fact our algorithm is able to determine,
for each s ≤ r, s rectangles from the given list of n rectangles, such that the
remaining (n − s) rectangles have as large an area as possible, and this takes
hardly any more time than taking care only of the case s = r. our algorithm
extends to d-rectangles—analogues of rectangles, but in dimension d instead of in
dimension 2. our method is to exhaustively examine all possible intersections—this

is much faster than it sounds, because we do not need to examine all ns subsets in
order to find all possible intersection rectangles. for an extreme example, suppose
the rectangles are nested, e.g., concentric squares of distinct sizes, then the only
intersections examined are the smallest s + 1 rectangles.",8
"abstracting with credit is permitted. to copy otherwise, or
republish, to post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. request permissions from [email protected].
gecco ’17, berlin, germany
© 2017 copyright held by the owner/author(s). publication rights licensed to acm.
978-1-4503-4920-8/17/07. . . $15.00
doi: http://dx.doi.org/10.1145/3071178.3071292",9
"abstract: an accelerator is a specialized integrated circuit designed to
perform specific computations faster than if those computations were
performed by general purpose processor, cpu or gpu. state-of-the-art
deep neural networks (dnns) are becoming progressive larger and
different applications require different number of layers, types of layer,
number of nodes per layer and different interconnect between consecutive
layers. a dnn learning and inference accelerator thus need to be
reconfigurable against those many requirements for a dnn. it needs to
reconfigure to maximum use its on die resources, and if necessary, need to
be able to connect with other similar dies or packages for larger and higher
performing dnns. a field-programmable dnn learning & inference
accelerator (fprog-dnn) using hybrid systolic/non-systolic techniques,
distributed information/control and deep pipelined structure is proposed
and its microarchitecture and operation presented here. 400mm2 die sizes
are planned for 100 thousand workers (fp64) that can extend to multipledie packages. reconfigurability allows for different number of workers to
be assigned to different layers as a function of the relative difference in
computational load among layers. the computational delay per layer is
made roughly the same along pipelined accelerator structure. vgg-16 and
recently proposed inception modules are used for showing the flexibility
of the fprog-dnn’s reconfigurability. special structures were also added
for a combination of convolution layer, map coincidence and feedback for
state of the art learning with small set of examples, which is the focus of a
companion paper by the author (franca-neto, 2018). the flexibility of the
accelerator described can be appreciated by the fact that it is able to
reconfigure from (1) allocating all a dnn computations to a single worker
in one extreme of sub-optimal performance to (2) optimally allocating
workers per layer according to computational load in each dnn layer to
be realized. due the pipelined nature of the dnn realized in the fprogdnn, the speed-up provided by fprog-dnn varies from 2x to 50x to
gpus or tpus at equivalent number of workers. this speed-up is
consequence of hiding the delay in transporting activation outputs from
one layer to the next in a dnn behind the computations in the receiving
layer. this fprog-dnn concept has been simulated and validated at
behavioral/functional level.",9
"abstract
runtime adaptation (ra) is a technique prevalent to long-running, highly available software systems, whereby system characteristics are altered dynamically in response to
runtime events, while causing limited disruption to the execution of the system. actorbased, concurrent systems are often used to build such long-running, reactive systems
which require suffering from limited downtime. this makes runtime adaptation an appealing technique for mitigating erroneous behaviour in actor-systems, since mitigation
is carried out while the system executes.
in this dissertation we focus on providing effective adaptations that can be localised
and applied to specific concurrent actors, thereby only causing a temporary disruption to
the parts of the system requiring mitigation, while leaving the rest of the system intact.
we make the application of localised adaptations efficient through incremental synchronisation, whereby the specifier can strategically suspend specific parts of the system,
whenever this is strictly required for ensuring that adaptations are effectively applied.
we also study static analysis techniques to determine whether the specified incremental
synchronisation is in some sense adequate for local adaptations to be carried out.
we thus identify a number of generic adaptations that can be applied to any actor system, regardless of its design and the code that it executes. we implement the identified
adaptations as an extension of an existing runtime verification tool for actor-systems,
thereby creating a ra framework for monitoring and mitigating actor systems. in parallel to our implementation we also develop a formal model of our ra framework that
further serves to guide our implementation. this model also enables us to better understand the subtle errors that erroneously specified adaptation scripts may introduce.
we thus develop a static type system for detecting and rejecting erroneous adaptation
scripts prior to deployment, thereby providing the specifier with assistance for writing
valid scripts. although the static typesystem analyses scripts with respect to certain assumptions, we do not assume that the monitored system abides by these assumptions.
we therefore augment our ra framework with dynamic checks for halting monitoring
whenever the system deviates from our assumption. based on this dynamically checked
model of our ra framework, we prove type soundness for our static type system.
as a result of this dissertation we thus implement and formalise a novel a runtime
adaptation framework for actor systems as extension to an existing runtime verification tool. furthermore, exploring the mixture of static and dynamic typechecking, in the
context of runtime verification, for the purpose of adaptation is also quite novel. this exploration lead to the developing a novel type analysis technique for detecting erroneously
specified runtime adaptation scripts.",6
"abstract. the big data phenomenon has spawned large-scale linear
programming problems. in many cases, these problems are non-stationary. in this paper, we describe a new scalable algorithm called nslp
for solving high-dimensional, non-stationary linear programming problems on modern cluster computing systems. the algorithm consists of
two phases: quest and targeting. the quest phase calculates a solution
of the system of inequalities defining the constraint system of the linear programming problem under the condition of dynamic changes in
input data. to this end, the apparatus of fejer mappings is used. the
targeting phase forms a special system of points having the shape of an
n-dimensional axisymmetric cross. the cross moves in the n-dimensional
space in such a way that the solution of the linear programming problem
is located all the time in an ε-vicinity of the central point of the cross.
keywords: nslp algorithm · non-stationary linear programming problem · large-scale linear programming · fejer mapping.",8
"abstract
physics is studying a system based on the information available about it. there are
two approaches to physics, deterministic and the nondeterministic. the deterministic
approaches assume everything is known about the system. since everything about the
system is almost never known, nondeterministic approaches such as statistical physics and
quantum physics are of high importance. after a tour through informational interpretation
of quantum physics and required mathematical tools, we go into the notion of time.
we address the problem of time and a cluster of problems around measurement in
quantum mechanics. i present a new approach to interpreting time in terms of information
changes. time will emerge from the non-commutativity of quantum theory. in the third
section we review information thermodynamics and later derive some relations between
our notion of time and the thermal time as in [27].",7
"abstract interpretation). on the other hand, we find source to source transformation
techniques such as partial evaluation [mogensen and sestoft 1997] and more general
techniques based on the unfold and fold or on the replacement operation.
unfold/fold transformation techniques were first introduced for functional programs in [burstall and darlington 1977], and then adapted to logic programming
(lp) both for program synthesis [clark and sickel 1977; hogger 1981], and for program specialization and optimization [komorowski 1982]. tamaki and sato [1984]
proposed a general framework for the unfold/fold transformation of logic programs,
which has remained in the years the main historical reference of the field, and has
later been extended to constraint logic programming (clp) in [maher 1993; etalle
and gabbrielli 1996; bensaou and guessarian 1998] (for an overview of the subject,
see the survey by pettorossi and proietti [1994]). as shown by a number of applications, these techniques provide a powerful methodology for the development and
optimization of large programs, and can be regarded as the basic transformations
techniques, which might be further adapted to be used for partial evaluation.
despite a large literature in the field of declarative sequential languages, unfold/fold transformation sequences have hardly been applied to concurrent languages. notable exceptions are the papers of ueda and fukurawa [1988], sahlin
[1995], and of de francesco and santone [1996] (their relations with this paper are
discussed in section 7). also when considering partial evaluation we find only very
few recent attempts [hosoya et al. 1996; marinescu and goldberg 1997; gengler
and martel 1997] to apply it in the field of concurrent languages.
this situation is partially due to the fact that the non-determinism and the
synchronization mechanisms present in concurrent languages substantially complicate their semantics, thus complicating also the definition of correct transformation
systems. nevertheless these transformation techniques can be very useful also for
concurrent languages, since they allow further optimizations related to the simplification of synchronization and communication mechanisms.
in this paper we introduce a transformation system for concurrent constraint
programming (ccp) [saraswat 1989; saraswat and rinard 1990; saraswat et al.
1991]. this paradigm derives from replacing the store-as-valuation concept of von
neumann computing by the store-as-constraint model: its computational model
is based on a global store, which consists of the conjunction of all constraints established until that moment and expresses some partial information on the values
of the variables involved in the computation. concurrent processes synchronize
and communicate asynchronously via the store by using elementary actions (ask
and tell) which can be expressed in a logical form (essentially implication and conjunction [boer et al. 1997]). on one hand, ccp enjoys a clean logical semantics,
avoiding many of the complications arising in the concurrent imperative setting;
as argued in the position paper [etalle and gabbrieli 1998] this aspect is of great
help in the development of effective transformation tools. on the other hand,
acm transactions on programming languages and systems, vol. tbd, no. tdb, month year.",6
"abstract
this paper is devoted to the buckling analysis of thin composite plates under straight single-walled
carbon nanotubes reinforcement with uniform distribution and random orientations. in order to
develop the fundamental equations, the b3-spline finite strip method along with the classical plate
theory (cpt) is employed and the total potential energy is minimized which leads to an eigenvalue
problem. for deriving the effective modulus of thin composite plates reinforced with carbon
nanotubes, the mori-tanaka method is used in which each straight carbon nanotube is modeled as a
fiber with transversely isotropic elastic properties. the numerical results including the critical
buckling loads for rectangular thin composite plates reinforced by carbon nanotubes with various
boundary conditions and different volume fractions of nanotubes are provided and the positive effect
of using carbon nanotubes reinforcement in buckling of thin plates is illustrated.
keywords: mechanical buckling; thin composite plates; single-walled carbon nanotubes, b3-spline
finite strip method",5
"abstract
this paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. our
approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use
of external sensors or human supervision. the robot learns to focus its attention on groups of people from its own
audio-visual experiences, and independently of the number of people in the environment, their position and physical
appearance. in particular, we use recurrent neural networks and q-learning to find an optimal action-selection policy,
and we pretrain on a synthetic environment that simulates sound sources and moving participants to avoid the need of
interacting with people for hours. our experimental evaluation suggests that the proposed method is robust in terms
of parameters configuration (i.e. the selection of the parameter values has not a decisive impact on the performance).
the best results are obtained when audio and video information are jointly used, and when a late fusion strategy is
employed (i.e. when both sources of information are separately processed and then fused). successful experiments on
a real environment with the nao robot indicate that our framework is a step forward towards the autonomous learning
of a perceivable and socially acceptable gaze behavior.
keywords: reinforcement learning, human-robot interaction, robot gaze control, neural networks, transfer
learning, multimodal data fusion",9
"abstract—in this paper, by way of three examples – a fourth
order low pass active rc filter, a rudimentary bjt amplifier,
and an lc ladder – we show, how the algebraic capabilities of
modern computer algebra systems can, or in the last example,
might be brought to use in the task of designing analog circuits.
acm classification: i.1 symbolic and algebraic manipulation; j.2
physical sciences and engineering; g.2.2 graph theory
mathematics subject classification (2000): primary 94c05; secondary 94c15, 68w30, 13p10, 05c85
keywords: analog circuits, filter design, polynomial equations, computer algebra, delta-wye transformation.",5
"abstract
we define and study partial correlation graphs (pcgs) with variables in a general
hilbert space and their connections to generalized neighborhood regression, without making any distributional assumptions. using operator-theoretic arguments, and especially
the properties of projection operators on hilbert spaces, we show that these neighborhood
regressions have the algebraic structure of a lattice, which we call a neighborhood lattice.
this lattice property significantly reduces the number of conditions one has to check when
testing all partial correlation relations among a collection of variables. in addition, we
generalize the notion of perfectness in graphical models for a general pcg to this hilbert
space setting, and establish that almost all gram matrices are perfect. under this perfectness assumption, we show how these neighborhood lattices may be “graphically” computed
using separation properties of pcgs. we also discuss extensions of these ideas to directed
models, which present unique challenges compared to their undirected counterparts. our
results have implications for multivariate statistical learning in general, including structural
equation models, subspace clustering, and dimension reduction. for example, we discuss
how to compute neighborhood lattices efficiently and furthermore how they can be used to
reduce the sample complexity of learning directed acyclic graphs. our work demonstrates
that this abstract viewpoint via projection operators significantly simplifies existing ideas
and arguments from the graphical modeling literature, and furthermore can be used to
extend these ideas to more general nonparametric settings.",10
"abstract
low probability of detection (or covert) communication refers to the scenario where information
must be sent reliably to a receiver, but with low probability of detection by an adversary. recent
works on the fundamental limits of this communication problem have established achievability and
converse bounds that are asymptotic in the block length of the code. this paper uses gallager’s random
coding bound to derive a new achievability bound that is applicable to low probability of detection
communication in the finite block length regime. further insights are unveiled that are otherwise hidden
in previous asymptotic analyses.",7
"abstract. quantitative information flow (qif) is concerned with measuring how much of a secret is leaked to an adversary who observes the
result of a computation that uses it. prior work has shown that qif techniques based on abstract interpretation with probabilistic polyhedra can be
used to analyze the worst-case leakage of a query, on-line, to determine
whether that query can be safely answered. while this approach can
provide precise estimates, it does not scale well. this paper shows how
to solve the scalability problem by augmenting the baseline technique
with sampling and symbolic execution. we prove that our approach never
underestimates a query’s leakage (it is sound), and detailed experimental
results show that we can match the precision of the baseline technique
but with orders of magnitude better performance.",6
"abstract
we address the induced matching enumeration problem. an edge set m is an induced
matching of a graph g = (v, e). the enumeration of matchings are widely studied in literature, but the induced matching has not been paid much attention. a straightforward
algorithm takes o (|v |) time for each solution, that is coming from the time to generate a
subproblem. we investigated local structures that enables us to generate subproblems in
short time, and proved that the time complexity will be o (1) if the input graph is c4 -free.
a c4 -free graph is a graph any whose subgraph is not a cycle of length four. finally, we
show the fixed parameter tractability of counting induced matchings for graphs with bounded
tree-width and planar graphs.",8
"abstract—evolutionary deep intelligence synthesizes highly
efficient deep neural networks architectures over successive
generations. inspired by the nature versus nurture debate, we
propose a study to examine the role of external factors on
the network synthesis process by varying the availability of
simulated environmental resources. experimental results were
obtained for networks synthesized via asexual evolutionary synthesis (1-parent) and sexual evolutionary synthesis (2-parent,
3-parent, and 5-parent) using a 10% subset of the mnist
dataset. results show that a lower environmental factor model
resulted in a more gradual loss in performance accuracy and
decrease in storage size. this potentially allows significantly
reduced storage size with minimal to no drop in performance
accuracy, and the best networks were synthesized using the
lowest environmental factor models.
keywords-deep neural networks; deep learning; evolutionary deep intelligence; evolutionary synthesis; environmental
resource models",9
"abstract. the mechanical response, serviceability, and load bearing capacity of materials and
structural components can be adversely affected due to external stimuli, which include exposure to
a corrosive chemical species, high temperatures, temperature fluctuations (i.e., freezing-thawing),
cyclic mechanical loading, just to name a few. it is, therefore, of paramount importance in several
branches of engineering – ranging from aerospace engineering, civil engineering to biomedical engineering – to have a fundamental understanding of degradation of materials, as the materials in
these applications are often subjected to adverse environments. as a result of recent advancements
in material science, new materials like fiber-reinforced polymers and multi-functional materials that
exhibit high ductility have been developed and widely used; for example, as infrastructural materials or in medical devices (e.g., stents). the traditional small-strain approaches of modeling these
materials will not be adequate. in this paper, we study degradation of materials due to an exposure to chemical species and temperature under large-strain and large-deformations. in the first
part of our research work, we present a consistent mathematical model with firm thermodynamic
underpinning. we then obtain semi-analytical solutions of several canonical problems to illustrate
the nature of the quasi-static and unsteady behaviors of degrading hyperelastic solids.",5
"abstract
the atom graph of a graph is the graph whose vertices are the atoms obtained
by clique minimal separator decomposition of this graph, and whose edges are the
edges of all possible atom trees of this graph. we provide two efficient algorithms for
computing this atom graph, with a complexity in o(min(nα log n, nm, n(n+m)) time,
which is no more than the complexity of computing the atoms in the general case. we
extend our results to α-acyclic hypergraphs. we introduce the notion of union join
graph, which is the union of all possible join trees; we apply our algorithms for atom
graphs to efficiently compute union join graphs.
keywords: clique separator decomposition, atom tree, atom graph, clique tree,
clique graph, α-acyclic hypergraph.",8
"abstract--new generations of neutron scattering sources and instrumentation are providing challenges in
data handling for user software. time-of-flight instruments used at pulsed sources typically produce
hundreds or thousands of channels of data for each detector segment. new instruments are being designed
with thousands to hundreds of thousands of detector segments. high intensity neutron sources make
possible parametric studies and texture studies which further increase data handling requirements. the
integrated spectral analysis workbench (isaw) software developed at argonne handles large numbers of
spectra simultaneously while providing operations to reduce, sort, combine and export the data. it includes
viewers to inspect the data in detail in real time. isaw uses existing software components and packages
where feasible and takes advantage of the excellent support for user interface design and network
communication in java. the included scripting language simplifies repetitive operations for analyzing
many files related to a given experiment. recent additions to isaw include a contour view, a time-slice
table view, routines for finding and fitting peaks in data, and support for data from other facilities using the
nexus format. in this paper, i give an overview of features and planned improvements of isaw. details
of some of the improvements are covered in other presentations at this conference.",5
"abstract
owing to data-intensive large-scale applications, distributed computation systems have gained
significant recent interest, due to their ability of running such tasks over a large number of commodity nodes in a time efficient manner. one of the major bottlenecks that adversely impacts
the time efficiency is the computational heterogeneity of distributed nodes, often limiting the
task completion time due to the slowest worker. in this paper, we first present a lower bound
on the expected computation time based on the work-conservation principle. we then present
our approach of work exchange to combat the latency problem, in which faster workers can be
reassigned additional leftover computations that were originally assigned to slower workers. we
present two variations of the work exchange approach: a) when the computational heterogeneity
knowledge is known a priori; and b) when heterogeneity is unknown and is estimated in an online
manner to assign tasks to distributed workers. as a baseline, we also present and analyze the use
of an optimized maximum distance separable (mds) coded distributed computation scheme
over heterogeneous nodes. simulation results also compare the proposed approach of work exchange, the baseline mds coded scheme and the lower bound obtained via work-conservation
principle. we show that the work exchange scheme achieves time for computation which is very
close to the lower bound with limited coordination and communication overhead even when the
knowledge about heterogeneity levels is not available.",7
"abstract
we propose the kl-ucb++ algorithm for regret minimization in stochastic bandit models with
exponential families of distributions. we prove that it is simultaneously asymptotically optimal
(in the sense of lai and robbins’ lower bound) and minimax optimal. this is the first algorithm
proved to enjoy these two properties at the same time. this work thus merges two different lines of
research with simple and clear proofs.
keywords: stochastic multi-armed bandits, regret analysis, upper confidence bound (ucb), minimax optimality, asymptotic optimality.",10
"abstract
the aim of this paper is to make links between the specker compactifications of a locally compact
group and its convergence actions on compacta. if g is a convergence group acting on a compactum t
we prove that the closure of g in the attractor sum g⊔t [14] is a quasi-specker compactification of g.
together with a theorem due to h. abels [2], this implies that for any convergence action such that the
limit set λ is totally disconnected there exists a surjective g-equivariant continuous map ends g → λ.
conversely, when the group is compactly generated we show that any specker compactification gives
rise to a convergence action. given two minimal convergence actions of a compactly generated group
on totally disconnected compacta m and n, we can then prove the existence of a minimal compactum
t admitting a convergence action of g and g-equivariant continuous maps t → m and t → n. we
end the paper giving an interpretation of the set of ends of a compactly generated group as the
completion by a “visibility” uniformity [14].",4
"abstract—the discrete-time wiener phase noise channel with
an integrate-and-dump multi-sample receiver is studied. a novel
outer bound on the capacity with an average input power
constraint is derived as a function of the oversampling factor.
this outer bound yields the degrees of freedom for the scenario
in which the oversampling factor grows with the transmit power
p as p α . the result shows, perhaps surprisingly, that the largest
pre-log that can be attained with phase modulation at high signalto-noise ratio is at most 1/4.",7
"abstract. a version of the krull intersection theorem states that for
noetherian domains the krull intersection ki(i) of every proper ideal i
is trivial; that is
\
i n = {0}.
ki(i) :=
n=1",0
"abstract—this paper examines the dependence of performance measures on network size with a focus on large networks. we develop a framework where it is shown that poor
performance can be attributed to dimension–dependent scaling
of network energy. drawing from previous work, we show
that such scaling in undirected networks can be attributed
to the proximity of network spectrum to unity, or distance
to instability. however, such a simple characterization does
not exist for the case of directed networks. in fact, we show
that any arbitrary scaling can be achieved for a fixed distance
to instability. the results here demonstrate that it is always
favorable, in terms of performance scaling, to balance a large
network. this justifies a popular observation that undirected
networks generally perform better. we show the relationship
between network energy and performance measures, such as
output shortfall probability or centrality, that are used in
economic or financial networks. the strong connection between
them explains why a network topology behaves qualitatively
similar under different performance measures. our results
suggest that there is a need to study performance degradation
in large networks that focus on topological dependence and
transcend application specific analyses.",3
"abstraction of a trace, as defined in definition 9. in
fact, an a-trace can be obtained from a given trace by removing its actions and keeping only the number
of jumps.
definition 20 ((τ ,ε )-closeness [3]) consider a test duration t ∈ r+ , a maximum number of jumps j ∈
n, and τ , ε > 0; then two a-traces y1 and y2 are said to be (τ ,ε )-close, denoted by y1 ≈(τ ,ε ) y2 , if
1. for all (t, i) ∈ dom(y1 ) with t ≤ t, i ≤ j, there exists (s, j) ∈ dom(y2 ) such that |t − s| ≤ τ and
ky1 (t, i) − y2 (s, j)k ≤ ε , and
2. for all (t, i) ∈ dom(y2 ) with t ≤ t, i ≤ j, there exists (s, j) ∈ dom(y1 ) such that |t − s| ≤ τ and
ky2 (t, i) − y1 (s, j)k ≤ ε .
definition 21 (conformance relation [3]) consider two hybrid i/o automata h1 and h2 . given a
test duration t ∈ r+ , a maximum number of jumps j ∈ n, and τ , ε > 0, h2 conforms to h1 , denoted by
h2 ≈(τ ,ε ) h1 , if and only if for all solution pairs (u, y1 ) of h1 , there exists a solution pair (u, y2 ) of h2
such that the corresponding output a-traces y1 and y2 are (τ ,ε )-close.
figure 2 shows two a-traces y1 and y2 where y1 ≈(0.8,1) y2 . as mentioned, the notion of hconf (≈(τ ,ε ) )
is not sensitive to the existence/absence of actions. as a result, if the hioa generating y1 produces
output action off at t = 2 and the other hioa produces no action, the two a-traces are still regarded as
conforming according to hconf.
1 in",3
"abstract function, written in acl2, about which we can readily prove interesting correctness
properties. in the extensive code documentation for codewalker, the system is described as follows [18]:
two main facilities are provided by codewalker: the abstraction of a piece of code into an
acl2 “semantic function” that returns the same machine state, and the “projection” of such
a function into another function that computes the final value of a given state component
using only the values of the relevant initial state components.",6
"abstract: in this paper we introduce a novel approach for an important
problem of break detection. specifically, we are interested in detection of
an abrupt change in the covariance structure of a high-dimensional random
process – a problem, which has applications in many areas e.g., neuroimaging and finance. the developed approach is essentially a testing procedure
involving a choice of a critical level. to that end a non-standard bootstrap
scheme is proposed and theoretically justified under mild assumptions. theoretical study features a result providing guaranties for break detection. all
the theoretical results are established in a high-dimensional setting (dimensionality p  n). multiscale nature of the approach allows for a trade-off
between sensitivity of break detection and localization. the approach can
be naturally employed in an on-line setting. simulation study demonstrates
that the approach matches the nominal level of false alarm probability and
exhibits high power, outperforming a recent approach.
msc 2010 subject classifications: primary 62m10, 62h15; secondary
91b84, 62p10.
keywords and phrases: multiscale, bootstrap, structural change, critical
value, precision matrix.",10
"abstract: in 1960 rényi asked for the number of random queries necessary to recover a hidden bijective labeling of
n distinct objects. in each query one selects a random subset of labels and asks, what is the set of objects that have
these labels? we consider here an asymmetric version of the problem in which in every query an object is chosen with
probability p > 1/2 and we ignore “inconclusive” queries. we study the number of queries needed to recover the
labeling in its entirety (the height), to recover at least one single element (the fillup level), and to recover a randomly
chosen element (the typical depth). this problem exhibits several remarkable behaviors: the depth dn converges
in probability but not almost surely and while it satisfies the central limit theorem its local limit theorem doesn’t
hold; the height hn and the fillup level fn exhibit phase transitions with respect to p in the second term. to obtain
these results, we take a unified approach via the analysis of the external profile defined at level k as the number of
elements recovered by the kth query. we first establish new precise asymptotic results for the average and variance,
and a central limit law, for the external profile in the regime where it grows polynomially with n. we then extend the
external profile results to the boundaries of the central region, leading to the solution of our problem for the height
and fillup level. as a bonus, our analysis implies novel results for random patricia tries, as it turns out that the
problem is probabilistically equivalent to the analysis of the height, fillup level, typical depth, and external profile of
a patricia trie built from n independent binary sequences generated by a biased(p) memoryless source.
keywords: rényi problem, patricia trie, profile, height, fillup level, analytic combinatorics, mellin transform,
depoissonization",8
"abstract
2",4
"abstract—in this paper, the problem of resource management
is studied for a network of wireless virtual reality (vr) users
communicating using an unmanned aerial vehicle (uav)-enabled
lte-u network. in the studied model, the uavs act as vr control
centers that collect tracking information from the vr users over the
wireless uplink and, then, send the constructed vr images to the
vr users over an lte-u downlink. therefore, resource allocation
in such a uav-enabled lte-u network must jointly consider the
uplink and downlink links over both licensed and unlicensed bands.
in such a vr setting, the uavs can dynamically adjust the image
quality and format of each vr image to change the data size of each
vr image, then meet the delay requirement. therefore, resource
allocation must also take into account the image quality and format.
this vr-centric resource allocation problem is formulated as a
noncooperative game that enables a joint allocation of licensed and
unlicensed spectrum bands, as well as a dynamic adaptation of vr
image quality and format. to solve this game, a learning algorithm
based on the machine learning tools of echo state networks (esns)
with leaky integrator neurons is proposed. unlike conventional esn
based learning algorithms that are suitable for discrete-time systems,
the proposed algorithm can dynamically adjust the update speed
of the esn’s state and, hence, it can enable the uavs to learn the
continuous dynamics of their associated vr users. simulation results
show that the proposed algorithm achieves up to 14% and 27.1%
gains in terms of total vr qoe for all users compared to q-learning
using lte-u and q-learning using lte.",7
"abstract—in this work, spatial diversity techniques in the area
of multiple-input multiple-output (mimo) diffusion-based molecular communications (dbmc) are investigated. for transmitterside spatial coding, alamouti-type coding and repetition mimo
coding are proposed and analyzed. at the receiver-side, selection
diversity, equal-gain combining, and maximum-ratio combining
are studied as combining strategies. throughout the numerical
analysis, a symmetrical 2×2 mimo-dbmc system is assumed.
furthermore, a trained artificial neural network is utilized to
acquire the channel impulse responses. the numerical analysis
demonstrates that it is possible to achieve a diversity gain in
molecular communications. in addition, it is shown that for
mimo-dbmc systems repetition mimo coding is superior to
alamouti-type coding.
index terms—molecular communication via diffusion,
multiple-input multiple-output, spatial diversity, channel
modeling, artificial neural network.",7
"abstract
abstract
the universal object oriented languages made programming more simple and efficient. in the article is considered possibilities of using similar
methods in computer algebra. a clear and powerful universal language
is useful if particular problem was not implemented in standard software
packages like reduce, mathematica, etc. and if the using of internal programming languages of the packages looks not very efficient.
functional languages like lisp had some advantages and traditions
for algebraic and symbolic manipulations. functional and object oriented
programming are not incompatible ones. an extension of the model of an
object for manipulation with pure functions and algebraic expressions is
considered.",6
"abstract—wireless backhaul communication has been recently
realized with large antennas operating in the millimeter wave
(mmwave) frequency band and implementing highly directional
beamforming. in this paper, we focus on the alignment problem of
narrow beams between fixed position network nodes in mmwave
backhaul systems that are subject to small displacements due
to wind flow or ground vibration. we consider nodes equipped
with antenna arrays that are capable of performing only analog
processing and communicate through wireless channels including
a line-of-sight component. aiming at minimizing the time needed
to achieve beam alignment, we present an efficient method that
capitalizes on the exchange of position information between the
nodes to design their beamforming and combining vectors. some
numerical results on the outage probability with the proposed
beam alignment method offer useful preliminary insights on the
impact of some system and operation parameters.",7
"abstract
protograph low-density-parity-check (ldpc) are considered to design near-capacity lowrate codes over the binary erasure channel (bec) and binary additive white gaussian noise
(biawgn) channel. for protographs with degree-one variable nodes and doubly-generalized
ldpc (dgldpc) codes, conditions are derived to ensure equality of bit-error threshold and
block-error threshold. using this condition, low-rate codes with block-error threshold close to
capacity are designed and shown to have better error rate performance than other existing codes.",7
"abstract
in this paper, a new texture descriptor named ""fractional local neighborhood intensity
pattern"" (flnip) has been proposed for content based image retrieval (cbir). it is an
extension of the local neighborhood intensity pattern (lnip)[1]. flnip calculates the
relative intensity difference between a particular pixel and the center pixel of a 3×3 window
by considering the relationship with adjacent neighbors. in this work, the fractional change in
the local neighborhood involving the adjacent neighbors has been calculated first with respect
to one of the eight neighbors of the center pixel of a 3×3 window. next, the fractional change
has been calculated with respect to the center itself. the two values of fractional change are
next compared to generate a binary bit pattern. both sign and magnitude information are
encoded in a single descriptor as it deals with the relative change in magnitude in the adjacent
neighborhood i.e., the comparison of the fractional change. the descriptor is applied on four
multi-resolution images- one being the raw image and the other three being filtered gaussian
images obtained by applying gaussian filters of different standard deviations on the raw
image to signify the importance of exploring texture information at different resolutions in an
image. the four sets of distances obtained between the query and the target image are then
combined with a genetic algorithm based approach to improve the retrieval performance by
minimizing the distance between similar class images. the performance of the method has
been tested for image retrieval on four databases, including texture image databases such as
brodatz texture image database and salzburg texture database as well as a biomedical image
database namely the oasis database and one face database - at&t face database. the
precision and recall values observed on these databases have been compared with recent
state-of-art local patterns. the proposed method has shown a significant improvement over
many other existing methods.",1
"abstract
a copula of continuous random variables x and y is called an implicit
dependence copula if there exist functions α and β such that α(x) =
β(y ) almost surely, which is equivalent to c being factorizable as the
∗-product of a left invertible copula and a right invertible copula. every
implicit dependence copula is supported on the graph of f (x) = g(y) for
some measure-preserving functions f and g but the converse is not true
in general.
we obtain a characterization of copulas with implicit dependence supports in terms of the non-atomicity of two newly defined associated σalgebras. as an application, we give a broad sufficient condition under
which a self-similar copula has an implicit dependence support. under certain extra conditions, we explicitly compute the left invertible and right
invertible factors of the self-similar copula.",10
"abstract. we compute the last column of the lyubeznik table of k[x1 , ..., xn ]/icn .
mathematics subject classification (2010). 13d45; 13n10.
keywords. local cohomology, lyubeznik number, minimal vertex cover,
cohomological dimension, depth.",0
"abstract—we present an extended analytic formula for the
calculation of the temperature profile along a bondwire embedded
in a package. the resulting closed formula is built by coupling
the heat transfer equations of the bondwire and the surrounding
moulding compound by means of auxiliary variables that stem
from an ad-hoc linearisation and mediate the wire-mould thermal
interaction. the model, which corrects typical simplifications in
previously introduced analytic models, is also optimised against
carefully taken experimental samples representing fusing events
of bondwires within real packages.
index terms—bondwires, heat equation, mould compound,
thermal conduction, thermal radiation, heat kernel, green’s
function.",5
"abstract
in this paper, we introduce some reduction processes on graphs
which preserve the regularity of related edge ideals. as a consequence,
an alternative proof for the theorem of r. fröberg on linearity of
resolution of edge ideal of graphs is given.",0
"abstract
let g be a finite groupsand σ = {σi |i ∈ i} some partition of the set of all primes p, that is,
σ = {σi |i ∈ i}, where p = i∈i σi and σi ∩ σj = ∅ for all i 6= j. we say that g is σ-primary if g
is a σi -group for some i. a subgroup a of g is said to be: σ-subnormal in g if there is a subgroup
chain a = a0 ≤ a1 ≤ · · · ≤ an = g such that either ai−1 e ai or ai /(ai−1 )ai is σ-primary for
all i = 1, . . . , n; modular in g if the following conditions hold: (i) hx, a ∩ zi = hx, ai ∩ z for all
x ≤ g, z ≤ g such that x ≤ z, and (ii) ha, y ∩ zi = ha, y i ∩ z for all y ≤ g, z ≤ g such
that a ≤ z.
in this paper, a subgroup a of g is called σ-quasinormal in g if l is modular and σ-subnormal
in g.
we study σ-quasinormal subgroups of g. in particular, we prove that if a subgroup h of g
is σ-quasinormal in g, then for every chief factor h/k of g between h g and hg the semidirect
product (h/k) ⋊ (g/cg (h/k)) is σ-primary.",4
"abstractions and interesting challenges for program abstraction needed to
analyze actual systems.
i hope that the reader, and dave in particular, is entertained by the analysis of the immune system as
well as finding food for thought, challenges and maybe new insights.",5
"abstract—we consider channel estimation (ce) in narrowband
internet-of-things (nb-iot) systems. due to the fluctuations in
phase within receiver and transmitter oscillators, and also the
residual frequency offset (fo) caused by discontinuous receiving
of repetition coded transmit data-blocks, random phase noises
are presented in received signals. although the coherent-time
of fading channel can be assumed fairly long due to the lowmobility of nb-iot user-equipments (ues), such phase noises
have to be considered before combining the the channel estimates
over repetition copies to improve their accuracies. in this paper,
we derive a sequential minimum-mean-square-error (mmse)
channel estimator in the presence of random phase noise that
refines the ce sequentially with each received repetition copy,
which has a low-complexity and a small data storage. further, we
show through simulations that, the proposed sequential mmse
estimator improves the mean-square-error (mse) of ce by 1
db in the low signal-to-noise ratio (snr) regime, compared to a
traditional sequential mmse estimator that does not thoroughly
consider the impact of random phase noises.",7
"abstract
we prove that if a group generated by a bireversible mealy automaton contains an element of
infinite order, its growth blows up and is necessarily exponential. as a direct consequence, no infinite
virtually nilpotent group can be generated by a bireversible mealy automaton.",4
"abstract. concurrent ml’s events and event combinators facilitate modular concurrent programming with first-class synchronization abstractions. a standard
implementation of these abstractions relies on fairly complex manipulations of
first-class continuations in the underlying language. in this paper, we present a
lightweight implementation of these abstractions in concurrent haskell, a language that already provides first-order message passing. at the heart of our implementation is a new distributed synchronization protocol. in contrast with several
previous translations of event abstractions in concurrent languages, we remain
faithful to the standard semantics for events and event combinators; for example,
we retain the symmetry of choose for expressing selective communication.",6
"abstract
for some families of v−geometrically ergodic markov kernels indexed by a parameter, we study
the existence of a taylor expansion of the invariant distribution in the space of signed measures. our
approach, which completes some previous results for the perturbation analysis of markov chains, is
motivated by a problem in statistics: a control of the bias for the nonparametric kernel estimation in
some locally stationary markov models. we illustrate our results with a nonlinear autoregressive process
and a galton-watson process with immigration and time-varying parameters.",10
"abstract— in this paper we propose an algorithm for stabilizing circular formations of fixed-wing uavs with constant
speeds. the algorithm is based on the idea of tracking circles
with different radii in order to control the inter-vehicle phases
with respect to a target circumference. we prove that the
desired equilibrium is exponentially stable and thanks to the
guidance vector field that guides the vehicles, the algorithm
can be extended to other closed trajectories. one of the main
advantages of this approach is that the algorithm guarantees
the confinement of the team in a specific area, even when
communications or sensing among vehicles are lost. we show
the effectiveness of the algorithm with an actual formation flight
of three aircraft. the algorithm is ready to use for the general
public in the open-source paparazzi autopilot.",3
"abstract
we derive exact (ensemble-tight) error and erasure exponents for the asymmetric broadcast channel given a random superposition codebook. we consider forney’s optimal decoder for both messages and the message pair for the receiver that decodes both
messages. we prove that the optimal decoder designed to decode the pair of messages achieves the optimal trade-off between the
total and undetected exponents associated with the optimal decoder for the private message. we propose convex optimization-based
procedures to evaluate the exponents efficiently. numerical examples are presented to illustrate the results.
index terms
broadcast channels, degraded message sets, erasure decoding, undetected error, error exponents, superposition coding.",7
"abstract—this paper presents a comprehensive study of underwater visible light communications (uvlc), from channel
characterization, performance analysis, and effective transmission and reception methods. to this end, we first simulate
the fading-free impulse response (ffir) of uvlc channels
using monte carlo numerical procedure to take into account
the absorption and scattering effects; and then to characterize
turbulence effects, we multiply the aforementioned ffir by a
fading coefficient which for weak oceanic turbulence can be
modeled as a lognormal random variable (rv). based on this
general channel model, we analytically study the bit error rate
(ber) performance of uvlc systems with binary pulse position
modulation (bppm). in the next step, to mitigate turbulence
effects, we employ multiple transmitters and/or receivers, i.e., we
apply spatial diversity technique over uvlc links. closed-form
expressions for the system ber are provided, when an equal
gain combiner (egc) is employed at the receiver side, thanks to
the gauss-hermite quadrature formula as well as approximation
to the sum of lognormal rvs. we further apply saddle-point
approximation, an accurate photon-counting method, to evaluate
the system ber in the presence of shot noise. both laserbased collimated and light emitting diode (led)-based diffusive
links are investigated. additionally, in order to reduce the intersymbol interference (isi), introduced by the multiple-scattering
effect of uvlc channels on the propagating photons, we also
obtain the optimal multiple-symbol detection (msd) algorithm,
as well as the sub-optimal generalized msd (gmsd) algorithm.
our numerical analysis indicate good matches between the
analytical and photon-counting results implying the negligibility
of signal-dependent shot noise, and also between the analytical
results and numerical simulations confirming the accuracy of our
derived closed-form expressions for the system ber. besides,
our results show that spatial diversity significantly mitigates
fading impairments while (g)msd considerably alleviates isi
deterioration.
index terms—underwater visible light communications, ber
performance, lognormal turbulent channels, mimo, spatial diversity, photon-counting methods, (generalized) multiple-symbol
detection, collimated laser-based links, diffusive led-based links.",7
"abstract
topic models have been widely explored as probabilistic generative models of documents. traditional inference methods have sought closedform derivations for updating the models, however as the expressiveness of these models grows,
so does the difficulty of performing fast and
accurate inference over their parameters. this
paper presents alternative neural approaches to
topic modelling by providing parameterisable
distributions over topics which permit training
by backpropagation in the framework of neural variational inference. in addition, with the
help of a stick-breaking construction, we propose a recurrent network that is able to discover a notionally unbounded number of topics, analogous to bayesian non-parametric topic
models. experimental results on the mxm
song lyrics, 20newsgroups and reuters news
datasets demonstrate the effectiveness and efficiency of these neural topic models.",2
"abstract
the pancake puzzle is a classic optimization problem that has become a standard benchmark for heuristic search
algorithms. in this paper, we provide full proofs regarding the local search topology of the gap heuristic for the
pancake puzzle. first, we show that in any non-goal state in which there is no move that will decrease the number of
gaps, there is a move that will keep the number of gaps constant. we then classify any state in which the number of
gaps cannot be decreased in a single action into two groups: those requiring 2 actions to decrease the number of gaps,
and those which require 3 actions to decrease the number of gaps.",2
"abstract model with
n processes and checks its functional equivalence with a sequential implementation by executing the
model of the application. parallel data-flow analysis is a static analysis technique applied in [2]. the
work focuses on send-receive matching in mpi source code, which helps identify message leaks and
communication mismatch, by constructing a parallel control-flow graph by simple symbolic analysis
n. yoshida, w. vanderbauwhede (eds.): programming language
approaches to concurrency- and communication-centric
software 2013 (places’13)
eptcs 137, 2013, pp. 103–113, doi:10.4204/eptcs.137.9",6
"abstract
we introduce a new formulation of the hidden parameter markov decision process (hip-mdp), a framework for modeling families of related tasks using lowdimensional latent embeddings. our new framework correctly models the joint
uncertainty in the latent parameters and the state space. we also replace the original
gaussian process-based model with a bayesian neural network, enabling more
scalable inference. thus, we expand the scope of the hip-mdp to applications
with higher dimensions and more complex dynamics.",2
"abstract. we show that for every finite nonempty subset l of n≥2 there are a numerical monoid h
and a squarefree element a ∈ h whose set of lengths l(a) is equal to l.",0
"abstract—in a cell-free (cf) massive mimo architecture a very large number of distributed access points
(aps) simultaneously and jointly serves a much smaller
number of mobile stations (mss); a variant of the cellfree technique is the user-centric (uc) approach, wherein
each ap just decodes a reduced set of mss, practically
the ones that are received best. this paper introduces
and analyzes the cf and uc architectures at millimeter
wave (mmwave) frequencies. first of all, a multiuser
clustered channel model is introduced in order to account
for the correlation among the channels of nearby users;
then, an uplink multiuser channel estimation scheme is
described along with low-complexity hybrid analog/digital
beamforming architectures. interestingly, in the proposed
scheme no channel estimation is needed at the mss, and
the beamforming schemes used at the mss are channelindependent and have a very simple structure. numerical
results show that the considered architectures provide good
performance, especially in lightly loaded systems, with the
uc approach outperforming the cf one.",7
"abstract
a tail empirical process for heavy-tailed and right-censored data is introduced and its gaussian approximation is established. in this context, a (weighted) new hill-type estimator for
positive extreme value index is proposed and its consistency and asymptotic normality are
proved by means of the aforementioned process in the framework of second-order conditions
of regular variation. in a comparative simulation study, the newly defined estimator is seen
to perform better than the already existing ones in terms of both bias and mean squared
error. as a real data example, we apply our estimation procedure to evaluate the tail index
of the survival time of australian male aids patients. it is noteworthy that our approach
may also serve to develop other statistics related to the distribution tail such as second-order
parameter and reduced-bias tail index estimators. furthermore, the proposed tail empirical
process provides a goodness-of-fit test for pareto-like models under censorship.
keywords: extreme value index; heavy tails; random censoring; tail empirical process.
ams 2010 subject classification: 60f17, 62g30, 62g32, 62p05.",10
"abstract
in this paper, using the notions graphs, core graphs, immersions and covering maps
of graphs, introduced by stallings in 1983, we prove the burnside condition for the
intersection of subgroups of free groups with burnside condition.
keywords: graph, fundamental group, immersion and covering theory, burnside
condition.
2010 msc: 05e15, 05e18, 55q05, 57m10
1. introduction and motivation
in [3] j. stallings studied on free groups by theory of graphs. he introduced the
concept of immersions of graphs, provided an algorithmic process to study on finitely
generated subgroups of free groups. using these tools, he also gave an elegant proof
for howson’s theorem ”if a and b are finitely generated subgroups of a free group,
then a ∩ b is finitely generated”. moreover, using immersions of graphs and core
graphs (graphs with no trees hanging on) some mathematicians such as everitt and
gersten studied on h. neumann’s inequality on the rank of a ∩ b (see [1] and [2]).
stallings also in [3], introduced another notation called ”burnside condition for a
subgroup”. in this paper, we focus on this notion, and using similar methods, we
prove that if a and b are finitely generated subgroups of a free group f , and a ∩ b
satisfies the burnside condition in both a and b, then a ∩ b satisfies the burnside
condition in a ∨ b, the subgroup of f generated by a ∪ b.
corresponding author
email addresses: [email protected] (hanieh mirebrahimi),
[email protected] (fateme ghanei)
∗",4
"abstract—this paper proposes a new convex model predictive
control strategy for dynamic optimal power flow between battery
energy storage systems distributed in an ac microgrid. the
proposed control strategy uses a new problem formulation, based
on a linear d–q reference frame voltage-current model and
linearised power flow approximations. this allows the optimal
power flows to be solved as a convex optimisation problem, for
which fast and robust solvers exist. the proposed method does
not assume real and reactive power flows are decoupled, allowing
line losses, voltage constraints and converter current constraints
to be addressed. in addition, non-linear variations in the charge
and discharge efficiencies of lithium ion batteries are analysed
and included in the control strategy. real-time digital simulations
were carried out for an islanded microgrid based on the ieee 13
bus prototypical feeder, with distributed battery energy storage
systems and intermittent photovoltaic generation. it is shown
that the proposed control strategy approaches the performance
of a strategy based on non-convex optimisation, while reducing
the required computation time by a factor of 1000, making it
suitable for a real-time model predictive control implementation.
index terms—battery energy storage, energy management,
microgrid, model predictive control, optimal power flow,
quadratic programming.",3
"abstract
we give a constructive, computer-assisted proof that aut(f5 ), the automorphism group of the free group on 5 generators, has kazhdan’s property (t ).",4
"abstract
a universalization of a parameterized investment strategy is an online algorithm whose
average daily performance approaches that of the strategy operating with the optimal parameters
determined offline in hindsight. we present a general framework for universalizing investment
strategies and discuss conditions under which investment strategies are universalizable. we
present examples of common investment strategies that fit into our framework. the examples
include both trading strategies that decide positions in individual stocks, and portfolio strategies
that allocate wealth among multiple stocks. this work extends cover’s universal portfolio work.
we also discuss the runtime efficiency of universalization algorithms. while a straightforward
implementation of our algorithms runs in time exponential in the number of parameters, we show
that the efficient universal portfolio computation technique of kalai and vempala involving the
sampling of log-concave functions can be generalized to other classes of investment strategies.",5
"abstract—in this paper, we propose a wireless communicator
to manage and enhance a cardiac rhythm management system.
the system includes: (1) an on-body wireless electrocardiogram
(ecg), (2) an intracardiac electrogram (egm) embedded inside
an implantable cardioverter/defibrillator, and (3) a
communicator (with a resident learning system). the first two
devices are existing technology available in the market and are
emulated using data from the physionet database, while the
communicator was designed and implemented by our research
team.
the value of the information obtained by combining the
information supplied by (1) and (2), presented to the
communicator, improves decision making regarding use of the
actuator or other actions. preliminary results show a high level of
confidence in the decisions made by the communicator. for
example, excellent accuracy is achieved in predicting atrial
arrhythmia in 8 patients using only external ecg when we used a
neural network.
keywords—wireless, learning systems, bluetooth, zigbee,
ecg, egm",5
"abstract
in this tool demonstration, we give an overview of the chameleon type debugger. the type debugger’s primary use is to identify locations within a source program which are involved in a type error. by further
examining these (potentially) problematic program locations, users gain a better understanding of their program and are able to work towards the actual mistake which was the cause of the type error. the debugger
is interactive, allowing the user to provide additional information to narrow down the search space. one
of the novel aspects of the debugger is the ability to explain erroneous-looking types. in the event that an
unexpected type is inferred, the debugger can highlight program locations which contributed to that result.
furthermore, due to the flexible constraint-based foundation that the debugger is built upon, it can naturally
handle advanced type system features such as haskell’s type classes and functional dependencies.
keywords :",2
"abstract—in this paper we address the identification of 2d
spatial-temporal dynamical systems described by the vectorauto-regressive (var) form. the coefficient-matrices of the var
model are parametrized as sums of kronecker products. when
the number of terms in the sum is small compared to the
size of the matrices, such a kronecker representation efficiently
models large-scale var models. estimating the coefficient matrices in least-squares sense gives rise to a bilinear estimation
problem which is tackled using an alternating least squares
(als) algorithm. regularization or parameter constraints on the
coefficient-matrices allows to induce temporal network properties
such as stability, as well as spatial properties such as sparsity or
toeplitz structure. convergence of the regularized als is proved
using fixed-point theory. a numerical example demonstrates the
advantages of the new modeling paradigm. it leads to comparable
variance of the prediction error with the unstructured leastsquares estimation of var models. however, the number of
parameters grows only linearly with respect to the number of
nodes in the 2d sensor network instead of quadratically in the
case of fully unstructured coefficient-matrices.
index terms—system identification, vector auto-regressive
model, large-scale networks, kronecker product, alternating
least squares.",3
"abstract. we provide converses to two results of j. roe (geom. topol.
2005): first, the warped cone over a free action of a haagerup group admits a
fibred coarse embedding into a hilbert space, and second, a free action yielding
a warped cone with property a must be amenable. we construct examples
showing that in both cases the freeness assumption is necessary. the first
equivalence is obtained also for other classes of banach spaces, e.g. for lp spaces.",4