text,label
"abstract
we introduce a novel approach for annotating a large quantity of in-the-wild facial
images with high-quality posterior age distributions as labels. each posterior provides a
probability distribution of estimated ages for a face. our approach is motivated by observations that it is easier to distinguish who is the older of two people than to determine
the person’s actual age. given a reference database with samples of known ages and
a dataset to label, we can transfer reliable annotations from the former to the latter via
human-in-the-loop comparisons. we show an effective way to transform such comparisons into posteriors via fully-connected and softmax layers, so as to permit end-to-end
training in a deep network. thanks to the efficient and effective annotation approach, we
collect a new large-scale facial age dataset, dubbed ‘megaage’, which consists of 41,941
images. with the dataset, we train a network that jointly performs ordinal hyperplane
classification and posterior distribution learning. our approach achieves state-of-the-art results on popular benchmarks such as morph2, adience, and the newly proposed
megaage.
note: there are mistakes in our original paper. please check the appendix for errata.",1
"abstract
the interest in brain-like computation has led to the design of a
plethora of innovative neuromorphic systems. individually, spiking neural
networks (snns), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. despite the popularity of event-driven
snns in software, very few digital hardware architectures are found. this
is because existing hardware solutions for event management scale badly
with the number of events. this paper introduces the structured heap
queue, a pipelined digital hardware data structure, and demonstrates
its suitability for event management. the structured heap queue scales
gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven snns. the scaling is
linear for memory, logarithmic for logic resources and constant for processing time. the use of the structured heap queue is demonstrated on
field-programmable gate array (fpga) with an image segmentation experiment and a snn of 65 536 neurons and 513 184 synapses. events can
be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel
image is segmented in 200 ms.",9
"abstract—air traffic control (atc) is a complex safety critical
environment. a tower controller would be making many decisions
in real-time to sequence aircraft. while some optimization tools
exist to help the controller in some airports, even in these
situations, the real sequence of the aircraft adopted by the
controller is significantly different from the one proposed by the
optimization algorithm. this is due to the very dynamic nature
of the environment.
the objective of this paper is to test the hypothesis that one can
learn from the sequence adopted by the controller some strategies
that can act as heuristics in decision support tools for aircraft
sequencing. this aim is tested in this paper by attempting to learn
sequences generated from a well-known sequencing method that
is being used in the real world.
the approach relies on a genetic algorithm (ga) to learn these
sequences using a society of probabilistic finite-state machines
(pfsms). each pfsm learns a different sub-space, thus decomposing the learning problem into a group of agents that need
to work together to learn the overall problem. three sequence
metrics (levenshtein, hamming and position distances) are
compared as the fitness functions in ga. as the results suggest,
it is possible to learn the behavior of the algorithm/heuristic that
generated the original sequence from very limited information.",9
"abstract—diabetes is considered a lifestyle disease and a well
managed self-care plays an important role in the treatment.
clinicians often conduct surveys to understand the self-care
behaviours in their patients. in this context, we propose to use
self-organising maps (som) to explore the survey data for
assessing the self-care behaviours in type-1 diabetic patients.
specifically, som is used to visualise high-dimensional similar patient profiles, which is rarely discussed. experiments demonstrate
that our findings through som analysis correspond well to the
expectations of the clinicians. in addition, our findings inspire the
experts to improve their understanding of the self-care behaviours
of their patients. the principal findings in our study show: 1)
patients who take the correct dose of insulin inject insulin at the
right time, 2) patients who take correct food portions undertake
regular physical activity and 3) patients who eat on time take
correct food portions.",5
"abstract
search based software engineering (sbse) is an emerging
discipline that focuses on the application of search-based
optimization techniques to software engineering problems.
the capacity of sbse techniques to tackle problems involving large search spaces makes their application attractive for
software product lines (spls). in recent years, several publications have appeared that apply sbse techniques to spl
problems. in this paper, we present the results of a systematic mapping study of such publications. we identified
the stages of the spl life cycle where sbse techniques have
been used, what case studies have been employed and how
they have been analysed. this mapping study revealed potential avenues for further research as well as common misunderstandings and pitfalls when applying sbse techniques
that we address by providing a guideline for researchers and
practitioners interested in exploiting these techniques.",9
"abstract—we study the problem of private function retrieval
(pfr) in a distributed storage system. in pfr the user wishes
to retrieve a linear combination of m messages stored in noncolluding (n, k) mds coded databases while revealing no information about the coefficients of the intended linear combination
to any of the individual databases. we present an achievable
scheme for mds coded pfr with a rate that matches the
capacity for coded private information retrieval derived recently,
r = (1 + rc + rc^2 + · · · + rc^(m−1))^(−1) = (1 − rc)/(1 − rc^m), where rc = k/n is
the rate of the mds code. this achievable rate is tight in some
special cases.",7
abstraction,2
"abstract. we describe a general method for expanding a truncated g-iterative
hasse-schmidt derivation, where g is an algebraic group. we give examples
of algebraic groups for which our method works.",0
"abstract—video captioning in essential is a complex natural
process, which is affected by various uncertainties stemming
from video content, subjective judgment, etc. in this paper we
build on the recent progress in using encoder-decoder framework
for video captioning and address what we find to be a critical
deficiency of the existing methods, that most of the decoders
propagate deterministic hidden states. such complex uncertainty
cannot be modeled efficiently by the deterministic models. in
this paper, we propose a generative approach, referred to as
multi-modal stochastic rnns networks (ms-rnn), which models
the uncertainty observed in the data using latent stochastic
variables. therefore, ms-rnn can improve the performance of
video captioning, and generate multiple sentences to describe a
video considering different random factors. specifically, a multimodal lstm (m-lstm) is first proposed to interact with both
visual and textual features to capture a high-level representation.
then, a backward stochastic lstm (s-lstm) is proposed to
support uncertainty propagation by introducing latent variables.
experimental results on the challenging datasets msvd and
msr-vtt show that our proposed ms-rnn approach outperforms state-of-the-art video captioning methods on these benchmarks.
index terms—video captioning, rnn, uncertainty.",7
"abstract
approximate probabilistic inference algorithms are central to many fields. examples include sequential monte carlo inference in robotics, variational inference
in machine learning, and markov chain monte carlo inference in statistics. a
key problem faced by practitioners is measuring the accuracy of an approximate
inference algorithm on a specific data set. this paper introduces the auxiliary
inference divergence estimator (aide), an algorithm for measuring the accuracy of
approximate inference algorithms. aide is based on the observation that inference
algorithms can be treated as probabilistic models and the random variables used
within the inference algorithm can be viewed as auxiliary variables. this view leads
to a new estimator for the symmetric kl divergence between the approximating
distributions of two inference algorithms. the paper illustrates application of aide
to algorithms for inference in regression, hidden markov, and dirichlet process
mixture models. the experiments show that aide captures the qualitative behavior
of a broad class of inference algorithms and can detect failure modes of inference
algorithms that are missed by standard heuristics.",2
"abstract
we prove the completeness of an axiomatization for differential equation invariants. first,
we show that the differential equation axioms in differential dynamic logic are complete for
all algebraic invariants. our proof exploits differential ghosts, which introduce additional variables that can be chosen to evolve freely along new differential equations. cleverly chosen
differential ghosts are the proof-theoretical counterpart of dark matter. they create new hypothetical state, whose relationship to the original state variables satisfies invariants that did not
exist before. the reflection of these new invariants in the original system enables its analysis.
we then show that extending the axiomatization with existence and uniqueness axioms
makes it complete for all local progress properties, and further extension with a real induction
axiom makes it complete for all real arithmetic invariants. this yields a parsimonious axiomatization, which serves as the logical foundation for reasoning about invariants of differential
equations. moreover, our approach is purely axiomatic, and so the axiomatization is suitable
for sound implementation in foundational theorem provers.
keywords: differential equation axiomatization, differential dynamic logic, differential ghosts",6
"abstract—face recognition is a widely used technology with numerous large-scale applications, such as surveillance, social media
and law enforcement. there has been tremendous progress in face recognition accuracy over the past few decades, much of which
can be attributed to deep learning based approaches during the last five years. indeed, automated face recognition systems are now
believed to surpass human performance in some scenarios. despite this progress, a crucial question still remains unanswered: given a
face representation, how many identities can it resolve? in other words, what is the capacity of the face representation? a scientific
basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different face
representation methods, but will also establish an upper bound on the scalability of an automatic face recognition system. we cast the
face capacity estimation problem under the information theoretic framework of capacity of a gaussian noise channel. by explicitly
accounting for two sources of representational noise: epistemic (model) uncertainty and aleatoric (data) variability, our approach is
able to estimate the capacity of any given face representation. to demonstrate the efficacy of our approach, we estimate the capacity
of a 128-dimensional deep neural network based face representation, facenet [1], and that of the classical eigenfaces [2]
representation of the same dimensionality. our numerical experiments on unconstrained faces indicate that, (a) our capacity estimation
model yields a capacity upper bound of 5.8×10^8 for facenet and 1×10^0 for eigenface representation at a false acceptance rate (far)
of 1%, (b) the capacity of the face representation reduces drastically as you lower the desired far (for facenet representation; the
capacity at far of 0.1% and 0.001% is 2.4×10^6 and 7.0×10^2, respectively), and (c) the empirical performance of the facenet
representation is significantly below the theoretical limit.
index terms—face recognition, face representation, channel capacity, gaussian noise channel, bayesian inference",1
"abstract—the tradeoff between the user’s memory size and the
worst-case download time in the (h, r, m, n) combination network
is studied, where a central server communicates with k users
through h intermediate relays, and each user has local cache of
size m files and is connected to a different subset of r relays.
the main contribution of this paper is the design of a coded
caching scheme with asymmetric coded placement by leveraging
coordination among the relays, which was not exploited in past
work. mathematical analysis and numerical results show that the
proposed schemes outperform existing schemes.",7
"abstract
the problem of rested and restless multi-armed bandits with constrained availability of arms
is considered. the states of arms evolve in a markovian manner and the exact states are hidden from
the decision maker. first, some structural results on value functions are claimed. following these
results, the optimal policy turns out to be a threshold policy. further, indexability of rested bandits
is established and an index formula is derived. the performance of the index policy is illustrated and
compared with a myopic policy using numerical examples.",3
"abstract
faced with distribution shift between training and test set, we wish to detect and quantify
the shift, and to correct our classifiers without test set labels. motivated by medical diagnosis,
where diseases (targets), cause symptoms (observations), we focus on label shift, where the
label marginal p(y) changes but the conditional p(x|y) does not. we propose black box shift
estimation (bbse) to estimate the test distribution p(y). bbse exploits arbitrary black
box predictors to reduce dimensionality prior to shift correction. while better predictors give
tighter estimates, bbse works even when predictors are biased, inaccurate, or uncalibrated, so
long as their confusion matrices are invertible. we prove bbse’s consistency, bound its error,
and introduce a statistical test that uses bbse to detect shift. we also leverage bbse to
correct classifiers. experiments demonstrate accurate estimates and improved prediction, even
on high-dimensional datasets of natural images.",2
"abstract
the problem of detecting and removing redundant constraints is fundamental in
optimization. we focus on the case of linear programs (lps), given by d variables
with n inequality constraints. a constraint is called redundant, if after its removal,
the lp still has the same feasible region. the currently fastest method to detect all
redundancies is due to clarkson: it solves n linear programs, but each of them has
at most s constraints, where s is the number of nonredundant constraints.
in this paper, we study the special case where every constraint has at most two
variables with nonzero coefficients. this family, denoted by li(2), has some nice
properties. namely, as shown by aspvall and shiloach, given a variable x_i and a
value λ, we can test in time o(nd) whether there is a feasible solution with x_i = λ.
hochbaum and naor present an o(d^2 n log n) algorithm for solving the feasibility
problem in li(2). their technique makes use of the fourier-motzkin elimination
method and the earlier mentioned result by aspvall and shiloach.
we present a strongly polynomial algorithm that solves redundancy detection in
time o(nd^2 s log s). it uses a modification of clarkson’s algorithm, together with a
revised version of hochbaum and naor’s technique. finally we show that dimensionality testing can be done with the same running time as solving feasibility.",8
"abstract
we present techniques for decreasing the error probability of randomized algorithms and
for converting randomized algorithms to deterministic (non-uniform) algorithms. unlike most
existing techniques that involve repetition of the randomized algorithm and hence a slowdown,
our techniques produce algorithms with a similar run-time to the original randomized algorithms.
the amplification technique is related to a certain stochastic multi-armed bandit problem. the
derandomization technique – which is the main contribution of this work – points to an intriguing
connection between derandomization and sketching/sparsification.
we demonstrate the techniques by showing the following applications:
1. dense max-cut: a las vegas algorithm that given a γ-dense g = (v, e) that has a
cut containing 1 − ε fraction of the edges, finds a cut that contains 1 − o(ε) fraction of
the edges. the algorithm runs in time õ(|v|^2 (1/ε)^(o(1/γ^2 + 1/ε^2))) and has error probability
exponentially small in |v|^2. it also implies a deterministic non-uniform algorithm with
the same run-time (note that the input size is θ(|v|^2)).
2. approximate clique: a las vegas algorithm that given a graph g = (v, e) that
contains a clique on ρ |v | vertices, and given ε > 0, finds a set on ρ |v | vertices of density
at least 1 − ε. the algorithm runs in time õ(|v|^2 2^(o(1/(ρ^3 ε^2)))) and has error probability
exponentially small in |v |. we also show a deterministic non-uniform algorithm with the
same run-time.
3. free games: a las vegas algorithm and a non-uniform deterministic algorithm that
given a free game (constraint satisfaction problem on a dense bipartite graph) with value
at least 1 − ε0 and given ε > 0, find a labeling of value at least 1 − ε0 − ε. the error
probability of the randomized algorithm is exponentially small in the number of vertices
and labels. the run-time of the algorithms is similar to that of algorithms with constant
error probability.
4. from list decoding to unique decoding for reed-muller code: a randomized
algorithm with error probability exponentially small in the input size that given a word f
and 0 < ǫ, ρ < 1 finds a short list such that every low degree polynomial that is ρ-close
to f is (1 − ǫ)-close to one of the words in the list. the algorithm runs in nearly linear
time in the input size, and implies a deterministic non-uniform algorithm with similar runtime. the run-time of our algorithms compares with that of the most efficient algebraic
algorithms, but our algorithms are combinatorial and much simpler.",8
"abstract—a newly proposed chemical-reaction-inspired metaheurisic, chemical reaction optimization (cro), has been
applied to many optimization problems in both discrete and
continuous domains. to alleviate the effort in tuning parameters,
this paper reduces the number of optimization parameters in
canonical cro and develops an adaptive scheme to evolve them.
our proposed adaptive cro (acro) adapts better to different
optimization problems. we perform simulations with acro on a
widely-used benchmark of continuous problems. the simulation
results show that acro has superior performance over canonical
cro.
index terms—chemical reaction optimization, continuous
optimization, adaptive scheme, metaheuristic, evolutionary algorithm.",9
"abstract. we introduce a family mr;f,η of infinitesimal supergroup schemes, which we call multiparameter supergroups, that generalize the infinitesimal frobenius kernels ga(r) of the additive
group scheme ga . then, following the approach of suslin, friedlander, and bendel, we use functor
cohomology to define characteristic extension classes for the general linear supergroup glm|n , and
we calculate how these classes restrict along homomorphisms ρ : mr;f,η → glm|n . finally, we apply
our calculations to describe (up to a finite surjective morphism) the spectrum of the cohomology
ring of the r-th frobenius kernel glm|n(r) of the general linear supergroup glm|n .",4
"abstract
this paper addresses detecting anomalous patterns in images, time-series, and tensor data
when the location and scale of the pattern is unknown a priori. the multiscale scan statistic
convolves the proposed pattern with the image at various scales and returns the maximum of
the resulting tensor. scale corrected multiscale scan statistics apply different standardizations
at each scale, and the limiting distribution under the null hypothesis—that the data is only
noise—is known for smooth patterns. we consider the problem of simultaneously learning and
detecting the anomalous pattern from a dictionary of smooth patterns and a database of many
tensors. to this end, we show that the multiscale scan statistic is a subexponential random
variable, and prove a chaining lemma for standardized suprema, which may be of independent
interest. then by averaging the statistics over the database of tensors we can learn the pattern
and obtain bernstein-type error bounds. we will also provide a construction of an ε-net of
the location and scale parameters, providing a computationally tractable approximation with
similar error bounds.",10
"abstract
this paper uses active learning to solve the problem of mining bounded-time
signal temporal requirements of cyber-physical systems or simply the requirement
mining problem. by utilizing the robustness degree, we formulate the requirement
mining problem into two optimization problems, a parameter synthesis problem
and a falsification problem. we then propose a new active learning algorithm
called gaussian process adaptive confidence bound (gp-acb) to help solve
the falsification problem. we show theoretically that the gp-acb algorithm has a
lower regret bound and thus a faster convergence rate than some existing active learning algorithms, such as gp-ucb. we finally illustrate and apply our requirement
mining algorithm on two case studies, ackley’s function and a real-world automatic transmission model. the case studies show that our mining algorithm with
gp-acb outperforms others, such as those based on nelder-mead, by an average
of 30% to 40%. our results demonstrate that there is a principled and efficient way
of extracting requirements for complex cyber-physical systems.",3
"abstract
several end-to-end deep learning approaches have been recently
presented which simultaneously extract visual features from the
input images and perform visual speech classification. however, research on jointly extracting audio and visual features
and performing classification is very limited. in this work, we
present an end-to-end audiovisual model based on bidirectional
long short-term memory (blstm) networks. to the best of
our knowledge, this is the first audiovisual fusion model which
simultaneously learns to extract features directly from the pixels
and spectrograms and perform classification of speech and nonlinguistic vocalisations. the model consists of multiple identical streams, one for each modality, which extract features directly from mouth regions and spectrograms. the temporal dynamics in each stream/modality are modeled by a blstm and
the fusion of multiple streams/modalities takes place via another
blstm. an absolute improvement of 1.9% in the mean f1 of 4
nonlinguistic vocalisations over audio-only classification is reported on the avic database. at the same time, the proposed
end-to-end audiovisual fusion system improves the state-of-the-art performance on the avic database, leading to a 9.7% absolute increase in the mean f1 measure. we also perform audiovisual speech recognition experiments on the ouluvs2 database
using different views of the mouth, frontal to profile. the proposed audiovisual system significantly outperforms the audio-only model for all views when the acoustic noise is high.
index terms: audiovisual fusion, end-to-end deep learning,
audiovisual speech recognition",1
"abstract—in this paper, we theoretically study the proportional
fair (pf) scheduler in the context of ultra-dense networks (udns).
analytical results are obtained for the coverage probability and
the area spectral efficiency (ase) performance of dense small
cell networks (scns) with the pf scheduler employed at base
stations (bss). the key point of our analysis is that the typical
user is no longer a random user as assumed in most studies in
the literature. instead, a user with the maximum pf metric is
chosen by its serving bs as the typical user. by comparing the
previous results of the round-robin (rr) scheduler with our new
results of the pf scheduler, we quantify the loss of multi-user diversity of the pf scheduler as the network densifies,
which casts new light on the role of the pf scheduler in udns.
our conclusion is that the rr scheduler should be used in udns
to simplify the radio resource management (rrm).",7
"abstract. we extend matui’s notion of almost finiteness to general étale groupoids
and show that the reduced groupoid c∗ -algebras of minimal almost finite groupoids
have stable rank one. the proof follows a new strategy, which can be regarded as a local
version of the large subalgebra argument.
the following three are the main consequences of our result. (i) for any group of
(local) subexponential growth and for any of its minimal actions admitting a totally disconnected free factor, the crossed product has stable rank one. (ii) any countable amenable
group admits a minimal action on the cantor set all of whose minimal extensions yield
crossed products of stable rank one. (iii) for any amenable group, the crossed product of
the universal minimal action has stable rank one.",4
"abstract
motivated by comparative genomics, chen et al. [9] introduced the maximum duo-preservation
string mapping (mdsm) problem in which we are given two strings s1 and s2 from the same alphabet and the goal is to find a mapping π between them so as to maximize the number of
duos preserved. a duo is any two consecutive characters in a string and it is preserved in the
mapping if its two consecutive characters in s1 are mapped to the same two consecutive characters
in s2 . the mdsm problem is known to be np-hard and there are approximation algorithms for
this problem [3, 5, 13], but all of them consider only the “unweighted” version of the problem
in the sense that a duo from s1 is preserved by mapping to any same duo in s2 regardless of
their positions in the respective strings. however, it is highly desirable in comparative genomics to
find mappings that consider preserving duos that are “closer” to each other under some distance
measure [19].
in this paper, we introduce a generalized version of the problem, called the maximum-weight
duo-preservation string mapping (mwdsm) problem that captures both duos-preservation and
duos-distance measures in the sense that mapping a duo from s1 to each preserved duo in s2
has a weight, indicating the “closeness” of the two duos. the objective of the mwdsm problem is
to find a mapping so as to maximize the total weight of preserved duos. in this paper, we give
a polynomial-time 6-approximation algorithm for this problem.",8
"abstract. the problem of determining whether or not a nonelementary subgroup of p sl(2, c) is discrete is a long standing
one. the importance of two generator subgroups comes from
jørgensen’s inequality which has as a corollary the fact that a nonelementary subgroup of p sl(2, c) is discrete if and only if every
non-elementary two generator subgroup is. a solution even in the
two-generator psl(2, r) case appears to require an algorithm that
relies on the concept of trace minimizing that was initiated by
rosenberger and purzitsky in the 1970s. their work has led to
many discreteness results and algorithms. here we show how their
concept of trace minimizing leads to a theorem that gives bounds
on the hyperbolic lengths of curves on the quotient surface that
are the images of primitive generators in the case where the group
is discrete and the quotient is a pair of pants. the result follows
as a consequence of the non-euclidean euclidean algorithm.",4
"abstract
the swarm intelligence of animals is a natural paradigm to apply to optimization problems. ant colony, bee colony, firefly and bat algorithms are
amongst those that have been demonstrated to efficiently optimize complex
constraints. this paper proposes the new sparkling squid algorithm (ssa) for
multimodal optimization, inspired by the intelligent swarm behavior of its namesake. after an introduction, formulation and discussion of its implementation,
it will be compared to other popular metaheuristics. finally, applications to
well-known problems such as image registration and the traveling salesperson
problem will be discussed.",9
"abstract
motivation: metagenomics research has accelerated the studies
of microbial organisms, providing insights into the composition
and potential functionality of various microbial communities.
metatranscriptomics (studies of the transcripts from a mixture of
microbial species) and other meta-omics approaches hold even
greater promise for providing additional insights into functional and
regulatory characteristics of the microbial communities. current
metatranscriptomics projects are often carried out without matched
metagenomic datasets (of the same microbial communities). for
the projects that produce both metatranscriptomic and metagenomic
datasets, their analyses are often not integrated. metagenome
assemblies are far from perfect, partially explaining why metagenome
assemblies are not used for the analysis of metatranscriptomic
datasets.
results: here we report a read mapping algorithm for mapping
short reads onto a de bruijn graph of assemblies. a hash table
of junction k-mers (k-mers spanning branching structures in the de
bruijn graph) is used to facilitate fast mapping of reads to the graph.
we developed an application of this mapping algorithm: a reference
based approach to metatranscriptome assembly using graphs of
metagenome assembly as the reference. our results show that this
new approach (called tag) helps to assemble substantially more
transcripts that otherwise would have been missed or truncated
because of the fragmented nature of the reference metagenome.
availability: tag was implemented in c++ and has been tested
extensively on the linux platform. it is available for download as open
source at http://omics.informatics.indiana.edu/tag.
contact: [email protected].",5
"abstract. motivated by the need for efficient and accurate simulation of the dynamics of
the polar ice sheets, we design high-order finite element discretizations and scalable solvers for the
solution of nonlinear incompressible stokes equations. in particular, we focus on power-law, shear
thinning rheologies commonly used in modeling ice dynamics and other geophysical flows. we use
nonconforming hexahedral meshes and the conforming inf-sup stable finite element velocity-pressure
pairings q_k × q^disc_(k−2) or q_k × p^disc_(k−1), where k ≥ 2 is the polynomial order of the velocity space. to
solve the nonlinear equations, we propose a newton-krylov method with a block upper triangular
preconditioner for the linearized stokes systems. the diagonal blocks of this preconditioner are sparse
approximations of the (1,1)-block and of its schur complement. the (1,1)-block is approximated using
linear finite elements based on the nodes of the high-order discretization, and the application of its
inverse is approximated using algebraic multigrid with an incomplete factorization smoother. this
preconditioner is designed to be efficient on anisotropic meshes, which are necessary to match the
high aspect ratio domains typical for ice sheets. as part of this work, we develop and make available
extensions to two libraries—a hybrid meshing scheme for the p4est parallel adaptive mesh refinement
library, and a modified smoothed aggregation scheme for petsc—to improve their support for solving
pdes in high aspect ratio domains. in a comprehensive numerical study, we find that our solver yields
fast convergence that is independent of the element aspect ratio, the occurrence of nonconforming
interfaces, and of the mesh refinement, and that depends only weakly on the polynomial finite element
order. we simulate the ice flow in a realistic description of the antarctic ice sheet derived from field
data, and study the parallel scalability of our solver for problems with up to 383 million unknowns.
key words. viscous incompressible flow, nonlinear stokes equations, shear-thinning, high-order
finite elements, preconditioning, multigrid, newton-krylov method, ice sheet modeling, antarctic ice
sheet.",5
"abstract
convnets, through their architecture, only enforce invariance to translation. in this paper, we
introduce a new class of deep convolutional architectures called non-parametric transformation networks (nptns) which can learn general
invariances and symmetries directly from data.
nptns are a natural generalization of convnets
and can be optimized directly using gradient descent. unlike almost all previous works in deep
architectures, they make no assumption regarding
the structure of the invariances present in the data
and in that aspect are flexible and powerful. we
also model convnets and nptns under a unified framework called transformation networks
(tn), which yields a better understanding of the
connection between the two. we demonstrate the
efficacy of nptns on data such as mnist and
cifar10 where they outperform convnet baselines with the same number of parameters. we
show it is more effective than convnets in modelling symmetries from data, without the explicit
knowledge of the added arbitrary nuisance transformations. finally, we replace convnets with
nptns within capsule networks and show that
this enables capsule nets to perform even better.",1
"abstract
in the network flow interdiction problem an adversary attacks a network in order
to minimize the maximum s-t-flow. very little is known about the approximability of
this problem despite decades of interest in it. we present the first approximation hardness, showing that network flow interdiction and several of its variants cannot be much
easier to approximate than densest k-subgraph. in particular, any n^(o(1))-approximation
algorithm for network flow interdiction would imply an n^(o(1))-approximation algorithm
for densest k-subgraph. we complement these hardness results with the first approximation algorithm for network flow interdiction, which has approximation ratio 2(n − 1).
we also show that network flow interdiction is essentially the same as the budgeted
minimum s-t-cut problem, and transferring our results gives the first approximation
hardness and algorithm for that problem, as well.
keywords: network flow interdiction, approximation algorithms, hardness of approximation, budgeted optimization",8
"abstract. we develop refinements of the levenshtein bound in q-ary hamming spaces
by taking into account the discrete nature of the distances versus the continuous behavior of certain parameters used by levenshtein. the first relevant cases are investigated
in detail and new bounds are presented. in particular, we derive generalizations and
q-ary analogs of a mceliece bound. we provide evidence that our approach is as
good as the complete linear programming and discuss how much faster our calculations are.
finally, we present a table with parameters of codes which, if they exist, would attain our
bounds.",7
"abstract representations of processes with memory (the state) that must
be modified or erased as time progresses. applying landauer’s principle assigns thermodynamic consequences
to the hmm time evolution. which hmm (and corresponding states) is appropriate, though? we now see
that prediction and generation, two very natural tasks
for a thermodynamic system to perform, actually deliver
two different answers. it is important to understand how
physical circumstances relate to this choice of task—it
will be expressed in terms of heat.",7
abstract,3
"abstract
balancing fairness and efficiency in resource allocation is a
classical economic and computational problem. the price of
fairness measures the worst-case loss of economic efficiency
when using an inefficient but fair allocation rule; for indivisible goods in many settings, this price is unacceptably high.
one such setting is kidney exchange, where needy patients
swap willing but incompatible kidney donors. in this work,
we close an open problem regarding the theoretical price of
fairness in modern kidney exchanges. we then propose a general hybrid fairness rule that balances a strict lexicographic
preference ordering over classes of agents, and a utilitarian
objective that maximizes economic efficiency. we develop a
utility function for this rule that favors disadvantaged groups
lexicographically; but if cost to overall efficiency becomes
too high, it switches to a utilitarian objective. this rule has
only one parameter which is proportional to a bound on the
price of fairness, and can be adjusted by policymakers. we
apply this rule to real data from a large kidney exchange and
show that our hybrid rule produces more reliable outcomes
than other fairness rules.",2
"abstract
we establish the capacity region of the two-user finite-field fading x-channel with delayed channel state
information at the transmitters. we consider the most general case in which each transmitter has a common message
for both receivers and a private message for each one of them. we derive a new set of outer-bounds for this problem
that rely on an extremal entropy inequality. this inequality quantifies the ability of each transmitter in favoring one
receiver over the other in terms of the delivered entropy when both receivers must obtain a baseline entropy. we
show that the outer-bounds can be achieved by treating the x-channel as a combination of a number of well-known
problems such as the interference channel and the multicast channel. the capacity-achieving strategies of these subproblems must be interleaved and carried on simultaneously in certain regimes in order to achieve the x-channel
outer-bounds.
index terms
x-channel, finite-field model, channel capacity region, interference channel, delayed csit.",7
"abstract. we give asymptotics for the left and right tails of the limiting quicksort distribution. the results agree with, but are less precise
than, earlier non-rigorous results by knessl and szpankowski.",8
"abstract
deep compositional models of meaning
acting on distributional representations of
words in order to produce vectors of larger
text constituents are evolving into a popular area of nlp research. we detail
a compositional distributional framework
based on a rich form of word embeddings
that aims at facilitating the interactions
between words in the context of a sentence. embeddings and composition layers are jointly learned against a generic
objective that enhances the vectors with
syntactic information from the surrounding context. furthermore, each word is
associated with a number of senses, the
most plausible of which is selected dynamically during the composition process.
we evaluate the produced vectors qualitatively and quantitatively with positive results. at the sentence level, the effectiveness of the framework is demonstrated on
the msrpar task, for which we report results within the state-of-the-art range.",9
"abstract
the practical implementation of bayesian inference requires numerical approximation when closed-form expressions are not available. what types of accuracy
(convergence) of the numerical approximations guarantee robustness and what types
do not? in particular, is the recursive application of bayes’ rule robust when subsequent data or posteriors are approximated? when the prior is the push forward of a
distribution by the map induced by the solution of a pde, in which norm should that
solution be approximated? motivated by such questions, we investigate the sensitivity of the distribution of posterior distributions (i.e. of posterior distribution-valued
random variables, randomized through the data) with respect to perturbations of
the prior and data generating distributions in the limit when the number of data
points grows towards infinity.",10
"abstract
diabetes has affected over 246 million people worldwide with a majority of them being women. according
to the who report, by 2025 this number is expected to rise to over 380 million. the disease has been
named the fifth deadliest disease in the united states with no imminent cure in sight. with the rise of
information technology and its continued advent into the medical and healthcare sector, the cases of
diabetes as well as their symptoms are well documented. this paper aims at finding solutions to diagnose
the disease by analyzing the patterns found in the data through classification analysis by employing
decision tree and naïve bayes algorithms. the research hopes to propose a quicker and more efficient
technique of diagnosing the disease, leading to timely treatment of the patients.",5
"abstract
deep reinforcement learning has emerged as a powerful tool
for a variety of learning tasks, however deep nets typically exhibit forgetting when learning multiple tasks in sequence. to
mitigate forgetting, we propose an experience replay process
that augments the standard fifo buffer and selectively stores
experiences in a long-term memory. we explore four strategies for selecting which experiences will be stored: favoring
surprise, favoring reward, matching the global training distribution, and maximizing coverage of the state space. we show
that distribution matching successfully prevents catastrophic
forgetting, and is consistently the best approach on all domains tested. while distribution matching has better and more
consistent performance, we identify one case in which coverage maximization is beneficial - when tasks that receive less
training are more important. overall, our results show that selective experience replay, when suitable selection algorithms
are employed, can prevent catastrophic forgetting.",2
"abstract
vertex coloring is one of the classic symmetry breaking problems studied in distributed computing. in this paper we present a new algorithm for (∆ + 1)-list coloring in the randomized
local model running in o(log* n + detd(poly log n)) time, where detd(n′) is the deterministic complexity of (deg +1)-list coloring (v’s palette has size deg(v) + 1) on n′-vertex graphs.
this improves upon a previous randomized algorithm of harris, schneider, and su [18] with
complexity o(√log ∆ + log log n + detd(poly log n)), and is dramatically faster than the best
known deterministic algorithm of fraigniaud, heinrich, and kosowski [14], with complexity
o(√∆ log^2.5 ∆ + log* n).
our algorithm appears to be optimal. it matches the ω(log∗ n) randomized lower bound,
due to naor [25] and sort of matches the ω(det(poly log n)) randomized lower bound due to
chang, kopelowitz, and pettie [7], where det is the deterministic complexity
of (∆ + 1)-list
coloring. the best known upper bounds on detd(n′) and det(n′) are both 2^(o(√log n)) (panconesi
and srinivasan [26]) and it is quite plausible that the complexities of both problems are the
same, asymptotically.",8
"abstract
we consider collaborative graph exploration with a set of k agents. all agents start
at a common vertex of an initially unknown graph and need to collectively visit all
other vertices. we assume agents are deterministic, vertices are distinguishable, moves
are simultaneous, and we allow agents to communicate globally. for this setting, we
give the first non-trivial lower bounds that bridge the gap between small (k ≤ √n)
and large (k ≥ n) teams of agents. remarkably, our bounds tightly connect to existing
results in both domains.
first, we significantly extend a lower bound of ω(log k/ log log k) by dynia et
al. on the competitive ratio of a collaborative tree exploration strategy to the range k ≤
n log^c n for any c ∈ n. second, we provide a tight lower bound on the number of agents
needed for any competitive exploration algorithm. in particular, we show that any
collaborative tree exploration algorithm with k = dn^(1+o(1)) agents has a competitive
ratio of ω(1), while dereniowski et al. gave an algorithm with k = dn^(1+ε) agents and
competitive ratio o(1), for any ε > 0 and with d denoting the diameter of the graph.
lastly, we show that, for any exploration algorithm using k = n agents, there exist
trees of arbitrarily large height d that require ω(d^2) rounds, and we provide a simple
algorithm that matches this bound for all trees.",8
"abstract—this paper considers heterogeneous coded caching
where the users have unequal distortion requirements. the server
is connected to the users via an error-free multicast link and
designs the users’ cache sizes subject to a total memory budget.
in particular, in the placement phase, the server jointly designs
the users’ cache sizes and the cache contents. to serve the
users’ requests, in the delivery phase, the server transmits signals
that satisfy the users’ distortion requirements. an optimization
problem with the objective of minimizing the worst-case delivery
load subject to the total cache memory budget and users’
distortion requirements is formulated. the optimal solution for
uncoded placement and linear delivery is characterized explicitly
and is shown to exhibit a threshold policy with respect to the total
cache memory budget. as a byproduct of the study, a caching
scheme for systems with fixed cache sizes that outperforms the
state-of-the-art is presented.",7
"abstract—we present a basic high-level structures
used for developing quantum programming languages.
the presented structures are commonly used in many
existing quantum programming languages and we use
quantum pseudo-code based on qcl quantum programming language to describe them.
we also present the implementation of introduced
structures in gnu octave language for scientific computing. procedures used in the implementation are available
as a package quantum-octave, providing a library of
functions, which facilitates the simulation of quantum
computing. this package allows also to incorporate highlevel programming concepts into the simulation in gnu
octave and matlab. as such it connects features unique
for high-level quantum programming languages, with the
full palette of efficient computational routines commonly
available in modern scientific computing systems.
to present the major features of the described package
we provide the implementation of selected quantum
algorithms. we also show how quantum errors can be
taken into account during the simulation of quantum
algorithms using quantum-octave package. this is
possible thanks to the ability to operate on density
matrices.
index terms—quantum information, quantum programming, models of quantum computation",6
"abstract
mpc (model predictive control) techniques, with constraints, are applied to a nonlinear vehicle model
for the development of an acc (adaptive cruise control) system for transitional manoeuvres. the
dynamic model of the vehicle is developed in the continuous-time domain and captures the real dynamics
of the sub-vehicle models for steady-state and transient operations. a parametric study for the mpc
method is conducted to analyse the response of the acc vehicle for critical manoeuvres. the simulation
results show the significant sensitivity of the response of the vehicle model with acc to the controller
parameters, and comparisons are made with a previous study. furthermore, the approach adopted in
this work is believed to reflect the control actions taken by a real vehicle.
key words:",3
abstract,6
"abstract
in this paper, we extend classical results on (i) signature symmetric realizations, and (ii) signature symmetric and passive
realizations, to systems which need not be controllable. these results are motivated in part by the existence of important
electrical networks, such as the famous bott-duffin networks, which possess signature symmetric and passive realizations that
are uncontrollable. in this regard, we provide necessary and sufficient algebraic conditions for a behavior to be realized as the
driving-point behavior of an electrical network comprising resistors, inductors, capacitors and transformers.
key words: reciprocity; passive system; linear system; controllability; observability; behaviors; electrical networks.",3
"abstract—hypergraph matching has recently become a popular approach for solving correspondence problems in computer vision as
it allows the use of higher-order geometric information. hypergraph matching can be formulated as a third-order optimization problem
subject to assignment constraints which turns out to be np-hard. in recent work, we have proposed an algorithm for hypergraph
matching which first lifts the third-order problem to a fourth-order problem and then solves the fourth-order problem via optimization of
the corresponding multilinear form. this leads to a tensor block coordinate ascent scheme which has the guarantee of providing
monotonic ascent in the original matching score function and leads to state-of-the-art performance both in terms of achieved matching
score and accuracy. in this paper we show that the lifting step to a fourth-order problem can be avoided yielding a third-order scheme
with the same guarantees and performance but being two times faster. moreover, we introduce a homotopy type method which further
improves the performance.
index terms—hypergraph matching, tensor, multilinear form, block coordinate ascent",8
"abstract. we give sufficient conditions for a frobenius category to be equivalent to
the category of gorenstein projective modules over an iwanaga–gorenstein ring. we
then apply this result to the frobenius category of special cohen–macaulay modules
over a rational surface singularity, where we show that the associated stable category is triangle equivalent to the singularity category of a certain discrepant partial
resolution of the given rational singularity. in particular, this produces uncountably
many iwanaga–gorenstein rings of finite gp type. we also apply our method to
representation theory, obtaining auslander–solberg and kong type results.",0
"abstract—in this paper, we propose a comprehensive polar coding solution that integrates reliability calculation, rate
matching and parity-check coding. judging a channel coding
design from the industry’s viewpoint, there are two primary
concerns: (i) low-complexity implementation in application-specific integrated circuit (asic), and (ii) superior & stable
performance under a wide range of code lengths and rates.
the former provides cost- & power-efficiency which are vital to
any commercial system; the latter ensures flexible and robust
services. our design respects both criteria. it demonstrates
better performance than existing schemes in literature, but
requires only a fraction of implementation cost. with easily-reproducible code construction for arbitrary code rates and
lengths, we are able to report “1-bit” fine-granularity simulation
results for thousands of cases. the released results can serve as
a baseline for future optimization of polar codes.
index terms—5g, polar codes, construction, parity-check.",7
"abstract. we have proved the following problem:let r be a c-affine domain, let t be an element in r \ c and let i : c[t ] ֒→ r be the inclusion. assume that r/t r ∼
=c[t ] c[t ]t [n−1] . then r ∼
=c c[n] .
=c c[n−1] and that rt ∼
this result leads to the negative solution of the candidate counter-example
(conjecture e) of v.arno den essen :let a := c[t, u, x, y, z] denote a polynomial ring, and let f (u) := u3 − 3u, g(u) := u4 − 4u2 and h(u) := u5 − 10u be
the polynomials in c[u]. let d := f ′ (u)∂x + g ′ (u)∂y + h′ (u)∂z + t∂u which is a
locally nilpotent derivation on a. let ad be a subring { a ∈ a | d(a) = 0 }.
then ad ∼
6 c c[4] . consequently our result in this short paper guarantees the
=
conjectures : the cancellation problem for affine spaces”, “the linearization
problem”, “the embedding problem” and “the affine an -fibration problem”
to be still open.",0
"abstract. this paper is concerned with the diffusion of a fluid through a viscoelastic
solid undergoing large deformations. using ideas from the classical theory of mixtures and a
thermodynamic framework based on the notion of maximization of the rate of entropy production, the constitutive relations for a mixture of a viscoelastic solid and a fluid (specifically
newtonian fluid) are derived. by prescribing forms for the specific helmholtz potential and
the rate of dissipation, we derive the relations for the partial stress in the solid, the partial
stress in the fluid, the interaction force between the solid and the fluid, and the evolution
equation of the natural configuration of the solid. we also use the assumption that the
volume of the mixture is equal to the sum of the volumes of the two constituents in their
natural state as a constraint. results from the developed model are shown to be in good
agreement with the experimental data for the diffusion of various solvents through high
temperature polyimides that are used in the aircraft industry. the swelling of a viscoelastic
solid under the application of an external force is also studied.",5
"abstract. we provide a probabilistic and infinitesimal view of how the principal
component analysis procedure (pca) can be generalized to analysis of nonlinear
manifold valued data. starting with the probabilistic pca interpretation of the
euclidean pca procedure, we show how pca can be generalized to manifolds in
an intrinsic way that does not resort to linearization of the data space. the underlying probability model is constructed by mapping a euclidean stochastic process
to the manifold using stochastic development of euclidean semimartingales. the
construction uses a connection and bundles of covariant tensors to allow global
transport of principal eigenvectors, and the model is thereby an example of how
principal fiber bundles can be used to handle the lack of global coordinate system and orientations that characterizes manifold valued statistics. we show how
curvature implies non-integrability of the equivalent of euclidean principal subspaces, and how the stochastic flows provide an alternative to explicit construction
of such subspaces. we describe two estimation procedures for inference of parameters and prediction of principal components, and we give examples of properties
of the model on surfaces embedded in r^3.
principal component analysis, manifold valued statistics, stochastic development, probabilistic pca, anisotropic normal distributions, frame bundle",1
"abstract
the interaction between an artificial agent and its
environment is bi-directional. the agent extracts
relevant information from the environment, and
in return affects the environment by its actions to
accumulate high expected reward. standard reinforcement learning (rl) deals with the expected
reward maximization. however, there are always
information-theoretic limitations that restrict the
expected reward, which are not properly considered by the standard rl.
in this work we consider rl objectives with
information-theoretic limitations. for the first
time we derive a bellman-type recursive equation for the causal information between the environment and the agent, which is combined plausibly with the bellman recursion for the value
function. the unified equation serves to explore
the typical behavior of artificial agents in an infinite time horizon.",3
"abstract. the social and economic importance of large bodies of
programs and data that are potentially long-lived has attracted
much attention in the commercial and research communities. here
we concentrate on a set of methodologies and technologies called
persistent programming. in particular we review programming
language support for the concept of orthogonal persistence, a
technique for the uniform treatment of objects irrespective of their
types or longevity. while research in persistent programming has
become unfashionable, we show how the concept is beginning to
appear as a major component of modern systems. we relate these
attempts to the original principles of orthogonal persistence and
give a few hints about how the concept may be utilised in the
future.",6
"abstract
we consider a dynamical process in a network which distributes all particles (tokens) located at a
node among its neighbors, in a round-robin manner.
we show that in the recurrent state of this dynamics (i.e., disregarding a polynomially long initialization
phase of the system), the number of particles located on a given edge, averaged over an interval of time, is
tightly concentrated around the average particle density in the system. formally, for a system of k particles
in a graph of m edges, during any interval of length t, this time-averaged value is k/m ± õ(1/t), whenever gcd(m, k) = õ(1) (and so, e.g., whenever m is a prime number). to achieve these bounds, we
link the behavior of the studied dynamics to ergodic properties of traversals based on eulerian circuits on
a symmetric directed graph. these results are proved through sum set methods and are likely to be of
independent interest.
as a corollary, we also obtain bounds on the idleness of the studied dynamics, i.e., on the longest
possible time between two consecutive appearances of a token on an edge, taken over all edges. designing
trajectories for k tokens in a way which minimizes idleness is fundamental to the study of the patrolling
problem in networks. our results immediately imply a bound of õ(m/k) on the idleness of the studied process, showing that it is a distributed õ(1)-competitive solution to the patrolling task, for all of the
covered cases. our work also provides some further insights that may be interesting in load-balancing
applications.",8
"abstract
with the aim of understanding compactifications of 6d superconformal field theories to
four dimensions, we study punctures for theories of class sγ . the class sγ theories arise
from m5-branes probing c^2/γ, an ade singularity. the resulting 4d theories descend from
compactification on riemann surfaces decorated with punctures. we show that for class sγ
theories, a puncture is specified by singular boundary conditions for fields in the 5d quiver
gauge theory obtained from compactification of the 6d theory on a cylinder geometry. we
determine general boundary conditions and study in detail solutions with first order poles.
this yields a generalization of the nahm pole data present for 1/2 bps punctures for theories
of class s. focusing on specific algebraic structures, we show how the standard discussion
of nilpotent orbits and its connection to representations of su(2) generalizes in this broader
context.",0
"abstract
computing set joins of two inputs is a common task in database theory. recently, van gucht,
williams, woodruff and zhang [pods 2015] considered the complexity of such problems in the
natural model of (classical) two-party communication complexity and obtained tight bounds for
the complexity of several important distributed set joins.
in this paper we initiate the study of the quantum communication complexity of distributed
set joins. we design a quantum protocol for distributed boolean matrix multiplication, which
corresponds to computing the composition join of two databases, showing that the product of
two n × n boolean matrices, each owned by one of two respective parties, can be computed with õ(√n · ℓ^(3/4)) qubits of communication, where ℓ denotes the number of non-zero entries of the product. since van gucht et al. showed that the classical communication complexity of this problem is θ̃(n√ℓ), our quantum algorithm outperforms classical protocols whenever the
output matrix is sparse. we also show a quantum lower bound and a matching classical upper
bound on the communication complexity of distributed matrix multiplication over f2 .
besides their applications to database theory, the communication complexity of set joins is
interesting due to its connections to direct product theorems in communication complexity. in
this work we also introduce a notion of all-pairs product theorem, and relate this notion to
standard direct product theorems in communication complexity.",8
"abstract
graph theory provides a language for studying the structure of relations, and it is
often used to study interactions over time too. however, it poorly captures both the temporal and structural nature of interactions, which calls for a dedicated formalism.
in this paper, we generalize graph concepts in order to cope with both aspects in a
consistent way. we start with elementary concepts like density, clusters, or paths, and
derive from them more advanced concepts like cliques, degrees, clustering coefficients,
or connected components. we obtain a language to directly deal with interactions
over time, similar to the language provided by graphs to deal with relations. this
formalism is self-consistent: usual relations between different concepts are preserved.
it is also consistent with graph theory: graph concepts are special cases of the ones
we introduce. this makes it easy to generalize higher-level objects such as quotient
graphs, line graphs, k-cores, and centralities. this paper also considers discrete versus
continuous time assumptions, instantaneous links, and extensions to more complex
cases.",8
"abstract
pilgrim, ryan zachary. source coding optimization for distributed average consensus.
(under the direction of dror baron.)
consensus is a common method for computing a function of the data distributed among
the nodes of a network. of particular interest is distributed average consensus, whereby the",7
"abstract
convolutional neural networks (cnns) are similar to “ordinary”
neural networks in the sense that they are made up of hidden layers
consisting of neurons with “learnable” parameters. these neurons
receive inputs, perform a dot product, and then follow it with a
non-linearity. the whole network expresses the mapping between
raw image pixels and their class scores. conventionally, the softmax
function is the classifier used at the last layer of this network.
however, there have been studies [2, 3, 11] conducted to challenge
this norm. the cited studies introduce the usage of linear support
vector machine (svm) in an artificial neural network architecture.
this project is yet another take on the subject, and is inspired by
[11]. empirical data has shown that the cnn-svm model was able
to achieve a test accuracy of ≈99.04% using the mnist dataset[10].
on the other hand, the cnn-softmax was able to achieve a test
accuracy of ≈99.23% using the same dataset. both models were
also tested on the recently-published fashion-mnist dataset[13],
which is supposed to be a more difficult image classification dataset
than mnist[15]. this proved to be the case as cnn-svm reached
a test accuracy of ≈90.72%, while the cnn-softmax reached a test
accuracy of ≈91.86%. the said results may be improved if data
preprocessing techniques were employed on the datasets, and if
the base cnn model was relatively more sophisticated than the
one used in this study.",9
"abstract
multidimensional recurrent neural networks (mdrnns) have shown a remarkable performance in the area of speech and handwriting recognition. the performance of an mdrnn is improved by further increasing its depth, and the difficulty of learning the deeper network is overcome by using hessian-free (hf)
optimization. given that connectionist temporal classification (ctc) is utilized as
an objective of learning an mdrnn for sequence labeling, the non-convexity of
ctc poses a problem when applying hf to the network. as a solution, a convex
approximation of ctc is formulated and its relationship with the em algorithm
and the fisher information matrix is discussed. an mdrnn up to a depth of 15
layers is successfully trained using hf, resulting in an improved performance for
sequence labeling.",9
"abstract
genetic algorithms are a well-known method for tackling the problem of variable selection. as
they are non-parametric and can use a large variety of fitness functions, they are well-suited as a
variable selection wrapper that can be applied to many different models. in almost all cases, the
chromosome formulation used in these genetic algorithms consists of a binary vector of length n
for n potential variables indicating the presence or absence of the corresponding variables.
while the aforementioned chromosome formulation has exhibited good performance for relatively
small n, there are potential problems when the size of n grows very large, especially when
interaction terms are considered. we introduce a modification to the standard chromosome
formulation that allows for better scalability and model sparsity when interaction terms are
included in the predictor search space. experimental results show that the indexed chromosome
formulation demonstrates improved computational efficiency and sparsity on high-dimensional
datasets with interaction terms compared to the standard chromosome formulation.
keywords: genetic algorithm, chromosome, variable selection, feature selection, interaction terms, high dimensional data",9
"abstract
in this paper, we give a necessary and sufficient condition for mean stability of switched linear systems having a markov
regenerative process as its switching signal. this class of switched linear systems, which we call markov regenerative switched
linear systems, contains markov jump linear systems and semi-markov jump linear systems as special cases. we show that a
markov regenerative switched linear system is mth mean stable if and only if a particular matrix is schur stable, under the
assumption that either m is even or the system is positive.
key words: switched linear systems; mean stability; markov regenerative processes; positive systems",3
"abstract
in this paper, we deal with a calculus system slcd (syllogistic logic with
carroll diagrams), which gives a formal approach to logical reasoning with diagrams for representations of the fundamental aristotelian categorical propositions, and show that they are closed under the syllogistic criterion of inference, which is the deletion of the middle term. the formalism is therefore implemented to comprise synchronically both bilateral and trilateral diagrammatical appearance and a naive algorithmic nature. moreover, no specific knowledge or exclusive ability is needed to understand or to use it. consequently, we give an effective algorithm that determines whether a syllogistic reasoning is valid or not by using slcd.
keywords: categorical syllogism, validity, algorithm
2010 msc: 68q60, 03b70, 68t27",2
"abstract
many popular applications use traces of user data to offer various services to their users; example applications include driver-assistance systems and smart home services. however, revealing user
information to such applications puts users’ privacy at stake, as adversaries can infer sensitive private
information about the users such as their behaviors, interests, and locations. recent research shows that
adversaries can compromise users’ privacy when they use such applications even when the traces of
users’ information are protected by mechanisms like anonymization and obfuscation.
in this work, we derive the theoretical bounds on the privacy of users of these applications when
standard protection mechanisms are deployed. we build on our recent study in the area of location
privacy, in which we introduced formal notions of location privacy for anonymization-based location
privacy-protection mechanisms. more specifically, we derive the fundamental limits of user privacy when
both anonymization and obfuscation-based protection mechanisms are applied to users’ time series of
data. we investigate the impact of such mechanisms on the tradeoff between privacy protection and
user utility. in particular, we study achievability results for the case where the time-series of users are
governed by an i.i.d. process. the converse results are proved both for the i.i.d. case as well as the
more general markov chain model. we demonstrate that as the number of users in the network grows,
the obfuscation-anonymization plane can be divided into two regions: in the first region, all users have
perfect privacy; and, in the second region, no user has privacy.",7
"abstract
program sensitivity measures how robust a program is to small
changes in its input, and is a fundamental notion in domains ranging
from differential privacy to cyber-physical systems. a natural way
to formalize program sensitivity is in terms of metrics on the input
and output spaces, requiring that an r-sensitive function map inputs
that are at distance d to outputs that are at distance at most r · d.
program sensitivity is thus an analogue of lipschitz continuity for
programs.
reed and pierce introduced fuzz, a functional language with
a linear type system that can express program sensitivity. they
show soundness operationally, in the form of a metric preservation
property. inspired by their work, we study program sensitivity and
metric preservation from a denotational point of view. in particular,
we introduce metric cpos, a novel semantic structure for reasoning
about computation on metric spaces, by endowing cpos with a
compatible notion of distance. this structure is useful for reasoning
about metric properties of programs, and specifically about program
sensitivity. we demonstrate metric cpos by giving a model for the
deterministic fragment of fuzz.
categories and subject descriptors f.3.2 [logics and meaning
of programs]: semantics of programming languages
keywords domain theory, program sensitivity, metric spaces, lipschitz continuity",6
"abstract. we consider manipulation strategies for the rank-maximal matching problem. in the rank-maximal
matching problem we are given a bipartite graph g = (a ∪ p, e) such that a denotes a set of applicants and
p a set of posts. each applicant a ∈ a has a preference list over the set of his neighbors in g, possibly involving
ties. preference lists are represented by ranks on the edges - an edge (a, p) has rank i, denoted as rank(a, p) = i, if
post p belongs to one of a’s i-th choices. posts most preferred by an applicant a have rank one in his preference list.
a matching m is any subset of edges e such that no two edges of m share an endpoint. a rank-maximal matching
is one in which the maximum number of applicants is matched to their rank one posts and subject to this condition,
the maximum number of applicants is matched to their rank two posts, and so on. a rank-maximal matching can be computed in o(min(c√n, n)·m) time, where n denotes the number of applicants, m the number of edges and c
the maximum rank of an edge in an optimal solution [1].
a central authority matches applicants to posts. it does so using one of the rank-maximal matchings. since there
may be more than one rank-maximal matching of g, we assume that the central authority may choose any one of
them or that the rank-maximal matching is chosen randomly. let a1 be a manipulative applicant, who knows the
preference lists of all the other applicants and wants to falsify his preference list so that he has a chance of getting
better posts than if he were truthful, i.e., than if he gave a true preference list. we can always assume that a1 does
not get his most preferred post in every rank-maximal matching when he is truthful; otherwise a1 does not have any
incentive to cheat. in the first problem addressed in this paper the manipulative applicant a1 wants to ensure that
he is never matched to any post worse than the most preferred among those of rank greater than one and obtainable
when he is truthful. in the second problem the manipulator wants to construct such a preference list that the worst
post he can become matched to by the central authority is best possible or in other words, a1 wants to minimize the
maximal rank of a post he can become matched to.",8
"abstract
airborne lidar point cloud representing a forest contains 3d data, from which vertical stand structure
even of under-story layers can be derived. this paper presents a tree segmentation approach for multistory stands that stratifies the point cloud to canopy layers and segments individual tree crowns within
each layer using a digital surface model based tree segmentation method. the novelty of the approach is
the stratification procedure that separates the point cloud to an over-story and multiple under-story tree
canopy layers by analyzing vertical distributions of lidar points within overlapping locales. unlike
previous work that stripped stiff layers within a constrained area, the procedure stratifies the point cloud
to flexible tree canopy layers over an unconstrained area with minimal over/under-segmentations of tree
crowns across the layers. the procedure does not make a priori assumptions about the shape and size of
the tree crowns and can, independent of the tree segmentation method, be utilized to vertically stratify tree
crowns of forest canopies with a variety of stand structures. we applied the proposed approach to the
university of kentucky robinson forest – a natural deciduous forest with complex terrain and vegetation
structure. the segmentation results showed that using the stratification procedure strongly improved
detecting under-story trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented
under-story trees (increased from 1% to 16%), while barely affecting the segmentation quality of overstory trees. results of vertical stratification of canopy showed that the point density of under-story
canopy layers were suboptimal for performing reasonable tree segmentation, suggesting that acquiring
denser lidar point clouds (becoming affordable due to advancements of the sensor technology and
platforms) would allow more improvements in segmenting under-story trees.",5
"abstract. given a finite group g and a faithful irreducible f g-module v where f has
prime order, does g have a regular orbit on v ? this problem is equivalent to determining
which primitive permutation groups of affine type have a base of size 2. we classify the
pairs (g, v ) for which g has no regular orbit on v , where g is a covering group of an
almost simple group whose socle is sporadic and v is a faithful irreducible f g-module
such that the order of f is prime and divides the order of g. for such (g, v ), we also
determine the minimal base size of g in its action on v .",4
"abstract
the notion of an equational shell is studied to involve the objects and their environment. appropriate methods are studied as valid embeddings
of refined objects. the refinement process determines the linkages between the variety of possible
representations giving rise to variants of computations. the case study is equipped with the adjusted equational systems that validate the initial
applicative framework.",6
"abstract
the sorting number of a graph with n vertices is the minimum depth of a sorting
network with n inputs and n outputs that uses only the edges of the graph to perform
comparisons. many known results on sorting networks can be stated in terms of sorting
numbers of different classes of graphs. in this paper we show the following general
results about the sorting number of graphs.
1. any n-vertex graph that contains a simple path of length d has a sorting network
of depth o(n log(n/d)).
2. any n-vertex graph with maximal degree ∆ has a sorting network of depth o(∆n).
we also provide several results relating the sorting number of a graph with its routing number, size of its maximal matching, and other well known graph properties.
additionally, we give some new bounds on the sorting number for some typical graphs.",8
"abstract. we explore transversals of finite index subgroups of finitely generated groups. we show that when h is a subgroup of a rank n group g
and h has index at least n in g then we can construct a left transversal for
h which contains a generating set of size n for g, and that the construction
is algorithmic when g is finitely presented. we also show that, in the case
where g has rank n ≤ 3, there is a simultaneous left-right transversal for h
which contains a generating set of size n for g. we finish by showing that
if h is a subgroup of a rank n group g with index less than 3 · 2^(n−1), and h contains no primitive elements of g, then h is normal in g and g/h ≅ c_2^n.",4
"abstract—interior tomography for the region-of-interest imaging has advantages of using a small detector and reducing x-ray
radiation dose. however, standard analytic reconstruction suffers
from severe cupping artifacts due to the existence of a null space in the
truncated radon transform. existing penalized reconstruction
methods may address this problem but they require extensive
computations due to the iterative reconstruction. inspired by the
recent deep learning approaches to low-dose and sparse view
ct, here we propose a deep learning architecture that removes
null space signals from the fbp reconstruction. experimental
results have shown that the proposed method provides near-perfect reconstruction with about 7∼10 db improvement in
psnr over existing methods in spite of significantly reduced
run-time complexity.",2
"abstract
we study the problem of detecting a drift change of a brownian motion under
various extensions of the classical case. specifically, we consider the case of a random
post-change drift and examine monotonicity properties of the solution with respect
to different model parameters. moreover, robustness properties – effects of misspecification of the underlying model – are explored.",10
"abstract
the chinese restaurant process and the stick-breaking process are the two most
commonly used representations of the dirichlet process. however, the usual proof of
the connection between them is indirect, relying on abstract properties of the dirichlet
process that are difficult for nonexperts to verify. this short note provides a direct proof
that the stick-breaking process gives rise to the chinese restaurant process, without
using any measure theory.",10
"abstraction that is precise. for any program with a bounded visible heap,
meaning that the number of objects reachable from variables at any point of execution is bounded,
this abstraction is a finitary representation of its behaviour, even though an unbounded number of
objects can appear in the state. as a consequence, for such programs model checking is decidable.
finally we introduce a specification language for temporal properties of the heap, and discuss model
checking these properties against heap-manipulating programs.",6
"abstract
theoretical analyses of stochastic search algorithms, albeit few, have always existed since
these algorithms became popular. starting in the nineties a systematic approach to analyse
the performance of stochastic search heuristics has been put in place. this quickly increasing
basis of results allows, nowadays, the analysis of sophisticated algorithms such as population-based evolutionary algorithms, ant colony optimisation and artificial immune systems. results
are available concerning problems from various domains including classical combinatorial and
continuous optimisation, single and multi-objective optimisation, and noisy and dynamic optimisation. this chapter introduces the mathematical techniques that are most commonly
used in the runtime analysis of stochastic search heuristics. careful attention is given to the
very popular artificial fitness levels and drift analyses techniques for which several variants are
presented. to aid the reader’s comprehension of the presented mathematical methods, these
are applied to the analysis of simple evolutionary algorithms for artificial example functions.
the chapter is concluded by providing references to more complex applications and further
extensions of the techniques for the obtainment of advanced results.",9
"abstract
in this work we offer an o(|v|^2 |e| w) pseudo-polynomial time deterministic algorithm for solving the value problem and optimal strategy synthesis in mean payoff games. this improves by a factor log(|v| w) the best previously known pseudo-polynomial time upper bound
due to brim, et al. the improvement hinges on a suitable characterization of values, and a
description of optimal positional strategies, in terms of reweighted energy games and small
energy-progress measures.",8
"abstract—this paper is eligible for the student
paper award. finding the underlying probability distributions of a set of observed sequences under the constraint that
each sequence is generated i.i.d. by a distinct distribution is
considered. the number of distributions, and hence the number
of observed sequences, are allowed to grow with the observation
blocklength n. asymptotically matching upper and lower bounds
on the probability of error are derived.",7
"abstract
linear programming approaches have been applied to derive upper bounds on the size of classical and quantum codes. in
this paper, we derive similar results for general quantum codes with entanglement assistance by considering a type of split weight
enumerator. after deriving the macwilliams identities for these enumerators, we are able to prove algebraic linear programming
bounds, such as the singleton bound, the hamming bound, and the first linear programming bound. our singleton bound and
hamming bound are more general than the previous bounds for entanglement-assisted quantum stabilizer codes. in addition, we
show that the first linear programming bound improves the hamming bound when the relative distance is sufficiently large.
on the other hand, we obtain additional constraints on the size of pauli subgroups for quantum codes, which allow us to improve
the linear programming bounds on the minimum distance of quantum codes of small length. in particular, we show that there
is no [[27, 15, 5]] or [[28, 14, 6]] stabilizer code. we also discuss the existence of some entanglement-assisted quantum stabilizer
codes with maximal entanglement. as a result, the upper and lower bounds on the minimum distance of maximal-entanglement
quantum stabilizer codes with length up to 20 are significantly improved.",7
"abstract
the problem of testing monotonicity of a boolean function f : {0, 1}^n → {0, 1} has received much attention recently. denoting the proximity parameter by ε, the best tester is the non-adaptive õ(√n/ε^2) tester of khot-minzer-safra (focs 2015). let i(f) denote the total influence of f. we give an adaptive tester whose running time is i(f) · poly(ε^(-1) log n).",8
"abstract. let g be a connected algebraic group. an unrefinable chain of g is a chain
of subgroups g = g0 > g1 > · · · > gt = 1, where each gi is a maximal connected
subgroup of gi−1 . we introduce the notion of the length (respectively, depth) of g,
defined as the maximal (respectively, minimal) length of such a chain. working over an
algebraically closed field, we calculate the length of a connected group g in terms of the
dimension of its unipotent radical ru (g) and the dimension of a borel subgroup b of
the reductive quotient g/ru (g). in particular, a simple algebraic group of rank r has
length dim b + r, which gives a natural extension of a theorem of solomon and turull on
finite quasisimple groups of lie type. we then deduce that the length of any connected
algebraic group g exceeds (1/2) dim g.
we also study the depth of simple algebraic groups. in characteristic zero, we show
that the depth of such a group is at most 6 (this bound is sharp). in the positive
characteristic setting, we calculate the exact depth of each exceptional algebraic group
and we prove that the depth of a classical group (over a fixed algebraically closed field
of positive characteristic) tends to infinity with the rank of the group.
finally we study the chain difference of an algebraic group, which is the difference
between its length and its depth. in particular we prove that, for any connected algebraic
group g, the dimension of g/r(g) is bounded above in terms of the chain difference of
g.",4
"abstract
algorithmic statistics has two different (and almost orthogonal) motivations. from the philosophical point of view, it tries to formalize how
statistics works and why some statistical models are better than others. after
this notion of a “good model” is introduced, a natural question arises: is it possible that for some piece of data there is no good model? if yes, how often do these bad (non-stochastic) data appear “in real life”?
another, more technical motivation comes from algorithmic information
theory. in this theory a notion of complexity of a finite object (=amount of
information in this object) is introduced; it assigns to every object some number, called its algorithmic complexity (or kolmogorov complexity). algorithmic
statistics provides a more fine-grained classification: for each finite object some curve is defined that characterizes its behavior. it turns out that several different definitions give (approximately) the same curve.
in this survey we try to provide an exposition of the main results in the field
(including full proofs for the most important ones), as well as some historical
comments. we assume that the reader is familiar with the main notions of
algorithmic information (kolmogorov complexity) theory. an exposition can
be found in [44, chapters 1, 3, 4] or [22, chapters 2, 3], see also the survey [37].",10
"abstract
end-to-end training of automated speech
recognition (asr) systems requires massive
data and compute resources. we explore
transfer learning based on model adaptation
as an approach for training asr models under
constrained gpu memory, throughput and
training data. we conduct several systematic
experiments adapting a wav2letter convolutional neural network originally trained for
english asr to the german language. we
show that this technique allows faster training
on consumer-grade resources while requiring
less training data in order to achieve the same
accuracy, thereby lowering the cost of training asr models in other languages. model
introspection revealed that small adaptations
to the network’s weights were sufficient for
good performance, especially for inner layers.",9
"abstract. one of the remaining questions in gorenstein homology is whether a local ring r is cohenmacaulay if it possesses a nonzero module which is either finitely generated of finite gorenstein injective
dimension or cohen-macaulay of finite g-dimension. in this paper, we continue our investigation on
this question. also, we treat two other closely related questions.",0
"abstract—the central role of food in our individual and
social life, combined with recent technological advances, has
motivated a growing interest in applications that help to better
monitor dietary habits as well as the exploration and retrieval
of food-related information. we review how visual content,
context and external knowledge can be integrated effectively into
food-oriented applications, with special focus on recipe analysis
and retrieval, food recommendation and restaurant context as
emerging directions.",1
"abstract— in this paper, a parametric model order reduction
(pmor) technique is proposed to find a simplified system
representation of a large-scale and complex thermal system. the
main principle behind this technique is that any change of the
physical parameters in the high-fidelity model can be updated
directly in the simplified model. for deriving the parametric
reduced model, a krylov subspace method is employed which
yields the relevant subspaces of the projected state. with the
help of the projection operator, first moments of the low-rank
model are set identical to the corresponding moments of the original model. additionally, an a priori upper bound on the error
induced by the approximation is derived.",3
"abstract
the linear induced matching width (lmim-width) of a graph is a
width parameter defined by using the notion of branch-decompositions
of a set function on ternary trees. in this paper we study output-polynomial enumeration algorithms on graphs of bounded lmim-width
and graphs of bounded local lmim-width. in particular, we show that
all 1-minimal and all 1-maximal (σ, ρ)-dominating sets, and hence all
minimal dominating sets, of graphs of bounded lmim-width can be
enumerated with polynomial (linear) delay using polynomial space.
furthermore, we show that all minimal dominating sets of a unit square
graph can be enumerated in incremental polynomial time.",8
"abstract. objective
to evaluate the efficacy of methods that use deep learning (dl) for the
automatic fine-grained segmentation of optical coherence tomography
(oct) images of the retina.
methods
oct images from 10 patients with mild non-proliferative diabetic retinopathy were used from a public (u. of miami) dataset. for each patient, five
images were available: one image of the fovea center, two images of the
perifovea, and two images of the parafovea. for each image, two expert
graders each manually annotated five retinal surfaces (i.e. boundaries between pairs of retinal layers). the first grader’s annotations were used
as ground truth and the second grader’s annotations to compute inter-operator agreement. the proposed automated approach segments images
using fully convolutional networks (fcns) together with gaussian process (gp)-based regression as a post-processing step to improve the quality of the estimates. using 10-fold cross validation, the performance of
the algorithms is determined by computing the per-pixel unsigned error
(distance) between the automated estimates and the ground truth annotations generated by the first manual grader. we compare the proposed
method against five state of the art automatic segmentation techniques.
results
the results show that the proposed methods compare favorably with
state of the art techniques, resulting in the smallest mean unsigned error
values and associated standard deviations, and performance is comparable with human annotation of retinal layers from oct when there is
only mild retinopathy.
conclusions
the results suggest that semantic segmentation using fcns, coupled
with regression-based post-processing, can effectively solve the oct segmentation problem on par with human capabilities with mild retinopathy.",1
"abstract. spatial stochastic processes that are modeled over the entire earth’s surface require
statistical approaches that directly consider the spherical domain. here, we extend the notion",10
"abstract. we present an owl 2 ontology representing the saint gall plan, one of
the most ancient documents to have arrived intact to us, which describes the ideal model of
a benedictine monastic complex that inspired the design of many european monasteries.
keywords. ontology, owl 2, digital humanities, benedictine monasteries.",2
"abstract
motivated by geometric problems in signal processing, computer vision, and structural biology, we
study a class of orbit recovery problems where we observe noisy copies of an unknown signal, each acted
upon by a random element of some group (such as z/p or so(3)). the goal is to recover the orbit of the
signal under the group action. this generalizes problems of interest such as multi-reference alignment
(mra) and the reconstruction problem in cryo-electron microscopy (cryo-em). we obtain matching
lower and upper bounds on the sample complexity of these problems in high generality, showing that the
statistical difficulty is intricately determined by the invariant theory of the underlying symmetry group.
in particular, we determine that for cryo-em with noise variance σ^2 and uniform viewing directions, the number of samples required scales as σ^6. we match this bound with a novel algorithm for ab initio
reconstruction in cryo-em, based on invariant features of degree at most 3. we further discuss how to
recover multiple molecular structures from heterogeneous cryo-em samples.",0
"abstract. while lung cancer is the second most diagnosed form of cancer in men and women, a sufficiently early
diagnosis can be pivotal in patient survival rates. imaging-based, or radiomics-driven, detection methods have been
developed to aid diagnosticians, but largely rely on hand-crafted features which may not fully encapsulate the differences between cancerous and healthy tissue. recently, the concept of discovery radiomics was introduced, where
custom abstract features are discovered from readily available imaging data. we propose a novel evolutionary deep
radiomic sequencer discovery approach based on evolutionary deep intelligence. motivated by patient privacy concerns and the idea of operational artificial intelligence, the evolutionary deep radiomic sequencer discovery approach
organically evolves increasingly more efficient deep radiomic sequencers that produce significantly more compact yet
similarly descriptive radiomic sequences over multiple generations. as a result, this framework improves operational
efficiency and enables diagnosis to be run locally at the radiologist’s computer while maintaining detection accuracy.
we evaluated the evolved deep radiomic sequencer (edrs) discovered via the proposed evolutionary deep radiomic
sequencer discovery framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung ct data with pathologically-proven diagnostic data from the lidc-idri dataset. the evolved deep radiomic
sequencer shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to
previous radiomics approaches.
keywords: discovery radiomics, radiomic sequencing, lung cancer, evolutionary deep intelligence, evolved
deep radiomic sequencer.",1
"abstract
sparsity-based methods are widely used in machine learning, statistics, and signal processing. there
is now a rich class of structured sparsity approaches that expand the modeling power of the sparsity
paradigm and incorporate constraints such as group sparsity, graph sparsity, or hierarchical sparsity. while
these sparsity models offer improved sample complexity and better interpretability, the improvements
come at a computational cost: it is often challenging to optimize over the (non-convex) constraint sets
that capture various sparsity structures. in this paper, we make progress in this direction in the context of
separated sparsity – a fundamental sparsity notion that captures exclusion constraints in linearly ordered
data such as time series. while prior algorithms for computing a projection onto this constraint set
required quadratic time, we provide a perturbed lagrangian relaxation approach that computes provably
exact projection in only nearly-linear time. although the sparsity constraint is non-convex, our perturbed
lagrangian approach is still guaranteed to find a globally optimal solution. in experiments, our new
algorithms offer a 10× speed-up already on moderately-sized inputs.",7
"abstract
correctness constraints provide a foundation for automated debugging within object-oriented
systems. this paper discusses a new approach to incorporating correctness constraints into java
development environments. our approach uses the object constraint language (“ocl”) as a
specification language and the java debug interface (“jdi”) as a verification api. ocl provides a
standard language for expressing object-oriented constraints that can integrate with unified
modeling language (“uml”) software models. jdi provides a standard java api capable of
supporting type-safe and side effect free runtime constraint evaluation. the resulting correctness
constraint mechanism: (1) entails no programming language modifications; (2) requires neither
access nor changes to existing source code; and (3) works with standard off-the-shelf java virtual
machines (“vms”). a prototype correctness constraint auditor is presented to demonstrate the
utility of this mechanism for purposes of automated debugging.",6
"abstract. let (r, m) be a noetherian local ring. in this paper, we introduce a dual
notion for dualizing modules, namely codualizing modules. we study the basic properties
of codualizing modules and use them to establish an equivalence between the category of
noetherian modules of finite projective dimension and the category of artinian modules
of finite projective dimension. next, we give some applications of codualizing modules.
finally, we present a mixed identity involving the quasidualizing module that characterizes
the codualizing module. as an application, we obtain a necessary and sufficient condition
for r to be gorenstein.",0
"abstract
we present a deterministic oblivious lifo (stack), fifo, double-ended and double-ended priority queue as well as
an oblivious mergesort and quicksort algorithm. our techniques and ideas include concatenating queues end-to-end,
size balancing of multiple arrays, and several multi-level partitionings of an array. our queues are the first to enable
executions of pop and push operations without any change of the data structure (controlled by a parameter). this
enables interesting applications in computing on encrypted data such as hiding confidential expressions. mergesort
becomes practical using our lifo queue, i.e., it improves prior work (stoc ’14) by a factor of (more than) 1000 in
terms of comparisons for all practically relevant queue sizes. we are the first to present double-ended (priority) and
lifo queues as well as oblivious quicksort, which is asymptotically optimal. aside from theoretical analysis, we also
provide an empirical evaluation of all queues.
keywords: sorting, queues, complexity, oblivious algorithms, privacy preserving, computation on encrypted data,
secure computing, fully homomorphic encryption, secure multi-party computation",8
abstract,1