arXiv: 2101.00884
^1 TIB – Leibniz Information Centre for Science and Technology, Hannover, Germany
^2 L3S Research Center, Leibniz University, Hannover, Germany
Email: {arthur.brack|anett.hoppe|ralph.ewerth}@tib.eu
# Coreference Resolution in Research Papers from Multiple Domains
Arthur Brack^1 (ORCID 0000-0002-1428-5348), Daniel Uwe Müller^1 (ORCID 0000-0002-4492-5879), Anett Hoppe^1 (ORCID 0000-0002-1452-9509), Ralph Ewerth^1,2 (ORCID 0000-0003-0918-6297)
###### Abstract
Coreference resolution is essential for automatic text understanding to
facilitate high-level information retrieval tasks such as text summarisation
or question answering. Previous work indicates that the performance of state-
of-the-art approaches (e.g. based on BERT) noticeably declines when applied to
scientific papers. In this paper, we investigate the task of coreference
resolution in research papers and subsequent knowledge graph population. We
present the following contributions: (1) We annotate a corpus for coreference
resolution that comprises 10 different scientific disciplines from Science,
Technology, and Medicine (STM); (2) We propose transfer learning for automatic
coreference resolution in research papers; (3) We analyse the impact of
coreference resolution on knowledge graph (KG) population; (4) We release a
research KG that is automatically populated from 55,485 papers in 10 STM
domains. Comprehensive experiments show the usefulness of the proposed
approach. Our transfer learning approach considerably outperforms state-of-
the-art baselines on our corpus with an F1 score of 61.4 (+11.0), while the
evaluation against a gold standard KG shows that coreference resolution
improves the quality of the populated KG significantly with an F1 score of
63.5 (+21.8).
###### Keywords:
coreference resolution · information extraction · knowledge graph population · scholarly communication
## 1 Introduction
Current research is generally published in the form of PDF files and, sometimes,
research artefacts of other modalities (data sets, source code, etc.). This
makes them hard to handle for retrieval systems, since their content is hidden
in human- but not machine-interpretable text. In consequence, current academic
search engines are not able to adequately support researchers in their day-to-
day tasks. This is further aggravated by the exploding number of published
articles [4].
Approaches to automatically structure research papers are thus an active area
of research. _Coreference resolution_ is the task of identifying mentions in a
text which refer to the same entity or concept. It is an essential step for
automatic text understanding and facilitates downstream tasks such as text
summarisation or question answering. For instance, the text ‘Coreference
resolution is… It is used for question answering…’ has two coreferent
mentions, ‘Coreference resolution’ and ‘It’. This allows us to extract the fact
<coreference resolution, used_for, question answering>.
Current methods for coreference resolution based on deep learning achieve
quite impressive results (e.g. an F1 score of 79.6 for the OntoNotes 5.0
dataset [19]) in the general domain, that is, data from phone conversations,
news, magazines, etc. However, results of previous work [10, 21, 34, 44] indicate
that general coreference resolution systems perform poorly on scientific text.
This is presumably caused by the specific terminology and phrasing used in a
scientific domain. Some other studies state that annotating scientific text is
costly since it demands certain expertise in the article’s domain [1, 5, 18].
Most corpora for research papers cover only a single domain (e.g. biomedicine
[10], artificial intelligence [26]) and are thus limited to these domains. As
a result, the annotated corpora are relatively small and overall only a few
domains are covered. Datasets for the general domain are usually much larger,
but they have not been exploited yet by approaches for coreference resolution
in research papers.
Coreference resolution is also one of the main steps in the KG population
pipeline [27, 39]. However, to date it is not clear (a) to what extent
coreference resolution can help to reduce the number of scientific concepts in
the populated KG, and (b) how coreference resolution influences the quality of
the populated KG. Besides, a KG comprising multiple scientific domains has not
been populated yet.
In this paper, we address the task of coreference resolution in research
papers and subsequent knowledge graph population. Our contributions can be
summarised as follows: (1) First, we annotate a corpus for coreference
resolution that consists of 110 abstracts from 10 domains from Science,
Technology, and Medicine. The systematic annotation resulted in a substantial
inter-coder agreement (0.68 $\kappa$). We provide and compare baseline results
for this dataset by evaluating five different state-of-the-art approaches: (i)
coreference resolution systems for the general [20] and (ii) for the
artificial intelligence domain [26]; (iii) supervised learning with training
data from our corpus with a SpanBERT [19] and (iv) a SciBERT-based system [3],
and (v) the Scientific Information Extractor [26]. Our experimental results
confirm that state-of-the-art coreference approaches do not perform well on
research papers. (2) Consequently, we propose sequential transfer learning for
coreference resolution in research papers. This approach utilises our corpus
by fine-tuning a model that is pre-trained on a large corpus from the general
domain [37]. Experimental results show that our approach significantly
outperforms the best state-of-the-art baseline (F1 score of 61.4, i.e. +11.0).
(3) We investigate the impact of coreference resolution on automatic KG
population. To evaluate the quality of various KG population strategies, we
(i) compile a gold standard KG from our annotated corpus that contains
scientific concepts referenced by mentions from text, and (ii) present a
procedure to evaluate the clustering results of mentions. (4) We release (i)
an automatically populated KG from 55,485 abstracts of the 10 STM domains and
(ii) a gold KG (Test-STM-KG) from the annotated STM-corpus. Experimental
results show that coreference resolution has only a small impact on the number
of concepts in a populated KG, but it helps to improve the quality of the KG
significantly: the population with coreference resolution yields an F1 score
of 63.5 evaluated against the gold KG (+21.8 F1). We release all our corpora
and source code to facilitate further research.
The remainder of the paper is organised as follows: Section 2 summarises
related work on coreference resolution. Section 3 describes the annotation
procedure and the characteristics of the corpus, and our proposed approaches
for coreference resolution, KG population and KG evaluation. The experimental
setup and results are reported in Section 4 and 5, while Section 6 concludes
the paper and outlines future work.
## 2 Related Work
### 2.1 Approaches for Coreference Resolution
For a given document $d$, the task of coreference resolution is (a) to extract
mentions of scientific concepts $M(d)=\\{m_{1},...,m_{h}\\}$, and (b) to
cluster mentions that refer to the same concept, i.e. $c_{d}(m)\subseteq M(d)$
is the cluster for mention $m$. Recent approaches mostly rely on supervised
learning and can be categorised into three groups [32]: (1) Mention-pair
models [33, 45] are binary classifiers that determine whether two mentions are
coreferent or not. (2) Entity-mention models [8, 41] determine whether a
mention is coreferent to a preceding _cluster_. A cluster has more expressive
features compared to a mention in mention-pair models. (3) Ranking-based
models [11, 24, 30] simultaneously rank all candidate antecedents (i.e.
preceding mention candidates). This enables the model to identify the most
probable antecedent.
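Seen through the lens of mention-pair models, the clustering step (b) amounts to grouping mentions by pairwise coreference decisions; a minimal sketch in Python (data shapes and names are ours, not from the literature), using a union-find so that transitively linked mentions end up in one cluster:

```python
from collections import defaultdict

def cluster_mentions(mentions, coreferent_pairs):
    """Group mentions into clusters given pairwise coreference links.

    mentions: list of (start, end) spans.
    coreferent_pairs: list of (i, j) index pairs judged coreferent.
    Returns a list of clusters, each a list of spans; mentions without
    any link become singleton clusters.
    """
    parent = list(range(len(mentions)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in coreferent_pairs:
        parent[find(i)] = find(j)

    groups = defaultdict(list)
    for idx, span in enumerate(mentions):
        groups[find(idx)].append(span)
    return list(groups.values())

# e.g. 'Coreference resolution' ... 'It' ... 'question answering'
mentions = [(0, 22), (26, 28), (60, 78)]
clusters = cluster_mentions(mentions, [(0, 1)])
```

Entity-mention and ranking-based models make richer decisions, but produce the same kind of partition over mentions.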
Lee et al. [24, 25] propose an end-to-end neural coreference resolution model.
It is a ranking-based model that jointly recognises mentions and clusters.
Therefore, the model considers all spans in the text as possible mentions and
learns distributions over possible antecedents for each mention. For
computational efficiency, candidate spans and antecedents are pruned during
training and inference. Joshi et al. [20] enhance Lee et al.’s model with
BERT-based word embeddings [13], while Ma et al. [29] improve the model with
better attention mechanisms and loss functions.
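The core ranking step of such end-to-end models can be illustrated as follows (a simplified sketch, not the authors' implementation): each span scores all preceding candidate antecedents plus a dummy antecedent with fixed score 0, and is linked to the highest-scoring candidate, or starts a new cluster if the dummy wins.

```python
def link_antecedents(pairwise_scores):
    """pairwise_scores[i][j] is the coreference score between span i
    and candidate antecedent j (for j < i, so row i has i entries).
    Returns, for each span, the chosen antecedent index, or None when
    the dummy antecedent (score 0) wins and a new cluster starts."""
    links = []
    for scores in pairwise_scores:
        best_j, best_s = None, 0.0  # dummy antecedent: fixed score 0
        for j, s in enumerate(scores):
            if s > best_s:
                best_j, best_s = j, s
        links.append(best_j)
    return links

# Span 0 has no candidates; span 1 prefers span 0; span 2 prefers the dummy.
links = link_antecedents([[], [2.5], [-0.3, -1.2]])
```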
Furthermore, several approaches proposed multi-task learning, such that
related tasks may benefit from knowledge in other tasks to achieve better
prediction accuracy: Luan et al. [26, 49] train a model on three tasks
(coreference resolution, entity and relation extraction) using one dataset of
research papers. Sanh et al. [43] introduce a multi-task model that is trained
on four tasks (mention detection, coreference resolution, entity and relation
extraction) using two different datasets in the general domain.
Results of some previous studies [10, 34, 21, 44] revealed that general
coreference systems do not work well in the biomedical domain due to the lack
of domain knowledge. For instance, on the Colorado Richly Annotated Full Text
(CRAFT) corpus [10], a coreference resolution system for the news domain
achieves only 14.0 F1 (-32.0).
To the best of our knowledge, a transfer learning approach from the general to
the scientific domain has not been proposed for coreference resolution yet.
### 2.2 Corpora for Coreference Resolution in Research Papers
For the general domain, multiple datasets exist for coreference resolution,
e.g. Message Understanding Conference (MUC-7) [31], Automatic Content
Extraction (ACE05) [14], or OntoNotes 5.0 [37]. The OntoNotes 5.0 dataset [37]
is the largest one and is used in many benchmark experiments for coreference
resolution systems [24, 20, 29].
Various annotated datasets for coreference resolution also exist for research
papers: the CRAFT corpus [10] covers 97 papers from biomedicine. The corpus of
Schäfer et al. [44] contains 266 papers from computational linguistics and
language technology. Chaimongkol et al. [6] annotated a corpus of 284 papers
from four subdisciplines in computer science. The SciERC corpus [26] comprises
500 abstracts from the artificial intelligence domain and features annotations
for scientific concepts and relations. It was used to generate an artificial
intelligence (AI) knowledge graph [12]. Furthermore, several datasets exist
for scientific concept extraction [1, 26, 5, 40] and relation extraction [26,
18, 1] that cover various scientific domains.
To the best of our knowledge, a corpus for coreference resolution that
comprises a broad range of scientific domains is not available yet.
## 3 Coreference Resolution in Research Papers
As the discussion of related work reveals, existing corpora for coreference
resolution in scientific papers normally cover only a single domain, and
coreference resolution approaches do not perform well on scholarly texts. To
address these issues, we systematically annotate a corpus with coreferences in
abstracts from 10 different science domains. Current approaches for
coreference resolution in research papers do not exploit existing annotated
datasets from the general domain, which are usually much larger than those in
the scientific domain. We propose a sequential transfer learning approach that
takes advantage of large, annotated datasets. Finally, to the best of our
knowledge, the impact of (a) coreference resolution and (b) cross-domain
collapsing of mentions to scientific concepts on KG population with multiple
science domains has not been investigated yet. Consequently, we present an
evaluation procedure for the clustering aspect in the KG population pipeline.
In the following, we describe our annotated corpus, our transfer learning
approach for coreference resolution, and an evaluation procedure for
clustering in KG population.
### 3.1 Corpus for Coreference Resolution in 10 STM Domains
In this section, we describe the STM corpus [5], which we used as the basis
for the annotation, our annotation process, and the characteristics of the
resulting corpus.
##### STM Corpus:
The STM corpus [5] comprises 110 articles from 10 domains in Science,
Technology and Medicine, namely Agriculture (Agr), Astronomy (Ast), Biology
(Bio), Chemistry (Che), Computer Science (CS), Earth Science (ES), Engineering
(Eng), Materials Science (MS), Mathematics (Mat), and Medicine (Med). It
contains annotated mentions of scientific concepts in abstracts with four
domain-independent concept types, namely Process, Method, Material, and Data.
These concept mentions were later linked to entities in Wikipedia and Wikidata
[15]. The 110 articles (11 per domain) were taken from the OA-STM corpus [23]
of Elsevier Labs.
We build upon related work and extend the STM corpus with coreference
annotations. In particular, we (1) annotate coreference links between existing
scientific concept mentions in abstracts using the BRAT annotation tool [46],
and (2) annotate further mentions, i.e. pronouns and noun phrases consisting
of multiple consecutive mentions.
##### Annotation Process:
Other studies have shown that non-expert annotations are viable for the
scientific domain [5, 7, 17, 44, 47], and they are less costly than domain-
expert annotations. Therefore, we also annotate the corpus with non-domain
experts, i.e. by two students in computer science. Furthermore, we follow
mostly the annotation procedure of the STM corpus [5], which consists of the
following three phases:
1. _Pre-Annotation:_ This phase aims at developing annotation guidelines through
trial annotations. We adapted the comprehensive annotation guidelines of the
OntoNotes 5.0 dataset [38], which were developed for the general domain, to
research papers. In particular, we provide briefer and simpler descriptions
with examples from the scientific domain. Within three iterations, both
annotators independently labelled 10, 9, and 7 abstracts (i.e. 26 abstracts in
total). After each iteration, the annotators discussed the outcome and
refined the annotation guidelines.
2. _Independent Annotation:_ After the annotation guidelines were finalised, both
annotators independently re-annotated the previously annotated abstracts and
24 additional abstracts. The final inter-coder agreement was measured on the
50 abstracts (5 per domain) using Cohen’s $\kappa$ [9, 22] and MUC [48]. As
shown in Table 1, we achieve a substantial agreement with 0.68 $\kappa$ and
0.69 MUC.
3. _Consolidation:_ Finally, the remaining 60 abstracts were annotated by one
annotator, and this annotator’s results were used as the gold standard corpus.
Table 1: Per-domain and overall inter-annotator agreement (Cohen’s $\kappa$ and MUC) for coreference resolution annotation in our STM corpus.

| | Mat | Med | Ast | CS | Bio | Agr | ES | Eng | Che | MS | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $\kappa$ | 0.84 | 0.80 | 0.78 | 0.72 | 0.70 | 0.66 | 0.61 | 0.58 | 0.56 | 0.52 | 0.68 |
| MUC | 0.83 | 0.69 | 0.78 | 0.73 | 0.70 | 0.72 | 0.61 | 0.66 | 0.56 | 0.63 | 0.69 |
Table 2: Characteristics of the annotated STM corpus (110 abstracts) per concept type: number of scientific concept mentions, coreferent mentions, coreference clusters, singleton clusters, and overall clusters. MIXED denotes clusters consisting of mentions with different concept types; NONE denotes coreference mentions and clusters without a scientific concept mention.

| | Data | Material | Method | Process | MIXED | NONE | Total |
|---|---|---|---|---|---|---|---|
| # mentions | 1,658 | 2,099 | 258 | 2,112 | 0 | 0 | 6,127 |
| # coreferent mentions | 351 | 910 | 101 | 510 | 0 | 705 | 2,577 |
| # coreference clusters | 153 | 339 | 30 | 198 | 50 | 138 | 908 |
| # singleton clusters | 1,307 | 1,189 | 157 | 1,602 | 0 | 0 | 4,255 |
| # overall clusters | 1,460 | 1,528 | 187 | 1,800 | 50 | 138 | 5,163 |
Table 3: Characteristics of the STM corpus per domain (11 abstracts per domain).

| | Agr | Ast | Bio | Che | CS | ES | Eng | MS | Mat | Med | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| # mentions | 741 | 791 | 649 | 553 | 483 | 698 | 741 | 574 | 297 | 600 | 6,127 |
| # coreferent mentions | 276 | 365 | 275 | 282 | 181 | 241 | 318 | 256 | 124 | 259 | 2,577 |
| # coreference clusters | 106 | 120 | 98 | 90 | 67 | 93 | 117 | 87 | 48 | 82 | 908 |
| # singleton clusters | 520 | 549 | 443 | 384 | 339 | 525 | 503 | 371 | 210 | 411 | 4,255 |
| # clusters | 626 | 669 | 541 | 474 | 406 | 618 | 620 | 458 | 258 | 493 | 5,163 |
##### Corpus Characteristics:
Table 2 shows the characteristics of the resulting corpus broken down per
concept type, while they are listed per domain in Table 3. The original corpus
has in total 6,127 mentions. 2,577 mentions were annotated as coreferent
resulting in 908 coreference clusters. Thus, each coreference cluster contains
on average 2.84 mentions, while _Method_ clusters contain the most (3.4
mentions) and _Data_ clusters the least (2.3 mentions). Furthermore, 705
mentions were annotated additionally (referred to as NONE) since they
represent pronouns (422 mentions) or noun phrases consisting of multiple
consecutive original mentions (283 mentions) such as ‘… [[A], [B], and [C]
[treatments]]… [These treatments]…’. Fifty clusters (5%) contain mentions with
different concept types (referred to as MIXED) due to disagreements between
the annotators of the original concept mentions, and the annotators of
coreferences. For instance, non-coreferent mentions were annotated as
coreferent, or coreferent mentions have different concept types. Finally, 138
clusters (15%) do not have a concept type (NONE) since they form clusters
which are not coreferent with the original concept mentions.
### 3.2 Transfer Learning for Coreference Resolution
We apply sequential transfer learning [42] for coreference resolution in
research papers: a model pre-trained on a large (source) dataset is fine-tuned
on our (target) dataset. As the source dataset, we use the
English portion of the OntoNotes 5.0 dataset [37], since it is a broad corpus
that consists of 3,493 documents with telephone conversations, magazine and
news articles, web data, broadcast conversations, and the New Testament.
Besides, our annotation guidelines were adapted from OntoNotes 5.0.
For the model, we utilise _BERT for Coreference Resolution (BFCR)_ [20] with
_SpanBERT_ [19] word embeddings. This model achieves state-of-the-art results
on the OntoNotes dataset [19]. Another advantage is the availability of the
pre-trained model and the source code. The BFCR model improves Lee et al.’s
approach [25] by replacing the LSTM encoder with the SpanBERT transformer-
encoder. SpanBERT [19] has different training objectives than BERT [13] to
better represent spans of text.
### 3.3 Cross-Domain Research Knowledge Graph Population
Let $d\in D$ be an abstract, $M(d)=\\{m_{1},...,m_{h}\\}$ the mentions of
scientific concepts in $d$, and $c_{d}(m_{i})\subseteq M(d)$ the corresponding
coreference cluster for mention $m_{i}$ in $d$. If mention $m_{s}$ is not
coreferent with other mentions in $d$, then $c_{d}(m_{s})=\\{m_{s}\\}$ is a
singleton cluster. The set of all clusters is denoted by $C$. An equivalence
relation $collapsable\subseteq C\times C$ defines if two clusters can be
collapsed, i.e. if the clusters refer to the same scientific concept. To
create the set of all concepts $E$, we build the quotient set for the set of
clusters $C$ with respect to the relation $collapsable$:
$C := \{c_{d}(m) \mid d\in D, m\in M(d)\}$ (1)
$[c] := \{x\in C \mid collapsable(c,x)\}$ (2)
$E := \{[c] \mid c\in C\}$ (3)
Now, we can construct the KG: for each paper $d\in D$ and for each scientific
concept $e\in E$ we create a node in the KG. The scientific concept type of
$e$ is the most frequent concept type of all mentions in $e$. Then, for each
mention $m\in M(d)$ we create a ‘mentions’ link between the paper and the
corresponding scientific concept $[m]\in E$.
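Equations (1)-(3) and the KG construction can be sketched as follows, with a hypothetical `cluster_key` function standing in for the $collapsable$ relation (clusters with equal keys are collapsed into one concept):

```python
from collections import Counter, defaultdict

def populate_kg(docs, cluster_key):
    """docs: {doc_id: list of clusters}; a cluster is a list of
    (mention_text, concept_type) tuples.  Returns the concept set E
    (key -> merged mentions), the majority concept type per concept,
    and the 'mentions' edges (doc_id, concept key)."""
    concepts = defaultdict(list)  # E: quotient set, one entry per key
    edges = set()
    for doc_id, clusters in docs.items():
        for cluster in clusters:
            key = cluster_key(cluster)
            concepts[key].extend(cluster)
            edges.add((doc_id, key))
    # concept type = most frequent type among the concept's mentions
    types = {k: Counter(t for _, t in ms).most_common(1)[0][0]
             for k, ms in concepts.items()}
    return concepts, types, edges

docs = {
    "d1": [[("neural network", "Method"), ("it", "Method")]],
    "d2": [[("Neural Network", "Method")]],
}
# toy key: longest mention, lower-cased
label = lambda cluster: max((m for m, _ in cluster), key=len).lower()
concepts, types, edges = populate_kg(docs, label)
```

With this key, the clusters of both papers collapse to a single concept, which both papers then link via a ‘mentions’ edge.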
##### Cross-Domain vs. In-Domain Collapsing:
One commonly used approach to define the $collapsable$ relation is to treat
two clusters as equivalent if and only if the ‘label’ of the clusters is the
same. The label of a cluster is the longest mention in the cluster normalised
by (a) lower-casing, (b) removing articles, possessives and demonstratives,
(c) resolving acronyms, and (d) lemmatisation using WordNet [16] to transform
plural forms to singular. Other studies [12, 26] used a similar label function
for KG population.
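A simplified sketch of such a label function (the full procedure additionally resolves acronyms and lemmatises with WordNet [16]; here a naive plural rule stands in for the lemmatiser, and the determiner list is illustrative):

```python
import re

DETERMINERS = {"a", "an", "the", "this", "that", "these", "those",
               "its", "their", "our"}

def cluster_label(cluster):
    """Return a normalised label for a cluster of mention strings:
    longest mention, lower-cased, with articles/possessives/
    demonstratives removed and a crude singularisation of the head."""
    mention = max(cluster, key=len).lower()
    tokens = [t for t in re.findall(r"[a-z0-9-]+", mention)
              if t not in DETERMINERS]
    if tokens and tokens[-1].endswith("s") and not tokens[-1].endswith("ss"):
        tokens[-1] = tokens[-1][:-1]  # naive plural -> singular
    return " ".join(tokens)

label = cluster_label(["These treatments", "treatments", "they"])
```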
However, a research KG that comprises multiple scientific disciplines has not
been populated yet. Thus, it is not clear whether it is feasible to collapse
clusters across domains. Usually, terms within a scientific domain are
unambiguous. However, some terms have different meanings across scientific
disciplines (e.g. “neural network” in _CS_ and _Med_). Thus, we investigate
both cross-domain and in-domain collapsing strategies.
##### Knowledge Graph Population Approach:
We populate a research KG with research papers from multiple scientific
domains, i.e. 55,485 abstracts of Elsevier with CC-BY licence from the 10
investigated domains. First, we extract (a) concept mentions from the
abstracts using the scientific concept extractor of the STM-corpus [5], and
(b) clusters within the abstracts with our transfer learning coreference
model. Then, those mention clusters, which contain solely mentions recognised
by the coreference resolution model and not by the scientific concept
extraction model, are dropped, since the coreference resolution model does not
recognise the concept type of the mentions. Finally, the remaining clusters
serve for the population of the KG as described above.
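The filtering step above can be sketched as follows (data shapes are illustrative): a cluster survives only if at least one of its mentions was also recognised by the concept extractor and thus carries a concept type.

```python
def filter_clusters(clusters, typed_mentions):
    """Keep only clusters containing at least one mention that the
    scientific concept extractor also recognised.

    clusters: list of lists of (start, end) spans from the
        coreference model.
    typed_mentions: set of (start, end) spans with a known
        concept type."""
    return [c for c in clusters
            if any(span in typed_mentions for span in c)]

clusters = [[(0, 5), (10, 12)],  # contains a typed mention -> kept
            [(20, 23)]]          # coreference-only mention -> dropped
kept = filter_clusters(clusters, {(0, 5)})
```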
### 3.4 Evaluation Procedure of Clustering in KG Population
One common approach to evaluate the quality of a populated KG is to let humans
annotate a (random) subset of statements as true or false and to calculate
precision and recall [12, 50]. To evaluate recall, a ground-truth collection
capturing _all_ knowledge is necessary, which is usually difficult to obtain
[50]. To the best of our knowledge, a common approach to
evaluate the clustering aspect of the KG population pipeline does not exist
yet. Thus, in the following, we present (1) an annotated test KG, and (2)
metrics to evaluate clustering of mentions to concepts in KG population.
##### Test KG:
To enable evaluation of KG population strategies, we compile a test KG,
referred to as _Test-STM-KG_. For this purpose, we reuse the STEM-ECR corpus
[15], in which 1,221 mentions of the STM corpus are linked to Wikipedia
entities. First, we extract all annotated clusters of the STM corpus in which
all mentions of the cluster uniquely refer to the same Wikipedia entity. Then,
we collapse all clusters which refer to the same Wikipedia entity to concepts.
Formally, the Test-STM-KG is a partition of mentions, where each part denotes
a concept, i.e. a disjoint set of mentions. A mention is uniquely represented
by the tuple (start offset, end offset, concept type, doc id).
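Compiling such a partition can be sketched as follows (mention tuples as defined above; the entity identifier "E1" and data shapes are illustrative): clusters whose mentions all link to one Wikipedia entity are kept and collapsed by that entity.

```python
from collections import defaultdict

def build_test_kg(clusters, entity_of):
    """clusters: list of lists of mention tuples
    (start offset, end offset, concept type, doc id).
    entity_of: maps a mention tuple to its linked Wikipedia entity.
    Clusters whose mentions do not all resolve to a single entity are
    skipped; the rest are collapsed by entity into concepts."""
    concepts = defaultdict(set)
    for cluster in clusters:
        entities = {entity_of.get(m) for m in cluster}
        if len(entities) == 1 and None not in entities:
            concepts[entities.pop()].update(cluster)
    return dict(concepts)

m1 = (0, 3, "Data", "d1")
m2 = (5, 8, "Data", "d1")
m3 = (0, 3, "Data", "d2")
m4 = (9, 12, "Data", "d2")  # unlinked mention -> its cluster is dropped
kg = build_test_kg([[m1, m2], [m3], [m4]],
                   {m1: "E1", m2: "E1", m3: "E1"})
```

The result is a disjoint set of mentions per concept, exactly the partition used as the ‘key’ in the evaluation below.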
Table 4 shows the characteristics of the compiled Test-STM-KG. It consists of
920 clusters, of which 711 are singleton clusters. These clusters were
collapsed to 762 concepts, of which 31 concepts are used across multiple
domains (referred to as MIX).
Table 4: Characteristics of the _Test-STM-KG_: number of concepts per concept type and per domain. MIX denotes the number of cross-domain concepts.

| | Agr | Ast | Bio | CS | Che | ES | Eng | MS | Mat | Med | MIX | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Data | 5 | 18 | 3 | 20 | 4 | 9 | 28 | 13 | 37 | 8 | 9 | 154 |
| Material | 27 | 35 | 30 | 20 | 26 | 52 | 32 | 30 | 9 | 40 | 7 | 308 |
| Method | 1 | 1 | 1 | 21 | 6 | 2 | 4 | 10 | 3 | 8 | 7 | 64 |
| Process | 17 | 12 | 21 | 34 | 13 | 33 | 20 | 25 | 15 | 38 | 8 | 236 |
| Total | 50 | 66 | 55 | 95 | 49 | 96 | 84 | 78 | 64 | 94 | 31 | 762 |
##### Evaluation Procedure:
To evaluate the clustering result of a KG population strategy, we use the
metrics of coreference resolution. The three popular metrics for coreference
resolution are $MUC$ [48], $B^{3}$ [2] and $CEAFe_{\phi 4}$ [28]. Each of them
represents different evaluation aspects (see [36] for more details). To
calculate these metrics, we treat the gold concepts (i.e. a partition of
mentions) of the Test-STM-KG as the ‘key’ and the predicted concepts as the
‘response’. We also report the _CoNLL P/R/F1_ scores, that is, the averages of
$MUC$’s, $B^{3}$’s and $CEAFe_{\phi 4}$’s respective precision (P), recall (R)
and F1 scores. The CoNLL metrics were proposed for the Conference on
Computational Natural Language Learning (CoNLL) shared tasks on coreference
resolution [36].
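Since the CoNLL scores are component-wise averages of the three metrics, they can be computed as follows; the example triples are the $MUC$, $B^{3}$ and $CEAFe_{\phi 4}$ results of our approach #6 (cf. Table 5) and reproduce its CoNLL P/R/F1 of 62.0/61.0/61.4:

```python
def conll_score(muc, b3, ceafe):
    """Each argument is a (precision, recall, F1) triple; returns the
    component-wise average as the CoNLL (P, R, F1), rounded to one
    decimal place."""
    triples = (muc, b3, ceafe)
    return tuple(round(sum(t[i] for t in triples) / 3, 1)
                 for i in range(3))

conll = conll_score((64.5, 63.5, 63.9),   # MUC  (P, R, F1)
                    (61.0, 60.0, 60.4),   # B^3
                    (60.5, 59.6, 60.0))   # CEAF_e
```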
## 4 Experimental Setup
Here we describe our experimental setup for coreference resolution and KG
population.
### 4.1 Automatic Coreference Resolution
We evaluate three different state-of-the-art architectures on the STM dataset:
(I) _BERT for Coreference Resolution (BFCR)_ [20] with _SpanBERT_ [19] word
embeddings (referred to as _BFCR_Span_), (II) BFCR with _SciBERT_ [3] word
embeddings (referred to as _BFCR_Sci_), and (III) _Scientific Information
Extractor (SCIIE)_ [26] with ELMo [35] word embeddings (referred to as
_SCIIE_). The three architectures are evaluated in the following six
approaches (#1 - #6):
* •
_Pre-Trained Models:_ We evaluate already pre-trained models on the test sets
of the STM corpus, i.e. #1 _BFCR_Span_ trained on the English portion of the
OntoNotes dataset [38], and #2 _SCIIE_ trained on SciERC [26] from the AI
domain.
* •
_Supervised Learning:_ We train a model from scratch with the three
architectures using the training data of the STM corpus and evaluate their
performance with the test sets of STM: #3 _BFCR_Span_ , #4 _BFCR_Sci_ , and #5
_SCIIE_.
* •
_Transfer Learning:_ This is our proposed approach #6. We fine-tune all
parameters of a pre-trained model on the English portion of the OntoNotes
dataset [19] with the training data of our STM corpus. For that, we use the
_BFCR_Span_ architecture.
##### Evaluation:
We use the metrics $MUC$ [48], $B^{3}$ [2], $CEAFe_{\phi 4}$ [28] and $CoNLL$
[36] in compliance with other studies on coreference resolution [20, 29, 24].
To obtain robust results, we apply five-fold cross-validation, according to
the data splits given by Brack et al. [5], and report averaged results. For
each fold, the dataset is split into train/validation/test sets with 8/1/2
abstracts per domain, respectively, i.e. 80/10/20 abstracts. We reuse the
original implementations and default hyperparameters of the above
architectures. Hyperparameter-tuning of the best baseline approach #3
according to [20] confirmed that the default hyperparameters of _BFCR_Span_
perform best on our corpus.
### 4.2 Evaluation of KG Population Strategies
We compare four KG population strategies: (1) cross-domain and (2) in-domain
collapsing, as well as (3) cross-domain and (4) in-domain collapsing without
coreference resolution. To evaluate cross-domain and in-domain collapsing, we
take the gold clusters (i.e. mention clusters within the abstracts) of the
Test-STM-KG and collapse them to concepts according to the respective
strategy. When leaving out the coreference resolution step, we treat all
mentions in the Test-STM-KG as singleton clusters and collapse them to
concepts according to the respective strategy. Finally, we calculate the
metrics as described in Section 3.4.
## 5 Results and Discussion
In this section, we discuss the experimental results for automatic coreference
resolution and KG population.
### 5.1 Automatic Coreference Resolution
Table 5: Performance of the baseline approaches #1 - #5 and our proposed transfer learning approach #6 on the test sets of the STM corpus across five-fold cross-validation.

| | | Training data | $MUC$ P | R | F1 | $B^{3}$ P | R | F1 | $CEAFe_{\phi 4}$ P | R | F1 | $CoNLL$ P | R | F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| #1 | BFCR_Span | OntoNotes | 57.1 | 31.1 | 40.2 | 55.9 | 25.7 | 35.2 | 50.2 | 28.1 | 36.0 | 54.4 | 28.3 | 37.1 |
| #2 | SCIIE | SciERC | 13.4 | 4.5 | 6.8 | 13.1 | 4.3 | 6.5 | 18.1 | 6.0 | 9.0 | 14.9 | 4.9 | 7.4 |
| #3 | BFCR_Span | STM | 61.6 | 45.6 | 52.3 | 59.8 | 41.5 | 48.8 | 57.9 | 44.4 | 50.0 | 59.8 | 43.8 | 50.4 |
| #4 | BFCR_Sci | STM | 61.9 | 40.2 | 48.6 | 59.7 | 36.1 | 44.9 | 61.7 | 36.9 | 46.0 | 61.1 | 37.7 | 46.5 |
| #5 | SCIIE | STM | 60.3 | 45.2 | 51.6 | 57.6 | 41.7 | 48.3 | 56.6 | 43.6 | 49.1 | 58.1 | 43.5 | 49.7 |
| #6 | BFCR_Span | Onto$\rightarrow$STM | 64.5 | 63.5 | 63.9 | 61.0 | 60.0 | 60.4 | 60.5 | 59.6 | 60.0 | 62.0 | 61.0 | 61.4 |
Table 6: Per-domain and overall CoNLL F1 results of the best baseline #3 and our transfer learning approach #6 on the STM corpus across five-fold cross-validation.

| | | Training data | Agr | Ast | Bio | Che | CS | ES | Eng | MS | Mat | Med | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| #3 | BFCR_Span | STM | 48.0 | 50.5 | 52.2 | 49.0 | 59.1 | 39.6 | 52.8 | 47.6 | 42.5 | 51.0 | 50.4 |
| #6 | BFCR_Span | Onto$\rightarrow$STM | 62.8 | 61.1 | 57.5 | 56.3 | 74.9 | 57.5 | 59.8 | 52.1 | 55.7 | 62.1 | 61.4 |
Table 5 shows the overall results of the six evaluated approaches, and Table 6
the results per domain of the best baseline #3 and our approach #6. Our
transfer learning approach #6 _BFCR_Span_ from OntoNotes (Onto) [37] to STM
significantly outperforms the best baseline approach #3 with an overall CoNLL
F1 of 61.4 (+11.0) and a low standard deviation of $\pm 1.5$ across the five
folds.
The approaches #1 _BFCR_Span_ pre-trained on OntoNotes [37], and #2 _SCIIE_
pre-trained on SciERC [26] achieve a CoNLL F1 score of 37.1 and 7.4,
respectively. These scores are quite low compared to the approaches #3 - #6
that use training data of the STM corpus. This indicates that models pre-
trained on existing datasets do not generalise sufficiently well for
coreference resolution in research papers. Models trained only on the STM
corpus (i.e. #3 - #5) achieve better results. However, they have quite low
recall scores indicating that the size of the training data might not be
sufficient to enable the model to generalise well. SciBERT #4, although pre-
trained on scientific texts, performs worse than SpanBERT #3. Presumably the
reason is that SpanBERT has approximately 3 times more parameters than
SciBERT. Our transfer learning approach #6 achieves the best results with
quite balanced precision and recall scores.
Furthermore, to evaluate the effectiveness of our transfer learning approach,
we compare the best baseline #3 and our transfer learning approach #6 also
with the SciERC corpus [26]. The SciERC corpus comprises 500 abstracts from
the AI domain. Since SciERC has around 5 times more training data than STM, we
compare the approaches #3 and #6 also using only $\frac{1}{5}$th of the
training data in SciERC while keeping the original validation and test sets.
It can be seen in Table 7 that our transfer learning approach #6 slightly
improves on the baseline when using the whole training data, with 60.1 F1
(+0.8). When using only $\frac{1}{5}$th of the training data, our transfer
learning approach noticeably outperforms the baseline with 54.2 F1 (+7.1).
Thus, our transfer learning approach can significantly improve the performance
of coreference resolution in research papers when little labelled data is
available.
Table 7: CoNLL scores on the test sets of the SciERC corpus [26] across 3 random restarts of the approaches: the current state of the art of Luan et al., the best baseline approach (#3), and our transfer learning approach (#6). We report results using the whole training data of SciERC and using only $\frac{1}{5}$th of it (referred to as $\frac{1}{5}$SciERC).

| | | Training data | P | R | F1 |
|---|---|---|---|---|---|
| | Luan et al. [26] | SciERC | 52.0 | 44.9 | 48.2 |
| #3 | BFCR_Span | SciERC | 63.3 | 55.7 | 59.3 |
| #6 | BFCR_Span | OntoNotes$\rightarrow$SciERC | 63.9 | 57.1 | 60.1 |
| #3 | BFCR_Span | $\frac{1}{5}$SciERC | 63.1 | 39.1 | 47.1 |
| #6 | BFCR_Span | OntoNotes$\rightarrow\frac{1}{5}$SciERC | 52.8 | 56.7 | 54.2 |
### 5.2 Cross-Domain Research KG
In this subsection, we describe the characteristics of our populated KG and
discuss the evaluation results of various KG population strategies.
#### 5.2.1 Characteristics of the Research KG:
Table 8 shows the characteristics of the populated KGs per domain. The
resulting KGs with cross-domain and in-domain collapsing contain more than
994,000 and 1.1 million scientific concepts, respectively, obtained from
55,485 abstracts with more than 2.1 million concept mentions and 726,000
coreferent mentions. _Ast_ and _Bio_ are the most represented domains, while
_CS_ and _Mat_ are the most underrepresented.
Table 8: Characteristics of the populated research KGs per domain: (1) number
of abstracts, number of extracted scientific concept mentions and coreferent
mentions, (2) the number of scientific concepts for the KG with cross-domain
collapsing, (3) in-domain collapsing, (4) cross-domain collapsing but without
coreference resolution, and (5) in-domain collapsing but without coreference
resolution. Reduction denotes the percentage reduction from mentions to
scientific concepts, and MIX denotes concepts shared across domains.
| Agr | Ast | Bio | CS | Che | ES | Eng | MS | Mat | Med | MIX | Total
---|---|---|---|---|---|---|---|---|---|---|---|---
# abstracts | 7,731 | 15,053 | 11,109 | 1,216 | 1,234 | 2,352 | 3,049 | 2,258 | 665 | 10,818 | - | 55,485
# mentions | 332,983 | 370,311 | 423,315 | 45,388 | 46,203 | 129,288 | 127,985 | 86,490 | 20,466 | 586,019 | - | 2,168,448
# coref. men. | 108,579 | 120,942 | 143,292 | 17,674 | 14,059 | 40,974 | 42,654 | 25,820 | 8,510 | 203,884 | - | 726,388
cross-domain collapsing
KG concepts | 138,342 | 173,027 | 177,043 | 20,474 | 21,298 | 62,674 | 55,494 | 39,211 | 9,275 | 227,690 | 70,044 | 994,572
\- Data | 27,132 | 64,537 | 32,946 | 5,380 | 5,124 | 19,542 | 17,053 | 10,629 | 2,982 | 66,473 | 19,715 | 271,513
\- Material | 69,534 | 45,296 | 83,627 | 6,242 | 10,154 | 24,322 | 19,689 | 17,276 | 2,406 | 68,141 | 20,812 | 367,499
\- Method | 2,992 | 8,819 | 6,135 | 2,001 | 1,055 | 1,776 | 2,953 | 1,605 | 685 | 9,363 | 1,627 | 39,011
\- Process | 38,684 | 54,375 | 54,335 | 6,851 | 4,965 | 17,034 | 15,799 | 9,701 | 3,202 | 83,713 | 27,890 | 316,549
reduction | 58% | 53% | 58% | 55% | 54% | 52% | 57% | 55% | 55% | 61% | - | 54%
in-domain collapsing
KG concepts | 180,135 | 197,605 | 229,201 | 30,736 | 32,191 | 81,584 | 78,417 | 55,358 | 14,567 | 278,686 | - | 1,178,480
reduction | 46% | 47% | 46% | 32% | 30% | 37% | 39% | 36% | 29% | 52% | - | 46%
cross-domain collapsing without coreference resolution
KG concepts | 146,894 | 182,479 | 187,557 | 21,950 | 22,555 | 66,600 | 59,689 | 41,776 | 9,939 | 242,797 | 77,493 | 1,059,729
reduction | 56% | 51% | 56% | 52% | 51% | 48% | 53% | 52% | 51% | 59% | - | 51%
in-domain collapsing without coreference resolution
KG concepts | 184,218 | 199,894 | 234,399 | 31,525 | 32,937 | 83,445 | 80,476 | 56,690 | 14,911 | 284,547 | - | 1,203,042
reduction | 45% | 46% | 45% | 31% | 29% | 35% | 37% | 34% | 27% | 51% | - | 45%
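The "reduction" rows of Table 8 follow directly from the mention and concept counts; the following sketch recomputes them from the Total column:

```python
# Recompute the "reduction" rows of Table 8 from the Total column:
# reduction = 1 - (#KG concepts / #concept mentions), in percent.
total_mentions = 2_168_448

kg_concepts = {
    "cross-domain": 994_572,
    "in-domain": 1_178_480,
    "cross-domain, no coref.": 1_059_729,
    "in-domain, no coref.": 1_203_042,
}

for strategy, concepts in kg_concepts.items():
    reduction = round(100 * (1 - concepts / total_mentions))
    print(f"{strategy}: {reduction}%")   # 54%, 46%, 51%, 45%
```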
#### 5.2.2 Evaluation of KG Population Strategies:
Next, we discuss the different KG population strategies. For each strategy,
Table 8 reports the number of concepts in the populated KG and the percentage
reduction from mentions to concepts, while Table 9 reports the evaluation
results against the Test-STM-KG.
##### Cross-Domain vs. In-Domain Collapsing:
Cross-domain collapsing achieves a higher CoNLL F1 score (64.8) than in-domain
collapsing (63.5, see Table 9). However, in-domain collapsing yields, as
expected, a higher precision (CoNLL P 85.5), since some terms have different
meanings across domains (e.g. Measure_(mathematics) vs. Measurement in
https://en.wikipedia.org). Furthermore, the Test-STM-KG contains only 31
cross-domain concepts due to its small size. Thus, we expect that cross-domain
collapsing would yield worse results on a larger test set.
Furthermore, as shown in Table 8, cross-domain collapsing yields fewer
concepts than in-domain collapsing (more than 994,000 versus 1.1 million
concepts). We can also observe that only 70,044 (7%) of the concepts are used
across multiple domains. This indicates that each scientific domain mostly
uses its own terminology. However, concepts used across domains can have
different meanings. Thus, when precision is more important than recall in
downstream tasks, in-domain collapsing should be the preferred choice.
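The two collapsing strategies can be sketched as follows. The grouping key used here (a lowercased surface form, either global or per domain) is a simplified, illustrative stand-in for the paper's actual collapsing procedure:

```python
# Illustrative sketch of the two collapsing strategies: mentions with the
# same (here: lowercased) surface form are merged into one KG concept,
# either globally (cross-domain) or separately per domain (in-domain).
# The normalisation is a hypothetical simplification, not the paper's exact method.

mentions = [
    ("CS", "neural network"),
    ("Med", "Neural Network"),   # same term, different domain
    ("CS", "Neural network"),
    ("Med", "cohort study"),
]

def collapse(mentions, cross_domain):
    concepts = set()
    for domain, text in mentions:
        key = text.lower() if cross_domain else (domain, text.lower())
        concepts.add(key)
    return concepts

print(len(collapse(mentions, cross_domain=True)))    # 2 concepts
print(len(collapse(mentions, cross_domain=False)))   # 3 concepts
```

Cross-domain collapsing merges the two "neural network" variants across CS and Med into one concept; in-domain collapsing keeps one per domain, mirroring why the in-domain KG contains more concepts.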
##### Effect of Coreference Resolution:
Coreference resolution has only a small impact on the number of resulting
concepts in a populated KG (see Table 8). However, as shown in Table 9,
omitting the coreference resolution step during KG population yields only low
CoNLL F1 scores, i.e. 41.7 (-21.8) and 43.5 (-21.3). Thus, coreference
resolution significantly improves the quality of a populated KG.
Table 9: Performance of the collapsing strategies evaluated against the _Test-
STM-KG_ : in-domain and cross-domain collapsing with and without coreference
resolution.
| #concepts | $MUC$ | $B^{3}$ | $CEAFe_{\phi 4}$ | $CoNLL$
---|---|---|---|---|---
| in KG | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1
in-domain collapsing | 859 | 86.3 | 70.6 | 77.7 | 86.0 | 69.0 | 76.6 | 84.1 | 23.1 | 36.2 | 85.5 | 54.2 | 63.5
\- without coreferences | 900 | 75.5 | 38.8 | 51.2 | 75.2 | 37.9 | 50.4 | 71.1 | 14.0 | 23.4 | 73.9 | 30.2 | 41.7
cross-domain collapsing | 837 | 85.0 | 73.0 | 78.5 | 84.5 | 72.1 | 77.8 | 84.7 | 24.6 | 38.1 | 84.7 | 56.6 | 64.8
\- without coreferences | 876 | 73.5 | 41.0 | 52.6 | 72.2 | 15.5 | 25.5 | 72.2 | 15.5 | 25.5 | 73.0 | 32.4 | 43.5
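The CoNLL score is the unweighted average of the MUC, B³, and CEAFe F1 scores; it can be recomputed from the F1 columns of Table 9:

```python
# The CoNLL score is the unweighted average of the MUC, B³ and CEAFe F1
# scores; here it is recomputed from the F1 columns of Table 9.
rows = {
    "in-domain collapsing": (77.7, 76.6, 36.2),
    "cross-domain collapsing": (78.5, 77.8, 38.1),
}
for strategy, f1_scores in rows.items():
    conll = round(sum(f1_scores) / 3, 1)
    print(f"{strategy}: CoNLL F1 = {conll}")   # 63.5 and 64.8, as reported
```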
#### 5.2.3 Qualitative Analysis:
We also inspected the top five frequent domain-specific concepts in the
populated KG (a list of these concepts can be found in our public repository).
As far as we can judge with our computer science background, we consider the
extracted top frequent concepts to be reasonable and useful for the domains.
For instance, in Ast the method ‘standard model’ is frequently mentioned,
while in CS the process ‘cyber attack’ appears most often. The frequency of
the top concepts differs significantly between the domains: in Med, Ast, Eng,
ES, and Agr, a top frequent concept is referenced 10.8, 10.2, 4.9, 3.8, and
3.1 times per 1,000 abstracts, respectively, whereas in Che, MS, Mat, Bio, and
CS, a top frequent concept is referenced only 0.3, 0.4, 1.0, 1.4, and 2.3
times per 1,000 abstracts, respectively.
## 6 Conclusions
In this paper, we have investigated the task of coreference resolution in
research papers across 10 different scientific disciplines. We have annotated
a corpus comprising 110 abstracts with coreferences, achieving substantial
inter-coder agreement. Our baseline results demonstrate that current
state-of-the-art approaches for coreference resolution perform poorly on our
corpus. The proposed approach, which uses sequential transfer learning and
exploits annotated datasets from the general domain, noticeably outperforms
these baselines. Thus, our transfer learning approach can help to reduce
annotation costs for scientific papers while obtaining high-quality results.
Furthermore, we have investigated the impact of coreference resolution on KG
population. For this purpose, we have compiled a gold KG from our annotated
corpus and proposed an evaluation procedure for KG population strategies. We
have demonstrated that coreference resolution has only a small impact on the
number of resulting concepts in the KG but significantly improves its quality.
Finally, we have generated a research KG from 55,485 abstracts of the 10
investigated domains. We show that each domain mostly uses its own terminology
and that the populated KG contains useful concepts. To facilitate further
research, we make our corpora and source code publicly available:
https://github.com/arthurbra/stm-coref
In future work, we plan to evaluate multi-task learning approaches, and to
populate and evaluate a much larger research KG to gain further insights into
scientific language use.
## References
* [1] Augenstein, I., Das, M., Riedel, S., Vikraman, L., McCallum, A.: Semeval 2017 task 10: Scienceie - extracting keyphrases and relations from scientific publications. In: SemEval@ACL. pp. 546–555. Association for Computational Linguistics (2017)
* [2] Bagga, A., Baldwin, B.: Algorithms for scoring coreference chains. In: In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference. pp. 563–566 (1998)
* [3] Beltagy, I., Lo, K., Cohan, A.: Scibert: A pretrained language model for scientific text. In: EMNLP/IJCNLP (1). pp. 3613–3618. Association for Computational Linguistics (2019)
* [4] Bornmann, L., Mutz, R.: Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. J. Assoc. Inf. Sci. Technol. 66(11), 2215–2222 (2015)
* [5] Brack, A., D’Souza, J., Hoppe, A., Auer, S., Ewerth, R.: Domain-independent extraction of scientific concepts from research articles. In: ECIR (1). Lecture Notes in Computer Science, vol. 12035, pp. 251–266. Springer (2020)
* [6] Chaimongkol, P., Aizawa, A., Tateisi, Y.: Corpus for coreference resolution on scientific papers. In: LREC. pp. 3187–3190. European Language Resources Association (ELRA) (2014)
* [7] Chambers, A.: Statistical Models for Text Classification and Clustering: Applications and Analysis. Ph.D. thesis, UNIVERSITY OF CALIFORNIA, IRVINE (2013)
* [8] Clark, K., Manning, C.D.: Entity-centric coreference resolution with model stacking. In: ACL (1). pp. 1405–1415. The Association for Computer Linguistics (2015)
* [9] Cohen, J.: A coefficient of agreement for nominal scales. Educational and psychological measurement 20(1), 37–46 (1960)
* [10] Cohen, K.B., Lanfranchi, A., Choi, M.J., Bada, M., Jr., W.A.B., Panteleyeva, N., Verspoor, K., Palmer, M., Hunter, L.E.: Coreference annotation and resolution in the colorado richly annotated full text (CRAFT) corpus of biomedical journal articles. BMC Bioinform. 18(1), 372:1–372:14 (2017)
* [11] Denis, P., Baldridge, J.: Specialized models and ranking for coreference resolution. In: EMNLP. pp. 660–669. ACL (2008)
* [12] Dessi, D., Osborne, F., Recupero, D.R., Buscaldi, D., Motta, E., Sack, H.: Ai-kg: an automatically generated knowledge graph of artificial intelligence. In: Proceedings of ISWC 2020 (accepted for publication) (2020)
* [13] Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT (1). pp. 4171–4186. Association for Computational Linguistics (2019)
* [14] Doddington, G.R., Mitchell, A., Przybocki, M.A., Ramshaw, L.A., Strassel, S.M., Weischedel, R.M.: The automatic content extraction (ACE) program - tasks, data, and evaluation. In: LREC. European Language Resources Association (2004)
* [15] D’Souza, J., Hoppe, A., Brack, A., Jaradeh, M.Y., Auer, S., Ewerth, R.: The STEM-ECR dataset: Grounding scientific entity references in STEM scholarly content to authoritative encyclopedic and lexicographic sources. In: LREC. pp. 2192–2203. European Language Resources Association (2020)
* [16] Fellbaum, C. (ed.): WordNet: An Electronic Lexical Database. Language, Speech, and Communication, MIT Press, Cambridge, MA (1998)
* [17] Fisas, B., Saggion, H., Ronzano, F.: On the discoursive structure of computer graphics research papers. In: LAW@NAACL-HLT. pp. 42–51. The Association for Computer Linguistics (2015)
* [18] Gábor, K., Buscaldi, D., Schumann, A., QasemiZadeh, B., Zargayouna, H., Charnois, T.: Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers. In: SemEval@NAACL-HLT. pp. 679–688. Association for Computational Linguistics (2018)
* [19] Joshi, M., Chen, D., Liu, Y., Weld, D.S., Zettlemoyer, L., Levy, O.: Spanbert: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguistics 8, 64–77 (2020)
* [20] Joshi, M., Levy, O., Zettlemoyer, L., Weld, D.S.: BERT for coreference resolution: Baselines and analysis. In: EMNLP/IJCNLP (1). pp. 5802–5807. Association for Computational Linguistics (2019)
* [21] Kim, J., Nguyen, N.L.T., Wang, Y., Tsujii, J., Takagi, T., Yonezawa, A.: The genia event and protein coreference tasks of the bionlp shared task 2011. BMC Bioinform. 13(S-11), S1 (2012)
* [22] Kopec, M., Ogrodniczuk, M.: Inter-annotator Agreement in Coreference Annotation of Polish, pp. 149–158. Springer International Publishing, Cham (2014). https://doi.org/10.1007/978-3-319-05503-9_15
* [23] Elsevier Labs: Elsevier OA STM corpus. https://github.com/elsevierlabs/OA-STM-Corpus (2017), accessed: 2020-07-15
* [24] Lee, K., He, L., Lewis, M., Zettlemoyer, L.: End-to-end neural coreference resolution. In: EMNLP. pp. 188–197. Association for Computational Linguistics (2017)
* [25] Lee, K., He, L., Zettlemoyer, L.: Higher-order coreference resolution with coarse-to-fine inference. In: NAACL-HLT (2). pp. 687–692. Association for Computational Linguistics (2018)
* [26] Luan, Y., He, L., Ostendorf, M., Hajishirzi, H.: Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In: EMNLP. pp. 3219–3232. Association for Computational Linguistics (2018)
* [27] Lubani, M., Noah, S.A.M., Mahmud, R.: Ontology population: Approaches and design aspects. J. Inf. Sci. 45(4) (2019)
* [28] Luo, X.: On coreference resolution performance metrics. In: HLT/EMNLP. pp. 25–32. The Association for Computational Linguistics (2005)
* [29] Ma, J., Liu, J., Li, Y., Hu, X., Pan, Y., Sun, S., Lin, Q.: Jointly optimized neural coreference resolution with mutual attention. In: WSDM. pp. 402–410. ACM (2020)
* [30] Marasovic, A., Born, L., Opitz, J., Frank, A.: A mention-ranking model for abstract anaphora resolution. In: EMNLP. pp. 221–232. Association for Computational Linguistics (2017)
* [31] Mikheev, A., Grover, C., Moens, M.: Seventh message understanding conference (muc-7). The Association for Computational Linguistics (1998)
* [32] Ng, V.: Machine learning for entity coreference resolution: A retrospective look at two decades of research. In: AAAI. pp. 4877–4884. AAAI Press (2017)
* [33] Ng, V., Cardie, C.: Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution. In: COLING (2002)
* [34] Nguyen, N.L.T., Kim, J., Miwa, M., Matsuzaki, T., Tsujii, J.: Improving protein coreference resolution by simple semantic classification. BMC Bioinform. 13, 304 (2012)
* [35] Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., Zettlemoyer, L.: Deep contextualized word representations. In: NAACL-HLT. pp. 2227–2237. Association for Computational Linguistics (2018)
* [36] Pradhan, S., Luo, X., Recasens, M., Hovy, E.H., Ng, V., Strube, M.: Scoring coreference partitions of predicted mentions: A reference implementation. In: ACL (2). pp. 30–35. The Association for Computer Linguistics (2014)
* [37] Pradhan, S., Moschitti, A., Xue, N., Ng, H.T., Björkelund, A., Uryupina, O., Zhang, Y., Zhong, Z.: Towards robust linguistic analysis using ontonotes. In: CoNLL. pp. 143–152. ACL (2013)
* [38] Pradhan, S., Moschitti, A., Xue, N., Uryupina, O., Zhang, Y.: Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In: EMNLP-CoNLL Shared Task. pp. 1–40. ACL (2012)
* [39] Pujara, J., Singh, S.: Mining knowledge graphs from text. In: WSDM. pp. 789–790. ACM (2018)
* [40] Q. Zadeh, B., Handschuh, S.: The ACL RD-TEC: A dataset for benchmarking terminology extraction and classification in computational linguistics. In: Proceedings of the 4th International Workshop on Computational Terminology (Computerm). pp. 52–63. Association for Computational Linguistics and Dublin City University, Dublin, Ireland (Aug 2014). https://doi.org/10.3115/v1/W14-4807, https://www.aclweb.org/anthology/W14-4807
* [41] ur Rahman, M.A., Ng, V.: Supervised models for coreference resolution. In: EMNLP. pp. 968–977. ACL (2009)
* [42] Ruder, S.: Neural Transfer Learning for Natural Language Processing. Ph.D. thesis, National University of Ireland, Galway (2019)
* [43] Sanh, V., Wolf, T., Ruder, S.: A hierarchical multi-task approach for learning embeddings from semantic tasks. In: AAAI. pp. 6949–6956. AAAI Press (2019)
* [44] Schäfer, U., Spurk, C., Steffen, J.: A fully coreference-annotated corpus of scholarly papers from the ACL anthology. In: COLING (Posters). pp. 1059–1070. Indian Institute of Technology Bombay (2012)
* [45] Soon, W.M., Ng, H.T., Lim, C.Y.: A machine learning approach to coreference resolution of noun phrases. Comput. Linguistics 27(4), 521–544 (2001)
* [46] Stenetorp, P., Pyysalo, S., Topic, G., Ohta, T., Ananiadou, S., Tsujii, J.: brat: a web-based tool for nlp-assisted text annotation. In: EACL. pp. 102–107. The Association for Computer Linguistics (2012)
* [47] Teufel, S., Siddharthan, A., Batchelor, C.: Towards discipline-independent argumentative zoning: Evidence from chemistry and computational linguistics. In: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3. p. 1493–1502. EMNLP ’09, Association for Computational Linguistics, USA (2009)
* [48] Vilain, M.B., Burger, J.D., Aberdeen, J.S., Connolly, D., Hirschman, L.: A model-theoretic coreference scoring scheme. In: MUC. pp. 45–52. ACL (1995)
* [49] Wadden, D., Wennberg, U., Luan, Y., Hajishirzi, H.: Entity, relation, and event extraction with contextualized span representations. In: EMNLP/IJCNLP (1). pp. 5783–5788. Association for Computational Linguistics (2019)
* [50] Weikum, G., Dong, L., Razniewski, S., Suchanek, F.M.: Machine knowledge: Creation and curation of comprehensive knowledge bases. CoRR abs/2009.11564 (2020)
# Holographic study of entanglement and complexity for mixed states
Ashis Saha [email protected] Department of Physics, University of
Kalyani, Kalyani 741235, India Sunandan Gangopadhyay
[email protected] Department of Theoretical Sciences, S.N.
Bose National Centre for Basic Sciences, JD Block, Sector-III, Salt Lake,
Kolkata 700106, India
###### Abstract
In this paper, we holographically quantify the entanglement and complexity for
mixed states by following the prescription of purification. The bulk theory we
consider in this work is a hyperscaling violating solution, characterized by
two parameters, hyperscaling violating exponent $\theta$ and dynamical
exponent $z$. This geometry is dual to a non-relativistic strongly coupled
theory with hidden Fermi surfaces. We first compute the holographic analogue
of entanglement of purification (EoP), given by the minimal area of the
entanglement wedge cross section, and observe the effects of $z$ and $\theta$.
Then in order to probe the mixed state complexity we compute the mutual
complexity for the BTZ black hole and the hyperscaling violating geometry by
incorporating the holographic subregion complexity conjecture. We carry this
out for two disjoint subsystems separated by a distance and also when the
subsystems are adjacent with subsystems making up the full system.
Furthermore, various aspects of holographic entanglement entropy such as
entanglement Smarr relation, Fisher information metric and the butterfly
velocity has also been discussed.
## 1 Introduction
The gauge/gravity duality [1, 2, 3] has been employed to holographically
compute quantum information theoretic quantities and has thereby helped us to
understand bulk-boundary relations. Among the various observables of quantum
information theory, entanglement entropy (EE) has been the most fundamental
quantity to study, as it measures the correlation between two subsystems for a
pure state. EE has a very simple definition, yet it is sometimes notoriously
difficult to compute. However, the holographic computation of entanglement
entropy, known as the Ryu-Takayanagi (RT) prescription, is a remarkably simple
technique which relates the area of a codimension-2 static minimal surface to
the entanglement entropy of a subsystem [4, 5, 6]. The RT prescription, along
with its modification for time-dependent scenarios (the HRT prescription [7]),
has been playing a key role in holographic studies of information theoretic
quantities, since perturbative calculations which could not be done on the
field theory side due to its strongly coupled nature can now be performed on
the bulk side, which is weakly coupled.
Another important information theoretic quantity which has gained much
attention recently is computational complexity. The complexity of a quantum
state represents the minimum number of simple operations which take the
unentangled product state to a target state [8, 9, 10]. There are several
proposals to compute complexity holographically, and recently several
interesting attempts have been made to define complexity in QFT [11, 12, 13,
14]. In the context of holographic computation, it was initially suggested
that the complexity of a state (measured in gates) is proportional to the
volume of the Einstein-Rosen bridge (ERB) which connects the two boundaries of
an eternal black hole [15, 16]
$\displaystyle C_{V}(t_{L},t_{R})=\frac{V_{ERB}(t_{L},t_{R})}{8\pi RG_{d+1}}$
(1)
where $R$ is the AdS radius and $V_{ERB}(t_{L},t_{R})$ is the co-dimension one
extremal volume of ERB which is bounded by the two spatial slices at times
$t_{L}$ and $t_{R}$ of two CFTs that live on the two boundaries of the eternal
black hole. Another conjecture states that complexity can be obtained from the
bulk action evaluated on the Wheeler-DeWitt patch [17, 18, 19]
$\displaystyle C_{A}=\frac{I_{WDW}}{\pi\hbar}~{}.$ (2)
The above two conjectures depend on the whole state of the physical system at
the boundary. In addition to these proposals, there is another conjecture
which depends on the reduced state of the system. It states that the
co-dimension one volume enclosed by the co-dimension two extremal RT surface
is proportional to the complexity
$\displaystyle C_{V}=\frac{V(\Gamma_{A}^{min})}{8\pi RG_{d+1}}~{}.$ (3)
This proposal is known as the holographic subregion complexity (HSC)
conjecture in the literature [20, 21, 22]. Recently, in [23], it was shown
that there exists a relation between the universal pieces of HEE and HSC.
Furthermore, the universal piece of HSC is proportional to the sphere free
energy $F_{S^{p}}$ for even dimensional dual CFTs and proportional to the Weyl
$a$-anomaly for odd dimensional dual CFTs.
In recent times, much attention has been paid to the study of entanglement
entropy and complexity for mixed states. For the study of EE for mixed states,
the entanglement of purification (EoP) [24] and the entanglement negativity
$\mathcal{E}$ [25] have been promising candidates. In the subsequent analysis,
our focus will be on the computation of EoP. Consider a density matrix
$\rho_{AB}$ corresponding to a mixed state in a Hilbert space $\mathcal{H}$,
where $\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}$. The process of
purification states that one can construct a pure state $\ket{\psi}$ from
$\rho_{AB}$ by adding auxiliary degrees of freedom to the Hilbert space
$\mathcal{H}$
$\displaystyle\rho_{AB}=tr_{A^{\prime}B^{\prime}}\ket{\psi}\bra{\psi};~{}\psi\in\mathcal{H}_{AA^{\prime}BB^{\prime}}=\mathcal{H}_{AA^{\prime}}\otimes\mathcal{H}_{BB^{\prime}}~{}.$
(4)
Such states $\psi$ are called purifications of $\rho_{AB}$. It is to be noted
that the process of purification is not unique, and different purifications of
the same mixed state exist. In this setup, the definition of EoP ($E_{P}$)
reads [24]
$\displaystyle
E_{P}(\rho_{AB})=\mathop{min}_{tr_{A^{\prime}B^{\prime}}\ket{\psi}\bra{\psi}}S(\rho_{AA^{\prime}});~{}\rho_{AA^{\prime}}=tr_{BB^{\prime}}\ket{\psi}\bra{\psi}~{}.$
(5)
In the above expression, the minimization is taken over any state $\psi$
satisfying the condition
$\rho_{AB}=tr_{A^{\prime}B^{\prime}}\ket{\psi}\bra{\psi}$, where
$A^{\prime}B^{\prime}$ are arbitrary. In this paper we will compute the
holographic analogy of EoP, given by the minimal area of entanglement wedge
cross section (EWCS) $E_{W}$ [26]. However, there is no direct proof of
$E_{P}=E_{W}$ duality conjecture yet and it is mainly based on the following
properties of $E_{P}$ which are also satisfied by $E_{W}$ [24, 26]. These
properties are as follows.
$\displaystyle(i)$
$\displaystyle~{}E_{P}(\rho_{AB})=S(\rho_{A})=S(\rho_{B});~{}\rho_{AB}^{2}=\rho_{AB}$
$\displaystyle(ii)$ $\displaystyle~{}\frac{1}{2}I(A:B)\leq E_{P}(\rho_{AB})\leq
\min\left[S(\rho_{A}),S(\rho_{B})\right]~{}.$
In the above properties, $I(A:B)=S(A)+S(B)-S(A\cup B)$ is the mutual
information between two subsystems $A$ and $B$. Further, there exists a
critical separation length $D_{c}$ between $A$ and $B$ beyond which there is
no connected phase for any $l$. At $D_{c}$, $E_{W}$ probes the phase
transition of the RT surface $\Gamma_{AB}^{min}$ between connected and
disconnected phases. The disconnected phase is characterised by the condition
mutual information $I(A:B)=0$. Some recent very interesting observations in
this direction can be found in [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
38, 39, 40, 41]. Further, recently several measures for mixed states dual to
EWCS has been proposed. Some of them are odd entropy [42], reflected entropy
[43, 44] and logarithmic negativity [45, 46]. On the other hand, recently the
study of complexity for mixed states has gained appreciable amount of
attention [47, 48, 49, 50]. Similar to the case of EE for mixed state, the
concept of ‘purification’ is also being employed in this context [49, 51]. The
purification complexity is defined as the minimal pure state complexity among
all possible purifications available for a mixed state. Preparing a mixed
state on some Hilbert space $\mathcal{H}$, starting from a reference (pure)
state involves the extension of the Hilbert space $\mathcal{H}$ by introducing
auxillary degrees of freedom [48, 50]. In this set up, a quantity denoted as
the mutual complexity $\Delta C$ has been prescribed in order to probe the
concept of purifiaction complexity [47, 48, 50, 49]. The mutual complexity
$\Delta C$ satisfies the following definition
$\displaystyle\Delta\mathcal{C}=\mathcal{C}(\rho_{A})++\mathcal{C}(\rho_{B})-\mathcal{C}(\rho_{A\cup
B})~{}.$ (6)
In this paper, we incorporate the HSC conjecture in order to compute the
complexities $\mathcal{C}(\rho_{A})$, $\mathcal{C}(\rho_{B})$ and
$\mathcal{C}(\rho_{A\cup B})$. We compute $\Delta\mathcal{C}$ in two different
setups. In the first, we consider two disjoint subsystems $A$ and $B$ of width
$l$ on the boundary Cauchy slice $\sigma$, separated by a distance $x$, and
compute the mutual complexity between these two subregions. In the second, we
consider the boundary Cauchy slice $\sigma$ to be a collection of two adjacent
subsystems $A$ and $B$ of width $l$ with $A\cap B=0$ (zero overlap) and
$A^{c}=B$, and compute the mutual complexity between the subregion $A$ and the
full system $A\cup A^{c}$.
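The EoP bounds quoted above (the mutual information bounds on $E_{P}$) can be illustrated numerically: for a maximally entangled two-qubit state the reduced density matrix is maximally mixed, and both bounds are saturated. A minimal check using only the eigenvalues of the reduced state:

```python
import math

# For a pure global state, E_P(rho_AB) = S(rho_A) = S(rho_B). For a maximally
# entangled two-qubit (Bell) state, the reduced density matrix is I/2, and
# the bounds (1/2) I(A:B) <= E_P <= min[S_A, S_B] are saturated. Only the
# eigenvalues of the reduced state are needed for this check.

def von_neumann_entropy(eigenvalues):
    """S(rho) = -sum_i p_i ln p_i over the spectrum of rho."""
    return -sum(p * math.log(p) for p in eigenvalues if p > 0)

S_A = von_neumann_entropy([0.5, 0.5])   # reduced Bell state: I/2, S = ln 2
S_B = S_A                               # symmetric for a pure bipartite state
S_AB = 0.0                              # the global state is pure
I_AB = S_A + S_B - S_AB                 # mutual information = 2 ln 2
E_P = S_A                               # EoP of a pure state

print(math.isclose(0.5 * I_AB, E_P) and math.isclose(E_P, min(S_A, S_B)))  # True
```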
The paper is organized as follows. In Section 2, we briefly discuss the
aspects of the bulk theory which in this case is a hyperscaling violating
geometry. We then consider a single strip-like subsystem and holographically
compute the EE in Section 3. We also make comments on the thermodynamical
aspects of the computed HEE by computing the entanglement Smarr relation
satisfied by the HEE. Furthermore, we holographically compute the relative
entropy in order to obtain the Fisher information metric. In Section 4, we
consider two strip-like subsystems and holographically compute the EoP by
using the $E_{P}=E_{W}$ conjecture. We briefly study the temperature dependent
behaviour of the EWCS along with the effects of $z$ and $\theta$ on $E_{W}$.
The butterfly velocity $v_{B}$ corresponding to the hyperscaling violating
geometry is computed in Section 5. We then compute the HSC corresponding to a
single strip-like subsystem in Section 6. In Section 7, we holographically
compute the mutual complexity $\Delta C$ by incorporating the HSC conjecture
for the BTZ black hole and the hyperscaling violating geometry, considering
two different setups. We conclude in Section 8, and an Appendix is also
included.
## 2 Bulk theory: Hyperscaling violating geometry
We shall start our analysis with a bulk hyperscaling violating spacetime
geometry. The solution corresponds to the following effective action of
Einstein-Maxwell-scalar theory [52, 53]
$\displaystyle S_{bulk}=\frac{1}{16\pi G}\int
d^{d+1}x\sqrt{-g}\left[\left(R-2\Lambda\right)-W(\phi)F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)\right]$
(7)
where $F_{\mu\nu}$ is the Faraday tensor associated with the gauge field
$A_{\mu}$, $\phi$ is the scalar field associated with the potential $V(\phi)$
and $W(\phi)$ is the coupling. Extremization of this action leads to the
following black hole solution [53] (note that this geometry is different in
form from the one considered in [37])
$\displaystyle
ds^{2}=\frac{R^{2}}{r^{2}}\left[-\frac{f(r)}{r^{\frac{2(d-1)(z-1)}{(d-\theta-1)}}}dt^{2}+r^{\frac{2\theta}{d-\theta-1}}\frac{dr^{2}}{f(r)}+\sum_{i=1}^{d-1}dx_{i}^{2}\right]~{}.$
(8)
The lapse function $f(r)$ has the form
$f(r)=1-\left(\frac{r}{r_{h}}\right)^{(d-1)(1+\frac{z}{d-\theta-1})}$, where
$r_{h}$ is the event horizon radius of the black hole. The Hawking temperature
of the black hole is obtained to be
$\displaystyle
T_{H}=\frac{(d-1)(z+d-\theta-1)}{4\pi(d-\theta-1)}\frac{1}{r_{h}^{z(d-1)/(d-\theta-1)}}~{}.$
(9)
The above mentioned metric is holographically dual to a $d$-dimensional
non-relativistic strongly coupled theory with Fermi surfaces. The metric is
associated with two independent exponents $z$ and $\theta$, whose presence
leads to the following scale transformations
$\displaystyle x_{i}$ $\displaystyle\rightarrow$ $\displaystyle\xi~{}x_{i}$
$\displaystyle t$ $\displaystyle\rightarrow$ $\displaystyle\xi^{z}~{}t$
$\displaystyle ds$ $\displaystyle\rightarrow$
$\displaystyle\xi^{\frac{\theta}{d-1}}~{}ds~{}.$
This non-trivial scale transformation of the proper spacetime interval $ds$ is
quite different from the usual AdS/CFT picture. The non-invariance of $ds$ in
the bulk theory implies the violation of hyperscaling in the boundary theory.
Keeping this in mind, $\theta$ is identified as the hyperscaling violation
exponent and $z$ as the dynamical exponent. In the limit $z=1$, $\theta=0$, we
recover the SAdS$_{d+1}$ solution, which is dual to a relativistic CFT in $d$
dimensions, while for $z\neq 1$, $\theta=0$, we obtain the ‘Lifshitz
solutions’.
The two independent exponents $z$ and $\theta$ satisfy the following
inequalities
$\displaystyle\theta\leq d-2,~{}z\geq 1+\frac{\theta}{d-1}~{}.$ (10)
The ‘equalities’ in the above relations hold only for gauge theories of
non-Fermi liquid states in $d=3$ [54], in which case $\theta=1$ and $z=3/2$.
For general $\theta=d-2$, logarithmic violation of the ‘area law’ of
entanglement entropy [55] is observed. This in turn means that for
$\theta=d-2$, the bulk theory holographically describes a strongly coupled
dual theory with hidden Fermi surfaces. Some studies of information theoretic
quantities for the above mentioned hyperscaling violating geometries can be
found in [56, 57].
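A quick check that the quoted non-Fermi-liquid point indeed saturates both inequalities in eq. (10), using exact rational arithmetic:

```python
from fractions import Fraction

# Exact check of the exponent inequalities theta <= d-2 and
# z >= 1 + theta/(d-1) from eq. (10) at the quoted d = 3
# non-Fermi-liquid point theta = 1, z = 3/2, where both are saturated.
d = 3
theta = Fraction(1)
z = Fraction(3, 2)

saturates_area_law_bound = (theta == d - 2)           # theta = d-2: hidden Fermi surfaces
saturates_dynamical_bound = (z == 1 + theta / (d - 1))

print(saturates_area_law_bound, saturates_dynamical_bound)  # True True
```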
## 3 Holographic entanglement Smarr relation
To begin our analysis, we consider our subsystem $A$ to be a strip of volume
$V_{sub}=L^{d-2}l$, where $-\frac{l}{2}<x_{1}<\frac{l}{2}$ and
$-\frac{L}{2}<x_{2,3,..,d-1}<\frac{L}{2}$. The amount of Hawking entropy
captured by the above mentioned volume reads
$\displaystyle S_{BH}=\frac{L^{d-2}l}{4G_{d+1}r_{h}^{d-1}}~{}.$ (11)
It is to be noted that the thermal entropy of the dual field theory (which is
essentially the Hawking entropy of the black hole given in eq.(11)) is related
to the temperature as $S_{th}\propto T^{\frac{d-1-\theta}{z}}$. For
$\theta=d-2$, this reads $S_{th}\propto T^{\frac{1}{z}}$, a result observed
for compressible states with fermionic excitations.
We parametrize the co-dimension one static minimal surface as $x_{1}=x_{1}(r)$
which leads to the following area of the extremal surface $\Gamma_{A}^{min}$
$\displaystyle
A(\Gamma_{A}^{min})=2R^{d-1}L^{d-2}r_{t}^{\left(\frac{p}{p-\theta}\right)-p}\sum_{n=1}^{\infty}\frac{1}{\sqrt{\pi}}\frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)}\alpha^{np\left(1+\frac{z}{p-\theta}\right)}\int_{0}^{1}du\frac{u^{\frac{\theta}{p-\theta}-p+np\left(1+\frac{z}{p-\theta}\right)}}{\sqrt{1-u^{2p}}};~{}u=\frac{r}{r_{t}},~{}\alpha=\frac{r_{t}}{r_{h}}$
(12)
where $r_{t}$ is the turning point and $p=(d-1)$ which we have introduced for
the sake of simplicity. By substituting the area functional (given in eq.(12))
in the RT formula, we obtain the HEE [4]
$\displaystyle S_{E}$ $\displaystyle=$
$\displaystyle\frac{A(\Gamma_{A}^{min})}{4G_{d+1}}$ $\displaystyle=$
$\displaystyle\frac{2L^{d-2}}{4G_{d+1}\left(\frac{p}{p-\theta}-p\right)}\left(\frac{1}{\epsilon}\right)^{p-\left(\frac{p}{p-\theta}\right)}+\frac{L^{d-2}r_{t}^{\left(\frac{p}{p-\theta}\right)-p}}{4G_{d+1}}\sum_{n=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)}\frac{\alpha^{np\left(1+\frac{z}{p-\theta}\right)}}{p}\frac{\Gamma\left(\frac{\frac{p}{p-\theta}-p+np\left(1+\frac{z}{p-\theta}\right)}{2p}\right)}{\Gamma\left(\frac{\frac{p}{p-\theta}+np\left(1+\frac{z}{p-\theta}\right)}{2p}\right)}~{}.$
(13)
The relationship between the subsystem size $l$ and turning point $r_{t}$
reads (with the AdS radius $R=1$)
$\displaystyle
l=r_{t}^{\frac{p}{p-\theta}}\sum_{n=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)}\frac{\alpha^{np\left(1+\frac{z}{p-\theta}\right)}}{p}\frac{\Gamma\left(\frac{\frac{p}{p-\theta}+p+np\left(1+\frac{z}{p-\theta}\right)}{2p}\right)}{\Gamma\left(\frac{\frac{p}{p-\theta}+2p+np\left(1+\frac{z}{p-\theta}\right)}{2p}\right)}~{}.$
(14)
We now proceed to probe the thermodynamical aspects of HEE. It can be observed
from eq.(13) that the expression for $S_{E}$ contains a subsystem independent
divergent piece, which we get rid of by defining a finite quantity. We call
this finite quantity the renormalized holographic entanglement entropy
($S_{REE}$). From the point of view of the dual field theory, this divergence
free quantity represents the change in entanglement entropy under an
excitation. In order to obtain $S_{REE}$ holographically, we first need to
compute the HEE corresponding to the asymptotic form
($r_{h}\rightarrow\infty$) of the hyperscaling violating black brane solution
given in eq.(8). This yields the following expression
$\displaystyle
S_{G}=\frac{2L^{d-2}}{4G_{d+1}\left(\frac{p}{p-\theta}-p\right)}\left(\frac{1}{\epsilon}\right)^{p-\left(\frac{p}{p-\theta}\right)}-\frac{2^{p-\theta}L^{d-2}}{4G_{d+1}\left(p-\frac{p}{p-\theta}\right)}\left(\frac{p-\theta}{p}\right)^{p-\theta-1}\frac{\pi^{\frac{p-\theta}{2}}}{l^{p-1-\theta}}\left(\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{\frac{p}{p-\theta}}{2p}\right)}\right)^{p-\theta}~{}.$
(15)
We now subtract the above expression (which represents the HEE corresponding
to the vacuum of the dual field theory) from $S_{E}$ (given in eq.(13)) in
order to get a finite quantity $S_{REE}$. This can be formally represented as
$\displaystyle S_{REE}=S_{E}-S_{G}~{}.$ (16)
On the other hand, the internal energy $E$ of the black hole can be obtained
by using the Hawking entropy (given in eq.(11)) and the Hawking temperature
(given in eq.(9)). The computed expression of $E$ can be represented as
$\displaystyle E=\left(\frac{p-\theta}{z+p-\theta}\right)S_{BH}T_{H}~{}.$ (17)
This is nothing but the classical Smarr relation of BH thermodynamics. In
[58], it was shown that the quantity $S_{REE}$ and the internal energy $E$
satisfy a Smarr-like thermodynamic relation corresponding to a generalized
temperature $T_{g}$. In this set up, this relation reads [59]
$\displaystyle E=\left(\frac{p-\theta}{z+p-\theta}\right)S_{REE}T_{g}~{}.$
(18)
It is remarkable to observe that the relation given in eq.(18) has a striking
similarity with the classical Smarr relation of BH thermodynamics, given in
eq.(17). In the limit $r_{t}\rightarrow r_{h}$, the leading term of the
generalized temperature $T_{g}$ produces the exact Hawking temperature $T_{H}$
whereas in the limit $\frac{r_{t}}{r_{h}}\ll 1$, the leading term of $T_{g}$
reads [59]
$\displaystyle\frac{1}{T_{g}}=\Delta_{1}l^{z}$ (19)
where the detailed expression of $\Delta_{1}$ reads
$\displaystyle\Delta_{1}=\frac{2\pi^{3/2}}{p}\left(\frac{1}{\frac{p}{p-\theta}-p+p\left(1+\frac{z}{p-\theta}\right)}\right)\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}\right)^{1+z}\left(\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}+p(1+\frac{z}{p-\theta})}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}+p(1+\frac{z}{p-\theta})}{2p}\right)}\right)~{}.$
From eq.(19) it can be observed that in the UV limit, $T_{g}$ shows behaviour
similar to that of the entanglement temperature $T_{ent}$ (proportional to the
inverse of the subsystem size $l$) [60].
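The leading UV behaviour $T_{g}=1/(\Delta_{1}l^{z})$ is easy to check numerically. The sketch below (our own illustration; the function name and parameter values are arbitrary) implements $\Delta_{1}$ as given above and verifies the $1/l^{z}$ scaling:

```python
from math import gamma, sqrt, pi

def generalized_temperature(l, d, z, theta):
    """Leading UV (r_t/r_h << 1) generalized temperature of eq.(19):
    T_g = 1 / (Delta_1 * l**z), with Delta_1 as given in the text."""
    p = d - 1
    q = p / (p - theta)
    w = p * (1 + z / (p - theta))
    delta1 = (2 * pi**1.5 / p) * (1 / (q - p + w)) \
        * ((p / sqrt(pi)) * gamma((p + q) / (2*p))
           / gamma((2*p + q) / (2*p)))**(1 + z) \
        * gamma((p + q + w) / (2*p)) / gamma((2*p + q + w) / (2*p))
    return 1 / (delta1 * l**z)

# Doubling l at fixed exponents rescales T_g by 2**(-z), as expected
# for an entanglement-temperature-like quantity.
r = generalized_temperature(1.0, 4, 2, 0.5) / generalized_temperature(2.0, 4, 2, 0.5)
print(abs(r - 2.0**2) < 1e-9)  # True for z = 2
```

Since $\Delta_{1}$ cancels in the ratio, this isolates exactly the $l^{-z}$ behaviour quoted in the text.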
### 3.1 Relative entropy and the Fisher information metric
We now proceed to compute the Fisher information metric for the hyperscaling
violating geometry using the holographic proposal. The Fisher information
metric measures the distance between two quantum states and is given by [61]
$\displaystyle
G_{F,\lambda\lambda}=\langle\delta\rho~{}\delta\rho\rangle^{(\sigma)}_{\lambda\lambda}=\frac{1}{2}~{}Tr\left(\delta\rho\frac{d}{d(\delta\lambda)}\log(\sigma+\delta\lambda\delta\rho)|_{\delta\lambda=0}\right)$
(20)
where $\sigma$ is the density matrix and $\delta\rho$ is a small deviation
from the density matrix. On the other hand, there exists a relation between
the Fisher information metric and the relative entropy $S_{rel}$ [62]. This
reads
$\displaystyle G_{F,mm}=\frac{\partial^{2}}{\partial
m^{2}}S_{rel}(\rho_{m}||\rho_{0});~{}S_{rel}(\rho_{m}||\rho_{0})=\Delta\langle
H_{\rho_{0}}\rangle-\Delta S~{}.$ (21)
In the above expression, $\Delta S$ is the change in the entanglement entropy
from vacuum state, $\Delta\langle H_{\rho_{0}}\rangle$ is the change in the
modular Hamiltonian and $m$ is the perturbation parameter. In this set up, we
holographically compute the relative entropy $S_{rel}(\rho_{m}||\rho_{0})$. We
consider the background to be slightly perturbed from the pure hyperscaling
violating spacetime while the subsystem volume $L^{d-2}l$ is kept fixed. Then
the inverse of the lapse function $f(r)$ (given in eq.(8)) can be expressed as
$\displaystyle\frac{1}{f(r)}=1+mr^{p\left(1+\frac{z}{p-\theta}\right)}+m^{2}r^{2p\left(1+\frac{z}{p-\theta}\right)}$
(22)
where $m=\left(\frac{1}{r_{H}}\right)^{p\left(1+\frac{z}{p-\theta}\right)}$ is
the holographic perturbation parameter. Since we consider a perturbation to
the background geometry and also consider that the subsystem size $l$ has not
changed, we can express the turning point in the following perturbed form
$\displaystyle r_{t}=r_{t}^{(0)}+mr_{t}^{(1)}+m^{2}r_{t}^{(2)}$ (23)
where $r_{t}^{(0)}$ is the turning point for the pure hyperscaling violating
geometry and $r_{t}^{(1)}$, $r_{t}^{(2)}$ are the first and second order
corrections to the turning point. We now write down the subsystem length $l$
up to second order in the perturbation as
$\displaystyle\frac{l}{r_{t}^{\frac{p}{p-\theta}}}=a_{0}+ma_{1}r_{t}^{p\left(1+\frac{z}{p-\theta}\right)}+m^{2}a_{2}r_{t}^{2p\left(1+\frac{z}{p-\theta}\right)}$
(24)
where
$\displaystyle
a_{0}=\frac{\sqrt{\pi}}{p}\frac{\Gamma\left(\frac{\frac{p}{p-\theta}+p}{2p}\right)}{\Gamma\left(\frac{\frac{p}{p-\theta}}{2p}\right)},~{}a_{1}=\frac{\sqrt{\pi}}{2p}\frac{\Gamma\left(\frac{\frac{p}{p-\theta}+p+p\left(1+\frac{z}{p-\theta}\right)}{2p}\right)}{\Gamma\left(\frac{\frac{p}{p-\theta}+p\left(1+\frac{z}{p-\theta}\right)}{2p}\right)},~{}a_{2}=\frac{3\sqrt{\pi}}{8p}\frac{\Gamma\left(\frac{\frac{p}{p-\theta}+p+2p\left(1+\frac{z}{p-\theta}\right)}{2p}\right)}{\Gamma\left(\frac{\frac{p}{p-\theta}+2p\left(1+\frac{z}{p-\theta}\right)}{2p}\right)}~{}.$
Using eq.(23) in eq.(24) and keeping in mind the consideration that $l$ has
not changed, we obtain the forms of $r_{t}^{(0)}$, $r_{t}^{(1)}$ and
$r_{t}^{(2)}$
$\displaystyle
r_{t}^{(0)}=\left(\frac{l}{a_{0}}\right)^{\frac{p-\theta}{p}},~{}r_{t}^{(1)}=-\left(\frac{p-\theta}{p}\right)\left(\frac{a_{1}}{a_{0}}\right)\left(r_{t}^{(0)}\right)^{1+p\left(1+\frac{z}{p-\theta}\right)},~{}r_{t}^{(2)}=\xi\left(r_{t}^{(0)}\right)^{1+2p\left(1+\frac{z}{p-\theta}\right)}$
(25)
where
$\displaystyle\xi=\left(\frac{p-\theta}{p}\right)\left[\left(\frac{2p-\theta}{2p}\right)\left(\frac{a_{1}}{a_{0}}\right)^{2}+(p-\theta)\left(1+\frac{z}{p-\theta}\right)\left(\frac{a_{1}}{a_{0}}\right)^{2}-\left(\frac{a_{2}}{a_{0}}\right)\right]~{}.$
(26)
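The coefficients $a_{0}$, $a_{1}$, $a_{2}$ and the perturbed turning point of eq.(25) can be evaluated directly. The following Python sketch (ours; function names and sample values are illustrative, and the perturbation $m$ must be small enough that $m\,r_{t}^{(0)p(1+z/(p-\theta))}\ll 1$) implements them:

```python
from math import gamma, sqrt, pi

def series_coefficients(d, z, theta):
    """Coefficients a0, a1, a2 of the expansion in eq.(24)."""
    p = d - 1
    q = p / (p - theta)
    w = p * (1 + z / (p - theta))
    a0 = sqrt(pi) / p * gamma((q + p) / (2*p)) / gamma(q / (2*p))
    a1 = sqrt(pi) / (2*p) * gamma((q + p + w) / (2*p)) / gamma((q + w) / (2*p))
    a2 = 3 * sqrt(pi) / (8*p) * gamma((q + p + 2*w) / (2*p)) / gamma((q + 2*w) / (2*p))
    return a0, a1, a2

def turning_point(l, m, d, z, theta):
    """Turning point r_t to first order in the perturbation m, eq.(25)."""
    p = d - 1
    w = p * (1 + z / (p - theta))
    a0, a1, _ = series_coefficients(d, z, theta)
    rt0 = (l / a0)**((p - theta) / p)
    rt1 = -((p - theta) / p) * (a1 / a0) * rt0**(1 + w)
    return rt0 + m * rt1

# Illustrative check (d = 4, z = 2, theta = 1): the O(m) correction is
# negative, i.e. the turning point moves inward so that l stays fixed.
print(turning_point(1.0, 0.0, 4, 2, 1) > turning_point(1.0, 1e-4, 4, 2, 1))  # True
```

The sign of $r_{t}^{(1)}$ encodes the statement in the text that the subsystem size is held fixed while the background is perturbed.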
On a similar note, the expression for the area of the static minimal surface
up to second order in the perturbation parameter $m$ can be obtained from
eq.(12). We then use eq.(23) to recast the expression for the area of the
minimal surface in the form
$\displaystyle
A(\Gamma_{A}^{min})=A(\Gamma_{A}^{min})^{(0)}+mA(\Gamma_{A}^{min})^{(1)}+m^{2}A(\Gamma_{A}^{min})^{(2)}~{}.$
(27)
It has been observed that at first order in $m$, $S_{rel}$ vanishes [62],
while at second order in $m$ it reads $S_{rel}=-\Delta S$. In this set up, this yields
$\displaystyle
S_{rel}=-m^{2}\frac{A(\Gamma_{A}^{min})^{(2)}}{4G_{d+1}}=m^{2}\frac{L^{d-2}}{4G_{d+1}}\Delta_{2}\left(\frac{l}{a_{0}}\right)^{1+2z+(p-\theta)}$
(28)
where
$\displaystyle\Delta_{2}=2p\left(\frac{p-\theta}{p}\right)\left(\frac{a_{1}^{2}}{a_{0}}\right)+p\left(p-\frac{\theta}{p-\theta}\right)\left(\frac{p-\theta}{p}\right)^{2}\left(\frac{a_{1}^{2}}{a_{0}}\right)-\left(\frac{2pa_{2}}{2p(1+\frac{z}{p-\theta})-p+\frac{p}{p-\theta}}\right)-2pa_{0}\xi~{}.$
By substituting the above expression in eq.(21), the Fisher information is
obtained to be
$\displaystyle
G_{F,mm}=\frac{L^{d-2}}{2G_{d+1}}\Delta_{2}\left(\frac{l}{a_{0}}\right)^{1+2z+(p-\theta)}\propto
l^{d+2z-\theta}~{}.$ (29)
In the limit $z=1$ and $\theta=0$, the above equation reads $G_{F,mm}\propto
l^{d+2}$ which agrees with the result obtained in [63]. The Fisher information
corresponding to the Lifshitz type solutions can be found in [64].
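The exponent in eq.(29) follows from simple algebra: with $p=d-1$, one has $1+2z+(p-\theta)=d+2z-\theta$, which reduces to $d+2$ in the AdS limit. A trivial numerical check (ours) of these identities:

```python
# Verify the Fisher-information scaling exponent of eq.(29):
# with p = d - 1, the power 1 + 2z + (p - theta) equals d + 2z - theta,
# and reduces to d + 2 in the AdS limit z = 1, theta = 0.
def fisher_exponent(d, z, theta):
    p = d - 1
    return 1 + 2 * z + (p - theta)

for d in (3, 4, 5):
    assert fisher_exponent(d, z=1, theta=0) == d + 2          # AdS limit [63]
    assert fisher_exponent(d, z=2, theta=0.5) == d + 4 - 0.5  # general case
print("exponent identities hold")
```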
## 4 Entanglement wedge cross-section and the $E_{P}=E_{W}$ duality
We now proceed to compute the holographic entanglement of purification by
considering two subsystems, namely, $A$ and $B$ of length $l$ on the boundary
$\partial M$. From the bulk point of view, $\partial M$ is the boundary of a
canonical time-slice $M$ made in the static gravity dual. Furthermore, $A$ and
$B$ are separated by a distance $D$ so that the subsystems do not have an
overlap of non-zero size ($A\cap B=0$). Following the RT prescription, we
denote $\Gamma_{A}^{min}$, $\Gamma_{B}^{min}$ and $\Gamma_{AB}^{min}$ as the
static minimal surfaces corresponding to $A$, $B$ and $AB$ respectively. In
this set up, the domain of entanglement wedge $M_{AB}$ is the region in the
bulk with the following boundary
$\displaystyle\partial M_{AB}=A\cup B\cup\Gamma_{AB}^{min}~{}.$ (30)
It is also to be noted that if the separation $D$ is sufficiently large then
the codimension-0 bulk region $M_{AB}$ will be disconnected. We now divide
$\Gamma_{AB}^{min}$ into two parts
$\displaystyle\Gamma_{AB}^{min}=\Gamma_{AB}^{A}\cup\Gamma_{AB}^{B}$ (31)
such that the boundary $\partial M_{AB}$ of the canonical time-slice of the
full spacetime $M_{AB}$ can be represented as
$\displaystyle\partial M_{AB}=\bar{\Gamma}_{A}\cup\bar{\Gamma}_{B}$ (32)
where $\bar{\Gamma}_{A}=A\cup\Gamma_{AB}^{A}$ and
$\bar{\Gamma}_{B}=B\cup\Gamma_{AB}^{B}$. In this set up, it is now possible to
define the holographic entanglement entropies $S(\rho_{A\cup\Gamma_{AB}^{A}})$
and $S(\rho_{B\cup\Gamma_{AB}^{B}})$. These quantities can be computed by
finding a static minimal surface $\Sigma^{min}_{AB}$ such that
$\displaystyle\partial\Sigma^{min}_{AB}=\partial\bar{\Gamma}_{A}=\partial\bar{\Gamma}_{B}~{}.$
(33)
There are infinitely many possible choices for the splitting given in
eq.(31), and this in turn means there are infinitely many choices for the
surface $\Sigma^{min}_{AB}$. The entanglement wedge cross section (EWCS) is
obtained by minimizing the area of $\Sigma^{min}_{AB}$ over all these
possible choices. This can be formally written down as
$\displaystyle E_{W}(\rho_{AB})=\mathop{min}_{\bar{\Gamma}_{A}\subset\partial
M_{AB}}\left[\frac{A\left(\Sigma^{min}_{AB}\right)}{4G_{d+1}}\right]~{}.$ (34)
We now proceed to compute $E_{W}$ for the holographic dual considered in this
paper. As we have mentioned earlier, EWCS is the surface with minimal area
which splits the entanglement wedge into two domains corresponding to $A$ and
$B$. This can be identified as a vertical, constant-$x$ hypersurface. The
induced metric on a constant time slice of this hypersurface reads
$\displaystyle
ds_{ind}^{2}=\frac{R^{2}}{r^{2}}\left[r^{\frac{2\theta}{p-\theta}}\frac{dr^{2}}{f(r)}+\sum_{i=1}^{d-2}dx_{i}^{2}\right]~{}.$
(35)
By using this above mentioned induced metric, the EWCS is obtained to be
$\displaystyle\begin{aligned}
E_{W}=&\frac{L^{d-2}}{4G_{d+1}}\int_{r_{t}(D)}^{r_{t}(2l+D)}\frac{dr}{r^{d-1}\sqrt{f(r)}}\\\
=&\frac{L^{d-2}}{4G_{d+1}}\sum_{n=0}^{\infty}\frac{1}{\sqrt{\pi}}\frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)}\left[\frac{r_{t}(2l+D)^{np(1+\frac{z}{p-\theta})-p+1}-r_{t}(D)^{np(1+\frac{z}{p-\theta})-p+1}}{np(1+\frac{z}{p-\theta})-p+1}\right]\left(\frac{1}{r_{h}}\right)^{np(1+\frac{z}{p-\theta})}~{}.\end{aligned}$
As mentioned earlier, the above expression of $E_{W}$ always maintains the
following bound
$\displaystyle E_{W}\geq\frac{1}{2}I(A:B);~{}I(A:B)=S(A)+S(B)-S(A\cup B)$ (36)
where $I(A:B)$ is the mutual information between two subsystems $A$ and $B$.
On the other hand, in [65] it was shown that there exists a critical
separation between $A$ and $B$ beyond which there is no connected phase for
any $l$. This in turn means that at the critical separation length $D_{c}$,
$E_{W}$ probes the phase transition of the RT surface $\Gamma_{AB}^{min}$
between connected and disconnected phases. The disconnected phase is
characterised by the fact that the mutual information $I(A:B)$ vanishes, which
in this case reads
$\displaystyle 2S(l)-S(D)-S(2l+D)=0~{}.$ (37)
The above condition together with the bound given in eq.(36) leads to the
critical separation length $D_{c}$ [66].
We now write down the expression for $E_{W}$ (obtained above) in terms of the
parameters of the boundary theory. We do this for the low temperature case
($\frac{r_{t}(D)}{r_{h}}\ll\frac{r_{t}(2l+D)}{r_{h}}\ll 1$) and for the high
temperature case $(\frac{r_{t}(D)}{r_{h}}\ll
1,\frac{r_{t}(2l+D)}{r_{h}}\approx 1)$.
### 4.1 $E_{W}$ in the low temperature limit
In the limit $\frac{r_{t}(D)}{r_{h}}\ll\frac{r_{t}(2l+D)}{r_{h}}\ll 1$, it is
reasonable to keep terms up to order $m$ (where
$m=\frac{1}{r_{h}^{p(1+\frac{z}{p-\theta})}}$) in the expression for $E_{W}$
obtained above. On the other hand, for low temperature considerations, it is
possible to perturbatively solve eq.(14), which leads to the following
relationship between a subsystem size $l$ and its corresponding turning point
$r_{t}$
$\displaystyle
r_{t}(l)=l^{\frac{p-\theta}{p}}\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}\right)^{\frac{p-\theta}{p}}\times\left[1-\Delta_{3}l^{(p-\theta)\left(1+\frac{z}{p-\theta}\right)}T^{\left(1+\frac{p-\theta}{z}\right)}\right]$
(38)
where
$\displaystyle\Delta_{3}$ $\displaystyle=$
$\displaystyle\left(\frac{p-\theta}{2p}\right)\left(\frac{4\pi}{p(1+\frac{z}{p-\theta})}\right)^{1+\frac{p-\theta}{z}}\left(\frac{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}\right)\left(\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}+p(1+\frac{z}{p-\theta})}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}+p(1+\frac{z}{p-\theta})}{2p}\right)}\right)\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}\right)^{(p-\theta)\left(1+\frac{z}{p-\theta}\right)}~{}.$
Now by using eq.(38) for $r_{t}(2l+D)$ and $r_{t}(D)$, we obtain the
expression for $E_{W}$ in the low temperature limit to be
$\displaystyle
E_{W}=E_{W}^{T=0}-\frac{L^{d-2}}{4G_{d+1}}\Delta_{4}\left[(2l+D)^{\left(\frac{p-\theta}{p}\right)\left(1+\frac{pz}{p-\theta}\right)}-D^{{\left(\frac{p-\theta}{p}\right)\left(1+\frac{pz}{p-\theta}\right)}}\right]T^{1+\left(\frac{p-\theta}{z}\right)}+...$
(39)
where the detailed expression for $\Delta_{4}$ is given in the Appendix. The
first term in eq.(39) is the EWCS at $T=0$. This reads
$\displaystyle
E_{W}^{T=0}=\frac{L^{d-2}}{4(p-1)G_{d+1}}\left(\frac{\sqrt{\pi}}{p}\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}\right)^{(p-\theta)-\left(\frac{p-\theta}{p}\right)}\left[\left(\frac{1}{D}\right)^{(p-\theta)-\left(\frac{p-\theta}{p}\right)}-\left(\frac{1}{2l+D}\right)^{(p-\theta)-\left(\frac{p-\theta}{p}\right)}\right]~{}.$
It can be observed from eq.(39) that the EWCS is a monotonically decreasing
function of the temperature $T$ (as $1+\frac{p-\theta}{z}>0$). We now compute
the critical separation length $D_{c}$ at which $E_{W}$ probes the phase
transition of the RT surface $\Gamma_{AB}^{min}$ between the connected and
disconnected phases. This is obtained from the condition
$\displaystyle 2S(l)-S(D)-S(2l+D)=0~{}.$ (41)
The general expression for the HEE of a strip of length $l$ is given in
eq.(13). Now, similar to the above computation, in the limit
$\frac{r_{t}(D)}{r_{h}}\ll\frac{r_{t}(2l+D)}{r_{h}}\ll 1$, we keep terms
up to $\mathcal{O}(m)$ in eq.(13). With this consideration, eq.(41) can be
expressed as
$\displaystyle\beta_{1}\left(\frac{2}{l^{p-\theta-1}}-\frac{1}{D^{p-\theta-1}}-\frac{1}{(2l+D)^{p-\theta-1}}\right)+\frac{\beta_{2}}{r_{h}^{p(1+\frac{z}{p-\theta})}}\left(2l^{1+z}-D^{1+z}-(2l+D)^{1+z}\right)=0$
(42)
where
$\displaystyle\beta_{1}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{\pi}}{4p}\frac{\Gamma\left(\frac{\frac{p}{p-\theta}-p}{2p}\right)}{\Gamma\left(\frac{\frac{p}{p-\theta}}{2p}\right)}\left(\frac{\sqrt{\pi}}{p}\frac{\Gamma\left(p+\frac{\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}\right)^{p-\theta-1}$
$\displaystyle\beta_{2}$ $\displaystyle=$
$\displaystyle\Delta_{3}\left(p-\frac{p}{p-\theta}\right)\beta_{1}+\frac{\sqrt{\pi}}{8p}\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}\right)^{1+z}\left(\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}+p(1+\frac{z}{p-\theta})}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}+p(1+\frac{z}{p-\theta})}{2p}\right)}\right)~{}.$
By solving the above equation we can find the critical separation length
$D_{c}$ (where we substitute $\frac{D}{l}=k=\mathrm{constant}$). It is worth
mentioning that in the above computations of $E_{W}$ and $I(A:B)$, we have
kept terms up to $\mathcal{O}(m)$, which is the leading order thermal
correction. One can incorporate next-to-leading order terms to obtain a more
accurate result.
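At strictly zero temperature (dropping the $\beta_{2}$ term), eq.(42) with $D=kl$ reduces to $2=k^{-a}+(2+k)^{-a}$ with $a=p-\theta-1$, so the critical ratio $k=D_{c}/l$ is independent of $l$. A small bisection sketch (ours; the bracketing interval and sample values are illustrative, valid for $\theta<d-2$):

```python
def critical_ratio(d, theta, lo=1e-3, hi=10.0):
    """Solve 2 = k**(-a) + (2 + k)**(-a), a = d - 2 - theta
    (the T = 0 part of eq.(42) with D = k * l), by bisection."""
    a = (d - 1) - theta - 1
    f = lambda k: k**(-a) + (2.0 + k)**(-a) - 2.0
    # f -> +inf as k -> 0 and f -> -2 as k -> inf, so a root is bracketed.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

k0 = critical_ratio(4, 0.0)   # pure AdS_5 case
k1 = critical_ratio(4, 0.5)   # hyperscaling violating case
print(k0 > k1)  # True: D_c / l decreases with increasing theta
```

The decrease of $k$ with $\theta$ mirrors the behaviour of $D_{c}$ described below for the plotted examples.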
Figure 1: Effects of $\theta$ and $z$ on $E_{W}$ and $I(A:B)$ at low
temperature (with $d=3$, $k=0.4$, $L=1$ and $G_{d+1}=1$). One panel shows the
effect of the hyperscaling violating exponent $\theta$ (with $z=1$), the other
the effect of the dynamical exponent $z$ (with $\theta=0.2$).
### 4.2 $E_{W}$ in the high temperature limit
We now consider the limit $r_{t}(2l+D)\rightarrow r_{h}$ and
$\frac{r_{t}(D)}{r_{h}}\ll 1$. This in turn means the static minimal surface
associated with the turning point $r_{t}(2l+D)$ wraps a portion of the event
horizon $r_{h}$. In the large $n$ limit, the summand of the infinite sum
associated with the turning point $r_{t}(2l+D)$ behaves as
$\approx\frac{1}{\sqrt{\pi}}\left(\frac{1}{n}\right)^{3/2}\left(\frac{r_{t}(2l+D)}{r_{h}}\right)^{np(1+\frac{z}{p-\theta})}$,
which means the sum is convergent
($\sum_{n=1}^{\infty}\frac{1}{n^{3/2}}=\zeta(\frac{3}{2})$). Further, since we
are considering $\frac{r_{t}(D)}{r_{h}}\ll 1$, in this limit it is reasonable
to keep terms only up to order $m$ in the infinite sum associated with
$r_{t}(D)$. In this set up, the expression for the EWCS reads
$\displaystyle
E_{W}(T)=E_{W}^{T=0}+\frac{L^{d-2}}{4G_{d+1}}\Delta_{4}D^{z+\left(\frac{p-\theta}{p}\right)}T^{1+\left(\frac{p-\theta}{z}\right)}-\frac{L^{d-2}}{4G_{d+1}}\Delta_{5}T^{\left(\frac{p-\theta}{z}\right)-\left(\frac{p-\theta}{pz}\right)}+...$
(43)
where the temperature independent term (EWCS at $T=0$) is being given by
$\displaystyle
E_{W}^{T=0}=\frac{L^{d-2}}{4(p-1)G_{d+1}D^{(\frac{p-\theta}{p})(p-1)}}\left(\frac{\sqrt{\pi}}{p}\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}\right)^{(p-\theta)-\left(\frac{p-\theta}{p}\right)}~{}.$
(44)
The expressions for $\Delta_{4}$ and $\Delta_{5}$ are given in the Appendix.
Now we proceed to compute the critical separation length $D_{c}$ in the high
temperature configuration. In the limit $r_{t}\rightarrow r_{h}$, the computed
result for $S_{E}$ (the HEE of a strip-like subsystem of length $l$), given in
eq.(13), can be rearranged in the following form (the first term of this
expression is nothing but the thermal entropy of the boundary subsystem, given
in eq.(11))
$\displaystyle
S_{E}=\frac{L^{d-2}l}{4G_{d+1}r_{h}^{p}}+\frac{L^{d-2}}{4G_{d+1}r_{h}^{p-\frac{p}{p-\theta}}}\sum_{n=0}^{\infty}P_{n};~{}P_{n}=\left(\frac{1}{\frac{p}{p-\theta}-p+np(1+\frac{z}{p-\theta})}\right)\frac{\Gamma\left(n+\frac{1}{2}\right)}{\Gamma\left(n+1\right)}\frac{\Gamma\left(\frac{p+\frac{p}{p-\theta}+np(1+\frac{z}{p-\theta})}{2p}\right)}{\Gamma\left(\frac{2p+\frac{p}{p-\theta}+np(1+\frac{z}{p-\theta})}{2p}\right)}~{}.$
By using the above form of HEE, we can write down eq.(41) in the following
form
$\displaystyle\frac{\sum_{n=0}^{\infty}P_{n}}{r_{h}^{p-\frac{p}{p-\theta}}}-\frac{D}{4r_{h}^{p-\frac{p}{p-\theta}}}-\frac{\beta_{1}}{D^{p-\theta-1}}-\frac{\beta_{2}}{r_{h}^{p(1+\frac{z}{p-\theta})}}D^{1+z}=0~{}.$
(46)
Figure 2: Effects of $\theta$ and $z$ on $E_{W}$ and $I(A:B)$ at high
temperature (with $d=3$, $L=1$ and $G_{d+1}=1$). One panel shows the effect of
the hyperscaling violating exponent $\theta$ (with $z=1$), the other the
effect of the dynamical exponent $z$ (with $\theta=0.2$).
In Fig(s).(1) and (2), we have graphically represented the effects of $z$ and
$\theta$ on the EWCS and the holographic mutual information (HMI) for the low
and high temperature cases respectively. For the low temperature case
(Fig.(1)) we have chosen the separation length $D$ between the subsystems to
be $D=0.4~{}l$. From these plots it can be observed that the EWCS always
maintains the bound $E_{W}>\frac{1}{2}I(A:B)$. The HMI decays continuously and
vanishes at a particular critical separation length $D_{c}$, and this critical
separation $D_{c}$ decreases with increasing $z$ and $\theta$. On the other
hand, $E_{W}$ shows a discontinuous jump at $D_{c}$: up to $D_{c}$, $E_{W}$
has a finite cross-section due to the connected phase, whereas beyond this
critical separation length $E_{W}$ vanishes due to the disconnected phase of
the RT surface $\Gamma_{AB}^{min}$.
## 5 Butterfly velocity
In this section we discuss information spreading in the dual field theory from
the holographic point of view. In the context of quantum many body physics,
the chaotic nature of a system (the response of the system at late time to a
local perturbation at an initial time) can be characterized by the following
thermal average
$\displaystyle C(x,t)=\langle\left[W(t,x),V(0)\right]^{2}\rangle_{\beta}$ (47)
where $V(0)$ is a generic operator acting at the origin at an earlier time and
$W(t,x)$ is a local operator acting at position $x$ at a later time $t$. The
butterfly effect is usually governed by such commutators. It probes the
dependence of a late time measurement on the initial perturbation. The time at
which the commutator grows to $\mathcal{O}(1)$ is known as the scrambling
time [67]. The study of the butterfly effect arises naturally in the context
of AdS/CFT, as black holes are observed to be the fastest scramblers in
nature [68]. For large $N$ field theories, eq.(47) grows as
$\displaystyle
C(x,t)=\frac{K}{N^{2}}\exp[\lambda_{L}\left(t-\frac{|x|}{v_{B}}\right)]+\mathcal{O}\left(\frac{1}{N^{4}}\right)$
(48)
where $K$ is a constant, $\lambda_{L}$ is the Lyapunov exponent which probes
the growth of chaos with time and $v_{B}$ is the butterfly velocity. The
butterfly velocity $v_{B}$ probes the speed at which the local perturbation
grows (the speed of the growth of chaos). This velocity defines a light cone
for chaos; outside this light cone, the perturbation does not affect the
system. Further, $v_{B}$ can be interpreted as a state-dependent effective
Lieb-Robinson velocity $v_{LR}$ for strongly coupled field theories [69].
The Lyapunov exponent satisfies an upper bound [70]
$\displaystyle\lambda_{L}\leq 2\pi T~{}.$ (49)
One can consider the action of a local operator (perturbation) on a thermal
state of a CFT. At the initial stages, information about the nature of the
operator can be obtained by applying another local operator at position $x$.
However, due to the scrambling property, this information about the initial
perturbation spreads out over a larger and larger region with time.
In the context of gauge/gravity duality, $v_{B}$ can be calculated by using
subregion duality [71]. The static black hole in the bulk represents the
initial thermal state in the dual field theory. One can add a local
perturbation in this set up which eventually falls into the event horizon. The
time-like trajectories of the local perturbation in the bulk can be probed by
using the co-dimension two RT surfaces [72, 73]. In both of these scenarios,
there is a smallest subregion which contains enough information (at a later
time $t$) about the local perturbation. It is assumed that the bulk dual to
this smallest subregion of the boundary is the entanglement wedge [74]. The
butterfly velocity represents the rate at which these subregions grow. The
holographic computation requires only the near-horizon data of the dual
gravitational solution. By following the approach given in [73], an expression
for the butterfly velocity $v_{B}$ corresponding to a general black brane
geometry has been computed in [75]. For the hyperscaling violating geometry
(given in eq.(8)), the expression for $v_{B}$ is obtained to be
$\displaystyle
v_{B}=\sqrt{\frac{1}{2}\left(1+\frac{z}{d-\theta-1}\right)}\left[\frac{4\pi}{(d-1)\left(1+\frac{z}{d-\theta-1}\right)}\right]^{1-\frac{1}{z}}T^{1-\frac{1}{z}}\propto
T^{1-\frac{1}{z}}.$ (50)
In the AdS limit $z\rightarrow 1,\theta\rightarrow 0$, it reduces to the well-
known result $v_{B}=\sqrt{\frac{d}{2(d-1)}}$ [73]. It is known that $v_{B}$ is
a model dependent parameter which in this case captures the collective effects
of $z$ and $\theta$. It is also to be noted that for hyperscaling violating
backgrounds, $v_{B}$ is related to the Hawking temperature (temperature of the
dual thermal field theory) [73, 76].
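Eq.(50) and its AdS limit are straightforward to check numerically. The sketch below (ours; the sample temperature and dimensions are illustrative) confirms that for $z=1$, $\theta=0$ the temperature dependence drops out and $v_{B}$ reduces to $\sqrt{d/(2(d-1))}$:

```python
from math import sqrt, pi

def butterfly_velocity(T, d, z, theta):
    """Butterfly velocity of eq.(50) for the hyperscaling violating brane."""
    s = 1 + z / (d - theta - 1)
    return sqrt(0.5 * s) * (4 * pi / ((d - 1) * s))**(1 - 1/z) * T**(1 - 1/z)

# AdS limit z -> 1, theta -> 0: the exponent 1 - 1/z vanishes, T drops
# out, and v_B -> sqrt(d / (2 (d - 1))), the Schwarzschild-AdS value [73].
d = 4
print(abs(butterfly_velocity(0.3, d, 1, 0) - sqrt(d / (2 * (d - 1)))) < 1e-12)  # True

# For z > 1 the velocity grows with temperature as T**(1 - 1/z).
print(butterfly_velocity(0.2, 4, 2, 0.5) < butterfly_velocity(0.4, 4, 2, 0.5))  # True
```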
## 6 Holographic Subregion Complexity
Quantum complexity (QC) can be understood in the following way. Considering
a simple (unentangled) product state $\ket{\uparrow\uparrow...\uparrow}$ as a
reference state, QC is defined as the minimum number of $2$-qubit unitary
operations required to prepare a target state $\ket{\psi}$ from the reference
state. In this section we study the holographic subregion complexity (HSC)
proposal [20]. This conjecture states that the volume enclosed by the co-
dimension two static minimal surface (RT surface), with boundary coinciding
with that of the subsystem, is dual to the complexity of that subregion. For
the hyperscaling violating geometry, this co-dimension one volume reads
$\displaystyle V(\Gamma^{min}_{A})$ $\displaystyle=$ $\displaystyle
2L^{d-2}r_{t}^{\frac{p+\theta}{p-\theta}-p}\int_{\epsilon/r_{t}}^{1}du\frac{u^{\frac{2\theta-p}{p-\theta}-p}}{\sqrt{f(u)}}\int_{u}^{1}dk\frac{k^{p+\frac{\theta}{p-\theta}}}{\sqrt{f(k)}\sqrt{1-k^{2p}}}.$
(51)
We now use the above volume to obtain the HSC. This is given by (setting AdS
radius $R=1$) [20]
$\displaystyle C_{V}(A)$ $\displaystyle=$
$\displaystyle\frac{V(\Gamma^{min}_{A})}{8\pi G_{d+1}}$ (52) $\displaystyle=$
$\displaystyle\frac{L^{d-2}l}{8\pi
G_{d+1}\left(p-\frac{\theta}{p-\theta}\right)\epsilon^{\left(p-\frac{\theta}{p-\theta}\right)}}+\frac{L^{d-2}r_{t}^{\frac{p+\theta}{p-\theta}-p}}{8\pi
G_{d+1}}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\bar{V}\left(\frac{r_{t}}{r_{h}}\right)^{(m+n)p\left(1+\frac{z}{p-\theta}\right)}$
where
$\displaystyle\bar{V}=\frac{\Gamma(n+\frac{1}{2})\Gamma(m+\frac{1}{2})\Gamma\left(\frac{(m+n)p(1+\frac{z}{p-\theta})+1+\frac{2\theta}{p-\theta}}{2p}\right)}{\sqrt{\pi}\Gamma(n+1)\Gamma(m+1)\Gamma\left(\frac{(m+n)p(1+\frac{z}{p-\theta})+1+p+\frac{2\theta}{p-\theta}}{2p}\right)}\left[\frac{1}{p\left(mp(1+\frac{z}{p-\theta})+\frac{\theta}{p-\theta}-p\right)}\right]~{}.$
The first term in eq.(52) is the divergent piece of the HSC. In the subsequent
analysis we will denote it as $C^{div}$. We now consider the small temperature
limit, that is $\frac{r_{t}}{r_{h}}\ll 1$. In the spirit of this limit, we
keep terms up to $\mathcal{O}(m)$ in eq.(52). This in turn means that we are
interested only in the leading order temperature corrections to the HSC. These
considerations lead to the following expression of HSC
$\displaystyle C_{V}(A)=C^{div}+\frac{L^{d-2}}{8\pi
G_{d+1}}\bar{V}_{(0)}r_{t}^{\frac{p+\theta}{p-\theta}-p}+\frac{L^{d-2}}{8\pi
G_{d+1}}\left[\bar{V}_{(1)}+\bar{V}_{(2)}\right]r_{t}^{\frac{p+\theta}{p-\theta}-p}\left(\frac{r_{t}}{r_{h}}\right)^{p\left(1+\frac{z}{p-\theta}\right)}$
(53)
where $\bar{V}_{(0)}=\bar{V}|_{(n=0,m=0)}$,
$\bar{V}_{(1)}=\bar{V}|_{(n=1,m=0)}$ and $\bar{V}_{(2)}=\bar{V}|_{(n=0,m=1)}$.
We now use eq.(38) in order to express the above expression in terms of the
subsystem size $l$. This is obtained to be
$\displaystyle
C_{V}(A)=C^{div}+C_{1}l^{(\frac{p+\theta}{p})-(p-\theta)}+C_{2}l^{(\frac{p+\theta}{p})+z}T^{1+(\frac{p-\theta}{z})}$
(54)
where the expressions for $C_{1}$ and $C_{2}$ are given in the Appendix. As we
have mentioned earlier, $C^{div}$ represents the divergent piece of HSC
whereas the second term is the temperature independent term. The lowest order
temperature correction occurs in the third term. This result for the HSC will
be used in the next section to compute the mutual complexity between two
subsystems.
## 7 Complexity for mixed states: Mutual complexity ($\Delta\mathcal{C}$)
The complexity for mixed states (purification complexity) is defined as the
minimal (pure state) complexity among all possible purifications of the mixed
state. This in turn means that one has to optimize over the circuits which
take the reference state to a target state $\ket{\psi_{AB}}$ (a purification
of the desired mixed state $\rho_{A}$) and also need to optimize over the
possible purifications of $\rho_{A}$. This can be expressed as
$\displaystyle\mathcal{C}(\rho_{A})=\mathrm{min}_{B}~{}\mathcal{C}(\ket{\psi_{AB}});~{}\rho_{A}=\mathrm{Tr}_{B}\ket{\psi_{AB}}\bra{\psi_{AB}}$
(55)
where $A^{c}=B$. Recently a quantity denoted as the ‘mutual complexity
($\Delta\mathcal{C}$)’ has been defined in order to compute the above
mentioned mixed state complexity [47, 48]. The computation of
$\Delta\mathcal{C}$ starts with a pure state $\rho_{AB}$ in an extended
Hilbert space (including auxiliary degrees of freedom); tracing out
the degrees of freedom of $B$ then gives the mixed state $\rho_{A}$, while
tracing out the degrees of freedom of $A$ yields $\rho_{B}$. These
computed results can then be used in the following formula to compute the
mutual complexity $\Delta\mathcal{C}$ [47]
$\displaystyle\Delta\mathcal{C}=\mathcal{C}(\rho_{A})+\mathcal{C}(\rho_{B})-\mathcal{C}(\rho_{A\cup
B})~{}.$ (56)
The mutual complexity $\Delta\mathcal{C}$ is said to be subadditive if
$\Delta\mathcal{C}>0$ and superadditive if $\Delta\mathcal{C}<0$.
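Eq.(56) and the sign convention above translate directly into code; the following is a minimal illustrative sketch (the function names are ours, and the complexity values passed in are arbitrary inputs).

```python
def mutual_complexity(c_a, c_b, c_ab):
    """Mutual complexity, eq.(56): Delta C = C(rho_A) + C(rho_B) - C(rho_{A u B})."""
    return c_a + c_b - c_ab

def additivity(delta_c):
    """Classify per the convention below eq.(56)."""
    if delta_c > 0:
        return "subadditive"
    if delta_c < 0:
        return "superadditive"
    return "additive"

# Example with illustrative complexity values:
dc = mutual_complexity(1.0, 1.0, 2.25)  # -> -0.25, i.e. superadditive
```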
We now choose to follow the subregion ‘Complexity=Volume’ conjecture to
compute the quantities $\mathcal{C}(\rho_{A})$, $\mathcal{C}(\rho_{B})$ and
$\mathcal{C}(\rho_{A\cup B})$. Similarly, one can follow the ‘Complexity=Action’
conjecture or the $C=V2.0$ conjecture [77] to compute these quantities. We
consider two different set ups to probe the mutual complexity
$\Delta\mathcal{C}$. In the first scenario, we consider two disjoint
subsystems $A$ and $B$ of width $l$ on the boundary Cauchy slice $\sigma$.
These two subsystems are separated by a distance $x$. We then compute the
mutual complexity between these two subregions. Next we consider that the
boundary Cauchy slice $\sigma$ is a collection of two adjacent subsystems $A$
and $B$ of width $l$ with $A\cap B=0$ (zero overlap) and $A^{c}=B$. In this
set up we compute the mutual complexity between a subregion $A$ and the full
system $A\cup A^{c}$.
### 7.1 Case 1: Mutual complexity between two disjoint subregions
In this set up, we consider two subsystems $A$ and $B$ of width $l$ on the
Cauchy slice $\sigma$. The separation length between $A$ and $B$ is $x$. We
want to see how the rate of complexification of these two subsystems is
affected when we introduce correlation (classical and quantum) between these
two subregions. In [78], the authors have used the ‘Complexity=Action’
conjecture to study the mutual complexity between two subsystems and in [38]
‘Complexity = Volume’ conjecture was incorporated to probe $\Delta\mathcal{C}$
between two subregions. Here we follow the approach given in [38].
#### 7.1.1 BTZ black hole
We first compute the HSC corresponding to a single subsystem $A$ of length $l$
for the BTZ black hole. The mentioned black hole geometry is characterized by
the following metric [79, 80]
$\displaystyle
ds^{2}=\frac{R^{2}}{z^{2}}\left[-f(z)dt^{2}+\frac{dz^{2}}{f(z)}+dx^{2}\right];~{}f(z)=1-\frac{z^{2}}{z_{h}^{2}}~{}.$
(57)
Following the prescription of $C=V$ conjecture, the HSC corresponding to a
single subsystem $A$ of length $l$ in the dual field theory is obtained by
computing the co-dimension one volume enclosed by the co-dimension two RT
surface. This leads to the following
$\displaystyle\mathcal{C}(\rho_{A})$ $\displaystyle=$
$\displaystyle\frac{2R}{8\pi
RG_{2+1}}\int_{0}^{z_{t}}\frac{x(z)}{z^{2}\sqrt{f(z)}}dz$ (58)
$\displaystyle=$ $\displaystyle\frac{2R}{8\pi
RG_{2+1}}\int_{0}^{z_{t}}\frac{1}{z^{2}\sqrt{f(z)}}dz\int_{z}^{z_{t}}\frac{du}{\sqrt{f(u)}\sqrt{\left(\frac{z_{t}}{u}\right)^{2}-1}}$
$\displaystyle=$
$\displaystyle\frac{1}{8\pi}\left(\frac{l}{\epsilon}-\pi\right)\quad(\text{setting}~G_{2+1}=1)$
where $\epsilon$ is the cut-off introduced to prevent the UV divergence and
$z_{t}$ is the turning point of the RT surface. It is to be observed that the
computed result of HSC in this case is independent of the black hole parameter
(that is, event horizon $z_{h}$). This is a unique feature of AdS3 as the
subregion complexity in this case is topological. We now consider two
subsystems $A$ and $B$ of equal length $l$, separated by a distance $x$. In
this set up, the connected RT surface is determined by two strips, of widths
$2l+x$ and $x$ [81]. This leads to the following [38, 81]
$\displaystyle\mathcal{C}(A\cup B)=\mathcal{C}(2l+x)-\mathcal{C}(x)~{}.$ (59)
Note that when the separation $x$ between the two subsystems vanishes, then
$\displaystyle\mathcal{C}(l\cup l)=\mathcal{C}(2l)~{}.$ (60)
By substituting eq.(59) in eq.(56), the mutual complexity is obtained to be
$\displaystyle\Delta\mathcal{C}$ $\displaystyle=$ $\displaystyle
2\mathcal{C}(l)-\mathcal{C}(2l+x)+\mathcal{C}(x)=-\frac{1}{4}~{}.$ (61)
It can be observed that the mutual complexity is less than zero. This implies
that the complexity is superadditive. In the above computation of
$\mathcal{C}(A\cup B)$, we have considered only the connected RT surface. This
is true as long as we work within the limit $\frac{x}{l}\ll 1$. However, there
can also be a disconnected configuration in which $\mathcal{C}(A\cup
B)=\mathcal{C}(A)+\mathcal{C}(B)$ [81]. This in turn means that when the
separation length $x$ is large enough, the mutual complexity
$\Delta\mathcal{C}$ between the two subregions is zero. This behaviour is
similar to that of mutual information.
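These statements can be checked numerically. The illustrative script below (setting $G_{2+1}=1$; all names are ours) verifies the closed form of eq.(58) against a direct volume integration in the pure AdS limit $f\to 1$, and confirms that the connected configuration of eq.(61) gives $\Delta\mathcal{C}=-1/4$ for any $l$, $x$ and cutoff $\epsilon$, while the disconnected configuration gives zero.

```python
import math

def c_btz(length, eps):
    """Closed form of eq.(58): C(rho_A) = (l/eps - pi)/(8 pi), with G_{2+1} = 1."""
    return (length / eps - math.pi) / (8.0 * math.pi)

def c_numeric_pure_ads(z_t, eps, n=200000):
    """Pure AdS limit f -> 1: geodesic profile x(z) = sqrt(z_t^2 - z^2), l = 2 z_t.
    Midpoint rule for C = (1/(4 pi)) * int_eps^{z_t} sqrt(z_t^2 - z^2)/z^2 dz."""
    h = (z_t - eps) / n
    total = 0.0
    for i in range(n):
        z = eps + (i + 0.5) * h
        total += math.sqrt(z_t**2 - z**2) / z**2
    return total * h / (4.0 * math.pi)

def delta_c_connected(l, x, eps):
    """Connected RT surface, eq.(61): result is independent of l, x and eps."""
    return 2 * c_btz(l, eps) - c_btz(2 * l + x, eps) + c_btz(x, eps)

def delta_c_disconnected(l, eps):
    """Disconnected configuration: C(A u B) = C(A) + C(B), so Delta C = 0."""
    return 2 * c_btz(l, eps) - 2 * c_btz(l, eps)
```

The cutoff-dependent pieces cancel in `delta_c_connected`, leaving $-2\pi/(8\pi)=-1/4$.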
#### 7.1.2 Hyperscaling violating geometry
By considering the same set up, we now compute the mutual complexity between
$A$ and $B$ for the hyperscaling violating geometry. The HSC corresponding to
a single strip of length $l$ is given in eq.(52). We use this result to
compute the complexities $\mathcal{C}(l)$, $\mathcal{C}(2l+x)$ and
$\mathcal{C}(x)$. This leads to the following expression of mutual complexity
$\displaystyle\Delta\mathcal{C}$ $\displaystyle=$ $\displaystyle
2\mathcal{C}(l)-\mathcal{C}(2l+x)+\mathcal{C}(x)$ (62) $\displaystyle=$
$\displaystyle\frac{L^{d-2}}{8\pi
G_{d+1}}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\bar{V}\Big{[}2r_{t}(l)^{\frac{p+\theta}{p-\theta}-p}\left(\frac{r_{t}(l)}{r_{h}}\right)^{(m+n)p\left(1+\frac{z}{p-\theta}\right)}+r_{t}(x)^{\frac{p+\theta}{p-\theta}-p}\left(\frac{r_{t}(x)}{r_{h}}\right)^{(m+n)p\left(1+\frac{z}{p-\theta}\right)}$
$\displaystyle-
r_{t}(2l+x)^{\frac{p+\theta}{p-\theta}-p}\left(\frac{r_{t}(2l+x)}{r_{h}}\right)^{(m+n)p\left(1+\frac{z}{p-\theta}\right)}\Big{]}$
where $r_{t}(l)$, $r_{t}(2l+x)$ and $r_{t}(x)$ correspond to the turning
points of the RT surfaces associated with $l$, $2l+x$ and $x$ respectively. It
is to be observed that the divergent pieces of the HSC cancel out, which
yields a finite result. We now consider the small temperature limit
$\frac{r_{t}(x)}{r_{h}}\ll\frac{r_{t}(l)}{r_{h}}\ll\frac{r_{t}(2l+x)}{r_{h}}\ll
1$ and keep terms up to $\mathcal{O}(m)$. This in turn yields the following
expression for mutual complexity
$\displaystyle\Delta\mathcal{C}$ $\displaystyle=$ $\displaystyle
C_{1}\left[2l^{(\frac{p+\theta}{p})-(p-\theta)}-(2l+x)^{(\frac{p+\theta}{p})-(p-\theta)}+x^{(\frac{p+\theta}{p})-(p-\theta)}\right]$
(63) $\displaystyle+$ $\displaystyle
C_{2}\left[2l^{z+\frac{p+\theta}{p}}-(2l+x)^{z+\frac{p+\theta}{p}}+x^{z+\frac{p+\theta}{p}}\right]T^{1+\left(\frac{p-\theta}{z}\right)}~{}.$
Similar to the study of mutual information, we can also point out a critical
separation length $x_{c}$ at which the above expression is zero.
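The critical ratio $k_{c}=x_{c}/l$ can be located by a simple sign scan of eq.(63). The sketch below is purely illustrative: since $C_{1}$ and $C_{2}$ involve the lengthy Appendix expressions, we simply set $C_{1}=C_{2}=1$ (and $l=1$) and pick sample exponents $p=2$, $\theta=0.2$, $z=1$, $T=0.5$, for which the temperature independent bracket is positive and the thermal bracket is negative, so $\Delta\mathcal{C}$ changes sign at some $k_{c}$.

```python
def delta_c_disjoint(k, p=2.0, theta=0.2, z=1.0, T=0.5, c1=1.0, c2=1.0, l=1.0):
    """Eq.(63) with x = k*l and illustrative coefficients C1 = C2 = 1."""
    a1 = (p + theta) / p - (p - theta)        # = -0.7 for these sample values
    a2 = z + (p + theta) / p                  # = 2.1
    s = 1.0 + (p - theta) / z                 # = 2.8
    x = k * l
    b1 = 2 * l**a1 - (2 * l + x)**a1 + x**a1  # temperature independent bracket
    b2 = 2 * l**a2 - (2 * l + x)**a2 + x**a2  # thermal bracket
    return c1 * b1 + c2 * b2 * T**s

def find_kc(lo=0.5, hi=10.0, tol=1e-10):
    """Bisection for the critical ratio k_c where Delta C changes sign."""
    f_lo = delta_c_disjoint(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = delta_c_disjoint(mid)
        if f_mid * f_lo > 0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For other parameter choices the two brackets need not compete, in which case no finite $k_{c}$ exists and the initial interval must be re-bracketed.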
Figure 3: Effects of $\theta$ and $z$ on $\Delta\mathcal{C}$ (with $d=3$,
$k=0.4$, $L=1$ and $G_{d+1}=1$). One panel shows the effect of the
hyperscaling violating exponent $\theta$ (with $z=1$); the other shows the
effect of the dynamical exponent $z$ (with $\theta=0.2$).
In Fig.(3), we have graphically represented the computed result of
$\Delta\mathcal{C}$. In the above plots, we have introduced $k=\frac{x}{l}$ in
order to compute the critical separation length $x_{c}$ at which
$\Delta\mathcal{C}=0$. These plots also represent the collective effects of
$z$ and $\theta$ on the mutual complexity.
### 7.2 Case 2: Mutual complexity between two adjacent subsystems
We now consider that the boundary Cauchy slice $\sigma$ is composed of two
adjacent subsystems $A$ and $B$ of width $l$. Further, we assume $A\cap B=0$
(zero overlap), $A^{c}=B$, and that the full system (on $\sigma$) is in a
pure state. In this set up we compute the mutual complexity between a
subregion $A$ and the full system $A\cup A^{c}$ [49, 50].
#### 7.2.1 BTZ black hole
We now proceed to compute the mutual complexity between $A$ and $B=A^{c}$. In
this set up, the connected RT surface is composed of one strip of length $2l$.
This leads to the following expression for mutual complexity
$\displaystyle\Delta\mathcal{C}$ $\displaystyle=$ $\displaystyle
2\mathcal{C}(l)-\mathcal{C}(2l)$ (64) $\displaystyle=$
$\displaystyle\frac{1}{8\pi}\left[\frac{l}{\epsilon}-\pi+\frac{l}{\epsilon}-\pi-\frac{2l}{\epsilon}+\pi\right]$
$\displaystyle=$ $\displaystyle-\frac{1}{8}.$
Similar to the disjoint subregion case, mutual complexity in this set up is
also superadditive. This in turn means that the complexity of the state
corresponding to the full system is greater than the sum of the complexities
of the states in the two subsystems.
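The cancellation of the cutoff terms behind the value $-1/8$ is immediate to verify; a minimal illustrative check (with $G_{2+1}=1$):

```python
import math

def c_btz(length, eps):
    """Eq.(58): C(rho_A) = (l/eps - pi)/(8 pi), with G_{2+1} = 1."""
    return (length / eps - math.pi) / (8.0 * math.pi)

# Adjacent subsystems, eq.(64): Delta C = 2 C(l) - C(2l).
# The l/eps pieces cancel, leaving -pi/(8 pi) = -1/8 for any l and eps.
delta_c = 2 * c_btz(0.7, 0.001) - c_btz(1.4, 0.001)
```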
#### 7.2.2 Hyperscaling violating geometry
We now proceed to compute the mixed state complexity for the hyperscaling
violating geometry. By using the HSC result given in eq.(52),
$\Delta\mathcal{C}$ in this set up reads
$\displaystyle\Delta\mathcal{C}$ $\displaystyle=$ $\displaystyle\frac{1}{8\pi
G_{d+1}}\left[V(\Gamma^{min}_{A})+V(\Gamma^{min}_{B})-V(\Gamma^{min}_{A\cup
B})\right]$ $\displaystyle=$ $\displaystyle\frac{L^{d-2}}{8\pi
G_{d+1}}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\bar{V}\left[2r_{t}(l)^{\left(\frac{p+\theta}{p-\theta}\right)-p}\left(\frac{r_{t}(l)}{r_{h}}\right)^{(m+n)p\left(1+\frac{z}{p-\theta}\right)}-r_{t}(2l)^{\left(\frac{p+\theta}{p-\theta}\right)-p}\left(\frac{r_{t}(2l)}{r_{h}}\right)^{(m+n)p\left(1+\frac{z}{p-\theta}\right)}\right]~{}.$ (65)
We now consider the limit $\frac{r_{t}(l)}{r_{h}}\ll\frac{r_{t}(2l)}{r_{h}}\ll
1$. In this limit, we keep terms up to order $\mathcal{O}(m)$ in the above
expression. This in turn leads to the following expression
$\displaystyle\Delta\mathcal{C}=\left[2-2^{(\frac{p+\theta}{p})-(p-\theta)}\right]C_{1}l^{\left(\frac{p+\theta}{p}\right)-\left(p-\theta\right)}+\left[2-2^{z+\left(\frac{p+\theta}{p}\right)}\right]C_{2}l^{z+\left(\frac{p+\theta}{p}\right)}T^{1+\left(\frac{p-\theta}{z}\right)}~{}.$
(66)
We observe that, similar to the BTZ case, the above result for
$\Delta\mathcal{C}$ is less than zero. This in turn means that the mutual
complexity computed using the HSC conjecture yields a superadditive result.
## 8 Conclusion
In this paper, we compute the entanglement entropy and complexity for mixed
states by using the gauge/gravity correspondence. We start our analysis by
considering a hyperscaling violating solution as the bulk theory. This
geometry is associated with two parameters, namely, the dynamical exponent
$z$ and the hyperscaling violating exponent $\theta$. It is dual to a non-
relativistic, strongly coupled theory with hidden Fermi surfaces. We then
consider a single strip-like subsystem in order to compute the HEE of this
gravitational solution. We observe that the computed result of the HEE, along
with the internal energy $E$, satisfies a Smarr-like thermodynamic relation
associated with a generalized temperature $T_{g}$. This thermodynamic
relation naturally emerges by demanding that the generalized temperature
$T_{g}$ reproduces the Hawking temperature $T_{H}$ as the leading term in the
IR ($r_{t}\rightarrow r_{h}$) limit. In the UV limit ($\frac{r_{t}}{r_{h}}\ll 1$),
it is found that $T_{g}\propto\frac{1}{l^{z}}$, that is, $T_{g}$ falls off as
the inverse $z$-th power of the subsystem size $l$. This behaviour is
compatible with the
definition of entanglement temperature given in the literature. We then
holographically compute the relative entropy $S_{rel}$ by employing a
perturbative approach, and use this to compute the Fisher information metric.
We find that in this case the power of $l$ carries both the exponents $z$ and
$\theta$. We then consider two strip-like subsystems $A$ and $B$ separated by
a length $D$, in order to compute the EWCS ($E_{W}$) which is the holographic
analogy of EoP. We compute $E_{W}$ for both low and high temperature
conditions. In both cases, there is a temperature independent term (denoted as
$E_{W}^{T=0}$) which is independent of the dynamical exponent $z$ but depends
on the hyperscaling violating exponent $\theta$. On the other hand, for a large
enough value of $D$ (critical separation length $D_{c}$), the RT surface
$\Gamma_{AB}^{min}$ becomes disconnected and $E_{W}$ should vanish. This in
turn means that $E_{W}$ probes the phase transition between the connected and
disconnected phases of the RT surface $\Gamma_{AB}^{min}$. We evaluate this
critical separation point $D_{c}$ by using the property that at $D_{c}$ the
mutual information between $A$ and $B$ becomes zero as the two subsystems
become disconnected. This behaviour of $I(A:B)$ and $E_{W}$ is shown in
Figs. (1, 2) for both low and high temperature cases. We observe that $E_{W}$ always
satisfies the property $E_{W}>\frac{1}{2}I(A:B)$. We then discuss the property
of information spreading by computing the butterfly velocity. We observe that
the computed expression of the butterfly velocity explicitly depends on the
temperature of the dual field theory. We then compute the HSC by considering
again a single strip-like subsystem. The complexity for mixed states is
computed by following the concept of mutual complexity $\Delta\mathcal{C}$. We
have used the HSC conjecture to compute $\Delta\mathcal{C}$ for both the BTZ
black hole and
hyperscaling violating geometry. We have studied the mutual complexity by
considering two different set ups. Firstly, we consider two disjoint
subsystems $A$ and $B$ of width $l$, separated by a length $x$ on the boundary
Cauchy slice $\sigma$. Computation of $\Delta\mathcal{C}$ in this set up
probes the rate of complexification of these two subsystems when we consider
correlation (both classical and quantum) between them. Next we consider a
single subsystem $A$ in such a way that $A^{c}=B$ and $A\cap B=0$. We then
measure the mutual complexity between a subsystem $A$ and the full system
$A\cup A^{c}$. We observe that the computed result of mutual complexity is
superadditive, that is, $\Delta\mathcal{C}<0$. This in turn means that the
complexity of the state corresponding to the full system is greater than the
sum of the complexities of the states in the two subsystems. We observe that
for the BTZ black hole $\Delta\mathcal{C}$ is independent of temperature,
whereas for the hyperscaling violating solution it contains a temperature
independent term as well as a temperature dependent term. It is to be kept in
mind that this nature of $\Delta\mathcal{C}$ is observed for the HSC
conjecture; similarly, one can use the ‘Complexity=Action’ conjecture [47, 78]
or the ‘CV2.0’ conjecture to compute $\Delta\mathcal{C}$ in this context [49].
## Acknowledgements
A.S. would like to acknowledge the support of the Council of Scientific and
Industrial Research (CSIR, Govt. of India) in the form of a Senior Research
Fellowship.
## Appendix
In this appendix, we give the expressions of quantities that appear in the
main text. These are as follows.
$\displaystyle\Delta_{4}$ $\displaystyle=$
$\displaystyle\Delta_{3}\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}\right)^{(p-\theta)\left(\frac{1-p}{p}\right)}-\frac{\left(\frac{4\pi}{p(1+\frac{z}{p-\theta})}\right)^{1+\frac{p-\theta}{z}}}{2\left(p(1+\frac{z}{p-\theta})-p+1\right)}\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}\right)^{\left(\frac{p-\theta}{p}\right)\left(1+\frac{pz}{p-\theta}\right)}$
$\displaystyle\Delta_{5}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{\pi}}{(p-1)}\left(\frac{4\pi}{p(1+\frac{z}{p-\theta})}\right)^{(p-1)(\frac{p-\theta}{pz})}\frac{\Gamma\left(\frac{1+\frac{pz}{p-\theta}}{p(1+\frac{z}{p-\theta})}\right)}{\Gamma\left(\frac{2-p+\frac{pz}{p-\theta}}{2p(1+\frac{z}{p-\theta})}\right)}$
$\displaystyle C_{1}$ $\displaystyle=$ $\displaystyle\frac{L^{d-2}}{8\pi
G_{d+1}}\bar{V}_{(0)}\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}\right)^{(\frac{p+\theta}{p})-(p-\theta)}$
$\displaystyle C_{2}$ $\displaystyle=$
$\displaystyle\left[(\bar{V}_{(1)}+\bar{V}_{(2)})\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}\right)^{(\frac{p+\theta}{p})+z}-\left(\frac{p+\theta}{p-\theta}-p\right)\bar{V}_{(0)}\Delta_{3}\left(\frac{p}{\sqrt{\pi}}\frac{\Gamma\left(\frac{2p+\frac{p}{p-\theta}}{2p}\right)}{\Gamma\left(\frac{p+\frac{p}{p-\theta}}{2p}\right)}\right)^{(\frac{p+\theta}{p})-(p-\theta)}\right]$
$\displaystyle\times\frac{L^{d-2}}{8\pi
G_{d+1}}\left[\frac{4\pi}{p(1+\frac{z}{p-\theta})}\right]^{1+(\frac{p-\theta}{z})}~{}.$
## References
* [1] J. M. Maldacena, “The Large N limit of superconformal field theories and supergravity”, _Int. J. Theor. Phys._ 38 (1999) 1113, arXiv:hep-th/9711200.
* [2] E. Witten, “Anti-de Sitter space and holography”, _Adv. Theor. Math. Phys._ 2 (1998) 253, arXiv:hep-th/9802150.
* [3] O. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri and Y. Oz, “Large N field theories, string theory and gravity”, _Phys. Rept._ 323 (2000) 183, arXiv:hep-th/9905111.
* [4] S. Ryu and T. Takayanagi, “Holographic derivation of entanglement entropy from AdS/CFT”, _Phys. Rev. Lett._ 96 (2006) 181602, arXiv:hep-th/0603001.
* [5] S. Ryu and T. Takayanagi, “Aspects of Holographic Entanglement Entropy”, _JHEP_ 08 (2006) 045, arXiv:hep-th/0605073.
* [6] T. Nishioka, S. Ryu and T. Takayanagi, “Holographic Entanglement Entropy: An Overview”, _J. Phys. A_ 42 (2009) 504008, arXiv:0905.0932 [hep-th].
* [7] V. E. Hubeny, M. Rangamani and T. Takayanagi, “A Covariant holographic entanglement entropy proposal”, _JHEP_ 07 (2007) 062, arXiv:0705.0016 [hep-th].
* [8] J. Watrous, “Quantum Computational Complexity”, arXiv:0804.3401 [quant-ph].
* [9] S. Aaronson, “The Complexity of Quantum States and Transformations: From Quantum Money to Black Holes”, arXiv:1607.05256 [quant-ph].
* [10] K. Hashimoto, N. Iizuka and S. Sugishita, “Thoughts on Holographic Complexity and its Basis-dependence”, _Phys. Rev. D_ 98[4] (2018) 046002, arXiv:1805.04226 [hep-th].
* [11] R. Jefferson and R. C. Myers, “Circuit complexity in quantum field theory”, _JHEP_ 10 (2017) 107, arXiv:1707.08570 [hep-th].
* [12] S. Chapman, M. P. Heller, H. Marrochio and F. Pastawski, “Toward a Definition of Complexity for Quantum Field Theory States”, _Phys. Rev. Lett._ 120[12] (2018) 121602, arXiv:1707.08582 [hep-th].
* [13] R. Khan, C. Krishnan and S. Sharma, “Circuit Complexity in Fermionic Field Theory”, _Phys. Rev. D_ 98[12] (2018) 126001, arXiv:1801.07620 [hep-th].
* [14] M. Doroudiani, A. Naseh and R. Pirmoradian, “Complexity for Charged Thermofield Double States”, _JHEP_ 01 (2020) 120, arXiv:1910.08806 [hep-th].
* [15] L. Susskind, “Entanglement is not enough”, _Fortsch. Phys._ 64 (2016) 49, arXiv:1411.0690 [hep-th].
* [16] L. Susskind, “Computational Complexity and Black Hole Horizons”, _Fortsch. Phys._ 64 (2016) 24, [Addendum: Fortsch.Phys. 64, 44–48 (2016)], arXiv:1403.5695 [hep-th].
* [17] A. R. Brown, D. A. Roberts, L. Susskind, B. Swingle and Y. Zhao, “Holographic Complexity Equals Bulk Action?”, _Phys. Rev. Lett._ 116[19] (2016) 191301, arXiv:1509.07876 [hep-th].
* [18] A. R. Brown, D. A. Roberts, L. Susskind, B. Swingle and Y. Zhao, “Complexity, action, and black holes”, _Phys. Rev. D_ 93[8] (2016) 086006, arXiv:1512.04993 [hep-th].
* [19] K. Goto, H. Marrochio, R. C. Myers, L. Queimada and B. Yoshida, “Holographic Complexity Equals Which Action?”, _JHEP_ 02 (2019) 160, arXiv:1901.00014 [hep-th].
* [20] M. Alishahiha, “Holographic Complexity”, _Phys. Rev. D_ 92[12] (2015) 126009, arXiv:1509.06614 [hep-th].
* [21] D. Carmi, R. C. Myers and P. Rath, “Comments on Holographic Complexity”, _JHEP_ 03 (2017) 118, arXiv:1612.00433 [hep-th].
* [22] S. Karar and S. Gangopadhyay, “Holographic complexity for Lifshitz system”, _Phys. Rev. D_ 98[2] (2018) 026029, arXiv:1711.10887 [hep-th].
* [23] S. Gangopadhyay, D. Jain and A. Saha, “Universal pieces of holographic entanglement entropy and holographic subregion complexity”, _Phys. Rev. D_ 102[4] (2020) 046002, arXiv:hep-th/2006.03428.
* [24] B. M. Terhal, M. Horodecki, D. W. Leung and D. P. DiVincenzo, “The entanglement of purification”, _Journal of Mathematical Physics_ 43[9] (2002) 4286–4298.
* [25] G. Vidal and R. Werner, “Computable measure of entanglement”, _Phys. Rev. A_ 65 (2002) 032314, arXiv:quant-ph/0102117.
* [26] T. Takayanagi and K. Umemoto, “Entanglement of purification through holographic duality”, _Nature Phys._ 14[6] (2018) 573, arXiv:1708.09393 [hep-th].
* [27] R. Espíndola, A. Guijosa and J. F. Pedraza, “Entanglement Wedge Reconstruction and Entanglement of Purification”, _Eur. Phys. J. C_ 78[8] (2018) 646, arXiv:1804.05855 [hep-th].
* [28] C. A. Agón, J. De Boer and J. F. Pedraza, “Geometric Aspects of Holographic Bit Threads”, _JHEP_ 05 (2019) 075, arXiv:1811.08879 [hep-th].
* [29] K. Umemoto and Y. Zhou, “Entanglement of Purification for Multipartite States and its Holographic Dual”, _JHEP_ 10 (2018) 152, arXiv:1805.02625 [hep-th].
* [30] H. Hirai, K. Tamaoka and T. Yokoya, “Towards Entanglement of Purification for Conformal Field Theories”, _PTEP_ 2018[6] (2018) 063B03, arXiv:1803.10539 [hep-th].
* [31] N. Bao, A. Chatwin-Davies, J. Pollack and G. N. Remmen, “Towards a Bit Threads Derivation of Holographic Entanglement of Purification”, _JHEP_ 07 (2019) 152, arXiv:1905.04317 [hep-th].
* [32] Y. Kusuki and K. Tamaoka, “Entanglement Wedge Cross Section from CFT: Dynamics of Local Operator Quench”, _JHEP_ 02 (2020) 017, arXiv:1909.06790 [hep-th].
* [33] H.-S. Jeong, K.-Y. Kim and M. Nishida, “Reflected Entropy and Entanglement Wedge Cross Section with the First Order Correction”, _JHEP_ 12 (2019) 170, arXiv:1909.02806 [hep-th].
* [34] K. Umemoto, “Quantum and Classical Correlations Inside the Entanglement Wedge”, _Phys. Rev. D_ 100[12] (2019) 126021, arXiv:1907.12555 [hep-th].
* [35] J. Harper and M. Headrick, “Bit threads and holographic entanglement of purification”, _JHEP_ 08 (2019) 101, arXiv:1906.05970 [hep-th].
* [36] N. Jokela and A. Pönni, “Notes on entanglement wedge cross sections”, _JHEP_ 07 (2019) 087, arXiv:1904.09582 [hep-th].
* [37] K. Babaei Velni, M. R. Mohammadi Mozaffar and M. Vahidinia, “Some Aspects of Entanglement Wedge Cross-Section”, _JHEP_ 05 (2019) 200, arXiv:1903.08490 [hep-th].
* [38] M. Ghodrati, X.-M. Kuang, B. Wang, C.-Y. Zhang and Y.-T. Zhou, “The connection between holographic entanglement and complexity of purification”, _JHEP_ 09 (2019) 009, arXiv:1902.02475 [hep-th].
* [39] J. Boruch, “Entanglement wedge cross-section in shock wave geometries”, _JHEP_ 07 (2020) 208, arXiv:2006.10625 [hep-th].
* [40] K. Babaei Velni, M. R. Mohammadi Mozaffar and M. H. Vahidinia, “Evolution of entanglement wedge cross section following a global quench”, _JHEP_ 08 (2020) 129, arXiv:2005.05673 [hep-th].
* [41] S. Chakrabortty, S. Pant and K. Sil, “Effect of back reaction on entanglement and subregion volume complexity in strongly coupled plasma”, _JHEP_ 06 (2020) 061, arXiv:2004.06991 [hep-th].
* [42] K. Tamaoka, “Entanglement Wedge Cross Section from the Dual Density Matrix”, _Phys. Rev. Lett._ 122[14] (2019) 141601, arXiv:1809.09109 [hep-th].
* [43] S. Dutta and T. Faulkner, “A canonical purification for the entanglement wedge cross-section”, arXiv:1905.00577 [hep-th].
* [44] J. Chu, R. Qi and Y. Zhou, “Generalizations of Reflected Entropy and the Holographic Dual”, _JHEP_ 03 (2020) 151, arXiv:1909.10456 [hep-th].
* [45] M. R. Mohammadi Mozaffar and A. Mollabashi, “Logarithmic Negativity in Lifshitz Harmonic Models”, _J. Stat. Mech._ 1805[5] (2018) 053113, arXiv:1712.03731 [hep-th].
* [46] J. Kudler-Flam and S. Ryu, “Entanglement negativity and minimal entanglement wedge cross sections in holographic theories”, _Phys. Rev. D_ 99[10] (2019) 106014, arXiv:1808.00446 [hep-th].
* [47] M. Alishahiha, K. Babaei Velni and M. R. Mohammadi Mozaffar, “Black hole subregion action and complexity”, _Phys. Rev. D_ 99[12] (2019) 126016, arXiv:1809.06031 [hep-th].
* [48] E. Cáceres, J. Couch, S. Eccles and W. Fischler, “Holographic Purification Complexity”, _Phys. Rev. D_ 99[8] (2019) 086016, arXiv:1811.10650 [hep-th].
* [49] C. A. Agón, M. Headrick and B. Swingle, “Subsystem Complexity and Holography”, _JHEP_ 02 (2019) 145, arXiv:1804.01561 [hep-th].
* [50] E. Caceres, S. Chapman, J. D. Couch, J. P. Hernandez, R. C. Myers and S.-M. Ruan, “Complexity of Mixed States in QFT and Holography”, _JHEP_ 03 (2020) 012, arXiv:1909.10557 [hep-th].
* [51] H. A. Camargo, L. Hackl, M. P. Heller, A. Jahn, T. Takayanagi and B. Windt, “Entanglement and Complexity of Purification in (1+1)-dimensional free Conformal Field Theories”, arXiv:2009.11881 [hep-th].
* [52] N. Ogawa, T. Takayanagi and T. Ugajin, “Holographic Fermi Surfaces and Entanglement Entropy”, _JHEP_ 01 (2012) 125, arXiv:1111.1023 [hep-th].
* [53] L. Huijse, S. Sachdev and B. Swingle, “Hidden Fermi surfaces in compressible states of gauge-gravity duality”, _Phys. Rev. B_ 85 (2012) 035121, arXiv:1112.0573 [cond-mat.str-el].
* [54] M. A. Metlitski and S. Sachdev, “Quantum phase transitions of metals in two spatial dimensions. I. Ising-nematic order”, _Phys. Rev. B_ 82 (2010) 075127.
* [55] M. Srednicki, “Entropy and area”, _Phys. Rev. Lett._ 71 (1993) 666, arXiv:hep-th/9303048.
* [56] M. R. Mohammadi Mozaffar and A. Mollabashi, “Entanglement in Lifshitz-type Quantum Field Theories”, _JHEP_ 07 (2017) 120, arXiv:1705.00483 [hep-th].
* [57] M. Alishahiha, A. Faraji Astaneh, M. R. Mohammadi Mozaffar and A. Mollabashi, “Complexity Growth with Lifshitz Scaling and Hyperscaling Violation”, _JHEP_ 07 (2018) 042, arXiv:1802.06740 [hep-th].
* [58] A. Saha, S. Gangopadhyay and J. P. Saha, “Holographic entanglement entropy and generalized entanglement temperature”, _Phys. Rev. D_ 100[10] (2019) 106008, arXiv:1906.03159 [hep-th].
* [59] A. Saha, S. Gangopadhyay and J. P. Saha, “Generalized entanglement temperature and entanglement Smarr relation”, _Phys. Rev. D_ 102 (2020) 086010, arXiv:2004.00867 [hep-th].
* [60] J. Bhattacharya, M. Nozaki, T. Takayanagi and T. Ugajin, “Thermodynamical Property of Entanglement Entropy for Excited States”, _Phys. Rev. Lett._ 110[9] (2013) 091602, arXiv:1212.1164 [hep-th].
* [61] S. Banerjee, J. Erdmenger and D. Sarkar, “Connecting Fisher information to bulk entanglement in holography”, _JHEP_ 08 (2018) 001, arXiv:1701.02319 [hep-th].
* [62] N. Lashkari and M. Van Raamsdonk, “Canonical Energy is Quantum Fisher Information”, _JHEP_ 04 (2016) 153, arXiv:1508.00897 [hep-th].
* [63] S. Karar, R. Mishra and S. Gangopadhyay, “Holographic complexity of boosted black brane and Fisher information”, _Phys. Rev. D_ 100[2] (2019) 026006, arXiv:1904.13090 [hep-th].
* [64] S. Karar and S. Gangopadhyay, “Holographic information theoretic quantities for Lifshitz black hole”, _Eur. Phys. J. C_ 80[6] (2020) 515, arXiv:2002.08272 [hep-th].
* [65] R.-Q. Yang, C.-Y. Zhang and W.-M. Li, “Holographic entanglement of purification for thermofield double states and thermal quench”, _JHEP_ 01 (2019) 114, arXiv:1810.00420 [hep-th].
* [66] O. Ben-Ami, D. Carmi and J. Sonnenschein, “Holographic Entanglement Entropy of Multiple Strips”, _JHEP_ 11 (2014) 144, arXiv:1409.6305 [hep-th].
* [67] P. Hayden and J. Preskill, “Black holes as mirrors: Quantum information in random subsystems”, _JHEP_ 09 (2007) 120, arXiv:0708.4025 [hep-th].
* [68] Y. Sekino and L. Susskind, “Fast Scramblers”, _JHEP_ 10 (2008) 065, arXiv:0808.2096 [hep-th].
* [69] D. A. Roberts and B. Swingle, “Lieb-Robinson Bound and the Butterfly Effect in Quantum Field Theories”, _Phys. Rev. Lett._ 117[9] (2016) 091602, arXiv:1603.09298 [hep-th].
* [70] J. Maldacena, S. H. Shenker and D. Stanford, “A bound on chaos”, _JHEP_ 08 (2016) 106, arXiv:1503.01409 [hep-th].
* [71] R. Bousso, S. Leichenauer and V. Rosenhaus, “Light-sheets and AdS/CFT”, _Phys. Rev. D_ 86 (2012) 046009, arXiv:1203.6619 [hep-th].
* [72] S. H. Shenker and D. Stanford, “Black holes and the butterfly effect”, _JHEP_ 03 (2014) 067, arXiv:1306.0622 [hep-th].
* [73] M. Mezei and D. Stanford, “On entanglement spreading in chaotic systems”, _JHEP_ 05 (2017) 065, arXiv:1608.05101 [hep-th].
* [74] B. Czech, J. L. Karczmarek, F. Nogueira and M. Van Raamsdonk, “The Gravity Dual of a Density Matrix”, _Class. Quant. Grav._ 29 (2012) 155009, arXiv:1204.1330 [hep-th].
* [75] B. S. DiNunno, N. Jokela, J. F. Pedraza and A. Pönni, “Quantum information probes of charge fractionalization”, arXiv:2101.11636 [hep-th].
* [76] M. Blake, “Universal Charge Diffusion and the Butterfly Effect in Holographic Theories”, _Phys. Rev. Lett._ 117[9] (2016) 091601, arXiv:1603.08510 [hep-th].
* [77] J. Couch, W. Fischler and P. H. Nguyen, “Noether charge, black hole volume, and complexity”, _JHEP_ 03 (2017) 119, arXiv:1610.02038 [hep-th].
* [78] R. Auzzi, S. Baiguera, A. Legramandi, G. Nardelli, P. Roy and N. Zenoni, “On subregion action complexity in AdS3 and in the BTZ black hole”, _JHEP_ 01 (2020) 066, arXiv:1910.00526 [hep-th].
* [79] M. Banados, C. Teitelboim and J. Zanelli, “The Black hole in three-dimensional space-time”, _Phys. Rev. Lett._ 69 (1992) 1849, arXiv:hep-th/9204099.
* [80] A. Saha, S. Karar and S. Gangopadhyay, “Bulk geometry from entanglement entropy of CFT”, _Eur. Phys. J. Plus_ 135[2] (2020) 132, arXiv:1807.04646 [hep-th].
* [81] O. Ben-Ami and D. Carmi, “On Volumes of Subregions in Holography and Complexity”, _JHEP_ 11 (2016) 129, arXiv:1609.02514 [hep-th].
arXiv:2101.00888
# Imaging vibrations of locally gated, electromechanical few layer graphene
resonators with a moving vacuum enclosure
Heng Lu,1,2 Chen Yang,1,2 Ye Tian,1,2 Jun Lu,1,2 Fanqi Xu,1,2 FengNan Chen,1,2
Yan Ying,1,2 Kevin G. Schädler,3 Chinhua Wang,1,2 Frank H. L. Koppens,3
Antoine Reserbat-Plantey3 ([email protected]) and Joel Moser1,2 ([email protected])

1 School of Optoelectronic Science and Engineering & Collaborative Innovation
Center of Suzhou Nano Science and Technology, Soochow University, Suzhou
215006, People’s Republic of China
2 Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province &
Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow
University, Suzhou 215006, People’s Republic of China
3 ICFO–Institut de Ciencies Fotoniques, The Barcelona Institute of Science and
Technology, 08860 Castelldefels Barcelona, Spain
###### Abstract
Imaging the vibrations of nanomechanical resonators means measuring their
flexural mode shapes from the dependence of their frequency response on in-
plane position. Applied to two-dimensional resonators, this technique provides
a wealth of information on the mechanical properties of atomically-thin
membranes. We present a simple and robust system to image the vibrations of
few layer graphene (FLG) resonators at room temperature and in vacuum with an
in-plane displacement precision of $\approx 0.20$ $\mu$m. It consists of a
sturdy vacuum enclosure mounted on a three-axis micropositioning stage and
designed for free space optical measurements of vibrations. The system is
equipped with ultra-flexible radio frequency waveguides to electrically
actuate resonators. With it we characterize the lowest frequency mode of a FLG
resonator by measuring its frequency response as a function of position on the
membrane. The resonator is suspended over a nanofabricated local gate
electrode acting both as a mirror and as a capacitor plate to actuate
vibrations at radio frequencies. From these measurements, we estimate the
ratio of thermal expansion coefficient to thermal conductivity of the
membrane, and we measure the effective mass of the lowest frequency mode. We
complement our study with a globally gated resonator and image its first three
vibration modes. There, we find that folds in the membrane locally suppress
vibrations.
## I Introduction
Imaging the flexural vibrations of two-dimensional (2-D) nanomechanical
resonators is an important task. These resonators, made of atomically-thin
membranes of graphene Bunch _et al._ (2007); Chen _et al._ (2009); Barton
_et al._ (2011); Reserbat-Plantey _et al._ (2012); Barton _et al._ (2012);
Mathew _et al._ (2016); Zhang _et al._ (2020) and various thin materials
such as few layer transition metal dichalcogenides Sengupta _et al._ (2010);
Castellanos-Gomez _et al._ (2013); Wang _et al._ (2014); Morell _et al._
(2016); Cartamil-Bueno _et al._ (2015), offer the opportunity to study the
physics of vibrational modes in regimes where extremely small mass, low
bending rigidity, large stretching rigidity and large aspect ratio combine to
give rise to a wealth of mechanical behaviors Dykman (2012). In its simplest
form, imaging vibrations means measuring their time averaged, resonant
amplitude as a function of position on the membrane. Driven vibrations of
graphene resonators were imaged with an Atomic Force Microscope (AFM) Garcia-
Sanchez _et al._ (2008) and with an optical interferometry setup Barton _et
al._ (2011). Both driven and thermal vibrations of resonators based on
graphene Davidovikj _et al._ (2016); Reserbat-Plantey _et al._ (2016), MoS2
Wang _et al._ (2014), black phosphorus Wang _et al._ (2016), and hexagonal
boron nitride Zheng _et al._ (2017) were measured using a similar optical
interferometry technique. These measurements advanced our understanding of 2-D
resonators in several important ways. They made it possible to identify modes
in the vibration spectrum unambiguously, including degenerate modes that are
otherwise difficult to detect Barton _et al._ (2011); Davidovikj _et al._
(2016). They revealed the impact of unevenly distributed stress and mass on
the mode shape Garcia-Sanchez _et al._ (2008). They also proved to be
exquisitely sensitive to mechanical anisotropies, such as the anisotropy of
Young’s modulus that stems from the crystal structure of the membrane Wang
_et al._ (2016). All these interesting measurements may benefit nanomechanical
sensing applications, including spatially resolved nanomechanical mass
spectroscopy Hanay _et al._ (2015). Vibration imaging may also complement
other techniques including AFM, Raman spectroscopy and photoluminescence where
these are used to inspect 2-D materials for defects, impurities and grain
boundaries.
With all its merits, imaging vibrations of 2-D resonators remains a
challenging task. Difficulties come in part from the necessity to measure
vibrations in a controlled environment. Measuring the mechanical response of
thin membranes while minimizing their damping rates requires keeping them in
vacuum. This immediately places technical constraints on the measuring
equipment. Piezo linear actuators Ho and Jan (2016); Liu and Li (2016) were
used in some imaging experiments, where they either moved the vacuum enclosure
Davidovikj _et al._ (2016) or moved a microscope objective with respect to
the vacuum enclosure Reserbat-Plantey _et al._ (2016), while a high-precision
motorized stage was used in other experiments Wang _et al._ (2014, 2016);
Zheng _et al._ (2017). Adding to the complexity of imaging vibrations, radio
frequency electrical signals are sometimes supplied to the resonator to drive
vibrations while the position of the resonator is changing. Implementing this
driving technique is difficult because radio frequency cables are stiff and
hinder the motion of piezoelectric actuators and motorized stages.
Here we demonstrate vibration imaging of electrically driven, few layer
graphene (FLG) resonators using a moving vacuum enclosure. The enclosure is
mounted on a three-axis micropositioning stage with a measured in-plane
displacement precision of $\approx 0.20$ $\mu$m and is equipped with homemade,
ultra-flexible radio frequency waveguides. Our design makes use of the large
load capacity of the stage and the high sensitivity of the manual adjusters
attached to it. The adjusters are driven by inexpensive stepper motors
connected to them by simple gears and rubber belts. The benefits of our system
are its sturdy design that protects it against acoustic vibrations, its
submicrometer displacement precision unaffected by a rather heavy load, and
its capability of delivering radio frequency signals to a moving resonator. We
employ our system to image vibrations of two FLG resonators. The first
resonator is suspended over a nanofabricated local gate electrode acting both
as a mirror for optical detection and as a capacitor plate to actuate
vibrations at radio frequencies. We measure the hardening of the spring
constant of the lowest frequency mode of the resonator as a function of
absorbed optical power. From these measurements we estimate the ratio of
thermal expansion coefficient to thermal conductivity of the membrane, which
plays an important role in thermal transport across the resonator. In
addition, imaging the vibrations of this mode allows us to measure its
effective mass, which is important for quantitative mass and force sensing
applications based on 2-D resonators. Our measurements combine three
interesting features which, to the best of our knowledge, have not been
reported thus far in a single device: they demonstrate (i) vibration imaging
of a lowest frequency mode which (ii) resonates above 60 MHz using (iii) a
local metal gate to enhance optical readout. The second resonator has a fold
in the membrane that is not visible in an optical microscope. We image the
first three vibration modes and show that the fold locally suppresses
vibrations. These measurements demonstrate that our system can be used to
identify mesoscopic defects in the resonator and study their impact on the
mechanical response.
## II Experimental setup
The mechanical part of our system is robust and easy to operate. Our enclosure
is shaped as a cylinder with a diameter of 120 mm and a depth of 75 mm, and is
made of stainless steel (Fig. 1a, b). There is a clear advantage to such a
sturdy design: with a total weight of $\approx 2.2$ kg, the enclosure acts as
an efficient damper for acoustic vibrations which would otherwise preclude
certain experiments, such as those where the frequency of the resonator is
slowly modulated. To enable free space optical measurements, a window made of
fused silica, with a diameter of 12.7 mm and a thickness of 0.4 mm, is glued
on the front panel with Torr Seal epoxy. A holder for the substrate hosting
the resonators is affixed to the inner side of the front panel facing the
window (Fig. 1c). The holder accommodates a radio frequency printed circuit
board (PCB) that is connected to semi-rigid cables on its back side using SMA
connectors soldered through the PCB. The enclosure is sealed with Viton
o-rings, enabling a dynamic vacuum as low as $10^{-6}$ mbar using a small size
turbomolecular pump connected to a one-meter long KF16 flexible bellow. The
enclosure is mounted on a three-axis micromechanical linear stage (Newport
M-562-XYZ). Each axis is connected to a manual adjuster (Newport DS-4F) with a
specified fine sensitivity of 20 nm. Each adjuster is driven by a stepper
motor (Makeblock 42BYG) whose shaft is connected to the adjuster using gears
and a rubber belt. In vibration imaging experiments where a laser beam is used
to measure the response of a resonator, our system guarantees that only the
position of the resonator is varied while the light path remains unchanged, so
the incident optical power and the shape of the focused beam are unaffected by
the imaging process. Using a nanofabricated calibration sample, we measure the
precision of in-plane displacement of the enclosure to be $\approx 0.20$
$\mu$m (Supplementary Material, Section I).
Our optical setup, shown in Fig. 2a, is similar to those used to detect
vibrations of 2-D resonators (see e.g. Ref. Bunch _et al._ (2007)). Its
design originates from the setup presented in Refs. Carr and Craighead (1997);
Karabacak _et al._ (2005). Briefly, we employ a Helium-Neon laser emitting at
a wavelength $\lambda\approx 633$ nm as a monochromatic light source. The
output of the laser is filtered with a single mode fiber to obtain a clean
fundamental transverse Gaussian mode. The combination of a polarizing beam
splitter and a quarter-wave plate ensures that light incident on the resonator
and reflected light have orthogonal polarizations so the photodetector mostly
collects reflected light. Incident light is focused with and reflected light
is collected by a long working distance objective (Mitutoyo M Plan Apo 100X)
with a numerical aperture $\textrm{NA}=0.7$ ensuring a focused beam size
limited by diffraction. Radio frequency voltages are supplied to the
resonators (Fig. 2b) via ultra-flexible waveguides (Fig. 2c), as discussed
later. We measure the radius $w_{0}$ of the waist of the focused laser beam
using a modified version of the knife edge technique (Supplementary Material,
Section II). We find $w_{0}\approx 0.40$ $\mu$m, which is consistent with the
input beam parameters.
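The knife-edge extraction of $w_{0}$ can be sketched as a fit to the textbook error-function edge profile of a Gaussian beam. This is a minimal sketch on synthetic data: the target $w_{0}=0.40$ $\mu$m is from the text, while the scan range, noise level and initial guesses are illustrative assumptions (the modified method actually used is described in Supplementary Material, Section II).

```python
# Knife-edge estimate of the focused beam waist w0 (illustrative sketch on
# synthetic data; scan range, noise and initial guesses are assumptions).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def knife_edge(x, p0, x0, w0):
    """Power past a knife at position x for a Gaussian beam of 1/e^2 radius w0."""
    return 0.5 * p0 * (1.0 + erf(np.sqrt(2.0) * (x - x0) / w0))

# Synthetic scan: w0 = 0.40 um as quoted in the text, with 1% noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.5, 1.5, 121)                     # knife position (um)
p = knife_edge(x, 1.0, 0.0, 0.40) + 0.01 * rng.normal(size=x.size)

popt, _ = curve_fit(knife_edge, x, p, p0=[1.0, 0.1, 0.3])
print(f"fitted w0 = {abs(popt[2]):.3f} um")
```

The fitted waist lands close to the 0.40 $\mu$m value quoted above; in practice the same fit is repeated at several heights to locate the focal plane.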
## III Results and discussion
With the beam optimally focused, we demonstrate that our system can image the
flexural vibrations of 2-D resonators based on suspended membranes of few
layer graphene (FLG). We present vibration imaging data obtained with two
devices. The first device is a locally gated FLG resonator. It is
characterized by a nanofabricated gate electrode made of evaporated gold over
which FLG is suspended. The gate electrode serves as a highly reflective
mirror for optical detection, and also forms a capacitor with FLG to actuate
vibrations at radio frequencies. We use this device to measure the ratio of
thermal expansion coefficient to thermal conductivity of FLG. We also use it
to measure the effective mass of the fundamental mode of vibration. The second
device is a globally gated FLG resonator. It consists of FLG suspended over a
doped silicon substrate. We use this device to demonstrate that our system can
be employed to detect the presence of folds in the thin membrane that are
otherwise invisible in an optical microscope. The advantage of a local gate
made of gold over a global silicon gate is the higher reflectance of the
mirror combined with the possibility of actuating individual resonators within
an array of devices. To the best of our knowledge, the use of a local gate
both for optical detection and capacitive actuation has not been reported thus
far.
We first consider our locally gated resonator. It is fabricated by exfoliating
FLG and transferring it onto a prefabricated substrate using the viscoelastic
transfer method Castellanos-Gomez _et al._ (2014). The substrate is thermal
silicon oxide grown on highly resistive silicon. It is patterned with source
and drain electrodes to contact FLG (Fig. 3a), a 3 $\mu$m diameter
cylindrical cavity etched in the oxide, and a local gate electrode
nanofabricated at the bottom of the cavity (Fig. 3a, b). The distance between
FLG and the gate is nominally 250 nm (Fig. 3c). We estimate that our FLG is
composed of $N_{L}=8$ graphene layers from measurements of optical power
reflected by the oxidized silicon substrate with and without supported FLG,
away from the cavity. We obtain $N_{L}$ by comparing the ratio of these
measured powers to calculations based on the transfer matrix method Roddaro
_et al._ (2007); Chen _et al._ (2018), see Fig. 3d and Supplementary
Material, Section III. Additional reflected power measurements made on the
gold electrodes confirm this result. The resonator is actuated electrically by
applying an oscillating voltage $V_{\textrm{ac}}$ superimposed on a dc offset
$V_{\textrm{dc}}$ between FLG and the gate (Fig. 2b). This results in an
electrostatic force of amplitude
$V_{\textrm{dc}}V_{\textrm{ac}}|\frac{\mathrm{d}C}{\mathrm{d}z}|$, where the
third factor is the derivative with respect to flexural displacement of the
capacitance $C$ between FLG and the gate. We favor electrostatic drive over
optical drive Bunch _et al._ (2007) because the former allows actuating very
high frequency vibrations without the inconvenience of dissipation caused by
photothermal effects. To supply radio frequency signals for this actuation
without impeding the motion of the micropositioning stage, we use a homemade
waveguide consisting of a copper microstrip patterned on a 15 cm long ribbon
cut from a thin Kapton film (Fig. 2c). Our waveguide is ultra-flexible, its
insertion loss is smaller than 1 dB up to at least 3 GHz, and its scattering
parameters are insensitive to bending and twisting of the waveguide. The
resonator is kept at room temperature in a vacuum of $\approx 10^{-6}$ mbar.
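As a rough consistency check, the drive force amplitude $V_{\textrm{dc}}V_{\textrm{ac}}|\frac{\mathrm{d}C}{\mathrm{d}z}|$ can be evaluated in a parallel-plate approximation. The approximation is an assumption (the real $\frac{\mathrm{d}C}{\mathrm{d}z}$ depends on the deflected membrane geometry); the numbers are the device values quoted above (3 $\mu$m cavity, 250 nm gap, $V_{\textrm{dc}}=5$ V, $V_{\textrm{ac}}=12.6$ mV${}_{\textrm{rms}}$).

```python
# Order-of-magnitude electrostatic drive force, assuming a parallel-plate
# capacitor (an assumption; the true dC/dz depends on membrane deflection).
import numpy as np

eps0 = 8.854e-12                   # vacuum permittivity (F/m)
radius, gap = 1.5e-6, 250e-9       # membrane radius and FLG-gate distance (m)
V_dc, V_ac = 5.0, 12.6e-3          # dc bias and rms drive voltage (V)

dC_dz = eps0 * np.pi * radius**2 / gap**2   # |dC/dz| for a parallel plate (F/m)
F_drive = V_dc * V_ac * dC_dz               # drive force amplitude (N)
print(f"|dC/dz| = {dC_dz:.2e} F/m, F_drive = {F_drive:.2e} N")
```

This places the drive force in the tens of piconewtons, a scale consistent with the linear-response regime discussed below.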
We detect the flexural vibrations of the resonator using a standard technique
Bunch _et al._ (2007); Carr and Craighead (1997); Karabacak _et al._ (2005).
Briefly, the resonator is placed in an optical standing wave from which it
absorbs energy. Vibrations render absorbed energy time-dependent, resulting in
modulations of the reflected power. Correspondingly, the mean square voltage
$\langle V_{\textrm{pd}}^{2}\rangle$ at the output of the photodetector reads
$\langle V_{\textrm{pd}}^{2}\rangle\approx\left(G\times T\times
P_{\textrm{inc}}\Big{|}\frac{\mathrm{d}R}{\mathrm{d}z}\Big{|}_{z_{M}}\right)^{2}\langle
z_{\textrm{vib}}^{2}\rangle+\langle\delta V_{\textrm{b}}^{2}\rangle\,,$ (1)
where $G$, in units of V/W, is the product of the responsivity and the
transimpedance gain of the photodetector, $T$ is the transmittance of the
reflected light path, $P_{\textrm{inc}}$ is the optical power incident on the
resonator, $z_{\textrm{vib}}$ is the amplitude of vibrations in the flexural
direction, $\delta V_{\textrm{b}}$ is the amplitude of fluctuations of the
measurement background, and $\langle\cdot\rangle$ averages over time. The
quantity $|\frac{\mathrm{d}R}{\mathrm{d}z}|_{z_{M}}$ is the derivative of the
reflectance $R$ of the whole device consisting of the resonator and the
reflective gate, at a distance $z_{M}$ between the resonator and the gate. The
local gate acts as a highly reflective mirror that optimizes the transduction of
$\langle z_{\textrm{vib}}^{2}\rangle$ into $\langle
V_{\textrm{pd}}^{2}\rangle$. We calculate
$|\frac{\mathrm{d}R}{\mathrm{d}z}|\approx 5\times 10^{-3}$/nm for our device
at $\lambda=633$ nm, which is about twice the value calculated with a regular
silicon gate Roddaro _et al._ (2007). To study the frequency response
$\langle z_{\textrm{vib}}^{2}\rangle(f)$ of the resonator, we sweep the drive
frequency $f$ of $V_{\textrm{ac}}$ and measure $\langle
V_{\textrm{pd}}^{2}\rangle$ with a spectrum analyzer. Figure 3e displays
$\langle V_{\textrm{pd}}^{2}\rangle$ as a function of $f$ and
$V_{\textrm{dc}}$ for the lowest frequency mode we can resolve. The response
shifts to higher frequencies as $|V_{\textrm{dc}}|$ increases mostly because
the electrostatic force
$\propto|\frac{\mathrm{d}C}{\mathrm{d}z}|V_{\textrm{dc}}^{2}$ tensions the
membrane as it pulls it towards the gate, making the resonant frequency of the
mode tunable.
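Equation (1) can be inverted to convert a background-corrected signal $\bar{V}$ into an rms vibration amplitude. A minimal sketch: $|\frac{\mathrm{d}R}{\mathrm{d}z}|=5\times 10^{-3}$/nm and $P_{\textrm{inc}}=110$ $\mu$W are taken from the text, while the detector gain $G$, the path transmittance $T$ and the example voltage are assumed values for illustration.

```python
# Inverting Eq. (1): background-corrected photodetector voltage -> rms amplitude.
# |dR/dz| and P_inc are from the text; G, T and V_bar are assumed for illustration.

G = 1.0e4            # responsivity x transimpedance gain (V/W), assumed
T = 0.5              # transmittance of the reflected light path, assumed
P_inc = 110e-6       # incident optical power (W)
dR_dz = 5e-3 * 1e9   # reflectance derivative: 5e-3 per nm -> per meter

responsivity = G * T * P_inc * dR_dz   # volts per meter of displacement
V_bar = 1e-3                           # example: 1 mV rms background-corrected signal
z_rms = V_bar / responsivity           # rms vibration amplitude (m)
print(f"transduction = {responsivity*1e-6:.2f} mV/nm, z_rms = {z_rms*1e9:.2f} nm")
```

With these assumed gains a millivolt-scale signal corresponds to sub-nanometer motion, which illustrates why the reflective local gate (which raises $|\frac{\mathrm{d}R}{\mathrm{d}z}|$) directly improves readout.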
With the resonator positioned near the center of the focused beam, we
characterize the lowest frequency resonance in the spectrum of $\langle
V_{\textrm{pd}}^{2}\rangle(f)$ and its dependence on $P_{\textrm{inc}}$. We
have verified that, on resonance, the electromechanical signal represented by
the root mean square voltage $\bar{V}=(\langle
V_{\textrm{pd}}^{2}\rangle-\langle\delta V_{\textrm{b}}^{2}\rangle)^{1/2}$
increases linearly with $V_{\textrm{ac}}$ within the range used in this work
while the resonant frequency does not shift, indicating that the resonator is
driven in a regime where the restoring force is linear in displacement. Figure
4a shows $\langle V_{\textrm{pd}}^{2}\rangle^{1/2}(f)$ measured at
$V_{\textrm{ac}}=12.6$ mV${}_{\textrm{rms}}$ and $V_{\textrm{dc}}=5$ V for
$P_{\textrm{inc}}$ ranging from 20 $\mu$W to 205 $\mu$W. Here as well, we have
verified that the peak value of $\bar{V}(f)$ increases linearly with
$P_{\textrm{inc}}$. Overall, $\bar{V}$ can be linearly amplified either by
increasing the vibrational amplitude electrostatically with $V_{\textrm{ac}}$
or by increasing the optical readout with a larger probe power
$P_{\textrm{inc}}$. The lineshape of $\bar{V}^{2}(f)$ is Lorentzian, which
indicates that the resonator behaves as a damped harmonic oscillator with
susceptibility
$\chi(f)=\frac{1}{4\pi^{2}}\frac{1}{f_{0}^{2}-f^{2}-\mathrm{i}f_{0}f/Q}\,,$
(2)
where $Q$ is the spectral quality factor, $f_{0}$ is the resonant frequency of
the vibrational mode, and $\bar{V}^{2}(f)\propto|\chi(f)|^{2}$. Figure 4b
shows that $Q$ is low and does not change within the range of
$P_{\textrm{inc}}$, while $f_{0}$ increases with $P_{\textrm{inc}}$ (Fig. 4c).
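The fits behind Figs. 4b,c can be sketched as a least-squares fit of $\bar{V}^{2}(f)$ to $|\chi(f)|^{2}$ from Eq. (2). The data here are synthetic: $f_{0}=66$ MHz and $Q=50$ are assumed round numbers (the text only states that the mode resonates above 60 MHz and that $Q$ is low), as are the noise level and frequency window.

```python
# Extracting f0 and Q by fitting Vbar^2(f) to |chi(f)|^2 of Eq. (2).
# Synthetic data with assumed f0 = 66 MHz and Q = 50.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, a, f0, Q):
    """Vbar^2 proportional to |chi(f)|^2 for a damped harmonic oscillator."""
    return a / ((f0**2 - f**2)**2 + (f0 * f / Q)**2)

f0_true, Q_true = 66.0, 50.0                 # MHz and quality factor (assumed)
f = np.linspace(60.0, 72.0, 401)
rng = np.random.default_rng(1)
v2 = lorentzian(f, 1.0, f0_true, Q_true) * (1.0 + 0.02 * rng.normal(size=f.size))

popt, _ = curve_fit(lorentzian, f, v2, p0=[1.0, 65.0, 30.0])
print(f"f0 = {popt[1]:.2f} MHz, Q = {popt[2]:.1f}")
```

Repeating this fit at each $P_{\textrm{inc}}$ (or at each in-plane position) yields the $Q(P_{\textrm{inc}})$ and $f_{0}(P_{\textrm{inc}})$ traces of Figs. 4b,c.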
In the absence of nondissipative spectral broadening processes, $Q$ is
inversely proportional to the rate at which energy stored in a resonator gets
dissipated in a thermal bath. In low dimensional resonators based on 2-D
materials and on nanotubes, $Q$ at room temperature is always found to lie
between 10 and $\approx 100$ Bunch _et al._ (2007); Sazonova _et al._
(2004), which is surprisingly low given the high crystallinity of the
resonators. Proposed mechanisms to explain such low $Q$’s include losses
within the clamping area Rieger _et al._ (2014) and spectral broadening due
to nonlinear coupling between the mode of interest and a large number of
thermally activated modes Barnard _et al._ (2012); Zhang and Dykman (2015).
In turn, the increase of $f_{0}$ reveals a hardening of the spring constant of
the resonator. The latter may be due to vibrations responding to photothermal
forces with a delay Barton _et al._ (2012); Höhberger Metzger and Karrai
(2004); Metzger _et al._ (2008); Zaitsev _et al._ (2011). However, because
$Q$ does not appreciably change with $P_{\textrm{inc}}$, the hardening of the
spring constant is more likely to be caused by absorptive heating accompanied
by a contraction of the membrane Davidovikj _et al._ (2016); Yoon _et al._
(2011). In this case, it is interesting to relate the change $\Delta f_{0}$
induced by a change $\Delta P_{\textrm{inc}}$ to the thermal expansion
coefficient $\alpha$ and to the thermal conductivity $\kappa$ of the membrane.
For this we borrow a result from Ref. Morell _et al._ (2019), namely $|\Delta
f_{0}/\Delta P_{\textrm{abs}}|=|\alpha f_{0}\eta/(4\pi\epsilon\kappa h)|$,
with $P_{\textrm{abs}}$ the power absorbed by the membrane, $\epsilon$ the
strain within the membrane, $h=N_{L}\times 0.34\times 10^{-9}$ m the thickness
of the membrane, and $\eta\approx 1$ a factor that depends on the beam radius,
on the membrane radius and on Poisson’s ratio. We convert $P_{\textrm{inc}}$
into $P_{\textrm{abs}}$ using the absorbance
$A=P_{\textrm{abs}}/P_{\textrm{inc}}$ of our FLG suspended over the gate
electrode. We measure $A\approx 0.3$ from the ratio of power reflected by the
cavity covered by FLG to the power reflected by a nearby uncovered cavity
(Supplementary Material, Section III). Further, we estimate $\epsilon\approx
5\times 10^{-4}$ from $f_{0}$ by calculating the elastic energy of a disk-
shaped membrane Timoshenko and Woinowsky-Krieger (1987); Zhang (2016) and
deriving from it the spring constant of the fundamental mode (Supplementary
Material, Section IV). We make the simplifying assumption that the
electrostatic force is uniform over the membrane. We also assume a Young’s
modulus of $10^{12}$ Pa and a Poisson’s ratio of 0.165 Blakslee _et al._
(1970). Combining $\Delta f_{0}/\Delta P_{\textrm{inc}}$, $A$ and $\epsilon$,
we find $|\alpha/\kappa|\approx 4\times 10^{-9}$ m/W. This is a reasonable
estimate: for example, one obtains $|\alpha/\kappa|\approx 4\times 10^{-9}$ m/W
with $\alpha\approx-8\times 10^{-6}$ K${}^{-1}$ from suspended single layer graphene
Yoon _et al._ (2011) and $\kappa\approx 2000$ W m${}^{-1}$ K${}^{-1}$ from pyrolytic graphite
Touloukian (1970), both at room temperature.
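The magnitude of the frequency shift implied by this estimate can be checked numerically from the relation of Ref. Morell _et al._ (2019) quoted above. The inputs $|\alpha/\kappa|\approx 4\times 10^{-9}$ m/W, $\epsilon\approx 5\times 10^{-4}$, $N_{L}=8$, $\eta\approx 1$ and $A\approx 0.3$ are from the text; $f_{0}=66$ MHz is an assumed value for the lowest mode ("above 60 MHz").

```python
# Numerical check of |df0/dP_abs| = |alpha * f0 * eta / (4*pi*eps*kappa*h)|,
# evaluated with the paper's numbers; f0 = 66 MHz is an assumed value.
import numpy as np

alpha_over_kappa = 4e-9        # m/W, the estimate obtained in the text
f0 = 66e6                      # Hz, assumed lowest-mode frequency
eta = 1.0                      # geometry factor, ~1 per the text
eps = 5e-4                     # strain in the membrane
h = 8 * 0.34e-9                # membrane thickness for N_L = 8 layers (m)
A = 0.3                        # absorbance P_abs / P_inc

slope_abs = alpha_over_kappa * f0 * eta / (4 * np.pi * eps * h)  # Hz per W absorbed
slope_inc = A * slope_abs                                        # Hz per W incident
print(f"|df0/dP_inc| ~ {slope_inc*1e-9:.2f} kHz per uW incident")
```

A slope of a few kHz per $\mu$W incident, integrated over the 20 to 205 $\mu$W range of Fig. 4a, corresponds to a shift of order 1 MHz, consistent with an observable hardening of the mode.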
We now measure the response of the resonator as a function of its in-plane
position with respect to the beam. Data shown in Figs. 4d-f are measured with
$V_{\textrm{ac}}=12.6$ mV${}_{\textrm{rms}}$, $V_{\textrm{dc}}=5$ V and
$P_{\textrm{inc}}=110$ $\mu$W. Figure 4d shows the same resonance as in Fig.
4a measured at various fixed positions along the $x$ direction, with the
center of the beam at $x_{0}$. Correspondingly, Fig. 4e shows $Q$ and Fig. 4f
shows $f_{0}$ as a function of $x-x_{0}$. Figures 4d-f are strikingly similar
to Figs. 4a-c: $Q$ does not depend on position while $f_{0}$ increases as the
resonator approaches $x_{0}$ and decreases as it moves out of the beam. As the
resonator moves with respect to the position of the beam, it samples the
intensity of the beam in a similar way to that of a reflective structure in a
knife edge measurement (Supplementary Material, Section II). Importantly, the
dependence of $f_{0}$ on position means that vibration imaging requires
measuring the full resonance at each position on the resonator, as we do next.
We present spatially resolved amplitude measurements of the lowest frequency
mode in Figs. 4g-i. Figure 4g shows the peak (resonant) value of the time
averaged electrical power $\langle V_{\textrm{pd}}^{2}\rangle/50$ dissipated
across the input impedance of the spectrum analyzer, expressed in
dBm, as a function of $x$ and $y$. Figure 4h shows the peak
value of $\bar{V}^{2}$ on a linear scale. The latter is proportional to the
mean square of the resonant vibrational amplitude, see Eq. (1), hence it is
proportional to the potential energy of the mode. The background
$\langle\delta V^{2}_{\textrm{b}}\rangle$ is measured in the cavity at $f_{0}$
but with the drive frequency shifted up and far away from resonance.
Measurements are made at $V_{\textrm{dc}}=5$ V with $V_{\textrm{ac}}=0.4$
mV${}_{\textrm{rms}}$ and $P_{\textrm{inc}}=110$ $\mu$W. At such low
$V_{\textrm{ac}}$, we find that the peak value of $\bar{V}^{2}$ is sizeable
only in a central area away from the edge of the cavity (highlighted by the
dashed circle in Fig. 4h). Within this area, the measured peak frequency of
$\bar{V}^{2}(f)$ is almost uniform and defines the resonant frequency $f_{0}$
of the mode (Fig. 4i).
We use the vibration imaging data shown in Figs. 4g-i to measure the effective
mass $m_{\textrm{eff}}$ of the mode. The latter is related to the potential
energy $U$ as
$U\equiv\frac{1}{2}\frac{m}{\pi a^{2}}(2\pi f_{0})^{2}\iint
z_{\textrm{vib}}^{2}(f_{0},x,y)\mathrm{d}S=\frac{1}{2}m_{\textrm{eff}}(2\pi
f_{0})^{2}z_{\textrm{max}}^{2}\,,$ (3)
where $m$ is the geometrical mass of the membrane and $a$ is its radius,
respectively, $z_{\textrm{vib}}(f_{0},x,y)$ is the resonant amplitude at
position $(x,y)$, $\mathrm{d}S$ is an elementary area on the membrane and
$z_{\textrm{max}}$ is the largest value of $z_{\textrm{vib}}(f_{0},x,y)$ over
the membrane. Replacing the integral in Eq. (3) with a discrete sum yields
$\frac{m_{\textrm{eff}}}{m}=\frac{\sum_{i,j}\bar{V}^{2}(f_{0},x_{i},y_{j})}{N\bar{V}^{2}_{\textrm{max}}}\,,$
(4)
where $x_{i}$ and $y_{j}$ are discrete coordinates over the cavity, $N$ is the
number of pixels within the area of the cavity in Figs. 4g-i, and
$\bar{V}^{2}_{\textrm{max}}$ is the largest value of
$\bar{V}^{2}(f_{0},x_{i},y_{j})$ over the membrane. While Figs. 4g-i represent
the convolution of the focused beam with the vibration mode shape (instead of
the mode shape alone), we verified numerically that $m_{\textrm{eff}}$
calculated from the convolution and $m_{\textrm{eff}}$ calculated from the
mode shape agree within 5% for our measured radius $w_{0}\approx 0.4$ $\mu$m
of the waist of the focused laser beam. Equation (4) yields
$m_{\textrm{eff}}/m=0.27\pm 0.01$. This estimate agrees well with the value of
0.27 calculated for a disk shaped graphene membrane without bending rigidity
and subjected to electrostatic pressure Weber _et al._ (2014). It shows that
the assumption of negligible bending rigidity compared to stretching rigidity
is still a valid one for FLG.
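The discrete sum of Eq. (4) is straightforward to reproduce on a synthetic map. Here the measured maps of Figs. 4g-i are replaced by an assumed mode shape, the fundamental Bessel mode $J_{0}$ of a uniform circular membrane, which is known to give $m_{\textrm{eff}}/m\approx 0.27$:

```python
# Eq. (4) applied to a synthetic Vbar^2 map: the fundamental J0 mode of a
# uniform circular membrane (an assumed shape standing in for Figs. 4g-i).
import numpy as np
from scipy.special import j0, jn_zeros

a = 1.5e-6                                   # membrane radius (m)
k = jn_zeros(0, 1)[0] / a                    # fundamental mode wavenumber
x = np.linspace(-a, a, 201)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
inside = r <= a                              # pixels within the cavity

z2 = np.where(inside, j0(k * r) ** 2, 0.0)   # proportional to Vbar^2(f0, x, y)
ratio = z2[inside].sum() / (inside.sum() * z2.max())   # Eq. (4)
print(f"m_eff/m = {ratio:.3f}")
```

The sum converges to $\approx 0.27$, matching both the measured value and the calculation of Weber _et al._ (2014) for a pressurized disk without bending rigidity.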
We complement our study with vibration imaging measurements performed on a
second device, showing the effect of a mesoscopic defect in FLG on the
mechanical response. Here the resonator consists of FLG suspended over silicon
oxide grown on doped silicon. The silicon substrate serves as a global gate
electrode. We show the mechanical resonance spectrum of the resonator and its
dependence on $V_{\textrm{dc}}$ in Supplementary Material, Section V. Figure
5a is a scanning electron microscope image of the device, which reveals the
presence of a fold in FLG near the bottom edge of the cavity. This fold cannot
be seen in an optical microscope with a $100\times$ magnification objective.
Figures 5b-d display the resonant value of $\langle
V_{\textrm{pd}}^{2}\rangle$ as a function of in-plane displacements $x$ and
$y$ for the first three vibrational modes we are able to measure. While these
results are qualitatively consistent with the shapes of a first, second and
third mode, we observe the presence of a node in the vicinity of the fold. The
fold presumably causes a local stiffening of FLG Moser and Bachtold (2009)
which pins vibration modes and forces them to a low amplitude state. Our
measurements show that folds have a strong impact on the mechanical response
of 2-D resonators. As with membranes with free standing edges Wang _et al._
(2014) and membranes with inhomogeneous strain and mass distributions Garcia-
Sanchez _et al._ (2008), membranes with folds may have mechanical properties
that may not be found in uniform membranes. Our measurement system is well
suited to investigate these properties as it is noninvasive and, unlike
scanning electron imaging, does not contaminate the surface of resonators.
## IV Conclusion
Our simple system composed of a vacuum enclosure mounted on a three-axis
micropositioning stage is well suited to measure the mechanical response of
few layer graphene electromechanical resonators and to image their vibrations.
From the hardening of the spring constant of the lowest frequency mode in
response to increased incident power, we estimate the ratio of thermal
expansion coefficient $\alpha$ to thermal conductivity $\kappa$ of the
membrane. Doing so requires either a strong temperature gradient induced by
the beam across the membrane or a resonator with a low spring constant, both
of which are more likely to be obtained with resonators larger than our 3
$\mu$m diameter device Barton _et al._ (2012); Davidovikj _et al._ (2016).
We image the shape of the lowest frequency mode and measure its effective
mass, which is important for quantitative sensing applications based on those
devices. We also image vibration modes in the presence of a fold in FLG, and
show that the fold strongly affects the mechanical response of the resonator.
Built-in calibration of in-plane displacement is possible by mapping the
reflectance of the cavity, which can be done by averaging the measurement
background away from resonance in between two resonant measurements. Our
system may be used in combination with a small size, dry cryostat that would
replace our heavy vacuum enclosure and with the objective outside the
cryostat. Measuring the dependence of $f_{0}$ on temperature would yield
$\alpha$ which, combined with $f_{0}(P_{\textrm{inc}})$, would yield $\kappa$
Morell _et al._ (2019). Planning for such experiments, we have experimentally
verified that our homemade waveguides remain flexible and that their insertion
loss remains low at cryogenic temperatures by bending four of them with a
piezopositioner at 800 mK. If precision and accuracy on the nanometer scale
are not needed, our system may offer an alternative to systems based on piezo
positioners which are fragile, have a small load capacity, and often come at a
prohibitive cost to experimentalists on a budget.
## Acknowledgments
J. Moser is grateful to Yin Zhang, Warner J. Venstra and Alexander Eichler for
helpful discussions. This work was supported by the National Natural Science
Foundation of China (grant numbers 61674112 and 62074107), the International
Cooperation and Exchange of the National Natural Science Foundation of China
NSFC-STINT (grant number 61811530020), Key Projects of Natural Science
Research in JiangSu Universities (grant number 16KJA140001), the project of
the Priority Academic Program Development (PAPD) of Jiangsu Higher Education
Institutions, and the Opening Fund of State Key Laboratory of Nonlinear
Mechanics in Beijing.
## References
* Bunch _et al._ (2007) J. S. Bunch, A. M. van der Zande, S. S. Verbridge, I. W. Frank, D. M. Tanenbaum, J. M. Parpia, H. G. Craighead, and P. L. McEuen, Science 315, 490 (2007).
* Chen _et al._ (2009) C. Chen, S. Rosenblatt, K. I. Bolotin, W. Kalb, P. Kim, I. Kymissis, H. L. Stormer, T. F. Heinz, and J. Hone, Nat. Nanotech. 4, 861 (2009).
* Barton _et al._ (2011) R. A. Barton, B. Ilic, A. M. van der Zande, W. S. Whitney, P. L. McEuen, J. M. Parpia, and H. G. Craighead, Nano Lett. 11, 1232 (2011).
* Reserbat-Plantey _et al._ (2012) A. Reserbat-Plantey, L. Marty, O. Arcizet, N. Bendiab, and V. Bouchiat, Nat. Nanotech. 7, 151 (2012).
* Barton _et al._ (2012) R. A. Barton, I. R. Storch, V. P. Adiga, R. Sakakibara, B. R. Cipriany, B. Ilic, S.-P. Wang, P. Ong, P. L. McEuen, J. M. Parpia, and H. G. Craighead, Nano Lett. 12, 4681 (2012).
* Mathew _et al._ (2016) J. P. Mathew, R. N. Patel, A. Borah, R. Vijay, and M. M. Deshmukh, Nat. Nanotech. 11, 747 (2016).
* Zhang _et al._ (2020) Z.-Z. Zhang, X.-X. Song, G. Luo, Z.-J. Su, K.-L. Wang, G. Cao, H.-O. Li, M. Xiao, G.-C. Guo, L. Tian, G. W. Deng, and G.-P. Guo, Proc. Natl. Acad. Sci. U. S. A. 117, 5582 (2020).
* Sengupta _et al._ (2010) S. Sengupta, H. S. Solanki, V. Singh, S. Dhara, and M. M. Deshmukh, Phys. Rev. B 82, 155432 (2010).
* Castellanos-Gomez _et al._ (2013) A. Castellanos-Gomez, R. van Leeuwen, M. Buscema, H. S. J. van der Zant, G. A. Steele, and W. J. Venstra, Adv. Mater. 25, 6719 (2013).
* Wang _et al._ (2014) Z. Wang, J. Lee, K. He, J. Shan, and P. X.-L. Feng, Sci. Rep. 4, 3919 (2014).
* Morell _et al._ (2016) N. Morell, A. Reserbat-Plantey, I. Tsioutsios, K. G. Schädler, F. Dubin, F. H. L. Koppens, and A. Bachtold, Nano Lett. 16, 5102 (2016).
* Cartamil-Bueno _et al._ (2015) S. J. Cartamil-Bueno, P. G. Steeneken, F. D. Tichelaar, E. Navarro-Moratalla, W. J. Venstra, R. van Leeuwen, E. Coronado, H. S. J. van der Zant, G. A. Steele, and A. Castellanos-Gomez, Nano Res. 8, 2842 (2015).
* Dykman (2012) M. I. Dykman, _Fluctuating Nonlinear Oscillators_ , 1st ed. (Oxford University Press, Oxford, U. K., 2012).
* Garcia-Sanchez _et al._ (2008) D. Garcia-Sanchez, A. M. van der Zande, A. San Paulo, B. Lassagne, P. L. McEuen, and A. Bachtold, Nano Lett. 8, 1399 (2008).
* Davidovikj _et al._ (2016) D. Davidovikj, J. J. Slim, S. J. Cartamil-Bueno, H. S. J. van der Zant, P. G. Steeneken, and W. J. Venstra, Nano Lett. 16, 2768 (2016).
* Reserbat-Plantey _et al._ (2016) A. Reserbat-Plantey, K. G. Schädler, L. Gaudreau, G. Navickaite, J. Güttinger, D. E. Chang, C. Toninelli, A. Bachtold, and F. H. L. Koppens, Nat. Commun. 7, 10218 (2016).
* Wang _et al._ (2016) Z. Wang, H. Jia, X.-Q. Zheng, R. Yang, G. J. Ye, X. H. Chen, and P. X.-L. Feng, Nano Lett. 16, 5394 (2016).
* Zheng _et al._ (2017) X.-Q. Zheng, J. Lee, and P. X.-L. Feng, Microsyst. Nanoeng. 3, 17038 (2017).
* Hanay _et al._ (2015) M. S. Hanay, S. I. Kelber, C. D. O’Connell, P. Mulvaney, J. E. Sader, and M. L. Roukes, Nat. Nanotech. 10, 339 (2015).
* Ho and Jan (2016) S.-T. Ho and S.-J. Jan, Prec. Eng. 43, 285 (2016).
* Liu and Li (2016) Y.-T. Liu and B.-J. Li, Prec. Eng. 46, 118 (2016).
* Carr and Craighead (1997) D. W. Carr and H. G. Craighead, J. Vac. Sci. Technol. B 15, 2760 (1997).
* Karabacak _et al._ (2005) D. Karabacak, T. Kouh, and K. L. Ekinci, J. Appl. Phys. 98, 124309 (2005).
* Castellanos-Gomez _et al._ (2014) A. Castellanos-Gomez, M. Buscema, R. Molenaar, V. Singh, L. Janssen, H. S. J. van der Zant, and G. A. Steele, 2D Mater. 1, 011002 (2014).
* Roddaro _et al._ (2007) S. Roddaro, P. Pingue, V. Piazza, V. Pellegrini, and F. Beltram, Nano Lett. 7, 2707 (2007).
* Chen _et al._ (2018) F. Chen, C. Yang, W. Mao, H. Lu, K. G. Schädler, A. Reserbat-Plantey, J. Osmond, G. Cao, X. Li, C. Wang, Y. Yan, and J. Moser, 2D Mater. 6, 011003 (2018).
* Sazonova _et al._ (2004) V. Sazonova, Y. Yaish, H. Üstünel, D. Roundy, T. A. Arias, and P. L. McEuen, Nature 431, 284 (2004).
* Rieger _et al._ (2014) J. Rieger, A. Isacsson, M. J. Seitner, J. P. Kotthaus, and E. M. Weig, Nat. Commun. 5, 3345 (2014).
* Barnard _et al._ (2012) A. W. Barnard, V. Sazonova, A. M. van der Zande, and P. L. McEuen, Proc. Natl. Acad. Sci. U. S. A. 109, 19093 (2012).
* Zhang and Dykman (2015) Y. Zhang and M. I. Dykman, Phys. Rev. B 92, 165419 (2015).
* Höhberger Metzger and Karrai (2004) C. Höhberger Metzger and K. Karrai, Nature 432, 1002 (2004).
* Metzger _et al._ (2008) C. Metzger, I. Favero, A. Ortlieb, and K. Karrai, Phys. Rev. B 78, 035309 (2008).
* Zaitsev _et al._ (2011) S. Zaitsev, A. K. Pandey, O. Shtempluck, and E. Buks, Phys. Rev. E 84, 046605 (2011).
* Yoon _et al._ (2011) D. Yoon, Y.-W. Son, and H. Cheong, Nano Lett. 11, 3227 (2011).
* Morell _et al._ (2019) N. Morell, S. Tepsic, A. Reserbat-Plantey, A. Cepellotti, M. Manca, I. Epstein, A. Isacsson, X. Marie, F. Mauri, and A. Bachtold, Nano Lett. 19, 3143 (2019).
* Timoshenko and Woinowsky-Krieger (1987) S. Timoshenko and S. Woinowsky-Krieger, _Theory of Plates and Shells_ , 2nd ed. (McGraw-Hill, New York, NY, 1987).
* Zhang (2016) Y. Zhang, Sci. China-Phys. Mech. Astron. 59, 624602 (2016).
* Blakslee _et al._ (1970) O. L. Blakslee, D. G. Proctor, E. J. Seldin, G. B. Spence, and T. Weng, J. Appl. Phys. 41, 3373 (1970).
* Touloukian (1970) Y. S. Touloukian, _Thermal Conductivity : Nonmetallic Solids_ , 1st ed. (IFI/Plenum, New York, NY, 1970).
* Weber _et al._ (2014) P. Weber, J. Güttinger, I. Tsioutsios, D. E. Chang, and A. Bachtold, Nano Lett. 14, 2854 (2014).
* Moser and Bachtold (2009) J. Moser and A. Bachtold, Appl. Phys. Lett. 95, 173506 (2009).
Figure 1: Enclosure to image vibrations in vacuum. (a) Assembled enclosure
with a window (blue shaded disk) for free space optical measurements, a KF
port on the side for SMA connectors and another KF port on the back side for
pumping. (b) Exploded view. The red shaded square represents the substrate
hosting the resonators. It is glued on a printed circuit board (lightly
colored disk) that is attached to a copper holder (brown shaded disk with a
shallow recess). (c) Cross section showing the holder (brown), the board
(yellow), the substrate (red) and SMA connectors attached to the back side of
the board and connected to the front side of it.
Figure 2: Vibration measurement setup. (a) Optical setup. NDF: neutral density
filter. SMF: single mode fiber. $\lambda/2$: half-wave plate. PBS: polarizing
beam splitter. $\lambda/4$: quarter-wave plate. XYZ: micro-positioner. SA:
spectrum analyzer. PD: photodetector. (b) Electrical actuation scheme. FW:
flexible waveguide. The gate electrode is connected to a semi-rigid radio-
frequency cable inside the enclosure, which is connected to FW outside of the
enclosure via a hermetic feed-through. (c) Front and back side of FW
consisting of a 15 cm long copper microstrip patterned on Kapton and
terminated with SMA connectors.
Figure 3: Cavity, gate electrode, FLG thickness and tuning of the resonant
frequency. (a) Optical microscopy image showing FLG and source (S) and drain
(D) electrodes. (b) AFM image of the gate electrode at the bottom of the
cavity, and (c) cross section along the blue dashed line. (d) Calculated ratio
of reflectances $R_{\textit{FLG}}/R_{\textit{noFLG}}$ as a function of the
number of layers $N_{L}$. $R_{\textit{FLG}}$ is the reflectance of the
structure composed of FLG on a 500 nm thick slab of SiO2 on Si at
$\lambda=633$ nm, and $R_{\textit{noFLG}}$ is the reflectance without FLG. The
blue shaded area is the uncertainty related to oxide thickness measurements.
(e) $\langle V_{\textrm{pd}}^{2}\rangle$ as a function of $f$ and
$V_{\textrm{dc}}$ for the lowest frequency mode. $V_{\textrm{ac}}=0.4$
mV${}_{\textrm{rms}}$, $P_{\textrm{inc}}=110$ $\mu$W. Orange side of the color
bar: $5\times 10^{-11}$ V${}_{\textrm{rms}}^{2}$.
Figure 4: Effect of the beam on the response of the resonator and vibration
imaging. (a) Root mean square of the voltage at the output of the
photodetector, $\langle V_{\textrm{pd}}^{2}\rangle^{1/2}$ as a function of
drive frequency $f$ at $V_{\textrm{ac}}=12.6$ mV${}_{\textrm{rms}}$ and
$V_{\textrm{dc}}=5$ V with the resonator near the center of the beam. From
blue to red: $P_{\textrm{inc}}=20$, $45$, $70$, $135$, $160$, and $205$
$\mu$W. (b) Quality factors $Q$ and (c) resonant frequency $f_{0}$, measured
from a series of resonances partially shown in (a), as a function of
$P_{\textrm{inc}}$. (d) $\langle V_{\textrm{pd}}^{2}\rangle^{1/2}$ as a
function of $f$ as the resonator is moved along $x$ into the beam at $x_{0}$
($V_{\textrm{ac}}=12.6$ mV${}_{\textrm{rms}}$, $V_{\textrm{dc}}=5$ V and
$P_{\textrm{inc}}=110$ $\mu$W). (e) $Q$ and (f) $f_{0}$ as a function of in-
plane displacement $x-x_{0}$ taken from resonances partially shown in (d). (g)
Peak (resonant) value of the time averaged electrical power $\langle
V_{\textrm{pd}}^{2}\rangle/50$ as a function of $x$ and $y$. (h) Peak value of
the electromechanical signal $\bar{V}^{2}$, shown here on a linear scale.
$V_{\textrm{dc}}=5$ V, $V_{\textrm{ac}}=0.4$ mV${}_{\textrm{rms}}$ and
$P_{\textrm{inc}}=110$ $\mu$W. (i) Peak frequency of $\bar{V}^{2}(f)$. Data in
(g)-(i) were obtained after optimizing the collection efficiency of the
photodetector compared to data in (a)-(f).
Figure 5: Effect of a fold on the mode shapes of the globally gated
resonator. (a) Scanning electron microscope image of the device. The arrows
point to a fold in FLG (red shading) near the bottom edge of the cavity. (b-d)
$\langle V_{\textrm{pd}}^{2}\rangle/50$ as a function of in-plane
displacements $x$ and $y$ for the lowest frequency mode near 75 MHz (b), for
the second mode near 100 MHz (c) and for the third mode near 150 MHz (d).
$V_{\textrm{ac}}=7.07$ mV${}_{\textrm{rms}}$ is used in (b) and (c) and
$V_{\textrm{ac}}=22.36$ mV${}_{\textrm{rms}}$ is used in (d).
$V_{\mathrm{dc}}=5$ V and $P_{\textrm{inc}}=110$ $\mu$W are used in all
panels.
# REACT: Distributed Mobile Microservice Execution Enabled by Efficient Inter-
Process Communication
Chathura Sarathchandra 0000-0002-0266-0446 InterDigital EuropeLondonUnited
Kingdom [email protected]
(2021)
###### Abstract.
The increased mobile connectivity and the growing range and number of services
available in various computing environments in the network demand that mobile
applications be highly dynamic, so that they can efficiently incorporate those
services into applications along with other local capabilities on mobile
devices. However, the monolithic structure and mostly static configuration of
mobile application components today limit an application's ability to
dynamically manage its internal components, to adapt to the user and the
environment, and to utilize various services in the network for improving the
application experience.
In this paper, we present REACT, a new Android-based framework that enables
apps to be developed as a collection of loosely coupled microservices (MS). It
allows individual distribution, dynamic management and offloading of MS to be
executed by services in the network, based on contextual changes. REACT aims
to provide i) a framework as an Android Library for creating MS-based apps
that adapt to contextual changes ii) a unified HTTP-based communication
mechanism, using Android Inter-Process Communication (IPC) for transporting
requests between locally running MS, while allowing flexible and transparent
switching between network and IPC requests when offloading. We evaluate REACT
by implementing a video streaming app that dynamically offloads MS to web
services in the network in response to contextual changes. The evaluation
demonstrates this adaptability and shows reduced power consumption when
offloading, while our communication mechanism overcomes the performance
limitations of Android IPC by enabling efficient transfer of large payloads
between mobile MS.
Mobile Microservices; Mobile Function Distribution; Mobile Cloud Computing;
Offloading; Microservices;
## 1\. Introduction
The mobile device has become the primary computing device for most users,
putting applications and services, from multimedia to personal finance, at
their fingertips. This is enabled by continuously increasing computing
capabilities and by high-bandwidth, low-latency network technologies.
According to Cisco's Trend Report (Cisco, 2020), the number of mobile devices
worldwide will grow from 8.8 billion in 2018 to 13.1 billion by 2023.
Moreover, advancements in software (e.g., machine learning algorithms) and
human-computer interaction (e.g., augmented reality) technologies have helped
popularize mobile applications and services that provide enhanced user
experiences and functionality. Gaming, social media, and business apps are
predicted to be the most popular of the 299.1 billion mobile apps expected to
be downloaded globally by 2023.
The resources and functionality these applications require, in turn, increase
the demand on the inherently resource-constrained mobile devices they run on.
This lack of resources limits the user experience and the types of
applications and services that can be offered.
Mobile cloud computing (Dinh et al., 2013) bridges the gap between the growing
resource demands of mobile applications and the limited resources of mobile
devices by offloading resource-intensive functions to resource-rich
application execution environments (e.g., edge or cloud environments) in the
network, or to other devices. This brings the resource elasticity of cloud
computing to the much more rigid mobile device. However, existing mobile
function offloading frameworks rely on the availability of pre-deployed,
framework-specific server counterparts in the network (Chun et al., 2011;
Cuervo et al., 2010; Kosta et al., 2012) to offload computing functions,
limiting the overall offloading opportunity. These server counterparts receive
and execute offloaded tasks by following framework-specific execution models
and protocols. Partitioning of application components is performed at the code
level (e.g., through code annotations), which maintains the monolithic code
structure and limits the flexibility and dynamicity of managing independent
components. Thus, application components are monolithically packaged and
statically configured to use known server offloading counterparts. This limits
the ability of applications to dynamically use newly available services (such
as microservice clouds (Kanti Datta et al., 2018)) and to adapt to new
execution environments.
However, offloading is not always beneficial and may even degrade performance
(Kumar and Lu, 2010). For example, when offloading requires large amounts of
data to be transferred between the mobile device and the cloud, it may lead to
longer execution times as well as increased power consumption. Under certain
conditions, though, the user may prefer to offload functions to obtain better
functionality and a better user experience at the expense of increased energy
consumption. Dynamically adapting to contextual changes (Zhou et al., 2017)
when offloading is therefore crucial, and decisions on when to offload, what
to offload (which functions), and where to offload (execution environments and
other mobile and Internet of Things devices) have a direct impact on the
overall performance of the mobile device.
The availability of services in the network is one of the primary factors
influencing mobile function offloading decisions: the more services that can
be utilized for offloading, the more opportunities and options (e.g., multiple
providers offering the same service with varying qualities) applications have
when deciding when and where to offload (Magurawalage et al., 2014). Moreover,
the increased bandwidth, lower latency, and newly introduced edge computing
resources of emerging communication technologies such as 5G (Taleb et al.,
2017) not only increase the efficiency with which those services can be
accessed and executed, but also expand offloading opportunities by allowing
application and service providers to deploy application and service instances
both at the edge and in the cloud.
As a result of this increased ubiquity of services, resource provisioning
extends beyond the public cloud, and mobile functions may be offloaded to
services at the edge as well as to services offered by other mobile/IoT
devices. However, as discussed above, a monolithic design that statically
binds an application to one specific resource, or to a specific application
server counterpart offered by the same application provider, limits the
offloading opportunities a device has for improving performance or augmenting
functionality. For example, an application provider may deploy instances of
the same service on multiple mobile, edge, and cloud resources; a device bound
only to a specific resource or server instance in the cloud cannot utilize an
instance that is closer to the edge or offered by another IoT device within
the user's proximity. Moreover, given the increased range of web services
provided by application providers as well as third-party service providers in
networks (e.g., various cloud video encoding/decoding services), allowing
applications to dynamically utilize third-party services for offloading mobile
functions further increases offloading opportunities and reduces service
provisioning costs. It is therefore imperative that applications can
dynamically utilize those third-party services, as opposed to statically
binding to a specific server instance provided by the corresponding
application provider.
The microservices architecture (Jamshidi et al., 2018) has become an
increasingly popular application structuring style, and microservice clouds
have been used to provide services for various applications (Kanti Datta et
al., 2018). It gives applications the flexibility to adapt to underlying
technological changes and the ability to manage resources and control
different application functions (i.e., microservices) independently. In this
architecture, an application is structured as a set of loosely coupled,
independently manageable services that communicate over a lightweight (and
often technology-agnostic) protocol such as HTTP (Fielding et al., 1999). The
fine-grained control of constituent components enabled by the microservices
architecture allows the internal structure of an application to be changed
dynamically at runtime. The use of a common communication interface (an
application-layer protocol such as HTTP) across all microservices enables
dynamic binding to offloading counterparts, i.e., offloading to other
(micro)services over the network. This enables dynamic mobile function
offloading and the utilization of suitable microservices deployed by the
application provider or by third-party service providers. Moreover, with the
increased availability, range, and number of (micro)services, mobile
offloading frameworks adopting the microservices architecture can
significantly increase offloading opportunity and flexibility in choosing the
best services for improving application functionality and performance. The
flexibility and adaptability (to contextual changes) of mobile applications
can therefore be significantly improved by combining the high flexibility of
the microservices architecture with the dynamicity of mobile function
offloading frameworks.
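The idea of a common HTTP-style interface whose transport is chosen by the locality of the callee can be sketched in plain Java. This is a minimal illustration with made-up class and method names, not REACT's actual mechanism (which uses Android IPC for the local path):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch: a registry maps microservice URLs to local handlers. A request
// is served locally when a handler is registered; otherwise it is
// forwarded over the network, transparently to the caller.
public class MicroserviceRouter {
    private final Map<String, Function<String, String>> local = new HashMap<>();

    public void registerLocal(String url, Function<String, String> handler) {
        local.put(url, handler);
    }

    public void unregister(String url) {
        local.remove(url); // e.g., after the microservice is offloaded
    }

    public String request(String url, String payload) {
        Function<String, String> handler = local.get(url);
        if (handler != null) {
            return handler.apply(payload);       // local path (IPC in REACT)
        }
        return forwardOverNetwork(url, payload); // remote path (HTTP)
    }

    // Stand-in for an HTTP client call to a web service in the network.
    private String forwardOverNetwork(String url, String payload) {
        return "remote:" + payload;
    }

    public static void main(String[] args) {
        MicroserviceRouter router = new MicroserviceRouter();
        router.registerLocal("http://local/process", s -> "processed:" + s);
        System.out.println(router.request("http://local/process", "frame1"));
        router.unregister("http://local/process"); // simulate offloading
        System.out.println(router.request("http://local/process", "frame1"));
    }
}
```

The caller issues the same `request` before and after offloading; only the transport behind it changes.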
This paper presents REACT (micRoservice Execution enAbled by effiCient
inTer-process communication), a framework, made available as an open-source
Android library (https://github.com/chathura77/REACT), which allows Android
mobile applications to be developed as a collection of loosely coupled,
independently manageable microservices. (An early prototype was demonstrated
at a major leading congress in 2019, and the complete system was to be
demonstrated in 2020.) Using the framework, Android application developers can
give their applications the flexibility of the microservices architecture and
the contextual adaptability of mobile function offloading frameworks (by
offloading functions to improve functionality and/or performance). REACT can
be integrated into an application by simply including the provided Android
library and using the provided, easy-to-use Application Programming Interface
(API). Using REACT, applications can dynamically (based on contextual changes)
utilize web services for offloading mobile Application Functions
(AF)/microservices to improve the performance and functionality of mobile
devices; i.e., the API allows the developer to modularize an application into
constituent microservices, which in turn can be offloaded independently for
execution on corresponding web services. A new mobile microservice
communication mechanism is introduced, which uses the HTTP protocol for
communication between microservices and services in the network, while
enabling flexible and dynamic switching between requests to microservices on
the mobile device and to web services in the network when offloading at
runtime, transparently to the communicating microservices. Our solution uses
the Android Inter-Process Communication (IPC) mechanism for transferring
messages between communicating microservices within the mobile device. In
doing so, REACT tackles critical performance limitations incurred by Android
IPC when transferring large payloads (Hsieh et al., 2013). In the rest of this
paper, we use 'Application Functions' (AF) and 'microservices' interchangeably
when referring to application components modularized with REACT.
Our main contributions are:
* •
An Android application framework that enables the modularization of app
components into microservices for context-aware, independent management and
offloading of mobile microservices at runtime.
* •
An HTTP-based mobile microservice communication mechanism, enabled by
efficient Android IPC-based local inter-microservice communication, with
flexible switching between local and network communication for dynamic
microservice offloading.
## 2\. Related Work
There have been several extensive studies on mobile computation offloading
frameworks in the past years (Shiraz et al., 2013). These approaches aim to
augment the capabilities of resource-constrained mobile devices by dynamically
offloading computationally intensive tasks to resource-rich destinations.
Existing frameworks use services from surrounding computing devices (Oh et
al., 2019; AlDuaij et al., 2019; Dou et al., 2010) or remote cloud machines
(Chun et al., 2011; Cuervo et al., 2010; Elgazzar et al., 2016; Yang et al.,
2016; O’Sullivan and Grigoras, 2015; Kosta et al., 2012), where specific
protocols and offloading server counterparts are deployed at the offloading
destinations, and do not incorporate the microservices architecture; i.e.,
framework-specific server counterparts that support the offloading protocol
need to be installed on the surrounding computing devices or remote cloud
machines before the application executes. While the work presented in this
paper is inspired by insights from previous work, REACT focuses on creating
flexible mobile applications based on the microservices architecture that
support seamless offloading of local application functions to web services.
The MAUI framework (Cuervo et al., 2010) allows the developer to provide an
initial partition of the application through remotable annotations, indicating
methods and classes that can be offloaded to the MAUI server. The main aim of
MAUI is to optimize energy consumption by offloading suitable
computation-intensive application tasks. When offloading, the MAUI runtime on
the smartphone communicates with the MAUI runtime on the MAUI server via
Remote Procedure Calls. Likewise, ThinkAir (Kosta et al., 2012) provides
method-level task offloading based on code annotations provided by the
developer at design time. The ThinkAir code generator then generates remotable
method wrappers and other utility functions required for offloading and remote
execution of application tasks. Moreover, ThinkAir speeds up application
execution by allowing parallel execution of offloaded tasks. Both MAUI and
ThinkAir gather hardware, software, and network information through profiling
as inputs to their offloading decisions.
CloneCloud (Chun et al., 2011) provides runtime partitioning of applications
at the thread level based on a combination of static analysis and dynamic
profiling. The system partitions unmodified applications, i.e., without the
developer's help. Offloaded tasks are executed on a pre-deployed cloned
virtual machine (VM) in the cloud with higher resource capacity. An
application-layer VM-based offloading mechanism is used, in which the states
of offloading threads are encapsulated in VMs and then migrated to the remote
cloud for execution. The execution runtime includes a per-process migrator
thread for managing thread states (i.e., suspending, packaging, resuming, and
merging), and a per-node node manager for managing nodes (i.e.,
device-to-clone communication, clone image synchronization, and provisioning).
The CloneCloud system offloads with the aim of optimizing execution time and
energy consumption. Both the local device and the clone are equipped with
corresponding manager, migrator, and profiler counterparts alongside the
application.
Augmenting the computational capabilities of mobile devices by offloading
tasks at the service-level granularity has been studied in (Elgazzar et al.,
2016; Abolfazli et al., 2014), where tasks are offloaded to a remote cloud
server that exposes computing task execution services as RESTful services.
Elgazzar et al. (2016) present a task offloading framework that, in addition
to the capabilities of the cloud service provider, takes into account the
location of the data required for executing offloaded tasks (in scenarios
where the data is provided by a third party) when making offloading decisions.
The RMCC framework (Abolfazli et al., 2014) employs a RESTful API for
offloading tasks and considers device energy consumption and execution time
when making offloading decisions.
Conventionally, computation offloading decisions are made to reduce response
time, to reduce energy consumption, or to balance the two. The seminal work of
Kumar and Lu (2010) provides an analytical model for determining whether
offloading a task can save energy, taking computing resource requirements and
instantaneous network conditions into consideration. Such decisions may be
made using rule/policy-based approaches (Cuervo et al., 2010; Kosta et al.,
2012; Chun et al., 2011) or learning techniques (Eom et al., 2013). In
general, most offloading decision-making strategies consider a combination of
factors such as energy consumption, task complexity, network conditions, and
computing resource availability and utilization (Magurawalage et al., 2014).
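The Kumar-Lu criterion can be paraphrased as a simple energy balance: offloading saves energy roughly when the energy the device would spend computing locally exceeds the energy spent idling while the server computes plus the energy spent transmitting the data. A sketch of this simplified reading (the formula is our paraphrase of the model, and all numbers are illustrative):

```java
// Simplified sketch of the energy-balance criterion from Kumar and Lu's
// analysis (our paraphrase, not code from any offloading framework):
//   energySaved = (C/M)*Pc - (C/S)*Pi - (D/B)*Ptr
// C = computation (instructions), M/S = mobile/server speed (instr./s),
// D = data to transfer (bytes), B = bandwidth (bytes/s), and Pc/Pi/Ptr =
// power while computing / idling / transmitting (watts).
public class OffloadDecision {
    public static double energySaved(double c, double m, double s,
                                     double d, double b,
                                     double pc, double pi, double ptr) {
        return (c / m) * pc - (c / s) * pi - (d / b) * ptr;
    }

    // Offload only when it saves energy under this model.
    public static boolean shouldOffload(double c, double m, double s,
                                        double d, double b,
                                        double pc, double pi, double ptr) {
        return energySaved(c, m, s, d, b, pc, pi, ptr) > 0;
    }

    public static void main(String[] args) {
        // Compute-heavy task, little data: offloading pays off.
        System.out.println(shouldOffload(1e9, 1e8, 1e10, 1e4, 1e6, 0.9, 0.3, 1.3));
        // Light task, heavy data on a slow link: keep it local.
        System.out.println(shouldOffload(1e7, 1e8, 1e10, 1e8, 1e6, 0.9, 0.3, 1.3));
    }
}
```

The two calls illustrate why data volume and bandwidth dominate the decision as much as raw computation does.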
## 3\. Design Goals & Architecture
The ultimate goal of REACT is to improve the performance and augment the
capabilities of mobile devices by utilizing available web services and
computing resources in connected environments. In this section, we present the
design objectives of REACT, followed by a high-level overview of its
components.
1. (1)
_Microservices for mobile applications_ : Enable mobile applications to be
developed as a collection of loosely coupled app functions (i.e.,
microservices) that communicate using the HTTP protocol (Fielding et al.,
1999), allowing them to be independently managed.
2. (2)
_Dynamic adaptability to contextual changes_ : Key characteristics of mobile
computing environments are rapid change and volatility. Any solution that
operates in such an environment must therefore adapt to changes and account
for volatility in order to meet the requirements of the executing application.
REACT enables applications to adapt to contextual changes by allowing
functions to be dynamically offloaded based on contextual information gathered
from the mobile device.
3. (3)
_Compatibility with web services_ : Provide the ability to utilize and
incorporate web services in the network as part of the mobile application.
Allow mobile devices to dynamically utilize web services over HTTP for mobile
function offloading, enabling dynamic binding to offloading server
counterparts/web services.
4. (4)
_Flexible and efficient communication mechanism_ : Introduce an efficient and
flexible communication mechanism unifying inter-AF communication within the
device and with services in the network. Use the IPC mechanisms provided by
the Android OS for local AF communication, while allowing the AF message
transport technology (i.e., TCP or Android IPC) to be switched dynamically,
transparently to the communicating AFs, depending on their locality.
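The adaptation loop behind goal (2) can be sketched as a context listener that re-evaluates AF placement whenever monitored values change. The class names, thresholds, and policy below are purely illustrative, not REACT's implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of context-driven adaptation: components subscribe to context
// updates and decide, per update, whether their AFs should run locally or
// be offloaded. Thresholds are arbitrary illustrations.
public class ContextMonitor {
    public static class Context {
        public final double batteryLevel;   // 0.0 - 1.0
        public final double bandwidthMbps;  // current uplink estimate
        public Context(double batteryLevel, double bandwidthMbps) {
            this.batteryLevel = batteryLevel;
            this.bandwidthMbps = bandwidthMbps;
        }
    }

    private final List<Consumer<Context>> listeners = new ArrayList<>();

    public void subscribe(Consumer<Context> listener) { listeners.add(listener); }

    // Called whenever the device's context changes.
    public void publish(Context ctx) { listeners.forEach(l -> l.accept(ctx)); }

    // Illustrative policy: offload when battery is low and the link is fast.
    public static boolean preferOffload(Context ctx) {
        return ctx.batteryLevel < 0.3 && ctx.bandwidthMbps > 10.0;
    }

    public static void main(String[] args) {
        ContextMonitor monitor = new ContextMonitor();
        monitor.subscribe(ctx -> System.out.println(
            preferOffload(ctx) ? "offload AF" : "run AF locally"));
        monitor.publish(new Context(0.9, 50.0)); // prints "run AF locally"
        monitor.publish(new Context(0.2, 50.0)); // prints "offload AF"
    }
}
```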
Figure 1. Overview of the REACT framework.
As depicted in Figure 1, the REACT framework has two major aspects: the API
(Section 4) and the REACT Runtime (Section 5). The new IPC-based communication
mechanism is introduced in the Local Communication Manager (Section 6). The
following sections present each of these components in detail.
Figure 2. REACT API Overview
## 4\. Programming Interface
The REACT framework provides the developer with an API for defining
application tasks as independent Application Functions (AF) that can be
automatically offloaded to web services running in execution environments in
the network (e.g., edge, cloud). The REACT API thus enables the creation of
interoperable mobile AFs that can be executed and dynamically managed within
the common REACT platform.
As shown in Figure 2, the REACT library provides a _Function Wrapper_ class
that encapsulates and prepares AF procedures and data structures. This allows
the developer to define local AFs matching either existing services in the
network or services that may be deployed later to enable runtime offloading
(e.g., an app developer may develop and deploy remote counterparts of locally
running AFs as web services, which are then used for offloading). For example,
in a video viewing application the developer may encapsulate the code that
processes the frames into a "process" AF partition, along with its constituent
procedures and data structures. Moreover, using the methods provided by the
wrapper class, the developer can set the AF address, e.g., the Fully Qualified
Domain Name (FQDN) of the service, and handle AF communications by
implementing the corresponding virtual request handler methods, following the
request/response model. At the initialization stage, the wrapper class
connects with the underlying REACT runtime and automatically configures
AF-related parameters (e.g., registering the availability of the AF with the
Function Catalogue). All other required AF control and management tasks are
performed automatically by the underlying management components of the REACT
framework, which interface with the corresponding wrapper classes.
After the initialization phase, AFs become accessible at the provided address
within the device. Any application component (including other AFs) can
communicate with AFs using the _Request_ class provided by the framework, or
using the provided Request subclasses that contain methods for parsing and
handling responses of specific data types (e.g., _ByteArrayRequest_,
_StringRequest_).
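The request/response pattern and typed subclasses described above might look roughly like the following. This is a hypothetical mirror of the API, with a local echo standing in for the REACT runtime; the class names follow the prose but are not REACT's actual signatures:

```java
import java.nio.charset.StandardCharsets;

// Sketch of typed request classes: a base Request delivers raw bytes, and
// subclasses such as StringRequest parse the response into a specific type
// before handing it to the caller.
public class Requests {
    public static class Request {
        protected final String url; // address of the target AF or web service
        public Request(String url) { this.url = url; }
        // In REACT this would go through the runtime (IPC or HTTP); here we
        // echo the bytes to keep the sketch self-contained.
        public byte[] send(byte[] body) { return body; }
    }

    public static class StringRequest extends Request {
        public StringRequest(String url) { super(url); }
        // Parse the raw response into a String for the caller.
        public String send(String body) {
            byte[] raw = send(body.getBytes(StandardCharsets.UTF_8));
            return new String(raw, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        Requests.StringRequest req = new Requests.StringRequest("http://local/process");
        System.out.println(req.send("frame1")); // echoes "frame1"
    }
}
```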
Figure 3. Application following SFC model
The API gives the developer the flexibility to follow other application
development and structuring models, such as Microservices and Service Function
Chaining (SFC). Microservices is an architecture that structures an
application as a collection of loosely coupled services; the REACT API allows
one to realize microservice-based applications on Android devices, using
wrapper classes to construct independently manageable components that
communicate with other microservices or services over HTTP. The SFC model
structures an application as a collection of sequentially connected services.
In the example in Figure 3, 'I' indicates the input interface and 'O' the
output interface of each AF. Here, the _Function Wrapper_ class is used as the
input interface for handling incoming requests, while the _Request_ class is
used as the output interface for passing output data to the next AF in the
chain.
In all of the above scenarios, the REACT runtime manages both the execution
(dynamically offloading suitable AFs) and the communication between AFs
(in-device vs. network communication).
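The chaining just described can be sketched in plain Java; the class and method names below are illustrative assumptions, not part of the REACT API. Each AF in the chain consumes the output of its predecessor, mirroring the input/output interfaces in Figure 3.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of an SFC-style pipeline: each stage stands in for an
// AF, and each AF's output becomes the next AF's input.
public class SfcChain {
    private final List<UnaryOperator<String>> functions;

    public SfcChain(List<UnaryOperator<String>> functions) {
        this.functions = functions;
    }

    // Feed the input to the first AF and propagate outputs down the chain.
    public String process(String input) {
        String data = input;
        for (UnaryOperator<String> af : functions) {
            data = af.apply(data);
        }
        return data;
    }
}
```

In REACT itself the forwarding between stages goes through the _Request_ class rather than direct method calls, so each hop can transparently be served locally or over the network.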
### 4.1. Function Wrapper
Figure 4. A snippet of code of the Wrapper Class
The _Function Wrapper_ class (shown in Figure 4) provides the means for any
application component in the device to create independent AFs. It provides an
interface for AFs to serve incoming requests on a specific URL. The Java
abstract methods provided by the interface allow the developer to handle/serve
incoming requests and invoke internal AF procedures, in turn linking them to
the REACT runtime. The class also includes other helper functions that are
used by the REACT runtime rather than by the developer (methods automatically
invoked for registering/deregistering AFs with the Function Catalogues at
runtime). The code snippet in Figure 4 shows the structure of the interface.
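As a rough, non-authoritative sketch of this pattern (the names `onRequest`, `start`, `stop`, and the catalogue map are assumptions, not the actual REACT API), a wrapper might pair an abstract handler that the developer implements with helper methods that register the AF:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of a Function Wrapper: an abstract request handler plus
// registration helpers. Names and the in-memory catalogue are illustrative.
public abstract class FunctionWrapper {
    // Stand-in for the Function Catalogue: AF address -> wrapper instance.
    static final Map<String, FunctionWrapper> CATALOGUE = new HashMap<>();

    private final String address; // e.g., reverse FQDN of the matching web service

    protected FunctionWrapper(String address) {
        this.address = address;
    }

    // Developer-implemented handler, analogous to the abstract request
    // handler methods described in the text.
    public abstract String onRequest(String method, String payload);

    // Helpers invoked by the runtime: register/deregister with the catalogue.
    public final void start() {
        CATALOGUE.put(address, this);
    }

    public final void stop() {
        CATALOGUE.remove(address);
    }

    public final String getAddress() {
        return address;
    }
}
```

An AF would then be instantiated with Java's _new_ operator and activated by invoking _start()_, mirroring the instantiation flow described in Section 5.1.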
### 4.2. Requests
Requests can be created to request services from AFs or any other web service
in the network, in the form of RESTful (HTTP) or SOAP (Simple Object Access
Protocol) requests. In Figure 3, the _Request_ class has been used as the
output interface for transferring the output of one AF to the next one. Once a
request is constructed, depending on the locality of the requested AF, i.e.,
whether the AF is executed locally or in the network, the request is
indirected to the locally executed AF or sent over to the service in the
network, respectively, by the _Execution Engine_.
## 5\. EXECUTION & RUNTIME
In this section we describe in detail the REACT runtime components that run on
the mobile device. These components are included in the REACT framework
library provided to the developer and started automatically when the device
and the application start. Since conventional web services can be used for
offloading computing tasks in the network, and REACT does not require any
specific components or configurations to be deployed there, only the
components at the mobile device are discussed. The _Local Communication
Manager_ component is discussed separately in Section 6.
### 5.1. Function Distribution and Execution
Much like third-party application code that is distributed in the form of
libraries today, AFs that use the REACT API (i.e., AFs using the _Wrapper_
class) can be distributed to be used independently in other applications that
support the REACT framework. We envision that third-party REACT-compatible AF
providers may also provide matching web services distributed in the network,
allowing applications to benefit from the dynamic offloading functionality.
Developers can include REACT AFs in their apps and access their services
through _Request_ s. An AF can be dynamically instantiated within
an Android application or an Android service, by simply instantiating its
corresponding wrapper class (using Java _new_ operator) and invoking _start()_
on the AF instance, after setting required parameters (e.g., Android
Application Context (Android Developers, 2020b)).
### 5.2. Function Catalogues
The function catalogue keeps an up-to-date record of all functions available
to serve requests within the device. There exist two catalogues, namely, 1)
the Local (intra-app) function catalogue and 2) the Global (inter-app)
function catalogue. The intra-app catalogue keeps a record of AFs that are
available only within the application, while the inter-app catalogue keeps a
record of AFs that are available device-wide (i.e., implemented as an
independent Android service, with AF registration and query interfaces exposed
over Android IPC). All AFs register with the intra-app catalogue by default,
while application developers can choose to make an AF available to other apps
by setting the corresponding ’global catalogue’ parameter to ’true’ in the
_registerFunction()_ method of the Function Wrapper class. For example, a new
functionality may be made available (as an always-on Android Service) to all
applications running locally by a third-party application provider or by the
Android OS, while still retaining the flexibility of offloading to the network
(enabled by the REACT framework). Each AF record contains the following
fields.
* •
Address: Each AF is assigned an address. For an AF to be offloaded and
executed by a web service in the network, the address must adhere to the
condition that there is an address-mapping logic with which an AF address can
be mapped to the URL of the web service in the network. For example, the
current implementation names AFs using the reverse-DNS notation of the
corresponding web services used for offloading, enabling a simple and
efficient mapping between local AFs and web services. This mapping logic is
realized in the _lookup()_ function in the _Context Manager_.
* •
Communication method: REACT allows AFs to support multiple communication
methods. REACT configures Android Intent broadcast IPC as the default
communication method, but a given AF may offer one or more in-device function-
to-function communication methods for serving requests, using any of the
existing Android IPC mechanisms. REACT allows the developer to specify the
supported communication methods when initializing the _Function Wrapper_
instance, which are then automatically added to the catalogue. However, it is
advisable not to enable more than one communication method per AF, to reduce
the communication management overhead caused by separate communication types.
To overcome shortcomings such as limited bandwidth and inefficient memory
usage of Android IPC mechanisms, we introduce our own communication mechanism
in Section 6.1, which can be used by AFs.
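A minimal sketch of the two catalogues and a record carrying these fields might look as follows; the field and method names are assumptions inferred from the description above, not the actual REACT classes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative two-level catalogue: records keyed by AF address, carrying
// the supported communication methods.
public class FunctionCatalogue {
    public static class AfRecord {
        public final String address;           // e.g., reverse FQDN of the web service
        public final List<String> commMethods; // supported in-device IPC mechanisms

        public AfRecord(String address, List<String> commMethods) {
            this.address = address;
            this.commMethods = commMethods;
        }
    }

    private static final Map<String, AfRecord> LOCAL = new HashMap<>();  // intra-app
    private static final Map<String, AfRecord> GLOBAL = new HashMap<>(); // inter-app, device-wide

    // Mirrors registerFunction(): intra-app by default, also device-wide
    // when the 'global catalogue' parameter is true.
    public static void registerFunction(AfRecord rec, boolean global) {
        LOCAL.put(rec.address, rec);
        if (global) {
            GLOBAL.put(rec.address, rec);
        }
    }

    // Resolve an address, preferring the intra-app catalogue.
    public static AfRecord lookup(String address) {
        AfRecord rec = LOCAL.get(address);
        return rec != null ? rec : GLOBAL.get(address);
    }
}
```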
### 5.3. Context manager
The _Context Manager_ gathers various instantaneous and real-time context
information that can be used for optimizing application execution,
personalizing the user experience, and dynamically adapting the application to
the conditions in the environment. The _Context Manager_ is aware of the
locally running AFs (received from the catalogues as shown in Figure 1) and
gathers other information in the following categories.
* •
Location: Mobility of the device plays an important role in deciding when to
offload; e.g., prior knowledge of the web services available within a campus
may allow users to offload while on the premises. Most modern mobile devices
come with GPS sensors that can be used to obtain the user’s geographical
coordinates, and APIs provided by mobile platforms can be used to access
instantaneous GPS information.
* •
Device Resources: When making offloading decisions, the status of the
resources on the device is crucial (e.g., when offloading to optimize local
resource consumption). Such status information includes CPU utilization,
memory utilization, battery charging status, battery level, and storage
capacity. For example, the battery “plugged” and “unplugged” statuses, as well
as the current battery level, can be used to decide whether to offload a
resource-intensive AF when the device is not plugged into a power source.
* •
Connectivity: Mobile devices today come with a range of communication
interfaces supporting a wide range of applications, including Cellular,
Bluetooth, WiFi, and NFC, each having different characteristics and
capabilities. Real-time network connectivity information is imperative for
making offloading decisions. For example, an offloading algorithm may consider
the instantaneous network throughput and latency, as these two parameters
directly affect the performance when offloading.
### 5.4. Offload Decision Making Engine
The _Decision Making Engine_ decides on the best execution strategy for AFs,
based on information provided by the _Context Manager_. Based on a set of
preconfigured policies (or an algorithm), this component decides whether to
execute a given AF locally on the mobile device or to offload it to be
executed by a service in the network. It then invokes the corresponding
callback methods (_offloadFunction()_ and _initFunction()_) for offloading or
executing an AF locally. This, in turn, calls the _Wrapper class_ to
stop/start the AF, and the _Execution Engine_ to switch traffic between IPC
and the network. For example, if a specific AF costs more energy when executed
locally than when offloaded, the _Offload Decision Making Engine_ may decide
to offload it when the device is not plugged into power. Similarly, when the
device is connected to the user’s home WiFi network, computing-intensive tasks
of a mobile game may be offloaded to a gaming console connected to the same
network.
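As an illustration of such a preconfigured policy, the following sketch combines the battery and connectivity examples above; the context inputs and the power threshold are assumptions, not values taken from REACT.

```java
// Minimal rule-based offload policy over Context Manager inputs.
// The fields and the 1.0 W threshold are illustrative assumptions.
public class OffloadPolicy {
    public static boolean shouldOffload(boolean onHomeWifi, boolean pluggedIn,
                                        double afLocalPowerCostWatts) {
        if (!onHomeWifi) {
            return false; // no known offloading service reachable
        }
        if (pluggedIn) {
            return false; // local execution acceptable on mains power
        }
        // Offload only AFs whose local execution is expensive enough.
        return afLocalPowerCostWatts > 1.0;
    }
}
```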
### 5.5. Execution Engine
Requests that are created are added to a First In First Out (FIFO) queue
managed by the _Execution Engine_. The execution of each request is initiated
based on the requested AF’s locality and its chosen communication mechanism,
as provided by the _Offload Decision Making Engine_. This results in either
sending the request to a service executing in the network (over the
_HttpStack_ in REACT, which implements the HTTP protocol stack) or indirecting
it to a locally executing AF (over the _IPCStack_). That is, if the AF has
been offloaded, the request is sent as an HTTP request to the web service;
otherwise, the request is handed to the _Local Communication Manager_, which
delivers it to the locally executing AF using the chosen local function
communication mechanism (i.e., any available IPC mechanism). The _Local
Communication Manager_ uses the same communication medium for responses as for
their corresponding requests. Switching between IPC and network requests is
performed on a request-by-request basis and transparently to the AFs. AFs are
agnostic to the underlying communication mechanism used: they are not aware of
whether peering AFs reside locally or in the network, and do not need
communication-mechanism-specific procedures to be implemented within them.
Moreover, this decoupling of the underlying communication and lifecycle
management from an AF’s internal procedures allows REACT to upgrade existing
features without altering the AFs themselves (i.e., AF-specific internal
procedures remain unchanged; only a recompilation after updating the included
REACT library is required). Further details on how requests are handled by the
_Local Communication Manager_ when AFs communicate locally are presented in
Section 6.
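The FIFO queuing and per-request routing described above can be sketched as follows; the class names and the string-based routing trace are illustrative assumptions standing in for the _HttpStack_ and _IPCStack_ transports.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Predicate;

// Sketch of request-by-request dispatch: requests are drained FIFO and
// routed to the network or a local AF based on the offloading decision.
public class ExecutionEngine {
    public static class Request {
        final String afAddress;
        final String payload;

        public Request(String afAddress, String payload) {
            this.afAddress = afAddress;
            this.payload = payload;
        }
    }

    private final Queue<Request> queue = new ArrayDeque<>(); // FIFO order
    private final Predicate<String> isOffloaded; // decision per AF address

    public ExecutionEngine(Predicate<String> isOffloaded) {
        this.isOffloaded = isOffloaded;
    }

    public void enqueue(Request r) {
        queue.add(r);
    }

    // Drain the queue, returning a routing trace: "http:" entries would go
    // over the HTTP stack, "ipc:" entries to the Local Communication Manager.
    public List<String> drain() {
        List<String> trace = new ArrayList<>();
        Request r;
        while ((r = queue.poll()) != null) {
            if (isOffloaded.test(r.afAddress)) {
                trace.add("http:" + r.afAddress);
            } else {
                trace.add("ipc:" + r.afAddress);
            }
        }
        return trace;
    }
}
```

Because the decision predicate is evaluated per request, switching an AF between local and offloaded execution changes the routing of subsequent requests without touching the AF itself.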
## 6\. Local Communication Manager
When indirecting requests to locally executing AFs, any of the IPC mechanisms
offered by the mobile platform may be used. The _Local Communication Manager_
(LCM) sends the Request (and receives the Response) over the chosen IPC
mechanism, while hiding complexities specific to the local communication
mechanisms from the _Execution Engine_. For example, it turns all local AF
communication, including asynchronous Android IPC communication, into the
Request/Response model. Support for each communication mechanism (e.g.,
Android IPC type) is enabled by the included _IPCStack_ implementations,
allowing new communication mechanisms to be added later through new
implementations of _IPCStack_. Thus, the LCM can easily be extended later to
support AF communication mechanisms other than the existing Android IPC,
although in this paper we focus only on improving the existing Android IPC
used for AF communication.
Upon receiving a Request, the LCM selects a local communication method and the
corresponding stack implementation for establishing connectivity. The
selection of the most suitable communication method (from the available
_IPCStack_ implementations) can be based on any meaningful condition, realized
as a selection algorithm. However, the current implementation simply selects
the mechanism set by the AF developer, i.e., the local communication mechanism
given the highest priority in the ordered list of supported mechanisms
provided by the developer of the requested AF.
The most prominent abstraction over the underlying Binder IPC kernel driver is
the Android Intent (Backes et al., 2014). The LCM uses Android Intent (Android
Developers, 2020c) messages as the default unified intra-app (process-local
Intents (Android Developers, 2019c), with improvements made through the
extensions to Android Intent IPC introduced in Section 6.1) and inter-app
(Android global broadcasts (Android Developers, 2019b)) communication
mechanism, due to its flexible, easy-to-use, and optimized management
procedures, which are widely used throughout the Android OS (Backes et al.,
2014). Figure 5 shows the message format used when sending AF
(request/response) messages over Android Intent IPC. Data in messages are
stored as key-value pairs, with field names as keys and their values in the
corresponding value fields. The Intent action field is used as the AF address
field, while extra fields are used for storing all other fields, including the
payload.
Figure 5. Message format - using Android Intent IPC
The ’ _Reqid_ ’ field contains an identifier that uniquely identifies the
request (and is also included in the response, for matching responses with
requests). The ’ _Method_ ’ and ’ _URL_ ’ fields contain the request type,
which follows the web request type convention (e.g., GET, POST), and the
complete AF resource/service URL (if in the Request), respectively. The ’
_Payload_ ’ field contains the body of the message. The ’ _Status_ ’ field
contains the status code of the response, following the same convention as the
’ _status code_ ’ values of HTTP (Internet Assigned Numbers Authority (IANA),
2018).
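The field layout of Figure 5 can be mimicked with a simple key-value map. This is only a sketch: REACT actually carries these fields in the Intent action and extra fields, and the factory method names here are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Intent-based message format: fields stored as key-value
// pairs, mirroring the Reqid/Method/URL/Payload/Status fields in Figure 5.
public class AfMessage {
    public final Map<String, String> fields = new HashMap<>();

    public static AfMessage request(String reqId, String method, String url, String payload) {
        AfMessage m = new AfMessage();
        m.fields.put("Reqid", reqId);     // matches responses to requests
        m.fields.put("Method", method);   // web request type, e.g., GET, POST
        m.fields.put("URL", url);         // AF resource/service URL
        m.fields.put("Payload", payload); // message body
        return m;
    }

    public static AfMessage response(String reqId, String status, String payload) {
        AfMessage m = new AfMessage();
        m.fields.put("Reqid", reqId);     // copied from the request
        m.fields.put("Status", status);   // HTTP-style status code, e.g., "200"
        m.fields.put("Payload", payload);
        return m;
    }
}
```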
### 6.1. Application-Layer Heap
A major drawback of using any Android IPC (e.g., Intent, Binder, Content
Provider) for communicating among AFs locally when not offloading is the
significant increase in memory overhead when transferring large payloads
(Hsieh et al., 2013), along with inefficient memory management that can
eventually lead apps to crash. This renders Android IPC impractical for
applications such as continuous sensing and streaming apps, which require
sending large amounts of data between AFs.
Specifically, the focus of this section is on further improving the
communication efficiency of AFs within the same application (intra-app), for
the application scenario in which the runtime bandwidth and latency
requirements of intra-app AF communication are high, while messages sent
between inter-app AFs do not include large payloads (a design-time decision by
the app developer: AFs with high bandwidth and latency requirements reside
locally in the same Android app on the mobile device). Therefore, we continue
to use Android IPC (the Intent abstraction of Binder) in its original form for
inter-app communication.
We introduce an alternative intra-app AF-to-AF communication mechanism to
overcome the aforementioned limitations imposed by Android IPC mechanisms,
while still enabling flexible offloading of AFs into the network. The
Application-layer Heap, introduced in Figure 6, provides a shared memory space
for storing and communicating data between AFs. When AFs communicate within
the same app, _Payload_ data stored on the Application-layer Heap is not
shared with Android IPC or the broader system, and does not leave the
application process. Moreover, the Application-layer Heap maintains only one
copy of the data and provides the REACT framework with complete control over
its management (e.g., store, delete), i.e., it stores the payload when
initializing a message and deletes it immediately after transmission is
complete. This removes the unwanted copying and other inefficiencies incurred
by Android IPC when managing large payloads in memory.
Figure 6. App-layer Heap overview
Alternatively, it is also possible for AFs to directly access the Application-
layer Heap, such that memory operations (store, amend, delete) are performed
manually by the AFs, which then provide/receive the reference to the payload
in memory to/from REACT for transmission. However, the details of how this can
be done using the tools provided by the REACT framework library are not
presented here, as they are out of the scope of this paper. Therefore, in the
rest of the paper, all memory operations are assumed to be performed
automatically by the LCM.
When communicating using the Application-layer Heap, only a reference (’
_PaylRef_ ’) to the (non-empty) payload is transmitted between AFs, as can be
seen in the format of the request/response messages in Figure 5. Given the
improved performance when transmitting small messages (Hsieh et al., 2013),
REACT uses Android app-local Intent IPC (Android Developers, 2019c) for
communicating Application-layer-Heap-based messages, which only contain
references (’ _PaylRef_ ’) in the message body. Once the message reaches its
destination, the payload data is automatically retrieved from the
Application-layer Heap and provided to the corresponding AF, before the memory
space occupied by the message payload is released.
It is imperative that application data is localized and its external exposure
(outside the app process) is limited as much as possible, maintaining the
process-based isolation of data in Android (Raval et al., 2019). Therefore,
request/response payload data stored in the Application-layer Heap is only
accessible locally within the app, and the data itself does not leave the app
at any point during intra-app AF communication, while the reference
information included in the IPC messages is not useful outside the app. Widely
used secure web protocols, such as HTTPS, may be used when offloading AFs to
web services to secure application data against attacks.
Any underlying Application-layer Heap implementation 1) maintains only one
copy of the data in the device, 2) contains the data within the app process,
and 3) provides the LCM with means for storing and retrieving data, for
sending and receiving requests and responses at the function borders,
respectively. It is assumed that 3) provides a ’ _PaylRef_ ’ value for each
stored payload, for unique identification and access.
Figure 7. Code snippet of the Heap interface
The REACT library provides a common interface (the Heap interface; a code
snippet is shown in Figure 7) that can be used for implementing and providing
application-layer heap implementations to the LCM, i.e., the interface can be
used for providing new application-layer heap implementations other than the
one provided. Any object that implements this interface can be added to the
LCM using the methods provided by the library. The ’ _malloc_ ’ and ’ _write_
’ methods are used for storing payload data, by allocating a block of space
(matching the size of the data) in memory and writing the data to it,
respectively, before adding the reference number returned by ’ _malloc_ ’ to
the ’ _PaylRef_ ’ field in the message body (Figure 5). Likewise, the ’ _read_
’ and ’ _free_ ’ methods are used for retrieving data and freeing/deleting
used memory blocks, respectively.
_ByteArrayHeap_ is a simple Heap implementation, provided by the REACT
library, that uses a one-dimensional Java _ByteArray_ as the memory space; it
is set as the default Heap implementation of the REACT runtime. Memory blocks
are reserved within the continuous space of the _ByteArray_ , between the
first byte and the last. The reference of a memory block is the index of its
first byte within the _ByteArray_ (as allocated by _malloc_), which can
therefore be used to uniquely identify the block and its starting point in the
larger array (as one byte cannot belong to more than one block at a given
point in time; block spaces do not overlap). Likewise, a memory block’s ”end”
border index can easily be calculated by simply adding the size of the payload
data (in bytes) to its block start index (the _reference_). In what follows,
we present the algorithms used for implementing the methods in Figure 7, and
the other procedures needed for managing the _ByteArrayHeap_ implementation.
Figure 8. ByteArrayHeap block allocation overview
Portions of this continuous byte space are allocated by compartmentalizing the
array into separate memory blocks and maintaining a record of them, as
depicted in Figure 8. For keeping track of the blocks and the order they are
stored in (shown by the arrow in Figure 8), a list $E$ (a _LinkedList_ (Oracle
Docs, [n.d.])) of memory _Block_ objects corresponding to all existing memory
blocks is maintained. Each _Block_ element $i$ in the list stores its ’
_reference_ ’, the ’ _size_ ’ ($S_{i}$) of the block, and its ’ _status_ ’, as
member fields. The status indicates whether the block is currently being used
(’non-free’) or not (’free’, i.e., a freed portion of memory that is not being
used). It is this list that is manipulated when managing the _ByteArrayHeap_ ,
as opposed to making changes directly to the underlying _ByteArray_ when
performing the aforementioned operations. The actual bytes in the _ByteArray_
never get erased; instead, they get overwritten with new bytes, according to
the new block structure, when the corresponding ’free’ portion of memory gets
assigned to a new reallocation.
_malloc_ reuses ’free’ blocks after resizing them to match the new block size,
releasing any remaining bytes back to the heap as ’free’ memory. However, if
the _ByteArrayHeap_ is left unmanaged, over time the operations described
above can leave the underlying _ByteArray_ increasingly fragmented, e.g., due
to the residual bytes of reused blocks being left unused. Thus, adjacent
’free’ blocks are periodically _merge_ d and ’free’ blocks at the end of the
list are deleted (released to the non-allocated space). Finally, when
retrieving payload data (with the _read_ method), only the bytes within the
borders of the corresponding block (from the ’ _reference_ ’ location to the
end location of the block, inclusive) are read.
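The block management described above can be re-implemented as a compact, plain-Java sketch. The list of _Block_ records, reuse and splitting of ’free’ blocks, the _merge_ procedure, and the non-erasing _free_ follow the text; details the text leaves open (e.g., the first-fit reuse strategy and exact method signatures beyond Figure 7) are assumptions.

```java
import java.util.LinkedList;

// Illustrative re-implementation of the ByteArrayHeap algorithms: a flat
// byte array compartmentalized into blocks tracked in an ordered list.
public class ByteArrayHeap {
    private static class Block {
        int reference;  // index of the block's first byte in the array
        int size;       // block size in bytes
        boolean free;   // 'free' vs 'non-free' status

        Block(int reference, int size, boolean free) {
            this.reference = reference; this.size = size; this.free = free;
        }
    }

    private final byte[] memory;
    private final LinkedList<Block> blocks = new LinkedList<>(); // list E, in storage order
    private int used = 0; // first never-allocated index in the array

    public ByteArrayHeap(int capacity) {
        memory = new byte[capacity];
    }

    // Reuse the first 'free' block that fits (splitting off any remainder as
    // a new 'free' block), else append a new block at the end of the array.
    public int malloc(int size) {
        for (int i = 0; i < blocks.size(); i++) {
            Block b = blocks.get(i);
            if (b.free && b.size >= size) {
                if (b.size > size) {
                    blocks.add(i + 1, new Block(b.reference + size, b.size - size, true));
                }
                b.size = size;
                b.free = false;
                return b.reference;
            }
        }
        if (used + size > memory.length) throw new OutOfMemoryError("heap full");
        Block b = new Block(used, size, false);
        blocks.add(b);
        used += size;
        return b.reference;
    }

    public void write(int reference, byte[] data) {
        System.arraycopy(data, 0, memory, reference, data.length);
    }

    // Read only the bytes within the block's borders.
    public byte[] read(int reference) {
        Block b = find(reference);
        byte[] out = new byte[b.size];
        System.arraycopy(memory, reference, out, 0, b.size);
        return out;
    }

    // Bytes are never erased; the block is only marked 'free' for reuse.
    public void free(int reference) {
        find(reference).free = true;
    }

    // Coalesce adjacent 'free' blocks and release trailing free space.
    public void merge() {
        for (int i = 0; i + 1 < blocks.size(); ) {
            Block a = blocks.get(i), b = blocks.get(i + 1);
            if (a.free && b.free) {
                a.size += b.size;
                blocks.remove(i + 1);
            } else {
                i++;
            }
        }
        while (!blocks.isEmpty() && blocks.getLast().free) {
            used = blocks.getLast().reference;
            blocks.removeLast();
        }
    }

    private Block find(int reference) {
        for (Block b : blocks) if (b.reference == reference) return b;
        throw new IllegalArgumentException("unknown block: " + reference);
    }
}
```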
In summary, the _LCM_ enables the _Execution Engine_ to flexibly switch
between the IPC provided by the Android system and network requests, based on
the AF offloading decisions made by the _Offload Decision Making Engine_.
Moreover, the application-layer heap enables REACT to localise intra-app
communication data and to transport large payloads between intra-app AFs
efficiently over Android IPC.
## 7\. Implementation
For our REACT prototype, we used the Android Volley HTTP library (Android
Developers, 2020d) for creating and handling the HTTP requests/responses sent
between offloaded services and the application functions running on the
device. We modified the Volley library to implement the functionality of the
_Execution Engine_ of the REACT architecture (Figure 1), incorporating
interfaces to the _Offload Decision Making Engine_ and the _Local
Communication Manager_ for indirecting requests to the corresponding locally
executing application functions, based on the offloading decisions made by the
_Offload Decision Making Engine_. REACT uses the _RequestQueue_ of the Volley
library for queuing and processing inter-AF requests.
Microservice/AF Addressing: REACT identifies all application components,
AFs/microservices running locally on the device, and web services in the
network by their FQDN (Fully Qualified Domain Name). Locally executing
AFs/microservices may use the same FQDN (as their _address_) as their matching
services in the network (those used for offloading). Such an addressing scheme
simplifies the identification and mapping of addresses between locally
executing AFs and the corresponding services in the network, providing a
one-to-one mapping. However, following the Java/Android application naming
convention (Android Developers, 2019e), the addresses of all locally executing
AFs use the reverse FQDN of their web service counterparts. For example, an AF
may be addressed ” _com.example.myapp.process_ ” when its web service
counterpart is available at ” _process.myapp.example.com_ ”. In this example,
the _application ID_ is ” _com.example.myapp_ ” and the process function is
provided by the application provider. Likewise, the developer may also include
REACT-compatible mobile AFs/microservices, along with their web service
counterparts, provided by a third-party application provider (e.g.,
”com.example2.thirdpartyapp”), i.e., AFs using the REACT wrapper class,
distributed by the provider as a third-party Android library. This allows
REACT to discover locally running AFs and indirect incoming requests to the
corresponding functions.
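The reverse-FQDN mapping in this example reduces to reversing the dot-separated labels of the name; the following is a hypothetical helper sketching that logic, not REACT's actual _lookup()_ implementation.

```java
// Sketch of the address-mapping logic: a locally running AF named with the
// reverse-DNS convention maps to its web service host by reversing the
// dot-separated labels (and vice versa, since the operation is symmetric).
public class AddressMapper {
    public static String toServiceHost(String afAddress) {
        String[] labels = afAddress.split("\\.");
        StringBuilder sb = new StringBuilder();
        for (int i = labels.length - 1; i >= 0; i--) {
            sb.append(labels[i]);
            if (i > 0) sb.append('.');
        }
        return sb.toString();
    }
}
```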
## 8\. Evaluation
We have implemented a REACT prototype to demonstrate how mobile applications
can be developed as a collection of AFs (following the microservices
architecture) and how they can be managed dynamically at runtime while
adapting to contextual changes. We focus mainly on the flexibility provided by
the framework for offloading AFs and for managing the communication between
locally executing AFs and offloaded AFs, using the application scenario
presented in Section 8.1. Moreover, we evaluate the performance of the
communication mechanism introduced for intra-app AF communication.
The results presented show the flexibility in managing AFs running on a
user’s primary device, while adapting to contextual changes. The REACT
implementation used for our evaluation is based on Android v.10. We have used
Google Pixel 2 XL and Nvidia Shield Pro devices, and a KVM VM deployed in an
OpenStack environment, all connected to the same 2.5GHz Wi-Fi access point.
### 8.1. Application Use Case
Figure 9. REACT enabled Video streaming app
For demonstrating and evaluating the capabilities of the REACT framework, we
have implemented an Android video viewing application (the setup and app are
shown in Figure 9). Using the REACT API, the application has been developed as
a collection of three AFs, namely, the ” _Control_ ” AF, the ” _Display_ ” AF,
and the ” _Process_ ” AF. The _Control_ AF provides the user with a control
interface, allowing the user to perform various video control actions (e.g.,
start/pause video). The ” _Process_ ” AF receives the video from a video
source in the network, processes the video by applying an effect (selected by
the user), and provides the output to the ” _Display_ ” AF, forming a service
chain. The ” _Display_ ” AF displays the video on the screen. In this paper,
we do not discuss the details of the application-specific internal AF
implementations (e.g., video synchronization between AFs, video-related
control signalling between AFs), but focus only on the effect REACT has on the
mobile device during the management and flexible execution of those AFs in a
distributed environment. The video app runs on a Google Pixel 2 XL mobile
device, and we fix the video resolution to 360x640 for all experiments.
Web services in the network: We assume that a matching _Process_ function and
a _Display_ function are available in the network (i.e., accessible through
the WiFi network). The _Process_ service is implemented as a C++ application
deployed in an OpenStack environment, which serves _Process_ service requests
over the Apache web server (Server, [n.d.]). Likewise, a matching Android
_Display_ service/AF is deployed on the Nvidia Android TV device connected to
a TV screen, which displays the video on the connected TV based on requests
received from the _Control_ AF over the web interface (using the NanoHTTPD
(Elonen, 2020) Android embedded HTTP server). When the mobile device offloads
to the _Display_ service, the latter retrieves the processed video frames from
the _Process_ service deployed in the network, leaving only the _Control_ AF
on the mobile device.
Offload Decision Making: We employed a simple offloading policy that takes the
network connectivity and NFC tag readings as input. The _Process_ AF is
offloaded only when the user is connected to the home network. Likewise, the
_Display_ AF is offloaded (and pulled back to the mobile device) only on
reading a known NFC tag associated with a display device that has the
_Display_ service installed, while the device is connected to the home Wi-Fi
network. Both policies are manually programmed into the _Offload Decision
Making Engine_.
### 8.2. Power Consumption
In order to quantitatively demonstrate the overall performance of the REACT
framework and the adaptability of applications that use it, we analyse the
changes in instantaneous power consumption. For measuring the power
consumption of the Android device, we use Android’s _BatteryManager_ (Guclu et
al., 2016) (Android Developers, 2019a).
Figure 10. Power usage when executing _Process_ AF (with ”Sharpen” effect
enabled), local vs offloading
Figure 10 shows the changes in power consumption as a result of dynamic AF
offloading, triggered by changes in network connectivity and NFC readings. As
the user connects to the home Wi-Fi network, REACT offloads the _Process_ AF,
as instructed by the _Offload Decision Making Engine_ , reducing the
instantaneous overall app power consumption by $\sim$1.01 watts on average.
Then, as the user disconnects from the home Wi-Fi network, the _Process_
function is dynamically initiated locally, and the communication between the
local _Process_ and _Display_ functions is re-established automatically as the
_Process_ AF registers with the function catalogue as a locally available AF.
Likewise, offloading the _Display_ AF (along with the _Process_ AF), based on
NFC tag readings, reduces the instantaneous overall app power consumption by
$\sim$1.53 watts on average.
Figure 11. Average power consumption
The power savings achievable through offloading depend on the resource (and
communication) intensity of the AF being offloaded. However, choosing or
developing suitable AFs for improving power consumption is out of the scope of
this paper; our evaluation relies solely on the static policies programmed
into the _Offload Decision Making Engine_ to show REACT’s dynamic and flexible
management of independent AFs. Figure 11 shows the average power consumption
(per second) of the video viewing application running on the mobile device,
depending on the video effect (of varying computing intensity) being used.
Specifically, on average, offloading saves power when using the ”Sharpen”
effect, while it is less beneficial when adding the ”Gray scale” effect.
### 8.3. Memory Overhead
We used _Runtime_ (Android Developers, 2019d) on Android to measure the
Random Access Memory (RAM) usage within the application memory space, and
_MemoryInfo_ (Android Developers, 2020a) to measure the system-wide RAM usage.
(a) Memory usage without app-layer heap - from app level
(b) Memory usage without app-layer heap - from system level
(c) Memory usage with app-layer heap - from app level
(d) Memory usage with app-layer heap - from system level
Figure 12. Memory overhead of the application from the application and system
levels
As shown by previous studies (Hsieh et al., 2013) (and as mentioned in Section
6.1), Android IPC mechanisms (in our case, Android Broadcast Intents) incur
significant performance penalties when used for transferring large payload
data. Specifically, this can be observed over time in the application memory
usage, both at the application level (shown in Figure 12(a)) and the system
level (shown in Figure 12(b)). In both cases, memory consumption increases
continuously over time until the application crashes, after reaching the
maximum amount of memory available to the application, $\sim$110 seconds after
starting the application.
In contrast, Figure 12(c) and Figure 12(d) show how memory usage at the
application level and system level, respectively, stabilizes (at $\sim$200 MB)
with our application-layer heap enabled AF communication mechanism. The shared
memory space provided by the app-layer heap enables REACT to localise
application-specific data in intra-app AF communication while also improving
the efficiency of the Android IPC mechanism. The _merge_ procedure introduced
in Section 6.1 keeps the application-layer heap defragmented. Figure 13(a)
shows the average app-layer heap usage per second. Figure 13(b) shows how the
number of allocated memory blocks grows when _merge_ is disabled, while
enabling the _merge_ procedure keeps the number of blocks to a minimum. As the
number of memory blocks grows, the application eventually crashes ($\sim$150
seconds after launch).
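The defragmentation idea behind the _merge_ procedure can be illustrated with a simplified model (a sketch only, not the REACT implementation): the heap is viewed as an ordered list of blocks, and runs of adjacent free blocks are coalesced into one.

```python
# Simplified sketch of a merge/defragmentation pass over an
# application-layer heap modelled as an ordered list of blocks.
# (Illustrative model only; not the REACT implementation.)
from dataclasses import dataclass

@dataclass
class Block:
    offset: int   # start offset within the heap
    size: int     # block length in bytes
    free: bool    # True if the block is currently unused

def merge(blocks):
    """Coalesce runs of adjacent free blocks into single blocks."""
    merged = []
    for b in sorted(blocks, key=lambda b: b.offset):
        if merged and merged[-1].free and b.free \
                and merged[-1].offset + merged[-1].size == b.offset:
            merged[-1].size += b.size  # absorb the neighbouring free block
        else:
            merged.append(Block(b.offset, b.size, b.free))
    return merged

heap = [Block(0, 64, True), Block(64, 32, True), Block(96, 16, False)]
assert len(merge(heap)) == 2  # the two adjacent free blocks collapse into one
```

Without such a pass, the number of blocks only ever grows, which matches the behaviour reported in Figure 13(b) when _merge_ is disabled.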
(a) Avg. app-layer heap memory usage per second
(b) Number of blocks, with and without _merge_ enabled
Figure 13. App-layer Heap memory usage
## 9\. Conclusion
We have presented REACT, an Android-based application framework that enables
mobile applications to be developed as a collection of loosely coupled
AFs/microservices (goal 1). REACT manages microservices individually, allowing
fine-grained application components to be dynamically offloaded for execution
by web services in the network (goal 3) based on contextual changes (goal 2).
The _Local Communication Manager_ transfers requests and responses between
microservices efficiently using our newly introduced application-layer heap
enabled Android IPC mechanism, and switches seamlessly between network
requests and Android IPC based requests, transparently to the communicating
microservices. Thus, the communication medium of microservice requests can be
changed dynamically, depending on the locality of the communicating
microservices (goal 4). Our prototype implementation demonstrates that REACT
enables the development of flexible microservices-based mobile applications
that are dynamic and adaptable to contextual changes. The newly introduced
application-layer heap enabled Android IPC mechanism improves the efficiency
of Android IPC, reducing memory overhead when exchanging messages with large
payloads between locally executing microservices. We expect REACT, distributed
as an easy-to-use Android library, to kick-start the development of creative
and useful microservices-based Android mobile applications.
###### Acknowledgements.
The authors would like to thank Dr. Dirk Trossen for the initial technical
contribution to this work as team lead at InterDigital until December 2019.
## References
* Abolfazli et al. (2014) S. Abolfazli, Z. Sanaei, A. Gani, F. Xia, and W. Lin. 2014. RMCC: Restful Mobile Cloud Computing Framework for Exploiting Adjacent Service-Based Mobile Cloudlets. In _2014 IEEE 6th International Conference on Cloud Computing Technology and Science_. 793–798.
* AlDuaij et al. (2019) Naser AlDuaij, Alexander Van’t Hof, and Jason Nieh. 2019\. Heterogeneous Multi-Mobile Computing. In _Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services_ (Seoul, Republic of Korea) _(MobiSys ’19)_. Association for Computing Machinery, New York, NY, USA, 494–507. https://doi.org/10.1145/3307334.3326096
* Android Developers (2019a) Android Developers 2019a. BatteryManager. https://developer.android.com/reference/android/os/BatteryManager
* Android Developers (2019b) Android Developers 2019b. Broadcasts Overview. https://developer.android.com/guide/components/broadcasts
* Android Developers (2019c) Android Developers 2019c. LocalBroadcastManager. https://developer.android.com/reference/androidx/localbroadcastmanager/content/LocalBroadcastManager
* Android Developers (2019d) Android Developers 2019d. Runtime. https://developer.android.com/reference/java/lang/Runtime
* Android Developers (2019e) Android Developers 2019e. Set the application ID. https://developer.android.com/studio/build/application-id
* Android Developers (2020a) Android Developers 2020a. ActivityManager.MemoryInfo. https://developer.android.com/reference/android/app/ActivityManager.MemoryInfo
* Android Developers (2020b) Android Developers 2020b. Context. https://developer.android.com/reference/android/content/Context
* Android Developers (2020c) Android Developers 2020c. Intent. https://developer.android.com/reference/android/content/Intent
* Android Developers (2020d) Android Developers 2020d. Volley overview. https://developer.android.com/training/volley
* Backes et al. (2014) Michael Backes, Sven Bugiel, and Sebastian Gerling. 2014\. Scippa: System-Centric IPC Provenance on Android. In _Proceedings of the 30th Annual Computer Security Applications Conference_ (New Orleans, Louisiana, USA) _(ACSAC ’14)_. Association for Computing Machinery, New York, NY, USA, 36–45. https://doi.org/10.1145/2664243.2664264
* Chun et al. (2011) Byung-Gon Chun, Sunghwan Ihm, Petros Maniatis, Mayur Naik, and Ashwin Patti. 2011. CloneCloud: Elastic Execution between Mobile Device and Cloud. In _Proceedings of the Sixth Conference on Computer Systems_ (Salzburg, Austria) _(EuroSys ’11)_. Association for Computing Machinery, New York, NY, USA, 301–314. https://doi.org/10.1145/1966445.1966473
* Cisco (2020) Cisco 2020. _Cisco Annual Internet Report, 2018–2023_. Technical Report.
* Cuervo et al. (2010) Eduardo Cuervo, Aruna Balasubramanian, Dae-ki Cho, Alec Wolman, Stefan Saroiu, Ranveer Chandra, and Paramvir Bahl. 2010. MAUI: Making Smartphones Last Longer with Code Offload. In _Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services_ (San Francisco, California, USA) _(MobiSys ’10)_. Association for Computing Machinery, New York, NY, USA, 49–62. https://doi.org/10.1145/1814433.1814441
* Dinh et al. (2013) Hoang T. Dinh, Chonho Lee, Dusit Niyato, and Ping Wang. 2013\. A survey of mobile cloud computing: architecture, applications, and approaches. _Wireless Communications and Mobile Computing_ 13, 18 (2013), 1587–1611. https://doi.org/10.1002/wcm.1203 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/wcm.1203
* Dou et al. (2010) Adam Dou, Vana Kalogeraki, Dimitrios Gunopulos, Taneli Mielikainen, and Ville H. Tuulos. 2010\. Misco: A MapReduce Framework for Mobile Systems. In _Proceedings of the 3rd International Conference on PErvasive Technologies Related to Assistive Environments_ (Samos, Greece) _(PETRA ’10)_. Association for Computing Machinery, New York, NY, USA, Article 32, 8 pages. https://doi.org/10.1145/1839294.1839332
* Elgazzar et al. (2016) K. Elgazzar, P. Martin, and H. S. Hassanein. 2016\. Cloud-Assisted Computation Offloading to Support Mobile Services. _IEEE Transactions on Cloud Computing_ 4, 3 (2016), 279–292.
* Elonen (2020) Jarno Elonen. 2020\. NanoHTTPD embeddable HTTP server in Java. (2020). https://github.com/NanoHttpd/nanohttpd
* Eom et al. (2013) H. Eom, P. S. Juste, R. Figueiredo, O. Tickoo, R. Illikkal, and R. Iyer. 2013\. Machine Learning-Based Runtime Scheduler for Mobile Offloading Framework. In _2013 IEEE/ACM 6th International Conference on Utility and Cloud Computing_. 17–25.
* Fielding et al. (1999) Roy Fielding, Jim Gettys, Jeffrey Mogul, Henrik Frystyk, Larry Masinter, Paul Leach, and Tim Berners-Lee. 1999. Hypertext transfer protocol–HTTP/1.1.
* Guclu et al. (2016) Isa Guclu, Yuan-Fang Li, Jeff Z. Pan, and Martin J. Kollingbaum. 2016. Predicting Energy Consumption of Ontology Reasoning over Mobile Devices. In _The Semantic Web – ISWC 2016_ , Paul Groth, Elena Simperl, Alasdair Gray, Marta Sabou, Markus Krötzsch, Freddy Lecue, Fabian Flöck, and Yolanda Gil (Eds.). Springer International Publishing, Cham, 289–304.
* Hsieh et al. (2013) Cheng-Kang Hsieh, Hossein Falaki, Nithya Ramanathan, Hongsuda Tangmunarunkit, and Deborah Estrin. 2013\. Performance Evaluation of Android IPC for Continuous Sensing Applications. _SIGMOBILE Mob. Comput. Commun. Rev._ 16, 4 (Feb. 2013), 6–7. https://doi.org/10.1145/2436196.2436200
* Internet Assigned Numbers Authority (IANA) (2018) Internet Assigned Numbers Authority (IANA) 2018. Hypertext Transfer Protocol (HTTP) Status Code Registry. http://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml
* Jamshidi et al. (2018) P. Jamshidi, C. Pahl, N. C. Mendonça, J. Lewis, and S. Tilkov. 2018. Microservices: The Journey So Far and Challenges Ahead. _IEEE Software_ 35, 3 (2018), 24–35.
* Kanti Datta et al. (2018) S. Kanti Datta, M. Irfan Khan, L. Codeca, B. Denis, J. Härri, and C. Bonnet. 2018\. IoT and Microservices Based Testbed for Connected Car Services. In _2018 IEEE 19th International Symposium on ”A World of Wireless, Mobile and Multimedia Networks” (WoWMoM)_. 14–19.
* Kosta et al. (2012) S. Kosta, A. Aucinas, Pan Hui, R. Mortier, and Xinwen Zhang. 2012. ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading. In _2012 Proceedings IEEE INFOCOM_. 945–953.
* Kumar and Lu (2010) K. Kumar and Y. Lu. 2010. Cloud Computing for Mobile Users: Can Offloading Computation Save Energy? _Computer_ 43, 4 (2010), 51–56.
  * Sarathchandra Magurawalage et al. (2014) Chathura M. Sarathchandra Magurawalage, Kun Yang, Liang Hu, and Jianming Zhang. 2014. Energy-efficient and network-aware offloading algorithm for mobile cloud computing. _Computer Networks_ 74 (2014), 22–33. https://doi.org/10.1016/j.comnet.2014.06.020 Special Issue on Mobile Computing for Content/Service-Oriented Networking Architecture.
* Oh et al. (2019) Sangeun Oh, Ahyeon Kim, Sunjae Lee, Kilho Lee, Dae R. Jeong, Steven Y. Ko, and Insik Shin. 2019. FLUID: Flexible User Interface Distribution for Ubiquitous Multi-Device Interaction. In _The 25th Annual International Conference on Mobile Computing and Networking_ (Los Cabos, Mexico) _(MobiCom ’19)_. Association for Computing Machinery, New York, NY, USA, Article 42, 16 pages. https://doi.org/10.1145/3300061.3345443
* Oracle Docs ([n.d.]) Oracle Docs [n.d.]. LinkedList - Oracle Docs. https://docs.oracle.com/javase/7/docs/api/java/util/LinkedList.html
* O’Sullivan and Grigoras (2015) Michael J. O’Sullivan and Dan Grigoras. 2015. Integrating mobile and cloud resources management using the cloud personal assistant. _Simulation Modelling Practice and Theory_ 50 (2015), 20 – 41. https://doi.org/10.1016/j.simpat.2014.06.017 Special Issue on Resource Management in Mobile Clouds.
* Raval et al. (2019) Nisarg Raval, Ali Razeen, Ashwin Machanavajjhala, Landon P. Cox, and Andrew Warfield. 2019. Permissions Plugins as Android Apps. In _Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services_ (Seoul, Republic of Korea) _(MobiSys ’19)_. Association for Computing Machinery, New York, NY, USA, 180–192. https://doi.org/10.1145/3307334.3326095
  * Apache Web Server ([n.d.]) Apache Web Server. [n.d.]. The Apache HTTP Server Project. https://httpd.apache.org
* Shiraz et al. (2013) M. Shiraz, A. Gani, R. H. Khokhar, and R. Buyya. 2013\. A Review on Distributed Application Processing Frameworks in Smart Mobile Devices for Mobile Cloud Computing. _IEEE Communications Surveys Tutorials_ 15, 3 (2013), 1294–1313.
* Taleb et al. (2017) T. Taleb, K. Samdanis, B. Mada, H. Flinck, S. Dutta, and D. Sabella. 2017\. On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. _IEEE Communications Surveys Tutorials_ 19, 3 (2017), 1657–1681.
* Yang et al. (2016) L. Yang, J. Cao, S. Tang, D. Han, and N. Suri. 2016. Run Time Application Repartitioning in Dynamic Mobile Cloud Environments. _IEEE Transactions on Cloud Computing_ 4, 3 (2016), 336–348.
* Zhou et al. (2017) B. Zhou, A. V. Dastjerdi, R. N. Calheiros, S. N. Srirama, and R. Buyya. 2017\. mCloud: A Context-Aware Offloading Framework for Heterogeneous Mobile Cloud. _IEEE Transactions on Services Computing_ 10, 5 (2017), 797–810.
# Fair Training of Decision Tree Classifiers
Francesco Ranzato1, Caterina Urban2, Marco Zanella1
###### Abstract
We study the problem of formally verifying individual fairness of decision
tree ensembles, as well as training tree models which maximize both accuracy
and individual fairness. In our approach, fairness verification and fairness-
aware training both rely on a notion of stability of a classification model,
which is a variant of standard robustness under input perturbations used in
adversarial machine learning. Our verification and training methods leverage
abstract interpretation, a well established technique for static program
analysis which is able to automatically infer assertions about stability
properties of decision trees. By relying on a tool for adversarial training of
decision trees, our fairness-aware learning method has been implemented and
experimentally evaluated on the reference datasets used to assess fairness
properties. The experimental results show that our approach is able to train
tree models exhibiting a high degree of individual fairness w.r.t. the natural
state-of-the-art CART trees and random forests. Moreover, as a by-product,
these fair decision trees turn out to be significantly more compact, thus
enhancing the interpretability of their fairness properties.
## 1 Introduction
The widespread adoption of data-driven automated decision-making software with
far-reaching societal impact, e.g., for credit scoring (Khandani, Kim, and Lo
2010), recidivism prediction (Chouldechova 2017), or hiring tasks (Schumann et
al. 2020), raises concerns on the fairness properties of these tools (Barocas
and Selbst 2016). Several fairness verification and bias mitigation approaches
for machine learning (ML) systems have been proposed in recent years, e.g.
(Aghaei, Azizi, and Vayanos 2019; Grari et al. 2020; Roh et al. 2020; Ruoss et
al. 2020; Urban et al. 2020; Yurochkin, Bower, and Sun 2020; Zafar et al.
2017) among the others. However, most works focus on neural networks (Roh et
al. 2020; Ruoss et al. 2020; Urban et al. 2020; Yurochkin, Bower, and Sun
2020) or on group-based notions of fairness (Grari et al. 2020; Zafar et al.
2017), e.g., demographic parity (Dwork et al. 2012) or equalized odds (Hardt,
Price, and Srebro 2016). These notions of group-based fairness require some
form of statistical parity (e.g. between positive outcomes) for members of
different protected groups (e.g. gender or race). On the other hand, they do
not provide guarantees for individuals or other subgroups. By contrast, in
this paper we focus on _individual fairness_ (Dwork et al. 2012), intuitively
meaning that similar individuals in the population receive similar outcomes,
and on decision tree ensembles (Breiman 2001; Friedman 2001), which are
commonly used for tabular datasets since they are easily interpretable ML
models with high accuracy rates.
#### Contributions.
We propose an approach for verifying individual fairness of decision tree
ensembles, as well as training tree models which maximize both accuracy and
fairness. The approach is based on _abstract interpretation_ (Cousot and
Cousot 1977; Rival and Yi 2020), a well known static program analysis
technique, and builds upon a framework for training robust decision tree
ensembles called Meta-Silvae (Ranzato and Zanella 2020b), which in turn
leverages a verification tool for robustness properties of decision trees
(Ranzato and Zanella 2020a). Our approach is fully parametric on a given
underlying abstract domain representing input space regions containing similar
individuals. We instantiate it with a product of two abstract domains: (a) the
well-known abstract domain of hyper-rectangles (or boxes) (Cousot and Cousot
1977), that represents exactly the standard notion of similarity between
individuals based on the $\ell_{\infty}$ distance metric, and does not lose
precision for the univariate split decision rules of type $x_{i}\leq k$; and
(b) a specific relational abstract domain which accounts for one-hot encoded
categorical features.
Our Fairness-Aware Tree Training method, called FATT, is designed as an
extension of Meta-Silvae (Ranzato and Zanella 2020b), a learning methodology
for ensembles of decision trees based on a genetic algorithm which is able to
train a decision tree for maximizing both its accuracy and its robustness to
adversarial perturbations. We demonstrate the effectiveness of FATT in
training accurate and fair models on the standard datasets used in the
literature on fairness. Overall, the experimental results show that our fair-
trained models are on average between $35\%$ and $45\%$ more fair than
naturally trained decision tree ensembles, at an average accuracy loss of
$3.6\%$. Moreover, it turns out that our tree models are orders of magnitude
more compact and thus more interpretable. Finally, we show how our models can
be used as “hints” for setting the size and shape hyper-parameters (i.e.,
maximum depth and minimum number of samples per leaf) when training standard
decision tree models. As a result, this hint-based strategy is able to output
models that are about $20\%$ more fair and only about $1\%$ less accurate than
standard models.
#### Related Work.
The works most closely related to ours are by Aghaei et al. (Aghaei, Azizi,
and Vayanos 2019), Raff et al. (Raff, Sylvester, and Mills 2018), and Ruoss et
al. (Ruoss et al. 2020).
By relying on the mixed-integer optimization learning approach by Bertsimas
and Dunn (Bertsimas and Dunn 2017), Aghaei et al. (Aghaei, Azizi, and Vayanos
2019) put forward a framework for training fair decision trees for
classification and regression. The experimental evaluation shows that this
approach mitigates unfairness as modeled by their notions of disparate impact
and disparate treatment at the cost of a significantly higher training
computational cost. Their notion of disparate treatment is distance-based and
thus akin to individual fairness with respect to the nearest individuals _in a
given dataset_ (e.g., the $k$-nearest individuals). In contrast, we consider
individual fairness with respect to the nearest individuals _in the input
space_ (thus, also individuals that are not necessarily part of a given
dataset).
Raff et al. (Raff, Sylvester, and Mills 2018) propose a regularization-based
approach for training fair decision trees as well as fair random forests. They
consider both group fairness as well as individual fairness with respect to
the $k$-nearest individuals in a given dataset, similarly to Aghaei et al.
(Aghaei, Azizi, and Vayanos 2019). In their experiments they use a subset of
the datasets that we consider in our evaluation (i.e., the Adult, German, and
Health datasets). Our fair models have higher accuracy than theirs (i.e.,
between $2\%$ and $5.5\%$) for all but one of these datasets (i.e., the Health
dataset). Interestingly, their models (in particular those with worse accuracy
than ours) often have accuracy on par with a constant classifier due to the
highly unbalanced label distribution of the datasets (cf. Table 1).
Finally, Ruoss et al. (Ruoss et al. 2020) have proposed an approach for
learning individually fair data representations and training neural networks
(rather than decision tree ensembles as we do) that satisfy individual
fairness with respect to a given similarity notion. We use the same notions of
similarity in our experiments (cf. Section 6.1).
## 2 Background
Given an input space $X\subseteq\mathbb{R}^{d}$ of numerical vectors and a
finite set of labels $\mathcal{L}=\\{y_{1},\ldots,y_{m}\\}$, a classifier is a
function $C\colon X\rightarrow\wp_{+}(\mathcal{L})$, where
$\wp_{+}(\mathcal{L})$ is the set of nonempty subsets of $\mathcal{L}$, which
associates at least one label to every input in $X$. A training algorithm
takes as input a dataset $D\subseteq X\times\mathcal{L}$ and outputs a
classifier $C\colon X\rightarrow\wp_{+}(\mathcal{L})$ which optimizes some
objective function, such as the Gini index or the information gain for
decision trees.
Categorical features can be converted into numerical ones through one-hot
encoding, where a single feature with $k$ possible distinct categories
$\\{c_{1},...,c_{k}\\}$ is replaced by $k$ new binary features with values in
$\\{0,1\\}$. Then, each value $c_{j}$ of the original categorical feature is
represented by a bit-value assignment to the new $k$ binary features in which
the $j$-th feature is set to $1$ (and the remaining $k-1$ binary features are
set to $0$).
Classifiers can be evaluated and compared through several metrics. Accuracy on
a test set is a basic metric: given a ground truth test set $T\subseteq
X\times\mathcal{L}$, the accuracy of $C$ on $T$ is
$\mathit{acc}_{T}(C)\triangleq|\\{(\boldsymbol{x},y)\in
T~{}|~{}C(\boldsymbol{x})=\\{y\\}\\}|/|T|$. However, according to a growing
belief (Goodfellow, McDaniel, and Papernot 2018), accuracy is not enough in
machine learning, since robustness to adversarial inputs of a ML classifier
may significantly affect its safety and generalization properties (Carlini and
Wagner 2017; Goodfellow, McDaniel, and Papernot 2018). Given an input
perturbation modeled by a function $P\colon X\rightarrow\wp(X)$, a classifier
$C:X\rightarrow\wp_{+}(\mathcal{L})$ is _stable_ (Ranzato and Zanella 2020a)
on the perturbation $P(\boldsymbol{x})$ of $\boldsymbol{x}\in X$ when $C$
consistently assigns the same label(s) to every attack ranging in
$P(\boldsymbol{x})$, i.e.,
$\operatorname*{stable}(C,\boldsymbol{x},P)\stackrel{{\scriptstyle{\mbox{$\triangle$}}}}{{\Leftrightarrow}}\forall\boldsymbol{x}^{\prime}\in
P(\boldsymbol{x})\colon C(\boldsymbol{x}^{\prime})=C(\boldsymbol{x}).$
When the sample $\boldsymbol{x}\in X$ has a ground truth label
$y_{\boldsymbol{x}}\in\mathcal{L}$, robustness of $C$ on $\boldsymbol{x}$
boils down to stability $\operatorname*{stable}(C,\boldsymbol{x},P)$ together
with correct classification $C(\boldsymbol{x})=\\{y_{\boldsymbol{x}}\\}$.
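For a *finite* perturbation set, the stability and robustness conditions above can be checked directly; this is a sketch only — for infinite $P(\boldsymbol{x})$ such sampling can refute stability but not prove it, which is what the abstract-interpretation analysis of Section 4 is for:

```python
# Direct check of the stability condition over a finite perturbation set.
def stable(C, x, P):
    """stable(C, x, P) <=> for all x' in P(x): C(x') == C(x)."""
    return all(C(xp) == C(x) for xp in P(x))

def robust(C, x, P, y_true):
    """Robustness on x = stability + correct classification."""
    return stable(C, x, P) and C(x) == {y_true}

# Toy classifier: label by the sign of the first feature.
C = lambda x: {"pos"} if x[0] > 0 else {"neg"}
# Finite perturbation: shift the first feature by -0.1, 0, +0.1.
P = lambda x: [(x[0] + d,) for d in (-0.1, 0.0, 0.1)]

assert stable(C, (1.0,), P)        # all perturbations stay positive
assert not stable(C, (0.05,), P)   # a perturbation crosses the boundary
```

The classifier and perturbation here are toy stand-ins; the point is only that stability quantifies over the whole perturbation set, not just the sample itself.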
We consider standard classification decision trees commonly referred to as
CARTs (Classification And Regression Trees) (Breiman et al. 1984). A decision
tree $t\colon X\rightarrow\wp_{+}(\mathcal{L})$ is defined inductively. A base
tree $t$ is a single leaf $\lambda$ storing a frequency distribution of labels
for the samples of the training dataset, hence
$\lambda\in[0,1]^{|\mathcal{L}|}$, or, equivalently,
$\lambda\colon\mathcal{L}\rightarrow[0,1]$. Some algorithmic rule converts
this frequency distribution into a set of labels, typically as
$\operatorname*{arg\,max}_{y\in\mathcal{L}}\lambda(y)$. A composite tree $t$
is $\gamma(\operatorname*{\mathit{split}},t_{l},t_{r})$, where
$\operatorname*{\mathit{split}}\colon
X\rightarrow\\{\mathbf{tt},\mathbf{ff}\\}$ is a Boolean split criterion for
the internal parent node of its left and right subtrees $t_{l}$ and $t_{r}$;
thus, for all $\boldsymbol{x}\in X$,
$t(\boldsymbol{x})\triangleq\textbf{if}~{}\operatorname*{\mathit{split}}(\boldsymbol{x})~{}\textbf{then}~{}t_{l}(\boldsymbol{x})~{}\textbf{else}~{}t_{r}(\boldsymbol{x})$.
Although split rules can be of any type, most decision trees employ univariate
hard splits of type
$\operatorname*{\mathit{split}}(\boldsymbol{x})\triangleq\boldsymbol{x}_{i}\leq
k$ for some feature $i\in[1,d]$ and threshold $k\in\mathbb{R}$.
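The inductive definition above translates almost directly into code. The following is a minimal functional sketch (not the CART training algorithm, only tree evaluation): a leaf holds a label-frequency distribution and outputs its arg-max label set, and an internal node applies a univariate hard split $x_i \leq k$:

```python
# Minimal evaluation sketch of the inductive CART tree definition.
def leaf(freq):
    """A leaf stores a frequency distribution over labels and outputs
    the set of labels attaining the maximum frequency (arg max)."""
    best = max(freq.values())
    return lambda x: {y for y, f in freq.items() if f == best}

def node(i, k, t_left, t_right):
    """Internal node with univariate hard split: x_i <= k.
    Per the definition, the left subtree is taken when the split holds."""
    return lambda x: t_left(x) if x[i] <= k else t_right(x)

t = node(0, 2.0,
         leaf({"A": 0.8, "B": 0.2}),
         leaf({"A": 0.1, "B": 0.9}))

assert t((1.5,)) == {"A"}  # split holds: left leaf, arg max is A
assert t((3.0,)) == {"B"}  # split fails: right leaf, arg max is B
```

A leaf with a tied frequency distribution naturally returns more than one label, which is why classifiers here map into $\wp_{+}(\mathcal{L})$ rather than $\mathcal{L}$.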
Tree ensembles, also known as forests, are sets of decision trees which
together contribute to formulate a unique classification output. Training
algorithms as well as methods for computing the final output label(s) vary
among different tree ensemble models. Random forests (RFs) (Breiman 2001) are
a major instance of tree ensemble where each tree of the ensemble is trained
independently from the other trees on a random subset of the features.
Gradient boosted decision trees (GBDT) (Friedman 2001) represent a different
training algorithm, in which an ensemble of trees is built incrementally by
training each new tree on the data samples that are misclassified by the
previous trees. For RFs, the final classification output is
typically obtained through a voting mechanism (e.g., majority voting), while
GBDTs are usually trained for binary classification problems and use some
binary reduction scheme, such as one-vs-all or one-vs-one, for multi-class
classification.
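Majority voting for a random forest can be sketched as follows (one common aggregation scheme, shown here with toy stand-in trees):

```python
# Majority voting over an ensemble of trees: each tree contributes its
# output label(s) as votes, and the label(s) with the most votes win.
from collections import Counter

def forest_vote(trees, x):
    votes = Counter()
    for t in trees:
        for y in t(x):     # each tree returns a set of labels
            votes[y] += 1
    top = max(votes.values())
    return {y for y, v in votes.items() if v == top}

# Toy ensemble of constant classifiers: two vote A, one votes B.
trees = [lambda x: {"A"}, lambda x: {"A"}, lambda x: {"B"}]
assert forest_vote(trees, (0,)) == {"A"}
```

A tie in the vote again yields a set of labels, consistent with the $\wp_{+}(\mathcal{L})$ output type used throughout.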
## 3 Individual Fairness
Dwork et al. (Dwork et al. 2012) define _individual fairness_ as “the
principle that two individuals who are similar with respect to a particular
task should be classified similarly”. They formalize this notion as a
Lipschitz condition of the classifier, which requires that any two individuals
$\boldsymbol{x},\boldsymbol{y}\in X$ whose distance is
$\delta(\boldsymbol{x},\boldsymbol{y})\in[0,1]$ map to distributions
$D_{\boldsymbol{x}}$ and $D_{\boldsymbol{y}}$, respectively, such that the
statistical distance between $D_{\boldsymbol{x}}$ and $D_{\boldsymbol{y}}$ is
at most $\delta(\boldsymbol{x},\boldsymbol{y})$. The intuition is that the
output distributions for $\boldsymbol{x}$ and $\boldsymbol{y}$ are
indistinguishable up to their distance
$\delta(\boldsymbol{x},\boldsymbol{y})$. The distance metric $\delta\colon
X\times X\rightarrow\mathbb{R}_{\geq 0}$ is problem specific and satisfies the
basic axioms
$\delta(\boldsymbol{x},\boldsymbol{y})=\delta(\boldsymbol{y},\boldsymbol{x})$
and $\delta(\boldsymbol{x},\boldsymbol{x})=0$.
By following Dwork et al’s standard definition (Dwork et al. 2012), we
consider a classifier $C\colon X\rightarrow\wp_{+}(\mathcal{L})$ to be fair
when $C$ outputs the same set of labels for every pair of individuals
$\boldsymbol{x},\boldsymbol{y}\in X$ which satisfy a similarity relation
$S\subseteq X\times X$. Thus, $S$ can be derived from a distance $\delta$ as
$(\boldsymbol{x},\boldsymbol{y})\in
S\stackrel{{\scriptstyle{\mbox{$\triangle$}}}}{{\Leftrightarrow}}\delta(\boldsymbol{x},\boldsymbol{y})\leq\epsilon$,
where $\epsilon\in\mathbb{R}$ is a threshold of similarity. In order to
estimate a fairness metric for a classifier $C$, we count how often $C$ is
fair on sets of similar individuals ranging into a test set $T\subseteq
X\times\mathcal{L}$:
$\textstyle{\operatorname*{\mathit{fair}}_{T}(C)}\triangleq\displaystyle\frac{|\\{(\boldsymbol{x},y)\in
T~{}|~{}\operatorname*{fair}(C,\boldsymbol{x},S)\\}|}{|T|}$ (1)
where $\operatorname*{fair}(C,\boldsymbol{x},S)$ is defined as follows:
###### Definition 3.1 (Individual Fairness).
A classifier $C\colon X\rightarrow\wp_{+}(\mathcal{L})$ is _fair_ on an input
sample $\boldsymbol{x}\in X$ with respect to a similarity relation $S\subseteq
X\times X$, denoted by $\operatorname*{fair}(C,\boldsymbol{x},S)$, when
$\forall\boldsymbol{x}^{\prime}\in
X\colon(\boldsymbol{x},\boldsymbol{x}^{\prime})\in S\Rightarrow
C(\boldsymbol{x}^{\prime})=C(\boldsymbol{x})$. ∎
Hence, fairness for a similarity relation $S$ boils down to stability on the
perturbation $P_{S}(\boldsymbol{x})\triangleq\\{\boldsymbol{x}^{\prime}\in
X~{}|~{}(\boldsymbol{x},\boldsymbol{x}^{\prime})\in S\\}$, namely, for all
$\boldsymbol{x}\in X$,
$\operatorname*{fair}(C,\boldsymbol{x},S)\;\Leftrightarrow\;\operatorname*{stable}(C,\boldsymbol{x},P_{S})$
(2)
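The definitions of Eq. (1) and Eq. (2) can be checked empirically over a finite pool of candidate individuals standing in for $X$ (a sketch only — the paper's verification quantifies over the full input space via abstract interpretation; the classifier, pool, and threshold below are illustrative):

```python
# Fairness as stability (Eq. 2), restricted to a finite candidate pool.
def fair(C, x, S, pool):
    P_S = [xp for xp in pool if S(x, xp)]    # P_S(x) intersected with pool
    return all(C(xp) == C(x) for xp in P_S)  # = stable(C, x, P_S)

def fair_T(C, T, S, pool):
    """Empirical fairness metric of Eq. (1) over a test set T."""
    return sum(fair(C, x, S, pool) for x, _ in T) / len(T)

# Similarity derived from the l_inf distance with threshold eps = 0.7.
S = lambda x, y: max(abs(a - b) for a, b in zip(x, y)) <= 0.7
C = lambda x: {"pos"} if x[0] > 0 else {"neg"}

pool = [(-0.4,), (0.2,), (0.8,)]
T = [((0.8,), "pos"), ((0.2,), "pos")]
# (0.8,) is fair: all similar pool points get "pos".
# (0.2,) is not: (-0.4,) is within eps but gets "neg".
assert fair_T(C, T, S, pool) == 0.5
```

Note that the metric never consults the ground-truth labels in $T$, which is exactly the sense in which fairness is orthogonal to accuracy.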
Let us remark that fairness is orthogonal to accuracy, since it does not
depend on the correctness of the label assigned by the classifier; hence,
training algorithms that maximize accuracy-based metrics do not necessarily
achieve fair models. This is, in particular, the case for the natural learning
algorithm of CART trees and RFs, which locally optimizes split criteria by
measuring entropy or Gini impurity, both of which are indicators of the
correct classification of training data.
It is also worth observing that fairness is monotonic with respect to the
similarity relation, meaning that
$\operatorname*{fair}(C,\boldsymbol{x},S)\wedge S^{\prime}\subseteq
S\;\Rightarrow\;\operatorname*{fair}(C,\boldsymbol{x},S^{\prime})$ (3)
We will exploit this monotonicity property: it implies that, on the one hand,
fair classification is preserved for smaller similarity relations and, on the
other hand, fairness verification and fair training are more challenging for
larger similarity relations.
## 4 Verifying Fairness
As individual fairness is equivalent to stability, individual fairness of
decision trees can be verified by Silva (Ranzato and Zanella 2020a), an
abstract interpretation-based algorithm for checking stability properties of
decision tree ensembles.
### 4.1 Verification by Silva
Silva performs a static analysis of an ensemble of decision trees in a so-
called abstract domain $A$ that approximates properties of real vectors,
meaning that each abstract value $a\in A$ represents a set of real vectors
$\gamma(a)\in\wp(\mathbb{R}^{d})$. Silva approximates an input region
$P(\boldsymbol{x})\in\wp(\mathbb{R}^{d})$ for an input vector
$\boldsymbol{x}\in\mathbb{R}^{d}$ by an abstract value $a\in A$ such that
$P(\boldsymbol{x})\subseteq\gamma(a)$ and for each decision tree $t$, it
computes an over-approximation of the set of leaves of $t$ that can be reached
from some vector in $\gamma(a)$. This is computed by collecting the
constraints of split nodes for each root-leaf path, so that each leaf
$\lambda$ of $t$ stores the minimum set of constraints $C_{\lambda}$ which
makes $\lambda$ reachable from the root of $t$. It is then checked if this set
of constraints $C_{\lambda}$ can be satisfied by the input abstract value
$a\in A$: this check is denoted by $a\models^{?}C_{\lambda}$ and its
_soundness_ requirement means that if some input sample
$\boldsymbol{z}\in\gamma(a)$ may reach the leaf $\lambda$ then $a\models
C_{\lambda}$ must necessarily hold. When $a\models C_{\lambda}$ holds the leaf
$\lambda$ is marked as reachable from $a$. For example, if
$C_{\lambda}=\\{x_{1}\leq 2,\neg(x_{1}\leq-1),x_{2}\leq-1\\}$ then an abstract
value such as $\langle{x_{1}\leq 0,x_{2}\leq 0}\rangle$ satisfies
$C_{\lambda}$ while a relational abstract value such as $x_{1}+x_{2}=4$ does
not. This over-approximation of the set of leaves of $t$ reachable from $a$
allows us to compute a set of labels, denoted by
$t^{A}(a)\in\wp_{+}(\mathcal{L})$ which is an over-approximation of the set of
labels assigned by $t$ to all the input vectors ranging in $\gamma(a)$, i.e.,
$\cup_{\boldsymbol{z}\in\gamma(a)}t(\boldsymbol{z})\subseteq t^{A}(a)$ holds.
Thus, if $P(\boldsymbol{x})\subseteq\gamma(a)$ and
$t^{A}(a)=t(\boldsymbol{x})$ then $t$ is stable on $P(\boldsymbol{x})$.
For standard classification trees with hard univariate splits of type
$x_{i}\leq k$, we will use the well-known hyper-rectangle abstract domain
$\operatorname*{HR}$ whose abstract values for vectors
$\boldsymbol{x}\in\mathbb{R}^{d}$ are of type
$h=\langle{\boldsymbol{x}_{i}\in[l_{1},u_{1}],\dots,\boldsymbol{x}_{d}\in[l_{d},u_{d}]}\rangle\in\textstyle\operatorname*{HR}_{d}$
where lower and upper bounds $l,u\in\mathbb{R}\cup\\{-\infty,+\infty\\}$ with
$l\leq u$ (more on this abstract domain can be found in (Rival and Yi 2020)).
Thus, $\gamma(h)=\\{\boldsymbol{x}\in\mathbb{R}^{d}\mid\forall
i.\,l_{i}\leq\boldsymbol{x}_{i}\leq u_{i}\\}$. The hyper-rectangle abstract
domain guarantees that for each leaf constraint $C_{\lambda}$ and
$h\in\operatorname*{HR}$, the check $h\models^{?}C_{\lambda}$ is (sound and)
_complete_ , meaning that $h\models C_{\lambda}$ holds iff there exists some
input sample in $\gamma(h)$ reaching $\lambda$. This completeness property
therefore entails that the set of labels $t^{\operatorname*{HR}}(h)$ computed
by this analysis coincides exactly with the set of classification labels
computed by $t$ for all the samples in $\gamma(h)$, so that for the
$\ell_{\infty}$-based perturbation such that
$P_{\infty}(\boldsymbol{x})=\gamma(h)$ then it turns out that $t$ is stable on
$P_{\infty}(\boldsymbol{x})$ iff $t^{\operatorname*{HR}}(h)=t(\boldsymbol{x})$
holds.
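For univariate splits $x_i \leq k$, the check $h\models^{?} C_{\lambda}$ against a hyper-rectangle reduces to per-dimension interval tests. The following is a sketch of that check (not Silva itself, and glossing over open/closed endpoint subtleties); a path constraint is encoded as a triple `(i, k, polarity)`, where polarity `True` means $x_i \leq k$ and `False` means $\neg(x_i \leq k)$:

```python
# Sketch of the hyper-rectangle satisfiability check h |= C_lambda for
# univariate hard splits (illustrative; not the Silva implementation).
def satisfiable(box, constraints):
    """box: {feature index -> (lo, hi) interval}.
    Returns True iff some point of the box meets every path constraint."""
    for i, k, pol in constraints:
        lo, hi = box[i]
        if pol and lo > k:        # need x_i <= k but the interval is above k
            return False
        if not pol and hi <= k:   # need x_i > k but the interval is below k
            return False
    return True

# Leaf constraints from the example above:
# C_lambda = {x_1 <= 2, not(x_1 <= -1), x_2 <= -1}
C_lam = [(0, 2, True), (0, -1, False), (1, -1, True)]

# The abstract value <x_1 <= 0, x_2 <= 0> satisfies C_lambda ...
assert satisfiable({0: (float("-inf"), 0), 1: (float("-inf"), 0)}, C_lam)
# ... while a box entirely above x_1 = 2 does not.
assert not satisfiable({0: (3, 4), 1: (float("-inf"), 0)}, C_lam)
```

Because each test inspects one dimension of the box in isolation, the check is exact for boxes and such split constraints, which is the completeness property stated above; relational constraints like $x_1 + x_2 = 4$ fall outside what a box can express.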
In order to analyse a forest $F$ of trees, Silva reduces the whole forest to a
single tree $t_{F}$, by stacking every tree $t\in F$ on top of each other,
i.e., each leaf becomes the root of the next tree in $F$, where the ordering
of this unfolding operation does not matter. Then, each leaf $\lambda$ of this
huge single tree $t_{F}$ collects all the constraints of the leaves in the
path from the root of $t_{F}$ to $\lambda$. Since this stacked tree $t_{F}$
suffers from a combinatorial explosion of the number of leaves, Silva deploys
a number of optimisation strategies for its analysis. Basically, Silva
exploits a best-first search algorithm to look for a pair of input samples in
$\gamma(a)$ which are differently labeled, hence showing instability. If one
such instability counterexample can be found then instability is proved and
the analysis terminates; otherwise stability is proved. Also, Silva allows
setting a safe timeout which, when met, stops the analysis and outputs the current
sound over-approximation of labels.
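The effect of analysing the stacked tree $t_{F}$ can be sketched naively as follows: enumerate one reachable leaf per tree, keep only the joint choices whose refined boxes intersect, and collect every majority-vote outcome. This is an illustrative sketch (the representation is assumed, and it deliberately exhibits the combinatorial explosion that Silva's optimisations avoid), not Silva's actual algorithm.

```python
from itertools import product

def leaf_paths(node, box):
    """Yield (label, refined_box) for every leaf reachable from box.
    Nodes are ('leaf', label) or ('split', i, k, left, right), with the
    left branch taken when x_i <= k."""
    if node[0] == 'leaf':
        yield node[1], box
        return
    _, i, k, left, right = node
    low, high = box[i]
    if low <= k:
        lbox = list(box); lbox[i] = (low, min(high, k))
        yield from leaf_paths(left, lbox)
    if high > k:
        rbox = list(box); rbox[i] = (max(low, k), high)
        yield from leaf_paths(right, rbox)

def forest_labels(forest, box):
    """All majority-vote labels the forest can output on inputs in box."""
    out = set()
    for combo in product(*(list(leaf_paths(t, box)) for t in forest)):
        # intersect the refined boxes chosen for each tree
        joint = box
        for _, b in combo:
            joint = [(max(l1, l2), min(u1, u2))
                     for (l1, u1), (l2, u2) in zip(joint, b)]
        if any(l > u for l, u in joint):
            continue  # this combination of leaves is jointly unreachable
        votes = [lab for lab, _ in combo]
        best = max(votes.count(lab) for lab in set(votes))
        out |= {lab for lab in set(votes) if votes.count(lab) == best}
    return out
```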
### 4.2 Verification with One-Hot Encoding
As described above, the soundness of Silva guarantees that no true reachable
leaf is missed by this static analysis. Moreover, when the input region
$P(\boldsymbol{x})$ is defined by the $\ell_{\infty}$ norm and the static
analysis is performed using the abstract domain of hyper-rectangles
$\operatorname*{HR}$, Silva is also complete, meaning that no false positive
(i.e., a false reachable leaf) can occur. However, this is no longer true
when dealing with classification problems involving categorical
features.
Consider a toy forest $F$ consisting of two trees, where the left/right
branch is followed when the split condition is false/true: tree $t_{1}$
splits on $\textit{color}_{\textit{white}}\leq 0.5$ with leaves
$\\{\ell_{1}\\}$ (left) and $\\{\ell_{2}\\}$ (right), while tree $t_{2}$
splits on $\textit{color}_{\textit{black}}\leq 0.5$ with leaves
$\\{\ell_{2}\\}$ (left) and $\\{\ell_{1}\\}$ (right). Here, a categorical feature
$\textit{color}\in\\{\textit{white},\textit{black}\\}$ is one-hot encoded by
$\textit{color}_{\textit{white}},\textit{color}_{\textit{black}}\in\\{0,1\\}$.
Since colors are mutually exclusive, every white individual in the input
space, i.e.
$\textit{color}_{\textit{white}}=1,\textit{color}_{\textit{black}}=0$, will be
labeled as $\ell_{1}$ by both trees. However, by running the stability
analysis on the hyper-rectangle
$\langle\textit{color}_{\textit{white}}\in[0,1],\textit{color}_{\textit{black}}\in[0,1]\rangle$,
Silva would mark the classifier as unstable because there exists a sample in
$[0,1]^{2}$ whose output is
$\\{\ell_{1},\ell_{2}\\}\neq\\{\ell_{1}\\}=F(\textit{color}_{\textit{white}}=1,\textit{color}_{\textit{black}}=0)$.
This is due to the point $(0,0)\in[0,1]^{2}$ which is a feasible input sample
for the analysis, although it does not represent any actual individual in the
input space. In fact, $t_{1}(0,0)=\\{\ell_{2}\\}$ and
$t_{2}(0,0)=\\{\ell_{1}\\}$, so that by a majority voting
$F(0,0)=\\{\ell_{1},\ell_{2}\\}$, thus making $F$ unstable (i.e., unfair) on
$(1,0)$ (i.e., on white individuals).
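The spurious counterexample can be reproduced directly by writing the two toy trees as plain functions (label names `l1`/`l2` stand for $\ell_{1},\ell_{2}$):

```python
def t1(white, black):
    # split color_white <= 0.5: true branch -> l2, false branch -> l1
    return 'l2' if white <= 0.5 else 'l1'

def t2(white, black):
    # split color_black <= 0.5: true branch -> l1, false branch -> l2
    return 'l1' if black <= 0.5 else 'l2'

def F(white, black):
    votes = [t1(white, black), t2(white, black)]
    # majority voting; on a tie both labels are returned
    return set(votes)

# Actual individuals are one-hot: (1, 0) = white, (0, 1) = black.
# F(1, 0) = {'l1'} and F(0, 1) = {'l2'}, yet the infeasible corner (0, 0)
# of [0,1]^2 yields {'l1', 'l2'}, so a plain hyper-rectangle analysis
# wrongly reports instability on white individuals.
```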
To overcome this issue, we instantiate Silva to an abstract domain which is
designed as a reduced product (more details on reduced products can be found
in (Rival and Yi 2020)) with a relational abstract domain keeping track of the
relationships among the multiple binary features introduced by one-hot
encoding a categorical feature. More formally, this relational domain
maintains the following two additional constraints on the $k$ features
$x^{c}_{1},...,x^{c}_{k}$ introduced by one-hot encoding a categorical
variable $x^{c}$ with $k$ distinct values:
1. the possible values for each $x^{c}_{i}$ are restricted to $\\{0,1\\}$;
2. the sum of all $x^{c}_{i}$ must satisfy $\sum_{i=1}^{k}x^{c}_{i}=1$.
Hence, these conditions guarantee that any abstract value for
$x^{c}_{1},...,x^{c}_{k}$ represents precisely one possible category for
$x^{c}$. This abstract domain for a categorical variable $x$ with $k$ distinct
values is denoted by $\operatorname*{OH}_{k}(x)$. In the example above, any
hyper-rectangle
$\langle\textit{color}_{\textit{white}}\in[0,1],\textit{color}_{\textit{black}}\in[0,1]\rangle$
is reduced by $\operatorname*{OH}_{2}(\textit{color})$, so that just two
different values
$\langle\textit{color}_{\textit{white}}=0,\textit{color}_{\textit{black}}=1\rangle$
and
$\langle\textit{color}_{\textit{white}}=1,\textit{color}_{\textit{black}}=0\rangle$
are allowed.
Summing up, the generic abstract value of the reduced hyper-rectangle domain
computed by the analyzer Silva for data vectors consisting of $d$ numerical
variables $x^{j}\in\mathbb{R}$ and $m$ categorical variables $c^{j}$ with
$k_{j}\in\mathbb{N}$ distinct values is:
$\textstyle\langle{x^{j}\in[l_{j},u_{j}]}\rangle_{j=1}^{d}\times\langle{c^{j}_{i}\in\\{0,1\\}\mid\sum_{i=1}^{k_{j}}c^{j}_{i}=1}\rangle_{j=1}^{m}$
where $l_{j},u_{j}\in\mathbb{R}\cup\\{-\infty,+\infty\\}$ and $l_{j}\leq
u_{j}$.
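The $\operatorname*{OH}_{k}$ reduction of the categorical block of a hyper-rectangle can be illustrated by enumerating the assignments that survive the two constraints above (an illustrative sketch, not the analyzer's internal representation):

```python
def onehot_assignments(bounds):
    """OH_k reduction of a hyper-rectangle over k one-hot features:
    keep only assignments where each feature is in {0, 1}, lies inside
    its interval, and exactly one feature equals 1."""
    k = len(bounds)
    valid = []
    for hot in range(k):
        assignment = tuple(1 if i == hot else 0 for i in range(k))
        if all(l <= v <= u for v, (l, u) in zip(assignment, bounds)):
            valid.append(assignment)
    return valid
```

For the color example, `onehot_assignments([(0, 1), (0, 1)])` returns only the two feasible individuals, excluding the spurious corners $(0,0)$ and $(1,1)$.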
## 5 FATT: Fairness-Aware Training of Trees
Several algorithms for training robust decision trees and ensembles have been
put forward (Andriushchenko and Hein 2019; Calzavara, Lucchese, and Tolomei
2019; Calzavara et al. 2020; Chen et al. 2019; Kantchelian, Tygar, and Joseph
2016; Ranzato and Zanella 2020b). These algorithms encode the robustness of a
tree classifier as a loss function which is minimized either by exact
methods such as MILP or by suboptimal heuristics such as genetic algorithms.
The robust learning algorithm of (Ranzato and Zanella 2020b), called Meta-
Silvae, aims at maximizing a tunable weighted linear combination of accuracy
and stability metrics. Meta-Silvae relies on a genetic algorithm for evolving
a population of trees which are ranked by their accuracy and stability, where
tree stability is computed by resorting to the verifier Silva (Ranzato and
Zanella 2020a). At the end of this genetic evolution, Meta-Silvae returns the
best tree(s). It turns out that Meta-Silvae typically outputs compact models
which are easily interpretable and often accurate and stable
already with a single decision tree rather than a forest. By exploiting the
equivalence (2) between individual fairness and stability and the
instantiation of the verifier Silva to the product abstract domain tailored
for one-hot encoding, we use Meta-Silvae as a learning algorithm for decision
trees, called FATT, that enhances their individual fairness.
While standard learning algorithms for tree ensembles require tuning some
hyper-parameters, such as maximum depth of trees, minimum amount of
information on leaves and maximum number of trees, Meta-Silvae is able to
infer them automatically, so that the traditional tuning process is not
needed. Instead, some standard parameters are required by the underlying
genetic algorithm, notably, the size of the evolving population, the maximum
number of evolving iterations, the crossover and mutation functions (Holland
1984; Srinivas and Patnaik 1994). Moreover, we need to specify the objective
function of FATT that, for learning fair decision trees, is given by a
weighted sum of the accuracy and individual fairness scores over the training
set. It is worth remarking that, given an objective function, the genetic
algorithm of FATT converges to an optimal (or suboptimal) solution regardless
of the chosen parameters, which affect only the rate of convergence and
should therefore be chosen to tune its speed.
Crossover and mutation functions are two main distinctive features of the
genetic algorithm of Meta-Silvae. The crossover function of Meta-Silvae
combines two parent trees $t_{1}$ and $t_{2}$ by randomly substituting a
subtree of $t_{1}$ with a subtree of $t_{2}$. Also, Meta-Silvae supports two
types of mutation strategies: grow-only, which only allows trees to grow, and
grow-and-prune, which also allows pruning the mutated trees. Finally, let us
point out that Meta-Silvae allows setting the basic parameters used by genetic
algorithms: population size, selection function, number of iterations. In our
instantiation of Meta-Silvae to fair learning: the population size is kept
fixed to $32$, as the experimental evaluation showed that this provides an
effective balance between achieved fairness and training time; the standard
roulette wheel algorithm is employed as selection function; the number of
iterations is typically dataset-specific.
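The subtree-swapping crossover described above can be sketched as follows. This is a minimal illustration under an assumed tuple-based tree representation, not Meta-Silvae's actual code; nodes are `('leaf', label)` or `('split', i, k, left, right)`.

```python
import random

def subtree_positions(node, path=()):
    """Yield the path (sequence of child indices) of every subtree."""
    yield path
    if node[0] == 'split':
        yield from subtree_positions(node[3], path + (3,))
        yield from subtree_positions(node[4], path + (4,))

def get(node, path):
    for step in path:
        node = node[step]
    return node

def replace(node, path, new):
    if not path:
        return new
    n = list(node)
    n[path[0]] = replace(node[path[0]], path[1:], new)
    return tuple(n)

def crossover(t1, t2, rng=random):
    """Substitute a random subtree of t1 with a random subtree of t2."""
    p1 = rng.choice(list(subtree_positions(t1)))
    p2 = rng.choice(list(subtree_positions(t2)))
    return replace(t1, p1, get(t2, p2))
```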
## 6 Experimental Evaluation
We consider the main standard datasets used in the fairness literature and we
preprocess them by following the steps of Ruoss et al. (Ruoss et al. 2020,
Section 5) for their experiments on individual fairness for deep neural
networks: (1) standardize numerical attributes to zero mean and unit variance;
(2) one-hot encode all categorical features; (3) drop rows/columns
containing missing values; and (4) split into train and test set. These
datasets concern binary classification tasks, although our fair learning
naturally extends to multiclass classification with no specific effort. We
will make all the code, datasets and preprocessing pipelines of FATT publicly
available upon publication of this work.
Adult.
The Adult income dataset (Dua and Graff 2017) is extracted from the 1994 US
Census database. Every sample assigns a yearly income (below or above $50K) to
an individual based on personal attributes such as gender, race, and
occupation.
Compas.
The COMPAS dataset contains data collected on the use of the COMPAS risk
assessment tool in Broward County, Florida (Angwin et al. 2016). Each sample
predicts the risk of recidivism for individuals based on personal attributes
and criminal history.
Crime.
The Communities and Crime dataset (Dua and Graff 2017) contains socio-
economic, law enforcement, and crime data for communities within the US. Each
sample indicates whether a community is above or below the median number of
violent crimes per population.
German.
The German Credit dataset (Dua and Graff 2017) contains samples assigning a
good or bad credit score to individuals.
Health.
The Heritage Health dataset (https://www.kaggle.com/c/hhp) contains physician
records and insurance claims. Each sample predicts the ten-year mortality
(above or below the median Charlson index) for a patient.
| | Training Set | Test Set
---|---|---|---
dataset | #features | size | positive | size | positive
adult | 103 | 30162 | 24.9% | 15060 | 24.6%
compas | 371 | 4222 | 53.3% | 1056 | 55.6%
crime | 147 | 1595 | 50.0% | 399 | 49.6%
german | 56 | 800 | 69.8% | 200 | 71.0%
health | 110 | 174732 | 68.1% | 43683 | 68.0%
Table 1: Overview of Datasets.
Table 1 displays size and distribution of positive samples for these datasets.
As noticed by (Ruoss et al. 2020), some datasets exhibit a highly unbalanced
label distribution. For example, for the adult dataset, the constant
classifier $C(\boldsymbol{x})=1$ would achieve $75.4\%$ test set accuracy and
$100\%$ individual fairness with respect to any similarity relation. Hence, we
follow (Ruoss et al. 2020) and we will evaluate and report the balanced
accuracy of our FATT classifiers, i.e.,
$0.5\,(\frac{\textit{truePositive}}{\textit{truePositive}+\textit{falseNegative}}+\frac{\textit{trueNegative}}{\textit{trueNegative}+\textit{falsePositive}})$.
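The balanced accuracy formula above amounts to averaging the true positive and true negative rates, which can be computed directly (assuming both classes occur in the test set, otherwise a rate is undefined):

```python
def balanced_accuracy(y_true, y_pred):
    """0.5 * (TPR + TNR); assumes both classes occur in y_true."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

# A constant classifier always predicting the majority class scores the
# majority fraction in plain accuracy but only 50% in balanced accuracy.
```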
### 6.1 Similarity Relations
We consider four different types of similarity relations, as described by
Ruoss et al. (Ruoss et al. 2020, Section 5.1). In the following, let
$I\subseteq\mathbb{N}$ denote the set of indexes of features of an individual
after one-hot encoding.
noise:
Two individuals $\boldsymbol{x},\boldsymbol{y}\in X$ are similar when a subset
of their (standardized) numerical features indexed by a given subset
$I^{\prime}\subseteq I$ differs less than a given threshold $\tau\geq 0$,
while all the other features are unchanged:
$(\boldsymbol{x},\boldsymbol{y})\in S_{\textit{noise}}$ iff
$|{\boldsymbol{x}}_{i}-{\boldsymbol{y}}_{i}|\leq\tau$ for all $i\in
I^{\prime}$, and ${\boldsymbol{x}}_{i}={\boldsymbol{y}}_{i}$ for all $i\in
I\smallsetminus I^{\prime}$. For our experiments, we consider $\tau=0.3$
in the standardized input space; e.g., for adult, two individuals are similar
if their age difference is at most 3.95 years.
cat:
Two individuals are similar if they are identical except for one or more
categorical sensitive attributes indexed by $I^{\prime}\subseteq I$:
$(\boldsymbol{x},\boldsymbol{y})\in S_{\textit{cat}}$ iff
${\boldsymbol{x}}_{i}={\boldsymbol{y}}_{i}$ for all $i\in I\smallsetminus
I^{\prime}$. For adult and german, we select the gender attribute. For compas,
we identify race as sensitive attribute. For crime, we consider two
individuals similar regardless of their state. Lastly, for health, neither
gender nor age group should affect the final prediction.
noise-cat:
Given noise and categorical similarity relations $S_{\textit{noise}}$ and
$S_{\textit{cat}}$, their union $S_{\textit{noise-cat}}\triangleq
S_{\textit{noise}}\cup S_{\textit{cat}}$ models a relation where two
individuals are similar when some of their numerical attributes differ up to a
given threshold while the other attributes are equal except some categorical
features.
conditional-attribute:
Here, similarity is a disjunction of two mutually exclusive cases. Consider a
numerical attribute $\boldsymbol{x}_{i}$, a threshold $\tau\geq 0$ and two
noise similarities $S_{n_{1}},S_{n_{2}}$. Two individuals are defined to be
similar if their $i$-th attributes are similar for $S_{n_{1}}$ and are bounded
by $\tau$ or these attributes are above $\tau$ and similar for $S_{n_{2}}$:
$S_{\textit{cond}}\triangleq\\{(\boldsymbol{x},\boldsymbol{y})\in
S_{n_{1}}~{}|~{}{\boldsymbol{x}}_{i}\leq\tau,\,{\boldsymbol{y}}_{i}\leq\tau\\}\cup\\{(\boldsymbol{x},\boldsymbol{y})\in
S_{n_{2}}~{}|~{}{\boldsymbol{x}}_{i}>\tau,\,{\boldsymbol{y}}_{i}>\tau\\}$. For
adult, we consider the median age as threshold $\tau=37$, and two noise
similarities based on age with thresholds $0.2$ and $0.4$, which correspond to
age differences of $2.63$ and $5.26$ years respectively. For german, we also
consider the median age $\tau=33$ and the same noise similarities on age, that
correspond to age differences of $0.24$ and $0.47$ years.
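The four similarity relations can be written as simple predicates over feature vectors. This is an illustrative sketch (function names and the index-set encoding are our own), not the verifier's input format:

```python
def sim_noise(x, y, idx, tau):
    """S_noise: features in idx may differ up to tau; the rest are equal."""
    return all(abs(x[i] - y[i]) <= tau if i in idx else x[i] == y[i]
               for i in range(len(x)))

def sim_cat(x, y, idx):
    """S_cat: identical except for the sensitive features in idx."""
    return all(x[i] == y[i] for i in range(len(x)) if i not in idx)

def sim_noise_cat(x, y, noise_idx, tau, cat_idx):
    """S_noise-cat: the union of the two relations above."""
    return sim_noise(x, y, noise_idx, tau) or sim_cat(x, y, cat_idx)

def sim_cond(x, y, i, tau, sim1, sim2):
    """Conditional-attribute: sim1 below the threshold, sim2 above it."""
    if x[i] <= tau and y[i] <= tau:
        return sim1(x, y)
    if x[i] > tau and y[i] > tau:
        return sim2(x, y)
    return False
```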
Note that our approach is not limited to supporting these similarity
relations. Further domain-specific similarities can be defined and handled by
our approach by instantiating the underlying verifier Silva with an
appropriately over-approximating abstract domain to retain soundness.
Moreover, if the similarity relation can be precisely represented in the
chosen abstract domain, we also retain completeness.
### 6.2 Setup
Our experimental evaluation compares CART trees and Random Forests with our
FATT tree models. CARTs and RFs are trained by scikit-learn. We first run a
preliminary phase for tuning the hyper-parameters for CARTs and RFs. In
particular, we considered both entropy and Gini index as split criteria, and
we checked maximum tree depths ranging from $5$ to $100$ with step $10$. For
RFs, we scanned the maximum number of trees ($5$ to $100$, step $10$). Cross
validation inferred the optimal hyper-parameters, where the datasets have been
split into $80\%$ training and $20\%$ validation sets. The hyper-parameters of
FATT (i.e., weights of accuracy and fairness in the objective function, type of
mutation, selection function, number of iterations) were selected by assessing
convergence speed, maximum fitness value and variance of fitness in the
population during the training phase. FATT trained single decision trees rather than
forests, thus providing more compact and interpretable models. It turned out
that accuracy and fairness of single FATT trees are already competitive, where
individual fairness may exceed $85\%$ for the most challenging similarities.
We therefore concluded that ensembles of FATT trees do not introduce
statistically significant benefits over single decision trees. Since FATT
training is stochastic, as it relies on random seeds, each experimental test has
been repeated 1000 times and the results refer to their median value.
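The tuning grid described above can be enumerated explicitly. This is an illustrative sketch with our own names, not the actual tuning script; note that a range "from 5 to 100 with step 10" yields the values 5, 15, ..., 95.

```python
def hyperparameter_grid():
    """Grid for CART and RF tuning: split criterion, max depth,
    and (RF only) maximum number of trees."""
    criteria = ['entropy', 'gini']
    depths = range(5, 101, 10)        # 5, 15, ..., 95
    n_trees = range(5, 101, 10)       # RF only
    cart = [{'criterion': c, 'max_depth': d}
            for c in criteria for d in depths]
    rf = [dict(g, n_estimators=n) for g in cart for n in n_trees]
    return cart, rf

def train_validation_split(samples, train_frac=0.8):
    """80/20 split used for cross validation of the hyper-parameters."""
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]
```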
### 6.3 Results
| Acc. % | Bal.Acc. % | Individual Fairness $\operatorname*{\mathit{fair}}_{T}$ %
---|---|---|---
| cat | noise | noise-cat
Dataset | RF | FATT | RF | FATT | RF | FATT | RF | FATT | RF | FATT
adult | 82.76 | 80.84 | 70.29 | 61.86 | 91.71 | 100.00 | 85.44 | 95.21 | 77.50 | 95.21
compas | 66.57 | 64.11 | 66.24 | 63.83 | 48.01 | 100.00 | 35.51 | 85.98 | 30.87 | 85.98
crime | 80.95 | 79.45 | 80.98 | 79.43 | 86.22 | 100.00 | 31.83 | 75.19 | 32.08 | 75.19
german | 76.50 | 72.00 | 63.62 | 52.54 | 91.50 | 100.00 | 92.00 | 99.50 | 90.00 | 99.50
health | 85.29 | 77.87 | 83.27 | 73.59 | 7.84 | 99.99 | 47.66 | 97.04 | 2.91 | 97.03
Average | 78.41 | 74.85 | 72.88 | 66.25 | 65.06 | 100.00 | 58.49 | 90.58 | 46.67 | 90.58
Table 2: RF and FATT comparison.
Table 2 shows a comparison between RFs and FATTs. We show accuracy, balanced
accuracy and individual fairness with respect to the noise, cat, and noise-cat
similarity relations as computed on the test sets $T$. As expected, FATT trees
are slightly less accurate than RFs ($3.6\%$ on average, which also
reflects on balanced accuracy) but outperform them in every fairness test. On
average, the fairness increment ranges from $+35\%$ to $+45\%$ across
different similarity relations. Table 3 shows the comparison for the
conditional-attribute similarity, which applies to adult and german datasets
only. Here, the average fairness increase of FATT models is $+8.5\%$.
| Individual Fairness $\operatorname*{\mathit{fair}}_{T}$ %
---|---
Dataset | RF | FATT
adult | 84.75 | 94.12
german | 91.50 | 99.50
Table 3: Comparison for conditional-attribute.
Figure 1: Accuracy (top) and Fairness (bottom).
Fig. 1 shows the distribution of accuracy and individual fairness for FATT
trees over 1000 runs of the FATT learning algorithm. This plot is for fairness
with respect to noise-cat similarity, as this is the most challenging relation
to train for (this is a consequence of (3)). We can observe a stable behaviour
for accuracy, with $\approx 50\%$ of the observations lying within one
percentile from the median. The results for fairness are analogous, although
for compas we report a higher variance of the distribution, where the lowest
observed fairness percentage is $\approx 10\%$ higher than the corresponding
one for RFs. We claim that this may depend on the high number of features in
the dataset, which makes fair training a challenging task.
| Model size | Avg. verification time per sample (ms)
---|---|---
| cat | noise | noise-cat
Dataset | RF | FATT | RF | FATT | RF | FATT | RF | FATT
adult | 1427 | 43 | 0.03 | 0.02 | 0.03 | 0.02 | 0.03 | 0.02
compas | 147219 | 75 | 0.36 | 0.07 | 0.47 | 0.07 | 0.61 | 0.07
crime | 14148 | 11 | 0.12 | 0.07 | 2025.13 | 0.07 | 2028.47 | 0.07
german | 5743 | 2 | 0.06 | 0.03 | 0.06 | 0.02 | 0.07 | 0.03
health | 2558676 | 84 | 1.40 | 0.06 | 0.91 | 0.05 | 3.10 | 0.06
Table 4: Model sizes and verification times.
Table 4 compares the size of RF and FATT models, defined as total number of
leaves. It turns out that FATT tree models are orders of magnitude smaller
and, thus, more interpretable than RFs (while having comparable accuracy and
significantly enhanced fairness). Let us also remark that the average
verification time per sample for our FATT models is always less than $0.1$
milliseconds.
| FATT | Natural CART | Hinted CART
---|---|---|---
Dataset | Acc. % | Fair. % | Size | Acc. % | Fair. % | Size | Acc. % | Fair. % | Size
adult | 80.84 | 95.21 | 43 | 85.32 | 77.56 | 270 | 84.77 | 87.46 | 47
compas | 64.11 | 85.98 | 75 | 65.91 | 22.25 | 56 | 65.91 | 22.25 | 56
crime | 79.45 | 75.19 | 11 | 77.69 | 24.31 | 48 | 77.44 | 60.65 | 8
german | 72.00 | 99.50 | 2 | 75.50 | 57.50 | 115 | 73.50 | 86.00 | 4
health | 77.87 | 97.03 | 84 | 83.85 | 79.98 | 2371 | 82.25 | 93.64 | 100
Average | 74.85 | 90.58 | 43 | 77.65 | 52.32 | 572 | 76.77 | 70.00 | 43
Table 5: Decision trees comparison.
Finally, Table 5 compares FATT models with natural CART trees in terms of
accuracy, size, and fairness with respect to the noise-cat similarity. While
CARTs are approximately $3\%$ more accurate than FATT trees on average, they are
roughly half as fair and more than ten times larger.
It is well known that decision trees often overfit (Bramer 2007) due to their
high number of leaves, thus yielding unstable/unfair models. Post-training
techniques such as tree pruning are often used to mitigate overfitting (Kearns
and Mansour 1998), although they are applied only after a tree has been
fully trained, so pruning often brings limited benefit. As a byproduct of
our approach, we trained a set of CART trees, denoted Hinted CART in Table
5, which exploit hyper-parameters as “hinted” by FATT training. In
particular, in this “hinted” learning of CART trees, the maximum tree depth
and the minimum number of samples per leaf are obtained as tree depth and
minimum number of samples of our best FATT models. Interestingly, the results
in Table 5 show that these “hinted” decision trees have roughly the same size
of our FATT trees, are approximately $20\%$ more fair than natural CART trees
and just $1\%$ less accurate. Overall, it turns out that the general
performance of these “hinted” CARTs is halfway between natural CARTs and
FATTs, both in terms of accuracy and fairness, while having the same
compactness of FATT models.
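Extracting the hints from a trained FATT tree can be sketched as follows (assumed representation with sample counts stored on leaves: `('leaf', label, n_samples)` or `('split', i, k, left, right)`); the resulting values would then be passed to a standard CART learner, e.g. as `max_depth` and `min_samples_leaf` in scikit-learn:

```python
def depth(node):
    """Depth of the tree (a single leaf has depth 0)."""
    if node[0] == 'leaf':
        return 0
    return 1 + max(depth(node[3]), depth(node[4]))

def min_leaf_samples(node):
    """Smallest number of training samples on any leaf."""
    if node[0] == 'leaf':
        return node[2]
    return min(min_leaf_samples(node[3]), min_leaf_samples(node[4]))

def cart_hints(fatt_tree):
    """Hyper-parameters 'hinted' by a trained FATT tree."""
    return {'max_depth': depth(fatt_tree),
            'min_samples_leaf': min_leaf_samples(fatt_tree)}
```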
## 7 Conclusion
We believe that this work contributes to pushing forward the use of formal
verification methods in decision tree learning. In particular, a well-known
program analysis technique, abstract interpretation, proves successful
for training and verifying decision tree classifiers that are both accurate
and fair, improve on state-of-the-art CART and random forest
models, and are much more compact and interpretable. We also showed how
information from our FATT trees can be exploited to tune the natural training
process of decision trees. As future work we plan to extend further our
fairness analysis by considering alternative fairness definitions, such as
group or statistical fairness.
## References
* Aghaei, Azizi, and Vayanos (2019) Aghaei, S.; Azizi, M. J.; and Vayanos, P. 2019. Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making. In _Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019_ , 1418–1426. AAAI Press. doi:10.1609/aaai.v33i01.33011418. URL https://doi.org/10.1609/aaai.v33i01.33011418.
* Andriushchenko and Hein (2019) Andriushchenko, M.; and Hein, M. 2019. Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks. In _Proc. 33rd Annual Conference on Neural Information Processing Systems (NeurIPS 2019)_.
* Angwin et al. (2016) Angwin, J.; Larson, J.; Mattu, S.; and Kirchner, L. 2016. Machine Bias. _ProPublica, May_ 23: 2016.
* Barocas and Selbst (2016) Barocas, S.; and Selbst, A. D. 2016. Big Data’s Disparate Impact. _California Law Review_ 104: 671.
* Bertsimas and Dunn (2017) Bertsimas, D.; and Dunn, J. 2017. Optimal classification trees. _Mach. Learn._ 106(7): 1039–1082. URL http://dblp.uni-trier.de/db/journals/ml/ml106.html#BertsimasD17.
* Bramer (2007) Bramer, M. 2007. Avoiding overfitting of decision trees. _Principles of data mining_ 119–134.
* Breiman (2001) Breiman, L. 2001. Random Forests. _Machine Learning_ 45(1): 5–32. doi:10.1023/A:1010933404324. URL https://doi.org/10.1023/A:1010933404324.
* Breiman et al. (1984) Breiman, L.; Friedman, J. H.; Olshen, R. A.; and Stone, C. J. 1984. _Classification and Regression Trees_. Wadsworth. ISBN 0-534-98053-8.
* Calzavara, Lucchese, and Tolomei (2019) Calzavara, S.; Lucchese, C.; and Tolomei, G. 2019. Adversarial Training of Gradient-Boosted Decision Trees. In _Proc. 28th ACM International Conference on Information and Knowledge Management (CIKM 2019)_ , 2429–2432. ISBN 978-1-4503-6976-3. doi:10.1145/3357384.3358149. URL http://doi.acm.org/10.1145/3357384.3358149.
* Calzavara et al. (2020) Calzavara, S.; Lucchese, C.; Tolomei, G.; Abebe, S. A.; and Orlando, S. 2020. TREANT: training evasion-aware decision trees. _Data Mining and Knowledge Discovery_ doi:10.1007/s10618-020-00694-9. URL https://doi.org/10.1007/s10618-020-00694-9.
* Carlini and Wagner (2017) Carlini, N.; and Wagner, D. A. 2017. Towards Evaluating the Robustness of Neural Networks. In _Proc. of 38th IEEE Symposium on Security and Privacy (S & P 2017)_, 39–57. doi:10.1109/SP.2017.49. URL https://doi.org/10.1109/SP.2017.49.
* Chen et al. (2019) Chen, H.; Zhang, H.; Boning, D. S.; and Hsieh, C. 2019. Robust Decision Trees Against Adversarial Examples. In _Proc. 36th Int. Conf. on Machine Learning, (ICML 2019)_ , 1122–1131. URL http://proceedings.mlr.press/v97/chen19m.html.
* Chouldechova (2017) Chouldechova, A. 2017. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. _Big Data_ 5(2): 153–163. doi:10.1089/big.2016.0047. URL https://doi.org/10.1089/big.2016.0047.
* Cousot and Cousot (1977) Cousot, P.; and Cousot, R. 1977. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In _Proc. 4th ACM Symposium on Principles of Programming Languages (POPL 1977)_ , 238–252. doi:10.1145/512950.512973. URL http://doi.acm.org/10.1145/512950.512973.
* Dua and Graff (2017) Dua, D.; and Graff, C. 2017. UCI machine learning repository.
* Dwork et al. (2012) Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. 2012. Fairness Through Awareness. In _Proc. 3rd Innovations in Theoretical Computer Science Conference_ , 214–226.
* Friedman (2001) Friedman, J. H. 2001. Greedy Function Approximation: A Gradient Boosting Machine. _Annals of statistics_ 1189–1232.
* Goodfellow, McDaniel, and Papernot (2018) Goodfellow, I.; McDaniel, P.; and Papernot, N. 2018. Making Machine Learning Robust Against Adversarial Inputs. _Commun. ACM_ 61(7): 56–66. ISSN 0001-0782. doi:10.1145/3134599. URL http://doi.acm.org/10.1145/3134599.
* Grari et al. (2020) Grari, V.; Ruf, B.; Lamprier, S.; and Detyniecki, M. 2020. Achieving Fairness with Decision Trees: An Adversarial Approach. _Data Sci. Eng._ 5(2): 99–110. doi:10.1007/s41019-020-00124-2. URL https://doi.org/10.1007/s41019-020-00124-2.
* Hardt, Price, and Srebro (2016) Hardt, M.; Price, E.; and Srebro, N. 2016. Equality of Opportunity in Supervised Learning. In _Proc. 30th Annual Conference on Neural Information Processing Systems (NeurIPS 2016)_ , 3315–3323. URL http://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning.
* Holland (1984) Holland, J. H. 1984. Genetic algorithms and adaptation. In _Adaptive Control of Ill-Defined Systems_ , 317–333. Springer.
* Kantchelian, Tygar, and Joseph (2016) Kantchelian, A.; Tygar, J. D.; and Joseph, A. D. 2016. Evasion and Hardening of Tree Ensemble Classifiers. In _Proc. 33rd International Conference on Machine Learning (ICML 2016)_ , 2387–2396. URL http://dl.acm.org/citation.cfm?id=3045390.3045642.
* Kearns and Mansour (1998) Kearns, M. J.; and Mansour, Y. 1998. A Fast, Bottom-Up Decision Tree Pruning Algorithm with Near-Optimal Generalization. In _Proceedings of the Fifteenth International Conference on Machine Learning (ICML 1998)_ , 269–277.
* Khandani, Kim, and Lo (2010) Khandani, A. E.; Kim, A. J.; and Lo, A. W. 2010. Consumer Credit-Risk Models via Machine-Learning Algorithms. _Journal of Banking & Finance_ 34(11): 2767–2787. doi:https://doi.org/10.1016/j.jbankfin.2010.06.001.
* Raff, Sylvester, and Mills (2018) Raff, E.; Sylvester, J.; and Mills, S. 2018. Fair Forests: Regularized Tree Induction to Minimize Model Bias. In _Proc. 1st AAAI/ACM Conference on AI, Ethics, and Society (AIES 2018)_ , 243–250. doi:10.1145/3278721.3278742. URL https://doi.org/10.1145/3278721.3278742.
* Ranzato and Zanella (2020a) Ranzato, F.; and Zanella, M. 2020a. Abstract Interpretation of Decision Tree Ensemble Classifiers. In _Proc. 34th AAAI Conference on Artificial Intelligence (AAAI 2020), Github: https://github.com/abstract-machine-learning/silva_, 5478–5486. URL https://aaai.org/ojs/index.php/AAAI/article/view/5998.
* Ranzato and Zanella (2020b) Ranzato, F.; and Zanella, M. 2020b. Genetic Adversarial Training of Decision Trees. _arXiv:2012.11352, Github: https://github.com/abstract-machine-learning/meta-silvae_ .
* Rival and Yi (2020) Rival, X.; and Yi, K. 2020. _Introduction to Static Analysis: An Abstract Interpretation Perspective_. The MIT Press.
* Roh et al. (2020) Roh, Y.; Lee, K.; Whang, S.; and Suh, C. 2020. FR-Train: A Mutual Information-Based Approach to Fair and Robust Training. In _Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event_ , volume 119 of _Proceedings of Machine Learning Research_ , 8147–8157. PMLR. URL http://proceedings.mlr.press/v119/roh20a.html.
* Ruoss et al. (2020) Ruoss, A.; Balunovic, M.; Fischer, M.; and Vechev, M. 2020. Learning Certified Individually Fair Representations. In _Proc. 34th Annual Conference on Advances in Neural Information Processing Systems (NeurIPS 2020)_.
* Schumann et al. (2020) Schumann, C.; Foster, J. S.; Mattei, N.; and Dickerson, J. P. 2020. We Need Fairness and Explainability in Algorithmic Hiring. In _Proc. 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020)_ , 1716–1720. URL https://dl.acm.org/doi/abs/10.5555/3398761.3398960.
* Srinivas and Patnaik (1994) Srinivas, M.; and Patnaik, L. M. 1994. Genetic algorithms: a survey. _Computer_ 27(6): 17–26.
* Urban et al. (2020) Urban, C.; Christakis, M.; Wüstholz, V.; and Zhang, F. 2020. Perfectly Parallel Fairness Certification of Neural Networks. _Proceedings of the ACM on Programming Languages_ 4(OOPSLA): 185:1–185:30.
* Yurochkin, Bower, and Sun (2020) Yurochkin, M.; Bower, A.; and Sun, Y. 2020. Training individually fair ML models with sensitive subspace robustness. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net. URL https://openreview.net/forum?id=B1gdkxHFDH.
* Zafar et al. (2017) Zafar, M. B.; Valera, I.; Gomez-Rodriguez, M.; and Gummadi, K. P. 2017. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. In _Proc. 26th International Conference on World Wide Web (WWW 2017)_ , 1171–1180. doi:10.1145/3038912.3052660. URL https://doi.org/10.1145/3038912.3052660.
# Parametric simulation studies on the wave propagation of solar radio
emission:
the source size, duration, and position
PeiJin Zhang CAS Key Laboratory of Geospace Environment, School of Earth and
Space Sciences,
University of Science and Technology of China (USTC), Hefei, Anhui 230026,
China CAS Center for the Excellence in Comparative Planetology, USTC, Hefei,
Anhui 230026, China ChuanBing Wang CAS Key Laboratory of Geospace
Environment, School of Earth and Space Sciences,
University of Science and Technology of China (USTC), Hefei, Anhui 230026,
China CAS Center for the Excellence in Comparative Planetology, USTC, Hefei,
Anhui 230026, China Eduard P. Kontar School of Physics and Astronomy,
University of Glasgow, Glasgow G12 8QQ, UK
(Received October 19, 2020; Accepted xxx)
###### Abstract
The observed features of the radio sources indicate complex propagation
effects embedded in the waves of solar radio bursts. In this work, we perform
ray-tracing simulations on radio wave transport in the corona and
interplanetary region with anisotropic electron density fluctuations. For the
first time, the variation of the apparent source size, burst duration, and
source position of both fundamental emission and harmonic emission at
frequency 35 MHz are simulated as functions of the anisotropy parameter
$\alpha$ and the angular scattering rate coefficient
$\eta=\epsilon^{2}/h_{0}$, where $\epsilon^{2}={\langle\delta
n^{2}\rangle}/{n^{2}}$ is the density fluctuation level and $h_{0}$ is its
correlation length near the wave exciting site. It is found that isotropic
fluctuations produce a much larger decay time than a highly anisotropic
fluctuation for fundamental emission. By comparing the observed duration and
source size with the simulation results in the parameter space, we can
estimate the scattering coefficient and the anisotropy parameter
$\eta=8.9\times 10^{-5}\,\mathrm{km^{-1}}$ and $\alpha=0.719$ with point pulse
source assumption. Position offsets due to wave scattering and refraction can
produce the co-spatial of fundamental and harmonic waves in observations of
some type III radio bursts. The visual speed due to the wave propagation
effect can reach 1.5 $c$ for $\eta=2.4\times 10^{-4}\,\mathrm{km^{-1}}$ and
$\alpha=0.2$ for fundamental emission in the sky plane, accompanying with
large expansion rate of the source size. The visual speed direction is mostly
identical to the offset direction, thus, for the observation aiming at
obtaining the source position, the source centroid at the starting point is
closer to the wave excitation point.
solar radio burst — source size and position — wave propagation effects
## 1 Introduction
Imaging and spectroscopy observations of solar radio bursts can provide information on the non-thermal electrons associated with transient energy release in solar active regions, as well as on parameters of the background plasma. For example, McCauley et al. (2018) probed the background density of the solar corona with interferometric imaging of type III radio bursts in the frequency range of 80-240 MHz. High-cadence radio imaging spectroscopy shows evidence for particle acceleration by a solar flare termination shock (Chen et al., 2015; Yu et al., 2020). Combined observations of radio imaging and extreme ultraviolet/white-light imaging indicate that particle acceleration occurs at the flank of the coronal mass ejection shock (Chrysaphi et al., 2018; Morosan et al., 2019; Chen et al., 2014). Imaging spectroscopy with the LOw Frequency Array (LOFAR) reveals that the velocity dispersion of the electron beams is a key factor in the duration of type III radio bursts (Zhang et al., 2019).
The coronal plasma is an inhomogeneous refractive medium for solar radio waves, so refraction and scattering can deform the observed radio source, causing expansion of the source size, offset of the visual source position from the wave generation position (Wild et al., 1959; Kontar et al., 2017; Bisoi et al., 2018), and duration broadening in the dynamic spectrum (Zhang et al., 2019). Generally, refraction causes an inward offset of the visual source from the position of wave excitation (Mann et al., 2018), while the visual source can be shifted outward if scattering is considered (Stewart, 1972, 1976; Arzner & Magun, 1999; Kontar et al., 2019). For solar radio bursts with fundamental-harmonic (F-H) pair structure, the H-emission is generated at a greater height than the F-emission of the same frequency due to the plasma emission mechanism, while imaging results indicate that the F and H sources can have similar positions for some bursts (Stewart, 1972; Dulk & Suzuki, 1980). This seems to indicate that the waves of F-emission and H-emission have experienced different amounts of refraction and scattering.
For a given frequency, the Langmuir wave excitation responsible for type III emission is limited to the intersection of the guiding magnetic field line of the electrons with the layer whose local plasma frequency is close to $1$ or $1/2$ times the radio wave frequency. The structure of the magnetic field and the distribution of the background density are stable within the time scale of the electron beam transit, so the excitation site at a fixed frequency is stable in both size and position. The visual velocity and expansion rate of the observed sources, however, can be very large. Kontar et al. (2017) observed a radial speed of 1/4 $c$ with LOFAR beamformed observations of the fundamental part of a type IIIb stria, where $c$ is the speed of light. Kuznetsov et al. (2020) observed 1/3 $c$ at 32 MHz in a drift pair radio burst. Interferometric imaging with the remote baselines of LOFAR finds a visual speed of about 4 $c$ for the fundamental part of a type III-IIIb pair event source at 26 MHz (Zhang et al., 2020). The observed source size of the radio burst is also considerably larger than the size estimated from the bandwidth of the striae (Kontar et al., 2017; Zhang et al., 2020). These fast variations of the source are believed to be due to wave propagation effects.
Ray-tracing simulation is an effective method to investigate wave propagation effects; it can help us interpret imaging and spectroscopy observations and diagnose the properties of the solar corona from the observational results (Steinberg et al., 1971; Riddle, 1974). The ray-tracing method was introduced to solar radio studies by Fokker (1965) to investigate the source size of type I radio bursts due to scattering. Bougeret & Steinberg (1977) indicated that over-dense fiber structures in the corona must be considered to understand the position of the radio source, the moving bursts, and the great variety of space-time shapes observed within the same storm center, and Robinson (1983) introduced fiber inhomogeneity into ray-tracing simulations and found that the inhomogeneity of the medium can account for the source displacement. It has been pointed out that anisotropic scattering of the wave due to the statistical inhomogeneity is essential to interpret the observed radio source properties (Arzner & Magun, 1999; Kontar et al., 2019; Bian et al., 2019). Recently, Kontar et al. (2019) developed a model that includes anisotropic scattering of the wave in the ray-tracing process. It was found that the duration of type III radio bursts decreases with the anisotropy level of the background, and that anisotropic density fluctuations are required to account for the source sizes and decay times simultaneously. The theory of anisotropic scattering has been successfully used to interpret the source properties of drift pair bursts (Kuznetsov & Kontar, 2019; Kuznetsov et al., 2020).
These case studies of observations with corresponding simulations have shown that ray tracing is an effective method to analyze wave propagation effects, yet the detailed dependence of the observed source characteristics on the density fluctuation properties of the medium remains unclear. A further understanding of the physical processes behind the visual source variation requires a more detailed simulation-observation comparison.
In this work, for the first time, we performed a large set of Monte Carlo ray-tracing simulations of radio wave propagation to explore the radio source properties across a parameter space of background plasma parameters. The paper is arranged as follows: in Section 2, the model used in the simulation is introduced. Section 3 presents the simulation results for the radio source size, duration, and position of fundamental waves and harmonic waves at 35 MHz. The results are discussed and compared in Section 4, and a brief summary is given in Section 5.
## 2 Simulation model
We implemented a three-dimensional (3D) radio wave ray-tracing simulation for a point pulse source on the basis of the theory and algorithm proposed by Kontar et al. (2019). The simulation solves the Hamilton ray equations and the nonlinear Langevin equation corresponding to the Fokker-Planck equation, which can be expressed as
$\frac{\mathrm{d}r_{i}}{\mathrm{d}t}=\frac{\partial\omega}{\partial k_{i}}=\frac{c^{2}}{\omega}k_{i},$ (1)
$\frac{\mathrm{d}k_{i}}{\mathrm{d}t}=-\frac{\partial\omega}{\partial r_{i}}+\frac{\partial D_{ij}}{\partial k_{j}}+B_{ij}\xi_{j},$ (2)
where $r_{i}\,(i=x,y,z)$ is the position vector of the photons and $k_{i}$ is the wave vector of the radio wave in Cartesian coordinates. $\omega$ is the angular frequency of the wave, satisfying the dispersion relation of unmagnetized plasma:
$\omega^{2}=\omega_{pe}^{2}+c^{2}k^{2}.$ (3)
Here $\omega_{pe}$ is the local plasma frequency expressed as
$\omega_{pe}(\bm{r})=\sqrt{{4\pi e^{2}n(\bm{r})}/{m_{e}}}$ (where $e$ and
$m_{e}$ are respectively the electron charge and mass, and $n$ is the plasma
density). $D_{ij}$ is the diffusion tensor appropriate to scattering, which is
given by:
$D_{ij}=\frac{\pi\omega_{pe}^{4}}{4\omega c^{2}}\int q_{i}q_{j}S(\bm{q})\,\delta(\bm{q}\cdot\bm{k})\,\frac{\mathrm{d}^{3}q}{(2\pi)^{3}},$ (4)
where $\bm{q}$ is the wave vector of the density fluctuations and $S(\bm{q})$ is the spectrum of the density fluctuations, normalized to the relative density fluctuation variance:
$\epsilon^{2}=\frac{\langle\delta n^{2}\rangle}{n^{2}}=\int S(\bm{q})\frac{\mathrm{d}^{3}q}{(2\pi)^{3}},$
where $n$ is the local average plasma density, taken to be a slowly varying function of position. $B_{ij}$ is a positive semi-definite matrix satisfying $D_{ij}=(1/2)B_{im}B_{jm}$ (i.e., $\mathsf{D}=\frac{1}{2}\mathsf{B}\mathsf{B}^{T}$). $\bm{\xi}$ is a random vector following a Gaussian distribution with zero mean and unit variance.
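The coupled ray and Langevin equations above can be advanced numerically with an Euler-Maruyama step. The sketch below is illustrative rather than the authors' code: the function name and the way the drift terms are passed in are our own assumptions, and the Wiener increment supplies the $\sqrt{\mathrm{d}t}$ scaling of the stochastic term:

```python
import numpy as np

def langevin_step(r, k, omega, c, grad_omega_r, dD_dk, B, dt, rng):
    """One Euler-Maruyama step of the ray equations (1)-(2).

    r, k         : 3-vectors, photon position and wave vector
    grad_omega_r : 3-vector, d(omega)/d(r_i) evaluated at (r, k)
    dD_dk        : 3-vector, drift term from the scattering diffusion tensor
    B            : 3x3 matrix with D = (1/2) B B^T
    """
    xi = rng.standard_normal(3)                 # Gaussian noise, zero mean, unit variance
    r_new = r + (c**2 / omega) * k * dt         # Eq. (1): dr/dt = (c^2 / omega) k
    k_new = k + (-grad_omega_r + dD_dk) * dt + (B @ xi) * np.sqrt(dt)  # Eq. (2)
    return r_new, k_new
```

In a scattering-free region (zero drift and zero $B$), the step reduces to free straight-line propagation of the photon.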
To consider the scattering in a medium with anisotropic density fluctuation,
an anisotropy tensor
$\mathsf{A}=\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&\alpha^{-1}\end{array}\right)$ (5)
is introduced. Here $\alpha$ is the anisotropy parameter, representing the ratio of the density fluctuation wavelength along the direction of anisotropy to that in the perpendicular direction. In the model of this work, the density fluctuations are mostly in the direction perpendicular to the solar radius when $\alpha<1$, and vice versa.
From Equation 4, one can get the angular scattering rate per unit distance for
a classic Gaussian spectrum of density fluctuations (Steinberg et al., 1971;
Chrysaphi et al., 2018; Kontar et al., 2019),
$\frac{\mathrm{d}\langle\theta^{2}\rangle}{\mathrm{d}x}=\frac{\sqrt{\pi}}{2}\frac{\epsilon^{2}}{h}\frac{\omega^{4}_{pe}}{(\omega^{2}-\omega_{pe}^{2})^{2}},$ (6)
where $h$ is the correlation length of the density fluctuations. In this work, the density fluctuation is characterized by a power-law distribution with a cutoff at the inner scale $l_{i}=(r/R_{s})\,[\mathrm{km}]$ and outer scale $l_{o}=0.25R_{s}(r/R_{s})^{0.82}$ (Coles & Harmon, 1989; Wohlmuth et al., 2001; Krupar et al., 2018), where $R_{s}$ is the solar radius and $r$ is the heliocentric distance; the inner and outer scales represent the smallest and largest wavelengths of the density fluctuations, respectively. The angular scattering rate per unit length derived from the power-law fluctuations (Equation (67) in Kontar et al. (2019)) can be expressed in a form similar to Equation 6 by replacing $h$ with an equivalent scale length $h_{eq}$ given by
$h\equiv h_{eq}=l_{o}^{2/3}l_{i}^{1/3}/\pi^{3/2},$ (7)
where $h$ is a slowly varying function of heliocentric distance $r$.
For a given wave frequency, the angular scattering rate depends on $\epsilon$, $h$, and the frequency ratio $\omega_{pe}/\omega$. As $h^{-1}$ and $\omega_{pe}/\omega$ in Equation 6 decrease with $r$, the scattering rate drops quickly as the radio waves propagate outward from their original site. As a result, the scattering strength experienced by the waves is mainly determined by the coefficient $\eta$ of the angular scattering rate per unit distance in Equation 6, defined by
$\eta=\epsilon^{2}/h_{0},$ (8)
where $h_{0}=h(r_{0})$ is the scale length of density fluctuations near the wave exciting site at $r_{0}$. In this work, the function $h(r)$ is given, so the tuning parameter for the scattering rate in different simulation cases is the density fluctuation level $\epsilon$, but we reiterate that $\eta$ is the key parameter.
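Under the scale profiles quoted above, $h_{0}$ and $\eta$ can be evaluated directly. The sketch below (assuming a solar radius of $6.96\times 10^{5}$ km) reproduces the order of magnitude of the $h_{0}\approx 860$ km and $\eta\approx 9\times 10^{-5}\,\mathrm{km^{-1}}$ values used later; exact numbers depend on details of the density model:

```python
import numpy as np

R_S_KM = 6.96e5  # solar radius in km (assumed value)

def h_eq(r_over_Rs):
    """Equivalent correlation length h (Eq. 7) in km at heliocentric distance r,
    using the inner/outer scales quoted in the text:
    l_i = (r/R_s) km, l_o = 0.25 R_s (r/R_s)^0.82."""
    l_i = r_over_Rs
    l_o = 0.25 * R_S_KM * r_over_Rs ** 0.82
    return l_o ** (2.0 / 3.0) * l_i ** (1.0 / 3.0) / np.pi ** 1.5

def eta(epsilon, r0_over_Rs):
    """Angular scattering rate coefficient eta = epsilon^2 / h0 (Eq. 8), km^-1."""
    return epsilon ** 2 / h_eq(r0_over_Rs)
```

For the fundamental-emission starting point $r_{0}=1.750R_{s}$ this gives $h_{0}$ of order $10^{3}$ km, and $\epsilon=0.277$ yields $\eta$ of order $10^{-4}\,\mathrm{km^{-1}}$, consistent with the values in Section 3.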
The intensity decay $I/I_{0}$ due to Coulomb collisional absorption is described by the optical depth integrated along the ray path of each photon:
$I/I_{0}=e^{-\tau_{a}},$ (9)
$\tau_{a}=\int\gamma(\bm{r}(t))\,\mathrm{d}t,$ (10)
$\gamma=\frac{4}{3}\sqrt{\frac{2}{\pi}}\frac{e^{4}n_{e}\ln\Lambda}{m_{e}v_{Te}^{3}}\frac{\omega_{pe}^{2}}{\omega^{2}},$ (11)
where $v_{Te}$ is the thermal speed and a constant Coulomb logarithm $\ln\Lambda\simeq 20$ is assumed.
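The optical-depth integral of Equations 9-10 is evaluated numerically along each sampled ray; a minimal sketch using the trapezoidal rule (the function interface is our own assumption):

```python
import numpy as np

def absorption_factor(gamma_samples, t_samples):
    """I/I0 = exp(-tau_a) with tau_a = integral of gamma(r(t)) dt (Eqs. 9-10),
    with the optical depth evaluated by the trapezoidal rule over the
    absorption coefficient sampled at the ray's time steps."""
    tau_a = np.sum(0.5 * (gamma_samples[1:] + gamma_samples[:-1])
                   * np.diff(t_samples))
    return np.exp(-tau_a)
```

For a constant $\gamma$ the result reduces to $e^{-\gamma\,\Delta t}$, which gives a quick sanity check of the integration.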
## 3 Simulation results
With the anisotropic scattering model, we can perform ray-tracing simulations for photons launched at a given position. After a sufficient number of steps, when the majority of the photons have reached a heliocentric distance of 0.95 AU, the final positions ($\bm{r}$) and wave vectors ($\bm{k}$) of the photons near the direction of observation are collected to reconstruct the radio image. The observed position of each photon in the sky plane is estimated by back-tracking the wave vector. The apparent source intensity map $I(x,y)$ is the weighted photon number density distribution in the sky plane, and the spread of photon arrival times determines the temporal variation of the source.
In the simulation, the background density function $n(r)$ is assumed to be spherically symmetric, the same as the model used in Kontar et al. (2019). The source of radio emission is considered to be a point pulse source: all photons are launched from the starting point at the same time. The initial wave vectors are randomly distributed in outward directions ($\bm{r}\cdot\bm{k}>0$), i.e. a burst source with isotropic emission is considered.
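The back-tracking and weighted-histogram image reconstruction described above can be sketched as follows; the observer axis convention (looking along $z$, sky plane at $z=0$) and the function interface are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reconstruct_image(r, k, w, bins=64, extent=2.0):
    """Project collected photons onto the sky (x, y) plane by back-tracking
    each final position along its wave vector to the plane z = 0, then
    accumulate a weighted 2-D histogram as the intensity map I(x, y).

    r, k : (N, 3) arrays of final positions and wave vectors (k_z > 0)
    w    : (N,) photon weights (e.g. collisional-absorption factors)
    """
    s = r[:, 2] / k[:, 2]          # parameter to back-track to the z = 0 plane
    x = r[:, 0] - s * k[:, 0]      # apparent x position in the sky plane
    y = r[:, 1] - s * k[:, 1]      # apparent y position
    img, xe, ye = np.histogram2d(
        x, y, bins=bins,
        range=[[-extent, extent], [-extent, extent]],
        weights=w)
    return img, xe, ye
```

The weighted photon count per bin plays the role of the apparent flux density, and time-resolved frames follow by histogramming only the photons within a given arrival-time segment.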
Figure 1: The simulated source intensity image in the sky plane for fundamental emission, reconstructed from all collected photons. The frequency of the wave is 35 MHz with $f_{0}/f_{pe}=1.1$, $\alpha=0.72$, $\epsilon=0.28$, and $\eta\simeq 9\times 10^{-5}\,\textup{km}^{-1}$. A simulation of $3\times 10^{5}$ photons is conducted. The black circle shows the outline of the solar disk; the blue line shows the FWHM of the reconstructed source. The green ‘+’ marks the starting point of the photons. The upper and right panels show the histograms of the flux intensity in the $x$ and $y$ directions.
As a case study, Figure 1 shows the flux intensity map reconstructed from all collected photons for fundamental emission, representing the time-integrated image of the observed source. Figure 2 shows the time-intensity profile and the temporal variations of the source position and size, which are obtained from photons with arrival times within the corresponding time segment. In this simulation case, the radio waves are fundamental waves initiated at the position angle $(\theta_{0},\phi_{0})=(20^{\circ},0^{\circ})$, where $\theta$ is the longitude and $\phi$ is the latitude. The density fluctuation level is $\epsilon=0.28$ and the anisotropy parameter is $\alpha=0.72$. The equivalent correlation length ($h_{0}$) at the starting point $r_{0}=1.750R_{s}$ is about 860 km, and the corresponding angular scattering rate coefficient near the wave generation site is $\eta\simeq 9\times 10^{-5}\,\textup{km}^{-1}$.
In Figure 1, the source centroid is offset by $0.11R_{s}$ in the $x$ direction from the starting point (marked by the green ‘+’ in the figure). The offset in the $y$ direction is negligible. This is because the centroid offset is in the radial direction due to the spherical symmetry of the density model used, which coincides with the $x$ direction in the sky plane for the case shown. The blue line in Figure 1 shows the full width at half maximum (FWHM) estimate of the reconstructed source. The statistics of the time duration, FWHM size, visual speed, and size expansion rate can be extracted from the reconstructed images and their variation.
Figure 2: Upper panel: the simulated time-intensity profile. The flux variation is fitted with a double Gaussian function, shown as a cyan line. The FWHM range of the fitted flux is marked as a gray shadow, and the peak of the fitted flux is marked as a thin black line in all panels. Middle panel: the position offset of the simulated source centroid from its original site, where red and blue represent X and Y in heliocentric coordinates. Lower panel: the FWHM size of the simulated source. The linear fits of the position offset and size for the rising and decay phases are shown in the middle and lower panels as orange and purple lines. The simulation parameters are the same as in Figure 1.
The FWHM of the time-intensity profile can be divided into two phases, namely the rising and decay phases. We fitted the flux variation with a double Gaussian function (Zhang et al., 2019):
$G(t)=A\exp\left(-\frac{(t-t_{0})^{2}}{\tau^{2}}\right),\qquad\tau=\begin{cases}\tau_{R},&t\leq t_{0}\\ \tau_{D},&t>t_{0}\end{cases}.$ (12)
The fitted result is shown as the cyan line in Figure 2. The peak and the FWHM of the flux are obtained from the fitted curve. In Figure 2, the FWHM range marked by the gray shadow is divided by a thin vertical line at the flux peak ($t=1.83$ s). The regime before the peak is the rising phase, and after it the decay phase. The durations of the rising and decay phases are $\tau_{R}=0.4$ and $\tau_{D}=2.0$ seconds for this case.
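A least-squares fit of Equation 12 to a simulated time-intensity profile can be sketched with `scipy.optimize.curve_fit`; the function names and initial-guess values below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(t, A, t0, tau_R, tau_D):
    """Eq. (12): Gaussian rise (tau_R) before the peak t0, Gaussian decay
    (tau_D) after it; continuous at t = t0."""
    tau = np.where(t <= t0, tau_R, tau_D)
    return A * np.exp(-((t - t0) ** 2) / tau ** 2)

def fit_flux_profile(t, flux, p0=(1.0, 1.5, 0.5, 1.5)):
    """Fit the simulated time-intensity profile; returns (A, t0, tau_R, tau_D)."""
    popt, _ = curve_fit(double_gaussian, t, flux, p0=p0)
    return popt
```

The decay time $\tau_{D}$ recovered this way is the quantity compared against the observed decay-time scaling in Section 3.1.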
The source offset measures the vector distance between the observed centroid and the starting point of the photons in the sky plane. In this case, the average offset is $0.105R_{s}$ during the rising phase and $0.115R_{s}$ during the decay phase; the offset in the $y$ direction is negligible (less than $0.002R_{s}$).
The source size is measured by the FWHM in the $x$ and $y$ directions: $\textup{FWHM}_{x,y}=2\sqrt{2\ln{2}}\,\sigma_{x,y}$, where $\sigma_{x,y}$ is the standard deviation of the distribution. In this case, the average FWHM in the $x$ direction is $0.513R_{s}$ $(0.137^{\circ})$ for the rising phase and $0.662R_{s}$ $(0.176^{\circ})$ for the decay phase. The average FWHM in the $y$ direction is $0.538R_{s}$ $(0.143^{\circ})$ for the rising phase and $0.680R_{s}$ $(0.181^{\circ})$ for the decay phase. The source FWHM in the $y$ direction is slightly larger than that in the $x$ direction.
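The FWHM measure can be computed from the intensity-weighted standard deviation of the photon positions along one sky-plane axis; a minimal sketch (the function name is our own):

```python
import numpy as np

def fwhm_size(x, w):
    """FWHM = 2 sqrt(2 ln 2) * sigma, with sigma the intensity-weighted
    standard deviation of photon positions x along one sky-plane axis."""
    mean = np.average(x, weights=w)
    sigma = np.sqrt(np.average((x - mean) ** 2, weights=w))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
```

For a Gaussian intensity profile this recovers the familiar $\textup{FWHM}\approx 2.355\,\sigma$.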
The speed and expansion rate of the source in the sky plane can be obtained from linear fits of the variation of the source position and size with time. From Figure 2 we can see that the source behaves differently in movement and expansion during the rising and decay phases; for this case, the moving speed and the expansion rate are both larger in the rising phase. The source size has a positive expansion rate in both the $x$ and $y$ directions.
In order to study the properties and variations of the source for different parameter sets, namely the density fluctuation level and the anisotropy scale, we run the above simulation process over the parameter space $\alpha\in[0.05,0.99]$ and $\epsilon\in[0.03,0.45]$. We uniformly select $36\times 36$ points in the parameter space. For each parameter set ($\epsilon,\alpha$), a simulation of $3\times 10^{5}$ photons is conducted. In each case, the frequency of the wave is assumed to be 35 MHz; the ratio between the wave frequency and the local plasma frequency at the starting point ($f_{0}/f_{pe}$) is 1.1 for fundamental emission and 2.0 for harmonic emission. The photons are launched at the center of the solar disk ($\theta_{0}=\phi_{0}=0$); for fundamental emission $r_{0}=1.750R_{s}$ and $h_{0}=860$ km, and for harmonic emission $r_{0}=2.104R_{s}$ and $h_{0}=1010$ km. The angular scattering rate coefficient $\eta$ ranges from $8.9\times 10^{-7}\,\textup{km}^{-1}$ to $2.4\times 10^{-4}\,\textup{km}^{-1}$ for $\epsilon\in[0.03,0.45]$. The statistical results for the source properties are presented in the following subsections.
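The parameter-space sweep can be organized as a simple grid loop; `run_case` below stands in for one full Monte Carlo simulation and is an assumed interface, not the authors' code:

```python
import numpy as np

# Grids matching the text: 36 x 36 uniformly spaced parameter sets
alphas = np.linspace(0.05, 0.99, 36)     # anisotropy parameter alpha
epsilons = np.linspace(0.03, 0.45, 36)   # density fluctuation level epsilon

def sweep(run_case):
    """Run one simulation per (epsilon, alpha) pair.  run_case(eps, alpha)
    returns a tuple of summary statistics, e.g. (FWHM size, decay time)."""
    return np.array([[run_case(eps, a) for a in alphas] for eps in epsilons])
```

Each entry of the resulting grid then supplies one point of the contour maps in Figures 3 and 4.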
### 3.1 Source size and duration
The source size and duration are the most common quantities that can be extracted from imaging and spectroscopy observations of solar radio bursts. In this subsection, the source size is measured as the intensity-weighted FWHM size over all collected photons in the simulation, as in Figure 1. The decay time is measured as the time for the intensity to decrease by a factor of $1/e$ from the peak, namely $\tau_{D}$.
The source size and decay time of the fundamental emission ($f_{0}/f_{pe}=1.1$) are shown in Figure 3. Within the parameter space, the source size varies from 0.2 to 0.95 solar radii and the decay time varies from about 0 to 7 seconds. The source size is largely determined by the fluctuation variance (or the scattering coefficient $\eta$); the contour lines in Figure 3a are nearly parallel to the $y$ axis in the range $\alpha>0.1$. The decay time is sensitive to both $\alpha$ and $\epsilon$ (or $\eta$). The decay time increases with the density fluctuation level and the anisotropy parameter, and thus decreases with the degree of anisotropy in the density fluctuations.
Figure 3: The source size and decay time from the simulation results for the fundamental emission. Purple lines in the left panel show the contours of source size, and black lines in the right panel show the contours of decay time for the simulation data. The bold lines mark the values $D_{\rm{FWHM}}=0.675R_{s}$ and $\tau_{D}=2.30\,\rm{s}$ for $f_{0}=35$ MHz in Equations 13 and 14. The green square marks the crossing point of the bold contour lines at $\epsilon=0.277$ and $\alpha=0.719$. The equivalent scale length of density fluctuation $h_{0}$ is about 860 km near the wave generation site.
For the harmonic emission with a starting frequency ratio of $f_{0}/f_{pe}=2.0$, as shown in Figure 4, the source size varies from 0.1 to 0.9 $R_{s}$ and the decay time varies from 0 to 0.7 seconds within the parameter space. Compared to the fundamental emission with $f_{0}/f_{pe}=1.1$, the source size of harmonic emission is slightly smaller, by about 0.1 $R_{s}$, while the decay time of harmonic emission is significantly smaller than that of the fundamental emission. Neither the source size nor the duration is sensitive to the fluctuation anisotropy for $\alpha>0.2$.
Figure 4: The source size and decay time of the simulation results for the
harmonic emission, the specs are the same as Figure 3, and $h_{0}=1010$ km.
The simulation results in the parameter space can be used to constrain the background plasma properties by comparison with the corresponding observations. According to previously fitted results from multiple observations of the source size and decay time of type III radio bursts (Kontar et al., 2019), the relationship between the observed source size [degree] and frequency [MHz] can be expressed as:
$D_{\rm{FWHM}}=(11.78\pm 0.06)\times f^{-0.98\pm 0.05}.$ (13)
The decay time [s] as a function of frequency [MHz] can be expressed as:
$\tau=(72.23\pm 0.05)\times f^{-0.97\pm 0.03}.$ (14)
The observed source size and decay time of the 35 MHz source are $0.18^{\circ}$ (0.675 $R_{s}$) and 2.30 seconds, respectively. Figure 3 shows the simulated source size and decay time for different density fluctuation variances and anisotropy parameters. The parameter set $\epsilon=0.277$ and $\alpha=0.719$ (marked as a green square in Figure 3) satisfies both the observational estimates of size and decay time at 35 MHz. The corresponding scattering rate coefficient for $\epsilon=0.277$ in the simulation is $\eta=8.9\times 10^{-5}\,\textup{km}^{-1}$. The source property variations for $\epsilon=0.28$ and $\alpha=0.72$ are shown in Figures 1 and 2; note that the starting point of the photons is located at $\theta_{0}=20^{\circ}$ in Figures 1 and 2 in order to display the source position offset.
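Evaluating the central values of Equations 13 and 14 at a given frequency is straightforward; the function names below are our own, and at 35 MHz the decay-time relation recovers the 2.30 s value quoted above:

```python
def observed_size_deg(f_mhz):
    """Eq. (13), central values: fitted FWHM source size [deg] vs frequency [MHz]."""
    return 11.78 * f_mhz ** -0.98

def observed_decay_time_s(f_mhz):
    """Eq. (14), central values: fitted decay time [s] vs frequency [MHz]."""
    return 72.23 * f_mhz ** -0.97
```

Both quantities decrease roughly inversely with frequency, so higher-frequency sources are expected to be smaller and shorter-lived.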
### 3.2 Position offset of the source
In Section 3.1, the starting point is set at the center of the solar disk, and the centroid of the reconstructed source is located near $(0,0)$. Thus, the offset of the source from the starting point is close to zero when the position angle of the starting point $\theta_{0}$ is 0. When the position angle of the starting point is away from zero, the offset is not negligible. Considering the spherical symmetry of the background parameters, the simulated observation for a given $\theta_{0}$ can be obtained by rotating the resulting wave vectors ($\bm{k}$) and positions ($\bm{r}$) of all photons by the reverse angle of $\theta_{0}$ and redoing the photon collection and image reconstruction. For simplicity and without loss of generality, we use $y_{0}=0$ for the starting point so that the simulated offset is mainly in the $x$ direction. We measure the offset in the $x$ direction as $\Delta x=x_{c}^{obs}-x_{0}$. The source centroid $x_{c}^{obs}$ is measured as the weighted average position along the $x$ axis over all collected photons, as in Figure 1.
Figure 5: The offset of the reconstructed source centroid from the starting point of the photons for fundamental emission. The left and right panels show the results for source locations at $\theta_{0}=30^{\circ}$ and $60^{\circ}$, respectively. In this figure, positive values represent outward offsets from the solar disk center, and negative values represent inward offsets. The equivalent scale length of density fluctuation $h_{0}$ is about 860 km near the wave generation site.
Figure 6: The offset of the reconstructed source centroid from the starting point of the photons for harmonic emission. The specs are the same as Figure 5, and $h_{0}=1010$ km.
Different density fluctuation variances and anisotropy scales result in different offsets. Figures 5 and 6 show the source offset ($\Delta x$) of the fundamental ($f_{0}/f_{pe}=1.1$) and harmonic ($f_{0}/f_{pe}=2.0$) emission in the parameter space $\epsilon\in[0.03,0.45]$ and $\alpha\in[0.05,0.99]$. For the fundamental emission, as shown in Figure 5, the offset is positive over most of the parameter space, meaning the direction of the offset is outward. In a small portion of the parameter space at $\epsilon<0.03$ with weak scattering, the offset is inward. The source offset $\Delta x$ increases with the density fluctuation variance $\epsilon$ for both starting position angles $\theta_{0}=30^{\circ}$ and $\theta_{0}=60^{\circ}$. The offset of the source with a starting position angle of $60^{\circ}$ is larger than that of $30^{\circ}$. The maximum offset is about $0.28\,R_{s}$ for $\theta_{0}=30^{\circ}$ and about $0.43\,R_{s}$ for $\theta_{0}=60^{\circ}$.
The source offset of harmonic emission differs considerably from that of the fundamental emission, as shown in Figure 6. Over considerable portions of the parameter space, the offset is inward, with $\Delta x<0$. An outward offset can exist when the density fluctuation level is high and the anisotropy degree is low, for example $\epsilon>0.3$ and $\alpha>0.3$. The offset for $\theta_{0}=30^{\circ}$ varies within $(-0.07,0.11)$ across the parameter space, and that for $\theta_{0}=60^{\circ}$ varies within $(-0.10,0.17)$. Comparing the fundamental and harmonic emission, the offset of the harmonic emission is much smaller than that of the fundamental emission.
Figure 7: The offset ratio ($\Delta x/x_{0}$) of the source centroid from the starting point of the photons for different starting position angles. The simulation parameter sets are labeled at the top of each panel.
For further study of the source offset, we selected four cases with parameters $\epsilon=0.102,0.354$ and $\alpha=0.157,0.802$. These four cases represent relatively high and low values of the angular scattering rate combined with relatively large and small degrees of anisotropy. Figure 7 shows the offset ratio ($\Delta x/x_{0}$) of the source centroid relative to the starting point. The four panels of Figure 7 show that, for all four parameter sets, the offset of the fundamental emission is more outward than that of the harmonic emission. For a relatively small anisotropy parameter $\alpha$ (high degree of anisotropy), the relative offset of both fundamental and harmonic emission decreases with the position angle $\theta_{0}$. For a relatively large anisotropy parameter $\alpha$ (low degree of anisotropy), the relative offset is stable when $\sin{\theta_{0}}<0.8$ and increases with position angle when $\sin{\theta_{0}}>0.8$. Comparing the upper and lower panels of Figure 7, one can see that the relative offset is larger when the density fluctuation (or angular scattering rate) is larger.
Assume that the fundamental and harmonic waves at the same given frequency are generated in the same angular direction from the solar center. The visual distance between the starting points of the fundamental and the harmonic emission is then $\Delta R\sin\theta_{0}$ in the sky plane, where $\Delta R$ is the height difference between the starting points of the fundamental and harmonic waves. The source offset due to refraction and scattering causes the observed visual distance to deviate from $\Delta R\sin\theta_{0}$. In this work, for a 35 MHz wave, the fundamental wave is generated at $R_{F}=1.750R_{s}$ (assuming $f_{0}/f_{pe}=1.1$) and the harmonic wave at $R_{H}=2.104R_{s}$ (assuming $f_{0}/f_{pe}=2.0$), so $\Delta R=0.354R_{s}$ for this case. Figure 8 shows the distance between the reconstructed source centroids of the fundamental and harmonic emission at 35 MHz. From Figure 8 we can see that the distance between the reconstructed sources is significantly smaller than the visual distance between the starting points of the two waves. In other words, the apparent sources of the fundamental and harmonic emission are much closer to each other than their true sources in the sky plane.
Figure 8: The distance between the reconstructed source centroids of the fundamental and harmonic emission (red lines), compared with the geometric distance between the starting points in the sky plane (black lines).
The influence of the background medium on the visual source position can be divided into two parts, namely refraction and scattering, marked as ‘refr’ and ‘scat’ in the following discussion. With the dispersion relation of unmagnetized cold plasma described in Equation 3, the direction of motion of a photon is aligned with its wave vector, as shown in Equation 1. Thus, the variation of the wave vector bends the ray path of the photon and eventually causes the offset of the visual position of the source centroid. The variation of the wave vector in Equation 2 can be split into these two parts accordingly:
$\frac{\mathrm{d}k_{i,\mathrm{refr}}}{\mathrm{d}t}=-\frac{\partial\omega}{\partial r_{i}},$ (15)
$\frac{\mathrm{d}k_{i,\mathrm{scat}}}{\mathrm{d}t}=\frac{\partial D_{ij}}{\partial k_{j}}+B_{ij}\xi_{j}.$ (16)
In the ray-tracing process, we can measure the cumulative change of the wave vector $k_{i}$ due to refraction and scattering by separately integrating the variation of the wave vector from these two factors for each photon:
$\Delta K_{i,\mathrm{refr}}=\sum\delta k_{i,\mathrm{refr}},$ (17)
$\Delta K_{i,\mathrm{scat}}=\sum\delta k_{i,\mathrm{scat}}.$ (18)
The average change of the wave vector over all photons collected near the observation site provides a qualitative estimate of the relative influence of scattering and refraction on the apparent source position. In the simulation, the reconstructed image shows that the source offset is mostly in the $x$ direction in the sky plane. Thus, we use $\Delta K_{x}^{l}$ to represent the total bending influence on the observed source for the $l$th photon. For the photons collected at each viewpoint, we calculate the average value $\overline{\Delta K_{x}}=\sum_{l=1}^{N}\Delta K_{x}^{l}/N$ as a measure of the influence of the propagation effects on the shift of the observed source position. Figure 9 shows the variation of $\overline{\Delta K_{x}}$ with the position angle of the starting point, where the solid and dashed lines represent the effects of refraction and scattering, respectively.
Figure 9: The average value $\overline{\Delta K_{x}}$ over all collected photons due to the influences of scattering (dashed lines) and refraction (solid lines) for the fundamental and harmonic waves, respectively.
From Figure 9 one can see that the contribution of refraction to the wave
vector is in the positive-$x$ direction, while the contribution of scattering
is in the negative-$x$ direction. The influences of both refraction and
scattering are stronger for the fundamental wave than for the harmonic wave in
all four cases. When the scattering is more isotropic ($\alpha=0.802$, shown
as cyan lines in the right panel), the influence of scattering is
overwhelmingly larger than that of refraction for the fundamental wave, while
the influences on the harmonic wave are insignificant. For highly anisotropic
scattering cases with small $\alpha$ (left panels in Figure 9), the value of
$\Delta K_{x}$ for refraction is comparable to that for scattering.
Note that when the wave vector is projected back to the sky plane, a positive
change of the wave vector in the $x$ direction corresponds to a negative
offset of the source position, and vice versa. Comparing Figures 8 and 9, one
finds that a larger angular scattering rate and more isotropic scattering
bring the apparent positions of the fundamental and harmonic emission closer
to each other.
### 3.3 Source expansion rate and the source speed in the sky plane
With the advance of radio observation technology, radio imaging can now be
carried out with high time resolution, enabling the study of source variation
within a single solar radio burst, namely the visual speed and the expansion
rate of the source.
In the simulation, the temporal variation of the source properties can be
obtained from the series of reconstructed source frames sectioned according to
the arrival time of the photons, as shown in Figure 2. For each parameter set
($\epsilon$, $\alpha$), we linearly fit the time-distance profile of the
source centroid in the $x$ direction for the frames within the flux FWHM; the
slope of the linear fit yields the visual speed ($V_{x}$). The $V_{x}$ is
calculated in the parameter space of $\epsilon\in[0.03,0.45]$ and
$\alpha\in[0.05,0.99]$ with two given starting position angles of
$\theta_{0}=30^{\circ}$ and $\theta_{0}=60^{\circ}$. Accordingly, the angular
scattering rate coefficient varies over $\eta\in[8.9\times
10^{-7},2.4\times 10^{-4}]\,\textup{km}^{-1}$ for $\epsilon\in[0.03,0.45]$.
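The fitting procedure can be sketched as below; the frame arrays are hypothetical inputs standing in for the reconstructed source series:

```python
import numpy as np

def visual_speed(t, x_centroid, flux):
    """Slope of a linear fit to the centroid time-distance profile,
    restricted to the frames within the flux FWHM, as described in the
    text; the slope is the visual speed V_x."""
    mask = flux >= flux.max() / 2.0          # frames inside the flux FWHM
    slope, _intercept = np.polyfit(t[mask], x_centroid[mask], 1)
    return slope                             # e.g. in R_s per second
```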
As shown in Figures 4 and 6, the duration broadening and position offset due
to the wave propagation effects are small for harmonic emission. Therefore,
there is large uncertainty in linearly fitting the source variation trend of
harmonic emission. Observations also show that, at a given frequency, the
source of the harmonic wave is relatively stable compared with the fundamental
wave (Zhang et al., 2020). In the following, only the variation of the
fundamental emission source is presented.
Figure 10: The visual speed of the source for fundamental emission. The left
and right panels show the results for source locations at
$\theta_{0}=30^{\circ}$ and $\theta_{0}=60^{\circ}$, respectively. Positive
values represent outward motion from the solar disc center, while negative
values represent inward motion. The scale length of the density fluctuation
$h_{0}$ is about 860 km near the wave generation site.
Figure 10 shows the visual speed of the source of the fundamental emission.
The scattering makes the apparent source move outward from the Sun. The
patterns of the visual speed are similar for $30^{\circ}$ and $60^{\circ}$,
and the major part of the parameter space is divided into two regions: the
source tends to have a small visual speed for a more isotropic background
($\alpha>0.5$), while a large visual speed requires the background to be
highly anisotropic ($\alpha<0.4$). The visual speed reaches its maximum at
about $\alpha=0.2$ and $\epsilon=0.45$ (corresponding to $\eta=2.4\times
10^{-4}\,\textup{km}^{-1}$) within the parameter space for both $30^{\circ}$
and $60^{\circ}$ starting position angles. The maximum visual speed is about
$0.65\,R_{s}/\textup{s}$ or 1.5 $c$ for $\theta_{0}=30^{\circ}$ and about
$0.5\,R_{s}/\textup{s}$ or 1.2 $c$ for $\theta_{0}=60^{\circ}$. The visual
speed of the source with a $30^{\circ}$ starting position angle is larger than
that with $60^{\circ}$. Comparing Figure 5 and Figure 10, one finds that the
direction of the offset mostly coincides with the direction of the visual
speed within the parameter space.
Figure 11: The expansion rate of the source size for fundamental emission
located at the solar disk center, measured as the rate of increase of the FWHM
width of the visual source.
For calculating the expansion rate of the source size, we use the cases in
which the original source is located at the solar disk center. The expansion
rate is measured as the slope of a linear fit to the FWHM width in the $x$
direction of the flux intensity frames, within the time period defined by the
FWHM range of the time-intensity profile. Figure 11 shows the expansion rate.
Large expansion rates are mainly concentrated in the highly anisotropic part
($\alpha<0.4$) of the parameter space. The expansion rate can reach
$1.4\,R_{s}/\textup{s}$ or $22\,\textup{arcmin/s}$. Comparing Figures 10 and
11, one finds that a high expansion rate of the source size is often
associated with a large speed of the apparent source motion.
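The FWHM widths entering both fits can be measured from a sampled profile as in this sketch (linear interpolation at the half-maximum crossings; the paper's exact measurement procedure may differ):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled 1-D profile whose peak
    lies in the interior of the grid, with linear interpolation at the
    two half-maximum crossings for sub-sample accuracy."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate each crossing; xp must be increasing for np.interp
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left
```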
## 4 Discussion
In this study, the source in the calculation is assumed to be a point pulse
source, while the real source could have finite size and duration. Assuming
that there is no interaction between the electromagnetic waves generated at
different times and positions, the observed source can be expressed as the
convolution of the point pulse response with the spatial and temporal
distribution of the real source. Since the beam electron generation process at
the energy release site is considered to be fragmented (Benz, 1994; Bastian et
al., 1998), the intrinsic radio sources can have complex spatial and temporal
structures. One should therefore be cautious when the simulation results
presented in Section 3 are directly compared with observations. For a burst
element of the fragments, such as a type III burst at a given frequency, the
decay time is determined by the duration of the local excitation decay phase,
the velocity dispersion of the beam electrons, and the propagation effects (Li
et al., 2008; Ratcliffe et al., 2014; Zhang et al., 2019). The observed size
of the source is determined by the real size of the wave excitation region and
the broadening due to the propagation effects (Kontar et al., 2017). Thus, the
degree of anisotropy and the angular scattering rate estimated from the
simulation results shown in Figure 3 using Equations (13) and (14) are upper
limits. Radio bursts with fine structures and short-term narrowband radio
bursts can provide more constraints on the parameters of the background. For
example, the wave generation site of the type IIIb radio burst is believed to
be in a compact region, so the apparent source size may be determined mainly
by the propagation effects. However, the observed source size varies largely
from case to case, indicating variation of the coronal density fluctuations.
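Under this linearity assumption, the observed light curve follows from a convolution; a one-dimensional sketch with hypothetical profiles:

```python
import numpy as np

def observed_profile(point_response, intrinsic, dt):
    """Observed time profile as the convolution of the simulated
    point-pulse response with an assumed intrinsic source time profile,
    per the linearity assumption stated in the text (dt is the common
    sampling step of both profiles)."""
    return np.convolve(intrinsic, point_response) * dt
```

A delta-like intrinsic profile recovers the point-pulse response itself, as expected.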
The density fluctuation variance and its length scale determine the angular
scattering rate of the radio wave. In the simulation of this work, the tuning
parameter for the angular scattering rate is $\epsilon$. The length scale is
described by the inner scale ($l_{i}$) and outer scale ($l_{o}$) of the
fluctuation spectrum. Both $l_{i}$ and $l_{o}$ are obtained from empirical
models (Coles & Harmon, 1989; Wohlmuth et al., 2001). The inner scale is
considered to be the ion inertial length $d=v_{A}/\Omega_{i}$ (Coles &
Harmon, 1989; Spangler & Gwinn, 1990), where $v_{A}$ is the local Alfv\'en
speed and $\Omega_{i}$ is the ion gyro-frequency. The outer scale represents
the energy-containing scale of the fluctuations. The density fluctuation
spectrum and its cutoff scales in the corona may differ largely between active
and quiet regions. Observations with multiple methods, such as interplanetary
scintillation (Chang et al., 2016) and remote and in situ observations, can
help constrain $l_{i}$, $l_{o}$, and $\epsilon$, as well as the angular
scattering rate, and benefit the simulation model for radio wave ray tracing.
Radio burst observations combined with simulation results can also help derive
the density fluctuation properties of the corona (Krupar et al., 2020).
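For reference, the inner scale used here can be evaluated directly: for protons, $d=v_{A}/\Omega_{i}$ reduces to $c/\omega_{pi}$ and depends only on the density (a sketch with SI constants; the input value is hypothetical):

```python
import numpy as np

def ion_inertial_length_km(n_cm3):
    """Proton inertial length d = v_A / Omega_i = c / omega_pi, in km,
    for a given proton number density in cm^-3 (B cancels out)."""
    c = 2.998e5        # speed of light, km/s
    e = 1.602e-19      # elementary charge, C
    m_p = 1.673e-27    # proton mass, kg
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    omega_pi = np.sqrt(n_cm3 * 1e6 * e**2 / (eps0 * m_p))  # rad/s
    return c / omega_pi
```

At $n\approx 1\,\textup{cm}^{-3}$ this gives the familiar $\approx 228$ km, while coronal densities near the 35 MHz fundamental source ($n\sim 10^{7}\,\textup{cm}^{-3}$) give inner scales of order 0.1 km.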
For a given frequency, the wave generation positions of fundamental and
harmonic emission are at different heights according to the plasma emission
mechanism. However, observational results indicate that the apparent sources
of fundamental and harmonic emission are co-spatial for some type III radio
bursts (Dulk & Suzuki, 1980). One possible interpretation of this is that the
waves are generated and confined in a density-depleted tube (Duncan, 1979; Wu
et al., 2002). Alternatively, this may be caused by the refraction and
scattering of the waves on their way from the exciting site to the observer
(Steinberg et al., 1971). In this work, we inspect the source offset and the
relative positions of the fundamental and harmonic emission at 35 MHz. The
results show that the scattering effect can bring the centroid of the
fundamental emission close to that of the harmonic emission. For the four
parameter sets shown in Figure 8, the visual distance between the
reconstructed sources is smaller in the cases with a larger density
fluctuation scale. For the case of $\epsilon=0.354$ and $\alpha=0.157$, the
position difference $x_{H}-x_{F}$ is negative, meaning that the source of
fundamental emission can appear even slightly higher than that of the harmonic
emission.
The position offset due to scattering and refraction is not negligible for
metric radio bursts generated far away from the solar disk center. The
relative error caused by neglecting the wave propagation effects could be up
to 50% for the fundamental and 10% for the harmonic emission, as shown in
Figure 7. The correction of wave propagation effects is essential for studies
concerning the position of the source centroid, especially for the fundamental
emission. From LOFAR high-cadence imaging spectroscopy, it is found that the
apparent sources of solar type IIIb striae move outward quickly from the Sun
with speeds varying from 0.25 to 4 times the speed of light (Kontar et al.,
2017; Zhang et al., 2020). Meanwhile, the observed expansion rate of the
source size can be as high as $382\pm 30\,\textup{arcmin}^{2}\,\textup{s}^{-1}$.
A commonly held belief in plasma emission is that type IIIb bursts are
generated in a source region with high density inhomogeneity. The simulation
results in Figures 10 and 11 show that both the visual speed and the expansion
rate can be very large when the fluctuation variance $\epsilon$ is large and
the anisotropy parameter $\alpha$ is small. This is consistent with the
observations and indicates that the density fluctuations may be highly
anisotropic in the source region of type IIIb bursts.
## 5 Summary
In this paper, we performed ray-tracing simulations of wave transport for a
point pulse source. For the first time, we explored the parameter space of the
scattering rate coefficient $\eta$ (represented by the density fluctuation
level $\epsilon$) and the anisotropy parameter $\alpha$ with a massive number
of ray-tracing simulations. We analyzed the simulation results to study the
influences of wave propagation on the observed source size, position,
expansion rate, visual speed, and duration of solar radio bursts for both
fundamental and harmonic emission. The major conclusions are as follows:
* •
For fundamental emission, both the source size and the decay time increase
with the scattering rate coefficient, or the density fluctuation variance.
Isotropic fluctuations can produce a much larger decay time than highly
anisotropic fluctuations, while the source size is not sensitive to the level
of anisotropy. For harmonic emission, both the source size and the decay time
are largely determined by the density variance. The decay time of harmonic
emission is significantly smaller than that of fundamental emission for the
same background parameters.
* •
By comparing the source size and decay time derived from the simulation
results with the observational statistics, we obtained the estimates
$\eta=8.9\times 10^{-5}\,\textup{km}^{-1}$ and $\alpha=0.719$ near the source
region of fundamental emission at a frequency of 35 MHz.
* •
We derived the position offset of the source and analyzed its dominant factor
by decoupling scattering and refraction in the wave propagation. The
statistical results of the source offset show that the source of the
fundamental emission is shifted outward more than that of the harmonic
emission by the propagation effects, which could account for the co-spatiality
of fundamental and harmonic emission in some observations.
* •
The observed source position and size can exhibit significant visual motion
and expansion due to the wave propagation effects. Both the visual speed and
the expansion rate tend to be large for a highly anisotropic medium. For
fundamental emission, the visual speed and the expansion rate of the FWHM
source size can reach about 1.5 $c$ and 22 arcmin/s, respectively, at
$\eta=2.4\times 10^{-4}\,\mathrm{km^{-1}}$ and $\alpha=0.2$.
A comprehensive comparison of the observed source characteristics with their
corresponding values simulated in the parameter space can help us precisely
diagnose plasma properties near the wave exciting site of the radio bursts.
In this work, the simulation considers the radio emission of a single
frequency (35 MHz) with the frequency ratio $f_{0}/f_{pe}=1.1$ for fundamental
emission and $f_{0}/f_{pe}=2.0$ for harmonic emission. Future simulations
exploring the parameter space of frequency and frequency ratio of the wave
excitation can help us understand the dynamic spectrum of a complete radio
burst event and the properties of its beam electrons. The background density
model is spherically symmetric in the present work, while the solar corona is
a non-uniform medium with a number of large structures. More specific density
models with helmet streamers, coronal holes, and under-dense or over-dense
flux tubes may also be studied in future simulations.
## 6 Acknowledgements
The research was supported by the National Natural Science Foundation of China
(41974199 and 41574167) and the B-type Strategic Priority Program of the
Chinese Academy of Sciences (XDB41000000). The numerical calculations in this
paper were performed on the supercomputing system at the Supercomputing Center
of the University of Science and Technology of China.
## References
* Arzner & Magun (1999) Arzner, K., & Magun, A. 1999, Astronomy and Astrophysics, 351, 1165
* Bastian et al. (1998) Bastian, T., Benz, A., & Gary, D. 1998, Annual Review of Astronomy and Astrophysics, 36, 131
* Benz (1994) Benz, A. 1994, Space science reviews, 68, 135
* Bian et al. (2019) Bian, N., Emslie, A., & Kontar, E. 2019, The Astrophysical Journal, 873, 33
* Bisoi et al. (2018) Bisoi, S. K., Sawant, H., Janardhan, P., et al. 2018, The Astrophysical Journal, 862, 65
* Bougeret & Steinberg (1977) Bougeret, J., & Steinberg, J. 1977, Astronomy and Astrophysics, 61, 777
* Chang et al. (2016) Chang, O., Gonzalez-Esparza, J., & Mejia-Ambriz, J. 2016, Advances in Space Research, 57, 1307
* Chen et al. (2015) Chen, B., Bastian, T. S., Shen, C., et al. 2015, Science, 350, 1238, doi: 10.1126/science.aac8467
* Chen et al. (2014) Chen, Y., Du, G., Feng, L., et al. 2014, The Astrophysical Journal, 787, 59
* Chrysaphi et al. (2018) Chrysaphi, N., Kontar, E. P., Holman, G. D., & Temmer, M. 2018, The Astrophysical Journal, 868, 79
* Coles & Harmon (1989) Coles, W., & Harmon, J. 1989, The Astrophysical Journal, 337, 1023
* Dulk & Suzuki (1980) Dulk, G., & Suzuki, S. 1980, Astronomy and Astrophysics, 88, 203
* Duncan (1979) Duncan, R. 1979, Solar Physics, 63, 389
* Fokker (1965) Fokker, A. 1965, Bulletin of the Astronomical Institutes of the Netherlands, 18, 111
* Kontar et al. (2017) Kontar, E., Yu, S., Kuznetsov, A., et al. 2017, Nature communications, 8, 1515
* Kontar et al. (2019) Kontar, E. P., Chen, X., Chrysaphi, N., et al. 2019, The Astrophysical Journal, 884, 122
* Krupar et al. (2018) Krupar, V., Maksimovic, M., Kontar, E. P., et al. 2018, ApJ, 857, 82, doi: 10.3847/1538-4357/aab60f
* Krupar et al. (2020) Krupar, V., Szabo, A., Maksimovic, M., et al. 2020, The Astrophysical Journal Supplement Series, 246, 57
* Kuznetsov et al. (2020) Kuznetsov, A. A., Chrysaphi, N., Kontar, E. P., & Motorina, G. 2020, The Astrophysical Journal, 898, 94
* Kuznetsov & Kontar (2019) Kuznetsov, A. A., & Kontar, E. 2019, Astronomy & Astrophysics, 631, L7
* Li et al. (2008) Li, B., Cairns, I. H., & Robinson, P. A. 2008, Journal of Geophysical Research: Space Physics, 113
* Mann et al. (2018) Mann, G., Breitling, F., Vocks, C., et al. 2018, Astronomy & Astrophysics, 611, A57
* McCauley et al. (2018) McCauley, P. I., Cairns, I. H., & Morgan, J. 2018, Solar Physics, 293, 132
* Morosan et al. (2019) Morosan, D. E., Carley, E. P., Hayes, L. A., et al. 2019, Nature Astronomy, 3, 452
* Ratcliffe et al. (2014) Ratcliffe, H., Kontar, E., & Reid, H. 2014, Astronomy & Astrophysics, 572, A111
* Riddle (1974) Riddle, A. 1974, Solar Physics, 35, 153
* Robinson (1983) Robinson, R. 1983, in Proceedings of the Astronomical Society of Australia, Vol. 5, 208–211
* Spangler & Gwinn (1990) Spangler, S. R., & Gwinn, C. R. 1990, The Astrophysical Journal, 353, L29
* Steinberg et al. (1971) Steinberg, J., Aubier-Giraud, M., Leblanc, Y., & Boischot, A. 1971, Astronomy and Astrophysics, 10, 362
* Stewart (1972) Stewart, R. 1972, Publications of the Astronomical Society of Australia, 2, 100
* Stewart (1976) —. 1976, Solar Physics, 50, 437
* Wild et al. (1959) Wild, J., Sheridan, K., & Trent, G. 1959, in Symposium-International Astronomical Union, Vol. 9, Cambridge University Press, 176–185
* Wohlmuth et al. (2001) Wohlmuth, R., Plettemeier, D., Edenhofer, P., et al. 2001, Space Science Reviews, 97, 9
* Wu et al. (2002) Wu, C. S., Wang, C. B., Yoon, P. H., Zheng, H. N., & Wang, S. 2002, The Astrophysical Journal, 575, 1094, doi: 10.1086/341468
* Yu et al. (2020) Yu, S., Chen, B., Reeves, K. K., et al. 2020, ApJ, 900, 17, doi: 10.3847/1538-4357/aba8a6
* Zhang et al. (2019) Zhang, P., Yu, S., Kontar, E. P., & Wang, C. B. 2019, The Astrophysical Journal, 885, 140
* Zhang et al. (2020) Zhang, P., Zucca, P., Sridhar, S. S., et al. 2020, Astronomy & Astrophysics, 639, A115
# Holographic DC Conductivity for Backreacted NLED in Massive Gravity
Shihao Bi$^{a}$ [email protected], Jun Tao$^{a}$ [email protected]
$^{a}$Center for Theoretical Physics, College of Physics, Sichuan University,
Chengdu, 610065, China
###### Abstract
In this work, a holographic model with the charge current dual to a general
nonlinear electrodynamics (NLED) is discussed in the framework of massive
gravity. The massive graviton breaks the diffeomorphism invariance in the bulk
and generates momentum dissipation in the dual boundary theory. The
expressions for the DC conductivities in a finite magnetic field are obtained,
with the backreaction of the NLED field on the background geometry taken into
account. General transport properties in various limits are presented, and we
then turn to three specific NLED models: conventional Maxwell
electrodynamics, Maxwell-Chern-Simons electrodynamics, and Born-Infeld
electrodynamics, to study the parameter dependence of the in-plane
resistivity. Two mechanisms leading to Mott-insulating behaviors and negative
magneto-resistivity are revealed at zero temperature, and the role played by
the massive gravity coupling parameters is discussed.
preprint: CTP-SCU/2021001
## I Introduction
The discovery of the gauge/gravity duality makes it possible to deal with
strongly coupled gauge theories on the boundary using classical gravitational
theories in the higher-dimensional bulk Susskind1997:PRD ;
Maldacena1997:IJTP ; Witten1998:ATMP ; Ammon2015:CUP ; Eleftherios2011:LNP ;
Natsuume2016:LNP . The well-known prediction for the ratio of the shear
viscosity to the entropy density in $\mathcal{N}=4$ super Yang-Mills (SYM)
theory was found to be close to the experimental results for real quark-gluon
plasma (QGP) Policastro2001:PRL ; Buchel2004:PRL ; Kovtun2005:PRL ;
Benincasa2006:JHEP , making the duality more convincing to the physics
community. In recent years, the idea of gauge/gravity duality has been applied
to hydrodynamics DTSon2006:JHEP ; DTSon2007:ARNPS ; Kovtun2007:JPA ;
Rangamani2009:CQG , quantum chromodynamics (QCD) Adams2001:PRD ;
Brodsky2004:PLB ; Brodsky2005:PRL ; Brodsky2009:PRL ; Erlich2005:PRL ;
Rold20005:NPB ; Zayakin2008:JHEP ; Edelstein2009:AIP ; Gursoy2011:QCD ;
Alfimov2015:JHEP , nuclear physics Bergman2007:JHEP ; Baldino2017:PRD , and
strongly coupled condensed matter systems Hartnoll2008:PRL ; Hartnoll2009:CQG
; Herzog2009:JPA ; Herzog2010:PRD ; Mcgreevy2010:AHEP ; Nishioka2010:JHEP ;
Cubrovic2009:Sci ; Liu2011:PRD ; Iqbal2012:NFL ; Faulkner2011:JHEP ;
Cai2015:SCP , etc., bringing new insights into these branches of physics.
The establishment of quantum many-body theory AGD2012:QFT ; Landau:vol9 at a
fairly low energy scale compared with high energy physics is another
magnificent and fascinating story apart from that of the standard model, one
which has profoundly extended and deepened our understanding of realistic
matter. Many characterization methods, based on linear response theory or
quantum transport theory, have been developed in scattering or transport
experiments to reveal the electronic or lattice structures, surface
topography, defects, and disorder, and to study the transport properties of
material samples. In the framework of energy band theory, materials are
roughly divided into three categories, metals, semiconductors, and insulators,
which differ in their electrical conductivities. The classical motion of
electrons in the presence of a magnetic field has been studied since 1879,
when the famous classical Hall effect was discovered Hall1879 . Its quantum
version, known as the quantum Hall effect, was first observed in 1980 by von
Klitzing Klitzing1980:PRL , and has aroused wide research enthusiasm in the
last two decades, witnessing impressive theoretical and experimental
breakthroughs Qi2011:RMP ; Hasan2010:RMP . The new quantum phases of matter,
termed topological insulators, exhibit another kind of bulk-boundary
correspondence between the gapped insulating bulk and the gapless metallic
edge states on the boundary. Their exotic transport properties have also
attracted much research interest, among which the conductivity behavior in the
presence of a magnetic field is of vital importance, for it provides a
promising way of controlling the electrical properties with the help of
external fields.
For normal metals, the resistivity always increases with the magnetic field
strength Wannier1972:PRB , showing positive magneto-resistivity. However,
experimental measurements Kim2013:PRL ; Xiong2015:Sci ; Li2016:NC ;
Zhang2016:NC ; Zhao2016:SR in topological materials, such as Dirac or Weyl
semimetals, demonstrated the presence of negative magneto-resistivity or a
crossover from positive to negative, which is mainly attributed to the chiral
anomaly DTSon2013:PRB ; Qi2013:CRP ; Burkov2014:PRL ; Burkov2015:PRB ;
Lu2017:FP . In addition, negative magneto-resistivity behaviors are also
found in holographic chiral anomalous systems Jimenez2014:PRD ;
Jimenez2015:JHEP ; Landsteiner2015:JHEP ; Sun2016:JHEP .
Mott insulators, which are believed to be the parent materials of cuprate
high-$T_{c}$ superconductors Wen2006:RMP , are another example beyond the
framework of energy band theory, due to the strong Coulomb repulsive
interaction. The strong electron-electron interaction prevents the available
charge carriers from efficiently transporting charge, as if they suffered an
electronic traffic jam. The holographic construction of Mott insulators has
been a heated topic, and much progress has been made Edalati2011:PRL ;
Edalati2011:PRD ; Wu2012:JHEP ; Wu2014:JHEP ; Wu2015:JHEP ; Wu2016:JHEP ;
Fujita2015:JHEP ; Nishioka2010:JHEP ; Kiritsis2015:JHEP . In Ref.
Baggioli2016:JHEP , a holographic model coupled with a particular NLED named
iDBI was proposed to study the strong electron-electron interaction by
introducing a self-interaction of the NLED field, and Mott-insulating
behaviors appear for large enough self-interaction strength.
In this paper, we construct a holographic model coupled with a general NLED
field to investigate the magneto-transport of the strongly interacting system
on the boundary from the perspective of gauge/gravity duality. The
backreaction of the NLED field on the bulk geometry is taken into
consideration as in Ref. Cremonini2017:JHEP . Massive gravity, with mass
potentials associated with the graviton mass, is adopted to break the
diffeomorphism invariance in the bulk, producing momentum relaxation in the
dual boundary theory Vegh2013 .
This paper is organized as follows. In Sec. II we establish our holographic
model of massive gravity with an NLED field. The DC conductivities with a
non-zero magnetic field are then derived as functions of $\rho$, $h$, $T$, and
the relevant massive gravity coupling parameters in Sec. III, with various
limiting situations discussed. In Sec. IV we present a detailed investigation
of the in-plane resistivity for conventional Maxwell electrodynamics, the
CP-violating Maxwell-Chern-Simons electrodynamics, and Born-Infeld
electrodynamics. Finally, we conclude in Sec. V. We use units with
$\hslash=G=k_{B}=l=1$.
## II Holographic Setup
The 4-dimensional massive gravity Vegh2013 with a negative cosmological
constant $\Lambda$ coupled to a nonlinear electromagnetic field $A_{\mu}$ we
are considering is given by
$S=\frac{1}{16\pi}\int\mathrm{d}^{4}x\sqrt{-g}\left[R-2\Lambda+m^{2}\sum_{j=1}^{4}c_{j}\mathcal{U}_{j}(g,f)+\mathcal{L}(s,p)\right],$
(1)
where $\Lambda=-3/l^{2}$, $m$ is the mass parameter, $f$ is a fixed symmetric
tensor called the reference metric, $c_{j}$ are coupling constants (to make
the massive gravity theory self-consistent, all the $c_{j}$ will be set to be
negative), and $\mathcal{U}_{j}(g,f)$ denotes the symmetric polynomials of the
eigenvalues of the $4\times 4$ matrix
$\mathcal{K}_{\nu}^{\mu}=\sqrt{g^{\mu\lambda}f_{\lambda\nu}}$, given as
$\displaystyle\mathcal{U}_{1}$ $\displaystyle=[\mathcal{K}],$ (2)
$\displaystyle\mathcal{U}_{2}$
$\displaystyle=[\mathcal{K}]^{2}-\left[\mathcal{K}^{2}\right],$
$\displaystyle\mathcal{U}_{3}$
$\displaystyle=[\mathcal{K}]^{3}-3[\mathcal{K}]\left[\mathcal{K}^{2}\right]+2\left[\mathcal{K}^{3}\right],$
$\displaystyle\mathcal{U}_{4}$
$\displaystyle=[\mathcal{K}]^{4}-6[\mathcal{K}]^{2}\left[\mathcal{K}^{2}\right]+8[\mathcal{K}]\left[\mathcal{K}^{3}\right]+3\left[\mathcal{K}^{2}\right]^{2}-6\left[\mathcal{K}^{4}\right].$
The square root in $\mathcal{K}$ means
$\left(\sqrt{\mathcal{K}}\right)_{\lambda}^{\mu}\left(\sqrt{\mathcal{K}}\right)_{\nu}^{\lambda}=\mathcal{K}_{\nu}^{\mu}$,
and $\left[\mathcal{K}\right]\equiv\mathcal{K}_{\mu}^{\mu}$. The NLED
Lagrangian $\mathcal{L}(s,p)$ in Eq. 1 is constructed as a function of two
independent nontrivial scalars built from the field strength tensor
$F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$:
$\displaystyle s=$ $\displaystyle-\frac{1}{4}F^{\mu\nu}F_{\mu\nu},$ (3a)
$\displaystyle p=$
$\displaystyle-\frac{1}{8}\varepsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}.$
(3b)
Here $\varepsilon^{abcd}\equiv-\left[a\;b\;c\;d\right]/\sqrt{-g}$ is the
totally antisymmetric Lorentz tensor, and $\left[a\;b\;c\;d\right]$ is the
permutation symbol. In the weak field limit we assume the NLED reduces to the
Maxwell-Chern-Simons Lagrangian
$\mathcal{L}\left(s,p\right)\approx s+\theta p$, with $\theta$ defined as
$\mathcal{L}^{\left(0,1\right)}\left(0,0\right)$ for later convenience.
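As a concreteness check on Eq. (2): the potentials $\mathcal{U}_{1},\dots,\mathcal{U}_{4}$ are, up to factors of $j!$, the elementary symmetric polynomials of the eigenvalues of $\mathcal{K}$. A direct transcription of the trace combinations (the matrix input below is a hypothetical example, not a physical $\mathcal{K}$):

```python
import numpy as np

def symmetric_polynomials(K):
    """The potentials U_1..U_4 of Eq. (2), written via the traces
    [K^n] of a 4x4 matrix K (direct transcription, not optimized)."""
    t1 = np.trace(K)
    t2 = np.trace(K @ K)
    t3 = np.trace(K @ K @ K)
    t4 = np.trace(K @ K @ K @ K)
    U1 = t1
    U2 = t1**2 - t2
    U3 = t1**3 - 3 * t1 * t2 + 2 * t3
    U4 = t1**4 - 6 * t1**2 * t2 + 8 * t1 * t3 + 3 * t2**2 - 6 * t4
    return U1, U2, U3, U4
```

For $K=\mathrm{diag}(1,2,3,4)$ this returns $(e_1,\,2e_2,\,6e_3,\,24e_4)=(10,70,300,576)$, consistent with Newton's identities.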
Varying the action with respect to $g^{\mu\nu}$ and $A^{\mu}$ we obtain the
equations of motion
$\displaystyle R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}+\Lambda g_{\mu\nu}=$
$\displaystyle\frac{1}{2}T_{\mu\nu},$ (4a)
$\displaystyle\nabla_{\mu}G^{\mu\nu}=$ $\displaystyle 0.$ (4b)
where the energy-momentum tensor is
$T_{\mu\nu}=g_{\mu\nu}\left(\mathcal{L}(s,p)-p\frac{\partial\mathcal{L}(s,p)}{\partial
p}\right)+\frac{\partial\mathcal{L}(s,p)}{\partial
s}F_{\mu}^{\lambda}F_{\nu\lambda}+m^{2}\chi_{\mu\nu},$ (5)
with
$\displaystyle\chi_{\mu\nu}=$ $\displaystyle
c_{1}\left(\mathcal{U}_{1}g_{\mu\nu}-\mathcal{K}_{\mu\nu}\right)+c_{2}\left(\mathcal{U}_{2}g_{\mu\nu}-2\mathcal{U}_{1}\mathcal{K}_{\mu\nu}+2\mathcal{K}_{\mu\nu}^{2}\right)$
(6)
$\displaystyle+c_{3}\left(\mathcal{U}_{3}g_{\mu\nu}-3\mathcal{U}_{2}\mathcal{K}_{\mu\nu}+6\mathcal{U}_{1}\mathcal{K}_{\mu\nu}^{2}-6\mathcal{K}_{\mu\nu}^{3}\right)$
$\displaystyle+c_{4}\left(\mathcal{U}_{4}g_{\mu\nu}-4\mathcal{U}_{3}\mathcal{K}_{\mu\nu}+12\mathcal{U}_{2}\mathcal{K}_{\mu\nu}^{2}-24\mathcal{U}_{1}\mathcal{K}_{\mu\nu}^{3}+24\mathcal{K}_{\mu\nu}^{4}\right).$
And we introduce
$G^{\mu\nu}=-\frac{\partial\mathcal{L}(s,p)}{\partial
F_{\mu\nu}}=\frac{\partial\mathcal{L}(s,p)}{\partial
s}F^{\mu\nu}+\frac{1}{2}\frac{\partial\mathcal{L}(s,p)}{\partial
p}\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}.$ (7)
We look for a black brane solution with asymptotically AdS spacetime by taking
the following ansatz Vegh2013 ; Cai2015:PRD for the metric, the NLED field,
and the reference metric:
$\displaystyle\mathrm{d}s^{2}=$
$\displaystyle-{}f(r)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{f(r)}+r^{2}\left(\mathrm{d}x^{2}+\mathrm{d}y^{2}\right),$
(8a) $\displaystyle A=$ $\displaystyle
A_{t}(r)\mathrm{d}t+\frac{h}{2}\left(x\mathrm{d}y-y\mathrm{d}x\right),$ (8b)
$\displaystyle f_{\mu\nu}=$
$\displaystyle\operatorname{diag}\left(0,0,\alpha^{2},\alpha^{2}\right).$ (8c)
where $h$ is the magnetic field strength. From Eqs. 8a and 8b we find that the
nontrivial scalars of Eqs. 3a and 3b are
$\displaystyle s=$
$\displaystyle\frac{1}{2}\left(A_{t}^{\prime}(r)^{2}-\frac{h^{2}}{r^{4}}\right),$
(9a) $\displaystyle p=$ $\displaystyle-\frac{hA_{t}^{\prime}(r)}{r^{2}}.$ (9b)
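Equations (9a)-(9b) can be verified numerically by evaluating the invariants of Eqs. (3a)-(3b) on the ansatz, with coordinates ordered $(t,r,x,y)$ (a sketch; the sample values are arbitrary):

```python
import numpy as np
from itertools import permutations

def invariants(At_prime, h, r, f):
    """Evaluate s = -F^{mn}F_{mn}/4 and p = -eps^{mnrs}F_{mn}F_{rs}/8
    for the ansatz of Eqs. (8a)-(8b), as a check of Eqs. (9a)-(9b)."""
    g = np.diag([-f, 1.0 / f, r**2, r**2])            # metric, Eq. (8a)
    ginv = np.diag([-1.0 / f, f, 1.0 / r**2, 1.0 / r**2])
    F = np.zeros((4, 4))
    F[1, 0] = At_prime; F[0, 1] = -At_prime           # F_rt = A_t'(r)
    F[2, 3] = h; F[3, 2] = -h                         # F_xy = h
    Fup = ginv @ F @ ginv.T                           # raise both indices
    s = -0.25 * np.einsum('ab,ab->', Fup, F)
    # Levi-Civita tensor with upper indices: eps^{abcd} = -[abcd]/sqrt(-g)
    sqrt_mg = np.sqrt(-np.linalg.det(g))
    eps = np.zeros((4, 4, 4, 4))
    for perm in permutations(range(4)):
        sign = np.linalg.det(np.eye(4)[list(perm)])   # permutation sign
        eps[perm] = -sign / sqrt_mg
    p = -0.125 * np.einsum('abcd,ab,cd->', eps, F, F)
    return s, p
```

The result matches $s=\tfrac12(A_t'^2-h^2/r^4)$ and $p=-hA_t'/r^2$ for any sample values.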
The equations of motion are then obtained as
$\displaystyle rf^{\prime}(r)+f(r)-3r^{2}=$ $\displaystyle c_{1}\alpha
m^{2}r+c_{2}\alpha^{2}m^{2}+\frac{r^{2}}{2}\left(A_{t}^{\prime}(r)G^{rt}+\mathcal{L}(s,p)\right),$
(10a) $\displaystyle rf^{\prime\prime}(r)+2f^{\prime}(r)-6r=$ $\displaystyle
c_{1}\alpha m^{2}+r\left(\mathcal{L}(s,p)+hG^{xy}\right),$ (10b)
$\displaystyle\left[r^{2}G^{rt}\right]^{\prime}=$ $\displaystyle 0,$ (10c)
where the non-vanishing components $G^{\mu\nu}$ are
$\displaystyle G^{rt}=$ $\displaystyle\frac{\partial\mathcal{L}}{\partial
p}\frac{h}{r^{2}}-\frac{\partial\mathcal{L}}{\partial s}A_{t}^{\prime}(r),$
(11a) $\displaystyle G^{xy}=$
$\displaystyle\frac{\partial\mathcal{L}}{\partial
s}\frac{h}{r^{4}}+\frac{\partial\mathcal{L}}{\partial
p}\frac{A_{t}^{\prime}(r)}{r^{2}}.$ (11b)
Eq. 10c leads to $G^{tr}=\rho/r^{2}$ with $\rho$ being a constant. The event
horizon $r_{h}$ is the root of $f(r)$, i.e., $f(r_{h})=0$, and the Hawking
temperature of the black brane is given by
$T=\frac{f^{\prime}(r_{h})}{4\pi}.$ (12)
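Eq. (12) in a runnable form, with a central finite difference for $f'(r_h)$ (the sample blackening function in the check is the planar Schwarzschild-AdS one, an assumption for illustration only):

```python
import numpy as np

def hawking_temperature(f, r_h, eps=1e-6):
    """T = f'(r_h) / (4 pi), Eq. (12), via a central finite difference
    of a user-supplied blackening function f(r) with f(r_h) = 0."""
    fprime = (f(r_h + eps) - f(r_h - eps)) / (2.0 * eps)
    return fprime / (4.0 * np.pi)
```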
Then at $r=r_{h}$ Eq. 10a reduces to
$4\pi Tr_{h}-3r_{h}^{2}=c_{1}\alpha
m^{2}r_{h}+c_{2}\alpha^{2}m^{2}+\frac{r_{h}^{2}}{2}\left(A_{t}^{\prime}(r_{h})G_{h}^{rt}+\mathcal{L}(s_{h},p_{h})\right),$
(13)
where
$\displaystyle s_{h}=$
$\displaystyle\frac{1}{2}\left(A_{t}^{\prime}(r_{h})^{2}-\frac{h^{2}}{r_{h}^{4}}\right),$
(14a) $\displaystyle p_{h}=$
$\displaystyle-\frac{hA_{t}^{\prime}(r_{h})}{r_{h}^{2}},$ (14b) $\displaystyle
G_{h}^{rt}=$
$\displaystyle\mathcal{L}^{\left(0,1\right)}\left(s_{h},p_{h}\right)\frac{h}{r_{h}^{2}}-\mathcal{L}^{\left(1,0\right)}\left(s_{h},p_{h}\right)A_{t}^{\prime}\left(r_{h}\right)$
(14c)
## III DC Conductivity
From the perspective of gauge/gravity duality, the black brane solution Eqs.
8a and 8b in the bulk can describe an equilibrium state at finite temperature
$T$ given by Eq. 13. The conserved current $\mathcal{J}^{\mu}$ in the
boundary theory is connected with the conjugate momentum of the NLED field in
the bulk, which allows us to calculate the DC conductivity in the framework of
linear response theory Donos2014:JHEP ; Blake2014:PRL .
### III.1 Derivation of DC Conductivity
The following perturbations on the metric and the NLED field are applied to
derive the DC conductivity:
$\displaystyle\delta g_{ti}=$ $\displaystyle r^{2}h_{ti}(r),$ (15a)
$\displaystyle\delta g_{ri}=$ $\displaystyle r^{2}h_{ri}(r),$ (15b)
$\displaystyle\delta A_{i}=$ $\displaystyle-E_{i}t+a_{i}(r),$ (15c)
where $i=x,y$. We first consider the $t$ component. Since $A_{t}(r)$ enters
$s$ and $p$ in Eqs. 9a and 9b only through its derivative, the conjugate
momentum is radially independent, $\partial_{r}\Pi^{t}=0$, with
$\Pi^{t}=\frac{\partial\mathcal{L}\left(s,p\right)}{\partial\left(A_{t}^{\prime}\left(r\right)\right)}.$
(16)
Then the expectation value of $\mathcal{J}^{t}$ in the dual boundary field
theory is given by
$\left\langle\mathcal{J}^{t}\right\rangle=\Pi^{t}.$ (17)
At the linear level we have $\left\langle\mathcal{J}^{t}\right\rangle=\rho$,
which indicates that $\rho$ can be interpreted as the charge density in the
dual field theory. At the event horizon $r=r_{h}$ the charge density $\rho$ is
given by
$\rho=\mathcal{L}^{\left(1,0\right)}\left(s_{h},p_{h}\right)r_{h}^{2}A_{t}^{\prime}\left(r_{h}\right)-\mathcal{L}^{\left(0,1\right)}\left(s_{h},p_{h}\right)h.$
(18)
Then we consider the planar components. The NLED Lagrangian is explicitly
independent of $a_{i}(r)$, making the conjugate momentum of the field $a_{i}(r)$,
$\Pi^{i}=\frac{\partial\mathcal{L}\left(s,p\right)}{\partial\left(a_{i}^{\prime}(r)\right)}=\frac{\partial\mathcal{L}\left(s,p\right)}{\partial\left(\partial_{r}A_{i}\right)}=\sqrt{-g}G^{ir},$
(19)
radially independent as well. The charge currents in the dual field theory,
given by $\left\langle\mathcal{J}^{i}\right\rangle=\Pi^{i}$, can be expressed
in terms of the perturbed metric and field components $h_{ti}$, $h_{ri}$,
$a^{\prime}_{i}$ and $E_{i}$:
$\displaystyle\left\langle\mathcal{J}^{x}\right\rangle$
$\displaystyle=-\mathcal{L}^{\left(1,0\right)}\left(s,p\right)\left[f\left(r\right)a_{x}^{\prime}\left(r\right)+hf\left(r\right)h_{ry}\left(r\right)+r^{2}A_{t}^{\prime}\left(r\right)h_{tx}\left(r\right)\right]-\mathcal{L}^{\left(0,1\right)}\left(s,p\right)E_{y},$
(20a) $\displaystyle\left\langle\mathcal{J}^{y}\right\rangle$
$\displaystyle=-\mathcal{L}^{\left(1,0\right)}\left(s,p\right)\left[f\left(r\right)a_{y}^{\prime}\left(r\right)-hf\left(r\right)h_{rx}\left(r\right)+r^{2}A_{t}^{\prime}\left(r\right)h_{ty}\left(r\right)\right]+\mathcal{L}^{\left(0,1\right)}\left(s,p\right)E_{x}.$
(20b)
The perturbed metric components $h_{ti}$ and $h_{ri}$ are coupled to $E_{i}$
in the gravitational field equations and can be eliminated. Taking the $tx$
and $ty$ components of the perturbed Einstein equations at first order, we get:
$\displaystyle\left(\frac{h^{2}}{r^{2}}-\frac{\alpha
m^{2}c_{1}r}{\mathcal{L}^{\left(1,0\right)}(s,p)}\right)h_{tx}(r)-hA_{t}^{\prime}(r)f(r)h_{ry}(r)=$
$\displaystyle A_{t}^{\prime}(r)f(r)a^{\prime}_{x}(r)-\frac{E_{y}h}{r^{2}},$
(21a) $\displaystyle\left(\frac{h^{2}}{r^{2}}-\frac{\alpha
m^{2}c_{1}r}{\mathcal{L}^{\left(1,0\right)}(s,p)}\right)h_{ty}(r)+hA_{t}^{\prime}(r)f(r)h_{rx}(r)=$
$\displaystyle A_{t}^{\prime}(r)f(r)a^{\prime}_{y}(r)+\frac{E_{x}h}{r^{2}}.$
(21b)
Next, using the regularity conditions on the metric and fields near the
event horizon Blake2014:PRL :
$\displaystyle f\left(r\right)$ $\displaystyle=4\pi
T\left(r-r_{h}\right)+\cdots,$ (22) $\displaystyle A_{t}\left(r\right)$
$\displaystyle=A_{t}^{\prime}\left(r_{h}\right)\left(r-r_{h}\right)+\cdots,$
$\displaystyle a_{i}\left(r\right)$ $\displaystyle=-\frac{E_{i}}{4\pi
T}\ln\left(r-r_{h}\right)+\cdots,$ $\displaystyle h_{ri}\left(r\right)$
$\displaystyle=\frac{h_{ti}\left(r\right)}{f\left(r\right)}+\cdots,$
Eqs. 21a and 21b evaluated at the event horizon $r=r_{h}$ become
$\displaystyle
hA_{t}^{\prime}(r_{h})h_{ty}(r_{h})-\left(\frac{h^{2}}{r_{h}^{2}}-\frac{\alpha
m^{2}c_{1}r_{h}}{\mathcal{L}^{\left(1,0\right)}(s_{h},p_{h})}\right)h_{tx}(r_{h})=A_{t}^{\prime}(r_{h})E{}_{x}+\frac{h}{r_{h}^{2}}E_{y},$
(23a) $\displaystyle\left(\frac{h^{2}}{r_{h}^{2}}-\frac{\alpha
m^{2}c_{1}r_{h}}{\mathcal{L}^{\left(1,0\right)}(s_{h},p_{h})}\right)h_{ty}(r_{h})+hA_{t}^{\prime}(r_{h})h_{tx}(r_{h})=\frac{h}{r_{h}^{2}}E_{x}-A_{t}^{\prime}(r_{h})E{}_{y}.$
(23b)
Solving Eqs. 23a and 23b for $h_{ti}(r_{h})$ in terms of $E_{i}$ and inserting
the result into Eqs. 20a and 20b, one can relate the currents
$\left\langle\mathcal{J}^{i}\right\rangle$ to the electric fields $E_{j}$ at
the event horizon $r=r_{h}$ via
$\left\langle\mathcal{J}^{i}\right\rangle=\sigma_{ij}E_{j}$, where the DC
conductivities are given by
$\displaystyle\sigma_{xx}=$ $\displaystyle\sigma_{yy}=\dfrac{\alpha
m^{2}c_{1}r_{h}\left(\dfrac{\alpha
m^{2}c_{1}r_{h}}{\mathcal{L}^{\left(1,0\right)}(s_{h},p_{h})}-\dfrac{h^{2}}{r_{h}^{2}}+r_{h}^{2}A^{\prime}_{t}{}^{2}(r_{h})\right)}{\left(\dfrac{\alpha
m^{2}c_{1}r_{h}}{\mathcal{L}^{\left(1,0\right)}(s_{h},p_{h})}-\dfrac{h^{2}}{r_{h}^{2}}\right)^{2}+h^{2}A^{\prime}_{t}{}^{2}(r_{h})},$
(24a) $\displaystyle\sigma_{xy}=$
$\displaystyle-\sigma_{yx}=\dfrac{\mathcal{L}^{\left(1,0\right)}(s_{h},p_{h})r_{h}^{2}A^{\prime}_{t}(r_{h})}{h}\left(1-\dfrac{\left(\dfrac{\alpha
m^{2}c_{1}r_{h}}{\mathcal{L}^{\left(1,0\right)}(s_{h},p_{h})}\right)^{2}}{\left(\dfrac{\alpha
m^{2}c_{1}r_{h}}{\mathcal{L}^{\left(1,0\right)}(s_{h},p_{h})}-\dfrac{h^{2}}{r_{h}^{2}}\right)^{2}+h^{2}A^{\prime}_{t}{}^{2}(r_{h})}\right)-\mathcal{L}^{\left(0,1\right)}(s_{h},p_{h}).$
(24b)
The resistivity matrix is the inverse of the conductivity matrix:
$R_{xx}=R_{yy}=\frac{\sigma_{xx}}{\sigma_{xx}^{2}+\sigma_{xy}^{2}}\quad\text{and}\quad
R_{xy}=-R_{yx}=\frac{\sigma_{xy}}{\sigma_{xx}^{2}+\sigma_{xy}^{2}}.$ (25)
The event horizon radius $r_{h}$ is the solution of Eq. 13 and depends on the
temperature $T$, the charge density $\rho$, the magnetic field $h$ and the
parameters $c_{1,2}$, $m$ and $\alpha$. For a given Lagrangian
$\mathcal{L}(s,p)$ one first solves Eq. 18 for $A_{t}^{\prime}(r_{h})$, then
plugs it into Eq. 13 to solve for $r_{h}$, and finally inserts both into
Eqs. 24a and 24b to obtain the DC conductivity as a complicated function of
the parameters mentioned above.
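The first step of this procedure can be sketched numerically. The following Python snippet is our illustration, not code from the paper; `scipy` and the root bracket $[-10^{6},10^{6}]$ are assumptions. It solves Eq. 18 for $A_{t}^{\prime}(r_{h})$ given a Lagrangian specified through its partials $\mathcal{L}^{(1,0)}$ and $\mathcal{L}^{(0,1)}$:

```python
from scipy.optimize import brentq

def solve_At_prime(L10, L01, rho, h, rh):
    """Solve Eq. 18 for At = A_t'(r_h):
    rho = L10(s_h, p_h) * rh**2 * At - L01(s_h, p_h) * h,
    with s_h and p_h evaluated from Eqs. 14a and 14b."""
    def eq18(At):
        s_h = 0.5 * (At**2 - h**2 / rh**4)   # Eq. 14a
        p_h = -h * At / rh**2                # Eq. 14b
        return L10(s_h, p_h) * rh**2 * At - L01(s_h, p_h) * h - rho
    return brentq(eq18, -1e6, 1e6)  # bracket assumed wide enough

# Maxwell case L(s,p) = s: L10 = 1, L01 = 0, so Eq. 34 gives At = rho / rh^2.
At = solve_At_prime(lambda s, p: 1.0, lambda s, p: 0.0, rho=0.3, h=0.2, rh=0.7)
```

For the Maxwell-Chern-Simons Lagrangian $s+\theta p$ one passes $\mathcal{L}^{(1,0)}=1$ and $\mathcal{L}^{(0,1)}=\theta$, recovering the shift of Eq. 38.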
### III.2 Various Limits
Concrete examples of NLED models and the properties of their DC resistivity
will be discussed in detail in Sec. IV. Before focusing on specific NLED
models, we first consider some general properties of the DC conductivity in
various limits.
#### III.2.1 Massless and Massive Limits
The massless limit corresponds to $m\to 0$, where the system restores
Lorentz invariance. It has been shown that in a Lorentz-invariant theory
the DC conductivities in the presence of a magnetic field are $\sigma_{xx}=0$
and $\sigma_{xy}=\rho/h$ Hartnoll2007:PRD . For comparison, in the massless
limit the DC conductivities in Eqs. 24a and 24b become
$\sigma_{xx}=\frac{\alpha
m^{2}\left|c_{1}\right|r_{h}^{3}}{h^{2}}+\mathcal{O}(m^{4})\quad\text{and}\quad\sigma_{xy}=\frac{\rho}{h}+\mathcal{O}(m^{4}).$
(26)
In the massive limit $m\to+\infty$, the DC conductivities have the
asymptotic behaviors
$\sigma_{xx}=\mathcal{L}^{\left(1,0\right)}(s_{h},p_{h})+\mathcal{O}(m^{-2})\quad\text{and}\quad\sigma_{xy}=-\mathcal{L}^{\left(0,1\right)}(s_{h},p_{h})+\mathcal{O}(m^{-2}).$
(27)
which agree with the calculation in which the NLED field is treated as a
probe Guo2017:PRD , since the geometry is dominated by the massive terms.
#### III.2.2 Zero Field and Charge Density Limits
In the zero-field limit $h=0$, the DC conductivities become
$\sigma_{xx}=\mathcal{L}^{\left(1,0\right)}\left(\frac{A_{t}^{\prime
2}\left(r_{h}\right)}{2},0\right)-\frac{\rho^{2}}{\alpha
m^{2}c_{1}r_{h}^{3}}\quad\text{and}\quad\sigma_{xy}=-\mathcal{L}^{\left(0,1\right)}\left(\frac{A_{t}^{\prime
2}\left(r_{h}\right)}{2},0\right).$ (28)
where $A_{t}^{\prime}\left(r_{h}\right)$ is obtained by solving
$\rho=\mathcal{L}^{\left(1,0\right)}\left(\frac{A_{t}^{\prime
2}\left(r_{h}\right)}{2},0\right)r_{h}^{2}A_{t}^{\prime}\left(r_{h}\right).$
(29)
At zero charge density $\rho=0$, the DC conductivities become
$\sigma_{xx}^{-1}=\frac{1}{\mathcal{L}^{\left(1,0\right)}\left(-\dfrac{h^{2}}{2r_{h}^{4}},0\right)}-\dfrac{h^{2}}{\alpha
m^{2}c_{1}r_{h}^{3}}\quad\text{and}\quad\sigma_{xy}=-\mathcal{L}^{\left(0,1\right)}\left(-\frac{h^{2}}{2r_{h}^{4}},0\right).$
(30)
We find that the DC conductivities are in general non-zero and can be
interpreted as incoherent contributions Davison2015:JHEP , known as the
charge-conjugation-symmetric contribution $\sigma_{\text{ccs}}$. There is
another contribution, $\sigma_{\text{diss}}$, from the explicit charge density
relaxed by momentum dissipation, which depends on the charge density $\rho$.
The results show that, for a general NLED model, the DC conductivities usually
depend on $\sigma_{\text{diss}}$ and $\sigma_{\text{ccs}}$ in a nontrivial
way.
#### III.2.3 High Temperature Limit
Finally, we consider the high temperature limit
$T\gg\max\left\\{\sqrt{h},\sqrt{\rho},|c_{1}|\alpha m^{2},\sqrt{|c_{2}|}\alpha
m\right\\}$. In this limit, Eq. 13 gives $T\approx\dfrac{3}{4\pi}r_{h}$. The
longitudinal resistivity then reduces to
$R_{xx}=\frac{1}{1+\theta^{2}}\left\\{1+\frac{27}{64\pi^{3}\alpha\left|c_{1}\right|T^{3}}\left[h^{2}\left(1+\theta^{2}\right)+2h\theta\rho-\frac{1-\theta^{2}}{1+\theta^{2}}\rho^{2}\right]\right\\}+\mathcal{O}\left(T^{-6}\right).$
(31)
The nonlinear effects are suppressed by the temperature, so we keep only the
leading order. Metals and insulators usually have different temperature
dependences: in metallic materials phonon scattering enlarges the resistivity,
while in insulating materials the thermal excitation of carriers promotes the
conductivity. The metal-insulator transition can thus occur when the
coefficient of $1/T^{3}$ changes sign, as shown in Fig. 1 in the
$h/\rho$-$\theta$ parameter space. The $\theta$ term can break the
$\left(\rho,h\right)\to\left(\rho,-h\right)$ and
$\left(\rho,h\right)\to\left(-\rho,h\right)$ symmetries of $\sigma_{ij}$ and
$R_{ij}$, but $\sigma_{ij}$ and $R_{ij}$ remain invariant under
$\left(\rho,h\right)\to\left(-\rho,-h\right)$, so the phases are centrally
symmetric in the parameter plane.
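The sign analysis above can be checked directly. The sketch below (our own, not from the paper) evaluates the bracket of Eq. 31 that multiplies the $T^{-3}$ correction, with $\rho$ scaled to one; a positive value means the resistivity grows as the temperature drops, i.e. insulating behavior, while a negative value means metallic behavior:

```python
def highT_coeff(h_over_rho, theta):
    """Bracket of Eq. 31 with rho = 1:
    h^2 (1+theta^2) + 2 h theta rho - (1-theta^2)/(1+theta^2) rho^2.
    Its sign decides metal vs insulator in the high-T limit."""
    x = h_over_rho
    return x**2 * (1 + theta**2) + 2 * x * theta - (1 - theta**2) / (1 + theta**2)
```

For $\theta=0$ this reduces to $h^{2}-\rho^{2}$, reproducing the boundary $|h/\rho|=1$ of Eq. 33, and at $h=0$ the sign is positive only for $\theta^{2}>1$, matching the criterion discussed in the text.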
Fig. 1: Left panel: The metal-insulator phase diagram in the high temperature
limit. The red region has positive temperature derivative of $R_{xx}$ and thus
describes a metal, while the blue region has negative temperature derivative,
and hence an insulator. The black solid lines are the phase boundaries. Right
panel: The Mott-insulating region (blue) and the negative magneto-resistivity
region (red) in the high temperature limit.
The Mott-insulating and magneto-resistance behaviors are also presented in
Fig. 1. The Mott-insulating region is where $\partial_{|\rho|}R_{xx}>0$ in the
parameter plane. The strong electron-electron interaction prevents the charge
carriers from transporting. From Eq. 31 we see that, as long as
$\theta^{2}>1$, the Mott-insulating behavior emerges even in the absence of
the magnetic field. The magneto-resistance is defined as
$MR_{xx}=\frac{R_{xx}\left(h\right)-R_{xx}\left(0\right)}{R_{xx}\left(0\right)},$
(32)
and we see in Fig. 1 that the negative magneto-resistivity can only occur
with non-zero $\theta$. For $\theta=0$, Eq. 31 reduces to
$R_{xx}=1+\frac{27\left(h^{2}-\rho^{2}\right)}{64\pi^{3}\alpha\left|c_{1}\right|T^{3}}+\mathcal{O}\left(T^{-6}\right),$
(33)
which gives metallic behavior for $|h/\rho|<1$ and insulating behavior for
$|h/\rho|>1$. One always has $\partial_{|\rho|}R_{xx}<0$ and
$\partial_{|h|}R_{xx}>0$, indicating the absence of Mott-insulating behavior
and negative magneto-resistivity.
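These sign statements can also be illustrated numerically. In the sketch below (ours, not from the paper) the high-temperature $R_{xx}$ of Eq. 31 is used in the definition Eq. 32, with the prefactor $27/(64\pi^{3}\alpha\left|c_{1}\right|T^{3})$ lumped into a single small constant `k` (an assumption of the sketch):

```python
def mr_highT(h, rho, theta, k=0.1):
    """Magneto-resistance of Eq. 32 built from the high-T R_xx of Eq. 31,
    with the positive prefactor of the T^-3 term replaced by the constant k."""
    def bracket(hh):
        return (hh**2 * (1 + theta**2) + 2 * hh * theta * rho
                - (1 - theta**2) / (1 + theta**2) * rho**2)
    def Rxx(hh):
        return (1 + k * bracket(hh)) / (1 + theta**2)
    return (Rxx(h) - Rxx(0.0)) / Rxx(0.0)
```

For $\theta=0$ the numerator is proportional to $h^{2}$, so the magneto-resistivity is always non-negative, while a non-zero $\theta$ opens a window of $h$ with $MR_{xx}<0$, as in Fig. 1.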
## IV Various NLED Models
In this section, we will use Eqs. 13, 18, 24a and 24b to study the dependence
of the in-plane resistivity $R_{xx}$ on the temperature $T$, the charge
density $\rho$ and the magnetic field $h$ in some specific NLED models. For
convenience we rescale the $c_{i}$’s as $c_{1}\sim\alpha m^{2}c_{1}$ and
$c_{2}\sim\alpha^{2}m^{2}c_{2}$. The conventional Maxwell electrodynamics is
presented first, with a detailed discussion of the dependence of $R_{xx}$ and
$R_{xy}$ on the massive gravity coupling parameters $c_{1}$ and $c_{2}$, the
charge density $\rho$, the magnetic field $h$ and the temperature $T$. Then
the Chern-Simons $\theta$ term is introduced as an extension to investigate
the CP-violating effect. Finally, we discuss the Born-Infeld electrodynamics
and the influence of nonlinear effects on the DC resistivity.
The high temperature behaviors have been discussed in Sec. III.2.3, so we will
mainly focus on the behavior of $R_{xx}$ around $T=0$ in this section.
### IV.1 Maxwell Electrodynamics
We first consider the Maxwell electrodynamics, in which $\mathcal{L}(s,p)=s$.
From Eq. 18 we find
$A_{t}^{\prime}\left(r_{h}\right)=\frac{\rho}{r_{h}^{2}},$ (34)
and substituting it into Eq. 13 we get the equation
$4\pi Tr_{h}-3r_{h}^{2}-c_{1}r_{h}-c_{2}+\frac{\rho^{2}+h^{2}}{4r_{h}^{2}}=0.$
(35)
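Eq. 35 has no simple closed-form root, but it is easy to solve numerically. A sketch of ours (not from the paper; `scipy` and the bracket are assumptions, and the divergence of the last term as $r_{h}\to 0$ guarantees a sign change when $\rho^{2}+h^{2}>0$):

```python
import numpy as np
from scipy.optimize import brentq

def eq35(r, T, rho, h, c1, c2):
    """Left-hand side of Eq. 35 (Maxwell case, rescaled c1 and c2)."""
    return 4 * np.pi * T * r - 3 * r**2 - c1 * r - c2 + (rho**2 + h**2) / (4 * r**2)

def horizon_radius(T, rho, h, c1=-1.0, c2=0.0):
    """Horizon radius r_h as a root of Eq. 35; assumes rho^2 + h^2 > 0
    so the left-hand side is positive near r = 0 and negative at large r."""
    return brentq(lambda r: eq35(r, T, rho, h, c1, c2), 1e-8, 1e3)

rh = horizon_radius(T=0.4, rho=0.3, h=0.2)
```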
Notice that $c_{1}$ acts as an effective temperature correction and makes the
solution complicated even at $T=0$. Although $c_{2}$ is a constant playing a
role similar to the momentum dissipation strength in Ref. Peng2018:EPJC , its
effect on the resistivity is quite different. The effect of $c_{1}$ and
$c_{2}$ on $R_{xx}$ and $R_{xy}$ at zero temperature is presented in Fig. 2.
For $c_{1}=0$, $R_{xx}=0$ and $R_{xy}$ is constant, independent of $c_{2}$.
For non-zero $c_{1}$, $R_{xx}$ increases and saturates, while $R_{xy}$
decreases to zero as $c_{1}$ becomes more negative. A larger $|c_{2}|$ makes
the surface steeper.
Fig. 2: The influence of $c_{1}$ and $c_{2}$ on $R_{xx}$ and $R_{xy}$. We take
$T=0$, $\rho=0.3$ and $h=0.2$ as an example. Fig. 3: Upper left: $R_{xx}$ as a
function of $\rho$ and $h$. Upper right: Contour plot of $R_{xx}$. Lower left:
$R_{xx}$ vs $\rho$ with $h=0,0.5,1,1.5,2$. Lower right: $MR_{xx}$ vs $h$ with
$\rho=0,0.1,0.2,0.3,0.4,0.5$. We set $T=0$, $c_{1}=-1$, and $c_{2}=0$.
In the following Figs. 3, 4 and 5 we separately investigate the resistivities
as functions of $\rho$ and $h$ under different $T$ and $c_{2}$. Because
$c_{1}$ acts as an effective temperature modification, we set $c_{1}=-1$.
First, we consider zero temperature $T=0$ and $c_{2}=0$. In Fig. 3, no
Mott-insulating behavior is observed and the magneto-resistivity is positive.
In addition, a non-zero charge density suppresses the resistivity, as more
charge carriers are introduced.
Fig. 4: Upper left: $R_{xx}$ as a function of $\rho$ and $h$. Upper right:
Contour plot of $R_{xx}$. Lower left: $R_{xx}$ vs $\rho$ with
$h=0,0.5,1,1.5,2$. Lower right: $MR_{xx}$ vs $h$ with
$\rho=0,0.1,0.2,0.3,0.4,0.5$. We set $T=0.4$, $c_{1}=-1$, and $c_{2}=0$.
In Fig. 5 the effect of $c_{2}$ at zero temperature is presented. The figures
show features similar to those in Fig. 4. A larger $|c_{2}|$ also suppresses
the resistivity, though its influence is less pronounced than that of the
temperature. None of the plots presented here exhibits Mott-insulating
behavior or negative magneto-resistivity. The finite temperature situation
with $T=0.4$ and $c_{2}=0$ is studied in Fig. 4, where the saddle surface is
relatively flatter than that in Fig. 3. The magneto-resistivity is greatly
suppressed by the temperature, and the differences induced by the charge
density $\rho$ are smoothed out.
Fig. 5: Upper left: $R_{xx}$ as a function of $\rho$ and $h$. Upper right:
Contour plot of $R_{xx}$. Lower left: $R_{xx}$ vs $\rho$ with
$h=0,0.5,1,1.5,2$. Lower right: $MR_{xx}$ vs $h$ with
$\rho=0,0.1,0.2,0.3,0.4,0.5$. We set $T=0$, $c_{1}=-1$, and $c_{2}=-4$.
Then we present the dependence of $R_{xx}$ on $h/\rho$ and $T/\sqrt{\rho}$ for
$c_{1}=-1$. The effect of different $c_{1}$ and $c_{2}$ can be deduced by
changing the temperature correspondingly, based on the analysis above. For
$h<\rho$, Fig. 6 shows that the temperature dependence of $R_{xx}$ is
monotonic, corresponding to metallic behavior. For $h>\rho$, $R_{xx}$
increases first and then decreases monotonically after reaching a maximum;
the insulating behavior appears at high temperatures in this case. Moreover,
if we take larger $c_{i}$, the metallic behavior at $h>\rho$ disappears and
$R_{xx}$ decreases monotonically with increasing temperature. We therefore
conclude that increasing the magnetic field induces a finite-temperature
transition or crossover from metallic to insulating behavior. The last
sub-figure of Fig. 6 shows the influence of temperature on the
magneto-resistivity: as the temperature increases, the magneto-resistivity is
remarkably suppressed.
Fig. 6: Upper left: $R_{xx}$ as a function of $h/\rho$ and $T/\sqrt{\rho}$.
Upper right: Contour plot of $R_{xx}$. Lower left: $R_{xx}$ vs $T/\sqrt{\rho}$
with $h/\rho=0,0.5,1,1.5,2$. Lower right: $MR_{xx}$ vs $h$ with
$T/\sqrt{\rho}=0,0.1,0.2,0.3,0.4,0.5$. We set $T=0$, $c_{1}=-1$, and
$c_{2}=0$.
At the end of the discussion of Maxwell electrodynamics, the Hall angle
$\theta_{H}$, defined as
$\theta_{H}=\arctan\frac{\sigma_{xy}}{\sigma_{xx}},$ (36)
is shown in Fig. 7. At zero temperature, for large $h$ or $\rho$ the Hall
angle saturates, with its sign depending on the signs of $h$ and $\rho$. The
surface is anti-symmetric under the transformations
$\left(\rho,h\right)\to\left(\rho,-h\right)$ and
$\left(\rho,h\right)\to\left(-\rho,h\right)$. As expected, the temperature
evidently suppresses the Hall angle.
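In numerical work the two-argument arctangent is a convenient way to evaluate Eq. 36: for $\sigma_{xx}>0$ it coincides with $\arctan(\sigma_{xy}/\sigma_{xx})$, and it remains well defined as $\sigma_{xx}\to 0$, where the Hall angle saturates at $\pm\pi/2$ (a small sketch of ours, not from the paper):

```python
import math

def hall_angle(sxx, sxy):
    """Hall angle of Eq. 36 via atan2; equals arctan(sxy/sxx) for sxx > 0
    and saturates at +/- pi/2 when the longitudinal conductivity vanishes."""
    return math.atan2(sxy, sxx)
```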
Fig. 7: Hall angle at $T=0$ (left panel) and $T=0.4$ (right panel). We set
$c_{1}=-1$ and $c_{2}=0$.
### IV.2 Maxwell-Chern-Simons Electrodynamics
Lorentz and gauge invariance do not forbid the appearance of the CP-violating
Chern-Simons $\theta$ term in the electrodynamics Lagrangian
$\mathcal{L}(s,p)=s+\theta p,$ (37)
and the Chern-Simons theory is of vital importance for both the integer and
fractional quantum Hall effects in condensed matter physics Zhang1989:PRL ;
Fradkin1991:PRB ; Zhang1992:IJMPB ; DavidTong:QHE . The value of $\theta$
can be related to the Hall conductivity in units of $e^{2}/\hslash$. Taking
the Chern-Simons term into consideration, Eq. 18 gives
$A_{t}^{\prime}\left(r_{h}\right)=\frac{\rho+\theta h}{r_{h}^{2}},$ (38)
and Eq. 13 becomes
$4\pi
Tr_{h}-3r_{h}^{2}-c_{1}r_{h}-c_{2}+\frac{\left(1+\theta^{2}\right)h^{2}+2\theta
h\rho+\rho^{2}}{4r_{h}^{2}}=0.$ (39)
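Note that the field term in Eq. 39 can be rewritten using the identity $(1+\theta^{2})h^{2}+2\theta h\rho+\rho^{2}=(\rho+\theta h)^{2}+h^{2}$, so Eq. 39 is Eq. 35 with $\rho\to\rho+\theta h$, consistent with the shift in Eq. 38. A numerical sketch of ours (`scipy` and the bracket are assumptions):

```python
import numpy as np
from scipy.optimize import brentq

def eq39(r, T, rho, h, theta, c1, c2):
    """Left-hand side of Eq. 39; the numerator equals (rho + theta*h)^2 + h^2."""
    num = (1 + theta**2) * h**2 + 2 * theta * h * rho + rho**2
    return 4 * np.pi * T * r - 3 * r**2 - c1 * r - c2 + num / (4 * r**2)

def horizon_mcs(T, rho, h, theta, c1=-1.5, c2=0.0):
    """Horizon radius for Maxwell-Chern-Simons; assumes a non-zero numerator
    so the left-hand side diverges to +infinity as r -> 0."""
    return brentq(lambda r: eq39(r, T, rho, h, theta, c1, c2), 1e-8, 1e3)

rh = horizon_mcs(T=0.0, rho=0.3, h=0.2, theta=2.0)
```

At $\theta=0$ this reduces to the Maxwell equation, Eq. 35.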
We then reinvestigate the dependence of the DC resistivity $R_{xx}$ on $\rho$
and $h$. As one expects, a saddle surface similar to that of Maxwell
electrodynamics is found, and the reflection asymmetry and central symmetry
due to the $\theta$ term are shown in the upper panel of Fig. 8. More
surprising is the appearance of Mott-insulating behavior,
$\partial_{|\rho|}R_{xx}>0$, and negative magneto-resistivity, $MR_{xx}<0$. At
zero field $R_{xx}$ is an even function of $\rho$: as $|\rho|$ increases,
$R_{xx}$ increases and reaches a maximum, showing the Mott-insulating feature,
and then decreases monotonically. For finite magnetic field, however, the
reflection symmetry between positive and negative $\rho$ is broken. Along the
positive $\rho$ direction the behavior of $R_{xx}$ is similar, but for
negative $\rho$ it is more complicated: as $|\rho|$ increases, $R_{xx}$
decreases and reaches a minimum, and then shares a behavior similar to the
positive $\rho$ case.
Fig. 8: Upper left: $R_{xx}$ as a function of $h$ and $\rho$. Upper right:
Contour plot of $R_{xx}$. Lower left: $R_{xx}$ vs $\rho$ with
$h=0,0.05,0.1,0.2,0.3$. Lower right: $MR_{xx}$ vs $h$ with
$\rho=-0.6,-0.3,0,0.3,0.6$. We set $T=0$, $c_{1}=-1.5$, $c_{2}=0$ and
$\theta=2$.
We further study how the value of $\theta$ affects the Mott-insulating
behavior in Fig. 9. To clearly show the region $\partial_{|\rho|}R_{xx}>0$,
we set the value in the other regions to zero. Results similar to those in
Ref. Peng2018:EPJC have been obtained. At zero field there is no
Mott-insulating behavior for $\theta=1$, while a larger $\theta$ makes it
possible even with $h=0$. On the other hand, as $\theta$ increases, the
Mott-insulating region becomes larger but $\partial_{|\rho|}R_{xx}$ becomes
smaller.
Fig. 9: Mott-insulating region for $\theta=1,2,3$ in the parameter plane for
Maxwell-Chern-Simons electrodynamics at zero temperature. We set $c_{1}=-1.5$
and $c_{2}=0$.
The negative magneto-resistivity revealed in Fig. 8 is also shown in the
$\rho$-$h$ parameter plane in Fig. 10, from which we see that it emerges in a
finite interval of $h/\rho$ of about $\left[-1.2,0\right]$; the value in the
positive magneto-resistivity region is set to zero as well. At larger magnetic
field, the negative magneto-resistivity disappears and the magneto-resistivity
increases almost linearly with $h$. At zero density the negative
magneto-resistivity does not occur.
Fig. 10: Negative magneto-resistivity region for $\theta=3$ in the parameter
plane for Maxwell-Chern-Simons electrodynamics at zero temperature. We set
$c_{1}=-1.5$ and $c_{2}=0$.
Finally we show the behavior of the Hall resistivity $R_{xy}$ and the Hall
conductivity $\sigma_{xy}$ in Fig. 11. The negative Hall resistivity comes
from the negative transverse conductivity $\sigma_{xy}$, which depends on the
direction of the magnetic field $h$ and the type of charge carriers. From the
figure we see that at zero field the transverse conductivity $\sigma_{xy}$ is
exactly $-\theta$ regardless of the charge density $\rho$, which agrees with
the scenario in quantum Hall physics Zhang1992:IJMPB ; Qi2011:RMP . At finite
magnetic field, for positive $\rho$ the Hall conductivity decreases to zero
and then changes its sign, while for negative $\rho$ the Hall conductivity
remains negative. We can also study the impact of the magnetic field on the
Hall conductivity at fixed $\rho$. At strong fields the transverse
conductivity tends to zero for all $\rho$, as a result of localization. For
positive $\rho$, in a positive magnetic field the Hall conductivity decreases
to zero, changes its sign and increases to a maximum, and finally decreases
monotonically to zero from the positive side. In a negative magnetic field,
the Hall conductivity increases to a negative maximum and then decreases to
zero from the negative side. The case of negative $\rho$ can be obtained by
simply applying the transformation
$\left(\rho,h\right)\to\left(-\rho,-h\right)$ in the discussion above.
Fig. 11: Upper left: $R_{xy}$ as a function of $h$ and $\rho$. Upper right:
Contour plot of $R_{xy}$. Lower left: $\sigma_{xy}$ vs $\rho$ with
$h=0,0.05,0.1,0.2,0.3$. Lower right: $\sigma_{xy}$ vs $h$ with
$\rho=-6,-3,0,3,6$. We set $T=0$, $c_{1}=-1.5$, $c_{2}=0$ and $\theta=2$.
### IV.3 Born-Infeld Electrodynamics
The Born-Infeld electrodynamics is described by a square-root Lagrangian
Born1933:Nat ; Born1934:PRSLA
$\mathcal{L}(s,p)=\frac{1}{a}\left(1-\sqrt{1-2as-a^{2}p^{2}}\right),$ (40)
where the coupling parameter $a=\left(2\pi\alpha^{\prime}\right)^{2}$ relates
to the Regge slope $\alpha^{\prime}$. It is believed that such a NLED governs
the dynamics of electromagnetic fields on D-branes. If we take the zero-slope
limit $\alpha^{\prime}\to 0$, the Maxwell Lagrangian is recovered
$\mathcal{L}(s,p)=s+\mathcal{O}(a).$ (41)
The Born-Infeld electrodynamics has the advantages of eliminating the
divergence of the electrostatic self-energy and incorporating a maximal
electric field zwiebach2004:CUP . This can be seen from the solution of Eq. 18
$A_{t}^{\prime}\left(r\right)=\frac{\rho}{\sqrt{a\left(\rho^{2}+h^{2}\right)+r^{4}}},$
(42)
which is finite when $r\to 0$. The horizon radius $r_{h}$ is solved from
$4\pi
r_{h}T-3r_{h}^{2}-c_{1}r_{h}-c_{2}+\frac{\rho^{2}}{2\sqrt{a\left(h^{2}+\rho^{2}\right)+r_{h}^{4}}}+\frac{1}{2a}\left(\sqrt{\frac{\left(ah^{2}+r_{h}^{4}\right)^{2}}{a\left(h^{2}+\rho^{2}\right)+r_{h}^{4}}}-r_{h}^{2}\right)=0.$
(43)
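Numerically, Eq. 43 can be solved like the Maxwell case, except that for $a<0$ the root must be sought above the singular radius where $a\left(h^{2}+\rho^{2}\right)+r^{4}$ vanishes. A sketch of ours (not from the paper; `scipy`, the bracket offset, and the sample parameters are assumptions):

```python
import numpy as np
from scipy.optimize import brentq

def eq43(r, T, rho, h, a, c1=-1.0, c2=0.0):
    """Left-hand side of Eq. 43 for Born-Infeld electrodynamics."""
    q = a * (h**2 + rho**2) + r**4
    return (4 * np.pi * T * r - 3 * r**2 - c1 * r - c2
            + rho**2 / (2 * np.sqrt(q))
            + (np.sqrt((a * h**2 + r**4)**2 / q) - r**2) / (2 * a))

def horizon_bi(T, rho, h, a, c1=-1.0, c2=0.0):
    """Root of Eq. 43; for a < 0 the search is bracketed just above the bound
    r_s = (|a| (h^2 + rho^2))^(1/4) discussed in the text."""
    rs = (abs(a) * (h**2 + rho**2))**0.25 if a < 0 else 0.0
    return brentq(lambda r: eq43(r, T, rho, h, a, c1, c2), rs + 1e-6, 1e3)

rh = horizon_bi(T=0.0, rho=0.3, h=0.2, a=-0.4)
```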
Fig. 12: $R_{xx}$ as a function of $h$ and $\rho$, together with cross-section
curves at fixed $h$ and $\rho$, for $a=-0.4$ (upper panel) and $a=-1$ (lower
panel). The surface terminates where $h^{2}+\rho^{2}$ reaches the upper bound.
The DC resistivity is then studied in the framework of Born-Infeld
electrodynamics, and the results are shown in Fig. 12. For positive $a$ the
behavior of $R_{xx}$ is similar to the Maxwell case Peng2018:EPJC ;
Cremonini2017:JHEP , while a negative $a$ brings in more interesting
phenomena. However, for negative $a$ Eq. 42 suffers a singularity at
$r=r_{s}\equiv\left[\left|a\right|\left(h^{2}+\rho^{2}\right)\right]^{1/4}$,
and a physical solution requires $r_{h}>r_{s}$, setting an upper bound for
$h^{2}+\rho^{2}$ Peng2018:EPJC . We then present the two cases $a=-0.4$ and
$a=-1$, respectively. For $a=-0.4$, the saddle surface is similar to the
previous results, and there is neither Mott-insulating behavior nor negative
magneto-resistivity. However, if one increases the absolute value of the
negative $a$, the region admitting a physical solution for $R_{xx}$ shrinks
and is truncated at the upper bound of $h^{2}+\rho^{2}$. For zero and small
finite magnetic field the Mott-insulating behavior is absent, while for a
larger field it emerges. For relatively small $\rho$ the negative
magneto-resistivity is present, and increasing $\rho$ destroys it. To expose
the role of $a$ we expand the interaction between electrons to first order
Peng2018:EPJC
$F(a)=\rho
A_{t}^{\prime}\left(r\right)\sim\frac{\rho^{2}}{r^{2}}\left[1-\frac{a}{2r^{4}}\left(h^{2}+\rho^{2}\right)\right]+\mathcal{O}(a^{2}),$
(44)
in which the leading order is the familiar Coulomb interaction and the
nonlinearity parameter serves as an effective modification. A positive $a$
suppresses the interaction, so we do not expect phenomena different from the
Maxwell electrodynamics. However, for a negative $a$ the interaction is
enhanced at $r_{h}$
$\frac{F(a)}{F(0)}=1+\frac{1}{2}\left(\frac{r_{s}}{r_{h}}\right)^{4},$ (45)
and we expect that the NLED model with negative $a$ can capture some features
of strongly correlated systems. Besides, the Chern-Simons term does not appear
in the leading-order expansion Eq. 41, and it can be deduced from Eq. 33 that
in this order the system will not exhibit Mott-insulating behavior or negative
magneto-resistivity. Thus we conclude that the Born-Infeld electrodynamics
provides a mechanism, different from the Chern-Simons theory, to give rise to
Mott-insulating behavior and negative magneto-resistivity, and that a
transition can be induced at finite temperature.
One can also consider another construction of the square-root Lagrangian
zwiebach2004:CUP
$\mathcal{L}(s,p)=\frac{1}{a}\left(1-\sqrt{1-2as}\right),$ (46)
which has the same leading-order expansion as Eq. 41 and reproduces the
results of Born-Infeld electrodynamics at zero field. The solution of Eq. 18
gives
$A_{t}^{\prime}\left(r\right)=\frac{\rho}{r^{2}}\sqrt{\frac{ah^{2}+r^{4}}{a\rho^{2}+r^{4}}},$
(47)
from which the effective interaction is deduced to be
$F=\rho
A_{t}^{\prime}\left(r\right)\sim\frac{\rho^{2}}{r^{2}}\left[1-\frac{a}{2r^{4}}\left(\rho^{2}-h^{2}\right)\right]+\mathcal{O}(a^{2}).$
(48)
The radius equation is
$4\pi
r_{h}T-3r_{h}^{2}-c_{1}r_{h}-c_{2}+\frac{h^{2}}{2r_{h}^{2}}+\frac{r_{h}^{2}}{2a}\left(\sqrt{\frac{a\rho^{2}+r_{h}^{4}}{ah^{2}+r_{h}^{4}}}-1\right)=0.$
(49)
We re-examine $R_{xx}$ in the square-root electrodynamics for negative $a$.
For $a=-0.4$ the saddle surface is similar to the previous results, and no
Mott-insulating behavior is shown at either zero or finite field. Surprisingly,
at fixed larger $\rho$ a transition from positive to negative
magneto-resistivity is observed, roughly for $\rho>h$. We then study the case
of a larger negative $a$. To our surprise, the saddle surface changes its
direction and looks as if rotated by $90^{\circ}$. We find that the
Mott-insulating behavior appears and becomes significant at large $\rho$, and
negative magneto-resistivity is observed for various $\rho$.
Fig. 13: $R_{xx}$ as a function of $h$ and $\rho$, together with cross-section
curves at fixed $h$ and $\rho$, for $a=-0.4$ (upper panel) and $a=-1$ (lower panel).
## V Conclusion
In this work the black brane solution of four-dimensional massive gravity with
backreacted NLED is obtained, and with the dictionary of gauge/gravity
duality, the transport properties of the strongly correlated systems in the
presence of finite magnetic field in 2+1-dimensional boundary is studied. In
our holographic setup the bulk geometry and NLED field are perturbed, and the
DC conductivities are obtained in the linear response regime. Then some
general properties are obtained in various limit, which agrees well with the
previous work. To make it concrete, we present the study of the conventional
Maxwell electrodynamics, the topological non-trivial Maxwell-Chern-Simons
electrodynamics, and the Born-Infeld electrodynamics with string-theoretical
correction taken into consideration. We concentrate on two interesting
phenomena, i.e., the Mott-insulating behavior and negative magneto-
resistivity, and results at zero temperature are summarized in Tab. 1.
| Lagrangian | Expression | Parameter | Mott-insulating behavior | Negative magneto-resistivity
---|---|---|---|---
Maxwell | $s$ | — | No | No
Maxwell-Chern-Simons | $s+\theta p$ | $\theta$ | See Fig. 9 | See Fig. 10
Born-Infeld | $\dfrac{1}{a}\left(1-\sqrt{1-2as-a^{2}p^{2}}\right)$ | $a>0$ | No | No
 | | $a=-0.4$ | No | No
 | | $a=-1$ | Finite $h$ and small $\rho$; see Fig. 12 | Finite $h$ and small $\rho$; see Fig. 12
Square-root | $\dfrac{1}{a}\left(1-\sqrt{1-2as}\right)$ | $a>0$ | No | No
 | | $a=-0.4$ | No | Larger $\rho$; see Fig. 13
 | | $a=-1$ | Larger $\rho$; see Fig. 13 | Yes
Tab. 1: Summary of the Mott-insulating behavior and negative magneto-resistivity for various NLED models at zero temperature $T=0$.
The influence of the massive gravity coupling parameters on the in-plane
resistivity is compared with that of the temperature: we find that $c_{1}$
behaves as an effective temperature correction and that a non-zero $c_{1}$
renders the DC conductivity finite, while $c_{2}$ has a less significant
effect on $R_{xx}$ than $c_{1}$. Moreover, the dependence on $\rho$ and $h$
is shown, and we find that the magnetic field can induce a metal-insulator
transition as well as Mott-insulating behavior and negative
magneto-resistivity. Two different mechanisms, the Chern-Simons term and a
negative nonlinearity parameter, are shown to give rise to Mott-insulating
behavior and negative magneto-resistivity. We hope our work can explain some
experimental phenomena in strongly correlated systems.
## Acknowledgment
We are grateful to Peng Wang for useful discussions. This work is supported by
the NSFC (Grant No. 11947408).
# Credit Crunch: The Role of Household Lending Capacity in the Dutch Housing
Boom and Bust 1995-2018
Menno Schellekens1 and Taha Yasseri1,2,3,4111Corresponding author: Taha
Yasseri, D405 John Henry Newman Building, University College Dublin,
Stillorgan Rd, Belfield, Dublin 4, Ireland. Email: [email protected].
1Oxford Internet Institute, University of Oxford, Oxford, UK
2School of Sociology, University College Dublin, Dublin, Ireland
3Geary Institute for Public Policy, University College Dublin, Dublin, Ireland
4Alan Turing Institute for Data Science and AI, London, UK
###### Abstract
What causes house prices to rise and fall? Economists identify household
access to credit as a crucial factor. "Loan-to-Value" and "Debt-to-GDP" ratios
are the standard measures for credit access. However, these measures fail to
explain the depth of the Dutch housing bust after the 2009 Financial Crisis.
This work is the first to model household lending capacity based on the
formulas that Dutch banks use in the mortgage application process. We compare
the ability of regression models to forecast housing prices when different
measures of credit access are utilised. We show that our measure of household
lending capacity is a forward-looking, highly predictive variable that
outperforms ‘Loan-to-Value’ and debt ratios in forecasting the Dutch crisis.
Sharp declines in lending capacity foreshadow the market deceleration.
Keywords— Housing Price, Loan-to-Value, Dutch Market, Lending Capacity, Loan-
to-Income
## 1 Introduction
The flow of credit from the financial sector to the housing market is critical
for understanding house prices, because households usually finance properties
with mortgage debt. The availability of credit determines how much households
can borrow and thus how much they can bid on properties. As relaxed credit
constraints allow all market participants to borrow more, households often
have to borrow more to stay competitive, and house prices rise quickly
(Bernanke and Gertler, 1995; Kiyotaki and Moore, 1997). After the Great
Financial Crisis, scholars started emphasising the importance of credit
conditions to the development of housing prices.
Studies of housing markets in advanced economies consistently find that
empirical models that include measures of credit conditions outperform models
that do not. Such studies have been conducted for the United States (Duca
and Muellbauer, 2016), Ireland (Lyons, 2018), Finland (Oikarinen, 2009),
Norway (Anundsen and Jansen, 2013), France (Chauvin and Muellbauer, 2014) and
Sweden (Turk, 2016). Competing measures of credit conditions have emerged in
the literature. Some authors employ the average ‘loan-to-value’ (LTV) ratio of
first-time buyers (Duca and Muellbauer, 2016). The LTV is the mortgage for a
property divided by the price paid. A low LTV means that the lender left a
large margin of safety between the mortgage and the market value of the home;
the value of the collateral would thus exceed the mortgage even if the house
decreased in value. A high LTV indicates that lenders are willing to tolerate
more risk. Others choose mortgage-debt-to-GDP ratios (Oikarinen, 2009),
survey data from senior bank employees (van der Veer and Hoebrechts, 2016) and
indices of the ‘ease’ of credit policy (Chauvin and Muellbauer, 2014).
The Dutch housing market experienced a boom and bust in the period 1995-2018.
The Dutch case is a puzzle because existing models do not explain the size of
the boom and the depth of the bust. Figure 1 shows that house prices rose
rapidly in the 1990s and early 2000s and entered a period of sustained
decline after 2009. Despite falling interest rates, housing prices fell 16%
from their peak value. The crisis came at great cost to many Dutch families.
In 2015, 28% of homes were deemed ‘under water’: the home was worth less than
the outstanding mortgage debt (Centraal Plan Bureau, 2014).
Based on a qualitative analysis of Dutch housing reforms, we develop a novel
approach to forecasting house prices that utilises a new measure of credit
access. We study the ‘Loan-to-Income’ formulas that banks use to calculate how
much money households can borrow relative to their income. We model these
formulas and calculate lending capacity for the average Dutch household from
1995 to 2018 with three parameters: average household income, mortgage
interest rates and regulatory changes. We find household lending capacity is a
more accurately predictor of house prices in the Netherlands than LTV ratios
and ‘debt-to-GDP’ ratios. In a test on out-of-sample data, a univariate OLS
model with household lending capacity provides the most robust forecasts.
The next section provides an analysis of the formulas that govern access to
credit in the Netherlands and how we model these formulas. In the methods
section, we describe our dataset and the specification of statistical models.
Lastly, we report our findings and discuss the implications and limitations of
our approach.
## 2 Modelling Household Lending Capacity
The notion that household lending capacity ($HLC$) - the amount of mortgage
debt one can legally borrow to finance a home - influences house prices is not
new. Economists generally believe that the ability to afford loans is one of
the channels through which income and interest rates affect prices (ESB,
2017). We hypothesise that the details of how $HLC$ is calculated matter. The
rules that govern household borrowing relative to income are called ‘Loan to
Income’ (LTI) formulas. This section is divided into two parts: an introduction
to LTI formulas in the Netherlands and an elaboration on how we model LTI
formulas.
Figure 1: Statistics From The Central Bureau of Statistics (CBS) in the
Netherlands 1995-2018
### 2.1 LTI Formulas in the Netherlands
Before 2009, the Netherlands had no national laws or regulations that
restricted lending capacity. However, the associations of banks and insurers -
the dominant mortgage providers - authored the ‘Gedragscode Hypothecaire
Financieringen’ (Behavioral Code Mortgage Finance, GHF) in 1999. GHF specified
norms that the members agreed to adhere to for fair and responsible mortgage
finance, including norms pertaining to the height of mortgages. The
association specified that households would not be allowed to spend more than
a specific percentage of their available income on mortgage repayment,
depending on their family type and income bracket (NIBUD, 2018). For the
average family, the maximum percentage of income designated for housing
(“woonquote”) ranged from 21% to 40% depending on their economic circumstances.
Banks were allowed to deviate from these norms in exceptional cases, but the
Dutch regulator found that the norms were mostly adhered to (De Nederlandsche
Bank, 2009). The norms were tightened in response to the Financial Crisis and
translated into law in 2011 (Rabobank, 2014, 2015).
The housing market faced a number of reforms in the period 2011-2018. One key
reform directly impacted lending capacity. After the advent of the Financial
Crisis of 2007 - in which mortgage debt played a key role in the American
crisis and to a lesser extent the near collapse of Dutch banks - the Dutch
government aimed to decrease the burden of mortgage debt on the economy
(International Monetary Fund, 2019). The Dutch Central Bank identified a
category of mortgage products - ‘krediethypotheken’ (interest-only mortgages)
- as a source of increasing mortgage debt. Interest-only mortgages grew their
market share from 10% to 50% in the period 1995-2008 (De Nederlandsche Bank,
2009). Customers with interest-only plans could choose to pay off their
mortgage at their own pace or not at all, whereas traditional annuity and
linear payment plans require households to amortise their mortgage every
month. The fact that borrowers only had to budget interest payments meant that
households could borrow more with an interest-only mortgage than with a
traditional annuity. As of 2011, the government required lenders to use the
maximum annuity as the maximum mortgage, even for an interest-only mortgage
(Centraal Plan Bureau, 2018; Rabobank, 2014). As of 2013, the government no
longer allowed households to deduct interest payments from taxes for new
interest-only mortgages (De Nederlandsche Bank, 2019b). This removed a crucial
fiscal benefit from the category and made interest-only mortgages more
expensive than annuity mortgages (the Dutch government decided to gradually
phase out the mortgage interest deduction altogether in 2014). The next
section details how the transition between regulatory regimes is modelled.
### 2.2 Modelling LTI formulas
LTI formulas govern how much a household can borrow given their income and
mortgage interest rates. Lenders use formulas to calculate the legal limit
that households can borrow based on income, interest rates, risk of default,
expenses, debts, housing costs and more. We boil the formulas down to a
single equation each for interest-only mortgages and annuities.
$LC_{k}=\frac{I_{h}}{r+c}$
where $I_{h}$ is annual income allocated to housing expenses, $r$ is annual
interest rates and $c$ is a fixed annual cost expressed as a percentage of the
value of the home. The effect of interest rate changes on lending capacity is
neither linear nor independent of income. This is significant, because
independence and linearity are key assumptions in most regression models. This
is also true of annuities. The maximum annuity mortgage is the mortgage for
which households can pay the yearly interest and mandatory amortisation. The
simplified equation for the maximum annuity mortgage $LC_{a}$ reads:
$LC_{a}=I_{h}*f(r+c)$
where $I_{h}$ is income allocated to housing expenses, $r$ is interest rates
and $c$ is a fixed annual cost expressed as a percentage of the value of the
home. Function $f(x)$ is the standard annuity formula:
$f(x)=\frac{1-(1+\frac{x}{12})^{-360}}{\frac{x}{12}}$
Annuity mortgages are calculated differently than interest-only mortgages, so
they produce different lending capacities for the same income and interest
rates. In summary, the LTI formulas are non-linear functions in which the
effect of a change in income and interest rates is dependent on the value of
the other variable.
## 3 Data & Methods
Given that the dependencies and non-linearities cannot be modelled accurately
in linear regression models, we construct the measure ‘average household
lending capacity’ ($HLC$). The following section has three purposes. First, it
lays out the variables and their summary statistics. Second, it details the
construction of the key variable $HLC$. Finally, it describes the models and
model evaluation metrics.
### 3.1 Data
We primarily utilise publicly available income and price data from the Dutch
Central Bureau of Statistics (CBS) and De Nederlandsche Bank (DNB), the Dutch
central bank. Summary statistics can be found in Table 1; details on
sources and variable construction in the Appendix.
Table 1: Summary Statistics

Statistic | N | Mean | St. Dev. | Min | Pctl(25) | Pctl(75) | Max
---|---|---|---|---|---|---|---
Average House Price (EUR) | 92 | 201,180 | 50,896 | 89,792 | 179,627 | 237,662 | 267,464
Household Income (EUR) | 91 | 68,733 | 13,467 | 44,916 | 59,815 | 77,209 | 98,783
Interest Rates (%) | 92 | 5.382 | 1.338 | 2.840 | 4.728 | 6.325 | 8.400
Avg. LTV (%) | 88 | 101.3 | 2.6 | 96 | 100.2 | 103.4 | 103.9
Interest-Only Marketshare | 92 | 0.217 | 0.181 | 0.000 | 0.000 | 0.410 | 0.463
### 3.2 Model Specifications
Three models are tested: (1) a benchmark model with LTV values as a measure of
credit access, (2) a univariate model with average household lending capacity
($HLC$) and (3) a benchmark model plus $HLC$. Each of the models has house
prices ($HP$) as the dependent variable. The $\beta$ coefficients are
optimised to minimise the squared prediction error (see Section C.5.1).
#### 3.2.1 Benchmark Model
The benchmark model is specified as follows:
$HP_{t}=\beta_{0}+\beta_{1}I_{t}+\beta_{2}r_{t}+\beta_{3}LTV_{t}$
where $I$ is quarterly income, $r$ is the annual interest rate and $LTV$ is
the average ‘loan-to-value’ ratio of first-time buyers in quarter $t$.
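A minimal sketch of fitting this specification by ordinary least squares. The
data here are synthetic stand-ins with known coefficients (the CBS/DNB series
are not reproduced), so the recovered slopes can be checked against the truth:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 92                                  # number of quarters, as in Table 1
income = rng.normal(68.7, 13.5, n)      # synthetic stand-in for I_t
rates = rng.normal(5.4, 1.3, n)         # ... for r_t
ltv = rng.normal(101.3, 2.6, n)         # ... for LTV_t
hp = 10 + 2.0 * income - 5.0 * rates + 0.5 * ltv + rng.normal(0, 3, n)

# Design matrix [1, I_t, r_t, LTV_t]; lstsq minimises the squared error
X = np.column_stack([np.ones(n), income, rates, ltv])
beta, *_ = np.linalg.lstsq(X, hp, rcond=None)
print(beta[1:].round(2))                # slope estimates near [2, -5, 0.5]
```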
#### 3.2.2 $HLC$ Model
The univariate model only has $HLC$ as a lagged independent variable. $HLC$ is
the amount banks allowed the average household to borrow at time $t$:
$HP_{t}=\beta_{0}+\beta_{1}HLC_{t-i}$
where $HLC$ is constructed as a weighted average between the maximum interest-
only mortgage ($HLC_{k}$) and the maximum annuity mortgage ($HLC_{a}$):
$HLC=mHLC_{k}+(1-m)HLC_{a}$ $HLC_{k}=\frac{4IW}{(1-x)r+c}$
$HLC_{a}=\frac{I}{3}*W*a((1-x)r+c)$
where $m$ represents the market share of interest-only mortgages (see Section
C.4), $I$ is quarterly income ($I$ is multiplied by 4 to obtain yearly income
or divided by 3 to obtain monthly income), $W$ is the share of income used for
housing expenses (see Section C.2), $x$ is the tax rate at which households
can deduct interest (see Section C.1), $r$ is the interest rate and $c$ is the
annual cost of maintaining the house expressed as a percentage of the purchase
value (see Section C.3). The lagged, unfitted variable $HLC$ is shown in
Figure 2; its co-movement with house prices is evident.
Figure 2: The correlation between the average household lending capacity
(black) and average house prices (purple) is strong.
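The $HLC$ construction above can be sketched directly from its three
equations. The parameter values below are illustrative placeholders; the
actual series uses the CBS/DNB data and the appendix calibrations:

```python
def annuity_factor(x, months=360):
    """a(x): present-value factor of a 30-year mortgage at monthly rate x/12."""
    return (1 - (1 + x / 12) ** (-months)) / (x / 12)

def hlc(I, W, m, r, c, x):
    """Market-share-weighted mix of the interest-only cap HLC_k and the
    annuity cap HLC_a; I is quarterly income, x the interest deduction rate."""
    net_rate = (1 - x) * r + c                      # interest net of deduction
    hlc_k = 4 * I * W / net_rate                    # annual budget / cost rate
    hlc_a = (I / 3) * W * annuity_factor(net_rate)  # monthly budget * a(.)
    return m * hlc_k + (1 - m) * hlc_a

# Illustrative values only: quarterly income 15k EUR, 30% housing share,
# 40% interest-only market share, 5% interest, 1% costs, 42% deduction rate
print(round(hlc(I=15_000, W=0.30, m=0.4, r=0.05, c=0.01, x=0.42)))
```

Raising $m$ mechanically raises $HLC$, which is how the rise and regulation of
interest-only mortgages enters the measure.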
#### 3.2.3 Benchmark Plus $HLC$ Model
The final model includes both benchmark variables and $HLC$. As $HLC$ is
calculated based on income and interest rates, it naturally correlates highly
with those variables. To reduce the cross-correlation, we convert $HLC$ by
dividing $HLC$ by income $I$. This creates a new variable that expresses the
ratio between lending capacity and income. This variable reflects the multiple
of their disposable income that the average household can borrow. The
benchmark model including $HLC$ has the form:
$HP_{t}=\beta_{0}+\beta_{1}I_{t}+\beta_{2}r_{t}+\beta_{3}LTV_{t}+\beta_{4}\frac{HLC_{t-i}}{I_{t-i}}$
### 3.3 Fitting Approaches
The model specifications above have the form of an Ordinary Least Squares
regression. OLS is the pocket knife of econometric modelling. It fits a
minimum number of parameters and is highly interpretable but does not correct
for autocorrelation. The standard model in the literature that corrects for
autocorrelation is the Error Correction Model (ECM), a more complex regression
model that distinguishes between long-term and short-term predictors (Anundsen and
Jansen, 2013; Turk, 2016). ECM has the following generic form:
$\Delta y=\beta_{0}+\beta_{1}\Delta x_{1,t}+...+\beta_{i}\Delta
x_{i,t}+\gamma(y_{t-1}-(\alpha_{1}x_{1,t-1}+...+\alpha_{i}x_{i,t-1}))$
Predicting $\Delta y$ instead of $y$ indicates that we predict the change in
$y$ rather than its absolute value. ECM was developed for econometric time-
series analysis of variables that have trends on both the short and long term.
ECM combines three estimates: it simultaneously estimates $y_{t-1}$ based on
long-term predictors, $\Delta y$ based on short-term predictors and $\gamma$
to weight the long-term and short-term estimates. The strength of this model is
that it anticipates the existence of long and short term trends and adjusts
for auto correlation. However, the fact that the ECM model is much less sparse
and includes a past value of the dependent variable in the fitting process
makes it less persuasive. As it is the standard test in the literature, we fit
all models as both OLS and ECM.
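A two-step (Engle-Granger style) estimation of such a model can be sketched as
follows. This is an illustrative simplification with a single regressor and
synthetic cointegrated data, not the paper's exact fitting procedure:

```python
import numpy as np

def fit_ecm(y, x):
    """Two-step sketch of the generic ECM:
    (1) long-run OLS y_t = a0 + a1*x_t gives the equilibrium residual;
    (2) regress dy_t on dx_t and the lagged residual (the gamma term)."""
    A = np.column_stack([np.ones_like(x), x])
    alpha, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ alpha                # deviation from long-run equilibrium
    dy, dx = np.diff(y), np.diff(x)
    B = np.column_stack([np.ones_like(dx), dx, resid[:-1]])
    beta, *_ = np.linalg.lstsq(B, dy, rcond=None)
    return alpha, beta                   # beta[2] is the adjustment speed gamma

# Synthetic cointegrated pair: y tracks 2x with a mean-reverting AR(1) error
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0.0, 1.0, 400))
e = np.zeros(400)
for t in range(1, 400):
    e[t] = 0.5 * e[t - 1] + rng.normal(0.0, 0.5)
alpha, beta = fit_ecm(2.0 * x + e, x)
print(alpha.round(2), beta.round(2))     # gamma (beta[2]) near -0.5
```

A negative gamma is the error-correction signature: deviations from the
long-run relation are pulled back toward equilibrium.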
### 3.4 Evaluating Model Performance
We measure the quality of fit of every model with the Root Mean Squared Error
(RMSE) and the Mean Absolute Error (MAE). We evaluate the tendency of the
models to overfit spurious correlations in the data by performing two sets of
fits: one on all available quarters, and an ‘out-of-sample’ test in which the
models are fitted only on data up to the second quarter of 2008. We choose
that cut-off because
it is the quarter in which the Dutch economy went into recession. The purpose
of the test is to ascertain whether models are able to anticipate the coming
crisis based on data from another part of the business cycle.
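The two error metrics and the out-of-sample protocol can be sketched as
follows (on a toy series, not the housing data):

```python
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

# Out-of-sample protocol from the text: fit only on quarters before the
# cut-off, then score the fitted line over the full series
y = np.array([1.0, 2.1, 2.9, 4.2, 5.0, 6.1, 7.2, 7.9])
t = np.arange(len(y), dtype=float)
cut = 5                                  # analogue of the 2008-Q2 cut-off
A = np.column_stack([np.ones(cut), t[:cut]])
beta, *_ = np.linalg.lstsq(A, y[:cut], rcond=None)
yhat = beta[0] + beta[1] * t
print(round(rmse(y, yhat), 3), round(mae(y, yhat), 3))
```

RMSE penalises large misses more heavily than MAE, which is why the two are
reported side by side in the result tables.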
## 4 Results
In this section, we report the goodness of fit of 12 variants of the three
model specifications above. We fit every model specification as an OLS and as
an ECM, on all quarters and on quarters up to 2009 only. Tables 2, 3 and 4
present the measured quality of fit of all variants. Figure 3 displays the
fitted values of the OLS models. We focus on the OLS results because the ECM
models failed to generalise across the board. A figure with all the fitted
values from the ECM models can be found in Appendix E.
(Panel columns: Fit On All Quarters, Out of Sample Test; panel rows:
Benchmark, $HLC$, Benchmark incl. $HLC$)
Figure 3: Model Performance (OLS). These six diagrams display the fitted
values of regression models (black) contrasted with observed average house
prices (purple). Fitted values on the left were generated by models fit on all
the available data, whereas fitted values on the right were generated by
models trained only on quarters up to mid-2008. While all three model
specifications seem reasonably accurate when fit on all data, the
‘out-of-sample’ test reveals that only the $HLC$ model forecasts accurately on
unseen data.
Table 2: Benchmark Model

Model Type | RMSE | MAE
---|---|---
Model: OLS | |
Fit on Quarters up to 2018 | 10.150 | 5.109
Fit on Quarters up to 2009 | 31.173 | 8.146
Model: ECM | |
Fit on Quarters up to 2018 | 8.057 | 5.152
Fit on Quarters up to 2009 | 13.864 | 3.868
Table 3: Household Lending Capacity ($HLC$) Model

Model Type | RMSE | MAE
---|---|---
Model: OLS | |
Fit on Quarters up to 2018 | 7.451 | 4.891
Fit on Quarters up to 2009 | 7.495 | 4.854
Model: ECM | |
Fit on Quarters up to 2018 | 6.700 | 4.681
Fit on Quarters up to 2009 | 13.161 | 7.745
Table 4: Benchmark Plus $HLC$ Model

Model Type | RMSE | MAE
---|---|---
Model: OLS | |
Fit on Quarters up to 2018 | 5.375 | 3.606
Fit on Quarters up to 2009 | 21.210 | 5.744
Model: ECM | |
Fit on Quarters up to 2018 | 5.284 | 3.989
Fit on Quarters up to 2009 | 8.664 | 2.994
The most striking finding is that all but one model shows poor performance in
the out-of-sample test. Only the OLS fit of the $HLC$ model achieves
comparable performance on the out-of-sample quarters and the in-sample
quarters. By contrast, the benchmark models completely miss the direction of
the market in the out-of-sample quarters. (As a robustness test, mortgage
debt-to-GDP ratios were also included in the benchmark; this version of the
benchmark scored more poorly than the benchmark with LTV values.) The increase
in robustness does not come at the expense of accuracy: when fit on all
quarters, the OLS $HLC$ model has a 27% lower MAE than the OLS benchmark
model. The models that include both benchmark variables and $HLC$ have the
best fit when fit on all quarters, but suffer from the same lack of robustness
as the benchmark model.
## 5 Discussion
Economists agree that access to credit is a crucial determinant of house
prices. This paper suggests a new approach to model credit conditions by
modelling the structure of Loan-to-Income norms in the Netherlands. By
modelling the formulas that Dutch banks use to calculate how much they can
lend to households, we construct a regression model that does not overfit in
the ‘out-of-sample’ test. We compare the accuracy of the model with existing
measures of credit access and find it delivers more accurate results. Our
analysis indicates that the cause of the Dutch crisis was a Dutch phenomenon:
the rise of interest-only mortgages and their fall at the hands of the Dutch
government in 2011. As the rise and fall of these mortgage products is
correlated with rising and falling LTV rates, this effect is partly
represented in the benchmark model, but modelling it explicitly allows for a
better fit. The findings suggest that the ‘double dip’ crisis was partly due
to the fact that the Dutch government constricted lending capacity in 2011 by
regulating interest-only mortgages more strictly. Whereas all economic
indicators pointed towards price recovery, prices fell steeply soon after this
legislation came into effect.
More significantly, the model specification based on LTI formulas appears to
generalise more robustly when trained on only a fraction of the available
data. This observation suggests that the in-sample accuracy of econometric
house price models may be a poor measure of their predictive power. These
models may be fitting spurious correlations that do not generalise to other
economic circumstances. The field faces a challenge: how do we develop models
that account for complex interactions between economic variables (low bias)
whilst safeguarding against overfitting (low variance)? The methodology of
this work lays out a possible path forward. We identify relationships between
variables in qualitative research and construct a measure based on those
relationships for quantitative analysis. Rather than fit each variable
independently, in the general form
$y=\beta_{0}+\beta_{1}x_{1}+...+\beta_{i}x_{i}$, we fit a model in which a
function of the independent variables is fitted, in the general form
$y=\beta_{0}+\beta_{1}f(x_{1},...,x_{i})$. In this work, the function
calculates household lending capacity based on the formulas that Dutch banks
use in the mortgage application process. Other authors can represent any
hypothesised interaction between variables as functions. Thus, one can model
any complex relationship between variables, hence lowering bias, whilst
fitting fewer parameters, hence lowering variance. This might offer an
alternative to the current direction of the field: plugging an increasing
number of variables into increasingly complex statistical models. (A possible
critique of this method is that it invites researchers to formulate endlessly
complex versions of $f$ until they find some function that fits the dependent
variable. To counter this, researchers should be required to carefully justify
their hypothesised function based on qualitative research. In this work, we
base the construction of $HLC$ on the LTI formulas of Dutch banks. Both the
theoretical justification of the construction of $f$ _and_ the empirical
validation must be persuasive.)
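The contrast between the two general forms above can be sketched numerically. The snippet below uses synthetic data, and the `lending_capacity` function is a hypothetical stand-in for the paper's $HLC$ construction, not its actual formula: it illustrates fitting a single coefficient on $f(x_{1},...,x_{i})$ versus fitting each variable independently.

```python
import numpy as np

# Synthetic data: income (x1) and mortgage interest rate (x2).
rng = np.random.default_rng(0)
n = 100
income = rng.uniform(20_000, 60_000, n)
rate = rng.uniform(0.01, 0.08, n)

def lending_capacity(income, rate, w=0.30):
    # Hypothetical stand-in for HLC: a share w of income, annuitised
    # over 30 years at the nominal rate (NOT the paper's exact formula).
    return w * income * (1 - (1 + rate) ** -30) / rate

# True data-generating process: prices respond to lending capacity.
y = 1.4 * lending_capacity(income, rate) + rng.normal(0, 5_000, n)

# Independent-variable fit: y = b0 + b1*x1 + b2*x2 (3 parameters).
X_indep = np.column_stack([np.ones(n), income, rate])
beta_indep, *_ = np.linalg.lstsq(X_indep, y, rcond=None)

# Functional-form fit: y = b0 + b1*f(x1, x2) (2 parameters).
X_func = np.column_stack([np.ones(n), lending_capacity(income, rate)])
beta_func, *_ = np.linalg.lstsq(X_func, y, rcond=None)

print(beta_func[1])  # recovers a coefficient close to 1.4
```

The functional-form fit models the non-linear interaction between income and the interest rate while estimating one fewer parameter than the linear specification.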
### Competing interests
The authors declare that they have no competing interests.
### Funding
TY was partially supported by the Alan Turing Institute under the EPSRC grant
no. EP/N510129/1. The sponsor had no role in study design; in the collection,
analysis and interpretation of data; in the writing of the report; and in the
decision to submit the article for publication.
### Authors’ contributions
MS analyzed the data. MS and TY designed the study and the analysis. All
authors read and approved the final manuscript.
## References
* Anundsen and Jansen (2013) Anundsen AK, Jansen ES (2013). “Self-reinforcing effects between housing prices and credit.” _Journal of Housing Economics_ , 22(3), 192–212.
* Bernanke and Gertler (1995) Bernanke BS, Gertler M (1995). “Inside the black box: the credit channel of monetary policy transmission.” _Journal of Economic perspectives_ , 9(4), 27–48.
* Centraal Bureau voor de Statistiek (2019a) Centraal Bureau voor de Statistiek (2019a). “Bestaande koopwoningen.” [Online; accessed 29-July-2019], URL https://opendata.cbs.nl/statline/#/CBS/nl/dataset/83906NED/table?dl=23269.
* Centraal Bureau voor de Statistiek (2019b) Centraal Bureau voor de Statistiek (2019b). “Inkomen Huishoudens.” [Online; accessed 29-July-2019], URL https://opendata.cbs.nl/statline/#/CBS/nl/dataset/83932NED/table?dl=23965.
* Centraal Plan Bureau (2014) Centraal Plan Bureau (2014). “Staat van de Woningmarkt, Jaarrapportage 2014.” URL http://www.aedes.nl/binaries/downloads/betaalbaarheid/141014-staat_van_de_woningmarkt-2014.pdf.
* Centraal Plan Bureau (2018) Centraal Plan Bureau (2018). “Doorrekening varianten aanpassing aflossingseis.”
* Chauvin and Muellbauer (2014) Chauvin V, Muellbauer J (2014). “Consumption, household portfolios and the housing market: a flow of funds approach for France.” In “Lyon Meeting,” .
* De Nederlandsche Bank (2009) De Nederlandsche Bank (2009). “Risico op de Nederlandse Hypotheekmarkt.” [Online; accessed 29-July-2019], URL https://www.dnb.nl/binaries/Risico%20op%20de%20hypotheekmarkt_tcm46-222059.pdf.
* De Nederlandsche Bank (2019a) De Nederlandsche Bank (2019a). “Deposito’s en leningen van MFI’s aan huishoudens, rentepercentages.” [Online; accessed 29-July-2019], URL https://tinyurl.com/y8xkgzm4.
* De Nederlandsche Bank (2019b) De Nederlandsche Bank (2019b). “Effecten van een verdere verlaging van de LTV-limiet.” 13.
  * Duca and Muellbauer (2016) Duca J, Muellbauer J (2016). “How Mortgage Finance Reform Could Affect Housing.” 106(5), 620–624.
* ESB (2017) ESB (2017). “Lenen om te wonen.” _ESB Dossier_ , 102(4749).
* Groot _et al._ (2018) Groot S, Vogt B, Van Der Wiel K, Van Dijk M (2018). “Oververhitting op de Nederlandse huizenmarkt?”
* International Monetary Fund (2019) International Monetary Fund (2019). “Country Report: Kingdom of the Netherlands.” (19).
* Kiyotaki and Moore (1997) Kiyotaki N, Moore J (1997). “Credit cycles.” _Journal of political economy_ , 105(2), 211–248.
* Lyons (2018) Lyons RC (2018). “Credit conditions and the housing price ratio: Evidence from Ireland’s boom and bust.” _Journal of Housing Economics_ , 42, 84–96. ISSN 10960791. doi:10.1016/j.jhe.2018.05.002.
* NIBUD (2018) NIBUD (2018). “Financieringslastnormen 2018.” [Online; accessed 29-July-2019], URL https://www.nibud.nl/wp-content/uploads/Advies-financieringslastpercentages-2018.pdf.
* Oikarinen (2009) Oikarinen E (2009). “Household borrowing and metropolitan housing price dynamics - Empirical evidence from Helsinki.” _Journal of Housing Economics_ , 18(2), 126–139. ISSN 10511377. doi:10.1016/j.jhe.2009.04.001. URL http://dx.doi.org/10.1016/j.jhe.2009.04.001.
* Rabobank (2014) Rabobank (2014). “De gevolgen van de terugkeer van de annuïteitenhypotheek.” [Online; accessed 29-July-2019], URL https://economie.rabobank.com/publicaties/2014/maart/de-gevolgen-van-de-terugkeer-van-de-annuiteitenhypotheek/.
* Rabobank (2015) Rabobank (2015). “Ene LTV is de andere niet.” [Online; accessed 29-July-2019], URL https://economie.rabobank.com/publicaties/2015/maart/de-ene-ltv-is-de-andere-niet/.
* Turk (2016) Turk R (2016). “Housing Price and Household Debt Interactions in Sweden.” _IMF Working Papers_ , 15(276), 1. ISSN 1018-5941. doi:10.5089/9781513586205.001.
* van der Veer and Hoebrechts (2016) van der Veer, Hoebrechts (2016). _Journal of Banking and Finance_. doi:10.1007/s11146-015-9531-2.
## Appendix A Summary of Dutch Housing Market History
This section summarises the Dutch story, highlights existing theories and
identifies the gap in understanding. The Dutch have consistently seen house
prices rise in the post-war period. There has been a near-permanent shortage
of residential properties, so supply rarely meets demand. (This is one reason
that supply-side factors are less relevant in the Dutch market.) In a country
with one of the highest population densities in the world, population growth
and rising incomes, households expect that residential property values will
continue to rise. This makes property a good investment, especially in popular
cities such as Amsterdam, Utrecht and The Hague.
Figure 1 shows that house prices rose rapidly in the 1990s and early 2000s and
entered a period of sustained decline after 2009. Despite falling interest
rates, housing prices fell 16% from their peak value. The crisis came at great
cost to many Dutch families. In 2015, 28% of homes were deemed ‘under water’:
the home was worth less than the outstanding mortgage debt (Centraal Plan
Bureau, 2014). Families could not move to a new house because they could not
pay off the debt on their old property. Divorcees, the recently unemployed and
other citizens with an urgent reason to sell their house could not fetch a
price higher than or equal to their mortgage. As a result, these citizens were
left with substantial and long-term debts without the means to repay them.
Often, the costs of new housing and debt payments were more than these
households could afford.
The Dutch government attempted to boost home prices by lowering taxes on real-
estate transactions and by allowing parents to assist their children with a
tax-free gift for the purpose of acquiring a property. Interest rates fell to
record lows (De Nederlandsche Bank, 2019a). However, the housing market
continued to slump. After years of falling prices, the market rose in 2015.
Prices shot back up beyond their previous peak within a few years. In the
large cities, prices rose by 10% per year, leading the Dutch Central Bank to
describe as “overheated” markets that had been ‘cold’ just a few years before.
The Dutch Central Bank expressed concern about the unprecedented speed of
price increases (Groot _et al._, 2018).
The question remains as to which factors drove this dramatic cyclical movement
in the market. Figure 1 shows that the Netherlands experienced a ‘double dip’
in prices: prices fell in 2008 and 2009, stabilised from 2009 to 2011 and sank
further in 2013. The second dip occurred despite the fact that Gross Domestic
Product had returned to growth (see Figure 1). Research remains to be done on
why the Dutch market experienced a ‘double dip’ that prolonged the crisis on
the housing market.
The International Monetary Fund, the Dutch National Bank and the Centraal Plan
Bureau have performed a great number of analyses of the housing market. These
institutions express concern that Dutch households are amongst the most
indebted in the world (International Monetary Fund, 2019). The bulk of
household debt consists of mortgage debt. Dutch households borrow because of
fiscal incentives to do so. Households either rent or own property. Buying
property with a mortgage is attractive, because monthly payments are partially
a form of savings, whereas rent is not. The Netherlands makes it fiscally
attractive for households to switch from renting to owning property, for three
reasons. The first is that renters in the middle and upper class pay some of
the highest rents in Europe. (The cause of high rents in the Netherlands is a
topic of much debate and beyond the scope of this work.) The second is that
the government allows home-owners to deduct mortgage interest from their
taxable income, effectively subsidising mortgage debt. The third is that the
Dutch government guarantees mortgage debt for homes valued up to €290.000
under the ‘Nationale Hypotheek Garantie’ (NHG) scheme. This reduces risks for
banks and thus allows them to provide lower interest rates on middle-class
mortgages. The NHG allows home owners to default on mortgage debt that cannot
be paid off by selling their home, reducing risks for new home owners. In
summary, households realise that the most fiscally profitable route is to buy
property using the highest amount of mortgage debt they can obtain.
## Appendix B Data Sources
### B.1 House Prices
We collect the average transaction price of residential properties in the
Netherlands ($HP$) as reported on a quarterly basis by the Dutch statistics
office (Centraal Bureau voor de Statistiek, 2019a). Values before 1999 were
only reported on a yearly basis. For these years, we imputed the missing
quarters by extrapolating the trend between years. While this approach may
miss quarterly variation, it captures the overall trend in prices.
### B.2 Household Income
Household Income after Taxes ($I$) is average after-tax household income as
reported by the Dutch statistics office (Centraal Bureau voor de Statistiek,
2019b). The variable has strong yearly seasonality, because bonuses are added
to income in the last quarter. Therefore, we smooth the variable by taking the
average of the last 4 quarters. As a result, changes in the variable show
changes in year-over-year income.
### B.3 Interest Rates
Banks calculate lending capacity using the nominal interest rate for the
mortgage product. Mortgage interest rates differ from the interest rates on
the capital markets, because banks factor in their margin, estimated risk of
default, risk of devaluation of the property and future developments of the
interest rates. Every bank sets different interest rates for their products.
We obtained the average mortgage interest rate for new contracts on a
quarterly basis from the Dutch Central Bank for the period 1992-2019 (De
Nederlandsche Bank, 2019a). We smooth the variable by taking the average of
the last 4 quarters. Changes in the variable show changes year-over-year.
### B.4 Average LTV Values
Average LTV ratios were collected from reports by the Dutch Central Bank and
the Rabobank (De Nederlandsche Bank, 2019b; Rabobank, 2015). The data was
reported on a yearly level, hence we miss some quarter-to-quarter variance.
Where data was missing, we imputed the value from the year before.
### B.5 Market Share of Interest Only Mortgages
Market Share of Interest-Only mortgages was collected from two sources: a
report by the Dutch Central Bank up to 2008 (De Nederlandsche Bank, 2009) and
a Rabobank report up to 2011 (Rabobank, 2014) after which the category became
irrelevant.
## Appendix C Setting Constants in Model Construction
### C.1 Accounting for Mortgage Interest Deduction ($t$)
The Dutch government allows households to deduct mortgage interest payments
from income tax. Thus, the effective interest paid is a fraction of the
nominal interest rate. For example, a household might pay a top rate of 50%
tax over their income. If they have €10.000 in mortgage interest payments,
they can subtract this sum from their taxable income. They do not have to pay
50% tax over €10.000 of their total income and as a result ‘save’ €5.000. This
example demonstrates that the amount of mortgage interest deduction depends on
the top rate that households pay. The Dutch income tax system features many
tax brackets and deductions. The top marginal rate can differ between
households with the same income. As tax policy has been adjusted, the brackets
have also shifted over time. The minutiae of Dutch income tax policy were
beyond the scope of this work to model, so we take a simplified approach. The
average household has traditionally paid their top rate in a tax bracket of
approximately 40%. We assume that households deduct their interest payments in
the 40% bracket. The household pays 60% of their interest payment; the rest is
subsidised by the state. Thus, $t$ is set at 0.4.
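The deduction arithmetic above amounts to a single multiplication; a minimal sketch (the function name is ours, not the paper's):

```python
def effective_interest(nominal_interest: float, t: float) -> float:
    """Interest actually borne by the household after deducting at top rate t."""
    return nominal_interest * (1 - t)

# Worked example from the text: 50% bracket, EUR 10.000 interest -> pay 5.000.
print(effective_interest(10_000, 0.50))
# The paper's assumption t = 0.4: the household pays 60% of the interest.
print(effective_interest(10_000, 0.40))
```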
### C.2 Share of Income for Housing Expenses ($W$)
The ‘woonquote’ governs the share of income the bank assumes that buyers can
use to pay for the costs associated with their home. The precise ‘woonquote’
used for the average Dutch household is not known, because the rules and
regulations only define a range of 21% to 40% (NIBUD, 2018). Banks can choose
a value within that range based on a large number of variables not available
to us, such as family size, projected (energy) costs of the home, projections
of future income, etc. It is possible that the ‘woonquote’ was not the same
for interest-only mortgages and annuities, because these mortgages may have
been chosen by differing groups whose characteristics put them elsewhere on
the range. Without any hard data, we choose to set the parameter in the middle
of the range: 30%. If the coefficient is not set appropriately, this will be
compensated for in the estimation of $\beta_{1}$.
### C.3 Setting Maintenance Cost ($c$)
LTI limits specify that homeowners must be able to pay all their costs from
the portion of their income allocated for housing. We set the ‘other costs’
parameter at 2.5% of the value of the home. The NIBUD, the government agency
that sets LTI rules, estimates ‘other costs’ to be a significant fraction of
the initial cost of the home. This matches a back-of-the-envelope calculation:
homeowners face a tax of 0.75% of their home value, municipal taxes and
approximately 1% yearly costs for maintenance.
### C.4 Constructing Marketshare ($m$)
For the construction of $HLC$, we require the market share of interest-only
mortgages over time. That data was not available over the entire time period.
However, we can estimate the market share of interest-only mortgages in new
transactions based on the changes in the total stock of Dutch mortgages. The
rest of this section lays out how this measure was constructed.

We can approximate the share of households who got a new mortgage by dividing
the number of market transactions by the total number of households for every
year. This assumes that no household moved twice. Based on the overall market
share data, we calculate the increase in market share for interest-only
mortgages. If we know that 5% of households moved in a given year and
interest-only mortgages captured 2.5% of the market, we can deduce that 50% of
households who moved switched from an annuity-like mortgage to an interest-
only mortgage. What about the remaining 50%? These must be non-switchers. We
can assume that non-switcher households renewed the mortgage type they already
possessed. Therefore, in a case where interest-only mortgages have a market
share of 40%, we can deduce that the share of non-switcher households who
renewed an interest-only mortgage was the proportion of non-switchers times
the overall market share ($0.5*0.4=0.2$). In sum, we can deduce that 50% of
households switched into interest-only, 20% of households renewed an
interest-only mortgage and 30% renewed an annuity-like mortgage. Thus, the
market share of interest-only mortgages among new mortgages is 70%.
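The bookkeeping above can be sketched in a few lines, using the worked numbers from the text (the function name is ours; all values are illustrative):

```python
def io_share_of_new_mortgages(moved: float, io_gain: float, io_stock: float) -> float:
    """Share of interest-only (IO) mortgages among newly issued mortgages.

    moved:    fraction of households that got a new mortgage this year
    io_gain:  increase in the overall IO market share this year
    io_stock: overall IO market share in the existing stock
    """
    switchers = io_gain / moved             # movers who switched into IO
    renewers = (1 - switchers) * io_stock   # non-switchers renewing an IO mortgage
    return switchers + renewers

# 5% of households moved, IO share rose 2.5 points, overall IO share 40%:
print(io_share_of_new_mortgages(0.05, 0.025, 0.40))  # 0.5 + 0.5*0.4 = 0.7
```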
### C.5 Calculating Optimal Lag for $HLC$
This section concerns the choice of an appropriate time lag for $HLC$ in our
regression analysis. In time-series analysis, the causal relationship between
variables can be difficult to observe because of a time lag. In those cases,
we can shift the independent variables backward in time. If we believe that
lending capacity today will influence housing prices two years from now, we
regress lending capacity today against housing prices in two years. Time lags,
like most other causal relationships, can be chosen on purely theoretical
grounds or inferred from the data. We employ a common, hybrid approach. We
first specify a range of possible lags based on theory. This assures that any
of the lags would accord with the theoretical conception of the behaviour of
home-buyers. We then test empirically which lag best fits the data. We regress
the different lagged versions of $HLC$ against house prices and take the
$R^{2}$ to measure goodness of fit. We choose the lag with the highest
$R^{2}$.

Figure 4: $R^{2}$ of $HLC$ with different lags.

We chose a range of 0-6 quarters to lag $HLC$ by, because homeowners often
arrange financing first and then acquire a home. The process of acquiring a
home often involves bidding, bargaining, a waiting period for the owners to
find a new home and finally the formal transaction. That process can take many
months. Thus, we choose the range of 0-6 quarters as possible lags between
lending capacity and house prices. We regressed lending capacity lagged by 0-6
quarters against house prices and observed the $R^{2}$. The $R^{2}$ is highest
with a time lag of 6 quarters. (We also tested lags greater than 6 and found
the $R^{2}$ is lower with lags of 7 and 8 quarters.)
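The lag-selection loop described above can be sketched as follows. The series are synthetic (not the paper's data), built so that prices respond to lending capacity with a known 6-quarter delay; the procedure then recovers that lag from the $R^{2}$ scores:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 120
hlc = np.cumsum(rng.normal(0, 1, T)) + 50  # synthetic lending-capacity series
true_lag = 6
hp = np.empty(T)
hp[true_lag:] = 2.0 * hlc[:-true_lag]      # prices follow HLC with a 6q delay
hp[:true_lag] = hp[true_lag]
hp += rng.normal(0, 0.5, T)

def r_squared(x, y):
    # Simple OLS of y on [1, x]; return the coefficient of determination.
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Regress HLC lagged by 0-6 quarters against prices; keep the best lag.
scores = {lag: r_squared(hlc[:T - lag], hp[lag:]) for lag in range(7)}
best_lag = max(scores, key=scores.get)
print(best_lag)  # recovers the 6-quarter lag
```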
#### C.5.1 Choosing the number of coefficients to estimate
Originally, we intended to fit separate coefficients for $HLC_{a}$ and
$HLC_{k}$. However, we found that, remarkably, the values of both coefficients
were nearly identical (to three decimals) at the optimal fit. This suggests
that the best-fit values of the constants $c$, $t$ and $W$ are identical for
$HLC_{a}$ and $HLC_{k}$. For the sake of representing our model as clearly as
possible, we therefore chose to simplify the model and only formally estimate
one coefficient on $HLC$ in the regression analysis. However, for other values
of $c$, $t$ and $W$ or in other markets, it may be necessary to fit separate
$\beta$’s for different mortgage product types.
## Appendix D Model Fits
Table 5: OLS Results. Dependent variable: $HP$.

| | (1) | (2) | (3) |
|---|---|---|---|
| $I$ | 3.618∗∗∗ (0.218) | | 4.145∗∗∗ (0.073) |
| $LTV$ | 6,070.454∗∗∗ (519.257) | | 2,604.368∗∗∗ (326.972) |
| $r$ | $-$6,270.771∗∗∗ (2,167.802) | | |
| $HLC_{t-6}$ | | 1.410∗∗∗ (0.058) | |
| $\frac{HLC}{I}_{t-6}$ | | | 67,784.850∗∗∗ (4,132.603) |
| Constant | $-$627,143.700∗∗∗ (53,894.820) | $-$39,257.250∗∗∗ (10,537.810) | $-$521,212.700∗∗∗ (30,881.110) |
| Observations | 82 | 81 | 78 |
| $R^{2}$ | 0.937 | 0.953 | 0.980 |
| Adjusted $R^{2}$ | 0.934 | 0.952 | 0.979 |
| Residual Std. Error | 11,056.030 (df = 78) | 8,392.266 (df = 79) | 5,519.168 (df = 74) |
| F Statistic | 385.834∗∗∗ (df = 3; 78) | 1,590.978∗∗∗ (df = 1; 79) | 1,180.502∗∗∗ (df = 3; 74) |

Note: ∗p$<$0.1; ∗∗p$<$0.05; ∗∗∗p$<$0.01. Standard errors in parentheses.
Table 6: ECM Results. Dependent variable: $\Delta HP$.

| | (1) | (2) | (3) |
|---|---|---|---|
| $\Delta I$ | 1.241 (1.013) | | 1.068 (0.958) |
| $\Delta LTV$ | $-$595.012 (2,244.856) | | $-$77.518 (2,085.333) |
| $\Delta r$ | 161.405 (3,958.949) | | $-$1,493.893 (3,681.579) |
| $\Delta\frac{HLC}{I}_{t-4}$ | | | 23,430.740∗ (12,426.520) |
| $I_{t-1}$ | $-$0.098 (0.219) | | 1.086∗∗∗ (0.371) |
| $LTV_{t-1}$ | 454.846 (376.520) | | 249.341 (353.242) |
| $r_{t-1}$ | $-$3,753.088∗∗∗ (1,077.769) | | $-$265.662 (1,396.372) |
| $\Delta HLC_{t-4}$ | | 1.074∗∗∗ (0.239) | |
| $HLC_{t-5}$ | | 0.259∗∗∗ (0.092) | |
| $\frac{HLC}{I}_{t-5}$ | | | 26,801.280∗∗∗ (7,095.525) |
| $\gamma$ | $-$0.090∗ (0.053) | $-$0.197∗∗∗ (0.065) | $-$0.314∗∗∗ (0.080) |
| Constant | 468.108 (38,566.940) | $-$3,439.291 (3,583.361) | $-$102,568.800∗∗ (44,634.110) |
| Observations | 83 | 83 | 83 |
| $R^{2}$ | 0.246 | 0.316 | 0.374 |
| Adjusted $R^{2}$ | 0.175 | 0.290 | 0.297 |
| Residual Std. Error | 4,453.263 (df = 75) | 4,130.705 (df = 79) | 4,112.399 (df = 73) |
| F Statistic | 3.490∗∗∗ (df = 7; 75) | 12.188∗∗∗ (df = 3; 79) | 4.844∗∗∗ (df = 9; 73) |

Note: ∗p$<$0.1; ∗∗p$<$0.05; ∗∗∗p$<$0.01. Standard errors in parentheses.
Figure 5: Model performance (ECM), fit on all quarters and in the out-of-
sample test, for the Benchmark, $HLC$, and Benchmark incl. $HLC$ models. Light
purple: average house price. Black: predicted average house price.
# Benign overfitting without concentration
###### Abstract
We obtain a sufficient condition for benign overfitting in the linear
regression problem. Our result does not rely on a concentration argument but
on a small-ball assumption, and thus can hold in the heavy-tailed case. The
basic idea is to establish a coordinate small-ball estimate in terms of the
effective rank so that we can calibrate the balance between the epsilon-net
and the exponential probability. Our result indicates that benign overfitting
does not depend on the concentration properties of the input vector. Finally,
we discuss potential difficulties for benign overfitting beyond the linear
model and give a benign overfitting result without truncated effective rank.
Zong [email protected], College of Computer Science and
Technology, Jilin University, China.
## 1 Introduction
In recent years, there has been tremendous interest in studying the
generalization properties of statistical models that interpolate the input
data. Classical learning theory suggests that when a predictor fits the input
data perfectly, it will suffer from noise and therefore will not generalize
well. To overcome this problem, regularization and penalized learning
procedures like LASSO have been studied to weaken the effect of noise and
avoid overfitting. However, some empirical experiments indicate that
overfitting may perform well. Why can overfitting perform well? In what cases
can overfitting perform well? These questions have motivated a series of works
in this field.

The original motivation comes from the Deep Learning community, which
empirically revealed that overfitting Deep Neural Networks can still
generalize well, see Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals,
O. (2016). This counter-intuitive phenomenon also appears for linear
regression and kernel ridge regression, see Belkin, M., Ma, S. and Mandal, S.
(2018) and Tsigler, A. and Bartlett, P. (2020). They believe that
investigating the benign overfitting phenomenon in the linear regression case
will benefit the study of the more complex Deep Neural Network case.
The cornerstone work of Bartlett, P.L., Long, P. M., Lugosi, G. and Tsigler,
A. (2019) presented a minimax bound on the generalization error of overfitting
linear regression. Their result is in terms of effective ranks, which measure
the tail behavior of the eigenvalues of the covariance matrix and will be
defined later in our paper. Recently, Chinot, G., Lerasle, M. (2020) improved
their results to the large deviation regime. Both results rely on the
assumption that the input vector is a gaussian vector. This assumption is
relaxed in Tsigler, A. and Bartlett, P. (2020) to sub-gaussian vectors.
However, all these works use concentration arguments and thus only apply to a
number of well-behaved distributions. In the heavy-tailed case, the small-ball
method is often employed to study the generalization properties of statistical
models, see Koltchinskii, V. and Mendelson, S. (2015). However, the original
coordinate small-ball estimate cannot be directly used when the input vector
is anisotropic, yet anisotropy is a necessary condition for benign
overfitting. Fortunately, this issue can be solved by a simple modification.
In this paper, we derive a sufficient condition for benign overfitting when
the input is heavy-tailed by using the small-ball method.
The benign overfitting phenomenon was first observed by Zhang, C., Bengio, S.,
Hardt, M., Recht, B. and Vinyals, O. (2016), and a great deal of effort has
been devoted to investigating its cause. Song, M. and Montanari, A. (2019) and
Hastie, T., Montanari, A., Rosset, S. and Tibshirani, R. (2019) studied the
asymptotic generalization error in the random feature setting. In fact, benign
overfitting in linear regression is not equivalent to that of random features,
because the parameters in random features cannot all be controlled to minimize
the empirical loss as in linear regression (only the parameters in the second
layer can be used to minimize the empirical loss, while those in the first
layer are randomized). However, their empirical results illustrate that linear
regression and random features of shallow neural networks share a similar
double-descent risk curve. Bartlett, P.L., Long, P. M., Lugosi, G. and
Tsigler, A. (2019) obtained a two-sided non-asymptotic bound on the prediction
error in the gaussian linear regression setting. Their result is in terms of
effective ranks, a truncated version of the stable rank from Asymptotic
Geometric Analysis. Tsigler, A. and Bartlett, P. (2020) generalized it to
sub-gaussian linear regression; their result is also in terms of effective
ranks. Liang, T., Rakhlin, A. (2020), Rakhlin, A. and Zhai, X. (2019), and
Liang, T., Rakhlin, A. and Zhai, X. (2020) studied benign overfitting in
Reproducing Kernel Hilbert Spaces. Belkin, M., Rakhlin, A. and Tsybakov, A.B.
(2019) showed that the interpolant may be the optimal predictor in some cases.
The closest work to ours is Chinot, G., Lerasle, M. (2020), who derived a
sufficient condition for benign overfitting in gaussian linear regression in
terms of the effective rank. We obtain similar results, but we only assume
that the input satisfies a small-ball assumption instead of a gaussian
distribution. Our result can also be partially compared to Tsigler, A. and
Bartlett, P. (2020), where they assumed both sub-gaussianity and a marginal
small-ball assumption.
### 1.1 Background and notation
In this paper, we consider linear regression problems in $\mathbb{R}^{p}$.
Given a dataset $D_{N}=\left\\{\left(X_{i},Y_{i}\right)\right\\}_{i=1}^{N}$
with $Y_{i}=\left<X_{i},\alpha^{\ast}\right>+\xi_{i}$, where
$\alpha^{\ast}\in\mathbb{R}^{p}$ is an unknown vector, $(X_{i})_{i=1}^{N}$ are
i.i.d. copies of $X$, and the $\xi_{i}$ are unpredictable i.i.d. centered sub-
gaussian noise, independent of $X$. Because we are going to compare linear
regression with more general function classes later, we also often use
$f_{\alpha}(\cdot)$ to denote $\left<\cdot,\alpha\right>$ in the following,
and $\mathcal{F}_{A}$ for the set of $f_{\alpha}$ such that $\alpha\in A$.
Assume the random vector $X\in\mathbb{R}^{p}$ satisfies the weak small ball
assumption with parameters $(\mathcal{L},\theta)$, which will be defined in
Definition 2.1, and denote its covariance matrix by $\Sigma$. Define the
design matrix $\mathbf{X}$ with $N$ rows $X_{i}^{T}$. Denote the response
vector $\mathbf{Y}=\left(Y_{1},\cdots,Y_{N}\right)$.
When $p>N$, the least-squares estimator can interpolate $D_{N}$. Denote the
one that has the smallest $\ell_{2}$ norm by $\hat{\alpha}$. That is to say,
$\hat{\alpha}=\mathbf{X}^{\dagger}\mathbf{Y},$
where $\mathbf{X}^{\dagger}$ is the Moore-Penrose pseudo-inverse of
$\mathbf{X}$. Denote $H_{N}\subset\mathbb{R}^{p}$ as
$H_{N}=\left\\{\alpha\in\mathbb{R}^{p}:\,\mathbf{X}\alpha=\mathbf{Y}\right\\}$;
we call $H_{N}$ the interpolation space. We have
$\hat{\alpha}=\underset{\alpha\in
H_{N}}{\mathrm{argmin}}\left\|\alpha\right\|_{\ell_{2}}.$
We assume that $\mathrm{rank}(\Sigma)>N$; then $\hat{\alpha}$ exists almost
surely.
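The minimum-norm interpolator defined above is easy to exhibit numerically. A minimal sketch with synthetic gaussian data (not tied to the paper's small-ball setting): the pseudo-inverse solution interpolates the data, and adding any null-space direction of $\mathbf{X}$ can only increase the norm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 20, 50                        # overparameterised: p > N
X = rng.normal(size=(N, p))          # design matrix, rows X_i^T
alpha_star = rng.normal(size=p)
Y = X @ alpha_star + 0.1 * rng.normal(size=N)

# Minimum-norm interpolator via the Moore-Penrose pseudo-inverse.
alpha_hat = np.linalg.pinv(X) @ Y

# It interpolates: X alpha_hat = Y (up to numerical precision).
print(np.allclose(X @ alpha_hat, Y))  # True

# Any other point of H_N = {alpha : X alpha = Y} differs by a null-space
# vector of X, which is orthogonal to alpha_hat, so its norm is larger.
null_dir = np.eye(p)[0] - np.linalg.pinv(X) @ X @ np.eye(p)[0]
other = alpha_hat + null_dir
print(np.linalg.norm(other) >= np.linalg.norm(alpha_hat))  # True
```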
Our loss function is the squared loss, that is $\ell(t)=t^{2}$, and the loss
of $\alpha$ is denoted by
$\ell_{\alpha}=\ell\left(\left<\alpha-\alpha^{\ast},\mathbf{X}\right>\right)$.
The empirical excess risk is then defined as
$\mathrm{P}_{N}{\mathcal{L}_{\alpha}}=\mathrm{P}_{N}\left(\ell_{\alpha}-\ell_{\alpha^{\ast}}\right).$
Benign overfitting depends on the effective ranks of $\Sigma$. If
$A\in\mathbb{R}^{p\times p}$ is a symmetric matrix, denote by
$\lambda_{1}(A)\geqslant\cdots\geqslant\lambda_{p}(A)$ its eigenvalues and by
$s_{1}(A)\geqslant\cdots\geqslant s_{p}(A)$ its singular values. If there is
no ambiguity, we will write $s_{i}$ instead of $s_{i}(A)$.
Bartlett, P.L., Long, P. M., Lugosi, G. and Tsigler, A. (2019) defined two
effective ranks:
$\displaystyle
r_{k}(\Sigma)=\frac{\sum_{i>k}\lambda_{i}(\Sigma)}{\lambda_{k+1}},\quad
R_{k}(\Sigma)=\frac{\left(\sum_{i>k}\lambda_{i}(\Sigma)\right)^{2}}{\sum_{i>k}\lambda_{i}^{2}(\Sigma)}.$
(1.1)
$R_{k}(\Sigma)$ is a truncated version of the stable rank that appears in
Asymptotic Geometric Analysis; see Vershynin, R. (2018), Naor, A. and Youssef,
P. (2017) and Mendelson, S. and Paouris, G. (2019) for a comprehensive review.
The stable rank, denoted $\mathrm{srank}_{q}(A)$, is defined by
$\mathrm{srank}_{q}(A)=\left(\frac{\left\|A\right\|_{S_{2}}}{\left\|A\right\|_{S_{q}}}\right)^{\frac{2q}{q-2}},$
where $\left\|\cdot\right\|_{S_{q}}$ is the $q$-Schatten norm of $A$, that is
to say,
$\left\|A\right\|_{S_{q}}=\left(\sum_{i=1}^{p}s_{i}^{q}(A)\right)^{1/q}$. When
$q=4$ and $A=\Sigma^{1/2}$,
$\mathrm{srank}_{4}(\Sigma^{1/2})=\left(\frac{\left\|\Sigma^{1/2}\right\|_{S_{2}}}{\left\|\Sigma^{1/2}\right\|_{S_{4}}}\right)^{4}=\frac{\left(\sum_{i=1}^{p}s_{i}^{2}(\Sigma^{1/2})\right)^{2}}{\sum_{i=1}^{p}s_{i}^{4}(\Sigma^{1/2})}=\frac{\left(\sum_{i=1}^{p}\lambda_{i}(\Sigma)\right)^{2}}{\sum_{i=1}^{p}\lambda_{i}^{2}(\Sigma)}.$
It can be seen that $R_{k}(\Sigma)$ is the truncated version of
$\mathrm{srank}_{4}(\Sigma^{1/2})$.
Apart from this, $r_{k}(\Sigma)$ is also a truncated version of the usual
“effective rank” $\mathrm{tr}(\Sigma)/\lambda_{1}(\Sigma)$ from the
statistical literature, see Koltchinskii, V. and Lounici, K. (2017), Rudelson,
M. and Vershynin, R. (2007).
In fact, our result will be in terms of $R_{k}(\Sigma)$ instead of
$r_{k}(\Sigma)$, which is the usual choice in most past work. However, this
does not matter, because the two effective ranks are closely related; we refer
the reader to Appendix A.6 in Bartlett, P.L., Long, P. M., Lugosi, G. and
Tsigler, A. (2019) for a comprehensive review.
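Both effective ranks of (1.1) are simple functions of the eigenvalue spectrum. A minimal sketch (the spectrum below is illustrative) computes $r_{k}$ and $R_{k}$ and checks that $R_{0}(\Sigma)=\mathrm{srank}_{4}(\Sigma^{1/2})$:

```python
import numpy as np

def r_k(eigs, k):
    # r_k = (sum_{i>k} lambda_i) / lambda_{k+1}; with 0-based arrays,
    # eigs[k:] is the tail lambda_{k+1}, lambda_{k+2}, ...
    tail = eigs[k:]
    return tail.sum() / eigs[k]

def R_k(eigs, k):
    # R_k = (sum_{i>k} lambda_i)^2 / sum_{i>k} lambda_i^2
    tail = eigs[k:]
    return tail.sum() ** 2 / (tail ** 2).sum()

# Eigenvalues of Sigma in non-increasing order (illustrative spectrum).
eigs = np.array([10.0, 5.0, 2.0, 1.0, 0.5, 0.25])

print(r_k(eigs, 0))   # tr(Sigma) / lambda_1 = 18.75 / 10 = 1.875
print(R_k(eigs, 0))

# R_0 coincides with the stable rank srank_4(Sigma^{1/2}).
srank4 = eigs.sum() ** 2 / (eigs ** 2).sum()
print(np.isclose(R_k(eigs, 0), srank4))  # True
```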
For the sake of simplicity, we define some extra notation; it has no special
meaning, but will make our formulas clearer. Denote
$\displaystyle
R_{k,2}(\Sigma):=\left(1-\sqrt{\frac{k}{\mathrm{srank}_{4}(\Sigma)}}\right)R_{k}(\Sigma).$
(1.2)
When $k=0$, we have $R_{k,2}(\Sigma)=R_{0}(\Sigma)=\mathrm{srank}_{4}(\Sigma^{1/2})$. Denote
$\mathfrak{R}_{k}(\Sigma):=\frac{(4p-k)^{2}}{8p}\frac{c^{\frac{16p^{2}}{(4p-k)^{2}}R_{k,2}(\Sigma)}}{R_{k,2}(\Sigma)},$
where $c<1$ is a constant.
Denote the operator norm of a matrix $A$ as $\left\|A\right\|$. Denote
$\left\|\alpha\right\|_{A}$ as $\sqrt{\alpha^{T}A\alpha}$. We use $S(r)$ to
denote the sphere in $\mathbb{R}^{p}$ with radius $r$ with respect to
$\left\|\cdot\right\|_{\ell_{2}}$, and $B(r)$ the corresponding ball. Denote by
$S_{A}(r)$ and $B_{A}(r)$ the corresponding sphere and ball with respect to
$\left\|\cdot\right\|_{A}$. Let $(\varepsilon_{i})_{i=1}^{N}$ be i.i.d.
symmetric Bernoulli random variables. Denote by $D$ the unit ball with respect
to the $L_{2}$ distance. If $F\subset L_{2}(\mu)$, let
$\left\\{G_{f}:\,f\in F\right\\}$ be the canonical Gaussian process
indexed by $F$, and denote
$\mathbb{E}\left\|G\right\|_{F}:=\mathrm{sup}\left\\{\mathbb{E}\underset{f\in
F^{\prime}}{\mathrm{sup}}G_{f}:\,F^{\prime}\subset F,\,F^{\prime}\text{ is
finite}\right\\}.$
Denote
$\Lambda_{s_{0},u}(\mathcal{F})=\mathrm{inf}\underset{f\in\mathcal{F}}{\mathrm{sup}}\sum_{s\geqslant
s_{0}}2^{s/2}\left\|f-\pi_{s}f\right\|_{(u^{2}2^{s})},\quad u\geqslant
1,\,s_{0}\geqslant 0.$
where the infimum is taken with respect to all admissible sequences
$(F_{s})_{s\geqslant 0}$ and $\pi_{s}f$ is the nearest point in $F_{s}$ to $f$
with respect to $\left\|\cdot\right\|_{(u^{2}2^{s})}$. Here
$\left\|\cdot\right\|_{(p)}=\mathrm{sup}_{1\leqslant q\leqslant
p}\left\|\cdot\right\|_{L_{q}}/\sqrt{q}$. An admissible sequence is a sequence
of partitions on $\mathcal{F}$ such that $\left|F_{s}\right|\leqslant
2^{2^{s}}$, and $\left|F_{0}\right|=1$, cf. Mendelson S. (2016c). Denote
by $d_{q}(F)$ the diameter of $F$ with respect to $\left\|\cdot\right\|_{L_{q}}$,
by $\left\|\cdot\right\|_{\psi_{2}}$ the sub-gaussian norm, by $[N]$ the
set $\left\\{1,2,\cdots,N\right\\}$, and by $C,c,c_{0},c_{1},c_{2},\cdots$
absolute constants.
### 1.2 Structure of this paper
Section 2 contains some preliminary knowledge. Section 3 contains our main
result, Theorem 3.1; its proof is decomposed into two parts, which are
postponed to sections 4 and 5. These two sections contain the estimation
error and the prediction error of the interpolation procedure in the linear
regression case. In section 6, we discuss why it is difficult to obtain an
oracle inequality beyond linear regression, and give a benign overfitting
result without truncated effective rank by using the Dvoretzky-Milman theorem
from Asymptotic Geometric Analysis.
## 2 Preliminaries
In this section, we introduce some preliminary techniques which will be used
to formulate and prove our main results. More precisely, we introduce the
localization method, used to derive an oracle inequality, and the small ball
method, used to provide a lower bound on the smallest singular value of the
design matrix.
### 2.1 Localization Method
To derive an oracle inequality, there are in general two approaches. The first
is the isomorphism method, which uses the isomorphism between the empirical
and the actual structures to derive an oracle inequality. The other is the
localization method, which is the one we use in this work.
Let $\mathcal{F}$ be a statistical model and $f^{\ast}\in\mathcal{F}$ an
oracle. The localization method uses an $L_{2}$-ball centered at $f^{\ast}$
with radius $r$ to localize the model $\mathcal{F}$, which allows us to study
the statistical properties of a learning procedure $\hat{f}$ on this small
ball. (The localizing ball need not be taken with respect to the $L_{2}$
distance, though that is our choice here; we refer the interested reader to
Chinot, G., Lecué, G. and Lerasle, M. (2020).) More precisely, the radius $r$
upper bounds the estimation error $\left\|f-f^{\ast}\right\|_{L_{2}}$ for all
$f\in\mathcal{F}\cap rD$. Therefore, an upper bound of $r$ yields the
estimation error of the learning procedure $\hat{f}$. Analogously to Chinot,
G., Lerasle, M. (2020), our localized set in this paper is
$\mathcal{F}_{H_{r,\rho}}=\left\\{\left<\cdot,\alpha\right>\,:\alpha\in
H_{r,\rho}\right\\},\quad\text{where }H_{r,\rho}:=B(\rho)\cap B_{\Sigma}(r),$
where $\rho$ is an upper bound of the estimation error, studied in section 4,
and $r$ is an upper bound of the prediction error, studied in section 5. The
prediction risk bound builds on the estimation risk bound: in this paper, we
obtain the estimation risk by studying the minimum-$\ell_{2}$-norm
interpolation procedure, and then obtain the prediction error by the
localization method. The optimal level of $r$, denoted $r^{\ast}$, is
carefully chosen via fixed points called complexity parameters.
#### 2.1.1 Complexity parameters
In classical statistical learning theory, there are two commonly used
complexity parameters, the multiplier complexity $r_{M}$ and the quadratic
complexity $r_{Q}$; we refer the reader to Mendelson, S. (2016b) for a
comprehensive overview. The quadratic complexity $r_{Q}$ is defined as follows:
$r_{Q,1}(\mathcal{F},\zeta)=\underset{r>0}{\mathrm{arginf}}\,\mathbb{E}\left\|G\right\|_{(\mathcal{F}-\mathcal{F})\cap
rD}\leqslant\zeta r\sqrt{N},$
and
$r_{Q,2}(\mathcal{F},\zeta)=\underset{r>0}{\mathrm{arginf}}\,\mathbb{E}\underset{w\in(\mathcal{F}-\mathcal{F})\cap
rD}{\mathrm{sup}}\left|\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\varepsilon_{i}w(X_{i})\right|\leqslant\zeta
r\sqrt{N},$
where $\zeta$ is an absolute constant.
$r_{Q}(\zeta_{1},\zeta_{2})=\mathrm{max}\left\\{r_{Q,1}(\mathcal{F},\zeta_{1}),r_{Q,2}(\mathcal{F},\zeta_{2})\right\\}$
is an intrinsic parameter. That is to say, $r_{Q}$ does not rely on noise
$\xi$, but only on $\mathcal{F}$. This parameter measures the ability of
$\mathcal{F}$ to estimate target function $f^{\ast}$.
The multiplier complexity $r_{M}$ is defined as follows:
$\phi_{N}(r):=\underset{w\in(\mathcal{F}-\mathcal{F})\cap
rD}{\mathrm{sup}}\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\varepsilon_{i}\xi_{i}w(X_{i}),$
$r_{M,1}(\kappa,\delta)=\underset{r>0}{\mathrm{inf}}\left\\{\mathbb{P}\left\\{\phi_{N}(r)\leqslant
r^{2}\kappa\sqrt{N}\right\\}\geqslant 1-\delta\right\\},$
and
$r_{0}(\kappa)=\underset{r>0}{\mathrm{inf}}\underset{w\in(\mathcal{F}-\mathcal{F})\cap
rD}{\mathrm{sup}}\left\\{{\left\|\xi
w(X)\right\|_{L_{2}}\leqslant\frac{1}{2}\sqrt{N}\kappa r^{2}}\right\\},$
where $\kappa$ is an absolute constant.
Then $r_{M}(\kappa,\delta)=r_{M,1}+r_{0}$ is called the multiplier complexity;
it measures the interplay between the noise $\xi$ and the function class
$\mathcal{F}$.
Classical learning theory employs $r_{M}$ to measure the ability of
$\mathcal{F}$ to absorb the noise $\xi$. However, this parameter is not
meaningful in the interpolation case: the interpolant $\hat{f}$ incurs no loss
on $r_{M}$ because it fits $(X_{i},Y_{i})_{i=1}^{N}$ perfectly, so $r_{M}=0$.
At the same time, since $\hat{f}$ interpolates $(X_{i},Y_{i})_{i=1}^{N}$, it
is influenced by the noise $\xi$, so $r_{Q}$ is no longer an intrinsic
parameter: $r_{Q}$ depends on $\xi$ implicitly because $\hat{f}$ has to
estimate both signal and noise. This is the biggest difference between the
interpolation case and classical learning theory. Therefore, our complexity
parameter is a variant of the quadratic complexity, defined in Equation 3.5.
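The two phenomena just described, zero multiplier loss for an interpolant and noise-dependence of the fitted function, can be seen in a toy simulation. The sketch below is purely illustrative (NumPy, minimum-$\ell_{2}$-norm interpolation via the pseudo-inverse; the dimensions are arbitrary): it checks that the minimum-norm interpolant fits the data exactly for any noise level, while the interpolant itself changes with the realized noise $\xi$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 30, 100                                  # overparameterized: p > N
X = rng.standard_normal((N, p))
alpha_star = rng.standard_normal(p) / np.sqrt(p)

def min_norm_interpolant(X, y):
    """Minimum-l2-norm solution of X @ alpha = y."""
    return np.linalg.pinv(X) @ y

solutions = []
for noise_scale in (0.0, 1.0):
    xi = noise_scale * rng.standard_normal(N)
    y = X @ alpha_star + xi
    alpha_hat = min_norm_interpolant(X, y)
    assert np.allclose(X @ alpha_hat, y)        # zero empirical loss: r_M = 0
    solutions.append(alpha_hat)

# the fitted interpolant depends on the realized noise: r_Q is not intrinsic
assert not np.allclose(solutions[0], solutions[1])
```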
The localization method employs complexity parameters to set the radius of the
localized set. It then remains to show that the interpolant $\hat{f}$ lies in
this set with high probability; this step is guaranteed by an exclusion
argument.
#### 2.1.2 Exclusion
For $f\in\mathcal{F}$ to be an interpolant, its empirical excess risk must lie
below a fixed level with high probability. To see this, we first decompose
$\mathrm{inf}_{f\in\mathcal{F}}\mathrm{P}_{N}\mathcal{L}_{f}$ and bound it
from below.
There are two decompositions of empirical excess risk into quadratic and
multiplier components. The first one is as follows:
$\displaystyle\underset{f\in\mathcal{F}}{\mathrm{inf}}\,\mathrm{P}_{N}\mathcal{L}_{f}$
$\displaystyle=$
$\displaystyle\underset{f\in\mathcal{F}}{\mathrm{inf}}\frac{1}{N}\sum_{i=1}^{N}(f(X_{i})-Y_{i})^{2}-(f^{\ast}(X_{i})-Y_{i})^{2}$
(2.1) $\displaystyle\geqslant$
$\displaystyle\underset{f\in\mathcal{F}}{\mathrm{inf}}\left\\{\frac{1}{N}\sum_{i=1}^{N}(f-f^{\ast})^{2}(X_{i})\right\\}-2\underset{f\in\mathcal{F}}{\mathrm{sup}}\left\\{\frac{1}{N}\sum_{i=1}^{N}\xi_{i}(f-f^{\ast})(X_{i})\right\\}$
$\displaystyle:=$
$\displaystyle\underset{f\in\mathcal{F}}{\mathrm{inf}}\,\mathrm{P}_{N}\mathcal{Q}_{f-f^{\ast}}-2\underset{f\in\mathcal{F}}{\mathrm{sup}}\,\mathrm{P}_{N}\mathcal{M}_{f-f^{\ast}}.$
This kind of decomposition requires a lower bound on the quadratic component
$\mathrm{P}_{N}\mathcal{Q}_{f-f^{\ast}}$, which is provided by the small ball
method. We will use this approach in subsection 6.2 to obtain a sufficient
condition for benign overfitting without truncated effective rank.
We turn to the second kind of decomposition. Our localized statistical model
$\mathcal{F}=\mathcal{F}_{H_{r,\rho}}$ is a class of linear functionals on
$\mathbb{R}^{p}$. Recall that the optimal choice of $r$ is denoted
$r^{\ast}$; set $\theta=r^{\ast}/r\in(0,1)$ and
$\alpha_{0}=\alpha^{\ast}+\theta(\alpha-\alpha^{\ast})$, so that
$\left\|\Sigma^{1/2}(\alpha_{0}-\alpha^{\ast})\right\|_{\ell_{2}}=r^{\ast}$
and $\left\|\alpha_{0}-\alpha^{\ast}\right\|_{\ell_{2}}\leqslant\theta\rho$.
Denote
$Q_{r,\rho}=\underset{\alpha-\alpha^{\ast}\in
H_{r,\rho}}{\mathrm{sup}}\left|\frac{1}{N}\sum_{i=1}^{N}\left<X_{i},\alpha-\alpha^{\ast}\right>^{2}-\mathbb{E}\left<X_{i},\alpha-\alpha^{\ast}\right>^{2}\right|,$
and
$M_{r,\rho}=\underset{\alpha-\alpha^{\ast}\in
H_{r,\rho}}{\mathrm{sup}}\left|\frac{2}{N}\sum_{i=1}^{N}\xi_{i}\left<X_{i},\alpha-\alpha^{\ast}\right>\right|.$
Then
$\displaystyle\underset{f\in\mathcal{F}}{\mathrm{inf}}\,\mathrm{P}_{N}\mathcal{L}_{f}$
$\displaystyle=$
$\displaystyle\underset{f\in\mathcal{F}}{\mathrm{inf}}\frac{1}{N}\sum_{i=1}^{N}(f(X_{i})-Y_{i})^{2}-(f^{\ast}(X_{i})-Y_{i})^{2}$
(2.2) $\displaystyle\geqslant$
$\displaystyle\theta^{-2}\left((r^{\ast})^{2}-Q_{r^{\ast},\rho}\right)-2M_{r^{\ast},\rho}.$
Suppose $(\xi_{i})_{i=1}^{N}$ are i.i.d. sub-gaussian random variables. Then,
by Bernstein’s inequality, with probability at least $1-\mathrm{exp}(-cN)$ we
have
$\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{2}\geqslant\frac{1}{2}\mathbb{E}\xi^{2}.$
Because the interpolation procedure $\hat{f}$ interpolates all the inputs
$(X_{i},Y_{i})_{i=1}^{N}$, the empirical excess risk of $\hat{f}$ is
determined by the noise,
$\mathrm{P}_{N}\mathcal{L}_{\hat{f}}=\mathrm{P}_{N}\left(\ell_{\hat{f}}-\ell_{f^{\ast}}\right)=-\mathrm{P}_{N}\ell_{f^{\ast}}=-\mathrm{P}_{N}\xi^{2},$
so with probability at least $1-\mathrm{exp}(-cN)$ we have
$\mathrm{P}_{N}\mathcal{L}_{\hat{f}}\leqslant-\frac{1}{2}\mathbb{E}\xi^{2}.$
That is to say, any interpolant must satisfy this upper bound; a function $f$
that violates it is excluded, because it has little probability of being an
interpolant.
This upper bound differs from the non-interpolation setting, where the
corresponding level is $0$. It is smaller than $0$ because the interpolation
procedure is more restrictive (in the sense that the interpolation space is
smaller than the version space) than a non-interpolation procedure such as
ERM; the smaller upper bound excludes more functions.
Therefore, we just need to upper bound the multiplier and quadratic processes
in Equation 2.2 so that the lower bound of the empirical excess risk over all
$f\in\mathcal{F}_{H_{r,\rho}}$ is greater than $-\mathbb{E}\xi^{2}/2$ whenever
$r$ is greater than a fixed level $r^{\ast}$. Functions in
$\mathcal{F}_{H_{r,\rho}}$ are then excluded from being interpolants with high
probability, so with high probability the interpolant lies in
$H_{r^{\ast},\rho}$ and we can upper bound its prediction risk by $r^{\ast}$.
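The identity $\mathrm{P}_{N}\mathcal{L}_{\hat{f}}=-\mathrm{P}_{N}\xi^{2}$ underlying the exclusion argument can be checked directly in a small simulation (illustrative NumPy sketch with the minimum-$\ell_{2}$-norm interpolant; not part of the formal argument):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 40, 200                               # p > N: exact interpolation is possible
X = rng.standard_normal((N, p))
alpha_star = rng.standard_normal(p) / np.sqrt(p)
xi = rng.standard_normal(N)
y = X @ alpha_star + xi

alpha_hat = np.linalg.pinv(X) @ y            # interpolates: X @ alpha_hat = y
emp_risk_hat = np.mean((X @ alpha_hat - y) ** 2)     # = 0
emp_risk_star = np.mean((X @ alpha_star - y) ** 2)   # = P_N xi^2
excess = emp_risk_hat - emp_risk_star                # = P_N L_{f_hat}
assert np.isclose(excess, -np.mean(xi ** 2))
```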
#### 2.1.3 Multiplier process and Quadratic process
We employ upper bounds of multiplier process and quadratic process in
Mendelson S. (2016c):
###### Lemma 2.1 (Theorem 4.4 in Mendelson S. (2016c): Upper bound of the
multiplier process).
There exist absolute constants $c_{1}$ and $c_{2}$ for which the following
holds. If $\xi\in L_{\psi_{2}}$ then for every $u,w\geqslant 8$, with
probability at least
$1-2\mathrm{exp}(-c_{1}u^{2}2^{s_{0}})-2\mathrm{exp}(-c_{1}Nw^{2})$,
$\underset{f\in\mathcal{F}}{\mathrm{sup}}\left|\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\left(\xi_{i}f(X_{i})-\mathbb{E}\xi
f\right)\right|\leqslant
cuw\left\|\xi\right\|_{\psi_{2}}\tilde{\Lambda}_{s_{0},u}(\mathcal{F}),$
where $c$ is an absolute constant.
###### Lemma 2.2 (Theorem 1.13 in Mendelson S. (2016c): Upper bound of
empirical process).
There exists a constant $c(q)$, depending only on $q$, for which the
following holds: with probability at least
$1-2\mathrm{exp}\left(-c_{0}u^{2}2^{s_{0}}\right)$,
$\underset{f\in\mathcal{F}}{\mathrm{sup}}\left|\frac{1}{N}\sum_{i=1}^{N}\left(f^{2}(X_{i})-\mathbb{E}f^{2}\right)\right|\leqslant\frac{c(q)}{N}\left(u^{2}\tilde{\Lambda}_{s_{0},u}^{2}(\mathcal{F})+u\sqrt{N}d_{q}(\mathcal{F})\tilde{\Lambda}_{s_{0},u}(\mathcal{F})\right),$
In particular, if $\mathcal{F}$ is
a sub-gaussian class, then with probability at least
$1-2\mathrm{exp}(-c_{0}N)$,
$\underset{f\in\mathcal{F}}{\mathrm{sup}}\left|\frac{1}{N}\sum_{i=1}^{N}\left(f^{2}(X_{i})-\mathbb{E}f^{2}\right)\right|\leqslant
cL^{2}\mathbb{E}\left\|G\right\|_{\mathcal{F}}^{2}.$
Set $2^{s_{0}}=k_{\mathcal{F}}$, where
$k_{\mathcal{F}}=\left(\mathbb{E}\left\|G\right\|_{\mathcal{F}}/d_{2}(\mathcal{F})\right)^{2}$
is the Dvoretzky-Milman dimension of $\mathcal{F}$; we refer the reader to
Artstein-Avidan, S., Giannopoulos, A. and Milman, V.D. (2015) for a
comprehensive overview. For $\ell_{2}^{p}$, the Dvoretzky-Milman dimension is
$k\sim p$, see e.g. Theorem 5.4.1 in Artstein-Avidan, S., Giannopoulos, A. and
Milman, V.D. (2015). The quantity $\tilde{\Lambda}(\mathcal{F})$ is called the
$\Lambda$-complexity of $\mathcal{F}$; it generalizes the Gaussian complexity
in that it only requires $\mathcal{F}$ to have moments of some finite order,
rather than moments of all orders. In particular, when $\mathcal{F}$ happens
to be a sub-gaussian class, $\tilde{\Lambda}(\mathcal{F})$ is equivalent to
$\mathbb{E}\left\|G\right\|_{\mathcal{F}}$. We refer the reader to Mendelson
S. (2016c) for a comprehensive review.
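The relation $k\sim p$ for $\ell_{2}^{p}$ can be checked numerically. For the class of linear functionals indexed by the Euclidean unit ball with isotropic $X$, the canonical Gaussian process satisfies $\sup_{F}G_{f}=\left\|g\right\|_{\ell_{2}}$ for a standard Gaussian $g$, and $d_{2}(F)=2$, so $k_{\mathcal{F}}\approx p/4$. A Monte-Carlo sketch (illustrative, NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 400
# F = {<., alpha> : ||alpha||_2 <= 1} with isotropic X:
# sup over F of the canonical gaussian process is ||g||_2, and d_2(F) = 2.
g = rng.standard_normal((20000, p))
E_norm_G = np.linalg.norm(g, axis=1).mean()    # ~ sqrt(p)
k_F = (E_norm_G / 2.0) ** 2                    # Dvoretzky-Milman dimension
assert 0.2 * p < k_F < 0.3 * p                 # k_F ~ p / 4, i.e. k ~ p
```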
In this paper, our function class consists of linear functionals on
$\mathbb{R}^{p}$, indexed by ellipsoids, since we assume the random vector $X$
is not isotropic. Given the covariance matrix $\Sigma$ of $X$, by
Lemma 5 in Chinot, G., Lerasle, M. (2020) we obtain
$\displaystyle\mathbb{E}\left\|G\right\|_{\mathcal{F}_{H_{r,\rho}}}\leqslant\sqrt{2}\sqrt{\sum_{i=1}^{p}\mathrm{min}\left\\{\lambda_{i}(\Sigma)\rho^{2},r^{2}\right\\}}.$
(2.3)
However, estimating $\tilde{\Lambda}(H_{r,\rho})$ is non-trivial unless
$H_{r,\rho}$ is a sub-gaussian class. Note that the deviation in Lemma 2.2 is
neither optimal nor user-friendly (in the sense that the deviation parameter
$u$ is coupled with the complexity parameter
$\mathbb{E}\left\|G\right\|_{\mathcal{F}}$). In fact, the upper bound of the
quadratic process given by Dirksen, S. (2015) has optimal deviation when
$\mathcal{F}$ is a sub-gaussian class, but this does not fit our heavy-tailed
setup, and obtaining an upper bound of the quadratic process in the
heavy-tailed case is non-trivial. When $\mathcal{F}$ is sub-gaussian, we can
omit the parameter $r_{2}^{\ast}$ defined in section 3 and get a better bound
on $\left\|\Gamma\right\|$ in subsection 4.2, which would yield a more precise
result; we omit the proof. To make our result uniform over both the
heavy-tailed and the sub-gaussian case, we employ the bound from Mendelson S.
(2016c), though it does not yield an optimal bound.
### 2.2 Small Ball Method
To deal with the heavy-tailed case, we employ the small ball method, a crucial
argument in Asymptotic Geometric Analysis, see Artstein-Avidan, S.,
Giannopoulos, A. and Milman, V.D. (2015). In statistical learning theory, the
small ball method was first developed in Koltchinskii, V. and Mendelson, S.
(2015). It can be viewed as a kind of Paley-Zygmund argument: it assumes the
random vector is sufficiently spread out, so that it has many large
coordinates. We refer the reader to Mendelson, S. (2016b) and Mendelson, S.
and Paouris, G. (2019) for a comprehensive overview.
The classical small ball assumption is a lower bound on the tail of a random
function, that is,
$\mathbb{P}\left\\{\left|X_{i}\right|\geqslant\kappa\left\|X\right\|_{L_{2}}\right\\}\geqslant\theta,$
which can be verified by the Paley-Zygmund inequality, see Lemma 3.1 in
Kallenberg, O. (2002), under an $L_{4}$-$L_{2}$ norm equivalence condition. In
this paper, the small ball method is used to obtain a lower bound on the
smallest singular value of the design matrix: in statistical learning theory,
the small ball assumption leads to a lower bound on the quadratic component in
Equation 2.1, which in turn provides a lower bound on the smallest singular
value. As we do not require the coordinates of the input vector $X$ to be
independent, we need a small ball method that works without independence.
Fortunately, the independence assumption is relaxed in Mendelson, S. and
Paouris, G. (2019), and the corresponding definition of the small-ball
assumption is as follows:
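The Paley-Zygmund route to the classical small ball estimate can be illustrated numerically. Applying Paley-Zygmund to $Z=X^{2}$ at level $\theta=\kappa^{2}$ gives $\mathbb{P}\left\{\left|X\right|\geqslant\kappa\left\|X\right\|_{L_{2}}\right\}\geqslant(1-\kappa^{2})^{2}(\mathbb{E}X^{2})^{2}/\mathbb{E}X^{4}$, which under $L_{4}$-$L_{2}$ equivalence is bounded below by a constant. A sketch (illustrative only; standard Gaussian $X$, where $\mathbb{E}X^{4}=3(\mathbb{E}X^{2})^{2}$):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(200_000)    # E x^2 = 1, E x^4 = 3: L4-L2 equivalence holds
kappa = 0.5

# Paley-Zygmund lower bound with Z = x^2 and theta = kappa^2:
pz_bound = (1 - kappa**2) ** 2 * np.mean(x**2) ** 2 / np.mean(x**4)
empirical = np.mean(np.abs(x) >= kappa * np.sqrt(np.mean(x**2)))
assert empirical >= pz_bound        # small ball probability dominates the bound
```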
###### Definition 2.1 (Small Ball Assumption: Mendelson, S. and Paouris, G.
(2019)).
The random vector $X\in\mathbb{R}^{p}$ satisfies a weak small ball
assumption (denoted wSBA) with constants $\mathcal{L},\kappa$ if for every
$1\leqslant k\leqslant p-1$, every $k$-dimensional subspace $F$ and every
$z\in\mathbb{R}^{p}$,
$\mathbb{P}\left\\{\left\|P_{F}X-z\right\|_{\ell_{2}}\leqslant\kappa\sqrt{k}\right\\}\leqslant\left(\mathcal{L}\kappa\right)^{k},$
where $P_{F}$ is the orthogonal projection onto the subspace $F$.
There are many cases in which a random vector $X$ satisfies wSBA; we refer the
reader to Appendix A in Mendelson, S. and Paouris, G. (2019) for a
comprehensive overview.
## 3 Main Result
In this section, we formulate our main result, Theorem 3.1. Before this, we
have to state our final assumption and define some parameters.
First, we need the following assumption: there are constants $\delta_{1}>0$
and $\delta_{2}\geqslant 1$ such that
$\displaystyle\left(\frac{1}{p}\sum_{i=1}^{p}\left\|\Sigma^{1/2}e_{i}\right\|_{\ell_{2}}^{2+\delta_{1}}\right)^{\frac{1}{2+\delta_{1}}}\leqslant\delta_{2}\sqrt{\frac{\mathrm{tr}(\Sigma)}{p}},$
(3.1)
where $(e_{i})_{i=1}^{p}$ is an orthonormal basis (ONB) of $\mathbb{R}^{p}$.
This assumption is used to select a proper (in the sense of a uniform lower
bound on the inner products) subset $\sigma_{0}\subset[p]$ such that
$\left|\sigma_{0}\right|\geqslant c_{0}p$, where $c_{0}$ depends on
$\delta_{1}$ and $\delta_{2}$. The assumption is not restrictive; see
subsection 3.1 for an example.
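To see that Equation 3.1 is indeed mild, one can check it numerically for a diagonal covariance with the exponentially decaying spectrum used in subsection 3.1 (illustrative NumPy sketch; the choices $\delta_{1}=2$ and $\delta_{2}^{2}=1+1/\varepsilon$ follow that example):

```python
import numpy as np

p, eps = 1000, 0.1
lam = np.exp(-np.arange(1, p + 1)) + eps   # diagonal Sigma in the standard ONB
norms = np.sqrt(lam)                       # ||Sigma^{1/2} e_i||_2 = sqrt(lambda_i)

delta1 = 2.0
delta2 = np.sqrt(1 + 1 / eps)              # as in the example of subsection 3.1
lhs = np.mean(norms ** (2 + delta1)) ** (1 / (2 + delta1))
rhs = delta2 * np.sqrt(lam.sum() / p)
assert lhs <= rhs                          # Equation 3.1 holds
```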
Second, we define three parameters. Define $k^{\ast}$ as the smallest
integer such that
$\displaystyle
p\log{\left(1+\frac{\sqrt{d_{q}(D)\frac{\tilde{\Lambda}(D)}{\sqrt{p}}+\frac{\tilde{\Lambda}^{2}(D)}{p}+\lambda_{1}(\Sigma)}\sqrt{\frac{p}{\mathrm{tr}(\Sigma)}}}{\sqrt{3c_{0}(1-\mathfrak{R}_{k}(\Sigma))-1}}\right)}\leqslant
N\frac{c_{0}\mathfrak{R}_{k}(\Sigma)+1-c_{0}}{2}+\log{p}$ (3.2)
where the minimum of the empty set is defined as $\infty$. Denote $\nu$ as
follows:
$\displaystyle\nu:=N\frac{c_{0}\mathfrak{R}_{k^{\ast}}(\Sigma)+1-c_{0}}{2}+\log{p}-p\log{\left(1+\frac{\sqrt{d_{q}(D)\frac{\tilde{\Lambda}(D)}{\sqrt{p}}+\frac{\tilde{\Lambda}^{2}(D)}{p}+\lambda_{1}(\Sigma)}\sqrt{\frac{p}{\mathrm{tr}(\Sigma)}}}{\sqrt{3c_{0}(1-\mathfrak{R}_{k^{\ast}}(\Sigma))-1}}\right)}.$
(3.3)
The parameter $k^{\ast}$ is a level that balances the two sides of Equation
3.2.
Denote
$\displaystyle\rho=\left\|\alpha^{\ast}\right\|_{\ell_{2}}+\sqrt{\frac{2}{3(1-c_{0})c_{0}-1}}\sqrt{\frac{p}{\mathrm{tr}(\Sigma)}}\frac{\left\|\xi\right\|_{\psi_{2}}}{\varepsilon},$
(3.4)
where $\varepsilon$ is a constant. The quantity $\rho$ will be the upper bound
of the estimation error.
Denote
$r_{1}^{\ast}:=\underset{r>0}{\mathrm{arginf}}\left\\{\tilde{\Lambda}(H_{r,\rho})\leqslant\sqrt{\zeta_{1}p}r\right\\},\quad
r_{2}^{\ast}:=\underset{r>0}{\mathrm{arginf}}\left\\{d_{q}(\mathcal{F}_{r,\rho})\leqslant\zeta_{2}r\right\\}.$
$\displaystyle r^{\ast}:=r_{1}^{\ast}+r_{2}^{\ast},$ (3.5)
where $\zeta_{1},\zeta_{2}$ are absolute constants. In particular, when
$H_{r,\rho}$ is a sub-gaussian class, this definition reduces to that of
Chinot, G., Lerasle, M. (2020). The quantity $r^{\ast}$ will be the upper
bound of the prediction risk.
Now we can formulate our main result as follows.
###### Theorem 3.1.
Suppose $X=\Sigma^{1/2}Z\in\mathbb{R}^{p}$ is a random vector, where $Z$ is an
isotropic random vector that satisfies wSBA with constants
$\mathcal{L},\kappa$, and $\Sigma$ satisfies Equation 3.1. Let
$(X_{i})_{i=1}^{N}$ be i.i.d. copies of $X$, forming the rows of a random matrix
$\mathbf{X}$. Let $\hat{\alpha}$ be an interpolation solution on
$\left(X_{i},Y_{i}\right)_{i=1}^{N}$, where
$Y_{i}=\left<X_{i},\alpha^{\ast}\right>+\xi_{i}$, and $(\xi_{i})_{i=1}^{N}$
are i.i.d. sub-gaussian random variables. Then there exists an absolute constant
$c$ such that, with probability at least
$1-\mathrm{exp}(-\nu)-\mathrm{exp}(-cN)$,
$\left\|\hat{\alpha}-\alpha^{\ast}\right\|_{\ell_{2}}\leqslant\rho,\quad\left\|\Sigma^{1/2}\left(\hat{\alpha}-\alpha^{\ast}\right)\right\|_{\ell_{2}}\leqslant
r^{\ast},$
where $\rho$, $\nu$ and $r^{\ast}$ are defined in Equations 3.4, 3.3 and 3.5,
respectively.
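Theorem 3.1 can be illustrated (not verified, since the constants are unspecified) by simulating the minimum-$\ell_{2}$-norm interpolant on the exponentially decaying spectrum of the example below. The sketch assumes Gaussian data, and the numeric thresholds are generous slack values, not the constants of the theorem:

```python
import numpy as np

rng = np.random.default_rng(6)
N, eps = 200, 0.1
p = int(N * np.log(1 / eps))
lam = np.exp(-np.arange(1, p + 1)) + eps         # spectrum lambda_i = e^{-i} + eps
alpha_star = rng.standard_normal(p)
alpha_star /= np.linalg.norm(alpha_star)          # ||alpha*||_2 = 1

X = rng.standard_normal((N, p)) * np.sqrt(lam)    # rows X_i = Sigma^{1/2} Z_i
xi = 0.1 * rng.standard_normal(N)
y = X @ alpha_star + xi

alpha_hat = np.linalg.pinv(X) @ y                 # minimum-l2-norm interpolant
assert np.allclose(X @ alpha_hat, y)              # alpha_hat interpolates

est_err = np.linalg.norm(alpha_hat - alpha_star)
pred_err = np.linalg.norm(np.sqrt(lam) * (alpha_hat - alpha_star))
assert est_err < 2.0    # qualitatively of order ||alpha*|| + c ||xi||_psi2 sqrt(p/tr(Sigma))
assert pred_err < 1.2   # qualitatively of order ||alpha*|| sqrt(tr(Sigma)/p)
```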
### 3.1 Example
Consider the simple example studied in Bartlett, P.L., Long, P. M., Lugosi,
G. and Tsigler, A. (2019), Chinot, G., Lerasle, M. (2020) and Tsigler, A. and
Bartlett, P. (2020). When $X$ is a sub-gaussian random vector, we have
$\tilde{\Lambda}(\mathcal{F})\sim\mathbb{E}\left\|G\right\|_{\mathcal{F}}$ at
once. Therefore, with probability at least $1-\mathrm{exp}(-cN)$, we have
$\left\|\Gamma\right\|\lesssim\sqrt{N}\lambda_{1}(\Sigma).$
Consider a concrete case with $\varepsilon=o(1)$ such that for any $k$,
$\lambda_{k}(\Sigma)=\mathrm{e}^{-k}+\varepsilon,\quad\text{with}\quad\log{\frac{1}{\varepsilon}}<N,\quad
p=cN\log{\frac{1}{\varepsilon}}.$
If $p\varepsilon=\omega(1)$, then $\mathrm{tr}(\Sigma)=O(1)$, and
$d_{q}(D)\frac{\tilde{\Lambda}(\mathcal{F})}{\sqrt{p}}+\frac{\tilde{\Lambda}^{2}(\mathcal{F})}{p}+\lambda_{1}(\Sigma)=O(1)$.
So
$\sqrt{\frac{p}{\mathrm{tr}(\Sigma)}}\sqrt{d_{q}(D)\frac{\tilde{\Lambda}(D)}{\sqrt{p}}+\frac{\tilde{\Lambda}^{2}(D)}{p}+\lambda_{1}(\Sigma)}=O(1).$
To choose $k$ such that Equation 3.2 holds, we have to bound
$\mathfrak{R}_{k}(\Sigma)$ from above. First, we lower bound
$R_{k}(\Sigma)$:
$R_{k}(\Sigma)=\Theta\left(\frac{\left(\mathrm{e}^{-k}+p\varepsilon\right)^{2}}{\mathrm{e}^{-2k}+p\varepsilon^{2}}\right)$
Setting $k=\log{(1/\varepsilon)}<\frac{p}{2}$, we obtain
$R_{k}(\Sigma)=\Theta(p)$.
Next we estimate $\mathrm{srank}_{4}(\Sigma)$: we have
$\mathrm{srank}_{4}(\Sigma)=\Theta(p)$, so $R_{k,2}(\Sigma)=\Theta(p)$. It
follows that $\mathfrak{R}_{k}(\Sigma)=\Theta(2^{-p})$, and moreover
$2^{-p}=\Theta(\varepsilon^{cN})$.
Therefore, for the left-hand side (LHS) of Equation 3.2,
$\displaystyle\mathrm{LHS}/p$ $\displaystyle=$
$\displaystyle\Theta\left(\log{\left(1+\frac{1}{\sqrt{3c_{0}(1-2^{-p})-1}}\right)}\right)$
$\displaystyle=$
$\displaystyle\Theta\left(-\log{\left(3c_{0}(1-\varepsilon^{cN})-1\right)}\right)$
$\displaystyle=$
$\displaystyle\Theta\left(-\log{\left(1-\varepsilon^{cN}\right)}\right).$
For the right-hand side of Equation 3.2, we have
$\frac{N}{p}\frac{c_{0}\mathfrak{R}_{k}(\Sigma)+1-c_{0}}{2}=\Theta\left(\frac{1-c_{0}(1-2^{-p})}{2c\log{(1/\varepsilon)}}\right)=\Theta\left(\varepsilon^{cN}\right),$
and
$\frac{\log{p}}{p}=\Theta\left(\frac{\log{c}+\log{N}+\log{\log{(1/\varepsilon)}}}{cN\log{(1/\varepsilon)}}\right)=\Theta\left(\frac{\log{N}}{N}\right).$
Recall that $\varepsilon=o(1)$, so Equation 3.2 holds for $N$ large enough.
Choosing $\delta_{2}$ such that $\delta_{2}^{2}\geqslant 1+1/\varepsilon$,
Equation 3.1 holds as well. Therefore, we can set
$\nu=\frac{\log{N}}{N}-\varepsilon^{cN}+\log{\left(1-\varepsilon^{cN}\right)}.$
Next, we estimate $r^{\ast}$. Since $\mathcal{F}_{H_{r,\rho}}$ is a sub-
gaussian class, $r_{2}^{\ast}=0$, and
$r_{1}^{\ast}=\underset{r>0}{\mathrm{arginf}}\left\\{\sum_{i=1}^{p}\mathrm{min}\left\\{r^{2},\lambda_{i}(\Sigma)\rho^{2}\right\\}\leqslant\zeta_{1}pr^{2}\right\\}.$
So
$r^{\ast}=r_{1}^{\ast}\leqslant\frac{2}{\sqrt{\zeta_{1}}}\left\|\alpha^{\ast}\right\|_{\ell_{2}}\sqrt{\frac{\mathrm{tr}(\Sigma)}{p}}$.
Therefore, by Theorem 3.1, with probability at least
$1-\mathrm{exp}(-\nu)-\mathrm{exp}(-cN)$, we have
$\displaystyle\left\|\hat{\alpha}-\alpha^{\ast}\right\|_{\ell_{2}}$
$\displaystyle\leqslant$
$\displaystyle\left\|\alpha^{\ast}\right\|_{\ell_{2}}+C\left\|\xi\right\|_{\psi_{2}}\sqrt{\frac{p}{p\varepsilon+1}}\leqslant
c\left\|\alpha^{\ast}\right\|_{\ell_{2}},$
$\displaystyle\left\|\Sigma^{1/2}(\hat{\alpha}-\alpha^{\ast})\right\|_{\ell_{2}}^{2}$
$\displaystyle\leqslant$
$\displaystyle\frac{4}{\zeta_{1}}\left\|\alpha^{\ast}\right\|_{\ell_{2}}^{2}\left(\frac{p\varepsilon+1}{p}\right)=\frac{4}{\zeta_{1}}\left\|\alpha^{\ast}\right\|_{\ell_{2}}^{2}\left(\varepsilon+\frac{1}{c\log{(1/\varepsilon)}N}\right),$
provided the signal-to-noise ratio
$\left\|\alpha^{\ast}\right\|_{\ell_{2}}/\left\|\xi\right\|_{\psi_{2}}$ is
greater than $\sqrt{\frac{p}{p\varepsilon+1}}$.
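The order estimates in this example can be checked numerically. The sketch below is illustrative only; it takes $c=1$ and $\varepsilon=0.1$, so that $p\varepsilon^{2}$ is large and the $\varepsilon$-part of the spectrum dominates (an assumption made here for the finite-size check, slightly stronger than $p\varepsilon=\omega(1)$). It confirms $R_{k}(\Sigma)=\Theta(p)$ at $k=\log(1/\varepsilon)$ and $\mathrm{srank}_{4}(\Sigma)=\Theta(p)$:

```python
import numpy as np

eps, N = 0.1, 200
p = int(N * np.log(1 / eps))               # p = c N log(1/eps) with c = 1
lam = np.exp(-np.arange(1, p + 1)) + eps
k = int(np.ceil(np.log(1 / eps)))          # k = log(1/eps)

tail = lam[k:]
R_k = tail.sum() ** 2 / (tail**2).sum()            # truncated stable rank R_k(Sigma)
srank4 = (lam**2).sum() ** 2 / (lam**4).sum()      # srank_4(Sigma)
assert R_k > 0.5 * p                               # R_k(Sigma) = Theta(p)
assert srank4 > 0.3 * p                            # srank_4(Sigma) = Theta(p)
```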
## 4 Estimation Error
In this section, we obtain an upper bound of
$\left\|\hat{\alpha}-\alpha^{\ast}\right\|_{\ell_{2}}$ that holds with high
probability. We
have
$\hat{\alpha}=\mathbf{X}^{\dagger}\mathbf{Y}=\mathbf{X}^{\dagger}\mathbf{X}\alpha^{\ast}+\mathbf{X}^{\dagger}\xi.$
Therefore,
$\displaystyle\left\|\hat{\alpha}-\alpha^{\ast}\right\|_{\ell_{2}}=\left\|\left(\mathbf{X}^{\dagger}\mathbf{X}-I\right)\alpha^{\ast}+\mathbf{X}^{\dagger}\xi\right\|_{\ell_{2}}\leqslant\left\|\alpha^{\ast}\right\|_{\ell_{2}}+\left\|\mathbf{X}^{\dagger}\right\|\left\|\xi\right\|_{\ell_{2}}.$
(4.1)
For $\left\|\xi\right\|_{\ell_{2}}$, we can obtain
$\left\|\xi\right\|_{\ell_{2}}\leqslant\sqrt{N}\left\|\xi\right\|_{\psi_{2}},$
with probability at least $1-\mathrm{exp}(-N)$ by Bernstein’s inequality.
To upper bound $\left\|\mathbf{X}^{\dagger}\right\|$, we need a lower bound of
the smallest singular value of $\mathbf{X}$ in high probability.
###### Lemma 4.1 (Lower bound of the smallest singular value).
Suppose $X=\Sigma^{1/2}Z\in\mathbb{R}^{p}$ is a random vector, where $Z$ is an
isotropic random vector that satisfies wSBA with constants
$\mathcal{L},\kappa$, and $\Sigma$ satisfies Equation 3.1. Let
$(X_{i})_{i=1}^{N}$ be i.i.d. copies of $X$, forming the rows of a random matrix
$\mathbf{X}$. Then there exists a constant $c_{0}$ such that the smallest
singular value of $\mathbf{X}$ has lower bound
$s_{\mathrm{min}}(\mathbf{X})\geqslant\varepsilon\sqrt{\frac{3c_{0}(1-c_{0})-1}{2}}\cdot\sqrt{N}\sqrt{\frac{\mathrm{tr}(\Sigma)}{p}}\gtrsim\sqrt{N\frac{\mathrm{tr}(\Sigma)}{p}},\quad\forall\varepsilon\in(0,1)$
with probability at least $1-\mathrm{exp}\left(-\nu\right)-\mathrm{exp}(-cN)$,
where $c$ is an absolute constant.
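Lemma 4.1 can be probed numerically (an illustrative NumPy sketch with Gaussian $Z$, which satisfies wSBA; the factor $0.3$ below is arbitrary slack, not the constant of the lemma):

```python
import numpy as np

rng = np.random.default_rng(4)
N, p, eps = 200, 1000, 0.1
lam = np.exp(-np.arange(1, p + 1)) + eps
X = rng.standard_normal((N, p)) * np.sqrt(lam)   # rows X_i = Sigma^{1/2} Z_i

s = np.linalg.svd(X, compute_uv=False)
s_min = s[-1]                                    # smallest of the N singular values
benchmark = np.sqrt(N * lam.sum() / p)           # sqrt(N tr(Sigma) / p)
assert s_min >= 0.3 * benchmark
```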
With the help of Lemma 4.1, we can arrive at the estimation error:
###### Theorem 4.1 (Estimation Error).
Suppose $X=\Sigma^{1/2}Z$, where $Z$ is a random vector satisfying wSBA with
parameters $\mathcal{L},\kappa$ and $\Sigma$ satisfies Equation 3.1. Let
$(X_{i})_{i=1}^{N}$ be i.i.d. copies of $X$. Let
$Y_{i}=\left<X_{i},\alpha^{\ast}\right>+\xi_{i}$, where $(\xi_{i})_{i=1}^{N}$
are i.i.d. sub-gaussian random variables, and let
$\mathbf{Y}=(Y_{i})_{i=1}^{N}$. Let $\mathbf{X}$ be the random matrix with rows
$X_{i}^{T}$, and let $\hat{\alpha}=\mathbf{X}^{\dagger}\mathbf{Y}$. For $\nu$
defined in Equation 3.3, there exists a constant $c_{0}$ such that, with
probability at least $1-\mathrm{exp}(-cN)-\mathrm{exp}(-\nu)$,
$\displaystyle\left\|\hat{\alpha}-\alpha^{\ast}\right\|_{\ell_{2}}$
$\displaystyle\leqslant$
$\displaystyle\left\|\alpha^{\ast}\right\|_{\ell_{2}}+\sqrt{\frac{2}{3(1-c_{0})c_{0}-1}}\sqrt{\frac{p}{\mathrm{tr}(\Sigma)}}\frac{\left\|\xi\right\|_{\psi_{2}}}{\varepsilon},\quad\forall\varepsilon\in(0,1)$
$\displaystyle\leqslant$
$\displaystyle\left\|\alpha^{\ast}\right\|_{\ell_{2}}+c\left\|\xi\right\|_{\psi_{2}}\sqrt{\frac{p}{\mathrm{tr}(\Sigma)}}.$
The proof follows immediately from Equation 4.1 and Lemma 4.1.
Theorem 4.1 can be compared with Theorem 3 in Chinot, G., Lerasle, M. (2020).
Their estimation error is stated in terms of the effective rank
$r_{k}(\Sigma)$, while our bound depends only on $\mathrm{tr}(\Sigma)/p$. This
is because our lower bound on the smallest singular value is given by the
average eigenvalue instead of the effective rank. We believe that, by choosing
$c_{0}$ appropriately, the smallest singular value can be controlled in terms
of the effective rank, though deriving such a bound is not necessary for our
purposes.
In the following subsections, we prove Lemma 4.1. The outline of the proof is
as follows. First, we establish a coordinate small-ball estimate in terms of
effective rank in Theorem 4.2. Second, we prove a uniform lower bound of
$\left\|\mathbf{X}t\right\|_{\ell_{2}}$ on an $\varepsilon$-net of $S^{p-1}$.
Finally, we lower bound the smallest singular value by combining this uniform
lower bound on the net with an upper bound on the operator norm of
$\mathbf{X}$.
### 4.1 Coordinate small ball estimates in terms of effective rank
In this subsection, we prove the following theorem.
###### Theorem 4.2 (Coordinate Small Ball Estimate in terms of Effective
ranks).
If the random vector $X=\Sigma^{1/2}Z\in\mathbb{R}^{p}$ satisfies wSBA with
constants $(\mathcal{L},\kappa)$, and $\left(e_{i}\right)_{i=1}^{p}$ is an ONB
of $\mathbb{R}^{p}$ satisfying Equation 3.1, then for $\varepsilon\in(0,1)$,
$\displaystyle\mathbb{P}\left\\{\left|\left\\{i\leqslant
p:\,\left|\left<\Sigma^{1/2}Z,e_{i}\right>\right|\geqslant\varepsilon\sqrt{\frac{\mathrm{tr}(\Sigma)}{p}}\right\\}\right|\leqslant
c_{0}p\right\\}\lesssim\frac{(4p-k)^{2}}{8p}\frac{c^{\frac{16p^{2}}{(4p-k)^{2}}R_{k,2}(\Sigma)}}{R_{k,2}(\Sigma)},$
(4.2)
where $c$ depends on $\mathcal{L},\kappa$.
This is a simple modification of the corresponding result in Mendelson, S. and
Paouris, G. (2019). We divide the proof of Theorem 4.2 into three steps.
First, we select a proper subset $\sigma_{0}\subset[p]$. This step can be
done using a probabilistic combinatorial technique. Let $u_{i}$ be a random
vector uniformly distributed on the given ONB
$\left\\{e_{i}\right\\}_{i=1}^{p}$. Define indicators $(1_{i})_{i=1}^{p}$: if
$\left\|\Sigma^{1/2}u_{i}\right\|_{\ell_{2}}\geqslant\sqrt{\mathrm{tr}(\Sigma)/(2p)}$,
set $1_{i}=1$; otherwise $1_{i}=0$. Then
$\mathbb{E}\left[\sum_{i=1}^{p}1_{i}\right]=\sum_{i=1}^{p}\mathbb{P}\left\\{\left\|\Sigma^{1/2}u_{i}\right\|_{\ell_{2}}\geqslant\sqrt{\frac{\mathrm{tr}(\Sigma)}{2p}}\right\\}.$
By Equation 3.1 and the Paley-Zygmund inequality, see e.g. Lemma 3.1 in
Kallenberg, O. (2002), we can lower bound the right-hand side:
$\mathrm{RHS}\geqslant c_{0}(\delta_{1},\delta_{2})p$. Therefore, there exists
a subset $\sigma_{0}\subset[p]$, of cardinality at least $c_{0}p$, such that
for all $i\in\sigma_{0}$,
$\displaystyle\left\|\Sigma^{1/2}e_{i}\right\|_{\ell_{2}}\geqslant\sqrt{\frac{\mathrm{tr}(\Sigma)}{2p}}.$
(4.3)
Second, Mendelson, S. and Paouris, G. (2019) decompose $[c_{0}p]$ into
$\ell$ coordinate blocks by using the restricted invertibility theorem. That
is to say:
###### Lemma 4.2 (Lemma 3.1 in Mendelson, S. and Paouris, G. (2019)).
Assume that Equation 4.3 holds for every $i\in[c_{0}p]$. Set
$k_{4}=\mathrm{srank}_{4}(\Sigma^{1/2})$. Then for any $\lambda\in(0,1)$,
there are disjoint subsets $(\sigma_{i})_{i=1}^{\ell}\subset[c_{0}p]$ such
that
* •
For $1\leqslant j\leqslant\ell$, there is $\left|\sigma_{j}\right|\geqslant
c_{0}^{4}k_{4}/2048$ and $\sum_{j=1}^{\ell}\left|\sigma_{j}\right|\geqslant
c_{0}p/2$.
* •
$\left\|\left((\Sigma^{1/2})^{\ast}P_{\sigma_{j}}^{\ast}\right)^{-1}\right\|_{S_{\infty}}\leqslant
4$.
Next, we derive a uniform lower bound of $\left|\sigma_{j}\right|$ by lower
bounding $\mathrm{srank}_{4}(\Sigma^{1/2})$.
###### Lemma 4.3 (Lower bound of stable rank in terms of effective rank).
For $0\leqslant k\leqslant p-1$ and $\Sigma\in\mathbb{R}^{p\times p}$, we have
$\mathrm{srank}_{4}(\Sigma^{1/2})\geqslant\frac{16p^{2}}{(4p-k)^{2}}\left(1-\sqrt{\frac{k}{\mathrm{srank}_{4}(\Sigma)}}\right)R_{k}(\Sigma).$
###### Proof.
The proof is separated into two parts. Firstly, we lower bound
$\left\|\Sigma^{1/2}\right\|_{S_{2}}$.
By Ky Fan’s maximal principle, see e.g. Lemma 8.1.8 in Størmer, E. (2013) or
Chapter 3 in Bhatia, R. (1997), we have
$\sum_{i=1}^{p-r}s_{i}^{2}(\Sigma^{1/2})\geqslant\mathrm{tr}\left(\Sigma\left(I_{p}-P\right)\right),$
where $I_{p}-P$ is an orthogonal projection of rank $(p-r)$; this provides a
lower bound on the sum of the largest $(p-r)$ eigenvalues of $\Sigma$. Set
$p-r=k$, so that $\mathrm{rank}(P)=p-k$. We have
$\sum_{i>k}s_{i}^{2}(\Sigma^{1/2})=\left\|\Sigma^{1/2}\right\|_{S_{2}}^{2}-\sum_{i=1}^{k}s_{i}^{2}(\Sigma^{1/2})$.
It follows that
$\displaystyle\sum_{i>k}s_{i}^{2}(\Sigma^{1/2})\leqslant\left\|\Sigma^{1/2}\right\|_{S_{2}}^{2}-\mathrm{tr}\left(\Sigma\left(I_{p}-P\right)\right).$
(4.4)
We just need to lower bound
$\mathrm{tr}\left(\Sigma\left(I_{p}-P\right)\right)$ in terms of
$\left\|\Sigma^{1/2}\right\|_{S_{2}}^{2}$. Considering $\Sigma$ and $\Sigma P$
separately, we have the following identity:
$\left\|P\Sigma^{1/2}\right\|_{S_{2}}^{2}=\mathrm{tr}(\Sigma)-\mathrm{tr}\left(\Sigma\left(I_{p}-P\right)\right),$
so
$\displaystyle\mathrm{tr}\left(\Sigma\left(I_{p}-P\right)\right)=\left\|\Sigma^{1/2}\right\|_{S_{2}}^{2}-\left\|P\Sigma^{1/2}\right\|_{S_{2}}^{2}.$
(4.5)
Substitute Equation 4.5 into Equation 4.4, then we just need to upper bound
$\left\|P\Sigma^{1/2}\right\|_{S_{2}}^{2}$. However, by definition of
$\left\|\cdot\right\|_{S_{2}}$ and property of Frobenius norm, we have
$\left\|P\Sigma^{1/2}\right\|_{S_{2}}^{2}=\left\|\Sigma^{1/2}\right\|_{S_{2}}^{2}-\left\|P^{C}\Sigma^{1/2}\right\|_{S_{2}}^{2},$
where $P^{C}=I_{p}-P$ is the complementary projector. Recall that
$\mathrm{rank}(P)=p-k$, so we may choose $P$ to select $p-k$ rows of
$\Sigma^{1/2}$; then $P^{C}$ selects $k$ rows of $\Sigma^{1/2}$, each of norm
at least $\left\|\Sigma^{1/2}\right\|_{S_{2}}/(2\sqrt{p})$ by Equation 4.3.
Therefore, we have
$\sum_{i>k}s_{i}^{2}(\Sigma^{1/2})\leqslant\left(1-\frac{k}{4p}\right)\left\|\Sigma^{1/2}\right\|_{S_{2}}^{2},$
and immediately,
$\displaystyle\sum_{i=1}^{p}\lambda_{i}(\Sigma)\geqslant\left(1-\frac{k}{4p}\right)^{-1}\sum_{i>k}s_{i}^{2}(\Sigma^{1/2})=\left(1-\frac{k}{4p}\right)^{-1}\sum_{i>k}\lambda_{i}(\Sigma).$
(4.6)
Secondly, we upper bound $\left\|\Sigma^{1/2}\right\|_{S_{4}}$.
By Hölder's inequality,
$\left\|\Sigma\right\|_{S_{2}}^{2}=\sum_{i=1}^{k}s_{i}^{2}(\Sigma)+\sum_{i>k}s_{i}^{2}(\Sigma)\leqslant\sqrt{k}\left\|\Sigma\right\|_{S_{4}}^{2}+\sum_{i>k}s_{i}^{2}(\Sigma).$
So we have
$\displaystyle\sum_{i=1}^{p}\lambda_{i}^{2}(\Sigma)=\left\|\Sigma\right\|_{S_{2}}^{2}\leqslant\left(1-\sqrt{\frac{k}{\mathrm{srank}_{4}(\Sigma)}}\right)^{-1}\sum_{i>k}\lambda_{i}^{2}(\Sigma),$
(4.7)
by definition of stable rank.
Combining Equation 4.7 and Equation 4.6, Lemma 4.3 is proved. ∎
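The Hölder step behind (4.7), $\left\|\Sigma\right\|_{S_{2}}^{2}\leqslant\sqrt{k}\left\|\Sigma\right\|_{S_{4}}^{2}+\sum_{i>k}s_{i}^{2}(\Sigma)$, holds for any spectrum, and can be sanity-checked numerically. An illustrative check (not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 100, 10
s = np.sort(rng.exponential(size=p))[::-1]   # a decaying spectrum for Sigma

lhs = np.sum(s**2)                           # ||Sigma||_{S_2}^2
S4_sq = np.sqrt(np.sum(s**4))                # ||Sigma||_{S_4}^2
rhs = np.sqrt(k) * S4_sq + np.sum(s[k:]**2)

# Cauchy-Schwarz on the top-k singular values gives the Holder step
assert lhs <= rhs + 1e-9
```

The inequality follows from Cauchy-Schwarz applied to the top $k$ singular values, so it passes for any choice of spectrum here.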
The rest of the proof is based on the following Lemma:
###### Lemma 4.4 (Mendelson, S. and Paouris, G. (2019)).
If random vector $X$ satisfies the wSBA with constant $(\mathcal{L},\kappa)$,
and $(e_{i})_{i=1}^{p}$ are ONB of $\mathbb{R}^{p}$, and
$\Sigma^{1/2}:\mathbb{R}^{p}\to\mathbb{R}^{p}$ satisfies Equation 3.1, then
for $\varepsilon\in(0,1)$, we have
$\mathbb{P}\left\\{\left|\left\\{i\leqslant
p:\,\left|\left<\Sigma^{1/2}Z,e_{i}\right>\right|\geqslant\varepsilon\sqrt{\frac{\mathrm{tr}(\Sigma)}{p}}\right\\}\right|\leqslant
c_{0}p\right\\}\leqslant\sum_{j\leqslant\ell}\left(\frac{e\mathcal{L}\varepsilon}{c_{0}/2}\right)^{\left|\sigma_{j}\right|/(c_{0}/2)},$
where $\sigma_{j}$, $c_{0}$, $\ell$ are the same as Lemma 4.2.
This lemma is not stated explicitly in Mendelson, S. and Paouris, G. (2019),
but follows from their arguments. Using it, we can prove Theorem 4.2:
###### Proof of Theorem 4.2.
Combining Lemma 4.2, Lemma 4.3 and Lemma 4.4 proves Theorem 4.2. ∎
### 4.2 Lower bound of smallest singular value
In this subsection, we carry out steps 2 and 3: build an $\varepsilon$-net on
$S^{p-1}$, obtain a uniform lower bound on the smallest singular value over
the net, and extend it to the whole of $S^{p-1}$.
###### Proof of Lemma 4.1.
Fix a random vector $X_{j}\in\mathbb{R}^{p}$. Consider $p$ unit vectors
$(e_{i})_{i=1}^{p}$ forming an ONB of $\mathbb{R}^{p}$. Then by Theorem 4.2,
with probability at least $1-\mathfrak{R}_{k}(\Sigma)$, there exists a subset
$\sigma_{j}$ with cardinality at least $c_{0}p$ such that for all
$e_{i}\in\sigma_{j}$,
$\displaystyle\left|\left<\Sigma^{1/2}Z_{j},e_{i}\right>\right|\geqslant\varepsilon\sqrt{\frac{\mathrm{tr}(\Sigma)}{p}}.$
(4.8)
Each $X_{j}$ has such a subset $\sigma_{j}\subset(e_{i})_{i=1}^{p}$ of
cardinality at least $c_{0}p$ with probability at least
$1-\mathfrak{R}_{k}(\Sigma)$. Pick $t\in(e_{i})_{i=1}^{p}$ at random. If
$t\in\sigma_{j}$, then Equation 4.8 holds; that is, we obtain a lower bound on
the inner product.
Denote $1_{j}$ as $1_{\left\\{t\notin\sigma_{j}\right\\}}$, then
$\mathbb{E}1_{j}\leqslant 1-c_{0}(1-\mathfrak{R}_{k}(\Sigma))$. By Bernstein’s
inequality, with probability at least
$1-\mathrm{exp}(-\mathrm{min}\left\\{t^{2},t\right\\}N)$,
$\sum_{j=1}^{N}1_{j}\leqslant
N\mathbb{E}1_{j}+t\leqslant\frac{3}{2}N(c_{0}\mathfrak{R}_{k}(\Sigma)+1-c_{0}).$
by setting $t=(c_{0}\mathfrak{R}_{k}(\Sigma)+1-c_{0})/2$. Then with
probability at least
$1-\mathrm{exp}\left(-N\left(c_{0}\mathfrak{R}_{k}(\Sigma)+1-c_{0}\right)/2\right),$
we have $\sum_{j=1}^{N}1_{j}\leqslant
3N(c_{0}\mathfrak{R}_{k}(\Sigma)+1-c_{0})/2$, that is to say,
$\sqrt{\sum_{j=1}^{N}\left<\Sigma^{1/2}Z_{j},u\right>^{2}}\geqslant\varepsilon\sqrt{\frac{3c_{0}(1-\mathfrak{R}_{k}(\Sigma))-1}{2}N}\sqrt{\frac{\mathrm{tr}(\Sigma)}{p}},\quad\forall
u\in\sigma_{j}.$
Build an $\eta$-net $\Gamma_{\eta}$ on $B_{2}^{p}$.
Set
$\eta=\frac{\sqrt{\mathrm{tr}(\Sigma)}}{2\sqrt{2}\left\|\Gamma\right\|}\varepsilon\sqrt{3c_{0}(1-\mathfrak{R}_{k}(\Sigma))-1}\sqrt{\frac{N}{p}}$
By $\log{\left|\Gamma_{\eta}\right|}\leqslant p\log{\left(1+2/\eta\right)}$,
we have
$\log{\left|\Gamma_{\eta}\right|}\leqslant
p\log{\left(1+\frac{4\sqrt{2}}{\varepsilon}\frac{\left\|\Gamma\right\|}{\sqrt{\mathrm{tr}(\Sigma)}}\sqrt{\frac{p}{N}}\cdot\frac{1}{\sqrt{3c_{0}(1-\mathfrak{R}_{k}(\Sigma))-1}}\right)}$
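The covering estimate $\log{\left|\Gamma_{\eta}\right|}\leqslant p\log{\left(1+2/\eta\right)}$ used above is the standard volumetric bound for $B_{2}^{p}$. A greedy construction in low dimension illustrates it (an illustrative check with arbitrary small parameters, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
p, eta = 3, 0.5
# random points in the unit ball B_2^p
X = rng.standard_normal((20000, p))
X *= rng.uniform(size=(20000, 1)) ** (1 / p) / np.linalg.norm(X, axis=1, keepdims=True)

# greedily keep points that are eta-separated; this set is also an eta-net of the sample
net = []
for x in X:
    if all(np.linalg.norm(x - c) > eta for c in net):
        net.append(x)

# volumetric bound: any eta-separated set in B_2^p has size at most (1 + 2/eta)^p
assert np.log(len(net)) <= p * np.log(1 + 2 / eta)
```

The assertion is guaranteed: balls of radius $\eta/2$ around an $\eta$-separated set are disjoint and fit inside a ball of radius $1+\eta/2$, which gives the $(1+2/\eta)^{p}$ cap by comparing volumes.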
We just need to ensure
$\displaystyle
p\log{\left(1+\frac{\left\|\Gamma\right\|}{\sqrt{\mathrm{tr}(\Sigma)}}\sqrt{\frac{p}{N}}\cdot\frac{c}{\sqrt{3c_{0}(1-\mathfrak{R}_{k}(\Sigma))-1}}\right)}\leqslant
N\frac{c_{0}\mathfrak{R}_{k}(\Sigma)+1-c_{0}}{2}+\log{p}$ (4.9)
by choosing $k$ wisely. Here we repeat the selection process no more than
$\left\lceil\left|\Gamma_{\eta}\right|/(c_{0}p)\right\rceil$ times, so to make
Equation 4.8 hold uniformly over all elements of $\Gamma_{\eta}$, we have to
pay a factor
$\log{\left(\left\lceil\left|\Gamma_{\eta}\right|/(c_{0}p)\right\rceil\right)}$
in the exponential term
$\mathrm{exp}\left(-N\left(c_{0}\mathfrak{R}_{k}(\Sigma)+1-c_{0}\right)/2\right)$
and make sure this probability is no greater than $1$.
For the upper bound of $\left\|\Gamma\right\|$, we use Lemma 2.2. With probability
at least $1-\mathrm{exp}(-c_{0}N)$,
$\left\|\Gamma\right\|=\sqrt{\underset{t\in
S^{p-1}}{\mathrm{max}}\sum_{i=1}^{N}\left<\Gamma_{\cdot,i},t\right>^{2}_{\ell_{2}}}\leqslant\sqrt{N}\sqrt{C\left(d\frac{\tilde{\Lambda}(D)}{\sqrt{k_{D}}}+\frac{\tilde{\Lambda}^{2}(D)}{k_{D}}\right)+\lambda_{1}(\Sigma)}.$
By choosing $k=k^{\ast}$ defined in Equation 3.2, the probability can be lower
bounded by $1-\mathrm{exp}(-\nu)$, where $\nu$ is defined in Equation 3.3. In
summary, with probability at least
$1-\mathrm{exp}(-c_{0}N)-\mathrm{exp}(-\nu)$, we have
$s_{\mathrm{min}}(\mathbf{X})\geqslant\varepsilon\sqrt{\frac{3c_{0}(1-c_{0})-1}{8}N}\sqrt{\frac{\mathrm{tr}(\Sigma)}{p}}$
∎
## 5 Prediction error
In this section, we obtain an upper bound on the prediction error from the
upper bound on the estimation risk, using the localization method introduced
in subsection 2.1.
###### Theorem 5.1 (Prediction error).
If the random vector $X=\Sigma^{1/2}Z\in\mathbb{R}^{p}$ satisfies the wSBA
with constants $(\mathcal{L},\kappa)$ and $\Sigma$ satisfies Equation 3.1,
then with probability at least
$1-\mathrm{exp}\left(-\nu\right)-2\mathrm{exp}(-cN)$, the prediction error
satisfies
$\left\|\Sigma^{1/2}(\hat{\alpha}-\alpha^{\ast})\right\|_{\ell_{2}}\leqslant
r^{\ast},$
where $c$ is an absolute constant.
The proof is a localization argument: we show that $\hat{\alpha}-\alpha^{\ast}$
lies in a localized region with respect to $\left\|\cdot\right\|_{\Sigma}$.
Firstly, we need a localization lemma from Chinot, G.,Lerasle, M. (2020):
###### Lemma 5.1 (Localization: Lemma 3 in Chinot, G.,Lerasle, M. (2020)).
With probability at least $1-\mathrm{exp}(-N/16)$, we have
$\mathrm{P}_{N}\mathcal{L}_{\hat{\alpha}}\leqslant-\left\|\xi\right\|_{\psi_{2}}^{2}/2$.
Moreover, for any $r$, let $\Omega_{r,\rho}$ denote the following event
$\Omega_{r,\rho}=\left\\{\alpha\in\mathbb{R}^{p}:\,\alpha-\alpha^{\ast}\in
B(\rho)\backslash B_{\Sigma}(r),\,\text{and
}\mathrm{P}_{N}\mathcal{L}_{\alpha}>-\frac{1}{2}\left\|\xi\right\|_{\psi_{2}}^{2}\right\\}.$
On the event
$\displaystyle\Omega_{r,\rho}\cap\left\\{\hat{\alpha}-\alpha^{\ast}\in
B(\rho)\right\\}\cap\left\\{\mathrm{P}_{N}\mathcal{L}_{\hat{\alpha}}\leqslant-\frac{1}{2}\left\|\xi\right\|_{\psi_{2}}^{2}\right\\},$
(5.1)
prediction risk has upper bound $r$, that is to say,
$\left\|\Sigma^{1/2}\left(\hat{\alpha}-\alpha^{\ast}\right)\right\|_{\ell_{2}}\leqslant
r.$
Lemma 5.1 reduces the upper bound on the prediction risk to the event (5.1).
The probability of the event $\left\\{\hat{\alpha}-\alpha^{\ast}\in
B(\rho)\right\\}$ can be lower bounded via the estimation error, see Lemma
4.1. Recalling that $H_{r,\rho}=B(\rho)\cap B_{\Sigma}(r)$, we just need to
prove that the event
$\underset{\alpha\,:\alpha-\alpha^{\ast}\in
H_{r,\rho}}{\mathrm{inf}}\mathrm{P}_{N}\mathcal{L}_{\alpha}>-\frac{1}{2}\left\|\xi\right\|_{\psi_{2}}^{2}$
holds with high probability for $r>r^{\ast}$.
We now find a lower bound of $\mathrm{P}_{N}\mathcal{L}_{\alpha}$ in terms of
upper bounds on the quadratic and multiplier processes, according to Equation
2.2.
Firstly, we find an upper bound for the quadratic process. By Lemma 2.2, with
probability at least $1-\mathrm{exp}(-cN)$, we have
$\displaystyle
Q_{r,\rho}\lesssim_{q}\left(d_{q}(H_{r,\rho})\frac{\tilde{\Lambda}(H_{r,\rho})}{\sqrt{p}}+\frac{\tilde{\Lambda}^{2}(H_{r,\rho})}{p}\right).$
(5.2)
Secondly, we need an upper bound on the multiplier component. This can be done
by using Lemma 2.1. That is to say, with probability at least
$1-2\mathrm{exp}\left(-cN\right)$,
$\displaystyle
M_{r,\rho}\lesssim_{q}\left\|\xi\right\|_{\psi_{2}}\frac{\tilde{\Lambda}(H_{r,\rho})}{\sqrt{p}}.$
(5.3)
since $\xi$ is centered and independent of $X$. Therefore, when
$r>r^{\ast}$, we have
$d_{q}(H_{r,\rho})\leqslant\zeta_{1}r^{\ast},\quad\frac{\tilde{\Lambda}(H_{r,\rho})}{\sqrt{p}}\leqslant\zeta_{2}r^{\ast}.$
###### Proof of Theorem 5.1.
Let $\alpha\in\alpha^{\ast}+H_{r,\rho}$. Recall that
$r=\left\|\Sigma^{1/2}(\alpha-\alpha^{\ast})\right\|_{\ell_{2}}$. If
$r>r^{\ast}$, by substituting Equation 5.3 and Equation 5.2 into Equation 2.2,
it follows that
$\underset{\alpha\in\alpha^{\ast}+H_{r,\rho}}{\mathrm{inf}}\,\mathrm{P}_{N}\mathcal{L}_{\alpha}>(r^{\ast})^{2}\left(\theta^{-2}-\zeta_{1}\zeta_{2}\theta^{-2}-\zeta_{2}\theta^{-2}-\zeta_{2}\right)-\frac{1}{2}\left\|\xi\right\|_{\psi_{2}}^{2}.$
Choosing $\zeta_{1},\zeta_{2}$ small enough gives
$\mathrm{RHS}>-\frac{1}{2}\left\|\xi\right\|_{\psi_{2}}^{2}$. By Lemma 5.1,
Theorem 5.1 is proved. ∎
## 6 Discussion
In this section, we discuss two aspects: first, why it is so difficult to
investigate benign overfitting beyond the linear model; second, a benign
overfitting case without truncated effective rank.
### 6.1 Why linear model?
In this subsection, we imagine that the statistical model $\mathcal{F}$ is the
affine hull of sub-classes $(\mathcal{F}_{j})_{j=1}^{p}$, that is to say, for
every $f\in\mathcal{F}$ there exist $f_{j}\in\mathcal{F}_{j}$ and
$\alpha_{j}\in\mathbb{R}$ such that $f=\sum_{j=1}^{p}\alpha_{j}f_{j}$. Denote
$\alpha=(\alpha_{1},\cdots,\alpha_{p})\in\mathbb{R}^{p}$. Of course this is
not the problem that we deal with in this paper, but considering such a
general case helps to clarify the role of $\alpha$ and the difficulty of
generalizing benign overfitting beyond the linear model.
Even in this simple "additive model" case, benign overfitting is much harder
to analyse. Firstly, $\hat{f}$ interpolates $(X_{i},Y_{i})_{i=1}^{N}$, but the
$\hat{f}_{j}$ need not interpolate them; in fact, they may differ a lot. For
example, $\hat{f}(x)=-x+x=\hat{f}_{1}(x)+\hat{f}_{2}(x)$ interpolates
$(10,0)$, while $\hat{f}_{1}(10)=-\hat{f}_{2}(10)=-10$. It is therefore a
difficult task to derive an oracle inequality by studying the
$\mathcal{F}_{j}$.
If we minimize $\left\|\alpha\right\|_{\ell_{2}}$ analogously to the linear
case, the minimization of $\left\|\alpha\right\|_{\ell_{2}}$ under the
interpolation condition $\sum_{j=1}^{p}\alpha_{j}f_{j}(X_{i})=Y_{i}$ for all
$i=1,2,\cdots,N$ can be solved by the Moore-Penrose inverse.
Condition on $(f_{j})_{j=1}^{p}$. Denote matrix $\Gamma$ as
$\Gamma=\left[\begin{matrix}f_{1}(X_{1})&f_{2}(X_{1})&\cdots&f_{p}(X_{1})\\\
f_{1}(X_{2})&f_{2}(X_{2})&\cdots&f_{p}(X_{2})\\\
\vdots&\vdots&\ddots&\vdots\\\
f_{1}(X_{N})&f_{2}(X_{N})&\cdots&f_{p}(X_{N})\end{matrix}\right]_{N\times p}$
Denote $\mathbf{f}^{\ast}$ as
$\left(f^{\ast}(X_{1}),f^{\ast}(X_{2}),\cdots,f^{\ast}(X_{N})\right)$ and
$\mathbf{\xi}$ as $\left(\xi_{1},\xi_{2},\cdots,\xi_{N}\right)$. Then the
minimum-norm interpolation problem reads
$\mathrm{minimize}\,\left\|\alpha\right\|_{\ell_{2}},\quad\mathrm{s.t.}\,\Gamma\alpha=Y.$
We assume that an $\alpha$ satisfying the interpolation condition always
exists. Using the Moore-Penrose inverse, we have
$\hat{\alpha}=\Gamma^{\dagger}Y=\Gamma^{\dagger}\mathbf{f}^{\ast}+\Gamma^{\dagger}\xi.$
Therefore, to establish upper bound of
$\left\|\hat{\alpha}\right\|_{\ell_{2}}$, we need a lower bound of the
smallest singular value of $\Gamma$.
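A minimal numerical sketch of the minimum-norm interpolant via the Moore-Penrose inverse, with illustrative dimensions in the overparameterized regime:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 20, 100                               # overparameterized regime: p > N
Gamma = rng.standard_normal((N, p))
Y = rng.standard_normal(N)

alpha_hat = np.linalg.pinv(Gamma) @ Y        # minimum-l2-norm interpolant

assert np.allclose(Gamma @ alpha_hat, Y)     # interpolation condition holds
# adding any null-space direction preserves interpolation but increases the norm
null_dir = np.linalg.svd(Gamma)[2][N:].T @ rng.standard_normal(p - N)
other = alpha_hat + null_dir
assert np.allclose(Gamma @ other, Y)
assert np.linalg.norm(alpha_hat) <= np.linalg.norm(other)
```

The pseudoinverse solution lies in the row space of $\Gamma$, so any competing interpolant differs from it by an orthogonal null-space component and hence has a larger norm.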
However, as we saw in Lemma 4.1, the smallest singular value increases as $N$
increases, causing
$\left\|\Gamma^{\dagger}\mathbf{f}^{\ast}\right\|_{\ell_{2}}$ to decrease.
This phenomenon is called "signal bleed" in Muthukumar, V., Vodrahalli, K.,
Subramanian, V. and Sahai, A. (2020): the influence of the signal
$\mathbf{f}^{\ast}$ declines, so that minimizing
$\left\|\hat{\alpha}\right\|_{\ell_{2}}$ cannot reflect properties of the true
signal unless some unrealistic restrictions are imposed.
Therefore, $\mathbf{f}^{\ast}$ should balance $\Gamma^{\dagger}$ as $N$
increases in order to avoid signal bleed. This is the case in linear
regression, where $\Gamma=\mathbf{X}$. This illustrates why we choose the
linear model.
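The growth of the smallest singular value with $N$ (at a fixed aspect ratio $p/N$), which drives the signal bleed, can be observed directly. An illustrative sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(4)

def smallest_sv(N, p):
    return np.linalg.svd(rng.standard_normal((N, p)), compute_uv=False)[-1]

# fixed aspect ratio p = 10N: s_min grows roughly like sqrt(N), so the bound
# ||Gamma^dagger f*|| <= ||f*|| / s_min shrinks as N grows
s_small = smallest_sv(20, 200)
s_big = smallest_sv(320, 3200)
assert s_big > 3 * s_small                   # roughly a sqrt(320/20) = 4-fold growth
```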
### 6.2 Benign overfitting without truncated effective rank
Now, we try to establish benign overfitting without truncated effective rank,
but on stable rank $r_{0}(\Sigma)$, see Equation 1.1. Recall that a linear
model on $T\subset\mathbb{R}^{p}$ is
$\mathcal{F}_{T}=\left\\{\left<\cdot,t\right>:\,t\in T\right\\}$. Let
$\sigma=(X_{1},\cdots,X_{N})$, then the projection of $\mathcal{F}_{T}$ by
using $\sigma$ is indeed a random linear transformation of $T$. That is to
say, $P_{\sigma}\mathcal{F}_{T}=\mathbf{X}T$, where
$P_{\sigma}(f_{t})=(\left<X_{i},t\right>)_{i=1}^{N}$. We need lower bound of
smallest singular value of $\mathbf{X}$ to derive an upper bound of estimation
error, and a lower bound of quadratic component in Equation 2.1. Fortunately,
this can be done by Dvoretzky-Milman Theorem, see Artstein-Avidan,
S.,Giannopoulos, A. and Milman, V.D. (2015) or Mendelson S. (2016a).
Dvoretzky-Milman Theorem can hold with rather heavy-tailed random vectors, but
for the sake of simplicity, we assume $(g_{i})_{i=1}^{N}$ are i.i.d. gaussian
random vectors in $\mathbb{R}^{p}$.
###### Lemma 6.1 (Dvoretzky-Milman, one-sided).
There exist absolute constants $c_{1},c_{2}$ such that the following holds: if
$0<\delta<\frac{1}{2}$, and
$\displaystyle N\leqslant
c_{1}\frac{\delta^{2}}{\log{(1/\delta)}}r_{0}(\Sigma),$ (6.1)
and $\Gamma=\sum_{i=1}^{N}\left<g_{i},\cdot\right>e_{i}$, where
$(e_{i})_{i=1}^{N}$ is an ONB of $\mathbb{R}^{N}$, then with probability at
least $1-2\mathrm{exp}(-c_{2}r_{0}(\Sigma)\delta^{4}/\log{(1/\delta)})$,
$(1-\delta)\sqrt{\mathrm{tr}(\Sigma)}B_{2}^{N}\subset\Gamma\left(\Sigma^{1/2}B_{2}^{p}\right).$
Taking $\delta=1/4$ for example, we have
$\frac{3}{4}\sqrt{\mathrm{tr}(\Sigma)}B_{2}^{N}\subset\Gamma(\Sigma^{1/2}B_{2}^{p})$, so
$s_{\mathrm{min}}(\Gamma)=\sqrt{\underset{t\in
S^{p-1}}{\mathrm{min}}\sum_{i=1}^{N}\left<g_{i},t\right>^{2}}\geqslant
\frac{3}{4}\sqrt{\mathrm{tr}(\Sigma)}$
holds with probability at least $1-2\mathrm{exp}(-cr_{0}(\Sigma))$. Therefore,
with probability at least
$1-2\mathrm{exp}(-c_{1}r_{0}(\Sigma))-2\mathrm{exp}(-c_{2}N)$,
$\left\|\hat{\alpha}-\alpha^{\ast}\right\|_{\ell_{2}}\leqslant\left\|\alpha^{\ast}\right\|_{\ell_{2}}+\left\|\xi\right\|_{\psi_{2}}\sqrt{\frac{N}{\mathrm{tr}(\Sigma)}}.$
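In the Dvoretzky-Milman regime $N\ll r_{0}(\Sigma)$, the smallest singular value is indeed of order $\sqrt{\mathrm{tr}(\Sigma)}$. For $\Sigma=I_{p}$ (so $r_{0}(\Sigma)=p$ and $\mathrm{tr}(\Sigma)=p$) this is easy to observe numerically (an illustrative check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(5)
p, N = 4000, 40                              # N much smaller than r_0(Sigma) = p
G = rng.standard_normal((N, p))              # rows g_i ~ N(0, I_p)

s_min = np.linalg.svd(G, compute_uv=False)[-1]
# Dvoretzky-Milman regime: s_min concentrates near sqrt(tr Sigma) = sqrt(p)
assert s_min >= 0.75 * np.sqrt(p)
```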
As for the prediction risk, we have: when $r>r_{1}^{\ast}$,
$\underset{\alpha\in\alpha^{\ast}+H_{r,\rho}}{\mathrm{inf}}\mathrm{P}_{N}\mathcal{L}_{\alpha}\geqslant
r^{2}\left(\frac{9\,\mathrm{tr}(\Sigma)}{16N}-\frac{1}{2}\zeta_{1}\right)-\frac{1}{2}\left\|\xi\right\|_{\psi_{2}}^{2}>-\frac{1}{2}\left\|\xi\right\|_{\psi_{2}}^{2}.$
From here on, the proof is the same as that of Theorem 5.1; the details are
omitted.
Note that $r_{0}(\Sigma)\leqslant p$ and $p=cN\log{(1/\varepsilon)}$, so we
can choose $c_{1},\delta$ appropriately to adapt to the example discussed in
subsection 3.1.
In summary, although interpolation learning suffers from estimating both the
noise $\xi$ and the signal $\alpha^{\ast}$, it still generalizes well if the
smallest singular value of $\mathbf{X}$ is large enough to absorb the level of
noise, $\sqrt{N}\left\|\xi\right\|_{\psi_{2}}$, see Equation 4.1. The smallest
singular value is used to weaken the influence of noise. To make the smallest
singular value large enough, the number of samples should satisfy an upper
bound that depends on the covariance of the input vector. This threshold
balances the rate of exponential decay (obtained by a concentration or
small-ball argument) against the metric entropy (given by a net argument);
it therefore depends on the dimension $p$, the sample size $N$ and the
covariance $\Sigma$. If we fix the relationship between $p$ and $N$ (like the
example in subsection 3.1), we need $\Sigma$ to have a large trace (or at
least a heavy tail of eigenvalues), which is the key to benign overfitting.
Note that in this interpretation there are no restrictions on the
concentration properties of the input vector $X$, only on its small-ball
property; that is to say, $X$ should be well spread out. It is this spreading
that absorbs the noise $\xi$ and makes the minimum-$\ell_{2}$ linear
interpolant fit into the heavy-tailed case. Finally, we believe that our
result could easily be modified to the "Informative-Outlier" framework, cf.
Chinot, G., Lecué, G. and Lerasle, M. (2020), to obtain a result with a
"robust flavor" for both the computer science and statistics communities.
## References
* Artstein-Avidan, S.,Giannopoulos, A. and Milman, V.D. (2015) Artstein-Avidan, S.,Giannopoulos, A. and Milman, V.D. (2015) Asymptotic geometric analysis. Part I. American Mathematical Society, Providence.MR3331351.
* Bartlett, P.L., Long, P. M., Lugosi, G. and Tsigler, A. (2019) Bartlett, P.L., Long, P. M., Lugosi, G. and Tsigler, A. (2019) Benign Overfitting in Linear Regression. Proceedings of the National Academy of Sciences Apr 2020, 201907378.
* Belkin, M., Ma, S. and Mandal, S. (2018) Belkin, M., Ma, S. and Mandal, S. (2018) To understand deep learning we need to understand kernel learning. Proceedings of the the 35th International Conference on Machine Learning (ICML 2018).
* Belkin, M., Rakhlin, A. and Tsybakov, A.B. (2019) Belkin, M., Rakhlin, A. and Tsybakov, A.B. (2019) Does data interpolation contradict statistical optimality? AISTAT 2019.
* Bhatia, R. (1997) Bhatia, R. (1997) Matrix Analysis. Springer-Verlag, New York.MR1477662
* Boucheron, S.,Lugosi, G. and Massart, P. (2013) Boucheron, S.,Lugosi, G. and Massart, P. (2013) Concentration inequalities: A nonasymptotic theory of independence. 1st ed.Oxford university press.MR3185193
* Chinot, G., Lecué, G. and Lerasle, M. (2020) G. Chinot, G. Lecué and M. Lerasle (2020) Statistical Learning with Lipschitz and convex loss functions,Probability Theory and Related Fields, 176, 897–940.MR4087486
* Chinot, G.,Lerasle, M. (2020) Chinot, G.,Lerasle, M. (2020) Benign overfitting in the large deviation regime. arXiv preprint arXiv:2003.05838.
* Dirksen, S. (2015) Dirksen, S. (2015) Tail bounds via generic chaining. Electronic Journal of Probability 20.MR3354613
* Hastie, T., Montanari, A., Rosset, S. and Tibshirani, R. (2019) Hastie, T. ,et al.(2019) Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560.
* Kallenberg, O. (2002) Kallenberg, O. (2002) Foundations of modern probability. 2nd ed. Springer-Verlag, New York.MR1876169
* Koltchinskii, V. and Lounici, K. (2017) Koltchinskii, V. and Lounici, K. (2017) Concentration inequalities and moment bounds for sample covariance operators. Bernoulli 23, 110-133.MR3556768
* Koltchinskii, V. and Mendelson, S. (2015) Koltchinskii, V. and Mendelson, S. (2015) Bounding the smallest singular value of a random matrix without concentration. Int. Math. Res. Not. IMRN 23, 12991–13008.MR3431642
* Liang, T., Rakhlin, A. (2020) Liang, T., Rakhlin, A. (2020) Just Interpolate: Kernel ”Ridgeless” Regression Can Generalize. Annals of Statistics.MR4124325
* Liang, T., Rakhlin, A. and Zhai, X. (2020) Liang, T., Rakhlin, A. and Zhai, X. (2020) On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels. Conference on Learning Theory (COLT), 2020.
* Rakhlin, A. and Zhai, X. (2019) Rakhlin, A. and Zhai, X. (2019) Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon. Conference on Learning Theory (COLT), 2019.
* Mei, S. and Montanari, A. (2019) Mei, S. and Montanari, A. (2019) The generalization error of random features regression: Precise asymptotics and double descent curve. Submitted to Communications on Pure and Applied Mathematics.
* Størmer, E. (2013) Størmer, E. (2013) Positive Linear Maps of Operator Algebras. Springer Monographs in Mathematics.MR3012443
* Mendelson S. (2016a) Mendelson, S. (2016a) Dvoretzky type Theorems for subgaussian coordinate projections.J. Theoret. Probab., 29, 1644–1660.MR3571258
* Mendelson, S. (2016b) Mendelson, S. (2016b) Learning without concentration for general loss functions.Probability Theory and Related Fields, 171, 459–502.MR3800838
* Mendelson S. (2016c) Mendelson, S. (2016c) Upper bounds on product and multiplier empirical processes.Stochastic Processes and their Applications, 126, 3652–3680.MR3565471
* Mendelson, S. and Paouris, G. (2019) Mendelson, S. and Paouris, G. (2019) Stable recovery and the coordinate small-ball behaviour of random vectors. arXiv preprint arXiv:1904.08532.
* Meyer Carl D. (2000) Meyer Carl D. (2000) Matrix analysis and applied linear algebra.Society for Industrial and Applied Mathematics (SIAM), 71.MR1777382
* Muthukumar, V., Vodrahalli, K., Subramanian, V. and Sahai, A. (2020) Muthukumar, V., Vodrahalli, K., Subramanian, V. and Sahai, A. (2020) Harmless interpolation of noisy data in regression.IEEE Journal on Selected Areas in Information Theory.
* Naor, A. and Youssef, P. (2017) Naor, A. and Youssef, P. (2017) Restricted invertibility revisited.Springer Cham. A journey through discrete mathematics, 657–691.MR3726618
* Rudelson, M. and Vershynin, R. (2007) Rudelson, M. and Vershynin, R. (2007) Sampling from large matrices: an approach through geometric functional analysis.Journal of the ACM (2007), Art. 21, 19 pp.MR2351844
* Talagrand, M. (2014) Talagrand, M. (2014) Upper and lower bounds for stochastic processes: modern methods and classical problems.Springer Science & Business Media.MR3184689
* Tsigler, A. and Bartlett, P. (2020) Tsigler, A. and Bartlett, P. (2020) Benign overfitting in ridge regression.arXiv preprint arXiv:2009.14286.
* Vaart, Aad W and Wellner, Jon A (1996) Vaart, Aad W and Wellner, Jon A (1996) Weak convergence and empirical processes: with applications to statistics.Springer Series in Statistics.MR1385671
* Vershynin, R. (2018) Vershynin, R. (2018) High-Dimensional Probability: An Introduction with Applications in Data Science.Cambridge University Press:New York.MR3837109
* Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O. (2016) Zhang, C. et al. (2016) Understanding deep learning requires rethinking generalization.arXiv preprint arXiv:1611.03530.
arXiv:2101.00915
# Pathwise regularization of the stochastic heat equation with multiplicative
noise through irregular perturbation
Rémi Catellier and Fabian A. Harang Rémi Catellier: Université Côte d'Azur,
CNRS, LJAD, France [email protected] Fabian A. Harang:
Department of Mathematics, University of Oslo, P.O. box 1053, Blindern, 0316,
OSLO, Norway [email protected]
###### Abstract.
Existence and uniqueness of solutions to the stochastic heat equation with
multiplicative spatial noise is studied. In the spirit of pathwise
regularization by noise, we show that a perturbation by a sufficiently
irregular continuous path establish wellposedness of such equations, even when
the drift and diffusion coefficients are given as generalized functions or
distributions. In addition we prove regularity of the averaged field
associated to a Lévy fractional stable motion, and use this as an example of a
perturbation regularizing the multiplicative stochastic heat equation.
###### Key words and phrases:
Pathwise regularization by noise, stochastic heat equation, generalized
parabolic Anderson model, fractional Lévy processes
###### 2010 Mathematics Subject Classification:
Primary 60H50, 60H15 ; Secondary 60L20
_Acknowledgments._ FH gratefully acknowledges financial support from the STORM
project 274410, funded by the Research Council of Norway. We would also like
to thank Nicolas Perkowski for several fruitful discussions on this topic.
###### Contents
1. 1 Introduction
2. 2 Non-linear Young-Volterra theory in Banach spaces
3. 3 Averaged fields
4. 4 Existence and uniqueness of the mSHE
5. 5 Averaged fields with Lévy noise
6. 6 Conclusion
7. A Basic concepts of Besov spaces and properties of the heat kernel
8. B Cauchy-Lipschitz theorem for mSHE in standard case.
## 1\. Introduction
The stochastic heat equation with multiplicative noise (mSHE) is given in the
form
$\partial_{t}u=\Delta u+b(u)+g(u)\xi,\qquad u_{0}\in\mathcal{C}^{\beta},$
(1.1)
where $\xi$ is a space time noise on $\mathbb{R}^{d}$ or $\mathbb{T}^{d}$ and
$\mathcal{C}^{\beta}$ is the Besov Hölder space of $\mathbb{R}^{d}$ or
$\mathbb{T}^{d}$, and $b$ and $g$ are sufficiently smooth functions. This
equation is a fundamental stochastic partial differential equation, and is
used for modelling across a diverse range of natural sciences, from
chemistry and biology to physics. The existence and uniqueness of (1.1) is
typically proven under the condition that both $b$ and $g$ are Lipschitz
functions of linear growth (see e.g. [25]). When taking a pathwise approach to
the solution theory, even more regularity of these non-linear functions may be
required (see for instance [3], where $g\in C^{3}$ is required, and Appendix B
for a proof of pathwise wellposedness in a simple context). Of course, if
$g\equiv 0$, then (1.1) is known as the (deterministic) non-linear heat
equation, for which uniqueness fails in general under weaker conditions on $b$
than Lipschitz and linear growth.
Motivated by this, a natural question is whether it is possible to prove
existence and uniqueness of (1.2) under weaker conditions on $b$ and $g$.
Inspired by the well known regularization by noise phenomena in stochastic
differential equations, one may think that the same principles of
regularization would extend to the case of stochastic partial differential
equations like (1.2). In this article, we aim at giving some insights into
this question by investigating (1.2) in a fully pathwise manner under
perturbation by a measurable time (only) dependent path. In fact, we will
prove that a perturbation by a sufficiently irregular path yields
wellposedness of the mSHE, even for distributional (generalized functions)
coefficients $g$ and $b$. In the next section we give a more detailed
description of the specific equation under consideration, and the techniques
that we apply in order to prove this regularizing effect of the perturbation.
### 1.1. Methodology
Inspired by the theory of regularization by noise for ordinary or stochastic
differential equations, we show that a suitably chosen measurable path
$\omega:[0,T]\rightarrow\mathbb{R}^{d}$ provides a regularizing effect on (1.1), by
considering the formal equation
$\partial_{t}u=\Delta u+b(u)+g(u)\xi+\dot{\omega}_{t},\qquad
u_{0}\in\mathcal{C}^{\beta},$ (1.2)
where $\xi$ is a spatial (distributional) noise taking values in
$\mathbb{R}^{d}$, and $\dot{\omega}_{t}$ is the distributional derivative of a
continuous path $\omega$. To this end, we formulate (1.2) in terms of the non-
linear Young framework, developed in [7, 15, 14]. We extend this framework to
the infinite dimensional setting adapted to Volterra type integrals appearing
when considering the mild formulation of (1.2). The integration framework
developed here is strongly based on the recently developed Volterra sewing
lemma of [24], and does not require any semi-group property of the Volterra
operator. The integral can therefore be applied to several different problems
relating to infinite dimensional Volterra integration, and thus we believe
that this construction is interesting in itself.
To motivate the methodology of the current paper, consider again (1.2) and set
$\theta=u-\omega$ with $\theta_{0}=u_{0}\in\mathcal{C}^{\beta}$. Then formally
$\theta$ solves the following integral equation
$\theta_{t}=P_{t}\theta_{0}+\int_{0}^{t}P_{t-s}b(\theta_{s}+\omega_{s})\mathop{}\\!\mathrm{d}s+\int_{0}^{t}P_{t-s}\xi
g(\theta_{s}+\omega_{s})\mathop{}\\!\mathrm{d}s,$
where $P$ is the fundamental solution operator associated with the heat
equation, and the product $P_{t}\theta_{0}$ is interpreted as spatial
convolution. For simplicity, we will carry out most of our analysis with
$b\equiv 0$, as this term is indeed easier to handle than the term with
multiplicative noise. The equation we will then consider is given by
$\theta_{t}=P_{t}\theta_{0}+\int_{0}^{t}P_{t-s}\xi
g(\theta_{s}+\omega_{s})\mathop{}\\!\mathrm{d}s.$ (1.3)
In Section 4.2 we provide a detailed description of how our results can easily
be extended to include the drift term with distributional $b$, by simply
appealing to the analysis carried out for the purely multiplicative
equation (1.3). Associated to the path $\omega$ and the distribution $g$,
define the averaged distribution
$T^{\omega}g:[0,T]\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ by the
mapping
$(t,x)\mapsto\int_{0}^{t}g(x+\omega_{s})\mathop{}\\!\mathrm{d}s.$
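To see informally why averaging along an irregular path regularizes $g$, one can approximate $T^{\omega}g$ numerically. The sketch below uses a Brownian-type path as a rough proxy (the paper works with a Lévy fractional stable motion; this choice and the discontinuous $g=\mathrm{sign}$ are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
w = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)   # Brownian-like path on [0, 1]

g = np.sign                                          # a discontinuous coefficient
xs = np.linspace(-1.0, 1.0, 401)
# Riemann-sum approximation of T^w g(1, x) = \int_0^1 g(x + w_s) ds
Tg = np.array([np.mean(g(x + w)) for x in xs])

# the jump of g at 0 is smoothed out: finite-difference slopes stay bounded,
# whereas a surviving jump would give a slope of order 2 / grid_step = 400
slopes = np.abs(np.diff(Tg) / np.diff(xs))
assert slopes.max() < 50
```

Here the occupation measure of the path plays the role of a mollifier: the averaged field inherits the boundedness of the local time rather than the roughness of $g$.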
After proving that for certain paths $\omega$ the distribution $T^{\omega}g$
is in fact a regular function, we then consider (1.3) as a non-linear Young
equation of the form
$\theta_{t}=P_{t}\theta_{0}+\int_{0}^{t}P_{t-s}\xi
T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(\theta_{s}),\quad\theta_{0}\in\mathcal{C}^{\beta}.$
(1.4)
Here, the integral is interpreted in an infinite dimensional non-linear Young-
Volterra sense. That is, to suit our purpose, we extend the non-linear Young
integral to an infinite dimensional setting as well as allowing for the action
of a Volterra operator on the integrand. The integral is then constructed as
the Banach valued element
$\int_{0}^{t}P_{t-s}\xi T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(\theta_{s}):=\lim_{|\mathcal{P}|\rightarrow
0}\sum_{[u,v]\in\mathcal{P}}P_{t-u}\xi T^{\omega}_{u,v}g(\theta_{u}),$
where $T^{\omega}_{u,v}g:=T^{\omega}_{v}g-T^{\omega}_{u}g$. We stress that in
contrast to the non-linear Young integral used for example in [7, 15, 23, 14],
the above integral is truly an infinite dimensional object, and extra care
must be taken when building it from the averaged function $T^{\omega}g$.
Indeed, for each $t\geq 0$,
$\theta_{t}\in\mathcal{C}^{\beta}(\mathbb{R}^{d};\mathbb{R})$ and so the
function $T^{\omega}g$ is then lifted to be a functional on
$\mathcal{C}^{\beta}$. We show that this lift comes at the cost of an extra
degree of assumed regularity on the averaged function $T^{\omega}g$.
Furthermore, due to the assumption that $\xi\in\mathcal{C}^{-\vartheta}$ for
$\vartheta>0$ (i.e. $\xi$ is assumed to be truly distributional), we need to
make use of product estimates in Besov spaces in order to make the product $\xi
T^{\omega}_{u,v}g(\theta_{u})$ well defined.
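For the reader's convenience, the Besov product (Bony) estimate we believe is being invoked here reads as follows (see Appendix A for the function spaces; the precise form used in the paper may differ slightly):

```latex
% Bony product estimate: for alpha > 0 > beta with alpha + beta > 0,
% the pointwise product extends continuously and satisfies
\| f g \|_{\mathcal{C}^{\beta}}
  \lesssim \| f \|_{\mathcal{C}^{\alpha}} \, \| g \|_{\mathcal{C}^{\beta}},
\qquad f \in \mathcal{C}^{\alpha},\; g \in \mathcal{C}^{\beta},\; \alpha+\beta>0 .
```

Applied with $\xi\in\mathcal{C}^{-\vartheta}$ in the role of the distribution, this requires the increment $T^{\omega}_{u,v}g(\theta_{u})$ to have spatial regularity strictly greater than $\vartheta$.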
Similar to the theory of rough paths, our analysis can be divided into two
parts: (i) a probabilistic step, and (ii) a deterministic (analytic) step. We
give a short description of the two steps here:
* (i)
Let $E$ be a separable Banach space. We develop an abstract framework of
existence and uniqueness of Banach valued equations
$\theta_{t}=p_{t}+\int_{0}^{t}S_{t-s}X_{\mathop{}\\!\mathrm{d}s}(\theta_{s}),$
(1.5)
where $S$ is a suitable (possibly singular) Volterra operator, and
$X:[0,T]\times E\rightarrow E$ is a function which is $\frac{1}{2}+$ Hölder
regular in time, and suitably regular in its spatial argument (to be specified
later), and $p:[0,T]\rightarrow E$ is a sufficiently regular function. To this
end, we use a simple extension of the Volterra sewing lemma developed in [24]
to construct the non-linear Young-Volterra integral appearing in (1.5) as the
following
$\int_{0}^{t}S_{t-s}X_{\mathop{}\\!\mathrm{d}s}(\theta_{s}):=\lim_{|\mathcal{P}|\rightarrow
0}\sum_{[u,v]\in\mathcal{P}[0,t]}S_{t-u}X_{u,v}(\theta_{u}),$
where $\mathcal{P}[0,t]$ is a partition of $[0,t]$ with mesh size
$|\mathcal{P}|$ converging to zero.
* (ii)
The second step is then to consider $\\{\omega_{t}\\}_{t\in[0,T]}$ to be a
stochastic process on a probability space $(\Omega,\mathcal{F},\mathbb{P})$,
and to show that the averaged function $T^{\omega}g$ is indeed a
sufficiently regular function $\mathbb{P}$-a.s., even when $g$ is a true
distribution. This is done by probabilistic methods.
At last we relate the abstract function $X$ from (2.21) to the averaged
function $T^{\omega}$; a combination of the previous two steps then gives
us existence and uniqueness of (1.4), and thus also of (1.3), which is then
used to make sense of (1.2) through the translation $u=\theta+\omega$.
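The Riemann-sum construction in step (i) can be sanity-checked numerically in a toy scalar setting (all choices below are illustrative assumptions, not from the paper): take $E=\mathbb{R}$, the non-singular kernel $S_{t}=e^{-t}$, and a smooth driver $X_{u,v}(\theta)=\sin(\theta)(v-u)$, for which the sums should stabilise towards the classical integral $\int_{0}^{t}e^{-(t-s)}\sin(\theta_{s})\mathop{}\!\mathrm{d}s$.

```python
import numpy as np

def volterra_sum(theta, t, n):
    """Riemann sum  sum_{[u,v] in P} S_{t-u} X_{u,v}(theta_u)  over a uniform partition.

    Toy choices (illustrative only): S_t = exp(-t), X_{u,v}(x) = sin(x) * (v - u).
    """
    s = np.linspace(0.0, t, n + 1)      # partition of [0, t]
    u, v = s[:-1], s[1:]
    return float(np.sum(np.exp(-(t - u)) * np.sin(theta(u)) * (v - u)))

theta = np.cos                           # some smooth (hence Hoelder) path
coarse = volterra_sum(theta, 1.0, 2 ** 8)
fine = volterra_sum(theta, 1.0, 2 ** 14)
assert abs(coarse - fine) < 1e-2         # sums stabilise as the mesh |P| -> 0
```

In the smooth case the limit is the classical integral; the point of the sewing-type construction in Section 2.1 is that the same sums converge even when $X$ is merely Hölder in time and $S$ is singular at the diagonal.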
### 1.2. Short overview of existing literature
In order to prove existence and uniqueness of (1.1), there are two main
directions to follow: the classical probabilistic setting based on Itô type
theory, or the pathwise approach based on rough paths or similar techniques.
Using the first approach, one typically requires that $b$ and $g$ are Lipschitz
and of linear growth (see e.g. [25]). Note that in this context, at least in
dimension $d=1$, one can also prove existence and uniqueness with less
restrictive requirements on $g$ in certain cases (see [32, 33, 38]). In
particular in [32], the authors prove, using probabilistic arguments, that
there is a non-zero solution to the equation
$\partial_{t}u=\Delta u+|u|^{\gamma}\xi+\psi,\quad u(0,\cdot)=0,$
where $\xi$ is a space-time white noise in dimension $1+1$, $\psi$ is a
non-zero, non-negative, smooth, compactly supported function, and
$0\leq\gamma<\frac{3}{4}$. They also prove that uniqueness holds when
$\gamma>\frac{3}{4}$. Note also that when $\xi$ is a deterministic function and
not a distribution, well-posedness is a well-known topic. In particular, if $\xi$
is a non-negative, continuous and bounded function, Fujita and Watanabe [13]
prove that an Osgood condition on $g$ is "nearly" necessary and sufficient to
guarantee uniqueness. In particular, when $g$ is only Hölder continuous, one
cannot expect uniqueness. One can also consult [5] and the references
therein for further attempts in this direction.
When using pathwise techniques to solve the (stochastic) nonlinear heat equation,
one typically needs, as usual (see [9] for counterexamples in a rough path
context), to require even higher regularity on $b$ and $g$ in order to guarantee
existence and uniqueness (see e.g. [3], where three times differentiability is
assumed, and Appendix B for a simple proof).
To the best of our knowledge, little has been done in the direction of
investigating the regularizing effects obtained from measurable perturbations
of the heat equation. In the case when $g\equiv 0$, it has been proven in [34]
(see also [6]) that the additive stochastic heat equation of the form
$\partial_{t}u=\partial^{2}_{x}u+b(u)+\partial_{t}\partial_{x}\omega,\quad(t,x)\in[0,T]\times[0,1]$
has a unique solution, even when $b$ is only bounded and measurable and
$\partial_{t}\partial_{x}\omega$ is understood as a white noise on
$[0,T]\times[0,1]$. Thus the addition of noise seems to give similar
regularizing effects in the stochastic heat equation as is observed in SDEs.
Note that the recent publication [1] continues this investigation in the case
$g\equiv 1$; in particular, the authors are able to recover results on the skew
stochastic heat equation.
### 1.3. Main results
Before presenting our main results, let us first give a definition of what we
will call a solution to (1.2). Recall that the definitions of admissible
weights and Besov spaces are given in Appendices A.2 and A.1, the precise
definition of the non-linear Young-Volterra equation is given in Section 2.1,
and the definition of the averaged field is given in Section 3. The equations
considered in the results below can be posed either on $\mathbb{R}$ or on
$\mathbb{T}$; indeed, all the following results hold in both settings.
Furthermore, for the sake of using the space white noise, it may be
instructive to think of the underlying space as the torus $\mathbb{T}$.
###### Definition 1.
Let $\omega:[0,T]\rightarrow\mathbb{R}$ be a measurable path, and consider a
$g\in\mathcal{S}^{\prime}$ such that the averaged field
$T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$ for some
$\gamma>\frac{1}{2}$ and $\kappa\geq 3$ and an admissible weight
$w:\mathbb{R}^{d}\rightarrow\mathbb{R}$. Suppose that $0<\vartheta<\beta<1$
and suppose that $\xi$ is a spatial noise contained in
$\mathcal{C}^{-\vartheta}$. Take $\rho=\frac{\beta+\vartheta}{2}$ and suppose
that $\gamma-\rho>1-\gamma$. Let $u_{0}\in\mathcal{C}^{\beta}$. We say that
there exists a unique solution to the equation
$u_{t}=P_{t}u_{0}+\int_{0}^{t}P_{t-s}\xi
g(u_{s})\mathop{}\\!\mathrm{d}s+\omega_{t},\quad t\in[0,T]$ (1.6)
if $u\in\omega+\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$ for any
$1-\gamma<\varsigma<\gamma-\rho$, and there exists a unique
$\theta\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$ such that
$u=\omega+\theta$ and $\theta$ solves the non-linear Young equation
$\theta_{t}=P_{t}u_{0}+\int_{0}^{t}P_{t-s}\xi
T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(\theta_{s}).$ (1.7)
Here the integral is understood as a non-linear Young–Volterra integral, as
constructed in Section 2.1 and the space $\mathscr{C}^{\varsigma}_{T}$ is
defined in Definition 6.
###### Theorem 2.
Let $\omega:[0,T]\rightarrow\mathbb{R}$ be a measurable path, and consider a
distribution $g\in\mathcal{S}^{\prime}(\mathbb{R})$ such that the associated
averaged field $T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$
for some $\gamma>\frac{1}{2}$, $\kappa\geq 3$ and an admissible weight function
$w:\mathbb{R}^{d}\rightarrow\mathbb{R}_{+}$. Suppose that $0<\vartheta<1$ and
take $\vartheta<\beta<2-\vartheta$ and that $\xi$ is a spatial noise contained
in $\mathcal{C}^{-\vartheta}$. Let $\rho=\frac{\beta+\vartheta}{2}$ and assume
that $1-\gamma<\gamma-\rho$. Then there exists a time $\tau\in(0,T]$ such that
there exists a unique solution $u$ in the sense of Definition 1 to Equation
(1.6). If $w$ is globally bounded, then the solution is global, in the sense
that a unique solution $u$ to (1.6) is defined on all of $[0,\tau]$ for any
$\tau\in(0,T]$.
The unique solution can be interpreted to be a "physical" one, in the sense
that it is stable under approximations. We summarize this in the following
corollary.
###### Corollary 3.
Under the assumptions of Theorem 2, if $\\{g_{n}\\}_{n\in\mathbb{N}}$ is a
sequence of smooth functions converging to
$g\in\mathcal{S}^{\prime}(\mathbb{R})$ such that $T^{\omega}g_{n}\rightarrow
T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$, then the
corresponding sequence of solutions
$\\{u_{n}\\}_{n\in\mathbb{N}}=\\{\omega+\theta^{n}\\}_{n\in\mathbb{N}}$ to
equation (1.7) converges to $u=\omega+\theta$ in the sense that
$\theta^{n}\to\theta$ in $\mathscr{C}^{\varsigma}_{\tau}\mathcal{C}^{\beta}$
for any $1-\gamma<\varsigma<\gamma-\rho$.
In applications, one is typically interested in the regularizing effects
provided by specific sample paths of stochastic processes $\omega$. Although
the class of regularizing paths is by now well studied, we show here, as an
example, the regularizing effect of measurable sample paths of fractional Lévy
processes (see Section 5).
In this connection, suppose that $\xi$ is a spatial white noise on the torus
$\mathbb{T}$; we then find conditions on the distribution $g$ and the initial
data $u_{0}$ so that a unique solution to (1.6) exists.
###### Theorem 4.
Let $\alpha\in(0,2]$. Let $H\in(0,1)\setminus\\{\alpha^{-1}\\}$. Let $L^{H}$ be a
Linear Fractional Lévy Process with Hurst parameter $H$ built from a symmetric
$\alpha$-stable Lévy process as defined in Section 5. Let $\vartheta\in(0,1)$
and let $\kappa>3-\frac{1-\vartheta}{2H}$. There exists $\varepsilon>0$ small
enough such that for all $\xi\in\mathcal{C}^{-\vartheta}$ and all
$g\in\mathcal{C}^{\kappa}(w)$ where $w$ is an admissible weight, almost surely
for all $u_{0}\in\mathcal{C}^{\vartheta+\varepsilon}$ there exists a unique
local solution to the mSHE in the sense of Definition 1. If $w$ is bounded,
then the solution exists globally.
One can specialize the previous theorem by letting $\xi$ be the space white
noise:
###### Corollary 5.
Let $d=1$ and let $\xi$ be a space white noise on the torus $\mathbb{T}$. Let
$\alpha\in(0,2]$, let $H\in(0,1)\setminus\\{\alpha^{-1}\\}$ and let
$\kappa>3-\frac{1}{4H}$. Let $w$ be an admissible weight. There exists
$\varepsilon>0$ small enough such that for all $g\in\mathcal{C}^{\kappa}(w)$,
almost surely for all $u_{0}\in\mathcal{C}^{\frac{1}{2}+\varepsilon}$
there exists a unique solution (in the sense of Definition 1) to the mSHE
$\mathop{}\\!\mathrm{d}u=\Delta
u\mathop{}\\!\mathrm{d}t+g(u)\xi\mathop{}\\!\mathrm{d}t+\mathop{}\\!\mathrm{d}L^{H}_{t},\quad
u(0,\cdot)=u_{0},$
where $L^{H}$ is a linear fractional stable motion.
In particular, taking $H<\frac{1}{4}$ allows us to take $\kappa<2$ and to go
beyond the classical theory using Bony estimates for the product of
distributions in Besov spaces. When $H<\frac{1}{8}$ one can deal with
non-Lipschitz-continuous $g$, and when $H<\frac{1}{12}$ one can deal with a
distributional field $g$.
The proofs of the above theorems and corollary can be found in Sections 4 and
5.
### 1.4. Outline of the paper
The paper is structured as follows: In Section 2.1 we extend the concept of
non-linear Young integration to the infinite dimensional setting, including a
Volterra operator, which in later sections will play the role of the inverse
Laplacian. We also give a result on existence and uniqueness of abstract
equations in Banach spaces.
In Section 3 we give a short overview of the concept of averaged fields and
their properties. As this topic is by now well studied in the literature, we
only give here the necessary details and provide several references for
further information. We also show how the standard averaged field can be
viewed as an operator on certain Besov function spaces.
In Section 4 we formulate the multiplicative stochastic heat equation in the
non-linear Young-Volterra integration framework, using the concept of averaged
fields. We prove existence and uniqueness of these equations, as well as
Theorem 2 and Corollary 3. In Section 5 we investigate more closely the
regularity of averaged fields associated with sample paths of fractional Lévy
processes,
and prove Theorem 4.
At last, in Section 6 we give a short reflection on the main results of the
article and provide some thoughts on future extensions of our results.
For the sake of self-containedness, we have included an appendix with
preliminaries on weighted Besov spaces, and some results regarding the
regularity/singularity of the inverse Laplacian acting on Besov distributions.
In addition we give a short proof for existence and uniqueness of (1.2) in the
case of twice differentiable $g$.
### 1.5. Notation
For $\beta\in\mathbb{R}$ and $d\geq 1$, we define the Hölder Besov space
$\mathcal{C}^{\beta}=B^{\beta}_{\infty,\infty}(\mathbb{R}^{d};\mathbb{R})$
endowed with its usual norm built upon Littlewood-Paley blocks and denoted by
$\|\cdot\|_{\mathcal{C}^{\beta}}$ (see Appendix A.2 and especially Proposition
39 for more on weighted Besov spaces). Note that when working with a weight
$w$ and weighted spaces, we denote the spaces $\mathcal{C}^{\beta}(w)$ and the
norm $\|\cdot\|_{\mathcal{C}^{\beta}(w)}$. For $T>0$ and $\varsigma\in(0,1)$
and $E$ a Banach space with norm $\|\cdot\|_{E}$, for $f:[0,T]\rightarrow E$
we define
$[f]_{\varsigma;E}=\sup_{s\neq
t}\frac{\|f_{t}-f_{s}\|_{E}}{|t-s|^{\varsigma}},$
and for $\varsigma>0$ we define
$\|f\|_{\varsigma;E}:=\sum_{k=0}^{[\varsigma]}\|f^{([k])}_{0}\|_{E}+\big{[}f^{([\varsigma])}\big{]}_{\varsigma-[\varsigma];E},$
and finally
$\mathcal{C}^{\varsigma}_{T}E=\mathcal{C}^{\varsigma}\big{(}[0,T];E\big{)}:=\left\\{f:[0,T]\to
E\,:\,\|f\|_{\varsigma;E}<\infty\right\\}.$
Whenever the underlying space $E$ is clear from the context, we will use the
shorthand notation $[f]_{\varsigma}$ etc. to denote the Hölder semi-norm.
We will frequently use the increment notation $f_{s,t}:=f_{t}-f_{s}$. We write
$f\lesssim g$ if there exists a constant $C>0$ such that $f\leq Cg$.
Furthermore, to stress that the constant $C$ depends on a parameter $p$, we
write $f\lesssim_{p}g$. We write $f\simeq g$ if $f\lesssim g$ and $g\lesssim
f$. For $s\leq t$, we denote by $\mathcal{P}([s,t])$ a partition
$\\{s=t_{0}<t_{1}<\cdots<t_{n}=t\\}$ of the interval $[s,t]$, with mesh size
$|\mathcal{P}|=\max_{k\in\\{0,\cdots,n-1\\}}|t_{k+1}-t_{k}|$.
## 2\. Non-linear Young-Volterra theory in Banach spaces
For $\alpha\in(0,2]$, let $t\mapsto P_{t}^{\frac{\alpha}{2}}$ be the
fundamental solution associated to the $\alpha$-fractional heat equation
$\partial_{t}P^{\frac{\alpha}{2}}=-(-\Delta)^{\frac{\alpha}{2}}P^{\frac{\alpha}{2}},\quad
P^{\frac{\alpha}{2}}_{0}=\delta_{0}.$
For a function $y:\mathbb{R}^{d}\rightarrow\mathbb{R}$, let
$P^{\frac{\alpha}{2}}_{t}y$ denote the convolution between the fundamental
solution at time $t\geq 0$ and $y$. Note that for $\alpha=2$, we obtain the
classical heat equation, and $P_{\cdot}:=P^{1}_{\cdot}$ then denotes the
convolution operator with the Gaussian kernel. Towards a pathwise analysis of
stochastic parabolic equations, one encounters the problem that for a function
$y\in\mathcal{C}^{\kappa}(\mathbb{R}^{d})$ with $\kappa\geq 0$, the mapping
$t\mapsto P_{t}^{\frac{\alpha}{2}}y$ is smooth in time everywhere except when
approaching $0$ where it is only continuous. In particular, from standard
(fractional) heat kernel estimates (see Corollary 45 in the Appendix) we know
that for $s\leq t\in[0,T]$, any $\theta\in[0,1]$ and $\rho\in(0,\alpha]$ the
following inequality holds
$\|(P_{t}^{\frac{\alpha}{2}}-P^{\frac{\alpha}{2}}_{s})y\|_{\mathcal{C}^{\kappa+\alpha\rho}}\lesssim\|y\|_{\mathcal{C}^{\kappa}}|t-s|^{\theta}s^{-\theta-\rho}.$
(2.1)
Note the special case when $\rho=0$: then $P^{\frac{\alpha}{2}}_{t}$ is a
linear operator on $\mathcal{C}^{\kappa}$ which is $\theta$-Hölder continuous
in time on any interval $[\varepsilon,T]\subset[0,T]$ for $\varepsilon>0$ and
$\theta\in[0,1]$, but might be only continuous when approaching the point
$t=0$. We therefore need to extend the concept of Hölder spaces in order to
take into account this type of loss of regularity near the origin.
###### Definition 6.
Let $E$ be a Banach space. We define the space of paths on $[0,T]$ which are
Hölder continuous of order $\varsigma\in(0,1)$ on $(0,T]$ and only continuous
at zero in the following way:
$\mathscr{C}^{\varsigma}_{T}E:=\\{y:[0,T]\rightarrow
E\,|\,[y]_{\varsigma}<\infty\\}$
where we define the semi-norm
$[y]_{\varsigma}:=\sup_{s\leq
t\in[0,T];\,\zeta\in[0,\varsigma]}\frac{|y_{s,t}|_{E}}{|t-s|^{\zeta}s^{-\zeta}}.$
The space $\mathscr{C}^{\varsigma}_{T}E$ is a Banach space when equipped with
the norm $\|y\|_{\varsigma}:=|y_{0}|_{E}+[y]_{\varsigma}$.
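As a quick numerical illustration of the difference from classical Hölder spaces (a toy check under stated assumptions: we evaluate only the endpoint $\zeta=\varsigma$ of the supremum, on a fixed grid): the path $y_{t}=t^{0.3}$ has unbounded classical $\tfrac{1}{2}$-Hölder seminorm near $t=0$, while the weighted ratio from Definition 6 remains bounded.

```python
import numpy as np

sigma = 0.5
t = np.logspace(-12, 0, 800)             # grid accumulating at the origin
s_grid, t_grid = np.meshgrid(t, t, indexing="ij")
mask = t_grid > s_grid
incr = np.abs(t_grid ** 0.3 - s_grid ** 0.3)
gap = (t_grid - s_grid) ** sigma

# Classical Hoelder ratio |y_{s,t}| / (t-s)^sigma blows up for small s, t ...
classical = np.max(incr[mask] / gap[mask])
# ... while the singular ratio |y_{s,t}| * s^sigma / (t-s)^sigma stays bounded.
singular = np.max(incr[mask] * s_grid[mask] ** sigma / gap[mask])

assert classical > 10.0
assert singular < 2.0
```

This is exactly the behaviour the weight $s^{-\zeta}$ in the seminorm is designed to absorb: Hölder regularity on $(0,T]$, mere continuity at the origin.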
The singular Hölder-type spaces introduced above have recently been extensively
studied in [4]; the space introduced above can be seen as a special case of
the more general spaces considered there. We use the convention of taking the
supremum over $\zeta\in[0,\varsigma]$, as this will make computations in
subsequent sections simpler. It is well known that if a function
$y\in\mathcal{C}^{\gamma}_{T}E$ for some $\gamma\in[0,1)$, then
$y\in\mathcal{C}^{\theta}_{T}E$ for all $\theta\in[0,\gamma]$ (note that
$\theta=0$ implies that $y$ is bounded, which is always true for Hölder
continuous functions on bounded domains). Thus it follows that also
$\sup_{s\leq
t\in[0,T];\,\theta\in[0,\gamma]}\frac{|y_{s,t}|_{E}}{|t-s|^{\theta}}<\infty.$
###### Remark 7.
It is readily seen that $\mathscr{C}^{\varsigma}_{T}E$ consists of all
functions which are Hölder continuous on $(0,T]$ but only continuous at
the point $0$. Note therefore that the following inclusion
$\mathcal{C}^{\varsigma}_{T}E\subset\mathscr{C}^{\varsigma}_{T}E$ holds.
Indeed, for any $s,t\in[0,T]$
$\frac{|y_{s,t}|_{E}}{|t-s|^{\varsigma}}=\frac{s^{-\varsigma}|y_{s,t}|_{E}}{s^{-\varsigma}|t-s|^{\varsigma}}$
and thus in particular,
$\frac{|y_{s,t}|_{E}}{s^{-\varsigma}|t-s|^{\varsigma}}\leq
T^{\varsigma}\frac{|y_{s,t}|_{E}}{|t-s|^{\varsigma}}.$
###### Remark 8.
It may be instructive for the reader to keep in mind that in subsequent
sections, we will take the Banach space $E$ to be the Besov-Hölder space
$\mathcal{C}^{\beta}(\mathbb{R}^{d})$ or $\mathcal{C}^{\beta}(\mathbb{T}^{d})$
for some $\beta\in\mathbb{R}$ and $d\geq 1$.
We will throughout this section work with general Volterra type operators
satisfying certain regularity assumptions. We therefore give the following
working hypothesis.
###### Hypothesis 9.
For each $t\in[0,T]$, let $S_{t}\in\mathcal{L}(E)$ be a linear operator on $E$
satisfying the following three regularity conditions for some $\rho>0$ and any
$\theta,\theta^{\prime}\in[0,1]$ and any $0\leq s\leq
t\leq\tau^{\prime}\leq\tau\leq T$
$\displaystyle{\rm(i)}$ $\displaystyle|S_{t}u|_{E}$ $\displaystyle\lesssim
t^{-\rho}|u|_{E}$ $\displaystyle{\rm(ii)}$ $\displaystyle|(S_{t}-S_{s})u|_{E}$
$\displaystyle\lesssim(t-s)^{\theta}s^{-\theta-\rho}|u|_{E}$
$\displaystyle{\rm(iii)}$
$\displaystyle\Big{|}\big{(}(S_{\tau-t}-S_{\tau-s})-(S_{\tau^{\prime}-t}-S_{\tau^{\prime}-s})\big{)}u\Big{|}_{E}$
$\displaystyle\lesssim(\tau-\tau^{\prime})^{\theta^{\prime}}(t-s)^{\theta}(\tau^{\prime}-t)^{-\theta-\theta^{\prime}-\rho}|u|_{E}$
We then say that $t\mapsto S_{t}$ is a $\rho$–singular operator.
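For the heat semigroup, which is the model case we have in mind for $S$, bounds of this type reflect the familiar trade-off that gaining $2\rho$ derivatives costs $t^{-\rho}$. On the Fourier side this reduces to the elementary inequality $\sup_{k\geq 1}|k|^{2\rho}e^{-k^{2}t}\leq(\rho/e)^{\rho}t^{-\rho}$, which the following sketch checks numerically (a toy check on integer frequencies; the setup is ours, not from the paper):

```python
import numpy as np

rho = 0.4
k = np.arange(1, 10_000, dtype=float)   # toy integer frequencies

for t in (1e-4, 1e-2, 1.0):
    # Fourier multiplier of the heat semigroup, lifted by 2*rho derivatives.
    lhs = np.max(k ** (2 * rho) * np.exp(-(k ** 2) * t))
    # Optimising x**(2*rho) * exp(-x**2 * t) over x > 0 gives (rho/e)**rho * t**(-rho).
    bound = (rho / np.e) ** rho * t ** (-rho)
    assert lhs <= bound * (1 + 1e-12)
```

The supremum over real $x>0$ dominates the discrete maximum, so the bound holds uniformly in $t$, with the singularity $t^{-\rho}$ appearing exactly as in condition (i).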
### 2.1. Non-linear Young-Volterra integration
We are now ready to construct a non-linear Young-Volterra integral in Banach
spaces. If $X:[0,T]\times E\to E$ is a smooth function in time,
$(P_{t})_{t\in[0,T]}$ is a (nice) linear operator on $E$, and
$(y_{t})_{t\in[0,T]}$ is a continuous path from $[0,T]$ to $E$ itself, it is
quite standard to consider integrals of the following form:
$\int_{0}^{t}P_{t-r}\dot{X}_{r}(y_{r})\mathop{}\\!\mathrm{d}r.$
The aim of this part is to extend this notion of integral to non-smooth drivers
$X$, and to solve integral equations using this extension of the integral; the
notion of $\rho$-singular operator will be useful in the following.
###### Lemma 10.
Consider parameters $\gamma>\frac{1}{2}$, $0\leq\rho\leq\gamma$ and
$0<\varsigma<\gamma-\rho$ and assume $\varsigma+\gamma>1$. Let $E$ be a Banach
space and let $y\in\mathscr{C}^{\varsigma}_{T}E$. Suppose that $X:[0,T]\times
E\rightarrow E$ satisfies for any $x,y\in E$ and $s\leq t\in[0,T]$
$\displaystyle{\rm(i)}$ $\displaystyle|X_{s,t}(x)|_{E}\lesssim
H(|x|_{E})|t-s|^{\gamma}$ $\displaystyle{\rm(ii)}$
$\displaystyle|X_{s,t}(x)-X_{s,t}(y)|_{E}\lesssim
H(|x|_{E}\vee|y|_{E})|x-y|_{E}|t-s|^{\gamma}$
where $H$ is a positive locally bounded function on $\mathbb{R}_{+}$. Let
$(S_{t})_{t\in[0,T]}\in\mathcal{L}(E)$ be a $\rho$–singular linear operator on
$E$, satisfying Hypothesis 9. We then define the non-linear Young-Volterra
integral by
$\Theta(y)_{t}:=\lim_{\begin{subarray}{c}\mathcal{P}\in\mathcal{P}([0,t])\\\
|\mathcal{P}|\rightarrow
0\end{subarray}}\sum_{[u,v]\in\mathcal{P}}S_{t-u}X_{u,v}(y_{u}).$ (2.2)
The integration map $\Theta$ is a continuous non-linear operator from
$\mathscr{C}^{\varsigma}_{T}E$ to $\mathscr{C}^{\varsigma}_{T}E$, and
the following inequality holds:
$|\Theta(y)_{t}-\Theta(y)_{s}|_{E}\lesssim\sup_{0\leq
z\leq\|y\|_{\infty}}H(z)(1+[y]_{\varsigma})(t-s)^{\varsigma}T^{\gamma-\rho+\varsigma},$
(2.3)
Furthermore, for a linear operator $A\in\mathcal{L}(E)$, the following
commutative property holds
$A\Theta(y)_{t}=\lim_{|\mathcal{P}|\rightarrow 0}\sum_{[u,v]\in\mathcal{P}}A\,S_{t-u}X_{u,v}(y_{u}).$
###### Proof.
Let us first assume that for any $0\leq s\leq t\leq\tau^{\prime}\leq\tau\leq
T$ the following operator is well defined:
$\Theta_{s}^{t}(y)_{\tau}:=\lim_{\begin{subarray}{c}\mathcal{P}\in\mathcal{P}([s,t])\\\
|\mathcal{P}|\to
0\end{subarray}}\sum_{[u,v]\in\mathcal{P}}S_{\tau-u}X_{u,v}(y_{u}).$
Note that in this setting we have
$\Theta(y)_{t}=\Theta_{0}^{t}(y)_{t}$
and the increment satisfies
$\Theta(y)_{s,t}=\Theta_{s}^{t}(y)_{t}+\Theta_{0}^{s}(y)_{s,t}.$
Hence, in order to have the bound (2.3), it is enough to have a bound on
$\Theta_{s}^{t}(y)_{\tau}$ and on $\Theta_{s}^{t}(y)_{\tau^{\prime},\tau}$.
To this end, we begin by showing the existence of the integrals
$\Theta_{s}^{t}(y)_{\tau}$ and $\Theta_{s}^{t}(y)_{\tau^{\prime},\tau}$
together with suitable bounds. Both terms are constructed in the
same way; however, since
$\Theta_{s}^{t}(y)_{\tau^{\prime},\tau}=\lim_{\begin{subarray}{c}\mathcal{P}\in\mathcal{P}([s,t])\\\
|\mathcal{P}|\to
0\end{subarray}}\sum_{[u,v]\in\mathcal{P}}\left(S_{\tau-u}-S_{\tau^{\prime}-u}\right)X_{u,v}(y_{u}),$
(2.4)
this term is a bit more involved, as it has an increment of the kernel $S$
in the summand. We will therefore show existence as well as a suitable bound for
this term, and leave the specifics of the first term as a simple exercise for
the reader. Everything will be proven in a manner similar to the sewing lemma
from the theory of rough paths; more specifically, the recently developed
Volterra sewing lemma from [24] provides the correct techniques for this
specific setting. The uniqueness and additivity (i.e. that
$\Theta(y)_{s,t}=\Theta_{s}^{t}(y)_{t}+\Theta_{0}^{s}(y)_{s,t}$) of the
mapping follows directly from the standard arguments given for example in [24]
or [12, Lem. 4.2].
Consider now a dyadic partition $\mathcal{P}^{n}$ of $[s,t]$ defined
iteratively such that $\mathcal{P}^{0}=\\{[s,t]\\}$, and for $n\geq 0$
$\mathcal{P}^{n+1}:=\bigcup_{[u,v]\in\mathcal{P}^{n}}\\{[u,m],[m,v]\\},$
where $m:=\frac{u+v}{2}$. It follows that $\mathcal{P}^{n}$ consists of
$2^{n}$ sub-intervals $[u,v]$, each of length $2^{-n}|t-s|$. Define the
approximating sum
$\mathcal{I}_{n}:=\sum_{[u,v]\in\mathcal{P}^{n}}(S_{\tau-u}-S_{\tau^{\prime}-u})X_{u,v}(y_{u}),$
(2.5)
and observe that for $n\in\mathbb{N}$, we have
$\mathcal{I}_{n+1}-\mathcal{I}_{n}=-\sum_{[u,v]\in\mathcal{P}^{n}}\delta_{m}\left[(S_{\tau-u}-S_{\tau^{\prime}-u})X_{u,v}(y_{u})\right],$
(2.6)
where $m=\frac{u+v}{2}$ and for a two variable function $f$, we use that
$\delta_{m}f_{u,v}:=f_{u,v}-f_{u,m}-f_{m,v}$. By elementary algebraic
manipulations we see that
$\delta_{m}\left[(S_{\tau-u}-S_{\tau^{\prime}-u})X_{u,v}(y_{u})\right]\\\
=(S_{\tau-u}-S_{\tau^{\prime}-u})(X_{m,v}(y_{u})-X_{m,v}(y_{m}))+(S_{\tau-u}-S_{\tau^{\prime}-u}-S_{\tau-m}+S_{\tau^{\prime}-m})X_{m,v}(y_{u}).$
(2.7)
We first investigate the second term on the right-hand side above. Invoking
(iii) of Hypothesis 9 and assumption (i) on $X$, we observe that for
any $\theta,\theta^{\prime}\in[0,1]$
$|(S_{\tau-u}-S_{\tau^{\prime}-u}-S_{\tau-m}+S_{\tau^{\prime}-m})X_{m,v}(y_{u})|_{E}\\\ \lesssim
H(|y_{u}|_{E})|\tau-\tau^{\prime}|^{\theta^{\prime}}|\tau^{\prime}-m|^{-\theta^{\prime}-\rho-\theta}|m-u|^{\theta}|v-m|^{\gamma}.$
Let us now fix $\theta^{\prime}=\varsigma\in[0,\gamma-\rho)$, and choose
$\theta\in[0,1]$ such that $\gamma+\theta>1$ and $\theta+\varsigma+\rho<1$.
Note that this is always possible due to the fact that
$\gamma-\rho-\varsigma>0$. Furthermore, we note that for any partition
$\mathcal{P}$ of $[s,t]$ we have
$\sum_{[u,v]\in\mathcal{P}}|\tau^{\prime}-m|^{-\theta-\rho-\varsigma}|v-m|\lesssim\int_{s}^{t}|\tau^{\prime}-r|^{-\theta-\rho-\varsigma}\mathop{}\\!\mathrm{d}r\lesssim|t-s|^{1-\varsigma-\rho-\theta},$
(2.8)
where we have used that $m=(u+v)/2$. From this, it follows that for any
$\theta\in[0,\varsigma]$ the following inequality holds
$\sum_{[u,v]\in\mathcal{P}^{n}}|(S_{\tau-u}-S_{\tau^{\prime}-u}-S_{\tau-m}+S_{\tau^{\prime}-m})X_{m,v}(y_{u})|_{E}\\\
\lesssim\sup_{0\leq
z\leq\|y\|_{\infty}}H(z)|\mathcal{P}^{n}|^{\gamma+\theta-1}|\tau-\tau^{\prime}|^{\varsigma}|t-s|^{1-\theta-\rho-\varsigma}.$
(2.9)
Let us now move on to the first term in (2.7). By invoking the bounds on $S$
from (ii) of Hypothesis 9 and assumption (ii) on $X$, we observe that for any
$\theta\geq 0$ and any $0\leq\zeta\leq\varsigma$,
$|(S_{\tau-u}-S_{\tau^{\prime}-u})(X_{m,v}(y_{u})-X_{m,v}(y_{m}))|_{E}\\\
\lesssim|\tau-\tau^{\prime}|^{\theta}|\tau^{\prime}-u|^{-\rho-\theta}|m-v|^{\gamma}|m-u|^{\zeta}u^{-\zeta}H(|y_{u}|_{E}\vee|y_{m}|_{E})[y]_{\varsigma}.$
Similarly to (2.9), we now take $\theta=\zeta=\varsigma$ and consider the sum
over a partition $\mathcal{P}$ of $[s,t]\subset[0,T]$; since
$\varsigma+\gamma>1$ and $\rho+\varsigma<1$, we see that
$\sum_{[u,v]\in\mathcal{P}}|\tau-\tau^{\prime}|^{\varsigma}|\tau^{\prime}-m|^{-\rho-\varsigma}|m-v|^{\gamma}|m-u|^{\varsigma}u^{-\varsigma}\leq|\mathcal{P}|^{\varsigma+\gamma-1}|\tau-\tau^{\prime}|^{\varsigma}\int_{s}^{t}|\tau^{\prime}-r|^{-\rho-\varsigma}r^{-\varsigma}\mathop{}\\!\mathrm{d}r$
where again $m=(u+v)/2$. Furthermore, when $s<t<\tau^{\prime}$, we have
$\displaystyle\int_{s}^{t}|\tau^{\prime}-r|^{-\rho-\varsigma}r^{-\varsigma}\mathop{}\\!\mathrm{d}r\leq$
$\displaystyle\int_{s}^{t}|t-r|^{-\rho-\varsigma}r^{-\varsigma}\mathop{}\\!\mathrm{d}r$
$\displaystyle\leq$
$\displaystyle(t-s)^{1-\rho-2\varsigma}\int_{0}^{1}(1-r)^{-(\rho+\varsigma)}r^{-\varsigma}\mathop{}\\!\mathrm{d}r$
$\displaystyle\lesssim$ $\displaystyle(t-s)^{1-\rho-2\varsigma}.$
We therefore obtain when specifying $\theta=\varsigma$,
$\sum_{[u,v]\in\mathcal{P}^{n}}|(S_{\tau-u}-S_{\tau^{\prime}-u})(X_{m,v}(y_{u})-X_{m,v}(y_{m}))|_{E}\\\
\lesssim|\mathcal{P}^{n}|^{\varsigma+\gamma-1}|\tau-\tau^{\prime}|^{\varsigma}|t-s|^{1-\rho-2\varsigma}\sup_{0\leq
z\leq\|y\|_{\infty}}H(z)[y]_{\varsigma}.$ (2.10)
Combining (2.9) and (2.10), and using that
$|\mathcal{P}^{n}|=2^{-n}|t-s|$, it follows from (2.6) that
$|\mathcal{I}_{n+1}(s,t)-\mathcal{I}_{n}(s,t)|_{E}\lesssim\sup_{0\leq
z\leq\|y\|_{\infty,[s,t]}}H(z)(1+[y]_{\varsigma})2^{-n(\gamma-(\rho+\varsigma))}(\tau-\tau^{\prime})^{\varsigma}(t-s)^{\gamma-\rho-\varsigma}.$
For $m>n\in\mathbb{N}$, thanks to the triangle inequality and the estimate
above, we get
$\|\mathcal{I}_{m}(s,t)-\mathcal{I}_{n}(s,t)\|_{E}\lesssim\sup_{0\leq
z\leq\|y\|_{\infty,[s,t]}}H(z)(1+[y]_{\varsigma})(\tau-\tau^{\prime})^{\varsigma}(t-s)^{\gamma-\rho-\varsigma}\psi_{n,m},$
(2.11)
where $\psi_{n,m}=\sum_{i=n}^{m}2^{-i(\gamma-(\rho+\varsigma))}$, and it
follows that $\\{\mathcal{I}_{n}\\}_{n\in\mathbb{N}}$ is Cauchy in $E$. Hence
there exists a limit
$\mathcal{I}=\lim_{n\rightarrow\infty}\mathcal{I}_{n}$ in $E$. Moreover, from
(2.11) we find that the following inequality holds
$|\mathcal{I}-(S_{\tau-s}-S_{\tau^{\prime}-s})X_{s,t}(y_{s})|_{E}\lesssim\sup_{0\leq
z\leq\|y\|_{\infty}}H(z)(1+[y]_{\varsigma})(\tau-\tau^{\prime})^{\varsigma}(t-s)^{\gamma-\rho-\varsigma}\psi_{0,\infty}.$
(2.12)
The proof that $\mathcal{I}$ is equal to
$\Theta_{s}^{t}(y)_{\tau^{\prime},\tau}$ defined in (2.4) (where in particular
$\Theta(y)$ is defined independently of the partition $\mathcal{P}$ of $[s,t]$)
follows by standard arguments for the sewing lemma; see [24] in the Volterra
case. Finally, we get for any $0\leq s\leq t\leq\tau^{\prime}\leq\tau\leq T$,
$|\Theta_{s}^{t}(y)_{\tau^{\prime},\tau}-(S_{\tau-s}-S_{\tau^{\prime}-s})X_{s,t}(y_{s})|_{E}\lesssim\sup_{0\leq
z\leq\|y\|_{\infty}}H(z)T^{\gamma-\rho+\varsigma}|t-s|^{\varsigma}.$ (2.13)
Using the same techniques, in order to prove that $\Theta_{s}^{t}(y)_{t}$
exists for all $0\leq s\leq t\leq\tau\leq T$ one has to control
$S_{\tau-m}(X_{m,v}(y_{m})-X_{m,v}(y_{u}))+(S_{\tau-u}-S_{\tau-m})X_{m,v}(y_{u}).$
Performing exactly the same computation, we have
$|\Theta_{s}^{t}(y)_{\tau}-S_{\tau-s}X_{s,t}(y_{s})|_{E}\lesssim\sup_{0\leq
z\leq\|y\|_{\infty}}H(z)T^{\gamma-\rho+\varsigma}|t-s|^{\varsigma}(1+[y]_{\varsigma}).$
(2.14)
Combining (2.13) and (2.14), and using the fact that
$\Theta(y)_{t}-\Theta(y)_{s}=\Theta_{s}^{t}(y)_{t}+\Theta_{0}^{s}(y)_{s,t}$,
$|\Theta(y)_{s,t}-S_{t-s}X_{s,t}(y_{s})-(S_{t}-S_{s})X_{0,s}(y_{0})|_{E}\\\
\leq|\Theta_{s}^{t}(y)_{t}-S_{t-s}X_{s,t}(y_{s})|_{E}+|\Theta_{0}^{s}(y)_{s,t}-(S_{t}-S_{s})
X_{0,s}(y_{0})|_{E},$
it is readily checked that the following inequality holds
$|\Theta(y)_{t}-\Theta(y)_{s}-S_{t-s}X_{s,t}(y_{s})-(S_{t}-S_{s})X_{0,s}(y_{0})|_{E}\lesssim\sup_{0\leq z\leq\|y\|_{\infty}}H(z)(1+[y]_{\varsigma})T^{\gamma-\rho+\varsigma}(t-s)^{\varsigma}.$
Finally note that since $\varsigma<\gamma-\rho$,
$|S_{t-s}X_{s,t}(y_{s})|_{E}\lesssim H(|y_{s}|)|t-s|^{\gamma-\rho}\lesssim
H(|y_{s}|)T^{\gamma-\rho+\varsigma}|t-s|^{\varsigma},$
and
$|(S_{t}-S_{s})X_{0,s}(y_{0})|_{E}\lesssim
H(|y_{0}|)s^{\gamma-\rho-\varsigma}|t-s|^{\varsigma}.$
For the last claim, if $A\in\mathcal{L}(E)$ is a linear operator, then one
redefines $\mathcal{I}_{n}$ in (2.5) as
$\mathcal{I}_{n}(A):=\sum_{[u,v]\in\mathcal{P}^{n}}A(S_{\tau-u}-S_{\tau^{\prime}-u})X_{u,v}(y_{u}),$
and by linearity of $A$ we see that $\mathcal{I}_{n}(A)=A\,\mathcal{I}_{n}$ for
each $n$. Taking limits, using the above established inequalities and the
boundedness of $A$, we find that
$\mathcal{I}_{n}(A)=A\,\mathcal{I}_{n}\rightarrow A\,\mathcal{I}$, which proves
the claim. ∎
With the non-linear Young-Volterra integral constructed, we will need certain
stability estimates in later applications.
###### Proposition 11 (Stability of $\Theta$).
Let $\gamma,\varsigma,\rho$ be given as in Lemma 10. Assume that for
$i=1,2$, $X^{i}:[0,T]\times E\rightarrow E$ satisfies for any $x,y\in E$ and
$s\leq t\in[0,T]$
$\displaystyle{\rm(i)}\qquad$ $\displaystyle|X^{i}_{s,t}(x)|_{E}+\|\nabla
X^{i}_{s,t}(x)\|_{\mathcal{L}(E)}$ $\displaystyle\lesssim
H(|x|_{E})|t-s|^{\gamma}$ (2.15) $\displaystyle{\rm(ii)}\qquad$
$\displaystyle|X^{i}_{s,t}(x)-X^{i}_{s,t}(y)|_{E}$ $\displaystyle\lesssim
H(|x|_{E}\vee|y|_{E})|x-y|_{E}|t-s|^{\gamma}$ $\displaystyle{\rm(iii)}\qquad$
$\displaystyle|\nabla X^{i}_{s,t}(x)-\nabla X^{i}_{s,t}(y)|_{\mathcal{L}(E)}$
$\displaystyle\lesssim H(|x|_{E}\vee|y|_{E})|x-y|_{E}|t-s|^{\gamma},$
where $H$ is a positive locally bounded function, and $\nabla$ is understood
as a linear operator on $E$ in the Fréchet sense. Furthermore, suppose there
exists a positive and locally bounded function $H_{X^{1}-X^{2}}$ such that
$\displaystyle{\rm(i)}\qquad$
$\displaystyle|X^{1}_{s,t}(x)-X^{2}_{s,t}(x)|_{E}$ $\displaystyle\lesssim
H_{X^{1}-X^{2}}(|x|_{E})|t-s|^{\gamma}$ (2.16) $\displaystyle{\rm(ii)}\qquad$
$\displaystyle|(X^{1}_{s,t}-X^{2}_{s,t})(x)-(X^{1}_{s,t}-X^{2}_{s,t})(y)|_{E}$
$\displaystyle\lesssim
H_{X^{1}-X^{2}}(|x|_{E}\vee|y|_{E})|x-y|_{E}|t-s|^{\gamma}.$
Let $\Theta^{1}$ denote the non-linear integral operator constructed in Lemma
10 with respect to $X^{1}$, and similarly let $\Theta^{2}$ denote the integral
operator with respect to $X^{2}$. Then for two paths
$y^{1},y^{2}\in\mathscr{C}^{\varsigma}_{T}E$,
$[\Theta^{1}(y^{1})-\Theta^{2}(y^{2})]_{\varsigma}\lesssim_{P}\bigg{[}\sup_{0\leq
z\leq\|y^{1}\|_{\infty}\vee\|y^{2}\|_{\infty}}H(z)\left([y^{1}]_{\varsigma}+[y^{2}]_{\varsigma}\right)(|y^{1}_{0}-y^{2}_{0}|_{E}+[y^{1}-y^{2}]_{\varsigma})\\\
+\sup_{0\leq
z\leq\|y^{1}\|_{\infty}\vee\|y^{2}\|_{\infty}}H_{X^{1}-X^{2}}(z)[y^{1}]_{\varsigma}\vee[y^{2}]_{\varsigma}\bigg{]}T^{\gamma-\rho-\varsigma}.$
(2.17)
###### Proof.
Decompose the difference $\Theta^{1}_{s,t}(y^{1})-\Theta^{2}_{s,t}(y^{2})$ into the following two terms
$\Theta_{s,t}^{1}(y^{1})-\Theta_{s,t}^{2}(y^{2})=(\Theta_{s,t}^{1}(y^{1})-\Theta_{s,t}^{2}(y^{1}))+(\Theta_{s,t}^{2}(y^{1})-\Theta_{s,t}^{2}(y^{2}))=:D_{X^{1},X^{2}}(s,t)+D_{y^{1},y^{2}}(s,t).$
We treat $D_{X^{1},X^{2}}$ and $D_{y^{1},y^{2}}$ separately, and begin to
consider $D_{y^{1},y^{2}}$. Since $X^{i}$ is differentiable and satisfies
(i)-(iii) in (2.15), for $i=1,2$ we have
$X^{i}_{s,t}(y^{1}_{s})-X^{i}_{s,t}(y^{2}_{s})=\mathcal{X}^{i}_{s,t}(y^{1}_{s},y^{2}_{s})(y^{1}_{s}-y^{2}_{s}),$
where $\mathcal{X}^{i}(y^{1}_{s},y^{2}_{s}):=\int_{0}^{1}\nabla
X^{i}_{s,t}(qy^{1}_{s}+(1-q)y^{2}_{s})\mathop{}\\!\mathrm{d}q$. In order to
prove (2.17), we proceed with the same strategy as in the proof of Lemma 10:
we first establish appropriate bounds for $D_{y^{1},y^{2}}(s,t)$, and then
similarly for $D_{X^{1},X^{2}}(s,t)$. To this end, changing the integrand in
(2.5) so that
$\mathcal{I}_{n}(s,t):=\sum_{[u,v]\in\mathcal{P}^{n}[s,t]}(S_{\tau-u}-S_{t-u})\mathcal{X}_{u,v}^{i}(y^{1}_{u},y^{2}_{u})(y^{1}_{u}-y^{2}_{u}),$
we continue along the lines of the proof in Lemma 10 to show that
$\mathcal{I}_{n}$ is Cauchy. As the strategy of this proof is identical to
that of Lemma 10, we will here only point out the important differences. By
appealing to condition (iii) in (2.15), we observe in particular that
$|\mathcal{X}^{i}(y^{1}_{s},y^{2}_{s})-\mathcal{X}^{i}(y^{1}_{u},y^{2}_{u})|_{\mathcal{L}(E)}\lesssim|t-s|^{\gamma}\sup_{0\leq
z\leq\|y^{1}\|_{\infty}\vee\|y^{2}\|_{\infty}}H(z)\left([y^{1}]_{\varsigma}+[y^{2}]_{\varsigma}\right).$
Furthermore, it is readily checked that
$|y^{1}_{s}-y^{2}_{s}|_{E}\lesssim|y^{1}_{0}-y^{2}_{0}|_{E}+[y^{1}-y^{2}]_{\varsigma}T^{\varsigma}.$
Following along the lines of the proof of Lemma 10, one can then check that
for $m>n\in\mathbb{N}$
$|\mathcal{I}_{m}(s,t)-\mathcal{I}_{n}(s,t)|_{E}\\\ \lesssim\sup_{0\leq
z\leq\|y^{1}\|_{\infty}\vee\|y^{2}\|_{\infty}}H(z)\left([y^{1}]_{\varsigma}+[y^{2}]_{\varsigma}\right)(|y^{1}_{0}-y^{2}_{0}|_{E}+[y^{1}-y^{2}]_{\varsigma}T^{\varsigma})(\tau-\tau^{\prime})^{\varsigma}(t-s)^{\gamma-\rho-\varsigma}\psi_{n,m},$
(2.18)
where $\psi_{n,m}$ is defined as below (2.11). With this inequality at hand,
the remainder of the proof can be verified in a similar way as in the proof of
Lemma 10, and we obtain from this lemma that
$\|D_{y^{1},y^{2}}\|_{\mathscr{C}^{\varsigma}_{T}E}\lesssim C\sup_{0\leq
z\leq\|y^{1}\|_{\infty}\vee\|y^{2}\|_{\infty}}H(z)\left([y^{1}]_{\varsigma}+[y^{2}]_{\varsigma}\right)(|y^{1}_{0}-y^{2}_{0}|_{E}+[y^{1}-y^{2}]_{\varsigma}T^{\varsigma}).$
Next we move on to prove a similar bound for $D_{X^{1},X^{2}}$ as defined
above. Set $Z=X^{1}-X^{2}$. By (2.16) it follows that $Z$ satisfies the
conditions of Lemma 10, and then from (2.3) it follows that
$[D_{X^{1},X^{2}}]_{\varsigma}\lesssim\sup_{0\leq
z\leq\|y^{1}\|_{\infty}\vee\|y^{2}\|_{\infty}}H_{X^{1}-X^{2}}(z)[y^{1}]_{\varsigma}\vee[y^{2}]_{\varsigma}T^{\gamma-\varsigma-\rho}.$
∎
### 2.2. Existence and uniqueness
We begin by proving local existence and uniqueness for an abstract type of
equation with values in a Banach space. The equation in itself does not
require the use of the non-linear Young-Volterra integral, and is formulated
for general operators
$\Theta:[0,T]\times\mathscr{C}^{\varsigma}_{T}E\rightarrow\mathscr{C}^{\varsigma}_{T}E$
satisfying certain regularity conditions. We will apply these results in later
sections in combination with the non-linear Young-Volterra integral operator
$\Theta$ constructed in the previous section, and thus the reader is welcome to
already think of $\Theta$ as being a non-linear Young integral operator as
constructed in Lemma 10.
###### Theorem 12 (Local existence and uniqueness).
Let
$\Theta:[0,T]\times\mathscr{C}^{\varsigma}_{T}E\rightarrow\mathscr{C}^{\varsigma}_{T}E$
be a function which satisfies for $y,\tilde{y}\in\mathscr{C}^{\varsigma}_{T}E$
and some $\varepsilon>0$
$\displaystyle\left[\Theta(y)\right]_{\varsigma}$ $\displaystyle\leq
C(\|y\|_{\infty})(1+[y]_{\varsigma})T^{\varepsilon}$ (2.19)
$\displaystyle[\Theta(y)-\Theta(\tilde{y})]_{\varsigma}$ $\displaystyle\leq
C(\|y\|_{\infty}\vee\|\tilde{y}\|_{\infty})([y]_{\varsigma}+[\tilde{y}]_{\varsigma})(|y_{0}-\tilde{y}_{0}|_{E}+[y-\tilde{y}]_{\varsigma})T^{\varepsilon},$
where $C$ is a positive and increasing locally bounded function. Consider
$p\in\mathscr{C}^{\varsigma}_{T}E$ and let $\tau>0$ be such that
$\tau\leq\left[4(1+[p]_{\varsigma})C(1+|p_{0}|+[p]_{\varsigma})\right]^{-\frac{1}{\varepsilon}}.$
(2.20)
Then there exists a unique solution to the equation
$y_{t}=p_{t}+\Theta(y)_{t},\qquad p\in\mathscr{C}^{\varsigma}_{T}E$ (2.21)
in $\mathcal{B}_{\tau}(p)$, where $\mathcal{B}_{\tau}(p)$ is the unit ball in
$\mathscr{C}^{\varsigma}_{\tau}E$, centered at $p$.
###### Proof.
To prove existence and uniqueness, we will apply a standard fixed point
argument. Define the solution map
$\Gamma_{\tau}:\mathscr{C}^{\varsigma}_{\tau}E\rightarrow\mathscr{C}^{\varsigma}_{\tau}E$
given by
$\Gamma_{\tau}(y):=\\{p_{t}+\Theta(y)_{t}|\,t\in[0,\tau]\\}.$
Since
$\Theta:\mathscr{C}^{\varsigma}_{\tau}E\rightarrow\mathscr{C}^{\varsigma}_{\tau}E$
and $p\in\mathscr{C}^{\varsigma}_{\tau}E$ it follows that
$\Gamma_{\tau}(y)\in\mathscr{C}^{\varsigma}_{\tau}E$. We will now prove that
the solution map $\Gamma_{\tau}$ is an invariant map and a contraction on a
unit ball $\mathcal{B}_{\tau}(p)\subset\mathscr{C}^{\varsigma}_{\tau}E$
centered at $p\in\mathscr{C}^{\varsigma}_{\tau}E$. In particular, we define
$\mathcal{B}_{\tau}(p):=\\{y\in\mathscr{C}^{\varsigma}_{\tau}E|\,y_{t}=p_{t}+z_{t},\,{\rm
with}\,\,z\in\mathscr{C}^{\varsigma}_{\tau}E,\,\,z_{0}=0,\,\,[y-p]_{\varsigma}\leq
1\\}.$
We begin with the invariance. From the first condition in (2.19), it is
readily checked that for $y\in\mathcal{B}_{\tau}(p)$
$[\Gamma_{\tau}(y)-p]_{\varsigma}\leq
C(1+|p_{0}|+[p]_{\varsigma})(1+[y]_{\varsigma})\tau^{\varepsilon},$ (2.22)
where we have used that for $y\in\mathcal{B}_{\tau}(p)$, $\|y\|_{\infty}\leq
1+|p_{0}|+[p]_{\varsigma}$. Choosing a parameter $\tau_{1}>0$ such that
$\tau_{1}\leq\left(2C(1+|p_{0}|+[p]_{\varsigma})\right)^{-\frac{1}{\varepsilon}}$
it follows that
$\Gamma_{\tau_{1}}(\mathcal{B}_{\tau_{1}}(p))\subset\mathcal{B}_{\tau_{1}}(p)$, and
we say that $\Gamma_{\tau_{1}}$ leaves the ball $\mathcal{B}_{\tau_{1}}(p)$
invariant.
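For completeness, the uniform bound on elements of $\mathcal{B}_{\tau}(p)$ used in (2.22) can be verified in two lines; this is a short sketch, assuming without loss of generality that $\tau\leq 1$ so that $\tau^{\varsigma}\leq 1$.

```latex
% For y in B_tau(p): y_0 = p_0 and [y - p]_varsigma <= 1, hence
\begin{aligned}
[y]_{\varsigma} &\leq [p]_{\varsigma} + [y-p]_{\varsigma} \leq 1 + [p]_{\varsigma},\\
\|y\|_{\infty} &\leq |y_{0}| + [y]_{\varsigma}\,\tau^{\varsigma}
  \leq |p_{0}| + \big(1 + [p]_{\varsigma}\big)\tau^{\varsigma}
  \leq 1 + |p_{0}| + [p]_{\varsigma}.
\end{aligned}
% The last step uses tau <= 1, i.e. tau^varsigma <= 1.
```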
Next, we prove that $\Gamma_{\tau}$ is a contraction on
$\mathcal{B}_{\tau}(p)$. From the second condition in (2.19), it follows that
for two elements $y,\tilde{y}\in\mathcal{B}_{\tau}(p)$ we have
$[\Gamma(y)-\Gamma(\tilde{y})]_{\varsigma}\leq
2(1+[p]_{\varsigma})C(1+|p_{0}|+[p]_{\varsigma})[y-\tilde{y}]_{\varsigma}\tau^{\varepsilon},$
where we have used that $y_{0}=\tilde{y}_{0}$, and
$[y]_{\varsigma}\vee[\tilde{y}]_{\varsigma}\leq 1+[p]_{\varsigma}\quad{\rm
and}\quad\|y\|_{\infty}\vee\|\tilde{y}\|_{\infty}\leq
1+|p_{0}|+[p]_{\varsigma}.$
Again, choosing a parameter $\tau_{2}>0$ such that
$\tau_{2}\leq\left(4(1+[p]_{\varsigma})C(1+|p_{0}|+[p]_{\varsigma})\right)^{-\frac{1}{\varepsilon}},$
it follows that
$[\Gamma(y)-\Gamma(\tilde{y})]_{\varsigma;\tau_{2}}\leq\frac{1}{2}[y-\tilde{y}]_{\varsigma;\tau_{2}}.$
Since $\tau_{2}\leq\tau_{1}$, we conclude that the solution map
$\Gamma_{\tau_{2}}$ is both an invariant map and a contraction on the unit
ball $\mathcal{B}_{\tau_{2}}(p)$. It follows by the Banach fixed point
theorem that a unique solution to (2.21) exists in
$\mathcal{B}_{\tau_{2}}(p)$.
∎
The next theorem shows that if the locally bounded function $C$ appearing in
the conditions on $\Theta$ in (2.19) of Theorem 12 is uniformly bounded, then
there exists a unique global solution to (2.21).
###### Theorem 13 (Global existence and uniqueness).
Let
$\Theta:[0,T]\times\mathscr{C}^{\varsigma}_{T}E\rightarrow\mathscr{C}^{\varsigma}_{T}E$
satisfy (2.19) for a positive, globally bounded function $C$, i.e. there
exists a constant $M>0$ such that $\sup_{x\in\mathbb{R}_{+}}C(x)\leq M$.
Furthermore, suppose $\Theta$ is time-additive, in the sense that
$\Theta_{t}=\Theta_{s}+\Theta_{s,t}$ for any $s\leq t\in[0,T]$. Then for any
$p\in\mathscr{C}^{\varsigma}_{T}E$ there exists a unique solution
$y\in\mathscr{C}^{\varsigma}_{T}E$ to the equation
$y_{t}=p_{t}+\Theta(y)_{t},\quad t\in[0,T].$
###### Proof.
By Theorem 12 we know that there exists a unique solution to (2.21) on an
interval $[0,\tau]$, where $\tau$ satisfies (2.20), and $C$ is replaced by the
bounding constant $M$, i.e.
$\tau\leq\left[4(1+[p]_{\varsigma})M\right]^{-\frac{1}{\varepsilon}}.$ (2.23)
By a slight modification of the proof in Theorem 12 it is readily checked that
the existence and uniqueness of
$y_{t}=p_{t}+\Theta_{a,t}(y),\quad t\in[a,a+\tau],$
holds on any interval $[a,a+\tau]\subset[0,T]$, i.e. the solution is
constructed in $\mathcal{B}_{[a,a+\tau]}(p)$.
Now, we want to iterate solutions to (2.21) to the whole domain $[0,T]$ by "gluing
together" solutions on the intervals $[0,\tau],[\tau,2\tau],\ldots\subset[0,T]$.
Using the time-additivity property of $\Theta$, note that for
$t\in[\tau,2\tau]$, we have
$y_{t}=p_{t}+\Theta_{t}(y)=p_{t}+\Theta_{\tau}(y)+\Theta_{\tau,t}(y).$
Thus, set $\tilde{p}_{t}=p_{t}+\Theta_{\tau}(y|_{[0,\tau]})$, where
$y|_{[0,\tau]}$ denotes the solution to (2.21) restricted to $[0,\tau]$. Note
that
$[\tilde{p}]_{\varsigma}=[p]_{\varsigma},$
since the Hölder seminorm is invariant under the addition of constants, and
$t\mapsto\Theta_{\tau}(y|_{[0,\tau]})$ is constant. Therefore, there exists a
unique solution to (2.21) in
$\mathcal{B}_{[\tau,2\tau]}(\tilde{p})$, where $\tau$ is the same as in (2.23).
We can repeat this argument on all intervals $[k\tau,(k+1)\tau]\subset[0,T]$. At last,
invoking the scalability of Hölder norms (see [12, Exc. 4.24]), it follows
that there exists a unique solution to (2.21) in
$\mathscr{C}^{\varsigma}_{T}E$.
∎
## 3\. Averaged fields
We give here a quick overview of the concept of averaged fields and averaging
operators. We begin with the following definition:
###### Definition 14.
Let $\omega$ be a measurable path from $[0,T]$ to $\mathbb{R}^{d}$, and let
$g\in\mathcal{S}^{\prime}(\mathbb{R}^{d};\mathbb{R})$. We define the average
of $g$ against $\omega$ as the element of
$C^{0}\big{(}[0,T];\mathcal{S}^{\prime}(\mathbb{R}^{d};\mathbb{R})\big{)}$
defined for all $s\leq t\in[0,T]$ and all test functions $\phi$ by
$\big{\langle}\phi,T^{\omega}_{s,t}g\big{\rangle}=\int_{s}^{t}\langle\phi(\cdot-\omega_{r}),g\rangle\mathop{}\\!\mathrm{d}r.$
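To fix ideas, when $g$ is a bounded continuous function this pairing reduces to a classical integral; the following short computation (a change of variables followed by Fubini, both justified since $g$ is bounded) identifies the averaged field pointwise.

```latex
\big\langle\phi, T^{\omega}_{s,t}g\big\rangle
  = \int_{s}^{t}\int_{\mathbb{R}^{d}} \phi(x-\omega_{r})\,g(x)\,\mathrm{d}x\,\mathrm{d}r
  = \int_{\mathbb{R}^{d}} \phi(x) \int_{s}^{t} g(x+\omega_{r})\,\mathrm{d}r\,\mathrm{d}x,
% so that pointwise
T^{\omega}_{s,t}g(x) = \int_{s}^{t} g(x+\omega_{r})\,\mathrm{d}r .
```

This is the identity recorded rigorously in Proposition 18 below; for distributional $g$ the pairing itself serves as the definition.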
Introduced in the analysis of regularization by noise in [7], the concept of
averaged fields and averaging operators is by now a well studied topic. For
example, the recent analysis of Galeati and Gubinelli [15] provides a good
overview of the analytic properties in the context of regularization by noise.
See also [23, 16, 17] for further details on probabilistic and analytical
aspects of averaged fields and averaging operators, and their connection to
the concept of occupation measures. In the current article, we investigate
these operators from an infinite-dimensional perspective in order to apply
them in the context of (S)PDEs, and thus some extra considerations need to be
taken into account. In addition, we include in Section 5 a construction of the
averaged field associated to a fractional Lévy process. The regularizing
properties of Volterra-Lévy processes were recently investigated in [22], where
the authors constructed the averaged field using the concept of local times.
The construction given in the current article provides better regularity
properties of the resulting averaged field than those obtained in [22], but
comes at the cost of some generality. More precisely, when constructing the
averaged field associated to a measurable stochastic process
$\omega:\Omega\times[0,T]\rightarrow\mathbb{R}^{d}$ acting on a distribution
$b\in\mathcal{S}^{\prime}(\mathbb{R}^{d})$, the exceptional set
$\Omega^{\prime}\subset\Omega$ on which the function $T^{\omega}b$ is a
sufficiently regular field (say, Hölder in time and differentiable in space)
depends on $b\in\mathcal{S}^{\prime}(\mathbb{R}^{d})$, i.e.
$\Omega^{\prime}=\Omega^{\prime}(b)$. Thus choosing one construction or the
other depends on the problem at hand, and on which properties are important to
retain. We will not investigate these differences in more detail in this
article, and will view the analysis of the SPDE from a purely deterministic
point of view. Let us give the following statement which can be viewed as a
short summary of some of the results appearing in [15] and [23]:
###### Proposition 15.
There exists a $\delta$-Hölder continuous path
$\omega:[0,T]\rightarrow\mathbb{R}$ such that for any given
$g\in\mathcal{C}^{\eta}$ with $\eta>3-\frac{1}{2\delta}$, the corresponding
averaged field $T^{\omega}g$ is contained in
$\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}$ for some $\kappa\geq 3$ and
$\gamma>\frac{1}{2}$. Moreover, there exists a continuous
$\omega:[0,T]\rightarrow\mathbb{R}$ such that for any
$g\in\mathcal{S}^{\prime}(\mathbb{R})$, the averaged field $T^{\omega}g$ is
contained in $\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$ for some weight
$w:\mathbb{R}\rightarrow\mathbb{R}_{+}$ and any $\gamma\in(\frac{1}{2},1)$ and
any $\kappa\in\mathbb{R}$.
###### Proof.
The first statement can be seen as a simple version of [15, Thm. 1]. The
second is proven in [23, Prop. 24]. ∎
###### Remark 16.
In fact, the statement of [15, Thm. 1] is much stronger: given
$g\in\mathcal{C}^{\eta}$ with $\eta\in\mathbb{R}$ and $\delta>0$ such that
$\eta>3-\frac{1}{2\delta}$, then for almost all $\delta$-Hölder continuous
paths $\omega:[0,T]\rightarrow\mathbb{R}$, the averaged field
$T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{3}$ for some
$\gamma>\frac{1}{2}$. Similarly, it is stated that almost all continuous paths
$\omega$ are infinitely regularizing in the sense that for any
$g\in\mathcal{C}^{\eta}$ with $\eta\in\mathbb{R}$,
$T^{\omega}g\in\mathcal{C}^{\gamma}\mathcal{C}^{\kappa}$ for any
$\kappa\in\mathbb{R}$. The "almost surely" statement here is given through the
concept of prevalence. We refrain from writing Proposition 15 in the most
general way in order to avoid going into details regarding the concept of
prevalence here. We therefore refer the reader to [15] for more details on
this result and this concept.
###### Remark 17.
The statement in Proposition 15 can also be generalized to measurable paths
$\omega:[0,T]\rightarrow\mathbb{R}$. Indeed, in [22] the authors show that
there exists a class of measurable Volterra-Lévy processes which provides a
regularizing effect similar to that of Gaussian processes, and a statement
similar to that of Proposition 15 can be found there.
In all that follows we will use notions of (weighted) Besov spaces. For a
recap on weighted Lebesgue and Besov spaces, as well as on standard
(fractional) Schauder estimates, see Appendix Sections A.1 and A.2.
We continue with some properties which will be useful in later analysis.
###### Proposition 18.
Let $w$ be an admissible weight, $\kappa\in\mathbb{R}$ and $1\leq
p,q\leq+\infty$. Let $g\in B^{\kappa}_{p,q}(w)$. For all $j\geq-1$ and all
$0\leq s\leq t\leq T$,
$\Delta_{j}T^{\omega}_{s,t}g=T^{\omega}_{s,t}(\Delta_{j}g),$
where $\Delta_{j}$ denotes the standard Littlewood-Paley block (see Appendix
A.2).
In particular for all $\varepsilon,\delta>0$, for all $g\in
B^{\kappa}_{p,q}(w)$, and for $\mathcal{S}_{k}=\sum_{j=-1}^{k}\Delta_{j}$,
$T^{\omega}_{s,t}(\mathcal{S}_{k}g)\underset{k\to\infty}{\to}T^{\omega}_{s,t}g\quad\text{in}\quad\mathcal{S}^{\prime}\quad\text{and
in}\quad B^{\kappa-\varepsilon}_{p,q}(\langle\cdot\rangle^{\delta}w).$
Suppose that $g$ is a measurable locally bounded function; then
$T^{\omega}_{s,t}g$ is also a measurable function and one has
$T^{\omega}_{s,t}g(x)=\int_{s}^{t}g(x+\omega_{r})\mathop{}\\!\mathrm{d}r.$
###### Proof.
Recall that the topology on $\mathcal{S}$ is the one generated by the
family of semi-norms
$\mathcal{N}_{n}(\phi)=\sum_{|k|,|l|\leq
n}\sup_{x\in\mathbb{R}^{d}}|x^{k}\partial^{l}\phi(x)|,$
where $k$ and $l$ are multi-indices. Furthermore, for any distribution
$g\in\mathcal{S}^{\prime}$, there exist $n\geq 0$ and $C>0$ such that
$|\langle\phi,g\rangle|\leq C\mathcal{N}_{n}(\phi).$
Let $\omega$ be a measurable path from $[0,T]$ to $\mathbb{R}^{d}$, then since
$\phi$ is bounded, there is a constant $C_{1}>0$ such that
$\mathcal{N}_{n}\big{(}\phi(\cdot-\omega_{r})\big{)}\leq
C_{1}\mathcal{N}_{n}\big{(}\phi\big{)}.$
In particular
$\Delta_{j}T^{\omega}_{t}g(x)=\left\langle
K_{j}(x-\cdot),T^{\omega}_{t}g\right\rangle=\int_{0}^{t}\left\langle
K_{j}\big{(}(x+\omega_{r})-\cdot\big{)},g\right\rangle\mathop{}\\!\mathrm{d}r=\int_{0}^{t}\Delta_{j}g(x+\omega_{r})\mathop{}\\!\mathrm{d}r.$
Thanks to the previous remark, we may apply Fubini's theorem,
and for all $\phi\in\mathcal{S}$,
$\big{\langle}\phi,\Delta_{j}T^{\omega}_{t}g\big{\rangle}=\int_{\mathbb{R}^{d}}\phi(x)\int_{0}^{t}\Delta_{j}g(x+\omega_{r})\mathop{}\\!\mathrm{d}r\mathop{}\\!\mathrm{d}x=\int_{0}^{t}\int_{\mathbb{R}^{d}}\phi(x-\omega_{r})\Delta_{j}g(x)\mathop{}\\!\mathrm{d}x\mathop{}\\!\mathrm{d}r,$
which ends the proof by using properties of weighted-Besov spaces. ∎
For the purpose of this section, we will fix a distribution
$g\in\mathcal{S}^{\prime}(\mathbb{R}^{d})$ and a measurable path
$\omega:[0,T]\rightarrow\mathbb{R}^{d}$ with the property that there exists an
averaged field
$T^{\omega}g:[0,T]\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ defined as in
Definition 14 which is Hölder continuous in time and three times locally
differentiable in space. More specifically, we will assume that
$T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$ for some
admissible weight function
$w:\mathbb{R}^{d}\rightarrow\mathbb{R}_{+}\setminus\\{0\\}$.
Our first goal is to show how the averaged field $T^{\omega}g$ can be seen as a
function from $[0,T]\times\mathcal{C}^{\beta}\rightarrow\mathcal{C}^{\beta}$
for some $\beta\geq 0$.
###### Proposition 19.
Let $w$ be an admissible weight (see Section A.1). For a measurable path
$\omega:[0,T]\rightarrow\mathbb{R}^{d}$ and
$g\in\mathcal{S}^{\prime}(\mathbb{R}^{d})$, suppose
$T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$ for some
$\gamma>\frac{1}{2}$ and $\kappa\geq 3$. Then for all
$x,y\in\mathcal{C}^{\beta}$ with $\beta\in[0,1)$, we have that
$\displaystyle\|T^{\omega}_{s,t}g(x)\|_{\mathcal{C}^{\beta}}\vee\|\nabla
T^{\omega}_{s,t}g(x)\|_{\mathcal{C}^{\beta}}$
$\displaystyle\leq\sup_{|z|\leq\|x\|_{\mathcal{C}^{\beta}}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}|t-s|^{\gamma}$
$\displaystyle\|T^{\omega}_{s,t}g(x)-T^{\omega}_{s,t}g(y)\|_{\mathcal{C}^{\beta}}$
$\displaystyle\leq\sup_{|z|\leq\|x\|_{\mathcal{C}^{\beta}}\vee\|y\|_{\mathcal{C}^{\beta}}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}\|x-y\|_{\mathcal{C}^{\beta}}|t-s|^{\gamma}$
$\displaystyle\|\nabla T^{\omega}_{s,t}g(x)-\nabla
T^{\omega}_{s,t}g(y)\|_{\mathcal{C}^{\beta}}$
$\displaystyle\leq\sup_{|z|\leq\|x\|_{\mathcal{C}^{\beta}}\vee\|y\|_{\mathcal{C}^{\beta}}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}\|x-y\|_{\mathcal{C}^{\beta}}|t-s|^{\gamma}.$
###### Proof.
For $x\in\mathcal{C}^{\beta}$ note that for any $\xi\in\mathbb{R}^{d}$
$|T^{\omega}_{s,t}g(x(\xi))|\leq
w^{-1}(x(\xi))|w(x(\xi))T^{\omega}_{s,t}g(x(\xi))|\leq\sup_{|z|\leq\|x\|_{\mathcal{C}^{\beta}}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}|t-s|^{\gamma}.$
By similar computations using that $T^{\omega}g$ is (weighted) differentiable
in space, it is straightforward to verify that
$\|T^{\omega}_{s,t}g(x)\|_{\mathcal{C}^{\beta}}\vee\|\nabla
T^{\omega}_{s,t}g(x)\|_{\mathcal{C}^{\beta}}\leq\sup_{|z|\leq\|x\|_{\mathcal{C}^{\beta}}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}|t-s|^{\gamma}.$
Now consider $x,y\in\mathcal{C}^{\beta}$. Again using the differentiability of
$T^{\omega}g$, and elementary rules of calculus, we observe that for any
$\xi\in\mathbb{R}^{d}$
$|T^{\omega}_{s,t}g(x(\xi))-T^{\omega}_{s,t}g(y(\xi))|\leq\sup_{|z|\leq\|x\|_{\mathcal{C}^{\beta}}\vee\|y\|_{\mathcal{C}^{\beta}}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}\|x-y\|_{\mathcal{C}^{\beta}}|t-s|^{\gamma},$
where we have used that $|x(\xi)-y(\xi)|\leq\|x-y\|_{\mathcal{C}^{\beta}}$ for
all $\xi\in\mathbb{R}^{d}$. At last, for $\xi,\xi^{\prime}\in\mathbb{R}^{d}$,
observe that
$\displaystyle
T^{\omega}_{s,t}g(x(\xi))-T^{\omega}_{s,t}g(y(\xi))-T^{\omega}_{s,t}g(x(\xi^{\prime}))+T^{\omega}_{s,t}g(y(\xi^{\prime}))$
$\displaystyle=\int_{0}^{1}\nabla
T^{\omega}_{s,t}g\Big{(}l\big{(}x(\xi)-y(\xi)\big{)}+y(\xi)\Big{)}\big{(}x(\xi)-x(\xi^{\prime})-y(\xi)+y(\xi^{\prime})\big{)}\mathop{}\\!\mathrm{d}l$
$\displaystyle+\int_{0}^{1}\int_{0}^{1}D^{2}T^{\omega}_{s,t}g\Big{(}\Lambda(l,l^{\prime})\Big{)}\Big{(}l\big{(}x(\xi)-x(\xi^{\prime})\big{)}+(1-l)\big{(}y(\xi)-y(\xi^{\prime})\big{)}\Big{)}\otimes\big{(}x(\xi)-y(\xi)\big{)}\mathop{}\\!\mathrm{d}l^{\prime}\mathop{}\\!\mathrm{d}l,$
with
$\Lambda(l,l^{\prime})=l^{\prime}\Big{(}l\big{(}x(\xi)-x(\xi^{\prime})\big{)}+(1-l)\big{(}y(\xi)-y(\xi^{\prime})\big{)}\Big{)}+lx(\xi^{\prime})+(1-l)y(\xi^{\prime}).$
Since $T^{\omega}g$ is twice (weighted) differentiable, it follows that
$\|T^{\omega}_{s,t}g(x)-T^{\omega}_{s,t}g(y)\|_{\mathcal{C}^{\beta}}\leq\sup_{|z|\leq\|x\|_{\mathcal{C}^{\beta}}\vee\|y\|_{\mathcal{C}^{\beta}}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}\|x-y\|_{\mathcal{C}^{\beta}}|t-s|^{\gamma}.$
A similar argument for $\nabla T^{\omega}g$, using that $\kappa\geq 3$,
reveals that
$\|\nabla T^{\omega}_{s,t}g(x)-\nabla
T^{\omega}_{s,t}g(y)\|_{\mathcal{C}^{\beta}}\leq\sup_{|z|\leq\|x\|_{\mathcal{C}^{\beta}}\vee\|y\|_{\mathcal{C}^{\beta}}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}\|x-y\|_{\mathcal{C}^{\beta}}|t-s|^{\gamma},$
which concludes this proof. ∎
## 4\. Existence and uniqueness of the mSHE
We are now ready to formulate the multiplicative stochastic heat equation
using the abstract non-linear Young integral and the averaged field
$T^{\omega}g$ introduced in the previous section.
### 4.1. Standard multiplicative Stochastic Heat equation with additive noise
Recall that the mSHE with additive (time)-noise is given in its mild form by
$u_{t}=P_{t}u_{0}+\int_{0}^{t}P_{t-s}\xi
g(u_{s})\mathop{}\\!\mathrm{d}s+\omega_{t},\quad u_{0}\in\mathcal{C}^{\beta},$
(4.1)
where $\xi\in\mathcal{C}^{-\vartheta}$ for some
$0\leq\vartheta<\beta<2-\vartheta$, $P$ denotes the standard heat semi-group
acting on functions $u$ through convolution, and
$\omega:[0,T]\rightarrow\mathbb{R}$ is a measurable path. We will see in this
section that $g$ can be chosen to be distributional given that $\omega$ is
sufficiently irregular.
As is commonly done for pathwise regularization by noise for SDEs (e.g. [7]),
we begin by considering (4.1) with a smooth function $g$: setting
$\theta=u-\omega$, we then study the integral equation
$\theta_{t}=P_{t}u_{0}+\int_{0}^{t}P_{t-s}\xi
g(\theta_{s}+\omega_{s})\mathop{}\\!\mathrm{d}s.$ (4.2)
The integral term can be written in the form of a non-linear Young–Volterra
integral. To motivate this, we begin with the following observation: Define
the linear operator $S_{t}=P_{t}\xi$, whose action is defined by
$S_{t}f:=\int_{\mathbb{R}}P_{t}(x-y)\xi(y)f(y)\mathop{}\\!\mathrm{d}y.$ (4.3)
For $\beta>\vartheta$ the classical Schauder estimates for the heat equation
tells us that
$\|S_{t}f\|_{\mathcal{C}^{\beta}}=\|S_{t}f\|_{\mathcal{C}^{-\vartheta+2\frac{\beta+\vartheta}{2}}}\lesssim
t^{-\frac{\beta+\vartheta}{2}}\|\xi f\|_{\mathcal{C}^{-\vartheta}}\lesssim
t^{-\frac{\beta+\vartheta}{2}}\|\xi\|_{\mathcal{C}^{-\vartheta}}\|f\|_{\mathcal{C}^{\beta}},$
where in the last estimate we have used that the product between the
distribution $\xi\in\mathcal{C}^{-\vartheta}$ and the function
$f\in\mathcal{C}^{\beta}$ is well defined since $\beta>\vartheta$. Thus we can view $S$ as a
bounded linear operator from $\mathcal{C}^{\beta}$ to itself for any
$\beta>\vartheta$. This motivates the next proposition:
###### Proposition 20.
If $\vartheta\geq 0$ and $\beta>\vartheta$, then the operator $S$ defined in
(4.3) is a linear operator on $\mathcal{C}^{\beta}$ and satisfies Hypothesis 9
with singularity $\rho=\frac{\beta+\vartheta}{2}$.
###### Proof.
This follows from the properties of the heat kernel proven in Corollary 45 in
Section A.3, and the fact that the para-product $\xi f$ satisfies the bound
$\|\xi
f\|_{\mathcal{C}^{-\vartheta}}\leq\|\xi\|_{\mathcal{C}^{-\vartheta}}\|f\|_{\mathcal{C}^{\beta}}$
due to the assumption that $\beta>\vartheta$. See [2] for more details on the
para-product in Besov spaces. ∎
The integral in (4.2) can therefore be written as
$\int_{0}^{t}P_{t-s}\xi
g(\theta_{s}+\omega_{s})\mathop{}\\!\mathrm{d}s=\int_{0}^{t}S_{t-s}g(\theta_{s}+\omega_{s})\mathop{}\\!\mathrm{d}s.$
Furthermore, the classical Volterra integral on the right hand side can be
written in terms of the averaged field $T^{\omega}g$, in the sense that
$\int_{0}^{t}S_{t-s}g(\theta_{s}+\omega_{s})\mathop{}\\!\mathrm{d}s=\int_{0}^{t}S_{t-s}T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(\theta_{s}),$
where for a partition $\mathcal{P}$ of $[0,t]$ with infinitesimal mesh, we
define
$\int_{0}^{t}S_{t-s}T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(\theta_{s}):=\lim_{|\mathcal{P}|\rightarrow
0}\sum_{[u,v]\in\mathcal{P}}S_{t-u}T^{\omega}_{u,v}g(\theta_{u}).$
It is not difficult to see (and we will rigorously prove it later) that when
$g$ is smooth, the above definition agrees with the classical Riemann
definition of the integral. However, the advantage with this formulation of
the integral is that $T^{\omega}g$ might still make sense as a function when
$g$ is only a distribution, and thus we will see that this formulation allows
for an extension of the concept of integration to distributional coefficients
$g$.
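To make the consistency claim concrete for smooth $g$, here is a sketch of the comparison between the two Riemann sums (assuming, for this sketch only, that $g$ is bounded and Lipschitz):

```latex
% For continuous g: T^omega_{u,v}g(theta_u) = int_u^v g(theta_u + omega_r) dr, hence
\sum_{[u,v]\in\mathcal{P}} S_{t-u}T^{\omega}_{u,v}g(\theta_{u})
  - \int_{0}^{t} S_{t-s}\,g(\theta_{s}+\omega_{s})\,\mathrm{d}s
  = \sum_{[u,v]\in\mathcal{P}} \int_{u}^{v}
    \big( S_{t-u}\,g(\theta_{u}+\omega_{r}) - S_{t-r}\,g(\theta_{r}+\omega_{r}) \big)\,\mathrm{d}r .
```

Each difference is controlled by the Hölder continuity of $\theta$ and the regularity of $r\mapsto S_{t-r}$ away from $r=t$, so the two expressions agree in the limit $|\mathcal{P}|\rightarrow 0$; the point of the left-hand formulation is that it survives when $g$ is merely a distribution, as long as $T^{\omega}g$ is a function.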
###### Proposition 21.
Consider a measurable path $\omega:[0,T]\rightarrow\mathbb{R}$ and
$g\in\mathcal{S}^{\prime}(\mathbb{R})$, and suppose
$T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$ for some
$\gamma>\frac{1}{2}$ and $\kappa\geq 3$. For some $0<\beta<1$, suppose
$S:[0,T]\rightarrow\mathcal{L}(\mathcal{C}^{\beta})$ satisfies Hypothesis 9
for some $0\leq\rho<1$ such that $\gamma-\rho>1-\gamma$. Take
$\gamma-\rho>\varsigma>1-\gamma$. Then for any
$y\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$, the integral
$\Theta(y)_{t}:=\int_{0}^{t}S_{t-s}T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(y_{s}):=\lim_{|\mathcal{P}|\rightarrow
0}\sum_{[u,v]\in\mathcal{P}}S_{t-u}T^{\omega}_{u,v}g(y_{u})$ (4.4)
exists as a non-linear Young-Volterra integral according to Lemma 10.
###### Proof.
Let $E:=\mathcal{C}^{\beta}$. Since
$T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$ for some
$\gamma>\frac{1}{2}$ and $\kappa\geq 3$, it follows from Proposition 19 that
$T^{\omega}g$ can be seen as a function from $[0,T]\times E$ to $E$ that
satisfies (i)-(ii) in Lemma 10 with $H(x):=\sup_{|z|\leq
x}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}$.
Lemma 10 then implies that (4.4) exists, and satisfies the bound in (2.3). ∎
As an immediate consequence of the above proposition and Proposition 11, we
obtain the following proposition:
###### Proposition 22.
Under the same assumptions as in Proposition 21, for two paths
$x,y\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$ we have that
$[\Theta(y)-\Theta(x)]_{\varsigma}\lesssim_{P,\xi}\Big{\\{}\sup_{z\leq\|x\|_{\infty}\vee\|y\|_{\infty}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}\Big{\\}}\left(\|x_{0}-y_{0}\|_{\mathcal{C}^{\beta}}+[x-y]_{\varsigma}\right).$
Moreover, let $\\{g_{n}\\}_{n\in\mathbb{N}}$ be a sequence of smooth functions
converging to $g\in\mathcal{S}^{\prime}(\mathbb{R})$ such that
$T^{\omega}g_{n}\rightarrow
T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$. Then the
sequence of integral operators $\\{\Theta_{n}\\}_{n\in\mathbb{N}}$ built from
$T^{\omega}g_{n}$ converge to $\Theta$, in the sense that for any
$y\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$
$\lim_{n\rightarrow\infty}\|\Theta_{n}(y)-\Theta(y)\|_{\mathcal{C}^{\varsigma}_{T}\mathcal{C}^{\beta}}=0,$
for any $1-\gamma<\varsigma<\gamma-\rho$.
###### Proof.
A combination of Proposition 21 and Proposition 11 gives the first claim. For
the second, consider the field
$X^{n}_{s,t}(x)=\left(T^{\omega}_{s,t}g_{n}-T^{\omega}_{s,t}g\right)(x)$. Then
$X^{n}$ satisfies conditions (i)-(ii) of Lemma 10 with $H(x)=\sup_{|z|\leq
x}w^{-1}(z)\|T^{\omega}g_{n}-T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}$.
Thus by Proposition 11 it follows that the integral operator
$\Theta^{X^{n}}:\mathscr{C}^{\varsigma}\mathcal{C}^{\beta}\rightarrow\mathscr{C}^{\varsigma}\mathcal{C}^{\beta}$
built from $X^{n}$ satisfies for any
$y\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$
$\|\Theta^{X^{n}}(y)\|_{\mathcal{C}^{\varsigma}_{T}\mathcal{C}^{\beta}}\lesssim_{S,\xi}\sup_{|z|\leq\|y\|_{\infty}}w^{-1}(z)\|T^{\omega}g_{n}-T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}(1+[y]_{\varsigma}).$
Thus taking the limits when $n\rightarrow\infty$, it follows that
$\|\Theta^{X^{n}}(y)\|_{\mathcal{C}^{\varsigma}_{T}\mathcal{C}^{\beta}}\rightarrow
0$ due to the assumption that
$\|T^{\omega}g_{n}-T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}\rightarrow
0$ when $n\rightarrow\infty$. ∎
We are now ready to prove existence and uniqueness of solutions to (4.2) in
the non-linear Volterra-Young formulation, through an application of the
abstract existence and uniqueness results for equations of the form
$\theta_{t}=p_{t}+\Theta(\theta)_{t},\quad t\in[0,T],$
proven in Theorem 12. In combination with Propositions 21 and 22, we consider
$p_{t}=P_{t}\psi$ for some $\psi\in\mathcal{C}^{\beta}$, and let $\Theta$ be the
integral operator constructed in (4.4) with $S_{t}=P_{t}\xi$ for some
$\xi\in\mathcal{C}^{-\vartheta}$ with $\beta>\vartheta$.
###### Theorem 23.
Consider a measurable path $\omega:[0,T]\rightarrow\mathbb{R}$ and
$g\in\mathcal{S}^{\prime}(\mathbb{R})$, and suppose
$T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$ for some
$\gamma>\frac{1}{2}$ and $\kappa\geq 3$.
Let $0<\vartheta<1$ and $\vartheta<\beta<2-\vartheta$. Let
$\xi\in\mathcal{C}^{-\vartheta}$ and $\rho=\frac{\beta+\vartheta}{2}$ and
suppose that $1-\gamma<\gamma-\rho$. Suppose $\psi\in\mathcal{C}^{\beta}$.
Then there exists a time $\tau\in(0,T]$ such that there exists a unique
$\theta\in\mathcal{B}_{\tau}(P\psi)\subset\mathscr{C}^{\varsigma}_{\tau}\mathcal{C}^{\beta}$
which solves the non-linear Young equation
$\theta_{t}=P_{t}\psi+\int_{0}^{t}P_{t-s}\xi
T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(\theta_{s}),\quad t\in[0,\tau],$ (4.5)
for any $\gamma-\rho>\varsigma>1-\gamma$, where $S_{t}:=P_{t}\xi$ is as defined
in Proposition 20, and the integral is understood in the sense of Proposition 21.
There exists a $C=C({P,\xi})>0$ such that the solution $\theta$ satisfies the
following bound
$[\theta]_{\varsigma;\tau}\leq C\left([P\psi]_{\varsigma,\tau}+2\sup_{|z|\leq
1+\|\psi\|_{\mathcal{C}^{\beta}}+[P\psi]_{\varsigma;\tau}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}}\right).$
(4.6)
Moreover, if $w\simeq 1$, then there exists a unique
$\theta\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$ which is a global
solution, in the sense that $\tau=T$.
###### Proof.
By Proposition 20, it follows that $S_{t}:=P_{t}\xi$ is a linear operator on
$\mathcal{C}^{\beta}$ since $\beta>\vartheta$. It then follows from
Proposition 21 that the non-linear Young integral
$\Theta(y)_{t}:=\int_{0}^{t}S_{t-s}T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(y_{s}),$
exists as a map
$\Theta:[0,T]\times\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}\rightarrow\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$,
and we have that for any $y\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$
with $\gamma-\rho>\varsigma>1-\gamma$ there exists a constant $C>0$ such that
$[\Theta(y)]_{\varsigma}\leq
C\sup_{|z|\leq\|y\|_{\infty}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}(1+[y]_{\varsigma})T^{\gamma-\rho-\varsigma}.$
(4.7)
Moreover by Proposition 22 we have that for two paths
$x,y\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$ there exists a constant
$C=C(S,\xi)>0$ such that
$[\Theta(x)-\Theta(y)]_{\varsigma}\leq
C\sup_{|z|\leq\|x\|_{\infty}\vee\|y\|_{\infty}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}\left(\|x_{0}-y_{0}\|_{\mathcal{C}^{\beta}}+[x-y]_{\varsigma}\right)T^{\gamma-\rho-\varsigma}.$
(4.8)
Thus by Theorem 12, for any $\tau>0$ such that
$\tau\leq\left[4(1+[P\psi]_{\varsigma})C\sup_{|z|\leq
1+\|\psi\|_{\mathcal{C}^{\beta}}+[P\psi]_{\varsigma}}w^{-1}(z)\right]^{-\frac{1}{\gamma-\rho-\varsigma}},$
there exists a unique
$\theta\in\mathscr{C}^{\varsigma}([0,\tau];\mathcal{C}^{\beta}(\mathbb{R}))$
for $1-\gamma<\varsigma<\gamma-\rho$ which satisfies (4.5). Thus local
existence and uniqueness holds for (4.5).
We now move on to prove the bound in (4.6). First observe that
$[\theta]_{\varsigma;\tau}\leq[P\psi]_{\varsigma,\tau}+[\Theta(\theta)]_{\varsigma;\tau}.$
From Lemma 10, we know that there exists a $C=C(P,\xi)>0$ such that
$[\Theta(\theta)]_{\varsigma;\tau}\leq
C\sup_{|z|\leq\|\theta\|_{\infty}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}}(1+[\theta]_{\varsigma;\tau})\tau^{\gamma-(\rho+\varsigma)}$
Recall from Theorem 12 that $\tau>0$ is chosen such that $\theta$ is contained
in the unit ball centred at $P\psi$, i.e.
$\theta\in\mathcal{B}_{\tau}(P\psi)$, and thus
$[\theta]_{\varsigma;\tau}\leq[P\psi]_{\varsigma;\tau}+1$, which yields
$[\theta]_{\varsigma;\tau}\leq[P\psi]_{\varsigma,\tau}+2\sup_{|z|\leq
1+\|\psi\|_{\mathcal{C}^{\beta}}+[P\psi]_{\varsigma;\tau}}w^{-1}(z)\|T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}}.$
At last, if $w\simeq 1$, then for any
$x,y\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$, there exists an $M>0$
such that
$C\sup_{|z|\leq\|x\|_{\infty}\vee\|y\|_{\infty}}w^{-1}(z)\leq M,$
where $C$ is the largest of the constants in (4.7) and (4.8). By Theorem 13 it
follows that a global solution exists, i.e. there exists a unique
$\theta\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$ satisfying (4.5). ∎
With the above theorem at hand, we are ready to prove Theorem 2 and Corollary
3.
###### Proof of Theorem 2.
From Theorem 23, we know that there exists a $\tau>0$ such that there exists a
unique $\theta\in\mathscr{C}^{\varsigma}_{\tau}\mathcal{C}^{\beta}$ which
solves (4.5). Thus we say that there exists a unique local solution $u$ in the
sense of Definition 1. As proven in Theorem 23, there exists a global solution
if $T^{\omega}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)$ with
$w\simeq 1$. This concludes the proof of Theorem 2. ∎
###### Proof of Corollary 3.
Let $\\{g_{n}\\}$ be a sequence of smooth functions converging to $g$ such
that $T^{\omega}g_{n}$ converges to $T^{\omega}g$ in
$\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}$, and define the field
$Y^{n}_{s,t}(x)=(t-s)g_{n}(x+\omega_{s})$. It is readily checked that $Y^{n}$
satisfies (i)-(ii) of Lemma 10, and thus $\Theta^{Y^{n}}$ is an integral
operator from
$\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}\rightarrow\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$.
Furthermore, it follows from Theorem 12 together with the same reasoning as in
the proof of Theorem 23 that there exists a unique solution $\theta^{n}$ to
the equation
$\theta^{n}_{t}=P_{t}\psi+\Theta^{Y^{n}}(\theta^{n})_{t}.$ (4.9)
In particular,
$\|\theta^{n}\|_{\mathscr{C}^{\varsigma}_{\tau}\mathcal{C}^{\beta}}<\infty$
for all $n\in\mathbb{N}$. Note that then $\Theta^{Y^{n}}$ corresponds to the
classical Riemann integral on $\mathcal{C}^{\beta}$. Indeed, since $g_{n}$ is
smooth, the integral $\Theta^{Y^{n}}$ is given by
$\Theta^{Y^{n}}(y)_{t}:=\int_{0}^{t}S_{t-s}g_{n}(y_{s}+\omega_{s})\mathop{}\\!\mathrm{d}s,$
where we recall that $S_{t}=P_{t}\xi$. Furthermore, it is readily checked that
$\Theta^{Y^{n}}$ agrees with the integral operator $\Theta^{T^{\omega}g_{n}}$,
since, $g_{n}$ being smooth,
$T^{\omega}_{s,t}g_{n}(x)=\int_{s}^{t}g_{n}(x+\omega_{r})\mathop{}\\!\mathrm{d}r$
is differentiable in time. Thus note that the difference between the two
approximating integrals
approximating integrals
$\displaystyle\Theta^{Y^{n}}_{\mathcal{P}}(y)_{t}=\sum_{[u,v]\in\mathcal{P}[0,t]}S_{t-u}g_{n}(y_{u}+\omega_{u})(v-u)$
$\displaystyle\Theta^{T^{\omega}g_{n}}_{\mathcal{P}}(y)_{t}=\sum_{[u,v]\in\mathcal{P}[0,t]}S_{t-u}\int_{u}^{v}g_{n}(y_{u}+\omega_{r})\mathop{}\\!\mathrm{d}r$
can be estimated by
$\left\|\sum_{[u,v]\in\mathcal{P}}S_{t-u}\int_{u}^{v}g_{n}(y_{u}+\omega_{u})-g_{n}(y_{u}+\omega_{r})\mathop{}\\!\mathrm{d}r\right\|_{\mathcal{C}^{\beta}}\rightarrow
0$
when $|\mathcal{P}|\rightarrow 0$. Therefore
$\Theta^{Y^{n}}\equiv\Theta^{T^{\omega}g_{n}}$. Let
$\theta\in\mathscr{C}^{\varsigma}_{\tau}\mathcal{C}^{\beta}$ be the solution
to (4.5) constructed from $T^{\omega}g$ with integral operator $\Theta$. We
now investigate the difference between $\theta$ and $\theta^{n}$ as found in
(4.9). It is readily checked that
$[\theta^{n}-\theta]_{\varsigma}\leq[\Theta(\theta)-\Theta^{T^{\omega}g_{n}}(\theta^{n})]_{\varsigma}+[\Theta^{T^{\omega}g_{n}}(\theta^{n})-\Theta^{Y^{n}}(\theta^{n})]_{\varsigma}.$
As already argued, the last term on the right-hand side is equal to zero, and
we are therefore left to show that the first term on the right-hand side
converges to zero. Invoking Proposition 11 (in particular (2.17)) and following
along the lines of the proof of Proposition 22, it is readily checked that
there exists a $C=C(P,\xi)>0$ such that
$[\theta^{n}-\theta]_{\varsigma;\tau}\leq
C\sup_{|z|\leq\|\theta\|_{\infty}\vee\|\theta^{n}\|_{\infty}}w^{-1}(z)(\|T^{\omega}g_{n}-T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}(1+[\theta]_{\varsigma;\tau}\vee[\theta^{n}]_{\varsigma;\tau})+[\theta-\theta^{n}]_{\varsigma;\tau})\tau^{\gamma-\rho-\varsigma}$
We can now choose a parameter $\bar{\tau}>0$ sufficiently small, such that
$[\theta^{n}-\theta]_{\varsigma;\bar{\tau}}\leq
2\sup_{|z|\leq\|\theta\|_{\infty}\vee\|\theta^{n}\|_{\infty}}w^{-1}(z)\|T^{\omega}g_{n}-T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}(1+[\theta]_{\varsigma;\tau}\vee[\theta^{n}]_{\varsigma;\tau}).$
From (4.6) we know that there exists a constant $M>0$ such that
$\|\theta^{n}\|_{\infty}\vee\|\theta\|_{\infty}\vee[\theta^{n}]_{\varsigma;\tau}\vee[\theta]_{\varsigma;\tau}\leq
M,$
and it follows that
$[\theta^{n}-\theta]_{\varsigma;\bar{\tau}}\leq 2\sup_{|z|\leq
M}w^{-1}(z)\|T^{\omega}g_{n}-T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}(1+M).$
(4.10)
The above inequality can, by similar arguments, be proven to hold for any sub-
interval $[a,a+\bar{\tau}]\subset[0,\tau]$ and thus we conclude by [12, Exc.
4.24] that there exists a constant $C=C(P,\xi,M,\bar{\tau},w)>0$ such that
$[\theta^{n}-\theta]_{\varsigma;\tau}\leq
C\|T^{\omega}g_{n}-T^{\omega}g\|_{\mathcal{C}^{\gamma}_{T}\mathcal{C}^{\kappa}(w)}.$
Taking the limit when $n\rightarrow\infty$ it follows that
$\theta^{n}\rightarrow\theta$ due to the assumption that
$T^{\omega}g_{n}\rightarrow T^{\omega}g$. If the solution is global, i.e.
$w\simeq 1$, then the same arguments as above hold, the inequality in
(4.10) can be proven to hold on any sub-interval
$[a,a+\bar{\tau}]\subset[0,T]$, and thus
$\theta^{n}\rightarrow\theta$ in
$\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$. ∎
### 4.2. The drifted multiplicative SHE
So far we have proven existence and uniqueness of solutions to equations of
the form
$u_{t}=P_{t}\psi+\int_{0}^{t}P_{t-s}\xi
g(u_{s})\mathop{}\\!\mathrm{d}s+\omega_{t},\quad
t\in[0,T],\,\,u_{0}=\psi\in\mathcal{C}^{\beta}(\mathbb{R}).$
That is, we have throughout the text assumed that
$\xi\in\mathcal{C}^{-\vartheta}$ with $\vartheta<\beta$ is a multiplicative
spatial noise. More generally, a drift term is frequently included in the
above equation as well, such that
$u_{t}=P_{t}\psi+\int_{0}^{t}P_{t-s}b(u_{s})\mathop{}\\!\mathrm{d}s+\int_{0}^{t}P_{t-s}\xi
g(u_{s})\mathop{}\\!\mathrm{d}s+\omega_{t},\quad
t\in[0,T],\,\,u_{0}=\psi\in\mathcal{C}^{\beta}(\mathbb{R}).$
Our results extend easily to this type of drifted equation as well, as long
as $T^{\omega}b\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{3}(w)$. Indeed, again
setting $\theta=u-\omega$, we consider the non-linear Young-Volterra equation
$\theta_{t}=P_{t}\psi+\int_{0}^{t}P_{t-s}T^{\omega}_{\mathop{}\\!\mathrm{d}s}b(\theta_{s})+\int_{0}^{t}P_{t-s}\xi
T^{\omega}_{\mathop{}\\!\mathrm{d}s}g(\theta_{s}).$ (4.11)
It is straightforward to construct the non-linear Young-Volterra integral of
the form
$\int_{0}^{t}P_{t-s}T^{\omega}_{\mathop{}\\!\mathrm{d}s}b(\theta_{s})$. Since
this term contains no multiplicative noise $\xi$, the linear operator $P_{t}$ on
$\mathcal{C}^{\beta}$ is not singular in its time argument. An application of
Lemma 10 then allows us to construct
$y\mapsto\Theta^{b}(y):=\int_{0}^{t}P_{t-s}T^{\omega}_{\mathop{}\\!\mathrm{d}s}b(y_{s})$
as an operator from $\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$ into
itself. The second integral is constructed as before as an operator from
$\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$ into itself; let us denote
this by $\Theta^{g}$. Then (4.11) can be written as the abstract equation
$\theta_{t}=P_{t}\psi+\Theta(\theta)_{t},$
where $\Theta(y):=\Theta^{b}(y)+\Theta^{g}(y)$ for
$y\in\mathscr{C}^{\varsigma}_{T}\mathcal{C}^{\beta}$. An application of
Theorem 12 then provides local existence and uniqueness. If both $T^{\omega}b$
and $T^{\omega}g$ are contained in some unweighted Besov spaces, global
existence and uniqueness also holds, as proven in Theorem 13.
## 5\. Averaged fields with Lévy noise
In this section we construct the averaged field associated to a
fractional Lévy process and prove its regularizing properties. In contrast
to the works of [22], we will not consider the regularity of the local time
associated with a process in order to obtain the regularizing effect, but
estimate the regularity of the averaged field $T^{\omega}b$ directly, based on
probabilistic techniques developed in [7]. This has the benefit that it
improves the regularity of the averaged field, as discussed in the beginning
of Section 3.
### 5.1. Linear fractional stable motion
In this section we introduce a class of measurable processes which have the
regularization property and allow us to deal with our equations.
###### Definition 24.
Let $\alpha\in(0,2]$. We say that $L$ is a $d$-dimensional symmetric
$\alpha$-stable Lévy process if
* (i)
$L_{0}=0$,
* (ii)
$L$ has independent and stationary increments,
* (iii)
$L$ is càdlàg, and
* (iv)
there exists a constant $c_{\alpha}>0$ such that for all $\lambda\in\mathbb{R}^{d}$,
$\mathbb{E}[\exp(i\lambda\cdot L_{t})]=e^{-c_{\alpha}|\lambda|^{\alpha}t}.$
Following [36], we define a fractional process with respect to a stable Lévy
process as follows:
###### Definition 25.
Let $\alpha\in(0,2]$ and $L,\tilde{L}$ be two independent $d$-dimensional
$\alpha$-stable symmetric Lévy processes. Let
$H\in(0,1)\backslash\\{\alpha^{-1}\\}$. For all $t\in\mathbb{R}_{+}$, we
define the $\alpha$-linear fractional stable motion ($\alpha$-LFSM) of Hurst
parameter $H$ by
$L^{H}_{t}=\int_{0}^{t}(t-v)^{H-\frac{1}{\alpha}}\mathop{}\\!\mathrm{d}L_{v}+\int_{0}^{+\infty}\left((t+v)^{H-\frac{1}{\alpha}}-v^{H-\frac{1}{\alpha}}\right)\mathop{}\\!\mathrm{d}\tilde{L}_{v}.$
We extend this definition to $H=\frac{1}{\alpha}$ by setting
$L^{\frac{1}{\alpha}}=L$.
The existence of the LFSM is proved in [36], Chapter 3. Note that when
$\alpha=2$, we recover the standard fractional Brownian motion (up to a
multiplicative constant). Here we have chosen the moving average
representation; in the fractional Brownian motion case one could also use,
e.g., the harmonizable representation. Note that when $\alpha<2$, these
different representations are no longer equivalent.
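The moving average representation lends itself to direct simulation. The rough sketch below discretizes both integrals by Riemann sums in the Gaussian case $\alpha=2$ (where the driving increments are normal); for $\alpha<2$ one would replace them by symmetric $\alpha$-stable increments, which we do not implement here. The grid sizes and the truncation level `v_max` of the second integral are arbitrary numerical choices.

```python
import numpy as np

# Riemann-sum sketch of the moving-average representation of the LFSM,
#   L^H_t = int_0^t (t-v)^{H-1/alpha} dL_v
#           + int_0^infty ((t+v)^{H-1/alpha} - v^{H-1/alpha}) d(tilde L)_v,
# in the Gaussian case alpha = 2. The second integral is truncated at v_max.

rng = np.random.default_rng(0)

def lfsm_path(H, alpha=2.0, T=1.0, n=400, v_max=20.0):
    dt = T / n
    t = np.arange(1, n + 1) * dt
    dL = rng.standard_normal(n) * dt ** (1.0 / alpha)   # increments of L on [0, T]
    m = int(v_max / dt)
    dLt = rng.standard_normal(m) * dt ** (1.0 / alpha)  # increments of tilde L
    v = np.arange(1, m + 1) * dt
    path = np.empty(n)
    for i, ti in enumerate(t):
        k1 = (ti - np.arange(i + 1) * dt) ** (H - 1.0 / alpha)
        k2 = (ti + v) ** (H - 1.0 / alpha) - v ** (H - 1.0 / alpha)
        path[i] = k1 @ dL[: i + 1] + k2 @ dLt
    return t, path

# Here H > 1/alpha, so the kernel (t-v)^{H-1/alpha} stays bounded near v = t.
t, x = lfsm_path(H=0.75)
print(x[-1])
```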
We gather here some properties about the LFSM which will become useful in the
subsequent analysis of the regularity of averaged fields associated to LFSM.
One can again consult [36], but also [29] for the first three points. The last
point is a direct consequence of the definition of $L^{H}$.
###### Proposition 26.
Let $\alpha\in(0,2]$ and let $H\in(0,1)\cup\\{\alpha^{-1}\\}$, and let $L^{H}$
be an $\alpha$-LFSM.
* (i)
$L^{H}$ is almost surely measurable.
* (ii)
$L^{H}$ is continuous if and only if $\alpha=2$ or $H>\frac{1}{\alpha}$.
* (iii)
$L^{H}$ is $H$ self-similar and its increments are stationary.
* (iv)
For all $t\geq 0$ define
$\mathcal{F}_{t}=\sigma\left(\\{\tilde{L}_{\tilde{t}}\,:\,\tilde{t}\in\mathbb{R}_{+}\\}\cup\\{L_{s}\,:\,s\leq
t\\}\right)$. Then $L^{H}$ is $(\mathcal{F}_{t})_{t\geq 0}$-adapted.
* (v)
For all $0\leq s\leq r$, we have
$L^{H}_{r}=L^{1,H}_{s,r}+L^{2,H}_{s,r},$
with
$L^{1,H}_{s,r}=\int_{s}^{r}(r-v)^{H-\frac{1}{\alpha}}\mathop{}\\!\mathrm{d}L_{v}$
and
$L^{2,H}_{s,r}=\int_{0}^{s}(r-v)^{H-\frac{1}{\alpha}}\mathop{}\\!\mathrm{d}L_{v}+\int_{0}^{+\infty}(r+v)^{H-\frac{1}{\alpha}}-v^{H-\frac{1}{\alpha}}\mathop{}\\!\mathrm{d}\tilde{L}_{v}.$
Hence, $L^{1,H}_{s,r}$ is independent of $\mathcal{F}_{s}$ while
$L^{2,H}_{s,r}$ is measurable with respect to $\mathcal{F}_{s}$, and for all
$\lambda\in\mathbb{R}^{d}$
$\mathbb{E}[e^{i\lambda\cdot
L^{1,H}_{s,r}}]=e^{-c_{\alpha}|\lambda|^{\alpha}(r-s)^{\alpha H}}.$
Note that we can (and will) reformulate the last point by saying that for
all $g\in\mathcal{S}$,
$\mathbb{E}[g(x+L^{1,H}_{s,r})]=P^{\frac{\alpha}{2}}_{c_{\alpha}(r-s)^{\alpha
H}}g(x).$
Finally, let us recall a useful result about martingales in Lebesgue spaces,
due to Pinelis [35] and adapted to weighted spaces (see Appendix A.1).
###### Proposition 27.
Let $p\geq 2$, let $w$ be an admissible weight, and let $(M_{n})_{n}$ be a
$(\Omega,\mathcal{F},(\mathcal{F}_{n})_{n},\mathbb{P})$ martingale with values
in $L^{p}(\mathbb{R}^{d},w;\mathbb{R})$. Suppose that there exists a constant
$c>0$ such that for all $n\geq 0$, $\|M_{n+1}-M_{n}\|_{L^{p}(w)}\leq c$. Then
there exist two constants $C_{1}>0$ and $C_{2}>0$ such that for all $x\geq
0$,
$\mathbb{P}\big{(}\|M_{N}-\mathbb{E}[M_{N}]\|_{L^{p}(w)}\geq x\big{)}\leq
C_{1}e^{-\frac{C_{2}x^{2}}{Nc^{2}}}.$
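A scalar toy version of this concentration phenomenon is the Azuma-Hoeffding inequality: a real-valued martingale with increments bounded by $c$ satisfies $\mathbb{P}(|M_{N}|\geq x)\leq 2e^{-x^{2}/(2Nc^{2})}$. The Monte Carlo check below compares empirical tail frequencies of a $\pm c$ random walk with this bound; Proposition 27 is the $L^{p}(w)$-valued counterpart. The sample sizes are arbitrary numerical choices.

```python
import numpy as np

# Azuma-Hoeffding toy check: a +-c random walk (a martingale with
# increments bounded by c) has Gaussian tails of variance scale N c^2.

rng = np.random.default_rng(1)
N, c, trials = 200, 1.0, 20000
M_N = rng.choice([-c, c], size=(trials, N)).sum(axis=1)

for x in (10.0, 20.0, 30.0):
    empirical = np.mean(np.abs(M_N) >= x)
    bound = 2.0 * np.exp(-x**2 / (2 * N * c**2))
    print(f"x = {x}: empirical {empirical:.4f} <= bound {bound:.4f}")
    assert empirical <= bound
```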
### 5.2. Averaging operator
In this section we give another proof of the regularizing effect of the
fractional Brownian motion. This proof is similar to the one in [7], but
provides bounds in general weighted Besov spaces directly. For other proofs of
the same kind of results, one can consult [8], [15] and [28]. Other proofs may
rely on the Burkholder-Davis-Gundy inequality in UMD spaces; for information
about UMD spaces, one can consult [26].
###### Lemma 28.
Let $w$ be an admissible weight. Let $\alpha\in(0,2]$ and let
$H\in(0,1)\cup\\{\alpha^{-1}\\}$. Let $\nu\in(0,1)$. Then there exist three
constants $K,C_{1},C_{2}>0$ such that for all $j\geq-1$, all
$g\in\mathcal{S}^{\prime}(\mathbb{R}^{d};\mathbb{R})$, all $s\leq t$ and all
$x>K$,
$\mathbb{P}\left(\frac{\|\int_{s}^{t}\Delta_{j}g(\cdot+L^{H}_{r})\mathop{}\\!\mathrm{d}r\|_{L^{p}(w)}}{|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}}\geq
x\right)\leq C_{1}e^{-C_{2}x^{2}}.$
###### Proof.
Let $0\leq s\leq t$ and let $N\in\mathbb{N}$ be fixed for now. Let us
define $t_{n}=\frac{n}{N}(t-s)+s$, and $\mathcal{G}_{n}=\mathcal{F}_{t_{n}}$.
Let us define
$M_{n}(x)=\mathbb{E}\left[\int_{s}^{t}\Delta_{j}g(x+L^{H}_{r})\mathop{}\\!\mathrm{d}r\bigg{|}\mathcal{F}_{t_{n}}\right].$
Hence, $(M_{n})_{n}$ is a martingale with respect to $(\mathcal{G}_{n})_{n}$
with values in $L^{p}(\mathbb{R}^{d},w;\mathbb{R})$ for all $p\geq 2$. Note
also that
$M_{N}=\int_{s}^{t}\Delta_{j}g(\cdot+L^{H}_{r})\mathop{}\\!\mathrm{d}r.$
Furthermore, we have for all $0\leq n\leq N-1$
$M_{n+1}-M_{n}=\int_{t_{n}}^{t_{n+1}}\Delta_{j}g(\cdot+L^{H}_{r})\mathop{}\\!\mathrm{d}r+A_{n+1}-A_{n},$
with
$A_{n}=\mathbb{E}\left[\int_{t_{n}}^{t}\Delta_{j}g(\cdot+L^{H}_{r})\mathop{}\\!\mathrm{d}r\bigg{|}\mathcal{F}_{t_{n}}\right].$
It is also readily checked that for all $n=0,\ldots,N-1$ we have
$\left\|\int_{t_{n}}^{t_{n+1}}\Delta_{j}g(\cdot+L^{H}_{r})\mathop{}\\!\mathrm{d}r\right\|_{L^{p}(w)}\leq\frac{|t-s|}{N}\|\Delta_{j}g\|_{L^{p}(w)}.$
Furthermore, using the decomposition of Proposition 26, one gets for all
$n\in\\{0,\cdots,N\\}$,
$\displaystyle A_{n}(x)=$
$\displaystyle\int_{t_{n}}^{t}\mathbb{E}\left[\Delta_{j}g\left(x+L^{1,H}_{t_{n},r}+L^{2,H}_{t_{n},r}\right)\bigg{|}\mathcal{F}_{t_{n}}\right]\mathop{}\\!\mathrm{d}r$
$\displaystyle=$
$\displaystyle\int_{t_{n}}^{t}P^{\frac{\alpha}{2}}_{c_{\alpha}(r-t_{n})^{\alpha
H}}\Delta_{j}g(x+L^{2,H}_{t_{n},r})\mathop{}\\!\mathrm{d}r.$
Hence, thanks to Proposition 43, the following bound holds:
$\|A_{n}\|_{L^{p}(w)}\lesssim\int_{t_{n}}^{t}e^{-c2^{\alpha
j}(r-t_{n})^{\alpha H}}\mathop{}\\!\mathrm{d}r\,\|\Delta_{j}g\|_{L^{p}(w)}\lesssim
2^{-\frac{j}{H}}\|\Delta_{j}g\|_{L^{p}(w)}\int_{0}^{+\infty}e^{-cr^{\alpha
H}}\mathop{}\\!\mathrm{d}r.$
Finally, we have the bound
$\|M_{n+1}-M_{n}\|_{L^{p}(w)}\lesssim\left(\frac{t-s}{N}+2^{-\frac{j}{H}}\right)\|\Delta_{j}g\|_{L^{p}(w)}.$
(5.1)
Note also that we have the straightforward bound
$\|M_{n+1}-M_{n}\|_{L^{p}(w)}\lesssim(t-s)\|\Delta_{j}g\|_{L^{p}(w)}$.
Therefore, by interpolation, this gives
$\|M_{n+1}-M_{n}\|_{L^{p}(w)}\lesssim(t-s)^{1-\nu}\left(\frac{t-s}{N}+2^{-\frac{j}{H}}\right)^{\nu}\|\Delta_{j}g\|_{L^{p}(w)}.$
(5.2)
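The interpolation step combines the trivial bound with (5.1): for $0\leq x\leq A$, $x\leq B$ and $\nu\in[0,1]$ one has $x=x^{1-\nu}x^{\nu}\leq A^{1-\nu}B^{\nu}$. Spelled out, for the reader's convenience:

```latex
\|M_{n+1}-M_{n}\|_{L^{p}(w)}
  = \|M_{n+1}-M_{n}\|_{L^{p}(w)}^{1-\nu}\,\|M_{n+1}-M_{n}\|_{L^{p}(w)}^{\nu}
  \lesssim \Big((t-s)\|\Delta_{j}g\|_{L^{p}(w)}\Big)^{1-\nu}
    \Big(\Big(\tfrac{t-s}{N}+2^{-\frac{j}{H}}\Big)\|\Delta_{j}g\|_{L^{p}(w)}\Big)^{\nu},
```

which is exactly (5.2).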
It is readily checked that
$\mathbb{P}\left(\left\|M_{N}\right\|_{L^{p}(w)}\geq
x|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}\right)\\\
\leq\mathbb{P}\left(\left\|M_{N}-\mathbb{E}[M_{N}]\right\|_{L^{p}(w)}\geq\frac{x}{2}|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}\right)\\\
+\mathbb{P}\left(\left\|\mathbb{E}[M_{N}]\right\|_{L^{p}(w)}\geq\frac{x}{2}|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}\right).$
Furthermore, note that $\mathbb{E}[M_{N}]=\mathbb{E}[A_{0}]$, with
$\|A_{0}\|_{L^{p}(w)}\leq(t-s)\|\Delta_{j}g\|_{L^{p}(w)}$ and
$\|A_{0}\|_{L^{p}(w)}\lesssim 2^{-\frac{j}{H}}\|\Delta_{j}g\|_{L^{p}(w)}$.
Hence, there exists a constant $K>0$ such that
$\|\mathbb{E}[M_{N}]\|_{L^{p}(w)}\leq\frac{K}{2}|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}.$
Therefore, for $x>K$,
$\mathbb{P}\left(\left\|\mathbb{E}[M_{N}]\right\|_{L^{p}(w)}\geq\frac{x}{2}|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}\right)=0.$
Finally, thanks to Proposition 27, there exists a constant
$\tilde{C}_{2}>0$ such that
$\mathbb{P}\left(\left\|\int_{s}^{t}\Delta_{j}g(\cdot+L^{H}_{r})\mathop{}\\!\mathrm{d}r\right\|_{L^{p}(w)}\geq
x|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}\right)\\\
\lesssim\exp\left(-\tilde{C}_{2}\frac{x^{2}|t-s|^{2-\nu}2^{-j\frac{\nu}{H}}\|\Delta_{j}g\|^{2}_{L^{p}(w)}}{N\left((t-s)^{1-\nu}\left(\frac{t-s}{N}+2^{-\frac{j}{H}}\right)^{\nu}\right)^{2}\|\Delta_{j}g\|^{2}_{L^{p}(w)}}\right).$
Optimizing in $N$, one obtains the desired result. ∎
###### Remark 29.
Thanks to the bound (5.2) and to standard UMD-space arguments (see for example
[26]), one could rather use the Burkholder-Davis-Gundy inequality, and get _for
all $1<p<+\infty$_ and all $m\geq 2$
$\mathbb{E}\left[\left\|\int_{s}^{t}\Delta_{j}g(\cdot+L^{H}_{r})\mathop{}\\!\mathrm{d}r\right\|^{m}_{L^{p}(w)}\right]\lesssim_{m}|t-s|^{\frac{m}{2}}2^{-\frac{m}{2H}j}\|\Delta_{j}g\|^{m}_{L^{p}(w)}.$
This would allow for a more standard proof of the regularizing effect of the
LFSM (using the Kolmogorov continuity theorem), but prevents one from
obtaining Gaussian tails.
###### Lemma 30.
Let $X$ be a non-negative random variable such that there exist
$K,C_{1},C_{2}>0$ such that for all $x>K$,
$\mathbb{P}(X\geq x)\leq C_{1}e^{-C_{2}x^{2}}.$
Then for all $m\geq 1$,
$\mathbb{E}[X^{2m}]\leq K^{2m}+C_{1}\frac{m!}{C_{2}^{m}}.$
###### Proof.
A simple computation reveals that
$\displaystyle\mathbb{E}[X^{2m}]$
$\displaystyle=\int_{0}^{K}2mx^{2m-1}\mathbb{P}(X\geq
x)\mathop{}\\!\mathrm{d}x+\int_{K}^{+\infty}2mx^{2m-1}\mathbb{P}(X\geq
x)\mathop{}\\!\mathrm{d}x$ $\displaystyle\leq
K^{2m}+C_{1}\int_{0}^{+\infty}2mx^{2m-1}e^{-C_{2}x^{2}}\mathop{}\\!\mathrm{d}x$
$\displaystyle=K^{2m}+C_{1}C_{2}^{-m}\int_{0}^{+\infty}2mx^{2m-1}e^{-x^{2}}\mathop{}\\!\mathrm{d}x$
$\displaystyle=K^{2m}+C_{1}\frac{m!}{C_{2}^{m}}.$
∎
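The moment bound rests on the identity $\int_{0}^{+\infty}2mx^{2m-1}e^{-x^{2}}\mathop{}\\!\mathrm{d}x=m\Gamma(m)=m!$ (after the substitution $x\mapsto\sqrt{C_{2}}\,x$). A quick numerical sanity check; the grid size and the cut-off are arbitrary numerical choices.

```python
import numpy as np
from math import factorial

# Midpoint-rule check of int_0^infty 2m x^(2m-1) e^(-x^2) dx = m Gamma(m) = m!.
# The cut-off at x = 10 is harmless since the tail is of order e^(-100).

def moment_integral(m, n=200000, upper=10.0):
    h = upper / n
    x = (np.arange(n) + 0.5) * h
    return float(np.sum(2 * m * x ** (2 * m - 1) * np.exp(-x**2)) * h)

for m in range(1, 6):
    assert abs(moment_integral(m) - factorial(m)) < 1e-5 * factorial(m)
print("identity verified for m = 1..5")
```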
Finally, we have all the tools to prove the following theorem on the regularity
of the averaging operator with respect to the LFSM. We will use a version of
the Garsia-Rodemich-Rumsey inequality from Friz and Victoir ([11], Theorem 1.1,
p. 571).
###### Theorem 31.
Let $w$ be an admissible weight. Let $\alpha\in(0,2]$. Let
$H\in(0,1)\cup\\{\alpha^{-1}\\}$. Let $\kappa\in\mathbb{R}$ and $\nu\in[0,1]$.
Let $2\leq p\leq\infty$, $1\leq q\leq+\infty$, and let
$\varepsilon,\delta>0$. There exists a constant
$C>0$ such that for any $g\in B^{\kappa}_{p,q}(w)$ there exists a positive
random variable $F$ with
$\mathbb{E}[e^{CF^{2}}]<+\infty,$
and such that for any $0\leq s\leq t\leq T$
$\|T^{L^{H}}_{s,t}g\|_{B^{\kappa+\frac{\nu}{2H}}_{p,q}(w)}\leq
F|t-s|^{1-\frac{\nu}{2}-\frac{\varepsilon}{2}}\|g\|_{B^{\kappa}_{p,q}(w)}\quad\text{if}\quad
q<+\infty$
and
$\|T^{L^{H}}_{s,t}g\|_{B^{\kappa+\frac{\nu}{2H}-\delta}_{p,\infty}(w)}\leq
F|t-s|^{1-\frac{\nu}{2}-\frac{\varepsilon}{2}}\|g\|_{B^{\kappa}_{p,\infty}(w)}.$
###### Proof.
First consider $1\leq q<\infty$. Thanks to Lemma 28 and Lemma 30, we know that
there exists a constant $c_{q}>0$ such that for all $0\leq s<t\leq T$
$\mathbb{E}\left[\left(\frac{\left\|\Delta_{j}T^{L^{H}}_{s,t}g\right\|_{L^{p}(w)}}{|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\left\|\Delta_{j}g\right\|_{L^{p}(w)}}\right)^{q}\right]\leq
c_{q}$
and that for all $m\geq 1$,
$\mathbb{E}\left[\left\|\Delta_{j}T^{L^{H}}_{s,t}g\right\|^{2m}_{L^{p}(w)}\right]\lesssim\left(K^{2m}+\frac{m!}{C_{2}^{m}}\right)|t-s|^{(2-\nu)m}2^{-j\frac{\nu
m}{H}}\left\|\Delta_{j}g\right\|^{2m}_{L^{p}(w)}.$
Let us take $C<C_{2}$. We have
$\mathbb{E}\left[\exp\left(C\left(\frac{\|T^{L^{H}}_{s,t}g\|_{B^{\kappa+\frac{\nu}{2H}}_{p,q}(w)}}{|t-s|^{1-\frac{\nu}{2}}\|g\|_{B^{\kappa}_{p,q}(w)}}\right)^{2}\right)\right]=A+B,$
with
$A=\sum_{2m\leq
q}\frac{C^{m}}{m!|t-s|^{(2-\nu)m}\|g\|^{2m}_{B^{\kappa}_{p,q}(w)}}\mathbb{E}\left[\left(\sum_{j\geq{-1}}2^{j\left(\kappa+\frac{\nu}{2H}\right)q}\left\|\Delta_{j}T^{L^{H}}_{s,t}g\right\|^{q}_{L^{p}(w)}\right)^{\frac{2m}{q}}\right]$
and
$B=\sum_{2m>q}\frac{C^{m}}{m!}\mathbb{E}\left[\left(\sum_{j\geq{-1}}\frac{2^{jq\kappa}\|\Delta_{j}g\|^{q}_{L^{p}(w)}}{\sum_{j\geq{-1}}2^{jq\kappa}\|\Delta_{j}g\|^{q}_{L^{p}(w)}}\left(\frac{\|\Delta_{j}T^{L^{H}}_{s,t}g\|_{L^{p}(w)}}{|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}}\right)^{q}\right)^{\frac{2m}{q}}\right].$
For $A$, we use Jensen's inequality in the concave case, and thanks to
Lemma 30 we have
$\displaystyle A\leq$ $\displaystyle\sum_{2m\leq
q}\frac{C^{m}}{m!|t-s|^{(2-\nu)m}\|g\|^{2m}_{B^{\kappa}_{p,q}(w)}}\left(\sum_{j\geq{-1}}2^{j\left(\kappa+\frac{\nu}{2H}\right)q}\mathbb{E}\left[\left\|\Delta_{j}T^{L^{H}}_{s,t}g\right\|^{q}_{L^{p}(w)}\right]\right)^{\frac{2m}{q}}$
$\displaystyle\lesssim$
$\displaystyle\sum_{2m\leq q}\frac{C^{m}\left(c_{q}^{\frac{2}{q}}\right)^{m}}{m!}$
$\displaystyle\lesssim$ $\displaystyle e^{Cc_{q}^{\frac{2}{q}}}.$
For $B$, we use Jensen's inequality in the convex case, and again thanks to
Lemma 30 we have
$\displaystyle B=$
$\displaystyle\sum_{2m>q}\frac{C^{m}}{m!}\mathbb{E}\left[\left(\sum_{j\geq{-1}}\frac{2^{jq\kappa}\|\Delta_{j}g\|^{q}_{L^{p}(w)}}{\sum_{j\geq{-1}}2^{jq\kappa}\|\Delta_{j}g\|^{q}_{L^{p}(w)}}\left(\frac{\|\Delta_{j}T^{L^{H}}_{s,t}g\|_{L^{p}(w)}}{|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}}\right)^{q}\right)^{\frac{2m}{q}}\right]$
$\displaystyle\leq$
$\displaystyle\sum_{2m>q}\frac{C^{m}}{m!}\sum_{j\geq{-1}}\frac{2^{jq\kappa}\|\Delta_{j}g\|^{q}_{L^{p}(w)}}{\sum_{j\geq{-1}}2^{jq\kappa}\|\Delta_{j}g\|^{q}_{L^{p}(w)}}\mathbb{E}\left[\left(\frac{\|\Delta_{j}T^{L^{H}}_{s,t}g\|_{L^{p}(w)}}{|t-s|^{1-\frac{\nu}{2}}2^{-j\frac{\nu}{2H}}\|\Delta_{j}g\|_{L^{p}(w)}}\right)^{2m}\right]$
$\displaystyle\lesssim$
$\displaystyle\sum_{2m>q}\frac{C^{m}}{m!}\left(K^{2m}+\frac{m!}{C_{2}^{m}}\right)$
$\displaystyle\lesssim$ $\displaystyle
e^{CK^{2}}+\frac{1}{1-\frac{C}{C_{2}}}.$
Hence, if we define
$F=\int_{0}^{T}\int_{0}^{T}\exp\left(C\left(\frac{\|T^{L^{H}}_{s,t}g\|_{B^{\kappa+\frac{\nu}{2H}}_{p,q}(w)}}{|t-s|^{1-\frac{\nu}{2}}\|g\|_{B^{\kappa}_{p,q}(w)}}\right)^{2}\right)\mathop{}\\!\mathrm{d}s\mathop{}\\!\mathrm{d}t,$
we are exactly in the scope of the Garsia-Rodemich-Rumsey inequality with
$\psi(x)=e^{Cx^{2}}$ and $p(u)=u^{1-\frac{\nu}{2}}$, which leads to the desired
result when $q<+\infty$. When $q=\infty$ one needs to deal with the supremum
over $j$, and thus to lose a bit of regularity (whence the $\delta$). We leave
this to the reader since it is a direct adaptation of the previous proof. ∎
With the above construction of the averaged field associated with the
fractional Lévy process, we are ready to prove Theorem 4.
###### Proof of Theorem 4.
As in Section 4, let us first consider a distribution
$\xi\in\mathcal{C}^{-\vartheta}$ and an initial condition
$\psi\in\mathcal{C}^{\beta}$ with $0<\vartheta<1$ and
$\beta=\vartheta+\varepsilon_{1}$. The singularity
$\rho=\frac{\vartheta+\beta}{2}$ is one of the limiting thresholds in all the
previous computations. Hence, the best choice of $2-\vartheta>\beta>\vartheta$
for the space of the initial condition is $\beta=\vartheta+\varepsilon_{1}$ for
a small $\varepsilon_{1}>0$.
Note also that in Theorem 31 one wants to take $\nu$ as close as possible to 1,
since $\frac{\nu}{2H}$ is, in a sense, the regularization index of the LFSM.
Note also that for some $\varepsilon_{2}>0$ small enough, one has
$\gamma=1-\frac{\nu}{2}-\varepsilon_{2}.$
The condition which allows the nonlinear Young-Volterra calculus to work is
$1-\gamma<\gamma-\rho,$
which here gives us
$\vartheta<1-\nu-\frac{\varepsilon_{1}}{2}-\varepsilon_{2},$
which gives for some $\varepsilon_{3}>0$
$\nu=1-\vartheta-\frac{\varepsilon_{1}}{2}-\varepsilon_{2}-\varepsilon_{3}.$
Hence, for an admissible weight $w$ and a function $g\in\mathcal{C}^{\kappa}$
with $\kappa>3-\frac{1-\vartheta}{2H}$, there exist $\varepsilon_{1}>0$,
$\varepsilon_{2}>0$ and $\varepsilon_{3}>0$ such that all the previous
conditions are satisfied and such that almost surely
$T^{L^{H}}g\in\mathcal{C}^{\gamma}_{T}\mathcal{C}^{3}(w).$
Applying Theorem 2 and Corollary 3, we obtain the desired result. ∎
###### Proof of Corollary 5.
The proof is straightforward once one recalls that the spatial white noise
$\xi$ is almost surely in $\mathcal{C}^{-\vartheta}$ for any
$\vartheta>\frac{d}{2}$. One can then apply Theorem 4. ∎
## 6\. Conclusion
We have proven that a certain class of measurable paths
$\omega:[0,T]\rightarrow\mathbb{R}$ provides strong regularizing effects on
the multiplicative stochastic heat equation of the form (1.6). In
particular, we prove that there exist measurable paths $\omega$ such that
local existence and uniqueness for such equations holds even when the non-
linear function $g$ is only a Schwartz distribution and
$\xi\in\mathcal{C}^{-\vartheta}$ for some $\vartheta<1$, thus allowing for
very rough spatial noise in the one-dimensional setting. To this end, we apply
the concept of non-linear Young integration and extend it to the infinite
dimensional setting with Volterra operators. This sheds new light on the
application of the "pathwise regularization by noise" techniques developed in
[7] to the context of SPDEs. We believe that this program can be taken
further in several directions in the future, and we provide some thoughts on
such developments here.
In the current article we restricted our analysis to spatial white
noise on $\mathbb{T}$ (Corollary 5). Our techniques could be extended to
$\mathbb{T}^{d}$, but they would not allow for white
spatial noise $\xi$. Indeed, recall that if
$\xi:\mathbb{T}^{d}\rightarrow\mathbb{R}$ then
$\xi\in\mathcal{C}^{-\frac{d}{2}-\varepsilon}$ for any $\varepsilon>0$, and
thus even in $d=2$ the noise is too rough, since the best we can deal with in
the Young setting developed above is $\xi\in\mathcal{C}^{-\vartheta}$
with $\vartheta<1$. However, there is a possibility that techniques from the
theory of paracontrolled calculus as developed in [18] could be applied here
to make sense of the product, and thus a generalization could then be
possible. In that connection, one would possibly need "second order correction
terms" associated to the averaged field $T^{\omega}g$, which, at this point,
it is unclear (at least to us) how to construct.
Another possible direction would be to allow multiplicative space-time noise,
i.e. consider $\xi$ as a distribution on $[0,T]\times\mathbb{R}^{d}$ (or
$\mathbb{T}^{d}$). When $\xi$ is depending on time, one can no longer use the
non-linear Young integral in the same way as developed here. The situation
looks like the one encountered when considering regularization by noise for
ODEs with multiplicative (time dependent) noise of the form
$y_{t}=y_{0}+\int_{0}^{t}b(y_{s}+\omega_{s})\mathop{}\\!\mathrm{d}\beta_{s}+\omega_{t},\qquad
y_{0}\in\mathbb{R}^{d}.$ (6.1)
Existence and uniqueness of the above equation when $\beta$ is a fractional
Brownian motion with $H>\frac{1}{2}$ was recently established in [17], even
when $b$ is a distribution. The key idea in this result was to consider the
average operator
$\Gamma_{s,t}^{\omega}b(x):=\int_{s}^{t}b(x+\omega_{r})\mathop{}\\!\mathrm{d}\beta_{r}$
and use a recently developed probabilistic lemma by Hairer and Li [20] to show
that the regularity of $\Gamma^{\omega}b$ is linked to the regularity of
$T^{\omega}b$. One could then consider an averaging operator on a Banach
space $E$, given by
$\Pi_{s,t}^{\omega}b(x)=\int_{s}^{t}b(x+\omega_{s})\mathop{}\\!\mathrm{d}\xi_{s},$
(6.2)
where $x\in E$ and $\xi$ is a time-colored, spatially white noise with values
in $E$. If one can extend the lemmas of Hairer and Li to the
$\Pi^{\omega}b$ and $T^{\omega}b$ (when considering $T^{\omega}b$ as an
infinite dimensional averaged field, as in Proposition 19) there is a
possibility that one could prove regularization by noise for stochastic heat
equations with space-time noise of the form
$x_{t}=P_{t}\psi+\int_{0}^{t}P_{t-s}g(x_{s})\mathop{}\\!\mathrm{d}\xi_{s}+\omega_{t},$
by using similar techniques as developed in the current article. We leave a
deeper investigation into these possibilities open for future work.
## Appendix A Basic concepts of Besov spaces and properties of the heat
kernel
We gather here some material about Besov spaces, heat kernel estimates and
embeddings in those spaces. This section is strongly inspired by [2] and [37];
see also [31] for the weighted Besov norms. For the sake of
comprehension, we give elementary proofs of the main points needed for
developing the theory; this is the purpose of Subsections A.1 and A.2. For the
sake of the Volterra sewing lemma (see Section 2.1), we need a few non-standard
estimates on the action of the heat kernel on Besov spaces; this is the purpose
of Subsection A.3. In Section B we prove the elementary Cauchy-Lipschitz
theorem for the multiplicative SHE without additive perturbation.
### A.1. Weighted Lebesgue spaces
In order to work in a general setting, we will define weighted Besov spaces.
To do so, let us define the class of admissible weights, following Triebel
[37, Chapter 6]. In [31] one can find a more general definition of weights,
which allows essentially the same kind of estimates.
###### Definition 32.
We say that $w\in C^{\infty}(\mathbb{R}^{d};\mathbb{R}_{+}\backslash\\{0\\})$
is an admissible weight function if
* (i)
For all $\kappa\in\mathbb{N}^{d}$, there exists a positive constant
$c_{\kappa}$ such that
$\forall x\in\mathbb{R}^{d},\quad\partial^{\kappa}w(x)\leq c_{\kappa}w(x).$
* (ii)
There exist $\lambda^{\prime}\geq 0$ and $c>0$ such that for all
$x,y\in\mathbb{R}^{d}$,
$w(x)\leq cw(y)(1+|x-y|^{2})^{\frac{\lambda^{\prime}}{2}}.$ (A.1)
Furthermore, for any admissible weight $w$ we define the weighted $L^{p}$
space as follows:
$L^{p}(\mathbb{R}^{d};\mathbb{R}|w)=\\{f:\mathbb{R}^{d}\to\mathbb{R}\,:\,\|f\|_{L^{p}(\mathbb{R}^{d},w)}=\|wf\|_{L^{p}(\mathbb{R}^{d};\mathbb{R})}<+\infty\\}.$
A direct consequence of the definition is that the product of two admissible
weights is also an admissible weight. Furthermore, standard polynomial
weights are admissible, as proved in the following proposition. Finally, the
Hölder inequality in weighted Lebesgue spaces is straightforward.
###### Proposition 33.
Let $\lambda\in\mathbb{R}$, and let us define for all $x\in\mathbb{R}^{d}$,
$\langle x\rangle=(1+|x|^{2})^{\frac{1}{2}}$. Then
$\langle\cdot\rangle^{\lambda}$ is an admissible weight with
$\lambda^{\prime}=|\lambda|$.
###### Proof.
Let us remark that, by the Cauchy-Schwarz inequality, for all
$x,y\in\mathbb{R}^{d}$,
$(1+y\cdot(x-y))^{2}\leq 2\left(1+|y|^{2}|x-y|^{2}\right).$
Hence,
$1+y\cdot(x-y)\leq\sqrt{2}\left(1+|y|^{2}|x-y|^{2}\right),$
and then
$\langle x\rangle^{2}=1+|x-y|^{2}+|y|^{2}+2y\cdot(x-y)\leq|x-y|^{2}+|y|^{2}+2\sqrt{2}\left(1+|y|^{2}|x-y|^{2}\right)\leq 2\sqrt{2}\left(1+|x-y|^{2}+|y|^{2}+|y|^{2}|x-y|^{2}\right)=2\sqrt{2}\langle y\rangle^{2}\langle x-y\rangle^{2}.$
It therefore follows that
$\langle x\rangle\leq 2^{\frac{3}{4}}\langle y\rangle\langle x-y\rangle.$
If $\lambda\geq 0$,
$\langle x\rangle^{\lambda}\leq 2^{\frac{3\lambda}{4}}\langle
y\rangle^{\lambda}\langle x-y\rangle^{\lambda},$
and
$\langle x\rangle^{-\lambda}=\langle x-y\rangle^{\lambda}\left(\langle
x\rangle\langle y-x\rangle\right)^{-\lambda}\leq 2^{\frac{3\lambda}{4}}\langle
y\rangle^{-\lambda}\langle x-y\rangle^{\lambda},$
which concludes the proof. ∎
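As a purely numerical aside (not needed for the proofs), the Peetre-type inequality $\langle x\rangle\leq 2^{3/4}\langle y\rangle\langle x-y\rangle$ and its consequences for the weights $\langle\cdot\rangle^{\pm\lambda}$ can be sanity-checked on random samples; the short Python/NumPy sketch below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(x):
    # Japanese bracket <x> = (1 + |x|^2)^{1/2}, applied along the last axis
    return np.sqrt(1.0 + np.sum(x * x, axis=-1))

# Check <x> <= 2^{3/4} <y> <x - y> on random points in R^d,
# together with the power-lambda consequences of Proposition 33.
d = 3
x = rng.normal(scale=5.0, size=(10000, d))
y = rng.normal(scale=5.0, size=(10000, d))
c = 2.0 ** 0.75

assert np.all(bracket(x) <= c * bracket(y) * bracket(x - y) + 1e-12)

for lam in (0.5, 1.0, 2.5):
    cl = c ** lam
    assert np.all(bracket(x) ** lam
                  <= cl * bracket(y) ** lam * bracket(x - y) ** lam + 1e-9)
    # the weight <.>^{-lambda} satisfies (A.1) with lambda' = lambda
    assert np.all(bracket(x) ** (-lam)
                  <= cl * bracket(y) ** (-lam) * bracket(x - y) ** lam + 1e-9)
```

All assertions pass by the proof above; the check is a guard against sign or exponent slips rather than evidence.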
###### Lemma 34.
Let $w$ be an admissible weight and let $\lambda^{\prime}$ be defined as in
(A.1). Then for $1\leq p,q,r\leq+\infty$ with
$\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}$ and any measurable functions $f$ and
$g$,
$\|f*g\|_{L^{r}(w)}\lesssim\|f\|_{L^{p}(\langle\cdot\rangle^{\lambda^{\prime}})}\|g\|_{L^{q}(w)}.$
###### Proof.
The result is a direct consequence of the definition of admissible weights
and of the standard Young inequality. Indeed, since
$w(x)\lesssim\langle x-y\rangle^{\lambda^{\prime}}w(y)$,
$\|f*g\|_{L^{r}(w)}=\left(\int_{\mathbb{R}^{d}}\left|\int_{\mathbb{R}^{d}}f(x-y)g(y)\mathop{}\\!\mathrm{d}y\right|^{r}w(x)^{r}\mathop{}\\!\mathrm{d}x\right)^{\frac{1}{r}}\lesssim\left(\int_{\mathbb{R}^{d}}\left(\int_{\mathbb{R}^{d}}\langle x-y\rangle^{\lambda^{\prime}}|f(x-y)|w(y)|g(y)|\mathop{}\\!\mathrm{d}y\right)^{r}\mathop{}\\!\mathrm{d}x\right)^{\frac{1}{r}},$
and hence
$\|f*g\|_{L^{r}(w)}\lesssim\|(\langle\cdot\rangle^{\lambda^{\prime}}|f|)*(w|g|)\|_{L^{r}}\leq\|\langle\cdot\rangle^{\lambda^{\prime}}f\|_{L^{p}}\|wg\|_{L^{q}}=\|f\|_{L^{p}(\langle\cdot\rangle^{\lambda^{\prime}})}\|g\|_{L^{q}(w)},$
which is the desired result. ∎
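In the same illustrative spirit, a discrete analogue of Lemma 34 can be checked for finitely supported sequences, using the polynomial weight $\langle n\rangle^{\lambda}$ of Proposition 33, for which (A.1) holds with $\lambda^{\prime}=\lambda$ and constant $2^{3\lambda/4}$; the exponents $p=2$, $q=1$, $r=2$ below are one admissible choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def lp_norm(a, w, p):
    # weighted little-l^p norm ||a||_{l^p(w)} = || w a ||_{l^p}
    return np.sum((w * np.abs(a)) ** p) ** (1.0 / p)

# Discrete illustration of Lemma 34 with the admissible weight
# w(n) = <n>^lambda (Proposition 33 gives lambda' = lambda and
# the explicit constant 2^{3 lambda / 4}).
N, lam = 50, 1.5
n = np.arange(N)
k = np.arange(2 * N - 1)              # support of the full convolution
w = (1.0 + n ** 2) ** (lam / 2)
wk = (1.0 + k ** 2) ** (lam / 2)

f = rng.random(N)
g = rng.random(N)
conv = np.convolve(f, g)

p, q = 2.0, 1.0
r = 1.0 / (1.0 / p + 1.0 / q - 1.0)   # Young exponent: 1/p + 1/q = 1 + 1/r
c = 2.0 ** (3.0 * lam / 4.0)

lhs = lp_norm(conv, wk, r)
rhs = c * lp_norm(f, w, p) * lp_norm(g, w, q)
assert lhs <= rhs * (1 + 1e-12)
```

The constant $c$ plays the role of the implicit constant in the lemma; for this weight the inequality holds with exactly this $c$, by the same two-step argument (admissibility, then Young).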
Furthermore, as usual, in order to define Besov spaces we will work with
functions whose Fourier transform is compactly supported. To deal with such
functions, let us prove a Bernstein-type lemma in weighted $L^{p}$ spaces:
###### Lemma 35.
Let $w$ be an admissible weight. Let
$\mathcal{A}=\\{\xi\in\mathbb{R}^{d}\,:\,c_{1}\leq|\xi|\leq c_{2}\\}$ be an
annulus and $\mathcal{B}$ be a ball. There exists a constant $C>0$ such that
for all $1\leq p\leq p^{\prime}\leq+\infty$, $n\geq 0$, $a\geq 1$, and for any
function $f\in L^{p}(w)$, we have
* (i)
If $\mathrm{supp}\hat{f}\subset a\mathcal{B}$, then
$\|D^{n}f\|_{L^{p^{\prime}}(w)}:=\sup_{|k|=n}\|\partial^{k}f\|_{L^{p^{\prime}}(w)}\leq
C^{n+1}a^{n+d\left(\frac{1}{p}-\frac{1}{p^{\prime}}\right)}\|f\|_{L^{p}(w)}.$
* (ii)
If $\mathrm{supp}\hat{f}\subset a\mathcal{A}$, then
$\frac{1}{C^{n+1}}a^{n}\|f\|_{L^{p}(w)}\leq\|D^{n}f\|_{L^{p}(w)}\leq
C^{n+1}a^{n}\|f\|_{L^{p}(w)}.$
Here $\hat{f}$ denotes the Fourier transform of $f$.
###### Proof.
Let $K$ be a function such that $\hat{K}\equiv 1$ on $\mathcal{B}$ and
$\hat{K}$ is compactly supported, and let us define
$K_{a}=a^{d}K(a\cdot)$. If $\mathrm{supp}\hat{f}\subset a\mathcal{B}$, then
$f=K_{a}*f$, and
$\partial^{k}f=(\partial^{k}K_{a})*f=a^{|k|}((\partial^{k}K)_{a})*f$, where
$(\partial^{k}K)_{a}=a^{d}\partial^{k}K(a\cdot)$. Hence, by the previous
weighted Young inequality, for $1\leq p\leq p^{\prime}\leq+\infty$ and
$\frac{1}{r}=1-\left(\frac{1}{p}-\frac{1}{p^{\prime}}\right)$,
$\|\partial^{k}f\|_{L^{p^{\prime}}(w)}=a^{|k|}\|(\partial^{k}K)_{a}*f\|_{L^{p^{\prime}}(w)}\\\
\leq
a^{|k|}\|(\partial^{k}K)_{a}\|_{L^{r}(\langle\cdot\rangle^{\lambda^{\prime}})}\|f\|_{L^{p}(w)}.$
Furthermore, since $a\geq 1$ and $\lambda^{\prime}\geq 0$, one has $\langle
a^{-1}x\rangle^{\lambda^{\prime}}\leq\langle x\rangle^{\lambda^{\prime}}$, and
$\|(\partial^{k}K)_{a}\|_{L^{r}(\langle\cdot\rangle^{\lambda^{\prime}})}^{r}=a^{rd}\int_{\mathbb{R}^{d}}|\partial^{k}K(ax)|^{r}\langle x\rangle^{r\lambda^{\prime}}\mathop{}\\!\mathrm{d}x=a^{(r-1)d}\int_{\mathbb{R}^{d}}|\partial^{k}K(x)|^{r}\langle a^{-1}x\rangle^{r\lambda^{\prime}}\mathop{}\\!\mathrm{d}x\leq a^{(r-1)d}\int_{\mathbb{R}^{d}}|\partial^{k}K(x)|^{r}\langle x\rangle^{r\lambda^{\prime}}\mathop{}\\!\mathrm{d}x.$
It follows that
$\|(\partial^{k}K)_{a}\|_{L^{r}(\langle\cdot\rangle^{\lambda^{\prime}})}\lesssim
a^{d\left(1-\frac{1}{r}\right)}=
a^{d\left(\frac{1}{p}-\frac{1}{p^{\prime}}\right)}.$
Gathering the above considerations proves (i).
The second inequality of (ii) is just a sub-case of (i). For the first
inequality of (ii), consider a smooth function $L$ such that
$\mathrm{supp}\hat{L}$ is included in an annulus and such that $\hat{L}\equiv
1$ on $\mathcal{A}$. Following [2, Lemma 2.1 and (1.23), page 25], there
exist real numbers $(A_{k})$ such that
with
$(\xi_{1},\cdots,\xi_{d})^{(k_{1},\cdots,k_{d})}=\xi_{1}^{k_{1}}\cdots\xi_{d}^{k_{d}}$.
Hence, we have
$\sum_{|k|=n}A_{k}\frac{(-i\xi)^{k}}{|\xi|^{2n}}\hat{L}(a^{-1}\xi)\widehat{\partial^{k}f}(\xi)=\hat{L}(a^{-1}\xi)\hat{f}(\xi)\sum_{|k|=n}A_{k}\frac{(-i\xi)^{k}(i\xi)^{k}}{|\xi|^{2n}}=\hat{f}(\xi),$
since $\hat{L}(a^{-1}\cdot)\equiv 1$ on $a\mathcal{A}\supset\mathrm{supp}\hat{f}$.
For $k\in\mathbb{N}^{d}$ with $|k|=n$ let us define
$L^{k}(x)=A_{k}\int_{\mathbb{R}^{d}}(-i\xi)^{k}|\xi|^{-2n}\hat{L}(\xi)e^{i\xi\cdot
x}\mathop{}\\!\mathrm{d}\xi,$
and we have $f=a^{-n}\sum_{|k|=n}L^{k}_{a}*\partial^{k}f$. One can use the
weighted Young inequality to obtain that
$\|f\|_{L^{p}(w)}\lesssim
a^{-n}\sum_{|k|=n}\|L^{k}_{a}\|_{L^{1}(\langle\cdot\rangle^{\lambda^{\prime}})}\|\partial^{k}f\|_{L^{p}(w)}.$
Furthermore, since $\langle a^{-1}x\rangle^{\lambda^{\prime}}\leq\langle
x\rangle^{\lambda^{\prime}}$, it follows that
$\|L^{k}_{a}\|_{L^{1}(\langle\cdot\rangle^{\lambda^{\prime}})}\leq\|L^{k}\|_{L^{1}(\langle\cdot\rangle^{\lambda^{\prime}})}$.
Finally, we get
$\|f\|_{L^{p}(w)}\lesssim a^{-n}\|D^{n}f\|_{L^{p}(w)},$
which proves our claim. ∎
### A.2. Weighted Besov Spaces and standards estimates
###### Definition 36.
Let
$\mathcal{A}=\\{\lambda\in\mathbb{R}^{d}\,:\,\frac{3}{4}\leq|\lambda|\leq\frac{8}{3}\\}$.
There exist two radial functions $\chi$ and $\varphi$ such that
$\mathrm{supp}(\chi)\subset B(0,\frac{4}{3})$,
$\mathrm{supp}(\varphi)\subset\mathcal{A}$,
$\forall\lambda\in\mathbb{R}^{d}\,\chi(\lambda)+\sum_{j\geq
0}\varphi(2^{-j}\lambda)=1,$
and for $j\geq 1$,
$\mathrm{supp}(\chi)\cap\mathrm{supp}(\varphi(2^{-j}\cdot))=\emptyset$
and for $|j-j^{\prime}|\geq 2$,
$\mathrm{supp}(\varphi(2^{-j}\cdot))\cap\mathrm{supp}(\varphi(2^{-j^{\prime}}\cdot))=\emptyset.$
For all $f\in\mathcal{S}^{\prime}$ and $j\geq 0$, we define the inhomogeneous
Littlewood-Paley blocks by
$\Delta_{-1}f=\mathcal{F}^{-1}\big{(}\chi\hat{f}\big{)},\quad\mathrm{and}\quad\Delta_{j}f=\mathcal{F}^{-1}(\varphi(2^{-j}\cdot)\hat{f}),$
where $\hat{f}$ denotes the Fourier transform of $f$ and $\mathcal{F}^{-1}$
the inverse Fourier transform.
Note that the Littlewood-Paley blocks define a nice approximation of the
identity; we refer to [2, Proposition 2.12] for a proof.
###### Proposition 37.
For all $f\in\mathcal{S}^{\prime}$, let us define for all $j\geq-1$
$\mathcal{S}_{j}f=\sum_{j^{\prime}\leq j-1}\Delta_{j^{\prime}}f$. Then
$f=\lim_{j\to\infty}\mathcal{S}_{j}f\quad\text{in}\quad\mathcal{S}^{\prime}.$
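The construction of Definition 36 and the reconstruction property of Proposition 37 can be illustrated on a discrete torus. The smooth cutoff below is an ad hoc choice (any $\chi$ equal to $1$ on $B(0,\frac{3}{4})$ and supported in $B(0,\frac{4}{3})$ would do), with $\varphi=\chi(\cdot/2)-\chi$, so that the partition of unity telescopes.

```python
import numpy as np

# Discrete Littlewood-Paley decomposition on the torus [0, 2*pi):
# chi is 1 on |lam| <= 3/4 and 0 on |lam| >= 4/3, and
# phi(lam) = chi(lam / 2) - chi(lam) is supported in the annulus A.
N = 256
x = 2 * np.pi * np.arange(N) / N
freqs = np.fft.fftfreq(N, d=1.0 / N)      # integer frequencies

def chi(lam):
    a = np.abs(lam)
    t = np.clip((4.0 / 3.0 - a) / (4.0 / 3.0 - 3.0 / 4.0), 0.0, 1.0)
    return t * t * (3 - 2 * t)            # smooth ramp between 0 and 1

def block(f_hat, mult):
    return np.real(np.fft.ifft(mult * f_hat))

f = np.sin(3 * x) + 0.5 * np.cos(17 * x) + np.exp(np.cos(x))
f_hat = np.fft.fft(f)

J = int(np.ceil(np.log2(N))) + 1
blocks = [block(f_hat, chi(freqs))]       # Delta_{-1} f
for j in range(J):
    phi_j = chi(freqs / 2 ** (j + 1)) - chi(freqs / 2 ** j)  # phi(2^{-j} .)
    blocks.append(block(f_hat, phi_j))

# Proposition 37: S_J f = sum of the blocks reconstructs f
recon = np.sum(blocks, axis=0)
assert np.max(np.abs(recon - f)) < 1e-10
```

On a grid with $N$ points, $J$ dyadic blocks exhaust all frequencies, so the partial sum $\mathcal{S}_{J}f$ recovers $f$ up to round-off.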
###### Definition 38.
Let $1\leq p,q\leq+\infty$, let $\kappa\in\mathbb{R}$ and let $w$ be an
admissible weight. For a distribution
$f\in\mathcal{S}^{\prime}(\mathbb{R}^{d};\mathbb{R})$ we define the
(inhomogeneous) weighted Besov norm by
$\|f\|_{B^{\kappa}_{p,q}(w)}=\Big{\|}\big{(}2^{\kappa
j}\|\Delta_{j}f\|_{L^{p}(\mathbb{R}^{d},w)}\big{)}_{j\geq-1}\Big{\|}_{\ell^{q}(\mathbb{N}\cup\\{-1\\})}$
where
$\|f\|_{L^{p}(\mathbb{R}^{d},w)}=\left(\int_{\mathbb{R}^{d}}|f(x)|^{p}w(x)^{p}\mathop{}\\!\mathrm{d}x\right)^{\frac{1}{p}}$.
When $w\equiv 1$, we only write $B^{\kappa}_{p,q}$.
We gather here some basic properties of (weighted) Besov spaces.
###### Proposition 39.
Let $w$ be an admissible weight.
* (i)
The space
$B^{\kappa}_{p,q}(w)=\\{f\in\mathcal{S}^{\prime}(\mathbb{R}^{d};\mathbb{R})\,:\,\|f\|_{B^{\kappa}_{p,q}(w)}<+\infty\\}$
does not depend on the choice of $\varphi$ and $\chi$.
* (ii)
The following two quantities $\|f\|_{B^{\kappa}_{p,q}(w)}$ and
$\|wf\|_{B^{\kappa}_{p,q}}$ are equivalent norms on $B^{\kappa}_{p,q}(w)$.
* (iii)
For all $n\geq 0$,
$\|D^{n}f\|_{B^{\kappa}_{p,q}(w)}\lesssim\|f\|_{B^{\kappa+n}_{p,q}(w)}$.
* (iv)
Let $1\leq p\leq p^{\prime}\leq+\infty$ and $1\leq q\leq
q^{\prime}\leq+\infty$, then for all $\varepsilon>0$,
$\|f\|_{B^{\kappa-d\left(\frac{1}{p}-\frac{1}{p^{\prime}}\right)}_{p^{\prime},q}(w)}\lesssim\|f\|_{B^{\kappa}_{p,q}(w)}\lesssim\|f\|_{B^{\kappa}_{p^{\prime},q}\left(\langle\cdot\rangle^{d\left(\frac{1}{p}-\frac{1}{p^{\prime}}\right)+\varepsilon}w\right)}$
and
$\|f\|_{B^{\kappa}_{p,\infty}(w)}\lesssim\|f\|_{B^{\kappa}_{p,q}(w)}\lesssim\|f\|_{B^{\kappa-\varepsilon}_{p,q^{\prime}}(w)}.$
* (v)
For all $\varepsilon,\delta>0$ and all $\kappa\in\mathbb{R}$ and all $1\leq
p,q\leq+\infty$,
$B^{\kappa}_{p,q}(w)\quad\text{is compactly embedded in}\quad
B^{\kappa-\varepsilon}_{p,q}(\langle\cdot\rangle^{-\delta}w).$
* (vi)
Suppose that $\kappa>0$ and $\kappa\notin\mathbb{N}$, and for
$f:\mathbb{R}^{d}\to\mathbb{R}$ let us define
$\|f\|_{\mathcal{C}^{\kappa}(w)}=\sum_{|k|\leq[\kappa]}\sup_{x\in\mathbb{R}^{d}}|w(x)\partial^{k}f(x)|+\sum_{|k|=[\kappa]}\sup_{0<|h|\leq
1}\sup_{x\in\mathbb{R}^{d}}\frac{w(x)|\partial^{k}f(x+h)-\partial^{k}f(x)|}{|h|^{\kappa-[\kappa]}}.$
Then
$\mathcal{C}^{\kappa}(w)=\\{f\,:\,\|f\|_{\mathcal{C}^{\kappa}(w)}<+\infty\\}=B^{\kappa}_{\infty,\infty}(w)$
and furthermore $\|\cdot\|_{\mathcal{C}^{\kappa}(w)}$ and
$\|\cdot\|_{B^{\kappa}_{\infty,\infty}(w)}$ are equivalent norms on this
space.
###### Proof.
We only prove the weighted inequality in the fourth point, and we refer to
[37] and the references therein for the other ones. The first and third
inequalities are direct consequences of Lemma 35. For the second one, let us
take $1\leq p<p^{\prime}<+\infty$. Thanks to Jensen's inequality, for any
$\varepsilon>0$ we have
$\|\Delta_{j}f\|_{L^{p}(w)}^{p^{\prime}}\lesssim_{d,\varepsilon}\int_{\mathbb{R}^{d}}|\Delta_{j}f(x)|^{p^{\prime}}w(x)^{p^{\prime}}\langle x\rangle^{\frac{(d+\varepsilon)p^{\prime}}{p}}\langle x\rangle^{-(d+\varepsilon)}\mathop{}\\!\mathrm{d}x=\int_{\mathbb{R}^{d}}|\Delta_{j}f(x)|^{p^{\prime}}\left(w(x)\langle x\rangle^{(d+\varepsilon)\left(\frac{1}{p}-\frac{1}{p^{\prime}}\right)}\right)^{p^{\prime}}\mathop{}\\!\mathrm{d}x.$
The constant in the previous inequality does not depend on $p^{\prime}$. This
gives
$\|\Delta_{j}f\|_{L^{p}(w)}\lesssim_{d,\varepsilon}\|\Delta_{j}f\|_{L^{p^{\prime}}\left(\langle\cdot\rangle^{(d+\varepsilon)\left(\frac{1}{p}-\frac{1}{p^{\prime}}\right)}w\right)}.$
∎
Finally, in order to deal with products of elements of Besov spaces, we give
the following result, which can be proved by standard techniques (see [2,
Lemmas 2.69 and 2.84] and [31, Theorem 3.17 and Corollaries 3.19 and 3.21]).
Note that it mostly relies on the Hölder and Young inequalities, and is
therefore available in the context of weighted spaces.
###### Proposition 40.
[Corollary 2.86 in [2] and Corollary 3.19 in [31]] Let $w$ be an admissible
weight, $\kappa_{2}\leq\kappa_{1}$ with $\kappa_{1}\geq 0$ and suppose that
$\kappa_{1}+\kappa_{2}>0$, and let $1\leq p,p_{1},p_{2},q\leq+\infty$ such
that $\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}$. Let
$\varepsilon,\delta>0$. Then for all
$f\in B^{\kappa_{1}}_{p_{1},q}(w)$ and all
$g\in B^{\kappa_{2}}_{p_{2},q}(w)$,
* (i)
$\left(\mathcal{S}_{j}f\mathcal{S}_{j}g\right)_{j\geq 0}$ converges in
$B^{\kappa_{2}-\varepsilon}_{p,q}(\langle\cdot\rangle^{-\delta}w)$ to a limit
in $B^{\kappa_{2}}_{p,q}(w)$.
* (ii)
We have
$\left\|\lim_{j}\mathcal{S}_{j}f\mathcal{S}_{j}g\right\|_{B^{\kappa_{2}}_{p,q}(w)}\lesssim\left\|f\right\|_{B^{\kappa_{1}}_{p_{1},q}(w)}\left\|g\right\|_{B^{\kappa_{2}}_{p_{2},q}(w)}.$
* (iii)
When $\kappa_{2}\geq 0$, $\lim_{j}\mathcal{S}_{j}f\mathcal{S}_{j}g=fg$ is the
standard pointwise product.
###### Remark 41.
The previous proposition shows that the limit does not depend on the choice
of the blocks, and therefore it canonically extends the notion of product of
functions to a product of distributions as soon as $\kappa_{1}+\kappa_{2}>0$
with $\kappa_{1}\geq 0$. For this reason, we write
$fg=\lim_{j}\mathcal{S}_{j}f\mathcal{S}_{j}g$; this is a bilinear functional
from $B^{\kappa_{1}}_{p_{1},q}(w)\times B^{\kappa_{2}}_{p_{2},q}(w)$ to
$B^{\kappa_{2}}_{p,q}(w)$.
### A.3. Heat kernel estimates
In order to deal with heat kernel estimates on weighted Besov spaces, we
first need some estimates for functions whose Fourier transform is supported
in an annulus. In order to deal with non-Gaussian noise, we consider the heat
semigroup associated with the fractional Laplacian. For a full study of the
fractional Laplacian, we refer to [27]. Here we just define the fractional
Laplacian for smooth functions.
###### Definition 42.
Let $\alpha\in(0,2]$. For any function
$f\in\mathcal{S}(\mathbb{R}^{d};\mathbb{R})$ we define the fractional Laplace
operator $\Delta^{\frac{\alpha}{2}}=-(-\Delta)^{\frac{\alpha}{2}}$ by
$\Delta^{\frac{\alpha}{2}}f=\mathcal{F}^{-1}(-|\cdot|^{\alpha}\hat{f}(\cdot)).$
Furthermore, we define the semigroup associated to
$\Delta^{\frac{\alpha}{2}}$ by
$P^{\frac{\alpha}{2}}_{t}f=\mathcal{F}^{-1}\left(e^{-|\cdot|^{\alpha}t}\hat{f}(\cdot)\right).$
Finally we extend this definition to the whole space $\mathcal{S}^{\prime}$ by
the standard procedure.
Note that when $\alpha=2$, the previous definition gives the standard Laplace
operator and the standard heat semigroup. In this case we simply write $P$
instead of $P^{1}$.
###### Proposition 43.
Let $w$ be an admissible weight and $\lambda^{\prime}$ be defined as in (A.1).
Let $\alpha\in(0,2]$. Let
$\mathcal{A}=\\{\xi\in\mathbb{R}^{d}\,:\,c_{1}\leq|\xi|\leq c_{2}\\}$ be an
annulus, let $a\geq 1$ and let $f:\mathbb{R}^{d}\to\mathbb{R}$ be a function
such that $\mathrm{supp}\hat{f}\subset a\mathcal{A}$. There exists a constant
$c>0$ such that for all $p\geq 1$ and $t\geq 0$,
$\|P^{\frac{\alpha}{2}}_{t}f\|_{L^{p}(w)}\lesssim
e^{-cta^{\alpha}}\|f\|_{L^{p}(w)},$
and for all $0<s\leq t$ and all $\rho\geq 0$,
$\|(P^{\frac{\alpha}{2}}_{t}-P^{\frac{\alpha}{2}}_{s})f\|_{L^{p}(w)}\lesssim
a^{-\alpha\rho}\left|\frac{1}{s^{\rho}}-\frac{1}{t^{\rho}}\right|\|f\|_{L^{p}(w)}.$
Finally, for all $s\leq u<\tau^{\prime}\leq\tau$ and for all $\rho>0$,
$\Big{\|}\big{(}(P^{\frac{\alpha}{2}}_{\tau-s}-P^{\frac{\alpha}{2}}_{\tau-u})-(P^{\frac{\alpha}{2}}_{\tau^{\prime}-s}-P^{\frac{\alpha}{2}}_{\tau^{\prime}-u})\big{)}f\Big{\|}_{L^{p}(w)}\\\
\lesssim
a^{-\alpha\rho}\left(\frac{1}{(\tau^{\prime}-u)^{\rho}}-\frac{1}{(\tau^{\prime}-s)^{\rho}}-\frac{1}{(\tau-u)^{\rho}}+\frac{1}{(\tau-s)^{\rho}}\right)\|f\|_{L^{p}(w)}.$
###### Proof.
We follow the proof of [2, Lemma 2.4]. The first bound is the statement of
that lemma, adapted to the context of weighted spaces and of the fractional
operator; we also refer to [31, Lemma 2.10] for a proof. Since the proofs of
the second and third bounds follow the same strategy, we only detail these
and omit the first. For the second and third bounds, let us define
$E(s,t;\xi):=e^{-|\xi|^{\alpha}t}-e^{-|\xi|^{\alpha}s}.$
Note that in that case, we have
$(P_{t}-P_{s})f(x)=\int_{\mathbb{R}^{d}}E(s,t;\xi)\hat{f}(\xi)e^{i\xi\cdot
x}\mathop{}\\!\mathrm{d}\xi.$
There also exists a smooth function $\varphi:\mathbb{R}^{d}\to\mathbb{R}$
such that $\varphi\equiv 1$ on $\mathcal{A}$ and
$\mathrm{supp}\varphi\subset\\{\xi\in\mathbb{R}^{d}\,:\,\frac{c_{1}}{2}\leq|\xi|\leq
2c_{2}\\}$. Let us define, for all $s\leq t$,
$K(s,t;x)=\int E(s,t;\xi)\varphi(\xi)e^{i\xi x}d\xi,$
and $K_{a}(s,t;x)=a^{d}K(s,t;ax)$. Since $\varphi(a^{-1}\cdot)\equiv 1$ on
$\mathrm{supp}\hat{f}\subset a\mathcal{A}$, we have
$(P_{t}-P_{s})f(x)=\int_{\mathbb{R}^{d}}E(s,t;\xi)\hat{f}(\xi)e^{i\xi\cdot x}d\xi=\int_{\mathbb{R}^{d}}E(s,t;\xi)\varphi(a^{-1}\xi)\hat{f}(\xi)e^{i\xi\cdot x}d\xi.$
Since $E(s,t;a\xi)=E(a^{\alpha}s,a^{\alpha}t;\xi)$, a computation on the
Fourier side then gives
$(P_{t}-P_{s})f(x)=K_{a}(a^{\alpha}s,a^{\alpha}t;\cdot)*f(x).$
Finally, thanks to the weighted Young inequality (Lemma 34), one has
$\|(P_{t}-P_{s})f\|_{L^{p}(w)}\lesssim\|K_{a}(a^{\alpha}s,a^{\alpha}t;\cdot)\|_{L^{1}(\langle\cdot\rangle^{\lambda^{\prime}})}\|f\|_{L^{p}(w)}.$
Note also that since $\lambda^{\prime}\geq 0$ and $a\geq 1$, we have $\langle
a^{-1}x\rangle^{\lambda^{\prime}}\leq\langle x\rangle^{\lambda^{\prime}}$, and
$\|K_{a}(a^{\alpha}s,a^{\alpha}t;\cdot)\|_{L^{1}(\langle\cdot\rangle^{\lambda^{\prime}})}=a^{d}\int_{\mathbb{R}^{d}}|K(a^{\alpha}s,a^{\alpha}t;ax)|\langle x\rangle^{\lambda^{\prime}}\mathop{}\\!\mathrm{d}x=\int_{\mathbb{R}^{d}}|K(a^{\alpha}s,a^{\alpha}t;x)|\langle a^{-1}x\rangle^{\lambda^{\prime}}\mathop{}\\!\mathrm{d}x\\\
\leq\int_{\mathbb{R}^{d}}|K(a^{\alpha}s,a^{\alpha}t;x)|\langle x\rangle^{\lambda^{\prime}}\mathop{}\\!\mathrm{d}x=\|K(a^{\alpha}s,a^{\alpha}t;\cdot)\|_{L^{1}(\langle\cdot\rangle^{\lambda^{\prime}})}.$
Hence, it is enough to prove the proposition for $a=1$ and for all
$0<s^{\prime}\leq t^{\prime}$, and then set $s^{\prime}=a^{\alpha}s$ and
$t^{\prime}=a^{\alpha}t$.
Let $M\in\mathbb{N}$ be such that $2M>d+\lambda^{\prime}$. In order to prove
the proposition, it is then enough to bound
$\big{|}(1+|x|^{2})^{M}K(s,t;x)\big{|}$ by the desired quantity
$\frac{1}{s^{\rho}}-\frac{1}{t^{\rho}}$. We have
$\left(1+|x|^{2}\right)^{M}K(s,t;x)=\int_{\mathbb{R}^{d}}\left((1-\Delta)^{M}e^{ix\cdot}\right)(\xi)\varphi(\xi)E(s,t;\xi)d\xi=\int_{\mathbb{R}^{d}}e^{ix\cdot\xi}(1-\Delta)^{M}\big{(}\varphi(\cdot)E(s,t;\cdot)\big{)}(\xi)d\xi.$
Thanks to the Leibniz formula, there exist constants $(c_{\nu,\kappa})$,
where $\nu$ and $\kappa$ are multi-indices, such that
$\left(1+|x|^{2}\right)^{M}K(s,t;x)=\sum_{|\nu|+|\kappa|\leq
2M}c_{\nu,\kappa}\int_{\mathbb{R}^{d}}e^{i\xi
x}\partial^{\nu}\varphi(\xi)\partial^{\kappa}E(s,t;\xi)d\xi.$
Hence, in order to prove the proposition, one only has to bound all the
derivatives up to order $2M$ of $E(s,t;\cdot)$ on the support of $\varphi$ by
the desired quantity $\frac{1}{s^{\rho}}-\frac{1}{t^{\rho}}$.
Note that the same strategy can be used in the case of the rectangular
increment of the semigroup if we replace $E(s,t;\xi)$ by
$F(s,u,\tau^{\prime},\tau;\xi):=\left(e^{-|\xi|^{\alpha}(\tau-s)}-e^{-|\xi|^{\alpha}(\tau-u)}\right)-\left(e^{-|\xi|^{\alpha}(\tau^{\prime}-s)}-e^{-|\xi|^{\alpha}(\tau^{\prime}-u)}\right).$
The same partial conclusion holds: one only has to control all the
derivatives of $F(s,u,\tau^{\prime},\tau;\cdot)$ up to order $2M$ for
$\xi\in\mathcal{A}$.
Furthermore, observe that
$E(s,t;\xi)=\int_{s}^{t}-|\xi|^{\alpha}e^{-r|\xi|^{\alpha}}\mathop{}\\!\mathrm{d}r.$
Hence, by a direct induction and since $\xi\in\mathcal{A}$, for every
multi-index $k=(k_{1},\cdots,k_{d})\in\mathbb{N}^{d}$ there exists a
polynomial
$P_{k,\xi}(t)=\sum_{l=0}^{|k|}a_{l}^{k}(\xi)t^{l},$
where for all $l\in\\{0,\cdots,|k|\\}$ the maps $\xi\to a_{l}^{k}(\xi)$ are
non-negative smooth functions on $\mathcal{A}$ with $a_{|k|}^{k}(\xi)\neq 0$
for all $\xi\in\mathcal{A}$, such that
$\partial^{k}\big{(}|\cdot|^{\alpha}e^{-r|\cdot|^{\alpha}}\big{)}(\xi)=P_{k,\xi}(r)e^{-r|\xi|^{\alpha}}.$
Furthermore, since $c_{1}\leq|\xi|\leq c_{2}$, there exists a constant $c>0$
(depending on $\mathcal{A}$) such that
$\big{|}\partial^{k}\big{(}|\cdot|^{\alpha}e^{-r|\cdot|^{\alpha}}\big{)}(\xi)\big{|}\lesssim\tilde{P}_{k,\xi}(r)e^{-r|\xi|^{\alpha}}\lesssim_{k,\mathcal{A}}e^{-cr}\lesssim
r^{-(\rho+1)}$
for any $\rho\geq-1$. Hence
$|\partial^{k}E(s,t;\cdot)|\lesssim\int_{s}^{t}r^{-(\rho+1)}\mathop{}\\!\mathrm{d}r\lesssim\frac{1}{s^{\rho}}-\frac{1}{t^{\rho}}.$
With the previous discussion, this gives the wanted result, when we replace
$t$ by $a^{\alpha}t$ and $s$ by $a^{\alpha}s$. In order to deal with the last
estimates, one only has to remember that
$F(s,u,\tau^{\prime},\tau;\xi)=-\int_{s}^{u}|\xi|^{\alpha}\left(e^{-|\xi|^{\alpha}(\tau-r)}-e^{-|\xi|^{\alpha}(\tau^{\prime}-r)}\right)\mathop{}\\!\mathrm{d}r=\int_{s}^{u}\int_{\tau^{\prime}-r}^{\tau-r}|\xi|^{2\alpha}e^{-|\xi|^{\alpha}v}\mathop{}\\!\mathrm{d}v\mathop{}\\!\mathrm{d}r.$
The same argument as before gives the bound
$|\partial^{k}F(s,u,\tau^{\prime},\tau;\cdot)|\lesssim\int_{s}^{u}\int_{\tau^{\prime}-r}^{\tau-r}e^{-cv}\mathop{}\\!\mathrm{d}v\mathop{}\\!\mathrm{d}r.$
Finally, for any $\rho\geq 0$,
$|\partial^{k}F(s,u,\tau^{\prime},\tau;\cdot)|\lesssim\int_{s}^{u}\int_{\tau^{\prime}-r}^{\tau-r}v^{-(\rho+2)}\mathop{}\\!\mathrm{d}v\mathop{}\\!\mathrm{d}r\lesssim\frac{1}{(\tau^{\prime}-u)^{\rho}}-\frac{1}{(\tau^{\prime}-s)^{\rho}}-\frac{1}{(\tau-u)^{\rho}}+\frac{1}{(\tau-s)^{\rho}}.$
Again, this gives the desired result when recalling that one must replace
$\tau$, $\tau^{\prime}$, $u$ and $s$ by $a^{\alpha}\tau$,
$a^{\alpha}\tau^{\prime}$, $a^{\alpha}u$ and $a^{\alpha}s$. ∎
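For a single Fourier mode, the second bound of Proposition 43 reduces to a scalar inequality for $E(s,t;\cdot)$, which can be verified numerically; the explicit constant below comes from $\sup_{x>0}x^{\rho+1}e^{-x}=((\rho+1)/e)^{\rho+1}$ and is our own (non-optimal) choice, not one appearing in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# On a single mode with m = (a |xi|)^alpha, the second bound of
# Proposition 43 amounts to
#   |exp(-m t) - exp(-m s)| <= (C_rho / rho) m^{-rho} (s^{-rho} - t^{-rho}),
# with C_rho = ((rho + 1) / e)^{rho + 1}, obtained by bounding
# m exp(-m r) <= C_rho m^{-rho} r^{-(rho + 1)} and integrating over [s, t].
for _ in range(10000):
    s = rng.uniform(0.01, 2.0)
    t = s + rng.uniform(0.0, 2.0)
    m = rng.uniform(0.1, 50.0)
    rho = rng.uniform(0.1, 3.0)
    lhs = abs(np.exp(-m * t) - np.exp(-m * s))
    c_rho = ((rho + 1) / np.e) ** (rho + 1)
    rhs = (c_rho / rho) * m ** (-rho) * (s ** (-rho) - t ** (-rho))
    assert lhs <= rhs * (1 + 1e-9)
```

This is exactly the mechanism of the proof: the exponential decay of $E$ in the frequency variable is traded for the blow-up $s^{-\rho}-t^{-\rho}$ in time.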
Let us give the following useful and straightforward corollary for the action
of the fractional heat semigroup on weighted Besov spaces. We first recall a
rather elementary but useful lemma ([10, Lemma 4.4]). For the sake of the
reader, we provide a full proof of it.
###### Lemma 44.
Let $\rho\geq 0$ and $\theta\in[0,1]$. There is a constant $c>0$ such that for
any $0<s\leq t$,
$\frac{1}{s^{\rho}}-\frac{1}{t^{\rho}}\leq c(t-s)^{\theta}s^{-(\rho+\theta)}.$
Let $\rho\geq 0$ and $\theta,\theta^{\prime}\in[0,1]$. There exists a
constant $c>0$ such that for any $s\leq u<\tau^{\prime}\leq\tau$,
$\frac{1}{(\tau^{\prime}-u)^{\rho}}-\frac{1}{(\tau^{\prime}-s)^{\rho}}-\frac{1}{(\tau-u)^{\rho}}+\frac{1}{(\tau-s)^{\rho}}\leq
c(\tau-\tau^{\prime})^{\theta}(u-s)^{\theta^{\prime}}(\tau^{\prime}-u)^{-(\rho+\theta+\theta^{\prime})}.$
###### Proof.
Let us remark that
$\frac{1}{s^{\rho}}-\frac{1}{t^{\rho}}=\rho\int_{s}^{t}r^{-(\rho+1)}\mathop{}\\!\mathrm{d}r\leq\rho(t-s)s^{-(\rho+1)}.$
The first inequality then follows by interpolating between this bound and the
trivial bound $\frac{1}{s^{\rho}}-\frac{1}{t^{\rho}}\leq s^{-\rho}$.
For the second inequality, set
$B=\frac{1}{(\tau^{\prime}-u)^{\rho}}-\frac{1}{(\tau^{\prime}-s)^{\rho}}-\frac{1}{(\tau-u)^{\rho}}+\frac{1}{(\tau-s)^{\rho}}.$
We have, thanks to the same integral representation,
$B\lesssim(\tau^{\prime}-u)^{-\rho},$
$B\lesssim(\tau-\tau^{\prime})(\tau^{\prime}-u)^{-(\rho+1)},$
$B\lesssim(u-s)(\tau^{\prime}-u)^{-(\rho+1)},$
and
$B\lesssim(u-s)(\tau-\tau^{\prime})(\tau^{\prime}-u)^{-(\rho+2)}.$
Let us suppose, without loss of generality, that $\theta^{\prime}\leq\theta$,
that is $\theta^{\prime}=\alpha\theta$ for some $0\leq\alpha\leq 1$. Using
the first and the last inequalities, we have
$B\lesssim(\tau-\tau^{\prime})^{\theta}(u-s)^{\theta}(\tau^{\prime}-u)^{-(\rho+2\theta)}.$
Using the first and the second one, we have
$B\lesssim(\tau-\tau^{\prime})^{\theta}(\tau^{\prime}-u)^{-(\rho+\theta)}.$
We interpolate those last two inequalities to have
$B\lesssim(\tau-\tau^{\prime})^{\theta}(u-s)^{\alpha\theta}(\tau^{\prime}-u)^{-(\rho+2\alpha\theta+(1-\alpha)\theta)}=(\tau-\tau^{\prime})^{\theta}(u-s)^{\theta^{\prime}}(\tau^{\prime}-u)^{-(\rho+\theta+\theta^{\prime})}.$
∎
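The first inequality of Lemma 44 holds, by the interpolation argument of the proof, with the explicit (non-optimal) constant $c=\max(1,\rho)$; this can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(3)

# Check 1/s^rho - 1/t^rho <= c (t - s)^theta s^{-(rho + theta)}
# with c = max(1, rho), obtained by interpolating the trivial bound
# (theta = 0) with the mean value bound (theta = 1) from the proof.
for _ in range(10000):
    s = rng.uniform(0.01, 3.0)
    t = s + rng.uniform(0.0, 3.0)
    rho = rng.uniform(0.0, 3.0)
    theta = rng.uniform(0.0, 1.0)
    lhs = s ** (-rho) - t ** (-rho)
    c = max(1.0, rho)
    rhs = c * (t - s) ** theta * s ** (-(rho + theta))
    assert lhs <= rhs * (1 + 1e-9)
```

The constant is visibly not sharp (interpolation gives $\rho^{\theta}\leq\max(1,\rho)$), but the exponents $\theta$ and $\rho+\theta$ are exactly those used in Corollary 45.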
###### Corollary 45.
Let $w$ be an admissible weight and $\alpha\in(0,2]$. Let
$\kappa\in\mathbb{R}$ and let $1\leq p,q\leq+\infty$. Let $t>0$; then for
all $\rho\geq 0$,
$\|P^{\frac{\alpha}{2}}_{t}f\|_{B^{\kappa+\alpha\rho}_{p,q}(w)}\lesssim
t^{-\rho}\|f\|_{B^{\kappa}_{p,q}(w)}.$
For all $\theta\in[0,1]$, all $\rho>0$ and all $0<s\leq t$,
$\|(P^{\frac{\alpha}{2}}_{t}-P^{\frac{\alpha}{2}}_{s})f\|_{B^{\kappa+\alpha\rho}_{p,q}(w)}\lesssim(t-s)^{\theta}s^{-(\rho+\theta)}\|f\|_{B^{\kappa}_{p,q}(w)}.$
Furthermore, for any $\theta,\theta^{\prime}\in[0,1]$ and all $\rho\geq 0$,
$s\leq u<\tau^{\prime}\leq\tau$,
$\left\|\big{(}(P^{\frac{\alpha}{2}}_{\tau-s}-P^{\frac{\alpha}{2}}_{\tau-u})-(P^{\frac{\alpha}{2}}_{\tau^{\prime}-s}-P^{\frac{\alpha}{2}}_{\tau^{\prime}-u})\big{)}f\right\|_{B^{\kappa+\alpha\rho}_{p,q}(w)}\lesssim(\tau-\tau^{\prime})^{\theta}(u-s)^{\theta^{\prime}}(\tau^{\prime}-u)^{-(\rho+\theta+\theta^{\prime})}\|f\|_{B^{\kappa}_{p,q}(w)}.$
###### Proof.
The first bound is a direct consequence of the first bound of Proposition 43.
Indeed, for all $j\geq 0$,
$\|\Delta_{j}P^{\frac{\alpha}{2}}_{t}f\|_{L^{p}(w)}=\|P^{\frac{\alpha}{2}}_{t}(\Delta_{j}f)\|_{L^{p}(w)}\lesssim
e^{-c2^{\alpha j}t}\|\Delta_{j}f\|_{L^{p}(w)}\lesssim 2^{-\alpha\rho
j}t^{-\rho}\|\Delta_{j}f\|_{L^{p}(w)},$
and the result follows via the definition of weighted Besov spaces. For the
second bound, let us remark that we have for all $0<s\leq t$, and thanks to
Proposition 43,
$\|\Delta_{j}(P^{\frac{\alpha}{2}}_{t}-P^{\frac{\alpha}{2}}_{s})f\|_{L^{p}(w)}\lesssim\left(\frac{1}{s^{\rho}}-\frac{1}{t^{\rho}}\right)2^{-\alpha\rho
j}\|\Delta_{j}f\|_{L^{p}(w)}\lesssim 2^{-\alpha\rho
j}(t-s)^{\theta}s^{-(\rho+\theta)}\|\Delta_{j}f\|_{L^{p}(w)},$
where the last inequality comes from the previous lemma. This allows us to
derive the result from the definition of the weighted Besov spaces. Finally,
note that we
have thanks to Proposition 43 and Lemma 44,
$\Big{\|}\Delta_{j}\big{(}(P^{\frac{\alpha}{2}}_{\tau-u}-P^{\frac{\alpha}{2}}_{\tau-s})-(P^{\frac{\alpha}{2}}_{\tau^{\prime}-u}-P^{\frac{\alpha}{2}}_{\tau^{\prime}-s})\big{)}f\Big{\|}_{L^{p}(w)}\lesssim
2^{-\alpha\rho
j}(\tau-\tau^{\prime})^{\theta}(u-s)^{\theta^{\prime}}(\tau^{\prime}-u)^{-(\rho+\theta+\theta^{\prime})}\|\Delta_{j}f\|_{L^{p}(w)}.$
And again one can conclude with the definition of the weighted Besov spaces. ∎
## Appendix B Cauchy-Lipschitz theorem for mSHE in the standard case
We give a short proof of local well-posedness of the mSHE in a simple
context. For more on this, one can consult [30]; for generalizations to more
irregular noise with more involved techniques, see [19] and [21].
###### Theorem 46.
Let $\alpha\in(0,2]$. Let $\vartheta\in(0,\min(1,\alpha))$ and let
$\vartheta<\beta<\alpha-\vartheta$. Let $g\in\mathcal{C}^{2}$,
$u_{0}\in\mathcal{C}^{\beta}$ and $\xi\in\mathcal{C}^{-\vartheta}$. There
exists a unique local solution of the equation
$\partial_{t}u=\Delta^{\frac{\alpha}{2}}u+g(u)\xi$
in the mild form
$u(t,\cdot)=P^{\frac{\alpha}{2}}_{t}u_{0}+\int_{0}^{t}P^{\frac{\alpha}{2}}_{t-s}\xi
g(u(s,\cdot))\mathop{}\\!\mathrm{d}s.$
###### Proof.
First, let us take $u\in\mathcal{C}^{\beta}$. We have
$\|g(u(t,\cdot))\|_{\mathcal{C}^{\beta}}\lesssim\|g\|_{\mathcal{C}^{2}}\|u(t,\cdot)\|_{\mathcal{C}^{\beta}}.$
For $u\in C([0,T];\mathcal{C}^{\beta})$, let us remark that thanks to
standard Bony estimates in Besov-Hölder spaces (Proposition 40),
$\|\xi g(u(t,\cdot))\|_{\mathcal{C}^{-\vartheta}}\lesssim\|g\|_{\mathcal{C}^{2}}\|\xi\|_{\mathcal{C}^{-\vartheta}}\|u(t,\cdot)\|_{\mathcal{C}^{\beta}}.$
Finally, for $0\leq s<t\leq T$, the Schauder estimates of Corollary 45 give
$\|P_{t-s}\xi g(u(s,\cdot))\|_{\mathcal{C}^{\beta}}\lesssim\frac{1}{(t-s)^{\frac{\beta+\vartheta}{\alpha}}}\|g\|_{\mathcal{C}^{2}}\|\xi\|_{\mathcal{C}^{-\vartheta}}\|u(s,\cdot)\|_{\mathcal{C}^{\beta}}.$
Hence, the application $\Gamma$
$\Gamma(u)(t,x)=P_{t}u_{0}+\int_{0}^{t}P_{t-s}\xi
g(u(s,\cdot))\mathop{}\\!\mathrm{d}s$
is well-defined from $C^{0}([0,T];\mathcal{C}^{\beta})$ to itself.
Furthermore, let us remark that for $u,v\in\mathcal{C}^{\beta}$,
$\|g(u)-g(v)\|_{\mathcal{C}^{\beta}}\lesssim\|g\|_{\mathcal{C}^{2}}\|u-v\|_{\mathcal{C}^{\beta}},$
hence for $u,v\in C^{0}([0,T];\mathcal{C}^{\beta})$,
$\sup_{t\in[0,T]}\|\Gamma(u)(t,\cdot)-\Gamma(v)(t,\cdot)\|_{\mathcal{C}^{\beta}}\lesssim\sup_{t\in[0,T]}\int_{0}^{t}\frac{1}{(t-s)^{\frac{\beta+\vartheta}{\alpha}}}\|g\|_{\mathcal{C}^{2}}\|\xi\|_{\mathcal{C}^{-\vartheta}}\|u(s,\cdot)-v(s,\cdot)\|_{\mathcal{C}^{\beta}}\mathop{}\\!\mathrm{d}s\\\
\lesssim
T^{1-\frac{\beta+\vartheta}{\alpha}}\sup_{t\in[0,T]}\|u(t,\cdot)-v(t,\cdot)\|_{\mathcal{C}^{\beta}}.$
Hence, since $\beta+\vartheta<\alpha$, by the Banach fixed point theorem, for
$T$ small enough $\Gamma$ has a unique fixed point $u\in
C^{0}([0,T];\mathcal{C}^{\beta})$. ∎
## References
* [1] S. Athreya, O. Butkovsky, K. Lê, and L. Mytnik. Well-posedness of stochastic heat equation with distributional drift and skew stochastic heat equation. arXiv:2011.13498 [math], November 2020. arXiv: 2011.13498.
* [2] H. Bahouri, J-Y. Chemin, and R. Danchin. Fourier analysis and nonlinear partial differential equations, volume 343 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Heidelberg, 2011.
* [3] I. Bailleul, A. Debussche, and M. Hofmanová. Quasilinear generalized parabolic Anderson model equation. Stoch. Partial Differ. Equ. Anal. Comput., 7(1):40–63, 2019.
* [4] C. Bellingeri, P. Friz, and M. Gerencsér. Singular paths spaces and applications, 2020.
* [5] J. Benedikt, V. Bobkov, P. Girg, L. Kotrla, and P. Takac. Nonuniqueness of solutions of initial-value problems for parabolic p-Laplacian. Electron. J. Differ. Equ, 38:2015, 2015.
* [6] O. Butkovsky and L. Mytnik. Regularization by noise and flows of solutions for a stochastic heat equation. Ann. Probab., 47(1):165–212, 01 2019.
* [7] R. Catellier and M. Gubinelli. Averaging along irregular curves and regularisation of ODEs. Stochastic Process. Appl., 126(8):2323–2366, 2016.
* [8] L. Coutin, R. Duboscq, and A. Réveillac. The Itô-Tanaka Trick: a non-semimartingale approach, July 2019.
* [9] A. M. Davie. Differential equations driven by rough paths: an approach via discrete approximation. Appl. Math. Res. Express. AMRX, (2):Art. ID abm009, 40, 2007.
* [10] A. Deya and S. Tindel. Rough Volterra equations. I. The algebraic integration setting. Stoch. Dyn., 9(3):437–477, 2009.
* [11] P. Friz. Mini-course on Rough Paths, 2009.
* [12] P. Friz and M. Hairer. A course on rough paths. Universitext. Springer, Cham, 2014. With an introduction to regularity structures.
* [13] H. Fujita and S. Watanabe. On the uniqueness and non-uniqueness of solutions of initial value problems for some quasi-linear parabolic equations. Communications on Pure and Applied Mathematics, 21(6):631–652, 1968\. _eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/cpa.3160210609.
* [14] L. Galeati. Nonlinear young differential equations: a review, 2020.
* [15] L. Galeati and M. Gubinelli. Noiseless regularisation by noise, 2020.
* [16] L. Galeati and M. Gubinelli. Prevalence of rho-irregularity and related properties, 2020.
* [17] L. Galeati and F. A. Harang. Regularization of multiplicative sdes through additive noise, 2020. Arxiv.
* [18] M. Gubinelli, P. Imkeller, and N. Perkowski. Paracontrolled distributions and singular PDEs. Forum of Mathematics, Pi, 3(e6), 2015.
* [19] M. Gubinelli, P. Imkeller, and N. Perkowski. A Fourier analytic approach to pathwise stochastic integration. Electron. J. Probab., 21, 2016.
* [20] M. Hairer and X. M. Li. Averaging dynamics driven by fractional Brownian motion. Ann. Probab., 48(4):1826–1860, 07 2020.
* [21] M. Hairer and N. S. Pillai. Regularity of laws and ergodicity of hypoelliptic SDEs driven by rough paths. Ann. Probab., 41(4):2544–2598, 2013.
* [22] F. A. Harang and C. Ling. Regularity of local times associated to volterra-lévy processes and path-wise regularization of stochastic differential equations, 2020. Arxiv.
* [23] F. A. Harang and N. Perkowski. C-infinity regularization of odes perturbed by noise, 2020. Arxiv.
* [24] F. A. Harang and S. Tindel. Volterra equations driven by rough signals, 2019. Arxiv.
* [25] Y. Hu, D. Nualart, and J. Song. A nonlinear stochastic heat equation: Hölder continuity and smoothness of the density of the solution. Stochastic Processes and their Applications, 123(3):1083 – 1103, 2013.
* [26] T. Hytönen, J. Neerven, M. Veraar, and L. Weis. Analysis in Banach Spaces: Volume I: Martingales and Littlewood-Paley Theory. Springer, 1st ed. 2016 edition, November 2016.
* [27] M. Kwasnicki. Ten equivalent definitions of the fractional laplace operator. Fractional Calculus and Applied Analysis, 20(1):7–51, February 2017\. Publisher: De Gruyter Section: Fractional Calculus and Applied Analysis.
* [28] K. Lê. A stochastic sewing lemma and applications. arXiv:1810.10500 [math], March 2020. arXiv: 1810.10500.
* [29] M. Lifshits and T. Simon. Small deviations for fractional stable processes. Annales de l’Institut Henri Poincare (B) Probability and Statistics, 41(4):725–752, July 2005.
* [30] W. Liu and M. Röckner. Stochastic Partial Differential Equations: An Introduction. Universitext. Springer International Publishing, 2015.
* [31] J. C. Mourrat and H. Weber. Global well-posedness of the dynamic $\phi^{4}$ model in the plane. Ann. Probab., 45(4):2398–2476, 2017.
* [32] C. Mueller, L. Mytnik, and E. Perkins. Nonuniqueness for a parabolic SPDE with $\frac{3}{4}-\varepsilon $-Hölder diffusion coefficients. Annals of Probability, 42(5):2032–2112, September 2014. Publisher: Institute of Mathematical Statistics.
* [33] E. Neuman. Pathwise uniqueness of the stochastic heat equation with spatially inhomogeneous white noise. Annals of Probability, 46(6):3090–3187, November 2018. Publisher: Institute of Mathematical Statistics.
* [34] D. Nualart and Y. Ouknine. Regularization of quasilinear heat equations by a fractional noise. Stoch. Dyn., 4(2):201–221, 2004.
* [35] I. Pinelis. Optimum Bounds for the Distributions of Martingales in Banach Spaces. Annals of Probability, 22(4):1679–1706, October 1994. Publisher: Institute of Mathematical Statistics.
* [36] G. Samorodnitsky and M. S. Taqqu. Stable non-Gaussian random processes. Stochastic Modeling. Chapman & Hall, New York, 1994. Stochastic models with infinite variance.
* [37] H. Triebel. Theory of function spaces. III, volume 100 of Monographs in Mathematics. Birkhäuser Verlag, Basel, 2006.
* [38] X. Yang and X. Zhou. Pathwise uniqueness for an SPDE with Hölder continuous coefficient driven by $\alpha $-stable noise. Electronic Journal of Probability, 22, 2017. Publisher: The Institute of Mathematical Statistics and the Bernoulli Society.
# New Insights into Time Series Analysis IV:
Panchromatic and Flux Independent Period Finding Methods
C. E. Ferreira Lopes1, N. J. G. Cross2, F. Jablonski1
1National Institute For Space Research (INPE/MCTI), Av. dos Astronautas, 1758
– São José dos Campos – SP, 12227-010, Brazil
2SUPA (Scottish Universities Physics Alliance) Wide-Field Astronomy Unit,
Institute for Astronomy, School of Physics and Astronomy,
University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ,
UK E-mail: [email protected]
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
New time-series analysis tools are needed in disciplines as diverse as
astronomy, economics and meteorology. In particular, the increasing rate of
data collection at multiple wavelengths requires new approaches able to handle
these data. The panchromatic correlated indices $K^{(s)}_{(fi)}$ and
$L^{(s)}_{(pfc)}$ are adapted to quantify the smoothness of a phased light-
curve resulting in new period-finding methods applicable to single- and multi-
band data. Simulations and observational data are used to test our approach.
The results were used to establish an analytical equation for the amplitude of
the noise in the periodogram for different false alarm probability values, to
determine the dependency on the signal-to-noise ratio, and to calculate the
yield-rate for the different methods. The proposed method has similar
efficiency to that found for the String Length period method. The
effectiveness of the panchromatic and flux independent period finding methods
in single waveband as well as multiple-wavebands that share a fundamental
frequency is also demonstrated in real and simulated data.
###### keywords:
methods: data analysis – methods: statistical – techniques: photometric –
astronomical databases: miscellaneous – stars: variables: general
pubyear: 2018; pagerange: New Insights into Time Series Analysis IV: Panchromatic and Flux Independent Period Finding Methods–References
## 1 Introduction
If the brightness variations of a variable star are periodic, one can fold the
sparsely sampled light-curve with that period and inspect the magnitude as a
function of phase plot. This is equivalent to having all the measurements of
the star's brightness taken within a single period. The shape of the phased
light-curve and the period allow one to determine the physical nature of
variability (pulsations, eclipses, stellar activity, etc.). If the light-curve
is folded with a wrong period, the magnitude measurements scatter widely
rather than aligning into a smoothly varying function of the phase. Other
methods determine the best period by fitting a specific model, such as a sine
function, to the phased light curve. The most common methods used in astronomy are the
following: the Deeming method (Deeming, 1975), phase dispersion minimization
(PDM - Stellingwerf, 1978; Dupuy & Hoffman, 1985), string length minimization
(SLM - Lafler & Kinman, 1965; Dworetsky, 1983; Stetson, 1996; Clarke, 2002),
information entropy (Cincotta et al., 1995), the analysis of variance (ANOVA -
Schwarzenberg-Czerny, 1996), and the Lomb-Scargle periodogram and its
extension using error bars (LS and LSG - Lomb, 1976; Scargle, 1982;
Zechmeister & Kürster, 2009). All of these methods require as input the
minimum frequency ($f_{min}$), the maximum frequency ($f_{max}$), and the
sampling frequency (or the number of frequencies tested - $N_{freq}$). The
input parameters and their constraints to determine reliable variability
detections were addressed by Ferreira Lopes et al. (2018), where a summary of
recommendations on how to determine the sampling frequency and the
characteristic period and amplitude of the detected variations is provided.
From these constraints, a good period finding method should find all periodic
features if the time series has enough measurements covering nearly all
variability phases (Carmo et al., 2020).
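As a concrete illustration of these shared inputs, the sketch below (function name ours) builds a simple uniform grid of trial frequencies from $f_{min}$, $f_{max}$, and $N_{freq}$; the actual spacing recommendations are those of Ferreira Lopes et al. (2018).

```python
import numpy as np

def trial_frequencies(f_min, f_max, n_freq):
    """Return n_freq equally spaced trial frequencies in [f_min, f_max].

    A minimal sketch of the common inputs to period finding methods;
    real pipelines choose the spacing from the time-span of the data.
    """
    return np.linspace(f_min, f_max, n_freq)

# e.g. search periods between 0.1 and 100 time units
grid = trial_frequencies(f_min=0.01, f_max=10.0, n_freq=100_000)
```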
Light curve shape, non-Gaussianity of noise, non-uniformities in the data
spacing, and multiple periodicities all modify the significance of the
periodogram; to increase completeness and reliability, more than one period
finding method is usually applied to the data (e.g. Angeloni et al., 2012; Ferreira
Lopes et al., 2015a, c). The capability to identify the "true" period is
increased by using several methods (see Sect. 4.3). However, this does not
prevent the appearance of spurious results. Therefore, new insights into
signal detection which provide more reliable results are welcome mainly when
the methods provide dissimilar periods. Moreover, the challenge of big-data
analysis would benefit greatly from a single, reliable detection and
characterization method. The present paper is part of a series of studies
performed in the project New Insights into Time Series Analysis (NITSA),
in which all steps in mining photometric data on variable stars are being
reviewed. The selection criteria were reviewed and improved (Ferreira Lopes &
Cross, 2016, 2017), optimized parameters to search and analyse periodic
signals were introduced (Ferreira Lopes et al., 2018), and now new frequency
finding methods are proposed to increase our inventory of tools to create and
optimize automatic procedures to analyse photometric surveys. The outcome of
this project is crucial if we are to efficiently select the most complete and
reliable sets of variable stars in surveys like the VISTA Variables in the Via
Lactea (VVV - Minniti et al., 2010; Angeloni et al., 2014), Gaia (Perryman,
2005), the Transiting Exoplanet Survey Satellite (TESS - Ricker et al., 2015),
the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS -
Chambers et al., 2016), a high-cadence All-sky Survey System (ATLAS - Tonry et
al., 2018), Zwicky Transient Facility (ZTF - Bellm et al., 2019) as well as
the next generation of surveys like PLAnetary Transits and Oscillation of
stars (PLATO - Rauer et al., 2014) and Large Synoptic Survey Telescope (LSST -
Ivezic et al., 2008).
Many efforts have been made to generalize period finding methods to multi-band
data. Süveges et al. (2012) utilized principal component analysis to optimally
extract the best period from multi-band data; however, the multi-band
observations must be taken simultaneously, which imposes an important
limitation on the method. VanderPlas & Ivezić (2015) introduced a general
extension of the Lomb-Scargle method, and Mondrik et al. (2015) extended the
ANOVA algorithm, from single-band to multiple bands not necessarily observed
simultaneously. Methods combining the results from two different classes of
period-determination algorithms have also been proposed (Saha & Vivas, 2017).
The current paper adds one piece to this puzzle.
Section 2.1 describes the new set of periodic signal detection methods as well
as their limitations and constraints. Next, numerical simulations are used to
test our approach in Sect. 3. From this, the efficiency rate and the
fractional fluctuation of noise (FFN) are determined. Real data are also used
to support our final results (see Sect. 4). Finally, our conclusions are
presented in Sect. 5.
## 2 Panchromatic and flux independent frequency finding methods
The Welch-Stetson variability index (Stetson, 1996) was generalized and new
indices were introduced by Ferreira Lopes & Cross (2016), where the
panchromatic and flux independent variability indices were proposed. These
indices are used to discriminate variable stars from noise. To summarise, the
panchromatic index is related to the correlation amplitude (or correlation
height) while the second one computes the correlation sign, i.e. if the
correlation value is negative or positive without taking into account the
amplitude. The flux independent index provides correlation information that is
weakly dependent on the amplitude or the presence of outliers. These features
enable us to reduce the misclassification rate and improve the selection
criteria. Moreover, this parameter is designed to compute correlation values
among two or more observations. The correlation order ($s$), gives the number
of observations correlated together, i.e., $s=2$ means correlation computed
between pairs of observations and $s=3$ means that correlations are computed
on triplets. However, these observations must be close in time, i.e., taken
within a time interval much smaller than the main variability period.
Inaccurate or incorrect outputs will be obtained if this restriction is not
enforced. Therefore, data sets and sources with observations close in time are
referred to as correlated data; otherwise, as non-correlated data.
The efficiency rate to detect variable stars is maximised using the
panchromatic and flux independent variability indices when the number of
correlations is increased, i.e when there is a strong variability between
bins, but only slight differences between the measurements in each correlation
bin. These variability indices only use those measurements that are close in
time (i.e., a time interval much smaller than the variability period) and
hence this constraint substantially reduces the number of possible
correlations for sparse data. If we consider a light-curve folded on its true
variability period, with little noise, we could calculate these indices using
standard correlated observations grouped in time, missing the observations
where too few meet the criteria of having at least $s$ closer than $\Delta\,T$
in time. Alternatively, all measurements can be used to compute the indices if
the observations are grouped by phase instead of time. This is the main idea
behind the panchromatic and flux independent frequency finding methods.
For the main variability period, the observations close in phase should
return strong correlation values. Since many variable stars show most
variation as a function of phase, and little variation from period to period,
recalculating the indices this way should return indices that are as strong as
those grouped by time. On the other hand, if the light-curve is folded on an
incorrect period, the calculated phase is no longer a useful correlation
measure, so correlations will be weaker, much like adding more noise to the
data. The statistics considered in this paper are unlikely to be useful for
data with multiple periodicities or if noise keeps its autocorrelation for
phased data. In the next section, we propose an approach to compute the
panchromatic and flux independent indices in phase and hence provide a new
period finding method.
Be aware that the definition of expected noise given by Ferreira Lopes et
al. (2015a) needs to be corrected, as pointed out by the referee of the current
paper. The authors provided the correct theoretical definition of expected
noise but the mathematical expression was incorrect. In the case of
statistically independent events, the probability that a given event will
occur is obtained by dividing the number of events of the given type by the
total number of possible events, according to the authors. There will always
be 2 desired permutations (either all positive or all negative) for any s
value. However, the total number of events is $2^{s}$, not $s^{2}$ as defined
by the authors. The correct definition for expected noise value is then given
by,
$P_{s}=\frac{2}{2^{s}}=2^{(1-s)}$ (1)
The relative differences between the old and new definition for $s=2$ and
$s=4$ are zero, while for $s=3$ it is $\sim 11\%$. However, for $s$ values larger
than 4 these differences increase considerably. The authors have only used the
noise definitions to set the noise level for s values smaller than 4, so far.
Therefore, this mistake has not provided any significant error in the results
of the authors to date.
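The correction can be checked numerically. The sketch below (function names ours) compares the corrected expression of Eq. 1 against the earlier one with $s^{2}$ possible events, recovering the relative differences quoted above:

```python
def p_noise(s):
    """Eq. (1): expected noise value, 2 desired permutations out of 2**s."""
    return 2 / 2**s          # = 2**(1 - s)

def p_noise_old(s):
    """The earlier (incorrect) expression assuming s**2 possible events."""
    return 2 / s**2

# Relative difference: zero for s = 2 and s = 4, ~11% for s = 3,
# and growing in magnitude for larger s.
rel = {s: abs(1 - p_noise_old(s) / p_noise(s)) for s in (2, 3, 4, 5)}
```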
### 2.1 New frequency finding methods
In common with other frequency finding methods, in our approach the light curve
data are folded with a number of trial frequencies (periods). The trial
frequency that produces the smallest scatter in the phase diagram according to
some criteria is taken as the estimate of the real period of variations. In
our approach we combine data from multiple bands with special transformations
and characterize the phase diagram scatter using variability indices
calculated from correlations in phase (rather than correlations in the
observation times, as used in previous works). The even-statistic
(for more details see Paper II- Ferreira Lopes & Cross, 2017) was used to
calculate the mean, median and deviation values. It only requires that the
data have an even number of measurements (if there is an odd number, the
median value is not used), while the equations to compute these parameters are
the same as the standard ones. This distinction matters most when the data
have only a few measurements: the parameters assume values equal to the
standard ones when the data have an even number of measurements, and they are
quite similar for large data samples (typically larger than 100).
Consider a generic time series observed in multiple wavebands with
observations not necessarily taken simultaneously in each band. This could
also mean a time series of the same sources taken by different instruments in
single or multi-wavelength observations. The sub-index $w$ is used to denote
each waveband. Using this notation, all data are listed in a single table
where the $i^{th}$ observation is
$\mathrm{\left[t_{i,w},\,\delta_{i,w}\right]}$ where $\delta_{i,w}$ is given
by
$\delta_{i,w}=\sqrt{\frac{n_{w}}{n_{w}-1}}\cdot\left(\frac{y_{i,w}-\bar{y_{w}}}{\sigma_{i,w}}\right),$
(2)
where $n_{w}$ is the number of measurements, $y_{i,w}$ are the flux
measurements, $\overline{y_{w}}$ is the even-mean computed using those
observations inside of a $3\sigma$ clipped absolute even-median deviation (for
more details see Paper II- Ferreira Lopes & Cross, 2017), and $\sigma_{i,w}$
denotes the flux errors of waveband $w$. One should note that greater success
in searching for signals in multi-band light curves is found when a penalty
and an offset between the bands are used (for more detail see Long et al.,
2014; VanderPlas & Ivezić, 2015; Mondrik et al., 2015). However, these
modifications require a more complex model since more constraints need to be
added. For our purpose, we suppose that all wavebands used are well populated
in order to provide a good estimation of the mean value.
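Equation 2 can be sketched as follows (function name ours); for simplicity this version uses the plain mean rather than the $3\sigma$-clipped even-mean of Paper II.

```python
import numpy as np

def standardized_residuals(y, sigma):
    """Sketch of Eq. (2) for a single waveband w.

    Assumption: the plain mean stands in for the even-mean computed on
    observations within a 3-sigma clipped absolute even-median deviation.
    """
    y = np.asarray(y, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n = len(y)
    # sqrt(n/(n-1)) debiases the sample normalization; residuals are
    # weighted by the per-measurement flux errors sigma_i.
    return np.sqrt(n / (n - 1.0)) * (y - y.mean()) / sigma
```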
The vector given by $\mathrm{\left[t_{i,w},\,\delta_{i,w}\right]}$ values
contains data measured in either a single waveband or multiple wavebands; for
instance, $w$ assumes a single value if a single waveband is used. Therefore the $w$ sub-index
is only useful to discriminate different wavebands and to compute the
$\delta_{i,w}$. To simplify, the $w$ subindex is suppressed in the following
steps. Therefore, the notation regarding all observations ($N$), observed in
single or multi-wavebands by a single or different telescopes, is given by
$\mathrm{\left[(t_{1},\,\delta_{1}),(t_{2},\,\delta_{2}),\cdots,(t_{N},\,\delta_{N})\right]}$.
From this, the panchromatic ($PL^{(s)}$) and flux independent ($PK^{(s)}$)
period finding indices are computed as follows:
1. 1.
First, consider a frequency sampling
$\mathrm{F=[f_{1},f_{2},\cdots,f_{N_{freq}}]}$.
2. 2.
Next, the phase values
$\Phi^{{}^{\prime}}=[\phi^{{}^{\prime}}_{1},\phi^{{}^{\prime}}_{2},\cdots,\phi^{{}^{\prime}}_{N}]$
are computed by $\mathrm{\phi^{{}^{\prime}}_{i}=t_{i}\times f_{1}-\lfloor
t_{i}\times f_{1}\rfloor}$, where $t_{i}$ is the time and the
$\mathrm{\lfloor\cdot\rfloor}$ denotes the floor function, so that
$\mathrm{\phi^{\prime}_{i}}$ is the fractional part of $\mathrm{t_{i}\times f_{1}}$.
3. 3.
The phase values are re-ordered in ascending sequence of phase where
$\Phi=[\phi_{1},\phi_{2},\cdots,\phi_{N}]$ and $\phi_{i}\leq\phi_{j}$ for all
$i<j$, and the $\delta_{i}$ are re-ordered with their respective phases,
so that for each phase value we have
$\mathrm{\left[(\phi_{1},\,\delta_{1}),(\phi_{2},\,\delta_{2}),\cdots,(\phi_{N},\,\delta_{N})\right]}$.
4. 4.
Next, the following parameter $Q$ of order $s$ is computed as;
$Q^{(s)}_{i}=\Lambda^{(s)}_{i,\,\cdots\,,i+s-1}\sqrt[s]{\left|\delta_{i}\,\cdots\,\delta_{i+s-1}\right|}$
(3)
where the $\Lambda^{(s)}$ function is given by
$\Lambda^{(s)}_{i}=\begin{cases}+1&\text{if }\delta_{i}>0,\,\cdots,\,\delta_{i+s-1}>0;\\ +1&\text{if }\delta_{i}<0,\,\cdots,\,\delta_{i+s-1}<0;\\ 0&\text{if }\delta_{i}=0,\,\cdots,\,\delta_{i+s-1}=0;\\ -1&\text{otherwise.}\end{cases}$ (4)
Since we are assuming that the variables are periodic, when the period is
correct, $\phi\sim 0$ should be equivalent to $\phi\sim 1$, so the last phases
can be correlated with the first phases, i.e. if $s=2$, $Q^{(s)}_{N}$
correlates $\delta_{N}$ with $\delta_{1}$, and if $s=3$, $Q^{(s)}_{N-1}$
correlates $\delta_{N-1}$ with $\delta_{N}$ and $\delta_{1}$, and
$Q^{(s)}_{N}$ correlates $\delta_{N}$ with $\delta_{1}$ and $\delta_{2}$, and
so on. This consideration ensures the non repetition of any term and keeps the
number of $Q^{(s)}$ terms equal to the number of observations. The sub-index
$s$ sets the number of observations that will be combined (for more details
see Paper I \- Ferreira Lopes & Cross, 2016).
5. 5.
Finally, the period indices, equivalent to the flux-independent and
panchromatic indices are given by,
$PK^{(s)}=\frac{N^{(+)}}{N}$ (5)
and,
$PL^{(s)}=\frac{1}{N}\sum_{i=1}^{N}Q^{(s)}_{i},$ (6)
where $N^{(+)}$ means the total number of positive correlations (see Eq. 4).
Indeed, the total number of negative correlations ($N^{(-)}$) is given by
$N^{(-)}=N-N^{(+)}$.
6. 6.
The steps ii to v are repeated for all frequencies, $f_{1}$ to $f_{N_{freq}}$.
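Steps (i) to (vi) can be sketched as a minimal NumPy illustration (function name ours). It takes the pre-computed residuals $\delta_{i}$ as input, handles the cyclic wrap-around of step (iv) with `np.roll`, and returns the two indices for one trial frequency; looping over the grid gives step (vi).

```python
import numpy as np

def pk_pl(t, delta, freq, s=2):
    """Sketch of the PK^(s) and PL^(s) period indices, one trial frequency."""
    t = np.asarray(t, dtype=float)
    d = np.asarray(delta, dtype=float)
    # step (ii): fold times into phases for the trial frequency
    phase = t * freq - np.floor(t * freq)
    # step (iii): re-order residuals in ascending phase
    d = d[np.argsort(phase)]
    n = len(d)
    # step (iv): correlate s consecutive residuals, wrapping phase ~1 to ~0
    groups = np.stack([np.roll(d, -k) for k in range(s)])     # shape (s, n)
    lam = np.where(np.all(groups > 0, axis=0) |
                   np.all(groups < 0, axis=0), 1.0, -1.0)     # Eq. (4)
    lam[np.all(groups == 0, axis=0)] = 0.0
    q = lam * np.abs(np.prod(groups, axis=0)) ** (1.0 / s)    # Eq. (3)
    # step (v): flux independent and panchromatic period indices
    pk = np.count_nonzero(lam > 0) / n                        # Eq. (5)
    pl = q.mean()                                             # Eq. (6)
    return pk, pl
```

For a noisy sinusoid, $PK^{(2)}$ evaluated at the true frequency approaches its maximum, while a wrong trial frequency scrambles the phase ordering and drives it toward the noise level.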
One should be aware that $\delta_{i,w}$ values are strongly dependent on the
average and hence incorrect values can be found for Algol type variable stars
and time-series which have outliers, for example. In order to measure the
average value more accurately, only those observations within three times the
absolute even-median deviation of the even-median were used.
Additionally, $\Lambda^{(s)}$ is a bit different from that proposed for the
flux independent variability indices. The current version assumes
$\Lambda^{(s)}=0$ if $\delta_{i}=0\,\,\cdots\,\,\delta_{i+s-1}=0$. This would
produce $PK^{(s)}$ and $PL^{(s)}$ equal to zero in the trivial case of all
observations being exactly equal, e.g. a noiseless non-variable example, i.e.,
$y_{i}=y_{j}$ for all $i$ values (see the last two panels of Fig. 2).
Figure 1: The $PK_{\rm max}^{(s)}$ as function of number of measurements for a
sinusoidal function (see Eq. 7) for $s=2$ (solid black line), $s=3$ (solid red
line), $s=4$ (solid blue line) where $N^{-}_{(min)}=2$ was adopted.
### 2.2 The maximum $\rm PK^{(s)}$ considering different signals
The maximum value allowed for the $PK^{(s)}$ parameter considering the true
variability frequency ($\mathrm{f_{true}}$) of a signal is limited by the
number of measurements which lead to $\Lambda^{(s)}=-1$, i.e., the minimum
number of times that one of the consecutive phase observations has a value on
the opposite side of the even-mean ($N_{(c)}$). This restriction limits the
maximum value achievable by $PK^{(s)}$ ($PK_{\rm(max)}^{(s)}$).
$PK_{\rm(max)}^{(s)}$ also varies with the order, $s$, since the number of
$\Lambda^{(s)}=-1$ corresponding to observations on opposite sides of the
even-mean varies with $s$. Indeed, $N_{(c)}$ depends on the shape of the
signal. For instance, $N_{(c)}=1$ for a line, $N_{(c)}=2$ for a sinusoidal
signal, $N_{(c)}=4$ for an eclipsing binary light curve. Moreover, if a set of
measurements is given by a line $y=ax+b$ ($a\neq 0$), the number of negative
correlation measurements will be $N^{-}_{(min)}=1$ for $s=2$,
$N^{-}_{(min)}=2$ for $s=3$, and $N^{-}_{(min)}=3$ for $s=4$. Therefore,
$N^{-}_{(min)}$, and hence the maximum $PK^{(s)}$ value, varies with s. These
considerations can be expressed as following
$N^{-}_{(min)}=N_{(c)}\times\left(s-1\right)$. Lastly, the general relation
for $PK_{\rm(max)}^{(s)}$ can be written as
$PK_{\rm(max)}^{(s)}=1-\frac{N^{-}_{(min)}}{N}=1-\frac{N_{(c)}\times\left(s-1\right)}{N}.$
(7)
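Equation 7 is straightforward to evaluate; a minimal sketch (function name ours), with example values for 1000 phased measurements:

```python
def pk_max(n, n_c, s):
    """Eq. (7): upper bound on PK^(s) at the true frequency, given n
    observations and n_c sign changes of the signal about the even-mean."""
    return 1 - n_c * (s - 1) / n

# For n = 1000 and s = 2: sinusoid-like signals (n_c = 2) give 0.998,
# eclipsing binaries (n_c = 4) give 0.996.
```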
Figure 2: Phase diagrams for pulsating stars (RR, Ceph, RRblz), eclipsing
binaries (type EA and EB), rotational variable stars (Rot), and white noise
from uniform and normal distributions. The data and model are shown as black
dots and solid lines, respectively. The even-mean value considering those
measurements within three times the absolute even-median deviation is shown as
orange dashed lines. Moreover, the $PK^{(2)}$ and $PL^{(2)}$ values for the
real and modelled data are displayed at the bottom of each diagram.
A similar analytic equation for $PL^{(s)}$ index is not possible since it
depends on the amplitude. On the other hand, two features of $PK^{(s)}$ can be
seen in Eq. 7. First, $PK_{\rm(max)}^{(s)}$ values computed for two time-
series having the same $N^{-}_{(\rm min)}$ value but a different number of
observations differ (see Fig. 1). Second, all frequencies close to
$\mathrm{f_{\rm true}}$ produce $PK^{(s)}\simeq PK_{\rm(max)}^{(s)}$, since
$N\gg N^{-}_{(\rm min)}$. These frequencies include the sub-harmonic frequencies
of $\mathrm{f_{true}}$. Indeed, $\mathrm{f_{true}}$ will always return the
$PK_{\rm(max)}^{(s)}$ value for time-series models or signals without noise
(see blue lines on Fig. 2). However, when noise is included, statistical
fluctuations can lead to the wrong identification of $\mathrm{f_{\rm true}}$.
This means that the $PK^{(s)}$ and consequently $PL^{(s)}$ parameters can
return a main frequency that implies a smooth phase diagram but is different
from $\mathrm{f_{\rm true}}$.
The values of $N_{(c)}$ and therefore $N^{-}_{(min)}$ depend on the
arrangement of observations around the even-mean and are intrinsically related
to the signal-to-noise as discussed above (see Fig. 2). For instance, the
detached eclipsing binary (EA) signal looks like noise if only the
measurements outside of the eclipse are observed, e.g. if the phase fraction
of the eclipses is $\lesssim\frac{2}{N}$. This means that the detection of the
correct period using the $PK^{(s)}$ and $PL^{(s)}$ parameters will be
extremely dependent on the number of observations at the eclipses. On the
other hand, the RR Lyr (RR) objects show a small dispersion around the even-
mean even if the signal-to-noise ratio (SNR) is a bit low. To summarize, the
$PK^{(s)}$ power will have a peak for all frequencies which produce smooth
phase diagrams, with the largest spectrum peak corresponding to the smallest
$N^{-}_{(min)}$. On the other hand, the $PK_{\rm max}^{(s)}$ results in
discrete values and so is more degenerate for a small number of observations.
Figure 2 shows the phase diagrams for a Cepheid (Ceph), a RR Lyrae (RR), a RR
Lyrae having the Blazhko effect (RRblz), eclipsing binaries (EA and EB), a
rotational variable (Rot), and two white noise light curves generated by
uniform and normal distributions (for more details see Sect. 3). One thousand
equally spaced phased measurements were used to plot each model and to
compute the $PK^{(s)}$ shown in each panel ($PK_{(Model)}$). For these
examples, we normalize amplitude to allow us to separate out the effects of
the morphology of the light curves on these indices. As already mentioned,
$PK^{(s)}$ is not directly dependent on the amplitude of the signal, as
opposed to $PL^{(s)}$, which is. As expected, the model crosses the even-mean
twice ($N^{-}_{(min)}=2$) for the Ceph, RR, RRblz, and Rot models giving
$PK_{\rm max}^{(2)}=0.998$. For the eclipsing binaries the model crosses the
even-mean four times, implying a $PK_{\rm max}^{(2)}=0.996$. Actually, the EA
and EB models have $PK_{\rm max}^{(2)}=0.994$ due to fluctuations of the model
about the even-median. For uniform and normal distributions (i.e., time series
mimicking noisy data), $PK_{\rm max}^{(s)}\simeq\gamma+2^{(1-s)}$, where $\gamma$ is a
positive number related to the maximum fluctuation of positive correlations.
However, the $PK_{\rm max}^{(s)}=1$ only happens when
$\delta_{i}=0\,\forall\,i$ values, i.e. noiseless non-variation.
The $PL^{(s)}$ values for the data are biased by the amplitude, i.e., signals
having different amplitudes will provide distinct values. Consider the index
values computed using the data: the EA and EB signals have the smallest
$PL^{(2)}$ values among all models tested due to their morphology, since the
majority of measurements are near to the even-mean and hence the peak power is
reduced. The Ceph, RR, and RRblz signals usually have large amplitudes and
hence large $PL^{(2)}$ values. The highest $PL^{(2)}$ values are found for RR
and Ceph signatures since there is a larger fraction of measurements distant
from the even-mean than for the other models. The $PL^{(2)}$ value for RR
stars is about half that found for the Rot model. This is a property related
to the morphology of the phase diagram. Finally, the smallest values are found
for pure-noise signals (normal and uniform distributions).
An examination of Eq. 7 can be used to estimate the theoretical expected value
for any signal type. However, in real data, where noise is included, the
$PK_{\rm max}^{(s)}$ values are smaller (see Fig. 2) since the values decrease
with the increase in the dispersion of individual measurements about the even-
mean. Therefore, $Rot$ and $EA$ models have the largest reduction in $PK_{\rm
max}^{(s)}$. In contrast, the smallest reduction of $PK_{\rm max}^{(s)}$ is
found for Ceph and RR models since the dispersion about the even-mean is small. A
detailed analysis of the impact of the signal-to-noise ratio on $PK^{(s)}$ for
different signal types is performed in Sect. 3.
### 2.3 The optimal $s$ value
The optimal $s$ value ($s_{(opt)}$) will be found when the difference between
$PK^{(s)}$ computed on the phase diagram folded using $\mathrm{f_{\rm true}}$
($PK_{\rm max}^{(s)}$) and those ones found at other frequencies $PK_{\rm
f_{other}}^{(s)}$ is maximum. This difference can be written as,
$PK_{(\mathrm{f_{\rm true}})}^{(s)}-PK_{(\mathrm{f_{other}})}^{(s)}\simeq PK_{\rm max}^{(s)}-PK_{\rm noise}^{(s)}$ (8)
where we consider that $PK_{(\mathrm{f_{other}})}^{(s)}\simeq PK_{\rm noise}^{(s)}$. Comparing expressions 7 and 8 then gives
$\frac{N^{+}_{\rm f_{true}}}{N}\simeq 1-\frac{N_{(c)}\times\left(s_{(\rm opt)}-1\right)}{N}$ (9)
and hence,
$s_{(\rm opt)}\simeq 1+\frac{\left(N-N^{+}_{\rm f_{true}}\right)}{N_{(c)}},$
(10)
where $N^{+}_{\rm f_{true}}$ is the number of positive correlations for
$\mathrm{f=f_{\rm true}}$. Equation 10 provides the $s$ value where the
maximum difference between the $PK_{\rm(\mathrm{f_{\rm true}})}^{(s)}$ and
$PK_{\rm(f_{other})}^{(s)}$ is found.
For high-SNR light curves, i.e. $N-N^{+}_{\rm f_{\rm true}}\rightarrow N_{(\rm
c)}$, $s_{(\rm opt)}=2$ since for this case $N^{+}_{\rm f_{\rm
true}}\rightarrow N$. Indeed, the $N^{+}_{\rm f_{\rm true}}$ is directly
proportional to the SNR while $N_{(c)}$ is the opposite, i.e. the increase of
SNR increases $N^{+}_{\rm f_{true}}$ and decreases $N_{(c)}$. Therefore, for
low-SNR $N^{+}_{\rm f_{\rm true}}\rightarrow N/2$ and hence $s_{(\rm
opt)}\approx 1+N/\left(2N_{(\rm c)}\right)$. However, in that limit $N_{(c)}$ also
tends to $N/2$ and hence $s_{(\rm opt)}\approx 2$. To summarize, the choice of
$s$ value depends on the signal type and SNR since $N_{(\rm c)}$ and
$N^{+}_{\rm f_{\rm true}}$ vary with both parameters. For instance, a large
value of $N_{(\rm c)}$ is expected for EA binary systems whatever its SNR and
hence a small $s$ value is recommended to increase the range of signal type
detected. The choice of $s$ value must take all of these properties into
account.
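Equation 10 can be evaluated directly. The sketch below uses illustrative counts (not values from the paper) to show how $s_{(\rm opt)}$ tends to 2 in both the high- and low-SNR limits discussed above.

```python
def s_opt(n, n_pos_true, n_crossings):
    """Approximate optimal correlation order s (Eq. 10).

    n           -- total number of measurements N
    n_pos_true  -- N^+_{f_true}, positive correlations at the true frequency
    n_crossings -- N_(c), number of even-mean crossings
    """
    return 1.0 + (n - n_pos_true) / n_crossings

# High SNR: nearly all correlations are positive and N - N^+ -> N_(c):
print(s_opt(100, 96, 4))   # 2.0
# Low SNR: N^+ -> N/2 and N_(c) -> N/2, so s_opt again tends to 2:
print(s_opt(100, 50, 50))  # 2.0
```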
## 3 Numerical tests and simulations
Artificial variable stars were simulated using a similar set of models as
those produced in paper III (for more details see Ferreira Lopes et al.,
2018). Seven simulated time series were created that mimic rotational
variables ($Y_{(Rot)}$), detached eclipsing binaries ($Y_{(EA)}$), eclipsing
binaries ($Y_{(EB)}$), pulsating stars ($Y_{(Ceph)}$, $Y_{(RR)}$,
$Y_{(RRblz)}$), and white noise ($Y_{(Uniform)}$ and $Y_{(Normal)}$). The
Ceph, RR, RRblz, EA, and Rot models were based on the CoRoT light curves
CoRoT-211626074, CoRoT-101370131, CoRoT-100689962, CoRoT-102738809, and
CoRoT-110843734, respectively. The variability types were previously
identified by Debosscher et al. (2007); Poretti et al. (2015); Paparó et al.
(2009); Chadid et al. (2010); Maciel et al. (2011); Carone et al. (2012) and
De Medeiros et al. (2013) while the variability period and amplitudes were
reviewed by Ferreira Lopes et al. (2018). The models of variable stars were
found using harmonic fits having $12$, $12$, $12$, $24$, $24$, and $6$
coefficients for $Ceph$, $RR$, $RRblz$, $EA$, $EB$, and $Rot$ variable stars,
respectively. The white noise simulations given by a normal distribution were
used to determine the fractional fluctuation of noise (FFN). The Ceph, RR,
RRblz, EA, and Rot models were used to realistically test and illustrate our
approach.
The efficiency rate of any frequency finding method depends mainly on the
signal type, the signal-to-noise ratio, and the number of observations.
Therefore, three sets of simulations having $20$, $60$, and $100$ measurements
for an interval of $\rm SNR$ (see Eq. 11) ranging from $\sim 1$ to $\sim 20$
were created for the models found in Fig. 2. In particular, $20\%$ of
measurements were randomly selected at the eclipses for EA and EB simulations.
This is required because these simulations look like noise if no measurement
is found at the eclipses, and is justified because any light-curves that are
processed with period-finding algorithms in NITSA must already have been
selected as variables, so eclipsing binaries with few measurements must have a
relatively high fraction at the eclipses. There will be a selection effect
against binaries with narrow eclipses, since the probability of them being
detected as variables is reduced. Values drawn randomly from a normal
distribution were used to add noise to the simulations, and the error bars were
set to the differences between the model and the simulated data. The error bars
are not relevant for computing the $PK^{(s)}$ values. However, they are
necessary to determine the $PL^{(s)}$ parameters. The $\rm SNR$ was computed as,
$SNR=\frac{A}{2.96\times eMAD(\delta_{y})}$ (11)
where $A$ is the signal amplitude, $\delta_{y}$ are the residuals (each
observed measurement minus its model prediction), and $eMAD$ is the even-median
of the absolute deviations from the even-median. The $eMAD$ is a slight
modification of the median absolute deviation from the median ($MAD$). The
quantity $2.96\times eMAD(\delta_{y})$ is equivalent to twice the standard
deviation, but it is a robust estimate of the standard deviation when outliers
are present (e.g. Hoaglin et al., 1983). For completeness, other SNR estimates
that do not require a model were also tested (e.g. Rimoldini, 2013). According
to our tests, the latter usually overestimate the SNR compared with the values
computed by Eq. 11.
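A minimal sketch of the SNR estimator of Eq. 11. The even-median statistic of Ferreira Lopes & Cross (2016) is approximated here by the ordinary median, which is an assumption of this sketch; the residuals are synthetic.

```python
import numpy as np

def snr_emad(amplitude, residuals):
    """SNR of Eq. 11: A / (2.96 * eMAD(residuals)). The even-median is
    approximated by the ordinary median (assumption of this sketch)."""
    med = np.median(residuals)
    emad = np.median(np.abs(residuals - med))  # stand-in for eMAD
    return amplitude / (2.96 * emad)

rng = np.random.default_rng(42)
resid = rng.normal(0.0, 0.05, size=200)  # residuals: observed minus model
# For Gaussian residuals with sigma = 0.05 and A = 1, the SNR is ~10.
print(snr_emad(1.0, resid))
```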
Figure 3: $PK^{(s)}$ as a function of the number of measurements for orders 2,
3, and 4. The results for significance levels of $99\%$ (orange), $95\%$
(grey), and $90\%$ (blue) are shown in different colours. The dashed lines
indicate the $\rm FFN_{(s)}$ models while the solid red line shows the expected
value for the noise (see Sect. 3.1).
### 3.1 Fractional Fluctuation of Noise (FFN)
The fractional fluctuation of noise for signal detection is related to the
level that the figure of merit of a method (e.g., the power in the classical
periodogram) is not expected to exceed more than a given fraction of the time
due to stochastic variation (noise) in the input light curve. The FFN thus
mimics a false alarm probability, since it sets the power value above which a
certain percentage of spurious signals is found. There are many difficulties in
estimating FAPs in realistic astronomical time series (for more detail see
Koen, 1990; Sulis et al., 2017; VanderPlas, 2018), and hence the FFN is only an
empirical lower limit for finding a reliable signal. The expected value of the
flux-independent index, $K_{\rm fi}^{(s)}$, for white noise is analytically
defined as $P_{s}=2^{1-s}$ (see Sect. 2). The same expression can be applied to
$PK^{(s)}$ since $K_{\rm fi}^{(s)}$ and $PK^{(s)}$ are based on the same
concept. Therefore, the $FFN_{(s)}$ can be defined as,
$FFN_{(s)}=P_{s}+\Delta=\sqrt{\frac{\alpha}{N}}+\beta$ (12)
where $\alpha$ and $\beta$ are real positive numbers. $\beta$ must be larger
than $P_{s}$ since it is a threshold for white noise. $10^{7}$ Monte Carlo
simulations using a normal distribution were run with the number of
measurements ranging from $10$ to $1000$ in order to compute the free
parameters for Eq. 12. Figure 3 shows the mean values of $PK^{(s)}$ above
which $1\%$ (orange dots), $5\%$ (grey dots), and $10\%$ (blue dots) of
simulated data are found. The $FFN_{(s)}$ models are shown as dashed lines and
the free parameters of the models are presented in Table 1. The minimum values
of $FFN_{(s)}$ are found when $N\rightarrow\infty$. For this condition the
$FFN_{(s)}$ estimates have values above the noise (see Table 1). The scatter
found for small numbers of measurements (typically less than $20$) is related
to the discrete values allowed for $PK^{(s)}$ (for more details see Ferreira
Lopes & Cross, 2016). The results shown in Fig. 3 are quite similar for all
uncorrelated zero-mean noise distributions.
Table 1: The constraints on the $\rm FFN_{(s)}$ models (see Eq. 12) which delimit $99\%$, $95\%$, and $90\%$ of white noise, respectively.

order | $99\%$ $\alpha$ | $99\%$ $\beta$ | $95\%$ $\alpha$ | $95\%$ $\beta$ | $90\%$ $\alpha$ | $90\%$ $\beta$
---|---|---|---|---|---|---
$\rm FFN_{(2)}$ | $0.9459$ | $0.5107$ | $0.5350$ | $0.5177$ | $0.4377$ | $0.5112$
$\rm FFN_{(3)}$ | $1.2321$ | $0.2541$ | $0.6150$ | $0.2696$ | $0.4784$ | $0.2625$
$\rm FFN_{(4)}$ | $1.0937$ | $0.1316$ | $0.4511$ | $0.1482$ | $0.2380$ | $0.1482$
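Fitting the free parameters of Eq. 12 reduces to linear least squares in $1/\sqrt{N}$, since $FFN_{(s)}=\sqrt{\alpha}\cdot N^{-1/2}+\beta$. The sketch below uses synthetic threshold values generated from the $99\%$ $\rm FFN_{(2)}$ row of Table 1 (not the paper's Monte Carlo output) to illustrate the recovery of $\alpha$ and $\beta$.

```python
import numpy as np

def fit_ffn(n_values, thresholds):
    """Fit FFN_(s) = sqrt(alpha / N) + beta via a straight line in
    x = 1/sqrt(N): threshold = sqrt(alpha) * x + beta."""
    x = 1.0 / np.sqrt(np.asarray(n_values, dtype=float))
    slope, beta = np.polyfit(x, thresholds, 1)
    return slope**2, beta  # alpha = slope^2

n = np.array([10, 20, 50, 100, 300, 1000])
true_alpha, true_beta = 0.9459, 0.5107      # 99% FFN_(2) row of Table 1
y = np.sqrt(true_alpha / n) + true_beta     # noiseless synthetic thresholds
alpha, beta = fit_ffn(n, y)
print(round(alpha, 4), round(beta, 4))      # recovers 0.9459 0.5107
```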
The $FFN_{(s)}$ can be used as a reference to remove unreliable signals that
lead to random phase variations in any survey, whatever the wavelength
observed. This property is related to the weak dependence of $PK^{(s)}$ on
amplitude, error bars, or outliers (Ferreira Lopes & Cross, 2017). Indeed,
spurious periods that lead to smooth phase diagrams will break this constraint.
On the other hand, the period that produces the main peak in the periodogram
can be related to a phase diagram which has gaps for common methods like PDM
and LSG. This happens because the function used to compute the periodogram can
interpret this arrangement of measurements as a smooth phase diagram. This can
lead to the highest periodogram peak when the signal is not well defined and/or
when only a small number of epochs is available. In contrast, periods that lead
to folded phase diagrams with gaps may not have many correlated measurements
and hence will not lead to peaks in the $PK^{(s)}$ and $PL^{(s)}$ periodograms.
Indeed, the main peak of the periodogram corresponds to the arrangement of
measurements that leads to the largest correlation value.
Figure 4: $PK^{(s)}$ as a function of the signal-to-noise ratio for order 2
($s=2$) for the variable star models shown in Fig. 2. Each column displays the
results for $20$ (left panels), $60$ (middle panels), and $100$ (right panels)
measurements, while each row presents the results for a different variable
type: Cepheid, RRlyrae, RRblz, EA, EB, and Rot models. Box plots containing
$90\%$ of the data are shown. The even-median values for each box are marked by
a solid red line while the dashed blue lines show the expected value for the
noise.
### 3.2 Dependency on the signal-to-noise ratio
The $Ceph$, $RR$, $RRblz$, $EA$, $EB$, and $Rot$ models (for more details see
Sect. 3) were used to analyse the $PK^{(s)}$ values for the main variability
signal. $PK^{(s)}$ values were computed using $10^{7}$ Monte Carlo simulations
for SNR ranging from $1$ to $20$. The simulations were created for $s=2$,
$s=3$, and $s=4$. The results for $s=3$ and $s=4$ show lower efficiency than
those for $s=2$ at low SNR values, as expected from Sect. 2.3. Larger orders
$s$ only outperform $s=2$ for high signal-to-noise time series having a large
number of measurements (see Sect. 2.3). Therefore, we only show the results for
$s=2$. Figure 4 shows $PK^{(s)}$ as a function of SNR for $s=2$. The results
are displayed using box plots instead of error bars because $PK^{(s)}$ takes
discrete values and its distribution is not symmetric. A box plot range that
includes $90\%$ of the results was used, and the red line marks the middle of
the distribution. The main results can be summarized as follows:
* •
The maximum value achieved by $PK^{(s)}$ is limited by the number of
measurements for all SNR. Moreover, this effect is also observed for higher
orders $s$, in agreement with the values estimated by Eq. 7 (see Fig. 1).
* •
$PK^{(s)}$ tends to $PK_{(max)}^{(s)}$ for simulations using $20$, $60$, and
$100$ measurements and high SNR for Ceph, RR, RRblz, EB, and Rot models. The
same trend having a slower growth is also observed for EA. Indeed, $PK^{(s)}$
values are improved for EA models when the number of measurements, mainly at
the eclipses, is increased. About $\sim 50\%$ of $PK^{(s)}$ values for SNR
$=3$ are found below the expected noise value when the time-series has 20
measurements. This number is reduced to less than $\sim 10\%$ when more than
60 measurements are available.
* •
The dispersion of $PK^{(s)}$ values decreases with the number of measurements
for all values of SNR. The effect is less noticeable for EA models. This
happens because the simulated time series look like noise when most of the
measurements fall outside the eclipses.
* •
About $\sim 95\%$ of $PK^{(2)}$ values are above $P_{2}$ values for the whole
range of SNR for Ceph, RR, RRblz, EB, and Rot simulations using $60$ and $100$
measurements. This is also true for the simulations containing 20 measurements
for $\rm SNR>2$. On the other hand, the EA model shows $PK^{(2)}$ values
around the noise level for the whole range of SNR on the simulations
containing $20$ measurements. The reason for this behaviour is the same as
explained in the last item.
* •
The time-series like EA and EB models have the lowest $PK^{(s)}$ values among
all models analysed.
Figure 5: Cumulative histograms of SNR, $PK^{2}$, and number of measurements
for WVSC1 and CVSC1 stars. The results are shown for Z (brown), Y (grey), J
(red), H (green), K (blue), V (yellow) as well as the panchromatic (black)
data.
In summary, the probability of finding $\rm PK^{(s)}$ values above the noise
is dependent on the number of measurements, SNR, and signal type, as expected.
For all simulations, when the number of measurements is increased, we can
measure reliable periods at lower SNR. The simulations for higher $s$ order
are quite similar to those found for $s=2$.
## 4 Testing the method on real data
Robust numerical simulations are complex because they usually do not reproduce
the correlated nature of the noise intrinsic to the data, nor variations
related to the instrumentation. Many constraints are required to provide
realistic simulations, such as a wide range of amplitudes, error bars,
outliers, and correlated noise, to name a few. However, the simulation of the
$PK^{(s)}$ power is facilitated because: (a) the amplitude can be a free
parameter since $PK^{(s)}$ is only weakly dependent on it; (b) the even-mean
values (see Eq. 2) are computed using only those observations within three
times the absolute even-median deviation, which effectively reduces the weight
of outliers on the zero-point estimation ($\overline{y_{w}}$, see Eq. 2),
although all epochs are considered to compute the powers; (c) the correlated
nature of successive measurements is reduced since they are computed using
phase diagrams. On the other hand, a robust simulation for $PL^{(s)}$ covering
all its important aspects is difficult because $PL^{(s)}$ has a strong
dependence on amplitude, outliers, and error bars. Therefore, the discussions
in the previous sections only address the constraints on $PK^{(s)}$.
The $PK^{(s)}$ and $PL^{(s)}$ methods can be tested on real data using
existing variable stars catalogues. The WFCAMCAL variable stars catalogue
(WVSC1) having $280$ stars (Ferreira Lopes et al., 2015a) and the Catalina
Survey Periodic Variable star catalogue (CVSC1) having $\sim 47000$ sources
(Drake et al., 2014) were used to estimate the efficiency rate of our new
period finding methods. The WVSC1 was created from the analysis of the WFCAM
Calibration 08B release (WFCAMCAL08B; Hodgkin et al., 2009; Cross et al.,
2009). More information about the design, data reduction, layout, and
variability analysis of this database is given in Hambly et al. (2008); Cross
et al. (2009); Ferreira Lopes et al. (2015a). The WFCAMCAL database is a useful
dataset to test single and panchromatic wavelength period finding methods. To
summarize, the WFCAMCAL database contains panchromatic data ($ZYJHK$ wavebands)
that were observed to calibrate the UKIDSS surveys (Lawrence et al. 2007). A
sequence of filters, $JHK$ or $ZYJHK$, was observed within a few minutes during
each visit to the fields. These sequences were repeated in a semi-regular way
that leads to an uneven sampling with large seasonal gaps. On the other hand,
the CVSC1 has a huge number of objects, with seventeen variable star types that
were visually inspected by the authors.
In order to perform a straightforward comparison between the results, the SNR
of the WVSC1 and CVSC1 stars was also estimated using Eq. 11. The number of
measurements and the $PK^{(2)}$ values were computed as well. Figure 5 shows
the cumulative histograms of SNR and number of measurements for the WVSC1 and
CVSC1 stars. The results for each waveband as well as for the panchromatic data
are shown in different colours. About $\sim 90\%$ of the WVSC1 single-waveband
data have $SNR>\sim 3$, while this number decreases to $\sim 70\%$ for the
CVSC1 and panchromatic data. The WVSC1 single-waveband light curves have
between $\sim 30$ and $\sim 150$ measurements while the CVSC1 stars have
between $\sim 100$ and $\sim 300$. When the panchromatic data are considered,
the number of measurements increases considerably, by a factor of $\sim 5$
compared with a single WVSC1 waveband. However, the SNRs are smaller than those
found for single wavebands. In general, the SNRs for the panchromatic data are
smaller than those found for the CVSC1 stars.
Figure 6: Phase diagrams of panchromatic data (left panels) and the $PK^{(s)}$
and $PL^{(s)}$ normalized periodograms for the Z and ZYJHK wavebands (right
panels). The cross symbols in the phase diagrams mark the measurements in the Z
(brown), Y (grey), J (green), H (red), and K (blue) wavebands. The dashed green
lines indicate the published variability periods while the solid yellow lines
indicate the periods related to the largest peak in the periodogram.
Understanding the peculiarities of the sample tested is crucial when analysing
the efficiency rate of our approach. Therefore, we summarize how the period
searches were performed for the WVSC1 and CVSC1 stars. Ferreira Lopes et al.
(2015a) selected $6651$ targets to which four period finding methods were
applied. Next, the ten best-ranked periods from each of the four methods were
selected. For each period a light curve model was created using harmonic fits.
Finally, the best period was chosen as the one with the smallest $\chi^{2}$
among all ranked periods. On the other hand, the period search for $\sim 154$
thousand CVSC1 candidate sources was made using the Lomb-Scargle method. Next,
the main periods were analysed using the Adaptive Fourier Decomposition (AFD)
method (Torrealba et al., 2015) in order to determine the main variability
period and reduce the number of sources to be visually inspected ($112$
thousand). Additionally, the periods of a large number of the sources were
improved and corrected by the authors. Many of the variability periods of WVSC1
and CVSC1 were related to sub-harmonics of their true periods, and the final
results were set after visual inspection.
The following sections discuss the WVSC1 and CVSC1 variable stars from the
viewpoint of the $PK^{(s)}$ and $PL^{(s)}$ parameters. The periodogram, the
efficiency rate, and the peculiarities of our approach are analysed. For that,
we perform the period search using the SLM, LSG, and PDM methods besides the
panchromatic and flux-independent methods. A frequency range from
$(2/T_{max})\,d^{-1}$ to $30\,d^{-1}$ was explored, evenly sampled with a
frequency step of $\frac{1}{300\times T_{max}}$, where $T_{max}$ is the total
time span. This frequency sampling constrains the maximum phase shift to
$0.1$, which allows us to detect the large majority of signal types (for more
detail see Paper III). A quick visual inspection was performed on some of the
WVSC1 and CVSC1 stars to support the analysis in the next sections. The main
goal of this work is to propose a new period finding method rather than to
check the reliability of the periods in the WVSC1 and CVSC1 catalogues.
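The frequency grid described above can be sketched as follows; the function name and the example time span are illustrative, not from the paper.

```python
import numpy as np

def frequency_grid(t_max, f_max=30.0, oversample=300):
    """Trial frequencies as in Sect. 4: from 2/T_max to f_max (per day),
    evenly sampled with a step of 1 / (oversample * T_max)."""
    f_min = 2.0 / t_max
    step = 1.0 / (oversample * t_max)
    return np.arange(f_min, f_max + step, step)

# Example: a hypothetical ~100-day time span.
freqs = frequency_grid(t_max=100.0)
print(len(freqs), freqs[0], freqs[1] - freqs[0])
```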
### 4.1 Periodogram and efficiency rate
The $PK^{(2)}$ and $PL^{(2)}$ periodograms were computed for the WVSC1 and
CVSC1 stars. For better visualization, the differential panchromatic light
curve (see Fig. 6) was obtained by subtracting the even-median from the
magnitudes in each light curve. As a result, light curves with zero mean are
produced. The $PK^{(s)}$ and $PL^{(s)}$ parameters do not use any kind of
transformation to combine measurements at different wavelengths. However, a
better way to combine multi-wavelength data remains an open question.
Figure 6 shows the phase diagrams and their normalized periodograms for some
WVSC1 stars. The phase diagrams (left panels) show the folded panchromatic
data, while the periodograms for single and panchromatic wavebands are
displayed in the centre and right panels, respectively. The periodograms for a
large part of the CVSC1 stars are quite similar to those found for the
panchromatic data, i.e., periodograms for sources having more measurements and
smaller SNR than the WVSC1 single-waveband data. The main results can be
summarized as follows:
* •
$PK^{(2)}$ has more than one peak with the maximum power for WVSC-239 and
WVSC-263, i.e., $PK^{(s)}_{(max)}$ indicates more than one viable period. The
number of such periods gives the "degeneracy" of a particular arrangement of
measurements in the phase diagram. This number increases as the number of
measurements decreases (see Eq. 7).
* •
The main period found by $PK^{(2)}$ is not the same as that obtained by
$PL^{(2)}$. This means that the maximum number of positive correlations does
not coincide with the maximum correlation power. Besides, the "degeneracy" of
periods found for $PK^{(2)}$ is not observed in the $PL^{(2)}$ periodogram.
* •
$PK^{(2)}$ and $PL^{(2)}$ periodograms for the Z waveband show a scatter around
the expected noise level. Additionally, the WVSC-054 and WVSC-209 periodograms
show an increase in $PK^{(2)}$ at long periods. Indeed, this behaviour is
observed in all wavebands and hence it is an attribute of the proposed method.
* •
$PK^{(2)}$ for panchromatic data increases with period up to a maximum at
around $5$ days and then levels off and drops slightly for longer periods for
almost all sources. These sources have variability periods of less than $5$
days. This behaviour is not found for WVSC-209 and WVSC-102. Indeed, WVSC-102
has a variability period of $589$ days and hence the trend observed is
different to the others. On the other hand, WVSC-209 has a low-SNR signal.
Therefore, this trend is related to the variability period and SNR. Indeed, the
phase diagram retains part of the correlation information when the light curve
is folded using a test period larger than the true variability period.
* •
$PK^{(2)}$ and $PL^{(2)}$ periodograms have peaks at the previously measured
(true) variability periods of the WVSC1 stars. However, the period related to
the largest peak is not always the true variability period.
* •
The panchromatic data have lower SNR. Indeed, no clear signal can be observed
in WVSC-263. This could mean that the signal shape is very different from one
band to another, or a signal or seasonal variation is present in a single
waveband, or the variability period is wrong, to name the most likely
possibilities.
To summarize, the $PK^{(s)}$ and $PL^{(s)}$ periodograms indicate the
arrangements of measurements in the phase diagram that maximize the correlation
signal and power, respectively. Therefore, the $PK^{(s)}$ and $PL^{(s)}$
parameters can be used to identify the periods that lead to a smooth phase
diagram from the viewpoint of correlation strength.
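Since the maximum $PK^{(s)}$ power can be shared by several trial frequencies, all tied periods should be reported rather than a single winner. A minimal sketch with a toy periodogram (not real data):

```python
import numpy as np

def peak_periods(freqs, power, rtol=1e-9):
    """Return all trial periods tied with the maximum power -- the
    'degeneracy' discussed in Sect. 4.1 for the PK^(s) periodogram."""
    pmax = power.max()
    tied = np.isclose(power, pmax, rtol=rtol)
    return 1.0 / freqs[tied]

# Toy periodogram: two trial frequencies tied at the maximum power.
freqs = np.array([0.5, 1.0, 2.0, 4.0])   # cycles per day
power = np.array([0.75, 1.0, 0.5, 1.0])
print(peak_periods(freqs, power))        # periods 1.0 and 0.25 days
```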
Table 2: Accuracy considering two approaches: the main period ($E_{(M)}$) and the main period plus its sub-harmonic and overtone ($E_{(MH)}$). We consider that there is agreement if the relative difference is smaller than $1\%$.

method | Z $E_{(M)}$ | Z $E_{(MH)}$ | Y $E_{(M)}$ | Y $E_{(MH)}$ | J $E_{(M)}$ | J $E_{(MH)}$ | H $E_{(M)}$ | H $E_{(MH)}$ | K $E_{(M)}$ | K $E_{(MH)}$ | ZYJHK $E_{(M)}$ | ZYJHK $E_{(MH)}$ | V $E_{(M)}$ | V $E_{(MH)}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\rm PK^{(2)}$ | $0.30$ | $0.55$ | $0.30$ | $0.59$ | $0.30$ | $0.60$ | $0.29$ | $0.59$ | $0.23$ | $0.50$ | $0.19$ | $0.40$ | $0.24$ | $0.51$
$\rm PL^{(2)}$ | $0.20$ | $0.46$ | $0.22$ | $0.49$ | $0.22$ | $0.50$ | $0.28$ | $0.57$ | $0.17$ | $0.44$ | $0.14$ | $0.27$ | $0.25$ | $0.53$
LSG | $0.10$ | $0.27$ | $0.09$ | $0.29$ | $0.09$ | $0.34$ | $0.09$ | $0.26$ | $0.11$ | $0.31$ | $0.09$ | $0.32$ | $0.20$ | $0.87$
PDM | $0.17$ | $0.50$ | $0.26$ | $0.59$ | $0.25$ | $0.62$ | $0.27$ | $0.62$ | $0.22$ | $0.57$ | $0.26$ | $0.69$ | $0.22$ | $0.88$
SLM | $0.30$ | $0.55$ | $0.30$ | $0.59$ | $0.29$ | $0.63$ | $0.28$ | $0.59$ | $0.22$ | $0.50$ | $0.15$ | $0.32$ | $0.25$ | $0.49$
### 4.2 Accuracy
The accuracy was measured considering the main signal(s) detected by the
$PK^{(2)}$ and $PL^{(2)}$ methods. Indeed, the largest power in the $PK^{(s)}$
periodogram can be related to more than one period. Therefore, all periods
related to the largest periodogram peak were considered when measuring the
accuracy, i.e., the recovery fraction of variability periods. Two parameters
were used to measure the accuracy: $E_{(M)}$, when the main period is detected;
and $E_{(MH)}$, when the main variability period ($P_{Lit}$), measured in
Ferreira Lopes et al. (2015a), or its sub-harmonic or overtone is found. The
processing time of each method was not taken into account in this discussion. A
new approach to reduce the running time necessary to perform period searches
will be addressed in a forthcoming paper in this series. Signals found within
$\pm 1\%$ of the variability period were considered detected. Table 2 shows the
results for the individual wavebands as well as for the panchromatic data. The
main results can be summarized as:
* •
The accuracy is lower than $100\%$ for all methods and data tested. However,
new estimates of Catalina variability periods have been produced recently
(e.g. Papageorgiou et al., 2018) and the CVSC1 combines the results found by
PDM, LSG, and STR methods for all wavebands to determine the best variability
period. Therefore, the accuracy for both datasets is larger than that
displayed in Table 2 if these results are taken into consideration.
* •
$PK^{(2)}$ has the highest efficiency rate considering only the main period
($E_{(M)}$) for the Z, Y, J, H, and K wavebands. The efficiency rate of
$PK^{(2)}$ for the V waveband and panchromatic data is similar to that found
for the SLM and $PL^{(2)}$ methods. $E_{(M)}$ decreases from the Z to the K
waveband because the bluer wavebands have larger SNRs.
* •
The $E_{(MH)}$ values for Z, Y, J, H, and K for $PK^{(2)}$ and SLM are quite
similar and they have the highest accuracy for the Z and Y wavebands. On the
other hand, PDM has the highest $E_{(MH)}$ values for H, ZYJHK, and V
wavebands. Indeed, the $E_{(MH)}$ values for PDM and LSG are quite similar for
the V waveband.
* •
The $E_{(M)}$ for $PL^{(2)}$ is always smaller than that for $PK^{(2)}$, except
for the V band, where $PK^{(2)}$ is $4\%$ lower.
* •
The highest $E_{(M)}$ is found for the $PK^{(2)}$ method while the highest
$E_{(MH)}$ is found for the PDM method. The accuracy of the LSG method is quite
similar to that of the PDM method for the V waveband, while in the other
wavebands the PDM method has twice the accuracy. This difference is reduced by
a few percent if a higher relative error is allowed. Note that the $P_{Lit}$
values for the ZYJHK wavebands were refined using the SLM method (for more
details see Ferreira Lopes et al., 2015a), while the V waveband results were
computed using Lomb-Scargle and refined using the reduced $\chi^{2}$ (for more
details see Drake et al., 2014). Therefore, the accuracy can be biased by the
approach used to improve the variability period estimation. A thorough
discussion of how to accurately determine the variability period and its error
is found in the third paper of this series (Ferreira Lopes et al., 2018).
* •
The panchromatic data do not significantly increase the efficiency rate for
any method. The panchromatic data provides a larger number of measurements but
a smaller SNR compared with those found for single wavebands (see Fig. 5).
* •
The efficiency rates of $PK^{(2)}$, $PL^{(2)}$, and SLM are strongly reduced
for the V and panchromatic data. This is related to the smaller SNR of these
data and indicates a strong dependence of the $PK^{(2)}$ and $PL^{(2)}$
methods on the SNR.
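The $E_{(M)}$ and $E_{(MH)}$ criteria of Sect. 4.2 can be sketched as below. The harmonic factors $0.5$ and $2$ used for the sub-harmonic and overtone are an assumption of this sketch, since the exact convention is not spelled out here, and the periods are toy values.

```python
def matches(p_found, p_lit, tol=0.01, harmonics=(0.5, 1.0, 2.0)):
    """True if p_found is within tol (relative) of p_lit scaled by one of
    the harmonic factors. E_(M) uses harmonics=(1.0,); E_(MH) also accepts
    the sub-harmonic and overtone (factors 0.5 and 2: an assumption)."""
    return any(abs(p_found - k * p_lit) / (k * p_lit) < tol
               for k in harmonics)

found = [0.5002, 1.003, 2.3]  # toy recovered periods for three stars
lit = 1.0                     # toy literature period
e_m = sum(matches(p, lit, harmonics=(1.0,)) for p in found) / len(found)
e_mh = sum(matches(p, lit) for p in found) / len(found)
print(e_m, e_mh)  # only the 2nd star counts for E_(M); the 1st also for E_(MH)
```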
The periods detected by PDM are also detected by the LSG method. Moreover,
almost all periods detected using the PK, PL, and STR methods are also found by
the LSG method. The periods detected using LSG or PDM that are not found by the
other methods belong mainly to a few types: W UMa ($\sim 61\%$), EA ($\sim
13\%$), RR Lyr in the first overtone ($\sim 10\%$), and RR Lyr in several modes
($\sim 2\%$), measured as the number of missed sources of each type divided by
the total number of missed sources. The largest miss-rate is found for the
multi-periodic RR Lyr type when the relative number of sources is considered,
i.e. the number of missed sources divided by the number of detected sources for
each variability type. As expected, the multi-periodic signals have the largest
miss-rate since the current approach is not designed to select these periods.
From a quick visual inspection of the phased data for the periods found by
methods other than PDM and LSG, the following concerns were raised: the period
found is a higher harmonic or overtone of that found in the literature; the
phase diagram is not always smooth; the period found sometimes produces a
smooth phase diagram but differs from the literature value by more than $1\%$.
This means that the STR, PK, and PL methods are more likely to return spurious
periods or higher harmonics of the main variability period, since the majority
of the missed sources belong to these groups.
On the other hand, about $\sim 8\%$ of the Catalina periods do not correspond
to our periods. We also made a quick visual inspection of the phased data using
the periods found by Catalina and those found by us. As a result, we verify
that the large majority (more than $\sim 70\%$) of the light curves of this
subsample produce smoother phase diagrams using our periods than using the
Catalina periods (see the third row of panels of Fig. 7). This assumes that the
true or main variability period should be the one that produces the smoothest
phase diagram.
In summary, the $PK^{(s)}$ and $PL^{(s)}$ methods can be used as a new tool to
find periodic signals. In fact, they are more efficient than all methods tested
when high-SNR data are considered. As a rule, the new approach can be used in
the same fashion as other period finding methods. One should be aware, however,
that the lower efficiency rate at small SNR, the probable bias for longer
periods, and the multiple periods given by $PK^{(s)}$ must be taken into
account.
Figure 7: Phase diagrams for CVSC1 stars considering the published variability
period ($P_{Lit}$) and that found by the $PK^{(2)}$ method ($P_{PK}$). The star
name is shown at the top of each diagram while the periods are given in the
bottom left corners.
### 4.3 Cautionary notes on period searching
The main variability period is assumed to be the one that provides the
smoothest phase diagram. Indeed, the periodicity of many signals in oversampled
data such as CoRoT and Kepler light curves (e.g. Paz-Chinchón et al., 2015;
Ferreira Lopes et al., 2015b) can be easily identified by looking directly at
the light curve. This may not include low-SNR multi-periodic signals. On the
other hand, the signals in undersampled data can only be identified by eye
using phase diagrams. In both cases, the phase diagram should be smooth at the
main variability period. However, more than one period can lead to a smooth
phase diagram. In fact, due to the nature of the analysis of big datasets, it
is highly likely that some observational biases exist or that pathological
cases arise where combinations of random or correlated errors, or nearby
sources, mimic the expected variations. Therefore, additional information must
be brought together to solve this puzzle. For instance, photometric colours,
amplitudes, nearby saturated sources, crowded sky regions, distances, and other
information are crucial to confirm the period reliably.
All configurations that produce smooth phase diagrams return peaks in the
$PK^{(s)}$ method. The current approach was designed to find the main
variability period from the viewpoint of correlation. Indeed, the harmonic
periods also produce peaks in the periodogram, since the number of consecutive
measurements crossing the even-mean increases only slightly (see Eq. 7), often
by less than the random crossings due to noise. Other signals not related to
the main variability period can also lead to smooth phase diagrams, and hence
they also have peaks close to $PK^{(s)}_{(max)}$. Moreover, incorrect periods
can also be obtained if all configurations that lead to smoother phase diagrams
are not examined.
The $PK^{(s)}$ method is a useful tool to find all periods that lead to smooth
phase diagrams. Other methods, e.g. the string length or PDM methods, or the
fitting of truncated Fourier series, also favour smooth phase diagrams. For
completeness, the most prominent peaks should be examined to evaluate the best
candidate for the main variability signal. The best period can be taken as the
one that leads to the smallest $\chi^{2}$ of a model computed from the phase
diagram (e.g. Drake et al., 2014; Ferreira Lopes et al., 2015a; Torrealba et
al., 2015). Figure 7 shows five cases from the CVSC1 where the issues discussed
here arise. Each row of panels presents examples as follows:
* •
First row of panels: stars where the $PK^{(2)}$ method does not identify the
correct variability period. In these cases, an examination of the phase
diagrams for the other peaks in $PK^{(2)}$ may help to find the correct value.
* •
Second row of panels: stars for which a smooth phase diagram is not clearly
defined by either the CVSC1 period or the $PK^{(2)}$ method. Therefore, both
the $P_{Lit}$ and $P_{PK}$ estimates may be wrong. Indeed, Drake et al. (2014)
used other criteria to define the period reliably. However, this analysis is
hindered if no information besides the light curve is available.
* •
Third row of panels: both the $P_{Lit}$ and $P_{PK}$ estimates produce smooth
phase diagrams. However, they are not sub-harmonics of one another. This means
that both periods are sub-harmonics of the main variability period or one of
them is incorrect. Indeed, these might also be complex systems with multiple
periodicities, e.g. an eclipsing binary where one of the components is a
pulsating star. These examples illustrate that the criterion of having a
smooth phase diagram is not, per se, enough to define the variability period.
* •
Fourth row of panels: the variability period found by the $PK^{(2)}$ method is
an overtone (greater than 2) of the variability period. This indicates that the
efficiency rate discussed in Sect. 4.2 improves if higher sub-harmonics are
considered.
* •
Last row of panels: stars where $P_{Lit}$ is wrong or inaccurate. $P_{PK}$
returns smoother phase diagrams than those obtained using $P_{Lit}$. Indeed,
the $PK^{(s)}$ period for CSS_J110010.1+165359 appears to be a sub-harmonic of
the true variability period. A wrong period determination can result in
misclassification, since many of the parameters used for classification are
derived from the variability period.
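The selection step described above, folding the light curve on each candidate peak and keeping the period whose phase diagram is best fitted by a smooth model, can be sketched as follows. The harmonic model, function names, and Fourier order are our own illustrative choices, not those used in this paper:

```python
import numpy as np

def chi2_of_phase_model(time, mag, err, period, nharm=2):
    """Fold the light curve on `period`, fit a truncated Fourier series
    to the phase diagram, and return the weighted chi^2 of the fit."""
    phase = (time / period) % 1.0
    # Design matrix: constant term plus nharm harmonics of the folded signal.
    cols = [np.ones_like(phase)]
    for k in range(1, nharm + 1):
        cols.append(np.cos(2 * np.pi * k * phase))
        cols.append(np.sin(2 * np.pi * k * phase))
    A = np.column_stack(cols) / err[:, None]
    coef, *_ = np.linalg.lstsq(A, mag / err, rcond=None)
    resid = mag / err - A @ coef
    return float(resid @ resid)

def best_period(time, mag, err, candidate_periods):
    """Among candidate periodogram peaks, pick the period whose phase
    diagram is best described by the smooth model (smallest chi^2)."""
    chi2 = [chi2_of_phase_model(time, mag, err, p) for p in candidate_periods]
    return candidate_periods[int(np.argmin(chi2))]
```

Note that, as discussed above, harmonically related candidates (e.g. twice the true period) can yield comparably smooth phase diagrams, so this ranking alone cannot break sub-harmonic ambiguities.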
The WVSC1 stars have features similar to those discussed for the CVSC1 stars.
A quick visual inspection was performed to support our remarks. In both
samples, a few stars resemble those shown in the last row of panels of Fig. 7.
Indeed, the main goal of this work is to provide a new way to find and analyse
variability in time series.
## 5 Conclusions
Two new methods to search for variability periods are proposed. These methods
are not derived from any previous period-finding method. The $PK^{(s)}$ method
is characterized by ordinates in the range $0$ to $1$, does not depend strongly
on the amplitude of the signal, and has an analytical equation to determine the
$\rm FFN$. Moreover, the weight of outliers is reduced since the method only
considers the signs of the correlation signal. These unique features allow us
to determine a universal false alarm probability, i.e., cut-off values that can
be applied to any time series and that depend mainly on the SNR of the light
curve. In contrast, the $PL^{(s)}$ method uses the correlation values and
provides complementary information about the variability period.
The $PK^{(s)}$ and $PL^{(s)}$ methods were compared with the LSG, PDM, and SLM
methods on real and simulated data with single- and multi-wavelength coverage.
As a result, the efficiency rate found for the LSG and PDM methods is better
than that of all other methods for sub-samples with low ($<3$) or high ($>3$)
SNR. On the other hand, the efficiency of $PK^{(s)}$ and $PL^{(s)}$ is similar
to that found for the SLM method under both constraints. As expected, the
accuracy of all methods increases for data with high SNR.
In fact, the statistics considered in this paper are unlikely to be useful for
data with multiple periodicities. The current methods were recently applied to
the entire data set of the VVV survey (Ferreira Lopes et al., 2020), where the
periods estimated with five period-finding methods can be found. This paper is
the second of a series on period search methods. Our next paper will provide
recommendations to reduce running time and improve periodicity searches on
big data sets.
## 6 Data and Materials
The data underlying this article are available in the Catalina repository
(http://nesssi.cacr.caltech.edu/DataRelease/) and in the WFCAM Science Archive
- WSA (http://wsa.roe.ac.uk/). A user-friendly version of the data can also be
shared on reasonable request to the corresponding author.
## Acknowledgements
C. E. F. L. acknowledges a post-doctoral fellowship from the CNPq. N. J. G. C.
acknowledges support from the UK Science and Technology Facilities Council.
The authors thank MCTIC/FINEP (CT-INFRA grant 0112052700) and the Embrace
Space Weather Program for the computing facilities at INPE.
## References
* Angeloni et al. (2012) Angeloni R., Di Mille F., Ferreira Lopes C. E., Masetti N., 2012, ApJ, 756, L21
* Angeloni et al. (2014) Angeloni R., et al., 2014, A&A, 567, A100
* Bellm et al. (2019) Bellm E. C., et al., 2019, PASP, 131, 018002
* Carmo et al. (2020) Carmo A., Ferreira Lopes C. E., Papageorgiou A., Jablonski F. J., Rodrigues C. V., Drake A. J., Cross N. J. G., Catelan M., 2020, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 61C, 88
* Carone et al. (2012) Carone L., et al., 2012, A&A, 538, A112
* Chadid et al. (2010) Chadid M., et al., 2010, A&A, 510, A39
  * Chambers et al. (2016) Chambers K. C., et al., 2016, arXiv e-prints
* Cincotta et al. (1995) Cincotta P. M., Mendez M., Nunez J. A., 1995, ApJ, 449, 231
* Clarke (2002) Clarke D., 2002, A&A, 386, 763
* Cross et al. (2009) Cross N. J. G., Collins R. S., Hambly N. C., Blake R. P., Read M. A., Sutorius E. T. W., Mann R. G., Williams P. M., 2009, MNRAS, 399, 1730
* De Medeiros et al. (2013) De Medeiros J. R., et al., 2013, A&A, 555, A63
* Debosscher et al. (2007) Debosscher J., Sarro L. M., Aerts C., Cuypers J., Vandenbussche B., Garrido R., Solano E., 2007, A&A, 475, 1159
* Deeming (1975) Deeming T. J., 1975, Ap&SS, 36, 137
* Drake et al. (2014) Drake A. J., et al., 2014, ApJS, 213, 9
* Dupuy & Hoffman (1985) Dupuy D. L., Hoffman G. A., 1985, International Amateur-Professional Photoelectric Photometry Communications, 20, 1
* Dworetsky (1983) Dworetsky M. M., 1983, MNRAS, 203, 917
* Ferreira Lopes & Cross (2016) Ferreira Lopes C. E., Cross N. J. G., 2016, A&A, 586, A36
* Ferreira Lopes & Cross (2017) Ferreira Lopes C. E., Cross N. J. G., 2017, A&A, 604, A121
* Ferreira Lopes et al. (2015a) Ferreira Lopes C. E., Dékány I., Catelan M., Cross N. J. G., Angeloni R., Leão I. C., De Medeiros J. R., 2015a, A&A, 573, A100
* Ferreira Lopes et al. (2015b) Ferreira Lopes C. E., et al., 2015b, A&A, 583, A122
* Ferreira Lopes et al. (2015c) Ferreira Lopes C. E., Leão I. C., de Freitas D. B., Canto Martins B. L., Catelan M., De Medeiros J. R., 2015c, A&A, 583, A134
* Ferreira Lopes et al. (2018) Ferreira Lopes C. E., Cross N. J. G., Jablonski F., 2018, MNRAS, 481, 3083
* Ferreira Lopes et al. (2020) Ferreira Lopes C. E., et al., 2020, MNRAS, 496, 1730
* Hambly et al. (2008) Hambly N. C., et al., 2008, MNRAS, 384, 637
  * Hoaglin et al. (1983) Hoaglin D. C., Mosteller F., Tukey J. W., 1983, Understanding robust and exploratory data analysis. New York: John Wiley
* Hodgkin et al. (2009) Hodgkin S. T., Irwin M. J., Hewett P. C., Warren S. J., 2009, MNRAS, 394, 675
* Ivezic et al. (2008) Ivezic Z., et al., 2008, Serbian Astronomical Journal, 176, 1
* Koen (1990) Koen C., 1990, ApJ, 348, 700
* Lafler & Kinman (1965) Lafler J., Kinman T. D., 1965, ApJS, 11, 216
* Lawrence et al. (2007) Lawrence A., et al., 2007, MNRAS, 379, 1599
* Lomb (1976) Lomb N. R., 1976, Ap&SS, 39, 447
* Long et al. (2014) Long J. P., Chi E. C., Baraniuk R. G., 2014, arXiv e-prints, p. arXiv:1412.6520
* Maciel et al. (2011) Maciel S. C., Osorio Y. F. M., De Medeiros J. R., 2011, New Astron., 16, 68
* Minniti et al. (2010) Minniti D., et al., 2010, New Astron., 15, 433
* Mondrik et al. (2015) Mondrik N., Long J. P., Marshall J. L., 2015, ApJ, 811, L34
* Papageorgiou et al. (2018) Papageorgiou A., Catelan M., Christopoulou P.-E., Drake A. J., Djorgovski S. G., 2018, ApJS, 238, 4
* Paparó et al. (2009) Paparó M., Szabó R., Benkő J. M., Chadid M., Poretti E., Kolenberg K., Guggenberger E., Chapellier E., 2009, in Guzik J. A., Bradley P. A., eds, American Institute of Physics Conference Series Vol. 1170, American Institute of Physics Conference Series. pp 240–244, doi:10.1063/1.3246453
* Paz-Chinchón et al. (2015) Paz-Chinchón F., et al., 2015, preprint, (arXiv:1502.05051)
* Perryman (2005) Perryman M. A. C., 2005, in Seidelmann P. K., Monet A. K. B., eds, Astronomical Society of the Pacific Conference Series Vol. 338, Astrometry in the Age of the Next Generation of Large Telescopes. p. 3
  * Poretti et al. (2015) Poretti E., Le Borgne J. F., Rainer M., Baglin A., Benkő J. M., Debosscher J., Weiss W. W., 2015, MNRAS, 454, 849
* Rauer et al. (2014) Rauer H., et al., 2014, Experimental Astronomy, 38, 249
* Ricker et al. (2015) Ricker G. R., et al., 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Rimoldini (2013) Rimoldini L., 2013, preprint, (arXiv:1304.6715)
* Saha & Vivas (2017) Saha A., Vivas A. K., 2017, AJ, 154, 231
* Scargle (1982) Scargle J. D., 1982, ApJ, 263, 835
* Schwarzenberg-Czerny (1996) Schwarzenberg-Czerny A., 1996, ApJ, 460, L107
* Stellingwerf (1978) Stellingwerf R. F., 1978, ApJ, 224, 953
* Stetson (1996) Stetson P. B., 1996, PASP, 108, 851
* Sulis et al. (2017) Sulis S., Mary D., Bigot L., 2017, IEEE Transactions on Signal Processing, 65, 2136
* Süveges et al. (2012) Süveges M., et al., 2012, MNRAS, 424, 2528
* Tonry et al. (2018) Tonry J. L., et al., 2018, PASP, 130, 064505
* Torrealba et al. (2015) Torrealba G., et al., 2015, MNRAS, 446, 2251
* VanderPlas (2018) VanderPlas J. T., 2018, ApJS, 236, 16
* VanderPlas & Ivezić (2015) VanderPlas J. T., Ivezić Ž., 2015, ApJ, 812, 18
* Zechmeister & Kürster (2009) Zechmeister M., Kürster M., 2009, A&A, 496, 577
# Search for new physics via baryon EDM at LHC
L. Henry1 D. Marangotto2 A. Merli2,3 N. Neri2,3 J. Ruiz1 F. Martinez Vidal1
1IFIC, Universitat de València-CSIC, Valencia, Spain
2INFN Sezione di Milano and Università di Milano, Milan, Italy
3CERN, Geneva, Switzerland
###### Abstract
Permanent electric dipole moments (EDMs) of fundamental particles provide
powerful probes for physics beyond the Standard Model. We propose to search
for the EDM of strange and charm baryons at the LHC, extending the ongoing
experimental program on the neutron, muon, atoms, molecules and light nuclei.
The EDM of strange $\Lambda$ baryons, selected from weak decays of charm
baryons produced in $pp$ collisions at the LHC, can be determined by studying
the spin precession in the magnetic field of the detector tracking system. A
test of $C\!PT$ symmetry can be performed by measuring the magnetic dipole
moment of $\Lambda$ and $\overline{\Lambda}$ baryons. For short-lived
$\Lambda_{c}^{+}$ and $\Xi_{c}^{+}$ baryons, to be produced in a fixed-target
experiment using the 7 TeV LHC beam and channeled in a bent crystal, the spin
precession is induced by the intense electromagnetic field between crystal
atomic planes. The experimental layout based on the LHCb detector and the
expected sensitivities in the coming years are discussed.
###### keywords:
Baryons (including antiparticles) - Electric and magnetic moments
## Sec. 1 Introduction
The magnetic dipole moment (MDM) and the electric dipole moment (EDM) are
static properties of particles that determine the spin motion in an external
electromagnetic field, as described by the T-BMT equation [1, 2, 3].
The EDM is the only static property of a particle that requires the violation
of parity ($P$) and time reversal ($T$) symmetries and thus, relying on
$C\!PT$ invariance, the violation of $C\!P$ symmetry. The amount of $C\!P$
violation in the weak interactions of quarks is not sufficient to explain the
observed imbalance between matter and antimatter in the Universe. $C\!P$
violation in strong interactions is strongly bounded by the experimental limit
on the neutron EDM [4]. In the Standard Model (SM), contributions to the EDM
of baryons are highly suppressed, but they can be largely enhanced in some of
its extensions. Hence, experimental searches for the EDM of fundamental
particles provide powerful probes for physics beyond the SM.
Since EDM searches started in the fifties [5, 6], there has been an intense
experimental program, leading to limits on the EDM of leptons [7, 8, 9], the
neutron [4], heavy atoms [10], the proton (indirect, from $^{199}$Hg) [11],
and the $\Lambda$ baryon [12]. New experiments are ongoing and others are
planned, including those based on storage rings for the muon [13, 14], the
proton and light nuclei [15, 16, 17]. Recently we proposed to improve the
limit on strange baryons and to extend the searches to charm and bottom
baryons [18, 19].
EDM searches of fundamental particles rely on the measurement of the spin
precession angle induced by the interaction with the electromagnetic field.
For unstable particles this is challenging, since the precession has to take
place before the decay. A solution to this problem requires large samples of
high-energy polarized particles traversing an intense electromagnetic field.
Here we review the unique possibility to search for the EDM of the strange
$\Lambda$ baryon and of charm baryons at the LHC. Using the experimental upper
limit on the neutron EDM, the absolute value of the $\Lambda$ EDM is predicted
to be $<4.4\times 10^{-26}~e\rm\,cm$ [20, 21, 22, 23], while the indirect
constraints on the charm EDM are weaker, $\lesssim 4.4\times
10^{-17}~e\rm\,cm$ [24]. Any experimental observation of an EDM would indicate
a new source of $C\!P$ violation from physics beyond the SM. The EDM of the
long-lived $\Lambda$ baryon was measured to be $<1.5\times
10^{-16}~e\rm\,cm$ (95% C.L.) in a fixed-target experiment at Fermilab [12].
No experimental measurements exist for short-lived charm baryons, since a
negligibly small spin precession would be induced by the magnetic fields used
in current particle detectors.
## Sec. 2 Experimental setup
The magnetic and electric dipole moments of a spin-1/2 particle are given (in
Gaussian units) by $\bm{\mu}=g\mu_{B}{\mathbf{s}}/2$ and
$\bm{\delta}=d\mu_{B}{\mathbf{s}}/2$, respectively, where $\mathbf{s}$ is the
spin-polarization vector (defined such that
$\mathbf{s}=2\langle\mathbf{S}\rangle/\hbar$, where $\mathbf{S}$ is the spin
operator) and $\mu_{B}=e\hbar/(2mc)$ is the particle magneton, with $m$ its
mass. The dimensionless factors $g$ and $d$ are also referred to as the
gyromagnetic and gyroelectric ratios. The experimental setup to measure the
change of the spin direction in an electromagnetic field relies on three main
elements:
* •
a source of polarized particles whose direction and polarization degree are
known;
* •
an intense electromagnetic field able to induce a sizable spin precession
angle during the lifetime of the particle;
* •
a detector to measure the final polarization vector by analysing the angular
distribution of the particle decays.
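The third element, decay-angle polarimetry, follows from the angular distribution $dN/d\Omega^{\prime}\propto 1+\alpha\mathbf{s}\cdot\hat{\mathbf{k}}$ used below: for a longitudinal polarization $s_{z}$ one has $\langle\cos\theta^{\prime}\rangle=\alpha s_{z}/3$, so a simple moment estimator recovers $s_{z}$. A minimal sketch, where the sampling routine and sample size are our own illustration rather than part of the proposal:

```python
import numpy as np

def sample_costheta(alpha, sz, n, rng):
    """Sample cos(theta') from dN/dcos ∝ 1 + alpha*sz*cos via rejection."""
    out = []
    envelope = 1.0 + abs(alpha * sz)  # upper bound of the density shape
    while len(out) < n:
        x = rng.uniform(-1.0, 1.0, n)
        u = rng.uniform(0.0, envelope, n)
        out.extend(x[u < 1.0 + alpha * sz * x])
    return np.array(out[:n])

def estimate_sz(costheta, alpha):
    """Moment estimator: <cos theta'> = alpha*sz/3  =>  sz = 3<cos>/alpha."""
    return 3.0 * costheta.mean() / alpha

# Toy check with the Lambda asymmetry parameter quoted below, alpha = 0.642.
rng = np.random.default_rng(1)
c = sample_costheta(alpha=0.642, sz=0.9, n=200_000, rng=rng)
sz_hat = estimate_sz(c, 0.642)  # close to the generated sz = 0.9
```

In practice an unbinned maximum likelihood fit of the full angular distribution is used, but the moment estimator illustrates why the statistical precision scales as $1/(\alpha\sqrt{N})$.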
### 2.1 $\Lambda$ and $\overline{\Lambda}$ case
Weak decays of heavy baryons (charm and beauty), mostly produced in the
forward/backward directions at the LHC, can induce large longitudinal
polarization due to parity violation. For example, the decay of unpolarized
$\Lambda_{c}^{+}$ baryons to the $\Lambda\pi^{+}$ final state [25] produces
$\Lambda$ baryons with longitudinal polarization $\approx-90\%$. Another
example is the $\Lambda_{b}^{0}\rightarrow\Lambda J/\psi$ decay, where
$\Lambda$ baryons are produced almost 100% longitudinally polarized [26, 27].
The spin-polarization vector $\mathbf{s}$ of an ensemble of $\Lambda$ baryons
can be analysed through the angular distribution of the
$\Lambda\rightarrow p\pi^{-}$ decay [28, 29],
$\frac{dN}{d\Omega^{\prime}}\propto
1+\alpha\mathbf{s}\cdot\hat{\mathbf{k}}~,$ (1)
where $\alpha=0.642\pm 0.013$ [30] is the decay asymmetry parameter. $C\!P$
invariance in the $\Lambda$ decay implies $\alpha=-\overline{\alpha}$, where
$\overline{\alpha}$ is the decay parameter of the charge-conjugate decay. The
unit vector
$\hat{\mathbf{k}}=(\sin\theta^{\prime}\cos\phi^{\prime},\sin\theta^{\prime}\sin\phi^{\prime},\cos\theta^{\prime})$
indicates the momentum direction of the proton in the $\Lambda$ helicity
frame, with $\Omega^{\prime}=(\theta^{\prime},\phi^{\prime})$ the
corresponding solid angle. For the particular case of a $\Lambda$ flying along
the $z$ axis in the laboratory frame, an initial longitudinal polarization
$s_{0}$, i.e. $\mathbf{s}_{0}=(0,0,s_{0})$, and $\mathbf{B}=(0,B_{y},0)$, the
solution of the T-BMT equation is [18]
$\mathbf{s}=\begin{cases}s_{x}=-s_{0}\sin\Phi\\
s_{y}=-s_{0}\dfrac{d\beta}{g}\sin\Phi\\ s_{z}=s_{0}\cos\Phi\end{cases}$
(2)
where $\Phi=\frac{D_{y}\mu_{B}}{\beta\hbar
c}\sqrt{d^{2}\beta^{2}+g^{2}}\approx\frac{gD_{y}\mu_{B}}{\beta\hbar c}$, with
$D_{y}\equiv D_{y}(l)=\int_{0}^{l}B_{y}dl^{\prime}$ the integrated magnetic
field along the $\Lambda$ flight path. The polarization vector precesses in
the $xz$ plane, normal to the magnetic field, with the precession angle $\Phi$
proportional to the gyromagnetic factor of the particle. The presence of an
EDM introduces a non-zero $s_{y}$ component perpendicular to the precession
plane of the MDM, otherwise not present. At LHCb, with a tracking dipole
magnet providing an integrated field $D_{y}\approx\pm 4~\mathrm{T\,m}$ [31],
the maximum precession angle for particles traversing the entire magnetic
field region is $\Phi_{\rm max}\approx\pm\pi/4$, which allows about 70% of the
maximum $s_{y}$ component to be reached. Moreover, a test of $C\!PT$ symmetry
can be performed by comparing the $g$ and $-\bar{g}$ factors for $\Lambda$ and
$\overline{\Lambda}$ baryons, respectively, which precess in opposite
directions as $g$ and $d$ change sign from particle to antiparticle.
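The solution of the T-BMT equation above can be evaluated numerically with the LHCb-like values quoted in the text ($D_{y}\approx 4$ T m, $g=-1.458$). Rewriting the prefactor in SI units as $\Phi=geD_{y}/(2\beta m_{\Lambda}c)$ is our own simplification, and the $\Lambda$ mass is taken from standard PDG values:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
C_LIGHT = 2.99792458e8       # speed of light, m/s
# Lambda mass-momentum factor m*c (kg m/s), from m c^2 ≈ 1115.68 MeV.
M_LAMBDA_C = 1.11568e9 * E_CHARGE / C_LIGHT

def precessed_spin(s0, g, d, beta, Dy):
    """Spin components of Eq. (2) after an integrated field Dy (T m),
    for an initial longitudinal polarization s0.
    Phi = g*Dy*mu_B/(beta*hbar*c) reduces to g*e*Dy/(2*beta*m*c)."""
    phi = g * E_CHARGE * Dy / (2.0 * beta * M_LAMBDA_C)
    sx = -s0 * math.sin(phi)
    sy = -s0 * (d * beta / g) * math.sin(phi)  # EDM-induced component
    sz = s0 * math.cos(phi)
    return sx, sy, sz, phi

# Numbers quoted in the text: Dy ≈ 4 T m, g = -1.458, d = 0, beta ≈ 1.
sx, sy, sz, phi = precessed_spin(s0=1.0, g=-1.458, d=0.0, beta=1.0, Dy=4.0)
# |phi| comes out close to pi/4, matching the quoted Phi_max.
```

With $d=0$ the $s_{y}$ component vanishes identically, which is exactly the EDM signature exploited in the proposal: any measured $s_{y}$ signals a non-zero gyroelectric factor.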
### 2.2 Charm baryon case
The $\Lambda_{c}^{+}$ baryon EDM can be extracted by measuring the precession
of the polarization vector of channeled particles in a bent crystal. There, a
positively-charged particle channeled between atomic planes moves along a
curved path under the action of the intense electric field between crystal
planes. In the instantaneous rest frame of the particle, the electromagnetic
field causes the spin rotation. The signature of the EDM is a polarization
component perpendicular to the initial baryon momentum and polarization
vector, otherwise not present, similarly to the case of the $\Lambda$ baryon.
The phenomenon of spin precession of positively-charged particles channeled in
a bent crystal was first observed by the E761 collaboration, which measured
the MDM of the strange $\Sigma^{+}$ baryon [32]. The measurement at LHC
energies offers clear advantages with respect to lower beam energies, since
the estimated number of channeled charm baryons is proportional to
$\gamma^{3/2}$, where $\gamma$ is the Lorentz factor of the particles [33].
Figure 1: (Left) Production plane of the $\Lambda_{c}^{+}$ baryon, defined by
the proton and $\Lambda_{c}^{+}$ momenta. The initial polarization vector
$\mathbf{s}_{0}$ is perpendicular to the production plane, along the $y$ axis,
due to parity conservation in strong interactions [34]. (Right) Deflection of
the baryon trajectory and spin precession in the $yz$ and $xy$ planes induced
by the MDM and the EDM, respectively. The red (dashed) arrows indicate the
(magnified) $s_{x}$ spin component proportional to the particle EDM. $\Phi$ is
the MDM precession angle and $\theta_{C}$ is the crystal bending angle.
$\mathbf{E^{*}}$ and $\mathbf{B^{*}}$ are the intense electromagnetic fields
in the particle rest frame [35, 36] which induce the spin precession.
In the limit of large boost, with Lorentz factor $\gamma\gg 1$, the precession
angle $\Phi$ shown in Figure 1, induced by the MDM, is [37]
$\Phi\approx\frac{g-2}{2}\gamma\theta_{C},$ (3)
where $g$ is the gyromagnetic factor, $\theta_{C}=L/\rho_{0}$ is the crystal
bending angle, $L$ is the circular arc length of the crystal, and $\rho_{0}$
is the curvature radius.
In the presence of a non-zero EDM, the spin precession is no longer confined
to the $yz$ plane, originating an $s_{x}$ component proportional to the
particle EDM, represented by the red (dashed) arrows in Figure 1 (Right). The
polarization vector after channeling through the crystal is [18]
$\mathbf{s}~=~\left\{\begin{array}[]{l}s_{x}\approx
s_{0}\dfrac{d}{g-2}(\cos{\Phi}-1)\\ s_{y}\approx s_{0}\cos\Phi\\
s_{z}\approx s_{0}\sin\Phi\end{array}\right.,$ (4)
where $\Phi$ is given by Eq. (3). The MDM and EDM information can be extracted
from the measurement of the spin polarization of channeled baryons at the exit
of the crystal, via the study of the angular distribution of final-state
particles. For $\Lambda_{c}^{+}$ decaying to two-body final states such as
$f=\Delta^{++}K^{-}$, $pK^{*0}$, $\Delta(1520)\pi^{+}$ and $\Lambda\pi^{+}$,
the angular distribution is described by Eq. (1). A Dalitz plot analysis would
provide the ultimate sensitivity to the EDM measurement.
The initial polarization $s_{0}$ would in principle require the measurement of
the angular distribution for unchanneled baryons. In practice this is not
required, since the measurement of the three components of the final
polarization vector for channeled baryons allows a simultaneous determination
of $g$, $d$ and $s_{0}$, up to discrete ambiguities. These can be solved by
exploiting the dependence of the angular distribution on the
$\Lambda_{c}^{+}$ boost $\gamma$, as discussed in Ref. [19].
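Eqs. (3) and (4) can be sketched numerically. The crystal bending angle below (14 mrad) is an illustrative assumption of ours, while $s_{0}=0.6$, $(g-2)/2=0.3$ and $\gamma=1000$ are values quoted elsewhere in this paper:

```python
import math

def channeled_spin(s0, g, d, gamma, theta_c):
    """Polarization after channeling through a bent crystal:
    Phi from Eq. (3), spin components from Eq. (4), large-boost limit."""
    phi = 0.5 * (g - 2.0) * gamma * theta_c
    sx = s0 * (d / (g - 2.0)) * (math.cos(phi) - 1.0)  # EDM signature
    sy = s0 * math.cos(phi)
    sz = s0 * math.sin(phi)
    return sx, sy, sz, phi

# Quoted study values: s0 = 0.6, (g-2)/2 = 0.3 (so g = 2.6), gamma = 1000;
# theta_c = 14 mrad is a hypothetical bending angle for illustration.
sx, sy, sz, phi = channeled_spin(s0=0.6, g=2.6, d=0.0, gamma=1000.0,
                                 theta_c=0.014)
```

As with the $\Lambda$ case, $d=0$ makes the out-of-plane component $s_{x}$ vanish, so any measured $s_{x}$ at the crystal exit is a direct EDM signal.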
## Sec. 3 Sensitivity studies
### 3.1 $\Lambda$ and $\overline{\Lambda}$ case
The number of $\Lambda$ particles produced can be estimated as
$N_{\Lambda}=2\mathcal{L}\sigma_{q\overline{q}}f(q\rightarrow H){\cal
B}(H\rightarrow\Lambda X^{\prime}){\cal B}(\Lambda\rightarrow
p\pi^{-}){\cal B}(X^{\prime}\rightarrow\mathrm{charged}),$ (5)
where $\mathcal{L}$ is the total integrated luminosity,
$\sigma_{q\overline{q}}$ ($q=c,b$) are the heavy-quark production cross
sections in $pp$ collisions at $\sqrt{s}=13$ TeV [38, 39, 40, 41], and $f$ is
the fragmentation fraction into the heavy baryon $H$ [42, 43, 44, 45].
Table 3.1: Dominant $\Lambda$ production mechanisms from heavy baryon decays
and estimated yields produced per fb$^{-1}$ at $\sqrt{s}=13$ TeV, shown
separately for SL and LL topologies. The $\Lambda$ baryons from $\Xi^{-}$
decays, produced promptly in the $pp$ collisions, are given in terms of the
unmeasured production cross section.

| SL events | $N_{\Lambda}/{\rm fb}^{-1}~(\times 10^{10})$ | LL events, $\Xi^{-}\rightarrow\Lambda\pi^{-}$ | $N_{\Lambda}/{\rm fb}^{-1}~(\times 10^{10})$ |
|---|---|---|---|
| $\Xi_{c}^{0}\rightarrow\Lambda K^{-}\pi^{+}$ | 7.7 | $\Xi_{c}^{0}\rightarrow\Xi^{-}\pi^{+}\pi^{+}\pi^{-}$ | 23.6 |
| $\Lambda_{c}^{+}\rightarrow\Lambda\pi^{+}\pi^{+}\pi^{-}$ | 3.3 | $\Xi_{c}^{0}\rightarrow\Xi^{-}\pi^{+}$ | 7.1 |
| $\Xi_{c}^{+}\rightarrow\Lambda K^{-}\pi^{+}\pi^{+}$ | 2.0 | $\Xi_{c}^{+}\rightarrow\Xi^{-}\pi^{+}\pi^{+}$ | 6.1 |
| $\Lambda_{c}^{+}\rightarrow\Lambda\pi^{+}$ | 1.3 | $\Lambda_{c}^{+}\rightarrow\Xi^{-}K^{+}\pi^{+}$ | 0.6 |
| $\Xi_{c}^{0}\rightarrow\Lambda K^{+}K^{-}$ (no $\phi$) | 0.2 | $\Xi_{c}^{0}\rightarrow\Xi^{-}K^{+}$ | 0.2 |
| $\Xi_{c}^{0}\rightarrow\Lambda\phi(K^{+}K^{-})$ | 0.1 | Prompt $\Xi^{-}$ | $0.13\times\sigma_{pp\rightarrow\Xi^{-}}~[\mu\rm b]$ |
In Table 3.1 the dominant production channels and the estimated yields are
summarised. Only decays where it is experimentally possible to determine the
production and decay vertices of the $\Lambda$ are considered. Overall, there
are about $1.5\times 10^{11}$ $\Lambda$ baryons per fb$^{-1}$ produced
directly from heavy baryon decays (referred to hereafter as short-lived, or SL
events), and $3.8\times 10^{11}$ from charm baryons decaying through an
intermediate $\Xi^{-}$ particle (long-lived, or LL events). The yield of
$\Lambda$ baryons experimentally available can then be evaluated as
$N_{\Lambda}^{\rm reco}=\epsilon_{\rm geo}\epsilon_{\rm trigger}\epsilon_{\rm
reco}N_{\Lambda}$, where $\epsilon_{\rm geo}$, $\epsilon_{\rm trigger}$ and
$\epsilon_{\rm reco}$ are the geometric, trigger and reconstruction
efficiencies of the detector system. The geometric efficiency for the SL
topology has been estimated to be about 16% using a Monte Carlo simulation of
$pp$ collisions at $\sqrt{s}=13$ TeV and of the decay of heavy hadrons.
To assess the EDM sensitivity, pseudo-experiments have been generated using a
simplified detector geometry that includes an approximate LHCb magnetic field
map [31, 46]. $\Lambda$ baryons decaying towards the end of the magnet provide
most of the sensitivity to the EDM and MDM, since a sizeable spin precession
can occur. The decay angular distribution and the spin dynamics have been
simulated using Eq. (1) and the general solution as a function of the
$\Lambda$ flight length [18], respectively. For this study the initial
polarization vector $\mathbf{s}_{0}=(0,0,s_{0})$, with $s_{0}$ varying between
20% and 100%, and factors $g=-1.458$ [30] and $d=0$, were used. Each generated
sample was fitted using an unbinned maximum likelihood method with $d$, $g$
and $\mathbf{s}_{0}$ as free parameters. The $d$-factor uncertainty scales
with the number of events $N_{\Lambda}^{\rm reco}$ and the initial
longitudinal polarization $s_{0}$ as $\sigma_{d}\propto
1/(s_{0}\sqrt{N_{\Lambda}^{\rm reco}})$. The sensitivity saturates at large
values of $s_{0}$, as shown in Fig. 2 (Left), which partially relaxes the
requirements on the initial polarization. Similarly, Fig. 2 (Right) shows the
expected sensitivity on the EDM as a function of the integrated luminosity,
summing SL and LL events, assuming a global trigger and reconstruction
efficiency $\epsilon_{\rm trigger}\epsilon_{\rm reco}$ of 1% (improved LHCb
software-based trigger and tracking for the upgraded detector [47, 48]) or
0.2% (current detector [31]), where the efficiency estimates are based on an
educated guess. An equivalent sensitivity is obtained for the gyromagnetic
factor. Therefore, with 8 fb$^{-1}$ a sensitivity $\sigma_{d}\approx 1.5\times
10^{-3}$ could be achieved (current detector), to be compared to the present
limit of $1.7\times 10^{-2}$ [12]. With 50 fb$^{-1}$ (upgraded detector) the
sensitivity on the gyroelectric factor can reach $\approx 3\times 10^{-4}$.
Figure 2: (Left) Dependence of the $\Lambda$ gyroelectric factor uncertainty
on the initial polarization for $N_{\Lambda}^{\rm reco}=10^{6}$ events, and
(Right) as a function of the integrated luminosity, assuming reconstruction
efficiencies of 0.2% and 1%.
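The $\sigma_{d}\propto 1/\sqrt{N^{\rm reco}}$ scaling above can be turned into a back-of-envelope extrapolator, anchored to the numbers quoted in this section ($\sigma_{d}\approx 1.5\times 10^{-3}$ at 8 fb$^{-1}$ with 0.2% efficiency); this is a consistency sketch of those figures, not an independent estimate:

```python
import math

def sigma_d(lumi_fb, eff, sigma_ref=1.5e-3, lumi_ref=8.0, eff_ref=0.002):
    """Extrapolate the gyroelectric-factor uncertainty assuming
    sigma_d ∝ 1/sqrt(N_reco) with N_reco ∝ eff * luminosity,
    anchored to the quoted current-detector point."""
    return sigma_ref * math.sqrt((lumi_ref * eff_ref) / (lumi_fb * eff))

# Upgraded-detector scenario: 50 fb^-1 at 1% efficiency.
upgrade = sigma_d(50.0, 0.01)  # ≈ 2.7e-4, consistent with the quoted ≈ 3e-4
```

The extrapolated value for the upgraded detector agrees with the $\approx 3\times 10^{-4}$ quoted above, confirming the internal consistency of the two scenarios.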
### 3.2 Charm baryon case
We propose to search for charm baryon EDMs in a dedicated fixed-target
experiment at the LHC, to be installed in front of the LHCb detector. The
target should be attached to the crystal to maximize the yield of short-lived
charm baryons to be channeled. The rate of $\Lambda_{c}^{+}$ baryons produced
with 7 TeV protons on a fixed target can be estimated as
$\frac{dN_{\Lambda_{c}^{+}}}{dt}=\frac{F}{A}\sigma(pp\rightarrow\Lambda_{c}^{+}X)N_{T},$
(6)
where $F$ is the proton rate, $A$ the beam transverse area, $N_{T}$ the number
of target nucleons, and $\sigma(pp\rightarrow\Lambda_{c}^{+}X)$ is the cross
section for $\Lambda_{c}^{+}$ production in $pp$ interactions at
$\sqrt{s}=114.6$ GeV centre-of-mass energy. The number of target nucleons is
$N_{T}=N_{A}\rho ATA_{N}/A_{T}$, where $N_{A}$ is the Avogadro number, $\rho$
($T$) is the target density (thickness), and $A_{T}$ ($A_{N}$) is the atomic
mass (atomic mass number). For our estimates we consider a tungsten target of
thickness $T=0.5\rm\,cm$ and density $\rho=19.25{\rm\,g/cm^{3}}$. The rate of
$\Lambda_{c}^{+}$ particles channeled in the bent crystal and reconstructed in
the LHCb detector is estimated as
$\frac{dN_{\Lambda_{c}^{+}}^{\rm
reco}}{dt}=\frac{dN_{\Lambda_{c}^{+}}}{dt}{\cal
B}(\Lambda_{c}^{+}\rightarrow f)\,\varepsilon_{\rm CH}\,\varepsilon_{\rm
DF}(\Lambda_{c}^{+})\,\varepsilon_{\rm det},$ (7)
where ${\cal B}({{\mathchar 28931\relax}^{+}_{c}}\rightarrow f)$ is the
branching fraction of ${\mathchar 28931\relax}^{+}_{c}$ decaying to $f$,
$\varepsilon_{\rm CH}$ is the efficiency of channeling ${\mathchar
28931\relax}^{+}_{c}$ inside the crystal, $\varepsilon_{\rm DF}$ (${\mathchar
28931\relax}^{+}_{c}$) is the fraction of ${\mathchar 28931\relax}^{+}_{c}$
decaying after the crystal and $\varepsilon_{\rm det}$ is the efficiency to
reconstruct the decays. A 6.5$\mathrm{\,Te\kern-1.00006ptV}$ proton beam was
extracted from the LHC beam halo by channeling protons in bent crystals [49].
A beam with intensity of $5\times 10^{8}~{}\text{proton/s}$, to be directed on
a fixed target, is attainable with this technique [50].
The ${\mathchar 28931\relax}^{+}_{c}$ cross section is estimated from the
total charm production cross section [51], rescaled to
$\sqrt{s}=114.6\mathrm{\,Ge\kern-1.00006ptV}$ assuming a linear dependence on
$\sqrt{s}$, and ${\mathchar 28931\relax}^{+}_{c}$ fragmentation function [44]
to be $\sigma_{{{\mathchar 28931\relax}^{+}_{c}}}\approx 18.2{\rm\,\upmu b}$,
compatible with theoretical predictions [52].
The channeling efficiency in silicon crystals, including both channeling
angular acceptance and dechanneling effects, is estimated to be
$\varepsilon_{\rm CH}\approx 10^{-3}$ [53], while the fraction of
$\Lambda^{+}_{c}$ baryons decaying after the crystal is
$\varepsilon_{\rm DF}(\Lambda^{+}_{c})\approx 19\%$, for
$\gamma=1000$ and $10\rm\,cm$ crystal length. The geometrical acceptance for
$\Lambda^{+}_{c}\rightarrow p K^{-}\pi^{+}$ decaying
into the LHCb detector is $\varepsilon_{\rm geo}\approx 25\%$ according to
simulation studies. The LHCb software-based trigger for the upgrade detector
[47] is expected to have efficiency for charm hadrons comparable to the
current high level trigger [31], i.e. $\varepsilon_{\rm trigger}\approx 80\%$.
The tracking efficiency is estimated to be $70\%$ per track, leading to
an efficiency $\varepsilon_{\rm track}\approx 34\%$ for a $\Lambda^{+}_{c}$
decay with three charged particles. The detector
reconstruction efficiency, $\varepsilon_{\rm det}=\varepsilon_{\rm
geo}\,\varepsilon_{\rm trigger}\,\varepsilon_{\rm track}$, is estimated to be
$\varepsilon_{\rm det}(pK^{-}\pi^{+})\approx 5.4\times 10^{-2}$
for $\Lambda^{+}_{c}\rightarrow p K^{-}\pi^{+}$
decays.
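As a quick sanity check of this efficiency chain, the quoted factors can be combined in a back-of-the-envelope Python sketch (all values are the rounded estimates quoted above):

```python
# Back-of-the-envelope check of the reconstruction efficiency chain.
eps_per_track = 0.70          # tracking efficiency per charged track
n_tracks = 3                  # p, K-, pi+ in the Lc+ -> p K- pi+ decay
eps_track = eps_per_track ** n_tracks          # ~0.34, as quoted

eps_geo = 0.25                # geometrical acceptance (simulation)
eps_trigger = 0.80            # software-trigger efficiency

# eps_det = eps_geo * eps_trigger * eps_track
eps_det = eps_geo * eps_trigger * eps_track
print(f"eps_track = {eps_track:.2f}")   # prints eps_track = 0.34
print(f"eps_det   = {eps_det:.3f}")
```

Note that the straight product of the one-digit rounded inputs is about $6.9\times 10^{-2}$; the quoted $5.4\times 10^{-2}$ presumably follows from the unrounded estimates.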
Few $\Lambda^{+}_{c}$ decay asymmetry parameters $\alpha_{f}$
are known. At present, they can be computed from existing
$\Lambda^{+}_{c}\rightarrow p K^{-}\pi^{+}$ amplitude analysis
results [54], yielding $\alpha_{\Delta^{++}K^{-}}=-0.67\pm
0.30$ for the $\Lambda^{+}_{c}\rightarrow\Delta^{++}K^{-}$ decay [18].
For the sensitivity studies we assume $s_{0}=0.6$ and $(g-2)/2=0.3$, according
to experimental results and available theoretical predictions, respectively,
as quoted in Ref. [55]. The $d$ and $g-2$ values and uncertainties can be derived from
Eq. (4). The estimate assumes negligibly small uncertainties on $\theta_{C}$
and $\gamma$.
Given the estimated quantities, we obtain
$dN^{\rm reco}_{\Lambda^{+}_{c}}/dt\approx 5.9\times
10^{-3}~{\rm\,s^{-1}}=21.2~{\rm\,h^{-1}}$ for
$\Lambda^{+}_{c}\rightarrow\Delta^{++}K^{-}$. A
data taking of 1 month will be sufficient to reach a sensitivity of
$\sigma_{\delta}=1.3\times 10^{-17}$ on the $\Lambda^{+}_{c}$
EDM. Therefore, a measurement of the $\Lambda^{+}_{c}$ EDM is
feasible in $\Lambda^{+}_{c}$ quasi two-body decays at LHCb.
The dependence of the sensitivity to the $\Lambda^{+}_{c}$ EDM and
MDM as a function of the number of incident protons on the target is shown in
Fig. 3.
Figure 3: Dependence of the (left) $d$ and (right) $g$ uncertainties for the
$\Lambda^{+}_{c}$ baryon, reconstructed in the
$\Delta^{++}K^{-}$ final state, on the number of protons on target.
One month of data taking corresponds to $1.3\times 10^{15}$ incident protons
(dashed line), according to the estimated quantities.
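The quoted rate and the one-month proton count in the caption can be reproduced directly (a Python sketch using the values quoted in the text; the $1/\sqrt{N}$ scaling is the standard counting-statistics assumption):

```python
# Reproduce the quoted event rate and one-month proton count.
rate_per_s = 5.9e-3                    # reconstructed Lc+ -> Delta++ K- rate [1/s]
rate_per_h = rate_per_s * 3600.0
print(f"{rate_per_h:.1f} per hour")    # prints 21.2 per hour, as quoted

beam_intensity = 5e8                   # extracted protons per second on target
month_s = 30 * 24 * 3600               # one month, in seconds
protons_on_target = beam_intensity * month_s
print(f"{protons_on_target:.2e} protons")   # ~1.3e15, the dashed line in Fig. 3

# The statistical EDM sensitivity scales as 1/sqrt(N_reco), so quadrupling
# the protons on target halves sigma_delta (standard counting-statistics
# assumption, consistent with the trend in Fig. 3).
n_reco_month = rate_per_s * month_s    # ~1.5e4 reconstructed decays per month
```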
The same technique could be applied to any other heavy charged baryon, for
instance one containing a $b$ quark. The production rate is lower than that of the
$\Lambda^{+}_{c}$, and the corresponding estimates have been studied and discussed in
Ref. [19].
## 4 Conclusions
The unique possibility to search for the EDM of strange and charm baryons at
the LHC is discussed, based on the exploitation of large statistics of baryons
with large Lorentz boost and polarization. The $\Lambda$ strange
baryons are selected from weak charm baryon decays produced in $pp$
collisions at $\approx 14\mathrm{\,TeV}$ center-of-mass
energy, while $\Lambda^{+}_{c}$ charm baryons are produced in
a fixed-target experiment to be installed in the LHC, in front of the LHCb
detector. Signal events can be reconstructed using the LHCb detector in both
cases. The sensitivity to the EDM and the MDM of the strange and charm baryons
arises from the study of the spin precession in intense electromagnetic
fields. The long-lived $\Lambda$ precesses in the magnetic field
of the detector tracking system. Short-lived charm baryons are channeled in a
bent crystal attached to the target, and the intense electric field between
atomic planes induces the spin precession. Sensitivities for the $\Lambda$
EDM at the level of $1.3\times 10^{-18}~e\rm\,cm$ can be
achieved using a data sample corresponding to an integrated luminosity of 50
$\mbox{\,fb}^{-1}$ to be collected during the LHC Run 3. A test of $C\!PT$
symmetry can be performed by measuring the MDM of $\Lambda$ and
$\overline{\Lambda}$ baryons with
a precision of about $4\times 10^{-4}$ on the $g$ factor. The EDM of the
$\Lambda^{+}_{c}$ can be searched for with a sensitivity of
$2.1\times 10^{-17}\,e\rm\,cm$ in 11 days of data taking. The proposed
experiment would allow about two orders of magnitude improvement in the
sensitivity for the $\Lambda$ EDM and the first search for the
charm baryon EDM, expanding the search for new physics through the EDM of
fundamental particles.
## References
* [1] L. H. Thomas, The motion of a spinning electron, Nature 117, p. 514 (1926).
* [2] L. H. Thomas, The kinematics of an electron with an axis, Phil. Mag. 3, 1 (1927).
* [3] V. Bargmann, L. Michel and V. L. Telegdi, Precession of the polarization of particles moving in a homogeneous electromagnetic field, Phys. Rev. Lett. 2, 435 (May 1959).
* [4] J. M. Pendlebury et al., Revised experimental upper limit on the electric dipole moment of the neutron, Phys. Rev. D92, p. 092003 (2015).
* [5] E. M. Purcell and N. F. Ramsey, On the Possibility of Electric Dipole Moments for Elementary Particles and Nuclei, Phys. Rev. 78, 807 (1950).
* [6] J. H. Smith, E. M. Purcell and N. F. Ramsey, Experimental limit to the electric dipole moment of the neutron, Phys. Rev. 108, 120 (1957).
* [7] J. Baron et al., Order of Magnitude Smaller Limit on the Electric Dipole Moment of the Electron, Science 343, 269 (2014).
* [8] G. W. Bennett et al., An Improved Limit on the Muon Electric Dipole Moment, Phys. Rev. D80, p. 052008 (2009).
* [9] K. Inami et al., Search for the electric dipole moment of the tau lepton, Phys. Lett. B551, 16 (2003).
* [10] W. C. Griffith, M. D. Swallows, T. H. Loftus, M. V. Romalis, B. R. Heckel and E. N. Fortson, Improved Limit on the Permanent Electric Dipole Moment of Hg-199, Phys. Rev. Lett. 102, p. 101601 (2009).
* [11] V. F. Dmitriev and R. A. Sen’kov, Schiff moment of the mercury nucleus and the proton dipole moment, Phys. Rev. Lett. 91, p. 212303 (2003).
* [12] L. Pondrom, R. Handler, M. Sheaff, P. T. Cox, J. Dworkin, O. E. Overseth, T. Devlin, L. Schachinger and K. J. Heller, New Limit on the Electric Dipole Moment of the $\Lambda$ Hyperon, Phys. Rev. D23, 814 (1981).
* [13] J. Grange et al., Muon (g-2) Technical Design Report, tech. rep. (2015).
* [14] N. Saito, A novel precision measurement of muon g-2 and EDM at J-PARC, AIP Conf. Proc. 1467, 45 (2012).
* [15] V. Anastassopoulos et al., A Storage Ring Experiment to Detect a Proton Electric Dipole Moment (2015).
* [16] J. Pretz, Measurement of electric dipole moments at storage rings, Physica Scripta 2015, p. 014035 (2015).
* [17] I. B. Khriplovich, Feasibility of search for nuclear electric dipole moments at ion storage rings, Phys. Lett. B444, 98 (1998).
* [18] F. J. Botella, L. M. Garcia Martin, D. Marangotto, F. M. Vidal, A. Merli, N. Neri, A. Oyanguren and J. R. Vidal, On the search for the electric dipole moment of strange and charm baryons at LHC, Eur. Phys. J. C77, p. 181 (2017).
* [19] E. Bagli et al., Electromagnetic dipole moments of charged baryons with bent crystals at the LHC, Eur. Phys. J. C77, p. 828 (2017).
* [20] F.-K. Guo and U.-G. Meissner, Baryon electric dipole moments from strong $C\\!P$ violation, JHEP 12, p. 097 (2012).
* [21] D. Atwood and A. Soni, Chiral perturbation theory constraint on the electric dipole moment of the $\Lambda$ hyperon, Phys. Lett. B291, 293 (1992).
* [22] A. Pich and E. de Rafael, Strong CP violation in an effective chiral Lagrangian approach, Nucl. Phys. B367, 313 (1991).
* [23] B. Borasoy, The electric dipole moment of the neutron in chiral perturbation theory, Phys. Rev. D61, p. 114017 (2000).
* [24] F. Sala, A bound on the charm chromo-EDM and its implications, JHEP 03, p. 061 (2014).
* [25] J. M. Link et al., Study of the decay asymmetry parameter and CP violation parameter in the $\Lambda_{c}^{+}\rightarrow\Lambda\pi^{+}$ decay, Phys. Lett. B634, 165 (2006).
* [26] R. Aaij et al., Measurements of the $\Lambda_{b}^{0}\rightarrow J/\psi\Lambda$ decay amplitudes and the $\Lambda_{b}^{0}$ polarisation in $pp$ collisions at $\sqrt{s}=7$ TeV, Phys. Lett. B724, 27 (2013).
* [27] G. Aad et al., Measurement of the parity-violating asymmetry parameter $\alpha_{b}$ and the helicity amplitudes for the decay $\Lambda_{b}^{0}\rightarrow J/\psi\Lambda^{0}$ with the ATLAS detector, Phys. Rev. D89, p. 092009 (2014).
* [28] T. D. Lee and C.-N. Yang, General Partial Wave Analysis of the Decay of a Hyperon of Spin 1/2, Phys. Rev. 108, 1645 (1957).
* [29] J. D. Richman, An experimenter’s guide to the helicity formalism, Tech. Rep. CALT-68-1148, Calif. Inst. Technol. (Pasadena, CA, 1984).
* [30] C. Patrignani, Review of Particle Physics, Chin. Phys. C40, p. 100001 (2016).
* [31] R. Aaij et al., LHCb detector performance, Int. J. Mod. Phys. A30, p. 1530022 (2015).
* [32] D. Chen et al., First observation of magnetic moment precession of channeled particles in bent crystals, Phys. Rev. Lett. 69, 3286 (1992).
* [33] V. G. Baryshevsky, The possibility to measure the magnetic moments of short-lived particles (charm and beauty baryons) at LHC and FCC energies using the phenomenon of spin rotation in crystals, Phys. Lett. B757, 426 (2016).
* [34] M. Jacob and G. C. Wick, On the general theory of collisions for particles with spin, Annals Phys. 7, 404 (1959).
* [35] V. Baryshevsky, Spin rotation and depolarization of high-energy particles in crystals at LHC and FCC energies. The possibility to measure the anomalous magnetic moments of short-lived particles and quadrupole moment of $\Omega$-hyperon, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 402, 5 (2017).
* [36] I. J. Kim, Magnetic moment measurement of baryons with heavy flavored quarks by planar channeling through bent crystal, Nucl. Phys. B229, 251 (1983).
* [37] V. L. Lyuboshits, The Spin Rotation at Deflection of Relativistic Charged Particle in Electric Field, Sov. J. Nucl. Phys. 31, p. 509 (1980).
* [38] R. Aaij et al., Measurements of prompt charm production cross-sections in $pp$ collisions at $\sqrt{s}=13$ TeV, JHEP 03, p. 159 (2016), [Erratum: JHEP09,013(2016)].
* [39] M. Cacciari, FONLL Heavy Quark Production http://www.lpthe.jussieu.fr/~cacciari/fonll/fonllform.html, Accessed: 17.05.2016.
* [40] R. Aaij et al., Measurement of $\sigma(pp\rightarrow b\bar{b}X)$ at $\sqrt{s}=7~{}\rm{TeV}$ in the forward region, Phys. Lett. B694, 209 (2010).
* [41] R. Aaij et al., Measurement of forward $J/\psi$ production cross-sections in $pp$ collisions at $\sqrt{s}=13$ TeV, JHEP 10, p. 172 (2015).
* [42] M. Lisovyi, A. Verbytskyi and O. Zenaiev, Combined analysis of charm-quark fragmentation-fraction measurements, Eur. Phys. J. C76, p. 397 (2016).
* [43] L. Gladilin, Fragmentation fractions of $c$ and $b$ quarks into charmed hadrons at LEP, Eur. Phys. J. C75, p. 19 (2015).
* [44] Y. Amhis et al., Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2016, Eur. Phys. J. C77, p. 895 (2017).
* [45] M. Galanti, A. Giammanco, Y. Grossman, Y. Kats, E. Stamou and J. Zupan, Heavy baryons as polarimeters at colliders, JHEP 11, p. 067 (2015).
* [46] A. Hicheur and G. Conti, Parameterization of the LHCb magnetic field map, Proceedings, 2007 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC 2007): Honolulu, Hawaii, October 28-November 3, 2007 , 2439 (2007).
* [47] LHCb collaboration, LHCb Trigger and Online Technical Design Report (2014), LHCb-TDR-016.
* [48] LHCb collaboration, LHCb Tracker Upgrade Technical Design Report (2014), LHCb-TDR-015.
* [49] W. Scandale et al., Observation of channeling for 6500 GeV/c protons in the crystal assisted collimation setup for LHC, Phys. Lett. B758, 129 (2016).
* [50] J. P. Lansberg et al., A Fixed-Target ExpeRiment at the LHC (AFTER@LHC) : luminosities, target polarisation and a selection of physics studies, PoS QNP2012, p. 049 (2012).
* [51] A. Adare et al., Measurement of High-${p}_{T}$ Single Electrons from Heavy-Flavor Decays in $p+p$ Collisions at $\sqrt{s}=200\text{ }\mathrm{GeV}$, Phys. Rev. Lett. 97, p. 252002 (2006).
* [52] B. A. Kniehl and G. Kramer, ${D}^{0}$, ${D}^{+}$, ${D}_{s}^{+}$, and ${\Lambda}_{c}^{+}$ fragmentation functions from CERN LEP1, Phys. Rev. D71, p. 094013 (2005).
* [53] V. M. Biryukov et al., Crystal Channeling and Its Application at High-Energy Accelerators (Springer-Verlag Berlin Heidelberg, 1997).
* [54] E. M. Aitala et al., Multidimensional resonance analysis of $\Lambda^{+}_{c}\rightarrow pK^{-}\pi^{+}$, Phys. Lett. B471, 449 (2000).
* [55] V. M. Samsonov, On the possibility of measuring charm baryon magnetic moments with channeling, Nucl. Instrum. Meth. B119, 271 (1996).
# Lassie: HOL4 Tactics by Example
Heiko Becker (MPI-SWS, Saarland Informatics Campus (SIC), Germany; [email protected]),
Nathaniel Bos (McGill University, Canada; [email protected]),
Ivan Gavran (MPI-SWS, Germany; [email protected]),
Eva Darulova (MPI-SWS, Germany; [email protected]),
and Rupak Majumdar (MPI-SWS, Germany; [email protected])
(2021)
###### Abstract.
Proof engineering efforts using interactive theorem proving have yielded
several impressive projects in software systems and mathematics. A key
obstacle to such efforts is the requirement that the domain expert is also an
expert in the low-level details in constructing the proof in a theorem prover.
In particular, the user needs to select a sequence of tactics that lead to a
successful proof, a task that in general requires knowledge of the exact names
and use of a large set of tactics.
We present Lassie, a tactic framework for the HOL4 theorem prover that allows
individual users to define their own tactic language _by example_ and give
frequently used tactics or tactic combinations easier-to-remember names. The
core of Lassie is an extensible semantic parser, which allows the user to
interactively extend the tactic language through a process of definitional
generalization. Defining tactics in Lassie thus does not require any knowledge
of implementing custom tactics, while proofs written in Lassie retain the
correctness guarantees provided by the HOL4 system. We show through case
studies how Lassie can be used in small and larger proofs by novice and more
experienced interactive theorem prover users, and how we envision it to ease
the learning curve in a HOL4 tutorial.
Interactive Theorem Proving, HOL4, Semantic Parsing, Tactic Programming
copyright: rightsretained; doi: 10.1145/3437992.3439925; journalyear: 2021;
submissionid: poplws21cppmain-p27-p; isbn: 978-1-4503-8299-1/21/01;
conference: Proceedings of the 10th ACM SIGPLAN International Conference on
Certified Programs and Proofs; January 18–19, 2021; Virtual, Denmark;
ccs: Software and its engineering / Formal software verification;
Software and its engineering / Programming by example;
Software and its engineering / Macro languages
## 1\. Introduction
Interactive theorem proving is increasingly replacing “pen-and-paper”
correctness proofs in domains such as compilers (Leroy, 2009; Kiam Tan et al.,
2019), operating system kernels (Klein et al., 2009), and formalized
mathematics (Hales, 2006; Gonthier, 2008). Interactive theorem provers (ITPs)
provide strong guarantees: all proof steps are formalized and machine-checked
by a kernel using only a small set of generally accepted proof rules.
These guarantees come at a cost. Writing proofs in an ITP requires both domain
expertise in the target research area as well as in the particulars of the
interactive theorem prover. Formally proving a theorem requires an expert to
manually translate the general high-level proof idea from a pen-and-paper
proof into detailed, low-level kernel proof steps, which makes writing formal
proofs tedious and time-consuming. Theorem provers thus provide tactic
languages that allow users to programmatically combine low-level proof steps
(Gonthier and Mahboubi, 2010; Delahaye, 2000; Matichuk et al., 2016; Wenzel
and Paulson, 2006). While this makes proofs less tedious, users need to build
up a vocabulary of appropriate tactics, which constitutes a steep learning
curve for novice ITP users.
Controlled natural language interfaces (Bancerek et al., 2015; Frerix and
Koepke, 2019) have been explored as an alternative, more intuitive interface
to an ITP. However, these systems do not allow a combination with a general
tactic language and are thus constrained to a specific subset of proofs.
In this paper, we present the tactic framework _Lassie_ that allows HOL4 users
to define their own tactic language on top of the existing ones _by example_ ,
effectively providing an individualized interface. Each example consists of
the to-be-defined tactic (a natural language expression, called _utterance_)
and its definition using existing HOL4 tactics with concrete arguments.
For instance, we can define
instantiate ’x’ with ’$\top$’
as
qpat_x_assum ’x’ (qspec_then ’$\top$’ assume_tac)
Newly defined Lassie tactics map directly and transparently to the underlying
HOL4 tactics, and can be freely combined.
The main novelty over existing tactic languages is that Lassie allows tactics
to be defined by example and thus does not require knowledge of tactic programming.
A tactic defined by example is automatically _generalized_ into a parametric
tactic by Lassie to make the tactic applicable in different contexts, making
Lassie go beyond a simple macro system.
Our key technical contribution is that Lassie realizes this definition-by-
example using an extensible semantic parser (Berant et al., 2013; Wang et al.,
2017). Lassie tactics are defined as grammar rules that map to HOL4 tactics.
Lassie starts with an initial core grammar that is gradually extended through
user-provided examples. For each example, the semantic parser finds matchings
between the utterance and its definition. These matchings are used to create
new rules for the grammar. Effectively, the semantic parser identifies the
parameters of the newly given command, and thus generalizes from the given
example. In our illustrative example, Lassie will identify ’x’ and $\top$ as
arguments and add a rule that will work with arbitrary terms in place of ’x’
and $\top$.
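Lassie's actual generalization runs inside an extensible semantic parser; purely as an illustration of the matching-and-abstraction idea, a deliberately simplified version can be sketched in Python (the `generalize` function and the `$0` slot notation are invented for this sketch and are not Lassie's API):

```python
def generalize(utterance, definition):
    """Align literal arguments shared between an example utterance and its
    HOL4 definition, and replace each one by a fresh parameter slot."""
    u_toks, d_toks = utterance.split(), definition.split()
    # Tokens appearing on both sides are treated as arguments to abstract.
    shared = [t for t in u_toks if t in d_toks]
    rule_lhs, rule_rhs = u_toks[:], d_toks[:]
    for i, arg in enumerate(shared):
        slot = f"${i}"
        rule_lhs = [slot if t == arg else t for t in rule_lhs]
        rule_rhs = [slot if t == arg else t for t in rule_rhs]
    return " ".join(rule_lhs), " ".join(rule_rhs)

lhs, rhs = generalize("simplify with [REAL_LE_LT]", "fs [REAL_LE_LT]")
print(lhs, "->", rhs)   # prints: simplify with $0 -> fs $0
```

The resulting rule then matches the later call `simplify with [REAL_INV_INJ]` with a different argument in the slot; the real parser additionally uses type information to keep only well-typed matchings.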
Typically, extending a grammar through examples leads to ambiguity: for a
single utterance-definition pair there may be several possible matchings, and
thus several new parsing rules to introduce. In previous work (Wang et al.,
2017), this ambiguity was resolved through user interaction, e.g. showing the
user a visualization of different parses and letting them choose the parse
with the intended effect. However, it is non-trivial to visualize intermediate
steps in a general-purpose programming language. Our core insight is that ITPs
offer an ideal setting to resolve this ambiguity. We show that by carefully
designing the core grammar and by making use of type information, the
ambiguity can be resolved automatically. Furthermore, ITPs “visualize”
individual steps by showing the intermediate proof state, and rule out wrong
tactic definitions by forcing proofs to be checked by the ITP system's kernel.
Lassie's target audience is trained ITP users who implement decision
procedures and simple tactic descriptions in Lassie. Lassie allows them to
define their own individualized language by giving easy-to-remember names
to individual tactics or to (frequently used) combinations of tactics. A tactic
language implemented in Lassie can then be used by non-expert users with prior
programming experience but without necessarily having in-depth experience with an
ITP.
Compared to general tactic languages like ssreflect (Gonthier and Mahboubi,
2010), Ltac (Delahaye, 2000), and Eisbach (Matichuk et al., 2016), Lassie
requires less expert knowledge, at the expense of expressiveness. Similar to
Lassie, structured tactic languages like Isar (Wenzel, 1999) have an extended
parser. Extending a language like Isar requires editing the source code, while
Lassie supports different tactic languages that can be defined simply by
example. While Lassie can be used to define a tactic language that is close
to natural language, it does not require the interface to be entirely natural;
this makes Lassie more general and flexible than systems like Mizar (Bancerek et al.,
2015) and Naproche-SAD (Frerix and Koepke, 2019).
We implement Lassie as a library for the HOL4 (Slind and Norrish, 2008) ITP
system, but our technique is applicable to other theorem provers as well.
Lassie is fully compatible with standard HOL4 proofs. Since all Lassie tactics
map to standard HOL4 tactics, Lassie allows exporting a Lassie proof into
standard HOL4 to maintain portability of proofs. On the other hand, the
learned grammar can be ported as well and can be used, for example, by a
teacher to predefine a domain-specific (tactic) language with Lassie, which is
used by learners to ease proofs in a particular area.
We demonstrate Lassie on a number of case studies proving theorems involving
logic, and natural and real numbers. In particular, we show the generality of
the naturalized tactics by reusing them across different proofs, and we show
that Lassie can be incrementally used for proofs inside larger code bases.
Finally, by predefining a tactic language with Lassie, we develop a tutorial
for the HOL4 theorem prover.
#### Contributions
In summary, this paper presents:
* •
an interactive, extensible framework called Lassie for writing tactics in an
ITP by example;
* •
an implementation of this approach inside HOL4 (available at
https://github.com/HeikoBecker/Lassie);
* •
a number of case studies and a HOL4 tutorial (available at
https://github.com/HeikoBecker/HOL4-Tutorial)
showing the effectiveness of Lassie.
## 2\. Lassie by Example
We start by demonstrating Lassie on a small example, before explaining our
approach in detail in Section 3.
Theorem REAL_INV_LE_AMONO:
$\forall$ x y.
0 < x $\wedge$ 0 < y $\Rightarrow$
$\texttt{x}^{-1}$ $\leq$ $\texttt{y}^{-1}$ $\Leftrightarrow$ y $\leq$ x
Proof
rpt strip_tac
\\\ ‘$\texttt{x}^{-1}$ < $\texttt{y}^{-1}$ $\Leftrightarrow$ y < x‘
by (MATCH_MP_TAC REAL_INV_LT_ANTIMONO \\\ fs [])
\\\ EQ_TAC
\\\ fs [REAL_LE_LT]
\\\ STRIP_TAC
\\\ fs [REAL_INV_INJ]
QED
(a) HOL4 proof
Theorem REAL_INV_LE_AMONO:
$\forall$ x y.
0 < x $\wedge$ 0 < y $\Rightarrow$
$\texttt{x}^{-1}$ $\leq$ $\texttt{y}^{-1}$ $\Leftrightarrow$ y $\leq$ x
Proof
nltac ‘
introduce assumptions.
show ’inv x < inv y <=> y < x’
using (use REAL_INV_LT_ANTIMONO
THEN follows trivially).
case split.
simplify with [REAL_LE_LT].
introduce assumptions.
simplify with [REAL_INV_INJ]. trivial.‘
QED
(b) Lassie proof
Figure 1. HOL4 proof (left) and Lassie proof (right) for theorem
REAL_INV_LE_AMONO
###### Theorem 1.
$\forall x\,y,0<x\wedge 0<y\Rightarrow x^{-1}\leq y^{-1}\Leftrightarrow y\leq
x$
###### Proof 1.
We show both sides of the implication separately.
To show ($x^{-1}\leq y^{-1}\Rightarrow y\leq x$), we do a case split on
whether $x^{-1}<y^{-1}$ or $x^{-1}=y^{-1}$. If $x^{-1}<y^{-1}$, the claim
follows because the inverse function is inverse monotonic for $<$. If
$x^{-1}=y^{-1}$, the claim follows from injectivity of the inverse.
To show the case ($y\leq x\Rightarrow x^{-1}\leq y^{-1}$), we do a case split
on whether $y<x$ or $y=x$. If $y<x$ the claim follows because the inverse
function is inverse monotonic for $<$. If $y=x$, the claim follows trivially.
Figure 2. Textbook proof that the inverse function is inverse monotonic for
$\leq$
For our initial example we choose to prove that the inverse function
($x^{-1}$) on real numbers is inverse monotonic for $\leq$. Figure 2 shows the
formal statement of this theorem, together with an (informal) proof that one
may find in a textbook (the proof uses a previously proven theorem about $<$).
#### Proofs in HOL4
Figure 1(a) shows the corresponding HOL4 theorem statement and proof. We can be sure
that this proof is correct, because it is machine-checked by HOL4. HOL4 (Slind
and Norrish, 2008) is an ITP system from the HOL-family. It is based on
higher-order logic and all proofs are justified by inference rules from a
small, trusted kernel. Its implementation language is Standard ML (SML), and
similar to other HOL provers like HOL-Light (Harrison, 2009), and Isabelle/HOL
(Nipkow et al., 2002), proof steps are described using so-called tactics that
manipulate a goal state until the goal has been derived from true.
When doing a HOL4 proof, one first states the theorem to be proven and starts
an interactive proof. Figure 3 shows the example proof statement from 1(a) on
the left and the interactive session on the right. To show that the theorem
holds, the user would write a tactic proof at the place marked with (* Proof
*), starting with the initial tactic rpt strip_tac, sending each tactic to the
interactive session on the right.
A HOL4 tactic implements e.g. a single kernel step, such as assume_tac thm
which introduces thm as a new assumption, but a tactic can also implement more
elaborate steps, like fs, which implements a stateful simplification
algorithm, and imp_res_tac thm, resolving thm with the current assumptions to
derive new facts. In our example, rpt strip_tac repeatedly introduces
universally quantified variables and introduces left-hand sides of
implications as assumptions.
After each tactic application, the HOL4 session prints the goal state that the
user still needs to show, keeping track of the state of the proof. Once the
HOL4 session prints Initial goal proved, the proof is finished. To make sure
that the proof can be checked by HOL4 when run non-interactively, the separate
tactics used in each step are chained together using the infix-operator \\\.
As this operator returns a tactic after taking some additional inputs, it is
called a tactical.
Theorem REAL_INV_LE_AMONO:
$\forall$ x y.
0 < x $\wedge$ 0 < y $\Rightarrow$
(inv x $\leq$ inv y $\Leftrightarrow$ y $\leq$ x)
Proof
rpt strip_tac
(* Proof *)
QED
1 subgoal:
val it =
0. 0 < x
1. 0 < y
\---------------------
$\text{x}^{-1}$ $\leq$ $\text{y}^{-1}$ $\Leftrightarrow$ y $\leq$ x
: proof
>
Figure 3. HOL4 theorem (left) and interactive proof session (right)
Theorem REAL_INV_LE_AMONO:
$\forall$ x y. 0 < x $\wedge$ 0 < y $\Rightarrow$
(inv x $\leq$ inv y $\Leftrightarrow$ y $\leq$ x)
Proof
nlexplain()
introduce assumptions.
we show ’inv x < inv y <=> y < x’
using (use REAL_INV_LT_ANTIMONO
THEN follows trivially).
case split.
simplify with [REAL_LE_LT].
introduce assumptions.
simplify with [REAL_INV_INJ]. trivial.
QED
rpt strip_tac \\\
‘ inv x < inv y $\Leftrightarrow$ y < x ‘
by (irule REAL_INV_LT_ANTIMONO THEN fs [ ])
0. 0 < x
1. 0 < x
2. $\text{x}^{-1}$ < $\text{y}^{-1}$ $\Leftrightarrow$ y < x
\-------------------------------------
$\text{x}^{-1}$ $\leq$ $\text{y}^{-1}$ $\Leftrightarrow$ y $\leq$ x
$|$>
Figure 4. Intermediate proof state using goalTrees and nlexplain
#### Proofs in Lassie
Figure 1(b) shows the proof of our theorem using Lassie. This proof follows the same
steps as the standard HOL4 proof, but each tactic is called using a name that
we have previously defined in Lassie by example. We chose the Lassie tactics
to be more descriptive (for us at least), and while they make the proof
slightly more verbose, they also make it easier to follow for (non-)experts.
Each of our Lassie tactics maps to corresponding formal HOL4 tactics, so that
the proof is machine-checked by HOL4 as before, retaining all correctness
guarantees.
Unlike existing tactic languages, Lassie allows custom tactics to be defined _by
example_ and thus does not require any knowledge of tactic programming. For
instance, for our example proof, we defined a new tactic by
def ‘simplify with [REAL_LE_LT]‘ ‘fs [REAL_LE_LT]‘;
Lassie automatically generalizes from this example so that we can later use
this tactic with a different argument:
simplify with [REAL_INV_INJ]
To achieve this automated generalization, Lassie internally uses an extensible
semantic parser (Berant et al., 2013). That is, Lassie tactics are defined as
grammar rules. Lassie initially comes with a relatively small core grammar,
supporting commonly used HOL4 tactics. This grammar is gradually and
interactively extended with additional tactic descriptions by giving example
mappings. For instance our definition above would add the following rule to
the grammar:
simplify with [THM1, THM2, ...] $\to$ fs [THM1, THM2, ...]
Note that this rule allows simplify with to be called with a list of theorems,
not just a single theorem as in the example given. This generalization happens
completely automatically in the semantic parser and does not require any
programming by the user.
The Lassie-defined tactics can be used in a proof using the function nltac,
that sends tactic descriptions to the semantic parser, which returns the
corresponding HOL4 tactic. Because nltac has the same return type as all other
standard HOL4 tactics, it can be used as a drop-in replacement for standard
HOL4 tactics, and can be freely combined with other HOL4 tactics in a proof.
#### Explaining Proofs with Lassie
Lassie also comes with a function nlexplain. Instead of being a drop-in
replacement, like nltac, nlexplain decorates the proof state with the HOL4
tactic that is internally used to perform the current proof step. Figure 4
shows an intermediate state when using nlexplain to prove our example theorem.
All Lassie tactics inside the red dashed box on the left-hand side have been
passed to nlexplain. The goal state on the right-hand side shows the current
state of the proof as well as the HOL4 tactic script that has the same effect
as the Lassie tactics.
We envision nlexplain being used, for example, in a HOL4 tutorial to ease the
learning curve of interactive theorem proving. Lassie allows a
teacher to first define a custom tactic language that follows the same
structure as the HOL4 proof, but that uses descriptive names and may be thus
easier to follow for a novice. In a second step, one can use nlexplain to
teach the actual underlying HOL4 tactics.
Function nlexplain can furthermore be used for sharing Lassie proofs without
introducing additional dependencies on the semantic parser. While sharing
Lassie proof scripts directly is possible, it requires sharing the state of
the semantic parser as well. Alternatively, one can send the Lassie proof to
nlexplain and obtain a HOL4 tactic script that can then be shared without
depending on the semantic parser.
$ROOT | $\rightarrow$ $tactic | $(\lambda x.x)$
---|---|---
$tactic | $\rightarrow$ $TOKEN | $(\lambda x.$lookup "tactic" $x)$
$tactic | $\rightarrow$ $thm->tactic $thm | $(\lambda x\,y.x\,y)$
$thm->tactic | $\rightarrow$ $TOKEN | $(\lambda x.$lookup "thm list->tactic" $x)$
$thm | $\rightarrow$ $TOKEN | $(\lambda x.x)$
gen_tac : tactic
all_tac : tactic
strip_tac : tactic
fs : thm list->tactic
simp : thm list->tactic
Figure 5. Excerpt from Lassie grammar (left) and the database (right), parsing
tactics and thm list tactics
#### More Complex Tactics
While the target user that we had in mind when developing Lassie is not an ITP
expert, experts may nonetheless find Lassie useful to, e.g., group commonly
used combinations of tactics. For example, to make the proofs of simple
subgoals easier, an expert can define a tactic that uses different
simplification algorithms and an automated decision procedure to attempt to
solve a goal automatically:
def ‘prove with [ADD_ASSOC]‘
‘all_tac THEN ( fs [ ADD_ASSOC ] THEN NO_TAC)
ORELSE (rw [ ADD_ASSOC ] THEN NO_TAC)
ORELSE metis_tac [ ADD_ASSOC ]‘
The HOL4 tactic will first attempt to solve the goal using the simplification
algorithms implemented in tactics fs and rw, and if both fail, it will call
into the automated decision procedure metis_tac, based on first-order
resolution. (Tactical t1 ORELSE t2 applies first tactic t1, and if t1 fails,
t2 is applied. THEN NO_TAC makes the simplification fail if it does not solve
the goal.)
The resulting tactic description prove with [THM1, THM2, ...] is parametric in
the list of theorems used, making it applicable in different contexts.
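The control flow of this combination can be modelled with a small Python sketch that treats tactics as functions from a goal to a list of remaining subgoals and models failure as an exception (a simplification of HOL4's tacticals; all names and goal representations here are illustrative):

```python
class TacticFailure(Exception):
    pass

def ORELSE(t1, t2):
    """Sketch of HOL4's ORELSE: try t1; run t2 only if t1 fails."""
    def tac(goal):
        try:
            return t1(goal)
        except TacticFailure:
            return t2(goal)
    return tac

def then_no_tac(t):
    """Sketch of `t THEN NO_TAC`: NO_TAC fails on every remaining subgoal,
    so the combination succeeds only if t left no subgoals at all."""
    def tac(goal):
        subgoals = t(goal)
        if subgoals:
            raise TacticFailure("simplification did not close the goal")
        return subgoals
    return tac

# Illustrative stand-ins: each "tactic" maps a goal to remaining subgoals.
def fs(goal):
    return [] if "assoc" in goal else [goal]   # fs only closes assoc goals here

def rw(goal):
    return [goal]                              # rw makes no progress here

def metis_tac(goal):
    return []                                  # first-order prover closes it

prove_with = ORELSE(then_no_tac(fs), ORELSE(then_no_tac(rw), metis_tac))
assert prove_with("x + (y + z) = (x + y) + z assoc") == []  # fs solves it
assert prove_with("some harder goal") == []  # falls through to metis_tac
```

The key point the sketch captures is that `THEN NO_TAC` converts a partial simplification into an outright failure, so `ORELSE` moves on to the next, more expensive procedure.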
Defined tactic descriptions are added to the grammar and are thus themselves
subject to the generalization algorithm. We can therefore reuse the just-defined
tactic description to define an even more elaborate version:
def ‘’T’ from [ CONJ_COMM ] ‘
‘’T’ by ( prove with [CONJ_COMM] )‘;
This tactic description, once generalized by the semantic parser, completely
hides the fact that we may need to call into three different algorithms to
prove a subgoal, while allowing us to enrich our assumptions with arbitrary
goals, as long as they are provable by the underlying HOL4 tactics.
## 3\. Defining Tactics in Lassie
Existing approaches to tactic languages, like Eisbach (Matichuk et al., 2016)
and ssreflect (Gonthier and Mahboubi, 2010), are implemented as domain-specific
languages (DSLs), usually within the theorem prover’s implementation language.
In these approaches, defining a new tactic is the same as defining a function
in the DSL. If a tactic is to be generalized over, e.g., a list of theorems,
the user of the tactic language must perform this generalization manually.
In contrast, Lassie’s tactics are defined in a grammar that is extended
interactively by example, using a semantic parser (Berant et al., 2013) that
performs parameter generalization automatically. We define an initial core
grammar (Section 3.1) that users can extend by example (Section 3.2). Each
such description (a Lassie tactic) maps a description to a (sequence of) HOL4
tactics, which is then applied to the proof state and checked by the HOL4
kernel. Note that a Lassie user does not directly modify, and thus does not
have to be aware of, the underlying (core) grammar; the extension happens by
example.
### 3.1. The Core Grammar
The left-hand side of Figure 5 shows a subset of Lassie’s core grammar. $ROOT
is the symbol for the root node in the grammar and must always be a valid
tactic. The core grammar is used to parse theorems, tactics, and tacticals (of
type thm list -> tactic), and to look up functions of these types.
Each rule has the form $left $\rightarrow$ $right $(\lambda x.\ldots)$. While
$left $\rightarrow$ $right works just as in a standard context free grammar,
the $\lambda$-abstraction, called logical form, is applied to the result of
parsing $right using the grammar. The logical form allows us to manipulate
parsing results after they have been parsed by the grammar, essentially
interpreting them within the parser. In Lassie, we use it to implement function
applications when combining tactics, and to look up names in a database.
We have built a core grammar for Lassie that supports the most common tactics
and tacticals of HOL4. For instance, the core grammar parses fs
[REAL_INV_INJ] unambiguously into the equivalent SML code as its logical form.
We think of this core grammar as the starting point for users to define Lassie
tactics on top of the HOL4 tactics.
Adding every HOL4 tactic and tactical as a separate terminal to the grammar
would clutter it unnecessarily and make it hard to maintain. That is why the
grammar allows so-called lookup rules that check a dictionary for elements of
predefined sets. The right-hand side of Figure 5 shows a subset of the
database used for the lookups. In the grammar in Figure 5, a tactic can then
either be looked up from the database (second rule), or a tactic can be a
combination of a function of type thm -> tactic and a theorem (third rule). We
refer to functions of type thm -> tactic as theorem tactics, as they take a
theorem as input, and return a HOL4 tactic. Theorem tactics are again looked
up from the database, whereas a theorem can be any string, denoted in the
grammar by $TOKEN. In addition to HOL4 tactics and theorem tactics, our
core grammar also uses a combination of rules (not shown in Figure 5) to
support functions that return a tactic of type
* thm list -> tactic
* tactic -> tactic
* term quotation -> tactic
* (thm -> tactic) -> tactic
* tactic -> tactic -> tactic
* term quotation -> (thm -> tactic) -> thm -> tactic
* term quotation list -> (thm -> tactic) -> thm -> tactic
These types capture most of the tactics implemented in HOL4, and we add a
subset of 53 commonly used tactics to the database.
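To make the interplay between grammar rules, logical forms, and lookup rules concrete, the following Python sketch interprets a tiny fragment of the grammar from Figure 5. It is our illustration only: Lassie’s actual parser is built on SEMPRE, and parse_tactic and the token-list input format are hypothetical.

```python
# Illustration only: Lassie's parser is built on SEMPRE; this toy
# interpreter mimics how a lookup rule and a logical form cooperate.
# The database mirrors the excerpt in Figure 5 (names mapped by type).
database = {
    "tactic": {"gen_tac", "all_tac", "strip_tac"},
    "thm list->tactic": {"fs", "simp"},
}

def parse_tactic(tokens):
    """Parse a token list into (a string standing in for) SML code."""
    head, args = tokens[0], tokens[1:]
    if not args and head in database["tactic"]:
        # $tactic -> $TOKEN, logical form: lookup "tactic"
        return head
    if args and head in database["thm list->tactic"]:
        # combination rule, logical form: function application
        return head + " [" + ", ".join(args) + "]"
    raise ValueError("cannot parse " + repr(tokens) + " as a $tactic")

print(parse_tactic(["fs", "REAL_INV_INJ"]))  # fs [REAL_INV_INJ]
```

Because each name is looked up under its type, an ill-typed combination (e.g. passing theorems to gen_tac) simply fails to parse, which is the type-checking effect described below.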
#### Non-Ambiguity
A common issue in semantic parsing is grammar ambiguity. In Lassie, having an
ambiguous grammar is not desirable as it would require users to disambiguate
each ambiguous Lassie tactic while proving theorems. We thus aim to have an
unambiguous grammar and achieve this by a careful design of our core grammar.
By encoding the types of the tactics as non-terminals, our core grammar acts
as a type-checker for our supported subset of HOL4 tactics. Even after
defining custom tactics, the semantic parser will always parse Lassie tactics
into the subset it can type-check, thus keeping the grammar unambiguous. During
our experiments, we did not find a case where extending the grammar introduced
any ambiguity, which supports this design choice.
### 3.2. Extending Lassie with New Definitions
With our core grammar, Lassie can parse the HOL4 tactics we have added to the
grammar into their (equivalent) SML code. We now explain how this grammar can
be interactively extended by example in order to provide custom names for
(sequences of) tactics.
Lassie’s tactic learning mechanism relies on a semantic parser. A semantic
parser converts a natural language utterance into a corresponding (executable)
logical form or—due to ambiguity—a ranked list of candidates. Semantic parsers
can be implemented in many ways, e.g., they can be rule-based or learned from
data (Liang, 2016). SEMPRE (Berant et al., 2013), which we use, is a toolkit
for developing semantic parsers for different tasks. It provides commonly used
natural language processing methods, and different ways of encoding logical
forms.
Lassie’s semantic parser is implemented on top of the interactive version of
SEMPRE (Wang et al., 2017). It starts with a core formal grammar, which can be
expanded through interactions with the user. Users can add new concepts to the
grammar by example using Lassie’s library function def, which invokes the
semantic parser. Each example consists of a (_utterance_ , _definition_) pair,
where the utterance is the new tactic to be defined and the definition is an
expression that is already part of the grammar. For instance, we can give as
example:
def ‘simplify with REAL_ADD_ASSOC‘ (*utterance*)
‘fs [REAL_ADD_ASSOC]‘ (*definition*)
Note that the command demonstrates the new tactic (simplify with) with a
particular argument (REAL_ADD_ASSOC), but does not explicitly state what the
argument is.
The definition must already be part of the grammar and thus fully parsable;
otherwise, the parser rejects the pair, whereas the utterance may be only
partially parsable. That is, the definition needs to already be understood by
the semantic parser, either because it is part of the core grammar or because
it was previously defined by the user.
The function def first obtains a logical form for the definition (which exists
since the definition is part of the grammar). The semantic parser then induces
one or more grammar rules from the utterance-definition pair and attaches the
logical form of the definition to those rules.
The induction of new grammar rules relies on finding correspondences between
parsable parts of the utterance and its definition. As an example, observe our
simplify with command. Because REAL_ADD_ASSOC can be parsed into a category
$thm, the two new production rules added to the grammar are:
$tactic → simplify with REAL_ADD_ASSOC   (λ x. fs [REAL_ADD_ASSOC])
$tactic → simplify with $thm             (λ thm. fs [thm])
Based on the second added rule, we can now use the Lassie tactic simplify with
combined with any other description that is parsed as a $thm, because the
parser identified REAL_ADD_ASSOC as an argument and generalized from our
example by learning the λ-abstraction over the variable thm.
Next time the user calls, for instance,
nltac ‘simplify with REAL_ADD_COMM‘
Lassie’s semantic parser will parse this command into the tactic
fs[REAL_ADD_COMM] using the second added rule.
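The generalization performed by def can be sketched roughly as follows. This is our simplified Python illustration of the idea, not SEMPRE’s actual induction algorithm; induce_rules and apply_rules are hypothetical names.

```python
# Simplified illustration of def's generalization step (not SEMPRE's
# actual algorithm): the shared, parsable argument of the utterance and
# the definition is abstracted into a $thm slot.
def induce_rules(utterance, definition, known_thms):
    """Return (pattern, template) rules induced from one example."""
    rules = [(utterance, definition)]  # the specific rule, verbatim
    for thm in known_thms:
        if thm in utterance and thm in definition:
            # abstract the shared argument on both sides
            rules.append((utterance.replace(thm, "$thm"),
                          definition.replace(thm, "$thm")))
    return rules

def apply_rules(rules, utterance):
    """Parse a new utterance using a generalized rule."""
    for pattern, template in rules:
        prefix = pattern.replace("$thm", "")
        if "$thm" in pattern and utterance.startswith(prefix):
            return template.replace("$thm", utterance[len(prefix):])
    raise ValueError("no rule matches")

rules = induce_rules("simplify with REAL_ADD_ASSOC",
                     "fs [REAL_ADD_ASSOC]",
                     {"REAL_ADD_ASSOC", "REAL_ADD_COMM"})
print(apply_rules(rules, "simplify with REAL_ADD_COMM"))  # fs [REAL_ADD_COMM]
```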
## 4\. Lassie Design
Lassie is implemented as a HOL4 library, which can be loaded into a running
HOL4 session with open LassieLib;. This will start a SEMPRE process and the
library captures its input and output as SML streams. Whenever nltac or
nlexplain is run, the input is sent to SEMPRE over the input stream, and if
it can be parsed with the currently learned grammar, SEMPRE writes the
resulting HOL4 tactic to the output stream as a string. If parsing fails, i.e.
SEMPRE does not recognize the description, LassieLib raises an exception, so
that an end-user can define the tactic with a call to def.
We want nltac to act as a drop-in replacement for HOL4 tactics. Therefore,
nltac must not only be able to parse single tactics, but must also be able to
parse full tactic scripts, performing a proof from start to finish. During our
case studies, we noticed that SEMPRE was not built for parsing large strings
of text, but rather smaller examples. To speed up parsing, we have defined
a global constant, LassieSep, which is used to split input strings of nltac.
For example, calling
nltac ‘case split. simplify with [REAL_LE_LT].‘
will lead to two separate calls to the semantic parser: one for case split and
one for simplify with [REAL_LE_LT]. The resulting HOL4 tactics are joined
together using the THEN_LT tactical, a more general version of the tactical
\\, as it takes an additional argument for selecting the subgoal to which the
given tactic is applied. When proving a goal interactively, some tactics,
like induction and case splitting, can lead to multiple subgoals being
generated. We use the THEN_LT tactical to implement subgoal selection in
nltac.
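The splitting step can be sketched as follows. LASSIE_SEP, split_script, and the toy grammar are hypothetical stand-ins for Lassie’s internals, and joining tactics with an infix THEN_LT string is a simplification (the real tactical also takes a subgoal selector).

```python
# Hypothetical sketch of how nltac could split a multi-tactic script
# on a separator before handing each piece to the parser.
LASSIE_SEP = "."

def split_script(script):
    """Split a tactic script into individual descriptions."""
    return [p.strip() for p in script.split(LASSIE_SEP) if p.strip()]

def nltac(script, parse_one):
    # parse each description separately; in Lassie the resulting tactics
    # are combined with the THEN_LT tactical (shown here as infix text)
    return " THEN_LT ".join(parse_one(p) for p in split_script(script))

# toy stand-in for the learned grammar
toy_grammar = {
    "case split": "Cases_on",
    "simplify with [REAL_LE_LT]": "fs [REAL_LE_LT]",
}
print(nltac("case split. simplify with [REAL_LE_LT].",
            toy_grammar.__getitem__))
```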
There are some differences in how nltac and nlexplain are used. Function nltac
can be used as a drop-in replacement for HOL4 tactics, and thus supports
selection of subgoals. In contrast, nlexplain is meant to be used
interactively, and therefore parses Lassie tactics, but does not support
selection of subgoals. Instead, subgoals are proven in order of appearance.
The main purpose of nlexplain is to show how Lassie tactics are translated
back into HOL4 tactics. To do so, it modifies HOL4’s interactive read-eval-
print loop (REPL), and can thus only be used interactively; unlike nltac, it
cannot replace plain HOL4 tactics in proof scripts.
To differentiate between SML expressions and HOL4 expressions, HOL4 requires
HOL4 expressions to be wrapped in quotes (`), but quotes are also a way of
allowing multiline strings in HOL4 proof scripts. Therefore, we choose quotes
to denote the start and end of a Lassie proof script, and use apostrophes (’)
to denote the start and end of a HOL4 expression within a Lassie proof script.
Lassie currently does not support debugging tactic applications. While an end-
user can easily define new tactics by example using the semantic parser,
figuring out a tactic’s exact behavior and fixing bugs still requires the
user to manually step through the corresponding HOL4 tactic in an interactive
proof and inspect each step. We see extending Lassie with debugging support as
future work.
### 4.1. Extending Lassie with New Tactics
Our initial core grammar supports only a fixed set of the most commonly used
HOL4 tactics. However, it is common in ITPs to develop custom tactics on a
per-project basis, possibly including fully blown decision procedures
(Solovyev and Hales, 2013). To make sure that users can add their own HOL4
tactics as well as custom decision procedures to Lassie, the library provides
the functions addCustomTactic, addCustomThmTactic, and addCustomThmlistTactic.
The difference between def and addCustom[*]Tactic is in where the elements are
added to the semantic parser’s grammar. Function def uses SEMPRE’s
generalization algorithm and adds rules to the grammar that may contain non-
terminals (e.g. follows from [ $thms ]). Function addCustomTactic always adds
a new terminal to the grammar.
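The distinction can be sketched as follows. The grammar representation and function names here are our hypothetical illustration (not the SEMPRE API), and the metis_tac template is an invented example definition.

```python
# Hypothetical contrast of the two extension mechanisms: def may induce
# generalized rules containing non-terminals, whereas addCustomTactic
# adds a single new terminal to the $tactic category.
terminals = {"$tactic": {"gen_tac", "strip_tac"}}
rules = []  # (pattern, template) pairs induced by def

def add_custom_tactic(name):
    """addCustomTactic: register NAME verbatim as a $tactic terminal."""
    terminals["$tactic"].add(name)

def define(pattern, template):
    """def: add a rule; patterns may mention non-terminals like $thms."""
    rules.append((pattern, template))

add_custom_tactic("REAL_ASM_ARITH_TAC")
define("follows from [ $thms ]", "metis_tac [ $thms ]")  # invented example
```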
We explain addCustomTactic by example. Suppose a user wants to reuse an
existing linear decision procedure for real numbers (REAL_ASM_ARITH_TAC) to
close simple proof goals. Running addCustomTactic REAL_ASM_ARITH_TAC adds the
new production rule $tactic $\rightarrow$ REAL_ASM_ARITH_TAC to the SEMPRE
grammar. Tactic REAL_ASM_ARITH_TAC can then be used in subsequent calls to def
to provide Lassie-based descriptions, or immediately in nltac and nlexplain.
Now that SEMPRE accepts the decision procedure as a valid tactic, we extend
our expert automation tactic from before to try to solve a goal with this
decision procedure too:
def ‘prove with [ADD_ASSOC]‘
‘all_tac THEN ( fs [ ADD_ASSOC ] THEN NO_TAC)
ORELSE (rw [ ADD_ASSOC ] THEN NO_TAC)
ORELSE REAL_ASM_ARITH_TAC
ORELSE metis_tac [ ADD_ASSOC ]‘
Functions addCustomThmTactic and addCustomThmlistTactic work similarly,
adding grammar rules for $thm->tactic and $thm list->tactic.
### 4.2. Defining and Loading Libraries
Users can define libraries of their own Lassie tactics using the function
registerLibrary, which takes as its first input a string giving the library a
unique name, and as its second input a function of type :unit -> unit that
calls def on the definitions to be added, following Section 3.2. The defined
libraries can then be shared and loaded simply by calling the function
loadLibraries.
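A rough sketch of this registry mechanism, with hypothetical Python names standing in for LassieLib’s SML functions:

```python
# Hypothetical sketch of the library mechanism: registerLibrary stores a
# thunk of definitions under a unique name, and loadLibraries runs the
# stored thunks on demand.
registry = {}
loaded = []  # records which definitions were made, for illustration

def register_library(name, defs_fn):
    """Store DEFS_FN (a function of type unit -> unit) under NAME."""
    registry[name] = defs_fn

def load_libraries(*names):
    """Run each named library's definitions, as loadLibraries would."""
    for name in names:
        registry[name]()

register_library("realLib",
                 lambda: loaded.append("rewrite with [ $thms ]"))
load_libraries("realLib")
```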
We defined libraries for proofs about logic, natural numbers, and real numbers
from our case studies and used them in our HOL4 tutorial (Section 5).
Theorem EUCLID:
  ∀ n. ∃ p. n < p ∧ prime p
Proof
  CCONTR_TAC \\ fs[]
  \\ ‘FACT n + 1 ≠ 1‘
       by rw[FACT_LESS, neq_zero]
  \\ qspec_then ‘FACT n + 1‘ assume_tac PRIME_FACTOR
  \\ ‘∃ q. prime q ∧ q divides (FACT n + 1)‘ by fs[]
  \\ ‘q ≤ n‘ by metis_tac[NOT_LESS_EQUAL]
  \\ ‘0 < q‘ by metis_tac[PRIME_POS]
  \\ ‘q divides FACT n‘
       by metis_tac[DIVIDES_FACT]
  \\ ‘q = 1‘ by metis_tac[DIVIDES_ADDL, DIVIDES_ONE]
  \\ ‘prime 1‘ by fs[]
  \\ fs[NOT_PRIME_1]
QED
Theorem EUCLID: (* Lassie *)
  ∀ n. ∃ p. n < p ∧ prime p
Proof
  nltac‘
    suppose not. simplify.
    we can derive ’FACT n + 1 <> 1’
      from [FACT_LESS, neq_zero].
    thus PRIME_FACTOR for ’FACT n + 1’.
    we further know
      ’∃ q. prime q and q divides (FACT n + 1)’.
    show ’q <= n’ using [NOT_LESS_EQUAL].
    show ’0 < q’ using [PRIME_POS].
    show ’q divides FACT n’ using [DIVIDES_FACT].
    show ’q = 1’ using [DIVIDES_ADDL, DIVIDES_ONE].
    show ’prime 1’ using (simplify).
    [NOT_PRIME_1] solves the goal.‘
QED
Figure 6. HOL4 proof (left) and Lassie proof (right) of Euclid’s theorem
## 5\. Case Studies
We evaluate Lassie on three case studies and show how it can be used for
developing a HOL4 tutorial. In the paper, we show only the main theorems for
the case studies, but the full developments can be found in the Lassie
repository.
### 5.1. Case Study: Proving Euclid’s Theorem
First, we prove Euclid’s theorem from the HOL4 tutorial (Slind and Norrish,
2008) that is distributed with the HOL4 theorem prover documentation. Euclid’s
theorem states that there are infinitely many prime numbers. Its HOL
equivalent states that for any natural number $n$, there exists a natural
number $p$ that is greater than $n$ and prime.
To prove the final theorem, shown in Figure 6, we have proven 19 theorems in
total. To prove these theorems, we defined a total of 22 new tactics using
LassieLib.def. Some tactics have been used only once, but, for example, the
tactic [...] solves the goal was reused 16 times.
Another example is the tactic thus PRIME_FACTOR for ’FACT n + 1’, which
introduces a specialized version of the theorem PRIME_FACTOR, proving the
existence of a prime factor for every natural number. Note how the tactic
description can freely mix text descriptions with the parameters for the
underlying tactic. Similarly, the first step of the HOL4 proof reads
CCONTR_TAC, which initiates a proof by contradiction. For an untrained user,
figuring out and remembering this name can be cumbersome, even though the user
might know the high-level proof step. Instead, in Lassie we have used the (for
us) more intuitive name suppose not.
Finally, each sub-step of the HOL4 proof is closed using the tactic metis_tac.
For an expert user, it is obvious that metis_tac can be used, because the
expert knows that it performs first-order resolution to prove the goal. In the
Lassie proof, we hide metis_tac [] in combination with the simplification
tactics fs [] and rw[] under the description [] solves the goal. To further
automate proving simple subgoals, we combine the tactic [] solves the goal
with our Lassie tactic for proving subgoals (show ’T’ using (gen_tac)) by
defining show ’T’ using [...] as show ’T’ using ([...] solves the goal).
### 5.2. Case Study: Real and Natural Number Theorems
Next, we show how Lassie can be used in more involved proofs about both
real and natural numbers. As an example, we prove that for any natural number
$n$, the sum of the cubes of the first $n$ natural numbers is the same as the
square of the sum. The Lassie proof of the final theorem is in Figure 7.
Theorem sum_of_cubes_is_squared_sum:
  ∀ n. sum_of_cubes n = (sum n) pow 2
Proof
  nltac ‘
    induction on ’n’.
    simplify conclusion with [sum_of_cubes_def, sum_def].
    rewrite with [POW_2, REAL_LDISTRIB, REAL_RDISTRIB,
                  REAL_ADD_ASSOC].
    showing
      ’&SUC n pow 3 =
       &SUC n * &SUC n + &SUC n * sum n + sum n * &SUC n’
    closes the proof
      because (simplify conclusion with [REAL_EQ_LADD]).
    we know ’&SUC n * sum n + sum n * &SUC n =
             2 * (sum n * &SUC n)’.
    rewrite once [<- REAL_ADD_ASSOC].
    rewrite last assumption.
    rewrite with [pow_3, closed_form_sum, real_div,
                  REAL_MUL_ASSOC].
    we know ’2 * &n * (1 + &n) * inv 2 =
             2 * inv 2 * &n * (1 + &n)’.
    rewrite last assumption.
    simplify conclusion with [REAL_MUL_RINV].
    we show ’n + 1 = SUC n’ using (simplify conclusion).
    rewrite last assumption. simplify conclusion.
    we show ’2 = (SUC (SUC 0))’
      using (simplify conclusion).
    rewrite last assumption. rewrite last assumption.
    rewrite with [EXP].
    we show ’SUC n = n + 1’ using (simplify conclusion).
    rewrite last assumption.
    rewrite with [GSYM REAL_OF_NUM_ADD, pow_3].
    rewrite with [REAL_OF_NUM_ADD, REAL_OF_NUM_MUL,
                  MULT_RIGHT_1, RIGHT_ADD_DISTRIB,
                  LEFT_ADD_DISTRIB, MULT_LEFT_1].
    simplify.‘
QED
Figure 7. Lassie proof that the sum of the natural numbers from $1$ to $n$
cubed is the same as the square of their sum
We have proven a total of 5 theorems: two (real-numbered) binomial laws, the
closed form for summing the first $n$ natural numbers, a side lemma on
exponentiation, and the main result about cubing the first $n$ numbers. All
our proofs in this case study have been performed using the HOL4 theory of
real numbers simply for convenience, as we found real number arithmetic easier
for proving theorems that involve subtractions, powers, and divisions. We
defined a total of 42 tactics by example using LassieLib.def and added 3
custom tactics using LassieLib.addCustomTactic and
LassieLib.addCustomThmTactic. Again, some of the tactics were used only once
or twice, but our Lassie tactics for rewriting with a theorem (two calls to
LassieLib.def to support rewriting from left to right, and right to left) are
reused 13 times within the proofs.
This Lassie proof shows how Lassie can be extended with custom tactics. Our
restricted core grammar does not include HOL4’s decision procedure for reals.
Nevertheless, a user may want to provide this tactic as part of some
automation. Because Lassie supports on-the-fly grammar extensions, we add the
decision procedure for reals (REAL_ASM_ARITH_TAC) to the grammar:
addCustomTactic REAL_ASM_ARITH_TAC. Having added this tactic, it can be used
just like the HOL4 tactics we support in the base grammar. Thus we define a
Lassie tactic using the decision procedure:
def ‘we know ’T’‘
‘’T’ by (REAL_ASM_ARITH_TAC ORELSE DECIDE_TAC)‘
The semantic parser now automatically generalizes the grammar rule for this
tactic, learning the rule
$tactic → we know ’$term’
  (λ t. ’t’ by (REAL_ASM_ARITH_TAC ORELSE DECIDE_TAC))
With this, we can use more complicated tactics like we know ’2 * &n * (1 + &n)
* inv 2 = 2 * inv 2 * &n * (1 + &n)’.
In general, combining the extensibility of Lassie and the generalization of
SEMPRE allows us to support arbitrary settings where trained experts can
implement domain-specific decision procedures and provide simple tactic
descriptions to novice users who want to use them in a HOL4 proof,
essentially decoupling the automation from its implementation. Equally, any
user can define personalized and more intuitive names for often-used tactics.
### 5.3. Case Study: Naturalizing a Library Proof
In our final example, we show how Lassie can be integrated into larger
developments, by proving a soundness theorem from a library of FloVer (Becker
et al., 2018). FloVer is a verified checker for finite-precision roundoff
error bounds implemented in HOL4. Its HOL4 definitions and proofs span
approximately 10,000 lines of code, and the interval library is one of the
critical components, used in most of the soundness proofs. As the
FloVer proofs are performed over real numbers, we reuse the tactic
descriptions from our previous example and do not need to add additional
definitions. In Figure 8 we show that if we have an interval $iv$, and a real
number $a\in iv$, then the inverse of $a$ is contained in the inverse of $iv$.
Theorem interval_inversion_valid:
  ∀ iv a.
    (SND iv < 0 \/ 0 < FST iv) /\ contained a iv ==>
    contained (inv a) (invertInterval iv)
Proof
  nltac ‘
    introduce variables.
    case split for ’iv’.
    simplify with [contained_def, invertInterval_def].
    introduce assumptions.
    rewrite once [<- REAL_INV_1OVER].
    Next Goal.
    rewrite once [<- REAL_LE_NEG].
    we know ’a < 0’. thus ’a <> 0’.
    we know ’r < 0’. thus ’r <> 0’.
    ’inv (-a) <= inv (-r) <=> (-r) <= -a’ using
      (use REAL_INV_LE_AMONO THEN simplify).
    resolve with REAL_NEG_INV.
    rewrite assumptions.
    follows trivially.
    Next Goal.
    rewrite once [<- REAL_LE_NEG].
    we know ’a < 0’. thus ’a <> 0’. we know ’q <> 0’.
    resolve with REAL_NEG_INV.
    ’inv (-q) <= inv (-a) <=> (-a) <= (-q)’ using
      (use REAL_INV_LE_AMONO THEN simplify
       THEN trivial).
    rewrite assumptions. follows trivially.
    Next Goal.
    rewrite with [<- REAL_INV_1OVER].
    ’inv r <= inv a <=> a <= r’ using
      (use REAL_INV_LE_AMONO THEN trivial).
    follows trivially.
    Next Goal.
    rewrite with [<- REAL_INV_1OVER].
    ’inv a <= inv q <=> q <= a’ using
      (use REAL_INV_LE_AMONO THEN trivial).
    follows trivially.‘
QED
Figure 8. Soundness of FloVer’s interval inversion in Lassie
This example shows that Lassie’s tactic definitions are expressive enough to
build libraries of common tactic descriptions that can be shared between
projects.
Definition sum_def:
  sum (n:num) = if n = 0 then 0 else sum (n-1) + n
End
Theorem closed_form_sum:
  ∀ n. sumEq n = n * (n + 1) DIV 2
Proof
  nlexplain()
  Induction on ’n’.
  simplify with [sumEq_def].
  simplify with [sumEq_def, GSYM ADD_DIV_ADD_DIV].
  ’2 * SUC n + n * (n + 1) = SUC n * (SUC n + 1)’
    suffices to show the goal.
  show ’SUC n * (SUC n + 1) =
        (SUC n + 1) + n * (SUC n + 1)’
    using (simplify with [MULT_CLAUSES]).
  simplify.
  show ’n * (n + 1) = SUC n * n’
    using (trivial using [MULT_CLAUSES, MULT_SYM]).
  rewrite assumptions. simplify.
QED
Induct on ‘n‘
>- (fs [sum_def])
>- (fs [sum_def, GSYM ADD_DIV_ADD_DIV] \\
    ‘2 * SUC n + n * (n + 1) = SUC n * (SUC n + 1)‘
      suffices_by (fs []) \\

    0.  sum n = n * (n + 1 DIV 2)
   ------------------------------
    2 * SUC n + n * (n + 1) = SUC n * (SUC n + 1)
|>
Figure 9. Intermediate state of nlexplain in our tutorial
### 5.4. HOL4 Tutorial
We have used Lassie to write a new tutorial for HOL4 with the goal of
decoupling the learning of the basic structure of formal proofs from the
particular syntax and tactic names of HOL4, thereby easing the learning
curve. Our tutorial is based on the existing HOL4 tutorial (Slind and Norrish,
2008) and the HOL4 emacs interaction guide.
First, the new HOL4 user uses nltac and the Lassie tactics that we defined for
our three case studies (i.e. loads them as libraries) to do the proofs. He or
she can thus learn the syntax of theorems and definitions, as well as the
structure of proofs, without having to also learn the often unintuitive tactic
names of the proofs. For example, we show the proof of the closed form for
summing the first $n$ natural numbers from our tutorial in Figure 10. The
example proof shows Lassie tactics that abstract from the tactic names, but
not the theorem names. Lassie has limited support for defining descriptions of
theorems, similar to how Lassie tactics are defined, which could be used when
developing individual languages.
In the second step, the new HOL4 user is introduced to the HOL4 tactics using
nlexplain. For instance, they can step through the proof and see the HOL4
tactics underlying each Lassie tactic. We show an example in Figure 9. The
left-hand side shows the HOL4 proof state obtained by applying Lassie tactics
with nlexplain, and the right-hand side the modified HOL4 REPL with the
current proof goal and a partial HOL4 tactic script. The red dashed box on the
left-hand side marks all Lassie tactics that have been passed to nlexplain.
Our tutorial is split into six separate parts. We start by explaining how HOL4
(and Lassie) are installed and configured on a computer such that the tutorial
can be followed interactively. Next, we explain how one interacts with HOL4 in
an interactive session. The first technical section uses the proof from Figure
10 as a first example of an interactive HOL4 proof, using only nltac to
perform proofs. Having introduced the reader to the basics of interactive
proofs in HOL4, we show how a simple library of proofs can be developed. The
library is a re-implementation of our first case study, and hence follows the
structure of the original HOL4 tutorial. It spans a total of two definitions,
and 13 theorems. For each of the theorems we show a proof using nltac. Only
after these introductory sections, where a user will already have gained an
intuition both about how one interacts with the HOL4 REPL and how proofs are
stored in reusable theories, does the next section introduce nlexplain and
explain how HOL4 proofs are performed with plain HOL4 tactics. Finally, the
tutorial concludes with some helpful tips and tricks that we have collected.
We defined the tutorial using definitions that we personally found intuitive.
However, Lassie’s ability to define tactics by example allows each teacher to
define their own individual language in a straightforward way.
Theorem closed_form_sum:
  ∀ n. sum n = (n * (n + 1)) DIV 2
Proof
  nltac‘
    Induction on ’n’.
    Goal ’sum 0 = 0 * (0 + 1) DIV 2’.
      simplify.
    End.
    Goal ’sum (SUC n) = SUC n * (SUC n + 1) DIV 2’.
      use [sum_def, GSYM ADD_DIV_ADD_DIV] to simplify.
      ’2 * SUC n + n * (n + 1) = SUC n * (SUC n + 1)’
        suffices to show the goal.
      show ’SUC n * (SUC n + 1) =
            (SUC n + 1) + n * (SUC n + 1)’
        using (simplify with [MULT_CLAUSES]).
      simplify.
      show ’n * (n + 1) = SUC n * n’
        using (trivial using [MULT_CLAUSES, MULT_SYM]).
      ’2 * SUC n = SUC n + SUC n’ follows trivially.
      ’n * (SUC n + 1) = SUC n * n + n’ follows trivially.
      rewrite assumptions. simplify.
    End.‘
QED
Figure 10. Example proof of the closed form for summing $n$ numbers using
Lassie in our HOL4 tutorial
## 6\. Related Work
In this section, we review approaches designed to ease the user burden when
writing proofs in an ITP.
#### Hammers
So-called “hammers” use automated theorem provers (ATP) to discharge proof
obligations by translating a proof goal into the logic of an ATP and a proof
back into the logic of the interactive prover. Examples are Sledgehammer
(Paulson and Susanto, 2007) for Isabelle, HolyHammer (Kaliszyk and Urban,
2014) for HOL4, and a hammer for Coq (Czajka and Kaliszyk, 2018). A general
overview is given in the survey paper by Blanchette et al. (Blanchette et al.,
2016). Some of these systems use learning to predict which premises need to be
sent to the ATP, in order not to overwhelm the prover. In contrast to Lassie,
the main focus of such hammers is not to make proofs more accessible but to
solve simple proof obligations using a push-button method. As Lassie is open
to adding custom decision procedures, we think that integrating a hammer with
Lassie could provide even richer and easier-to-define tactic languages by
automating simple proofs.
#### Learning-based
While hammers try to automate the proof with the help of automated theorem
provers, other systems use statistical methods to recommend tactics to the end
user to finish a proof. DeepHOL (Bansal et al., 2019) learns a neural network
that, given a proof goal, predicts a potential next tactic in HOL Light.
GamePad (Huang et al., 2019) and the work by Yang et al. (Yang and Deng, 2019)
similarly use machine learning to predict tactics for Coq. TacticToe (Gauthier
et al., 2020) uses A* search, guided by previous tactic-level proofs, to
predict tactics in HOL4.
#### Programming Language-based
Languages like Eisbach (Matichuk et al., 2016), Ltac (Delahaye, 2000), Ltac2
(Pédrot, 2019) and Mtac2 (Kaiser et al., 2018) use rigorous programming
language foundations to give more control to expert users when writing
tactics. Eisbach and Ltac are tactic languages similar to the one of HOL4.
Mtac2 formalizes “Coq in Coq”, allowing tactics to be defined as Coq programs,
whereas Ltac2 is a strongly typed language for writing Coq tactics. The tactic
language of the Lean theorem prover (de Moura et al., 2015) additionally
implements equational reasoning on top of its tactics, which allows for more
textbook-like proofs. Recently, the Lean theorem prover has also been extended
with a hygienic macro system (Ullrich and de Moura, 2020). A core contribution
of their work is excluding unintentional capturing in tactic programming, thus
making tactic programming more robust. In Lassie, we did not experience any
hygiene issues, as definition by example relies on the semantic parser to do
the generalization and as such keeps variable levels separate. Using any of
the languages above requires all the desired generality to be stated
explicitly in the tactic definition, usually in the form of function
definitions. In contrast, Lassie’s definition by example makes it easier to
define new tactics and generalizes automatically.
#### Natural Language Interfaces
Several systems provide an interface to a theorem prover that is as close as
possible to natural language. Languages like Isar (Wenzel, 1999), Mizar
(Bancerek et al., 2015), and the work by Corbineau (Corbineau, 2007) follow a
similar approach to Lassie by having an extended parser. Their supported
naturalized proof descriptions are fixed to the authors’ style of declarative
proofs, and extending or changing them would require editing the tool code.
In contrast, Lassie is extensible enough to support different tactic languages
that can coexist without interfering if not loaded simultaneously.
The Naproche system (Frerix and Koepke, 2019) provides a controlled natural
language, which maps natural language utterances into first-order logic proof
obligations, to be checked by an (automated) theorem prover (e.g. E Prover
(Schulz, 2013)). The extensions to Alfa by Hallgren et al. (Hallgren and
Ranta, 2000) also use natural language processing technology to extend the
Alfa proof editor with a more natural language. The book by Ganesalingam
(Ganesalingam, 2013) gives a comprehensive explanation of the relation between
natural language and mathematics. Similarly, Ranta et al. (Ranta, 2011)
provide more sophisticated linguistic techniques to translate between natural
language and predicate logic. An orthogonal approach to the above is presented
in the work by Coscoy et al. (Coscoy et al., 1995). Instead of translating
from natural language to tactics, they provide a translation from Coq proof
terms to natural language. The main goal of these systems is to provide an
interface that supports as much natural language as possible. A major
limitation, however, is that their grammars are fixed, i.e. only the
naturalized tactics implemented by the authors are available. Our work does not
strive to be a full natural language interface; instead, it provides an
extensible grammar, which adapts to different users and proofs.
## 7. Conclusion
We have presented the Lassie tactic language framework for the HOL4 theorem
prover. Using a semantic parser with an extensible grammar, Lassie learns
individualized tactics from user-provided examples. Our example case studies
show that these learned tactics can be easily reused across different proofs
and can ease both the writing and reading of HOL4 proofs by providing a more
intuitive, personalized interface to HOL4’s tactics.
###### Acknowledgements.
The authors would like to thank Magnus Myreen, Zachary Tatlock, and the
anonymous reviewers of ITP 2020 and CPP 2021 for providing feedback on Lassie
and (initial) drafts of the paper. Gavran and Majumdar were supported in part
by the DFG project 389792660 TRR 248–CPEC and by the European Research Council
under the Grant Agreement 610150 (ERC Synergy Grant ImPACT).
## References
* Bancerek et al. (2015) Grzegorz Bancerek, Czeslaw Bylinski, Adam Grabowski, Artur Kornilowicz, Roman Matuszewski, Adam Naumowicz, Karol Pak, and Josef Urban. 2015. Mizar: State-of-the-art and Beyond. In _International Conference on Intelligent Computer Mathematics (CICM)_. https://doi.org/10.1007/978-3-319-20615-8_17
* Bansal et al. (2019) Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. 2019. HOList: An Environment for Machine Learning of Higher Order Logic Theorem Proving. In _International Conference on Machine Learning (ICML)_.
* Becker et al. (2018) Heiko Becker, Nikita Zyuzin, Raphaël Monat, Eva Darulova, Magnus O. Myreen, and Anthony Fox. 2018. A Verified Certificate Checker for Finite-Precision Error Bounds in Coq and HOL4. In _FMCAD (Formal Methods in Computer Aided Design)_. https://doi.org/10.23919/FMCAD.2018.8603019
* Berant et al. (2013) Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In _Conference on Empirical Methods in Natural Language Processing (EMNLP)_.
* Blanchette et al. (2016) Jasmin Christian Blanchette, Cezary Kaliszyk, Lawrence C. Paulson, and Josef Urban. 2016. Hammering towards QED. _Journal of Formalized Reasoning_ 9, 1 (2016). https://doi.org/10.6092/issn.1972-5787/4593
* Corbineau (2007) Pierre Corbineau. 2007. A Declarative Language for the Coq Proof Assistant. In _International Workshop on Types for Proofs and Programs (TYPES)_. https://doi.org/10.1007/978-3-540-68103-8_5
* Coscoy et al. (1995) Yann Coscoy, Gilles Kahn, and Laurent Théry. 1995. Extracting Text from Proofs. In _International Conference on Typed Lambda Calculi and Applications (TLCA)_. https://doi.org/10.1007/BFb0014048
* Czajka and Kaliszyk (2018) Łukasz Czajka and Cezary Kaliszyk. 2018. Hammer for Coq: Automation for dependent type theory. _Journal of Automated Reasoning_ 61, 1-4 (2018). https://doi.org/10.1007/s10817-018-9458-4
* de Moura et al. (2015) Leonardo Mendonça de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von Raumer. 2015. The Lean Theorem Prover (System Description). In _International Conference on Automated Deduction (CADE)_. https://doi.org/10.1007/978-3-319-21401-6_26
* Delahaye (2000) David Delahaye. 2000. A Tactic Language for the System Coq. In _International Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR)_. https://doi.org/10.1007/3-540-44404-1_7
* Frerix and Koepke (2019) Steffen Frerix and Peter Koepke. 2019. Making Set Theory Great Again: The Naproche-SAD Project. _Conference on Artificial Intelligence and Theorem Proving (AITP)_ (2019).
* Ganesalingam (2013) Mohan Ganesalingam. 2013. _The Language of Mathematics - A Linguistic and Philosophical Investigation_. Lecture Notes in Computer Science, Vol. 7805. Springer. https://doi.org/10.1007/978-3-642-37012-0
* Gauthier et al. (2020) Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. 2020. TacticToe: Learning to Prove with Tactics. _Journal of Automated Reasoning_ (2020).
* Gonthier (2008) Georges Gonthier. 2008. Formal proof–the four-color theorem. _Notices of the AMS_ 55, 11 (2008).
* Gonthier and Mahboubi (2010) Georges Gonthier and Assia Mahboubi. 2010. An introduction to small scale reflection in Coq. _Journal of Formalized Reasoning_ 3, 2 (2010). https://doi.org/10.6092/issn.1972-5787/1979
* Hales (2006) Thomas C. Hales. 2006. Introduction to the Flyspeck Project. In _Mathematics, Algorithms, Proofs_.
* Hallgren and Ranta (2000) Thomas Hallgren and Aarne Ranta. 2000. An Extensible Proof Text Editor. In _International Conference on Logic for Programming and Automated Reasoning (LPAR)_. https://doi.org/10.1007/3-540-44404-1_6
* Harrison (2009) John Harrison. 2009. HOL Light: An Overview. In _International Conference on Theorem Proving in Higher Order Logics (TPHOL)_.
* Huang et al. (2019) Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. 2019. GamePad: A Learning Environment for Theorem Proving. In _International Conference on Learning Representations (ICLR)_.
* Kaiser et al. (2018) Jan-Oliver Kaiser, Beta Ziliani, Robbert Krebbers, Yann Régis-Gianas, and Derek Dreyer. 2018. Mtac2: typed tactics for backward reasoning in Coq. _Proc. ACM Program. Lang._ 2, ICFP (2018), 78:1–78:31. https://doi.org/10.1145/3236773
* Kaliszyk and Urban (2014) Cezary Kaliszyk and Josef Urban. 2014. Learning-Assisted Automated Reasoning with Flyspeck. _Journal of Automated Reasoning_ 53, 2 (2014). https://doi.org/10.1007/s10817-014-9303-3
* Kiam Tan et al. (2019) Yong Kiam Tan, Magnus O. Myreen, Ramana Kumar, Anthony Fox, Scott Owens, and Michael Norrish. 2019. The verified CakeML compiler backend. _Journal of Functional Programming_ 29 (2019). https://doi.org/10.1017/S0956796818000229
* Klein et al. (2009) Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock, Philip Derrin, Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, et al. 2009. seL4: Formal verification of an OS kernel. In _ACM Symposium on Operating Systems Principles (SOSP)_. https://doi.org/10.1145/1629575.1629596
* Leroy (2009) Xavier Leroy. 2009. Formal Verification of a Realistic Compiler. _Commun. ACM_ 52, 7 (2009). https://doi.org/10.1145/1538788.1538814
* Liang (2016) Percy Liang. 2016. Learning executable semantic parsers for natural language understanding. _Commun. ACM_ 59, 9 (2016). https://doi.org/10.1145/2866568
* Matichuk et al. (2016) Daniel Matichuk, Toby C. Murray, and Makarius Wenzel. 2016. Eisbach: A Proof Method Language for Isabelle. _Journal of Automated Reasoning_ 56, 3 (2016). https://doi.org/10.1007/s10817-015-9360-2
* Nipkow et al. (2002) Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. 2002. _Isabelle/HOL - A Proof Assistant for Higher-Order Logic_. Lecture Notes in Computer Science, Vol. 2283. Springer. https://doi.org/10.1007/3-540-45949-9
* Paulson and Susanto (2007) Lawrence C. Paulson and Kong Woei Susanto. 2007. Source-Level Proof Reconstruction for Interactive Theorem Proving. In _International Conference on Theorem Proving in Higher Order Logics (TPHOL)_. https://doi.org/10.1007/978-3-540-74591-4_18
* Pédrot (2019) Pierre-Marie Pédrot. 2019. Ltac2: Tactical Warfare. _CoqPL 2019_ (2019).
* Ranta (2011) Aarne Ranta. 2011. Translating between Language and Logic: What Is Easy and What Is Difficult. In _International Conference on Automated Deduction (CADE)_. https://doi.org/10.1007/978-3-642-22438-6_3
* Schulz (2013) Stephan Schulz. 2013. System Description: E 1.8. In _International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR)_. https://doi.org/10.1007/978-3-642-45221-5_49
* Slind and Norrish (2008) Konrad Slind and Michael Norrish. 2008. A Brief Overview of HOL4. In _International Conference on Theorem Proving in Higher Order Logics (TPHOL)_. https://doi.org/10.1007/978-3-540-71067-7_6
* Solovyev and Hales (2013) Alexey Solovyev and Thomas C. Hales. 2013. Formal Verification of Nonlinear Inequalities with Taylor Interval Approximations. In _NASA Formal Methods Symposium (NFM)_. https://doi.org/10.1007/978-3-642-38088-4_26
* Ullrich and de Moura (2020) Sebastian Ullrich and Leonardo de Moura. 2020. Beyond Notations: Hygienic Macro Expansion for Theorem Proving Languages. In _International Joint Conference on Automated Reasoning (IJCAR)_. https://doi.org/10.1007/978-3-030-51054-1_10
* Wang et al. (2017) Sida I. Wang, Samuel Ginn, Percy Liang, and Christopher D. Manning. 2017. Naturalizing a Programming Language via Interactive Learning. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL_. https://doi.org/10.18653/v1/P17-1086
* Wenzel (1999) Markus Wenzel. 1999. Isar - A Generic Interpretative Approach to Readable Formal Proof Documents. In _International Conference on Theorem Proving in Higher Order Logics (TPHOL)_. https://doi.org/10.1007/3-540-48256-3_12
* Wenzel and Paulson (2006) Markus Wenzel and Lawrence C. Paulson. 2006. Isabelle/Isar. In _The Seventeen Provers of the World_. Lecture Notes in Computer Science, Vol. 3600. Springer, 41–49. https://doi.org/10.1007/11542384_8
* Yang and Deng (2019) Kaiyu Yang and Jia Deng. 2019. Learning to Prove Theorems via Interacting with Proof Assistants. In _International Conference on Machine Learning (ICML)_.
The Empirical Under-determination Argument
Against Scientific Realism for Dual Theories
Sebastian De Haro
Institute for Logic, Language and Computation, University of
Amsterdam¹
¹Forthcoming in Erkenntnis.
Institute of Physics, University of Amsterdam
Vossius Center for History of Humanities and Sciences, University of Amsterdam
[email protected]
Abstract
This paper explores the options available to the anti-realist to defend a
Quinean empirical under-determination thesis using examples of dualities. I
first explicate a version of the empirical under-determination thesis that can
be brought to bear on theories of contemporary physics. Then I identify a
class of examples of dualities that lead to empirical under-determination. But
I argue that the resulting under-determination is benign, and is not a threat
to a cautious scientific realism. Thus dualities are not new ammunition for
the anti-realist. The paper also shows how the number of possible
interpretative options about dualities that have been considered in the
literature can be reduced, and suggests a general approach to scientific
realism that one may take dualities to favour.
###### Contents
1. Introduction
2. The Empirical Under-determination Thesis
   * 2.1 Quine’s under-determination thesis, and strategies for response
   * 2.2 Empirical equivalence
   * 2.3 Theoretical equivalence
3. Duality
   * 3.1 The Schema for dualities
   * 3.2 Examples of dualities
   * 3.3 Internal and external interpretations
4. Duality and Under-determination
   * 4.1 Scientific realism: caution and-or virtue?
   * 4.2 Theoretically inequivalent duals
   * 4.3 Empirically equivalent duals are under-determined
   * 4.4 Obtaining interpretations by abstraction
   * 4.5 Trouble for scientific realism?
5. Conclusion
6. Acknowledgements
7. References
## 1 Introduction
Over the last twenty to thirty years, dualities have been central tools in
theory construction in many areas of physics: from statistical mechanics to
quantum field theory to quantum gravity. A duality is, roughly speaking, a
symmetry between two (possibly very different-looking) theories. So in
physics: while a symmetry typically maps a state of the system into another
appropriately related state (and likewise for quantities); in a duality, an
entire theory is mapped into another appropriately related theory. And like
for symmetries, there is a question of under what conditions dual theories
represent empirically equivalent situations. Indeed, in some cases, physicists
claim that dual pairs of theories describe the very same physical, not just
the very same empirical, facts (more on this in Section 3).
Thus there is a natural question of whether dualities can generate interesting
examples of empirical under-determination, or under-determination of theory by
empirical data.
The empirical under-determination thesis says that, roughly speaking,
‘physical theory is underdetermined even by all possible observations…
Physical theories… can be logically incompatible and empirically equivalent’
(Quine, 1970: p. 179).² The qualification ‘all possible’ distinguishes this
type of under-determination thesis from the under-determination of theories by
the available evidence so far, also called ‘transient under-determination’
(Sklar, 1975: p. 380), which is of course a very common phenomenon. Transient
under-determination tends to blur the distinction between the limits of our
current state of knowledge and understanding of a theory vs. the theory’s
intrinsic limitations. Furthermore, under-determination by all possible
evidence is closer to discussions of dualities. For these reasons, in this
paper I restrict attention to the original Quinean under-determination thesis.
Despite the wide philosophical interest of the empirical under-determination
thesis, it is controversial whether there are any genuine examples of it
(setting aside under-determination ‘by the evidence so far’). Quine himself
regarded this as an ‘open question’. One well-known example is the various
versions of non-relativistic quantum mechanics, including its different
interpretations; however, it is controversial whether these are cases of
under-determination by all the possible evidence, or by all the evidence so
far. Likewise, Laudan and Leplin (1991: p. 459) say that most examples of
under-determination are contrived and limited in scope:
> It is noteworthy that contrived examples alleging empirical equivalence
> always involve the relativity of motion; it is the impossibility of
> distinguishing apparent from absolute motion to which they owe their
> plausibility. This is also the problem in the pre-eminent historical
> examples, the competition between Ptolemy and Copernicus, which created the
> idea of empirical equivalence in the first place, and that between Einstein
> and H. A. Lorentz.
But dualities certainly go well beyond relative motion and the interpretation
of non-relativistic quantum mechanics. For they are at the centre of theory
construction in theoretical physics: and so, if they turned out to give cases
of under-determination, this would show that the problem of under-
determination is at the heart of current scientific research. And since
philosophers have thought extensively about under-determination, this old
philosophical discussion could contribute to the understanding of current
developments in theoretical physics.
Furthermore, empirical under-determination is also one of the arguments that
have been mounted against scientific realism. Thus if dualities turn out to
not illustrate under-determination, then this particular argument against
scientific realism would be undermined. Either way, the question of whether
dualities give cases of under-determination deserves scrutiny.
So far as I know, and apart from an occasional mention, the import of
dualities for the under-determination debate has so far only been studied in a
handful of previous papers: see early papers by Dawid (2006, 2017a), Rickles
(2011, 2017), Matsubara (2013) and recent discussions by Read (2016) and Le
Bihan and Read (2018).³ For a discussion of the import of dualities for the
question of scientific realism, see Dawid (2017a). Given that a rich Schema
for dualities now exists equipped with the necessary notions for analysing
under-determination (see De Haro and Butterfield (2017), De Haro (2019,
2020), and Butterfield (2020)), and several cases of rigorously proven
dualities also exist, the time is ripe for a study of the question of
empirical under-determination vis-à-vis dualities.
In this paper, I aim to undertake a study of the main options available to
produce examples of empirical under-determination using dualities. Thus,
rather than giving a response to the various under-determination theses or
defending scientific realism, my aim is to analyse whether dualities offer
genuine examples of under-determination—as it is reasonable to expect that
they do. We will find that, although the examples of dualities in physics do
illustrate the under-determination thesis, they do so in a benign way, i.e.
they are not a threat to—cautious forms of—scientific realism. In particular,
I will argue that—leaving aside cases of transient under-determination
(discussed for dualities in Dawid (2006))—not all of the interpretative
options that have been considered
by Matsubara (2013) and Le Bihan and Read (2018) are distinct or relevant
options for the problem of empirical under-determination, and that ultimately
there is a single justified option for the cautious scientific realist. Thus
dualities may be taken to favour a cautious approach to scientific realism.
In Section 2, I introduce the Quinean empirical under-determination thesis.
Section 3 introduces the Schema for duality. Section 4 then explores whether
dualities give cases of under-determination, and whether scientific realism is
in trouble. Section 5 concludes.
## 2 The Empirical Under-determination Thesis
This Section introduces the Quinean empirical under-determination thesis, and
the allied concepts of empirical and theoretical equivalence.
### 2.1 Quine’s under-determination thesis, and strategies for response
The word ‘under-determination’ is notoriously over-used in philosophy of
science. The under-determination thesis that I am concerned with here is what
Quine (1975: p. 314) called ‘empirical under-determination’, which is not to
be confused with the Duhem-Quine thesis, or holism: indeed a quite different
doctrine.⁶ Lyre (2011: p. 236) gives an interesting discussion of the
relation and contrast between under-determination, on the one hand, and Duhem-
Quine holism and Humean under-determination, on the other. Quine gives several
different formulations of his own thesis, of which the simplest version is:
Quinean under-determination₀: two theory formulations are under-determined if
they are empirically equivalent but logically incompatible.
Under-determination theses are prominent as arguments against scientific
realism. For if the same empirical facts can be described by two theories that
contradict one another, why should we believe what one of the theories
says?⁷ I concur with Stanford (2006: p. 17) in rejecting the use of
fictitious examples to put forward under-determination theses: ‘[T]he critics
of underdetermination have been well within their rights to demand that
serious, nonskeptical, and genuinely distinct empirical equivalents to a
theory actually be produced before they withhold belief in it and refusing to
presume that such equivalents exist when none can be identified.’
There are, roughly, two strategies for realists to try to respond to the
challenge of empirical under-determination, as follows:
(i) One can try to undercut the under-determination threat through appropriate
modifications of one’s notions of equivalence, so that on the correct notions
there is no under-determination after all, i.e. what one thought were examples
of empirical under-determination turn out not to be. Since
under-determination₀ involves two notions of equivalence in tension
(under-determination “lives in the space between empirical and logical equivalence”),
this can be done in two ways. First, one can try to argue that an appropriate
notion of empirical equivalence is sufficiently strong, so that the class of
potential examples of under-determination is reduced (i.e. because the two
theory formulations in the putative example are not empirically equivalent).
Or one can try to argue that the appropriate notion of logical equivalence
(or, more generally, theoretical equivalence: see Section 2.3) is sufficiently
weak, so that the class of potential examples of under-determination is
reduced (i.e. because the criterion of logical equivalence adopted is liberal,
and makes the pairs of theory formulations in the putative examples logically
equivalent).
(ii) One can try to use alternative assessment criteria that do not involve
modifications of the notions of equivalence. These criteria may be empirical,
super-empirical, or even non-empirical. They are often forms of ampliative
inference which offer more “empirical support” to one theory than the other;
or one theory may be “better confirmed by the evidence than the other”
(Laudan, 1990: p. 271).
I take Quine’s challenge of under-determination₀ to be primarily about (i),
not (ii). This follows from the formulation of the challenge itself. Thus my
position is that, in so far as Quinean under-determination₀ poses a challenge
to scientific realism, one must aspire to meet it on its own terms, i.e. by
adopting the first strategy.
If (i) turns out to fail, so that we need to adopt strategy (ii) instead, this
might still not be a blow to scientific realism, since (ii) can indeed solve
the empirical under-determination problems that the practicing scientist might
be concerned with. In particular, if we adopt the strategy (i), we may still
be left with some cases of under-determination, and then we may, and should,
still ask: which of these two theory formulations is better confirmed, and
which one should we further develop or accept? That is, the strategy (ii)
remains relevant.
Also, one should note that, although the strategy (i) is semantic and (ii) is
epistemic, epistemic considerations are relevant to both approaches, since we
are interested in theories that are proposed as candidate descriptions of the
actual world. Thus in various parts of the paper, when I talk about holding a
certain thesis (e.g. giving a verdict of theoretical equivalence), I will also
consider the justification that we have for this thesis.
Fortunately for realists, cases of under-determination have so far proved hard to
come by. Quine (1975: p. 327) could find no good examples, and concluded that
it is an ‘open question’ whether such alternatives to our best ‘system of the
world’ exist. (See also Earman (1993: pp. 30-31)).⁸ Laudan and Leplin’s
(1991: p. 458) example, the “bason”, illustrates how hard-pressed the group of
philosophers involved in the empirical under-determination debate have
sometimes been to find what remotely look like genuine physics examples. The
“bason” is a mythical particle invented to detect absolute motion: it
fortuitously arises as a result of absolute motion, and ‘the positive absolute
velocity of the universe represents energy available for bason creation’.
Unfortunately, this cunning philosophical thought experiment hardly deserves
the name ‘scientific theory’. See my endorsement of Stanford’s critique of the
use of fictitious theories, in footnote 7. As late as 2011, Lyre (2011: p.
236) could write that empirical under-determination ‘suffers from a severe but
rarely considered problem of missing examples’.⁹ Lyre (2011: pp. 237-241)
contains a useful classification of some available examples. I agree with
Lyre’s verdict about most of these examples being cases of transient under-
determination, so that ‘in retrospect such historic cases appear mainly as
artefacts of incomplete scientific knowledge—and do as such not provide really
worrisome… cases’ of empirical under-determination (p. 240).
In the rest of the paper, I will discuss strategy (i), and how dualities
address this problem of missing examples. As we will see, my analysis will in
effect be as follows. (A) I will only minimally strengthen the notion of
empirical equivalence (in Section 2.2), although I will also use this notion
in a liberal way (in Section 4.3). (B) I will also strengthen the notion of
equivalence that is to replace Quine’s condition of logical equivalence. Thus
the effects of my two modifications on the empirical under-determination
thesis are in tension with each other. (A) strengthens empirical equivalence,
which should help strategy (i) (cf. the beginning of this Section), but I will
also allow a liberal use of it, which again runs against (i). (B) is a
substantial strengthening of the logical criteria of equivalence. So the
overall effect of my analysis will not necessarily be good news for scientific
realists—and this is of course the case we should consider, if we are to make
the challenge to scientific realism as strong as possible. My liberal use of
(A) will be motivated by how physicists use dualities, and by a discussion of
the semantic and syntactic conceptions of empirical equivalence. The
strengthening (B) will also be based on work on dualities, and how dualities
give us a plausible criterion of theory individuation, that was developed
independently of the question of empirical under-determination.
### 2.2 Empirical equivalence
Quine’s (1975: p. 319) criterion of empirical equivalence is syntactic: two
theories are empirically equivalent if they imply the same observational
sentences, also called observational conditionals, for all possible
observations—present, past, future or ‘pegged to inaccessible place-times’ (p.
234).
Another influential and, as we will see, complementary account of the meaning
of ‘empirical’ is by van Fraassen (1980: p. 64), who puts it thus:
> To present a theory is… to present certain parts of those models (the
> empirical substructures) as candidates for the direct representation of
> observable phenomena. The structures which can be described in experimental
> and measurement reports we call appearances: the theory is empirically
> adequate if it has some model such that all appearances are isomorphic to
> empirical substructures of that model.
Van Fraassen famously restricts the scope of ‘observable phenomena’ to
observation by the unaided human senses. Accordingly, his mention of
‘experimental and measurement reports’ is restricted to certain kinds of
experiments and measurements.¹⁰ For example, van Fraassen’s conception of
observability rules out collider experiments and astronomical observations,
where the reports are based on computer-generated data that encode
observations by artificial devices. A conception of observability that is more
straightforwardly applicable to modern physics is in Lenzen (1955). Thus I
will set van Fraassen’s notion of observability aside but keep his notion of
empirical adequacy as a useful semantic alternative to Quine’s syntactic
construal of the empirical—and in this sense the two views are complementary
to each other.
In Section 4.3, I will give a judicious reading of these two criteria of
empirical equivalence, which will give us a verdict that duals, i.e. dual
theories, are empirically equivalent, as a surprising but straightforward
application of van Fraassen’s and Quine’s proposals.
### 2.3 Theoretical equivalence
The second notion entering the definition of under-determination₀—namely,
logical equivalence—was replaced by Quine, and later by others, by other
(weaker) notions, often under the heading of ‘theoretical equivalence’. In
this Section, I will introduce some of these notions, and then present my own
account, following De Haro (2019).
Note that ‘logical equivalence’ is defined in logic books as relative to a
vocabulary or signature, and so it is obviously too strict.¹¹ See, for
example, Hodges (1997: pp. 37-38). For example, one would not wish to count
French and English formulations of the theory of electrodynamics as different
theories, while they would count as logically inequivalent by the criterion in
logic books, since their vocabularies are different.
Quine also argues that logical equivalence is too strong a criterion. He
proposes the following criterion of equivalence between theory formulations:
‘I propose that we count two formulations as formulations of the same theory
if, besides being empirically equivalent, the two formulations can be rendered
identical by switching predicates in one of them’ (Quine, 1975: p. 320). He
broadens this criterion further to allow not only ‘switchings’ of terms, but
more generally ‘reconstrual of predicates’, i.e. any mapping of the lexicon of
predicates that is used by the theory, into open formulas (i.e. mapping
$n$-place predicates to $n$-variable formulas). For a formalization of this,
see Barrett and Halvorson (2016: pp. 4-6). I will follow these authors in
calling this new kind of theoretical equivalence Quine equivalence. Thus we
arrive at:
Quinean under-determination: two theory formulations are under-determined if
they are empirically equivalent but there is no reconstrual of predicates that
renders them logically equivalent (i.e. they are Quine-inequivalent).
Barrett and Halvorson (2016: pp. 6-8) argue that Quine equivalence is too
liberal a criterion for theoretical equivalence (which, in turn, means that
one ends up with a strict notion of empirical under-determination, which will
have few instantiations). Indeed, they give several examples of theories that
are equivalent according to Quine’s criterion, but that one has good reason to
consider inequivalent. They then introduce another criterion, due to Glymour
(1970), that better captures what one means by intertranslatability, i.e. the
existence of a suitable translation between two theory formulations. The
criterion is, roughly speaking, the existence of reconstrual maps “both ways”,
i.e. from $T$ to $T^{\prime}$ and back. They show that, in first order logic,
intertranslatability is equivalent to the notion of definitional equivalence:
which had already been defined in logic and advocated for philosophy of
science by Glymour (1970: p. 279). Since the criterion of intertranslatability
is Quine’s criterion taken “both ways”, it can be seen as an improvement of
it.
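To make the contrast concrete, here is a minimal, self-contained sketch of Quine's 'reconstrual of predicates' and the "both ways" intertranslatability criterion just described. The encoding, the two toy theory formulations T1 and T2, and all function names are my own illustrative choices, not from the cited works; formulas are nested tuples, and entailment is checked semantically over a two-element domain:

```python
from itertools import chain, combinations, product

DOMAIN = (0, 1)  # a small finite domain suffices for this toy check

def subsets(xs):
    """All subsets of xs, as candidate extensions of a unary predicate."""
    return [set(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def evaluate(f, interp, env):
    """Evaluate a formula in a model; interp maps predicate names to extensions."""
    if f[0] == 'not':
        return not evaluate(f[1], interp, env)
    if f[0] == 'forall':
        _, var, body = f
        return all(evaluate(body, interp, {**env, var: d}) for d in DOMAIN)
    pred, var = f  # atomic case, e.g. ('P', 'x')
    return env[var] in interp[pred]

def reconstrue(f, mapping):
    """Reconstrual of predicates: replace each atom P(x) by the open
    formula mapping[P](x) in the other signature."""
    if f[0] == 'not':
        return ('not', reconstrue(f[1], mapping))
    if f[0] == 'forall':
        return ('forall', f[1], reconstrue(f[2], mapping))
    pred, var = f
    return mapping[pred](var)

def entails(axiom, goal, preds):
    """Semantic entailment, checked over all models on the finite domain."""
    interps = [dict(zip(preds, ext))
               for ext in product(subsets(DOMAIN), repeat=len(preds))]
    return all(evaluate(goal, m, {}) for m in interps if evaluate(axiom, m, {}))

# T1, over signature {P}: axiom  forall x. P(x)
T1 = ('forall', 'x', ('P', 'x'))
# T2, over signature {Q}: axiom  forall x. not Q(x)
T2 = ('forall', 'x', ('not', ('Q', 'x')))

# Reconstruals "both ways": P(x) |-> not Q(x), and Q(x) |-> not P(x).
F = {'P': lambda v: ('not', ('Q', v))}
G = {'Q': lambda v: ('not', ('P', v))}

# Each theory proves the translation of the other's axiom, so T1 and T2
# are intertranslatable in the "both ways" sense.
assert entails(T2, reconstrue(T1, F), ['Q'])
assert entails(T1, reconstrue(T2, G), ['P'])
# Round-tripping an axiom yields a theorem of the original theory.
assert entails(T1, reconstrue(reconstrue(T1, F), G), ['P'])
```

On this toy pair the two formulations have disjoint vocabularies, so they are logically inequivalent in the strict textbook sense, yet the reconstrual maps in both directions render them intertranslatable.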
In De Haro (2019), I argued that there is an interesting project of finding
criteria of equivalence that are mostly formal, while the full project of
formal plus interpretative equivalence, which I am interested in here, requires
the consideration of ontological matters. In particular, theoretical
equivalence requires that the interpretations are the same. Thus we can give
the following definition:
Theoretical equivalence: two theory formulations are theoretically equivalent
if they are formally equivalent and, in addition, they have the same
interpretations.
The following Sections will further articulate the above definition. Thus pace
Quine, the criterion of individuation of theories that is relevant to
scientific theories is not merely formal, but is a criterion of theoretical
equivalence. Interpretation matters in science (see e.g. Coffey (2014)),
and two theories can only be said to be equivalent if they have the same
interpretations, i.e. they have the same ontology. Thus taking theoretical
equivalence as our notion of equivalence, we arrive at the following
conception of under-determination, based on Quine’s original notion, but now
with the correct criterion of individuation of physical theories:
Empirical under-determination: two theory formulations are under-determined if
they are empirically equivalent but theoretically inequivalent.
As we will see in the next Section, the most straightforward way to look for
examples of empirical under-determination is to find theories with different
interpretations. This is because, as I argued above, empirical under-
determination is primarily a matter of meaning, interpretation, and ontology.
In what follows, we will always deal with theory formulations, and so will not
need to distinguish between ‘theory’ and ‘theory formulation’. So for brevity,
I will from now on often talk of ‘theories’ instead of ‘theory formulations’.
## 3 Duality
This Section introduces the notion of duality, along with our Schema (with
Butterfield) for understanding it, and a few examples. (The full Schema is
presented in De Haro (2020), De Haro and Butterfield (2017), Butterfield
(2020), and De Haro (2019). Further philosophical work on dualities is in
Rickles (2011, 2017), Dieks et al. (2015), Read (2016), De Haro (2017),
Huggett (2017), and Read and Møller-Nielsen (2020); see also the special issue
Castellani and Rickles (2017).) The Schema encompasses our overall treatment
of dualities: it comprises the notions of bare theory, interpretation,
duality, and theoretical equivalence and its conditions (Section 3.1); and two
different kinds of interpretations, dubbed ‘internal’ and ‘external’ (Section
3.3). Section 3.2 discusses two examples of dualities.
### 3.1 The Schema for dualities
In this Section, I illustrate the Schema for dualities with an example from
elementary quantum mechanics.
Consider position-momentum duality in one-dimensional quantum mechanics,
represented on wave-functions by the Fourier transformation. For every
position wave-function with value $\psi(x)$, there is an associated momentum
wave-function whose value is denoted by $\tilde{\psi}(p)$, and the two are
related by the Fourier transformation as follows:
$\displaystyle\psi(x)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\mathrm{d}p\,\tilde{\psi}(p)\,e^{\frac{i}{\hbar}xp}$ (1)

$\displaystyle\tilde{\psi}(p)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\mathrm{d}x\,\psi(x)\,e^{-\frac{i}{\hbar}xp}\,.$
Likewise for operators: any operator can be written down in a position
representation, $A$, or in a momentum representation, $\tilde{A}$. Because of
the linearity of the Fourier transformation, all the transition amplitudes are
invariant under it, so that the following holds (using standard textbook bra-
ket notation for the inner product):
$\displaystyle\langle\psi|A|\psi^{\prime}\rangle=\langle\tilde{\psi}|\tilde{A}|\tilde{\psi}^{\prime}\rangle\,.$ (2)
Since the Schrödinger equation is linear in the Hamiltonian and in the wave-
function, the equation can itself be written down and solved in either
representation.
The Fourier transformation between position and momentum gives us an
isomorphism between the position and momentum formulations of quantum
mechanics. For Eq. (1) maps, one-to-one, each wave-function in the position
formulation to the corresponding wave-function in the momentum formulation,
and likewise for operators. Furthermore, the map preserves the dynamics, i.e.
the Schrödinger equation in position or in momentum representation, and all
the values of all the physical quantities, i.e. the transition amplitudes Eq.
(2), are invariant under this map.
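The invariance claim in Eq. (2) is, at bottom, the unitarity of the Fourier map. As an illustration (mine, not the paper's), the following Python sketch uses the discrete Fourier matrix as a finite-dimensional stand-in for the continuum transformation of Eq. (1), and checks numerically that a random self-adjoint "quantity" has the same matrix elements in both representations:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Discrete stand-in for the Fourier map of Eq. (1): the unitary DFT matrix.
F = np.fft.fft(np.eye(N), norm="ortho")

# Random "wave-functions" psi, psi' and a random self-adjoint "quantity" A.
psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi_p = rng.standard_normal(N) + 1j * rng.standard_normal(N)
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (M + M.conj().T) / 2  # self-adjoint

# The duality map: states and quantities go to the momentum representation.
psi_t, psi_p_t = F @ psi, F @ psi_p
A_t = F @ A @ F.conj().T

lhs = psi.conj() @ (A @ psi_p)        # <psi|A|psi'> in the position rep.
rhs = psi_t.conj() @ (A_t @ psi_p_t)  # <psi~|A~|psi'~> in the momentum rep.
print(np.isclose(lhs, rhs))  # True: transition amplitudes are invariant
```

The check succeeds for any choice of $\psi$, $\psi^{\prime}$ and $A$, precisely because $F^{\dagger}F=\mathbb{1}$: this is the sense in which the duality map preserves all the values of the physical quantities.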
Position-momentum duality illustrates what we mean by duality in general: a
duality is an isomorphism between theories, which I will denote by
$d:T\rightarrow T^{\prime}$. States, quantities, and dynamics are mapped onto
each other one-to-one by the duality map, while the values of the quantities,
Eq. (2), which determine what will be the physical predictions of the theory,
are invariant under the map.
Notice the dependent clause ‘which determine what will be the physical
predictions of the theory’. The reason for the insertion of the italicised
phrase is, of course, that my talk of quantum mechanics has so far been mostly
formal. Unless one specifies a physical interpretation for the operators and
wave-functions in Eq. (2), the formalism of quantum mechanics does not make
any physical predictions at all, i.e. only interpreted theories make physical
predictions. Without an interpretation, the formalism of quantum mechanics
could equally well be describing some probabilistic system that happens to
obey a law whose resemblance with Schrödinger’s equation is only formal.
This prompts the notions of bare theory and of interpretation which, together,
form what we call a ‘theory’. A bare theory is a theory (formulation) before
it is given an interpretation—like the formal quantum mechanics above: it was
defined in terms of its states (the set of wave-functions), quantities
(operators) and dynamics (the Schrödinger equation). Thus it is useful to
think of a bare theory in general as a triple of states, quantities, and
dynamics. I will usually denote bare theories by $T$, and a duality will be an
isomorphism between bare theories. (One may object that the position and
momentum representations of quantum mechanics are not two different bare
theories, but two different formulations of the same theory. But recall that I
have adopted ‘theory’ instead of ‘theory formulation’ for simplicity, so that
it is correct to consider a duality relating two formulations of the same
theory.)
Interpretations can be modelled using the idea of interpretation maps. Such a
map is a structure-preserving partial function mapping a bare theory
(paradigmatically: the states and the quantities) into the theory’s domain of
application, i.e. appropriate objects, properties, and relations in the
physical world. I will denote such a (set of) map(s) by $i:T\rightarrow D$.
(If a bare theory is presented as a triple of states, quantities, and
dynamics, then the interpretation is a triple of maps, one on each of the
factors. However, I will here gloss over these details: for a detailed
exposition of interpretations as maps, the conditions they satisfy, and how
this formulation uses referential semantics and intensional semantics, see De
Haro (2019, 2020) and De Haro and Butterfield (2017).)
The above notions of bare theory, of interpretation as a map, and of duality
as an isomorphism between bare theories, allow us to make more precise the
notion of theoretical equivalence, from Section 2.3. First, suppose that we
have two bare theories, $T_{1}$ and $T_{2}$, with their respective
interpretation maps, $i_{1}$ and $i_{2}$, which map the two theories into
their respective domains of application, $D_{1}$ and $D_{2}$. Further, assume
that there is a duality map between the two bare theories, $d:T_{1}\rightarrow
T_{2}$, i.e. an isomorphism as just defined. Theoretical equivalence can then
be defined as the condition that the domains of application of the two
theories are the same, i.e. $D:=D_{1}=D_{2}$, as in Figure 1, so that the two
interpretation maps have the same range. Note that, while the domains of
application of theoretically equivalent theories are the same, the bare
theories $T_{1}$ and $T_{2}$ are different, and so the theories are equivalent
but not identical.
$\displaystyle\begin{array}{ccc}T_{1}&\overset{d}{\longleftrightarrow}&T_{2}\\ i_{1}\searrow&&\swarrow i_{2}\\ &D&\end{array}$ (6)

Figure 1: Theoretical equivalence. The two interpretations
describe “the same sector of reality”, so that the ranges of the
interpretations coincide.
As I mentioned in the definition of empirical under-determination in Section
2.3, in this paper we are mostly interested in situations in which the two
theories are theoretically inequivalent, i.e. the ontologies of two dual
theories are different. This means that the ranges of the interpretation maps,
i.e. the domains of application of the two theories, are distinct, so that:
$D_{1}\not=D_{2}$. Thus the diagram for the three maps, $d,i_{1},i_{2}$, does
not close: the diagram for theoretical inequivalence is a square, as in Figure
2. This notion of theoretical inequivalence makes precise the first condition
for under-determination, at the end of Section 2.3.
This will allow us, in Section 4, to give precise verdicts of under-
determination. In order to do that, we also need to make more precise the
second condition for under-determination in Section 2.3, i.e. the notion of
empirical equivalence. We turn to this next.
$\displaystyle\begin{array}{ccc}T_{1}&\overset{d}{\longleftrightarrow}&T_{2}\\ \big\downarrow i_{1}&&\big\downarrow i_{2}\\ D_{1}&\not=&D_{2}\end{array}$ (10)

Figure 2: Theoretical inequivalence. The
two interpretations describe “different sectors of reality”, so that the
ranges of the interpretations differ.
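The content of Figures 1 and 2 can be rendered as a toy computation. The sketch below is entirely my illustration, not part of the Schema's formal apparatus, and the state and domain labels are invented; it models bare theories as finite sets of states, the duality $d$ as a bijection, interpretations as maps (Python dicts), and theoretical equivalence as coincidence of the maps' ranges:

```python
# Toy model of Figures 1 and 2: theories as finite state sets, duality as a
# bijection, interpretations as dicts into domains of application.
T1 = {"s1", "s2"}
T2 = {"t1", "t2"}
d = {"s1": "t1", "s2": "t2"}  # duality: an isomorphism T1 -> T2

def theoretically_equivalent(i1: dict, i2: dict) -> bool:
    # Figure 1: the ranges of the interpretation maps coincide (D1 = D2).
    return set(i1.values()) == set(i2.values())

# Internal-style interpretations: i2 is induced via the duality, so the two
# maps have different domains (T1 vs T2) but the same range D.
i1 = {"s1": "electron here", "s2": "electron there"}
i2 = {d[s]: i1[s] for s in T1}
print(theoretically_equivalent(i1, i2))  # True, as in Figure 1

# External-style interpretation of T2, mapping into a distinct domain D2.
j2 = {"t1": "momentum mode", "t2": "winding mode"}
print(theoretically_equivalent(i1, j2))  # False, as in Figure 2
```

The point of the toy is only structural: a duality alone fixes nothing about the ranges of the interpretation maps, so both verdicts are compatible with the same bijection $d$.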
To illustrate the notion of empirical equivalence, let me first briefly
discuss the standard textbook interpretation of quantum mechanics, in the
language of the Schema—the other interpretations are obtained by making the
appropriate modifications. Since my aim here is only to illustrate the Schema,
rather than to try to shed light on quantum mechanics itself, I will set aside
the measurement problem, and simply adopt the standard Born rule for making
probabilistic predictions about the outcomes of measurements. On this
understanding, the interpretation map(s) $i$ map the bare theory to its domain
of application (see for example van Fraassen 1970: pp. 329, 334-335), as
follows:
$\displaystyle i(x)=$ ‘the position, with value $x$, of the particle upon measurement’,

$\displaystyle i\!\left(|\psi(x)|^{2}\right)=$ ‘the probability density of finding the particle at position $x$, upon measurement’,

$\displaystyle i(\psi)=$ ‘the physical state of the system (the particle)’,

etc., where $x$ is an eigenvalue of the position operator. (Note that there
are different kinds of elements that are here being mapped to in the domain.
The first example is a standard extensional map, which gives a truth value to
an observational conditional in a straightforward way. The second maps to the
probability of an outcome over measurements of many identically prepared
systems, rather than a measurement of a single system. The third map describes
the physical situation of a particle with given properties: thus the map’s
range is a concrete rather than an abstract object.)
Although all of the above notions (outcomes of measurements, Born
probabilities, physical states of a system, etc.) are in the domain $D$, and
they all enter in the assessment of theoretical equivalence, not all of them
are relevant for empirical equivalence, since not all of them qualify as
‘observable’, in the sense of Section 2.2. For while the position of a
particle can be known by a measurement of the particle’s position on a
detection screen or in a bubble chamber, the state of a particle cannot be so
known: thus it is a theoretical concept. On the other hand, a probability (on
the standard—and admittedly simplistic!—frequentist interpretation of
probabilities in quantum mechanics, where probabilities are relative
frequencies of events for an ensemble of identically prepared systems) is an
observational concept, since it can be linked to relations between
measurements (i.e. frequencies), even if it does not itself correspond to an
outcome of a measurement or physical interaction.
Thus in the case of quantum mechanics, the empirical substructures in van
Fraassen’s definition in Section 2.2 are the subsets of structures of the
theory that correspond to outcomes of measurements, in a broad sense, and
relations between measurements. These structures are, quite generally, the set
of (absolute values of) transition amplitudes of self-adjoint operators, Eq.
(2), and expressions constructed from them, including powers of transition
amplitudes. These are the empirical substructures (on van Fraassen’s semantic
conception), which make true or false the observational conditionals of the
theory (Quine’s phrase). The remaining terms of the theory may have a physical
significance, but they are theoretical. Thus this discussion distinguishes,
for elementary textbook quantum mechanics, between what is theoretical and
what is empirical.
### 3.2 Examples of dualities
I will now give two other important examples of dualities: T-duality in string
theory, and the duality between a black hole and the quark-gluon plasma.
To this end, I will first say a few words about string theory. String theory
is a generalisation of the theory of relativistic point particles to
relativistic, one-dimensional extended objects moving in time, i.e. strings.
On analogy with the world-line swept out by a point particle, the two-
dimensional (one space plus one time dimension!) surface swept out by the
string in spacetime is called the ‘world-sheet’ of the string. Strings can be
open or closed, so that their world-sheet is topologically an infinitely long
strip (for open strings) or an infinitely long tube (for closed strings). Type
I theories contain both open and closed strings, while type II theories
contain only closed strings. In what follows, I will focus on type II
theories. In order to render string theories quantum mechanically consistent,
fermionic string excitations are introduced that make the theories
supersymmetric, i.e. so that the number of bosonic degrees of freedom is
matched by the number of fermionic degrees of freedom. Depending on the
chirality of these fermionic excitations, i.e. roughly whether they are all
handed in the same direction on the world-sheet or not, we have a type IIA
(left-right symmetric) or a type IIB theory (left-right antisymmetric). The
quantum mechanical consistency of string theory also requires that the number
of spacetime dimensions be 10, so that the spacetime is the Lorentzian
$\mathbb{R}^{10}$.
T-duality first appears when one of these ten dimensions is compact, for
example a circle of radius $R$. (For an early review, see Schwarz (1992); see
also Huggett (2017), Read (2016), and Butterfield (2020). I follow the physics
convention of setting $\hbar=1$.) Thus the spacetime is $\mathbb{R}^{9}\times
S^{1}$, where $S^{1}$ is the circle. The periodicity of the circle then
entails that the centre-of-mass momentum of a string along this direction is
quantised in units of the radius $R$; so $p=n/R$, where $n$ is an integer.
Furthermore, closed strings can wind around the circle (this is only possible
if the spacetime is $\mathbb{R}^{9}\times S^{1}$, not $\mathbb{R}^{10}$): so
they have, in addition to momentum, an additional winding quantum number, $m$,
which counts the number of times that a string wraps around the circle. Now
the contribution of the centre-of-mass momentum, and of the winding of the
string around the circle, to the square of the mass $M^{2}$ of the string, is
quadratic in the quantum numbers, as follows:
$\displaystyle M^{2}_{nm}=\frac{n^{2}}{R^{2}}+\frac{m^{2}R^{2}}{\ell_{\mathrm{s}}^{4}}\,,$ (11)
where the subscript $nm$ indicates that this is the contribution of the
momentum and the winding around the circle (there are other contributions to
the mass that are independent of the momentum, winding, and radius, which I
suppress), and $\ell_{\mathrm{s}}$ is the fundamental string length, i.e. the
length scale with respect to which one measures distances on the world-sheet.
We see that the contribution to the mass, Eq. (11), is invariant under the
simultaneous exchange:
$\displaystyle(n,m)\leftrightarrow(m,n)\,,\qquad R\leftrightarrow\ell_{\mathrm{s}}^{2}/R\,.$ (12)
In fact, one can show that the entire spectrum, and not only the centre-of-
mass contribution Eq. (11), is invariant under this map (cf. Zwiebach (2009:
pp. 392-397), Polchinski (2005: pp. 235-247)). This is the basic statement of
T-duality: namely, that the theory is invariant under Eq. (12), i.e. the
exchange of the momentum and winding quantum numbers, in addition to the
inversion of the radius. (Taking into account how T-duality acts on the
fermionic string excitations, one can show that it exchanges the type IIA and
type IIB string theories compactified on a circle.)
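Spelling out the invariance claim, and writing the radius as an explicit argument (a notational convenience I add here), substituting the exchange Eq. (12) into Eq. (11) gives back the same value:

```latex
M^{2}_{mn}\!\left(\ell_{\mathrm{s}}^{2}/R\right)
  \;=\; \frac{m^{2}}{(\ell_{\mathrm{s}}^{2}/R)^{2}}
      + \frac{n^{2}\,(\ell_{\mathrm{s}}^{2}/R)^{2}}{\ell_{\mathrm{s}}^{4}}
  \;=\; \frac{m^{2}R^{2}}{\ell_{\mathrm{s}}^{4}} + \frac{n^{2}}{R^{2}}
  \;=\; M^{2}_{nm}(R)\,.
```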
T-duality is a duality in the sense of the previous Section, in that it maps
states of type IIA string theory to states of type IIB, and likewise for
quantities, while leaving all the values of the physical quantities, such as
the mass spectrum Eq. (11), invariant.
My second example is gauge-gravity duality, itself a large subject (cf. De
Haro, Mayerson, Butterfield, 2016), but here I focus on its use in the
Relativistic Heavy Ion Collider (RHIC) experiments carried out in Brookhaven,
NY. Here, the duality successfully relates the four-dimensional quantum field
theory (QCD, quantum chromodynamics) that describes the quark-gluon plasma,
produced in high-energy collisions between heavy ions, to the properties of a
five-dimensional black hole. The latter was employed to perform a calculation
that, via an approximate duality, provided a result in QCD: namely, the shear-
viscosity-to-entropy-density ratio of the plasma, which could not be obtained
in the theory of QCD describing the plasma. Thus a five-dimensional black hole
was used to describe, at least approximately, an entirely different (four-
dimensional!) empirical situation.
The level of rigour to which duality has been established differs among
various examples. T-duality can be proven perturbatively, i.e. order by order
in perturbation theory; but not (yet) exactly. Gauge-gravity duality is hard
to prove beyond the semi-classical approximation: and, although much evidence
has been amassed, it is still a conjecture. A rigorously proven duality is
Kramers-Wannier duality (see e.g. Baxter (1982)): this, and other cases
treated elsewhere, show that nothing that I will have to say depends on the
conjectural status of the string theory dualities. (Another example of a
mathematically proven duality is bosonization, the duality between bosons and
fermions in two dimensions: see De Haro and Butterfield (2017: Sections 4 and
5); cf. also Butterfield (2020).) Indeed, my analysis here will be independent
of those details.
### 3.3 Internal and external interpretations
In the previous two Sections, I defined dualities formally (Section 3.1) and
then I gave two examples (Section 3.2). The examples already came with an
interpretation, i.e. I discussed not just the bare theories but the
interpreted theories, since that is the best way to convey the relevant
physics. In this Section, I will be more explicit about the kinds of
interpretations that can lead to theoretically equivalent theories.
Not all interpretations lead to theoretically equivalent theories: and this
fact—that theoretical equivalence is not automatic—creates space for empirical
under-determination. For, as I will argue in the next Section, the
interpretations that fail to lead to theoretical equivalence introduce the
possibility of empirical under-determination.
Which interpretations lead to theoretically equivalent theories, as in Figure
1, or to theoretically inequivalent ones, as in Figure 2? Note that
theoretical equivalence requires that the domains of the two theories (the
ranges of the interpretation maps) are the same, i.e. the two theories have
the same ontology. (As I stress in this Section, duality does not
automatically give theoretical equivalence, because dual theories can have
different ontologies.) This imposes a strong condition on the interpretations
of two dual theories, whereas theoretical inequivalence comes at a low cost.
Thus one expects that, generally speaking, an interpretation of two dual
theories—for example, the interpretations with which dual theories have been
historically endowed—renders two theories inequivalent. I will call such
interpretations, that deliver theoretically inequivalent theories, external.
An external interpretation is best defined in contrast with an internal
interpretation, which maps all of and only what is common to the two theory
formulations, i.e. an internal interpretation interprets only the invariant
content under the duality. An external interpretation, by contrast, also
interprets the content that is not invariant under the duality—thereby
typically rendering two duals theoretically, and potentially also empirically,
inequivalent.
I will dub this additional structure, which is not part of what I have called
the ‘invariant content’, the specific structure. An isomorphism of theories
maps the states, quantities, and dynamics, but not the specific structure,
which is specified additionally for each specific theory formulation. Indeed,
theory formulations often contain structure (e.g. gauge-dependent structure)
beyond the bare theory’s empirical and theoretical structure (which is gauge-
independent).
In the example of T-duality, the interpretation of string quantum numbers as
‘momentum’ or as ‘winding’ is external and makes two T-dual theories
inequivalent, since the interpretation maps to distinct elements of the domain
under the exchange in Eq. (12).
If an interpretation does not map the specific structure of a theory but only
common structure, I will call it an internal interpretation. This means that
it only maps the structure that is common to the duals. Such an interpretation
gives rise to two maps, $i_{1}$ and $i_{2}$, one for each theory: their
domains differ, but their range can be taken to be the same, since for each
interpretation map of one theory there is always a corresponding
interpretation map of the other theory with the same domain of
application. (This conception gives a formal generalisation of an earlier
characterisation, in Dieks et al. (2015: pp. 209-210) and De Haro (2020:
Section 1.3), of an internal interpretation as one where the meaning of the
symbols is derived from their place in, and relation to, other symbols in the
theoretical structure, i.e. it is not determined from the outside. Those
papers were concerned with theory construction, the idea being that when, in
order to interpret $T$’s symbols, we e.g. couple a theory $T$ to a theory of
measurement, we do this either through $T$’s specific structure, or by
changing $T$ in some other way. Either way, we have an external
interpretation, because the specific structure makes an empirical difference.
Thus an internal interpretation, which does not introduce an external context
or couplings, only concerns facts internal to the triples and our use of
them.)
Thus we get the situation of theoretical equivalence, in Figure 1. (The phrase
‘the internal interpretation does not map the specific structure’ can be
weakened: one can allow that the interpretation map maps the specific
structure to the domain of application, as long as the duality is respected
(i.e. the domains of two duals are still the same), and the empirical
substructures of the domain do not change.)
Unextendability justifies the use of an internal interpretation. There is an
important epistemic question, which will play a role in Section 4.2’s
analysis, about whether we are justified in adopting an internal
interpretation, according to which the duals are theoretically equivalent.
The question is, roughly speaking, whether the interpretation $i$ of a bare
theory $T$, and $T$ itself, are “detailed enough” and “general enough” that
they cannot be expected to change, for a given domain of application $D$. I
call this condition unextendability. De Haro (2020: Section 1.3.3) gives a
technical definition of the two conditions for unextendability, the ‘detail’
and ‘generality’ conditions. Roughly speaking, the condition that the
interpreted theory is detailed enough means that it describes the entire
domain of application $D$, i.e. it does not leave out any details. And the
generality condition means that both the bare theory $T$ and the domain of
application $D$ are general enough that the theory “cannot be extended” to a
theory covering a larger set of phenomena. Thus $T$ is as general as it
can/should be, and $D$ is an entire “possible world”, and cannot be extended
beyond it.
To give an example involving symmetries: imagine an effective quantum field
theory with a classical symmetry, valid up to some cutoff, and imagine an
interpretation that maps symmetry-related states to the very same elements in
the domain of application. Now imagine that extending the theory beyond the
cutoff reveals that the symmetry is broken (for example, by a higher-loop
effect), so that it is anomalous. The possibility of this extension, with the
corresponding breaking of the symmetry, will lead us to question whether our
interpretation is correct: maybe we should not identify symmetry-related
states by assigning them the same interpretation, especially if it turns out
that the states are different after the extension (for example, if higher-loop
effects correct the members of a pair of symmetry-related states in different
ways). And so, it will probably prompt us to develop an interpretation that is
consistent with the theory’s extension to high energies, so that symmetry-
related states are now mapped to distinct elements in the domain. In
conclusion, we are not epistemically justified in interpreting symmetry-
related states as equivalent, because we should take into account the
possibility that an extension of the theory to higher energies might compel us
to change our interpretation.
External interpretations of two dual theories do not in general classify them
as having the same interpretation. And so, although these interpretations
could be subject to change, this does not affect the verdict that the theories
are theoretically inequivalent.
There is a weaker sense of unextendability, which allows that a theory might
be extended in some cases, but only in such a way that the interpretation in
the original domain of application does not change, and the theories continue
to be theoretically equivalent after the extension. In what follows, I will
often use ‘unextendability’ in this weaker sense.
The example of position-momentum duality in elementary quantum mechanics can
be interpreted internally once we have developed quantum mechanics on a
Hilbert space. The two theories are then representations of a single Hilbert
space, and we can describe the very same phenomena, regardless of whether we
use the momentum or the position representation. And quantum mechanics is an
unextendable theory in the weaker sense (namely, with respect to this aspect
with which we are now concerned, of position vs. momentum interpretations)
because, even if we do not have a “final” Hamiltonian (we can often add new
terms to it), the position and momentum representations keep their power of
describing all possible phenomena equally well. Namely, because of quantum
mechanics’ linear and adjoint structure, unitary transformations remain
symmetries of the theory regardless of which terms we may add to the
Hamiltonian: so that our interpretation in terms of position or momentum will
not change.
Dualities in string and M-theory are also expected to be dualities between
unextendable theories, at least in the weak sense. For example, the type II
theories discussed in Section 3.2 have the maximal amount of supersymmetry
possible in ten dimensions; and the number of spacetime dimensions is
determined by requiring that the quantum theories be consistent. Also, their
interactions are fixed by the field content and the symmetries. If we imagine,
for a moment, that these theories are exactly well-defined (since the
expectation is that M-theory gives an exact definition of these theories, and
that T-duality is a manifestation of some symmetry of M-theory), then they are
in some sense “unique”, constrained by symmetries, and thus unextendable: they
are picked out by the field content, their set of symmetries, and the number
of spacetime dimensions. (See also Dawid (2006: pp. 310-311), who discusses a
related phenomenon of ‘structural uniqueness’.) Thus, if these
conjectures can be fleshed out, taking type IIA and type IIB on a circle to be
theoretically equivalent will be justified, because the theories describe an
entire possible world, and there is no other theory “in their vicinity”.
## 4 Duality and Under-determination
In this Section, I first make some remarks about aspects of scientific realism
relevant for dualities (Section 4.1). Then I will illustrate the notions of
theoretical inequivalence of dual theory formulations (Section 4.2) and
empirical equivalence (Section 4.3). This will lead to a surprising but
straightforward conclusion, which will give us our final verdict about whether
dualities admit empirical under-determination. Section 4.4 discusses how to
obtain internal interpretations. Section 4.5 will then ask whether scientific
realism is in trouble.
### 4.1 Scientific realism: caution and/or virtue?
My discussion of scientific realism aims to be as general as possible, i.e. as
independent as possible of a specific scientific realist position. (My own
position is in De Haro (2020a: pp. 27-59).) The notion of under-determination
that we are considering has both semantic and epistemic aspects (see Section
2.1), and also the interesting scientific realism to consider has both
semantic and epistemic aspects. Roughly, the relevant scientific realism is
the belief in the approximate truth of scientific theories that are well-
confirmed in a given domain of application.
The semantic aspect does not involve a naïve “direct reading” of a scientific
theory (as some formulations, like van Fraassen’s (1980), could lead us to
think). Rather, it involves a literal, but nevertheless cautious, reading,
informed by current scientific practice and by history. This is essential to
secure that the belief in the theory’s statements is epistemically justified.
Indeed, one does not simply take the nearest scientific textbook and quantify
existentially over the entities that are defined on the page. There are many
cases where one is not justified in believing in the entities that are
introduced by even our best scientific theories, and so one proceeds
tentatively. In such cases, a cautious scientific realist ought to suspend
judgment about the existence of the posited entities, until further
interpretative work has been carried out that justifies the corresponding
belief. For example, consider the cases of local gauge symmetries, and of a
complex phase in the overall wave-function of a system: in both cases, belief
in the corresponding entities ought to be postponed until the further analysis
is done. This contrasts with, say, the particle content of our best theories,
whose confirmed existence, under normal circumstances, one accepts (even if
the particles are microscopic).
Proponents of selective scientific realist views, for example Psillos (1999)
and Kitcher (1993), are in this sense cautious. Their accounts assign
reference only to those entities that are in some sense indispensable or
important to explain the empirical success of the theories, while stating that
other terms (like ‘phlogiston’) were less central to the theories’ success,
e.g. because they are not causally involved in the production of the relevant
phenomena, and so should be regarded as non-referring, or as referring on some
occasions but not others. (These accounts have been criticised on various
points, most notably because the defence of selective confirmation appears to
involve a selective reading of the historical record (Stanford, 2006: p. 174).)
While I do not myself endorse the details of these proposals, I am sympathetic
to them: and they do illustrate the general idea of ‘caution’ about realist
commitments endorsed above. Namely, that determining the realist’s commitments
is not a matter of “reading off”, but sometimes involves judgment.
Let me now say more about how this applies to dualities (more details in
Section 4.5), without aiming to develop a new scientific realist view here:
since, as I said, my arguments should apply to different versions of
scientific realism. Rather, I wish to express a general attitude—which I
denote with the word ‘caution’—towards the role of inter-theoretic relations
in the constitution of scientific theories.
In so far as a duality can have semantic implications—namely, in so far as
dualities contribute to the criteria of theory individuation—the cautious
scientific realist should take notice of those implications, and suspend
judgment about whether two dual theory-formulations are distinct theories,
until those criteria have been clarified. Indeed, the account of dualities
that I favour proposes that, under sufficient conditions of:
(1) internal interpretation,
(2) unextendability,
(3) having a philosophical conception of ‘interpretation’,[27] This is my own
version of what Read and Møller-Nielsen (2020: p. 266) call ‘an explication of
the shared ontology of two duals’. See De Haro (2019: Section 2.3.1) and
footnote 36.
one is justified (but not obliged) to view duals as notational variants of a
single theory.
This view of theory individuation takes a middle way—“the best of both
worlds”—between two positions that, in the recent literature, have been
presented as antagonistic. Read and Møller-Nielsen (2020: p. 276) defend what
they call a ‘cautious motivationalism’: duality-related models may only be
regarded as being theoretically equivalent once an interpretation affording a
coherent explication of their common ontology is provided, and there is no
guarantee that such an interpretation exists.
Huggett and Wüthrich (2020: p. 19) dub this position an ‘agnosticism about
equivalence’ and object that, though it is cautious, it is less epistemically
virtuous than their own position: namely, to assert that ‘string theory is
promising as a complete unified theory in its domain, and so it is reasonably
thought to be unextendable. And from that we do think physical equivalence is
the reasonable conclusion’.
The position that I defend takes a middle way between these apparently
contrasting positions: one is justified, but not obliged, (i) to take duals to
be theoretically equivalent (under the three conditions, (1)-(3), mentioned
above), and (ii) to be a realist about their common core.[28] This notion
appears to be close to Ney’s (2012: p. 61) ‘core metaphysics’, which retains
those common ‘entities, structures, and principles in which we come to believe
as a result of what is found to be indispensable to the formulation of our
physical theories’. However, one should keep in mind that I am here concerned
with semantics and epistemology, and not chiefly with metaphysics. I thank an
anonymous reviewer for pointing out this similarity.
But even if the three conditions above, (1)-(3), are not fully met, or are not
explicitly formulated, it may still be legitimate, lacking full epistemic
justification, to take the duals as equivalent and to be a realist about their
common entities: as a working assumption, a methodological heuristic, or a
starting point for the formulation of a new theory.
### 4.2 Theoretically inequivalent duals
In Section 3.1, I defined theoretically equivalent theories in terms of the
triangular diagram in Figure 1, and theoretically inequivalent theories in
terms of a diagram that does not close, i.e. a square diagram as in Figure 2.
In Section 3.3, we made a distinction between internal and external
interpretations. Thus the Schema gives us two cases in which dual theories can
prima facie be theoretically inequivalent. The first is through the adoption
of external interpretations. These interpretations lead to the square diagram
in Figure 2, with different domains of application. I will discuss this case
in Section 4.5.
The second case is that of two theoretically equivalent but extendable dual
theories. In this case, although the judgment, based on a given pair of
internal interpretations, is that the theories are theoretically equivalent,
this judgment is epistemically unjustified because the theory is extendable
(as I discussed in Section 3.3). And since the judgment of theoretical
equivalence is unjustified, this could prima facie give a new way to get
theoretically inequivalent theories.
However, this second case is not a genuine new possibility. For the lack of
justification for adopting the internal interpretations prompts us to either:
(i) Justify the use of these, or some other, internal interpretations, thus
getting a justified verdict of theoretical equivalence; or (ii) conclude that
the internal interpretations under consideration are indeed not adequate
interpretations, and adopt external interpretations instead, which then judge
the two theories to be inequivalent.
In case (i), where one finds internal interpretations whose use is justified,
we end up with a case of theoretical equivalence and not inequivalence, i.e. a
case of the triangle diagram Figure 1; and so this case is irrelevant to the
under-determination thesis. In case (ii), we are back to the situation of
external interpretations. Thus external interpretations exhaust the
interesting options for empirical under-determination.[29] My reasoning here
bears a formal similarity with Norton’s (2006) analysis of Goodman’s new
riddle of induction.
Recall the examples of external interpretations from Section 3.3. External
interpretations of T dual pairs of strings winding on a circle interpret the
momentum and winding quantum numbers, $n$ and $m$ respectively, as
corresponding to different elements in the domain (namely, to momentum and to
winding, respectively), and in addition they would interpret the circle as
having a definite radius of either $R$ or $R^{\prime}:=\ell_{\mbox{\scriptsize
s}}^{2}/R$: so that the duality transformation Eq. (12) leads to physically
inequivalent theories, despite the existence of a formal duality that pairs up
the states and quantities. Such an external interpretation can for example
include ways to measure the quantities of interest—momentum, winding, and the
radius of the circle—so that they indeed come to have definite values,
according to the external interpretation. An indirect way to measure the
radius is by measuring the time that a massless string takes to travel around
the circle.
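The formal duality at issue can be sketched with the standard closed-string mass spectrum on a circle. (A textbook formula, not reproduced in this paper; Eq. (12) above is the duality transformation itself.)

```latex
% Closed-string mass spectrum on a circle of radius R (standard formula,
% with \alpha' = \ell_s^2 and N, \bar{N} the oscillator levels):
M^{2} \;=\; \frac{n^{2}}{R^{2}}
  \;+\; \frac{m^{2}R^{2}}{\ell_{\mathrm{s}}^{4}}
  \;+\; \frac{2}{\ell_{\mathrm{s}}^{2}}\left(N+\bar{N}-2\right)\,.
% Under the T-duality transformation n <-> m, R -> R' = \ell_s^2/R:
\frac{m^{2}}{R'^{2}} \;=\; \frac{m^{2}R^{2}}{\ell_{\mathrm{s}}^{4}}\,,
\qquad
\frac{n^{2}R'^{2}}{\ell_{\mathrm{s}}^{4}} \;=\; \frac{n^{2}}{R^{2}}\,,
% so the first two terms are exchanged and M^2 is invariant. The formal
% duality pairs up states and quantities, while an external interpretation
% (a definite measured radius, R or R') still distinguishes the theories.
```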
### 4.3 Empirically equivalent duals are under-determined
In this Section, I will use Section 2.2's account of empirical equivalence,
which will enable our final verdict about empirical under-determination in
cases of duality. The Section summarises De Haro (2020b) (see also Weatherall,
2020).
According to the syntactic conception of theories, two theories are
empirically equivalent if they imply the same observational
sentences.[30] See Quine (1970, 1975) and Glymour (1970, 1977). As I argued
in Section 3.3, externally interpreted theories are in general not empirically
equivalent, in this sense. The domains are distinct, as Figure 2 illustrates.
On the semantic conception, two theories are empirically equivalent if the
empirical substructures of their models are isomorphic to each other (cf.
Section 2.2).
For dualities, the duality map $d$ gives us a natural—even if surprising—new
candidate for an isomorphism between the empirical substructures of the
models: I will dub it the ‘induced duality map’, $\tilde{d}:D_{1}\rightarrow
D_{2}$.[31] Since we are discussing empirical equivalence, the domains of
application can here be restricted to the observable phenomena. However, I
will not indicate this explicitly in my notation. It is an isomorphism between
the domains of application, subject to the condition that the resulting (four-
map) diagram, in Figure 3, commutes. This commutation condition is the natural
condition for the induced duality map to mesh with the interpretation (the
condition for its commutation is that
$i_{2}\,\circ\,d=\tilde{d}\,\circ\,i_{1}$). If such a map exists, then the two
theories are clearly empirically equivalent on van Fraassen’s conception, even
though they are theoretically inequivalent, because the induced duality map is
not the identity, and the domains differ: $D_{1}\not=D_{2}$.
$$\begin{array}{ccc}T_{1}&\overset{d}{\longleftrightarrow}&T_{2}\\[2pt]
{\Big\downarrow}{\scriptstyle\,i_{1}}&&{\Big\downarrow}{\scriptstyle\,i_{2}}\\[2pt]
D_{1}&\overset{\tilde{d}}{\longleftrightarrow}&D_{2}\end{array}\qquad(16)$$

Figure 3: Empirical equivalence. There is an induced duality map, $\tilde{d}$,
between the domains.
Thus, if such an induced duality map exists, on this literal account the
dualities that we have discussed do in fact (and surprisingly!) relate
empirically equivalent theories.[32] De Haro (2020b) argues that duality is
indeed the correct type of isomorphism to be considered, on the semantic
criterion of empirical equivalence. Take for example T-duality: here, by
construction, we can map the parts of the theory that depend on the quantum
numbers $(n,m)$ and radius $R$ to the quantum numbers $(m,n)$ and radius
$R^{\prime}=\ell^{2}_{\mbox{\scriptsize s}}/R$, and likewise in the domain, by
swapping measurements of momenta and windings, and inverting the physical
radii.
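The commutation condition stated above, $i_{2}\circ d=\tilde{d}\circ i_{1}$, can be spelled out pointwise for this example. (A sketch; the notation $(n,m;R)$ for a state's quantum numbers and radius is my own shorthand, not the paper's.)

```latex
% Commutation of the four-map diagram, for every item x of T_1:
i_{2}\big(d(x)\big) \;=\; \tilde{d}\big(i_{1}(x)\big)\,.
% For T-duality, d maps (n, m; R) to (m, n; \ell_s^2/R). Reading the
% square both ways around:
%   - down then across: interpret (n, m; R) via i_1 as measured momentum,
%     winding, and radius, then apply the induced map \tilde{d};
%   - across then down: dualise first, then interpret via i_2.
% Commutation then forces \tilde{d} to swap measured momenta with
% windings and to invert measured radii, R -> \ell_s^2/R, in the domain.
```

This makes vivid why $\tilde{d}$ is an isomorphism of domains rather than the identity: the two external interpretations assign different worldly items to dual states, and $\tilde{d}$ is exactly the pairing between them.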
The semantic notion is prima facie more liberal than Quine’s syntactic notion
(in the sense that it is less fine-grained, because it gives a verdict of
empirical equivalence more easily), and more in consonance with the scientific
practice of dualities—thus it prompts a judicious reading of the syntactic
notion of empirical equivalence. To this end, it is not necessary to change
Quine’s criterion of empirical equivalence from Section 2.2; all we need to do
is to change one of the theories, generating a new theory by giving a non-
standard interpretation to the bare theory.
Since we have a non-standard and innovative interpretation, we have abandoned
what may have been the theory’s intended meaning, i.e. we have changed the
theory, stipulating a reinterpretation of its terms, thus producing the
desired observational sentences.[33] For unextendable theories, the theory’s
natural interpretation is surely an internal, not an external, interpretation.
Thus we have been faithless to the meanings of words. But this is allowed,
since we are dealing with external interpretations anyway: and external
interpretations can be changed if one’s aims change. Indeed, nobody said that
we had to stick to a single interpretation of a bare theory in order for it to
make empirical predictions: for although theories may have intended
interpretations, assigned to them by history and convenience, nothing—more
precisely, none of the Quinean notions of empirical under-determination and
empirical equivalence—prevents us from generating new theories by
reinterpreting the old ones, thus extending the predictive power of a bare
theory: but also creating cases of under-determination!
In Section 4.4, I first sketch how internal interpretations are usually
obtained. For this will be important for the resolution of the problem of
under-determination, in Section 4.5.
### 4.4 Obtaining interpretations by abstraction
Internal interpretations of dual theory formulations are often obtained by a
process of abstraction from existing (often, historically given)
interpretations of the two formulations. Let me denote the common core theory
obtained by abstraction, by $\hat{t}$, and let $\hat{T}_{1}$ and $\hat{T}_{2}$
be two empirically equivalent dual theories from which $\hat{t}$ is obtained.
Here, the hats indicate that these are interpreted, rather than bare,
theories.
We can view the formulation of the common core theory $\hat{t}$ as a two-step
procedure. We first develop the bare, i.e. uninterpreted, theory, and then its
interpretation:
(1) A common core bare theory, $t$, is obtained by a process of abstraction
from the two bare theories, $T_{1}$ and $T_{2}$, so that these theory
formulations are usually representations (in the mathematical sense) of this
common core bare theory. This is discussed in detail in De Haro and
Butterfield (2017).
(2) The internal interpretation of the common core theory, $\hat{t}$, is
similarly obtained through abstraction: namely, by abstracting from the
commonalities shared by the interpretations of $\hat{T}_{1}$ and
$\hat{T}_{2}$. In this way, one obtains an internal interpretation that maps
only the common core of the two dual theory formulations.[34] ‘Abstraction’
does not necessarily mean ‘crossing out the elements of the ontology that are
not common to the two theory formulations’. Sometimes it also means ‘erasing
some of the characterisations given to the entities, while retaining the
number of entities’. For example, when we go from quantum mechanics described
in terms of position, or of momentum, space (Eq. (1)), to a formulation in
terms of an abstract Hilbert space, we do not simply cross out positions and
momenta from the theory, but rather make our formulation independent of that
choice of basis. In this sense, the formalism of the common core theory is not
always a “reduced” or “quotiented” version of the formalisms of the two
theories.
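The Hilbert-space example just mentioned can be made explicit. (Standard quantum mechanics, with $|\psi\rangle$ the abstract state; this is my own illustration, not the paper's Eq. (1).)

```latex
% The position and momentum wave-functions are two representations of
% one abstract state |psi>:
\psi(x) = \langle x|\psi\rangle\,, \qquad
\tilde{\psi}(p) = \langle p|\psi\rangle
  = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}
    e^{-ipx/\hbar}\,\psi(x)\,\mathrm{d}x\,.
% Abstraction to the Hilbert-space formulation retains |psi> but erases
% the choice of basis: neither positions nor momenta are crossed out;
% rather, the formulation is made independent of that choice.
```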
The incompatibility of the interpretations of two theory formulations is thus
resolved through an internal interpretation that captures their common
aspects, and assigns no theoretical or physical significance to the rest.
Let me spell out the consequences of this a bit more. The result of the
process of abstraction, i.e. points (1) and (2), is that the interpreted
theories $\hat{T}_{1}$ and $\hat{T}_{2}$ are representations or instantiations
of the common core theory, $\hat{t}$. Thus in particular, we have:
$$\hat{T}_{1}\Rightarrow\hat{t}\quad\mbox{and}\quad\hat{T}_{2}\Rightarrow\hat{t}\,,\qquad(17)$$
where I temporarily adopt a syntactic construal of theories (but the same idea
can also be expressed for semantically construed theories). That is, because
$\hat{T}_{1}$ and $\hat{T}_{2}$ are representations or instantiations of
$\hat{t}$, and in particular because $\hat{t}$ has a common core
interpretation (more on this below): whenever either of $\hat{T}_{1}$ and
$\hat{T}_{2}$ is true, then $\hat{t}$ is also true.[35] Since the sentences
of a scientific theory, observable or not, are given by the theory’s
interpretation, the above entailments should be read as entailments between
statements about the world. As I mentioned in (2), this requires that we adopt
internal interpretations: so that $\hat{t}$’s interpretation is obtained from
the interpretations of $\hat{T}_{1}$ and $\hat{T}_{2}$ by abstraction, and the
entailments are indeed preserved. (For example, think of how a representation
of a group, being defined as a homomorphism from the group to some structure,
satisfies the abstract axioms that define the group, and thus makes those
axioms true for that structure).
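The group-theoretic analogy can be made explicit using the standard definitions:

```latex
% A representation is a homomorphism rho from a group G to the invertible
% linear maps on a vector space V:
\rho : G \to \mathrm{GL}(V)\,, \qquad
\rho(gh) = \rho(g)\,\rho(h) \quad \text{for all } g, h \in G\,.
% Any abstract group identity then becomes a true statement about V;
% e.g. from g g^{-1} = e it follows that
\rho(g)\,\rho(g)^{-1} = \rho(e) = \mathrm{id}_{V}\,,
% just as the representations \hat{T}_1 and \hat{T}_2 make \hat{t}'s
% claims true whenever they are themselves true.
```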
Further, for empirically equivalent but theoretically inequivalent theories,
we have:
$$\hat{T}_{1}\Rightarrow\neg\,\hat{T}_{2}\,.\qquad(18)$$
Let me first give a simple example, not of a full common core theory, but of a
single quantity that is represented by different theory formulations. Consider
the quantity that is interpreted as the ‘energy’ of a system: while different
theory formulations may describe this quantity using different specific
structure, and so their external interpretations differ (even greatly) in
their details, the basic quantity represented—the energy of the system—is the
same, as described by the internal interpretation. Indeed, in all of the
following examples, the energy is indeed represented, on both sides of the
duality, by quantities that match, i.e. map to one another under the duality:
position-momentum duality (Section 3.1), T-duality and gauge-gravity duality
(Section 3.2; Huggett 2017; De Haro, Mayerson, Butterfield, 2016),
bosonization (De Haro and Butterfield, 2017), electric-magnetic duality.
Read and Møller-Nielsen (2020) question that a common core theory $\hat{t}$
always exists, since they require an explication of the shared ontology of two
duals, before a verdict of theoretical equivalence is justified.[36] I
endorse their requirement, which is more specific than my own earlier
requirement of a ‘deeper analysis of the notion of reference itself’ (2020:
Section 1.3.2) and ‘an agreed philosophical conception of the interpretation’
(2019: Section 2.3.1).
Since points (1) and (2) sketch a procedure for obtaining $\hat{t}$ by
abstraction, the question of “whether $\hat{t}$ exists” is effectively the
question of whether the common core theory $\hat{t}$ thus obtained has a well-
defined ontology. This can be divided into two further subquestions:
(1’) Is the thus obtained structure $t$, of which $T_{1}$ and $T_{2}$ are
representations, itself a bare theory, i.e. a triple of states, quantities,
and dynamics?
(2’) Does $t$ also have a well-defined ontology?
If these two questions are answered affirmatively, then $\hat{t}$ is a common
core theory in the appropriate domain of application.
While examples of dualities have been given in De Haro (2019) for extendable
theories, where the answer to at least one of these questions is negative (so
that a common core theory does not exist), I am not aware of any examples of
unextendable theories for which it is known that a common core does not exist.
Let me briefly sketch whether and how some of the familiar examples of
dualities satisfy the requirements (1’) and (2’) (this paragraph is slightly
more technical and can be skipped). Consider bosonization (De Haro and
Butterfield, 2017). (1’): The set of quantities is constructed from an
infinite set of currents that are the generators of the enveloping algebra of
the affine Lie algebra of $\mbox{SU}(2)$ (or other gauge group). The states
are the irreducible unitary representations of this algebra. Finally, the
dynamics is simply given by a specific Hamiltonian operator. (2’): The theory
has an appropriate interpretation: the states can be interpreted as usual
field theory states describing fermions and bosons. The operators are likewise
interpreted in terms of energy, momentum, etc. And the dynamics is ordinary
Hamiltonian dynamics in two dimensions. Thus bosonization answers both (1’)
and (2’) affirmatively.
Other string theory dualities are less well-established, and so the results
here are restricted to special cases. In the case of gauge-gravity dualities,
under appropriate conditions, the states include a conformal manifold with a
conformal class of metrics on it (De Haro, 2020: p. 278). The quantities are
specific sets of operators (which again can be interpreted in terms of energy
and momentum), and the dynamics is again a choice of a Hamiltonian operator.
Thus, at least within the given idealisations and approximations, and
admitting that gauge-gravity dualities are not yet fully understood, they also
seem to answer (1’) and (2’) affirmatively.[37] Note that this answer to (2’)
about the domain of application does not just mean getting correct
predictions. For example, common core theories are also being used to give
explanations and answer questions that are traditionally regarded as
theoretical, e.g. about locality and causality (Balasubramanian and Kraus,
1999) and about black hole singularities (Festuccia and Liu, 2006). For a
variety of other physical questions addressed using the AdS/CFT common core,
see Part III of Ammon and Erdmenger (2015).
There are two reasons to set aside the worry that $\hat{t}$ does not exist,
together with its threatened problem of under-determination: so that in
Section 4.5 I can safely assume that $\hat{t}$ exists. (I restrict the
discussion to unextendable theories).
First, as just discussed, there is evidence that such common core theories
exist in many (not to say all!) examples of such dualities. Indeed, dualities
would be much less interesting for physicists if the common core was some
arbitrary structure that does not qualify as a physical theory in the
appropriate domain of application. Thus it is safe to conjecture, for
unextendable theories, that of all the putative dualities, those that are
genuine dualities and are of scientific importance (and that would potentially
give rise to a more serious threat of under-determination, because the theory
formulations differ from each other more) do have common cores.
The second, and main, reason is that there are, to my knowledge, no examples
of dualities for unextendable theories where a common core is known to not
exist. For some dualities, it is not known whether a common core exists. For
example, Huggett and Wüthrich (2020) mention T-duality as an example where a
common core has not been formulated explicitly: working it out explicitly
would, in their view, require a formulation of M-theory. It might also be
possible to work out a common core theory perturbatively. But so long as it is
not known whether a common core exists, rather than having positive knowledge
that it does not exist, this just means that there is work to be done: the
lack of an appropriate common core theory $\hat{t}$ can, at best, give a case
of transient under-determination, which, as I argued in Section 2, reflects
our current state of knowledge, and is not really worrisome.
This agrees with Stanford’s requirement that, before such putative cases of
under-determination are accepted, actual examples should be produced (cf.
footnotes 7 and 8). And so, this worry can be set aside.
### 4.5 Trouble for scientific realism?
In this Section, I address the question announced at the beginning of Section
4.2: namely, whether, for dual bare theories, the under-determination of
external interpretations gives a problem of empirical under-determination. Let
us return to the distinction between extendable and unextendable theories:
(A) Extendable theories: I will argue that here we have a case of transient
under-determination (see Section 1 and footnotes 2 and 9), i.e. under-
determination by the evidence so far. And so, I will argue that there is no
problem of empirical under-determination here, but only a limitation of our
current state of knowledge.
The reason is that extensions of the theory formulations that break the
duality are allowed by the external interpretations, i.e. such that two duals
map to a different domain of application with different empirical
substructures (or different observation sentences). Interpreting the specific
structure, as external interpretations do, introduces elements into the
domains that, in general, render the two theory formulations empirically
inequivalent.
This is part of the ordinary business of theory construction. For, although in our
current state of knowledge it may appear that the theory formulations are
empirically equivalent, the fact that the theory is extendable means that
future theory development could well make an empirical difference. Thus one
should interpret such theories tentatively. This is the ordinary business of
transient under-determination: and one should here look for the ordinary
responses of scientific realist positions.
(B) Unextendable theories: In this case, the theories are somehow “unique”,
perhaps ‘isolated in the space of related theories’ (cf. Section 3.3).
I will argue that a cautious scientific realism (see Section 4.1) does not
require belief in either of the (incompatible) interpretations of the dual
theory formulations. Rather, belief in the internal interpretation, obtained
by a process of abstraction from the external interpretations, is justified.
And so, there is under-determination, but of a kind that is benign. Thus
dualities may be taken to favour a cautious approach to scientific realism.
In more detail: as I discussed in Section 4.1, in the presence of inter-
theoretic relations (and given the three conditions discussed at the end of
Section 4.1), the cautious scientific realist will take the inter-theoretic
relations into account when she determines her realist commitments. Although
the under-determination prevents her from being able to choose between
$\hat{T}_{1}$ and $\hat{T}_{2}$, she has an important reason to favour
$\hat{t}$. Namely, she will prefer $\hat{t}$ over $\hat{T}_{1}$ and
$\hat{T}_{2}$ on logical grounds. For, on the basis of the implications Eqs.
(17) and (18), and regardless of which of $\hat{T}_{1}$ and $\hat{T}_{2}$ is
true, she can consistently (and at this point, also should) accept the truth
of $\hat{t}$. Thus, since her scientific realism commits her to at least one
of these three theories (notably, because they are empirically adequate
theories, and they otherwise satisfy the requirements of her scientific
realism), she will in any case be committed to $\hat{t}$. Namely, a
commitment to either $\hat{T}_{1}$ or $\hat{T}_{2}$ commits her, in virtue of
Eq. (17), ipso facto to $\hat{t}$, while she cannot know which of
$\hat{T}_{1}$ or $\hat{T}_{2}$ is true.
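The logical step just made can be displayed schematically (a sketch of the argument, using the entailments of Section 4.4):

```latex
% Premises: the entailments (17), the exclusion (18), and the realist's
% commitment to at least one of the three theories:
\hat{T}_{1}\Rightarrow\hat{t}\,,\quad
\hat{T}_{2}\Rightarrow\hat{t}\,,\quad
\hat{T}_{1}\Rightarrow\neg\,\hat{T}_{2}\,,\quad
\hat{T}_{1}\vee\hat{T}_{2}\vee\hat{t}\,.
% By reasoning on each disjunct (disjunction elimination), \hat{t}
% follows in every case (trivially in the third). By (18) the realist
% cannot hold both \hat{T}_1 and \hat{T}_2; but \hat{t} is secure
% whichever of them is true.
```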
This does not prevent the scientific realist from, in addition, being
committed to one of the two external interpretations, i.e. committing herself
to one of the two theory formulations (perhaps using alternative assessment
criteria, i.e. point (ii) in Section 2.1): $\hat{T}_{1}$, say. Indeed,
$\hat{t}$ and $\hat{T}_{1}$ are of course compatible, by Eq. (17): and so,
there is no under-determination here either.
In any case, the scientific realist does not make a mistake if (perhaps in the
absence of additional assessment criteria) she commits only to $\hat{t}$. Thus
the cautious scientific realist is, in the cases under consideration, always
justified in believing the common core theory.
This is a conclusion that the multiple interpretative options considered in Le
Bihan and Read (2018: Figure 1) obscure. They also consider a “common core”
theory, but theirs is not constructed by abstraction: in any case, they do not
consider the possibility of a constraint, Eqs. (17) and (18), that follows
from the process of abstraction.[38] This is clear from e.g. the following
quote (p. 5): ‘[T]here is a sense in which, absent further philosophical
details, such a move [i.e. identifying the common core] has made the situation
worse: we have, in effect, identified a further world which is empirically
adequate to the actual world’. This constraint (together with the assumption
that at least one of $\hat{T}_{1}$, $\hat{T}_{2}$, $\hat{t}$ is worth
scientific realist commitment—else there is no question of under-determination
either!) eliminates four out of six of the options in their
taxonomy. The remaining two are the two cases just discussed, i.e. commitment
only to $\hat{t}$, or possible commitment to both $\hat{t}$ and $\hat{T}_{1}$.
Thus, as just argued, the problem of under-determination is thereby
resolved.[40] It is of course not guaranteed that a common core theory
$\hat{t}$ always exists. However, as I argued in Section 4.4, this does not
lead to cases of empirical under-determination.
In sum, a scientific realist may lack the resources to know which of two dual
theory formulations, $\hat{T}_{1}$ or $\hat{T}_{2}$, describes the world, but
is not required to believe either of them. Rather, the cautious scientific
realist is justified in adopting an internal interpretation, which abstracts
from the external interpretations, and thereby resolves their
incompatibilities. For unextendable theories, a scientific realist may take
theoretical equivalence to be a criterion of theory individuation. Thus a
cautious scientific realism recommends only belief in the internal
interpretation of unextendable theories.
My overall view can be well summarised in terms of how it clarifies previous
discussions of under-determination for dualities: by Matsubara (2013) and
Rickles (2017). Matsubara presents two approaches to dualities: his first
approach (‘Accept the different dual descriptions as describing two different
situations’) corresponds to the Schema’s case of theoretically inequivalent
theories, and he says that these theoretically inequivalent theories are
nevertheless empirically equivalent (‘the world may in reality be more like
one dual description than the other but we have no empirical way of knowing
this’).
While I broadly agree with Matsubara, my analysis makes several
clarifications. For example, my characterisation of the under-determination is
slightly different: there is an under-determination specific to dualities only
in the case of unextendable theories, but one is always justified in adopting
internal interpretations. (For extendable theories, further theory development
can distinguish between two theory formulations, so this is ordinary transient
under-determination.) Since, in the relevant examples, these are obtained by
abstraction from the external interpretations of the various theory
formulations, they do not contradict them (Eq. (17)). Thus I disagree with
Matsubara’s conclusion that ‘This means that we must accept epistemic anti-
realism since in this situation it is hard to find any reason for preferring
one alternative before another.’ Indeed, as I have argued, a cautious
scientific realist does not need to choose one dual over the other: such a
realist is justified in adopting an internal interpretation of the kind
discussed, which agrees—in everything it says—with the external
interpretations, and is silent about their other specific aspects.
Matsubara’s second approach (‘We do not accept that [the duals] describe
different situations; instead they are descriptions of the same underlying
reality’) corresponds to my case of theoretically equivalent duals. I agree
with Matsubara that there is no under-determination, but I think this position
is less tangled with ontic structural realism than he seems to think (although
it is of course compatible with it). As the notions of an internal
interpretation and its justification through unextendability should make
clear, judging two duals as being theoretically equivalent is a matter of
setting both formal and interpretative criteria for individuating theories.
In the case of unextendable theories, a methodological preference for internal
interpretations surfaces—because justified by unextendability and abstraction.
This feeds into physicists’ main interest in dualities, which comes from their
heuristic power in formulating new theories. On an internal interpretation, a
duality is thus taken to be a starting point of theory individuation, with an
interpretation of the theory’s common core, of which the various theory
formulations are specific instantiations. This further motivates De Haro and
Butterfield’s (2017) proposal to lift the usage of ‘theory’ and ‘model’ “one
level up”, so that the various theory formulations are ‘models’ (i.e.
instantiations or representations) of a single theory. And this now holds not
only for bare theories and models, but also—on internal interpretations—for
interpreted ones.[41] My conclusions are similar to some of Dawid’s (2017b).
Where he says that there is a shift in the role played by empirical
equivalence in theory construction (and I agree with this, see De Haro
(2018)), I have argued that the traditional semantic and syntactic construals
of empirical equivalence are in themselves sufficient to analyse dualities,
and that the main difference is in the theories whose empirical equivalence is
being assessed.
## 5 Conclusion
The discussion of under-determination begins slightly differently from, and
is complementary to, usual discussions of duality. While usual discussions of
duality aim to establish conditions for when duals are theoretically
equivalent, when analysing under-determination we begin with theoretically
inequivalent theories, whose inequivalence—as my analysis in Section 4.2
shows—ends up depending on external interpretations. Thus our discussion has
put two under-emphasised aspects of duality to good use: namely, theoretical
inequivalence and external interpretations.
The Schema’s construal of bare theories, interpretation, and duality prompts a
notion of theoretical equivalence that, although strict, turns out to generate
cases of under-determination: but I have also argued that these do not present
a new problem for a cautious scientific realism. For theory formulations that
can be extended beyond their domain of application, we have the familiar
situation of transient under-determination, where further theory development
may be expected to break the duality. For theory formulations that cannot be
so extended (or only in a way that does not change their interpretations in
the old domain), the under-determination is benign, because—for dualities in
the literature that are sufficiently well-understood—a common core theory can
be obtained by abstraction from the external interpretations. In this case, a
cautious scientific realism does not commit to the external interpretations,
but belief in the internal interpretation is justified. Thus the under-
determination is here benign, and dualities do not provide the anti-realist
with new ammunition, although they may be taken to favour a cautious approach
to scientific realism.
Just as dualities bear on the problem of theory individuation that is central
to the discussion of theoretical equivalence, they also bear on the question
of scientific realism. Namely, they suggest a cautious approach, according to
which inter-theoretic relations should be taken into account when determining
one’s realist commitments.
## Acknowledgements
I thank Jeremy Butterfield, James Read, and two anonymous reviewers for
comments on this paper. I also thank John Norton for a discussion of duality
and under-determination. This work was supported by the Tarner scholarship in
Philosophy of Science and History of Ideas, held at Trinity College,
Cambridge.
## References
Ammon, M. and Erdmenger, J. (2015). Gauge/Gravity Duality. Foundations and
Applications. Cambridge: Cambridge University Press.
Balasubramanian, V., Kraus, P., Lawrence, A. E. and Trivedi, S. P. (1999).
‘Holographic probes of anti-de Sitter space-times’. Physical Review D, 59, p.
104021.
Barrett, T. W. and Halvorson, H. (2016). ‘Glymour and Quine on theoretical
equivalence’. Journal of Philosophical Logic, 45 (5), pp. 467-483.
Baxter, R. J. (1982). Exactly Solved Models in Statistical Mechanics. London: Academic Press.
Butterfield, J. (2020). ‘On Dualities and Equivalences Between Physical
Theories’. Forthcoming in Space and Time after Quantum Gravity, Huggett, N.
and Wüthrich, C. (Eds.). An extended version is in: http://philsci-
archive.pitt.edu/14736.
Castellani, E. and Rickles, D. (2017). ‘Introduction to special issue on
dualities’. Studies in History and Philosophy of Modern Physics, 59: pp. 1-5.
doi.org/10.1016/j.shpsb.2016.10.004.
Coffey, K. (2014). ‘Theoretical Equivalence as Interpretative Equivalence’.
The British Journal for the Philosophy of Science, 65 (4), pp. 821-844.
Dawid, R. (2006). ‘Under-determination and Theory Succession from the
Perspective of String Theory’. Philosophy of Science, 73 (3), pp. 298-322.
Dawid, R. (2017a). ‘Scientific Realism and High-Energy Physics’. The Routledge
Handbook of Scientific Realism, J. Saatsi (Ed.), pp. 279-290.
Dawid, R. (2017b). ‘String Dualities and Empirical Equivalence’. Studies in
History and Philosophy of Modern Physics, 59, pp. 21-29.
De Haro, S. (2017). ‘Dualities and Emergent Gravity: Gauge/Gravity Duality’.
Studies in History and Philosophy of Modern Physics, 59, pp. 109-125.
arXiv:1501.06162 [physics.hist-ph]
De Haro, S. (2019). ‘Theoretical Equivalence and Duality’. Synthese, topical
collection on Symmetries. M. Frisch, R. Dardashti, G. Valente (Eds.), 2019,
pp. 1-39. arXiv:1906.11144 [physics.hist-ph]
De Haro, S. (2020). ‘Spacetime and Physical Equivalence’. In: Beyond
Spacetime. The Foundations of Quantum Gravity, Huggett, N., Matsubara, K. and
Wüthrich, C. (Eds.), pp. 257-283. Cambridge: Cambridge University Press.
arXiv:1707.06581 [hep-th].
De Haro, S. (2020a). On Inter-Theoretic Relations and Scientific Realism. PhD
dissertation, University of Cambridge. http://philsci-archive.pitt.edu/17347.
De Haro, S. (2020b). ‘On Empirical Equivalence and Duality’. To appear in 100
Years of Gauge Theory. Past, Present and Future Perspectives, S. De Bianchi
and C. Kiefer (Eds.). Springer. arXiv:2004.06045 [physics.hist-ph].
De Haro, S. and Butterfield, J.N. (2017). ‘A Schema for Duality, Illustrated
by Bosonization’. In: Kouneiher, J. (Ed.), Foundations of Mathematics and
Physics one century after Hilbert, pp. 305-376. arXiv:1707.06681
[physics.hist-ph].
De Haro, S., Mayerson, D. R. and Butterfield, J. (2016). ‘Conceptual Aspects
of Gauge/Gravity Duality’. Foundations of Physics, 46, pp. 1381-1425.
arXiv:1509.09231 [physics.hist-ph].
Dieks, D., Dongen, J. van, Haro, S. de (2015). ‘Emergence in Holographic
Scenarios for Gravity’. Studies in History and Philosophy of Modern Physics 52
(B), pp. 203-216. arXiv:1501.04278 [hep-th].
Earman, J. (1993). ‘Underdetermination, Realism, and Reason’. Midwest Studies
in Philosophy, XVIII, pp. 19-38.
Festuccia, G. and Liu, H. (2006). ‘Excursions beyond the horizon: Black hole
singularities in Yang-Mills theories. I’, Journal of High-Energy Physics, 04,
044.
Glymour, C. (1970). ‘Theoretical Equivalence and Theoretical Realism’. PSA:
Proceedings of the Biennial Meeting of the Philosophy of Science Association
1970, pp. 275-288.
Glymour, C. (1977). ‘The epistemology of geometry’. Noûs, pp. 227-251.
Hodges, W. (1997). A Shorter Model Theory. Cambridge: Cambridge University
Press.
Huggett, N. (2017). ‘Target space $\neq$ space’. Studies in History and
Philosophy of Modern Physics, 59, 81-88.
Huggett, N. and Wüthrich, C. (2020). Out of Nowhere: Duality, Chapter 7.
Oxford: Oxford University Press (forthcoming). http://philsci-
archive.pitt.edu/17217.
Kitcher, P. (1993). The Advancement of Science. New York and Oxford: Oxford
University Press.
Laudan, L. (1990). ‘Demystifying Underdetermination’. Minnesota Studies in the
Philosophy of Science, 14, pp. 267-297.
Laudan, L. and Leplin, J. (1991). ‘Empirical Equivalence and
Underdetermination’. The Journal of Philosophy, 88 (9), pp. 449-472.
Le Bihan, B. and Read, J. (2018). ‘Duality and Ontology’. Philosophy Compass,
13 (12), e12555.
Lenzen, V. F. (1955). ‘Procedures of Empirical Science’. In: International
Encyclopedia of Unified Science, Neurath, O., Bohr, N., Dewey, J., Russell,
B., Carnap, R., and Morris, C. W. (Eds.). Volume I, pp. 280-339.
Lyre, H. (2011). ‘Is Structural Underdetermination Possible?’ Synthese, 180,
pp. 235-247.
Matsubara, K. (2013). ‘Realism, Underdetermination and String Theory
Dualities’. Synthese, 190, 471-489.
Ney, A. (2012). ‘Neo-Positivist Metaphysics’. Philosophical Studies, 160, pp.
53-78.
Norton, J. (2006). ‘How the Formal Equivalence of Grue and Green Defeats What
Is New in the New Riddle of Induction’. Synthese, 150, pp. 185-207.
Polchinski, J. (2005). String Theory, volume II. Cambridge: Cambridge
University Press. Second Edition.
Psillos, S. (1999). Scientific Realism. How Science Tracks Truth. London and
New York: Routledge.
Quine, W. V. (1970). ‘On the Reasons for Indeterminacy of Translation’. The
Journal of Philosophy, 67 (6), pp. 178-183.
Quine, W. V. (1975). ‘On empirically equivalent systems of the world’.
Erkenntnis, 9 (3), pp. 313-328.
Read, J. (2016). ‘The Interpretation of String-Theoretic Dualities’.
Foundations of Physics, 46, pp. 209-235.
Read, J. and Møller-Nielsen, T. (2020). ‘Motivating Dualities’. Synthese, 197,
pp. 263-291.
Rickles, D. (2011). ‘A Philosopher Looks at String Dualities’. Studies in
History and Philosophy of Modern Physics, 42, pp. 54-67.
Rickles, D. (2017). ‘Dual Theories: ‘Same but Different’ or ‘Different but
Same’?’ Studies in History and Philosophy of Modern Physics, 59, pp. 62-67.
Schwarz, J. (1992). ‘Superstring Compactification and Target Space Duality’.
In: N. Berkovits, H. Itoyama, K. Schoutens, A. Sevrin, W. Siegel, P. van
Nieuwenhuizen and J. Yamron, Strings and Symmetries. World Scientific, pp.
3-18.
Sklar, L. (1975). ‘Methodological Conservatism’. The Philosophical Review, 84
(3), pp. 374-400.
Stanford, P. K. (2006). Exceeding Our Grasp. New York: Oxford University
Press.
van Fraassen, B. C. (1970). ‘On the Extension of Beth’s Semantics of Physical
Theories’. Philosophy of Science, 37 (3), pp. 325-339.
van Fraassen, B. C. (1980). The Scientific Image. Oxford: Clarendon Press.
Weatherall, J. O. (2020). ‘Equivalence and Duality in Electromagnetism’.
Forthcoming in Philosophy of Science. https://doi.org/10.1086/710630.
Zwiebach, B. (2009). A First Course in String Theory. Cambridge: Cambridge
University Press.
# Weakly-Supervised Saliency Detection via Salient Object Subitizing
Xiaoyang Zheng*, Xin Tan*, Jie Zhou, Lizhuang Ma†, and Rynson W.H. Lau† *
Equal Contribution.† Corresponding Author.X. Zheng, X. Tan, J. Zhou and L. Ma
are with the Department of Computer Science and Engineering, Shanghai Jiao
Tong University, 200240, China. X. Tan and J. Zhou are also with the
Department of Computer Science, City University of Hong Kong, HKSAR, China.
E-mail: [email protected], [email protected],
[email protected], [email protected] W.H. Lau is with the
Department of Computer Science, City University of Hong Kong, HKSAR, China.
E-mail: [email protected] received Sept. 20, 2020; revised
Dec. 04, 2020.
###### Abstract
Salient object detection aims at detecting the most visually distinct objects
and producing the corresponding masks. As the cost of pixel-level annotations
is high, image tags are usually used as weak supervisions. However, an image
tag can only be used to annotate one class of objects. In this paper, we
introduce saliency subitizing as the weak supervision since it is class-
agnostic. This allows the supervision to be aligned with the property of
saliency detection, where the salient objects of an image could be from more
than one class. To this end, we propose a model with two modules, Saliency
Subitizing Module (SSM) and Saliency Updating Module (SUM). While SSM learns
to generate the initial saliency masks using the subitizing information,
without the need for any unsupervised methods or random seeds, SUM helps
iteratively refine the generated saliency masks. We conduct extensive
experiments on five benchmark datasets. The experimental results show that our
method outperforms other weakly-supervised methods and even performs
comparably to some fully-supervised methods.
###### Index Terms:
weak supervision, saliency detection, object subitizing
## I Introduction
The salient object detection task aims at accurately recognizing the most
distinct objects in a given image that would attract human attention. This
task has received a lot of research interest in recent years, as it plays an
important role in many other computer vision tasks, such as visual tracking
[1], image editing/manipulation [2, 3] and image retrieval [4]. Recently, deep
convolutional neural networks have achieved significant progress in saliency
detection [5, 6, 7, 8, 9]. However, most of these recent methods are primarily
CNN-based and rely on a large amount of pixel-wise annotations for training.
For such an image segmentation task, it is both arduous and inefficient to
collect a large number of pixel-wise saliency masks. For example, it usually
takes several minutes for an experienced worker to annotate a single image.
This drawback limits the amount of available
training samples. In this paper, we focus on the salient object detection task
with a weakly-supervised setting.
Figure 1: Several inconsistent cases between the given image labels and the
actual salient objects. These images and tags are chosen from the Pascal VOC
[10] and Microsoft COCO [11] datasets. The captions under images are the given
labels. The masks are generated with our method, showing the actually salient
objects.
Some methods [12, 13, 14] tried to address salient object detection with
image-tag supervisions. Li et al. [12] utilized Class Activation Maps as
coarse saliency maps. Together with results from unsupervised methods, those
masks are used to supervise the segmentation stream. Wang et al. [13] proposed
a two-stage method, which assigns category tags to object regions and
fine-tunes the network using the predicted saliency maps as ground truth. Zeng
et al. [14] proposed a unified framework to conduct saliency detection with
diverse weak supervisions, including image tags, captions and pseudo labels.
They achieved good performances with image labels from the Microsoft COCO [11]
or Pascal VOC [10] datasets. However, their results are established on a
critical assumption that the labelled object is the most visually distinct
one. From those datasets with image tags, we observe that this assumption is
not always reliable. As shown in Figure 1, the actual salient objects are
often inconsistent with the image labels. For example, although the image in
the second column is labelled as “fire hydrant”, it is obvious that the orange
“ball” should also be a salient object. In addition, even when trained on
datasets with multi-class labels, these methods essentially detect objects
within the labelled categories, rather than salient objects. Hence, image
category labels do not
guarantee the property of saliency.
Motivation. Subitizing is the rapid enumeration of a small number of items in
a given image, regardless of their semantic category. According to [15],
subitizing of up to four targets is highly accurate, quick and confident. In
addition, since the subitizing information may contain objects from different
categories, it is class-agnostic. Inspired by the above advantages, we propose
to address the saliency detection problem using only the object subitizing
information as a weak supervision.
Although there exist works, e.g., [16], that use subitizing as an auxiliary
supervision, we propose to apply subitizing as the weak supervision in this
work. To the best of our knowledge, we are the first to adopt subitizing as
the only supervision in the saliency detection task. However, the subitizing
information does not indicate the position and appearance of salient objects.
Therefore, we propose the Saliency Subitizing Module (SSM) to produce saliency
masks. Recent works [17, 18] have shown that, even when trained on
classification tasks, CNNs implicitly reveal the attention regions in the
given images. Trained on subitizing information, the SSM relies on the
distinct regions to conduct classification. By extracting those regions, we
can explicitly obtain the locations of the salient objects.
However, as pointed out in [19], in a well-trained classification network, the
discriminative power differs across object parts, which leads to incomplete
segmentation. In the fine-tuning stage, we need to further enlarge the
prominent regions extracted from the network. Kolesnikov et al. [20] trained
their network with pseudo labels for multiple iterations to obtain integrated
results, but this multi-stage training is complicated.
In order to address this issue, we design the Saliency Updating Module (SUM)
for refining the saliency masks produced by SSM. In each iteration, the
generated saliency maps, combined with original images, are used to generate
masked images. With those masked images as input to the next iteration, the
network learns to recognize those related but less salient regions. During the
inference phase, given an image, our model will produce the saliency maps
without any iterations, and there will be no need to provide the subitizing
information. Our extensive evaluations demonstrate the superiority of the
proposed methods over the state-of-the-art weakly-supervised methods.
In summary, this paper has the following contributions:
* •
We propose to use subitizing as a weak supervision in the saliency detection
task, which has the advantage of being class-agnostic.
* •
We propose an end-to-end multi-task saliency detection network. By introducing
subitizing information, our network first generates rough saliency masks with
the Saliency Subitizing Module (SSM), and then iteratively refines the
saliency masks with the Saliency Updating Module (SUM).
* •
Our extensive experiments show that the proposed method achieves superior
performance on five popular saliency datasets, compared with other
weakly-supervised methods. It even performs comparably to some of the
fully-supervised methods.
## II Related Work
Recently, progress on salient object detection has been substantial,
benefiting from the development of deep neural networks. He et al. [5]
proposed a convolutional neural network based on super-pixels for saliency
detection. Li et
al. [21] utilized multi-scale features extracted from a deep CNN. Zhao et al.
[22] proposed a multi-context deep learning framework for detecting salient
objects with two different CNNs used to learn global and local context
information. Yuan et al. [23] proposed a saliency detection framework, which
extracted the correlations among macro object contours, RGB features and low-
level image features. Wang et al. [24] proposed a pyramid attention structure
to enlarge the receptive field. Hou et al. [25] introduced short connections
to an edge detector. Zhu et al. [8] proposed the attentional DenseASPP to
exploit local and global contexts at multiple dilated convolutional layers. Hu
et al. [9] proposed a spatial attenuation context network, which recurrently
translated and aggregated the context features in different layers. Tu et al.
[26] presented an edge-guided block to embed boundary information into
saliency maps. Zhou et al. [27] proposed a multi-type self-attention network
to learn more semantic details from degraded images. However, these methods
rely heavily on pixel-wise supervision. Due to the scarcity of pixel-wise
data, we focus on the weakly-supervised saliency detection task.
Weakly-Supervised Saliency detection. There are many works using weak
supervisions for the saliency detection task. For example, Li et al. [12] used
the image-level labels to train the classification network and applied the
coarse activation maps as saliency maps. Wang et al. [13] proposed a two-stage
weakly-supervised method by designing a Foreground Inference Network (FIN) to
predict foreground regions and a Global Smooth Pooling (GSP) to aggregate
responses of predicted objects. Zeng et al. [14] proposed a unified network,
which is trained on multi-source weak supervisions, including image tags,
captions and pseudo labels. They designed an attention transfer loss to
transmit signals between sub-networks with different supervisions. However, as
discussed in Section I, the image-level supervisions are not always reliable.
In addition, captions were used as a weak supervision in [14], combined with
other supervisions. Different from those methods above, we propose to use
subitizing information as the weak supervision in the saliency detection task.
Salient object subitizing. Zhang et al. [28] proposed a salient object
subitizing dataset, SOS. They were the first to study the problem of salient
object subitizing and revealed the relations between subitizing and saliency
detection. Lu et al. [29] formulated the subitizing task as a matching problem
and exploited the self-similarity within the same class. He et al. [16]
trained a subitizing network to provide additional knowledge to the pixel-
level segmentation stream. Recently, Amirul et al. [30] proposed a salient
object subitizing network and recognized the variability of subitizing. They
also provided outputs as a distribution that reflects this variability. In
this paper, our approach is motivated by these methods but we use subitizing
as the weak supervision.
Map refinement. Li et al. [12] adopted saliency maps generated by some
unsupervised methods as the initial seeds. With a graphical model, these
saliency maps are used as pixel-level annotations and refined in an iterative
way. However, in our proposed method, we do not utilize any unsupervised
methods or initial seeds. The saliency maps are refined iteratively from the
activation maps of the subitizing network. Li et al. [31] adopted a soft mask
loss to an auxiliary attention stream. However, the input of [31] is updated
only once while the inputs to our network are iteratively updated. In
addition, there are some existing post-processing techniques used to refine
the saliency masks. In [13, 12], the authors utilized an iterative conditional
random field (CRF) to enforce spatial label consistency. Zheng et al. [32]
further proposed to conduct approximate inference with pair-wised Gaussian
potential in CRF as a recurrent neural network. Chen et al. [33] employed the
relations of the deep features to promote saliency detectors in a self-
supervised way. In order to achieve better results, we adopt a refinement
process, which maintains the internal structure of original images and
enforces smoothness into the final saliency maps.
Figure 2: The pipeline of the proposed network, with the Saliency Subitizing
Module (SSM), the Saliency Updating Module (SUM) and the refinement process.
## III Salient Object Detection Method
We propose a multi-task convolutional neural network, which consists of two
main modules: saliency subitizing module (SSM) and saliency updating module
(SUM). SSM learns to count salient objects and extracts coarse saliency maps
with the precise locations of the target objects. SUM updates the saliency
masks produced by SSM and extends the activation regions. Finally, we
apply a refinement process to refine the object boundaries. The pipeline of
our method is presented in Figure 2.
### III-A Saliency Subitizing Module
The subitizing information indicates the number of salient objects in a given
image. It does not provide the location or the appearance information of the
salient objects explicitly. However, when we train our network with the
subitizing supervisions, the network will learn to focus on regions related to
the salient objects. Hence, we design the Saliency Subitizing Module (SSM) to
extract the attention regions as coarse saliency masks. We regard the salient
object subitizing task as a classification task. Training images are divided
into 5 categories based on the number of salient objects: $0$, $1$, $2$, $3$
and $4+$. We use ResNet-50 [34] as the backbone network, pretrained on the
ImageNet [35] dataset. The original 1000-way fully-connected layer is
replaced by a 5-way fully-connected layer. We use cross-entropy loss as the
classification loss $L_{cls}$. In order to obtain denser saliency maps, the
stride of the last two down-sampling layers is set to 1 in our backbone
network, which produces feature maps with $1/8$ of the original resolution
before the classification layer. In order to enhance the representation power
of the proposed network, we also apply two attention modules: channel
attention module and spatial attention module, which tell the network “what”
and “where” to focus on, respectively. Both of them are placed sequentially
between the ResNet blocks.
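The two attention modules can be sketched, in a deliberately simplified form, as channel-wise and spatial gating. The sigmoid-over-pooled-features design below is our assumption for illustration; the paper does not spell out the internals of its attention modules:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Gate each of the K channels by a scalar weight derived from
    global average pooling (illustrative; not the paper's exact design)."""
    w = sigmoid(feat.mean(axis=(1, 2)))   # shape (K,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Gate each spatial position by a weight derived from pooling
    across channels (illustrative; not the paper's exact design)."""
    w = sigmoid(feat.mean(axis=0))        # shape (H, W)
    return feat * w[None, :, :]

# Applied sequentially, as between the ResNet blocks:
np.random.seed(0)
feat = np.random.randn(8, 16, 16)
out = spatial_attention(channel_attention(feat))
```

Both gates lie in (0, 1), so the modules re-weight rather than replace the backbone features.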
We apply the technique of Grad-CAM [18] to extract activation regions as the
initial saliency maps, using the gradient information flowing into the last
convolutional layer during the backward pass. The gradient
information represents the importance of each neuron during the inference of
the network. We assume that the produced features from the last convolutional
layer have a channel size of $K$. For a given image, let $f_{k}$ be the
activation of unit $k$, where $k\in[1,K]$. For each class $c$, the gradients
of the score $y^{c}$ with respect to the activation map $f_{k}$ are averaged
to obtain the neuron importance weight $\alpha_{k}^{c}$ of class $c$:
$\alpha_{k}^{c}=\frac{1}{N}\sum_{i=1}^{m}\sum_{j=1}^{h}\frac{\partial
y^{c}}{\partial f_{ij}^{k}},$ (1)
where $i$ and $j$ represent the coordinates in the feature map and $N=m\times
h$. With the neuron importance weight $\alpha_{k}^{c}$, we can compute the
activation map $M^{c}$:
$M^{c}=\text{ReLU}(\sum_{k}\alpha_{k}^{c}f^{k}).$ (2)
Note that the ReLU function filters out the negative values, since only the
positive ones contribute to the class decision, while the negative values
contribute to other categories. The size of the saliency map is the same as
the size of the last convolutional feature maps ($1/8$ of the original
resolution). Since the saliency maps $M^{c}$ are computed within each
inference pass, they can be updated during the training stage.
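Eqs. (1)–(2) amount to a spatial average of the gradients followed by a ReLU-weighted channel sum. A minimal NumPy sketch (the array shapes are our assumption):

```python
import numpy as np

def grad_cam_map(features, gradients):
    """Compute the class activation map M^c of Eqs. (1)-(2).

    features:  (K, m, h) activations f^k of the last convolutional layer.
    gradients: (K, m, h) gradients of the class score y^c w.r.t. f^k.
    """
    # Eq. (1): average the gradients over the N = m*h spatial positions.
    alpha = gradients.mean(axis=(1, 2))        # neuron importance, shape (K,)
    # Eq. (2): channel-weighted sum, then ReLU keeps positive evidence only.
    m = np.tensordot(alpha, features, axes=1)  # shape (m, h)
    return np.maximum(m, 0.0)
```

Channels whose averaged gradient is negative pull the map down and are clipped by the ReLU, which is exactly why only positive evidence survives in $M^{c}$.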
### III-B Saliency Updating Module
In the Saliency Subitizing Module, our proposed network learns to detect the
regions that contribute to the counting of salient objects. Due to this
attribute, with only the SSM module, we can obtain saliency masks with
accurate locations of the target objects. However, the quality of the saliency
masks may not be very high. In order to address this issue, we design a
Saliency Updating Module (SUM) to fine-tune the obtained masks, with the aim
to refine the activation regions. Li et al. [12] updated the saliency masks with an
additional CRF module. In contrast, our proposed model refines the saliency
masks in an end-to-end way.
As shown in Figure 2, we fuse the original images and the saliency maps to
obtain masked images as new inputs to the next iteration. Visually, the
current prominent area is eliminated from the original samples. We define
$I_{0}$ as the original images. $I^{c}_{i}$ denotes the input images at the
$i$-th iteration ($i\geq 1$). $M^{c}_{i}$ denotes the saliency maps of class $c$
at the $i$-th iteration.
The fusion operation is formulated as:
$I^{c}_{i}=I_{0}-(\text{Sigmoid}(\omega\cdot(M^{c}_{i-1}-\bm{\sigma}))\odot
I_{0}),$ (3)
where $\odot$ stands for element-wise multiplication, $\bm{\sigma}$ is a
threshold matrix with all elements equal to $\sigma$. With the scale parameter
$\omega$, the mask term gets closer to 1 when $M^{c}_{i-1}>\sigma$, and gets
closer to 0 otherwise. As presented in Eq. 3, we enforce $I_{i}^{c}$ to
contain as few features from the target class $c$ as possible.
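The fusion of Eq. (3) can be sketched as follows; the values of $\omega$ and $\sigma$ below are placeholders, since the paper introduces them only as tunable parameters:

```python
import numpy as np

def mask_fuse(image, sal_map, omega=100.0, sigma=0.5):
    """Eq. (3): I_i = I_0 - Sigmoid(omega * (M_{i-1} - sigma)) * I_0.

    image:   (H, W) original input I_0 (single channel for simplicity).
    sal_map: (H, W) saliency map M^c_{i-1} with values in [0, 1].
    """
    gate = 1.0 / (1.0 + np.exp(-omega * (sal_map - sigma)))  # ~1 where salient
    return image - gate * image                              # erase salient area
```

With a large $\omega$ the gate approaches a hard threshold at $\sigma$: salient pixels are zeroed out while the rest pass through, so the next iteration must find evidence for class $c$ in the remaining regions.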
Trained on samples without features from the current prominent area, the
network learns to recognize those related but less salient regions. In other
words, regions beyond the high-responding area should also include features
that trigger the network to recognize the sample as class $c$. Similar to
[31], we introduce a mask mining loss $L_{mask}$ to extract larger activation
areas. This loss penalizes the prediction score for class $c$, as shown below,
$L_{mask}=\frac{1}{n}\sum_{c}y^{c}(I^{c}),$ (4)
where $n$ is the dataset size and $y^{c}(I^{c})$ represents the prediction
score of masked images $I^{c}$ for class $c$. From the perspective of this
loss, the prediction scores of the correct label for the masked images should
be lower than those for the original images. During the training phase, the total loss
$L$ is the combination of the classification loss $L_{cls}$ and the mask
mining loss $L_{mask}$.
$L=L_{cls}+\alpha L_{mask},$ (5)
where $\alpha$ is the weighting parameter. We set $\alpha=1$ in all our
experiments. With this loss, the network learns to extract those less salient
but related parts of the target objects, while maintaining the ability to
recognize the subitizing information. In [31], the extracted regions for masking
the input are updated only once. These regions extracted from just a single
step are usually small and sparse, since CNNs tend to capture the most
discriminative regions and neglect the others in the image. In contrast, our
method updates the extracted regions through multiple iterations. In this way,
the extracted regions in our method are more integrated.
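A minimal sketch of Eqs. (4)–(5): $L_{mask}$ is simply the mean class-$c$ score over the masked images, so minimizing the total loss drives those scores down while the cross-entropy term preserves subitizing accuracy:

```python
def total_loss(cls_loss, masked_scores, alpha=1.0):
    """Eq. (4): L_mask = (1/n) * sum y^c(I^c);  Eq. (5): L = L_cls + alpha * L_mask.

    cls_loss:      scalar cross-entropy classification loss L_cls.
    masked_scores: per-sample prediction scores y^c(I^c) of the true class
                   on the masked images.
    """
    l_mask = sum(masked_scores) / len(masked_scores)  # Eq. (4)
    return cls_loss + alpha * l_mask                  # Eq. (5), alpha = 1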
Discussion. Although Wei et al. [36] also adopted an iterative strategy, our
method is different from [36] in two main aspects. First, during the
generation of training images for the next iteration, [36] simply replaced the
internal pixels by the mean pixel values of all the training images. Instead,
we use a threshold $\sigma$ to determine the salient regions and a weighting
parameter $\omega$ to adjust the removing rate of features (as presented in
Eq. 3), so that the correlations of the extracted regions and the backgrounds
at different iterations would be smoothly changed. Second, [36] took the mined
object region as the pixel-level label for segmentation training. Instead, our
method is only trained on the given dataset with subitizing labels, avoiding
training on unreliable pseudo labels.
### III-C Refinement Process
To refine the object boundaries in the saliency maps, we adopt a graph-based
optimization method. Inspired by [37], we adopt super-pixels produced with
SLIC [38] as the basic representation units. Those super-pixels are organized
as an adjacency graph to model the structure in both spatial and feature
dimensions. A given image is segmented into a set of super-pixels
$\\{x_{i}\\}_{i=1}^{N}$, where $N=200$. The super-pixel graph
$A=(a_{i,j})_{N\times N}$ is defined as follows:
$a_{i,j}=\left\\{\begin{array}[]{ll}K(x_{i},x_{j}),&\text{if $x_{i}$ and
$x_{j}$ are spatially adjacent;}\\\ 0,&\text{otherwise,}\end{array}\right.$
(6)
where $K(x_{i},x_{j})=\text{exp}(-\frac{1}{2}\|x_{i}-x_{j}\|^{2}_{2})$
evaluates the feature similarity. Assume that there exist $l$ super-pixels
with initial scores. Our task is to learn a non-linear regression function
$g(x_{i})=\sum_{j=1}^{N}\alpha_{j}K(x_{i},x_{j})$ for each super-pixel $x_{i}$. The
framework is shown as:
$\min_{g}\frac{1}{l}\sum_{i=1}^{l}(y_{i}-g(x_{i}))^{2}+\theta_{1}\|g\|_{K}^{2}+\frac{\theta_{2}}{N^{2}}g^{T}D^{-\frac{1}{2}}LD^{-\frac{1}{2}}g,$
(7)
where $y_{i}$ is the weight of the $i$-th unit in the super-pixel graph, and
$g=(g(x_{1}),...,g(x_{N}))^{T}$. $\|g\|_{K}$ denotes the norm of $g$ induced
by the function $K$; $D$ is the diagonal matrix containing the degree value in
the adjacency graph; $L$ denotes the graph Laplacian matrix, defined as
$L=D-A$; $\theta_{1}$ and $\theta_{2}$ are two weights, set to $1$ and $10^{-6}$,
respectively.
In Eq. 7, the first term is the trivial square loss, and the second term aims
at normalizing the desired regression function. However, unlike [37], we
introduce matrix $D$ in the third term to enforce constraints between units of
the super-pixel graph, and normalize the optimized results. Since we introduce
constraints between different graph units, our method can help strengthen the
connections and smoothness among different graph units. The optimization
objective function is transformed into a matrix form as:
$\min_{\alpha}\frac{1}{l}\sum_{i=1}^{l}\|y-JK\alpha\|_{2}^{2}+\theta_{1}\alpha^{T}K\alpha+\frac{\theta_{2}}{N^{2}}\alpha^{T}KD^{-\frac{1}{2}}LD^{-\frac{1}{2}}K\alpha,$
(8)
where $J$ is a diagonal matrix with the first $l$ elements set to $1$, while
the other elements are set to $0$;
$\alpha=(\alpha_{1},\ldots,\alpha_{N})^{T}$; $K$ is the kernel gram matrix.
The solution to the above optimization problem is formulated as:
$\alpha^{*}=(JK+\theta_{1}lI+\frac{\theta_{2}l}{N^{2}}D^{-\frac{1}{2}}LD^{-\frac{1}{2}}K)^{-1}y,$
(9)
where $I$ is an identity matrix. With the optimized $\alpha^{*}$, we can
calculate the saliency score $g(x)$ for each super-pixel. As presented in
Figure 3, the refinement process optimizes the boundaries of the salient maps.
Figure 3: Comparison between coarse saliency maps and refined saliency maps.
The color code above the second column indicates the degree of saliency.
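Under the stated definitions, the closed-form solution of Eq. (9) reduces to a single linear solve. A toy dense NumPy version (the paper restricts $a_{i,j}$ to spatially adjacent super-pixels; here every pair is treated as adjacent for brevity):

```python
import numpy as np

def refine_scores(X, y, l, theta1=1.0, theta2=1e-6):
    """Solve Eq. (9) for alpha*, then return g(x_i) = sum_j alpha_j K(x_i, x_j).

    X: (N, d) super-pixel features; y: (N,) initial scores, with only the
    first l entries assumed known (the rest set to 0).
    """
    N = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2)                  # kernel gram matrix, cf. Eq. (6)
    A = K - np.diag(np.diag(K))            # adjacency weights (dense toy case)
    D = np.diag(A.sum(axis=1))             # degree matrix
    L = D - A                              # graph Laplacian
    Dn = np.diag(1.0 / np.sqrt(np.diag(D)))
    J = np.diag((np.arange(N) < l).astype(float))
    lhs = (J @ K + theta1 * l * np.eye(N)
           + (theta2 * l / N ** 2) * Dn @ L @ Dn @ K)
    alpha = np.linalg.solve(lhs, y)        # Eq. (9)
    return K @ alpha                       # saliency score per super-pixel
```

The normalized Laplacian term only nudges the solution ($\theta_{2}=10^{-6}$), smoothing scores across adjacent super-pixels rather than overriding the initial seeds.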
## IV Experiment Results
### IV-A Implementation Details
In this paper, we utilize ResNet-101 [34] as the backbone and modify it to
meet our requirements. The unmodified layers are initialized with weights
pretrained on ImageNet [35], while the rest are randomly initialized. Our
method is implemented based on the PyTorch framework. We use Stochastic
Gradient Descent for parameter updating. The learning rate is initially set to
1e-3 and decays as training progresses. The weight decay and the momentum are
set to 5e-4 and 0.9, respectively.
It has been widely shown that inputs at various scales help the accurate
localization of target objects. Hence, the input scales are set to
$\\{0.5,0.75,1.0\\}$ of the original size. Saliency maps from three replica
networks with shared weights are summed up and normalized to $[0,1]$. The
proposed method is trained and evaluated on a PC with an i9 3.3GHz CPU, an Nvidia
1080Ti GPU and 64GB RAM. Given an image of 400$\times$400, the network takes
about 0.05s to produce a single saliency map and the refinement procedure
takes 0.03s per image. We apply random horizontal flipping, color scale and
random rotations ($\pm 30^{\circ}$) to augment the training datasets.
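The multi-scale inference described above can be sketched as follows. Here `net` is a stand-in for the trained saliency network, and nearest-neighbour resizing replaces whatever interpolation the authors actually use; both are assumptions for illustration.

```python
import numpy as np

def resize_nn(img, h, w):
    """Nearest-neighbour resize (stand-in for bilinear interpolation)."""
    H, W = img.shape[:2]
    rows = np.arange(h) * H // h
    cols = np.arange(w) * W // w
    return img[rows][:, cols]

def multiscale_saliency(net, image, scales=(0.5, 0.75, 1.0)):
    """Run `net` (callable: image -> saliency map) at several input
    scales, resize each prediction back, sum them and normalise to [0, 1]."""
    H, W = image.shape[:2]
    acc = np.zeros((H, W))
    for s in scales:
        scaled = resize_nn(image, int(H * s), int(W * s))
        acc += resize_nn(net(scaled), H, W)        # fuse predictions
    lo, hi = acc.min(), acc.max()
    return (acc - lo) / (hi - lo + 1e-8)           # normalise to [0, 1]
```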
### IV-B Datasets and Evaluation Metrics
The ESOS dataset [15] is a saliency detection dataset, annotated with
subitizing labels. It contains $17,000$ images, which are selected from four
datasets: MS COCO [11], Pascal VOC 2007 [10], ImageNet [35] and SUN [39]. Each
image in the dataset is re-labeled by [15] with $0$, $1$, $2$, $3$ and $4+$
salient objects (5 classes). We randomly choose $80\%$ of the whole dataset
(around $13,000$ images) as the training set and use the remaining $20\%$ as the
validation set for model selection. All images are scaled to $256\times 256$
during training. To compare with other saliency detection methods, we evaluate
our proposed method on five benchmarks: Pascal-S [40], ECSSD [41], HKU-IS
[42], DUT-OMRON [43] and MSRA-B [44]. These five datasets are all commonly
used in the saliency detection task. All of them provide images and pixel-
level masks.
In this paper, we adopt four metrics to measure the performance: precision-
recall (PR) curve, $F_{\beta}$, S-measure [45] and mean absolute error (MAE).
The continuous saliency maps are binarized with different threshold values.
The PR curve is computed by comparing a series of binary masks with the ground
truth masks. The second metric is defined as:
$F_{\beta}=\frac{(1+\beta^{2})\cdot\textit{precision}\cdot\textit{recall}}{\beta^{2}\cdot\textit{precision}+\textit{recall}},$
(10)
where $\beta^{2}$ is set as $0.3$ to balance between precision and recall
[38]. The maximum F-measure is selected among all precision-recall pairs. The
structural measure, or S-measure [45], is used to evaluate the structural
similarity of non-binary saliency maps. It is defined as:
$S=\frac{S_{o}+S_{r}}{2},$ (11)
$S_{o}$ is used to assess the object structure similarity against the ground
truth, while $S_{r}$ measures the global similarity of the target objects.
Please refer to the original paper [45] for the definition of these two terms.
The metric of MAE measures the average pixel-wised absolute difference between
the binary ground-truth and the saliency maps. It is calculated as:
${MAE}(S,GT)=\frac{1}{W\times
H}\sum^{W}_{x=1}\sum^{H}_{y=1}\|S(x,y)-GT(x,y)\|,$ (12)
where $S$ and $GT$ are generated saliency maps and the ground truth of size
$W\times H$.
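A minimal sketch of the $F_{\beta}$ (Eq. 10) and MAE (Eq. 12) metrics. The 21-step threshold sweep for the maximum F-measure is our own choice of granularity, not specified by the paper.

```python
import numpy as np

def f_beta(sal, gt, beta2=0.3):
    """Maximum F-measure (Eq. 10) over a sweep of binarisation thresholds.
    sal: saliency map in [0, 1]; gt: binary ground-truth mask."""
    best = 0.0
    for t in np.linspace(0.0, 1.0, 21):            # threshold sweep
        pred = sal >= t
        tp = np.logical_and(pred, gt).sum()
        prec = tp / max(pred.sum(), 1)
        rec = tp / max(gt.sum(), 1)
        if prec + rec > 0:
            f = (1 + beta2) * prec * rec / (beta2 * prec + rec)
            best = max(best, f)
    return best

def mae(sal, gt):
    """Mean absolute error (Eq. 12) between saliency map and ground truth."""
    return np.abs(sal.astype(float) - gt.astype(float)).mean()
```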
Figure 4: Precision-recall curves of our proposed method and other methods on three benchmark datasets. Our proposed method consistently outperforms unsupervised and weakly-supervised methods. It is comparable to some fully-supervised methods. TABLE I: Quantitative results on $F_{\beta}$ and MAE. The red ones refer to the best results and the blue ones refer to the second best results. Dataset | Metric | Unsupervised | Weakly-supervised | Fully-supervised
---|---|---|---|---
BSCA[46] | HS[43] | ASMO[12] | WSS[13] | Ours | SOSD[16] | LEGS[47] | DCL[42] | MSWS[14]
ECSSD | $F_{\beta}\uparrow$ | 0.705 | 0.727 | 0.837 | 0.823 | 0.858 | 0.832 | 0.827 | 0.859 | 0.878
MAE$\downarrow$ | 0.183 | 0.228 | 0.110 | 0.120 | 0.108 | 0.105 | 0.118 | 0.106 | 0.096
MSRA-B | $F_{\beta}\uparrow$ | 0.830 | 0.813 | 0.881 | 0.845 | 0.897 | 0.875 | 0.870 | 0.905 | 0.890
MAE$\downarrow$ | 0.131 | 0.161 | 0.095 | 0.112 | 0.082 | 0.104 | 0.081 | 0.072 | 0.071
DUT-OMRON | $F_{\beta}\uparrow$ | 0.500 | 0.616 | 0.722 | 0.657 | 0.778 | 0.665 | 0.669 | 0.733 | 0.718
MAE$\downarrow$ | 0.196 | 0.227 | 0.110 | 0.150 | 0.083 | 0.198 | 0.133 | 0.094 | 0.114
Pascal-S | $F_{\beta}\uparrow$ | 0.597 | 0.641 | 0.752 | 0.720 | 0.803 | 0.794 | 0.752 | 0.815 | 0.790
MAE$\downarrow$ | 0.225 | 0.264 | 0.152 | 0.145 | 0.131 | 0.114 | 0.157 | 0.113 | 0.134
HKU-IS | $F_{\beta}\uparrow$ | 0.654 | 0.710 | 0.846 | 0.821 | 0.882 | 0.860 | 0.770 | 0.892 | /
MAE$\downarrow$ | 0.174 | 0.213 | 0.086 | 0.093 | 0.080 | 0.129 | 0.118 | 0.074 | /
### IV-C Visualized Results
We present some visualized results of our proposed method in Figure 5; they
are close to the ground truth.
Figure 5: Visualized results of our proposed method. From top to bottom, there
are five groups, organized as: images, our results and the ground-truth.
### IV-D Comparison with Other Methods
We conduct the saliency detection task in a weakly-supervised way. Several
other weakly-supervised methods exist; the differences in their settings are
presented in Table II. Our method requires less extra information than these
existing weakly-supervised methods. In addition, we apply the refinement
process to optimize the saliency maps, instead of the commonly used CRF.
TABLE II: Comparison of our setting with other methods. Seed means using unsupervised saliency maps as the initial seeds. Pixel means applying pixel-wised supervision. CRF means using conditional random fields as the post-processing step. Pseudo means adopting pseudo labels. Setting | ASMO [12] | WSS [13] | SOSD [16] | Ours
---|---|---|---|---
Label | image tag | image tag | subitizing+pixel | subitizing
Seed | ✓ | $\times$ | $\times$ | $\times$
Pixel | $\times$ | $\times$ | ✓ | $\times$
Pseudo | ✓ | ✓ | $\times$ | $\times$
CRF | ✓ | ✓ | ✓ | $\times$
We compare our results with eight state-of-the-art methods, including two
unsupervised ones: BSCA [46] and HS [43]; two weakly-supervised ones using
image-label supervision: ASMO [12] and WSS [13]; and four fully-supervised
ones: SOSD [16], LEGS [47], DCL [42] and MSWS [14].
As shown in Table I, our proposed method outperforms existing unsupervised
methods by a considerable margin. Compared to weakly-supervised methods with
image-label supervision, our method achieves better performance on all
benchmarks. This demonstrates that the subitizing supervision helps boost the
saliency detection task. In addition, our method compares favorably against
some fully-supervised counterparts. Note that on the DUT-OMRON dataset, our
method obtains more precise results than the fully-supervised methods. Since
the masks of the DUT-OMRON dataset are complex in appearance, sometimes with
holes, this reveals that our method is capable of handling difficult
situations. Compared to SOSD [16], which utilized additional subitizing
information, our method extracts more valid information from the subitizing
supervision. Moreover, our method achieves comparable results with MSWS [14],
which applied multi-source weak supervision, including subitizing, image
labels and captioning.
TABLE III: Comparison of our results with two weakly-supervised methods (ASMO [12] and WSS [13] using image-tag supervision) and a fully-supervised method SOSD [16] additionally using subitizing information in terms of S-measure (larger is better). Methods | ASMO[12] | WSS[13] | SOSD[16] | Ours
---|---|---|---|---
ECSSD | 0.827 | 0.829 | 0.837 | 0.860
DUT | 0.736 | 0.803 | 0.816 | 0.832
Pascal-S | 0.702 | 0.815 | 0.742 | 0.854
The PR curves on the ECSSD, HKU-IS and DUT-OMRON datasets are presented in
Figure 4. Our method consistently outperforms other unsupervised and
weakly-supervised methods. It is also better than some fully-supervised
methods such as SOSD [16], LEGS [47] and DCL [42], except on the ECSSD
dataset, where ours is very close to the result of DCL. We also evaluate these
methods on S-measure. The results are shown in Table III. They reveal that our
method generates saliency maps of higher structural similarity to the
ground-truth masks. The qualitative result is shown in Figure 6. The first two
rows show that our method provides clear separation between multiple objects.
The next four rows show that our results maintain the complete appearance
of the salient objects. Moreover, we generate saliency maps with clear
boundaries, as shown in the last three rows.
Figure 6: Qualitative comparison with other saliency detection methods.
Unsupervised methods, weakly-supervised methods and fully-supervised methods
are placed from left to right. Among those weakly-supervised methods, our
proposed method produces saliency maps closest to the ground-truth masks.
Figure 7: Comparison between different iterations. From top to bottom, they
are original images, saliency maps generated with Grad-CAM, and saliency maps
after 50 iterations. Saliency maps after 50 iterations cover larger activation
area belonging to the salient objects. The color code on the left represents
the degree of saliency.
The superior performance of our proposed method confirms that object
subitizing generalizes better to the saliency detection task than image-level
supervision. We also conduct extensive experiments to validate the performance
of each component in our method.
## V Ablation Study
In this section, we discuss the advantage of subitizing supervisions over
image-level supervisions. We also evaluate the effectiveness of the Saliency
Updating Module and the refinement process.
### V-A The Advantage of Subitizing Supervisions
In this subsection, we aim to reveal the advantage of subitizing supervision
over image-tag supervision. The saliency maps generated by our method are
compared against those from ASMO [12] with image-label supervision, as
presented in Figure 8. The first two rows reveal that the subitizing
supervision helps recognize the border between multiple objects. The last two
rows indicate that our method captures the whole regions of the salient
objects, while methods supervised with image tags leave out some parts of the
salient objects. In addition, we train our network with image-tag supervision;
these results are also processed with the refinement module. The performance
with different training data is presented in Table IV. The
subitizing-supervised framework performs better than the image-tag-supervised
framework, which confirms the advantage of subitizing supervision.
Figure 8: Comparison between our method and ASMO [12], a weakly-supervised method using image-tag supervision. The results reveal the superiority of subitizing supervision. TABLE IV: The performance of our framework trained with image tags and subitizing, respectively. The better ones are marked in bold. Dataset | Metric | w/ image-tag | w/ subitizing
---|---|---|---
ECSSD | $F_{\beta}\uparrow$ | 0.825 | 0.858
MAE$\downarrow$ | 0.110 | 0.108
DUT-OMRON | $F_{\beta}\uparrow$ | 0.745 | 0.778
MAE$\downarrow$ | 0.103 | 0.083
### V-B The Effect of Saliency Updating Module
In this subsection, we aim to evaluate the effect of the Saliency Updating
Module. As shown in Figure 7, the results after updating are more complete in
appearance, while those without any updating only focus on a limited but
notable region due to the properties of neural networks. With the SUM module,
the network captures more parts within the semantic affinity. In addition, on
the DUT-OMRON dataset, the $F_{\beta}$ and MAE measures of saliency
predictions after 10 and 50 iterations with our SUM module are 0.638/0.252 and
0.704/0.139, respectively. It reveals that the SUM module helps boost the
performance of saliency detection.
### V-C The Effect of Refinement Process
In order to obtain promising results, we adopt the refinement process to
optimize the boundaries of saliency maps. As CRF is the most popular technique
for refining segmentation results, we also apply CRF to the coarse maps and
evaluate the outputs. The results on the ECSSD, MSRA-B and Pascal-S datasets
are presented in Table V. The refinement process contributes substantially to
the recognition of salient objects, and it achieves better optimization
results than CRF. In addition, to evaluate the effectiveness of the refinement
module on other methods, the refinement process is applied to the unsupervised
results of BSCA [46]. As shown in Table VI, the refinement module improves
their performance, but the processed results are still worse than ours.
TABLE V: The performance with/without the refinement process. The saliency results with CRF are also presented. The best performance is marked in bold. Dataset | Metric | w/o post-pro. | w/ CRF | w/ ref.
---|---|---|---|---
ECSSD | $F_{\beta}\uparrow$ | 0.707 | 0.721 | 0.858
MAE$\downarrow$ | 0.197 | 0.185 | 0.108
MSRA-B | $F_{\beta}\uparrow$ | 0.731 | 0.782 | 0.897
MAE$\downarrow$ | 0.167 | 0.152 | 0.082
Pascal-S | $F_{\beta}\uparrow$ | 0.644 | 0.680 | 0.803
MAE$\downarrow$ | 0.206 | 0.191 | 0.131
TABLE VI: The performance of unsupervised results (BSCA [46]) processed by our refinement module. Dataset | Metric | BSCA[46] | BSCA w/ ref. | ours w/ ref.
---|---|---|---|---
ECSSD | $F_{\beta}\uparrow$ | 0.705 | 0.756 | 0.858
MAE$\downarrow$ | 0.183 | 0.140 | 0.108
DUT-O | $F_{\beta}\uparrow$ | 0.500 | 0.618 | 0.778
MAE$\downarrow$ | 0.196 | 0.134 | 0.083
## VI Conclusion
In this paper, we propose a novel method for the salient object detection task
with the subitizing supervision. We design a model with the Saliency
Subitizing Module and the Saliency Updating Module, which generates the
initial masks using subitizing information and iteratively refines the
generated saliency masks, respectively. Without any seeds from unsupervised
methods, our method outperforms other weakly-supervised methods and even
performs comparably to some fully-supervised methods.
## Acknowledgment
We thank the National Natural Science Foundation of China (61972157,
61902129), the Shanghai Pujiang Talent Program (19PJ1403100), the Economy and
Information Commission of Shanghai (XX-RGZN-01-19-6348), the National Key
Research and Development Program of China (No. 2019YFC1521104), and the
Science and Technology Commission of Shanghai Municipality Program (No.
18D1205903) for their support. Xin Tan is also supported by the Postgraduate
Studentship (Mainland Schemes) from City University of Hong Kong.
## References
* [1] X. Qin, S. He, Z. V. Zhang, M. Dehghan, and M. Jägersand, “Real-time salient closed boundary tracking using perceptual grouping and shape priors.” in _BMVC_ , 2017.
* [2] S. Goferman, L. Zelnik-Manor, and A. Tal, “Context-aware saliency detection,” _IEEE TPAMI_ , vol. 34, no. 10, pp. 1915–1926, 2011.
* [3] M.-M. Cheng, F.-L. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, “Repfinder: finding approximately repeated scene elements for image editing,” _TOG_ , vol. 29, no. 4, p. 83, 2010.
* [4] J. He, J. Feng, X. Liu, T. Cheng, T.-H. Lin, H. Chung, and S.-F. Chang, “Mobile product search with bag of hash bits and boundary reranking,” in _CVPR_ , 2012, pp. 3005–3012.
* [5] S. He, R. W. Lau, W. Liu, Z. Huang, and Q. Yang, “Supercnn: A superpixelwise convolutional neural network for salient object detection,” _IJCV_ , vol. 115, no. 3, pp. 330–344, 2015.
* [6] N. Liu and J. Han, “Dhsnet: Deep hierarchical saliency network for salient object detection,” in _CVPR_ , 2016, pp. 678–686.
* [7] Y. Zhuge, Y. Zeng, and H. Lu, “Deep embedding features for salient object detection,” in _AAAI_ , vol. 33, 2019, pp. 9340–9347.
* [8] L. Zhu, J. Chen, X. Hu, C.-W. Fu, X. Xu, J. Qin, and P.-A. Heng, “Aggregating attentional dilated features for salient object detection,” _IEEE TCSVT_ , 2019.
* [9] X. Hu, C.-W. Fu, L. Zhu, T. Wang, and P.-A. Heng, “Sac-net: spatial attenuation context for salient object detection,” _IEEE TCSVT_ , 2020.
* [10] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” _IJCV_ , vol. 88, no. 2, pp. 303–338, 2010.
* [11] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in _ECCV_ , 2014, pp. 740–755.
* [12] G. Li, Y. Xie, and L. Lin, “Weakly supervised salient object detection using image labels,” in _AAAI_ , 2018.
* [13] L. Wang, H. Lu, Y. Wang, M. Feng, D. Wang, B. Yin, and X. Ruan, “Learning to detect salient objects with image-level supervision,” in _CVPR_ , 2017, pp. 136–145.
* [14] Y. Zeng, Y. Zhuge, H. Lu, L. Zhang, M. Qian, and Y. Yu, “Multi-source weak supervision for saliency detection,” in _CVPR_ , 2019, pp. 6074–6083.
* [15] J. Zhang, S. Ma, M. Sameki, S. Sclaroff, M. Betke, Z. Lin, X. Shen, B. Price, and R. Mech, “Salient object subitizing,” _IJCV_ , 2017.
* [16] S. He, J. Jiao, X. Zhang, G. Han, and R. W. Lau, “Delving into salient object subitizing and detection,” in _ICCV_ , 2017, pp. 1059–1067.
* [17] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in _CVPR_ , 2016, pp. 2921–2929.
* [18] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in _ICCV_ , 2017, pp. 618–626.
* [19] S. Woo, J. Park, J.-Y. Lee, and I. So Kweon, “Cbam: Convolutional block attention module,” in _ECCV_ , 2018, pp. 3–19.
* [20] A. Kolesnikov and C. H. Lampert, “Seed, expand and constrain: Three principles for weakly-supervised image segmentation,” in _ECCV_ , 2016, pp. 695–711.
* [21] G. Li and Y. Yu, “Visual saliency based on multiscale deep features,” in _CVPR_ , 2015, pp. 5455–5463.
* [22] R. Zhao, W. Ouyang, H. Li, and X. Wang, “Saliency detection by multi-context deep learning,” in _CVPR_ , 2015, pp. 1265–1274.
* [23] Y. Yuan, C. Li, J. Kim, W. Cai, and D. D. Feng, “Dense and sparse labeling with multidimensional features for saliency detection,” _IEEE TCSVT_ , 2016.
* [24] W. Wang, S. Zhao, J. Shen, S. C. Hoi, and A. Borji, “Salient object detection with pyramid attention and salient edges,” in _CVPR_ , 2019, pp. 1448–1457.
* [25] H. Qibin, C. Ming-Ming, H. Xiaowei, B. Ali, T. Zhuowen, and H. S. T. Philip, “Deeply supervised salient object detection with short connections,” _IEEE TPAMI_ , vol. 41, no. 4, pp. 815–828, 2019.
* [26] Z. Tu, Y. Ma, C. Li, J. Tang, and B. Luo, “Edge-guided non-local fully convolutional network for salient object detection,” _IEEE TCSVT_ , 2020.
* [27] Z. Zhou, Z. Wang, H. Lu, S. Wang, and M. Sun, “Multi-type self-attention guided degraded saliency detection.” in _AAAI_ , 2020, pp. 13 082–13 089.
* [28] J. Zhang, S. Ma, M. Sameki, S. Sclaroff, M. Betke, Z. Lin, X. Shen, B. Price, and R. Mech, “Salient object subitizing,” in _CVPR_ , 2015, pp. 4045–4054.
* [29] E. Lu, W. Xie, and A. Zisserman, “Class-agnostic counting,” in _ACCV_ , 2018, pp. 669–684.
* [30] M. Amirul Islam, M. Kalash, and N. D. Bruce, “Revisiting salient object detection: Simultaneous detection, ranking, and subitizing of multiple salient objects,” in _CVPR_ , 2018, pp. 7142–7150.
* [31] K. Li, Z. Wu, K.-C. Peng, J. Ernst, and Y. Fu, “Tell me where to look: Guided attention inference network,” in _CVPR_ , 2018, pp. 9215–9223.
* [32] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. Torr, “Conditional random fields as recurrent neural networks,” in _ICCV_ , 2015, pp. 1529–1537.
* [33] C. Chen, X. Sun, Y. Hua, J. Dong, and H. Xv, “Learning deep relations to promote saliency detection.” in _AAAI_ , 2020, pp. 10 510–10 517.
* [34] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _CVPR_ , 2016, pp. 770–778.
* [35] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in _CVPR_ , 2009, pp. 248–255.
* [36] Y. Wei, J. Feng, X. Liang, M.-M. Cheng, Y. Zhao, and S. Yan, “Object region mining with adversarial erasing: A simple classification to semantic segmentation approach,” in _CVPR_ , 2017, pp. 1568–1576.
* [37] X. Li, L. Zhao, L. Wei, M.-H. Yang, F. Wu, Y. Zhuang, H. Ling, and J. Wang, “Deepsaliency: Multi-task deep neural network model for salient object detection,” _IEEE TIP_ , vol. 25, no. 8, pp. 3919–3930, 2016.
* [38] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, “Frequency-tuned salient region detection,” in _CVPR_ , 2009, pp. 1597–1604.
* [39] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, “Sun database: Large-scale scene recognition from abbey to zoo,” in _CVPR_ , 2010, pp. 3485–3492.
* [40] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille, “The secrets of salient object segmentation,” in _CVPR_ , 2014, pp. 280–287.
* [41] J. Shi, Q. Yan, L. Xu, and J. Jia, “Hierarchical image saliency detection on extended cssd,” _IEEE TPAMI_ , vol. 38, no. 4, pp. 717–729, 2015.
* [42] G. Li and Y. Yu, “Deep contrast learning for salient object detection,” in _CVPR_ , 2016, pp. 478–487.
* [43] C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, “Saliency detection via graph-based manifold ranking,” in _CVPR_ , 2013, pp. 3166–3173.
* [44] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li, “Salient object detection: A discriminative regional feature integration approach,” in _CVPR_ , 2013, pp. 2083–2090.
* [45] D.-P. Fan, M.-M. Cheng, Y. Liu, T. Li, and A. Borji, “Structure-measure: A new way to evaluate foreground maps,” in _ICCV_ , 2017, pp. 4548–4557.
* [46] Y. Qin, H. Lu, Y. Xu, and H. Wang, “Saliency detection via cellular automata,” in _CVPR_ , 2015, pp. 110–119.
* [47] L. Wang, H. Lu, X. Ruan, and M.-H. Yang, “Deep networks for saliency detection via local estimation and global search,” in _CVPR_ , 2015, pp. 3183–3192.
# First-Order Methods for Convex Optimization
Pavel Dvurechensky Weierstrass Institute for Applied Analysis and
Stochastics, Mohrenstr. 39, 10117 Berlin, Germany pavel.dvurechensky@wias-
berlin.de Shimrit Shtern Faculty of Industrial Engineering and Management,
Technion - Israel Institute of Technology, Haifa, Israel
[email protected] Mathias Staudigl Maastricht University, Department of
Data Science and Knowledge Engineering (DKE) and Mathematics Centre Maastricht
(MCM), Paul-Henri Spaaklaan 1, 6229 EN Maastricht, The Netherlands
[email protected]
###### Abstract
First-order methods for solving convex optimization problems have been at the
forefront of mathematical optimization in the last 20 years. The rapid
development of this important class of algorithms is motivated by the success
stories reported in various applications, including most importantly machine
learning, signal processing, imaging and control theory. First-order methods
have the potential to provide low accuracy solutions at low computational
complexity which makes them an attractive set of tools in large-scale
optimization problems. In this survey we cover a number of key developments in
gradient-based optimization methods. This includes non-Euclidean extensions of
the classical proximal gradient method, and its accelerated versions.
Additionally we survey recent developments within the class of projection-free
methods, and proximal versions of primal-dual schemes. We give complete proofs
for various key results, and highlight the unifying aspects of several
optimization algorithms.
###### keywords:
Convex Optimization, Composite Optimization, First-Order Methods, Numerical
Algorithms, Convergence Rate, Proximal Mapping, Proximity Operator, Bregman
Divergence.
###### MSC:
[2010] 90C25 , 90C30 , 90C06 , 68Q25 , 65Y20 , 68W40
## 1 Introduction
The traditional standard in convex optimization was to translate a problem
into a conic program and solve it using a primal-dual interior point method
(IPM). The monograph [1] was instrumental in setting this standard. The
primal-dual formulation is a mathematically elegant and powerful approach as
these conic problems can then be solved to high accuracy when the dimension of
the problem is of moderate size. This philosophy culminated in the
development of a robust technology for solving convex optimization problems
which is nowadays the computational backbone of many specialized solution
packages like MOSEK [2], or SeDuMi [3]. However, in general, the iteration
costs of interior point methods grow non-linearly with the problem’s
dimension. As a result, as the dimension $n$ of optimization problems grows,
off-the-shelf interior point methods eventually become impractical. As an
illustration, the computational complexity of a single step of many
standardized IPMs scales like $n^{3}$, corresponding roughly to the complexity
of inverting an $n\times n$ matrix. This means that for already quite small
problems of size like $n=10^{2}$, we would need roughly $10^{6}$ arithmetic
operations just to compute a single iterate. From a practical viewpoint, such
a scaling is not acceptable. An alternative solution approach, particularly
attractive for such "large-scale" problems, are _first-order methods_ (FOMs).
These are iterative schemes with computationally cheap iterations usually
known to yield low-precision solutions within reasonable computation time. The
success-story of FOMs went hand-in-hand with the fast progresses made in data
science, analytics and machine learning. In such data-driven optimization
problems, the trade-off between fast iterations and low accuracy is
particularly pronounced, as these problems usually feature high-dimensional
decision variables. In these application domains precision is usually
considered to be a subordinate goal because of the inherent randomness of the
problem data, which makes it unreasonable to minimize with accuracy below the
statistical error.
The development of first-order methods for convex optimization problems is
still a very vibrant field, with a lot of stimulus from the already mentioned
applications in machine learning, statistics, optimal control, signal
processing, imaging, and many more, see e.g. review papers on optimization for
machine learning [4, 5, 6]. Naturally, any attempt to survey this lively
scientific field is doomed from the beginning to fail if one is not willing to
restrict the topics covered. Hence, in this survey we have tried to give a
largely self-contained and concise summary of some important families of FOMs
which we believe have had a lasting impact on the modern perspective of
continuous optimization. Before we give an outline of what is covered in this
survey, it is therefore only fair to mention explicitly what is NOT covered in
the pages to come. One major restriction we imposed on ourselves is the
concentration on _deterministic_ optimization algorithms. This is indeed a
significant cut in terms of topics, since the field of stochastic optimization
and randomized algorithms has been particularly at the forefront of recent
progress. Nonetheless, we made this cut on purpose, since most developments
within stochastic optimization algorithms are based on deterministic
counterparts, and in many cases one can think of a deterministic algorithm as
the mean-field equivalent of a stochastic optimization technique. As a
well-known example, we can mention the celebrated stochastic approximation
theory initiated by Robbins and Monro [7],
celebrated stochastic approximation theory initiated by Robbins and Monro [7],
with its deep connection to deterministic gradient descent. See [8, 9, 10],
for classical references from the point of view of systems theory and
optimization, and [11] for its deep connection with deterministic dynamical
systems. This link has gained significant relevance in various stochastic
optimization models recently [12, 13, 14, 15]. Excellent references on
stochastic optimization are [16] and [17]. Furthermore, we excluded the very
important class of alternating minimization methods, such as block-coordinate
descent, and variations of the same idea. These methods are fundamental in
distributed optimization, and lay the foundations for the now heavily
investigated randomized algorithms, exploiting the block-structure of the
model to achieve acceleration and reduce the overall computational complexity.
Section 14 in the beautiful book by Amir Beck [18] gives a thorough account of
these methods and we urge the interested reader to start reading there.
So, what is it that we actually do in this article? Four seemingly different
optimization algorithms are surveyed, all of which belong now to the standard
toolkit of mathematical programmers. After introducing the (standard) notation
that will be used in this survey, we give a precise formulation of the model
problem for which modern convex optimization algorithms are developed. In
particular, we focus on the general composite convex optimization model,
including a smooth and one non-smooth term. This model is rich enough to
capture a significant class of convex optimization problems. Non-smoothness is
an important feature of the model, as it allows us to incorporate constraints
via penalty and barrier functions. An efficient way to deal with non-
smoothness is provided by the use of _proximal operators_ , a key
methodological contribution born within convex analysis (see [19] for an
historical overview). Section 3 introduces the general non-Euclidean proximal
setup, which describes the mathematical framework within which the celebrated
_Mirror Descent_ and _Bregman proximal gradient methods_ are analyzed
nowadays. This set of tools has been extremely popular in online learning and
convex optimization [20, 21, 22]. The main idea behind this technology is to
exploit favorable structure in the problem’s geometry to boost the practical
performance of gradient-based methods. The proximal revolution has also
influenced the further development of classical primal-dual optimization
methods based on augmented Lagrangians. We review proximal variants of the
celebrated Alternating Direction Method of Multipliers (ADMM) in Section 4. We
then move on to give an in-depth presentation of projection-free optimization
methods based on linear minimization oracles, the classical _Conditional
Gradient_ (CG) (a.k.a. Frank-Wolfe) method and its recent variants. CG gained
extreme popularity in large-scale optimization, mainly because of its good
scalability properties and small iteration costs. Conceptually, it is an
interesting optimization method, as it allows us to solve convex programming
problems with complicated geometry on which proximal operators are not easy to
evaluate. This, in fact, applies to many important domains, like the
Spectrahedron, or domains defined via intersections of several half spaces. CG
is also relevant when the iterates should preserve structural features of the
desired solution, like sparsity. Section 5 gives a comprehensive account of
this versatile tool. All the methods we discussed so far generally provide
sublinear convergence guarantees in terms of function values with iteration
complexity of $O(1/\varepsilon)$. In his influential paper [23], Nesterov
published an optimal method with iteration complexity of
$O(1/\sqrt{\varepsilon})$ to reach an $\varepsilon$-optimal solution. This was
the starting point for the development of acceleration techniques for given
FOMs. Section 6 summarizes the recent developments in this field. While
writing this survey, we tried to give a holistic presentation of the main
methods in use. At various stages in the survey, we establish connections, if
not equivalences, between various methods. For many of the key results we
provide self-contained proofs to illustrate the main lines of thought in
developing FOMs for convex optimization problems.
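To make the gap between the $O(1/\varepsilon)$ and $O(1/\sqrt{\varepsilon})$ rates concrete, here is a minimal sketch comparing plain gradient descent with Nesterov's accelerated method on an ill-conditioned quadratic. The problem instance, step sizes, and function names are illustrative, not taken from the survey.

```python
import numpy as np

def grad_descent(grad, x0, L, iters):
    """Plain gradient descent with constant step size 1/L."""
    x = x0.copy()
    for _ in range(iters):
        x = x - grad(x) / L
    return x

def nesterov(grad, x0, L, iters):
    """Nesterov's accelerated gradient method for smooth convex f:
    take the gradient step from an extrapolated point y_k."""
    x, x_prev, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum extrapolation
        x_prev, x = x, y - grad(y) / L
        t = t_next
    return x

# Ill-conditioned quadratic f(x) = 0.5 x^T A x with minimiser x* = 0.
A = np.diag(np.linspace(1.0, 100.0, 20))
f = lambda x: 0.5 * x @ (A @ x)
grad = lambda x: A @ x
x0 = np.ones(20)
L = 100.0   # largest eigenvalue of A = Lipschitz constant of the gradient
```

On such quadratics the accelerated iterates reach a markedly lower objective value than plain gradient descent after the same number of gradient evaluations.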
##### Notation
We use standard notation and concepts from convex and variational analysis,
which, unless otherwise specified, can all be found in the monographs [19, 24,
25]. Throughout this article, we let $\mathsf{V}$ represent a finite-
dimensional vector space of dimension $n$ with norm $\lVert\cdot\rVert$. We
will write $\mathsf{V}^{\ast}$ for the (algebraic) dual space of $\mathsf{V}$
with duality pairing $\langle y,x\rangle$ between $y\in\mathsf{V}^{\ast}$ and
$x\in\mathsf{V}$. The dual norm of $y\in\mathsf{V}^{\ast}$ is $\lVert
y\rVert_{\ast}=\sup\\{\langle y,x\rangle|\quad\lVert x\rVert\leq 1\\}$. The
set of proper lower semi-continuous functions
$f:\mathsf{V}\to(-\infty,\infty]$ is denoted as $\Gamma_{0}(\mathsf{V})$. The
(effective) domain of a function $f\in\Gamma_{0}(\mathsf{V})$ is defined as
$\operatorname{dom}f=\\{x\in\mathsf{V}|f(x)<\infty\\}$. For a given
continuously differentiable function
$f:\mathsf{C}\subseteq\mathsf{V}\to\mathbb{R}$ we denote its gradient vector
$\nabla f(x_{1},\ldots,x_{n})=\left(\frac{\partial f}{\partial
x_{1}},\ldots,\frac{\partial f}{\partial x_{n}}\right)^{\top}.$
The subdifferential at a point $x\in\mathsf{C}\subseteq\mathsf{V}$ of a convex
function $f:\mathsf{V}\to\mathbb{R}\cup\\{+\infty\\}$ is denoted as
$\displaystyle\partial f(x)=\\{p\in\mathsf{V}^{\ast}|f(y)\geq f(x)+\langle
p,y-x\rangle\quad\forall y\in\mathsf{V}\\}.$
The elements of $\partial f(x)$ are called subgradients. The subdifferential
is the set-valued mapping $\partial f:\mathsf{V}\to 2^{\mathsf{V}^{\ast}}$.
As a notational convention, we write matrices in bold capital fonts. Given
some set $\mathsf{X}\subseteq\mathsf{V}$, denote its relative interior as
$\operatorname{relint}(\mathsf{X})$. Recall that, if the dimension of the set
$\mathsf{X}$ agrees with the dimension of the ground space $\mathsf{V}$, then
the relative interior coincides with the topological interior, which we denote
as $\operatorname{int}(\mathsf{X})$. Hence, the two notions differ only in
situations where $\mathsf{X}$ is contained in a lower-dimensional affine
subspace. We denote the closure as $\operatorname{cl}(\mathsf{X})$. The
boundary of $\mathsf{X}$ is defined in the usual way as
$\operatorname{bd}(\mathsf{X})=\operatorname{cl}(\mathsf{X})\setminus\operatorname{int}(\mathsf{X})$.
## 2 Composite convex optimization
In this survey we focus on the generic optimization problem
$\min_{x\in\mathsf{X}}\\{\Psi(x)=f(x)+r(x)\\},$ (P)
where
* •
$\mathsf{X}\subseteq\mathsf{V}$ is a nonempty closed convex set embedded in a
finite-dimensional real vector space $\mathsf{V}$;
* •
$f(\cdot)$ is $L_{f}$-smooth, meaning that it is differentiable on $\mathsf{V}$
with an $L_{f}$-Lipschitz continuous gradient on $\mathsf{X}$:
$\lVert\nabla f(x)-\nabla f(x^{\prime})\rVert_{\ast}\leq L_{f}\lVert
x-x^{\prime}\rVert\qquad\forall x,x^{\prime}\in\mathsf{X}.$ (2.1)
* •
$r\in\Gamma_{0}(\mathsf{V})$ is $\mu$-strongly convex on $\mathsf{V}$ for
some $\mu\geq 0$ with respect to a norm $\lVert\cdot\rVert$ on $\mathsf{V}$.
This means that for all $x,y\in\operatorname{dom}r$, and any selection
$r^{\prime}(x)\in\partial r(x)$, we have
$r(y)\geq r(x)+\langle r^{\prime}(x),y-x\rangle+\frac{\mu}{2}\lVert
x-y\rVert^{2}.$
Finally, we restrict our attention to well-posed problem formulations.
###### Assumption 1.
$\operatorname{dom}r\cap\mathsf{X}\neq\varnothing$.
The most important examples of the function $r$ are the following:
* •
$r$ is an indicator function of a closed convex set $\mathsf{C}$ with
$\mathsf{C}\cap\mathsf{X}\neq\varnothing$:
$r(x)=\delta_{\mathsf{C}}(x):=\left\\{\begin{array}[]{cc}0&\text{ if
}x\in\mathsf{C},\\\ +\infty&\text{if }x\notin\mathsf{C}.\end{array}\right.$
(2.2)
* •
$r$ is a self-concordant barrier [1, 26] for a closed convex set
$\mathsf{C}\subset\mathsf{V}$ with $\mathsf{C}\cap\mathsf{X}\neq\varnothing$.
* •
$r$ is a nonsmooth convex function with relatively simple structure. For
example, it could be a norm regularization like the celebrated
$\ell_{1}$-regularizer
$r(x)=\left\\{\begin{array}[]{cc}\lVert x\rVert_{1}&\text{ if }\lVert
x\rVert_{1}\leq R,\\\ +\infty&\text{ else}\end{array}\right.$ (2.3)
This regularizer plays a fundamental role in high-dimensional statistics [27]
and signal processing [28, 29].
For characterizing solutions to our problem (P), define the _tangent cone_
associated with the closed convex set $\mathsf{X}\subseteq\mathsf{V}$ as
$\operatorname{\mathsf{TC}}_{\mathsf{X}}(x):=\left\\{\begin{array}[]{cc}\\{v=t(x^{\prime}-x)|x^{\prime}\in\mathsf{X},t\geq
0\\}\subseteq\mathsf{V}&\text{if }x\in\mathsf{X},\\\
\varnothing&\text{else.}\end{array}\right.$
and the _normal cone_ associated to the closed convex set $\mathsf{X}$ at
$x\in\mathsf{X}$ as the polar cone of
$\operatorname{\mathsf{TC}}_{\mathsf{X}}(x)$:
$\operatorname{\mathsf{NC}}_{\mathsf{X}}(x):=\left\\{\begin{array}[]{cc}\\{p\in\mathsf{V}^{\ast}|\sup_{v\in\operatorname{\mathsf{TC}}_{\mathsf{X}}(x)}\langle
p,v\rangle\leq 0\\}&\text{if }x\in\mathsf{X},\\\
\varnothing&\text{else.}\end{array}\right.$
We remark that
$\partial\delta_{\mathsf{X}}(x)=\operatorname{\mathsf{NC}}_{\mathsf{X}}(x)$
for all $x\in\mathsf{X}$.
Given the feasible set $\mathsf{X}\subseteq\mathsf{V}$, we denote the minimal
function value
$\Psi_{\min}(\mathsf{X})=\inf_{x\in\mathsf{X}}\Psi(x).$ (2.4)
In this survey we focus on solvable problems, which justifies the next
assumption.
###### Assumption 2.
$\mathsf{X}^{\ast}=\\{x\in\mathsf{X}|\Psi(x)=\Psi_{\min}(\mathsf{X})\\}\neq\varnothing.$
Given the standing hypotheses on the functions $f$ and $r$, it is easy to see
that $\mathsf{X}^{\ast}$ is always a closed convex set. Moreover, if $\mu>0$,
then problem (P) is _strongly convex_ , and so $\mathsf{X}^{\ast}$ is a
singleton.
Given the structural assumptions of the model problem (P), the sum rule of
subgradients implies that all points in the solution set $\mathsf{X}^{\ast}$
satisfy the monotone inclusion (Fermat’s rule)
$0\in\nabla f(x^{\ast})+\partial
r(x^{\ast})+\operatorname{\mathsf{NC}}_{\mathsf{X}}(x^{\ast}).$ (2.5)
This means that there exists $\xi\in\partial r(x^{\ast})$ such that
$\langle\nabla f(x^{\ast})+\xi,v\rangle\geq 0\qquad\forall
v\in\operatorname{\mathsf{TC}}_{\mathsf{X}}(x^{\ast}).$ (2.6)
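The optimality condition can be checked on a toy instance. The following Python sketch (our own illustration, not part of the survey's numerical material) minimizes $f(x)=(x-2)^{2}$ over $\mathsf{X}=[0,1]$ with $r=0$ and verifies that $-\nabla f(x^{\ast})$ lies in the normal cone at the solution $x^{\ast}=1$:

```python
# Minimize f(x) = (x - 2)^2 over X = [0, 1] (r = 0); a made-up toy instance.
f_grad = lambda x: 2.0 * (x - 2.0)
proj = lambda x: min(max(x, 0.0), 1.0)   # Euclidean projection onto [0, 1]

# Solve by projected gradient iteration (a special case of PGM, Section 3).
x = 0.5
for _ in range(200):
    x = proj(x - 0.1 * f_grad(x))

# Fermat's rule (2.5): 0 in grad f(x*) + NC_X(x*).  At the right endpoint
# x* = 1, NC_X(1) = [0, inf), so we need -f'(x*) >= 0.
assert abs(x - 1.0) < 1e-8
assert -f_grad(x) >= 0.0
# Equivalently, x* is a fixed point of the projected gradient map:
assert abs(proj(x - 0.1 * f_grad(x)) - x) < 1e-12
print("x* =", x, " -f'(x*) =", -f_grad(x))
```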
The structured composite optimization problem (P) has attracted a lot of
interest in convex programming over the last 20 years motivated by a number of
important applications. This led to a rich interplay between convex
programming on the one hand and machine learning and signal/image processing
on the other hand. Indeed, several work-horse models in these applied fields
are of the composite type
$\Psi(x)=g({\mathbf{A}}x)+r(x)$ (2.7)
where $g:\mathsf{E}\to\mathbb{R}$ is a smooth function defined on a finite-
dimensional vector space $\mathsf{E}$ (usually of lower dimension than
$\mathsf{V}$), and ${\mathbf{A}}\in\mathsf{BL}(\mathsf{V},\mathsf{E})$ is a
bounded linear operator mapping points $x\in\mathsf{V}$ to elements
${\mathbf{A}}x\in\mathsf{E}$. Convexity allows us to switch between primal and
dual formulations freely, so that the above problem can be equivalently
considered as a convex-concave minimax problem
$\min_{x\in\mathsf{X}}\max_{y\in\mathsf{E}}\\{r(x)+\langle{\mathbf{A}}x,y\rangle-g^{\ast}(y)\\}$
(2.8)
Such minimax problems have been of key importance in signal processing and
machine learning [30, 21, 22], game theory [31], decomposition methods [32]
and its very recent innovation around generative adversarial networks [33].
Another canonical class of optimization problems in machine learning is the
finite-sum model
$\Psi(x)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x)+r(x),$ (2.9)
which comes from supervised learning, where $f_{i}(x)$ corresponds to the loss
incurred on the $i$-th data sample using a hypothesis parameterized by the
decision variable $x$. In practice, $N$ is an extremely large number, as it
corresponds to the size of the data set. The recent literature on variance
reduction techniques and distributed optimization is very active in making
such large scale optimization problems tractable. Surveys on the latest
developments in these fields can be found in [34] and the comprehensive
textbook by Lan [35].
## 3 The Proximal Gradient Method
### 3.1 Motivation
We start our survey on first-order methods for solving convex
optimization problems with perhaps the most basic optimization method known to
every student who took a course in mathematical programming: the gradient
projection scheme. In the context of the composite optimization problem (P), a
classical and very powerful idea is to construct numerical optimization
methods by exploiting problem structure. Following this philosophy, we
determine the position of the next iterate by minimizing the sum of the
linearization of the smooth part, the non-smooth part
$r\in\Gamma_{0}(\mathsf{V})$, and a quadratic regularization term with weight
$\gamma>0$:
$x^{+}(\gamma)=\operatorname*{argmin}_{u\in\mathsf{X}}\\{f(x)+\langle\nabla
f(x),u-x\rangle+r(u)+\frac{1}{2\gamma}\lVert u-x\rVert^{2}_{2}\\}.$ (3.1)
Disregarding terms which do not influence the computation of the solution of
this strongly convex minimization problem, and absorbing the set constraint
into the non-smooth part by defining $\phi(x)=r(x)+\delta_{\mathsf{X}}(x)$, we
see that (3.1) can be equivalently written as
$x^{+}(\gamma)=\operatorname*{argmin}_{u\in\mathsf{V}}\left\\{\gamma\phi(u)+\frac{1}{2}\lVert
u-(x-\gamma\nabla f(x))\rVert^{2}_{2}\right\\}.$ (3.2)
The trained reader will immediately recognize the geometric principles involved
in this minimization routine. Indeed, if $r$ were constant on $\mathsf{X}$
(say $0$ for concreteness), then the rule (3.2) would be nothing else than the
Euclidean projection of the point $x-\gamma\nabla f(x)$ onto the
set $\mathsf{X}$. In this case, the minimization routine returns the classical
projected gradient step $x^{+}(\gamma)=P_{\mathsf{X}}(x-\gamma\nabla f(x)).$
Iterating the map
$T_{\gamma}:=P_{\mathsf{X}}\circ(\operatorname{Id}-\gamma\nabla f)$ generates
the Gradient projection method, which can be traced back to the 1960s (see
[36] for the history of this method). A new obstacle arises in cases where the
non-smooth function $r$ is non-trivial over the relevant domain $\mathsf{X}$.
A fundamental idea, going back to Moreau [37], is to define the _proximity
operator_ $\operatorname{Prox}_{\phi}:\mathsf{V}\to\mathsf{V}$ associated with
a function $\phi\in\Gamma_{0}(\mathsf{V})$ as
$\operatorname{Prox}_{\phi}(x):=\operatorname*{argmin}_{u\in\mathsf{V}}\left\\{\phi(u)+\frac{1}{2}\lVert
u-x\rVert^{2}_{2}\right\\}.$ (3.3)
(The repository http://proximity-operator.net/index.html provides codes and
explicit expressions for proximity operators of many standard functions; a
useful MATLAB implementation of proximal methods is described in [38].)
In terms of the proximity-operator, the minimization step (3.2) becomes
$x^{+}(\gamma)=T_{\gamma}(x):=\operatorname{Prox}_{\gamma\phi}(x-\gamma\nabla
f(x)).$ (3.4)
Iterating the map $T_{\gamma}$ yields a new and more general method, known in
the literature as the proximal gradient method (PGM).
The Proximal Gradient Method (PGM)
Input: pick $x^{0}\in\mathsf{X}.$
General step: For $k=0,1,\ldots$ do:
pick $\gamma_{k}>0$.
set $x^{k+1}=\operatorname{Prox}_{\gamma_{k}\phi}\left(x^{k}-\gamma_{k}\nabla
f(x^{k})\right)$.
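To make the general step concrete, the following Python sketch (our own illustration; the data and parameters are made up) runs PGM on the lasso-type problem $\min_{x}\frac{1}{2}\lVert Ax-b\rVert^{2}+\lambda\lVert x\rVert_{1}$, for which the proximity operator of $\gamma\lambda\lVert\cdot\rVert_{1}$ is componentwise soft-thresholding:

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pgm_lasso(A, b, lam, num_iters=500):
    """PGM (a.k.a. ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    gamma = 1.0 / L                     # constant step size gamma = 1/L_f
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)        # gradient of the smooth part
        x = soft_threshold(x - gamma * grad, gamma * lam)  # prox-gradient step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true
x_hat = pgm_lasso(A, b, lam=0.1)

obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.sum(np.abs(x))
assert obj(x_hat) <= obj(np.zeros(10))  # PGM decreases Psi from the start x0=0
```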
PGM is a very powerful method which has received enormous interest in
optimization and its applications. For a survey in the context of signal
processing we refer the reader to [39]. General surveys on proximal operators
have been given by Parikh and Boyd [40] and by Beck [18]; many more references
can be found therein.
The special case when $f=0$ is known as the _proximal point method_ , which
reads explicitly as
$x^{k+1}=\operatorname{Prox}_{\gamma_{k}\phi}(x^{k})=\operatorname*{argmin}_{u\in\mathsf{V}}\\{\phi(u)+\frac{1}{2\gamma_{k}}\lVert
u-x^{k}\rVert^{2}\\}.$ (3.5)
The value function
$\phi_{\gamma}(x)=\inf_{u}\\{\phi(u)+\frac{1}{2\gamma}\lVert u-x\rVert^{2}\\}$
is called the _Moreau envelope_ of the function $\phi$, and is an important
smoothing and regularization tool, frequently employed in numerical analysis.
Indeed, for a function $\phi\in\Gamma_{0}(\mathsf{V})$, its Moreau envelope is
finite everywhere, convex and has $\gamma^{-1}$-Lipschitz continuous gradient
on $\mathsf{V}$ given by
$\nabla\phi_{\gamma}(x)=\frac{1}{\gamma}(x-\operatorname{Prox}_{\gamma\phi}(x)).$
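As a concrete instance, the Moreau envelope of $\phi(x)=|x|$ is the Huber function. The following Python sketch (the example function is our choice, not from the source) checks the closed form and the gradient formula above against finite differences:

```python
import numpy as np

gamma = 0.5
phi = np.abs                                         # phi(x) = |x|
prox = lambda x: np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)  # Prox_{gamma*phi}

def envelope(x):
    """Moreau envelope phi_gamma(x) = min_u { |u| + (u - x)^2 / (2*gamma) }."""
    u = prox(x)
    return phi(u) + (u - x) ** 2 / (2 * gamma)

def huber(x):
    """Closed form: x^2/(2*gamma) for |x| <= gamma, |x| - gamma/2 otherwise."""
    return np.where(np.abs(x) <= gamma, x ** 2 / (2 * gamma), np.abs(x) - gamma / 2)

xs = np.linspace(-3, 3, 101)
assert np.allclose(envelope(xs), huber(xs))          # envelope = Huber function

# Gradient formula: grad phi_gamma(x) = (x - Prox_{gamma*phi}(x)) / gamma,
# checked against a central finite difference.
eps = 1e-6
num_grad = (envelope(xs + eps) - envelope(xs - eps)) / (2 * eps)
assert np.allclose(num_grad, (xs - prox(xs)) / gamma, atol=1e-5)
```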
### 3.2 Bregman Proximal Setup
The basic idea behind non-Euclidean extensions of PGM is to replace the squared
Euclidean distance $\frac{1}{2}\lVert u-x\rVert^{2}_{2}$ by a different
distance-like function tailored to the geometry of the feasible set
$\mathsf{X}\subseteq\mathsf{V}$. The non-Euclidean distance-like functions
used here are _Bregman divergences_. The transition from Euclidean to
non-Euclidean distance measures is motivated by their computational
flexibility and their potential for improving convergence properties in
specific application domains. In particular, the move to non-Euclidean
distance measures allows one to adapt the algorithm to the underlying
geometry, typically embodied explicitly in the set constraint $\mathsf{X}$;
see e.g. [41]. This not only can positively affect the per-iteration
complexity, but also leaves a footprint on the overall iteration complexity of
the method, as we will demonstrate in this section.
In the rest of this section, we assume that the set constraint $\mathsf{X}$ is
a closed convex set with nonempty relative interior
$\operatorname{relint}(\mathsf{X})$. The point of departure of Bregman
Proximal algorithms is to introduce a distance generating function
$h:\mathsf{V}\to(-\infty,\infty]$, which is a barrier-type mapping suitably
chosen to capture geometric features of the set $\mathsf{X}$.
###### Definition 3.1.
Let $\mathsf{X}$ be a compact convex subset of $\mathsf{V}$. We say that
$h\in\Gamma_{0}(\mathsf{V})$ is a _distance generating function (DGF)_ with
modulus $\alpha>0$ with respect to $\lVert\cdot\rVert$ on $\mathsf{X}$ if
1. 1.
Either $\operatorname{dom}h=\operatorname{relint}(\mathsf{X})$ or
$\operatorname{dom}h=\mathsf{X}$;
2. 2.
$h$ is differentiable over $\mathsf{X}^{\circ}=\\{x\in\mathsf{X}|\partial
h(x)\neq\varnothing\\}$.
3. 3.
$h$ is $\alpha$-strongly convex on $\mathsf{X}$ relative to
$\lVert\cdot\rVert$
$h(x^{\prime})\geq h(x)+\langle\nabla
h(x),x^{\prime}-x\rangle+\frac{\alpha}{2}\lVert x^{\prime}-x\rVert^{2}$ (3.6)
for all $x\in\mathsf{X}^{\circ}$ and all $x^{\prime}\in\mathsf{X}$.
We denote by $\mathcal{H}_{\alpha}(\mathsf{X})$ the set of DGFs on
$\mathsf{X}$.
Note that $\mathsf{X}^{\circ}$ contains the relative interior of $\mathsf{X}$
and, restricted to $\mathsf{X}^{\circ}$, $h$ is continuously differentiable
with $\partial h(x)=\\{\nabla h(x)\\}$. In many proximal settings we are
interested in DGFs which act as barriers on the feasible set $\mathsf{X}$.
Such DGFs are included in the case
$\operatorname{dom}h=\operatorname{relint}(\mathsf{X})$. Naturally, the
barrier properties of the function $h$ are captured by its scaling near
$\operatorname{bd}(\mathsf{X})$, usually encoded in terms of the notion of
_essential smoothness_ [42].
###### Definition 3.2 (Essential smoothness).
$h\in\mathcal{H}_{\alpha}(\mathsf{X})$ is _essentially smooth_ if for all
sequences $(x_{j})_{j\in\mathbb{N}}\subseteq\operatorname{relint}(\mathsf{X})$
with
$\lim_{j\to\infty}\operatorname{dist}(x_{j},\operatorname{bd}(\mathsf{X}))=0$,
we have $\lim_{j\to\infty}\lVert\nabla h(x_{j})\rVert_{\ast}=\infty$.
Given a DGF $h\in\mathcal{H}_{\alpha}(\mathsf{X})$, we define the _Bregman
divergence_ $D_{h}:\operatorname{dom}h\times\mathsf{X}^{\circ}\to\mathbb{R}$
induced by $h$ as
$D_{h}(u,x)=h(u)-h(x)-\langle\nabla h(x),u-x\rangle.$ (3.7)
Since $h\in\mathcal{H}_{\alpha}(\mathsf{X})$, it follows immediately that
$D_{h}(u,x)\geq\frac{\alpha}{2}\lVert u-x\rVert^{2}\qquad\forall
x\in\mathsf{X}^{\circ},u\in\operatorname{dom}h.$ (3.8)
Hence, Bregman divergences are zero on the main diagonal of
$\mathsf{X}^{\circ}\times\mathsf{X}^{\circ}$, but in general they are neither
symmetric nor do they satisfy a triangle inequality. This disqualifies them
from carrying the label of a metric, but they can still be interpreted as
distance measures on $\mathsf{X}^{\circ}$.
The convex conjugate $h^{\ast}(y)=\sup_{x\in\mathsf{V}}\\{\langle
x,y\rangle-h(x)\\}$ for a function $h\in\mathcal{H}_{\alpha}(\mathsf{X})$ is
known to be differentiable on $\mathsf{V}^{\ast}$ and
$\frac{1}{\alpha}$-Lipschitz smooth, i.e.
$h^{\ast}(y_{2})\leq h^{\ast}(y_{1})+\langle\nabla
h^{\ast}(y_{1}),y_{2}-y_{1}\rangle+\frac{1}{2\alpha}\lVert
y_{2}-y_{1}\rVert^{2}_{\ast}$ (3.9)
for all $y_{1},y_{2}\in\mathsf{V}^{\ast}$. In fact, Section 12H in [19] gives
us the following general result which is of fundamental importance for the
following approaches.
###### Proposition 3.3.
Let $\omega:\mathbb{R}^{n}\to\mathbb{R}\cup\\{+\infty\\}$ be a proper convex
and lower semi-continuous function. Consider the following statements:
1. (a)
$\omega$ is strongly convex with parameter $\alpha>0$
2. (b)
The subdifferential mapping $\partial\omega:\mathbb{R}^{n}\to
2^{\mathbb{R}^{n}}$ is strongly monotone with parameter $\alpha>0$:
$\langle u-v,x-y\rangle\geq\alpha\lVert
x-y\rVert^{2}\qquad\forall(x,u),(y,v)\in\operatorname{graph}(\partial\omega)$
(3.10)
3. (c)
The inverse map $(\partial\omega)^{-1}$ is single-valued and Lipschitz
continuous with modulus $\frac{1}{\alpha}$;
4. (d)
$\omega^{\ast}$ is finite and differentiable everywhere.
Then $(a)\Leftrightarrow(b)\Rightarrow(c)\Leftrightarrow(d)$.
Once we endow our set $\mathsf{X}$ with a Bregman divergence, the technology
generating a gradient method in this non-Euclidean setting is the _prox-
mapping_.
###### Definition 3.4 (Prox-Mapping).
Given $h\in\mathcal{H}_{\alpha}(\mathsf{X})$ and
$\phi\in\Gamma_{0}(\mathsf{V})$, define the _prox-mapping_ as
$\mathcal{P}^{h}_{\phi}(x,y):=\operatorname*{argmin}_{u\in\mathsf{X}}\\{\phi(u)+\langle
y,u-x\rangle+D_{h}(u,x)\\}.$ (3.11)
The prox-mapping takes as inputs a "primal-dual" pair
$(x,y)\in\mathsf{X}^{\circ}\times\mathsf{V}^{\ast}$ where $x$ is the current
iterate, and $y$ is a dual variable representing the signal we obtain on the
smooth part of the minimization problem (P). Various conditions on the well-
posedness of the prox-mapping have been stated in the literature. We will not
repeat them here, but rather refer to the recent survey [43].
It will be instructive to go over some standard examples of the Bregman
proximal setup. See also [44], [25], and [45].
###### Example 3.1 (Proximity Operator).
We begin by revisiting the Euclidean projection on some convex closed subset
$\mathsf{X}$ of the vector space $\mathsf{V}$. Letting $h(x)=\frac{1}{2}\lVert
x\rVert^{2}_{2}+\delta_{\mathsf{X}}(x)$ for $x\in\mathsf{V}$, we readily see
that $\mathsf{X}^{\circ}=\mathsf{X}=\operatorname{dom}(h)$. Moreover, for
$x\in\mathsf{X}$, the vector field $\nabla h(x)=x$ is a continuous selection
of $\partial h(x)$ for all $x\in\mathsf{X}$. Hence, the associated Bregman
divergence is $D_{h}(u,x)=\frac{1}{2}\lVert u-x\rVert^{2}_{2}$ for all
$u,x\in\mathsf{X}$. Given a function $\phi\in\Gamma_{0}(\mathsf{V})$, the
resulting prox-mapping reads as
$\mathcal{P}^{h}_{\phi}(x,y)=\operatorname*{argmin}_{u\in\mathsf{X}}\\{\phi(u)+\langle
y,u-x\rangle+\frac{1}{2}\lVert
u-x\rVert^{2}_{2}\\}=\operatorname{Prox}_{\phi+\delta_{\mathsf{X}}}(x-y)$
where $\operatorname{Prox}_{\phi}$ is the proximity operator defined in (3.3).
###### Example 3.2 (Entropic Regularization).
Let $\mathsf{X}=\\{x\in\mathbb{R}^{n}_{+}|\sum_{i=1}^{n}x_{i}=1\\}$ denote the
unit simplex in $\mathsf{V}=\mathbb{R}^{n}$. Define the function
$\psi:\mathbb{R}\to[0,\infty]$ as
$\psi(t):=\left\\{\begin{array}[]{ll}t\ln(t)-t&\text{if }t>0,\\\ 0&\text{if
}t=0,\\\ +\infty&\text{else}\end{array}\right.$
As DGF consider the _Boltzmann-Shannon entropy_
$h(x):=\sum_{i=1}^{n}\psi(x_{i})+\delta_{\\{x\in\mathsf{V}|\sum_{i=1}^{n}x_{i}=1\\}}$.
Endowing the ground space $\mathsf{V}$ with the $\ell_{1}$ norm, it can be
shown that $h\in\mathcal{H}_{1}(\mathsf{X})$ with
$\operatorname{dom}h=\mathsf{X}$ and
$\mathsf{X}^{\circ}=\\{x\in\mathbb{R}^{n}_{++}|\sum_{i=1}^{n}x_{i}=1\\}=\operatorname{relint}(\mathsf{X})$.
The resulting Bregman divergence is the _Kullback-Leibler divergence_
$D_{h}(u,x)=\sum_{i=1}^{n}u_{i}\ln\left(\frac{u_{i}}{x_{i}}\right)+\sum_{i=1}^{n}(x_{i}-u_{i}).$
For $\phi=0$ a standard calculation gives rise to the prox-mapping
$[\mathcal{P}^{h}_{\delta_{\mathsf{X}}}(x,y)]_{i}=\frac{x_{i}e^{-y_{i}}}{\sum_{j=1}^{n}x_{j}e^{-y_{j}}}\qquad
1\leq i\leq n,x\in\mathsf{X},y\in\mathsf{V}^{\ast}.$ (3.12)
This mapping plays a key role in optimization, where it is known as
exponentiated gradient descent [46, 47].
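The entropic prox-mapping can be checked numerically against the definition (3.11). The following Python sketch is our own illustration; note that with the sign convention of (3.11), where the dual signal enters through $+\langle y,u-x\rangle$, the minimizer comes out as $u_{i}\propto x_{i}e^{-y_{i}}$:

```python
import numpy as np

def kl(u, x):
    """Kullback-Leibler divergence D_h(u, x) for the Boltzmann-Shannon DGF."""
    return np.sum(u * np.log(u / x)) + np.sum(x - u)

def prox_map(x, y):
    """Entropic prox-mapping on the simplex: argmin_u <y, u-x> + D_h(u, x)."""
    w = x * np.exp(-y)          # minus sign: y enters (3.11) with a plus
    return w / w.sum()

rng = np.random.default_rng(1)
x = rng.random(5); x /= x.sum()     # a point in the relative interior
y = rng.standard_normal(5)          # an arbitrary dual vector

obj = lambda u: y @ (u - x) + kl(u, x)
u_star = prox_map(x, y)

# The closed form should beat random feasible points from the simplex.
for _ in range(1000):
    u = rng.random(5); u /= u.sum()
    assert obj(u_star) <= obj(u) + 1e-12
```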
###### Example 3.3 (Box Constraints).
Assume that $\mathsf{V}=\mathbb{R}^{n}$ and
$\mathsf{X}=\prod_{i=1}^{n}[a_{i},b_{i}]$ where $0\leq a_{i}\leq b_{i}$. Given
parameters $0\leq a\leq b$, define the _Fermi-Dirac entropy_
$\psi_{a,b}(t):=\left\\{\begin{array}[]{ll}(t-a)\ln(t-a)+(b-t)\ln(b-t)&\text{if
}t\in(a,b),\\\ 0&\text{if }t\in\\{a,b\\},\\\
+\infty&\text{else}\end{array}\right.$
Then $h(x)=\sum_{i=1}^{n}\psi_{a_{i},b_{i}}(x_{i})$ is a DGF on
$\mathsf{X}=\operatorname{dom}h$ with
$\mathsf{X}^{\circ}=\prod_{i=1}^{n}(a_{i},b_{i})$.
###### Example 3.4 (Semidefinite Constraints).
Let $\mathsf{V}$ be the set of real symmetric matrices and
$\mathsf{X}=\mathsf{S}^{n}_{+}$ be the cone of real symmetric positive semi-
definite matrices equipped with the inner product
$\langle{\mathbf{A}},{\mathbf{B}}\rangle=\operatorname{tr}({\mathbf{A}}{\mathbf{B}})$.
Define $h(\mathbf{X})=\operatorname{tr}[\mathbf{X}\log(\mathbf{X})]$ as the
matrix equivalent of the negative Boltzmann-Shannon entropy. It can be
verified that $\operatorname{dom}h=\mathsf{X}$, $\nabla
h(\mathbf{X})=\log(\mathbf{X})+\mathbf{I}$, and
$\mathsf{X}^{\circ}=\mathsf{S}^{n}_{++}$, the cone of positive definite
matrices. For $\mathbf{X}^{\prime}\in\mathsf{S}^{n}_{++}$, the corresponding
Bregman divergence is given by
$D_{h}(\mathbf{X},\mathbf{X}^{\prime})=\operatorname{tr}[\mathbf{X}\log(\mathbf{X})-\mathbf{X}\log(\mathbf{X}^{\prime})+\mathbf{X}^{\prime}-\mathbf{X}].$
See [48] for further examples on matrix domains.
###### Example 3.5 (Spectrahedron).
Let
$\mathsf{X}=\\{\mathbf{X}\in\mathsf{S}^{n}_{+}|\operatorname{tr}(\mathbf{X})\leq
1\\}$ be the unit spectrahedron of positive semi-definite matrices with the
nuclear norm $\lVert\mathbf{X}\rVert_{1}=\operatorname{tr}(\mathbf{X})$. For
this geometry, a widely used regularizer is the von Neumann entropy
$h(\mathbf{X})=\operatorname{tr}(\mathbf{X}\log\mathbf{X})+(1-\operatorname{tr}(\mathbf{X}))\log(1-\operatorname{tr}(\mathbf{X}))$
(3.13)
It can be shown that this function is $\frac{1}{2}$-strongly convex with
respect to the nuclear norm and $\operatorname{dom}h=\mathsf{X}$, as well as
$\mathsf{X}^{\circ}=\\{\mathbf{X}\in\mathsf{S}^{n}_{++}|\operatorname{tr}(\mathbf{X})<1\\}$.
###### Example 3.6 (2nd order cone constraints).
Let $\mathsf{V}=\mathbb{R}^{n}$ and let
$L^{n}_{++}:=\\{x\in\mathsf{V}|x_{n}>(x_{1}^{2}+\ldots+x_{n-1}^{2})^{1/2}\\}$
be the interior of the second-order cone, with closure denoted by $\mathsf{X}$.
Let ${\mathbf{J}}_{n}$ be the $n\times n$ diagonal matrix with $-1$ in its
first $n-1$ diagonal entries and $1$ in the last one. Define
$h(x)=-\ln(\langle{\mathbf{J}}_{n}x,x\rangle)+\frac{\alpha}{2}\lVert
x\rVert^{2}_{2}$. Then $h\in\mathcal{H}_{\alpha}(\mathsf{X})$ with
$\operatorname{dom}h=\mathsf{X}^{\circ}=L^{n}_{++}\subset\mathsf{X}$. The
associated Bregman divergence is
$D_{h}(x,u)=-\ln\left(\frac{\langle{\mathbf{J}}_{n}x,x\rangle}{\langle{\mathbf{J}}_{n}u,u\rangle}\right)+2\frac{\langle{\mathbf{J}}_{n}x,u\rangle}{\langle{\mathbf{J}}_{n}u,u\rangle}-2+\frac{\alpha}{2}\lVert
x-u\rVert^{2}_{2}.$
The proximal framework for general conic constraints has been developed in
[49].
If $\gamma>0$ is a step-size parameter and $y=\gamma\nabla f(x)$, then we
obtain the _Bregman proximal map_ $T^{h}_{\gamma}(x)=\mathcal{P}^{h}_{\gamma
r}(x,\gamma\nabla f(x))$ for all $x\in\mathsf{X}$. Iterating this map
generates a discrete-time dynamical system known as the _Bregman proximal
gradient method_ (BPGM).
The Bregman Proximal Gradient Method (BPGM)
Input: $h\in\mathcal{H}_{\alpha}(\mathsf{X})$. Pick
$x^{0}\in\operatorname{dom}(r)\cap\mathsf{X}^{\circ}.$
General step: For $k=0,1,\ldots$ do:
pick $\gamma_{k}>0$.
set $x^{k+1}=\mathcal{P}^{h}_{\gamma_{k}r}(x^{k},\gamma_{k}\nabla f(x^{k}))$.
The BPGM approach consists of linearizing the differentiable part $f$ around
$x$, adding the composite term $r$, and regularizing the sum with a proximal
distance from the point $x$. When $h$ is the squared Euclidean norm, BPGM
reduces to the classical proximal gradient method. For simple implementation,
BPGM relies on the structural assumption that the prox-mapping
$\mathcal{P}^{h}_{r}(x,y)$ can be evaluated efficiently on the trajectory
$\\{(x^{k},\gamma_{k}\nabla f(x^{k}))|0\leq k\leq K\in\mathbb{N}^{\ast}\\}$.
This, often somewhat hidden, assumption is known in the literature as the
"prox-friendliness" assumption, a terminology apparently coined by [50].
### 3.3 Basic Complexity Properties
To analyze the iteration complexity of BPGM, let us define the convex lower
semi-continuous and proper function
$\varphi(u)=r(u)+\langle y,u-x\rangle+\delta_{\mathsf{X}}(u),$ (3.14)
where $y\in\mathsf{V}^{\ast}$ and $x\in\mathsf{X}$ are treated as parameters.
Under this terminology, we readily see that the basic iterate of BPGM is
determined by the evaluation of the _Bregman proximal operator_ [51] applied
to the function $\varphi\in\Gamma_{0}(\mathsf{V})$:
$\displaystyle\operatorname{Prox}_{\varphi}^{h}(x):=\operatorname*{argmin}_{u\in\mathsf{V}}\\{\varphi(u)+D_{h}(u,x)\\}.$
Writing the first-order optimality condition satisfied by the point
$x^{+}=\mathcal{P}^{h}_{r}(x,y)$ in terms of the function $\varphi$ in (3.14),
we get
$\displaystyle 0\in\partial\varphi(x^{+})+\nabla h(x^{+})-\nabla h(x).$
Whence, there exists $\xi\in\partial\varphi(x^{+})$ such that, for all
$u\in\mathsf{X}$,
$\displaystyle\langle\xi+\nabla h(x^{+})-\nabla h(x),x^{+}-u\rangle\leq 0.$
(3.15)
Via the subgradient inequality for the convex function $u\mapsto\varphi(u)$,
we obtain for all $u\in\mathsf{X}$:
$\varphi(x^{+})-\varphi(u)\leq\langle\xi,x^{+}-u\rangle\leq\langle\nabla
h(x)-\nabla h(x^{+}),x^{+}-u\rangle.$ (3.16)
For further analysis, we need the celebrated three-point identity, due to
[52].
###### Lemma 3.5 (3-point lemma).
For all $x,y\in\mathsf{X}^{\circ}$ and $z\in\operatorname{dom}h$ we have
$\displaystyle D_{h}(z,x)-D_{h}(z,y)-D_{h}(y,x)=\langle\nabla h(x)-\nabla
h(y),y-z\rangle.$
$\blacksquare$
This yields immediately,
$\displaystyle\varphi(x^{+})-\varphi(u)\leq
D_{h}(u,x)-D_{h}(u,x^{+})-D_{h}(x^{+},x).$
Performing the formal substitution $r\leftarrow\gamma r$ and
$y\leftarrow\gamma\nabla f(x)$ in the definition of the function $\varphi$ in
(3.14), this delivers the inequality
$\gamma(r(x^{+})-r(u))\leq\gamma\langle\nabla
f(x),u-x^{+}\rangle+D_{h}(u,x)-D_{h}(u,x^{+})-D_{h}(x^{+},x).$ (3.17)
Note that if $x^{+}$ is calculated inexactly in the sense that instead of
(3.15) it holds that
$\displaystyle\langle\xi+\nabla h(x^{+})-\nabla h(x),x^{+}-u\rangle\leq\Delta$
(3.18)
for some $\Delta\geq 0$, then instead of (3.17) we have
$\gamma(r(x^{+})-r(u))\leq\gamma\langle\nabla
f(x),u-x^{+}\rangle+D_{h}(u,x)-D_{h}(u,x^{+})-D_{h}(x^{+},x)+\Delta.$ (3.19)
See [49] for an explicit analysis of this inexact implementation.
Since $f$ is assumed to possess a Lipschitz continuous gradient on
$\mathsf{V}$, the classical "descent Lemma" [26] tells us that
$f(x^{+})\leq f(x)+\langle\nabla f(x),x^{+}-x\rangle+\frac{L_{f}}{2}\lVert
x^{+}-x\rVert^{2}.$ (3.20)
Additionally, for all $u\in\operatorname{dom}h$, convexity of $f$ on
$\mathsf{V}$ implies
$f(u)\geq f(x)+\langle\nabla f(x),u-x\rangle.$
Therefore, combining this with (3.20) and using (3.8), we obtain for any
$u\in\operatorname{dom}h$,
$f(x^{+})-f(u)\leq\langle\nabla f(x),x^{+}-u\rangle+\frac{L_{f}}{2}\lVert
x^{+}-x\rVert^{2}\leq\langle\nabla
f(x),x^{+}-u\rangle+\frac{L_{f}}{\alpha}D_{h}(x^{+},x).$
Multiplying this by $\gamma$ and adding the result to (3.17), we obtain, for
any $u\in\operatorname{dom}h$,
$\gamma(\Psi(x^{+})-\Psi(u))\leq
D_{h}(u,x)-D_{h}(u,x^{+})-\left(1-\frac{\gamma
L_{f}}{\alpha}\right)D_{h}(x^{+},x).$ (3.21)
If $\gamma\in(0,\frac{\alpha}{L_{f}}]$, then the above yields
$\displaystyle\gamma(\Psi(x^{+})-\Psi(u))\leq D_{h}(u,x)-D_{h}(u,x^{+}),\quad
u\in\operatorname{dom}h.$
Setting $x=x^{k},x^{+}=x^{k+1},\gamma=\gamma_{k}$ and
$u\in\operatorname{dom}h$, one can reformulate the previous display as
$\displaystyle\gamma_{k}\left(\Psi(x^{k+1})-\Psi(u)\right)\leq
D_{h}(u,x^{k})-D_{h}(u,x^{k+1}),\quad u\in\operatorname{dom}h.$
If $u=x^{k}$, we readily see $\gamma_{k}(\Psi(x^{k+1})-\Psi(x^{k}))\leq-
D_{h}(x^{k},x^{k+1})\leq 0$, i.e. the sequence of function values
$\\{\Psi(x^{k})\\}_{k\in\mathbb{N}}$ is non-increasing. On the other hand, for
a general reference point $u\in\operatorname{dom}h$, we also see that
$\displaystyle\sum_{k=0}^{N-1}\left(\Psi(x^{k+1})-\Psi(u)\right)$
$\displaystyle\leq\sum_{k=0}^{N-1}\frac{1}{\gamma_{k}}\left[D_{h}(u,x^{k})-D_{h}(u,x^{k+1})\right]$
$\displaystyle=\frac{1}{\gamma_{0}}D_{h}(u,x^{0})-\frac{1}{\gamma_{N-1}}D_{h}(u,x^{N})+\sum_{k=0}^{N-2}\left(\frac{1}{\gamma_{k+1}}-\frac{1}{\gamma_{k}}\right)D_{h}(u,x^{k+1}).$
Assuming a constant step size policy $\gamma_{k}=\gamma$, this gives us
$\displaystyle\sum_{k=0}^{N-1}\left(\Psi(x^{k+1})-\Psi(u)\right)$
$\displaystyle\leq\frac{1}{\gamma}D_{h}(u,x^{0}).$
Define the function gap $s^{k}:=\Psi(x^{k})-\Psi(u)$, then
$s^{k+1}-s^{k}=\Psi(x^{k+1})-\Psi(x^{k})\leq 0$, and therefore
$\displaystyle s^{N}$
$\displaystyle\leq\frac{1}{N}\sum_{k=0}^{N-1}s^{k+1}=\frac{1}{N}\sum_{k=0}^{N-1}[\Psi(x^{k+1})-\Psi(u)]\leq\frac{1}{N\gamma}D_{h}(u,x^{0})$
for all $u\in\operatorname{dom}h$. An attractive step size is the greedy
choice $\gamma=\frac{\alpha}{L_{f}}$. However, to make this an implementable
solution strategy, we need to know the Lipschitz constant of the gradient map
of the smooth part $f$ of the minimization problem (P).
Assuming that $\operatorname{dom}h$ is closed, we immediately obtain from the
estimate above the basic complexity result for BPGM.
###### Proposition 3.6.
If BPGM is run with the constant step size $\gamma_{k}=\frac{\alpha}{L_{f}}$
and $\operatorname{dom}h=\operatorname{cl}(\operatorname{dom}h)=\mathsf{X}$,
then for any $x^{\ast}\in\mathsf{X}^{\ast}$, we have
$\Psi(x^{k})-\Psi_{\min}(\mathsf{X})\leq\frac{L_{f}}{\alpha
k}D_{h}(x^{\ast},x^{0}).$ (3.22)
This global sublinear rate of convergence for the Euclidean setting has been
established in [53, 54]. Under the additional assumption that the objective
$\Psi$ is $\mu$-relatively strongly convex [55], it is possible to obtain a
linear convergence rate for BPGM, i.e.
$\Psi(x^{k})-\Psi_{\min}(\mathsf{X})\leq
2L_{f}\exp(-k\mu/L_{f})D_{h}(x^{\ast},x^{0})$; see e.g. [55, 56, 57], where
the authors of the latter two papers also analyze this kind of method under
an inexact oracle and inexact Bregman proximal steps.
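The sublinear bound (3.22) can also be checked empirically. In the Euclidean setting ($h=\frac{1}{2}\lVert\cdot\rVert_{2}^{2}$, $\alpha=1$, $r=0$) BPGM reduces to gradient descent, and the following Python sketch (a made-up least-squares instance, our own illustration) verifies that the function gap stays below $\frac{L_{f}}{k}D_{h}(x^{\ast},x^{0})$ along the trajectory:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)   # exact minimizer
f_star = f(x_star)

# Euclidean BPGM (h = 0.5*||.||^2, alpha = 1, r = 0) with gamma = 1/L.
x0 = np.zeros(5)
x = x0.copy()
for k in range(1, 201):
    x = x - (1.0 / L) * grad(x)
    # Right-hand side of (3.22) with D_h(x*, x0) = 0.5*||x* - x0||^2:
    bound = L / k * 0.5 * np.sum((x_star - x0) ** 2)
    assert f(x) - f_star <= bound + 1e-10
```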
#### 3.3.1 Subgradient and Mirror Descent
In the previous subsections we focused on the setting of problem (P) with
smooth part $f$ and obtained for BPGM a convergence rate $O(1/k)$. The same
method actually works for non-smooth convex optimization problems when $f$ has
bounded subgradients. In this setting BPGM with a different choice of the
step-size $\gamma$ is known as the _Mirror Descent_ (MD) method [58]. A
version of this method for convex composite non-smooth optimization was
proposed in [59], and an overview of Subgradient/Mirror Descent type of
methods for non-smooth problems can be found in [18, 60, 35].
The main difference between BPGM and MD is that one replaces the assumption
that $\nabla f$ is Lipschitz continuous with the assumption that $f$ is
subdifferentiable with bounded subgradients, i.e.
$\|f^{\prime}(x)\|_{\ast}\leq M_{f}$ for all $x\in\mathsf{X}$ and
$f^{\prime}(x)\in\partial f(x)$. For a given sequence of step-sizes
$(\gamma_{k})_{k}$ one defines the next test point as
$x^{k+1}=\operatorname*{argmin}_{u}\left\\{\langle\gamma_{k}f^{\prime}(x^{k}),u-x^{k}\rangle+\gamma_{k}r(u)+D_{h}(u,x^{k})\right\\}=\mathcal{P}^{h}_{\gamma_{k}r}(x^{k},\gamma_{k}f^{\prime}(x^{k})).$
A typical choice for the step size sequence is a monotonically decreasing
policy like $\gamma_{k}\sim k^{-1/2}$. Under such a specification, the MD
sequence $(x^{k})_{k}$ can be shown to converge with rate $O(1/\sqrt{k})$ to
the solution, which is optimal in this setting. A proof of this result can be
patterned via a suitable adaption of the arguments employed in our analysis of
the Dual Averaging Method in Section 3.4.
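A minimal Python sketch of MD on the unit simplex (Example 3.2) with the step-size policy $\gamma_{k}\sim k^{-1/2}$; the linear objective and its data are our own illustration, and the multiplicative update follows the sign convention implied by the prox-mapping (3.11):

```python
import numpy as np

# Mirror Descent on the unit simplex with the entropic DGF (Example 3.2),
# applied to f(x) = <c, x> (bounded subgradients, M_f = max|c_i|), r = 0.
c = np.array([0.5, 0.2, 0.9])
f = lambda x: c @ x
subgrad = lambda x: c                   # f'(x) = c everywhere

x = np.ones(3) / 3                      # start at the simplex barycenter
ergodic, weight = np.zeros(3), 0.0
for k in range(1, 2001):
    gamma = 1.0 / np.sqrt(k)            # gamma_k ~ k^{-1/2}
    w = x * np.exp(-gamma * subgrad(x)) # entropic prox-mapping step
    x = w / w.sum()
    ergodic += gamma * x; weight += gamma   # step-size-weighted average
x_avg = ergodic / weight

f_min = c.min()                         # min of <c, x> over the simplex
assert f(x_avg) - f_min < 0.05          # O(1/sqrt(k))-type accuracy
assert f(x) - f_min < f(np.ones(3) / 3) - f_min   # progress from the start
```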
#### 3.3.2 Potential Improvements due to relative smoothness
A key pillar of the complexity analysis of BPGM was the descent lemma (3.20),
which in turn is a consequence of the assumed Lipschitz continuity of the
gradient $\nabla f$. The very influential recent work by [61] introduced a
very clever construction which allows one to relax this restrictive
assumption (variations on the same theme can be found in [55]). The elegant
observation made in [61] is that the Lipschitz-gradient-based descent lemma
has the equivalent, but insightful, expression
$\displaystyle\left(\frac{L_{f}}{2}\lVert
x\rVert^{2}-f(x)\right)-\left(\frac{L_{f}}{2}\lVert
u\rVert^{2}-f(u)\right)\geq\langle L_{f}u-\nabla f(u),x-u\rangle\qquad\forall
x,u\in\mathsf{V}.$
This is just the gradient inequality for the convex function
$x\mapsto\frac{L_{f}}{2}\lVert x\rVert^{2}-f(x)$. Based on the general
intuition we have gained while working with a general proximal setup, a very
tempting and natural generalization is the following.
###### Definition 3.7 (Relative Smoothness, [61]).
The function $f$ is smooth relative to the essentially smooth DGF
$h\in\mathcal{H}_{0}(\mathsf{X})$ with
$\mathsf{X}=\operatorname{cl}(\operatorname{dom}h)$, if there exists a scalar
$L_{f}^{h}\geq 0$ such that for all $x,u\in\mathsf{X}^{\circ}$
$f(u)\leq f(x)+\langle\nabla f(x),u-x\rangle+L_{f}^{h}D_{h}(u,x).$ (3.23)
Structurally, relative smoothness implies a descent lemma where the squared
Euclidean norm is replaced with a general Bregman divergence induced by an
essentially smooth function $h\in\mathcal{H}_{0}(\mathsf{X})$. Rearranging
terms, a very concise and elegant way of writing relative smoothness is that
$D_{L_{f}^{h}h-f}(u,x)\geq 0$ on $\mathsf{X}^{\circ}$, or that $L_{f}^{h}h-f$
is convex on $\mathsf{X}^{\circ}$ if the latter is a convex set. Clearly, if
$f$ and $h$ are twice continuously differentiable on $\mathsf{X}^{\circ}$, the
relative smoothness condition can be stated in terms of a positive
semi-definiteness condition on the set $\mathsf{X}^{\circ}$ as
$L_{f}^{h}\nabla^{2}h(x)-\nabla^{2}f(x)\succeq 0\qquad\forall
x\in\mathsf{X}^{\circ}.$ (3.24)
Besides providing a non-Euclidean version of the descent lemma, the notion of
relative smoothness allows us to rigorously apply gradient methods to problems
whose smooth part admits no globally Lipschitz continuous gradient. This gains
relevance in solving various classes of inverse problems (see Section 5.2 in
[61]), and optimal experimental design [55], a class of problems structurally
equivalent to finding the minimum volume ellipsoid containing a list of
vectors [62, 63].
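A concrete one-dimensional illustration (a standard example in the relative-smoothness literature, not drawn from this survey): $f(x)=x^{4}/4$ has no globally Lipschitz gradient, yet $f^{\prime\prime}(x)=3x^{2}\leq 3x^{2}+1=h^{\prime\prime}(x)$ for $h(x)=x^{4}/4+x^{2}/2$, so (3.24) holds with $L_{f}^{h}=1$. The snippet below checks inequality (3.23) numerically; here $D_{L_{f}^{h}h-f}(u,x)$ reduces exactly to $\frac{1}{2}(u-x)^{2}$.

```python
import numpy as np

# f(x) = x^4/4 has no globally Lipschitz gradient on the real line, but it
# is smooth relative to h(x) = x^4/4 + x^2/2 with L_f^h = 1, since
# f''(x) = 3x^2 <= 3x^2 + 1 = h''(x), i.e. condition (3.24) holds.
f, df = lambda x: x**4 / 4, lambda x: x**3
h, dh = lambda x: x**4 / 4 + x**2 / 2, lambda x: x**3 + x

def bregman(phi, dphi, u, x):
    """Bregman divergence D_phi(u, x)."""
    return phi(u) - phi(x) - dphi(x) * (u - x)

L = 1.0
rng = np.random.default_rng(0)
xs = rng.uniform(-10, 10, 1000)
us = rng.uniform(-10, 10, 1000)
# gap = L*D_h(u,x) - D_f(u,x) = D_{L*h - f}(u,x); here L*h - f = x^2/2,
# so the gap should equal 0.5*(u-x)^2 exactly.
gap = f(xs) + df(xs) * (us - xs) + L * bregman(h, dh, us, xs) - f(us)
```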
The complexity analysis of BPGM under a relative smoothness assumption on the
pair $(f,h)$ proceeds analogously to the previous analysis. The resulting
NoLips algorithm, to use the terminology coined in [61], involves, however, a
condition number different from the ratio $\frac{\alpha}{L_{f}}$ in (3.22). This is
an important fact which makes this method potentially interesting even if the
problem at hand admits a Lipschitz continuous gradient. The first important
result is an extended version of the fundamental inequality (3.21), which
reads as
$\gamma(\Psi(x^{+})-\Psi(u))\leq D_{h}(u,x)-D_{h}(u,x^{+})-(1-\gamma
L_{f}^{h})D_{h}(x^{+},x)\quad\forall u\in\operatorname{dom}h.$ (3.25)
The derivation of this inequality is analogous to inequality (3.21), replacing
the Lipschitz-gradient-based descent inequality (3.20) by the relative
smoothness inequality (3.23) with parameter $L_{f}^{h}$. The continuation of
the proof differs then in an important aspect. It relies on the introduction
of the _symmetry coefficient_ of the DGF $h$ as
$\nu(h):=\inf\left\\{\frac{D_{h}(x,u)}{D_{h}(u,x)}|(x,u)\in\mathsf{X}^{\circ}\times\mathsf{X}^{\circ},x\neq
u\right\\}.$ (3.26)
The symmetry coefficient $\nu(h)$ is confined to the interval $[0,1]$, and
$\nu(h)=1$ applies essentially only to the energy function
$h(x)=\frac{1}{2}\lVert x\rVert^{2}$. Choosing
$\gamma=\frac{1+\nu}{2L_{f}^{h}},x^{+}=x^{k+1},x=x^{k}$ gives
$\gamma(\Psi(x^{k+1})-\Psi(u))\leq
D_{h}(u,x^{k})-D_{h}(u,x^{k+1})-\frac{1-\nu}{2}D_{h}(x^{k+1},x^{k}).$
Setting $u=x^{k}$ gives descent of the function value sequence
$(\Psi(x^{k}))_{k\geq 0}$. Moreover, it immediately follows that
$\Psi(x^{k})-\Psi(u)\leq\frac{2L_{f}^{h}}{1+\nu}\left(D_{h}(u,x^{k-1})-D_{h}(u,x^{k})\right)$
Summing over $k=1,2,\ldots,N$, the same argument as for BPGM gives sublinear
convergence of NoLips
$\Psi(x^{N})-\Psi(u)\leq\frac{2L_{f}^{h}}{N(1+\nu)}D_{h}(u,x^{0}).$ (3.27)
Comparing the constants in the complexity estimates of NoLips and BPGM we see
that the relative efficiency of the two methods depends on the condition
number ratio $\frac{2L_{f}^{h}/(1+\nu)}{L_{f}/\alpha}$. Hence, even if the
objective function is globally Lipschitz smooth (i.e. admits a Lipschitz
continuous gradient), exploiting the idea of relative smoothness might lead to
superior performance of NoLips.
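To make the Bregman step concrete, the sketch below runs NoLips on a Poisson-type objective $f(x)=\sum_{i}(\langle a_{i},x\rangle-b_{i}\log\langle a_{i},x\rangle)$, which is known in the relative-smoothness literature to be smooth relative to the Burg entropy $h(x)=-\sum_{j}\log x_{j}$ with constant $L=\sum_{i}b_{i}$. The problem data are synthetic and the iteration budget is arbitrary; with $r=0$ the Bregman step solves $\nabla h(x^{k+1})=\nabla h(x^{k})-\gamma\nabla f(x^{k})$ componentwise.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 5
A = rng.uniform(0.5, 1.5, (m, n))
x_true = rng.uniform(0.5, 2.0, n)
b = A @ x_true                   # so x_true is a minimizer (grad f = 0 there)

def f(x):
    Ax = A @ x
    return np.sum(Ax - b * np.log(Ax))

def grad_f(x):
    return A.T @ (1.0 - b / (A @ x))

L = b.sum()                      # relative-smoothness constant w.r.t. Burg entropy
gamma = 1.0 / L                  # keeps 1 - gamma*L_f^h >= 0 in (3.25)
x = np.ones(n)
vals = [f(x)]
for _ in range(10_000):
    # Bregman step for h(x) = -sum_j log(x_j):  -1/x_new = -1/x - gamma*grad,
    # i.e. componentwise  x_new_j = 1/(1/x_j + gamma*grad_j)  (stays positive).
    x = 1.0 / (1.0 / x + gamma * grad_f(x))
    vals.append(f(x))
```

The function values decrease monotonically, in line with inequality (3.25) applied at $u=x^{k}$.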
To establish global convergence of the trajectory $(x^{k})_{k\in\mathbb{N}}$,
additional "reciprocity" conditions on the Bregman divergence must be imposed.
###### Assumption 3.
The essentially smooth function $h\in\mathcal{H}_{0}(\mathsf{X})$ satisfies
the Bregman reciprocity condition if the level sets
$\\{u\in\mathsf{X}^{\circ}|D_{h}(u,x)\leq\beta\\}$ are bounded for all
$\beta\in\mathbb{R}$, and
$\displaystyle x^{k}\to
x\in\mathsf{X}^{\circ}\Leftrightarrow\lim_{k\to\infty}D_{h}(x,x^{k})=0.$
This assumption is necessary, as in some settings Bregman reciprocity is
violated. See Example 4.1 in [48] as a simple illustration. Under Bregman
reciprocity, one can prove global convergence in the spirit of Opial’s lemma
[64]:
###### Theorem 3.8 ([61], Theorem 2).
Let $(x^{k})_{k\in\mathbb{N}}$ be the sequence generated by BPGM with
$\gamma\in(0,\frac{1+\nu(h)}{L_{f}^{h}})$. Assume
$\mathsf{X}=\operatorname{cl}(\operatorname{dom}h)=\operatorname{dom}h$ and
that Assumption 3 holds in addition to the standing hypotheses of this survey.
Then, the sequence $(x^{k})_{k\in\mathbb{N}}$ converges to some solution
$x^{\ast}\in\mathsf{X}^{\ast}$.
### 3.4 Dual Averaging
An alternative method, called Dual Averaging (DA), was proposed in [65]. In
contrast to the schemes discussed so far, it is a primal-dual method making
alternating updates in the space of gradients and in the space of iterates. The
extension to the convex non-smooth composite problem (P) is due to [66]. Below
we give a self-contained complexity analysis of this scheme for non-smooth
optimization, i.e. under the assumption that
$\|f^{\prime}(x)\|_{\ast}\leq M_{f}$ for all $x\in\mathsf{X}$ instead of the
$L_{f}$-smoothness assumption in the previous subsections.
We start the description and analysis of the Dual Averaging method with some
preliminaries and assumptions. First, in this section we replace the
Lipschitz-smoothness assumption on $f$ with the following.
###### Assumption 4.
The part $f$ in the problem (P) has bounded subgradients, i.e.
$\|f^{\prime}(x)\|_{\ast}\leq M_{f}$ for all $x\in\mathsf{X}$ and all
$f^{\prime}(x)\in\partial f(x)$.
###### Assumption 5.
$\mathsf{X}$ is a nonempty convex compact set.
Let $h\in\mathcal{H}_{\alpha}(\mathsf{X})$ be a given DGF for the feasible set
$\mathsf{X}\subseteq\mathsf{V}$.
###### Assumption 6.
The DGF $h\in\mathcal{H}_{\alpha}(\mathsf{X})$ is nonnegative on $\mathsf{X}$
and upper bounded. Denote by
$\Omega_{h}(\mathsf{X}):=\max_{p\in\mathsf{X}}h(p)-\min_{p\in\mathsf{X}}h(p)$
(3.28)
the $h$-diameter of the set $\mathsf{X}$.
We emphasize that Assumption 6 implies that $\Omega_{h}(\mathsf{X})<\infty$.
###### Assumption 7.
For all $x\in\mathsf{X}$ we have $r(x)\geq 0$.
Define the _mirror map_
$Q_{\beta,\gamma}(y):=\operatorname*{argmax}_{x\in\mathsf{X}}\left\\{\langle
y,x\rangle-\beta h(x)-\gamma r(x)\right\\}.$ (3.29)
Our terminology is motivated by the workings of the Dual Averaging method.
Given the current primal-dual pair $(x,y)$, DA performs a gradient step in the
dual space $\mathsf{V}^{\ast}$ to produce a new gradient feedback point
$y^{+}=y-\lambda\nabla f(x)$, where $\lambda>0$ is a step size parameter.
Taking this as a new signal, we update the primal state by applying the mirror
map $x^{+}=Q_{\beta,\gamma}(y^{+}).$
The Dual Averaging method (DA)
Input: pick $y^{0}=0,x^{0}=Q_{\beta_{0},\gamma_{0}}(0)$, nondecreasing
learning sequence
$(\beta_{k})_{k\in\mathbb{N}_{0}},(\gamma_{k})_{k\in\mathbb{N}_{0}}$ and non-
increasing step-size sequence $(\lambda_{k})_{k\in\mathbb{N}_{0}}$
General step: For $k=0,1,\ldots$ do:
dual update $y^{k+1}=y^{k}-\lambda_{k}f^{\prime}(x^{k})$,
set $x^{k+1}=Q_{\beta_{k+1},\gamma_{k+1}}(y^{k+1})$.
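A minimal instantiation of DA: $r=0$, the entropy DGF on the unit simplex (shifted by $\log n$ so that $h\geq 0$, as Assumption 6 requires), and a linear objective $f(x)=\langle c,x\rangle$; in this setup the mirror map $Q_{\beta}$ is a softmax. All problem data and parameters below are illustrative choices.

```python
import numpy as np

def dual_averaging_simplex(grad, n, beta, n_iters):
    """Dual Averaging on the simplex with r = 0 and DGF
    h(x) = sum_i x_i*log(x_i) + log(n)  (entropy, shifted so h >= 0).
    The mirror map Q_beta(y) = argmax_x <y,x> - beta*h(x) is the
    softmax x_i proportional to exp(y_i / beta)."""
    y = np.zeros(n)
    x = np.full(n, 1.0 / n)          # x^0 = Q(0): the uniform point
    num, den = np.zeros(n), 0.0
    for k in range(n_iters):
        lam = 1.0 / np.sqrt(k + 1)   # lambda_k = (k+1)^{-1/2}
        num += lam * x
        den += lam
        y -= lam * grad(x)           # dual update
        z = np.exp((y - y.max()) / beta)
        x = z / z.sum()              # primal update via the mirror map
    return num / den                 # ergodic average

c = np.array([3.0, 1.0, 2.0, 5.0])   # min over the simplex is c.min() = 1
x_bar = dual_averaging_simplex(lambda x: c, len(c), beta=1.0, n_iters=5000)
```

The ergodic average concentrates on the coordinate with the smallest cost, and $\langle c,\bar{x}_{N}\rangle$ approaches the optimal value $1$.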
###### Remark 3.1.
If $r=0$ on $\mathsf{X}$ we recover the dual averaging scheme of [65]. The
definition of this method can be simplified to the following primal-dual
updating scheme:
$\left\\{\begin{array}[]{ll}y^{k+1}=y^{k}-\lambda_{k}f^{\prime}(x^{k}),y^{0}\text{
given,}\\\
x^{k+1}=Q_{\beta_{k+1}}(y^{k+1})=\operatorname*{argmax}_{x\in\mathsf{X}}\\{\langle
y^{k+1},x\rangle-\beta_{k+1}h(x)\\}.\end{array}\right.$ (3.30)
We will revisit this scheme more thoroughly in Sections 3.4.1 and 3.4.2.
We now assess the iteration complexity of DA, showing that it features the
same order convergence rate $O(1/\sqrt{k})$ as BPGM and MD.
Fix an arbitrary anchor point $p\in\mathsf{X}$ and define for given parameters
$\beta,\gamma\in[0,\infty)$ the function
$H_{\beta,\gamma}(y):=\max_{x\in\mathsf{X}}\left\\{\langle y,x-p\rangle-\beta
h(x)-\gamma r(x)\right\\}$ (3.31)
The mapping $x\mapsto\beta h(x)+\gamma r(x)$ is
$\alpha_{\omega}:=(\alpha\beta+\gamma\mu)$-strongly convex. Applying
Proposition 3.3, the function $y\mapsto H_{\beta,\gamma}(y)$ is convex and
continuously differentiable with
$H_{\beta,\gamma}(y+g)\leq H_{\beta,\gamma}(y)+\langle\nabla
H_{\beta,\gamma}(y),g\rangle+\frac{1}{2\alpha_{\omega}}\lVert
g\rVert^{2}_{\ast}\quad\forall y,g\in\mathsf{V}^{\ast}.$ (3.32)
Moreover, if $\beta_{1}\geq\beta_{2}$ and $\gamma_{1}\geq\gamma_{2}$, then it
is easy to see that
$H_{\beta_{1},\gamma_{1}}(y)\leq H_{\beta_{2},\gamma_{2}}(y)\qquad\forall
y\in\mathsf{V}^{\ast}.$
An important consequence of strong convexity is the following relation (see
e.g. [67])
$Q_{\beta,\gamma}(y)-p=\nabla H_{\beta,\gamma}(y)\qquad\forall
y\in\mathsf{V}^{\ast}.$
To simplify the notation, let $H_{k}(y)\equiv H_{\beta_{k},\gamma_{k}}(y)$ for
all $y\in\mathsf{V}^{\ast}$. Thanks to the monotonicity in the parameters, we
get through some elementary manipulations the relation
$\displaystyle H_{k}(y^{k+1})\geq
H_{k+1}(y^{k+1})+(\beta_{k+1}-\beta_{k})h(x^{k+1})+(\gamma_{k+1}-\gamma_{k})r(x^{k+1}).$
By Assumption 6, we know that $h\geq 0$ on $\mathsf{X}$. Indeed, this can be
achieved by a simple shift of the graph of the function, if it is not
satisfied from the beginning. Continuing under this nonnegativity assumption,
and using $\beta_{k+1}\geq\beta_{k}$, we arrive at the estimate
$H_{k}(y^{k+1})\geq H_{k+1}(y^{k+1})+(\gamma_{k+1}-\gamma_{k})r(x^{k+1}).$
Now, we impose some further structure on the choice of the step-sizes
$\gamma_{k},\lambda_{k}$. Specifically, assume that
$\gamma_{k+1}=\gamma_{k}+\lambda_{k},\gamma_{0}=0,$
so that $\Lambda_{k}:=\sum_{i=0}^{k}\lambda_{i}\equiv\gamma_{k}$. From this,
via equation (3.32), we arrive at the upper bound
$\displaystyle H_{k+1}(y^{k+1})+\lambda_{k}r(x^{k+1})\leq
H_{k}(y^{k})-\lambda_{k}\langle
f^{\prime}(x^{k}),x^{k}-p\rangle+\frac{\lambda^{2}_{k}}{2\alpha_{k}}\lVert
f^{\prime}(x^{k})\rVert_{\ast}^{2},$
where $\alpha_{k}=\beta_{k}\alpha+\gamma_{k}\mu$ is the strong concavity
parameter of the maximization problem in (3.31) at iteration $k$. Rearranging
and using the convexity of the part $f$ gives
$\lambda_{k}[f(x^{k})-f(p)+r(x^{k+1})]\leq
H_{k}(y^{k})-H_{k+1}(y^{k+1})+\frac{\lambda^{2}_{k}}{2\alpha_{k}}\lVert
f^{\prime}(x^{k})\rVert_{\ast}^{2}.$
Summing over $k=0,1,\ldots,N$, this gives
$\sum_{k=0}^{N}\lambda_{k}[f(x^{k})-f(p)+r(x^{k+1})]\leq
H_{0}(y^{0})-H_{N+1}(y^{N+1})+\sum_{k=0}^{N}\frac{\lambda^{2}_{k}}{2\alpha_{k}}\lVert
f^{\prime}(x^{k})\rVert_{\ast}^{2}.$
Since $h,r\geq 0$ on $\mathsf{X}$, it is clear that $H_{0}(0)\leq 0$.
Moreover, $H_{N+1}(y^{N+1})\geq-\beta_{N+1}h(p)-\gamma_{N+1}r(p)$, where
$p\in\mathsf{X}$ is the chosen anchor point. This, using the bounded
subgradients assumption, leads to the weaker estimate
$\displaystyle\sum_{k=0}^{N}\lambda_{k}[\Psi(x^{k})-\Psi(p)+r(x^{k+1})+r(p)-r(x^{k})]\leq\beta_{N+1}h(p)+\Lambda_{N}r(p)+\sum_{k=0}^{N}\frac{\lambda^{2}_{k}M^{2}_{f}}{2\alpha_{k}}.$
Jensen’s inequality applied to the ergodic average
$\bar{x}_{N}:=\frac{1}{\Lambda_{N}}\sum_{k=0}^{N}\lambda_{k}x^{k}$
gives us further
$\Lambda_{N}[\Psi(\bar{x}_{N})-\Psi(p)]\leq\beta_{N+1}\Omega_{h}(\mathsf{X})+\lambda_{0}r(x^{0})+\sum_{k=0}^{N}\frac{\lambda^{2}_{k}M^{2}_{f}}{2\alpha_{k}}.$
Let us now make the concrete choice of parameters
$\displaystyle\beta_{k}=\beta>0\text{ and
}\lambda_{k}=\frac{1}{\sqrt{k+1}}\qquad\forall k\geq 0.$
Then, for all $k\geq 1$, classical calculus arguments (see e.g. [18], Lemma
8.26) yield the bounds
$\displaystyle\gamma_{k}=\Lambda_{k}\geq\sqrt{k+1},\qquad\alpha_{k}=\alpha+\Lambda_{k}\mu\geq\alpha+\mu\sqrt{k+1},$
$\displaystyle\frac{\lambda^{2}_{k}}{\alpha_{k}}\leq\frac{1}{\alpha(k+1)},\text{
and
}\sum_{k=0}^{N}\frac{\lambda^{2}_{k}}{2\alpha_{k}}\leq\frac{1+\log(N+1)}{2\alpha}.$
Substituting these bounds into the estimate above, we obtain the $O(\log(N)/\sqrt{N})$ bound
$\Psi(\bar{x}_{N})-\Psi_{\min}(\mathsf{X})\leq\frac{\beta\Omega_{h}(\mathsf{X})+r(x^{0})+\frac{M_{f}^{2}}{2\alpha}(1+\log(N+1))}{\sqrt{N+1}}.$
(3.33)
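The elementary bounds above are easy to check numerically. The snippet below does so for arbitrary sample values of $\alpha$, $\mu$, $N$, taking $\beta=1$ so that $\alpha_{k}=\alpha+\Lambda_{k}\mu$.

```python
import numpy as np

alpha, mu, N = 0.7, 0.3, 10_000       # arbitrary sample values
k = np.arange(N + 1)
lam = 1.0 / np.sqrt(k + 1.0)          # lambda_k = (k+1)^{-1/2}
Lam = np.cumsum(lam)                  # Lambda_k (= gamma_k)
alpha_k = alpha + Lam * mu            # strong concavity parameters (beta = 1)

ok1 = np.all(Lam >= np.sqrt(k + 1.0) - 1e-12)
ok2 = np.all(lam**2 / alpha_k <= 1.0 / (alpha * (k + 1)) + 1e-12)
ok3 = np.sum(lam**2 / (2 * alpha_k)) <= (1 + np.log(N + 1)) / (2 * alpha)
```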
##### The effectiveness of non-Euclidean setups
With the help of the explicit rate estimate (3.33) we are now in the position
to evaluate the potential efficiency gains we can make by adopting the non-
Euclidean framework. To do so, assume that we are interested in estimating the
accuracy obtained when running DA over an a priori fixed window
$\\{0,\ldots,N\\}$. If the optimizer commits at the beginning to this
decision, then a more efficient step size strategy can be constructed by
setting
$\beta_{k}=\beta,\text{ and
}\lambda_{k}=\frac{\sqrt{2\alpha\beta\Omega_{h}(\mathsf{X})}}{\sqrt{N+1}M_{f}}.$
(3.34)
By doing this, we obtain
$\Lambda_{N}=\sqrt{2\alpha\beta\Omega_{h}(\mathsf{X})(N+1)}$, and therefore
(3.33) reads as
$\displaystyle\Psi(\bar{x}_{N})-\Psi(p)\leq\frac{(\beta+1)\Omega_{h}(\mathsf{X})+r(x^{0})}{\sqrt{2\alpha\beta(N+1)}\sqrt{\Omega_{h}(\mathsf{X})}}.$
Assuming that $r(x^{0})=0$, the complexity estimate becomes
$O(\sqrt{\Omega_{h}(\mathsf{X})/N}M_{f})$. We now illustrate how the factor
$\sqrt{\Omega_{h}(\mathsf{X})}M_{f}$ depends on the choice of the Bregman
setup.
###### Example 3.7.
Assume that $\mathsf{X}=\\{x\in\mathbb{R}^{n}_{+}|\sum_{i=1}^{n}x_{i}=1\\}$.
We investigate the complexity of DA under two different potentially
interesting Bregman proximal setups.
1. 1.
Endow the set $\mathsf{X}$ with the $\ell_{2}$ norm
$\lVert\cdot\rVert=\lVert\cdot\rVert_{2}$. Then
$\lVert\cdot\rVert_{\ast}=\lVert\cdot\rVert_{2}$ and $M_{f}\equiv
M_{f}^{(2)}=\sup_{x\in\mathsf{X}}\lVert f^{\prime}(x)\rVert_{2}$. It can be
easily computed that $\Omega_{h}(\mathsf{X})=\frac{n-1}{2n}\approx 1/2$ for
$n\to\infty$.
2. 2.
Endow the set $\mathsf{X}$ with the $\ell_{1}$ norm
$\lVert\cdot\rVert=\lVert\cdot\rVert_{1}$. We then have
$\lVert\cdot\rVert_{\ast}=\lVert\cdot\rVert_{\infty}$. Set $M_{f}\equiv
M_{f}^{\infty}=\sup_{x\in\mathsf{X}}\lVert f^{\prime}(x)\rVert_{\infty}$. As
DGF let us consider $h(x)=\sum_{i=1}^{n}x_{i}\ln(x_{i})$. Then,
$\Omega_{h}(\mathsf{X})=\ln(n)$.
Since $\lVert a\rVert_{\infty}\leq\lVert a\rVert_{2}\leq\sqrt{n}\lVert
a\rVert_{\infty}$, we see that
$1\leq\frac{M_{f}^{(2)}}{M_{f}^{\infty}}\leq\sqrt{n}$, and hence
$\displaystyle\frac{\sqrt{(n-1)/(2n)}}{\sqrt{\ln(n)}}\leq\frac{\sqrt{(n-1)/(2n)}}{\sqrt{\ln(n)}}\frac{M_{f}^{(2)}}{M_{f}^{\infty}}\leq\frac{\sqrt{(n-1)/(2n)}}{\sqrt{\ln(n)}}\sqrt{n}.$
Thus the $\ell_{1}$-setup is at most a factor of order $\sqrt{\ln(n)}$ worse
than the $\ell_{2}$-setup, while it can be better by a factor of order
$\sqrt{n/\ln(n)}$. In particular for large $n$, there can be strong reasons to
prefer the non-Euclidean $\ell_{1}$-setup over the $\ell_{2}$-setup.
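The comparison can be made tangible numerically. For a dense subgradient (here $g=(1,\ldots,1)$, the worst case for the $\ell_{2}$-setup and an illustrative choice) the complexity factors $\sqrt{\Omega_{h}(\mathsf{X})}\,M_{f}$ of the two setups compare as follows.

```python
import numpy as np

ratios = {}
for n in [10, 100, 10_000, 1_000_000]:
    g = np.ones(n)                                  # dense subgradient
    M2 = np.linalg.norm(g, 2)                       # M_f^(2) = sqrt(n)
    Minf = np.linalg.norm(g, np.inf)                # M_f^inf = 1
    factor_l2 = np.sqrt((n - 1) / (2 * n)) * M2     # sqrt(Omega_h) * M_f^(2)
    factor_l1 = np.sqrt(np.log(n)) * Minf           # sqrt(Omega_h) * M_f^inf
    ratios[n] = factor_l2 / factor_l1               # > 1 favours the l1 setup
```

For $n=10^{6}$ the $\ell_{1}$-setup factor is smaller by more than two orders of magnitude.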
#### 3.4.1 On the connection between Dual Averaging and Mirror Descent
A deep and important connection between the Dual Averaging and Mirror Descent
algorithms for convex non-smooth optimization has been observed in [46]. To
illustrate this link, let us particularize our model problem (P) to the
constrained convex programming case where $r(x)=0$ on $\mathsf{X}$. In this
case, the dual averaging scheme produces primal-dual iterates via the updates
(3.30). To relate these iterates to BPGM, we assume that
$\operatorname{dom}h\subset\mathsf{X}$ and $h$ is essentially smooth in the
sense of Definition 3.2.
Let us recall that $h$ is of Legendre type if and only if its Fenchel
conjugate $h^{\ast}$ is of Legendre type. Moreover, $\nabla
h:\operatorname{int}(\operatorname{dom}h)\to\operatorname{int}(\operatorname{dom}h^{\ast})$
is a bijection with
$(\nabla h)^{-1}=\nabla h^{\ast}\text{ and }h^{\ast}(\nabla
h(x))=\langle x,\nabla h(x)\rangle-h(x).$ (3.35)
Taking $\mathsf{X}=\operatorname{cl}(\operatorname{dom}h)$, it follows
$\displaystyle\operatorname{dom}\partial
h=\operatorname{relint}(\operatorname{dom}h)=\operatorname{relint}(\mathsf{X})\text{
with }\partial h(x)=\\{\nabla h(x)\\}\quad\forall
x\in\operatorname{relint}(\mathsf{X}).$
Assuming that the penalty function $h$ is of Legendre type, the primal
projection step is seen to be the regularized maximization step
$\displaystyle x^{k}=\operatorname*{argmax}_{u\in\mathsf{X}}\\{\langle
y^{k},u\rangle-\beta_{k}h(u)\\}\Leftrightarrow y^{k}=\beta_{k}\nabla
h(x^{k}).$
Using the definition of the dual trajectory, we see that for all $k\geq 0$ the
primal-dual relation obeys:
$\displaystyle 0=\lambda_{k}\nabla f(x^{k})+\beta_{k+1}\nabla
h(x^{k+1})-\beta_{k}\nabla h(x^{k}).$
Assuming that $\beta_{k}\equiv 1$, this implies
$\displaystyle
x^{k+1}\in\operatorname*{argmin}_{u\in\mathsf{X}}\\{\langle\lambda_{k}\nabla
f(x^{k}),u-x^{k}\rangle+D_{h}(u,x^{k})\\}=\mathcal{P}_{0}(x^{k},\lambda_{k}\nabla
f(x^{k})).$
We have thus shown that DA and BPGM/MD agree if all parameters and initial
conditions are chosen in the same way.
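The agreement can be verified numerically on the simplex with the entropy DGF, $\beta_{k}\equiv 1$, and a common step-size sequence; the quadratic objective below is an arbitrary smooth test function, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 6, 50
cvec = rng.standard_normal(n)
grad = lambda x: x - cvec               # gradient of f(x) = 0.5*||x - c||^2
lam = 1.0 / np.sqrt(np.arange(T) + 1.0)

# Mirror Descent with entropy DGF: x proportional to x*exp(-lam*grad).
x_md = np.full(n, 1.0 / n)
# Dual Averaging with beta_k = 1: x = softmax(y), y accumulating -lam*grad.
y = np.zeros(n)
x_da = np.full(n, 1.0 / n)
for k in range(T):
    x_md = x_md * np.exp(-lam[k] * grad(x_md))
    x_md /= x_md.sum()
    y -= lam[k] * grad(x_da)
    z = np.exp(y - y.max())
    x_da = z / z.sum()
```

Starting from the same uniform point, the two trajectories coincide up to floating-point error, since the DA softmax of the accumulated dual variable telescopes into the MD multiplicative updates.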
#### 3.4.2 Links to continuous-time dynamical systems
The connection between numerical algorithms and continuous-time dynamical
systems for optimization is classical and well-documented in the literature
(see e.g. [68] for a textbook reference). Here we describe an interesting link
between dual averaging and a class of Riemannian gradient flows originally
introduced in [69, 70, 71] and further studied in [72]. A complexity analysis
of discretized versions of these gradient flows has recently been obtained in
[73]. Our point of departure is the following continuous-time dynamical system
based on dual averaging, which has been introduced in [12] in the context of
convex programming and in [13] for general monotone variational inequality
problems. The main ingredient of this dynamical system is a pair of primal-
dual trajectories $(x(t),y(t))_{t\geq 0}$ evolving in continuous time
according to the differential-projection system
$\left\\{\begin{array}[]{l}y^{\prime}(t):=\frac{\>dy(t)}{\>dt}=-\lambda(t)\nabla
f(x(t)),\\\ x(t)=Q_{1}(\eta(t)y(t))=:Q(\eta(t)y(t)).\end{array}\right.$ (3.36)
To relate this scheme formally to its discrete-time counterpart (3.30), let us
perform an Euler discretization of the dual trajectory by
$y^{k}-y^{k-1}=-\lambda_{k}\nabla f(x^{k})$, and project the resulting point
to the primal space by applying the mirror map
$Q(\frac{1}{\beta_{k+1}}y^{k+1})$, where $\beta_{k+1}^{-1}$ is the discrete-
time learning rate appropriately sampled from the function $\eta(t)$. As in
Section 3.4.1, let us assume that the mirror map is generated by a Legendre
function $h$, so that
$x(t)=\nabla h^{\ast}(\eta(t)y(t)).$
Let us further assume that $h$ is twice continuously differentiable and
$\eta(t)\equiv 1$. Differentiating the previous equation with respect to time
$t$ gives
$x^{\prime}(t)=\nabla^{2}h^{\ast}(y(t))y^{\prime}(t)=-\lambda(t)\nabla^{2}h^{\ast}(y(t))\nabla
f(x(t)).$
To make headway, recall the basic property of Legendre functions that
$\nabla h^{\ast}(\nabla h(x))=x$ for all
$x\in\operatorname{int}\operatorname{dom}h$ (cf. (3.35)). Differentiating
this identity implicitly, we obtain $\nabla^{2}h^{\ast}(\nabla
h(x))\nabla^{2}h(x)=\operatorname{Id}$, or
$\nabla^{2}h^{\ast}(\nabla h(x))=[\nabla^{2}h(x)]^{-1}=:H(x)^{-1}.$ (3.37)
As in Section 3.4.1, it holds that $y(t)=\nabla h(x(t))$ for all $t\geq
0$; we therefore obtain the following interesting characterization of the
primal trajectory:
$x^{\prime}(t)=-\lambda(t)H(x(t))^{-1}\nabla f(x(t)).$
If $\mathsf{X}$ is a smooth manifold, we can define a Riemannian metric
$g_{x}(u,v):=\langle
H(x)u,v\rangle\qquad\forall(x,u,v)\in\mathsf{X}^{\circ}\times\mathsf{V}\times\mathsf{V}.$
The gradient of a smooth function $\phi$ with respect to the metric $g$ is
then given by $\nabla_{g}\phi(x)=H(x)^{-1}\nabla\phi(x)$. Hence, the
continuous-time version of the dual averaging method gives rise to the class
of primal _Riemannian-Hessian gradient flows_
$x^{\prime}(t)+\lambda(t)\nabla_{g}f(x(t))=0,\quad x(0)\in\mathsf{X}^{\circ}.$
(3.38)
This class of continuous-time dynamical systems gave rise to a vigorous
literature in connection with Nesterov’s optimal method, which we will
thoroughly discuss in Section 6. As an appetizer, consider the system of
differential equations
$y^{\prime}(t)=-\lambda(t)\nabla f(x(t)),\quad
x^{\prime}(t)=\gamma(t)[Q(\eta(t)y(t))-x(t)].$ (3.39)
Suppose that in (3.39) we take $Q(y)=y,\eta(t)=1$. This corresponds
to the Legendre function $h(x)=\frac{1}{2}\lVert
x\rVert^{2}_{2}+\delta_{\mathsf{X}}(x)$ for a given closed convex set
$\mathsf{X}$. Under this specification, the dynamical system (3.39) becomes
$y^{\prime}(t)=-\lambda(t)\nabla f(x(t)),\quad
x^{\prime}(t)=\gamma(t)[y(t)-x(t)].$
Combining the primal and the dual trajectory, we easily derive a purely primal
second-order in time dynamical system given by
$\displaystyle
x^{\prime\prime}(t)+x^{\prime}(t)\left(\frac{\gamma(t)^{2}-\gamma^{\prime}(t)}{\gamma(t)}\right)+\gamma(t)\lambda(t)\nabla
f(x(t))=0.$
Setting $\gamma(t)=\beta/t$ and $\lambda(t)=1/\gamma(t)$ and rearranging gives
$\displaystyle x^{\prime\prime}(t)+\frac{\beta+1}{t}x^{\prime}(t)+\nabla
f(x(t))=0,$
which corresponds to the continuous-time version of the Heavy-ball method of
Polyak [74]. For $\beta=2$ this gives the continuous-time formulation of
Nesterov’s accelerated scheme, as shown by [75].
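A crude forward-Euler integration of this ODE with $\beta=2$ illustrates the decay of the objective along the trajectory. The quadratic objective, step size, and horizon are arbitrary choices, and forward Euler is the simplest possible discretization, not a recommended one.

```python
import numpy as np

f = lambda x: 0.5 * np.dot(x, x)         # toy objective
grad = lambda x: x

beta, dt = 2.0, 1e-3
x = np.array([2.0, -1.0, 0.5])
v = np.zeros_like(x)                     # v = x'(t)
t = 1.0
f0 = f(x)
while t < 30.0:
    # acceleration from  x'' + ((beta+1)/t) x' + grad f(x) = 0
    a = -(beta + 1.0) / t * v - grad(x)
    x = x + dt * v
    v = v + dt * a
    t += dt
```

The objective along the trajectory decays at the $O(1/t^{2})$ rate known for the continuous-time Nesterov dynamics.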
More generally, suppose that $h$ is a twice continuously differentiable
Legendre function and $\eta(t)\equiv 1$. Then a direct calculation shows that
$\displaystyle
x^{\prime\prime}(t)+\left(\gamma(t)-\frac{\gamma^{\prime}(t)}{\gamma(t)}\right)x^{\prime}(t)+\gamma(t)\lambda(t)(\nabla^{2}h)^{-1}(Q(y(t)))\nabla
f(x(t))=0.$
Using the identity (3.37), as well as
$\frac{x^{\prime}(t)}{\gamma(t)}+x(t)=\nabla h^{\ast}(y(t))$, it follows that
$\displaystyle\nabla^{2}h\left(x(t)+\frac{x^{\prime}(t)}{\gamma(t)}\right)\left(\frac{x^{\prime\prime}(t)}{\gamma(t)}+\left(1-\frac{\gamma^{\prime}(t)}{\gamma(t)^{2}}\right)x^{\prime}(t)\right)=-\lambda(t)\nabla
f(x(t))\Leftrightarrow\frac{\>d}{\>dt}\nabla
h\left(x(t)+\frac{x^{\prime}(t)}{\gamma(t)}\right)=-\lambda(t)\nabla f(x(t)).$
This shows that for $\eta\equiv 1$ the dynamic coincides with the Lagrangian
family of second-order systems constructed in [76]. These ideas are now being
investigated intensively in combination with numerical discretization schemes
for dynamical systems, with the hope of gaining insights into how to construct
new and more efficient gradient-type algorithms. This literature has grown
rapidly in recent years; we mention [77, 78, 79, 80].
## 4 The Proximal Method of Multipliers and ADMM
In this section we turn our attention to a classical toolbox for solving
linearly constrained optimization problems building on the classical idea of
the celebrated _method of multipliers_. An extremely powerful proponent of
this class of algorithms is the _Alternating Direction Method of Multipliers_
(ADMM), which has received enormous interest from different directions,
including PDEs [81, 82], mixed-integer programming [83], optimal control [84]
and signal processing [85, 86]. The very influential monograph [87] contains
over 180 references, reflecting the deep impact of alternating methods on
optimization theory and its applications. Following the general spirit of this
survey, we introduce alternating direction methods in a proximal framework, as
pioneered by Rockafellar [88, 89], and due to [90]. See also [91] for some
further important elaborations.
To set the stage, consider the composite convex optimization problem (P), in
its special form (2.7). Hence, we are interested in minimizing the composite
convex function
$\Psi(x)=g({\mathbf{A}}x)+r(x),$
for a given bounded linear operator ${\mathbf{A}}$. To streamline the
presentation, we directly assume in this section that
$\mathsf{V}=\mathsf{V}^{\ast}=\mathbb{R}^{n}$, and the underlying metric
structure is generated by the Euclidean norm $\lVert a\rVert\equiv\lVert
a\rVert_{2}=\langle a,a\rangle^{1/2}=\left(\sum_{i=1}^{n}a_{i}^{2}\right)^{1/2}$.
Introducing the auxiliary variable $z={\mathbf{A}}x$, this problem can be
equivalently written as
$\inf\\{\Phi(x,z)=g(z)+r(x)|{\mathbf{A}}x-z=0,x\in\mathsf{X},z\in\mathsf{Z}\\},$
(4.1)
where $\mathsf{X}=\mathbb{R}^{n}$ and $\mathsf{Z}=\mathbb{R}^{m}$. We will
call this the _primal_ problem. By Fenchel-Rockafellar duality [25], the _dual
problem_ to (4.1) is
$\min_{y}g^{\ast}(y)+r^{\ast}(-{\mathbf{A}}^{\top}y).$ (4.2)
The Lagrangian associated to (4.1) is
$L(x,z,y)=g(z)+r(x)+\langle y,{\mathbf{A}}x-z\rangle,$ (4.3)
where $y\in\mathbb{R}^{m}$ is the Lagrange multiplier associated with the
linear constraint.
###### Assumption 8.
The Lagrangian $L$ associated to problem (4.1) has a saddle point, i.e. there
exists $(x^{\ast},z^{\ast},y^{\ast})$ such that
$L(x^{\ast},z^{\ast},y)\leq L(x^{\ast},z^{\ast},y^{\ast})\leq
L(x,z,y^{\ast})\qquad\forall(x,z,y)\in\mathsf{X}\times\mathsf{Z}\times\mathbb{R}^{m}.$
(4.4)
A key actor in alternating direction methods is the _augmented Lagrangian_
defined for some $c>0$ as
$L_{c}(x,z,y)=r(x)+g(z)+\langle y,{\mathbf{A}}x-z\rangle+\frac{c}{2}\lVert{\mathbf{A}}x-z\rVert_{2}^{2}.$ (4.5)
The Alternating Direction Method of Multipliers (ADMM)
Input: pick $(z^{0},y^{0})\in\mathsf{Z}\times\mathbb{R}^{m}$ and penalty
parameter $c>0$;
General step: For $k=0,1,\ldots$ do: $\displaystyle x^{k+1}$
$\displaystyle=\operatorname*{argmin}_{x\in\mathsf{X}}\\{r(x)+\frac{c}{2}\lVert{\mathbf{A}}x-z^{k}+\frac{1}{c}y^{k}\rVert_{2}^{2}\\}$
(4.6) $\displaystyle z^{k+1}$
$\displaystyle=\operatorname*{argmin}_{z\in\mathsf{Z}}\\{g(z)+\frac{c}{2}\lVert{\mathbf{A}}x^{k+1}-z+\frac{1}{c}y^{k}\rVert^{2}_{2}\\}$
(4.7) $\displaystyle y^{k+1}$
$\displaystyle=y^{k}+c({\mathbf{A}}x^{k+1}-z^{k+1}).$ (4.8)
ADMM updates the decision variables in a sequential manner and is thus not
capable of performing parallel updates, which are often required in
large-scale distributed optimization problems. In the context of the AC
optimal power flow problem in electric power grid optimization, the authors of
[92] provide such a modification of ADMM. Furthermore, ADMM can be extended to
formulations with general linear constraints of the form
${\mathbf{A}}_{1}x+{\mathbf{A}}_{2}z=b$. For ease of exposition we stick to
the simplified problem formulation above.
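A minimal runnable instance of (4.6)-(4.8), taking ${\mathbf{A}}=\operatorname{Id}$, $r(x)=\tau\lVert x\rVert_{1}$ and $g(z)=\frac{1}{2}\lVert z-b\rVert^{2}$ (an illustrative denoising-type problem with known closed-form solution $x^{\ast}=\operatorname{soft}(b,\tau)$); all numbers are arbitrary.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

b = np.array([3.0, -0.2, 0.1, -1.5, 0.4])
tau, c = 0.5, 1.0
x = np.zeros_like(b)
z = np.zeros_like(b)
y = np.zeros_like(b)
for _ in range(500):
    x = soft(z - y / c, tau / c)      # x-update (4.6) with A = Id
    z = (b + c * x + y) / (1.0 + c)   # z-update (4.7): prox of g
    y = y + c * (x - z)               # dual update (4.8)
```

At a fixed point one recovers $x=\operatorname{soft}(b,\tau)$, the exact solution of $\min_{x}\frac{1}{2}\lVert x-b\rVert^{2}+\tau\lVert x\rVert_{1}$.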
### 4.1 The Douglas-Rachford algorithm and ADMM
The Douglas-Rachford (DR) algorithm is a fundamental method to solve general
monotone inclusion problems where the task is to find zeros of the sum of two
maximally monotone operators (see [25] and [93]). To keep the focus on convex
programming, we introduce this method for solving the dual problem (4.2). To
that end, let us define the matrix $\mathbf{K}=-{\mathbf{A}}^{\top}$, so that
our aim is to solve the convex programming problem
$\min_{z}g^{\ast}(z)+r^{\ast}(\mathbf{K}z).$ (4.9)
Any solution $\bar{z}\in\operatorname{dom}(r^{\ast})$ satisfies the monotone
inclusion
$0\in\mathbf{K}^{\top}\partial r^{\ast}(\mathbf{K}\bar{z})+\partial
g^{\ast}(\bar{z}).$ (4.10)
The DR algorithm aims to determine such a point $\bar{z}$ by iteratively
constructing a sequence $\\{(u^{k},v^{k},y^{k}),k\geq 0\\}$ determined by
$\displaystyle v^{k+1}$
$\displaystyle=(\operatorname{Id}+c\mathbf{K}^{\top}\circ\partial
r^{\ast}\circ\mathbf{K})^{-1}(2y^{k}-u^{k}),$ $\displaystyle u^{k+1}$
$\displaystyle=v^{k+1}+u^{k}-y^{k},$ $\displaystyle y^{k+1}$
$\displaystyle=(\operatorname{Id}+c\partial g^{\ast})^{-1}(u^{k+1}).$
To bring this into an equivalent form, let us focus on the definition of the
$y^{k+1}$ update, which reads as the inclusion
$0\in\frac{1}{c}(y^{k+1}-u^{k+1})+\partial g^{\ast}(y^{k+1}).$
This is clearly recognizable as the first-order optimality condition of the
problem $\min_{y}\\{g^{\ast}(y)+\frac{1}{2c}\lVert y-u^{k+1}\rVert^{2}_{2}\\}$.
Therefore, we can rewrite the above iteration in terms of convex optimization
subroutines as:
$\displaystyle v^{k+1}$
$\displaystyle=\operatorname*{argmin}_{v}\\{r^{\ast}(\mathbf{K}v)+\frac{1}{2c}\lVert
v-(2y^{k}-u^{k})\rVert_{2}^{2}\\},$ (4.11) $\displaystyle u^{k+1}$
$\displaystyle=v^{k+1}+u^{k}-y^{k},$ (4.12) $\displaystyle y^{k+1}$
$\displaystyle=\operatorname*{argmin}_{y}\\{g^{\ast}(y)+\frac{1}{2c}\lVert
y-u^{k+1}\rVert_{2}^{2}\\}.$ (4.13)
Via Fenchel-Rockafellar duality, the dual problem to (4.11) reads as
$\displaystyle
x^{k+1}=\operatorname*{argmin}_{x}\\{r(x)+\frac{c}{2}\lVert{\mathbf{A}}x+\frac{1}{c}(2y^{k}-u^{k})\rVert_{2}^{2}\\},$
where the coupling between the primal and the dual variables is
$u^{k+1}=y^{k}+c{\mathbf{A}}x^{k+1}.$
The dual to step (4.13) reads as
$\displaystyle z^{k+1}=\operatorname*{argmin}_{z}\\{g(z)+\frac{c}{2}\lVert
z-\frac{1}{c}u^{k+1}\rVert_{2}^{2}\\}.$
The coupling between primal and dual variables reads as
$y^{k+1}=u^{k+1}-cz^{k+1}.$
Combining all these relations, we can write the dual minimization problem as
$\displaystyle x^{k+1}$
$\displaystyle=\operatorname*{argmin}_{x}\\{r(x)+\frac{c}{2}\lVert{\mathbf{A}}x-z^{k}+\frac{1}{c}y^{k}\rVert_{2}^{2}\\},$
$\displaystyle z^{k+1}$
$\displaystyle=\operatorname*{argmin}_{z}\\{g(z)+\frac{c}{2}\lVert{\mathbf{A}}x^{k+1}-z+\frac{1}{c}y^{k}\rVert_{2}^{2}\\},$
$\displaystyle y^{k+1}$ $\displaystyle=y^{k}+c({\mathbf{A}}x^{k+1}-z^{k+1})$
which is just the standard ADMM. We have thus recovered a classical result
on the connection between the DR and ADMM algorithms due to [94] and [95].
### 4.2 Proximal Variant of ADMM
One of the limitations of ADMM comes from the presence of the term
${\mathbf{A}}x$ in the update of $x^{k+1}$. This factor makes it impossible to
implement the algorithm in parallel, which renders it somewhat unattractive
for large-scale problems in distributed optimization. Moreover, by the result
of [96], the convergence of ADMM for general linear constraints does not
generalize to more than two blocks. Leaving parallelization issues aside,
Shefi and Teboulle [90] proposed an interesting extension of ADMM obtained by
adding further quadratic penalty terms, which adds stability to the algorithm
and also allows for a unified perspective on Lagrangian methods and global
convergence results.
Given some point
$(x^{k},z^{k},y^{k})\in\mathsf{X}\times\mathsf{Z}\times\mathbb{R}^{m}$ and two
positive definite matrices $\mathbf{M}_{1},\mathbf{M}_{2}$, we are ready to
define the new ingredient of the method.
###### Definition 4.1.
The proximal augmented Lagrangian of (4.1) is
$P_{k}(x,z,y)=L_{c}(x,z,y)+q_{k}(x,z)$ (4.14)
where
$q_{k}(x,z)=\frac{1}{2}\lVert
x-x^{k}\rVert^{2}_{\mathbf{M}_{1}}+\frac{1}{2}\lVert
z-z^{k}\rVert_{\mathbf{M}_{2}}^{2}.$ (4.15)
Here, $\lVert u\rVert_{\mathbf{M}}^{2}=\langle u,\mathbf{M}u\rangle$ is the
semi-norm induced by $\mathbf{M}$, which is a norm if $\mathbf{M}$ is positive
definite.
The Alternating Direction proximal Method of Multipliers (AD-PMM)
Input: pick
$(x^{0},z^{0},y^{0})\in\mathsf{X}\times\mathsf{Z}\times\mathbb{R}^{m}$ and
penalty parameter $c>0$;
General step: For $k=0,1,\ldots$ do: $\displaystyle x^{k+1}$
$\displaystyle=\operatorname*{argmin}_{x\in\mathsf{X}}\\{r(x)+\frac{c}{2}\lVert{\mathbf{A}}x-z^{k}+\frac{1}{c}y^{k}\rVert_{2}^{2}+\frac{1}{2}\lVert
x-x^{k}\rVert^{2}_{\mathbf{M}_{1}}\\}$ (4.16) $\displaystyle z^{k+1}$
$\displaystyle=\operatorname*{argmin}_{z\in\mathsf{Z}}\\{g(z)+\frac{c}{2}\lVert{\mathbf{A}}x^{k+1}-z+\frac{1}{c}y^{k}\rVert_{2}^{2}+\frac{1}{2}\lVert
z-z^{k}\rVert_{\mathbf{M}_{2}}^{2}\\}$ (4.17) $\displaystyle y^{k+1}$
$\displaystyle=y^{k}+c({\mathbf{A}}x^{k+1}-z^{k+1}).$ (4.18)
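As a sanity check, the three updates above can be exercised on a toy instance in which every argmin is a closed-form linear solve. The following sketch (our own illustration, not code from [90]) takes $r(x)=\frac{1}{2}\lVert x\rVert^{2}$, $g(z)=\frac{1}{2}\lVert z-b\rVert^{2}$, the constraint ${\mathbf{A}}x=z$, and $\mathbf{M}_{1}=\mu_{1}\operatorname{Id}$, $\mathbf{M}_{2}=\mu_{2}\operatorname{Id}$:

```python
import numpy as np

def ad_pmm(A, b, c=1.0, mu1=0.5, mu2=0.5, iters=2000):
    """Sketch of AD-PMM (4.16)-(4.18) for the toy splitting
    r(x) = 0.5*||x||^2, g(z) = 0.5*||z - b||^2, constraint A x = z,
    with M1 = mu1*Id, M2 = mu2*Id (both positive definite).
    All argmin steps are closed-form linear solves here."""
    m, n = A.shape
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    # Constant matrix of the x-subproblem: (1 + mu1)Id + c A^T A.
    H = (1.0 + mu1) * np.eye(n) + c * A.T @ A
    for _ in range(iters):
        # (4.16): argmin r(x) + (c/2)||Ax - z + y/c||^2 + (mu1/2)||x - x^k||^2
        x = np.linalg.solve(H, c * A.T @ z - A.T @ y + mu1 * x)
        # (4.17): argmin g(z) + (c/2)||Ax - z + y/c||^2 + (mu2/2)||z - z^k||^2
        z = (b + c * A @ x + y + mu2 * z) / (1.0 + c + mu2)
        # (4.18): dual multiplier update
        y = y + c * (A @ x - z)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
x = ad_pmm(A, b)
# Analytic minimizer of 0.5||x||^2 + 0.5||Ax - b||^2.
x_star = np.linalg.solve(np.eye(5) + A.T @ A, A.T @ b)
assert np.allclose(x, x_star, atol=1e-6)
```

The iterates converge to the minimizer of $\frac{1}{2}\lVert x\rVert^{2}+\frac{1}{2}\lVert{\mathbf{A}}x-b\rVert^{2}$, as expected.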
We give a brief analysis of the complexity of AD-PMM in the special case of
problem (4.1). Recall that a standing hypothesis in this survey is that the
smooth part $f$ of the composite convex programming problem (P) admits a
Lipschitz continuous gradient; since $f(x)=g({\mathbf{A}}x)$, the Lipschitz
constant of $\nabla f$ is determined by a corresponding Lipschitz assumption
on $\nabla g$ and a bound on the spectrum of the matrix ${\mathbf{A}}$. For
the complexity analysis below we additionally assume that $g$ itself is
Lipschitz continuous, with modulus henceforth denoted by $L_{g}$. To highlight the primal-dual nature of
the algorithm, a key element in the complexity analysis is the bifunction
$S(x,y)=r(x)-g^{\ast}(y)+\langle y,{\mathbf{A}}x\rangle=L(x,0,y).$
Our derivation of an iteration complexity estimate of AD-PMM proceeds in two
steps. First, we present an interesting “Meta-Theorem”, due to [90],
reproduced here as Proposition 4.3. It gives a general convergence guarantee
for any primal-dual algorithm satisfying a specific per-iteration bound. We
then apply this general result to AD-PMM by verifying that this scheme
actually satisfies the required per-iteration bound.
We start with an auxiliary technical fact.
###### Lemma 4.2.
Let $h:\mathbb{R}^{n}\to\mathbb{R}$ be a proper convex and $L_{h}$-Lipschitz
continuous function. Then, for any $\xi\in\mathbb{R}^{n}$ we have
$h(\xi)\leq\max\\{\langle\xi,u\rangle-h^{\ast}(u):\lVert u\rVert_{2}\leq
L_{h}\\}.$ (4.19)
###### Proof.
Since $h$ is convex and continuous, it agrees with its biconjugate:
$h^{\ast\ast}=h$. By Corollary 13.3.3 in [42], $\operatorname{dom}h^{\ast}$ is
bounded with $\operatorname{dom}h^{\ast}\subseteq\\{u:\lVert u\rVert_{2}\leq
L_{h}\\}$. Hence, the definition of the conjugate gives
$h(\xi)=\sup_{u\in\operatorname{dom}h^{\ast}}\\{\langle
u,\xi\rangle-h^{\ast}(u)\\}\leq\max_{u:\lVert u\rVert_{2}\leq
L_{h}}\\{\langle\xi,u\rangle-h^{\ast}(u)\\}.$
$\blacksquare$
###### Proposition 4.3.
Let $(x^{\ast},y^{\ast},z^{\ast})$ be a saddle point for $L$. Let
$\\{(x^{k},y^{k},z^{k});k\geq 0\\}$ be a sequence generated by some algorithm
for which the following estimate holds for any $y\in\mathbb{R}^{m}$:
$L(x^{k},z^{k},y)-\Psi(x^{\ast})\leq\frac{1}{2k}\left[C(x^{\ast},z^{\ast})+\frac{1}{c}\lVert
y-y^{0}\rVert_{2}^{2}\right]$ (4.20)
for some constant $C(x^{\ast},z^{\ast})>0$. Then
$\Psi(x^{k})-\Psi(x^{\ast})\leq\frac{C_{1}(x^{\ast},z^{\ast},L_{g})}{2k},$
where
$C_{1}(x^{\ast},z^{\ast},L_{g})=C(x^{\ast},z^{\ast})+\frac{2}{c}(L^{2}_{g}+\lVert
y^{0}\rVert_{2}^{2}).$
###### Proof.
Thanks to the Fenchel inequality
$\displaystyle L(x,z,y)-S(x,y)=g(z)+g^{\ast}(y)-\langle y,z\rangle\geq 0.$
By the definition of the convex conjugate
$\displaystyle\Psi(x)$
$\displaystyle=g({\mathbf{A}}x)+r(x)=\sup_{y}\\{r(x)+\langle
y,{\mathbf{A}}x\rangle-g^{\ast}(y)\\}=\sup_{y}S(x,y).$
Now, since $g$ is convex, continuous, and $L_{g}$-Lipschitz on
$\mathbb{R}^{m}$, we know $g=g^{\ast\ast}$, and we can apply Lemma 4.2 with
$h=g$ to obtain the string of inequalities:
$\displaystyle\Psi(x^{k})-\Psi(x^{\ast})$
$\displaystyle=\sup_{y}\\{S(x^{k},y)-\Psi(x^{\ast})\\}\leq\sup_{y:\lVert
y\rVert_{2}\leq L_{g}}\\{S(x^{k},y)-\Psi(x^{\ast})\\}\leq\sup_{y:\lVert
y\rVert_{2}\leq L_{g}}\\{L(x^{k},z^{k},y)-\Psi(x^{\ast})\\}$
$\displaystyle\leq\sup_{y:\lVert y\rVert_{2}\leq
L_{g}}\left\\{\frac{1}{2k}\left(C(x^{\ast},z^{\ast})+\frac{1}{c}\lVert
y-y^{0}\rVert_{2}^{2}\right)\right\\}\leq\frac{1}{2k}\left[C(x^{\ast},z^{\ast})+\frac{2}{c}(L_{g}^{2}+\lVert
y^{0}\rVert_{2}^{2})\right],$
where the last inequality uses $\lVert y-y^{0}\rVert_{2}^{2}\leq 2\lVert
y\rVert_{2}^{2}+2\lVert y^{0}\rVert_{2}^{2}\leq 2(L_{g}^{2}+\lVert
y^{0}\rVert_{2}^{2})$.
$\blacksquare$
To apply this Meta-Theorem, we need to verify that AD-PMM satisfies the
condition (4.20). To make progress towards that end, Lemma 4.2 in [90] proves
that
$L(x^{k+1},z^{k+1},y)-L(x,z,y^{k+1})\leq T_{k}(x,z,x^{k+1})+R_{k}(x,z,y)$
(4.21)
for all $(x,z,y)\in\mathsf{X}\times\mathsf{Z}\times\mathbb{R}^{m}$ and some
explicitly given functions $T_{k}$ and $R_{k}$. Furthermore, it is shown that
$\displaystyle
T_{k}(x,z,x^{k+1})\leq\frac{c}{2}\left(\lVert{\mathbf{A}}x-z^{k}\rVert_{2}^{2}-\lVert{\mathbf{A}}x-z^{k+1}\rVert_{2}^{2}\right)+\frac{c}{2}\lVert{\mathbf{A}}x^{k+1}-z^{k+1}\rVert_{2}^{2},\text{
and }$ $\displaystyle
R_{k}(x,z,y)\leq\frac{1}{2}\left(\Delta_{k}(x,\mathbf{M}_{1})+\Delta_{k}(z,\mathbf{M}_{2})+\frac{1}{c}\Delta_{k}(y,\operatorname{Id})\right)-\frac{c}{2}\lVert{\mathbf{A}}x^{k+1}-z^{k+1}\rVert_{2}^{2},$
where for any point $z$ and positive semi-definite matrix $\mathbf{M}$,
$\displaystyle\Delta_{k}(z,\mathbf{M})=\lVert
z-z^{k}\rVert_{\mathbf{M}}^{2}-\lVert
z-z^{k+1}\rVert^{2}_{\mathbf{M}}.$
Using these bounds and summing inequality (4.21) over $k=0,1,\ldots,N-1$, we
get
$\sum_{k=0}^{N-1}[L(x^{k+1},z^{k+1},y)-L(x,z,y^{k+1})]\leq\frac{1}{2}\left(c\lVert{\mathbf{A}}x-z^{0}\rVert_{2}^{2}+\lVert
x-x^{0}\rVert^{2}_{\mathbf{M}_{1}}+\lVert
z-z^{0}\rVert^{2}_{\mathbf{M}_{2}}+\frac{1}{c}\lVert
y-y^{0}\rVert_{2}^{2}\right)$
Dividing both sides by $N$ and using the convexity of the Lagrangian with
respect to $(x,z)$ and the linearity in $y$, we easily get
$L(\bar{x}_{N},\bar{z}_{N},y)-L(x,z,\bar{y}_{N})\leq\frac{1}{2N}\left(C(x,z)+\frac{1}{c}\lVert
y-y^{0}\rVert_{2}^{2}\right)$
in terms of the ergodic averages
$\displaystyle\bar{x}_{N}=\frac{1}{N}\sum_{k=1}^{N}x^{k},\;\bar{y}_{N}=\frac{1}{N}\sum_{k=1}^{N}y^{k},\;\bar{z}_{N}=\frac{1}{N}\sum_{k=1}^{N}z^{k},$
and the constant $C(x,z)=c\lVert{\mathbf{A}}x-z^{0}\rVert^{2}+\lVert
x-x^{0}\rVert^{2}_{\mathbf{M}_{1}}+\lVert z-z^{0}\rVert^{2}_{\mathbf{M}_{2}}$.
Therefore, we can apply Proposition 4.3 to the sequence of ergodic averages
$(\bar{x}_{k},\bar{z}_{k},\bar{y}_{k})$ generated by AD-PMM, and derive a
$O(1/N)$ convergence rate in terms of the function value.
### 4.3 Relation to the Chambolle-Pock primal-dual splitting
In this subsection we discuss the relation between ADMM and the celebrated
Chambolle-Pock method (a.k.a. the Primal-Dual Hybrid Gradient method) [97],
designed for problems of the form (2.8).
The Chambolle-Pock primal-dual algorithm (CP)
Input: pick
$(x^{0},y^{0},p^{0})\in\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{m}$
and $c,\tau>0,\theta\in[0,1]$;
General step: For $k=0,1,\ldots$ do: $\displaystyle x^{k+1}$
$\displaystyle=\operatorname*{argmin}_{x}\\{r(x)+\frac{1}{2\tau}\lVert
x-(x^{k}-\tau{\mathbf{A}}^{\top}p^{k})\rVert_{2}^{2}\\}$ (4.22) $\displaystyle
y^{k+1}$
$\displaystyle=\operatorname*{argmin}_{y}\\{g^{\ast}(y)+\frac{1}{2c}\lVert
y-(y^{k}+c{\mathbf{A}}x^{k+1})\rVert_{2}^{2}\\}$ (4.23) $\displaystyle
p^{k+1}$ $\displaystyle=y^{k+1}+\theta(y^{k+1}-y^{k}).$ (4.24)
For later reference it is instructive to rewrite this algorithm in
operator-theoretic notation. From the optimality condition of the
$x^{k+1}$-step, we see
$\displaystyle 0\in\partial
r(x^{k+1})+\frac{1}{\tau}(x^{k+1}-w^{k})=(\operatorname{Id}+\tau\partial
r)(x^{k+1})-w^{k}$
where $w^{k}=x^{k}-\tau{\mathbf{A}}^{\top}p^{k}$. Hence, we can give an
explicit expression of the update as
$\displaystyle x^{k+1}=(\operatorname{Id}+\tau\partial
r)^{-1}(w^{k})=(\operatorname{Id}+\tau\partial
r)^{-1}(x^{k}-\tau{\mathbf{A}}^{\top}p^{k}).$
Similarly, we can write the update $y^{k+1}$ explicitly as
$\displaystyle y^{k+1}=(\operatorname{Id}+c\partial
g^{\ast})^{-1}(y^{k}+c{\mathbf{A}}x^{k+1}).$
When $\theta=0$ we obtain the classical Arrow-Hurwicz primal-dual algorithm
[98]. For $\theta=1$, the last line of CP becomes $p^{k+1}=2y^{k+1}-y^{k}$,
which corresponds to a simple linear extrapolation based on the current and
previous iterates. In this case, [97] provides an $O(1/N)$ non-asymptotic
convergence guarantee in terms of the primal-dual gap function of the
corresponding saddle-point problem. The CP primal-dual splitting method has
been of immense importance in imaging and signal processing and constitutes
nowadays a standard method for tackling large-scale instances in these
application domains. Interestingly, if $\theta=1$, CP is a special case of the
proximal version of ADMM (AD-PMM). To establish this connection, let us set
$\mathbf{M}_{1}=\frac{1}{\tau}\operatorname{Id}-c{\mathbf{A}}^{\top}{\mathbf{A}}$
(positive definite whenever $c\tau\lVert{\mathbf{A}}\rVert^{2}<1$)
and $\mathbf{M}_{2}=0$. After some elementary manipulations, we arrive at the
update formula for $x^{k+1}$ in AD-PMM (4.16) as
$\displaystyle x^{k+1}=\operatorname*{argmin}_{x}\\{r(x)+\frac{1}{2\tau}\lVert
x-(x^{k}-\tau{\mathbf{A}}^{\top}(y^{k}+c({\mathbf{A}}x^{k}-z^{k})))\rVert_{2}^{2}\\}.$
Introducing the variable $p^{k}=y^{k}+c({\mathbf{A}}x^{k}-z^{k})$, the above
reads equivalently as
$\displaystyle x^{k+1}=\operatorname*{argmin}_{x}\\{r(x)+\frac{1}{2\tau}\lVert
x-(x^{k}-\tau{\mathbf{A}}^{\top}p^{k})\rVert_{2}^{2}\\}=\operatorname{Prox}_{\tau
r}(x^{k}-\tau{\mathbf{A}}^{\top}p^{k}).$
For $\mathbf{M}_{2}=0$, the second update step in AD-PMM (4.17) reads as
$z^{k+1}=(\operatorname{Id}+\frac{1}{c}\partial
g)^{-1}\left({\mathbf{A}}x^{k+1}+\frac{1}{c}y^{k}\right)=\operatorname{Prox}_{\frac{1}{c}g}\left(\frac{1}{c}(c{\mathbf{A}}x^{k+1}+y^{k})\right).$
Moreau’s identity [25, Proposition 23.18] states that
$c\operatorname{Prox}_{\frac{1}{c}g}(u/c)+\operatorname{Prox}_{cg^{\ast}}(u)=u\quad\forall
u\in\mathsf{V}.$ (4.25)
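Moreau’s identity is easy to test numerically. The following sketch (our own check, not from the text) uses $g=\lVert\cdot\rVert_{1}$, for which $\operatorname{Prox}_{\frac{1}{c}g}$ is soft-thresholding at level $1/c$ and, since $g^{\ast}$ is the indicator of the unit $\ell_{\infty}$-ball, $\operatorname{Prox}_{cg^{\ast}}$ is the projection onto $[-1,1]^{n}$ for every $c>0$:

```python
import numpy as np

# Numerical check of Moreau's identity (4.25) for g = ||.||_1:
# Prox_{(1/c)g}(v) is soft-thresholding of v at level 1/c, and
# Prox_{c g*}(u) = clip(u, -1, 1) since g* is the indicator of
# the unit sup-norm ball (a projection is unaffected by scaling c).
def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

rng = np.random.default_rng(1)
u = rng.standard_normal(10) * 3.0
for c in (0.5, 1.0, 4.0):
    lhs = c * soft(u / c, 1.0 / c) + np.clip(u, -1.0, 1.0)
    assert np.allclose(lhs, u)  # c*Prox_{g/c}(u/c) + Prox_{c g*}(u) = u
```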
Applying this fundamental identity, we see
$\displaystyle
cz^{k+1}+\operatorname{Prox}_{cg^{\ast}}(y^{k}+c{\mathbf{A}}x^{k+1})=y^{k}+c{\mathbf{A}}x^{k+1}.$
The second summand is just the $y^{k+1}$-update in the CP algorithm, so that
we deduce
$\displaystyle cz^{k+1}+y^{k+1}=y^{k}+c{\mathbf{A}}x^{k+1}\Leftrightarrow
y^{k+1}=y^{k}+c({\mathbf{A}}x^{k+1}-z^{k+1}).$
Consequently,
$\displaystyle p^{k+1}=y^{k+1}+c({\mathbf{A}}x^{k+1}-z^{k+1})=2y^{k+1}-y^{k},$
and hence we recover the three-step iteration defining CP:
$\displaystyle x^{k+1}$
$\displaystyle=\operatorname*{argmin}_{x}\\{r(x)+\frac{1}{2\tau}\lVert
x-(x^{k}-\tau{\mathbf{A}}^{\top}p^{k})\rVert_{2}^{2}\\}$ $\displaystyle
y^{k+1}$
$\displaystyle=\operatorname*{argmin}_{y}\\{g^{\ast}(y)+\frac{1}{2c}\lVert
y-(y^{k}+c{\mathbf{A}}x^{k+1})\rVert_{2}^{2}\\}$ $\displaystyle p^{k+1}$
$\displaystyle=2y^{k+1}-y^{k}.$
Given the above derivations, we can summarize this subsection by the following
interesting observation.
###### Proposition 4.4 (Proposition 3.1, [90]).
Let $(x^{k},y^{k},p^{k})$ be a sequence generated by CP with $\theta=1$. Then,
the $y^{k+1}$-update (4.23) is equivalent to
$\displaystyle
z^{k+1}=\operatorname*{argmin}_{z}\\{g(z)+\frac{c}{2}\lVert{\mathbf{A}}x^{k+1}-z+\frac{1}{c}y^{k}\rVert_{2}^{2}\\},$
$\displaystyle y^{k+1}=y^{k}+c({\mathbf{A}}x^{k+1}-z^{k+1})$
which corresponds to the primal $z^{k+1}$-minimization step (4.17) with
$\mathbf{M}_{2}=0$, and to the dual multiplier update for $y^{k+1}$ (4.18) of
AD-PMM, respectively. Moreover, the minimization step with respect to $x$ in
the CP algorithm given in (4.22) together with (4.18) reduces to (4.16) of AD-
PMM with
$\mathbf{M}_{1}=\frac{1}{\tau}\operatorname{Id}-c{\mathbf{A}}^{\top}{\mathbf{A}}$.
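This equivalence can be verified numerically on a toy quadratic instance (our own illustrative check). With $r(x)=\frac{1}{2}\lVert x\rVert^{2}$ and $g(z)=\frac{1}{2}\lVert z-b\rVert^{2}$, so that $g^{\ast}(y)=\langle y,b\rangle+\frac{1}{2}\lVert y\rVert^{2}$, all prox steps are closed-form; running CP with $\theta=1$ next to AD-PMM with $\mathbf{M}_{2}=0$ and $\mathbf{M}_{1}=\frac{1}{\tau}\operatorname{Id}-c{\mathbf{A}}^{\top}{\mathbf{A}}$ (which linearizes the $x$-step to a prox) produces identical iterates, provided both start from zero so that $p^{0}=y^{0}+c({\mathbf{A}}x^{0}-z^{0})$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
c = 0.5
tau = 0.9 / (c * np.linalg.norm(A, 2) ** 2)  # keeps M1 positive definite

# Chambolle-Pock with theta = 1.
x_cp, y_cp, p = np.zeros(4), np.zeros(6), np.zeros(6)
xs_cp = []
for _ in range(50):
    x_cp = (x_cp - tau * A.T @ p) / (1.0 + tau)        # Prox_{tau r}
    y_new = (y_cp + c * A @ x_cp - c * b) / (1.0 + c)  # Prox_{c g*}
    p = 2.0 * y_new - y_cp                              # theta = 1 extrapolation
    y_cp = y_new
    xs_cp.append(x_cp.copy())

# AD-PMM with M1 = (1/tau)Id - c A^T A and M2 = 0.
x_ad, z, y_ad = np.zeros(4), np.zeros(6), np.zeros(6)
xs_ad = []
for _ in range(50):
    p_tilde = y_ad + c * (A @ x_ad - z)
    x_ad = (x_ad - tau * A.T @ p_tilde) / (1.0 + tau)   # linearized x-step
    z = (b + c * A @ x_ad + y_ad) / (1.0 + c)           # z-step, M2 = 0
    y_ad = y_ad + c * (A @ x_ad - z)                    # multiplier update
    xs_ad.append(x_ad.copy())

assert np.allclose(xs_cp, xs_ad, atol=1e-10)  # iterates coincide
```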
## 5 The Conditional Gradient Method
The Bregman proximal gradient method is an efficient first-order method
whenever the prox-mapping can be evaluated efficiently. In this section, we
present a class of first-order methods for convex programming problems that
become relevant in large-scale settings where the computation of the
prox-mapping is a significant computational bottleneck. We describe
_conditional gradient_ (CG) methods, a family of methods which, originating
in the 1960s, have received much attention in both machine learning and
optimization over the last 10 years. CG is designed to solve convex
programming problems over compact convex sets. Therefore, we assume in this section that
the feasible set $\mathsf{X}$ is a compact convex set.
###### Assumption 9.
The set $\mathsf{X}$ is a compact convex subset in a finite-dimensional real
vector space $\mathsf{V}$.
### 5.1 Classical Conditional gradient
To set the stage for the material presented in this section, we give a quick
summary of the main developments of the classical CG method. CG, also known as
the _Frank-Wolfe method_ , was independently suggested by Frank and Wolfe [99]
for linearly constrained quadratic problems and by Levitin and Polyak [100]
for solving problem (2.7) with $r(x)\equiv 0$ and a general compact set
$\mathsf{X}$, i.e.,
$\displaystyle\Psi_{\min}(\mathsf{X}):=\min\\{f(x)|x\in\mathsf{X}\\}.$ (5.1)
CG attempts to solve problem (5.1) by sequentially calling a _linear oracle_
(LO).
###### Definition 5.1.
An operator $\mathcal{L}_{\mathsf{X}}:\mathsf{V}^{\ast}\rightarrow\mathsf{X}$
is called a _linear oracle_ (LO) over the set $\mathsf{X}$ if for any vector
$y\in\mathsf{V}^{\ast}$ we have that
$\mathcal{L}_{\mathsf{X}}(y)\in\operatorname*{argmin}_{s\in\mathsf{X}}\langle
y,s\rangle.$ (5.2)
Applying an LO in practice requires making a selection from the solution set
of the defining linear minimization problem. The precise definition of this
selection mechanism is immaterial, and we are therefore content with any
answer $\mathcal{L}_{\mathsf{X}}(y)$ revealed by the oracle.
The information-theoretic assumption that the optimizer can only query a
linear minimization oracle is clearly the main difference between CG and other
gradient-based methods discussed in Section 3. For instance, the dual
averaging algorithm solves at each iteration a strongly convex subproblem of
the form
$\min_{u\in\mathsf{X}}\\{\langle y,u\rangle+h(u)\\},$ (5.3)
where $h\in\mathcal{H}_{\alpha}(\mathsf{X})$, whereas CG solves a single
linear minimization problem at each iteration. This difference in the updating
mechanism yields the following potential advantages of the CG method.
1. Low iteration costs: In many cases it is much easier to construct an LO
than to solve the non-linear subproblem (5.3). We emphasize that this potential
benefit of CG does not depend on the structure of the objective function $f$,
but rather on the geometry of the feasible set $\mathsf{X}$. To illustrate
this point, consider the set $\mathsf{X}=\\{\mathbf{X}\in\mathbb{R}^{n\times
n}_{\text{sym}}|\mathbf{X}\succeq 0,\operatorname{tr}(\mathbf{X})\leq 1\\}$,
known as the spectrahedron (cf. Example 3.5). Computing the orthogonal
projection of a symmetric matrix $\mathbf{Y}$ onto the spectrahedron
requires first computing the full spectral decomposition
$\mathbf{Y}=\mathbf{U}{\mathbf{D}}\mathbf{U}^{\top}$, and then projecting the
diagonal elements of ${\mathbf{D}}$ onto the simplex. The resulting projection
is therefore given by
$P_{\mathsf{X}}(\mathbf{Y})=\mathbf{U}\operatorname*{Diag}(P_{\Delta_{n}}(\operatorname{diag}({\mathbf{D}})))\mathbf{U}^{\top}.$
In contrast, computing a linear oracle over $\mathsf{X}$ for the symmetric
matrix $\mathbf{Y}$ involves finding a unit eigenvector of $\mathbf{Y}$
corresponding to its minimal eigenvalue, that is,
$\mathcal{L}_{\mathsf{X}}(\mathbf{Y})=uu^{\top}$, where
$u^{\top}\mathbf{Y}u=\lambda_{\min}(\mathbf{Y})$ and $\lVert u\rVert_{2}=1$.
This operation can typically be carried out by methods such as the power
method, Lanczos, or Kaczmarz iterations, and randomized versions thereof; see
[101] for general complexity results. For large-scale problems, computing
such a leading eigenvector to a predefined accuracy is much more efficient
than a full spectral decomposition.
2. Simplicity: The definition of an LO does not rely on a specific DGF
$h\in\mathcal{H}_{\alpha}(\mathsf{X})$ and makes the update affine invariant.
3. Structural properties of the updates: When the feasible set $\mathsf{X}$
can be represented as the convex hull of a countable set of atoms
("generators"), then CG often leads to simple updates, activating only a few
atoms at each
iteration. In particular, in the case of the spectrahedron, the LO returns a
matrix of rank one, which allows for sparsity preserving iterates.
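The projection-versus-LO comparison in point 1 can be made concrete with a small numerical sketch (our own illustration); note that when $\lambda_{\min}(\mathbf{Y})\geq 0$ the linear problem is minimized by $\mathbf{X}=0$, an edge case we handle explicitly:

```python
import numpy as np

# Linear oracle over the spectrahedron X = {X sym.: X >= 0, tr(X) <= 1}.
# min <Y, X> over X is attained at u u^T for a unit eigenvector u of the
# smallest eigenvalue of Y when lambda_min(Y) < 0, and at X = 0 otherwise.
def lo_spectrahedron(Y):
    w, U = np.linalg.eigh(Y)          # full decomposition here; in practice a
    if w[0] >= 0:                     # Lanczos-type solver would compute only
        return np.zeros_like(Y)       # the extreme eigenpair
    u = U[:, 0]
    return np.outer(u, u)

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
Y = (B + B.T) / 2.0
X = lo_spectrahedron(Y)
# Optimal value of the linear minimization is min(lambda_min(Y), 0).
assert np.isclose(np.sum(Y * X), min(np.linalg.eigvalsh(Y).min(), 0.0))
```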
The classical form of CG takes the answer obtained from querying the LO at a
given gradient feedback $y=\nabla f(x)$, and returns the target vector
$p(x)=\mathcal{L}_{\mathsf{X}}(\nabla f(x))\qquad\forall x\in\mathsf{X}.$
(5.4)
It then proposes to move in the direction $p(x)-x$. As in every optimization
routine, a key question is how to design efficient step-size rules to
guarantee reasonable numerical performance. Letting $x^{k-1}$ be the current
position of the method and $p^{k}=p(x^{k-1})$ its implied target vector, the
following policies are standard choices:
Standard: $\displaystyle\quad\gamma_{k}=\frac{1}{2+k},$ (5.5) Exact line
search:
$\displaystyle\quad\gamma_{k}\in\operatorname*{argmin}_{t\in(0,1]}f(x^{k-1}+t(p^{k}-x^{k-1})),$
(5.6) Adaptive: $\displaystyle\quad\gamma_{k}=\min\left\\{\frac{\langle\nabla
f(x^{k-1}),x^{k-1}-p^{k}\rangle}{L_{f}\lVert
x^{k-1}-p^{k}\rVert^{2}},1\right\\}.$ (5.7)
Exact line search is conceptually attractive, but can be costly in large-scale
applications where evaluating the objective function is expensive.
To understand the construction of the adaptive step-size scheme, it is
instructive to introduce the primal gap (merit) function of the problem, which
is the fundamental performance measure of CG methods. It is defined as
$\mathtt{e}(x):=\sup_{u\in\mathsf{X}}\langle\nabla f(x),x-u\rangle.$ (5.8)
This merit function is just the gap function (see e.g. [102]) associated with
the monotone variational inequality (2.6) in which the non-smooth part is trivial.
In terms of this merit function, the celebrated descent lemma (3.20) yields
immediately
$\displaystyle f(x+t(p(x)-x))$ $\displaystyle\leq f(x)+t\langle\nabla
f(x),p(x)-x\rangle+\frac{L_{f}t^{2}}{2}\lVert p(x)-x\rVert^{2}$
$\displaystyle=f(x)-t\mathtt{e}(x)+\frac{L_{f}t^{2}}{2}\lVert
p(x)-x\rVert^{2}=f(x)-\eta_{x}(t),$
where $\eta_{x}(t):=t\mathtt{e}(x)-\frac{L_{f}t^{2}}{2}\lVert
p(x)-x\rVert^{2}$. Optimizing this function with respect to $t\in[0,1]$ yields
the largest-possible per-iteration decrease and returns the adaptive step-size
rule in (5.7). Once the optimizer has decided upon a specific step-size
policy among (5.5), (5.6), and (5.7), the classical CG performs the update
$x^{k}=x^{k-1}+\gamma_{k}(p^{k}-x^{k-1}).$
The classical conditional gradient (CG)
Input: A linear oracle $\mathcal{L}_{\mathsf{X}}$, a starting point
$x^{0}\in\mathsf{X}$.
Output: A solution $x$ such that
$\Psi(x)-\Psi_{\min}(\mathsf{X})<\varepsilon$.
General step: For $k=1,2,\ldots$
Compute $p^{k}=\mathcal{L}_{\mathsf{X}}(\nabla f(x^{k-1}))$;
Choose a step-size $\gamma_{k}$ either by (5.5), (5.6), (5.7);
Update $x^{k}=x^{k-1}+\gamma_{k}(p^{k}-x^{k-1})$;
Compute $\mathtt{e}^{k}=\mathtt{e}(x^{k-1})$.
If $\mathtt{e}^{k}<\varepsilon$ return $x^{k}$.
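The loop above is short enough to transcribe directly. The following sketch (our own illustration) runs classical CG with the adaptive step-size (5.7) on the unit simplex for $f(x)=\frac{1}{2}\lVert x-a\rVert^{2}$ (so $L_{f}=1$), where the LO simply returns the vertex $e_{i}$ minimizing the gradient coordinate:

```python
import numpy as np

def frank_wolfe_simplex(a, iters=5000):
    """Classical CG on the unit simplex for f(x) = 0.5*||x - a||^2
    (L_f = 1), using the adaptive step-size rule (5.7)."""
    n = a.size
    x = np.ones(n) / n
    for _ in range(iters):
        grad = x - a
        p = np.zeros(n)
        p[np.argmin(grad)] = 1.0              # LO answer: a simplex vertex
        gap = grad @ (x - p)                  # primal gap e(x), eq. (5.8)
        if gap < 1e-12:
            break
        t = min(gap / np.sum((x - p) ** 2), 1.0)   # adaptive rule (5.7)
        x = x + t * (p - x)
    return x

a = np.array([0.1, 0.2, 0.3, 0.4])  # lies inside the simplex, so x* = a
x = frank_wolfe_simplex(a)
assert np.isclose(x.sum(), 1.0) and (x >= 0).all()
assert np.linalg.norm(x - a) < 1e-3
```

Since the optimum here lies in the relative interior of the simplex and the adaptive step coincides with exact line search for a quadratic, the iterates converge quickly; for an optimum on the boundary one observes the slower zig-zagging behaviour discussed in Section 5.3.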
The convergence properties of classical CG under any of the step-size
variants above are well documented in the literature (see e.g. the recent
text [35], or [103]). We will obtain a full convergence and complexity theory
from our more general analysis of the generalized CG scheme.
#### 5.1.1 Relative smoothness
The basic ingredient in proving convergence and complexity results on the
classical CG is the fundamental inequality
$f(x+t(p(x)-x))\leq f(x)-t\mathtt{e}(x)+\frac{L_{f}t^{2}}{2}\lVert
p(x)-x\rVert^{2}.$
Based on the relative smoothness analysis in Section 3.3.2, it seems
intuitively clear that we can also prove convergence of CG when the
restrictive Lipschitz gradient assumption is replaced by a relative
smoothness assumption in terms of the pair $(f,h)$ for some DGF
$h\in\mathcal{H}_{\alpha}(\mathsf{X})$. Indeed, if we are able to estimate a
scalar $L_{f}^{h}>0$ such that $L_{f}^{h}h(x)-f(x)$ is convex on $\mathsf{X}$,
then the modified descent lemma (3.23) yields the overestimate
$f(x+t(p-x))\leq f(x)-t\mathtt{e}(x)+L_{f}^{h}D_{h}(x+t(p-x),x).$ (5.9)
Instead of requiring that $f$ has a Lipschitz continuous gradient over the
convex compact set $\mathsf{X}$, let us alternatively require the following:
###### Assumption 10.
There exists a DGF $h\in\mathcal{H}_{\alpha}(\mathsf{X})$ and a constant
$L_{f}^{h}>0$, such that $L_{f}^{h}h-f$ is convex on $\mathsf{X}$, and $h$ has
a finite curvature on $\mathsf{X}$, that is,
$\Omega^{2}_{h}(\mathsf{X})\coloneqq\sup_{x,u\in\mathsf{X},t\in[0,1]}\frac{2D_{h}(tu+(1-t)x,x)}{t^{2}}<\infty.$
(5.10)
Note that choosing $h$ to be the halved squared Euclidean norm
$h(x)=\frac{1}{2}\lVert x\rVert^{2}$ and $L_{f}^{h}=L_{f}$ makes Assumption 10
equivalent to the Lipschitz gradient assumption, in which case
$\Omega_{h}(\mathsf{X})$ is the diameter of the set $\mathsf{X}$. On the other
hand, choosing $h(x)=f(x)$ and $L_{f}^{h}=1$, we essentially retrieve the
finite curvature assumption used by Jaggi [103].
###### Remark 5.1.
It is clear that the finite curvature assumption (5.10) is incompatible with
the DGF being essentially smooth on $\mathsf{X}$. We are therefore forced to
work with non-steep distance-generating functions.
The analysis of CG under a relative smoothness condition and Assumption 10
runs in the same way as for the classical CG. However, the adaptive step-size
is reformulated as
$\gamma_{k}=\min\left\\{\frac{\langle\nabla
f(x^{k-1}),x^{k-1}-p^{k}\rangle}{L_{f}^{h}\Omega^{2}_{h}(\mathsf{X})},1\right\\}.$
This can be easily seen by replacing the upper model function
$f(x)-t\mathtt{e}(x)+L_{f}^{h}D_{h}(x+t(p-x),x)$ with its more conservative
bound
$f(x)-t\mathtt{e}(x)+\frac{L_{f}^{h}t^{2}}{2}\Omega^{2}_{h}(\mathsf{X})$. Of
course, in the Euclidean case this results in a smaller step-size than the
adaptive step (5.7), which hints at a deterioration in performance.
Nevertheless, this trick allows us to handle convex programming problems
outside the Lipschitz smooth case, which is not uncommon in various
applications [104, 105, 106].
### 5.2 Generalized Conditional Gradient
The generalized conditional gradient (GCG) method, introduced by Bach [107]
and in [108], is designed to solve our master problem (P) over a compact set
$\mathsf{X}$. To handle the composite case, we need to modify the definition
of the linear oracle accordingly.
###### Definition 5.2.
An operator
$\mathcal{L}_{\mathsf{X},r}:\mathsf{V}^{\ast}\rightarrow\mathsf{X}$ is called
a _generalized linear oracle_ (GLO) over the set $\mathsf{X}$ with respect to
the function $r$ if for any vector $y\in\mathsf{V}^{\ast}$ we have that
$\mathcal{L}_{\mathsf{X},r}(y)\in\operatorname*{argmin}_{x\in\mathsf{X}}\langle
y,x\rangle+r(x).$
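For illustration (our own example, not from the text), here is one case in which a GLO is available in closed form: $\mathsf{X}=[-1,1]^{n}$ and $r(x)=\lambda\lVert x\rVert_{1}$. The objective $\langle y,x\rangle+r(x)$ separates per coordinate and is piecewise linear, so each coordinate of a minimizer sits at a breakpoint in $\{-1,0,1\}$:

```python
import numpy as np

# Closed-form GLO for X = [-1, 1]^n with r(x) = lam * ||x||_1.
# Per coordinate, minimize y_i*t + lam*|t| over t in [-1, 1]:
# the minimum is -sign(y_i) if |y_i| > lam, and 0 otherwise.
def glo_box_l1(y, lam):
    return -np.sign(y) * (np.abs(y) > lam).astype(float)

# Brute-force check of the closed form on a fine grid.
rng = np.random.default_rng(5)
y = rng.standard_normal(7)
lam = 0.7
x = glo_box_l1(y, lam)
grid = np.linspace(-1.0, 1.0, 2001)  # includes the breakpoints -1, 0, 1
for i in range(7):
    vals = y[i] * grid + lam * np.abs(grid)
    assert y[i] * x[i] + lam * abs(x[i]) <= vals.min() + 1e-12
```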
Besides this more demanding oracle assumption, the resulting generalized
conditional gradient method is formally identical to the classical CG. In
particular, we can consider the target vector
$p(x)=\mathcal{L}_{\mathsf{X},r}(\nabla f(x))\qquad\forall x\in\mathsf{X}$
(5.11)
and the same three step size policies as in the classical CG, with the obvious
modifications:
Exact line search:
$\displaystyle\gamma_{k}\in\operatorname*{argmin}_{t\in[0,1]}\Psi(x^{k-1}+t(p^{k}-x^{k-1})),$
(5.12) Adaptive:
$\displaystyle\gamma_{k}=\min\left\\{\frac{r(x^{k-1})-r(p^{k})+\langle\nabla
f(x^{k-1}),x^{k-1}-p^{k}\rangle}{L_{f}\lVert
x^{k-1}-p^{k}\rVert^{2}},1\right\\}.$ (5.13)
The adaptive step size variant is derived from an augmented merit function,
taking into consideration the non-smooth composite nature of the underlying
optimization problem. Indeed, as again can be learned from the basic theory of
variational inequalities (see [109]), the natural merit function for the
composite model problem (P) is the non-smooth function
$\mathtt{e}(x)=\sup_{u\in\mathsf{X}}\Gamma(x,u),\text{ where
}\Gamma(x,u):=r(x)-r(u)+\langle\nabla f(x),x-u\rangle.$ (5.14)
By definition, we see that $\mathtt{e}(x)\geq 0$ for all $x\in\mathsf{X}$,
with equality if and only if $x\in\mathsf{X}^{\ast}.$ These basic properties
justify our terminology, calling $\mathtt{e}(x)$ a merit function. Of course,
$\mathtt{e}(\cdot)$ is also easily seen to be convex. Furthermore, using the
convexity of $f$, one first sees that
$\langle\nabla f(x),x-u\rangle\geq f(x)-f(u),$
so that for all $x,u\in\operatorname{dom}(r)$,
$\displaystyle\Gamma(x,u)$ $\displaystyle\geq
r(x)-r(u)+f(x)-f(u)=\Psi(x)-\Psi(u).$
From here, one immediately arrives at the relation
$\mathtt{e}(x)\geq\Psi(x)-\Psi_{\min}(\mathsf{X}).$ (5.15)
Clearly, with $r=0$, the above specification yields the classical CG.
#### 5.2.1 Basic Complexity Properties of GCG
We now turn to prove that the GCG method with any of the above-mentioned
step-sizes converges at a rate of $O(\frac{1}{k})$. We derive this rate under
the standard Lipschitz smoothness assumption on $f$, which gives us access to
the classical descent lemma (3.20). Combining this with the assumed
convexity of the non-smooth function $r(\cdot)$, we readily obtain
$\displaystyle\Psi(x^{k-1}+t(p^{k}-x^{k-1}))$ $\displaystyle\leq
f(x^{k-1})+t\langle\nabla
f(x^{k-1}),p^{k}-x^{k-1}\rangle+\frac{t^{2}L_{f}}{2}\lVert
p^{k}-x^{k-1}\rVert^{2}+(1-t)r(x^{k-1})+tr(p^{k})$
$\displaystyle=\Psi(x^{k-1})-t\mathtt{e}(x^{k-1})+\frac{t^{2}L_{f}}{2}\lVert
p^{k}-x^{k-1}\rVert^{2}.$
Based on this fundamental inequality of the per-iteration decrease, we can
deduce the iteration complexity via an induction argument. First, one observes
that for each of the three introduced step-size rules (standard, line search
and adaptive), one obtains a recursion of the form
$\displaystyle\Psi(x^{k-1}+\gamma_{k}(p^{k}-x^{k-1}))\leq\Psi(x^{k-1})-\gamma_{k}\mathtt{e}(x^{k-1})+\frac{L_{f}\gamma^{2}_{k}}{2}\lVert
p^{k}-x^{k-1}\rVert^{2}.$
Denoting $s^{k}:=\Psi(x^{k})-\Psi_{\min}(\mathsf{X})$,
$\mathtt{e}^{k}=\mathtt{e}(x^{k-1})$ and
$\Omega^{2}\equiv\Omega_{\frac{1}{2}\lVert\cdot\rVert^{2}}^{2}(\mathsf{X})=\max_{x,u\in\mathsf{X}}\lVert
x-u\rVert^{2}$, this gives us
$s^{k}\leq
s^{k-1}-\gamma_{k}\mathtt{e}^{k}+\frac{L_{f}\gamma^{2}_{k}}{2}\Omega^{2}.$
Applying Lemma 13.13 in [18] to this recursion, we deduce the following
iteration complexity result for GCG.
###### Theorem 5.3.
Consider algorithm GCG with one of the step size rules: standard (5.5), line
search (5.12), or adaptive (5.13). Then
$\displaystyle\Psi(x^{k})-\Psi_{\min}(\mathsf{X})\leq\frac{2\max\\{\Psi(x^{0})-\Psi_{\min}(\mathsf{X}),L_{f}\Omega^{2}\\}}{k}\quad\forall
k\geq 1.$
###### Proof.
We give a self-contained proof of this result for the adaptive step-size
policy (5.13).
If $\gamma_{k}=1$, the per-iteration progress is easily seen to be
$\displaystyle s^{k}$ $\displaystyle\leq
s^{k-1}-\mathtt{e}^{k}+\frac{L_{f}}{2}\lVert p^{k}-x^{k-1}\rVert^{2}\leq
s^{k-1}-\mathtt{e}^{k}+\frac{1}{2}\mathtt{e}^{k}=s^{k-1}-\frac{1}{2}\mathtt{e}^{k}\leq
s^{k-1}-\frac{1}{2}s^{k-1}=\frac{1}{2}s^{k-1},$
where the second inequality uses that $\gamma_{k}=1$ implies $L_{f}\lVert
p^{k}-x^{k-1}\rVert^{2}\leq\mathtt{e}^{k}$, and the last uses (5.15). For
$\gamma_{k}=\frac{r(x^{k-1})-r(p^{k})+\langle\nabla
f(x^{k-1}),x^{k-1}-p^{k}\rangle}{L_{f}\lVert
x^{k-1}-p^{k}\rVert^{2}}=\frac{\mathtt{e}^{k}}{L_{f}\lVert
p^{k}-x^{k-1}\rVert^{2}}$, a simple computation reveals
$\displaystyle s^{k}\leq s^{k-1}-\frac{(\mathtt{e}^{k})^{2}}{2L_{f}\lVert
p^{k}-x^{k-1}\rVert^{2}}\leq
s^{k-1}-\frac{(\mathtt{e}^{k})^{2}}{2L_{f}\Omega^{2}}\leq
s^{k-1}-\frac{(s^{k-1})^{2}}{2L_{f}\Omega^{2}}.$
Summarizing these two cases, we see
$s^{k}\leq\max\left\\{\frac{1}{2}s^{k-1},s^{k-1}-\frac{(s^{k-1})^{2}}{2L_{f}\Omega^{2}}\right\\}.$
Thus, the convergence splits into two phases, separated by the index
$K:=\log_{2}\left(\lfloor\frac{s^{0}}{\min\\{L_{f}\Omega^{2},s^{0}\\}}\rfloor\right)+1$.
If $k\leq K$ then $s^{k-1}\geq L_{f}\Omega^{2}$ and thus
$s^{k}\leq\frac{1}{2}s^{k-1}$, which implies
$\displaystyle s^{k}\leq 2^{-k}s^{0},\;k\in\\{0,1,\ldots,K\\}.$
However, if $k>K$ then $s^{k-1}<\min\\{L_{f}\Omega^{2},s^{0}\\}$ and
$s^{k}\leq s^{k-1}-\frac{1}{2L_{f}\Omega^{2}}(s^{k-1})^{2}$, which by
induction (see for example [110, Lemma 5.1]) implies that
$\displaystyle
s^{k}\leq\frac{s^{K}}{1+\frac{s^{K}}{2L_{f}\Omega^{2}}(k-K)}\leq\frac{2L_{f}\Omega^{2}}{2+(k-K)}\leq\frac{\max\\{K,2\\}L_{f}\Omega^{2}}{k}\leq\frac{2\max\\{s^{0},L_{f}\Omega^{2}\\}}{k},\;k\geq
K+1,$
where the second inequality follows from
$s^{K}<\min\\{L_{f}\Omega^{2},s^{0}\\}$, the third inequality follows from
$\frac{a}{a+(k-K)}$ being a monotonic function in $a\geq 0$ for any $k\geq
K+1$, and the last inequality follows from
$K\leq\max\left\\{2,\frac{s^{0}}{L_{f}\Omega^{2}}\right\\}$. Combining these
two results, we have that
$\displaystyle s^{k}\leq\frac{2\max\\{s^{0},L_{f}\Omega^{2}\\}}{k}.$
$\blacksquare$
#### 5.2.2 Alternative assumptions and step-sizes
A key takeaway from the analysis of the generalized conditional gradient is
that one needs to have a bound on the quadratic term of the upper model
$t\mapsto Q(x,p(x),t,L_{f}):=\Psi(x)-t\mathtt{e}(x)+\frac{L_{f}t^{2}}{2}\lVert
p(x)-x\rVert^{2}.$
Such a bound was given to us essentially for free under the compactness
assumption on the domain $\mathsf{X}$ and the Lipschitz smoothness assumption
on the smooth part $f$. The resulting complexity constant is then determined
by $L_{f}\Omega^{2}$. Moreover, this constant enters the adaptive step-size
rule (5.13). However, such a constant may not be known, or may be expensive
to compute. Moreover, a global estimate of this constant is not actually
needed for obtaining an upper bound. To see this, we
proceed formally as follows. Consider an alternative quadratic function of the
form
$\displaystyle Q(x,p,t,M):=\Psi(x)-t\mathtt{e}(x)+\frac{t^{2}M}{2}q(p,x),$
where $q(p,x)$ is a positive function bounded by some constant $C$, and choose
$\gamma(x,M):=\min\\{1,\frac{\mathtt{e}(x)}{Mq(p(x),x)}\\}$, for
$p(x)=\mathcal{L}_{\mathsf{X},r}(\nabla f(x))$. Let $M>0$ be a constant such
that the point obtained by using this step-size is upper bounded by the
corresponding quadratic function, i.e.,
$\displaystyle\Psi\left((1-\gamma(x,M))x+\gamma(x,M)p(x)\right)\leq
Q(x,p(x),\gamma(x,M),M)<\Psi(x).$ (5.16)
Thus applying the update $x^{+}:=(1-\gamma(x,M))x+\gamma(x,M)p(x)$, we obtain
$\Psi(x^{+})-\Psi_{\min}(\mathsf{X})\leq\Psi(x)-\Psi_{\min}(\mathsf{X})-\frac{1}{2}\mathtt{e}(x)\leq\frac{1}{2}(\Psi(x)-\Psi_{\min}(\mathsf{X}))$
if $\gamma(x,M)=1$, and
$\displaystyle\Psi(x^{+})-\Psi_{\min}(\mathsf{X})$
$\displaystyle\leq\Psi(x)-\Psi_{\min}(\mathsf{X})-\frac{1}{2Mq(p(x),x)}\mathtt{e}(x)^{2}\leq\Psi(x)-\Psi_{\min}(\mathsf{X})-\frac{1}{2MC}(\Psi(x)-\Psi_{\min}(\mathsf{X}))^{2}$
if $\gamma(x,M)=\frac{\mathtt{e}(x)}{Mq(p(x),x)}$. If $(x^{k})_{k\geq 0}$ is
the trajectory defined in this specific way, we get the familiar recursion
$\displaystyle
s^{k}\leq\max\\{\frac{1}{2}s^{k-1},s^{k-1}-\frac{1}{2M_{k}C}(s^{k-1})^{2}\\}$
in terms of the approximation error
$s^{k}:=\Psi(x^{k})-\Psi_{\min}(\mathsf{X})$, and the local estimates
$(M_{k})_{k\geq 0}$. Thus, as long as we are able to bound $M_{k}$ from above
for all iterations of the algorithm, the same convergence as for GCG can be
achieved.
Based on this observation, and knowing that $M_{k}$ must be bounded for
Lipschitz smooth objective functions, we can try to determine $M_{k}$ via a
backtracking procedure, as suggested in [111]. By construction, the resulting
iterates $x^{k}$ will induce monotonically decreasing function values so that
the whole trajectory $x^{k}$ will be contained in the level set
$\\{x\in\mathsf{X}|\Psi(x)\leq\Psi(x^{0})\\}$. Hence, it is sufficient for
$Q(x^{k-1},p^{k},t,M)$ to be an upper bound on $\Psi(x_{t})$ for any point
$x_{t}=(1-t)x^{k-1}+tp^{k}$ such that $\Psi(x_{t})\leq\Psi(x^{0})$. Thus, the
Lipschitz continuity (or curvature) can be assumed only on the appropriate
level set and there is no need to insist on global Lipschitz smoothness on the
entire set $\mathsf{X}$. This insight enabled, for example, proving the
$O(1/k)$ convergence rate of CG with adaptive and exact step-size rules when
applied to self-concordant functions, which are not necessarily Lipschitz
smooth on the predefined set $\mathsf{X}$ [112, 113]. However, this
observation need not apply to the standard step size rule (5.5), since the
standard step-size choice does not guarantee that all the iterates remain in
the appropriate level set.
To conclude, we reiterate that the step-size choices analyzed here are the
most common, but there may be many more choices of step-size which provide
similar guarantees. For example, [114] suggests new step-size rules based on
an alternative analysis of the CG method that utilizes an updated duality gap.
[108] discusses recursive step-size rules, and in [115, 112] new step-size
rules are suggested based on additional assumptions on the problem structure.
### 5.3 Variants of CG
One of the main drawbacks of the CG method is that, in general, it comes with
worse complexity bounds than the BPGM for strongly convex functions. Indeed, it
was shown as early as in 1968 by Cannon and Cullum [116] (see also [117, 35])
that the rate of $O(\frac{1}{k})$ is in fact tight, even when the function $f$
is strongly convex. This slow convergence is due to the well-documented zig-
zagging effect between different extreme points in $\mathsf{X}$. In the smooth
case, where $r=0$, and the objective function $f$ and the feasible set
$\mathsf{X}$ are both strongly convex, only a rate of $O(\frac{1}{k^{2}})$ can
be shown [118], whereas [108] showed an accelerated $O(\frac{1}{k^{2}})$ rate
of convergence for GCG with strongly convex $r$ ($\mu>0$). Linear convergence
of the CG method can only be proved under additional assumptions regarding the
problem structure or location of the optimal solution (see e.g. [100, 110,
119, 120, 121]).
Departing from these somewhat negative results, variants of the classical CG
were suggested in order to obtain the desired linear convergence in the case
of strongly convex function $f$. We will discuss four of these variants: Away-
step CG, Fully-corrective CG, CG based on a local linear optimization oracle
(LLOO), and CG with sliding.
#### 5.3.1 Away-step CG
The away-step variation of CG (AW-CG), first suggested by Wolfe [122], treats
the case where $\mathsf{X}$ is a polyhedron. It requires two calls of the LO
at each iteration. The first call generates
$p^{k}=\mathcal{L}_{\mathsf{X}}(\nabla f(x^{k}))$, defined in the original CG
algorithm, while the second call generates an additional vector
$u^{k}=\mathcal{L}_{\mathsf{X}}(-\nabla f(x^{k}))$. The two vectors $p^{k}$
and $u^{k}$ define the _forward direction_ $d^{k}_{FW}=p^{k}-x^{k-1}$ and the
_away direction_ $d^{k}_{A}=x^{k-1}-u^{k}$, respectively. By construction,
both of these directions are descent directions. The direction effectively
chosen at iteration $k$ is obtained by
$\displaystyle
d^{k}=\operatorname*{argmax}_{d\in\\{d^{k}_{FW},d^{k}_{A}\\}}\langle-\nabla{f}(x^{k}),d\rangle,$
with a corresponding updating step
$\displaystyle x^{k}=x^{k-1}+\gamma_{k}d^{k}.$
Here, the choice of the step-size $\gamma_{k}$ will also depend on the direction
chosen. The first analysis of this algorithm by Guélat and Marcotte [119]
assumes that the step-size is chosen using exact line search over
$\gamma_{k}\in[0,\gamma_{\max}]$, where $\gamma_{\max}:=\max\\{t\geq
0:x^{k-1}+td^{k}\in\mathsf{X}\\}$. Under this step-size choice, they prove
linear convergence of AW-CG for strongly convex $f$. However, this rate estimate
depends on the distance between the optimal solution and the boundary of the set
$T\subset\mathsf{X}$, which is the minimal face of $\mathsf{X}$ containing the
optimal solution. This result was later extended in [123], with a slight
variation on the original algorithm. In this variation, the set $\mathsf{X}$
is represented as the convex hull of a finite set of atoms $\mathcal{A}$ (not
necessarily containing only its vertices), and a representation of the current
iterate as a convex combination of these atoms is maintained throughout the
algorithm, i.e., $x^{k}=\sum_{S^{k}}\lambda^{k}_{a}a$ where
$S^{k}=\\{a\in\mathcal{A}:\lambda^{k}_{a}>0\\}$ is defined as the set of
active atoms. Thus, the AW-CG produces $p^{k}\in\mathcal{A}$ and $u^{k}\in
S^{k}$, and the away step maximal step size is respecified as
$\gamma_{\max}=\frac{\lambda_{u^{k}}}{1-\lambda_{u^{k}}}$. This implies that
using the maximal away-step step-size will not necessarily result in a point
on the boundary of $\mathsf{X}$. Thus, when $f$ is strongly convex, Lacoste-Julien and
Jaggi [123] show linear convergence of AW-CG with a rate which only
depends on the geometry of set $\mathsf{X}$, which is captured by the
_pyramidal width_ parameter. The pairwise variant of AW-CG, which is also
presented and analyzed in [123], takes $d^{k}=p^{k}-u^{k}$ and
$\gamma_{\max}=\lambda_{u^{k}}$, and admits a similar analysis.
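As a concrete illustration of the atom representation, the following Python sketch (our own, not code from [122, 123]) implements AW-CG on the unit simplex, where the atoms are the standard basis vectors and the weights $\lambda^{k}_{a}$ are simply the coordinates of $x^{k}$; exact line search is assumed available, which holds for quadratic objectives.

```python
import numpy as np

def away_step_cg(grad, hess_quad, x0, iters=1000, tol=1e-12):
    """Away-step CG (AW-CG) on the unit simplex.  hess_quad(d) must return
    d^T H d for the (quadratic) objective, giving a closed-form line search."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        fw = np.argmin(g)                           # forward (Frank-Wolfe) atom
        active = np.where(x > 1e-12)[0]             # active atoms S^k
        aw = active[np.argmax(g[active])]           # away atom
        d_fw = -x.copy(); d_fw[fw] += 1.0           # p^k - x^{k-1}
        d_aw = x.copy();  d_aw[aw] -= 1.0           # x^{k-1} - u^k
        if -g @ d_fw < tol:                         # FW gap small: done
            break
        if -g @ d_fw >= -g @ d_aw:
            d, t_max = d_fw, 1.0
        else:
            lam = x[aw]
            d, t_max = d_aw, lam / (1.0 - lam)      # max step keeping x >= 0
        t = min(t_max, (-g @ d) / max(hess_quad(d), 1e-16))  # exact line search
        x = x + t * d
    return x
```

For strongly convex quadratics this variant converges linearly, in line with the pyramidal-width analysis of [123].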
In [124], Beck and Shtern extend the linear convergence results of AW-CG to
functions of the form $f(x)=g({\mathbf{A}}x)+\langle b,x\rangle$ where $g$ is
a strongly convex function. The linear rate depends on a parameter based on
the Hoffman constant, which captures both the geometry of $\mathsf{X}$ and the
matrix ${\mathbf{A}}$. It is also worth mentioning a stream of work
which shows linear convergence of AW-CG when the strong convexity assumption
is replaced by the assumption that sufficient second-order optimality
conditions, known as Robinson conditions [125], are satisfied (see for example
[126]).
#### 5.3.2 Fully-corrective CG
The Fully-corrective variant of CG (FC-CG) also involves polyhedral
$\mathsf{X}$, and aims to reduce the number of calls to the linear oracle, by
replacing them with a more accurate minimization over a convex-hull of some
subset $\mathcal{A}^{k}\subseteq\mathcal{A}$. The heart of the method is a
correction routine, which updates the correction atoms $\mathcal{A}^{k}$ and
iterate $x^{k}$, and satisfies the following:
$\displaystyle S^{k}$ $\displaystyle\subseteq\mathcal{A}^{k}$ $\displaystyle
f(x^{k})$ $\displaystyle\leq\min_{t\in[0,1]}f((1-t)x^{k-1}+tp^{k})$
$\displaystyle\epsilon$ $\displaystyle\geq\max_{s\in S^{k}}\langle\nabla
f(x^{k}),s-x^{k}\rangle$
where $p^{k}=\mathcal{L}_{\mathsf{X}}(\nabla f(x^{k-1}))$, and $\epsilon$ is a
given accuracy parameter. The FC-CG was known by various names depending on
the updating scheme of $\mathcal{A}^{k}$ and $x^{k}$ [127, 128], and was
unified and analyzed to show linear convergence in [123]. The convergence
analysis of FC-CG is similar to that of AW-CG, and is based on the correction
routine guaranteeing that the forward step is larger than the away-step
computed in the previous iteration.
In order to apply FC-CG one must choose a correction routine, and the linear
convergence analysis does not take into account the computational cost of this
routine. One choice of a correction routine is to apply AW-CG on the subset
$\mathcal{A}^{k}=S^{k-1}\cup\\{p^{k}\\}$ until the conditions are satisfied.
This correction routine is worthwhile only if efficient linear oracles
$\mathcal{L}_{\mathcal{A}^{k}}$ can be constructed for all $k$ such that their
low computational cost balances the routine’s iteration complexity.
#### 5.3.3 Enhanced LO based CG
A variant of CG which is based on an enhanced linear minimization oracle, was
suggested by Garber and Hazan [129]. In this variant, the linear oracle
$\mathcal{L}_{\mathsf{X}}(c)$ is replaced by a _local oracle_
$\mathcal{L}_{\mathsf{X},\rho}(c,x,\delta)$ with some constant $\rho\geq 1$,
which takes an additional radius input $\delta$ and returns a point
$p\in\mathsf{X}$ satisfying
$\displaystyle\lVert p-x\rVert$ $\displaystyle\leq\rho\delta$
$\displaystyle\langle p,c\rangle$
$\displaystyle\leq\min_{u\in\mathsf{X}:\lVert u-x\rVert\leq\delta}\langle
u,c\rangle.$
Thus, the only deviation from the CG algorithm is that $p^{k}$ is obtained by
applying $\mathcal{L}_{\mathsf{X},\rho}(\nabla f(x^{k}),x^{k},\delta_{k})$ for a
suitably chosen sequence $(\delta_{k})_{k}$. The linear convergence for the
case where the smooth part $f$ is strongly convex, is obtained by a specific
update of $\delta_{k}$ at each step of the algorithm. This update depends on
the Lipschitz constant $L_{f}$, the strong convexity constant of $f$, and the
parameter $\rho$. Moreover, despite the fact that LLOO-CG can theoretically be
applied to any set $\mathsf{X}$, constructing a general LLOO is challenging.
In [129], the authors suggest an LLOO with $\rho=\sqrt{n}$ when the set
$\mathsf{X}$ is the unit simplex, and generalize it for convex polytopes with
$\rho=\sqrt{n}\tilde{\rho}$, where $\tilde{\rho}$ depends on some geometric
properties of the polytope which may generally not be tractably computed. Thus,
while the strong convexity and geometric properties of the problem are only
used for the analysis of the AW-CG and FC-CG, the associated parameters are
explicitly used in the execution of LLOO-CG. The difficulty of accurately
estimating the strong convexity and the geometric parameters renders the LLOO-
CG less applicable in practice.
#### 5.3.4 CG with gradient sliding
Each iteration of CG requires one call to the linear minimization oracle and
one gradient evaluation. Coupled with our knowledge about the iteration
complexity of CG, this fact implies that CG requires $O(1/\varepsilon)$
gradient evaluations of the objective function. This is suboptimal, when
compared with the $O(1/\sqrt{\varepsilon})$ gradient evaluations for smooth
convex optimization, as we will see in Section 6. While it is known that, for
methods based on the linear minimization oracle, the order estimate
$O(1/\varepsilon)$ for the number of calls of the LO is unimprovable, in this
section we review a method based on the linear minimization oracle which can
skip the computation of gradients from time to time. This improves the complexity of LO-based
methods and leads us to the _conditional gradient sliding_ (S-CG) algorithm
introduced by Lan and Zhou [130]. S-CG is a numerical optimization method
which runs in epochs and bears some similarity to the accelerated methods
to be thoroughly surveyed in Section 6. S-CG has been described in
the context of the smooth convex programming problem for which $r=0$.
The conditional gradient sliding methods (S-CG)
Input: a linear oracle $\mathcal{L}_{\mathsf{X}}$, a starting point
$x^{0}\in\mathsf{X}$, and parameter sequences
$(\beta_{k})_{k},(\gamma_{k})_{k},(\eta_{k})_{k}$ such that
$\displaystyle\gamma_{1}=1,\;L_{f}\gamma_{k}\leq\beta_{k},$
$\displaystyle\frac{\beta_{k}\gamma_{k}}{\Gamma_{k}}\geq\frac{\beta_{k-1}\gamma_{k-1}}{\Gamma_{k-1}},$
where $\Gamma_{k}=\left\\{\begin{array}[]{ll}1&\text{if }k=1,\\\
\Gamma_{k-1}(1-\gamma_{k})&\text{if }k\geq 2.\end{array}\right.$ (5.17)
General step: For $k=1,2,\ldots$
Compute $\displaystyle z^{k}$
$\displaystyle=(1-\gamma_{k})y^{k-1}+\gamma_{k}x^{k-1},$ $\displaystyle x^{k}$
$\displaystyle=\text{CndG}(\nabla f(z^{k}),x^{k-1},\beta_{k},\eta_{k}),$
$\displaystyle y^{k}$ $\displaystyle=(1-\gamma_{k})y^{k-1}+\gamma_{k}x^{k}.$
Similarly to accelerated methods, S-CG keeps track of three sequentially
updated sequences. The update of the sequence $(x^{k})$ is stated in terms of
a procedure CndG, which describes an inner loop of conditional gradient steps.
This subroutine aims at approximately solving the proximal step
$\displaystyle\min_{x\in X}f(z^{k})+\langle\nabla
f(z^{k}),x-z^{k}\rangle+\frac{\beta_{k}}{2}\lVert x-x^{k-1}\rVert^{2}$
up to an accuracy of $\eta_{k}$. As will become clear later, the S-CG can thus
be thought of as an approximate version of the accelerated scheme presented in
Section 6.1.
The procedure $\text{CndG}(g,u,\beta,\eta)$
Input: $u_{1}=u,t=1$.
Output: point $u^{+}=\text{CndG}(g,u,\beta,\eta).$
General step: Let $v_{t}=\operatorname*{argmax}_{x\in\mathsf{X}}\langle
g+\beta(u_{t}-u),u_{t}-x\rangle$
If $V_{g,u,\beta}(u_{t})=\langle g+\beta(u_{t}-u),u_{t}-v_{t}\rangle\leq\eta$,
set $u^{+}=u_{t}$;
else, set $u_{t+1}=(1-\alpha_{t})u_{t}+\alpha_{t}v_{t}$, where
$\alpha_{t}=\min\left\\{1,\frac{\langle\beta(u-u_{t})-g,v_{t}-u_{t}\rangle}{\beta\lVert
v_{t}-u_{t}\rVert^{2}}\right\\}.$ Set $t\leftarrow t+1$. Repeat General step.
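To make the sliding scheme concrete, here is a minimal Python sketch (our own, not code from [130]) of the CndG procedure and the S-CG outer loop, assuming $\mathsf{X}$ is the unit simplex with the Euclidean norm; the parameter choices $\beta_{k}=\frac{3L_{f}}{k+1}$, $\gamma_{k}=\frac{3}{k+2}$, $\eta_{k}=\frac{L_{f}\Omega^{2}}{k(k+1)}$ are those of Theorem 5.4 below.

```python
import numpy as np

def lo_simplex(c):
    """Linear oracle over the unit simplex: the vertex minimizing <c, .>."""
    p = np.zeros_like(c); p[np.argmin(c)] = 1.0
    return p

def cndg(g, u, beta, eta, max_t=10_000):
    """The CndG subroutine: approximately solves the proximal step
    min_x <g, x> + (beta/2)||x - u||^2 over the simplex, up to accuracy eta."""
    ut = u.copy()
    for _ in range(max_t):
        c = g + beta * (ut - u)             # gradient of the prox objective
        vt = lo_simplex(c)
        V = c @ (ut - vt)                   # gap V_{g,u,beta}(u_t)
        if V <= eta:
            break
        alpha = min(1.0, V / (beta * np.sum((vt - ut) ** 2)))
        ut = (1 - alpha) * ut + alpha * vt
    return ut

def scg(grad, L, x0, omega2, iters=100):
    """Conditional gradient sliding (S-CG) with the parameter choices of
    Theorem 5.4: beta_k = 3L/(k+1), gamma_k = 3/(k+2), eta_k = L*omega2/(k(k+1))."""
    x, y = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        beta, gamma = 3 * L / (k + 1), 3.0 / (k + 2)
        eta = L * omega2 / (k * (k + 1))
        z = (1 - gamma) * y + gamma * x
        x = cndg(grad(z), x, beta, eta)     # inner loop skips new gradients
        y = (1 - gamma) * y + gamma * x
    return y
```

Note that the inner CndG loop reuses the single gradient $\nabla f(z^{k})$, which is exactly how S-CG economizes on gradient evaluations.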
The main performance guarantee of the algorithm S-CG is summarized in the
following theorem:
###### Theorem 5.4.
For all $k\geq 1$ and $u\in\mathsf{X}$, we have
$f(y^{k})-f(u)\leq\frac{\beta_{k}\gamma_{k}\Omega^{2}}{2}+\Gamma_{k}\sum_{i=1}^{k}\frac{\eta_{i}\gamma_{i}}{\Gamma_{i}},$
(5.18)
where $\Omega\equiv\Omega_{\frac{1}{2}\lVert\cdot\rVert}(\mathsf{X})$. The
number of calls of the linear minimization oracle is bounded by
$\lceil\frac{6\beta_{k}\Omega^{2}}{\eta_{k}}\rceil$. In particular, if the
parameter sequences in S-CG are chosen as
$\beta_{k}=\frac{3L_{f}}{k+1},\gamma_{k}=\frac{3}{k+2},\eta_{k}=\frac{L_{f}\Omega^{2}}{k(k+1)},$
then
$f(y^{k})-f(u)\leq\frac{15L_{f}\Omega^{2}}{2(k+1)(k+2)}.$
As a consequence, the total number of calls of the function gradients and the
LO oracle is bounded by
$O\left(\sqrt{\frac{L_{f}\Omega^{2}}{\varepsilon}}\right)$, and
$O(L_{f}\Omega^{2}/\varepsilon)$, respectively.
## 6 Accelerated Methods
In previous sections we focused on simple first-order methods with sublinear
convergence guarantees in the convex case, and linear convergence in the
strongly convex case. Towards the end of the discussion in Section 3, we
pointed out the possibility to accelerate simple iterative schemes via
suitably defined extrapolation steps. In this last section of the survey, we
are focusing on such _accelerated methods_. The idea of acceleration dates
back to the 1980s. The rationale for this research direction is the desire to
understand the computational boundaries of solving optimization problems. Of
particular interest has been the unconstrained smooth, and strongly convex
optimization problem. This would be covered by our generic model (P) by
setting $r=0,\mathsf{X}=\mathsf{V}=\mathbb{R}^{n}$ and $f$ strongly convex
with parameter $\mu_{f}>0$ and $L_{f}$-smooth. The standard approach to
quantify the computational hardness of optimization problems is through the
oracle model. Upon receiving a query point $x$, the oracle reports the
corresponding function value $f(x)$, and in first-order models, the function
gradient $\nabla f(x)$ as well. In their seminal work, Nemirovski and Yudin
[58] showed that for any first-order optimization algorithm, there exists an
$L_{f}$-smooth (with some $L_{f}>0$) and convex function
$f:\mathbb{R}^{n}\to\mathbb{R}$ such that the number of queries required to
obtain an $\varepsilon$-optimal solution $\hat{x}$ which satisfies
$f(\hat{x})<\min_{x}f(x)+\varepsilon,$
is at least of the order of
$\min\\{n,\sqrt{L_{f}/\mu_{f}}\\}\ln(1/\varepsilon)$ if $\mu_{f}>0$ and
$\min\\{n\ln(1/\varepsilon),\sqrt{L_{f}/\varepsilon}\\}$, if $\mu_{f}=0$. This
bound, obtained by information-theoretical arguments, turned out to be tight.
Nemirovski [131] proposed a method achieving the optimal rate $O(1/k^{2})$ via
a combination of standard gradient steps with the classical center of gravity
method, which required additional small-dimensional minimization, see also a
recent paper [132]. Nesterov [23] proposed an optimal method with explicit
step-sizes, which is now known as Nesterov’s accelerated gradient method.
Mainly driven by applications in imaging and machine learning, the idea of
acceleration turned out to be very productive in the last 20 years. During
this time span it has been extended to composite optimization [54, 133],
general proximal setups [67, 26], stochastic optimization problems [134, 135,
136, 137, 138, 139, 140], optimization with inexact oracle [141, 142, 138,
139, 143, 144, 145, 57], variance reduction methods [148, 149, 150, 151, 152,
153], alternating minimization methods [154, 155], random coordinate descent
[156, 157, 158, 159, 160, 161, 162, 163, 164, 154] and other randomized
methods such as randomized derivative-free methods [165, 164, 166, 167] and
randomized directional search [164, 168, 169], second-order methods [170] and
even high-order methods [171, 172, 173].
### 6.1 Accelerated Gradient Method
In this section we consider one of the multiple variants of an Accelerated
Gradient Method. This variant is close to the accelerated proximal method in
[174], which has been very influential to the field. Another very influential
version of the accelerated method, especially in applications, is the FISTA
algorithm [133], which is excellently described in [18]. The version we
present here is inspired by the Method of Similar Triangles [175, 26] and is
obtained via the change of the Dual Averaging step (see Section 3.4) to the
Bregman Proximal Gradient step. In our presentation of the accelerated method,
we consider a particular choice of the the control sequences, i.e., numerical
sequences $\alpha_{k}$, $A_{k}$ from [203, 202]. A more general way of
constructing such sequences can be found in [35], see also the constants used
in the S-CG method described at the end of Section 5. Moreover, the version we
present here, is very flexible and allows one to obtain accelerated methods
for many settings. As a particular example, below in Section 6.3, we show how
a slight modification of this method allows one to obtain universal
accelerated gradient method.
Our aim is to solve the composite model problem (P) within a general Bregman
proximal setup, formulated in Section 3.2. Let $\mathsf{X}\subseteq\mathsf{V}$
be a closed convex set in a finite-dimensional real vector space $\mathsf{V}$
with primal-dual pairing $\langle\cdot,\cdot\rangle$ and general norm
$\lVert\cdot\rVert$. We are given a DGF $h\in\mathcal{H}_{1}(\mathsf{X})$. The
scaling of the strong convexity parameter to the value 1 actually is without
loss of generality, modulo a constant rescaling of the employed DGF. Recall
the Bregman divergence $D_{h}(u,x)=h(u)-h(x)-\langle\nabla
h(x),u-x\rangle\geq\frac{1}{2}\lVert u-x\rVert^{2}$ for all
$x\in\mathsf{X}^{\circ},u\in\mathsf{X}$.
The Accelerated Bregman Proximal Gradient Method (A-BPGM)
Input: pick $x^{0}=u^{0}=y^{0}\in\operatorname{dom}(r)\cap\mathsf{X}^{\circ}$,
set $A_{0}=0$
General step: For $k=0,1,\ldots$ do:
Find $\alpha_{k+1}$ from quadratic equation
$A_{k}+\alpha_{k+1}=L_{f}\alpha_{k+1}^{2}$. Set $A_{k+1}=A_{k}+\alpha_{k+1}$.
Set $y^{k+1}=\frac{\alpha_{k+1}}{A_{k+1}}u^{k}+\frac{A_{k}}{A_{k+1}}x^{k}$.
Set $\displaystyle u^{k+1}$
$\displaystyle=\mathcal{P}^{h}_{\alpha_{k+1}r}(u^{k},\alpha_{k+1}\nabla
f(y^{k+1}))$
$\displaystyle=\operatorname*{argmin}_{x\in\mathsf{X}}\left\\{\alpha_{k+1}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),x-y^{k+1}\rangle+r(x)\right)+D_{h}(x,u^{k})\right\\}.$ Set
$x^{k+1}=\frac{\alpha_{k+1}}{A_{k+1}}u^{k+1}+\frac{A_{k}}{A_{k+1}}x^{k}$.
Figure 1: Illustration of the three sequences of the A-BPGM in the unconstrained case
$\mathsf{X}=\mathbb{R}^{n}$, $r=0$, $h=\frac{1}{2}\lVert x\rVert_{2}^{2}$. In
this simple case it is easy to see that $u^{k+1}=u^{k}-\alpha_{k+1}\nabla
f(y^{k+1})$, so the sequence $u^{k}$ accumulates the previous gradients while
helping to keep momentum. Also, by the similarity of the triangles,
$x^{k+1}=y^{k+1}-\alpha_{k+1}\nabla
f(y^{k+1})\cdot\frac{\alpha_{k+1}}{A_{k+1}}=y^{k+1}-\frac{1}{L_{f}}\nabla
f(y^{k+1})$, i.e. $x^{k}$ is the sequence obtained by gradient descent steps.
Finally, the sequence $y^{k}$ is a convex combination of the momentum sequence
$u^{k}$ and the gradient sequence $x^{k}$. The illustration is inspired by
personal communication with Yu. Nesterov on the Method of Similar Triangles [175, 26].
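For concreteness, the following Python sketch (our own) implements A-BPGM in the unconstrained Euclidean case of Figure 1 ($\mathsf{X}=\mathbb{R}^{n}$, $r=0$, $h=\frac{1}{2}\lVert x\rVert_{2}^{2}$), where the prox-mapping reduces to a gradient step on $u^{k}$.

```python
import numpy as np

def a_bpgm(grad, L, x0, iters=100):
    """A-BPGM in the Euclidean unconstrained case (X = R^n, r = 0,
    h = 0.5||x||^2): the prox step becomes u^{k+1} = u^k - alpha * grad f(y^{k+1})."""
    x, u, A = x0.copy(), x0.copy(), 0.0
    for _ in range(iters):
        # positive root of the quadratic equation A_k + alpha = L * alpha^2
        alpha = (1 + np.sqrt(1 + 4 * L * A)) / (2 * L)
        A_next = A + alpha
        y = (alpha * u + A * x) / A_next    # momentum mixing
        u = u - alpha * grad(y)             # Euclidean prox step
        x = (alpha * u + A * x) / A_next    # similar-triangles update
        A = A_next
    return x
```

In this Euclidean setting the bound (6.12) reads $\Psi(x^{N})-\Psi_{\min}(\mathsf{X})\leq 2L_{f}\lVert x^{0}-x^{\ast}\rVert^{2}/(N+1)^{2}$.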
We start the analysis applying the descent Lemma property (3.20) which holds
for any two points due to $L_{f}$-smoothness:
$\displaystyle\Psi(x^{k+1})=f(x^{k+1})+r(x^{k+1})\leq f(y^{k+1})+\langle\nabla
f(y^{k+1}),x^{k+1}-y^{k+1}\rangle+\frac{L_{f}}{2}\lVert
x^{k+1}-y^{k+1}\rVert^{2}+r(x^{k+1}).$ (6.1)
Let us next consider the squared norm term. Using the definition of
$x^{k+1},y^{k+1}$ and the quadratic equation for $\alpha_{k+1}$, as well as
strong convexity of the Bregman divergence, i.e. (3.8), we obtain
$\displaystyle\frac{L_{f}}{2}\lVert x^{k+1}-y^{k+1}\rVert^{2}$
$\displaystyle=\frac{L_{f}}{2}\lVert\frac{\alpha_{k+1}}{A_{k+1}}u^{k+1}+\frac{A_{k}}{A_{k+1}}x^{k}-\left(\frac{\alpha_{k+1}}{A_{k+1}}u^{k}+\frac{A_{k}}{A_{k+1}}x^{k}\right)\rVert^{2}$
$\displaystyle=\frac{L_{f}\alpha_{k+1}^{2}}{2A_{k+1}^{2}}\lVert
u^{k+1}-u^{k}\rVert^{2}=\frac{1}{2A_{k+1}}\lVert
u^{k+1}-u^{k}\rVert^{2}\leq\frac{1}{A_{k+1}}D_{h}(u^{k+1},u^{k}).$ (6.2)
Next, we consider the remaining terms in the r.h.s. of (6.1). Substituting
$x^{k+1}$ and using $A_{k+1}=A_{k}+\alpha_{k+1}$, we obtain
$\displaystyle f(y^{k+1})$ $\displaystyle+\langle\nabla
f(y^{k+1}),x^{k+1}-y^{k+1}\rangle+r(x^{k+1})$
$\displaystyle=\left(\frac{\alpha_{k+1}}{A_{k+1}}+\frac{A_{k}}{A_{k+1}}\right)f(y^{k+1})+\langle\nabla
f(y^{k+1}),\frac{\alpha_{k+1}}{A_{k+1}}u^{k+1}+\frac{A_{k}}{A_{k+1}}x^{k}-\left(\frac{\alpha_{k+1}}{A_{k+1}}+\frac{A_{k}}{A_{k+1}}\right)y^{k+1}\rangle$
(6.3)
$\displaystyle+r\left(\frac{\alpha_{k+1}}{A_{k+1}}u^{k+1}+\frac{A_{k}}{A_{k+1}}x^{k}\right)$
$\displaystyle\leq\frac{A_{k}}{A_{k+1}}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),x^{k}-y^{k+1}\rangle+r(x^{k})\right)$ (6.4)
$\displaystyle+\frac{\alpha_{k+1}}{A_{k+1}}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),u^{k+1}-y^{k+1}\rangle+r(u^{k+1})\right)$
$\displaystyle\leq\frac{A_{k}}{A_{k+1}}\left(f(x^{k})+r(x^{k})\right)+\frac{\alpha_{k+1}}{A_{k+1}}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),u^{k+1}-y^{k+1}\rangle+r(u^{k+1})\right)$
$\displaystyle=\frac{A_{k}}{A_{k+1}}\Psi(x^{k})+\frac{\alpha_{k+1}}{A_{k+1}}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),u^{k+1}-y^{k+1}\rangle+r(u^{k+1})\right),$ (6.5)
where in the first inequality we used the convexity of $r$, and in the second
inequality we used the convexity of $f$. Now we plug (6.2) and (6.5) into
(6.1) to obtain
$\displaystyle\Psi(x^{k+1})$
$\displaystyle\leq\frac{A_{k}}{A_{k+1}}\Psi(x^{k})+\frac{\alpha_{k+1}}{A_{k+1}}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),u^{k+1}-y^{k+1}\rangle+r(u^{k+1})\right)+\frac{1}{A_{k+1}}D_{h}(u^{k+1},u^{k})$
$\displaystyle=\frac{A_{k}}{A_{k+1}}\Psi(x^{k})+\frac{1}{A_{k+1}}\left[\alpha_{k+1}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),u^{k+1}-y^{k+1}\rangle+r(u^{k+1})\right)+D_{h}(u^{k+1},u^{k})\right].$
(6.6)
Given the definition of $u^{k+1}$ as a Prox-Mapping, we can apply (3.17) by
substituting $x^{+}=u^{k+1}$, $x=u^{k}$, $\gamma=\alpha_{k+1}$. In this way,
we obtain, for any $u\in\mathsf{X}$,
$\displaystyle\Psi(x^{k+1})$
$\displaystyle\leq\frac{A_{k}}{A_{k+1}}\Psi(x^{k})+\frac{1}{A_{k+1}}\left(\alpha_{k+1}(f(y^{k+1})+\langle\nabla
f(y^{k+1}),u^{k+1}-y^{k+1}\rangle+r(u^{k+1}))+D_{h}(u^{k+1},u^{k})\right)$
$\displaystyle\stackrel{{\scriptstyle(3.17)}}{{\leq}}\frac{A_{k}}{A_{k+1}}\Psi(x^{k})+\frac{1}{A_{k+1}}\left(\alpha_{k+1}(f(y^{k+1})+\langle\nabla
f(y^{k+1}),u-y^{k+1}\rangle+r(u))+D_{h}(u,u^{k})-D_{h}(u,u^{k+1})\right)$
$\displaystyle\leq\frac{A_{k}}{A_{k+1}}\Psi(x^{k})+\frac{\alpha_{k+1}}{A_{k+1}}(f(u)+r(u))+\frac{1}{A_{k+1}}D_{h}(u,u^{k})-\frac{1}{A_{k+1}}D_{h}(u,u^{k+1})$
$\displaystyle=\frac{A_{k}}{A_{k+1}}\Psi(x^{k})+\frac{\alpha_{k+1}}{A_{k+1}}\Psi(u)+\frac{1}{A_{k+1}}D_{h}(u,u^{k})-\frac{1}{A_{k+1}}D_{h}(u,u^{k+1}),$
(6.7)
where we also used convexity of $f$. Multiplying both sides of the last
inequality by $A_{k+1}$, summing these inequalities from $k=0$ to $k=N-1$, and
using that $A_{N}-A_{0}=\sum_{k=0}^{N-1}\alpha_{k+1}$, we obtain
$\displaystyle A_{N}\Psi(x^{N})\leq
A_{0}\Psi(x^{0})+(A_{N}-A_{0})\Psi(u)+D_{h}(u,u^{0})-D_{h}(u,u^{N}).$ (6.8)
Since $A_{0}=0$, we can choose
$u=x^{\ast}\in\operatorname*{argmin}\\{D_{h}(u,u^{0})|u\in\mathsf{X}^{\ast}\\}\subseteq\mathsf{X}^{\ast}$
and use $D_{h}(x^{\ast},u^{N})\geq 0$, so that, for all $N\geq 1$,
$\displaystyle\Psi(x^{N})-\Psi_{\min}(\mathsf{X})$
$\displaystyle\leq\frac{D_{h}(x^{\ast},u^{0})}{A_{N}},\quad
D_{h}(x^{\ast},u^{N})\leq D_{h}(x^{\ast},u^{0}).$ (6.9)
So, we see from the second inequality that the Bregman distance between the
iterates $u^{N}$ and the solution $x^{\ast}$ is non-increasing. Then, from the
inequality $D_{h}(x^{\ast},u^{N})\geq\frac{1}{2}\lVert
x^{\ast}-u^{N}\rVert^{2}$ it follows that $\|x^{\ast}-u^{N}\|$ is bounded for
any $N$, which leads to the existence of a subsequence converging to
$x^{\ast}$ by the continuity of $\Psi$. To obtain the convergence rate in
terms of the objective residual it remains to estimate the sequence $A_{N}$
from below.
We prove by induction that $A_{k}\geq\frac{(k+1)^{2}}{4L_{f}}$. For $k=1$ this
inequality holds with equality, since $A_{0}=0$ and, hence,
$A_{1}=\alpha_{1}=\frac{1}{L_{f}}$. Let us prove the induction step. From the
quadratic equation $A_{k}+\alpha_{k+1}=L_{f}\alpha_{k+1}^{2}$, we have
$\displaystyle\alpha_{k+1}=\frac{1}{2L_{f}}+\sqrt{\frac{1}{4L_{f}^{2}}+\frac{A_{k}}{L_{f}}}\geq\frac{1}{2L_{f}}+\sqrt{\frac{A_{k}}{L_{f}}}\geq\frac{1}{2L_{f}}+\frac{k+1}{2L_{f}}=\frac{k+2}{2L_{f}}.$
(6.10) $\displaystyle
A_{k+1}=A_{k}+\alpha_{k+1}\geq\frac{(k+1)^{2}}{4L_{f}}+\frac{k+2}{2L_{f}}=\frac{k^{2}+2k+1+2k+4}{4L_{f}}\geq\frac{(k+2)^{2}}{4L_{f}}.$
(6.11)
Thus, combining (6.11) with (6.9), we obtain that the A-BPGM has optimal
convergence rate:
$\displaystyle\Psi(x^{N})-\Psi_{\min}(\mathsf{X})$
$\displaystyle\leq\frac{4L_{f}D_{h}(x^{\ast},u^{0})}{(N+1)^{2}}.$ (6.12)
As mentioned above, the accelerated gradient method in the form of A-BPGM
can serve as a template meta-algorithm for many accelerated algorithms.
Examples of accelerated methods which have a closely related form include primal-dual
accelerated methods [174, 202, 207], random coordinate descent and other
randomized algorithms [158, 164, 154], methods for stochastic optimization
[135, 140], methods with inexact oracle [143] and inexact model of the
objective [144, 57]. Moreover, it was only by using this one-projection version that it was
possible to obtain accelerated gradient methods with an inexact model of the
objective [144], accelerated decentralized distributed algorithms for
stochastic convex optimization [146], and an accelerated method for stochastic
optimization with heavy-tailed noise [147]. The key to the last two results is
the proof that the sequence generated by the one-projection accelerated
gradient method is bounded with large probability, which, to our knowledge, is
not possible to prove for other types of accelerated methods applied to
stochastic optimization problems.
#### 6.1.1 Linear Convergence
Under additional assumptions, we can use the scheme A-BPGM to obtain a linear
convergence rate, or, in other words, logarithmic in the desired accuracy
complexity bound. One such possible assumption is that $\Psi(x)$ satisfies a
quadratic error bound condition for some $\mu>0$:
$\Psi(x)-\Psi_{\min}(\mathsf{X})\geq\frac{\mu}{2}\|x-x^{\ast}\|^{2}.$ (6.13)
This is a weaker assumption than the assumption that $\Psi(x)$ is
$\mu$-strongly convex with $\mu>0$. For a review of different additional
conditions which allow one to obtain a linear convergence rate, we refer the reader
to [176, 177]. The linear convergence rate can be obtained under the quadratic
error bound condition by a widely used restart technique, which dates back to
[23, 178], and was extended in the past 20 years to many settings including
problems with non-quadratic error bound condition [179, 180], stochastic
optimization problems [179, 137, 138, 139, 181], methods with inexact oracle
[138, 139], randomized methods [182, 183], conditional gradient [184, 185],
variational inequalities and saddle-point problems [186, 57], methods for
constrained optimization problems [181].
To apply the restart technique, we make several additional assumptions. First,
without loss of generality, we assume that $0\in\mathsf{X}$,
$0=\arg\min_{x\in\mathsf{X}}h(x)$ and $h(0)=0$. Second, we assume that we are
given a starting point $x^{0}\in\mathsf{X}$ and a number $R_{0}>0$ such that
$\|x^{0}-x^{\ast}\|^{2}\leq R_{0}^{2}$. Finally, we make the assumption that
$h$ is bounded on the unit ball [179] in the following sense. Assume that
$x^{\ast}$ is some fixed point and $x$ is such that $\|x-x^{\ast}\|^{2}\leq
R^{2}$, then
$h\Big{(}\frac{x-x^{\ast}}{R}\Big{)}\leq\frac{\Omega}{2},$ (6.14)
where $\Omega$ is some known number. For example, in the Euclidean setup
$\Omega=1$, and other examples are given in [179, Section 2.3], where
typically $\Omega=O(\ln n)$.
The Restarted Accelerated Bregman Proximal Gradient Method (R-A-BPGM)
Input: $z^{0}\in\operatorname{dom}(r)\cap\mathsf{X}^{\circ}$ such that
$\|z^{0}-x^{\ast}\|^{2}\leq R_{0}^{2}$, $\Omega,L_{f},\mu$.
General step: For $p=0,1,\ldots$ do:
Make $N=\left\lceil 2\sqrt{\frac{\Omega L_{f}}{\mu}}\right\rceil-1$ steps of
A-BPGM with starting point $x^{0}=z^{p}$ and proximal setup given by distance-
generating function $h_{p}(x)=R_{p}^{2}h\left(\frac{x-z^{p}}{R_{p}}\right)$,
where $R_{p}:=R_{p-1}/2=R_{0}\cdot 2^{-p}$.
Set $z^{p+1}=x^{N}$.
We next use the above assumptions to show the accelerated logarithmic
complexity of R-A-BPGM, i.e. that the number of Bregman proximal steps to find
a point $\hat{x}$ such that $\Psi(\hat{x})-\Psi_{\min}(\mathsf{X})\leq\varepsilon$ is
proportional to $\sqrt{L_{f}/\mu}\log_{2}(1/\varepsilon)$, instead of
$(L_{f}/\mu)\log_{2}(1/\varepsilon)$ for the BPGM under the error bound
condition. The idea of the proof is to show by induction that, for all $p\geq
0$, $\|z^{p}-x^{\ast}\|^{2}\leq R_{p}^{2}$. For $p=0$ this holds by the
assumption on $z^{0}$ and $R_{0}$. So, next we prove an induction step from
$p-1$ to $p$. Using the definition of $h_{p-1}$, assumptions about $h$, and
the inductive assumption, we have
$D_{h_{p-1}}(x^{\ast},z^{p-1})\leq
h_{p-1}(x^{\ast})=R_{p-1}^{2}h\Big{(}\frac{x^{\ast}-z^{p-1}}{R_{p-1}}\Big{)}\stackrel{{\scriptstyle(6.14)}}{{\leq}}\frac{\Omega
R_{p-1}^{2}}{2}.$ (6.15)
Thus, applying the error bound condition (6.13), the bound (6.12) and our
choice of the number of steps $N$, we obtain
$\displaystyle\frac{\mu}{2}\|z^{p}-x^{\ast}\|^{2}$
$\displaystyle\stackrel{{\scriptstyle(6.13)}}{{\leq}}\Psi(z^{p})-\Psi_{\min}(\mathsf{X})=\Psi(x^{N})-\Psi_{\min}(\mathsf{X})\stackrel{{\scriptstyle(6.12)}}{{\leq}}\frac{L_{f}D_{h_{p-1}}(x^{\ast},z^{p-1})}{(N+1)^{2}}\stackrel{{\scriptstyle(6.15)}}{{\leq}}\frac{L_{f}\Omega
R_{p-1}^{2}}{2(N+1)^{2}}$ $\displaystyle\leq\frac{\mu
R_{p-1}^{2}}{8}=\frac{\mu R_{p}^{2}}{2}.$
So, we obtain that $\|z^{p}-x^{\ast}\|\leq R_{p}=R_{0}\cdot 2^{-p}$ and
$\Psi(z^{p})-\Psi_{\min}(\mathsf{X})\leq\frac{\mu R_{0}^{2}\cdot 2^{-2p}}{2}$.
To estimate the total number of basic steps of A-BPGM to achieve
$\Psi(z^{p})-\Psi_{\min}(\mathsf{X})\leq\varepsilon$, we need to multiply the
sufficient number of restarts $\hat{p}=\left\lceil\frac{1}{2}\log_{2}\frac{\mu
R_{0}^{2}}{2\varepsilon}\right\rceil$ by the number of A-BPGM steps $N$ in
each restart. This leads to the complexity estimate $O\left(\sqrt{\frac{\Omega
L_{f}}{\mu}}\log_{2}\frac{\mu R_{0}^{2}}{\varepsilon}\right)$ which is optimal
[58, 26] for first-order methods applied to smooth strongly convex
optimization problems.
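A minimal Euclidean sketch of the restart idea (our own illustration): with $h=\frac{1}{2}\lVert\cdot\rVert_{2}^{2}$ and $\Omega=1$, the recentred distance-generating functions $h_{p}$ leave the inner method unchanged, so each restart simply reruns the accelerated method from the last iterate. We take $N=\lceil 4\sqrt{L_{f}/\mu}\rceil$ inner steps per restart, a slightly more conservative constant than above, chosen so that the halving argument goes through with the Euclidean form of (6.12).

```python
import numpy as np

def restarted_agm(grad, L, mu, z0, restarts=15):
    """Sketch of R-A-BPGM in the Euclidean setup: run N accelerated steps,
    then restart from the output; each restart halves the distance to x*."""
    def agm(x0, iters):
        # inner accelerated (similar-triangles) method, Euclidean case
        x, u, A = x0.copy(), x0.copy(), 0.0
        for _ in range(iters):
            alpha = (1 + np.sqrt(1 + 4 * L * A)) / (2 * L)  # A + alpha = L*alpha^2
            A_next = A + alpha
            y = (alpha * u + A * x) / A_next
            u = u - alpha * grad(y)
            x = (alpha * u + A * x) / A_next
            A = A_next
        return x

    N = int(np.ceil(4 * np.sqrt(L / mu)))   # conservative Euclidean constant
    z = z0.copy()
    for _ in range(restarts):
        z = agm(z, N)
    return z
```

After $\hat{p}$ restarts the objective residual is at most $\frac{\mu R_{0}^{2}}{2}\cdot 4^{-\hat{p}}$, matching the logarithmic complexity discussed above.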
A possible drawback of the restart scheme is that one has to know an estimate
$R_{0}$ of $\|z^{0}-x^{\ast}\|$. It is possible to avoid this by directly
incorporating the parameter $\mu$ into the steps of A-BPGM, see e.g. [187, 26,
35, 57]. Yet, in this case, the stronger assumption that $\Psi(x)$ is strongly
convex or relatively strongly convex [55] is used. The second drawback of both
approaches, the restart technique and the direct incorporation of $\mu$ into the
steps, is that they require knowledge of the parameter $\mu$. This is
in contrast to the non-accelerated BPGM, which, using the same step-size as in the
non-strongly convex case, automatically achieves a linear convergence rate and
complexity $O\left({\frac{L_{f}}{\mu}}\log_{2}\frac{\mu
R_{0}^{2}}{\varepsilon}\right)$, see e.g. [56, 57]. Several recipes for
restarting accelerated methods with only rough estimates of the parameter $\mu$
are proposed in [183].
### 6.2 Smooth minimization of non-smooth functions
An important observation made during the last 20 years of development of
first-order methods for convex programming is that there is a large gap
between the optimal convergence rate for black-box non-smooth optimization
problems, i.e. $O(1/\sqrt{N})$, and the optimal convergence rate for black-box
smooth optimization problems, i.e. $O(1/N^{2})$. To motivate what follows, let
us make a thought experiment. Assume that we minimize a smooth function by $N$
steps of A-BPGM, i.e. solve problem (P) with $r=0$. Then in each iteration we
observe the first-order information $(f(y^{k+1}),\nabla f(y^{k+1}))$ and can
construct a non-smooth piecewise linear approximation of $f$ as
$g(x)=\max_{k=1,...,N}\\{f(y^{k+1})+\langle\nabla
f(y^{k+1}),x-y^{k+1}\rangle\\}$. If we now make $N$ steps of A-BPGM with the
same starting point to minimize $g(x)$, choosing appropriate subgradients of
$g(\cdot)$, the steps will be exactly the same as when we minimized $f(x)$, and
we will minimize the non-smooth function $g$ at the rate $1/N^{2}$, much faster
than the lower bound $1/\sqrt{N}$. This leads to the idea of trying to find a
sufficiently wide class of non-smooth functions which can be efficiently
minimized by A-BPGM.
To do this, one needs to look inside the black box and use the structure of the
non-smooth problem to obtain faster convergence rates. The result is known as
_Nesterov’s smoothing technique_ [67], a powerful tool which we are about to
describe.
Consider the model problem (P), with the added assumption that the non-smooth
part admits a Fenchel representation of the form
$r(x)=\max_{w\in\mathsf{W}}\\{\langle{\mathbf{A}}x,w\rangle-\kappa(w)\\}.$
(6.16)
Here, $\mathsf{W}\subseteq\mathsf{E}$ is a compact convex subset of a finite-
dimensional real vector space $\mathsf{E}$, and
$\kappa:\mathsf{W}\to\mathbb{R}$ is a continuous convex function on
$\mathsf{W}$. ${\mathbf{A}}$ is a linear operator from $\mathsf{V}$ to
$\mathsf{E}^{\ast}$. This additional structure of the problem gives rise to a
min-max formulation of (P), given by
$\min_{x\in\mathsf{X}}\max_{w\in\mathsf{W}}\\{f(x)+\langle{\mathbf{A}}x,w\rangle-\kappa(w)\\}.$
(6.17)
The main idea of Nesterov is based on the observation that the function $r$
can be well approximated by a class of smooth convex functions, defined as
follows. Let $h_{w}\in\mathcal{H}_{1}(\mathsf{W})$ with the nonrestrictive
assumption that $\min_{w\in\mathsf{W}}h_{w}(w)=0$, and for some $\tau>0$,
define the function
$\Psi_{\tau}(x):=f(x)+\max_{w\in\mathsf{W}}\\{\langle{\mathbf{A}}x,w\rangle-\kappa(w)-\tau
h_{w}(w)\\}.$ (6.18)
We denote by $\widehat{w}_{\tau}(x)$ the optimal solution of the maximization
problem in (6.18) for a fixed $x$. The main technical lemma, which leads to the
main result, is as follows.
###### Proposition 6.1 ([67]).
The function $\Psi_{\tau}(x)$ is well defined, convex and continuously
differentiable at any $x\in\mathsf{X}$ with $\nabla\Psi_{\tau}(x)=\nabla
f(x)+{\mathbf{A}}^{\ast}\widehat{w}_{\tau}(x)$. Moreover,
$\nabla\Psi_{\tau}(x)$ is Lipschitz continuous with constant
$L_{\tau}=L_{f}+\frac{\|{\mathbf{A}}\|_{\mathsf{V},\mathsf{E}}^{2}}{\tau}$.
Here the adjoint operator ${\mathbf{A}}^{\ast}$ is defined by the equality
$\langle{\mathbf{A}}x,w\rangle_{\mathsf{E}}=\langle{\mathbf{A}}^{\ast}w,x\rangle_{\mathsf{V}}$
and the norm of the operator $\|{\mathbf{A}}\|_{\mathsf{V},\mathsf{E}}$ is
defined by
$\|{\mathbf{A}}\|_{\mathsf{V},\mathsf{E}}=\max_{x,w}\\{\langle{\mathbf{A}}x,w\rangle:\|x\|_{\mathsf{V}}=1,\|w\|_{\mathsf{E}}=1\\}$.
Since $\mathsf{W}$ is bounded, $\Psi_{\tau}(x)$ is a uniform approximation for
the function $\Psi$, namely, for all $x\in\mathsf{X}$,
$\Psi_{\tau}(x)\leq\Psi(x)\leq\Psi_{\tau}(x)+\tau D_{\mathsf{W}},$ (6.19)
where $D_{\mathsf{W}}:=\max\\{h_{w}(w)|w\in\mathsf{W}\\}$, assumed to be a
finite number. Then, the idea is to choose $\tau$ sufficiently small and apply
the accelerated gradient method to minimize $\Psi_{\tau}(x)$ on $\mathsf{X}$
with a DGF $h_{x}\in\mathcal{H}_{1}(\mathsf{X})$. Doing this, and assuming that
$D_{\mathsf{X}}=\max\\{h_{x}(u)|u\in\mathsf{X}\\}<\infty$, we can apply the
result (6.12) to $\Psi_{\tau}(x)$ and, using (6.19), obtain
$\displaystyle 0\leq\Psi(x^{N})-\Psi_{\min}(\mathsf{X})$
$\displaystyle\leq\Psi_{\tau}(x^{N})+\tau
D_{\mathsf{W}}-\Psi_{\tau}(x^{*})\leq\Psi_{\tau}(x^{N})+\tau
D_{\mathsf{W}}-\Psi_{\tau}(x_{\tau}^{*})\leq\tau
D_{\mathsf{W}}+\frac{4L_{\tau}D_{\mathsf{X}}}{(N+1)^{2}}$ $\displaystyle=\tau
D_{\mathsf{W}}+\frac{4\|{\mathbf{A}}\|_{\mathsf{V},\mathsf{E}}^{2}D_{\mathsf{X}}}{\tau(N+1)^{2}}+\frac{4L_{f}D_{\mathsf{X}}}{(N+1)^{2}}.$
Choosing $\tau$ to minimize the r.h.s., i.e.
$\tau=\frac{2\|{\mathbf{A}}\|_{\mathsf{V},\mathsf{E}}}{N+1}\sqrt{\frac{D_{\mathsf{X}}}{D_{\mathsf{W}}}}$,
we obtain
$0\leq\Psi(x^{N})-\Psi_{\min}(\mathsf{X})\leq\frac{4\|{\mathbf{A}}\|_{\mathsf{V},\mathsf{E}}\sqrt{D_{\mathsf{X}}D_{\mathsf{W}}}}{N+1}+\frac{4L_{f}D_{\mathsf{X}}}{(N+1)^{2}}.$
(6.20)
A more careful analysis in the proof of [67, Theorem 3] also allows one to
obtain an approximate solution to the conjugate problem
$\max_{w\in\mathsf{W}}\\{\psi(w):=-\kappa(w)+\min_{x\in\mathsf{X}}\left(\langle{\mathbf{A}}x,w\rangle+f(x)\right)\\}.$
(6.21)
In each iteration of A-BPGM, the optimizer needs to calculate
$\nabla\Psi_{\tau}(y^{k+1})$, which requires calculating
$\widehat{w}_{\tau}(y^{k+1})$. This information is aggregated to obtain the
vector
$\widehat{w}^{N}=\sum_{k=0}^{N-1}\frac{\alpha_{k+1}}{A_{k+1}}\widehat{w}_{\tau}(y^{k+1})$
and is used to obtain the following primal-dual result
$0\leq\Psi(x^{N})-\Psi_{\min}(\mathsf{X})\leq\Psi(x^{N})-\psi(\widehat{w}^{N})\leq\frac{4\|{\mathbf{A}}\|_{\mathsf{V},\mathsf{E}}\sqrt{D_{\mathsf{X}}D_{\mathsf{W}}}}{N+1}+\frac{4L_{f}D_{\mathsf{X}}}{(N+1)^{2}}.$
(6.22)
In both cases, using the special structure of the problem, it is possible to
obtain the convergence rate $O(1/N)$ for non-smooth optimization, which is
better than the lower bound $O(1/\sqrt{N})$ for general non-smooth optimization
problems.
We illustrate the smoothing technique by two examples of piecewise-linear
minimization.
###### Example 6.1 (Uniform fit).
Consider the problem of finding a uniform fit of some signal $b\in\mathsf{E}$,
given linear observations ${\mathbf{A}}x$, where
${\mathbf{A}}:\mathsf{V}\to\mathsf{E}$ is a bounded linear operator. This
problem amounts to minimizing the non-smooth function
$\lVert{\mathbf{A}}x-b\rVert_{\infty}$. Of course, this problem can be
equivalently formulated as an LP; however, in the case where the dimensionality
of the parameter vector $x$ is large, such a direct approach may turn out to be
impractical. Adopting the just introduced smoothing technique, the
representation (6.17) can be obtained using the definition of the dual norm
$\lVert\cdot\rVert_{1}$, i.e.
$\lVert{\mathbf{A}}x-b\rVert_{\infty}=\max_{w:\lVert w\rVert_{1}\leq
1}\langle{\mathbf{A}}x-b,w\rangle$. Yet, a better representation is obtained
using the unit simplex
$\mathsf{W}=\\{w\in\mathbb{R}^{2m}_{+}|\sum_{i=1}^{2m}w_{i}=1\\}$, the matrix
$\hat{{\mathbf{A}}}=[{\mathbf{A}};-{\mathbf{A}}]$, and the vector
$\hat{b}=[b;-b]$. For the set $\mathsf{W}$, a natural Bregman setup is the
norm $\lVert w\rVert_{\mathsf{E}}=\lVert w\rVert_{1}$ and the Boltzmann-Shannon
entropy $h_{w}(w)=\ln 2m+\sum_{i=1}^{2m}w_{i}\ln w_{i}$. This gives
$\Psi_{\tau}(x)=\max_{w\in\mathsf{W}}\\{\langle\hat{{\mathbf{A}}}x-\hat{b},w\rangle-\tau
h_{w}(w)\\}=\tau\ln\left(\frac{1}{2m}\sum_{i=1}^{m}\left[\exp\left(\frac{\langle
a_{i},x\rangle-b_{i}}{\tau}\right)+\exp\left(-\frac{\langle
a_{i},x\rangle-b_{i}}{\tau}\right)\right]\right),$
which is recognized as a softmax function.
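As a quick numerical sanity check of this example (a minimal sketch with a small random instance, taking $f=0$ so that $\Psi=r$), the closed-form softmax must satisfy the uniform approximation bound (6.19) with $D_{\mathsf{W}}=\ln 2m$; the instance sizes and $\tau$ below are arbitrary illustrative choices.

```python
import math, random

def inf_norm_residual(A, b, x):
    """r(x) = ||Ax - b||_inf."""
    return max(abs(sum(aij * xj for aij, xj in zip(ai, x)) - bi)
               for ai, bi in zip(A, b))

def softmax_smoothing(A, b, x, tau):
    """Entropy smoothing of ||Ax - b||_inf over the unit simplex in R^{2m}."""
    m = len(b)
    u = [sum(aij * xj for aij, xj in zip(ai, x)) - bi for ai, bi in zip(A, b)]
    s = sum(math.exp(ui / tau) + math.exp(-ui / tau) for ui in u)
    return tau * math.log(s / (2 * m))

random.seed(0)
m, n, tau = 5, 3, 0.1
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
b = [random.uniform(-1, 1) for _ in range(m)]
x = [random.uniform(-1, 1) for _ in range(n)]

r_val = inf_norm_residual(A, b, x)
r_tau = softmax_smoothing(A, b, x, tau)
D_W = math.log(2 * m)  # prox-diameter of the 2m-simplex under the entropy DGF
# uniform approximation (6.19): Psi_tau <= Psi <= Psi_tau + tau * D_W
print(r_tau <= r_val <= r_tau + tau * D_W)
```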
###### Example 6.2 ($\ell_{1}$-fit).
In compressed sensing [188, 189, 190] one encounters the problem of minimizing
the $\ell_{1}$ norm of the residual vector ${\mathbf{A}}x-b$ over a given
closed convex set $\mathsf{X}$. While it is well known that this problem can
in principle again be reformulated as an LP, the typical high dimensionality
of such problems often makes this direct approach impracticable. Adopting
the smoothing technique, it is natural to choose
$\mathsf{W}=\\{w\in\mathbb{R}^{m}|\|w\|_{\infty}\leq 1\\}$ and
$h_{w}(w)=\frac{1}{2}\sum_{i=1}^{m}\|a_{i}\|_{\mathsf{E},\ast}w_{i}^{2}$,
which gives
$\Psi_{\tau}(x)=\max_{w\in\mathsf{W}}\\{\langle{{\mathbf{A}}}x-{b},w\rangle-\tau
h_{w}(w)\\}=\sum_{i=1}^{m}\|a_{i}\|_{\mathsf{E},\ast}\psi_{\tau}\left(\frac{|\langle
a_{i},x\rangle-b_{i}|}{\|a_{i}\|_{\mathsf{E},\ast}}\right),$
where $\psi_{\tau}(t)$ is the Huber function, equal to $t^{2}/(2\tau)$ for
$0\leq t\leq\tau$ and to $t-\tau/2$ for $t\geq\tau$.
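The $\tau/2$-uniform approximation property of the Huber function can be checked directly; this is a minimal sketch, and the sample points are arbitrary.

```python
def huber(t, tau):
    """Huber smoothing psi_tau of t -> |t|, for t >= 0."""
    return t * t / (2.0 * tau) if t <= tau else t - tau / 2.0

tau = 1.0
for t in [0.0, 0.25, 0.5, 1.0, 2.0, 10.0]:
    # psi_tau(t) <= t <= psi_tau(t) + tau/2, mirroring the bound (6.19)
    assert huber(t, tau) <= t <= huber(t, tau) + tau / 2.0
```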
For the particular case of smoothing the absolute value function $|x|$, Figure
2 gives the plot of the original function, its softmax smoothing and Huber
smoothing, both with $\tau=1$. Potentially, other ways of smoothing a non-
smooth function can be applied, see [191] for a general framework.
Figure 2: Absolute value function $|x|$, its softmax smoothing and Huber
smoothing, both with $\tau=1$.
Figure 3: Non-smooth function $f(x)=\max\\{x-1,x/2\\}$, a quadratic function
constructed using the first-order information at the point $x=2$, and a
shifted quadratic function constructed using the first-order information at
the point $x=2$. As one can see, adding a shift allows one to obtain an upper
quadratic bound for the objective, which is then minimized to obtain a new
test point.
##### Closing Remarks
Let us make several remarks on the related literature. A closely related
approach is proposed in [192], where the problem (6.17) is considered directly
as a min-max saddle-point problem. These classes of equilibrium problems are
typically solved via tools from monotone variational inequalities, whose
performance is typically worse than the performance of optimization algorithms.
In particular, contrasting the above rate estimate with the one reported in
[192], one observes that the bound in [192] has a structure similar to (6.22),
yet with the second term being non-accelerated, i.e. proportional to $1/N$.
This approach was generalized to obtain an accelerated method for a special
class of variational inequalities in [193], where an optimal iteration
complexity $O(L/\sqrt{\varepsilon})$ to reach an $\varepsilon$-close solution
is reported. In the original paper [67], the smoothing parameter is fixed and
requires to know the parameters of the problem in advance. This has been
improved in [194], where an adaptive version of the smoothing techniques is
proposed. This framework was extended in [195, 196, 197, 198] for structured
composite optimization problems in the form (2.7) and a related primal-dual
representation (2.8). A related line of works studies minimization of strongly
convex functions under linear constraints. Similarly to (6.18) the objective
in the Lagrange dual problem has Lipschitz gradient, yet the challenge is that
the feasible set in the dual problem is not bounded. Despite this, it is
possible to obtain accelerated primal-dual methods [195, 196, 199, 200, 201,
202, 203, 204, 155, 205, 132, 206]. In particular, this allows one to obtain
improved complexity bounds for different types of optimal transport problems
[202, 207, 205, 140, 208, 209, 210, 211, 212].
### 6.3 Universal Accelerated Method
As it was discussed in the previous subsection, there is a gap in the
convergence rate between the class of non-smooth convex optimization problems
and the class of smooth convex optimization problems. In this subsection, we
present a unifying framework [213] for these two classes which allows one to
obtain uniformly optimal complexity bounds for both classes by a single method
without the need to know whether the objective is smooth or non-smooth. To do
that, consider problem (P) with $f$ belonging to the class of functions with
Hölder-continuous subgradients, i.e. for some $L_{\nu}>0$ and $\nu\in[0,1]$ it
holds that $\lVert\nabla f(x)-\nabla f(y)\rVert_{*}\leq L_{\nu}\lVert
x-y\rVert^{\nu}$ for all $x,y\in\operatorname{dom}f$. If $\nu=1$, we recover
the $L_{f}$-smoothness condition (2.1). If $\nu=0$, we have that $f$ has
bounded variation of the subgradient, which is essentially equivalent to the
bounded subgradient Assumption 4. The main observation [142, 213] is that this
Hölder condition allows one to prove an inexact version of the "descent Lemma"
inequality (3.20). More precisely [213, Lemma 2], for any
$x,y\in\operatorname{dom}f$ and any $\delta>0$,
$f(y)\leq f(x)+\langle\nabla f(x),y-x\rangle+\frac{L_{\nu}}{1+\nu}\lVert
y-x\rVert^{1+\nu}\leq f(x)+\langle\nabla f(x),y-x\rangle+\frac{L}{2}\lVert
y-x\rVert^{2}+\delta,$ (6.23)
where
$L\geq
L(\delta):=\left(\frac{1-\nu}{1+\nu}\frac{1}{\delta}\right)^{\frac{1-\nu}{1+\nu}}L_{\nu}^{\frac{2}{1+\nu}}$
(6.24)
with the convention that $0^{0}=1$. We illustrate this in Figure 3, where we
plot the quadratic bound in the r.h.s. of (6.23) with $\delta=0$ and the
shifted quadratic bound in the r.h.s. of (6.23) with some $\delta>0$. The first
quadratic bound cannot be an upper bound for $f(y)$ for any $L>0$, while the
positive shift allows one to construct an upper bound. Thus, it is sufficient
to equip the A-BPGM with a backtracking line-search to obtain a universal
method.
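The inexact descent inequality (6.23) with the constant $L(\delta)$ from (6.24) can be verified numerically. The following sketch uses $f(x)=|x|$, which has Hölder-continuous subgradients with $\nu=0$ and $L_{0}=2$; the choice of $\delta$ and the sampling range are illustrative assumptions.

```python
import math, random

def L_of_delta(delta, nu, L_nu):
    """The constant L(delta) from (6.24); the convention 0^0 = 1 holds for nu = 1."""
    e = (1.0 - nu) / (1.0 + nu)
    return (e / delta) ** e * L_nu ** (2.0 / (1.0 + nu))

f = abs
subgrad = lambda x: 1.0 if x >= 0 else -1.0  # a subgradient choice for |x|
nu, L_nu, delta = 0.0, 2.0, 0.05
L = L_of_delta(delta, nu, L_nu)              # for nu = 0 this is L_0^2 / delta

random.seed(1)
for _ in range(10000):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    lhs = f(y)
    rhs = f(x) + subgrad(x) * (y - x) + 0.5 * L * (y - x) ** 2 + delta
    assert lhs <= rhs + 1e-12  # the shifted quadratic bound (6.23)
```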
The Universal Accelerated Bregman Proximal Gradient Method (U-A-BPGM)
Input: Pick $x^{0}=u^{0}=y^{0}\in\operatorname{dom}(r)\cap\mathsf{X}^{\circ}$,
$\varepsilon>0$, $0<L_{0}<L(\varepsilon/2)$, set $A_{0}=0$
General step: For $k=0,1,\ldots$ do:
Find the smallest integer $i_{k}\geq 0$ such that if one defines
$\alpha_{k+1}$ from quadratic equation
$A_{k}+\alpha_{k+1}=2^{i_{k}-1}L_{k}\alpha_{k+1}^{2}$, sets
$A_{k+1}=A_{k}+\alpha_{k+1}$,
sets $y^{k+1}=\frac{\alpha_{k+1}}{A_{k+1}}u^{k}+\frac{A_{k}}{A_{k+1}}x^{k}$,
sets
$u^{k+1}=\operatorname*{argmin}_{x\in\mathsf{X}}\left\\{\alpha_{k+1}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),x-y^{k+1}\rangle+r(x)\right)+D_{h}(x,u^{k})\right\\},$
sets $x^{k+1}=\frac{\alpha_{k+1}}{A_{k+1}}u^{k+1}+\frac{A_{k}}{A_{k+1}}x^{k}$,
then it holds that $f(x^{k+1})\leq f(y^{k+1})+\langle\nabla
f(y^{k+1}),x^{k+1}-y^{k+1}\rangle+\frac{2^{i_{k}-1}L_{k}}{2}\lVert
x^{k+1}-y^{k+1}\rVert^{2}+\frac{\varepsilon\alpha_{k+1}}{2A_{k+1}}$.
Set $L_{k+1}=2^{i_{k}-1}L_{k}$ and go to the next iteration.
We first observe that for sufficiently large $i_{k}$, $2^{i_{k}-1}L_{k}\geq
L\left(\frac{\varepsilon\alpha_{k+1}}{2A_{k+1}}\right)$, see [213, p.396].
This means that the process of finding $i_{k}$ is finite since the condition
which is checked for each $i_{k}$ is essentially (6.23) with
$\delta=\frac{\varepsilon\alpha_{k+1}}{2A_{k+1}}$. Further, the convergence
proof follows the same steps as the proof of the convergence rate for A-BPGM.
The first thing which changes is equation (6.1), where now the inexact
descent Lemma is used instead of the exact one. The only difference is that
$L_{f}$ is changed to its local approximation $L_{k+1}$ and the error term
$\frac{\varepsilon\alpha_{k+1}}{2A_{k+1}}$ appears in the r.h.s. In (6.1) the
new quadratic equation with $L_{k+1}$ is used and the inequality remains the
same. This eventually leads to (6.1) with the only change being an additive
error term $\frac{\varepsilon\alpha_{k+1}}{2A_{k+1}}$ in the r.h.s. Finally,
this leads to the bound
$\Psi(x^{N})-\Psi_{\min}(\mathsf{X})\leq\frac{D_{h}(u^{\ast},u^{0})}{A_{N}}+\frac{\varepsilon}{2}.$
After some algebraic manipulation, Nesterov [213, p.397] obtains an inequality
$A_{N}\geq\frac{N^{\frac{1+3\nu}{1+\nu}}\varepsilon^{\frac{1-\nu}{1+\nu}}}{2^{\frac{2+4\nu}{1+\nu}}L_{\nu}^{\frac{2}{1+\nu}}}$.
Substituting, we obtain
$\Psi(x^{N})-\Psi_{\min}(\mathsf{X})\leq\frac{2^{\frac{2+4\nu}{1+\nu}}D_{h}(u^{\ast},u^{0})L_{\nu}^{\frac{2}{1+\nu}}}{N^{\frac{1+3\nu}{1+\nu}}\varepsilon^{\frac{1-\nu}{1+\nu}}}+\frac{\varepsilon}{2}.$
Since the method does not require to know $\nu$ and $L_{\nu}$, the iteration
complexity to achieve accuracy $\varepsilon$ is
$N=O\left(\inf_{\nu\in[0,1]}\left(\frac{L_{\nu}}{\varepsilon}\right)^{\frac{2}{1+3\nu}}\left(D_{h}(u^{\ast},u^{0})\right)^{\frac{1+\nu}{1+3\nu}}\right).$
It is easy to see that the oracle complexity, i.e. the number of proximal
operations, is approximately the same. Indeed, the number of oracle calls at
iteration $k$ is $2(i_{k}+1)$. Further, $L_{k+1}=2^{i_{k}-1}L_{k}$, i.e.
$i_{k}=1+\log_{2}\frac{L_{k+1}}{L_{k}}$, which means that the total number of
oracle calls up to iteration $N$ is
$\sum_{k=0}^{N-1}2(i_{k}+1)=\sum_{k=0}^{N-1}2\left(2+\log_{2}\frac{L_{k+1}}{L_{k}}\right)=4N+2\log_{2}\frac{L_{N}}{L_{0}}$,
i.e., up to a logarithmic term, four times larger than $N$. The obtained
oracle complexity coincides up to a constant factor with the lower bound [58]
for first-order methods applied to minimization of functions with Hölder-
continuous gradients. In the particular case $\nu=0$, we obtain the complexity
$O\left(\frac{L_{0}^{2}D_{h}(u^{\ast},u^{0})}{\varepsilon^{2}}\right)$, which
corresponds to the convergence rate $1/\sqrt{k}$, which is typical for general
non-smooth minimization. In the opposite case of smooth minimization
corresponding to $\nu=1$, we obtain the complexity
$O\left(\sqrt{\frac{L_{1}D_{h}(u^{\ast},u^{0})}{\varepsilon}}\right)$, which
corresponds to the optimal convergence rate $1/k^{2}$. The same idea can be
used to obtain a universal version of the BPGM method [213]. One can also use
the strong convexity assumption to obtain a faster convergence rate of the
U-A-BPGM, either by restarts [180, 145] or by incorporating the strong
convexity parameter in the steps [57]. The same backtracking line-search can be
applied in a much simpler way if one knows that $f$ is $L_{f}$-smooth with an
unknown Lipschitz constant, or to achieve acceleration in practice when the
available estimate for $L_{f}$ is pessimistic [54, 196, 200, 202, 214, 215,
203]. The idea is then to use the standard exact "descent Lemma" inequality in
each step of the accelerated method.
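To make the line-search concrete, here is a one-dimensional Euclidean sketch of U-A-BPGM ($h(x)=x^{2}/2$, $r=0$, $\mathsf{X}=\mathbb{R}$). The same routine, with no knowledge of $\nu$ or $L_{\nu}$, is applied to a smooth objective ($x^{2}$, $\nu=1$) and a non-smooth one ($|x|$, $\nu=0$); the objectives, $\varepsilon$, $L_{0}$ and the iteration counts are illustrative assumptions.

```python
import math

def u_a_bpgm(f, subgrad, x0, eps, L0, n_iter):
    """1-D Euclidean sketch of U-A-BPGM: h(x) = x^2/2, r = 0, X = R."""
    x = u = x0
    A, L = 0.0, L0
    for _ in range(n_iter):
        i = 0
        while True:                       # backtracking search for the smallest i_k
            M = 2.0 ** (i - 1) * L
            # alpha solves the quadratic equation A + alpha = M * alpha^2
            alpha = (1.0 + math.sqrt(1.0 + 4.0 * M * A)) / (2.0 * M)
            A_next = A + alpha
            y = (alpha * u + A * x) / A_next
            g = subgrad(y)
            u_new = u - alpha * g         # Euclidean Bregman proximal step
            x_new = (alpha * u_new + A * x) / A_next
            delta = eps * alpha / (2.0 * A_next)
            if f(x_new) <= f(y) + g * (x_new - y) + 0.5 * M * (x_new - y) ** 2 + delta:
                break
            i += 1
        x, u, A, L = x_new, u_new, A_next, M
    return x

# the very same code handles nu = 1 (smooth) and nu = 0 (non-smooth)
x_smooth = u_a_bpgm(lambda t: t * t, lambda t: 2.0 * t, 1.0, 1e-3, 1.0, 100)
x_nonsmooth = u_a_bpgm(abs, lambda t: 1.0 if t >= 0 else -1.0, 1.0, 0.05, 1.0, 3000)
print(x_smooth * x_smooth, abs(x_nonsmooth))
```

The acceptance test is exactly the inexact descent inequality with $\delta=\frac{\varepsilon\alpha_{k+1}}{2A_{k+1}}$, so the inner loop over $i$ is guaranteed to terminate.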
The idea of universal methods turned out to be very productive and several
extensions have been proposed in the literature, including a universal
primal-dual method for composite optimization [216], a universal primal-dual
method [217] for problems with linear constraints and problems in the form
(2.7), a universal method for convex and non-convex optimization [218], and a
universal primal-dual hybrid of the accelerated gradient method with the
conjugate gradient method using an additional one-dimensional minimization
[132]. Extensions are also known for
first-order methods for variational inequalities and saddle-point problems
[186]. The above-described method is not the only way to obtain adaptive and
universal methods for smooth and non-smooth optimization problems. An
alternative way which uses the norm of the current (sub)gradient to define the
step-size was initiated probably by [219] and became very popular in
stochastic optimization for machine learning after the paper [220]. On this
avenue it was possible to obtain for $\nu\in\\{0,1\\}$ universal accelerated
optimization method [221] and universal methods for variational inequalities
and saddle-point problems [222, 223].
### 6.4 Connection between Accelerated method and Conditional Gradient
In this subsection we describe how a variant of the conditional gradient method
can be obtained as a particular case of A-BPGM with an inexact Bregman Proximal
step. Since we consider the conditional gradient method, it is natural to
assume that the set $\mathsf{X}$ is bounded with
$\max_{x,u\in\mathsf{X}}D_{h}(x,u)\leq D_{\mathsf{X}}$. We follow the idea of
[45], where the main observation is that the Prox-Mapping in A-BPGM can be
calculated inexactly by applying the generalized linear oracle given in
Definition 5.2. The idea is very similar to the idea of the conditional
gradient sliding described in Section 5.3.4, with the difference that here we
implement an approximate Bregman Proximal step using only one step of the
generalized conditional gradient method. The resulting algorithm is listed
below, with the only difference from A-BPGM being the change of the Bregman
Proximal step $u^{k+1}=\mathcal{P}_{\alpha_{k+1}r}(u^{k},\alpha_{k+1}\nabla
f(y^{k+1}))$ to the step
$u^{k+1}=\mathcal{L}_{\mathsf{X},\alpha_{k+1}r}(\alpha_{k+1}\nabla
f(y^{k+1}))$ given by the generalized linear oracle.
Conditional Gradient Method by A-BPGM with Approximate Bregman Proximal Step
Input: pick $x^{0}=u^{0}=y^{0}\in\operatorname{dom}(r)\cap\mathsf{X}^{\circ}$,
set $A_{0}=0$
General step: For $k=0,1,\ldots$ do:
Find $\alpha_{k+1}$ from quadratic equation
$A_{k}+\alpha_{k+1}=L_{f}\alpha_{k+1}^{2}$. Set $A_{k+1}=A_{k}+\alpha_{k+1}$.
Set $y^{k+1}=\frac{\alpha_{k+1}}{A_{k+1}}u^{k}+\frac{A_{k}}{A_{k+1}}x^{k}$.
Set (Approximate Bregman proximal step by generalized linear oracle)
$u^{k+1}=\operatorname*{argmin}_{x\in\mathsf{X}}\left\\{\alpha_{k+1}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),x-y^{k+1}\rangle+r(x)\right)\right\\}=\mathcal{L}_{\mathsf{X},\alpha_{k+1}r}(\alpha_{k+1}\nabla
f(y^{k+1}))$.
Set $x^{k+1}=\frac{\alpha_{k+1}}{A_{k+1}}u^{k+1}+\frac{A_{k}}{A_{k+1}}x^{k}$.
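For the probability simplex with $r=0$, the generalized linear oracle simply returns a vertex minimizing the linear function, and the listed method can be sketched as follows. The quadratic test objective, whose minimizer lies in the simplex, and the iteration count are illustrative assumptions.

```python
import math

def linear_oracle_simplex(g):
    """Generalized linear oracle over the unit simplex: a vertex minimizing <g, x>."""
    i = min(range(len(g)), key=lambda j: g[j])
    return [1.0 if j == i else 0.0 for j in range(len(g))]

def cg_by_abpgm(grad, Lf, x0, n_iter):
    """A-BPGM with the Bregman proximal step replaced by one linear-oracle step (r = 0)."""
    x, u = x0[:], x0[:]
    A = 0.0
    for _ in range(n_iter):
        # alpha solves the quadratic equation A + alpha = L_f * alpha^2
        alpha = (1.0 + math.sqrt(1.0 + 4.0 * Lf * A)) / (2.0 * Lf)
        A_next = A + alpha
        y = [(alpha * ui + A * xi) / A_next for ui, xi in zip(u, x)]
        u = linear_oracle_simplex(grad(y))  # approximate Bregman proximal step
        x = [(alpha * ui + A * xi) / A_next for ui, xi in zip(u, x)]
        A = A_next
    return x

# f(x) = 0.5 * ||x - c||^2 with c in the simplex, so the minimum over the simplex is 0
c = [0.5, 0.3, 0.2]
grad = lambda x: [xi - ci for xi, ci in zip(x, c)]
xN = cg_by_abpgm(grad, 1.0, [1.0, 0.0, 0.0], 2000)
print(0.5 * sum((xi - ci) ** 2 for xi, ci in zip(xN, c)))
```

Note that the iterate $x^{N}$ is a convex combination of oracle vertices, as in the standard conditional gradient method, and the observed gap is consistent with the $O(1/N)$ rate derived below.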
Since the difference between such a conditional gradient method and A-BPGM is
one simple change of the step for $u^{k+1}$, to obtain the convergence rate of
the former it is sufficient to track what changes such an approximate Bregman
Proximal step entails in the convergence rate proof for A-BPGM. In other
words, we need to understand what happens with the proof for A-BPGM if the
Bregman Proximal step is made inexactly by applying the generalized linear
oracle. The first important difference is that we need an inexact version of
inequality (3.17), which was used in the convergence proof of A-BPGM and which
is the result of the exact Bregman Proximal step. To obtain its inexact version,
let us denote
$\varphi(x)=\alpha_{k+1}\left(f(y^{k+1})+\langle\nabla
f(y^{k+1}),x-y^{k+1}\rangle+r(x)\right).$
Then the generalized linear oracle actually minimizes this function on the set
$\mathsf{X}$ to obtain $u^{k+1}$. Thus, by the optimality condition, there
exists $\xi\in\partial\varphi(u^{k+1})$ such that
$\langle\xi,u^{k+1}-x\rangle\leq 0$ for all $x\in\mathsf{X}$. We now recall
that the Bregman Proximal step in A-BPGM minimizes
$\varphi(x)+D_{h}(x,u^{k})$. These observations allow us to estimate the
inexactness of the Bregman Proximal step implemented via the generalized linear
oracle. Indeed, for
$u^{k+1}=\mathcal{L}_{\mathsf{X},\alpha_{k+1}r}(\alpha_{k+1}\nabla
f(y^{k+1}))$
$\displaystyle\langle\xi+\nabla h(u^{k+1})-\nabla
h(u^{k}),u^{k+1}-x\rangle\leq\langle\nabla h(u^{k+1})-\nabla
h(u^{k}),u^{k+1}-x\rangle$
$\displaystyle=-D_{h}(x,u^{k})+D_{h}(x,u^{k+1})+D_{h}(u^{k+1},u^{k})\leq
2D_{\mathsf{X}},$ (6.25)
where we used the three-point identity of Lemma 3.5. This inequality provides
an inexact version of the optimality condition (3.15) in the problem
$\min_{x\in\mathsf{X}}\\{\varphi(x)+D_{h}(x,u^{k})\\}$, i.e. (3.18) with
$\Delta=2D_{\mathsf{X}}$. This in turn leads to (3.19) with
$\Delta=2D_{\mathsf{X}}$, which is the desired inexact version of (3.17).
Let us now see how this affects the convergence rate proof of A-BPGM.
Inequality (3.17) was used in the analysis only in (6.1). This means that the
change of (3.17) to (3.19) with $\Delta=2D_{\mathsf{X}}$ leads to an additive
term $\frac{2D_{\mathsf{X}}}{A_{k+1}}$ in the r.h.s. of (6.1):
$\displaystyle\Psi(x^{k+1})$
$\displaystyle\leq\frac{A_{k}}{A_{k+1}}\Psi(x^{k})+\frac{\alpha_{k+1}}{A_{k+1}}\Psi(u)+\frac{1}{A_{k+1}}D_{h}(u,u^{k})-\frac{1}{A_{k+1}}D_{h}(u,u^{k+1})+\frac{2D_{\mathsf{X}}}{A_{k+1}},\quad
u\in\mathsf{X}.$ (6.26)
Multiplying both sides of the last inequality by $A_{k+1}$, summing these
inequalities from $k=0$ to $k=N-1$, and using that
$A_{N}-A_{0}=\sum_{k=0}^{N-1}\alpha_{k+1}$, we obtain
$\displaystyle A_{N}\Psi(x^{N})\leq
A_{0}\Psi(x^{0})+(A_{N}-A_{0})\Psi(u)+D_{h}(u,u^{0})-D_{h}(u,u^{N})+2ND_{\mathsf{X}}.$
(6.27)
Since $A_{0}=0$, we can choose
$u=x^{\ast}\in\operatorname*{argmin}\\{D_{h}(u,u^{0})|u\in\mathsf{X}^{\ast}\\}$,
so that, for all $N\geq 1$,
$\Psi(x^{N})-\Psi_{\min}(\mathsf{X})\leq\frac{D_{h}(x^{\ast},u^{0})}{A_{N}}+\frac{2ND_{\mathsf{X}}}{A_{N}}\leq\frac{D_{\mathsf{X}}}{A_{N}}+\frac{2ND_{\mathsf{X}}}{A_{N}},$
which, given the lower bound $A_{N}\geq\frac{(N+1)^{2}}{4L_{f}}$, leads to the
final result for the convergence rate of this inexact A-BPGM implemented via
the generalized linear oracle:
$\Psi(x^{N})-\Psi_{\min}(\mathsf{X})\leq\frac{4L_{f}D_{\mathsf{X}}}{(N+1)^{2}}+\frac{8L_{f}D_{\mathsf{X}}}{N+1}.$
Thus, we obtain a variant of conditional gradient method with the same
convergence rate $1/N$ as for the standard conditional gradient method. Using
the same approach, but with U-A-BPGM as the basis method, one can obtain a
universal version of the conditional gradient method [57] for minimizing
objectives with Hölder-continuous gradients. The bounds in this case are
similar to the ones obtained by the more direct universal method in [108]. Similar
bounds were also recently obtained in [224].
## 7 Conclusion
We close this survey with a very important fact which Nesterov writes in the
introduction of his important textbook [26]: _in general, optimization
problems are unsolvable._ Convex programming stands out from this general
fact, since it describes a large and significant class of model problems, with
important practical applications, for which general solution techniques have
been developed within the mathematical framework of interior-point techniques.
However, modern optimization problems are large-scale in nature, which renders
these polynomial-time methods impractical. First-order methods have become the
gold standard in balancing cheap iterations with low solution accuracy, and
many theoretical and practical advances have been made in the last 20 years.
Despite the fact that convex optimization is approaching the state of being a
primitive similar to linear algebra techniques, we foresee that the
development of first-order methods has not come to a halt yet. In connection
with stochastic inputs, the combination of acceleration techniques with other
performance-boosting tricks, such as variance reduction, incremental
techniques, and distributed optimization, still promises to produce new
innovations. On the other hand, there is also still much room for improvement
of algorithms for optimization problems which do not admit a prox-friendly
geometry. Distributed optimization, in particular in the context of federated
learning, is now a very active area of research; see [225] for a recent review
of federated learning and [226] for a recent review of distributed
optimization. Another important focus of research in optimization methods is
now on numerical methods for non-convex optimization, motivated by the training
of deep neural networks; see [227, 228] for recent reviews. A number of open
questions remain in the theory of first-order methods for variational
inequalities and saddle-point problems, mainly in the case of variational
inequalities with non-monotone operators. In particular, recently the authors
of [229] observed a connection between extragradient methods for monotone
variational inequalities and accelerated first-order methods. Thus, as we
emphasize in this survey, new connections, which are still continuously being
discovered between different methods and different formulations, can lead to
new understanding and developments in this lively field of first-order
methods.
## Acknowledgements
The authors are grateful to Yu. Nesterov and A. Gasnikov for fruitful
discussions. M. Staudigl thanks the COST Action CA16228 (European Network for
Game Theory), the FMJH Program PGMO, and EDF (Project "Privacy preserving
algorithms for distributed control of energy markets") for their support.
## References
* [1] Y. Nesterov, A. Nemirovski, Interior Point Polynomial methods in Convex programming, SIAM Publications, 1994.
* [2] E. D. Andersen, K. D. Andersen, The Mosek Interior Point Optimizer for Linear Programming: An Implementation of the Homogeneous Algorithm, Springer US, Boston, MA, 2000, pp. 197–232.
* [3] J. F. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optimization Methods and Software 11 (1-4) (1999) 625–653.
* [4] P. Jain, P. Kar, Non-convex optimization for machine learning, Found. Trends Mach. Learn. 10 (3–4) (2017) 142–336.
* [5] F. E. Curtis, K. Scheinberg, Optimization methods for supervised machine learning: From linear models to deep learning, arXiv preprint arXiv:1706.10207.
* [6] S. J. Wright, Optimization algorithms for data analysis, The Mathematics of Data 25 (2018) 49.
* [7] H. Robbins, S. Monro, A stochastic approximation method, The Annals of Mathematical Statistics 22 (3) (1951) 400–407.
* [8] H. J. Kushner, Approximation and Weak Convergence Methods for Random Processes, The MIT Press, 1984.
* [9] A. Benveniste, M. Métivier, P. Priouret, Adaptive Algorithms and Stochastic Approximations, Springer, Berlin, 1990.
* [10] L. Ljung, G. Pflug, H. Walk, Stochastic approximation and optimization of random systems, Vol. 17, Birkhäuser, 2012.
* [11] M. Benaïm, Recursive algorithms, urn processes, and the chaining number of chain recurrent sets, Ergodic Theory and Dynamical Systems 18 (1998) 53–87.
* [12] P. Mertikopoulos, M. Staudigl, On the convergence of gradient-like flows with noisy gradient input, SIAM Journal on Optimization 28 (1) (2018) 163–197.
* [13] P. Mertikopoulos, M. Staudigl, Stochastic mirror descent dynamics and their convergence in monotone variational inequalities, Journal of Optimization Theory and Applications 179 (3) (2018) 838–867.
* [14] J. C. Duchi, F. Ruan, Stochastic methods for composite and weakly convex optimization problems, SIAM Journal on Optimization 28 (4) (2018) 3229–3259.
* [15] D. Davis, D. Drusvyatskiy, S. Kakade, J. D. Lee, Stochastic subgradient method converges on tame functions, Foundations of Computational Mathematics 20 (1) (2020) 119–154.
* [16] A. Shapiro, D. Dentcheva, A. Ruszczyński, Lectures on Stochastic Programming, Society for Industrial and Applied Mathematics, 2009.
* [17] G. C. Pflug, A. Pichler, Multistage stochastic optimization, Springer, 2014.
* [18] A. Beck, First-Order Methods in Optimization, Society for Industrial and Applied Mathematics, 2017.
* [19] R. T. Rockafellar, R. J. B. Wets, Variational Analysis, Vol. 317 of A Series of Comprehensive Studies in Mathematics, Springer-Verlag, Berlin, 1998.
* [20] S. Bubeck, Convex optimization: Algorithms and complexity, Foundations and Trends in Machine Learning 8 (3-4) (2015) 231–357.
* [21] A. Juditsky, A. Nemirovski, First order methods for nonsmooth convex large-scale optimization, i: General purpose methods, MIT Press. Optimization for Machine Learning, 2011, Ch. 5, pp. 121–148.
* [22] A. Juditsky, A. Nemirovski, First order methods for nonsmooth convex large-scale optimization, ii: Utilizing problems structure, MIT Press. Optimization for Machine Learning, 2011, Ch. 6, pp. 149–183.
* [23] Y. Nesterov, A method of solving a convex programming problem with convergence rate $O(1/k^{2})$, Soviet Mathematics Doklady 27 (2) (1983) 372–376.
* [24] J.-B. Hiriart-Urruty, C. Lemaréchal, Fundamentals of Convex Analysis, Springer, 2001.
* [25] H. H. Bauschke, P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer - CMS Books in Mathematics, 2016.
* [26] Y. Nesterov, Lectures on Convex Optimization, Vol. 137 of Springer Optimization and Its Applications, Springer International Publishing, 2018.
* [27] P. Bühlmann, S. van de Geer, Statistics for High-Dimensional Data, Springer Series in Statistics, Springer-Verlag Berlin Heidelberg, 2011.
* [28] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Communications on Pure and Applied Mathematics 57 (11) (2004) 1413–1457.
* [29] A. Bruckstein, D. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Review 51 (1) (2009) 34–81.
* [30] A. Juditsky, F. Kılınç-Karzan, A. Nemirovski, Randomized first order algorithms with applications to $\ell_{1}$-minimization, Mathematical Programming 142 (1) (2013) 269–310.
* [31] S. Sorin, A First-Course on Zero-Sum Repeated Games, Springer, 2000.
* [32] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM Journal on Control and Optimization 29 (1) (1991) 119–138.
* [33] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in neural information processing systems, 2014, pp. 2672–2680.
* [34] R. M. Gower, M. Schmidt, F. Bach, P. Richtárik, Variance-reduced methods for machine learning, Proceedings of the IEEE 108 (11) (2020) 1968–1983.
* [35] G. Lan, First-order and Stochastic Optimization Methods for Machine Learning, Springer, 2020.
* [36] D. Bertsekas, Nonlinear Programming, Athena Scientific, 1999.
* [37] J.-J. Moreau, Proximité et dualité dans un espace Hilbertien, Bulletin de la Société mathématique de France 93 (1965) 273–299.
* [38] A. Beck, N. Guttmann-Beck, FOM – a Matlab toolbox of first-order methods for solving convex optimization problems, Optimization Methods and Software 34 (1) (2019) 172–193.
* [39] P. L. Combettes, J.-C. Pesquet, Proximal splitting methods in signal processing, Springer, 2011, pp. 185–212.
* [40] N. Parikh, S. Boyd, Proximal algorithms, Foundations and Trends® in Optimization 1 (3) (2014) 127–239.
* [41] A. Auslender, M. Teboulle, Projected subgradient methods with non-euclidean distances for non-differentiable convex minimization and variational inequalities, Mathematical Programming 120 (1) (2009) 27–48.
* [42] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, 1970.
* [43] M. Teboulle, A simplified view of first order methods for optimization, Mathematical Programming 170 (1) (2018) 67–96.
* [44] P. L. Combettes, V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Modeling & Simulation 4 (4) (2005) 1168–1200.
* [45] A. Ben-Tal, A. Nemirovski, Lectures on Modern Convex Optimization (Lecture Notes), Personal web-page of A. Nemirovski, 2020.
URL https://www2.isye.gatech.edu/~nemirovs/LMCOLN2020WithSol.pdf
* [46] A. Beck, M. Teboulle, Mirror descent and nonlinear projected subgradient methods for convex optimization, Operations Research Letters 31 (3) (2003) 167–175.
* [47] A. Juditsky, A. V. Nazin, A. B. Tsybakov, N. Vayatis, Recursive aggregation of estimators by the mirror descent algorithm with averaging, Problems of Information Transmission 41 (4) (2005) 368–384.
* [48] M. Doljansky, M. Teboulle, An interior proximal algorithm and the exponential multiplier method for semidefinite programming, SIAM Journal on Optimization 9 (1) (1998) 1–13.
* [49] A. Auslender, M. Teboulle, Interior gradient and proximal methods for convex and conic optimization, SIAM Journal on Optimization 16 (3) (2006) 697–725.
* [50] B. Cox, A. Juditsky, A. Nemirovski, Dual subgradient algorithms for large-scale nonsmooth learning problems, Mathematical Programming 148 (1) (2014) 143–180.
* [51] M. Teboulle, Entropic proximal mappings with applications to nonlinear programming, Mathematics of Operations Research 17 (1992) 670–690.
* [52] G. Chen, M. Teboulle, Convergence analysis of a proximal-like minimization algorithm using Bregman functions, SIAM Journal on Optimization 3 (3) (1993) 538–543.
* [53] A. Beck, M. Teboulle, Gradient-based algorithms with applications to signal recovery, in: D. P. Palomar, Y. C. Eldar (Eds.), Convex optimization in signal processing and communications, Cambridge University Press, 2009, pp. 42–88.
* [54] Y. Nesterov, Gradient methods for minimizing composite functions, Mathematical Programming 140 (1) (2013) 125–161.
* [55] H. Lu, R. Freund, Y. Nesterov, Relatively smooth convex optimization by first-order methods, and applications, SIAM Journal on Optimization 28 (1) (2018) 333–354.
* [56] F. S. Stonyakin, D. Dvinskikh, P. Dvurechensky, A. Kroshnin, O. Kuznetsova, A. Agafonov, A. Gasnikov, A. Tyurin, C. A. Uribe, D. Pasechnyuk, S. Artamonov, Gradient methods for problems with inexact model of the objective, in: M. Khachay, Y. Kochetov, P. Pardalos (Eds.), Mathematical Optimization Theory and Operations Research, Springer International Publishing, Cham, 2019, pp. 97–114, arXiv:1902.09001.
* [57] F. Stonyakin, A. Tyurin, A. Gasnikov, P. Dvurechensky, A. Agafonov, D. Dvinskikh, D. Pasechnyuk, S. Artamonov, V. Piskunova, Inexact relative smoothness and strong convexity for optimization and variational inequalities by inexact model, arXiv:2001.09013. WIAS Preprint No. 2709.
* [58] A. S. Nemirovski, D. B. Yudin, Problem Complexity and Method Efficiency in Optimization, Wiley, New York, NY, 1983.
* [59] J. Duchi, S. Shalev-Shwartz, Y. Singer, A. Tewari, Composite objective mirror descent, in: COLT 2010 - The 23rd Conference on Learning Theory, 2010, pp. 14–26.
* [60] P. E. Dvurechensky, A. V. Gasnikov, E. A. Nurminski, F. S. Stonyakin, Advances in Low-Memory Subgradient Optimization, Springer International Publishing, Cham, 2020, pp. 19–59, arXiv:1902.01572.
* [61] H. H. Bauschke, J. Bolte, M. Teboulle, A descent lemma beyond Lipschitz gradient continuity: First-order methods revisited and applications, Mathematics of Operations Research 42 (2) (2016) 330–348.
* [62] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
* [63] M. J. Todd, Minimum-Volume Ellipsoids, Society for Industrial and Applied Mathematics, 2016.
* [64] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bulletin of the American Mathematical Society 73 (4) (1967) 591–597.
* [65] Y. Nesterov, Primal-dual subgradient methods for convex problems, Mathematical Programming 120 (1) (2009) 221–259.
* [66] L. Xiao, Dual averaging methods for regularized stochastic learning and online optimization, Journal of Machine Learning Research 11 (Oct) (2010) 2543–2596.
* [67] Y. Nesterov, Smooth minimization of non-smooth functions, Mathematical Programming 103 (1) (2005) 127–152.
* [68] U. Helmke, J. B. Moore, Optimization and Dynamical Systems, Communications & Control Engineering, Springer Berlin Heidelberg, 1996.
* [69] F. Alvarez, J. Bolte, O. Brahic, Hessian Riemannian gradient flows in convex programming, SIAM Journal on Control and Optimization 43 (2) (2004) 477–501.
* [70] H. Attouch, J. Bolte, P. Redont, M. Teboulle, Singular Riemannian barrier methods and gradient-projection dynamical systems for constrained optimization, Optimization 53 (5-6) (2004) 435–454.
* [71] H. Attouch, M. Teboulle, Regularized Lotka-Volterra dynamical system as continuous proximal-like method in optimization, Journal of Optimization Theory and Applications 121 (3) (2004) 541–570.
* [72] J. Bolte, M. Teboulle, Barrier operators and associated gradient-like dynamical systems for constrained minimization problems, SIAM Journal on Control and Optimization 42 (4) (2003) 1266–1292.
* [73] I. M. Bomze, P. Mertikopoulos, W. Schachinger, M. Staudigl, Hessian barrier algorithms for linearly constrained optimization problems, SIAM Journal on Optimization 29 (3) (2019) 2100–2127.
* [74] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Computational Mathematics and Mathematical Physics 4 (5) (1964) 1–17.
* [75] W. Su, S. Boyd, E. J. Candes, A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights, Journal of Machine Learning Research 17 (153) (2016) 1–43.
* [76] A. Wibisono, A. C. Wilson, M. I. Jordan, A variational perspective on accelerated methods in optimization, Proceedings of the National Academy of Sciences 113 (47) (2016) E7351.
* [77] H. Attouch, Z. Chbani, J. Peypouquet, P. Redont, Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity, Mathematical Programming 168 (1-2) (2018) 123–175.
* [78] B. Bah, H. Rauhut, U. Terstiege, M. Westdickenberg, Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers, arXiv preprint arXiv:1910.05505.
* [79] B. Shi, S. S. Du, W. Su, M. I. Jordan, Acceleration via symplectic discretization of high-resolution differential equations, Advances in Neural Information Processing Systems (2019) 5744–5752.
* [80] H. Attouch, Z. Chbani, J. Fadili, H. Riahi, First-order optimization algorithms via inertial systems with Hessian driven damping, Mathematical Programming (2020).
* [81] H. Attouch, P. Redont, A. Soubeyran, A new class of alternating proximal minimization algorithms with costs-to-move, SIAM Journal on Optimization 18 (3) (2007) 1061–1081.
* [82] H. Attouch, A. Cabot, P. Frankel, J. Peypouquet, Alternating proximal algorithms for linearly constrained variational inequalities: application to domain decomposition for PDE’s, Nonlinear Analysis: Theory, Methods & Applications 74 (18) (2011) 7455–7473.
* [83] M. J. Feizollahi, S. Ahmed, A. Sun, Exact augmented Lagrangian duality for mixed integer linear programming, Mathematical Programming 161 (1) (2017) 365–387.
* [84] F. Lin, M. Fardad, M. R. Jovanović, Sparse feedback synthesis via the alternating direction method of multipliers, in: 2012 American Control Conference (ACC), Montreal, QC, 2012, pp. 4765–4770.
* [85] X. Yuan, Alternating direction method for covariance selection models, Journal of Scientific Computing 51 (2) (2012) 261–273.
* [86] J. Yang, Y. Zhang, Alternating direction algorithms for $\ell_{1}$-problems in compressive sensing, SIAM Journal on Scientific Computing 33 (1) (2011) 250–278.
* [87] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends® in Machine learning 3 (1) (2011) 1–122.
* [88] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM Journal on Control and Optimization 14 (5) (1976) 877–898.
* [89] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming, Mathematics of Operations Research 1 (2) (1976) 97–116.
* [90] R. Shefi, M. Teboulle, Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization, SIAM Journal on Optimization 24 (1) (2014) 269–297.
* [91] S. Banert, R. I. Bot, E. R. Csetnek, Fixing and extending some recent results on the ADMM algorithm, arXiv preprint arXiv:1612.05057.
* [92] A. X. Sun, D. T. Phan, S. Ghosh, Fully decentralized AC optimal power flow algorithms, in: 2013 IEEE Power & Energy Society General Meeting, IEEE, 2013, pp. 1–5.
* [93] A. Auslender, M. Teboulle, Asymptotic cones and functions in optimization and variational inequalities, Springer Science & Business Media, 2006.
* [94] D. Gabay, Applications of the method of multipliers to variational inequalities, Vol. 15, Elsevier, 1983, Ch. IX, pp. 299–331.
* [95] J. Eckstein, D. P. Bertsekas, On the Douglas—Rachford splitting method and the proximal point algorithm for maximal monotone operators, Mathematical Programming 55 (1-3) (1992) 293–318.
* [96] C. Chen, B. He, Y. Ye, X. Yuan, The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent, Mathematical Programming 155 (1-2) (2016) 57–79.
* [97] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision 40 (1) (2011) 120–145.
* [98] K. Arrow, L. Hurwicz, H. Uzawa, Studies in linear and non-linear programming., in: H. Chenery, S. Johnson, S. Karlin, T. Marschak, R. Solow (Eds.), Stanford Mathematical Studies in the Social Sciences, vol. II., Stanford University Press, Stanford, 1958.
* [99] M. Frank, P. Wolfe, An algorithm for quadratic programming, Naval Research Logistics Quarterly 3 (1-2) (1956) 95–110.
* [100] E. S. Levitin, B. T. Polyak, Constrained minimization methods, USSR Computational Mathematics and Mathematical Physics 6 (5) (1966) 1–50.
* [101] J. Kuczyński, H. Woźniakowski, Estimating the largest eigenvalue by the power and Lanczos algorithms with a random start, SIAM Journal on Matrix Analysis and Applications 13 (4) (1992) 1094–1122.
* [102] F. Facchinei, J.-s. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems - Volume I and Volume II, Springer Series in Operations Research, 2003.
* [103] M. Jaggi, Revisiting Frank-Wolfe: Projection-free sparse convex optimization., in: International Conference on Machine Learning, 2013, pp. 427–435.
* [104] W. Bian, X. Chen, Linearly constrained non-Lipschitz optimization for image restoration, SIAM Journal on Imaging Sciences 8 (4) (2015) 2294–2322.
* [105] W. Bian, X. Chen, Y. Ye, Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization, Mathematical Programming 149 (1) (2015) 301–327.
* [106] G. Haeser, H. Liu, Y. Ye, Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary, Mathematical Programming 178 (2019) 263–299.
* [107] F. Bach, Duality between subgradient and conditional gradient methods, SIAM Journal on Optimization 25 (1) (2015) 115–129.
* [108] Y. Nesterov, Complexity bounds for primal-dual methods minimizing the model of objective function, Mathematical Programming 171 (1-2) (2018) 311–330.
* [109] Y. Nesterov, Dual extrapolation and its applications to solving variational inequalities and related problems, Mathematical Programming 109 (2) (2007) 319–344.
* [110] J. C. Dunn, Rates of convergence for conditional gradient algorithms near singular and nonsingular extremals, SIAM Journal on Control and Optimization 17 (2) (1979) 187–211.
* [111] F. Pedregosa, G. Negiar, A. Askari, M. Jaggi, Linearly convergent Frank-Wolfe with backtracking line-search, in: International Conference on Artificial Intelligence and Statistics, PMLR, 2020, pp. 1–10.
* [112] P. Dvurechensky, P. Ostroukhov, K. Safin, S. Shtern, M. Staudigl, Self-concordant analysis of Frank-Wolfe algorithms, in: H. D. III, A. Singh (Eds.), Proceedings of the 37th International Conference on Machine Learning, Vol. 119 of Proceedings of Machine Learning Research, PMLR, Virtual, 2020, pp. 2814–2824.
* [113] P. Dvurechensky, K. Safin, S. Shtern, M. Staudigl, Generalized self-concordant analysis of Frank-Wolfe algorithms, arXiv:2010.01009.
* [114] R. M. Freund, P. Grigas, New analysis and results for the Frank–Wolfe method, Mathematical Programming 155 (1-2) (2016) 199–230.
* [115] G. Odor, Y.-H. Li, A. Yurtsever, Y.-P. Hsieh, Q. Tran-Dinh, M. El Halabi, V. Cevher, Frank-Wolfe works for non-Lipschitz continuous gradient objectives: scalable Poisson phase retrieval, in: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2016, pp. 6230–6234.
* [116] M. D. Canon, C. D. Cullum, A tight upper bound on the rate of convergence of Frank-Wolfe algorithm, SIAM Journal on Control 6 (4) (1968) 509–516.
* [117] G. Lan, The complexity of large-scale convex programming under a linear optimization oracle, arXiv preprint arXiv:1309.5550.
* [118] D. Garber, E. Hazan, Faster rates for the Frank-Wolfe method over strongly-convex sets, in: 32nd International Conference on Machine Learning, ICML 2015, 2015.
* [119] J. Guélat, P. Marcotte, Some comments on Wolfe’s ‘away step’, Mathematical Programming 35 (1) (1986) 110–119.
* [120] M. Epelman, R. M. Freund, Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system, Mathematical Programming 88 (3) (2000) 451–485.
* [121] A. Beck, M. Teboulle, A conditional gradient method with linear rate of convergence for solving convex linear systems, Mathematical Methods of Operations Research 59 (2) (2004) 235–247.
* [122] P. Wolfe, Convergence theory in nonlinear programming, in: J. Abadie (Ed.), Integer and nonlinear programming, North-Holland, Amsterdam, 1970.
* [123] S. Lacoste-Julien, M. Jaggi, On the global linear convergence of Frank-Wolfe optimization variants, Advances in neural information processing systems 28 (2015) 496–504.
* [124] A. Beck, S. Shtern, Linearly convergent away-step conditional gradient for non-strongly convex functions, Mathematical Programming 164 (1-2) (2017) 1–27.
* [125] S. M. Robinson, Generalized equations and their solutions, part ii: applications to nonlinear programming, in: Optimality and Stability in Mathematical Programming, Springer, 1982, pp. 200–221.
* [126] S. Damla Ahipasaoglu, P. Sun, M. J. Todd, Linear convergence of a modified Frank–Wolfe algorithm for computing minimum-volume enclosing ellipsoids, Optimization Methods and Software 23 (1) (2008) 5–19.
* [127] C. A. Holloway, An extension of the Frank and Wolfe method of feasible directions, Mathematical Programming 6 (1) (1974) 14–27.
* [128] B. Von Hohenbalken, Simplicial decomposition in nonlinear programming algorithms, Mathematical Programming 13 (1) (1977) 49–68.
* [129] D. Garber, E. Hazan, A linearly convergent variant of the conditional gradient algorithm under strong convexity, with applications to online and stochastic optimization, SIAM Journal on Optimization 26 (3) (2016) 1493–1528.
* [130] G. Lan, Y. Zhou, Conditional gradient sliding for convex optimization, SIAM Journal on Optimization 26 (2) (2016) 1379–1409.
* [131] A. Nemirovski, Orth-method for smooth convex optimization, Izvestia AN SSSR, Transl.: Eng. Cybern. Soviet J. Comput. Syst. Sci 2 (1982) 937–947.
* [132] Y. Nesterov, A. Gasnikov, S. Guminov, P. Dvurechensky, Primal-dual accelerated gradient methods with small-dimensional relaxation oracle, Optimization Methods and Software (2020) 1–28.
* [133] A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Sciences 2 (1) (2009) 183–202.
* [134] O. Devolder, Stochastic first order methods in smooth convex optimization, CORE Discussion Paper 2011/70.
* [135] G. Lan, An optimal method for stochastic composite optimization, Mathematical Programming 133 (1) (2012) 365–397.
* [136] S. Ghadimi, G. Lan, Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization i: A generic algorithmic framework, SIAM Journal on Optimization 22 (4) (2012) 1469–1492.
* [137] S. Ghadimi, G. Lan, Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, ii: Shrinking procedures and optimal algorithms, SIAM Journal on Optimization 23 (4) (2013) 2061–2089.
* [138] P. Dvurechensky, A. Gasnikov, Stochastic intermediate gradient method for convex problems with stochastic inexact oracle, Journal of Optimization Theory and Applications 171 (1) (2016) 121–145.
* [139] A. V. Gasnikov, P. E. Dvurechensky, Stochastic intermediate gradient method for convex optimization problems, Doklady Mathematics 93 (2) (2016) 148–151.
* [140] P. Dvurechensky, D. Dvinskikh, A. Gasnikov, C. A. Uribe, A. Nedić, Decentralize and randomize: Faster algorithm for Wasserstein barycenters, in: S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett (Eds.), Advances in Neural Information Processing Systems 31, NeurIPS 2018, Curran Associates, Inc., 2018, pp. 10783–10793.
* [141] A. d’Aspremont, Smooth optimization with approximate gradient, SIAM J. on Optimization 19 (3) (2008) 1171–1183.
* [142] O. Devolder, F. Glineur, Y. Nesterov, First-order methods of smooth convex optimization with inexact oracle, Mathematical Programming 146 (1) (2014) 37–75.
* [143] M. Cohen, J. Diakonikolas, L. Orecchia, On acceleration with noise-corrupted gradients, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Vol. 80 of Proceedings of Machine Learning Research, PMLR, Stockholmsmässan, Stockholm Sweden, 2018, pp. 1019–1028, arXiv:1805.12591.
* [144] A. V. Gasnikov, A. I. Tyurin, Fast gradient descent for convex minimization problems with an oracle producing a ($\delta$, L)-model of function at the requested point, Computational Mathematics and Mathematical Physics 59 (7) (2019) 1085–1097.
* [145] D. Kamzolov, P. Dvurechensky, A. V. Gasnikov, Universal intermediate gradient method for convex problems with inexact oracle, Optimization Methods and Software (2020) 1–28, arXiv:1712.06036.
URL https://doi.org/10.1080/10556788.2019.1711079
* [146] E. Gorbunov, D. Dvinskikh, A. Gasnikov, Optimal decentralized distributed algorithms for stochastic convex optimization, arXiv:1911.07363.
* [147] E. Gorbunov, M. Danilova, A. Gasnikov, Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping, in: Proceedings of the 33rd International Conference on Neural Information Processing Systems, NeurIPS 2020, 2020.
* [148] R. Frostig, R. Ge, S. Kakade, A. Sidford, Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization, in: F. Bach, D. Blei (Eds.), Proceedings of the 32nd International Conference on Machine Learning, Vol. 37 of Proceedings of Machine Learning Research, PMLR, Lille, France, 2015, pp. 2540–2548.
* [149] H. Lin, J. Mairal, Z. Harchaoui, A universal catalyst for first-order optimization, in: Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS’15, MIT Press, Cambridge, MA, USA, 2015, pp. 3384–3392.
* [150] Y. Zhang, L. Xiao, Stochastic primal-dual coordinate method for regularized empirical risk minimization, in: F. Bach, D. Blei (Eds.), Proceedings of the 32nd International Conference on Machine Learning, Vol. 37 of Proceedings of Machine Learning Research, PMLR, Lille, France, 2015, pp. 353–361.
* [151] Z. Allen-Zhu, Katyusha: The first direct acceleration of stochastic gradient methods, in: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, ACM, New York, NY, USA, 2017, pp. 1200–1205, arXiv:1603.05953.
* [152] G. Lan, Y. Zhou, An optimal randomized incremental gradient method, Mathematical Programming 171 (2018) 167–215.
* [153] A. Ivanova, A. Gasnikov, P. Dvurechensky, D. Dvinskikh, A. Tyurin, E. Vorontsova, D. Pasechnyuk, Oracle complexity separation in convex optimization, arXiv:2002.02706 WIAS Preprint No. 2711.
* [154] J. Diakonikolas, L. Orecchia, Alternating randomized block coordinate descent, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Vol. 80 of Proceedings of Machine Learning Research, PMLR, Stockholmsmässan, Stockholm Sweden, 2018, pp. 1224–1232.
* [155] S. Guminov, P. Dvurechensky, N. Tupitsa, A. Gasnikov, Accelerated alternating minimization, accelerated Sinkhorn’s algorithm and accelerated Iterative Bregman Projections, arXiv:1906.03622 WIAS Preprint No. 2695.
* [156] Y. Nesterov, Efficiency of coordinate descent methods on huge-scale optimization problems, SIAM Journal on Optimization 22 (2) (2012) 341–362.
* [157] Y. T. Lee, A. Sidford, Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems, in: Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, FOCS ’13, IEEE Computer Society, Washington, DC, USA, 2013, pp. 147–156.
* [158] O. Fercoq, P. Richtárik, Accelerated, parallel, and proximal coordinate descent, SIAM Journal on Optimization 25 (4) (2015) 1997–2023.
* [159] Q. Lin, Z. Lu, L. Xiao, An accelerated proximal coordinate gradient method, in: Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 27, Curran Associates, Inc., 2014, pp. 3059–3067, first appeared in arXiv:1407.1296.
* [160] Y. Nesterov, S. U. Stich, Efficiency of the accelerated coordinate descent method on structured optimization problems, SIAM Journal on Optimization 27 (1) (2017) 110–123, first presented in May 2015 http://www.mathnet.ru:8080/PresentFiles/11909/7_nesterov.pdf.
* [161] A. Gasnikov, P. Dvurechensky, I. Usmanova, On accelerated randomized methods, Proceedings of Moscow Institute of Physics and Technology 8 (2) (2016) 67–100, in Russian, first appeared in arXiv:1508.02182.
* [162] Z. Allen-Zhu, Z. Qu, P. Richtarik, Y. Yuan, Even faster accelerated coordinate descent using non-uniform sampling, in: M. F. Balcan, K. Q. Weinberger (Eds.), Proceedings of The 33rd International Conference on Machine Learning, Vol. 48 of Proceedings of Machine Learning Research, PMLR, New York, New York, USA, 2016, pp. 1110–1119.
* [163] S. Shalev-Shwartz, T. Zhang, Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization, in: E. P. Xing, T. Jebara (Eds.), Proceedings of the 31st International Conference on Machine Learning, Vol. 32 of Proceedings of Machine Learning Research, PMLR, Bejing, China, 2014, pp. 64–72, first appeared in arXiv:1309.2375.
* [164] P. Dvurechensky, A. Gasnikov, A. Tiurin, Randomized similar triangles method: A unifying framework for accelerated randomized optimization methods (coordinate descent, directional search, derivative-free method), arXiv:1707.08486.
* [165] Y. Nesterov, V. Spokoiny, Random gradient-free minimization of convex functions, Found. Comput. Math. 17 (2) (2017) 527–566.
* [166] E. Gorbunov, P. Dvurechensky, A. Gasnikov, An accelerated method for derivative-free smooth stochastic convex optimization, arXiv:1802.09022.
* [167] E. A. Vorontsova, A. V. Gasnikov, E. A. Gorbunov, P. E. Dvurechenskii, Accelerated gradient-free optimization methods with a non-euclidean proximal operator, Automation and Remote Control 80 (8) (2019) 1487–1501.
* [168] E. A. Vorontsova, A. V. Gasnikov, E. A. Gorbunov, Accelerated directional search with non-euclidean prox-structure, Automation and Remote Control 80 (4) (2019) 693–707.
* [169] P. Dvurechensky, E. Gorbunov, A. Gasnikov, An accelerated directional derivative method for smooth stochastic convex optimization, European Journal of Operational Research 290 (2) (2021) 601–621.
* [170] Y. Nesterov, Accelerating the cubic regularization of Newton’s method on convex problems, Mathematical Programming 112 (1) (2008) 159–181.
* [171] M. Baes, Estimate sequence methods: extensions and approximations, Institute for Operations Research, ETH, Zürich, Switzerland.
* [172] Y. Nesterov, Implementable tensor methods in unconstrained convex optimization, Mathematical Programming (2019).
* [173] A. Gasnikov, P. Dvurechensky, E. Gorbunov, E. Vorontsova, D. Selikhanovych, C. A. Uribe, B. Jiang, H. Wang, S. Zhang, S. Bubeck, Q. Jiang, Y. T. Lee, Y. Li, A. Sidford, Near optimal methods for minimizing convex functions with Lipschitz $p$-th derivatives, in: A. Beygelzimer, D. Hsu (Eds.), Proceedings of the Thirty-Second Conference on Learning Theory, Vol. 99 of Proceedings of Machine Learning Research, PMLR, Phoenix, USA, 2019, pp. 1392–1393, arXiv:1809.00382.
* [174] P. Tseng, On accelerated proximal gradient methods for convex-concave optimization, Tech. rep., MIT (2008).
URL http://www.mit.edu/~dimitrib/PTseng/papers/apgm.pdf
* [175] A. Gasnikov, Yu. Nesterov, Universal method for stochastic composite optimization problems, Computational Mathematics and Mathematical Physics 58 (1) (2018) 48–64.
* [176] I. Necoara, Y. Nesterov, F. Glineur, Linear convergence of first order methods for non-strongly convex optimization, Mathematical Programming 175 (1) (2019) 69–107.
* [177] J. Bolte, T. P. Nguyen, J. Peypouquet, B. W. Suter, From error bounds to the complexity of first-order descent methods for convex functions, Mathematical Programming 165 (2) (2017) 471–507.
* [178] A. Nemirovskii, Y. Nesterov, Optimal methods of smooth convex minimization, USSR Computational Mathematics and Mathematical Physics 25 (2) (1985) 21 – 30.
* [179] A. Juditsky, Y. Nesterov, Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization, Stochastic Systems 4 (1) (2014) 44–80.
* [180] V. Roulet, A. d’Aspremont, Sharpness, restart and acceleration, in: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett (Eds.), Advances in Neural Information Processing Systems 30, Curran Associates, Inc., 2017, pp. 1119–1129.
* [181] A. Bayandina, P. Dvurechensky, A. Gasnikov, F. Stonyakin, A. Titov, Mirror descent and convex optimization problems with non-smooth inequality constraints, in: P. Giselsson, A. Rantzer (Eds.), Large-Scale and Distributed Optimization, Springer International Publishing, 2018, Ch. 8, pp. 181–215, arXiv:1710.06612.
* [182] Z. Allen-Zhu, E. Hazan, Optimal black-box reductions between optimization objectives, in: D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, R. Garnett (Eds.), Advances in Neural Information Processing Systems, Vol. 29, Curran Associates, Inc., 2016, pp. 1614–1622.
* [183] O. Fercoq, Z. Qu, Restarting the accelerated coordinate descent method with a rough strong convexity estimate, Computational Optimization and Applications 75 (1) (2020) 63–91.
* [184] G. Lan, The complexity of large-scale convex programming under a linear optimization oracle, arXiv:1309.5550.
* [185] T. Kerdreux, A. d’Aspremont, S. Pokutta, Restarting Frank-Wolfe, in: K. Chaudhuri, M. Sugiyama (Eds.), Proceedings of Machine Learning Research, Vol. 89 of Proceedings of Machine Learning Research, PMLR, 2019, pp. 1275–1283.
* [186] F. Stonyakin, A. Gasnikov, P. Dvurechensky, M. Alkousa, A. Titov, Generalized Mirror Prox for monotone variational inequalities: Universality and inexact oracle, arXiv:1806.05140.
* [187] O. Devolder, Exactness, inexactness and stochasticity in first-order methods for large-scale convex optimization, Ph.D. thesis, UC Louvain (2013).
* [188] E. J. Candes, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory 52 (2) (2006) 489–509.
* [189] D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory 52 (4) (2006) 1289–1306.
* [190] E. Candes, T. Tao, The Dantzig selector: Statistical estimation when $p$ is much larger than $n$, The Annals of Statistics 35 (6) (2007) 2313–2351.
* [191] A. Beck, M. Teboulle, Smoothing and first order methods: A unified framework, SIAM Journal on Optimization 22 (2) (2012) 557–580.
* [192] A. Nemirovski, Prox-method with rate of convergence $O(1/t)$ for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems, SIAM Journal on Optimization 15 (1) (2004) 229–251.
* [193] Y. Chen, G. Lan, Y. Ouyang, Accelerated schemes for a class of variational inequalities, Mathematical Programming 165 (2017) 113–149.
* [194] Y. Nesterov, Excessive gap technique in nonsmooth convex minimization, SIAM Journal on Optimization 16 (1) (2005) 235–249.
* [195] Q. Tran-Dinh, V. Cevher, Constrained convex minimization via model-based excessive gap, in: Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS’14, MIT Press, Cambridge, MA, USA, 2014, pp. 721–729.
* [196] Q. Tran-Dinh, O. Fercoq, V. Cevher, A smooth primal-dual optimization framework for nonsmooth composite convex minimization, SIAM Journal on Optimization 28 (1) (2018) 96–134, arXiv:1507.06243.
* [197] A. Alacaoglu, Q. Tran Dinh, O. Fercoq, V. Cevher, Smooth primal-dual coordinate descent algorithms for nonsmooth convex optimization, in: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett (Eds.), Advances in Neural Information Processing Systems 30, Curran Associates, Inc., 2017, pp. 5852–5861.
* [198] Q. Tran-Dinh, A. Alacaoglu, O. Fercoq, V. Cevher, An adaptive primal-dual framework for nonsmooth convex minimization, Mathematical Programming Computation 12 (3) (2020) 451–491.
* [199] A. Chernov, P. Dvurechensky, A. Gasnikov, Fast primal-dual gradient method for strongly convex minimization problems with linear constraints, in: Y. Kochetov, M. Khachay, V. Beresnev, E. Nurminski, P. Pardalos (Eds.), Discrete Optimization and Operations Research: 9th International Conference, DOOR 2016, Vladivostok, Russia, September 19-23, 2016, Proceedings, Springer International Publishing, 2016, pp. 391–403.
* [200] P. Dvurechensky, A. Gasnikov, E. Gasnikova, S. Matsievsky, A. Rodomanov, I. Usik, Primal-dual method for searching equilibrium in hierarchical congestion population games, in: Supplementary Proceedings of the 9th International Conference on Discrete Optimization and Operations Research and Scientific School (DOOR 2016) Vladivostok, Russia, September 19 - 23, 2016, 2016, pp. 584–595, arXiv:1606.08988.
* [201] A. S. Anikin, A. V. Gasnikov, P. E. Dvurechensky, A. I. Tyurin, A. V. Chernov, Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints, Computational Mathematics and Mathematical Physics 57 (8) (2017) 1262–1276.
* [202] P. Dvurechensky, A. Gasnikov, A. Kroshnin, Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn’s algorithm, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Vol. 80 of Proceedings of Machine Learning Research, 2018, pp. 1367–1376, arXiv:1802.04367.
* [203] P. Dvurechensky, A. Gasnikov, S. Omelchenko, A. Tiurin, A stable alternative to Sinkhorn’s algorithm for regularized optimal transport, in: A. Kononov, M. Khachay, V. A. Kalyagin, P. Pardalos (Eds.), Mathematical Optimization Theory and Operations Research, Springer International Publishing, Cham, 2020, pp. 406–423.
* [204] S. V. Guminov, Y. E. Nesterov, P. E. Dvurechensky, A. V. Gasnikov, Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems, Doklady Mathematics 99 (2) (2019) 125–128.
* [205] A. Kroshnin, N. Tupitsa, D. Dvinskikh, P. Dvurechensky, A. Gasnikov, C. Uribe, On the complexity of approximating Wasserstein barycenters, in: K. Chaudhuri, R. Salakhutdinov (Eds.), Proceedings of the 36th International Conference on Machine Learning, Vol. 97 of Proceedings of Machine Learning Research, PMLR, Long Beach, California, USA, 2019, pp. 3530–3540, arXiv:1901.08686.
* [206] A. Ivanova, P. Dvurechensky, A. Gasnikov, D. Kamzolov, Composite optimization for the resource allocation problem, Optimization Methods and Software 0 (0) (2020) 1–35, arXiv:1810.00595.
* [207] T. Lin, N. Ho, M. Jordan, On efficient optimal transport: An analysis of greedy and accelerated mirror descent algorithms, in: K. Chaudhuri, R. Salakhutdinov (Eds.), Proceedings of the 36th International Conference on Machine Learning, Vol. 97 of Proceedings of Machine Learning Research, PMLR, Long Beach, California, USA, 2019, pp. 3982–3991.
* [208] C. A. Uribe, D. Dvinskikh, P. Dvurechensky, A. Gasnikov, A. Nedić, Distributed computation of Wasserstein barycenters over networks, in: 2018 IEEE Conference on Decision and Control (CDC), 2018, pp. 6544–6549, arXiv:1803.02933.
* [209] T. Lin, N. Ho, M. Cuturi, M. I. Jordan, On the Complexity of Approximating Multimarginal Optimal Transport, arXiv e-printsArXiv:1910.00152.
* [210] T. Lin, N. Ho, X. Chen, M. Cuturi, M. I. Jordan, Computational Hardness and Fast Algorithm for Fixed-Support Wasserstein Barycenter, arXiv e-prints (2020) arXiv:2002.04783
* [211] N. Tupitsa, P. Dvurechensky, A. Gasnikov, C. A. Uribe, Multimarginal optimal transport by accelerated alternating minimization, in: 2020 IEEE 59th Conference on Decision and Control (CDC), 2020, (accepted), arXiv:2004.02294.
* [212] R. Krawtschenko, C. A. Uribe, A. Gasnikov, P. Dvurechensky, Distributed optimization with quantization for computing Wasserstein barycenters, arXiv:2010.14325
* [213] Y. Nesterov, Universal gradient methods for convex optimization problems, Mathematical Programming 152 (1) (2015) 381–404.
* [214] Y. Malitsky, T. Pock, A first-order primal-dual algorithm with linesearch, SIAM Journal on Optimization 28 (1) (2018) 411–432.
* [215] D. Dvinskikh, A. Ogaltsov, A. Gasnikov, P. Dvurechensky, V. Spokoiny, On the line-search gradient methods for stochastic optimization, IFAC-PapersOnLine21th IFAC World Congress, accepted, arXiv:1911.08380.
* [216] D. R. Baimurzina, A. V. Gasnikov, E. V. Gasnikova, P. E. Dvurechensky, E. I. Ershov, M. B. Kubentaeva, A. A. Lagunovskaya, Universal method of searching for equilibria and stochastic equilibria in transportation networks, Computational Mathematics and Mathematical Physics 59 (1) (2019) 19–33, arXiv:1701.02473.
* [217] A. Yurtsever, Q. Tran-Dinh, V. Cevher, A universal primal-dual convex optimization framework, in: Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS’15, MIT Press, Cambridge, MA, USA, 2015, pp. 3150–3158.
* [218] S. Ghadimi, G. Lan, H. Zhang, Generalized uniformly optimal methods for nonlinear programming, Journal of Scientific Computing 79 (3) (2019) 1854–1881, arXiv:1508.07384.
* [219] B. T. Polyak, Introduction to Optimization, Optimization Software, 1987.
* [220] J. Duchi, E. Hazan, Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization, Journal of Machine Learning Research 12 (Jul.) (2011) 2121–2159.
* [221] K. Y. Levy, A. Yurtsever, V. Cevher, Online adaptive methods, universality and acceleration, in: S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett (Eds.), Advances in Neural Information Processing Systems 31, Curran Associates, Inc., 2018, pp. 6500–6509, arXiv:1809.02864.
* [222] F. Bach, K. Y. Levy, A universal algorithm for variational inequalities adaptive to smoothness and noise, in: A. Beygelzimer, D. Hsu (Eds.), Proceedings of the Thirty-Second Conference on Learning Theory, Vol. 99 of Proceedings of Machine Learning Research, PMLR, Phoenix, USA, 2019, pp. 164–194, arXiv:1902.01637.
* [223] K. Antonakopoulos, E. V. Belmega, P. Mertikopoulos, Adaptive extra-gradient methods for min-max optimization and games, arXiv:2010.12100.
* [224] R. Zhao, R. M. Freund, Analysis of the Frank-Wolfe method for logarithmically-homogeneous barriers, with an extension, arXiv:2010.08999.
* [225] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, Advances and open problems in federated learning, arXiv preprint arXiv:1912.04977.
* [226] E. Gorbunov, A. Rogozin, A. Beznosikov, D. Dvinskikh, A. Gasnikov, Recent theoretical advances in decentralized distributed convex optimization, arXiv preprint arXiv:2011.13259.
* [227] R. Sun, Optimization for deep learning: theory and algorithms, arXiv preprint arXiv:1912.08957.
* [228] M. Danilova, P. Dvurechensky, A. Gasnikov, E. Gorbunov, S. Guminov, D. Kamzolov, I. Shibaev, Recent theoretical advances in non-convex optimization, arXiv:2012.06188.
* [229] M. B. Cohen, A. Sidford, K. Tian, Relative lipschitzness in extragradient methods and a direct recipe for acceleration, arXiv:2011.06572.
*[DGF]: distance generating function
# Spectrum of Landau Levels in GaAs Quantum Wells
Imran Khan , Bipin Singh Koranga and Sunil Kumar
###### Abstract
We have studied the electroluminescence (EL) spectra from an n-i-p LED as a
function of magnetic field. The sample incorporated three GaAs quantum wells
in the intrinsic region. The device had excess n-type doping, and as a result
the quantum wells were populated by a 2D Landau electron gas. The broad B = 0
emission band evolved into a series of discrete features in the presence of a
magnetic field. These were identified as inter-band transitions between the
$\ell$ = 0, 1, and 2 Landau levels associated with the $e_{1}$ and $h_{1}$
sub-bands, obeying the selection rule $\Delta\ell$ = 0. The EL spectra were
analyzed into their $\sigma^{+}$ (LCP) and $\sigma^{-}$ (RCP) components. An
energy splitting between the two polarized components was observed for each
Landau level transition and was found to be equal to the sum of the conduction
and valence band spin splittings. We used the known value of the electron's
g-factor [11] to determine the valence band spin splittings. Our experimental
values were compared to the numerically calculated values shown in [12] and
were found to be in reasonable agreement.
Ramjas College (University of Delhi), Delhi-110007, India
Kirori Mal College (University of Delhi), Delhi-110007, India
Ramjas College (University of Delhi), Delhi-110007, India
## 1 Introduction
The injection of spin-polarized electrons from a ferromagnetic Fe contact into
empty quantum wells has been studied in detail [1] & [2]. We have also studied
injection of electrons into GaAs quantum wells which are occupied by a dilute
electron gas (areal electron density less than $10^{11}cm^{-2}$ . In these
devices the emission is excitonic in nature. In this chapter we describe
magneto-EL studies of Spin-LEDs which incorporate quantum wells occupied by a
dense electron gas, areal electron density less than $10^{12}cm^{-2}$. In this
system the excitons are screened [3], [4] & [5] and only interband transitions
among the $e_{1}$ and $h_{1}$ subband Landau levels are observed in the
emission spectra.Addd—-see
## 2 Bandstructure of Sample
In table 1 we give a description of the sample used in our studies.
Layer | Description | Thickness (Å)
---|---|---
buffer | p+ GaAs | 1250
p+ barrier | p+ AlGaAs (25% Al) | 250
Undoped barrier | AlGaAs | 250
Quantum Well #1 | GaAs | 125
Undoped barrier | AlGaAs (25% Al) | 300
Quantum Well #2 | GaAs | 125
Undoped barrier | AlGaAs (25% Al) | 100
Quantum Well #3 | GaAs | 125
Undoped barrier | AlGaAs (25% Al) | 100
n ~ 1e17 | n- AlGaAs | 430
transition | transition AlGaAs | 150
n ~ 1e19 | n+ AlGaAs | 150
cap | Fe | 160
Table 1: Description of the sample used and its dimensions.
A schematic diagram of the bandstructure of the device studied is shown in
fig.(1) under flat band and high forward bias conditions. It consists of three
125 Å GaAs quantum wells separated by 300 Å AlGaAs barriers.
Figure 1: Schematic diagram of the heterostructure studied.
The doping level of the n-type section of the n-i-p junction was higher than
the doping of its p-type component. The calculated band diagram at zero bias
is shown in fig.(2). The diagram was generated by the “1D Poisson” program for
calculating energy band diagrams of semiconductor structures, written by Greg
Snider of the University of Notre Dame [ref] (www.nd.edu/~gsnider/). We note
that the $e_{1}$ subbands of all three quantum wells in this LED lie above the
Fermi level (dotted line); as a result, at V = 0 all three quantum wells are
empty.
Figure 2: Calculated band diagram at zero bias [ref].
## 3 Results and Discussion
The two circularly polarized components ($\sigma^{+}$, red, and $\sigma^{-}$,
black) of the EL spectra recorded at T = 7 K in the presence of a magnetic
field B = 5 tesla under low bias conditions are shown in fig.(3). Under these
conditions the excess donors in the structure have not released their
electrons into the quantum wells, and thus the quantum wells are empty. As a
result, the emission feature at 12400 $cm^{-1}$ in fig.(3) is excitonic in
nature (the $e_{1}\,h_{1}$ exciton).
Figure 3: EL spectra at T = 7 K and B = 5 T.
The blue line in fig.(3) represents the circular polarization P of the
spectrum. We note that the polarization maximum is on the high energy side of
the EL peak but still lies within the linewidth of the emission. We attribute
the maximum in P at 12411 $cm^{-1}$ to the free exciton, and the maximum in EL
intensity at 12400 $cm^{-1}$ to the bound excitons. In fig.(4) we plot the
circular polarization P versus magnetic field B at the polarization maximum
(red circles) and at the EL intensity maximum (black squares).
Figure 4: Circular polarization P versus magnetic field B.
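For concreteness, the degree of circular polarization P plotted in fig.(4) is presumably the standard intensity ratio of the two analyzed components; the text does not spell the definition out, so the sketch below is an assumption.

```python
# Degree of circular polarization from the sigma+ and sigma- EL intensities.
# The standard definition P = (I+ - I-)/(I+ + I-) is assumed here.
def circular_polarization(I_plus, I_minus):
    """Return P in the range [-1, 1] from the two analyzed intensities."""
    return (I_plus - I_minus) / (I_plus + I_minus)
```

With this convention, P = +1 (or -1) corresponds to fully $\sigma^{+}$- (or $\sigma^{-}$-) polarized emission, and P = 0 to unpolarized light.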
Both sets of data show clear evidence of spin injection from the top Fe layer.
The polarization mirrors the out-of-plane magnetization of the Fe contact [2].
The bound excitons (black squares) show polarizations below that of the free
exciton (red circles) because, in addition to the recombining electron-hole
pair, the impurity atom on which the exciton is bound is involved. The data of
fig.(4) clearly show that the Fe/AlGaAs(n) Schottky barrier is an efficient
injector of spin-polarized electrons into the empty quantum wells. The device
current was increased to obtain the flat band conditions shown in fig.(1). The
expectation was that the excess donors would release their electrons into the
three quantum wells and that these electrons would be equally distributed
among them. The zero-field EL spectrum at T = 7 K, for a current of 1 mA, is
shown in fig.(5).
Figure 5: Zero field EL spectrum at T = 6 K, for a current of 1 mA.
The spectrum contains two unresolved features which can be separated using the
line-fitting program “Peakfit”. The calculated features are labeled “A” and
“B” and are also plotted in fig.(5). Feature A (green line) at 12382 $cm^{-1}$
is identified as the $e_{1}\,h_{1}$ exciton, while feature B (blue line) at
12362 $cm^{-1}$ is attributed to the recombination of the two-dimensional
electron gas with holes [6], [7] & [8]. It is clear from the EL spectrum of
fig.(5) that the electrons have populated one or two quantum wells; feature B
is associated with the populated well(s). At least one quantum well is empty;
feature A is the excitonic emission from one or more empty wells. In order to
verify this, we applied a magnetic field perpendicular to the GaAs layers. The
EL spectrum analyzed as $\sigma^{-}$ is shown in fig.(6) for B = 6 tesla.
Figure 6: The EL spectrum analyzed as $\sigma^{-}$ for B = 6 tesla.
On the same figure we also give the various spectral components into which the
EL was deconvoluted using the “Peakfit” program. The feature labeled “Imp” at
12249 $cm^{-1}$ becomes weaker as the device temperature is raised and then
totally disappears; it is therefore attributed to an impurity-related
transition. The feature labeled “$e_{1}\,h_{1}$” at 12373 $cm^{-1}$ is
identified as the unscreened exciton from an empty quantum well. The features
labeled 0 at 12380 $cm^{-1}$, 1 at 12454 $cm^{-1}$, and 2 at 12544 $cm^{-1}$
are attributed to interband transitions between the $\ell$ = 0, 1, and 2
conduction and valence band Landau levels, respectively, associated with
feature B in fig.(5). Thus at low temperatures we have two types of quantum
wells in this device: one quantum well that is empty and is associated with
the $e_{1}\,h_{1}$ exciton, and two quantum wells that are occupied by a
two-dimensional electron gas. Under some bias conditions we actually see two
sets of Landau levels which are slightly displaced in energy. This indicates
that the remaining two quantum wells are occupied by electrons but the two
electron populations are unequal, which accounts for the different bandgap
renormalization [9] & [10].
Under these conditions we found it impossible to disentangle the polarizations
of the two types of quantum wells. The study of spin injection will have to be
carried out on pairs of Fe Spin-LEDs: one LED in which the quantum well is
empty (equal n- and p-type doping) and the other in which the quantum well is
occupied by a two-dimensional electron gas (excess n-type doping). The
comparison of the polarizations between the two emission spectra is expected
to give us a clear idea of what happens when spin-polarized electrons are
injected into a quantum well which is already occupied by a dense electron
gas. Given these constraints, we decided to raise the sample temperature in
the hope that this would result in populating the three quantum wells with
equal numbers of electrons. The zero-field emission spectrum at T = 75 K for
I = 1.5 mA is shown in fig.(7). It closely resembles the emission spectra from
n-type modulation-doped QWs.
Figure 7: Zero field emission spectrum at T = 75 K for I = 1.5 mA.
In the presence of an external magnetic field applied perpendicular to the
heterostructure’s layers the continuum breaks into a Landau fan and there is
no evidence of the $e_{1}\,h_{1}$ exciton; instead, distinct features appear
in the EL spectra. An example is given in fig.(8) for B = 6.5 T.
Figure 8: EL spectra in the presence of a magnetic field B = 6.5 T.
Both circular polarization components ($\sigma^{+}$ and $\sigma^{-}$) are
shown in the same figure. The emission features are labeled using the quantum
number of the Landau levels associated with the $e_{1}$ and $h_{1}$ subbands
that are involved in a particular EL feature. A schematic diagram of the
energy states in the conduction and valence bands and the allowed interband
transitions is shown in fig.(9).
Figure 9: Schematic diagram of the energy states in the conduction and valence
band.
The interband transitions occur between Landau levels with the same quantum
number, i.e. they obey the selection rule $\Delta\ell = 0$. In this diagram
the spin splitting has been omitted. The various inter-band transitions among
the spin-split Landau levels are shown in fig.(11). In fig.(10) we plot the
energies of the inter-band transitions measured in our experiment as a
function of magnetic field.
Figure 10: Energies of the interband transitions as a function of magnetic
field.
We have identified EL features associated with the $\ell$ = 0, 1, and 2 Landau
levels of the $e_{1}$ and $h_{1}$ confinement sub-bands. Each EL feature is
analyzed into its $\sigma^{+}$ and $\sigma^{-}$ components, and the energy of
each component is plotted separately. A schematic diagram of the conduction
and valence sub-band Landau level spin splittings, as well as the allowed
transitions in the Faraday geometry, is shown in fig.(11). The spin splittings
have been greatly exaggerated for the sake of clarity.
Figure 11: Schematic diagram of the conduction and valence subband Landau
level spin splitting.
The energies of the $\sigma^{+}$ and $\sigma^{-}$ components of the photon
associated with the recombination among the Landau levels with quantum number
$l$ are given by the equations:
$E_{l}(\sigma+)=E_{l}(B)+\frac{e_{s}+h_{s}}{2},$ (1)
$E_{l}(\sigma-)=E_{l}(B)-\frac{e_{s}+h_{s}}{2},$ (2)
where
$E_{l}(B)=E_{g}^{*}+\left(l+\frac{1}{2}\right)\left(\hbar\omega_{ce}+\hbar\omega_{ch}\right).$
Here $e_{s}$ and $h_{s}$ are the electron and hole spin splittings, and
$\omega_{ce}$ and $\omega_{ch}$ are the electron and hole cyclotron
frequencies, respectively. The splitting between $E_{l}(\sigma+)$ and
$E_{l}(\sigma-)$ is equal to $e_{s}+h_{s}$. The conduction band spin splitting
$e_{s}$ is equal to $g^{*}\mu_{B}B$ and can be calculated because the
effective Landé g-factor of the GaAs conduction band has been measured [11].
From our experimental values of the difference
$E_{l}(\sigma+)-E_{l}(\sigma-)$ we can extract the hole spin splitting and
compare it with the calculated values [12] shown in fig.(12).
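Eqs. (1)-(2) can be evaluated numerically. The sketch below is illustrative only: the effective masses (0.067 $m_{0}$ for electrons and a nominal 0.34 $m_{0}$ for the holes) and the renormalized gap $E_{g}^{*}$ are assumed values, not numbers taken from this text.

```python
# Sketch of Eqs. (1)-(2): sigma+/sigma- interband transition energies (cm^-1)
# for the l-th Landau level. Masses and Eg_star are illustrative assumptions.
HBAR = 1.0546e-34    # reduced Planck constant, J*s
E_CH = 1.602e-19     # elementary charge, C
M0 = 9.109e-31       # free-electron mass, kg
J_TO_CM = 1.986e-23  # h*c in J*cm: divide an energy in J by this for cm^-1

def transition_energies(l, B, Eg_star=12380.0,
                        m_e=0.067 * M0, m_h=0.34 * M0, spin_split=0.0):
    """Return (E_sigma_plus, E_sigma_minus) for Landau index l at field B (T)."""
    w_ce = HBAR * E_CH * B / m_e / J_TO_CM  # electron cyclotron energy, cm^-1
    w_ch = HBAR * E_CH * B / m_h / J_TO_CM  # hole cyclotron energy, cm^-1
    E_l = Eg_star + (l + 0.5) * (w_ce + w_ch)
    return E_l + spin_split / 2, E_l - spin_split / 2
```

With these assumed masses the Landau fan at B = 6.5 T has a level spacing of roughly 110 $cm^{-1}$, of the same order as the spacings between the features labeled 0, 1, and 2 in fig.(6).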
Figure 12: Calculated Landau-level structure for holes in the valence band of
a GaAs/Al0.3Ga0.7As quantum well structure with 12.5-nm wells.
An earlier study [12] measured the hole cyclotron energies $\hbar\omega_{ch}$
for the $m_{j}=\pm\frac{3}{2}$ holes involved in the transitions; these
cyclotron energies are indicated by the double-headed arrows in fig.(12). In
our experiment we have measured the energy indicated by the dotted double
arrow in fig.(12). The splitting for the $\ell$ = 0 Landau level transition is
too small to be determined reliably. For $\ell$ = 1, our experimental value is
$e_{s}+h_{s}$ = 29.8 $cm^{-1}$ at B = 7 tesla. The calculated value for
$e_{s}$ is 1.41 $cm^{-1}$, from which we extract a value of 28.4 $cm^{-1}$ for
$h_{s}$; the calculated value is 27.9 $cm^{-1}$ at B = 7 tesla. For $\ell$ = 2
we get an experimental splitting of 22 $cm^{-1}$. Unfortunately we do not have
calculated energies for the spin components of this Landau level.
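The arithmetic above can be checked directly; a minimal sketch, assuming $|g^{*}| = 0.44$ for the GaAs conduction band (the commonly quoted value; the text itself only cites [11]):

```python
# Extracting the hole spin splitting h_s from the measured sigma+/sigma-
# splitting at B = 7 T, assuming |g*| = 0.44 for the GaAs conduction band.
MU_B = 9.274e-24     # Bohr magneton, J/T
J_TO_CM = 1.986e-23  # h*c in J*cm: divide an energy in J by this for cm^-1
g_star = 0.44        # assumed magnitude of the conduction-band g-factor
B = 7.0              # magnetic field, tesla

e_s = g_star * MU_B * B / J_TO_CM  # conduction-band spin splitting, cm^-1
measured = 29.8                    # measured E(sigma+) - E(sigma-), cm^-1
h_s = measured - e_s               # extracted hole spin splitting, cm^-1
print(f"e_s = {e_s:.2f} cm^-1, h_s = {h_s:.1f} cm^-1")
```

This reproduces the quoted values, $e_{s}\approx 1.4$ $cm^{-1}$ and $h_{s}\approx 28.4$ $cm^{-1}$, to within rounding.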
## 4 Conclusions
The work described in this chapter has shown that magneto-EL studies of
quantum wells are a useful tool for the study of the energy states of the
system. In the past there have been extensive magneto-PL studies of quantum
wells, in which a central concern is to minimize the excess energy introduced
into the system by the fact that the exciting photon energy has to be above
the inter-band transition under study. In EL experiments the electrons and
holes are introduced into the well without excess energy. We plan to carry out
magneto-EL studies in locally grown LEDs. In addition we plan to pursue
studies of spin LEDs which incorporate only one quantum well. In these
experiments we intend to measure the polarization emission characteristics
from a series of diodes in which the quantum wells have the same well width
but different electron concentrations.
## References
* [1] Hanbicki, A.T., et al., Efficient electrical spin injection from a magnetic metal/tunnel barrier contact into a semiconductor. Applied Physics Letters, 2002. 80(7): p. 1240-1242.
* [2] Hanbicki, A.T., et al., Analysis of the transport process providing spin injection through an Fe/AlGaAs Schottky barrier. Applied Physics Letters, 2003. 82(23): p. 4092-4094.
* [3] Schmitt-Rink, S., C. Ell, and H. Haug, Many-Body Effects in the Absorption, Gain, and Luminescence Spectra of Semiconductor Quantum-Well Structures. Physical Review B, 1986. 33(2): p. 1183-1189.
* [4] Ruckenstein, A.E. and S. Schmitt-Rink, Many-Body Aspects of the Optical Spectra of Bulk and Low-Dimensional Doped Semiconductors. Physical Review B, 1987. 35(14): p. 7551-7557.
* [5] Ruckenstein, A.E., S. Schmitt-Rink, and R.C. Miller, Infrared and Polarization Anomalies in the Optical Spectra of Modulation-Doped Semiconductor Quantum-Well Structures. Physical Review Letters, 1986. 56(5): p. 504-507.
* [6] Perry, C.H., et al., Magneto-Luminescence in Modulation-Doped AlGaAs-GaAs Multiple Quantum Well Heterostructures. in Proceedings of the International Conference, Würzburg, Fed. Rep. of Germany. 1986.
* [7] Schmiedel, T., et al., Subband Tuning in Semiconductor Quantum Wells Using Narrow Barriers. Journal of Applied Physics, 1992. 72(10): p. 4753-4756.
* [8] Kioseoglou, G., et al., Magnetoluminescence study of n-type modulation-doped ZnSe/ZnxCd1-xSe quantum-well structures. Physical Review B, 1997. 55(7): p. 4628-4632.
* [9] Zhang, Y.H., R. Cingolani, and K. Ploog, Density-Dependent Band-Gap Renormalization of One-Component Plasma in GaxIn1-xAs/AlyIn1-yAs Single Quantum Wells. Physical Review B, 1991. 44(11): p. 5958-5961.
* [10] Hawrylak, P., Optical Properties of a Two-Dimensional Electron Gas: Evolution of Spectra from Excitons to Fermi-Edge Singularities. Physical Review B, 1991. 44(8): p. 3821-3828.
* [11] Adachi, S., GaAs, AlAs, and AlxGa1-xAs: Material Parameters for Use in Research and Device Applications. Journal of Applied Physics, 1985. 58(3): p. R1-R29.
* [12] Nickel, H.A., et al., Internal transitions of confined neutral magnetoexcitons in GaAs/AlxGa1-xAs quantum wells. Physical Review B, 2000. 62(4): p. 2773-2779.
# Reiterman’s Theorem on Finite Algebras for a Monad
Jiří Adámek (Department of Mathematics, Faculty of Electrical Engineering,
Czech Technical University in Prague, Czech Republic, and Technische
Universität Braunschweig, Germany; [email protected]), Liang-Ting Chen
(Institute of Information Science, Academia Sinica, Taiwan;
[email protected]), Stefan Milius (Friedrich-Alexander-Universität
Erlangen-Nürnberg, Martensstr. 3, 91058 Erlangen, Germany;
[email protected]) and Henning Urbat (Friedrich-Alexander-Universität
Erlangen-Nürnberg, Martensstr. 3, 91058 Erlangen, Germany;
[email protected])
###### Abstract.
Profinite equations are an indispensable tool for the algebraic classification
of formal languages. Reiterman’s theorem states that they precisely specify
pseudovarieties, i.e. classes of finite algebras closed under finite products,
subalgebras and quotients. In this paper, Reiterman’s theorem is generalized
to finite Eilenberg-Moore algebras for a monad $\mathbf{T}$ on a category
$\mathscr{D}$: we prove that a class of finite $\mathbf{T}$-algebras is a
pseudovariety iff it is presentable by profinite equations. As a key technical
tool, we introduce the concept of a profinite monad ${\widehat{\mathbf{T}}}$
associated to the monad $\mathbf{T}$, which gives a categorical view of the
construction of the space of profinite terms.
Keywords: Monad, Pseudovariety, Profinite Algebras
CCS: Theory of computation → Algebraic language theory
## 1. Introduction
One of the main principles of both mathematics and computer science is the
specification of structures in terms of equational properties. The first
systematic study of equations as mathematical objects was pursued by Birkhoff
(Birkhoff, 1935) who proved that a class of algebraic structures over a
finitary signature $\Sigma$ can be specified by equations between
$\Sigma$-terms if and only if it is closed under quotient algebras (a.k.a.
homomorphic images), subalgebras, and products. This fundamental result, known
as the _HSP theorem_ , lays the ground for universal algebra and has been
extended and generalized in many directions over the past 80 years, including
categorical approaches via Lawvere theories (Adámek et al., 2011; Lawvere,
1963) and monads (Manes, 1976).
While Birkhoff’s seminal work and its categorifications are concerned with
general algebraic structures, in many computer science applications the focus
is on _finite_ algebras. For instance, in automata theory, regular languages
(i.e. the behaviors of classical finite automata) can be characterized as
precisely the languages recognizable by finite monoids. This algebraic point
of view leads to important insights, including decidability results. As a
prime example, Schützenberger’s theorem (Schützenberger, 1965) asserts that
star-free regular languages correspond to _aperiodic_ finite monoids, i.e.
monoids where the unique idempotent power $x^{\omega}$ of any element $x$
satisfies $x^{\omega}=x\cdot x^{\omega}$. As an immediate application, one
obtains the decidability of star-freeness. However, the identity
$x^{\omega}=x\cdot x^{\omega}$ is not an equation in Birkhoff’s sense since
the operation $(\mathord{-})^{\omega}$ is not a part of the signature of
monoids. Instead, it is an instance of a _profinite equation_ , a topological
generalization of Birkhoff’s concept introduced by Reiterman (Reiterman,
1982). (Originally, Reiterman worked with the equivalent concept of an
_implicit equation_ , cf. Section 5.) Given a set $X$ of variables and $x\in
X$, the expression $x^{\omega}$ can be interpreted as an element of the Stone
space $\widehat{X^{*}}$ of _profinite words_ , constructed as the cofiltered
limit of all finite quotient monoids of the free monoid $X^{*}$. Analogously,
over general signatures $\Sigma$ one can form the Stone space of _profinite
$\Sigma$-terms_. Reiterman proved that a class of finite $\Sigma$-algebras can
be specified by profinite equations (i.e. pairs of profinite terms) if and
only if it is closed under quotient algebras, subalgebras, and finite
products. This result establishes a finite analogue of Birkhoff’s HSP theorem.
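The identity $x^{\omega}=x\cdot x^{\omega}$ can be made concrete in code: in a finite monoid, the powers of any element eventually reach a unique idempotent $x^{\omega}$, so aperiodicity is directly checkable. The two toy monoids below are illustrative choices, not examples from the paper.

```python
# Checking the profinite identity x^omega = x * x^omega on finite monoids.
def idempotent_power(x, mul):
    """Unique idempotent power x^omega of x (exists in any finite monoid)."""
    y = x
    while mul(y, y) != y:
        y = mul(y, x)
    return y

def is_aperiodic(elements, mul):
    """A finite monoid is aperiodic iff x * x^omega = x^omega for all x."""
    return all(mul(x, idempotent_power(x, mul)) == idempotent_power(x, mul)
               for x in elements)

# The three-element monoid {1, a, 0} with a*a = 0 and 0 absorbing: aperiodic.
def mul_nil(x, y):
    if x == "1":
        return y
    if y == "1":
        return x
    return "0"

# The group Z/2 = {1, g}: not aperiodic, since g^omega = 1 but g * 1 = g.
def mul_z2(x, y):
    return "1" if x == y else "g"
```

Here `is_aperiodic(["1", "a", "0"], mul_nil)` holds, while `is_aperiodic(["1", "g"], mul_z2)` fails, matching the fact that no nontrivial group is aperiodic.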
In this paper, we develop a categorical approach to Reiterman’s theorem and
the theory of profinite equations. The idea is to replace monoids (or general
algebras over a signature) by Eilenberg-Moore algebras for a monad
$\mathbf{T}$ on an arbitrary base category $\mathscr{D}$. As an important
technical device, we introduce a categorical abstraction of the space of
profinite words. To this end, we consider a full subcategory
$\mathscr{D}_{\mathsf{f}}$ of $\mathscr{D}$ of “finite” objects and form the
category $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, the free completion
of $\mathscr{D}_{\mathsf{f}}$ under cofiltered limits. We then show that the
monad $\mathbf{T}$ naturally induces a monad ${\widehat{\mathbf{T}}}$ on
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, called the _profinite monad_
of $\mathbf{T}$, whose free algebras ${\widehat{\mathbf{T}}}X$ serve as
domains for profinite equations. For example, for $\mathscr{D}=\mathbf{Set}$
and the full subcategory $\mathbf{Set}_{\mathsf{f}}$ of finite sets, we get
$\mathop{\mathsf{Pro}}\mathbf{Set}_{\mathsf{f}}={\mathbf{Stone}}$, the
category of Stone spaces. Moreover, if $\mathbf{T}X=X^{*}$ is the finite-word
monad (whose algebras are precisely monoids), then ${\widehat{\mathbf{T}}}$ is
the monad of profinite words on ${\mathbf{Stone}}$; that is,
${\widehat{\mathbf{T}}}$ associates to each finite Stone space (i.e. a finite
set with the discrete topology) $X$ the space $\widehat{X^{*}}$ of profinite
words on $X$. Our overall approach can thus be summarized by the following
diagram, where the skewed functors are inclusions and the horizontal ones are
forgetful functors.
(Diagram: on the left, $\mathbf{Set}_{\mathsf{f}}$ includes into both ${\mathbf{Stone}}$ and $\mathbf{Set}$, the forgetful functor maps ${\mathbf{Stone}}$ to $\mathbf{Set}$, and the monads $\widehat{(\mathord{-})^{*}}$ and $(\mathord{-})^{*}$ act on ${\mathbf{Stone}}$ and $\mathbf{Set}$, respectively. On the right, the analogous picture for the general setting: $\mathscr{D}_{\mathsf{f}}$ includes into both $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ and $\mathscr{D}$, with the profinite monad ${\widehat{\mathbf{T}}}$ on $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ and $\mathbf{T}$ on $\mathscr{D}$.)
37.03613pt\raise-32.12958pt\hbox{\hbox{\kern 0.0pt\raise
0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawline@}}\ignorespaces{\hbox{\kern
5.6977pt\raise-4.94444pt\hbox{\hbox{\kern 0.0pt\raise
0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawline@}}\ignorespaces{\hbox{\lx@xy@drawline@}}\ignorespaces\ignorespaces\ignorespaces{\hbox{\kern
51.50995pt\raise-28.94443pt\hbox{\hbox{\kern 0.0pt\raise
0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawline@}}\ignorespaces{\hbox{\kern
69.46472pt\raise-3.0pt\hbox{\hbox{\kern 0.0pt\raise
0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawline@}}\ignorespaces{\hbox{\lx@xy@drawline@}}\ignorespaces}}}}\ignorespaces}$
It turns out that many familiar properties of the space of profinite words can
be developed at the abstract level of profinite monads and their algebras. Our
main result is the
Generalized Reiterman Theorem. A class of finite $\mathbf{T}$-algebras is
presentable by profinite equations if and only if it is closed under quotient
algebras, subalgebras, and finite products.
Here, _profinite equations_ are modelled categorically as finite quotients
$e\colon\widehat{T}X\twoheadrightarrow E$ of the object $\widehat{T}X$ of
generalized profinite terms. If the category $\mathscr{D}$ is $\mathbf{Set}$
or, more generally, a category of first-order structures, we will see that
this abstract concept of an equation is equivalent to the familiar one:
$\widehat{T}X$ is a topological space and quotients $e$ as above can be
identified with sets of pairs $(s,t)$ of profinite terms $s,t\in\widehat{T}X$.
Thus, our categorical results instantiate to the original Reiterman theorem
(Reiterman, 1982) ($\mathscr{D}=\mathbf{Set}$), but also to its versions for
ordered algebras ($\mathscr{D}=\mathbf{Pos}$) and for first-order structures
due to Pin and Weil (Pin and Weil, 1996).
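To ground the abstraction, here is a classical instance of such a presentation (a standard fact; the bracket notation $\llbracket-\rrbracket$ for the presented class is ours):

```latex
% For D = Set and the free-monoid monad TX = X^*, profinite equations are
% pairs of profinite words. The pseudovariety Ap of finite aperiodic monoids
% is presented by a single profinite equation over one variable:
\mathbf{Ap}=\llbracket\, x^{\omega}\cdot x = x^{\omega}\,\rrbracket,
\qquad x^{\omega}:=\lim_{n\to\infty}x^{n!}\in\widehat{T}\{x\},
% where x^omega is the unique idempotent power of x in every finite monoid.
```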
Our proof of the Generalized Reiterman Theorem is purely categorical and
relies on general properties of (codensity) monads, free completions and
locally finitely copresentable categories. It does not employ any topological
methods, as opposed to all known proofs of Reiterman’s theorem and its
variants. The insight that topological reasoning can be completely avoided in
the profinite world is quite surprising, and we consider it as one of the main
contributions of our paper.
### Related work
This paper is the full version of an extended abstract (Chen et al., 2016)
presented at FoSSaCS 2016. Besides providing complete proofs of all results,
the presentation is significantly more general than in _op. cit._: there we
restricted ourselves to base categories $\mathscr{D}$ which are varieties of
(possibly ordered) algebras, and the development of the profinite monad and
its properties used results from topology. In contrast, the present paper
works with general categories $\mathscr{D}$ and develops all required
profinite concepts in full categorical abstraction, with topological arguments
only appearing in the verification that our concrete instances satisfy the
required categorical properties.
An important application of the Generalized Reiterman Theorem and the
profinite monad can be found in algebraic language theory: we showed that
given a category $\mathscr{C}$ dually equivalent to
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, the concept of a profinite
equational class of finite $\mathbf{T}$-algebras dualizes to the concept of a
_variety of $\mathbf{T}$-recognizable languages in $\mathscr{C}$_. For
instance, for $\mathscr{D}=\mathbf{Set}$ and
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}={\mathbf{Stone}}$, the
classical Stone duality yields the category $\mathscr{C}=\mathbf{BA}$ of
boolean algebras, and for the monad $\mathbf{T}X=X^{*}$ on $\mathbf{Set}$ the
dual correspondence gives Eilenberg’s fundamental _variety theorem_ for
regular languages (Eilenberg, 1976). Using our duality-theoretic approach we
established a categorical generalization of Eilenberg’s theorem and showed
that it instantiates to more than a dozen Eilenberg-type results known in the
literature, along with a number of new correspondence results (Urbat et al.,
2017). Let us also mention some of the very few known instances of Eilenberg-
type results not obtained using an application of the Generalized Reiterman
Theorem. The first one is a recent Eilenberg-type correspondence for regular
languages (Birkmann et al., 2021), which is based on lattice bimodules, a new
algebraic structure for recognition originally proposed by Klíma and Polák
(Klíma and Polák, 2019) under the name lattice algebras. The second result is
our first Eilenberg-type correspondence for nominal languages (Urbat and
Milius, 2019). Finally, there are Eilenberg-type correspondences for varieties
of non-regular languages; they appear in work of Behle et al. (Behle et al.,
2011) and as instances of Salamanca’s general framework (Salamanca, 2017).
Recently, an abstract approach to HSP-type theorems (Milius and Urbat, 2019)
has been developed that not only provides a common roof over Birkhoff’s and
Reiterman’s theorem, but also applies to classes of algebras with additional
underlying structure, such as ordered, quantitative, or nominal algebras. The
characterization of pseudovarieties in terms of pseudoequations given in 3.8
is a special case of the HSP theorem in _op. cit._
## 2. Profinite Completion
In this preliminary section, we review the profinite completion (commonly
known as pro-completion) of a category and describe it for the category
$\Sigma$-$\mathsf{Str}$ of structures over a first-order signature $\Sigma$.
###### Remark 2.1.
Recall that a category is _cofiltered_ if every finite subcategory has a cone
in it. For example, every cochain (i.e. a poset dual to an ordinal number) is
cofiltered. A _cofiltered limit_ is a limit of a diagram with a small
cofiltered diagram scheme. A functor is _cofinitary_ if it preserves
cofiltered limits. An object $A$ of a category $\mathscr{C}$ is called
_finitely copresentable_ if the functor
$\mathscr{C}(-,A)\colon\mathscr{C}\to\mathbf{Set}^{\mathrm{op}}$ is
cofinitary. The latter means that for every limit cone $c_{i}\colon C\to
C_{i}$ ($i\in\mathcal{I}$) of a cofiltered diagram,
1. (1)
each morphism $f\colon C\to A$ factorizes through some $c_{i}\colon C\to
C_{i}$ as $f=g\cdot c_{i}$, and
2. (2)
the morphism $g\colon C_{i}\to A$ is _essentially unique_ , i.e. given another
factorization $f=h\cdot c_{i}$, there is a connecting morphism $c_{ji}\colon
C_{j}\to C_{i}$ with $g\cdot c_{ji}=h\cdot c_{ji}$:
[Diagram: the two factorizations $f=g\cdot c_{i}=h\cdot c_{i}$ through $c_{i}\colon C\to C_{i}$, together with a connecting morphism $c_{ji}\colon C_{j}\to C_{i}$ satisfying $g\cdot c_{ji}=h\cdot c_{ji}$.]
The dual concept is that of a _filtered colimit_.
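For orientation, standard instances of these dual notions (stated here without proof):

```latex
% Finitely presentable objects of Set (the dual notion, via Set(A,-)
% preserving filtered colimits) are exactly the finite sets; dually, the
% finitely copresentable objects of Stone = Pro(Set_f) are exactly the
% finite (discrete) spaces:
A\ \text{finitely presentable in }\mathbf{Set}\iff A\ \text{finite};
\qquad
A\ \text{finitely copresentable in }{\mathbf{Stone}}\iff A\ \text{finite discrete}.
```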
###### Notation 2.2.
1. (1)
The free completion of a category $\mathscr{C}$ under cofiltered limits, i.e.
the _pro-completion_ , is denoted by
$\mathop{\mathsf{Pro}}\mathscr{C}.$
This is a category with cofiltered limits together with a full embedding
$E\colon\mathscr{C}\rightarrowtail\mathop{\mathsf{Pro}}\mathscr{C}$ satisfying
the following universal property:
1. (1a)
Every functor $F\colon\mathscr{C}\to\mathscr{K}$ into a category $\mathscr{K}$
with cofiltered limits admits a cofinitary extension
$\overline{F}\colon\mathop{\mathsf{Pro}}\mathscr{C}\to\mathscr{K}$, i.e. the
triangle below commutes:
[Diagram: the triangle $\overline{F}\cdot E=F$ commutes, with $E\colon\mathscr{C}\rightarrowtail\mathop{\mathsf{Pro}}\mathscr{C}$ and $\overline{F}\colon\mathop{\mathsf{Pro}}\mathscr{C}\to\mathscr{K}$.]
2. (1b)
The functor $\overline{F}$ is _essentially unique_ , i.e. for every cofinitary
extension $G$ of $F$ there exists a unique natural isomorphism
$i\colon\overline{F}\xrightarrow{\cong}G$ with $iE=\mathsf{id}_{F}$.
More precisely, the full embedding $E$ _is_ the pro-completion, but we will
often simply refer to $\mathop{\mathsf{Pro}}\mathscr{C}$ as the pro-completion
instead.
2. (2)
Dually, the free completion of $\mathscr{C}$ under filtered colimits, i.e. the
_ind-completion_ , is denoted by
$\mathop{\mathsf{Ind}}\mathscr{C}.$
Some standard results on ind- and pro-completions can be found in the
Appendix.
###### Example 2.3.
1. (1)
Let $\mathbf{Set}_{\mathsf{f}}$ be the category of finite sets and functions.
Its pro-completion is the category
$\mathop{\mathsf{Pro}}\mathbf{Set}_{\mathsf{f}}={\mathbf{Stone}}$
of _Stone spaces_ , i.e. compact topological spaces in which distinct elements
can be separated by clopen subsets. Morphisms are the continuous functions.
The embedding $\mathbf{Set}_{\mathsf{f}}\rightarrowtail{\mathbf{Stone}}$
identifies finite sets with finite discrete spaces. This is a consequence of
the Stone duality (Johnstone, 1982) between ${\mathbf{Stone}}$ and the
category $\mathbf{BA}$ of boolean algebras, and its restriction to finite sets
and finite Boolean algebras. In fact, since $\mathbf{BA}$ is a finitary
variety, it is the ind-completion of its full subcategory
$\mathbf{BA}_{\mathsf{f}}$ of finitely presentable objects, which are
precisely the finite Boolean algebras. Therefore
$\mathop{\mathsf{Pro}}\mathbf{Set}_{\mathsf{f}}=(\mathop{\mathsf{Ind}}\mathbf{Set}_{\mathsf{f}}^{\mathrm{op}})^{\mathrm{op}}\cong(\mathop{\mathsf{Ind}}\mathbf{BA}_{\mathsf{f}})^{\mathrm{op}}\cong\mathbf{BA}^{\mathrm{op}}\cong{\mathbf{Stone}}.$
2. (2)
For the category of finite posets and monotone functions, denoted by
$\mathbf{Pos}_{\mathsf{f}}$, we obtain the category
$\mathop{\mathsf{Pro}}\mathbf{Pos}_{\mathsf{f}}=\mathbf{Priest}$
of _Priestley spaces_ , i.e. ordered Stone spaces such that any two distinct
elements can be separated by clopen upper sets. Morphisms in $\mathbf{Priest}$
are continuous monotone functions. This follows from the Priestley duality
(Priestley, 1972) between $\mathbf{Priest}$ and bounded distributive lattices.
The argument is analogous to item (1): finite, equivalently finitely
presentable, distributive lattices dualize to finite posets with discrete
topology.
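The cofiltered limits underlying these pro-completions can be made concrete in the finite case. The following sketch (illustrative only, not from the paper) computes the limit of a finite cochain of finite sets as the set of "threads" compatible with the connecting maps:

```python
# Limit of a cochain X_0 <- X_1 <- ... <- X_{n-1} of finite sets,
# computed as the set of threads (x_0, ..., x_{n-1}) with
# maps[k](x_{k+1}) == x_k for all k.

def inverse_limit(sets, maps):
    """sets[k]: the finite set X_k (as a list); maps[k]: a dict giving
    the connecting map X_{k+1} -> X_k. Returns all threads as tuples."""
    threads = [(x,) for x in sets[0]]
    for k in range(1, len(sets)):
        threads = [t + (x,) for t in threads
                   for x in sets[k] if maps[k - 1][x] == t[-1]]
    return threads

# Truncated binary expansions: X_k = Z/2^k, connecting map = reduction mod 2^k.
sets = [list(range(2 ** k)) for k in range(4)]            # sizes 1, 2, 4, 8
maps = [{x: x % (2 ** k) for x in sets[k + 1]} for k in range(3)]
threads = inverse_limit(sets, maps)
# Each thread is determined by its last component, so there are 8 threads.
```

Extending the cochain indefinitely yields the 2-adic integers as a Stone space, a typical cofiltered limit of finite discrete spaces.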
###### Notation 2.4 (First-order structures).
We will often work with the category
$\Sigma\text{-}\mathbf{Str}$
of $\Sigma$-structures and $\Sigma$-homomorphisms for a first-order many-
sorted signature $\Sigma$. Given a set $\mathcal{S}$ of sorts, an
_$\mathcal{S}$ -sorted signature_ $\Sigma$ consists of (1) operation symbols
$\sigma\colon s_{1},\ldots,s_{n}\to s$ where $n\in\mathbb{N}$, the sorts
$s_{i}$ form the domain of $\sigma$ and $s$ is its codomain, and (2) relation
symbols $r\colon s_{1},\ldots,s_{m}$ where
$m\in\mathbb{N}^{+}=\mathbb{N}\setminus\{0\}$. A _$\Sigma$ -structure_ is an
$\mathcal{S}$-sorted set
$A=(A^{s})_{s\in\mathcal{S}}\quad\text{in}\quad\mathbf{Set}^{\mathcal{S}}$
with (1) an operation $\sigma_{A}\colon A^{s_{1}}\times\dots\times
A^{s_{n}}\to A^{s}$ for every operation symbol $\sigma\colon
s_{1},\ldots,s_{n}\to s$, and (2) a relation $r_{A}\subseteq
A^{s_{1}}\times\dots\times A^{s_{n}}$ for every relation symbol $r\colon
s_{1},\ldots,s_{n}$. A _$\Sigma$ -homomorphism_ is an $\mathcal{S}$-sorted
function $f\colon A\to B$ which preserves operations and relations in the
usual sense. We denote by $\Sigma\text{-}\mathbf{Str}_{\mathsf{f}}$ the full
subcategory of $\Sigma\text{-}\mathbf{Str}$ given by all $\Sigma$-structures
$A$ where each $A^{s}$ is finite.
When $\mathcal{S}$ is a singleton, the notion of $\Sigma$-structures boils
down to a more common situation. Namely, the arity of an operation symbol is
given solely by $n\in\mathbb{N}$ and that of a relation symbol by
$m\in\mathbb{N}^{+}$. A $\Sigma$-structure is a set $A$ equipped with an
operation $\sigma_{A}\colon A^{n}\to A$ for every $n$-ary operation symbol
$\sigma$ and with a relation $r_{A}\subseteq A^{m}$ for every $m$-ary relation
symbol $r$.
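For illustration, a concrete single-sorted signature (a standard example, not specific to this paper; the arity shorthand below is ours) is that of ordered monoids:

```latex
% One binary operation symbol, one constant, one binary relation symbol:
\Sigma=\{\ \cdot\colon 2,\quad e\colon 0,\quad \leq\,\colon 2\ \}.
% A Sigma-structure is a set A with a binary operation, a distinguished
% element, and a binary relation; Sigma-homomorphisms preserve all three,
% e.g. monotone monoid morphisms when the monoid laws hold and <= is a
% partial order.
```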
###### Assumption 2.5.
Throughout the paper, we assume that every signature has a finite set of sorts
and finitely many relation symbols. There is no restriction on the number of
operation symbols.
###### Remark 2.6.
1. (1)
The category $\Sigma\text{-}\mathbf{Str}$ is complete with limits created at
the level of $\mathbf{Set}^{\mathcal{S}}$. More precisely, consider a diagram
$D$ in $\Sigma\text{-}\mathbf{Str}$ indexed by $\mathcal{I}$. Let
$U^{s}\colon\mathbf{Set}^{\mathcal{S}}\to\mathbf{Set}$ be the projection
sending $B$ to $B^{s}$, and let
$b^{s}_{i}\colon B^{s}\to D_{i}^{s}\quad(i\in\mathcal{I})$
form limit cones of the diagrams $U^{s}D$ in $\mathbf{Set}$ for every
$s\in\mathcal{S}$. Then the limit of $D$ is the $\mathcal{S}$-sorted set
$B\coloneqq(B^{s})$, with operations $\sigma_{B}\colon
B^{s_{1}}\times\dots\times B^{s_{n}}\to B^{s}$ uniquely determined by the
requirement that each $b_{i}\colon B\to D_{i}$ preserves $\sigma$, and with
relations $r_{B}\subseteq B^{s_{1}}\times\dots\times B^{s_{n}}$ consisting of
all $n$-tuples $(x_{1},\dots,x_{n})$ that the function
$b^{s_{1}}_{i}\times\dots\times b^{s_{n}}_{i}$ maps into $r_{D_{i}}$ for every
$i\in\mathcal{I}$. The limit cone is given by
$(b_{i}^{s})_{s\in\mathcal{S}}\colon B\to D_{i}$ for $i\in\mathcal{I}$.
2. (2)
The category $\Sigma\text{-}\mathbf{Str}$ is also cocomplete. Indeed, let
$\Sigma_{\mathrm{op}}$ be the subsignature of all operation symbols in
$\Sigma$. Then $\Sigma_{\mathrm{op}}\text{-}\mathbf{Str}$ is a monadic
category over $\mathbf{Set}^{\mathcal{S}}$. Since epimorphisms split in
$\mathbf{Set}^{\mathcal{S}}$, all monadic categories are cocomplete, see e.g.
(Adámek, 1977). The category $\Sigma\text{-}\mathbf{Str}$ has colimits
obtained from the corresponding colimits in
$\Sigma_{\mathrm{op}}\text{-}\mathbf{Str}$ by taking the smallest relations
making each of the colimit injections a $\Sigma$-homomorphism.
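The limit construction of Remark 2.6(1) is easy to carry out for finite structures. The following sketch (illustrative only, single-sorted, binary products) builds the product of two finite relational structures with relations computed exactly as described above, i.e. a tuple lies in $r$ iff both projections map it into $r_{A}$ and $r_{B}$:

```python
from itertools import product

def structure_product(A, rels_A, B, rels_B, arity):
    """Product of two finite single-sorted relational structures.
    A, B: carriers (lists); rels_*: dict symbol -> set of tuples;
    arity: dict symbol -> n."""
    carrier = list(product(A, B))
    rels = {}
    for r, n in arity.items():
        rels[r] = {
            t for t in product(carrier, repeat=n)
            if tuple(p[0] for p in t) in rels_A[r]      # projection to A
            and tuple(p[1] for p in t) in rels_B[r]     # projection to B
        }
    return carrier, rels

# Example: the product of two 2-chains, viewed as Sigma-structures for the
# signature with a single binary relation "<=". The result is the
# componentwise (product) order on a 4-element carrier.
A, le_A = [0, 1], {(0, 0), (0, 1), (1, 1)}
B, le_B = ["a", "b"], {("a", "a"), ("a", "b"), ("b", "b")}
C, rels = structure_product(A, {"<=": le_A}, B, {"<=": le_B}, {"<=": 2})
```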
###### Notation 2.7.
The category of Stone topological $\Sigma$-structures and continuous
$\Sigma$-homomorphisms is denoted by
${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str}).$
A _topological $\Sigma$-structure_ is an $\mathcal{S}$-sorted topological
space $A=(A^{s})$ endowed with a $\Sigma$-structure such that every operation
$\sigma_{A}\colon A^{s_{1}}\times\dots\times A^{s_{n}}\to A^{s}$ is continuous and
for every relation symbol $r$ the relation $r_{A}\subseteq
A^{s_{1}}\times\cdots\times A^{s_{n}}$ is a closed subset.
###### Remark 2.8.
The category ${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ is complete with
limits formed on the level of $\mathbf{Set}^{\mathcal{S}}$. This follows from
the construction of limits in ${\mathbf{Stone}}^{\mathcal{S}}$ and in
$\Sigma\text{-}\mathbf{Str}$. Thus, the forgetful functor from
${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ to $\Sigma\text{-}\mathbf{Str}$
preserves limits.
The following proposition describes the pro-completion of
$\Sigma\text{-}\mathbf{Str}_{\mathsf{f}}$. It is a categorical reformulation
of results by Pin and Weil (Pin and Weil, 1996) on topological
$\Sigma$-structures, and also appears in Johnstone’s book (Johnstone, 1982,
Prop. & Rem. VI.2.4) for the special case of single-sorted algebras. We
provide a full proof for the convenience of the reader.
###### Definition 2.9.
A Stone topological $\Sigma$-structure is called _profinite_ if it is a
cofiltered limit in ${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ of finite
$\Sigma$-structures.
###### Proposition 2.10.
The category $\mathop{\mathsf{Pro}}(\Sigma\text{-}\mathbf{Str}_{\mathsf{f}})$
is the full subcategory of ${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ on
all profinite $\Sigma$-structures.
###### Proof.
1. (1)
We first observe that cofiltered limits of finite sets in ${\mathbf{Stone}}$
have the following property: If $b_{i}\colon B\to B_{i}$ $(i\in\mathcal{I})$
is a cofiltered limit cone such that all $B_{i}$ are finite, then for every
$i\in\mathcal{I}$ there exists a connecting morphism of our diagram $h\colon
B_{j}\to B_{i}$ with the same image as $b_{i}$:
(2.1) $b_{i}[B]=h[B_{j}].$
Since under Stone duality finite Stone spaces dualize to finite Boolean
algebras, it suffices to verify the dual statement about filtered colimits of
finite Boolean algebras: if $c_{i}\colon C_{i}\to C$ ($i\in\mathcal{I}$) is a
filtered colimit cocone of finite Boolean algebras, then for every $i$ there
exists a connecting morphism $f\colon C_{i}\to C_{j}$ with the same kernel as
$c_{i}$. But this is clear: given any pair $x,y\in C_{i}$ merged by $c_{i}$,
there exists a connecting morphism $f$ merging $x$ and $y$, since filtered
colimits are formed on the level of $\mathbf{Set}$. Due to $C_{i}\times C_{i}$
being finite, we can choose one $f$ for all such pairs.
2. (2)
The argument is similar for cofiltered limits of finite $\Sigma$-structures in
${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$: Consider a limit cone
$b_{i}\colon B\to B_{i}\qquad(i\in\mathcal{I})$
of a cofiltered diagram $D$ in ${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$.
For every $i\in\mathcal{I}$, we verify that there is a connecting morphism
$h\colon B_{j}\to B_{i}$ with sorts $h^{s}$ for $s\in\mathcal{S}$ such that
(2.2) $b_{i}^{s}[B^{s}]=h^{s}[B_{j}^{s}]\qquad\text{for all
$s\in\mathcal{S}$,}$
and
(2.3) $b_{i}^{s_{1}}\times\dots\times
b_{i}^{s_{n}}[r_{B}]=h^{s_{1}}\times\dots\times
h^{s_{n}}[r_{B_{j}}]\qquad\text{for all $r\colon s_{1},\ldots,s_{n}$ in
$\Sigma$.}$
Indeed, if we only consider (2.2) then the existence of such an $h$ follows
from (1) by the assumption that $\mathcal{S}$ is finite and that $\mathcal{I}$
is cofiltered. For every sort $s$, we have a cofiltered limit $b_{j}^{s}\colon
B^{s}\to B_{j}^{s}$ in ${\mathbf{Stone}}$, thus we can apply (1) and obtain a
connecting morphism $h\colon B_{j}\to B_{i}$. Again, $\mathcal{S}$ is finite,
so the choice of $h$ can be made independent of $s\in\mathcal{S}$.
Next consider (2.3) for a fixed relation symbol $r\colon s_{1},\ldots,s_{n}$.
Form the diagram $D_{r}$ in ${\mathbf{Stone}}$ with the above diagram scheme
$\mathcal{I}$ and with objects
$D_{r}i=r_{B_{i}}\text{ (a finite discrete space)}.$
Connecting morphisms are the domain-codomain restrictions of all connecting
morphisms $B_{j}\xrightarrow{h}B_{k}$: since $h$ preserves the relation $r$,
we have
$h^{s_{1}}\times\dots\times h^{s_{n}}[r_{B_{j}}]\subseteq r_{B_{k}},$
and we form the corresponding connecting morphism $\overline{h}\colon
r_{B_{j}}\to r_{B_{k}}$ of $D_{r}$. From the description of limits in
$\Sigma\text{-}\mathbf{Str}$ in Remark 2.6 and the fact that limits in
${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ are preserved by the forgetful
functor into $\Sigma\text{-}\mathbf{Str}$ by Remark 2.8 we deduce that the
limit of $D_{r}$ in ${\mathbf{Stone}}$ is the space $r_{B}\subseteq
B^{s_{1}}\times\dots\times B^{s_{n}}$ and the limit cone $r_{B}\to r_{B_{j}}$,
$j\in\mathcal{I}$, is formed by domain-codomain restrictions of
$b_{j}^{s_{1}}\times\dots\times b_{j}^{s_{n}}$ for $j\in\mathcal{I}$. Apply
(1) to this cofiltered limit to find a connecting morphism $h\colon B_{j}\to
B_{i}$ of $D$ satisfying (2.3) for any chosen relation symbol $r$ of $\Sigma$.
Since we only have finitely many relation symbols by 2.5, we conclude that $h$
can be chosen to satisfy (2.3).
3. (3)
Denote the full subcategory formed by profinite $\Sigma$-structures by
$\mathscr{L}\subseteq{\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str}).$
In order to prove that $\mathscr{L}$ forms the pro-completion of
$\Sigma\text{-}\mathbf{Str}_{\mathsf{f}}$, we verify the conditions given in
A.5. By construction, conditions (1) and (2) hold. It remains to prove
condition (3): every finite $\Sigma$-structure $A$ is finitely copresentable
in $\mathscr{L}$. Hence, consider a limit cone
$b_{i}\colon B\to B_{i}\quad(i\in\mathcal{I})$
of a cofiltered diagram $D$ in $\mathscr{L}$. Due to the definition of
$\mathscr{L}$, each $B_{i}$ is a cofiltered limit of finite structures.
Therefore, without loss of generality, we may assume that all $B_{i}$ are
finite. We need to show that for every homomorphism
$f=(f^{s})_{s\in\mathcal{S}}\colon B\to A$ into a finite $\Sigma$-structure
$A=(A^{s})_{s\in\mathcal{S}}$, there is an essentially unique factorization
through some $b_{i}$. For every sort $s$, we have a projection
$V^{s}\colon\mathscr{L}\to{\mathbf{Stone}}$, and the cofiltered diagram
$V^{s}D$ has the limit cone $b_{i}^{s}\colon B^{s}\to
B_{i}^{s}\;(i\in\mathcal{I})$. Since each $A^{s}$ is finite, the fact that
${\mathbf{Stone}}$ is the pro-completion of $\mathbf{Set}_{\mathsf{f}}$
implies that for every sort $s$ there is $i\in\mathcal{I}$ and an essentially
unique factorization of $f^{s}$ as follows
[Diagram: $f^{s}$ factors as $f^{s}=g^{s}\cdot b_{i}^{s}$ through $b_{i}^{s}\colon B^{s}\to B_{i}^{s}$, with $g^{s}\colon B_{i}^{s}\to A^{s}$.]
By 2.5 the set $\mathcal{S}$ is finite, so we can choose $i$ independent of
$s$ and thus obtain a continuous $\mathcal{S}$-sorted function
$g=(g^{s})\colon B_{i}\to A\quad\text{in }{\mathbf{Stone}}^{\mathcal{S}}$
which factorizes $f$, i.e. $f=g\cdot b_{i}$.
All we still need to prove is that we can choose our $i$ and $g$ so that,
moreover, $g$ is a $\Sigma$-homomorphism. The essential uniqueness of $g$ then
follows from the corresponding property of $g$ in ${\mathbf{Stone}}$.
Let $h\colon B_{j}\to B_{i}$ be a connecting map satisfying (2.2) and (2.3).
Choose $j$ in lieu of $i$ and $\overline{g}=g\cdot h$ in lieu of $g$. We
conclude that $\overline{g}$ is a morphism of ${\mathbf{Stone}}^{\mathcal{S}}$
factorizing $f$ through the limit map $b_{j}$:
[Diagram: $f=g\cdot b_{i}=\overline{g}\cdot b_{j}$, where $b_{i}=h\cdot b_{j}$ and $\overline{g}=g\cdot h$.]
Moreover, we prove that $\overline{g}$ is a $\Sigma$-homomorphism:
1. (3a)
For every operation symbol $\sigma\colon s_{1}\dots s_{n}\to s$ in $\Sigma$
and every $n$-tuple $(x_{1},\ldots,x_{n})\in B_{j}^{s_{1}}\times\dots\times
B_{j}^{s_{n}}$ we have
$\overline{g}^{s}\cdot\sigma_{B_{j}}(x_{1},\ldots,x_{n})=\sigma_{A}\left(\overline{g}^{s_{1}}(x_{1}),\ldots,\overline{g}^{s_{n}}(x_{n})\right).$
Indeed, choose $y_{k}\in B^{s_{k}}$ with
$b_{i}^{s_{k}}(y_{k})=h^{s_{k}}(x_{k})$, $k=1,\ldots,n$, using (2.2). Then
$\displaystyle\begin{aligned}
\overline{g}^{s}\cdot\sigma_{B_{j}}(x_{1},\ldots,x_{n})&=g^{s}\cdot h^{s}\cdot\sigma_{B_{j}}(x_{1},\ldots,x_{n})&&\overline{g}=g\cdot h\\
&=g^{s}\cdot\sigma_{B_{i}}(h^{s_{1}}(x_{1}),\ldots,h^{s_{n}}(x_{n}))&&h\text{ a $\Sigma$-homomorphism}\\
&=g^{s}\cdot\sigma_{B_{i}}(b_{i}^{s_{1}}(y_{1}),\ldots,b_{i}^{s_{n}}(y_{n}))&&b_{i}^{s_{k}}(y_{k})=h^{s_{k}}(x_{k})\\
&=g^{s}\cdot b_{i}^{s}\cdot\sigma_{B}(y_{1},\ldots,y_{n})&&b_{i}\text{ a $\Sigma$-homomorphism}\\
&=\sigma_{A}(g^{s_{1}}\cdot b_{i}^{s_{1}}(y_{1}),\ldots,g^{s_{n}}\cdot b_{i}^{s_{n}}(y_{n}))&&g\cdot b_{i}=f\text{ a $\Sigma$-homomorphism}\\
&=\sigma_{A}(g^{s_{1}}\cdot h^{s_{1}}(x_{1}),\ldots,g^{s_{n}}\cdot h^{s_{n}}(x_{n}))&&b_{i}^{s_{k}}(y_{k})=h^{s_{k}}(x_{k})\\
&=\sigma_{A}(\overline{g}^{s_{1}}(x_{1}),\ldots,\overline{g}^{s_{n}}(x_{n}))&&\overline{g}=g\cdot h.
\end{aligned}$
2. (3b)
For every relation symbol $r\colon s_{1},\ldots,s_{n}$ in $\Sigma$, we have
that
$(x_{1},\ldots,x_{n})\in r_{B_{j}}\text{ implies
}(\overline{g}^{s_{1}}(x_{1}),\ldots,\overline{g}^{s_{n}}(x_{n}))\in r_{A}.$
Indeed, using (2.3), we can choose $(y_{1},\ldots,y_{n})\in r_{B}$ with
$(b_{i}^{s_{1}}(y_{1}),\ldots,b_{i}^{s_{n}}(y_{n}))=(h^{s_{1}}(x_{1}),\ldots,h^{s_{n}}(x_{n})).$
Then the $n$-tuple
$(\overline{g}^{s_{1}}(x_{1}),\ldots,\overline{g}^{s_{n}}(x_{n}))=(g^{s_{1}}\cdot
b_{i}^{s_{1}}(y_{1}),\ldots,g^{s_{n}}\cdot b_{i}^{s_{n}}(y_{n}))$
lies in $r_{A}$ because $g\cdot b_{i}=f$ is a $\Sigma$-homomorphism. ∎
###### Notation 2.11.
Let $\mathscr{D}$ be a full subcategory of $\Sigma\text{-}\mathbf{Str}$. We
denote by
${\mathbf{Stone}}\mathscr{D}$
the full subcategory of ${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ on all
Stone topological $\Sigma$-structures whose $\Sigma$-structure lies in
$\mathscr{D}$. Moreover, let $\mathscr{D}_{\mathsf{f}}$ denote the full
subcategory of $\mathscr{D}$ on all finite objects, i.e.
$D\in\mathscr{D}_{\mathsf{f}}$ if each $D^{s}$ is finite.
###### Corollary 2.12.
Let $\mathscr{D}$ be a full subcategory of $\Sigma\text{-}\mathbf{Str}$ closed
under cofiltered limits. Then $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
is the full subcategory of ${\mathbf{Stone}}\mathscr{D}$ given by all
profinite $\mathscr{D}$-structures, i.e. cofiltered limits of finite
$\Sigma$-structures in $\mathscr{D}$.
The proof is completely analogous to that of 2.10: the only fact we used in
that proof was the description of cofiltered limits in
$\Sigma\text{-}\mathbf{Str}$.
###### Example 2.13.
For $\mathscr{D}=\mathbf{Pos}$, we get an alternative description of the
category $\mathbf{Priest}$ of 2.3(2). For the signature $\Sigma$ with a single
binary relation, $\mathbf{Pos}$ is a full subcategory of
$\Sigma\text{-}\mathbf{Str}$. The category
${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ is that of graphs on Stone
spaces. By 2.12, $\mathop{\mathsf{Pro}}(\mathbf{Pos}_{\mathsf{f}})$ is the
category of all profinite posets, i.e. Stone graphs that are cofiltered limits
of finite posets. Note that every such limit $B=(V,E)$ is a poset: given $x\in
V$ we have $(x,x)\in E$ because every object of the given cofiltered diagram
has its relation reflexive. Analogously, $E$ is transitive and (since limit
cones are collectively monic) antisymmetric.
Moreover, $B$ is a Priestley space: given $x,y\in V$ with $x\not\leq y$, then
there exists a member $b_{i}\colon B\to B_{i}$ of the limit cone with
$b_{i}(x)\not\leq b_{i}(y)$. Since $B_{i}$ is finite, and thus carries the
discrete topology, the upper set $b_{i}^{-1}(\mathord{\uparrow}b_{i}(x))$ is clopen,
and it contains $x$ but not $y$. Conversely, every Priestley space is a
profinite poset, as shown by Speed (Speed, 1972).
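The separation argument above degenerates pleasantly in the finite case, where every upper set is clopen in the discrete topology. A minimal sketch (illustrative only): for a finite poset with $x\not\leq y$, the upper set $\mathord{\uparrow}x$ itself witnesses the Priestley separation, containing $x$ but not $y$:

```python
# Finite Priestley separation: in a finite poset (discrete topology),
# every upper set is clopen, so x not<= y is witnessed by the clopen
# upper set up(x), which contains x but not y.

def upper_set(elems, le, x):
    """The upper set up(x) = {z : x <= z} of a finite poset, where le is
    the order given as a set of pairs."""
    return {z for z in elems if (x, z) in le}

elems = {"a", "b", "c"}
le = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b")}  # a <= b; c incomparable
U = upper_set(elems, le, "a")  # contains "a", separates "a" from "c"
```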
###### Example 2.14.
Johnstone (Johnstone, 1982, Thm. VI.2.9) proves that for a number of
“everyday” varieties of algebras $\mathscr{D}$, we simply have
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}={\mathbf{Stone}}\mathscr{D}.$
This holds for semigroups, monoids, groups, vector spaces, semilattices,
distributive lattices, etc. In contrast, for some important varieties
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is a proper subcategory of
${\mathbf{Stone}}\mathscr{D}$, e.g. for the variety of lattices or the variety
of $\Sigma$-algebras where $\Sigma$ consists of a single unary operation.
###### Remark 2.15.
1. (1)
The category $\Sigma\text{-}\mathbf{Str}$ has a factorization system
$(\mathcal{E},\mathcal{M})$ where $\mathcal{E}$ consists of all surjective
$\Sigma$-homomorphisms (more precisely, every sort is a surjective function)
and $\mathcal{M}$ consists of all injective $\Sigma$-homomorphisms reflecting
all relations. That is, a $\Sigma$-homomorphism $f\colon X\to Y$ lies in
$\mathcal{M}$ iff for every sort $s$ the function $f^{s}\colon X^{s}\to Y^{s}$
is injective, and for every relation symbol $r\colon s_{1},\ldots,s_{n}$ in
$\Sigma$ and every $n$-tuple $(x_{1},\ldots,x_{n})\in
X^{s_{1}}\times\ldots\times X^{s_{n}}$ one has
$(x_{1},\ldots,x_{n})\in
r_{X}\quad\text{iff}\quad(f^{s_{1}}(x_{1}),\ldots,f^{s_{n}}(x_{n}))\in r_{Y}.$
The $(\mathcal{E},\mathcal{M})$-factorization of a $\Sigma$-homomorphism
$g\colon X\to Z$ is constructed as follows. Define a $\Sigma$-structure $Y$ by
$Y^{s}=g^{s}[X^{s}]$ for all sorts $s\in\mathcal{S}$, let the operations of
$Y$ be the domain-codomain restriction of those of $Z$, and for every relation
symbol $r\colon s_{1},\ldots,s_{n}$ define $r_{Y}$ to be the restriction of
$r_{Z}$ to $Y$, i.e. $r_{Y}=r_{Z}\cap Y^{s_{1}}\times\ldots\times Y^{s_{n}}$.
Then the codomain restriction of $g$ is a surjective $\Sigma$-homomorphism
$e\colon X\twoheadrightarrow Y$, and the embedding $m\colon Y\rightarrowtail
Z$ is an injective $\Sigma$-homomorphism reflecting all relations.
2. (2)
Similarly, the category ${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ has the
factorization system $(\mathcal{E},\mathcal{M})$ where $\mathcal{E}$ consists
of all surjective morphisms and $\mathcal{M}$ of all relation-reflecting
monomorphisms. Indeed, if $f\colon X\to Z$ is a continuous
$\Sigma$-homomorphism, and if its factorization in
$\Sigma\text{-}\mathbf{Str}$ is given by a $\Sigma$-structure $Y$ and
$\Sigma$-homomorphisms $e\colon X\twoheadrightarrow Y$ (surjective) and
$m\colon Y\rightarrowtail Z$ (injective and relation-reflecting), then the
Stone topology on $Y$ inherited from $Z$ yields, due to $Y=e[X]$ being closed
in $Z$, the desired factorization in
${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$.
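The image factorization of Remark 2.15(1) is entirely constructive. The following sketch (illustrative only, finite and single-sorted, relations only) computes $Y=g[X]$ with relations restricted from $Z$; the inclusion $Y\rightarrowtail Z$ then reflects all relations by construction:

```python
# (E, M)-factorization of a homomorphism g: X -> Z of finite single-sorted
# relational structures, per the construction above: Y = g[X], with each
# relation r_Y = r_Z restricted to tuples over Y.

def image_factorization(X, Z, rels_Z, g):
    """X, Z: carriers (sets); rels_Z: dict symbol -> set of tuples over Z;
    g: dict X -> Z. Returns (Y, rels_Y, e) where e: X ->> Y is the
    surjective part and the inclusion Y >-> Z reflects all relations."""
    Y = {g[x] for x in X}
    rels_Y = {r: {t for t in ts if all(y in Y for y in t)}
              for r, ts in rels_Z.items()}
    e = dict(g)  # same assignment as g, codomain-restricted to Y
    return Y, rels_Y, e

# Example: collapse part of a 3-element chain Z = {0 <= 1 <= 2}.
X = {0, 1, 2}
Z = {0, 1, 2}
le_Z = {(a, b) for a in Z for b in Z if a <= b}
g = {0: 0, 1: 0, 2: 2}
Y, rels_Y, e = image_factorization(X, Z, {"<=": le_Z}, g)
# Y == {0, 2}, with the restricted order {(0,0), (0,2), (2,2)}.
```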
###### Remark 2.16.
Recall that the _arrow category_ $\mathscr{A}^{\to}$ of a category
$\mathscr{A}$ has as objects all morphisms $f\colon X\to Y$ in $\mathscr{A}$.
A morphism from $f\colon X\to Y$ to $g\colon U\to V$ in $\mathscr{A}^{\to}$ is
given by a pair of morphisms $m\colon X\to U$ and $n\colon Y\to V$ in
$\mathscr{A}$ with $n\cdot f=g\cdot m$. Identities and composition are defined
componentwise. If $\mathscr{A}$ has limits of some type, then also
$\mathscr{A}^{\to}$ has these limits, and the two projection functors from
$\mathscr{A}^{\to}$ to $\mathscr{A}$ mapping an arrow to its domain or
codomain, respectively, preserve them.
###### Lemma 2.17.
1. (1)
For every cofiltered diagram $D$ in $\mathbf{Set}_{\mathsf{f}}$ with epic
connecting maps, the limit cone of $D$ in ${\mathbf{Stone}}$ is formed by
epimorphisms.
2. (2)
For every cofiltered diagram $D$ in ${\mathbf{Stone}}^{\to}$ whose objects are
epimorphisms in ${\mathbf{Stone}}$, also $\lim D$ is epic.
###### Proof.
These properties follow easily from standard results about cofiltered limits
in the category of compact Hausdorff spaces, see e.g. Ribes and Zalesskii
(Ribes and Zalesskii, 2010, Sec. 1). Here, we give an alternative proof using
Stone duality, i.e. we verify that the category $\mathbf{BA}$ of boolean
algebras satisfies the statements dual to (1) and (2).
The dual of (1) states that a filtered diagram of finite boolean algebras with
monic connecting maps has a colimit in $\mathbf{BA}$ whose colimit maps are
monic. This follows from the fact that filtered colimits in $\mathbf{BA}$ are
created by the forgetful functor to $\mathbf{Set}$, and that filtered colimits
of monics in $\mathbf{Set}$ clearly have the desired property.
Similarly, the dual of (2) states that a filtered colimit of monomorphisms in
$\mathbf{BA}^{\to}$ is a monomorphism, which follows from the corresponding
property in $\mathbf{Set}^{\to}$. ∎
## 3\. Pseudovarieties
In universal algebra, a pseudovariety of $\Sigma$-algebras is defined to be a
class of finite algebras closed under finite products, subalgebras, and
quotient algebras. In the present section, we introduce an abstract concept of
pseudovariety in a given category $\mathscr{D}$ with a specified full
subcategory $\mathscr{D}_{\mathsf{f}}$. The objects of
$\mathscr{D}_{\mathsf{f}}$ are called “finite”, but this is just terminology.
Our approach follows the footsteps of Banaschewski and Herrlich (Banaschewski
and Herrlich, 1976) who introduced varieties of objects in a category
$\mathscr{D}$, and proved that they are precisely the full subcategories of
$\mathscr{D}$ presentable by an abstract notion of equation (see 3.2). Here,
we establish a similar result for pseudovarieties: they are precisely the full
subcategories of $\mathscr{D}_{\mathsf{f}}$ that can be presented by
pseudoequations (3.8), which are shown to be equivalent to profinite equations
in many examples (3.23).
###### Assumption 3.1.
For the rest of our paper, we fix a complete category $\mathscr{D}$ with a
proper factorization system $(\mathcal{E},\mathcal{M})$, that is, all
morphisms in $\mathcal{E}$ are epic and all morphisms in $\mathcal{M}$ are
monic. _Quotients_ and _subobjects_ in $\mathscr{D}$ are represented by
morphisms in $\mathcal{E}$ and $\mathcal{M}$, respectively, and denoted by
$\twoheadrightarrow$ and $\rightarrowtail$. Moreover, we fix a small full
subcategory $\mathscr{D}_{\mathsf{f}}$ whose objects are called the _finite_
objects of $\mathscr{D}$, and denote by $\mathcal{E}_{\mathsf{f}}$ and
$\mathcal{M}_{\mathsf{f}}$ the morphisms of $\mathscr{D}_{\mathsf{f}}$ in
$\mathcal{E}$ or $\mathcal{M}$, respectively. We assume that
1. (1)
the category $\mathscr{D}_{\mathsf{f}}$ is closed under finite limits and
subobjects, and
2. (2)
every object of $\mathscr{D}_{\mathsf{f}}$ is a quotient of some projective
object of $\mathscr{D}$.
Here, recall that an object $X$ is called _projective_ (more precisely,
_$\mathcal{E}$ -projective_) if for every quotient $e\colon
P\twoheadrightarrow P^{\prime}$ and every morphism $f\colon X\to P^{\prime}$
there exists a morphism $g\colon X\to P$ with $e\cdot g=f$.
###### Definition 3.2 (Banaschewski and Herrlich (Banaschewski and Herrlich,
1976)).
1. (1)
A _variety_ is a full subcategory of $\mathscr{D}$ closed under products,
subobjects, and quotients.
2. (2)
An _equation_ is a quotient $e\colon X\twoheadrightarrow E$ of a projective
object $X$. An object $A$ is said to _satisfy_ the equation $e$ provided that
$A$ is _injective_ w.r.t. $e$, that is, if for every morphism $g\colon X\to A$
there exists a morphism $h\colon E\to A$ making the triangle below commute:
$\xymatrix{X \ar@{->>}[r]^{e} \ar[dr]_{g} & E \ar@{-->}[d]^{h} \\ & A}$
We note that Banaschewski and Herrlich worked with the factorization system of
regular epimorphisms and monomorphisms. However, all their results and proofs
apply to general proper factorization systems, as already pointed out in their
paper (Banaschewski and Herrlich, 1976).
###### Example 3.3.
Let $\Sigma$ be a one-sorted signature of operation symbols. If
$\mathscr{D}=\Sigma\text{-}\mathsf{Alg}$ is the category of $\Sigma$-algebras
with its usual factorization system ($\mathcal{E}=\text{surjective
homomorphisms}$ and $\mathcal{M}=$ injective homomorphisms), then the above
definition of a variety gives the usual concept in universal algebra: a class
of $\Sigma$-algebras closed under product algebras, subalgebras, and
homomorphic images. Moreover, equations in the above categorical sense are
expressively equivalent to equations $t=t^{\prime}$ between $\Sigma$-terms in
the usual sense:
1. (1)
Given a term equation $t=t^{\prime}$, where $t,t^{\prime}\in T_{\Sigma}X_{0}$
are taken from the free algebra of all $\Sigma$-terms in the set $X_{0}$ of
variables, let $\sim$ denote the least congruence on $T_{\Sigma}X_{0}$ with
$t\sim t^{\prime}$. The corresponding quotient morphism $e\colon
T_{\Sigma}X_{0}\twoheadrightarrow T_{\Sigma}X_{0}/\mathord{\sim}$ is a
categorical equation satisfied by precisely those $\Sigma$-algebras that
satisfy $t=t^{\prime}$ in the usual sense.
2. (2)
Conversely, given a projective $\Sigma$-algebra $X$ and a surjective
homomorphism $e\colon X\twoheadrightarrow E$, then for any set $X_{0}$ of
generators of $X$ we have a split epimorphism $q\colon
T_{\Sigma}X_{0}\twoheadrightarrow X$ using the projectivity of $X$. Consider
the set of term equations $t=t^{\prime}$ where $(t,t^{\prime})$ ranges over
the kernel of $e\cdot q\colon T_{\Sigma}X_{0}\twoheadrightarrow E$. Then a
$\Sigma$-algebra $A$ satisfies all these equations iff it satisfies $e$ in the
categorical sense.
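To make satisfaction of term equations concrete, here is a minimal brute-force check (our illustration, not part of the development above): a finite algebra satisfies a term equation iff every assignment of the variables validates it. We test the commutativity equation $x\cdot y=y\cdot x$ for binary operations given as finite tables; all names below are ours.

```python
from itertools import product

def satisfies_commutativity(op):
    """Check whether a finite binary operation, given as a dict
    (a, b) -> c, satisfies the term equation x*y = y*x, i.e. every
    assignment of the variables x, y validates it."""
    elems = {a for (a, _) in op} | {b for (_, b) in op} | set(op.values())
    return all(op[(a, b)] == op[(b, a)] for a, b in product(elems, repeat=2))

# The two-element meet-semilattice satisfies commutativity ...
meet = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
# ... while truncated subtraction on {0, 1} does not.
monus = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 0}

print(satisfies_commutativity(meet))   # True
print(satisfies_commutativity(monus))  # False
```

The same exhaustive scheme extends to any term equation over a finite algebra, since only finitely many variable assignments exist.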
Recall that the category $\mathscr{D}$ is _$\mathcal{E}$ -co-well-powered_ if
for every object $X$ of $\mathscr{D}$ the quotients with domain $X$ form a
small set.
###### Theorem 3.4 (Banaschewski and Herrlich (Banaschewski and Herrlich,
1976)).
Let $\mathscr{D}$ be a category with a proper factorization system
$(\mathcal{E},\mathcal{M})$. Suppose that $\mathscr{D}$ is complete,
$\mathcal{E}$-co-well-powered, and has enough projectives, i.e. every object
is a quotient of a projective one. Then, a full subcategory of $\mathscr{D}$
is a variety iff it can be presented by a class of equations. That is, it
consists of precisely those objects satisfying each of these equations.
Note that the category of $\Sigma$-algebras satisfies all conditions of the
theorem. Thus, in view of 3.3, Banaschewski and Herrlich’s result subsumes
Birkhoff’s HSP theorem (Birkhoff, 1935). In the following, we are going to
move from varieties in $\mathscr{D}$ to pseudovarieties in
$\mathscr{D}_{\mathsf{f}}$.
###### Definition 3.5.
A _pseudovariety_ is a full subcategory of $\mathscr{D}_{\mathsf{f}}$ closed
under finite products, subobjects, and quotients.
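A classical instance, which we add here for orientation (it is our example, in the spirit of 3.3 with finite algebras as the finite objects): within the finite monoids, the finite groups form a pseudovariety, the key point being that every submonoid of a finite group is closed under inverses.

```latex
% Illustration (our example): D = Mon, D_f = finite monoids.
% Finite groups form a pseudovariety of finite monoids:
%  - finite products: G x H is a group;
%  - subobjects: a submonoid S of a finite group G is a group, since
%    for s in S some power satisfies s^n = 1, whence s^{n-1} = s^{-1} in S;
%  - quotients: a quotient monoid of a group is a group.
\mathbf{Grp}_{\mathsf{f}}\ \subseteq\ \mathbf{Mon}_{\mathsf{f}}
```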
###### Remark 3.6.
Quotients of an object $X$ are ordered by factorization: given
$\mathcal{E}$-quotients $e_{1},e_{2}$, we put $e_{1}\leq e_{2}$ if $e_{1}$
factorizes through $e_{2}$
$\xymatrix{X \ar@{->>}[r]^{e_{2}} \ar@{->>}[d]_{e_{1}} & E_{2} \ar@{-->}[dl] \\ E_{1}}$
Every pair of quotients $e_{i}\colon X\twoheadrightarrow E_{i}$ has a least
upper bound, or _join_ , $e_{1}\vee e_{2}$ obtained by
$(\mathcal{E},\mathcal{M})$-factorizing the mediating morphism $\langle
e_{1},e_{2}\rangle\colon X\to E_{1}\times E_{2}$ as follows:
(3.1) $\vcenter{\xymatrix{ X \ar@{->>}[r]^-{e_{1}\vee e_{2}} \ar@{->>}[d]_{e_{i}} \ar[dr]|{\langle e_{1},e_{2}\rangle} & F \ar@{>->}[d] \\ E_{i} & E_{1}\times E_{2} \ar[l]^-{\pi_{i}} }}$
A nonempty collection of quotients closed under joins is called a _semilattice
of quotients_.
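In $\mathbf{Set}_{\mathsf{f}}$ the join of 3.6 can be computed directly: quotients of a finite set are surjections, and the image factorization of $\langle e_{1},e_{2}\rangle$ identifies $x$ with $y$ exactly when both $e_{1}$ and $e_{2}$ do. A small sketch (our code; quotients are encoded as lists of class labels):

```python
def join(e1, e2):
    """Join of two quotients of the finite set X = {0, ..., n-1},
    each given as a list mapping x to its class label.  Following
    the construction in (3.1), form the pairing x -> (e1[x], e2[x])
    and relabel its distinct values: the result is the coarsest
    quotient through which both e1 and e2 factorize, with kernel
    ker e1 intersected with ker e2."""
    labels = {}
    result = []
    for x in range(len(e1)):
        pair = (e1[x], e2[x])
        if pair not in labels:
            labels[pair] = len(labels)   # next fresh class label
        result.append(labels[pair])
    return result

# X = {0,1,2,3}; e1 identifies 0~1 and 2~3, e2 identifies 1~2~3.
e1 = [0, 0, 1, 1]
e2 = [0, 1, 1, 1]
print(join(e1, e2))  # [0, 1, 2, 2]: only 2~3 is identified by both
```

In partition terms, the join is the common refinement of the two partitions, reflecting that the order of 3.6 is opposite to the refinement order.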
###### Definition 3.7.
A _pseudoequation_ is a semilattice $\rho_{X}$ of quotients of a projective
object $X$ (of “variables”). A finite object $A$ of $\mathscr{D}$ _satisfies_
$\rho_{X}$ if $A$ is cone-injective w.r.t. $\rho_{X}$, that is, for every
morphism $h\colon X\to A$, there exists a member $e\colon X\twoheadrightarrow
E$ of $\rho_{X}$ through which $h$ factorizes:
$\xymatrix{X \ar@{->>}[r]^{\exists e} \ar[dr]_{\forall h} & E \ar@{-->}[d]^{\exists} \\ & A}$
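In $\mathbf{Set}_{\mathsf{f}}$ every object is projective, so Definition 3.7 can be tested by exhaustive search: a finite set $A$ satisfies $\rho_{X}$ iff every map $h\colon X\to A$ is constant on the classes of some member of $\rho_{X}$. A brute-force sketch under these assumptions (the encodings are ours):

```python
from itertools import product

def factors_through(h, e):
    """h: X -> A factors through the quotient e: X -> E iff h is
    constant on the classes of e (ker e contained in ker h)."""
    classes = {}
    for x in range(len(h)):
        if e[x] in classes and classes[e[x]] != h[x]:
            return False
        classes[e[x]] = h[x]
    return True

def satisfies(rho, n_X, A):
    """A finite set A satisfies the pseudoequation rho (a list of
    quotients of X = {0,...,n_X-1}, each a list of class labels) iff
    every map h: X -> A factors through some member of rho."""
    return all(
        any(factors_through(h, e) for e in rho)
        for h in product(A, repeat=n_X)
    )

# X = {0,1,2}; rho contains the single quotient identifying 0 ~ 1.
rho = [[0, 0, 1]]
print(satisfies(rho, 3, [0]))     # True: the unique map is constant
print(satisfies(rho, 3, [0, 1]))  # False: h = (0,1,0) separates 0 and 1
```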
###### Proposition 3.8.
A collection of finite objects of $\mathscr{D}$ forms a pseudovariety iff it
can be presented by pseudoequations, i.e. it consists of precisely those
finite objects that satisfy each of the given pseudoequations.
###### Proof.
1. (1)
We first prove the _if_ direction. Since the intersection of a family of
pseudovarieties is a pseudovariety, it suffices to prove that for every
pseudoequation $\rho_{X}$ over a projective object $X$, the class
$\mathscr{V}$ of all finite objects satisfying $\rho_{X}$ forms a
pseudovariety, i.e. is closed under finite products, subobjects, and
quotients.
1. (1a)
_Finite products._ Let $A,B\in\mathscr{V}$. Since $A$ and $B$ satisfy
$\rho_{X}$, and pseudoequations are closed under binary joins, for every
morphism $\langle h,k\rangle\colon X\to A\times B$ there exists $e\colon
X\twoheadrightarrow E$ in $\rho_{X}$ such that both $h\colon X\to A$ and
$k\colon X\to B$ factorize through $e$. Given $h=h^{\prime}\cdot e$ and
$k=k^{\prime}\cdot e$, the morphism $\langle h^{\prime},k^{\prime}\rangle\colon
E\to A\times B$ yields the desired factorization:
$\langle h,k\rangle=\langle h^{\prime},k^{\prime}\rangle\cdot e.$
Thus $A\times B\in\mathscr{V}$. Since the terminal object $1$ clearly
satisfies every pseudoequation, we also have $1\in\mathscr{V}$.
2. (1b)
_Subobjects._ Let $m\colon A\rightarrowtail B$ be a morphism in
$\mathcal{M}_{\mathsf{f}}$ with $B\in\mathscr{V}$. Then for every morphism
$h\colon X\to A$ we know that $m\cdot h$ factorizes as $k\cdot e$ for some
$e\colon X\twoheadrightarrow E$ in $\rho_{X}$ and some $k\colon E\to B$. The
diagonal fill-in property then shows that $h$ factorizes through $e$:
$\xymatrix{X \ar@{->>}[r]^{e} \ar[d]_{h} & E \ar[d]^{k} \ar@{-->}[dl] \\ A \ar@{>->}[r]_{m} & B}$
Thus, $A\in\mathscr{V}$.
3. (1c)
_Quotients._ Let $q\colon B\twoheadrightarrow A$ be a morphism in
$\mathcal{E}_{\mathsf{f}}$ with $B\in\mathscr{V}$. Every morphism $h\colon
X\to A$ factorizes, since $X$ is projective, as
$h=q\cdot k\quad\text{for some $k\colon X\to B$}.$
Since $k$ factorizes through some $e\in\rho_{X}$, so does $h$. Thus,
$A\in\mathscr{V}$.
2. (2)
For the “only if” direction, suppose that $\mathscr{V}$ is a pseudovariety.
For every projective object $X$ we form the pseudoequation $\rho_{X}$
consisting of all quotients $e\colon X\twoheadrightarrow E$ with
$E\in\mathscr{V}$. This is indeed a semilattice: given $e,f\in\rho_{X}$ we
have $e\vee f\in\rho_{X}$ by (3.1), using that $\mathscr{V}$ is closed under
finite products and subobjects. We claim that $\mathscr{V}$ is presented by
the collection of all the above pseudoequations $\rho_{X}$.
1. (2a)
Every object $A\in\mathscr{V}$ satisfies all $\rho_{X}$. Indeed, given a
morphism $h\colon X\to A$, factorize it as $e\colon X\twoheadrightarrow E$ in
$\mathcal{E}$ followed by $m\colon E\rightarrowtail A$ in $\mathcal{M}$. Then
$E\in\mathscr{V}$ because $\mathscr{V}$ is closed under subobjects, so $e$ is
a member of $\rho_{X}$. Therefore $h=m\cdot e$ is the desired factorization of
$h$, proving that $A$ satisfies $\rho_{X}$.
2. (2b)
Every finite object $A$ satisfying all the pseudoequations $\rho_{X}$ lies in
$\mathscr{V}$. Indeed, by 3.1 there exists a quotient $q\colon
X\twoheadrightarrow A$ for some projective object $X$. Since $A$ satisfies
$\rho_{X}$, there exists a factorization $q=h\cdot e$ for some $e\colon
X\twoheadrightarrow E$ in $\rho_{X}$ and some $h\colon E\to A$. We know that
$E\in\mathscr{V}$, and from $q\in\mathcal{E}$ we deduce $h\in\mathcal{E}$.
Thus $A$, being a quotient of an object of $\mathscr{V}$, lies in
$\mathscr{V}$.∎
###### Remark 3.9.
1. (1)
3.8 would remain valid if we defined pseudoequations as semilattices of
_finite_ quotients of a projective object. This follows immediately from the
above proof.
2. (2)
Let us assume that a collection $\mathsf{Var}$ of projective objects of
$\mathscr{D}$ is given such that every finite object is a quotient of an
object of $\mathsf{Var}$ (cf. 3.1(2)). Then we could define pseudoequations as
semilattices of quotients of members of $\mathsf{Var}$ with finite codomains.
Again, from the above proof we see that 3.8 would remain true.
We would like to reduce pseudoequations to equations in the sense of
Banaschewski and Herrlich. For that we need to move from the category
$\mathscr{D}$ to the pro-completion of $\mathscr{D}_{\mathsf{f}}$.
###### Notation 3.10.
Since $\mathscr{D}$ has (cofiltered) limits and
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is the free completion of
$\mathscr{D}_{\mathsf{f}}$ under cofiltered limits, the embedding
$\mathscr{D}_{\mathsf{f}}\rightarrowtail\mathscr{D}$ extends to an essentially
unique cofinitary functor
$V\colon\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\to\mathscr{D}.$
###### Example 3.11.
If $\mathscr{D}$ is a full subcategory of $\Sigma\text{-}\mathbf{Str}$ closed
under cofiltered limits, we have seen that
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ can be described as a full
subcategory of ${\mathbf{Stone}}{\mathscr{D}}$ by 2.12. The above functor
$V\colon\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\to\mathscr{D}$
is the functor forgetting the topology. Indeed, the corresponding forgetful
functor from ${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ to
$\Sigma\text{-}\mathbf{Str}$ is cofinitary, hence, so is $V$.
###### Remark 3.12.
Recall, e.g. from Mac Lane (Mac Lane, 1998), that the _right Kan extension_ of
a functor $F\colon\mathscr{A}\to\mathscr{C}$ along
$K\colon\mathscr{A}\to\mathscr{B}$ is a functor
$R=\mathsf{Ran}_{K}F\colon\mathscr{B}\to\mathscr{C}$ with a universal natural
transformation $\varepsilon\colon RK\to F$, that is, for every functor
$G\colon\mathscr{B}\to\mathscr{C}$ and every natural transformation
$\gamma\colon GK\to F$ there exists a unique natural transformation
$\gamma^{\dagger}\colon G\to R$ with
$\gamma=\varepsilon\cdot\gamma^{\dagger}K$. If $\mathscr{A}$ is small and
$\mathscr{C}$ is complete, then the right Kan extension exists (Mac Lane,
1998, Theorem X.3.1, X.4.1), and the object $RB$ ($B\in\mathscr{B}$) can be
constructed as the limit
$RB=\lim(B/K\xrightarrow{Q_{B}}\mathscr{A}\xrightarrow{F}\mathscr{C}),$
where $B/K$ denotes the slice category of all morphisms $f\colon B\to KA$
($A\in\mathscr{A})$ and $Q_{B}$ is the projection functor $f\mapsto A$.
Equivalently, $RB$ is given by the end
$RB=\int_{A\in\mathscr{A}}\mathscr{B}(B,KA)\pitchfork FA,$
with $S\pitchfork C$ denoting the $S$-fold power of $C\in\mathscr{C}$.
###### Lemma 3.13.
The functor $V$ has a left adjoint
$\widehat{(\mathord{-})}=\mathsf{Ran}_{J}E\colon\mathscr{D}\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
given by the right Kan extension of the embedding
$E\colon\mathscr{D}_{\mathsf{f}}\rightarrowtail\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
along the embedding
$J\colon\mathscr{D}_{\mathsf{f}}\rightarrowtail\mathscr{D}$ and making the
following triangle commute up to isomorphism:
$\xymatrix{\mathscr{D}_{\mathsf{f}} \ar@{>->}[r]^-{J} \ar@{>->}[dr]_-{E} & \mathscr{D} \ar[d]^{\widehat{(\mathord{-})}} \\ & \mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}}$
###### Proof.
Recall that, up to equivalence,
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is the full subcategory of
$[\mathscr{D}_{\mathsf{f}},\mathbf{Set}]^{\mathrm{op}}$ on cofiltered limits
of representables with $ED=\mathscr{D}_{\mathsf{f}}(D,-)$ for every
$D\in\mathscr{D}_{\mathsf{f}}$ (see Remark A.6), and the functor $V$ is given
by
$V=\mathsf{Ran}_{E}J\colon\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\to\mathscr{D}.$
Consider the following chain of isomorphisms natural in $D\in\mathscr{D}$ and
$H\in\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$:
$\begin{aligned}
\mathscr{D}(D,VH)&\cong\mathscr{D}(D,(\mathsf{Ran}_{E}J)H)&&\\
&\cong\mathscr{D}\Bigl(D,\int_{X}\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}(H,EX)\pitchfork JX\Bigr)&&\text{by the end formula for }\mathsf{Ran}\text{,}\\
&=\mathscr{D}\Bigl(D,\int_{X}[\mathscr{D}_{\mathsf{f}},\mathbf{Set}](EX,H)\pitchfork JX\Bigr)&&\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\text{ full subcategory of }[\mathscr{D}_{\mathsf{f}},\mathbf{Set}]^{\mathrm{op}}\text{,}\\
&\cong\mathscr{D}\Bigl(D,\int_{X}HX\pitchfork JX\Bigr)&&\text{by the Yoneda lemma,}\\
&\cong\int_{X}\mathscr{D}(D,HX\pitchfork JX)&&\mathscr{D}(D,-)\text{ preserves ends,}\\
&\cong\int_{X}\mathbf{Set}(HX,\mathscr{D}(D,JX))&&\text{by the universal property of powers,}\\
&\cong[\mathscr{D}_{\mathsf{f}},\mathbf{Set}](H,\mathscr{D}(D,J-))&&\text{the set of natural transformations as an end,}\\
&=\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}(\mathscr{D}(D,J-),H)&&\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\text{ full subcategory of }[\mathscr{D}_{\mathsf{f}},\mathbf{Set}]^{\mathrm{op}}\text{.}
\end{aligned}$
Hence, the functor $\widehat{(\mathord{-})}\colon D\mapsto\mathscr{D}(D,J-)$
is a left adjoint to $V$. Moreover, $\widehat{(\mathord{-})}$ extends $E$: for
each $D\in\mathscr{D}_{\mathsf{f}}$, we have
$\widehat{D}=\mathscr{D}(JD,J-)=\mathscr{D}_{\mathsf{f}}(D,-)=ED,$
and similarly on morphisms, since $J$ is a full inclusion. It remains to
verify that the functor $\widehat{(\mathord{-})}$ coincides with
$\mathsf{Ran}_{J}E$. This follows from the fact that every presheaf is a
canonical colimit of representables expressed as a coend in
$[\mathscr{D}_{\mathsf{f}},\mathbf{Set}]$:
$\mathscr{D}(D,J-)\cong\int^{X}\mathscr{D}(D,JX)\bullet EX,$
with $\bullet$ denoting copowers. This corresponds to an end in
$[\mathscr{D}_{\mathsf{f}},\mathbf{Set}]^{\mathrm{op}}$:
$\int_{X}\mathscr{D}(D,JX)\pitchfork EX=(\mathsf{Ran}_{J}E)D.$
Thus $\widehat{(\mathord{-})}=\mathsf{Ran}_{J}E$, as claimed. ∎
###### Construction 3.14.
By expressing the right Kan extension
$\widehat{(\mathord{-})}=\mathsf{Ran}_{J}E\colon\mathscr{D}\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
as a limit, the action $D\mapsto\widehat{D}$ on objects, $f\mapsto\widehat{f}$ on
morphisms, the unit, and the counit of the adjunction
$\widehat{(\mathord{-})}\dashv V$ are given as follows.
1. (1)
For every object $D$ of $\mathscr{D}$, the object
$\widehat{D}\in\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is a limit of
the diagram
$P_{D}\colon
D/{\mathscr{D}_{\mathsf{f}}}\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}},\quad
P_{D}(D\xrightarrow{a}A)=A.$
We use the following notation for the limit cone of $P_{D}$:
$\frac{D\xrightarrow{a}A}{\widehat{D}\xrightarrow{\widehat{a}}A}$
where $(A,a)$ ranges over $D/{\mathscr{D}_{\mathsf{f}}}$. For finite $D$ we
choose the trivial limit: $\widehat{D}=D$ and $\widehat{a}=a$.
2. (2)
Given $f\colon D\to D^{\prime}$ in $\mathscr{D}$, the morphisms
$\widehat{a\cdot f}$ with $a$ ranging over
$D^{\prime}/\mathscr{D}_{\mathsf{f}}$ form a cone over $P_{D^{\prime}}$.
Define $\widehat{f}\colon\widehat{D}\to\widehat{D^{\prime}}$ to be the unique
morphism such that the following triangles commute for all $a\colon
D^{\prime}\to A$ with $A\in\mathscr{D}_{\mathsf{f}}$:
$\xymatrix{\widehat{D} \ar[r]^{\widehat{f}} \ar[dr]_{\widehat{a\cdot f}} & \widehat{D^{\prime}} \ar[d]^{\widehat{a}} \\ & A}$
Note that overloading the notation $\widehat{(-)}$ causes no problem because
if $\widehat{D^{\prime}}=D^{\prime}\in\mathscr{D}_{\mathsf{f}}$ then
$\widehat{f}$ is a projection of the limit cone for $P_{D}$ (see item (1)),
since for $a=\mathsf{id}_{D^{\prime}}$ we have
$\widehat{a}=\mathsf{id}_{D^{\prime}}$.
3. (3)
The unit $\eta$ at $D\in\mathscr{D}$ is given by the unique morphism
$\eta_{D}\colon D\to V\widehat{D}$
in $\mathscr{D}$ such that the following triangles commute for all $h\colon
D\to A$ with $A\in\mathscr{D}_{\mathsf{f}}$:
$\xymatrix{D \ar[r]^-{\eta_{D}} \ar[dr]_{h} & V\widehat{D} \ar[d]^{V\widehat{h}} \\ & A}$
Here one uses that $V$ is cofinitary, and thus the morphisms $V\widehat{h}$
form a limit cone in $\mathscr{D}$.
4. (4)
The counit $\varepsilon$ at
$D\in\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is the unique morphism
$\varepsilon_{D}\colon\widehat{VD}\to D$
in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ such that the following triangles commute, where
$a\colon D\to A$ ranges over the slice category
$D/{\mathscr{D}_{\mathsf{f}}}$:
$\xymatrix{\widehat{VD} \ar[r]^-{\varepsilon_{D}} \ar[dr]_{\widehat{Va}} & D \ar[d]^{a} \\ & A}$
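As an orientation for 3.14 (this special case is our addition and is not needed in the sequel), consider $\mathscr{D}=\mathbf{Set}$ with $\mathscr{D}_{\mathsf{f}}$ the finite sets, so that $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\simeq{\mathbf{Stone}}$ and $V$ forgets the topology. If we compute correctly, item (1) then recovers a familiar space:

```latex
% For D = Set, D_f = finite sets (our illustration):
\widehat{X}\;=\;\lim_{(X\xrightarrow{\,a\,}A)\,\in\,X/\mathbf{Set}_{\mathsf{f}}}A
\;\cong\;\beta X,
% the space of ultrafilters on X, i.e. the Stone dual of the boolean
% algebra P(X); the adjunction of 3.13 then specializes to the free
% Stone space on a set, left adjoint to the forgetful functor.
```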
###### Notation 3.15.
Recall that $\mathcal{E}_{\mathsf{f}}$ and $\mathcal{M}_{\mathsf{f}}$ are the
morphisms of $\mathscr{D}_{\mathsf{f}}$ in $\mathcal{E}$ and $\mathcal{M}$,
respectively. We denote by
$\widehat{\mathcal{E}}\qquad\text{and}\qquad\widehat{\mathcal{M}}$
the collection of all morphisms of
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ that are cofiltered limits of
members of $\mathcal{E}_{\mathsf{f}}$ or $\mathcal{M}_{\mathsf{f}}$ in the
arrow category
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\rightarrow}$, respectively.
###### Remark 3.16.
1. (1)
Let us recall that a functor $P:\mathscr{J}\to\mathscr{J}^{\prime}$ is _final_
if
1. (a)
for every object $J^{\prime}$ of $\mathscr{J}^{\prime}$ a morphism from
$J^{\prime}$ into $PJ$ exists for some $J\in\mathscr{J}$;
2. (b)
given two morphisms $f_{i}\colon J^{\prime}\to PJ_{i}$ ($i=1,2$) there exist
morphisms $g_{i}\colon J_{i}\to J$ in $\mathscr{J}$ with $Pg_{1}\cdot
f_{1}=Pg_{2}\cdot f_{2}$.
Finality of $P$ implies that for every diagram
$D\colon\mathscr{J}^{\prime}\to\mathscr{K}$ one has
$\mathop{\mathsf{colim}}D=\mathop{\mathsf{colim}}D\cdot P$ whenever one of the
colimits exists. The dual concept is that of an _initial functor_
$P\colon\mathscr{J}\to\mathscr{J}^{\prime}$.
2. (2)
For every finite $\mathcal{E}$-quotient $e\colon X\twoheadrightarrow E$ in
$\mathscr{D}$, the corresponding limit projection
$\widehat{e}\colon\widehat{X}\twoheadrightarrow E$ lies in
$\widehat{\mathcal{E}}$. Indeed, since $E$ is finitely copresentable in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, the morphism $\widehat{e}$
factorizes through $\widehat{h}$ for some $h$ in $X/\mathscr{D}_{\mathsf{f}}$,
which may be assumed to be a quotient in $\mathscr{D}$: otherwise, take the
$(\mathcal{E},\mathcal{M})$-factorization $h=m\cdot q$ of $h$ and replace $h$
by $q$.
Thus, we obtain an initial subdiagram
$P^{\prime}_{X}\colon\mathscr{I}\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
of $P_{X}\colon
X/\mathscr{D}_{\mathsf{f}}\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ by
restricting $P_{X}$ to the full subcategory of finite quotients $h\colon
X\twoheadrightarrow A$ in $X/\mathscr{D}_{\mathsf{f}}$ through which $e$
factorizes, i.e. where $e=e_{h}\cdot h$ for some $e_{h}\colon A\to E$. Note
that $e_{h}\in\mathcal{E}$ because $e,h\in\mathcal{E}$. The quotients $e_{h}$
($h\in\mathscr{I}$) form a cofiltered diagram in
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\to}$ with limit cone
$(\widehat{h},\mathsf{id}_{E})$:
$\xymatrix{\widehat{X} \ar[r]^{\widehat{h}} \ar[d]_{\widehat{e}} & A \ar[d]^{e_{h}} \\ E \ar[r]_{\mathsf{id}_{E}} & E}$
Thus, $\widehat{e}\in\widehat{\mathcal{E}}$.
3. (3)
For every cofiltered diagram $B\colon I\to\mathscr{D}_{\mathsf{f}}$ with
connecting morphisms in $\mathcal{E}$, the limit projections in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ lie in
$\widehat{\mathcal{E}}$. Indeed, let $b_{i}\colon X\to B_{i}$ ($i\in I)$
denote the limit cone. Given $j\in I$, we are to show
$b_{j}\in\widehat{\mathcal{E}}$. Form the diagram in
$\mathscr{D}_{\mathsf{f}}^{\to}$ whose objects are all connecting morphisms of
$B$ with codomain $B_{j}$ and whose morphisms from $h\colon
B_{i}\twoheadrightarrow B_{j}$ to $h^{\prime}\colon
B_{i^{\prime}}\twoheadrightarrow B_{j}$ are all connecting maps $k\colon
B_{i}\twoheadrightarrow B_{i^{\prime}}$ of $B$. This is a cofiltered diagram
in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ with limit $b_{j}$ and the
following limit cone:
$\xymatrix{X \ar[r]^{b_{i}} \ar[d]_{b_{j}} & B_{i} \ar[d]^{h} \\ B_{j} \ar[r]_{\mathsf{id}} & B_{j}}$
Since each $h$ lies in $\mathcal{E}_{\mathsf{f}}$, this proves
$b_{j}\in\widehat{\mathcal{E}}$.
4. (4)
For every cone $p_{i}\colon P\to B_{i}$ of the diagram $B$ in (3) with
$p_{i}\in\widehat{\mathcal{E}}$ for all $i\in I$, the unique factorization
$p\colon P\to X$ through the limit of $B$ lies in $\widehat{\mathcal{E}}$.
Indeed, $p$ is the limit of $p_{i}$, $i\in I$, with the following limit cone:
$\xymatrix{P \ar[r]^{\mathsf{id}} \ar[d]_{p} & P \ar[d]^{p_{i}} \\ X \ar[r]_{b_{i}} & B_{i}}$
###### Proposition 3.17.
The pair $(\widehat{\mathcal{E}},\widehat{\mathcal{M}})$ is a proper
factorization system of $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$.
###### Proof.
1. (1)
All morphisms of $\widehat{\mathcal{E}}$ are epic. This follows from the dual
of (Adámek and Rosický, 1994, Prop. 1.62); however, we give a direct proof.
Given $e\colon X\to Y$ in $\widehat{\mathcal{E}}$, we have a limit cone of a
cofiltered diagram $D$ in
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\to}$ as follows:
$\xymatrix{ X \ar[r]^{e} \ar[d]_{a_{i}} & Y \ar[d]^{b_{i}} \\ A_{i} \ar@{->>}[r]_{e_{i}} & B_{i} }\qquad\qquad(i\in I)$
where $e_{i}\in\mathcal{E}_{\mathsf{f}}$ for each $i\in I$. Let $p,q\colon
Y\to Z$ be two morphisms with $p\cdot e=q\cdot e$; we need to show $p=q$.
Without loss of generality we can assume that the object $Z$ is finite because
$\mathscr{D}_{\mathsf{f}}$ is limit-dense in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$. Since $(b_{i})$ is a
cofiltered limit cone in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$,
there exists $i\in I$ such that $p$ and $q$ factorize through $b_{i}$, i.e.
there exist morphisms $p^{\prime},q^{\prime}$ with $p^{\prime}\cdot b_{i}=p$
and $q^{\prime}\cdot b_{i}=q$. The limit projection $a_{i}$ of the cofiltered
limit $X=\lim A_{i}$ in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ merges
$p^{\prime}\cdot e_{i}$ and $q^{\prime}\cdot e_{i}$ (since $e$ merges $p$ and
$q$). Since $Z$ is finite, there exists a connecting morphism
$(a_{ji},b_{ji})$ of $D$ such that $p^{\prime}\cdot e_{i}$ and
$q^{\prime}\cdot e_{i}$ are merged by $a_{ji}$.
$\xymatrix{ X \ar[r]^{e} \ar[d]^{a_{i}} \ar@/_2pc/[dd]_{a_{j}} & Y \ar[d]_{b_{i}} \ar@<0.5ex>[r]^{p} \ar@<-0.5ex>[r]_{q} & Z \\ A_{i} \ar[r]^{e_{i}} & B_{i} \ar@<0.5ex>[ur]^{p^{\prime}} \ar@<-0.5ex>[ur]_{q^{\prime}} \\ A_{j} \ar[u]^{a_{ji}} \ar[r]_{e_{j}} & B_{j} \ar[u]_{b_{ji}} }$
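For the reader's convenience, the merging step can be spelled out using the commutativity $b_{i}\cdot e=e_{i}\cdot a_{i}$ of the limit square:

$p^{\prime}\cdot e_{i}\cdot a_{i}=p^{\prime}\cdot b_{i}\cdot e=p\cdot e=q\cdot e=q^{\prime}\cdot b_{i}\cdot e=q^{\prime}\cdot e_{i}\cdot a_{i}.$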
Therefore
$p^{\prime}\cdot b_{ji}\cdot e_{j}=p^{\prime}\cdot e_{i}\cdot
a_{ji}=q^{\prime}\cdot e_{i}\cdot a_{ji}=q^{\prime}\cdot b_{ji}\cdot e_{j}.$
Since $e_{j}$ is an epimorphism in $\mathscr{D}_{\mathsf{f}}$, this implies
$p^{\prime}\cdot b_{ji}=q^{\prime}\cdot b_{ji}$. Thus
$p=p^{\prime}\cdot b_{i}=p^{\prime}\cdot b_{ji}\cdot b_{j}=q^{\prime}\cdot
b_{ji}\cdot b_{j}=q^{\prime}\cdot b_{i}=q.$
2. (2)
All morphisms of $\widehat{\mathcal{M}}$ are monic. Indeed, given $m\colon
X\to Y$ in $\widehat{\mathcal{M}}$, we have a limit cone of a cofiltered
diagram $D$ in $(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\to}$ as
follows:
$\xymatrix{ X \ar[r]^{m} \ar[d]_{a_{i}} & Y \ar[d]^{b_{i}} \\ A_{i} \ar@{ >->}[r]_{m_{i}} & B_{i} }\qquad\qquad(i\in I)$
where $m_{i}\in\mathcal{M}_{\mathsf{f}}$ for each $i\in I$. Suppose that
$f,g\colon Z\to X$ with $m\cdot f=m\cdot g$ are given. Express $Z$ as a
cofiltered limit $z_{j}\colon Z\twoheadrightarrow Z_{j}$ ($j\in J$) of finite
objects with epimorphic limit projections $z_{j}$. For each $i\in I$, since
$A_{i}$ is finitely copresentable, we obtain a factorization of $a_{i}\cdot f$
and $a_{i}\cdot g$ through some $z_{j_{i}}$, say $f_{i}\cdot z_{j_{i}}=a_{i}\cdot f$ and
$g_{i}\cdot z_{j_{i}}=a_{i}\cdot g$.
$\xymatrix{ Z \ar@<0.5ex>[r]^{f} \ar@<-0.5ex>[r]_{g} \ar[d]_{z_{j_{i}}} & X \ar[r]^{m} \ar[d]^{a_{i}} & Y \ar[d]^{b_{i}} \\ Z_{j_{i}} \ar@<0.5ex>[r]^{f_{i}} \ar@<-0.5ex>[r]_{g_{i}} & A_{i} \ar[r]_{m_{i}} & B_{i} }$
From $m\cdot f=m\cdot g$ it follows that $m_{i}\cdot f_{i}\cdot
z_{j_{i}}=m_{i}\cdot g_{i}\cdot z_{j_{i}}$ for each $i$. This implies
$m_{i}\cdot f_{i}=m_{i}\cdot g_{i}$ because $z_{j_{i}}$ is epic, and thus
$f_{i}=g_{i}$ because $m_{i}$ is monic in $\mathscr{D}_{\mathsf{f}}$.
Therefore, $a_{i}\cdot f=a_{i}\cdot g$ for each $i$, thus $f=g$ because the
limit projections $a_{i}$ are collectively monic.
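Spelling out the last step: since $a_{i}\cdot f=f_{i}\cdot z_{j_{i}}$ and $a_{i}\cdot g=g_{i}\cdot z_{j_{i}}$, we obtain

$a_{i}\cdot f=f_{i}\cdot z_{j_{i}}=g_{i}\cdot z_{j_{i}}=a_{i}\cdot g.$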
3. (3)
Every morphism $g\colon X\to Y$ of
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ has an
$(\widehat{\mathcal{E}},\widehat{\mathcal{M}})$-factorization. Indeed,
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\to}$ is the pro-completion
of $\mathscr{D}_{\mathsf{f}}^{\to}$; see (Adámek and Rosický, 1994, Cor. 1.54)
for the dual statement. Thus, there exists a cofiltered diagram $R\colon
I\to\mathscr{D}_{\mathsf{f}}^{\to}$ with limit $g$. Let the following
morphisms
$\xymatrix{ X \ar[r]^{g} \ar[d]_{a_{i}} & Y \ar[d]^{b_{i}} \\ A_{i} \ar[r]_{g_{i}} & B_{i} }\quad(i\in I)$
form the limit cone. Factorize $g_{i}$ into an $\mathcal{E}$-morphism
$e_{i}\colon A_{i}\twoheadrightarrow C_{i}$ followed by an
$\mathcal{M}$-morphism $m_{i}\colon C_{i}\rightarrowtail B_{i}$. Since
$\mathscr{D}_{\mathsf{f}}$ is closed under subobjects, we have
$e_{i}\in\mathcal{E}_{\mathsf{f}}$ and $m_{i}\in\mathcal{M}_{\mathsf{f}}$.
Diagonal fill-in yields a diagram $\overline{R}\colon
I\to\mathscr{D}_{\mathsf{f}}$ with objects $C_{i}$, $i\in I$, and connecting
morphisms derived from those of $R$. Let
$Z\in\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ be a limit of
$\overline{R}$ with the limit cone
$c_{i}\colon Z\to C_{i}\quad(i\in I).$
Then there are unique morphisms $e=\lim e_{i}\in\widehat{\mathcal{E}}$ and
$m=\lim m_{i}\in\widehat{\mathcal{M}}$ such that the following diagrams
commute for all $i\in I$:
$\xymatrix{ X \ar@/^1.5pc/[rr]^{g} \ar[r]_{e} \ar[d]_{a_{i}} & Z \ar[r]_{m} \ar[d]^{c_{i}} & Y \ar[d]^{b_{i}} \\ A_{i} \ar[r]_{e_{i}} & C_{i} \ar[r]_{m_{i}} & B_{i} }$
4. (4)
We verify the diagonal fill-in property. Let a commutative square
$\xymatrix{ X \ar[r]^{e} \ar[d]_{u} & Y \ar[d]^{v} \\ P \ar[r]_{m} & Q }$
with $e\in\widehat{\mathcal{E}}$ and $m\in\widehat{\mathcal{M}}$ be given.
1. (4a)
Assume first that $m\in\mathcal{M}_{\mathsf{f}}$. Express $e$ as a cofiltered
limit of objects $e_{i}\in\mathcal{E}_{\mathsf{f}}$ with the following limit
cone:
$\xymatrix{ X \ar[r]^{e} \ar[d]_{a_{i}} & Y \ar[d]^{b_{i}} \\ A_{i} \ar[r]_{e_{i}} & B_{i} }$
Since $P$ is finite and $X=\lim A_{i}$ is a cofiltered limit, $u$ factorizes
through some $a_{i}$. Analogously for $v$ and some $b_{i}$; the index $i$ can
be chosen to be the same since the diagram is cofiltered. Thus we have
morphisms $u^{\prime}$, $v^{\prime}$ such that in the following diagram the
left-hand triangle and the right-hand one commute:
$\xymatrix{ X \ar[r]^{e} \ar[d]^{a_{i}} \ar@/_2pc/[dd]_{u} & Y \ar[d]_{b_{i}} \ar@/^2pc/[dd]^{v} \\ A_{i} \ar[r]^{e_{i}} \ar[d]^{u^{\prime}} & B_{i} \ar[d]_{v^{\prime}} \\ P \ar[r]_{m} & Q }$
Without loss of generality, we can assume that the lower part also commutes.
Indeed, $Q$ is finite and the limit map $a_{i}$ merges the lower part:
$(m\cdot u^{\prime})\cdot a_{i}=m\cdot u=v\cdot e=v^{\prime}\cdot b_{i}\cdot
e=(v^{\prime}\cdot e_{i})\cdot a_{i}.$
Since our diagram is cofiltered, some connecting morphism from $A_{j}$ to
$A_{i}$ also merges the lower part. Hence, by choosing $j$ instead of $i$ we
could get the lower part commutative.
Since $e_{i}\in\mathcal{E}$ and $m\in\mathcal{M}$, we use diagonal fill-in to
get a morphism $d\colon B_{i}\to P$ with $d\cdot e_{i}=u^{\prime}$ and $m\cdot
d=v^{\prime}$. Then $d\cdot b_{i}\colon Y\to P$ is the desired diagonal in the
original square.
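For completeness, one can verify the two triangle identities for this diagonal, using $b_{i}\cdot e=e_{i}\cdot a_{i}$, $d\cdot e_{i}=u^{\prime}$, $u^{\prime}\cdot a_{i}=u$, $m\cdot d=v^{\prime}$ and $v^{\prime}\cdot b_{i}=v$:

$(d\cdot b_{i})\cdot e=d\cdot e_{i}\cdot a_{i}=u^{\prime}\cdot a_{i}=u\qquad\text{and}\qquad m\cdot(d\cdot b_{i})=v^{\prime}\cdot b_{i}=v.$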
2. (4b)
Now suppose that $m\in\widehat{\mathcal{M}}$ is arbitrary, i.e. a cofiltered
limit of a diagram $D$ whose objects are morphisms $m_{t}$ of
$\mathcal{M}_{\mathsf{f}}$ with a limit cone as follows:
$\xymatrix{ P \ar[r]^{m} \ar[d]_{p_{t}} & Q \ar[d]^{q_{t}} \\ P_{t} \ar@{ >->}[r]_{m_{t}} & Q_{t} }\quad(t\in T)$
For each $t$ we have, due to item (4a) above, a diagonal fill-in
$\xymatrix{ X \ar[r]^{e} \ar[d]_{u} & Y \ar[d]^{v} \ar[ddl]_{d_{t}} \\ P \ar[d]_{p_{t}} & Q \ar[d]^{q_{t}} \\ P_{t} \ar[r]_{m_{t}} & Q_{t} }$
Given a connecting morphism $(p,q)\colon m_{t}\to m_{s}$ ($t,s\in T$) of the
diagram $D$, the following triangle
$\xymatrix{ Y \ar[r]^{d_{t}} \ar[dr]_{d_{s}} & P_{t} \ar[d]^{p} \\ & P_{s} }$
commutes, that is, all $d_{t}$ form a cone of the diagram $D_{0}\cdot D$,
where $D_{0}\colon\mathscr{D}_{\mathsf{f}}^{\to}\to\mathscr{D}_{\mathsf{f}}$
is the domain functor, with limit $p_{t}\colon P\to P_{t}$ ($t\in T$). Indeed,
$e$ is epic by item (1), and from the fact that $p_{s}=p\cdot p_{t}$ we obtain
$(p\cdot d_{t})\cdot e=p\cdot p_{t}\cdot u=p_{s}\cdot u=d_{s}\cdot e.$
Thus, there exists a unique $d\colon Y\to P$ with $d_{t}=p_{t}\cdot d$ for all
$t\in T$. This is the desired diagonal: $u=d\cdot e$ follows from
$(p_{t})_{t\in T}$ being collectively monic, since
$p_{t}\cdot u=d_{t}\cdot e=p_{t}\cdot d\cdot e.$
This implies $v=m\cdot d$ because $v\cdot e=m\cdot u=m\cdot d\cdot e$ and $e$
is epic.
###### Proposition 3.18.
Let $\mathscr{D}$ be a full subcategory of $\Sigma\text{-}\mathbf{Str}$ closed
under products and subobjects. Then in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\subseteq{\mathbf{Stone}}\mathscr{D}$
we have
$\begin{aligned}\widehat{\mathcal{E}}&=\text{surjective morphisms, and}\\ \widehat{\mathcal{M}}&=\text{relation-reflecting injective morphisms},\end{aligned}$
cf. Remark 2.15(2).
###### Proof.
1. (1)
Let $e\colon X\to Y$ be a surjective morphism of
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$. We shall prove that
$e\in\widehat{\mathcal{E}}$ by expressing it as a cofiltered limit of a
diagram of quotients in $\mathscr{D}_{\mathsf{f}}^{\to}$. In
${\mathbf{Stone}}\mathscr{D}$ we have the factorization system
$(\mathcal{E}_{0},\mathcal{M}_{0})$ where $\mathcal{E}_{0}$ = surjective
homomorphisms, and $\mathcal{M}_{0}$ = injective relation-reflecting
homomorphisms. This follows from Remark 2.15 and the fact that $\mathscr{D}$,
being closed under subobjects in $\Sigma\text{-}\mathbf{Str}$, inherits the
factorization system of $\Sigma\text{-}\mathbf{Str}$.
The category $\mathscr{D}$ is closed under products and subobjects, so it is
closed under all limits. Since $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
is the closure of $\mathscr{D}_{\mathsf{f}}$ under cofiltered limits in
${\mathbf{Stone}}(\mathscr{D})$ by 2.12, also
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\to}=\mathop{\mathsf{Pro}}(\mathscr{D}_{\mathsf{f}}^{\to})$
is the closure of $\mathscr{D}_{\mathsf{f}}^{\to}$ under cofiltered limits in
$({\mathbf{Stone}}{\mathscr{D}})^{\to}$. Thus for $e$ there exists a
cofiltered diagram $D$ in $\mathscr{D}_{\mathsf{f}}^{\to}$ of morphisms
$h_{i}\colon A_{i}\to B_{i}$ ($i\in I$) of $\mathscr{D}_{\mathsf{f}}$ with a
limit cone in $({\mathbf{Stone}}{\mathscr{D}})^{\to}$ as follows:
$\xymatrix{ X \ar[r]^{e} \ar[d]_{a_{i}} & Y \ar[d]^{b_{i}} \\ A_{i} \ar[r]_{h_{i}} & B_{i} }$
Using the factorization system $(\mathcal{E}_{0},\mathcal{M}_{0})$ we
factorize
$a_{i}=m_{i}\cdot\overline{a}_{i}\quad\text{and}\quad
b_{i}=n_{i}\cdot\overline{b}_{i}\quad\text{for $i\in I$},$
and use the diagonal fill-in to define morphisms $\overline{h}_{i}$ as
follows:
$\xymatrix{ X \ar[r]^{e} \ar[d]_{\overline{a}_{i}} & Y \ar[d]^{\overline{b}_{i}} \\ \overline{A}_{i} \ar[r]^{\overline{h}_{i}} \ar[d]_{m_{i}} & \overline{B}_{i} \ar[d]^{n_{i}} \\ A_{i} \ar[r]_{h_{i}} & B_{i} }$
We obtain a diagram $\overline{D}$ with objects
$\overline{h}_{i}\colon\overline{A}_{i}\to\overline{B}_{i}$ ($i\in I$) in
$({\mathbf{Stone}}\mathscr{D})^{\to}$. Connecting morphisms are derived from
those of $D$: given $(p,q)\colon h_{i}\to h_{j}$ in $D$
$\xymatrix{ A_{i} \ar[r]^{h_{i}} \ar[d]_{p} & B_{i} \ar[d]^{q} \\ A_{j} \ar[r]_{h_{j}} & B_{j} }$
the diagonal fill-in property yields morphisms $\overline{p}$ and
$\overline{q}$ as follows:
$\xymatrix{ X \ar[r]^{\overline{a}_{i}} \ar[dr]_{\overline{a}_{j}} & \overline{A}_{i} \ar[r]^{m_{i}} \ar[d]^{\overline{p}} & A_{i} \ar[d]^{p} \\ & \overline{A}_{j} \ar[r]_{m_{j}} & A_{j} }$
$\xymatrix{ Y \ar[r]^{\overline{b}_{i}} \ar[dr]_{\overline{b}_{j}} & \overline{B}_{i} \ar[r]^{n_{i}} \ar[d]^{\overline{q}} & B_{i} \ar[d]^{q} \\ & \overline{B}_{j} \ar[r]_{n_{j}} & B_{j} }$
It is easy to see that $(\overline{p},\overline{q})$ is a morphism from
$\overline{h}_{i}$ to $\overline{h}_{j}$ in
$({\mathbf{Stone}}\mathscr{D})^{\to}$. This yields a cofiltered diagram
$\overline{D}$. Since
$\overline{h}_{i}\cdot\overline{a}_{i}=\overline{b}_{i}\cdot e$ is surjective,
being a composite of surjective morphisms, it follows that $\overline{h}_{i}$
is also surjective. We claim that the
morphisms
$(\overline{a}_{i},\overline{b}_{i})\colon e\to\overline{h}_{i}\quad(i\in I)$
form a limit cone of $\overline{D}$. To see this, note first that since the
morphisms $(a_{i},b_{i})\colon e\to h_{i}$, $i\in I$, form a cone of $D$ and
all $m_{i}$ and $n_{i}$ are monic, the morphisms
$(\overline{a}_{i},\overline{b}_{i})$, $i\in I$, form a cone of
$\overline{D}$. Now let another cone be given with domain $r\colon U\to V$ as
follows:
$\xymatrix{ U \ar[r]^{r} \ar[d]_{u_{i}} & V \ar[d]^{v_{i}} \\ \overline{A}_{i} \ar[r]^{\overline{h}_{i}} \ar[d]_{m_{i}} & \overline{B}_{i} \ar[d]^{n_{i}} \\ A_{i} \ar[r]_{h_{i}} & B_{i} }$
Then we get a cone of $D$ for all $i\in I$ by the morphisms
$(m_{i}u_{i},n_{i}v_{i})\colon r\to h_{i}$. The unique factorization $(u,v)$
through the limit cone of $D$:
$\xymatrix{ U \ar[r]^{r} \ar[d]^{u} \ar@/_2pc/[dd]_{m_{i}u_{i}} & V \ar[d]_{v} \ar@/^2pc/[dd]^{n_{i}v_{i}} \\ X \ar[r]^{e} \ar[d]^{a_{i}} & Y \ar[d]_{b_{i}} \\ A_{i} \ar[r]_{h_{i}} & B_{i} }$
is a factorization of $(u_{i},v_{i})$ through the cone
$(\overline{a}_{i},\overline{b}_{i})$. Indeed, in the following diagram
$\xymatrix{ U \ar[r]^{r} \ar[d]^{u} \ar@/_3pc/[dd]_{u_{i}} & V \ar[d]_{v} \ar@/^3pc/[dd]^{v_{i}} \\ X \ar[r]^{e} \ar[d]^{\overline{a}_{i}} & Y \ar[d]_{\overline{b}_{i}} \\ \overline{A}_{i} \ar[r]^{\overline{h}_{i}} \ar[d]^{m_{i}} & \overline{B}_{i} \ar[d]_{n_{i}} \\ A_{i} \ar[r]_{h_{i}} & B_{i} }$
the desired equality $v_{i}=\overline{b}_{i}v$ follows since $n_{i}$ is monic;
analogously for $u_{i}=\overline{a}_{i}u$. The uniqueness of the factorization
$(u,v)$ also follows from the last diagram: if the upper left-hand and right-
hand parts commute, then $(u,v)$ is a factorization of the cone
$(m_{i}u_{i},n_{i}v_{i})$ through the limit cone of $D$. Thus, it is unique.
2. (2)
Conversely, every cofiltered limit of quotients in
$\mathscr{D}_{\mathsf{f}}^{\to}$ is surjective in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$. Indeed, cofiltered limits in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ are formed in
${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$ by 2.12, and the forgetful
functor into ${\mathbf{Stone}}$ thus preserves them. Hence the same is true
about the forgetful functor from
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\to}$ to
${\mathbf{Stone}}^{\to}$. Thus, the claim follows from 2.17.
3. (3)
We show that every morphism of $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
which is monic and reflects relations is an element of
$\widehat{\mathcal{M}}$.
1. (3a)
We first prove a property of filtered colimits in $\mathbf{BA}^{\to}$. Let $D$
be a filtered diagram with objects $h_{i}\colon A_{i}\to B_{i}$ ($i\in I$) in
$\mathbf{BA}^{\to}$. Let $h_{i}=m_{i}\cdot e_{i}$ be the factorization of
$h_{i}$ into an epimorphism $e_{i}\colon
A_{i}\twoheadrightarrow\overline{B}_{i}$ followed by a monomorphism
$m_{i}\colon\overline{B}_{i}\rightarrowtail B_{i}$ in $\mathbf{BA}$. Using
diagonal fill-in we get a filtered diagram $\overline{D}$ with objects $e_{i}$
($i\in I$) and with connecting morphisms $(u,\overline{v})\colon e_{i}\to
e_{j}$ derived from the connecting morphisms $(u,v)\colon h_{i}\to h_{j}$ of
$D$ using diagonal fill-in:
$\begin{array}{ccccc}
A_{i} & \xrightarrow{\;e_{i}\;} & \overline{B}_{i} & \xrightarrow{\;m_{i}\;} & B_{i}\\
{\scriptstyle u}\downarrow & & {\scriptstyle\overline{v}}\downarrow & & \downarrow{\scriptstyle v}\\
A_{j} & \xrightarrow{\;e_{j}\;} & \overline{B}_{j} & \xrightarrow{\;m_{j}\;} & B_{j}
\end{array}\qquad\text{(rows compose to }h_{i}\text{ and }h_{j}\text{)}$
Our claim is that if the colimit $h=\mathop{\mathsf{colim}}h_{i}$ in
$\mathbf{BA}^{\to}$ is an epimorphism of $\mathbf{BA}$, then one has
$h=\mathop{\mathsf{colim}}e_{i}$. To see this, suppose that a colimit cocone
of $D$ is given as follows:
$\begin{array}{ccccc}
A_{i} & \xrightarrow{\;e_{i}\;} & \overline{B}_{i} & \xrightarrow{\;m_{i}\;} & B_{i}\\
{\scriptstyle a_{i}}\downarrow & & & & \downarrow{\scriptstyle b_{i}}\\
A & & \xrightarrow{\;h\;} & & B
\end{array}\qquad\text{(top row composes to }h_{i}\text{)}$
Then we prove that $\overline{D}$ has the colimit cocone $(a_{i},b_{i}\cdot
m_{i})$, $i\in I$. Indeed, since $A=\mathop{\mathsf{colim}}A_{i}$ with colimit
cocone $(a_{i})$, all we need to verify is that
$B=\mathop{\mathsf{colim}}\overline{B}_{i}$ with cocone $(b_{i}\cdot m_{i})$.
This cocone is collectively epic because every element $x$ of $B$ has the form
$x=h(y)$ for some $y\in A$, using that $h$ is epic by hypothesis, and that the
cocone $(a_{i})$ is collectively epic. The diagram $\overline{D}$ is filtered,
thus, to prove that $B=\mathop{\mathsf{colim}}\overline{B}_{i}$, we only need
to verify that whenever a pair $x_{1},x_{2}\in\overline{B}_{i}$ (for some
$i\in I$) is merged by $b_{i}\cdot m_{i}$, there exists a connecting morphism
$\overline{v}\colon\overline{B}_{i}\to\overline{B}_{j}$ merging $x_{1},x_{2}$.
Since $m_{i}$ is monic and $B=\mathop{\mathsf{colim}}B_{i}$, some connecting
morphism $v\colon B_{i}\to B_{j}$ merges $m_{i}(x_{1})$ and $m_{i}(x_{2})$.
Then
$m_{j}\cdot\overline{v}(x_{1})=v\cdot m_{i}(x_{1})=v\cdot
m_{i}(x_{2})=m_{j}\cdot\overline{v}(x_{2}),$
whence $\overline{v}(x_{1})=\overline{v}(x_{2})$ because $m_{j}$ is monic.
2. (3b)
Denote by
$W\colon{\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})\to\mathbf{Set}^{\mathcal{S}}$
the forgetful functor mapping a Stone-topological $\Sigma$-structure to its
underlying sorted set. Moreover, letting
$\Sigma_{\mathsf{rel}}\subseteq\Sigma$ denote the set of all relation symbols
in $\Sigma$, we have the forgetful functors
$W_{r}\colon{\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})\to\mathbf{Set}\quad(r\in\Sigma_{\mathsf{rel}})$
assigning to every object $A$ the corresponding subset $r_{A}\subseteq
A^{s_{1}}\times\cdots\times A^{s_{n}}$. From the description of limits in
$\Sigma\text{-}\mathbf{Str}$ in Remark 2.6, it follows that the functors $W$
and $W_{r}$ ($r\in\Sigma_{\mathsf{rel}}$) collectively preserve and reflect
limits. That is, given a diagram $D$ in
${\mathbf{Stone}}(\Sigma\text{-}\mathbf{Str})$, a cone of $D$ is a limit
cone if and only if its image under $W$ is a limit cone of $W\cdot D$ and its
image under $W_{r}$ is a limit cone of $W_{r}\cdot D$ for all
$r\in\Sigma_{\mathsf{rel}}$.
3. (3c)
We are ready to prove that if $h\colon A\to B$ in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is a relation-reflecting
monomorphism, then $h\in\widehat{\mathcal{M}}$. We have a cofiltered diagram
$D$ in $\mathscr{D}_{\mathsf{f}}^{\to}$ with objects $h_{i}\colon A_{i}\to
B_{i}$ and a limit cone $(a_{i},b_{i})\colon h\to h_{i}$ ($i\in I$). Let
$h_{i}=m_{i}\cdot e_{i}$ be the image factorization in
$\Sigma\text{-}\mathbf{Str}$.
$\begin{array}{ccccc}
A & & \xrightarrow{\;h\;} & & B\\
{\scriptstyle a_{i}}\downarrow & & & & \downarrow{\scriptstyle b_{i}}\\
A_{i} & \xrightarrow{\;e_{i}\;} & \overline{A}_{i} & \xrightarrow{\;m_{i}\;} & B_{i}
\end{array}\qquad\text{(bottom row composes to }h_{i}\text{)}$
It is our goal to prove that $h=\lim_{i\in I}m_{i}$. More precisely: we have
$m_{i}$ in $\mathscr{D}_{\mathsf{f}}^{\to}$ and diagonal fill-in yields a
cofiltered diagram $\overline{D}$ of these objects in
$\mathscr{D}_{\mathsf{f}}^{\to}$. We will prove that $(e_{i}\cdot
a_{i},b_{i})\colon h\to m_{i}$ ($i\in I$) is a limit cone. By part (3)(3b)
above it suffices to show that the images of that cone under $W^{\to}$ and
$W_{r}^{\to}$ ($r\in\Sigma_{\mathsf{rel}}$) are limit cones.
For $W^{\to}$ just dualize (3)(3a): from the fact that $Wh=\lim Wh_{i}$ we
derive $Wh=\lim Wm_{i}$. We need to show that $W_{r}$ preserves the limit of
the diagram of all $m_{i}$’s. Given $r\colon s_{1},\ldots,s_{n}$ in
$\Sigma_{\mathsf{rel}}$, we know that $r_{A}$ consists of the $n$-tuples
$(x_{1},\ldots,x_{n})$ with $(a_{i}(x_{1}),\ldots,a_{i}(x_{n}))\in r_{A_{i}}$
for every $i\in I$ (see Remark 2.6). In particular, for
$(x_{1},\ldots,x_{n})\in r_{A}$ we have $(e_{i}\cdot
a_{i}(x_{1}),\ldots,e_{i}\cdot a_{i}(x_{n}))\in r_{\overline{A}_{i}}$.
Conversely, given $(x_{1},\ldots,x_{n})$ with the latter property, then
$(m_{i}\cdot e_{i}\cdot a_{i}(x_{1}),\ldots,m_{i}\cdot e_{i}\cdot
a_{i}(x_{n}))\in r_{B_{i}}$, i.e. $(b_{i}\cdot h(x_{1}),\ldots,b_{i}\cdot
h(x_{n}))\in r_{B_{i}}$ for all $i\in I$. Since $B=\lim B_{i}$, this implies
$(h(x_{1}),\ldots,h(x_{n}))\in r_{B}$, whence $(x_{1},\ldots,x_{n})\in r_{A}$
because $h$ is relation-reflecting.
4. (4)
It remains to prove that every morphism $m\in\widehat{\mathcal{M}}$ is a
relation-reflecting monomorphism. Let a cofiltered limit cone be given as
follows:
$\begin{array}{ccc}
A & \xrightarrow{\;m\;} & B\\
{\scriptstyle a_{i}}\downarrow & & \downarrow{\scriptstyle b_{i}}\\
A_{i} & \xrightarrow{\;m_{i}\;} & B_{i}
\end{array}\qquad\qquad(i\in I)$
where each $m_{i}$ lies in $\mathcal{M}_{\mathsf{f}}$, i.e. is a relation-
reflecting monomorphism in $\mathscr{D}_{\mathsf{f}}$. Then $m$ is monic:
given $x\neq y$ in $A$, there exists $i\in I$ with $a_{i}(x)\neq a_{i}(y)$
because the limit projections $a_{i}$ are collectively monic. Since $m_{i}$ is
monic, this implies $b_{i}\cdot m(x)\neq b_{i}\cdot m(y)$, whence $m(x)\neq
m(y)$.
Moreover, for every relation symbol $r\colon s_{1},\ldots,s_{n}$ in $\Sigma$
and $(x_{1},\ldots,x_{n})\in A^{s_{1}}\times\cdots\times A^{s_{n}}$, we have
that
$(x_{1},\ldots,x_{n})\in
r_{A}\quad\text{iff}\quad(m(x_{1}),\ldots,m(x_{n}))\in r_{B}.$
Indeed, the _only if_ direction follows from the fact that the maps
$m_{i}\cdot a_{i}$ preserve relations and the maps $b_{i}$ collectively
reflect them. For the _if_ direction, suppose that $(m(x_{1}),\ldots,
m(x_{n}))\in r_{B}$. Since for every $i\in I$ the morphism $b_{i}$ preserves
relations and $m_{i}$ reflects them, we get
$(a_{i}(x_{1}),\ldots,a_{i}(x_{n}))\in r_{A_{i}}$ for every $i$. Since the
maps $a_{i}$ collectively reflect relations, this implies
$(x_{1},\ldots,x_{n})\in r_{A}$.∎
We now introduce the crucial property of factorization systems needed for our
main result. Actually it only concerns the class $\mathcal{E}$ of quotients
and asserts it to be well-behaved with respect to cofiltered limits.
###### Definition 3.19.
The factorization system $(\mathcal{E},\mathcal{M})$ of $\mathscr{D}$ is
called _profinite_ if $\mathcal{E}$ is closed in $\mathscr{D}^{\to}$ under
cofiltered limits of finite quotients; that is, for every cofiltered diagram
$D$ in $\mathscr{D}^{\to}$ whose objects are elements of
$\mathcal{E}_{\mathsf{f}}$, the limit of $D$ in $\mathscr{D}^{\to}$ lies in
$\mathcal{E}$.
###### Example 3.20.
For every full subcategory $\mathscr{D}\subseteq\Sigma\text{-}\mathbf{Str}$
closed under limits and subobjects, the factorization system of surjective
morphisms and relation-reflecting injective morphisms is profinite. This
follows from 2.17 and the fact that limits in $\mathscr{D}$ are formed at the
level of underlying sets (see Remark 2.6).
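To make Definition 3.19 concrete in the simplest setting, suppose for the moment that $\Sigma$ has no relation symbols (an assumption made purely for this illustration, so that $\mathcal{M}$ consists of all injective morphisms). Then profiniteness boils down to the classical fact that a cofiltered limit $e\colon A\to B$ in $\mathbf{Set}^{\to}$ of surjections $e_{i}\colon A_{i}\twoheadrightarrow B_{i}$ between finite sets is again surjective: writing $b_{i}\colon B\to B_{i}$ for the limit projections, every $y\in B$ has fibre

$e^{-1}(y)\;=\;\lim_{i\in I}e_{i}^{-1}(b_{i}(y)),$

a cofiltered limit of nonempty finite sets, which is nonempty by a standard compactness argument.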
###### Proposition 3.21.
If the factorization system $(\mathcal{E},\mathcal{M})$ of $\mathscr{D}$ is
profinite, the following holds:
1. (1)
The forgetful functor
$V\colon\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\to\mathscr{D}$ is
faithful and satisfies $V(\widehat{\mathcal{E}})\subseteq\mathcal{E}$.
2. (2)
For every $\mathcal{E}$-projective object $X\in\mathscr{D}$, the object
$\widehat{X}\in\widehat{\mathscr{D}}$ is $\widehat{\mathcal{E}}$-projective.
3. (3)
Every object of $\mathscr{D}_{\mathsf{f}}$ is an
$\widehat{\mathcal{E}}$-quotient of some $\widehat{\mathcal{E}}$-projective
object in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$.
###### Proof.
1. (1)
$V(\widehat{\mathcal{E}})\subseteq\mathcal{E}$ is clear: given
$e\in\widehat{\mathcal{E}}$ expressed as a cofiltered limit of finite
quotients $e_{i}$, $i\in I$, in
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\to}$, then since $V$ is
cofinitary, we see that $Ve$ is a cofiltered limit of $Ve_{i}=e_{i}$ in
$\mathscr{D}^{\to}$, thus $Ve\in\mathcal{E}$ by the definition of a profinite
factorization system.
To prove that $V$ is faithful, recall that a right adjoint is faithful if and
only if each component of its counit is epic. Thus, it suffices to prove that
$\varepsilon_{D}\in\widehat{\mathcal{E}}$ (and use that by 3.17 every
$\widehat{\mathcal{E}}$-morphism is epic). The triangles defining
$\varepsilon_{D}$ in 3.14(4) can be restricted to those with
$a\in\widehat{\mathcal{E}}$. Indeed, in the slice category
$D/\mathscr{D}_{\mathsf{f}}$ all objects $a\colon D\to A$ in
$\widehat{\mathcal{E}}$ form an initial subcategory.
Now given such a triangle with $a\in\widehat{\mathcal{E}}$ we know that
$Va\in\mathcal{E}$. Thus all those objects $A$ form a cofiltered diagram with
connecting morphisms in $\mathcal{E}$. Moreover,
$\widehat{Va}\in\widehat{\mathcal{E}}$ by Remark 3.16(2). This implies
$\varepsilon_{D}\in\widehat{\mathcal{E}}$ by Remark 3.16(4).
2. (2)
Let $X$ be an $\mathcal{E}$-projective object. To show that $\widehat{X}$ is
$\widehat{\mathcal{E}}$-projective, suppose that a quotient $e\colon
A\twoheadrightarrow B$ in $\widehat{\mathcal{E}}$ and a morphism
$f\colon\widehat{X}\to B$ are given. Since $\widehat{(\mathord{-})}$ is left
adjoint to $V$ and $V(\widehat{\mathcal{E}})\subseteq\mathcal{E}$, the
morphism $f$ has an adjoint transpose $f^{*}\colon X\to VB$ that factorizes
through $VA$ via $g^{*}$ for some $g\colon\widehat{X}\to A$. Then $e\cdot
g=f$, which proves that $\widehat{X}$ is projective.
$\begin{array}{ccc}
X & \xrightarrow{\;g^{*}\;} & VA\\
& {\scriptstyle f^{*}}\searrow & \downarrow{\scriptstyle Ve}\\
& & VB
\end{array}\qquad\text{iff}\qquad\begin{array}{ccc}
\widehat{X} & \xrightarrow{\;g\;} & A\\
& {\scriptstyle f}\searrow & \downarrow{\scriptstyle e}\\
& & B
\end{array}$
3. (3)
Given $A\in\mathscr{D}_{\mathsf{f}}$, by 3.1 there exists an
$\mathcal{E}$-projective object $X\in\mathscr{D}$ and a quotient $e\colon
X\twoheadrightarrow A$. The limit projection
$\widehat{e}\colon{\widehat{X}}\twoheadrightarrow A$ lies in
$\widehat{\mathcal{E}}$ by Remark 3.16(2), and item (2) above shows that
$\widehat{X}$ is $\widehat{\mathcal{E}}$-projective.∎
We are ready to prove the following general form of the Reiterman Theorem:
given the factorization system $(\widehat{\mathcal{E}},\widehat{\mathcal{M}})$
on the pro-completion of $\mathscr{D}_{\mathsf{f}}$, we have the concept of an
equation in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$. We call it a
profinite equation for $\mathscr{D}$, and prove that pseudovarieties in
$\mathscr{D}$ are precisely the classes in $\mathscr{D}_{\mathsf{f}}$ that can
be presented by profinite equations.
###### Definition 3.22.
A _profinite equation_ is an equation in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, i.e. a morphism $e\colon
X\twoheadrightarrow E$ in $\widehat{\mathcal{E}}$ whose domain $X$ is
$\widehat{\mathcal{E}}$-projective. It is _satisfied_ by a finite object $D$
provided that $D$ is injective w.r.t. $e$.
###### Theorem 3.23 (Generalized Reiterman Theorem).
Given a profinite factorization system on $\mathscr{D}$, a class of finite
objects is a pseudovariety iff it can be presented by profinite equations.
###### Proof.
Every class $\mathcal{V}\subseteq\mathscr{D}_{\mathsf{f}}$ presented by
profinite equations is a pseudovariety: this is proved precisely as (1) in
3.8.
Conversely, every pseudovariety can be presented by profinite equations.
Indeed, following the same proposition, it suffices to construct, for every
pseudoequation $e_{i}\colon X\twoheadrightarrow E_{i}$ ($i\in I$), a profinite
equation satisfied by the same finite objects.
For every $i\in I$, we have the corresponding limit projection
$\widehat{e}_{i}\colon\widehat{X}\twoheadrightarrow E_{i}\quad\text{with
$e_{i}=V\widehat{e}_{i}\cdot\eta_{X}$}.$
Let $R$ be the diagram in $\mathscr{D}_{\mathsf{f}}$ of objects $E_{i}$. A connecting morphism $k\colon E_{i}\to E_{j}$ exists iff $e_{j}\leq e_{i}$, and is given by the factorization
$\begin{array}{ccc}
X & \xrightarrow{\;e_{i}\;} & E_{i}\\
& {\scriptstyle e_{j}}\searrow & \downarrow{\scriptstyle k}\\
& & E_{j}
\end{array}$
Since the pseudoequation is closed under finite joins,
$R$ is cofiltered. Form the limit of $R$ in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ with the limit cone
$p_{i}\colon E\twoheadrightarrow E_{i}\quad(i\in I).$
The morphisms $\widehat{e}_{i}$ above form a cone of $R$: given $e_{j}=k\cdot
e_{i}$, then $V\widehat{e}_{j}\cdot\eta_{X}=V(\widehat{k\cdot
e_{i}})\cdot\eta_{X}=k\cdot V\widehat{e}_{i}\cdot\eta_{X}$ implies
$\widehat{e}_{j}=k\cdot\widehat{e}_{i}$. Here we apply the universal property
of $\eta_{X}$: the morphism $\widehat{e}_{j}$ is uniquely determined by
$V\widehat{e}_{j}\cdot\eta_{X}$. Thus we have a unique morphism
$e\colon\widehat{X}\twoheadrightarrow E$ making the following triangles
commutative:
$\begin{array}{ccc}
\widehat{X} & \xrightarrow{\;e\;} & E\\
& {\scriptstyle\widehat{e}_{i}}\searrow & \downarrow{\scriptstyle p_{i}}\\
& & E_{i}
\end{array}\qquad(i\in I)$
The connecting morphisms of $R$ lie in $\mathcal{E}$ (since $k\cdot
e_{i}\in\mathcal{E}$ implies $k\in\mathcal{E}$). Thus each $\widehat{e}_{i}$
lies in $\widehat{\mathcal{E}}$ since $e_{i}\in\mathcal{E}$, see Remark
3.16(3). Therefore, $e\in\widehat{\mathcal{E}}$ by Remark 3.16(4). Since
$\widehat{X}$ is $\widehat{\mathcal{E}}$-projective by 3.21, we have thus
obtained a profinite equation $e\colon\widehat{X}\twoheadrightarrow E$.
We are going to prove that a finite object $A$ satisfies the pseudoequation
$(e_{i})_{i\in I}$ iff it satisfies the profinite equation $e$.
1. (1)
Let $A$ satisfy the pseudoequation $(e_{i})$. For every morphism
$f\colon\widehat{X}\to A$ we present a factorization through $e$. The morphism
$Vf\cdot\eta_{X}$ factorizes through some $e_{j}$, $j\in I$:
$\begin{array}{ccc}
X & \xrightarrow{\;\eta_{X}\;} & V\widehat{X}\\
{\scriptstyle e_{j}}\downarrow & & \downarrow{\scriptstyle Vf}\\
E_{j} & \xrightarrow{\;g\;} & A
\end{array}$
Since $e_{j}=V\widehat{e}_{j}\cdot\eta_{X}$, we get
$V(g\cdot\widehat{e}_{j})\cdot\eta_{X}=Vf\cdot\eta_{X}$. By the universal
property of $\eta_{X}$ this implies
$g\cdot\widehat{e}_{j}=f.$
The desired factorization is $g\cdot p_{j}$:
$\begin{array}{ccc}
\widehat{X} & \xrightarrow{\;e\;} & E\\
{\scriptstyle f}\downarrow & {\scriptstyle\widehat{e}_{j}}\searrow & \downarrow{\scriptstyle p_{j}}\\
A & \xleftarrow{\;g\;} & E_{j}
\end{array}$
2. (2)
Let $A$ satisfy the profinite equation $e$. For every morphism $h\colon X\to
A$ we find a factorization through some $e_{j}$. The morphism
$\widehat{h}\colon\widehat{X}\to A$ factorizes through $e$:
$\widehat{h}=u\cdot e\quad\text{with $u\colon E\to A$}.$
The codomain of $u$ is finite; thus, $u$ factorizes through one of the
limit projections of $E$, i.e.
$u=v\cdot p_{j}\quad\text{with $j\in I$ and $v\colon E_{j}\to A$}.$
This gives the following commutative diagram:
(3.2)
$\begin{array}{ccc}
\widehat{X} & \xrightarrow{\;\widehat{h}\;} & A\\
{\scriptstyle e}\downarrow & {\scriptstyle u}\nearrow & \uparrow{\scriptstyle v}\\
E & \xrightarrow{\;p_{j}\;} & E_{j}
\end{array}$
That $v$ is the desired factorization of $h$ is now shown using the following
diagram:
$\begin{array}{ccccc}
X & \xrightarrow{\;\eta_{X}\;} & V\widehat{X} & \xrightarrow{\;Ve\;} & VE\\
& {\scriptstyle h}\searrow & \downarrow{\scriptstyle V\widehat{h}} & & \downarrow{\scriptstyle Vp_{j}}\\
& & A & \xleftarrow{\;v\;} & E_{j}
\end{array}\qquad\text{(together with the outer arrow }e_{j}\colon X\to E_{j}\text{)}$
Indeed, the upper part commutes since
$e_{j}=V\widehat{e}_{j}\cdot\eta_{X}=Vp_{j}\cdot Ve\cdot\eta_{X},$
the lower left-hand part commutes since $h=V\widehat{h}\cdot\eta_{X}$, and for
the remaining lower right-hand part apply $V$ to (3.2) and use that $Vv=v$
since $v$ lies in $\mathscr{D}_{\mathsf{f}}$.∎
## 4 Profinite Monad
In the present section we establish the main result of our paper: a
generalization of Reiterman’s theorem from algebras over a signature to
algebras for a given monad $\mathbf{T}$ in a category $\mathscr{D}$ (4.20). To
this end, we introduce and investigate the _profinite monad_
${\widehat{\mathbf{T}}}$ associated to the monad $\mathbf{T}$. It provides an
abstract perspective on the formation of spaces of profinite words or
profinite terms and serves as key technical tool for our categorical approach
to profinite algebras.
###### Assumption 4.1.
Throughout this section, $\mathscr{D}$ is a category satisfying 3.1, and
$\mathbf{T}=(T,\mu,\eta)$ is a monad on $\mathscr{D}$ preserving quotients,
i.e. $T(\mathcal{E})\subseteq\mathcal{E}$.
We denote by $\mathscr{D}^{\mathbf{T}}$ the category of $\mathbf{T}$-algebras
and $\mathbf{T}$-homomorphisms, and by $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$
the full subcategory of all _finite algebras_, i.e. $\mathbf{T}$-algebras
whose underlying object lies in $\mathscr{D}_{\mathsf{f}}$.
###### Remark 4.2.
The category $\mathscr{D}^{\mathbf{T}}$ satisfies 3.1. More precisely:
1. (1)
Since $\mathbf{T}$ preserves quotients, the factorization system of
$\mathscr{D}$ lifts to $\mathscr{D}^{\mathbf{T}}$: every homomorphism in
$\mathscr{D}^{\mathbf{T}}$ factorizes as a homomorphism in $\mathcal{E}$
followed by one in $\mathcal{M}$. When speaking about _quotient algebras_ and
_subalgebras_ of $\mathbf{T}$-algebras, we refer to this lifted factorization
system $(\mathcal{E}^{\mathbf{T}},\mathcal{M}^{\mathbf{T}})$.
2. (2)
Since $\mathscr{D}$ is complete, so is $\mathscr{D}^{\mathbf{T}}$ with limits
created by the forgetful functor into $\mathscr{D}$.
3. (3)
The category $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ is closed under finite
products and subalgebras, since $\mathscr{D}_{\mathsf{f}}$ is closed under
finite products and subobjects.
4. (4)
For every $\mathcal{E}$-projective object $X$, the free algebra $(TX,\mu_{X})$
is $\mathcal{E}^{\mathbf{T}}$-projective. Indeed, given
$\mathbf{T}$-homomorphisms $e\colon(A,\alpha)\twoheadrightarrow(B,\beta)$ and
$h\colon(TX,\mu_{X})\to(B,\beta)$ with $e\in\mathcal{E}$, then
$h\cdot\eta_{X}\colon X\to B$ factorizes through $e$ in $\mathscr{D}$, i.e.
$h\cdot\eta_{X}=e\cdot k_{0}$ for some $k_{0}$. Then the
$\mathbf{T}$-homomorphism $k\colon(T{X},\mu_{X})\to(A,\alpha)$ extending
$k_{0}$ fulfils $e\cdot k\cdot\eta_{{X}}=h\cdot\eta_{{X}}$, hence, $e\cdot
k=h$ by the universal property of $\eta_{{X}}$.
$\begin{array}{ccc}
X & \xrightarrow{\;\eta_{X}\;} & (TX,\mu_{X})\\
{\scriptstyle k_{0}}\downarrow & {\scriptstyle k}\swarrow & \downarrow{\scriptstyle h}\\
(A,\alpha) & \xrightarrow{\;e\;} & (B,\beta)
\end{array}$
It follows that every finite algebra is a quotient of an
$\mathcal{E}^{\mathbf{T}}$-projective $\mathbf{T}$-algebra.
###### Notation 4.3.
The forgetful functor of $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ into
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is denoted by
$K\colon\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
For example, if $\mathscr{D}=\Sigma\text{-}\mathbf{Str}$, then $K$ assigns to
every finite $\mathbf{T}$-algebra its underlying $\Sigma$-structure, equipped
with the discrete topology.
###### Remark 4.4.
For any functor $K\colon\mathscr{A}\to\mathscr{C}$, the right Kan extension
$R=\mathsf{Ran}_{K}K\colon\mathscr{C}\to\mathscr{C}$
can be naturally equipped with the structure of a monad. Its unit and
multiplication are given by
$\widehat{\eta}=(\mathsf{id}_{K})^{\dagger}\colon\mathsf{Id}\to
R\qquad\text{and}\qquad\widehat{\mu}=(\varepsilon\cdot
R\varepsilon)^{\dagger}\colon RR\to R,$
where $\varepsilon\colon RK\to K$ denotes the universal natural transformation
and $(\mathord{-})^{\dagger}$ is defined as in Remark 3.12. The monad
$(R,\widehat{\eta},\widehat{\mu})$ is called the _codensity monad_ of $K$, see
e.g. Linton (Linton, 1969).
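A classical instance, going back to Kennison and Gildenhuys and recalled here only as an aside for intuition, is the codensity monad of the inclusion $K\colon\mathbf{Set}_{\mathsf{f}}\hookrightarrow\mathbf{Set}$ of finite sets: the limit formula of Remark 3.12 yields

$RX\;=\;\lim_{(a\colon X\to A)\,\in\,X/K}A,$

and this limit turns out to be the set of ultrafilters on $X$, so the codensity monad of $\mathbf{Set}_{\mathsf{f}}\hookrightarrow\mathbf{Set}$ is the ultrafilter monad.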
###### Definition 4.5.
The _profinite monad_
${\widehat{\mathbf{T}}}=(\widehat{T},\widehat{\mu},\widehat{\eta})$
of the monad $\mathbf{T}$ is the codensity monad of the forgetful functor
$K\colon\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$.
###### Construction 4.6.
Since $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is complete and
$\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ is small, the limit formula for right
Kan extensions (see Remark 3.12) yields the following concrete description of
the profinite monad:
1. (1)
To define the action of $\widehat{T}$ on an object $X$, form the coslice
category $X/K$ of all morphisms $a\colon X\to K(A,\alpha)$ with
$(A,\alpha)\in\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$. The projection functor
$Q_{X}\colon X/K\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, mapping $a$
to $A$, has a limit
$\widehat{T}X=\lim Q_{X}.$
The limit cone is denoted as follows:
$\frac{X\xrightarrow{a}K(A,\alpha)}{\widehat{T}X\xrightarrow{\alpha_{a}^{+}}A}$
For every finite $\mathbf{T}$-algebra $(A,\alpha)$, we write
$\alpha^{+}\colon\widehat{T}A\to A$
instead of $\alpha^{+}_{\mathsf{id}_{A}}$.
2. (2)
The action of $\widehat{T}$ on morphisms $f\colon Y\to X$ is given by the
following commutative triangles
$\begin{array}{ccc}
\widehat{T}Y & \xrightarrow{\;\widehat{T}f\;} & \widehat{T}X\\
& {\scriptstyle\alpha_{af}^{+}}\searrow & \downarrow{\scriptstyle\alpha_{a}^{+}}\\
& & A
\end{array}$
for all $a\colon X\to K(A,\alpha)$.
3. (3)
The unit $\widehat{\eta}\colon\mathsf{Id}\to\widehat{T}$ is given by the
following commutative triangles
$\begin{array}{ccc}
X & \xrightarrow{\;\widehat{\eta}_{X}\;} & \widehat{T}X\\
& {\scriptstyle a}\searrow & \downarrow{\scriptstyle\alpha_{a}^{+}}\\
& & A
\end{array}$
for all $a\colon X\to K(A,\alpha)$, and the multiplication by the following
commutative squares
$\begin{array}{ccc}
\widehat{T}\widehat{T}X & \xrightarrow{\;\widehat{\mu}_{X}\;} & \widehat{T}X\\
{\scriptstyle\widehat{T}\alpha_{a}^{+}}\downarrow & & \downarrow{\scriptstyle\alpha_{a}^{+}}\\
\widehat{T}A & \xrightarrow{\;\alpha^{+}\;} & A
\end{array}$
for all $a\colon X\to K(A,\alpha)$.
###### Remark 4.7.
A concept related to the profinite monad was studied by Bojańczyk (Bojańczyk,
2015) who associates to every monad $\mathbf{T}$ on $\mathbf{Set}$ a monad
$\overline{\mathbf{T}}$ on $\mathbf{Set}$ (rather than on
$\mathop{\mathsf{Pro}}\mathbf{Set}_{\mathsf{f}}={\mathbf{Stone}}$ as in our
setting). Specifically, $\overline{\mathbf{T}}$ is the monad induced by the
composite right adjoint
${\mathbf{Stone}}^{{\widehat{\mathbf{T}}}}\to{\mathbf{Stone}}\xrightarrow{V}\mathbf{Set}$.
Its construction also appears in the work of Kennison and Gildenhuys (Kennison
and Gildenhuys, 1971) who investigated codensity monads for
$\mathbf{Set}$-valued functors and their connection with profinite algebras.
###### Remark 4.8.
1. (1)
Every finite $\mathbf{T}$-algebra $(A,\alpha)$ yields a finite
${\widehat{\mathbf{T}}}$-algebra $(A,\alpha^{+})$. Indeed, the unit law and
the associative law for $\alpha^{+}$ follow from 4.6(3) with $X=A$ and
$a=\mathsf{id}_{A}$.
2. (2)
The monad $\widehat{T}$ is cofinitary. To see this, let $x_{i}\colon X\to
X_{i}$ ($i\in I$) be a cofiltered limit cone in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$. For each object of $X/K$
given by an algebra $(A,\alpha)$ and morphism $a\colon X\to A$, due to
$A\in\mathscr{D}_{\mathsf{f}}$ there exists $i\in I$ and a morphism $b\colon
X_{i}\to A$ with $a=b\cdot x_{i}$. From the definition of $\widehat{T}$ on
morphisms we get
$\alpha_{a}^{+}=(\,\widehat{T}X\xrightarrow{\;\widehat{T}x_{i}\;}\widehat{T}X_{i}\xrightarrow{\;\alpha_{b}^{+}\;}A\,).$
To prove that $\widehat{T}x_{i}\colon\widehat{T}X\to\widehat{T}X_{i}$ ($i\in
I$) forms a limit cone, suppose that a cone $c_{i}\colon
C\to\widehat{T}X_{i}$ ($i\in I$) is given. It is easy to verify that the
cone over $Q_{X}$ (see 4.6(1)) assigning to the above $a$ the morphism
$\alpha_{b}^{+}\cdot c_{i}$ is well-defined, i.e. independent of the choice of
$i$ and $b$ and compatible with $Q_{X}$. The unique morphism $c\colon
C\to\widehat{T}X$ factorizing that cone fulfils $c_{i}=\widehat{T}x_{i}\cdot
c$ because this equation holds when postcomposed with the members of the limit
cone of $Q_{X_{i}}$. This proves the claim.
3. (3)
The free ${\widehat{\mathbf{T}}}$-algebra $(\widehat{T}X,\widehat{\mu}_{X})$
on an object $X$ of $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is a
cofiltered limit of finite ${\widehat{\mathbf{T}}}$-algebras. In fact, for the
squares in 4.6(3) defining $\widehat{\mu}_{X}$ we have the limit cone
$(\alpha_{a}^{+})$ in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, and
since all $\alpha_{a}^{+}$ are homomorphisms of
${\widehat{\mathbf{T}}}$-algebras and the forgetful functor from
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\widehat{\mathbf{T}}}$ to
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ reflects limits, it follows
that $(\widehat{T}X,\widehat{\mu}_{X})$ is a limit of the algebras
$(A,\alpha^{+})$.
4. (4)
For “free” objects of $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, i.e.
those of the form $\hat{X}$ for $X\in\mathscr{D}$ (cf. 3.13), the definition
of $\widehat{T}{\widehat{X}}$ can be stated in a more convenient form:
$\widehat{T}{\widehat{X}}$ is the cofiltered limit of all finite quotient
algebras of the free $\mathbf{T}$-algebra $(TX,\mu_{X})$. More precisely, let
$(TX,\mu_{X})\mathord{\mathrel{\rotatebox[origin={c}]{90.0}{$\twoheadleftarrow$}}}\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$
denote the full subcategory of the slice category
$(TX,\mu_{X})/\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ on all finite quotient
algebras of $(TX,\mu_{X})$, and consider the diagram
$D_{X}\colon(TX,\mu_{X})\mathord{\mathrel{\rotatebox[origin={c}]{90.0}{$\twoheadleftarrow$}}}\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$
that maps $e\colon(TX,\mu_{X})\twoheadrightarrow(A,\alpha)$ to $A$. Then we
have the following
###### Lemma 4.9.
For every object $X$ of $\mathscr{D}$, one has $\widehat{T}{\widehat{X}}=\lim
D_{X}$.
###### Proof.
The diagram $D_{X}$ is the composite
$(TX,\mu_{X})\mathord{\mathrel{\rotatebox[origin={c}]{90.0}{$\twoheadleftarrow$}}}\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}\rightarrowtail(TX,\mu_{X})/\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}\cong\widehat{X}/K\xrightarrow{Q_{\hat{X}}}\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}},$
where the isomorphism
$(TX,\mu_{X})/\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}\cong\widehat{X}/K$ maps
$e\colon(TX,\mu_{X})\to(A,\alpha)$ to
$\widehat{e\cdot\eta_{X}}\colon\widehat{X}\to A$. Since every
$\mathbf{T}$-homomorphism has an
$(\mathcal{E}^{\mathbf{T}},\mathcal{M}^{\mathbf{T}})$-factorization,
$(TX,\mu_{X})\mathord{\mathrel{\rotatebox[origin={c}]{90.0}{$\twoheadleftarrow$}}}\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$
is an initial subcategory of
$(TX,\mu_{X})/\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$. Thus,
$\widehat{T}\widehat{X}=\lim Q_{\hat{X}}=\lim D_{X}$. ∎
###### Notation 4.10.
The above proof gives, for every object $X\in\mathscr{D}$, the limit cone
$\alpha_{\widehat{e\cdot\eta_{X}}}^{+}\colon\widehat{T}{\widehat{X}}\twoheadrightarrow
A$ with $e\colon(TX,\mu_{X})\twoheadrightarrow(A,\alpha)$ ranging over
$(TX,\mu_{X})\mathord{\mathrel{\rotatebox[origin={c}]{90.0}{$\twoheadleftarrow$}}}\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$.
In the following, we abuse notation and simply write $\alpha_{e}^{+}$ for
$\alpha_{\widehat{e\cdot\eta_{X}}}^{+}$.
###### Example 4.11.
Given the monoid monad $TX=X^{*}$ on $\mathscr{D}=\mathbf{Set}$, the
profinite monad is the monad of free monoids in ${\mathbf{Stone}}$:
$\widehat{T}X=\text{free monoid in ${\mathbf{Stone}}$ on the space $X$}.$
For a finite set $X$, the elements of $\widehat{T}X$ are called the _profinite
words_ over $X$. A profinite word is a compatible choice of a congruence class
of $X^{*}/{\sim}$ for every congruence $\sim$ of finite rank. Compatibility
means that given another congruence $\approx$ containing $\sim$, the class
chosen for $\approx$ contains the above class as a subset.
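To make the compatibility condition concrete, here is a small Python sketch (purely illustrative, not from the text): over $X=\{a\}$, the congruences "same length modulo $n$" on $X^{*}$ have finite rank with quotients the cyclic monoids $\mathbb{Z}/n$, the congruence for modulus $6$ refines those for moduli $2$ and $3$, and the family of classes chosen by an ordinary word is compatible with the projections.

```python
# Illustrative sketch: probing a word through finite-rank congruences.
# On X* with X = {'a'}, the congruence "same length mod n" has finite rank;
# its quotient is the cyclic monoid Z/n. A profinite word is a compatible
# choice of one class per such congruence; ordinary words give one example.

def cls(w: str, n: int) -> int:
    """Congruence class of the word w under 'same length mod n'."""
    return len(w) % n

w = "aaaaa"
# Compatibility: ~_6 refines ~_2 and ~_3, and the class chosen for the
# coarser congruence is the image of the class for the finer one under
# the projections Z/6 ->> Z/2 and Z/6 ->> Z/3.
assert cls(w, 2) == cls(w, 6) % 2
assert cls(w, 3) == cls(w, 6) % 3
```

A genuinely profinite word that is not an ordinary word arises as a compatible family not induced by any single $w$, e.g. the limit of the words $a^{n!}$.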
###### Lemma 4.12.
The monad $\widehat{T}$ preserves quotients, i.e.
$\widehat{T}(\widehat{\mathcal{E}})\subseteq\widehat{\mathcal{E}}$.
###### Proof.
Suppose that $e\colon X\to Y$ is a morphism in $\widehat{\mathcal{E}}$. This
means that it can be expressed as a cofiltered limit in
$\widehat{\mathscr{D}}^{\to}$ of morphisms $e_{i}\in\mathcal{E}_{\mathsf{f}}$
($i\in I$):
$q_{i}\cdot e=e_{i}\cdot p_{i},$ where $p_{i}\colon X\to X_{i}$ and $q_{i}\colon Y\to Y_{i}$ denote the limit projections.
Since $\widehat{T}$ is cofinitary by Remark 4.8(2), it follows that
$\widehat{T}e$ is the limit of $\widehat{T}e_{i}=Te_{i}$ ($i\in I$) in
$\widehat{\mathscr{D}}^{\to}$. Since $T$ preserves $\mathcal{E}$, we have
$Te_{i}\in\mathcal{E}$ for all $i\in I$, which proves that
$\widehat{T}e\in\widehat{\mathcal{E}}$. ∎
It follows that the factorization system
$(\widehat{\mathcal{E}},\widehat{\mathcal{M}})$ of
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ lifts to the category
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{{\widehat{\mathbf{T}}}}$.
Moreover, this category with the choice
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{{\widehat{\mathbf{T}}}}_{\mathsf{f}}=\text{all
${\widehat{\mathbf{T}}}$-algebras $(A,\alpha)$ with
$A\in\mathscr{D}_{\mathsf{f}}$}$
satisfies all the requirements of 3.1; this is analogous to the corresponding
observations for $\mathscr{D}^{\mathbf{T}}$ in Remark 4.2. Note that we are
ultimately interested in finite $\mathbf{T}$-algebras, not finite
${\widehat{\mathbf{T}}}$-algebras. However, there is no clash: we shall prove
in 4.16 that they coincide.
###### Notation 4.13.
Recall from 4.6 the definition of $\widehat{T}X$ as a cofiltered limit
$\alpha_{a}^{+}\colon\widehat{T}X\to A$ of $Q_{X}\colon
X/K\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$. Since the functor
$V\colon\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\to\mathscr{D}$ (see
3.10) preserves that limit, and since all morphisms
$TVX\xrightarrow{TVa}TA\xrightarrow{\alpha}A$
form a cone over $V\cdot Q_{X}$, there is a unique morphism $\varphi_{X}$ such
that the squares below commute for every finite $\mathbf{T}$-algebra $(A,\alpha)$:
(4.1)
$V\alpha_{a}^{+}\cdot\varphi_{X}=\alpha\cdot TVa\colon TVX\to A$
###### Example 4.14.
For the monoid monad $TX=X^{*}$ on $\mathbf{Set}$, the map
$\varphi_{X}\colon(VX)^{*}\to V\widehat{T}X$
is the embedding of finite words into profinite words. More precisely, by
representing elements of $\widehat{T}X$ as compatible choices of congruence
classes (see 4.11), $\varphi_{X}$ maps $w\in X^{*}$ to the compatible family
of all congruence classes $[w]_{\sim}$ of $w$, where $\sim$ ranges over all
congruences on $X^{*}$ of finite rank.
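This description of $\varphi_{X}$ can be sketched in Python (illustrative only; the names `phi` and `congruences` are ours, and each finite-rank congruence is represented by its canonical quotient map, here "length modulo $n$"):

```python
# Illustrative sketch of phi_X: a finite word is sent to the compatible
# family of its congruence classes, one class per finite-rank congruence.

def phi(w, congruences):
    """Family of congruence classes of the word w, indexed by congruence."""
    return {name: q(w) for name, q in congruences.items()}

# "length mod n" congruences for a few moduli; the default argument n=n
# pins down the modulus in each closure.
congruences = {f"len mod {n}": (lambda w, n=n: len(w) % n) for n in (2, 3, 6)}

family = phi("abba", congruences)
# The resulting family is compatible with coarsening, e.g. Z/6 ->> Z/2:
assert family["len mod 2"] == family["len mod 6"] % 2
```

The image of $\varphi_{X}$ consists exactly of such families induced by a single word; general elements of $\widehat{T}X$ need not be of this form.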
We now prove that the morphisms $\varphi_{X}$ are the components of a monad
morphism from $\mathbf{T}$ to ${\widehat{\mathbf{T}}}$ in the sense of Street
(Street, 1972).
###### Lemma 4.15.
The morphisms $\varphi_{X}$ form a natural transformation
$\varphi\colon TV\to V\widehat{T}$
such that the following diagrams commute:
$\varphi\cdot\eta V=V\widehat{\eta}\qquad\text{and}\qquad V\widehat{\mu}\cdot\varphi\widehat{T}\cdot T\varphi=\varphi\cdot\mu V.$
###### Proof.
1. (1)
We first prove that $\varphi$ is natural. Given a morphism $f\colon X\to Y$ in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, consider an arbitrary object
$a\colon Y\to K(A,\alpha)$ of $Q_{Y}$ (see 4.6(1)) and recall that by the
definition of $\widehat{T}$ on the morphism $f$ we have
$\alpha_{a}^{+}\cdot\widehat{T}f=\alpha_{a\cdot f}^{+}.$
Now we compute, using (4.1) twice:
$V\alpha_{a}^{+}\cdot V\widehat{T}f\cdot\varphi_{X}=V\alpha_{a\cdot f}^{+}\cdot\varphi_{X}=\alpha\cdot TV(a\cdot f)=\alpha\cdot TVa\cdot TVf=V\alpha_{a}^{+}\cdot\varphi_{Y}\cdot TVf.$
Since the morphisms $V\alpha_{a}^{+}$ form a collectively monic cone ($V$
being cofinitary), it follows that
$V\widehat{T}f\cdot\varphi_{X}=\varphi_{Y}\cdot TVf$, i.e. $\varphi$ is
natural.
2. (2)
To prove $V\widehat{\eta}_{X}=\varphi_{X}\cdot\eta_{VX}$, use the collectively
monic cone $V\alpha_{a}^{+}\colon V\widehat{T}X\to VA$, where $a\colon X\to
K(A,\alpha)$ ranges over $Q_{X}$. Using the triangle in 4.6(3), the
naturality of $\eta$ and the unit law of $\alpha$, we compute
$V\alpha_{a}^{+}\cdot\varphi_{X}\cdot\eta_{VX}=\alpha\cdot TVa\cdot\eta_{VX}=\alpha\cdot\eta_{A}\cdot Va=Va=V\alpha_{a}^{+}\cdot V\widehat{\eta}_{X},$
which yields the desired equation.
3. (3)
To prove $V\widehat{\mu}_{X}\cdot\varphi_{\widehat{T}X}\cdot
T{\varphi_{X}}=\varphi_{X}\cdot\mu_{VX}$, we again use the collectively monic
cone $V\alpha_{a}^{+}$. Using the square in 4.6(3), the naturality of
$\varphi$ and $\mu$, the squares (4.1), and the associative law of $\alpha$,
we compute
$V\alpha_{a}^{+}\cdot V\widehat{\mu}_{X}\cdot\varphi_{\widehat{T}X}\cdot T\varphi_{X}=V\alpha^{+}\cdot\varphi_{A}\cdot T(V\alpha_{a}^{+}\cdot\varphi_{X})=\alpha\cdot T\alpha\cdot TTVa=\alpha\cdot TVa\cdot\mu_{VX}=V\alpha_{a}^{+}\cdot\varphi_{X}\cdot\mu_{VX},$
which yields the desired equation.∎
###### Proposition 4.16.
The categories of finite $\mathbf{T}$-algebras and finite
${\widehat{\mathbf{T}}}$-algebras are isomorphic: the functor taking
$(A,\alpha)$ to $(A,\alpha^{+})$ and being the identity map on morphisms is an
isomorphism.
###### Proof.
1. (1)
We first prove that, given finite $\mathbf{T}$-algebras $(A,\alpha)$ and
$(B,\beta)$, a morphism $h\colon A\to B$ is a homomorphism for $\mathbf{T}$
iff $h\colon(A,\alpha^{+})\to(B,\beta^{+})$ is a homomorphism for
${\widehat{\mathbf{T}}}$. If the latter holds, then the naturality of
$\varphi$ yields
$h\cdot\alpha=h\cdot V\alpha^{+}\cdot\varphi_{A}=V\beta^{+}\cdot V\widehat{T}h\cdot\varphi_{A}=V\beta^{+}\cdot\varphi_{B}\cdot TVh=\beta\cdot Th,$
using that $h$ is a ${\widehat{\mathbf{T}}}$-homomorphism, that
$TVh=Th$ since $h$ lies in $\mathscr{D}_{\mathsf{f}}$, and that, by (4.1),
$V\alpha^{+}\cdot\varphi_{A}=\alpha$ and $V\beta^{+}\cdot\varphi_{B}=\beta$.
Thus $h$ is a homomorphism for $\mathbf{T}$.
Conversely, if $h$ is a homomorphism for $\mathbf{T}$, then
$Kh\colon K(A,\alpha)\to K(B,\beta)$ is a connecting morphism of the diagram
$Q_{A}$ of 4.6(1), from the object $\mathsf{id}_{A}\colon A\to K(A,\alpha)$
to the object $h\colon A\to K(B,\beta)$.
This implies $h\cdot\alpha^{+}=\beta_{h}^{+}$. The definition of
$\widehat{T}h$ yields $\beta^{+}\cdot\widehat{T}h=\beta_{h}^{+}$ (see 4.6(1)
again). Thus, $h$ is a homomorphism for ${\widehat{\mathbf{T}}}$:
$h\cdot\alpha^{+}=\beta_{h}^{+}=\beta^{+}\cdot\widehat{T}h.$
Note that the _only if_ part implies that the object assignment
$(A,\alpha)\mapsto(A,\alpha^{+})$ is indeed functorial.
2. (2)
For every finite ${\widehat{\mathbf{T}}}$-algebra $(A,\delta)$ we prove that
the composite
(4.2)
$\alpha=(\,TA\xrightarrow{\varphi_{A}}V\widehat{T}A\xrightarrow{V\delta}VA=A\,)$
defines a $\mathbf{T}$-algebra with $\alpha^{+}=\delta$.
The unit law follows from that of $\delta$,
$\delta\cdot\widehat{\eta}_{A}=\mathsf{id}$, and from
$\varphi_{A}\cdot\eta_{A}=V\widehat{\eta}_{A}$ (see 4.15):
$\alpha\cdot\eta_{A}=V\delta\cdot\varphi_{A}\cdot\eta_{A}=V\delta\cdot V\widehat{\eta}_{A}=\mathsf{id}_{A}.$
The associative law follows from that of $\delta$,
$\delta\cdot\widehat{\mu}_{A}=\delta\cdot\widehat{T}\delta$, from the
naturality of $\varphi$, and from
$\varphi_{A}\cdot\mu_{A}=V\widehat{\mu}_{A}\cdot\varphi_{\widehat{T}A}\cdot
T\varphi_{A}$ (see 4.15):
$\alpha\cdot T\alpha=V\delta\cdot V\widehat{T}\delta\cdot\varphi_{\widehat{T}A}\cdot T\varphi_{A}=V\delta\cdot V\widehat{\mu}_{A}\cdot\varphi_{\widehat{T}A}\cdot T\varphi_{A}=V\delta\cdot\varphi_{A}\cdot\mu_{A}=\alpha\cdot\mu_{A}.$
To prove that
$\alpha^{+}=\delta,$
recall from 4.9 and 4.10 that $\widehat{T}A$ is a cofiltered limit of all
finite quotients $b\colon(TA,\mu_{A})\twoheadrightarrow(B,\beta)$ in
$\mathscr{D}^{\mathbf{T}}$ with the limit cone
$\beta_{b}^{+}\colon\widehat{T}A\twoheadrightarrow B$. Since $A$ is finite,
both $\alpha^{+}$ and $\delta$ factorize through one of the limit projections
$\beta_{b}^{+}$, i.e. we have commutative triangles as follows:
(4.3)
$\delta=\delta_{0}\cdot\beta_{b}^{+}\qquad\text{and}\qquad\alpha^{+}=\alpha_{0}\cdot\beta_{b}^{+}\qquad\text{for some }\alpha_{0},\delta_{0}\colon B\to A.$
Recall from 4.10 that $\beta_{b}^{+}$ denotes
$\beta_{\widehat{b\cdot\eta_{A}}}^{+}$, and by 3.13 we have
$\widehat{b\cdot\eta_{A}}=b\cdot\eta_{A}\colon A\to B$ since this morphism
lies in $\mathscr{D}_{\mathsf{f}}$. Combining this with the definition (4.1)
of $\varphi_{A}$ we have a commutative square
(4.4)
$V\beta_{b}^{+}\cdot\varphi_{A}=\beta\cdot TV(b\cdot\eta_{A})\colon TVA\to B$
Now we compute
$\delta_{0}\cdot\beta\cdot T(b\cdot\eta_{A})$
$=\delta_{0}\cdot\beta\cdot TV(b\cdot\eta_{A})$ (since $b\cdot\eta_{A}$ lies in $\mathscr{D}_{\mathsf{f}}$)
$=\delta_{0}\cdot V\beta_{b}^{+}\cdot\varphi_{A}$ (by (4.4))
$=V\delta_{0}\cdot V\beta_{b}^{+}\cdot\varphi_{A}$ (since $\delta_{0}$ lies in $\mathscr{D}_{\mathsf{f}}$)
$=V\delta\cdot\varphi_{A}$ (by (4.3)).
Analogously, we obtain
(4.5) $V\alpha^{+}\cdot\varphi_{A}=\alpha_{0}\cdot\beta\cdot
T(b\cdot\eta_{A}).$
From the definition (4.1) of $\varphi_{A}$, we also get
(4.6)
$V\alpha^{+}\cdot\varphi_{A}=V\alpha_{\mathsf{id}_{A}}^{+}\cdot\varphi_{A}=\alpha\cdot TV\mathsf{id}_{A}=\alpha=V\delta\cdot\varphi_{A},$
where we use (4.2) in the last step. Therefore, we can compute
$\delta_{0}\cdot b$
$=\delta_{0}\cdot b\cdot\mu_{A}\cdot T\eta_{A}$ (since $\mu_{A}\cdot T\eta_{A}=\mathsf{id}$)
$=\delta_{0}\cdot\beta\cdot Tb\cdot T\eta_{A}$ (since $b$ is a $\mathbf{T}$-homomorphism)
$=V\delta\cdot\varphi_{A}$ (shown previously)
$=V\alpha^{+}\cdot\varphi_{A}$ (by (4.6))
$=\alpha_{0}\cdot\beta\cdot Tb\cdot T\eta_{A}$ (by (4.5))
$=\alpha_{0}\cdot b\cdot\mu_{A}\cdot T\eta_{A}$ (since $b$ is a $\mathbf{T}$-homomorphism)
$=\alpha_{0}\cdot b$ (since $\mu_{A}\cdot T\eta_{A}=\mathsf{id}$).
Since $b$ is epic, this implies $\alpha_{0}=\delta_{0}$, whence
$\alpha^{+}=\delta$.
3. (3)
Uniqueness of $\alpha$. Let $(A,\alpha)$ be a finite $\mathbf{T}$-algebra with
$\alpha^{+}=\delta$. By the definition of $\varphi_{A}$ this implies
$\alpha=V\alpha^{+}\cdot\varphi_{A}=V\delta\cdot\varphi_{A},$
so $\alpha$ is unique.∎
From now on, we identify finite algebras for $\mathbf{T}$ and for
${\widehat{\mathbf{T}}}$.
###### Proposition 4.17.
The pro-completion of the category $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ of
finite $\mathbf{T}$-algebras is the full subcategory of the category of
${\widehat{\mathbf{T}}}$-algebras given by all cofiltered limits of finite
$\mathbf{T}$-algebras.
###### Proof.
Let $\mathscr{L}$ denote the full subcategory of
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\widehat{\mathbf{T}}}$ given
by all cofiltered limits of finite $\mathbf{T}$-algebras. To show that
$\mathscr{L}$ forms the pro-completion of
$\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$, we verify the three conditions of
A.5. By definition $\mathscr{L}$ satisfies condition (2), and condition (1)
follows since $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ has cofiltered
limits, and hence so does
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{{\widehat{\mathbf{T}}}}$.
Thus, it only remains to prove condition (3): every algebra $(A,\alpha^{+})$
with $(A,\alpha)\in\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ is finitely
copresentable in $\mathscr{L}$. Let
$b_{i}\colon(B,\beta)\to(B_{i},\beta_{i})$, $(i\in I)$, be a limit cone of a
cofiltered diagram $D$ in $\mathscr{L}$. Our task is to prove for every
morphism $f\colon(B,\beta)\to(A,\alpha^{+})$ that
1. (a)
a factorization through a limit projection exists, i.e. $f=f^{\prime}\cdot
b_{i}$ for some $i\in I$ and
$f^{\prime}\colon(B_{i},\beta_{i})\to(A,\alpha^{+})$, and
2. (b)
given another factorization $f=f^{\prime\prime}\cdot b_{i}$ in $\mathscr{L}$,
then $f^{\prime}$ and $f^{\prime\prime}$ are merged by a connecting morphism
$b_{ji}\colon(B_{j},\beta_{j})\to(B_{i},\beta_{i})$ of $D$ (for some $j\in
I$).
Ad (a), since $b_{i}\colon B\to B_{i}$ is a limit of a cofiltered diagram in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ and $A$, as an object of
$\mathscr{D}_{\mathsf{f}}$, is finitely copresentable in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, we have $i\in I$ and a
factorization $f=f^{\prime}\cdot b_{i}$, for some $f^{\prime}\colon B_{i}\to
A$ in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$. If $f^{\prime}$ is a
$\mathbf{T}$-homomorphism, i.e. if the equation
(4.7)
$f^{\prime}\cdot\beta_{i}=\alpha^{+}\cdot\widehat{T}f^{\prime}\colon\widehat{T}B_{i}\to A$
holds, we are done. In general, we have to change the choice of $i$: from
Remark 4.8(2) recall that $\widehat{T}$ is cofinitary, thus $(\widehat{T}b_{i})_{i\in
I}$ is a limit cone. The parallel pair
$f^{\prime}\cdot\beta_{i},\alpha^{+}\cdot\widehat{T}f^{\prime}\colon\widehat{T}B_{i}\to
A$
has a finitely copresentable codomain (in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$) and is merged by
$\widehat{T}b_{i}$. Indeed, both composites with $\widehat{T}b_{i}$ equal
$f\cdot\beta=\alpha^{+}\cdot\widehat{T}f$, using that $b_{i}$ and
$f=f^{\prime}\cdot b_{i}$ are homomorphisms. Consequently, that parallel
pair is also merged by $\widehat{T}b_{ji}$ for some connecting morphism
$b_{ji}\colon(B_{j},\beta_{j})\to(B_{i},\beta_{i})$ of the diagram $D$:
$(\alpha^{+}\cdot\widehat{T}f^{\prime})\cdot\widehat{T}b_{ji}=(f^{\prime}\cdot\beta_{i})\cdot\widehat{T}b_{ji}.$
From $b_{i}=b_{ji}\cdot b_{j}$ we get another factorization of $f$:
$f=(f^{\prime}\cdot b_{ji})\cdot b_{j}$
and this tells us that the factorization morphism
$\overline{f}=f^{\prime}\cdot b_{ji}$ is a homomorphism as desired:
$\overline{f}\cdot\beta_{j}=f^{\prime}\cdot b_{ji}\cdot\beta_{j}=f^{\prime}\cdot\beta_{i}\cdot\widehat{T}b_{ji}=\alpha^{+}\cdot\widehat{T}f^{\prime}\cdot\widehat{T}b_{ji}=\alpha^{+}\cdot\widehat{T}\overline{f}.$
Ad (b), suppose that
$f^{\prime},f^{\prime\prime}\colon(B_{i},\beta_{i})\to(A,\alpha^{+})$ are
homomorphisms satisfying $f=f^{\prime}\cdot b_{i}=f^{\prime\prime}\cdot
b_{i}$. Since $B=\lim B_{i}$ is a cofiltered limit in
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ and the limit projection
$b_{i}$ merges $f^{\prime},f^{\prime\prime}\colon B_{i}\to A$, it follows that
some connecting morphism $b_{ji}\colon(B_{j},\beta_{j})\to(B_{i},\beta_{i})$
also merges $f^{\prime},f^{\prime\prime}$, as desired. ∎
###### Remark 4.18.
If $(\mathcal{E},\mathcal{M})$ is a profinite factorization system on
$\mathscr{D}$, then $(\mathcal{E}^{\mathbf{T}},\mathcal{M}^{\mathbf{T}})$ is a
profinite factorization system on $\mathscr{D}^{\mathbf{T}}$. Indeed, since
$\mathcal{E}$ is closed in $\mathscr{D}^{\to}$ under cofiltered limits of
finite quotients, and since the forgetful functor from
$(\mathscr{D}^{\mathbf{T}})^{\to}$ to $\mathscr{D}^{\to}$ creates limits, it
follows that $\mathcal{E}^{\mathbf{T}}$ is also closed under cofiltered limits
of finite quotients.
###### Definition 4.19.
A _${\widehat{\mathbf{T}}}$-equation_ is an equation in the category of
${\widehat{\mathbf{T}}}$-algebras, i.e. a
${\widehat{\mathbf{T}}}$-homomorphism $e$ in
$\widehat{\mathcal{E}}^{\widehat{\mathbf{T}}}$ with
$\widehat{\mathcal{E}}^{{\widehat{\mathbf{T}}}}$-projective domain. A finite
$\mathbf{T}$-algebra _satisfies_ $e$ if it is injective with respect to $e$ in
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{{\widehat{\mathbf{T}}}}$.
###### Theorem 4.20 (Generalized Reiterman Theorem for Monads).
Let $\mathscr{D}$ be a category with a profinite factorization system
$(\mathcal{E},\mathcal{M})$, and suppose that $\mathbf{T}$ is a monad
preserving quotients. Then a class of finite $\mathbf{T}$-algebras is a
pseudovariety in $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ iff it can be
presented by ${\widehat{\mathbf{T}}}$-equations.
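As a concrete illustration (a classical special case, not spelled out in the text above): taking $\mathbf{T}$ to be the monoid monad on $\mathbf{Set}$, so that $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ is the category of finite monoids, Reiterman's original theorem presents the pseudovariety of aperiodic finite monoids by a single profinite equation.

```latex
% Classical instance: T = the monoid monad X |-> X* on Set.
% For a finite monoid M and m in M, the sequence m^{n!} is eventually
% constant with value the unique idempotent power m^{omega} of m; the
% profinite word x^{omega} is the corresponding limit of the words x^{n!}.
% The pseudovariety of aperiodic finite monoids is presented by the single
% \widehat{T}-equation identifying x^{omega+1} with x^{omega}:
\[
  \mathcal{A}
  \;=\;
  \{\, M \text{ finite monoid} \;\mid\; M \models x^{\omega+1} = x^{\omega} \,\}.
\]
```

Here the equation is a quotient of the free profinite monoid on one generator, matching the shape of ${\widehat{\mathbf{T}}}$-equations described in Remark 4.21.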
###### Remark 4.21.
We will see in the proof that the ${\widehat{\mathbf{T}}}$-equations
presenting a given pseudovariety can be chosen to be of the form
$e\colon(\widehat{T}\widehat{X},\widehat{\mu}_{\widehat{X}})\twoheadrightarrow(A,\alpha)$
where $e\in\widehat{\mathcal{E}}$, the object $X$ is $\mathcal{E}$-projective
in $\mathscr{D}$, and $A$ is finite. Moreover, we can assume
$X\in\mathsf{Var}$ for any class $\mathsf{Var}$ of objects as in Remark 3.9.
###### Proof of 4.20.
Every class of finite $\mathbf{T}$-algebras presented by
${\widehat{\mathbf{T}}}$-equations is a pseudovariety – this is analogous to
3.8.
Conversely, let $\mathcal{V}$ be a pseudovariety in
$\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$. For every finite $\mathbf{T}$-algebra
$(A,\alpha)$ we have an $\mathcal{E}$-projective object $X$ in $\mathscr{D}$
and a quotient $e\colon X\twoheadrightarrow A$ (see 3.1). Since
$\widehat{e}\in\widehat{\mathcal{E}}$ by Remark 3.16(2), we have
$\widehat{T}\widehat{e}\in\widehat{\mathcal{E}}$ by 4.12. Therefore the
homomorphism
$\overline{e}\colon(\widehat{T}\widehat{X},\widehat{\mu}_{\widehat{X}})\to(A,\alpha^{+})$
extending $\widehat{e}$ lies in $\widehat{\mathcal{E}}$: we have
$\overline{e}=\alpha^{+}\cdot\widehat{T}\widehat{e}$, and $\alpha^{+}$ is a
split epimorphism by the unit law
$\alpha^{+}\cdot\widehat{\eta}_{A}=\mathsf{id}_{A}$. Since
$(\widehat{\mathcal{E}},\widehat{\mathcal{M}})$ is a proper factorization
system and $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ has finite
coproducts, every split epimorphism lies in $\widehat{\mathcal{E}}$ (Adámek et
al., 2009, Thm. 14.11), whence $\alpha^{+}\in\widehat{\mathcal{E}}$. Thus, we
see that every finite $\mathbf{T}$-algebra is a quotient, in the category of
${\widehat{\mathbf{T}}}$-algebras, of
$(\widehat{T}\widehat{X},\widehat{\mu}_{\widehat{X}})$ for an
$\mathcal{E}$-projective object $X$ of $\mathscr{D}$. Each such quotient lies
in $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$. Indeed, the
codomain, being a finite $\mathbf{T}$-algebra, does. To see that the domain
also does, combine Remark 4.8(3) and 4.16.
In Remark 3.9 we can thus denote by $\mathsf{Var}$ the collection of all free
algebras $(\widehat{T}\widehat{X},\widehat{\mu}_{\widehat{X}})$ where $X$
ranges over $\mathcal{E}$-projective objects of $\mathscr{D}$. Then 3.23 and
Remark 3.9 yield our claim that every pseudovariety in
$\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ can be presented by
${\widehat{\mathbf{T}}}$-equations which are finite quotients of free algebras
$(\widehat{T}\widehat{X},\widehat{\mu}_{\widehat{X}})$ where $X$ is
$\mathcal{E}$-projective in $\mathscr{D}$. ∎
## 5\. Profinite Terms and Implicit Operations
In our presentation so far, we have worked with an abstract categorical notion
of equations given by quotients of projective objects. In Reiterman’s original
paper (Reiterman, 1982) on pseudovarieties of $\Sigma$-algebras, a different
concept is used: equations between _implicit operations_ , or equivalently,
equations between _profinite terms_. This raises a natural question: which
categories $\mathscr{D}$ allow the simplification of equations in the sense of
4.19 to equations between profinite terms? It turns out to be sufficient that
$\mathscr{D}$ is cocomplete and has a finite dense set $\mathcal{S}$ of
objects that are projective w.r.t. strong epimorphisms. Recall that density of
$\mathcal{S}$ means that every object $D$ of $\mathscr{D}$ is a canonical
colimit of all morphisms from objects of $\mathcal{S}$ to $D$. More precisely,
if we view $\mathcal{S}$ as a full subcategory of $\mathscr{D}$, then $D$ is
the colimit of the diagram
$\mathcal{S}/D\to\mathscr{D}\quad\text{given
by}\quad\left(s\xrightarrow{f}D\right)\mapsto s$
with colimit cocone given by the morphisms $f$.
###### Assumption 5.1.
Throughout this section $\mathscr{D}$ is a cocomplete category with a finite
dense set $\mathcal{S}$ of objects projective w.r.t. strong epimorphisms. It
follows (see 5.4 below) that $\mathscr{D}$ has
$(\mathsf{StrongEpi},\mathsf{Mono})$-factorizations, and we work with this
factorization system. We denote by $\mathscr{D}_{\mathsf{f}}$ the collection
of all objects $D$ such that
(5.1) $\mathscr{D}(s,D)\text{ is finite for every object $s\in\mathcal{S}$.}$
We will show in 5.4 below that every category $\mathscr{D}$ satisfying the
above assumptions can be presented as a category of algebras over an
$\mathcal{S}$-sorted signature. Throughout this section, let $\Sigma$ be an
$\mathcal{S}$-sorted algebraic signature, i.e. a signature without relation
symbols. We denote by
$\mathop{\mathbf{Alg}}\Sigma$
the category of $\Sigma$-algebras and homomorphisms.
###### Example 5.2.
1. (1)
The category $\mathbf{Set}^{\mathcal{S}}$ satisfies 5.1. A finite dense set in
$\mathbf{Set}^{\mathcal{S}}$ is given by the objects
$\mathbf{1}_{s}\;(s\in\mathcal{S})$
where $\mathbf{1}_{s}$ is the $\mathcal{S}$-sorted set that is empty in all
sorts except $s$, and has a single element $\ast$ in sort $s$. Indeed, let $A$
and $B$ be $\mathcal{S}$-sorted sets and let a cocone of the canonical diagram
for $A$ be given:
$\frac{\mathbf{1}_{s}\xrightarrow{f}A}{\mathbf{1}_{s}\xrightarrow{f^{*}}B}$
By this we mean that we have morphisms $f^{*}\colon\mathbf{1}_{s}\to B$ for
every $f\colon\mathbf{1}_{s}\to A$ (and observe that the cocone condition is
void in this case because there are no connecting morphisms
$\mathbf{1}_{s}\to\mathbf{1}_{t}$ for $s\neq t$). Then we are to prove that there
exists a unique $\mathcal{S}$-sorted function $h\colon A\to B$ with
$f^{*}=h\cdot f$ for all $f$. Uniqueness is clear: given $x\in A$ of sort $s$,
let $f_{x}\colon\mathbf{1}_{s}\to A$ be the map with $f_{x}(*)=x$. Then
$h\cdot f_{x}=f_{x}^{*}$ implies
$h(x)=f_{x}^{*}(*).$
Conversely, if $h$ is defined by the above equation, then for every
$s\in\mathcal{S}$ and $f\colon\mathbf{1}_{s}\to A$ we have $f^{*}=h\cdot f$
because $f=f_{x}$ for $x=f(*)$.
More generally, every set of objects $\mathbf{K}_{s}$ ($s\in\mathcal{S}$),
where $\mathbf{K}_{s}$ is nonempty in sort $s$ and empty in all other sorts,
is dense in $\mathbf{Set}^{\mathcal{S}}$.
2. (2)
The category $\mathop{\mathbf{Alg}}\Sigma$ satisfies 5.1. Recall that strong
epimorphisms are precisely the homomorphisms with surjective components, and
monomorphisms are the homomorphisms with injective components. It follows
easily that for the free-algebra functor
$F_{\Sigma}\colon\mathbf{Set}^{\mathcal{S}}\to\mathop{\mathbf{Alg}}\Sigma$ all
algebras $F_{\Sigma}X$ are projective w.r.t. strong epimorphisms. We present a
finite dense set of free algebras.
Assume first that $\Sigma$ is a unary signature, i.e. all operation symbols in
$\Sigma$ are of the form $\sigma\colon s\to t$. Then the free algebras
$F_{\Sigma}\mathbf{1}_{s}\;(s\in\mathcal{S})$
form a dense set in $\mathop{\mathbf{Alg}}\Sigma$. Indeed, let
$U_{\Sigma}\colon\mathop{\mathbf{Alg}}\Sigma\to\mathbf{Set}^{\mathcal{S}}$
denote the forgetful functor and $\eta\colon\mathsf{Id}\to
U_{\Sigma}F_{\Sigma}$ the unit of the adjunction $F_{\Sigma}\dashv
U_{\Sigma}$. Given $\Sigma$-algebras $A$ and $B$ and a cocone of the canonical
diagram as follows:
$\frac{F_{\Sigma}\mathbf{1}_{s}\xrightarrow{f}A}{F_{\Sigma}\mathbf{1}_{s}\xrightarrow{f^{*}}B}$
we are to prove that there exists a unique homomorphism $h\colon A\to B$ with
$f^{*}=h\cdot f$ for every $f$. We obtain a corresponding cocone in
$\mathbf{Set}^{\mathcal{S}}$ as follows:
$\frac{\mathbf{1}_{s}\xrightarrow{\eta}U_{\Sigma}F_{\Sigma}\mathbf{1}_{s}\xrightarrow{U_{\Sigma}f}U_{\Sigma}A}{\mathbf{1}_{s}\xrightarrow{\eta}U_{\Sigma}F_{\Sigma}\mathbf{1}_{s}\xrightarrow{U_{\Sigma}f^{*}}U_{\Sigma}B}$
Due to (1) there exists a unique function $k\colon U_{\Sigma}A\to U_{\Sigma}B$
with
(5.2) $U_{\Sigma}f^{*}\cdot\eta=(k\cdot U_{\Sigma}f)\cdot\eta\qquad\text{for
all $f$}.$
Here and in the following we drop the subscripts indicating components of
$\eta$. It remains to prove that $k$ is a homomorphism from $A$ to $B$; then
the universal property of $\eta$ implies $f^{*}=k\cdot f$. Thus, given
$\sigma\colon s\to t$ in $\Sigma$ and $a\in A_{s}$ we need to prove
$k(\sigma_{A}(a))=\sigma_{B}(k(a))$. Consider the unique homomorphisms
$\displaystyle f\colon F_{\Sigma}\mathbf{1}_{t}\to A,$ $\displaystyle
f(\ast)=\sigma_{A}(a),$ $\displaystyle g\colon F_{\Sigma}\mathbf{1}_{s}\to A,$
$\displaystyle g(\ast)=a,$ $\displaystyle j\colon F_{\Sigma}\mathbf{1}_{t}\to
F_{\Sigma}\mathbf{1}_{s},$ $\displaystyle j(\ast)=\sigma(\ast).$
Then $f=g\cdot j$ and thus $f^{*}=g^{*}\cdot j$ because the morphisms
$(\mathord{-})^{*}$ form a cocone of the canonical diagram of $A$. It follows
that
$k(\sigma_{A}(a))=k(f(*))=f^{*}(\ast)=g^{*}(j(\ast))=g^{*}(\sigma(\ast))=\sigma_{B}(g^{*}(\ast))=\sigma_{B}(k(g(\ast)))=\sigma_{B}(k(a)),$
where the last but one equation holds by (5.2). Thus, $k$ is a homomorphism as
desired.
For a general signature $\Sigma$, let $k\in\mathbb{N}\cup\\{\omega\\}$ be an
upper bound of the arities of the operation symbols in $\Sigma$, and for every
subset $T\subseteq\mathcal{S}$ define the $\mathcal{S}$-sorted set $X_{T}$ as
follows: $X_{T}$ is empty in every sort outside of $T$, and for every sort
$s\in T$ we put $(X_{T})_{s}=\\{\,i\mid i<k\,\\}$. Then the set
$F_{\Sigma}X_{T}\quad(T\subseteq\mathcal{S})$
is dense in $\mathop{\mathbf{Alg}}\Sigma$. The proof is analogous to the unary
case.
3. (3)
The category of graphs, i.e. sets with a binary relation, and graph
homomorphisms satisfies 5.1. Strong epimorphisms are precisely the surjective
homomorphisms which are also surjective on all edges. Thus the two graphs
$G_{1}$ (a single vertex and no edges) and $G_{2}$ (a single edge
$v_{0}\to v_{1}$ on two distinct vertices) are clearly projective w.r.t.
strong epimorphisms. Moreover, they form a dense set: every graph is a
canonical colimit of all of its vertices and all of its edges.
4. (4)
Every variety, and even every quasivariety of $\Sigma$-algebras (presented by
implications) satisfies 5.1. This will follow from 5.4 below.
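The uniqueness-and-existence argument for the mediating map $h$ in part (1) is concrete enough to machine-check. The following is a minimal Python sketch (the dictionary encoding of $\mathcal{S}$-sorted sets and the function name are ours, not part of the formal development): a morphism $\mathbf{1}_{s}\to A$ is identified with a pair $(s,x)$, $x\in A_{s}$, and a cocone $f\mapsto f^{*}$ determines $h$ by $h(x)=f_{x}^{*}(\ast)$.

```python
# S-sorted sets as dicts: sort name -> list of elements.
# A morphism 1_s -> A is a pair (s, x) with x in A[s]; a cocone assigns
# to each such pair an element of B[s].

def mediating_map(A, B, cocone):
    """The unique sorted function h: A -> B with h . f = f* for every
    f: 1_s -> A, defined sort-wise by h(x) = f_x*(*)."""
    return {s: {x: cocone[(s, x)] for x in A[s]} for s in A}

A = {"s": [0, 1], "t": ["a"]}
B = {"s": ["p", "q"], "t": ["r"]}
cocone = {("s", 0): "p", ("s", 1): "q", ("t", "a"): "r"}

h = mediating_map(A, B, cocone)
# h is compatible with the cocone; uniqueness is forced by h(x) = f_x*(*).
assert all(h[s][x] == cocone[(s, x)] for s in A for x in A[s])
```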
###### Definition 5.3.
A full subcategory $\mathscr{D}$ of $\mathop{\mathbf{Alg}}\Sigma$ is said to
be _closed under $(\mathsf{StrongEpi},\mathsf{Mono})$-factorizations_ if for
every morphism $f\colon A\to B$ of $\mathscr{D}$ with factorization
$f=(A\overset{e}{\twoheadrightarrow}C\overset{m}{\rightarrowtail}B)$,
the object $C$ lies in $\mathscr{D}$.
###### Proposition 5.4.
For every category $\mathscr{D}$ the following two statements are equivalent:
1. (1)
$\mathscr{D}$ is cocomplete and has a finite dense set of objects which are
projective w.r.t. strong epimorphisms.
2. (2)
There exists a signature $\Sigma$ such that $\mathscr{D}$ is equivalent to a
full reflective subcategory of $\mathop{\mathbf{Alg}}\Sigma$ closed under
$(\mathsf{StrongEpi},\mathsf{Mono})$-factorizations.
Moreover, $\Sigma$ can always be chosen to be a unary signature.
###### Proof.
(2) $\Rightarrow$ (1) Suppose that
$\mathscr{D}\subseteq\mathop{\mathbf{Alg}}\Sigma$ is a full reflective
subcategory and that $\mathscr{D}$ is closed under
$(\mathsf{StrongEpi},\mathsf{Mono})$-factorizations. Cocompleteness of
$\mathscr{D}$ is clear because $\mathop{\mathbf{Alg}}\Sigma$ is cocomplete.
Denote by $(-)^{@}\colon\mathop{\mathbf{Alg}}\Sigma\to\mathscr{D}$ the
reflector (i.e. the left adjoint to the inclusion functor
$\mathscr{D}\hookrightarrow\mathop{\mathbf{Alg}}\Sigma$) and by
$\eta_{X}\colon X\to X^{@}$ the universal maps. From 5.2 we know that
$\mathop{\mathbf{Alg}}\Sigma$ has a finite dense set of projective objects
$A_{i}$, $i\in I$. We prove that the objects $A_{i}^{@}$, $i\in I$, form a
dense set in $\mathscr{D}$.
To verify the density, let $\mathscr{A}$ be the full subcategory of
$\mathop{\mathbf{Alg}}\Sigma$ on $\\{A_{i}\\}_{i\in I}$. For every algebra
$D\in\mathscr{D}$ the canonical diagram
$\mathscr{A}/D\to\mathop{\mathbf{Alg}}\Sigma$ assigning $A_{i}$ to each
$f\colon A_{i}\to D$ has the canonical colimit $D$. Since the left adjoint
$(-)^{@}$ preserves that colimit, we have that $D=D^{@}$ is a canonical
colimit of all $f^{@}\colon A_{i}^{@}\to D$ for $f$ ranging over
$\mathscr{A}/D$, as required. (Indeed, observe that every morphism $f\colon
A_{i}^{@}\to D$ in $\mathscr{D}$ has the form $f=f^{@}$ because the
subcategory $\mathscr{D}$ is full and contains the domain and codomain of
$f$.)
Next, we observe that every strong epimorphism $e$ of $\mathscr{D}$ is
strongly epic in $\mathop{\mathbf{Alg}}\Sigma$. Indeed, take the
$(\mathsf{StrongEpi},\mathsf{Mono})$-factorization $e=m\cdot e^{\prime}$ of
$e$ in $\mathop{\mathbf{Alg}}\Sigma$. Since $\mathscr{D}$ is closed under
factorizations, we have that $e^{\prime},m\in\mathscr{D}$. Moreover, the
morphism $m$ is monic in $\mathscr{D}$ because it is monic in
$\mathop{\mathbf{Alg}}\Sigma$. Since $e$ is a strong (and thus extremal)
epimorphism in $\mathscr{D}$, it follows that $m$ is an isomorphism. Thus
$e\cong e^{\prime}$ is a strong epimorphism in $\mathop{\mathbf{Alg}}\Sigma$.
Since $\mathop{\mathbf{Alg}}\Sigma$ is complete, this is equivalent to being
an extremal epimorphism.
Since each $A_{i}$ is projective w.r.t. strong epimorphisms in
$\mathop{\mathbf{Alg}}\Sigma$, it thus follows that $A_{i}^{@}$ is projective
w.r.t. strong epimorphisms $e\colon B\twoheadrightarrow C$ in $\mathscr{D}$.
Indeed, given a morphism $h\colon A_{i}^{@}\to C$, compose it with the
universal arrow $\eta\colon A_{i}\to A_{i}^{@}$. Thus, $h\cdot\eta$ factorizes
in $\mathop{\mathbf{Alg}}\Sigma$ through $e$:
$\begin{array}{ccc}A_{i}&\xrightarrow{\;\eta\;}&A_{i}^{@}\\{\scriptstyle k}\big\downarrow&{\overset{\overline{k}}{\swarrow}}&\big\downarrow{\scriptstyle h}\\B&\xrightarrow{\;e\;}&C\end{array}$
The unique morphism $\overline{k}\colon A_{i}^{@}\to B$ of $\mathscr{D}$ with
$k=\overline{k}\cdot\eta$ then fulfils the desired equality
$h=e\cdot\overline{k}$ since $h\cdot\eta=e\cdot\overline{k}\cdot\eta$.
(1)$\Rightarrow$(2) Let $\mathcal{S}$ be a finite dense set of objects
projective w.r.t. strong epimorphisms, and consider $\mathcal{S}$ as a full
subcategory of $\mathscr{D}$. Define an $\mathcal{S}$-sorted signature of
unary symbols
$\Sigma=\mathsf{Mor}(\mathcal{S}^{\mathrm{op}})\setminus\\{\,\mathsf{id}_{s}\mid
s\in\mathcal{S}\,\\}.$
Every morphism $\sigma\colon s\to t$ of $\mathcal{S}^{\mathrm{op}}$ has arity
as indicated: the corresponding unary operation has inputs of sort $s$ and
yields values of sort $t$. Define a functor
$E\colon\mathscr{D}\to\mathop{\mathbf{Alg}}\Sigma$
by assigning to every object $D$ the $\mathcal{S}$-sorted set with sorts
$(ED)^{s}=\mathscr{D}(s,D)\quad\text{for $s\in\mathcal{S}$}$
endowed with the operations
$\sigma_{ED}\colon\mathscr{D}(s,D)\to\mathscr{D}(s^{\prime},D)$
given by precomposing with $\sigma\colon s^{\prime}\to s$ in
$\mathcal{S}\subseteq\mathscr{D}$. To every morphism $f\colon D_{1}\to D_{2}$
of $\mathscr{D}$ assign the $\Sigma$-homomorphism $Ef$ with sorts
$(Ef)^{s}\colon\mathscr{D}(s,D_{1})\to\mathscr{D}(s,D_{2})$
given by postcomposing with $f$. To say that $\mathcal{S}$ is a dense set is
equivalent to saying that $E$ is full and faithful (Adámek and Rosický, 1994,
Prop. 1.26). Moreover, since $\mathscr{D}$ is cocomplete, $E$ is a right
adjoint (Adámek and Rosický, 1994, Prop. 1.27). Thus, $\mathscr{D}$ is
equivalent to a full reflective subcategory of $\mathop{\mathbf{Alg}}\Sigma$.
Next we show that $\mathscr{D}$ has the factorization system
$(\mathsf{StrongEpi},\mathsf{Mono})$. Indeed, being reflective in
$\mathop{\mathbf{Alg}}\Sigma$, it is a complete category. Moreover,
$\mathscr{D}$ is well-powered because the right adjoint
$\mathscr{D}\hookrightarrow\mathop{\mathbf{Alg}}\Sigma$ preserves
monomorphisms and $\mathop{\mathbf{Alg}}\Sigma$ is well-powered. Consequently,
the factorization system exists (Adámek et al., 2009, Cor. 14.21).
To prove closure under factorizations, observe first that a morphism $e\colon
D_{1}\to D_{2}$ is strongly epic in $\mathscr{D}$ iff $Ee$ is strongly epic in
$\mathop{\mathbf{Alg}}\Sigma$. Indeed, if $e$ is strongly epic, then $Ee$ has
surjective sorts $(Ee)^{s}$ because $s$ is projective w.r.t. $e$. Thus, $Ee$
is a strong epimorphism in $\mathop{\mathbf{Alg}}\Sigma$. Conversely, if $Ee$
is strongly epic in $\mathop{\mathbf{Alg}}\Sigma$, then for every commutative
square $g\cdot e=m\cdot f$ in $\mathscr{D}$ with $m$ monic, the morphism $Em$
is monic in $\mathop{\mathbf{Alg}}\Sigma$ because $E$ is a right adjoint, and
thus a diagonal exists.
Now let $f\colon A\to B$ be a morphism in $\mathscr{D}$ and let
$f=(A\overset{e}{\twoheadrightarrow}C\overset{m}{\rightarrowtail}B)$
be its $(\mathsf{StrongEpi},\mathsf{Mono})$-factorization in $\mathscr{D}$.
Thus $C\in\mathscr{D}$ and since by the above argument $Ee$ and $Em$ are
strong epimorphisms and monomorphisms in $\mathop{\mathbf{Alg}}\Sigma$,
respectively, $C$ is the image of $f$ w.r.t. to the factorization system of
$\mathop{\mathbf{Alg}}\Sigma$. ∎
###### Example 5.5.
1. (1)
If $\mathscr{D}=\mathbf{Set}$, we can take $\mathcal{S}=\\{\mathbf{1}\\}$ where
$\mathbf{1}$ is a singleton set. The one-sorted signature $\Sigma$ in the
above proof is empty; thus $\mathop{\mathbf{Alg}}\Sigma=\mathbf{Set}$.
2. (2)
In the category $\mathbf{Gra}$ of graphs we can take $\mathcal{S}=\\{G_{1},G_{2}\\}$,
see 5.2(3). Here $\Sigma$ is a $2$-sorted signature with two operations
$s,t\colon G_{2}\to G_{1}$. A graph $G=(V,E)$ is represented as an algebra $A$
with sorts $A_{G_{1}}=V$ and $A_{G_{2}}=E$ and $s,t$ given by the source and
target of edges, respectively. More precisely, $\mathbf{Gra}$ is equivalent to
the full subcategory of all $\Sigma$-algebras $(V,E)$ where for all
$e,e^{\prime}\in E$ with $s(e)=s(e^{\prime})$ and $t(e)=t(e^{\prime})$, one
has $e=e^{\prime}$.
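To make part (2) concrete, here is a minimal Python sketch (our own encoding, not part of the formal development) of the passage between a graph $G=(V,E)$ and its algebra representation with sorts $A_{G_{1}}=V$, $A_{G_{2}}=E$ and the operations $s,t$:

```python
# A graph (V, E) with E a binary relation on V, encoded as a 2-sorted
# algebra: sorts A_G1 (vertices) and A_G2 (edges), unary operations
# s, t : G2 -> G1 giving source and target of each edge.

def to_algebra(V, E):
    edges = set(E)
    s = {e: e[0] for e in edges}   # source operation
    t = {e: e[1] for e in edges}   # target operation
    return set(V), edges, s, t

def to_graph(A_G1, A_G2, s, t):
    # Well-defined on the image because parallel edges with equal
    # source and target coincide in such algebras.
    return set(A_G1), {(s[e], t[e]) for e in A_G2}

V, E = {0, 1, 2}, {(0, 1), (1, 2), (2, 2)}
assert to_graph(*to_algebra(V, E)) == (V, E)   # round trip
```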
###### Assumption 5.6.
From now on we assume that
1. (1)
The category $\mathscr{D}$ is a full reflective subcategory of
$\Sigma$-algebras closed under
$(\mathsf{StrongEpi},\mathsf{Mono})$-factorizations; the reflection of a
$\Sigma$-algebra $A$ into $\mathscr{D}$ is denoted by $A^{@}$.
2. (2)
The category $\mathscr{D}_{\mathsf{f}}$ consists of all $\Sigma$-algebras in
$\mathscr{D}$ of finite cardinality in all sorts.
In the case where the arities of operations in $\Sigma$ are bounded, our
present choice of $\mathscr{D}_{\mathsf{f}}$ corresponds well with the
previous one in 5.1: choosing the set $\mathcal{S}$ as in 5.2(2), a
$\Sigma$-algebra $D$ has finite cardinality iff the set of all morphisms from
$s$ to $D$ (for $s\in\mathcal{S}$) is finite.
###### Notation 5.7.
For the profinite monad ${\widehat{\mathbf{T}}}$ of 4.5 we denote by
$U\colon(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{{\widehat{\mathbf{T}}}}\to\mathbf{Set}^{\mathcal{S}}$
the forgetful functor that assigns to a ${\widehat{\mathbf{T}}}$-algebra
$(A,\alpha)$ the underlying $\mathcal{S}$-sorted set of $A$.
Recall from 2.12 that $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is a
full subcategory of ${\mathbf{Stone}}(\mathop{\mathbf{Alg}}\Sigma)$, the
category of Stone $\Sigma$-algebras and continuous homomorphisms, closed under
limits. From 3.20 and 3.18, we get the following
###### Lemma 5.8.
The factorization system $(\mathsf{StrongEpi},\mathsf{Mono})$ on $\mathscr{D}$
is profinite and yields the factorization system on
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ given by
$\displaystyle\widehat{\mathcal{E}}$ $\displaystyle=\text{continuous
homomorphisms surjective in every sort, and}$
$\displaystyle\widehat{\mathcal{M}}$ $\displaystyle=\text{continuous
homomorphisms injective in every sort.}$
###### Notation 5.9.
Let $X$ be a finite $\mathcal{S}$-sorted set of variables.
1. (1)
Denote by
$F_{\Sigma}X$
the free $\Sigma$-algebra of _terms_. It is carried by the smallest
$\mathcal{S}$-sorted set containing $X$ and such that for every operation
symbol $\sigma\colon s_{1},\ldots,s_{n}\to s$ and every $n$-tuple of terms
$t_{i}$ of sorts $s_{i}$ we have a term
$\sigma(t_{1},\ldots,t_{n})\quad\text{of sort $s$}.$
2. (2)
For the reflection $(F_{\Sigma}X)^{@}$, the free object of $\mathscr{D}$ on
$X$, we put
$X^{\oplus}=\widehat{(F_{\Sigma}X)^{@}}.$
This is a free object of $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ on
$X$, see 3.13.
3. (3)
Let $(A,\alpha)$ be a finite $\mathbf{T}$-algebra. An _interpretation_ of the
given variables in $(A,\alpha)$ is an $\mathcal{S}$-sorted function $f$ from $X$ to the
underlying sorted set $U(A,\alpha)$. We denote by
$f^{@}\colon(F_{\Sigma}X)^{@}\to A$
the corresponding morphism of $\mathscr{D}$. It extends to a unique
homomorphism of ${\widehat{\mathbf{T}}}$-algebras (since $(A,\alpha^{+})$ is a
${\widehat{\mathbf{T}}}$-algebra by 4.16) that we denote by
$f^{\oplus}\colon\left(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}}\right)\to(A,\alpha^{+}).$
###### Definition 5.10.
A _profinite term_ over a finite $\mathcal{S}$-sorted set $X$ (of variables) is an
element of $\widehat{T}X^{\oplus}$.
###### Example 5.11.
Let $\mathscr{D}=\mathbf{Set}$ and $TX=X^{*}$ be the monoid monad. For every
finite set $X=X^{@}$ we have that $\widehat{T}X^{\oplus}$ is the set of
profinite words over $X$ (see 4.11).
###### Definition 5.12.
Let $t_{1},t_{2}$ be profinite terms of the same sort in
$\widehat{T}X^{\oplus}$. A finite $\mathbf{T}$-algebra is said to _satisfy the
equation_ $t_{1}=t_{2}$ provided that for every interpretation $f$ of $X$ we
have $f^{\oplus}(t_{1})=f^{\oplus}(t_{2})$.
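As an illustration of this definition in the setting of 5.11 ($\mathscr{D}=\mathbf{Set}$, the monoid monad): a finite monoid satisfies the profinite equation $x^{\omega}=x^{\omega}x$ iff it is aperiodic. The following is a minimal Python sketch (our own encoding of a finite monoid by its multiplication; the helper `omega` computes the unique idempotent power), checking satisfaction over all interpretations of the single variable $x$:

```python
def omega(x, mul):
    """The unique idempotent power x^omega of x in a finite monoid."""
    p = x
    while mul(p, p) != p:   # terminates: some power of x is idempotent
        p = mul(p, x)
    return p

def satisfies(elements, mul):
    """Does the monoid satisfy the profinite equation x^omega = x^omega * x?
    An interpretation of the single variable x is just an element."""
    return all(omega(x, mul) == mul(omega(x, mul), x) for x in elements)

# The two-element multiplicative monoid {0, 1} is aperiodic ...
assert satisfies(range(2), lambda a, b: a * b)
# ... while the cyclic group Z_3 is not.
assert not satisfies(range(3), lambda a, b: (a + b) % 3)
```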
###### Remark 5.13.
In order to distinguish equations being pairs of profinite terms according to
5.12 from equations being quotients according to 4.19, we shall sometimes call
the latter _equation morphisms_.
###### Theorem 5.14 (Generalized Reiterman Theorem for Monads on
$\Sigma$-algebras).
Let $\mathscr{D}$ be a full reflective subcategory of
$\mathop{\mathbf{Alg}}\Sigma$ closed under
$(\mathsf{StrongEpi},\mathsf{Mono})$-factorizations, and let $\mathbf{T}$ be a
monad on $\mathscr{D}$ preserving strong epimorphisms. Then a collection of
finite $\mathbf{T}$-algebras is a pseudovariety iff it can be presented by
equations between profinite terms.
###### Proof.
1. (1)
We first verify that all assumptions needed for applying 4.20 and Remark 4.21
are satisfied. Put
$\mathsf{Var}\coloneqq\\{\,(F_{\Sigma}X)^{@}\mid\text{$X$ a finite
$\mathcal{S}$-sorted set}\,\\},$
the set of all free objects of $\mathscr{D}$ on finitely many generators. We
know from 5.8 that the factorization system
$(\mathsf{StrongEpi},\mathsf{Mono})$ is profinite.
1. (1a)
Every object $(F_{\Sigma}X)^{@}$ of $\mathsf{Var}$ is projective w.r.t. strong
epimorphisms. Indeed, given a strong epimorphism $e\colon A\twoheadrightarrow
B$ in $\mathscr{D}$, it is a strong epimorphism in
$\mathop{\mathbf{Alg}}\Sigma$, i.e. $e$ has a splitting $i\colon
B\rightarrowtail A$ in $\mathbf{Set}^{\mathcal{S}}$ with $e\cdot
i=\mathsf{id}$. For every morphism $f\colon(F_{\Sigma}X)^{@}\to B$ of
$\mathscr{D}$ we are to prove that $f$ factorizes through $e$. The
$\mathcal{S}$-sorted function $X\to A$ which is the domain-restriction of
$i\cdot f\colon(F_{\Sigma}X)^{@}\to A$ has a unique extension to a morphism
$g\colon(F_{\Sigma}X)^{@}\to A$ of $\mathscr{D}$. It is easy to see that
$e\cdot i=\mathsf{id}$ implies $e\cdot g=f$, as required.
2. (1b)
Every object $D\in\mathscr{D}_{\mathsf{f}}$ is a strong quotient
$e\colon(F_{\Sigma}X)^{@}\twoheadrightarrow D$ of some $(F_{\Sigma}X)^{@}$ in
$\mathsf{Var}$. Indeed, let $X$ be the underlying set of $D$. Then the
underlying function of $\mathsf{id}\colon X\to D$ is a split epimorphism in
$\mathbf{Set}^{\mathcal{S}}$, hence,
$\mathsf{id}^{@}\colon(F_{\Sigma}X)^{@}\twoheadrightarrow D$ is a strong
epimorphism by (Adámek et al., 2009, Prop. 14.11).
2. (2)
By applying 4.20 and Remark 4.21, all we need to prove is that the
presentation of finite $\mathbf{T}$-algebras by equation morphisms
$e\colon(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}})\twoheadrightarrow(A,\alpha),\quad\text{$X$
finite and $e$ strongly epic},$
is equivalent to their presentation by equations between profinite terms.
1. (2a)
Let $\mathcal{V}$ be a collection in $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$
presented by equations $t_{i}=t_{i}^{\prime}$ in $\widehat{T}X^{\oplus}_{i}$,
$i\in I$. Using 4.20, we just need to prove that $\mathcal{V}$ is a
pseudovariety:
1. (i)
Closure under finite products $\prod_{k\in K}(A_{k},\alpha_{k})$: Let $f$ be
an interpretation of $X_{i}$ in the product. Then we have $f=\langle
f_{k}\rangle_{k\in K}$ for interpretations $f_{k}$ of $X_{i}$ in
$(A_{k},\alpha_{k})$. By assumption
$f_{k}^{\oplus}(t_{i})=f_{k}^{\oplus}(t_{i}^{\prime})$ for every $k\in K$.
Since the forgetful functor from ${\widehat{\mathbf{T}}}$-algebras to
$\mathbf{Set}^{\mathcal{S}}$ preserves products, we have $f^{\oplus}=\langle
f_{k}^{\oplus}\rangle_{k\in K}$, hence
$f^{\oplus}(t_{i})=f^{\oplus}(t_{i}^{\prime})$.
2. (ii)
Closure under subobjects $m\colon(A,\alpha)\rightarrowtail(B,\beta)$: Let $f$
be an interpretation of $X_{i}$ in $(A,\alpha)$. Then $g=(Um)\cdot f$ is an
interpretation in $(B,\beta)$, thus
$g^{\oplus}(t_{i})=g^{\oplus}(t_{i}^{\prime})$. Since $m$ is a homomorphism of
${\widehat{\mathbf{T}}}$-algebras, we have $g^{\oplus}=m\cdot f^{\oplus}$.
Moreover, $m$ is monic in every sort, whence
$f^{\oplus}(t_{i})=f^{\oplus}(t_{i}^{\prime})$.
3. (iii)
Closure under quotients $e\colon(B,\beta)\twoheadrightarrow(A,\alpha)$: Let
$f$ be an interpretation of $X_{i}$ in $A$. Since $Ue$ is a split epimorphism
in $\mathbf{Set}^{\mathcal{S}}$, we can choose $m\colon UA\to UB$ with
$(Ue)\cdot m=\mathsf{id}$. Then $g=m\cdot f$ is an interpretation of $X_{i}$
in $(B,\beta)$, thus, $g^{\oplus}(t_{i})=g^{\oplus}(t_{i}^{\prime})$. Since
$e$ is a homomorphism of ${\widehat{\mathbf{T}}}$-algebras, we have
$e\cdot g^{\oplus}=(Ue\cdot g)^{\oplus}=(Ue\cdot m\cdot
f)^{\oplus}=f^{\oplus}.$
Using this, we obtain $f^{\oplus}(t_{i})=f^{\oplus}(t_{i}^{\prime})$.
2. (2b)
For every equation morphism
$e\colon(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}})\twoheadrightarrow(A,\alpha)$
we consider the set of all profinite equations $t=t^{\prime}$ where
$t,t^{\prime}\in\widehat{T}X^{\oplus}$ have the same sort and fulfil
$e(t)=e(t^{\prime})$. We prove that given a finite algebra $(B,\beta)$, it
satisfies $e$ iff it satisfies all of those equations.
1. (i)
Let $(B,\beta)$ satisfy $e$ and let $f$ be an interpretation of $X$ in it.
Then the homomorphism $f^{\oplus}$ factorizes through $e$:
$\begin{array}{ccc}(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}})&\xrightarrow{\;f^{\oplus}\;}&(B,\beta)\\{\scriptstyle e}\big\downarrow&{\overset{h}{\nearrow}}&\\(A,\alpha)&&\end{array}$
Thus, $f^{\oplus}(t)=f^{\oplus}(t^{\prime})$ whenever $e(t)=e(t^{\prime})$, as
required.
2. (ii)
Let $(B,\beta)$ satisfy the given equations $t=t^{\prime}$. We prove that
every homomorphism
$h\colon(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}})\to(B,\beta)$
factorizes through the given $e$, which lies in
$(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{{\widehat{\mathbf{T}}}}$. We
clearly have
$h=f^{\oplus}$
for the interpretation $f\colon X\to U(B,\beta)$ obtained by the domain-
restriction of $Uh$. Consequently, for all
$t,t^{\prime}\in\widehat{T}X^{\oplus}$ of the same sort, we know that
$e(t)=e(t^{\prime})\quad\text{implies}\quad h(t)=h(t^{\prime}).$
This tells us precisely that $Uh$ factorizes in $\mathbf{Set}^{\mathcal{S}}$
through $Ue$:
$\begin{array}{ccc}U(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}})&\xrightarrow{\;Uh\;}&U(B,\beta)\\{\scriptstyle Ue}\big\downarrow&{\overset{k}{\nearrow}}&\\U(A,\alpha)&&\end{array}$
It remains to prove that $k$ is a homomorphism of
${\widehat{\mathbf{T}}}$-algebras. Firstly, $k$ preserves the operations of
$\Sigma$ and is thus a morphism $k\colon A\to B$ in $\mathscr{D}$. This
follows from $Ue$ being epic in $\mathbf{Set}^{\mathcal{S}}$: given
$\sigma\colon s_{1},\ldots,s_{n}\to s$ in $\Sigma$ and elements $x_{i}$ of
sort $s_{i}$ in $A$, choose $y_{i}$ of sort $s_{i}$ in
$U(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}})$ with $Ue(y_{i})=x_{i}$.
Using that $e$ and $h$ are $\Sigma$-homomorphisms, we obtain the desired
equation
$k(\sigma_{A}(x_{i}))=k(\sigma_{A}(Ue(y_{i})))=k\cdot Ue(\sigma(y_{i}))=Uh(\sigma(y_{i}))=\sigma_{B}(Uh(y_{i}))=\sigma_{B}(k(x_{i})).$
Moreover, $\widehat{T}e$ is epic by 5.8. In the following diagram
$\begin{array}{ccc}\widehat{T}\widehat{T}X^{\oplus}&\xrightarrow{\;\widehat{\mu}_{X^{\oplus}}\;}&\widehat{T}X^{\oplus}\\{\scriptstyle\widehat{T}e}\big\downarrow&&\big\downarrow{\scriptstyle e}\\\widehat{T}A&\xrightarrow{\;\alpha\;}&A\\{\scriptstyle\widehat{T}k}\big\downarrow&&\big\downarrow{\scriptstyle k}\\\widehat{T}B&\xrightarrow{\;\beta\;}&B\end{array}$
(together with the outer arrows $\widehat{T}h\colon\widehat{T}\widehat{T}X^{\oplus}\to\widehat{T}B$ on the left and $h\colon\widehat{T}X^{\oplus}\to B$ on the right)
the outside and upper square commute because $h$ and $e$ are a homomorphism of
${\widehat{\mathbf{T}}}$-algebras, respectively, and the left hand and right
hand parts commute because $k\cdot e=h$. Since $\widehat{T}e$ is epic, it
follows that the lower square also commutes.∎
###### Remark 5.15.
We now show that profinite terms are just another view of the implicit
operations that Reiterman used in his paper (Reiterman, 1982). We start with a
one-sorted signature $\Sigma$ (for notational simplicity) and then return to
the general case. We denote by
$W\colon\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}\to\mathbf{Set}$
the forgetful functor assigning to every finite algebra $(A,\alpha)$ the
underlying set $A$.
###### Definition 5.16.
An $n$-ary _implicit operation_ is a natural transformation $\varrho\colon
W^{n}\to W$ for $n\in\mathbb{N}$. Thus if
$U\colon\mathscr{D}_{\mathsf{f}}\to\mathbf{Set}$
denotes the forgetful functor, then $\varrho$ assigns to every finite
$\mathbf{T}$-algebra $(A,\alpha)$ an $n$-ary operation on $UA$ such that every
homomorphism in $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ preserves that
operation.
For the case of finitary $\Sigma$-algebras, i.e. finitary monads $\mathbf{T}$
on $\mathbf{Set}$, the above concept is due to Reiterman (Reiterman, 1982,
Sec. 2).
###### Example 5.17.
Let $\mathscr{D}=\mathbf{Set}$ and $TX=X^{*}$ be the monoid monad. Every
element $x$ of a finite monoid $(A,\alpha)$ has a unique idempotent power
$x^{k}$ for some $k>0$, denoted by $x^{\omega}$. Since monoid morphisms
preserve idempotent powers, this yields a unary implicit operation $\varrho$
with components $\varrho_{(A,\alpha)}\colon x\mapsto x^{\omega}$.
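As a concrete illustration (a toy computation, not from the paper): in a finite monoid the powers of an element eventually cycle, and exactly one element of that cycle is idempotent. A minimal sketch in the multiplicative monoid $\mathbb{Z}/10$:

```python
def idempotent_power(x, mul):
    """Return x^omega, the unique idempotent power of x in a finite monoid
    with multiplication mul."""
    seen, p = [], x
    while p not in seen:   # the powers x, x^2, x^3, ... eventually cycle
        seen.append(p)
        p = mul(p, x)
    for q in seen:         # scan the computed powers for the idempotent one
        if mul(q, q) == q:
            return q
    raise ValueError("no idempotent power; is the monoid finite?")

# In (Z/10, *) the powers of 2 are 2, 4, 8, 6, 2, ... and 6*6 = 36 = 6 (mod 10).
mul10 = lambda a, b: (a * b) % 10
print(idempotent_power(2, mul10))  # 6
```

Uniqueness of the idempotent power is what makes the notation $x^{\omega}$ well defined.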
###### Notation 5.18.
Consider $n$ as the set $\\{0,\ldots,n-1\\}$. Every profinite term
$t\in\widehat{T}n^{\oplus}$ defines an $n$-ary implicit operation
$\varrho_{t}$ as follows: Given a finite $\mathbf{T}$-algebra $(A,\alpha)$ and
an $n$-tuple $f\colon n\to UA$, we get the homomorphism
$f^{\oplus}\colon(\widehat{T}n^{\oplus},\widehat{\mu}_{n^{\oplus}})\to(A,\alpha)$,
and $\varrho_{t}$ assigns to $f$ the value
$\varrho_{t}(f)=f^{\oplus}(t).$
The naturality of $\varrho_{t}$ is easy to verify.
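Indeed, naturality amounts to homomorphisms commuting with the assigned operations. For the unary implicit operation $x\mapsto x^{\omega}$ of Example 5.17 this can even be checked mechanically in a toy case (a hedged sketch, not from the paper):

```python
def omega(x, mul):
    """x^omega: the unique idempotent power of x in a finite monoid."""
    seen, p = [], x
    while p not in seen:
        seen.append(p)
        p = mul(p, x)
    return next(q for q in seen if mul(q, q) == q)

# Naturality means that monoid homomorphisms commute with the operation.
# h: (Z/10, *) -> (Z/5, *), h(x) = x mod 5, is a monoid homomorphism,
# and indeed h(x^omega) = h(x)^omega for every x.
mul10 = lambda a, b: (a * b) % 10
mul5 = lambda a, b: (a * b) % 5
h = lambda x: x % 5

checks = [h(omega(x, mul10)) == omega(h(x), mul5) for x in range(10)]
print(all(checks))  # True
```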
###### Lemma 5.19.
Implicit $n$-ary operations correspond bijectively to profinite terms in
$\widehat{T}n^{\oplus}$ via $t\mapsto\varrho_{t}$.
###### Proof.
Recall from 2.12 that $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is a
full subcategory of ${\mathbf{Stone}}(\mathop{\mathbf{Alg}}\Sigma)$ closed
under limits. The forgetful functor of the latter preserves limits, hence, so
does the forgetful functor
$\overline{U}\colon\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}\to\mathbf{Set}$.
Recall further from 4.6 that
$\widehat{T}n^{\oplus}=\lim Q_{n^{\oplus}}$
where $Q_{n^{\oplus}}\colon
n^{\oplus}/K\to\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is the diagram
of all morphisms
$a\colon n^{\oplus}\to K(A,\alpha)=A\quad\text{of
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$}.$
Thus, profinite terms $t\in\widehat{T}n^{\oplus}$ are elements of the limit of
$\overline{U}\cdot Q_{n^{\oplus}}\colon n^{\oplus}/K\to\mathbf{Set}.$
By the well-known description of limits in $\mathbf{Set}$, to give $t$ means
to give a compatible collection of elements of $UA$, i.e. for every
$n^{\oplus}\xrightarrow{a}K(A,\alpha)$ one gives $t_{a}\in UA$ such that for
every morphism of $n^{\oplus}/{K}$:
$n^{\oplus}\xrightarrow{\;a\;}K(A,\alpha)\xrightarrow{\;Kh\;}K(B,\beta),\qquad b=Kh\cdot a,$
we have $Uh(t_{a})=t_{b}$.
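The compatible-family description of limits in $\mathbf{Set}$ can be tried out concretely. The following toy sketch (purely illustrative, not the paper's diagram $Q_{n^{\oplus}}$) computes the limit of a short chain of finite sets as the set of compatible tuples:

```python
from itertools import product

def limit_of_chain(sets, maps):
    """Limit of a chain S_0 <- S_1 <- ... <- S_k in Set.

    sets[i] is the i-th finite set, maps[i] the connecting function
    S_{i+1} -> S_i.  An element of the limit is a compatible family:
    a tuple (t_0, ..., t_k) with maps[i](t_{i+1}) = t_i for all i.
    """
    return [t for t in product(*sets)
            if all(maps[i](t[i + 1]) == t[i] for i in range(len(maps)))]

# Chain Z/2 <- Z/4 <- Z/8 with the reduction maps: each element of the
# limit is determined by its finest component, so the limit has 8 elements.
sets = [range(2), range(4), range(8)]
maps = [lambda x: x % 2, lambda x: x % 4]
lim = limit_of_chain(sets, maps)
print(len(lim))  # 8
```

This is of course only a finite analogue: the paper's limit ranges over the cofiltered diagram of all morphisms into finite algebras.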
Now observe that an object of $n^{\oplus}/K$ is precisely a finite
$\mathbf{T}$-algebra $(A,\alpha)$ together with an $n$-tuple $a_{0}$ of
elements of $UA$. Thus, the given collection $a\mapsto t_{a}$ is precisely an
$n$-ary operation on $UA$ for every finite algebra $(A,\alpha)$. Moreover, the
compatibility means precisely that every homomorphism
$h\colon(A,\alpha)\to(B,\beta)$ of finite $\mathbf{T}$-algebras preserves that
operation. Thus, $\widehat{T}n^{\oplus}$ consists of precisely the $n$-ary
implicit operations. Finally, it is easy to see that the resulting operation
is $\varrho_{t}$ of 5.18 for every $t\in\widehat{T}n^{\oplus}$. ∎
###### Remark 5.20.
1. (1)
For $\mathcal{S}$-sorted signatures this is completely analogous. Let
$W^{s}\colon\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}\to\mathbf{Set}$ assign to
every finite $\mathbf{T}$-algebra $(A,\alpha)$ the component of sort $s$ of
the underlying $\mathcal{S}$-sorted set $UA$. An _implicit operation_ of arity
$s_{1},\ldots,s_{n}\to s$
is a natural transformation
$\varrho\colon W^{s_{1}}\times\dots\times W^{s_{n}}\to W^{s}.$
Thus $\varrho$ assigns to every finite $\mathbf{T}$-algebra $(A,\alpha)$ an
operation
$\varrho_{(A,\alpha)}\colon{UA}^{s_{1}}\times\dots\times UA^{s_{n}}\to UA^{s}$
that all homomorphisms in $\mathscr{D}_{\mathsf{f}}^{\mathbf{T}}$ preserve.
2. (2)
Recall that we identify every natural number $n$ with the set
$\\{0,\ldots,n-1\\}$. For every arity $s_{1},\ldots,s_{n}\to s$ we choose a
finite $\mathcal{S}$-sorted set $X$ such that for every sort $t$ we have
$X^{t}=\\{\,i\in\\{1,\ldots,n\\}\mid t=s_{i}\,\\}.$
Then for every finite $\mathbf{T}$-algebra $(A,\alpha)$, to give an $n$-tuple
$a_{i}\in A_{s_{i}}$ is the same as to give an $\mathcal{S}$-sorted function
$f\colon X\to UA$.
3. (3)
5.18 has the following generalization: given a profinite term
$t\in\widehat{T}X^{\oplus}$ over $X$ of sort $s$, we define an implicit
operation $\varrho_{t}\colon s_{1},\ldots,s_{n}\to s$ by its components at all
finite $\mathbf{T}$-algebras $(A,\alpha)$ as follows:
$\varrho_{t}(f)=f^{\oplus}(t)\quad\text{for all $f\colon X\to UA$}.$
This yields a bijection between $\widehat{T}X^{\oplus}$ and implicit
operations of arity $s_{1},\ldots,s_{n}\to s$ for $X$ in (2). The proof is
completely analogous to that of 5.19.
###### Definition 5.21.
Let $\varrho$ and $\varrho^{\prime}$ be implicit operations of the same arity.
A finite algebra $(A,\alpha)$ _satisfies the equation_
$\varrho=\varrho^{\prime}$ if their components $\varrho_{(A,\alpha)}$ and
$\varrho_{(A,\alpha)}^{\prime}$ coincide.
The above formula $\varrho_{t}(f)=f^{\oplus}(t)$ shows that given profinite
terms $t,t^{\prime}\in\widehat{T}X^{\oplus}$ of the same sort, a finite
algebra satisfies the profinite equation $t=t^{\prime}$ if and only if it
satisfies the implicit equation $\varrho_{t}=\varrho_{t^{\prime}}$.
Consequently:
###### Corollary 5.22.
Under the hypotheses of 5.14, a collection of finite $\mathbf{T}$-algebras is
a pseudovariety iff it can be presented by equations between implicit
operations.
## 6. Profinite Inequations
Whereas for varieties $\mathscr{D}$ of algebras the equation morphisms in the
Reiterman Theorem 4.20 can be substituted by equations $t=t^{\prime}$ between
profinite terms, this does not hold for varieties $\mathscr{D}$ of ordered
algebras (i.e. classes of ordered $\Sigma$-algebras specified by inequations
$t\leq t^{\prime}$ between terms). The problem is that $\mathbf{Pos}$ does not
have a dense set of objects projective w.r.t. strong epimorphisms. Indeed,
only discrete posets are projective w.r.t. the regular epimorphism carrying the discrete two-element poset onto the two-element chain.
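This projectivity claim can be checked mechanically in a toy computation (assuming the standard witnessing epimorphism: the identity-on-elements map from the discrete two-element poset onto the two-element chain):

```python
from itertools import product

def monotone_maps(src, tgt):
    """Enumerate all monotone maps between finite posets (elements, leq)."""
    elems_s, leq_s = src
    elems_t, leq_t = tgt
    for values in product(elems_t, repeat=len(elems_s)):
        f = dict(zip(elems_s, values))
        if all(leq_t(f[x], f[y])
               for x in elems_s for y in elems_s if leq_s(x, y)):
            yield f

# e: discrete {0,1} -> chain {0 <= 1}, identity on elements, is a surjective
# monotone map.  The two-element chain is not projective w.r.t. e: the
# identity of the chain admits no monotone lift along e.
discrete = ([0, 1], lambda x, y: x == y)
chain = ([0, 1], lambda x, y: x <= y)
e = lambda x: x  # underlying function of the epimorphism

lifts = [g for g in monotone_maps(chain, discrete)
         if all(e(g[x]) == x for x in [0, 1])]  # require e . g = id
print(lifts)  # [] : no monotone lift exists
```

Conversely, every monotone map out of a discrete poset lifts trivially, which is the sense in which only discrete posets are projective.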
We are going to show that for $\mathscr{D}=\mathbf{Pos}$ (and more generally
varieties $\mathscr{D}$ of ordered algebras) a change of the factorization
system from $(\mathsf{StrongEpi},\mathsf{Mono})$ to (surjective, order-
reflecting) enables us to apply the results of Section 4 to the proof that
pseudovarieties of finite ordered $\mathbf{T}$-algebras are presentable by
inequations between profinite terms. This generalizes results of Pin and Weil
(Pin and Weil, 1996) who proved a version of Reiterman’s theorem (without
monads) for ordered algebras, in fact, for general first-order structures. We
begin with monads on $\mathbf{Pos}$, and then show how this yields results for
monads on varieties $\mathscr{D}$ of ordered algebras.
###### Notation 6.1.
Given an $\mathcal{S}$-sorted signature $\Sigma$ of operation symbols, let
$\Sigma_{\leq}$ denote the $\mathcal{S}$-sorted first-order signature with
operation symbols $\Sigma$ and a binary relation symbol $\leq_{s}$ for every
$s\in\mathcal{S}$. Moreover, let
$\mathop{\mathbf{Alg}}\Sigma_{\leq}$
be the full subcategory of $\Sigma_{\leq}\text{-}\mathbf{Str}$ for which
$\leq_{s}$ is interpreted as a partial order on the sort $s$ for every
$s\in\mathcal{S}$, and moreover every $\Sigma$-operation is monotone w.r.t.
these orders. Thus, objects are ordered $\Sigma$-algebras, morphisms are
monotone $\Sigma$-homomorphisms. Recall from Remark 2.15 our factorization
system with
$\mathcal{E}=\text{morphisms surjective in all sorts}\qquad\text{and}\qquad\mathcal{M}=\text{morphisms order-reflecting in all sorts}.$
Thus a $\Sigma$-homomorphism $m$ lies in $\mathcal{M}$ iff for all $x,y$ in
the same sort of its domain we have $x\leq y$ iff $m(x)\leq m(y)$. The notion
of a subcategory $\mathscr{D}$ of $\mathop{\mathbf{Alg}}\Sigma_{\leq}$ being
closed under factorizations is analogous to 5.3.
###### Assumption 6.2.
Throughout this section, $\mathscr{D}$ denotes a full reflective subcategory
of $\mathop{\mathbf{Alg}}\Sigma_{\leq}$ closed under factorizations. Moreover,
$\mathscr{D}_{\mathsf{f}}$ is the full subcategory of $\mathscr{D}$ given by
all algebras which are finite in every sort.
Thus, every variety of ordered algebras (presented by inequations $t\leq
t^{\prime}$ between terms) can serve as $\mathscr{D}$, as well as every
quasivariety (presented by implications between inequations).
###### Remark 6.3.
1. (1)
Recall from 2.12 that $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is a
full subcategory of ${\mathbf{Stone}}(\mathop{\mathbf{Alg}}\Sigma_{\leq})$,
the category of ordered Stone $\Sigma$-algebras.
2. (2)
The factorization system on $\mathscr{D}$ inherited from
$\mathop{\mathbf{Alg}}\Sigma_{\leq}$ is profinite, see 3.20. Moreover, the
induced factorization system $\widehat{\mathcal{E}}$ and
$\widehat{\mathcal{M}}$ of $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ is
given by the surjective and order-reflecting morphisms of
$\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$, respectively (see 3.18).
###### Notation 6.4.
1. (1)
We again denote by
$(-)^{@}\colon\mathop{\mathbf{Alg}}\Sigma_{\leq}\to\mathscr{D}$ the reflector.
2. (2)
For every finite $\mathcal{S}$-sorted set $X$ we have the free algebra
$F_{\Sigma}X$ (discretely ordered).
3. (3)
The free object of $\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}}$ on a sorted
set $X$ is again denoted by $X^{\oplus}$ (in lieu of
$\widehat{(F_{\Sigma}X)^{@}}$). For every finite $\mathbf{T}$-algebra
$(A,\alpha)$, given an interpretation $f$ of $X$ in $(A,\alpha)$, we obtain a
homomorphism
$f^{\oplus}\colon(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}})\to(A,\alpha).$
###### Definition 6.5.
By a _profinite term_ on a finite $\mathcal{S}$-sorted set $X$ of variables is
meant an element of $\widehat{T}X^{\oplus}$.
Given profinite terms $t_{1},t_{2}$ of the same sort $s$, a finite
$\mathbf{T}$-algebra $(A,\alpha)$ is said to _satisfy the inequation_
$t_{1}\leq t_{2}$
provided that for every interpretation $f$ of $X$ we have
$f^{\oplus}(t_{1})\leq f^{\oplus}(t_{2})$.
###### Theorem 6.6.
Let $\mathscr{D}$ be a full reflective subcategory of
$\mathop{\mathbf{Alg}}\Sigma_{\leq}$ closed under factorizations, and let
$\mathbf{T}$ be a monad on $\mathscr{D}$ preserving sortwise surjective
morphisms. Then a collection of finite $\mathbf{T}$-algebras is a
pseudovariety iff it can be presented by inequations between profinite terms.
###### Proof.
In complete analogy to the proof of 5.14, we put
$\mathsf{Var}=\\{\,(F_{\Sigma}X)^{@}\mid X\text{ a finite $\mathcal{S}$-sorted
set}\,\\}$
and observe that 4.20 and Remark 4.21 can be applied.
1. (1)
If $\mathcal{V}$ is a collection of finite $\mathbf{T}$-algebras presented by
inequations $t_{i}\leq t_{i}^{\prime}$, we need to verify that $\mathcal{V}$
is a pseudovariety. This is analogous to the proof of 5.14; in part (2) we use
that $m$ reflects the relation symbols $\leq_{s}$, hence from $m\cdot
f^{\oplus}(t_{i})\leq_{s}m\cdot f^{\oplus}(t_{i}^{\prime})$ we derive
$f^{\oplus}(t_{i})\leq_{s}f^{\oplus}(t_{i}^{\prime})$.
2. (2)
Given an equation morphism
$e\colon(\widehat{T}X^{\oplus},\widehat{\mu}_{X^{\oplus}})\twoheadrightarrow(A,\alpha)$,
consider all inequations $t\leq_{s}t^{\prime}$ where $t$ and $t^{\prime}$ are
profinite terms of sort $s$ with $Ue(t)\leq Ue(t^{\prime})$ in $A$. We verify
that a finite ${\widehat{\mathbf{T}}}$-algebra $(B,\beta)$ satisfies those
inequations iff it satisfies $e$. This is again completely analogous to the
corresponding argument in the proof of 5.14; just at the end we need to
verify, additionally, that
$\text{$x\leq_{s}x^{\prime}$ in
$A$}\qquad\text{implies}\qquad\text{$k(x)\leq_{s}k(x^{\prime})$ in $B$}.$
Denote by
$U\colon(\mathop{\mathsf{Pro}}\mathscr{D}_{\mathsf{f}})^{\widehat{\mathbf{T}}}\to\mathbf{Pos}^{\mathcal{S}}$
the forgetful functor. Since $Ue$ has surjective components, we have terms
$t,t^{\prime}$ in $\widehat{T}X^{\oplus}$ of sort $s$ with $x=Ue(t)$ and
$x^{\prime}=Ue(t^{\prime})$, thus $t\leq t^{\prime}$ is one of the above
inequations. The algebra $(B,\beta)$ satisfies $t\leq t^{\prime}$ and (like in
5.14) we get $h=f^{\oplus}$, hence $Uh(t)\leq Uh(t^{\prime})$. From $Uh=k\cdot
Ue$, this yields $k(x)\leq_{s}k(x^{\prime})$.∎
###### Remark 6.7.
In particular, if $\mathscr{D}$ is a variety of ordered one-sorted
$\Sigma$-algebras and $\mathbf{T}$ a monad preserving surjective morphisms,
pseudovarieties of $\mathbf{T}$-algebras can be described by inequations
between profinite terms. This generalizes the result of Pin and Weil (Pin and
Weil, 1996). In fact, these authors consider pseudovarieties of general first-
order structures, which can be treated within our categorical framework
completely analogously to the case of ordered algebras.
## References
  * Adámek (1977) Jiří Adámek. 1977. Colimits of algebras revisited. _Bull. Aust. Math. Soc._ 17, 3 (1977), 433–450.
  * Adámek et al. (2009) Jiří Adámek, Horst Herrlich, and George E. Strecker. 2009. _Abstract and Concrete Categories: The Joy of Cats_ (2nd ed.). Dover Publications.
  * Adámek and Rosický (1994) Jiří Adámek and Jiří Rosický. 1994. _Locally Presentable and Accessible Categories_. Cambridge University Press. 332 pages.
  * Adámek et al. (2011) Jiří Adámek, Jiří Rosický, and Enrico Vitale. 2011. _Algebraic Theories_. Cambridge University Press.
  * Banaschewski and Herrlich (1976) Bernhard Banaschewski and Horst Herrlich. 1976. Subcategories defined by implications. _Houston Journal of Mathematics_ 2 (1976), 149–171.
  * Behle et al. (2011) Christoph Behle, Andreas Krebs, and Stephanie Reifferscheid. 2011. Typed Monoids – An Eilenberg-Like Theorem for Non Regular Languages. In _Algebraic Informatics – 4th International Conference, CAI 2011, Linz, Austria, June 21-24, 2011. Proceedings_ _(Lecture Notes in Computer Science, Vol. 6742)_, Franz Winkler (Ed.). Springer, 97–114. https://doi.org/10.1007/978-3-642-21493-6_6
  * Birkhoff (1935) G. Birkhoff. 1935. On the structure of abstract algebras. _Proceedings of the Cambridge Philosophical Society_ 10 (1935), 433–454.
  * Birkmann et al. (2021) Fabian Birkmann, Stefan Milius, and Henning Urbat. 2021. On Language Varieties Without Boolean Operations. In _Proc. 14th-15th International Conference on Language and Automata Theory and Applications (LATA 2020/2021)_, Alberto Leporati, Carlos Martín-Vide, Dana Shapira, and Claudio Zandron (Eds.). Springer. To appear. Full version available at: https://arxiv.org/abs/2011.06951.
  * Bojańczyk (2015) Mikołaj Bojańczyk. 2015. Recognisable languages over monads. In _Proc. DLT_, Igor Potapov (Ed.). LNCS, Vol. 9168. Springer, 1–13. Full version: http://arxiv.org/abs/1502.04898.
  * Chen et al. (2016) L.-T. Chen, J. Adámek, S. Milius, and H. Urbat. 2016. Profinite Monads, Profinite Equations and Reiterman’s Theorem. In _Proc. FoSSaCS’16_ _(Lecture Notes Comput. Sci., Vol. 9634)_, B. Jacobs and C. Löding (Eds.). Springer, 531–547.
  * Eilenberg (1976) Samuel Eilenberg. 1976. _Automata, Languages, and Machines_. Vol. 2. Academic Press, New York.
  * Johnstone (1982) Peter T. Johnstone. 1982. _Stone spaces_. Cambridge University Press. 398 pages.
  * Kennison and Gildenhuys (1971) John F. Kennison and Dion Gildenhuys. 1971. Equational completion, model induced triples and pro-objects. _J. Pure Appl. Algebr._ 1, 4 (1971), 317–346.
  * Klíma and Polák (2019) Ondřej Klíma and Libor Polák. 2019. Syntactic structures of regular languages. _Theoret. Comput. Sci._ 800 (2019), 125–141.
  * Lawvere (1963) F. W. Lawvere. 1963. _Functorial Semantics of Algebraic Theories_. Ph.D. Dissertation. Columbia University.
  * Linton (1969) F. E. J. Linton. 1969. An outline of functorial semantics. In _Semin. Triples Categ. Homol. Theory_, B. Eckmann (Ed.). LNM, Vol. 80. Springer Berlin Heidelberg, 7–52.
  * Mac Lane (1998) Saunders Mac Lane. 1998. _Categories for the working mathematician_ (2nd ed.). Springer.
  * Manes (1976) E. G. Manes. 1976. _Algebraic Theories_. Graduate Texts in Mathematics, Vol. 26. Springer.
  * Milius and Urbat (2019) Stefan Milius and Henning Urbat. 2019. Equational Axiomatization of Algebras with Structure. In _Proc. Foundations of Software Science and Computation Structures (FoSSaCS 2019)_, Mikołaj Bojańczyk and Alex Simpson (Eds.). Springer, 400–417.
  * Pin and Weil (1996) Jean-Éric Pin and Pascal Weil. 1996. A Reiterman theorem for pseudovarieties of finite first-order structures. _Algebra Universalis_ 35 (1996), 577–595.
  * Priestley (1972) Hilary A. Priestley. 1972. Ordered topological spaces and the representation of distributive lattices. _Proc. London Math. Soc._ 3, 3 (1972), 507.
  * Reiterman (1982) Jan Reiterman. 1982. The Birkhoff theorem for finite algebras. _Algebra Universalis_ 14, 1 (1982), 1–10.
  * Ribes and Zalesskii (2010) Luis Ribes and Pavel Zalesskii. 2010. _Profinite Groups_. Springer Berlin Heidelberg.
  * Salamanca (2017) Julian Salamanca. 2017. Unveiling Eilenberg-type Correspondences: Birkhoff’s Theorem for (finite) Algebras + Duality. (February 2017). Preprint.
  * Schützenberger (1965) Marcel Paul Schützenberger. 1965. On finite monoids having only trivial subgroups. _Inform. and Control_ 8 (1965), 190–194.
  * Speed (1972) T. P. Speed. 1972. Profinite posets. _Bull. Austral. Math. Soc._ 6 (1972), 177–183.
  * Street (1972) Ross Street. 1972. The formal theory of monads. _J. Pure Appl. Algebr._ 2, 2 (1972), 149–168.
  * Urbat et al. (2017) Henning Urbat, Jiří Adámek, Liang-Ting Chen, and Stefan Milius. 2017. Eilenberg Theorems for Free. In _Proc. 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)_ _(LIPIcs, Vol. 83)_, Kim G. Larsen, Hans L. Bodlaender, and Jean-François Raskin (Eds.). Schloss Dagstuhl, 43:1–43:14. EATCS Best Paper Award; full version available at https://arxiv.org/abs/1602.05831.
  * Urbat and Milius (2019) Henning Urbat and Stefan Milius. 2019. Varieties of Data Languages. In _Proc. 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)_ _(LIPIcs, Vol. 132)_, Christel Baier, Ioannis Chatzigiannakis, Paola Flocchini, and Stefano Leonardi (Eds.). Schloss Dagstuhl, 130:1–130:14.
## Appendix A Ind- and Pro-Completions
The aim of this appendix is to characterize, for an arbitrary small category
$\mathscr{C}$, the free completion $\mathop{\mathsf{Pro}}\mathscr{C}$ under
cofiltered limits and its dual concept, the free completion
$\mathop{\mathsf{Ind}}\mathscr{C}$ under filtered colimits (see 2.2). Let us
first recall the construction of the latter:
###### Remark A.1.
For any small category $\mathscr{C}$, the ind-completion is given up to
equivalence by the full subcategory $\mathscr{L}$ of the presheaf category
$[\mathscr{C}^{\mathrm{op}},\mathbf{Set}]$ on filtered colimits of
representables, and the Yoneda embedding
$E\colon\mathscr{C}\rightarrowtail\mathscr{L},\quad C\mapsto\mathscr{C}(-,C).$
We usually leave the embedding $E$ implicit and view $\mathscr{C}$ as a full
subcategory of $\mathscr{L}$.
Dually to Remark 2.1, an object $A$ of a category $\mathscr{C}$ is called
_finitely presentable_ if the functor
$\mathscr{C}(A,\mathord{-})\colon\mathscr{C}\to\mathbf{Set}$ is finitary, i.e.
preserves filtered colimits.
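As a toy illustration of this definition (not from the paper): in $\mathbf{Set}$, every finite set is finitely presentable. For a filtered colimit presented as the union of an increasing chain, this means that any map out of a finite set factors through some stage of the chain. A minimal sketch:

```python
def factorization_stage(f, chain):
    """Least index i such that the image of f lies in chain[i].

    chain is an increasing chain of finite sets whose union plays the role
    of a filtered colimit; f is a map (dict) from a finite set into that
    union.  Finiteness of the domain guarantees that some stage works --
    this is finite presentability of a finite set in Set, for chains.
    """
    image = set(f.values())
    for i, stage in enumerate(chain):
        if image <= stage:
            return i
    raise ValueError("f does not factor through any stage")

# Stages {0}, {0,1}, ..., {0,...,9}; a map out of the two-element set
# {'a','b'} factors through the first stage containing both of its values.
chain = [set(range(n + 1)) for n in range(10)]
print(factorization_stage({'a': 3, 'b': 5}, chain))  # 5
```

For all filtered colimits, rather than chains only, this property is exactly condition (3) of Theorem A.4 below specialised to $\mathbf{Set}$.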
###### Definition A.2.
Let $L$ be an object of a category $\mathscr{L}$. Its _canonical diagram_
w.r.t. a full subcategory $\mathscr{C}$ of $\mathscr{L}$ is the diagram
$D^{L}$ of all morphisms from objects of $\mathscr{C}$ to $L$:
$D^{L}\colon\mathscr{C}/L\to\mathscr{L},\quad(C\xrightarrow{c}L)\mapsto C.$
###### Lemma A.3.
Let $\mathscr{C}$ be a full subcategory of $\mathscr{L}$ such that each object
$C\in\mathscr{C}$ is finitely presentable in $\mathscr{L}$. An object $L$ of
$\mathscr{L}$ is a colimit of some filtered diagram in $\mathscr{C}$ if and
only if its canonical diagram is filtered and the canonical cocone
$(C\xrightarrow{c}L)_{c\in\mathscr{C}/L}$ is a colimit.
###### Proof sketch.
The _if_ part is trivial. Conversely, if $L$ is a colimit of some filtered
diagram, then we can view it as a final subdiagram of its canonical diagram.
Therefore, their colimits coincide. ∎
###### Theorem A.4.
Let $\mathscr{C}$ be a small category. A category $\mathscr{L}$ containing
$\mathscr{C}$ as a full subcategory is an ind-completion of $\mathscr{C}$ if
and only if the following conditions hold:
1. (1)
$\mathscr{L}$ has filtered colimits,
2. (2)
every object of $\mathscr{L}$ is the colimit of a filtered diagram in
$\mathscr{C}$, and
3. (3)
every object of $\mathscr{C}$ is finitely presentable in $\mathscr{L}$.
###### Proof.
1. (1)
The _only if_ part follows immediately from the construction of
$\mathop{\mathsf{Ind}}\mathscr{C}$ in Remark A.1: (1) is obvious, (3) follows
from the Yoneda Lemma, and (2) follows from A.3 and the fact that
$\mathscr{C}$ is dense in $[\mathscr{C}^{\mathrm{op}},\mathbf{Set}]$.
2. (2)
We now prove the _if_ part. Suppose that (1)–(3) hold. Let
$F\colon\mathscr{C}\to\mathscr{K}$ be any functor to a category $\mathscr{K}$
with filtered colimits.
1. (2a)
First, define the extension $\overline{F}\colon\mathscr{L}\to\mathscr{K}$ of
$F$ as follows. For any object $L\in\mathscr{L}$ expressed as the canonical
colimit $(C\xrightarrow{c}L)_{c\in\mathscr{C}/L}$, the colimit of $F\\!D^{L}$
exists since the canonical diagram is filtered by condition (2) and
$\mathscr{K}$ has filtered colimits. Thus $\overline{F}$ on objects can be
given by a choice of a colimit:
$\overline{F}L\coloneqq\mathop{\mathsf{colim}}\left(\mathscr{C}/L\xrightarrow{D^{L}}\mathscr{C}\xrightarrow{F}\mathscr{K}\right)$
We choose the colimits such that $\overline{F}L=L$ if $L$ is in $\mathscr{C}$.
For any morphism $f\colon L\to L^{\prime}$, each object $C\xrightarrow{c}L$ of
$\mathscr{C}/L$ gives rise to the colimit injection $\tau^{\prime}_{f\cdot
c}\colon FC\to\overline{F}L^{\prime}$, and these morphisms form a cocone over
$F\\!D^{L}$. Hence, there is a unique morphism
$\overline{F}f\colon\overline{F}L\to\overline{F}L^{\prime}$ such that
$\tau^{\prime}_{f\cdot c}=\overline{F}f\cdot\tau_{c}$ for every $c$. By the
uniqueness of mediating morphisms, $\overline{F}$ preserves identities and
composition.
Therefore, $\overline{F}$ extends $F$.
2. (2b)
Second, we show that $\overline{F}$ is finitary. Observe that $\overline{F}$
is in fact a pointwise left Kan extension of $F$ along the embedding
$E\colon\mathscr{C}\rightarrowtail\mathscr{L}$. By (Mac Lane, 1998, Cor.
X.5.4) we have, equivalently, that for every $L\in\mathscr{L}$ and
$K\in\mathscr{K}$ the following map from $\mathscr{K}(\overline{F}L,K)$ to the
set of natural transformations from $\mathscr{L}(E-,L)$ to $\mathscr{K}(F-,K)$
is a bijection: it assigns to a morphism $f\colon\overline{F}L\to K$ the
natural transformation whose components are
$\left(EC\xrightarrow{c}L\right)\mapsto\left(FC=\overline{F}EC\xrightarrow{\overline{F}c}\overline{F}L\xrightarrow{f}K\right).$
Hence, given any colimit cocone $(A_{i}\to L)_{i\in\mathcal{I}}$ of a filtered
diagram, we have the following chain of isomorphisms, natural in $K$:
$\displaystyle\mathscr{K}(\overline{F}L,K)$
$\displaystyle\cong[\mathscr{C}^{\mathrm{op}},\mathbf{Set}](\mathscr{L}(E-,L),\mathscr{K}(F-,K))$
see above
$\displaystyle\cong[\mathscr{C}^{\mathrm{op}},\mathbf{Set}](\mathop{\mathsf{colim}}_{i}\mathscr{L}(E-,A_{i}),\mathscr{K}(F-,K))$
by condition (3)
$\displaystyle\cong\mathop{\mathsf{lim}}_{i}[\mathscr{C}^{\mathrm{op}},\mathbf{Set}](\mathscr{L}(E-,A_{i}),\mathscr{K}(F-,K))$
$\displaystyle\cong\mathop{\mathsf{lim}}_{i}\mathscr{K}(\overline{F}A_{i},K)$
see above
$\displaystyle\cong\mathscr{K}(\mathop{\mathsf{colim}}\overline{F}A_{i},K)$
Thus, by Yoneda Lemma,
$\mathop{\mathsf{colim}}\overline{F}A_{i}=\overline{F}L$, i.e. $\overline{F}$
is finitary.
3. (2c)
The essential uniqueness of $\overline{F}$ is clear, since this functor is
given by a colimit construction.∎
By dualizing A.4, we obtain an analogous characterization of pro-completions:
###### Corollary A.5.
Let $\mathscr{C}$ be a small category. The pro-completion of $\mathscr{C}$ is
characterized, up to equivalence of categories, as a category $\mathscr{L}$
containing $\mathscr{C}$ as a full subcategory such that
1. (1)
$\mathscr{L}$ has cofiltered limits,
2. (2)
every object of $\mathscr{L}$ is a cofiltered limit of a diagram in
$\mathscr{C}$, and
3. (3)
every object of $\mathscr{C}$ is finitely copresentable in $\mathscr{L}$.
###### Remark A.6.
Let $\mathscr{C}$ be a small category.
1. (1)
$\mathop{\mathsf{Pro}}\mathscr{C}$ is unique up to equivalence.
2. (2)
$\mathop{\mathsf{Pro}}\mathscr{C}$ can be constructed as the full subcategory
of $[\mathscr{C},\mathbf{Set}]^{\mathrm{op}}$ given by all cofiltered limits
of representable functors. The category $\mathscr{C}$ has a full embedding
into $\mathop{\mathsf{Pro}}\mathscr{C}$ via the Yoneda embedding $E\colon
C\mapsto\mathscr{C}(C,\mathord{-})$. This follows from the description of Ind-
completions in Remark A.1 and the fact that
$\mathop{\mathsf{Pro}}\mathscr{C}=(\mathop{\mathsf{Ind}}\mathscr{C}^{\mathrm{op}})^{\mathrm{op}}.$
3. (3)
If the category $\mathscr{C}$ is finitely complete, then
$\mathop{\mathsf{Pro}}\mathscr{C}$ can also be described as the dual of the
category of all functors in
$[\mathscr{C},\mathbf{Set}]$ preserving finite
limits. Again, $E$ is given by the Yoneda embedding. This is dual to (Adámek
and Rosický, 1994, Thm. 1.46). Moreover, it follows that
$\mathop{\mathsf{Pro}}\mathscr{C}$ is complete and cocomplete.
4. (4)
Given a category $\mathscr{K}$ with cofiltered limits, denote by
$[\mathop{\mathsf{Pro}}\mathscr{C},\mathscr{K}]_{\mathrm{cfin}}$ the full
subcategory of $[\mathop{\mathsf{Pro}}\mathscr{C},\mathscr{K}]$ given by
cofinitary functors. Then the pre-composition by $E$ defines an equivalence of
categories
$(\mathord{-})\cdot
E\colon[\mathop{\mathsf{Pro}}\mathscr{C},\mathscr{K}]_{\mathrm{cfin}}\xrightarrow{\leavevmode\nobreak\
\simeq\leavevmode\nobreak\ }[\mathscr{C},\mathscr{K}],$
where the inverse is given by right Kan extension along $E$.
# A compact manifold with infinite-dimensional co-invariant cohomology
Mehdi Nabil
Cadi Ayyad University, Faculty of Sciences Semlalia, Department of
Mathematics, Marrakesh. Morocco
[email protected]
Abstract. Let $M$ be a smooth manifold. When $\Gamma$ is a group acting on $M$
by diffeomorphisms one can define the $\Gamma$-co-invariant cohomology of $M$
to be the cohomology of the complex
$\Omega_{c}(M)_{\Gamma}=\mathrm{span}\\{\omega-\gamma^{*}\omega,\;\omega\in\Omega_{c}(M),\;\gamma\in\Gamma\\}$.
For a Lie algebra $\mathcal{G}$ acting on the manifold $M$, one defines the
cohomology of $\mathcal{G}$-divergence forms to be the cohomology of the
complex
$\mathcal{C}_{\mathcal{G}}(M)=\mathrm{span}\\{L_{X}\omega,\;\omega\in\Omega_{c}(M),\;X\in\mathcal{G}\\}$.
In this short paper we present a situation where these two cohomologies are
infinite dimensional.
Mathematics Subject Classification 2010: 57S15, 14F40.
Keywords: Cohomology, Transformation Groups.
## 1. Introduction
In [1], the authors have introduced the concept of co-invariant cohomology. In
basic terms it is the cohomology of a subcomplex of the de Rham complex
generated by the action of a group on a smooth manifold. The authors showed
that under nice enough hypotheses on the nature of the action, there is an
interplay between the de Rham cohomology of the manifold, the cohomology of
invariant forms and the co-invariant cohomology, and this relationship can be
exhibited either by vector space decompositions or through long exact
sequences depending on the case of study (Theorems $1.1$ and $1.3$ in [1]).
Among the various consequences that can be derived from this inspection, it is
evident that the dimension of de Rham cohomology has some control over the
dimension of the co-invariant cohomology, and in most cases presented in [1]
the latter is finite whenever the former is. This occurs for instance in the
case of a finite action on a compact manifold or more generally in the case of
an isometric action on a compact oriented Riemannian manifold, and this fact
holds as well for a non-compact manifold as long as one requires the action to
be free and properly discontinuous with compact orbit space. A concept closely
related to co-invariant cohomology is the cohomology of divergence forms,
which is defined by means of a Lie algebra action on a smooth manifold and was
introduced by A. Abouqateb in [2]. In the course of his study, the author gave
many examples where the cohomology of divergence forms is finite-dimensional.
The goal of this paper is to show that this phenomenon heavily depends on the
nature of the action in play, and that without underlying hypotheses, co-
invariant cohomology and cohomology of divergence forms are not generally
well-behaved. This is illustrated by an example of a vector field action on a
smooth compact manifold giving rise to an infinite-dimensional cohomology of
divergence forms, and whose discrete flow induces an infinite-dimensional co-
invariant cohomology, in contrast to the finite-dimensional de Rham cohomology
of the manifold.
This shows in particular that many results obtained in [1] and [2] cannot be
easily generalized and brings into perspective the necessity to look for finer
finiteness conditions of co-invariant cohomology in a future study which would
put the present paper in a broader context.
The general outline of the paper is as follows: In the first paragraph, we
briefly recall the notions of co-invariant forms and divergence forms; then we
define a homomorphism of the de Rham complex that is induced by a complete
vector field on the manifold, and which maps divergence forms relative to the
action of the vector field onto the complex of co-invariant differential forms
associated to its discrete flow (see (1) and Proposition 2.0.1). The next
paragraph is concerned with the setting in which our cohomology computations
will take place: it comprises a smooth compact manifold, the $3$-dimensional
hyperbolic torus, which can be obtained as the quotient of a solvable Lie
group by a uniform lattice (the construction given here is that of A. El
Kacimi in [3]); the Lie algebra action considered is by means of a left-
invariant vector field. We then use a number of results to prove Theorem 3.0.1,
which states that the operator defined in (1) is an isomorphism between the
complex of divergence forms and the complex of co-invariant forms, hence
allowing us to consider only the cohomology of co-invariant forms for the
computation. Finally, the last paragraph is dedicated to the main computation,
in which we prove that the discrete flow of the vector field in question on
the hyperbolic torus gives infinite-dimensional co-invariant cohomology.
### Acknowledgement
The author would like to thank Abdelhak Abouqateb for his helpful discussions
and advice concerning this paper.
## 2\. Preliminaries
Let $M$ be a smooth $n$-dimensional manifold and denote $\mathrm{Diff}(M)$ the
group of diffeomorphisms of $M$ and $\Large\raisebox{2.0pt}{$\chi$}(M)$ the
Lie algebra of smooth vector fields on $M$. Let
$\rho:\Gamma\longrightarrow\mathrm{Diff}(M)$ be an action of a group $\Gamma$
on $M$ by diffeomorphisms. For an $r$-form $\omega$ on $M$ and element
$\gamma\in\Gamma$, we denote $\gamma^{*}\omega$ the pull-back of $\omega$ by
the diffeomorphism $\rho(\gamma):M\longrightarrow M$. Let
$\Omega_{c}(M)=\oplus_{p}\Omega_{c}^{p}(M)$ denote the de Rham complex of
forms with compact support on $M$ and put:
$\Omega_{c}^{p}(M)_{\rho}:=\mathrm{span}\\{\omega-\gamma^{*}\omega,\;\gamma\in\Gamma,\;\omega\in\Omega_{c}^{p}(M)\\}.$
Any element of $\Omega_{c}^{p}(M)_{\rho}$ is called a $\rho$-co-invariant or
just a ($\Gamma$-)co-invariant when there is no ambiguity. The graded vector
space $\Omega_{c}(M)_{\rho}:=\oplus_{p}\Omega_{c}^{p}(M)_{\rho}$ is a
differential subcomplex of the de Rham complex $\Omega_{c}(M)$, it is called
the complex of co-invariant differential forms on $M$. When $M$ is compact
this complex is simply denoted $\Omega(M)_{\rho}$. In the case where
$\rho:\mathbb{Z}\longrightarrow\mathrm{Diff}(M)$ is the action induced by a
diffeomorphism $\gamma:M\longrightarrow M$, i.e. $\rho(n):=\gamma^{n}$, then
we get that:
$\Omega^{p}_{c}(M)_{\rho}=\\{\omega-\gamma^{*}\omega,\;\omega\in\Omega^{p}_{c}(M)\\}.$
Let $\tau:\mathcal{G}\longrightarrow\Large\raisebox{2.0pt}{$\chi$}(M)$ be a
Lie algebra homomorphism and denote $\hat{X}:=\tau(X)$ for any
$X\in\mathcal{G}$ then define:
$\mathcal{C}^{p}_{\tau}(M):=\mathrm{span}\\{L_{\hat{X}}\omega,\;X\in\mathcal{G},\;\omega\in\Omega^{p}_{c}(M)\\}.$
Any element of $\mathcal{C}^{p}_{\tau}(M)$ is called a $\tau$-divergence
$p$-form or simply $\mathcal{G}$-divergence form. The graded vector space
$\mathcal{C}_{\tau}(M):=\oplus_{p}\mathcal{C}^{p}_{\tau}(M)$ is a differential
subcomplex of the de Rham complex. If $X$ is any vector field on $M$, with
corresponding Lie algebra homomorphism
$\tau:\mathbb{R}\longrightarrow\Large\raisebox{2.0pt}{$\chi$}(M)$,
$\tau(1):=X$ then:
$\mathcal{C}^{p}_{\tau}(M)=\mathrm{span}\\{L_{X}\omega,\;\omega\in\Omega^{p}_{c}(M)\\}.$
In what follows, $X\in\Large\raisebox{2.0pt}{$\chi$}(M)$ is a complete vector
field and $\phi:M\times[0,1]\longrightarrow M$ is the flow $\phi^{X}$ of the
vector field $X$ restricted to $M\times[0,1]$. We define the linear operator
$I:\Omega(M)\longrightarrow\Omega(M)$ by the expression:
$I(\eta):=\fint_{0}^{1}\phi^{*}\eta\wedge\mathrm{pr}_{2}^{*}(ds)$
(1)
where
$\fint_{0}^{1}:\Omega^{*}(M\times[0,1])\longrightarrow\Omega^{*-1}(M)$
is the fiberwise integration operator of the trivial bundle
$M\times[0,1]\overset{\mathrm{pr}_{1}}{\longrightarrow}M$ (see [4]) and $ds$
is the usual volume form on $[0,1]$.
Let $\tau:\mathbb{R}\longrightarrow\Large\raisebox{2.0pt}{$\chi$}(M)$ be the
Lie algebra homomorphism induced by $X$ and let
$\rho:\mathbb{Z}\longrightarrow\mathrm{Diff}(M)$ be the discrete flow of $X$
i.e the group action given by $\rho(n):=\phi_{n}^{X}$.
###### Proposition 2.0.1.
The operator $I:\Omega(M)\longrightarrow\Omega(M)$ defined by (1) is a
differential complex homomorphism, i.e. $I\circ d=-d\circ I$. Moreover
$I(\mathcal{C}_{\tau}(M))\subset\Omega_{c}(M)_{\rho}$ and the restriction
$I:\mathcal{C}_{\tau}(M)\longrightarrow\Omega_{c}(M)_{\rho}$ is surjective.
###### Proof.
Let $\eta\in\Omega(M)$ and let $\iota_{s}:M\longrightarrow
M\times\\{s\\}\hookrightarrow M\times[0,1]$ denote the natural inclusion;
then using the Stokes formula for fiberwise integration we get:
$I(d\eta)=\fint_{0}^{1}\phi^{*}(d\eta)\wedge\mathrm{pr}_{2}^{*}(ds)=\fint_{0}^{1}d(\phi^{*}\eta\wedge\mathrm{pr}_{2}^{*}(ds))=-dI(\eta)+\big[\iota_{s}^{*}(\phi^{*}\eta\wedge\mathrm{pr}_{2}^{*}ds)\big]_{0}^{1},$
and since $\iota_{s}^{*}\mathrm{pr}_{2}^{*}(ds)=0$, it follows that $I(d\eta)=-dI(\eta)$.
For the second claim we start by showing that $I(\eta)$ has compact support
whenever $\eta$ does. Indeed assume $\eta\in\Omega_{c}(M)$ and denote
$K:=\mathrm{supp}(\eta)$, and consider the map:
$f:M\times\mathbb{R}\longrightarrow
M,\;\;\;(x,s)\mapsto\phi_{s}^{-1}(x):=\phi(x,-s).$
Then $f$ is continuous and therefore $L:=f(K\times[0,1])$ is compact. For any
$y\in M\setminus L$ and any $s\in[0,1]$ we get that $\phi_{s}(y)\notin K$ and
therefore $(\phi^{*}\eta)_{(y,s)}=0$; this implies that $I(\eta)_{y}=0$. We
conclude that $\mathrm{supp}\;I(\eta)\subset L$, i.e. $I(\eta)\in\Omega_{c}(M)$.
From the relation $\mathrm{T}_{\small(x,t)}\phi(0,1)=X_{\phi_{t}(x)}$ one gets
that $\phi^{*}\circ i_{X}=i_{(0,\frac{\partial}{\partial s})}\circ\phi^{*}$
and therefore $\phi^{*}\circ L_{X}=L_{(0,\frac{\partial}{\partial
s})}\circ\phi^{*}$. Moreover we have that
$L_{(0,\frac{\partial}{\partial s})}\mathrm{pr}_{2}^{*}(ds)=0\;\;\;\text{and}\;\;\;\fint_{0}^{1}\circ\,
i_{(0,\frac{\partial}{\partial s})}=0.$
If we write $\eta=L_{X}\omega$ for some $\omega\in\Omega_{c}(M)$ then we get
that:
$\begin{aligned}
I(L_{X}\omega)&=\fint_{0}^{1}\phi^{*}(L_{X}\omega)\wedge\mathrm{pr}_{2}^{*}(ds)\\\
&=\fint_{0}^{1}L_{(0,\frac{\partial}{\partial s})}(\phi^{*}\omega)\wedge\mathrm{pr}_{2}^{*}(ds)\\\
&=\fint_{0}^{1}L_{(0,\frac{\partial}{\partial s})}\big(\phi^{*}\omega\wedge\mathrm{pr}_{2}^{*}(ds)\big)\\\
&=\fint_{0}^{1}d\circ i_{(0,\frac{\partial}{\partial s})}\big(\phi^{*}\omega\wedge\mathrm{pr}_{2}^{*}(ds)\big)+\fint_{0}^{1}i_{(0,\frac{\partial}{\partial s})}\,d\big(\phi^{*}\omega\wedge\mathrm{pr}_{2}^{*}(ds)\big)\\\
&=d\left(\fint_{0}^{1}i_{(0,\frac{\partial}{\partial s})}\big(\phi^{*}\omega\wedge\mathrm{pr}_{2}^{*}(ds)\big)\right)+\Big[\iota_{s}^{*}\,i_{(0,\frac{\partial}{\partial s})}\big(\phi^{*}\omega\wedge\mathrm{pr}_{2}^{*}(ds)\big)\Big]_{0}^{1}\\\
&=[\phi_{s}^{*}\omega]_{0}^{1}\\\
&=\phi_{1}^{*}\omega-\phi_{0}^{*}\omega\\\
&=\phi_{1}^{*}\omega-\omega.
\end{aligned}$
It follows that $I(\mathcal{C}_{\tau}(M))\subset\Omega_{c}(M)_{\rho}$. This
also shows that $I:\mathcal{C}_{\tau}(M)\longrightarrow\Omega_{c}(M)_{\rho}$
is surjective.
###### Remark 2.0.1.
Note that $\phi^{X}$-invariant forms on $M$ are fixed by $I$, i.e. if
$\omega\in\Omega(M)$ is such that $L_{X}\omega=0$ then $I(\omega)=\omega$.
## 3\. The hyperbolic torus
Consider $A\in\mathrm{SL}(2,\mathbb{Z})$ with $\mathrm{tr}(A)>2$. It is easy
to check that $A=PDP^{-1}$ for some $P\in\mathrm{GL}(2,\mathbb{R})$ and
$D=\mathrm{diag}(\lambda,\lambda^{-1})$. Clearly $\lambda>0$ and $\lambda\neq
1$. Hence it makes sense to set
$D^{t}=\mathrm{diag}(\lambda^{t},\lambda^{-t})$ and define
$A^{t}=PD^{t}P^{-1}$ for any $t\in\mathbb{R}$. Next we define the Lie group
homomorphism:
$\phi:\mathbb{R}\longrightarrow\mathrm{Aut}(\mathbb{R}^{2}),\;\;\;t\mapsto
A^{t}$
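The definition of $A^{t}$ can be checked numerically. The sketch below uses a hypothetical concrete choice, $A=\begin{pmatrix}2&1\\1&1\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z})$ with $\mathrm{tr}(A)=3>2$ (any such matrix would do), and verifies that $t\mapsto A^{t}$ is indeed a one-parameter group of automorphisms:

```python
import numpy as np

# Hypothetical choice for illustration: A in SL(2,Z) with tr(A) = 3 > 2
A = np.array([[2.0, 1.0], [1.0, 1.0]])
lam_, P = np.linalg.eig(A)   # eigenvalues lambda, 1/lambda (both positive, irrational)

def A_pow(t):
    """A^t = P diag(lambda^t, lambda^{-t}) P^{-1}, as in the text."""
    return P @ np.diag(lam_**t) @ np.linalg.inv(P)

assert np.allclose(A_pow(1.0), A)                    # phi(1) = A
assert np.allclose(A_pow(0.3) @ A_pow(0.7), A)       # phi(s)phi(t) = phi(s+t)
assert np.isclose(np.linalg.det(A_pow(0.5)), 1.0)    # stays in SL(2,R)
```

The group property holds because $A^{s}A^{t}=PD^{s}D^{t}P^{-1}=PD^{s+t}P^{-1}$, and $\det A^{t}=\lambda^{t}\lambda^{-t}=1$.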
The hyperbolic torus $\mathbb{T}^{3}_{A}$ is the smooth manifold defined as
the quotient $\Gamma_{3}\backslash G_{3}$ where
$G_{3}:=\mathbb{R}^{2}\rtimes_{\phi}\mathbb{R}$ and
$\Gamma_{3}:=\mathbb{Z}^{2}\rtimes_{\phi}\mathbb{Z}$. The natural projection
$\mathbb{R}^{2}\rtimes_{\phi}\mathbb{R}\overset{p}{\longrightarrow}\mathbb{R}$
induces a fiber bundle structure
$\mathbb{T}^{3}_{A}\overset{p}{\longrightarrow}\mathbb{S}^{1}$ with fiber type
$\mathbb{T}^{2}$ and $p[x,y,t]=[t]$.
If $(1,a)$ and $(1,b)$ are the eigenvectors of $A$ respectively associated to
the eigenvalues $\lambda$ and $\lambda^{-1}$ then:
$v=(1,a,0),\;\;\;w=(1,b,0)\;\;\;\text{and}\;\;\;e=(0,0,-\log(\lambda)^{-1}),$
forms a basis of $\mathfrak{g}_{3}=\mathrm{Lie}(G_{3})$, and we can check
that:
$[v,w]_{\mathfrak{g}_{3}}=0,\;\;\;[e,v]_{\mathfrak{g}_{3}}=-v,\;\;\;\text{and}\;\;\;[e,w]_{\mathfrak{g}_{3}}=w.$
(2)
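For instance, the first relation in (2) can be verified directly. Writing the bracket of the semidirect product $\mathfrak{g}_{3}=\mathbb{R}^{2}\rtimes\mathbb{R}$ as $[(u,s),(u',s')]=(s\,\delta(u')-s'\,\delta(u),0)$ with derivation $\delta=\log A$ (our convention), and using that $(1,a)$ is a $\log\lambda$-eigenvector of $\log A$:
$[e,v]_{\mathfrak{g}_{3}}=\Big(-\tfrac{1}{\log\lambda}\,\delta(1,a),\,0\Big)=\Big(-\tfrac{\log\lambda}{\log\lambda}(1,a),\,0\Big)=-v,$
and similarly $\delta(1,b)=-\log(\lambda)(1,b)$ gives $[e,w]_{\mathfrak{g}_{3}}=w$.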
Denote by $X$, $Y$ and $Z$ the left-invariant vector fields on
$\mathbb{R}^{2}\rtimes_{\phi}\mathbb{R}$ associated to $v$, $w$ and $e$
respectively; then $\\{X,Y,Z\\}$ defines a parallelism on
$\mathbb{T}^{3}_{A}$, and a direct calculation leads to:
$X=\lambda^{t}\left(\frac{\partial}{\partial x}+a\frac{\partial}{\partial
y}\right),\;\;\;Y=\lambda^{-t}\left(\frac{\partial}{\partial
x}+b\frac{\partial}{\partial
y}\right)\;\;\;\text{and}\;\;\;Z=-\log(\lambda)^{-1}\frac{\partial}{\partial
t}.$ (3)
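The bracket relations (2) can also be checked directly in the coordinates of (3); a quick symbolic sketch (the symbol names are ours):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
lam, a, b = sp.symbols('lambda a b', positive=True)
coords = (x, y, t)

# Components of the vector fields from Eq. (3)
X = [lam**t, a*lam**t, 0]
Y = [lam**(-t), b*lam**(-t), 0]
Z = [0, 0, -1/sp.log(lam)]

def bracket(U, V):
    """Lie bracket [U, V] of vector fields given by coordinate components."""
    return [sum(U[j]*sp.diff(V[i], coords[j]) - V[j]*sp.diff(U[i], coords[j])
                for j in range(3)) for i in range(3)]

# [X, Y] = 0,  [Z, X] = -X,  [Z, Y] = Y : matches (2) under X,Y,Z <-> v,w,e
assert all(sp.simplify(c) == 0 for c in bracket(X, Y))
assert all(sp.simplify(bracket(Z, X)[i] + X[i]) == 0 for i in range(3))
assert all(sp.simplify(bracket(Z, Y)[i] - Y[i]) == 0 for i in range(3))
```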
Now denote by $\alpha$, $\beta$ and $\theta$ the dual forms associated to $X$,
$Y$ and $Z$ respectively. It is clear that the vector fields $X$ and $Y$ of
$\mathbb{T}^{3}_{A}$ are tangent to the fibers of the fiber bundle
$\mathbb{T}^{3}_{A}\overset{p}{\longrightarrow}\mathbb{S}^{1}$, and that
$\theta=-(\log\lambda)p^{*}(\sigma)$ where $\sigma$ is the invariant volume
form on $\mathbb{S}^{1}$ satisfying $\int_{\mathbb{S}^{1}}\sigma=1$. Assume in
what follows that the eigenvalue $\lambda$ of $A$ is irrational. Then from the
relation:
$A\left(\begin{matrix}1\\\
a\end{matrix}\right)=\lambda\left(\begin{matrix}1\\\ a\end{matrix}\right)$
we deduce that $a\in\mathbb{R}\setminus\mathbb{Q}$. This remark leads to:
###### Proposition 3.0.1.
The orbits of the vector field $X$ defined in (3) are dense in the fibers of
the fiber bundle
$\mathbb{T}^{3}_{A}\overset{p}{\longrightarrow}\mathbb{S}^{1}$. In particular
for any $f\in\mathcal{C}^{\infty}(\mathbb{T}^{3}_{A})$, $X(f)=0$ is equivalent
to $f=p^{*}(\phi)$ for some $\phi\in\mathcal{C}^{\infty}(\mathbb{S}^{1})$.
###### Proof.
We shall identify $\mathbb{S}^{1}$ with $\mathbb{R}/\mathbb{Z}$. Fix
$[t]\in\mathbb{S}^{1}$ and consider the diffeomorphism:
$\Phi_{t}:\mathbb{T}^{2}\longrightarrow p^{-1}[t],\;\;\;[x,y]\mapsto[x,y,t].$
Then define the vector field $\hat{X}$ on $\mathbb{T}^{2}$ given by:
$\hat{X}_{[x,y]}=T_{[x,y,t]}\Phi_{t}^{-1}(X_{[x,y,t]})=\lambda^{t}\left(\frac{\partial}{\partial
x}+a\frac{\partial}{\partial y}\right).$
Since $a$ is irrational, the family $\\{1,a\\}$ is $\mathbb{Q}$-linearly
independent and thus the orbits of $\hat{X}$ are dense in $\mathbb{T}^{2}$;
consequently the orbits of $X$ are dense in $p^{-1}[t]$. This proves the
assertion since $t$ is arbitrary.
The following Lemma is of central importance for the development of this
paragraph and for the computations of the next section:
###### Lemma 3.0.1.
Let $f\in\mathcal{C}^{\infty}(\mathbb{T}^{3}_{A})$ then for every
$s\in\mathbb{R}$ we have the following formula:
$Z\big{(}(\phi_{s}^{X})^{*}(f)\big{)}=-s(\phi_{s}^{X})^{*}\big{(}X(f)\big{)}+(\phi_{s}^{X})^{*}\big{(}Z(f)\big{)}.$
(4)
In particular $Z(\gamma^{*}f)=-X(\gamma^{*}f)+\gamma^{*}(Z(f))$ and
$i_{Z}\circ\gamma^{*}=-\gamma^{*}\circ i_{X}+\gamma^{*}\circ i_{Z}$, where
$\gamma:=\phi_{1}^{X}$.
###### Proof.
For any $(x,y,t)\in\mathbb{R}^{3}$, a straightforward computation gives that:
$\begin{aligned}
Z\big((\phi_{s}^{X})^{*}(f)\big)(x,y,t)&=-\frac{1}{\log\lambda}\,d(f\circ\phi_{s}^{X})_{(x,y,t)}(0,0,1)\\\
&=-\frac{1}{\log\lambda}\,\frac{d}{du}\Big|_{u=0}(f\circ\phi_{s}^{X})(x,y,t+u)\\\
&=-\frac{1}{\log\lambda}\,\frac{d}{du}\Big|_{u=0}f(s\lambda^{t+u}+x,\,as\lambda^{t+u}+y,\,t+u)\\\
&=-\frac{1}{\log\lambda}\,(df)_{\phi_{s}^{X}(x,y,t)}(s\log(\lambda)\lambda^{t},\,as\log(\lambda)\lambda^{t},\,1)\\\
&=-s\,(df)_{\phi_{s}^{X}(x,y,t)}(\lambda^{t},\,a\lambda^{t},\,0)-\frac{1}{\log\lambda}\,(df)_{\phi_{s}^{X}(x,y,t)}(0,0,1)\\\
&=-s\,(X(f)\circ\phi_{s}^{X})(x,y,t)+(Z(f)\circ\phi_{s}^{X})(x,y,t).
\end{aligned}$
This completes the proof.
###### Corollary 3.0.1.
Let $f\in\mathcal{C}^{\infty}(\mathbb{T}^{3}_{A})$ such that $f=\gamma^{*}f$
with $\gamma:=\phi_{1}^{X}$. Then $X(f)=0$ and consequently $f=p^{*}\psi$ with
$\psi\in\mathcal{C}^{\infty}(\mathbb{S}^{1})$.
###### Proof.
Since $f=\gamma^{*}f$ we get that for every $n\in\mathbb{Z}$,
$f=(\gamma^{n})^{*}(f)$ thus the preceding lemma gives that:
$Z(f)=-nX(f)+(\gamma^{n})^{*}(Z(f)).$
Consequently we obtain that for every $n\in\mathbb{Z}$:
$|X(f)|\leq\frac{1}{n}\big(\lVert
Z(f)\rVert_{\infty}+\lVert(\gamma^{n})^{*}(Z(f))\rVert_{\infty}\big)\leq\frac{2}{n}\lVert
Z(f)\rVert_{\infty},$
which forces $X(f)=0$ and completes the proof.
In what follows we denote $M:=\mathbb{T}^{3}_{A}$. It is straightforward to
check that:
$d\alpha=-\alpha\wedge\theta,\;\;\;d\beta=\beta\wedge\theta,\;\;\;d\theta=0,$
and that $L_{X}\alpha=-\theta$ and $L_{X}\beta=L_{X}\theta=0$; thus
$L_{X}(\alpha\wedge\beta\wedge\theta)=0$.
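The relation $L_{X}\alpha=-\theta$, for instance, follows from Cartan's formula, using $i_{X}\alpha=1$ and $i_{X}\theta=0$:
$L_{X}\alpha=i_{X}d\alpha+d\,i_{X}\alpha=i_{X}(-\alpha\wedge\theta)+d(1)=-(i_{X}\alpha)\,\theta+(i_{X}\theta)\,\alpha=-\theta.$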
Let $\tau:\mathbb{R}\longrightarrow\Large\raisebox{2.0pt}{$\chi$}(M)$ be the
Lie algebra homomorphism corresponding to the vector field $X$ and
$\rho:\mathbb{Z}\longrightarrow\mathrm{Diff}(M)$ the discrete action generated
by $\gamma:=\phi_{1}^{X}$ where $\phi^{X}$ is the flow of $X$, that is,
$\rho(n)(x)=\phi_{n}^{X}(x)$ for any $n\in\mathbb{Z}$.
###### Theorem 3.0.1.
The homomorphism $I:\mathcal{C}_{\tau}(M)\longrightarrow\Omega(M)_{\rho}$
defined in (1) is an isomorphism.
###### Proof.
In view of Proposition 2.0.1 it only remains to prove that $I$ is injective.
Choose $\eta\in\mathcal{C}_{\tau}(M)$ and write $\eta=L_{X}\omega$ for some
$\omega\in\Omega(M)$. Assume that $I(\eta)=0$; in view of the previous
computation this is equivalent to $\omega=\gamma^{*}\omega$.
If $\eta\in\mathcal{C}_{\tau}^{0}(M)$ then Corollary 3.0.1 gives that
$\omega=p^{*}\phi$ for some $\phi\in\mathcal{C}^{\infty}(\mathbb{S}^{1})$,
thus $\eta=0$. On the other hand if $\eta\in\mathcal{C}_{\tau}^{3}(M)$ we can
write $\eta=X(f)\alpha\wedge\beta\wedge\theta$ for some
$f\in\mathcal{C}^{\infty}(M)$ satisfying $f=\gamma^{*}f$ and so by Corollary
3.0.1 we get $\eta=0$.
Now for $\eta\in\mathcal{C}_{\tau}^{1}(M)$ we can write:
$\omega=f\alpha+g\beta+h\theta,\;\;\;\;\;f,g,h\in\mathcal{C}^{\infty}(M).$
Applying $I$ to $L_{X}\alpha=-\theta$ leads to
$\gamma^{*}\alpha=\alpha-\theta$. Moreover since $\theta$ and $\beta$ are
$\phi^{X}$-invariant we get that $\beta=\gamma^{*}\beta$ and
$\theta=\gamma^{*}\theta$ thus:
$\gamma^{*}\omega=(\gamma^{*}f)\alpha+(\gamma^{*}g)\beta+(\gamma^{*}h-\gamma^{*}f)\theta,$
hence $\omega=\gamma^{*}\omega$ is equivalent to $f=\gamma^{*}f$,
$g=\gamma^{*}g$ and $h=\gamma^{*}h-f$. The last relation then implies that
$h-(\gamma^{n})^{*}h=nf$ for all $n\in\mathbb{Z}$ and thus:
$\lVert f\rVert_{\infty}\leq\frac{1}{n}\lVert
h-(\gamma^{n})^{*}h\rVert_{\infty}\leq\frac{2}{n}\lVert
h\rVert_{\infty}\underset{n\rightarrow+\infty}{\longrightarrow}0.$
Therefore $f=0$ and $h=\gamma^{*}h$, $g=\gamma^{*}g$ which according to
Corollary 3.0.1 gives that $X(g)=0$ and $X(h)=0$, and so using that
$L_{X}\beta=0$ and $L_{X}\theta=0$ it follows that $\eta=L_{X}\omega=0$.
Finally let $\eta\in\mathcal{C}^{2}_{\tau}(M)$ and write:
$\omega=f\alpha\wedge\beta+g\alpha\wedge\theta+h\beta\wedge\theta,\;\;\;\;\;f,g,h\in\mathcal{C}^{\infty}(M).$
Then using $\gamma^{*}\alpha=\alpha-\theta$ we obtain that:
$\gamma^{*}\omega=(\gamma^{*}f)\alpha\wedge\beta+(\gamma^{*}g)\alpha\wedge\theta+(\gamma^{*}h+\gamma^{*}f)\beta\wedge\theta,$
and so $\omega=\gamma^{*}\omega$ is equivalent in this case to
$f=\gamma^{*}f$, $g=\gamma^{*}g$ and $h=\gamma^{*}h+\gamma^{*}f$. As before,
this leads to $f=0$, $X(g)=0$ and $X(h)=0$ and so:
$\eta=L_{X}\omega=L_{X}(g\alpha\wedge\theta+h\beta\wedge\theta)=gL_{X}(\alpha\wedge\theta)=-g\theta\wedge\theta=0.$
Thus $I:\mathcal{C}_{\tau}(M)\longrightarrow\Omega(M)_{\rho}$ is an
isomorphism.
This result gives in particular that
$\mathrm{H}(\mathcal{C}_{\tau}(M))\simeq\mathrm{H}(\Omega(M)_{\rho})$ and
therefore we only need to compute the cohomology of $\rho$-co-invariant forms
in this case.
## 4\. Cohomology computation
We now have all the necessary ingredients to perform our computation. Let $M$
denote the hyperbolic torus $\mathbb{T}^{3}_{A}$ defined in the previous
section with $A$ having irrational eigenvalues and let
$X,Y,Z\in\Large\raisebox{2.0pt}{$\chi$}(M)$ be the vector fields defined in
(3) with respective dual $1$-forms $\alpha$, $\beta$ and $\theta$. Define the
action $\rho:\mathbb{Z}\longrightarrow\mathrm{Diff}(M)$ to be the discrete
flow of the vector field $X$ with $\gamma:=\rho(1)$. The main goal is to prove
that the first and second co-invariant cohomology groups are
infinite-dimensional; however, we shall compute the whole cohomology in order
to get a global picture.
Calculating $\mathrm{H}^{0}(\Omega(M)_{\rho})$: Choose
$f\in\Omega^{0}(M)_{\rho}$ such that $df=0$; then $f$ is a constant function
equal to $g-\gamma^{*}g$ for some $g\in\mathcal{C}^{\infty}(M)$. Consequently
we obtain that:
$\int_{M}f\alpha\wedge\beta\wedge\theta=\int_{M}(g-\gamma^{*}g)\alpha\wedge\beta\wedge\theta=\int_{M}g\alpha\wedge\beta\wedge\theta-\int_{M}\gamma^{*}(g\alpha\wedge\beta\wedge\theta)=0.$
Thus $f=0$ and we conclude that $\mathrm{H}^{0}(\Omega(M)_{\rho})=0$.
Calculating $\mathrm{H}^{1}(\Omega(M)_{\rho})$: We prove that
$\mathrm{H}^{1}(\Omega(M)_{\rho})$ is infinite dimensional. In order to do so,
we prove that the map
$p^{*}:\Omega^{1}(\mathbb{S}^{1})\longrightarrow\mathrm{H}^{1}(\Omega(M)_{\rho})$
is well-defined and injective; equivalently, we show that
$p^{*}(\Omega^{1}(\mathbb{S}^{1}))\subset\mathrm{Z}^{1}(\Omega(M)_{\rho})$ and
$p^{*}(\Omega^{1}(\mathbb{S}^{1}))\cap\mathrm{B}^{1}(\Omega(M)_{\rho})=0$.
An element $\eta\in p^{*}(\Omega^{1}(\mathbb{S}^{1}))$ can always be written
as $\eta=p^{*}(\phi)\theta$ where
$\phi\in\mathcal{C}^{\infty}(\mathbb{S}^{1})$. Since $L_{X}\theta=0$ and
$L_{X}\alpha=-\theta$, then by applying $I$ to $L_{X}\alpha$ we get that
$\theta=\alpha-\gamma^{*}\alpha$, therefore:
$\eta=p^{*}(\phi)\theta=p^{*}(\phi)\alpha-\gamma^{*}(p^{*}(\phi)\alpha).$
Moreover observe that $d\eta=0$, hence we deduce that
$p^{*}(\Omega^{1}(\mathbb{S}^{1}))\subset\mathrm{Z}^{1}(\Omega(M)_{\rho})$.
Now suppose $\eta=d(g-\gamma^{*}g)$; then clearly $X(g-\gamma^{*}g)=0$ and
$Z(g-\gamma^{*}g)=p^{*}(\phi)$, thus according to Proposition 3.0.1,
$g-\gamma^{*}g=p^{*}\psi$ for some
$\psi\in\mathcal{C}^{\infty}(\mathbb{S}^{1})$. By induction we can show that
for any $n\in\mathbb{N}$, $g=\rho(n)^{*}g+np^{*}\psi$ which then leads to:
$|p^{*}\psi|\leq\frac{1}{n}|g-\rho(n)^{*}g|\leq\frac{2}{n}\lVert
g\rVert_{\infty}\underset{n\rightarrow+\infty}{\longrightarrow}0.$ (5)
Hence $p^{*}\psi=g-\gamma^{*}g=0$ and so $\eta=0$. Thus
$p^{*}(\Omega^{1}(\mathbb{S}^{1}))\cap\mathrm{B}^{1}(\Omega(M)_{\rho})=0$.
Calculating $\mathrm{H}^{2}(\Omega(M)_{\rho})$: We will show that
$p^{*}(\Omega^{1}(\mathbb{S}^{1}))\wedge\beta$ injects into
$\mathrm{H}^{2}(\Omega(M)_{\rho})$. To do this, we fix a $2$-form
$\eta=p^{*}(\phi)\theta\wedge\beta$ with
$\phi\in\mathcal{C}^{\infty}(\mathbb{S}^{1})$. We can easily check that
$d\eta=0$; moreover, from the previous calculations and the fact that
$L_{X}\beta=0$ we get that $\beta=\gamma^{*}\beta$, therefore:
$p^{*}(\phi)\theta\wedge\beta=(p^{*}\phi\alpha\wedge\beta)-\gamma^{*}(p^{*}(\phi)\alpha\wedge\beta).$
Hence
$p^{*}(\Omega^{1}(\mathbb{S}^{1}))\wedge\beta\subset\mathrm{Z}^{2}(\Omega(M)_{\rho})$.
Now assume that $\eta=d(\omega-\gamma^{*}\omega)$; then using that $[X,Y]=0$
we get $i_{Y}i_{X}(d\omega)=\gamma^{*}(i_{Y}i_{X}d\omega)$; hence according to
Corollary 3.0.1 we can write $i_{Y}i_{X}(d\omega)=p^{*}\psi$ for some
$\psi\in\mathcal{C}^{\infty}(\mathbb{S}^{1})$. On the other hand we get from
Lemma 3.0.1 that:
$p^{*}\phi-
i_{Y}i_{X}(d\omega)=\gamma^{*}(i_{Z}i_{Y}d\omega)-i_{Z}i_{Y}d\omega.$
It follows from these remarks that
$p^{*}(\phi-\psi)=\gamma^{*}(i_{Z}i_{Y}d\omega)-i_{Z}i_{Y}d\omega$, and as in
(5) we once again prove that $p^{*}(\phi-\psi)=0$, so we deduce that
$p^{*}\phi=p^{*}\psi=i_{Y}i_{X}(d\omega)$. Now if we write
$\omega=f\alpha+g\beta+h\theta$, then we get that $p^{*}\phi=X(g)-Y(f)$.
Moreover, from $X(p^{*}\phi)=Y(p^{*}\phi)=0$ we get that for every
$s\in\mathbb{R}$:
$\begin{aligned}
s^{2}\,p^{*}\phi&=\int_{0}^{s}\int_{0}^{s}(\phi_{t}^{X})^{*}(\phi_{u}^{Y})^{*}(p^{*}\phi)\,du\,dt\\\
&=\int_{0}^{s}\int_{0}^{s}(\phi_{t}^{X})^{*}(\phi_{u}^{Y})^{*}(X(g))\,du\,dt-\int_{0}^{s}\int_{0}^{s}(\phi_{t}^{X})^{*}(\phi_{u}^{Y})^{*}(Y(f))\,du\,dt\\\
&=\int_{0}^{s}(\phi_{t}^{X})^{*}X\left(\int_{0}^{s}(\phi_{u}^{Y})^{*}(g)\,du\right)dt-\int_{0}^{s}(\phi_{u}^{Y})^{*}Y\left(\int_{0}^{s}(\phi_{t}^{X})^{*}(f)\,dt\right)du\\\
&=(\phi_{s}^{X})^{*}\left(\int_{0}^{s}(\phi_{u}^{Y})^{*}(g)\,du\right)-\int_{0}^{s}(\phi_{u}^{Y})^{*}(g)\,du-(\phi_{s}^{Y})^{*}\left(\int_{0}^{s}(\phi_{t}^{X})^{*}(f)\,dt\right)+\int_{0}^{s}(\phi_{t}^{X})^{*}(f)\,dt.
\end{aligned}$
It follows that:
$s^{2}|p^{*}\phi|\leq
2\left\lVert\int_{0}^{s}(\phi_{u}^{Y})^{*}(g)\,du\right\rVert_{\infty}+2\left\lVert\int_{0}^{s}(\phi_{t}^{X})^{*}(f)\,dt\right\rVert_{\infty}\leq
2|s|\big(\lVert g\rVert_{\infty}+\lVert f\rVert_{\infty}\big).$
Hence $|p^{*}\phi|\leq\dfrac{2}{|s|}\big(\lVert g\rVert_{\infty}+\lVert
f\rVert_{\infty}\big)\underset{s\rightarrow+\infty}{\longrightarrow}0$.
We conclude that $\eta=0$ and
$p^{*}(\Omega^{1}(\mathbb{S}^{1}))\wedge\beta\cap\mathrm{B}^{2}(\Omega(M)_{\rho})=0$,
in particular this proves that $\mathrm{H}^{2}(\Omega(M)_{\rho})$ is infinite
dimensional.
Calculating $\mathrm{H}^{3}(\Omega(M)_{\rho})$: The elements of
$\Omega^{3}(M)_{\rho}$ are of the form:
$(f-\gamma^{*}f)\alpha\wedge\beta\wedge\theta,$
for some $f\in\mathcal{C}^{\infty}(M)$. Put:
$c=\dfrac{\int_{M}f\,\alpha\wedge\beta\wedge\theta}{\int_{M}\alpha\wedge\beta\wedge\theta},\;\;\;\text{then}\;\int_{M}(f-c)\,\alpha\wedge\beta\wedge\theta=0.$
Thus $(f-c)\,\alpha\wedge\beta\wedge\theta=d\omega$ for some
$\omega\in\Omega^{2}(M)$, since a top-degree form with zero integral on the
compact oriented manifold $M$ is exact. Since
$L_{X}(\alpha\wedge\beta\wedge\theta)=0$, we also get
$(\gamma^{*}f-c)\,\alpha\wedge\beta\wedge\theta=d(\gamma^{*}\omega)$, and
therefore:
$(f-\gamma^{*}f)\,\alpha\wedge\beta\wedge\theta=d(\omega-\gamma^{*}\omega),$
i.e. $\mathrm{H}^{3}(\Omega(M)_{\rho})=0$.
## References
* [1] A. Abouqateb, M. Boucetta and M. Nabil, Cohomology of Coinvariant Differential Forms. Journal of Lie Theory 28, no. 3 (2018): 829-841.
* [2] A. Abouqateb, Cohomologie des formes divergences et Actions propres d’algèbres de Lie. Journal of Lie Theory 17 (2007), No. 2, 317-335.
* [3] A. El Kacimi Alaoui. Invariants de certaines actions de Lie instabilité du caractère Fredholm, Manuscripta Mathematica 74.1 (1992): 143-160.
* [4] W. Greub, S. Halperin and R. Vanstone, Connections, Curvature and Cohomology, Vol. II, Academic Press 1972/1973.
# Data driven Dirichlet sampling on manifolds
Luan S Prado11footnotemark: 1and Thiago G Ritto22footnotemark: 2 Department of
Mechanical Engineering, Universidade Federal do Rio de Janeiro
###### Abstract
This article presents a novel method for sampling on manifolds based on the
Dirichlet distribution. The proposed strategy fully respects the underlying
manifold around which data is observed, and allows massive sampling at low
computational cost. This can be very helpful, for instance, in the training of
neural networks, as well as in uncertainty analysis and stochastic
optimization. Due to its simplicity and efficiency, we believe that the new
method has great potential. Three manifolds (two-dimensional ring, Möbius
strip and spider geometry) are considered to test the proposed methodology,
which is then applied to an engineering problem related to gas seal
coefficients.
###### keywords:
sampling on manifolds , Dirichlet distribution , data driven , gas seal
coefficients
## 1 Introduction
Machine Learning encompasses several methods and algorithms, from simple
linear regression [1, 2] to intricate neural network structures [3, 4, 5]. In
the past few years, artificial neural networks (ANNs) have shown great
versatility, being able to perform many tasks, such as facial recognition [6]
and autonomous vehicle control [7]. However, to perform such tasks the
training set must be large, since the error surface of neural networks with
many degrees of freedom tends to be highly non-convex and non-smooth [8].
In order to circumvent this difficulty, several strategies can be pursued to
augment the data. This might be helpful in ANN training, uncertainty
quantification, and stochastic optimization. The present work is particularly
interested in manifold learning [9], which shines when the dataset is small,
since it can unravel the intrinsic structure of the data [10, 11].
One recent procedure developed to perform probabilistic sampling on manifolds
(PSoM) considers multidimensional kernel-density estimation, diffusion maps,
and the Itô stochastic differential equation [10]. Another strategy explicitly
estimates the manifold by the density ridge, and generates new data by
bootstrapping [12].
The main contribution of the present article is a novel data-driven sampling
method based on the Dirichlet distribution. The proposed Dirichlet sampling
on manifolds (DSoM) is straightforward, simple to implement, and efficient to
reproduce samples of data around a manifold. Some reference data are needed,
which can be obtained directly from a physical system, or be generated with
high fidelity models. Then, some points of the original data are randomly
selected, the parameters of the Dirichlet distribution are obtained, new data
points are generated, and a convex combination is considered to make sure that
each point sampled lies around the manifold delineated by the original data.
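These steps can be sketched in a few lines. The snippet below is a minimal illustration only (the function and parameter names are ours, and the Dirichlet parameters default to the uniform choice $\bm{\alpha}=(1,\dots,1)$ as an assumption, since their estimation is detailed later):

```python
import numpy as np

def dsom(data, n_samples, K=3, alpha=None, seed=None):
    """Dirichlet sampling on manifolds (illustrative sketch).

    data: (m, n) array, one observation per row, concentrated around a manifold.
    Each new sample is a convex combination, with Dirichlet weights, of K
    randomly selected rows of `data`, so it lies in their convex hull.
    """
    rng = np.random.default_rng(seed)
    m, n = data.shape
    alpha = np.ones(K) if alpha is None else np.asarray(alpha, dtype=float)
    samples = np.empty((n_samples, n))
    for i in range(n_samples):
        idx = rng.choice(m, size=K, replace=False)  # K random reference points
        w = rng.dirichlet(alpha)                    # weights on the (K-1)-simplex
        samples[i] = w @ data[idx]                  # convex combination
    return samples
```

The choice of $K$ and $\bm{\alpha}$ controls how tightly the new samples hug the original cloud: for instance, $\alpha_{i}<1$ pulls the Dirichlet weights toward the simplex vertices, hence the samples toward the selected data points.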
The proposed strategy is applied to three manifolds: a two-dimensional ring, a
Möbius strip and a spider geometry. The results are quite satisfactory.
Further on, the new method is employed in an engineering application, where a
physics-based model is used to compute the pressure distribution inside a gas
seal [13]; the eight seal coefficients (stiffness and damping) are then
obtained from the pressure field. The simulation points generated by a
stochastic physics-based model are used to train an ANN (regression problem).
Since simulations of the physics-based system are time consuming, the DSoM is
employed to augment the data for the ANN training. The results are again
reasonably good, with the added samples improving the ANN performance.
The organization of this article goes as follows. Section 2 presents the new
method, where the context, the main ideas, and the algorithm are discussed.
The numerical results are shown in section 3, where three simple manifolds are
analyzed, and the engineering application is presented. Finally, the
concluding remarks are made in the last section.
## 2 Data driven Dirichlet sampling on manifolds – DSoM
### 2.1 Manifold learning
Before describing the methodology, it is worth briefly discussing what
Manifold Learning is and which areas it encompasses. Manifold Learning is a
multidisciplinary area that involves General Topology [14, 15], Differential
Geometry [16, 17] and Statistics [18, 19].
The main focus of Manifold Learning is the extraction of information from
manifolds, which are generalizations of curves and surfaces to two, three or
higher-dimensional spaces. To develop the intuition behind a manifold, imagine
an ant crawling on a guitar body. Due to its tiny size, the guitar seems flat
and featureless to the ant, although its shape is curved. A manifold is a
topological space that locally looks flat and featureless and behaves like a
Euclidean space; however, unlike Euclidean spaces, topological spaces might
not have a notion of distance.
In order to clarify what a locally Euclidean space is, some definitions are
necessary. A topological space $X$ is said to be locally Euclidean if there
exists an integer $d\geq 0$ such that every point in $X$ has a local
neighborhood which is homeomorphic to an open subset of the Euclidean space
$\mathbb{R}^{d}$, i.e. there is an invertible continuous map $g:X\to Y$ with
continuous inverse [9].
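As a concrete illustration (our example, not from the references): the unit circle $\mathbb{S}^{1}\subset\mathbb{R}^{2}$ is locally Euclidean with $d=1$, and on $\mathbb{S}^{1}$ minus a point the angle map is such a homeomorphism onto an open interval of $\mathbb{R}$:

```python
import numpy as np

# points on an open arc of the unit circle (a 1-manifold embedded in R^2)
theta = np.linspace(-3.0, 3.0, 1001)           # stays inside (-pi, pi)
pts = np.column_stack([np.cos(theta), np.sin(theta)])

# chart g: arc -> open interval of R, continuous with continuous inverse
recovered = np.arctan2(pts[:, 1], pts[:, 0])
assert np.allclose(recovered, theta)           # g inverts the parametrization
```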
To extract features from a Manifold one can apply Isomap [20], Local Linear
Embedding (LLE) [21], Laplacian Eigenmaps [22], Diffusion Maps [23], Hessian
Eigenmaps [24], and Nonlinear PCA [25].
### 2.2 Sampling on manifolds
Recently, Soize and Ghanem [10] developed a method to perform probabilistic
sampling on manifolds (PSoM). This strategy is used to generate stochastic
samples that follow the probability distribution underlying a cloud of points
concentrated around a manifold, and is based on multidimensional
kernel-density estimation, diffusion maps, and the Itô stochastic differential
equation. To avoid MCMC sampling, Zhang and Ghanem [12] developed a different
strategy that explicitly estimates the manifold by the density ridge, and
generates new data by bootstrapping.
In the present paper we develop a simple and efficient strategy with the same
purpose. The two important ingredients in the proposed data driven Dirichlet
Sampling on Manifolds (DSoM) are (1) the Dirichlet distribution and (2) convex
combinations.
The Dirichlet distribution [26] is widely used in other fields, such as text
classification [27], and also in the Bayesian bootstrap [28]. Its density is
given by [29]:
$f(\mathbf{x})=\frac{1}{B(\bm{\alpha})}\prod_{i=1}^{K}x_{i}^{\alpha_{i}-1}\,,$
(1)
where $\mathbf{x}=(x_{1},...,x_{K})\in\mathbb{R}^{K}$, with parameters
$\bm{\alpha}=(\alpha_{1},...,\alpha_{K})$, and the beta function $B$ defined
by:
${B(\bm{\alpha})=\frac{\prod_{i=1}^{K}\Gamma(\alpha_{i})}{\Gamma(\sum_{i=1}^{K}\alpha_{i})}}\,,$
(2)
in which $\Gamma$ is the gamma function, and $\mathbf{x}$ belongs to a $K-1$
simplex, i.e., $\sum_{i=1}^{K}{x}_{i}=1$ and ${x}_{i}\geq 0$, for $i=1,...,K$;
exactly the same properties needed to guarantee convex combinations. Samples
from the Dirichlet distribution can be concentrated in specific regions of a
simplex, depending on the parameter $\bm{\alpha}$. Some examples are given in
Fig. 1.
Figure 1: Dirichlet distribution with (a) $\alpha=(1,1,1)$ (samples are
uniformly distributed), (b) $\alpha=(20,20,20)$ (as $\alpha$’s increase, the
samples get concentrated in the simplex center), (c) $\alpha=(0.9,0.9,0.9)$
(as $\alpha$’s decrease, the samples get concentrated in the simplex borders),
and (d) $\alpha=(10,2,2)$ (samples are attracted to a simplex vertex).
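The simplex property and the concentration behaviour of Fig. 1 can be checked directly with NumPy (a small sketch, assuming `numpy` is available):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 5 samples from a 3-dimensional Dirichlet distribution.
# alpha = (1, 1, 1) gives samples uniformly spread over the 2-simplex,
# matching panel (a) of Fig. 1.
samples = rng.dirichlet(alpha=(1.0, 1.0, 1.0), size=5)

# Every sample lies on the simplex: non-negative entries summing to one,
# exactly the properties needed for convex combinations.
assert np.all(samples >= 0)
assert np.allclose(samples.sum(axis=1), 1.0)

# Asymmetric parameters pull the mass toward one vertex, as in panel (d):
# the mean of component i is alpha_i / sum(alpha).
skewed = rng.dirichlet(alpha=(10.0, 2.0, 2.0), size=10_000)
print(skewed.mean(axis=0))  # close to (10/14, 2/14, 2/14)
```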
A convex combination is a linear combination of the form
$y=\sum_{i}\alpha_{i}x_{i}$, with $\alpha_{i}\geq 0$ and
$\sum_{i}\alpha_{i}=1$. The set of all convex combinations defines the convex
hull of the points $x_{i}$, see Fig. 2.
Figure 2: $P$ is a convex combination of $x_{1},x_{2},x_{3}$, while the
triangle $x_{1}x_{2}x_{3}$ is the set of all possible convex combinations of
$x_{1},x_{2},x_{3}$, i.e., their convex hull [30]. Note that $Q$ is not a
convex combination of $x_{1},x_{2},x_{3}$, since it lies outside this hull.
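As a small numerical sketch (with hypothetical triangle vertices, not the points of Fig. 2), convex-hull membership can be tested by solving for the weights:

```python
import numpy as np

# Three vertices of a triangle in the plane (illustrative values).
x = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# A convex combination: non-negative weights summing to one.
w = np.array([0.2, 0.5, 0.3])
p = w @ x  # p = sum_i w_i x_i lies inside the triangle (its convex hull)
print(p)   # [0.5 0.3]

# A point outside the triangle admits no such weights: solving
# w @ x = q together with sum(w) = 1 forces a negative weight.
q = np.array([1.0, 1.0])
A = np.vstack([x.T, np.ones(3)])               # 3x3 system [x^T; 1] w = [q; 1]
w_q = np.linalg.solve(A, np.append(q, 1.0))
print(w_q)  # contains a negative entry, so q is not a convex combination
```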
To start the process, we need access to a certain amount of data ($m$
samples of a random vector of size $n$), organized in a matrix
$\mathcal{Y}_{data}\in\mathbb{R}^{m\times n}$, where each row corresponds to
one observation:
$\mathcal{Y}_{data}=\left(\begin{array}[]{cccc}\mathcal{Y}_{11}&...&\mathcal{Y}_{1n}\\\
\mathcal{Y}_{21}&...&\mathcal{Y}_{2n}\\\ &...&\\\
\mathcal{Y}_{m1}&...&\mathcal{Y}_{mn}\end{array}\right)=\left(\begin{array}[]{ccc}\mathcal{Y}_{(1,:)}\\\
\mathcal{Y}_{(2,:)}\\\ ...\\\ \mathcal{Y}_{(m,:)}\\\ \end{array}\right)\,.$
(3)
From the unknown data distribution, we want to generate $n_{s}$ simulated
samples:
$\mathbf{Y}_{sim}=\left(\begin{array}[]{ccc}\mathbf{Y}_{(1,:)}\\\
\mathbf{Y}_{(2,:)}\\\ ...\\\ \mathbf{Y}_{(n_{s},:)}\\\ \end{array}\right)\,,$
(4)
where $\mathbf{Y}\in\mathbb{R}^{n_{s}\times n}$. With the original data
$\mathcal{Y}_{data}$, obtained from a specific application (that can be
normalized if necessary), the steps of the proposed methodology are the
following.
First, we randomly choose (uniform distribution) $K$ points
$\mathcal{Y}_{(i,:)}$ ($i=1,...,K$) from the data. Then, we randomly choose
(uniform distribution) one pivot point $\mathcal{Y}_{p}$ among the $K$ points.
Afterwards, the parameters of the Dirichlet distribution are computed:
${\alpha}_{i}=\exp\left(-\gamma||\mathcal{Y}_{(i,:)}-\mathcal{Y}_{p}||^{2}\right)$,
where $\gamma$ is a tradeoff parameter. Note that, at the pivot,
${\alpha}_{i}$ equals one, and it decreases as the distance from the
pivot increases.
We define a threshold $t_{hr}$ and set $\alpha_{i}$ to a negligible value
($10^{-7}$ in our implementation, since Dirichlet parameters must be positive)
whenever $\alpha_{i}<t_{hr}$. This reinforces sampling around the original
data, avoiding sampling in void spaces. After that, we use the parameters
${\alpha_{i}}$ to sample $\mathbf{X}=(X_{1},...,X_{K})\in\mathbb{R}^{K}$ from
the Dirichlet distribution, so that $\sum_{i=1}^{K}X_{i}=1$. Finally, the
$j$-th sample is generated by means of a convex combination [31] of the $K$
data points:
$\mathbf{Y}_{(j,:)}=X_{1}\mathcal{Y}_{(1,:)}+...+X_{K}\mathcal{Y}_{(K,:)}$.
Note that the Dirichlet sample serves as weights to each one of the $K$
points. To generate a new sample, the process is repeated with the random
selection of other $K$ points from the data $\mathcal{Y}_{(i,:)}$
($i=1,...,K)$.
Thus, we need to tune only three parameters: $K$, which is the number of data
points used in the process of generating each simulated sample; $\gamma$, that
defines the shape of the exponential curve; and $t_{hr}$, which is the
threshold that will define zero weight for points far away from the pivot. We
also need to define the number of samples $n_{s}$ and a metric to compute the
norm $||\cdot||$; for instance, the Euclidean norm or the Mahalanobis
distance.
Finally, it should be noted that each sample requires only the matrix-vector
product $\mathbf{Y}_{(j,:)}=A^{T}\mathbf{X}$, in which $\mathbf{Y}_{(j,:)}$
is the sampled point and $A^{T}\mathbf{X}$ is the product of the selected data
points by their weights, sampled from a Dirichlet distribution
($\mathbf{Y}_{(j,:)}\in\mathbb{R}^{n}$, $\mathbf{X}\in\mathbb{R}^{K}$,
$A\in\mathbb{R}^{K\times n}$). To avoid cumbersome computations, only $K$
points are used instead of the whole dataset, and the computational complexity
per sample is $\mathcal{O}(Kn)$, where $K$ is prescribed by the user and $n$
is the size of the random vector.
### 2.3 DSoM algorithm and convergence
The DSoM algorithm is given below (Algorithm 1).
for _j = 1 to $n_{s}$_ do
$A=sample(dataset,K)$
$\mathcal{Y}_{p}=sample(A,1)$
for _i = 1 to K_ do
$\alpha[i]=\exp(-\gamma||\mathcal{Y}[i]-\mathcal{Y}_{p}||^{2})$
if _$\alpha[i]<threshold$_ then
$\alpha[i]=10^{-7}$
end if
end for
$\mathbf{X}=Dirichlet(\alpha_{1},...,\alpha_{K})$
$\mathbf{Y}[j]=A^{T}\mathbf{X}$
end for
Algorithm 1 DSoM(dataset,$n_{s}$,$K$,$threshold$)
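Algorithm 1 can be realized in a few lines of NumPy. The sketch below is ours, not the authors' code; the function name `dsom` and the toy data are illustrative:

```python
import numpy as np

def dsom(data, n_s, K, gamma, t_hr, rng=None):
    """Sketch of Algorithm 1 (DSoM): draw n_s points as Dirichlet-weighted
    convex combinations of K randomly chosen data points.
    data : (m, n) array, one observation per row."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = data.shape
    out = np.empty((n_s, n))
    for j in range(n_s):
        # Randomly (uniformly) pick K data points, then one pivot among them.
        A = data[rng.choice(m, size=K, replace=False)]
        pivot = A[rng.integers(K)]
        # Dirichlet parameters decay with squared distance from the pivot;
        # weights below the threshold are floored at a negligible value.
        alpha = np.exp(-gamma * np.sum((A - pivot) ** 2, axis=1))
        alpha = np.where(alpha < t_hr, 1e-7, alpha)
        # The Dirichlet sample provides the convex-combination weights.
        X = rng.dirichlet(alpha)
        out[j] = X @ A  # equivalent to A^T X
    return out

# Toy usage: resample a small 2-D cloud.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(200, 2))
new = dsom(cloud, n_s=500, K=10, gamma=0.8, t_hr=0.8, rng=rng)
print(new.shape)  # (500, 2)
```

By construction, every generated coordinate stays within the per-coordinate range of the selected data points, which is what keeps the samples near the original cloud.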
To verify that the proposed method converges to the underlying distribution
of the points that lie around a manifold, we check the convergence in mean:
$conv_{1}(n_{s})=\displaystyle\frac{\frac{1}{n_{s}}\sum_{j=1}^{n_{s}}\left|\left|\mathbf{Y}_{(j,:)}-\left(\frac{1}{m}\sum_{i=1}^{m}{\mathcal{Y}_{data}}_{(i,:)}\right)\right|\right|}{\left|\left|\frac{1}{m}\sum_{i=1}^{m}{\mathcal{Y}_{data}}_{(i,:)}\right|\right|}\,.$
(5)
This equation takes the mean of the original data points as a reference and
monitors the convergence of the mean Euclidean norm of the deviations of the
simulated samples. This convergence is not sufficient on its own, because we
also want to ensure convergence of the correlation among the random
variables. Hence, we also consider the convergence of the correlation matrix,
in terms of the Frobenius norm,
$conv_{2}(n_{s})=\displaystyle\frac{\left|\left|\frac{1}{n_{s}-1}\mathbf{Y}^{T}\mathbf{Y}-\frac{1}{m-1}\mathcal{Y}_{data}^{T}\mathcal{Y}_{data}\right|\right|_{F}}{\left|\left|\frac{1}{m-1}\mathcal{Y}_{data}^{T}\mathcal{Y}_{data}\right|\right|_{F}}\,.$
(6)
Indeed, since we are dealing with manifolds, one must be careful with these
convergence metrics and also inspect the simulated points themselves.
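Eqs. (5) and (6) translate directly into code. The sketch below is ours; the toy bootstrap check at the end is an illustration, not one of the paper's experiments:

```python
import numpy as np

def conv1(Y_sim, Y_data):
    # Eq. (5): mean deviation of simulated points from the data mean,
    # normalized by the norm of the data mean.
    mu = Y_data.mean(axis=0)
    return np.mean(np.linalg.norm(Y_sim - mu, axis=1)) / np.linalg.norm(mu)

def conv2(Y_sim, Y_data):
    # Eq. (6): relative Frobenius-norm difference between the (uncentred)
    # correlation matrices of the simulated and original samples.
    n_s, m = len(Y_sim), len(Y_data)
    C_sim = Y_sim.T @ Y_sim / (n_s - 1)
    C_dat = Y_data.T @ Y_data / (m - 1)
    return (np.linalg.norm(C_sim - C_dat, ord='fro')
            / np.linalg.norm(C_dat, ord='fro'))

# Toy check: bootstrapping the data itself gives a small conv2.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, size=(1000, 4))
sim = data[rng.integers(0, 1000, size=5000)]  # bootstrap resample
print(conv1(sim, data), conv2(sim, data))
```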
## 3 Numerical Results
### 3.1 Three simple manifolds
Before applying the sampling strategy to the engineering application, a
verification of the method must be done. Figure 3 shows the results for a two
dimensional ring. The original data was generated by
$\begin{array}[]{l}Z_{1}=\cos(\theta)+U\,,\\\ Z_{2}=\sin(\theta)+U\,,\\\
\end{array}$ (7)
with $\theta\in[0,2\pi]$, and $U$ a uniform random variable with support
$[0,0.5]$. Figure 3 shows $m=1,000$ original data points (black dots) and
$n_{s}=50,000$ new data points (red dots) sampled with parameters $K=10$,
$\gamma=0.8$ and $t_{hr}=0.8$. Note that the simulated data yielded
by DSoM respect the original cloud of points observed around the manifold.
Figure 4 shows the convergence of the mean and of the correlation matrix,
Eqs. (5) and (6). It can be seen that the convergence is quite reasonable. In
addition, Fig. 5 shows a heat map, obtained by kernel density estimation,
where the (clearly non-Gaussian) marginal distributions are plotted.
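The test data of Eq. (7) can be generated as follows. This is a sketch: the paper does not state how $\theta$ is drawn, so a uniform draw on $[0,2\pi]$ and a single shared $U$ per point (as Eq. (7) is written) are assumptions:

```python
import numpy as np

# Two-dimensional ring of Eq. (7): a unit circle perturbed by a
# Uniform[0, 0.5] variable added to both coordinates.
rng = np.random.default_rng(0)
m = 1_000
theta = rng.uniform(0.0, 2.0 * np.pi, size=m)
U = rng.uniform(0.0, 0.5, size=m)
Z = np.column_stack([np.cos(theta) + U, np.sin(theta) + U])

# All points lie in a band around the unit circle.
r = np.linalg.norm(Z, axis=1)
print(r.min(), r.max())
```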
Figure 3: Two-dimensional ring. 50,000 sampled data points (red dots) from
1,000 original data points (black dots). Parameters: $K=5$, $\gamma=0.8$,
$t_{hr}=0.8$.
Figure 4: Convergence curves: (a) $conv_{1}(n_{s})$, $L_{2}$-norm of the mean,
and (b) $conv_{2}(n_{s})$, Frobenius norm of the correlation matrix. Figure
5: Kernel density estimation of the red dots shown in Fig. 3.
To further assess the proposed sampling strategy, two more manifolds are
considered. The first is a Möbius strip, parametrized by
$\begin{array}[]{l}Z_{1}=0.5\sin(0.5t)\cos(0.5t)+\epsilon\,,\\\
Z_{2}=0.5\cos(0.5t)\cos(0.5t)+\epsilon\,,\\\
Z_{3}=0.5\cos(0.5t)+\epsilon\,,\end{array}$ (8)
with $t\in[0,8\pi]$, and $\epsilon$ a Gaussian random variable with zero
mean and standard deviation equal to 0.05. Figure 6 shows $m=1,000$ original
data points (black dots) and $n_{s}=10,000$ new data points (red dots)
sampled with parameters $K=20$, $\gamma=5$ and $t_{hr}=0.9$. Again, the
simulated data generated by DSoM respect the original cloud of points
observed around this manifold.
Figure 6: Möbius strip. 10,000 sampled data points (red dots) from 1,000
original data points (black dots). Parameters: $K=20$, $\gamma=5$,
$t_{hr}=0.9$.
The last manifold tested is the spider geometry obtained from a PLY file [32].
The results are shown in Fig. 7, where DSoM was applied with parameters
$K=300$, $\gamma=0.85$, $t_{hr}=0.7$. Note that, in spite of the challenging
geometry, the method still works well.
Figure 7: Spider geometry. 10,000 sampled data points (red dots) from 5,000
original data points (black dots). Parameters: $K=300$, $\gamma=0.85$,
$t_{hr}=0.7$.
In the next section, we apply the methodology to an engineering problem,
related to the coefficients of a gas seal, where eight parameters are
considered, i.e., the dimension of the random vector is $n=8$.
### 3.2 Gas seal coefficients
Before applying the DSoM, we need to explain the steps of the analysis
performed in this section. We are interested in computing the eight seal
coefficients of a centrifugal compressor. For this purpose, the Reynolds
equation, for a compressible fluid, is used [33, 34, 35, 36, 37, 13]:
$\frac{\partial}{\partial\bar{x}}{\left(PH^{3}\frac{\partial
P}{\partial\bar{x}}\right)}+\frac{\partial}{\partial\bar{z}}{\left(PH^{3}\frac{\partial
P}{\partial\bar{z}}\right)}=\lambda{\frac{\partial(PH)}{\partial\bar{x}}}+\sigma{\frac{\partial(PH)}{\partial\tau}}\,,$
(9)
with $P=p/p_{a}$, $H=h/h^{*}$, $\tau=\omega t$, $\bar{x}=x/L^{*}$,
$\bar{z}=z/L^{*}$, $\lambda=\frac{6\mu UL^{*}}{p_{a}(h^{*})^{2}}$, and
$\sigma=\frac{12\mu\omega(L^{*})^{2}}{p_{a}(h^{*})^{2}}$; where $L^{*}$ is a
characteristic length of the bearing, $h^{*}$ is a characteristic film
thickness, $p_{a}$ is the atmospheric pressure, $\mu$ is the gas dynamic
viscosity, $U$ is a characteristic surface speed, and $\omega$ is the
rotation speed of the shaft.
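For illustration, the nondimensional groups $\lambda$ and $\sigma$ can be evaluated for hypothetical operating values. None of the numbers below are from the paper; they are assumed placeholders (air viscosity, atmospheric pressure, made-up geometry and speed, and $U$ taken as $\omega L^{*}$):

```python
# Illustrative evaluation of the nondimensional groups in Eq. (9).
mu = 1.8e-5         # Pa*s, dynamic viscosity of air (approx.)
p_a = 101_325.0     # Pa, atmospheric pressure
L_star = 0.05       # m, characteristic length (assumed)
h_star = 2.0e-4     # m, characteristic film thickness (assumed)
omega = 1_000.0     # rad/s, shaft rotation speed (assumed)
U = omega * L_star  # m/s, surface speed taken as omega * L* (assumption)

lam = 6.0 * mu * U * L_star / (p_a * h_star**2)
sigma = 12.0 * mu * omega * L_star**2 / (p_a * h_star**2)
print(lam, sigma)
```

With $U=\omega L^{*}$, the two groups satisfy $\sigma = 2\lambda$, which is a quick sanity check on the formulas.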
The pressure field and the corresponding seal coefficients are computed using
ISOTSEAL (a constant-temperature seal code) [36, 38]. The idea is to use
this physics-based model to train an ANN that will serve as a surrogate model
for the system under analysis.
The following training procedure was adopted. A stochastic model is built,
considering some parameters of the deterministic model as random variables.
This is done to explore the surroundings of the chosen configuration, and to
create a more robust ANN, that takes into account uncertainties related to the
model parameters. After the initial training with the stochastic model, the
DSoM is employed to augment data and leverage the ANN training process.
Twenty model parameters are varied, which means that they are modeled as
random variables and serve as inputs to the ANN: seal radius, number of teeth,
tooth pitch, tooth height, radial clearance, gas composition of methane,
ethane, propane, isobutane, butane, hydrogen, nitrogen, oxygen, and $CO_{2}$,
reservoir temperature, reservoir pressure, sump pressure, inlet tangential
velocity ratio, whirl speed, and rotational speed. The outputs of the model
are the eight seal coefficients (stiffness and damping), namely $k_{xx}$,
$k_{xy}$, $k_{yx}$, $k_{yy}$, $c_{xx}$, $c_{xy}$, $c_{yx}$, $c_{yy}$.
Each one of the twenty input variables is modeled with a uniform
distribution, with support extending from 20% below to 20% above the
reference value. A normalization step was not carried out; however,
depending on the problem and its dimensionality, normalization is suggested.
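The input sampling can be sketched as follows; the reference values below are placeholders, not the actual seal parameters:

```python
import numpy as np

# Each of the twenty inputs is Uniform on [0.8 * ref, 1.2 * ref].
rng = np.random.default_rng(0)
ref = np.abs(rng.normal(size=20)) + 0.1  # hypothetical reference values
n_samples = 4_205                        # size of the first training set
inputs = rng.uniform(0.8 * ref, 1.2 * ref, size=(n_samples, 20))

# Every column stays within +/-20% of its reference value.
assert np.all(inputs >= 0.8 * ref) and np.all(inputs <= 1.2 * ref)
print(inputs.shape)  # (4205, 20)
```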
The constructed ANN has 20 neurons in the first layer (input parameters), and
8 neurons in the last layer (seal coefficients). There are 2 hidden layers
with sixteen neurons each, activated by the ReLU function.
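The parameter count of this architecture can be verified directly:

```python
# Parameter count of the described network: 20 -> 16 -> 16 -> 8,
# dense layers with biases (the ReLU activation adds no parameters).
layers = [20, 16, 16, 8]
params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))
print(params)  # 20*16+16 + 16*16+16 + 16*8+8 = 744
```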
The first ANN was trained with 4,205 points sampled from the stochastic
physics-based model. The second ANN was trained with DSoM-augmented data:
100,000 points were generated by sampling from the previous database. In
Fig. 8, the effectiveness of DSoM is exhibited in terms of the loss function.
Note that, with the proposed data augmentation procedure, overfitting is
removed from the ANN: the distance between the loss function in training
(blue line) and the loss function in validation (orange line) is close to
zero in the second scenario.
Figure 8: Neural network loss function in different scenarios. (a) ANN trained
with 4,205 points from the stochastic physics-based model, and (b) ANN trained
with 100,000 points, augmenting data using DSoM.
Figure 9 shows samples of the seal coefficients. The black dots were
computed using the stochastic physics-based model, and the red dots were
obtained by taking these original data into account and employing DSoM to
increase the number of points. Since the dimension of the random vector is
greater than three, each plot in Fig. 9 shows a cloud of points for three
different parameters. It can be observed that the manifold structure is
respected.
Figure 9: Stiffness seal coefficients plotted against each other. DSoM
parameters: $\gamma=6\times 10^{-13}$, $t_{hr}=0.5$.
Figure 10 shows the convergence of the mean and of the correlation matrix,
Eqs. (5) and (6). It can be seen that the convergence is also reasonable for
this application, with convergence values lower than 5%.
Figure 10: Convergence curves: (a) $conv_{1}(n_{s})$, $L_{2}$-norm of the
mean, and (b) $conv_{2}(n_{s})$, Frobenius norm of the correlation matrix.
## 4 Concluding remarks
This paper proposes a new methodology to sample on manifolds using the
Dirichlet distribution, which is simple and effective. Dirichlet sampling
on manifolds (DSoM) requires an original sample set that can be obtained from
simulation or from experiments. DSoM generates samples that follow the
unknown distribution of the original dataset. This can be helpful in
uncertainty quantification, stochastic optimization, and ANN training.
The methodology was successfully applied to three simple manifolds, and an
engineering application, related to seal coefficients. The next step is to
apply it to different problems to test its versatility. In addition, a formal
mathematical proof of the efficiency of the DSoM should be pursued.
## References
* [1] S. Weisberg, Applied Linear Regression, 1st Edition, Wiley, 1985.
* [2] T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning, 2nd Edition, Springer, 2009.
* [3] S. Haykin, Neural Networks and Learning Machines, 1st Edition, Pearson, 2009.
* [4] M. A. Nielsen, Neural networks and deep learning, Determination Press, 2015.
* [5] I. N. da Silva, D. H. Spatti, R. A. Flauzino, L. H. B. Liboni, S. F. dos Reis Alves, Artificial Neural Networks: A Practical Course, 1st Edition, Springer Publishing Company, Incorporated, 2016.
* [6] S. Lawrence, C. Giles, A. C. Tsoi, A. Back, Face recognition: a convolutional neural-network approach, Neural Networks, IEEE Transactions on 8 (1) (1997) 98–113. doi:10.1109/72.554195.
URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=554195&tag=1
* [7] J. Kocic, N. S. Jovicic, V. Drndarevic, An end-to-end deep neural network for autonomous driving designed for embedded automotive platforms., Sensors 19 (9) (2019) 2064.
URL http://dblp.uni-trier.de/db/journals/sensors/sensors19.html#KocicJD19
* [8] K. Khamaru, M. Wainwright, Convergence guarantees for a class of non-convex and non-smooth optimization problems, in: Proceedings of the 35th International Conference on Machine Learning, PMLR, Vol. 80, 2018, pp. 2601–2610.
* [9] Y. Ma, Y. Fu, Manifold Learning Theory and Applications, CRC Press, 2012.
* [10] C. Soize, R. Ghanem, Data-driven probability concentration and sampling on manifold, Journal of Computational Physics 321 (2016) 242–258.
* [11] R. Ghanem, Statistical sampling on manifolds for expensive computational models, CDSE Days (2018).
* [12] R. Zhang, R. Ghanem, Normal-bundle bootstrap, arXiv (07 2020).
* [13] L. San Andrés, Modern Lubrication Theory, Gas Film Lubrication, Texas A&M University Digital Libraries, 2010.
* [14] E. L. Lima, Elementos de Topologia Geral, 1st Edition, Editora USP, São Paulo, 1970.
* [15] I. M. James, History of Topology, Elsevier B. V., Netherlands, 1999.
* [16] M. Spivak, Calculus on Manifolds, Benjamin, New York, 1965.
* [17] A. Pressley, Elementary Differential Geometry, 2nd Edition, Springer-Verlag, New York, 2010.
* [18] G. Casella, R. L. Berger, Statistical Inference, Duxbury Press, 2002.
* [19] M. DeGroot, M. Schervish, Probability and Statistics, 3rd Edition, Addison-Wesley, 2002.
* [20] J. Tenenbaum, V. de Silva, J. Langford, A global geometric framework for nonlinear dimensionality reduction, Science 290 (2000) 2319–2323. doi:10.1126/science.290.5500.2319.
* [21] S. Roweis, L. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science 290 (2000) 2323–2326. doi:10.1126/science.290.5500.2323.
* [22] M. Belkin, P. Niyogi, Laplacian eigenmaps and spectral techniques for embedding and clustering, Advances in Neural Information Processing Systems 14 (2002).
* [23] B. Nadler, S. Lafon, R. Coifman, I. Kevrekidis, Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators, Advances in Neural Information Processing Systems 18 (2005).
* [24] D. Donoho, C. Grimes, Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data, Proceedings of the National Academy of Sciences 100 (2003) 5591–5596. doi:10.1073/pnas.1031596100.
* [25] B. Schölkopf, A. Smola, K.-R. Müller, Nonlinear component analysis as a kernel eigenvalue problem, Neural Computation 10 (1998) 1299–1319. doi:10.1162/089976698300017467.
* [26] D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.
* [27] D. Blei, A. Ng, M. Jordan, Latent Dirichlet allocation, Journal of Machine Learning Research 3 (2003) 993–1022.
* [28] D. Rubin, The Bayesian bootstrap, Annals of Statistics 9 (1981) 130–134.
* [29] S. Kotz, N. Balakrishnan, N. L. Johnson, Multivariate Distributions: Reduced Models and Applications, Vol. 1, Wiley, 2000.
* [30] Wikipedia, Convex combinations, https://en.wikipedia.org/wiki/Convex_combination (October 2020).
* [31] R. T. Rockafellar, Convex Analysis, Princeton University Press, 1970.
* [32] J. Burkardt, PLY files: an ASCII polygon format, https://people.sc.fsu.edu/~jburkardt/data/ply/ply.html (June 2012).
* [33] W. Gross, Gas Film Lubrication, John Wiley & Sons, Inc., 1962.
* [34] C. Pan, Gas Bearing Tribology: Friction, Lubrication and Wear, A.Z. Szeri, Hemisphere Pub. Corp., 1980.
* [35] B. Hamrock, Fundamentals of Fluid Film Lubrication, McGrawHill, Inc., 1994.
* [36] G. Kleynhans, A two-control-volume bulk-flow rotordynamic analysis for smooth-rotor/honeycomb-stator gas annular seals, Ph.D. thesis, Texas A&M University (1996).
* [37] M. Faria, L. San Andrés, On the numerical modeling of high speed hydrodynamic gas bearings, ASME Journal of Tribology 122 (1) (2000) 124–130.
* [38] C. Holt, D. W. Childs, Theory versus experiment for the rotordynamic impedances of two hole-pattern-stator gas annular seals, Journal of Tribology 124 (1) (2002) 137–143.
# Classification and Segmentation of Pulmonary Lesions in CT images using a
combined VGG-XGBoost method, and an integrated Fuzzy Clustering-Level Set
technique.
Niloofar Akhavan Javan Niloofar Akhavan Javan is a Master's graduate from
the Department of Computer Engineering, Khayyam University, Iran. She is the
corresponding author for this work. Email: [email protected]
Ali Jebreili Ali Jebreili is an Assistant Professor of Computer Engineering at
Khayyam University, Iran. Babak Mozafari Babak Mozafari is a Master's graduate
from the Department of Computer Engineering, Khayyam University, Iran.
Morteza Hosseinioun Morteza Hosseinioun is a recent Master's graduate
from the Department of Computer Engineering, School of Science and
Engineering, Sharif University of Technology, Iran. Email:
[email protected] Department of Computer Engineering, Sharif
University of Technology
###### Abstract
Given that lung cancer is one of the deadliest diseases and many die from it
every year, early detection and diagnosis of this disease are valuable,
preventing the cancer from growing and spreading. If cancer is diagnosed at
an early stage, the patient's life can be saved. However, pulmonary disease
diagnosis is currently performed by human experts, which is time-consuming,
requires a specialist in this field, and is prone to error. Our goal is to
develop a system that can detect
and classify lung lesions with high accuracy and segment them in CT-scan
images. In the proposed method, first, features are extracted automatically
from the CT-scan image; then, the extracted features are classified by
Ensemble Gradient Boosting methods. Finally, if there is a lesion in the CT-
scan image, using a hybrid method based on [1], including Fuzzy Clustering and
Level Set, the lesion is segmented. We collected a dataset of CT-scan images
of pulmonary lesions; the target population was patients in Mashhad.
The collected samples were then tagged by a specialist. We used this dataset
for training and testing our models. Finally, we were able to achieve an
accuracy of 96% for this dataset. This system can help physicians to diagnose
pulmonary lesions and prevent possible mistakes.
###### Index Terms:
Pulmonary Lesion Classification and Segmentation, Deep Learning, VGG
Convolutional Neural Networks, XGBoost, Level Set Methods.
## I Introduction
Cancer is a group of diseases that are characterized by the growth of
uncontrollable abnormal cells. If the spread of the abnormal cell is not
controlled, it can lead to death. The cause is unknown for many cancers,
especially those that occur during childhood. Many factors that cause cancer
are known, including lifestyle factors, such as smoking and overweight, and
unmodifiable factors, such as hereditary mutations and hormonal and immune
conditions. These risk factors may act simultaneously or sequentially to
initiate and/or promote cancer growth. In 2020, the American Cancer Society's
estimates for the United States indicated roughly 228,820 new cases of lung
cancer (116,300 in men and 112,520 in women) and approximately 135,720 deaths
from lung cancer (72,500 in men and 63,220 in women) [2]. Lung cancer is the
second most common cancer in both women and men (excluding skin cancer) and
is one of the leading causes of cancer deaths among men and women. The
mortality rate from lung cancer is higher than that of colon, breast, and
prostate cancer combined. However, the
detection of small lung nodules from volumetric CT-scans is also tricky, and
for this reason, many CAD tools are designed to compensate for this problem
[3, 4]. If lung cancer is detected at an early stage, when it is small and has
not spread yet, a person has a greater chance of surviving. Computer-aided
diagnosis (CAD) tools are used to classify normal and abnormal lung tissue
and may improve the radiologist's ability [5, 6].
Primary lung cancer begins in the lungs, while secondary lung cancer
begins elsewhere in the body and spreads to the lungs. A pulmonary nodule is
an oval or round growth in the lungs. The size of a nodule varies from a few
millimeters to 5 centimeters. Given the variation in nodule shape and size,
classification is a challenging task. Detection of large malignant nodules
is straightforward, but identifying small malignant nodules is difficult
[7]. The lung cancer death rate has declined by 45% since 1990 in men and by
19% since 2002 in women due to reductions in smoking, with the pace of decline
quickening over the past decade; from 2011 to 2015, the rate decreased by 3.8%
per year in men and by 2.3% per year in women [1]. According to Cancer
Research UK, the five-year survival rate for patients diagnosed in stage one
is more than 55%, while the survival rate in patients with lung cancer in
stage four is almost 5% [8]. Computer-aided diagnosis systems (CAD) are
effective schemes for identifying and detecting various pulmonary lesions. The
main purpose of these systems is to assist the radiologist in various stages
of diagnosis. The CAD system output acts as the second opinion for
radiologists before the final diagnosis. In this way, researchers are
developing more auto CAD systems for lung cancer. Many different publications
have provided auto nodule detection systems using image processing, including
various features extraction, classification, and segmentation techniques.
## II Related works
The diagnosis of pulmonary lesions is a very important topic, and a lot of
work has been done in this field. However, due to the many different types of
pulmonary lesions and the difficulty of diagnosis in this field, researchers
constantly seek to increase the accuracy of existing systems. Thus, we have
tried to design and train a more accurate system than the existing ones.
Various existing works are as follows. Ying Xie et al. applied an
interdisciplinary mechanism based on metabolomics and six machine learning
methods and reached a sensitivity of 98.1%, an AUC of 0.989, and a
specificity of 100.0%. The six machine learning methods were AdaBoost,
K-nearest neighbor (KNN), Naive Bayes, Support Vector Machine (SVM), Random
Forest, and Neural Network; they recommended Naive Bayes as the most suitable
method [9].
Also, Netto et al. worked on the automatic segmentation of pulmonary nodules
with growing neural gas (GNG) and SVM. Their purpose was to automatically
detect lung nodules in computed tomography images, using the GNG algorithm to
isolate structures with properties very similar to lung nodules. They then
used the distance transform to separate the segmented structures connected to
the blood vessels and bronchi. Finally, they used a set of shape and texture
features with an SVM classifier to classify these structures as lung nodules
[10]. Additionally, Lee et al. used an ensemble of classifiers, random
forest, in two stages: the first stage was diagnosing lung nodules and the
second stage was false-positive reduction [11]. Moreover, Yi et al. worked
with five attributes, including intensity information, shape index, 3D space,
and location. They worked more on segmentation issues and reached 81%
accuracy [12]. Javid et al. worked on identifying nodules attached to
structures such as the heart and muscles. A brief analysis of CT histograms
was performed to select an appropriate threshold for better results, a
simple morphological closing was used to segment the lung area, and K-means
clustering was applied for the initial detection and segmentation of
potential nodules. This segmentation eventually reached a sensitivity of
91.65% [4].
## III Material
To train and evaluate the model presented in this article, we prepared a
dataset of CT-scan images of local patients' pulmonary lesions. More than
10,000 slices were used to prepare this dataset. All images were tagged and
classified by specialist physicians. The dataset contains a large number of
lesions of various types. Due to the large size of the model, suitable
hardware resources and sufficient time were required for training; both were
provided, and all steps were completed successfully.
## IV Proposed Method
Our proposed model is a complete and automatic system for the classification
and segmentation of pulmonary lesions. In the first stage, a deep
convolutional neural network automatically extracts features from CT-scan
images. In the second stage, based on extracted features, an Ensemble Gradient
boosting classifier identifies pulmonary lesions. Finally, in the third stage,
CT-scan images are segmented by a hybrid fuzzy clustering – Level set method
based on [1]. Among the proposed system’s positive features, the reduction of
diagnosis time and high accuracy can be noted. The proposed method is a
complete CAD system. After receiving the image, it automatically performs all
the steps of feature extraction, classification, and segmentation and
provides the final output. Feature extraction means deriving higher-level
information from raw pixel values that differentiates between categories. In
classic methods, image features are extracted with specific algorithms such
as HOG, Haar, SIFT, LBP, or GIST; a classifier, such as SVM, logistic
regression, random forest, or KNN, is then trained on these extracted
features to obtain the final model. One of the problems of classic methods is
choosing and designing a suitable feature extraction method. A variety of
different methods have been proposed over the years. Each method often works
well in a particular field, and for new issues, there is a need to improve and
change the feature extraction technique. The problem with traditional methods
is that the feature extraction method cannot be adapted based on the classes
and images. Therefore, if the selected feature does not provide the
abstraction needed to identify the categories, the classification model's
accuracy will be very low, regardless of the type of classification strategy
used. The challenge with
classic methods is always to find a distinctive feature among several
features. Also, achieving accuracy, such as human accuracy, has been a big
challenge. That is why it took years to have a flexible computer vision system
(such as OCR, face recognition, image categorization, and object recognition)
that works with various data. Another problem with these methods is that they
work entirely differently from how humans learn to recognize things.
Immediately after birth, a child cannot understand his surroundings, but by
developing and processing data, he learns to identify things. This philosophy
is behind deep learning: a computational model is trained on existing
datasets and then automatically extracts the best features. Most deep neural
networks require a lot of memory and computation, especially during training;
hence, this is an essential concern with these networks.
### IV-A VGG Convolutional Neural Network
At present, convolutional neural networks have managed to surpass humans in
computer vision tasks, such as image classification, i.e., determining which
class an image belongs to. In the VGG network [13], for the first time, tiny
$3\times 3$ filters were used in each convolution layer, combined as
sequences of convolutions. This is contrary to the principles of LeNet, which
uses large convolutions to capture similar features in an image, and of
AlexNet, which uses $9\times 9$ or $11\times 11$ convolution filters; the
filters in the VGG network shrink and approach the $1\times 1$ convolutions
that LeNet sought to avoid. These ideas are also used in newer architectures
such as Inception and ResNet [13, 14]. The VGG architecture is illustrated in
Figure 1.
#### IV-A1 Specification and Parameters
For the first convolutional layer, the network must learn $64$ filters of size
$3\times 3$ with input depth $3$. In addition, each of the $64$ filters has a
bias, so the total number of parameters is $64\times 3\times 3\times
3+64=1792$. The same logic can be applied to the other convolutional layers.
The depth of an output layer equals the number of convolution filters. The
padding is set to 1 pixel, so the spatial resolution is maintained through the
convolutional layers and only changes at the pooling layers. Therefore, the
first convolutional layer's output is $64\times 224\times 224$. The pooling
layers learn nothing, so they contribute $0$ learnable parameters; to
calculate a pooling layer's output, we only need the window size and the
stride. To calculate the number of parameters in a fully connected layer, we
multiply the number of units in the previous layer by the number of units in
the current layer. Following the previous paragraph's logic, the number of
units in the last convolutional layer is $512\times 7\times 7$, so the total
number of parameters in the first fully connected layer is $7\times 7\times
512\times 4096+4096=102{,}764{,}544$ [13, 15, 14]. As already mentioned, this
network consists of two parts: a convolutional section and a fully connected
section. The first $5$ blocks form the convolutional section, which is
responsible for feature extraction; the fully connected section consists of
three dense layers that perform classification.
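As a quick sanity check, the parameter counts above can be reproduced with a few lines of Python (the layer shapes follow the standard VGG-16 configuration; the helper names are ours):

```python
def conv_params(n_filters, k, in_depth):
    # each filter has k*k*in_depth weights plus one bias
    return n_filters * (k * k * in_depth + 1)

def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

first_conv = conv_params(64, 3, 3)             # 64 filters of 3x3 over an RGB input
first_dense = dense_params(7 * 7 * 512, 4096)  # last conv volume into 4096 units
print(first_conv, first_dense)
```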
Figure 1: VGG Diagram [13]
#### IV-A2 Transfer Learning
The dataset we provide contains many intra-class patterns among the positive
samples, so we need a model that can learn these different intra-class
patterns. The VGG network can capture complex patterns because of its high
number of learnable parameters; hence, given enough data, time, and proper
hardware to train this network, we can achieve high accuracy. First, the
default fully connected part of the VGG network is removed, and the required
fully connected layers are added after the convolution layers. To improve the
model's generalization and prevent overfitting, a dropout layer with rate
$0.1$ is used; in practice, this layer increases the accuracy of the model on
the test data. In the modified network, the convolution layers were first
frozen and only the fully connected layers were trained, because the initial
values of the fully connected layer weights may differ greatly from their
optimal values: training the convolutional layers at this point could reduce
the system's overall accuracy by inappropriately changing the convolution
weights. Hence, only the fully connected layers are trained at this stage, and
the weights of the convolutional layers are left unchanged [16]. The results
are as follows (Figure 2 shows the confusion matrix of these results).
Figure 2: Transfer Learning Results Figure 3: Transfer Learning / Training and
Validation / Accuracy and Loss
As seen in Figure 3, the loss and accuracy reach relative stability and no
longer change after about ten epochs. Continuing training in this situation
may lead to overfitting, so training is stopped.
#### IV-A3 Fine-tuning
In the next step, we fine-tuned the network so that only the weights of the
last two blocks and the fully connected layers are trained [17]. Figure 4
shows the results of the fine-tuning step. The last two blocks of the VGG
network comprise convolutional and pooling modules; training these modules
increases the accuracy of the results.
Figure 4: Fine-tuning Results Figure 5: Fine-tuning / Training and Validation
/ Accuracy and Loss
As can be seen in Figure 5, after about ten epochs the changes in loss and
accuracy are significantly reduced and the curves converge. Therefore, at this
stage, we stopped training.
#### IV-A4 Feature Extraction
Our goal is to use the convolutional part of the VGG network as an automatic
and accurate feature extractor. One of the ideas used in this article is to
use an ensemble-based gradient boosting classifier instead of a fully
connected layer to increase the accuracy of pulmonary lesion classification.
Hence, we extracted features from the CNN layers of the VGG model and, instead
of the fully connected layer, used classifiers based on ensemble and gradient
boosting methods. As shown below, this technique achieves higher accuracy.
### IV-B Classification
At this stage, different classifiers were trained based on the extracted
features of the VGG convolutional neural network. Then the accuracy of each of
these models was calculated on the test data. The results of these models are
presented below.
#### IV-B1 Ensemble Methods
Ensemble methods are a machine learning concept whose main idea is to combine
multiple models into a single, better algorithm: multiple classifiers are
combined to build a more robust model whose accuracy is greater than that of
each of the initial models. One way of combining the results of classifiers is
the majority vote. Voting and averaging are two of the simplest ensemble
methods; both are easy to understand and implement, with voting used for
classification and averaging for regression. In both, the first step is to
create multiple classification/regression models from the training data. Each
base model can be created from different divisions of the same training
dataset with the same algorithm, from the same dataset with different
algorithms, or by any other method [18].
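The voting and averaging combiners described above can be sketched in a few lines of Python (the toy predictions are illustrative):

```python
from collections import Counter

def majority_vote(labels):
    """Hard voting: the most common label among the base classifiers wins."""
    return Counter(labels).most_common(1)[0][0]

def average(values):
    """Averaging: the regression counterpart of voting."""
    return sum(values) / len(values)

print(majority_vote([1, 0, 1]))   # two of three classifiers predict class 1
print(average([3.0, 4.0, 5.0]))
```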
#### IV-B2 Boosting and Bagging
Bagging and boosting are both ensemble algorithms [19] that combine a set of
weak learners to create a strong learner that performs better. The leading
causes of error are noise, bias, and variance; ensembles help reduce these
factors. These methods are designed to improve the stability and accuracy of
machine learning algorithms. Combining several classifiers reduces the final
model's variance, especially for unstable classifiers, and may produce a more
reliable model. In bagging, each element has the same probability of appearing
in a new dataset; in boosting, the elements are weighted, so some will be more
involved in the training process. In bagging, the training phase is parallel
(each model is built independently), whereas in boosting the new learners are
trained sequentially: each classifier is trained on the dataset taking into
account the previous classifiers' success, and after each training step the
weights are redistributed. Misclassified data receives a higher weight, so
that the next classifier is forced to focus on it. To predict the class of new
data, we only need to apply the learners to the newly observed data. Bagging
results are obtained by averaging the responses of all learners (or by
majority vote). In boosting, however, a second set of weights is assigned to
the learners to obtain a weighted average of all classifiers' results: at the
boosting training stage, the algorithm assigns a higher weight to a classifier
with good results than to a weak one, so boosting also needs to keep track of
the learners' errors. Boosting comprises three simple steps:
* •
A base model $F_{0}$ is defined for predicting the target variable $y$. This
model is associated with a residual $(y-F_{0})$.
* •
A new model $H_{1}$ is fitted to the residual of the previous stage.
* •
$F_{0}$ and $H_{1}$ are combined to give $F_{1}$, the boosted version of
$F_{0}$, whose mean squared error is lower than that of $F_{0}$:
$F_{1}(x)=F_{0}(x)+H_{1}(x)$ (1)
To improve the performance of $F_{1}$, we can build $F_{2}$ from the residual
of $F_{1}$:
$F_{2}(x)=F_{1}(x)+H_{2}(x)$ (2)
This can be repeated for $m$ iterations until the residual reaches our lowest
target value:
$F_{m}(x)=F_{m-1}(x)+H_{m}(x)$ (3)
As a first step, the model must begin with a function $F_{0}(x)$ that
minimizes the loss function, here the MSE (mean squared error):
$F_{0}(x)=\mathrm{argmin}_{\gamma}\displaystyle\sum_{i=1}^{n}L(y_{i},\gamma)$ (4)
$\mathrm{argmin}_{\gamma}\displaystyle\sum_{i=1}^{n}L(y_{i},\gamma)=\mathrm{argmin}_{\gamma}\displaystyle\sum_{i=1}^{n}(y_{i}-\gamma)^{2}$
(5)
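The three boosting steps above can be sketched for squared error in plain Python. The depth-1 "stump" weak learner and the toy data are illustrative choices of ours, not the configuration used in the paper:

```python
def fit_stump(x, residuals):
    """Fit a depth-1 regression stump: threshold on x, predict residual means."""
    best = None
    for t in x:
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for xi, r in zip(x, residuals) if xi <= t) + \
              sum((r - rm) ** 2 for xi, r in zip(x, residuals) if xi > t)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost(x, y, m_rounds=5):
    f0 = sum(y) / len(y)                          # step 1: base model F0 (mean)
    pred = [f0] * len(y)
    mses = [sum((yi - p) ** 2 for yi, p in zip(y, pred)) / len(y)]
    for _ in range(m_rounds):
        residuals = [yi - p for yi, p in zip(y, pred)]  # y - F_{m-1}
        h = fit_stump(x, residuals)               # step 2: fit H_m on residuals
        pred = [p + h(xi) for p, xi in zip(pred, x)]    # step 3: F_m = F_{m-1} + H_m
        mses.append(sum((yi - p) ** 2 for yi, p in zip(y, pred)) / len(y))
    return pred, mses

x = [1, 2, 3, 4, 5, 6]
y = [1.2, 1.0, 1.1, 3.9, 4.2, 4.0]
pred, mses = boost(x, y)
print(mses)
```

Because each stump minimizes the residual sum of squares, the training MSE is non-increasing from round to round.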
#### IV-B3 XGBoost
XGBoost is similar to the gradient boosting algorithm, but it has a few tricks
up its sleeve that make it stand out from the rest, and it has proven itself
in terms of performance and speed. Unlike other GBM (gradient boosting
machine) methods, which first choose the step direction and then the step
size, XGBoost determines the step directly by solving the following equation
for each $x$ in the data:
$\frac{\delta L(y,f^{(m-1)}(x)+f_{m}(x))}{\delta f_{m}(x)}=0$ (6)
By doing a second-order Taylor expansion of the loss function around the
current estimate $f^{(m-1)}(x)$, we get:
$L\left(y,f^{(m-1)}(x)+f_{m}(x)\right)\approx
L(y,f^{(m-1)}(x))+g_{m}(x)f_{m}(x)+\frac{1}{2}h_{m}(x)f_{m}(x)^{2}$ (7)
where $g_{m}(x)$ is the gradient, the same as in GBM, and $h_{m}(x)$ is the
Hessian (second-order derivative) at the current estimate:
$h_{m}(x)=\frac{\delta^{2}L(y,f(x))}{\delta f(x)^{2}}$ (8)
$f(x)=f^{(m-1)}(x)$ (9)
Then the loss function can be rewritten as:
$L(f_{m})\approx\displaystyle\sum_{i=1}^{n}\left[g_{m}(x_{i})f_{m}(x_{i})+\frac{1}{2}h_{m}(x_{i})f_{m}(x_{i})^{2}\right]+const.\\\
\propto\sum_{j=1}^{T_{m}}\sum_{i\in
R_{jm}}\left[g_{m}(x_{i})w_{jm}+\frac{1}{2}h_{m}(x_{i})w_{jm}^{2}\right]$
(10)
While Gradient Boosting follows negative Gradients to optimize the loss
function, XGBoost uses Taylor expansion to calculate the value of the loss
function for different base learners. XGBoost does not explore all possible
tree structures but builds a tree greedily, and its regularization term
penalizes building a complex tree with several leaf nodes [20, 21].
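The second-order idea can be checked on a toy example: with squared loss $L=\frac{1}{2}(y-f)^{2}$ the gradient is $g=f-y$ and the Hessian is $h=1$, so the leaf weight minimizing the quadratic approximation $\sum_{i}(g_{i}w+\frac{1}{2}h_{i}w^{2})$ is $w^{*}=-\sum g_{i}/\sum h_{i}$, i.e. the mean residual (the numbers below are ours):

```python
y = [3.0, 5.0, 7.0]
f = [4.0, 4.0, 4.0]                 # current ensemble prediction f^(m-1)
g = [fi - yi for fi, yi in zip(f, y)]   # gradients of squared loss
h = [1.0] * len(y)                      # Hessians of squared loss
w_star = -sum(g) / sum(h)               # Newton-step leaf weight

def quad_obj(w):
    # quadratic approximation of the loss change for a common leaf weight w
    return sum(gi * w + 0.5 * hi * w * w for gi, hi in zip(g, h))

print(w_star, quad_obj(w_star))
```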
#### IV-B4 Classification Results
In the first attempt, we used the output of the last CNN layer as the feature
vector, which contains 25088 features. In the second attempt, we instead used
the output of the first dense layer, which achieved a higher final accuracy.
The results are shown in Figures 6 to 11.
Figure 6: Simple Decision Tree Confusion Matrix Figure 7: Random Forest
Confusion Matrix Figure 8: Extremely Randomized Tree Confusion Matrix Figure
9: AdaBoost Confusion Matrix Figure 10: XGBoost Confusion Matrix Figure 11:
SVM Confusion Matrix
As can be seen, the XGBoost classifier's accuracy is 96%, while the accuracy
of the fully connected layer is 94%; thus, using the XGBoost classifier
instead of the dense layers of the VGG network leads to higher accuracy
[16, 22].
### IV-C Pulmonary Lesions Segmentation
Segmentation is the process of partitioning an image into different meaningful
segments. These segments often correspond to different tissue classes, organs,
pathologies, or other biologically relevant structures in medical imaging.
Medical image segmentation is made difficult by low contrast, noise, and other
imaging ambiguities [7]. A major difficulty of medical image segmentation is
the high variability in medical images. The result of the segmentation can
then be used to obtain further diagnostic insights. Level set methods, based
on partial differential equations (PDEs), are effective in the medical image
segmentation tasks. However, to use this method, determining its control
parameters is very important. Hence, other methods are used to determine these
parameters. One of these techniques is FCM clustering. Using this method, with
the medical image’s initial segmentation into several clusters, the level set
initial parameters can be set automatically [23, 24, 25, 26].
#### IV-C1 Fuzzy C-Means Clustering
The fuzzy C-means algorithm is very similar to the k-means algorithm. The
steps are as follows:
* •
Choose a number of clusters.
* •
Assign coefficients randomly to each data point for being in the clusters.
* •
Repeat until the algorithm has converged
* –
Compute the centroid for each cluster.
* –
For each data point, compute its coefficients of being in the clusters.
Each point $x$ has a set of coefficients giving its degree of membership in
the $k$-th cluster, $w_{k}(x)$. In fuzzy C-means, the center of a cluster is
the average of all points, weighted by their degree of membership:
$C_{k}=\frac{\sum_{x}w_{k}(x)^{m}\,x}{\sum_{x}w_{k}(x)^{m}}$ (11)
where $m$ is a hyperparameter that controls how fuzzy the clustering is. The
goal of the fuzzy C-means algorithm is to minimize the following objective
function [27]:
$\mathrm{argmin}_{C}\displaystyle\sum_{i=1}^{n}\displaystyle\sum_{j=1}^{c}w_{ij}^{m}\left\lVert
x_{i}-c_{j}\right\rVert^{2}$ (13)
where:
$w_{ij}=\frac{1}{\displaystyle\sum_{k=1}^{c}\left(\frac{\left\lVert
x_{i}-c_{j}\right\rVert}{\left\lVert
x_{i}-c_{k}\right\rVert}\right)^{\frac{2}{m-1}}}$ (14)
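The update rules above, alternating the membership-weighted center update and the membership update, can be sketched on 1-D toy data in plain Python (the data, $c=2$ and $m=2$ are illustrative choices of ours):

```python
import random

def fcm(xs, c=2, m=2.0, iters=50, eps=1e-9):
    """Fuzzy C-means on a list of scalars."""
    random.seed(0)
    # random membership matrix, each row normalized to sum to 1
    w = []
    for _ in xs:
        row = [random.random() for _ in range(c)]
        s = sum(row)
        w.append([v / s for v in row])
    centers = [0.0] * c
    for _ in range(iters):
        for j in range(c):  # center update: w^m-weighted average of all points
            num = sum(w[i][j] ** m * xs[i] for i in range(len(xs)))
            den = sum(w[i][j] ** m for i in range(len(xs)))
            centers[j] = num / den
        for i, x in enumerate(xs):  # membership update from relative distances
            d = [abs(x - cj) + eps for cj in centers]
            for j in range(c):
                w[i][j] = 1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
    return centers, w

centers, w = fcm([1.0, 1.1, 0.9, 5.0, 5.1, 4.9])
print(sorted(centers))
```

On these two well-separated groups the centers converge near 1.0 and 5.0, and each row of memberships sums to one by construction.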
#### IV-C2 Level Set Methods
In this method, the idea of evolving a surface ($\Phi$) instead of a front
boundary ($C$) is used: the front boundary is defined as the set of points
with zero elevation ($\Phi=0$). As the surface evolves, its zero level set
takes on various shapes; the contours may split and merge, and no special care
is required for topological changes. Therefore, this method is well suited for
our application.
#### IV-C3 The mathematical study of Level Set Methods
Assume that the point $x=(x,y)$ belongs to the evolving front, so it changes
over time, with $x(t)$ its position at time $t$. At any time $t$, for each
point $x(t)$ on the front, the surface by definition has zero height, thus:
$\phi(x(t),t)=0$ (15)
To obtain the boundary we only require the zero level set, although $\phi$
may carry any value elsewhere. Assuming an initial $\phi$ at $t=0$, we may
obtain $\phi$ at any time $t$ from the equation of motion for
$\delta\phi/\delta t$. By the chain rule we have:
$\frac{d\phi(x(t),t)}{dt}=0$ (16)
$\frac{\delta\phi}{\delta x(t)}\frac{\delta x(t)}{\delta
t}+\frac{\delta\phi}{\delta t}=0$ (17)
$\nabla\phi\cdot x_{t}+\phi_{t}=0$ (18)
Here $\nabla\phi$ denotes $\delta\phi/\delta x$. Also, $x(t)$ evolves under
the force $F$ along the surface normal, so:
$x_{t}=F\left(x(t)\right)n$ (19)
where $n=\nabla\phi/\lvert\nabla\phi\rvert$, and the previous equations of
motion are rewritten as follows:
$\phi_{t}+\nabla\phi\cdot x_{t}=0$ (20)
$\phi_{t}+\nabla\phi\cdot Fn=0$ (21)
$\phi_{t}+F\,\nabla\phi\cdot\frac{\nabla\phi}{\lvert\nabla\phi\rvert}=0$
(22)
$\phi_{t}+F\lvert\nabla\phi\rvert=0$ (23)
The last equation is the equation of motion for $\phi$. If $\phi$ is given at
time $t=0$ and its equation of motion is known, it is possible to find
$\phi(x,y,t)$ at any time $t$ by evolving $\phi$ over time. An interesting
feature of $\phi$ is that the curvature of the curve is given by:
$k=\nabla\cdot\frac{\nabla\phi}{\lvert\nabla\phi\rvert}=\frac{\phi_{xx}\phi_{y}^{2}-2\phi_{xy}\phi_{x}\phi_{y}+\phi_{yy}\phi_{x}^{2}}{(\phi_{x}^{2}+\phi_{y}^{2})^{\frac{3}{2}}}$
(24)
This parameter can be used to control the smoothness and uniformity of the
evolving front boundary [28, 29, 30].
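The curvature $k=\nabla\cdot(\nabla\phi/\lvert\nabla\phi\rvert)$ can be checked numerically: for the signed distance function of a circle of radius $R$, $\phi=\sqrt{x^{2}+y^{2}}-R$, the curvature on the zero level set should be $1/R$ (a finite-difference sketch; the radius and step size are our choices):

```python
import math

R_CIRCLE = 2.0

def phi(x, y):
    # signed distance to a circle of radius R_CIRCLE centered at the origin
    return math.hypot(x, y) - R_CIRCLE

def curvature(x, y, h=1e-4):
    """Curvature from central finite differences of phi."""
    px  = (phi(x + h, y) - phi(x - h, y)) / (2 * h)
    py  = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    pxx = (phi(x + h, y) - 2 * phi(x, y) + phi(x - h, y)) / h ** 2
    pyy = (phi(x, y + h) - 2 * phi(x, y) + phi(x, y - h)) / h ** 2
    pxy = (phi(x + h, y + h) - phi(x + h, y - h)
           - phi(x - h, y + h) + phi(x - h, y - h)) / (4 * h ** 2)
    num = pxx * py ** 2 - 2 * pxy * px * py + pyy * px ** 2
    return num / (px ** 2 + py ** 2) ** 1.5

k = curvature(R_CIRCLE, 0.0)   # a point on the zero level set
print(k)
```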
#### IV-C4 Integrating spatial fuzzy clustering with Level Set
In this section, we used a method based on the work of Li et al. for pulmonary
lesions image segmentation [1]. The level set method requires initial control
parameters and sometimes also requires manual intervention to control these
parameters. In this paper, a Fuzzy level set method is used to facilitate the
segmentation of pulmonary lesions. Level set evolution can be started directly
from the primary segmentation by spatial fuzzy clustering. Control of the
parameters of the level set also evolve according to the fuzzy clustering
results. Such methods lead to better segmentation.
This method uses fuzzy clustering as the initial surface function. The FCM
algorithm with spatial information can accurately estimate the boundaries.
Therefore, the level set evolution will begin from an area close to the actual
boundaries. Here, information from fuzzy clustering is used to estimate
control parameters, which reduces manual interventions. The new level set
fuzzy algorithm automates the initial settings and parameter setup of the
level set using local fuzzy clustering. This is an FCM with spatial
constraints to determine the approximate lines of interest in a medical image
[23].
The model evolution equation presented by Li et al. is as follows:
$\epsilon(g,\phi)=\lambda\delta(\phi)div(g\frac{\nabla\phi}{\lvert\nabla\phi\rvert})+gG(R_{k})\delta(\phi)$
(25)
where $\lambda$ is the coefficient of the contour length for smoothness
regulation, $\epsilon$ is the regulator for the Dirac function
$\delta(\phi)$, $\phi$ converts the 2D image segmentation problem into a 3D
problem, $\delta(\phi)$ denotes the Dirac function, and $\epsilon(g,\phi)$
attracts $\phi$ towards the variational boundary, as in standard level set
methods.
$R_{k}=\\{r_{k}=\mu_{nk},n=x\times N_{y}+y\\}$ (26)
$R_{k}$ is the component of interest in FCM.
Finally, surface evolution can be regulated using local fuzzy clustering. In
other words, the level set evolution is stable when it approaches the actual
boundaries, which not only prevents boundary leakage but also prevents manual
intervention. All these improvements lead to a strong algorithm for medical
image segmentation.
### IV-D Proposed system’s final outputs and results
After reading the CT-Scan image, its features are automatically extracted by
the VGG convolutional neural network. Then, based on extracted features, it is
classified using the XGBoost classifier. In the next step, if there is a
lesion in the CT-scan image, the observed lesion is segmented by the mentioned
hybrid segmentation method. A number of system outputs are shown below in
Figures 12 to 14.
Figure 12: Left: Positive Sample - Right: Segmented Image Figure 13: Left:
Positive Sample - Right: Segmented Image Figure 14: Left: Positive Sample -
Right: Segmented Image
## V Discussion and Conclusion
The model proposed in this article is based on the latest methods of
artificial intelligence and deep learning. To increase accuracy, we used
several hybrid methods for the required tasks, with very successful results.
In the proposed method, a convolutional neural network is first used to
automatically extract the best features. Due to the large number of different
intra-class patterns, the VGG network was chosen for this work: it has many
learnable parameters, so it can learn many different patterns very well.
Various classifiers were trained and tested on these extracted features, and
the best results were achieved with the XGBoost classifier. We showed that
using ensemble and gradient boosting based classifiers instead of a fully
connected layer in the VGG network increases the accuracy of pulmonary lesion
classification. Then, in the case of a positive diagnosis of a pulmonary
lesion, a hybrid fuzzy level set method (Li et al.) was used for image
segmentation. A dataset of CT-scan images of patients in the Mashhad local
area was collected and labelled by a specialist; we used this dataset for
training and testing the proposed models. Significant features of the
proposed system include the following:
* •
Using a deep convolutional model to automatically extract the best features.
* •
Using the VGG network to learn a large number of different intra-class
patterns (due to the large number of different types of pulmonary lesions).
* •
Applying XGBoost instead of a fully connected layer.
* •
Using a highly accurate and hybrid method for segmentation of pulmonary
lesions.
* •
Designing an automated and complete system as a health care CAD system.
* •
Achieving very high accuracy so that the proposed model can be operationally
used in the health care industry.
* •
Using a local dataset based on native patients in Mashhad.
Future recommendations for this work include determining the exact type of
lesion, estimating the risk of a diagnosed lesion, and extending the dataset
so that it covers all pulmonary lesions. Providing such a dataset requires a
great deal of time and money; achieving this calls for financial and
scientific support in the form of an interdisciplinary research team. Due to
their very high diversity, the diagnosis of pulmonary lesions is a hard and
highly specialized task, so the preparation of such systems requires deep
technical knowledge. With the advancement of artificial intelligence methods,
the accuracy of such systems can always be improved, and the field of
advancing and improving them will remain open to researchers.
## Acknowledgment
We would like to thank Dr. Mahjoub and Behsazteb Medical Imaging Center for
providing local CT-scan images of patients with human lung problems and
providing diverse comments and reports on these images, which enabled us not
only to categorize them accurately but also undoubtedly had a significant
impact on this work’s overall improvement.
## References
* [1] B. N. Li, C. K. Chui, S. Chang, and S. Ong, “Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation,” Computers in Biology and Medicine, vol. 41, no. 1, pp. 1 – 10, 2011.
* [2] A. C. Society, Key Statistics for Lung Cancer, January 2020. https://www.cancer.org/cancer/lung-cancer/about/key-statistics.html.
* [3] A. C. Society, Can Lung Cancer Be Found Early?, November 2020. https://www.cancer.org/cancer/lung-cancer/detection-diagnosis-staging/detection.html.
* [4] M. Javaid, M. Javid, M. Z. U. Rehman, and S. I. A. Shah, “A novel approach to cad system for the detection of lung nodules in ct images,” Computer Methods and Programs in Biomedicine, vol. 135, pp. 125 – 139, 2016.
* [5] Q. Abbas, “Segmentation of differential structures on computed tomography images for diagnosis lung-related diseases,” Biomedical Signal Processing and Control, vol. 33, pp. 325 – 334, 2017.
* [6] W.-J. Choi and T.-S. Choi, “Automated pulmonary nodule detection based on three-dimensional shape-based feature descriptor,” Computer Methods and Programs in Biomedicine, vol. 113, no. 1, pp. 37 – 54, 2014.
* [7] P. Kamra, R. Vishraj, Kanica, and S. Gupta, “Performance comparison of image segmentation techniques for lung nodule detection in ct images,” in 2015 International Conference on Signal Processing, Computing and Control (ISPCC), pp. 302–306, 2015.
* [8] C. R. UK, Survival, September 2020. https://www.cancerresearchuk.org/about-cancer/lung-cancer/survival.
* [9] Y. Xie, W.-Y. Meng, R.-Z. Li, Y.-W. Wang, X. Qian, C. Chan, Z.-F. Yu, X.-X. Fan, H.-D. Pan, C. Xie, Q.-B. Wu, P.-Y. Yan, L. Liu, Y.-J. Tang, X.-J. Yao, M.-F. Wang, and E. L.-H. Leung, “Early lung cancer diagnostic biomarker discovery by machine learning methods,” Translational Oncology, vol. 14, no. 1, p. 100907, 2021.
* [10] S. Magalhães Barros Netto, A. Corrêa Silva, R. Acatauassú Nunes, and M. Gattass, “Automatic segmentation of lung nodules with growing neural gas and support vector machine,” Computers in Biology and Medicine, vol. 42, no. 11, pp. 1110 – 1121, 2012.
* [11] S. L. A. Lee, A. Z. Kouzani, and E. J. Hu, “Automated identification of lung nodules,” in 2008 IEEE 10th Workshop on Multimedia Signal Processing, pp. 497–502, 2008.
* [12] X. Ye, G. Beddoe, and G. Slabaugh, “Graph cut-based automatic segmentation of lung nodules using shape, intensity, and spatial features,”
* [13] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
* [14] Y. Zou, G. Zhang, and L. Liu, “Research on image steganography analysis based on deep learning,” Journal of Visual Communication and Image Representation, vol. 60, pp. 266 – 275, 2019.
* [15] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
* [16] L. Li, R. Situ, J. Gao, Z. Yang, and W. Liu, “A hybrid model combining convolutional neural network with xgboost for predicting social media popularity,” in Proceedings of the 25th ACM international conference on Multimedia, pp. 1912–1917, 2017.
* [17] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, “A survey on deep transfer learning,” in International conference on artificial neural networks, pp. 270–279, Springer, 2018.
* [18] N. Demir, Ensemble Methods: Elegant Techniques to Produce Improved Machine Learning Results, 2018. https://www.toptal.com/machine-learning/ensemble-methods-machine-learning.
* [19] S. González, S. García, J. Del Ser, L. Rokach, and F. Herrera, “A practical tutorial on bagging and boosting based ensembles for machine learning: Algorithms, software tools, performance study, practical perspectives and opportunities,” Information Fusion, vol. 64, pp. 205 – 237, 2020.
* [20] T. Chen and C. Guestrin, “Xgboost: A scalable tree boosting system,” in Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pp. 785–794, 2016.
* [21] SauceCat, Boosting algorithm: XGBoost, 2017. https://towardsdatascience.com/boosting-algorithm-xgboost-4d9ec0207d.
* [22] X. Ren, H. Guo, S. Li, S. Wang, and J. Li, “A novel image classification method with cnn-xgboost model,” in International Workshop on Digital Watermarking, pp. 378–390, Springer, 2017.
* [23] Y. Zhang, B. J. Matuszewski, L. Shark, and C. J. Moore, “Medical image segmentation using new hybrid level-set method,” in 2008 Fifth International Conference BioMedical Visualization: Information Visualization in Medical and Biomedical Informatics, pp. 71–76, 2008.
* [24] M. Forouzanfar, N. Forghani, and M. Teshnehlab, “Parameter optimization of improved fuzzy c-means clustering algorithm for brain mr image segmentation,” Engineering Applications of Artificial Intelligence, vol. 23, no. 2, pp. 160 – 168, 2010.
* [25] P. Swierczynski, B. W. Papież, J. A. Schnabel, and C. Macdonald, “A level-set approach to joint image segmentation and registration with application to ct lung imaging,” Computerized Medical Imaging and Graphics, vol. 65, pp. 58 – 68, 2018. Advances in Biomedical Image Processing.
* [26] X. Jiang and S. Nie, “Segmentation of pulmonary nodule in ct image based on level set method,” in 2008 2nd International Conference on Bioinformatics and Biomedical Engineering, pp. 2698–2701, 2008.
* [27] D.-Q. Zhang and S.-C. Chen, “A novel kernelized fuzzy c-means algorithm with application in medical image segmentation,” Artificial Intelligence in Medicine, vol. 32, no. 1, pp. 37 – 50, 2004. Atificial Intelligence in Medicine in China.
* [28] H. Lombaert, Level set method: Explanation, 2006. https://profs.etsmtl.ca/hlombaert/levelset/.
* [29] S. Osher and J. A. Sethian, “Fronts propagating with curvature-dependent speed: Algorithms based on hamilton-jacobi formulations,” Journal of Computational Physics, vol. 79, no. 1, pp. 12 – 49, 1988.
* [30] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” International journal of computer vision, vol. 1, no. 4, pp. 321–331, 1988.
## Appendix A Graphical Abstract
Figure 15: Graphical Abstract
## Appendix B Highlights:
* •
A pulmonary lesion classification method with slightly more than 96% accuracy.
* •
Combining VGG and XG-Boost for lesion classification.
* •
Models are trained on a dataset with more than a thousand samples.
* •
Gathering a new dataset of lung lesions based on local patients.
* •
Pulmonary lesion segmentation using an integrated Fuzzy Clustering-Level Set
method.
* •
An automated lung lesion classification and segmentation method.
# Significance of lower energy density region of neutron star and
universalities among neutron star properties.
1Wasif Husain [email protected] 1Anthony W. Thomas
[email protected] 1Department of Physics, The University of
Adelaide, North Terrace, Adelaide 5005.
###### Abstract
We have constructed and compared models of rotating neutron stars and strange
stars within the Hartle framework. The significance of the low energy density
region and the crust region inside the neutron star has been studied, along
with how much the existence of strange matter above an energy density of 300
MeV/fm$^{3}$ can affect neutron star properties. We have confirmed several
universal relations among neutron star properties, such as dimensionless
moment of inertia vs dimensionless quadrupole moment, dimensionless tidal
deformability vs dimensionless moment of inertia, and moment of inertia
parameters vs $R/2M$.
## 1 Introduction
From the atmosphere to the core, neutron stars cover an energy density range
from $10^{6}$ gm/cm$^{3}$ to several times nuclear matter density. At
densities above nuclear matter density, matter may undergo phase transitions
and new exotic states may form. The existence of these states relies on the
strong interaction and the structure of baryons. In 1960 [1] it was suggested
that the core of neutron stars may contain mesons (pions, the lightest
meson). In 1980 [2] the possibility of Bose condensation of kaons at
densities above three times nuclear matter density was suggested; since then
it has been addressed by many authors, see for instance Ref. [3]. As the
energy density increases, baryons may decompose into their constituent
quarks. Ivanenko and Kurdgelaidze [4] showed in 1965 that neutron star cores
may be made of deconfined quark matter, and since 1990 such phase transitions
have been considered by many authors [5][6].
The key ingredient for understanding their interior is the nuclear equation
of state. Newborn neutron stars are extremely hot, but after enough time they
cool down, with matter made mainly of neutrons, along with protons, electrons
and muons, in $\beta$ equilibrium with respect to the weak interactions.
Different approaches have been adopted over time to model the neutron star
under the constraints imposed by observed neutron star properties. We will
compare properties of neutron stars based on EoSs that include either
nucleons only, nucleons and hyperons, or strange quark matter at high energy
density.
## 2 Models of compact stars
### 2.1 Equation of state (EoS)
We consider three EoSs: N-QMC700 and F-QMC700 (see Ref. [7]), which are based
on the QMC model [8]. N-QMC700 allows only nucleons, while F-QMC700 includes
hyperons; both EoSs take nucleon structure into account, and in both the mass
of the $\sigma$ meson [9] is taken to be $m_{\sigma}=700$ MeV. Below an
energy density of 110 MeV/fm$^{3}$, the matter forms the crust of the star,
with inhomogeneities consisting of nucleons arranged on a lattice, as well as
neutron and electron gases. In this density region the QMC equation of state
(EoS) needs to be matched [7] with equations of state describing the
composition of matter at those densities; the Baym-Pethick-Sutherland (BPS)
EoS [10] is used in this work. For convenience, the parametrized EoSs are
given below. The parameterization of the N-QMC700 and F-QMC700 EoSs is
$P=\frac{N_{1}\epsilon^{p_{1}}}{1+e^{(\epsilon-r)/a}}+\frac{N_{2}\epsilon^{p_{2}}}{1+e^{-(\epsilon-r)/a}}$
(1)
which works well up to an energy density of 1200 MeV/fm$^{3}$; values for the
constants $N_{1}$, $N_{2}$, $p_{1}$, $p_{2}$, $r$ and $a$ are given in
Table 1.
| $N_{1}$ | $p_{1}$ | $N_{2}$ | $p_{2}$ | $r$ | $a$
---|---|---|---|---|---|---
N-QMC700 | 0 | 0 | 0.008623 | 1.548 | 342.4 | 184.4
F-QMC700 | $2.62\times 10^{-7}$ | 3.197 | 0.0251 | 1.286 | 522.1 | 113
Table 1: Parameters of EoS (1) from Ref.[7]
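For reference, the parametrized EoS (1) with the Table 1 constants can be evaluated directly (a sketch of ours; pressure and energy density in MeV/fm$^{3}$, valid up to about 1200 MeV/fm$^{3}$):

```python
import math

def pressure(eps, N1, p1, N2, p2, r, a):
    """Parametrized EoS: two power laws smoothly switched around eps = r."""
    low  = N1 * eps ** p1 / (1.0 + math.exp((eps - r) / a))
    high = N2 * eps ** p2 / (1.0 + math.exp(-(eps - r) / a))
    return low + high

def p_nqmc700(eps):
    # N-QMC700 constants from Table 1
    return pressure(eps, 0.0, 0.0, 0.008623, 1.548, 342.4, 184.4)

print(p_nqmc700(500.0))
```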
The results will be compared with the EoS [11] of strange stars, which is
based on the MIT bag model [12]. The EoS of strange star matter (the strange
matter or MIT EoS) is given [11] by
$P=\frac{1}{3}(\epsilon-4B),$ (2)
where $B$ is the bag constant, whose value is taken to be $10^{14}$
gm/cm$^{3}$. This EoS implies quark deconfinement at higher energy densities.
### 2.2 Structural equations
The structural equation for a static, spherical neutron star involves solving
the TOV [13][14] equation, derived from the general theory of relativity [15],
for a particular equation of state (EoS). The line element for such a compact
star is given by (G = c = 1)
$ds^{2}=-e^{2\Phi(r)}dt^{2}+e^{2\Lambda(r)}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta\,d\phi^{2}.$ (3)
The TOV equation for the star is
$\displaystyle\frac{dP}{dr}=-\frac{[\epsilon(r)+P(r)][4\pi
r^{3}P(r)+m(r)]}{r^{2}\left(1-\frac{2m(r)}{r}\right)},\qquad
m(r)=4\pi\int_{0}^{r}r^{\prime 2}\epsilon(r^{\prime})\,dr^{\prime}.$ (4)
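The TOV system of eq. (4) can be integrated numerically for any of the EoSs above. A minimal sketch using scipy's `solve_ivp`, assuming geometrized units with the standard conversion factors; the surface-detection threshold, tolerances and unit choices are our own, not from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

MEV_FM3_TO_KM2 = 1.3234e-6   # 1 MeV/fm^3 expressed in geometrized units (km^-2)
MSUN_KM = 1.4766             # G*M_sun/c^2 in km

def tov_mass_radius(eps_of_p, p_c, r_max=30.0):
    """Integrate eq. (4) outward from the centre for a central pressure p_c
    [MeV/fm^3], given the EoS as eps(P); returns (M [M_sun], R [km])."""
    def rhs(r, y):
        p, m = y                              # p in MeV/fm^3, m in km
        pg = p * MEV_FM3_TO_KM2               # geometrized pressure
        eg = eps_of_p(p) * MEV_FM3_TO_KM2     # geometrized energy density
        dp = -(eg + pg) * (4*np.pi*r**3*pg + m) / (r**2*(1 - 2*m/r))
        return [dp / MEV_FM3_TO_KM2, 4*np.pi*r**2*eg]

    surface = lambda r, y: y[0] - 1e-8 * p_c  # stop when P has dropped to ~0
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(rhs, (1e-4, r_max), [p_c, 0.0], events=surface,
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] / MSUN_KM, sol.t[-1]  # (mass, radius)
```

For example, with the MIT EoS of eq. (2) the inverse relation `eps_of_p = lambda p: 3*p + 4*56.0` is explicit, so no root-finding is needed.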
#### Rotational equation.
For slowly rotating neutron stars, Hartle's approach [16] is adopted, and the
structural equations are solved within the general relativistic framework.
Hartle's approach yields the solution for a perturbed Schwarzschild metric and
is based on the idea that, as the star rotates, its pressure, energy density and
baryon number are all perturbed. The perturbed line element of a rotating,
axially symmetric equilibrium configuration is given by
$ds^{2}=-e^{2\nu(r,\theta,\Omega)}(dt)^{2}+e^{2\psi(r,\theta,\Omega)}(d\phi-\omega(r,\theta,\Omega)dt)^{2}+e^{2\mu(r,\theta,\Omega)}(d\theta)^{2}+e^{2\lambda(r,\theta,\Omega)}(dr)^{2}+\mathcal{O}(\Omega^{3})$
(5)
where $\nu$, $\psi$, $\mu$ and $\lambda$ are the perturbed metric functions,
$r$ and $\theta$ are polar coordinates and $\Omega$ is the uniform rotational
velocity of the star. $\omega$ is the rotational velocity of the local inertial
frame of reference, dragged along the direction of rotation of the neutron
star; this dragging velocity also depends on the polar coordinates $r$ and
$\theta$. Since the local dragging of the inertial reference frame depends on
the neutron star's mass distribution, which varies with $\Omega$, $\omega$ is a
function of $\Omega$. The relative angular velocity is denoted by
$\overline{\omega}=\Omega-\omega(r,\theta,\Omega)$. It is this relative
velocity, $\overline{\omega}$, which is of particular interest when discussing
the rotational flow of the fluid inside the neutron star; it satisfies
$\frac{d}{dr}\left(r^{4}j(r)\frac{d\overline{\omega}(r)}{dr}\right)+4r^{3}\frac{dj(r)}{dr}\overline{\omega}(r)=0$
(6)
where $j(r)=e^{-(\Phi+\Lambda)}=e^{-\Phi(r)}\sqrt{\gamma(r)}$ and
$\gamma(r)=1-\frac{2m(r)}{r}$. Equation (6) is integrated from the
center of the neutron star towards the surface under the boundary conditions
that $\overline{\omega}$ must be regular at the center ($r=0$) and that
$\frac{d\overline{\omega}}{dr}$ vanishes at $r=0$. For the numerical
calculation an arbitrary value of $\overline{\omega}$ is selected at the center
of the neutron star and the integration proceeds from there towards the
surface. Outside the neutron star $\overline{\omega}$ has the behavior
$\overline{\omega}(r,\Omega)=\Omega-\frac{2}{r^{3}}J(\Omega)$, where $\Omega$
is the rotational velocity of the neutron star and $J(\Omega)$ is its angular
momentum. The angular momentum can also be written as
$J(\Omega)=\frac{R^{4}}{6}\left(\frac{d\overline{\omega}}{dr}\right)_{R}.$ The quadrupole
equations given in [16] are solved for a rotational frequency of 500 Hz; they
determine the shape and deformation of the neutron star caused by rotation
about its axis, while the tidal deformability equations given in [17] describe
the deformation induced by a companion during inspiral in a binary system.
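Since eq. (6) is linear and homogeneous in $\overline{\omega}$, the rescaling step described above (integrating from an arbitrary central value and matching to the exterior solution) can be written compactly. A sketch, taking the trial surface values as given inputs and working in geometrized units:

```python
def match_frame_dragging(omega_bar_R_trial, domega_bar_R_trial, R, Omega):
    """Because eq. (6) is linear and homogeneous in omega_bar, a trial solution
    integrated from an arbitrary central value can simply be rescaled so that
    the exterior matching omega_bar(R) = Omega - 2 J / R^3 holds, with
    J = (R^4 / 6) * (d omega_bar / dr) evaluated at r = R."""
    J_trial = R**4 / 6.0 * domega_bar_R_trial
    scale = Omega / (omega_bar_R_trial + 2.0 * J_trial / R**3)
    omega_bar_R = scale * omega_bar_R_trial   # physical surface value
    J = scale * J_trial                       # physical angular momentum
    return omega_bar_R, J, J / Omega          # last entry: moment of inertia I
```

The returned $I=J/\Omega$ is the moment of inertia used in the universality plots of Section 3.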
## 3 Results
Fig. 1(a) shows the pressure-energy density relation for all three chosen
EoSs. The strange matter EoS (MIT, black dots) is unphysical in the low energy
density region (at the crust). A key point must be stated here: in the
low energy density region there is only nuclear matter. Strange matter or
hyperons come into existence only when the energy density increases beyond a
few times the nuclear matter density. Since all of the models contain only
nuclear matter at low energy density, they must show similar characteristics in
that region. As the energy density increases, around 580 MeV/fm$^{3}$, F-QMC700
starts to deviate from N-QMC700 because hyperons ($\Xi^{-},\Lambda,\Xi^{0}$)
start to appear at this point (see Ref. [7]). On the other hand, between energy
densities of about 300 MeV/fm$^{3}$ and 500 MeV/fm$^{3}$ the strange matter EoS
crosses the nucleon-only EoS, suggesting a transition to strange quark matter
(deconfined quarks), often called a "strange star". N-QMC700, based on
nucleon-only matter, is the stiffest EoS. Fig. 1(b) gives the relation between
the mass and the radius of neutron stars for these EoSs. Although the strange
matter EoS predicts a maximum mass of over $2\,M_{\odot}$, it poorly predicts
the masses and radii of neutron stars at low energy densities. At the crust,
where the energy density is low, it should show an expansion of the radius, like
N-QMC700 and F-QMC700.
Figure 1: (a) Pressure-energy density plot for the selected EoSs; (b) mass-radius
plot for the selected EoSs.
Figure 2: (a) Pressure-energy density plot for the selected EoSs with the strange
matter EoS parameters changed to the N-QMC700 parameters for energy
densities below 300 MeV/fm$^{3}$; (b) mass-radius plot for the EoSs of Fig. 2(a).
In Fig. 2, the strange matter EoS is corrected in the low energy density
region with the parameters of the nucleon-only EoS (N-QMC700). In Fig. 2(a) the
strange matter EoS parameters are matched to the nucleon-only EoS parameters
for energy densities below 300 MeV/fm$^{3}$, and in Fig. 2(b) the mass vs radius
plot is presented with these changed parameters. Here we are calculating the
properties of neutron stars under the assumption that strange matter starts to
exist at 300 MeV/fm$^{3}$. In Fig. 2 all three EoSs now show the same
characteristics at low energy densities: they yield similar radii, and all
three predict a maximum neutron star mass close to $2\,M_{\odot}$.

There is a universality whereby, for most EoSs, the plots of dimensionless
moment of inertia vs dimensionless quadrupole moment (the I-Q relation) fall on
top of each other and show a characteristic which is (almost) independent of
the EoS [18]. In Fig. 3(a), N-QMC700 and F-QMC700 both show the same
characteristic and follow the universality, while the strange matter EoS
behaves very differently and does not follow it. After fixing the parameters of
the strange matter EoS in the low energy density region, it follows the
universality, as depicted in Fig. 3(b). As predicted by Yagi and Yunes [18],
the plot of dimensionless tidal deformability ($\lambda/M^{5}$) vs
dimensionless moment of inertia ($I/M^{3}$) also shows a behavior which is
independent of the EoS. As shown in Fig. 4(a), N-QMC700 and F-QMC700 fall
exactly on top of each other and follow the universality, whereas the strange
matter EoS is very different, falling far away from them. The results obtained
when the parameters of the strange matter EoS are fixed with N-QMC700 for
energy densities below 300 MeV/fm$^{3}$ are shown in Fig. 4(b): its
dimensionless moment of inertia vs dimensionless tidal deformability plot then
follows the universality like the QMC EoSs.

Lattimer and Schutz [19] showed that the moment of inertia parameter
$I/M_{0}R_{0}^{2}$, where $M_{0}$ is the mass and $R_{0}$ the radius of the
neutron star, is related to the compactness in such a way that its plot is
independent of the EoS, with all EoSs falling almost on the same line. We see
in Fig. 5(a) that the moment of inertia parameter for the strange matter EoS is
very different from those of N-QMC700 and F-QMC700 and does not follow the
moment of inertia constraint, while N-QMC700 and F-QMC700 do. After fixing the
parameters at energy densities below 300 MeV/fm$^{3}$, as shown in Fig. 5(b),
all three EoSs satisfy the EoS-independent relation and fall almost on the same
line [19].
Figure 3: Dimensionless moment of inertia vs dimensionless quadrupole moment
plot for the EoSs given (a) in Fig. 1(a) and (b) in Fig. 2(a).
Figure 4: Dimensionless tidal deformability vs dimensionless moment of inertia
plot for the EoSs given (a) in Fig. 1(a) and (b) in Fig. 2(a).
Figure 5: Moment of inertia parameter vs R/2M plot for the EoSs given (a) in
Fig. 1(a) and (b) in Fig. 2(a).
## 4 Conclusion and Future Perspectives
Models for rotating neutron stars have been successfully compared within the
Hartle framework at a frequency of 500 Hz for the different EoSs under the
constraints imposed by observations. It is shown that the EoSs based purely on
hadrons (N-QMC700 and F-QMC700) satisfy the empirical constraints, a clear
indication that the QMC model may play a significant role in describing the
properties of matter at energy densities several times the nuclear matter
density. Several universalities among the neutron star properties, such as
dimensionless moment of inertia vs dimensionless quadrupole moment,
dimensionless moment of inertia vs dimensionless tidal deformability and
moment of inertia parameter vs R/2M, have been confirmed in these models.
Following the discovery of gravitational waves, new constraints may be imposed
on the EoS in a neutron star binary system, which may prove useful in
understanding the interior of neutron stars. Furthermore, improvements have
recently been made in the QMC model [20, 21], and with these advancements the
EoS can be improved to give better results. In the future it will be
interesting to see how the QMC EoS explains the neutron star cooling process,
magnetic fields and the tidal deformability in neutron star mergers.
## Acknowledgments
This research is supported by Adelaide Scholarship International, without whose
aid this study would not have been possible.
## References
* [1] Bahcall J. N. and Wolf R. A. 1965 Phys. Rev. 140 B1452-66
* [2] Kaplan D. B. and Nelson A. E. 1986 Phys. Lett. B 175 57-63
* [3] Potekhin A. Y. 2010 Physics-Uspekhi 53 1235-56
* [4] Ivanenko D. D. and Kurdgelaidze D. F. 1965 Astrophysics 1 251-52
* [5] Iosilevskiy I. 2010 Acta Phys. Polon. Supp. 3 589-600
* [6] Glendenning N. K. 1992 Phys. Rev. D 46 1274-87
* [7] Rikovska-Stone J., Guichon P. A. M., Matevosyan H. H. and Thomas A. W. 2007 Nucl. Phys. A 792 341-69
* [8] Guichon P. A. M. 1988 Phys. Lett. B 200 235-40
* [9] Guichon P. A. M., Matevosyan H. H., Sandulescu N. and Thomas A. W. 2006 Nucl. Phys. A 772 1-19
* [10] Baym G., Pethick C. and Sutherland P. 1971 Astrophys. J. 170 299-317
* [11] Urbanec M., Miller J. C. and Stuchlík Z. 2013 Mon. Not. R. Astron. Soc. 433 1903-09
* [12] Chodos A., Jaffe R. L., Johnson K. and Thorn C. B. 1974 Phys. Rev. D 10 2599-604
* [13] Tolman R. C. 1939 Phys. Rev. 55 364-73
* [14] Oppenheimer J. R. and Volkoff G. M. 1939 Phys. Rev. 55 374-81
* [15] Treder H.-J. 1975 Astron. Nachr. 296 45-46
* [16] Hartle J. B. 1967 Astrophys. J. 150 1005-29
* [17] Hinderer T., Lackey B. D., Lang R. N. and Read J. S. 2010 Phys. Rev. D 81 123016
* [18] Yagi K. and Yunes N. 2013 Science 341 365-68
* [19] Lattimer J. M. and Schutz B. F. 2005 Astrophys. J. 629 979-84
* [20] Motta T. F., Guichon P. A. M. and Thomas A. W. 2018 Int. J. Mod. Phys. A 33 31
* [21] Whittenbury D. L., Carroll J. D., Thomas A. W., Tsushima K. and Stone J. R. 2014 Phys. Rev. C 89 18
# Effect of an external electric field on local magnetic moments in silicene
J. Villarreal†, F. Escudero†, J. S. Ardenghi† and P. Jasen†
IFISUR, Departamento de Física (UNS-CONICET)
Avenida Alem 1253, Bahía Blanca, Buenos Aires, Argentina email:
[email protected], fax number: +54-291-4595142
###### Abstract
In this work we analyze the effects of an external electric field on the
formation of a local magnetic moment in silicene. By adding an impurity at a
top site of the host lattice and computing the real and imaginary parts of the
self-energy of the impurity energy level, the polarized density of states is
used to obtain the occupation numbers of the up and down spins at the impurity
within the mean field approximation. Unequal occupation numbers are the
precursor of the formation of a local magnetic moment, and this depends
critically on the Hubbard parameter, the on-site energy of the impurity, the
spin-orbit interaction in silicene and the applied electric field. In
particular, it is shown that in the absence of an electric field the boundary
between the magnetic and non-magnetic phases widens with the spin-orbit
interaction with respect to graphene with a top site impurity, and shrinks and
narrows when the electric field is turned on. The electric field effect is
studied for negative and positive on-site impurity energies, generalizing the
results obtained in the literature for graphene.
## 1 Introduction
In solid state physics, two-dimensional (2D) systems have become one of the
most significant topics: applications in nanoelectronics and spintronics
become possible due to the exotic electronic structures of these 2D
materials ([1], [2]). The most well known is graphene, a two-dimensional
honeycomb lattice of carbon atoms [3], but other newly developed 2D materials
have arisen, such as molybdenum disulfide (MoS2) [4], silicene [5], germanene
([6], [7]), phosphorene [8], transition-metal dichalcogenides, hexagonal
boron nitride [9] and two-dimensional SiC monolayers [10], which are similar to
graphene but with different atoms at each lattice site and, in some cases, a
buckled structure. Due to their intrinsically low electron and phonon
densities, 2D materials are ideal platforms to host single atoms with magnetic
moments pointing out of plane, with potential applications in current
information nanotechnology. Among these materials, silicene is particularly
interesting thanks to its compatibility with current Si-based electronic
technology. In 2010, the synthesis of silicene nanoribbons on the anisotropic
Ag(110) surface and on Ag(111) was reported ([11], [12]), showing that silicene
has a larger in-plane lattice constant, with two interpenetrating sublattices
displaced vertically with respect to each other due to the $sp^{3}$
hybridization. In turn, the buckling of silicene can be influenced by the
interaction with a ZrB2 substrate, which allows one to tune the band gap at the
$K$ or $K^{\prime}$ points in the Brillouin zone [13]. By applying the tight
binding model to silicene it is possible to take the long wavelength
approximation and obtain an effective Dirac-like Hamiltonian ([1], [14],
[15]); around the Fermi energy, the charge carriers behave as massive Dirac
fermions in the $\pi$ bands, moving with a Fermi velocity
$v_{F}=5.5\times 10^{5}$ m/s ([16], [17]). The layer separation between the
sublattices in silicene, due to its buckled structure, is suitable for the
application of external fields in order to open a bandgap, which introduces
diagonal terms in the Hamiltonian written in the $A$ and $B$ sublattices
([18], [19], [5]). The spin-orbit interaction (SOI) in silicene is about $3.9$
meV, larger than that of graphene, where it is of the order of $10^{-3}$ meV
([20], [21]). The large SOI allows the quantum spin Hall effect to be observed,
which implies that silicene becomes a topological insulator ([22], [23]). The
interplay between the SOI and an external electric field can induce transitions
from topological to band insulators, allowing valley effects in the
conductivity ([18], [24]).
When impurity atoms are deposited on graphene or silicene they can be adsorbed
on different adsorption sites, where the most usual is the six-fold hollow
site of the honeycomb lattice, on top of a carbon or silicon atom or the two-
fold bridge site of neighboring atoms of the host lattice ([25], [26], [27],
[28] and [29]). In turn, adatoms bonded to the surface of graphene can lead to
a quasi-localized state where the wave function includes contributions from
the orbitals of neighboring carbon atoms ([30], [31]).
In particular, when the impurity atoms are magnetic, the strong coupling
between the localized magnetic state of the adatom and the band of the 2D host
lattice allows non-trivial effects in the static properties of the system,
such as the Kondo effect, where the local density of states shows a resonance
at the Fermi level [32] due to the screening of the magnetic moment of the
adatom by the surrounding itinerant electrons. In graphene, the Kondo effect
has been reported through spectroscopy of Co adatoms [33], but not much is
known about the spectral features or the Kondo effect in other 2D materials
such as silicene. In silicene, the effect of different magnetic adatoms has
been studied by using density functional theory ([34], [35], [36], [37]),
showing that silicene is able to form strong bonds with transition metals due
to its buckled form and that its properties can be tailored to design
batteries [38].
In turn, it has been shown that the magnetic properties of 2D materials are
very sensitive to the SOI and to the application of external electric and
magnetic fields ([39], [40]). These properties can be altered when impurity
atoms are introduced in the material, because they induce the formation of
local magnetic moments. Thus, while there are several numerical studies of
transition metal adsorption in silicene and other two-dimensional materials,
not much is known about the dependence of the localized magnetic moment on the
applied external electric field. The tight-binding method combined with the
mean-field approximation [41] allows one to study the dependence of the local
magnetism on the strong correlation effects of the inner shell electrons,
parametrized by the on-site Hubbard contribution and the hybridization of the
impurity orbital with the host lattice. By computing the spin-polarized
density of states with Green function methods, it is possible to obtain the
occupation number of each spin in the adatom ([26], [42]). Moreover, the SOI
and an external electric field can tailor the magnetic properties through the
interplay of the level broadening and the sublattice asymmetry that induces a
bandgap.
Motivated by this, in this paper we study the magnetic regime of the impurity
atom as a function of the Fermi level, the Hubbard parameter, the SOI and an
external electric field, and we compare it with the results obtained for
graphene. In particular, we consider impurity atoms adsorbed at a top site in
silicene with on-site energy below and above the Dirac point. Based on these
results it is possible to study the formation of localized magnetic states in
the impurities and their dependence on the external electric field and the
asymmetric hybridization with the host lattice. In turn, the boundary between
different magnetic phases can be approximated in terms of the Fermi energy and
the Hubbard parameter. While there are several works with ab-initio
calculations for different transition metals, not much is known about the
magnetic features of silicene with respect to the different parameters in the
Hamiltonian. This work is organized as follows: in section II, the
tight-binding model with adatoms is introduced and the Anderson model in the
mean-field approximation is applied to silicene; in section III, the results
are shown and discussed; the principal findings of this paper are highlighted
in the conclusion.
## 2 Theoretical model
The tight-binding Hamiltonian of silicene with spin-orbit coupling and a
perpendicular electric field reads (see [5])
$H_{0}=-t\underset{\left\langle
i,j\right\rangle,s}{\overset{}{\sum}}a_{i,s}^{{\dagger}}b_{j,s}+h.c.+\frac{i\lambda_{SO}}{3\sqrt{3}}\underset{\left\langle\left\langle
i,j\right\rangle\right\rangle,s}{\overset{}{\sum}}s(\nu_{ij}a_{i,s}^{{\dagger}}a_{j,s}+\nu_{ij}b_{i,s}^{{\dagger}}b_{j,s})-elE_{z}\underset{i,s}{\overset{}{\sum}}\mu_{i}(a_{i,s}^{{\dagger}}a_{i,s}+b_{i,s}^{{\dagger}}b_{i,s})$
(1)
where $a_{i,s}^{{\dagger}}$ ($a_{j,s}$) are the creation (annihilation)
operators on sublattice $A$ and $b_{i,s}^{{\dagger}}$ ($b_{j,s}$) are the
creation (annihilation) operators on sublattice $B$ of silicene at site $i$
with spin $s=\pm 1$. The first term is the kinetic energy, with hopping
$t_{gr}=2.7$ eV for graphene and $t_{sil}=1.6$ eV for silicene. The second term
represents the effective spin-orbit coupling with $\lambda_{SO}=3.9$ meV for
silicene (see [5]) and $\nu_{ij}=(\mathbf{d}_{i}\mathbf{\times
d}_{j})/\left|\mathbf{d}_{i}\mathbf{\times d}_{j}\right|=\pm 1$, depending on
the orientation of the two nearest neighbor bonds $\mathbf{d}_{i}$ and
$\mathbf{d}_{j}$ that connect the next nearest neighbors $\mathbf{d}_{ij}$
(see [43] and [44]). The last term is the staggered sublattice potential with
$\mu_{i}=+1(-1)$ for the $A$ ($B$) sublattice sites, where the buckling for
silicene is $l_{sil}=0.23$ Å [5] and $e$ is the electron charge. We do not
consider the Rashba spin-orbit coupling because it has a negligible effect
on the dispersion relation, being comparable to $\lambda_{SO}$ only near the
edge of the Brillouin zone [45].
The basis vectors of the hexagonal Bravais lattice can be written as
$\mathbf{R}_{n,m}=n\mathbf{a}_{1}+m\mathbf{a}_{2}$, where $n,m$ are integers
and $\mathbf{a}_{1}=\frac{a}{2}(3,\sqrt{3},0)$ and
$\mathbf{a}_{2}=\frac{a}{2}(3,-\sqrt{3},0)$ are the primitive basis vectors
(see the red hexagon in figure 1). Considering the Fourier transform of the
creation and annihilation operators,
$a_{j,s}=\frac{1}{\sqrt{N}}\underset{\mathbf{k}}{\sum}e^{-i\mathbf{k}\cdot\mathbf{R}_{j}}a_{\mathbf{k},s}$
and
$b_{j,s}=\frac{1}{\sqrt{N}}\underset{\mathbf{k}}{\sum}e^{-i\mathbf{k}\cdot\mathbf{R}_{j}}b_{\mathbf{k},s}$,
where $j=(n,m)$, the Hamiltonian $H_{0}$ becomes
$H_{0}=-t\underset{\mathbf{k},s}{\overset{}{\sum}}\phi_{\mathbf{k}}a_{\mathbf{k},s}^{{\dagger}}b_{\mathbf{k},s}+h.c.+\underset{\mathbf{k},s}{\overset{}{\sum}}(\frac{i\lambda_{SO}}{3\sqrt{3}}s\xi_{\mathbf{k}}-elE_{z})(a_{\mathbf{k},s}^{{\dagger}}a_{\mathbf{k},s}-b_{\mathbf{k},s}^{{\dagger}}b_{\mathbf{k},s})$
(2)
where
$\phi_{\mathbf{k}}=\overset{3}{\underset{i=1}{\sum}}e^{i\mathbf{k\cdot\delta_{i}}}e^{ik_{z}(h_{A}-h_{B})}=\overset{3}{\underset{i=1}{\sum}}e^{i\mathbf{k\cdot\delta_{i}}}e^{ik_{z}2l}$
and $\xi_{\mathbf{k}}=\overset{6}{\underset{i=1}{\sum}}e^{i\mathbf{k\cdot
n_{i}}}$. Here $\mathbf{\delta}_{1}=\frac{a}{2}(1,\sqrt{3},0)$,
$\mathbf{\delta}_{2}=\frac{a}{2}(1,-\sqrt{3},0)$ and
$\mathbf{\delta}_{3}=a(1,0,0)$ are the nearest neighbor vectors, whereas
$\mathbf{n}_{1}=-\mathbf{n_{2}=a_{1}}$, $\mathbf{n}_{3}=-\mathbf{n_{4}=a_{2}}$
and $\mathbf{n_{5}=-\mathbf{n_{6}=a_{1}-a_{2}}}$ are the six next-nearest
neighbor vectors (see figure 1) that connect identical sublattice sites.
Notice that $\phi_{\mathbf{k}}$ contains the contribution of the buckled
structure in the $z$ direction through the factor $e^{ik_{z}(h_{A}-h_{B})}$,
where $h_{A/B}$ are the heights of sublattices $A$ and $B$ with respect to the
middle of the buckling, which obey $h_{A}-h_{B}=2l$ (see figure 1), and
$k_{z}$ is the wave vector in the $z$ direction. In contrast,
$\xi_{\mathbf{k}}$ does not depend on $l$, because next-nearest neighbors
belong to the same sublattice.
Figure 1: Top: silicene honeycomb lattice (black and white dots are silicon
atoms). Green arrows represent nearest neighbors and blue arrows represent
next-nearest neighbors, $\mathbf{a}_{1}$ and $\mathbf{a}_{2}$ are the lattice
vectors and the red hexagon marks a particular Bravais cell. The red point
represents an impurity adsorbed at a top site. Bottom: side view of silicene
with the adsorbed impurity, where $l$ ($h$) is the distance of each
sublattice (the impurity) from the middle of the buckling.
Ab-initio calculations have shown that there are two most stable sites at
which transition metals can be adsorbed in two-dimensional systems: the center
of the hexagon and the bridge between two atoms [34]. In silicene, the
adsorbed atoms preserve the buckled structure, although small distortions in
the geometry near the adsorbed atoms appear, changing the local buckling. This
warping of the silicene sheet can alter the distance between the adatom and
the neighboring silicon atoms. The transition metal atoms most likely
hybridize at the hollow site via $s$, $d$ or $f$ orbitals [46]. For
simplicity we consider an impurity atom adsorbed at the top site ($A$
sublattice), at a height $h$ with respect to the $A$ silicon atom (see figure
1), and neglect small distortions of the buckled structure. In turn, the
symmetry of the orbital that sits on top is not particularly important, and we
consider only an $s$ orbital. Considering that the adatom is fixed at a
position $\mathbf{R}_{0}$ and hybridizes with sublattice $A$ with strength $V$,
the hybridization Hamiltonian can be written as
$H_{V}=V\overset{}{\underset{s}{\sum}}a_{0,s}^{{\dagger}}(\mathbf{\delta}_{0A}^{\prime})f_{s}+h.c.$
(3)
where $\mathbf{\delta}_{0A}^{\prime}=(h-l)e_{z}$, with $h-l$ the distance
between the impurity and the $A$ silicon atom, and $f_{s}$ annihilates an
electron in the magnetic impurity. In the momentum representation, this
Hamiltonian can be written as
$H_{V}=\overset{}{\underset{\mathbf{k},s}{\sum}}Ve^{ik_{z}(h-l)}a_{\mathbf{k},s}^{{\dagger}}f_{s}+h.c.$
(4)
Finally, in order to consider the interaction between electrons in the
impurity, we can add the Hamiltonian
$H_{F}=\left[\epsilon_{0}-(1+r)elE_{z}\right]\overset{}{\underset{s}{\sum}}n_{s}+Un_{\uparrow}n_{\downarrow}$
(5)
where $\epsilon_{0}$ is the single electron energy at the impurity atom,
$r=l_{I}/l$, where $l_{I}$ is the distance between the magnetic impurity and
the $A$ sublattice, $n_{s}=f_{s}^{{\dagger}}f_{s}$ is the occupation number
operator for the impurity with spin $s$ and $elE_{z}$ is the staggered
potential. For simplicity we do not consider the redistribution of charges
due to the electric field [47]. The Hubbard parameter $U$ characterizes the
strength of the electron correlations in the inner shell states of the
impurity. Adopting the mean field approximation [41], the Hamiltonian
$H_{F}$ can be decomposed into a constant term and the electronic correlations
at the impurity, $Un_{\uparrow}n_{\downarrow}\sim
U\overset{}{\underset{s}{\sum}}\left\langle n_{-s}\right\rangle
n_{s}-U\left\langle n_{\uparrow}\right\rangle\left\langle
n_{\downarrow}\right\rangle$, such that the Hamiltonian of the impurity can be
rewritten as $H_{F}=\overset{}{\underset{s}{\sum}}\epsilon_{s}n_{s}$, where
$\epsilon_{s}=\epsilon+U\left\langle n_{-s}\right\rangle$ and
$\epsilon=\epsilon_{0}-(1+r)elE_{z}$ is the effective on-site energy of the
impurity; the constant term $-U\left\langle
n_{\uparrow}\right\rangle\left\langle n_{\downarrow}\right\rangle$ can be
dropped. Then, combining eqs. (2), (4) and (5), the Hamiltonian can be written
in matrix form in the basis
$(\Psi_{A,\uparrow},\Psi_{B,\uparrow},\Psi_{A,\downarrow},\Psi_{B,\downarrow},\Psi_{I,\uparrow},\Psi_{I,\downarrow})$
as
$\displaystyle
H=\underset{\mathbf{k,}s}{\overset{}{\sum}}\left(\begin{array}[]{cccccc}a_{\mathbf{k},\uparrow}^{{\dagger}}&b_{\mathbf{k},\uparrow}^{{\dagger}}&a_{\mathbf{k},\downarrow}^{{\dagger}}&b_{\mathbf{k},\downarrow}^{{\dagger}}&f_{\uparrow}^{{\dagger}}&f_{\downarrow}^{{\dagger}}\end{array}\right)\times$
(7)
$\displaystyle\left(\begin{array}[]{cccccc}-\Delta_{\mathbf{k}\uparrow}&\phi_{\mathbf{k}}^{\ast}&0&0&V&0\\\
\phi_{\mathbf{k}}&\Delta_{\mathbf{k}\uparrow}&0&0&0&0\\\
0&0&-\Delta_{\mathbf{k}\downarrow}&\phi_{\mathbf{k}}^{\ast}&0&V\\\
0&0&\phi_{\mathbf{k}}&\Delta_{\mathbf{k}\downarrow}&0&0\\\
V&0&0&0&\epsilon+U\left\langle n_{\downarrow}\right\rangle&0\\\
0&0&V&0&0&\epsilon+U\left\langle
n_{\uparrow}\right\rangle\end{array}\right)\left(\begin{array}[]{c}a_{\mathbf{k},\uparrow}\\\
b_{\mathbf{k},\uparrow}\\\ a_{\mathbf{k},\downarrow}\\\
b_{\mathbf{k},\downarrow}\\\ f_{\uparrow}\\\ f_{\downarrow}\end{array}\right)$
(20)
where
$\Delta_{\mathbf{k}s}=\frac{i\lambda_{SO}}{3\sqrt{3}}s\xi_{\mathbf{k}}-elE_{z}$
and
$\phi_{\mathbf{k}}=t\overset{3}{\underset{i=1}{\sum}}e^{i\mathbf{k\cdot\delta_{i}}}$.
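For numerical work, the mean-field Hamiltonian of eq. (7) can be assembled and diagonalized at each $\mathbf{k}$ point. A sketch, assuming for simplicity a $k$-independent input for $\Delta_{\mathbf{k}s}$ and dropping the buckling phase $e^{ik_{z}2l}$ in $\phi_{\mathbf{k}}$; the lattice constant sets the length unit.

```python
import numpy as np

t, a = 1.6, 1.0    # silicene hopping (eV) and lattice constant (length unit)

def phi_k(kx, ky):
    """Nearest-neighbour structure factor t * sum_i exp(i k.delta_i), with the
    in-plane delta_1..3 given above (the k_z buckling phase is omitted here)."""
    deltas = [(a/2, a*np.sqrt(3)/2), (a/2, -a*np.sqrt(3)/2), (a, 0.0)]
    return t * sum(np.exp(1j*(kx*dx + ky*dy)) for dx, dy in deltas)

def H_mean_field(kx, ky, D_up, D_dn, V, eps, n_up, n_dn, U):
    """The 6x6 mean-field Hamiltonian of eq. (7) in the basis
    (A up, B up, A dn, B dn, impurity up, impurity dn); D_s stands for
    Delta_{k s}, taken here as a constant input for each spin."""
    p = phi_k(kx, ky)
    H = np.zeros((6, 6), complex)
    H[0, 0], H[1, 1] = -D_up, D_up
    H[2, 2], H[3, 3] = -D_dn, D_dn
    H[0, 1] = np.conj(p); H[1, 0] = p
    H[2, 3] = np.conj(p); H[3, 2] = p
    H[4, 4] = eps + U * n_dn          # eps_s = eps + U <n_{-s}>
    H[5, 5] = eps + U * n_up
    H[0, 4] = H[4, 0] = V             # hybridization with sublattice A only
    H[2, 5] = H[5, 2] = V
    return H
```

Diagonalizing with `np.linalg.eigvalsh` gives the six (real) mean-field bands at each $\mathbf{k}$.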
Figure 2: Dispersion relation in the long-wavelength approximation for
silicene for each spin, where $elE_{z}=0.5$ eV and $\lambda=0.039$ eV, ($\pm$)
for conduction (valence) band.
The local density of states $\rho_{s}(\omega)$ at the impurity can be obtained
as $\rho_{s}(\omega)=-\frac{1}{\pi}\Im{g}_{s}(\omega)$, where
$g_{s}=\left\langle f_{s}\right|G\left|f_{s}\right\rangle$ is the Green
function element for each spin $s$ at the impurity level. By solving
$G=(zI-H)^{-1}$ in the $a_{\mathbf{k},s}$, $b_{\mathbf{k},s}$ and $f_{s}$
basis, a coupled algebraic system is obtained, where the matrix element
$g_{s}=\left\langle f_{s}\right|G\left|f_{s}\right\rangle$ reads
$g_{s}=\frac{1}{z-\epsilon_{s}-\Sigma_{s}}$ (21)
where $z=\omega+i0^{+}$ and $\Sigma_{s}$ is the self-energy
$\Sigma_{s}=\overset{}{\underset{\mathbf{k,}\alpha=\pm
1}{\sum}}\frac{\sigma_{\mathbf{k}\alpha s}}{z-\alpha\epsilon_{\mathbf{k}s}}$
(22)
where
$\sigma_{\mathbf{k}\alpha
s}=\frac{V^{2}}{2}\left(1+\frac{\alpha\Delta_{s}}{\epsilon_{\mathbf{k}s}}\right)$
(23)
where $\Delta_{s}=elE_{z}-s\lambda_{so}$ and
$\epsilon_{\mathbf{k}s}=\sqrt{\Delta_{s}^{2}+\hbar^{2}v_{F}^{2}k^{2}}$ (24)
is the low energy dispersion relation of electrons in silicene with spin-orbit
coupling, obtained expanding the numerator and denominator of
$\xi_{\mathbf{k}}$ and $\phi_{\mathbf{k}}$ around the $K$ point in the
Brillouin zone.
Figure 3: Real ($\Re\Sigma_{s}$) and imaginary ($\Im\Sigma_{s}$) parts of the
self-energy for different staggered potential values, where we have used
$D=7$ eV, $V=0.9$ eV, $\epsilon_{0}=0.2$ eV, $U=0$, $\lambda=0.039$ eV and
$t=1.6$ eV for silicene. An asymmetric contribution of $\Sigma_{s}$ to the
valence and conduction bands is shown.
From the last equation we note that there are four bands for $\alpha,s=\pm
1$, describing electrons ($\alpha=1$) or holes ($\alpha=-1$) with spin $s$. The
bandgap $2\left|\Delta_{s}\right|\sim 1.5$ meV for $elE_{z}=0.5$ eV turns
silicene into a semiconductor, in contrast to graphene, and the dependence of
the gap on the spin is explicit (see figure 2). By computing the imaginary
part of the local Green function $g_{s}$ at the impurity, the local spin
density of states can be obtained as
$\rho_{s}(\omega)=-\frac{1}{\pi}\Im{g}_{s}=\frac{\Im\Sigma_{s}}{(Z_{s}^{-1}(\omega)\omega-\epsilon_{s})^{2}+\Im^{2}\Sigma_{s}}$
(25)
where $Z_{s}^{-1}(\omega)=1-\frac{\Re\Sigma_{s}}{\omega}$ is the
quasiparticle residue and $\Re\Sigma_{s}$ ($\Im\Sigma_{s}$) is the real
(imaginary) part of the self-energy, which can be written as
$\Re\Sigma_{s}=\overset{}{\underset{k,\alpha=\pm
1}{\sum}}\frac{\sigma_{\mathbf{k}\alpha
s}}{\omega-\alpha\epsilon_{\mathbf{k}s}}\text{ \ \ \ \ \ \ \
}\Im\Sigma_{s}=\pi\overset{}{\underset{k,\alpha=\pm
1}{\sum}}\sigma_{\mathbf{k}\alpha
s}\delta(\omega-\alpha\epsilon_{\mathbf{k}s})$ (26)
Computing the integral of eq. (26), the real and imaginary parts of the self-energy
read
$\displaystyle\Re\Sigma_{s}=\frac{V^{2}}{D^{2}}(\Delta_{s}-\omega)\ln\left(\left|\frac{\omega^{2}-\Delta_{s}^{2}-D^{2}}{\omega^{2}-\Delta_{s}^{2}}\right|\right)$
(27) $\displaystyle\Im\Sigma_{s}=\frac{\pi
V^{2}}{D^{2}}(\Delta_{s}-\omega)\left[\theta(\left|\Delta_{s}\right|-\omega)-\theta(\left|\Delta_{s}\right|+\omega)\right]\theta(D-\left|\omega\right|)$
where $D\sim 7$eV is the bandwidth, and where
$\theta(\sqrt{\Delta_{s}^{2}+D^{2}}-\omega)$ and
$\theta(\sqrt{\Delta_{s}^{2}+D^{2}}+\omega)$ have been disregarded from the
last equation because they only introduce changes for $\left|\omega\right|>D$.
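The closed forms of eq. (27) are straightforward to transcribe numerically; a sketch, with the parameter defaults taken from the figure captions ($V=0.9$ eV, $D=7$ eV).

```python
import numpy as np

def self_energy(omega, Delta_s, V=0.9, D=7.0):
    """Real and imaginary parts of the impurity self-energy, transcribed
    directly from eq. (27); omega, Delta_s, V and D are all in eV."""
    w = np.asarray(omega, dtype=float)
    d = Delta_s
    re = (V**2 / D**2) * (d - w) * np.log(
        np.abs((w**2 - d**2 - D**2) / (w**2 - d**2)))
    im = (np.pi * V**2 / D**2) * (d - w) * (
        np.heaviside(abs(d) - w, 0.5) - np.heaviside(abs(d) + w, 0.5)
    ) * np.heaviside(D - np.abs(w), 0.5)
    return re, im
```

Inside the gap, $|\omega|<|\Delta_{s}|$, the two step functions cancel and the broadening $\Im\Sigma_{s}$ vanishes, consistent with eq. (27).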
These results generalize those of Ref. [48], which correspond to a top-site
adatom in graphene with $\Delta_{s}=0$. In figure 3, the real and imaginary
parts of the polarized self-energy are shown for different values of the
staggered potential. In contrast to the case of adatoms on top of carbon
atoms in graphene [49], $\Im\Sigma_{s}$ and $\Re\Sigma_{s}$ are not symmetric
with respect to the Dirac point. This asymmetry is due to the presence of
$\Delta_{s}$ and increases with the applied external electric field (see
figure 3). Eq. (27) indicates that the level broadening scales as
$\left|\Delta_{s}-\omega\right|$ and vanishes identically for
$\left|\omega\right|<\left|\Delta_{s}\right|$.
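As a minimal numerical sketch of Eqs. (25) and (27), the snippet below evaluates the polarized self-energy and the resulting local density of states. The function names and default parameters ($V=0.9$ eV, $D=7$ eV, as in the caption of figure 4) are ours, and the overall normalization of the density of states follows eq. (25) as printed.

```python
import numpy as np

# Sketch of Eqs. (25) and (27); parameters and names are illustrative.

def self_energy(omega, Delta_s, V=0.9, D=7.0):
    """Real and imaginary parts of the polarized self-energy, Eq. (27)."""
    re = (V**2 / D**2) * (Delta_s - omega) * np.log(
        np.abs((omega**2 - Delta_s**2 - D**2) / (omega**2 - Delta_s**2)))
    step = lambda z: np.heaviside(z, 0.5)
    im = (np.pi * V**2 / D**2) * (Delta_s - omega) \
        * (step(np.abs(Delta_s) - omega) - step(np.abs(Delta_s) + omega)) \
        * step(D - np.abs(omega))
    return re, im

def dos(omega, eps_s, Delta_s, V=0.9, D=7.0):
    """Local spin density of states, Eq. (25) taken literally, using
    Z_s^{-1}(omega) * omega = omega - Re Sigma_s."""
    re, im = self_energy(omega, Delta_s, V, D)
    return im / ((omega - re - eps_s)**2 + im**2)
```

With `eps_s` the spin-dependent impurity level, the code reproduces the vanishing of the broadening inside the gap $\left|\omega\right|<\left|\Delta_{s}\right|$.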
Figure 4: Polarized real part $\Re\Sigma_{s}$ of $\Sigma_{s}$ in silicene
compared with $\Re\Sigma$ in graphene near the Dirac point and vanishing
electric field, and where $D=7$eV, $V=0.9$ eV, $\epsilon_{0}=0.2$ eV, $U=0$
and $\lambda=0.039$ eV. At the Dirac point, the quasiparticle residue in
silicene with $E_{z}=0$ is not zero.
The real part of the self-energy shifts the assumed unperturbed energy
$\omega$ and, in contrast to graphene, it does not vanish at the Dirac point
(see figure 4). The particle-hole symmetry breaking occurs in the whole
spectrum, in contrast with $s$ orbitals for hollow-site adatoms in graphene,
where the asymmetry is only evident in the high-energy sector ([49], [50]).
## 3 Results and discussions
The spin-polarized occupation numbers can be computed using $\rho_{s}$ of
eq.(25) as
$n_{s}=\int_{-D}^{\mu}\rho_{s}(\omega)d\omega$ (28)
where $\mu$ is the Fermi level. In order to obtain unequal spin occupation
numbers at the impurity, $n_{\uparrow}\neq n_{\downarrow}$, we must solve
eq. (25), in which the polarized density of states $\rho_{s}$ depends on
$n_{-s}$. The computation of $n_{\uparrow}$ and $n_{\downarrow}$ therefore
implies solving a self-consistent equation for $n_{s}$ as a function of
$\mu$. The Fermi energy can be tuned experimentally by applying an external
voltage to the sample that adds or subtracts charge carriers, in the form of
electrons or holes ([51], [52]).
Figure 5: Local magnetic moment (in units of $\mu_{B}$) as a function of $\mu$
for different values of $elE_{z}$ in the $U=0$ limit and where $V=1$ eV and
$\epsilon_{0}=0.2$ eV.
Before solving the self-consistency equations for the occupation numbers, we
can study the limits $U=0$ and $U\rightarrow\infty$. In the limit $U=0$,
local magnetism is possible due to the shift between the polarized local
densities of states at the impurity created by $\lambda_{so}$.
Figure 6: Local magnetic moment (in units of $\mu_{B}$) as a function of $\mu$
for different values of $\epsilon_{0}$ in the $U=\infty$ limit and where $V=1$
eV and $E_{z}=0$.
Because $U=0$, the occupation numbers $n_{-s}$ do not appear inside
$\rho_{s}$ and eq. (28) can be integrated without difficulty. In figure 5,
the local magnetic moment for $U=0$, in units of the Bohr magneton
$\mu_{B}$, is shown as a function of $\mu$ for different values of
$elE_{z}$; the peaks for $elE_{z}>0$ correspond to the shift of the
polarized density of states caused by the external electric field. On the
other hand, in the limit $U=\infty$, one of the quantities $n_{\uparrow}$ or
$n_{\downarrow}$ is zero because, if $n_{\uparrow}\neq 0$, adding a
spin-down electron to the adatom would cost infinite energy. We can
therefore set $n_{\downarrow}=0$ without loss of generality, so that the
local magnetic moment is $m=n_{\uparrow}(U=\infty)$ (see figure 6), where
the local magnetic moment is shown as a function of $\mu$ for different
values of $elE_{z}$; it tends to $m=1$ for large $elE_{z}$ and vanishes for
$E_{z}=0$.
In order to describe local magnetism between these two limits, the
self-consistent equations for $n_{s}$ must be solved by starting from random
values of $n_{\uparrow}$ and $n_{\downarrow}$ and computing
$\rho_{s}(\omega)$ from eq. (25). This local density of states at the
impurity is then used in eq. (28) to obtain new values of $n_{\uparrow}$ and
$n_{\downarrow}$, which are reintroduced in $\rho_{s}$.
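The self-consistency cycle just described can be sketched as follows. We assume an Anderson-type mean-field shift of the impurity level, $\epsilon_{s}=\epsilon_{0}+Un_{-s}$ (our reading of the model), and take `dos` to be any callable returning the polarized local density of states of eq. (25); the grid size and helper names are illustrative.

```python
import numpy as np

def occupations(dos, mu, eps0=0.2, U=0.1, D=7.0, tol=1e-6,
                max_iter=500, seed=0):
    """Iterate n_s = integral_{-D}^{mu} rho_s(omega) domega, Eq. (28)."""
    rng = np.random.default_rng(seed)
    n_up, n_dn = rng.random(2)             # random starting occupations
    omega = np.linspace(-D, mu, 4001)      # integration grid of Eq. (28)
    dw = omega[1] - omega[0]
    for _ in range(max_iter):
        # each spin sees a level shifted by the opposite-spin occupation
        rho_up = dos(omega, eps0 + U * n_dn)
        rho_dn = dos(omega, eps0 + U * n_up)
        new_up = np.sum(0.5 * (rho_up[1:] + rho_up[:-1])) * dw  # trapezoid
        new_dn = np.sum(0.5 * (rho_dn[1:] + rho_dn[:-1])) * dw
        if abs(new_up - n_up) < tol and abs(new_dn - n_dn) < tol:
            break                          # |n_s(i+1) - n_s(i)| < 1e-6
        n_up, n_dn = new_up, new_dn
    return n_up, n_dn
```

The local magnetic moment then follows as $m=\left|n_{\uparrow}-n_{\downarrow}\right|$ in units of $\mu_{B}$.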
Figure 7: Local magnetic moment (in units of $\mu_{B}$) in the impurity atom
in the variables $x=\frac{\pi V^{2}}{UD}$ and $y=(\mu-\epsilon_{0})/U$ for
graphene and for silicene with $E_{z}=0$, and $x=\frac{\pi V^{2}}{UD}$ and
$y=(\mu-\epsilon_{0}-(1+r)elE_{z})/U$ for silicene with applied field, where
$\epsilon_{0}=0.2$ eV, which is above the Dirac point. The color bar
indicates the local magnetic moment in units of $\mu_{B}$.
The iteration is performed until the occupation numbers satisfy the
condition $\left|n_{s}(i+1)-n_{s}(i)\right|<10^{-6}$. In figures 7 (for
$\epsilon_{0}=0.2$ eV) and 8 (for $\epsilon_{0}=-0.2$ eV), the magnetic
regime of the impurity atom as a function of $x=\pi V^{2}/UD$ and
$y=(\mu-\epsilon_{0}-(1+r)elE_{z})/U$ is shown for different values of the
electric field strength.
Figure 8: Local magnetic moment (in units of $\mu_{B}$) in the impurity atom
in the variables $x=\frac{\pi V^{2}}{UD}$ and $y=(\mu-\epsilon_{0})/U$ for
graphene and for silicene with $E_{z}=0$, and $x=\frac{\pi V^{2}}{UD}$ and
$y=(\mu-\epsilon_{0}-(1+r)elE_{z})/U$ for silicene with applied field, where
$\epsilon_{0}=-0.2$ eV, which is below the Dirac point.
In both figures we can compare the magnetic phases of the impurity atom in
silicene, with and without electric field, with the magnetic phases in
graphene, in terms of the Hamiltonian parameters $x=\frac{\pi V^{2}}{UD}$ and
$y=\frac{\mu-\epsilon_{0}-(1+r)elE_{z}}{U}$, where the local magnetic moment
is given in units of $\mu_{B}$ (see the color bar in both figures).
Figure 9: Left: local magnetic moment
$\left|n_{\uparrow}-n_{\downarrow}\right|$ (in units of $\mu_{B}$) as a
function of $elE_{z}$ for different values of $\mu$, where $V=1$ eV,
$U=0.1$ eV, $\lambda=0.039$ eV, $r=0.3$ and $\epsilon_{0}=-0.2$ eV. Right:
boundary between the magnetic and non-magnetic zones in the $\mu$-$elE_{z}$
plane for the same set of parameters $V$, $U$, $\lambda$, $r$ and
$\epsilon_{0}$. The straight line of local magnetism corresponds to the
shifted peaks in the left panel.
In both figures, for $x=0$ there is a non-vanishing local magnetic moment
for $\epsilon_{0}<\mu<\epsilon_{0}+U$ without electric field and for
$\epsilon_{0}+(1+r)elE_{z}<\mu<\epsilon_{0}+U+(1+r)elE_{z}$ with electric
field. This is expected because the electric field only introduces a shift
in the local energy of the impurity. As in graphene, the boundary in
silicene is not symmetrical at $y=\frac{1}{2}$ and exhibits no
particle-hole symmetry around $\mu=\epsilon_{0}+(1+r)elE_{z}$ (see [53]),
even for a top-site adatom, where the orbital symmetry is irrelevant and
the $C_{3v}$ point group symmetry of the honeycomb sublattice is preserved
by the adatom. Without electric field, there is a non-vanishing local
magnetic moment when the Fermi level is below $\epsilon_{0}$ for
$\epsilon_{0}>0$, as shown in figure 7, an effect similar to that found in
[42] for hollow-site adsorption in silicene. In turn, the spin-orbit
interaction stretches the boundary between phases towards the lower half
plane and narrows it when the electric field is turned on. (The electric
field values used are smaller than the critical field $elE_{z}\sim 0.59$ eV
at which the honeycomb structure of silicene becomes unstable; see [18].)
The fact that the boundary between magnetic phases enlarges for silicene
with $\lambda_{so}\neq 0$ and $E_{z}=0$ implies that the suppression of the
local broadening for $\left|\omega\right|<\lambda_{so}$ and the unequal
spin shift factor $\Delta_{s}-\omega$ in eq. (26) allow the formation of
spin moments for $\mu<\epsilon_{0}-U$ and small $U$.
Due to the linear scaling of the impurity-level broadening with $\omega$,
magnetism is allowed for Fermi energies $\mu>\epsilon_{0}-\frac{3}{2}U$
well below the impurity energy $\epsilon_{0}$, and this behavior is
enhanced when the electric field is turned on. In figure 8, the same effect
is shown for $\epsilon_{0}=-0.2$ eV, where magnetism can be found when the
Fermi level is larger than $\epsilon_{0}+U$, and the boundary between
magnetic phases shrinks in the $y$ direction when the electric field is
negative. By increasing the electric field strength, an asymmetric
broadening of the impurity energy level arises from the modification of the
imaginary part of the self-energy and, in turn, a shift in the impurity
peak appears due to the effective impurity energy
$\epsilon_{0}+(1+r)elE_{z}$.
In figure 9, the local magnetic moment is shown for different values of
$\mu$ as a function of $elE_{z}$, where we have considered $V=1$ eV,
$\epsilon_{0}=-0.2$ eV, $\lambda=0.039$ eV and $U=0.1$ eV. In both panels
it can be seen that magnetism follows a linear relation between $\mu$ and
$E_{z}$, caused by the shift in the density of states due to the effective
impurity energy $\epsilon_{0}+(1+r)elE_{z}$. When
$\epsilon_{0}+(1+r)elE_{z}+Un_{-s}<\mu<Un_{s}$, the occupation numbers are
not identical. The boundary of the magnetic phases in this case is
controlled by the broadening of the peak. For $-0.85$ eV $<elE_{z}<0.35$ eV
there is a local magnetic moment for $\mu>-0.4$ eV. When the impurity peaks
in the polarized density of states enter the gap zone, given by
$-\left|\Delta_{s}\right|<\omega<\left|\Delta_{s}\right|$, and when
$\mu>-\left|\Delta_{s}\right|$, the non-vanishing local magnetic moment is
frozen for larger $\mu$. Thus, by manipulating the Fermi level with the
applied gate voltage, an adatom interacting with silicene fulfils the
requirement for the formation of a magnetic state due to the
spin-asymmetric anomalous broadening and the spin-asymmetric broadening gap
for energies near the Dirac point. In turn, even for small $U$ values [54]
and Fermi energies below the effective on-site impurity energy, magnetism
arises in the impurity, carried by the itinerant electrons in the host
lattice, in contrast with transition-metal adatoms, where it is harder to
enhance the local magnetic moment for large $U$ [55]. For low impurity
concentrations, when the local magnetic moments are driven to an excited
state, for example with an external electric field, dynamical spin
excitations are formed and carried over long distances [56], which can be
exploited in spintronic devices to develop magnetic information storage
controlled by electric gates [57]. Currently, X-ray magnetic circular
dichroism (XMCD) and inelastic scanning tunneling spectroscopy are used to
identify adatoms with magnetocrystalline anisotropy energies of a few meV
deposited on graphene/SiC, showing paramagnetic behavior with an
out-of-plane magnetic moment for Co and Fe adatoms [58].
## 4 Conclusions
In this work we have studied the effect of the electric field on the formation
of a local magnetic moment in an impurity adsorbed on a top site in silicene.
By computing the polarized density of states in the impurity and solving the
self-consistent equations for the occupation numbers in the mean-field
approximation, we obtain the boundary of the magnetic phases for silicene with
spin-orbit coupling and different electric field strengths, considering on-
site impurity energies below and above the Dirac point. A local magnetic
moment is formed for Fermi energies below the on-site impurity energy due to
the broadening of the impurity level that scales linearly in
$\left|\omega\right|$ with a shift due to the spin-orbit coupling and the
external electric field. In turn, a gap in the broadening for
$\left|\omega\right|<elE_{z}-s\lambda_{so}$ allows the local magnetic
moment to be frozen when $\mu$ crosses the gap zone, even when $E_{z}=0$.
By increasing the electric field strength, the boundary between magnetic
phases stretches, allowing moment formation in silicene more easily than in
graphene. These results may be important for the design of spintronic
devices, where the local magnetic moment can be controlled by applying an
electric field, and for manipulating spin waves by considering different
adatom coverages of silicene subject to external oscillating electric
fields.
## 5 Acknowledgment
This work was partially supported by grants from CONICET (Argentina
National Research Council) and Universidad Nacional del Sur (UNS), and by
ANPCyT through PICT 1770 and PIP-CONICET research grants Nos.
114-200901-00272 and 114-200901-00068, as well as by SGCyT-UNS. J. S. A.
and P. J. are members of CONICET. F. E. and J. V. are fellow researchers at
this institution.
## 6 Author contributions
All authors contributed equally to all aspects of this work.
## References
* [1] G. G. Guzmán-Verri and L. C. L. Y. Voon, Phys. Rev. B 76, 075131 (2007).
* [2] M. Houssa, A. Dimoulas, A. Molle, Jour. Phys. Cond. Matt., 27 (25), 253002 (2015).
* [3] A. Geim, Science 324, 1530 (2009).
* [4] C. V. Nguyen, N. N. Hieu, Chem. Phys., 468, 9 (2016).
* [5] M. J. Spencer, T. Morishita (Eds.), Silicene, Springer International Publishing, 2016.
* [6] N. Gillgren, D. Wickramaratne, Y. Shi, T. Espiritu, J. Yang, J. Hu, J. Wei, X. Liu, Z. Mao, K. Watanabe, T. Taniguchi, M. Bockrath, Y. Barlas, R. K. Lake, C. Ning Lau, 2D Mater., 2 011001 (2014).
* [7] E. Bianco, S. Butler, S. Jiang, O. D. Restrepo, W. Windl, J. E. Goldberger, ACS Nano 7, 4413 (2013).
* [8] P. T. T. Le, M. Yarmohammadi, Chem. Phys., 519, 1-5 (2019).
* [9] R. Roldán, L. Chirolli, E. Prada, J. A. Silva-Guillén, P. San Jose and F. Guinea, Chem. Soc. Rev., 15 (2017).
* [10] M. Yarmohammadi, Phys. Rev. B, 98, 155424 (2018).
* [11] P. De Padova, C. Quaresima, C. Ottaviani, P. M. Sheverdyaeva, P. Moras, C. Carbone, D.Topwal, B. Olivieri, A. Kara, H. Oughaddou, B. Aufray, and G. Le Lay, Appl. Phys. Lett., 96, 261905 (2010).
* [12] P. Vogt, P. De Padova, C. Quaresima, J. Avila, E. Frantzeskakis, M. C. Asensio, A. Resta, B. Ealet, and G. Le Lay, Phys. Rev. Lett., 108, 155501 (2012).
* [13] A. Fleurence, R. Friedlein, T. Ozaki, H. Kawai, Y. Wang, and Y. Yamada-Takamura, Phys. Rev. Lett., 108, 245501 (2012).
* [14] M. Houssa, E. Scalise, K. Sankaran, G. Pourtois, V. V. Afanas’ev, A. Stesmans, Appl. Phys. Lett., 98 (22), 223107 (2011).
* [15] R. Winkler and U. Zülicke, Phys. Rev. B, 82, 245313 (2010).
* [16] N. Y. Dzade, K. O. Obodo, S. K. Adjokatse, A. C. Ashu, E. Amankwah, C. D. Atiso, A. A. Bello, E. Igumbor, S. B. Nzabarinda, J. T. Obodo, A. O. Ogbuu, O. E. Femi, J. O. Udeigwe, U. V. Waghmare, J. Phys.: Condens. Matter, 22 (37), 375502 (2010).
* [17] S. Lebegue and O. Eriksson, Phys. Rev. B, 79, 115409 (2009).
* [18] N. D. Drummond, V. Zlyomi, V. I. Fal’ko, Phys. Rev. B, 85 (7), 3702 (2012).
* [19] Z. Ni, Q. Liu, K. Tang, J. Zheng, J. Zhou, R. Qin, Z. Gao, D. Yu, J. Lu, Nano Letters, 12 (1), 113–118 (2012).
* [20] C. C. Liu, H. Jiang, Y. Yao, Phys. Rev. B, 84 (19), 195430 (2011).
* [21] Y. Yao, F. Ye, X. L. Qi, S. C. Zhang, Z. Fang, Phys. Rev. B, 75 (4) 041401 (2007).
* [22] C. C. Liu, W. Feng, Y. Yao, Phys. Rev. Lett., 107 (7), 076802 (2011).
* [23] M. Ezawa, Phys. Rev. B, 87 (15) 155415 (2013).
* [24] M. Yarmohammadi and K. Mirabbaszadeh, Commun. Theor. Phys., 67, 5 (2017).
* [25] E. Rotenberg, Graphene Nanoelectronics, in: H.Raza (Ed.),Springer-Verlag,Berlin, Heidelberg, 2012.
* [26] F. Escudero, J. S. Ardenghi, L. Sourrouille, P. Jasen and A. Juan, Super. and Micro., 113, 291-300 (2018).
* [27] J. S. Ardenghi, P. Bechthold, E. Gonzalez, P. Jasen, A. Juan, Super. and Micro., 72, 325-335, (2014).
* [28] J. S. Ardenghi, P. Bechthold, E. Gonzalez, P. Jasen, A., Eur. Phys. J. B, 88: 47 (2015).
* [29] J. S. Ardenghi, P. Bechthold, P. Jasen, E. Gonzalez, O. Nagel, Physica B, 427, 97-105, (2013).
* [30] Y. V. Skrypnyk and V. M. Loktev, Phys. Rev. B 73, 241402(R) (2006).
* [31] J. O. Sofo, G. Usaj, P. S. Cornaglia, A. M. Suarez, A. D. Hernandez-Nieves, and C. A. Balseiro, Phys. Rev. B 85, 115405 (2012).
* [32] A. C. Hewson, The Kondo problem to heavy fermions (Cambridge University Press, Cambridge, 1997).
* [33] J. Ren, H. Guo, J. Pan, Y. Y. Zhang, X. Wu, H.-G. Luo, S. Du, S. T. Pantelides, and H.-J. Gao, Nano Lett. 14, 4011 (2014).
* [34] B. Aufray, A. Kara, S. Vizzini, H. Oughaddou, C. Léandri, B. Ealet and G. Le Lay, Appl. Phys. Lett. 96 183102 (2010).
* [35] E. Cinquanta, E. Scalise, D. Chiappe, C. Grazianetti, B. van den Broek, M. Houssa, M. Fanciulli and A. Molle, J. Phys. Chem. C, 117 16719–24 (2013).
* [36] N. Y. Dzade, K. O. Obodo, S. K. Adjokatse, A. C. Ashu, E. Amankwah, C. D. Atiso, A. A. Bello, E. Igumbor, S. B. Nzabarinda, J. T. Obodo, A. O. Ogbuu, O. E. Femi, J. O. Udeigwe and U. V. Waghmare, J. Phys.: Condens. Matter, 22 375502 (2010).
* [37] V. Q. Bui, T. T. Pham, H. V. S. Nguyen and H. Le, J. Phys. Chem. C, 117 23364–71 (2013).
* [38] Y. Liu, X. Zhou, M. Zhou, M.-Q. Long, G. Zhou, J.Appl. Phys., 116 (24) 244312 (2014).
* [39] F. Escudero, J. S. Ardenghi and P. Jasen, Jour. Magn. Magn. Mat., 454, 131-138 (2018).
* [40] F. Escudero, J. S. Ardenghi and P. Jasen, J. Phys. Condens. Matter, 30, 275803 (2018).
* [41] P. W. Anderson, Phys. Rev., 124, 41 (1964).
* [42] J. Villarreal, J. S. Ardenghi and P. Jasen, Superlattice. Microst., 130 (285-296) 2019.
* [43] M. Laubach, J. Reuther, R. Thomale, S. Rachel, Phys. Rev. B 90, 165136 (2014).
* [44] C. L. Kane and E. J. Mele, Phys. Rev. Lett., 95, 226801 (2005).
* [45] M. Zare, Phys. Rev. B, 100, 085434 (2019).
* [46] H. M. Le, T. T Pham, T. S. Dinh, Y. Kawazoe, and D. Nguyen-Manh, J. Phys.: Condens. Matter, 28(13), 135301 (2016).
* [47] S. Nigam, S. K. Gupta, C. Majumder and R. Pandey, Phys. Chem. Chem. Phys., 17, 11324 (2015).
* [48] B. Uchoa, L. Yang, S. W. Tsai, N. M. R. Peres, and A. H. Castro Neto, Phys. Rev. Lett. 103, 206804 (2009).
* [49] B. Uchoa, L. Yang, S.-W. Tsai, N. M. R. Peres, A. H. Castro Neto, New Journal of Physics 16, 013045 (2014).
* [50] M. A. Romero, A. Iglesias-Garcia and E. C. Goldberg, Phys. Rev. B, 83 125411 (2011).
* [51] R. R. Nair, I-L. Tsai, M. Sepioni, O. Lehtinen, J. Keinonen, A. V. Krasheninnikov, A. H. Castro Neto, M. I. Katsnelson, A. K. Geim, and I. V. Grigorieva, Nat. Commun. 4, 2010 (2013).
* [52] N. A. Pike and D. Stroud, Phys. Rev. B, 89 115428 (2014).
* [53] B. Uchoa, V. N. Kotov, N. M. R. Peres, and A. H. Castro Neto, Phys. Rev. Lett. 101, 026805 (2008).
* [54] S. K. Pati, T. Enoki and C. N. R. Rao, Graphene and its fascinating attributes, World Scientific Publishing), 2011 (chapter 7).
* [55] M. Manadé, F. Viñes, and F. Illas, Carbon, 95:525 (2015).
* [56] F. S. M. Guimaraes, D. F. Kirwan, A. T. Costa, R. B.Muniz, D. L. Mills, and M. S. Ferreira, Phys. Rev. B 81, 153408 (2010).
* [57] Li Tao, E. Cinquanta, D. Chiappe, C. Grazianetti, M. Fanciulli, M. Dubey, A. Molle and D. Akinwande, Nature Nanotech. 10, 227–231 (2015)
* [58] T. Eelbo, M. Wasniowska, P. Thakur, M. Gyamfi, B. Sachs, T. O. Wehling, S. Forti, U. Starke, C. Tieg, A. I. Lichtenstein, and R. Wiesendanger, Phys. Rev. Lett. 110,136804 (2013).
11institutetext: Physique Nucléaire Théorique et Physique Mathématique, C.P.
229, Université Libre de Bruxelles (ULB), B 1050 Brussels, Belgium
# Resonances in ${}^{12}{\mathrm{C}}$ and ${}^{24}{\rm Mg}$: what do we learn
from a microscopic cluster theory?
P. Descouvemont _Directeur de Recherches FNRS_
(Received: date / Revised version: date)
###### Abstract
We discuss resonance properties in three-body systems, with examples on
${}^{12}{\mathrm{C}}$ and ${}^{24}{\rm Mg}$. We use a microscopic cluster
model, where the generator coordinate is defined in the hyperspherical
formalism. The ${}^{12}{\mathrm{C}}$ nucleus is described by an
$\alpha+\alpha+\alpha$ structure, whereas ${}^{24}{\rm Mg}$ is considered as
an ${}^{16}{\rm O}+\alpha+\alpha$ system. We essentially pay attention to
resonances. We review various techniques which may extend variational methods
to resonances. We consider $0^{+}$ and $2^{+}$ states in ${}^{12}{\mathrm{C}}$
and ${}^{24}{\rm Mg}$. We show that the r.m.s. radius of a resonance is
strongly sensitive to the variational basis. This has consequences for the
Hoyle state ($0^{+}_{2}$ state in ${}^{12}{\mathrm{C}}$) whose radius has been
calculated or measured in several works. In ${}^{24}{\rm Mg}$, we identify two
$0^{+}$ resonances slightly below the three-body threshold.
## 1 Introduction
Clustering is a well established phenomenon in light nuclei (see Refs. FHK18 ;
HIK12 ; DD12 for recent reviews). In nuclear physics, most cluster states
involve the $\alpha$ particle. Due to its large binding energy, the $\alpha$
particle tends to keep its own identity, which leads to the $\alpha$ cluster
structure Br66 ; FHI80 . The cluster structure in $\alpha$ nuclei (i.e. with
nucleon numbers $A=4k$, where $k$ is an integer) was clarified by Ikeda
et al. ITH68 , who proposed a diagram identifying the situations where a
cluster structure can be observed. The $\alpha$ cluster structure is
essentially found for nuclei near $N=Z$. The $\alpha$ model and its
extensions were utilized by many authors to investigate the properties of
$\alpha$-particle nuclei such as 8Be, 12C, 16O, etc. In particular,
interest in $\alpha$-cluster models was recently revived by the hypothesis
of a new form of nuclear matter, in analogy with Bose-Einstein condensates
THS01 ; ZFH20 .
Nuclear models are essentially divided into two categories: (1) in
non-microscopic models, the internal structure of the clusters is
neglected, and they interact through a nucleus-nucleus potential; (2) in
microscopic models, the wave functions depend on the $A$ nucleons of the
system, and the Hamiltonian involves a nucleon-nucleon interaction. Recent
developments in nuclear models NF08 ; NQS09 ; He20 aim to find exact
solutions of the $A$-body problem, but they face strong difficulties when
the nucleon number increases. To simplify the problem, cluster models
assume that the nucleons are grouped in clusters. The simplest variant is a
two-cluster model, which has been developed for more than 40 years WT77 ;
Ho77 . Multicluster microscopic models are more recent (see, for example,
Ref. DD12 ); they extend the range of applications.
In the present paper, we focus on two $\alpha$ nuclei: ${}^{12}{\mathrm{C}}$
and ${}^{24}{\rm Mg}$, which are described by three-body structures ($3\alpha$
and ${}^{16}{\rm O}+\alpha+\alpha$, respectively). Over the last 20 years,
there was a strong interest on ${}^{12}{\mathrm{C}}$, and in particular on the
$0^{+}_{2}$ resonance, known as the Hoyle state FF14 . The unbound nature,
however, make theoretical studies delicate.
With ${}^{12}{\mathrm{C}}$ and ${}^{24}{\rm Mg}$ as typical examples, we
discuss more specifically the determination of resonance properties in cluster
models. As resonances are unbound, a rigorous treatment would require a
scattering model, with scattering boundary conditions. There are, however,
various techniques aimed at complementing the much simpler variational method
which is, strictly speaking, valid for bound states only. In a variational
method, negative energies are associated with physical bound states. The
positive eigenvalues correspond to approximations of the continuum. For narrow
resonances, a single eigenvalue is in general a fair approximation. We show,
however, that the calculation of physical quantities, such as the r.m.s.
radius, should be treated carefully.
The paper is organized as follows. In Sec. 2, we present the microscopic
three-body model, using hyperspherical coordinates. Section 3 is devoted to a
brief discussion of different techniques dealing with resonances. The
${}^{12}{\mathrm{C}}$ and ${}^{24}{\rm Mg}$ nuclei are presented in Sects. 4
and 5, respectively. Concluding remarks are given in Sect. 6.
## 2 The Microscopic three-cluster model
### 2.1 Hamiltonian and wave functions
In a microscopic model, the Hamiltonian of an $A$-nucleon system is given by
$\displaystyle H=\sum_{i=1}^{A}T_{i}+\sum_{i<j=1}^{A}(V^{N}_{ij}+V^{C}_{ij}),$
(1)
where $T_{i}$ is the kinetic energy of nucleon $i$, and $V^{N}_{ij}$ and
$V^{C}_{ij}$ are the nuclear and Coulomb interactions between nucleons $i$ and
$j$.
In a partial wave $J\pi$, the wave function $\Psi^{JM\pi}$ is a solution of
the Schrödinger equation
$\displaystyle H\Psi^{JM\pi}=E\Psi^{JM\pi}.$ (2)
Recent ab initio models NF08 ; NQS09 aim at determining exact solutions of
Eq. (2). For instance, the No-Core Shell Model (NCSM) is based on large one-
center harmonic-oscillator (HO) bases and effective interactions NKB00 ,
derived from realistic forces such as Argonne WSS95 or CD-Bonn Ma01 . These
interactions are adapted for finite model spaces through a particular unitary
transformation. Wave functions are then expected to be accurate, but states
presenting a strong clustering remain difficult to describe with this
approach.
In cluster models, the nucleon-nucleon interaction $V^{N}_{ij}$ must account
for the cluster approximation of the wave function. This leads to effective
interactions, adapted to harmonic-oscillator orbitals. For example, using $0s$
orbitals for the $\alpha$ particle makes all matrix elements of non-central
forces equal to zero. The effect of non-central components is simulated by an
appropriate choice of the central effective interaction. Typical effective
interactions are the Minnesota TLT77 or the Volkov potentials Vo65 . These
central forces simulate the effects of the missing tensor interaction. They
include an adjustable parameter, which can be slightly modified without
changing the basic properties of the force. This parameter is typically used
to reproduce the energy of the ground state or of a resonance.
In the present model, Eq. (2) is solved by using the cluster approximation,
i.e. the nucleus is seen as a three-body system, where each cluster is
represented by a shell-model wave function. This leads to the Resonating Group
Method (RGM, see Refs. Ho77 ; DD12 ) which was initially developed for two-
cluster systems, but more recently extended to three-body nuclei KD04 .
The three-body system is illustrated in Fig. 1. Coordinate $\boldsymbol{r}$ is
associated with the external clusters, whereas $\boldsymbol{R}$ is the
relative coordinate between the core and the two-body subsystem.
Figure 1: Coordinates in the microscopic three-cluster system.
Scaled Jacobi coordinates $\boldsymbol{x}$ and $\boldsymbol{y}$ ZDF93 ; KD04
are obtained from
$\displaystyle\boldsymbol{x}=\sqrt{A_{1}A_{2}/(A_{1}+A_{2})}\,\boldsymbol{r},$
$\displaystyle\boldsymbol{y}=\sqrt{A_{c}(A_{1}+A_{2})/(A_{c}+A_{1}+A_{2})}\,\boldsymbol{R},$
(3)
where $A_{c}$ is the mass of the core and ($A_{1},A_{2}$) the masses of the
external clusters. We use the hyperspherical formalism where the hyperradius
$\rho$ and the hyperangle $\alpha$ are defined as
$\displaystyle\rho^{2}=\boldsymbol{x}^{2}+\boldsymbol{y}^{2},\qquad\alpha=\arctan(y/x).$ (4)
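Eqs. (3) and (4) can be illustrated with a short numerical sketch. The mass numbers below correspond to the ${}^{16}{\rm O}+\alpha+\alpha$ partition used for ${}^{24}{\rm Mg}$; the example vectors and function name are ours.

```python
import numpy as np

# Sketch of Eqs. (3)-(4): scaled Jacobi coordinates and the
# hyperspherical variables (rho, alpha) for a core + two-cluster system.

def hyperspherical(r, R, Ac=16.0, A1=4.0, A2=4.0):
    x = np.sqrt(A1 * A2 / (A1 + A2)) * np.asarray(r, float)   # Eq. (3)
    y = np.sqrt(Ac * (A1 + A2) / (Ac + A1 + A2)) * np.asarray(R, float)
    rho = np.sqrt(x @ x + y @ y)                              # hyperradius
    alpha = np.arctan(np.linalg.norm(y) / np.linalg.norm(x))  # hyperangle
    return rho, alpha
```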
The hyperspherical formalism is well known in non-microscopic three-body
systems KRV08 ; ZDF93 , where the internal structure of the nuclei is
neglected. This formalism makes use of five angles
$\Omega^{5}=(\Omega_{x},\Omega_{y},\alpha)$ and of a hypermomentum $K$,
which generalizes the concept of angular momentum in two-body systems. The
hyperspherical functions are RR70
$\displaystyle{\cal
Y}^{\ell_{x}\ell_{y}}_{KLM}(\Omega^{5})=\phi_{K}^{\ell_{x}\ell_{y}}(\alpha)\left[Y_{\ell_{x}}(\Omega_{x})\otimes
Y_{\ell_{y}}(\Omega_{y})\right]^{LM},$ (5)
where $\ell_{x}$ and $\ell_{y}$ are angular momenta associated with the Jacobi
coordinates $\boldsymbol{x}$ and $\boldsymbol{y}$. Functions
$\phi_{K}^{\ell_{x}\ell_{y}}$ are given by
$\displaystyle\phi_{K}^{\ell_{x}\ell_{y}}(\alpha)={\cal
N}_{K}^{\ell_{x}\ell_{y}}\,(\cos\alpha)^{\ell_{x}}(\sin\alpha)^{\ell_{y}}P_{n}^{\ell_{y}+\frac{1}{2},\ell_{x}+\frac{1}{2}}(\cos
2\alpha).$ (6)
In this definition, $P_{n}^{\alpha,\beta}(x)$ is a Jacobi polynomial,
${\cal N}_{K}^{\ell_{x}\ell_{y}}$ is a normalization factor, and
$n=(K-\ell_{x}-\ell_{y})/2$ is a non-negative integer.
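A minimal sketch of the hyperangular functions of Eq. (6) is given below. The explicit form of the normalization ${\cal N}_{K}^{\ell_{x}\ell_{y}}$ is the standard one that makes the functions orthonormal on $[0,\pi/2]$ with weight $\cos^{2}\alpha\,\sin^{2}\alpha$; conventions vary between authors, so this choice is an assumption.

```python
import numpy as np
from scipy.special import eval_jacobi, gamma

def phi_K(alpha, K, lx, ly):
    """Hyperangular function of Eq. (6), for one Jacobi configuration."""
    n = (K - lx - ly) // 2
    assert 2 * n == K - lx - ly and n >= 0, "need K - lx - ly even, >= 0"
    # standard normalization (assumed convention): orthonormal with
    # weight cos^2(alpha) sin^2(alpha) on [0, pi/2]
    norm = np.sqrt(2.0 * (K + 2) * gamma(n + 1) * gamma(n + lx + ly + 2)
                   / (gamma(n + lx + 1.5) * gamma(n + ly + 1.5)))
    return (norm * np.cos(alpha)**lx * np.sin(alpha)**ly
            * eval_jacobi(n, ly + 0.5, lx + 0.5, np.cos(2.0 * alpha)))
```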
For the sake of clarity, we limit the presentation to clusters with spin 0 (a
generalization can be found in Ref. De19 ). The total wave function of the
system is written as
$\displaystyle\Psi^{JM\pi}=\sum_{\ell_{x}\ell_{y}}\sum_{K=0}^{\infty}{\cal
A}\,\phi_{1}\phi_{2}\phi_{3}{\cal
Y}^{\ell_{x}\ell_{y}}_{KJM}(\Omega^{5})\chi^{J\pi}_{\ell_{x}\ell_{y}K}(\rho),$
(7)
where ${\cal A}$ is the $A$-nucleon antisymmetrizor, and $\phi_{i}$ are the
cluster wave functions, defined in the shell-model. For the $\alpha$ particle,
the internal wave function $\phi$ is a Slater determinant involving four $0s$
orbitals. Definition (7), however, is valid in a broader context where $\phi$
is a linear combination of several Salter determinants. A recent example De19
is the 11Li nucleus described by a 9Li+n+n structure, where the shell-model
description of 9Li involves all Slater determinants (90) which can be built in
the $p$ shell. The oscillator parameter $b$ is taken identical for the three
clusters. Going beyond this approximation raises enormous difficulties due to
center-of-mass problems, even in two-cluster calculations BK92 . In Eq. (7),
the hypermoment $K$ runs from zero to infinity. In practice a truncation value
$K_{\rm max}$ is adopted. The hyperradial functions
$\chi^{J\pi}_{\ell_{x}\ell_{y}K}(\rho)$ are to be determined from the
Schrödinger equation (2).
As for two-cluster systems, the RGM definition clearly displays the physical
interpretation of the cluster approximation. In practice, however, using the
Generator Coordinate Method (GCM) wave functions is equivalent, and is more
appropriate to systematic numerical calculations DD12 . In the GCM, the wave
function (7) is equivalently written as
$\displaystyle\Psi^{JM\pi}=\sum_{\gamma}\int
dR\,f^{J\pi}_{\gamma}(R)\,\Phi^{JM\pi}_{\gamma}(R),$ (8)
where we use label $\gamma=(\ell_{x},\ell_{y},K)$. In this equation, $R$ is
the generator coordinate, $\Phi^{JM\pi}_{\gamma}(R)$ are projected Slater
determinants, and $f^{J\pi}_{\gamma}(R)$ are the generator functions (see Ref.
DD12 for more detail). In practice, the integral is replaced by a sum over a
finite set of $R$ values (typically 10-15 values are chosen, up to $R\approx
10-12$ fm).
After discretization of (8), the generator functions are obtained from the
eigenvalue problem, known as the Hill-Wheeler equation,
$\displaystyle\sum_{\gamma n}$
$\displaystyle\biggl{[}H^{J\pi}_{\gamma,\gamma^{\prime}}(R_{n},R_{n^{\prime}})-E^{J\pi}_{k}N^{J\pi}_{\gamma,\gamma^{\prime}}(R_{n},R_{n^{\prime}})\biggr{]}f^{J\pi}_{(k)\gamma}(R_{n})=0,$
(9)
where $k$ denotes the excitation level. The Hamiltonian and overlap kernels
are obtained from seven-dimensional integrals involving matrix elements
between Slater determinants (see Refs. KD04 ; DD12 for details). They are
given by
$\displaystyle N^{J\pi}_{\gamma,\gamma^{\prime}}(R,R^{\prime})$
$\displaystyle=\langle\Phi^{JM\pi}_{\gamma}(R)|\Phi^{JM\pi}_{\gamma^{\prime}}(R^{\prime})\rangle$
$\displaystyle H^{J\pi}_{\gamma,\gamma^{\prime}}(R,R^{\prime})$
$\displaystyle=\langle\Phi^{JM\pi}_{\gamma}(R)|H|\Phi^{JM\pi}_{\gamma^{\prime}}(R^{\prime})\rangle.$
(10)
The non-projected matrix elements are computed with Brink’s formula Br66 ,
and the main part of the numerical calculation is devoted to the two-body
interaction, which involves quadruple sums over the orbitals. The
projection over angular momentum, which requires multidimensional
integrals, makes the calculation very demanding. A more detailed
description is provided in Ref. De19 .
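Structurally, the discretized Hill-Wheeler equation (9) is a generalized symmetric eigenvalue problem with the overlap kernel as metric. The sketch below illustrates this with toy $3\times 3$ kernels (the numbers are illustrative, not GCM output); in realistic calculations the overlap kernel is often near-singular, and small eigenvalues of it must be projected out first.

```python
import numpy as np
from scipy.linalg import eigh

# Toy discretized Hill-Wheeler problem, Eq. (9): H f = E N f.
R = np.array([2.0, 4.0, 6.0])                      # generator coordinates (fm)
N = np.exp(-0.25 * (R[:, None] - R[None, :])**2)   # toy Gaussian overlap kernel
H = N * (-1.0 + 0.1 * (R[:, None] + R[None, :]))   # toy Hamiltonian kernel

# eigenvalues E^{Jpi}_k and generator functions f^{Jpi}_{(k)}(R_n)
E, f = eigh(H, N)
```

`scipy.linalg.eigh(H, N)` solves the generalized problem directly, using the positive-definite overlap matrix as the metric.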
### 2.2 Radii and energy curves
From the wave functions (7) and (8), various properties can be computed. We
discuss more specifically the r.m.s. radius, defined as
$\displaystyle\langle
r^{2}\rangle=\frac{1}{A}\langle\Psi^{JM\pi}|\sum_{i=1}^{A}(r_{i}-R_{\rm
c.m.})^{2}|\Psi^{JM\pi}\rangle,$ (11)
which, in the GCM is determined from
$\displaystyle\langle r^{2}\rangle=\sum_{\gamma
n}\sum_{\gamma^{\prime}n^{\prime}}f^{J\pi}_{\gamma}(R_{n})f^{J\pi}_{\gamma^{\prime}}(R_{n}^{\prime})$
$\displaystyle\hskip
8.5359pt\times\langle\Phi^{JM\pi}_{\gamma}(R_{n})|\frac{1}{A}\sum_{i=1}^{A}(r_{i}-R_{\rm
c.m.})^{2}|\Phi^{JM\pi}_{\gamma^{\prime}}(R_{n^{\prime}})\rangle.$ (12)
The matrix elements between Slater determinants are obtained as in Eqs. (10).
Notice that these calculations are rigorous for bound states, i.e. states with
energy $E^{J\pi}_{k}$ lower than the three-cluster breakup threshold
$E_{T}=E_{1}+E_{2}+E_{3}$, $E_{i}$ being the internal energy of cluster $i$,
computed consistently with the same Hamiltonian. In this case, the relative
functions $\chi^{J\pi}_{\gamma}(\rho)$ [see Eq. (7)] tend rapidly to zero, and
the sum over $(n,n^{\prime})$ in (12) converges. For resonances
($E^{J\pi}_{k}>E_{T}$), the convergence of (12) is not guaranteed. We discuss
this issue in more detail in Sect. 3.
The energies and r.m.s. radii discussed in the previous subsection involve all
generator coordinates. It is, however, useful to analyze the energies of the
system for a single $R$ value. These quantities are referred to as energy
curves. Two variants can be considered. In the first, only the diagonal
matrix elements of the Hamiltonian are used, and the energy curves are defined
as
$\displaystyle
E^{J\pi}_{\gamma}(R)=\frac{H^{J\pi}_{\gamma,\gamma}(R,R)}{N^{J\pi}_{\gamma,\gamma}(R,R)}-E_{T}.$
(13)
This definition ignores the couplings between the channels. At large
distances, they tend to
$\displaystyle
E^{J\pi}_{\gamma}(R)\rightarrow\frac{Z_{\gamma\gamma}e^{2}}{R}+\frac{\hbar^{2}}{2m_{N}}\frac{(K+3/2)(K+5/2)}{R^{2}}+\frac{1}{4}\hbar\omega,$
(14)
where $Z_{\gamma\gamma}e^{2}$ is a diagonal element of the Coulomb three-body
interaction (see, for example, Ref. De10 ), $m_{N}$ is the nucleon mass, and
$\frac{1}{4}\hbar\omega$ is the residual energy associated with the harmonic
oscillator functions ($\hbar\omega=\hbar^{2}/m_{N}b^{2}$).
In the alternative approach, the energy curves stem from a diagonalization of
the Hamiltonian for a fixed $R$ value. They are given by the eigenvalue
problem
$\displaystyle\sum_{\gamma}$
$\displaystyle\biggl{[}H^{J\pi}_{\gamma,\gamma^{\prime}}(R,R)-E^{J\pi}_{k}(R)N^{J\pi}_{\gamma,\gamma^{\prime}}(R,R)\biggr{]}c^{J\pi}_{(k)\gamma}(R)=0.$
(15)
At large distances, the coupling elements ($\gamma\neq\gamma^{\prime}$) tend
to zero and both definitions (13) and (15) are equivalent. In three-body
systems, however, the couplings are known to extend to large distances, even
for short-range interactions (see, for example, Refs. DTB06 ; De10 ; DD09 ).
The energy curves cannot be considered as genuine potentials. However, they
provide various pieces of information, such as the existence of bound states
or narrow resonances, the level ordering, the cluster structure, etc.
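For a fixed $R$, Eq. (15) is a small generalized eigenvalue problem. The following sketch solves it for invented two-channel kernels (the numbers are not from the text) by reducing it to a standard symmetric problem through a Cholesky factorisation of the overlap:

```python
import numpy as np

# Toy 2-channel kernels at fixed R (illustrative numbers only).
H = np.array([[-1.20, 0.30],
              [ 0.30, 0.80]])   # Hamiltonian kernel H_{gamma,gamma'}(R,R)
N = np.array([[ 1.00, 0.15],
              [ 0.15, 1.00]])   # overlap kernel N_{gamma,gamma'}(R,R)

# Solve sum_gamma [H - E N] c = 0 via Cholesky reduction: N = L L^T.
L = np.linalg.cholesky(N)
Linv = np.linalg.inv(L)
Hs = Linv @ H @ Linv.T          # symmetric reduced Hamiltonian
E, y = np.linalg.eigh(Hs)
c = Linv.T @ y                  # back-transform the eigenvectors

# Verify the generalized eigenvalue equation for each solution.
for k in range(2):
    r = H @ c[:, k] - E[k] * (N @ c[:, k])
    assert np.allclose(r, 0.0, atol=1e-12)
print(E)
```

The same reduction applies to the full problem (9), where the overlap kernel couples all generator coordinates and channels.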
## 3 Discussion of resonances
The eigenvalue problem (9) is, strictly speaking, valid for bound states only.
The variational principle guarantees that an upper limit of the exact solution
is found, and the wave function tends exponentially to zero. The situation,
however, is different for positive-energy states. In that case, the lowest
energy is zero, i.e. the optimal solution, according to the variational
principle, corresponds to a system where the clusters are at infinite distance
from each other.
For narrow resonances, the bound-state approximation (BSA), which is a direct
extension of (9), usually provides a fair approximation of the energy, even
with finite bases. If the width is small, the energy is fairly stable when the
basis changes. The situation is different for the wave function. The long-
range part may be sensitive to the choice of the basis, and matrix elements
using these wave functions may be unstable. A typical example will be shown
with the $0^{+}_{2}$ resonance of ${}^{12}{\mathrm{C}}$.
For broad resonances, there are various techniques which complement the
variational method. The idea is to avoid scattering calculations, such as
those of the $R$-matrix theory DB10 , where resonance properties are derived from an
analysis of the phase shifts (or scattering matrices). The complementary
methods have solid mathematical foundations, and are, in principle, relatively
simple to implement in variational calculations. We briefly summarize below
some of them.
* •
The complex scaling method (CSM) is based on the rotation of the space and
momentum coordinates Ho83 ; AC71 ; AMK06 . In other words, the space
coordinate $\boldsymbol{r}$ and momentum $\boldsymbol{p}$ of each particle are
transformed as
$\displaystyle
U(\theta)\boldsymbol{r}U^{-1}(\theta)=\boldsymbol{r}\exp(i\theta),$
$\displaystyle
U(\theta)\boldsymbol{p}U^{-1}(\theta)=\boldsymbol{p}\exp(-i\theta),$ (16)
where $\theta$ is the rotation angle. Under this transformation, the
Schrödinger equation reads
$\displaystyle
H(\theta)\Psi(\theta)=U(\theta)HU^{-1}(\theta)\Psi(\theta)=E(\theta)\Psi(\theta),$
(17)
and the solutions $\Psi(\theta)$ are square-integrable provided that $\theta$
is properly chosen AMK06 . They can be expanded over a finite basis, after
rotation of the Hamiltonian. Of course the potential should be available in an
analytic form to apply transformation (16).
The ABC theorem AC71 shows that the eigenvalues $E_{k}(\theta)$ are located
on a straight line in the complex plane, rotated by an angle $2\theta$.
Resonant states are not affected by this angle and correspond to stable
eigenvalues
$\displaystyle E_{k}(\theta)=E_{R}-i\Gamma/2,$ (18)
where $E_{R}$ is the energy and $\Gamma$ the width of the resonance.
Recently, the CSM has been extended to the calculation of level densities
SMK05 ; SKG08 and to dipole strength distributions AMK06 . Of course the
resonance properties (18) derived from the CSM should also be consistent with
those derived from a phase-shift analysis.
* •
In the complex absorbing potential (CAP) method, an imaginary potential is
added to the Hamiltonian kernel. The first applications were developed in
atomic physics RM93 and in non-microscopic nuclear models TOI14 . Ito and
Yabana IY05 have extended the method to microscopic cluster calculations
within the GCM. The Hamiltonian kernel (10) is replaced by
$\displaystyle H(R,R^{\prime})\rightarrow H(R,R^{\prime})-i\eta
W(R)\delta(R-R^{\prime}),$ (19)
where $\eta$ is a positive real number. The absorbing potential is usually
taken as
$\displaystyle W(R)=\theta(R-R_{0})(R-R_{0})^{\beta},$ (20)
where $R_{0}$ is an arbitrary radius, larger than the range of the nuclear
force and $\theta$ is the step function. In most calculations, $\beta$ is
taken as $\beta=4$.
This method provides the energy and widths of resonances as in (18), even for
broad states. In Ref. IY05 , it was shown, however, that the method needs many
generator coordinates. In their microscopic study of $\alpha+^{6}$He
scattering, Ito and Yabana use 100 generator coordinates, up to $R=50$ fm. For
computational reasons, this method is difficult to apply to three-body
systems, owing to the strong couplings between the channels, and to the long
range of the potentials.
* •
The analytic continuation in a coupling constant (ACCC) method has been
proposed by Kukulin et al. KKH89 to evaluate the energy and width of a
resonance. The main advantage is that the ACCC method only requires bound-
state calculations, much simpler than scattering calculations involving
boundary conditions. Some applications to a microscopic description of two-
and three-cluster models can be found in Ref. TSV99 . The method has been
applied to a non-microscopic description of ${}^{12}{\mathrm{C}}$ in Ref. KK05
.
To apply the ACCC method, one assumes that the Hamiltonian can be written as
$\displaystyle H(u)=H_{0}+u\,H_{1},$ (21)
where $u$ is a linear parameter. The linear part $H_{1}$ is supposed to be
attractive so that, for increasing $u$ values, the system becomes bound. For
$u=u_{0}$, the energy is zero, and we have $E(u_{0})=0$.
The problem is to determine the resonance properties for the physical value
$u<u_{0}$. In the bound-state regime $(u>u_{0})$, the wave number $k$ is
imaginary, and is parametrized by a Padé approximant as
$\displaystyle
k(x)=i\frac{c_{0}+c_{1}x+\cdots+c_{M}x^{M}}{1+d_{1}x+\cdots+d_{N}x^{N}},$ (22)
where $x=\sqrt{u-u_{0}}$, and ($M,N$) define the degree of the Padé
approximant. The ($M+N+1$) coefficients $c_{i}$ and $d_{j}$ are calculated in
the bound-state region by using $u_{i}$ values such that $E(u_{i})<0$. Going
to the physical $u$ value ($u<u_{0},x$ imaginary), one determines $k$ from
(22). The energy $E_{R}$ and width $\Gamma$ are obtained from
$\displaystyle E=\frac{\hbar^{2}k^{2}}{2m}=E_{R}-i\Gamma/2.$ (23)
The ACCC method is, in principle, a simple extension of the variational method.
However, it was pointed out KKH89 ; TSV99 that this method requires a high
accuracy in the numerical calculation. In particular, the $u_{0}$ value must
be determined with several digits. It is therefore not realistic for
microscopic three-body systems.
* •
The box method MCD80 can be used for narrow resonances. This method has been
applied essentially in atomic physics. The idea is to search for positive
eigenvalues of the Schrödinger equation (2) inside a box. Then, looking at the
eigenvalues as a function of the box size, a narrow resonance appears at a
stable energy (see, for example, Fig. 1 of Ref. MCD80 ). This method is
simpler than other approximate techniques and, therefore, permits the use of
large bases. The method has been recently extended to the determination of
resonance widths ZMZ09 . However, it requires the numerical calculation of the
first and second derivatives, which means that, in practice, many generator
coordinates must be used for a good accuracy of the resonance parameters.
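The mechanics of the ACCC extrapolation (22)-(23) can be illustrated with a toy model: the sketch below fits a [1/1] Padé approximant to synthetic "bound-state" data generated from a known rational $k(x)$, then continues it to imaginary $x$. The model and all numbers are invented, and units are chosen with $\hbar=m=1$:

```python
import numpy as np

# Illustrative [1/1] Pade model for the bound-state wave number,
# k(x) = i (c0 + c1 x) / (1 + d1 x) with x = sqrt(u - u0)  [cf. Eq. (22)].
c0_true, c1_true, d1_true = 0.05, 1.30, 0.40

def kappa(x):                       # kappa = -i k, real in the bound region
    return (c0_true + c1_true * x) / (1.0 + d1_true * x)

# "Bound-state" input data: several u_i > u0, i.e. real x_i.
xs = np.linspace(0.1, 1.0, 8)
ks = kappa(xs)

# Fit kappa (1 + d1 x) = c0 + c1 x by linear least squares in (c0, c1, d1).
A = np.column_stack([np.ones_like(xs), xs, -ks * xs])
c0, c1, d1 = np.linalg.lstsq(A, ks, rcond=None)[0]

# Analytic continuation to the resonance region: x -> i t (u < u0).
t = 0.6
k = 1j * (c0 + c1 * (1j * t)) / (1.0 + d1 * (1j * t))
E = k**2 / 2.0                      # hbar = m = 1; E = E_R - i Gamma/2
E_R, Gamma = E.real, -2.0 * E.imag
print(E_R, Gamma)
```

With exact rational input data the fit is trivial; the accuracy issue mentioned above arises because, in a real calculation, the $u_i$, $u_0$ and $k(x_i)$ carry numerical noise that the continuation amplifies.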
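The box method can likewise be illustrated on a one-dimensional toy problem: diagonalising a discretised Hamiltonian in boxes of increasing size, a bound-state eigenvalue is insensitive to the box, while positive-energy pseudostates move down as the box grows. The potential below is invented for illustration:

```python
import numpy as np

def spectrum(L, dx=0.05):
    """Eigenvalues of -(1/2) d^2/dx^2 + V on [0, L] with Dirichlet ends,
    for a toy well V(x) = -2 for x < 2, 0 otherwise (hbar = m = 1)."""
    n = int(round(L / dx)) - 1
    x = dx * np.arange(1, n + 1)
    V = np.where(x < 2.0, -2.0, 0.0)
    main = 1.0 / dx**2 + V                   # diagonal of the FD Hamiltonian
    off = -0.5 / dx**2 * np.ones(n - 1)      # off-diagonal kinetic term
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)

E10, E20 = spectrum(10.0), spectrum(20.0)

# The bound state is insensitive to the box size ...
assert E10[0] < 0 and abs(E10[0] - E20[0]) < 1e-8
# ... while the positive-energy "continuum" levels move down as L grows.
print(E10[0], E10[1], E20[1])
```

Tracking which eigenvalues stay put as $L$ (here the analogue of $R_{\rm max}$) increases is exactly the stabilization criterion used in Sects. 4 and 5.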
## 4 Application to ${}^{12}{\mathrm{C}}$
The ${}^{12}{\mathrm{C}}$ nucleus has attracted much attention in recent
years, in particular for the $0^{+}_{2}$ resonance, known as the Hoyle state
(see Ref. FF14 for a recent review). The Hoyle state, located just above the
$3\alpha$ threshold ($E_{R}=0.36$ MeV), is quite important in nuclear
astrophysics since its properties determine the triple-$\alpha$ reaction rate.
Its existence was predicted by Hoyle Ho54 on the basis of the observed
abundance of ${}^{12}{\mathrm{C}}$.
There is an impressive literature about the Hoyle state, and we refer to Refs.
FF14 ; ZFH20 for an overview. One of its characteristics is to present a
marked $\alpha+^{8}$Be cluster structure DB87b . This $\alpha$ clustering is
well established in many light nuclei, such as ${}^{7}$Li, ${}^{7,8,9,10}$Be,
${}^{16,17,18}$O, ${}^{18,19,20}$Ne FHI80 . The specificity of the ${}^{12}{\mathrm{C}}$ nucleus is
that the second cluster ${}^{8}$Be also presents an $\alpha$ cluster structure. As in
all excited states located near a breakup threshold, the Hoyle state presents
an extended density, which means that the density at short distances is
reduced compared to well-bound states. This natural property, common to all
nuclei, led some authors to refer to the concepts of a “dilute gas” Ka07 ;
Fu15 and of Bose-Einstein condensates THS01 ; ZFH20 .
Our aim here is not to perform new calculations on the ${}^{12}{\mathrm{C}}$
nucleus. The first microscopic $3\alpha$ calculation was performed by Uegaki
et al. in 1977 UOA77 , and improved in different ways Ka81 ; DB87b ; THS01 ;
CFN07 ; Ka07 ; SMO08 . The ab initio calculation of Ref. CFN07 works rather
well for the ground state, but needs the introduction of specific $3\alpha$
configurations to reproduce the Hoyle state. One of the frequent issues about
the Hoyle state is the determination of its r.m.s. radius, which is expected
to be large (see references in Ref. FF14 ).
We adopt the same microscopic $3\alpha$ model as in Ref. SMO08 , where the
Minnesota nucleon-nucleon interaction TLT77 with $u=0.9487$ was used. For the
generator coordinates, we use a large basis: 12 values from 1.5 to 18 fm with
a step of 1.5 fm, and 3 additional values at 20, 22 and 24 fm. This basis is
unusually large, but permits a reliable discussion concerning the properties
of resonances.
Figure 2: Energy curves (13) for $J=0^{+}$ in ${}^{12}{\mathrm{C}}$. The
curves are labeled by the $K$ values. The dashed line represents the
asymptotic behaviour (14) for $K=0$.
In Fig. 2, we display the energy curves (13) for $J=0^{+}$ and for different
$K$ values. Note that in ${}^{12}{\mathrm{C}}$, a full symmetrization of the
wave functions for $\alpha$ exchange leads to the cancellation of the $K=2$
component (see, for example, Ref. De10 ). The dashed line illustrates the
asymptotic behaviour (14) with $Z_{00}=28.81$ De10 . The energy curves for
$K=0,4,6$ present a minimum around $R=4$ fm. At short distances, the
antisymmetrization between the nucleons makes these curves equivalent. This is
not true at large distances, where the effective three-body Coulomb
interaction is different, and where the centrifugal term plays a role.
Figure 3 presents the binding energy (with respect to the $3\alpha$ threshold)
of the $0^{+}_{1}$ and $0^{+}_{2}$ states for increasing size of the basis. We
define $R_{\rm max}$ as the maximum $R$ value included in the basis. The upper
and lower panels present the energy and r.m.s. radius, respectively. As
expected for a bound state, the $0^{+}_{1}$ energy and r.m.s. radius converge
rapidly. With $\mbox{$R_{\rm max}$}\approx 6$ fm, a fair convergence is
reached. The corresponding radius $\sqrt{\langle r^{2}\rangle}=2.21$ fm is too small
compared to experiment (2.48 fm KPS17 ) but this difference is due to the
overbinding of the theoretical state.
Figure 3: Total energy (top) and r.m.s. radius (bottom) of the $0^{+}_{1}$ and
$0^{+}_{2}$ states in ${}^{12}{\mathrm{C}}$. Energies are given with respect
to the $3\alpha$ threshold.
More interesting results are obtained for the $0^{+}_{2}$ state. While the
energy remains almost constant above $\mbox{$R_{\rm max}$}\approx 15$ fm, the
r.m.s. radius increases significantly with larger bases. We go here up to
$\mbox{$R_{\rm max}$}=24$ fm, which represents a huge value, much larger than
in standard calculations (see, for example, Ref. SMO08 ). This is, however,
necessary to illustrate convergence issues. As we may expect from the unbound
nature of the $0^{+}_{2}$ state, the r.m.s. radius diverges. Any (large) value
can be obtained, provided that the basis is large enough. This explains why
calculations in the literature are quite different FF14 .
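The divergence of the r.m.s. radius of an unbound state can be mimicked with free-particle box states: the mean square "radius" of such a pseudostate scales as the square of the box size, so any value can be obtained by enlarging the basis. A minimal sketch (a toy model, not the $3\alpha$ calculation):

```python
import numpy as np

def mean_square_radius(L, n=1, npts=20001):
    """<x^2> of the n-th particle-in-a-box state sin(n pi x / L) on [0, L],
    a crude stand-in for a discretized continuum (pseudo)state."""
    x = np.linspace(0.0, L, npts)
    w = np.sin(n * np.pi * x / L) ** 2        # |psi|^2 (unnormalized)
    return float((x**2 * w).sum() / w.sum())  # endpoint weights vanish

# Doubling the spatial extension of the basis multiplies <x^2> by 4:
# the "radius" of an unbound state diverges with the basis size.
r2 = [mean_square_radius(L) for L in (10.0, 20.0, 40.0)]
print(r2)
```

This is the one-dimensional analogue of the behaviour in the lower panel of Fig. 3: the $0^{+}_{2}$ radius simply follows the extension $R_{\rm max}$ of the basis.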
The same quantities are plotted in Fig. 4 for $J=2^{+}$. The existence of a
$2^{+}_{2}$ resonance, second state of a band based on the Hoyle state, is
well established Ka81 ; DB87b , but this state should be broad. Even the
energy is not stable, and does not present a plateau with $R_{\rm max}$. The
corresponding r.m.s. radius strongly diverges. Notice that a microscopic
$3\alpha$ model does not predict any $0^{+}_{3}$ or $2^{+}_{3}$ resonance.
Let us briefly comment on non-microscopic descriptions of the $3\alpha$
system. Two $\alpha+\alpha$ potentials, the shallow Ali-Bodmer potential AB66
and the deep Buck potential BFW77 reproduce very well the experimental
$\alpha+\alpha$ phase shifts up to 20 MeV. When applied to $N\alpha$ systems
($N>2$), however, none of these potentials provides satisfactory results. With
the Ali-Bodmer potential, the ${}^{12}{\mathrm{C}}$ ground state is strongly
underbound TBD03 , and the $0^{+}_{2}$ resonance is far above the $3\alpha$
threshold, in contradiction with experiment. This problem may be partly
addressed by adding a phenomenological 3-body force, but this technique
introduces spurious $3\alpha$ resonances De10 .
The deep Buck potential raises the question of the two-body forbidden states
which should be removed in the $3\alpha$ model. This is usually done by using
a projection technique but, here also, several problems remain (see the
discussions in Refs. FMK04 ; De10 ). An efficient alternative would be to use
non-local $\alpha+\alpha$ potentials SMO08 , but the simplicity of non-
microscopic models is lost. A fully satisfactory description of the $3\alpha$
system within non-microscopic models remains an open issue.
Figure 4: See caption to Fig. 3 for $J=2^{+}$.
## 5 Application to ${}^{24}{\rm Mg}$
The ${}^{16}{\rm O}+\alpha+\alpha$ system is more complex than
${}^{12}{\mathrm{C}}$ since the level density is much higher. The ${}^{24}{\rm
Mg}$ nucleus was studied within an $\alpha+^{20}$Ne multicluster model in Ref.
DB89b , where ${}^{20}$Ne is described by an $\alpha+^{16}$O structure. In Ref. IIK11
the authors use a microscopic ${}^{16}{\rm O}+\alpha+\alpha$ model to search
for $0^{+}$ resonances in a stochastic approach. The basis, however, is more
limited since a fixed geometry is adopted. The present work contains a more
extended basis, owing to the use of the hyperspherical coordinates.
The oscillator parameter is chosen as $b=1.65$ fm, which represents a
compromise between the optimal values of $\alpha$ and of ${}^{16}$O. We use the
Volkov force V2 Vo65 with a Majorana exchange parameter $M=0.624$. This value
reproduces the binding energy of the ground state with respect to the
${}^{16}{\rm O}+\alpha+\alpha$ threshold ($-14.05$ MeV).
The calculations of the matrix elements are much longer than for
${}^{12}{\mathrm{C}}$. Since most of the computer time is devoted to the
quadruple sums involved in the two-body interaction, the ratio between the
computer times is approximately given by $6^{4}/3^{4}=16$, since ${}^{24}{\rm
Mg}$ involves 6 orbitals ($s$ and $p$), whereas ${}^{12}{\mathrm{C}}$ involves
3 orbitals ($s$ only).
As for ${}^{12}{\mathrm{C}}$, we use a large basis with 10 $R$-values from 1.2
fm to 12 fm, complemented by larger values $R=13.5,15,17,19$ fm. In Fig. 5, we
present the binding energies and r.m.s. radii of $J=0^{+}$ states as a
function of $R_{\rm max}$, the maximum $R$ value included in the basis.
Figure 5: Energies with respect to the ${}^{16}{\rm O}+\alpha+\alpha$
threshold (top) and r.m.s. radii (bottom) of $J=0^{+}$ states in ${}^{24}{\rm
Mg}$ for different sizes of the basis. The color schemes are identical in both
panels. In the upper panel, the right side shows experimental energies. In the
lower panel, the r.m.s. radii for the third and fourth eigenvalues are almost
superimposed.
The upper panel shows that four states are bound with respect to the
${}^{16}{\rm O}+\alpha+\alpha$ decay, two of which are above the
$\alpha+^{20}$Ne threshold ($-4.7$ MeV), and are therefore not rigorously
bound. This is confirmed by the r.m.s. radii (lower panel). The two lowest
states converge rapidly to $\langle r^{2}\rangle\approx 8$ fm$^{2}$, whereas the $0^{+}_{3}$ and
$0^{+}_{4}$ radii present a slower convergence and a larger radius
$\langle r^{2}\rangle\approx 10$ fm$^{2}$. The radii corresponding to the four negative-energy
curves are linked by solid lines.
The right side of the upper panel shows the experimental $0^{+}$ states. The
ground-state energy is adjusted by the nucleon-nucleon interaction. The
$0^{+}_{2}$ energy is in fair agreement with experiment. In the high-energy
part of the spectrum, several $0^{+}$ experimental states are present.
Figure 5 suggests two important properties. Around $E\approx 5$ MeV, there is
a plateau in the energy curves, and the corresponding radii are rather
insensitive to the size of the basis. This is consistent with a high-energy
($E_{x}\approx 19$ MeV) resonance in the ${}^{24}{\rm Mg}$ spectrum. As
$R_{\rm max}$ increases, the label of this eigenvalue varies. The radii
corresponding to the plateau therefore show up as individual points around
$\langle r^{2}\rangle\approx 7.5$ fm$^{2}$. The dip near $\mbox{$R_{\rm max}$}=5$ fm is due to a
crossing between the energy curves. The second observation concerns the radii.
The lower panel of Fig. 5 shows a clear distinction between physical states
and approximations of the continuum. The former present a stable radius,
whereas the latter are characterized by diverging radii. A careful study of
the radii, and in particular of their stability against the extension of the
basis, is therefore an efficient way to make a distinction between physical
states and pseudostates.
Figure 6 displays the energy curves for $J=2^{+}$. The level density is still
higher than for $J=0^{+}$ and it is difficult to make a clear link between
theory and experiment. The $2^{+}_{1}$ and $2^{+}_{3}$ energies are in fair
agreement, but the GCM $2^{+}_{2}$ energy is too low by about 2 MeV. This is
probably due to the lack of a spin-orbit force in a ${}^{16}{\rm
O}+\alpha+\alpha$ model. The model predicts seven states below the
${}^{16}{\rm O}+\alpha+\alpha$ threshold. The r.m.s. radii are not presented
as they qualitatively follow those of $J=0^{+}$. The converged radii for the
$2^{+}_{1}$ and $2^{+}_{2}$ states are around $\langle r^{2}\rangle\approx 8.2$ fm$^{2}$, which
is close to the radius of the ground state.
Figure 6: See caption to Fig. 5 for $J=2^{+}$. Only energies are displayed.
## 6 Conclusion
The goal of the present work is to illustrate the calculation of resonance
properties in cluster models, and more especially in multicluster models. A
rigorous treatment of the resonances would require a scattering theory, with
exact boundary conditions. If this approach is fairly simple in two-cluster
models, it raises strong difficulties when more than two clusters are
involved. The bound-state approximation is commonly used owing to its
simplicity. We have shown, however, that positive-energy eigenvalues should be
treated carefully, and that, even for narrow resonances, the wave function may
be sensitive to the basis. A direct consequence is that some properties, such
as the r.m.s. radius, are unstable. The stability against the basis should be
assessed.
Several methods exist to complement the bound-state approximation, such as the
CSM or the ACCC. They make it possible to determine the energy and width of a
resonance. In practice, however, they are difficult to apply to microscopic
calculations since they usually need very large bases. We have used a variant
of the box method, where the number of basis functions is progressively increased. We
have shown that a stability of the energy can be obtained, but the
corresponding r.m.s. radii are unstable. The application to the Hoyle state in
${}^{12}{\mathrm{C}}$ is an excellent example which explains the variety of
the values in the literature.
## Acknowledgments
This work was supported by the Fonds de la Recherche Scientifique - FNRS under
Grant Numbers 4.45.10.08 and J.0049.19. It benefited from computational
resources made available on the Tier-1 supercomputer of the Fédération
Wallonie-Bruxelles, infrastructure funded by the Walloon Region under the
grant agreement No. 1117545.
## References
* (1) M. Freer, H. Horiuchi, Y. Kanada-En’yo, D. Lee, U.G. Meißner, Rev. Mod. Phys. 90, 035004 (2018)
* (2) H. Horiuchi, K. Ikeda, K. Katō, Prog. Theor. Phys. Suppl. 192, 1 (2012)
* (3) P. Descouvemont, M. Dufour, _Clusters in Nuclei_ , Vol. 2 (Springer, 2012)
* (4) D. Brink, in Proceedings of the International School of Physics “Enrico Fermi”, Course XXXVI, Varenna, 1965 (Academic Press, New York, 1966), p. 247
* (5) Y. Fujiwara, H. Horiuchi, K. Ikeda, M. Kamimura, K. Katō, Y. Suzuki, E. Uegaki, Prog. Theor. Phys. Suppl. 68, 29 (1980)
* (6) K. Ikeda, N. Takigawa, H. Horiuchi, Prog. Theo. Phys. Suppl., Extra Number p. 464 (1968)
* (7) A. Tohsaki, H. Horiuchi, P. Schuck, G. Röpke, Phys. Rev. Lett. 87, 192501 (2001)
* (8) B. Zhou, Y. Funaki, H. Horiuchi, A. Tohsaki, Frontiers of Physics 15 (2020)
* (9) T. Neff, H. Feldmeier, Eur. Phys. J. Special Topics 156, 69 (2008)
* (10) P. Navrátil, S. Quaglioni, I. Stetcu, B.R. Barrett, J. Phys. G 36, 083101 (2009)
* (11) H. Hergert, Frontiers in Physics 8, 379 (2020)
* (12) K. Wildermuth, Y.C. Tang, _A Unified Theory of the Nucleus_ (Vieweg, Braunschweig, 1977)
* (13) H. Horiuchi, Prog. Theor. Phys. Suppl. 62, 90 (1977)
* (14) M. Freer, H. Fynbo, Prog. Part. Nucl. Phys. 78, 1 (2014)
* (15) P. Navrátil, G.P. Kamuntavičius, B.R. Barrett, Phys. Rev. C 61, 044001 (2000)
* (16) R.B. Wiringa, V.G.J. Stoks, R. Schiavilla, Phys. Rev. C 51, 38 (1995)
* (17) R. Machleidt, Phys. Rev. C 63, 024001 (2001)
* (18) D.R. Thompson, M. LeMere, Y.C. Tang, Nucl. Phys. A 286, 53 (1977)
* (19) A.B. Volkov, Nucl. Phys. 74, 33 (1965)
* (20) S. Korennov, P. Descouvemont, Nucl. Phys. A 740, 249 (2004)
* (21) M.V. Zhukov, B.V. Danilin, D.V. Fedorov, J.M. Bang, I.J. Thompson, J.S. Vaagen, Phys. Rep. 231, 151 (1993)
* (22) A. Kievsky, S. Rosati, M. Viviani, L.E. Marcucci, L. Girlanda, J. Phys. G 35, 063101 (2008)
* (23) J. Raynal, J. Revai, Nuovo Cim. A 39, 612 (1970)
* (24) P. Descouvemont, Phys. Rev. C 99, 064308 (2019)
* (25) D. Baye, M. Kruglanski, Phys. Rev. C 45, 1321 (1992)
* (26) P. Descouvemont, J. Phys. G 37, 064010 (2010)
* (27) P. Descouvemont, E.M. Tursunov, D. Baye, Nucl. Phys. A 765, 370 (2006)
* (28) A. Damman, P. Descouvemont, Phys. Rev. C 80, 044310 (2009)
* (29) P. Descouvemont, D. Baye, Rep. Prog. Phys. 73, 036301 (2010)
* (30) Y.K. Ho, Phys. Rep. 99, 1 (1983)
* (31) J. Aguilar, J.M. Combes, Commun. Math. Phys. 22, 269 (1971)
* (32) S. Aoyama, T. Myo, K. Katō, K. Ikeda, Prog. Theor. Phys. 116, 1 (2006)
* (33) R. Suzuki, T. Myo, K. Katō, Prog. Theor. Phys. 113, 1273 (2005)
* (34) R. Suzuki, A.T. Kruppa, B.G. Giraud, K. Katō, Prog. Theor. Phys. 119, 949 (2008)
* (35) U.V. Riss, H.D. Meyer, J. Phys. B 26, 4503 (1993)
* (36) Y. Takenaka, R. Otani, M. Iwasaki, K. Mimura, M. Ito, Prog. Theor. Exp. Phys. 2014 (2014)
* (37) M. Ito, K. Yabana, Prog. Theor. Phys. 113, 1047 (2005)
* (38) V.I. Kukulin, V.M. Krasnopol’sky, J. Horǎćek, _Theory of Resonances, Principles and Applications_ (Kluwer Academic, 1989)
* (39) N. Tanaka, Y. Suzuki, K. Varga, R.G. Lovas, Phys. Rev. C 59, 1391 (1999)
* (40) C. Kurokawa, K. Katō, Phys. Rev. C 71, 021301 (2005)
* (41) C.H. Maier, L.S. Cederbaum, W. Domcke, J. Phys. B 13, L119 (1980)
* (42) S.G. Zhou, J. Meng, E.G. Zhao, J. Phys. B 42, 245001 (2009)
* (43) F. Hoyle, Astrophys. J. Suppl. 1, 121 (1954)
* (44) P. Descouvemont, D. Baye, Phys. Rev. C 36, 54 (1987)
* (45) Y. Kanada-En’yo, Prog. Theor. Phys. 117, 655 (2007)
* (46) Y. Funaki, Phys. Rev. C 92, 021302 (2015)
* (47) E. Uegaki, S. Okabe, Y. Abe, H. Tanaka, Prog. Theor. Phys. 57, 1262 (1977)
* (48) M. Kamimura, Nucl. Phys. A 351, 456 (1981)
* (49) M. Chernykh, H. Feldmeier, T. Neff, P. von Neumann-Cosel, A. Richter, Phys. Rev. Lett. 98, 032501 (2007)
* (50) Y. Suzuki, H. Matsumura, M. Orabi, Y. Fujiwara, P. Descouvemont, M. Theeten, D. Baye, Phys. Lett. B 659, 160 (2008)
* (51) J.H. Kelley, J.E. Purcell, C.G. Sheu, Nucl. Phys. A 968, 71 (2017)
* (52) S. Ali, A.R. Bodmer, Nucl. Phys. 80, 99 (1966)
* (53) B. Buck, H. Friedrich, C. Wheatley, Nucl. Phys. A 275, 246 (1977)
* (54) E.M. Tursunov, D. Baye, P. Descouvemont, Nucl. Phys. A 723, 365 (2003)
* (55) Y. Fujiwara, K. Miyagawa, M. Kohno, Y. Suzuki, D. Baye, J.M. Sparenberg, Phys. Rev. C 70, 024002 (2004)
* (56) P. Descouvemont, D. Baye, Phys. Lett. B 228, 6 (1989)
* (57) T. Ichikawa, N. Itagaki, T. Kawabata, T. Kokalova, W. von Oertzen, Phys. Rev. C 83, 061301 (2011)
arXiv:2101.00956
# Computable Random Variables and Conditioning
Pieter Collins
Department of Data Science and Knowledge Engineering
Maastricht University
[email protected]
(22 December 2020)
###### Abstract
The aim of this paper is to present an elementary computable theory of random
variables, based on the approach to probability via valuations. The theory is
based on a type of lower-measurable sets, which are controlled limits of open
sets, and extends existing work in this area by providing a computable theory
of conditional random variables. It is developed within the framework of
type-two effectivity, so has an explicit direct link with Turing computation,
and is expressed in a system of computable types and operations, so has a
clean mathematical description.
## 1 Introduction
In this paper, we present a computable theory of probability and random
variables. The theory is powerful enough to provide a theoretical foundation
for the rigorous numerical analysis of discrete-time continuous-state Markov
chains and stochastic differential equations [Col14]. We provide an exposition
of the approach to probability distributions using valuations and the
development of integrals of positive lower-semicontinuous and of bounded
continuous functions, and of the approach to random variables as limits of
almost-everywhere defined continuous partial functions.
We use the framework of _type-two effectivity (TTE)_ [Wei99], in which
computations are performed by Turing machines working on infinite sequences,
as a foundational theory of computability. We believe that this framework is
conceptually simpler for non-specialists than the alternative of using a
domain-theoretic framework. Since in TTE we work entirely in the class of
quotients of countably-based (QCB) spaces, which form a cartesian closed
category, many of the basic operations can be carried out using simple type-
theoretic constructions such as the $\lambda$-calculus.
In this paper, we deal with computability theory, rather than constructive
mathematics. In practice, this means that we allow recursive constructions,
and so accept the axiom of dependent (countable) choice, but since not all
operations are decidable, we do not accept the law of the excluded middle.
However, proofs of correctness of computable operators may use non-computable
functions and proof by contradiction.
We assume that the reader has a basic familiarity with classical probability
theory (see e.g. [Shi95, Pol02]). Much of this article is concerned with giving
computational meaning to classical concepts and arguments. The main difficulty
lies in the use of $\sigma$-algebras in classical probability, which have poor
computability properties due to the presence of countable unions and
complementation. Instead, we use only topological constructions, which can
usually be effectivised directly. We can compute lower bounds (but not upper
bounds) to the measure of open sets, and extend results to measurable sets
using completion constructions. Similarly we define types of measurable and
integral functions as completions of types of (piecewise) continuous
functions.
In Section 2, we briefly introduce the foundations of computable analysis. In
Section 3, we describe the approach to probability theory using valuations.
The main results are in Section 5, in which we give a complete theory of
random variables in separable metric spaces. We begin by constructing types of
measurable sets and measurable functions on a given base probability space
$(\Omega,P)$ using completion operators, similarly to existing approaches in
the literature. We show that the distribution of a random variable is
computable, and conversely, that for any valuation we can construct a
realisation by a random variable, similarly to results of [SS06b, HR09]. We
then show the trivial result that the product of two random variables is
computable, and the classical result [MW43, BC72] that the image of a random
variable under a continuous function is computable. We define the expectation
of a random variable, and types of integrable random variables in the standard
way. Finally, we discuss conditioning of random variables, and show how a
random variable can be computed from its conditional expectation.
### Comparison with other approaches
An early, fairly complete constructive theory of measure based on the Daniell
integral was developed in [BC72] and presented in [BB85] and [Cha74].
The theory is developed using abstract _integration spaces_ , which are
triples $(X,L,I)$ where $X$ is a space, $L$ a subset of test functions
$X\rightarrow\mathbb{R}$ and $I:L\rightarrow\mathbb{R}$ satisfying properties
of an integral. The integral is extended from test functions to integrable
functions by taking limits. Measurable functions are those which can be
uniformly approximated by integrable functions on large-measure sets. It is
shown that the image of a measurable function under a continuous function is
measurable, an analogue of our Theorem 49. Measurable sets are defined via
_complemented sets_ , which are pairs of sets $(A_{1},A_{0})$ such that
$A_{0}\cap A_{1}=\emptyset$, and are measurable if $A_{0}\cup A_{1}$ is a full
set. Abstract measure spaces are defined in terms of measurable sets, and
shown to be equivalent to integration spaces.
A standard approach to a constructive theory of probability measures, as
developed in [JP89, Eda95a, SS06a, Esc09], is through _valuations_ , which are
essentially measures restricted to open sets. Explicit representations of
valuations within the framework of type-two effectivity were given in [Sch07].
Valuations satisfy the _modularity_ property
$\nu(U_{1})+\nu(U_{2})=\nu(U_{1}\cup U_{2})+\nu(U_{1}\cap U_{2})$, and
(monotonic) continuity $\nu(U_{\infty})=\lim_{n\to\infty}\nu(U_{n})$ whenever
$U_{n}$ is an increasing sequence of open sets with
$U_{\infty}=\bigcup_{n\in\mathbb{N}}U_{n}$. Relationships between valuations
and Borel measures were given in [Eda95a] and extended in [AM02] and [GL05].
The most straightforward approach to integration is the _Choquet_ or
_horizontal_ integral, a lower integral introduced within the framework of
domain theory in [Tix95]; see also [Kön97, Law04]. The lower integral on
valuations in the form we use was given in [Vic08]. The monadic properties of
the lower integral on valuations, which has type
$(\mathbb{X}\rightarrow\mathbb{R}^{+}_{<})\rightarrow\mathbb{R}^{+}_{<}$ were
noted by [Vic11]. A similar monadic approach to probability measures was used
in [Esc09] to develop a language EPCL for nondeterministic and probabilistic
computation. Here, the type of probability measures on the Cantor space
$\Omega=\\{0,1\\}^{\omega}$ was identified with the type of integrals
$(\Omega\rightarrow\mathbb{I})\rightarrow\mathbb{I}$ where $\mathbb{I}=[0,1]$
is the unit interval.
An alternative to the use of valuations is that of [CP02]. The exposition is
given in terms of general _boolean rings_ , but in the language of sets, a
measure $\mu$ is given satisfying the modularity condition, and extended by
the completion under the metric $d(S_{1},S_{2})=\mu(S_{1}\triangle S_{2})$
where $S_{1}\triangle S_{2}=(S_{1}\setminus S_{2})\cup(S_{2}\setminus S_{1})$.
In [WD05, WD06] a concept of _computable measure space_ with a concrete
representation was given using a ring of subsets $R$ generating the Borel
$\sigma$-algebra. A disadvantage of this approach is that the elements of $R$
must have a computable measure, introducing an undesirable dependency between
the measure and the “basic” sets which is not present in the approach using
valuations.
In the approach presented here, we start with the use of valuations, since
these are intrinsic given a base type $\mathbb{X}$. For a fixed valuation, we
can extend to a class of _lower-measurable sets_ , and also give a definition
of measurable set using a completion on complemented open sets (equivalently,
on topologically regular sets). We do not use integration spaces, since we
feel that the concept of measure is more fundamental than that of integral.
In the approach of [Spi06], integrable functions are defined as limits of
simple functions with respect to the measurable sets of [CP02]. Measurable
functions are defined as limits of effectively-converging Cauchy sequences with
respect to the pseudometric $d_{h}(f,g):=\int|f-g|\wedge h$ for positive
integrable $h$ and integrable $f,g$. This work was generalised to Riesz spaces
in [CS09].
Random variables over discrete domains were defined in [Mis07], based on work
of [Var02], and extended to continuous domains in [GLV11]. A continuous random
variable in $\mathbb{X}$ was defined as a pair $(\nu,f)$ where $\nu$ is a
continuous valuation on $\Omega=\\{0,1\\}^{\omega}$, and $f$ is a continuous
map from $\mathrm{supp}(\nu)$ to $\mathbb{X}$, where $\mathrm{supp}(\nu)$ is
the smallest closed set $A$ such that $\nu(A)=1$. A difficulty with this
construction is that different random variables require different valuations
on the base space $\Omega$, which makes computation of joint distributions
problematic.
In this paper, we define measurable functions as those for which the preimage
of an open set is a lower-measurable set, and show that they satisfy the
natural properties.
This mimics the standard property that the preimage of an open set under a
measurable function is (Borel) measurable. Since measurable functions are in
general uncomputable, we do not even attempt to define the “image of a point”.
A similar approach to [Spi06] is also possible, defining random variables
(measurable functions) directly by completion with respect to the Fan metric
$d(X,Y)=\sup\\!\big{\\{}\varepsilon\in\mathbb{Q}^{+}\mid\
P\big{(}\\{\omega\in\Omega\mid
d(X(\omega),Y(\omega))>\varepsilon\\}\big{)}>\varepsilon\big{\\}}.$ The
resulting theory is essentially equivalent to that of [BC72], but developed in
reverse. The resulting representation is equivalent to that using lower-
measurable sets.
In [SS06b], an alternative representation of valuations and measures on
$\mathbb{X}$ was developed by defining a valuation $\pi$ on (a subset of) the
sequence space $\\{0,1\\}^{\omega}$, and pushing-forward by the representation
$\delta$ of $\mathbb{X}$, yielding $\nu(U)=\pi(\delta^{-1}(U))$. This
representation of valuations is similar to the valuation induced by our random
variables, except that our random variables are obtained by taking limits, so
we need to prove separately that the valuation induced by a random variable is
computable.
It was further shown in [SS06b] that such alternative representations of
valuations always exist on sufficiently nice spaces, which can be seen as a
realisation result for valuations, where the representation $\delta$ of the
space $\mathbb{X}$ is a random variable. In [HR09], a theory of probability
was developed for the study of algorithmic randomness, and a similar
representation result for valuations was given, here allowing both the base-
space measure $\pi$ and point-representation $\delta$ to be given. In this
paper, we also show that valuations have concrete realisations by random
variables, but our result constructs random variables relative to the uniform
probability measure on the base space $\\{0,1\\}^{\omega}$, and a Cauchy
sequence of (continuous) functions rather than a single function on a
$G_{\delta}$-set.
The problem of finding conditional expectation, which classically uses the
Radon-Nikodym derivative, was shown to be uncomputable by [HRW11]. This means
that computably, there is a difference between a random variable, and a
“conditional random variable”. Here we show that given a conditional random
variable $Y|\mathcal{X}$, and an $\mathcal{X}$-measurable random variable $X$,
we can effectively compute $Y$. This result is important for stochastic
processes, in which we typically can compute the distribution of $X_{t+1}$
given $X_{t}=x_{t}$.
## 2 Computable Analysis
In the theory of type-two effectivity, computations are performed by Turing
machines acting on _sequences_ over some alphabet $\Sigma$. A computation
performed by a machine $\mathcal{M}$ is _valid_ on an input
$p\in\Sigma^{\omega}$ if the computation does not halt, and writes infinitely
many symbols to the output tape. A type-two Turing machine therefore performs
a computation of a partial function
$\eta:\Sigma^{\omega}\rightharpoonup\Sigma^{\omega}$; we may also consider
multi-tape machines computing
$\eta:(\Sigma^{\omega})^{n}\rightharpoonup(\Sigma^{\omega})^{m}$. It is
straightforward to show that any machine-computable function
$\Sigma^{\omega}\rightharpoonup\Sigma^{\omega}$ is continuous on its domain.
In order to relate Turing computation to functions on mathematical objects, we
use _representations_ of the underlying sets, which are partial surjective
functions $\delta:\Sigma^{\omega}\rightharpoonup\mathbb{X}$. An operation
$f:\mathbb{X}\rightarrow\mathbb{Y}$ is
$(\delta_{\mathbb{X}};\delta_{\mathbb{Y}})$-computable if there is a machine-
computable function $\eta:\Sigma^{\omega}\rightharpoonup\Sigma^{\omega}$ with
$\mathrm{dom}(\eta)\supset\mathrm{dom}(\delta_{\mathbb{X}})$ such that
$\delta_{\mathbb{Y}}\circ\eta=f\circ\delta_{\mathbb{X}}$ on
$\mathrm{dom}(\delta_{\mathbb{X}})$. Representations are equivalent if they
induce the same computable functions. A _computable type_ is a pair
$(\mathbb{X},[\delta])$ where $\mathbb{X}$ is a space and $[\delta]$ is an
equivalence class of representations of $\mathbb{X}$.
If $X$ is a topological space, we say that a representation $\delta$ of $X$ is
an _admissible quotient representation_ if (i) whenever
$f:\mathbb{X}\rightarrow\mathbb{Y}$ is such that $f\circ\delta$ is continuous,
then $f$ is continuous, and (ii) whenever
$\phi:\Sigma^{\omega}\rightharpoonup\mathbb{X}$ is continuous, there exists
continuous $\eta:\Sigma^{\omega}\rightharpoonup\Sigma^{\omega}$ such that
$\phi=\delta\circ\eta$. Any space with a quotient representation is a quotient
of a subset of the countably-based space $\Sigma^{\omega}$, and is a
_sequential space_. (A topological space is a _sequential space_ if every
subset $W$ with the property that whenever $x_{n}\to x_{\infty}$ with
$x_{\infty}\in W$, then $x_{n}\in W$ for all sufficiently large $n$, is an
open set.)
A function $f:\mathbb{X}\rightarrow\mathbb{Y}$ is _computable_ if there is a
machine-computable function
$\eta:\Sigma^{\omega}\rightharpoonup\Sigma^{\omega}$ with
$\mathrm{dom}(\eta)\supset\mathrm{dom}(\delta_{\mathbb{X}})$ such that
$\delta_{\mathbb{Y}}\circ\eta=f\circ\delta_{\mathbb{X}}$ on
$\mathrm{dom}(\delta_{\mathbb{X}})$. A multivalued function
$F:\mathbb{X}\rightrightarrows\mathbb{Y}$ is _computably selectable_ if there
is a machine-computable function
$\eta:\Sigma^{\omega}\rightharpoonup\Sigma^{\omega}$ with
$\mathrm{dom}(\eta)\supset\mathrm{dom}(\delta_{\mathbb{X}})$ such that
$\delta_{\mathbb{Y}}\circ\eta\in F\circ\delta_{\mathbb{X}}$ on
$\mathrm{dom}(\delta_{\mathbb{X}})$; note that different names of
$x\in\mathbb{X}$ may give rise to different values of $y\in\mathbb{Y}$.
The category of computable types with (sequentially) continuous functions is
Cartesian closed, and the computable functions yield a Cartesian closed
subcategory. For any types $\mathbb{X}$, $\mathbb{Y}$ there exist a canonical
product type $\mathbb{X}\times\mathbb{Y}$ with computable projections
$\pi_{\mathbb{X}}:\mathbb{X}\times\mathbb{Y}\rightarrow\mathbb{X}$ and
$\pi_{\mathbb{Y}}:\mathbb{X}\times\mathbb{Y}\rightarrow\mathbb{Y}$, and a
canonical exponential type $\mathbb{Y}^{\mathbb{X}}$ such that evaluation
$\epsilon:\mathbb{Y}^{\mathbb{X}}\times\mathbb{X}\rightarrow\mathbb{Y}:(f,x)\mapsto
f(x)$ is computable. Since objects of the exponential type are continuous
functions from $\mathbb{X}$ to $\mathbb{Y}$, we also denote
$\mathbb{Y}^{\mathbb{X}}$ by $\mathbb{X}\rightarrow\mathbb{Y}$ or
$\mathcal{C}(\mathbb{X};\mathbb{Y})$; in particular, whenever we write
$f:\mathbb{X}\rightarrow\mathbb{Y}$, we imply that $f$ is continuous. There is
a canonical equivalence between
$(\mathbb{X}\times\mathbb{Y})\rightarrow\mathbb{Z}$ and
$\mathbb{X}\rightarrow(\mathbb{Y}\rightarrow\mathbb{Z})$ given by
$\tilde{f}(x):\mathbb{Y}\rightarrow\mathbb{Z}:\tilde{f}(x)(y)=f(x,y)$.
There are canonical types representing basic building blocks of mathematics,
including the natural number type $\mathbb{N}$ and the real number type
$\mathbb{R}$. We use a three-valued logical type with elements
$\\{{\sf{F}},{\sf{T}},\bot\\}$ representing _false_ , _true_ , and
_indeterminate_ or _unknowable_ , and its subtypes the _Boolean_ type
$\mathbb{B}$ with elements $\\{{\sf{F}},{\sf{T}}\\}$ and the _Sierpinski_ type
$\mathbb{S}$ with elements $\\{{\sf{T}},\bot\\}$. Given any type $\mathbb{X}$,
we can identify the type $\mathcal{O}(\mathbb{X})$ of open subsets $U$ of
$\mathbb{X}$ with $\mathbb{X}\rightarrow\mathbb{S}$ via the characteristic
function $\chi_{U}$. Further, standard operations on these types, such as
arithmetic on real numbers, are computable.
A sequence $(x_{n})$ is an _effective Cauchy sequence_ if
$d(x_{m},x_{n})<\epsilon_{\min(m,n)}$ where $(\epsilon_{n})_{n\in\mathbb{N}}$
is a known computable sequence with $\lim_{n\to\infty}\epsilon_{n}=0$, and a
_strong Cauchy sequence_ if $\epsilon_{n}=2^{-n}$. The limit of an effective
Cauchy sequence of real numbers is computable.
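As an illustrative sketch (our own example, not from the text; it assumes the convention $d(x_{m},x_{n})<\epsilon_{\min(m,n)}$): for a strong Cauchy sequence of rationals, the $k$-th term is already within $2^{-k}$ of the limit, so computing the limit to any requested precision amounts to reading off a term.

```python
from fractions import Fraction

def cauchy_limit(seq, k):
    """Approximate the limit of a strong Cauchy sequence to error <= 2^-k.

    seq maps n to a Fraction with |seq(m) - seq(n)| < 2^-min(m, n),
    so seq(k) is already within 2^-k of the limit.
    """
    return seq(k)

# Hypothetical example: partial sums of sum_i 2^-(i+2), converging to 1/2.
def x(n):
    return sum(Fraction(1, 2**(i + 2)) for i in range(n + 1))

approx = cauchy_limit(x, 10)
assert abs(approx - Fraction(1, 2)) <= Fraction(1, 2**10)
```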
We shall also need the type $\mathbb{H}\equiv\mathbb{R}^{+,\infty}_{<}$ of
positive real numbers with infinity under the lower topology. The topology on
the lower half-line $\mathbb{H}$ is the topology of lower convergence, with open
sets $(a,\infty]$ for $a\in\mathbb{R}^{+}$ and $\mathbb{H}$ itself. A
representation of $\mathbb{H}$ then encodes an increasing sequence of positive
rationals with the desired limit. We note that the operators $+$ and $\times$
are computable on $\mathbb{H}$, where we define $0\times\infty=\infty\times
0=0$, as is countable supremum
$\sup:\mathbb{H}^{\omega}\rightarrow\mathbb{H}$,
$(x_{0},x_{1},x_{2},\ldots)\mapsto\sup\\{x_{0},x_{1},x_{2},\ldots\\}$.
Further, $\mathrm{abs}:\mathbb{R}\rightarrow\mathbb{H}$ is computable, as is
the embedding $\mathbb{S}\hookrightarrow\mathbb{H}$ taking ${\sf{T}}\mapsto 1$
and $\bot\mapsto 0$. We let $\mathbb{I}_{<}$ be the unit interval $[0,1]$,
again with the topology of lower convergence with open sets $(a,1]$ for
$a\in[0,1)$ and $\mathbb{I}$ itself, and $\mathbb{I}_{>}$ the interval with
the topology of upper convergence.
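The following sketch (the encoding and names are our own assumptions) realises this representation: a lower real is a function $n\mapsto q_{n}$ giving an increasing sequence of rational lower bounds, and $+$, $\times$ and countable supremum act termwise or diagonally, producing again increasing lower bounds.

```python
from fractions import Fraction

# A lower real in H is represented as a function n -> q_n, an increasing
# sequence of rational lower bounds converging to the value from below.

def h_add(x, y):
    return lambda n: x(n) + y(n)          # termwise sum of lower bounds

def h_mul(x, y):
    # termwise product; assumes the bounds are taken nonnegative, which is
    # always possible for elements of H
    return lambda n: x(n) * y(n)

def h_sup(xs):
    """Countable supremum: xs maps i to a lower real; the n-th bound is the
    best bound seen among the first n+1 sequences at stage n."""
    return lambda n: max(xs(i)(n) for i in range(n + 1))

# Hypothetical examples: lower reals converging to 1 and 2.
one = lambda n: Fraction(1) - Fraction(1, n + 2)
two = lambda n: Fraction(2) - Fraction(1, n + 2)
s = h_sup(lambda i: one if i % 2 == 0 else two)
assert Fraction(29, 10) < h_add(one, two)(100) < 3
assert Fraction(19, 10) < s(100) <= 2
```

Since bounds only ever increase, equality of lower reals is not decidable; only strict lower bounds are ever certified, which matches the lower topology.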
A _computable metric space_ is a pair $(\mathbb{X},d)$ where $\mathbb{X}$ is a
computable type, and $d:\mathbb{X}\times\mathbb{X}\rightarrow\mathbb{R}^{+}$
is a computable metric, such that the extension of $d$ to
$\mathbb{X}\times\mathcal{A}(\mathbb{X})$ (where $\mathcal{A}(\mathbb{X})$ is
the type of closed subsets of $\mathbb{X}$) defined by
$d(x,A)=\inf\\{d(x,y)\mid y\in A\\}$ is computable as a function into
$\mathbb{R}^{+,\infty}_{<}$. This implies that given an open set $U$ we can
compute $\epsilon>0$ such that $B_{\epsilon}(x)\subset U$, which captures the
relationship between the metric and the open sets. The effective metric spaces
of [Wei99] are a concrete class of computable metric space.
A type $\mathbb{X}$ is _effectively separable_ if there is a computable
function $\xi:\mathbb{N}\rightarrow\mathbb{X}$ such that $\mathrm{rng}(\xi)$
is dense in $\mathbb{X}$.
Throughout this paper we shall use the term “compute” to indicate that a
formula or procedure can be effectively carried out in the framework of type-
two effectivity. Other definitions and equations may not be possible to verify
constructively, but hold from axiomatic considerations.
## 3 Valuations
The main difficulty with classical measure theory is that Borel sets and Borel
measures have very poor computability properties. Although a computable theory
of Borel sets was given in [Bra05], the measure of a Borel set is in general
not computable in $\mathbb{R}$. However, we can consider an approach to
measure theory in which we may only compute the measure of _open_ sets. Since
open sets are precisely those which can be approximated from inside, we expect
to be able to compute lower bounds for the measure of an open set, but not
upper bounds. The above considerations suggest an approach which has become
standard in computable measure theory, namely that using _valuations_ [JP89,
Eda95a, SS06a, Esc09].
###### Definition 1 (Valuation).
The type of _valuations_ on $\mathbb{X}$ is the subtype
$\mathcal{O}(\mathbb{X})\rightarrow\mathbb{H}$ consisting of elements $\nu$
satisfying $\nu(\emptyset)=0$ and the _modularity_ condition
$\nu(U)+\nu(V)=\nu(U\cup V)+\nu(U\cap V)$ for all
$U,V\in\mathcal{O}(\mathbb{X})$.
Note that since our valuations are elements of
$\mathcal{O}(\mathbb{X})\rightarrow\mathbb{H}$, any $\nu$ satisfies the
_monotonicity_ condition $\nu(U)\leq\nu(V)$ whenever $U\subset V$, and the
_continuity_ condition
$\nu\bigl{(}\bigcup_{n=0}^{\infty}U_{n}\bigr{)}=\lim_{n\to\infty}\nu(U_{n})$
whenever $U_{n}$ is an increasing sequence of open sets.
A valuation $\nu$ on $\mathbb{X}$ is _finite_ if $\nu(\mathbb{X})$ is finite,
_effectively finite_ if $\nu(\mathbb{X})$ is a computable real number, and
_locally finite_ if $\nu(U)<\infty$ for any $U$ which is a subset of a compact
set.
An effectively finite valuation computably induces an upper-valuation on
closed sets $\bar{\nu}:\mathcal{A}(\mathbb{X})\rightarrow\mathbb{R}^{+}_{>}$
by $\bar{\nu}(A)=\nu(\mathbb{X})-\nu(\mathbb{X}\setminus A)$. For any finite
valuation, $\nu(U)\leq\bar{\nu}(A)$ whenever $U\subset A$. We say a set $S$ is
_$\nu$ -regular_ if $\bar{\nu}(\partial S)=0$. An open set $U$ is
$\nu$-regular if, and only if, $\nu(U)=\bar{\nu}(\overline{U})$.
The following result shows that the measure of a sequence of small sets
approaches zero. Recall that a space $\mathbb{X}$ is _regular_ if for any
point $x$ and open set $U$, there exists an open set $V$ and a closed set $A$
such that $x\in V\subset A\subset U$.
###### Lemma 2.
Let $\mathbb{X}$ be a separable regular space, and $\nu$ a finite valuation on
$\mathbb{X}$. If $U_{n}$ is any sequence of open sets such that
$U_{n+1}\subset U_{n}$ and $\bigcap_{n=0}^{\infty}U_{n}=\emptyset$, then
$\nu(U_{n})\to 0$ as $n\to\infty$.
A link with classical measure theory is provided by a number of results that
show that valuations can be extended to measures on the Borel
$\sigma$-algebra.
###### Theorem 3.
Borel measures and continuous valuations are in one-to-one correspondence:
1. 1.
on a countably-based locally-compact Hausdorff space [Eda95b, Corollary 5.3],
or
2. 2.
on a locally compact sober space [AM02].
For a purely constructive approach, valuations themselves are the main objects
of study, and we only (directly) consider the measure of open and closed sets.
Just as for classical measure theory, we say (open) sets $U_{1},U_{2}$ are
_independent_ if $\nu(U_{1}\cap U_{2})=\nu(U_{1})\nu(U_{2})$.
###### Definition 4 (Conditioning).
Given a sub-topology $\mathcal{V}$ on $\mathbb{X}$ and a valuation $\nu$ on
$\mathbb{X}$, a _conditional valuation_ is a function
$\nu(\cdot|\cdot):\mathcal{O}(\mathbb{X})\times\mathcal{V}\rightarrow\mathbb{H}$
such that $\nu(U\cap V)=\nu(U|V)\nu(V)$ for all $U\in\mathcal{O}(\mathbb{X})$
and $V\in\mathcal{V}$.
Clearly, $\nu(U\cap V)$ can be computed given $\nu(U|V)$ and $\nu(V)$. The
conditional valuation $\nu(\cdot|V)$ is uniquely defined if $\nu(V)\neq 0$.
However, since $\nu(U\cap V):\mathbb{R}^{+}_{<}$ but
$1/\nu(V):\mathbb{R}^{+,\infty}_{>}$, the conditional valuation $\nu(\cdot|V)$
cannot be _computed_ , even when $\nu(V)>0$, unless we are also given a set
$A\in\mathcal{A}(\mathbb{X})$ such that $V\subset A$ and $\bar{\nu}(A\setminus
V)=0$, in which case we have $\nu(U|V)=\nu(U\cap V)/\bar{\nu}(A)$.
We can define a notion of integration for positive lower-semicontinuous
functions by the _Choquet_ or _horizontal_ integral; see [Tix95, Law04,
Vic08].
###### Definition 5 (Lower integral).
Given a valuation
$\nu:(\mathbb{X}\rightarrow\mathbb{S})\rightarrow\mathbb{H}$, define the lower
integral $(\mathbb{X}\rightarrow\mathbb{H})\rightarrow\mathbb{H}$ by
$\textstyle\int_{\mathbb{X}}\psi\,d\nu=\sup\bigl{\\{}{\textstyle\sum_{m=1}^{n}}(p_{m}-p_{m-1})\,\nu(\psi^{-1}(p_{m},\infty])\mid(p_{0},\ldots,p_{n})\in\mathbb{Q}^{*}\wedge 0=p_{0}<p_{1}<\cdots<p_{n}\bigr{\\}}$ (1)
which is equivalent to the real integral
$\textstyle\int_{\mathbb{X}}\\!\psi\,d\nu=\int_{0}^{\infty}\nu\bigl{(}\psi^{-1}(x,\infty]\bigr{)}dx.$
(2)
Note that we could use any dense set of computable positive real numbers, such
as the dyadic rationals $\mathbb{Q}_{2}$, instead of the rationals in (1).
Since each sum is computable, and the supremum of countably many elements of
$\mathbb{H}$ is computable, the lower integral is computable.
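Each finite partition in (1) gives a computable lower bound for the integral. The sketch below (the uniform-measure example is our own assumption, chosen for illustration) evaluates the sum for one dyadic partition, given a procedure producing lower bounds on $\nu(\psi^{-1}((p,\infty]))$:

```python
from fractions import Fraction

def lower_integral_approx(level_measure, top, n):
    """One element of the supremum in (1): the dyadic partition of [0, top]
    into 2^n pieces. level_measure(p) returns a rational lower bound for
    nu(psi^{-1}((p, oo])). The result is a lower bound for the integral."""
    step = Fraction(top) / 2**n
    ps = [k * step for k in range(2**n + 1)]
    return sum(step * level_measure(ps[m]) for m in range(1, 2**n + 1))

# Hypothetical example: X = [0, 1] with the uniform valuation and psi(x) = x,
# so nu(psi^{-1}((p, oo])) = 1 - p and the integral is 1/2.
lm = lambda p: max(Fraction(0), 1 - p)
approx = lower_integral_approx(lm, 1, 8)
assert Fraction(1, 2) - Fraction(1, 2**8) <= approx <= Fraction(1, 2)
```

Refining the partition as $n\to\infty$ yields an increasing sequence of rational lower bounds, i.e. exactly a name of the integral in $\mathbb{H}$.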
It is fairly straightforward to show that the integral is linear,
$\textstyle\int_{\mathbb{X}}(a_{1}\psi_{1}+a_{2}\psi_{2})\,d\nu=a_{1}\int_{\mathbb{X}}\psi_{1}\,d\nu+a_{2}\int_{\mathbb{X}}\psi_{2}\,d\nu$
(3)
for all $a_{1},a_{2}\in\mathbb{H}$ and
$\psi_{1},\psi_{2}:\mathbb{X}\rightarrow\mathbb{H}$.
If $\raisebox{1.89444pt}{$\chi$}_{U}$ is the characteristic function of a set
$U$, then
$\textstyle\int_{\mathbb{X}}\raisebox{1.89444pt}{$\chi$}_{U}\,d\nu=\nu(U),$
and it follows that if
$\phi=\sum_{i=1}^{n}a_{i}\,\raisebox{1.89444pt}{$\chi$}_{U_{i}}$ is a step
function, then
$\textstyle\int_{\mathbb{X}}\phi\,d\nu=\sum_{i=1}^{n}a_{i}\,\nu(U_{i}).$
Given a (lower-semi)continuous linear functional
$\mu:(\mathbb{X}\rightarrow\mathbb{H})\rightarrow\mathbb{H}$, we can define a
function $\mathcal{O}(\mathbb{X})\rightarrow\mathbb{H}$ by
$U\mapsto\mu(\raisebox{1.89444pt}{$\chi$}_{U})$ for
$U\in\mathcal{O}(\mathbb{X})$. By linearity,
$\textstyle\mu(\raisebox{1.89444pt}{$\chi$}_{U})+\mu(\raisebox{1.89444pt}{$\chi$}_{V})=\mu(\raisebox{1.89444pt}{$\chi$}_{U\cap
V})+\mu(\raisebox{1.89444pt}{$\chi$}_{U\cup V}).$
Hence $\mu$ induces a valuation on $\mathbb{X}$. We therefore obtain a
computable equivalence between the type of valuations and the type of positive
linear lower-semicontinuous functionals:
###### Theorem 6.
The type of valuations
$(\mathbb{X}\rightarrow\mathbb{S})\rightarrow\mathbb{H}$ is computably
equivalent to the type of continuous linear functionals
$(\mathbb{X}\rightarrow\mathbb{H})\rightarrow\mathbb{H}$.
Types of the form $(\mathbb{X}\rightarrow\mathbb{T})\rightarrow\mathbb{T}$ for
a fixed type $\mathbb{T}$ form a _monad_ [Str72] over $\mathbb{X}$, and are
particularly easy to work with.
In [Eda95a, Section 4], a notion of integral
$\mathcal{C}_{\mathrm{bd}}(\mathbb{X};\mathbb{R})\rightarrow\mathbb{R}$ on
continuous bounded functions was introduced based on the approximation by
measures supported on finite sets of points. Our lower integral on positive
lower-semicontinuous functions can be extended to bounded functions as
follows:
###### Definition 7 (Bounded integration).
A continuous function $f:\mathbb{X}\rightarrow\mathbb{R}$ is _effectively
bounded_ if there are (known) computable reals $a,b\in\mathbb{R}$ such that
$a<f(x)<b$ for all $x\in\mathbb{X}$.
If $\nu$ is effectively finite with $\nu(\mathbb{X})=c$, we define the
integral
$\mathcal{C}_{\mathrm{bd}}(\mathbb{X};\mathbb{R})\rightarrow\mathbb{R}$ by
$\textstyle\int_{\mathbb{X}}f(x)\,d\nu(x)=\int_{\mathbb{X}}\bigl{(}f(x)-a\bigr{)}\,d\nu(x)+a\,c=b\,c-\int_{\mathbb{X}}\bigl{(}b-f(x)\bigr{)}\,d\nu(x)$
where $a<b$ are bounds for $f$.
It is clear that the first formula for the integral of $f$ is computable in
$\mathbb{R}_{<}$ and the second in $\mathbb{R}_{>}$, and that the lower and
upper integrals agree if $f$ is continuous. If $\mathbb{X}$ is compact, then
any (semi)continuous function is effectively bounded, so the integrals always
exist.
In order to define a valuation given a positive linear functional
$\mathcal{C}_{\mathrm{cpt}}(\mathbb{X};\mathbb{R})\rightarrow\mathbb{R}$ on
compactly-supported continuous functions, we need some way of approximating
the characteristic function of an open set by continuous functions. If
$\mathbb{X}$ is _effectively regular_ , then given any open set $U$, we can
construct an increasing sequence of closed sets $A_{n}$ such that
$\bigcup_{n=0}^{\infty}A_{n}=U$. Further, a type $\mathbb{X}$ is _effectively
quasi-normal_ if given disjoint closed sets $A_{0}$ and $A_{1}$, we can
construct a continuous function $\phi:\mathbb{X}\rightarrow[0,1]$ such that
$\phi(A_{0})=\\{0\\}$ and $\phi(A_{1})=\\{1\\}$ using an effective Urysohn
lemma; see [Sch09] for details.
We then have an effective version of the Riesz representation theorem:
###### Theorem 8.
Suppose $\mathbb{X}$ is an effectively regular and effectively quasi-normal
type. Then the type of locally-finite valuations
$(\mathbb{X}\rightarrow\mathbb{S})\rightarrow\mathbb{H}$ is effectively
equivalent to the type of positive linear functionals
$\mathcal{C}_{\mathrm{cpt}}(\mathbb{X};\mathbb{R})\rightarrow\mathbb{R}$
on continuous functions of compact support.
We consider lower-semicontinuous functionals
$(\mathbb{X}\rightarrow\mathbb{H})\rightarrow\mathbb{H}$ to be more
appropriate as a foundation for computable measure theory than the continuous
functionals $(\mathbb{X}\rightarrow\mathbb{R})\rightarrow\mathbb{R}$, since
the equivalence given by Theorem 6 is entirely independent of any assumptions
on the type $\mathbb{X}$ whereas the equivalence of Theorem 8 requires extra
properties of $\mathbb{X}$ and places restrictions on the function space.
###### Theorem 9 (Fubini).
If $\mathbb{X}_{1}$ and $\mathbb{X}_{2}$ are countably-based spaces with
valuations $\nu_{1}$ and $\nu_{2}$ respectively, then for any
$\psi:\mathbb{X}_{1}\times\mathbb{X}_{2}\rightarrow\mathbb{H}$,
$\textstyle\int_{\mathbb{X}_{1}}\int_{\mathbb{X}_{2}}\psi(x_{1},x_{2})d\nu_{2}(x_{2})d\nu_{1}(x_{1})=\int_{\mathbb{X}_{2}}\int_{\mathbb{X}_{1}}\psi(x_{1},x_{2})d\nu_{1}(x_{1})d\nu_{2}(x_{2}).$
(4)
Extending valuations to functions
$(\mathbb{X}\rightarrow\mathbb{H})\rightarrow\mathbb{H}$, we can write
$\nu_{1}(\lambda x_{1}.\nu_{2}(\lambda
x_{2}.\psi(x_{1},x_{2})))=\nu_{2}(\lambda x_{2}.\nu_{1}(\lambda
x_{1}.\psi(x_{1},x_{2}))).$
###### Definition 10 (Product valuation).
Let $\nu_{i}$ be a valuation on $\mathbb{X}_{i}$ for $i=1,2$, where each
$\mathbb{X}_{i}$ is countably-based. The _product_ of two valuations is given
by
$\displaystyle\textstyle[\nu_{1}\times\nu_{2}](U)$
$\displaystyle\textstyle:=\int_{\mathbb{X}_{1}}\int_{\mathbb{X}_{2}}\chi_{U}(x_{1},x_{2})d\nu_{2}(x_{2})d\nu_{1}(x_{1})$
(5)
$\displaystyle\textstyle\qquad=\int_{\mathbb{X}_{2}}\int_{\mathbb{X}_{1}}\chi_{U}(x_{1},x_{2})d\nu_{1}(x_{1})d\nu_{2}(x_{2}),$
where the two integrals are equal by Fubini’s theorem.
In the sequel, we shall make frequent use of the following result.
###### Proposition 11.
Let $U,V,W:\mathcal{O}(\mathbb{X})$ with $U\subset V$, and $\nu$ a valuation
on $\mathbb{X}$. Then
1. (a)
$\nu(U)+\nu(V\cap W)\leq\nu(V)+\nu(U\cap W)$, and
2. (b)
$\nu(U)+\nu(V\cup W)\leq\nu(V)+\nu(U\cup W)$.
The proof is straightforward.
## 4 Lower-measurable sets
In this section, we define measures of non-open sets.
The standard approach to probability theory used in classical analysis is to
define a measure over a $\sigma$-algebra of sets. The main difficulty with a
direct effectivisation of probability theory via $\sigma$-algebras is the
operation of complementation. Given an open set $U$, we can _only_ hope to
compute $\nu(U)$ in $\mathbb{R}^{+}_{<}$ (i.e. from below), and since
$\sigma$-algebras are closed under complementation, for $A=U^{\complement}$,
we can only compute $\nu(A)$ in $\mathbb{R}^{+}_{>}$. Then for a countable
union of nested closed sets $A_{\infty}=\bigcup_{n=0}^{\infty}A_{n}$ with
$A_{n}\subset A_{n+1}$, we need to find information about the limit
$\lim_{n\to\infty}\nu(A_{n})$, which is an increasing sequence in
$\mathbb{R}^{+}_{>}$, so we can find neither an upper nor a lower bound.
Our solution is to consider, for a _fixed_ probability valuation $\nu$, a type
of _$\nu$ -lower-measurable sets_. These essentially extend the open sets to
the $G_{\delta}$ sets, and have a representation under which it is possible to
compute the $\nu$-measure from below. Further, they are closed under finite
intersection and countable union (so may be called a _$\sigma$ -semiring_,
though this usage is different from that of e.g. [Sch04]). However, they do
not formally define a topology on $\mathbb{X}$, since they are essentially
_equivalence-classes_ of subsets of $\mathbb{X}$ (and even this property only
holds for sufficiently nice spaces, including the Polish spaces). The
resulting theory can be seen as a construction of an outer-regular measure for
$G_{\delta}$ subsets of a space (see [vG02]).
### 4.1 Definition and basic properties
We first define the type of lower-measurable sets, and prove some of its basic
properties.
###### Definition 12 (Lower-Cauchy sequence).
Let $\mathcal{V}$ be a set with an intersection (or _meet_) operation
$\cap:\mathcal{V}\times\mathcal{V}\to\mathcal{V}$ and a compatible subset (or
order) relation $\subset$ on $\mathcal{V}\times\mathcal{V}$, and let
$\nu:\mathcal{V}\to\mathbb{R}$.
A sequence $(V_{k})$ of elements of $\mathcal{V}$ is a _lower-Cauchy sequence_
if for all $\epsilon>0$, there exists $n=N(\epsilon)$, such that
$\forall m>n,\ \nu(V_{m}\cap V_{n})\geq\nu(V_{n})-\epsilon.$ (6)
The convergence is _effective_ if $N(\epsilon)$ is known, equivalently, if
there is a known sequence $(\epsilon_{k})$ with
$\lim_{k\to\infty}\epsilon_{k}=0$ such that for all $m>n$, $\nu(V_{m}\cap
V_{n})\geq\nu(V_{n})-\epsilon_{n}$. The convergence is _fast_ if
$\epsilon_{k}=2^{-k}$.
The sequence is _monotone (decreasing)_ if for all $m>n$, $V_{m}\subset
V_{n}$.
If $(V_{n})$ is an effective lower-Cauchy sequence, then
$\forall m>n,\ \nu(V_{n}\setminus V_{m})\leq\epsilon_{n}.$
If $(V_{n})$ is a fast monotone lower-Cauchy sequence, then
$\forall m>n,\ \nu(V_{n})\geq\nu(V_{m})\geq\nu(V_{n})-2^{-n}.$
###### Definition 13 (Equivalence of lower-Cauchy sequence).
Two $\nu$-lower-Cauchy sequences are _equivalent_ , denoted
$(U_{n})\sim(V_{n})$, if, and only if,
$\forall\epsilon>0,\;\exists n\in\mathbb{N},\;\forall m\geq n,\ \nu(U_{m}\cap
V_{m})\geq\max(\nu(U_{m}),\nu(V_{m}))-\epsilon.$
###### Lemma 14.
If $(U_{n})$ and $(V_{n})$ are fast lower-Cauchy sequences, then $(U_{n})$ and
$(V_{n})$ are equivalent if, and only if,
$\forall n\in\mathbb{N},\ \nu(U_{n}\cap
V_{n})\geq\max(\nu(U_{n}),\nu(V_{n}))-2^{-n}.$
###### Lemma 15.
The relation $\sim$ is an equivalence relation on fast lower-Cauchy sequences.
###### Proof.
Reflexivity and symmetry are immediate; it remains to show transitivity.
Suppose $(U_{n})$, $(V_{n})$ and $(W_{n})$ are fast monotone $\nu$-lower-
Cauchy sequences, that $(U_{n})\sim(V_{n})$ and $(V_{n})\sim(W_{n})$. Then for
any $m>n$, $\nu(U_{n}\cap W_{n})\geq\nu(U_{m}\cap V_{m}\cap
W_{m})\geq\nu(U_{m}\cap V_{m})+\nu(V_{m}\cap
W_{m})-\nu(W_{m})\geq\nu(U_{m})+\bigl{(}\nu(U_{m}\cap
V_{m})-\nu(U_{m})\bigr{)}+\bigl{(}\nu(V_{m}\cap
W_{m})-\nu(V_{m})\bigr{)}\geq\nu(U_{m})-2\\!\times\\!2^{-m}\geq\nu(U_{n})-2^{-n}-2\\!\times\\!2^{-m}$.
Since $m$ can be made arbitrarily large, $\nu(U_{n}\cap
W_{n})\geq\nu(U_{n})-2^{-n}$ as required. By symmetry, $\nu(U_{n}\cap
W_{n})\geq\nu(W_{n})-2^{-n}$. ∎
###### Definition 16 (Lower-measurable set).
Let $\mathbb{X}$ be a type, and $\nu$ a valuation on $\mathbb{X}$. The type of
$\nu$-lower-measurable sets is defined as the equivalence classes of fast
monotone $\nu$-lower-Cauchy sequences of open subsets of $\mathbb{X}$.
Note that a fast monotone $\nu$-lower-Cauchy sequence $(U_{n})$ satisfies
$\forall n\in\mathbb{N},\;\forall m>n,\ U_{m}\subset
U_{n}\wedge\nu(U_{m})\geq\nu(U_{n})-2^{-n}.$ (7)
By countable additivity, the classical measure of
$U_{\infty}=\bigcap_{n=0}^{\infty}U_{n}$ coincides with its $\nu$-lower-measure.
###### Proposition 17 (Lower-measure is computable).
If $(U_{n})$ is a fast monotone lower-Cauchy sequence of open sets, then
$\sup_{n\in\mathbb{N}}(\nu(U_{n})-2^{-n})$ is computable in
$\mathbb{R}^{+}_{<}$.
###### Proof.
The value $\sup_{n\in\mathbb{N}}\bigl{(}\nu(U_{n})-2^{-n}\bigr{)}$ is the
supremum of a countable set of lower-reals, so is computable in
$\mathbb{R}^{+}_{<}$. ∎
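Propositions 17 and 18 say that this supremum is a well-defined lower real. A minimal sketch (the example sequence is our own assumption) reads the lower-measure off from the measures of the approximating open sets:

```python
from fractions import Fraction

def lower_measure(nu_U):
    """Lower-measure of the set named by a fast monotone lower-Cauchy sequence
    (U_n): the increasing rational lower bounds max_{k<=n} (nu(U_k) - 2^-k).
    nu_U(n) returns nu(U_n) as a Fraction."""
    return lambda n: max(nu_U(k) - Fraction(1, 2**k) for k in range(n + 1))

# Hypothetical example: the uniform valuation on [0, 1] with
# U_n = (0, 1/2 + 2^-(n+1)), a fast monotone lower-Cauchy sequence whose
# intersection (0, 1/2] has measure 1/2.
nu_U = lambda n: Fraction(1, 2) + Fraction(1, 2**(n + 1))
mu = lower_measure(nu_U)
assert Fraction(1, 2) - Fraction(1, 2**19) <= mu(20) <= Fraction(1, 2)
```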
###### Proposition 18 (Lower-measure is well-defined).
If $(U_{n})$ and $(V_{n})$ are equivalent fast monotone $\nu$-lower-Cauchy
sequences, then
$\sup_{n\in\mathbb{N}}\bigl{(}\nu(U_{n})-2^{-n}\bigr{)}=\sup_{n\in\mathbb{N}}\bigl{(}\nu(V_{n})-2^{-n}\bigr{)}$.
###### Proof.
Suppose $(U_{n})$ and $(V_{n})$ are equivalent $\nu$-lower-Cauchy sequences.
Then for all $n$, we have
$\nu(U_{\infty})\geq\nu(U_{n})-2^{-n}\geq\nu(U_{n}\cap
V_{n})-2^{-n}\geq\nu(V_{n})-2\times 2^{-n}\geq\nu(V_{\infty})-2\times 2^{-n}$.
Since $n$ is arbitrary, $\nu(U_{\infty})\geq\nu(V_{\infty})$. Switching the
$U$s and $V$s gives $\nu(V_{\infty})\geq\nu(U_{\infty})$. ∎
###### Definition 19 (Lower-measure).
The $\nu$-lower-measure of a lower-measurable set $U_{\infty}$ is defined as
$\sup_{n\in\mathbb{N}}\bigl{(}\nu(U_{n})-2^{-n}\bigr{)}$, where $(U_{n})$ is
any fast monotone lower-Cauchy sequence converging to $U_{\infty}$.
###### Lemma 20.
Let $(U_{n})$ be a $\nu$-lower-Cauchy sequence. Then the sequence $(U_{n+1})$
is $\nu$-lower-Cauchy, and $(U_{n+1})\sim_{\nu}(U_{n})$.
###### Proof.
If $m>n$, $\nu(U_{n+1})-\nu(U_{m+1})\leq 2^{-(n+1)}<2^{-n}$. ∎
We first show that given an effective lower-Cauchy sequence, then it is
possible to compute an equivalent fast lower-Cauchy subsequence, and given a
non-monotone lower-Cauchy sequence of lower-measurable sets (notably, of open
sets), we can compute a monotone sequence with the same limit.
###### Lemma 21 (Computing fast monotone lower-Cauchy sequences).
1. 1.
Suppose $(V_{n})_{n\in\mathbb{N}}$ is an effective $\nu$-lower-Cauchy
sequence. Then $U_{n}=V_{N(2^{-n})}$ is an equivalent fast lower-Cauchy
subsequence.
2. 2.
Suppose $(V_{n})_{n\in\mathbb{N}}$ is a fast $\nu$-lower-Cauchy sequence. Then
$U_{n}=\bigcup_{m\geq n+1}V_{m}$ is an equivalent fast monotone $\nu$-lower-
Cauchy sequence.
###### Proof.
1. 1.
Assuming without loss of generality that $N$ is increasing, for $m>n$ we have
$\nu(U_{m}\cap U_{n})=\nu(V_{N(2^{-m})}\cap
V_{N(2^{-n})})\geq\nu(V_{N(2^{-n})})-2^{-n}=\nu(U_{n})-2^{-n}$.
2. 2.
Since $U_{n}=U_{n+1}\cup V_{n+1}$, we have $\nu(U_{n})+\nu(U_{n+1}\cap
V_{n+1})=\nu(U_{n+1}\cup V_{n+1})+\nu(U_{n+1}\cap
V_{n+1})=\nu(U_{n+1})+\nu(V_{n+1})$, and since $V_{n+2}\subset U_{n+1}$, we
have $\nu(U_{n+1}\cap V_{n+1})\geq\nu(V_{n+2}\cap
V_{n+1})\geq\nu(V_{n+1})-2^{-(n+1)}$. Hence
$\nu(U_{n})\leq\nu(U_{n+1})+2^{-(n+1)}$, and by induction, we see
$\nu(U_{n})<\nu(U_{m})+2^{-n}$ whenever $m>n$. Hence $(U_{n})$ is a fast
monotone lower-Cauchy sequence.
To show $(U_{n})\sim(V_{n})$, since clearly $\nu(U_{n})\geq\nu(V_{n})$, we
need to show $\nu(U_{n})\leq\nu(V_{n})+\epsilon$ for $n$ sufficiently large.
We first show that for all $m>n$, $\nu(U_{n})\leq 2^{-n}+\nu(V_{m})$. Note
$U_{n}=U_{n+1}\cup V_{n+1}$, and $V_{n+2}\subset U_{n+1}$. Let
$U_{n,m}=\bigcup_{k=n+1}^{m}V_{k}$, noting $U_{m-1,m}=V_{m}$. Then
$\nu(U_{n+1,m})+\nu(V_{n+1})=\nu(U_{n+1,m}\cup V_{n+1})+\nu(U_{n+1,m}\cap
V_{n+1})=\nu(U_{n,m})+\nu(U_{n+1,m}\cap
V_{n+1})\geq\nu(U_{n,m})+\nu(V_{n+2}\cap
V_{n+1})\geq\nu(U_{n,m})+\nu(V_{n+1})-2^{-(n+1)}$, so
$\nu(U_{n+1,m})\geq\nu(U_{n,m})-2^{-(n+1)}$. Hence
$\nu(U_{n,m})\leq\nu(U_{m-1,m})+\sum_{k=n+1}^{m-1}2^{-k}\leq\nu(V_{m})+2^{-n}$.
Since $U_{n}=\bigcup_{m=n+1}^{\infty}U_{n,m}$, there exists $m$ such that
$\nu(U_{n,m})\geq\nu(U_{n})-2^{-n}$. Then
$\nu(V_{m})\geq\nu(U_{n})-2\\!\times\\!2^{-n}$, and since
$\nu(U_{m})\geq\nu(U_{n})-2^{-n}$, we have
$\nu(V_{m})\geq\nu(U_{m})-3\\!\times\\!2^{-n}$.
∎
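Both constructions of Lemma 21 are directly implementable once open sets are given concretely. Below is a hedged Python sketch of the monotone-union construction of part 2, in a toy model (our illustrative assumption) where open sets are finite unions of rational intervals, $\nu$ is Lebesgue length, and the infinite union is truncated at `K` terms for display:

```python
from fractions import Fraction as F

def merge(ivs):
    """Normalise a list of open rational intervals into disjoint sorted form."""
    out = []
    for a, b in sorted(ivs):
        if out and a <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

def measure(ivs):
    """Lebesgue length of a finite union of intervals."""
    return sum(b - a for a, b in merge(ivs))

def monotone_union(V, n, K=40):
    """Part 2 of Lemma 21, truncated for illustration: the monotone
    replacement U_n = union_{m >= n+1} V_m, approximated by K terms."""
    ivs = []
    for m in range(n + 1, n + 1 + K):
        ivs += V(m)
    return merge(ivs)

# A fast but non-monotone lower-Cauchy sequence: V_n alternates between
# two kinds of interval, both converging to a set of measure 1/2.
def V(n):
    eps = F(1, 2 ** (n + 1))
    return [(F(0), F(1, 2) + eps)] if n % 2 == 0 else [(eps, F(1, 2))]

ms = [measure(monotone_union(V, n)) for n in range(8)]
```

The resulting measures are nonincreasing and squeeze down to the common limit $1/2$, even though the input sequence itself is not monotone.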
###### Remark 22 (Equivalent definitions of $\nu$-lower-measurable sets).
By Lemma 21, we see that the monotonicity condition on the open sets $U_{n}$
in Definition 16 is unnecessary, and that fast convergence can be weakened to
effective convergence.
By Theorem 26, we see that a second definition would be to say (i) any open
set is $\nu$-lower-measurable, and (ii) any fast $\nu$-lower-Cauchy sequence
$(V_{n})_{n\in\mathbb{N}}$ of $\nu$-lower-measurable sets defines a lower-
measurable set $V_{\infty}$, with
$\nu(V_{\infty}):=\sup_{n\in\mathbb{N}}\nu(V_{n})-2^{-n}$.
A third definition for countably-based topological spaces would be to take a
countable basis, and consider $\nu$-lower-Cauchy sequences of _finite_ unions
of basic sets.
Which definition to take is a matter of taste; our Definition 16 provides
strong properties of the approximating sequences, so is easy to use in
hypotheses, but it requires more work to prove. However, Lemma 21 shows that
it suffices to compute an effectively lower-Cauchy sequence. One advantage of
using fast sequences over effective sequences is that we do not have to
explicitly pass around a convergence rate. The second definition
has the advantage of providing a uniform construction, though for explicit
computations, it may be more appropriate to restrict to finite unions of basic
open sets.
### 4.2 Intersections and unions of lower-measurable sets
We now consider computability of intersections and unions of lower-measurable
sets.
The following lemma compares measures of unions and intersections of
equivalent sets.
###### Lemma 23.
Let $(U_{n})$, $(V_{n})$ and $(W_{n})$ be $\nu$-lower-Cauchy sequences.
1. 1.
If $\nu\bigl{(}(U\cap V)_{\infty}\bigr{)}=\nu(U_{\infty})=\nu(V_{\infty})$, or
$\nu\bigl{(}(U\cup V)_{\infty}\bigr{)}=\nu(U_{\infty})=\nu(V_{\infty})$, then
$(U_{n})\sim(V_{n})$.
2. 2.
If $(U_{n})\sim_{\nu}(V_{n})$, then $(U_{n}\cap V_{n})\sim_{\nu}(U_{n+1}\cup
V_{n+1})$, and both are equivalent to $(U_{n})$ and $(V_{n})$.
###### Proof.
1. 1.
$\nu(U_{n}\cap V_{n})\geq\lim_{m\to\infty}\nu(U_{m}\cap
V_{m})=\nu(U_{\infty})\geq\nu(U_{n})-2^{-n}$. Similarly, $\nu(U_{n}\cup
V_{n})\geq\lim_{m\to\infty}\nu(U_{m}\cup
V_{m})=\nu(U_{\infty})\geq\nu(U_{n})-2^{-n}$.
2. 2.
$\nu(U_{n}\cap(U_{n}\cap V_{n}))=\nu(U_{n}\cap V_{n})$, and $\nu(U_{n}\cap
V_{n})\geq\nu(U_{n})-2^{-n}$ by definition of $\sim_{\nu}$.
$\nu(U_{n}\cap(U_{n}\cup V_{n}))=\nu(U_{n})$, and $\nu(U_{n}\cup
V_{n})=\nu(U_{n})+\nu(V_{n})-\nu(U_{n}\cap V_{n})\leq\nu(U_{n})+2^{-n}$, so
$\nu(U_{n}\cap(U_{n}\cup V_{n}))\geq\nu(U_{n}\cup V_{n})-2^{-n}$ as
required. ∎
###### Theorem 24 (Intersections and unions of lower-measurable sets).
Let $\nu$ be a probability valuation on a type $\mathbb{X}$. Then operations
of (1) intersection, and (2) union are computable on $\nu$-lower-measurable
sets.
###### Proof.
Suppose $U_{n}\rightsquigarrow_{\nu}U_{\infty}$ and
$V_{n}\rightsquigarrow_{\nu}V_{\infty}$.
1. 1.
We show that $(U_{n}\cap V_{n})$ is a fast $\nu$-lower-Cauchy sequence. For
$m>n$ and any $l>m$, $\nu(U_{n}\cap V_{n})-\nu(U_{m}\cap
V_{m})\leq\nu(U_{n})-\nu(U_{l}\cap V_{l})\leq\nu(U_{n})-\nu(U_{l})+2^{-l}\leq
2^{-n}+2^{-l}$. Since $l$ can be made arbitrarily large, $\nu(U_{m}\cap
V_{m})\geq\nu(U_{n}\cap V_{n})-2^{-n}$.
2. 2.
For $m>n$, we have $U_{m}\subset U_{n}$ and $V_{m}\subset V_{n}$, so by
Proposition 11, $\nu(U_{m})+\nu(V_{m})+\nu(U_{n}\cup
V_{n})\leq\nu(V_{m})+\nu(U_{m}\cup V_{n})+\nu(U_{n})\leq\nu(U_{m}\cup
V_{m})+\nu(U_{n})+\nu(V_{n})$. Hence $\nu(U_{n}\cup V_{n})-\nu(U_{m}\cup
V_{m})\leq(\nu(U_{n})-\nu(U_{m}))+(\nu(V_{n})-\nu(V_{m}))\leq
2\\!\times\\!2^{-n}$, so $\nu(U_{n+1}\cup V_{n+1})-\nu(U_{m+1}\cup
V_{m+1})\leq 2^{-n}$. ∎
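Concretely, in a toy model (our illustrative assumption) where open sets are finite unions of rational intervals and $\nu$ is Lebesgue length, the intersection construction of Theorem 24 is simply pairwise intersection of the approximating open sets. A hedged Python sketch:

```python
from fractions import Fraction as F

def merge(ivs):
    """Normalise a list of open rational intervals to disjoint sorted form."""
    out = []
    for a, b in sorted(ivs):
        if out and a <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

def measure(ivs):
    return sum(b - a for a, b in merge(ivs))

def inter(xs, ys):
    """Pairwise intersection of two finite unions of open intervals."""
    return merge([(max(a, c), min(b, d))
                  for a, b in xs for c, d in ys if max(a, c) < min(b, d)])

# Two fast monotone lower-Cauchy sequences of intervals; the n-th
# approximation to the intersection is U_n ∩ V_n, as in Theorem 24.
U = lambda n: [(F(0), F(1, 2) + F(1, 2 ** (n + 2)))]
V = lambda n: [(F(1, 4) - F(1, 2 ** (n + 2)), F(1))]
W = lambda n: inter(U(n), V(n))

ws = [measure(W(n)) for n in range(12)]
```

The measures decrease to the lower-measure $1/4$ of the intersection at rate $2^{-(n+1)}$, within the $2^{-n}$ tolerance required of a fast sequence.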
###### Theorem 25 (Countable unions of lower-measurable sets).
Let $\nu$ be a probability valuation on a type $\mathbb{X}$. Then the
operation of countable union is computable on $\nu$-lower-measurable sets.
###### Proof.
We can show that if $U_{k}\subset V_{k}$ for $k=1,2,\ldots$, then
$\nu(\bigcup_{k=1}^{\infty}V_{k})-\nu(\bigcup_{k=1}^{\infty}U_{k})\leq\sum_{k=1}^{\infty}\bigl{(}\nu(V_{k})-\nu(U_{k})\bigr{)}$.
Given $U_{k,n}\rightsquigarrow_{\nu}U_{k}$, we show that
$V_{n}:=\bigcup_{k=0}^{\infty}U_{k,n+k+1}$ is a $\nu$-lower-Cauchy sequence.
For if $m>n$, then
$\nu\bigl{(}\bigcup_{k=0}^{\infty}U_{k,n+k+1}\bigr{)}-\nu\bigl{(}\bigcup_{k=0}^{\infty}U_{k,m+k+1}\bigr{)}\leq\sum_{k=0}^{\infty}\bigl{(}\nu(U_{k,n+k+1})-\nu(U_{k,m+k+1})\bigr{)}\leq\sum_{k=0}^{\infty}2^{-(n+k+1)}=2^{-n}$
as required. ∎
We can similarly compute countable intersections of _effectively_ decreasing
sequences of $\nu$-lower-measurable sets.
###### Theorem 26 (Effective countable intersections of lower-measurable
sets).
Suppose $(V_{n})$ is a fast monotone lower-Cauchy sequence of $\nu$-lower-
measurable sets. Then $V_{n}$ converges effectively to a $\nu$-lower-
measurable set $V_{\infty}$.
###### Proof.
Write $V_{k,\infty}=V_{k}$ and let $V_{k,n}$ be a fast monotone lower-Cauchy
sequence of open sets converging to $V_{k}$. Define
$V_{\infty,n}=V_{n+1,n+1}\cap V_{\infty,n-1}$, which is clearly monotone. Note
that since $V_{n+1,\infty}\subset V_{n+1,n+1}$ and $V_{n+1,\infty}\subset
V_{n,\infty}$, we have $V_{\infty,n}\supset V_{n+1,\infty}=V_{n+1}$. Then for
$m>n$,
$\nu(V_{\infty,n})-\nu(V_{\infty,m})\leq\nu(V_{n+1,n+1})-\nu(V_{m+1})\leq\nu(V_{n+1,n+1})-\nu(V_{n+1,\infty})+\nu(V_{n+1})-\nu(V_{m+1})\leq
2^{-(n+1)}+2^{-(n+1)}=2^{-n}$, so $V_{\infty,n}$ is a fast monotone lower-
Cauchy sequence of open sets, so represents a lower-measurable set
$V_{\infty,\infty}=V_{\infty}$.
Finally, we have
$\nu(V_{n})-\nu(V_{\infty})=\nu(V_{n,\infty})-\nu(V_{\infty,\infty})\leq\nu(V_{\infty,n})-\nu(V_{\infty,\infty})\leq
2^{-n}$, from which we see that $V_{\infty}$ is indeed the limit of $(V_{n})$.
∎
###### Proposition 27 (Lower measure is modular).
$\nu(U_{\infty}\cap V_{\infty})+\nu(U_{\infty}\cup
V_{\infty})=\nu(U_{\infty})+\nu(V_{\infty})$.
###### Proof.
For fixed $n$, $\nu(U_{n}\cap V_{n})+\nu(U_{n}\cup
V_{n})=\nu(U_{n})+\nu(V_{n})$ by modularity of valuations. Then
$\nu(U_{\infty}\cap V_{\infty})+\nu(U_{\infty}\cup
V_{\infty})\geq\nu(U_{n}\cap V_{n})-2^{-n}+\nu(U_{n}\cup
V_{n})-2\\!\times\\!2^{-n}=\nu(U_{n})+\nu(V_{n})-3\times
2^{-n}\geq\nu(U_{\infty})+\nu(V_{\infty})-3\times 2^{-n}$. Taking $n\to\infty$
gives $\nu(U_{\infty}\cap V_{\infty})+\nu(U_{\infty}\cup
V_{\infty})\geq\nu(U_{\infty})+\nu(V_{\infty})$. The reverse inequality is
similar, since
$\nu(U_{\infty})+\nu(V_{\infty})\geq\nu(U_{n})+\nu(V_{n})-2\times 2^{-n}$. ∎
### 4.3 Topology of lower-measurable sets
The representation of $\nu$-lower-measurable sets induces a (non-Hausdorff)
quotient topology on the space. Recall that for open sets, $U_{n}\to
U_{\infty}\iff\forall x\in U_{\infty},\exists N,\forall n\geq N,x\in U_{n}$.
For $\nu$-lower-Cauchy sequences, convergence is given by
$(U_{k,n})\to(U_{\infty,n})$ as $k\to\infty$ if for all $n$, there exists $K$
such that $\nu(U_{k,n}\cap U_{\infty,n})\geq\nu(U_{\infty,n})-2^{-n}$ whenever
$k\geq K(n)$. The convergence is effective if $K(n)$ is known; by restricting
to subsequences we may take $K(n)=n$. In this case, a $\nu$-lower-Cauchy
sequence representing the limit is
$U_{\infty,n}=\bigcup_{k=n+1}^{\infty}U_{k,n+k+1}$.
###### Property 28 (Topology on $\nu$-lower-measurable sets).
A set of $\nu$-lower-measurable sets $\mathcal{W}$ is _open_ if
$\forall W\in\mathcal{W},\ \exists\epsilon>0,\ \forall V,\ \nu(V\cap
W)>\nu(W)-\epsilon\implies V\in\mathcal{W}.$ (8)
###### Property 29.
The convergence relation on $\nu$-lower-Cauchy sequences of $\nu$-lower-
measurable sets is given by $V_{k}\to V_{\infty}$ as ${k\to\infty}$ if
$\liminf_{k\to\infty}\nu(V_{k}\cap V_{\infty})\geq\nu(V_{\infty})$.
The convergence is _effective_ if $\nu(V_{k}\cap
V_{\infty})\geq\nu(V_{\infty})-\epsilon(k)$ for _known_ $\epsilon(k)$, and
_fast_ if $\epsilon(k)=2^{-k}$.
### 4.4 Relationship with classical measure-theory
###### Remark 30 (Outer-regular measures).
In the literature on classical measure theory (see [vG02]), defining
$\nu(U_{\infty})=\lim_{n\to\infty}\nu(U_{n})=\inf_{n\in\mathbb{N}}\nu(U_{n})$
for a decreasing sequence of open sets corresponds to an _outer-regular
measure_ , since we approximate $U_{\infty}$ from outside. However, since
$U_{n}$ converges to $U_{\infty}$ from above, but the open sets $U_{n}$ are
inherently approximated from below, we cannot compute the measure of an
_arbitrary_ decreasing sequence of open sets. This motivates the use of
$\nu$-lower-Cauchy sequences, which converge rapidly from above. Since we
compute $\nu(U_{\infty})$ from below, we use the terminology “lower measure”.
###### Remark 31 (Relationship with classical Borel measures).
Our lower-measurable sets are all Borel sets, with lower measure equal to the
classical measure. Further, since any outer measure on a separable metric
space is a Borel measure, and the measure of any set is the infimum of the
measures of its $\epsilon$-neighbourhoods, we see that any measurable set is equal to a
lower-measurable set up to a set of measure $0$. Hence our lower-measurable
sets capture the measure-theoretic behaviour of all Borel sets, but do so in a
way in which the measure is semicomputable.
### 4.5 Measurable sets
###### Definition 32 (Upper-measurable sets; upper-measure).
The type of _upper-measurable sets_ is the set of increasing sequences of
closed sets $(A_{n})$ such that $\nu(A_{m})\leq\nu(A_{n})+2^{-n}$ for all $n$
and all $m>n$. The _upper-measure_ is
$\inf_{n\in\mathbb{N}}\nu(A_{n})+2^{-n}$.
Note that a representation of an upper-measurable set is the same as the
complement of a lower-measurable set.
###### Definition 33 (Measurable sets).
The type of _measurable sets_ consists of equivalence classes of monotone
sequences of pairs of open and closed sets $(U_{n},A_{n})$ such that
$U_{n}\subset U_{n+1}\subset A_{n+1}\subset A_{n}$ for all $n$, and
$\nu(A_{n}\setminus U_{n})\leq 2^{-n}$, under the equivalence relation
$(U_{n},A_{n})\sim(V_{n},B_{n})$ if, and only if, $\nu\bigl{(}(A_{n}\setminus
V_{n})\cup(B_{n}\setminus U_{n})\bigr{)}\to 0$ as $n\to\infty$.
In other words, the type of $\nu$-measurable sets in $\mathbb{X}$ is the effective
completion of the type of pairs
$(U,A)\in\mathcal{O}(\mathbb{X})\times\mathcal{A}(\mathbb{X})$ satisfying
$U\subset A$ under the (non-metric) distance $d((U,A),(V,B))=\nu\bigl{(}(A\setminus
V)\cup(B\setminus U)\bigr{)}$. Note that $d((U,A),(U,A))=\nu(A\setminus U)$, which need
not be zero, but from the condition
$d((U_{m},A_{m}),(U_{n},A_{n}))<2^{-\min(m,n)}$, we have $\nu(A_{n}\setminus
U_{n})<2^{-n}$ for all $n$.
The type of measurable sets is equivalent to giving a fast monotone lower-
Cauchy sequence $(U_{n})$ and a fast monotone upper-Cauchy sequence $(A_{n})$
for the same set, i.e. such that $\nu(A_{n}\setminus U_{m})\to 0$ as
$m,n\to\infty$.
### 4.6 Lower-measurable sets as point-sets
If $(U_{n})$ is a $\nu$-lower-Cauchy sequence, we will write $\nu(U_{\infty})$
for $\sup_{n\in\mathbb{N}}\nu(U_{n})-2^{-n}$. Similarly, we write $\nu((U\cap
V)_{\infty})$ for $\sup_{n\in\mathbb{N}}\nu(U_{n}\cap V_{n})-2^{-n}$. It is
tempting to define the lower-measure of $(U_{n})$ as a property of the
_intersection_ $\bigcap_{n\in\mathbb{N}}U_{n}$. Unfortunately, for general
spaces, it need not be the case that
$\sup_{n\in\mathbb{N}}\nu(U_{n})-2^{-n}=\sup_{n\in\mathbb{N}}\nu(V_{n})-2^{-n}$
even if $\bigcap_{n\in\mathbb{N}}U_{n}=\bigcap_{n\in\mathbb{N}}V_{n}$.
However, for spaces satisfying the conditions of Theorem 3, the $\nu$-lower-
measure is indeed a property of the $G_{\delta}$ intersection. We prove this
directly for Polish (separable completely metrisable) spaces:
###### Theorem 34 (Lower-measurable sets in metric spaces).
If $\mathbb{X}$ is a separable complete metric space, $\nu$ a valuation on
$\mathbb{X}$, and
$\bigcap_{n\in\mathbb{N}}U_{n}=\bigcap_{n\in\mathbb{N}}V_{n}$ for $\nu$-lower-
Cauchy sequences $(U_{n}),(V_{n})$, then
$\sup_{n\in\mathbb{N}}\nu(U_{n})-2^{-n}=\sup_{n\in\mathbb{N}}\nu(V_{n})-2^{-n}$.
###### Proof.
Since $I_{\epsilon}(U)\subset\overline{I}_{\epsilon}(U)\subset I_{\delta}(U)$
when $\delta<\epsilon$, and since $\nu$ is continuous, there exist closed sets
$A_{n}\subset U_{n}$ such that $\nu(A_{n})\geq\nu(U_{n})-2^{-(n+1)}$. Let
$B_{n}=\bigcap_{m\geq n}A_{m}$, so $B_{n}$ is an increasing sequence of closed
sets such that $B_{n}\subset\bigcap_{m\geq n}U_{m}=U_{\infty}$ for all $n$, so
$\bigcup_{n\in\mathbb{N}}B_{n}\subset\bigcap_{n\in\mathbb{N}}U_{n}$. Further,
since $\nu(A_{n}\cap\cdots\cap A_{m})\geq\nu(U_{n}\cap\cdots\cap
U_{m})+\sum_{l=n}^{m}\bigl{(}\nu(A_{l})-\nu(U_{l})\bigr{)}\geq\nu(U_{m})-2^{-n}$
for all $m$, we have that
$\nu(B_{n})\geq\inf_{m\in\mathbb{N}}\nu(U_{m})-2^{-n}=\sup_{m\in\mathbb{N}}\nu(U_{m})-2^{-m}-2^{-n}$,
so
$\inf_{n\in\mathbb{N}}\nu(B_{n})+2^{-n}\geq\sup_{n\in\mathbb{N}}\nu(U_{n})-2^{-n}$,
and $\lim_{n\to\infty}\nu(B_{n})\geq\lim_{n\to\infty}\nu(U_{n})$. Since also
$B_{n}\subset\bigcap_{n\in\mathbb{N}}V_{n}$ for all $n$, we have
$\lim_{n\to\infty}\nu(U_{n})\leq\lim_{n\to\infty}\nu(B_{n})\leq\lim_{n\to\infty}\nu(V_{n})$,
so $\sup_{n\in\mathbb{N}}\nu(U_{n})-2^{-n}\leq\sup_{n\in\mathbb{N}}\nu(V_{n})-2^{-n}$.
The reverse inequality follows by symmetry. ∎
## 5 Computable Random Variables
In the standard approach to probability theory developed in classical
analysis, one defines random variables as measurable functions over a base
probability space. Given types $\mathbb{X}$ and $\mathbb{Y}$, a representation
of the Borel measurable functions $f$ from $\mathbb{X}$ to $\mathbb{Y}$ was
given in [Bra05], but this does not allow one to compute lower bounds for the
measure of $f^{-1}(V)$ for $V\in\mathcal{O}(\mathbb{Y})$.
A computable theory of random variables should, at a minimum, enable us to
perform certain basic operations, including:
1. (i)
Given a random variable $X$ and open set $U$, compute lower-approximation to
$\mathbb{P}(X\in U)$.
2. (ii)
Given random variables $X_{1},X_{2}$, compute the random variable $X_{1}\times
X_{2}$ giving the joint distribution.
3. (iii)
Given a random variable $X$ and a continuous function $f$, compute the image
$f(X)$.
4. (iv)
Given a sequence of random variables $X_{1},X_{2},\ldots$ converging
effectively in probability, compute a limit random variable
$X_{\infty}=\lim_{m\to\infty}X_{m}$.
5. (v)
Given a probability distribution $\nu$ on a sufficiently nice space
$\mathbb{X}$, compute a random variable $X$ with distribution $\nu$.
Property (i) states that we can compute the distribution of a random variable,
while property (ii) implies that a random variable is _more_ than its
distribution; it also allows us to compute its joint distribution with another
random variable. Property (iii) also implies that for random variables
$X_{1},X_{2}$ on a computable metric space $(\mathbb{X},d)$, the random
variable $d(X_{1},X_{2})$ is computable in $\mathbb{R}^{+}$, so the
probability $\mathbb{P}(d(X_{1},X_{2})<\epsilon)$ is computable in
$\mathbb{I}_{<}$, and $\mathbb{P}(d(X_{1},X_{2})\leq\epsilon)$ is computable
in $\mathbb{I}_{>}$. Property (iv) is a completeness property and allows
random variables to be approximated. Property (v) shows that random variables
can realise a given distribution. These properties are similar to those used
in [Ker08].
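On a finite base space the point of property (ii) can be seen concretely: preimages are exact, and the joint event probability is not determined by the two marginal distributions. A small Python sketch (the finite space and all names are illustrative assumptions):

```python
from fractions import Fraction as F

# A three-point base space Omega with its probabilities.
P = {0: F(1, 4), 1: F(1, 4), 2: F(1, 2)}
X = {0: 'a', 1: 'a', 2: 'b'}   # a random variable as a map Omega -> values
Y = {0: 0, 1: 1, 2: 1}

def preimage(Z, S):
    """Z^{-1}(S) as a subset of Omega."""
    return {w for w in P if Z[w] in S}

def prob(E):
    return sum(P[w] for w in E)

# The product random variable's preimage of {'a'} x {1} is the
# intersection of the individual preimages, as in Definition 45 below.
joint = prob(preimage(X, {'a'}) & preimage(Y, {1}))
marginals = prob(preimage(X, {'a'})) * prob(preimage(Y, {1}))
```

Here the joint probability $1/4$ differs from the product of marginals $3/8$: the joint distribution obtained via property (ii) carries correlation information that the two individual distributions alone do not.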
Ideally, one would like a representation of bounded measurable functions
$f:\mathbb{X}\rightarrow\mathbb{R}$ such that for _every_ finite measure $\mu$
on $\mathbb{X}$, the integral $\int_{\mathbb{X}}f(x)\,d\mu(x)$ is computable.
But then $f(y)=\int_{\mathbb{X}}f(x)\,d\delta_{y}(x)$ would be computable, so
$f$ would be continuous. Any effective approach to measurable functions and
integration must therefore take some information about the measure into
account.
We will consider random variables on a fixed probability space $(\Omega,P)$.
Since any probability distribution on a Polish space is equivalent to a
distribution on the standard Lebesgue-Rokhlin probability space [Roh52], it
is reasonable to take the base space to be the Cantor space
$\Sigma=\\{0,1\\}^{\omega}$ and $P$ the standard measure.
However, our treatment of random variables will require the notion of _lower-
measures_ , and a type of _lower-measurable sets_ , for which we can compute
the measure in $\mathbb{R}^{+}_{<}$.
### 5.1 Measurable functions and random variables
###### Definition 35 (Measurable function).
Let $\mathbb{W},\mathbb{X}$ be types and $\nu$ a finite measure on
$\mathbb{W}$. Then the type of _$\nu$ -measurable functions_ $f$ from
$\mathbb{W}$ to $\mathbb{X}$ is defined by continuous
$f^{-1}:\mathcal{O}(\mathbb{X})\to\mathcal{M_{<}}(\mathbb{W})$ satisfying
$f^{-1}(\emptyset)=_{\nu}\emptyset$, $f^{-1}(\mathbb{X})=_{\nu}\mathbb{W}$, $f^{-1}(U_{1}\cap
U_{2})=_{\nu}f^{-1}(U_{1})\cap f^{-1}(U_{2})$ and $f^{-1}(U_{1}\cup
U_{2})=_{\nu}f^{-1}(U_{1})\cup f^{-1}(U_{2})$.
We denote the $\nu$-measurable functions $f$ from $\mathbb{W}$ to $\mathbb{X}$
by $f:\mathbb{W}\rightsquigarrow_{\nu}\mathbb{X}$, or simply
$f:\mathbb{W}\rightsquigarrow\mathbb{X}$ if the measure $\nu$ is clear from
the context.
Measurable functions $f_{1}$ and $f_{2}$ are considered equal if
$f_{1}^{-1}(U)=_{\nu}f_{2}^{-1}(U)$ for all $U\in\mathcal{O}(\mathbb{X})$.
Note that since $f^{-1}$ is continuous, we must have
$f^{-1}(V_{1})\subset_{\nu}f^{-1}(V_{2})$ if $V_{1}\subset V_{2}$, and
$f^{-1}(\bigcup_{n=0}^{\infty}V_{n})=_{\nu}\bigcup_{n=0}^{\infty}f^{-1}(V_{n})$.
###### Remark 36.
We do not actually define $f$ as a _function_ $\mathbb{W}\to\mathbb{X}$ since
this would involve evaluating at points, though clearly any continuous
function $\mathbb{W}\to\mathbb{X}$ is measurable.
###### Definition 37 (Random variable).
Let $\Omega$ be a separable complete metric space used as a base space, $P$ a
probability measure on $\Omega$, and $\mathbb{X}$ a topological space. Then a
_random variable_ $X$ on $\mathbb{X}$ is a $P$-measurable function
$\Omega\rightsquigarrow\mathbb{X}$. We denote the type of random variables on
$\mathbb{X}$ by $\mathcal{R}(\mathbb{X})$ or
$\Omega\rightsquigarrow\mathbb{X}$.
We will sometimes write $\mathbb{P}(X\in U)$ as a shorthand for
$P(\\{\omega\in\Omega\mid X(\omega)\in U\\})$; for a piecewise-continuous
random variable $X$ we implicitly restrict $\omega$ to $\mathrm{dom}(X)$.
Just as for measurable functions, although a random variable $X$ is _defined_
relative to the underlying space $\Omega$, we cannot in general actually
_compute_ $X(\omega)$ in any meaningful sense for fixed $\omega\in\Omega$. The
expression $X(\omega)$ only makes sense for random variables _given_ as
(piecewise) continuous functions $\Omega\rightarrow\mathbb{X}$ as stated
below:
###### Definition 38 ((Piecewise)-continuous random variable).
A _continuous random variable_ on $(\Omega,P)$ with values in $\mathbb{X}$ is
a continuous function $X:\Omega\rightarrow\mathbb{X}$.
A _piecewise-continuous random variable_ is a continuous partial function
$X:\Omega\rightharpoonup\mathbb{X}$ such that
$\mathrm{dom}(X)\in\mathcal{O}(\Omega)$ and $P(\mathrm{dom}(X))=1$.
We use the terminology “piecewise-continuous” since
$X:\Omega\rightharpoonup\mathbb{X}$ may arise as the restriction of a
piecewise-continuous function to its continuity set.
###### Observation 39.
Given a piecewise-continuous random variable
$X:\Omega\rightharpoonup\mathbb{X}$, we can compute
$X^{-1}:\mathcal{O}(\mathbb{X})\to\mathcal{M}_{<}(\Omega)$, since $X^{-1}(U)$
is open for open $U$, and any open set is lower-measurable.
Clearly, for piecewise-continuous random variables $X_{1},X_{2}$, we have
$X_{1}=X_{2}$ if $P(\\{\omega\in\Omega\mid X_{1}(\omega)\neq
X_{2}(\omega)\\})=0$; in other words, $X_{1}$ and $X_{2}$ are _almost-surely
equal_.
By [Wei99, Theorem 2.2.4], machine-computable functions
$\\{0,1\\}^{\omega}\rightarrow\\{0,1\\}^{\omega}$ are defined on
$G_{\delta}$-subsets of $\\{0,1\\}^{\omega}$. Indeed, any function into a
metric space is continuous on a $G_{\delta}$ set of points. This makes
functions defined and continuous on a full-measure $G_{\delta}$-subset of
$\\{0,1\\}^{\omega}$ a natural class of random variables, where by a full-
measure $G_{\delta}$-set $W$, we require $P(U)=1$ whenever $W\subset U$ for
open $U$.
###### Definition 40 (Almost-surely continuous random variable).
An _almost-surely-continuous random variable_ on $(\Omega,P)$ with values in
$\mathbb{X}$ is a continuous partial function
$X:\Omega\rightharpoonup\mathbb{X}$ such that $\mathrm{dom}(X)$ is a
$G_{\delta}$ set and $P(\mathrm{dom}(X))=1$.
Not all measurable random variables are almost-surely continuous:
###### Example 41.
Define a strong Cauchy sequence $X_{n}$ of piecewise-continuous random
variables taking values in $\\{0,1\\}$ such that $X_{n}=1$ on a decreasing
sequence of closed sets $W_{n}$ of measure $(1+2^{-n})/2$ whose limit is a
Cantor set. Then $X_{\infty}=\lim_{n\to\infty}X_{n}$ is discontinuous on a set
of positive measure, so is not an almost-surely-continuous random variable.
It will often be useful to consider random variables taking finitely many
values:
###### Definition 42 (Simple random variable).
A random variable $X$ on $(\Omega,P)$ with values in $\mathbb{X}$ is _simple_ if
it takes finitely many values.
### 5.2 Properties of random variables
We now consider the properties (i)-(iv) that we wish our random variables to
have, and show that they are satisfied.
#### 5.2.1 Distribution
###### Definition 43 (Distribution of a measurable random variable).
For a measurable random variable $X$ over base space $(\Omega,\mathcal{P})$,
define its _distribution_ by
$\mathbb{P}(X\in U)=\mathcal{P}(X^{-1}(U)).$
From our definition, the probability distribution of a random variable is
trivially computable:
###### Observation 44.
Let $X$ be a random variable on $\mathbb{X}$. Then the distribution of $X$ is
computable.
#### 5.2.2 Products
###### Definition 45.
Suppose $\mathbb{X}_{1}$ and $\mathbb{X}_{2}$ are such that the product space
$\mathbb{X}_{1}\times\mathbb{X}_{2}$ is a sequential space. Then for
$X_{1}:\mathcal{R}(\mathbb{X}_{1})$ and $X_{2}:\mathcal{R}(\mathbb{X}_{2})$,
the product $X_{1}\times
X_{2}:\mathcal{R}(\mathbb{X}_{1}\times\mathbb{X}_{2})$ is defined by setting
$[X_{1}\times X_{2}]^{-1}(U_{1}\times U_{2})=X_{1}^{-1}(U_{1})\cap
X_{2}^{-1}(U_{2})$, and extending to arbitrary open sets by taking unions of
product open sets.
The product of two random variables is computable:
###### Theorem 46 (Computability of products).
Suppose $\mathbb{X}$ and $\mathbb{Y}$ are such that the product space
$\mathbb{X}\times\mathbb{Y}$ is sequential. Then for
$X:\mathcal{R}(\mathbb{X})$ and $Y:\mathcal{R}(\mathbb{Y})$, the
product $X\times Y:\mathcal{R}(\mathbb{X}\times\mathbb{Y})$ is computable.
###### Proof.
Since $\mathbb{X}\times\mathbb{Y}$ is a sequential space, the topology is
generated by sets of the form $U\times V$ for $U\in\mathcal{O}(\mathbb{X})$
and $V\in\mathcal{O}(\mathbb{Y})$. Then $(X\times Y)^{-1}(U\times
V)=X^{-1}(U)\cap Y^{-1}(V)$, which is computable in
$\mathcal{M_{<}}(\Omega,P)$ by Theorem 24. ∎
###### Lemma 47.
If $X_{1}$, $X_{2}$ are continuous random variables, then the product
$X_{1}\times X_{2}$ is the functional product
$(X_{1}\times X_{2})(\omega)=(X_{1}(\omega),X_{2}(\omega)).$
###### Proof.
$[X_{1}\times X_{2}]^{-1}(U_{1}\times U_{2})=X_{1}^{-1}(U_{1})\cap
X_{2}^{-1}(U_{2})=\\{\omega\in\Omega\mid X_{1}(\omega)\in
U_{1}\\}\cap\\{\omega\in\Omega\mid X_{2}(\omega)\in U_{2}\\}$. ∎
#### 5.2.3 Image
###### Definition 48.
The image of a random variable $X:\mathcal{R}(\mathbb{X})$ under a continuous
function $f:\mathbb{X}\to\mathbb{Y}$ is defined by
$[f(X)]^{-1}(V):=X^{-1}(f^{-1}(V))$.
###### Theorem 49 (Computability of images).
The image of a random variable under a continuous function is computable.
###### Proof.
$[f(X)]^{-1}(V)=X^{-1}(f^{-1}(V))$, which is computable since $f^{-1}(V)$ is
computable in $\mathcal{O}(\mathbb{X})$. ∎
###### Remark 50.
If $g:\mathbb{X}\rightsquigarrow\mathbb{Y}$ and
$f:\mathbb{Y}\rightsquigarrow\mathbb{Z}$ are measurable, then the composition
$f\circ g$ is not in general computable. For example, take $g_{c}:\Omega\to\mathbb{R}$ to be the
constant function $g_{c}(\omega)=c$, and
$f:\mathbb{R}\rightsquigarrow\\{0,1\\}$ the Heaviside function $f(x)=0$ if
$x\leq 0$ and $f(x)=1$ for $x>0$. Then for $c_{n}\searrow
c_{\infty}=0$ we have $P((f\circ g_{c_{n}})^{-1}(1))=P(\Omega)=1$, but
$P((f\circ g_{c_{\infty}})^{-1}(1))=P(\emptyset)=0$.
#### 5.2.4 Convergence
The convergence relation induced by the standard representation of a function
type is that of _pointwise-convergence_. For random variables (as measurable
functions), this means that $X_{n}\to X_{\infty}$ if, and only if, for all
open $U$, $X_{n}^{-1}(U)\to X_{\infty}^{-1}(U)$ in the type of
$\mathcal{P}$-lower-measurable sets. Explicitly:
###### Property 51.
A sequence of random variables $(X_{n})_{n\in\mathbb{N}}$ _converges_ to a
random variable $X_{\infty}$ if for all open $U$,
$\liminf_{n\to\infty}\mathbb{P}(X_{n}\in U\wedge X_{\infty}\in
U)\geq\mathbb{P}(X_{\infty}\in U)$.
The convergence is _effective_ if $\mathbb{P}(X_{n}\in U\wedge X_{\infty}\in
U)\geq\mathbb{P}(X_{\infty}\in U)-\varepsilon(U,n)$ for known
$\varepsilon(U,n)$, and _fast_ if $\mathbb{P}(X_{n}\in U\wedge X_{\infty}\in
U)\geq\mathbb{P}(X_{\infty}\in U)-2^{-n}$.
We also obtain computability of limits of effectively-converging Cauchy-like
sequences.
###### Definition 52.
A sequence of random variables $(X_{n})_{n\in\mathbb{N}}$ is a _fast Cauchy
sequence_ if for all open $U$, and all $m>n$, $\mathbb{P}(X_{m}\in U\wedge
X_{n}\in U)\geq\mathbb{P}(X_{n}\in U)-2^{-n}$.
###### Theorem 53.
If $(X_{n})$ is a fast Cauchy sequence of random variables, then
$\lim_{n\to\infty}X_{n}$ exists and is computable from $(X_{n})$.
###### Proof.
For every open $U\subset\mathbb{X}$, we need to compute $X_{\infty}^{-1}(U)$
as a $\mathcal{P}$-lower-measurable set. Define $W_{n}=X_{n}^{-1}(U)$, which
is a $\mathcal{P}$-lower-measurable set computable from $X_{n}$. Then for
$m>n$, $\mathcal{P}(W_{m}\cap W_{n})\geq\mathcal{P}(W_{n})-2^{-n}$, so
$(W_{n})$ forms a fast lower-Cauchy sequence on $\mathcal{P}$-lower-measurable
sets, and converges to $W_{\infty}$ with the correct properties by Theorem 26.
∎
### 5.3 Equality of random variables
We now show a result that two random variables are equal if, and only if,
their products with the identity random variable on the base space are equal.
###### Proposition 54.
Let $I$ be the identity random variable on $\Omega$. Then $X_{1}=X_{2}$ if, and
only if, $Y_{1}:=I\times X_{1}$ and $Y_{2}:=I\times X_{2}$ have the same
distribution.
###### Proof.
We need to show that for every open $U$,
$X_{1}^{-1}(U)=_{\mathcal{P}}X_{2}^{-1}(U)$, which holds if
$\mathcal{P}(X_{1}^{-1}(U)\cap
X_{2}^{-1}(U))=\mathcal{P}(X_{1}^{-1}(U))=\mathcal{P}(X_{2}^{-1}(U))$.
Computing a $\mathcal{P}$-lower-Cauchy sequence representing
$W_{i}=X_{i}^{-1}(U)$ yields open sets $W_{i,n}$ such that
$\mathcal{P}(W_{i,n}\cap W_{i})\geq\mathcal{P}(W_{i})-2^{-n}$. Then
$\mathbb{P}(X_{1}\in U\wedge X_{2}\in U)+2^{-n}=\mathcal{P}(W_{1}\cap
W_{2})+2^{-n}\geq\mathcal{P}(W_{1,n}\cap W_{2})=\mathbb{P}(I\in W_{1,n}\wedge
X_{2}\in U)=\mathbb{P}(Y_{2}\in W_{1,n}\times U)=\mathbb{P}(Y_{1}\in
W_{1,n}\times U)=\mathbb{P}(I\in W_{1,n}\wedge X_{1}\in
U)=\mathcal{P}(W_{1,n}\cap W_{1})=\mathcal{P}(W_{1})=\mathbb{P}(X_{1}\in U)$.
Since $n$ is arbitrary, $\mathbb{P}(X_{1}\in U\wedge X_{2}\in
U)\geq\mathbb{P}(X_{1}\in U)$. The result follows by symmetry. ∎
## 6 Random Variables in Metric Spaces
In this section, we consider random variables in metric spaces. We show that
an equivalent notion to our general random variables is given by completion in
the Fan metric.
### 6.1 Constructions in metric spaces
We first prove some generally-useful results on constructions of topological
partitions in computable metric spaces.
The following decomposition result is essentially a special case of the
effective Baire category theorem [YMT99, Bra01].
###### Lemma 55.
Let $X$ be an effectively separable computable metric space, and $\nu$ be a
valuation on $X$. Then given any $\epsilon>0$, we can compute a topological
partition $\mathcal{B}$ of $X$ such that $\mathrm{diam}(B)\leq\epsilon$ for
all $B\in\mathcal{B}$, and $\nu(X\setminus\bigcup\mathcal{B})=0$.
###### Proof.
For any $\epsilon>0$, any $\delta>0$, and any $x\in\mathbb{X}$,
$\\{r>0\mid\epsilon/2<r<\epsilon\,\wedge\,\nu(\overline{B}(x,r)\setminus
B(x,r))<\delta\\}$ is a computable open dense subset of
$[\epsilon/2,\epsilon]$. We can therefore construct a sequence of rationals
$q_{k}\in(\epsilon/2,\epsilon)$ such that $|q_{k}-q_{k+1}|<2^{-k-1}$ and
$\nu(\overline{B}(x,q_{k})\setminus B(x,q_{k}))<2^{-k}$. Then taking
$r_{\epsilon}(x)=\lim_{k\to\infty}q_{k}$ yields a radius
$r_{\epsilon}(x)\in[\epsilon/2,\epsilon]$ such that
$\nu(B(x,r_{\epsilon}(x)))=\nu(\overline{B}(x,r_{\epsilon}(x)))$.
Since $X$ is effectively separable, it has a computable dense sequence
$(x_{n})_{n\in\mathbb{N}}$. For $\epsilon>0$, the sets
$B(x_{n},r_{\epsilon}(x_{n}))$ have radius at least $\epsilon/2$, so cover
$X$. We take as topological partition the sets
$B(x_{n},r_{\epsilon}(x_{n}))\setminus\bigcup_{m=0}^{n-1}\overline{B}(x_{m},r_{\epsilon}(x_{m}))$
for $n\in\mathbb{N}$. ∎
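The heart of the construction of Lemma 55 is selecting a radius whose boundary sphere carries no mass. In the general case the bad radii are only avoided in the limit, but for an atomic valuation there are finitely many bad radii and an exact selection is possible. A Python sketch of this special case (the atomic model on the rational line and all names are our illustrative assumptions):

```python
from fractions import Fraction as F

def good_radius(atoms, x, eps):
    """Pick a radius r in (eps/2, eps) whose sphere {y : |y - x| = r}
    carries no mass of an atomic valuation on the rational line. The bad
    radii are the finitely many distances |a - x| for atoms a, so a scan
    of midpoints between consecutive bad values suffices."""
    bad = sorted({abs(a - x) for a in atoms})
    lo, hi = F(eps, 2), F(eps)
    cuts = [lo] + [b for b in bad if lo < b < hi] + [hi]
    for left, right in zip(cuts, cuts[1:]):
        mid = (left + right) / 2
        if mid not in bad:
            return mid

atoms = [F(0), F(1, 2), F(3, 4)]   # locations of the point masses
r = good_radius(atoms, F(0), 1)
```

With these atoms, the bad radii about $x=0$ are $0$, $1/2$ and $3/4$; the first midpoint $5/8$ of a gap inside $(1/2,1)$ is returned, so the balls $B(0,r)$ and $\overline{B}(0,r)$ have equal mass.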
###### Lemma 56.
Let $\mathbb{X}$ be an effectively separable computable metric space, and
$\nu$ a valuation on $\mathbb{X}$. Then for any $\epsilon>0$, we can construct
an open set $U:\mathcal{O}(\mathbb{X})$ and a function $r:U\to\mathbb{X}$ such that
$\nu(U)=1$, $r$ has finite range, and $d(r(x),x)<\epsilon$ for all $x\in U$.
###### Proof.
Let $\mathcal{B}$ be the topological partition computed from Lemma 55. For
every $B\in\mathcal{B}$, compute an element $x_{B}$ of $\mathbb{X}$ in $B$.
Define $r$ on $U=\bigcup\mathcal{B}$ by $r(x)=x_{B}$ for $x\in B$. ∎
### 6.2 The Fan metric
Let $\mathbb{X}$ be a computable metric space. For a closed set $A$, define
$d(x,A)=\inf\\{d(x,y)\mid y\in A\\}$ and
$\,\overline{\\!N}_{\epsilon}(A):=\\{x\in\mathbb{X}\mid
d(x,A)\leq\varepsilon\\}$. For an open set $U$ define
$I_{\varepsilon}(U):=\mathbb{X}\setminus(\,\overline{\\!N}_{\varepsilon}(\mathbb{X}\setminus
U))=\\{x\in U\mid\exists\delta>0,B(x,\varepsilon+\delta)\subset U\\}$. Since
$d(x,A)$ is computable in $\mathbb{R}^{+,\infty}_{<}$ by our definition of a computable
metric space, $\,\overline{\\!N}_{\epsilon}(A)$ is computable as a closed set,
so $I_{\varepsilon}(U)$ is computable as an open set. Note that
$I_{\varepsilon_{1}+\varepsilon_{2}}(U)\subset
I_{\varepsilon_{1}}(I_{\varepsilon_{2}}(U))$.
If $(\mathbb{X},d)$ is a complete metric space, then the _Fan metric_ is a
natural distance function on random variables:
###### Definition 57 (Fan metric).
$\displaystyle d_{F}(X,Y)$
$\displaystyle=\sup\\!\big{\\{}\varepsilon\in\mathbb{Q}^{+}\mid\
\mathcal{P}\big{(}\\{\omega\in\Omega\mid
d(X(\omega),Y(\omega))>\varepsilon\\}\big{)}>\varepsilon\big{\\}}$ (9)
$\displaystyle=\inf\\!\big{\\{}\varepsilon\in\mathbb{Q}^{+}\mid\
\mathcal{P}\big{(}\\{\omega\in\Omega\mid
d(X(\omega),Y(\omega))\geq\varepsilon\\}\big{)}<\varepsilon\big{\\}}.$
Given a computable metric
$d:\mathbb{X}\times\mathbb{X}\rightarrow\mathbb{R}^{+}$, the Fan metric on
continuous random variables is easily seen to be computable. The convergence
relation defined by the Fan metric corresponds to _convergence in probability_:
a sequence of random variables $X_{n}$ taking values in a metric space
converges in probability to a random variable $X_{\infty}$ if
$\mathcal{P}(d(X_{n},X_{\infty})>\epsilon)\to 0$ for all $\epsilon>0$. If the
metric $d$ on $\mathbb{X}$ is bounded, the distance $\textstyle
d_{1}(X,Y):=\int_{\Omega}d(X(\omega),Y(\omega))\,dP(\omega)$ is equivalent to the
Fan metric.
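For random variables on a finite base space the Fan metric can be evaluated exactly by locating the crossing point of the tail probability with the identity. The following Python sketch is an illustration only (not part of the development); it uses the equivalent characterisation $d_{F}=\inf\\{\varepsilon\geq 0\mid\mathcal{P}(d(X,Y)>\varepsilon)\leq\varepsilon\\}$, which agrees with the sup/inf formulas of Definition 57:

```python
def fan_metric(distances, weights):
    """Fan metric d_F(X, Y) for random variables on a finite base space:
    distances[i] = d(X(w_i), Y(w_i)), weights[i] = P(w_i).
    Computes inf{eps >= 0 : P(d(X, Y) > eps) <= eps}."""
    def tail(eps):
        return sum(w for d, w in zip(distances, weights) if d > eps)
    # The crossing point is either a jump of the tail function (a distance
    # value) or a value taken by the tail on one of its flat segments.
    candidates = {0.0} | set(distances) | {tail(d) for d in distances} | {tail(0.0)}
    return min(c for c in candidates if tail(c) <= c)
```

Since the tail function is non-increasing and right-continuous, the feasible set is a closed ray, so scanning the finitely many candidate values suffices.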
Recall that for topological spaces, a sequence of random variables $(X_{n})$
converges to $X_{\infty}$ if for all open $U$,
$\liminf_{n\to\infty}\mathcal{P}(X_{n}\in U\wedge X_{\infty}\in
U)\geq\mathcal{P}(X_{\infty}\in U)$, which corresponds to
$\mathcal{P}(X_{n}\in U\wedge X_{\infty}\in U)\to\mathcal{P}(X_{\infty}\in U)$
with convergence of probabilities being considered in $\mathbb{R}^{+}_{<}$.
###### Proposition 58.
The Fan metric is computable.
###### Proof.
The random variable $X\times Y$ is computable given $X,Y$. Then
$\mathcal{P}(\\{\omega\mid
d(X(\omega),Y(\omega))>\varepsilon\\})=\mathbb{P}\bigl{(}X\times Y\in
d^{-1}(\\{e\mid e>\varepsilon\\})\bigr{)}$ is computable in
$\mathbb{R}^{+}_{<}$, and $\mathcal{P}\bigl{(}\\{\omega\mid
d(X(\omega),Y(\omega))>\varepsilon\\}\bigr{)}>\varepsilon$ is verifiable, so
$d_{F}(X,Y)$ is computable in $\mathbb{R}^{+}_{<}$. Similarly,
$\mathcal{P}(\\{\omega\mid d(X(\omega),Y(\omega))\geq\varepsilon\\})$ is
computable in $\mathbb{R}^{+}_{>}$, so $d_{F}(X,Y)$ is computable in
$\mathbb{R}^{+}_{>}$. ∎
###### Theorem 59.
Suppose $(X_{n})_{n\in\mathbb{N}}$ is a fast Cauchy sequence of random
variables in the Fan metric. Then $X_{\infty}=\lim_{n\to\infty}X_{n}$ is a random
variable which is computable from the $X_{n}$.
The proof is based on the non-effective version of this result from [MW43].
###### Proof.
Given open $U$, we need to compute $X_{\infty}^{-1}(U)$. Let
$U_{k+1}=I_{2^{-k}}(U)$. Now if $x_{n}\in U_{n}$ and $d(x_{n},x_{m})<2^{-n}$
for $m>n$, then $x_{m}\in U_{m}$, since $I_{2^{-(n-1)}}(U)\subset
I_{2^{-n}}(I_{2^{-(m-1)}}(U))$; and since
$\mathbb{P}(d(X_{m},X_{n})>2^{-n})\leq 2^{-n}$, we have $\mathbb{P}(X_{m}\in
U_{m}\wedge X_{n}\in U_{n})\geq\mathbb{P}(X_{n}\in U_{n})-2^{-n}$. Let
$W_{n}(U)=X_{n}^{-1}(I_{2^{-(n-1)}}(U))$, so $\mathcal{P}(W_{m}\cap
W_{n})\geq\mathcal{P}(W_{n})-2^{-n}$. Then $W_{n}(U)$ is a fast lower-Cauchy
sequence of $\mathcal{P}$-lower-measurable sets, so converges effectively to
some $\mathcal{P}$-lower-measurable set $W_{\infty}(U)$. We define
$X_{\infty}^{-1}(U)=W_{\infty}(U)$, and note that $\mathcal{P}(W_{n}\cap
W_{\infty})\geq\mathcal{P}(W_{n})-2^{-n}$ for all $n$.
It is straightforward to check
$X_{\infty}^{-1}(U)=\lim_{n\to\infty}X_{n}^{-1}(U)$ in the class of
$\mathcal{P}$-lower-measurable sets. The modularity property of
$X_{\infty}^{-1}$ follows from the equations $\mathcal{P}(X_{n}^{-1}(U_{1}\cup
U_{2}))+\mathcal{P}(X_{n}^{-1}(U_{1}\cap
U_{2}))=\mathcal{P}(X_{n}^{-1}(U_{1}))+\mathcal{P}(X_{n}^{-1}(U_{2}))$ by
passing through the limit. ∎
A random variable can therefore be represented by a sequence
$(X_{0},X_{1},X_{2},\ldots)$ of random variables from some simpler class
satisfying $d(X_{m},X_{n})<2^{-\min(m,n)}$, and two such sequences
$(X_{1,n})$, $(X_{2,n})$ are equivalent (represent the same random variable) if
$d(X_{1,n},X_{2,n})\to 0$ as $n\to\infty$.
### 6.3 Representation
In this section, we prove two representation results on random variables in
metric spaces. We show that we can _construct_ a random variable with a given
_distribution_ , and that given any random variable, we can construct a
sequence of simple continuous random variables converging effectively to it.
These results depend on the base space $\Omega=\\{0,1\\}^{\omega}$ being
totally disconnected. Clearly, for a connected base space $\Omega$, such as
the interval $[0,1]$, any continuous random variable takes values in a
single component of $\mathbb{X}$, and any simple continuous random variable is
constant; but if $\mathbb{X}$ is contractible, then the continuous random
variables may still be dense.
The following result shows that random variables can be represented by a
sequence of simple random variables. It is a variant of [SS06b, Theorem 14]
and [HR09, Theorem 1.1.1], which show that any distribution is effectively
measurably isomorphic to a distribution on $\\{0,1\\}^{\omega}$; the proof
is similar.
###### Theorem 60.
Let $\mathbb{X}$ be a computable metric space, and $\nu$ be a valuation on
$\mathbb{X}$. Then we can construct a random variable $X$ on base space
$\Omega=\\{0,1\\}^{\omega}$ such that for any open $U$, $\mathbb{P}(X\in
U)=\nu(U)$. Further, $X$ can be constructed as the effective limit of a fast
Cauchy sequence of simple continuous random variables.
###### Proof.
For each $n$, use Lemma 55 to construct a countable topological partition
$\mathcal{B}_{n}$ such that each $B\in\mathcal{B}_{n}$ has diameter at most
$2^{-n}$, and $\sum_{B\in\mathcal{B}_{n}}\nu(\partial B)=0$. By taking
intersections if necessary, we can assume that each $\mathcal{B}_{n+1}$ is a
refinement of $\mathcal{B}_{n}$.
We now construct random variables $X_{n}$ as follows. Suppose we have
constructed cylinder sets $W_{n,m}\subset\\{0,1\\}^{\omega}$ such that
$P(W_{n,m})<\nu(B_{n,m})$ and $\sum_{m}P(W_{n,m})>1-2^{-n}$. Since $B_{n,m}$
is a union of open sets
$\\{B_{n+1,m,1},\ldots,B_{n+1,m,k}\\}\subset\mathcal{B}_{n+1}$, we can
effectively compute dyadic numbers $p_{n+1,m,k}$ such that
$p_{n+1,m,k}<\nu(B_{n+1,m,k})$ and $\sum_{k}p_{n+1,m,k}\geq P(W_{n,m})$. We
then partition $W_{n,m}$ into cylinder sets $W_{n+1,m,k}$ each of measure at
most $p_{n+1,m,k}$. For each $n,m,k$ we construct a point $x_{n+1,m,k}\in
B_{n+1,m,k}$, and take $X_{n+1}$ to map $W_{n+1,m,k}$ to $x_{n+1,m,k}$. It is
clear that $(X_{n})$ is a strongly-convergent Cauchy sequence, so is a
representation of a measurable random variable $X_{\infty}$.
It remains to show that $\mathbb{P}(X_{\infty}\in U)=\nu(U)$ for all
$U\in\mathcal{O}(\mathbb{X})$. This follows since for given $n$ we have
$\mathbb{P}(X_{n}\in U)>\nu(I_{2^{1-n}}(U))-2^{-n}\nearrow\nu(U)$ as
$n\to\infty$. ∎
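The construction of Theorem 60 can be made concrete in the special case of a distribution with finite support and dyadic weights: cylinder sets of $\Omega=\\{0,1\\}^{\omega}$ are assigned to outcomes, which amounts to reading $\omega$ as a binary real and applying the inverse CDF. The following Python sketch illustrates only this toy case, not the general construction:

```python
from fractions import Fraction

def sample_from_bits(probs, bits):
    """Realise a finite distribution with dyadic weights as a random
    variable on {0,1}^omega: read the bits as the binary real 0.b1b2...,
    and return the index of the dyadic CDF-interval containing it.
    Each outcome's preimage is then a finite union of cylinder sets."""
    omega = sum(Fraction(b, 2 ** (k + 1)) for k, b in enumerate(bits))
    cum = Fraction(0)
    for i, p in enumerate(probs[:-1]):
        cum += p
        if omega < cum:
            return i
    return len(probs) - 1

probs = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
```

Since the weights here have denominator $4$, two bits of $\omega$ already determine the outcome, and the pushforward of the uniform measure on $\\{0,1\\}^{\omega}$ is exactly the given distribution.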
###### Theorem 61.
Let $\mathbb{X}$ be a computable metric space and $X:\mathcal{R}(\mathbb{X})$
a random variable. Then one can construct a fast Cauchy sequence of simple
continuous random variables $X_{n}$ such that
$\lim_{n\to\infty}X_{n}=X$.
Although one could prove this directly, there is a simple proof based on
Theorem 60 and Proposition 54:
###### Proof.
Let $I$ be the identity random variable on $\Omega$, and let $Y=I\times
X:\Omega\rightsquigarrow\Omega\times\mathbb{X}$. By Theorem 60, let
$Y_{n}=I_{n}\times X_{n}$ be a sequence of simple random variables with limit
$Y_{\infty}=I_{\infty}\times X_{\infty}$ such that
$\mathbb{P}[Y_{\infty}]=\mathbb{P}[Y]$. Then $Y_{\infty}=Y$ by Proposition 54,
so $X_{\infty}=\lim_{n\to\infty}X_{n}$ as required. ∎
###### Corollary 62.
If $\mathbb{X}$ is a computable metric space, then the representation of a
random variable $X$ by its preimage
$X^{-1}:\mathcal{O}(\mathbb{X})\to\mathcal{M}_{<}(\Omega,\mathcal{P})$ is
equivalent to the representation by fast Cauchy sequences of simple
(piecewise-)continuous random variables.
###### Proof.
Given a representation of $X$, we can compute a fast Cauchy sequence of
(piecewise-)continuous random variables by Theorem 61. Conversely, given a
fast Cauchy sequence of piecewise-continuous random variables $X_{n}$, we can
compute $X_{n}$ as a measurable random variable by Observation 39, and the
limit by Theorem 59. ∎
### 6.4 Expectation
For bounded random variables taking values in the reals, the expectation is
defined in the usual way:
###### Definition 63 (Expectation).
If $X:\Omega\rightsquigarrow\mathbb{R}$ is an effectively bounded real-valued
random variable, the _expectation_ of $X$ is given by the integral
$\textstyle\mathbb{E}(X)=\int_{\mathbb{R}}x\,d\mathbb{P}[X],$
where $\mathbb{P}[X]$ is the valuation on $\mathbb{R}$ induced by $X$, i.e.
$\mathbb{P}[X](U)=\mathbb{P}(X\in U)=\mathcal{P}(X^{-1}(U))$, and the integral
is given by Definition 7.
The expectation of possibly unbounded real-valued random variables is not
continuous in the weak topology; for example, we can define continuous random
variables $X_{n}$ taking value $2^{n}$ on a subset of $\Omega$ of measure
$2^{-n}$, so that $X_{n}\to 0$ but $\mathbb{E}(X_{n})=1$ for all $n$. For this
reason, we need a new type of _integrable random variables_.
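The failure of continuity can be checked concretely: with $X_{n}$ equal to $2^{n}$ on a set of measure $2^{-n}$ and $0$ elsewhere, the Fan distance to the zero random variable is $\min(2^{-n},2^{n})=2^{-n}$, yet the expectation stays at $1$. A quick Python check (illustration only, using the characterisation $d_{F}(X,0)=\inf\\{\varepsilon\mid\mathcal{P}(X>\varepsilon)\leq\varepsilon\\}$):

```python
def fan_dist_to_zero(value, prob):
    """d_F(X, 0) for X taking `value` > 0 with probability `prob`, else 0,
    where d_F(X, 0) = inf{eps : P(X > eps) <= eps}: the tail probability is
    `prob` for eps < value and 0 afterwards, so the crossing point of the
    tail with the identity is min(prob, value)."""
    return min(prob, value)

for n in range(1, 20):
    value, prob = 2.0 ** n, 2.0 ** (-n)
    assert fan_dist_to_zero(value, prob) == prob  # X_n -> 0 in probability
    assert value * prob == 1.0                    # but E(X_n) = 1 for all n
```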
###### Definition 64 (Integrable random variable).
Let $(\mathbb{X},d)$ be a metric space with distinguished element $z$ (e.g. the
zero of a normed space), and let $Z$ be the constant random variable
$Z(\omega)=z$. Let $d_{1}$ be the (possibly infinite-valued) distance function
$d_{1}(X,Y)=\int_{\Omega}d(X(\omega),Y(\omega))dP(\omega).$ (10)
The type of _integrable random variables_ $L^{1}(\mathbb{X})$ is the
completion of the set of all effectively bounded random variables $X$ such
that $d_{1}(X,Z)<\infty$. Then $d_{1}$ is a metric on $L^{1}(\mathbb{X})$.
If $d$ is a bounded metric, then this metric is equivalent to the Fan metric.
For continuous and integrable real-valued random variables, the expectation is
also given by an integral over the base space $\Omega$.
###### Proposition 65 (Expectation).
1. (i)
If $X:\Omega\rightarrow\mathbb{R}$ is a continuous real-valued random
variable, then the expectation of $X$ is given by the integral
$\textstyle\mathbb{E}(X)=\int_{\Omega}X(\omega)\,d\mathcal{P}(\omega),$
which always exists since the image $X(\Omega)$ is compact.
2. (ii)
If $X:\Omega\rightsquigarrow\mathbb{R}$ is an integrable real-valued random
variable, and $X$ is presented as $\lim_{n\to\infty}X_{n}$ for some sequence
of continuous random variables satisfying
$\mathbb{E}[|X_{n_{1}}-X_{n_{2}}|]\leq 2^{-\min(n_{1},n_{2})}$, then
$\bigl{(}\mathbb{E}(X_{n})\bigr{)}_{n\in\mathbb{N}}$ is an effective Cauchy
sequence, and
$\textstyle\mathbb{E}(X)=\lim_{n\to\infty}\mathbb{E}(X_{n}).$
###### Proof.
1. (i)
It suffices to consider the case $X\geq 0$. Then both
$\int_{\mathbb{R}}xd\mathbb{P}[X]$ and
$\int_{\Omega}X(\omega)d\mathcal{P}(\omega)$ yield Choquet integral sums of
the form $\sum_{m=1}^{n}(p_{m}-p_{m-1})\mathcal{P}(X^{-1}(p_{m},\infty))$.
2. (ii)
The expectation is continuous. ∎
We can effectivise Lebesgue spaces $\mathcal{L}^{p}$ of integrable random
variables through the use of effective Cauchy sequences in the natural way: If
$(\mathbb{X},|\cdot|)$ is a normed space, then the type of $p$-integrable
random variables with values in $\mathbb{X}$ is the effective completion of
the type of $p$-integrable continuous random variables under the metric
$d_{p}(X,Y)=||X-Y||_{p}$ induced by the norm
$\textstyle||X||_{p}=\left(\int_{\Omega}|X(\omega)|^{p}\,dP(\omega)\right)^{1/p}=\bigl{(}\mathbb{E}(|X|^{p})\bigr{)}^{1/p}.$
(11)
We can easily prove the Cauchy-Schwarz and triangle inequalities for
measurable random variables
$||XY||_{pq/(p+q)}\leq||X||_{p}\cdot||Y||_{q}\quad\text{and}\quad||X+Y||_{p}\leq||X||_{p}+||Y||_{p}\,.$
The following result relates the expectation of a random variable to an
integration over its valuation. An analogous result in a different setting is
given in [SS06b, Theorem 15].
###### Theorem 66 (Expectation).
Let $X$ be a positive real-valued random variable such that
$\mathbb{E}(X)<\infty$. Then
$\textstyle\mathbb{E}(X)=\int_{0}^{\infty}\mathbb{P}(X>x)dx=\int_{0}^{\infty}\mathbb{P}(X\geq
x)dx.$
Note that the first integral is computable in $\mathbb{R}^{+}_{<}$, but the
second integral is in general uncomputable in $\mathbb{R}^{+}_{>}$, due to the
need to take the limit as the upper bound of the integral goes to infinity.
However, the second integral may be computable if the tail is bounded, for
example, if $X$ takes bounded values. The proof follows from the definition of
the lower integral:
###### Proof.
First assume $X$ is a continuous random variable, so by definition,
$\mathbb{E}(X)=\int_{\Omega}X(\omega)\,dP(\omega)$.
The definition of the lower horizontal integral gives
$\int_{\Omega}X(\omega)\,dP(\omega)\geq\sum_{i=1}^{n}(x_{i}-x_{i-1})P(\\{\omega\mid
X(\omega)>x_{i}\\})$ for all values $0=x_{0}<x_{1}<\cdots<x_{n}$. Take
$x_{i}-x_{i-1}<\epsilon$ for all $i$. Then
$\mathbb{E}(X)+\epsilon=\int_{\Omega}\bigl{(}X(\omega)+\epsilon\bigr{)}\,dP(\omega)\geq\sum_{i=1}^{n}(x_{i}-x_{i-1})P(\\{\omega\mid
X(\omega)+\epsilon>x_{i}\\})=\sum_{i=1}^{n}\int_{x_{i-1}}^{x_{i}}\mathbb{P}(X(\omega)>x_{i}-\epsilon)\,dx\geq\sum_{i=1}^{n}\int_{x_{i-1}}^{x_{i}}\mathbb{P}(X(\omega)>x_{i-1})\,dx\geq\sum_{i=1}^{n}\int_{x_{i-1}}^{x_{i}}\mathbb{P}(X(\omega)>x)\,dx=\int_{0}^{x_{n}}\mathbb{P}(X(\omega)>x)\,dx$.
Taking $x_{n}\to\infty$ gives
$\mathbb{E}(X)\geq\int_{0}^{\infty}\mathbb{P}(X>x)dx-\epsilon$, and since
$\epsilon$ is arbitrary,
$\mathbb{E}(X)\geq\int_{0}^{\infty}\mathbb{P}(X>x)dx$.
The definition of the lower horizontal integral gives that for all
$\epsilon>0$, there exist $0=x_{0}<x_{1}<\cdots<x_{n}$ such that
$\int_{\Omega}X(\omega)\,dP(\omega)\leq\sum_{i=1}^{n}(x_{i}-x_{i-1})P(\\{\omega\mid
X(\omega)>x_{i}\\})+\epsilon$. By refining the partition if necessary, we can
assume $x_{i}-x_{i-1}<\epsilon$ for all $i$. Then
$\mathbb{E}(X)-\epsilon\leq\sum_{i=1}^{n}(x_{i}-x_{i-1})P(\\{\omega\mid
X(\omega)>x_{i}\\})=\sum_{i=1}^{n}\int_{x_{i-1}}^{x_{i}}P(\\{\omega\mid
X(\omega)>x_{i}\\})\,dx\leq\sum_{i=1}^{n}\int_{x_{i-1}}^{x_{i}}P(\\{\omega\mid
X(\omega)>x\\})\,dx=\int_{0}^{x_{n}}P(\\{\omega\mid
X(\omega)>x\\})\,dx\leq\int_{0}^{\infty}P(\\{\omega\mid X(\omega)>x\\})\,dx$. Hence
$\mathbb{E}(X)\leq\int_{0}^{\infty}\mathbb{P}(X>x)dx+\epsilon$, and since
$\epsilon$ is arbitrary,
$\mathbb{E}(X)\leq\int_{0}^{\infty}\mathbb{P}(X>x)dx$.
The case of measurable random variables follows by taking limits.
We show $\int_{0}^{\infty}\mathbb{P}(X\geq x)dx=\mathbb{E}(X)$ since
$\int_{0}^{\infty}\mathbb{P}(X>x)dx\leq\int_{0}^{\infty}\mathbb{P}(X\geq
x)dx\leq\int_{0}^{\infty}\mathbb{P}(X+\epsilon>x)dx=\epsilon+\int_{0}^{\infty}\mathbb{P}(X>
x)dx$ for any $\epsilon>0$. ∎
By changing variables in the integral, we obtain:
###### Corollary 67.
If $X$ is a real-valued random variable, then for any $\alpha\geq 1$,
$\textstyle\mathbb{E}(|X|^{\alpha})=\int_{0}^{\infty}\alpha\,x^{\alpha-1}\,\mathbb{P}(|X|>x)dx=\int_{0}^{\infty}\alpha
x^{\alpha-1}\mathbb{P}(|X|\geq x)dx.$
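For a nonnegative random variable with finite support, both sides of these identities can be evaluated exactly, giving a quick sanity check of Theorem 66 and Corollary 67. The following Python sketch is illustrative only (the helper name is ours); it uses the fact that $\mathbb{P}(X>x)$ is a step function:

```python
from fractions import Fraction

def layer_cake_moment(values, probs, alpha=1):
    """E(X^alpha) = int_0^inf alpha * x^(alpha-1) * P(X > x) dx, evaluated
    exactly for a nonnegative discrete random variable: the tail P(X > x)
    is constant on each segment between support points, and
    int_lo^hi alpha * x^(alpha-1) dx = hi^alpha - lo^alpha."""
    pts = sorted(set(values) | {Fraction(0)})
    total = Fraction(0)
    for lo, hi in zip(pts, pts[1:]):
        tail = sum(p for v, p in zip(values, probs) if v > lo)
        total += (hi ** alpha - lo ** alpha) * tail
    return total

values = [Fraction(1), Fraction(2), Fraction(3)]
probs = [Fraction(1, 3)] * 3
```

For the uniform distribution on $\\{1,2,3\\}$ this reproduces $\mathbb{E}(X)=2$ and $\mathbb{E}(X^{2})=14/3$, matching the direct sums $\sum_{i}v_{i}p_{i}$ and $\sum_{i}v_{i}^{2}p_{i}$.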
###### Remark 68 (Expectation of a distribution).
Theorem 66 shows that the expectation of a random variable depends only on its
_distribution_. Indeed, we can define the expectation of a probability
_valuation_ $\pi$ on $[0,\infty[$ by
$\textstyle\mathbb{E}(\pi)=\int_{0}^{\infty}\pi(\,]x,\infty[\,)dx=\int_{0}^{\infty}\pi(\,[x,\infty[\,)dx.$
If $f:\mathbb{X}\rightarrow\mathbb{R}^{+}_{<}$, then we can compute the _lower
expectation_ of $f(X)$ by
$\textstyle\mathbb{E}_{<}(f(X)):=\int_{0}^{\infty}\mathbb{P}\bigl{(}X\in
f^{-1}(\,]\lambda,\infty[\,)\bigr{)}d\lambda.$ (12)
We have an effective version of the classical dominated convergence theorem.
###### Theorem 69 (Dominated convergence).
Suppose $X_{n}\to X$ weakly, and there is an integrable function
$Y:\Omega\rightarrow\mathbb{R}$ such that $|X_{n}|\leq Y$ for all $n$ (i.e.
$\mathbb{P}(Y-|X_{n}|\geq 0)=1$) and that $\mathbb{E}|Y|<\infty$. Then $X_{n}$
converges effectively under the metric (10). In particular, the limit of
$\mathbb{E}(X_{n})$ always exists.
###### Proof.
Since $\mathbb{E}(Y)<\infty$, the probabilities $\mathbb{P}(Y\geq y)\to 0$ as
$y\to\infty$. For fixed $\epsilon>0$, let
$b(\epsilon)=\sup\\{y\mid\mathbb{P}(Y\geq y)\geq\epsilon\\}$, which is computable
in $\mathbb{R}_{>}$ given $\epsilon$. Then $\sup\\{\int_{A}YdP\mid
P(A)\leq\epsilon\\}\leq\int_{b(\epsilon)}^{\infty}\mathbb{P}(Y\geq
y)dy=\mathbb{E}(Y)-\int_{0}^{b(\epsilon)}\mathbb{P}(Y>y)dy$ in
$\mathbb{R}^{+}_{>}$. For continuous random variables $X_{m}$, $X_{n}$ with
$2^{-m},2^{-n}<\epsilon$, taking $A_{\epsilon}=\\{\omega\mid
d(X_{m}(\omega),X_{n}(\omega))\geq\epsilon\\}$ gives
$\mathbb{E}(|X_{m}-X_{n}|)\leq\epsilon+\int_{A_{\epsilon}}|X_{m}(\omega)-X_{n}(\omega)|\,dP(\omega)\leq\epsilon+\int_{A_{\epsilon}}|X_{m}(\omega)|+|X_{n}(\omega)|dP(\omega)\leq\epsilon+\int_{A_{\epsilon}}2|Y|dP\leq\epsilon+2\int_{b(\epsilon)}^{\infty}\mathbb{P}(Y\geq
y)dy,$ which converges effectively to $0$ as $\epsilon\to 0$. ∎
## 7 Conditioning
The concept of conditional random variable is subtle even in classical
probability theory. The basic idea is that if we condition a random quantity
$Y$ on some information of kind $\mathcal{X}$, then we can reconstruct $Y$
given a value $x$ describable by $\mathcal{X}$. Classically, conditional
_random variables_ are _not_ defined, but conditional distributions and
expectations are. Conditional expectations can be shown to exist using the
Radon-Nikodym derivative, but this is uncomputable [HRW11].
### 7.1 Independence
In the classical case, we condition relative to a sub-sigma-algebra of the
measure space. In the computable case, it makes sense to consider instead a
sub-topology $\mathcal{T}$ on $\Omega$. We first need to define concepts of
$\mathcal{T}$-measurability and $\mathcal{T}$-independence.
###### Definition 70 (Measure-topologies).
Let $\nu$ be a valuation on $\mathbb{X}$. A _$\nu$-topology_ is a collection
of $\nu$-lower-measurable sets which contains $\emptyset,\mathbb{X}$ and is
closed under intersection and countable union.
The $P$-topology _generated_ by a random variable
$X:\Omega\rightsquigarrow\mathbb{X}$ is simply $\\{X^{-1}(U)\mid
U\in\mathcal{O}(\mathbb{X})\\}$. A random variable $X$ is
$\mathcal{T}$-measurable if $X^{-1}(U)\in\mathcal{T}$ for all
$U\in\mathcal{O}(\mathbb{X})$.
We write $\mathcal{R}_{\mathcal{T}}(\mathbb{X})$ for the type of
$\mathcal{T}$-measurable random variables with values in $\mathbb{X}$.
Note that a $\nu$-topology is not a topology on $\mathbb{X}$ in the standard
sense, since it consists of equivalence-classes of subsets of $\mathbb{X}$,
rather than sets themselves.
Recall that classically, we say random variables $X_{1},X_{2}$ taking values
in $\mathbb{X}_{1}$, $\mathbb{X}_{2}$ are independent if for all open
$U_{1}\subset\mathbb{X}_{1}$ and $U_{2}\subset\mathbb{X}_{2}$, we have
$\mathbb{P}(X_{1}\in U_{1}\wedge X_{2}\in U_{2})=\mathbb{P}(X_{1}\in
U_{1})\cdot\mathbb{P}(X_{2}\in U_{2}).$ This classical definition does not
relate well with computability theory, as the following example shows:
###### Example 71.
Consider the result $X$ of throwing a $6$-sided die, and the random variables
$X_{\mathrm{even}}$, which is $1$ if $X$ is even and $0$ otherwise, and
$X_{\mathrm{high}}$, which is $1$ if $X$ is a 5 or 6. Then $X_{\mathrm{even}}$
and $X_{\mathrm{high}}$ are independent for a fair die, but not if the
probability of a 6 is $\tfrac{1}{6}+5\epsilon$ and of each of 1 to 5 is
$\tfrac{1}{6}-\epsilon$ for $\epsilon\neq 0$.
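This can be checked by a direct computation with exact rational arithmetic (an illustration; the bias $\epsilon=1/100$ is chosen arbitrarily):

```python
from fractions import Fraction

def independent(probs):
    """probs[f] = P(die shows face f + 1).  Checks whether the events
    'even' = {2, 4, 6} and 'high' = {5, 6} are independent."""
    p_even = probs[1] + probs[3] + probs[5]
    p_high = probs[4] + probs[5]
    p_both = probs[5]                # 6 is the only face that is even and high
    return p_both == p_even * p_high

fair = [Fraction(1, 6)] * 6
eps = Fraction(1, 100)
biased = [Fraction(1, 6) - eps] * 5 + [Fraction(1, 6) + 5 * eps]
```

Here $P(\mathrm{both})=\tfrac16+5\epsilon$ while $P(\mathrm{even})P(\mathrm{high})=(\tfrac12+3\epsilon)(\tfrac13+4\epsilon)=\tfrac16+3\epsilon+12\epsilon^{2}$, which agree only for $\epsilon=0$ (or the degenerate $\epsilon=\tfrac16$).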
Since independence can be destroyed by arbitrarily small perturbations of the
distribution, it is not verifiable from approximations; it is therefore useful
to consider different versions of independence properties.
###### Definition 72 (Independence).
$P$-topologies $\mathcal{T}_{1,2}$ on $\Omega$ are _independent_ if
$P(U_{1}\cap U_{2})=P(U_{1})P(U_{2})$ for all $U_{1}\in\mathcal{T}_{1}$,
$U_{2}\in\mathcal{T}_{2}$.
$P$-topologies $\mathcal{T}_{1,2}$ are _strongly independent_ if we can write
$\Omega=\Omega_{1}\times\Omega_{2}$, with projections $p_{1,2}$ and inclusion
$q:\Omega_{1}\times\Omega_{2}\to\Omega$ such that
$q(p_{1}(U_{1}),\omega_{2})=U_{1}$, and for all $U_{i}\in\mathcal{T}_{i}$ there
exists $V_{i}\subset\Omega_{i}$ such that $U_{i}=p_{i}^{-1}(V_{i})$ for
$i=1,2$.
Random variables $X_{1,2}:\Omega\rightsquigarrow\mathbb{X}_{1,2}$ are
_effectively independent_ if
$X_{1}:\mathcal{R}_{\mathcal{T}_{1}}(\mathbb{X}_{1})$ and
$X_{2}:\mathcal{R}_{\mathcal{T}_{2}}(\mathbb{X}_{2})$ for independent
topologies $\mathcal{T}_{1,2}$.
Random variables $X_{1},\ldots,X_{k}$ are _jointly_ independent of
$\mathcal{T}$ if the product $\prod_{i=1}^{k}X_{i}$ is independent of
$\mathcal{T}$, and $X_{1},X_{2},\ldots$ are jointly independent of
$\mathcal{T}$ if every finite product is independent of $\mathcal{T}$.
A random variable $X$ is _effectively independent_ of a topology $\mathcal{T}$
on $\Omega$ if $X:\mathcal{R}_{\mathcal{X}}(\mathbb{X})$ for some topology
$\mathcal{X}$ independent of $\mathcal{T}$.
We can express independence relative to sub-topologies using the identity
random variable $I:\Omega\rightarrow\Omega$. If
$X:\Omega\rightsquigarrow\mathbb{X}$ is independent of $\mathcal{T}$, then
$\mathbb{P}(X\times I\in U\times W)=\mathbb{P}(X\in U)P(W)$ whenever
$U\in\mathcal{O}(\mathbb{X})$ and $W\in\mathcal{T}$. We write
$\mathcal{R}_{\perp\mathcal{T}}(\mathbb{X})$ for the type of
$\mathcal{T}$-independent random variables with values in $\mathbb{X}$. Note that
$\mathcal{R}_{\perp\mathcal{T}}(\mathbb{X})$ does not form a natural type,
since it is possible for $X_{1}$, $X_{2}$ to be independent of $\mathcal{T}$,
but $X_{1}\times X_{2}$ not to be. If $X$ is effectively independent of
$\mathcal{T}$, and $Y$ is $\mathcal{T}$-measurable, then $X$ is effectively
independent of $Y$.
### 7.2 Conditional Random Variables
We now proceed to our notion of conditional random variable $Y|\mathcal{X}$,
where $\mathcal{X}$ is a measure-topology on $\Omega$. Recall that
classically, the conditional expectation $\mathbb{E}(Y|\mathcal{X})$ for a
random variable $Y$ is a $\mathcal{X}$-measurable random variable.
###### Definition 73 (Conditional random variable).
Let $\mathcal{X}$ be a measure-topology. A $\mathcal{X}$-independent
conditional random variable is a function
$Y|:\mathbb{X}\to\mathcal{R}(\mathbb{Y})$ such that the $Y|x$ are jointly-
independent of $\mathcal{X}$.
If $Y|:\mathbb{X}\to\mathcal{R}(\mathbb{Y})$ is $\mathcal{X}$-independent, and
$X:\mathcal{R}(\mathbb{X})$ is simple and $\mathcal{X}$-measurable, then we
can define the _joint random variable_ $X\rtimes Y|$:
###### Definition 74 (Joint random variable).
The _joint random variable_ $X\rtimes Y|$ of a $\mathcal{X}$-measurable
simple random variable $X:\mathcal{R}(\mathbb{X})$ and a
$\mathcal{X}$-independent conditional random variable
$Y|:\mathbb{X}\to\mathcal{R}(\mathbb{Y})$ is defined by
$\textstyle(X\rtimes Y|)^{-1}(U\times V)=\bigcup_{x_{i}\in
U}X^{-1}(x_{i})\cap(Y|x_{i})^{-1}(V).$ (13)
The _joint random variable_ of a $\mathcal{X}$-measurable random variable
$X:\mathcal{R}(\mathbb{X})$ and a $\mathcal{X}$-independent conditional random
variable $Y|:\mathbb{X}\to\mathcal{R}(\mathbb{Y})$ is defined to be
$\lim_{n\to\infty}X_{n}\rtimes Y|$, where $X_{n}$ is a sequence of
$\mathcal{X}$-measurable simple random variables converging to $X$.
Note that if $X$ is a simple random variable,
$\textstyle\mathcal{P}\bigl{(}\bigcup_{x_{i}\in
U}X^{-1}(x_{i})\cap(Y|x_{i})^{-1}(V)\bigr{)}=\sum_{x_{i}\in
U}\mathbb{P}(X=x_{i})\times\mathbb{P}(Y|x_{i}\in V).$
To show Definition 74 makes sense in the general case, we need to show that it
is independent of the sequence of simple random variables used to specify $X$.
###### Lemma 75.
Let $X$ be a $\mathcal{X}$-measurable random variable in a metric space
$\mathbb{X}$. Then we can construct a sequence of $\mathcal{X}$-measurable
simple continuous random variables converging effectively to $X$.
###### Proof.
Let $r_{n}:U_{n}\to\mathbb{X}$ be a finite-valued map defined on an open set
$U_{n}$ with $\mathbb{P}(X\in U_{n})=1$ such that $d(r_{n}(x),x)<2^{-n}$ for
all $x\in U_{n}$, as guaranteed by Lemma 56. Take $X_{n}=r_{n+1}\circ X$, which
is $\mathcal{X}$-measurable since $X$ is. Then $(X_{n})$ is a sequence of
random variables with $d(X_{n},X)\leq 2^{-(n+1)}$, and $d(X_{n_{1}},X_{n_{2}})\leq
2^{-\min(n_{1},n_{2})}$ as required. ∎
###### Theorem 76.
The joint random variable of a $\mathcal{X}$-measurable random variable
$X:\mathcal{R}(\mathbb{X})$ and a $\mathcal{X}$-independent conditional random
variable $Y|:\mathbb{X}\to\mathcal{R}(\mathbb{Y})$ is independent of the
sequence of simple approximations $X_{n}$ to $X$ used in the definition, so is
computable.
###### Proof.
Define the continuity sets
$C_{\delta,\epsilon}=\\{x\in\mathbb{X}\mid\forall\tilde{x}\in\mathbb{X},\
d(x,\tilde{x})\leq\delta\implies d_{F}(Y|x,Y|\tilde{x})<\epsilon\\}.$
Note that every $C_{\delta,\epsilon}$ is open, and for any fixed $\epsilon$,
continuity of $Y|:\mathbb{X}\rightarrow\mathcal{R}(\mathbb{Y})$ implies
$\bigcup_{\delta>0}C_{\delta,\epsilon}=\mathbb{X}$. Hence for fixed $\epsilon$
and some $\delta<\epsilon/8$ sufficiently small, $\mathbb{P}(X\in
C_{\delta,\epsilon/4})>1-\epsilon/4$.
Now suppose that $X_{1},X_{2}$ are simple $\mathcal{X}$-measurable random
variables such that $\mathbb{P}(d(X,X_{i})<\delta)>1-\epsilon/8$. Let
$U=\\{(x,x_{1},x_{2})\mid x\in C_{\delta,\epsilon/4}\wedge
d(x,x_{1})<\delta\wedge d(x,x_{2})<\delta\\},$
and
$V=\\{(x_{1},x_{2})\mid d_{F}(Y|x_{1},Y|x_{2})<\epsilon/2\\}.$
Then $(x,x_{1},x_{2})\in U\implies(x_{1},x_{2})\in V$, so
$\mathbb{P}(X_{1}\\!\times\\!X_{2}\in
V)\geq\mathbb{P}(X\\!\times\\!X_{1}\\!\times\\!X_{2}\in U)\geq
1-\epsilon/4-\epsilon/8-\epsilon/8=1-\epsilon/2.$
Since each $Y|x$ is $\mathcal{X}$-independent,
$\displaystyle\textstyle\mathbb{P}(d(Y|X_{1},Y|X_{2})>\epsilon/2)$
$\displaystyle\qquad\textstyle=\sum_{x_{1},x_{2}}\mathbb{P}(X_{1}=x_{1}\wedge
X_{2}=x_{2})\mathbb{P}(d(Y|x_{1},Y|x_{2})>\epsilon/2)$
$\displaystyle\qquad\textstyle\leq\sum_{(x_{1},x_{2})\in
V}\mathbb{P}(X_{1}=x_{1}\wedge X_{2}=x_{2})\times\epsilon/2$
$\displaystyle\qquad\qquad\qquad\textstyle+\sum_{(x_{1},x_{2})\not\in
V}\mathbb{P}(X_{1}=x_{1}\wedge X_{2}=x_{2})\times 1$
$\displaystyle\qquad\qquad\textstyle=\mathbb{P}(X_{1}\\!\times\\!X_{2}\in
V)\times\epsilon/2+(1-\mathbb{P}(X_{1}\\!\times\\!X_{2}\in V))\times 1$
$\displaystyle\qquad\qquad\textstyle\leq\epsilon/2+\epsilon/2=\epsilon.$
Hence if $d_{F}(X,X_{1}),d_{F}(X,X_{2})<\delta$, we have
$d_{F}(Y|X_{1},Y|X_{2})<\epsilon$. ∎
### 7.3 Random functions
In the definition of conditional random variable, we use objects of type
$\mathbb{X}\rightarrow\mathcal{R}(\mathbb{Y})$, which are random-variable-
valued functions, rather than _random functions_ with type
$\mathcal{R}(\mathbb{X}\rightarrow\mathbb{Y})$, alternatively written
$\mathcal{R}(\mathcal{C}(\mathbb{X};\mathbb{Y}))$.
Given a random function $F:\mathcal{R}\bigl{(}\mathbb{X}\to\mathbb{Y}\bigr{)}$
and a random variable $X:\mathcal{R}(\mathbb{X})$, since the evaluation map
$\varepsilon:(\mathbb{X}\to\mathbb{Y})\times\mathbb{X}\to\mathbb{Y}$ is
computable, we can apply it to $F$ and $X$ to obtain a random variable
$Y=\varepsilon(F,X):\mathcal{R}(\mathbb{Y})$.
The information provided by a random function
$\mathcal{R}(\mathbb{X}\to\mathbb{Y})$ is strictly stronger than that provided
by a function $\mathbb{X}\to\mathcal{R}(\mathbb{Y})$:
###### Proposition 77 (Random function).
The natural bijection
$\mathcal{R}(\mathbb{X}\rightarrow\mathbb{Y})\hookrightarrow(\mathbb{X}\rightarrow\mathcal{R}(\mathbb{Y}))$
is computable, but its inverse is not continuous.
###### Proof.
For fixed $x$, evaluation
$\varepsilon_{x}:(\mathbb{X}\rightarrow\mathbb{Y})\rightarrow\mathbb{Y}:f\mapsto
f(x)$ is computable, so by Theorem 49,
$\varepsilon(F):\mathcal{R}(\mathbb{Y})$ is computable for any
$F:\mathcal{R}(\mathbb{X}\rightarrow\mathbb{Y})$ given $x$. Hence the function
$x\mapsto\varepsilon_{x}(F)$ is computable.
Conversely, let $X=\\{0,1\\}^{\omega}$ and $Y=\\{0,1\\}$. Define
$F(x,\omega,n)=1$ if $x|_{n}=\omega|_{n}$, and $0$ otherwise. Then for fixed
$x$, $d(F(x,\cdot,n),0)=2^{-n}$, so $F(x,\cdot,n)$ converges to $0$ uniformly
in $x$.
For fixed $\omega$, $d(F(\cdot,\omega,n_{1}),F(\cdot,\omega,n_{2}))=\sup_{x\in
X}d(F(x,\omega,n_{1}),F(x,\omega,n_{2}))=1$, since (for $n_{1}<n_{2}$) there
exists $x$ such that $x|_{n_{1}}=\omega|_{n_{1}}$ but
$x|_{n_{2}}\neq\omega|_{n_{2}}$. Hence
$d(F(\cdot,\cdot,n_{1}),F(\cdot,\cdot,n_{2}))=1$ for all $n_{1},n_{2}$, and
the sequence is not a Cauchy sequence in
$\mathcal{R}(\mathbb{X}\rightarrow\mathbb{Y})$. ∎
However, if $Y|:\mathbb{X}\to\mathcal{R}(\mathbb{Y})$ is such that each $Y|x$
is a continuous random variable i.e. a continuous function
$\Omega\to\mathbb{Y}$, then $Y|$ corresponds to a continuous random function
$F$ by $[F(\omega)](x)=Y|x(\omega)$, and $Y|X$ is the random variable
$\varepsilon(F,X)$.
## 8 Conclusions
In this paper, we have developed a theory of probability and random variables.
The theory uses type-two effectivity to provide an underlying machine model of
computation, but is largely developed using type theory in the cartesian-
closed category of quotients of countably-based spaces, which has an effective
interpretation. The approach extends existing work on probability via
valuations and random variables in metric spaces via limits of Cauchy
sequences.
The approach has been used to give a computable theory for stochastic
processes which is sufficiently powerful to effectively compute the solution
of stochastic differential equations [Col14]. Ultimately, we hope that this
work will form a basis for practical software tools for the rigorous
computational analysis of stochastic systems.
Acknowledgement: The author would like to thank Bas Spitters for many
interesting discussions on measurable functions and type theory, and pointing
out the connection with monads.
## References
* [AM02] Mauricio Alvarez-Manilla. Extension of valuations on locally compact sober spaces. Topology Appl., 124:397–433, 2002.
* [BB85] Errett Bishop and Douglas Bridges. Constructive analysis, volume 279 of Grundlehren der Mathematischen Wissenschaften. Springer, 1985.
* [BC72] Errett Bishop and Henry Cheng. Constructive measure theory. American Mathematical Society, 1972.
* [Bra01] Vasco Brattka. Computable versions of Baire’s category theorem. In Proc. 26th International Symposium on Mathematical Foundations of Computer Science, pages 224–235. Springer, 2001.
* [Bra05] Vasco Brattka. Effective Borel measurability and reducibility of functions. Math. Logic Quarterly, 51:19–44, 2005.
* [Cha74] Y. K. Chan. Notes on constructive probability theory. Ann. Probability, 2(1):51–75, 1974.
* [Col14] Pieter Collins. Computable stochastic processes. Technical report, 2014. arXiv:1409.4667.
* [CP02] Thierry Coquand and Erik Palmgren. Metric boolean algebras and constructive measure theory. Archive for Mathematical Logic, 41(7):687–704, 2002.
* [CS09] Thierry Coquand and Bas Spitters. Integrals and valuations. J. Logic Analysis, 1(3):1–22, 2009.
* [Eda95a] Abbas Edalat. Domain theory and integration. Theor. Comput. Sci., 151:163–193, November 1995.
* [Eda95b] Abbas Edalat. Dynamical systems, measures, and fractals via domain theory. Inf. Comput., 120:32–48, July 1995.
* [Esc09] Martín Escardó. Semi-decidability of may, must and probabilistic testing in a higher-type setting. Electron. Notes Theor. Comput. Sci., 249:219–242, August 2009.
* [GL05] Jean Goubault-Larrecq. Extensions of valuations. Mathematical Structures in Comp. Sci., 15:271–297, April 2005.
* [GLV11] Jean Goubault-Larrecq and Daniele Varacca. Continuous random variables. In Proceedings of the 2011 IEEE 26th Annual Symposium on Logic in Computer Science, pages 97–106, Washington, DC, USA, 2011.
* [HR09] Mathieu Hoyrup and Cristóbal Rojas. Computability of probability measures and Martin-Löf randomness over metric spaces. Information and Computation, 207:830–847, 2009.
* [HRW11] Mathieu Hoyrup, Cristóbal Rojas, and Klaus Weihrauch. Computability of the Radon-Nikodym derivative. In Benedikt Löwe, Dag Normann, Ivan Soskov, and Alexandra Soskova, editors, Models of Computation in Context, volume 6735 of Lecture Notes in Computer Science, pages 132–141. Springer, 2011.
* [JP89] C. Jones and G. Plotkin. A probabilistic powerdomain of evaluations. In Proceedings of the Fourth Annual Symposium on Logic in computer science, pages 186–195, Piscataway, NJ, USA, 1989.
* [Ker08] Götz Kersting. Random vaiables — without basic space. In J. Blath, P. Mörters, and M. Scheutzow, editors, Trends in Stochastic Analysis. Cambridge University Press, 2008.
* [Kön97] H. König. Measure and Integration. Springer-Verlag, 1997.
* [Law04] Jimmie D. Lawson. Domains, integration and ‘positive analysis’. Mathematical. Structures in Comp. Sci., 14:815–832, December 2004\.
* [Mis07] Michael Mislove. Discrete random variables over domains. Theor. Comput. Sci., 380:181–198, July 2007.
* [MW43] H.B. Mann and A. Wald. On stochastic limit and order relationships. Ann. Math. Statistics, 14(3):217–226, 1943.
* [Pol02] David Pollard. A User’s Guide to Measure Theoretic Probability. Cambridge Series in Statistical and Probabilistic Mathematics. 2002.
* [Roh52] V. A. Rohlin. On the fundamental ideas of measure theory, volume 71 of Translations. American Mathematical Society, 1952. Translated from Russian.
* [Sch04] Jean Schmets. Théorie de la mesure. Notes de cours, Université de Liège, 2004.
* [Sch07] Matthias Schröder. Admissible representations of probability measures. Electron. Notes Theor. Comput. Sci., 167:61–78, January 2007.
* [Sch09] Matthias Schröder. An effective Tietze-Urysohn theorem for QCB-spaces. J. Univers. Comput. Sci., 15(6):1317–1336, 2009.
* [Shi95] Al’bert Nikolaevich Shiryaev. Probability. Springer, 1995.
* [Spi06] Bas Spitters. Constructive algebraic integration theory. Ann. Pure Appl. Logic, 137(1-3):380–390, 2006.
* [SS06a] Matthias Schröder and Alex Simpson. Probabilistic observations and valuations. Electron. Notes Theor. Comput. Sci., 155:605–615, May 2006.
* [SS06b] Matthias Schröder and Alex Simpson. Representing probability measures using probabilistic processes. J. Complexity, 22(6):768 – 782, 2006. Computability and Complexity in Analysis.
* [Str72] Ross Street. The formal theory of monads. J. Pure Appl. Math., 2:149–168, 1972.
* [Tix95] R. Tix. Stetige Bewertungen auf topologischen Räumen. PhD thesis, Master’s Thesis, Technische Universität Darmstadt, 1995\.
* [Var02] Daniele Varacca. The powerdomain of indexed valuations. In Proceedings of the 17th Annual IEEE Symposium on Logic in Computer Science, pages 299–, Washington, DC, USA, 2002.
* [vG02] Onno van Gaans. Probability measures on metric spaces. 2002\.
* [Vic08] Steven Vickers. A localic theory of lower and upper integrals. Math. Log. Quart., 54(1):109–123, 2008.
* [Vic11] Steven Vickers. A monad of valuation locales. http://www.cs.bham.ac.uk/~sjv/Riesz.pdf, 2011.
* [WD05] Yongcheng Wu and Decheng Ding. Computability of measurable sets via effective metrics. Mathematical Logic Quarterly, 51(6):543–559, 2005.
* [WD06] Yongcheng Wu and Decheng Ding. Computability of measurable sets via effective topologies. Archive for Mathematical Logic, 45(3):365–379, 2006.
* [Wei99] Klaus Weihrauch. Computability on the probability measures on the Borel sets of the unit interval. Theor. Comput. Sci., 219:421–437, May 1999.
* [YMT99] M. Yasugi, T. Mori, and Y. Tsujii. Effective properties of sets and functions in metric spaces with computability structure. Theor. Comput. Sci., 219(1-2):467–486, 1999.
# $F$-manifold color algebras
Ming Ding, Zhiqi Chen and Jifu Li
Department of Mathematics and Information Science, Guangzhou University, Guangzhou, P.R. China
[email protected]
School of Mathematical Sciences and LPMC, Nankai University, Tianjin, P.R. China
[email protected]
School of Science, Tianjin University of Technology, Tianjin, P.R. China
[email protected]
###### Abstract.
In this paper, we introduce the notion of $F$-manifold color algebras and
study their properties which extend some results for $F$-manifold algebras.
###### Key words and phrases:
$F$-manifold color algebra, coherence $F$-manifold color algebra,
pre-$F$-manifold color algebra
## 1\. Introduction
The notion of Frobenius manifolds was invented by Dubrovin [10] in order to
give a geometrical expression of the Witten-Dijkgraaf-Verlinde-Verlinde
equations. In 1999, Hertling and Manin [12] introduced the concept of
$F$-manifolds as a relaxation of the conditions of Frobenius manifolds.
Inspired by the investigation of algebraic structures of $F$-manifolds, the
notion of an $F$-manifold algebra was introduced by Dotsenko [8] in 2019 to
relate the operad of $F$-manifold algebras to the operad of pre-Lie algebras. An
$F$-manifold algebra is defined as a triple $(A,\cdot,[,])$ satisfying the
following Hertling-Manin relation,
$P_{x\cdot y}(z,w)=x\cdot P_{y}(z,w)+y\cdot P_{x}(z,w),\ \ \ \forall
x,y,z,w\in A,$
where $(A,\cdot)$ is a commutative associative algebra, $(A,[,])$ is a Lie
algebra and $P_{x}(y,z)=[x,y\cdot z]-[x,y]\cdot z-y\cdot[x,z]$. A pre-Lie
algebra is a vector space $A$ with a bilinear multiplication $\cdot$
satisfying
$(x\cdot y)\cdot z-x\cdot(y\cdot z)=(y\cdot x)\cdot z-y\cdot(x\cdot z),\ \ \
\forall x,y,z\in A.$
Pre-Lie algebras have attracted a great deal of attention in many areas of
mathematics and physics (see [1, 2, 4, 5, 9, 16] and so on).
Recently, Liu, Sheng and Bai [13] introduced the concept of pre-$F$-manifold
algebras which gives rise to $F$-manifold algebras. They also studied
representations of $F$-manifold algebras, and constructed many other examples
of $F$-manifold algebras.
In this paper, we introduce the notions of an $F$-manifold color algebra and a
pre-$F$-manifold color algebra which can be viewed as natural generalizations
of an $F$-manifold algebra and a pre-$F$-manifold algebra, and then extend
some properties of an $F$-manifold algebra in [13] to the color case.
The paper is organized as follows. In Section 2, we summarize some basic
concepts of Lie color algebras, pre-Lie color algebras, representations of
$\varepsilon$-commutative associative algebras and Lie color algebras. In
Section 3, we introduce the concept of an $F$-manifold color algebra and study
its representations. We also introduce the notion of a coherence $F$-manifold
color algebra and obtain that an $F$-manifold color algebra with a
nondegenerate symmetric bilinear form is a coherence $F$-manifold color
algebra. In Section 4, we introduce the concept of a pre-$F$-manifold color
algebra and show that a pre-$F$-manifold color algebra can naturally give rise
to an $F$-manifold color algebra.
Throughout this paper, we assume that $\mathbb{K}$ is an algebraically closed
field of characteristic zero and all the vector spaces are finite dimensional
over $\mathbb{K}$.
## 2\. Lie color algebras and relative algebraic structures
The concept of a Lie color algebra was introduced by Scheunert [17] and has
since been widely studied. In this section, we collect the definitions and
representations of Lie color algebras and of some Lie color admissible algebras
such as pre-Lie color algebras and $\varepsilon$-commutative associative
algebras. One can refer to [3, 6, 7, 11, 14, 15, 17, 18, 19, 20] for more
details.
###### Definition 2.1.
Let $G$ be an abelian group. A map $\varepsilon:G\times
G\rightarrow\mathbb{K}\backslash\\{0\\}$ is called a skew-symmetric
bicharacter on $G$ if the following identities hold,
1. (i)
$\varepsilon(a,b)\varepsilon(b,a)=1$,
2. (ii)
$\varepsilon(a,b+c)=\varepsilon(a,b)\varepsilon(a,c)$,
3. (iii)
$\varepsilon(a+b,c)=\varepsilon(a,c)\varepsilon(b,c)$,
for all $a,b,c\in G$.
By the definition, it is obvious that
$\varepsilon(a,0)=\varepsilon(0,a)=1$ and $\varepsilon(a,a)=\pm 1$ for all $a\in G$.
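To make Definition 2.1 concrete (an illustration, not part of the paper's development): for $G=\mathbb{Z}/2\mathbb{Z}$ the map $\varepsilon(a,b)=(-1)^{ab}$ is a skew-symmetric bicharacter, recovering the usual sign rule of superalgebras. Identities (i)-(iii) can be checked exhaustively on any finite abelian group:

```python
# Illustration only: epsilon(a, b) = (-1)^(a*b) on G = Z/2Z is a
# skew-symmetric bicharacter in the sense of Definition 2.1.

def epsilon(a, b):
    """Candidate bicharacter on G = Z/2Z = {0, 1}."""
    return (-1) ** (a * b)

def is_skew_symmetric_bicharacter(eps, G, add):
    """Check identities (i)-(iii) of Definition 2.1 on a finite abelian group."""
    for a in G:
        for b in G:
            if eps(a, b) * eps(b, a) != 1:                      # (i)
                return False
            for c in G:
                if eps(a, add(b, c)) != eps(a, b) * eps(a, c):  # (ii)
                    return False
                if eps(add(a, b), c) != eps(a, c) * eps(b, c):  # (iii)
                    return False
    return True

G2 = [0, 1]
add_mod2 = lambda a, b: (a + b) % 2
print(is_skew_symmetric_bicharacter(epsilon, G2, add_mod2))  # True
```

Note that $\varepsilon(1,1)=-1$, so odd elements anticommute, as expected of the super sign rule.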
###### Definition 2.2.
Let $G$ be an abelian group with bicharacter $\varepsilon$ as above. The
$G$-graded vector space
$A=\bigoplus_{g\in G}A_{g}$
is called a pre-Lie color algebra, if $A$ has a bilinear multiplication
operation $\cdot$ satisfying
1. (1)
$A_{a}\cdot A_{b}\subseteq A_{a+b},$
2. (2)
$(x\cdot y)\cdot z-x\cdot(y\cdot z)=\varepsilon(a,b)((y\cdot x)\cdot
z-y\cdot(x\cdot z)),$
for all $x\in A_{a},y\in A_{b},z\in A_{c}$, and $a,b,c\in G$.
###### Definition 2.3.
Let $G$ be an abelian group with bicharacter $\varepsilon$ as above. The
$G$-graded vector space
$A=\bigoplus_{g\in G}A_{g}$
is called a Lie color algebra, if $A$ has a bilinear multiplication
$[,]:A\times A\rightarrow A$ satisfying
1. (1)
$[A_{a},A_{b}]\subseteq A_{a+b},$
2. (2)
$[x,y]=-\varepsilon(a,b)[y,x],$
3. (3)
$\varepsilon(c,a)[x,[y,z]]+\varepsilon(b,c)[z,[x,y]]+\varepsilon(a,b)[y,[z,x]]=0,$
for all $x\in A_{a},y\in A_{b},z\in A_{c}$, and $a,b,c\in G$.
###### Remark 2.4.
It is well known that a pre-Lie algebra $(A,\cdot)$ with a commutator
$[x,y]=x\cdot y-y\cdot x$ becomes a Lie algebra. Similarly, one has a pre-Lie
color algebra’s version, that is a pre-Lie color algebra
$(A,\cdot,\varepsilon)$ with a commutator $[x,y]=x\cdot
y-\varepsilon(x,y)y\cdot x$ becomes a Lie color algebra.
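A minimal numerical sketch of the ordinary case ($G=\\{0\\}$, $\varepsilon\equiv 1$), assuming nothing beyond the remark itself: every associative algebra is pre-Lie (both sides of the pre-Lie identity vanish), so the commutator of $2\times 2$ matrices must satisfy the Jacobi identity. The matrices below are arbitrary test data:

```python
# Illustration (ordinary case): any associative algebra is pre-Lie, and the
# commutator [x, y] = x*y - y*x then satisfies the Jacobi identity.
# Checked here on 2x2 integer matrices.

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(x, y):
    return [[x[i][j] - y[i][j] for j in range(2)] for i in range(2)]

def bracket(x, y):
    return sub(matmul(x, y), matmul(y, x))

def add3(x, y, z):
    return [[x[i][j] + y[i][j] + z[i][j] for j in range(2)] for i in range(2)]

x, y, z = [[0, 1], [0, 0]], [[1, 0], [0, -1]], [[0, 0], [1, 0]]
jacobi = add3(bracket(x, bracket(y, z)),
              bracket(y, bracket(z, x)),
              bracket(z, bracket(x, y)))
print(jacobi)  # [[0, 0], [0, 0]]
```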
An element $x$ of the $G$-graded vector space $A$ is called homogeneous of
degree $a$ if $x\in A_{a}$. In the rest of this paper, for homogeneous
elements $x\in A_{a}$, $y\in A_{b}$, $z\in A_{c}$, we will write
$\varepsilon(x,y)$ instead of $\varepsilon(a,b)$, $\varepsilon(x+y,z)$ instead
of $\varepsilon(a+b,c)$, and so on. Furthermore, when the skew-symmetric
bicharacter $\varepsilon(x,y)$ appears, it means that $x$ and $y$ are both
homogeneous elements.
By an $\varepsilon$-commutative associative algebra $(A,\cdot,\varepsilon)$,
we mean that $(A,\cdot)$ is a $G$-graded associative algebra with the
following $\varepsilon$-commutativity
$x\cdot y=\varepsilon(x,y)y\cdot x$
for all homogeneous elements $x,y\in A$.
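A standard example with nontrivial signs (illustration only, not from the paper): the exterior algebra on generators $e_{1},e_{2}$, graded over $G=\mathbb{Z}/2\mathbb{Z}$ by word length modulo $2$, is $\varepsilon$-commutative for $\varepsilon(a,b)=(-1)^{ab}$:

```python
# Illustration: the exterior algebra on {e1, e2}, with basis 1, e1, e2, e1^e2,
# is epsilon-commutative for epsilon(a, b) = (-1)^(a*b), a, b in Z/2Z.

def wedge(S, T):
    """Multiply basis monomials, given as sorted tuples of generator indices;
    returns (sign, monomial), or None if the product vanishes."""
    if set(S) & set(T):
        return None  # a repeated generator kills the wedge product
    word = list(S) + list(T)
    sign = 1
    for i in range(len(word)):          # sign = (-1)^(number of inversions)
        for j in range(i + 1, len(word)):
            if word[i] > word[j]:
                sign = -sign
    return sign, tuple(sorted(word))

def eps(a, b):
    return (-1) ** (a * b)

basis = [(), (1,), (2,), (1, 2)]

def check_eps_commutative():
    """Verify x . y = epsilon(x, y) y . x on all basis monomials."""
    for S in basis:
        for T in basis:
            left, right = wedge(S, T), wedge(T, S)
            a, b = len(S) % 2, len(T) % 2
            if left is None:
                if right is not None:
                    return False
            else:
                (sl, ml), (sr, mr) = left, right
                if ml != mr or sl != eps(a, b) * sr:
                    return False
    return True

print(check_eps_commutative())  # True
```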
Let $V$ be a $G$-graded vector space. A representation $(V,\mu)$ of an
$\varepsilon$-commutative associative algebra $(A,\cdot,\varepsilon)$ is a
linear map $\mu:A\longrightarrow\mathfrak{gl}(V)$ satisfying
$\mu(x)v\in V_{a+b},\ \ \ \ \mu(x\cdot y)=\mu(x)\circ\mu(y)$
for all $v\in V_{a},x\in A_{b},y\in A_{c}.$ A representation $(V,\rho)$ of a
Lie color algebra $(A,[,],\varepsilon)$ is a linear map
$\rho:A\longrightarrow\mathfrak{gl}(V)$ satisfying
$\rho(x)v\in V_{a+b},\ \ \ \
\rho([x,y])=\rho(x)\circ\rho(y)-\varepsilon(x,y)\rho(y)\circ\rho(x)$
for all $v\in V_{a},x\in A_{b},y\in A_{c}.$
We consider the dual space $V^{*}$ of the $G$-graded vector space $V$. Then
$V^{*}=\bigoplus_{a\in G}V^{*}_{a}$ is also a $G$-graded space, where
$V^{*}_{a}=\\{\alpha\in V^{*}\mid\alpha(x)=0\;\mbox{for all}\;x\in
V_{b}\;\mbox{with}\;b\neq-a\\}.$
Define a linear map $\mu^{*}:A\longrightarrow\mathfrak{gl}(V^{*})$ by
$\mu^{*}(x)\alpha\in V^{*}_{a+c},\ \ \ \
\langle\mu^{*}(x)\alpha,v\rangle=-\varepsilon(x,\alpha)\langle\alpha,\mu(x)v\rangle$
for all $x\in A_{a},v\in V_{b},\alpha\in V^{*}_{c}.$
It is easy to see that
(1) If $(V,\mu)$ is a representation of an $\varepsilon$-commutative
associative algebra $(A,\cdot,\varepsilon)$, then $(V^{*},-\mu^{*})$ is a
representation of $(A,\cdot,\varepsilon)$;
(2) If $(V,\mu)$ is a representation of a Lie color algebra
$(A,[,],\varepsilon)$, then $(V^{*},\mu^{*})$ is a representation of
$(A,[,],\varepsilon)$.
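In the ordinary case the dual construction reduces to matrix transposition: $\mu^{*}(x)=-\mu(x)^{T}$ and $\rho^{*}(x)=-\rho(x)^{T}$. The following sketch (an illustration, with $\mathfrak{gl}(2)$ acting on $\mathbb{K}^{2}$ by $\rho(x)=x$) checks statement (2) numerically:

```python
# Illustration (ordinary case): for a Lie algebra representation rho, the dual
# representation is rho*(x) = -rho(x)^T.  We verify the defining identity
# rho*([x, y]) = rho*(x) rho*(y) - rho*(y) rho*(x) for gl(2) acting on K^2
# by rho(x) = x, with exact integer matrices.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def neg_t(a):
    """The dual action: rho*(x) = -x^T."""
    return [[-a[j][i] for j in range(2)] for i in range(2)]

x = [[1, 2], [3, 4]]
y = [[0, 1], [-1, 5]]
bracket_xy = sub(mul(x, y), mul(y, x))
lhs = neg_t(bracket_xy)                                  # rho*([x, y])
rhs = sub(mul(neg_t(x), neg_t(y)), mul(neg_t(y), neg_t(x)))
print(lhs == rhs)  # True
```

This works because $-[x,y]^{T}=(y x-x y)^{T}=x^{T}y^{T}-y^{T}x^{T}$.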
## 3\. $F$-manifold color algebras and representations
In this section, we will introduce the notion of $F$-manifold color algebras,
and generalize some results in [13] to the color case.
###### Definition 3.1.
An $F$-manifold color algebra is a quadruple $(A,\cdot,[,],\varepsilon)$,
where $(A,\cdot,\varepsilon)$ is an $\varepsilon$-commutative associative
algebra and $(A,[,],\varepsilon)$ is a Lie color algebra, such that for all
homogeneous elements $x,y,z,w\in A$, the color Hertling-Manin relation holds:
(1) $P_{x\cdot y}(z,w)=x\cdot P_{y}(z,w)+\varepsilon(x,y)y\cdot P_{x}(z,w),$
where $P_{x}(y,z)$ is defined by
(2) $P_{x}(y,z)=[x,y\cdot z]-[x,y]\cdot z-\varepsilon(x,y)y\cdot[x,z].$
###### Remark 3.2.
When $G=\\{0\\}$ and $\varepsilon(0,0)=1$, an $F$-manifold color algebra
is exactly an $F$-manifold algebra.
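The color Hertling-Manin relation can be verified mechanically from structure constants. The sketch below (an illustration, not from the paper) checks the ordinary case on all basis elements; with the zero bracket on $\mathbb{K}[t]/(t^{3})$ the map $P$ vanishes identically, so $(A,\cdot,0)$ is trivially an $F$-manifold algebra:

```python
from itertools import product as iproduct

# Sanity-check harness (illustration, ordinary case G = {0}): given
# multiplication tables for a commutative associative product and a Lie
# bracket on a basis e_0..e_{n-1}, verify the Hertling-Manin relation
# P_{x.y}(z, w) = x.P_y(z, w) + y.P_x(z, w) on all basis elements.

def make_ops(dot_table, br_table, n):
    def bilinear(table):
        def op(u, v):
            out = [0] * n
            for i, j in iproduct(range(n), range(n)):
                for k in range(n):
                    out[k] += u[i] * v[j] * table[i][j][k]
            return out
        return op
    return bilinear(dot_table), bilinear(br_table)

def hertling_manin_holds(dot, br, n):
    def P(x, y, z):  # P_x(y, z) = [x, y.z] - [x, y].z - y.[x, z]
        a, b, c = br(x, dot(y, z)), dot(br(x, y), z), dot(y, br(x, z))
        return [a[k] - b[k] - c[k] for k in range(n)]
    basis = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for x, y, z, w in iproduct(basis, repeat=4):
        lhs = P(dot(x, y), z, w)
        r1, r2 = dot(x, P(y, z, w)), dot(y, P(x, z, w))
        if any(lhs[k] != r1[k] + r2[k] for k in range(n)):
            return False
    return True

# A = K[t]/(t^3) with basis 1, t, t^2 and the zero bracket.
n = 3
dot_table = [[[0] * n for _ in range(n)] for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i + j < n:
            dot_table[i][j][i + j] = 1
zero_br = [[[0] * n for _ in range(n)] for _ in range(n)]
dot, br = make_ops(dot_table, zero_br, n)
print(hertling_manin_holds(dot, br, n))  # True
```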
###### Definition 3.3.
Let $(A,\cdot,[,],\varepsilon)$ be an $F$-manifold color algebra. A
representation of $A$ is a triple $(V,\rho,\mu)$, such that $(V,\rho)$ is a
representation of the Lie color algebra $(A,[,],\varepsilon)$ and $(V,\mu)$ is
a representation of the $\varepsilon$-commutative associative algebra
$(A,\cdot,\varepsilon)$ satisfying
$\displaystyle R_{\rho,\mu}(x_{1}\cdot
x_{2},x_{3})=\mu(x_{1})R_{\rho,\mu}(x_{2},x_{3})+\varepsilon(x_{1},x_{2})\mu(x_{2})R_{\rho,\mu}(x_{1},x_{3}),$
$\displaystyle\mu(P_{x_{1}}(x_{2},x_{3}))=\varepsilon(x_{1},x_{2}+x_{3})S_{\rho,\mu}(x_{2},x_{3})\mu(x_{1})-\mu(x_{1})S_{\rho,\mu}(x_{2},x_{3}),$
where $R_{\rho,\mu},S_{\rho,\mu}:A\otimes A\rightarrow\mathfrak{gl}(V)$ are
defined by
(3) $\displaystyle R_{\rho,\mu}(x_{1},x_{2})$ $\displaystyle=$
$\displaystyle\rho(x_{1})\mu(x_{2})-\varepsilon(x_{1},x_{2})\mu(x_{2})\rho(x_{1})-\mu([x_{1},x_{2}]),$
(4) $\displaystyle S_{\rho,\mu}(x_{1},x_{2})$ $\displaystyle=$
$\displaystyle\mu(x_{1})\rho(x_{2})+\varepsilon(x_{1},x_{2})\mu(x_{2})\rho(x_{1})-\rho(x_{1}\cdot
x_{2})$
for all homogeneous elements $x_{1},x_{2},x_{3}\in A$.
In the rest of the paper, we sometimes denote $R_{\rho,\mu},S_{\rho,\mu}$ by
$R,S$ respectively if there is no risk of confusion.
###### Example 3.4.
Let $(A,\cdot,[,],\varepsilon)$ be an $F$-manifold color algebra. Then
$(A,\mathrm{ad},\mathcal{L})$ is a representation of $A$, where the adjoint
map $\mathrm{ad}:A\longrightarrow\mathfrak{gl}(A)$ is defined by
$\mathrm{ad}_{x}y=[x,y]$ and the multiplication operator
$\mathcal{L}:A\longrightarrow\mathfrak{gl}(A)$ is defined by
$\mathcal{L}_{x}y=x\cdot y$ for all homogeneous elements $x,y\in A$.
###### Proof.
It is known that $(A,\mathrm{ad})$ is a representation of the Lie color
algebra $(A,[,],\varepsilon)$ and $(A,\mathcal{L})$ is a representation of the
$\varepsilon$-commutative associative algebra $(A,\cdot,\varepsilon)$.
Note that, for all homogeneous elements $x,y,z,w\in A$, we have
$\displaystyle R(x,y)z$ $\displaystyle=$
$\displaystyle(\mathrm{ad}_{x}\mathcal{L}_{y}-\varepsilon(x,y)\mathcal{L}_{y}\mathrm{ad}_{x}-\mathcal{L}_{[x,y]})z$ $\displaystyle=$ $\displaystyle[x,y\cdot
z]-\varepsilon(x,y)y\cdot[x,z]-[x,y]\cdot z$ $\displaystyle=$ $\displaystyle
P_{x}(y,z).$
Thus
$\displaystyle P_{x\cdot y}(z,w)=x\cdot P_{y}(z,w)+\varepsilon(x,y)y\cdot
P_{x}(z,w)$
implies the equation
$\displaystyle R(x\cdot
y,z)w=\mathcal{L}_{x}R(y,z)w+\varepsilon(x,y)\mathcal{L}_{y}R(x,z)w.$
On the other hand
$\displaystyle
S(y,z)w=(\mathcal{L}_{y}\mathrm{ad}_{z}+\varepsilon(y,z)\mathcal{L}_{z}\mathrm{ad}_{y}-\mathrm{ad}_{y\cdot
z})w$ $\displaystyle=$ $\displaystyle
y\cdot[z,w]+\varepsilon(y,z)z\cdot[y,w]-[y\cdot z,w]$ $\displaystyle=$
$\displaystyle-\varepsilon(z,w)y\cdot[w,z]-\varepsilon(y,w)\varepsilon(z,w)[w,y]\cdot
z+\varepsilon(y+z,w)[w,y\cdot z]$ $\displaystyle=$
$\displaystyle\varepsilon(y+z,w)([w,y\cdot z]-[w,y]\cdot
z-\varepsilon(w,y)y\cdot[w,z])$ $\displaystyle=$
$\displaystyle\varepsilon(y+z,w)P_{w}(y,z).$
Thus
$\displaystyle\varepsilon(x,y+z)S(y,z)\mathcal{L}_{x}w-\mathcal{L}_{x}S(y,z)w$
$\displaystyle=$ $\displaystyle\varepsilon(x,y+z)S(y,z)(x\cdot w)-x\cdot
S(y,z)w$ $\displaystyle=$
$\displaystyle\varepsilon(x,y+z)\varepsilon(y+z,x+w)P_{x\cdot
w}(y,z)-\varepsilon(y+z,w)x\cdot P_{w}(y,z)$ $\displaystyle=$
$\displaystyle\varepsilon(y+z,w)\\{P_{x\cdot w}(y,z)-x\cdot P_{w}(y,z)\\}$
$\displaystyle=$ $\displaystyle\varepsilon(y+z,w)\varepsilon(x,w)w\cdot
P_{x}(y,z)$ $\displaystyle=$ $\displaystyle\varepsilon(x+y+z,w)w\cdot
P_{x}(y,z)$ $\displaystyle=$ $\displaystyle P_{x}(y,z)w.$
Hence, $(A,\mathrm{ad},\mathcal{L})$ is a representation of $A$. ∎
Let $(A,\cdot,[,],\varepsilon)$ be an $F$-manifold color algebra and
$(V,\rho,\mu)$ a representation of $A$. Note that $A\oplus V$ is a G-graded
vector space. In the following, if we write $x+v\in A\oplus V$ as a
homogeneous element, it means that $x,v$ are of the same degree as $x+v$. For
any homogeneous elements $x_{1}+v_{1},x_{2}+v_{2}\in A\oplus V$, define
$[x_{1}+v_{1},x_{2}+v_{2}]_{\rho}=[x_{1},x_{2}]+\rho(x_{1})v_{2}-\varepsilon(x_{1},x_{2})\rho(x_{2})v_{1}.$
It is well known that $(A\oplus V,[,]_{\rho},\varepsilon)$ is a Lie color
algebra. Moreover, if one defines
$(x_{1}+v_{1})\cdot_{\mu}(x_{2}+v_{2})=x_{1}\cdot
x_{2}+\mu(x_{1})v_{2}+\varepsilon(x_{1},x_{2})\mu(x_{2})v_{1},$
then $(A\oplus V,\cdot_{\mu},\varepsilon)$ is an $\varepsilon$-commutative
associative algebra. In fact, we have
###### Proposition 3.5.
With the above notations, let $(A,\cdot,[,],\varepsilon)$ be an $F$-manifold
color algebra and $(V,\rho,\mu)$ a representation of $A$. Then $(A\oplus
V,\cdot_{\mu},[,]_{\rho},\varepsilon)$ is an $F$-manifold color algebra.
###### Proof.
We only need to prove that the color Hertling-Manin relation holds.
For any homogeneous elements $x_{1}+v_{1},x_{2}+v_{2},x_{3}+v_{3}\in A\oplus
V$, we have
$\displaystyle P_{x_{1}+v_{1}}(x_{2}+v_{2},x_{3}+v_{3})$ $\displaystyle=$
$\displaystyle[x_{1}+v_{1},(x_{2}+v_{2})\cdot_{\mu}(x_{3}+v_{3})]_{\rho}-[x_{1}+v_{1},x_{2}+v_{2}]_{\rho}\cdot_{\mu}(x_{3}+v_{3})$
$\displaystyle-\varepsilon(x_{1},x_{2})(x_{2}+v_{2})\cdot_{\mu}[x_{1}+v_{1},x_{3}+v_{3}]_{\rho}$
$\displaystyle=$ $\displaystyle[x_{1},x_{2}\cdot
x_{3}]+\rho(x_{1})\\{\mu(x_{2})v_{3}+\varepsilon(x_{2},x_{3})\mu(x_{3})v_{2}\\}-\varepsilon(x_{1},x_{2}+x_{3})\rho(x_{2}\cdot
x_{3})v_{1}-I-II,$
where
$\displaystyle I$ $\displaystyle=$
$\displaystyle\\{[x_{1},x_{2}]+\rho(x_{1})v_{3}-\varepsilon(x_{1},x_{2})\rho(x_{2})v_{1}\\}\cdot_{\mu}(x_{3}+v_{3})$
$\displaystyle=$ $\displaystyle[x_{1},x_{2}]\cdot
x_{3}+\mu([x_{1},x_{2}])v_{3}+\varepsilon(x_{1}+x_{2},x_{3})\mu(x_{3})\\{\rho(x_{1})v_{2}-\varepsilon(x_{1},x_{2})\rho(x_{2})v_{1}\\},$
and
$\displaystyle II$ $\displaystyle=$
$\displaystyle\varepsilon(x_{1},x_{2})(x_{2}+v_{2})\cdot_{\mu}\\{[x_{1},x_{3}]+\rho(x_{1})v_{3}-\varepsilon(x_{1},x_{3})\rho(x_{3})v_{1}\\}$
$\displaystyle=$
$\displaystyle\varepsilon(x_{1},x_{2})\\{x_{2}\cdot[x_{1},x_{3}]+\mu(x_{2})(\rho(x_{1})v_{3}-\varepsilon(x_{1},x_{3})\rho(x_{3})v_{1})$
$\displaystyle+\varepsilon(x_{2},x_{1}+x_{3})\mu([x_{1},x_{3}])v_{2}\\}.$
Thus
$\displaystyle P_{x_{1}+v_{1}}(x_{2}+v_{2},x_{3}+v_{3})$ $\displaystyle=$
$\displaystyle
P_{x_{1}}(x_{2},x_{3})+\\{\rho(x_{1})\mu(x_{2})-\mu([x_{1},x_{2}])-\varepsilon(x_{1},x_{2})\mu(x_{2})\rho(x_{1})\\}v_{3}$
$\displaystyle+\\{\varepsilon(x_{2},x_{3})\rho(x_{1})\mu(x_{3})-\varepsilon(x_{1}+x_{2},x_{3})\mu(x_{3})\rho(x_{1})$
$\displaystyle-\varepsilon(x_{1},x_{2})\varepsilon(x_{2},x_{1}+x_{3})\mu([x_{1},x_{3}])\\}v_{2}+\\{-\varepsilon(x_{1},x_{2}+x_{3})\rho(x_{2}\cdot
x_{3})$
$\displaystyle+\varepsilon(x_{1}+x_{2},x_{3})\varepsilon(x_{1},x_{2})\mu(x_{3})\rho(x_{2})+\varepsilon(x_{1},x_{2})\varepsilon(x_{1},x_{3})\mu(x_{2})\rho(x_{3})\\}v_{1}$
$\displaystyle=$ $\displaystyle
P_{x_{1}}(x_{2},x_{3})+R(x_{1},x_{2})v_{3}+\varepsilon(x_{2},x_{3})R(x_{1},x_{3})v_{2}+\varepsilon(x_{1},x_{2}+x_{3})S(x_{2},x_{3})v_{1}.$
Hence, for any homogeneous element $x_{4}+v_{4}\in A\oplus V$, we have
$\displaystyle
P_{(x_{1}+v_{1})\cdot_{\mu}(x_{2}+v_{2})}(x_{3}+v_{3},x_{4}+v_{4})$
$\displaystyle=$ $\displaystyle P_{x_{1}\cdot
x_{2}+\mu(x_{1})v_{2}+\varepsilon(x_{1},x_{2})\mu(x_{2})v_{1}}(x_{3}+v_{3},x_{4}+v_{4})$
$\displaystyle=$ $\displaystyle P_{x_{1}\cdot x_{2}}(x_{3},x_{4})+R(x_{1}\cdot
x_{2},x_{3})v_{4}+\varepsilon(x_{3},x_{4})R(x_{1}\cdot x_{2},x_{4})v_{3}$
$\displaystyle+\varepsilon(x_{1}+x_{2},x_{3}+x_{4})S(x_{3},x_{4})(\mu(x_{1})v_{2}+\varepsilon(x_{1},x_{2})\mu(x_{2})v_{1}).$
On the other hand
$\displaystyle(x_{1}+v_{1})\cdot_{\mu}P_{x_{2}+v_{2}}(x_{3}+v_{3},x_{4}+v_{4})$
$\displaystyle=$
$\displaystyle(x_{1}+v_{1})\cdot_{\mu}\\{P_{x_{2}}(x_{3},x_{4})+R(x_{2},x_{3})v_{4}+\varepsilon(x_{3},x_{4})R(x_{2},x_{4})v_{3}+\varepsilon(x_{2},x_{3}+x_{4})S(x_{3},x_{4})v_{2}\\}$
$\displaystyle=$ $\displaystyle x_{1}\cdot
P_{x_{2}}(x_{3},x_{4})+\mu(x_{1})\\{R(x_{2},x_{3})v_{4}+\varepsilon(x_{3},x_{4})R(x_{2},x_{4})v_{3}+\varepsilon(x_{2},x_{3}+x_{4})S(x_{3},x_{4})v_{2}\\}$
$\displaystyle+\varepsilon(x_{1},x_{2}+x_{3}+x_{4})\mu(P_{x_{2}}(x_{3},x_{4}))v_{1},$
and
$\displaystyle\varepsilon(x_{1},x_{2})(x_{2}+v_{2})\cdot_{\mu}P_{x_{1}+v_{1}}(x_{3}+v_{3},x_{4}+v_{4})$
$\displaystyle=$ $\displaystyle\varepsilon(x_{1},x_{2})\\{x_{2}\cdot
P_{x_{1}}(x_{3},x_{4})+\mu(x_{2})\\{R(x_{1},x_{3})v_{4}+\varepsilon(x_{3},x_{4})R(x_{1},x_{4})v_{3}$
$\displaystyle+\varepsilon(x_{1},x_{3}+x_{4})S(x_{3},x_{4})v_{1}\\}+\varepsilon(x_{2},x_{1}+x_{3}+x_{4})\mu(P_{x_{1}}(x_{3},x_{4}))v_{2}\\}.$
Thus
$\displaystyle(x_{1}+v_{1})\cdot_{\mu}P_{x_{2}+v_{2}}(x_{3}+v_{3},x_{4}+v_{4})+\varepsilon(x_{1},x_{2})(x_{2}+v_{2})\cdot_{\mu}P_{x_{1}+v_{1}}(x_{3}+v_{3},x_{4}+v_{4})$
$\displaystyle=$ $\displaystyle x_{1}\cdot
P_{x_{2}}(x_{3},x_{4})+\varepsilon(x_{1},x_{2})x_{2}\cdot
P_{x_{1}}(x_{3},x_{4})$
$\displaystyle+\\{\mu(x_{1})R(x_{2},x_{3})+\varepsilon(x_{1},x_{2})\mu(x_{2})(R(x_{1},x_{3}))\\}v_{4}$
$\displaystyle+\\{\varepsilon(x_{3},x_{4})\mu(x_{1})R(x_{2},x_{4})+\varepsilon(x_{1},x_{2})\varepsilon(x_{3},x_{4})\mu(x_{2})R(x_{1},x_{4})\\}v_{3}$
$\displaystyle+\\{\varepsilon(x_{2},x_{3}+x_{4})\mu(x_{1})S(x_{3},x_{4})+\varepsilon(x_{1},x_{2})\varepsilon(x_{2},x_{1}+x_{3}+x_{4})\mu(P_{x_{1}}(x_{3},x_{4}))\\}v_{2}$
$\displaystyle+\varepsilon(x_{1},x_{2}+x_{3}+x_{4})\\{\mu(x_{2})S(x_{3},x_{4})+\mu(P_{x_{2}}(x_{3},x_{4}))\\}v_{1}$
$\displaystyle=$ $\displaystyle
P_{(x_{1}+v_{1})\cdot_{\mu}(x_{2}+v_{2})}(x_{3}+v_{3},x_{4}+v_{4}).$
By the definition of an $F$-manifold color algebra, the conclusion follows
immediately. ∎
If $(V,\rho,\mu)$ is a representation of an $F$-manifold algebra, the triple
$(V^{*},\rho^{*},-\mu^{*})$ is not necessarily a representation of this
$F$-manifold algebra [13]. In the following, we prove a color version of this
result.
###### Proposition 3.6.
Let $(A,\cdot,[,],\varepsilon)$ be an $F$-manifold color algebra, $(V,\rho)$ a
representation of the Lie color algebra $(A,[,],\varepsilon)$, and $(V,\mu)$ a
representation of the $\varepsilon$-commutative associative algebra
$(A,\cdot,\varepsilon)$, such that, for any homogeneous elements $x,y,z\in A$,
$\displaystyle R_{\rho,\mu}(x\cdot
y,z)=\varepsilon(x,y+z)R_{\rho,\mu}(y,z)\mu(x)+\varepsilon(y,z)R_{\rho,\mu}(x,z)\mu(y),$
$\displaystyle\mu(P_{x}(y,z))=-\varepsilon(x,y+z)T_{\rho,\mu}(y,z)\mu(x)+\mu(x)T_{\rho,\mu}(y,z),$
where $R_{\rho,\mu}$ is given by (3) and $T_{\rho,\mu}:A\otimes
A\rightarrow\mathfrak{gl}(V)$ is defined by
$\displaystyle T_{\rho,\mu}(x,y)$ $\displaystyle=$
$\displaystyle-\varepsilon(x,y)\rho(y)\mu(x)-\rho(x)\mu(y)+\rho(x\cdot y),$
then $(V^{*},\rho^{*},-\mu^{*})$ is a representation of $A$.
###### Proof.
Assume that $x,y,z\in A,v\in V,\alpha\in V^{*}$ are all homogeneous elements.
First, we claim the following two identities:
$\displaystyle\langle R_{\rho^{*},-\mu^{*}}(x,y)(\alpha),v\rangle$
$\displaystyle=$
$\displaystyle\langle\alpha,\varepsilon(x+y,\alpha)R_{\rho,\mu}(x,y)v\rangle;$
$\displaystyle\langle S_{\rho^{*},-\mu^{*}}(x,y)(\alpha),v\rangle$
$\displaystyle=$
$\displaystyle\langle\alpha,\varepsilon(x+y,\alpha)T_{\rho,\mu}(x,y)v\rangle.$
The claims follow from some direct calculations, respectively:
$\displaystyle\langle R_{\rho^{*},-\mu^{*}}(x,y)(\alpha),v\rangle$
$\displaystyle=$
$\displaystyle\langle\\{-\rho^{*}(x)\mu^{*}(y)+\varepsilon(x,y)\mu^{*}(y)\rho^{*}(x)+\mu^{*}([x,y])\\}\alpha,v\rangle$
$\displaystyle=$
$\displaystyle\varepsilon(x,y+\alpha)\langle\mu^{*}(y)\alpha,\rho(x)v\rangle-\varepsilon(x,y)\varepsilon(y,x+\alpha)\langle\rho^{*}(x)\alpha,\mu(y)v\rangle$
$\displaystyle-\varepsilon(x+y,\alpha)\langle\alpha,\mu([x,y])v\rangle$
$\displaystyle=$
$\displaystyle-\varepsilon(x,y)\varepsilon(x+y,\alpha)\langle\alpha,\mu(y)\rho(x)v\rangle+\varepsilon(y,\alpha)\varepsilon(x,\alpha)\langle\alpha,\rho(x)\mu(y)v\rangle$
$\displaystyle-\varepsilon(x+y,\alpha)\langle\alpha,\mu([x,y])v\rangle$
$\displaystyle=$
$\displaystyle\langle\alpha,\varepsilon(x+y,\alpha)\\{-\varepsilon(x,y)\mu(y)\rho(x)+\rho(x)\mu(y)-\mu([x,y])\\}v\rangle$
$\displaystyle=$
$\displaystyle\langle\alpha,\varepsilon(x+y,\alpha)R_{\rho,\mu}(x,y)v\rangle,$
and
$\displaystyle\langle S_{\rho^{*},-\mu^{*}}(x,y)(\alpha),v\rangle$
$\displaystyle=$
$\displaystyle\langle\\{-\mu^{*}(x)\rho^{*}(y)-\varepsilon(x,y)\mu^{*}(y)\rho^{*}(x)-\rho^{*}(x\cdot
y)\\}\alpha,v\rangle$ $\displaystyle=$
$\displaystyle-\varepsilon(x,y+\alpha)\varepsilon(y,\alpha)\langle\alpha,\rho(y)\mu(x)v\rangle-\varepsilon(y,\alpha)\varepsilon(x,\alpha)\langle\alpha,\rho(x)\mu(y)v\rangle$
$\displaystyle+\varepsilon(x+y,\alpha)\langle\alpha,\rho(x\cdot y)v\rangle$
$\displaystyle=$
$\displaystyle\langle\alpha,\varepsilon(x+y,\alpha)\\{-\varepsilon(x,y)\rho(y)\mu(x)-\rho(x)\mu(y)+\rho(x\cdot
y)\\}v\rangle$ $\displaystyle=$
$\displaystyle\langle\alpha,\varepsilon(x+y,\alpha)T_{\rho,\mu}(x,y)v\rangle.$
With the above identities, we have
$\displaystyle\langle\\{R_{\rho^{*},-\mu^{*}}(x\cdot
y,z)+\mu^{*}(x)R_{\rho^{*},-\mu^{*}}(y,z)+\varepsilon(x,y)\mu^{*}(y)R_{\rho^{*},-\mu^{*}}(x,z)\\}\alpha,v\rangle$
$\displaystyle=$
$\displaystyle\langle\alpha,\varepsilon(x+y+z,\alpha)R_{\rho,\mu}(x\cdot
y,z)v\rangle-\varepsilon(x,y+z+\alpha)\varepsilon(y+z,\alpha)\langle\alpha,R_{\rho,\mu}(y,z)\mu(x)v\rangle$
$\displaystyle-\varepsilon(x+z,\alpha)\varepsilon(y,z+\alpha)\langle\alpha,R_{\rho,\mu}(x,z)\mu(y)v\rangle$
$\displaystyle=$
$\displaystyle\varepsilon(x+y+z,\alpha)\langle\alpha,\\{R_{\rho,\mu}(x\cdot
y,z)-\varepsilon(x,y+z)R_{\rho,\mu}(y,z)\mu(x)-\varepsilon(y,z)R_{\rho,\mu}(x,z)\mu(y)\\}v\rangle$
$\displaystyle=$ $\displaystyle 0.$
And
$\displaystyle\langle\\{-\mu^{*}(P_{x}(y,z))+\varepsilon(x,y+z)S_{\rho^{*},-\mu^{*}}(y,z)\mu^{*}(x)-\mu^{*}(x)S_{\rho^{*},-\mu^{*}}(y,z)\\}\alpha,v\rangle$
$\displaystyle=$
$\displaystyle\varepsilon(x+y+z,\alpha)\langle\alpha,\mu(P_{x}(y,z))v\rangle+\varepsilon(x,y+z)\varepsilon(y+z,x+\alpha)\langle\mu^{*}(x)\alpha,T_{\rho,\mu}(y,z)v\rangle$
$\displaystyle+\varepsilon(x,y+z+\alpha)\langle
S_{\rho^{*},-\mu^{*}}(y,z)\alpha,\mu(x)v\rangle$ $\displaystyle=$
$\displaystyle\varepsilon(x+y+z,\alpha)\langle\alpha,\mu(P_{x}(y,z))v\rangle-\varepsilon(y+z,\alpha)\varepsilon(x,\alpha)\langle\alpha,\mu(x)T_{\rho,\mu}(y,z)v\rangle$
$\displaystyle+\varepsilon(x,y+z+\alpha)\varepsilon(y+z,\alpha)\langle\alpha,T_{\rho,\mu}(y,z)\mu(x)v\rangle$
$\displaystyle=$
$\displaystyle\varepsilon(x+y+z,\alpha)\langle\alpha,\\{\mu(P_{x}(y,z))-\mu(x)T_{\rho,\mu}(y,z)+\varepsilon(x,y+z)T_{\rho,\mu}(y,z)\mu(x)\\}v\rangle$
$\displaystyle=$ $\displaystyle 0.$
Thus, the conclusion follows immediately from the definition of
representations and the hypothesis. ∎
###### Definition 3.7.
A coherence $F$-manifold color algebra is an $F$-manifold color algebra such
that, for all homogeneous elements $x,y,z,w\in A$, the following hold
$\displaystyle P_{x\cdot y}(z,w)$ $\displaystyle=$
$\displaystyle\varepsilon(x,y+z)P_{y}(z,x\cdot
w)+\varepsilon(y,z)P_{x}(z,y\cdot w),$ $\displaystyle P_{x}(y,z)\cdot w$
$\displaystyle=$ $\displaystyle-\varepsilon(x,y+z)T(y,z)(x\cdot
w)+x\cdot T(y,z)(w),$
where
$T(y,z)(w)=-\varepsilon(y,z)[z,y\cdot w]-[y,z\cdot w]+[y\cdot z,w].$
###### Proposition 3.8.
Let $(A,\cdot,[,],\varepsilon)$ be an $F$-manifold color algebra, and
$\mathfrak{B}$ a nondegenerate symmetric bilinear form on $A$ satisfying
$\mathfrak{B}(x\cdot y,z)=\mathfrak{B}(x,y\cdot
z),\;\;\mathfrak{B}([x,y],z)=\mathfrak{B}(x,[y,z]),$
for all homogeneous elements $x,y,z\in A$. Then $(A,\cdot,[,],\varepsilon)$ is
a coherence $F$-manifold color algebra.
###### Proof.
First, we prove that
$\mathfrak{B}(P_{x}(y,z),w)=\varepsilon(x+y,z)\mathfrak{B}(z,P_{x}(y,w))$
for all homogeneous elements $x,y,z,w\in A$.
In fact, we have
$\displaystyle\mathfrak{B}(P_{x}(y,z),w)$ $\displaystyle=$
$\displaystyle\mathfrak{B}([x,y\cdot z]-[x,y]\cdot
z-\varepsilon(x,y)y\cdot[x,z],w)$ $\displaystyle=$
$\displaystyle-\varepsilon(x,y+z)\mathfrak{B}([y\cdot
z,x],w)-\varepsilon(x+y,z)\mathfrak{B}(z,[x,y]\cdot
w)-\varepsilon(x,y)\varepsilon(y,x+z)\mathfrak{B}([x,z],y\cdot w)$
$\displaystyle=$ $\displaystyle-\varepsilon(x,y+z)\mathfrak{B}(y\cdot
z,[x,w])-\varepsilon(x+y,z)\mathfrak{B}(z,[x,y]\cdot
w)+\varepsilon(y,z)\varepsilon(x,z)\mathfrak{B}(z,[x,y\cdot w])$
$\displaystyle=$
$\displaystyle-\varepsilon(x,y+z)\varepsilon(y,z)\mathfrak{B}(z,y\cdot[x,w])-\varepsilon(x+y,z)\mathfrak{B}(z,[x,y]\cdot
w)+\varepsilon(x+y,z)\mathfrak{B}(z,[x,y\cdot w])$ $\displaystyle=$
$\displaystyle\varepsilon(x+y,z)\mathfrak{B}(z,-\varepsilon(x,y)y\cdot[x,w]-[x,y]\cdot
w+[x,y\cdot w])$ $\displaystyle=$
$\displaystyle\varepsilon(x+y,z)\mathfrak{B}(z,P_{x}(y,w)).$
By the above relation, for all homogeneous elements $x,y,z,w_{1},w_{2}\in A$,
we have
$\displaystyle\mathfrak{B}(P_{x\cdot
y}(z,w_{1})-\varepsilon(x,y+z)P_{y}(z,x\cdot
w_{1})-\varepsilon(y,z)P_{x}(z,y\cdot w_{1}),w_{2})$ $\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x\cdot
y}(z,w_{2}))-\varepsilon(x,y+z)\varepsilon(y+z,x+w_{1})\mathfrak{B}(x\cdot
w_{1},P_{y}(z,w_{2}))$ $\displaystyle-\varepsilon(y,z)\varepsilon(x+z,y+w_{1})\mathfrak{B}(y\cdot w_{1},P_{x}(z,w_{2}))$ $\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x\cdot
y}(z,w_{2}))-\varepsilon(x,y+z)\varepsilon(y+z,x+w_{1})\varepsilon(x,w_{1})\mathfrak{B}(w_{1},x\cdot P_{y}(z,w_{2}))$
$\displaystyle-\varepsilon(y,z)\varepsilon(x+z,y+w_{1})\varepsilon(y,w_{1})\mathfrak{B}(w_{1},y\cdot P_{x}(z,w_{2}))$
$\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x\cdot
y}(z,w_{2}))-\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},x\cdot
P_{y}(z,w_{2}))$
$\displaystyle-\varepsilon(y,z)\varepsilon(x+z,y)\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},y\cdot
P_{x}(z,w_{2}))$ $\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x\cdot
y}(z,w_{2}))-\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},x\cdot
P_{y}(z,w_{2}))$
$\displaystyle-\varepsilon(x,y)\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},y\cdot
P_{x}(z,w_{2}))$ $\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x\cdot
y}(z,w_{2})-x\cdot P_{y}(z,w_{2})-\varepsilon(x,y)y\cdot P_{x}(z,w_{2}))$
$\displaystyle=$ $\displaystyle 0.$
We claim the following identity for all homogeneous elements
$y,z,w_{1},w_{2}\in A$:
$\mathfrak{B}(T(y,z)(w_{1}),w_{2})=\varepsilon(y+z,w_{1}+w_{2})\mathfrak{B}(w_{1},P_{w_{2}}(y,z)).$
In fact, we have
$\displaystyle\mathfrak{B}(T(y,z)(w_{1}),w_{2})$ $\displaystyle=$
$\displaystyle\mathfrak{B}(-\varepsilon(y,z)[z,y\cdot w_{1}]-[y,z\cdot
w_{1}]+[y\cdot z,w_{1}],w_{2})$ $\displaystyle=$
$\displaystyle\varepsilon(y,z)\varepsilon(z,y+w_{1})\mathfrak{B}(y\cdot
w_{1},[z,w_{2}])+\varepsilon(y,z+w_{1})\mathfrak{B}(z\cdot
w_{1},[y,w_{2}])-\varepsilon(y+z,w_{1})\mathfrak{B}(w_{1},[y\cdot z,w_{2}])$
$\displaystyle=$
$\displaystyle\varepsilon(z,w_{1})\varepsilon(y,w_{1})\mathfrak{B}(w_{1},y\cdot[z,w_{2}])+\varepsilon(y,z+w_{1})\varepsilon(z,w_{1})\mathfrak{B}(w_{1},z\cdot[y,w_{2}])$
$\displaystyle-\varepsilon(y+z,w_{1})\mathfrak{B}(w_{1},[y\cdot z,w_{2}])$
$\displaystyle=$
$\displaystyle\varepsilon(y+z,w_{1})\mathfrak{B}(w_{1},y\cdot[z,w_{2}])+\varepsilon(y+z,w_{1})\varepsilon(y,z)\mathfrak{B}(w_{1},z\cdot[y,w_{2}])-\varepsilon(y+z,w_{1})\mathfrak{B}(w_{1},[y\cdot
z,w_{2}])$ $\displaystyle=$
$\displaystyle\varepsilon(y+z,w_{1})\mathfrak{B}(w_{1},y\cdot[z,w_{2}]+\varepsilon(y,z)z\cdot[y,w_{2}]-[y\cdot
z,w_{2}])$ $\displaystyle=$
$\displaystyle\varepsilon(y+z,w_{1})\mathfrak{B}(w_{1},\varepsilon(y+z,w_{2})P_{w_{2}}(y,z))$
$\displaystyle=$
$\displaystyle\varepsilon(y+z,w_{1}+w_{2})\mathfrak{B}(w_{1},P_{w_{2}}(y,z)).$
With the above identity, we have
$\displaystyle\mathfrak{B}(P_{x}(y,z)\cdot
w_{1}+\varepsilon(x,y+z)T(y,z)(x\cdot w_{1})-x\cdot T(y,z)(w_{1}),w_{2})$
$\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x}(y,z)w_{2})+\varepsilon(x,y+z)\varepsilon(y+z,x+w_{1}+w_{2})\mathfrak{B}(x\cdot
w_{1},P_{w_{2}}(y,z))$
$\displaystyle-\varepsilon(x,y+z+w_{1})\mathfrak{B}(T(y,z)w_{1},x\cdot w_{2})$
$\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x}(y,z)w_{2})+\varepsilon(x,w_{1})\varepsilon(y+z,w_{1}+w_{2})\mathfrak{B}(w_{1},x\cdot
P_{w_{2}}(y,z))$
$\displaystyle-\varepsilon(x,y+z+w_{1})\varepsilon(y+z,x+w_{1}+w_{2})\mathfrak{B}(w_{1},P_{x\cdot
w_{2}}(y,z))$ $\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x}(y,z)w_{2})+\varepsilon(x,w_{1})\varepsilon(y+z,w_{1}+w_{2})\mathfrak{B}(w_{1},x\cdot
P_{w_{2}}(y,z))$
$\displaystyle-\varepsilon(x,w_{1})\varepsilon(y+z,w_{1}+w_{2})\mathfrak{B}(w_{1},P_{x\cdot
w_{2}}(y,z))$ $\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x}(y,z)w_{2}+\varepsilon(y+z,w_{2})x\cdot
P_{w_{2}}(y,z)-\varepsilon(y+z,w_{2})P_{x\cdot w_{2}}(y,z))$ $\displaystyle=$
$\displaystyle\varepsilon(x+y+z,w_{1})\mathfrak{B}(w_{1},P_{x}(y,z)w_{2}+\varepsilon(y+z,w_{2})x\cdot
P_{w_{2}}(y,z)$ $\displaystyle-(P_{x}(y,z)w_{2}+\varepsilon(y+z,w_{2})x\cdot
P_{w_{2}}(y,z)))$ $\displaystyle=$ $\displaystyle 0.$
By the assumptions that $A$ is an $F$-manifold color algebra and
$\mathfrak{B}$ is nondegenerate, the conclusion is obtained. ∎
## 4\. Pre-$F$-manifold color algebras
In this section, we introduce the notion of a pre-$F$-manifold color algebra,
and then construct an $F$-manifold color algebra from a pre-$F$-manifold color
algebra.
###### Definition 4.1.
A triple $(A,\diamond,\varepsilon)$ is called a Zinbiel color algebra if $A$
is a $G$-graded vector space and $\diamond:A\otimes A\longrightarrow A$ is a
bilinear multiplication satisfying
1. (1)
$A_{a}\diamond A_{b}\subseteq A_{a+b},$
2. (2)
$x\diamond(y\diamond z)=(x\diamond y)\diamond z+\varepsilon(x,y)(y\diamond
x)\diamond z,$
for all $x\in A_{a},y\in A_{b},z\in A_{c}$, and $a,b,c\in G$.
Let $(A,\diamond,\varepsilon)$ be a Zinbiel color algebra. For any homogeneous
elements $x,y\in A$, define
$x\cdot y=x\diamond y+\varepsilon(x,y)y\diamond x,$
and define $\mathfrak{L}:A\longrightarrow\mathfrak{gl}(A)$ by
(5) $\mathfrak{L}_{x}y=x\diamond y.$
###### Lemma 4.2.
With the above notations, we have that $(A,\cdot,\varepsilon)$ is an
$\varepsilon$-commutative associative algebra, and $(A,\mathfrak{L})$ is its
representation.
###### Proof.
First, for any homogeneous elements $x,y\in A$, we have
$\displaystyle\varepsilon(x,y)y\cdot x=\varepsilon(x,y)(y\diamond
x+\varepsilon(y,x)x\diamond y)=x\diamond y+\varepsilon(x,y)y\diamond x=x\cdot
y.$
Thus, the $\varepsilon$-commutativity follows.
Secondly, for any homogeneous elements $x,y,z\in A$, we have
$\displaystyle(x\cdot y)\cdot z=(x\diamond y+\varepsilon(x,y)y\diamond x)\cdot
z$ $\displaystyle=$ $\displaystyle(x\diamond y)\diamond
z+\varepsilon(x+y,z)z\diamond(x\diamond y)+\varepsilon(x,y)\\{(y\diamond
x)\diamond z+\varepsilon(x+y,z)z\diamond(y\diamond x)\\}$ $\displaystyle=$
$\displaystyle(x\diamond y)\diamond z+\varepsilon(x,y)(y\diamond x)\diamond
z+\varepsilon(x,y)\varepsilon(x+y,z)z\diamond(y\diamond
x)+\varepsilon(x+y,z)z\diamond(x\diamond y)$ $\displaystyle=$
$\displaystyle(x\diamond y)\diamond z+\varepsilon(x,y)(y\diamond x)\diamond
z+\varepsilon(x,y)\varepsilon(x+y,z)\\{(z\diamond y)\diamond
x+\varepsilon(z,y)(y\diamond z)\diamond x\\}$
$\displaystyle+\varepsilon(x+y,z)\\{(z\diamond x)\diamond
y+\varepsilon(z,x)(x\diamond z)\diamond y\\}$ $\displaystyle=$
$\displaystyle\\{(x\diamond y)\diamond z+\varepsilon(x,y)(y\diamond x)\diamond
z\\}+\\{\varepsilon(x,y+z)(y\diamond z)\diamond
x+\varepsilon(x,y+z)\varepsilon(y,z)(z\diamond y)\diamond x\\}$
$\displaystyle+\varepsilon(y,z)\\{(x\diamond z)\diamond
y+\varepsilon(x,z)(z\diamond x)\diamond y\\}$ $\displaystyle=$ $\displaystyle
x\diamond(y\diamond z)+\varepsilon(x,y+z)(y\diamond z)\diamond
x+\varepsilon(y,z)\varepsilon(x,y+z)(z\diamond y)\diamond
x+\varepsilon(y,z)x\diamond(z\diamond y),$
and
$\displaystyle x\cdot(y\cdot z)=x\cdot(y\diamond z+\varepsilon(y,z)z\diamond
y)$ $\displaystyle=$ $\displaystyle x\diamond(y\diamond
z)+\varepsilon(x,y+z)(y\diamond z)\diamond
x+\varepsilon(y,z)\\{x\diamond(z\diamond y)+\varepsilon(x,y+z)(z\diamond
y)\diamond x\\}$ $\displaystyle=$ $\displaystyle x\diamond(y\diamond
z)+\varepsilon(x,y+z)(y\diamond z)\diamond
x+\varepsilon(y,z)\varepsilon(x,y+z)(z\diamond y)\diamond
x+\varepsilon(y,z)x\diamond(z\diamond y).$
Thus, the associativity follows.
From the definition of $\mathfrak{L}$, it follows that
$\mathfrak{L}_{x\cdot y}z=(x\cdot y)\diamond z=(x\diamond
y+\varepsilon(x,y)(y\diamond x))\diamond z=x\diamond(y\diamond
z)=\mathfrak{L}_{x}\mathfrak{L}_{y}(z).$
Hence, $(A,\mathfrak{L})$ is a representation of the $\varepsilon$-commutative
associative algebra $(A,\cdot,\varepsilon)$. ∎
###### Definition 4.3.
$(A,\diamond,\ast,\varepsilon)$ is called a pre-$F$-manifold color algebra if
$(A,\diamond,\varepsilon)$ is a Zinbiel color algebra and
$(A,\ast,\varepsilon)$ is a pre-Lie color algebra, such that, for all
homogeneous elements $x,y,z,w\in A$,
$F_{1}(x\cdot y,z,w)=x\diamond F_{1}(y,z,w)+\varepsilon(x,y)y\diamond
F_{1}(x,z,w),$
$\displaystyle(F_{1}(x,y,z)+\varepsilon(y,z)F_{1}(x,z,y)+\varepsilon(x,y+z)F_{2}(y,z,x))\diamond
w$ $\displaystyle=$ $\displaystyle\varepsilon(x,y+z)F_{2}(y,z,x\diamond
w)-x\diamond F_{2}(y,z,w),$
where $F_{1},F_{2}:\otimes^{3}A\longrightarrow A$ are defined by
$\displaystyle F_{1}(x,y,z)$ $\displaystyle=$ $\displaystyle x\ast(y\diamond
z)-\varepsilon(x,y)y\diamond(x\ast z)-[x,y]\diamond z,$ $\displaystyle
F_{2}(x,y,z)$ $\displaystyle=$ $\displaystyle x\diamond(y\ast
z)+\varepsilon(x,y)y\diamond(x\ast z)-(x\cdot y)\ast z,$
and the operation $\cdot$ and bracket $[,]$ are defined by
$x\cdot y=x\diamond y+\varepsilon(x,y)y\diamond x,\quad[x,y]=x\ast
y-\varepsilon(x,y)y\ast x.$
It is well known that $(A,[,],\varepsilon)$ is a Lie color algebra, and
$(A,L)$ is a representation of the Lie color algebra $(A,[,],\varepsilon)$
where $L:A\longrightarrow\mathfrak{gl}(A)$ is defined by
$L_{x}y=x\ast y,$
for any homogeneous elements $x,y\in A$.
###### Theorem 4.4.
Let $(A,\diamond,\ast,\varepsilon)$ be a pre-$F$-manifold color algebra. Then
* $\rm(1)$
$(A,\cdot,[,],\varepsilon)$ is an $F$-manifold color algebra, where the
operation $\cdot$ and bracket $[,]$ are defined in Definition 4.3.
* $\rm(2)$
$(A;L,\mathfrak{L})$ is a representation of $(A,\cdot,[,],\varepsilon)$, where
$L,\mathfrak{L}:A\longrightarrow\mathfrak{gl}(A)$ are defined by
$L_{x}y=x\ast y,\ \ \ \mathfrak{L}_{x}y=x\diamond y,$
for any homogeneous elements $x,y\in A$.
###### Proof.
(1) By Lemma 4.2, $(A,\cdot,\varepsilon)$ is an $\varepsilon$-commutative
associative algebra. It is known that $(A,[,],\varepsilon)$ is a Lie color
algebra. Thus we only need to prove that the color Hertling-Manin relation
holds.
Assume that $x,y,z,w\in A$ are all homogeneous elements. We claim the
following identity:
(6)
$P_{x}(y,z)=F_{1}(x,y,z)+\varepsilon(y,z)F_{1}(x,z,y)+\varepsilon(x,y+z)F_{2}(y,z,x).$
In fact, we have
$\displaystyle P_{x}(y,z)=[x,y\cdot z]-[x,y]\cdot
z-\varepsilon(x,y)y\cdot[x,z]$ $\displaystyle=$ $\displaystyle x\ast(y\cdot
z)-\varepsilon(x,y+z)(y\cdot z)\ast x-[x,y]\diamond
z-\varepsilon(x+y,z)z\diamond[x,y]$
$\displaystyle-\varepsilon(x,y)\\{y\diamond[x,z]+\varepsilon(y,x+z)[x,z]\diamond
y\\}$ $\displaystyle=$ $\displaystyle x\ast(y\diamond
z)-\varepsilon(x,y)y\diamond(x\ast z)-[x,y]\diamond
z+\varepsilon(y,z)\\{x\ast(z\diamond y)-\varepsilon(x,z)z\diamond(x\ast
y)-[x,z]\diamond y\\}$ $\displaystyle+\varepsilon(x,y+z)\\{y\diamond(z\ast
x)+\varepsilon(y,z)z\diamond(y\ast x)-(y\cdot z)\ast x\\}$ $\displaystyle=$
$\displaystyle
F_{1}(x,y,z)+\varepsilon(y,z)F_{1}(x,z,y)+\varepsilon(x,y+z)F_{2}(y,z,x).$
With the above identity, we have
$\displaystyle P_{x\cdot y}(z,w)-x\cdot P_{y}(z,w)-\varepsilon(x,y)y\cdot
P_{x}(z,w)$ $\displaystyle=$ $\displaystyle F_{1}(x\cdot
y,z,w)+\varepsilon(z,w)F_{1}(x\cdot
y,w,z)+\varepsilon(x+y,z+w)F_{2}(z,w,x\cdot y)$
$\displaystyle-x\cdot\\{F_{1}(y,z,w)+\varepsilon(z,w)F_{1}(y,w,z)+\varepsilon(y,z+w)F_{2}(z,w,y)\\}$
$\displaystyle-\varepsilon(x,y)y\cdot\\{F_{1}(x,z,w)+\varepsilon(z,w)F_{1}(x,w,z)+\varepsilon(x,z+w)F_{2}(z,w,x)\\}$
$\displaystyle=$ $\displaystyle\big{\\{}F_{1}(x\cdot y,z,w)-x\diamond
F_{1}(y,z,w)-\varepsilon(x,y)y\diamond F_{1}(x,z,w)\big{\\}}$
$\displaystyle+\big{\\{}\varepsilon(z,w)F_{1}(x\cdot
y,w,z)-\varepsilon(z,w)x\diamond
F_{1}(y,w,z)-\varepsilon(x,y)\varepsilon(z,w)y\diamond F_{1}(x,w,z)\big{\\}}$
$\displaystyle+\big{\\{}\varepsilon(x+y,z+w)F_{2}(z,w,x\diamond
y)-\varepsilon(x,y)\varepsilon(y,x+z+w)F_{1}(x,z,w)\diamond y$
$\displaystyle-\varepsilon(x,y)\varepsilon(z,w)\varepsilon(y,x+w+z)F_{1}(x,w,z)\diamond
y-\varepsilon(x,y)\varepsilon(x,z+w)\varepsilon(y,x+w+z)F_{2}(z,w,x)\diamond
y$ $\displaystyle-\varepsilon(y,z+w)x\diamond
F_{2}(z,w,y)\big{\\}}+\big{\\{}\varepsilon(x+y,z+w)\varepsilon(x,y)F_{2}(z,w,y\diamond
x)$ $\displaystyle-\varepsilon(x,y+z+w)F_{1}(y,z,w)\diamond
x-\varepsilon(z,w)\varepsilon(x,y+z+w)F_{1}(y,w,z)\diamond x$
$\displaystyle-\varepsilon(y,z+w)\varepsilon(x,z+w+y)F_{2}(z,w,y)\diamond
x-\varepsilon(x,y)\varepsilon(x,z+w)y\diamond F_{2}(z,w,x)\big{\\}}$
$\displaystyle=$
$\displaystyle\varepsilon(x+y,z+w)\big{\\{}F_{2}(z,w,x\diamond
y)-\varepsilon(z+w,x)F_{1}(x,z,w)\diamond y$
$\displaystyle-\varepsilon(z,w)\varepsilon(z+w,x)F_{1}(x,w,z)\diamond
y-F_{2}(z,w,x)\diamond y-\varepsilon(z+w,x)x\diamond F_{2}(z,w,y)\big{\\}}$
$\displaystyle+\varepsilon(x,y+z+w)\big{\\{}\varepsilon(y,z+w)F_{2}(z,w,y\diamond
x)-F_{1}(y,z,w)\diamond x-\varepsilon(z,w)F_{1}(y,w,z)\diamond x$
$\displaystyle-\varepsilon(y,z+w)F_{2}(z,w,y)\diamond x-y\diamond
F_{2}(z,w,x)\big{\\}}$ $\displaystyle=$
$\displaystyle\varepsilon(y,z+w)\big{\\{}\varepsilon(x,z+w)F_{2}(z,w,x\diamond
y)-F_{1}(x,z,w)\diamond y$ $\displaystyle-\varepsilon(z,w)F_{1}(x,w,z)\diamond
y-\varepsilon(x,z+w)F_{2}(z,w,x)\diamond y-x\diamond F_{2}(z,w,y)\big{\\}}$
$\displaystyle=$ $\displaystyle 0.$
Hence, $(A,\cdot,[,],\varepsilon)$ is an $F$-manifold color algebra.
(2) $(A,\mathfrak{L})$ is a representation of the $\varepsilon$-commutative
associative algebra $(A,\cdot,\varepsilon)$ by Lemma 4.2. It is known that
$(A,L)$ is a representation of the Lie color algebra $(A,[,],\varepsilon)$.
Note that $F_{1}(x,y,z)=R_{L,\mathfrak{L}}(x,y)(z)$, thus the equation
$F_{1}(x\cdot y,z,w)=x\diamond F_{1}(y,z,w)+\varepsilon(x,y)y\diamond
F_{1}(x,z,w)$
implies
$R_{L,\mathfrak{L}}(x\cdot
y,z)=\mathfrak{L}_{x}R_{L,\mathfrak{L}}(y,z)+\varepsilon(x,y)\mathfrak{L}_{y}R_{L,\mathfrak{L}}(x,z).$
On the other hand, $F_{2}(x,y,z)=S_{L,\mathfrak{L}}(x,y)(z)$, thus combining
(6), the equation
$\displaystyle(F_{1}(x,y,z)+\varepsilon(y,z)F_{1}(x,z,y)+\varepsilon(x,y+z)F_{2}(y,z,x))\diamond
w$ $\displaystyle=$ $\displaystyle\varepsilon(x,y+z)F_{2}(y,z,x\diamond
w)-x\diamond F_{2}(y,z,w)$
implies
$\mathfrak{L}_{P_{x}(y,z)}=\varepsilon(x,y+z)S_{L,\mathfrak{L}}(y,z)\mathfrak{L}_{x}-\mathfrak{L}_{x}S_{L,\mathfrak{L}}(y,z).$
Hence, $(A;L,\mathfrak{L})$ is a representation of
$(A,\cdot,[,],\varepsilon)$. ∎
## 5\. Acknowledgements
ZC was partially supported by National Natural Science Foundation of China
(11931009) and Natural Science Foundation of Tianjin (19JCYBJC30600). J. Li
was partially supported by National natural Science Foundation of China
(11771331).
# Markov models of coarsening in two-dimensional foams with edge rupture
Joseph Klobusicky
###### Abstract
We construct Markov processes for modeling the rupture of edges in a two-
dimensional foam. We first describe a network model for tracking topological
information of foam networks with a state space of combinatorial embeddings.
Through a mean-field rule for randomly selecting neighboring cells of a
rupturing edge, we consider a simplified version of the network model in the
sequence space $\ell_{1}(\mathbb{N})$ which counts total numbers of cells with
$n\geq 3$ sides ($n$-gons). Under a large cell limit, we show that number
densities of $n$-gons in the mean field model are solutions of an infinite
system of nonlinear kinetic equations. This system is comparable to the
Smoluchowski coagulation equation for coalescing particles under a
multiplicative collision kernel, suggesting gelation behavior. Numerical
simulations reveal gelation in the mean-field model, and also comparable
statistical behavior between the network and mean-field models.
Keywords: foams, kinetic equations, Markov processes, combinatorial embeddings
Mathematics Subject Classification: 82D30,37E25,60J05
## 1 Introduction
Foams are a common instance of macroscopic material structure encountered in
manufacturing. Some foams are desirable, such as those found in mousses,
breads, detergents, and cosmetics, while others are unwanted byproducts in the
production of steel, glass, and pulp [29, 7]. To better understand the complex
geometric and topological structure of three-dimensional foams, scientists
have designed simplified experiments to create two-dimensional foams, often
through trapping a soap foam in a region between two transparent plates thin
enough for only a single layer of cells to form [6, 16, 10, 27].
To replicate the topological transition that we find in an edge rupture, the
author has conducted a simple experiment with a soap foam consisting of a
mixture of liquid dish soap and water. The mixture is vigorously stirred to
produce a foam and then spooned onto a $28\times 36\times.3$ cm transparent
acrylic plate. Another plate is placed on top of the foam and then compressed
to form a two-dimensional structure. The plates are tilted vertically to drain
liquid, and after several minutes the foam sufficiently dries into a structure
approximating a planar network. To produce the transition seen in Fig. 1, a
small local force is applied to the outside of a plate at the center of an
edge, causing it to rupture, immediately followed by each of the two
neighboring edges at the rupturing edge’s endpoints merging into a single
edge. While the experiment just described selects a single edge for rupture,
multiple ruptures can occur naturally without applying external forces, with a
typical time scale for the coarsening of the foam on the order of tens of
minutes [8]. The rupture rate can be increased through using a weaker
surfactant or applying heat. Typically, periods between ruptures are
nonuniform, with infrequent ruptures eventually turning into a cascading
regime during which the majority of ruptures occur [27].
Figure 1: An edge in a two-dimensional soap foam immediately before (left) and
after (right) its rupture. The length of the edge before rupture is
approximately $1$ cm.
The focus for this work is to construct minimal Markovian models for studying
the statistical behavior of two-dimensional foams which coarsen through
multiple ruptures of the type seen in Fig. 1. As a basis for comparison, let
us briefly overview the more well-studied coarsening process of gas diffusion
across cell boundaries. For a foam with isotropic surface tension on its
boundary, gas diffusion induces network edges to evolve with respect to mean
curvature flow. In two dimensions, the $n-6$ rule of von Neumann and Mullins
[28, 23] gives a particularly elegant result that area growth of each cell
with $n$ sides is constant and proportional to $n-6$. A cell with fewer than
six sides can therefore shrink to a point, triggering topological changes in
its neighboring cells. Several physicists used the $n-6$ rule to write down
kinetic limits in the form of transport equations with constant area advection
and a nonlinear intrinsic source term for handling topological transitions.
Simulations of these models were shown to produce universal statistics found
in physical experiments and direct numerical simulations on planar networks
[14, 20, 15, 19].
The time scale for coarsening by gas diffusion is much slower than edge
rupture, and is often measured in tens of hours [8]. In a foam with rupture,
gas diffusion is a relatively minor phenomenon in determining densities for
numbers of sides, and our models for this study will not consider diffusion by
coarsening. Furthermore, the repartitioning of areas for cells after a rupture
is a complex event where edges quickly adjust to reach a quasistationary state
to minimize total surface tension, and unfortunately there is no known analog
of the $n-6$ rule relating area and cell topology for ruptures. Since a main
theme in this paper is to keep our models minimal, we will avoid questions
related to cell areas, but rather only study frequencies of $n$-gons (cells
with $n$ sides) after a total number of ruptures are performed. In Section 2,
we construct a Markov chain model over a state space of combinatorial
embeddings, which we refer to as ‘the network model’. Correlations in space
between which two edges rupture in succession have been observed in physical
experiments [6]. However, Chae and Tabor [8, Sect. IV:A] performed numerical
simulations on several random models of foam rupture with uncorrelated rules
for selecting rupturing edges, including selecting edges with uniform
probability, and found comparable long-term behavior to physical experiments.
In particular, all models produced networks consisting of larger cells
surrounded by many smaller cells having few sides.
Using combinatorial rather than geometric embeddings as a state space in the
network model allows us to track topological information of a network without
needing to record geometrical quantities such as edge length, vertex
coordinates, or curvature. A state transitions by removing a random edge from
the network and performing the smoothing operation seen in Fig. 1. Explicit
expressions for state transitions are provided in Section 2.3. While the
network model does not need any geometric information to be well-defined, it
is possible to generate a visualization of the coarsening process if we are
provided with vertex coordinates for an initial embedding. Snapshots of the
Markov chain $\\{\mathbf{G}(m)\\}_{m\geq 0}$ after $m=250k$ ruptures for
$k=1,\dots,8$ are given in Figs. 2 and 3 for foams having initial conditions
of 2500 cells generated by a randomly seeded Voronoi diagram and hexagonal
lattice.
A schematic of the changes in side numbers for cells adjacent to a rupturing
edge is given in Fig. 4. Typically, edge rupture can be seen as the
composition of two graph operations:
1. 1.
Face merging: The two cells whose boundaries completely contain the rupturing
edge will join together as a single cell after rupture. If the two cells have
$i$ and $j$ sides before rupture, the new cell created from face merging has
$i+j-4$ sides.
2. 2.
Edge merging: Each of the two cells sharing only a single vertex with the
rupturing edge will have two of its edges smooth to create a single edge. If
the two cells have $k$ and $l$ sides before rupture, the cells after edge
merging have $k-1$ and $l-1$ sides.
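The side-count bookkeeping of these two operations can be sketched directly; the following is a minimal illustration (the function name is ours), assuming an edge with four distinct neighboring cells:

```python
def rupture_side_counts(i, j, k, l):
    """Side counts after a typical edge rupture: the two cells whose
    boundaries contain the rupturing edge, with i and j sides, undergo face
    merging; the two cells sharing a single vertex with the edge, with k and
    l sides, undergo edge merging."""
    return i + j - 4, k - 1, l - 1

# The example of Fig. 4: an 8-gon and a 5-gon merge into a 9-gon, while the
# 5- and 4-sided vertex neighbors each lose one side.
rupture_side_counts(8, 5, 5, 4)  # (9, 4, 3)
```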
Figure 2: Snapshots of a sample path $\\{\mathbf{G}(m)\\}_{m\geq 0}$ with
disordered initial conditions of a Voronoi diagram with random seeding. Top
row left to right: $\mathbf{G}(0),\mathbf{G}(250),$ and $\mathbf{G}(500)$.
Middle row: $\mathbf{G}(750),\mathbf{G}(1000),$ and $\mathbf{G}(1250)$. Bottom
row: $\mathbf{G}(1500),\mathbf{G}(1750),$ and $\mathbf{G}(2000)$. Figure 3:
Snapshots of a sample path $\\{\mathbf{G}(m)\\}_{m\geq 0}$ with ordered
hexagonal lattice initial conditions. Top row left to right:
$\mathbf{G}(0),\mathbf{G}(250),$ and $\mathbf{G}(500)$. Middle row:
$\mathbf{G}(750),\mathbf{G}(1000),$ and $\mathbf{G}(1250)$. Bottom row:
$\mathbf{G}(1500),\mathbf{G}(1750),$ and $\mathbf{G}(2000)$. Figure 4: Side
numbers immediately before (left) and after (right) a typical edge rupture. A
numbers inside a cell denotes its number of sides. The rupturing edge is shown
in bold red. Shaded cells denote those which undergo face merging.
In Fig. 4, shaded cells with eight and five sides merge to form a cell with
nine sides, and the two unshaded cells with five and four sides undergo edge
merging, producing cells with four and three sides. For a cell $C_{n}$
containing $n$ sides, we represent edge rupture with the three irreversible
reactions
$\displaystyle C_{i}+C_{j}\rightharpoonup C_{i+j-4},\quad C_{k}\rightharpoonup
C_{k-1},\quad C_{l}\rightharpoonup C_{l-1}.$ (1)
Rupture is mentioned as an ‘elementary move’ in [17] and [8] along with
reactions occurring from gas diffusion, although the reaction (1) is not
explicitly written down. It is important to note that not all ruptures will
produce the reactions in (1). For instance, some edges do not have four
distinct cells as neighbors. As an example, the ‘isthmus’ shown in Fig. 5 has
only three neighbors. To further complicate matters, rupture causes a loss of
numbers of sides in neighboring cells which can create loops, multiedges, and
islands. To keep our model minimal, in Section 2.2 we define a class of
rupturable edges which restricts all reactions to satisfy (1), with the
exception of some edges at the domain boundary which have a similar reaction.
Appendix A is meant to explicitly show the variety of reactions which can
occur when some of the conditions for rupturable edges are lifted. Section 2.3
shows that the rupture operations restricted to rupturable edges is closed in
a suitably chosen space of combinatorial embeddings. This enables us to
construct a well-defined Markov chain by randomly selecting edges to rupture
at each transition.
A major advantage of keeping the network model minimal is the relative ease of
creating a simplified mean-field Markov model to approximate statistical
topologies. In Section 3, we define a mean-field rule and its associated
Markov chain for randomly selecting neighbors of a rupturing edge which only
depends on $n$-gon frequencies. A formal argument for deriving kinetic
equations in the large particle limit of the mean-field model is given in
Section 4. The limiting equations give number densities $u_{n}(t)$ of
$n$-gons, where the time scale $t\geq 0$ measures the fraction of edge ruptures over
the initial number of cells. The kinetic equations take the form of the
nonlinear autonomous system
$\displaystyle\dot{u}_{n}$
$\displaystyle=\sum_{i=3}^{n+1}K^{F}_{4+n-i,i}u_{4+n-i}u_{i}+2q_{n+1}^{E}u_{n+1}-2q_{n}^{F}u_{n}-2q_{n}^{E}u_{n},\quad
n\geq 3.$ (2)
The terms $K^{F},q^{F},$ and $q^{E}$ are state-dependent rates of creation and
annihilation of $n$-gons through face and edge merging. We derive explicit
formulas for these rates in Section 4.
We note the similarity of (2) to the Smoluchowski coagulation equation [24]
for number densities $v_{n}$ of size $n$ coalescing clusters, given by
$\dot{v}_{n}=\frac{1}{2}\sum_{i=1}^{n-1}K_{n-i,i}v_{n-i}v_{i}-\sum_{i\geq
1}K_{n,i}v_{n}v_{i},\quad n\geq 1.$ (3)
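A finite truncation of (3) with the multiplicative kernel can be integrated with an explicit Euler scheme; the sketch below is our own discretization, not taken from the paper, and illustrates how mass leaks out of the truncated system, the numerical signature of gelation:

```python
import numpy as np

def smoluchowski_step(v, dt):
    """One explicit Euler step of (3) truncated to sizes 1..N, with the
    multiplicative kernel K_{i,j} = i*j; v[n-1] is the density of n-clusters."""
    N = len(v)
    sizes = np.arange(1, N + 1)
    gain = np.zeros(N)
    for n in range(2, N + 1):             # gain: coagulations i + (n-i) -> n
        i = np.arange(1, n)
        gain[n - 1] = 0.5 * np.sum(i * (n - i) * v[i - 1] * v[n - i - 1])
    loss = sizes * v * np.sum(sizes * v)  # loss: n-clusters absorbed elsewhere
    return v + dt * (gain - loss)

# Monodisperse initial data: total mass sum(n * v_n) is conserved exactly in
# the infinite system before gelation, but decays in the truncation as mass
# flows past the largest tracked size.
v = np.zeros(100)
v[0] = 1.0
for _ in range(200):                      # integrate to t = 1
    v = smoluchowski_step(v, 0.005)
mass = float(np.sum(np.arange(1, 101) * v))
```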
A major result for the Smoluchowski equations is the decrease of the total
mass $\sum_{k\geq 1}kv_{k}$ under the multiplicative kernel $K_{i,j}=ij$ [21].
The missing mass is interpreted as a gel, or a single massive particle
of infinite mass. In (2), we find that the rate of cell merging between $i$
and $j$-gons is
$\displaystyle K^{F}_{i,j}=\frac{ij}{S^{2}(1-p_{3}^{2})},\quad S=\sum_{k\geq
3}ku_{k},\quad p_{3}=3u_{3}/S.$
The similarity between $K^{F}_{i,j}$ and $K_{i,j}$ suggests the formation of a
gel in (2), which should be interpreted as a cell with infinitely many sides.
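The face-merging rate above can be evaluated directly from a census of number densities; a small helper (the function name is ours):

```python
def face_merging_kernel(u, i, j):
    """Mean-field face-merging rate K^F_{i,j} = i*j / (S^2 (1 - p_3^2)),
    where u maps side numbers n >= 3 to number densities u_n,
    S = sum_n n*u_n, and p_3 = 3*u_3 / S."""
    S = sum(n * un for n, un in u.items())
    p3 = 3.0 * u.get(3, 0.0) / S
    return i * j / (S ** 2 * (1.0 - p3 ** 2))

# With only 4-gons at density 1/4, S = 1 and p_3 = 0, so K^F_{4,4} = 16.
face_merging_kernel({4: 0.25}, 4, 4)  # 16.0
```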
In Section 5, we perform Monte Carlo numerical simulations of edge ruptures
over large networks for both the network and mean-field models. The large
initial cell number produces number densities which are approximately
deterministic (having low variance at all times). For the mean-field model, we
find strong evidence of gelation behavior. While we find that topological
frequencies between the mean-field and network models generally agree to
within a few percentage points, we observe that gelation behavior is quite
weak in the network model. We conjecture that this is likely due to the
rupturability requirements imposed in Def. 6.
As the kinetic equations for (2) only give interactions between cells with
finitely many sides (the sol), we should interpret that the mean-field model
approximates (2) only in the pregelation phases. The postgelation regime will
require separate kinetic equations which include interactions of the sol with
the gel. An advantage to Monte Carlo simulations is that they are a relatively
simple method for approximating limiting number densities in both regimes, as
opposed to the numerics involved in a deterministic discretization of the
infinite system (2) (see [12] for a finite volume method for simulating
coagulation equations). We hope to produce a more rigorous numerical and
theoretical treatment of the phase transition in future works.
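For concreteness, one rupture in the mean-field chain can be sketched as follows. The sampling rules here are our reading of the model: neighbor cells are drawn with probability proportional to side number, face-merging partners are forbidden from both being 3-gons (consistent with the factor $1-p_{3}^{2}$ in $K^{F}$), and edge-merging cells must have at least four sides; the actual transition rates are specified in Section 3.

```python
import random

def mean_field_rupture(counts):
    """One rupture in a sketch of the mean-field model: the four neighbor
    cells of the rupturing edge are drawn from the n-gon census alone, with
    probability proportional to side number."""
    def draw(min_sides=3):
        pop = [n for n in counts if n >= min_sides for _ in range(counts[n] * n)]
        return random.choice(pop)
    while True:
        i, j = draw(), draw()
        if i + j >= 7:                   # i + j - 4 >= 3: no 2-gon is created
            break
    k, l = draw(min_sides=4), draw(min_sides=4)
    for n, d in ((i, -1), (j, -1), (i + j - 4, +1),
                 (k, -1), (k - 1, +1), (l, -1), (l - 1, +1)):
        counts[n] = counts.get(n, 0) + d

random.seed(1)
counts = {6: 1000}          # e.g. a hexagonal-lattice initial census
mean_field_rupture(counts)  # each rupture removes exactly one cell
```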
## 2 The network model
In this Section, we construct a minimal Markovian model, referred to as the
‘network model’, for tracking topological information of foams.
### 2.1 Foams as planar embeddings
We begin our construction of the network model by defining geometric
embeddings which model two-dimensional foams. Our space of embeddings is chosen to
capture the typical topological reaction (1) seen in physical foams while also
being sufficiently minimal to permit a derivation of limiting kinetic
equations.
###### Definition 1.
The space of simple foams $\mathfrak{M}(S)$ in the unit square $S=[0,1]^{2}$
is the set of planar embeddings $\widetilde{G}\subset S$ of a simple connected
trivalent planar graph $G$ such that $\widetilde{G}$ contains the boundary
$\partial S.$
Some comments are in order for our choice of embeddings. We first mention that
the ambient space $S$ can certainly be generalized to other subsets of the
plane or a two-dimensional manifold. However, restricting to the unit square
is a natural choice since previous physical experiments involve generating
foams between two rectangular glass panes, and numerical simulations
generating foams are often performed on rectangular domains [6, 16, 10, 27].
We also require that the boundary $\partial S$ is contained in the graph
embedding so that the collection of cells covers all of $S$. Edges contained
in $\partial S$ are considered as walls, and are not allowed to rupture. We
do, however, allow rupture of edges with one or both vertices on $\partial S$.
The reaction equations for these ruptured edges are slightly different than
(1), as there is no cell adjacent to the vertex which undergoes edge merging.
Requiring $G$ to be trivalent is a consequence of the Herring conditions [18]
for isotropic networks, which can be derived through a force balance argument.
Connected and simple graphs are imposed for keeping the model minimal.
Connectivity allows for us to represent all sides in a cell with a single
directed loop. Simple graphs forbid loops and multiedges, which in graph
embeddings are one and two-sided cells. To prevent the creation of 2-gons we
will require reactants in (1) to contain sufficiently many sides.
For a planar embedding $\widetilde{G}\subset S$ of a graph $G$, we can
represent faces using counterclockwise vertex paths
$\sigma=(v_{1},\dots,v_{n})$, where $\\{v_{i},v_{i+1}\\}$ is an edge in $G$
for $i=1,\dots,n-1$. By a ‘counterclockwise’ path, we mean that a single face
lies to the left of an observer traversing the edges in $\sigma$ from $v_{1}$
to $v_{n}$. Since $\widetilde{G}$ is trivalent, we refer to counterclockwise
vertex paths as left paths, and a length three left path $(v_{1},v_{2},v_{3})$
as a left turn. For a geometric embedding with curves as edges, left paths can
always be computed through an application of Tutte’s Spring Theorem [26],
which guarantees a combinatorially isomorphic embedding
$\mathcal{T}(\widetilde{G})$ of $\widetilde{G}$ where all edges are
represented by line segments. By ‘pinning’ external vertices of an outer face,
vertex coordinates of $\mathcal{T}(\widetilde{G})$ can be computed as a
solution of a linear system. In our case, if we fix the outer face in
$\mathcal{T}(\widetilde{G})$ as the boundary of the unit square $\partial S$,
with the same vertex coordinates on $\partial S$ as $\widetilde{G}$, we ensure
that the Tutte embedding is orientation preserving, so that counterclockwise
paths in $\widetilde{G}$ remain counterclockwise in
$\mathcal{T}(\widetilde{G})$. Technically, Tutte’s Spring Theorem requires
$\widetilde{G}$ to be 3-connected, which is not a condition in the definition
of a simple foam, but this can be handled by inserting sufficiently many edges
to $\widetilde{G}$ to make it 3-connected, obtaining the Tutte embedding on
the augmented graph, and then removing the added edges. Left paths in
$\widetilde{G}$ then correspond to the counterclockwise polygonal paths in
$\mathcal{T}(\widetilde{G})$ that can be found by comparing angles between
incident edges at vertices.
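The linear system mentioned above has a simple concrete form when all spring weights are equal: each unpinned vertex sits at the barycenter of its neighbors. A minimal sketch follows (not the paper's code; the toy example is not trivalent and only illustrates the solve):

```python
import numpy as np

def tutte_embedding(adj, pinned):
    """Uniform-weight Tutte embedding: solve for interior vertex coordinates
    so that each interior vertex is the average of its neighbors, with the
    vertices in `pinned` fixed at the given (x, y) coordinates."""
    interior = [v for v in adj if v not in pinned]
    idx = {v: i for i, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = len(adj[v])      # degree on the diagonal
        for w in adj[v]:
            if w in pinned:
                b[idx[v]] += pinned[w]       # pinned neighbors go to the RHS
            else:
                A[idx[v], idx[w]] -= 1.0
    pos = dict(pinned)
    for v, xy in zip(interior, np.linalg.solve(A, b)):
        pos[v] = tuple(xy)
    return pos

# One interior vertex joined to four pinned corners lands at the centroid.
corners = {'a': (0.0, 0.0), 'b': (1.0, 0.0), 'c': (1.0, 1.0), 'd': (0.0, 1.0)}
adj = {'v': ['a', 'b', 'c', 'd'], 'a': ['v'], 'b': ['v'], 'c': ['v'], 'd': ['v']}
pos = tutte_embedding(adj, corners)  # pos['v'] == (0.5, 0.5)
```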
Starting with a directed edge $(v_{1},v_{2})$, we may traverse the edges of a
face by taking a maximal number of distinct left turns. Doing so gives us a
method for representing faces in an embedding through left paths.
Figure 5: An example of an isthmus edge, shown in bold red. Arrows near edges
denote the path for the left loop containing the isthmus.
###### Definition 2.
A left loop $(v_{1},\dots,v_{n},v_{n+1})$ is a left path where (i)
$v_{1}=v_{n+1}$, (ii) $(v_{i-1},v_{i},v_{i+1})$ are distinct left turns for
$i=2,\dots,n$, and (iii) $(v_{n},v_{1},v_{2})$ is a left turn.
It is possible that both $(u,v)$ and $(v,u)$ are contained in a left loop.
When this occurs, it follows that $(u,v)$ is an isthmus, or an edge whose
removal disconnects the graph. See Fig. 5 for an example of an isthmus edge
and its associated left loop. Since $\widetilde{G}$ is connected, a left loop
uniquely determines a face, which we write as
$f=[v_{1},\dots,v_{|f|}],$ (4)
with the understanding that $(v_{1},\dots,v_{|f|},v_{1})$ is a left loop, and
square brackets denote that (4) is an equivalence relation of left loops under
a cyclic permutations of indices. The number of sides for a face is given by
$|f|$. The collection $\Pi$ of left loops obtained from an embedding of a
graph $G$ is known in combinatorial topology as a combinatorial embedding of
$G$ [11]. As a convention, $\Pi$ does not include the left loop for the outer
face obtained by traversing $\partial S$ clockwise. Note that $\Pi$ only
consists of vertices in $G$, and contains no geometrical information from the
embedding.
###### Definition 3.
The pair $\mathcal{G}=(G,\Pi)$ belongs to the space of combinatorial foams in
$S$, denoted $\mathcal{C}(S)$, if $G$ is a simple trivalent connected graph
and $\Pi$ is a combinatorial embedding of $G$ obtained from a simple foam.
In the language of computational geometry, combinatorial foams are provided
through doubly-connected edge lists [9]. Loops can be recovered through
repeatedly applying the next and previous pointers of half-edges (equivalent
to directed edges).
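As a minimal sketch of this recovery (Python assumed; the dict `rot` maps each vertex to the counterclockwise order of its neighbors, playing the role of the next pointers), all left loops of a combinatorial embedding can be enumerated without any geometric data:

```python
def left_loops(rot):
    """Enumerate all left loops of a combinatorial embedding given a rotation
    system rot: vertex -> counterclockwise list of neighbors. Each directed
    edge lies on exactly one left loop; no coordinates are required."""
    seen = set()
    loops = []
    for v in rot:
        for w in rot[v]:
            if (v, w) in seen:
                continue
            loop, a, b = [], v, w
            while (a, b) not in seen:
                seen.add((a, b))
                loop.append(a)
                nbrs = rot[b]
                # left turn: neighbor of b immediately clockwise from a
                a, b = b, nbrs[nbrs.index(a) - 1]
            loops.append(loop)
    return loops
```

For an embedding in $S$, one of the returned loops traverses the outer boundary clockwise; under the convention above it is excluded from $\Pi$.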
### 2.2 Typical edges and rupturability
We now aim to identify edges in $\mathfrak{M}(S)$ whose ruptures are well-
defined and follow the reaction (1). One implicit assumption in (1) is that an
edge has four distinct neighboring cells: two for performing face merging and
two others for edge merging. We formalize the differences between types of
neighboring cells of an edge in the following definition.
###### Definition 4.
For an edge $e=\\{u,v\\}$ in $G$ and a combinatorial foam
$\mathcal{G}=(G,\Pi)\in\mathcal{C}(S)$, a face $f\in\Pi$ is an edge neighbor
of $e$ if $(u,v)$ or $(v,u)$ is in $f$. If there exist vertices
$a,b\notin\\{u,v\\}$ such that $(a,u,b)$ or $(a,v,b)$ is a left turn in $f$,
then $f$ is a vertex neighbor of $e$.
Edge and vertex neighbors will be those cells which will undergo face and edge
merging in reaction (1), respectively. When considering common trivalent
networks such as Archimedean lattices and almost every randomly generated
Voronoi diagram, interior edges (those not intersecting $\partial S$) will
have two edge neighbors and two vertex neighbors. This is in fact the maximum
number of neighbors an edge can have.
###### Lemma 1.
For $\mathcal{G}=(G,\Pi)\in\mathcal{C}(S)$, an edge $e\in G$ can have at most
four distinct neighbors. If $e$ has four neighbors, then two will be edge
neighbors, and two will be vertex neighbors.
###### Proof.
An edge $e=\\{u_{0},v_{0}\\}$ and its neighbors can be labeled as in Figure
6(a). The four left arcs
$\displaystyle a_{1}$ $\displaystyle=(u_{1},u_{0},v_{0},v_{1}),$
$\displaystyle a_{2}=(v_{2},v_{0},u_{0},u_{2}),$ (5) $\displaystyle a_{3}$
$\displaystyle=(u_{2},u_{0},u_{1}),$ $\displaystyle a_{4}=(v_{1},v_{0},v_{2})$
(6)
contain all possible directed edges with $u_{0}$ or $v_{0}$ as an endpoint,
which implies there can be at most four neighbors of $e$, in which case each
arc belongs to a separate face. The two edge neighbors contain arcs $a_{1}$
and $a_{2}$, and the two vertex neighbors contain arcs $a_{3}$ and $a_{4}$. ∎
To limit reaction types, we will permit only edges with four neighbors to
rupture, with the exception of boundary edges (those with vertices in
$\partial S$) which have similar local configurations.
###### Definition 5.
An edge with four neighbors is a typical interior edge.
An edge $e$ is a typical boundary edge if either
(a) one and only one vertex of $e$ is in $\partial S$, and $e$ has two edge
neighbors and one vertex neighbor, or
(b) both vertices of $e$ are in $\partial S$, and $e$ has two edge neighbors
and no vertex neighbors.
The collection of typical interior edges and typical boundary edges are called
typical edges.
There are multiple examples where an edge in $\widetilde{G}$ is atypical (not
typical). For instance, an isthmus has only one edge neighbor. Other examples
include neighbors of isthmuses. For each of these configurations, rupturing an
atypical edge will produce reactions different from (1). See Appendix A for a
cataloguing of atypical edges and their associated reactions.
A second issue arising in (1) occurs when a 3-gon is a reactant in edge
merging, or two 3-gons are reactants in face merging, producing a 2-gon.
However, 2-gons correspond to multiedges, and so are forbidden in simple
foams. We impose one more requirement which ensures that all cells after
rupture have at least three sides.
###### Definition 6.
A typical edge is rupturable if both of its vertex neighbors contain at least
four edges, and at least one of its edge neighbors contains four edges. The
set of rupturable edges for a combinatorial foam $\mathcal{G}$ is denoted
$\mathcal{R}(\mathcal{G})$.
While we forbid 1- and 2-gons in simple foams for simplicity, we remark that
they can exist in physical foams. Their behavior, however, can be quite
erratic. For instance, when a 2-gon is formed, Burnett et al. [6] observed
that sometimes the cell will slide along an edge until reaching a juncture,
mutate into a 3-gon, and then quickly vanish to a point.
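The classification of Definition 4 and the rupturability test of Definition 6 translate directly into code. A sketch (Python assumed; faces are given as vertex lists of left loops, and we assume the generic case of Lemma 1 in which edge and vertex neighbors are distinct faces):

```python
def neighbors_of_edge(faces, u, v):
    """Split the faces of a combinatorial foam into edge neighbors and vertex
    neighbors of the edge {u, v}, following Definition 4."""
    edge_nb, vertex_nb = [], []
    for f in faces:
        n = len(f)
        pairs = {(f[i], f[(i + 1) % n]) for i in range(n)}
        if (u, v) in pairs or (v, u) in pairs:
            edge_nb.append(f)                 # f contains (u,v) or (v,u)
        elif any(f[i] in (u, v)
                 and f[i - 1] not in (u, v)
                 and f[(i + 1) % n] not in (u, v)
                 for i in range(n)):
            vertex_nb.append(f)               # left turn (a,u,b) or (a,v,b)
    return edge_nb, vertex_nb

def is_rupturable(faces, u, v):
    """Definition 6 for a typical interior edge: both vertex neighbors have
    at least four sides and at least one edge neighbor has at least four."""
    e_nb, v_nb = neighbors_of_edge(faces, u, v)
    if len(e_nb) != 2 or len(v_nb) != 2:      # not a typical interior edge
        return False
    return all(len(f) >= 4 for f in v_nb) and any(len(f) >= 4 for f in e_nb)
```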
### 2.3 Edge rupture
We are now ready to define an edge rupture operation on $\mathcal{C}(S)$. For
an interior rupturable edge $e=\\{u_{0},v_{0}\\}$, let $u_{i}$ and $v_{i}$ for
$i=1,2$ denote the neighboring vertices of $u_{0}$ and $v_{0}$, labeled such that
we have the left arcs (5)-(6) as shown in Figure 6(a). The four neighbors of
$\\{u_{0},v_{0}\\}$ in $\Pi$ are written as
$\displaystyle f_{1}$ $\displaystyle=[u_{2},u_{0},u_{1},A_{1}],\qquad
f_{2}=[v_{1},v_{0},v_{2},A_{3}],$ (7) $\displaystyle f_{3}$
$\displaystyle=[u_{1},u_{0},v_{0},v_{1},A_{2}],\qquad
f_{4}=[v_{2},v_{0},u_{0},u_{2},A_{4}],$ (8)
where $A_{1},\dots,A_{4}$ are left arcs. It is possible that $u_{i}=v_{j}$ for
some $i,j\in\\{1,2\\}$ so that an edge neighbor is a 3-gon. However,
$u_{1}\neq u_{2}$ and $v_{1}\neq v_{2}$, since equality would make $G$ a
multigraph. Also, the sets $\\{u_{i}:i=1,2\\}$ and $\\{v_{i}:i=1,2\\}$ are not
equal, since equality would force both edge neighbors of $e$ to have three sides,
violating the rupturability conditions in Def. 6.
###### Definition 7.
For $\mathcal{G}=(G,\Pi)\in\mathcal{C}(S)$, we define an edge rupture
$\Phi_{e}(\mathcal{G})$ for an edge
$e=\\{u_{0},v_{0}\\}\in\mathcal{R}(\mathcal{G})$ through the mapping
$(G,\Pi)\mapsto\mathcal{G}^{\prime}=(G^{\prime},\Pi^{\prime})$. If $G$ has
vertices labeled as in Fig. 6(a), we obtain $G^{\prime}$ from $G$ by
1. 1.
Removing $\\{u_{0},v_{0}\\}$, followed by
2. 2.
Edge smoothing on the (now degree 2) vertices $u_{0}$ and $v_{0}$ by removing
$\\{u_{0},u_{1}\\},\\{u_{0},u_{2}\\},\\{v_{0},v_{1}\\}$, and
$\\{v_{0},v_{2}\\}$, and adding edges $\\{u_{1},u_{2}\\}$ and
$\\{v_{1},v_{2}\\}$.
If $e$ is an interior edge, we obtain $\Pi^{\prime}$ by removing faces
$f_{1},\dots,f_{4}$ from $\Pi$ and adding
$\displaystyle f_{1}^{\prime}=[u_{2},u_{1},A_{1}],\quad
f_{2}^{\prime}=[v_{2},v_{1},A_{3}],\quad
f_{3}^{\prime}=[u_{1},u_{2},A_{4},v_{2},v_{1},A_{2}].$ (9)
For a boundary edge where $u_{0}$ (or $v_{0}$) is in $\partial S$, the vertex
neighbor $f_{1}$ (or $f_{2}$) does not exist, and we omit the addition of
$f_{1}^{\prime}$ (or $f_{2}^{\prime}$) in (9).
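With the labels of Fig. 6(a), the face update (9) amounts to cutting the marked prefixes out of the four neighboring left loops and splicing the remainders. A sketch of the interior case (Python assumed; `arc_after` is a hypothetical helper that rotates a cyclic loop to expose a given prefix):

```python
def arc_after(face, prefix):
    """Rotate the cyclic vertex list `face` so it begins with `prefix` and
    return the remaining arc."""
    n, k = len(face), len(prefix)
    for s in range(n):
        rot = face[s:] + face[:s]
        if tuple(rot[:k]) == tuple(prefix):
            return rot[k:]
    raise ValueError("prefix not found in face")

def rupture_faces(f1, f2, f3, f4, u0, v0, u1, u2, v1, v2):
    """Face update (9) for an interior edge rupture of {u0, v0}, with the
    four neighbors labeled as in (7)-(8)."""
    A1 = arc_after(f1, (u2, u0, u1))          # vertex neighbor of u0
    A3 = arc_after(f2, (v1, v0, v2))          # vertex neighbor of v0
    A2 = arc_after(f3, (u1, u0, v0, v1))      # edge neighbor
    A4 = arc_after(f4, (v2, v0, u0, u2))      # edge neighbor
    f1p = [u2, u1] + A1                       # loses one side
    f2p = [v2, v1] + A3                       # loses one side
    f3p = [u1, u2] + A4 + [v2, v1] + A2       # merged face, i + j - 4 sides
    return f1p, f2p, f3p
```

Counting sides confirms the interior reaction (10): $|f_{1}^{\prime}|=|f_{1}|-1$, $|f_{2}^{\prime}|=|f_{2}|-1$, and $|f_{3}^{\prime}|=|f_{3}|+|f_{4}|-4$.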
A schematic of an embedding before and after the edge rupture process is given in
Fig. 6 (a)-(b). From counting sides of faces removed and added in rupture, we
obtain
###### Lemma 2.
The types of reactions from edge rupture are limited to either
1. 1.
Interior rupture:
$C_{i}+C_{j}\rightharpoonup C_{i+j-4},\quad C_{k}\rightharpoonup C_{k-1},\quad
C_{l}\rightharpoonup C_{l-1},$ (10)
2. 2.
Boundary rupture with one vertex on $\partial S$:
$C_{i}+C_{j}\rightharpoonup C_{i+j-4},\quad C_{k}\rightharpoonup C_{k-1},$
(11)
3. 3.
Boundary rupture with two vertices on $\partial S$:
$C_{i}+C_{j}\rightharpoonup C_{i+j-4}.$ (12)
Figure 6: Left loops before and after the rupture of $\\{u_{0},v_{0}\\}$.
It is also straightforward to show
###### Lemma 3.
Let $\mathcal{G}=(G,\Pi)\in\mathcal{C}(S)$. If $e\in\mathcal{R}(\mathcal{G})$
and $\Phi_{e}(\mathcal{G})=(G^{\prime},\Pi^{\prime})$, then $G^{\prime}$ is
connected.
###### Proof.
Since $e$ is rupturable, it cannot be an isthmus, so $G$ remains connected
after removing $e$ in Step 1 of Def. 7. It is also connected after Step 2 as
edge smoothing clearly maintains connectivity. ∎
Our main result is then
###### Theorem 1.
Edge rupture is closed in the space of combinatorial foams. In other words,
for $\mathcal{G}=(G,\Pi)\in\mathcal{C}(S)$ and $e\in\mathcal{R}(\mathcal{G})$,
then $\Phi_{e}(\mathcal{G})=(G^{\prime},\Pi^{\prime})\in\mathcal{C}(S)$.
###### Proof.
From Lemma 3, $G^{\prime}$ is connected and it is clear that Steps 1 and 2 in
Def. 7 maintain trivalency. Furthermore, from the requirements for rupturable
edges in Definition 6 and the possible reactions listed in Lemma 2, the three
new faces in $G^{\prime}$ each have at least three sides. ∎
With Theorem 1, we are now ready to define a Markov chain for edge rupture in
the state space $\mathcal{C}(S)$. For each state
$\mathcal{G}\in\mathcal{C}(S)$, the range of possible one step transitions is
given by $\cup_{e\in\mathcal{R}(\mathcal{G})}\Phi_{e}(\mathcal{G})$. If
$|\mathcal{R}(\mathcal{G})|\geq 1$, we randomly select a rupturable edge
uniformly, so that the probability transition kernel $p(\cdot,\cdot)$ is
defined, for $e\in\mathcal{R}(\mathcal{G})$, by
$p(\mathcal{G},\Phi_{e}(\mathcal{G}))=\frac{1}{|\mathcal{R}(\mathcal{G})|}.$
(13)
In the case where there are no rupturable edges, we define $\mathcal{G}$ to be
an absorbing state, so that $p(\mathcal{G},\mathcal{G})=1$. Uniform
probabilities were also chosen for the simplest model of edge selection in [8]
along with other distributions which considered geometric quantities such as
the length of an edge. While we focus only on uniform selection of edges, more
complicated transitions can be considered which depend on the local
topological configurations of neighboring edges of $\mathcal{G}$.
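The kernel (13) is then nothing more than a uniform draw over the current rupturable set. A sketch (Python assumed; `rupturable_edges` and `rupture` are hypothetical stand-ins for the operations defined above):

```python
import random

def step(foam, rupturable_edges, rupture):
    """One Markov transition: pick a rupturable edge uniformly at random and
    rupture it; states with no rupturable edges are absorbing."""
    R = rupturable_edges(foam)
    if not R:
        return foam                  # absorbing state: p(G, G) = 1
    return rupture(foam, random.choice(R))
```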
Beginning with an initial state
$\mathbf{G}(0)=\mathbf{G}_{0}\in\mathcal{C}(S)$, the Markov chain
$\\{\mathbf{G}(m)\\}_{m\geq 0}$ is defined on $\mathcal{C}(S)$ recursively by
obtaining $\mathbf{G}(m)$ from a random edge rupture on $\mathbf{G}(m-1)$.
After generating an initial embedding and recording left loops to obtain an
embedding topology $\mathbf{G}_{0}$, it is not necessary to use any
geometrical quantities to perform one or more edge ruptures. If available,
however, we may use vertex coordinates from initial conditions of a simple
foam for providing a visual of sample paths. This is done by fixing positions
of vertices, and adding new edges in Step 2 of Def. 7 as line segments. This
method is especially convenient with initial conditions such as Voronoi
diagrams and trivalent Archimedean lattices, which have straight segments as
edges and vertex coordinates that are easy to numerically generate, store, and
access. It should be noted that even with a valid combinatorial embedding,
representing edges as line segments for each step may produce crossings in the
visualization. However, in multiple simulations of networks we find that such
crossings are exceedingly rare.
In Fig. 2 we show snapshots of a sample path $\\{\mathbf{G}(m)\\}_{0\leq m\leq
2000}$ under disordered initial conditions of a Voronoi diagram seeded with
2500 uniformly distributed initial site points in $S$. Fig. 3 is a sample path
with ordered initial conditions of 2500 cells in a hexagonal lattice (an
experimental method for generating two-dimensional physical foams with lattice
and other ordered structures is outlined in [3]). In both figures, snapshots
are taken after $250\cdot k$ ruptures for $k=0,\dots,8$. We observe that under
both initial conditions, ruptures create networks which are markedly different
from those obtained through mean curvature flow. The most evident distinction
is in the creation of high-sided grains, which are bordered by a large number
of 3 and 4-gons. Furthermore, the universal attractor of statistical
topologies found from coarsening by gas diffusion [14, 20, 15, 19] does not
appear in edge rupturing. We address statistical topologies in more detail in
Section 5.
## 3 The mean-field model
In this section, we construct a simplified mean-field model of
$\\{\mathbf{G}(m)\\}_{m\geq 0}$. The state space $E=\ell_{1}(\mathbb{N})$
consists of summable sequences $\mathbf{L}=(L_{3},L_{4},\dots)\in E$, with
$L_{n}$ for $n\geq 3$ giving the total number of $n$-gons. For simplicity, our
model consists of $n$-gons restricted to the single reaction (1). Since there
is no notion of neighboring cells in $E$, we select four cells for face and
edge merging randomly using only frequencies in $\mathbf{L}$. The mean-field
rule is that for a randomly selected rupturable edge in a network, the
probability that a vertex or edge neighbor is an $n$-gon is proportional to
$n$, and that there are no correlations between side numbers of the neighboring
cells. Specifically, the mean-field probabilities we use for selecting a
neighboring $n$-gon at state $\mathbf{L}$ are given by the two distributions
$\displaystyle Q(n;\mathbf{L})=\frac{nL_{n}}{\sum_{i\geq
3}iL_{i}},\quad\widetilde{Q}(n;\mathbf{L})=\frac{nL_{n}\mathbf{1}_{n\geq
4}}{\sum_{i\geq 4}iL_{i}}.$ (14)
Here, $Q$ is used for face merging, and allows for sampling among all cells,
whereas $\widetilde{Q}$ forbids sampling 3-gons and is used for edge merging.
Similar mean-field rules were a popular choice in the creation of minimal
models for coarsening under gas diffusion [14, 20, 15]. It should be noted
that nontrivial correlations exist for the number of sides in cells bordering
the same edge. Studies for first and higher order correlations exist and
depend on the type of network considered [1, 22]. Therefore, we should regard
our selection probabilities $Q$ and $\widetilde{Q}$, which do not take these
correlations into account, as estimates with errors that should not be
expected to vanish as the number of cells becomes large.
We randomly select two cells for edge merging from $\mathbf{L}$, with the
number of sides $\nu_{1}$ and $\nu_{2}$ obtained by sampling from
$\widetilde{Q}$. Similarly, we select two cells for face merging, having
$\sigma_{1}$ and $\sigma_{2}$ sides obtained by sampling from $Q$. After
selecting these four cells, we update $\mathbf{L}$ in accordance with (1).
This involves removing the four reactant cells having $\nu_{i}$ and
$\sigma_{i}$ sides for $i=1,2$, and adding three product cells, having
$\sigma_{1}+\sigma_{2}-4$, $\nu_{1}-1$, and $\nu_{2}-1$ sides.
In what follows, we state in detail the process of generating
$\sigma_{i},\nu_{i}$ for $i=1,2$ through sampling from $\mathbf{L}$ without
replacement. Steps (1)-(4) remove cells from $\mathbf{L}$ which are the
reactants in (1), and step (5) adds the face and edge-merged products to
create $\mathbf{L}^{\prime}$.
Mean-field process: For a state $\mathbf{L}\in E$ with $\sum_{i\geq
4}L_{i}\geq 3$ and $\sum_{i\geq 3}L_{i}\geq 4$, obtain the transitioned state
$\mathbf{L}^{\prime}\in E$ through performing the following steps in order:
1. 1.
Sample $\nu_{1}\sim\widetilde{Q}(\cdot;\mathbf{L})$. Remove a $\nu_{1}$-gon
from $\mathbf{L}$ and update remaining cells as
$\mathbf{L}^{(1)}=(L^{(1)}_{3},L_{4}^{(1)},\dots)$, where
$L^{(1)}_{\nu_{1}}=L_{\nu_{1}}-1$ and $L^{(1)}_{i}=L_{i}$ for $i\neq\nu_{1}$.
2. 2.
Sample $\nu_{2}\sim\widetilde{Q}(\cdot;\mathbf{L}^{(1)})$. Remove a
$\nu_{2}$-gon from $\mathbf{L}^{(1)}$ and update remaining cells as
$\mathbf{L}^{(2)}=(L^{(2)}_{3},L_{4}^{(2)},\dots)$, where
$L^{(2)}_{\nu_{2}}=L_{\nu_{2}}^{(1)}-1$ and $L^{(2)}_{i}=L_{i}^{(1)}$ for
$i\neq\nu_{2}$.
3. 3.
Sample $\sigma_{1}\sim Q(\cdot;\mathbf{L}^{(2)}).$ Remove a $\sigma_{1}$-gon
from $\mathbf{L}^{(2)}$ and update remaining cells as
$\mathbf{L}^{(3)}=(L^{(3)}_{3},L_{4}^{(3)},\dots)$, where
$L^{(3)}_{\sigma_{1}}=L_{\sigma_{1}}^{(2)}-1$ and $L^{(3)}_{i}=L_{i}^{(2)}$
for $i\neq\sigma_{1}$.
4. 4.
Sample $\sigma_{2}\sim Q(\cdot;\mathbf{L}^{(3)}).$ If
$(\sigma_{1},\sigma_{2})=(3,3)$, reject both $\sigma_{1}$ and $\sigma_{2}$ and
repeat steps (3) and (4). If $(\sigma_{1},\sigma_{2})\neq(3,3)$, remove a
$\sigma_{2}$-gon from $\mathbf{L}^{(3)}$ and update remaining cells as
$\mathbf{L}^{(4)}=(L^{(4)}_{3},L_{4}^{(4)},\dots)$, where
$L^{(4)}_{\sigma_{2}}=L_{\sigma_{2}}^{(3)}-1$ and $L^{(4)}_{i}=L_{i}^{(3)}$
for $i\neq\sigma_{2}$.
5. 5.
Add a $(\sigma_{1}+\sigma_{2}-4)$, $(\nu_{1}-1)$, and $(\nu_{2}-1)$-gon to
$\mathbf{L}^{(4)}$ to obtain the transitioned state
$\mathbf{L}^{\prime}=(L_{3}^{\prime},L_{4}^{\prime},\dots)$, with
$\displaystyle L_{n}^{\prime}=$ $\displaystyle
L_{n}^{(4)}+\mathbf{1}(\sigma_{1}+\sigma_{2}-4=n)+\sum_{j=1}^{2}\mathbf{1}(n=\nu_{j}-1).$
(15)
Note that in Step 5 and in future equations we use the indicator notation for
a statement $A$, written as either $\mathbf{1}(A)$ or $\mathbf{1}_{A}$, and
defined as
$\mathbf{1}(A)=\begin{cases}1&\hbox{if }A\hbox{ holds,}\\\
0&\hbox{otherwise.}\\\ \end{cases}$ (16)
The requirement that there are at least four cells, and that three cells have
at least four sides is to ensure that sampling from $\widetilde{Q}$ and $Q$ is
always possible. Note that the sampling algorithm accounts for the edge
rupture conditions in Def. 6 by restricting sampling to occur only on cells
with at least four sides in Steps 1 and 2, and also by the rejection condition
in Step 4 forbidding both cells for face merging to be 3-gons. To ensure the
sampling process is well-defined, we define states with $\sum_{i\geq
4}L_{i}<3$ or $\sum_{i\geq 3}L_{i}<4$ as absorbing so that
$\mathbf{L}^{\prime}=\mathbf{L}$.
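The five steps above can be sketched as one transition acting on a dictionary of cell counts (Python assumed; `random.choices` performs the weighted draws from $Q$ and $\widetilde{Q}$ in (14)):

```python
import random

def draw(L, min_sides):
    """Sample a side number n with probability proportional to n * L[n],
    restricted to n >= min_sides: the distributions Q and Q-tilde in (14)."""
    pool = [(n, c) for n, c in L.items() if n >= min_sides and c > 0]
    return random.choices([n for n, _ in pool],
                          weights=[n * c for n, c in pool])[0]

def mean_field_step(L):
    """One transition L -> L' of the mean-field chain: draw nu1, nu2 for edge
    merging (>= 4 sides), then sigma1, sigma2 for face merging, all without
    replacement, rejecting (sigma1, sigma2) = (3, 3), and apply reaction (1)."""
    L = dict(L)                                # copy; sample w/o replacement
    nu1 = draw(L, 4); L[nu1] -= 1              # step 1
    nu2 = draw(L, 4); L[nu2] -= 1              # step 2
    while True:
        s1 = draw(L, 3); L[s1] -= 1            # step 3
        s2 = draw(L, 3)                        # step 4
        if (s1, s2) != (3, 3):
            L[s2] -= 1
            break
        L[s1] += 1                             # reject both and resample
    for n in (s1 + s2 - 4, nu1 - 1, nu2 - 1):  # step 5: add products
        L[n] = L.get(n, 0) + 1
    return L
```

Each nonabsorbing transition removes one cell and six sides in total, in agreement with (17).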
If we consider an initial distribution of cells $\mathbf{L}(0)\in E$, by the
above process we may obtain a Markov chain $\\{\mathbf{L}(m)\\}_{m\geq 0}$
defined on $E$ through the recursive formula
$\mathbf{L}(m)=(\mathbf{L}(m-1))^{\prime}$. Like $\\{\mathbf{G}(m)\\}_{m\geq
0}$, it is evident that at each nonabsorbing state the total number of cells
decreases by one, and the total number of sides over all cells decreases by six. In other
words, under norms $\|\mathbf{L}\|=\sum_{i\geq 3}L_{i}$ and
$\|\mathbf{L}\|_{s}=\sum_{i\geq 3}iL_{i}$,
$\|\mathbf{L}(m)\|=\|\mathbf{L}(m-1)\|-1,\quad\hbox{and}\quad\|\mathbf{L}(m)\|_{s}=\|\mathbf{L}(m-1)\|_{s}-6.$
(17)
We compare statistics of $n$-gons between the mean-field and network model in
Section 5.
## 4 Kinetic equations of the mean-field model
By considering a network with a large number of cells, we give a derivation of a
hydrodynamic limit for the state transition given in the previous section. For
the mean-field process $\mathbf{L}^{N}(m)$ with $N$ initial cells, we define
time increments $t_{m}^{N}=m/N$ to write the number densities of $n$-gons as a
continuous time càdlàg jump process
$u_{n}^{N}(t;\gamma)=\sum_{m\geq
0}\frac{L_{n}^{N}(m)}{N}\cdot\mathbf{1}(t\in[t_{m}^{N}/\gamma,t_{m+1}^{N}/\gamma)),\quad
t\geq 0.$ (18)
Here we have included a constant parameter $\gamma>0$ denoting the rate of
edge ruptures per unit time. Under the existence of limiting number densities
$u_{n}^{N}(t)\rightarrow u_{n}(t)$ as $N\rightarrow\infty$, we formally derive
limiting kinetic equations by computing the limits of the cell selection
probabilities (14) used in face and edge merging.
In the kinetic limit, the $n$-gon growth rate $\dot{u}_{n}$ is equal to the
edge rupture rate $\gamma$ multiplied by the expected number $H_{n}[u]$ of
$n$-gons gained at a rupture with limiting number densities
$u=(u_{3},u_{4},\dots)$. Decomposing $H_{n}[u]$ with respect to different
reactions, we obtain the infinite system
$\dot{u}_{n}=\gamma(H_{n,+}^{F}[u]+H_{n,+}^{E}[u]-H_{n,-}^{F}[u]-H_{n,-}^{E}[u]),\quad
n\geq 3,$ (19)
where $H_{n,\pm}^{F/E}$ denote the expected number of created $(+)$ and
annihilated $(-)$ $n$-gons from face ($F$) and edge ($E$) merging. In what
follows, we compute the explicit formulas for each term in (19).
As $N\rightarrow\infty$, the corrections to the probabilities in the mean-field
sampling process caused by sampling without replacement vanish, so that the
limiting probabilities in steps (1)-(4) of the process for
selecting reactants can be given solely in terms of $u$. The limiting
distribution of $Q$ in (14) is given by
$\displaystyle p_{n}=\frac{nu_{n}}{S(u)},\quad S(u)=\sum_{k\geq 3}ku_{k},$
(20)
and the limiting distribution of $\widetilde{Q}$ is
$\displaystyle\widetilde{p}_{n}=\frac{nu_{n}}{\widetilde{S}(u)}\mathbf{1}_{n\geq
4},\quad\widetilde{S}(u)=\sum_{k\geq 4}ku_{k}.$ (21)
From the reaction $C_{n+1}\rightharpoonup C_{n}$, we write the expected number
of created $n$-gons from edge merging as
$H_{n,+}^{E}=2\widetilde{p}_{n+1}=2q_{n+1}^{E}u_{n+1},\quad
q_{n}^{E}:=\frac{n\mathbf{1}_{n\geq 4}}{\widetilde{S}}.$ (22)
The factor of two in (22) accounts for the two edge merging reactions involved
in each rupture. From the reaction $C_{n}\rightharpoonup C_{n-1}$, the
expected number of annihilated $n$-gons from edge merging is then
$H_{n,-}^{E}=2\widetilde{p}_{n}=2q_{n}^{E}u_{n}.$ (23)
Computing expected $n$-gons from face merging involves a straightforward
conditional probability calculation. Let $\Sigma_{1}$ and $\Sigma_{2}$ be iid
random variables with $\mathbb{P}(\Sigma_{1}=n)=p_{n}$ for $n\geq 3$. Then the
number of sides $(\sigma_{1},\sigma_{2})$ for the two cells selected for face
merging has the same law as $(\Sigma_{1},\Sigma_{2})$ under the edge rupture
condition that $(\Sigma_{1},\Sigma_{2})\neq(3,3)$. The expected number of
$n$-gons selected under the reaction $C_{i}+C_{j}\rightharpoonup C_{i+j-4}$ is
then computed with linearity of expectation and the definition of conditional
probability:
$\displaystyle
H_{n,-}^{F}=\mathbb{E}[\mathbf{1}_{\sigma_{1}=n}+\mathbf{1}_{\sigma_{2}=n}]$
$\displaystyle=\mathbb{E}[\mathbf{1}_{\Sigma_{1}=n}+\mathbf{1}_{\Sigma_{2}=n}|(\Sigma_{1},\Sigma_{2})\neq(3,3)]$
(24)
$\displaystyle=\frac{2p_{3}}{1+p_{3}}\mathbf{1}_{n=3}+\frac{2p_{n}}{1-p_{3}^{2}}\mathbf{1}_{n\geq
4}.$ (25)
This may also be written as
$H_{n,-}^{F}=2q_{n}^{F}u_{n},\quad
q_{n}^{F}:=\frac{3}{S(1+p_{3})}\mathbf{1}_{n=3}+\frac{n}{S(1-p_{3}^{2})}\mathbf{1}_{n\geq
4}.$ (26)
Here, the factor of two comes from the two reactants in the single reaction
for cell merging in (1).
A similar calculation gives the probability for a pairing of cells in face
merging, with
$\displaystyle
p_{i,j}:=\mathbb{P}((\sigma_{1},\sigma_{2})=(i,j))=\frac{p_{i}p_{j}}{1-p_{3}^{2}},\quad(i,j)\neq(3,3).$
(27)
The creation of $n$-gons through face merging can be enumerated by reactions
$C_{i}+C_{4+n-i}\rightharpoonup C_{n}$ for $i=3,\dots,n+1$. The expected
number of $n$-gons created is then
$\displaystyle K^{F}_{i,j}$ $\displaystyle:=\frac{ij\mathbf{1}(i,j\geq
3,(i,j)\neq(3,3))}{S^{2}(1-p_{3}^{2})},$ (28) $\displaystyle H_{n,+}^{F}$
$\displaystyle=\sum_{i=3}^{n+1}p_{4+n-i,i}=\sum_{i=3}^{n+1}K^{F}_{4+n-i,i}u_{4+n-i}u_{i}.$
(29)
From (20)-(28), we can express $H_{n}$ explicitly in terms of $p_{k}$ and
$\widetilde{p}_{k}$. For $3\leq n\leq 6$,
$\displaystyle H_{3}$
$\displaystyle=\frac{2p_{3}p_{4}}{1-p_{3}^{2}}-\frac{2p_{3}}{1+p_{3}}+2\widetilde{p}_{4},$
(30) $\displaystyle H_{4}$
$\displaystyle=\frac{p_{4}^{2}+2(p_{3}p_{5}-p_{4})}{1-p_{3}^{2}}+2(\widetilde{p}_{5}-\widetilde{p}_{4}),$
(31) $\displaystyle H_{5}$
$\displaystyle=\frac{2(p_{3}p_{6}+p_{4}p_{5}-p_{5})}{1-p_{3}^{2}}+2(\widetilde{p}_{6}-\widetilde{p}_{5}),$
(32) $\displaystyle H_{6}$
$\displaystyle=\frac{2(p_{3}p_{7}+p_{4}p_{6}-p_{6})+p_{5}^{2}}{1-p_{3}^{2}}+2(\widetilde{p}_{7}-\widetilde{p}_{6}).$
(33)
Combining (22), (26), and (28), we rewrite (19) as an infinite-dimensional
system of nonlinear, autonomous ordinary differential equations to obtain
$\displaystyle\dot{u}_{n}$
$\displaystyle=\gamma\cdot\left(\sum_{i=3}^{n+1}K^{F}_{4+n-i,i}u_{4+n-i}u_{i}+2q_{n+1}^{E}u_{n+1}-2q_{n}^{F}u_{n}-2q_{n}^{E}u_{n}\right)$
(34)
for $n\geq 3$.
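A truncated version of the system can be integrated directly. The sketch below (Python assumed; forward Euler, with densities above a cutoff `n_max` treated as zero, which introduces a truncation error as sides flow to high-sided cells) evaluates the right-hand side of (34) from (22), (26), and (28):

```python
def kinetic_rhs(u, n_max, gamma=1.0):
    """Right-hand side of (34), truncated to 3 <= n <= n_max.
    u maps a side number n to the number density u_n; we assume a
    nonvanishing density of cells with at least four sides."""
    g = lambda k: u.get(k, 0.0)
    S = sum(k * g(k) for k in range(3, n_max + 1))             # S(u)
    St = sum(k * g(k) for k in range(4, n_max + 1))            # S-tilde(u)
    p3 = 3 * g(3) / S
    qE = lambda k: k / St if k >= 4 else 0.0                   # as in (22)
    qF = lambda k: (3 / (S * (1 + p3)) if k == 3
                    else k / (S * (1 - p3 ** 2)))              # as in (26)
    KF = lambda i, j: (0.0 if (i, j) == (3, 3) or min(i, j) < 3
                       else i * j / (S ** 2 * (1 - p3 ** 2)))  # as in (28)
    du = {}
    for n in range(3, n_max + 1):
        gain_F = sum(KF(4 + n - i, i) * g(4 + n - i) * g(i)
                     for i in range(3, n + 2))
        gain_E = 2 * qE(n + 1) * g(n + 1)
        du[n] = gamma * (gain_F + gain_E
                         - 2 * qF(n) * g(n) - 2 * qE(n) * g(n))
    return du

def euler_step(u, dt, n_max):
    """One forward Euler step of the truncated system."""
    du = kinetic_rhs(u, n_max)
    return {n: u.get(n, 0.0) + dt * du[n] for n in du}
```

With purely hexagonal initial data, the right-hand side reproduces the formal moment rates (35).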
We note a subtlety with regard to face merging and 4-gons: merging an $n$-gon
with a 4-gon produces another $n$-gon, so this reaction does not result in the
annihilation of an $n$-gon. Therefore, if we substitute $p_{n}=(1-p_{3}^{2})\sum_{i\geq
3}p_{n,i}$ into the numerators of (25), terms containing $p_{n,4}$,
corresponding to the reaction $C_{n}+C_{4}\rightharpoonup C_{n}$, should not
be included in $H_{-}^{F}$. On the other hand, these same probabilities appear
equally in $H_{+}^{F}$, corresponding to $i=4$ and $n$ for the sum in (29), in
which the merging of an $n$-gon and a 4-gon does not increase the total number of
$n$-gons. Thus the total contribution of $n$-gons by face merging with
$4$-gons in (34) is zero, and equation (34) still holds.
Setting $\gamma=1$ and summing (34) over $n\geq 3$, we find formal growth
rates for the zeroth and first moments of $u$, with
$\displaystyle\sum_{n\geq 3}\dot{u}_{n}=\sum_{n\geq 3}H_{n}[u]=-1\quad\hbox{
and }\quad\sum_{n\geq 3}n\dot{u}_{n}=\sum_{n\geq 3}nH_{n}[u]=-6.$ (35)
This simply reflects the fact that each rupture reduces the number of cells in
the foam by one and reduces the number of sides by six. Since the dynamical
system is infinite dimensional, however, it is not necessarily true that we
can interchange the derivative and sum in (35) and deduce that the total side
number $S(u)=\sum_{k\geq 3}ku_{k}$ satisfies $\dot{S}=-6$. A similar issue
arises in other models of coagulation with sufficiently fast collision rates,
in which conservation of the first moment, or mass, exists until some
nonnegative time $T_{\mathrm{gel}}$ at which total mass starts to decrease. A
popular example is the Smoluchowski equation [24] for coalescing clusters
$A_{n}$ of size $n$ under the second order reaction
$A_{i}+A_{j}\rightharpoonup A_{i+j}.$ (36)
The proportion $v_{n}$ of size $n$ clusters is given by
$\dot{v}_{n}=\frac{1}{2}\sum_{j=1}^{n-1}K_{n-j,j}v_{n-j}v_{j}-\sum_{j\geq
1}K_{n,j}v_{n}v_{j},\quad n\geq 1$ (37)
for a collision kernel $K$ describing rates of cluster collisions. The kernel
$K^{F}$ for cell merging in (28) bears resemblance to the multiplicative
kernel $K_{i,j}=ij$ for (37), differing by a factor depending on $S$ and
$p_{3}$. For (37) with the multiplicative kernel, it is well known that a
gelation time $T_{\mathrm{gel}}$ exists, meaning that the total mass
$\sum_{k\geq 1}kv_{k}(t)$ is conserved for $t\leq T_{\mathrm{gel}}$, and then
decreases for $t>T_{\mathrm{gel}}$ [21]. The interpretation is that while the
total mass of finite size clusters decreases, the remaining total mass is
contained in an infinite sized cluster called a gel.
An equivalent definition of gelation time for the Smoluchowski equations comes
from a moment analysis (see [2] for a thorough summary). Denote the $k$th
moment for solutions of (37) as $m_{k}^{S}(t)=\sum_{j\geq 1}j^{k}v_{j}(t)$.
The gelation time $T_{\mathrm{gel}}$ is then defined as the (possibly
infinite) blowup time of $m_{2}^{S}(t)$. A finite gelation time implies an
explosive flux of mass toward a large cluster, and occurs when $m_{1}^{S}(t)$
begins to decrease. To see the blowup of $m_{2}^{S}$, we compare the squared
cluster sizes of products and reactants in (36), with
$|A_{i+j}|^{2}-|A_{i}|^{2}-|A_{j}|^{2}=2ij.$ (38)
The rate of growth for the second moment is found by summing, over $i$ and
$j,$ the difference of squares in (38) multiplied by the expected number of
collisions $v_{i}v_{j}K(i,j)/2$. Thus,
$\dot{m}_{2}^{S}=\sum_{i,j\geq 1}ijK(i,j)v_{i}v_{j}.$ (39)
Under the multiplicative kernel $K(i,j)=ij$, (39) with monodisperse initial
conditions ($v_{1}(0)=1$ and $v_{j}(0)=0$ for $j\geq 2$) reduces to the
elegant form
$\dot{m}_{2}^{S}=(m_{2}^{S})^{2}\quad\Rightarrow\quad
m_{2}^{S}(t)=(1-t)^{-1},\quad t\in[0,1).$ (40)
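For completeness, the closed form in (40) can be checked by separating variables:
$\frac{dm_{2}^{S}}{(m_{2}^{S})^{2}}=dt\quad\Rightarrow\quad\frac{1}{m_{2}^{S}(0)}-\frac{1}{m_{2}^{S}(t)}=t,$
and since monodisperse initial conditions give $m_{2}^{S}(0)=1$, solving for
$m_{2}^{S}(t)$ yields $(1-t)^{-1}$, which blows up at the gelation time
$T_{\mathrm{gel}}=1$.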
To compare with our kinetic equations for foams with edge rupture, we denote
moments as $m_{k}(t)=\sum_{j\geq 3}j^{k}u_{j}(t)$. The difference of squares
from side numbers before and after reaction (1) is given by
$|C_{i+j-4}|^{2}-|C_{i}|^{2}-|C_{j}|^{2}=2ij-8(i+j)+16$ (41)
for face merging and two instances of
$|C_{k-1}|^{2}-|C_{k}|^{2}=-2k+1$ (42)
for edge merging. Ignoring technical issues of interchanging infinite sums, we
formally compute that the second moment grows as
$\displaystyle\dot{m}_{2}$ $\displaystyle=\sum_{\begin{subarray}{c}i,j\geq
3\\\
(i,j)\neq(3,3)\end{subarray}}(2ij-8(i+j)+16)K^{F}(i,j)u_{i}u_{j}+2\sum_{k\geq
4}(-2k+1)q^{E}_{k}u_{k}$ (43)
$\displaystyle=\frac{2m_{2}^{2}-16m_{1}m_{2}+16m_{1}^{2}+126u_{3}^{2}}{m_{1}^{2}(1-p_{3}^{2})}-\frac{4m_{2}-2m_{1}-30u_{3}}{m_{1}-3u_{3}}.$
(44)
As the quadratic term $m_{2}^{2}$ in (44) is similar to (39), we conjecture a
finite-time blowup of $m_{2}$. However, we defer a more rigorous moment
analysis to future work, and note that the time-dependent first moment
$m_{1}(t)$ and proportion of 3-gons $u_{3}(t)$ will almost certainly present
difficulties in either solving or estimating $m_{2}$. In particular, it is
possible that $3u_{3}$ may approach $m_{1}$ in finite time, creating a
singularity in (44).
## 5 Numerical experiments
Figure 7: Fractions of total ruptures over initial cell count and number densities of $n$-gons for network and mean-field models under disordered initial conditions with $n=3,\dots,8$. Sample paths are plotted with transparency and mean paths are plotted with solid (network) and dashed (mean-field) lines.
Figure 8: Fractions of total ruptures over initial cell count and number densities of $n$-gons for network and mean-field models under ordered initial conditions with $n=3,\dots,8$. Sample paths are plotted with transparency and mean paths are plotted with solid (network) and dashed (mean-field) lines.
Figure 9: Statistical topologies for disordered initial conditions for fractions $.06k$ for $k=0,\dots,15$ of total ruptures over the initial number of cells. Solid lines between integers are for ease of visualization. As a guide, in all figures, fractions for $6$-gons are largest at time 0 and decrease as ruptures increase. Top: Subprobabilities (left) and probabilities (right) for the network model. Bottom: Subprobabilities (left) and probabilities (right) for the mean-field model.
Figure 10: Statistical topologies for ordered initial conditions for fractions $.06k$ for $k=0,\dots,15$ of total ruptures over the initial number of cells. Solid lines between integers are for ease of visualization. As a guide, in all figures, fractions for $6$-gons are largest at time 0 and decrease as ruptures increase. Top: Subprobabilities (left) and probabilities (right) for the network model. Bottom: Subprobabilities (left) and probabilities (right) for the mean-field model.
Figure 11: Fractions of total ruptures over initial cell count and gelation fractions. Top: Ordered initial conditions for network (left) and mean-field (right) models. Bottom: Disordered initial conditions for network (left) and mean-field (right) models. Sample paths are plotted with transparency and mean paths are plotted with solid lines.
In this section, we compare simulations between the Markov chains
$\\{\mathbf{G}(m)\\}_{m\geq 0}$ and $\\{\mathbf{L}(m)\\}_{m\geq 0}$. For
simulating the network model, we consider disordered initial conditions of a
Voronoi diagram with a uniform random seeding of $3\times 10^{4}$ site points,
and also ordered initial conditions of a hexagonal (honeycomb) lattice. For
each of these initial conditions, we use the voronoi library in Python
to provide the initial combinatorial embedding through a doubly connected edge
list, from which we randomly sample rupturable edges. Over 50 simulations, we
decrease the number of cells by a decade, performing $2.7\times 10^{4}$
ruptures. For the mean-field model, we compute the initial distribution of
$n$-gons by generating Voronoi diagrams, and for ordered hexagonal lattice
conditions we set all cells to have six sides. For 50 simulations, we perform
simulations with $10^{5}$ initial cells and $9\times 10^{4}$ ruptures. Both
experiments take approximately 20 minutes to perform, although the mean-field
model is substantially easier to implement. Attempting to increase the initial
number of cells in the network model to $10^{5}$ greatly increased the run
time. As we shall we, however, using $3\times 10^{4}$ initial cells was
sufficient in creating approximately deterministic statistics for comparing
against the mean-field model.
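The initial $n$-gon distribution from a random Voronoi seeding can be sketched as follows. This is a minimal illustration using `scipy.spatial.Voronoi` rather than the paper's exact pipeline, with a reduced seed count for speed:

```python
import numpy as np
from collections import Counter
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
points = rng.random((3000, 2))  # uniform random seeding of site points
vor = Voronoi(points)

# Count sides of each bounded cell; unbounded regions (those containing the
# vertex index -1) touch the boundary and are skipped.
side_counts = Counter()
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if region and -1 not in region:
        side_counts[len(region)] += 1

total = sum(side_counts.values())
u0 = {n: c / total for n, c in sorted(side_counts.items())}  # initial densities
```

For the ordered hexagonal lattice this step is unnecessary, since all cells are 6-gons and the initial distribution is simply `{6: 1.0}`.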
A plot comparing total ruptures and number densities of $n$-gons is given in
Figs. 7 and 8. A time scale is given by the total number of ruptures over the
initial number of cells, which corresponds to the time scale in (19) with
$\gamma=1$. Each sample path is plotted with transparency along with the mean
path of the samples. We observe in both models that the evolution of $n$-gons
appears to approach a deterministic limit, although we observe greater
variance in sample paths for 7 and 8-gons. This is due to relatively fewer
cells having 7 or 8 sides, especially as the foam ages. For disordered initial
conditions, number densities of $n$-gons decrease for $n\geq 5$. The number
density of 4-gons reaches a local maximum when about half as many cells remain,
while 3-gons increase during the entire process. When 10% of cells remain,
approximately 70% of cells in the network model and nearly 90% in the mean-
field model are 3-gons. This difference gives the greatest discrepancies
between the two models. For comparing other $n$-gons, number densities agree
to within a few percentage points, with particularly accurate behavior during
the first half of the process. The two models also agree especially well for
6-gons during the entire simulation. Similar behavior occurs with ordered
initial conditions, although number densities for 4, 5, 7, and 8-gons
experience a temporary increase as the network mutates from initially
monodisperse conditions of 6-gons.
The evolution of statistical topologies is given in Figs. 9 and 10. To keep
the graphs readable, we plot only the mean frequencies over the 50
simulations, but Figs. 7 and 8 show that the variations between mean and
pathwise frequencies are small. We note that number densities in (18) are
actually subprobabilities, since in (18) we are scaling the total number of
$n$-gons at all times against the initial number of cells $N$. We also
consider normalized number densities $\hat{u}_{n}^{N}=u_{n}^{N}/\sum_{j\geq
3}u_{j}^{N}$, shown in Figs. 9 and 10. Such a normalization more clearly
demonstrates the differences of frequencies between low-sided grains.
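The normalization step is elementary; a short sketch (function name is illustrative):

```python
def normalize_densities(u):
    """Normalized number densities u_hat_n = u_n / sum_j u_j, turning the
    subprobabilities of (18) into a probability distribution over n-gons."""
    total = sum(u.values())
    return {n: x / total for n, x in u.items()}

# Example: subprobabilities summing to 0.5 after many ruptures.
print(normalize_densities({3: 0.3, 6: 0.2}))  # {3: 0.6, 6: 0.4}
```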
The most interesting difference between the two models occurs when comparing
cells having the most sides. We define the gel fraction of a combinatorial
foam $\mathcal{G}=(G,F)$ and a state $\mathbf{L}\in E$ by
$\mathrm{Gel}(\mathcal{G})=\frac{\max\\{|f|:f\in F\\}}{\sum_{f\in F}|f|},\quad\mathrm{Gel}(\mathbf{L})=\frac{\max\\{i:L_{i}>0\\}}{\|\mathbf{L}\|_{s}}.$ (45)
In words, the gel fraction is the largest fraction of total sides from a
single cell. For ordered initial conditions, we observe in Fig. 11 that
gelation occurs at about $T_{\mathrm{gel}}=.8$, meaning that
$\mathrm{Gel}(\mathbf{L})$ is approximately zero until $T_{\mathrm{gel}}$, and
suddenly increases past this point. For disordered initial conditions, gelation
occurs at approximately $T_{\mathrm{gel}}=.75$. Past the gelation time, the
gelation fraction appears to grow at a roughly linear rate until the process
is terminated at $t=.9$. Gel fractions in the network model, however, are
quite negligible, with sample paths having $\mathrm{Gel}(\mathcal{G})$ rarely
above .02, and not having the ‘elbow’ found in the mean-field model marking a
sudden increase in gel fraction. We conjecture the lack of gelation is likely
due to the edge rupture conditions in Def. 6. While such conditions allow for
deriving a simple mean-field model and limiting kinetic equations, aged foams
in the network model produce a large amount of 3-gons which forbid neighboring
edges to rupture and merge large adjacent cells.
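For the mean-field state, the gel fraction in (45) is straightforward to compute directly. A sketch, assuming $\|\mathbf{L}\|_{s}$ denotes the total number of sides $\sum_{i}iL_{i}$:

```python
def gel_fraction(L):
    """Gel(L) = max{i : L_i > 0} / ||L||_s for a mean-field state, where L
    maps a side count i to the number of i-gons and ||L||_s is assumed to be
    the total number of sides, sum_i i * L_i."""
    occupied = [i for i, count in L.items() if count > 0]
    if not occupied:
        return 0.0
    total_sides = sum(i * count for i, count in L.items())
    return max(occupied) / total_sides

# Two 3-gons and one 6-gon: the 6-gon holds 6 of the 12 total sides.
print(gel_fraction({3: 2, 6: 1}))  # 0.5
```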
## 6 Conclusion
We have studied a minimal Markov chain on the state space of combinatorial
embeddings which models the rupture of edges in foams. The model can be
further simplified by a mean-field assumption on the selection of which cells
are neighbors of a rupturing edge, producing a Markov chain on the state space
$\ell_{1}(\mathbb{N})$. An advantage to using such a mean-field model is in
the derivation of limiting kinetic equations (34), a nonlinear infinite system
which bears resemblance to the Smoluchowski coagulation equations with
multiplicative kernel. Numerical simulations of the mean-field model show a similar
phase transition (the creation of a gel) also seen in models of coagulation. A
quadratic term in the formal derivation of the first order ODE (43)-(44)
suggests that the second moment $m_{2}(t)$ has finite time blowup, but it
remains to show this rigorously.
A number of computational and mathematical questions can be raised from this
study. First, it should be noted that the kinetic equations (34) do not
account for interactions between cells with finitely many sides and the
hypothesized gel (an $\infty$-gon). Thus, our kinetic equations are only valid
in the pre-gelation phase. Since we should expect the $\infty$-gon to interact
with the rest of the foam after gelation, the kinetic equations should be
augmented, akin to the Flory model of polymer growth [13], to include a term
$u_{\infty}(t)$ for the fraction of sides belonging to the gel. A numerical
investigation relating the mean-field process to a discretization scheme of
the kinetic equation, perhaps similar to the finite-volume method used in
[12], would prove useful in estimating gelation times as well as convergence
rates of the stochastic mean-field process to its law of large numbers limit.
We may also focus on the more combinatorial related questions of the network
model. One hypothesis is that more significant gelation behavior will arise
under relaxed conditions for rupture. While dropping rupturability conditions
offers a more realistic version of edge rupture, cataloguing possible
reactions becomes much more complicated, as outlined in Appendix A. Advances
in proving a phase transition for the network model could potentially use
methods from the similar problem of graph percolation [5]. Here, edges are
randomly occupied in a large graph, and a phase transition occurs when
the probability of edge occupation passes a percolation threshold, creating a
unique graph component of occupied edges. Bond percolation thresholds have
been established in a variety of networks, including hexagonal lattices [25]
and Voronoi diagrams [4].
Finally, we mention a natural way for introducing cell areas. While we have
interpreted networks as foams, we can alternatively see them as spring
networks, with vertices as point masses and edges as springs between the
points. This allows a natural interpretation of areas arising from Tutte’s
spring theorem [26], which creates a planar network as minimizing distortion
energy of the spring network, and cell areas can easily be computed once the
minimal configuration is found through solving a linear system. A random
‘snipping’ of springs would typically produce the same topological reaction
(1), but with spring embedding we may now ask questions regarding gelation for
both topology and area.
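A minimal sketch of the Tutte spring embedding mentioned above: pin the boundary vertices and place each interior vertex at the average of its neighbors, which amounts to solving a linear system (the graph representation and names are illustrative):

```python
import numpy as np

def tutte_embedding(adj, boundary_pos):
    """Tutte spring embedding: boundary vertices are pinned at boundary_pos,
    and each interior vertex sits at the average of its neighbors, i.e. the
    minimizer of the spring distortion energy. Solves the linear system."""
    n = len(adj)
    interior = [v for v in range(n) if v not in boundary_pos]
    idx = {v: k for k, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = len(adj[v])
        for w in adj[v]:
            if w in boundary_pos:
                b[idx[v]] += boundary_pos[w]
            else:
                A[idx[v], idx[w]] -= 1.0
    pos = {v: np.asarray(p, dtype=float) for v, p in boundary_pos.items()}
    if interior:
        sol = np.linalg.solve(A, b)
        for v in interior:
            pos[v] = sol[idx[v]]
    return pos

# One interior vertex joined to four pinned corners lands at their centroid.
corners = {1: (1, 1), 2: (1, -1), 3: (-1, 1), 4: (-1, -1)}
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(tutte_embedding(adj, corners)[0])  # [0. 0.]
```

Once the minimal configuration is found, each cell's area follows from its polygon of vertex positions.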
Acknowledgements: The author wishes to thank Anthony Kearsley and Paul Patrone
for providing guidance during his time as a National Research Council
Postdoctoral Fellow at the National Institute of Standards and Technology, and
also Govind Menon for helpful suggestions regarding the preparation of this
paper.
## Appendix A Typical and atypical reactions
Figure 12: Typical and atypical reactions. The rupturable edge in each diagram
is the horizontal edge in the center of the diagram. An edge with a $\circ$
symbol in its center denotes an isthmus. A vertical edge with $W$ denotes that
a vertex of the rupturable edge is contained on a wall. For the 3-isthmus,
3-isthmus+wall, and 5-isthmus, a square with $L_{i}$ for $i=1,2$ denotes a
left arc connected to an isthmus with $L_{i}$ sides. For isthmus and
isthmus+wall, the square with $j$ denotes a $j$-gon connected to an isthmus,
and the square with $L$ denotes a left arc containing $L$ sides (including the
two contained in the $k$-gon and connected to the isthmus).
By removing the condition in Def. 6 that a rupturing edge must be typical, we
can consider the broader collection of atypical configurations and their
corresponding reactions. A diagram of the thirteen different local
configurations and the twelve different reactions for typical and atypical
edges are given in Fig. 12. For some of these reactions, there are cells which
undergo both edge and face merging, so for simplicity the collection of
reacting cells and their products are listed as a single reaction. For each
reaction listed, we assume a sufficient number of sides in each reactant cell
so that all products have at least three sides. The set of atypical edges
includes isthmuses, whose rupture disconnects the foam. If we wished to
continue rupturing after rupturing an isthmus, it would be necessary to relax
the requirement of connectivity in a simple foam, which in turn would further
increase possible reactions. Even more reactions are possible by permitting
foams to include loops (1-gons) and multiedges (2-gons). For now, we withhold
from enumerating this rather complicated set of reactions.
We now give an informal derivation for how the enumeration in Fig. 12 is
obtained. This is done by counting reactions in configurations arising from
whether a rupturable edge $e=\\{u_{0},v_{0}\\}$ or its neighbors are
isthmuses. We begin by considering configurations with no isthmuses. We have
already discussed the three typical reactions (10)-(12). There is also the
possibility that an interior edge $e$ contains two edge neighbors and a single
vertex neighbor containing both $u_{0}$ and $v_{0}$. This cell wraps around
several other cells to contain both vertices, so we call such a configuration
a halo.
If $e$ is not an isthmus, it is possible for either one or two incident edges
to be isthmuses, but no more. This follows from the fact that if two isthmuses
are incident to a vertex, then the third incident edge must be an isthmus as
well. This creates four possible configurations: two containing one isthmus
neighbor with or without a vertex contained on the boundary, and another
containing two isthmus neighbors (both of which produce the same reaction
$C_{i}+C_{j}\rightharpoonup C_{i+j-6})$. Since the original edge is not an
isthmus, each of these configurations after rupture remains connected.
We finally consider the set of configurations for when $e$ is an isthmus. If
no other edges are isthmuses, then $e$ can be in the interior of $S$ or have a
single vertex in $\partial S$ (two such vertices on $\partial S$ would imply
that $e$ is not an isthmus). One or both of $u_{0}$ and $v_{0}$ can have all of
their incident edges as isthmuses. If one vertex of $e$ has three incident
isthmuses, then the other vertex can either be on $\partial S$, or have one or
three incident isthmuses. In total, there are five different reactions with
$e$ as an isthmus.
Some care is needed when counting the products for reactions with isthmuses.
Under the left path interpretation for face sides, isthmuses count for two
sides. Additionally, the rupture of an isthmus will disconnect the network.
This results in the creation of a new ‘island’ cell with a left path of
exterior edges around the island, which are also removed from the cell that
originally contained the ruptured isthmus $e$. In all reactions, the change in
total number of sides is given by the number of boundary vertices in $e$ minus
six.
In each atypical reaction, the process of edge removal and insertion is
the same as for typical reactions. Updates for left loops in the combinatorial
foam are more complicated, and will depend on the local configuration. As an
example, let us consider the isthmus neighbor configuration, which has a
single vertex neighbor $f_{1}$ and two edge neighbors $f_{2},f_{3}$. We write
the left loops of these neighbors as
$\displaystyle f_{1}=[v_{1},v_{0},v_{2},A_{2}],\qquad
f_{2}=[v_{2},v_{0},u_{0},u_{2},A_{1}],$ (46) $\displaystyle
f_{3}=[u_{1},u_{0},v_{0},v_{1},A_{3},u_{2},u_{0},u_{1},A_{4}],$ (47)
where $\\{u_{0},u_{1}\\}$ is an isthmus, and $A_{1},\dots,A_{4}$ are left
arcs. After rupture, there are two cells remaining, with left loops
$\displaystyle f_{2}^{\prime}=[v_{1},v_{2},A_{2}],\quad
f_{2,3}^{\prime}=[u_{1},u_{2},A_{1},v_{2},v_{1},A_{3},u_{2},u_{1},A_{4}].$
(48)
## Conflict of interest
The author declares that he has no conflict of interest.
## References
* [1] D. Aboav, The arrangement of grains in a polycrystal, Metallography, 3 (1970), pp. 383–390.
* [2] D. J. Aldous et al., Deterministic and stochastic models for coalescence (aggregation and coagulation): a review of the mean-field theory for probabilists, Bernoulli, 5 (1999), pp. 3–48.
* [3] J. Bae, K. Lee, S. Seo, J. G. Park, Q. Zhou, and T. Kim, Controlled open-cell two-dimensional liquid foam generation for micro-and nanoscale patterning of materials, Nature communications, 10 (2019), pp. 1–9.
  * [4] A. M. Becker and R. M. Ziff, Percolation thresholds on two-dimensional Voronoi networks and Delaunay triangulations, Physical Review E, 80 (2009), p. 041101.
  * [5] B. Bollobás and O. Riordan, Percolation, Cambridge University Press, 2006.
* [6] G. Burnett, J. Chae, W. Tam, R. M. De Almeida, and M. Tabor, Structure and dynamics of breaking foams, Physical Review E, 51 (1995), p. 5788.
* [7] I. Cantat, S. Cohen-Addad, F. Elias, F. Graner, R. Höhler, O. Pitois, F. Rouyer, and A. Saint-Jalmes, Foams: structure and dynamics, OUP Oxford, 2013.
* [8] J. Chae and M. Tabor, Dynamics of foams with and without wall rupture, Physical Review E, 55 (1997), p. 598.
* [9] M. De Berg, M. Van Kreveld, M. Overmars, and O. Schwarzkopf, Computational geometry, in Computational geometry, Springer, 1997, pp. 1–17.
* [10] J. Duplat, B. Bossa, and E. Villermaux, On two-dimensional foam ageing, Journal of fluid mechanics, 673 (2011), pp. 147–179.
* [11] J. R. Edmonds Jr, A combinatorial representation for oriented polyhedral surfaces, PhD thesis, 1960.
  * [12] F. Filbet and P. Laurençot, Numerical simulation of the Smoluchowski coagulation equation, SIAM Journal on Scientific Computing, 25 (2004), pp. 2004–2028.
  * [13] P. J. Flory, Molecular size distribution in three dimensional polymers. I. Gelation, Journal of the American Chemical Society, 63 (1941), pp. 3083–3090.
* [14] H. Flyvbjerg, Model for coarsening froths and foams, Physical Review E, 47 (1993), p. 4037.
* [15] V. Fradkov, A theoretical investigation of two-dimensional grain growth in the ‘gas’ approximation, Philosophical Magazine Letters, 58 (1988), pp. 271–275.
* [16] J. A. Glazier, S. P. Gross, and J. Stavans, Dynamics of two-dimensional soap froths, Physical Review A, 36 (1987), p. 306.
* [17] J. A. Glazier and D. Weaire, The kinetics of cellular patterns, Journal of Physics: Condensed Matter, 4 (1992), p. 1867.
* [18] C. Herring, Surface tension as a motivation for sintering, in Fundamental Contributions to the Continuum Theory of Evolving Phase Interfaces in Solids, Springer, 1999, pp. 33–69.
* [19] J. Klobusicky, G. Menon, and R. L. Pego, Two-dimensional grain boundary networks: stochastic particle models and kinetic limits, Archive for Rational Mechanics and Analysis, (2020), pp. 1–55.
* [20] M. Marder, Soap-bubble growth, Physical Review A, 36 (1987), p. 438.
* [21] J. McLeod, On an infinite set of non-linear differential equations, The Quarterly Journal of Mathematics, 13 (1962), pp. 119–128.
  * [22] L. Meng, H. Wang, G. Liu, and Y. Chen, Study on topological properties in two-dimensional grain networks via large-scale Monte Carlo simulation, Computational Materials Science, 103 (2015), pp. 165–169.
* [23] W. W. Mullins, Two-dimensional motion of idealized grain boundaries, Journal of Applied Physics, 27 (1956), pp. 900–904.
  * [24] M. v. Smoluchowski, Drei Vorträge über Diffusion, Brownsche Bewegung und Koagulation von Kolloidteilchen, Zeitschrift für Physik, 17 (1916), pp. 557–585.
* [25] M. F. Sykes and J. W. Essam, Exact critical percolation probabilities for site and bond problems in two dimensions, Journal of Mathematical Physics, 5 (1964), pp. 1117–1127.
* [26] W. T. Tutte, How to draw a graph, Proceedings of the London Mathematical Society, 3 (1963), pp. 743–767.
* [27] N. Vandewalle and J. Lentz, Cascades of popping bubbles along air/foam interfaces, Physical Review E, 64 (2001), p. 021507.
* [28] J. Von Neumann, Discussion: grain shapes and other metallurgical applications of topology, Metal Interfaces, (1952).
* [29] D. L. Weaire and S. Hutzler, The physics of foams, Oxford University Press, 2001.
# Understanding Power Consumption and Reliability of High-Bandwidth Memory
with Voltage Underscaling
Seyed Saber Nabavi Larimi1,2 Behzad Salami1,5 Osman S. Unsal1 Adrián Cristal
Kestelman1,2,3 Hamid Sarbazi-Azad4 Onur Mutlu5 1BSC 2UPC 3CSIC-IIIA 4SUT and
IPM 5ETH Zürich
###### Abstract
Modern computing devices employ High-Bandwidth Memory (HBM) to meet their
memory bandwidth requirements. An HBM-enabled device consists of multiple DRAM
layers stacked on top of one another next to a compute chip (e.g. CPU, GPU,
and FPGA) in the same package. Although such HBM structures provide high
bandwidth at a small form factor, the stacked memory layers consume a
substantial portion of the package’s power budget. Therefore, power-saving
techniques that preserve the performance of HBM are desirable. Undervolting is
one such technique: it reduces the supply voltage to decrease power
consumption without reducing the device’s operating frequency to avoid
performance loss. Undervolting takes advantage of voltage guardbands put in
place by manufacturers to ensure correct operation under all environmental
conditions. However, reducing voltage without changing frequency can lead to
reliability issues manifested as unwanted bit flips.
In this paper, we provide the first experimental study of real HBM chips under
reduced-voltage conditions. We show that the guardband regions for our HBM
chips constitute 19% of the nominal voltage. Pushing the supply voltage down
within the guardband region reduces power consumption by a factor of 1.5X for
all bandwidth utilization rates. Pushing the voltage down further by 11% leads
to a total of 2.3X power savings at the cost of unwanted bit flips. We explore
and characterize the rate and types of these reduced-voltage-induced bit flips
and present a fault map that enables the possibility of a three-factor trade-
off among power, memory capacity, and fault rate.
###### Index Terms:
High-Bandwidth Memory, Power Consumption, Voltage Scaling, Fault
Characterization, Reliability.
## I Introduction
Dynamic Random Access Memory (DRAM) is the predominant main memory technology
used in traditional computing systems. With the significant growth in the
computational capacity of modern systems, DRAM has become a
power/performance/energy bottleneck, especially for data-intensive
applications [12, 38, 39, 37, 15]. There are two approaches to alleviate this
issue: (i) replacing DRAM with emerging technologies (e.g., Magnetic Memory
(MRAM) [40, 24] and Phase-Change Memory (PCM) [46, 25, 47]) and (ii) improving
DRAM design (e.g., Reduced Latency DRAM (RLDRAM) [52], Graphics DDR (GDDR)
[18], and Low-Power DDR (LPDDR) [33]). To the latter end, High-Bandwidth
Memory (HBM) [27, 26] has been developed to bridge the _bandwidth_ gap of
computing devices and DRAM-based main memory.
An HBM-enabled device consists of multiple DRAM layers stacked and placed next
to computing elements, all integrated in the same package. Higher bandwidth,
lower power consumption, and smaller form factor are the advantages of such
integration. Therefore, despite being a relatively new technology, HBM has
found its way into high-end devices such as NVIDIA A100 [41], Xilinx Virtex
Ultrascale+ HBM family [66], and AMD Radeon Pro family [48], and into some of
the world’s fastest computing systems such as the Summit supercomputer [21].
However, being placed inside the same package with computing devices means
that HBM consumes a portion of the package’s overall power budget, limiting
the power available for computing devices. Since HBM targets high-performance
applications, any power saving technique with bandwidth overhead is
undesirable. Therefore, there is a need for methods that save power without
reducing the bandwidth.
Undervolting, also called voltage underscaling, lowers supply voltage without
decreasing operating frequency, thereby saving power without affecting
performance. In real devices, undervolting is effective because manufacturers
conservatively specify a higher supply voltage for the operation of a device
than the minimum necessary supply voltage for correct operation. The
difference between the default supply voltage and this minimum supply voltage
is called “guardband”. Guardbands are put in place to ensure correct and
consistent operation under all possible (including worst-case) operating
conditions. Pushing the supply voltage down in the guardband region reduces
power consumption.
We obtain 1.5X power savings in real HBM chips under all bandwidth utilization
rates by reducing the supply voltage from the nominal 1.2V down to 0.98V,
safely without any faults under common operating conditions. Pushing the
supply voltage further down to 0.85V results in an overall 2.3X power
savings. However, at voltages below the guardband region, device components
start experiencing timing violations, causing unwanted bit flips. In our
experiments, first bit flips occur at 0.97V. From 0.97V to 0.84V, the number
of faults increases _exponentially_ until almost all bits are faulty. Between
0.84V and 0.81V, all bits become faulty, while using voltages lower than 0.81V
results in the failure of entire HBM chips. To save power with undervolting, we
need to understand the occurrence rate of faults at each voltage level, if and
how faults are clustered, and how far we can lower the supply voltage below the
guardband region.
Undervolting has been experimentally studied on CPUs [43, 71, 51, 2], GPUs
[30, 29, 28, 70] and FPGAs [56, 60, 54], as well as DRAMs [12, 13, 19, 14],
SRAMs [68, 67], and NAND flash memories [4, 9, 5, 6, 7, 10, 36, 8]. Our work
is the first experimental study of undervolting HBM chips. Our main
contributions are as follows:
* •
We empirically measure a 19% voltage guardband in HBM chips. We show that
undervolting within the guardband region reduces HBM power consumption by a
factor of 1.5X.
* •
We empirically examine undervolting below the guardband region in HBM chips
and demonstrate a total of 2.3X power savings at the cost of some unwanted bit
flips.
* •
We provide the first experimental fault characterization study of HBM
undervolting below the guardband region. We find that (i) HBM channels behave
differently from each other with voltage underscaling due to process
variation, and (ii) most faults are clustered together in small regions of HBM
layers.
* •
We provide a fault map that enables the user to perform a three-factor trade-
off among power, fault-rate, and usable memory space. For instance, 2.3X power
savings is possible by sacrificing some memory space while the remaining
memory space can work with 0% to 50% fault rate.
## II Experimental Methodology
### II-A Background on HBM
Fig. 1(a) shows the general organization of an HBM-enabled device where
several DRAM chips (and an optional IO/controller chip) are piled and
interconnected by Through Silicon Vias (TSVs) [27]. An efficient way to
utilize an HBM stack is to place it on a silicon interposer next to computing
chips (e.g., FPGA, GPU, or CPU) inside the same package [20]. Signals between
HBM stack and computing chips go through the underlying silicon interposer. As
a result, there can be far more data lanes in an HBM channel (1024 per HBM
stack) than a regular 64-bit DRAM channel, while each HBM channel is more
efficient. Therefore, HBM provides at least an order of magnitude higher
bandwidth than DDRx DRAM [15] at a lower power consumption (nearly 7pJ/bit as
opposed to 25pJ/bit for a DDRx DRAM) with a smaller form factor [65].
### II-B Testing Platform
The hardware platform we use in our experiments consists of a Xilinx VCU128
board [62] mounted with an XCVU37P FPGA. This FPGA includes two HBM stacks of
4GB each (HBM0 and HBM1). Each stack has four DRAM chips of 1GB capacity each.
Fig. 1(b) shows a general overview of the underlying HBM memory. The FPGA
fabric is divided into three Super Logic Regions (SLRs). Each SLR is a
separately fabricated chip with configurable circuitry. SLRs are
interconnected by the same interposer technology connecting them to the HBM
stacks. Both HBM stacks of our setup are connected to SLR0, as shown in
Fig. 1(b).
Figure 1: (a) General structure of an HBM-enabled device. (b) HBM interface
and internal organization of XCVU37P, adapted from [62].
The address space of each HBM stack is divided among 8 independent Memory
Channels (MCs). Each MC is 128b wide and works independently on a 512MB memory
assigned to it. The address space of each channel is divided between two 64b
Pseudo-Channels (PCs). These two PCs share clock and command signals but have
separate data buses. Each PC independently interprets commands and works with
its own non-overlapping 256MB memory array portion. Therefore, on the memory
side, there are a total of 32 PCs, each 64b wide. On the user side, Xilinx’s HBM IP core
provides 32 AXI ports (16 per HBM stack). Each AXI port corresponds to one PC.
However, if the switching network is enabled, any packet from an AXI port can
be routed to any PC at the cost of extra delay and lower bandwidth. An AXI
port is 256b wide, which provides a 4:1 data width ratio over a PC (with 64b
width). As a result, an AXI port can operate at a clock frequency that is a
quarter of the memory data transfer rate (1:4 ratio) and yet take advantage of
the maximum HBM bandwidth provided by PCs [65]. The maximum clock frequency
allowed for memory arrays in our device is 900MHz, and being a double data
rate memory, it translates to a maximum data transfer rate of 1800 Mega-
Transfers per second (MT/s).
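As a back-of-the-envelope check (assuming the device's quoted peak of 429GB/sec uses binary gigabytes), the per-PC figures above reproduce the theoretical bandwidth:

```python
pcs = 32                  # pseudo-channels across both stacks
width_bits = 64           # data bus width per PC
transfers_per_s = 1800e6  # 900MHz clock, double data rate -> 1800 MT/s

peak_bytes_per_s = pcs * width_bits * transfers_per_s / 8
print(round(peak_bytes_per_s / 2**30))  # 429 (GiB/s)
```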
In this work, we tune the supply voltage of our HBM stacks by accessing
voltage regulators on the VCU128 board. One of these regulators, ISL68301, is
a Power Management Bus (PMBus) [45] compliant driver from Intersil Corporation
in charge of supplying power to our HBM stacks. We implement a customized
interface on the host to control this regulator and measure power, voltage and
current during our experiments. We also implement controllers for the two HBM
stacks. Each controller includes 16 AXI Traffic Generators (TG), one for each
AXI port in that stack. The controller is in charge of configuring each TG,
sending macro commands, receiving responses, checking status, and reporting
statistics back to the host. Each TG is capable of running customized macro
commands that we later use to implement our test routines. We collect power
measurements from a Texas Instruments INA226 chip placed on VCU128 board.
### II-C Experiments
We conduct experiments to measure (i) the power we can save with undervolting
and (ii) the fault rate of our HBM devices when we reduce the voltage below
the nominal value. Our experimental methodology considers the following points:
* •
Since we focus on HBM stacks and not FPGA fabric, we disable the switching
network. This removes any impact the switching network might have on the
results.
* •
We follow a statistical method to determine the number of runs based on error
and confidence margin [31]. We run each test 130 times, which gives us a 7%
error margin with 90% confidence interval.
* •
HBM bandwidth is much larger than the communication speed between the FPGA and
host CPU. As a result, we focus on measuring simple statistics on the FPGA
itself and then report those raw numbers back to the host for further
analysis.
* •
The operating temperature of HBM stacks was 35 ±1°C during our experiments.
We conduct the following power and reliability experiments:
#### II-C1 Power Measurement Tests
We measure the power consumption of HBM stacks at different bandwidth
utilization rates while underscaling their supply voltage. We reach nearly
310GB/sec when accessing the memory by enabling all 32 AXI ports at the same
time and running them at maximum frequency. (The combined peak theoretical
bandwidth of HBM stacks in VCU128 is 429GB/sec [62]. For the experiments
discussed in this paper, we reach a throughput of 310GB/sec; we believe that
with more engineering effort, the peak performance is also achievable.) The
power savings obtained via undervolting are achievable for any
bandwidth utilization rate, as discussed in Section III. We then progressively
disable AXI ports to reduce bandwidth. We do this since some AXI ports (and
their corresponding PCs) that map to the more vulnerable HBM memory blocks are
more sensitive to faults induced by undervolting than others. Therefore,
disabling those ports is an effective technique to decrease the impact of
undervolting faults and further reduce the supply voltage. Section III-B
discusses variability across different ports in more detail.
#### II-C2 Reliability Assessment
The fault characterization test we conduct writes data into the undervolted
HBM sequentially and then reads it back to check for any faults. Algorithm 1
shows the pseudo-code of the reliability tester to extract $faultCount$. We
change the HBM’s supply voltage (i.e., VCC_HBM) from 1.2V (the nominal voltage
level, i.e., $V_{nom}$) to 0.81V (minimum voltage possible for memory
operation, i.e., $V_{critical}$), with 10mV step size. We experimentally set
the batchSize, the number of times we repeat each test to ensure consistent
results, to 130. memSize is the size of the memory divided by 256b (i.e.,
width of an AXI port). By setting dataPattern to all 1’s or all 0’s, we can
check for 1-to-0 or 0-to-1 bit flips, respectively. dataWidth is 256b since
each AXI port is 256b wide. Depending on the type of the test, memSize takes
different values, i.e., 256M or 8M for testing the entire HBM or a single
Pseudo Channel (PC), respectively.
Input: batchSize: 130
dataPattern: all 1’s & all 0’s
dataWidth: 256 (b)
memSize: 256M (testing entire HBM) & 8M (testing one PC)
Output: $faultCount$ (at each voltage level)
for _$voltage$ := $V_{nom}$ downto $V_{critical}$ in 10mV steps_ do
    VCC_HBM := $voltage$;
    for _b := 0 to batchSize-1_ do
        reset_axi_ports();
        for _address := 0 to memSize-1_ do
            writeHBM(address, dataPattern);
        $faultCount$ := 0;
        for _address := 0 to memSize-1_ do
            data := readHBM(address);
            for _i := 0 to dataWidth-1_ do
                if _(data[i] != dataPattern[i])_ then
                    $faultCount$ += 1;
return $faultCount$;
Algorithm 1 Reliability assessment via sequential access
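The inner fault-counting loops of Algorithm 1 can be sketched in Python; here `read_words` stands in for data already read back from the HBM (a hypothetical substitute for the FPGA-side `readHBM` primitive):

```python
def count_bit_flips(read_words, data_pattern, data_width=256):
    """Count bits that differ from the written pattern, as in the inner
    loops of Algorithm 1. Each word is an integer holding data_width bits."""
    mask = (1 << data_width) - 1
    fault_count = 0
    for word in read_words:
        fault_count += bin((word ^ data_pattern) & mask).count("1")
    return fault_count

# Writing all 1's and reading back one word with two stuck-at-0 bits:
all_ones = (1 << 256) - 1
print(count_bit_flips([all_ones & ~0b101], all_ones))  # 2
```

Running the same check with `data_pattern` set to all 0's detects 0-to-1 flips instead, mirroring the two pattern choices in Algorithm 1.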
## III Results
### III-A Power Analysis
We divide the total power consumption of an HBM chip into _active_ and _idle_
portions.
#### III-A1 Active Power
Active power consumption of a DRAM chip is proportional to the square of
supply voltage ($V_{dd}$), as shown in Equation (1) [11]. In this equation,
$C_{L}$ is the active load capacitance, ${f}$ is operating frequency, and
$\alpha$ is the activity factor which determines the average charge/discharge
rate of the capacitor. Thus, with undervolting, we expect a quadratic
reduction in active power consumption.
$P=\alpha\times C_{L}\times f\times V_{dd}^{2}$ (1)
Our empirical results shown in Fig. 2 comply with expectations. Fig. 2 shows
the power consumption of HBM chips at representative bandwidth utilization
rates (in 25% increments).
Working within the guardband region (1.20V-0.98V) provides 1.5X power savings,
while pushing the supply voltage further down to 0.85V results in a total of
2.3X savings compared to the default voltage of 1.2V. In both cases (within or below
the guardband region), the amount of power savings is _independent_ of the
bandwidth utilization because undervolting does _not_ affect the memory
bandwidth. Therefore, we can save the memory power by undervolting no matter
what the memory bandwidth demand is.
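A quick back-of-the-envelope check of the quadratic dependence in Equation (1): if active power scales with $V_{dd}^{2}$, the predicted saving factor relative to nominal is $(V_{nom}/V)^{2}$. The small stdlib-only sketch below compares this prediction with the reported figures.

```python
# Predicted power-saving factor from the V^2 law of Equation (1):
# P ~ V_dd^2, so savings vs. nominal = (V_nom / V)^2.

V_NOM = 1.20  # nominal HBM supply voltage

def predicted_power_saving(v):
    """Power-saving factor relative to nominal, from quadratic scaling."""
    return (V_NOM / v) ** 2

print(round(predicted_power_saving(0.98), 2))  # 1.5 -> matches the 1.5X at the guardband edge
print(round(predicted_power_saving(0.85), 2))  # 1.99 -> measured savings are 2.3X; the extra
# comes from stuck bits lowering the active capacitance (see Fig. 3)
```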
Figure 2: HBM power saving by undervolting. We normalize all power
measurements to the power consumption at 1.2V with maximum bandwidth
utilization (i.e., 310GB/s). Voltage step size is 10mV in our experiments, but
the figure displays only the 50mV steps for better visibility.
On the other hand, looking at Equation (1), if we divide our power measurement
results by $V_{dd}^{2}$, we are left with raw values for $\alpha\times
C_{L}\times{f}$. The unit for these values is farads per second, which
indicates how much capacitance is being actively charged/discharged every
second. ${f}$ is constant since the clock frequency of the HBM memory and the sequence in which we run these tests are always fixed. $\alpha\times C_{L}$, on
the other hand, depends on the memory bandwidth utilization rate, which we
expect to remain fixed when working at a fixed bandwidth (i.e., same number of
PCs). As a result, we expect values for $\alpha\times C_{L}\times{f}$ to
remain the same throughout our experiments. However, through undervolting, we
observe that HBM chips’ fidelity starts to degrade. This is because at
voltages lower than 0.98V (i.e., below the guardband region), some bits remain
always stuck at 0 or 1. Since memory operations cannot charge or discharge
these faulty bits anymore, such bits do _not_ contribute to the overall active
capacitance, resulting in a drop in $\alpha$. We show this behavior in Fig. 3:
for supply voltages above 0.98V, $\alpha\times C_{L}\times{f}$ remains within
3% of what we expect. However, below 0.98V, it starts dropping and at 0.85V it
reaches 14% lower than the maximum active capacitance (at nominal voltage). In
other words, undervolting below the guardband region leads to a lower active
capacitance, as shown in Fig. 3. This is due to the exponential increase of
fault rate in HBM memory, as discussed in Section III-B.
Figure 3: Normalized $\alpha\times C_{L}\times{f}$. For each bandwidth, we
normalize all values to $\alpha\times C_{L}\times{f}$ of that bandwidth at
1.2V to rule out the effect of bandwidth on load capacitance. Below the
guardband region, the active capacitance is lower than our expectation due to some bits remaining stuck at 0 or 1, resulting in additional power gain.
Voltage step size is 10mV in our experiments, but the figure displays only the
50mV steps for better visibility.
#### III-A2 Idle Power
To evaluate idle power savings, we measure the power consumption of HBM when
bandwidth utilization is zero. We find that even when HBM is idle, it consumes
nearly one-third of the power it consumes at full load with 100% bandwidth
utilization, limiting the maximum amount of power we can save. As seen in Fig.
2, idle power gradually reduces with undervolting.
Figure 4: Fraction of faulty portion in each HBM stack at different supply voltages.

Figure 5: Percentage (%) of memory cells that are faulty for each AXI port (and its corresponding PC) at different supply voltages. The left and right halves refer to HBM0 and HBM1 chips. (Values less than 1% are rounded to 0%. “NF” means that no fault is observed.)
### III-B Reliability Analysis
#### III-B1 Overall Analysis
Fig. 4 shows the behavior of each HBM stack with undervolting. We observe the following:
* •
Guardband Region: Starting from the nominal voltage ($V_{nom}$=1.2V) down to
the minimum safe voltage ($V_{min}$=0.98V), we observe _no_ memory faults.
This _guardband_ region is safe for all operations and workloads. An
application that cannot tolerate any fault in memory has to work in this
region.
* •
Unsafe Region: Faults occur in voltages below $V_{min}$. Any application that
uses HBM with supply voltages in the _unsafe_ region needs to take the impact
of such faults into account to ensure correct operation. Reducing the voltage
introduces new faults with an exponentially growing trend until about 0.84V,
where all memory bits experience 0-to-1 or 1-to-0 bit flips. Other works have
reported similar exponential growth of faults with undervolting on regular
DDRx DRAM chips [12]. Below 0.84V, down to the minimum working voltage ($V_{critical}$=0.81V), the entire HBM becomes faulty.
* •
In our tests, HBM crashes (i.e., stops responding) at voltages below
$V_{critical}$. Even restoring the supply voltage does not re-enable
operation, and a power-down and restart is required.
#### III-B2 Detailed Analysis of Fault Rate Variation
Fig. 5 shows the fault rate for each HBM chip, each AXI port (and its
corresponding PC), and each data pattern at supply voltage levels below
$V_{min}$. Due to process variation and noise in memory, we observe three
categories of fault rate/type variation:
* •
Variation Across HBM Chips: In the _unsafe_ voltage range (i.e., between $V_{min}$ and $V_{critical}$), HBM0 has a lower fault rate than HBM1 (by 13% on average). However, both stacks have the same $V_{min}$ and $V_{critical}$.
* •
Variation Across PCs: Some PCs are more sensitive to undervolting than others
(e.g., PC4 and PC5 of HBM0 and PC18, PC19, and PC20 of HBM1). These PCs
experience a higher rate of bit flips when we reduce the voltage below
$V_{min}$.
* •
Data Pattern Variation: The first 1-to-0 and 0-to-1 bit flips start at 0.97V
and 0.96V, respectively. The average rate of 0-to-1 bit flips is 21% higher
than that of 1-to-0 bit flips.
### III-C User- and Application-Level Implications
Applications that are intrinsically resilient to faults can save more power
than others by taking advantage of aggressive undervolting even below the
guardband region. To effectively achieve this benefit, application developers
need practical information about the effects of undervolting. To this end, we
present a three-factor trade-off among power, fault rate, and available memory
capacity that helps application developers determine how much power can be
saved and what the associated costs are.
An HBM chip has multiple independently-controllable PCs (32 in our case). We
utilize this inherent independence to provide practical information about how
many PCs an application can use based on its tolerable fault rate, as shown in
Fig. 6. For example:
* •
Those applications that cannot tolerate any faults (e.g., [35, 55, 59]) and
need the entire 8GB of HBM are restricted to work only in the guardband
region, which starts at $V_{nom}$=1.2V and ends at $V_{min}$=0.98V. This
region offers a fixed 1.5X power savings without any trade-off option.
* •
Below $V_{min}$, a triple-factor trade-off is at the user’s disposal. For
example, up to 1.6X power savings is achievable for an application that cannot
tolerate _any_ faults but can work with smaller memory capacity, by using only
7 fault-free PCs operating at 0.95V.
* •
Applications that can tolerate a _non-zero_ fault rate (e.g., [23, 54, 22,
34]) allow more room for trade-offs. For example, an application that can
tolerate a 0.0001% fault rate and requires only half of the total memory
capacity can push the voltage down to 0.90V and save power by a factor of
about 1.8X.
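The trade-off described above can be mechanized once a per-PC fault map like Fig. 6 is available. The sketch below counts how many PCs remain usable for a given fault tolerance; the fault-rate numbers are made up for illustration (only 8 of the 32 PCs shown) and are not the paper's measurements.

```python
# Given a per-PC fault-rate table at some supply voltage, select the PCs
# an application can still use under its tolerable fault rate (cf. Fig. 6).

def usable_pcs(fault_rates, tolerable_rate):
    """PCs whose measured fault rate is within the application's tolerance."""
    return [pc for pc, rate in enumerate(fault_rates) if rate <= tolerable_rate]

# Hypothetical fault rates (%) for 8 of the 32 PCs at a sub-guardband voltage.
rates = [0.0, 0.0, 0.00005, 0.0002, 0.01, 0.0, 0.003, 0.0001]

print(len(usable_pcs(rates, 0.0)))     # 3 fault-free PCs available
print(len(usable_pcs(rates, 0.0001)))  # 5 PCs for a 0.0001% tolerance
```

More usable PCs mean more capacity and bandwidth, at the price of either a higher supply voltage or a higher tolerated fault rate.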
Figure 6: Number of PCs (out of 32) that can be used under different tolerable
fault rates with respect to the supply voltage level of the HBM memory. Higher
numbers mean higher memory capacity and bandwidth available for applications.
## IV Related Work
To the best of our knowledge, this paper presents the first experimental study of
undervolting in real High-Bandwidth Memory (HBM) chips. Below, we briefly
cover closely-related work on reduced-voltage operation in other computing and
memory devices.
* •
General-Purpose Processors: Papadimitriou et al. explore undervolting for
multi-core ARM processors [42, 43]. They show up to 38.8% power reduction at
the cost of up to 25% performance loss. Similar undervolting studies are
conducted for other types of processors [3, 2, 50, 51, 71].
* •
Hardware Accelerators: Undervolting in FPGAs has recently been studied [17,
53, 56, 57, 54, 60, 58]. These studies focus on undervolting multiple
components of FPGAs (e.g., Block RAMs (BRAMs) and internal components).
Undervolting in GPUs is also studied with detailed analysis of power saving,
voltage guardbands, and reliability costs [70, 28, 30, 29].
* •
Memory Chips: Koppula et al. [23] propose a DRAM undervolting and latency
reduction framework for neural networks that improves energy efficiency and
performance of such networks by 37% and 8%, respectively. Chang et al. [12]
study the impact of undervolting on the reliability and energy consumption of
DRAM by characterizing faults in real DRAM chips and provide techniques to
mitigate undervolting-induced faults. Earlier works study undervolting in main
memory systems [13, 14], but do not analyze faults due to undervolting. Ghose
et al. [16] study power consumption in modern DRAM chips. Luo et al. [34] show
that unreliable DRAM chips can be used in real applications to reduce the cost of scaling the main memory system. Undervolting is also studied for other
memory types like SRAM [68, 67] and flash [4, 9, 5, 6, 7, 10, 36, 8].
In addition to real chips, undervolting is studied at the simulation-level,
e.g., for CPUs [61, 44], FPGAs [32], ASICs [49, 69], and SRAMs [63, 64, 1].
## V Conclusion
We reported the first undervolting study of real High-Bandwidth Memory (HBM)
chips. We demonstrated 1.5X to 2.3X power savings for such chips via voltage
underscaling below the nominal level. We measured a voltage guardband of 19%
of the nominal voltage, and showed that eliminating it results in 1.5X power
savings. We discussed that further undervolting below the guardband region
provides more power savings, at the cost of unwanted bit flips in HBM cells.
We explored and characterized the behavior of these bit flips (e.g., rate,
type, and variation across memory channels) and presented a fault map that
enables the possibility of a three-factor trade-off between power, memory
capacity, and fault rate. We conclude that undervolting for High-Bandwidth
Memory chips is very promising for future systems.
## Acknowledgments
The research leading to these results has received funding from the European
Union’s Horizon 2020 Programme under the LEGaTO Project (www.legato-
project.eu), grant agreement No. 780681. This work has received financial
support, in part, from Tetramax for the LV-EmbeDL project. This work is
supported in part by funding from SRC and gifts from Intel, Microsoft and
VMware to Onur Mutlu.
## References
* [1] A. Alameldeen “Energy-Efficient Cache Design Using Variable-Strength Error-Correcting Codes” In _ISCA_ , 2011
* [2] A. Bacha “Dynamic Reduction of Voltage Margins by Leveraging On-Chip ECC in Itanium II Processors” In _ISCA_ , 2013
* [3] R. Bertran “Voltage Noise in Multi-Core Processors: Empirical Characterization and Optimization Opportunities” In _MICRO_ , 2014
* [4] Y. Cai “Error Analysis and Retention-Aware Error Management for NAND Flash Memory” In _Intel Technology Journal (ITJ)_ , 2013
* [5] Y. Cai “Error Characterization, Mitigation, and Recovery in Flash-Memory-based Solid-State Drives” In _Proc. IEEE_ , 2017
* [6] Y. Cai “Error Patterns in MLC NAND Flash Memory: Measurement, Characterization, and Analysis” In _DATE_ , 2012
* [7] Y. Cai “Read Disturb Errors in MLC NAND Flash Memory: Characterization, Mitigation, and Recovery” In _DSN_ , 2015
* [8] Y. Cai “Reliability Issues in Flash-Memory-Based Solid-State Drives: Experimental Analysis, Mitigation, Recovery” In _Proc. Springer, Inside Solid State Drives (SSDs)_ , 2018
* [9] Y. Cai “Threshold Voltage Distribution in MLC NAND Flash Memory: Characterization, Analysis, and Modeling” In _DATE_ , 2013
* [10] Y. Cai “Vulnerabilities in MLC NAND Flash Memory Programming: Experimental Analysis, Exploits, and Mitigation Techniques” In _HPCA_ , 2017
* [11] “Calculating Memory Power for DDR4 SDRAM” In _Technical Report, Micron Technology, Inc._ , 2017
* [12] K. Chang “Understanding Reduced-voltage Operation in Modern DRAM Devices: Experimental Characterization, Analysis, and Mechanisms” In _POMACS_ , 2017
* [13] H. David “Memory Power Management via Dynamic Voltage/Frequency Scaling” In _ICAC_ , 2011
* [14] Q. Deng “MemScale: Active Low-power Modes for Main Memory” In _ASPLOS_ , 2011
* [15] S. Ghose “Demystifying Complex Workload-DRAM Interactions: An Experimental Study” In _SIGMETRICS_ , 2019
* [16] S. Ghose “What Your DRAM Power Models Are Not Telling You: Lessons from a Detailed Experimental Study” In _SIGMETRICS_ , 2018
* [17] D. Gizopoulos “Modern Hardware Margins: CPUs, GPUs, FPGAs Recent System-Level Studies” In _IOLTS_ , 2019
* [18] “Graphics Double Data Rate 6 (GDDR6) SGDRAM Standard”, 2018 JEDEC Solid State Technology Association
* [19] J. Haj-Yahya “SysScale: Exploiting Multi-Domain Dynamic Voltage and Frequency Scaling for Energy-Efficient Mobile Processors” In _ISCA_ , 2020
* [20] “High-Bandwidth Memory (HBM) DRAM”, 2020 JEDEC Solid State Technology Association
* [21] J. Hines “Stepping up to Summit” In _CISE_ , 2018
* [22] K. Hsieh “Focus: Querying Large Video Datasets with Low Latency and Low Cost.” In _OSDI_ , 2018
* [23] S. Koppula “EDEN: Enabling Energy-Efficient, High-Performance Deep Neural Network Inference using Approximate DRAM” In _MICRO_ , 2019
* [24] E. Kultursay “Evaluating STT-RAM as an Energy-Efficient Main Memory Alternative” In _ISPASS_ , 2013
* [25] B. Lee “Architecting Phase-Change Memory as a Scalable DRAM Alternative” In _ISCA_ , 2009
* [26] D. Lee “HBM: Memory Solution for Bandwidth-Hungry Processors” In _HCS_ , 2014
* [27] D. Lee “Simultaneous Multi-Layer Access: Improving 3D-Stacked Memory Bandwidth at Low Cost” In _TACO_ , 2016
* [28] J. Leng “GPU Voltage Noise: Characterization and Hierarchical Smoothing of Spatial and Temporal Voltage Noise Interference in GPU Architectures” In _HPCA_ , 2015
* [29] J. Leng “GPUWattch: Enabling Energy Optimizations in GPGPUs” In _ISCA_ , 2013
* [30] J. Leng “Safe Limits on Voltage Reduction Efficiency in GPUs: A Direct Measurement Approach” In _MICRO_ , 2015
* [31] R. Leveugle “Statistical Fault Injection: Quantified Error and Confidence” In _DATE_ , 2009
* [32] S. Linda “Fast Voltage Transients on FPGAs: Impact and Mitigation Strategies” In _FCCM_ , 2019
* [33] “Low-Power Double Data Rate 4 (LPDDR4)”, 2020 JEDEC Solid State Technology Association
* [34] Y. Luo “Characterizing Application Memory Error Vulnerability to Optimize Datacenter Cost via Heterogeneous-Reliability Memory” In _DSN_ , 2014
* [35] O. Melikoglu “A Novel FPGA-Based High Throughput Accelerator For Binary Search Trees” In _HPCS_ , 2019
* [36] R. Micheloni “Inside Solid State Drives (SSDs)” In _Proc. Springer_ , 2013
* [37] O. Mutlu “A Modern Primer on Processing in Memory” In _Proc. Springer_ , 2021
* [38] O. Mutlu “Memory Scaling: A Systems Architecture Perspective” In _IMW_ , 2013
* [39] O. Mutlu “Research Problems and Opportunities in Memory Systems” In _SUPERFRI_ , 2015
* [40] S. Nabavi “Power and Energy Reduction of Racetrack-based Caches by Exploiting Shared Shift Operations” In _VLSI-SoC_ , 2016
* [41] “NVIDIA A100 Tensor Core GPU: Unprecedented Acceleration at Every Scale”, 2020 Nvidia Corporation
* [42] G. Papadimitriou “Adaptive Voltage/Frequency Scaling and Core Allocation for Balanced Energy and Performance on Multicore CPUs” In _HPCA_ , 2019
* [43] G. Papadimitriou “Harnessing Voltage Margins for Energy-Efficiency in Multicore CPUs” In _MICRO_ , 2017
* [44] K. Parasyris “A Framework for Evaluating Software on Reduced Margins Hardware” In _DSN_ , 2018
* [45] “Power Management Bus (PMBus)” URL: https://pmbus.org
* [46] M. Qureshi “Phase-Change Memory: from Devices to Systems” In _Proc. Morgan & Claypool Publishers_, 2011
* [47] M. Qureshi “Scalable High Performance Main Memory System using Phase-Change Memory Technology” In _ISCA_ , 2009
* [48] “Radeon™ Pro Vega II Graphics — AMD” URL: https://www.amd.com/en/graphics/workstations-radeon-pro-vega-ii
* [49] B. Reagen “Minerva: Enabling low-power, highly-accurate deep neural network accelerators” In _ISCA_ , 2016
* [50] V. Reddi “Voltage Emergency Prediction: Using Signatures to Reduce Operating Margins” In _HPCA_ , 2009
* [51] V. Reddi “Voltage Smoothing: Characterizing and Mitigating Voltage Noise in Production Processors via Software-Guided Thread Scheduling” In _MICRO_ , 2010
* [52] “RLDRAM Memory”, 2020 Micron Technology, Inc. URL: https://www.micron.com/products/dram/rldram-memory
* [53] B. Salami “Aggressive Undervolting of FPGAs: Power & Reliability Trade-offs” In _Ph.D. Dissertation at UPC_ , 2018
* [54] B. Salami “An Experimental Study of Reduced-Voltage Operation in Modern FPGAs for Neural Network Acceleration” In _DSN_ , 2020
* [55] B. Salami “AxleDB: A Novel Programmable Query Processing Platform on FPGA” In _MICPRO_ , 2017
* [56] B. Salami “Comprehensive Evaluation of Supply Voltage Underscaling in FPGA On-Chip Memories” In _MICRO_ , 2018
* [57] B. Salami “Evaluating Built-in ECC of FPGA On-Chip Memories for the Mitigation of Undervolting Faults” In _PDP_ , 2019
* [58] B. Salami “Fault Characterization through FPGA Undervolting: Fault Characterization and Mitigation” In _FPL_ , 2018
* [59] B. Salami “HATCH: Hash Table Caching in Hardware for Efficient Relational Join on FPGA” In _FCCM_ , 2015
* [60] B. Salami “On the Resilience of RTL NN Accelerators: Fault Characterization and Mitigation” In _SBAC-PAD_ , 2018
* [61] K. Swaminathan “BRAVO: Balanced Reliability-Aware Voltage Optimization” In _HPCA_ , 2017
* [62] “Virtex UltraScale+ HBM VCU128-ES1 FPGA evaluation kit”, 2020 Xilinx, Inc. URL: https://www.xilinx.com/products/boards-and-kits/vcu128-es1.html
* [63] C. Wilkerson “Reducing Cache Power with Low-Cost, Multi-Bit Error-Correcting Codes” In _ISCA_ , 2010
* [64] C. Wilkerson “Trading off Cache Capacity for Reliability to Enable Low-Voltage Operation” In _ISCA_ , 2008
* [65] M. Wissolik “Virtex Ultrascale+ HBM FPGA: a Revolutionary Increase in Memory Performance (WP485)” In _Technical Report, Xilinx Inc._ , 2019
* [66] “Xilinx Virtex UltraScale+ HBM” URL: https://www.xilinx.com/products/silicon-devices/fpga/virtex-ultrascale-plus-hbm.html
* [67] L. Yang “Approximate SRAM for Energy-Efficient, Privacy-Preserving Convolutional Neural Networks” In _ISVLSI_ , 2017
* [68] L. Yang “SRAM Voltage Scaling for Energy-Efficient Convolutional Neural Networks” In _ISQED_ , 2017
* [69] J. Zhang “Thundervolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy-Efficient Deep Learning Accelerators” In _DAC_ , 2018
* [70] A. Zou “Voltage-Stacked GPUs: A Control Theory Driven Cross-Layer Solution for Practical Voltage Stacking in GPUs” In _MICRO_ , 2018
* [71] Y. Zu “Adaptive Guardband Scheduling to Improve System-Level Efficiency of the POWER7+” In _MICRO_ , 2015
# Towards Robust Data Hiding against (JPEG) Compression:
A Pseudo-differentiable Deep Learning Approach
###### Abstract
Data hiding is one widely used approach for protecting authentication and
ownership. Most multimedia content like images and videos are transmitted or
saved in the compressed form. This kind of lossy compression, such as JPEG,
can destroy the hidden data, which raises the need for robust data hiding. It is still an open challenge to achieve data hiding that can withstand such compression. Recently, deep learning has shown large success in
data hiding, while non-differentiability of JPEG makes it challenging to train
a deep pipeline for improving robustness against lossy compression. The
existing SOTA approaches replace the non-differentiable parts with
differentiable modules that perform similar operations. Multiple limitations
exist: (a) large engineering effort; (b) requiring a white-box knowledge of
compression attacks; (c) only works for simple compression like JPEG. In this
work, we propose a simple yet effective approach to address all the above
limitations at once. Beyond JPEG, our approach has been shown to improve
robustness against various image and video lossy compression algorithms. Code:
https://github.com/mikolez/Robust_JPEG.
Index Terms— robust data hiding, lossy compression, pseudo-differentiable
## 1 Introduction
The Internet has revolutionized world development by becoming the de facto most convenient and cost-effective remote communication medium [1]. Early internet users mainly communicated in the form of plain
text, and in recent years, images or even videos have gradually become the
dominant multimedia content for communication. However, those images are
vulnerable to security attacks including unauthorized modification and
sharing. Data hiding has been established as a promising and competitive
approach for authentication and ownership protection [2].
Social media has become an indispensable part of life for most of us who are active on Facebook, Twitter, or YouTube, sharing multimedia content like photos or videos. To reduce bandwidth and facilitate transmission, most photos and videos on social media (or the internet in general) are stored in compressed form, such as JPEG or MPEG. This kind of lossy operation can easily destroy the hidden information for most existing data hiding approaches [1, 2]. Thus, robust data hiding in multimedia that can resist this non-malicious attack, lossy compression, has become a hot research direction.
Recently, deep learning has also shown large success in hiding data in images
[3, 4, 5, 6, 7]. For improving the robustness against JPEG compression, the
SOTA approaches need to include the JPEG compression in the training process
[6, 7]. The challenge of this task is that JPEG compression is non-
differentiable. Prior works have either roughly simulated the JPEG compression
[6] or carefully designed differentiable functions to mimic every step of the JPEG compression [7]. Compared with the rough simulation approach [6], the carefully designed mimicking approach has achieved significantly better performance [7]. However, it still has several limitations: (a) mimicking the JPEG compression requires a large amount of engineering effort; (b) it is compression-specific, i.e. it only works for JPEG compression and is not effective against other compressions; (c) it requires full white-box knowledge of the attack method, whereas in practice the attacker might only use some publicly available tool. In this work, we demonstrate that a simple approach can be adopted to alleviate all the above limitations at once.
## 2 Related work
### 2.1 Lossy Data compression
In information technology, data compression is widely used to reduce the data
size [8]. The data compression techniques can be mainly divided into lossless
and lossy ones [8]. Lossless techniques have no distortion effect on the encoded image and are thus not relevant to robust data hiding. We mainly
summarize the lossy compression for images and videos. Overall, lossy
compression is a class of data encoding methods that discards partial data.
Without noticeable artifacts, it can reduce the data size significantly, which
facilitates storing and transmitting digital images. JPEG was first introduced
in 1992 and has become the de facto most widely used compression technique for
images [9]. JPEG compression involves two major steps: DCT transform and
quantization. For color images, color transform is also often adopted as a
standard pre-processing step [9]. Even though humans cannot easily identify
the difference between original images and compressed images, this lossy
compression can have significant influence on the deep neural networks.
JPEG2000 [10] is another famous lossy compression that has higher visual
quality with less mosaic artifacts. In contrast to JPEG, JPEG2000 is based on
Discrete Wavelet Transform [11]. Concurrently with JPEG2000, another DWT-based lossy compression format, the Progressive Graphics File (PGF) [12], also emerged. In 2010, Google developed a new image format called WebP, which, like JPEG, is based on DCT but outperforms JPEG in terms of compression ratio [13]. For video lossy compression, MPEG [14] is the
most widely adopted approach.
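The lossy step shared by these codecs is quantization of transform coefficients. The toy sketch below (an arbitrary step size of 10 and made-up coefficients, not an actual JPEG table) shows why the data cannot be exactly recovered after the round-trip.

```python
# Toy illustration of quantization, the lossy step of JPEG-style codecs:
# coefficients are divided by a step size and rounded, so dequantization
# cannot exactly recover them.

STEP = 10  # arbitrary quantization step for illustration

def quantize(coeffs):
    return [round(c / STEP) for c in coeffs]

def dequantize(q):
    return [v * STEP for v in q]

coeffs = [123, 47, -8, 3]  # stand-in for DCT coefficients
restored = dequantize(quantize(coeffs))
print(restored)  # [120, 50, -10, 0]: close to the input, but information is lost
```

It is exactly this rounding that makes the compression non-differentiable, the obstacle addressed in Section 3.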
### 2.2 Traditional robust data hiding
Since the advent of image watermarking, numerous works have investigated
robust data hiding to improve its imperceptibility, robustness as well as
hiding capacity. Early works have mainly explored manipulating the cover image
pixels directly, i.e. embedding the hidden data in the spatial domain, for
which hiding data in least significant bits (LSBs) can be seen as a classical
example. For improving robustness, the trend has shifted to hiding in a
transform domain, such as Discrete Cosine Transform (DCT) [15], Discrete
Wavelet Transform [11], or a mixture of them [16, 17]. For achieving a good
balance between imperceptibility and robustness, adaptive data hiding has
emerged to embed the hidden data based on the cover image local features [18,
19]. Traditional data hiding methods often require the practitioners to have a
good understanding of a wide range of relevant expertise. Deep learning based approaches automate the whole process, which significantly facilitates their wide use. Moreover, recent works have demonstrated that deep learning based
approaches have achieved competitive performance against traditional
approaches in terms of capacity, robustness, and security [6, 7].
### 2.3 Deep learning based robust data hiding
Beyond the success in a wide range of applications, deep learning has shown
success in data hiding [20]. The trend has shifted from adopting DNNs for a
certain stage of the whole data hiding pipeline [21, 22, 23] to utilizing DNNs
for the whole pipeline in an end-to-end manner [3]. [24] has shown the possibility of hiding a full image in an image, which has been extended to hiding video in video in [4, 5]. Hiding binary data with adversarial learning
has also been proposed in [3]. The above deep learning approaches do not take
robustness into account and the hidden data can easily get destroyed by common
lossy compression, such as JPEG. For improving its robustness to JPEG, [6]
have proposed to include a “noise layer” that simulates JPEG to augment the
encoded images in the training. To overcome the challenge that “true” JPEG is
non-differentiable, the authors propose to “differentiate” the JPEG
compression with two approximation methods, i.e. JPEG-Mask and JPEG-Drop, to
remove the high frequency content in the images. We refer the readers to [6]
for more details. Even though JPEG-Mask and JPEG-Drop also remove the high
frequency content like the “true” JPEG does, their behavior is still quite far from
the “true” JPEG, resulting in over-fitting and poor performance when tested
under the “true” JPEG. To minimize the gap between the approximation and the “true” JPEG, [7] more recently proposed a new approach to carefully simulate the important steps in the “true” JPEG. A similar approach for simulating JPEG has also been
proposed in [25] for generating JPEG-resistant adversarial examples.
Fig. 1: Main framework.
## 3 Pseudo-Differentiable JPEG and Beyond
Robust data hiding requires imperceptibility and robustness. The imperceptibility is achieved by minimizing the distortion on the cover image. In
this work, we aim to improve robustness against lossy compression, such as
JPEG, while maintaining its imperceptibility. Specifically, we focus on deep
learning based approach through addressing the challenge that lossy
compression is non-differentiable. Note that lossy compression by default is
always non-differentiable due to many factors, out of which quantization is
one of the widely known factors.
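To see concretely why quantization blocks gradient-based training: rounding is piecewise constant, so its derivative is zero almost everywhere. A stdlib-only finite-difference probe makes this visible.

```python
# round() is piecewise constant, so a central finite difference of its
# output is 0 almost everywhere -- no useful gradient flows through it.

def finite_diff(f, x, eps=1e-4):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

print(finite_diff(round, 2.3))  # 0.0: the quantizer gives the optimizer no signal
print(finite_diff(lambda v: 2 * v, 2.3))  # ~2.0: a smooth op passes gradient normally
```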
### 3.1 Why do we focus on lossy compression?
An intentional attacker can adopt various techniques, such as noising, filtering, or compression, to remove the hidden data. In practice, however, unauthorized sharing or modification is often done without any intentional attacks. Instead, the social media platform often performs a non-intentional attack through lossy compression, while it is very unusual that a media platform would intentionally add noise to the images. Thus, addressing robustness against lossy compression has high practical relevance. Moreover, existing approaches already provide good solutions against the other attack types.
### 3.2 Deep Learning based robust data hiding SOTA framework
Before introducing the proposed method for improving robustness against lossy
compression, we first summarize the deep learning based robust data hiding
framework adopted by SOTA approaches [6, 7] in Fig. 1. Straightforwardly, it
has two major components: encoding network for hiding the data in the cover
image and decoding network for extracting the hidden data from the encoded
image. Additionally it has an auxiliary component, i.e. attack layer (also
called as “noise layer” in [6]), necessary for improving its robustness
against attacks, such as lossy compression.
Our key contribution lies in providing a unified perspective on this attack
layer and proposing a frustratingly simple technique for improving robustness
against lossy compression.
### 3.3 Pseudo differentiable lossy compression
For general noise, such as Gaussian noise, the noise can be directly added to
the encoded image. The role of such noise is to augment the encoded images in
the training process to make it robust against such noise. Note that the
operation of adding noise is differentiable, thus it is fully compatible with
the need of being differentiable for the attack layer. Other attack operations
like filtering are also differentiable. The process of JPEG compression and
decompression is shown in Fig. 2, where quantization involves rounding
operation, which is non-differentiable. This renders JPEG unfit for gradient-based optimization [6] and constitutes a fundamental barrier to directly applying it in the attack layer of Fig. 1.
Fig. 2: JPEG compression and decompression.
Unified perspective on attack layer. Regardless of the attack type, the
resulting effect is a distortion on the encoded image. Specifically, Gaussian
noise is independent of the encoded image while the JPEG distortion can be
seen as a pseudo-noise that is dependent on the encoded image.
Fig. 3: Proposed pseudo-differentiable compression method.
Fig. 4: Proposed pseudo-differentiable compression method.
Pseudo-differentiable JPEG method. Given the above unified perspective, we
notice that the effect of JPEG compression and decompression is equivalent to
adding pseudo-noise to the encoded image. Since adding Gaussian noise is a
differentiable operation, the operation of adding JPEG pseudo-noise is also
differentiable. Note that, as with Gaussian noise, we can directly add the JPEG
pseudo-noise to the encoded image once it is known. With this understanding,
the remaining problem is how to obtain the JPEG pseudo-noise. Recall that the
JPEG pseudo-noise is the gap between the compression-processed image and the
original encoded image. As the output of the encoding network, the encoded
image is readily available, and the compression-processed image can be
retrieved simply by performing the forward operation of JPEG compression and
decompression on the encoded image. The whole process of our proposed
pseudo-differentiable method is shown in Fig. 3. Note that during training the
forward operation is performed on both the encoded-image pathway and the
pseudo-noise pathway, while the backward operation is performed only on the
encoded-image pathway. Since the backward operation does not go through the
pseudo-noise pathway, the fact that JPEG compression is non-differentiable
does not make training fail. Astute readers will notice that the
compression-processed image and the noised image in Fig. 3 have exactly
identical values. At first sight it appears that subtracting the pseudo-noise
from the compression-processed image and then re-adding it to the encoded
image are two meaningless operations that offset each other. However, from the
gradient-optimisation perspective, they are not dummy operations; instead,
they are the key to making the non-differentiable JPEG operation
pseudo-differentiable. Removing these two seemingly meaningless operations
results in the process shown in Fig. 4, which is impractical because the
backward operation would go through the non-differentiable JPEG compression.
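The pseudo-noise pathway can be expressed in a few lines of code. Below is a minimal NumPy sketch of the idea, not the authors' implementation: `blackbox_compress` is a hypothetical stand-in for the JPEG compression-decompression round trip, and the comments mark where an autodiff framework would detach the noise term.

```python
import numpy as np

def blackbox_compress(x):
    # Hypothetical stand-in for the non-differentiable JPEG round trip:
    # coarse quantisation of pixel values.
    return np.round(x * 8.0) / 8.0

def pseudo_differentiable(x):
    # Pseudo-noise pathway: forward operation only, no gradient flows here.
    # In an autodiff framework this term would be detached (e.g. .detach()
    # in PyTorch or jax.lax.stop_gradient in JAX).
    noise = blackbox_compress(x) - x
    # Re-add the pseudo-noise to the encoded image. The backward pass sees
    # only the identity path through x, so training does not fail even
    # though blackbox_compress is non-differentiable.
    return x + noise

encoded = np.linspace(0.0, 1.0, 5)
noisy = pseudo_differentiable(encoded)
# Forward values equal the compressed image exactly, as noted for Fig. 3.
assert np.allclose(noisy, blackbox_compress(encoded))
```

In a framework such as PyTorch the same forward pass would read `x + (jpeg(x) - x).detach()`, so the backward operation reduces to the identity on `x`.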
Beyond JPEG compression. The above approach can be used for any lossy
compression, and indeed for any non-differentiable operation. With our
proposed approach, we treat the non-differentiable operation as a black box
and only need to perform its forward operation. This is a conceivably
significant advantage, because some non-differentiable operations may be too
complex for the practitioner to understand and to mimic step by step as in [7].
## 4 Results
Following [6], we hide 30 bits of random binary data in 128x128 color images.
The comparison with various existing JPEG approximation methods is shown in
Table 1, with the decoding bit error rate (BER) as the metric. Our method
outperforms all existing techniques across different JPEG quality factors.
Table 1: Comparison of our method with other JPEG approximation techniques.
The results are reported with the metric of BER ($\%$).
JPEG Quality | JPEG-Drop | JPEG-Mask | JPEG-Drop + JPEG-Mask | JPEG [25] | Ours
---|---|---|---|---|---
JPEG-10 | $49.33$ | $46.23$ | $46.38$ | $35.68$ | $\mathbf{31.22}$
JPEG-25 | $48.50$ | $39.48$ | $41.38$ | $22.00$ | $\mathbf{16.75}$
JPEG-50 | $47.62$ | $32.77$ | $38.48$ | $15.85$ | $\mathbf{12.63}$
JPEG-75 | $46.75$ | $27.22$ | $37.10$ | $14.03$ | $\mathbf{12.41}$
JPEG-90 | $46.08$ | $18.57$ | $34.93$ | $13.81$ | $\mathbf{12.00}$
To our knowledge, there is no existing deep-learning-based approach that is
robust against other types of lossy compression, such as JPEG 2000 and WebP.
Our approach can be readily extended to them; the results are shown in
Table 2 and Table 3. We train the network with a quality factor of 500. For
evaluation, when the quality factor is higher than 500, the decoding BER is
very low, showing that our approach also improves robustness against
JPEG 2000. As expected, the BER increases when the evaluation quality factor
decreases. A similar trend can be observed for WebP, where the training
quality factor is set to 50.
Table 2: Results for different quality levels of JPEG 2000.
JPEG 2000 | 100 | 250 | 500 | 750 | 900
---|---|---|---|---|---
Ours | $35.29$ | $4.16$ | $00.31$ | $00.04$ | $00.03$
Table 3: Results for different quality levels of WebP.
WebP | 10 | 25 | 50 | 75 | 90
---|---|---|---|---|---
Ours | $20.69$ | $15.37$ | $12.86$ | $12.17$ | $12.14$
We further extend our approach to two common video compression methods: MPEG
and XVID. The results are available in Table 4. We observe that the BER with
our approach is quite low for both video compression methods, even though we
still hide the same amount of binary data in each frame of the video. Our
algorithm achieves a low BER even in the challenging scenario of video
compression. Note that without our approach, the BER is close to 50%, i.e.
random guessing.
Table 4: Results for video compression methods.
MPEG4 | XVID
---|---
$8.45$ | $13.59$
Finally, we train a network to be simultaneously robust against JPEG,
JPEG 2000 and WebP by hiding an image in another. The average pixel
discrepancy (APD) between the cover and container images is 9.13/255, while
that between the secret and revealed secret images is 8.54/255. Qualitative
results are shown in Fig. 5.
Fig. 5: Hiding an image in another under different lossy compressions.
## 5 Conclusion
In this paper, we provide a unified perspective on various attacks and propose
a simple yet effective approach for addressing the non-differentiability of
lossy compression. Our approach outperforms the existing approximation
approach by a visible margin. Moreover, our work is the first to show that a
deep learning approach can be robust against JPEG 2000 and WebP as well as two
video compression algorithms. We also show hiding images in images while
staying simultaneously robust against several lossy compression methods.
## References
* [1] Z Bao, X Luo, Y Zhang, C Yang, and F Liu, “A robust image steganography on resisting jpeg compression with no side information,” IETE Technical Review, 2018.
* [2] Farhan A Alenizi, Robust Data Hiding in Multimedia for Authentication and Ownership Protection, Ph.D. thesis, 2017.
* [3] Jamie Hayes and George Danezis, “Generating steganographic images via adversarial training,” in Advances in Neural Information Processing Systems (NeurIPS), 2017.
* [4] Xinyu Weng, Yongzhi Li, Lu Chi, and Yadong Mu, “Convolutional video steganography with temporal residual modeling,” arXiv preprint arXiv:1806.02941, 2018.
* [5] Pin Wu, Yang Yang, and Xiaoqiang Li, “Image-into-image steganography using deep convolutional network,” in Pacific Rim Conference on Multimedia, 2018.
* [6] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei, “Hidden: Hiding data with deep networks,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
* [7] Mahdi Ahmadi, Alireza Norouzi, Nader Karimi, Shadrokh Samavi, and Ali Emami, “Redmark: Framework for residual diffusion watermarking based on deep networks,” Expert Systems with Applications, 2020.
* [8] Khalid Sayood, Introduction to data compression, 2017.
* [9] William B Pennebaker and Joan L Mitchell, JPEG: Still image data compression standard, 1992.
* [10] Athanassios Skodras, Charilaos Christopoulos, and Touradj Ebrahimi, “The jpeg 2000 still image compression standard,” IEEE Signal processing magazine, 2001.
* [11] Wei Lu, Wei Sun, and Hongtao Lu, “Robust watermarking based on dwt and nonnegative matrix factorization,” Computers & Electrical Engineering, 2009.
* [12] Christoph Stamm, “A new progressive file format for lossy and lossless image compression,” in Proceedings of the International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic, 2002.
* [13] Giaime Ginesu, Maurizio Pintus, and Daniele D Giusto, “Objective assessment of the webp image coding algorithm,” Signal Processing: Image Communication, 2012.
* [14] Didier Le Gall, “Mpeg: A video compression standard for multimedia applications,” Communications of the ACM, vol. 34, no. 4, pp. 46–58, 1991.
* [15] Satendra Pal Singh and Gaurav Bhatnagar, “A new robust watermarking system in integer dct domain,” Journal of Visual Communication and Image Representation, 2018.
* [16] Surya Pratap Singh, Paresh Rawat, and Sudhir Agrawal, “A robust watermarking approach using dct-dwt,” International journal of emerging technology and advanced engineering, 2012.
* [17] NJ Harish, BBS Kumar, and Ashok Kusagur, “Hybrid robust watermarking techniques based on dwt, dct, and svd,” International Journal of Advanced Electrical and electronics engineering, 2013.
* [18] Preeti Bhinder, Kulbir Singh, and Neeru Jindal, “Image-adaptive watermarking using maximum likelihood decoder for medical images,” Multimedia Tools and Applications, 2018.
* [19] Beijing Chen, Chunfei Zhou, Byeungwoo Jeon, Yuhui Zheng, and Jinwei Wang, “Quaternion discrete fractional random transform for color image adaptive watermarking,” Multimedia Tools and Applications, 2018.
* [20] Bibi Isac and V Santhi, “A study on digital image and video watermarking schemes using neural networks,” International Journal of Computer Applications, 2011.
* [21] Cong Jin and Shihui Wang, “Applications of a neural network to estimate watermark embedding strength,” in International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), 2007.
* [22] Haribabu Kandi, Deepak Mishra, and Subrahmanyam RK Sai Gorthi, “Exploring the learning capabilities of convolutional neural networks for robust image watermarking,” Computers & Security, 2017.
* [23] Seung-Min Mun, Seung-Hun Nam, Han-Ul Jang, Dongkyu Kim, and Heung-Kyu Lee, “A robust blind watermarking using convolutional neural network,” arXiv preprint arXiv:1704.03248, 2017.
* [24] Shumeet Baluja, “Hiding images in plain sight: Deep steganography,” in Advances in Neural Information Processing Systems (NeurIPS), 2017.
* [25] Richard Shin and Dawn Song, “Jpeg-resistant adversarial images,” in NIPS 2017 Workshop on Machine Learning and Computer Security, 2017.
# A Study on Erdős-Straus conjecture on Diophantine equation
$\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$††thanks: AMS 2010
Mathematics Subject Classification: 11Dxx, 11D68, 11N37 (Primary) 11D45,
11Gxx, 14Gxx; 11D72, 11N56, 11P81 (Secondary)
S. Maiti1,2
1 Department of Mathematics, The LNM Institute of Information Technology
Jaipur 302031, India
2Department of Mathematical Sciences, Indian Institute of Technology (BHU),
Varanasi-221005, India Corresponding author, Email address:
[email protected]/[email protected] (S. Maiti)
###### Abstract
The Erdős-Straus conjecture is a renowned problem stating that for
every natural number $n~{}(\geq 2)$, $\frac{4}{n}$ can be represented as the
sum of three unit fractions. The main purpose of this study is to show that
the Erdős-Straus conjecture is true. The study also re-demonstrates Mordell's
theorem, which states that $\frac{4}{n}$ has an expression as the sum of three
unit fractions for every number $n$ except possibly for those primes of the
form $n\equiv r$ (mod 840) with $r=1^{2},11^{2},13^{2},17^{2},19^{2},23^{2}$.
For $l,r,a\in\mathbb{N}$,
$\frac{4}{24l+1}-\frac{1}{6l+r}=\frac{4r-1}{(6l+r)(24l+1)}$ with $1\leq r\leq
12l$; if, for at least one admissible value of $r$, the numerator can be split
as $a+(4r-a-1)$ with $1\leq a\leq 2r-1$ such that both $a$ and $(4r-a-1)$
divide $(6l+r)(24l+1)$, then the conjecture holds for the corresponding $l$.
However, in this way the conjecture fails to be settled for only twelve values
of $l$ up to $l=10^{5}$.
Keywords: Erdős-Straus Conjecture; Diophantine Equation
$\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$; Elementary Number Theory.
## 1 Introduction
The famous Erdős-Straus conjecture in number theory, formulated by Paul Erdős
and Ernst G. Straus [1, 2] in 1948, states that for every natural number
$n~{}(\geq 2)$ there exist natural numbers $x,y,z$ such that $\frac{4}{n}$ can
be expressed as
$\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}.$ (1)
Many researchers, not only in number theory but also in other areas of
mathematics, have studied this conjecture, such as L. Bernstein [3], M. B.
Crawford [4], M. Di Giovanni, S. Gallipoli, M. Gionfriddo [5], C. Elsholtz and
T. Tao [6], J. Ghanouchi [7, 8], L. J. Mordell [9], D. J. Negash [10], R.
Obláth [11], L. A. Rosati [12], J. W. Sander [13], R. C. Vaughan [14], K.
Yamamoto [15], and J. W. Porras Ferreira [19, 20]. The validity of the
conjecture for all $n\leq 10^{14}$ and $n\leq 10^{17}$ was reported by Swett
[16] and Salez [17], respectively.
If $m$ and $n$ are relatively prime integers with $n$ a non-quadratic residue
(mod $m$), then Schinzel [18] established that
$\frac{4}{mt+n}=\frac{1}{x(t)}+\frac{1}{y(t)}+\frac{1}{z(t)}$, where $x(t)$,
$y(t)$ and $z(t)$ are integer polynomials in $t$ with positive leading
coefficients. Mordell [9] demonstrated the validity of the Erdős-Straus
conjecture for all $n$ except the possible cases where $n$ is congruent to
$1^{2}$, $11^{2}$, $13^{2}$, $17^{2}$, $19^{2}$ or $23^{2}$ (mod 840).
The conjecture can be proved if it is established for all prime numbers in any
of the following cases: (i) $n=p=2m+1$, (ii) $n=p=4m+1$, (iii) $n=p=8m+1$,
(iv) $n=p=24m+1$, where $m\in\mathbb{N}$. For $l,r,a\in\mathbb{N}$,
$\frac{4}{24l+1}-\frac{1}{6l+r}=\frac{4r-1}{(6l+r)(24l+1)}$ with $1\leq r\leq
12l$. If, for at least one admissible value of $r$, the numerator can be split
as $a+(4r-a-1)$ with $1\leq a\leq 2r-1$ such that both $a$ and $(4r-a-1)$
divide $(6l+r)(24l+1)$, then $\frac{4}{24l+1}$ has an expression in the form
of equation (1). However, in this way the expression in the form of equation
(1) cannot be obtained for all values of $l$, although it can be established
for most of them. The computations of
$\frac{4}{24l+1}-\frac{1}{6l+r}=\frac{4r-1}{(6l+r)(24l+1)}$ for $l$ up to
$l=10^{5}$ have been carried out, and they show that the expression in the
form of equation (1) with $n=24l+1$ cannot be obtained in this way for only
twelve values of $l$. Finally, it is shown (by another method) in Section 5
that the conjecture is true.
## 2 Results and Discussion
### 2.1 Demonstration of solutions of the Erdős-Straus problem
If we want to express any $n\in\mathbb{N}$ in the form of (1), it suffices to
establish equation (1) for all primes $n=p$. We know that (i) if $n=2m$, then
$\frac{4}{n}=\frac{4}{2m}=\frac{1}{2m}+\frac{1}{2m}+\frac{1}{m}$;
(ii) if $n=3m$, then
$\frac{4}{n}=\frac{4}{3m}=\frac{1}{2m}+\frac{1}{2m}+\frac{1}{3m}=\frac{1}{m}+\frac{1}{3(m+1)}+\frac{1}{3m(m+1)}=\frac{1}{3m}+\frac{1}{m+1}+\frac{1}{m(m+1)}$;
(iii) if $n=3m+2$, then
$\frac{4}{n}=\frac{4}{3m+2}=\frac{1}{3m+2}+\frac{1}{m+1}+\frac{1}{(m+1)(3m+2)}$;
(iv) if $n=4m+3$, then
$\frac{4}{n}=\frac{4}{4m+3}=\frac{1}{m+1}+\frac{1}{2(4m+3)(m+1)}+\frac{1}{2(4m+3)(m+1)}$;
(v) if $n=4m$ or $n=4m+2$, then these reduce to case (i), where
$m\in\mathbb{N}$. Thus we only have to prove equation (1) for all $n=4m+1$.
From equation (1), we get
$\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}>\frac{1}{x},\frac{1}{y},\frac{1}{z}$,
i.e. $\frac{1}{x},\frac{1}{y},\frac{1}{z}\leq\frac{1}{[\frac{n}{4}]+1}$ since
$\frac{4}{n}-\frac{1}{[\frac{n}{4}]}\leq 0$. If $x\leq y\leq z$, then
$\frac{4}{n}\leq\frac{3}{x}$, i.e. $\frac{1}{[\frac{3n}{4}]}\leq\frac{1}{x}$.
Thus $\frac{1}{[\frac{3n}{4}]}\leq\frac{1}{x}\leq\frac{1}{[\frac{n}{4}]+1}$
and $\frac{1}{z}\leq\frac{1}{y}\leq\frac{1}{x}$.
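The bounds on $x$ derived above make an exhaustive search finite, which can be sketched as follows (a brute-force illustration of the bounds using exact rational arithmetic; the function name `erdos_straus` is ours, not from the text):

```python
from fractions import Fraction

def erdos_straus(n):
    # x is bounded by [n/4]+1 <= x <= [3n/4], as derived above.
    for x in range(n // 4 + 1, 3 * n // 4 + 1):
        rest = Fraction(4, n) - Fraction(1, x)   # = (4x - n)/(nx)
        if rest <= 0:
            continue
        # With x <= y <= z we need 1/y >= rest/2, so y <= 2/rest,
        # and 1/y < rest, so y > 1/rest.
        y_lo = max(x, int(1 / rest) + 1)
        y_hi = int(2 / rest)
        for y in range(y_lo, y_hi + 1):
            r = rest - Fraction(1, y)
            if r > 0 and r.numerator == 1:       # 1/z must be a unit fraction
                return x, y, r.denominator
    return None

# Example: erdos_straus(5) returns (2, 4, 20), i.e. 4/5 = 1/2 + 1/4 + 1/20.
```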
For $\frac{4}{n}-\frac{1}{x}=\frac{4x-n}{nx}$ with $x\leq y\leq z$ and
$\frac{1}{[\frac{3n}{4}]}\leq\frac{1}{x}\leq\frac{1}{[\frac{n}{4}]+1}$, if
$4x-n=p_{i_{1}}^{\alpha_{1}}p_{i_{2}}^{\alpha_{2}}\cdots
p_{i_{r}}^{\alpha_{r}}+p_{j_{1}}^{\beta_{1}}p_{j_{2}}^{\beta_{2}}\cdots
p_{j_{s}}^{\beta_{s}}$ with $nx=p_{1}^{\gamma_{1}}p_{2}^{\gamma_{2}}\cdots
p_{m}^{\gamma_{m}}$,
$\{p_{i_{1}},p_{i_{2}},\cdots,p_{i_{r}};p_{j_{1}},p_{j_{2}},\cdots,p_{j_{s}}\}\subset\{p_{1},p_{2},\cdots,p_{m}\}$,
and both $p_{i_{1}}^{\alpha_{1}}p_{i_{2}}^{\alpha_{2}}\cdots
p_{i_{r}}^{\alpha_{r}}$ and $p_{j_{1}}^{\beta_{1}}p_{j_{2}}^{\beta_{2}}\cdots
p_{j_{s}}^{\beta_{s}}$ divide $nx$, then $n$ has an expression in the form
of (1).
Again, if $4x-n=y_{1}+z_{1}$ with $y_{1},z_{1}\in\mathbb{N}$ and
$(y_{1},z_{1})=1$, then
$\frac{4}{n}-\frac{1}{x}=\frac{4x-n}{nx}=\frac{y_{1}+z_{1}}{nx}=\frac{1}{\frac{nx}{y_{1}}}+\frac{1}{\frac{nx}{z_{1}}}$.
If $n$ satisfies equation (1) in this way, then $nx=k_{1}y_{1}$ and
$nx=k_{2}z_{1}$ for some $k_{1},k_{2}\in\mathbb{N}$, i.e.
$k_{1}y_{1}=k_{2}z_{1}$. Hence $k_{2}=gy_{1}$ for some $g\in\mathbb{N}$, since
$(y_{1},z_{1})=1$; then $k_{1}=gz_{1}$ and $nx=gy_{1}z_{1}$. Thus
$\frac{4}{n}=\frac{1}{x}+\frac{1}{gz_{1}}+\frac{1}{gy_{1}}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$
with $g=\frac{nx}{y_{1}z_{1}}$, $y=gy_{1}$ and $z=gz_{1}$.
If $4x-n=dy_{1}+dz_{1}$ with $d,y_{1},z_{1}\in\mathbb{N}$ and $(y_{1},z_{1})=1$,
then
$\frac{4}{n}-\frac{1}{x}=\frac{4x-n}{nx}=\frac{dy_{1}+dz_{1}}{nx}=\frac{1}{\frac{nx}{dy_{1}}}+\frac{1}{\frac{nx}{dz_{1}}}$.
If $n$ satisfies equation (1) in this way, then $nx=dk_{1}y_{1}$ and
$nx=dk_{2}z_{1}$ for some $k_{1},k_{2}\in\mathbb{N}$, i.e.
$k_{1}y_{1}=k_{2}z_{1}$. Hence $k_{2}=gy_{1}$ for some $g\in\mathbb{N}$; then
$k_{1}=gz_{1}$ and $nx=gdy_{1}z_{1}$. Thus
$\frac{4}{n}=\frac{1}{x}+\frac{1}{gz_{1}}+\frac{1}{gy_{1}}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$
with $g=\frac{nx}{dy_{1}z_{1}}$, $y=gy_{1}$ and $z=gz_{1}$.
For $\frac{4}{4m+1}$, since $[\frac{4m+1}{4}]=m$ and $[\frac{3(4m+1)}{4}]=3m$,
we have $\frac{1}{3m}\leq\frac{1}{x}\leq\frac{1}{m+1}$. Then for equation (1)
with $n=4m+1$, the only possible cases of $\frac{1}{x}$ are
$\frac{1}{x}=\frac{1}{m+1},~\frac{1}{m+2},~\cdots,~\frac{1}{3m}$. Thus
for
$\frac{4}{4m+1}-\frac{1}{m+r}=\frac{4r-1}{(m+r)(4m+1)}=\frac{\{1+(4r-2)\},\{2+(4r-3)\},\cdots,\{(2r-1)+2r\}}{(m+r)(4m+1)}$
(2)
with $r\in\mathbb{N}$ and $1\leq r\leq 2m$, if at least one of the splittings
of the numerator, say $a+(4r-a-1)$ with $1\leq a\leq 2r-1$, is such that both
$a$ and $(4r-a-1)$ divide $(m+r)(4m+1)$ for at least one possible value of
$r$, then $\frac{4}{4m+1}$ satisfies equation (1). A natural question arises:
can equation (1) be proved with the help of equation (2)? The answer will be
given later.
If $m=2k-1$, then $4m+1=8k-3$,
$[\frac{8k-3}{4}]=2k-1,~{}[\frac{3(8k-3)}{4}]=6k-3,~{}\frac{1}{6k-3}\leq\frac{1}{x}\leq\frac{1}{2k}$.
So,
$\frac{4}{8k-3}-\frac{1}{2k}=\frac{3}{2k(8k-3)}=\frac{1+2}{2k(8k-3)}=\frac{1}{2k(8k-3)}+\frac{1}{k(8k-3)}$
i.e. $\frac{4}{8k-3}=\frac{1}{2k}+\frac{1}{2k(8k-3)}+\frac{1}{k(8k-3)}$.
If $m=2k$, then $4m+1=8k+1$,
$[\frac{8k+1}{4}]=2k,~{}[\frac{3(8k+1)}{4}]=6k,~{}\frac{1}{6k}\leq\frac{1}{x}\leq\frac{1}{2k+1}$.
So, $\frac{4}{8k+1}-\frac{1}{2k+1}=\frac{3}{(2k+1)(8k+1)}$.
Again, if $k=3l-2$, then $8k+1=24l-15$,
$[\frac{24l-15}{4}]=6l-4,~[\frac{3(24l-15)}{4}]=18l-12,~\frac{1}{18l-12}\leq\frac{1}{x}\leq\frac{1}{6l-3}$.
So,
$\frac{4}{24l-15}-\frac{1}{6l-3}=\frac{3}{(6l-3)(24l-15)}=\frac{1}{(2l-1)(24l-15)}=\frac{1}{2(2l-1)(24l-15)}+\frac{1}{2(2l-1)(24l-15)}$
i.e.
$\frac{4}{24l-15}=\frac{1}{6l-3}+\frac{1}{2(2l-1)(24l-15)}+\frac{1}{2(2l-1)(24l-15)}$.
If $k=3l-1$, then $8k+1=24l-7$,
$[\frac{24l-7}{4}]=6l-2,~{}[\frac{3(24l-7)}{4}]=18l-6,~{}\frac{1}{18l-6}\leq\frac{1}{x}\leq\frac{1}{6l-1}$.
So,
$\frac{4}{24l-7}-\frac{1}{6l-1}=\frac{3}{(6l-1)(24l-7)}=\frac{6l}{2l(6l-1)(24l-7)}=\frac{1+(6l-1)}{2l(6l-1)(24l-7)}=\frac{1}{2l(6l-1)(24l-7)}+\frac{1}{2l(24l-7)}$
i.e.
$\frac{4}{24l-7}=\frac{1}{6l-1}+\frac{1}{2l(6l-1)(24l-7)}+\frac{1}{2l(24l-7)}$.
If $k=3l$, then $8k+1=24l+1$,
$[\frac{24l+1}{4}]=6l,~[\frac{3(24l+1)}{4}]=18l$. Thus we have to prove
equation (1) for all $n=24l+1$ with
$\frac{1}{18l}\leq\frac{1}{x}\leq\frac{1}{6l+1}$. Using the expression of
equation (2), we get
$\frac{4}{24l+1}-\frac{1}{6l+r}=\frac{4r-1}{(6l+r)(24l+1)}=\frac{\{1+(4r-2)\},\{2+(4r-3)\},\cdots,\{(2r-1)+2r\}}{(6l+r)(24l+1)}$
(3)
with $r\in\mathbb{N}$ and $1\leq r\leq 12l$. If at least one of the splittings
of the numerator, say $a+(4r-a-1)$ with $1\leq a\leq 2r-1$, is such that both
$a$ and $(4r-a-1)$ divide $(6l+r)(24l+1)$ for at least one possible value of
$r$, then $\frac{4}{24l+1}$ has an expression in the form of equation (1).
Thus the same question naturally arises: can equation (1) be proved with the
help of equation (3)?
The answer is that the expression in the form of equation (1) cannot be
obtained from equation (3) for all values of $l\in\mathbb{N}$. However,
equation (1) can be proved with the help of equation (3) for most values of
$l\in\mathbb{N}$; it fails only for very few values. To find the answer,
computations for $l$ up to $l=10^{5}$ have been carried out, which show that
the expression in the form of equation (1) with $n=24l+1$ cannot be obtained
from equation (3) only for $\frac{4}{n}=\frac{4}{409}$ (i.e. $l=17$),
$\frac{4}{577}$ ($l=24$), $\frac{4}{5569}$ ($l=232$), $\frac{4}{9601}$
($l=400$), $\frac{4}{23929}$ ($l=997$), $\frac{4}{83449}$ ($l=3477$),
$\frac{4}{102001}$ ($l=4250$), $\frac{4}{329617}$ ($l=13734$),
$\frac{4}{712321}$ ($l=29680$), $\frac{4}{1134241}$ ($l=47260$),
$\frac{4}{1724209}$ ($l=71842$) and $\frac{4}{1726201}$ ($l=71925$). For
$l\geq 29680$, checking whether equation (1) with $n=24l+1$ cannot be proved
with the help of equation (3) takes several days in Mathematica on my system
with 8 GB RAM and a 1 TB hard disk.
Thus, for the exceptional cases, we have to find the expression of equation
(1) in other ways. One way is to multiply the numerator and denominator of the
right side of
$\frac{4}{24l+1}-\frac{1}{6l+r}=\frac{4r-1}{(6l+r)(24l+1)}$ by some suitable
constant $r_{1}\in\mathbb{N}$, so that
$\frac{4}{24l+1}-\frac{1}{6l+r}=\frac{4r-1}{(6l+r)(24l+1)}=\frac{(4r-1)r_{1}}{r_{1}(6l+r)(24l+1)}$;
if at least one of the splittings of the numerator $(4r-1)r_{1}$, say
$a+((4r-1)r_{1}-a)$, is such that both $a$ and $(4r-1)r_{1}-a$ divide
$r_{1}(6l+r)(24l+1)$ for at least one possible value of $r$, then
$\frac{4}{24l+1}$ has an expression in the form of equation (1). However,
there is no way to find $r$ and $r_{1}$ in this process other than trial and
error.
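This trial-and-error search over $r$ and $r_{1}$ can itself be automated. The following sketch is our own script, not part of the paper; in particular, the cut-off `r1_max=60` is an assumption, not a bound from the text:

```python
from fractions import Fraction

def scaled_split(l, r1_max=60):
    """For n = 24l+1, scale the numerator 4r-1 by r1 and look for a split
    a + ((4r-1)*r1 - a) whose parts both divide r1*(6l+r)*(24l+1)."""
    n = 24 * l + 1
    for r in range(1, 12 * l + 1):
        x = 6 * l + r
        for r1 in range(1, r1_max + 1):
            num = (4 * r - 1) * r1
            d = r1 * x * n
            for a in range(1, num):
                b = num - a
                if d % a == 0 and d % b == 0:
                    # 4/n = 1/x + 1/(d//a) + 1/(d//b)
                    return x, d // a, d // b
    return None

# l = 17 (n = 409) is one of the exceptional cases listed above.
x, y, z = scaled_split(17)
assert Fraction(1, x) + Fraction(1, y) + Fraction(1, z) == Fraction(4, 409)
```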
A suitable method to find the expression of the equation (1) for all
$n\in\mathbb{N}$ has been discussed in Section 5.
#### 2.1.1 Examples
(i) (for $l=17$) $\frac{4}{409}-\frac{1}{104}=\frac{7}{2^{3}\times 13\times
409}=\frac{2\times 7}{2^{4}\times 13\times 409}=\frac{1+13}{2^{4}\times
13\times 409}=\frac{1}{85072}+\frac{1}{6544}$ i.e.
$\frac{4}{409}=\frac{1}{104}+\frac{1}{85072}+\frac{1}{6544}$.
(ii) (for $l=24$) $\frac{4}{577}-\frac{1}{145}=\frac{3}{5\times 29\times
577}=\frac{2\times 3}{2\times 5\times 29\times 577}=\frac{1+5}{2\times 5\times
29\times 577}=\frac{1}{167330}+\frac{1}{33466}$ i.e.
$\frac{4}{577}=\frac{1}{145}+\frac{1}{167330}+\frac{1}{33466}$.
(iii) (for $l=232$) $\frac{4}{5569}-\frac{1}{1394}=\frac{7}{2\times 17\times
41\times 5569}=\frac{6\times 7}{6\times 2\times 17\times 41\times
5569}=\frac{1+41}{6\times 2\times 17\times 41\times
5569}=\frac{1}{46579116}+\frac{1}{1136076}$ i.e.
$\frac{4}{5569}=\frac{1}{1394}+\frac{1}{46579116}+\frac{1}{1136076}$.
(iv) (for $l=400$) $\frac{4}{9601}-\frac{1}{2405}=\frac{19}{5\times 13\times
37\times 9601}=\frac{2\times 19}{2\times 5\times 13\times 37\times
9601}=\frac{1+37}{2\times 5\times 13\times 37\times
9601}=\frac{1}{46180810}+\frac{1}{1248130}$ i.e.
$\frac{4}{9601}=\frac{1}{2405}+\frac{1}{46180810}+\frac{1}{1248130}$.
(v) (for $l=997$) $\frac{4}{23929}-\frac{1}{5984}=\frac{7}{32\times 11\times
17\times 23929}=\frac{3\times 7}{3\times 32\times 11\times 17\times
23929}=\frac{4+17}{3\times 32\times 11\times 17\times
23929}=\frac{1}{107393352}+\frac{1}{25269024}$ i.e.
$\frac{4}{23929}=\frac{1}{5984}+\frac{1}{107393352}+\frac{1}{25269024}$.
(vi) (for $l=3477$) $\frac{4}{83449}-\frac{1}{20865}=\frac{11}{3\times 5\times
13\times 107\times 83449}=\frac{10\times 11}{10\times 3\times 5\times 13\times
107\times 83449}=\frac{3+107}{10\times 3\times 5\times 13\times 107\times
83449}=\frac{1}{5803877950}+\frac{1}{162725550}$ i.e.
$\frac{4}{83449}=\frac{1}{20865}+\frac{1}{5803877950}+\frac{1}{162725550}$.
(vii) (for $l=4250$) $\frac{4}{102001}-\frac{1}{25502}=\frac{7}{2\times
41\times 311\times 102001}=\frac{6\times 7}{6\times 2\times 41\times 311\times
102001}=\frac{1+41}{6\times 2\times 41\times 311\times
102001}=\frac{1}{15607377012}+\frac{1}{380667732}$ i.e.
$\frac{4}{102001}=\frac{1}{25502}+\frac{1}{15607377012}+\frac{1}{380667732}$.
(viii) (for $l=13734$) $\frac{4}{329617}-\frac{1}{82405}=\frac{3}{5\times
16481\times 329617}=\frac{2\times 3}{2\times 5\times 16481\times
329617}=\frac{1+5}{2\times 5\times 16481\times
329617}=\frac{1}{54324177770}+\frac{1}{10864835554}$ i.e.
$\frac{4}{329617}=\frac{1}{82405}+\frac{1}{54324177770}+\frac{1}{10864835554}$.
(ix) (for $l=29680$) $\frac{4}{712321}-\frac{1}{178086}=\frac{23}{2\times
3\times 67\times 443\times 712321}=\frac{3\times 23}{3\times 2\times 3\times
67\times 443\times 712321}=\frac{2+67}{3\times 2\times 3\times 67\times
443\times 712321}=\frac{1}{190281596409}+\frac{1}{5680047654}$ i.e.
$\frac{4}{712321}=\frac{1}{178086}+\frac{1}{190281596409}+\frac{1}{5680047654}$.
(x) (for $l=47260$) $\frac{4}{1134241}-\frac{1}{283561}=\frac{3}{233\times
1217\times 1134241}=\frac{78\times 3}{78\times 233\times 1217\times
1134241}=\frac{1+233}{78\times 233\times 1217\times
1134241}=\frac{1}{25086867951678}+\frac{1}{107668961166}$ i.e.
$\frac{4}{1134241}=\frac{1}{283561}+\frac{1}{25086867951678}+\frac{1}{107668961166}$.
(xi) (for $l=71842$) $\frac{4}{1724209}-\frac{1}{431054}=\frac{7}{2\times
13\times 59\times 281\times 1724209}=\frac{2\times 7}{2\times 2\times 13\times
59\times 281\times 1724209}\\\ =\frac{1+13}{2\times 2\times 13\times 59\times
281\times 1724209}=\frac{1}{1486454372572}+\frac{1}{114342644044}$ i.e.
$\frac{4}{1724209}=\frac{1}{431054}+\frac{1}{1486454372572}+\frac{1}{114342644044}$.
(xii) (for $l=71925$) $\frac{4}{1726201}-\frac{1}{431566}=\frac{63}{2\times
19\times 41\times 277\times 1726201}=\frac{5\times 63}{5\times 2\times
19\times 41\times 277\times 1726201}\\\ =\frac{38+277}{5\times 2\times
19\times 41\times 277\times
1726201}=\frac{1}{98022323785}+\frac{1}{13447105790}$ i.e.
$\frac{4}{1726201}=\frac{1}{431566}+\frac{1}{98022323785}+\frac{1}{13447105790}$.
### 2.2 Lemma
If $6l+1$ or $24l+1$ has a factor $3b+2$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}$ has the expression in the form of (1).
Proof: If $6l+1$ or $24l+1$ ($l\in\mathbb{N}$) has a factor $3b+2$,
then $\frac{4}{24l+1}-\frac{1}{6l+1}=\frac{3}{(3b+2)\times f}$ (where
$(24l+1)(6l+1)=f\times(3b+2))$
$=\frac{3(b+1)}{(3b+2)\times(b+1)\times
f}=\frac{1+(3b+2)}{(3b+2)\times(b+1)\times f}=\frac{1}{(3b+2)\times(b+1)\times
f}+\frac{1}{(b+1)\times f}$
i.e. $\frac{4}{24l+1}=\frac{1}{6l+1}+\frac{1}{(3b+2)\times(b+1)\times
f}+\frac{1}{(b+1)\times f}$.
Example: $\frac{4}{97}-\frac{1}{25}=\frac{3}{5^{2}\times 97}$ ($b=1$)
$=\frac{2\times 3}{2\times 5^{2}\times 97}=\frac{1+5}{2\times 5^{2}\times
97}=\frac{1}{4850}+\frac{1}{970}$, i.e.
$\frac{4}{97}=\frac{1}{25}+\frac{1}{4850}+\frac{1}{970}$.
### 2.3 Lemma
If $3l+1=5b$ ($l,b\in\mathbb{N}$), then $\frac{4}{40b-7}$ has the expression
in the form of (1).
Proof: If $3l+1=5b$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}-\frac{1}{6l+2}=\frac{4}{40b-7}-\frac{1}{10b}=\frac{7}{10b(40b-7)}=\frac{2+5}{10b(40b-7)}=\frac{1}{5b(40b-7)}+\frac{1}{2b(40b-7)}$
i.e. $\frac{4}{40b-7}=\frac{1}{10b}+\frac{1}{5b(40b-7)}+\frac{1}{2b(40b-7)}$.
Note: If $24l+1$ has a factor $5$ ($l\in\mathbb{N}$), then $\frac{4}{24l+1}$
has the expression in the form of (1).
### 2.4 Lemma
If $3l+1=7b$ ($l,b\in\mathbb{N}$), then $\frac{4}{56b-7}$ has the expression
in the form of (1).
Proof: If $3l+1=7b$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}-\frac{1}{6l+2}=\frac{4}{56b-7}-\frac{1}{14b}=\frac{7}{14b(56b-7)}=\frac{1}{2b(56b-7)}=\frac{1}{4b(56b-7)}+\frac{1}{4b(56b-7)}$
i.e. $\frac{4}{56b-7}=\frac{1}{14b}+\frac{1}{4b(56b-7)}+\frac{1}{4b(56b-7)}$.
Note: If $24l+1$ has a factor $7$ ($l\in\mathbb{N}$), then $\frac{4}{24l+1}$
has the expression in the form of (1).
### 2.5 Lemma
If $3l+1=7b+5$ ($l,b\in\mathbb{N}$), then $\frac{4}{56b+33}$ has the
expression in the form of (1).
Proof: If $3l+1=7b+5$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}-\frac{1}{6l+2}=\frac{4}{56b+33}-\frac{1}{2(7b+5)}=\frac{7}{2(7b+5)(56b+33)}=\frac{7(b+1)}{2(b+1)(7b+5)(56b+33)}=\frac{2+(7b+5)}{2(b+1)(7b+5)(56b+33)}=\frac{1}{(b+1)(7b+5)(56b+33)}+\frac{1}{2(b+1)(56b+33)}$
i.e.
$\frac{4}{56b+33}=\frac{1}{2(7b+5)}+\frac{1}{(b+1)(7b+5)(56b+33)}+\frac{1}{2(b+1)(56b+33)}$.
Note: If $24l+1$ has a factor $7b+5$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}$ has the expression in the form of (1).
### 2.6 Lemma
If $3l+1=7b+6$ ($l,b\in\mathbb{N}$), then $\frac{4}{56b+41}$ has the
expression in the form of (1).
Proof: If $3l+1=7b+6$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}-\frac{1}{6l+2}=\frac{4}{56b+41}-\frac{1}{2(7b+6)}=\frac{7}{2(7b+6)(56b+41)}=\frac{7(b+1)}{2(b+1)(7b+6)(56b+41)}=\frac{1+(7b+6)}{2(b+1)(7b+6)(56b+41)}=\frac{1}{2(b+1)(7b+6)(56b+41)}+\frac{1}{2(b+1)(56b+41)}$
i.e.
$\frac{4}{56b+41}=\frac{1}{2(7b+6)}+\frac{1}{2(b+1)(7b+6)(56b+41)}+\frac{1}{2(b+1)(56b+41)}$.
Note: If $24l+1$ has a factor $7b+6$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}$ has the expression in the form of (1).
### 2.7 Lemma
If $3l+1=7b+3$ ($l,b\in\mathbb{N}$), then $\frac{4}{56b+17}$ has the
expression in the form of (1).
Proof: If $3l+1=7b+3$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}-\frac{1}{6l+2}=\frac{4}{56b+17}-\frac{1}{2(7b+3)}=\frac{7}{2(7b+3)(56b+17)}=\frac{7(2b+1)}{2(2b+1)(7b+3)(56b+17)}=\frac{1+(14b+6)}{2(2b+1)(7b+3)(56b+17)}=\frac{1}{2(2b+1)(7b+3)(56b+17)}+\frac{1}{(2b+1)(56b+17)}$
i.e.
$\frac{4}{56b+17}=\frac{1}{2(7b+3)}+\frac{1}{2(2b+1)(7b+3)(56b+17)}+\frac{1}{(2b+1)(56b+17)}$.
Note: If $24l+1$ has a factor $7b+3$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}$ has the expression in the form of (1).
## 3 Theorem
The expression in the form of (1) has a solution for every natural number $n$,
except possibly for those primes of the form $n\equiv r$ (mod 120), with
$r=1,7^{2}$.
Proof: (i) If $l=5b-4$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}-\frac{1}{6l+1}=\frac{4}{120b-95}-\frac{1}{30b-23}=\frac{3}{5(24b-19)(30b-23)}=\frac{2\times
3}{2\times
5(24b-19)(30b-23)}=\frac{1+5}{10(24b-19)(30b-23)}=\frac{1}{10(24b-19)(30b-23)}+\frac{1}{2(24b-19)(30b-23)}$
i.e.
$\frac{4}{120b-95}=\frac{1}{30b-23}+\frac{1}{10(24b-19)(30b-23)}+\frac{1}{2(24b-19)(30b-23)}$.
(ii) If $l=5b-2$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}-\frac{1}{6l+2}=\frac{4}{120b-47}-\frac{1}{30b-10}=\frac{7}{10(120b-47)(3b-1)}=\frac{2+5}{10(120b-47)(3b-1)}=\frac{1}{5(120b-47)(3b-1)}+\frac{1}{2(120b-47)(3b-1)}$
i.e.
$\frac{4}{120b-47}=\frac{1}{30b-10}+\frac{1}{5(120b-47)(3b-1)}+\frac{1}{2(120b-47)(3b-1)}$.
(iii) If $l=5b-1$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}-\frac{1}{6l+1}=\frac{4}{120b-23}-\frac{1}{30b-5}=\frac{3}{5(120b-23)(6b-1)}=\frac{2\times
3}{2\times
5(120b-23)(6b-1)}=\frac{1+5}{10(120b-23)(6b-1)}=\frac{1}{10(120b-23)(6b-1)}+\frac{1}{2(120b-23)(6b-1)}$
i.e.
$\frac{4}{120b-23}=\frac{1}{30b-5}+\frac{1}{10(120b-23)(6b-1)}+\frac{1}{2(120b-23)(6b-1)}$.
(iv) If $l=5b-3$ ($l,b\in\mathbb{N}$), then
$\frac{4}{24l+1}=\frac{4}{120b-71}$, where $120b-71\equiv 49$ (mod 120).
(v) If $l=5b$ ($l,b\in\mathbb{N}$), then $\frac{4}{24l+1}=\frac{4}{120b+1}$,
where $120b+1\equiv 1$ (mod 120).
Hence, the expression in the form of (1) has a solution for every natural
number $n$, except possibly for those primes of the form $n\equiv r$ (mod
120), with $r=1,7^{2}$.
## 4 Theorem (Mordell)
The expression in the form of (1) has a solution for every number $n$, except
possibly for those primes of the form $n\equiv r$ (mod 840), with
$r=1^{2},11^{2},13^{2},17^{2},19^{2},23^{2}$.
Proof: From Theorem 3, we know that the expression (1) has a solution unless
$\frac{4}{n}=\frac{4}{120b+r}$ with $r=1,-71$.
(1) Let $\frac{4}{n}=\frac{4}{120b+1}$.
(i) If $b=7c-6$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b+1}=\frac{4}{840c-719}$, where $840c-719\equiv 11^{2}$ (mod 840).
(ii) If $b=7c-5$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b+1}-\frac{1}{30b+3}=\frac{4}{840c-599}-\frac{1}{210c-147}=\frac{11}{21(10c-7)(840c-599)}=\frac{2\times
11}{2\times
21(10c-7)(840c-599)}=\frac{1+21}{42(10c-7)(840c-599)}=\frac{1}{42(10c-7)(840c-599)}+\frac{1}{2(10c-7)(840c-599)}$
i.e.
$\frac{4}{840c-599}=\frac{1}{210c-147}+\frac{1}{42(10c-7)(840c-599)}+\frac{1}{2(10c-7)(840c-599)}$.
(iii) If $b=7c-4$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b+1}=\frac{4}{840c-479}$, where $840c-479\equiv 19^{2}$ (mod 840).
(iv) If $b=7c-3$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b+1}-\frac{1}{30b+2}=\frac{4}{840c-359}-\frac{1}{210c-88}=\frac{7}{2(105c-44)(840c-359)}=\frac{(15c-6)\times
7}{(15c-6)\times
2(105c-44)(840c-359)}=\frac{2+(105c-44)}{2(15c-6)(105c-44)(840c-359)}=\frac{1}{(15c-6)(105c-44)(840c-359)}+\frac{1}{2(15c-6)(840c-359)}$
i.e.
$\frac{4}{840c-359}=\frac{1}{210c-88}+\frac{1}{(15c-6)(105c-44)(840c-359)}+\frac{1}{2(15c-6)(840c-359)}$.
(v) If $b=7c-2$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b+1}-\frac{1}{30b+2}=\frac{4}{840c-239}-\frac{1}{210c-58}=\frac{7}{2(105c-29)(840c-239)}=\frac{(15c-4)\times
7}{(15c-4)\times
2(105c-29)(840c-239)}=\frac{1+(105c-29)}{2(15c-4)(105c-29)(840c-239)}=\frac{1}{2(15c-4)(105c-29)(840c-239)}+\frac{1}{2(15c-4)(840c-239)}$
i.e.
$\frac{4}{840c-239}=\frac{1}{210c-58}+\frac{1}{2(15c-4)(105c-29)(840c-239)}+\frac{1}{2(15c-4)(840c-239)}$.
(vi) If $b=7c-1$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b+1}-\frac{1}{30b+2}=\frac{4}{840c-119}-\frac{1}{210c-28}=\frac{7}{7(120c-17)\times 14(15c-2)}=\frac{1}{14(15c-2)(120c-17)}=\frac{1}{28(15c-2)(120c-17)}+\frac{1}{28(15c-2)(120c-17)}$
i.e.
$\frac{4}{840c-119}=\frac{1}{210c-28}+\frac{1}{28(15c-2)(120c-17)}+\frac{1}{28(15c-2)(120c-17)}$.
(vii) If $b=7c$ ($b,c\in\mathbb{N}$), then $\frac{4}{120b+1}=\frac{4}{840c+1}$
where $840c+1\equiv 1^{2}$ (mod 840).
(2) Let $\frac{4}{n}=\frac{4}{120b-71}$.
(i) If $b=7c-6$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b-71}-\frac{1}{30b-16}=\frac{4}{840c-791}-\frac{1}{210c-196}=\frac{7}{7(120c-113)14(15c-14)}=\frac{1}{14(120c-113)(15c-14)}=\frac{1}{28(120c-113)(15c-14)}+\frac{1}{28(120c-113)(15c-14)}$
i.e.
$\frac{4}{840c-791}=\frac{1}{210c-196}+\frac{1}{28(120c-113)(15c-14)}+\frac{1}{28(120c-113)(15c-14)}$.
(ii) If $b=7c-5$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b-71}=\frac{4}{840c-671}$ where $840c-671\equiv 13^{2}$ (mod 840).
(iii) If $b=7c-4$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b-71}=\frac{4}{840c-551}$ where $840c-551\equiv 17^{2}$ (mod 840).
(iv) If $b=7c-3$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b-71}-\frac{1}{30b-16}=\frac{4}{840c-431}-\frac{1}{210c-106}=\frac{7}{2(105c-53)(840c-431)}=\frac{(30c-15)\times 7}{(30c-15)\times 2(105c-53)(840c-431)}=\frac{1+(210c-106)}{(30c-15)(210c-106)(840c-431)}=\frac{1}{30(2c-1)(105c-53)(840c-431)}+\frac{1}{15(2c-1)(840c-431)}$,
i.e.
$\frac{4}{840c-431}=\frac{1}{210c-106}+\frac{1}{30(2c-1)(105c-53)(840c-431)}+\frac{1}{15(2c-1)(840c-431)}$.
(v) If $b=7c-2$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b-71}=\frac{4}{840c-311}$ where $840c-311\equiv 23^{2}$ (mod 840).
(vi) If $b=7c-1$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b-71}-\frac{1}{30b-16}=\frac{4}{840c-191}-\frac{1}{210c-46}=\frac{7}{2(105c-23)(840c-191)}=\frac{(15c-3)\times 7}{(15c-3)\times 2(105c-23)(840c-191)}=\frac{2+(105c-23)}{2(15c-3)(105c-23)(840c-191)}=\frac{1}{3(5c-1)(105c-23)(840c-191)}+\frac{1}{6(5c-1)(840c-191)}$,
i.e.
$\frac{4}{840c-191}=\frac{1}{210c-46}+\frac{1}{3(5c-1)(105c-23)(840c-191)}+\frac{1}{6(5c-1)(840c-191)}$.
(vii) If $b=7c$ ($b,c\in\mathbb{N}$), then
$\frac{4}{120b-71}-\frac{1}{30b-16}=\frac{4}{840c-71}-\frac{1}{210c-16}=\frac{7}{2(105c-8)(840c-71)}=\frac{(15c-1)\times 7}{(15c-1)\times 2(105c-8)(840c-71)}=\frac{1+(105c-8)}{2(15c-1)(105c-8)(840c-71)}=\frac{1}{2(15c-1)(105c-8)(840c-71)}+\frac{1}{2(15c-1)(840c-71)}$,
i.e.
$\frac{4}{840c-71}=\frac{1}{210c-16}+\frac{1}{2(15c-1)(105c-8)(840c-71)}+\frac{1}{2(15c-1)(840c-71)}$.
Hence, the expression in the form of the equation (1) has a solution for every
number $n$, except possibly for those primes of the form $n\equiv r$ (mod
840), with $r=1^{2},11^{2},13^{2},17^{2},19^{2},23^{2}$.
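The seven-case analysis above can be spot-checked with exact rational arithmetic. The Python sketch below (all identifiers are ours, introduced only for this check) verifies a representative sample of the displayed identities for $c=1,\dots,50$.

```python
from fractions import Fraction as F

# Identities from cases (1)(ii), (1)(iv)-(vi) and (2)(iv) above:
# each entry gives n(c) and [x(c), y(c), z(c)] with 4/n = 1/x + 1/y + 1/z.
identities = [
    (lambda c: 840*c - 599,
     lambda c: [210*c - 147, 42*(10*c-7)*(840*c-599), 2*(10*c-7)*(840*c-599)]),
    (lambda c: 840*c - 359,
     lambda c: [210*c - 88, (15*c-6)*(105*c-44)*(840*c-359), 2*(15*c-6)*(840*c-359)]),
    (lambda c: 840*c - 239,
     lambda c: [210*c - 58, 2*(15*c-4)*(105*c-29)*(840*c-239), 2*(15*c-4)*(840*c-239)]),
    (lambda c: 840*c - 119,
     lambda c: [210*c - 28, 28*(15*c-2)*(120*c-17), 28*(15*c-2)*(120*c-17)]),
    (lambda c: 840*c - 431,
     lambda c: [210*c - 106, 30*(2*c-1)*(105*c-53)*(840*c-431), 15*(2*c-1)*(840*c-431)]),
]
for n_of, xyz_of in identities:
    for c in range(1, 51):
        assert sum(F(1, t) for t in xyz_of(c)) == F(4, n_of(c))
print("all sampled identities hold")
```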
## 5 Theorem
The Erdős-Straus conjecture is true, i.e., the expression in the form of (1) has
a solution for every natural number $n$.
Proof: We know that the Erdős-Straus conjecture is equivalent to solving the
equation (1) for all primes, because it is well known that if
$\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$ holds for $n=p$, then
$\frac{4}{mp}=\frac{1}{mx}+\frac{1}{my}+\frac{1}{mz}$ for any natural
number $m$. Let us consider $n$ to be a prime number $p$ greater than 2
in the equation (1). For $p=2$, we know
$\frac{4}{2}=\frac{1}{1}+\frac{1}{2}+\frac{1}{2}$.
Case 1: If $x=y=z$, then $3p=4x$, which is an invalid equation. Thus the
equation (1) has no solution if $n$ is a prime number and $x=y=z$.
Case 2: Two of $x,y,z$ are equal. Without loss of generality, let $x=y$.
$\text{Then}~{}\frac{4}{p}=\frac{2}{x}+\frac{1}{z}~{}\text{i.e.}~{}p(x+2z)=4xz.$
(4)
Thus $p$ divides $x$ or $p$ divides $z$ for $p>2$.
(I) Let $p$ divide $x$, so that $x=up,~{}u\in\mathbb{N}$. From (4), we get
$up+2z=4uz$. Or, $2z(2u-1)=up$.
(i) If $p$ divides $z$, i.e. $z=vp,~{}v\in\mathbb{N}$, then $2v(2u-1)=u$, which
is an invalid equation.
(ii) If $p$ divides $2u-1$, i.e. $2u-1=v_{1}p,~{}v_{1}\in\mathbb{N}$, then
$2zv_{1}=u$, so $2v_{1}$ divides $u$ since $z=\frac{u}{2v_{1}}\in\mathbb{N}$;
in particular, $u$ is even. Writing $u=2u_{1}$, we get $4u_{1}-1=v_{1}p$ and
$v_{1}$ divides $u_{1}$ (since $z=\frac{u_{1}}{v_{1}}\in\mathbb{N}$). So, since
$v_{1}$ divides both $u_{1}$ and $4u_{1}-1$, we get $v_{1}=1$.
Thus $p=4u_{1}-1=4u_{2}+3,~{}x=2(u_{2}+1)p=y,~{}z=u_{2}+1$, where
$u_{1}=u_{2}+1$.
$\text{Then}~{}\frac{4}{p}=\frac{1}{2(u_{2}+1)p}+\frac{1}{2(u_{2}+1)p}+\frac{1}{u_{2}+1}=\frac{1}{\frac{p(p+1)}{2}}+\frac{1}{\frac{p(p+1)}{2}}+\frac{1}{\frac{p+1}{4}}.$
(5)
(II) Let $p$ divide $z$, so that $z=u_{3}p,~{}u_{3}\in\mathbb{N}$. From (4), we
get $x+2u_{3}p=4u_{3}x$. Or, $(4u_{3}-1)x=2u_{3}p$.
(i) If $p$ divides $x$, i.e. $x=v_{2}p,~{}v_{2}\in\mathbb{N}$, then
$(4u_{3}-1)v_{2}=2u_{3}$, which is an invalid equation.
(ii) If $p$ divides $4u_{3}-1$, i.e. $4u_{3}-1=v_{3}p,~{}v_{3}\in\mathbb{N}$,
then $xv_{3}=2u_{3}$. Since $v_{3}$ divides the odd number $4u_{3}-1$, it is
odd, and since it also divides $2u_{3}$ it divides $u_{3}$; together with
$v_{3}\mid 4u_{3}-1$ this gives $v_{3}=1$. Thus
$p=4u_{3}-1=4u_{4}+3,~{}x=2(u_{4}+1)=y,~{}z=(u_{4}+1)p$, where
$u_{3}=u_{4}+1$.
$\text{Then}~{}\frac{4}{p}=\frac{1}{2(u_{4}+1)}+\frac{1}{2(u_{4}+1)}+\frac{1}{(u_{4}+1)p}=\frac{1}{\frac{(p+1)}{2}}+\frac{1}{\frac{(p+1)}{2}}+\frac{1}{\frac{p(p+1)}{4}}.$
(6)
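Closed forms (5) and (6) can be confirmed numerically for primes $p\equiv 3\pmod 4$; a minimal Python check (written by us for illustration):

```python
from fractions import Fraction as F

# For p ≡ 3 (mod 4), q = (p+1)/4 is an integer and formulas (5) and (6) hold.
for p in [3, 7, 11, 19, 23, 31, 43, 47, 59, 67]:
    q = (p + 1) // 4
    # (5): x = y = p(p+1)/2, z = (p+1)/4
    assert 2 * F(1, p * (p + 1) // 2) + F(1, q) == F(4, p)
    # (6): x = y = (p+1)/2, z = p(p+1)/4
    assert 2 * F(1, (p + 1) // 2) + F(1, p * q) == F(4, p)
print("formulas (5) and (6) verified")
```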
Case 3: Let $x,y,z$ be pairwise distinct. From the equation (1), we get
$p(xy+yz+zx)=4xyz$. Then $p$ divides at least one of $x,y,z$. Without loss of
generality, let $p$ divide $x$, i.e. $x=u_{5}p,~{}u_{5}\in\mathbb{N}$.
$\text{Thus}~{}u_{5}p(y+z)=yz(4u_{5}-1).$ (7)
(I) Let $p$ divide $y$ or $z$. Without loss of generality, let $p$ divide
$y$. Then $y=v_{4}p,~{}v_{4}\in\mathbb{N}$. From (7), we get
$u_{5}v_{4}p=z(4u_{5}v_{4}-u_{5}-v_{4})$.
(i) If $p$ divides $z$, i.e. $z=w_{1}p,~{}w_{1}\in\mathbb{N}$, then
$u_{5}v_{4}+w_{1}v_{4}+w_{1}u_{5}=4u_{5}v_{4}w_{1}$, which is an invalid
equation.
(ii) If $p$ divides $4u_{5}v_{4}-u_{5}-v_{4}$,
$\text{then}~{}4u_{5}v_{4}-u_{5}-v_{4}=w_{2}p,~{}w_{2}\in\mathbb{N}.$ (8)
So, $u_{5}v_{4}=zw_{2}$. Thus $w_{2}$ divides $u_{5}v_{4}$, $w_{2}$ divides
$4u_{5}v_{4}-u_{5}-v_{4}$, and hence $w_{2}$ divides $u_{5}+v_{4}$, $u_{5}^{2}$,
$v_{4}^{2}$, $u_{5}(u_{5}-v_{4})$, $v_{4}(u_{5}-v_{4})$, $(u_{5}-v_{4})^{2}$,
etc. Now, $w_{2}$ divides $u_{5}v_{4}$ and $u_{5}+v_{4}$ means
$u_{5}v_{4}=w_{3}w_{2},~{}w_{3}\in\mathbb{N}$ and
$u_{5}+v_{4}=w_{4}w_{2},~{}w_{4}\in\mathbb{N}$. Then from (8), we get
$p=4w_{3}-w_{4}$. Hence the equation (1) has a solution when $n=p=4w_{3}-w_{4}$
with $x=u_{5}p$, $y=v_{4}p$, $z=w_{3}$, $u_{5}v_{4}=w_{3}w_{2}$,
$u_{5}+v_{4}=w_{4}w_{2}$ since
$\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=\frac{(u_{5}+v_{4})w_{3}+u_{5}v_{4}p}{u_{5}v_{4}w_{3}p}=\frac{w_{4}w_{2}w_{3}+w_{3}w_{2}p}{u_{5}v_{4}w_{3}p}=\frac{(w_{4}+p)w_{2}}{u_{5}v_{4}p}=\frac{4w_{3}w_{2}}{w_{3}w_{2}p}=\frac{4}{p}.$
(9)
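The parametrisation expressed by (9) holds for any positive integer of the form $4w_{3}-w_{4}$, not only primes; the Python sketch below (identifiers ours) checks it exhaustively for small $u_{5},v_{4}$ and every admissible $w_{2}$.

```python
from fractions import Fraction as F
from math import gcd

# For any u5, v4 and any common divisor w2 of u5*v4 and u5 + v4, put
# w3 = u5*v4/w2, w4 = (u5+v4)/w2, p = 4*w3 - w4; then (9) gives 4/p.
for u5 in range(1, 31):
    for v4 in range(1, 31):
        for w2 in range(1, gcd(u5 * v4, u5 + v4) + 1):
            if (u5 * v4) % w2 or (u5 + v4) % w2:
                continue
            w3, w4 = u5 * v4 // w2, (u5 + v4) // w2
            p = 4 * w3 - w4              # = (4*u5*v4 - u5 - v4)/w2 > 0
            assert F(1, u5 * p) + F(1, v4 * p) + F(1, w3) == F(4, p)
print("parametrisation (9) verified")
```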
We have to demonstrate solutions of the equation (1) when $n=4m+1$, since we
already know solutions of it for $n=4m$, $n=4m+2$ and $n=4m+3$.
(a) If $w_{4}=3$, then $p=4w_{3}-3$ (i.e. $n=4m+1$), $v_{4}=3w_{2}-u_{5}$,
$w_{3}w_{2}=u_{5}v_{4}=u_{5}(3w_{2}-u_{5})$, i.e.
$w_{2}=\frac{u_{5}^{2}}{3u_{5}-w_{3}}=\frac{u_{5}^{2}}{3u_{5}-\frac{p+3}{4}}$.
Thus we have to choose $u_{5}$
$\left(>\left[\frac{\left(\frac{p+3}{4}\right)}{3}\right]\right)$ such that
$w_{2}\in\mathbb{N}$. However, this will have limited use for the solutions
when $n=4m+1$, as $w_{4}=3$ is a special case.
For example, if $p=13$, then $w_{3}=4,~{}w_{4}=3$. If we choose $u_{5}=2$,
then $w_{2}=2,~{}v_{4}=4$. Hence $\frac{4}{13}=\frac{1}{2\times
13}+\frac{1}{4\times 13}+\frac{1}{4}$. We can try to reproduce the solutions
of Section 2.1.1.
For $p=409$, $w_{3}=103,~{}w_{4}=3$. Then, we cannot find any
$w_{2}\in\mathbb{N}$ if $u_{5}<1000001$.
For $p=577$, $w_{3}=145,~{}w_{4}=3$. If we choose $u_{5}=50$, then we can get
$w_{2}=500,~{}v_{4}=1450$. Thus $\frac{4}{577}=\frac{1}{50\times
577}+\frac{1}{1450\times 577}+\frac{1}{145}$. If we choose $u_{5}=58$, then we
can get $w_{2}=116,~{}v_{4}=290$ and $\frac{4}{577}=\frac{1}{58\times
577}+\frac{1}{290\times 577}+\frac{1}{145}$.
For $p=5569$, $w_{3}=1393,~{}w_{4}=3$. Then, we cannot find any
$w_{2}\in\mathbb{N}$ if $u_{5}<1000001$.
For $p=9601$, $w_{3}=2401,~{}w_{4}=3$. Then, we cannot find any
$w_{2}\in\mathbb{N}$ if $u_{5}<1000001$.
Note: Let $w_{2}\in\mathbb{N}$, then $3u_{5}-w_{3}>0$. Thus
$v_{4}=3w_{2}-u_{5}=\frac{3u_{5}^{2}}{3u_{5}-w_{3}}-u_{5}=\frac{u_{5}w_{3}}{3u_{5}-w_{3}}>0$.
Hence $w_{2}\in\mathbb{N}$ implies $v_{4}\in\mathbb{N}$.
(b) Let $w_{4}=4w_{5}+3$ where $w_{5}=0$ or $w_{5}\in\mathbb{N}$, then
$p=4(w_{3}-w_{5})-3$ (i.e. $n=p=4m+1$). So,
$w_{3}-w_{5}=\frac{p+3}{4}\in\mathbb{N}$. Or, $w_{3}=w_{5}+\frac{p+3}{4}$,
$w_{2}=\frac{u_{5}^{2}}{(4w_{5}+3)u_{5}-\frac{p+3}{4}-w_{5}}$. Thus, we have
to choose $w_{5}$, $u_{5}$
$\left(>\left[\frac{w_{5}+\frac{p+3}{4}}{4w_{5}+3}\right]\right)$ such that
$w_{2}\in\mathbb{N}$. Using these $w_{5}$, $u_{5}$, $w_{2}$, we can calculate
$w_{3}=w_{5}+\frac{p+3}{4}$, $v_{4}=(4w_{5}+3)w_{2}-u_{5}$. Hence, in this way
we can generate all solutions of the equation (1) if $n=p=4m+1$.
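The recipe of case (b) is mechanical and translates directly into code. In the Python sketch below (the function name and the search bounds are our choices) each admissible pair $(w_{5},u_{5})$ yields $w_{2}$, $v_{4}$, $w_{3}$ and an exact decomposition of $\frac{4}{p}$.

```python
from fractions import Fraction as F

def solutions(p, max_w5, max_u5):
    """Enumerate (w5, u5, w2, v4, w3) for a prime p = 4m + 1, per case (b)."""
    assert p % 4 == 1
    m = (p + 3) // 4
    found = []
    for w5 in range(0, max_w5 + 1):
        for u5 in range(1, max_u5 + 1):
            d = (4 * w5 + 3) * u5 - m - w5      # denominator of w2
            if d <= 0 or (u5 * u5) % d:
                continue                         # w2 not a natural number
            w2 = u5 * u5 // d
            v4 = (4 * w5 + 3) * w2 - u5
            w3 = w5 + m
            assert F(1, u5 * p) + F(1, v4 * p) + F(1, w3) == F(4, p)
            found.append((w5, u5, w2, v4, w3))
    return found

sols = solutions(409, 20, 250)
print(sols[0])    # (1, 15, 225, 1560, 104): the first solution for p = 409
```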
As examples, we can reproduce the solutions of Section 2.1.1. We can get
multiple solutions if we choose $w_{5},~{}u_{5}\leq 1000$ with
$u_{5}>\left[\frac{w_{5}+\frac{p+3}{4}}{4w_{5}+3}\right]$; however, to recover
the same solutions as in Section 2.1.1, we have to increase the range of
$u_{5}$ in the majority of cases.
Let $p=409$. For $w_{5},~{}u_{5}\leq 1000$, we get eleven solutions of the
equation (1). The first solution is for $w_{5}=1,~{}u_{5}=15$. Then
$w_{2}=225$, $v_{4}=1560$, $w_{3}=104$ and hence
$\frac{4}{409}=\frac{1}{15\times 409}+\frac{1}{1560\times 409}+\frac{1}{104}$.
The last solution is for $w_{5}=14,~{}u_{5}=234$. Then $w_{2}=4$, $v_{4}=2$,
$w_{3}=117$ and hence $\frac{4}{409}=\frac{1}{234\times 409}+\frac{1}{2\times
409}+\frac{1}{117}$. The same solution as in Section 2.1.1 is obtained for
$w_{5}=1,~{}u_{5}=16$. Then $w_{2}=32$, $v_{4}=208$, $w_{3}=104$ and hence
$\frac{4}{409}=\frac{1}{16\times 409}+\frac{1}{208\times 409}+\frac{1}{104}$.
Let $p=577$. For $w_{5},~{}u_{5}\leq 1000$, we get twelve solutions of the
equation (1). The first solution is for $w_{5}=0,~{}u_{5}=50$. Then
$w_{2}=500$, $v_{4}=1450$, $w_{3}=145$ and hence
$\frac{4}{577}=\frac{1}{50\times 577}+\frac{1}{1450\times 577}+\frac{1}{145}$.
The last solution is for $w_{5}=20,~{}u_{5}=330$. Then $w_{2}=4$, $v_{4}=2$,
$w_{3}=165$ and hence $\frac{4}{577}=\frac{1}{330\times 577}+\frac{1}{2\times
577}+\frac{1}{165}$. The same solution as in Section 2.1.1 is obtained for
$w_{5}=0,~{}u_{5}=58$. Then $w_{2}=116$, $v_{4}=290$, $w_{3}=145$ and hence
$\frac{4}{577}=\frac{1}{58\times 577}+\frac{1}{290\times 577}+\frac{1}{145}$.
Let $p=5569$. For $w_{5},~{}u_{5}\leq 1000$, we get eleven solutions of the
equation (1). The first solution is for $w_{5}=1,~{}u_{5}=204$. Then
$w_{2}=1224$, $v_{4}=8364$, $w_{3}=1394$ and hence
$\frac{4}{5569}=\frac{1}{204\times 5569}+\frac{1}{8364\times
5569}+\frac{1}{1394}$, which is the same solution as in Section 2.1.1. The
last solution is for $w_{5}=35,~{}u_{5}=10$. Then $w_{2}=50$, $v_{4}=7140$,
$w_{3}=1428$ and hence $\frac{4}{5569}=\frac{1}{10\times
5569}+\frac{1}{7140\times 5569}+\frac{1}{1428}$.
Let $p=9601$. For $w_{5},~{}u_{5}\leq 1000$, we get six solutions of the
equation (1). The first solution is for $w_{5}=4,~{}u_{5}=130$. Then
$w_{2}=260$, $v_{4}=4810$, $w_{3}=2405$ and hence
$\frac{4}{9601}=\frac{1}{130\times 9601}+\frac{1}{4810\times
9601}+\frac{1}{2405}$, which is the same solution as in Section 2.1.1. The
last solution is for $w_{5}=104,~{}u_{5}=6$. Then $w_{2}=4$, $v_{4}=1670$,
$w_{3}=2505$ and hence $\frac{4}{9601}=\frac{1}{6\times
9601}+\frac{1}{1670\times 9601}+\frac{1}{2505}$.
Let $p=23929$. For $w_{5},~{}u_{5}\leq 1000$, we get twenty-four solutions of
the equation (1). The first solution is for $w_{5}=1,~{}u_{5}=855$. Then
$w_{2}=731025$, $v_{4}=5116320$, $w_{3}=5984$ and hence
$\frac{4}{23929}=\frac{1}{855\times 23929}+\frac{1}{5116320\times
23929}+\frac{1}{5984}$. The last solution is for $w_{5}=854,~{}u_{5}=2$. Then
$w_{2}=4$, $v_{4}=13674$, $w_{3}=6837$ and hence
$\frac{4}{23929}=\frac{1}{2\times 23929}+\frac{1}{13674\times
23929}+\frac{1}{6837}$. To recover the same solution as in Section 2.1.1, we
can consider the case $w_{5}\leq 1000,~{}u_{5}\leq 10000$, and then we get
thirty-six solutions of the equation (1). Thus the same solution as in Section
2.1.1 is obtained for $w_{5}=1,~{}u_{5}=1056$. Then $w_{2}=792$, $v_{4}=4488$, $w_{3}=5984$
and hence $\frac{4}{23929}=\frac{1}{1056\times 23929}+\frac{1}{4488\times
23929}+\frac{1}{5984}$.
Let $p=83449$. For $w_{5},~{}u_{5}\leq 1000$, we get eleven solutions of the
equation (1). The first solution is for $w_{5}=5,~{}u_{5}=908$. Then
$w_{2}=51529$, $v_{4}=1184259$, $w_{3}=20868$ and hence
$\frac{4}{83449}=\frac{1}{908\times 83449}+\frac{1}{1184259\times
83449}+\frac{1}{20868}$. The last solution is for $w_{5}=353,~{}u_{5}=15$.
Then $w_{2}=25$, $v_{4}=35360$, $w_{3}=21216$ and hence
$\frac{4}{83449}=\frac{1}{15\times 83449}+\frac{1}{35360\times
83449}+\frac{1}{21216}$. To recover the same solution as in Section 2.1.1, we
can consider the case $w_{5}\leq 1000,~{}u_{5}\leq 10000$, and then we get
seventeen solutions of the equation (1). Thus the same solution as in Section
2.1.1 is obtained for $w_{5}=2,~{}u_{5}=1950$. Then $w_{2}=6500$, $v_{4}=69550$,
$w_{3}=20865$ and hence $\frac{4}{83449}=\frac{1}{1950\times
83449}+\frac{1}{69550\times 83449}+\frac{1}{20865}$.
Let $p=102001$. For $w_{5},~{}u_{5}\leq 1000$, we get thirteen solutions of
the equation (1). The first solution is for $w_{5}=7,~{}u_{5}=826$. Then
$w_{2}=6962$, $v_{4}=214996$, $w_{3}=25508$ and hence
$\frac{4}{102001}=\frac{1}{826\times 102001}+\frac{1}{214996\times
102001}+\frac{1}{25508}$. The last solution is for $w_{5}=542,~{}u_{5}=12$.
Then $w_{2}=16$, $v_{4}=34724$, $w_{3}=26043$ and hence
$\frac{4}{102001}=\frac{1}{12\times 102001}+\frac{1}{34724\times
102001}+\frac{1}{26043}$. To recover the same solution as in Section 2.1.1, we
can consider the case $w_{5}\leq 1000,~{}u_{5}\leq 10000$, and then we get
twenty solutions of the equation (1). Thus the same solution as in Section
2.1.1 is obtained for $w_{5}=1,~{}u_{5}=3732$. Then $w_{2}=22392$, $v_{4}=153012$,
$w_{3}=25502$ and hence $\frac{4}{102001}=\frac{1}{3732\times
102001}+\frac{1}{153012\times 102001}+\frac{1}{25502}$.
Let $p=329617$. For $w_{5},~{}u_{5}\leq 1000$, we get fourteen solutions of
the equation (1). The first solution is for $w_{5}=26,~{}u_{5}=774$. Then
$w_{2}=1548$, $v_{4}=164862$, $w_{3}=82431$ and hence
$\frac{4}{329617}=\frac{1}{774\times 329617}+\frac{1}{164862\times
329617}+\frac{1}{82431}$. The last solution is for $w_{5}=867,~{}u_{5}=24$.
Then $w_{2}=18$, $v_{4}=62454$, $w_{3}=83272$ and hence
$\frac{4}{329617}=\frac{1}{24\times 329617}+\frac{1}{62454\times
329617}+\frac{1}{83272}$. To recover the same solution as in Section 2.1.1, we
can consider the case $w_{5}\leq 1000,~{}u_{5}\leq 100000$, and then we get
thirty-two solutions of the equation (1). Thus the same solution as in
Section 2.1.1 is obtained for $w_{5}=0,~{}u_{5}=32962$. Then $w_{2}=65924$,
$v_{4}=164810$, $w_{3}=82405$ and hence $\frac{4}{329617}=\frac{1}{32962\times
329617}+\frac{1}{164810\times 329617}+\frac{1}{82405}$.
Let $p=712321$. For $w_{5},~{}u_{5}\leq 1000$, we get fourteen solutions of
the equation (1). The first solution is for $w_{5}=47,~{}u_{5}=936$. Then
$w_{2}=1352$, $v_{4}=257296$, $w_{3}=178128$ and hence
$\frac{4}{712321}=\frac{1}{936\times 712321}+\frac{1}{257296\times
712321}+\frac{1}{178128}$. The last solution is for $w_{5}=587,~{}u_{5}=76$.
Then $w_{2}=722$, $v_{4}=1697346$, $w_{3}=178668$ and hence
$\frac{4}{712321}=\frac{1}{76\times 712321}+\frac{1}{1697346\times
712321}+\frac{1}{178668}$. To recover the same solution as in Section 2.1.1, we
can consider the case $w_{5}\leq 1000,~{}u_{5}\leq 10000$, and then we get
twenty-six solutions of the equation (1). Thus the same solution as in
Section 2.1.1 is obtained for $w_{5}=5,~{}u_{5}=7974$. Then $w_{2}=11961$,
$v_{4}=267129$, $w_{3}=178086$ and hence $\frac{4}{712321}=\frac{1}{7974\times
712321}+\frac{1}{267129\times 712321}+\frac{1}{178086}$.
Let $p=1134241$. For $w_{5},~{}u_{5}\leq 1000$, we get twelve solutions of the
equation (1). The first solution is for $w_{5}=101,~{}u_{5}=697$. Then
$w_{2}=28577$, $v_{4}=11630142$, $w_{3}=283662$ and hence
$\frac{4}{1134241}=\frac{1}{697\times 1134241}+\frac{1}{11630142\times
1134241}+\frac{1}{283662}$. The last solution is for $w_{5}=696,~{}u_{5}=102$.
Then $w_{2}=612$, $v_{4}=1705542$, $w_{3}=284257$ and hence
$\frac{4}{1134241}=\frac{1}{102\times 1134241}+\frac{1}{1705542\times
1134241}+\frac{1}{284257}$. To recover the same solution as in Section 2.1.1, we
can consider the case $w_{5}\leq 1000,~{}u_{5}\leq 100000$, and then we get
forty-four solutions of the equation (1). Thus the same solution as in
Section 2.1.1 is obtained for $w_{5}=0,~{}u_{5}=94926$. Then $w_{2}=7404228$,
$v_{4}=22117758$, $w_{3}=283561$ and hence
$\frac{4}{1134241}=\frac{1}{94926\times 1134241}+\frac{1}{22117758\times
1134241}+\frac{1}{283561}$.
Let $p=1724209$. For $w_{5},~{}u_{5}\leq 1000$, we get seven solutions of the
equation (1). The first solution is for $w_{5}=125,~{}u_{5}=858$. Then
$w_{2}=1859$, $v_{4}=934219$, $w_{3}=431178$ and hence
$\frac{4}{1724209}=\frac{1}{858\times 1724209}+\frac{1}{934219\times
1724209}+\frac{1}{431178}$. The last solution is for $w_{5}=749,~{}u_{5}=144$.
Then $w_{2}=384$, $v_{4}=1151472$, $w_{3}=431802$ and hence
$\frac{4}{1724209}=\frac{1}{144\times 1724209}+\frac{1}{1151472\times
1724209}+\frac{1}{431802}$. To recover the same solution as in Section 2.1.1, we
can consider the case $w_{5}\leq 1000,~{}u_{5}\leq 100000$, and then we get
thirty-five solutions of the equation (1). Thus the same solution as in
Section 2.1.1 is obtained for $w_{5}=1,~{}u_{5}=66316$. Then $w_{2}=132632$,
$v_{4}=862108$, $w_{3}=431054$ and hence
$\frac{4}{1724209}=\frac{1}{66316\times 1724209}+\frac{1}{862108\times
1724209}+\frac{1}{431054}$.
Let $p=1726201$. For $w_{5},~{}u_{5}\leq 1000$, we get thirteen solutions of
the equation (1). The first solution is for $w_{5}=125,~{}u_{5}=864$. Then
$w_{2}=256$, $v_{4}=127904$, $w_{3}=431676$ and hence
$\frac{4}{1726201}=\frac{1}{864\times 1726201}+\frac{1}{127904\times
1726201}+\frac{1}{431676}$. The last solution is for $w_{5}=473,~{}u_{5}=228$.
Then $w_{2}=1444$, $v_{4}=2736152$, $w_{3}=432024$ and hence
$\frac{4}{1726201}=\frac{1}{228\times 1726201}+\frac{1}{2736152\times
1726201}+\frac{1}{432024}$. To recover the same solution as in Section 2.1.1, we
can consider the case $w_{5}\leq 1000,~{}u_{5}\leq 10000$, and then we get
twenty-seven solutions of the equation (1). Thus the same solution as in
Section 2.1.1 is obtained for $w_{5}=15,~{}u_{5}=7790$. Then $w_{2}=1025$,
$v_{4}=56785$, $w_{3}=431566$ and hence $\frac{4}{1726201}=\frac{1}{7790\times
1726201}+\frac{1}{56785\times 1726201}+\frac{1}{431566}$.
Thus the Erdős-Straus conjecture is true for all values of natural numbers
$n\geq 2$.
Note: Let $w_{2}\in\mathbb{N}$, then $(4w_{5}+3)u_{5}-\frac{p+3}{4}-w_{5}>0$.
Thus
$v_{4}=(4w_{5}+3)w_{2}-u_{5}=\frac{(4w_{5}+3)u_{5}^{2}}{(4w_{5}+3)u_{5}-\frac{p+3}{4}-w_{5}}-u_{5}=\frac{u_{5}(\frac{p+3}{4}+w_{5})}{(4w_{5}+3)u_{5}-\frac{p+3}{4}-w_{5}}>0$.
Hence $w_{2}\in\mathbb{N}$ implies $v_{4}\in\mathbb{N}$.
### 5.1 Remark
For $w_{4}=4w_{5}+3$, $p=4(w_{3}-w_{5})-3$,
$x=u_{5}p,~{}y=v_{4}p,~{}z=w_{3},~{}u_{5}v_{4}=w_{2}w_{3},~{}u_{5}+v_{4}=w_{2}w_{4}$.
Then $w_{3}=\frac{p+3}{4}+w_{5}$,
$u_{5}v_{4}=w_{2}(\frac{p+3}{4}+w_{5}),~{}u_{5}+v_{4}=w_{2}(4w_{5}+3)$. Thus
$\frac{u_{5}v_{4}}{u_{5}+v_{4}}=\frac{\frac{p+3}{4}+w_{5}}{4w_{5}+3}$. Hence
$u_{5}v_{4}(4w_{5}+3)=(\frac{p+3}{4}+w_{5})(u_{5}+v_{4})$.
### 5.2 Corollary
If $w_{2}=1$ and $u_{5}+v_{4}=4w_{6}-1$, then
$p=4\left\\{(u_{5}(4w_{6}-u_{5}-1)-w_{6})\right\\}+1~{}\text{and}~{}\frac{4}{p}=\frac{1}{u_{5}p}+\frac{1}{(4w_{6}-u_{5}-1)p}+\frac{1}{u_{5}(4w_{6}-u_{5}-1)}.$
(10)
Proof: Let $w_{2}=1$. Then from equation (8), we get
$p=4u_{5}v_{4}-u_{5}-v_{4},~{}x=u_{5}p,~{}y=v_{4}p,~{}z=u_{5}v_{4}$. If
$u_{5}+v_{4}=4w_{6}-1$, then
$p=4\left\\{(u_{5}(4w_{6}-u_{5}-1)-w_{6})\right\\}+1,~{}y=(4w_{6}-u_{5}-1)p$.
Thus
$\frac{1}{u_{5}p}+\frac{1}{(4w_{6}-u_{5}-1)p}+\frac{1}{u_{5}(4w_{6}-u_{5}-1)}=\frac{(4w_{6}-u_{5}-1)+u_{5}+p}{u_{5}(4w_{6}-u_{5}-1)p}=\frac{(4w_{6}-1)+4\left\\{(u_{5}(4w_{6}-u_{5}-1)-w_{6})\right\\}+1}{u_{5}(4w_{6}-u_{5}-1)p}=\frac{4u_{5}(4w_{6}-u_{5}-1)}{u_{5}(4w_{6}-u_{5}-1)p}=\frac{4}{p}$.
If (a) $u_{5}=1$, then $p=12w_{6}-7=12w_{7}+5$; (b) $u_{5}=2$, then
$p=28w_{6}-23=28w_{7}+5$; (c) $u_{5}=3$, then $p=44w_{6}-47=44w_{7}+41$; (d)
$u_{5}=4$, then $p=60w_{6}-79=60w_{7}+41$; (e) $u_{5}=5$, then
$p=76w_{6}-119=76w_{7}+33$; (f) $u_{5}=6$, then $p=92w_{6}-167=92w_{7}+17$; (g)
$u_{5}=7$, then $p=108w_{6}-223=108w_{7}+101$; (h) $u_{5}=8$, then
$p=124w_{6}-287=124w_{7}+85$; (i) $u_{5}=9$, then $p=140w_{6}-359=140w_{7}+61$;
(j) $u_{5}=10$, then $p=156w_{6}-439=156w_{7}+29$; etc.
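All of the families (a)-(j) stem from the single identity (10), which can be machine-verified with exact rationals; the short Python check below (loop bounds ours) covers $u_{5}=1,\dots,10$.

```python
from fractions import Fraction as F

# Corollary (10): with w2 = 1 and u5 + v4 = 4*w6 - 1,
# p = 4*(u5*(4*w6 - u5 - 1) - w6) + 1 and 4/p = 1/(u5*p) + 1/(v4*p) + 1/(u5*v4).
for u5 in range(1, 11):
    for w6 in range(u5 + 1, u5 + 41):    # ensures v4 = 4*w6 - u5 - 1 > 0
        v4 = 4 * w6 - u5 - 1
        p = 4 * (u5 * v4 - w6) + 1
        assert F(1, u5 * p) + F(1, v4 * p) + F(1, u5 * v4) == F(4, p)
print("corollary (10) verified")
```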
(II) Let $p$ divide $4u_{5}-1$. Then $4u_{5}-1=v_{5}p,~{}v_{5}\in\mathbb{N}$.
From (7), we get $u_{5}(y+z)=yzv_{5}$, i.e.
$\frac{v_{5}}{u_{5}}=\frac{1}{y}+\frac{1}{z}$.
(i) If $v_{5}$ divides $u_{5}$, then $v_{5}$ divides $u_{5}$ and $4u_{5}-1$.
Thus $v_{5}=1$, $p=4u_{5}-1=4u_{6}+3,~{}x=(u_{6}+1)p$, and one possible choice
of $y,~{}z$ is $y=(u_{6}+2),~{}z=(u_{6}+1)(u_{6}+2)$, where $u_{5}=u_{6}+1$.
$\text{Then}~{}\frac{4}{p}=\frac{1}{(u_{6}+1)p}+\frac{1}{(u_{6}+2)}+\frac{1}{(u_{6}+1)(u_{6}+2)}=\frac{1}{\frac{p(p+1)}{4}}+\frac{1}{\frac{p+1}{4}+1}+\frac{1}{\frac{p+1}{4}(\frac{p+1}{4}+1)}.$
(11)
(ii) If $v_{5}=v_{6}+v_{7}$ with $v_{6}$ and $v_{7}$ both dividing $u_{5}$, then
$y=\frac{u_{5}}{v_{6}},~{}z=\frac{u_{5}}{v_{7}}$. Also, if for some suitable
scalar $r_{2}\in\mathbb{N}$,
$\frac{1}{y}+\frac{1}{z}=\frac{v_{5}}{u_{5}}=\frac{r_{2}v_{5}}{r_{2}u_{5}}$
with $r_{2}v_{5}=v_{8}+v_{9}$ such that $v_{8}$ and $v_{9}$ divide
$r_{2}u_{5}$, then $y=\frac{r_{2}u_{5}}{v_{8}},~{}z=\frac{r_{2}u_{5}}{v_{9}}$.
Acknowledgment: I am grateful to the University Grants Commission (UGC), New
Delhi, for awarding the Dr. D. S. Kothari Post Doctoral Fellowship from 9th
July, 2012 to 8th July, 2015 at the Indian Institute of Technology (BHU),
Varanasi. The self-training for this investigation continued during that
period.
# Nested Coordinate Systems in Geometric Algebra
Garret Sobczyk
Universidad de las Américas-Puebla
Departamento de Actuaría Física y Matemáticas
72820 Puebla, Pue., México
###### Abstract
A nested coordinate system is a reassignment of independent variables to take
advantage of geometric or symmetry properties of a particular application.
Polar, cylindrical and spherical coordinate systems are primary examples of
such a regrouping that have proved their importance in the separation of
variables method for solving partial differential equations. Geometric algebra
offers powerful complementary algebraic tools that are unavailable in other
treatments.
AMS Subject Classification MSC-2020: 15A63, 15A67, 42B37.
Keywords: Clifford algebra, coordinate systems, geometric algebra, separation
of variables.
## 0 Introduction
Geometric algebra $\mathbb{G}_{3}$ is the natural generalization of the
Gibbs-Heaviside vector algebra, but unlike the latter, it can be immediately
generalized to higher dimensional geometric algebras $\mathbb{G}_{p,q}$ of a
quadratic form. On the other hand, Clifford analysis, the generalization of
Hamilton’s quaternions, is also expressed in Clifford’s geometric algebras
[1]. The main purpose of this article is to formulate the concept of a nested
coordinate system, a generalization of the well-known methods of orthogonal
coordinate systems to apply to any coordinate system. We restrict ourselves to
the geometric algebra $\mathbb{G}_{3}$ because of its close relationship to
the Gibbs-Heaviside vector calculus [3]. This restriction also draws attention
to the clear advantages of geometric algebra over the latter, because of its
powerful associative algebraic structure.
The idea of a nested rectangular coordinate system arises naturally when
studying properties of polar coordinates in the $2$ and $3$-dimensional
Euclidean vector spaces $\mathbb{R}^{2}$ and $\mathbb{R}^{3}$. We begin by
discussing the relationship between ordinary polar coordinates and the nested
rectangular coordinate system ${\cal N}_{1,2}$, before going on to the higher
dimensional nested coordinate system ${\cal N}_{1,2,3}$ utilized in the
reformulation of cylindrical and spherical coordinates. A detailed discussion
of the geometric algebra $\mathbb{G}_{3}$ is not given here, but results are
often expressed in the closely related well-known Gibbs-Heaviside vector
analysis for the benefit of the reader.
## 1 Polar and nested coordinate systems
Let $\mathbb{G}_{2}:=\mathbb{G}(\mathbb{R}^{2})$ be the geometric algebra of
$2$-dimensional Euclidean space $\mathbb{R}^{2}$. An introductory treatment of
the geometric algebras $\mathbb{G}_{1}$, $\mathbb{G}_{2}$ and $\mathbb{G}_{3}$
is given in [4, 5, 6]. Most important in studying the geometry of the
Euclidean plane is the position vector
$\mathbf{x}:=\mathbf{x}[x,\hat{x}]=x\hat{x}$ (1)
expressed here as a product of its Euclidean magnitude $x$ and its unit
direction, the unit vector $\hat{x}$. In terms of rectangular coordinates
$(x_{1},x_{2})\in\mathbb{R}^{2}$,
$\mathbf{x}=\mathbf{x}[x_{1},x_{2}]=x_{1}e_{1}+x_{2}e_{2},$ (2)
for the orthogonal unit vectors $e_{1},e_{2}$ along the $x_{1}$ and $x_{2}$
axes, respectively. The advantage of our notation is that it immediately
generalizes to $3$ and higher dimensional spaces of arbitrary signature
$(p,q)$ in any of the definite geometric algebras
$\mathbb{G}_{p,q}:=\mathbb{G}(\mathbb{R}^{p,q})$ of a quadratic form.
The vector derivative, or gradient in the Euclidean plane is defined by
$\nabla:=e_{1}\partial_{1}+e_{2}\partial_{2}$ (3)
where $\partial_{1}:=\frac{\partial}{\partial x_{1}}$ and
$\partial_{2}:=\frac{\partial}{\partial x_{2}}$ are partial derivatives [3,
p.105]. Clearly,
$e_{1}=\partial_{1}\mathbf{x}=e_{1}\cdot\nabla\mathbf{x},\ \
e_{2}=\partial_{2}\mathbf{x}=e_{2}\cdot\nabla\mathbf{x}.$
Since $\nabla$ is the usual 2-dimensional gradient, it has the well-known
properties
$\nabla\mathbf{x}=2,\quad{\rm and}\quad\nabla x=\hat{x}.$
With the help of the product rule for differentiation,
$2=\nabla\mathbf{x}=(\nabla
x)\hat{x}+x(\nabla\hat{x})=\hat{x}^{2}+x(\nabla\hat{x}).$ (4)
Since in geometric algebra $\mathbf{x}^{2}=x^{2}$, it follows that
$\hat{x}^{2}=1$, so that for $\mathbf{x}\in\mathbb{R}^{2}$,
$\nabla\hat{x}=\frac{1}{x}\ \ {\rm and}\ \ e_{1}\cdot\nabla
x=e_{1}\cdot\hat{x}=\frac{x_{1}}{x},\ \ e_{2}\cdot\nabla
x=e_{2}\cdot\hat{x}=\frac{x_{2}}{x}.$ (5)
Similarly, $\nabla\hat{x}=\frac{n-1}{x}$ for $\mathbf{x}\in\mathbb{R}^{n}$.
This is the first of many demonstrations of the power of geometric algebra
over standard vector algebra.
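The formula $\nabla\hat{x}=\frac{n-1}{x}$ is easy to confirm numerically: $\hat{x}=\mathbf{x}/x$ is the gradient of $x$, so $\nabla\wedge\hat{x}=0$ and the geometric derivative reduces to the divergence $\nabla\cdot\hat{x}$. A small Python sketch (function names are ours) approximates that divergence by central differences in $n=2,3,4$ dimensions:

```python
import math

def div_unit_field(point, h=1e-6):
    """Central-difference divergence of the unit field x/|x| at `point`."""
    n = len(point)
    total = 0.0
    for i in range(n):
        for s in (1.0, -1.0):
            q = list(point)
            q[i] += s * h
            r = math.sqrt(sum(c * c for c in q))
            total += s * q[i] / r   # i-th component of x/|x| at the shifted point
    return total / (2 * h)

for pt in [(3.0, 4.0), (1.0, 2.0, 2.0), (1.0, 1.0, 1.0, 1.0)]:
    n, r = len(pt), math.sqrt(sum(c * c for c in pt))
    assert abs(div_unit_field(pt) - (n - 1) / r) < 1e-6
print("divergence of the unit field matches (n-1)/x")
```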
By a nested rectangular coordinate system ${\cal
N}_{1,2}(x_{1},x[x_{1},x_{2}])$, we mean
$\mathbf{x}=x\hat{x}=\mathbf{x}[x_{1},x]=\mathbf{x}\big{[}x_{1},x[x_{1},x_{2}]\big{]}.$
The grouping of the variables allows us to consider $x_{1}$ and
$x:=\sqrt{x_{1}^{2}+x_{2}^{2}}$ to be independent. The partial derivatives
with respect to these independent variables are denoted by
$\hat{\partial}_{1}:=\frac{\hat{\partial}}{\partial x_{1}}$ and
$\hat{\partial}_{x}:=\frac{\hat{\partial}}{\partial x}$, the hat on the
partial derivatives indicating the new choice of independent variables.
For polar coordinates $(x,\theta)\in\mathbb{R}^{2}$, for
$x:=\sqrt{x_{1}^{2}+x_{2}^{2}}\geq 0$, $0\leq\theta<2\pi$, and
$\mathbf{x}:=\mathbf{x}[x,\theta]$,
$\mathbf{x}=x\hat{x}[\theta]=x(e_{1}\frac{x_{1}}{x}+e_{2}\frac{x_{2}}{x})=x(e_{1}\cos\theta+e_{2}\sin\theta),$
(6)
where $\cos\theta:=\frac{x_{1}}{x}$ and $\sin\theta:=\frac{x_{2}}{x}$. Using
(5),
$\nabla\hat{x}=\nabla\hat{x}[\theta]=(\nabla\theta)\frac{\partial\hat{x}}{\partial\theta}=\frac{1}{x}\quad\iff\quad\nabla\theta=\frac{1}{x}\frac{\partial\hat{x}}{\partial\theta},\
\nabla^{2}\theta=0,$ (7)
since
$\nabla\hat{x}=(\nabla\theta)\partial_{\theta}(e_{1}\cos\theta+e_{2}\sin\theta)=(\nabla\theta)(-e_{1}\sin\theta+e_{2}\cos\theta),$
and
$\nabla^{2}\theta=-\frac{\hat{x}}{x^{2}}\partial_{\theta}\hat{x}+\frac{1}{x}(\nabla\theta)\partial_{\theta}^{2}\hat{x}=-2\Big{(}\frac{\hat{x}}{x^{2}}\cdot(\partial_{\theta}\hat{x})\Big{)}=0.$
The $\iff$ follows by multiplying both sides of the first equation by the unit
vector $\partial_{\theta}\hat{x}$, which is allowable in geometric algebra.
Note also the use of the famous geometric algebra identity
$2\mathbf{a}\cdot\mathbf{b}=(\mathbf{a}\mathbf{b}+\mathbf{b}\mathbf{a})$ for
vectors $\mathbf{a}$ and $\mathbf{b}$, [4, p.26].
The 2-dimensional gradient $\nabla$,
$\nabla=e_{1}\frac{\partial}{\partial x_{1}}+e_{2}\frac{\partial}{\partial
x_{2}}=e_{1}\partial_{1}+e_{2}\partial_{2}$ (8)
was already defined in (3); the Laplacian $\nabla^{2}$ is given by
$\nabla^{2}=\frac{\partial^{2}}{\partial
x_{1}^{2}}+\frac{\partial^{2}}{\partial
x_{2}^{2}}=\partial_{1}^{2}+\partial_{2}^{2}.$ (9)
In polar coordinates,
$\hat{\nabla}=(\nabla x)\hat{\partial}_{x}+(\nabla\theta)\hat{\partial}_{\theta}=\hat{x}\,\hat{\partial}_{x}+\frac{1}{x}(\hat{\partial}_{\theta}\hat{x})\hat{\partial}_{\theta}$
(10)
for the gradient where
$\hat{\partial}_{\theta}:=\frac{\hat{\partial}}{\partial\theta}$, and since
$\hat{\nabla}^{2}\theta=0$,
$\hat{\nabla}^{2}=\hat{\nabla}\big{(}\hat{x}\,\hat{\partial}_{x}+(\hat{\nabla}\theta)\,\hat{\partial}_{\theta}\big{)}=\Big{(}\hat{\nabla}\hat{x}+\hat{x}\cdot\hat{\nabla}\Big{)}\hat{\partial}_{x}+\Big{(}\hat{\nabla}^{2}\theta+(\hat{\nabla}\theta)\cdot\hat{\nabla}\Big{)}\hat{\partial}_{\theta}$
$=\hat{\partial}_{x}^{2}+\frac{1}{x}\hat{\partial}_{x}+\frac{1}{x^{2}}\hat{\partial}_{\theta}^{2}.$
(11)
for the Laplacian. The decomposition (11) of the Laplacian directly implies
that Laplace's differential equation is separable in polar coordinates.
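As a quick sanity check (not part of the paper, and assuming sympy is available), the polar-separable harmonics $x^{n}\cos n\theta$, which equal $\mathrm{Re}\,(x_{1}+ix_{2})^{n}$ in rectangular coordinates, are annihilated by the Cartesian Laplacian (9):

```python
# Hypothetical verification sketch: the polar-separable functions
# x^n cos(n*theta) = Re[(x1 + i x2)^n] are harmonic under the
# Cartesian Laplacian (9).
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

def laplacian(f):
    # Cartesian Laplacian d^2/dx1^2 + d^2/dx2^2 of equation (9)
    return sp.diff(f, x1, 2) + sp.diff(f, x2, 2)

for n in range(1, 5):
    F = sp.re(sp.expand((x1 + sp.I*x2)**n))  # = x^n cos(n*theta)
    assert sp.simplify(laplacian(F)) == 0
print("x^n cos(n*theta) is harmonic for n = 1, ..., 4")
```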
When expressed in nested rectangular coordinates ${\cal N}_{1,2}(x_{1},x)$,
the gradient $\nabla\equiv\hat{\nabla}$ takes the form
$\hat{\nabla}:=(\nabla x_{1})\frac{\hat{\partial}}{\partial x_{1}}+(\nabla
x)\frac{\hat{\partial}}{\partial
x}=e_{1}\hat{\partial}_{1}+\hat{x}\hat{\partial}_{x}.$ (12)
Dotting equations (8) and (12) on the left by $e_{1}$ and $\hat{x}$ gives the
transformation rules
$\partial_{1}=\hat{\partial}_{1}+\frac{x_{1}}{x}\hat{\partial}_{x},\ \
\hat{x}\cdot\hat{\nabla}=\frac{x_{1}}{x}\hat{\partial}_{1}+\hat{\partial}_{x}=\cos\theta\,\partial_{1}+\sin\theta\,\partial_{2}.$
Using these formulas the nested Laplacian takes the form
$\hat{\nabla}^{2}=\hat{\partial}_{1}^{2}+2\frac{x_{1}}{x}\hat{\partial}_{x}\hat{\partial}_{1}+\frac{1}{x}\hat{\partial}_{x}+\hat{\partial}_{x}^{2}=-\hat{\partial}_{1}^{2}+2\partial_{1}\hat{\partial}_{1}+\frac{1}{x}\hat{\partial}_{x}+\hat{\partial}_{x}^{2}.$
(13)
The unusual feature of the nested Laplacian is that it is defined in terms of
both the ordinary partial derivative $\partial_{1}$ and the nested partial
derivative $\hat{\partial}_{1}$. Whereas partial derivatives generally
commute, partial derivatives of different types do not. For example, it is
easily verified that
$\partial_{1}\hat{\partial}_{1}x_{1}x^{2}=2x_{1},\ \ {\rm whereas}\ \
\hat{\partial}_{1}\partial_{1}x_{1}x^{2}=4x_{1}.$
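The two mixed derivatives above can be checked symbolically (a sketch assuming sympy; here the nested derivative $\hat{\partial}_{1}$ is implemented by re-expressing the function in the coordinates $(x_{1},x)$ and differentiating with $x$ held fixed):

```python
# Hypothetical check of the non-commuting mixed partials of f = x1 * x^2,
# where x^2 = x1^2 + x2^2.  The nested derivative hat-d1 holds x fixed;
# the ordinary derivative d1 holds x2 fixed.
import sympy as sp

x1, x2, x = sp.symbols('x1 x2 x', positive=True)

f_rect = x1 * (x1**2 + x2**2)     # f in ordinary coordinates (x1, x2)
f_nest = x1 * x**2                # f in nested coordinates (x1, x)
to_rect = {x: sp.sqrt(x1**2 + x2**2)}
to_nest = {x2: sp.sqrt(x**2 - x1**2)}

# hat-d1 d1 f: ordinary derivative first, then the nested one
g = sp.diff(f_rect, x1).subs(to_nest)    # re-express in (x1, x)
lhs = sp.simplify(sp.diff(g, x1))        # nested derivative, x held fixed
assert lhs == 4*x1

# d1 hat-d1 f: nested derivative first, then the ordinary one
h = sp.diff(f_nest, x1).subs(to_rect)    # re-express in (x1, x2)
rhs = sp.simplify(sp.diff(h, x1))        # ordinary derivative, x2 held fixed
assert rhs == 2*x1
print("d1 hat-d1 f = 2 x1, but hat-d1 d1 f = 4 x1")
```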
Because the mixed partial derivative $\hat{\partial}_{x}\hat{\partial}_{1}$
occurs in (13), Laplace's differential equation in the nested rectangular
coordinate system ${\cal N}_{1,2}(x_{1},x)$ is not, in general, separable.
Indeed, suppose that a harmonic function $F$ is separable, so that $F=X_{1}X$
for $X_{1}=X_{1}[x_{1}],X=X[x]$. Using (13),
$\frac{\hat{\nabla}^{2}F}{X_{1}X}=\frac{\partial_{1}^{2}X_{1}}{X_{1}}+\frac{\Big{(}\partial_{x}^{2}X+\frac{1}{x}\partial_{x}\Big{)}X}{X}+2\Big{(}\frac{x_{1}\partial_{1}X_{1}}{X_{1}}\Big{)}\Big{(}\frac{\partial_{x}X}{xX}\Big{)}=0.$
(14)
The last term in (14) prevents $F$, in general, from being separable. However,
it is easily checked that $F=k\frac{x_{1}}{x^{2}}$ is harmonic, i.e. a solution
of $\hat{\nabla}^{2}F=0$ with the Laplacian (13). When $X_{1}[x_{1}]=kx_{1}$, one finds that
$\frac{x_{1}\partial_{1}X_{1}}{X_{1}}=1$. Letting $F=kx_{1}X[x]$, and
requiring $\hat{\nabla}^{2}F=0$, leads to the differential equation for
$X[x]$,
$3\partial_{x}X+x\partial_{x}^{2}X=0,$
with the solution $X[x]=c_{1}\frac{1}{x^{2}}+c_{2}$. The simplest example of a
harmonic function $F=X_{1}X$ is when $X_{1}=x_{1}$ and $X=\frac{1}{x^{2}}$. A
graph of this function is shown in Figure 1.
Figure 1: The harmonic $2$-dimensional function
$F=\frac{x_{1}}{x_{1}^{2}+x_{2}^{2}}$ is shown.
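Both claims, that $F=x_{1}/(x_{1}^{2}+x_{2}^{2})$ is harmonic and that $X[x]=1/x^{2}$ solves the radial equation $3\partial_{x}X+x\partial_{x}^{2}X=0$, can be checked with a short sympy sketch (not from the paper):

```python
# Hypothetical check: F = x1/(x1^2 + x2^2) is harmonic, and X = 1/x^2
# solves the radial equation 3 X' + x X'' = 0 derived in the text.
import sympy as sp

x1, x2, x = sp.symbols('x1 x2 x', positive=True)

F = x1 / (x1**2 + x2**2)
lap_F = sp.diff(F, x1, 2) + sp.diff(F, x2, 2)
assert sp.simplify(lap_F) == 0            # F is harmonic

X = 1 / x**2
resid = 3*sp.diff(X, x) + x*sp.diff(X, x, 2)
assert sp.simplify(resid) == 0            # X solves the radial equation
```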
## 2 Special harmonic functions in nested coordinates
Consider the real nested rectangular coordinate system $(x_{1},x_{p},x)$,
defined by
${\cal N}_{1,2,3}:=\\{(x_{1},x_{p},x)|\
\mathbf{x}=x\hat{x}=x_{1}e_{1}+x_{p}\hat{x}_{p}+x\hat{x}\\},$
where $x_{p}=\sqrt{x_{1}^{2}+x_{2}^{2}}\geq 0,\
x=\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}\geq 0$. In nested coordinates, the
gradient $\nabla=e_{1}\partial_{1}+e_{2}\partial_{2}+e_{3}\partial_{3}$ takes
the form
$\hat{\nabla}=(\hat{\nabla}x_{1})\hat{\partial}_{1}+(\hat{\nabla}x_{p})\hat{\partial}_{p}+(\hat{\nabla}x)\hat{\partial}_{x}=e_{1}\hat{\partial}_{1}+\hat{x}_{p}\hat{\partial}_{p}+\hat{x}\hat{\partial}_{x},$
(15)
where $\hat{\partial}_{p}:=\frac{\hat{\partial}}{\partial x_{p}}$. Formulas
relating the gradients $\nabla$ and $\hat{\nabla}$ easily follow:
$\partial_{1}=\hat{\partial}_{1}+e_{1}\cdot\hat{x}_{p}\,\hat{\partial}_{p}+e_{1}\cdot\,\hat{x}\,\hat{\partial}_{x}=\hat{\partial}_{1}+\frac{x_{1}}{x_{p}}\,\hat{\partial}_{p}+\frac{x_{1}}{x}\,\hat{\partial}_{x}$
(16)
$\partial_{2}=e_{2}\cdot\hat{x}_{p}\,\hat{\partial}_{p}+e_{2}\cdot\,\hat{x}\,\hat{\partial}_{x}=\frac{x_{2}}{x_{p}}\,\hat{\partial}_{p}+\frac{x_{2}}{x}\,\hat{\partial}_{x}$
(17)
and
$\partial_{3}=e_{3}\cdot\,\hat{x}\,\hat{\partial}_{x}=\frac{x_{3}}{x}\,\hat{\partial}_{x}.$
(18)
For the Laplacian $\nabla^{2}$ in nested coordinates, with the help of (15),
$\hat{\nabla}^{2}=\hat{\nabla}(e_{1}\hat{\partial}_{1}+\hat{x}_{p}\hat{\partial}_{p}+\hat{x}\hat{\partial}_{x})=e_{1}\cdot\hat{\nabla}\,\hat{\partial}_{1}+\hat{\nabla}\cdot\hat{x}_{p}\,\hat{\partial}_{p}+\hat{\nabla}\cdot\hat{x}\,\hat{\partial}_{x}$
$=\Big{(}\hat{\partial}_{1}+\frac{x_{1}}{x_{p}}\,\hat{\partial}_{p}+\frac{x_{1}}{x}\,\hat{\partial}_{x}\Big{)}\hat{\partial}_{1}+\Big{(}\frac{1}{x_{p}}+\frac{x_{1}}{x_{p}}\hat{\partial}_{1}+\hat{\partial}_{p}+\frac{x_{p}}{x}\hat{\partial}_{x}\Big{)}\hat{\partial}_{p}$
$+\Big{(}\frac{2}{x}+\frac{x_{1}}{x}\hat{\partial}_{1}+\frac{x_{p}}{x}\hat{\partial}_{p}+\hat{\partial}_{x}\Big{)}\hat{\partial}_{x}$
$=\hat{\partial}_{1}^{2}+\hat{\partial}_{p}^{2}+\hat{\partial}_{x}^{2}+2\bigg{(}\frac{x_{1}}{x_{p}}\hat{\partial}_{1}\hat{\partial}_{p}+\frac{x_{1}}{x}\hat{\partial}_{1}\hat{\partial}_{x}+\frac{x_{p}}{x}\hat{\partial}_{p}\hat{\partial}_{x}\bigg{)}+\frac{1}{x_{p}}\hat{\partial}_{p}+\frac{2}{x}\hat{\partial}_{x}.$
(19)
Another expression for the Laplacian in mixed coordinates is obtained with the
help of (16),
$\hat{\nabla}^{2}=-\hat{\partial}_{1}^{2}+\hat{\partial}_{p}^{2}+\hat{\partial}_{x}^{2}+2\bigg{(}\partial_{1}\hat{\partial}_{1}+\frac{x_{p}}{x}\hat{\partial}_{p}\hat{\partial}_{x}\bigg{)}+\frac{1}{x_{p}}\hat{\partial}_{p}+\frac{2}{x}\hat{\partial}_{x}.$
(20)
Suppose $F=F[x_{1},x_{p},x]$. In order for $F$ to be harmonic,
$\hat{\nabla}^{2}F=0$. Assuming that $F$ is separable,
$F=X_{1}[x_{1}]X_{p}[x_{p}]X_{x}[x]$, and applying the Laplacian (20) to $F$
gives
$\hat{\nabla}^{2}F=(\hat{\partial}_{1}^{2}X_{1})X_{p}X_{x}+X_{1}\Big{(}\big{(}\hat{\partial}_{p}^{2}+\frac{1}{x_{p}}\hat{\partial}_{p}\big{)}X_{p}\Big{)}X_{x}+X_{1}X_{p}\Big{(}\frac{2}{x}\hat{\partial}_{x}X_{x}\Big{)}$
$+2\bigg{(}\big{(}x_{p}\partial_{p}X_{p}\big{)}\big{(}\frac{1}{x}\hat{\partial}_{x}X_{x}\big{)}X_{1}+\big{(}\partial_{1}X_{p}\big{)}X_{x}+X_{p}\big{(}\partial_{1}X_{x}\big{)}\bigg{)}.$
(21)
We now calculate the interesting expression
$\frac{\big{(}x_{p}\partial_{p}X_{p}\big{)}\big{(}\frac{1}{x}\hat{\partial}_{x}X_{x}\big{)}X_{1}+\big{(}\partial_{1}X_{p}\big{)}X_{x}+X_{p}\big{(}\partial_{1}X_{x}\big{)}}{X_{1}X_{p}X_{x}}$
$=\Big{(}x_{p}\big{(}\partial_{p}\log
X_{p}\big{)}\Big{)}\Big{(}\big{(}\frac{1}{x}\partial_{x}\log
X_{x}\big{)}\Big{)}+\frac{\partial_{1}\log(X_{p}X_{x})}{X_{1}}.$
In general, because of the last term in (21), a function $F=X_{1}X_{p}X_{x}$
will not be separable. However, just as in the two dimensional case, there are
$3$-dimensional harmonic solutions of the form $F=x_{1}^{k}x_{p}^{m}x^{n}$.
Taking the Laplacian (19) of $F$, with the help of [7], gives
$\hat{\nabla}^{2}F=(2km+m^{2})x^{n}x_{1}^{k}x_{p}^{m-2}+(-k+k^{2})x^{n}x_{1}^{k-2}x_{p}^{m}$
$+(2kn+2mn+n(1+n))x^{n-2}x_{1}^{k}x_{p}^{m}=0.$
This last expression vanishes when the system of three equations
$\\{2km+m^{2}=0,\ \ -k+k^{2}=0,\ \ {\rm and}\ \ 2kn+2mn+n(1+n)=0\\}$
is satisfied.
All of the distinct non-trivial harmonic solutions $F=x_{1}^{k}x_{p}^{m}x^{n}$
are listed in the following table:
k | m | n
---|---|---
1 | 0 | 0
0 | 0 | -1
1 | -2 | 0
1 | 0 | -3
1 | -2 | 1
(22)
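Each row of the table can be verified directly by applying the ordinary 3-dimensional Laplacian to $F=x_{1}^{k}x_{p}^{m}x^{n}$ written out in rectangular coordinates (a sympy sketch, not from the paper):

```python
# Hypothetical verification of table (22): each monomial F = x1^k xp^m x^n
# is harmonic under the ordinary 3-dimensional Laplacian.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
xp = sp.sqrt(x1**2 + x2**2)            # x_p = sqrt(x1^2 + x2^2)
x = sp.sqrt(x1**2 + x2**2 + x3**2)     # x   = sqrt(x1^2 + x2^2 + x3^2)

def lap3(f):
    # Cartesian Laplacian in 3 dimensions
    return sum(sp.diff(f, v, 2) for v in (x1, x2, x3))

# (k, m, n) rows of table (22)
for k, m, n in [(1, 0, 0), (0, 0, -1), (1, -2, 0), (1, 0, -3), (1, -2, 1)]:
    F = x1**k * xp**m * x**n
    assert sp.simplify(lap3(F)) == 0
print("all five monomials in table (22) are harmonic")
```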
## 3 Cylindrical and spherical coordinates
Cylindrical and spherical coordinates are examples of nested coordinates
${\cal N}_{1,2}(\mathbb{R})$, and ${\cal N}_{2,3}(\mathbb{R})$, respectively.
For the first,
$\mathbf{x}=\mathbf{x}[x_{p},\theta,x_{3}]=\mathbf{x}_{p}[x_{p},\theta]+\mathbf{x}_{3}[x_{3}],$
(23)
where $\mathbf{x}_{p}=x_{p}\hat{x}_{p}[\theta]$,
$x_{p}=\sqrt{x_{1}^{2}+x_{2}^{2}}$, and $\mathbf{x}_{3}=x_{3}e_{3}$.
Cylindrical coordinates
$(x_{p},\theta,x_{3})\in\mathbb{R}^{3}=\mathbb{R}^{2}\times\mathbb{R}^{1}$
correspond to a decomposition of $\mathbb{R}^{3}$ into the polar coordinates
$(x_{p},\theta)\in\mathbb{R}^{2}$, already studied in Section 1, and
$x_{3}\in\mathbb{R}^{1}$. For spherical coordinates,
$\mathbf{x}_{p}=x_{p}\hat{x}_{p}[\theta]$ is the same as in cylindrical and
polar coordinates, and
$\mathbf{x}=\mathbf{x}[x,\theta,\varphi]=x\hat{x}[\theta,\varphi]=x\Big{(}e_{3}\cos\varphi+\hat{x}_{p}[\theta]\sin\varphi\Big{)},$
(24)
where
$x=\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}},\
\hat{x}[\theta,\varphi]=e_{3}\cos\varphi+\hat{x}_{p}[\theta]\sin\varphi,\
\hat{x}_{p}[\theta]=e_{1}\cos\theta+e_{2}\sin\theta.$
The basic quantities that define both cylindrical and spherical coordinates
are shown in Figure 2.
Figure 2: For cylindrical coordinates,
$\mathbf{x}=x_{p}\hat{x}_{p}[\theta]+x_{3}e_{3}$. For spherical coordinates,
$\mathbf{x}=x(e_{3}\cos\varphi+\hat{x}_{p}[\theta]\sin\varphi)$.
The gradient $\hat{\nabla}$ and Laplacian $\hat{\nabla}^{2}$ for cylindrical
coordinates are easily calculated. With the help of (7), (10), and (11),
$\hat{\nabla}=(\hat{\nabla}x_{p})\hat{\partial}_{p}+(\hat{\nabla}\theta)\hat{\partial}_{\theta}+(\hat{\nabla}x_{3})\hat{\partial}_{3}$
for the cylindrical gradient, and
$\hat{\nabla}^{2}=\hat{\nabla}\Big{(}\hat{x}_{p}\,\hat{\partial}_{p}+(\hat{\nabla}\theta)\,\hat{\partial}_{\theta}+e_{3}\hat{\partial}_{3}\Big{)}=\hat{\partial}_{p}^{2}+\frac{1}{x_{p}}\hat{\partial}_{p}+\frac{1}{x_{p}^{2}}\hat{\partial}_{\theta}^{2}+\hat{\partial}_{3}^{2}$
(25)
for the cylindrical Laplacian. Letting
$F[\mathbf{x}]=X_{p}[x_{p}]X_{\theta}[\theta]X_{3}[x_{3}]$, the resulting equation
is easily separated and solved by standard methods, resulting in three second
order differential equations with solutions,
$X_{p}[x_{p}]=k_{1}J_{n}[\beta x_{p}]+k_{2}Y_{n}[\beta x_{p}],$
$X_{\theta}[\theta]=k_{3}\cos n\theta+k_{4}\sin n\theta$
$X_{3}[x_{3}]=k_{5}\cosh(\alpha(m-x_{3}))+k_{6}\sinh(\alpha(m-x_{3})),$
where $J_{n}$ and $Y_{n}$ are Bessel functions of the first and second kind.
The constants are determined by the various boundary conditions that must be
satisfied in different applications [8, p.254].
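The three separated factors can be checked against their defining ordinary differential equations; the angular and axial factors symbolically, and the Bessel factor numerically at a sample point (a sympy sketch with illustrative parameter values, not from the paper):

```python
# Hypothetical check of the three separated cylindrical solutions.
import sympy as sp

t, theta, x3 = sp.symbols('t theta x3', positive=True)
n, alpha, m = sp.symbols('n alpha m', positive=True)

# angular equation: X_theta'' + n^2 X_theta = 0
Xt = sp.cos(n*theta)
assert sp.simplify(sp.diff(Xt, theta, 2) + n**2*Xt) == 0

# axial equation: X_3'' - alpha^2 X_3 = 0
X3 = sp.cosh(alpha*(m - x3))
assert sp.simplify(sp.diff(X3, x3, 2) - alpha**2*X3) == 0

# radial (Bessel) equation t^2 y'' + t y' + (beta^2 t^2 - n^2) y = 0,
# checked numerically for the sample choice n = 1, beta = 2
Xp = sp.besselj(1, 2*t)
residual = t**2*sp.diff(Xp, t, 2) + t*sp.diff(Xp, t) + (4*t**2 - 1)*Xp
assert abs(float(residual.subs(t, sp.Rational(3, 2)).evalf())) < 1e-12
```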
Turning to spherical coordinates $(x,\theta,\varphi)\in\mathbb{R}^{3}$, the
spherical gradient
$\hat{\nabla}=(\hat{\nabla}x)\hat{\partial}_{x}+(\hat{\nabla}\theta)\hat{\partial}_{\theta}+(\hat{\nabla}\varphi)\hat{\partial}_{\varphi}=\hat{x}\hat{\partial}_{x}+\frac{1}{x_{p}}(\hat{\partial}_{\theta}\hat{x}_{p})\hat{\partial}_{\theta}+\frac{1}{x}(\hat{\partial}_{\varphi}\hat{x})\hat{\partial}_{\varphi},$
(26)
where from previous calculations for polar and cylindrical coordinates,
$\hat{\nabla}\theta=\frac{1}{x_{p}}(\hat{\partial}_{\theta}\hat{x}_{p}),\
(\hat{\nabla}\theta)^{2}=\frac{1}{x_{p}^{2}},\ \ \hat{\nabla}^{2}\theta=0,\ \
\hat{\nabla}\varphi=\frac{1}{x}\hat{\partial}_{\varphi}\hat{x},\ \
(\hat{\nabla}\varphi)^{2}=\frac{1}{x^{2}}.$ (27)
Furthermore, since
$\hat{x}=\hat{x}[\theta,\varphi]=e_{3}\cos\varphi+\hat{x}_{p}[\theta]\sin\varphi$,
$\frac{2}{x}=\hat{\nabla}\hat{x}=(\hat{\nabla}\theta)(\hat{\partial}_{\theta}\hat{x})+(\hat{\nabla}\varphi)(\hat{\partial}_{\varphi}\hat{x})=\frac{1}{x_{p}}(\hat{\partial}_{\theta}\hat{x}_{p})(\hat{\partial}_{\theta}\hat{x})+\frac{1}{x},$
it follows that
$(\hat{\partial}_{\theta}\hat{x}_{p})(\hat{\partial}_{\theta}\hat{x})=\frac{x_{p}}{x}=\sin\varphi,\
\ {\rm and}\ \ \hat{\nabla}^{2}\varphi=\frac{x_{3}}{x^{2}x_{p}}.$
That $\hat{\nabla}^{2}\varphi=\frac{x_{3}}{x^{2}x_{p}}$ follows using (26) and
(27),
$\hat{\nabla}^{2}\varphi=\hat{\nabla}\big{(}\frac{1}{x}\hat{\partial}_{\varphi}\hat{x}\big{)}=\Big{(}-\frac{\hat{x}}{x^{2}}+\frac{1}{x}\hat{\nabla}\Big{)}\hat{\partial}_{\varphi}\hat{x}$
$=-\frac{\hat{x}}{x^{2}}\hat{\partial}_{\varphi}\hat{x}+\frac{1}{x}\Big{(}(\hat{\nabla}x)\hat{\partial}_{x}\hat{\partial}_{\varphi}\hat{x}+(\hat{\nabla}\theta)\hat{\partial}_{\theta}\hat{\partial}_{\varphi}\hat{x}+(\hat{\nabla}\varphi)\hat{\partial}_{\varphi}^{2}\hat{x}\Big{)}$
$=-\Big{(}\frac{\hat{x}}{x^{2}}\hat{\partial}_{\varphi}\hat{x}+\frac{1}{x^{2}}(\hat{\partial}_{\varphi}\hat{x})\hat{x}\Big{)}+\frac{1}{xx_{p}}(\hat{\partial}_{\theta}\hat{x}_{p})(\hat{\partial}_{\varphi}\hat{\partial}_{\theta}\hat{x})=\frac{x_{3}}{x^{2}x_{p}},$
since partial derivatives commute, $\hat{\partial}_{x}\hat{x}=0$, and
$\hat{\partial}_{\varphi}^{2}\hat{x}=-\hat{x}$.
For the spherical Laplacian, using (26) and (27),
$\hat{\nabla}^{2}=\hat{\nabla}\Big{(}\hat{x}\hat{\partial}_{x}+(\hat{\nabla}\theta)\hat{\partial}_{\theta}+(\hat{\nabla}\varphi)\hat{\partial}_{\varphi}\Big{)}$
$=\Big{(}\frac{2}{x}+\hat{\partial}_{x}\Big{)}\hat{\partial}_{x}+(\hat{\nabla}\theta)\cdot\hat{\nabla}\hat{\partial}_{\theta}+\Big{(}\hat{\nabla}^{2}\varphi+(\hat{\nabla}\varphi)\cdot\hat{\nabla}\Big{)}\hat{\partial}_{\varphi}$
$=\Big{(}\hat{\partial}_{x}+\frac{2}{x}\Big{)}\hat{\partial}_{x}+\frac{1}{x_{p}^{2}}\hat{\partial}_{\theta}^{2}+\Big{(}\frac{x_{3}}{x^{2}x_{p}}+\frac{1}{x^{2}}\hat{\partial}_{\varphi}\Big{)}\hat{\partial}_{\varphi},$
equivalent to the usual expression for the Laplacian in spherical coordinates
[8, p.256].
Just as in cylindrical coordinates, the solution of Laplace’s equation in
spherical coordinates is separable,
$F=X_{x}[x]X_{\theta}[\theta]X_{\varphi}[\varphi]$, resulting in three second
order differential equations with solutions
$X_{x}[x]=k_{1}x^{\beta}+k_{2}x^{-(\beta+1)},$ $X_{\theta}[\theta]=k_{3}\cos
n\theta+k_{4}\sin n\theta,$
$X_{\varphi}[\varphi]=k_{5}P_{n}^{m}(\cos\varphi)+k_{6}Q_{n}^{m}(\cos\varphi),$
where $P_{n}^{m}$ and $Q_{n}^{m}$ are the Legendre functions of the first and
second kind, respectively [8, p.258].
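The radial factor can be verified against the Euler equation $x^{2}X^{\prime\prime}+2xX^{\prime}-\beta(\beta+1)X=0$ that arises in the spherical separation (a sympy sketch, not from the paper; the equation's form is the standard one for the spherical radial factor):

```python
# Hypothetical check: both x^beta and x^{-(beta+1)} solve the spherical
# radial equation x^2 X'' + 2 x X' - beta(beta+1) X = 0.
import sympy as sp

x, beta = sp.symbols('x beta', positive=True)

for X in (x**beta, x**(-(beta + 1))):
    resid = x**2*sp.diff(X, x, 2) + 2*x*sp.diff(X, x) - beta*(beta + 1)*X
    assert sp.simplify(resid) == 0
print("x^beta and x^{-(beta+1)} both solve the radial equation")
```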
## Acknowledgment
This work was largely inspired by a current project that the author has with
Professor Joao Morais of Instituto Tecnológico Autónomo de México, utilizing
spheroidal coordinate systems. The struggle with this orthogonal coordinate
system [2], led the author to re-examine the foundations of general coordinate
systems in geometric algebra [5, p.63].
## References
* [1] R. Ablamowicz, G. Sobczyk, Editors: Lectures on Clifford (Geometric) Algebras and Applications, Birkhäuser, Boston 2003.
* [2] E. Hobson, 1931, The theory of spherical and ellipsoidal harmonics, Cambridge.
* [3] J.E. Marsden, A.J. Tromba, Vector Calculus 2nd Ed., Freeman and Company, San Francisco 1980.
* [4] G. Sobczyk, Matrix Gateway to Geometric Algebra, Spacetime and Spinors, Independent Publisher November 2019. https://www.garretstar.com
* [5] G. Sobczyk, New Foundations in Mathematics: The Geometric Concept of Number, Birkhäuser, New York 2013.
* [6] G. Sobczyk. Many early versions of my work can be found on arXiv, or on my website: https://www.garretstar.com
* [7] S. Wolfram, Mathematica.
* [8] Tyn Myint-U, Partial Differential Equations of Mathematical Physics 2nd Ed., North Holland, NY 1980.
Towards Understanding the Behaviors of
Optimal Deep Active Learning Algorithms
Yilun Zhou∗ Adithya Renduchintala† Xian Li†
Sida Wang† Yashar Mehdad† Asish Ghoshal†
∗MIT CSAIL †Facebook AI
###### Abstract
Active learning (AL) algorithms may achieve better performance with less data
because the model guides the data selection process. While many algorithms
have been proposed, there is little study on what the optimal AL algorithm
looks like, which would help researchers understand where their models fall
short and iterate on the design. In this paper, we present a simulated
annealing algorithm to search for this optimal oracle and analyze it for
several tasks. We present qualitative and quantitative insights into the
behaviors of this oracle, comparing and contrasting them with those of various
heuristics. Moreover, we are able to consistently improve the heuristics using
one particular insight. We hope that our findings can better inform future
active learning research. The code is available at
https://github.com/YilunZhou/optimal-active-learning.
## 1 Introduction
Training deep models typically requires a large dataset, which limits their
usefulness in domains where expensive expert annotation is required.
Traditionally, active learning (AL), in which the model selects the data
points to annotate and learn from, is often presented as a more sample
efficient alternative to standard supervised learning (Settles,, 2009).
However, the gain of AL with deep models is less consistent. For example, the
best method seems to depend on the task in an unpredictable manner (Lowell et
al.,, 2019).
Is active learning still useful in the era of deep learning? Although many
issues have been identified, it is not clear whether those problems only plague
current AL methods or also apply to methods developed in the future. Moreover,
the existing literature lacks a comparison of proposed methods to the oracle
upper bound. Such a comparison is helpful for debugging machine
learning models in many settings. For example, the classification confusion
matrix may reveal the inability of the model to learn a particular class, and
a comparison between a sub-optimal RL policy and the human expert play may
indicate a lack of exploration. By contrast, without such an oracle reference
in AL, it is extremely difficult, if at all possible, to pinpoint the
inadequacy of an AL method and improve it.
In this paper, we propose a simulated annealing algorithm to search for the
optimal AL strategy for a given base learner. With practical computational
resources, this procedure is able to find an oracle that significantly
outperforms existing heuristics (+7.53% over random orders, compared to +1.49%
achieved by the best heuristics on average across three vision and language
tasks), for models both trained from scratch and finetuned from pre-training,
definitively asserting the usefulness of a high-performing AL algorithm in
most scenarios (Sec. 6.1). We also present the following insights into its
behaviors.
While many papers do not explicitly state whether and how training
stochasticity (e.g. model initialization or dropout) is controlled across
iterations, we show that training stochasticity tends to negatively affect the
oracle performance (Sec. 6.2).
Previous work (Lowell et al.,, 2019) has found that for several heuristic
methods, the actively acquired dataset does not transfer across different
architectures. We observed a lesser extent of this phenomenon, but more
importantly, the oracle transfers better than heuristics (Sec. 6.3).
It may seem reasonable that a high-performing AL algorithm should exhibit a
non-uniform sampling behavior (e.g. focusing more on harder-to-learn regions).
However, the oracle mostly preserves data distributional properties (Sec.
6.4).
Finally, using the previous insight, heuristics can on average be improved by
2.95% with a simple distribution-matching regularization (Sec. 6.5).
## 2 Related Work
Active learning (Settles,, 2009) has been studied for a long time. At the core
of an active learning algorithm is the acquisition function, which generates
or selects new data points to label at each iteration. Several different types
of heuristics have been proposed, such as those based on uncertainty (Kapoor
et al.,, 2007), disagreement (Houlsby et al.,, 2011), diversity (Xu et al.,,
2007), and expected error reduction (Roy and McCallum,, 2001). In addition,
several recent studies have focused on meta-active learning, i.e. learning a
good acquisition function on a source task and transferring it to a target task
(Konyushkova et al.,, 2017; Fang et al.,, 2017; Woodward and Finn,, 2017;
Bachman et al.,, 2017; Contardo et al.,, 2017; Pang et al.,, 2018; Konyushkova
et al.,, 2018; Liu et al.,, 2018; Vu et al.,, 2019).
With neural network models, active learning has been applied to computer
vision (CV) (e.g. Gal et al.,, 2017; Kirsch et al.,, 2019; Sinha et al.,,
2019) and natural language processing (NLP) (e.g. Shen et al.,, 2018; Fang et
al.,, 2017; Liu et al.,, 2018; Kasai et al.,, 2019) tasks. However, across
different tasks, it appears that the relative performance of different methods
can vary widely: a method can be the best on one task while struggling to even
outperform the random baseline on another, with little explanation given. We
compile a meta-analysis in App. A. Moreover, Lowell et al., (2019) found that
the data acquisition order does not transfer well across architectures, a
particularly important issue during deployment when it is expected that the
acquired dataset will be used for future model development (i.e. datasets
outliving models).
The closest work to ours is done by Koshorek et al., (2019), who studied the
active learning limit. There are numerous differences. First, we analyze
several CV and NLP tasks, while they focused on semantic role labeling (SRL)
in NLP. Second, we explicitly account for training stochasticity, shown to be
important in Sec. 6.2, but they ignored it. Third, our global simulated
annealing search is able to find significant gaps between the upper limit and
existing heuristics while their local beam search failed to find such a gap
(though on a different task). We additionally show how to improve upon
heuristics with our insights.
## 3 Problem Formulation
### 3.1 Active Learning Loop
Let the input and output space be denoted by $\mathcal{X}$ and $\mathcal{Y}$
respectively, where $\mathcal{Y}$ is a finite set. Let $\mathbb{P}_{XY}$ be
the data distribution over $\mathcal{X}\times\mathcal{Y}$. We consider multi-
class classification where a model
$m_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}$, parameterized by $\theta$,
maps an input to an output.
We study pool-based batch-mode active learning where we assume access to an
unlabeled data pool $\mathcal{D}^{U}$ drawn from the marginal distribution
$\mathbb{P}_{X}$ over $\mathcal{X}$.
Starting with a labeled warm-start set
$\mathcal{D}^{L}_{0}\sim\mathbb{P}_{XY}$, an AL loop builds a sequence of
models $(\theta_{0},\ldots,\theta_{K})$ and a sequence of labeled datasets
$(\mathcal{D}^{L}_{1},\ldots,\mathcal{D}^{L}_{K})$ in an interleaving manner.
At the $k$-th iteration, for $k=0,\ldots,K$, a trainer $\eta$ takes in
$\mathcal{D}^{L}_{k}$ and produces the model parameters $\theta_{k}$ by
minimizing some loss function on $\mathcal{D}^{L}_{k}$. In deep learning, the
model training is typically stochastic (due to, for example, random
initialization and dropout masking) and $\theta_{k}$ is a random variable. We
assume that all such stochasticity is captured in $\xi$ such that
$\theta_{k}=\eta(\mathcal{D}^{L}_{k},\xi)$ is deterministic.
Using the trained model $m_{\theta_{k}}$ and the current labeled set
$\mathcal{D}^{L}_{k}$, an acquisition function $\mathcal{A}$ builds the
dataset $\mathcal{D}^{L}_{k+1}$ by selecting a batch of $B$ data points from
the unlabeled pool, $\Delta\mathcal{D}_{k+1}\subseteq\mathcal{D}^{U}$,
querying the annotator for labels, and adding them to $\mathcal{D}^{L}_{k}$;
i.e.
$\Delta\mathcal{D}_{k+1}=\mathcal{A}(\theta_{k},\mathcal{D}^{L}_{k});\mathcal{D}^{L}_{k+1}=\mathcal{D}^{L}_{k}\cup\Delta\mathcal{D}_{k+1}$.
(Some acquisition functions such as BALD are stochastic. For simplicity, we
discuss the case of deterministic $\mathcal{A}$ here. Extension of our claims
to the stochastic case is straightforward and presented in App. B.) The model training
querying labels for $KB$ data points, followed by training $m_{\theta_{K}}$
for a final time.
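The loop above can be sketched in a few lines of Python. The trainer, acquisition function, and annotator below are hypothetical stand-ins, not the paper's implementation:

```python
# Minimal sketch of the pool-based batch-mode AL loop of this section.
# All callables are illustrative placeholders.
import random

def al_loop(pool, warm_start, train, acquire, label, B, K, seed=0):
    """pool: unlabeled inputs D^U; warm_start: labeled pairs D^L_0;
    train(labeled, rng) -> theta; acquire(theta, labeled, pool, B) -> batch;
    label(x) -> y (the annotator)."""
    rng = random.Random(seed)        # all training stochasticity xi lives here
    pool, labeled = list(pool), list(warm_start)
    models = [train(labeled, rng)]   # theta_0
    for k in range(K):
        batch = acquire(models[-1], labeled, pool, B)   # Delta D_{k+1}
        for x in batch:
            pool.remove(x)
            labeled.append((x, label(x)))               # query the annotator
        models.append(train(labeled, rng))              # theta_{k+1}
    return models, labeled

# Toy instantiation: the 'model' is just the positive-label rate seen so far.
train = lambda labeled, rng: sum(y for _, y in labeled) / max(len(labeled), 1)
acquire = lambda theta, labeled, pool, B: pool[:B]      # trivial heuristic
models, labeled = al_loop(range(-5, 5), [(9, 1)], train, acquire,
                          lambda x: int(x > 0), B=2, K=3)
assert len(models) == 4 and len(labeled) == 7   # K+1 models, |D^L_0| + K*B labels
```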
### 3.2 Performance Curve
Given a specific draw of $\xi$, we measure the performance of an acquisition
function $\mathcal{A}$ by its _performance curve_
$\tau_{\mathcal{A},\xi}:\\{1,\ldots,K\\}\rightarrow[0,1]$ defined below:
$\displaystyle\tau_{\mathcal{A},\xi}(k)$
$\displaystyle=\mathbb{E}_{x,y\sim\mathbb{P}_{XY}}\left[e(m_{\theta_{k}}(x),y)\right],$
where $e:\mathcal{Y}\times\mathcal{Y}\rightarrow[0,1]$ is the evaluation
metric (e.g. accuracy), $\theta_{k}=\eta(\mathcal{D}^{L}_{k},\xi)$, and
$\mathcal{D}^{L}_{k}=\mathcal{D}^{L}_{k-1}\cup\mathcal{A}(\theta_{k-1},\mathcal{D}^{L}_{k-1})$.
###### Definition 1 (Anytime optimality).
An acquisition function $\mathcal{A}$ is $\xi$-anytime optimal if it uniformly
dominates every other acquisition function $\mathcal{A}^{\prime}$ as measured
by $\tau_{\cdot,\xi}$; i.e.,
$\tau_{\mathcal{A},\xi}(k)\geq\tau_{\mathcal{A}^{\prime},\xi}(k)$ $\forall
k\in\\{1,\ldots,K\\}$ and $\forall\mathcal{A}^{\prime}\neq\mathcal{A}$.
###### Proposition 1.
There exist data distribution $\mathbb{P}_{XY}$ and model class $m_{\theta}$
for which an anytime optimal acquisition function does not exist.
###### Proof Sketch.
Fig. 1 shows a counter-example. In Fig. 1(a), we have an underlying
distribution $\mathbb{P}_{XY}$ shown as the colored background, and four
points drawn from the distribution. If we learn a max-margin linear classifier
from the four points, we can uncover the ground-truth decision boundary. If we
acquire two data points, the optimal combination is shown in Fig. 1(b),
resulting in a slightly wrong decision boundary. Choosing a different blue
point would result in a much worse decision boundary (Fig. 1(c)). However, the
optimal three-point acquisition (Fig. 1(d)), which leads to the ground-truth
decision boundary, does not contain the previously optimal blue point in Fig.
1(b). Thus, there is no acquisition function simultaneously optimal at both
two and three data points.
Figure 1: The counter example to prove Prop. 1.
∎
###### Remark.
In most AL papers, the authors demonstrate the superiority of their proposed
method by showing its performance curve visually “above” those of the
baselines. However, Prop. 1 shows that such an anytime optimal acquisition
function may not exist.
### 3.3 $\xi$-Quality
To make different acquisition functions comparable, we propose the following
average performance metric, which we will refer to as _$\xi$ -quality_:
$\displaystyle
q_{\xi}(\mathcal{A})\overset{\mathrm{def}}{=}\frac{1}{K}\sum_{k=1}^{K}\tau_{\mathcal{A},\xi}(k).$
We refer to the acquisition strategy that maximizes this $\xi$-quality as
$\xi$-optimal strategy, denoted as $\mathcal{A}_{\xi}^{*}$. Obviously, the
$\xi$-anytime optimal acquisition function, if it exists, will also be
$\xi$-optimal.
While we could define the quality simply as the end point performance,
$\tau_{\mathcal{A},\xi}(K)$, doing so fails to distinguish acquisition
functions that lead to a fast initial performance rise from the rest,
potentially incurring more annotation cost than necessary.
There are two interpretations to the above quality definition. First, it is
the right Riemann sum approximation of the area under curve (AUC) of
$\tau_{\mathcal{A},\xi}$, which correlates with intuitive notions of
optimality for acquisition functions. Second, the un-averaged version of our
definition can also be interpreted as the undiscounted cumulative reward of an
acquisition policy over $K$ steps where the per-step reward is the resulting
model performance following data acquisition. Thus, the optimal policy
implicitly trades off between immediate ($k=1$) and future ($k\rightarrow K$)
model performances.
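The definition is a one-liner in code; the toy curves below (illustrative numbers, not from the paper) show why it distinguishes runs that the end-point metric cannot:

```python
# Hypothetical helper: the xi-quality of a run is the average of its
# performance curve tau(1..K), i.e. the right Riemann-sum AUC divided by K.
def xi_quality(tau):
    """tau: list [tau(1), ..., tau(K)] of per-iteration model performance."""
    return sum(tau) / len(tau)

# Two runs with the same end-point performance but different qualities:
fast_rise = [0.80, 0.85, 0.88, 0.90]   # preferred: quick initial gains
slow_rise = [0.50, 0.60, 0.75, 0.90]
assert xi_quality(fast_rise) > xi_quality(slow_rise)
```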
### 3.4 Optimal Data Labeling Order
We index data points in $\mathcal{D}^{U}$ by $\mathcal{D}^{U}_{i}$ and use
$y\left(\mathcal{D}^{U}_{i}\right)$ to denote the label. Since an acquisition
function generates a sequence of labeled datasets
$\mathcal{D}^{L}_{1}\subset\ldots\subset\mathcal{D}^{L}_{K}$, it is
equivalent to a partial permutation order $\sigma$ of $KB$ indices
of ${\mathcal{D}^{U}}$ with
$\mathcal{D}^{L}_{k}=\mathcal{D}^{L}_{0}\cup\\{\left(\mathcal{D}^{U}_{\sigma_{i}},y\left(\mathcal{D}^{U}_{\sigma_{i}}\right)\right)\\}_{i\in[kB]}$.
Thus, the problem of searching for $\mathcal{A}_{\xi}^{*}$ reduces to that of
searching for the $\xi$-optimal order $\sigma_{\xi}^{*}$, which also relieves
us from explicitly considering different forms of acquisition functions.
Moreover, a direct implication of the above definitions is that the optimal
order depends on $\xi$; i.e. $q_{\xi}(\sigma_{\xi}^{*})\geq
q_{\xi}(\sigma_{\xi^{\prime}}^{*})$, for $\xi\neq\xi^{\prime}$. As we
experimentally demonstrate in Sec. 6.2, such a gap does exist. Since
stochasticity in model training is completely controllable, an acquisition
function that approaches the optimal limit may need to explicitly take such
randomness into account.
Task | Type | Dataset | $|\mathcal{D}^{U}|,|\mathcal{D}^{L}_{0}|,|\mathcal{D}^{M}|,|\mathcal{D}^{V}|,|\mathcal{D}^{T}|$ | $B,\,\,K$ | Metric | Architecture | Heuristics
---|---|---|---|---|---|---|---
OC | class. | Fashion-MNIST | 2000, 50, 150, 4000, 4000 | 25, 12 | Acc | CNN | Max-Ent., (Batch)BALD
IC | class. | TOPv2 (alarm) | 800, 40, 100, 4000, 4000 | 20, 8 | F1 | LSTM, CNN, AOE, RoBERTa | Max-Ent., BALD
NER | tagging | MIT Restaurant | 1000, 50, 200, 3000, 3000 | 25, 10 | F1 | LSTM | (Norm.-)Min-Conf., Longest
Table 1: Summary of experiment settings. Architecture details are in App. C.
## 4 Search for the Optimal Order
There are two technical challenges in finding $\sigma_{\xi}^{*}$. First, we do
not have access to the data distribution. Second, the permutation space of all
orders is too large.
### 4.1 Validation Set Maximum
To solve the first problem, we assume access to a validation set
$\mathcal{D}^{V}\sim\mathbb{P}_{XY}$. Since we are searching for the oracle
model, there are no practical constraints on the size of $\mathcal{D}^{V}$. In
addition, we assume access to an independently drawn test set
$\mathcal{D}^{T}\sim\mathbb{P}_{XY}$. We define our optimal order estimator as
$\displaystyle\widehat{\sigma}_{\xi}=\operatorname*{arg\,max}_{\sigma}q_{\xi}\left(\sigma,\mathcal{D}^{V}\right),$
(1)
and its quality on the test set $q_{\xi}\left(\widehat{\sigma}_{\xi},\mathcal{D}^{T}\right)$
serves as an unbiased estimate of its generalization quality.
### 4.2 Simulated Annealing Search
Exhaustively searching over the space of all labeling orders is prohibitive as
there are
$|\mathcal{D}^{U}|!/\left[(|\mathcal{D}^{U}|-BK)!\cdot(B!)^{K}\right]$
different orders to consider. In addition, we cannot use gradient-based
optimization due to the discreteness of order space. Thus, we use simulated
annealing (SA) (Kirkpatrick et al.,, 1983), as described in Alg. 1.
Input: SA steps $T_{S}$, greedy steps $T_{G}$, linear annealing factor $\gamma$
$\sigma^{(0)}=\texttt{random-shuffle}\left([1,2,...,|\mathcal{D}^{U}|]\right);\ q^{(0)}=q_{\xi}\left(\sigma^{(0)},\mathcal{D}^{V}\right);$
$\sigma^{*}=\sigma^{(0)};\ q^{*}=q^{(0)};$
for _$t=1,2,...,T_{S}$_ do
  $\sigma^{(p)}=\texttt{propose}\left(\sigma^{(t-1)}\right);\ q^{(p)}=q_{\xi}\left(\sigma^{(p)},\mathcal{D}^{V}\right);\ u\sim\mathrm{Unif}[0,1];$
  if _$u<\exp\left[\gamma t\left(q^{(p)}-q^{(t-1)}\right)\right]$_ then
    $\sigma^{(t)}=\sigma^{(p)};\ q^{(t)}=q^{(p)};$
    if _$q^{*}<q^{(p)}$_ then $\sigma^{*}=\sigma^{(p)};\ q^{*}=q^{(p)};$
  else
    $\sigma^{(t)}=\sigma^{(t-1)};\ q^{(t)}=q^{(t-1)};$
for _$t=1,2,...,T_{G}$_ do
  $\sigma^{(p)}=\texttt{propose}\left(\sigma^{*}\right);\ q^{(p)}=q_{\xi}\left(\sigma^{(p)},\mathcal{D}^{V}\right);$
  if _$q^{(p)}>q^{*}$_ then $\sigma^{*}=\sigma^{(p)};\ q^{*}=q^{(p)};$
return _$\sigma^{*},q^{*}$_
Algorithm 1 Simulated Annealing (SA) Search
The search starts with a randomly initialized order $\sigma^{(0)}$. At time
step $t$, it proposes a new order $\sigma^{(p)}$ from $\sigma^{(t-1)}$ with
the following transition kernel (Fig. 2): with equal probabilities, it either
swaps two data points from two different batches or replaces a data point in
use by an unused one. It then evaluates the quality of the new order and
accepts or rejects the proposal with probability depending on the quality
difference. After searching for $T_{S}$ steps, the algorithm greedily
optimizes the best order for an additional $T_{G}$ steps, and returns the best
order and quality in the end.
Figure 2: Illustration of the transition kernel. With equal probabilities, the
kernel either swaps two data points from two different batches (sections of
different colors) or replaces a data point in the first $K$ batches with one
outside (the gray section).
## 5 Experiment Overview
In addition to the datasets $\mathcal{D}^{U},\mathcal{D}^{L}_{0},\mathcal{D}^{V}$,
and $\mathcal{D}^{T}$ described in Sec. 3 and 4, we introduced an additional
model selection set $\mathcal{D}^{M}$, since training neural networks on small
datasets is prone to overfitting. It is used to select the best model across
100 epochs and to trigger early stopping if the performance on this set does
not improve for 20 consecutive epochs. In typical AL settings where data
annotation is costly, $\mathcal{D}^{M}$ is comparable in size to the
warm-start set $\mathcal{D}^{L}_{0}$.
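The model-selection protocol on $\mathcal{D}^{M}$ can be sketched as follows; `train_epoch` and `evaluate` are hypothetical stand-ins for the real training and $\mathcal{D}^{M}$-evaluation routines, not the paper's code.

```python
def train_with_early_stopping(train_epoch, evaluate, max_epochs=100, patience=20):
    """Keep the best-scoring epoch on a held-out model selection set and
    stop once `patience` consecutive epochs bring no improvement."""
    best_score, best_epoch, since_best = float("-inf"), -1, 0
    for epoch in range(max_epochs):
        train_epoch(epoch)                    # one pass over the labeled set
        score = evaluate()                    # performance on the selection set
        if score > best_score:
            best_score, best_epoch, since_best = score, epoch, 0
        else:
            since_best += 1
            if since_best >= patience:        # early stopping triggered
                break
    return best_epoch, best_score
```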
For each task, Tab. 1 describes the experiment-specific parameters. The
training batch size was set equal to the acquisition batch size $B$, and we
used the Adam optimizer (Kingma and Ba,, 2014). Following standard practice,
the learning rate is $10^{-3}$, except for RoBERTa finetuning, where it is
$10^{-4}$.
We searched for $T_{S}=25,000$ and $T_{G}=5,000$ steps with $\gamma=0.1$. We
found little improvement in the greedy stage.
The full dataset is shuffled and split into non-overlapping sets
$\mathcal{D}^{U},\mathcal{D}^{L}_{0},\mathcal{D}^{M},\mathcal{D}^{V},\mathcal{D}^{T}$.
Since the shuffling is not label-stratified, the empirical label distribution
in any set (and $\mathcal{D}^{L}_{0}$ in particular) may not be close to the
actual distribution. We made this choice because it better reflects actual
deployment conditions. For each task we considered commonly used architectures.
### 5.1 Object Classification (OC)
For object classification (OC), we used the Fashion-MNIST dataset (Xiao et
al.,, 2017) with ten classes. Even though the dataset is label-balanced,
$|\mathcal{D}^{L}_{0}|=50$ means that the initial warm-start set can be
extremely imbalanced, posing an additional challenge to the uncertainty
estimation used by many heuristics. We used a CNN architecture with two convolution
layers followed by two fully connected layers. We compared against max-
entropy, BALD (Gal et al.,, 2017), and BatchBALD (Kirsch et al.,, 2019)
heuristics.
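As a rough illustration of the scoring behind these heuristics, the sketch below assumes access to per-class predictive probabilities (and, for BALD, several stochastic forward passes, e.g. via MC-dropout). This is not the authors' implementation, and BatchBALD, which additionally de-correlates points within a batch, is omitted.

```python
import math

def predictive_entropy(probs):
    """Entropy of one predictive distribution (the max-entropy score)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def bald_score(mc_probs):
    """BALD: entropy of the mean prediction minus the mean entropy over
    stochastic forward passes (mutual information between label and weights)."""
    n = len(mc_probs)
    mean = [sum(col) / n for col in zip(*mc_probs)]
    return predictive_entropy(mean) - sum(predictive_entropy(p) for p in mc_probs) / n

def max_entropy_batch(prob_table, batch_size):
    """Indices of the `batch_size` highest-entropy pool points."""
    scores = [predictive_entropy(p) for p in prob_table]
    return sorted(range(len(prob_table)), key=lambda i: -scores[i])[:batch_size]
```

BALD is zero when all stochastic passes agree (uncertainty is purely aleatoric) and positive when they disagree, which is the disagreement signal the heuristic acquires on.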
### 5.2 Intent Classification (IC)
For intent classification (IC), we used the Task-Oriented Parsing v2 (TOPv2)
dataset (Chen et al.,, 2020), which consists of eight domains of human
interaction with digital assistants, such as _weather_ and _navigation_. We
used the “alarm” domain for our experiments. In intent classification, given
a user instruction such as “set an alarm at 8 am tomorrow”, the task is to
predict the main intent of the sentence (create-alarm). There are seven intent
classes: create-alarm, get-alarm, delete-alarm, silence-alarm, snooze-alarm,
update-alarm, and other. The dataset is very imbalanced, with create-alarm
accounting for over half of all examples. Hence, we used multi-class weighted
F1 score as the evaluation metric.
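A self-contained sketch of this metric, equivalent in spirit to `sklearn.metrics.f1_score` with `average='weighted'` (per-class F1 averaged with weights proportional to class support):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Multi-class F1, weighted by each class's support in `y_true`."""
    support = Counter(y_true)
    total = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        pred_pos = sum(1 for p in y_pred if p == cls)
        prec = tp / pred_pos if pred_pos else 0.0
        rec = tp / n
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += (n / len(y_true)) * f1      # weight by class frequency
    return total
```

Because classes are weighted by their frequency, a model cannot score well by ignoring the dominant create-alarm class, yet rare classes still contribute.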
Our main architecture of study is the bi-directional LSTM (BiLSTM)
architecture with word embeddings initialized to GloVe (Pennington et al.,,
2014). To study the model transfer quality (Sec. 6.3), we also considered a
CNN architecture, which uses 1D convolution layers to process a sentence
represented as its word embedding sequence, an Average-of-Embedding (AOE)
architecture, which uses a fully connected network to process a sentence
represented by the average embedding of the words, and finetuning from the
pre-trained RoBERTa model (Liu et al.,, 2019). Detailed architectures are
described in App. C. We considered max-entropy and BALD heuristics.
### 5.3 Named Entity Recognition (NER)
Named entity recognition (NER) is a structured prediction NLP task to predict
a tag type for each word in the input sentence. We used the MIT Restaurant
dataset (Liu et al.,, 2013), which consists of restaurant-related sentences
tagged in Beginning-Inner-Outer (BIO) format. Tags include amenity, price,
location, etc. For example, the tags for “what restaurant near here serves
pancakes at 6 am” are [O, O, B-location, I-location, O, B-dish, O, B-hours,
I-hours]. The outer tag accounts for more than half of all tags, and tag-level
weighted F1 score is used as the evaluation metric.
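To make the BIO format concrete, here is a small helper (ours, purely for illustration, not part of the dataset tooling) that groups a well-formed BIO tag sequence into typed spans:

```python
def bio_to_spans(tags):
    """Group BIO tags into (type, start, end) half-open spans."""
    spans, cur_type, start = [], None, None
    for i, tag in enumerate(tags + ["O"]):    # sentinel flushes the last span
        inside = tag.startswith("I-") and tag[2:] == cur_type
        if not inside:                        # the current span (if any) ends here
            if cur_type is not None:
                spans.append((cur_type, start, i))
            cur_type, start = (tag[2:], i) if tag.startswith("B-") else (None, None)
    return spans
```

Applied to the example sentence above, it recovers the location, dish, and hours entities from their B-/I- tags.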
We used a BiLSTM encoder and an LSTM decoder following Shen et al., (2018),
but without the CNN character encoder, since an ablation study did not find it
beneficial. We used teacher-forcing at training time and greedy decoding at
prediction time. During active acquisition, we assumed that the annotation
cost is the same for each sentence regardless of its length. We considered
several heuristics. The min-confidence heuristic selects the sentence with the
lowest log joint probability of the greedily decoded tag sequence (i.e. sum of
log probability of each tag), which is divided by sentence length in its
normalized version. The longest heuristic prioritizes longer sentences.
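Given per-tag log probabilities of the greedily decoded sequences, the min-confidence scores can be sketched as follows (names are illustrative):

```python
import math

def min_confidence_scores(tag_log_probs, normalized=False):
    """Log joint probability of each greedily decoded tag sequence
    (sum of per-tag log probabilities); optionally length-normalized."""
    scores = []
    for lps in tag_log_probs:
        s = sum(lps)
        scores.append(s / len(lps) if normalized else s)
    return scores

def pick_least_confident(tag_log_probs, normalized=False):
    """Acquire the sentence the model is least confident about."""
    scores = min_confidence_scores(tag_log_probs, normalized)
    return min(range(len(scores)), key=lambda i: scores[i])
```

Note that the unnormalized score can only decrease as a sentence grows, since each additional tag contributes a non-positive log probability; this is the source of the length-seeking behavior discussed in Sec. 6.4.3, which length normalization mitigates.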
## 6 Results and Analyses
We compared the performance of the oracle order against existing heuristics.
We call the oracle order and its performance “optimal”, even though they are
only estimates of the truly optimal ones, due to our use of the validation set
for estimating the quality and of simulated annealing for the search.
### 6.1 Optimal Quality Values and Curves
Tab. 2 presents the qualities of the optimal, heuristic and random orders. On
all three tasks, there are large gaps between the heuristic and optimal
orders. Specifically, the optimal orders are better than the best heuristics
by 11.7%, 3.1%, and 3.5% for OC, IC, and NER respectively in terms of
$\xi$-quality. For OC and IC, we repeated the training across five random
seeds. The difference between the optimal and heuristic performances is
significant at $\alpha=0.05$ for both tasks under a paired $t$-test, with
$p\leq 0.001$ and $p=0.003$ respectively. In addition, the optimal orders are
better than the random orders by 7.53% on average across the three tasks, while
the best heuristic orders only achieve an improvement of 1.49%.
Visually, Fig. 3 depicts the performance curve of the optimal order against
heuristic and random baselines on both the validation ($\mathcal{D}^{V}$) and
test ($\mathcal{D}^{T}$) set for OC. The optimal order significantly
outperforms heuristic and random baselines, both numerically and visually, and
we observed that the optimal order found using the validation set generalizes
to the test set. Quality summary for each individual heuristic and performance
curves for IC and NER tasks are presented in App. D and E. Even though there
might not exist an anytime optimal acquisition function (Prop. 1), we did
observe that the oracle order is uniformly better than heuristics on all three
tasks.
| | OC | IC | NER |
|---|---|---|---|
| Optimal | 0.761 | 0.887 | 0.839 |
| Best Heuristic | 0.682∗ | 0.858 | 0.811 |
| Random | 0.698∗ | 0.816 | 0.800 |

Table 2: Qualities of optimal, heuristic, and random orders on the three
tasks. Individual heuristic performances are shown in App. D. ∗No heuristics
outperform the random baseline on object classification.

Figure 3: Performance curves of optimal, heuristic, and random orders for
object classification.
### 6.2 Effect of Training Stochasticity
As previously stated, training stochasticity could negatively affect the
optimal quality. To experimentally verify this, for $\xi$ and $\xi^{\prime}$
we compared $q_{\xi}(\widehat{\sigma}_{\xi},\mathcal{D}^{T})$ and
$q_{\xi}(\widehat{\sigma}_{\xi^{\prime}},\mathcal{D}^{T})$ to study whether
$\xi^{\prime}$-optimal order is less optimal for $\xi$-training. Since we did
not have dropout layers in our network, the only source of stochasticity is
the random initialization.
For five different seeds in OC and IC, $\xi,\xi^{\prime}\in\\{0,...,4\\}$,
Fig. 4 shows the pairwise quality
$q_{\xi}(\widehat{\sigma}_{\xi^{\prime}},\mathcal{D}^{T})$, where
$\xi^{\prime}$ is the source seed on the column and $\xi$ is the target seed
on the row. The color on each cell represents the seed-mismatch quality gap
$q_{\xi}(\widehat{\sigma}_{\xi},\mathcal{D}^{T})-q_{\xi}(\widehat{\sigma}_{\xi^{\prime}},\mathcal{D}^{T})$,
with a darker color indicating a larger gap.
Figure 4: Training stochasticity negatively affects the optimal quality. The
number in each cell represents $q_{\xi}(\widehat{\sigma}_{\xi^{\prime}})$,
with $\xi$ on the row and $\xi^{\prime}$ on the column. The color represents
$q_{\xi}(\widehat{\sigma}_{\xi^{\prime}})-q_{\xi}(\widehat{\sigma}_{\xi})$,
with darker colors indicating larger gaps.
The results suggest that in order to fully realize the potential of active
learning, the acquisition function needs to specifically consider the
particular model. However, high-quality uncertainty estimates are most
commonly defined for a model class (e.g. the class of all random
initializations in Deep Ensemble (Lakshminarayanan et al.,, 2017), the class
of dropout masks in MC-Dropout (Gal and Ghahramani,, 2016), and the class of
Gaussian noises in Bayes-by-Backprop (Blundell et al.,, 2015)). A definition
of, and algorithms for, single-model uncertainty have the potential to further
improve uncertainty-based heuristics. In addition, the result also implies
that a purely diversity-based method relying on a pre-defined embedding or
distance metric cannot be optimal.
### 6.3 Model Transfer Quality
Lowell et al., (2019) found that the heuristic acquisition order on one model
sometimes does not transfer well to another model. However, it is not clear
whether this is a general bane of active learning or just about specific
heuristics. On the one hand, if the optimal order finds generally valuable
examples early on, we would expect a different model architecture to also
benefit from this order. On the other hand, if the optimal order exploits
particularities of the model architecture, we would expect it to have worse
transfer quality.
We replicated this study on the IC task, using four very different deep
architectures: BiLSTM, CNN, AOE and RoBERTa described in Sec. 5. For a pair of
source-target architectures $(A_{s},A_{t})$, we found both the optimal and
best heuristic order on $A_{s}$ and evaluate it on $A_{t}$. Then we compared
its quality to that of a random order on $A_{t}$. The best heuristic is BALD
for LSTM, CNN and RoBERTa, and max-entropy for AOE. Fig. 5 shows the quality
of each order. The optimal order always transfers better than the random order
and, with the single exception of RoBERTa $\rightarrow$ AOE, better than or
equal to the heuristic order as well. This suggests that the optimal order
tends to label model-agnostically valuable data points, and it is likely that
better acquisition functions for one architecture can also help assemble high-
quality datasets for others.
Figure 5: Quality of optimal and max-entropy heuristic order across different
architectures. Cell color represents difference to the random baseline.
Perhaps surprisingly, we found that the optimal orders for different
architectures actually share fewer data points than the heuristic orders,
despite achieving higher transfer performance. Fig. 6 shows how the optimal
(top) and heuristic (bottom) orders for different architectures relate to each
other on the intent classification task. Specifically, each vertical bar is
composed of 160 horizontal stripes, which represent the 160 data points
acquired under the specified order, out of a pool of 800 data points (not
shown). The first stripe from the top represents the first data point being
acquired, followed by data points represented by the second, third, etc.
stripes from the top. A line between the two orders connects the same data
point in both orders and reflects the position of that data point in each of
the two orders. The number of data points shared by both orders (i.e. number
of connecting lines) is computed and shown.
Figure 6: A visual comparison of optimal (top) and heuristic (bottom) orders,
for every pair of architectures, showing the number of shared data points
acquired under both architectures and their relative ranking.

Figure 7: Distribution visualization for object classification. Left: first
five batches numerically labeled in $t$-SNE embedding space, along with
warm-start (triangle) and test (circle) samples. Right: label distribution
w.r.t. labeled set size, with test-set distribution shown between plots and as
dashed lines.
The results suggest that there is an abundance of high-quality data points
that could lead to high AL performance, even though the optimal orders for
different architectures select largely different ones. However, these points
are mostly missed by the heuristics. The heavy sharing among heuristics can be
explained by the fact that they mostly over-sample sentences that are short or
belong to the minority class, and thus deplete the pool of such examples.
### 6.4 Distributional Characteristics
In this set of experiments, we visualized the distributional characteristics
of the acquired data points under the optimal and heuristic orders. We
analyzed both the input and the output space.
#### 6.4.1 Object Classification
The left two panels of Fig. 7 visualize the first five batches (125 data
points in total) acquired by the optimal and max-entropy orders. For $28\times
28$ grayscale Fashion-MNIST images, we reduced their dimension to 100 using
principal component analysis (PCA), then to 2 using $t$-distributed stochastic
neighbor embedding ($t$-SNE) (Maaten and Hinton,, 2008). Each acquired image,
positioned at its 2-dimensional embedding and color-coded by label, is
represented by a number referring to the batch index. Circle and triangle
marks represent points in the test and warm-start sets.
The right three panels of Fig. 7 visualize the label distribution of
$\mathcal{D}^{L}_{k}$ during acquisition. The horizontal axis represents the
total number of data points (starting from 50 images in $\mathcal{D}^{L}_{0}$,
each order acquires 300 from the pool), while the relative length of each
color represents the frequency of each label. The bars between plots show the
“reference distribution” from the test set, which is balanced in this task.
For both input and output, the optimal order samples quite uniformly in the
sample space, and is not easily distinguishable from random sampling. However,
this order is distinctively non-random because it achieves a much higher
quality than the random order (Tab. 2). By contrast, the max-entropy sampling
is very imbalanced in both input and output space. BALD and BatchBALD exhibit
similar behaviors (App. F).
Figure 8: Sentence length and label distribution w.r.t. labeled set size for
intent classification. For sentence lengths, the colors represent, from bottom
to top: 1–3, 4, 5, 6, 7, 8, 9, 10, 11–12, and 13+. For labels, the colors
represent, from bottom to top: create-alarm, get-alarm, delete-alarm, other,
silence-alarm, snooze-alarm, and update-alarm.

Figure 9: Sentence length and tag distribution w.r.t. labeled set size for
NER. For sentence lengths, the colors represent, from bottom to top: 1–5, 6,
7, 8, 9, 10, 11–12, 13–16, and 17+. For labels, the colors represent, from
bottom to top: Outer, B/I-amenity, B/I-cuisine, B/I-dish, B/I-hours,
B/I-location, B/I-price, B/I-rating, and B/I-restaurant-name.

Figure 10: IDMR-augmented heuristics performance, improving upon the vanilla
versions by 2.95% on average.
#### 6.4.2 Intent Classification
Fig. 8 presents the analogous analysis for IC, where we studied the sentence-
length aspect of the input distribution. Since there are few sentences with
fewer than four or more than ten words, we grouped those sentences into the
0–3, 11–12, and 13+ categories for ease of visualization. Both input and output
distribution plots echo those in Fig. 7.
In particular, even though short sentences (blue and orange) are infrequent in
the actual data distribution, both heuristics over-sample them, likely
because shorter sentences provide less “evidence” for a prediction. In
addition, they also significantly under-sample the create-alarm label (blue),
likely because it is already well-represented in the warm-start set, and
over-sample the silence-alarm label (purple), for the converse reason. The
optimal order, again, does not exhibit any of these under- or over-sampling
behaviors while achieving a much higher quality.
#### 6.4.3 Named Entity Recognition
Fig. 9 presents the analysis for the NER task. We observed the same
distribution-matching property on both the input and the output side. By
contrast, even though the acquisition function does not have the “freedom” to
choose sentences with arbitrary tags, the min-confidence heuristic still
over-samples the relatively infrequent B/I-hours tags (purple) in the beginning.
Notably, in NER the basic “supervision unit” is an individual word-tag pair.
Thus, the model can effectively learn from more data by selecting longer
sentences. This is indeed exploited by the min-confidence heuristic, where
very long sentences are vastly over-sampled. However, the optimal order does
not exhibit such length-seeking behavior, and instead matches the test set
distribution quite closely.
### 6.5 Distribution-Matching Regularization
Sec. 6.4 consistently suggests that unlike heuristics, the optimal order
matches the data distribution meticulously. Can we improve heuristics with
this insight?
Input: $\mathcal{A}\left(m_{\theta},\mathcal{D}^{L},\mathcal{D}^{U}\right)$ that returns the next data point in $\mathcal{D}^{U}$ to label
$d_{\mathrm{ref}}=\texttt{bin-distribution}\left(\mathcal{D}^{L}_{0,X}\cup\mathcal{D}^{U}_{X}\cup\mathcal{D}^{M}_{X}\right)$;
$d_{\mathrm{cur}}=\texttt{bin-distribution}\left(\mathcal{D}_{X}^{L}\right)$;
$b^{*}=\operatorname*{arg\,min}_{b}\left(d_{\mathrm{cur}}-d_{\mathrm{ref}}\right)_{b}$;
$\mathcal{D}^{U}_{b^{*}}=\{x\in\mathcal{D}^{U}_{X}:\texttt{bin}(x)=b^{*}\}$;
return $\mathcal{A}\left(m_{\theta},\mathcal{D}^{L},\mathcal{D}^{U}_{b^{*}}\right)$
Algorithm 2 Input Distribution-Matching Regularization (IDMR)
Alg. 2 presents Input Distribution-Matching Regularization (IDMR), a meta-
algorithm that augments any acquisition function to its input distribution-
matching regularized version. $\texttt{bin}(x)$ is a function that assigns
each input data point $x$ to one of finitely many bins by its characteristics.
$\texttt{bin-distribution}(\mathcal{D}_{X})$ computes the empirical count
distribution of the bin values for input part of $\mathcal{D}_{X}$. The IDMR
algorithm compares the reference bin distribution on all available data (i.e.
$\mathcal{D}^{L}_{0,X}\cup\mathcal{D}^{U}_{X}\cup\mathcal{D}^{M}_{X}$) to the
current bin distribution of $\mathcal{D}^{L}_{X}$, and uses the base
acquisition function to select a data point in the highest deficit bin.
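A compact sketch of IDMR with hypothetical helper names; `base_acquire` stands in for $\mathcal{A}$ restricted to a candidate set, and `bin_fn` plays the role of $\texttt{bin}(x)$:

```python
from collections import Counter

def idmr_select(base_acquire, labeled_bins, reference_bins, pool, bin_fn):
    """Input Distribution-Matching Regularization (sketch of Alg. 2).

    Restrict the base acquisition function to the bin whose share of the
    current labeled set falls furthest below its share in the reference
    (all available) data.
    """
    ref = Counter(reference_bins)
    cur = Counter(labeled_bins)
    n_ref = sum(ref.values())
    n_cur = max(sum(cur.values()), 1)
    # deficit of each bin: current fraction minus reference fraction
    deficit = {b: cur[b] / n_cur - ref[b] / n_ref for b in ref}
    b_star = min(deficit, key=deficit.get)        # highest-deficit bin
    candidates = [x for x in pool if bin_fn(x) == b_star]
    return base_acquire(candidates if candidates else pool)
```

If the highest-deficit bin happens to be empty in the pool, this sketch falls back to the full pool rather than failing.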
For OC, a $K$-means algorithm identified five clusters in the $t$-SNE space,
which are used as the five bins. For IC and NER, sentences are binned by
length. We used IDMR to augment the best performing heuristics. As shown in
Fig. 10, on all tasks, the IDMR-augmented heuristic is better than all but the
optimal order, with an average improvement of 2.95% over its non-augmented
counterpart. Across five random seeds for OC and IC, paired $t$-tests yield
$p=0.028$ and $p=0.017$ for the improvement, both significant at $\alpha=0.05$.
We also tried using a similar technique to match the output label
distribution. However, it is complicated by the fact that the labels are not
observed prior to acquisition and we observed mixed results. The algorithm and
results are described in App. G.
## 7 Discussion and Conclusion
In this paper, we searched for and analyzed the optimal data acquisition order
in active learning. Our findings are consistent across tasks and models.
First, we confirmed the dependence of optimal orders on training stochasticity
such as model initialization or dropout masking, which should be explicitly
considered as AL methods approach the optimal limit.
Notably, we did not find any evidence that the optimal order needs to
over-/under-sample in hard-/easy-to-learn regions. Given this observation, it
is reasonable that the optimal order transfers across architectures better
than heuristics and that heuristics can benefit from a distribution-matching
regularization.
Indeed, most AL heuristics seek to optimize proxy objectives such as maximal
informativeness or input space coverage. While intuitive, these objectives
have not been rigorously shown to correlate with any measure of AL performance
(e.g. $\xi$-quality). For example, even if a data point is maximally
informative, the information may not be fully absorbed during training and
thus may not lead to a performance improvement.
Moreover, in supervised learning, the fundamental assumption that the training
and test data come from the same underlying distribution is the key to most
guarantees of empirical risk minimization (ERM). Thus, we conjecture that the
distribution-matching property arises from the nice behavior of ERM under this
assumption. This also suggests that when the proposed algorithm is expected to
violate this assumption, more careful analysis should be done on how such
distribution shift may (adversely) affect the performance of models optimized
under ERM. Developing theoretical guarantees of ERM under controlled
distribution mismatch and/or formulations beyond ERM may also benefit active
learning.
Finally, we focused on classification problems with a few classes. It would be
interesting for future work to extend the analysis to other settings, such as
large number of classes (e.g. CIFAR-100) or regression, and study whether the
same behaviors hold or not.
## Acknowledgement
Yilun Zhou conducted this work during an internship at Facebook AI.
Correspondence should be sent to Yilun Zhou at [email protected]. We thank the
anonymous reviewers for their feedback.
## References
* Bachman et al., (2017) Bachman, P., Sordoni, A., and Trischler, A. (2017). Learning algorithms for active learning. In International Conference on Machine Learning (ICML), pages 301–310. PMLR.
* Blundell et al., (2015) Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. (2015). Weight uncertainty in neural network. In International Conference on Machine Learning (ICML), pages 1613–1622. PMLR.
* Chen et al., (2020) Chen, X., Ghoshal, A., Mehdad, Y., Zettlemoyer, L., and Gupta, S. (2020). Low-resource domain adaptation for compositional task-oriented semantic parsing. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
* Contardo et al., (2017) Contardo, G., Denoyer, L., and Artières, T. (2017). A meta-learning approach to one-step active-learning. In International Workshop on Automatic Selection, Configuration and Composition of Machine Learning Algorithms, volume 1998, pages 28–40. CEUR.
* Fang et al., (2017) Fang, M., Li, Y., and Cohn, T. (2017). Learning how to active learn: A deep reinforcement learning approach. In Empirical Methods in Natural Language Processing (EMNLP), pages 595–605.
* Gal and Ghahramani, (2016) Gal, Y. and Ghahramani, Z. (2016). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning (ICML), pages 1050–1059.
* Gal et al., (2017) Gal, Y., Islam, R., and Ghahramani, Z. (2017). Deep bayesian active learning with image data. In International Conference on Machine Learning (ICML), pages 1183–1192. PMLR.
* Houlsby et al., (2011) Houlsby, N., Huszár, F., Ghahramani, Z., and Lengyel, M. (2011). Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745.
* Kapoor et al., (2007) Kapoor, A., Grauman, K., Urtasun, R., and Darrell, T. (2007). Active learning with gaussian processes for object categorization. In International Conference on Computer Vision (ICCV), pages 1–8. IEEE.
* Kasai et al., (2019) Kasai, J., Qian, K., Gurajada, S., Li, Y., and Popa, L. (2019). Low-resource deep entity resolution with transfer and active learning. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 5851–5861.
* Kingma and Ba, (2014) Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
* Kirkpatrick et al., (1983) Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598):671–680.
* Kirsch et al., (2019) Kirsch, A., van Amersfoort, J., and Gal, Y. (2019). Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. In Advances in Neural Information Processing Systems (NeurIPS), pages 7026–7037.
* Konyushkova et al., (2017) Konyushkova, K., Sznitman, R., and Fua, P. (2017). Learning active learning from data. In Advances in Neural Information Processing Systems (NIPS), pages 4225–4235.
* Konyushkova et al., (2018) Konyushkova, K., Sznitman, R., and Fua, P. (2018). Discovering general-purpose active learning strategies. arXiv preprint arXiv:1810.04114.
* Koshorek et al., (2019) Koshorek, O., Stanovsky, G., Zhou, Y., Srikumar, V., and Berant, J. (2019). On the limits of learning to actively learn semantic representations. In Conference on Computational Natural Language Learning (CoNLL), pages 452–462.
* Lakshminarayanan et al., (2017) Lakshminarayanan, B., Pritzel, A., and Blundell, C. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems (NIPS), pages 6402–6413.
* Liu et al., (2013) Liu, J., Pasupat, P., Cyphers, S., and Glass, J. (2013). ASGARD: A portable architecture for multilingual dialogue systems. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8386–8390. IEEE.
* Liu et al., (2018) Liu, M., Buntine, W., and Haffari, G. (2018). Learning how to actively learn: A deep imitation learning approach. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 1874–1883.
* Liu et al., (2019) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
* Lowell et al., (2019) Lowell, D., Lipton, Z. C., and Wallace, B. C. (2019). Practical obstacles to deploying active learning. In Conference on Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 21–30.
* Maaten and Hinton, (2008) Maaten, L. v. d. and Hinton, G. (2008). Visualizing data using $t$-SNE. Journal of Machine Learning Research (JMLR), 9(Nov):2579–2605.
* Pang et al., (2018) Pang, K., Dong, M., Wu, Y., and Hospedales, T. (2018). Meta-learning transferable active learning policies by deep reinforcement learning. arXiv preprint arXiv:1806.04798.
* Pennington et al., (2014) Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
* Roy and McCallum, (2001) Roy, N. and McCallum, A. (2001). Toward optimal active learning through monte carlo estimation of error reduction. International Conference on Machine Learning (ICML), pages 441–448.
* Settles, (2009) Settles, B. (2009). Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.
* Shen et al., (2018) Shen, Y., Yun, H., Lipton, Z. C., Kronrod, Y., and Anandkumar, A. (2018). Deep active learning for named entity recognition. In International Conference on Learning Representation (ICLR).
* Sinha et al., (2019) Sinha, S., Ebrahimi, S., and Darrell, T. (2019). Variational adversarial active learning. In International Conference on Computer Vision (ICCV), pages 5972–5981.
* Vu et al., (2019) Vu, T., Liu, M., Phung, D., and Haffari, G. (2019). Learning how to active learn by dreaming. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 4091–4101.
* Woodward and Finn, (2017) Woodward, M. and Finn, C. (2017). Active one-shot learning. arXiv preprint arXiv:1702.06559.
* Xiao et al., (2017) Xiao, H., Rasul, K., and Vollgraf, R. (2017). Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
* Xu et al., (2007) Xu, Z., Akella, R., and Zhang, Y. (2007). Incorporating diversity and density in active learning for relevance feedback. In European Conference on Information Retrieval (ECIR), pages 246–257. Springer.
## Appendix A Meta-Analysis of Existing Results
In this section, we present a meta-analysis of existing results to highlight
the inconsistencies among them. Fig. 1 of (Gal et al.,, 2017) (replicated as
Fig. 11 left) shows that BALD (Bayesian active learning by disagreement) using
MC-dropout achieves much higher performance than the random baseline on one
image dataset. By contrast, Fig. 2 of (Sinha et al.,, 2019) (replicated as
Fig. 11 right) shows that MC-Dropout methods struggle to outperform even the
random baseline on four datasets.
Figure 11: Left: Fig. 1 of (Gal et al.,, 2017). Right: Fig. 2 of (Sinha et
al.,, 2019).
Fig. 2 of (Gal et al.,, 2017) (replicated as Fig. 12 top) shows that active
learning strategies with MC-Dropout-based uncertainty estimation consistently
outperform their deterministic counterparts on a CV task. However, Fig. 4 of
(Shen et al.,, 2018) (replicated as Fig. 12 bottom) finds no discernible
difference between MC-Dropout-based BALD and other deterministic heuristics on
an NLP task.
Figure 12: Top: Fig. 2 of (Gal et al.,, 2017). Bottom: Fig. 4 of (Shen et
al.,, 2018).
For meta-active-learning methods, Fig. 3 of (Fang et al.,, 2017) (replicated
as Fig. 13 top) shows that the RL-based PAL (policy active learning)
consistently outperforms uncertainty and random baselines in a cross-lingual
transfer setting. However, Fig. 2 of (Liu et al.,, 2018) (replicated as Fig.
13 middle) shows that PAL is hardly better than uncertainty or random
baselines in both cross-domain and cross-lingual transfer settings, while
their proposed ALIL (active learning by imitation learning) is better than all
studied baselines. Nevertheless, Fig. 4 of (Vu et al.,, 2019) (replicated as
Fig. 13 bottom) again shows that both PAL and ALIL are not better than the
uncertainty-based heuristic on a named entity recognition task, while their
proposed active learning by dreaming method ranks the best.
Figure 13: Top: Fig. 3 of (Fang et al.,, 2017). Middle: Fig. 2 of (Liu et
al.,, 2018). Bottom: Fig. 4 of (Vu et al.,, 2019).
These results demonstrate a concerning inconsistency and a lack of
generalization even across very similar datasets and tasks. In addition, the
performance curves presented above are typically the sole analysis conducted by the authors.
Thus, we believe that it is enlightening to analyze more aspects of a proposed
strategy and its performance. In addition, a comparison with the optimal
strategy could reveal actionable ways to improve the proposed strategy, which
is the main motivation of our work.
## Appendix B Considerations for Stochastic Acquisition Functions
It is straightforward to extend the discussion in Sec. 3 to stochastic
acquisition functions, such as those that require stochastic uncertainty
estimation, like BALD, or those that are represented as stochastic policies.
First, we introduce $\zeta$ to capture such randomness, so that
$\Delta\mathcal{D}^{L}_{k+1}=\mathcal{A}(\theta_{k},\mathcal{D}^{L}_{k},\zeta)$
remains deterministic. Then we can establish a one-to-one relationship between
$\mathcal{A}(\cdot,\cdot,\zeta)$ and order $\sigma_{\zeta}$ on
$\mathcal{D}^{U}$. By definition of $\xi$-optimal order $\sigma_{\xi}^{*}$, we
still have $q_{\xi}(\sigma_{\xi}^{*})\geq q_{\xi}(\sigma_{\zeta})$, and
$q_{\xi}(\sigma_{\xi}^{*})\geq\mathbb{E}_{\zeta}\left[q_{\xi}(\sigma_{\zeta})\right]$.
In other words, the $\xi$-optimal order $\sigma_{\xi}^{*}$ outperforms both
the expected quality of the stochastic acquisition function and any single
instantiation of stochasticity.
The above discussion suggests that not accounting for stochasticity in the
acquisition function, or being unable to do so (i.e. using only the expected
quality), is inherently sub-optimal. Since many heuristics require uncertainty estimation
through explicitly stochastic processes such as MC-Dropout (Gal and
Ghahramani,, 2016), the stochastic components may become the bottleneck to
further improvement as AL algorithms approach the optimal limit.
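For concreteness, the BALD score from MC-Dropout samples can be sketched as follows. This is a minimal NumPy illustration under assumed softmax outputs, not code from the paper: the score is the entropy of the mean predictive distribution minus the mean entropy of the individual stochastic forward passes, i.e. the mutual information between the label and the model parameters.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def bald_score(probs):
    """probs: (T, num_classes) softmax outputs from T MC-Dropout passes."""
    mean_p = probs.mean(axis=0)
    return entropy(mean_p) - entropy(probs, axis=-1).mean()

# Disagreeing passes -> high BALD; identical passes -> BALD ~ 0.
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
agree = np.array([[0.7, 0.3], [0.7, 0.3]])
assert bald_score(disagree) > bald_score(agree)
assert abs(bald_score(agree)) < 1e-9
```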
## Appendix C Model Architecture
Input: $28\times 28\times 1$
---
Conv: $32\,\,\,5\times 5$ filters
(Optional: MC-Dropout)
Max-Pool: $2\times 2$
ReLU
Conv: $64\,\,\,5\times 5$ filters
(Optional: MC-Dropout)
Max-Pool: $2\times 2$
ReLU
Linear: $1024\times 128$
(Optional: MC-Dropout)
ReLU
Linear: $128\times 10$
(a) CNN Object Class.
Input: $L$ tokens
---
GloVe embedding: 300 dim
BiLSTM: 300 hidden dim
Average of $L$ hiddens
(Optional: MC-Dropout)
Linear: $600\times 7$
(b) LSTM Intent Class.
Input: $L$ tokens
---
GloVe embedding: 300 dim
Conv: 50 size-3 kernels
ReLU
Conv: 50 size-3 kernels
ReLU
Conv: 50 size-3 kernels
ReLU
Average-Pool
Linear: $50\times 50$
ReLU
(Optional: MC-Dropout)
Linear: $50\times 7$
(c) CNN Intent Class.
Input: $L$ tokens
---
GloVe embedding: 300 dim
Average of $L$ embeddings
Linear: $300\times 100$
ReLU
Linear: $100\times 100$
ReLU
(Optional: MC-Dropout)
Linear: $100\times 7$
(d) AOE Intent Class.
Table 3: Model Architectures for Object and Intent Classification
### C.1 Object Classification
We follow the architecture used by Kirsch et al., (2019). The MC-Dropout
layers are present in BALD and BatchBALD for uncertainty estimation. The
architecture is summarized in Tab. 3(a).
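As a sanity check on Tab. 3(a), the layer arithmetic can be traced in a few lines, assuming valid convolutions with stride 1 and non-overlapping pooling (an assumption, but one that is consistent with the stated Linear $1024\times 128$ layer):

```python
# Shape trace for the CNN in Tab. 3(a); arithmetic sketch, not training code.
def conv_out(size, kernel):   # 'valid' convolution, stride 1
    return size - kernel + 1

def pool_out(size, window):   # non-overlapping max-pool
    return size // window

h = 28                              # input 28x28x1
h = pool_out(conv_out(h, 5), 2)     # Conv 5x5 -> 24, Max-Pool 2x2 -> 12
h = pool_out(conv_out(h, 5), 2)     # Conv 5x5 -> 8,  Max-Pool 2x2 -> 4
flat = h * h * 64                   # 64 channels after the second conv block
assert flat == 1024                 # matches the Linear 1024x128 layer
```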
### C.2 Intent Classification
We use an LSTM architecture as our main architecture of study for the intent
classification task. We initialize the word embeddings with GloVe (Pennington
et al.,, 2014) and jointly train it along with other model parameters. The
architecture is shown in Tab. 3(b). Again, MC-Dropout is only used for
uncertainty estimation in BALD.
To study model transfer performance, we also used (1D-)CNN and Average of
Embedding (AOE) architectures, shown in Tab. 3(c) and (d), as well as the pre-
trained RoBERTa architecture (Liu et al.,, 2019) with a linear classification
layer on top.
### C.3 Named Entity Recognition
For named entity recognition, we used the same architecture as Shen et al.,
(2018), except that we removed the character-level embedding layer, since an
ablation study did not find it beneficial to performance.
Specifically, the model architecture is encoder-decoder. The encoder builds
the representation for each word in the same way as the BiLSTM architecture
for intent classification. At each decoding step, the decoder maintains the
running hidden and cell state, receives the concatenation of the current word
encoder representation and previous tag one-hot representation as the input,
and predicts the current tag with a linear layer from output hidden state. At
the first step, a special [GO] tag was used.
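A minimal sketch of this tag-feedback decoding loop, with hypothetical dimensions, random weights, and a simplified recurrent update standing in for the LSTM cell:

```python
import numpy as np

# Greedy decoding with tag feedback: at each step the decoder consumes
# [word encoding; one-hot of previous tag] and predicts the current tag,
# starting from the special [GO] tag (index 0 here).
rng = np.random.default_rng(0)
L, enc_dim, hid, n_tags = 5, 8, 16, 4
encodings = rng.normal(size=(L, enc_dim))      # from the BiLSTM encoder
W_in = rng.normal(size=(enc_dim + n_tags, hid))
W_out = rng.normal(size=(hid, n_tags))

tags, prev = [], 0                             # prev = [GO]
h = np.zeros(hid)                              # running decoder state (sketch)
for t in range(L):
    one_hot = np.eye(n_tags)[prev]
    x = np.concatenate([encodings[t], one_hot])
    h = np.tanh(x @ W_in + h)                  # simplified recurrent update
    prev = int(np.argmax(h @ W_out))           # greedy prediction of current tag
    tags.append(prev)

assert len(tags) == L and all(0 <= t < n_tags for t in tags)
```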
## Appendix D Extended Quality Summary
Tab. 4 presents the numerical quality scores for optimal, heuristic, and
random orders. The object and intent classification tasks are evaluated and
averaged across 5 different model seeds, and the NER task is evaluated on a
single model seed. In all cases, we can observe a large gap between the
optimal order and the rest, indicating abundant room for improvement for the
heuristics.
| Object Class. | Intent Class. | | NER
---|---|---|---|---
Optimal | 0.761 | 0.887 | Optimal | 0.838
Max-Entropy | 0.682 | 0.854 | Min-Confidence | 0.811
BALD | 0.676 | 0.858 | Norm. Min-Conf. | 0.805
BatchBALD | 0.650 | N/A | Longest | 0.793
Random | 0.698 | 0.816 | Random | 0.801
Table 4: Complete results of optimal, heuristic, and random order qualities.
## Appendix E Optimal Performance Curves
Fig. 14 shows the performance curves on the validation and test sets for
intent classification (left two panels) and named entity recognition (right
two panels). Similar to those for object classification (Fig. 3), the shapes
of the curves are quite similar on the validation and test sets, indicating
good generalization.
Figure 14: Performance curves for intent classification (left two panels) and
NER (right two panels).
## Appendix F Additional Visualizations for Object Classification
Fig. 15 presents the input (via PCA and t-SNE dimensionality reduction) and
output distribution for BALD and BatchBALD order. The labels for data points
sampled by the BALD and BatchBALD heuristics are much less balanced than those
of the optimal and random orders. In addition, BatchBALD seems to focus on
only a few concentrated regions in the input space.
Figure 15: Top: the first five batches numerically labeled in t-SNE embedding
space, along with warmstart (triangle) and test (circle) samples. Bottom:
label distribution w.r.t. labeled set size, with test-set distribution shown
between plots and as dashed lines.
## Appendix G Output Distribution-Matching Regularization (ODMR)
In this section we describe the mixed results of regularizing the output
distribution on object and intent classification. Since NER is a structured
prediction task where the output tags in a sentence are correlated (e.g.
I-tags always follow B- or I-tags), it is unclear how best to enforce the
distribution-matching property in the tag space, and we leave this
investigation to future work.
Alg. 3 presents the ODMR algorithm. Compared to the IDMR algorithm (Alg. 2),
there are two additional challenges. First, the labels for the pool set are
not observed, so when the algorithm tries to acquire a data point with a
certain label, it can only use the predicted label, which, with few data, can
be very wrong in some cases. As soon as a data point is selected for
annotation, its label is revealed immediately and used to calculate
$d_{\mathrm{cur}}$ during the next data point selection. In other words, data
annotations are not batched.
Second, due to the unavailability of pool labels, the only labels we can use
to compute the reference distribution are from $\mathcal{D}^{L}_{0}$ and
$\mathcal{D}^{M}$. If these sets are small, as is typically the case, the reference
label distribution can be far from the true distribution $\mathbb{P}_{Y}$,
especially for rare classes. Thus, we employ the add-1 smoothing (i.e. adding
1 to each label count before calculating the empirical count distribution,
also known as Laplace smoothing) to ensure that rare classes are not outright
missed.
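The smoothing step can be sketched as follows (function and argument names are illustrative, not from released code):

```python
from collections import Counter

# Add-1 (Laplace) smoothing of a label distribution: unseen or rare classes
# receive nonzero mass instead of being outright missed.
def label_distribution(labels, classes, add_one=True):
    counts = Counter(labels)
    pseudo = 1 if add_one else 0
    total = len(labels) + pseudo * len(classes)
    return {c: (counts[c] + pseudo) / total for c in classes}

d = label_distribution(["A", "A", "B"], classes=["A", "B", "C"])
assert d["C"] > 0                       # class C unseen, but not zeroed out
assert abs(sum(d.values()) - 1) < 1e-9  # still a proper distribution
```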
Input: $\mathcal{A}\left(\mathcal{D}^{U},m_{\theta},\mathcal{D}^{L}\right)$
that returns the next data point in $\mathcal{D}^{U}$ to label
$d_{\mathrm{ref}}$ | $=\texttt{label-distribution}\left(\mathcal{D}^{L}_{0,Y}\cup\mathcal{D}^{M}_{Y},\texttt{add-one=True}\right)$;
---|---
$d_{\mathrm{cur}}$ | $=\texttt{label-distribution}\left(\mathcal{D}^{L}_{Y},\texttt{add-one=False}\right)$;
$l^{*}$ | $=\operatorname*{arg\,min}_{l}\left(d_{\mathrm{cur}}-d_{\mathrm{ref}}\right)_{l}$;
$\mathcal{D}^{U}_{l^{*}}$ | $=\\{x\in\mathcal{D}^{U}:m_{\theta}(x)=l^{*}\\}$;
return
$\mathcal{A}\left(\mathcal{D}^{U}_{l^{*}},m_{\theta},\mathcal{D}^{L}\right)$
Algorithm 3 Output Distribution-Matching Regularization (ODMR)
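A compact Python sketch of one ODMR selection step (illustrative names, not the released implementation; the base heuristic is passed in as `acquire`):

```python
from collections import Counter

def label_dist(labels, classes, add_one):
    """Empirical label distribution, optionally with add-1 smoothing."""
    c = Counter(labels)
    k = 1 if add_one else 0
    total = len(labels) + k * len(classes)
    return {y: (c[y] + k) / total for y in classes}

def odmr_step(acquire, pool, model, labeled, ref_labels, classes):
    """One step of Alg. 3: restrict the pool to the most under-represented
    (predicted) label, then defer to the base acquisition heuristic."""
    d_ref = label_dist(ref_labels, classes, add_one=True)
    d_cur = label_dist([y for _, y in labeled], classes, add_one=False)
    l_star = min(classes, key=lambda y: d_cur[y] - d_ref[y])
    subset = [x for x in pool if model(x) == l_star]   # predicted labels only
    return acquire(subset or pool, model, labeled)     # fall back if empty

# Tiny usage: base heuristic = "first element"; model predicts parity.
model = lambda x: "even" if x % 2 == 0 else "odd"
pick = odmr_step(lambda s, m, l: s[0], pool=[1, 2, 3, 4], model=model,
                 labeled=[(1, "odd"), (3, "odd")], ref_labels=["odd", "even"],
                 classes=["odd", "even"])
assert pick == 2      # "even" is under-represented, so an even point is chosen
```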
In addition to Alg. 3, we also experiment with three “cheating” versions of
ODMR. The most extreme version (test+groundtruth) uses test set (rather than
the accessible set $\mathcal{D}^{L}_{0}\cup\mathcal{D}^{M}$) to estimate label
distribution and the groundtruth label (rather than the predicted label) are
used to construct the subset $\mathcal{D}_{l^{*}}^{U}$. The other two versions
(accessible+groundtruth and test+predicted) each drop one of these pieces of
information. The actual “non-cheating” ODMR (accessible+predicted) is the only
version that is feasible in practice.
Fig. 16 presents the four versions of the ODMR-augmented max-entropy heuristic
on object classification. We can see that all four versions are able to
outperform the baseline heuristic, and three out of four (including the “non-
cheating” ODMR) are able to outperform the random baseline. In addition, the
label distributions are much more uniform compared to the unregularized version
in Fig. 7 (bottom middle).
Figure 16: ODMR with max-entropy heuristic on object classification along with
observed label distribution.
Fig. 17 presents the four versions of ODMR-augmented max-entropy (top) and
BALD (bottom) heuristics on the intent classification result. Unlike object
classification, none of the “cheating” or “non-cheating” versions are able to
outperform the respective vanilla heuristic. Interestingly, using predicted
rather than groundtruth labels achieves better performance in both cases.
Figure 17: ODMR with max-entropy (top) and BALD (bottom) heuristics on intent
classification along with observed label distribution.
Overall, the results are inconclusive about whether ODMR can help. Since it
uses the predicted label to guide data acquisition, this kind of bootstrapping
is prone to “confident mistakes” when a data point is predicted wrong but with
high certainty and thus never acquired, while the selection mechanism of IDMR
provides a much more independent, yet still heuristic-guided, way to cover the
data distribution. Visually, ODMR performance curves for both tasks are much
less smooth than the IDMR curves (Fig. 10), indicating the instability of
using output labels to regularize a prediction task. Additional studies are required to
determine when and how ODMR can help, and we leave them to future work.
# Reviving Frequentism††thanks: Accepted for publication in _Synthese_.
Mario Hubert California Institute of Technology
Division of the Humanities and Social Sciences
(December 31, 2020)
###### Abstract
Philosophers now seem to agree that frequentism is an untenable strategy to
explain the meaning of probabilities. Nevertheless, I want to revive
frequentism, and I will do so by grounding probabilities on typicality in the
same way as the thermodynamic arrow of time can be grounded on typicality
within statistical mechanics. This account, which I will call _typicality
frequentism_ , will evade the major criticisms raised against previous forms
of frequentism. In this theory, probabilities arise within a physical theory
from statistical behavior of almost all initial conditions. The main advantage
of typicality frequentism is that it shows which kinds of probabilities (that
also have empirical relevance) can be derived from physics. Although one
cannot recover all probability talk in this account, this is rather a virtue
than a vice, because it shows which types of probabilities can in fact arise
from physics and which types need to be explained in different ways, thereby
opening the path for a pluralistic account of probabilities.
###### Contents
1. 1 Introduction
2. 2 Typicality Frequentism
1. 2.1 Typicality and the Arrow of Time
2. 2.2 Probabilities as Typical Frequencies
1. 2.2.1 Random Variables and their Relation to Physics
2. 2.2.2 Probability from Typicality
3. 3 Defending Typicality Frequentism
1. 3.1 The Range-Account of Probabilities
2. 3.2 The Humean Mentaculus
3. 3.3 Countering Standard Critique of Frequentism
1. 3.3.1 It’s a Poor _Analysis_ of Probabilities
2. 3.3.2 The Problem of Ascertainability
3. 3.3.3 The Reference Class Problem
4. 3.3.4 Frequentism Is Not Completely Objective
4. 3.4 Objections and Replies
4. 4 Conclusion
## 1 Introduction
Frequentism is dead. This seems to be the consensus among contemporary
philosophers. A recent textbook on the philosophy of probabilities phrases it
this way:
> Although the frequency view remains popular outside philosophy – e.g. among
> statisticians – it is not the subject of much, if any, active research.
> (Rowbottom, 2015, p. 112)
Frequentism may be useful for all practical purposes for statisticians,
although it does not convey the true meaning of probabilities, since
philosophers have successfully exposed the underlying insurmountable problems.
Therefore, active research on developing frequentism has been discontinued.
Hájek (1996) prominently debunked finite frequentism; a decade later followed
his criticism of hypothetical frequentism (Hájek, 2009). Recently, La Caze
(2016) agreed that any version of frequentism is doomed to fail, at least in
providing a comprehensive understanding of probabilities.
I think we can breathe life back into frequentism and develop it into a
serious account of probabilities. I intend to defend frequentism against these
criticisms and modify it in such a way that it incorporates elements of finite
frequentism, hypothetical frequentism, and the classical interpretation of
probabilities. I will call this account _typicality frequentism_ , which
defines, in brief, probabilities as typical long-term frequencies based on the
law of large numbers.
Typicality has been developed within Boltzmann’s reduction of thermodynamics
to statistical mechanics, but the scope of this notion is not particularly
tied to statistical mechanics (Wagner, 2020). Wilhelm (2019) recently showed
how typicality explanations work in general by connecting them with Hempel’s
deductive–nomological model. Ideas along these lines to derive probabilities
from typicality as special kinds of frequencies have been presented by Maudlin
(2007a, 2018) and sketched in the literature on Boltzmann’s statistical
mechanics and the de Broglie–Bohm quantum theory (for a brief overview, see
Goldstein, 2012), but there has been no work contrasting this kind of
frequentism with the traditional theories of frequentism in order to establish
typicality frequentism as a serious alternative theory in its own right.
In my opinion, the biggest methodological error made by the forefathers of
frequentism, like Reichenbach (1949/1971), Venn (1888), and von Mises
(1928/1957), was to interpret probabilities as frequencies from empirical
behavior: they started with how we talk about probabilities and tried to
supply an underlying interpretation in terms of frequencies that supports their
empiricism (Gillies, 2000, Ch. 5). Instead, I propose a strategy from a
physical theory to probabilities: start with a deterministic fundamental
physical _theory_ and analyze how this theory introduces probabilities from
statistical behavior. Then we may recover how our general use of probabilities
is backed up by physics. But, as it turns out, some ordinary ways of talking
about probability will not be recovered within this approach. To account for
these, we are free to introduce another, complementary interpretation of
probability—becoming pluralists about probability. The method I will be using
to define probabilities is the same statistical method that has been used to
justify the thermodynamic arrow of time in statistical mechanics or the arrow
of time in electrodynamics (North, 2003).
## 2 Typicality Frequentism
The idea behind typicality frequentism is to apply the tools from statistical
mechanics to explain how probabilities arise from deterministic physical
dynamics. Maudlin (2018) is confident about this strategy, “The challenge of
deriving probabilities—or quasi-probabilities, probabilities with
tolerances—from an underlying deterministic dynamics can be met. Typicality is
the conceptual tool by which the trick is done.” An important predecessor of
typicality frequentism, apart from the different versions of frequentism, is
the theory of probability by Johannes von Kries, laid out in his _Principien
der Wahrscheinlichkeitsrechnung_ (1886, _engl._ Principles of Probability
Theory). As von Kries’s view seems to be best characterized as objective
Bayesianism (see p. 16 of the introduction by Eberhardt and Glymour in
Reichenbach, 2008) or as a predecessor of the logical interpretation
(Fioretti, 2001), subjective and objective aspects are intertwined. For my
purpose, I want to lay out in more detail the objective parts of von Kries’s
account, because they contain some essential features of typicality
frequentism, although von Kries criticized the frequentist theories at his
time (Zabell, 2016b, section 3).
Influenced by the physics of the 19th century, it was important to von Kries
to distinguish between laws of nature and initial conditions (Pulte, 2016).
Unlike Laplace, who reduced probability to incomplete knowledge of the initial
conditions, von Kries built up probabilities from objective _variations_ of
initial conditions, and he called the sets of admissible initial conditions
“Spielräume,” which are best translated as “sets of possibilities”.111Eberhardt
and Glymour call them “sets of ur-events” because they are the irreducible
basis for von Kries’s probabilities (see Reichenbach, 2008, Introduction,
section 4.2). More quantitatively, von Kries defined the probability for an
event $E$ in the following way (see Pulte, 2016, section 5, for this
reconstruction). Let us say that the event $E$ is brought about by the set of
initial conditions $C$ (given certain laws of nature) and the set of initial
conditions that do _not_ bring about $E$ is $C^{*}$ ($C$ would be then the
“Spielraum” or set of possibilities for $E$). The probability $p$ for $E$ is
then defined, in the Laplacian sense, as the quotient of the favorable initial
conditions leading to $E$ over all possible initial conditions by measuring
the Spielraum and its complement by an appropriate measure $m$:
$p:=\frac{m(C)}{m(C)+m(C^{*})}.$
Although the objective aspect of probabilities mentioned here comes from the
initial conditions of the physical process, it is not a frequentist account.
Moreover, this reconstruction of von Kries’s theory may incline us to think
that the Spielräume are unique and always tied to a physical theory, but von
Kries was in this respect more a pragmatist and sometimes even a skeptic (see
the discussion in Pulte, 2016, section 5). Depending on the knowledge of the
agent building a probabilistic model, the space of possibilities may change
and may not be a space of initial conditions of a physical theory but rather a
more generalized sample space; thus, the measure $m$ may not be unique either.
I share von Kries’s intuition to reduce probabilities to certain basic events,
but I pursue a more objective account of probabilities, one always embedded in
physics and using only the tools of physics to define probabilities. I,
therefore, propose that physics offers a unique space from which to derive
probabilities as typical long-term frequencies. This is the fundamental space
of physics, like _phase space_ or _configuration space_ (depending on the
physical theory). Here, I agree with the method of arbitrary functions (or
more adequately named the _range account of probabilities_ by Rosenthal,
2016), which can be regarded as a modern elaboration of von Kries’s theory.
But I deviate from the range account by incorporating typicality as a central
notion to define probabilities; in a similar fashion, it is possible to
explain the thermodynamic arrow of time as arising from generic initial
conditions of the universe.
### 2.1 Typicality and the Arrow of Time
Scrambled eggs never unscramble, a shattered vase never reassembles itself,
and ice cubes never un-melt. Although our basic physical laws are
time-reversal invariant (that is, the time-reverse of a physically possible
process is also physically possible), such time-reversed processes are not
observed. Boltzmann proposed a solution to this problem by distinguishing
microscopic from macroscopic behavior and systems with few degrees of freedom
from systems with many degrees of freedom. It is the microscopic behavior that
is time-reversal invariant, and one only finds directed processes on the
macroscopic level, when systems have many degrees of freedom. If we have a
sequence of photos showing a system of few degrees of freedom, like two
rotating molecules, we would not be able to distinguish forward from backward
behavior, but if we have a sequence of photos of a glass bottle thrown on the
ground, we would distinguish one direction as the true one.
Boltzmann gave a statistical explanation of why such behavior is not observed.
The short answer lies in the many degrees of freedom of a macroscopic process:
one would have to finely orchestrate all the many microscopic states of the
particles constituting a macroscopic object in order to yield a
time-reversed process. If we don’t interfere this meticulously (and in most
cases we cannot do so), then a familiar directed process comes about. In other
words, almost all initial conditions of a macroscopic system yield the
familiar directed processes; only very special initial conditions yield the
reversed process. So given broken glass on the floor, there are many more
states of particles constituting the pieces of glass such that these pieces
remain on the floor than those states that would reassemble the pieces into a
brand-new bottle.
This behavior can be phrased by means of typicality: a physical behavior is
called _typical_ , if almost all initial conditions yield this behavior (see,
for instance, Goldstein, 2001; Lebowitz, 2008; Myrvold, 2016; Volchan, 2007).
And a physical behavior is called _atypical_ , if almost none of the initial
conditions yield this behavior. So, it is typical that broken glass remains
broken, and it is atypical that scrambled eggs unscramble.
Boltzmann’s ideas on the irreversibility of physical processes have recently
experienced a renaissance among philosophers in which the notion of typicality
has become central (see, for instance, Barrett, 2017; Frigg, 2009; Lazarovici
and Reichert, 2015, 2019). The notion of typicality, as we introduced it, is
still too imprecise for quantitative use in physics. As Wilhelm (2019) rightly
emphasizes, there are many ways to formalize “almost all.” The right way to do
so in statistical mechanics is by means of a measure over phase space. Phase
space is constructed from a set of particles in three-dimensional space.222For
simplicity’s sake and to be as close to Boltzmann’s reasoning as possible, I
assume Newtonian mechanics as the microscopic theory. Consider $N$ particles
in three-dimensional space (if we have a realistic macroscopic body, $N$ is of
the order of Avogadro’s constant, that is, approximately $10^{23}$). Since a
particle is completely described by its position $\boldsymbol{x}$ and momentum
$\boldsymbol{p}$, we can summarize the complete physical state of $N$
particles as a vector
$\left(\boldsymbol{x}_{1},\boldsymbol{p}_{1},\boldsymbol{x}_{2},\boldsymbol{p}_{2},\dots,\boldsymbol{x}_{N},\boldsymbol{p}_{N}\right)$,
and this vector is one point in phase space, which has roughly $6\times
10^{23}$ dimensions. So every point in phase space, each microstate,
represents a set of $N$ particles with their precise positions and momenta. In
order to get macrostates, one needs to divide phase space $\mathcal{P}$ into
disjoint subsets, where each set represents a macrostate (see Figure 1). So a
macrostate arises from a map $M$ that assigns to every microstate $X$ a
macrostate $M(X)$ corresponding to one of the subsets
$\mathcal{P}_{M}\subseteq\mathcal{P}$ according to the partition—$M(X)$ is the
macrostate of $X$ if $X\in\mathcal{P}_{M}$.
Figure 1: Clusters in phase space according to thermodynamic macrostates and
the measure of typicality, as depicted by Roger Penrose (1989, p. 402).
Thermal equilibrium is by far the biggest macrostate in phase space.
The tool that ultimately explains irreversible behavior is Boltzmann’s
definition of entropy assigned to every point in phase space:
$S_{B}(X):=k_{B}\ln\lvert\mathcal{P}_{M(X)}\rvert,$ (1)
where $k_{B}$ is Boltzmann’s constant and $\ln$ is the natural logarithm. The
main part of Boltzmann’s entropy is $\lvert\mathcal{P}_{M(X)}\rvert$, which
deserves some elaboration. In order to measure the sizes of the subsets
$\mathcal{P}_{M(X)}$, one needs to introduce a measure $\lambda$, which
assigns a number to every such subset. Conventionally, if the system is
finite, one normalizes the measure to $1$ such that the size of the entire
phase space would be $1$. In the entropy formula,
$\lvert\mathcal{P}_{M(X)}\rvert$ denotes the size of $\mathcal{P}_{M(X)}$
according to the appropriate measure $\lambda$, that is,
$\lvert\mathcal{P}_{M(X)}\rvert=\lambda\left(\mathcal{P}_{M(X)}\right)$. The
only purpose of the measure $\lambda$ is to tell us which sets are big and
which are small, in order to identify typical and atypical behavior; in this
sense, it is _a measure of typicality_. For real physical systems, like gases
in a box or melting ice cubes, the region of thermal equilibrium has by far
the largest volume according to the measure of typicality and hence the
highest entropy. We therefore observe systems that are not in equilibrium (low
entropy $S_{B}$) reach equilibrium (high entropy $S_{B}$), whereas we do not
see a system go from a high entropy state to a low entropy state, because the
low entropy states occupy much smaller regions of phase space.
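As a toy numerical illustration of Eq. (1) (the macrostate volumes below are invented and normalized so that the whole phase space has measure 1; only their ordering matters):

```python
import math

# S_B(X) = k_B ln |P_M(X)|, with macrostate volumes measured by a normalized
# typicality measure. The volumes are made-up illustration values.
k_B = 1.380649e-23                      # Boltzmann's constant, J/K
volumes = {"equilibrium": 0.999, "warm": 9e-4, "ice_cube": 1e-4}

S = {M: k_B * math.log(v) for M, v in volumes.items()}

# Thermal equilibrium, the largest region, has the highest Boltzmann entropy.
assert S["equilibrium"] > S["warm"] > S["ice_cube"]
```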
Moreover, if we zoom into the phase space region of a low entropy macrostate,
a melting ice cube, for example, almost all microstates will move to a
macrostate with higher entropy and ultimately to equilibrium. It is physically
possible that a low entropy macrostate goes into another low entropy
macrostate (by itself), but there are very few microstates within this macro
region that do that. This is Boltzmann’s explanation why we observe only one
direction of a physical process and not the time-reversed process, although
this behavior is physically possible according to the time-reversal
fundamental laws. The symmetry is broken by a statistical argument, that is,
by distinguishing those initial conditions that yield typical behavior from
those that yield atypical behavior.
In classical mechanics, one normally uses the _Liouville measure_ , a natural
generalization of the standard _Lebesgue measure_ on three-dimensional space
to phase space, as the measure of typicality. But in order to distinguish
small sets from big sets, other measures would do the job as well. Indeed,
every measure that is _absolutely continuous_ with respect to the Liouville
measure will agree on the same physical behavior to be typical or
atypical.333A measure $\mu$ is absolutely continuous with respect to a measure
$\lambda$ (symbolically $\mu\ll\lambda$), if all the null sets of $\lambda$
are null sets of $\mu$, that is $\forall
X\left(\lambda(X)=0\Rightarrow\mu(X)=0\right)$. Moreover, there is a certain
vagueness intended in the notion of typicality that is to be reflected in the
mathematical formalization. The sets $A$ yielding typical behavior are those
that have measure $1$ or close to one, that is, $\lambda(A)=1-\epsilon$, where
$\epsilon$ is a very small number also depending on the application.
Similarly, for atypical behavior the relevant sets may have a measure
$\lambda(B)=0+\delta$ for some small $\delta$, which depends on the specific
application.444There has been a long debate to make Boltzmann’s argument more
mathematically and conceptually precise. For our purpose, we do not need to
dive into these details (see, e.g., Volchan, 2007; Frigg, 2009; Werndl, 2013;
Lazarovici and Reichert, 2015; Myrvold, 2019). This will become important when
we apply typicality and its mathematical formalizations to develop a new
theory of frequentism.
### 2.2 Probabilities as Typical Frequencies
There are two steps to present the theory of typicality frequentism. First, I
need to elucidate the role of random variables, in a way that differs from
standard accounts of probability theory (section 2.2.1). In typicality
frequentism, random variables are primarily used to bridge the gap between a
_physical_ theory and the mathematics of probability theory. Second, this
account of random variables is needed to interpret the law of large numbers in
such a way to define probabilities as typical long-term frequencies (section
2.2.2).
#### 2.2.1 Random Variables and their Relation to Physics
Consider a box with $1000$ balls; the balls are either blue, green, or red.
Let’s say 500 balls are blue, 300 are green, and 200 are red; in other words,
$50\%$ are blue, $30\%$ are green, and $20\%$ are red. With this information
we can build a simple stochastic model. The set of balls forms the _sample
space_ $\Omega:=\\{1,\dots,1000\\}$. From this sample space, we can define a
coarse-graining function $X:\Omega\rightarrow\\{B,G,R\\}$, which assigns to
every ball a color B=blue, G=green, or R=red. Functions of this kind are
usually (and unfortunately misleadingly) called _random variables_. There is
indeed nothing random about them; their only use is to abstract from the
sample space, when one is interested in specific features of the members of
the sample space. Next, one determines the _distribution_ of the random
variable. This is a function $\rho_{X}:F(X)=\\{B,G,R\\}\rightarrow[0,1]$, such
that $\rho_{X}(B)=0.5$, $\rho_{X}(G)=0.3$, and $\rho_{X}(R)=0.2$. This
illustrates the standard way of building a _probability space_ (see Fig.
2).555There are some subtleties when one generalizes this scheme to infinite
sample spaces, like forming a $\sigma$-algebra. These are treated in standard
textbooks on probability theory and are not the focus of this paper.
The distribution $\rho$ is normally called a _probability distribution_ , for
it assigns “probabilities” to certain sets of the sample space. But this would
be putting the cart before the horse; at this stage, we do not have a theory
of probabilities, just a certain recipe for building a mathematical model.
This particular model of colored balls is conceptually very simple, because
the numbers $50\%$, $30\%$, and $20\%$ are mere proportions of balls having
the same color. Nevertheless, some work remains to be done to interpret these
numbers correctly as probabilities, as we will do in the next subsection, when
I fully lay out typicality frequentism.
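The ball example can be made fully explicit in a few lines of code: the "random variable" is just a coarse-graining function, and the distribution is obtained by counting preimages.

```python
from fractions import Fraction

# Sample space: the 1000 balls, numbered 1..1000.
Omega = range(1, 1001)

def X(ball):
    """Coarse-graining 'random variable': nothing random about it."""
    if ball <= 500:
        return "B"      # 500 blue balls
    elif ball <= 800:
        return "G"      # 300 green balls
    return "R"          # 200 red balls

def rho_X(color):
    """Distribution of X: relative size of the preimage X^{-1}(color)."""
    return Fraction(sum(1 for w in Omega if X(w) == color), len(Omega))

assert rho_X("B") == Fraction(1, 2)
assert rho_X("G") == Fraction(3, 10)
assert rho_X("R") == Fraction(1, 5)
assert rho_X("B") + rho_X("G") + rho_X("R") == 1
```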
Figure 2: The ingredients of a stochastic model and how they relate to each
other. Random variables $X$ abstract from the sample space $\Omega$ by
assigning every member of $\Omega$ a real number. Abstracting means that $X$
maps many elements in its domain to the same number. The image of $X$ gets
assigned a number in the interval $[0,1]$, which measures the size of the sets
that are mapped by $X$ to the same real number. It’s important for typicality
frequentism that all random variables are ultimately defined on phase space,
which is the fundamental sample space.
The sample space can be in principle any kind of (mathematical) space, and in
general no particular attention is paid to the sample space in textbooks,
because in order to make correct predictions the images of the random
variables and the probability distribution are sufficient. I want to go beyond
a pragmatic attitude toward probability theory, although it is justified by
its success in application, and derive probabilities instead from physical
behavior. This is where von Kries’s idea of sets of possibilities or ur-events
comes in. He wanted to prove that probabilities can be derived from certain
compositions of ur-events—random variables, in his account, are defined on
these spaces. Although he intended an objective theory of probability, von
Kries had to rely on a subjective element in order to justify that ur-events
are equiprobable. This element is _the Principle of Indifference_ , which he
advocated in the form of a principle of _in_sufficient reason: Two events are
equipossible if at the current state of knowledge there is no reason to
consider one of the events more likely than the other (Reichenbach, 2008, p.
15).
It is, however, possible to retain ur-events without this subjective
ingredient. There is a distinguished sample space among all possible sample
spaces, namely, phase space, on which a typicality measure can be defined,
thereby erasing the principle of insufficient reason.666If one were to embed
this discussion in quantum theory, one would need to replace phase space with
configuration space. I now make the following postulate: _all random variables
are ultimately defined on phase space, because all statistical patterns are
determined by what happens in the fundamental physical space, which is
governed by the laws of physics_. Hence, probability theory ultimately works
because it is embedded into physics, and it is so successful because it
abstracts from many physical details, so that when we apply probability theory
we, in most cases, are not aware of the relations to fundamental physics.
In the above example, the sample space $\Omega$, which distinguishes the
different balls, is a coarse-grained space of phase space $\mathcal{P}_{B}$,
which describes _the balls’_ actual positions and velocities. These two spaces
are also connected by a random variable
$X_{B}:\mathcal{P}_{B}\rightarrow\Omega$. We can even go one floor deeper to
the fundamental phase space. Every ball is a macroscopic object consisting of
zillions of tiny particles. The positions and momenta of these particles are
summarized in the fundamental phase space $\mathcal{P}_{f}$. Again a random
variable $X_{f}$ connects this fundamental space to $\mathcal{P}_{B}$, that
is, $X_{f}:\mathcal{P}_{f}\rightarrow\mathcal{P}_{B}$.
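This chain of coarse-graining maps can be illustrated, in a deliberately toy form, as ordinary function composition; the concrete numbers and the two-element outcome space below are purely hypothetical:

```python
# Toy sketch of random variables as coarse-graining maps between sample
# spaces; the particle data and the two-element outcome space Omega are
# illustrative assumptions, not part of the text's example.

def X_f(particle_positions):
    # fundamental phase space -> ball phase space:
    # collapse per-particle positions into the ball's centre of mass
    return sum(particle_positions) / len(particle_positions)

def X_B(ball_position):
    # ball phase space -> coarse outcome space Omega = {"left", "right"}
    return "left" if ball_position < 0 else "right"

# Composing the maps takes a microstate all the way down to an outcome,
# mirroring X_B . X_f : P_f -> Omega.
outcome = X_B(X_f([-1.0, -2.0, 0.5]))
print(outcome)  # "left"
```

The point of the sketch is only that each "random variable" is a deterministic map, and the probabilistic structure enters one level down, on the fundamental space.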
Of course, this interpretation of probability theory will not be shared by
subjective Bayesians and other schools. My goal is not to provide a framework
that suits all interpretations of probability but rather to interpret
probability theory in such a way that is best suited for a modern version of
frequentism.
#### 2.2.2 Probability from Typicality
Let us now apply all this to demonstrate how probability arises from
typicality. Recall Boltzmann’s explanation that we observe certain physical
processes only in one direction: it is typical that ice cubes melt and not
unmelt because the universe started in a low entropy macrostate, where most of
the initial microstates yield a universe in which ice cubes melt. A very
similar kind of explanation can be given for the emergence of probabilities
from a deterministic dynamics. For simplicity’s sake, I’ll restrict myself to
the coin toss, which is a _deterministic_ physical process following the laws of
Newtonian mechanics, but it is, in principle, straightforward to generalize
the main idea to other physical processes and to other deterministic physical
theories.
First, there is an observational fact about coin tosses, as there is an
observational fact about the thermodynamic behavior of ice cubes: When we toss
a coin thousands of times, we see that heads and tails appear approximately
half the time, and the more we toss the closer the fraction of heads and tails
approaches $\nicefrac{{1}}{{2}}$. For instance, Kerrich (1946) reported having
tossed a coin 10,000 times, of which heads appeared 5,067 times, and it’s also
said that Karl Pearson tossed a coin 24,000 times, of which heads appeared
12,012 times (see Küchenhoff, 2008, although no source for Pearson’s
experiment is given).
Second, recall from the thermodynamic arrow of time that almost all points in
phase space are in thermal equilibrium, where every point represents the
physical state of a gas in a box or the entire universe. When the system
starts from a low-entropy macrostate, statistical mechanics says that almost
all phase space points within this macrostate will follow a trajectory
according to the Newtonian laws of motion that leads to thermal equilibrium
(for not too long time scales). If, say, $\Gamma$ is this low-entropy
macrostate and $P$ is the property “following a trajectory that leads to
thermal equilibrium,” then the property $P$ is said to be typical in $\Gamma$
(see Wilhelm, 2019, p. 4, for this general framework). (We can imagine the
property $P$ to give a certain color to phase-space points.) Next, let’s say
we are interested in the behavior of a subsystem with respect to the initial
conditions of a larger system, for example, gases in a box with respect to the
initial conditions of the entire universe. Then given the special low-entropy
initial macrostate of the universe, it is typical (within this macrostate)
that subsystems in this universe will reflect thermodynamically time-oriented
behavior. In other words, $\Gamma$ would be the low-entropy initial macrostate
of the entire universe, and the property $P$ would be “subsystems reflect
thermodynamically time-oriented behavior”. (Then again, almost all points in
this macrostate would have the same color).
This relation between the behavior of subsystems and the initial conditions of
the universe is central to typicality frequentism. When we apply this picture
to the coin toss, we need to start with the phase space regions of the entire
universe in which coins exist. The relevant property $P$ is “long-term
frequencies of fair coin-tosses are approximately $\nicefrac{{1}}{{2}}$”. It
turns out that almost all universes share this property.
All this can be mathematically captured by the (weak) law of large
numbers:[7] Whenever I refer to the law of large numbers, I always mean the
weak law of large numbers.
$\lambda\left(\left\lvert\frac{1}{N}\sum_{k=1}^{N}X_{k}(x)-\frac{1}{2}\right\rvert<\epsilon\right)\approx 1,$
where $\epsilon$ is an arbitrary small real number, $N$ is taken to be very
large, the random variable $X_{k}$ represents the $k$th toss, $X_{k}(x)$ is
the result of the $k$th toss determined _by the initial condition of the
universe $x$_, and $\lambda$ is the measure of typicality. For typical coin
tosses, that is, for most universes in which coins are tossed, which
translates mathematically into $\lambda(\cdot)\approx 1$, the arithmetical
mean of an actual run of tosses does not deviate from $\nicefrac{{1}}{{2}}$
more than $\epsilon$. So in any sufficiently long finite series of flips in
these generic universes the frequency of heads and tails will be in the range
of $50\%\pm\epsilon$ for some specific $\epsilon$.
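To make this concrete, here is a minimal numerical sketch (with illustrative parameter values) in which each random seed plays the role of one initial condition of the universe $x$, and we check that for almost all of them the relative frequency of heads lies within $\epsilon$ of $\nicefrac{{1}}{{2}}$:

```python
# A minimal numerical sketch of the weak law of large numbers for a
# fair coin (all parameter values are illustrative).  Each random seed
# plays the role of one initial condition of the universe x, and we
# check that for almost all seeds the relative frequency of heads lies
# within eps of 1/2.
import random

def frequency_of_heads(seed, n_tosses):
    rng = random.Random(seed)
    return sum(rng.randint(0, 1) for _ in range(n_tosses)) / n_tosses

def fraction_typical(n_universes=300, n_tosses=4000, eps=0.03):
    # the fraction of "universes" whose frequency deviates from 1/2 by
    # less than eps plays the role of lambda(...) in the text
    typical = sum(
        abs(frequency_of_heads(seed, n_tosses) - 0.5) < eps
        for seed in range(n_universes)
    )
    return typical / n_universes

print(fraction_typical())  # close to 1
```

Kerrich's reported 5,067 heads in 10,000 tosses, a frequency of 0.5067, sits comfortably within such an $\epsilon$-band.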
Finite and infinite sequences are similar in one respect and different in
another. In both cases the fraction of heads and tails lies within
$\pm\epsilon$ of 50%, but in the finite case $\epsilon$ cannot be made
arbitrarily small if the actual frequency is still (typically) to lie within
the error bounds, whereas for infinite (or sufficiently long) sequences the
law of large numbers guarantees exactly this: however small $\epsilon$ is
chosen, it is typical that an infinite series of coin flips will have a
limiting frequency within $50\%\pm\epsilon$. Note also that the finite case has to be
sufficiently large in order to show some robust behavior.[8] More precisely,
there are three parameters in the law of large numbers that are fixed
successively. First, one chooses an $\epsilon$, then a $\delta$, and then
sufficiently large $N>N_{0}$ such that:
$\lambda\left(\left\lvert\frac{1}{N}\sum_{k=1}^{N}X_{k}(x)-\frac{1}{2}\right\rvert<\epsilon\right)>1-\delta.$
Such an $N_{0}$ exists, because according to the Chebychev inequality
$\lambda\left(\left\lvert\frac{1}{N}\sum_{k=1}^{N}X_{k}(x)-\frac{1}{2}\right\rvert<\epsilon\right)>1-\frac{1}{\epsilon^{2}N}.$
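The successive fixing of $\epsilon$, $\delta$, and $N_{0}$ in footnote 8 can be sketched numerically: the Chebyshev bound $1/(\epsilon^{2}N)$ from the footnote directly yields an explicit $N_{0}$ (the numerical values below are illustrative):

```python
# Sketch of fixing the three parameters in footnote 8: given eps and
# delta, the Chebyshev bound 1/(eps**2 * N) yields an explicit N_0
# beyond which lambda(|mean - 1/2| < eps) > 1 - delta.
# The example values are illustrative.
import math

def chebyshev_N0(eps, delta):
    # smallest N with 1/(eps**2 * N) <= delta
    return math.ceil(1.0 / (eps ** 2 * delta))

print(chebyshev_N0(0.01, 0.05))  # 200000
```

So, for example, tolerating a deviation of $\epsilon=0.01$ with atypicality budget $\delta=0.05$ requires on the order of $2\times 10^{5}$ tosses under this (rather loose) bound.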
Moreover, the law of large numbers mathematically says that a series of coin
flips that shows 100% heads and 0% tails after, say, 1,000,000 flips, although
physically possible, would be atypical:
$\lambda\left(\left\lvert\frac{1}{N}\sum_{k=1}^{N}X_{k}(x)-\frac{1}{2}\right\rvert>\epsilon\right)\approx 0.$
For any $\epsilon$ you choose, there is only a very small set of initial
conditions that would give a long series deviating from $50\%\pm\epsilon$.
One may argue that the law of large numbers is a limit theorem and so doesn’t
say anything about finite cases, nor does it say anything about what is
typical or atypical (I thank an anonymous referee for raising these concerns).
One needs to distinguish between _what_ the limit of a sequence is and _how_
the sequence approaches the limit. Finding out about the right convergence
behavior is, for example, a major task in functional analysis and mathematical
quantum mechanics (Lieb and Seiringer, 2010). Here is a simple example to
illustrate this point. The three sequences $\frac{1}{n}$, $\frac{1}{\ln(n)}$,
and $\frac{1}{\ln(\ln(n))}$ go to $0$ for $n\rightarrow\infty$. These
sequences, however, approach the limit differently: $\frac{1}{\ln(n)}$ goes to
0 more slowly than $\frac{1}{n}$, and $\frac{1}{\ln(\ln(n))}$ even more slowly
than $\frac{1}{\ln(n)}$.[9] E.g., $\frac{1}{1,000,000}=0.000001$,
$\frac{1}{\ln(1,000,000)}\approx 0.072$, and
$\frac{1}{\ln(\ln(1,000,000))}\approx 0.38$. If one traces a certain
(standard) proof of the weak law of large numbers, one finds the formula in
footnote 8, which can itself be proven and which tells us something about the
limit behavior of _finite_ sequences. Then, given how typicality is defined
via a measure, one can indeed rephrase the law of large numbers, as well as
the limit behavior of finite sequences, in terms of typicality. I admit that
this is not how the law of large numbers is standardly understood, but it is a
possible, and I think consistent, way of re-interpreting what the law of large
numbers says (see also Dürr et al., 2017).
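The different convergence speeds mentioned in footnote 9 are easy to check numerically; a small illustrative sketch:

```python
# Numerical check of the convergence speeds in footnote 9: 1/n,
# 1/ln(n), and 1/ln(ln(n)) all go to 0, but at very different rates
# (values rounded roughly as in the footnote).
import math

n = 1_000_000
print(1 / n)                      # 0.000001
print(1 / math.log(n))            # ~0.072
print(1 / math.log(math.log(n)))  # ~0.38
```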
After these elaborations, we can finally define what probabilities are in
typicality frequentism:
_Definition of probability:_ Some state of affairs has probability $p$, if,
according to a fundamental physical theory, the physical process $X_{k}$
yielding this state of affairs is in principle infinitely repeatable and the
instances of $X_{k}$ are uncorrelated such that the frequency typically (that
is, in almost all universes) approaches a unique limit $p$, that is, for all
$\epsilon>0$
$\lim\limits_{N\rightarrow\infty}\lambda\left(\left\lvert\frac{1}{N}\sum_{k=1}^{N}X_{k}(x)-p\right\rvert<\epsilon\right)=1.$
This definition has several important parts that I want to comment on:[10] I
thank an anonymous referee for raising many of the following points.
1.
_Definition of probability_ : One may argue that what follows is not a
definition but rather a sufficient condition for probabilities, because what
the definition says requires certain strong idealizations that may not be met.
My reply is twofold. For one, this is a definition of what probabilities are
in typicality frequentism. For another, if one takes a broader view of what
probabilities are in general, then this “definition” is indeed a sufficient
(but not necessary) condition of probabilities, since I am aware that other
ways of talking about probabilities differ from a frequentist account. I,
therefore, advocate a pluralist theory of probabilities that complements
typicality frequentism in areas where typicality frequentism does not give an
account of probabilities.
2.
_State of affairs:_ I use “state of affairs” instead of “event” for what gets
assigned a probability, in order not to confuse events with the standard
technical term in probability theory as a subset of the sample space or, more
precisely, a member of the $\sigma$-algebra. The tossing of a coin or a ball
in roulette would be examples of “states of affairs”.
3.
_The fundamental physical theory:_ In the ideal case, the fundamental physical
theory I refer to is the Theory of Everything, the unique physical theory that
correctly represents the world. Since we haven’t found this theory yet, other
approximately true deterministic physical theories can do the job, like
Newtonian physics or the de Broglie–Bohm quantum theory. The theory needs to
be approximately true in order to give rise to (at least) the right
statistical pattern. Newtonian physics has been proven successful in the
domain of statistical mechanics, and it is good enough for most macroscopic
applications. It is also very unlikely that Newtonian physics and statistical
mechanics will be completely overthrown by future physical theories. It is
plausible to assume that both theories will be recovered in a kind of
classical limit. A candidate for a deterministic theory on the quantum level
is the de Broglie–Bohm pilot-wave theory, which also allows for extension to
quantum field theory. Another deterministic quantum theory is the many-worlds
theory according to Everett. My introduction of probabilities is closer to the
de Broglie–Bohm theory, but also Hugh Everett III wanted to base probabilities
on typicality (Barrett, 2017). I also think that one can generalize typicality
frequentism to indeterministic theories, which would be a future project and
would also require distinguishing this idea from propensities.
4.
_Uncorrelated events:_ The events $X_{k}$ (or rather the random variables)
that build up the physical process need to be uncorrelated in order to
converge. Standardly, the law of large numbers requires the events of the
stochastic process to be stochastically independent, which is a stronger
condition than being uncorrelated. If the $X_{k}$’s were correlated (for
example, the individual tosses of a coin), then one would be able to undermine
the law of large numbers, and a unique limit may not exist. Or a limit may
exist but it would not be $p$, where $p$ is technically the expectation value
of $X_{k}$.
5.
_The frequency:_ The frequency that is supposed to approach the limit $p$ is
the relative (finite) frequency of the physical process $X_{k}$:
$\frac{1}{N}\sum_{k=1}^{N}X_{k}(x)$. For a coin toss, for example,
$X_{k}\in\{0,1\}$, with $X_{k}=0$ when the coin lands heads and $X_{k}=1$
when it lands tails. So $\frac{1}{N}\sum_{k=1}^{N}X_{k}(x)$ counts the number
of tails and divides it by how often one tossed the coin.
$1-\left(\frac{1}{N}\sum_{k=1}^{N}X_{k}(x)\right)$ would then be the relative
frequency for heads.
6.
_Almost all universes:_ One may think that one needs to quantify over all
universes in order to determine the probabilities in our universe, and,
therefore, the probabilities in our universe are also determined by what
happens in other universes. The first part is correct—that one needs to
quantify over all possible universes—but this doesn’t mean that the
probabilities here are determined by the goings-on in the other universes.
Rather, one needs to compare what happens here to what happens there, and the
appropriate tool for this comparison is the measure of typicality. If we are
in such a world in which the assumptions of the law of large numbers hold,
then the probabilities are particularly robust and regular, because most of
the other universes show the same statistical pattern.[11] There is one
caveat: even if all the assumptions of the law of large numbers were fulfilled,
it is still possible for a sequence to have a different limit or no limit at
all; the initial conditions leading to these sequences have measure zero
though. The “atypical” worlds widely diverge from the typical ones and also
widely diverge among themselves. There is no unifying or regular behavior to
be expected in these “atypical” universes.
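The relative frequency from comment 5 can be sketched in a few lines; the toss sequence is an illustrative example, using the encoding heads $=0$, tails $=1$:

```python
# Sketch of the frequency in comment 5: with tails encoded as 1 and
# heads as 0, the arithmetic mean of the X_k is the relative frequency
# of tails.  The toss sequence is an illustrative example.
def relative_frequency_tails(tosses):
    return sum(tosses) / len(tosses)

tosses = [1, 0, 0, 1, 1, 0, 1, 0]  # 0 = heads, 1 = tails
freq_tails = relative_frequency_tails(tosses)  # 0.5
freq_heads = 1 - freq_tails                    # 0.5
```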
We need to distinguish between the definition of probability and the empirical
significance of this number. While the number $p$ is defined in terms of
infinite sequences, which cannot be instantiated in the real world, the
empirical content of this number arises from its relation to finite sequences:
_Empirical significance of $p$:_ The relative frequencies of finite sequences
obey the restrictions given by the law of large numbers; that is, the observed
frequencies of sufficiently long finite series typically lie in an interval
$p\pm\epsilon$, where $\epsilon$ is a positive number approximately equal to
$0$.
Let me add the following comments:[12] I also thank here an anonymous referee
for raising these issues.
1.
_Status of the empirical significance of $p$:_ I take it to be a true
statement about the observed relative frequencies, and that this statement
follows from the definition of probabilities. It is, therefore, rather a
corollary than a criterion, since a criterion would be something closer to an
axiom.
2.
_Sufficiently long series:_ The above definition of probabilities presupposes
that the physical process is “in principle infinitely repeatable”, but, of
course, it doesn’t and cannot say how often the real process is actually
repeated. The probability $p$ is empirically significant because it gives
bounds for the real observed (and expected) relative frequencies. It may be
unsatisfactory that the real process needs to be “sufficiently long” without a
precise numerical length. The appropriate length of the series depends on many
factors of the real physical set-up and the overall physical process.
3.
_Interval $p\pm\epsilon$:_ For real world cases, one has tolerances for the
relative frequencies. The question is now how robust these tolerances are. A
small “uncertainty” of $p$ would also be consistent with the observed
frequency being within the interval. First, I assume the real $p$, the one
given by the true Theory of Everything, to be unique. Second, I assume that
the approximately true candidate theories that are not the Theory of
Everything, like Newtonian physics or the de Broglie–Bohm theory, etc., would
give $p$’s that are very similar. So in this case, there may be a tiny
interval or at least a point-like spread of $p$’s. And we would have to say
that there are several “candidate probabilities”. Perhaps one of them hits the
true probability; I assume, however, that these “candidate probabilities” are
very close to the true one and for all practical purposes indistinguishable.
In typicality frequentism, there are actually three ideas mingled together
from other interpretations of probability. The first ingredient is similar to
the classical interpretation of probability, which adds up different equally
probable events according to the principle of indifference. Everything in
typicality frequentism hinges on a proper way of counting that leads us to
distinguish typical from atypical behavior based on big and small sets, whose
elements are intrinsically “equally likely” to occur. Second, the definition
of probabilities in terms of a limit that cannot be carried out in the real
world is reminiscent of hypothetical frequentism. Third, in order to make
these typical frequencies empirically meaningful one needs to introduce
tolerances for finite sequences in order to have realistic frequency bands for
actual processes, but in contrast to finite frequentism probabilities in
typicality frequentism are _not defined_ by finite sequences.
There are two ways to undermine the long-term frequency of
$\nicefrac{{1}}{{2}}$. Either one is in one of those special universes that
yield a different statistical pattern for fair coin tosses, or one is able
to replicate, say, with a sophisticated tossing machine, the exact conditions
in every toss. The special universes that yield atypical coin behavior may
reflect all kinds of long-term coin patterns: there are initial conditions of
the universe that lead to 95% heads and also to no regular behavior at all.
Because of these diverse behaviors, there is no way to put these special
universes under one umbrella. It is, however, appropriate to talk of a
probability of 100% showing heads in the tossing machine example. In order to
get probabilities diverging from 100% or 0%, physics requires significant
variations in the ways a coin is tossed (as is realistic), and these
variations in fact yield robust statistical patterns.
As Gillies (2000, Ch. 5) describes, the problem of connecting limiting
frequencies with actual finite frequencies had been raised by de Finetti
against von Mises (1928/1957):
> It is often thought that these objections may be escaped by observing that
> the impossibility of making the relations between probabilities and
> frequencies precise is analogous to the practical impossibility that is
> encountered in all the experimental sciences of relating exactly the
> abstract notions of the theory and the empirical realities. The analogy is,
> in my view, illusory: in the other sciences one has a theory which asserts
> and predicts with certainty and exactitude what would happen if the theory
> were completely exact; in the calculus of probability it is the theory
> itself which obliges us to admit the possibility of all frequencies. In the
> other sciences the uncertainty flows indeed from the imperfect connection
> between the theory and the facts; in our case, on the contrary, it does not
> have its origin in this link, but in the body of the theory itself […]. (de
> Finetti, 1937, p. 77)
The criticism against frequentism is (i) that limiting frequencies as
predicted in infinite series are not observed, (ii) that there is no precise
way to give an interval for the empirical frequencies, and (iii) that if an
interval is given it is still possible that the actual observed frequency may
lie outside this interval. Von Mises argued (see Gillies, 2000, p. 103), as
described in the first sentence of de Finetti’s quote, that the problem of
connecting limiting frequencies with actual frequencies is no different from
connecting the idealized predictions of a scientific theory with actual
observations, something ubiquitous and practically unproblematic in all of the
natural sciences. To this, de Finetti replied that the analogy is invalid
because a scientific theory would in principle be able to make exact
predictions if it were to capture sufficiently all the relevant details of the
world, whereas probability theory, even in the best case, would allow
significant deviations from its predictions, both from the limiting
frequencies, as well as from finite frequencies.
I think de Finetti’s criticism of von Mises is correct, and von Mises indeed
overlooked the disanalogy between probability theory and the standard
application of scientific theories. The imprecision of probability theory has
a different origin than the imprecision of applying scientific theories or
scientific models to real world cases. The main problem for von Mises was to
justify where the imprecision of his frequentism comes from. Since his theory
was solely based on empirical facts, the truthmakers for the predictions of
probability theory need to be empirical facts too. But how can these
exceptions be empirically made true if they are rarely or never observed in
the first place?
Hájek (2009, pp. 217–8) makes the same argument as de Finetti when he says,
“There is no Fact of what the Hypothetical Sequences Look Like”. He imagines a
coin that is just tossed once and happens to have landed Heads. Hájek then
asks about the coin:
> How would it have landed if tossed infinitely many times? Never mind
> that—let’s answer a seemingly easier question: how would it have landed on
> the second toss? Suppose you say "Heads". Why Heads! The coin equally could
> have landed Tails, so I say that it would have. We can each pound the table
> if we like, but we can’t both be right. More than that: neither of us can be
> right. For to give a definite answer as to how a chancy device would behave
> is to misunderstand chance. (Hájek, 2009, p. 217)
Again, this argument is valid for the traditional version of frequentism, but
in typicality frequentism a physical theory tells us “how the coin would have
landed on the second toss”. The truthmakers for the predicted frequencies come
from a physical theory, in particular, from the distributions of initial
conditions of the micro-constituents of the involved physical bodies and
ultimately of the entire universe itself—of course, this move would be
contested by an empiricist like von Mises. Typicality frequentism, thus,
explains why probability theory is intrinsically imprecise and why this
imprecision cannot be improved but can at least, to a certain degree, be
quantified and grounded.
## 3 Defending Typicality Frequentism
Typicality Frequentism combines ideas from finite frequentism, hypothetical
frequentism, and the classical interpretation of probabilities. Finite
frequencies (with error bounds) describe actual outcomes of a series of a
chancy process; hypothetical frequencies in terms of infinite series are used
to define what probabilities are; and the principle of indifference, which is
the central piece of the classical interpretation, is replaced by a measure of
typicality to count events on the sample space. It may seem, therefore, that
the critiques raised against each of these interpretations of probability are
equally effective against typicality frequentism. The principle of
indifference has been rightly dismissed when an agent is truly
ignorant—although it may be successfully used for symmetry arguments (Zabell,
2016a). Hájek (1996, 2009) presents a total of 30 arguments against different
versions of frequentism, 15 against finite and 15 against hypothetical
frequentism, demanding that in order to rescue any kind of frequentist account
all these arguments need to be countered, where one counterargument would
still leave the other 29 unanswered. I won’t endeavor to reply to every single
argument, because not all counterarguments are in fact counterarguments but
rather characterize a frequentist’s account. Instead, I will first contrast
typicality frequentism with its two most recent competitors, the range account and
the Humean Mentaculus. Then I will counter some recent arguments raised
against frequentism by La Caze (2016), who builds on Hájek’s papers.
### 3.1 The Range-Account of Probabilities
The work by von Kries (1886) was a rich source for further research. Henri
Poincaré and Eduard Hopf filled in a major gap by developing _the method of
arbitrary functions_, which is also known as _the range-account of
probabilities_, advocated and further refined in different versions by Abrams
(2012), Rosenthal (2010, 2016), and Strevens (2003, 2008, 2013). Here, the
probabilities, like in typicality frequentism, are related to some sort of
initial conditions, but, unlike typicality frequentism, regions of phase space
together with a probability density or a volume measure directly determine
probabilities. For example, for the coin toss a probability density over the
initial conditions for _every single toss_ is used (see Figure 3). The
physical state of a coin is completely described by its vertical velocity $v$
for the trajectory of the coin and the angular momentum $\omega$ for its
rotation, given a fixed height and further simplifying restrictions, like the
exclusion of bouncing (see Keller, 1986; Strzałko et al., 2008; Stefan and
Cheche, 2017, for detailed physical models). The phase space for the
coin toss has a regular structure, in which the size of the areas leading the
coin to land on the same side as it started is approximately equal to the
size of the areas for which the coin changes faces (if $v$ or $\omega$ are not
too small). In the method of arbitrary functions, probabilities result when a
probability density is integrated over specific regions in this phase space.
The two main problems for the method of arbitrary functions are, first, to
justify the particular shape of the probability density and, second, to base
this justification on non-probabilistic facts in order not to explain
probabilities by probabilities. This is the main point in which Abrams,
Rosenthal, and Strevens disagree. They agree, however, that some measure must
be used to determine the sizes of phase space regions in terms of which
probabilities are defined.
The range account of probability is easily confused with typicality
frequentism, but the two differ in important respects. First, the range
account does not define probabilities in terms
of frequencies. Nonetheless, Strevens’s account, for example, relies on a
close link to frequencies; he aims at explaining probabilities in long series
of trials and facts about frequencies determine facts about the (initial)
probability density (see also Strevens, 2011, section 4.2 and 4.3). Second,
typicality frequentism considers, like statistical mechanics, the initial
conditions _for the entire universe_, on which a measure of typicality is
imposed. All these initial conditions are grouped into two main groups:
almost all initial conditions lead to typical behavior, whereas almost no
initial conditions lead to atypical behavior (there may be remaining sets that
do not fit in either category, but they are not important for our current
purposes). The typicality measure is only used to group the initial conditions
of the universe, from which probabilities are defined in terms of frequencies.
Figure 3: This shows the partition of phase space for the initial conditions
of a single coin, which determine how the coin will land after it is tossed.
The x-axis represents the initial conditions for the angular momentum around a
certain axis; the y-axis represents the vertical velocity of the entire coin.
Pink areas depict the initial conditions for which the coin lands on the same
face as it started, while the white areas stand for the initial conditions for
which the coin changes faces. In the method of arbitrary functions, one puts a
probability density on this phase space, which gives $\nicefrac{{1}}{{2}}$
once integrated over all the pink or all the white areas. (Picture from
Strzałko et al. (2008, p. 62) as an elaboration of Keller (1986, p. 193).)
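As an illustration of how the method of arbitrary functions yields $\nicefrac{{1}}{{2}}$, here is a toy Monte Carlo sketch loosely based on Keller's simplified model; the density over $(v,\omega)$ and all parameter ranges are my own illustrative assumptions, and any sufficiently smooth density spread over many of the stripes in Figure 3 would give roughly the same answer:

```python
# Toy Monte Carlo sketch of the method of arbitrary functions for the
# coin toss, loosely following Keller's simplified model: a coin
# launched with vertical velocity v and angular velocity omega is in
# flight for t = 2*v/g and rotates through theta = 2*v*omega/g; it
# shows its starting face if theta rounds to an even number of
# half-turns.  The uniform density over (v, omega) and the parameter
# ranges are illustrative assumptions.
import math
import random

G = 9.81  # m/s^2

def same_face(v, omega):
    theta = 2 * v * omega / G                       # rotation in flight
    half_turns = int((theta + math.pi / 2) // math.pi)
    return half_turns % 2 == 0                      # even -> unchanged face

def estimate_probability(n_samples=200_000, seed=1):
    rng = random.Random(seed)
    same = sum(
        same_face(rng.uniform(2.0, 3.0), rng.uniform(150.0, 250.0))
        for _ in range(n_samples)
    )
    return same / n_samples

print(estimate_probability())  # close to 0.5
```

Because the sampled $(v,\omega)$ region spans dozens of the alternating pink and white stripes, the smooth density weights same-face and changed-face regions almost equally, which is the core of the arbitrary-functions argument.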
There are several problems a range account faces, which Abrams, Rosenthal, and
Strevens are aware of and have reacted to. First, one needs to justify the
initial probability distribution. Where did it come from? Second, by
explaining the probabilities for a coin toss by a probability distribution
over the initial conditions, one would explain probabilities with
probabilities. It is a challenge to explain the properties of the initial
probability density from non-probabilistic facts in order not to make the
theory circular. Third, a probability distribution actually contains much more
information than is needed to get probabilities for frequencies. Typicality
frequentism, on the other hand, introduces something weaker than an initial
probability distribution that is more tailored to define probabilities as
special kinds of frequencies, and it does not suffer from a circular argument
(see the next section for a more detailed discussion of this point).
Fourth, typicality frequentism can explain the initial probability
distribution of the range-account (if it is the one used for frequencies). It
is known that the probabilities in repeatable processes are robust under many
changes of the initial probability distribution. Only very special
distributions (Rosenthal, 2016, calls them ‘eccentric’) would lead to
different probability assignments. In typicality frequentism these
distributions are, in fact, explained to arise from special initial conditions
of the universe yielding atypical behavior. Strevens (2011, p. 355–6) seems to
be aware of this when he says, “the typical actual pre-toss state, together
with the typical actual set of timings and the typical actual coin, usually
produce—because of such and such properties of the physiology of tossing—a
macroperiodic set of spin speeds.” But instead of embedding his theory into a
theory of typicality, Strevens borrows from Lewis’s possible-worlds semantics
to explain why we observe typical frequencies in our world.
### 3.2 The Humean Mentaculus
Albert (2000, 2015) and Loewer (2001, 2004, 2012) have been working on a
Humean account of probabilities.[13] Hoefer (2007, 2011, 2019) developed a
more pragmatic account of Humean probabilities, which is closely linked to the
Albert–Loewer account. Similar to the range account, they postulate an initial
probability distribution, but this initial probability distribution is defined
on the phase space for the initial conditions of the _entire_ universe. More
precisely, the Albert–Loewer account of probabilities consists of three
postulates:
1.
The fundamental deterministic laws of physics.
2.
The existence of a special (low-entropy) macrostate (called the _past
hypothesis_).
3.
A probability distribution over the initial conditions of the universe (within
this macrostate).
These three postulates are embedded in a Humean interpretation of laws of
nature, so they are axioms in the best systematization of the Humean mosaic,
balancing simplicity, strength, and fit. The initial probability distribution
assigns a probability to all kinds of factual and counterfactual events. These
three postulates, the _Mentaculus_, are said to form a probability map of the
history of the universe. Probabilities in this theory are defined, similarly
to the range account, as weighted regions of initial conditions _of the
universe_ (in phase space); in other words, one counts and weighs, according
to the initial probability distribution, all possible initial conditions of
the universe that would give rise to the relevant phenomenon. And again, like
the range account, the Mentaculus needs to explain what it means for the
initial probability distribution to be a _probability_ distribution. So far,
the probability distribution axiomatically introduced by the Mentaculus is
merely a mathematical object that assigns numbers to certain sets.
The central feature and goal of the Albert–Loewer account is “to obtain a
definite numerical assignment of probability to every formulable proposition
about the physical history of the world” (Albert, 2015, pp. 7–8). This
probability map assigns a probability not only to coin tosses but also to
events that may happen (or not) just once, like France defending the Soccer
World Cup title in 2022. There seems to be a shared intuition that these
single-case probabilities are meaningful and crucial to the notion of
probability—a point that has been raised against frequentism:
> The most famous problem for finite frequentism is _the problem of single
> case_. According to finite frequentism all single-case events automatically
> have the probability 0 or 1. Consider a coin that is only tossed once and
> comes up Heads. It appears that the probability of heads may be
> intermediate, but the finite frequentist is unable to say this. This goes
> against some strong intuitions about probability. A form of this problem
> remains in larger finite sequences. (La Caze, 2016, p. 343)
This criticism was raised early on against frequentism, to which von Mises
answered:
> ‘The probability of winning a battle’, for instance, has no place in our
> theory of probability, because we cannot think of a collective to which it
> belongs. The theory of probability cannot be applied to this problem any
> more than the physical concept of work can be applied to the calculation of
> the ‘work’ done by an actor in reciting his part in a play. (von Mises,
> quoted in Gillies, 2000, p. 98)
For many it was a shortcoming of frequentism that it does not assign
probabilities to single events, although it ought to do so (Hájek, 2009, pp.
227–8). Von Mises argues, and I agree with him here, that scientific concepts
may not capture the full range of intuitive notions and it may not even be the
goal of science to form concepts that capture all the different meanings of an
intuitive notion. Scientific concepts are defined in a precise way for the
price of being less general. Probability, according to von Mises, is like the
word “work” in physics, which has a precise meaning in terms of an integral of
the forces along a certain path and which, thus, differs from the everyday
meaning of “work”. Von Mises was, therefore, open to a pluralistic account of
probability dependent on the field of application.
In contrast to von Mises, other frequentists tried to generalize probabilities
to single cases as a kind of fictitious value:
> Frequentists from Venn to Reichenbach have attempted to show how the
> frequency concept can be made to apply to the single case. According to
> Reichenbach, the probability concept is extended by giving probability a
> “fictitious” meaning in reference to single events. We find the probability
> associated with an infinite sequence and transfer that value to a given
> single member of it. (Salmon, 1966, p. 90)
Although one can formally or “fictitiously” assign these numbers from
frequencies to single events, their meaning is unclear, especially their
meaning as something objective or physical. This is not only a problem for
frequentism, but also for the Humean Mentaculus because it is unclear what a
probability in an objective or physical sense for a single event is in the
first place. A purely subjective account, on the other hand, would not have
this problem: probabilities there are an agent’s degrees of belief, which are
meaningful for single events because they capture how confident the agent is
that a proposition is true.
In the Mentaculus, probabilities are introduced by a probability density over
the initial conditions of the universe, but this probability density, it shall
be noted, merely axiomatically introduces numbers on the Humean mosaic. To
make this distribution of numbers a _probability_ distribution requires
further elaboration and an interpretation that turns these numbers into
probabilities. This is accomplished in two steps (Loewer, 2004). First, the
concept of “fit” is introduced. Every (probabilistic) proposition is said to
have a certain degree of fit, that is, how likely it is to be true, and this
is quantified by a probability. If a proposition with high probability matches
the actual facts, it has a better fit than a proposition with low
probability.[14] The concept of fit leads to the _zero-fit_ problem; Elga
(2004) proposes a solution by invoking a certain notion of typicality. Second,
in order for fit in terms of probabilities to be informative, an agent needs to
constrain her belief according to these probabilities (Loewer, 2004, p. 1122),
and this is done according to another axiom, the _Principal Principle_. It
roughly says that an agent ought to adjust her degree of belief or her
credence according to the probability of the proposition given by the Humean
best system.
It is not immediately clear what the physical meaning of single-case
probabilities is in this Humean theory. Let us say that there are two coins,
and the Mentaculus assigns a probability of landing heads of 0.4 to one coin
and 0.6 to the other. Each coin is tossed just once and then destroyed. What
can these numbers 0.4 and 0.6 mean? These probabilities indeed influence, by
the Principal Principle, an agent’s attitude and behavior toward the outcome
of the coin tosses. For example, an agent will bet differently on an event
with a probability of 0.4 than on an event with a probability of 0.6. It seems,
however, that these single-case probabilities also need to say something about
the physical events themselves, whether their occurrence is in some way
constrained or not, which is then the basis for an agent to adjust her degree
of belief. Moreover, this example of two coins being tossed just once is in
principle repeatable, and so Humeans need to clarify the relationship between
these single-case probabilities and the frequencies of repeatable coin tosses.
Although the Albert–Loewer account of Humean probabilities explicitly
introduces and endorses single-case probabilities, it is, as of now, unclear
what their objective physical meaning is supposed to be.
### 3.3 Countering Standard Critique of Frequentism
Building on Hájek’s critique of frequentism, La Caze (2016) launched another
comprehensive attack. Here, I reply to four of La Caze’s arguments: (i) that
frequentism is a poor _analysis_ of probabilities, (ii) the problem of
ascertainability, (iii) the reference class problem, and (iv) that frequentism
is not completely objective.
#### 3.3.1 It’s a Poor _Analysis_ of Probabilities
La Caze (2016) claims that hypothetical frequencies are not the right
description of probabilities because they provide a poor analysis of what
probabilities are:
> The hypothetical frequentist provides an answer to the question “What is
> probability?” with an analysis that has little relationship with what most
> people _mean_ by the probability statements they make. […] When stating that
> a specific coin has the probability of Heads of half, people are typically
> referring to their beliefs about the coin, their experience with this coin
> in a finite series of tosses, or their experience with similar-seeming
> coins. (La Caze, 2016, p. 350)
The aim of typicality frequentism is not to reduce all ways in which
probabilities are invoked to typical long-term frequencies. It, rather, aims
at showing how one can derive from fundamental physics physically meaningful
probabilities, and it is open to be complemented by other accounts of
probability outside its scope. Given the myriads of different cases in which
probabilities are used, it is plausible that all these cases are not unified
by one account. Typicality frequentism would be, in my view, one piece in a
pluralistic landscape of probabilities. Moreover, if typicality frequentism is
true, then people may need to re-think the intuitions they have about
probabilities of coins and other physical processes. I aim at giving an
account of objective probability, but I agree that we also need an account of
subjective probabilities, and I can envision that it may be possible, in
certain circumstances, to connect a particular interpretation of subjective
probabilities with typicality frequentism.
#### 3.3.2 The Problem of Ascertainability
“[T]he problem of ascertainability is the most fundamental difficulty the
frequency interpretation faces,” says Salmon (1966, pp. 89–90), and he defines
this problem in the following way:
> _Ascertainability._ This criterion requires that there be some method by
> which, _in principle at least_ , we can ascertain values of probabilities.
> It merely expresses the fact that a concept of probability will be useless
> if it is impossible _in principle_ to find out what the probabilities are.
> (Salmon, 1966, p. 64, my emphasis)
Actually, all interpretations of probability face, in one form or another, the
problem of ascertainability, that is, how to assign probabilities in practice.
A meaningful definition is not enough, because it may lack the instructions
for how to pick the right probabilities. Salmon stresses that these
instructions, however, are supposed to be applicable only _in principle_ , and
not necessarily in actual practice. Applied to hypothetical frequencies, they
are said to be unascertainable for the following reasons (see also Hájek,
2009, pp. 214–5):
> To ascertain a hypothetical frequency with certainty we would need to
> observe an infinite number of trials. Assuming that a specific sequence of
> observations will converge to a limiting relative frequency, there is no
> guarantee that it will do so within the number of trials that will be
> observed. And if a relative frequency appears to have converged in a finite
> number of trials, it is always possible that the relative frequency diverges
> from this value in subsequent trials. These points are direct consequences
> of the mathematics of infinite sequences. The task for the frequentist is to
> justify inferring a (frequentist) probability from a relative frequency
> observed in a finite number of trials, and there is no deductively valid way
> to do this. (La Caze, 2016, p. 353)
The argument amounts to the correct observation that we cannot figure out the
true probability (as a hypothetical frequency) by observing finite
frequencies. This objection is particularly damaging to von Mises and
Reichenbach because they defined the probabilities in the spirit of logical
empiricism based on observation. The only route they had to the
hypothetical frequencies was through observable finite frequencies. In
order to mitigate this problem, von Mises introduced two principles
(Rowbottom, 2015, p. 100–1):
1. _Law of Stability_: The relative frequencies of attributes in collectives become increasingly stable as observations increase.
2. _Law of Randomness_: Collectives involve random sequences, in the sense that they contain no predictable patterns of attributes.
These laws are arguably ad hoc in von Mises’ theory, but at least they may be
justified by induction.
Similarly to von Mises, Reichenbach (1949/1971) bridged the gap between finite
and infinite sequences by induction; Reichenbach called his law the _Rule of
Induction by Enumeration_. Starting with an infinite sequence of events $A$,
we are interested in the relative frequency that some feature $B$ occurs in
this sequence. We can only observe a finite sequence of events of length $n$,
for example. The frequency of feature $B$ among the first $n$ members of $A$
is, say, $F^{n}(A,B)=\nicefrac{{m}}{{n}}$. In order to infer the limiting
frequency, the _Rule of Induction by Enumeration_ needs to be applied: Given
$F^{n}(A,B)=\nicefrac{{m}}{{n}}$, to infer that
$\lim\limits_{n\rightarrow\infty}F^{n}(A,B)=\nicefrac{{m}}{{n}}$ (Salmon,
1966, pp. 85–6).
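The inference the rule licenses can be pictured with a minimal simulation (my own illustration; the fair coin and the sample sizes are assumptions of the sketch): the running relative frequency $F^{n}(A,B)$ along a single sequence of trials stabilizes as $n$ grows, and the rule says to posit the last observed value as the limit.

```python
import random

random.seed(1)

# One long sequence of trials A; the feature B is "heads" (True).
tosses = [random.random() < 0.5 for _ in range(1_000_000)]

# Running relative frequency F^n(A, B) = m/n along the sequence.
heads = 0
for n, is_heads in enumerate(tosses, start=1):
    heads += is_heads
    if n in (10, 100, 10_000, 1_000_000):
        print(f"F^{n}(A,B) = {heads / n:.4f}")
```

Nothing in the finite data deductively guarantees that the observed value is close to the limiting frequency; that gap is exactly what the inductive rule is meant to bridge.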
La Caze, on the other hand, demands a deductive way to get to the hypothetical
frequencies, and this can be, in principle, accomplished by typicality
frequentism, as the hypothetical frequencies are predictions of the laws of
physics about typical behavior. By applying a physical theory, probably by
building a model as is standard in many cases (Cartwright, 1983; Morgan and
Morrison, 1999; Giere, 2004), the probabilities fall out of the theory as any
other empirical prediction. This move was not possible for von Mises and
Reichenbach, as they based their probabilities on observable behavior of the
physical processes. Typicality frequentism adheres to Salmon’s requirement for
solving the problem of ascertainability, because we can access the information
of a physical theory _in principle_ ; in practice, there might be strong
limitations on how to access all this information, but these obstacles are not
of a different nature than we normally encounter in other kinds of empirical
predictions.
One may argue that the “in principle” in typicality frequentism does a lot of
work.[15] Thanks to an anonymous referee for raising this point. If we have a
powerful enough physical theory that also makes it easy to extract empirical
predictions, then we would be able to solve the problem of ascertainability.
But what if we cannot extract this information from a physical theory (for
whatever reason)? Then either we need to extract the right frequencies from
observations, or we need to apply further metaphysical or physical
assumptions. Both paths are problematic: the first because we would fall back
to the (empirical) problem of hypothetical frequentism, the second because
further theoretical assumptions need to be justified. I grant that this
argument poses a challenge to the epistemology of typicality frequentism, that
is, how to ascertain the probabilities in practice. It is in general very
hard, and mostly impossible, to extract precise empirical information from a
physical theory for sufficiently complex systems—we cannot even analytically
solve the three-body problem in classical physics. Therefore, for practical
purposes we rely on other means to make empirical predictions: for example, by
making certain idealizations and approximations. In the case of probabilities,
we may need to rely on past incomplete empirical observations for future
predictions, or we may use theoretical assumptions, like symmetry arguments
(Zabell, 2016a).
#### 3.3.3 The Reference Class Problem
The reference class problem is generally regarded as sounding another death
knell for frequentism (Hájek, 2009, p. 219), although it was originally raised _by
frequentists_, like Venn (1888) and Reichenbach (1949/1971), _against single-
case probabilities_ :
> If we are asked to find the probability holding for an individual future
> event, we must first incorporate the case in a suitable reference class. An
> individual thing or event may be incorporated in many reference classes,
> from which different probabilities will result. This ambiguity has been
> called the _problem of the reference class_. (Reichenbach, 1949/1971, p.
> 374)
The reference class problem for single-case probabilities is a problem of how
to get the probability of one event when it can be part of many collections.
Venn’s example is the single-case probability of a man called John Smith, aged
50, to die at age 61. In order to make a qualified prediction about Smith’s life
over the next eleven years, one needs to compare Smith with other people
similar to him. In order to extract single-case probabilities from
frequentism, one would need to count the people similar to Smith who live
until age 61 and compare this number with that of all the similar people of
his age. The problem is, however, that it is not clear which properties the
reference class, that is, the people similar to Smith, needs to have in order
to count as “similar to Smith” (also because Smith himself has so many
different properties).
This example can be transferred into a reference class problem for frequentism
in general (La Caze, 2016, section 16.4.4); we just need to add to John Smith
any finite number of people of the same age and ask about the probability
that they live until age 61. What is the correct infinite collection of
people that give rise to the right probability? More precisely, given a finite
sequence of events $(x_{1},\dots,x_{n})$, what is the appropriate infinite
sequence $(y_{1},y_{2},\dots)$ that we shall associate with
$(x_{1},\dots,x_{n})$ in order to assign the probability
$p=\lim\limits_{m\rightarrow\infty}\frac{1}{m}\sum_{i=1}^{m}y_{i}$ for some
feature of $(x_{1},\dots,x_{n})$?
Furthermore, having found a suitable (or even the “correct”?) reference class,
the order of the members of the reference class may change the probability,
and there may even be an ordering where the sequence does not converge and no
probability can be assigned in the first place. Hájek (2007, p. 567) calls
this subcategory of the reference class problem the _reference sequence
problem_. Von Mises dealt with the reference sequence problem by restricting
the admissible sequences to give a unique ordering; these sequences he called
_collectives_, and they are defined by means of his two laws of probability,
the law of stability and the law of randomness. With this move, von Mises
could only solve, or propose a solution to, one aspect of the reference class
problem, namely, what Hájek (2007, p. 565) calls the _metaphysical_ reference
class problem. Given the two laws of probability, there is (hopefully) a fact
of the matter as to what the correct reference class is and what, accordingly,
the probability is.
Still, this information may be practically inaccessible for an agent, which
amounts to an _epistemological_ reference class problem.
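The reference sequence problem can be made concrete with a small construction of my own (not taken from the cited texts): the same outcomes, reordered into ever-doubling single-outcome blocks, yield a running frequency that oscillates forever instead of converging.

```python
from itertools import islice

def blocky_ordering():
    """Emit heads (1) and tails (0) in single-outcome blocks of doubling
    length: H, TT, HHHH, TTTTTTTT, ... The running frequency of heads
    then oscillates between roughly 1/3 and 2/3 and never converges."""
    block_len, outcome = 1, 1
    while True:
        for _ in range(block_len):
            yield outcome
        block_len *= 2
        outcome = 1 - outcome

heads = n = 0
freqs = []
for outcome in islice(blocky_ordering(), 2 ** 20):
    heads += outcome
    n += 1
    if n & (n - 1) == 0:  # record the frequency at powers of two
        freqs.append(heads / n)
print([round(f, 3) for f in freqs[-4:]])  # → [0.667, 0.333, 0.667, 0.333]
```

No limiting frequency exists for this ordering, even though a fair alternation of the same outcomes would converge to one half; which ordering is the "right" one is precisely what a frequentist theory must settle.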
Does the reference class problem only arise in frequentist interpretations of
probabilities? Hájek (2007) argues that basically _all_ interpretations of
probability face their version of the reference class problem, and the best we
can hope for is to solve the metaphysical problem—the epistemological problem
will always remain. And theories that do not face a reference class problem in
the first place, like radical subjectivists à la de Finetti or certain
versions of the propensity interpretation, are, according to Hájek, no-theory
theories of probability, because they do not sufficiently specify what
probabilities are and how they are to be used to guide agents’ beliefs and
actions.
Typicality frequentism indeed solves the metaphysical reference class problem
by means of a physical theory, something Salmon also mentioned as a way out:
> When a sequence is generated by a physical process that is well understood
> in terms of accepted physical theory, we may be able to make theoretical
> inferences concerning convergence properties. For instance, our present
> knowledge of mechanics enables us to infer the frequency behavior of many
> kinds of gambling mechanisms. Our theory of probability must allow room for
> inferences of this kind. The basic problem, however, concerns sequences of
> events for which we are lacking such physical knowledge. (Salmon, 1966, p.
> 84)
The Theory of Everything ultimately determines the underlying physical
processes of a random sequence, and thus determines the limit a finite
sequence would approach if it were extended infinitely. The reference class problem is
solved in typicality frequentism, because the reference class is the finite
sequence itself which gets extrapolated into an infinite sequence by means of
the Theory of Everything. Since we do not yet have a Theory of Everything, any
candidate for a fundamental physical theory determines the behavior of the
reference class. In other words, the truthmaker for singling out the reference
class and the corresponding behavior is the Theory of Everything, and for the
current situation we can replace the Theory of Everything by an appropriate
candidate for a fundamental physical theory or by a model of the physical
theory (given certain idealizations). So the gap in Reichenbach’s Rule of
Induction by Enumeration is closed not by induction from the observable
sequence itself, but by a physical theory describing the physical processes
underlying the sequence.
In a similar vein, the reference sequence problem is tackled. Intricate
orderings that yield different limits or no limit at all are physically
possible but atypical, given the initial conditions of the universe, which
determine the physical conditions of the physical processes governing the
sequence.[16] Something similar has been proposed in certain versions of the
propensity interpretation. Miller (1994, p. 56) says that propensities are
determined by “the complete situation of the universe (or the light-cone) at
the time”, and, for Fetzer (1982, p. 195), they are determined by “a complete
set of (nomically and/or causally) relevant conditions […] which happens to be
instantiated in that world at that time.” These solutions, however, are not
satisfactory for Hájek (2007, p. 576) because the propensities, such defined,
are not accessible to an agent to assign probabilities in practice. Therefore,
he subsumes these proposals under no-theory theories of probabilities. More
precisely, there is a physically distinguished “natural” ordering of the
sequence, namely, the temporal ordering as determined or predicted by physics.
Rowbottom (2015, p. 111) presents an argument that physics is not able to
single out a natural order for sequences, because, according to special
relativity, the order of, say, coin flips depends on the state of motion of an
observer. So two observers on two different trajectories may disagree on the
order of the same coin flips that they observe. But this would only be correct
if the observers saw _two_ different sequences of coin flips that are
space-like separated. If Rowbottom refers to one sequence of coin flips, and I
assume he does because this is the relevant case at issue, then the coin flips
are time-like separated, and, according to special relativity, the temporal
order of time-like separated events is objective, that is, independent of the
state of motion of observers.
#### 3.3.4 Frequentism Is Not Completely Objective
Von Mises (1928/1957, p. 14) makes a strong assertion about frequentistic
probabilities when he says, “The probability of a 6 is a physical property of
a given die and is a property analogous to its mass, specific heat, or
electrical resistance.” I agree with La Caze (2016, section 16.4.5) that
frequentism does not oblige upon us this strong metaphysical interpretation of
probabilities, but I disagree that frequentistic probabilities are not
objective. For La Caze, “[h]ypothetical frequencies are not divorced from
consideration of personal factors (including beliefs).”
His argument goes like this. Since the main proclaimed advantage of
frequentism is that it introduces objective probabilities, any subjective
trace in frequentistic probabilities would undermine the entire project. The
subjectivity that frequentism relies on comes from how the particular physical
process that gives rise to frequencies is modeled. The probabilities for a
coin toss, for example, depend on how the properties of the coin and the
tossing mechanism are modeled. That some particular physical model is suitable
for giving rise to the proper frequencies needs to be judged by an agent. And
this judgement is unequivocally subjective, as La Caze (2016, p. 358) says,
“Scientists employing frequentists probabilities need to make a judgement that
the data-generating processes providing the measured outcomes of the study are
adequately modeled by one or more of these approaches to specifying the
requirements on the expected sequence of outcomes.” The bar raised by this
requirement is so high that basically all our physical predictions are deemed
to be subjective, because they depend on certain idealizations to be made by
an agent. For practical matters, the practice of physics has this “subjective”
ingredient, but that does not make physics a subjective science. Therefore, I do
not see that frequentistic probabilities are less objective than other
predictions of physics.
Hájek (2009, pp. 215–7) also criticizes the idealizations made in hypothetical
frequentism, but he approaches this problem from a different direction:
> Consider the coin toss. We are supposed to imagine infinitely many results
> of tossing the coin: that is, a world in which coins are ‘immortal’, lasting
> forever, coin-tossers are immortal and never tire of tossing (or something
> similar anyway), or else in which coin tosses can be completed in ever
> shorter intervals of time… In short, we are supposed to imagine _utterly
> bizarre_ worlds […]. (Hájek, 2009, pp. 215–6)
For Hájek, the problem of hypothetical frequentism lies in the definition of
probabilities: in order to define hypothetical frequencies “utterly bizarre”
counterfactual scenarios need to be set up that “would have to be _very_
different from the actual world” (Hájek, 2009, p. 215). I think this problem
can be remedied by a physical theory and the laws of nature in such a theory.
We know that laws of nature ground facts beyond the actual regularities (e.g.,
Maudlin, 2007b). The counterfactual idealizations that need to be made for
hypothetical frequentism—and also for typicality frequentism—may be more
radical or more detached from the actual world than in other applications,
like in the normal way of model building (Morgan and Morrison, 1999), but they
can be still grounded and made true by the laws in a physical theory.
### 3.4 Objections and Replies
1. _Typicality seems to be too vague. How can it be meaningful?_
Typicality is intentionally a vague term. Not all notions need to be precise
to be meaningful. We know when someone is tall or when someone is bald. Of
course, there may be borderline cases where we debate whether a person is really
tall or really bald, but for all practical purposes there is no ambiguity. We
encounter the same in physics. The initial macrostate the universe evolved from
according to statistical mechanics is also vague, because the boundaries are
fuzzy and not precisely specified. But when we reason about the evolution of
the universe we talk about microstates that do not lie on the boundary, so
this vagueness is harmless. It is a strength of the notion of typicality to be
vague, because we don’t need to cope with unnecessary details in our
explanation and we can use typicality in many different areas.
2. _What is a formal definition of typicality?_
In many cases, typicality does not need a formal definition. It is basically a
technical term for _most_ or _almost all_. Maudlin (2018) and Wilhelm (2019)
propose two different approaches to formalize typicality. Maudlin interprets
typicality as a second-order predicate, that is, a predicate of a predicate.
We formally write $F(X)$ symbolizing that $X$ has property $F$. Typicality
would be a further qualification between $X$ and $F$. $T(F,X)$ would symbolize
that it is typical for $X$ to have $F$. One may even consider typicality as
another quantifier. Wilhelm, on the other hand, focuses on the explanatory
scheme of typicality explanations and points out that it resembles Hempel’s
deductive–nomological model.
3. _What is the relationship between a probability measure and a typicality measure?_
Mathematically, a typicality measure is usually represented as a probability
measure, but a probability measure contains more information than is actually
needed:
> While typicality is usually defined – as it was here – in terms of a
> probability measure, the basic concept is not genuinely probabilistic, but
> rather a less detailed concept. A measure $\mu$ of typicality need not be
> countably additive, nor even finitely additive. Moreover, for any event $E$,
> if $\mu$ is merely a measure of typicality, there is no point worrying
> about, nor any sense to, the question as to the real meaning of say
> ‘$\mu(E)=\nicefrac{{1}}{{2}}$’. Distinctions such as between
> ‘$\mu(E)=\nicefrac{{1}}{{2}}$’ and ‘$\mu(E)=\nicefrac{{3}}{{4}}$’ are
> distinctions without a difference.
>
> The only thing that matters for a measure $\mu$ of typicality is ‘$\mu(E)\ll
> 1$’: a measure of typicality plays solely the role of informing us when a
> set $E$ of exceptions is sufficiently small that we may in effect ignore it
> and regard the phenomenon in question, occurring off the set $E$, as having
> been explained. (Goldstein, 2001, p. 15)
4. _What is the difference between typicality and probability? Is _typical_ just another word for _highly probable_ and _atypical_ for _highly improbable_?_
Historically, typicality evolved from abstracting from highly probable cases.
Boltzmann, for example, said that the second law of thermodynamics makes it
highly probable that a gas in a box equilibrates. But I think that typicality
is a more primitive notion than, and different from, probability, and this paper
showed how one can reduce probabilities to typicality. Typicality is a much
less fine-grained and more general concept than probability.
5. _There are cases where something typical is highly improbable. For example, a
long, well-mixed sequence of heads and tails is typical but improbable, e.g.
HTHTTHHHTTTHTHHT. Or a very specific event may be typical but improbable, e.g.
the probability of randomly selecting from the US population a man of height
exactly 175.4 cm is very low, even though this is the average height, and in a
good sense typical. How can one reconcile that?_[17] Thanks to an anonymous
referee for raising this issue, whom I quote almost verbatim.
The difference between typicality and probability has been addressed in more
detail in Wilhelm (2019). Typicality is not a categorical property. So it
doesn’t make sense to say that something is typical by itself. There always
needs to be a reference: “typical with respect to what?” It is typical for
clovers to have three leaves, because in the class of all clovers most of them
have three leaves. If we zoom in too much, for instance, when comparing the
particular shapes of the leaves, every leaf may be unique, and we may not be
able to find any “typical shape”. Applied to the coin toss, if we zoom in on
the particular pattern of a series of tosses, we may ask, “Is HTHTTHHHTTTHTHHT
typical?” The right answer is, “Typical with respect to what?” Typical with
respect to the number of heads and tails? Then yes, because approximately 50%
are heads and 50% are tails (I ignore that the series needs to be much longer
to make such a statement). But what about the particular pattern
HTHTTHHHTTTHTHHT? It
is very unlikely to repeat this particular pattern in an actual coin toss. But
this is the case for any particular pattern. The same is true in statistical
mechanics: every particular trajectory in phase space has measure zero and is
therefore very unlikely (or atypical, although it is meaningless to talk about
atypicality per se too). This point has been raised against typicality by
Uffink (2007), and I think it has been rightly answered by Lazarovici and
Reichert (2015, section 5.2). I agree on the example of the height, which is
similar to the way I define probabilities. The actual height 175.4 cm is
rarely found in a person but most people are close to the average.
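The distinction drawn in this answer is easy to check numerically (a sketch of my own): every exact 16-toss pattern is equally improbable, while the coarse feature "8 heads out of 16" is quite probable.

```python
from math import comb

pattern = "HTHTTHHHTTTHTHHT"
n = len(pattern)                 # 16 tosses
k = pattern.count("H")           # 8 heads

# Every particular pattern of 16 fair tosses is equally improbable:
p_exact_pattern = 0.5 ** n       # ≈ 1.5e-05

# The macro-feature "8 heads out of 16" is far more probable:
p_eight_heads = comb(n, k) / 2 ** n   # ≈ 0.196

print(k, p_exact_pattern, round(p_eight_heads, 3))
```

So with respect to the coarse feature (the number of heads) the sequence is typical, while any particular fine-grained pattern, this one included, is equally improbable.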
6.
_If you reduce probabilities to typical longterm frequencies, then you cannot
account for all the uses of probability. Especially single-case probabilities
lack an explanation._
That is correct, but I claim that single-case probabilities are not
meaningfully interpreted as some kind of frequency. They may be properly
construed as purely subjective degrees of belief, as a kind of tool in
Bayesian updating, but not in an ontological sense. Therefore, I endorse a
pluralistic account of probabilities tailored to different applications.
## 4 Conclusion
If our world is correctly described by a deterministic physical theory, then
every event is determined by the initial conditions of the universe.
Typicality frequentism builds on this insight and singles out physical
processes that give rise to stable long-term frequencies. If these frequencies
are typical they define probabilities. As I have shown, the essential idea
behind this approach comes from how Boltzmann explained the thermodynamic
arrow of time and how he reduced thermodynamics to statistical mechanics. The
main advantage I see with typicality frequentism is that it carves objective
probabilities at the right joint by specifying those kinds of probabilities
that are meaningful within physics. In this way, typicality frequentism does
not face the same problems as traditional empiricist accounts of frequentism
do. Other applications of probability beyond physics may be properly described
by subjective approaches, complementing it within a pluralistic picture of
probabilities.
## Acknowledgements
I wish to thank Frederick Eberhardt, Christopher Hitchcock, and Charles Sebens
for their helpful and detailed comments on previous drafts of this paper. I
also wish to thank David Albert, Jeffrey Barrett, Detlef Dürr, Sheldon
Goldstein, Dustin Lazarovici, Barry Loewer, Tim Maudlin, and Isaac Wilhelm for
many invaluable hours of discussions. I also thank the members of the _Caltech
Philosophy of Physics Reading Group_, in particular Joshua Eisenthal and
James Woodward. I want to thank two anonymous reviewers for their helpful
comments, which significantly improved the paper. One of the anonymous
reviewers in particular spent considerable time and effort in the review
process; I especially thank this reviewer.
## References
* Abrams (2012) M. Abrams. Mechanistic probability. _Synthese_ , 187(2):343–75, 2012.
* Albert (2000) D. Z. Albert. _Time and Chance_. Cambridge, MA: Harvard University Press, 2000.
* Albert (2015) D. Z. Albert. _After Physics_. Cambridge, MA: Harvard University Press, 2015.
* Barrett (2017) J. A. Barrett. Typical worlds. _Studies in History and Philosophy of Modern Physics_ , 58:31–40, 2017.
* Cartwright (1983) N. Cartwright. _How the Laws of Physics Lie_. Oxford: Clarendon Press, 1983.
* de Finetti (1937) B. de Finetti. Foresight: Its logical laws, its subjective sources. _Annales de l’Institut Henri Poincaré_ , 7, 1937. Translated into English by H. E. Kyburg, Jr. in H. E. Kyburg, Jr. and H. E. Smokler (ed.). _Studies in Subjective Probability_ , pages 53-118. New York: Robert E. Krieger, 1980.
* Dürr et al. (2017) D. Dürr, A. Froemel, and M. Kolb. _Einführung in die Wahrscheinlichkeitstheorie als Theorie der Typizität_. Heidelberg: Springer, 2017.
* Elga (2004) A. Elga. Infinitesimal chances and the laws of nature. _Australasian Journal of Philosophy_ , 82(1):67–76, 2004.
  * Fetzer (1982) J. H. Fetzer. Probabilistic explanations. _PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association_ , 1982:194–207, 1982.
  * Fioretti (2001) G. Fioretti. Von Kries and the other “German logicians”: Non-numerical probabilities before Keynes. _Economics and Philosophy_ , 17:245–73, 2001.
  * Frigg (2009) R. Frigg. Typicality and the approach to equilibrium in Boltzmannian statistical mechanics. _Philosophy of Science_ , 76(5):997–1008, 2009.
* Giere (2004) R. N. Giere. How models are used to represent reality. _Philosophy of Science_ , 71:742–52, 2004.
  * Gillies (2000) D. Gillies. _Philosophical Theories of Probability_. London: Routledge, 2000.
* Goldstein (2001) S. Goldstein. Boltzmann’s approach to statistical mechanics. In J. Bricmont, D. Dürr, M. Galavotti, G. Ghirardi, F. Petruccione, and N. Zanghì, editors, _Chance in Physics: Foundations and Perspectives_ , pages 39–54. Heidelberg: Springer, 2001.
* Goldstein (2012) S. Goldstein. Typicality and notions of probability in physics. In Y. Ben-Menahem and M. Hemmo, editors, _Probability in Physics_ , chapter 4, pages 59–71. Heidelberg: Springer, 2012.
* Hájek (1996) A. Hájek. “Mises Redux” – Redux: Fifteen arguments against finite frequentism. _Erkenntnis_ , 45(2/3):209–27, 1996.
* Hájek (2007) A. Hájek. The reference class problem is your problem too. _Synthese_ , 156(3):563–85, 2007.
* Hájek (2009) A. Hájek. Fifteen arguments against hypothetical frequentism. _Erkenntnis_ , 70(2):211–35, 2009.
* Hoefer (2007) C. Hoefer. The third way on objective probability: A sceptic’s guide to objective chance. _Mind_ , 116(463):549–96, 2007.
* Hoefer (2011) C. Hoefer. Physics and the Humean approach to probability. In C. Beisbart and S. Hartmann, editors, _Probabilities in Physics_ , chapter 12, pages 321–37. New York: Oxford University Press, 2011.
* Hoefer (2019) C. Hoefer. _Chance in the World: A Humean Guide to Objective Chance_. New York: Oxford University Press, 2019.
* Keller (1986) J. B. Keller. The probability of heads. _The American Mathematical Monthly_ , 93(3):191–7, 1986.
* Kerrich (1946) J. E. Kerrich. _An Experimental Introduction to the Theory of Probability_. Copenhagen: Einar Munksgaard, 1946.
* Küchenhoff (2008) H. Küchenhoff. Coin tossing and spinning – useful classroom experiments for teaching statistics. In Shalabh and C. Heumann, editors, _Recent Advances in Linear Models and Related Areas_. Physica-Verlag HD, 2008.
* La Caze (2016) A. La Caze. Frequentism. In A. Hájek and C. Hitchcock, editors, _The Oxford Handbook of Probability and Philosophy_ , chapter 16, pages 341–59. Oxford: Oxford University Press, 2016.
* Lazarovici and Reichert (2015) D. Lazarovici and P. Reichert. Typicality, irreversibility and the status of macroscopic laws. _Erkenntnis_ , 80(4):689–716, 2015.
* Lazarovici and Reichert (2019) D. Lazarovici and P. Reichert. Arrow(s) of time without a past hypothesis. In V. Allori, editor, _Statistical Mechanics and Scientific Explanation: Determinism, Indeterminism and Laws of Nature_. World Scientific, 2019. Forthcoming.
  * Lebowitz (2008) J. L. Lebowitz. Time’s arrow and Boltzmann’s entropy. _Scholarpedia_ , 3(4):3448, 2008. 10.4249/scholarpedia.3448.
* Lieb and Seiringer (2010) E. H. Lieb and R. Seiringer. _The Stability of Matter in Quantum Mechanics_. Cambridge, UK: Cambridge University Press, 2010.
* Loewer (2001) B. Loewer. Determinism and chance. _Studies in History and Philosophy of Modern Physics_ , 32(4):609–20, 2001.
  * Loewer (2004) B. Loewer. David Lewis’s Humean theory of objective chance. _Philosophy of Science_ , 71(5):1115–25, 2004.
  * Loewer (2012) B. Loewer. Two accounts of laws and time. _Philosophical Studies_ , 160(1):115–37, 2012.
* Maudlin (2007a) T. Maudlin. What could be objective about probabilities? _Studies in History and Philosophy of Modern Physics_ , 38(2):275–91, 2007a.
* Maudlin (2007b) T. Maudlin. A modest proposal concerning laws, counterfactuals, and explanations. In _The Metaphysics Within Physics_ , chapter 1, pages 5–49. New York: Oxford University Press, 2007b.
* Maudlin (2018) T. Maudlin. The grammar of typicality. In V. Allori, editor, _Statistical Mechanics and Scientific Explanation: Determinism, Indeterminism and Laws of Nature_. World Scientific, 2018. Forthcoming.
* Miller (1994) D. W. Miller. _Critical Rationalism: A Restatement and Defence_. La Salle, Illinois: Open Court, 1994.
* Morgan and Morrison (1999) M. Morgan and M. Morrison, editors. _Models as mediators: Perspectives on Natural and Social Science_. Cambridge, UK: Cambridge University Press, 1999.
  * Myrvold (2016) W. C. Myrvold. Probabilities in statistical mechanics. In A. Hájek and C. Hitchcock, editors, _The Oxford Handbook of Probability and Philosophy_ , chapter 27. Oxford: Oxford University Press, 2016.
* Myrvold (2019) W. C. Myrvold. Explaining thermodynamics: What remains to be done? In V. Allori, editor, _Statistical Mechanics and Scientific Explanation: Determinism, Indeterminism and Laws of Nature_. World Scientific, 2019. Forthcoming.
  * North (2003) J. North. Understanding the time-asymmetry of radiation. _Philosophy of Science_ , 70(5):1086–97, 2003.
* Penrose (1989) R. Penrose. _The Emperor’s New Mind: Concerning Computers, Minds and The Laws of Physics_. Oxford: Oxford University Press, 1989.
  * Pulte (2016) H. Pulte. Johannes von Kries’s objective probability as a semi-classical concept: Prehistory, preconditions and problems of a progressive idea. _Journal for General Philosophy of Science_ , 47(1):109–29, 2016.
* Reichenbach (1949/1971) H. Reichenbach. _The Theory of Probability_. Berkeley: University of California Press, 1949/1971.
* Reichenbach (2008) H. Reichenbach. _The Concept of Probability in the Mathematical Representation of Reality_. Chicago: The Open Court Publishing Co., 2008. Translated and edited by Frederick Eberhardt and Clark Glymour.
* Rosenthal (2010) J. Rosenthal. The natural-range conception of probability. In G. Ernst and A. Hüttemann, editors, _Time, Chance and Reduction: Philosophical Aspects of Statistical Mechanics_ , chapter 5, pages 71–91. Cambridge, UK: Cambridge University Press, 2010.
* Rosenthal (2016) J. Rosenthal. Johannes von Kries’s range conception, the method of arbitrary functions, and related modern approaches to probability. _Journal for General Philosophy of Science_ , 47(1):151–70, 2016.
* Rowbottom (2015) D. P. Rowbottom. _Probability_. Cambridge, UK: Polity Press, 2015.
* Salmon (1966) W. C. Salmon. _The Foundations of Scientific Inference_. Pittsburgh: University of Pittsburgh Press, 1966.
* Stefan and Cheche (2017) R. C. Stefan and T. O. Cheche. Coin toss modeling. _Romanian Reports in Physics_ , 69(904):1–11, 2017.
* Strevens (2003) M. Strevens. _Bigger than Chaos: Understanding Complexity through Probability_. Cambridge, MA: Harvard University Press, 2003.
* Strevens (2008) M. Strevens. _Depth: An Account of Scientific Explanation_. Cambridge, MA: Harvard University Press, 2008.
* Strevens (2011) M. Strevens. Probability out of determinism. In C. Beisbart and S. Hartmann, editors, _Probabilities in Physics_ , chapter 13, pages 339–64. New York: Oxford University Press, 2011.
* Strevens (2013) M. Strevens. _Tychomancy: Inferring Probability from Causal Structure_. Cambridge, MA: Harvard University Press, 2013.
* Strzałko et al. (2008) J. Strzałko, J. Grabski, A. Stefański, P. Perlikowski, and T. Kapitaniak. Dynamics of coin tossing is predictable. _Physics Reports_ , 469(2):59–92, 2008.
* Uffink (2007) J. Uffink. Compendium to the foundations of classical statistical mechanics. In J. Butterfield and J. Earman, editors, _Handbook for the Philosophy of Physics_ , pages 924–1074. Amsterdam: Elsevier, 2007.
* Venn (1888) J. Venn. _The Logic of Chance_. London: Macmillan, 3rd edition, 1888.
* Volchan (2007) S. B. Volchan. Probability as typicality. _Studies in History and Philosophy of Modern Physics_ , 38(4):801–14, 2007.
* von Kries (1886) J. von Kries. _Die Principien der Wahrscheinlichkeitsrechnung_. J. C. B. Mohr, 1886.
* von Mises (1928/1957) R. von Mises. _Probability, Statistics and Truth_. New York: Macmillan, 1928/1957.
* Wagner (2020) G. Wagner. Typicality and minutis rectis laws: From physics to sociology. _Journal for General Philosophy of Science_ , 2020. 10.1007/s10838-020-09505-7.
  * Werndl (2013) C. Werndl. Justifying typicality measures of Boltzmannian statistical mechanics and dynamical systems. _Studies in History and Philosophy of Modern Physics_ , 44(4):470–9, 2013.
* Wilhelm (2019) I. Wilhelm. Typical: A theory of typicality and typicality explanation. _The British Journal for the Philosophy of Science_ , 2019. 10.1093/bjps/axz016.
* Zabell (2016a) S. Zabell. Symmetry arguments in probability. In A. Hájek and C. Hitchcock, editors, _The Oxford Handbook of Probability and Philosophy_ , chapter 15, pages 315–40. Oxford: Oxford University Press, 2016a.
* Zabell (2016b) S. Zabell. Johannes von Kries’s Principien: A brief guide for the perplexed. _Journal for General Philosophy of Science_ , 47(1):131–50, 2016b.
# Guiding GANs: How to control non-conditional pre-trained GANs for
conditional image generation
Manel Mateos, Alejandro González (corresponding author:
[email protected]), Xavier Sevillano
GTM - Grup de Recerca en Tecnologies Mèdia. La Salle - Universitat Ramon Llull
###### Abstract
Generative Adversarial Networks (GANs) are an arrangement of two neural
networks –the generator and the discriminator– that are jointly trained to generate
artificial data, such as images, from random inputs. The quality of these
generated images has recently reached such levels that can often lead both
machines and humans into mistaking fake for real examples. However, the
process performed by the generator of the GAN has some limitations when we
want to condition the network to generate images from subcategories of a
specific class. Some recent approaches tackle this conditional generation by
introducing extra information prior to the training process, such as image
semantic segmentation or textual descriptions. While successful, these
techniques still require defining beforehand the desired subcategories and
collecting large labeled image datasets representing them to train the GAN
from scratch. In this paper we present a novel and alternative method for
guiding generic non-conditional GANs to behave as conditional GANs. Instead of
re-training the GAN, our approach adds into the mix an encoder network to
generate the high-dimensional random input vectors that are fed to the
generator network of a non-conditional GAN to make it generate images from a
specific subcategory. In our experiments, when compared to training a
conditional GAN from scratch, our guided GAN is able to generate artificial
images of perceived quality comparable to that of non-conditional GANs after
training the encoder on just a few hundreds of images, which substantially
accelerates the process and enables adding new subcategories seamlessly.
###### keywords:
Neural Network, Generative Adversarial Networks, Conditional image
generation, Guiding process, Encoder Networks
Journal: Neural Networks
## 1 Introduction
The generation of artificial data that follows real distributions has
encouraged the computer science community to develop generative algorithms
that aim to create data as indistinguishable as possible from real data.
Applications range from the generation of missing data for incomplete datasets
[1] to coherent text generation [2], among many others.
Recently, the computer vision community has focused on the generation of real-
looking artificial images, and a specific type of neural networks called
generative adversarial networks (GANs) have attained remarkable performance in
this area [3]. In GANs, models are built with two neural networks: the
generator, which is a convolutional neural network (CNN) trained to generate
images that mimic the distribution of the training dataset, and the
discriminator, which tries to distinguish between real images and fake images
generated by the generator. The process of the generator trying to fool the
discriminator leads to a joint learning process that enables the effective
generation of real-looking fake images similar to those in the training set.
Despite their success, giving users total control on the characteristics of
the images generated by a GAN is still an issue to be solved. Traditionally,
researchers have used additional information as inputs in the generator
network training to condition the generation process [4]. This gives rise to
conditional GANs, which are able to generate artificial images with certain
specific, desired characteristics. Many authors in recent years have explored
different ways of including this conditional information, be it through image
textual descriptions [5], semantic segmentations [6] or image category
definition [7], among others. However, all these approaches fix the number and
definition of characteristics before the GAN training process, and these cannot
be changed later. Thus, for any addition or variation, a new labeled dataset must
be collected, and the GAN must be re-trained, making the whole process complex
and time-consuming.
In the face of these limitations of current conditional GANs, in this paper we
present a novel and efficient way of guiding pre-trained non-conditional GANs
to generate images belonging to specific subcategories, avoiding burdensome
re-training processes. The general scheme of the proposed method is depicted
in Figure 1.
Figure 1: Guiding GANs process: the user collects a small sample of images
corresponding to the specific subcategory. Based on this sample, the encoder
network generates the subcategory prototype vector that represents the
distribution of the sample. This vector is then randomly sampled to create
random vectors that are fed to the pre-trained non-conditional GAN to generate
images of the desired subcategory.
The key element of our proposal is an encoder network, which is first trained
to learn the opposite transformation of the pre-trained non-conditional GAN.
To do so, pairs of random vectors and the corresponding images generated by
the GAN are employed.
Then, the guiding process starts with the user collecting a small sample of
non-annotated real images as examples of the specific image subcategory he/she
wants the pre-trained non-conditional GAN to generate. These images are fed to
the encoder network, which estimates the distribution of the selected
subcategory, and embeds it in the so-called subcategory prototype vector. By
randomly sampling the prototype vector, we create multiple random vectors
which are fed to the pre-trained generator network of the GAN to produce
artificial images similar to the previously selected ones. By proceeding this
way, we enable users to obtain real-looking artificial images at a reduced
computational cost.
This paper is organized as follows: in Section 2 the theoretical background of
our proposal, as well as relevant previous work is reviewed. Then, Section 3
describes the proposed method. Next, in Section 4, our proposal is evaluated
in a series of experiments. Finally, Section 5 discusses the obtained results,
highlights the advantages of our method and outlines future research lines.
## 2 State of the art
The image generation problem has pushed researchers to search for the best
frameworks and models to produce real-looking artificial images.
In addition to GANs [8], other relevant approaches include adversarial auto-
encoders (AAEs) [9], variational auto-encoders (VAEs) [10] and auto-regression
models (ARMs) (e.g. PixelRNN [11]).
As GANs are probably the most widely employed technique for the conditional
generation of high resolution artificial images, this section is focused on
reviewing the basic concepts of GANs (Section 2.1), as well as common
approaches to GAN-based increased resolution image generation (Section 2.2)
and proposals on conditional generation image generation via GANs (Section
2.3).
### 2.1 Artificial image generation via GANs
GANs are a subset of implicit density generative models that focus on
transforming a random input into an image which aims to be part of the true
distribution of the training data set [8]. As mentioned earlier, the GAN
models are built through the interaction of two neural networks, the generator
and the discriminator, that together learn how to generate new real-looking
images effectively by trying to confuse one another. On the one hand, the
generator network (G) is a CNN that takes a random vector from a distribution
and maps it through the model to an output with the desired sample size. Its
objective is to produce images that look as similar as possible to the
training examples. On the other hand, the discriminator (D) is a binary
classification CNN that predicts if a given image is a real one (i.e. a
training sample) or a fake one (generated by G). In this sense, the GAN
training process can be understood as a two-player game where G tries to fool
D by generating real-looking images and D tries to distinguish between real
and fake images, as shown in Figure 2.
Figure 2: Generic GAN architecture. Generator, discriminator and their
connections.
These two models are trained jointly by optimizing the worst-case objective
function
$\min\limits_{G}\max\limits_{D}V(D,G)=E_{x\sim p_{data}(x)}[\log D(x)]+E_{z\sim p_{z}(z)}[\log(1-D(G(z)))]$,
where $D(x)$ is the discriminator output for the real data sample $x$, and
$D(G(z))$ is the discriminator output for the artificial data generated by the
generator with $z$ as the random input vector.
During the training process, the discriminator tries to maximize the objective
function by making $D(x)$ close to 1 and $D(G(z))$ close to 0, while the
generator focuses on minimizing it such that $D(G(z))$ is close to 1, fooling
the discriminator.
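The pull between the two players can be checked numerically. The following minimal Python sketch (ours, not from the paper) estimates $V(D,G)$ from batches of discriminator outputs and shows that a fooled discriminator drives the value down, which is the direction the generator's minimization pushes towards:

```python
import math

def gan_value(d_real, d_fake):
    """Empirical estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# A confident, correct discriminator keeps V(D, G) close to 0 ...
v_sharp = gan_value(d_real=[0.99, 0.98], d_fake=[0.01, 0.02])

# ... while a fooled discriminator (D(G(z)) near 1) drives it down.
v_fooled = gan_value(d_real=[0.99, 0.98], d_fake=[0.95, 0.97])

assert v_fooled < v_sharp < 0
```

At the indifference point, where $D(\cdot)=0.5$ everywhere, the value equals $2\log(1/2)=-\log 4\approx-1.386$, the known equilibrium value of the minimax game.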
### 2.2 Increasing artificial image resolution via progressive growing GANs
The generation of high resolution and complex images via GANs requires up-
scaling their architecture. In this sense, higher resolutions imply some
additional challenges, such as i) gradients during training become useless,
generating poor images easily identifiable as fake, and therefore making the
training process fail, ii) memory constraints increase, forcing researchers to
reduce batch sizes, which compromises the stability of the training process,
and iii) better hardware acceleration is needed to train these bigger models
and handle them efficiently.
Figure 3: Progressive GANs training process. The training process for G and D
starts at a low-resolution ($4\times 4$ pixels) and is gradually adapted to
higher resolutions by adding layers to G and D. At each training step all
existing layers remain trainable. Added layers are convolutional layers
operating on $N\times N$ spatial resolution.
These problems forced the research community to formulate new approaches to
scale up the resolution of the generated images successfully. Some authors
like Salimans et al. [12] and Gulrajani et al. [13] presented improvements on
the GAN training process, and others like Berthelot et al. [14] and Kodali et
al. [15] proposed new GAN architectures.
An alternative and interesting approach is Progressive Growing GANs (PG-GANs)
[16], which consist of gradually training GANs and iteratively adapting them
to higher-resolution images at each step of the training. The authors propose
starting the training process on low-resolution images and then gradually
increasing the resolution by adding layers to both G and D (see Figure 3).
This procedure implies that G and D must be symmetrical and grow
synchronously. The method is based on the premise that CNNs first learn
large, general features that generalize the training images, and that adding
more layers allows the network to move on to finer details. According
to Karras et al., the use of this progressive growing approach i) reduces GANs
training time, ii) improves convergence, as low-resolution neural networks are
stable and the progressive increase of image resolution allows starting the
higher resolution training process with a stable pre-trained network, and iii)
introduces an adaptation degree in terms of resolution, allowing the control
of the training process for obtaining artificial images of a given desired
resolution. Due to this flexibility and reduced training time, we adopt PG-
GANs in our work to generate artificial images.
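As an illustration of this schedule, the small helper below (our sketch, not code from [16]) lists the resolutions visited when growing from $4\times 4$ to $128\times 128$ by doubling at each step:

```python
def pg_schedule(start=4, final=128):
    """Resolutions visited during progressive growing: double until final."""
    schedule, res = [], start
    while res <= final:
        schedule.append(res)
        res *= 2
    return schedule

print(pg_schedule())  # [4, 8, 16, 32, 64, 128]
```

Each transition in the list corresponds to adding a pair of convolutional layers to both G and D, keeping all existing layers trainable.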
### 2.3 Conditional image generation with GANs
Another issue worth considering is how to control the generation process
beyond the training dataset, a process that goes by the name of conditional
generation.
In this context, authors have developed different approaches based on training
the GANs not only with uncategorized images but also introducing categorical
information of the images included in the training dataset. For instance,
works like [5] used the categories of the training images as extra features in
the generation and discrimination processes.
Other authors addressed the problem by using both categorical information of
the training images and also their semantic segmentation, as in [6, 7, 17,
18]. In those works, the authors train the discriminator to distinguish real
and fake images and, at the same time, to match the given pixel-wise semantic
information.
An example of a use case for conditional GANs consists of generating human
body images simulating specific body poses. In this context, some authors
proposed new architectures for training GANs that receive the body poses of
the training images as an extra feature [19, 20, 21]. Once the GAN is trained,
users may generate images where the pose is freely chosen.
The following section describes our proposal, which guides non-conditional PG-
GANs to generate images of a specific subcategory within the training set, at
will and without the need to retrain the GAN models.
## 3 Guiding non-conditional pre-trained GANs
Our proposal is based on considering the $d$-dimensional input space of random
vectors that feed the generator network G of a non-conditional GAN once it is
trained to generate images of a specific category $C$, which we refer to as
$\mathrm{G}_{C}$. Without loss of generality, any category $C$ is
intrinsically composed of multiple (say $m$) subcategories $SC_{i}$, that is
$C=\displaystyle\bigcup_{i=1}^{m}{SC_{i}}$.
In response to these random input vectors, $\mathrm{G}_{C}$ generates images
corresponding to the category represented in the training set, but no control
mechanism is available to “tell” $\mathrm{G}_{C}$ to generate images of a
specific subcategory $SC_{k}$. In our method, we propose hand-picking the
random vector input to $\mathrm{G}_{C}$ to produce images belonging to the
desired subcategory, thus giving the user total control over the artificial
image generation process.
Our method is described step by step in the following paragraphs. Please refer
to Figure 1 for a graphical reference.
Figure 4: Encoder training.
Step 1) Encoder training: an encoder network is trained to learn the opposite
transformation from the one carried out by the trained generator
$\mathrm{G}_{C}$. To that end, the encoder is trained using pairs of
$d$-dimensional random input vectors and their correspondent generated images
directly extracted from the non-conditional generator network
$\mathrm{G}_{C}$, therefore making the training data supply virtually endless
(see Figure 4). As a result, we obtain an encoder model that given an image
generated by $\mathrm{G}_{C}$ returns the $d$-dimensional input vector which
would have created that image. After finishing the training process, the
encoder is capable of successfully returning input vectors from random images
not produced by the generator.
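A minimal sketch of this pair-generation loop is shown below. The latent dimensionality `D = 512` and `stub_generator` are placeholder assumptions standing in for the unspecified latent size and the pre-trained generator $\mathrm{G}_{C}$; the point is only how an effectively endless supply of (image, latent) training pairs is obtained:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # latent dimensionality d (assumed value; the paper does not state it)

def stub_generator(z):
    """Stand-in for the pre-trained generator G_C: maps a latent vector to a
    small fake 'image' array. A real setup would run the PG-GAN here."""
    return np.tanh(z[: 3 * 8 * 8]).reshape(8, 8, 3)

def encoder_training_pairs(n):
    """Yield (image, latent) pairs: the encoder learns image -> latent,
    i.e. the opposite transformation of the generator."""
    for _ in range(n):
        z = rng.standard_normal(D)
        yield stub_generator(z), z

images, latents = zip(*encoder_training_pairs(4))
assert images[0].shape == (8, 8, 3) and latents[0].shape == (D,)
```

Because fresh latent vectors can be drawn at will, the encoder never needs an annotated dataset: its supervision comes entirely from the generator itself.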
Step 2) Subcategory random vectors generation: The user collects real images
of the desired subcategory $SC_{k}$ and feeds the trained encoder with them.
This image collection process can be fully automated, as described in Section
4. In response, the encoder returns a $d$-dimensional random vector
$\vec{x}_{i}^{SC_{k}}$ corresponding to each input image. Notice that the
larger the number of collected images (referred to as $N$), the more accurate
the estimation of the distribution of the desired subcategory. Moreover, note
that the user can decide to add a new subcategory and obtain the corresponding
vectors through the encoder at any point.
Step 3) Subcategory prototype vector creation and sampling: the mean value and
standard deviation of each of the $d$ components of the vectors
$\vec{x}_{i}^{SC_{k}}$ ($\forall i=1...N$) output by the encoder in response
to the images of the desired subcategory $SC_{k}$ are computed and embedded in
the subcategory prototype vector $\vec{p}^{SC_{k}}$. Next, this prototype
vector is used to generate as many random vectors as desired by sampling $d$
normal random variables $X_{j}\sim
N\left(\mu_{j},\alpha\cdot\sigma_{j}\right)$ (with $\forall j=1..d$), where
$\mu_{j}$ and $\sigma_{j}$ are the mean value and the standard deviation of
the $j$th component of the vectors $\vec{x}_{i}^{SC_{k}}$ ($\forall i=1...N$),
and $\alpha$ is a scalar parameter. In our experiments, we heuristically tuned
the value of this parameter to 2.5. These random vectors follow the
distribution of the desired subcategory $SC_{k}$, so they will make the pre-
trained generator network $\mathrm{G}_{C}$ generate images belonging to the
specific subcategory of choice.
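Step 3 can be sketched with NumPy as follows; the latent dimensionality (512) and all names are illustrative assumptions, but the per-component mean/standard-deviation computation and the $N(\mu_{j},\alpha\cdot\sigma_{j})$ sampling with $\alpha=2.5$ follow the description above:

```python
import numpy as np

rng = np.random.default_rng(0)

def prototype_and_sample(encoded, n_samples, alpha=2.5):
    """encoded: (N, d) array of encoder outputs for the chosen subcategory.
    Computes mu_j and sigma_j per component, then samples component j of
    each new vector from N(mu_j, alpha * sigma_j)."""
    mu = encoded.mean(axis=0)
    sigma = encoded.std(axis=0)
    samples = rng.normal(loc=mu, scale=alpha * sigma,
                         size=(n_samples, encoded.shape[1]))
    return mu, sigma, samples

# Toy run: N=256 encoder outputs, assumed latent dimensionality d=512.
encoded = rng.standard_normal((256, 512))
mu, sigma, z_new = prototype_and_sample(encoded, n_samples=8)
assert z_new.shape == (8, 512) and mu.shape == (512,)
```

The rows of `z_new` would then be fed to the pre-trained generator to produce images of the chosen subcategory.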
Figure 5: Real training examples of the “mountains” category used to train the
non-conditional progressive growing GAN
## 4 Experiments and results
The experiments described in this section aim to evaluate our method to guide
a non-conditional progressive growing GAN.
We start by describing the data employed in our experiments. Subsequently, we
present the architecture of the PG-GAN, and an experiment involving a
subjective quality evaluation test to assess the quality of the images it
generates.
In the final experiment, we guide the non-conditional PG-GAN to generate
images from specific subcategories of choice. We describe the architecture of
the encoder network employed in the experiments, and then evaluate i) the
ability of the non-conditional network to effectively generate images that
correspond to the chosen subcategories, and ii) the perceived quality of the
generated images.
### 4.1 Dataset
The non-conditional PG-GAN was trained to generate images of the category $C=$
“mountains”. To that end, a total of 19,765 images of mountains were
downloaded from the Flickr image hosting service and used to train the GAN.
Some example images from the training dataset are presented in Figure 5.
On the other hand, to train the encoder network, we created 500,000 random
vectors, fed them to the pre-trained non-conditional PG-GAN, and collected the
corresponding images.
The images used for guiding the non-conditional GAN to generate images from a
specific subcategory were obtained using the Flickr API, downloading $N$
images that were tagged as one of the following selected subcategories
$SC_{k}=\{$“mountains + snow”, “mountains + sunset”, “mountains + trees”,
“mountains + night”, “mountains + rocks”$\}$.
### 4.2 Non-conditional progressive growing GAN
#### 4.2.1 Architecture
The PG-GAN used in these experiments follows the architecture presented by
Karras et al. in [16] and shown in Figure 3.
In a nutshell, the model starts training at a resolution of $4\times 4$ pixels
and progresses until reaching a final resolution of $128\times 128$ pixels.
The architectures of both the generator and the discriminator are based on
strided convolutions with leaky ReLU activations, and they constrain the signal
magnitude and competition during training through pixel-wise feature
normalization and an equalized learning rate. The whole model has over 45
million parameters and was trained on Google Colab for 200 epochs.
Figure 6 shows several examples of the artificial images of the mountain
category generated by the GAN. All examples portray high fidelity and
variance, successfully capturing the true distribution of the images provided
during the training of the model.
Figure 6: Examples of the “mountain” category artificial images generated by
the non-conditional progressive GAN
Figure 7: Normalized histogram of the scores given by the participants in the
subjective quality evaluation test of the non-conditional progressive GAN
#### 4.2.2 Artificial image quality evaluation
To evaluate the quality of the images generated by the non-conditional PG-GAN,
we carried out a subjective quality evaluation test, in which 50 participants
were asked to evaluate the degree of realism of 20 artificial images using a
rating scale from 0 to 10 (the greater the score, the greater the realism).
The normalized histogram of the obtained ratings is depicted in Figure 7. The
left-skewed distribution of scores reveals that the participants judged most
of the images as quite realistic, yielding a mean opinion score of 6.3.
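The two summary statistics used here are straightforward to compute; the sketch below shows the calculation with a short illustrative list of ratings (the real test gathered 50 participants rating 20 images each).

```python
import numpy as np

def mean_opinion_score(ratings):
    """Mean of the 0-10 realism ratings collected in the test."""
    return float(np.mean(ratings))

def normalized_histogram(ratings, n_bins=11):
    """Fraction of ratings in each integer score bin 0..10 (sums to 1)."""
    counts = np.bincount(np.asarray(ratings, dtype=int), minlength=n_bins)
    return counts / counts.sum()

# Illustrative ratings only, not the data from the experiment.
ratings = [5, 6, 7, 6, 8, 6, 7, 5, 6, 7]
mos = mean_opinion_score(ratings)
hist = normalized_histogram(ratings)
```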
### 4.3 Guiding the non-conditional GAN
The architecture of the encoder network used to guide the non-conditional GAN
is presented in Figure 8.
Figure 8: Architecture of the encoder network
As mentioned earlier, the encoder was trained on 500,000 pairs of random
vectors and the corresponding artificial images generated by the non-
conditional PG-GAN. The training took 4 epochs to converge.
The subcategory prototype vector was computed after programmatically
downloading a variable number $N$ of images from Flickr corresponding to the
desired subcategories described in Section 4.1.
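The prototype computation itself is an average of latent codes: encode each of the $N$ downloaded subcategory images and take the mean. The sketch below assumes this averaging scheme; `toy_encoder` is a stand-in for the trained encoder CNN of Figure 8, and the 512-dimensional latent space is an assumption.

```python
import numpy as np

def subcategory_prototype(encoder, images):
    """Encode each downloaded subcategory image and average the latent codes
    to obtain the subcategory prototype vector fed to the guided generator."""
    codes = np.stack([encoder(img) for img in images])
    return codes.mean(axis=0)

# Toy stand-in encoder: flattens and linearly projects the image; the real
# encoder is the trained CNN of Figure 8.
_rng = np.random.default_rng(0)
_P = _rng.standard_normal((8 * 8 * 3, 512)).astype(np.float32) / 100.0

def toy_encoder(img):
    return img.reshape(-1) @ _P

subcategory_images = _rng.standard_normal((64, 8, 8, 3)).astype(np.float32)  # N = 64
prototype = subcategory_prototype(toy_encoder, subcategory_images)
```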
The experiments to evaluate the quality of the images generated by the guided
non-conditional PG-GAN are presented next.
#### 4.3.1 Effect of $N$ on the perceived quality of the images
First, we evaluated how the number of images fed to the encoder to create the
subcategory prototype vector affects the quality of the images that are
subsequently generated by the guided non-conditional PG-GAN.
To that end, we presented 50 participants with images generated when the
subcategory prototype vector was computed after feeding the encoder network
with $N\in\\{64,128,256\\}$ images.
Figure 9: Examples of images of the “mountains+snow”, “mountains+sunset”,
“mountains+trees”, “mountains+night” and “mountains+rocks” subcategories
generated by the guided non-conditional “mountain” GAN
The mean opinion scores for these configurations were 5.9, 6.2 and 6.4,
respectively. Taking into account that the subjective quality evaluation of
the images created by the non-conditional progressive growing GAN yielded a
mean opinion score of 6.3, these results indicate that a few hundred images
of the desired subcategory suffice to generate images of that subcategory
with an equivalent level of perceived quality.
To illustrate this fact, Figure 9 presents images generated by the guided GAN
when asked to create images of the subcategories mentioned earlier with
$N=256$. It can be observed that the network succeeds in generating images of
the specific subcategory.
#### 4.3.2 Subcategory identification
In this experiment, we evaluate whether the participants in the subjective
evaluation test were able to correctly identify the subcategory of the images
generated by the guided GAN.
The experiment consisted of presenting the participants with 20 images that
had to be classified into the (“mountains+”) “snow”, “sunset”, “trees”,
“night” or “rocks” subcategories.
On average, the participants chose the correct subcategory with 85.2%
accuracy. The confusion matrix corresponding to this experiment is
presented in Table 1. Notice that the “snow” and “sunset” subcategories are
identified almost perfectly, while the “trees” subcategory is identified
with only 56.5% accuracy.
| Actual \ Predicted | Snow | Sunset | Trees | Rocks | Night |
|---|---|---|---|---|---|
| Snow | 99.5 | 0 | 0 | 0 | 0.5 |
| Sunset | 0 | 99.5 | 0.5 | 0 | 0 |
| Trees | 1 | 0.5 | 56.5 | 42 | 0 |
| Rocks | 2 | 0.5 | 7 | 91 | 0 |
| Night | 1 | 8 | 9 | 2.5 | 79.5 |

Table 1: Confusion matrix of the subcategory identification experiment (values in %).
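The reported 85.2% figure follows from the confusion matrix: with one class per column and balanced classes, the mean of the diagonal entries is the overall accuracy.

```python
import numpy as np

# Confusion matrix of Table 1 (rows: actual class, columns: predicted class,
# values in %), in the order snow, sunset, trees, rocks, night.
labels = ["snow", "sunset", "trees", "rocks", "night"]
cm = np.array([
    [99.5, 0.0, 0.0, 0.0, 0.5],
    [0.0, 99.5, 0.5, 0.0, 0.0],
    [1.0, 0.5, 56.5, 42.0, 0.0],
    [2.0, 0.5, 7.0, 91.0, 0.0],
    [1.0, 8.0, 9.0, 2.5, 79.5],
])

per_class_accuracy = np.diag(cm)           # % correct for each subcategory
mean_accuracy = per_class_accuracy.mean()  # 85.2 for balanced classes
```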
## 5 Conclusions
This work has introduced a novel method that gives users control over the
specific type of images generated by GANs. Our proposal enables the generation
of artificial images from a user-defined subcategory, guiding a non-
conditional GAN by means of a new architecture that includes an encoder
network to feed the GAN.
This process turns conditional image generation into a simpler task for
general users, achieving a degree of flexibility that a plain non-conditional
GAN cannot offer.
Our proposal considerably reduces the time needed to perform conditional
image generation, while maintaining similar results in terms of artificial
image quality. Additionally, since only a small set of images of the desired
subcategory is needed to guide the GAN, the process can be fully automated.
Moreover, the proposed method enables the user to select the desired image
subcategory on the fly, which allows new ideas to be tested in minutes, much
faster than the time that would be required to train a new dedicated GAN
from scratch.
Moving forward, we believe the subcategory prototype vector creation process
described in Section 3 could be further improved to better represent the
subcategory’s distribution, which would help the generator network yield more
variety among the images belonging to a single subcategory. Additionally,
studying how input vectors are transformed throughout the generation process,
and specifically trying to understand how dependent the perceived subcategory
is on each step of the network, could help better guide the model by not only
feeding it the right vector, but also further “steering” the generation
process in the desired direction.
## References
* Hartley [1958] H. O. Hartley, Maximum likelihood estimation from incomplete data, Biometrics 14 (1958) 174–194.
* Roh and Lee [2003] J. Roh, J.-H. Lee, Coherent text generation using entity-based coherence measures, in: Advances in Computation of Oriental Languages–Proceedings of the 20th International Conference on Computer Processing of Oriental Languages, 2003.
* Alqahtani et al. [2019] H. Alqahtani, M. Kavakli-Thorne, G. Kumar, Applications of generative adversarial networks (gans): An updated review, Archives of Computational Methods in Engineering (2019) 1–28.
* Dai et al. [2017] B. Dai, S. Fidler, R. Urtasun, D. Lin, Towards diverse and natural image descriptions via a conditional gan, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2970–2979.
* Chang et al. [2019] C.-H. Chang, C.-H. Yu, S.-Y. Chen, E. Y. Chang, Kg-gan: Knowledge-guided generative adversarial networks, arXiv preprint arXiv:1905.12261 (2019).
* Tang et al. [2020] H. Tang, X. Qi, D. Xu, P. H. Torr, N. Sebe, Edge guided gans with semantic preserving for semantic image synthesis, arXiv preprint arXiv:2003.13898 (2020).
* Qi et al. [2018] X. Qi, Q. Chen, J. Jia, V. Koltun, Semi-parametric image synthesis, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8808–8816.
* Goodfellow et al. [2014] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets (2014) 2672–2680.
* Makhzani et al. [2015] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, B. Frey, Adversarial autoencoders, arXiv preprint arXiv:1511.05644 (2015).
* Kingma and Welling [2013] D. P. Kingma, M. Welling, Auto-encoding variational bayes, arXiv preprint arXiv:1312.6114 (2013).
* Oord et al. [2016] A. v. d. Oord, N. Kalchbrenner, K. Kavukcuoglu, Pixel recurrent neural networks, arXiv preprint arXiv:1601.06759 (2016).
* Salimans et al. [2016] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, Improved techniques for training gans, in: Advances in neural information processing systems, 2016, pp. 2234–2242.
* Gulrajani et al. [2017] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. C. Courville, Improved training of wasserstein gans, in: Advances in neural information processing systems, 2017, pp. 5767–5777.
* Berthelot et al. [2017] D. Berthelot, T. Schumm, L. Metz, Began: Boundary equilibrium generative adversarial networks, arXiv preprint arXiv:1703.10717 (2017).
* Kodali et al. [2017] N. Kodali, J. Abernethy, J. Hays, Z. Kira, How to train your dragan, arXiv preprint arXiv:1705.07215 2 (2017).
* Karras et al. [2017] T. Karras, T. Aila, S. Laine, J. Lehtinen, Progressive growing of gans for improved quality, stability, and variation, arXiv preprint arXiv:1710.10196 (2017).
* Bau et al. [2019] D. Bau, J.-Y. Zhu, J. Wulff, W. Peebles, H. Strobelt, B. Zhou, A. Torralba, Seeing what a gan cannot generate, in: Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 4502–4511.
* Park et al. [2019] T. Park, M.-Y. Liu, T.-C. Wang, J.-Y. Zhu, Semantic image synthesis with spatially-adaptive normalization, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 2337–2346.
* Tang et al. [2019] H. Tang, D. Xu, G. Liu, W. Wang, N. Sebe, Y. Yan, Cycle in cycle generative adversarial networks for keypoint-guided image generation, in: Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 2052–2060.
* Dong et al. [2018] H. Dong, X. Liang, K. Gong, H. Lai, J. Zhu, J. Yin, Soft-gated warping-gan for pose-guided person image synthesis, in: Advances in neural information processing systems, 2018, pp. 474–484.
* Ma et al. [2018] L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, M. Fritz, Disentangled person image generation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 99–108.
# Hamiltonian chaos and differential geometry of configuration space-time
Loris Di Cairano [email protected] Institute of Neuroscience and
Medicine INM-9, and Institute for Advanced Simulation IAS-5, Forschungszentrum
Jülich, 52428 Jülich, Germany Department of Physics, Faculty of Mathematics,
Computer Science and Natural Sciences, Aachen University, 52062 Aachen,
Germany Matteo Gori [email protected] Physics and Materials Science
Research Unit, University of Luxembourg, L-1511 Luxembourg, Luxembourg Giulio
Pettini [email protected] Dipartimento di Fisica Università di Firenze, and
I.N.F.N., Sezione di Firenze, via G. Sansone 1, I-50019 Sesto Fiorentino,
Italy Marco Pettini [email protected] Aix-Marseille University,
Marseille, France CNRS Centre de Physique Théorique UMR7332, 13288 Marseille,
France
###### Abstract
This paper tackles Hamiltonian chaos by means of elementary tools of
Riemannian geometry. More precisely, a Hamiltonian flow is identified with a
geodesic flow on configuration space-time endowed with a suitable metric due
to Eisenhart. Until now, this framework has never been used to describe
chaotic dynamics, a gap that is filled in the present work. In a
Riemannian-geometric context, the stability/instability of the dynamics
depends on the curvature properties of the ambient manifold and is
investigated by means of the Jacobi–Levi-Civita (JLC) equation for geodesic
spread. It is confirmed that the dominant mechanism at the ground of chaotic
dynamics is parametric instability due to curvature variations along the
geodesics. A comparison is reported of the outcomes of the JLC equation
written also for the Jacobi metric on configuration space and for another
metric due to Eisenhart on an extended configuration space-time. This has been
applied to the Hénon-Heiles model, a two-degrees-of-freedom system. Then the
study has been extended to the 1D classical Heisenberg XY model with a large
number of degrees of freedom. Both the advantages and drawbacks of this
geometrization of Hamiltonian dynamics are discussed. Finally, a quick hint is
put forward concerning the possible extension of the differential-geometric
investigation of chaos in generic dynamical systems, including dissipative
ones, by resorting to Finsler manifolds.
Hamiltonian Chaos, Differential Geometry, Eisenhart metric
###### pacs:
05.20.Gg, 02.40.Vh, 05.20.- y, 05.70.- a
## I Introduction
As is well known, a generic property of nonlinear dynamical systems, described
by a system of differential equations, is the presence of deterministic chaos.
This means that despite the deterministic nature of a dynamical system of this
kind, that is, despite Cauchy’s theorem on the existence and uniqueness of the
solutions of a system of differential equations, the predictability of the
dynamics for arbitrary times is lost in the absence of stability of the
dynamics chaos ; wiggins ; chaos1 . Such a dramatic
consequence of the breaking of integrability of a three body problem was
already pointed out by Poincaré while describing the complexity of the
homoclinic tangles in the proximity of hyperbolic points in phase space
poincare . In the early 1960s, the consequences of homoclinic tangles in the
phase space of a nonlinear Hamiltonian system became visually evident for the
first time, thanks to the numerical integration of the equations of motion of
the celebrated Hénon-Heiles model henon . The numerically worked out surfaces
of section in phase space displayed what Poincaré had declared himself unable
even to dare to attempt drawing poincare . For many decades now, a huge amount of work has been done,
both numerical and mathematical, on deterministic chaos. However, especially
for many degrees of freedom systems, a theoretical explanation of the origin
of chaos has been lacking. Homoclinic intersections certainly provide an
elegant explanation of the origin of chaos in both dissipative and Hamiltonian
systems, but they apply only to systems with 1.5 or two degrees of freedom.
Beautiful theorems on Axiom A systems chaos and Anosov flows anosov cannot account for
the emergence of chaos in dynamical systems of physical relevance. An
independent attempt to explain the origin of chaos in Hamiltonian systems was
put forward by N.S.Krylov who resorted to the possibility of identifying a
Hamiltonian flow with a geodesic flow in configuration space to try to explain
the origin of the dynamical instability (which we nowadays call deterministic
chaos) that could explain the spontaneous tendency to thermalization of many
body systems. Krylov’s pioneering approach focused on the search for negative
curvatures in configuration space equipped with a suitable metric krylov .
Krylov’s work inspired abstract ergodic theory but did not go too far to
explain the origin of chaos in Hamiltonian dynamical systems. For instance, in
the case of the already mentioned Hénon-Heiles model, it turns out that no
region of negative curvature can be found in configuration space, therefore
Krylov’s intuition was discarded for a long time. However, more recently,
on the basis of numerical “experiments”, it has been shown that chaos in
Hamiltonian flows of physical relevance stems from another mechanism,
parametric instability, which will be discussed throughout this paper. The
Riemannian-geometric approach to explaining the origin of chaos in Hamiltonian
flows is based on two fundamental elements marco : i) the identification of a
Hamiltonian flow with a geodesic flow of a Riemannian manifold equipped with a
suitable metric, so that the geodesic equations
$\frac{d^{2}q^{i}}{ds^{2}}+\Gamma^{i}_{jk}\frac{dq^{j}}{ds}\frac{dq^{k}}{ds}=0~{}.$
(1)
coincide with Newton’s equations
$\frac{d^{2}q^{i}}{dt^{2}}=-\frac{\partial V(q)}{\partial q^{i}}~{}.$ (2)
In fact, a Hamiltonian flow whose kinetic energy is a quadratic form in the
velocities, that is, $\displaystyle
H=\frac{1}{2}a_{ik}p^{i}p^{k}+V(q_{1},\ldots,q_{N})$, is equivalent to the
solutions of Newton’s equations of motion stemming from the Lagrangian
function $\displaystyle
L=\frac{1}{2}a_{ik}\dot{q}^{i}\dot{q}^{k}-V(q_{1},\ldots,q_{N})$;
ii) the description of the stability/instability of the dynamics by means of
the Jacobi–Levi-Civita (JLC) equation for the geodesic spread measured by the
geodesic deviation vector field $\displaystyle J$ (which locally measures the
distance between nearby geodesics), which in a parallel-transported frame
reads
$\frac{d^{2}J^{k}}{ds^{2}}+R^{k}_{~{}ijr}\frac{dq^{i}}{ds}{J^{j}}\frac{dq^{r}}{ds}=0~{}.$
(3)
where $\displaystyle R^{k}_{~{}ijr}$ are the components of the Riemann-
Christoffel curvature tensor.
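As a concrete illustration of the dynamics that the geodesic equations (1) must reproduce, Newton's equations (2) for the Hénon-Heiles potential $V(x,y)=\frac{1}{2}(x^{2}+y^{2})+x^{2}y-\frac{1}{3}y^{3}$ can be integrated with a symplectic leapfrog scheme; the sketch below (initial conditions are illustrative, not taken from the paper) also checks energy conservation along the orbit.

```python
import numpy as np

def hh_force(q):
    """-grad V for the Henon-Heiles potential V = (x^2+y^2)/2 + x^2 y - y^3/3."""
    x, y = q
    return np.array([-x - 2.0 * x * y, -y - x * x + y * y])

def hh_energy(q, p):
    x, y = q
    V = 0.5 * (x * x + y * y) + x * x * y - y ** 3 / 3.0
    return 0.5 * float(p @ p) + V

def leapfrog(q, p, force, dt, n_steps):
    """Symplectic leapfrog (kick-drift-kick) integration of Newton's equations."""
    q, p = q.astype(float).copy(), p.astype(float).copy()
    for _ in range(n_steps):
        p += 0.5 * dt * force(q)
        q += dt * p
        p += 0.5 * dt * force(q)
    return q, p

q0, p0 = np.array([0.0, 0.1]), np.array([0.35, 0.0])
E0 = hh_energy(q0, p0)
q1, p1 = leapfrog(q0, p0, hh_force, dt=1e-3, n_steps=5000)
E1 = hh_energy(q1, p1)  # conserved to O(dt^2) by the symplectic integrator
```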
The most natural geometrization of Hamiltonian dynamics in a Riemannian
framework is a consequence of Maupertuis’ least action principle for
isoenergetic paths. (The natural and elegant geometric setting of Hamiltonian
dynamics is provided by symplectic geometry. This geometrical framework is
very powerful to study, for example, symmetries. However, symplectic manifolds
are not endowed with a metric, and without a metric we do not know how to
measure the distance between two nearby phase space trajectories and thus to
study their stability/instability through the time evolution of such a
distance.)
$\delta\ \int_{q(t_{0})}^{q(t_{1})}\ dt\ W(q,\dot{q})=0\ ,$ (4)
where $\displaystyle
W(q,\dot{q})=\\{[E-V(q)]a_{ik}{\dot{q}}^{i}{\dot{q}}^{k}\\}^{1/2}$, which is
equivalent to the variational definition of a geodesic line on a Riemannian
manifold, a line of stationary or minimum length joining the points
$\displaystyle A$ and $\displaystyle B$:
$\delta\ \int_{A}^{B}\ ds=0\ .$ (5)
If the subset of configuration space $\displaystyle
M_{E}=\\{(q_{1},\ldots,q_{N})\in{\mathbb{R}}^{N}|V(q_{1},\ldots,q_{N})<E\\}$
is given the non-Euclidean metric of components
$g_{ij}=2[E-V(q)]a_{ik}\ ,$ (6)
whence the infinitesimal arc element $\displaystyle
ds^{2}=2[E-V(q)]\,dq_{i}\,dq^{i}$, which along the physical motions (where the
kinetic energy equals $\displaystyle E-V$) gives $\displaystyle
ds^{2}=4[E-V(q)]^{2}dt^{2}$; then Newton’s equations (2) are retrieved
from the geodesic equations (1).
The JLC equation for the geodesic spread can be rewritten as book
$\frac{d^{2}J^{k}}{ds^{2}}+2\Gamma^{k}_{ij}\frac{dq^{i}}{ds}\frac{dJ^{j}}{ds}+\left(\frac{\partial\Gamma^{k}_{ri}}{\partial
q^{j}}\right)\,\frac{dq^{r}}{ds}\frac{dq^{i}}{ds}\,{J^{j}}=0\ ,$ (7)
which has general validity independently of the metric of the ambient
manifold.
Importantly, there are other Riemannian manifolds, endowed with different
metric tensors, to geometrize Hamiltonian dynamics book . Two of these
alternatives are concisely described in the following. One brings about the
standard tangent dynamics equation as geodesic spread (JLC) equation, whereas
the second one has never been investigated hitherto to describe chaos in
Hamiltonian flows. This gap is filled in the present work. The choice among
these manifolds is driven by practical computational reasons as will be
discussed in what follows.
## II Eisenhart Geometrization of Hamiltonian dynamics
It is worth summarizing some basic facts of a geometrization of Hamiltonian
dynamics which makes a direct and unexpected link between the standard tangent
dynamics equations, used to numerically compute Lyapunov exponents, and the
JLC equation for the geodesic spread book .
### II.1 Eisenhart Metric on Enlarged Configuration Space-Time $\displaystyle
M\times\mathbb{R}^{2}$
L. P. Eisenhart proposed a geometric formulation of Newtonian dynamics that
makes use, as ambient space, of an enlarged configuration space-time
$\displaystyle M\times\mathbb{R}^{2}$ of local coordinates
$\displaystyle(q^{0},q^{1},\ldots,q^{i},\ldots,q^{N},q^{N+1})$. This space can
be endowed with a nondegenerate pseudo-Riemannian metric Eisenhart whose arc
length is
$ds^{2}=\left(g_{e}\right)_{\mu\nu}\,dq^{\mu}dq^{\nu}=a_{ij}\,dq^{i}dq^{j}-2V(q)(dq^{0})^{2}+2\,dq^{0}dq^{N+1}~{},$
(8)
where $\displaystyle\mu$ and $\displaystyle\nu$ run from $\displaystyle 0$ to
$\displaystyle N+1$ and $\displaystyle i$ and $\displaystyle j$ run from 1 to
$\displaystyle N$. The relation between the geodesics of this manifold and the
natural motions of the dynamical system is contained in the following theorem
lichnerowicz :
Theorem. The natural motions of a Hamiltonian dynamical system are obtained as
the canonical projection of the geodesics of
$\displaystyle(M\times\mathbb{R}^{2},g_{e})$ on the configuration space-time,
$\displaystyle\pi:M\times\mathbb{R}^{2}\mapsto M\times\mathbb{R}$. Among the
totality of geodesics, only those whose arc lengths are positive definite and
are given by
$ds^{2}=c_{1}^{2}dt^{2}$ (9)
correspond to natural motions; the condition (9) can be equivalently cast in
the following integral form as a condition on the extra coordinate
$\displaystyle q^{N+1}$:
$q^{N+1}=\frac{c_{1}^{2}}{2}t+c^{2}_{2}-\int_{0}^{t}{L}\,d\tau~{},$ (10)
where $\displaystyle c_{1}$ and $\displaystyle c_{2}$ are given real
constants. Conversely, given a point $\displaystyle P\in M\times\mathbb{R}$
belonging to a trajectory of the system, and given two constants
$\displaystyle c_{1}$ and $\displaystyle c_{2}$, the point $\displaystyle
P^{\prime}=\pi^{-1}(P)\in M\times\mathbb{R}^{2}$, with $\displaystyle q^{N+1}$
given by (10), describes a geodesic curve in
$\displaystyle(M\times\mathbb{R}^{2},g_{e})$ such that $\displaystyle
ds^{2}=c_{1}^{2}dt^{2}$.
For the full proof, see lichnerowicz . Since the constant $\displaystyle
c_{1}$ is arbitrary, we will always set $\displaystyle c_{1}^{2}=1$ in order
that $\displaystyle ds^{2}=dt^{2}$ on the physical geodesics.
From (8) it follows that the explicit table of the components of the Eisenhart
metric is given by
$g_{e}=\left(\begin{array}[]{ccccc}-2V(q)&0&\cdots&0&1\\\
0&a_{11}&\cdots&a_{1N}&0\\\ \vdots&\vdots&\ddots&\vdots&\vdots\\\
0&a_{N1}&\cdots&a_{NN}&0\\\ 1&0&\cdots&0&0\\\ \end{array}\right)\ ,$ (11)
where $\displaystyle a_{ij}$ is the kinetic energy metric. The Christoffel
coefficients
$\Gamma^{i}_{jk}=\frac{1}{2}g^{im}\left(\frac{\partial g_{mk}}{\partial
q^{j}}+\frac{\partial g_{mj}}{\partial q^{k}}-\frac{\partial g_{jk}}{\partial
q^{m}}\right)$ (12)
for $\displaystyle g_{e}$ and with $\displaystyle a_{ij}=\delta_{ij}$ are
found to be non-vanishing only in the following cases
$\Gamma^{i}_{00}=-\Gamma^{N+1}_{0i}=\partial_{i}V~{},$ (13)
where $\displaystyle\partial_{i}=\partial/\partial q^{i}$ so that the geodesic
equations read
$\displaystyle\displaystyle\frac{d^{2}q^{0}}{ds^{2}}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0~{},$ (14)
$\displaystyle\displaystyle\frac{d^{2}q^{i}}{ds^{2}}+\Gamma^{i}_{00}\frac{dq^{0}}{ds}\frac{dq^{0}}{ds}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0\ ,$ (15)
$\displaystyle\displaystyle\frac{d^{2}q^{N+1}}{ds^{2}}+\Gamma^{N+1}_{0i}\frac{dq^{0}}{ds}\frac{dq^{i}}{ds}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0\ ;$ (16)
using $\displaystyle ds=dt$ one obtains
$\displaystyle\displaystyle\frac{d^{2}q^{0}}{dt^{2}}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0\ ,$ (17)
$\displaystyle\displaystyle\frac{d^{2}q^{i}}{dt^{2}}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle-\frac{\partial
V}{\partial q_{i}}~{},$ (18)
$\displaystyle\displaystyle\frac{d^{2}q^{N+1}}{dt^{2}}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle-\frac{d{L}}{dt}~{}.$
(19)
Equation (17) states only that $\displaystyle q^{0}=t$. The $\displaystyle N$
equations (18) are Newton’s equations, and (19) is the differential version of
(10).
The fact that in the framework of the Eisenhart metric the dynamics can be
geometrized with an affine parametrization of the arc length, i.e.,
$\displaystyle ds=dt$, will be extremely useful in the following, together
with the remarkably simple curvature properties of the Eisenhart metric.
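The table of components in Eq. (11) can be assembled and checked numerically. The short sketch below (with an arbitrary potential value, an illustrative choice) verifies that $g_e$ is nondegenerate: its determinant equals $-\det(a)$, independently of $V$.

```python
import numpy as np

def eisenhart_metric(V_q, a):
    """Assemble the (N+2)x(N+2) Eisenhart metric g_e of Eq. (11) at a point,
    given the potential value V_q and the N x N kinetic-energy metric a."""
    N = a.shape[0]
    g = np.zeros((N + 2, N + 2))
    g[0, 0] = -2.0 * V_q             # the -2V(q) entry
    g[1:N + 1, 1:N + 1] = a          # kinetic-energy block a_ij
    g[0, N + 1] = g[N + 1, 0] = 1.0  # off-diagonal 1's coupling q^0 and q^{N+1}
    return g

g = eisenhart_metric(V_q=3.7, a=np.eye(2))  # arbitrary V value, a_ij = delta_ij
det = np.linalg.det(g)  # equals -det(a) = -1: g_e is nondegenerate
```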
#### II.1.1 Curvature of $\displaystyle(M\times\mathbb{R}^{2},g_{e})$
The curvature properties of the Eisenhart metric $\displaystyle g_{e}$ are
much simpler than those of the Jacobi metric, and this is obviously a great
advantage from a computational point of view. The components of the
Riemann–Christoffel curvature tensor are
$R^{k}_{~{}ijr}=\left(\Gamma^{t}_{ri}\Gamma^{k}_{jt}-\Gamma^{t}_{ji}\Gamma^{k}_{rt}+\partial_{j}\Gamma^{k}_{ri}-\partial_{r}\Gamma^{k}_{ji}\right)\
.$ (20)
Hence, using Eq.(13), the only non-vanishing components of the curvature
tensor are
$R_{0i0j}=\partial_{i}\partial_{j}V$ (21)
hence the Ricci tensor has only one nonzero component
$R_{00}=\triangle V$ (22)
so that the Ricci curvature is
$K_{R}(q,\dot{q})=R_{00}\dot{q}^{0}\dot{q}^{0}\equiv\triangle V\ ,$ (23)
and the scalar curvature is identically vanishing
$\displaystyle{\mathscr{R}}(q)=0~{}.$
#### II.1.2 Geodesic Spread Equation for the Eisenhart Metric $\displaystyle
g_{e}$
The Jacobi equation (3) for $\displaystyle(M\times\mathbb{R}^{2},g_{e})$ takes
the form
$\displaystyle\displaystyle\frac{\nabla^{2}J^{0}}{ds^{2}}+R^{0}_{i0j}\frac{dq^{i}}{ds}J^{0}\frac{dq^{j}}{ds}+R^{0}_{0ij}\frac{dq^{0}}{ds}J^{i}\frac{dq^{j}}{ds}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0\ ,~{}~{}~{}$ (24)
$\displaystyle\displaystyle\frac{\nabla^{2}J^{i}}{ds^{2}}+R^{i}_{0j0}\left(\frac{dq^{0}}{ds}\right)^{2}J^{j}+R^{i}_{00j}\frac{dq^{0}}{ds}J^{0}\frac{dq^{j}}{ds}+R^{i}_{j00}\frac{dq^{j}}{ds}J^{0}\frac{dq^{0}}{ds}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0\ ,~{}~{}~{}$ (25)
$\displaystyle\displaystyle\frac{\nabla^{2}J^{N+1}}{ds^{2}}+R^{N+1}_{i0j}\frac{dq^{i}}{ds}J^{0}\frac{dq^{j}}{ds}+R^{N+1}_{ij0}\frac{dq^{i}}{ds}J^{j}\frac{dq^{0}}{ds}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0\ ,~{}~{}~{}$ (26)
and since $\displaystyle\Gamma^{0}_{ij}=0$ and
$\displaystyle\Gamma^{i}_{0k}=0$, we have $\displaystyle\nabla
J^{0}/ds=dJ^{0}/ds$, $\displaystyle R^{0}_{~{}ijk}=0$, and
$\displaystyle{\nabla J^{i}}/{ds}={dJ^{i}}/{ds}$; the only accelerating
components of the vector field $\displaystyle J$ are found to obey the
equations
$\frac{d^{2}J^{i}}{ds^{2}}+\frac{\partial^{2}V}{\partial q_{i}\partial
q^{k}}\left(\frac{dq^{0}}{ds}\right)^{2}J^{k}=0\ ,$ (27)
and using $\displaystyle dq^{0}/ds=1$ one is left with
$\frac{d^{2}J^{i}}{dt^{2}}+\frac{\partial^{2}V}{\partial q_{i}\partial q^{k}}\
J^{k}=0\ ,$ (28)
the usual tangent dynamics equations. This fact is a crucial point in the
development of a geometric theory of Hamiltonian chaos, because no new
definition of chaos is needed in the geometric context: the numerical Lyapunov
exponents computed by means of Eqs.(28) already belong to the geometric
treatment of chaotic geodesic flows.
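A sketch of how Eqs. (28) are used in practice to estimate the largest Lyapunov exponent: integrate the reference orbit together with the tangent vector $J$ and accumulate the logarithmic growth of its norm with periodic renormalization (a Benettin-style procedure). The Hénon-Heiles potential, step sizes, and the initial condition below are illustrative choices, not values from the paper.

```python
import numpy as np

def hh_force(q):
    x, y = q
    return np.array([-x - 2.0 * x * y, -y - x * x + y * y])

def hh_hessian(q):
    """Hessian of V = (x^2+y^2)/2 + x^2 y - y^3/3, the matrix in Eq. (28)."""
    x, y = q
    return np.array([[1.0 + 2.0 * y, 2.0 * x],
                     [2.0 * x, 1.0 - 2.0 * y]])

def largest_lyapunov(q, p, dt=1e-2, n_steps=50000, renorm=10):
    """Estimate the largest Lyapunov exponent from the tangent dynamics
    d^2 J/dt^2 = -Hess(V(q(t))) J integrated along the reference orbit."""
    rng = np.random.default_rng(1)
    w = rng.standard_normal(4)        # (J, dJ/dt) stacked and normalized
    w /= np.linalg.norm(w)
    J, dJ = w[:2].copy(), w[2:].copy()
    q, p = q.astype(float).copy(), p.astype(float).copy()
    log_sum, t = 0.0, 0.0
    for step in range(1, n_steps + 1):
        # leapfrog for the orbit and the (linear, time-dependent) tangent flow
        p += 0.5 * dt * hh_force(q)
        dJ += 0.5 * dt * (-hh_hessian(q) @ J)
        q += dt * p
        J += dt * dJ
        p += 0.5 * dt * hh_force(q)
        dJ += 0.5 * dt * (-hh_hessian(q) @ J)
        t += dt
        if step % renorm == 0:        # renormalize to avoid overflow
            norm = np.sqrt(J @ J + dJ @ dJ)
            log_sum += np.log(norm)
            J /= norm
            dJ /= norm
    return log_sum / t

lam = largest_lyapunov(np.array([0.0, 0.1]), np.array([0.49, 0.0]))
```

A positive converged estimate signals chaos; for a regular orbit the estimate decays towards zero as $\log(t)/t$.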
### II.2 Eisenhart Metric on Configuration Space-Time $\displaystyle
M\times\mathbb{R}$
Another interesting choice of the ambient space and Riemannian metric to
reformulate Newtonian dynamics in a geometric language was also proposed by
Eisenhart Eisenhart . Whether and how the description of Hamiltonian chaos in
this framework is consistent with the results obtained by the standard
treatment based on the tangent-dynamics/JLC equations discussed in the
preceding section has never been investigated before.
This geometric formulation makes use of an enlarged configuration space
$\displaystyle M\times\mathbb{R}$, with local coordinates
$\displaystyle(q^{0},q^{1},\ldots,q^{N})$, where a proper Riemannian metric
$\displaystyle G_{e}$ is defined to give
$ds^{2}=\left(G_{e}\right)_{\mu\nu}\,dq^{\mu}dq^{\nu}=a_{ij}\,dq^{i}dq^{j}+A(q)\,(dq^{0})^{2}~{},$
(29)
where $\displaystyle\mu$ and $\displaystyle\nu$ run from $\displaystyle 0$ to
$\displaystyle N$ and $\displaystyle i$ and $\displaystyle j$ run from 1 to
$\displaystyle N$, and the function $\displaystyle A(q)$ does not explicitly
depend on time. With the choice $\displaystyle 1/[2A(q)]=V(q)+\eta$ and under
the condition
$q^{0}=2\int_{0}^{t}V(q)\,d\tau+2\eta t\ ,$ (30)
for the extra variable, it can easily be seen that the geodesics of the
manifold $\displaystyle(M\times\mathbb{R},G_{e})$ are the natural motions of
standard autonomous Hamiltonian systems. Since
$\displaystyle\frac{1}{2}a_{ij}\dot{q}^{i}\dot{q}^{j}+V(q)=E$, where
$\displaystyle E$ is the energy constant along a geodesic, we can see that the
following relation exists between $\displaystyle q^{0}$ and the action:
$q^{0}=-2\int_{0}^{t}T\,d\tau+2(E+\eta)t\ .$ (31)
Explicitly, the metric $\displaystyle G_{e}$ reads as
$G_{e}=\left(\begin{array}[]{cccc}[2V(q)+2\eta]^{-1}&0&\cdots&0\\\
0&a_{11}&\cdots&a_{1N}\\\ \vdots&\vdots&\ddots&\vdots\\\
0&a_{N1}&\cdots&a_{NN}\\\ \end{array}\right)\ ,$ (32)
and together with the condition (31), this gives an affine parametrization of
the arc length with the physical time, i.e., $\displaystyle
ds^{2}=2(E+\eta)dt^{2}$, along the geodesics that coincide with natural
motions. The constant $\displaystyle\eta$ can be set equal to an arbitrary
value greater than the largest value of $\displaystyle|E|$ so that the metric
$\displaystyle G_{e}$ is nonsingular. This metric is a priori very interesting
because it seems to have better properties than both the Jacobi metric and
the previous metric $\displaystyle g_{e}$. In fact, at variance with the
Jacobi metric $\displaystyle g_{J}$ in Eq.(6), the metric $\displaystyle
G_{e}$ is nonsingular on the boundary $\displaystyle V(q)=E$; moreover, by
varying the total energy $\displaystyle E$ we get a family of different
metrics $\displaystyle g_{J}$, whereas, by choosing a convenient value of
$\displaystyle\eta$, the metric $\displaystyle G_{e}$ remains the same at
different values of the energy. The consequence is that a comparison
among the geometries of the submanifolds of
$\displaystyle(M\times\mathbb{R},G_{e})$—where the geodesic flows of different
energies “live”—is meaningful. By contrast, this is not true for
$\displaystyle(M_{E},g_{J})$. In some cases, the possibility of making this
kind of comparison can be important. With respect to the Eisenhart metric
$\displaystyle g_{e}$ on $\displaystyle M\times\mathbb{R}^{2}$ in the previous
section, the metric $\displaystyle G_{e}$ on $\displaystyle M\times\mathbb{R}$
defines a somewhat richer geometry, for example the scalar curvature of
$\displaystyle g_{e}$ is identically vanishing, which is not the case of
$\displaystyle G_{e}$.
In the case of a diagonal kinetic-energy metric, i.e. $\displaystyle
a_{ij}\equiv\delta_{ij}$, the only non-vanishing Christoffel symbols are
$\Gamma_{00}^{i}=\frac{(\partial V/\partial
q^{i})}{[2V(q)+2\eta]^{2}},~{}~{}~{}~{}~{}\Gamma_{i0}^{0}=-\frac{(\partial
V/\partial q^{i})}{[2V(q)+2\eta]}\ ,$ (33)
whence the geodesic equations
$\displaystyle\displaystyle\frac{d^{2}q^{0}}{ds^{2}}+\Gamma^{0}_{i0}\frac{dq^{i}}{ds}\frac{dq^{0}}{ds}+\Gamma^{0}_{0i}\frac{dq^{0}}{ds}\frac{dq^{i}}{ds}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0\ ,$ (34)
$\displaystyle\displaystyle\frac{d^{2}q^{i}}{ds^{2}}+\Gamma^{i}_{00}\frac{dq^{0}}{ds}\frac{dq^{0}}{ds}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 0\ ,$ (35)
which, using the affine parametrization of the arc length with time, i.e.,
$\displaystyle ds^{2}=2(E+\eta)dt^{2}$, with
$\displaystyle(dq^{0}/dt)=2[V(q)+\eta]$ from (30), give
$\displaystyle\displaystyle\frac{d^{2}q^{0}}{dt^{2}}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle 2\frac{d{V}}{dt}\ ,$
$\displaystyle\displaystyle\frac{d^{2}q^{i}}{dt^{2}}$
$\displaystyle\displaystyle=$ $\displaystyle\displaystyle-\frac{\partial
V}{\partial q_{i}},~{}~{}~{}~{}~{}~{}i=1,\dots,N~{},$ (36)
respectively. The first equation is the differential version of (30), and
equations (36) are Newton’s equations of motion.
#### II.2.1 Curvature of $\displaystyle(M\times\mathbb{R},G_{e})$
The basic curvature properties of the Eisenhart metric $\displaystyle G_{e}$
can be derived by means of the Riemann curvature tensor, which is found to
have the non-vanishing components
$R_{0i0j}=\frac{\partial_{i}\partial_{j}V}{(2V+2\eta)^{2}}-\frac{3(\partial_{i}V)(\partial_{j}V)}{(2V+2\eta)^{3}}\
,$ (37)
whence, after contraction, using $\displaystyle G^{00}=2V+2\eta$ the
components of the Ricci tensor are found to be
$\displaystyle\displaystyle R_{kj}$ $\displaystyle\displaystyle=$
$\displaystyle\displaystyle\frac{\partial_{k}\partial_{j}V}{(2V+2\eta)}-\frac{3(\partial_{k}V)(\partial_{j}V)}{(2V+2\eta)^{2}}\
,$ $\displaystyle\displaystyle R_{00}$ $\displaystyle\displaystyle=$
$\displaystyle\displaystyle\frac{\triangle V}{(2V+2\eta)^{2}}-\frac{3\|\nabla
V\|^{2}}{(2V+2\eta)^{3}}\ ,$ (38)
where $\displaystyle\triangle V=\sum_{i=1}^{N}{\partial^{2}V}/{\partial
q^{i\,2}}$, and thus we find that the Ricci curvature at the point
$\displaystyle q\in M\times\mathbb{R}$ and in the direction of the velocity
vector $\displaystyle\dot{q}$ is
$K_{R}(q,\dot{q})=\triangle V+R_{ij}\dot{q}^{i}\dot{q}^{j}$ (39)
and the scalar curvature at $\displaystyle q\in M\times\mathbb{R}$ is
${\mathscr{R}}(q)=\frac{\triangle V}{(2V+2\eta)}-\frac{3\|\nabla V\|^{2}}{(2V+2\eta)^{2}}\ .$ (40)
#### II.2.2 Geodesic Spread Equation for the Eisenhart Metric $\displaystyle
G_{e}$
Let us now give the explicit form of Eq.(3) in the case of $(M\times\mathbb{R},G_{e})$, the enlarged configuration space-time equipped with one of the Eisenhart metrics. Since the nonvanishing Christoffel coefficients are $\Gamma^{i}_{00}$ and $\Gamma^{0}_{0i}$, using the affine parametrization of the arc length with physical time we obtain
$\frac{d^{2}J^{k}}{dt^{2}}+\frac{2(\partial_{k}V)}{2V+2\eta}\frac{dJ^{0}}{dt}+\left[\partial_{kj}^{2}V-\frac{4(\partial_{k}V)(\partial_{j}V)}{2V+2\eta}\right]J^{j}=0\ ,$
$\frac{d^{2}J^{0}}{dt^{2}}-\frac{2(\partial_{i}V)\dot{q}^{i}}{2V+2\eta}\frac{dJ^{0}}{dt}-2(\partial_{i}V)\frac{dJ^{i}}{dt}-\left[\partial_{ij}^{2}V-\frac{2(\partial_{i}V)(\partial_{j}V)}{2V+2\eta}\right]\dot{q}^{i}J^{j}=0\ ,$
where the indexes $i,j,k$ run from $1$ to $N$. These equations have not yet been used to tackle Hamiltonian chaos, but they are certainly worth investigating.
As reported in Ref. cerruti1997lyapunov , the JLC equation (7) is rather complicated for the kinetic-energy (Jacobi) metric in (6); it simplifies considerably to (28) for $(M\times{\mathbb{R}}^{2},g_{e})$, and displays an intermediate level of complexity for $(M\times\mathbb{R},G_{e})$, as shown by Eqs.(II.2.2). This reflects a different degree of "richness" of the geometrical properties of the respective manifolds. It is therefore important to check whether all these geometrical frameworks provide the same information about regular and chaotic motions rick ; cerruti1996geometric ; cerruti1997lyapunov , a necessary condition that could a priori be questioned, as was done in Ref. cuervo2015non , even though the claims of that work were proved wrong in loris .
## III Order and chaos in a paradigmatic two-degrees of freedom model with
$\displaystyle(M\times\mathbb{R},G_{e})$
The first benchmark is performed on a two-degrees-of-freedom system. A paradigmatic candidate is the Hénon-Heiles model, described by the Hamiltonian
${H}=\frac{1}{2}\left(p_{1}^{2}+p_{2}^{2}\right)+\frac{1}{2}\left(q_{1}^{2}+q_{2}^{2}\right)+q_{1}^{2}q_{2}-\frac{1}{3}q_{2}^{3}\ .$ (42)
In this case, the JLC equation for the Jacobi metric takes exactly the form
$\frac{d^{2}J^{\perp}}{ds^{2}}+\frac{1}{2}\left[\frac{\triangle V}{(E-V)^{2}}+\frac{\|\nabla V\|^{2}}{(E-V)^{3}}\right]\,J^{\perp}=0~{},$ (43)
$\frac{d^{2}J^{\parallel}}{ds^{2}}=0$ (44)
where the expression in square brackets is the scalar curvature of the manifold $(M_{E},g_{J})$, $g_{J}$ is the metric tensor whose components are given in Eq.(6), and $J^{\perp}$ and $J^{\parallel}$ are the components of the geodesic separation vector transverse and parallel, respectively, to the velocity vector along the reference geodesic. This scalar curvature is evidently always positive, so chaotic motions can only be the consequence of parametric instability due to the variability of the scalar curvature along the geodesics. At first sight, the scalar curvature of $(M\times{\mathbb{R}},G_{e})$ given in Eq.(40) can also take negative values, as shown in Figure 1. On the one hand, this could add another source of dynamical instability besides parametric instability; on the other hand, the extension of the regions of negative curvature depends on the value of the arbitrary parameter $\eta$ entering the metric $G_{e}$, and it can be arbitrarily reduced, so that its contribution to the degree of chaoticity is not intrinsic. In Figure 2 the plane $(q_{1},q_{2})$ is taken as the surface of section of phase-space trajectories at $p_{2}=0$ and $p_{1}>0$.
Figure 1: Configuration space of the Hénon-Heiles model. The dashed lines
represent the equipotential boundaries: $\displaystyle V(q_{1},q_{2})=0.0833$
(cyan); $\displaystyle V(q_{1},q_{2})=0.125$ (green); $\displaystyle
V(q_{1},q_{2})=0.1667$ (yellow). Left panel: $\displaystyle\eta=0.045$. Right
panel: $\displaystyle\eta=0.1667$. The scale of colours represents different
intervals of values of the scalar curvature given in Eq.(40).
At the lowest energy, $E=0.0833$, when all the motions are regular, the trajectories are found to visit regions of negative curvature as well, whereas at higher energies, $E=0.125$ and $E=0.1667$, the chaotic trajectories considered display a large number of intersections in regions of positive curvature. In other words, negatively curved regions do not appear to play a relevant role in determining the chaotic instability of the dynamics.
Figure 2: Superposition of the configuration space of the Hénon-Heiles model
with the surfaces of section of phase space trajectories. Red dots correspond
to the crossing of the $\displaystyle(q_{1},q_{2})$ plane when $\displaystyle
p_{2}=0$ and $\displaystyle p_{1}>0$. Upper left panel corresponds to
$\displaystyle E=0.0833$; upper right panel corresponds to $\displaystyle
E=0.125$; lower panel corresponds to $\displaystyle E=0.1667$. For all these
cases $\displaystyle\eta=0.0833$.
As a matter of fact, the comparison of the results obtained by numerically integrating the stability equations (28), (II.2.2), and (43), together with the equations of motion of the Hénon-Heiles model at different energies and initial conditions, shows an excellent qualitative and quantitative agreement. The integration of the Hamilton equations of motion is performed with a symplectic integrator. The stability equations have been integrated with a fourth-order Runge-Kutta scheme. The choice of the energy values follows the historical paper by Hénon and Heiles, and the initial conditions for regular and chaotic motions are chosen according to the selections in Ref. cerruti1996geometric . The quantity reported in Figures 3 and 4 is
$\lambda(t)=\frac{1}{t}\log\left[\frac{\|{\dot{J}}(t)\|^{2}+\|J(t)\|^{2}}{\|{\dot{J}}(0)\|^{2}+\|J(0)\|^{2}}\right]$ (45)
where the separation vector $\displaystyle J$ is in turn the solution of the
three different stability equations.
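A minimal sketch of this kind of computation is given below: a plain fourth-order Runge-Kutta integration of the Hamilton equations together with the tangent dynamics $\ddot{J}=-\mathrm{Hess}(V)\,J$, evaluating $\lambda(t)$ of Eq.(45). The initial condition is an arbitrary, hypothetical choice, and a single RK4 scheme is used for everything, unlike the symplectic/RK4 split described above:

```python
import numpy as np

def grad_V(q):
    q1, q2 = q
    return np.array([q1 + 2.0 * q1 * q2, q2 + q1**2 - q2**2])

def hess_V(q):
    q1, q2 = q
    return np.array([[1.0 + 2.0 * q2, 2.0 * q1],
                     [2.0 * q1,       1.0 - 2.0 * q2]])

def rhs(y):
    # y = (q, p, J, dJ/dt): Hamilton equations plus tangent dynamics J'' = -Hess(V) J
    q, p, J, Jd = y[0:2], y[2:4], y[4:6], y[6:8]
    return np.concatenate([p, -grad_V(q), Jd, -hess_V(q) @ J])

def evolve(y0, dt, nsteps):
    """Fourth-order Runge-Kutta integration of the combined system."""
    y = np.array(y0, dtype=float)
    for _ in range(nsteps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

def lam(t, J, Jd, J0, Jd0):
    """Instability exponent lambda(t) of Eq. (45)."""
    return np.log((Jd @ Jd + J @ J) / (Jd0 @ Jd0 + J0 @ J0)) / t

y0 = np.array([0.0, 0.1, 0.35, 0.0, 1.0, 0.0, 0.0, 0.0])  # hypothetical initial condition
y = evolve(y0, dt=0.01, nsteps=20000)
print("lambda(t=200) =", lam(200.0, y[4:6], y[6:8], y0[4:6], y0[6:8]))
```

For a regular orbit $\lambda(t)$ so computed decays roughly as $t^{-1}$, while for a chaotic one it saturates at a positive value, which is the behaviour displayed in Figures 3 and 4.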
Figure 3: Numerical solutions of the tangent dynamics equation (28) (black
line) compared to the solution of equation (II.2.2) (blue line), and to the
solution of equation (43) (red line). Left panel: $\displaystyle E=0.0833$,
$\displaystyle\eta=0.0833$ and the initial condition is point ($\displaystyle
a$) of Figure 1 of cerruti1996geometric . The dashed green line is the
reference $\displaystyle t^{-1}$ slope for regular motions. Right panel:
$\displaystyle E=0.125$, $\displaystyle\eta=0.0833$ and the initial condition
is point ($\displaystyle d$) of Figure 3 of cerruti1996geometric .
The robustness of the results obtained by means of Eq.(II.2.2) for the manifold $(M\times\mathbb{R},G_{e})$ with respect to different choices of the free parameter $\eta$ has been checked and confirmed. It is in particular the close agreement between the results obtained with Eqs.(II.2.2) and (43) that confirms that chaos stems from parametric instability, because in the latter equation the scalar curvature is always positive. The right panel of Figure 3 shows a clear qualitative agreement among the three patterns $\lambda(t)$, but also some quantitative deviations that change neither with longer integrations nor with the value of $\eta$ in the case of $\lambda(t)$ computed with (II.2.2). Perhaps such a discrepancy stems from the inhomogeneity of the chaotic layer in phase space due to the presence of very small regular islands, an inhomogeneity detected differently by the different JLC equations. Indeed, this discrepancy is no longer observed at higher energy (right panel of Figure 4), where the chaotic layer appears more homogeneous. The reason why the geometrization of Hamiltonian dynamics by means of $(M\times\mathbb{R},G_{e})$ can be of prospective interest lies in its intermediate geometrical "richness".
Figure 4: Numerical solutions of the tangent dynamics equation (28) (black
line) compared to the solution of equation (II.2.2) (blue line), and to the
solution of equation (43) (red line). Here $\displaystyle E=0.1667$,
$\displaystyle\eta=0.0833$ and the initial condition for the left panel is
point ($\displaystyle a$) of Figure 5 of cerruti1996geometric , and for the
right panel point ($\displaystyle c_{2}$) of the same Figure.
On $(M\times{\mathbb{R}}^{2},g_{e})$ the scalar curvature vanishes identically, the Riemann curvature tensor reduces to the Hessian of the potential, and the Ricci tensor has only one non-vanishing component; by contrast, on $(M_{E},g_{J})$ the Riemann curvature tensor has ${\cal O}(N^{4})$ non-vanishing components, and at large $N$ the scalar curvature can turn out to be overwhelmingly negative without affecting the degree of chaoticity of the dynamics. The geometry of $(M\times\mathbb{R},G_{e})$ is definitely richer than that of $(M\times{\mathbb{R}}^{2},g_{e})$ and less complicated than that of $(M_{E},g_{J})$; therefore, mainly at large $N$, this framework can offer some computational advantage for more refined investigations of the geometric origin of the parametric instability of the geodesics. Loosely speaking, to give an idea of what a more refined geometrical investigation might mean: it has been shown book ; cecmar that integrability is related to the existence of Killing tensor fields on the mechanical manifolds; therefore the degree of breaking of the hidden symmetries associated with Killing tensor fields could be defined, investigated, and related to the existence of weak and strong chaos in Hamiltonian flows.
## IV One-dimensional $\displaystyle XY$-model in the Eisenhart metric
$\displaystyle(M\times\mathbb{R},G_{e})$
Let us now investigate how Hamiltonian chaos is described in this geometric framework at a large number of degrees of freedom. This is shown for a specific model, the one-dimensional classical XY model. The reason for choosing this model is its rich variety of dynamical behaviours: at low energy it is equivalent to a collection of weakly coupled harmonic oscillators, at asymptotically high energy it represents a set of freely rotating spins, and at intermediate energies it displays a strongly chaotic dynamics, as witnessed by the whole spectrum of Lyapunov exponents JSP . Moreover, for this model it was necessary to introduce an ad hoc adjustment of an otherwise successful geometric-statistical model for the analytic computation of the largest Lyapunov exponent CasClePet , carried out in the framework $(M\times{\mathbb{R}}^{2},g_{e})$. It is thus interesting to check whether another geometric framework can fix the problem more naturally.
The 1D $\displaystyle XY$ model describes a linear chain of $\displaystyle N$
spins/rotators constrained to rotate in a plane and coupled by a nearest-
neighbour interaction. This model is formally obtained by restricting the
classical Heisenberg model with $\displaystyle O(2)$ symmetry to one spatial
dimension. The potential energy of the $\displaystyle O(2)$ Heisenberg model
is $\displaystyle V=-{\cal I}\sum_{\langle i,j\rangle}{\bf s}_{i}\cdot{\bf
s}_{j}$, where the sum is extended only over nearest-neighbour pairs,
$\displaystyle{\cal I}$ is the coupling constant, and each $\displaystyle{\bf
s}_{i}$ has unit modulus and rotates in the plane. To each “spin”
$\displaystyle{\bf s}_{i}=(\cos q_{i},\sin q_{i})$, the velocity
$\displaystyle{\bf\dot{s}}_{i}=(-\dot{q}_{i}\sin q_{i},\dot{q}_{i}\cos q_{i})$
is associated, so that $\displaystyle{H}=\sum_{i=1}^{N}\frac{1}{2}\dot{\bf
s}_{i}^{2}-{\cal I}\sum_{\langle i,j\rangle}{\bf s}_{i}\cdot{\bf s}_{j}$. The
Hamiltonian of this model is then
$H(p,q)=\sum_{i=1}^{N}\frac{p_{i}^{2}}{2}+{\cal I}\sum_{i=1}^{N}[1-\cos(q_{i}-q_{i-1})]~{}.$ (46)
The canonical coordinates $\displaystyle q_{i}$ and $\displaystyle p_{i}$ are
thus given the meaning of angular coordinates and momenta. As already
mentioned above, this Hamiltonian system has two integrable limits. In the
low-energy limit it represents a chain of harmonic oscillators, as can be seen
by expanding the potential energy in power series
${H}(p,q)\approx\sum_{i=1}^{N}\left[\frac{p_{i}^{2}}{2}+\frac{{\cal I}}{2}(q_{i+1}-q_{i})^{2}\right]~{},$ (47)
where $p_{i}=\dot{q}_{i}$, whereas in the high-energy limit it represents a system of freely rotating objects, since the kinetic energy grows without bound with the total energy, at variance with the potential energy, which is bounded from above.
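The low-energy reduction from (46) to (47) is simply the quadratic truncation of the cosine, the first neglected term being quartic in the angle differences:

```latex
1-\cos(q_{i}-q_{i-1})
= \frac{1}{2}(q_{i}-q_{i-1})^{2}
- \frac{1}{24}(q_{i}-q_{i-1})^{4}
+ \mathcal{O}\big((q_{i}-q_{i-1})^{6}\big)\ ,
```

so that, keeping only the quadratic term, the Hamiltonian (46) reduces to the harmonic chain (47).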
### IV.1 Numerical solution of the JLC equation for
$\displaystyle(M\times\mathbb{R},G_{e})$
Let us proceed by comparing the outcomes of the integration of the equations (28) and (II.2.2) computed along the flow of the Hamiltonian (46). The standard tangent dynamics equations (28) can be split as
$\dot{J}^{i}_{q}=J^{i}_{p}\ ,\qquad\dot{J}^{i}_{p}=-Hess(V)_{ij}\ J^{j}_{q}\ ,$ (48)
which explicitly read
$\dot{J}^{i}_{q}=J^{i}_{p}\ ,$ (49)
$\dot{J}^{i}_{p}=-{\cal I}\cos(q_{i-1}-q_{i})J^{i-1}_{q}+{\cal I}[\cos(q_{i-1}-q_{i})+\cos(q_{i}-q_{i+1})]J^{i}_{q}-{\cal I}\cos(q_{i}-q_{i+1})J^{i+1}_{q}\ ,$
whence the largest Lyapunov exponent is worked out by computing
$\lambda_{1}=\lim_{t\rightarrow\infty}\frac{1}{t}\log\left[\frac{\|J_{q}(t)\|^{2}+\|J_{p}(t)\|^{2}}{\|J_{q}(0)\|^{2}+\|J_{p}(0)\|^{2}}\right]\ .$ (50)
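A minimal sketch of the computation of $\lambda_{1}$ from Eqs.(49)-(50) is given below. It uses a generic velocity-Verlet scheme with periodic renormalization of the tangent vector (the chain size, time step, and seed are arbitrary illustrative choices, and this is not the bilateral symplectic algorithm used for the results reported here):

```python
import numpy as np

def forces(q):
    # -dV/dq_i for V = sum_i [1 - cos(q_i - q_{i-1})], periodic chain, I = 1
    return np.sin(np.roll(q, -1) - q) - np.sin(q - np.roll(q, 1))

def hess_apply(q, Jq):
    # (Hess V) @ Jq, as in Eq. (49), with periodic boundary conditions
    c_left = np.cos(q - np.roll(q, 1))    # cos(q_i - q_{i-1})
    c_right = np.roll(c_left, -1)         # cos(q_{i+1} - q_i)
    return ((c_left + c_right) * Jq
            - c_left * np.roll(Jq, 1)
            - c_right * np.roll(Jq, -1))

def lyapunov(N=32, dt=0.02, nsteps=40000, renorm=100, seed=0):
    """Estimate lambda_1 of Eq. (50) by leapfrog plus tangent-vector renormalization."""
    rng = np.random.default_rng(seed)
    q = rng.uniform(0.0, 2.0 * np.pi, N)
    p = rng.normal(size=N)
    Jq, Jp = rng.normal(size=N), rng.normal(size=N)
    norm = np.sqrt(Jq @ Jq + Jp @ Jp)
    Jq, Jp = Jq / norm, Jp / norm
    log_sum = 0.0
    for n in range(nsteps):
        p_half = p + 0.5 * dt * forces(q)
        Jp_half = Jp - 0.5 * dt * hess_apply(q, Jq)
        q = q + dt * p_half
        Jq = Jq + dt * Jp_half
        p = p_half + 0.5 * dt * forces(q)
        Jp = Jp_half - 0.5 * dt * hess_apply(q, Jq)
        if (n + 1) % renorm == 0:
            norm = np.sqrt(Jq @ Jq + Jp @ Jp)
            log_sum += np.log(norm)
            Jq, Jp = Jq / norm, Jp / norm
    return log_sum / (nsteps * dt)

print("lambda_1 estimate:", lyapunov())
```

The periodic renormalization keeps the tangent vector from overflowing in the chaotic regime without altering the accumulated logarithmic growth rate.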
At the same time, the integration of the JLC equations (II.2.2), by setting
$\displaystyle{J}=(J^{0},J^{i})$, and choosing $\displaystyle\eta=E$, yields
another estimate of the instability exponent through the analogous definition
$\lambda_{G}=\lim_{t\rightarrow\infty}\frac{1}{t}\log\left[\frac{\|{J}(t)\|^{2}_{G_{e}}+\|\dot{{J}}(t)\|^{2}_{G_{e}}}{\|{J}(0)\|^{2}_{G_{e}}+\|\dot{{J}}(0)\|^{2}_{G_{e}}}\right]\ .$ (51)
We have solved the equations of motion of the 1D XY model (setting ${\cal I}=1$) and the tangent dynamics equations (49) using a bilateral symplectic algorithm lapo . The JLC equations (II.2.2) have been solved with a third-order predictor-corrector algorithm. Periodic boundary conditions have been used. Random initial conditions have been adopted by taking the $q_{i}$ uniformly distributed in the interval $[0,2\pi]$ and the $p_{i}$ Gaussian-distributed, suitably rescaled so that the kinetic energy makes up the difference between the preset total energy and the initial value of the potential energy resulting from the random assignment of the $q_{i}$. Figure 5 shows the comparison between the results
obtained at different values of the energy density $\displaystyle\epsilon=E/N$
for $\displaystyle\lambda_{1}(\epsilon)$ and
$\displaystyle\lambda_{G}(\epsilon)$ defined above. The results so obtained are globally in very good agreement. At energy densities in the interval between $\epsilon\simeq 0.2$ and $\epsilon\simeq 100$ the agreement is perfect, whereas at lower energy densities, below $\epsilon\simeq 0.2$, small discrepancies are found, which seem due to a slower time relaxation of $\lambda_{G}(t)$ with respect to $\lambda_{1}(t)$.
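The random initial-condition preparation described above can be sketched as follows ($N$ and the energy density are arbitrary illustrative values):

```python
import numpy as np

def xy_potential(q):
    # V = sum_i [1 - cos(q_i - q_{i-1})] with I = 1 and periodic boundary conditions
    return np.sum(1.0 - np.cos(q - np.roll(q, 1)))

def random_state(N, eps, rng):
    """Random (q, p) at total energy E = eps * N: q_i uniform in [0, 2*pi],
    p_i Gaussian, rescaled so the kinetic energy makes up E - V(q)."""
    q = rng.uniform(0.0, 2.0 * np.pi, N)
    p = rng.normal(size=N)
    kin_target = eps * N - xy_potential(q)
    if kin_target <= 0.0:            # resample if V(q) already exceeds E
        return random_state(N, eps, rng)
    p *= np.sqrt(2.0 * kin_target / (p @ p))
    return q, p

rng = np.random.default_rng(1)
q, p = random_state(64, eps=5.0, rng=rng)
E = 0.5 * (p @ p) + xy_potential(q)
print("relative energy error:", abs(E - 5.0 * 64) / (5.0 * 64))
```

By construction the total energy matches the preset value to machine precision, so all runs at a given $\epsilon$ sample the same energy surface.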
Of course, an unavoidable consistency check has to be performed on an integrable dynamics. This check has been performed on the flow of the Hamiltonian (47). The results obtained with the equations (28) and (II.2.2) are reported in Figure 6. As expected for non-chaotic dynamics, it is found
that $\displaystyle\lambda_{1}(t)$ decays as a straight line of slope
$\displaystyle-1$ in double logarithmic scale, and
$\displaystyle\lambda_{G}(t)$ decays with an oscillating pattern with a
$\displaystyle t^{-1}$ envelope. This has been checked at different
$\displaystyle N$ and energy values. Some cases are reported in Figure 6.
Figure 5: Lyapunov Exponents $\displaystyle\lambda_{1}$ (cyan circles) and
$\displaystyle\lambda_{G}$ (green triangles) versus the energy density
$\displaystyle\epsilon$ for a system of $\displaystyle N=150$ spins. The
parameter $\displaystyle\eta$ has been set as $\displaystyle\eta=E$. Figure 6:
Lyapunov Exponents $\displaystyle\lambda_{1}(t)$ (red, green and black lines)
versus $\displaystyle\lambda_{G}(t)$ (blue, magenta and cyan lines) for a
system of $\displaystyle N=2,100,1000$ harmonic oscillators, respectively. The
black dashed line is the $\displaystyle t^{-1}$ reference slope for a regular
dynamics. Here $\displaystyle\epsilon=1$ and $\displaystyle\eta=E$.
## V The effective scalar model for the JLC equation
In CasClePet an effective scalar approximation of the JLC equation (7) was worked out under suitable hypotheses. In a nutshell, at large $N$, under a hypothesis of quasi-isotropy - meaning that a coarse-grained mechanical manifold appears as a constant-curvature isotropic manifold - with a broad spatial spectrum of curvature variations at finer scales, the evolution of the norm of the geodesic separation vector is described by a stochastic oscillator equation
$\frac{d^{2}\psi(s)}{ds^{2}}+\left[\langle k_{R}\rangle+\langle\delta^{2}k_{R}\rangle^{1/2}\eta(s)\right]\psi(s)=0$
where $\eta(s)$ is a $\delta$-correlated Gaussian stochastic process of zero mean and unit variance, and
$\langle k_{R}\rangle=\frac{1}{N-1}\langle K_{R}\rangle\ ,\qquad\langle\delta^{2}k_{R}\rangle=\frac{1}{N-1}\left(\langle K_{R}^{2}\rangle-\langle K_{R}\rangle^{2}\right)$
where $K_{R}$ is the Ricci curvature of the mechanical manifold under consideration, and the averages are taken along a reference geodesic or as microcanonical averages on a suitable energy surface $\Sigma_{E}$. Putting $k_{0}=\langle k_{R}\rangle$ and $\sigma=\langle\delta^{2}k_{R}\rangle^{1/2}$,
$\tau_{1}=\Big{\langle}\frac{dt}{ds}\Big{\rangle}\frac{\pi}{2\sqrt{k_{0}+\sigma}}\ ,\qquad\tau_{2}=\Big{\langle}\frac{dt}{ds}\Big{\rangle}\frac{k_{0}^{1/2}}{\sigma}\ ,$ (52)
and hence defining $\displaystyle\tau^{-1}=2(\tau_{1}^{-1}+\tau_{2}^{-1})$, an
analytic expression for a geometric Largest Lyapunov Exponent is given by
CasClePet
$\lambda(k_{0},\sigma,\tau)=\frac{1}{2}\left(\Lambda-\frac{4k_{0}}{3\Lambda}\right)\ ,\qquad\Lambda=\left(\sigma^{2}\tau+\sqrt{\left(\frac{4k_{0}}{3}\right)^{3}+\sigma^{4}\tau^{2}}\,\right)^{1/3}\ .$ (53)
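For reference, Eqs.(52)-(53) can be packaged into a small function; $\langle dt/ds\rangle$ is left as an input, and the values in the call below are arbitrary illustrative numbers:

```python
import math

def geometric_lyapunov(k0, sigma, dtds=1.0):
    """Effective-model estimate of the largest Lyapunov exponent, Eqs. (52)-(53)."""
    tau1 = dtds * math.pi / (2.0 * math.sqrt(k0 + sigma))
    tau2 = dtds * math.sqrt(k0) / sigma
    tau = 1.0 / (2.0 * (1.0 / tau1 + 1.0 / tau2))
    Lam = (sigma**2 * tau
           + math.sqrt((4.0 * k0 / 3.0)**3 + sigma**4 * tau**2))**(1.0 / 3.0)
    return 0.5 * (Lam - 4.0 * k0 / (3.0 * Lam))

print(geometric_lyapunov(k0=1.0, sigma=0.5))
```

In the limit $\sigma\to 0$ the expression vanishes, as it must for an unperturbed oscillator with constant curvature.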
This can be applied to the geometrization on the manifold
$\displaystyle(M\times{\mathbb{R}},G_{e})$ of Hamiltonian dynamics. In this
case the Ricci curvature reads as
$K_{R}(s)=\frac{1}{2(E+\eta)}\left(\Delta V-\frac{3\|\nabla V\|^{2}}{2V+2\eta}+\frac{\partial^{2}_{kj}V\,\dot{q}^{j}\dot{q}^{k}}{2V+2\eta}-\frac{3\,\partial_{j}V\,\dot{q}^{j}\,\partial_{k}V\,\dot{q}^{k}}{(2V+2\eta)^{2}}\right)\equiv\frac{K_{R}(t)}{2(E+\eta)}$ (54)
and using the arc-length parametrization $\displaystyle
ds^{2}=2(E+\eta)dt^{2}$ with physical time, we can compute by means of
Eqs.(53) an analytic prediction of $\displaystyle\lambda_{G}(\epsilon)$ for
$\displaystyle(M\times{\mathbb{R}},G_{e})$ and compare it to the outcome
obtained for $\displaystyle(M\times{\mathbb{R}}^{2},g_{e})$.
The first step consists in computing the average Ricci curvature and its variance for the two manifolds at different values of the energy density. We can limit these computations to a single value of $N$ for which the asymptotic values of $\langle k_{R}\rangle$ and $\langle\delta^{2}k_{R}\rangle$ are already attained (see CasClePet ). Moreover, for non-integrable systems, according to the Poincaré-Fermi theorem the whole constant-energy surface is accessible to the dynamics, and since chaos entails phase-space mixing, sufficiently long integration times yield good estimates of the microcanonical averages of the observables of interest. Figures 7 and 8 provide the comparison between $\langle k_{R}\rangle$ and $\langle\delta^{2}k_{R}\rangle$ for the two manifolds.
Figure 7: Average of Ricci curvature $\displaystyle\langle K_{R}\rangle$ of
$\displaystyle{M\times{\mathbb{R}}^{2}}$ (red squares) and of $\displaystyle
M\times{\mathbb{R}}$ (green triangles), respectively, vs energy density
$\displaystyle\epsilon$ for a system of $\displaystyle N=150$. Here
$\displaystyle\eta=E$.
Somewhat unexpectedly, these average quantities are found to be practically coincident; it is thus not surprising that the application of the effective scalar model for the JLC equation recalled above yields outcomes in close agreement, as shown in Figure 9.
Figure 8: Average variance of the Ricci curvature $\displaystyle\sigma_{K}$ of
$\displaystyle{M\times{\mathbb{R}}^{2}}$ (red squares) and of $\displaystyle
M\times{\mathbb{R}}$ (green triangles) vs energy density
$\displaystyle\epsilon$ for a system of $\displaystyle N=150$ particles. Here
$\displaystyle\eta=E$. Figure 9: Geometric Lyapunov Exponents
$\displaystyle\lambda$ worked out for
$\displaystyle{M\times{\mathbb{R}}^{2}}$ (red squares) and for $\displaystyle
M\times{\mathbb{R}}$ (green triangles) vs energy density
$\displaystyle\epsilon$, for a system of $\displaystyle N=150$ particles. Here
$\displaystyle\eta=E$. Figure 10: Comparison between the two Geometric
Lyapunov Exponents $\displaystyle\lambda_{g_{e}}$ (red squares),
$\displaystyle\lambda_{G_{e}}$ (green triangles) and the standard numerical
computation of $\displaystyle\lambda_{1}$ (cyan circles) vs energy density
$\displaystyle\epsilon$ for a system of $\displaystyle N=150$. Here
$\displaystyle\eta=E$.
The comparison among the outcomes $\lambda_{g_{e}}(\epsilon)$ and $\lambda_{G_{e}}(\epsilon)$ of the "statistical" formula (53) and the standard computation of $\lambda_{1}(\epsilon)$ is displayed in Figure 10. The discrepancy, observed approximately for $\epsilon$ in the interval between $0.2$ and $2$, was explained in Ref. CasClePet , where it was shown that the numerical distribution of the Ricci curvature of ${M\times{\mathbb{R}}^{2}}$ actually displays a non-vanishing skewness, with an excess of negative values with respect to a Gaussian distribution. This information is lost in the effective scalar model for the JLC equation recalled above. An ad hoc displacement of $\langle k_{R}\rangle$, empirically accounting for the excess of negative values of $K_{R}$, made it possible to retrieve exactly the pattern of $\lambda_{1}(\epsilon)$ by means of the scalar effective model. A priori, the use of $(M\times{\mathbb{R}},G_{e})$ could have fixed the problem more naturally; disappointingly, this has not been the case, which calls for an improvement of the effective scalar model, possibly taking into account higher-order moments of the Ricci curvature distribution.
Finally, it is worth mentioning that the potential function of the Hamiltonian (46) has a large number of critical points $q_{c}$, i.e., points such that $\nabla V(q)|_{q=q_{c}}=0$ book ; near each critical point, in a Morse chart one has $V(q)=V(q_{c})-\sum_{i=1}^{k}q_{i}^{2}+\sum_{i=k+1}^{N}q_{i}^{2}$, where $k$ is the Morse index of the given critical point. The neighbourhoods of critical points are enhancers of chaos because, using the expression for $V(q)$ in the Morse chart together with $\nabla V(q_{c})=0$, both equations (28) and (II.2.2) diagonalize with $k$ unstable components in the proximity of a critical point of index $k$. Morse theory relates the critical points of a suitable real-valued function (here the potential function) to the topological properties of its level sets - here the equipotential manifolds in configuration space. In other words, the 1D XY model highlights the necessity of taking into account also some topological properties of the mechanical manifolds in order to improve the effective scalar model for the JLC equation.
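Explicitly, in the Morse chart near a critical point of index $k$ the Hessian of $V$ is $\mathrm{diag}(-2,\dots,-2,2,\dots,2)$, so the tangent dynamics (28) decouples into

```latex
\frac{d^{2}J^{i}}{dt^{2}} = -\,\partial^{2}_{ij}V\,J^{j}
\;\;\longrightarrow\;\;
\frac{d^{2}J^{i}}{dt^{2}} =
\begin{cases}
+2\,J^{i}\ , & i=1,\dots,k \quad \text{(exponentially unstable)},\\[2pt]
-2\,J^{i}\ , & i=k+1,\dots,N \quad \text{(oscillatory)},
\end{cases}
```

which makes the $k$ unstable components mentioned above explicit.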
## VI Discussion
Summarizing, the geometrization of Hamiltonian dynamics within the framework of the configuration space-time equipped with an Eisenhart metric, $(M\times{\mathbb{R}},G_{e})$, provides a correct distinction between regular and chaotic motions and is in qualitative and quantitative agreement with the two other geometrization frameworks reported above. As already remarked, the advantage of this framework could be the intermediate level of complexity/richness of its geometry with respect to $(M_{E},g_{J})$ and $(M\times{\mathbb{R}}^{2},g_{e})$, which could be useful for more elaborate investigations of the relation between geometry and chaos.
Let us conclude with an outlook at a prospective extension to generic
dynamical systems of the geometric description of chaos in systems of
differential equations
$\dot{x}^{i}=f^{i}(x^{1},\dots,x^{N})=f^{i}(\boldsymbol{x})$ (55)
that is, also in the case of dissipative systems. Differentiating Eq.(55) with respect to time, we obtain a new system of equations
$\ddot{x}^{i}=\sum_{j=1}^{N}\frac{\partial f^{i}(\boldsymbol{x})}{\partial x^{j}}\dot{x}^{j}=\sum_{j=1}^{N}\frac{\partial f^{i}(\boldsymbol{x})}{\partial x^{j}}f^{j}(\boldsymbol{x})$ (56)
that can be derived from the Lagrangian function
$L(\boldsymbol{x},\boldsymbol{\dot{x}})=\sum_{i=1}^{N}[{\dot{x}}^{i}-f^{i}(\boldsymbol{x})]^{2}$
(57)
and the usual Lagrange equations. To this Lagrangian $\displaystyle
L(\boldsymbol{x},\boldsymbol{\dot{x}})$ one associates a metric function
homogeneous of degree one in the velocities
$\Lambda(x^{a},{\dot{x}}^{a})=L(x^{i},{\dot{x}}^{i}/{\dot{x}}^{0}){\dot{x}}^{0}\
,\hskip 14.22636pta=0,1,\dots,N;\ i=1,\dots,N$ (58)
involving an extra velocity $\displaystyle\dot{x}^{0}$; through this metric
function a metric tensor expressed as
$g_{ab}(\boldsymbol{x},\boldsymbol{\dot{x}})=\frac{1}{2}\frac{\partial^{2}\Lambda^{2}}{\partial\dot{x}^{a}\partial\dot{x}^{b}}$
(59)
provides the tangent bundle of the configuration space of the system (55) with
a Finslerian structure. The geodesics of this space, minimizing the functional
$\displaystyle\int_{\tau_{0}}^{\tau_{1}}\Lambda(x^{a},{\dot{x}}^{a})d\tau$,
are given by marco ; rund
$\frac{d^{2}x^{a}}{ds^{2}}+\gamma^{a}_{bc}(\boldsymbol{x},\boldsymbol{\dot{x}})\frac{dx^{b}}{ds}\frac{dx^{c}}{ds}=0$
(60)
where $\gamma^{a}_{bc}(\boldsymbol{x},\boldsymbol{\dot{x}})$ are the connection coefficients derived from the velocity-dependent metric $g_{ab}(\boldsymbol{x},\boldsymbol{\dot{x}})$; these geodesics coincide with the solutions of Eqs.(56). A geodesic deviation equation is then defined also on Finsler manifolds, relating the stability/instability of the geodesics to the curvature properties of the space marco . This approach certainly deserves investigation as a way to tackle the chaotic dynamics of dissipative systems with the same methodological approach successfully applied to Hamiltonian systems.
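As a quick check of the variational construction above, the Euler-Lagrange equations of the Lagrangian (57) read

```latex
\frac{d}{dt}\frac{\partial L}{\partial\dot{x}^{i}}-\frac{\partial L}{\partial x^{i}}
= 2\Big(\ddot{x}^{i}-\frac{\partial f^{i}}{\partial x^{j}}\dot{x}^{j}\Big)
+ 2\sum_{j=1}^{N}\big(\dot{x}^{j}-f^{j}(\boldsymbol{x})\big)\frac{\partial f^{j}}{\partial x^{i}} = 0\ ,
```

and on the solutions of (55), where $\dot{x}^{j}=f^{j}(\boldsymbol{x})$, the last sum vanishes, so that Eqs.(56) are recovered.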
## Acknowledgments
M.P. participated in this work within the framework of the project MOLINT
which has received funding from the Excellence Initiative of Aix-Marseille
University - A*Midex, a French “Investissements d’Avenir” programme.
## References
* [1] D. V. Anosov. Geodesic flows on closed Riemannian manifolds with negative curvature. Proc. Steklov Math. Inst., 90:1–235, 1967.
* [2] L. Casetti. Efficient symplectic algorithms for numerical simulations of Hamiltonian flows. Physica scripta, 51(1):29, 1995.
* [3] L. Casetti, C. Clementi, and M. Pettini. Riemannian theory of Hamiltonian chaos and Lyapunov exponents. Physical Review E, 54(6):5969, 1996.
* [4] M. Cerruti-Sola, R. Franzosi, and M. Pettini. Lyapunov exponents from geodesic spread in configuration space. Physical Review E, 56(4):4872, 1997.
* [5] M. Cerruti-Sola and M. Pettini. Geometric description of chaos in two-degrees-of-freedom Hamiltonian systems. Physical Review E, 53(1):179, 1996.
* [6] C. Clementi and M. Pettini. A geometric interpretation of integrable motions. Celestial Mechanics and Dynamical Astronomy, 84(3):263–281, 2002.
* [7] E. Cuervo-Reyes and R. Movassagh. Non-affine geometrization can lead to non-physical instabilities. Journal of Physics A: Mathematical and Theoretical, 48(7):075101, 2015.
* [8] L. Di Cairano, M. Gori, and M. Pettini. Coherent Riemannian-geometric description of Hamiltonian order and chaos with Jacobi metric. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(12):123134, 2019.
* [9] L. P. Eisenhart. Dynamical trajectories and geodesics. Annals of Mathematics, pages 591–606, 1929.
* [10] J. Guckenheimer and P. Holmes. Nonlinear oscillations, dynamical systems and bifurcations of vector fields. Appl. Math. Sci. Series, 42, 1983.
* [11] M. Hénon and C. Heiles. The applicability of the third integral of motion: some numerical experiments. The Astronomical Journal, 69:73, 1964.
* [12] N. S. Krylov. Works on the foundations of statistical physics. Princeton Univ. Press, 1979.
* [13] A. Lichnerowicz and T. Teichmann. Théories relativistes de la gravitation et de l’électromagnétisme. PhT, 8(10):24, 1955.
* [14] A. J. Lichtenberg and M. A. Lieberman. Regular and chaotic dynamics. Springer-Verlag, Berlin, 1992.
* [15] R. Livi, M. Pettini, S. Ruffo, and A. Vulpiani. Chaotic behavior in nonlinear Hamiltonian systems and equilibrium statistical mechanics. Journal of statistical physics, 48(3-4):539–559, 1987.
* [16] The natural and elegant geometric setting of Hamiltonian dynamics is provided by symplectic geometry. This geometrical framework is very powerful to study, for example, symmetries. However, symplectic manifolds are not endowed with a metric, and without a metric we do not know how to measure the distance between two nearby phase space trajectories and thus to study their stability/instability through the time evolution of such a distance.
* [17] M. Pettini. Geometrical hints for a nonperturbative approach to Hamiltonian dynamics. Physical Review E, 47(2):828, 1993.
* [18] M. Pettini. Geometry and topology in Hamiltonian dynamics and statistical mechanics, volume 33. Springer Science & Business Media, 2007.
* [19] M. Pettini and R. Valdettaro. On the Riemannian description of chaotic instability in Hamiltonian dynamics. Chaos: An Interdisciplinary Journal of Nonlinear Science, 5(4):646–652, 1995.
* [20] H. Poincaré. Les méthodes nouvelles de la mécanique céleste, volume 3. Blanchard, Paris, 1987.
* [21] H. Rund. The differential geometry of Finsler spaces, volume 101. Springer Science & Business Media, 2012.
* [22] S Wiggins. Global bifurcations and Chaos. Applied Mathematial Sciences, 73, 1988.
# Lifshitz point at commensurate melting of 1D Rydberg atoms
Natalia Chepiga Department of Quantum Nanoscience, Kavli Institute of
Nanoscience, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The
Netherlands Frédéric Mila Institute of Physics, Ecole Polytechnique Fédérale
de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
(August 27, 2024)
###### Abstract
The recent investigation of chains of Rydberg atoms has brought back the
problem of commensurate-incommensurate transitions into the focus of current
research. In 2D classical systems, or in 1D quantum systems, the commensurate
melting of a period-p phase with p larger than 4 is known to take place
through an intermediate floating phase where correlations between domain walls
or particles decay only as a power law, but when p is equal to 3 or 4, it has
been argued by Huse and Fisher that the transition could also be direct and
continuous in a non-conformal chiral universality class with a dynamical
exponent larger than 1. This is only possible however if the floating phase
terminates at a Lifshitz point before reaching the conformal point, a
possibility debated since then. Here we argue that this is a generic feature
of models where the number of particles is not conserved because the exponent
of the floating phase changes along the Pokrovsky-Talapov transition and can
thus reach the value at which the floating phase becomes unstable.
Furthermore, we show numerically that this scenario is realized in an
effective model of the period-3 phase of Rydberg chains in which hard-core
bosons are created and annihilated three by three: The Luttinger liquid
parameter reaches the critical value $p^{2}/8=9/8$ along the Pokrovsky-Talapov
transition, leading to a Lifshitz point that separates the floating phase from
a chiral transition. Implications beyond Rydberg atoms are briefly discussed.
###### pacs:
75.10.Jm,75.10.Pq,75.40.Mg
## I Introduction
Rydberg atoms trapped with optical tweezers are becoming one of the major
playgrounds to investigate quantum matter. The laser detuning, which plays the
role of the chemical potential and controls the number of excited atoms, can
be easily tuned, and its interplay with the long-range van der Waals repulsion
and the creation and annihilation of excited states by a laser with
appropriate Rabi frequency has opened the way to a full experimental mapping
of the phase diagram in one dimension [Bernien2017; kibble_zureck]. This phase
diagram is dominated at large detuning by big lobes of density waves of simple
integer periods [Bernien2017; rader2019floating; kibble_zureck], and at small
detuning by a disordered phase with short-range, incommensurate
correlations [prl_chepiga; chepiga2020kibblezurek]. What happens between these
main phases is remarkably rich, however. At very large detuning, devil’s
staircases of incommensurate density waves [Bak_1982] are expected to be present
because of the long-range character of the repulsion between
atoms [rader2019floating]. But even at intermediate detuning, where these phases
are not present, the transition between the integer-period density waves and
the disordered incommensurate phase is a very subtle issue [fendley;
prl_chepiga; chepiga2020kibblezurek; giudici; samajdar]. The problem is the
1D quantum analog of the famous problem of commensurate-incommensurate (C-IC)
transitions in classical 2D physics [Ostlund; huse; HuseFisher; Selke1982;
HUSE1983363; HuseFisher1984; Duxbury; yeomans1985; HOWES1983169;
howes1983; houlrik1986; bartelt; Den_Nijs; Everts_1989; birgeneau;
SelkeExperiment; sato; PhysRevB.92.035154], a problem still partially
unsolved in spite of four decades of analytical and numerical work. With their
tunability, Rydberg atoms open new perspectives in the experimental
investigation of the entire boundary of these transitions, and in the possible
resolution of the puzzles still posed by the C-IC transition.
A priori, one could expect the transition out of period-$p$ phases to be
simply in a universality class controlled by the value of $p$ (Ising for
$p$=2, 3-state Potts for $p$=3, Ashkin-Teller for $p$=4, etc.). However, when
the transition is driven by the proliferation of domain walls between ordered
domains simply related to each other by translation, the disordered phase is
incommensurate. The asymmetry between domain walls induces a chiral
perturbation [HuseFisher] that is in most cases relevant and has to drive the
transition away from the standard universality classes. For $p\geq 5$, it is
well accepted that the transition becomes a Pokrovsky-Talapov [Pokrovsky_Talapov;
schulz1980] transition into a critical phase, followed by a
Kosterlitz-Thouless [Kosterlitz_Thouless] transition into a disordered phase with
exponentially decaying correlations [Den_Nijs]. By analogy to the classical 2D
problem, the intermediate critical phase is referred to as the floating phase.
For $p=3$ and $p=4$, there is no consensus however. If, in the disordered
phase, there is a line where the short-range correlations have the periodicity
of the adjacent ordered phase, as is the case for Rydberg chains, the
transition is expected to be a standard transition in the 3-state Potts and
Ashkin-Teller universality classes, respectively [HuseFisher1984]. This has been
explicitly demonstrated for models with infinite short-range repulsions for
period-3 [fendley; prl_chepiga] and period-4 phases [chepiga2020kibblezurek].
Far from this commensurate line, it is also by now fairly well established
numerically that, as for $p\geq 5$, there is a floating
phase [rader2019floating]. The main issue is what happens in between. In 1982,
Huse and Fisher [HuseFisher] argued that the floating phase may not appear immediately,
and that the transition, which cannot be in a standard universality class
because the chiral perturbation is relevant, could still be direct and
continuous, but in a new universality class that they called chiral. The
presence of such a chiral transition is consistent with the interpretation of
recent Kibble-Zurek experiments on Rydberg chains [chepiga2020kibblezurek;
kibble_zureck]. However, it has not been possible so far to come up with a
compelling theoretical argument in favour of a Lifshitz point that would
terminate the floating phase at a distance from the Potts point for $p=3$ or
the Ashkin-Teller point for $p=4$, and in the absence of such an argument, the
issue remains controversial.
In the present paper, we come up with such an argument. We show that the
instability of the floating phase is driven by a property of the Pokrovsky-
Talapov transition that has apparently been overlooked so far, namely that the
Luttinger liquid exponent of the floating phase can change along this
transition. Then, if the Luttinger liquid exponent reaches the value at which
the floating phase becomes unstable, which is the mechanism behind the
Kosterlitz-Thouless transition from the floating phase into the disordered
one, the transition can no longer take place through an intermediate phase,
opening the way to a chiral transition. We further argue that the Pokrovsky-
Talapov transition is expected to have this property for models where the
number of particles is not conserved, and we prove it in the case of a 1D
model where particles are created and annihilated three by three, a model put
forward recently in the context of Rydberg atoms [PhysRevB.98.205118], but
already introduced earlier in the fermionic description of domain walls in the
context of the C-IC transition in 2D classical systems [Den_Nijs].
## II General argument
From now on, we will concentrate on the case $p=3$ for clarity. The argument
can be straightforwardly extended to $p=4$. Let us assume that there is a line
where the correlations remain commensurate in the disordered phase along which
the transition is in the 3-state Potts universality class, and that there is a
floating phase further away along the transition. Then either the floating
phase starts right away, as in the bottom panel of Fig.1, or it only starts at
a Lifshitz point different from the Potts point, as in the top panel of Fig.1.
In the language of 1D quantum physics, the floating phase is a Luttinger
liquid [giamarchi], and it is described by two parameters: i) The parameter $K$
that controls the decay of all correlation functions, often referred to as the
Luttinger liquid exponent; ii) The velocity $v$ that controls the small
momentum dispersion of the excitations. At a C-IC transition, this
intermediate phase is bounded by two very different transitions, and each of
them is a priori controlled by a single parameter:
- The parameter $K$ controls the Kosterlitz-Thouless (KT) transition into the
disordered phase with exponentially decaying correlations. This transition
occurs when an operator present in the model (or generated under the
renormalization group flow) becomes relevant, i.e. when its scaling dimension
becomes smaller than 2. For the operator that simultaneously creates $p$
domain walls or particles, this scaling dimension is equal to $p^{2}/4K$ in a
Luttinger liquid with exponent $K$ [sachdev_QPT], and this operator becomes
relevant for $K>K_{c}$ with $K_{c}=p^{2}/8$.
- The parameter $v$ controls the Pokrovsky-Talapov transition. At this
transition, $v$ goes to zero. The dispersion becomes quadratic, and the
dynamical exponent is equal to 2.
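As a sanity check of the first criterion, the relevance condition can be evaluated numerically. Below is a minimal Python sketch; the function names `scaling_dimension` and `K_critical` are ours, not from the paper:

```python
# Scaling dimension of the operator that simultaneously creates p
# domain walls (or particles) in a Luttinger liquid with exponent K.
# The operator is relevant when its dimension drops below 2, i.e.
# when K exceeds K_c = p^2 / 8.

def scaling_dimension(p, K):
    """Dimension p^2 / (4K) of the p-particle creation operator."""
    return p ** 2 / (4 * K)

def K_critical(p):
    """Luttinger exponent above which the operator becomes relevant."""
    return p ** 2 / 8

for p in (2, 3, 4, 5):
    print(f"p = {p}: K_c = {K_critical(p)}")
# p = 3 gives K_c = 9/8 and p = 4 gives K_c = 2.
```

Note that $K_{c}>1$ as soon as $p\geq 3$, which is the origin of the window $1\leq K\leq K_{c}$ discussed below.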
Figure 1: (Color online) Sketches of the two main possibilities for the phase
diagram of commensurate melting. a In the absence of chiral perturbation the
transition is conformal in the three-state Potts universality class (blue
point). If the Pokrovsky-Talapov transition does not lead to an empty phase,
the Luttinger liquid exponent changes along the Pokrovsky-Talapov transition.
It reaches the value $K=K_{c}$ of the Kosterlitz-Thouless transition at the
Lifshitz point (red dot), beyond which the transition is chiral in the
Huse-Fisher universality class. b When the phase below the Pokrovsky-Talapov
transition is empty, the Luttinger liquid exponent tends to the
non-interacting value $K=1$ when approaching this line. All constant $K$ lines
meet at the three-state Potts critical point.
There is a priori no reason for the Luttinger liquid parameter to have a
specific value along the Pokrovsky-Talapov (PT) transition since this
transition is controlled by the velocity. However, it is a well known fact
that the Luttinger liquid exponent is often constant along transition
lines [cazalilla; giamarchi], and this also applies to the Pokrovsky-Talapov
transition in certain cases. In particular, the PT transition describes the
transition at which a fermionic system starts to fill up. If the Hamiltonian
conserves the number of fermions, the system is empty on one side of the
transition. Then, on the other side, in the Luttinger liquid phase, the
density goes continuously to zero at the transition. In that case, and for
interactions that decay fast enough, the Luttinger liquid exponent is expected
to tend to the non-interacting value $K=1$ at the transition. Then the
floating phase is limited on one side by $K=K_{c}$, which is larger than 1 as
soon as $p\geq 3$, and on the other side by $K=1$. If the two lines merge at a
point along the transition, this point should correspond to the point where
all constant $K$ lines with $1\leq K\leq K_{c}$ meet. This possibility has
been discussed by Haldane, Bak and Bohr [haldane_bak] and by Schulz [schulz1983]
in the context of the quantum sine-Gordon model, in which case the point at
which the lines meet was shown to be in the $p$-state clock universality class
by symmetry. This is summarized in the sketch of the bottom panel of Fig.1,
where the point at which all constant $K$ lines meet is called 3-state Potts,
the standard terminology for $p=3$.
However, if the density of particles does not go to zero at the Pokrovsky-
Talapov transition, and this will in particular be the case if the number of
particles is not conserved, the Luttinger liquid exponent is not fixed at the
non-interacting value $K=1$ along this transition. Then, the constant $K$
lines do not have to meet at a single point, but they can terminate at
different points along the PT transition, as sketched in the top panel of
Fig.1. This opens the possibility of a Lifshitz point defined as the point
where the line $K=K_{c}$ hits the PT transition. This point is not a point of
special symmetry, and there is no reason for this point to be $p$-state clock.
If the Lifshitz point occurs before the Potts point, then between them the
transition must be the chiral transition predicted by Huse and
Fisher [HuseFisher].
## III Model with 3-site term
We will now show that this is precisely what happens in a hard-core boson
model recently proposed as a dual description of the period-$p=3$ transition
of 1D Rydberg atomsPhysRevB.98.205118 . This model is defined by the
Hamiltonian:
$H=\sum_{i}\left[-t(d^{\dagger}_{i}d_{i+1}+\mathrm{h.c.})-\mu n_{i}+\lambda(d^{\dagger}_{i}d^{\dagger}_{i+1}d^{\dagger}_{i+2}+\mathrm{h.c.})\right],$ (1)
Without loss of generality, we will fix $t=1$ in the following.
Figure 2: (Color online) Phase diagram of the hard-core boson model with
three-site term of Eq.1. It consists of three phases: (i) A disordered phase
(blue) at large $\mu$; (ii) A Z3 ordered phase (yellow) at small $\mu$ and not
too small $\lambda$; (iii) A floating phase (green) that starts at small
values of $\lambda$ for small $\mu$, and that extends up to $\lambda\approx 7$
upon approaching the disordered phase. The floating phase is separated from
the Z3 phase by a Kosterlitz-Thouless transition (red squares), and from the
disordered phase by a Pokrovsky-Talapov transition (blue circles). For larger
values of $\lambda$ the transition between the disordered phase and the Z3
phase is a direct one in the Huse-Fisher universality class (black diamonds).
The dotted lines show the cuts presented in Fig. 6.
When $\lambda\neq 0$, this model does not conserve the number of particles,
and the $U(1)$ symmetry is reduced to Z3. The last term splits the Hilbert
space into three sectors distinguished by the total filling modulo 3. The Z3
symmetry is broken if these sectors have the same ground state energy, and
unbroken otherwise. We have studied this model numerically with large-scale
density matrix renormalization group (DMRG) [dmrg1; dmrg2; dmrg3; dmrg4]
simulations on systems with up to 3000 sites keeping up to 2000 states and
truncating singular values below $10^{-8}$. Our numerical results are
summarized in the phase diagram of Fig.2. The phase diagram is symmetric
around $\mu=0$, so we only show and discuss the positive $\mu$ side. There are
two gapped phases: i) the disordered phase at large enough $\mu$, which is
commensurate with wave-vector zero and corresponds to a full system for
$\lambda=0$, and ii) the Z3 ordered phase, with short-range incommensurate
correlations. There is also a floating phase, a critical phase in the
Luttinger liquid universality class with algebraic incommensurate
correlations. Along the vertical line $\mu=0$, the wave-vector vanishes by
symmetry, making this line the commensurate line along which the transition
should be in the universality class of the 3-state Potts model. As we shall
see, the floating phase extends up to a Lifshitz point located at ($\mu\simeq
0.35,\lambda\simeq 7$), far from the commensurate line, hence far from the
3-state Potts point. Accordingly, beyond this Lifshitz point, the transition
must be a direct one in the Huse-Fisher chiral universality class [HuseFisher;
HuseFisher1984; PhysRevB.98.205118]. The floating phase is separated from the
disordered phase by a Pokrovsky-Talapov transition, and from the Z3 ordered
phase by a Kosterlitz-Thouless transition.
Note that the boundary of the disordered phase agrees with the numerical
results of Ref. [PhysRevB.98.205118]. In this reference, the authors also
reported that the Z3 ordered phase is gapped provided $\lambda$ is large
enough, implying that it might be gapless at small $\lambda$, but
they did not try to determine the boundary where the gap closes. In that
respect, our numerical results complement and correct those of
Ref. [PhysRevB.98.205118]. There is indeed a gapless phase at small $\lambda$,
but it extends to large values of $\lambda$ in the vicinity of the transition
to the disordered phase in the form of a very narrow floating phase. The
difficulty encountered by the authors of Ref. [PhysRevB.98.205118] in
identifying the universality class of the transition is probably a consequence
of this narrow floating phase.
Note also that, because of the dual nature of the model, the role of ordered
and disordered phases is exchanged with respect to Rydberg atoms. The period-3
phase of Rydberg atoms corresponds to the disordered phase of the model of
Eq.1, and the disordered phase of Rydberg atoms to its Z3 ordered phase.
The precise form of this phase diagram has been reached by a careful numerical
identification of the various phases and of the transitions between them that
we now review.
Figure 3: Kosterlitz-Thouless transition at $\mu=1$. a Luttinger Liquid
parameter as a function of $\lambda$. It increases from $K=1$ in the non-
interacting case at $\lambda=0$ to the critical value of the Kosterlitz-
Thouless transition, $K_{c}=9/8$, at $\lambda_{c}\approx 0.187$. The larger
values beyond that point (marked with a dashed line) are finite-size effects.
The Z3 ordered phase is gapped, and correlations no longer decay as a power
law. b Correlation length in the Z3 ordered phase. It is consistent with a
strong divergence upon approaching $\lambda_{c}$. c Scaling of the correlation
length in the Z3 ordered phase as a function of $1/\sqrt{\lambda-\lambda_{0}}$
in a semilog plot. The scaling is linear for $\lambda_{0}=\lambda_{c}\approx
0.187$ identified in panel a, as expected for a Kosterlitz-Thouless
transition, and it is concave for $\lambda_{0}=0.25$ and convex for
$\lambda_{0}=0.1$, confirming that the transition has to take place at a
finite value of $\lambda$. The solid line is a linear fit, dashed lines are
guides to the eye.
#### III.0.1 Floating phase and Kosterlitz-Thouless transition
In the non-interacting case $\lambda=0$, the model can be mapped onto
non-interacting spinless fermions. For $\mu<2$, the ground state is a partially filled
band up to a wave-vector $k_{F}$, and all correlations are critical. Along
this non-interacting line, the Luttinger liquid exponent is rigorously equal
to $K=1$, including at the Pokrovsky-Talapov commensurate-incommensurate
transition to the gapped phase at $\mu=2$.
Upon increasing $\lambda$, the correlations are expected to remain critical as
long as all operators acting as perturbations are irrelevant, i.e. have a
scaling dimension larger than 2. Now, the operator with the smallest scaling
dimension is expected to be the three boson operator. Indeed, the scaling
dimension of an operator creating $m$ particles is equal to $m^{2}/4K$, and
since the number of particles is conserved modulo 3, the only operators
allowed by symmetry correspond to creating $m=3n$ fermions with $n$ integer,
with an exponent $9n^{2}/4K$ that is minimal for $n=1$, i.e. for the term
creating $p=3$ bosons. Its exponent is equal to $9/4K$, so this operator is
irrelevant along the non-interacting line. It only becomes relevant when
$K=9/8$. As a consequence, the critical behaviour of the non-interacting line
must extend into a critical floating phase up to the constant-$K$ line $K=9/8$
in the $\lambda-\mu$ phase diagram, a property of the model not discussed in
Ref. [PhysRevB.98.205118].
Figure 4: Main properties of the Pokrovsky-Talapov transition. a Scaling of
the incommensurate wave-vector $q$ with the distance to the Pokrovsky-Talapov
transition. The solid lines are the results of fits with the Pokrovsky-Talapov
critical exponent $\beta=1/2$. The critical point $\mu_{c}$ identified with
these fits is used in panels b-e. b Evolution of the Luttinger liquid
parameter $K$. The system is in the floating phase when $K$ is below
$K_{c}=9/8$. Larger values of $K$ (shown as dashed lines) are finite size
effects since the Z3 phase is gapped with exponential correlations. The dotted
line is a linear extrapolation of the last 3 points. c Inverse of the
correlation length on both sides of the floating phase. The convex curve on
the left is consistent with the exponential divergence expected at the
Kosterlitz-Thouless transition. In the disordered phase, the correlation
length diverges with the exponent $\bar{\beta}\approx 0.49$, consistent with
the Pokrovsky-Talapov value $1/2$. d Average density across the Pokrovsky-
Talapov transition. The solid line is a fit with the Pokrovsky-Talapov
critical exponent $\bar{\beta}=1/2$. e-f Minus the second derivative of the
energy with respect to $\mu$ (the analog of the specific heat) across the
transition. It diverges with critical exponent $\alpha=1/2$ below the
transition and approaches a finite value as $(\mu-\mu_{c})\log(\mu-\mu_{c})$
in the disordered phase.
To extract the Luttinger liquid exponent inside the floating phase, we have
fitted the Friedel oscillations of the local density profile induced by the
open boundary conditions. Details can be found in the Methods section. To
demonstrate the validity of the criterion $K=9/8$ for the transition into the
disordered phase, we show in Fig.3 the evolution of $K$ along a vertical cut
$\mu=1$ inside the floating phase, and the correlation length $\xi$ extracted
from the density-density correlations in the Z3 ordered phase. Note that since
$K$ is expected to change only by $12.5\%$, high accuracy and sufficiently
large system sizes are required to detect such changes. At the KT transition,
the correlation length is expected to diverge very fast, as
$\xi\propto\exp\left(C/\sqrt{\lambda-\lambda_{c}}\right)$. This is consistent
with the very steep divergence of $\xi$, and plotting $\ln\xi$ as a function of
$1/\sqrt{\lambda-\lambda_{0}}$ for various values of $\lambda_{0}$ shows that
the behaviour is linear at $\lambda_{0}=\lambda_{c}$, and concave or convex,
respectively, away from it, where $\lambda_{c}$ is the value at which $K$
reaches $K=9/8$.
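The logic of this consistency check can be illustrated on synthetic data: if $\xi$ follows the KT form $\xi=A\exp(C/\sqrt{\lambda-\lambda_{c}})$, then $\ln\xi$ is a straight line in $1/\sqrt{\lambda-\lambda_{0}}$ only for $\lambda_{0}=\lambda_{c}$. The sketch below uses made-up parameters $A$ and $C$, not the actual DMRG data:

```python
import numpy as np

lam_c = 0.187                       # transition point of the vertical cut
A, C = 0.5, 1.5                     # illustrative KT amplitude and constant
lam = np.linspace(0.20, 0.40, 20)   # points inside the gapped Z3 phase
log_xi = np.log(A) + C / np.sqrt(lam - lam_c)   # KT form of ln(xi)

def linearity(lam0):
    """Squared Pearson correlation of ln(xi) vs 1/sqrt(lambda - lambda0)."""
    x = 1.0 / np.sqrt(lam - lam0)
    return np.corrcoef(x, log_xi)[0, 1] ** 2

print(linearity(lam_c))   # essentially 1: the plot is a straight line
print(linearity(0.10))    # below 1: the plot is curved (convex)
print(linearity(0.195))   # below 1: the plot is curved (concave)
```

In practice one scans $\lambda_{0}$ and retains the value that maximises the linearity, which is how the position of a KT transition can be cross-checked against the $K=9/8$ criterion.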
The boundary between the floating phase and the Z3 ordered phase is almost
horizontal and limited to small values of $\lambda$ up to $\mu\simeq 1.8$, but
then it turns up and slowly approaches the boundary to the disordered phase.
For $\lambda>1$, the floating phase is extremely narrow, with a width
$\Delta\mu<0.02$. The key qualitative question is whether the two lines get
asymptotically close as $\lambda\rightarrow+\infty$, or whether they meet at a
finite value of $\lambda$, which would signal the presence of a Lifshitz
point. To address this question, we now turn to a careful investigation of the
transition between the floating phase and the disordered phase.
#### III.0.2 Pokrovsky-Talapov transition
For $\lambda=0$, the transition at $\mu=2$ is just a transition between a
completely filled band for $\mu>2$ and a partially filled band for $-2<\mu<2$
in terms of spinless fermions. The density is equal to 1 for $\mu>2$ and
decreases with a singularity $(2-\mu)^{1/2}$ for $\mu<2$. In the partially
filled band, all correlation functions decay as power laws with an oscillating
prefactor $\cos(kr)$, where $k$ is a multiple of the Fermi wave-vector
$k_{F}$. In particular, the density-density correlations decay as
$\cos(qr)/r^{2}$, with $q=2k_{F}$. Since between $\mu=-2$ and $\mu=2$ the
Fermi wave-vector grows continuously from $0$ to $\pi$, the wave-vector $q$ is
generically incommensurate. Close to $\mu=2$, $q$ approaches 0 modulo $2\pi$
as $(2-\mu)^{1/2}$. At the transition point itself the velocity vanishes and
the dispersion is quadratic, so that the dynamical exponent is given by $z=2$.
For $-2<\mu<2$, the Luttinger liquid exponent is fixed at $K=1$. Finally, the
second derivative of the energy, the equivalent of the specific heat for
quantum transitions, vanishes identically above $\mu=2$ and diverges with
exponent $\alpha=1/2$ when $\mu\rightarrow 2$ from below.
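These free-fermion statements are easy to verify explicitly. With $t=1$ the dispersion is $\varepsilon(k)=-2\cos k-\mu$, so the band is filled for $|k|<k_{F}=\arccos(-\mu/2)$ and the density is $n=k_{F}/\pi$. A short Python check of the square-root singularity (illustrative only, not part of the DMRG analysis):

```python
import math

def density(mu):
    """Ground-state density of free spinless fermions with dispersion
    -2*cos(k) - mu: states with |k| < k_F = arccos(-mu/2) are filled."""
    if mu >= 2:
        return 1.0   # completely filled band
    if mu <= -2:
        return 0.0   # empty band
    return math.acos(-mu / 2) / math.pi

# Near mu = 2 the hole density vanishes as a square root:
# 1 - n(mu) ~ sqrt(2 - mu) / pi, so the ratio below tends to 1.
for delta in (1e-2, 1e-4, 1e-6):
    ratio = (1 - density(2 - delta)) * math.pi / math.sqrt(delta)
    print(delta, ratio)
```

The wave-vector $q=2k_{F}$ approaches $2\pi$ with the same square-root law, as stated above.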
This transition is a special example of a Pokrovsky-Talapov transition, and
most of its characteristics are generic to this universality class, but not
all. So let us review in detail the general properties of the Pokrovsky-
Talapov universality class. This is a very asymmetric transition with a
dynamical exponent $z=2$. On the commensurate side (the empty or full side for
free fermions), the correlation length diverges with an exponent $\nu=1/2$,
the specific heat goes to a constant with a cusped singularity due to a
logarithmic correction, as shown by Huse and Fisher [HuseFisher1984], and the
density is in general not constant but approaches its value at the critical
point without any power-law singularity. On the incommensurate side, the
system is described by a Luttinger liquid with a velocity that vanishes at the
transition and with an exponent $K$ that can take a priori any value. The
wave-vector of the correlations is expected to go to the commensurate one as
$|\mu_{c}-\mu|^{1/2}$, and the density increases or decreases from its value
at $\mu_{c}$ with a singularity $(\mu_{c}-\mu)^{1/2}$. Finally, as discussed
above, the Luttinger liquid exponent is not fixed by symmetry at the PT
transition.
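Collecting the exponents just listed (with $\mu\to\mu_{c}$ and $q_{c}$ the commensurate wave-vector), the Pokrovsky-Talapov transition is characterised by

```latex
% Pokrovsky-Talapov universality class: exponents quoted in the text.
\begin{align*}
  z &= 2 && \text{(quadratic dispersion, vanishing velocity)},\\
  \xi &\propto |\mu_c-\mu|^{-\nu},\ \nu=\tfrac12 && \text{(commensurate side)},\\
  q-q_c &\propto |\mu_c-\mu|^{1/2},\quad
  n(\mu)-n(\mu_c)\propto(\mu_c-\mu)^{1/2} && \text{(incommensurate side)},
\end{align*}
```

while the Luttinger liquid exponent $K$ is left unconstrained along the line.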
This Pokrovsky-Talapov universality class is expected to be realized upon
reducing $\mu$ from the disordered commensurate phase if the transition leads
to an intermediate floating phase with critical correlations. Since the
behaviour along the PT transition is central to our analysis, let us first
carefully check the properties of this transition for $\lambda>0$ but not too
large. Our numerical results for a horizontal cut at $\lambda=1$ are
summarized in Fig.4. All the properties expected for a PT transition are
realized to a high degree of accuracy. We extract the location of the PT
transition by fitting the values of $q$ as a function of chemical potential
$\mu$ to the form $(\mu_{c}-\mu)^{\bar{\beta}}$ with the critical exponent
$\bar{\beta}=1/2$ as shown in Fig.4(a). As one can see, finite-size effects
for the system sizes shown are already negligible. Considering the other
characteristics, the transition is clearly very asymmetric, and all expected
critical exponents are consistent with our numerical results. The density is
already significantly smaller than 1 at the transition, and it decreases with
a square root singularity upon entering the floating phase. Since the density
is not fixed to 1 at the transition, the system is not empty in hole language,
and the Luttinger liquid exponent is not fixed to 1. And indeed, although the
Luttinger liquid exponent decreases upon approaching the PT transition, it is
consistent with a value definitely larger than 1 at the transition.
Figure 5: (Color online) Luttinger liquid critical exponent $K$ as a function
of $\lambda$ along the Pokrovsky-Talapov critical line. Error bars are
estimated by extrapolating $K$ as shown in Fig. 8(c),(d) over different
subsets of data points and for different system sizes. The dashed line is a
linear extrapolation. It reaches the value $K=9/8$ at $\lambda\approx 7$.
Inset: Width $\Delta\mu$ of the floating phase as a function of $1/\lambda$.
#### III.0.3 Lifshitz point and chiral transition
To extract a quantitative estimate of the Luttinger liquid exponent at the
transition point, we have extrapolated the last 3-4 points with linear fits,
and we have performed the extrapolation over different sets of points and for
various system sizes to estimate the error bars. The evolution of the
Luttinger liquid exponent extracted in this way along the PT transition is
shown in Fig.5. This is the central numerical result of this paper: In
agreement with our hypothesis, the exponent $K$ increases steadily from its
non-interacting value $K=1$ at $\lambda=0$. It is impossible to follow it
numerically beyond $\lambda\simeq 3$ because the floating phase becomes too
narrow, but a linear extrapolation of the results beyond $\lambda=3$ suggests
that it will reach the critical value $K=9/8$ around $\lambda=7$, hence that
the floating phase has to terminate at a Lifshitz point located at the end of
the PT line at $\lambda\simeq 7$.
Figure 6: Inverse of the correlation length on both sides of the Lifshitz
point. a $\lambda=5$, below the Lifshitz point. Solid lines are fits in the
disordered phase with $|\mu_{c}-\mu|^{\nu^{\prime}}$. The value of the
critical exponent ${\nu^{\prime}}$ decreases when the system size increases,
and it is within 10% of the Pokrovsky-Talapov prediction ${\nu^{\prime}}=1/2$.
b $\lambda=10$, above the Lifshitz point. Solid lines are fits of the data
points on both sides of the transition with a unique critical transition point
and equal critical exponents $\nu={\nu^{\prime}}$. The extracted values of the
critical exponent remain well above $1/2$ and are in reasonable agreement with
the critical exponents $\nu={\nu^{\prime}}\simeq 2/3$ reported previously in
other studies of the chiral transition for the period-3 commensurate-
incommensurate transition. The data points used for the fits are marked with
filled symbols.
As an independent check, we have kept track of the width $\Delta\mu$ of the
floating phase as a function of $\lambda$ (see inset of Fig.5). A simple
polynomial fit as a function of $1/\lambda$ suggests that the floating phase
disappears at $1/\lambda\simeq 0.14$, in good agreement with $\lambda\simeq
7$.
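The extrapolation of the width can be sketched as follows, with synthetic data that by construction vanish at $1/\lambda=0.14$ (the real input is the measured DMRG widths, not this toy set):

```python
import numpy as np

x = np.linspace(0.20, 0.50, 10)                    # x = 1/lambda
width = 0.05 * (x - 0.14) + 0.3 * (x - 0.14) ** 2  # toy widths, zero at 0.14

# Fit a quadratic polynomial in 1/lambda and locate its positive root,
# i.e. the point where the floating phase closes.
coeffs = np.polyfit(x, width, 2)
roots = np.roots(coeffs)
x_close = max(r.real for r in roots if abs(r.imag) < 1e-12)
print(x_close)   # recovers 0.14, i.e. lambda close to 7
```

The same fit-and-root procedure applied to the measured widths is what yields $1/\lambda\simeq 0.14$ in the inset of Fig. 5.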
To further check this prediction, we have carefully looked at the nature of
the transition across two cuts that intersect the transition out of the
disordered phase at $\lambda=5$ and $\lambda=10$ shown in Fig.2. As shown in
Fig.6, there is a clear difference in the way the correlation length diverges
along these two cuts. For the lower cut, the correlation length is expected to
diverge exponentially at the KT transition, algebraically with exponent
$\nu^{\prime}=1/2$ at the PT transition, and to be infinite in between. In
Fig.6a one can clearly see the asymmetry between the left and right branches,
signaling the existence of two different quantum phase transitions.
Above the Lifshitz point, the transition is expected to be a direct chiral
transition in the Huse-Fisher [HuseFisher] universality class. In this case the
correlation length is expected to diverge on both sides of the transition with
the same exponent $\nu=\nu^{\prime}$. Solid lines show the results of the fit
of the data points on both sides of the transition with a single critical
exponent and a unique critical point $\mu_{c}$. The extracted value of the
critical exponent is consistent with $\nu=\nu^{\prime}\simeq 2/3$, in
agreement with recent quantum field theory results [PhysRevB.98.205118], with
numerical results on a classical model expected to have a transition in the
same universality class [nyckees2020identifying], and with the exact result
$\nu=\nu^{\prime}=2/3$ derived in Refs. [albertini; Baxter1989] for an
integrable version of the chiral Potts model [auyang1987] and extended by
Cardy to a family of self-dual models [Cardy1993].
## IV Summary and perspectives
To summarize, we have found a simple physical argument in favour of the
presence of Lifshitz points at commensurate melting in 1D models of Rydberg
atoms, and we have demonstrated that it rightly predicts the location of the
Lifshitz point in a model of hard-core bosons with 3-site terms. The core of
the argument relies on a simple property of the Pokrovsky-Talapov transition
in systems where the number of particles is not conserved, namely that the
Luttinger liquid exponent is not constant along this transition because it is
not fixed by the density. This argument applies to Rydberg atoms, where
excited states are created and annihilated by a laser with appropriate Rabi
frequency, and it provides a solid physical basis to the results recently
obtained on this problem. Interestingly enough, it probably also applies to
the old problem of commensurate melting of surface phases in the context of
which Huse and Fisher came up with the suggestion that the transition could be
in a new, non-conformal universality class if the non-vanishing fugacity of
the domain walls is properly taken into account. Indeed, the role of the
particles is played by the domain walls in these systems, and the fugacity
controls the density of dislocations. So with a non-vanishing fugacity one can
expect that the exponent of the floating phase will change along the
Pokrovsky-Talapov transition line.
It will be very interesting to revisit the investigation of various models in
quantum 1D and classical 2D physics along the lines of the present work. In
particular, it would be interesting to measure the exponent of the critical
phase in various models of commensurate melting along the Pokrovsky-Talapov
line, and hopefully to locate the Lifshitz point accurately using the
criterion that this exponent reaches the critical value of the KT transition.
Beyond the Lifshitz point itself, the possibility to determine the extent of
the floating phase on the basis of a numerical investigation of the Luttinger
liquid exponent also opens new perspectives in the field. Indeed, locating the
KT transition by looking at the correlation length is notoriously difficult
because it diverges so fast that it exceeds the accessible system sizes long
before the transition, and it is well known in the context of the XY model
that calculating the spin stiffness of the critical phase leads to much more
accurate results. Work is in progress along these lines.
Finally, the present results provide a strong motivation to further
investigate the properties of the chiral universality class, whose realization
at commensurate melting is more likely than ever, but whose characteristics
are still partly elusive.
## V Methods
### V.1 Extraction of the Luttinger liquid exponent
To extract the Luttinger liquid exponent inside the floating phase, we have
fitted the Friedel oscillations of the local density profile induced by the
open boundary conditions. An example of a typical fit is provided in Fig.7. In
the absence of incommensurability, boundary conformal field theory (CFT)
predicts the profile to be of the form $\propto(-1)^{x}/\left[\sin(\pi
x/(N+1))\right]^{K}$. To account for incommensurate correlations, we use the
modified version $\propto\sin(qx+\phi_{0})/\left[\sin(\pi
x/(N+1))\right]^{K}$, which is expected to describe the asymptotic form of the
scaling. Therefore, to reduce finite-size effects, we fit only the central
part of the profile sufficiently far from the edges, $x,(N-x)\gg 1$, as shown in
Fig.7(a). It turns out that the scaling dimension $K$ is sensitive to both the
interval of the fit and the error in the wave-vector $q$. The latter problem
is solved by increasing the accuracy of the fit to $\approx 10^{-8}$. The
values obtained for $K$ are then averaged out over different numbers of
discarded edge sites as shown in Fig.7(b). To get a reliable fit, the central
part used in the fit should contain sufficiently many helices. This prevents
us from getting results in the immediate vicinity of the Pokrovsky-Talapov
transition, where the wave-vector $q$ is very small, but, as we shall see, the
exponent $K$ varies smoothly as a function of $\mu$, so it is still possible
to get a precise estimate of $K$ in the vicinity of the PT line.
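The fitting procedure described above can be sketched as follows, on a synthetic density profile. The parameter values ($N$, $A$, $q$, $\phi_0$, $K$) and the number of discarded edge sites are hypothetical placeholders, not the paper's DMRG data.

```python
# Sketch: extracting the Luttinger liquid exponent K by fitting Friedel
# oscillations of the local density profile to the modified boundary-CFT form
#   A * sin(q*x + phi0) / [sin(pi*x/(N+1))]**K.
import numpy as np
from scipy.optimize import curve_fit

N = 600  # number of sites (hypothetical)

def cft_profile(x, A, q, phi0, K):
    return A * np.sin(q * x + phi0) / np.sin(np.pi * x / (N + 1)) ** K

x = np.arange(1, N + 1)
n_x = cft_profile(x, A=0.05, q=0.05 * np.pi, phi0=0.3, K=0.6)  # synthetic

# Fit only the central part of the profile, discarding `cut` edge sites,
# to reduce finite-size effects.
cut = 100
sel = slice(cut, N - cut)
popt, _ = curve_fit(cft_profile, x[sel], n_x[sel],
                    p0=[0.04, 0.05 * np.pi, 0.2, 0.5])
A_fit, q_fit, phi0_fit, K_fit = popt
```

In the paper's procedure, the resulting $K$ is additionally averaged over different numbers of discarded edge sites (Fig. 7b).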
Figure 7: (Color online) (a) Example of the local density profile obtained
with DMRG for $N=2100$ sites at $\lambda=1$ and $\mu=1.7945$ and centered
around its mean value (blue dots). The red line is a fit to the CFT prediction
$\propto\sin(qx+\phi_{0})/\left[\sin(\pi x/(N+1))\right]^{K}$. The value of
the incommensurate wave-vector extracted from this fit is $q\approx
0.01474688\pi$. The average error between the DMRG data and the fit is of the
order of $10^{-8}$. To reduce finite-size effects we only fit the central part
of the profile and discard a few hundred sites at the edges. (b) Luttinger
liquid exponent $K$ extracted from the fit as a function of the number of
discarded sites at each edge. To get a better estimate of $K$ we take the
average over an integer number of helices keeping a balance between the
distance from the edges and the size of the middle domain (white region).
We have extracted the Luttinger liquid parameter and the incommensurate wave-
vector for various values of $\lambda$. In addition to Fig.4a,b for
$\lambda=1$, we present our results for $\lambda=0.2$ and $\lambda=2$ in
Fig.8. Note that for $\lambda=2$ the width of the floating phase is already
very small, of the order of $7\cdot 10^{-3}$. The evolution of the Luttinger
liquid parameter along the Pokrovsky-Talapov transition is presented in Fig.5.
Figure 8: Luttinger liquid parameter and incommensurate wave-vector for
various cuts. a,b Luttinger liquid parameter $K$ and c,d incommensurate wave-
vector $q$ as a function of the chemical potential $\mu$ for coupling constant
$\lambda=0.2$ (a,c) and $\lambda=2$ (b,d). Dotted lines are guides to the eye.
The black dotted lines in panels a and b are linear fits of the 3-4 last
points; in panels c and d we fit all available points to
$|\mu_{c}-\mu|^{1/2}$. The error bars are smaller than the size of the
symbols. Similar plots for $\lambda=1$ are presented in Fig.4a,b.
### V.2 Density at the Pokrovsky-Talapov transition
We argue that the Luttinger liquid exponent is not constant along this
transition because it is not fixed by the density. In Fig.9 we summarize how
the density evolves across the PT transition. We extract the density by
averaging the local density $\langle n_{i}\rangle$. The interval over which we
average always lies between two local maxima. This way, even if the wave-
vector $q$ is close to zero, we obtain meaningful results. To reduce the edge
effects we start with maxima located at 100-200 sites from the edges for
$N=600$, and at $200-400$ sites for $N\geq 1200$.
In order to find the density $\langle n_{i}\rangle_{c}$ at the PT transition
we fit the data points above the transition with a straight line and extract
the value at the critical point determined from the incommensurate wave-vector
$q$ as shown in Fig.8c,d. Below the transition we fit the data with
$a|\mu-\mu_{c}|^{1/2}+\langle n_{i}\rangle_{c}$, where $a$ is the only fitting
parameter. As shown in panels b and c of Fig.9, the density changes
significantly along the PT line, and the change of density is correlated with
the non-constant value of $K$ along this line.
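A minimal sketch of the one-parameter square-root fit below the transition follows; the values of $\mu_c$, $\langle n_{i}\rangle_{c}$, and the density data are hypothetical placeholders, with $\mu_c$ standing in for the critical point determined from the wave-vector $q$.

```python
# Sketch: fitting the averaged density below the PT transition with
#   a * |mu - mu_c|**0.5 + n_c,
# where a is the only free parameter (mu_c and n_c are fixed beforehand).
import numpy as np
from scipy.optimize import curve_fit

mu_c = 1.66   # hypothetical critical point, from the fit of the wave-vector q
n_c = 0.310   # hypothetical density at the transition, from the linear fit above it

mu_below = np.array([1.58, 1.60, 1.62, 1.64])
n_below = 0.08 * np.abs(mu_below - mu_c) ** 0.5 + n_c  # synthetic data

def sqrt_law(mu, a):
    return a * np.abs(mu - mu_c) ** 0.5 + n_c

(a_fit,), _ = curve_fit(sqrt_law, mu_below, n_below, p0=[0.1])
```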
Figure 9: Density at the Pokrovsky-Talapov transition. a Average density
across the Pokrovsky-Talapov transition for various fixed $\lambda$. Circles
stand for the data points. The lines are the result of two types of fits:
linear above the transition, and of the form $a|\mu-\mu_{c}|^{1/2}+\langle
n_{i}\rangle_{c}$ below it. b Density along the Pokrovsky-Talapov line as a
function of $\lambda$. The errors are smaller than the size of the symbols. c
Density as a function of the Luttinger parameter $K$.
## VI Acknowledgments
We thank Titouan Dorthe, Matthjis Hogervorst, and Joao Penedones for useful
discussions. This work has been supported by the Swiss National Science
Foundation. The calculations have been performed using the facilities of the
University of Amsterdam and the facilities of the Scientific IT and
Application Support Center of EPFL.
## References
* (1) H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletic, and M. D. Lukin, “Probing many-body dynamics on a 51-atom quantum simulator,” Nature, vol. 551, p. 579, Nov 2017.
  * (2) A. Keesling, A. Omran, H. Levine, H. Bernien, H. Pichler, S. Choi, R. Samajdar, S. Schwartz, P. Silvi, S. Sachdev, P. Zoller, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, “Quantum Kibble-Zurek mechanism and critical dynamics on a programmable Rydberg simulator,” Nature (London), vol. 568, pp. 207–211, Apr 2019.
* (3) M. Rader and A. M. Läuchli, “Floating phases in one-dimensional Rydberg Ising chains.” arXiv:1908.02068, 2019.
* (4) N. Chepiga and F. Mila, “Floating phase versus chiral transition in a 1d hard-boson model,” Phys. Rev. Lett., vol. 122, p. 017205, Jan 2019.
* (5) N. Chepiga and F. Mila, “Kibble-Zurek exponent and chiral transition of the period-4 phase of Rydberg chains.” arXiv:2001.06698, 2020.
  * (6) P. Bak, “Commensurate phases, incommensurate phases and the devil’s staircase,” Reports on Progress in Physics, vol. 45, pp. 587–629, Jun 1982.
* (7) P. Fendley, K. Sengupta, and S. Sachdev, “Competing density-wave orders in a one-dimensional hard-boson model,” Phys. Rev. B, vol. 69, p. 075106, Feb 2004.
* (8) G. Giudici, A. Angelone, G. Magnifico, Z. Zeng, G. Giudice, T. Mendes-Santos, and M. Dalmonte, “Diagnosing Potts criticality and two-stage melting in one-dimensional hard-core boson models,” Phys. Rev. B, vol. 99, p. 094434, Mar 2019.
* (9) R. Samajdar, S. Choi, H. Pichler, M. D. Lukin, and S. Sachdev, “Numerical study of the chiral ${Z}_{3}$ quantum phase transition in one spatial dimension,” Phys. Rev. A, vol. 98, p. 023614, Aug 2018.
* (10) S. Ostlund, “Incommensurate and commensurate phases in asymmetric clock models,” Phys. Rev. B, vol. 24, pp. 398–405, Jul 1981.
* (11) D. A. Huse, “Simple three-state model with infinitely many phases,” Phys. Rev. B, vol. 24, pp. 5180–5194, Nov 1981.
* (12) D. A. Huse and M. E. Fisher, “Domain walls and the melting of commensurate surface phases,” Phys. Rev. Lett., vol. 49, pp. 793–796, Sep 1982.
* (13) W. Selke and J. M. Yeomans, “A Monte Carlo study of the asymmetric clock or chiral Potts model in two dimensions,” Zeitschrift für Physik B Condensed Matter, vol. 46, pp. 311–318, Dec 1982.
* (14) D. A. Huse, A. M. Szpilka, and M. E. Fisher, “Melting and wetting transitions in the three-state chiral clock model,” Physica A: Statistical Mechanics and its Applications, vol. 121, no. 3, pp. 363 – 398, 1983.
* (15) D. A. Huse and M. E. Fisher, “Commensurate melting, domain walls, and dislocations,” Phys. Rev. B, vol. 29, pp. 239–270, Jan 1984.
* (16) P. M. Duxbury, J. Yeomans, and P. D. Beale, “Wavevector scaling and the phase diagram of the chiral clock model,” Journal of Physics A: Mathematical and General, vol. 17, no. 4, p. L179, 1984.
  * (17) J. Yeomans and B. Derrida, “Bulk and interface scaling properties of the chiral clock model,” Journal of Physics A: Mathematical and General, vol. 18, pp. 2343–2355, Aug 1985.
* (18) S. Howes, L. P. Kadanoff, and M. Den Nijs, “Quantum model for commensurate-incommensurate transitions,” Nuclear Physics B, vol. 215, no. 2, pp. 169 – 208, 1983.
* (19) S. F. Howes, “Commensurate-incommensurate transitions and the Lifshitz point in the quantum asymmetric clock model,” Phys. Rev. B, vol. 27, pp. 1762–1768, Feb 1983.
* (20) J. M. Houlrik and S. J. K. Jensen, “Phase diagram of the three-state chiral clock model studied by Monte Carlo renormalization-group calculations,” Phys. Rev. B, vol. 34, pp. 325–329, Jul 1986.
  * (21) N. C. Bartelt, T. L. Einstein, and L. D. Roelofs, “Structure factors associated with the melting of a (3x1) ordered phase on a centered-rectangular lattice gas: Effective scaling in a three-state chiral-clock-like model,” Phys. Rev. B, vol. 35, pp. 4812–4818, Apr 1987.
* (22) M. den Nijs, “The domain wall theory of two-dimensional commensurate-incommensurate phase transitions,” Phase Transitions and Critical Phenomena, vol. 12, p. 219, 1988.
  * (23) H. U. Everts and H. Roder, “Transfer matrix study of the chiral clock model in the Hamiltonian limit,” Journal of Physics A: Mathematical and General, vol. 22, pp. 2475–2494, Jul 1989.
* (24) D. L. Abernathy, S. Song, K. I. Blum, R. J. Birgeneau, and S. G. J. Mochrie, “Chiral melting of the Si(113) (3x1) reconstruction,” Phys. Rev. B, vol. 49, pp. 2691–2705, Jan 1994.
* (25) J. Schreiner, K. Jacobi, and W. Selke, “Experimental evidence for chiral melting of the Ge(113) and Si(113) 3x1 surface phases,” Phys. Rev. B, vol. 49, pp. 2706–2714, Jan 1994.
  * (26) H. Sato and K. Sasaki, “Numerical study of the two-dimensional three-state chiral clock model by the density matrix renormalization group method,” Journal of the Physical Society of Japan, vol. 69, no. 4, pp. 1050–1054, 2000.
* (27) Y. Zhuang, H. J. Changlani, N. M. Tubman, and T. L. Hughes, “Phase diagram of the ${Z}_{3}$ parafermionic chain with chiral interactions,” Phys. Rev. B, vol. 92, p. 035154, Jul 2015.
* (28) V. L. Pokrovsky and A. L. Talapov, “Ground state, spectrum, and phase diagram of two-dimensional incommensurate crystals,” Phys. Rev. Lett., vol. 42, pp. 65–67, Jan 1979.
* (29) H. J. Schulz, “Critical behavior of commensurate-incommensurate phase transitions in two dimensions,” Phys. Rev. B, vol. 22, pp. 5274–5277, Dec 1980.
* (30) J. M. Kosterlitz and D. J. Thouless, “Ordering, metastability and phase transitions in two-dimensional systems,” Journal of Physics C: Solid State Physics, vol. 6, no. 7, p. 1181, 1973.
* (31) S. Whitsitt, R. Samajdar, and S. Sachdev, “Quantum field theory for the chiral clock transition in one spatial dimension,” Phys. Rev. B, vol. 98, p. 205118, Nov 2018.
* (32) T. Giamarchi, Quantum physics in one dimension. Internat. Ser. Mono. Phys., Oxford: Clarendon Press, 2004.
* (33) S. Sachdev, Quantum Phase Transitions. Cambridge University Press, 2011.
* (34) M. A. Cazalilla, R. Citro, T. Giamarchi, E. Orignac, and M. Rigol, “One dimensional bosons: From condensed matter systems to ultracold gases,” Rev. Mod. Phys., vol. 83, pp. 1405–1466, Dec 2011.
* (35) F. D. M. Haldane, P. Bak, and T. Bohr, “Phase diagrams of surface structures from Bethe-ansatz solutions of the quantum sine-Gordon model,” Phys. Rev. B, vol. 28, pp. 2743–2745, Sep 1983.
* (36) H. J. Schulz, “Phase transitions in monolayers adsorbed on uniaxial substrates,” Phys. Rev. B, vol. 28, pp. 2746–2749, Sep 1983.
* (37) S. R. White, “Density matrix formulation for quantum renormalization groups,” Phys. Rev. Lett., vol. 69, pp. 2863–2866, Nov 1992.
* (38) U. Schollwöck, “The density-matrix renormalization group,” Rev. Mod. Phys., vol. 77, pp. 259–315, Apr 2005.
* (39) S. Östlund and S. Rommer, “Thermodynamic limit of density matrix renormalization,” Phys. Rev. Lett., vol. 75, pp. 3537–3540, Nov 1995.
* (40) U. Schollwöck, “The density-matrix renormalization group in the age of matrix product states,” Annals of Physics, vol. 326, no. 1, pp. 96 – 192, 2011.
* (41) S. Nyckees, J. Colbois, and F. Mila, “Identifying the Huse-Fisher universality class of the three-state chiral Potts model.” arXiv:2008.08408, 2020.
* (42) G. Albertini, B. M. McCoy, J. H. Perk, and S. Tang, “Excitation spectrum and order parameter for the integrable N-state chiral Potts model,” Nuclear Physics B, vol. 314, no. 3, pp. 741 – 763, 1989.
* (43) R. J. Baxter, “Superintegrable chiral Potts model: Thermodynamic properties, an “inverse” model, and a simple associated Hamiltonian,” Journal of Statistical Physics, vol. 57, pp. 1–39, Oct 1989.
* (44) H. Au-Yang, B. M. McCoy, J. H. Perk, S. Tang, and M.-L. Yan, “Commuting transfer matrices in the chiral Potts models: Solutions of star-triangle equations with genus $>$ 1,” Physics Letters A, vol. 123, no. 5, pp. 219 – 223, 1987.
  * (45) J. L. Cardy, “Critical exponents of the chiral Potts model from conformal field theory,” Nuclear Physics B, vol. 389, no. 3, pp. 577 – 586, 1993.
# Conditional Local Convolution for Spatio-temporal Meteorological Forecasting
Haitao Lin,1,3 Zhangyang Gao,1 Yongjie Xu,1 Lirong Wu,1 Ling Li,2 Stan Z. Li1 (equal contributions)
###### Abstract
Spatio-temporal forecasting is challenging owing to the high nonlinearity of
temporal dynamics and the complex location-characterized patterns of spatial
domains, especially in fields like weather forecasting. Graph convolutions are
usually used to model the spatial dependency in meteorology and to handle the
irregular spatial distribution of sensors.
In this work, a novel graph-based convolution for imitating the meteorological
flows is proposed to capture the local spatial patterns. Based on the
assumption of smoothness of location-characterized patterns, we propose
conditional local convolution whose shared kernel on each node's local space is
approximated by feedforward networks, taking as input local coordinate
representations obtained by horizon maps into the cylindrical-tangent space.
The established unified standard for the local coordinate system preserves
geographic orientation. We further propose distance and orientation
scaling terms to reduce the impacts of irregular spatial distribution. The
convolution is embedded in a Recurrent Neural Network architecture to model
the temporal dynamics, leading to the Conditional Local Convolution Recurrent
Network (CLCRN). Our model is evaluated on real-world weather benchmark
datasets, achieving state-of-the-art performance with clear improvements. We
conduct further analysis on local pattern visualization, the model's framework
choice, the advantages of horizon maps, etc. The source code is available at
https://github.com/BIRD-TAO/CLCRN.
## 1\. Introduction
In classical statistical learning, spatio-temporal forecasting is usually
regarded as a multi-variate time series problem, and methods such as
autoregressive integrated moving average (ARIMA) with its variants are
proposed, but the stationary assumption is usually hard to satisfy. Recently,
the rise of deep learning approaches has attracted lots of attention. For
example, in traffic flow forecasting, Graph Neural Networks (GNNs) are
regarded as superior solutions to model spatial dependency by taking sensors
as graph nodes (Yu, Yin, and Zhu 2018; Li et al. 2018). Compared with the great
progress in traffic forecasting, works focusing on meteorology are scarce,
while the need for weather forecasting is increasing dramatically (Shi et al.
2015; Sønderby et al. 2020). In this work, we focus on spatio-temporal
meteorological forecasting tasks.
The task is challenging due to two main difficulties. First, the irregular
sampling of meteorological signals usually disables the classical
Convolutional Neural Networks (CNNs) which work well on regular mesh grid
signals on Euclidean domains such as 2-D planar images. Signals are usually
acquired from irregularly distributed sensors, and the manifolds from which
signals are sampled are usually non-planar. For example, sensors detecting
temperature are located unevenly on land and ocean, which are not fixed on
structured mesh grids, and meteorological data are often spherical signals
rather than planar ones. Second, the high temporal and spatial dependency
makes it hard to model the dynamics. For instance, different landforms show
totally distinct wind flow or temperature transferring patterns in weather
forecasting tasks and extreme climate incidents like El Nino (Broad 2002)
often cause non-stationarity for prediction.
GNNs yield effective and efficient performance for irregular spatio-temporal
forecasting, updating node representations by aggregating messages from their
neighbors, a process that can be likened to heat or wind flow between
localized areas on the earth's surface. As discussed, the
meteorological flow may demonstrate entirely distinct patterns in different
local regions. Inspired by the analogy and location-characterized patterns, we
aim to establish a graph convolution kernel, which varies in localized regions
to approximate and imitate the true local meteorological patterns.
Therefore, we propose our conditional local kernel and embed it in a graph-
convolution-based recurrent network for spatio-temporal meteorological
forecasting. The convolution is performed on the local space of each node,
constructed considering both the distance between nodes and their relative
orientation, with the kernel based mainly on the assumption of smoothness of
location-characterized patterns. In summary, our contributions are:
* •
Proposing a location-characterized kernel to capture and imitate the
meteorological local spatial patterns in its message-passing process;
* •
Establishing the spatio-temporal model with the proposed graph convolution
which achieves state-of-the-art performance in weather forecasting tasks;
* •
Conducting further analysis on learned local pattern visualization, framework
choice, local space and map choice, and ablation studies.
## 2\. Related Work
#### Spherical signal processing.
The spatial signals in meteorology are usually projected on a sphere, e.g.
earth surface. Different from regular mesh grid samples on plane, sampling
space of spherical signals demonstrates different manifold properties, which
needs a specially designed convolution to capture the spatial pattern, such as
multi-view-projection-based 2-D CNN (Coors, Condurache, and Geiger 2018), 3-D
mesh-based convolution (Jiang et al. 2019) and graph-based spherical CNN
(Perraudin et al. 2019). Subsequently proposed mesh-based convolutions have
remarkable hard-baked properties, such as the equivariance of (Cohen et al.
2018) and (Esteves et al. 2018), albeit at high computational cost and with
the requirement of mesh-grid data. For irregularly distributed signals,
graph-based neural networks are usually employed, representing points on the
sphere as nodes of the established graph, with fast implementation and good
performance (Defferrard et al. 2020). Our method also processes spherical
signals with graph-based methods, and harnesses properties of sphere manifold,
to model the local patterns in meteorology.
#### Spatio-temporal graph neural networks.
Graph neural networks perform convolution based on the graph structure, and
yield effective representations by aggregating or diffusing messages from or
to neighborhoods (Niepert, Ahmed, and Kutzkov 2016; Atwood and Towsley 2016;
Kipf and Welling 2017), or by filtering different frequencies based on the graph Laplacian
(Bruna et al. 2014; Defferrard, Bresson, and Vandergheynst 2017). After the
rise of GNNs (Gilmer et al. 2017; Veličković et al. 2018; Wu et al. 2021),
spatio-temporal forecasting models are mostly graph-based neural networks
thanks to their ability to learn representations of spatially irregularly
distributed signals, such as STGCN (Yu, Yin, and Zhu 2018), which convolves
spatial signals with spectral filters for traffic forecasting, and DCRNN (Li
et al. 2018), which achieves tremendous improvements on the same task by
employing diffusion convolution on graphs. To address the inability of these
methods to adaptively model location-characterized patterns, graph attention
(Guo et al. 2019) has been employed in a spatio-temporal model to learn the
adjacency relation among traffic sensors, and an adaptive graph recurrent
model (Bai et al. 2020) has been proposed to optimize different local patterns
according to higher-level node representations. Differing from these previous
approaches, the adaptively learned local patterns in our method are hard-baked
with the local smoothness of location-characterized patterns and are thus
capable of capturing the meteorological flow.
## 3\. Background
#### Problem setting.
Given $N$ correlated signals located on the sphere manifold $S^{2}$ at time
$t$, we can represent the signals as a (directed) graph
$\mathcal{G}=(\mathcal{V},\mathcal{E},\bm{A})$, where $\mathcal{V}$ is a node
set with $\mathcal{V}=\\{\bm{\mathrm{x}}_{i}^{S}=(x_{i,1},x_{i,2},x_{i,3})\in
S^{2}:i=1,2,\ldots,N\\}$, meaning that it records the positions of the $N$
nodes, each satisfying $||\bm{\mathrm{x}}_{i}^{S}||_{2}=1$. We denote positions
of nodes in Euclidean space by $\bm{\mathrm{x}}^{E}$, and on the sphere by
$\bm{\mathrm{x}}^{S}$. $\mathcal{E}$ is a set of edges and
$\bm{A}\in\mathbb{R}^{N\times N}$ is the adjacency matrix which can be
asymmetric. The signals observed at time $t$ of the nodes on $\mathcal{G}$ are
denoted by $\bm{F}^{(t)}\in\mathbb{R}^{N\times D}$. For the forecasting tasks,
our goal is to learn a function $P(\cdot)$ for approximating the true mapping
of historical $T^{\prime}$ observed signals to the future $T$ signals, that is
$\displaystyle[\bm{F}^{(t-T^{\prime})},\ldots,\bm{F}^{(t)};\mathcal{G}]\overset{P}{\longrightarrow}[\bm{F}^{(t+1)},\ldots,\bm{F}^{(t+T)};\mathcal{G}].$
(1)
In this paper, for meteorological datasets that do not provide the adjacency
matrix, we construct it with the K-nearest neighbors algorithm based on the
induced spherical distance between the nodes' spatial locations, which will be
discussed later.
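The graph-construction step can be sketched as follows; the helper `knn_sphere` and the toy points are our own illustration under the great-circle distance $d(\bm{\mathrm{x}},\bm{\mathrm{y}})=\arccos(\langle\bm{\mathrm{x}},\bm{\mathrm{y}}\rangle)$ defined below (Eq. 6), not the authors' code.

```python
# Sketch: K-nearest-neighbour graph construction from the great-circle
# distance d(x, y) = arccos(<x, y>) between unit vectors on S^2.
import numpy as np

def knn_sphere(X, k):
    """X: (N, 3) array of unit vectors; returns (N, k) neighbour indices."""
    cos = np.clip(X @ X.T, -1.0, 1.0)   # inner products <x_i, x_j>
    d = np.arccos(cos)                  # great-circle distances
    np.fill_diagonal(d, np.inf)         # exclude self from the neighbour list
    return np.argsort(d, axis=1)[:, :k]

# Toy example: 4 points on the unit sphere.
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0]], dtype=float)
nbrs = knn_sphere(X, k=2)
```

The returned index array directly defines the neighbour sets $\mathcal{N}(i)$ (up to adding the self-loop) and hence the adjacency matrix.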
#### Graph convolution neural networks.
For notational simplicity, we omit the superscript $(t)$ when discussing
spatial dependency. Denote the set of neighbors of center node $i$ by
$\mathcal{N}(i)=\\{j:(i,j)\in\mathcal{E}\\}$, and note that
$i\in\mathcal{N}(i)$. In Graph Neural Networks, $(\bm{W}^{l},\bm{b}^{l})$ are
the weight and bias parameters of layer $l$, and $\sigma(\cdot)$ is a
non-linear activation function. The message-passing rule states that at layer
$l$, the representation of node $i$ is updated as
$\displaystyle\bm{y}^{l}_{i}$
$\displaystyle=\sum_{j\in\mathcal{N}(i)}\omega_{i,j}\bm{h}^{l-1}_{j};$ (2)
$\displaystyle\bm{h}^{l}_{i}$
$\displaystyle=\sigma(\bm{y}^{l}_{i}\bm{W}^{l}+\bm{b}^{l}),$ (3)
where $\bm{h}^{l}_{i}$ is the representation of node $i$ after $l$-th layer,
with $\bm{h}^{0}_{i}=\bm{F}_{i}$, which is the observed graph signals on node
$i$. Denote the neighborhood coordinate set of center node $i$ by
$\mathcal{V}(i)=\\{\bm{\mathrm{x}}^{S}_{j}:j\in\mathcal{N}(i)\\}$, and then
Eq. 2 represents the aggregation of messages from neighbors, which can also be
regarded as a convolution operation on the graph, written as
$\displaystyle(\Omega\star_{\mathcal{N}(i)}\bm{H}^{l-1})(\bm{\mathrm{x}}_{i}^{S})$
$\displaystyle=\sum_{\bm{\mathrm{x}}^{S}_{j}\in\mathcal{V}(i)}\Omega(\bm{\mathrm{x}}^{S}_{j};\bm{\mathrm{x}}^{S}_{i})\bm{H}^{l-1}(\bm{\mathrm{x}}^{S}_{j}),$
(4)
where $\star_{\mathcal{N}(i)}$ means convolution on the $i$-th node’s
neighborhood, $\Omega:S^{2}\times S^{2}\rightarrow\mathbb{R}$ is the
convolution kernel, such that
$\Omega(\bm{\mathrm{x}}^{S}_{j},\bm{\mathrm{x}}^{S}_{i})=\omega_{i,j}$, and
$\bm{H}^{l-1}$ is a function mapping each point on sphere to its feature
vector in $l$-th representation space.
###### Example 1.
The convolutional kernel used in DCRNN (Li et al. 2018) is
$\displaystyle\Omega(\bm{\mathrm{x}}^{S}_{j},\bm{\mathrm{x}}^{S}_{i})=$
$\displaystyle\exp(-d^{2}(\bm{\mathrm{x}}^{S}_{i},\bm{\mathrm{x}}^{S}_{j})/\tau),$
(5)
where $d(\cdot,\cdot)$ is the distance between the two nodes, and $\tau$ is a
hyper-parameter to control the smoothness of the kernel.
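A minimal sketch of the aggregation of Eq. 2 with the Gaussian kernel of Eq. 5 follows; the arrays, the neighbour lists, and the helper names are illustrative placeholders, not the DCRNN implementation.

```python
# Sketch: message aggregation y_i = sum_j omega_ij h_j (Eq. 2) with the
# distance-based Gaussian kernel omega_ij = exp(-d(x_i, x_j)^2 / tau) (Eq. 5).
import numpy as np

def gaussian_kernel(d, tau):
    return np.exp(-d ** 2 / tau)

def aggregate(H, neighbors, d, tau):
    """H: (N, D) node features; neighbors: list of index lists (incl. self);
    d: (N, N) pairwise distances.  Returns the aggregated features Y."""
    Y = np.zeros_like(H)
    for i, nb in enumerate(neighbors):
        w = gaussian_kernel(d[i, nb], tau)  # kernel weights omega_ij
        Y[i] = w @ H[nb]                    # weighted sum over neighbours
    return Y

H = np.eye(3)                      # 3 toy nodes with one-hot features
d = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
neighbors = [[0, 1], [0, 1, 2], [1, 2]]
Y = aggregate(H, neighbors, d, tau=1.0)
```

Eq. 3 would then apply the learnable affine map and non-linearity, $\bm{h}^{l}_{i}=\sigma(\bm{y}^{l}_{i}\bm{W}^{l}+\bm{b}^{l})$, on top of `Y`.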
To imitate the meteorological patterns, the value of the convolution kernel
should be large for neighbors that have a great meteorological impact on the
center. For example, if heat flows from the south-east to the north-west, the
kernel should give more weight to the nodes in the south-east when aggregating
messages from neighbors. With a slight abuse of terminology, we regard the
convolution kernels as equivalent to the meteorological patterns in local
regions.
#### Sphere manifold.
The signals are located on the earth surface, which is regarded as a sphere,
and thus we introduce the notion of the sphere manifold to further develop our
convolution method. The $M$-D sphere manifold is denoted by
$S^{M}=\\{\bm{\mathrm{x}}^{S}=(x_{1},x_{2},\ldots,x_{M+1})\in\mathbb{R}^{M+1}:||\bm{\mathrm{x}}^{S}||=1\\}$.
The convolution is usually performed on a plane, so we introduce the local
space, an $M$-D Euclidean space, as the convolution domain.
###### Definition 1.
Define the local space centered at point $\bm{\mathrm{x}}$ as some Euclidean
space denoted by $\mathcal{L}_{\bm{\mathrm{x}}}S^{M}$, with
$\bm{\mathrm{x}}\in\mathcal{L}_{\bm{\mathrm{x}}}S^{M}$, which is homeomorphic
to the local region centered at $\bm{\mathrm{x}}$. (Formal definition see
Appendix A2.)
###### Example 2.
The tangent space centered at point $\bm{\mathrm{x}}$ is an example of local
space, denoted by
$\mathcal{T}_{\bm{\mathrm{x}}}S^{M}=\\{\bm{\mathrm{v}}\in\mathbb{R}^{M+1}:<\bm{\mathrm{x}},\bm{\mathrm{v}}>=0\\}$,
where $<\cdot,\cdot>$ is the Euclidean inner product.
The geodesics and the induced distance on the sphere are important both for
defining the neighborhood of a node and for identifying the message-passing
patterns. Intuitively, the greater the distance between two nodes, the fewer
messages should be aggregated from one into the other in the graph
convolution.
###### Proposition 1.
Let $\bm{\mathrm{x}}\in S^{M}$, and
$\bm{\mathrm{u}}\in\mathcal{T}_{\bm{\mathrm{x}}}S^{M}$ be a unit vector. The
unit-speed geodesic is
$\gamma_{\bm{\mathrm{x}}\rightarrow\bm{\mathrm{u}}}(t)=\bm{\mathrm{x}}\cos
t+\bm{\mathrm{u}}\sin t$, with
$\gamma_{\bm{\mathrm{x}}\rightarrow\bm{\mathrm{u}}}(0)=\bm{\mathrm{x}}$ and
$\dot{\gamma}_{\bm{\mathrm{x}}\rightarrow\bm{\mathrm{u}}}(0)=\bm{\mathrm{u}}$.
The intrinsic shortest distance function between two points
$\bm{\mathrm{x}},\bm{\mathrm{y}}\in S^{M}$ is
$\displaystyle
d_{S^{M}}(\bm{\mathrm{x}},\bm{\mathrm{y}})=\arccos(<\bm{\mathrm{x}},\bm{\mathrm{y}}>).$
(6)
This distance function is usually called the great-circle distance on the
sphere. In practice, the K-nearest neighbors algorithm that constructs the
graph structure is based on this spherical distance.
On the establishment of local space of each center node, an isometric map
$\mathcal{M}_{\bm{\mathrm{x}}}(\cdot):S^{M}\rightarrow\mathcal{L}_{\bm{\mathrm{x}}}S^{M}$
satisfying that
$||\mathcal{M}_{\bm{\mathrm{x}}}(\bm{\mathrm{y}})||=d_{S^{M}}(\bm{\mathrm{x}},\bm{\mathrm{y}})$
can be used to map neighbor nodes on sphere into the local space.
###### Example 3.
Logarithmic map is usually used to map the neighbor node
$\bm{\mathrm{x}}_{j}\in\mathcal{V}(i)$ on sphere isometrically into
$\mathcal{T}_{\bm{\mathrm{x}}_{i}}S^{M}$, which reads
$\displaystyle\log_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{x}}_{j})$
$\displaystyle=d_{S^{M}}(\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j})\frac{P_{{\bm{\mathrm{x}}_{i}}}(\bm{\mathrm{x}}_{j}-\bm{\mathrm{x}}_{i})}{||P_{{\bm{\mathrm{x}}_{i}}}(\bm{\mathrm{x}}_{j}-\bm{\mathrm{x}}_{i})||},$
where
$P_{{\bm{\mathrm{x}}_{i}}}(\bm{\mathrm{x}})=\frac{\bm{\mathrm{x}}}{||\bm{\mathrm{x}}||}-<\frac{\bm{\mathrm{x}}_{i}}{||\bm{\mathrm{x}}_{i}||},\frac{\bm{\mathrm{x}}}{||\bm{\mathrm{x}}||}>\frac{\bm{\mathrm{x}}_{i}}{||\bm{\mathrm{x}}_{i}||}$
is the normalized projection operator.
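A minimal sketch of this logarithmic map follows. For unit vectors, $\bm{\mathrm{x}}_{j}-\langle\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j}\rangle\bm{\mathrm{x}}_{i}$ is parallel to $P_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{x}}_{j}-\bm{\mathrm{x}}_{i})$, so the code uses that simpler, equivalent direction before normalizing.

```python
# Sketch: logarithmic map of Example 3, sending a neighbour x_j on the sphere
# isometrically into the tangent space at x_i (both inputs are unit vectors).
import numpy as np

def log_map(x_i, x_j):
    # Geodesic (great-circle) distance d_{S^M}(x_i, x_j) = arccos(<x_i, x_j>).
    d = np.arccos(np.clip(np.dot(x_i, x_j), -1.0, 1.0))
    # Tangential direction: x_j minus its component along x_i; for unit
    # vectors this is parallel to P_{x_i}(x_j - x_i) from the text.
    p = x_j - np.dot(x_i, x_j) * x_i
    return d * p / np.linalg.norm(p)

x_i = np.array([0.0, 0.0, 1.0])   # north pole
x_j = np.array([1.0, 0.0, 0.0])   # point on the equator
v = log_map(x_i, x_j)
```

The map is isometric in the sense that $||v|| = d_{S^{M}}(\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j})$, which the example satisfies with $||v|| = \pi/2$.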
After the neighbors of $\bm{\mathrm{x}}_{i}$ are mapped into the local space of
the center node through the isometric map, i.e.
$\bm{\mathrm{v}}_{j}=\mathcal{M}_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{x}}_{j})$,
the local coordinate system of each center node is set up through a transform
mapping
$\Pi_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{v}}_{j})=\bm{\mathrm{x}}_{j}^{i^{\prime}}$
for each $\bm{\mathrm{x}}_{j}\in\mathcal{V}(i)$. We call
$\bm{\mathrm{x}}_{j}^{i^{\prime}}$ the relative position of
$\bm{\mathrm{v}}_{j}$ in the local coordinate system of the local space
centered at $\bm{\mathrm{x}}_{i}$. As $\bm{\mathrm{x}}_{j}^{i^{\prime}}$
always lies in the local space, which is Euclidean, the superscript $E$ is
omitted. The mapping $\Pi_{\bm{\mathrm{x}}_{i}}(\cdot)$ can be determined by
$M$ orthogonal basis vectors chosen in the local coordinate system, i.e.
$\\{\bm{\xi}^{1},\bm{\xi}^{2},\ldots,\bm{\xi}^{M}\\}$, which will be discussed
later in $S^{2}$ scenario for meteorological application.
## 4\. Proposed Method
### 4.1. Local convolution on sphere
Given a center node ${\bm{\mathrm{x}}_{i}^{E}}\in\mathbb{R}^{2}$, from the
perspective of the graph convolution defined in Eq. 4, the convolution on
planar mesh grids such as image pixels is written as
$\displaystyle(\Omega\star_{\mathcal{V}{(i)}}\bm{H})(\bm{\mathrm{x}}_{i}^{E})$
$\displaystyle=\sum_{\bm{\mathrm{x}}^{E}}\Omega(\bm{\mathrm{x}}^{E};\bm{\mathrm{x}}^{E}_{i})\bm{H}(\bm{\mathrm{x}}^{E})\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}}^{E})$
$\displaystyle=\sum_{\bm{\mathrm{x}}^{E}}\chi(\bm{\mathrm{x}}^{E}_{i}-\bm{\mathrm{x}}^{E})\bm{H}(\bm{\mathrm{x}}^{E})\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}}^{E}),$
(7)
where $\delta_{\mathcal{A}}(\bm{\mathrm{x}})=1$ if
$\bm{\mathrm{x}}\in\mathcal{A}$, else $0$. In terms of convolution on 2-D
images,
$\mathcal{V}(i)=\\{\bm{\mathrm{x}}^{E}:\bm{\mathrm{x}}^{E}-\bm{\mathrm{x}}_{i}^{E}\in\mathbb{Z}^{2}\cap([-k_{1},k_{1}]\times[-k_{2},k_{2}])\\}$.
$k_{1}>0$ and $k_{2}>0$ are the convolution views restricting how far away
pixels are included in the neighborhood along the width axis and length axis
respectively. When $k_{1},k_{2}<+\infty$, the neighborhood set is finite, and
thus the convolution is local, conducted on each node’s local space with the
local convolution kernel $\chi(\cdot)$.
To extend the local convolution on generalized manifolds, we conclude that the
local space of $\bm{\mathrm{x}}^{E}_{i}$ is
$\mathcal{L}_{\bm{\mathrm{x}}_{i}^{E}}\mathbb{R}^{2}=\\{\bm{\mathrm{x}}^{E}:\bm{\mathrm{x}}^{E}-\bm{\mathrm{x}}_{i}^{E}\in[-k_{1},k_{1}]\times[-k_{2},k_{2}]\\}$,
so that the isometric map satisfies
$\bm{\mathrm{v}}^{E}=\mathcal{M}_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{x}}^{E})=\bm{\mathrm{x}}^{E}-\bm{\mathrm{x}}^{E}_{i}$.
$\\{-\bm{e}_{x},-\bm{e}_{y}\\}$ with $\bm{e}_{x}=(1,0)$ and $\bm{e}_{y}=(0,1)$
is the orthogonal basis in the local coordinate system of the local space.
Consequently,
$\displaystyle\bm{\mathrm{x}}^{i^{\prime}}=\Pi_{\bm{\mathrm{x}}_{i}^{E}}(\bm{\mathrm{v}}^{E})=-\bm{\mathrm{v}}^{E}=\bm{\mathrm{x}}_{i}^{E}-\bm{\mathrm{x}}^{E}.$
(8)
In this way, we obtain the local convolution on the 2-D Euclidean plane, which
reads
$\displaystyle(\Omega\star_{\mathcal{V}{(i)}}\bm{H})(\bm{\mathrm{x}}_{i}^{E})$
$\displaystyle=\sum_{\bm{\mathrm{x}}^{E}}\chi(\bm{\mathrm{x}}^{i^{\prime}})\bm{H}(\bm{\mathrm{x}}^{E})\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}}^{E}).$
(9)
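On a regular pixel grid this reduces to ordinary local correlation; a direct, unoptimized NumPy transcription of Eq. 9 (illustrative only; the function name is ours):

```python
import numpy as np

def local_conv_plane(H, chi, k=1):
    """Eq. 9 on a 2-D grid: for every center pixel, sum the kernel evaluated
    at the relative position x_i - x over a (2k+1) x (2k+1) neighborhood."""
    h, w = H.shape
    out = np.zeros_like(H, dtype=float)
    for i in range(h):
        for j in range(w):
            for di in range(-k, k + 1):
                for dj in range(-k, k + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:   # delta_{V(i)} indicator
                        out[i, j] += chi((-di, -dj)) * H[ni, nj]
    return out
```

With a constant kernel this is just a box sum; a general `chi` makes it a position-dependent correlation.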
In analogy, the local convolution at a center node
$\bm{\mathrm{x}}^{S}_{i}$ on the 2-D sphere is defined similarly:
$\displaystyle(\Omega\star_{\mathcal{V}{(i)}}\bm{H})(\bm{\mathrm{x}}_{i}^{S})$
$\displaystyle=\sum_{\bm{\mathrm{x}}^{S}}\chi(\bm{\mathrm{x}}^{i^{\prime}})\bm{H}(\bm{\mathrm{x}}^{S})\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}}^{S}),$
(10)
where $\mathcal{V}(i)$ is given by the graph structure, whose nodes can be
mapped into the local space of $\bm{\mathrm{x}}_{i}^{S}$, and
$\bm{\mathrm{x}}^{i^{\prime}}$ is obtained by:
$\displaystyle\bm{\mathrm{x}}^{i^{\prime}}=\Pi_{\bm{\mathrm{x}}_{i}^{S}}(\bm{\mathrm{v}}^{S})=\Pi_{\bm{\mathrm{x}}_{i}^{S}}(\mathcal{M}_{\bm{\mathrm{x}}_{i}^{S}}(\bm{\mathrm{x}}^{S})).$
(11)
The following parts discuss how to construct
* •
$\mathcal{M}_{\bm{\mathrm{x}}_{i}^{S}}(\cdot)$ and
$\Pi_{\bm{\mathrm{x}}_{i}^{S}}(\cdot)$, the isometry mapping neighbors into a
local space of ${\bm{\mathrm{x}}_{i}^{S}}$ and the choice of orthogonal basis
in the local coordinate system.
* •
$\chi(\cdot)$, the formulation of the convolution kernel to approximate and
imitate the meteorological patterns.
### 4.2. Orientation-preserving local regions
In the following parts, all the nodes are located on the sphere, so the
superscript $S$ is omitted. We choose what we define as the cylindrical-tangent
space and horizon maps (Fig. 1(a)) to construct local spaces and to map
neighbors into them.
###### Definition 2.
For $\bm{\mathrm{x}}_{i}\in S^{2}$, the cylindrical-tangent space centered at
$\bm{\mathrm{x}}_{i}$ reads
$\displaystyle\mathcal{C}_{\bm{\mathrm{x}}_{i}}S^{2}=\\{\bm{\mathrm{v}}\in\mathbb{R}^{3}:<\bm{\mathrm{v}}^{-},\bm{\mathrm{x}}_{i}^{-}>=0\\},$
where $\bm{\mathrm{x}}^{-}=(x_{1},x_{2})$ denotes the first two coordinates of
a vector in $\mathbb{R}^{3}$.
###### Proposition 2.
Similar to logarithmic map, the horizon map
$\mathcal{H}_{\bm{\mathrm{x}}_{i}}(\cdot)$ is used to map the neighbor node
$\bm{\mathrm{x}}_{j}\in\mathcal{V}(i)$ isometrically into
$\mathcal{C}_{\bm{\mathrm{x}}_{i}}S^{2}$, which reads
$\displaystyle\mathcal{H}_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{x}}_{j})=d_{S^{2}}(\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j})\frac{[P_{{\bm{\mathrm{x}}_{i}^{-}}}(\bm{\mathrm{x}}_{j}^{-}-\bm{\mathrm{x}}_{i}^{-}),x_{j,3}-x_{i,3}]}{||[P_{{\bm{\mathrm{x}}_{i}^{-}}}(\bm{\mathrm{x}}_{j}^{-}-\bm{\mathrm{x}}_{i}^{-}),x_{j,3}-x_{i,3}]||},$
where $[\cdot,\cdot]$ is the concatenation of vectors/scalars.
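A minimal NumPy sketch of the horizon map of Proposition 2 (the function name is ours; inputs are assumed to be unit vectors on the sphere, and the degenerate pole case is not handled here, cf. Appendix B2):

```python
import numpy as np

def horizon_map(xi, xj):
    """H_{x_i}(x_j): map x_j isometrically into the cylindrical-tangent
    space at x_i, preserving orientation on geographic graticules."""
    d = np.arccos(np.clip(np.dot(xi, xj), -1.0, 1.0))   # spherical distance
    # normalized projection of the horizontal part, orthogonal to x_i's
    # horizontal direction (the 2-D analogue of P_{x_i}(.))
    diff = xj[:2] - xi[:2]
    dn = diff / np.linalg.norm(diff)
    xin = xi[:2] / np.linalg.norm(xi[:2])
    p = dn - np.dot(xin, dn) * xin
    v = np.concatenate([p, [xj[2] - xi[2]]])            # [P(.), x_{j,3} - x_{i,3}]
    return d * v / np.linalg.norm(v)
```

The image lies in the cylindrical-tangent space, since its horizontal part is orthogonal to $\bm{\mathrm{x}}_{i}^{-}$ by construction, and its norm equals the spherical distance.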
(a) Local space and isometric maps.
(b) Example of the unified standard of basis.
(c) Example of reweighting the angle scale.
Figure 1: (a) shows the tangent space with logarithmic maps at the top and the
cylindrical-tangent space with horizon maps at the bottom. (b) shows the
necessity of a unified standard for the choice of basis:
$\bm{\mathrm{x}}_{p}$ from the east strongly affects both $\bm{\mathrm{x}}_{i}$
and $\bm{\mathrm{x}}_{j}$, with the corresponding local patterns of the two
center nodes shown in the heatmaps; if the basis is not unified, as in the
example, the smoothness of the local convolution kernel is compromised. (c)
shows the motivation for reweighting the angle scale: the angle scale
$\frac{\psi}{2\pi}$ balances neighbors’ uneven contributions to the centers
resulting from their irregular distribution.
The reason for choosing the cylindrical-tangent space with horizon maps rather
than the tangent space with logarithmic maps is that the former preserves the
relative orientation on geographic graticules on the earth’s surface, which has
explicit geophysical meaning in meteorology. The logarithmic maps distort the
relative position in orientation on graticules: for a node in the northern
hemisphere, a neighbor located to its east will lie to the north-east on its
tangent plane after the logarithmic map. In comparison, the defined
cylindrical-tangent space preserves both relative orientation on graticules
and spherical distance after mapping. Detailed proofs are provided in Appendix
B1 and empirical comparisons are given in Sec. 5.5.
As discussed, the cylindrical-tangent space is Euclidean, so in $S^{2}$, the
transform $\Pi_{\bm{\mathrm{x}}_{i}}(\cdot)$ can be determined by two
orthogonal basis vectors, which are not unique. Since our method is mainly
applied to spherical meteorological signals, we choose
$\\{\bm{e}_{\phi},\bm{e}_{z}\\}$ as the two orthogonal bases in every local
coordinate system of the cylindrical-tangent plane, in order to permit every
local space to share the consistent South and North poles and preserve the
relative position. For $\bm{\mathrm{x}}_{i}=(x_{i,1},x_{i,2},x_{i,3})$ and
$\bm{\mathrm{v}}\in\mathcal{C}_{\bm{\mathrm{x}}_{i}}S^{2}$, let
$\phi_{i}=\arctan{(x_{i,2}/x_{i,1})}$, and
$\bm{\mathrm{x}}^{i^{\prime}}=\Pi_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{v}})=(\phi^{i^{\prime}},z^{i^{\prime}})$
which is obtained by
$\displaystyle\phi^{i^{\prime}}=<\bm{\mathrm{v}},\bm{e}_{\phi_{i}}>;\quad
z^{i^{\prime}}=<\bm{\mathrm{v}},\bm{e}_{z_{i}}>,$ (12)
which correspond to the longitude and latitude directions on the sphere, and
$\displaystyle\bm{e}_{\phi_{i}}=(-\sin\phi_{i},\cos\phi_{i},0);\quad\bm{e}_{z_{i}}=(0,0,1).$
(13)
The discussed maps and transforms cannot be applied at the South and North
poles; we discuss this case in Appendix B2.
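The transform $\Pi_{\bm{\mathrm{x}}_{i}}(\cdot)$ with the unified basis of Eq. 13 can be sketched as follows (a minimal illustration; we use `arctan2` rather than $\arctan(x_{i,2}/x_{i,1})$ so that all quadrants are handled correctly):

```python
import numpy as np

def local_coords(xi, v):
    """Coordinates (phi', z') of v, a vector in the cylindrical-tangent
    space at x_i, in the unified {e_phi, e_z} basis of Eqs. 12-13."""
    phi = np.arctan2(xi[1], xi[0])                      # longitude of the center
    e_phi = np.array([-np.sin(phi), np.cos(phi), 0.0])  # eastward unit vector
    e_z = np.array([0.0, 0.0, 1.0])                     # south-to-north axis
    return np.array([np.dot(v, e_phi), np.dot(v, e_z)])
```

Because every local frame shares the same $\bm{e}_{z}$ and an eastward $\bm{e}_{\phi}$, all local spaces agree on the South and North directions, which is the unified standard discussed above.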
### 4.3. Conditional local convolution
Now we introduce the conditional local convolution, which is the core module
in our model. We aim to formulate a kernel which is
* •
location-characterized: in the local regions of different center nodes, the
meteorological patterns governed by the convolution kernel differ.
* •
smooth: Patterns are broadly similar when the center nodes are close in
spatial distance.
* •
common: The kernel is shared by different local spaces where the neighbors’
spatial distribution is distinct.
#### Kernel conditional on centers.
Contrary to Example 1 in DCRNN, whose convolution kernel is predefined, we aim
to propose a convolution kernel that can adaptively learn and imitate the
location-characterized patterns of each local region centered at node $i$. A
trivial way is to use a multi-layer neural network with input
$\bm{\mathrm{x}}^{i^{\prime}}$ to approximate the convolution kernel
$\chi({\bm{\mathrm{x}}^{i^{\prime}}})$. However,
${\bm{\mathrm{x}}^{i^{\prime}}}$ as the sole input represents only the relative
position and prevents the kernel from capturing location-characterized
patterns. For example, given two different center nodes whose neighbors’
relative positions are exactly the same, the convolution kernels at the two
locations would coincide exactly, contrary to location-characterized patterns.
Therefore, we propose a conditional kernel, written
$\chi({\bm{\mathrm{x}}^{i^{\prime}}};\bm{\mathrm{x}}_{i})$, meaning that the
convolution kernel in a certain local region is determined by the center node
$\bm{\mathrm{x}}_{i}$. A multi-layer feedforward network is used to
approximate this term:
$\displaystyle\chi({\bm{\mathrm{x}}^{i^{\prime}}};\bm{\mathrm{x}}_{i})=\mathrm{MLP}([\bm{\mathrm{x}}^{i^{\prime}},\bm{\mathrm{x}}_{i}]).$
(14)
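A minimal NumPy sketch of Eq. 14 (the single hidden layer and the weight shapes are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

def mlp_kernel(rel_pos, center, W1, b1, W2, b2):
    """chi(x^{i'}; x_i) = MLP([x^{i'}, x_i]) as in Eq. 14.  The smooth tanh
    activation keeps the kernel smooth in its inputs."""
    z = np.concatenate([rel_pos, center])   # [x^{i'}, x_i]
    h = np.tanh(W1 @ z + b1)                # smooth activation
    return W2 @ h + b2
```

Conditioning on `center` is what distinguishes this from a plain relative-position kernel: two centers with identical neighbor layouts can still learn different local patterns.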
#### Smoothness of local patterns.
We assume that the localized patterns of meteorological message flows have the
property of smoothness – two close center nodes’ patterns of aggregation of
messages from neighbors should be similar. In the light of convolution kernel,
we define the smoothness of kernel function as follows:
###### Definition 3.
The conditional kernel $\chi(\cdot|\cdot)$ is smooth, if it satisfies that for
any $\epsilon>0$, there exists $\delta>0$ such that for any two points
$\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j}\in{S^{2}}$ with
$d_{S^{2}}(\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j})\leq\delta$,
$\displaystyle\sup_{\bm{\mathrm{v}}\in\mathcal{C}_{\bm{\mathrm{x}}_{i}}{S^{2}},\bm{\mathrm{u}}\in\mathcal{C}_{\bm{\mathrm{x}}_{j}}{S^{2}}\atop\Pi_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{v}})=\Pi_{\bm{\mathrm{x}}_{j}}(\bm{\mathrm{u}})}|\chi(\Pi_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{v}});\bm{\mathrm{x}}_{j})-\chi(\Pi_{\bm{\mathrm{x}}_{j}}(\bm{\mathrm{u}});\bm{\mathrm{x}}_{i})|\leq\epsilon.$
The definition of smoothness of location-characterized kernel is motivated by
the fact that if the distance between two center nodes
$d_{S^{2}}(\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j})$ is very small, the
meteorological patterns in the two local regions should differ little, and
thus the kernel functions $\chi(\cdot;\bm{\mathrm{x}}_{i})$ and
$\chi(\cdot;\bm{\mathrm{x}}_{j})$ should be almost identical.
The unified standard for choice of orthogonal basis on the cylindrical-tangent
plane avoids problems caused by path-dependent parallel transport (Cohen et
al. 2019), and contributes to the smoothness of conditional kernel. The
property is likely to be compromised without a unified standard for the orthogonal
basis, as the following example illustrates.
###### Example 4.
(shown in Fig. 1(b).) For one node $\bm{\mathrm{x}}_{i}$, the orthogonal basis
is $\\{\mathbf{e}_{\phi_{i}},\mathbf{e}_{z_{i}}\\}$ as previously defined in
Eq. 13, and for another node $\bm{\mathrm{x}}_{j}$ which is close to it, it is
$\\{-\mathbf{e}_{\phi_{i}},\mathbf{e}_{z_{i}}\\}$. There exists a node
$\bm{\mathrm{x}}_{p}$ to the east of both on the sphere, a neighbor of both,
with great meteorological impact on each of them. In one local coordinate
system its first coordinate is positive, while in the other it is negative.
Then either the kernel, if smooth, can never assign a large value to the
neighbor $\bm{\mathrm{x}}_{p}$ from the east in both local regions centered at
nodes $i$ and $j$, violating the true patterns in meteorology, or the
smoothness of the kernel is violated.
As such, by using $\mathrm{MLP}(\cdot)$ as the approximator with a smooth
activation function such as $\tanh$ and unifying the standard for the choice of
orthogonal basis, the smoothness property of the conditional kernel can be
ensured. However, the irregular spatial distribution of discrete nodes
conflicts with the continuous kernel function shared by different center
nodes, which will be discussed in the next part.
#### Reweighting for irregular spatial distribution.
Because the kernel function is continuous and shared by different center
nodes, when the spatial distribution of each node’s neighbors is similar or
even identical, e.g. nodes are distributed on regular spatial grids in local
spaces, the proposed conditional kernel takes both distance and orientation
into consideration. However, the nodes are discrete and irregularly
distributed on the sphere. Since the kernel is shared by all center nodes, the
distinct spatial distribution of neighbors of different center nodes is likely
to disrupt the smoothness of local patterns. An explicit example illustrates
the resulting problem.
###### Example 5.
(shown in Fig. 1(c).) The two center nodes are close in distance, but the
spatial distribution of their neighbors is different. The number of the right
center’s neighbors located in the south-west is two, while it is one for the
left center. If the kernel is smooth, the message from the south-west flowing
into the right center will be about twice that flowing into the left.
To reweight the convolution kernel for each
$\bm{\mathrm{x}}_{j}\in\mathcal{V}(i)$, we consider both their angle and
distance scales. We first convert its Cartesian representation
$\bm{\mathrm{x}}_{j}^{i^{\prime}}=(\phi^{i^{\prime}}_{j},z^{i^{\prime}}_{j})$
in the cylindrical-tangent space of $\bm{\mathrm{x}}_{i}$ into polar
coordinates $(\varphi^{i^{\prime}}_{j},\rho^{i^{\prime}}_{j})$, where
$\varphi^{i^{\prime}}_{j}=\arctan(z^{i^{\prime}}_{j}/\phi^{i^{\prime}}_{j})$
and
$\rho^{i^{\prime}}_{j}=\sqrt{(z^{i^{\prime}}_{j})^{2}+(\phi^{i^{\prime}}_{j})^{2}}$.
Note that $\rho^{i^{\prime}}_{j}$ equals the geodesic distance between the two
nodes on the sphere. In terms of angle, we calculate the angle
bisector of every pair of adjacent nodes in the neighborhood according to
$\varphi^{i^{\prime}}_{j}$. We denote the angle between two adjacent angular
bisectors of $\bm{\mathrm{x}}_{j}^{i^{\prime}}$ by $\psi^{i^{\prime}}_{j}$ (as
shown in lower-right subfigure in Fig. 1(c) ), and thus the angle scale is
written as $\psi^{i^{\prime}}_{j}/2\pi$. The distance scale is obtained
similarly to DCRNN in Example 1, which reads
$\exp(-(\rho^{i^{\prime}}_{j})^{2}/\tau)$, where $\tau$ is a learnable
parameter.
To sum up, combining the two scaling terms with Eq. 14, the final formulation
of the smooth conditional local kernel in the case of irregular spatial
distribution reads
$\displaystyle\chi({\bm{\mathrm{x}}^{i^{\prime}}_{j}};\bm{\mathrm{x}}_{i})=\frac{\psi^{i^{\prime}}_{j}}{2\pi}\exp(-\frac{(\rho^{i^{\prime}}_{j})^{2}}{\tau})\mathrm{MLP}([\bm{\mathrm{x}}^{i^{\prime}}_{j},\bm{\mathrm{x}}_{i}]).$
(15)
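The two scaling terms can be sketched as follows (a minimal NumPy illustration; it assumes at least two neighbors, and `tau` is treated as a fixed value here although it is learnable in the model):

```python
import numpy as np

def reweight_scales(phi, rho, tau=1.0):
    """Angle scale psi/(2*pi) and distance scale exp(-rho^2/tau) of Eq. 15.
    phi: polar angles of the neighbors in the local frame; rho: their radii."""
    order = np.argsort(phi)
    phi_sorted = phi[order]
    # angular gap after each neighbor, wrapping around the full circle
    gaps = np.diff(np.r_[phi_sorted, phi_sorted[0] + 2 * np.pi])
    # angle between the two angular bisectors flanking each neighbor
    psi = 0.5 * (gaps + np.roll(gaps, 1))
    angle_scale = np.empty_like(phi)
    angle_scale[order] = psi / (2 * np.pi)
    dist_scale = np.exp(-rho ** 2 / tau)
    return angle_scale, dist_scale
```

By construction the angle scales of one neighborhood sum to one, which normalizes away the uneven angular density of neighbors around the center.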
### 4.4. Inapplicability in traffic forecasting
(a) Flows in meteorology
(b) Flows in traffic
(c) Local pattern in meteorology
(d) Local pattern in traffic
Figure 2: Different geographic sample spaces and local patterns in meteorology
and traffic.
The proposed convolution is inapplicable to traffic forecasting. One reason is
that the smoothness is not a reasonable property of local traffic flows’
patterns, i.e. great difference may exist between traffic patterns of two
close regions. An important transportation hub exit may exist in the middle of
them, so the patterns are likely to differ a lot. Besides, because our
convolution kernel is continuous in spatial domain, the continuity in
orientation of the local convolution kernel is of no physical meaning in
traffic irregular networks. In essence, the irregular structure of the road
network restricts the flows of traffic to road direction, stopping vehicles
from crossing the road boundary, so that the geographic sample space is
restricted to the road networks, and traffic can only flow along roads. In
comparison, the flows in meteorology like heat and wind can diffuse freely on
the earth, without boundary, and the geographic sample space is the whole
earth surface, enabling the local patterns to satisfy the continuity and
smoothness.
Figure 3: Overall workflows and architecture of CLCRN.
### 4.5. Temporal dynamics modeling
The temporal dynamics is modeled as in DCRNN, which replaces the
fully-connected layers in the cells of a recurrent neural network with graph
convolution layers.
Using the kernel proposed in Eq. 15, we obtain the GRU cell constituting the
conditional local convolution recurrent network (CLCRN), whose overall
architecture is shown in Fig. 3.
The overall neural network architecture for multi-step forecasting is
implemented based on the Sequence-to-Sequence framework (Sutskever, Vinyals,
and Le 2014), with the encoder fed with previously observed time series and the
decoder generating the predictions. By setting a target function on predictions
and ground-truth observations, such as the mean square error, we can use
backpropagation through time to update the parameters during training. More
details are given in Appendix C.
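A minimal sketch of one such GRU step with graph convolutions in place of dense layers, following the DCRNN recipe (NumPy; the weight shapes and the form of the graph convolution, a kernel-weighted neighbor aggregation followed by a linear map, are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def graph_conv(H, K, W):
    """Kernel-weighted neighbor aggregation followed by a linear map.
    K is an N x N matrix whose row i holds the kernel values over V(i)."""
    return (K @ H) @ W

def conv_gru_step(X, H, K, P):
    """One GRU step whose dense layers are replaced by graph convolutions;
    P maps gate names to weight matrices of shape (F_in + F_h, F_h)."""
    XH = np.concatenate([X, H], axis=-1)
    r = sigmoid(graph_conv(XH, K, P["Wr"]))                      # reset gate
    u = sigmoid(graph_conv(XH, K, P["Wu"]))                      # update gate
    c = np.tanh(graph_conv(np.concatenate([X, r * H], axis=-1), K, P["Wc"]))
    return u * H + (1.0 - u) * c                                 # new hidden state
```

In CLCRN the rows of `K` would hold the conditional local kernel values of Eq. 15; bias terms are omitted here for brevity.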
## 5\. Experiments
### 5.1. Experiment setting
Methods | Spatial | Temporal | Learnable | Continuous
---|---|---|---|---
STGCN | Vanilla GCN | 1-D Conv | ✗ | ✗
MSTGCN | ChebConv | 1-D Conv | ✗ | ✗
ASTGCN | GAT | Attention | ✔ | ✗
TGCN | Vanilla GCN | GRU | ✗ | ✗
GCGRU | ChebConv | GRU | ✗ | ✗
DCRNN | DiffConv | GRU | ✗ | ✗
AGCRN | Node Similarity | GRU | ✔ | ✗
CLCRN | CondLocalConv | GRU | ✔ | ✔
Table 1: Comparison of different spatio-temporal methods. ‘Spatial’ and
‘Temporal’ represent the spatial convolution and temporal dynamics modules. If
the spatial kernel is predefined, it is not ‘learnable’. Only our method is
established for the ‘continuous’ spatial domain from which meteorological
signals are usually sampled.
#### Datasets.
The datasets used for performance evaluation are provided in WeatherBench
(Rasp et al. 2020), with 2048 nodes on the earth sphere. We choose four hourly
weather forecasting tasks, namely temperature, cloud cover, humidity and
surface wind component, whose units are $\mathrm{K}$, $\%\times
10^{-1}$, $\%\times 10$, and $\mathrm{ms}^{-1}$ respectively. We truncate the
temporal scale from Jan. 1, 2010 to Dec. 31, 2018, and set both the input time
length and the forecasting length to 12 for all four datasets.
#### Metrics.
We compare CLCRN with other methods using three widely used metrics: Mean
Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute
Percentage Error (MAPE), to measure the performance of the predictive models.
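For reference, the three metrics in their usual definitions (MAPE conventions vary; here it is expressed in percent and assumes no zero targets):

```python
import numpy as np

def mae(y, y_hat):
    """Mean Absolute Error."""
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    """Root Mean Square Error."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mape(y, y_hat):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * np.mean(np.abs((y - y_hat) / y))
```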
#### Protocol.
Seven representative methods are set up, which can be classified into
attention-based methods (Rozemberczki et al. 2021): STGCN (Yu, Yin, and Zhu
2018), MSTGCN, ASTGCN (Guo et al. 2019), and recurrent-based methods: TGCN
(Zhao et al. 2020), GCGRU (Seo et al. 2016), DCRNN (Li et al. 2018), AGCRN
(Bai et al. 2020). Note that the spatial dependency in AGCRN is based on the
product of learnable node embeddings, which is called ‘Node Similarity’. The
comparison of these methods and ours is given in Table 1. All the models are
trained with an MAE target function and optimized by the Adam optimizer for a
maximum of 100 epochs. The hyper-parameters are chosen through careful tuning
on the validation set (see Appendix D1 for more details). The reported means
and standard deviations are obtained from five experiments under different
random seeds.
### 5.2. Performance comparison
Datasets | Metrics | TGCN | STGCN | MSTGCN | ASTGCN | GCGRU | DCRNN | AGCRN | CLCRN | Improvements
---|---|---|---|---|---|---|---|---|---|---
Temperature | MAE | 3.8638±0.0970 | 4.3525±1.0442 | 1.2199±0.0058 | 1.4896±0.0130 | 1.3256±0.1499 | 1.3232±0.0864 | 1.2551±0.0080 | 1.1688±0.0457 | 7.2001%
RMSE | 5.8554±0.1432 | 6.8600±1.1233 | 1.9203±0.0093 | 2.4622±0.0023 | 2.1721±0.1945 | 2.1874±0.1227 | 1.9314±0.0219 | 1.8825±0.1509 | 2.5318%
Cloud cover | MAE | 2.3934±0.0216 | 2.0197±0.0392 | 1.8732±0.0010 | 1.9936±0.0002 | 1.5925±0.0023 | 1.5938±0.0021 | 1.7501±0.1467 | 1.4906±0.0037 | 6.3987%
RMSE | 3.6512±0.0223 | 2.9542±0.0542 | 2.8629±0.0073 | 2.9576±0.0007 | 2.5576±0.0116 | 2.5412±0.0044 | 2.7585±0.1694 | 2.4559±0.0027 | 3.3567%
Humidity | MAE | 1.4700±0.0295 | 0.7975±0.2378 | 0.6093±0.0012 | 0.7288±0.0229 | 0.5007±0.0002 | 0.5046±0.0011 | 0.5759±0.1632 | 0.4531±0.0065 | 9.5067%
RMSE | 2.1066±0.0551 | 1.1109±0.2913 | 0.8684±0.0019 | 1.0471±0.0402 | 0.7891±0.0006 | 0.7956±0.0033 | 0.8549±0.2025 | 0.7078±0.0146 | 10.3028%
Wind | MAE | 4.1747±0.0324 | 3.6477±0.0000 | 1.9440±0.0150 | 2.0889±0.0006 | 1.4116±0.0057 | 1.4321±0.0019 | 2.4194±0.1149 | 1.3260±0.0483 | 6.0640%
RMSE | 5.6730±0.0412 | 4.8146±0.0003 | 2.9111±0.0292 | 3.1356±0.0012 | 2.2931±0.0047 | 2.3364±0.0055 | 3.4171±0.1127 | 2.1292±0.0733 | 7.1475%
Table 2: MAE and RMSE comparison in forecasting length of $12\mathrm{h}$.
Results with underlines are the best performance achieved by baselines, and
results in bold are the overall best. Comparisons in other lengths and
metrics are shown in Appendix D2.
Because MAPE differs greatly among methods and the values are not on a common
order of magnitude, we report it in Appendix D2.
(a) MAE on Temperature
(b) MAE on Cloud cover
(c) MAE on Humidity
(d) MAE on Wind
Figure 3: To avoid cluttered and verbose plots, we show only the top three
methods in the MAE comparison over different forecasting lengths.
From Table 2 and Fig. 3, it can be concluded that: (1) the recurrent-based
methods outperform the attention-based ones, except that MSTGCN works well on
the Temperature dataset; (2) our method further improves on recurrent-based
methods in weather prediction by a significant margin; (3) because most of the
compared methods were designed for traffic forecasting, some of them, such as
TGCN and STGCN, show a significant drop in performance on meteorological
tasks. The differences between the two tasks are analyzed in Sec. 4.4. The
‘performance convergence’ phenomenon on Temperature is explained in Appendix
D2.
### 5.3. Visualization of local patterns
Figure 4: Changes of local kernels
$\chi(\bm{\mathrm{x}}^{i^{\prime}};\bm{\mathrm{x}}_{i})$ for uniformly-spaced
$\bm{\mathrm{x}}_{i}$ obtained by trained CLCRN according to Humidity dataset.
The proposed convolution kernel aims to imitate the meteorological local
patterns. To examine this, we visualize the conditional local kernels to
further explore the local patterns learned by the trained models. We choose a
line from the south-west to the north-east in the USA, and uniformly sample
points on it as center nodes.
As shown in Fig. 4, the kernels conditioned on the center nodes exhibit the
smoothness property, and the patterns obtained from the Humidity dataset show
clear directionality: nodes to the north-west and south-east impact the
centers most. However, the kernel is over-smooth: it changes very little even
though the center nodes vary a lot, which will be one of our future research
issues.
### 5.4. Framework choice: CNN or RNN?
As concluded in item (1) of the performance comparison, recurrent-based
methods usually outperform attention-based ones in our evaluation. For the
latter, classical CNNs are usually used for intra-sequence temporal modeling.
Here we further build CLCSTN, an attention-based version of CLCRN obtained by
embedding our convolution layer into the MSTGCN framework, to compare the pros
and cons of the two frameworks.
| Lengths | Metrics | Temperature | Cloud cover | Humidity | Wind
---|---|---|---|---|---|---
CLCSTN | 3h | MAE | 1.1622±0.2773 | 1.5673±0.0050 | 0.4710±0.0423 | 1.2262±0.0072
RMSE | 1.9097±0.5892 | 2.4798±0.0105 | 0.6765±0.0596 | 1.8085±0.0163
6h | MAE | 1.2516±0.2762 | 1.6461±0.0052 | 0.5125±0.0401 | 1.3582±0.0070
RMSE | 2.0216±0.5409 | 2.5814±0.0106 | 0.7330±0.0553 | 1.9985±0.0168
12h | MAE | 1.3325±0.2204 | 1.7483±0.0044 | 0.5691±0.0385 | 1.5727±0.0035
RMSE | 2.1239±0.3949 | 2.7101±0.0090 | 0.8104±0.0519 | 2.3058±0.0102
CLCRN | 3h | MAE | 0.3902±0.0345 | 0.9225±0.0011 | 0.1953±0.0015 | 0.5233±0.0177
RMSE | 0.6840±0.0488 | 1.6428±0.0020 | 0.3307±0.0037 | 0.9055±0.0246
6h | MAE | 0.7050±0.0402 | 1.1996±0.0023 | 0.3107±0.0035 | 0.8492±0.0265
RMSE | 1.2408±0.1098 | 2.0611±0.0048 | 0.5114±0.0088 | 1.4296±0.0411
12h | MAE | 1.1688±0.0457 | 1.4906±0.0037 | 0.4531±0.0065 | 1.3260±0.0483
RMSE | 1.8825±0.1509 | 2.4559±0.0027 | 0.7078±0.0146 | 2.1292±0.0733
Table 3: MAE and RMSE comparison in different forecasting length of CLCSTN and
CLCRN.
From Table 3, CLCRN outperforms CLCSTN in all evaluations. Besides, the
significant gap between the two methods appears in short-term rather than
long-term prediction. We conjecture that the attention-based framework gives
smoother predictions, while the recurrent one can fit extremely non-stationary
time series with strong oscillations. Empirical studies in Fig. 5 show that
the former framework tends to fit low-frequency signals but struggles to fit
short-term fluctuations. In the long term, the influence of this fitting
deviation is weakened, so the performance gap is reduced. This also explains
why the learning curve of the former is much smoother (Fig. 6). The unstable
learning curve is in fact a common problem of all recurrent-based models,
which is another of our future research issues.
Figure 5: Predictions on Humidity, where the filled intervals show steep
slopes and drastic fluctuations. (Appendix D3 in detail) Figure 6: Learning
curve of the two methods on Humidity.
### 5.5. Advantages of horizon maps
Methods | Metrics | Temperature | Cloud cover | Humidity | Wind
---|---|---|---|---|---
CLCRN_log | MAE | 1.2638±0.1554 | 1.5599±0.0019 | 0.4663±0.0082 | 1.3958±0.0120
| RMSE | 2.0848±0.1719 | 2.5171±0.0255 | 0.7341±0.0151 | 2.2659±0.0211
CLCRN_hor | MAE | 1.1688±0.0457 | 1.4906±0.0037 | 0.4531±0.0065 | 1.3260±0.0483
| RMSE | 1.8825±0.1509 | 2.4559±0.0027 | 0.7078±0.0146 | 2.1292±0.0733
Table 4: MAE and RMSE comparison in forecasting length of 12h for logarithmic
and horizon maps.
We have discussed two maps, the logarithmic and horizon maps, established for
two local spaces: the tangent and cylindrical-tangent space, respectively.
Here we compare the performance of our model with the two different maps and
spaces to illustrate the advantages of the horizon maps, as shown in Table 4.
### 5.6. Ablation study
#### Decomposition of the kernel.
Composition | Metrics | Temperature | Cloud cover | Humidity | Wind
---|---|---|---|---|---
Angle | MAE | 3.1673±0.3422 | 1.7787±0.0258 | 0.6653±0.0361 | 3.3753±0.4199
RMSE | 4.8939±0.6142 | 2.8745±0.0572 | 1.0054±0.0721 | 5.1317±0.3603
Distance | MAE | 16.5671±0.0002 | 2.7308±0.0002 | 1.3443±0.0001 | 4.0531±0.0000
RMSE | 21.7427±0.0085 | 3.7995±0.0029 | 1.8692±0.0003 | 5.2275±0.0004
MLP | MAE | 1.8815±0.1934 | 1.9047±0.0023 | 0.6208±0.1074 | 2.8672±0.0840
RMSE | 2.9691±0.2311 | 3.1022±0.0215 | 0.9482±0.1496 | 4.2902±0.0982
MLP + Distance | MAE | 1.4505±0.2248 | 1.8743±0.0038 | 0.6289±0.0711 | 2.5454±0.3485
RMSE | 2.1754±0.2092 | 3.0627±0.0320 | 0.9388±0.1302 | 3.8815±0.5141
MLP + Angle | MAE | 1.1205±0.2031 | 1.4919±0.0019 | 0.4538±0.0064 | 1.2991±0.0344
RMSE | 1.7957±0.2101 | 2.4398±0.0110 | 0.7082±0.0134 | 2.0763±0.0625
Angle + Distance | MAE | 1.4986±0.0872 | 1.6907±0.0219 | 0.5378±0.0363 | 1.8932±0.2488
RMSE | 2.1755±0.1042 | 2.7215±0.0330 | 0.8007±0.0484 | 3.0245±0.4769
MLP+Angle +Distance | MAE | 1.1688±0.0457 | 1.4906±0.0037 | 0.4531±0.0065 | 1.3260±0.0483
RMSE | 1.8825±0.1509 | 2.4559±0.0027 | 0.7078±0.0146 | 2.1292±0.0733
Table 5: MAE and RMSE comparison at a forecasting length of 12h for different
combinations of kernel terms. Underlined results are the best among the
ablated variants, and bold results are the overall best.
As our kernel includes three terms shown in Eq. 15, i.e. MLP term, distance
scaling term and angle scaling term, we decompose the kernel to further
validate the proposed kernel empirically.
From Table 5, we conclude that the ‘Distance’ scaling term is of the least
importance, in that the performance of ‘MLP + Angle’ is almost the same as
that obtained by ‘MLP + Angle + Distance’, and the kernel composed of
‘Distance’ alone usually performs worst.
#### Further analysis.
Several hyper-parameters (neighbor number $K$, number of layers, and number of
hidden units) determine the performance of our method. We conduct experiments
to explore their impact empirically. More results are shown in Appendix D4.
(a) Impacts of $K$
(b) Impacts of layer number
Figure 7: Impacts of hyper-parameters $K$ and layer number on performance on
two datasets.
## 6\. Conclusion
We proposed a conditional local convolution to capture and imitate the
meteorological flows of local patterns on the whole sphere, based on the
assumption of smoothness of location-characterized patterns. An MLP with
reweighting terms, taking the continuous relative positions of neighbors and
centers as inputs, is employed as the convolution kernel to handle the uneven
distribution of nodes.
Empirical studies show that the method achieves improved performance. Further
analysis reveals two remaining problems of our method: the over-smoothness of
the learned local patterns (Sec. 5.3) and the instability of the training
process (Sec. 5.4), which we will focus on in future research.
## Acknowledgement
The CAIRI Internal Fund established by Prof. Stan Z. Li supported the authors’
research. Besides, Prof. Ling Li and Dr. Lu Yi shared their expertise and
insights in meteorology.
As a newcomer to the field of machine learning and artificial intelligence, I
hope that I can make tiny but recognized contributions to noble scientific
issues such as climate change. I will keep my enthusiasm with inspiration and
diligence in my future academic life. Wish me good luck!
## References
* Atwood and Towsley (2016) Atwood, J.; and Towsley, D. 2016. Diffusion-Convolutional Neural Networks. arXiv:1511.02136.
* Bai et al. (2020) Bai, L.; Yao, L.; Li, C.; Wang, X.; and Wang, C. 2020. Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting. arXiv:2007.02842.
* Broad (2002) Broad, K. S. A. C. e. 2002. El Niño 1997–98: The Climate Event of the Century. _Climatic Change_ , 53: 523–527.
* Bruna et al. (2014) Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2014. Spectral Networks and Locally Connected Networks on Graphs. arXiv:1312.6203.
* Cohen et al. (2018) Cohen, T. S.; Geiger, M.; Koehler, J.; and Welling, M. 2018. Spherical CNNs. arXiv:1801.10130.
* Cohen et al. (2019) Cohen, T. S.; Weiler, M.; Kicanaoglu, B.; and Welling, M. 2019. Gauge Equivariant Convolutional Networks and the Icosahedral CNN. arXiv:1902.04615.
* Coors, Condurache, and Geiger (2018) Coors, B.; Condurache, A. P.; and Geiger, A. 2018. SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images. In Ferrari, V.; Hebert, M.; Sminchisescu, C.; and Weiss, Y., eds., _Computer Vision – ECCV 2018_ , 525–541. Cham: Springer International Publishing. ISBN 978-3-030-01240-3.
* Defferrard, Bresson, and Vandergheynst (2017) Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2017. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. arXiv:1606.09375.
* Defferrard et al. (2020) Defferrard, M.; Milani, M.; Gusset, F.; and Perraudin, N. 2020. DeepSphere: a graph-based spherical CNN. In _International Conference on Learning Representations_.
* Esteves et al. (2018) Esteves, C.; Allen-Blanchette, C.; Makadia, A.; and Daniilidis, K. 2018. Learning SO(3) Equivariant Representations with Spherical CNNs. arXiv:1711.06721.
* Gilmer et al. (2017) Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017. Neural Message Passing for Quantum Chemistry. arXiv:1704.01212.
* Guo et al. (2019) Guo, S.; Lin, Y.; Feng, N.; Song, C.; and Wan, H. 2019. Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 33(01): 922–929.
* Jiang et al. (2019) Jiang, C. M.; Huang, J.; Kashinath, K.; Prabhat; Marcus, P.; and Niessner, M. 2019. Spherical CNNs on Unstructured Grids. arXiv:1901.02039.
* Kipf and Welling (2017) Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. arXiv:1609.02907.
* Li et al. (2018) Li, Y.; Yu, R.; Shahabi, C.; and Liu, Y. 2018. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. arXiv:1707.01926.
* Niepert, Ahmed, and Kutzkov (2016) Niepert, M.; Ahmed, M.; and Kutzkov, K. 2016. Learning Convolutional Neural Networks for Graphs. arXiv:1605.05273.
* Perraudin et al. (2019) Perraudin, N.; Defferrard, M.; Kacprzak, T.; and Sgier, R. 2019. DeepSphere: Efficient spherical convolutional neural network with HEALPix sampling for cosmological applications. _Astronomy and Computing_ , 27: 130 – 146.
* Rasp et al. (2020) Rasp, S.; Dueben, P. D.; Scher, S.; Weyn, J. A.; Mouatadid, S.; and Thuerey, N. 2020. WeatherBench: A benchmark dataset for data-driven weather forecasting. arXiv:2002.00469.
* Rozemberczki et al. (2021) Rozemberczki, B.; Scherer, P.; He, Y.; Panagopoulos, G.; Riedel, A.; Astefanoaei, M.; Kiss, O.; Beres, F.; Lopez, G.; Collignon, N.; and Sarkar, R. 2021. PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models. arXiv:2104.07788.
* Seo et al. (2016) Seo, Y.; Defferrard, M.; Vandergheynst, P.; and Bresson, X. 2016. Structured Sequence Modeling with Graph Convolutional Recurrent Networks. arXiv:1612.07659.
* Shi et al. (2015) Shi, X.; Chen, Z.; Wang, H.; Yeung, D.-Y.; Wong, W.-K.; and Woo, W.-C. 2015. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. arXiv:1506.04214.
* Sutskever, Vinyals, and Le (2014) Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to Sequence Learning with Neural Networks. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N.; and Weinberger, K. Q., eds., _Advances in Neural Information Processing Systems_ , volume 27, 3104–3112. Curran Associates, Inc.
* Sønderby et al. (2020) Sønderby, C. K.; Espeholt, L.; Heek, J.; Dehghani, M.; Oliver, A.; Salimans, T.; Agrawal, S.; Hickey, J.; and Kalchbrenner, N. 2020. MetNet: A Neural Weather Model for Precipitation Forecasting. arXiv:2003.12140.
* Veličković et al. (2018) Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2018. Graph Attention Networks. arXiv:1710.10903.
* Wu et al. (2021) Wu, L.; Lin, H.; Gao, Z.; Tan, C.; and Li, S. Z. 2021. Self-supervised Learning on Graphs: Contrastive, Generative, or Predictive. arXiv:2105.07342.
* Yu, Yin, and Zhu (2018) Yu, B.; Yin, H.; and Zhu, Z. 2018. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. _Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence_.
* Zhao et al. (2020) Zhao, L.; Song, Y.; Zhang, C.; Liu, Y.; Wang, P.; Lin, T.; Deng, M.; and Li, H. 2020. T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction. _IEEE Transactions on Intelligent Transportation Systems_ , 21(9): 3848–3858.
## Appendix A: Notation and Formal Definition
### A1: Glossary of notations
Table 6: Glossary of notations used in this paper.
Symbol | Used for
---|---
$t$ | Timestamp.
$N$ | Number of nodes sampled on sphere.
$\mathcal{G}=(\mathcal{V},\mathcal{E},\bm{A})$ | Graph with its node set, edge set and adjacency matrix, respectively.
$\bm{\mathrm{x}}_{i}^{S}$ | Coordinate representation of the $i$-th node on the sphere.
$\bm{\mathrm{x}}_{i}^{E}$ | Coordinate representation of the $i$-th node in Euclidean space.
$\bm{F}^{(t)},\bm{F}^{(t)}_{i}$ | Signal matrix at time $t$, and that of the $i$-th node.
$\mathcal{N}(i)$ | The set of neighbors of center node $i$.
$\bm{h}^{l}_{i}$ | The $i$-th node's representation after the $l$-th layer.
$\mathcal{V}(i)$ | The neighborhood coordinate set of center node $i$.
$\star_{\mathcal{N}(i)}$ | Convolution on the $i$-th node's neighborhood.
$\Omega(\bm{\mathrm{x}}^{S}_{j},\bm{\mathrm{x}}^{S}_{i})$ | Convolution kernel measuring the impact of the $i$-th node on the $j$-th.
$\chi(\bm{\mathrm{x}}^{i^{\prime}})$ | Convolution kernel with relative position as input.
$\bm{H}(\bm{\mathrm{x}}^{S}_{i})$ | Function mapping each point on sphere to its feature vector.
$\mathcal{L}_{\bm{\mathrm{x}}}S^{M}$ | Local space centering at $\bm{\mathrm{x}}$ on sphere.
$\mathcal{T}_{\bm{\mathrm{x}}}S^{M}$ | Tangent space centering at $\bm{\mathrm{x}}$ on sphere.
$\mathcal{C}_{\bm{\mathrm{x}}}S^{M}$ | Cylindrical-tangent space centering at $\bm{\mathrm{x}}$ on sphere.
$d_{S^{M}}(\bm{\mathrm{x}},\bm{\mathrm{y}})$ | Distance between $\bm{\mathrm{x}}$ and $\bm{\mathrm{y}}$ on sphere induced by geodesics.
$\mathcal{M}_{\bm{\mathrm{x}}}(\cdot)$ | Isometric map on $\bm{\mathrm{x}}$’s local space.
$\log_{\bm{\mathrm{x}}}(\cdot)$ | Logarithmic map on $\bm{\mathrm{x}}$’s tangent space.
$\mathcal{H}_{\bm{\mathrm{x}}}(\cdot)$ | Horizon map on $\bm{\mathrm{x}}$’s cylindrical-tangent space.
$\Pi_{\bm{\mathrm{x}}}(\cdot)$ | Transform of point in local space to local coordinate system.
$P_{{\bm{\mathrm{x}}_{i}}}(\cdot)$ | Normalized Projection operator.
$\bm{e}_{x},\bm{e}_{y},\ldots$ | Orthogonal basis.
$(\phi^{i^{\prime}},z^{i^{\prime}})$ | Neighbors in local space represented by Cartesian coordinate system.
$(\varphi^{i^{\prime}},\rho^{i^{\prime}})$ | Neighbors in local space represented by polar coordinate system.
$\psi^{i^{\prime}}$ | The angle between two adjacent angular bisectors in polar coordinate system.
### A2: Formal Definition of Local Space
###### Definition 4.
A manifold is a topological space that locally resembles Euclidean space near
each point. More precisely, an $n$-dimensional manifold is a topological space
with the property that each point has a neighborhood that is homeomorphic to
an open subset of $n$-dimensional Euclidean space (see
https://en.wikipedia.org/wiki/Manifold).
###### Proposition 3.
The $M$-dimensional sphere is a manifold. Therefore, for each point
$\bm{\mathrm{x}}\in S^{M}$ and a ball
$\mathrm{B}_{S^{M}}(\bm{\mathrm{x}},r)=\\{\bm{\mathrm{y}}\in
S^{M}:d_{S^{M}}(\bm{\mathrm{x}},\bm{\mathrm{y}})<r\\}$, there exists a
homeomorphism $\phi$ such that
$\phi:\mathrm{B}_{S^{M}}(\bm{\mathrm{x}},r)\rightarrow\mathbb{R}^{M}$ with
$\max{r}=\pi$. In this way, we define the local space of $\bm{\mathrm{x}}$ as
$\mathcal{L}_{\bm{\mathrm{x}}}S^{M}:=\phi(\mathrm{B}_{S^{M}}(\bm{\mathrm{x}},\pi))$.
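As a numerical illustration of the geodesic distance $d_{S^{M}}$ used above (a minimal NumPy sketch; the function name is ours, not from the released code), the distance between unit vectors is the arccosine of their inner product, and antipodal points attain the maximal ball radius $r=\pi$:

```python
import numpy as np

def geodesic_distance(x, y):
    """Geodesic (great-circle) distance d_{S^M}(x, y) = arccos(<x, y>)
    between two unit vectors on the sphere; its maximum is pi."""
    return float(np.arccos(np.clip(np.dot(x, y), -1.0, 1.0)))

# Antipodal points attain the maximal radius r = pi of B_{S^M}(x, r).
north = np.array([0.0, 0.0, 1.0])
south = np.array([0.0, 0.0, -1.0])
print(geodesic_distance(north, south))  # pi
```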
## Appendix B: Local Space and Mappings
### B1: Proof of Proposition 2
Proof: It is easy to verify that any vector $\bm{\mathrm{x}}$, after the
projection operator, satisfies
$<P_{{\bm{\mathrm{x}}_{i}}}(\bm{\mathrm{x}}),\bm{\mathrm{x}}_{i}>=0$, where
$P_{{\bm{\mathrm{x}}_{i}}}(\bm{\mathrm{x}})=\frac{\bm{\mathrm{x}}}{||\bm{\mathrm{x}}||}-<\frac{\bm{\mathrm{x}}_{i}}{||\bm{\mathrm{x}}_{i}||},\frac{\bm{\mathrm{x}}}{||\bm{\mathrm{x}}||}>\frac{\bm{\mathrm{x}}_{i}}{||\bm{\mathrm{x}}_{i}||}$.
We first prove that the mapped vector lies in the cylindrical-tangent space
of $\bm{\mathrm{x}}_{i}$. Denote
$\bm{\mathrm{v}}_{j}=\mathcal{H}_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{x}}_{j})$;
then
$\displaystyle<\bm{\mathrm{v}}_{j}^{-},\bm{\mathrm{x}}_{i}^{-}>$ (16)
$\displaystyle=$
$\displaystyle\frac{d_{S^{2}}(\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j})}{||[P_{{\bm{\mathrm{x}}_{i}^{-}}}(\bm{\mathrm{x}}_{j}^{-}-\bm{\mathrm{x}}_{i}^{-}),x_{j,3}-x_{i,3}]||}<P_{{\bm{\mathrm{x}}_{i}^{-}}}(\bm{\mathrm{x}}_{j}^{-}-\bm{\mathrm{x}}_{i}^{-}),\bm{\mathrm{x}}_{i}^{-}>$
(17) $\displaystyle=$ $\displaystyle 0.$ (18)
Finally, we prove that it is an isometric mapping, since
$\displaystyle\left\lVert
d_{S^{2}}(\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j})\frac{[P_{{\bm{\mathrm{x}}_{i}^{-}}}(\bm{\mathrm{x}}_{j}^{-}-\bm{\mathrm{x}}_{i}^{-}),x_{j,3}-x_{i,3}]}{||[P_{{\bm{\mathrm{x}}_{i}^{-}}}(\bm{\mathrm{x}}_{j}^{-}-\bm{\mathrm{x}}_{i}^{-}),x_{j,3}-x_{i,3}]||}\right\rVert$
(19) $\displaystyle=$ $\displaystyle
d_{S^{2}}(\bm{\mathrm{x}}_{i},\bm{\mathrm{x}}_{j}).$ (20)
Therefore, the horizon map of Proposition 2 is an isometric map of nodes on
$S^{2}$ into $\mathcal{C}_{\bm{\mathrm{x}}_{i}}S^{2}$.
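The orthogonality claim $<P_{\bm{\mathrm{x}}_{i}}(\bm{\mathrm{x}}),\bm{\mathrm{x}}_{i}>=0$ at the start of the proof can be checked numerically; a sketch under our own naming, not the released implementation:

```python
import numpy as np

def normalized_projection(x_i, x):
    """Normalized projection operator P_{x_i}(x): subtract from x/||x||
    its component along x_i/||x_i||; the result is orthogonal to x_i."""
    u = x / np.linalg.norm(x)
    v = x_i / np.linalg.norm(x_i)
    return u - np.dot(v, u) * v

rng = np.random.default_rng(0)
x_i, x = rng.normal(size=3), rng.normal(size=3)
p = normalized_projection(x_i, x)
print(np.dot(p, x_i))  # ~ 0, as claimed
```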
### B2: Fast implementation and transforms on poles
#### Fast implementation.
When the spatial sampling is dense on the sphere, we can also use a fast
implementation to replace the local coordinate transform and horizon map,
which reads
$\bm{\mathrm{x}}^{i^{\prime}}_{j}=\begin{cases}&(\theta_{j}-\theta_{i},\phi_{j}-\phi_{i})\quad\quad\quad\phi_{j}-\phi_{i}\in[-\pi,\pi];\\\
&(\theta_{j}-\theta_{i},\phi_{j}-\phi_{i}-2\pi)\quad\phi_{j}-\phi_{i}\in(\pi,2\pi);\\\
&(\theta_{j}-\theta_{i},\phi_{j}-\phi_{i}+2\pi)\quad\phi_{j}-\phi_{i}\in(-2\pi,-\pi),\\\
\end{cases}$ (21)
The empirical evaluation shows that this fast implementation also gives
competitive performance, although the mapping only preserves the relative
orientation on graticules and does not take the distance into account. We
conjecture that this is because the inputs of the conditional local kernel
are of the same scale. Further research will be conducted on this.
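Eq. 21 amounts to taking the latitude difference directly and wrapping the longitude difference back into $[-\pi,\pi]$. A minimal sketch (the function name is ours):

```python
import numpy as np

def fast_relative_position(theta_i, phi_i, theta_j, phi_j):
    """Fast local coordinates of neighbor j around center i (Eq. 21):
    latitude difference, plus longitude difference wrapped into [-pi, pi]."""
    d_phi = phi_j - phi_i
    if d_phi > np.pi:        # case phi_j - phi_i in (pi, 2*pi)
        d_phi -= 2 * np.pi
    elif d_phi < -np.pi:     # case phi_j - phi_i in (-2*pi, -pi)
        d_phi += 2 * np.pi
    return theta_j - theta_i, d_phi
```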
#### Handling the poles.
We use the negative of the pole's relative position in each neighbor's local
space as that neighbor's relative position. More specifically, every
neighbor's first coordinate is $0$ at both poles, while the second
coordinates are all negative at the North pole and all positive at the South
pole.
## Appendix C: Temporal Dynamics Modeling
Temporal dynamics of node $i$ modeled by the GRU units with Conditional local
convolution is shown as follows:
$\begin{cases}&\bm{r}_{i}^{(t)}=\sigma(\Omega\star_{{\mathcal{V}}(i)}[\bm{F}_{i}^{(t)},\bm{Z}_{i}^{(t-1)}]\bm{W}_{r}+\bm{b}_{r});\\\
&\bm{u}_{i}^{(t)}=\sigma(\Omega\star_{{\mathcal{V}}(i)}[\bm{F}_{i}^{(t)},\bm{Z}_{i}^{(t-1)}]\bm{W}_{u}+\bm{b}_{u});\\\
&\bm{C}_{i}^{(t)}=\text{tanh}(\Omega\star_{{\mathcal{V}}(i)}[\bm{F}_{i}^{(t)},(\bm{r}_{i}^{(t)}\odot\bm{Z}_{i}^{(t-1)})]\bm{W}_{C}+\bm{b}_{C});\\\
&\bm{Z}_{i}^{(t)}=\bm{u}_{i}^{(t)}\odot\bm{Z}_{i}^{(t-1)}+(\bm{I}-\bm{u}_{i}^{(t)})\odot\bm{C}_{i}^{(t)},\\\
\end{cases}$ (22)
where $\bm{W}_{r},\bm{W}_{u},\bm{W}_{C}$ are weights and
$\bm{b}_{r},\bm{b}_{u},\bm{b}_{C}$ are biases. $\odot$ is the Hadamard product,
$\bm{F}_{i}^{(t)}$ is the input of the unit of node $i$ at time $t$,
$\bm{Z}_{i}^{(t)}$ is the hidden state of node $i$ at time $t$ as well as the
output of the unit at time $t$, and $\bm{r}^{(t)}$ and $\bm{u}^{(t)}$ are the
reset and update gates defined in the GRU, respectively. In detail,
$\displaystyle\Omega\star_{{\mathcal{V}}(i)}\bm{h}_{i}^{(t)}=\sum_{\bm{\mathrm{x}}}\chi(\bm{\mathrm{x}}^{i^{\prime}})\bm{H}(\bm{\mathrm{x}})\delta_{\mathcal{V}(i)}(\bm{\mathrm{x}})$
is the convolution on $i$’s local space, where the kernel
$\chi(\bm{\mathrm{x}}^{i^{\prime}})$ is formulated by Eq. 15.
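For a single node, the gating scheme of Eq. 22 can be sketched as follows. Here `conv` stands in for the conditional local convolution $\Omega\star_{\mathcal{V}(i)}$ (any callable on the concatenated features); the function and variable names are ours, not the released implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def clc_gru_step(conv, F_t, Z_prev, W_r, W_u, W_C, b_r, b_u, b_C):
    """One CLCGRU step (Eq. 22) for one node: reset/update gates and a
    candidate state computed from the locally convolved features."""
    x = np.concatenate([F_t, Z_prev])
    r = sigmoid(conv(x) @ W_r + b_r)           # reset gate r^{(t)}
    u = sigmoid(conv(x) @ W_u + b_u)           # update gate u^{(t)}
    x_c = np.concatenate([F_t, r * Z_prev])
    C = np.tanh(conv(x_c) @ W_C + b_C)         # candidate state C^{(t)}
    return u * Z_prev + (1.0 - u) * C          # new hidden state Z^{(t)}
```

With zero weights both gates equal $0.5$ and the candidate state vanishes, so the new hidden state is half the previous one, which is a quick sanity check of the update rule.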
## Appendix D: Experiments
### D1: Experiment protocol and details
Training details. All methods are trained with random seeds chosen from
$\\{2021,2022,2023,2024,2025\\}$, from which the reported means and standard
deviations are obtained. The base learning rate is 0.01, and it decays by a
factor of 0.05 every 10 epochs during the first 50 epochs. Early stopping on
the validation set is used to choose the number of epochs. The baselines are
all implemented on top of PyTorch Geometric Temporal (Rozemberczki et
al. 2021). The training batch size is 32, and each ablation study in D4
varies only the target hyper-parameter. For example, when studying the
effect of $K$, we try several values while keeping the others, such as the
number of layers and hidden units, unchanged. We use PyTorch 1.8.0 with CUDA
11.1. Every experiment is executed on a server with one NVIDIA V100 GPU,
equipped with 32510 MB of video memory.
Hyper-parameters. In our method, the 3-layer MLP approximating the
convolution kernel has layer widths $[10,8,6]$, shared by all CLCGRU layers.
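With the widths above, the kernel network can be sketched as below (the tanh activation and random initialization are our assumptions; only the widths $[10,8,6]$ come from the text):

```python
import numpy as np

def mlp_kernel(rel_pos, weights, biases):
    """3-layer MLP approximating the convolution kernel chi: maps a
    neighbor's relative position in the local coordinate system to a
    kernel value."""
    h = np.asarray(rel_pos, dtype=float)
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W + b)  # activation choice is an assumption
    return h

rng = np.random.default_rng(0)
sizes = [2, 10, 8, 6]  # input: 2-d relative coordinates; widths [10, 8, 6]
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
out = mlp_kernel([0.1, -0.2], weights, biases)
print(out.shape)  # (6,)
```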
Dataset details. Further statistics for the four WeatherBench datasets are
shown in Table 7.
Datasets | Temperature | Cloud cover | Humidity | Wind
---|---|---|---|---
Dimension | 1 | 1 | 1 | 2
Input length | 12 | 12 | 12 | 12
Forecasting length | 12 | 12 | 12 | 12
#Nodes | 2048 | 2048 | 2048 | 2048
#Training set | 2300 | 2300 | 2300 | 2300
#Validation set | 329 | 329 | 329 | 329
#Test set | 657 | 657 | 657 | 657
Mean value | 278.8998 | 0.6750 | 78.0328 | -0.0921/0.2398
Max value | 323.7109 | 1.0000 | 171.1652 | 32.2456/32.8977
Min value | 193.0290 | 0.0000 | -2.4951 | -37.7845/-31.6869
Std value | 21.1159 | 0.3617 | 18.2822 | 5.6053/4.8121
Table 7: Dataset statistics
#### Metrics computation.
Let $\mathbf{F}^{(t,T)}=[F^{(t)},\ldots,F^{(t+T)}]$ be the ground truth, and
$\mathbf{\hat{F}}^{(t,T)}=[\hat{F}^{(t)},\ldots,\hat{F}^{(t+T)}]$ be the
predictions given by the neural networks. The three metrics MAE, RMSE and
MAPE are calculated as
$\displaystyle\mathrm{MAE}(\mathbf{F}^{(t,T)},\mathbf{\hat{F}}^{(t,T)})$
$\displaystyle=\frac{1}{T}\sum_{i=t}^{t+T}|\hat{F}^{(i)}-F^{(i)}|$
$\displaystyle\mathrm{RMSE}(\mathbf{F}^{(t,T)},\mathbf{\hat{F}}^{(t,T)})$
$\displaystyle=\sqrt{\frac{1}{T}\sum_{i=t}^{t+T}|\hat{F}^{(i)}-F^{(i)}|^{2}}$
$\displaystyle\mathrm{MAPE}(\mathbf{F}^{(t,T)},\mathbf{\hat{F}}^{(t,T)})$
$\displaystyle=\frac{1}{T}\sum_{i=t}^{t+T}\frac{|\hat{F}^{(i)}-F^{(i)}|}{|F^{(i)}|}$
The MAPE metric is not stable because the ground truth appears in the
denominator, and thus we do not consider the contributions of terms whose
ground truth equals 0. However, for Cloud cover the minimal value is still
extremely small, which makes the MAPE extremely large.
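The three metrics, with the zero-ground-truth terms dropped from MAPE as described, can be sketched in NumPy (function names are ours):

```python
import numpy as np

def mae(pred, true):
    return float(np.mean(np.abs(pred - true)))

def rmse(pred, true):
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def mape(pred, true):
    """MAPE with terms whose ground truth equals 0 excluded."""
    mask = true != 0
    return float(np.mean(np.abs(pred[mask] - true[mask]) / np.abs(true[mask])))

true = np.array([0.0, 2.0, 4.0])
pred = np.array([1.0, 1.0, 5.0])
print(mae(pred, true), rmse(pred, true), mape(pred, true))  # 1.0 1.0 0.375
```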
### D2: Overall method comparison
The model comparison of hour-wise prediction on MAE, RMSE and MAPE is shown
in Fig. 11. For each dataset, we compare the methods with the overall top-3
performance.
The reason for the ‘convergence’ of model performance on Temperature can be
explained by the visualization in Appendix D3 and the analysis in section
5.4. As we state in section 5.4, drastic fluctuations are very hard for a
neural network to model. The plot of ground-truth temperature in Figure 8(a)
is much smoother than those of the other three datasets (Figure 8), without
drastic fluctuations. When the true values always fluctuate only slightly, a
model giving smooth outputs as the future prediction usually performs well,
because the mean predictive errors are minor when the time scale is large
enough. However, when a model tries to capture the fluctuations, even a
single significant deviation from the ground truth causes the error to
explode. In this way, the performances of models seem to ‘converge’ as the
time length increases on the Temperature dataset, because the increase in
time length benefits models with smooth outputs and hurts models that try to
fit the fluctuations.
### D3: More visualization of prediction
We give more visualizations to support the claim that attention-based
methods struggle to fit sharp fluctuations, as shown in Fig. 8.
(a) Predictions on Temperature on $1000$-th node.
(b) Predictions on Cloud cover on $1000$-th node.
(c) Predictions on Cloud cover on $2000$-th node.
(d) Predictions on Humidity on $1000$-th node.
(e) Predictions on Humidity on $2000$-th node.
(f) Predictions on Wind on $1000$-th node.
(g) Predictions on Wind on $2000$-th node.
Figure 8: Visualization of two methods on different datasets.
It shows that the Temperature dataset has gentle slopes, and dramatic
fluctuations and changes are rare, so the predictions generated by CLCSTN
are usually good and its metrics are comparably competitive. In the other
datasets, however, the changes are significant and drastic, so CLCRN obtains
more accurate predictions.
### D4: Sensitivity of hyperparameters
A detailed sensitivity analysis of the hyper-parameters is given in this part.
(a) Sensitivity of model performance on layer number on Temperature.
(b) Sensitivity of model performance on layer number on Cloud cover.
(c) Sensitivity of model performance on layer number on Humidity.
(d) Sensitivity of model performance on layer number on Wind.
Figure 8: Sensitivity analysis on layer number.
#### Layer number.
From Fig. 8 we can conclude that as the layer number increases, the number
of model parameters grows quickly and over-fitting effects become more
serious. Therefore, we recommend choosing one or two layers; when the
dataset is relatively stationary, a single layer should be used.
(a) Sensitivity of model performance on neighbor number K on Temperature.
(b) Sensitivity of model performance on neighbor number K on Cloud cover.
(c) Sensitivity of model performance on neighbor number K on Humidity.
(d) Sensitivity of model performance on neighbor number K on Wind.
Figure 9: Sensitivity analysis on neighbor number.
#### Neighbor Number.
Fig. 9 shows the effects of $K$ on model performance. It can be concluded
that CLCRN performs better as $K$ increases, because each point can then
obtain messages from farther points directly. However, when $K$ is too
large, performance is likely to be compromised by the aggregation of
redundant and irrelevant messages from uncorrelated distant nodes.
(a) Sensitivity of model performance on hidden units on Temperature.
(b) Sensitivity of model performance on hidden units on Cloud cover.
(c) Sensitivity of model performance on hidden units on Humidity.
(d) Sensitivity of model performance on hidden units on Wind.
Figure 10: Sensitivity analysis on hidden units.
#### Hidden Units.
According to Fig. 10, in most cases the model performs better as the number
of hidden units increases, owing to the enhanced expressivity of the model.
However, increasing the hidden units dramatically raises the computational
cost in both time and space. We therefore recommend choosing between 64 and
96 hidden units.
(a) Comparison on Temperature
(b) Comparison on Cloud cover
(c) Comparison on Humidity
(d) Comparison on Wind
Figure 11: Overall comparison on four datasets.
# On the domains of Bessel operators
Jan Dereziński
Department of Mathematical Methods in Physics,
Faculty of Physics, University of Warsaw,
Pasteura 5,
02-093 Warszawa, Poland,
email: [email protected]
Vladimir Georgescu
Laboratoire AGM, UMR 8088 CNRS,
CY Cergy Paris Université,
F-95000 Cergy, France,
email: [email protected]
###### Abstract
We consider the Schrödinger operator on the halfline with the potential
$(m^{2}-\frac{1}{4})\frac{1}{x^{2}}$, often called the Bessel operator. We
assume that $m$ is complex. We study the domains of various closed homogeneous
realizations of the Bessel operator. In particular, we prove that the domain
of its minimal realization for $|{\rm Re}(m)|<1$ and of its unique closed
realization for ${\rm Re}(m)>1$ coincide with the minimal second order Sobolev
space. On the other hand, if ${\rm Re}(m)=1$ the minimal second order Sobolev
space is a subspace of infinite codimension of the domain of the unique closed
Bessel operator. The properties of Bessel operators are compared with the
properties of the corresponding bilinear forms.
###### Acknowledgements
Jan Dereziński is grateful to F. Gesztesy for drawing attention to the
references [1, 2, 3] and for useful comments. His work was supported by
National Science Center (Poland) under the grant UMO-2019/35/B/ST1/01651. The
authors thank the referees for their comments.
Keywords: Schrödinger operators, Bessel operators, unbounded operators,
Sobolev spaces.
MSC 2020: 47E99, 81Q80
## 1 Introduction
### 1.1 Overview of closed realizations of the Bessel operator
The Schrödinger operator on the half-line given by the expression
$L_{\alpha}:=-\frac{{\rm d}^{2}}{{\rm
d}x^{2}}+\Big{(}\alpha-\frac{1}{4}\Big{)}\frac{1}{x^{2}}$ (1.1)
is often called the Bessel operator. The name is justified by the fact that
its eigenfunctions and many other related objects can be expressed in terms of
Bessel-type functions.
There exists a large literature devoted to self-adjoint realizations of (1.1)
for real $\alpha$. The theory of closed realizations of (1.1) for complex
$\alpha$ is also interesting. Let us recall the basic elements of this theory,
following [7, 6].
For any complex $\alpha$ there exist two most obvious realizations of
$L_{\alpha}$: the minimal $L_{\alpha}^{\min}$, and the maximal
$L_{\alpha}^{\max}$. The complex plane is divided into two regions by the
parabola defined by
$\alpha=(1+{\rm i}\omega)^{2},\quad\omega\in{\mathbb{R}},$ (1.2)
(or, if we write $\alpha=\alpha_{\mathrm{R}}+{\rm i}\alpha_{\mathrm{I}}$, by
$\alpha_{\mathrm{R}}+\sqrt{\alpha_{\mathrm{R}}^{2}+\alpha_{\mathrm{I}}^{2}}=2$).
To the right of this parabola, that is, for $|{\rm Re}\sqrt{\alpha}|\geq 1$,
we have $L_{\alpha}^{\min}=L_{\alpha}^{\max}$. For $|{\rm
Re}\sqrt{\alpha}|<1$, that is to the left of (1.2), ${\cal
D}(L_{\alpha}^{\min})$ has codimension $2$ inside ${\cal
D}(L_{\alpha}^{\max})$. The operators $L_{\alpha}^{\min}$ and
$L_{\alpha}^{\max}$ are homogeneous of degree $-2$.
Let us note that in the region $|{\rm Re}\sqrt{\alpha}|<1$ the operators
$L_{\alpha}^{\min}$ and $L_{\alpha}^{\max}$ are not the most important
realizations of $L_{\alpha}$. Much more useful are closed realizations of
$L_{\alpha}$ situated between $L_{\alpha}^{\min}$ and $L_{\alpha}^{\max}$,
defined by boundary conditions near zero. (Among these realizations, the best
known are self-adjoint ones corresponding to real $\alpha$ and real boundary
conditions). All of this is described in [7].
Among these realizations for $\alpha\neq 0$ only two, and for $\alpha=0$ only
one, are homogeneous of degree $-2$. All of them are covered by the
holomorphic family of closed operators $H_{m}$, introduced in [6] and defined
for ${\rm Re}(m)>-1$ as the restriction of $L_{m^{2}}^{\max}$ to functions
that behave as $x^{\frac{1}{2}+m}$ near zero. Note that
$\displaystyle L_{m^{2}}^{\min}=$ $\displaystyle
H_{m}=L_{m^{2}}^{\max},\quad{\rm Re}(m)\geq 1;$ (1.3) $\displaystyle
L_{m^{2}}^{\min}\subsetneq$ $\displaystyle
H_{m}\subsetneq L_{m^{2}}^{\max},\quad|{\rm Re}(m)|<1.$ (1.4)
### 1.2 Main results
Our new results give descriptions of the domains of various realizations of
$L_{\alpha}$ for $\alpha\in{\mathbb{C}}$. First of all, we prove that for
$|{\rm Re}\sqrt{\alpha}|<1$ the domain of $L_{\alpha}^{\min}$ does not depend
on $\alpha$ and coincides with the minimal 2nd order Sobolev space
${\cal H}_{0}^{2}({\mathbb{R}}_{+}):=\\{f\in{\cal H}^{2}({\mathbb{R}}_{+})\ |\
f(0)=f^{\prime}(0)=0\\},$ (1.5)
where
${\cal H}^{2}({\mathbb{R}}_{+}):=\\{f\in L^{2}({\mathbb{R}}_{+})\ |\
f^{\prime\prime}\in L^{2}({\mathbb{R}}_{+})\\}$ (1.6)
is the (full) 2nd order Sobolev space. We also show that
$\\{\alpha\ |\ |{\rm Re}\sqrt{\alpha}|<1\\}\ni\alpha\mapsto L_{\alpha}^{\min}$
(1.7)
is a holomorphic family of closed operators.
We find the constancy of the domain of the minimal operator quite surprising
and interesting. It contrasts with the fact that ${\cal D}(L_{\alpha}^{\max})$
for $|{\rm Re}\sqrt{\alpha}|<1$ depends on $\alpha$. Similarly, ${\cal
D}(H_{m})$ for $|{\rm Re}(m)|<1$ depends on $m$.
The holomorphic family $L_{\alpha}^{\min}$ for $|{\rm Re}\sqrt{\alpha}|<1$
consists of operators whose spectrum covers the whole complex plane.
Therefore, the usual approach to holomorphic families of closed operators
based on the study of the resolvent is not available.
We also study $H_{m}$ for ${\rm Re}(m)\geq 1$ (which by (1.3) coincides with
$L_{m^{2}}^{\min}$ and $L_{m^{2}}^{\max}$). We prove that for ${\rm Re}(m)>1$
its domain also coincides with ${\cal H}_{0}^{2}({\mathbb{R}}_{+})$. The most
unusual situation occurs in the case ${\rm Re}(m)=1$. In this case we show
that the domain of $H_{m}$ is always larger than
$\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$ and depends on $m$.
Specializing to real $\alpha$, the main result of our paper can be summarized as
follows: Let $L_{\alpha}^{\min}$ be the closure in $L^{2}({\mathbb{R}}_{+})$
of the operator $-\partial_{x}^{2}+\frac{\alpha-\frac{1}{4}}{x^{2}}$ with
domain $C_{\rm c}^{\infty}({\mathbb{R}}_{+})$.
1) If $\alpha<1$ then $L_{\alpha}^{\min}$ is Hermitian (symmetric) but not
self-adjoint and its domain is $\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$.
2) If $\alpha=1$ then $L_{\alpha}^{\min}$ is self-adjoint and
$\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$ is a dense subspace of infinite
codimension of its domain.
3) If $\alpha>1$ then $L_{\alpha}^{\min}$ is self-adjoint with domain
$\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$.
As a side remark, let us mention two open problems about Bessel operators.
###### Open Problem 1.1
1. 1.
Can the holomorphic family $H_{m}$ be extended beyond ${\rm Re}(m)>-1$?
(Probably not).
2. 2.
Can the holomorphic family $L_{\alpha}^{\min}$ (hence also
$L_{\alpha}^{\max}$) be extended beyond $|{\rm Re}\sqrt{\alpha}|<1$? (Probably
not).
Question 1 has already been mentioned in [6]. We hope that both questions can
be answered by methods of [10].
### 1.3 Bilinear Bessel forms
With every operator $T$ on a Hilbert space ${\cal H}$ one can associate the
sesquilinear form
$(f|Tg),\qquad f,g\in{\cal D}(T).$ (1.8)
One can try to extend (1.8) to a larger domain. If $T$ is self-adjoint, there
is a natural extension to the so-called form domain of $T$, ${\cal
Q}(T):={\cal D}(\sqrt{|T|})$. Interpreting $T$ as a bounded map from ${\cal
Q}(T)$ to its anti-dual, we obtain the sesquilinear form
$(f|Tg),\qquad f,g\in{\cal Q}(T),$ (1.9)
which extends (1.8).
We would like to have a similar construction for Bessel operators, including
non-self-adjoint ones. Before we proceed we should realize that identities
involving non-self-adjoint operators do not like complex conjugation.
Therefore, instead of sesquilinear forms it is more natural to use bilinear
forms.
Our analysis of bilinear Bessel forms is based on the pair of formal
factorizations of the Bessel operator
$\displaystyle-\partial_{x}^{2}+\Big{(}m^{2}-\frac{1}{4}\Big{)}\frac{1}{x^{2}}$
$\displaystyle=\Big{(}\partial_{x}+\frac{\frac{1}{2}+m}{x}\Big{)}\Big{(}-\partial_{x}+\frac{\frac{1}{2}+m}{x}\Big{)}$
(1.10)
$\displaystyle=\Big{(}\partial_{x}+\frac{\frac{1}{2}-m}{x}\Big{)}\Big{(}-\partial_{x}+\frac{\frac{1}{2}-m}{x}\Big{)}.$
(1.11)
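Both factorizations can be verified by direct expansion. Writing $c$ for the constant in the factors, for smooth $f$ on ${\mathbb{R}}_{+}$,

```latex
\Big(\partial_x+\frac{c}{x}\Big)\Big(-\partial_x+\frac{c}{x}\Big)f
= -f'' + \frac{c}{x}f' - \frac{c}{x^{2}}f - \frac{c}{x}f' + \frac{c^{2}}{x^{2}}f
= -f'' + \frac{c(c-1)}{x^{2}}\,f,
```

and $c=\frac{1}{2}+m$ gives $c(c-1)=(m+\frac{1}{2})(m-\frac{1}{2})=m^{2}-\frac{1}{4}$, which is (1.10); replacing $m$ by $-m$ gives (1.11).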
In Theorems 8.2 and 8.3 for each ${\rm Re}(m)>-1$ we interpret (1.10) and
(1.11) as factorizations of the Bessel operator $H_{m}$ into two closed 1st
order operators. They define natural bilinear forms, which we call Bessel
forms. For each ${\rm Re}(m)>-1$ the corresponding Bessel form is unique,
except for ${\rm Re}(m)=0$, $m\neq 0$, when the two factorizations yield two
distinct Bessel forms.
Instead of ${\cal H}_{0}^{2}({\mathbb{R}}_{+})$, the major role is now played
by the minimal 1st order Sobolev space
$\displaystyle{\cal H}_{0}^{1}({\mathbb{R}}_{+}):=\\{f\in{\cal
H}^{1}({\mathbb{R}}_{+})\ |\ f(0)=0\\},$ (1.12)
subspace of the (full) 1st order Sobolev space
$\displaystyle{\cal H}^{1}({\mathbb{R}}_{+}):=\\{f\in L^{2}({\mathbb{R}}_{+})\
|\ f^{\prime}\in L^{2}({\mathbb{R}}_{+})\\}.$ (1.13)
Note that ${\cal H}_{0}^{1}({\mathbb{R}}_{+})$ is the domain of Bessel forms
for ${\rm Re}(m)>0$.
The analysis of Bessel forms and their factorizations shows a variety of
behaviors depending on the parameter $m$. In particular, there is a kind of a
phase transition at ${\rm Re}(m)=0$. Curiously, in the analysis of the domain
of Bessel operators the phase transition occurs elsewhere: at ${\rm Re}(m)=1$.
### 1.4 Comparison with literature
The fact that ${\cal D}(L_{\alpha}^{\min})$ does not depend on $\alpha$ for
real $\alpha\in[0,1[$ was first proven in [1], see also [2, 3]. Actually, the
arguments of [1] are enough to extend the result to complex $\alpha$ such that
$|\alpha-\frac{1}{4}|<\frac{3}{4}$. The proof is based on the bound
$\|Q\|=\frac{4}{3}$ of the operator $Q$ on $L^{2}({\mathbb{R}}_{+})$ given by
the integral kernel
$Q(x,y)=\frac{1}{x^{2}}(x-y)\theta(x-y),$ (1.14)
where $\theta$ is the Heaviside function. Our proof is quite similar. Instead
of (1.14) we consider for $|{\rm Re}(m)|<1$ the operator $Q_{m^{2}}$ with the
kernel
$Q_{m^{2}}(x,y)=\frac{1}{2mx^{2}}(x^{\frac{1}{2}+m}y^{\frac{1}{2}-m}-x^{\frac{1}{2}-m}y^{\frac{1}{2}+m})\theta(x-y).$
(1.15)
Note that $Q_{\frac{1}{4}}$ coincides with (1.14). We prove that the norm of
$Q_{m^{2}}$ is the inverse of the distance of $m^{2}$ to the parabola (1.2). A
simple generalization of the Kato-Rellich Theorem to closed operators implies
then our result about ${\cal D}(L_{\alpha}^{\min})$.
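Indeed, setting $m=\frac{1}{2}$ in (1.15) recovers the kernel (1.14):

```latex
Q_{\frac{1}{4}}(x,y)
= \frac{1}{2\cdot\frac{1}{2}\,x^{2}}
  \big(x^{\frac{1}{2}+\frac{1}{2}}y^{\frac{1}{2}-\frac{1}{2}}
      -x^{\frac{1}{2}-\frac{1}{2}}y^{\frac{1}{2}+\frac{1}{2}}\big)\theta(x-y)
= \frac{1}{x^{2}}(x-y)\,\theta(x-y).
```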
In the paper [6] on page 567 it is written “If $m\neq 1/2$ then ${\cal
D}(L^{\min}_{m})\neq\mathcal{H}_{0}^{2}$.” (In that paper $L_{m^{2}}^{\min}$
was denoted $L_{m}^{\min}$). This sentence was not formulated as a
proposition, and no proof was provided. Anyway, in view of the results of [2]
and of this paper, this sentence was wrong.
The analysis of Bessel forms in the self-adjoint case, that is for real
$m>-1$, is well known–it is essentially equivalent to the famous Hardy
inequality. This subject is discussed e.g. in the monograph [4] and in a
recent interesting paper [11] about a refinement of the 1-dimensional Hardy’s
inequality. The latter paper contains in particular many references about
factorizations of Bessel operators in the self-adjoint case.
Results about Bessel forms and their factorizations for complex parameters are
borrowed to a large extent from [6]. We include them in this paper, because
they provide an interesting complement to the analysis of domains of Bessel
operators.
## 2 Basic closed realizations of the Bessel operator
The main topic of this preliminary section is closed homogeneous realizations
of $L_{\alpha}$. We recall their definitions following [6, 7].
We will denote by ${\mathbb{R}}_{+}$ the open positive half-line, that is
$]0,\infty[$. We will use $L^{2}({\mathbb{R}}_{+})$ as our basic Hilbert
space. We define $L_{\alpha}^{\max}$ to be the operator given by the
expression $L_{\alpha}$ with the domain
${\cal D}(L_{\alpha}^{\max})=\\{f\in L^{2}({\mathbb{R}}_{+})\mid
L_{\alpha}f\in L^{2}({\mathbb{R}}_{+})\\}.$
We also set $L_{\alpha}^{\min}$ to be the closure of the restriction of
$L_{\alpha}^{\max}$ to $C_{\rm c}^{\infty}({\mathbb{R}}_{+})$.
We will often write $m$ for one of the square roots of $\alpha$, that is,
$\alpha=m^{2}$. It is easy to see that the space of solutions of the
differential equation
$L_{\alpha}f=0$ (2.1)
is spanned for $\alpha\neq 0$ by $x^{\frac{1}{2}+m}$, $x^{\frac{1}{2}-m}$, and
for $\alpha=0$ by $x^{\frac{1}{2}}$, $x^{\frac{1}{2}}\log x$. Note that both
solutions are square integrable near $0$ iff $|{\rm Re}(m)|<1$. This is used
in [6] to show that we have
$\displaystyle{\cal D}(L_{\alpha}^{\max})$ $\displaystyle={\cal
D}(L_{\alpha}^{\min})+{\mathbb{C}}x^{\frac{1}{2}+m}\xi+{\mathbb{C}}x^{\frac{1}{2}-m}\xi,$
$\displaystyle|{\rm Re}\sqrt{\alpha}|<1,\ \alpha\neq 0;$ (2.2)
$\displaystyle{\cal D}(L_{0}^{\max})$ $\displaystyle={\cal
D}(L_{0}^{\min})+{\mathbb{C}}x^{\frac{1}{2}}\xi+{\mathbb{C}}x^{\frac{1}{2}}\log(x)\xi,$
$\displaystyle\alpha=0;$ (2.3) $\displaystyle{\cal D}(L_{\alpha}^{\max})$
$\displaystyle={\cal D}(L_{\alpha}^{\min}),$ $\displaystyle|{\rm
Re}\sqrt{\alpha}|\geq 1.$ (2.4)
Above (and throughout the paper) $\xi$ is any
$C_{\mathrm{c}}^{\infty}[0,\infty[$ function such that $\xi=1$ near $0$.
Following [6], for ${\rm Re}(m)>-1$ we also introduce another family of closed
realizations of Bessel operators: the operators $H_{m}$ defined as the
restrictions of $L_{m^{2}}^{\max}$ to
${\cal D}(H_{m}):={\cal
D}(L_{m^{2}}^{\min})+{\mathbb{C}}x^{\frac{1}{2}+m}\xi.$ (2.5)
We will use various basic concepts and facts about 1-dimensional Schrödinger
operators with complex potentials. We will use [9] as the main reference, but
clearly most of them belong to the well-known folklore. In particular, we will
use two kinds of Green’s operators. Let us recall this concept, following [9].
Let $L_{\mathrm{c}}^{1}({\mathbb{R}}_{+})$ be the set of integrable functions
of compact support in ${\mathbb{R}}_{+}$. We will say that an operator
$G:L_{\mathrm{c}}^{1}({\mathbb{R}}_{+})\to AC^{1}({\mathbb{R}}_{+})$ is a
Green’s operator of $L_{\alpha}$ if for any $g\in
L_{\mathrm{c}}^{1}({\mathbb{R}}_{+})$
$L_{\alpha}Gg=g.$ (2.6)
## 3 The forward Green’s operator
Let us introduce the operator $G_{\alpha}^{\to}$ defined by the kernel
$\displaystyle G_{\alpha}^{\to}(x,y)$
$\displaystyle:=\frac{1}{2m}\big{(}x^{\frac{1}{2}+m}y^{\frac{1}{2}-m}-x^{\frac{1}{2}-m}y^{\frac{1}{2}+m}\big{)}\theta(x-y),$
$\displaystyle\alpha\neq 0;$ (3.1) $\displaystyle G_{0}^{\to}(x,y)$
$\displaystyle:=x^{\frac{1}{2}}y^{\frac{1}{2}}\log\Big{(}\frac{x}{y}\Big{)}\theta(x-y),$
$\displaystyle\alpha=0.$ (3.2)
Note that $G_{\alpha}^{\to}$ is a Green’s operator in the sense of (2.6).
Besides,
$\displaystyle{\rm supp}G_{\alpha}^{\to}g\subset{\rm supp}g+{\mathbb{R}}_{+},$
(3.3)
which is why it is sometimes called the forward Green’s operator.
Unfortunately, the operator $G_{\alpha}^{\to}$ is unbounded on
$L^{2}({\mathbb{R}}_{+})$. To make it bounded, for any $a>0$ we can compress
it to the finite interval $[0,a]$, by introducing the operator
$G_{\alpha}^{a\to}$ with the kernel
$G_{\alpha}^{a\to}(x,y):={\mathbb{1}}_{[0,a]}(x)\,G_{\alpha}^{\to}(x,y)\,{\mathbb{1}}_{[0,a]}(y).$ (3.4)
It is also convenient to consider the operator $L_{\alpha}$ restricted to
$[0,a]$. One of its closed realizations is defined by the zero boundary condition at $0$ and no boundary condition at $a$ (see [9], Def. 4.14). It
will be denoted $L_{\alpha,0}^{a}$. By Prop. 7.3 of [9] we have
$G_{\alpha}^{a\to}=(L_{\alpha,0}^{a})^{-1}$, and hence
${\cal D}(L_{\alpha,0}^{a})=G_{\alpha}^{a\to}L^{2}[0,a].$ (3.5)
Now we can describe the domain of $L_{\alpha}^{\min}$ with the help of the
forward Green’s operator.
###### Proposition 3.1
Suppose that $f\in{\cal D}(L_{\alpha}^{\max})$. Then the following statements
are equivalent:
1. 1.
$f\in{\cal D}(L_{\alpha}^{\min})$.
2. 2.
For some $a>0$ and $g^{a}\in L^{2}[0,a]$ we have
$f\Big{|}_{[0,a]}=G_{\alpha}^{\to}g^{a}\Big{|}_{[0,a]}$.
3. 3.
For all $a>0$ there exists $g^{a}\in L^{2}[0,a]$ such that
$f\Big{|}_{[0,a]}=G_{\alpha}^{\to}g^{a}\Big{|}_{[0,a]}$.
Proof. The boundary space ([9] Def. 5.2) of $L_{\alpha}$ is trivial at
$\infty$ (see [9] Prop. 5.15). Therefore, for any $a>0$ we have
$f\in{\cal D}(L_{\alpha}^{\min})\ \Leftrightarrow\ f\Big{|}_{[0,a]}\in{\cal
D}(L_{\alpha,0}^{a}).$ (3.6)
Hence it is enough to apply (3.5). $\Box$
Define the operator $Q_{\alpha}:=\frac{1}{x^{2}}G_{\alpha}^{\to}.$ Its
integral kernel is
$\displaystyle Q_{\alpha}(x,y)$
$\displaystyle=\frac{1}{2m}(x^{-\frac{3}{2}+m}y^{\frac{1}{2}-m}-x^{-\frac{3}{2}-m}y^{\frac{1}{2}+m})\theta(x-y),$
$\displaystyle\alpha\neq 0;$ (3.7) $\displaystyle Q_{0}(x,y)$
$\displaystyle:=x^{-\frac{3}{2}}y^{\frac{1}{2}}\log\Big{(}\frac{x}{y}\Big{)}\theta(x-y),$
$\displaystyle\alpha=0.$ (3.8)
###### Proposition 3.2
Assume that $|{\rm Re}\sqrt{\alpha}|<1$. Then the operator $Q_{\alpha}$ is
bounded on $L^{2}({\mathbb{R}}_{+})$, and
$\|Q_{\alpha}\|=\frac{1}{\mathrm{dist}\big{(}\alpha,(1+{\rm i}{\mathbb{R}})^{2}\big{)}}.$ (3.9)
Proof. Introduce the unitary operator $U:L^{2}({\mathbb{R}}_{+})\to
L^{2}({\mathbb{R}})$ given by
$(Uf)(t):={\rm e}^{\frac{t}{2}}f({\rm e}^{t}).$ (3.10)
Note that if an operator $K$ has the kernel $K(x,y)$, then $UKU^{-1}$ has the kernel ${\rm e}^{\frac{t}{2}}K({\rm e}^{t},{\rm e}^{s}){\rm e}^{\frac{s}{2}}$.
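Both claims reduce to the substitution $x={\rm e}^{t}$ (a verification sketch of ours, not part of the cited argument):

```latex
% Unitarity: with x = e^t, dx = e^t dt,
\int_{0}^{\infty}|f(x)|^{2}\,\mathrm{d}x
  = \int_{-\infty}^{\infty}|f(\mathrm{e}^{t})|^{2}\,\mathrm{e}^{t}\,\mathrm{d}t
  = \int_{-\infty}^{\infty}|(Uf)(t)|^{2}\,\mathrm{d}t .
% Kernel rule: (U^{-1}h)(x) = x^{-1/2} h(\log x), and substituting y = e^s,
(UKU^{-1}h)(t)
  = \mathrm{e}^{\frac{t}{2}}\int_{0}^{\infty}K(\mathrm{e}^{t},y)\,
      y^{-\frac{1}{2}}\,h(\log y)\,\mathrm{d}y
  = \int_{-\infty}^{\infty}\mathrm{e}^{\frac{t}{2}}\,
      K(\mathrm{e}^{t},\mathrm{e}^{s})\,\mathrm{e}^{\frac{s}{2}}\,h(s)\,\mathrm{d}s .
```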
Therefore, for any $\alpha$ the operator $UQ_{\alpha}U^{-1}$ has the kernel
$\displaystyle\frac{1}{2m}({\rm e}^{-(t-s)(1-m)}-{\rm
e}^{-(t-s)(1+m)})\theta(t-s),$ $\displaystyle\alpha\neq 0;$ (3.11)
$\displaystyle{\rm e}^{-(t-s)}(t-s)\theta(t-s),$ $\displaystyle\alpha=0.$
(3.12)
Thus, it is the convolution by the function
$\displaystyle t\to$ $\displaystyle\frac{1}{2m}({\rm e}^{-t(1-m)}-{\rm
e}^{-t(1+m)})\theta(t),$ $\displaystyle\alpha\neq 0;$ (3.13) $\displaystyle
t\to$ $\displaystyle{\rm e}^{-t}t\theta(t),$ $\displaystyle\alpha=0.$ (3.14)
Assume now that $|{\rm Re}\sqrt{\alpha}|<1$. Then the function (3.13) is
integrable and we can apply the Fourier transformation defined by $({\cal
F}u)(\omega)=(2\pi)^{-1/2}\int{\rm e}^{-i\omega t}u(t)\,{\rm d}t$. After this
transformation the operator $UQ_{\alpha}U^{-1}$ becomes the operator of multiplication by $\sqrt{2\pi}$ times the Fourier transform of (3.13), resp. (3.14), that is
$\omega\mapsto\frac{1}{(1+{\rm i}\omega)^{2}-m^{2}}.$ (3.15)
Thus the norm of $UQ_{\alpha}U^{-1}$, and hence also of $Q_{\alpha}$, is the
supremum of the absolute value of (3.15). $\Box$
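A side remark of ours, for orientation: the set $(1+{\rm i}{\mathbb{R}})^{2}$ appearing in (3.9) is a parabola, so the distance in (3.9) is simply the distance to a parabola:

```latex
% Parametrize (1 + i\omega)^2 = u + iv:
u = 1 - \omega^{2}, \qquad v = 2\omega,
\qquad\text{hence}\qquad u = 1 - \frac{v^{2}}{4} .
```

Thus $(1+{\rm i}{\mathbb{R}})^{2}$ is a left-opening parabola with vertex $1$; it coincides with the set $\{\alpha\ |\ |{\rm Re}\sqrt{\alpha}|=1\}$, the boundary of the region appearing in (2.2).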
###### Remark 3.3
The operator $Q_{\alpha}$ belongs to the class of operators analyzed in [17]
on p. 271, which goes back to Hardy–Littlewood–Pólya [12], p. 229.
Proposition 3.2 for $\alpha=\frac{1}{4}$ is especially important and simple. This case was noted in [6, p. 566] and [1, Lemma 2.2]. It can be written as
$\displaystyle g(x):=x^{-2}\int_{0}^{x}(x-y)f(y){\rm d}y\ \Rightarrow\
\|g\|\leq\frac{4}{3}\|f\|.$ (3.16)
One can remark that (3.16) is essentially equivalent to the one-dimensional version of the classical Rellich inequality, see e.g. [4, (6.1.1)]:
$\int_{0}^{\infty}\frac{|u|^{2}}{x^{4}}{\rm
d}x\leq\frac{16}{9}\int_{0}^{\infty}|u^{\prime\prime}|^{2}{\rm d}x,\quad u\in
C_{\mathrm{c}}^{\infty}({\mathbb{R}}_{+}),$ (3.17)
where we identify $f=u^{\prime\prime}$ and $g=\frac{u}{x^{2}}$.
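As a consistency check (our computation), the constant $\frac{4}{3}$ in (3.16) is recovered from (3.9) with $\alpha=\frac{1}{4}$, $m=\frac{1}{2}$:

```latex
\big|\tfrac{1}{4} - (1+\mathrm{i}\omega)^{2}\big|^{2}
  = \big|\big(\omega^{2}-\tfrac{3}{4}\big) - 2\mathrm{i}\omega\big|^{2}
  = \big(\omega^{2}-\tfrac{3}{4}\big)^{2} + 4\omega^{2}
  = \omega^{4} + \tfrac{5}{2}\,\omega^{2} + \tfrac{9}{16},
% minimized at \omega = 0, so
\mathrm{dist}\big(\tfrac{1}{4},(1+\mathrm{i}\mathbb{R})^{2}\big) = \tfrac{3}{4},
\qquad
\|Q_{\frac{1}{4}}\| = \tfrac{4}{3}.
```

Squaring this norm also yields the constant $\frac{16}{9}$ in (3.17).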
The proof of the following proposition uses only the simple estimate (3.16).
###### Proposition 3.4
${\cal D}(L_{\alpha}^{\max})\cap{\cal
D}(L_{\beta}^{\max})=\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$ if
$\alpha\neq\beta$.
Proof. We have $f\in{\cal D}(L_{\alpha}^{\max})$ if and only if $f\in L^{2}({\mathbb{R}}_{+})$ and $-f^{\prime\prime}+(\alpha-1/4)x^{-2}f\in L^{2}({\mathbb{R}}_{+})$. Hence, if we also have $f\in{\cal D}(L_{\beta}^{\max})$, then $(\alpha-\beta)x^{-2}f\in L^{2}({\mathbb{R}}_{+})$, and since $\alpha\neq\beta$ we get $x^{-2}f\in L^{2}({\mathbb{R}}_{+})$, hence $f^{\prime\prime}\in L^{2}({\mathbb{R}}_{+})$. Recall that $f,f^{\prime\prime}\in L^{2}({\mathbb{R}}_{+})$ implies $f\in\mathcal{H}^{1}({\mathbb{R}}_{+})$ and $\|f^{\prime}\|^{2}_{L^{2}({\mathbb{R}}_{+})}\leq 2\|f\|_{L^{2}({\mathbb{R}}_{+})}\|f^{\prime\prime}\|_{L^{2}({\mathbb{R}}_{+})}$.
It follows that $f$ is absolutely continuous with $f(x)=a+\int_{0}^{x}f^{\prime}(y){\rm d}y$ for some constant $a$, and $f^{\prime}$ is absolutely continuous with $f^{\prime}(x)=b+\int_{0}^{x}f^{\prime\prime}(y){\rm d}y$ for some constant $b$. Thus
$f(x)=a+bx+\int_{0}^{x}\int_{0}^{y}f^{\prime\prime}(z){\rm d}z{\rm
d}y=a+bx+x^{2}g(x),\qquad g(x):=x^{-2}\int_{0}^{x}(x-y)f^{\prime\prime}(y){\rm
d}y.$
Then, by (3.16)
$\displaystyle\|g\|_{L^{2}({\mathbb{R}}_{+})}\leq\frac{4}{3}\|f^{\prime\prime}\|_{L^{2}({\mathbb{R}}_{+})}.$
(3.18)
Thus $x^{-2}f(x)=ax^{-2}+bx^{-1}+g(x)$ where $g\in L^{2}({\mathbb{R}}_{+})$,
so $\int_{0}^{1}|x^{-2}f(x)|^{2}{\rm d}x<\infty$ if and only if $a=b=0$, so
that $f(x)=\int_{0}^{x}(x-y)f^{\prime\prime}(y){\rm d}y$ and
$f^{\prime}(x)=\int_{0}^{x}f^{\prime\prime}(y){\rm d}y$, hence
$f\in\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$.
Conversely, if $f\in\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$ then $x^{-2}f\in
L^{2}({\mathbb{R}}_{+})$ with
$\|x^{-2}f\|_{L^{2}({\mathbb{R}}_{+})}\leq\frac{4}{3}\|f^{\prime\prime}\|_{L^{2}({\mathbb{R}}_{+})}$
by (3.16), hence $f\in{\cal D}(L_{\alpha}^{\max})$ for all $\alpha$. $\Box$
## 4 Domain of Bessel operators for $|{\rm Re}(m)|<1$
Below we state the first main result of our paper (which is an extension of a
result of [1]).
###### Theorem 4.1
If $|{\rm Re}\sqrt{\alpha}|<1$, then ${\cal
D}(L_{\alpha}^{\min})=\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$. Moreover,
$\left\\{\alpha\in{\mathbb{C}}\ |\ |{\rm
Re}\sqrt{\alpha}|<1\right\\}\ni\alpha\mapsto L_{\alpha}^{\min}$ (4.1)
is a holomorphic family of closed operators.
The proof of this theorem is based on the following lemma.
###### Lemma 4.2
Let $|{\rm Re}\sqrt{\alpha}|<1$ and $f\in{\cal D}(L_{\alpha}^{\min})$. Then
$\|x^{-2}f\|\leq\frac{1}{\mathrm{dist}\big{(}\alpha,(1+{\rm
i}{\mathbb{R}})^{2}\big{)}}\|L_{\alpha}^{\min}f\|.$ (4.2)
Proof. Let $a>0$. Set $g:=L_{\alpha}^{\min}f$, $f^{a}:=f\Big{|}_{[0,a]}$,
$g^{a}:=g\Big{|}_{[0,a]}$. Let $G_{\alpha}^{a\to}$ be as in (3.4). As in the
proof of Prop. 3.1,
$f^{a}=G_{\alpha}^{a\to}g^{a}.$ (4.3)
So
$\displaystyle\|x^{-2}f\|$ $\displaystyle=\lim_{a\to\infty}\|x^{-2}f^{a}\|$
(4.4) $\displaystyle=\lim_{a\to\infty}\|x^{-2}G_{\alpha}^{a\to}g^{a}\|$
$\displaystyle=\|Q_{\alpha}g\|\leq\frac{1}{\mathrm{dist}\big{(}\alpha,(1+{\rm
i}{\mathbb{R}})^{2}\big{)}}\|g\|.\quad\Box$ (4.5)
Proof of Theorem 4.1. We can cover the region on the lhs of (4.1) by disks
touching the boundary of this region, that is, (1.2). Inside each disk we
apply Thm A.1 and Lemma 4.2. We obtain in particular that if $|{\rm Re}\sqrt{\alpha_{i}}|<1$, $i=1,2$, then ${\cal D}(L_{\alpha_{1}}^{\min})={\cal
D}(L_{\alpha_{2}}^{\min})$. But clearly ${\cal
D}(L_{\frac{1}{4}}^{\min})=\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$. $\Box$
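Spelled out slightly (a sketch of the invocation of Thm A.1; the names $A$, $B$, $c$ are ours): fix $\alpha_{0}$ with $|{\rm Re}\sqrt{\alpha_{0}}|<1$ and write

```latex
A := L_{\alpha_{0}}^{\min},
\qquad
B := \frac{1}{x^{2}}\ \text{on}\ {\cal D}(A),
\qquad
L_{\alpha}^{\min} = A + (\alpha-\alpha_{0})B .
% Lemma 4.2 is the relative bound (A.26) with constant
c = \mathrm{dist}\big(\alpha_{0},(1+\mathrm{i}\mathbb{R})^{2}\big)^{-1},
% so Thm A.1 applies on the disk
|\alpha-\alpha_{0}| < \mathrm{dist}\big(\alpha_{0},(1+\mathrm{i}\mathbb{R})^{2}\big),
% which touches the boundary parabola of the region in (4.1).
```

On each such disk ${\cal D}(L_{\alpha}^{\min})={\cal D}(L_{\alpha_{0}}^{\min})$; since the region is connected and ${\cal D}(L_{\frac{1}{4}}^{\min})=\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$, the common domain is $\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$ throughout.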
###### Theorem 4.3
We have
$\displaystyle{\cal D}(L_{\alpha}^{\max})$ $\displaystyle={\cal
H}_{0}^{2}+{\mathbb{C}}x^{\frac{1}{2}+m}\xi+{\mathbb{C}}x^{\frac{1}{2}-m}\xi,$
$\displaystyle|{\rm Re}\sqrt{\alpha}|<1,\ \alpha\neq 0;$ (4.6)
$\displaystyle{\cal D}(L_{\alpha}^{\max})$ $\displaystyle={\cal
H}_{0}^{2}+{\mathbb{C}}x^{\frac{1}{2}}\xi+{\mathbb{C}}x^{\frac{1}{2}}\log(x)\xi,$
$\displaystyle\alpha=0.$ (4.7)
Besides,
${\cal D}(L_{\alpha_{1}}^{\max})\neq{\cal D}(L_{\alpha_{2}}^{\max}),\qquad\alpha_{1}\neq\alpha_{2},\quad|{\rm Re}\sqrt{\alpha_{i}}|<1,\quad i=1,2.$ (4.8)
Furthermore,
$\left\\{\alpha\in{\mathbb{C}}\ |\ |{\rm
Re}\sqrt{\alpha}|<1\right\\}\ni\alpha\mapsto L_{\alpha}^{\max}$ (4.9)
is a holomorphic family of closed operators.
Proof. Using ${\cal D}(L_{\alpha}^{\min})={\cal H}_{0}^{2}$, (2.2) and (2.3) can now be rewritten as (4.6) and (4.7).
Clearly, $x^{\frac{1}{2}+m}\xi$ and $x^{\frac{1}{2}}\log(x)\xi$ do not belong
to ${\cal H}_{0}^{2}({\mathbb{R}}_{+})$ (because their second derivatives are
not square integrable). Therefore, ${\cal D}(L_{\alpha}^{\max})\neq{\cal
H}_{0}^{2}({\mathbb{R}}_{+})$. This together with Proposition 3.4 implies
(4.8).
We have $(L_{\alpha}^{\min})^{*}=L_{\overline{\alpha}}^{\max}$. Therefore, to
obtain the holomorphy we can use Proposition A.2. $\Box$
The most important holomorphic family of Bessel operators is
$\\{m\in{\mathbb{C}}\ |\ {\rm Re}(m)>-1\\}\ni m\mapsto H_{m}.$ (4.10)
Its holomorphy has been proven in [6]. Using arguments similar to those in the
proof of Theorem 4.3 we obtain a closer description of this family in the
region $|{\rm Re}(m)|<1$.
###### Theorem 4.4
We have
$\displaystyle{\cal D}(H_{m})$ $\displaystyle={\cal
H}_{0}^{2}+{\mathbb{C}}x^{\frac{1}{2}+m}\xi,$ $\displaystyle|{\rm Re}(m)|<1.$
(4.11)
Besides, if $m_{1}\neq m_{2}$ and $|{\rm Re}(m_{i})|<1$, $i=1,2$, then ${\cal
D}(H_{m_{1}})\neq{\cal D}(H_{m_{2}})$.
## 5 Two-sided Green’s operator
For any $m\in{\mathbb{C}}$, $m\neq 0$, let us introduce the operator $G_{m}$
with the kernel
$\displaystyle G_{m}(x,y)$
$\displaystyle:=\frac{1}{2m}\left(x^{\frac{1}{2}+m}y^{\frac{1}{2}-m}\theta(y-x)+x^{\frac{1}{2}-m}y^{\frac{1}{2}+m}\theta(x-y)\right).$
(5.1)
Recall that $\theta$ is the Heaviside function. (5.1) defines one of the Green's operators of $L_{m^{2}}$ in the sense of (2.6). Following [9], we will call it the two-sided Green's operator.
The operator $G_{m}$ is not bounded on $L^{2}({\mathbb{R}}_{+})$ for any
$m\in{\mathbb{C}}$. However, at least for ${\rm Re}(m)>-1$, it is useful in
the $L^{2}$ setting.
###### Proposition 5.1
Let ${\rm Re}(m)>-1$, $m\neq 0$ and $a>0$.
1. 1.
If $g\in L^{2}[0,a]$, then
$f(x)=G_{m}g(x)=\int_{0}^{\infty}G_{m}(x,y)g(y){\rm d}y$ (5.2)
is well defined, belongs to $AC^{1}]0,\infty[$ and satisfies $L_{\alpha}f=g$.
2. 2.
Conversely, if $f\in AC^{1}]0,\infty[$, $L_{\alpha}f=g\in L^{2}[0,a]$, then
there exist $c_{+},c_{-}$ such that
$\displaystyle f(x)$
$\displaystyle=c_{+}x^{\frac{1}{2}+m}+c_{-}x^{\frac{1}{2}-m}+G_{m}g(x),\quad
m\neq 0.$ (5.3)
Proof. Note first that ${\rm Re}(m)>-1$ implies that $x^{\frac{1}{2}+m}$ is locally in $L^{2}$, so that (5.2) is well defined. Using this, the proof of the first part of the proposition
is a straightforward computation done, in a more general setting, in [9], see
§2.7 and Definition 2.10 there. For the second part, note that
$L_{\alpha}(f-G_{m}g)=0$ by the first part of the proposition, and that the
two functions $x^{\frac{1}{2}\pm m}$ give a basis of the nullspace of
$L_{\alpha}$. $\Box$
Let us introduce the operator $Z_{m}:=\frac{1}{x^{2}}G_{m}$ with the kernel
$\displaystyle Z_{m}(x,y)$
$\displaystyle=\frac{1}{2m}\left(x^{-\frac{3}{2}+m}y^{\frac{1}{2}-m}\theta(y-x)+x^{-\frac{3}{2}-m}y^{\frac{1}{2}+m}\theta(x-y)\right).$
(5.4)
###### Proposition 5.2
Let ${\rm Re}(m)>1$. Then $Z_{m}$ is bounded and
$\|Z_{m}\|=\frac{1}{\mathrm{dist}\big{(}m^{2},(1+{\rm i}{\mathbb{R}})^{2}\big{)}}.$ (5.5)
Proof. If $U$ is given by (3.10), then $UZ_{m}U^{-1}$ has the kernel
$\displaystyle\frac{1}{2m}\left({\rm e}^{-(m-1)(s-t)}\theta(s-t)+{\rm e}^{-(m+1)(t-s)}\theta(t-s)\right).$ (5.6)
If ${\rm Re}(m)>1$, after the Fourier transformation (defined as in the proof
of Proposition 3.2) it becomes the multiplication by the function
$\displaystyle\omega\mapsto\frac{1}{2m}\Big{(}\frac{1}{m-1-{\rm i}\omega}+\frac{1}{m+1+{\rm i}\omega}\Big{)}=\frac{1}{m^{2}-(1+{\rm i}\omega)^{2}},$ (5.7)
the supremum of whose absolute value is the right hand side of (5.5). $\Box$
## 6 Domain of Bessel operators for ${\rm Re}(m)>1$
For ${\rm Re}(m)\geq 1$ there is a unique closed Bessel operator. We will see
in the following theorem that its domain is again the minimal 2nd order
Sobolev space, except at the boundary ${\rm Re}(m)=1$, cf. Section 7.
###### Theorem 6.1
Let ${\rm Re}(m)>1$. Then ${\cal
D}(H_{m})=\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$.
Proof. We know that $\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})\subset{\cal
D}(L_{m^{2}}^{\max})$ for any $m$. But for ${\rm Re}(m)>1$ we have
$L_{m^{2}}^{\max}=H_{m}$. This proves the inclusion
$\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})\subset{\cal D}(H_{m})$.
Let us prove the converse inclusion. Let $f\in{\cal D}(H_{m})$. It is enough to assume that $f$ is supported in $[0,1]$. Let $g:=H_{m}f$. Then $g\in L^{2}[0,1]$. By
Prop. 5.1, we can write
$f(x)=c_{+}x^{\frac{1}{2}+m}+c_{-}x^{\frac{1}{2}-m}+\frac{x^{\frac{1}{2}+m}}{2m}\int_{x}^{1}y^{\frac{1}{2}-m}g(y){\rm
d}y+\frac{x^{\frac{1}{2}-m}}{2m}\int_{0}^{x}y^{\frac{1}{2}+m}g(y){\rm d}y.$
(6.1)
For $x>1$ we have
$f(x)=c_{+}x^{\frac{1}{2}+m}+x^{\frac{1}{2}-m}\left(c_{-}+\frac{1}{2m}\int_{0}^{1}y^{\frac{1}{2}+m}g(y){\rm
d}y\right),$ (6.2)
hence $c_{+}=0$. We have, for $x\to 0$,
$\displaystyle\left|x^{\frac{1}{2}+m}\int_{x}^{1}y^{\frac{1}{2}-m}g(y){\rm d}y\right|$ $\displaystyle\leq x\int_{0}^{1}|g(y)|{\rm d}y\to 0;$ (6.3) $\displaystyle\left|x^{\frac{1}{2}-m}\int_{0}^{x}y^{\frac{1}{2}+m}g(y){\rm d}y\right|$ $\displaystyle\leq x\int_{0}^{x}|g(y)|{\rm d}y\to 0.$ (6.4)
$x^{\frac{1}{2}-m}$ is not square integrable near zero. Hence $c_{-}=0$. Thus
$f(x)=\frac{x^{\frac{1}{2}+m}}{2m}\int_{x}^{1}y^{\frac{1}{2}-m}g(y){\rm
d}y+\frac{x^{\frac{1}{2}-m}}{2m}\int_{0}^{x}y^{\frac{1}{2}+m}g(y){\rm d}y.$
(6.5)
By (6.3) and (6.4), $\lim\limits_{x\to 0}f(x)=0$. Now
$\displaystyle f^{\prime}(x)=\frac{(\frac{1}{2}+m)x^{-\frac{1}{2}+m}}{2m}$
$\displaystyle\int_{x}^{1}y^{\frac{1}{2}-m}g(y){\rm
d}y+\frac{(\frac{1}{2}-m)x^{-\frac{1}{2}-m}}{2m}\int_{0}^{x}y^{\frac{1}{2}+m}g(y){\rm
d}y,$ (6.6)
$\displaystyle\left|x^{-\frac{1}{2}-m}\int_{0}^{x}y^{\frac{1}{2}+m}g(y){\rm
d}y\right|$ $\displaystyle\leq\int_{0}^{x}|g(y)|{\rm d}y\to 0,$ (6.7)
$\displaystyle\left|x^{-\frac{1}{2}+m}\int_{x}^{1}y^{\frac{1}{2}-m}g(y){\rm
d}y\right|$ $\displaystyle\leq\int_{0}^{\epsilon}|g(y)|{\rm
d}y+x^{-\frac{1}{2}+{\rm Re}(m)}\int_{\epsilon}^{1}y^{\frac{1}{2}-{\rm
Re}(m)}|g(y)|{\rm d}y.$ (6.8)
For any $\epsilon>0$, the second term on the right of (6.8) goes to zero as $x\to 0$. The first can be made arbitrarily small by taking $\epsilon$ small. Therefore (6.8) goes to zero, and thus $\lim\limits_{x\to 0}f^{\prime}(x)=0$.
Finally
$\displaystyle f^{\prime\prime}(x)+g(x)=$
$\displaystyle\frac{(m^{2}-\frac{1}{4})x^{-\frac{3}{2}+m}}{2m}\int_{x}^{1}y^{\frac{1}{2}-m}g(y){\rm
d}y+\frac{(m^{2}-\frac{1}{4})x^{-\frac{3}{2}-m}}{2m}\int_{0}^{x}y^{\frac{1}{2}+m}g(y){\rm
d}y$ (6.9) $\displaystyle=$
$\displaystyle\Big{(}m^{2}-\frac{1}{4}\Big{)}Z_{m}g(x).$ (6.10)
By Proposition 5.2 $Z_{m}$ is bounded. Hence $f^{\prime\prime}\in
L^{2}({\mathbb{R}}_{+})$. Therefore,
$f\in\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$. $\Box$
## 7 Domain of Bessel operators for ${\rm Re}(m)=1$
In this section we treat the most complicated situation concerning the domain
of $H_{m}$, namely the case ${\rm Re}(m)=1$. By (1.3) we then have
$H_{m}=L_{m^{2}}^{\min}=L_{m^{2}}^{\max}$. We prove the following theorem.
###### Theorem 7.1
Let ${\rm Re}(m)=1$.
1. 1.
$\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$ is a dense subspace of ${\cal
D}(H_{m})$ of infinite codimension.
2. 2.
If $\xi$ is a $C_{\mathrm{c}}^{2}[0,\infty[$ function equal $1$ near zero,
then $x^{\frac{1}{2}+m}\xi\in{\cal D}(H_{m})$ but
$x^{\frac{1}{2}+m}\xi\not\in\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$.
3. 3.
If ${\rm Re}(m^{\prime})=1$ and $m\neq m^{\prime}$, then ${\cal
D}(H_{m})\cap{\cal D}(H_{m^{\prime}})=\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$.
Proof. By (1.3), it is clear that $\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})\subset{\cal
D}(H_{m})$ and $x^{\frac{1}{2}+m}\xi\in{\cal D}(H_{m})$. The density of
$\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$ in ${\cal D}(H_{m})$ is a consequence
of $H_{m}=L_{m^{2}}^{\min}$. The last assertion of the theorem is a special
case of Proposition 3.4. In the rest of this section we construct an infinite
dimensional vector subspace ${\cal V}$ of ${\cal D}(H_{m})$ such that ${\cal
V}\cap\big{(}\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})+{\mathbb{C}}x^{\frac{1}{2}+m}\xi\big{)}=\\{0\\}$,
which will finish the proof of the theorem.
Let us study the behaviour at zero of the functions in ${\cal D}(H_{m})$. For
functions in the subspace
$\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})+{\mathbb{C}}x^{\frac{1}{2}+m}\xi$ this
is easy, cf. the next lemma, but this is not so trivial for the other
functions.
###### Lemma 7.2
If
$f=f_{0}+cx^{\frac{1}{2}+m}\xi\in\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})+{\mathbb{C}}x^{\frac{1}{2}+m}\xi$
then
$c=\lim_{x\to 0}x^{-\frac{1}{2}-m}f(x).$ (7.1)
Proof. If $f_{0}\in\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})$ then
$f_{0}(x)=\int\limits_{0}^{x}(x-y)f_{0}^{\prime\prime}(y){\rm d}y$. Therefore,
$\sqrt{3}|f_{0}(x)|\leq x^{\frac{3}{2}}\|f_{0}^{\prime\prime}\|_{L^{2}[0,x]}$
and since ${\rm Re}(m+\frac{1}{2})=\frac{3}{2}$ we get $\lim\limits_{x\to
0}x^{-m-\frac{1}{2}}f_{0}(x)=0$, which implies (7.1). $\Box$
Let $a>0$. Let $G_{m}^{a}$ be the operator $G_{m}$ compressed to the interval
$[0,a]$. Its kernel is
$G_{m}^{a}(x,y)={\mathbb{1}}_{[0,a]}(x)\,G_{m}(x,y)\,{\mathbb{1}}_{[0,a]}(y).$ (7.2)
We will write $L_{\alpha}^{a,\max}$ for the maximal realization of the operator $L_{\alpha}$ on $L^{2}[0,a]$.
###### Lemma 7.3
Let ${\rm Re}(m)>-1$. Then $G_{m}^{a}$ is a bounded operator on $L^{2}[0,a]$.
If $g\in L^{2}[0,a]$, then $G_{m}^{a}g\in{\cal D}(L_{m^{2}}^{a,\max})$ and
$L_{m^{2}}^{a,\max}G_{m}^{a}g=g$. Consequently, $G_{m}^{a}$ is injective.
Proof. We check that (7.2) belongs to $L^{2}\big{(}[0,a]\times[0,a]\big{)}$.
This proves that $G_{m}^{a}$ is Hilbert–Schmidt, hence bounded. $G_{m}^{a}$ is
a right inverse of $L_{m^{2}}^{a,\max}$, because $G_{m}$ is a right inverse of
$L_{m^{2}}$ (see Proposition 5.1). $\Box$
###### Lemma 7.4
Let ${\rm Re}(m)=1$. Let $g\in L^{2}[0,a]$ and $f=G_{m}^{a}g$. Then
$\lim_{x\to
0}\left(2mx^{-\frac{1}{2}-m}f(x)-\int_{x}^{a}y^{\frac{1}{2}-m}g(y){\rm
d}y\right)=0.$ (7.3)
Therefore, if
$\lim_{x\to 0}\int_{x}^{a}y^{\frac{1}{2}-m}g(y){\rm d}y$ (7.4)
does not exist, then
$f=G_{m}^{a}g\not\in\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})+{\mathbb{C}}x^{\frac{1}{2}+m}\xi$.
Proof. We have
$2mx^{-\frac{1}{2}-m}f(x)=\int_{x}^{a}y^{\frac{1}{2}-m}g(y){\rm d}y+x^{-2m}\int_{0}^{x}y^{\frac{1}{2}+m}g(y){\rm d}y.$ (7.5)
Since ${\rm Re}(m)=1$ the absolute value of the second term on the right hand
side is less than
$x^{-\frac{1}{2}}\int_{0}^{x}(y/x)^{\frac{3}{2}}|g(y)|{\rm d}y\leq x^{-\frac{1}{2}}\int_{0}^{x}|g(y)|{\rm d}y\leq\|g\|_{L^{2}[0,x]},$
which tends to $0$ as $x\to 0$.
This proves (7.3). If $f=G_{m}^{a}g\in\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})+{\mathbb{C}}x^{\frac{1}{2}+m}\xi$, then by (7.3) and (7.1) the limit (7.4) exists. This proves the second statement of the lemma. $\Box$
###### Lemma 7.5
Let ${\rm Re}(m)=1$. There exists an infinite dimensional subspace ${\cal
V}\subset{\cal D}(H_{m})$ such that
${\cal
V}\cap\big{(}\mathcal{H}_{0}^{2}({\mathbb{R}}_{+})+{\mathbb{C}}x^{\frac{1}{2}+m}\xi\big{)}=\\{0\\}.$
(7.6)
Proof. For each $\tau\in]\frac{1}{2},1[$ let $g_{\tau}\in C^{2}(]0,1])$ be given for $0<x<\frac{1}{2}$ by
$g_{\tau}(x)=x^{-\frac{3}{2}+m}\big{(}\ln(1/x)\big{)}^{-\tau}$
and arbitrary on $[\frac{1}{2},1]$. Then for $x<\frac{1}{2}$ we have
$|g_{\tau}(x)|^{2}=x^{-1}\big{(}\ln(1/x)\big{)}^{-2\tau}=(2\tau-1)^{-1}\frac{{\rm
d}}{{\rm d}x}\big{(}\ln(1/x)\big{)}^{1-2\tau}.$
Hence
$\int_{0}^{\frac{1}{2}}|g_{\tau}(x)|^{2}{\rm d}x=(2\tau-1)^{-1}(\ln
2)^{1-2\tau},$
and $g_{\tau}\in L^{2}[0,1]$. Moreover, if $x<\frac{1}{2}$ then
$x^{\frac{1}{2}-m}g_{\tau}(x)=x^{-1}\big{(}\ln(1/x)\big{)}^{-\tau}=(\tau-1)^{-1}\frac{{\rm
d}}{{\rm d}x}\big{(}\ln(1/x)\big{)}^{1-\tau}.$
Hence
$\int_{x}^{\frac{1}{2}}y^{\frac{1}{2}-m}g_{\tau}(y){\rm d}y=(\tau-1)^{-1}(\ln 2)^{1-\tau}+(1-\tau)^{-1}\big{(}\ln(1/x)\big{)}^{1-\tau}\to\infty\quad\text{as }x\to 0.$
Let ${\cal G}$ be the vector subspace of $L^{2}[0,1]$ generated by the
functions $g_{\tau}$ with $\frac{1}{2}<\tau<1$. Note that each finite family $\\{g_{\tau}\mid\tau\in A\\}$, $A\subset]\frac{1}{2},1[$, is linearly independent. Indeed, if $\sum\limits_{\tau\in A}c_{\tau}g_{\tau}=0$
and $\sigma=\min A$ and $\tau\neq\sigma$ then
$\frac{g_{\tau}(x)}{g_{\sigma}(x)}=\big{(}\ln(1/x)\big{)}^{\sigma-\tau}\to 0$
as $x\to 0$, so we get $c_{\sigma}=0$, etc. Moreover, for each nonzero $g=\sum\limits_{\tau\in A}c_{\tau}g_{\tau}\in{\cal G}$ (with all $c_{\tau}\neq 0$) we have $\lim\limits_{x\to 0}\left|\int_{x}^{1}y^{\frac{1}{2}-m}g(y){\rm d}y\right|=\infty$. Indeed, we may assume $c_{\sigma}=1$ and then,
$\displaystyle\int_{x}^{\frac{1}{2}}y^{\frac{1}{2}-m}g(y){\rm d}y=$
$\displaystyle(1-\sigma)^{-1}\big{(}\ln(1/x)\big{)}^{1-\sigma}$
$\displaystyle+\sum\limits_{\tau\in A}c_{\tau}(\tau-1)^{-1}(\ln
2)^{1-\tau}+\sum_{\tau\neq\sigma}c_{\tau}(1-\tau)^{-1}\big{(}\ln(1/x)\big{)}^{1-\tau},$
and the first term on the right hand side tends to $+\infty$ more rapidly than all the others, hence
$\left|\int_{x}^{\frac{1}{2}}y^{\frac{1}{2}-m}g(y){\rm d}y\right|\geq\frac{1}{2(1-\sigma)}\big{(}\ln(1/x)\big{)}^{1-\sigma}$
if $x$ is small enough.
Finally, let $\varphi\in C_{\mathrm{c}}^{\infty}[0,\infty[$ equal $1$ on
$[0,1]$. Let us define ${\cal V}$ as the space of functions on
${\mathbb{R}}_{+}$ of the form $f=\varphi G_{m}g$ with $g\in{\cal G}$. By
Lemma 7.3, the operator $G_{m}^{1}$ is injective, and for $g\in L^{2}[0,1]$ we have $G_{m}g\big{|}_{[0,1]}=G_{m}^{1}g$. Hence ${\cal V}$ is infinite dimensional and it satisfies (7.6) by Lemma 7.4. $\Box$
## 8 Bilinear forms associated with Bessel operators
As noted in the introduction, in this section we will avoid complex
conjugation. Thus, in place of the usual sesquilinear scalar product
$(f|g):=\int_{0}^{\infty}\overline{f(x)}g(x){\rm d}x,$ (8.7)
we will prefer to use the bilinear product
$\langle f|g\rangle:=\int_{0}^{\infty}f(x)g(x){\rm d}x.$ (8.8)
Clearly, (8.8) is well defined for $f,g\in L^{2}({\mathbb{R}}_{+})$. Instead
of the usual adjoint $T^{*}$ we will use the transpose $T^{\\#}$, defined with
respect to (8.8), see [9].
An important role will be played by the 1st order operators given by the
formal expression
$A_{\rho}:=\partial_{x}-\frac{\rho}{x}.$ (8.9)
A detailed analysis of (8.9) has been done in [6], where the notation was
slightly different: $A_{\rho}:=-{\rm i}(\partial_{x}-\frac{\rho}{x})$. Let us
recall the main points of that analysis.
In the usual way we define two closed realizations of $A_{\rho}$: the minimal
and the maximal one, denoted $A_{\rho}^{\min}$, resp. $A_{\rho}^{\max}$. The
following theorem was (mostly) proven in Section 3 of [6]. For the proof of the infinite codimensionality assertion in statement 6, see the proof of Lemma 3.9 there (where $\gamma$ is an arbitrary number $>\frac{1}{2}$).
###### Theorem 8.1
1. 1.
$A_{\rho}^{\min}\subset A_{\rho}^{\max}$.
2. 2.
$A_{\rho}^{\min\\#}=-A_{-\rho}^{\max}$,
$A_{\rho}^{\max\\#}=-A_{-\rho}^{\min}$.
3. 3.
$A_{\rho}^{\min}$ and $A_{\rho}^{\max}$ are homogeneous of degree $-1$.
4. 4.
$A_{\rho}^{\min}=A_{\rho}^{\max}$ iff $|{\rm Re}(\rho)|\geq\frac{1}{2}$. If this is the case, we will often denote them simply by $A_{\rho}$.
5. 5.
If ${\rm Re}(\rho)\neq\frac{1}{2}$, then ${\cal D}(A_{\rho}^{\min})={\cal
H}_{0}^{1}$.
6. 6.
If ${\rm Re}(\rho)=\frac{1}{2}$, then ${\cal
H}_{0}^{1}+{\mathbb{C}}x^{\rho}\xi$ is a dense subspace of ${\cal
D}(A_{\rho})$ of infinite codimension.
7. 7.
If $|{\rm Re}(\rho)|<\frac{1}{2}$, then ${\cal D}(A_{\rho}^{\max})={\cal
H}_{0}^{1}+{\mathbb{C}}x^{\rho}\xi\neq{\cal H}_{0}^{1}$.
8. 8.
If ${\rm Re}(\rho),{\rm Re}(\rho^{\prime})\in]-\frac{1}{2},\frac{1}{2}]$ and
$\rho\neq\rho^{\prime}$ then ${\cal D}(A_{\rho}^{\max})\neq{\cal
D}(A_{\rho^{\prime}}^{\max})$.
Now let us describe possible factorizations of $H_{m}$ into operators of the
form $A_{\rho}^{\min}$ and $A_{\rho}^{\max}$. On the formal level they
correspond to one of the factorizations (1.10) and (1.11).
###### Theorem 8.2
1. 1.
For ${\rm Re}(m)>-1$ we have
$\langle f|H_{m}g\rangle=\langle
A_{\frac{1}{2}+m}^{\max}f|A_{\frac{1}{2}+m}^{\max}g\rangle,\quad f\in{\cal
D}(A_{\frac{1}{2}+m}^{\max}),\quad g\in{\cal
D}(A_{\frac{1}{2}+m}^{\max})\cap{\cal D}(H_{m}).$ (8.10)
Moreover,
$\displaystyle{\cal D}(H_{m})$ $\displaystyle=\left\\{f\in{\cal
D}(A_{\frac{1}{2}+m}^{\max})\ |\ A_{\frac{1}{2}+m}^{\max}f\in{\cal
D}(A_{-\frac{1}{2}-m}^{\min})\right\\},$ (8.11) $\displaystyle H_{m}f$
$\displaystyle=-A_{-\frac{1}{2}-m}^{\min}A_{\frac{1}{2}+m}^{\max}f,\qquad
f\in{\cal D}(H_{m}).$ (8.12)
2. 2.
For ${\rm Re}(m)>0$ we have
$\langle f|H_{m}g\rangle=\langle
A_{\frac{1}{2}-m}^{\min}f|A_{\frac{1}{2}-m}^{\min}g\rangle,\quad f\in{\cal
D}(A_{\frac{1}{2}-m}^{\min}),\quad g\in{\cal
D}(A_{\frac{1}{2}-m}^{\min})\cap{\cal D}(H_{m}).$ (8.13)
Moreover,
$\displaystyle{\cal D}(H_{m})$ $\displaystyle=\left\\{f\in{\cal
D}(A_{\frac{1}{2}-m}^{\min})\ |\ A_{\frac{1}{2}-m}^{\min}f\in{\cal
D}(A_{-\frac{1}{2}+m}^{\max})\right\\},$ (8.14) $\displaystyle H_{m}f$
$\displaystyle=-A_{-\frac{1}{2}+m}^{\max}A_{\frac{1}{2}-m}^{\min}f,\qquad
f\in{\cal D}(H_{m}).$ (8.15)
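On the formal level (identifying $H_{m}$ with the differential expression $L_{m^{2}}=-\partial_{x}^{2}+\big(m^{2}-\frac{1}{4}\big)x^{-2}$), both factorizations are elementary to check (our computation): for any $\rho$,

```latex
-A_{-\rho}A_{\rho}f
  = -\Big(\partial_{x}+\frac{\rho}{x}\Big)\Big(f^{\prime}-\frac{\rho}{x}f\Big)
  = -\Big(f^{\prime\prime}+\frac{\rho}{x^{2}}f-\frac{\rho}{x}f^{\prime}
          +\frac{\rho}{x}f^{\prime}-\frac{\rho^{2}}{x^{2}}f\Big)
  = -f^{\prime\prime}+\frac{\rho(\rho-1)}{x^{2}}\,f ,
% and for both choices of \rho:
\rho=\tfrac{1}{2}+m:\ \ \rho(\rho-1)=m^{2}-\tfrac{1}{4},
\qquad
\rho=\tfrac{1}{2}-m:\ \ \rho(\rho-1)=m^{2}-\tfrac{1}{4},
% so (8.12) and (8.15) both reproduce L_{m^2}.
```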
The factorizations described in Theorem 8.2 can be used to define bilinear
forms corresponding to $H_{m}$. For details of the proof, we refer again to
[6], especially pages 571–574 and 577.
###### Theorem 8.3
The following bilinear forms are extensions of
$\langle f|H_{m}g\rangle=\langle H_{m}f|g\rangle,\qquad f,g\in{\cal
D}(H_{m}),$ (8.16)
to larger domains:
1. 1.
For $1\leq{\rm Re}(m)$,
$\displaystyle\langle A_{\frac{1}{2}+m}f|A_{\frac{1}{2}+m}g\rangle$
$\displaystyle=\langle A_{\frac{1}{2}-m}f|A_{\frac{1}{2}-m}g\rangle,$
$\displaystyle\quad f,g\in{\cal H}_{0}^{1}.$ (8.17)
2. 2.
For $0<{\rm Re}(m)<1$,
$\displaystyle\langle A_{\frac{1}{2}+m}f|A_{\frac{1}{2}+m}g\rangle$
$\displaystyle=\langle
A_{\frac{1}{2}-m}^{\min}f|A_{\frac{1}{2}-m}^{\min}g\rangle,$
$\displaystyle\quad f,g\in{\cal H}_{0}^{1}.$ (8.18)
3. 3.
For ${\rm Re}(m)=0$,
$\displaystyle\langle A_{\frac{1}{2}+m}f|A_{\frac{1}{2}+m}g\rangle,$
$\displaystyle\quad f,g\in{\cal D}(A_{\frac{1}{2}+m})\supset{\cal
H}_{0}^{1}+{\mathbb{C}}x^{\frac{1}{2}+m}\xi,$ (8.19) $\displaystyle\langle
A_{\frac{1}{2}-m}f|A_{\frac{1}{2}-m}g\rangle,$ $\displaystyle\quad f,g\in{\cal
D}(A_{\frac{1}{2}-m})\supset{\cal
H}_{0}^{1}+{\mathbb{C}}x^{\frac{1}{2}-m}\xi.$ (8.20)
4. 4.
For $-1<{\rm Re}(m)<0$,
$\displaystyle\langle
A_{\frac{1}{2}+m}^{\max}f|A_{\frac{1}{2}+m}^{\max}g\rangle,$
$\displaystyle\quad f,g\in{\cal H}_{0}^{1}+{\mathbb{C}}x^{\frac{1}{2}+m}\xi.$
(8.21)
Note that for ${\rm Re}(m)>0$ both factorizations yield the same quadratic form. This is no longer true for ${\rm Re}(m)=0$, $m\neq 0$, when there are two distinct quadratic forms with distinct domains corresponding to $H_{m}$. Finally, for $-1<{\rm Re}(m)<0$, and also for $m=0$, we have a unique factorization.
Let us finally specialize Theorem 8.3 to real $m$. The following theorem is
essentially identical with Thm 4.22 of [6].
###### Theorem 8.4
For real $m>-1$ the operators $H_{m}$ are positive and self-adjoint. The corresponding sesquilinear form can be factorized as follows:
1. 1.
For $1\leq m$,
$\displaystyle(\sqrt{H_{m}}f|\sqrt{H_{m}}g)=(A_{\frac{1}{2}+m}f|A_{\frac{1}{2}+m}g)$
$\displaystyle=(A_{\frac{1}{2}-m}f|A_{\frac{1}{2}-m}g),$ $\displaystyle\quad
f,g\in{\cal Q}(H_{m})={\cal H}_{0}^{1}.$ (8.22)
$H_{m}$ is essentially self-adjoint on
$C_{\mathrm{c}}^{\infty}({\mathbb{R}}_{+})$.
2. 2.
For $0<m<1$,
$\displaystyle(\sqrt{H_{m}}f|\sqrt{H_{m}}g)=(A_{\frac{1}{2}+m}f|A_{\frac{1}{2}+m}g)$
$\displaystyle=(A_{\frac{1}{2}-m}^{\min}f|A_{\frac{1}{2}-m}^{\min}g),$
$\displaystyle\quad f,g\in{\cal Q}(H_{m})={\cal H}_{0}^{1}.$ (8.23)
$H_{m}$ is the Friedrichs extension of $L_{m^{2}}$ restricted to
$C_{\mathrm{c}}^{\infty}({\mathbb{R}}_{+})$.
3. 3.
For $m=0$,
$\displaystyle(\sqrt{H_{0}}f|\sqrt{H_{0}}g)=(A_{\frac{1}{2}}f|A_{\frac{1}{2}}g),$ $\displaystyle\quad f,g\in{\cal Q}(H_{0})={\cal D}(A_{\frac{1}{2}})\supsetneq{\cal H}_{0}^{1}+{\mathbb{C}}x^{\frac{1}{2}}\xi.$ (8.24)
$H_{0}$ is both the Friedrichs and Krein extension of $L_{0}$ restricted to
$C_{\mathrm{c}}^{\infty}({\mathbb{R}}_{+})$.
4. 4.
For $-1<m<0$,
$\displaystyle(\sqrt{H_{m}}f|\sqrt{H_{m}}g)=(A_{\frac{1}{2}+m}^{\max}f|A_{\frac{1}{2}+m}^{\max}g),$
$\displaystyle\quad f,g\in{\cal Q}(H_{m})={\cal
H}_{0}^{1}+{\mathbb{C}}x^{\frac{1}{2}+m}\xi.$ (8.25)
$H_{m}$ is the Krein extension of $L_{m^{2}}$ restricted to
$C_{\mathrm{c}}^{\infty}({\mathbb{R}}_{+})$.
## Appendix A Holomorphic families of closed operators and the Kato-Rellich
Theorem
In this appendix we describe a few general concepts and facts from operator theory which we use in our paper.
The definition (or actually a number of equivalent definitions) of a
holomorphic family of bounded operators is quite obvious and does not need to
be recalled. In the case of unbounded operators the situation is more subtle.
Suppose that $\Theta$ is an open subset of ${\mathbb{C}}$, ${\cal H}$ is a
Banach space, and $\Theta\ni z\mapsto H(z)$ is a function whose values are
closed operators on ${\cal H}$. We say that this is a holomorphic family of
closed operators if for each $z_{0}\in\Theta$ there exists a neighborhood
$\Theta_{0}$ of $z_{0}$, a Banach space ${\cal K}$ and a holomorphic family of
bounded operators $\Theta_{0}\ni z\mapsto A(z)\in B({\cal K},{\cal H})$ such
that ${\rm Ran}A(z)={\cal D}(H(z))$ and
$\Theta_{0}\ni z\mapsto H(z)A(z)\in B({\cal K},{\cal H})$
is a holomorphic family of bounded operators.
The following theorem is essentially a version of the well-known Kato-Rellich
Theorem generalized from self-adjoint to closed operators:
###### Theorem A.1
Suppose that $A$ is a closed operator on a Hilbert space ${\cal H}$. Let $B$
be an operator ${\cal D}(A)\to{\cal H}$ such that
$\|Bf\|\leq c\|Af\|,\quad f\in{\cal D}(A).$ (A.26)
Then for $|z|<\frac{1}{c}$ the operator $A+zB$ is closed on ${\cal D}(A)$ and
$\left\\{z\in{\mathbb{C}}\ |\ |z|<c^{-1}\right\\}\ni z\mapsto A+zB$ (A.27)
is a holomorphic family of closed operators.
Proof. We easily check that the norms $\sqrt{\|f\|^{2}+\|Af\|^{2}}$ and
$\sqrt{\|f\|^{2}+\|(A+zB)f\|^{2}}$ are equivalent for $|z|<\frac{1}{c}$. Let
${\cal H}_{0}$ be the closure of ${\cal D}(A)$ in ${\cal H}$. The restriction
of $A$ to ${\cal H}_{0}$ is densely defined, so that we can define $A^{*}$.
The operator $(A^{*}A+{\mathbb{1}})^{-\frac{1}{2}}$ is unitary from ${\cal H}_{0}$ to ${\cal D}(A)$. Clearly, it is bounded in the sense of ${\cal H}_{0}$. Now
${\mathbb{C}}\ni z\mapsto(A+zB)(A^{*}A+{\mathchoice{\rm 1\mskip-4.0mul}{\rm
1\mskip-4.0mul}{\rm 1\mskip-4.5mul}{\rm 1\mskip-5.0mul}})^{-\frac{1}{2}}$
(A.28)
is obviously a polynomial of degree 1 with values in bounded operators (hence
obviously a holomorphic family). $\Box$
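A simple illustration of Theorem A.1 (our example, not part of the original argument) is provided by multiplication operators:

```latex
% Let (Af)(x) = a(x)f(x) and (Bf)(x) = b(x)f(x) on L^2(R_+), with measurable
% a, b satisfying |b(x)| <= c|a(x)|, so that (A.26) holds. For |z| < 1/c,
\|(A+zB)f\|\;\geq\;\|Af\|-|z|\,\|Bf\|\;\geq\;(1-c|z|)\,\|Af\|,
\qquad f\in{\cal D}(A),
% while \|(A+zB)f\| <= (1+c|z|)\|Af\|, so the graph norms of A and A+zB
% are equivalent on D(A), and A+zB is closed there, as Theorem A.1 asserts.
```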
Let us also quote the following fact proven by Bruk [5], see also [10]:
###### Proposition A.2
If $z\mapsto A(z)$ is a holomorphic family of closed operators, then so is
$z\mapsto A(\overline{z})^{*}$.
## References
* [1] Alekseeva, V.S. and Ananieva, A.Yu.: On extensions of the Bessel operator on a finite interval and a half-line, Journal of Mathematical Sciences, 187 (2012) 1-8
* [2] Ananieva, A. Yu., Budika, V.: To the spectral theory of the Bessel operator on finite interval and half-line, J. of Math. Sc., Vol. 211, Issue 5, 624-645 (2015)
* [3] Anan’eva, A. Yu. and Budyka, V. S.: On the Spectral Theory of the Bessel Operator on a Finite Interval and the Half-Line, Differential Equations, 52, No. 11, (2016) 1517-1522
* [4] Balinsky, A. A., Evans, W. D., Lewis, R.T.: The Analysis and Geometry of Hardy’s Inequality, Springer 2015
* [5] Bruk, V.M.: A uniqueness theorem for holomorphic families of operators, Matematicheskie Zametki, Vol. 53, No. 3, 155–156 (1991)
* [6] Bruneau, L., Dereziński, J., Georgescu, V.: Homogeneous Schrödinger operators on half-line. Ann. Henri Poincaré 12(3), 547–590 (2011)
* [7] Dereziński, J., Richard S.: On Schrödinger Operators with Inverse Square Potentials on the Half-Line Ann. Henri Poincaré 18 (2017) 869-928
* [8] Dereziński, J.: Homogeneous rank one perturbations and inverse square potentials, "Geometric Methods in Physics XXXVI" Workshop and Summer School, Bialowieza, Poland, 2017 Editors: Kielanowski, P., Odzijewicz, A., Previato, E.; Birkhauser, 2019
* [9] Dereziński, J., Georgescu, V.: One-dimensional Schrödinger operators with complex potentials, Annales Henri Poincare, 21 (2020), 1947–2008
* [10] Dereziński, J., Wrochna, M.: Continuous and holomorphic functions with values in closed operators, Journ. Math. Phys. 55 (2014) 083512
* [11] Gesztesy, F., Pang, M., Stanfill, J.: A refinement of Hardy’s inequality, preprint 2020
* [12] Hardy, G.H., Littlewood, I.E., Polya, G.: Inequalities, Cambridge University Press, Cambridge, reprinted 1988
* [13] Pankrashkin, K., Richard, S.: Spectral and scattering theory for the Aharonov-Bohm operators. Rev. Math. Phys. 23(1), 53–81 (2011)
* [14] Kovařik, H., Truc, F.: Schrödinger operators on a half-line with inverse square potentials. Math. Model. Nat. Phenom. 9(5), 170–176 (2014)
* [15] Gitman, D. M., Tyutin, I. V., and Voronov, B. L. Self-adjoint extensions in quantum mechanics. General theory and applications to Schrödinger and Dirac equations with singular potentials, vol. 62 of Progress in Mathematical Physics. Birkhäuser/Springer, New York, 2012.
* [16] Simon B., Reed M.C.: Methods of Modern Mathematical Physics, Vol. 2. Academic Press, Inc., San Diego, California (1975)
* [17] Stein, E.M.: Singular Integrals and Differentiability Properties of Functions, Princeton Univ. Press, Princeton, 1970
# Finite-size effects in a bosonic Josephson junction
Sandro Wimberger Dipartimento di Scienze Matematiche, Fisiche ed
Informatiche, Università di Parma, Parco Area delle Scienze 7/A, 43124 Parma,
Italy INFN - Sezione di Milano-Bicocca, gruppo collegato di Parma, Parco Area
delle Scienze 7/A, 43124 Parma, Italy Gabriele Manganelli Dipartimento di
Fisica e Astronomia ’Galileo Galilei’, Università di Padova, via Marzolo 8,
35131 Padova, Italy Scuola Galileiana di Studi Superiori, Università di
Padova,
via San Massimo 33, 35129 Padova, Italy Alberto Brollo Dipartimento di
Fisica e Astronomia ’Galileo Galilei’, Università di Padova, via Marzolo 8,
35131 Padova, Italy Luca Salasnich Dipartimento di Fisica e Astronomia
’Galileo Galilei’, Università di Padova, via Marzolo 8, 35131 Padova, Italy
Padua Quantum Technologies Research Center, Università di Padova,
via Gradenigo 6/b, 35131 Padova, Italy INFN - Sezione di Padova, via Marzolo
8, 35131 Padova, Italy CNR-INO, via Nello Carrara 1, 50019 Sesto Fiorentino,
Italy
###### Abstract
We investigate finite-size quantum effects in the dynamics of $N$ bosonic
particles which are tunneling between two sites adopting the two-site Bose-
Hubbard model. By using time-dependent atomic coherent states (ACS) we extend
the standard mean-field equations of this bosonic Josephson junction, which
are based on time-dependent Glauber coherent states. In this way we find $1/N$
corrections to familiar mean-field (MF) results: the frequency of macroscopic
oscillation between the two sites, the critical parameter for the dynamical
macroscopic quantum self trapping (MQST), and the attractive critical
interaction strength for the spontaneous symmetry breaking (SSB) of the ground
state. To validate our analytical results we perform numerical simulations of
the quantum dynamics. In the case of Josephson oscillations around a balanced
configuration we find that, even for a few atoms, the numerical results are in
good agreement with the predictions of the time-dependent ACS variational
approach, provided that the time evolution is not too long. The numerical
results for SSB are also reproduced better by the ACS approach than by the MF
one. The onset of MQST, however, is correctly reproduced by the ACS theory
only in the large-$N$ regime and, for this phenomenon, the $1/N$ correction to
the MF formula is not reliable.
###### pacs:
03.75.Lm; 74.50.+r
## I Introduction
The Josephson junction is a quantum mechanical device made of two
superconductors, or two superfluids, separated by a tunneling barrier
josephson1962 . The Josephson junction can give rise to the direct-current
(DC) Josephson effect, where a supercurrent flows indefinitely long across the
barrier, but also to the alternating-current (AC) Josephson effect, where due to
an energy difference the supercurrent oscillates periodically across the
barrier barone1982 . The superconducting quantum interference devices
(SQUIDs), which are very sensitive magnetometers based on superconducting
Josephson junctions, are widely used in science and engineering vari2017 .
Moreover, Josephson junctions are now used to realize qubits (see, for
instance, quantum1 ; quantum2 ).
The achievement of Bose-Einstein condensation with ultracold and dilute
alkali-metal atoms bec1995 has renewed and increased the interest in
macroscopic quantum phenomena and, in particular, in the Josephson effect
bloch2008 . Indeed, contrary to the case of superconducting Josephson
junctions, with atomic Josephson junctions it is possible to have a large
population imbalance with the appearance of the self-trapping phenomenon
smerzi1997 . A direct experimental observation of tunneling and nonlinear
self-trapping in a single bosonic Josephson junction was made in 2005 with
$^{87}$Rb atoms albiez2005 . More recently, in 2015, the Josephson effect was
detected in fermionic superfluids across the BEC-BCS crossover with $^{6}$Li
atoms valtolina2015 .
The fully quantum behavior of Josephson junctions is usually described by
using the phase model leggett1991 , which is based on the quantum commutation
rule commutation between the number operator ${\hat{N}}$ and the phase angle
operator ${\hat{\phi}}$. Within this model it has been found that quantum
fluctuations renormalize the mean-field Josephson oscillation smerzi2000 ;
anglin2001 ; ferrini2008 . However, the phase angle operator ${\hat{\phi}}$ is
not Hermitian, the exponential phase operator $e^{i{\hat{\phi}}}$ is not
unitary, and their naive application can give rise to wrong results. Despite
such problems, the phase model is considered a good starting point in many
theoretical studies of Josephson junctions, because the phase-number
commutation rule is approximately correct for systems with a large number of
condensed electronic Cooper-pairs or bosonic atoms anglin2001 .
In this paper we study finite-size quantum effects in a Josephson junction
avoiding the use of the phase operator. The standard mean-field theory is
based on the Glauber coherent state $|CS\rangle$, which, however, is not an
eigenstate of the total number operator glauber1963 . Here we adopt the atomic
coherent state $|ACS\rangle$, which is instead an eigenstate of the total
number operator and reduces to the Glauber coherent state only in the limit of
a large number $N$ of bosons arecchi1972 ; gilmore1990 ; walls1997 ; penna2005 ;
penna2006 ; penna2008 . We prove that the frequency of macroscopic oscillation
of bosons between the two sites is given by $\sqrt{J^{2}+NUJ(1-1/N)}/\hbar$,
where $J$ is the tunneling energy and $U$ is the on-site interaction energy.
Remarkably, for a very large number $N$ of bosons this formula becomes the
familiar mean-field one $\sqrt{J^{2}+NUJ}/\hbar$. We find similar corrections
for the critical strength of the dynamical self-trapping and for the critical
strength of the population-imbalance symmetry breaking of the ground state.
Once again in these cases the standard mean-field results are retrieved in the
limit of a large number $N$ of bosons. In the last part of the paper we
compare the ACS theory with numerical simulations. In the case of Josephson
oscillations we find a very good agreement between ACS theory and numerical
results also for a small number $N$ of bosons. For the ACS critical
interaction strength of the semiclassical spontaneous symmetry breaking of the
ground state we obtain a reasonable agreement with the numerical results.
Instead, for the phenomenon of self-trapping, our numerical quantum
simulations suggest that the $1/N$ corrections predicted by the ACS theory are
not reliable. We attribute this discrepancy to the increased importance of
quantum fluctuations and stronger many-body correlations in the so-called Fock
regime, see e.g. leggett2001 .
## II Two-site model
The macroscopic quantum tunneling of bosonic particles or Cooper pairs in a
Josephson junction made of two superfluids or two superconductors separated by
a potential barrier can be described within a second-quantization formalism,
see for instance lewenstein . The simplest quantum Hamiltonian of a system
made of bosonic particles which are tunneling between two sites ($j=1,2$) is
given by
${\hat{H}}=-{J}\left({\hat{a}}_{1}^{+}{\hat{a}}_{2}+{\hat{a}}_{2}^{+}{\hat{a}}_{1}\right)+{U}\sum_{j=1,2}{\hat{N}}_{j}({\hat{N}}_{j}-1)\;,$
(1)
where ${\hat{a}}_{j}$ and ${\hat{a}}_{j}^{+}$ are the dimensionless ladder
operators which, respectively, destroy and create a boson in site $j$, and
${\hat{N}}_{j}={\hat{a}}_{j}^{+}{\hat{a}}_{j}$ is the number operator of
bosons in site $j$. $U$ is the on-site interaction strength of particles
and $J>0$ is the tunneling energy, both measured in units of the reduced
Planck constant $\hbar$. Eq. (1) is the so-called two-site Bose-Hubbard
Hamiltonian. We also introduce the total number operator
${\hat{N}}={\hat{N}}_{1}+{\hat{N}}_{2}\;.$ (2)
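Because the Hamiltonian (1) conserves the total number operator (2), it can be represented exactly as an $(N+1)\times(N+1)$ matrix in the Fock basis $|j\rangle_{1}\otimes|N-j\rangle_{2}$. A minimal Python sketch (function name ours):

```python
import numpy as np

def bose_hubbard_two_site(N, J, U):
    # Matrix of Eq. (1) in the Fock basis |j>_1 (x) |N-j>_2, j = 0..N;
    # the total number N is conserved, so this block is (N+1)x(N+1).
    H = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        # on-site interaction U [j(j-1) + (N-j)(N-j-1)] (diagonal)
        H[j, j] = U * (j * (j - 1) + (N - j) * (N - j - 1))
        # hopping -J (a1^+ a2 + a2^+ a1) couples j and j+1,
        # with matrix element -J sqrt((j+1)(N-j))
        if j < N:
            H[j + 1, j] = H[j, j + 1] = -J * np.sqrt((j + 1) * (N - j))
    return H
```

For a single particle ($N=1$) the interaction drops out and the matrix reduces to $-J\sigma_{x}$; for $N=2$, $U=0$ the spectrum is $\{-2J,0,2J\}$, i.e. sums of the single-particle energies $\pm J$.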
The time evolution of a generic quantum state $|\psi(t)\rangle$ of our system
described by the Hamiltonian (1) is then given by the Schrödinger equation
$i\hbar{\partial\over\partial t}|\psi(t)\rangle={\hat{H}}|\psi(t)\rangle\;.$
(3)
Quite remarkably, this time-evolution equation can be derived by extremizing
the following action
$S=\int dt\,\langle\psi(t)|\left(i\hbar{\partial\over\partial
t}-{\hat{H}}\right)|\psi(t)\rangle\;,$ (4)
characterized by the Lagrangian
$L=i\hbar\langle\psi(t)|{\partial\over\partial
t}|\psi(t)\rangle-\langle\psi(t)|{\hat{H}}|\psi(t)\rangle\;.$ (5)
Clearly, Eqs. (3–5) hold for any quantum system.
## III Standard mean-field dynamics
The familiar mean-field dynamics of the bosonic Josephson junction can be
obtained with a specific choice for the quantum state $|\psi(t)\rangle$,
namely penna1998
$|\psi(t)\rangle=|CS(t)\rangle\;,$ (6)
where
$|CS(t)\rangle=|\alpha_{1}(t)\rangle\otimes|\alpha_{2}(t)\rangle\;$ (7)
is the tensor product of Glauber coherent states $|\alpha_{j}(t)\rangle$,
defined as
$|\alpha_{j}(t)\rangle=e^{-{1\over 2}|\alpha_{j}(t)|^{2}}\
e^{\alpha_{j}(t){\hat{a}}_{j}^{+}}|0\rangle$ (8)
with $|0\rangle$ the vacuum state, and such that
${\hat{a}}_{j}|\alpha_{j}(t)\rangle=\alpha_{j}(t)|\alpha_{j}(t)\rangle\;.$ (9)
Thus, $|\alpha_{j}(t)\rangle$ is the eigenstate of the annihilation operator
${\hat{a}}_{j}$ with eigenvalue $\alpha_{j}(t)$ glauber1963 . The complex
eigenvalue $\alpha_{j}(t)$ can be written as
$\alpha_{j}(t)=\sqrt{N_{j}(t)}\,e^{i\phi_{j}(t)}\;,$ (10)
with $N_{j}(t)=\langle\alpha_{j}(t)|{\hat{N}}_{j}|\alpha_{j}(t)\rangle$ the
average number of bosons in the site $j$ at time $t$ and $\phi_{j}(t)$ the
corresponding phase angle at the same time $t$.
Adopting the coherent state (7) with Eq. (8) the Lagrangian (5) becomes
$\displaystyle L_{CS}$ $\displaystyle=$ $\displaystyle i\hbar\langle
CS(t)|{\partial\over\partial t}|CS(t)\rangle-\langle
CS(t)|{\hat{H}}|CS(t)\rangle$ (11) $\displaystyle=$ $\displaystyle
N\hbar\,z{\dot{\phi}}-{UN^{2}\over 2}z^{2}+JN\sqrt{1-z^{2}}\,\cos{(\phi)}\;,$
where the dot means the derivative with respect to time $t$,
$N=N_{1}(t)+N_{2}(t)$ (12)
is the average total number of bosons (that is a constant of motion),
$\phi(t)=\phi_{2}(t)-\phi_{1}(t)$ (13)
is the relative phase, and
$z(t)={N_{1}(t)-N_{2}(t)\over N}$ (14)
is the population imbalance. The last term in the Lagrangian (11) is the one
which makes possible the periodic oscillation of a macroscopic number of
particles between the two sites.
In the Lagrangian $L_{CS}(\phi,z)$ of Eq. (11) the dynamical variables
$\phi(t)$ and $z(t)$ are the generalized Lagrangian coordinates (see, for
instance, penna2000 ). The extremization of the action (4) with the Lagrangian
(11) gives rise to the Euler-Lagrange equations
$\displaystyle{\partial{L}_{CS}\over\partial\phi}-{d\over
dt}{\partial{L}_{CS}\over\partial{\dot{\phi}}}=0\;,$ (15)
$\displaystyle{\partial{L}_{CS}\over\partial z}-{d\over
dt}{\partial{L}_{CS}\over\partial{\dot{z}}}=0\;,$ (16)
which, explicitly, become
$\displaystyle{\dot{\phi}}$ $\displaystyle=$ $\displaystyle
J{z\over\sqrt{1-z^{2}}}\cos{(\phi)}+UNz\;,$ (17) $\displaystyle{\dot{z}}$
$\displaystyle=$ $\displaystyle-J\sqrt{1-z^{2}}\sin{(\phi)}\;.$ (18)
These equations describe the mean-field dynamics of the macroscopic quantum
tunneling in a Josephson junction, where $\phi(t)$ is the relative phase angle
of the complex field of the superfluid (or superconductor) between the two
junctions at time $t$ and $z(t)$ is the corresponding relative population
imbalance of the Bose condensed particles (or Cooper pairs).
Assuming that both $\phi(t)$ and $z(t)$ are small, i.e. $|\phi(t)|\ll 1$ and
$|z(t)|\ll 1$, the Lagrangian (11) can be approximated as
${L}_{CS}^{(2)}=N\hbar\,z{\dot{\phi}}-{JN\over 2}\phi^{2}-{(JN+UN^{2})\over
2}z^{2}\;,$ (19)
removing a constant term. The Euler-Lagrange equations of this quadratic
Lagrangian are the linearized Josephson-junction equations
$\displaystyle\hbar\,\dot{\phi}$ $\displaystyle=$ $\displaystyle(J+UN)z\;,$
(20) $\displaystyle\hbar\,\dot{z}$ $\displaystyle=$ $\displaystyle-J\phi\;,$
(21)
which can be rewritten as a single equation for the harmonic oscillation of
$\phi(t)$ and the harmonic oscillation of $z(t)$, given by
$\displaystyle\ddot{\phi}+\Omega^{2}\ \phi=0\;,$ (22)
$\displaystyle\ddot{z}+\Omega^{2}\ z=0\;,$ (23)
both with frequency
$\Omega={1\over\hbar}\sqrt{J^{2}+NUJ}\;,$ (24)
that is the familiar mean-field frequency of macroscopic quantum oscillation
in terms of tunneling energy $J>0$, interaction strength $U$, and number $N$
of particles smerzi1997 .
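As a consistency check of Eq. (24), the mean-field equations (17) and (18) can be integrated numerically (a simple fourth-order Runge-Kutta sketch with $\hbar=1$; helper names ours) and the small-oscillation frequency extracted from the zero crossings of $z(t)$:

```python
import numpy as np

def mf_rhs(phi, z, J, UN):
    # Right-hand sides of the mean-field Eqs. (17)-(18), with hbar = 1
    dphi = J * z / np.sqrt(1.0 - z * z) * np.cos(phi) + UN * z
    dz = -J * np.sqrt(1.0 - z * z) * np.sin(phi)
    return dphi, dz

def simulate(z0, phi0, J, UN, t_max, dt=1e-3):
    # Fourth-order Runge-Kutta integration; returns times and z(t)
    steps = int(t_max / dt)
    ts = np.arange(steps + 1) * dt
    zs = np.empty(steps + 1)
    phi, z = phi0, z0
    zs[0] = z0
    for i in range(1, steps + 1):
        k1 = mf_rhs(phi, z, J, UN)
        k2 = mf_rhs(phi + 0.5 * dt * k1[0], z + 0.5 * dt * k1[1], J, UN)
        k3 = mf_rhs(phi + 0.5 * dt * k2[0], z + 0.5 * dt * k2[1], J, UN)
        k4 = mf_rhs(phi + dt * k3[0], z + dt * k3[1], J, UN)
        phi += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        z += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        zs[i] = z
    return ts, zs
```

For $z(0)=0.01$, $\phi(0)=0$, and $NU/J=1$, successive zero crossings of $z(t)$ are separated by half a period, and the extracted frequency agrees with $\Omega=\sqrt{2}\,J/\hbar$ from Eq. (24) at the percent level.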
It is straightforward to find that the conserved energy of the mean-field
system described by Eqs. (17) and (18) is given by
${E}_{CS}={UN^{2}\over 2}z^{2}-JN\sqrt{1-z^{2}}\,\cos{(\phi)}\;.$ (25)
If the condition
${E}_{CS}(z(0),\phi(0))>{E}_{CS}(0,\pi)$ (26)
is satisfied then $\langle z\rangle\neq 0$ since $z(t)$ cannot become zero
during an oscillation cycle. This situation is known as macroscopic quantum
self trapping (MQST) smerzi1997 ; rag1999 ; ashhab2002 . Introducing the
dimensionless strength
$\Lambda={NU\over J}\;,$ (27)
the expression (25) and the trapping condition (26) give
$\Lambda_{MQST}={{1+\sqrt{1-z^{2}(0)}\cos(\phi(0))}\over z(0)^{2}/2}$ (28)
for the critical value of $\Lambda$ above which the self trapping occurs.
Indeed,
$\Lambda>\Lambda_{MQST}$ (29)
is the familiar mean-field condition to achieve MQST in BECs smerzi1997 . We
stress that the MQST condition crucially depends on the specific initial
conditions $\phi(0)$ and $z(0)$.
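Equation (28) is immediate to evaluate for given initial conditions; a one-function sketch (name ours):

```python
import numpy as np

def lambda_mqst(z0, phi0):
    # Critical interaction strength of Eq. (28): self-trapping for Lambda above it
    return (1.0 + np.sqrt(1.0 - z0**2) * np.cos(phi0)) / (z0**2 / 2.0)
```

For instance, $z(0)=0.6$ and $\phi(0)=0$ give $\Lambda_{MQST}=(1+0.8)/0.18=10$, so self-trapping requires $NU/J>10$.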
Let us study the stationary solutions of (11). From the system of Eqs. (17)
and (18) we obtain the symmetric solutions
$\displaystyle(\tilde{z}_{-},\tilde{\phi})=(0,2n\pi)$ (30)
$\displaystyle(\tilde{z}_{+},\tilde{\phi})=(0,(2n+1)\pi)$ (31)
with $n\in\mathbb{Z}$, respectively with energies $\tilde{E}_{-}=-JN$ and
$\tilde{E}_{+}=JN$. Due to the nonlinear interaction there are degenerate
ground-state solutions that break the $z$-symmetry
$\displaystyle z_{\pm}$ $\displaystyle=$
$\displaystyle\pm\sqrt{1-{{1}\over{\Lambda^{2}}}}$ (32)
$\displaystyle\phi_{n}$ $\displaystyle=$ $\displaystyle 2\pi n$ (33)
where $n\in\mathbb{Z}$. These solutions give a minimum of the energy with
$\phi=0$ only for $\Lambda=UN/J<-1$. Thus, the spontaneous symmetry breaking
(SSB) of the balanced ground state ($z=0$, $\phi=0$) appears at the critical
dimensionless strength
$\Lambda_{SSB}=-1\;.$ (34)
In other words, for $\Lambda=UN/J<\Lambda_{SSB}=-1$ the population imbalance
$z$ of the ground state of our bosonic system becomes different from zero.
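The symmetry-broken minima (32) can be verified by a direct scan of the energy (25) at $\phi=0$; a minimal sketch (names ours):

```python
import numpy as np

def e_cs(z, Lam):
    # Scaled energy E_CS/(J N) of Eq. (25) at phi = 0, with Lambda = N U / J
    return 0.5 * Lam * z**2 - np.sqrt(1.0 - z**2)

z = np.linspace(-0.999, 0.999, 200001)
z_attr = z[np.argmin(e_cs(z, -2.0))]   # Lambda < -1: symmetry-broken minimum
z_weak = z[np.argmin(e_cs(z, -0.5))]   # |Lambda| < 1: balanced minimum z = 0
```

For $\Lambda=-2$ the minimizer sits at $|z|=\sqrt{1-1/\Lambda^{2}}=\sqrt{3}/2\approx 0.866$, in agreement with Eq. (32), while for $|\Lambda|<1$ it stays at $z=0$.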
## IV Finite-size effects
Different results are obtained by choosing another quantum state
$|\psi(t)\rangle$ in Eqs. (4) and (5). In this section, our choice for the
quantum state $|\psi(t)\rangle$ is
$|\psi(t)\rangle=|ACS(t)\rangle\;,$ (35)
where
$|ACS(t)\rangle={\left(\sqrt{1+z(t)\over 2}{\hat{a}}_{1}^{+}+\sqrt{1-z(t)\over
2}\,e^{-i\phi(t)}{\hat{a}}_{2}^{+}\right)^{N}\over\sqrt{N!}}|0\rangle\;$ (36)
is the atomic coherent state arecchi1972 , also called SU(2) coherent state or
Bloch state or angular momentum coherent state gilmore1990 , with $|0\rangle$
the vacuum state. This atomic coherent state depends on two dynamical
variables $\phi(t)$ and $z(t)$ which, as we shall show, can be again
interpreted as relative phase and population imbalance of the Josephson
junction walls1997 ; penna2005 ; penna2006 ; penna2008 ; trimborn2008 ;
trimborn2009 .
Contrary to the Glauber coherent state $|CS(t)\rangle$ of Eq. (7), the atomic
coherent state of Eq. (36) is an eigenstate of the total number operator (2),
i.e.
${\hat{N}}|ACS(t)\rangle=N|ACS(t)\rangle\;.$ (37)
Moreover, the averages calculated with the atomic coherent state
$|ACS(t)\rangle$ become equal to the ones performed with the Glauber coherent
state $|CS(t)\rangle$ only in the regime $N\gg 1$ arecchi1972 ; gilmore1990 ;
walls1997 ; penna2005 ; penna2006 ; penna2008 ; trimborn2008 ; trimborn2009 .
Adopting the atomic coherent state (36) the Lagrangian (5) becomes
$\displaystyle L_{ACS}$ $\displaystyle=$ $\displaystyle i\hbar\langle
ACS(t)|{\partial\over\partial t}|ACS(t)\rangle-\langle
ACS(t)|{\hat{H}}|ACS(t)\rangle$ (38) $\displaystyle=$ $\displaystyle
N\hbar\,z{\dot{\phi}}-{UN^{2}\over 2}\left(1-{1\over
N}\right)z^{2}+JN\sqrt{1-z^{2}}\,\cos{(\phi)}\;.$
Comparing this expression with the Lagrangian of the Glauber coherent state,
Eq. (11), one immediately observes that the two Lagrangians become equal under
the condition $N\gg 1$. Moreover, the former is obtained from the latter with
the formal substitution $U\to U(1-1/N)$. In other words, the term $(1-1/N)$
takes into account few-body effects, which become negligible only for $N\gg
1$.
It is immediate to write down the corresponding Josephson equations
$\displaystyle{\dot{\phi}}$ $\displaystyle=$ $\displaystyle
J{z\over\sqrt{1-z^{2}}}\cos{(\phi)}+UN\left(1-{1\over N}\right)z\;,$ (39)
$\displaystyle{\dot{z}}$ $\displaystyle=$
$\displaystyle-J\sqrt{1-z^{2}}\sin{(\phi)}\;,$ (40)
which are derived as the Euler-Lagrange equations of the Lagrangian (38).
Assuming that both $\phi(t)$ and $z(t)$ are small, i.e. $|\phi(t)|\ll 1$ and
$|z(t)|\ll 1$, the Lagrangian (38) can be approximated as
${L}_{ACS}^{(2)}=N\hbar\,z{\dot{\phi}}-{JN\over
2}\phi^{2}-{(JN+U\left(1-{1\over N}\right)N^{2})\over 2}z^{2}\;,$ (41)
removing a constant term. The Euler-Lagrange equations of this quadratic
Lagrangian are the linearized Josephson-junction equations
$\displaystyle\hbar\,\dot{\phi}$ $\displaystyle=$
$\displaystyle\left(J+UN\left(1-{1\over N}\right)\right)z\;,$ (42)
$\displaystyle\hbar\,\dot{z}$ $\displaystyle=$ $\displaystyle-J\phi\;,$ (43)
which can be rewritten as a single equation for the harmonic oscillation of
$\phi(t)$ and the harmonic oscillation of $z(t)$, given by
$\displaystyle\ddot{\phi}+\Omega_{A}^{2}\ \phi=0\;,$ (44)
$\displaystyle\ddot{z}+\Omega_{A}^{2}\ z=0\;,$ (45)
both with frequency
$\Omega_{A}={1\over\hbar}\sqrt{J^{2}+NUJ\left(1-{1\over N}\right)}\;,$ (46)
that is the atomic-coherent-state frequency of macroscopic quantum oscillation
in terms of tunneling energy $J$, interaction strength $U$, and number $N$ of
particles. Quite remarkably, this frequency is different from, and smaller
than, the standard mean-field one, given by Eq. (24). However, the
familiar mean-field result is recovered in the limit of a large number $N$ of
bosonic particles. In addition, for $N=1$, Eq. (46) gives $\Omega_{A}=J/\hbar$
that is the exact Rabi frequency of the one-particle tunneling dynamics in a
double-well potential.
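The two frequencies, Eqs. (24) and (46), are easily compared numerically (function names ours):

```python
import numpy as np

def omega_mf(N, U, J, hbar=1.0):
    # Mean-field oscillation frequency, Eq. (24)
    return np.sqrt(J * J + N * U * J) / hbar

def omega_acs(N, U, J, hbar=1.0):
    # Atomic-coherent-state frequency, Eq. (46)
    return np.sqrt(J * J + N * U * J * (1.0 - 1.0 / N)) / hbar

# At fixed Lambda = N U / J = 1 (i.e. U = J/N), omega_mf is constant,
# while omega_acs interpolates between J/hbar (N = 1) and sqrt(2) J/hbar
freqs = [(N, omega_mf(N, 1.0 / N, 1.0), omega_acs(N, 1.0 / N, 1.0))
         for N in (1, 2, 5, 20, 100)]
```

At fixed $\Lambda=NU/J=1$, as in Fig. 1, $\Omega$ is constant while $\Omega_{A}=J\sqrt{2-1/N}/\hbar$ grows monotonically from $J/\hbar$ at $N=1$ toward $\sqrt{2}J/\hbar$.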
In the same fashion as in the previous section, the conserved energy
associated to Eqs. (39) and (40) reads
${E}_{ACS}={UN^{2}\over 2}\left(1-{1\over
N}\right)z^{2}-JN\sqrt{1-z^{2}}\,\cos{(\phi)}$ (47)
and using the condition (26) we get the inequality
$\Lambda>\Lambda_{MQST,A}={{1+\sqrt{1-z^{2}(0)}\cos(\phi(0))}\over
z(0)^{2}/2}{1\over\left(1-{1\over{N}}\right)}\;,$ (48)
where $\Lambda_{MQST,A}$ is the atomic-coherent-state MQST critical parameter
in terms of tunneling energy $J$, interaction strength $U$, and number $N$ of
particles. Remarkably, this value is larger than the standard mean-field one,
given by Eq. (28), which is recovered in the semiclassical limit of a large
number $N$ of bosonic particles.
In addition to the usual ground-state stationary solutions (30) and (31) we
obtain from the system of Eq. (39) and (40) a correction to the symmetry-
breaking ones
$\displaystyle z_{ACS\pm}$ $\displaystyle=$
$\displaystyle\pm\sqrt{1-{{1}\over{\Lambda^{2}}}\left(1-{1\over{N}}\right)^{-2}}$
(49) $\displaystyle\phi_{n}$ $\displaystyle=$ $\displaystyle 2\pi n$ (50)
with $n\in\mathbb{Z}$ and $\Lambda=NU/J$. It follows that, within the approach
based on the atomic coherent state, the critical strength for the SSB of the
balanced ground state ($z=0$, $\phi=0$) reads
$\Lambda_{SSB,A}=-{1\over\left(1-{1\over N}\right)}\;.$ (51)
This means that for $\Lambda=UN/J<\Lambda_{SSB,A}=-1/(1-1/N)$ the ground state
is not balanced. Clearly, for $N\gg 1$ from Eq. (51) one gets Eq. (34), while
for $N=1$ one finds $\Lambda_{SSB,A}=-\infty$: within the ACS approach with
only one boson the spontaneous symmetry breaking cannot be obtained.
## V Numerical results
To test our analytical results we compare them with numerical simulations. The
initial many-body state $|\Psi(0)\rangle$ for the time-dependent numerical
simulations is the coherent state $|ACS(0)\rangle$ from Eq. (36), with a given
choice of $z(0)$ and $\phi(0)$. The time evolved many-body state is then
formally obtained as
$|\Psi(t)\rangle=e^{-i{\hat{H}}t/\hbar}\,|\Psi(0)\rangle\;,$ (52)
with ${\hat{H}}$ given by Eq. (1).
Knowing $|\Psi(t)\rangle$ the population imbalance at time $t$ is given by
$z(t)=\langle\Psi(t)|{{\hat{N}}_{1}-{\hat{N}}_{2}\over N}|\Psi(t)\rangle\;.$
(53)
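The exact evolution of Eqs. (52) and (53) can be sketched by diagonalizing the $(N+1)\times(N+1)$ matrix of Eq. (1) and expanding the atomic coherent state (36) in the Fock basis, where its coefficients are $c_{j}=\sqrt{\binom{N}{j}}\,c_{1}^{j}c_{2}^{N-j}$ with $c_{1}=\sqrt{(1+z)/2}$ and $c_{2}=\sqrt{(1-z)/2}\,e^{-i\phi}$ (helper names ours):

```python
import numpy as np
from math import comb

def two_site_h(N, J, U):
    # (N+1)x(N+1) matrix of Eq. (1) in the Fock basis |j>_1 (x) |N-j>_2
    H = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        H[j, j] = U * (j * (j - 1) + (N - j) * (N - j - 1))
        if j < N:
            H[j + 1, j] = H[j, j + 1] = -J * np.sqrt((j + 1) * (N - j))
    return H

def acs_state(N, z0, phi0):
    # Fock-basis coefficients of |ACS>, Eq. (36)
    c1 = np.sqrt((1.0 + z0) / 2.0)
    c2 = np.sqrt((1.0 - z0) / 2.0) * np.exp(-1j * phi0)
    return np.array([np.sqrt(comb(N, j)) * c1**j * c2**(N - j)
                     for j in range(N + 1)])

def imbalance(N, J, U, z0, phi0, times, hbar=1.0):
    # z(t) of Eq. (53) under the exact evolution of Eq. (52),
    # via the eigendecomposition of the Hamiltonian
    E, V = np.linalg.eigh(two_site_h(N, J, U))
    c0 = V.conj().T @ acs_state(N, z0, phi0)
    zop = (2.0 * np.arange(N + 1) - N) / N   # (N1 - N2)/N, diagonal here
    zs = []
    for t in times:
        psi = V @ (np.exp(-1j * E * t / hbar) * c0)
        zs.append(float(np.real(np.vdot(psi, zop * psi))))
    return np.array(zs)
```

By construction the initial state is normalized and satisfies $z(0)=z_{0}$, since $\langle{\hat N}_{1}\rangle=N|c_{1}|^{2}=N(1+z_{0})/2$.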
Figure 1: (Color online). Josephson frequency $\Omega$ as a function of the
number $N$ of bosons, with $UN/J=1$, $J>0$, and $\hbar=1$. Filled circles:
numerical results. Dashed line: mean-field result, Eq. (24), based on Glauber
coherent states. Solid curve: results of Eq. (46), based on atomic coherent
states (ACS). Initial conditions: $z(0)=0.1$ and $\phi(0)=0$.
In Fig. 1 we plot the Josephson frequency $\Omega$ as a function of the number
$N$ of bosons, but with a fixed value of $UN/J=1$. As shown in the figure, the
standard mean-field result (dashed line), Eq. (24), predicts a horizontal
line. The numerical results (filled circles), which are very far from the
standard mean-field predictions, are instead reproduced extremely well by Eq.
(46), based on atomic coherent states. Indeed, as previously stressed, for
$N=1$ Eq. (46) gives the correct Rabi frequency. However, this exact result
is, in some sense, accidental since, as shown by the figure, for intermediate
values of $N$ ($4<N<10$) the agreement gets slightly worse.
Figure 2: (Color online). Time evolution of the numerical population imbalance
of Eq. (53) for different numbers of bosons, $N=2$ and $4$ (a) and $N=20$ (b),
and different interaction strengths $U/J$ (see the legends), with $J>0$. The
initial quantum state $|ACS(0)\rangle$ is characterized by $\phi(0)=0$ and
$z(0)=0.5$ (a, b) and $z(0)=0.6$ (only in b). Both panels highlight the
difficulty in determining a critical value for self-trapping, due to the
smooth transition to the oscillating regime (b) and the possibly long
oscillation periods (a, b). Strict self-trapping seems to be absent for small
$N<5$ (a).
We also investigate numerically the onset of macroscopic quantum self trapping
(MQST). In Fig. 2 we report the numerical time evolution of the population
imbalance $z_{ex}(t)$ for different values of the number $N$ of bosons and of
the interaction strength $NU/J$. In the figure the numerical results are
obtained with an initial ACS state $|ACS(0)\rangle$ where $z(0)=0.5$ and
$\phi(0)=0$. In general, during the time evolution the many-body quantum state
$|\Psi(t)\rangle$ does not remain close to an atomic coherent state. This is
especially true in the so-called Fock regime, where $U/J\gg N$ leggett2001 .
Unfortunately, this is the regime where the MQST can be achieved. Fig. 2
illustrates the problems in determining a critical value for MQST: for
$N\lesssim 10$ interwell oscillations possibly occur with a very long period
even for very large values of $\Lambda$, see Fig. 2(a). For larger
$N=10,\dots,100$, MQST is found, yet it is lost smoothly as the interaction
parameter is diminished, making it hard to define a critical value. Fig. 2(b)
illustrates this problem for $N=20$ and various values of $U/J$ for two
slightly different initial conditions. We opted for the definition that no
crossing of zero imbalance should occur. This definition typically
underestimates the values obtained from, e.g., mean-field theory, as seen in
Fig. 3.
Figure 3: (Color online). Critical interaction strength $\Lambda_{MQST}$ for
the macroscopic quantum self trapping (MQST) as a function of the number $N$
of bosons. Notice that we take $J>0$. Filled circles: numerical results.
Dashed line: mean-field result, Eq. (28), based on Glauber coherent states.
Solid curve: results of Eq. (48), based on atomic coherent states (ACS).
Initial conditions: $z(0)=0.5$ and $\phi(0)=0$.
In Fig. 3 we show the critical interaction strength $\Lambda_{MQST}$ for the
macroscopic quantum self trapping (MQST) as a function of the number $N$ of
bosons. In this case neither the mean-field results (dashed line) nor the ACS
predictions (solid curve) are able to describe accurately the numerical
findings (filled circles) for a small number $N$ of atoms.
Let us conclude this Section by investigating the spontaneous symmetry
breaking (SSB) of the ground state of the two-site Bose-Hubbard model, which
appears for $U<0$ above a critical threshold sala-mazza1 . The exact number-
conserving ground state of our system can be written as
$|GS\rangle=\sum_{j=0}^{N}c_{j}\,|j\rangle_{1}\otimes|N-j\rangle_{2}\;,$ (54)
where $|c_{j}|^{2}$ is the probability of finding the ground state with $j$
bosons in site $1$ and $N-j$ bosons in site $2$. Here $|j\rangle_{1}$ is the
Fock state with $j$ bosons in site $1$ and $|N-j\rangle_{2}$ is the Fock state
with $N-j$ bosons in site $2$. The probability amplitudes $c_{j}$ are
determined numerically by diagonalizing the $(N+1)\times(N+1)$ Hamiltonian
matrix obtained from (1). Clearly, these probability amplitudes $c_{j}$
strongly depend on the values of the hopping parameter $J$, the on-site
interaction strength $U$, and the total number $N$ of bosons. For $U>0$ the
distribution ${\cal P}(|c_{j}|^{2})$ of the probabilities $|c_{j}|^{2}$ is
unimodal with its maximum at $|c_{N/2}|^{2}$ (if $N$ is even) sala-mazza1 .
However, for $U<0$ the distribution ${\cal P}(|c_{j}|^{2})$ becomes bimodal
with a local minimum at $|c_{N/2}|^{2}$ (if $N$ is even) when $|U|$ exceeds a
critical threshold sala-mazza1 .
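The distribution of the $|c_{j}|^{2}$ is obtained directly by exact diagonalization; a minimal sketch (function name ours):

```python
import numpy as np

def ground_state_probs(N, J, U):
    # |c_j|^2 of the exact ground state, Eq. (54), from the (N+1)x(N+1)
    # Bose-Hubbard matrix of Eq. (1); eigh sorts eigenvalues ascending,
    # so column 0 of V is the ground state
    H = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        H[j, j] = U * (j * (j - 1) + (N - j) * (N - j - 1))
        if j < N:
            H[j + 1, j] = H[j, j + 1] = -J * np.sqrt((j + 1) * (N - j))
    E, V = np.linalg.eigh(H)
    return np.abs(V[:, 0])**2

p_rep = ground_state_probs(20, 1.0, 0.5)    # U > 0: unimodal, peak at j = N/2
p_att = ground_state_probs(20, 1.0, -0.5)   # U < 0 beyond threshold: bimodal
```

For $N=20$ and $J>0$, a repulsive $U>0$ yields a unimodal distribution peaked at $j=N/2$, while an attractive $U<0$ beyond the critical threshold yields a bimodal one with a local minimum at $j=N/2$.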
Figure 4: (Color online). Dimensionless interaction strength $(|U|/J)_{SSB}$
for the onset of spontaneous symmetry breaking (SSB) as a function of the
number $N$ of bosons. Notice that we use $J>0$. Filled circles: numerical
results obtained from the onset of a bimodal structure in the distribution
${\cal P}(|c_{j}|^{2})$. Dashed line: mean-field result $(|U|/J)_{SSB}=1/N$
based on Glauber coherent states, see Eq. (34). Solid curve:
$(|U|/J)_{SSB}=1/(N-1)$, based on atomic coherent states (ACS), see Eq. (51).
The semiclassical SSB, described by Eqs. (34) and (51), corresponds in a full
quantum mechanical treatment to the onset of the bimodal structure in the
distribution ${\cal P}(|c_{j}|^{2})$ sala-mazza1 . In Fig. 4 we report the
dimensionless interaction strength $(|U|/J)_{SSB}$ for the spontaneous
symmetry breaking (SSB) as a function of the number $N$ of bosons. In the
figure we compare the numerical results (filled circles) sala-mazza1 with the
semiclassical predictions based on Glauber coherent states (dashed curve) and
atomic coherent states (solid curve). The figure shows that the numerical
results of SSB are quite well approximated by the ACS variational approach,
which is more accurate than the standard mean-field one. For large $N$ the
numerical results approach the analytical curves, which become practically
indistinguishable.
## VI Conclusions
In this paper we have adopted a second-quantization formalism and time-
dependent atomic coherent states to study finite-size effects in a Josephson
junction of $N$ bosons, obtaining experimentally detectable theoretical
predictions. The experiments with cold atoms in lattices and double wells
reported in Ober1 ; Ober2 ; Ober3 ; Ober4 , for instance, showed that atom
numbers well below $N=100$ can be reached and successfully detected with an
uncertainty of the order of one atom. In particular, we have obtained an
analytical formula with $1/N$ corrections to the standard mean-field treatment
for the frequency of Josephson oscillations. We have shown that this formula,
based on atomic coherent states, is in very good agreement with numerical
simulations and it reduces to the familiar mean-field one in the large $N$
limit. We have also investigated the spontaneous symmetry breaking of the
ground state. At the critical interaction strength for the spontaneous
symmetry breaking the population-balanced configuration is no longer the one
with maximal probability. Also in this case the agreement between the
analytical predictions of the atomic coherent states and numerical results is
good. Finally, we have studied the critical interaction strength for the
macroscopic quantum self trapping. Here we have found that the $1/N$
corrections to the standard mean-field theory predicted by the atomic coherent
states are not reliable. Summarizing, the time-dependent variational ansatz with
atomic coherent states is quite reliable in the description of the short-time
dynamics of the bosonic Josephson junction both in the Rabi regime, where
$0\leq|U/J|\ll 1/N$, and in the Josephson regime, where $1/N\ll|U/J|\ll N$
[25]. Instead, in the Fock regime, where $|U/J|\gg N$, a full many-body
quantum treatment is needed.
## Acknowledgements
The authors thank A. Cappellaro, L. Dell’Anna, A. Notari, V. Penna, and F.
Toigo for useful discussions. LS acknowledges the BIRD project “Superfluid
properties of Fermi gases in optical potentials” of the University of Padova
for financial support.
## References
* (1) B. D. Josephson, Phys. Lett. 1, 251 (1962).
* (2) A. Barone and G. Paterno, Physics and Applications of the Josephson effect (Wiley, New York, 1982).
* (3) E. L. Wolf, G.B. Arnold, M.A. Gurvitch, and John F. Zasadzinski, Josephson Junctions: History, Devices, and Applications (Pan Stanford Publishing, Singapore, 2017).
* (4) D.T. Ladd et al., Nature 464, 45 (2010).
* (5) I. Buluta et al., Rep. Prog. Phys. 74, 104401 (2011).
* (6) M.H. Anderson et al., Science 269, 198 (1995); K.B. Davis et al., Phys. Rev. Lett. 75, 3969 (1995).
* (7) I. Bloch, J. Dalibard and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
* (8) A. Smerzi, S. Fantoni, S. Giovanazzi, and S.R. Shenoy, Phys. Rev. Lett. 79, 4950 (1997).
* (9) S. Raghavan, A. Smerzi, S. Fantoni, and S. R. Shenoy, Phys. Rev. A 59, 620 (1999).
* (10) S. Ashhab and C. Lobo, Phys. Rev. A 66, 013609 (2002).
* (11) M. Albiez et al., Phys. Rev. Lett. 95, 010402 (2005).
* (12) G. Valtolina et al., Science 350, 1505 (2015).
* (13) A. Leggett and F. Sols, Found. Phys. 21, 353 (1991).
* (14) P. Carruthers and M.M. Nieto, Rev. Mod. Phys. 40, 411 (1968).
* (15) A. Smerzi and S. Raghavan, Phys. Rev. A 61, 063601 (2000).
* (16) J. R. Anglin, P. Drummond, and A. Smerzi, Phys. Rev. A 64, 063605 (2001).
* (17) G. Ferrini, A. Minguzzi, and F.W.J. Hekking, Phys. Rev. A 78, 023606 (2008).
* (18) R. Glauber, Phys. Rev. 131, 2766 (1963).
* (19) F. T. Arecchi, E. Courtens, R. Gilmore, and H. Thomas, Phys. Rev. A 6, 2211 (1972).
* (20) W-M. Zhang, D. H. Feng, and R. Gilmore, Rev. Mod. Phys. 62, 867 (1990).
* (21) G. J. Milburn, J. Corney, E.M. Wright, and D.F. Walls, Phys. Rev. A 55, 4318 (1997).
* (22) P. Buonsante, V. Penna, and A. Vezzani, Phys. Rev. A 72, 043620 (2005).
* (23) P. Buonsante, P. Kevrekidis, V. Penna and A. Vezzani, J. Phys. B: At. Mol. Opt. Phys. 39, S77 (2006).
* (24) P. Buonsante and V. Penna, J. Phys. A: Math. Gen. 41, 175301 (2008).
* (25) A. J. Leggett, Rev. Mod. Phys. 73, 307 (2001).
* (26) M. Lewenstein, A. Sanpera, and V. Ahufinger, Ultracold Atoms in Optical Lattice (Oxford Univ. Press, 2012).
* (27) L. Amico and V. Penna, Phys. Rev. Lett. 80, 2189 (1998).
* (28) R. Franzosi, V. Penna, and R. Zecchina, Int. J. Mod. Physics B 14, 943 (2000).
* (29) F. Trimborn, D. Witthaut, and H. J. Korsch, Phys. Rev. A 77, 043631 (2008).
* (30) F. Trimborn, D. Witthaut, and H. J. Korsch, Phys. Rev. A 79, 013608 (2009).
* (31) G. Mazzarella, L. Salasnich, A. Parola, and F. Toigo, Phys. Rev. A 83, 053607 (2011).
* (32) C. Gross, H. Strobel, E. Nicklas, T. Zibold, N. Bar-Gill, G. Kurizki, and M. K. Oberthaler, Nature (London) 480, 219 (2011).
* (33) W. Muessel, H. Strobel, M. Joos, E. Nicklas, I. Stroescu, J. Tomkovic, D. Hume, and M. K. Oberthaler, Appl. Phys. B 113, 69 (2013).
* (34) D. B. Hume, I. Stroescu, M. Joos, W. Muessel, H. Strobel, and M. K. Oberthaler, Phys. Rev. Lett. 111, 253001 (2013).
* (35) I. Stroescu, D. B. Hume, and M. K. Oberthaler, Phys. Rev. A 91, 013412 (2015).
arXiv:2101.01011
# Rao–Blackwellization in the MCMC era
Christian P. Robert1,2,3 and Gareth O. Roberts1111 The work of the first
author was partly supported in part by the French government under management
of Agence Nationale de la Recherche as part of the “Blanc SIMI 1” program,
reference ANR-18-CE40-0034 and in part by the French government under
management of Agence Nationale de la Recherche as part of the “Investissements
d’avenir” program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute). The
first author also acknowledges the support of l’Institut Universitaire de
France through two consecutive senior chairs.
1University of Warwick,
2Université Paris Dauphine PSL,
and 3CREST-ENSAE
###### Abstract
Rao–Blackwellization is a notion often occurring in the MCMC literature, with
possibly different meanings and connections with the original Rao–Blackwell
theorem (Rao, , 1945; Blackwell, , 1947), including a reduction of the
variance of the resulting Monte Carlo approximations. This survey reviews some
of the meanings of the term.
Keywords: Monte Carlo, simulation, Rao–Blackwellization, Metropolis-Hastings
algorithm, Gibbs sampler, importance sampling, mixtures, parallelisation.
> This paper is dedicated to Professor C.R. Rao in honour of his 100th
> birthday.
## 1 Introduction
The neologism Rao–Blackwellization222We will use the American English spelling
of the neologism as this version is more commonly used in the
literature.333Berkson, (1955) may have been the first to use this neologism
(p. 142). stems from the famous Rao–Blackwell theorem (Rao, , 1945;
Blackwell, , 1947), which states that replacing an estimator by its
conditional expectation given a sufficient statistic improves estimation under
any convex loss. This is a famous mathematical statistics result, both dreaded
and appreciated by our students for involving conditional expectation and for
producing a constructive improvement, respectively. While Monte Carlo
approximation techniques cannot really be classified as estimation, since they
operate over controlled simulations, rather than observations, with the
ability to increase the sample size if need be, and since there is rarely a
free and unknown parameter involved, hence almost never a corresponding notion
of sufficiency, seeking improvement in Monte Carlo approximation via partial
conditioning has nonetheless been named after this elegant theorem. As shown
in Figure 1, the use of the expression Rao–Blackwellization has considerably
increased since the 1990s, once the foundational paper popularising MCMC
techniques referred to this technique to reduce Monte Carlo variability.
Figure 1: Google assessment of the popularity of the names Rao–Blackwell,
Rao–Blackwellisation, and Rao–Blackwellization since the derivation of the
Rao–Blackwell theorem.
The concept indeed started in the 1990 foundational paper by Gelfand and Smith
(“foundational” as it launched the MCMC revolution, see Green et al., , 2015).
While this is not exactly what is proposed in the paper, as detailed in the
following section, it is now perceived444See e.g. the comment in the
Introduction Section of Liu et al., (1994). that the authors remarked that,
given a Gibbs sampler whose component $\theta_{1}$ is simulated from the
conditional distribution, $\pi(\theta_{1}|\theta_{2},x)$, the estimation of
the marginal $\pi(\theta_{1}|x)$ is improved by considering the average of the
(full) conditionals across iterations,
$\nicefrac{{1}}{{T}}\sum_{t=1}^{T}\pi(\theta_{1}|\theta_{2}^{(t)},x)$
which provides a parametric, unbiased and
$\text{O}(\nicefrac{{1}}{{\sqrt{T}}})$ estimator. Similarly, the approximation
to $\mathbb{E}[\theta_{1}|x]$ based on this representation
$\nicefrac{{1}}{{T}}\sum_{t=1}^{T}\mathbb{E}[\theta_{1}|\theta_{2}^{(t)},x]$
is using conditional expectations with lesser variances than the original
$\theta_{1}^{(t)}$ and may thus lead to a reduced variance for the estimator,
if correlation does not get in the way. (In that specific two-step sampler,
this is always the case; Liu et al., 1994.)
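The variance reduction for the two-step Gibbs sampler can be illustrated numerically. The sketch below is our own toy illustration, not code from the paper: a Gibbs sampler for a bivariate normal target with correlation `rho` (all values assumed), comparing the plain empirical average of $\theta_1$ with the average of the closed-form conditional expectations $\mathbb{E}[\theta_1|\theta_2^{(t)}]=\rho\,\theta_2^{(t)}$ over independent replications.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, T, reps = 0.9, 1000, 2000         # correlation, chain length, replications
sd = np.sqrt(1 - rho**2)               # conditional standard deviation

# run `reps` independent two-step Gibbs samplers in parallel (vectorised)
th1 = np.zeros(reps)
th2 = np.zeros(reps)
s_plain = np.zeros(reps)
s_rb = np.zeros(reps)
for _ in range(T):
    th1 = rng.normal(rho * th2, sd)    # theta1 | theta2
    th2 = rng.normal(rho * th1, sd)    # theta2 | theta1
    s_plain += th1                     # ordinary term h(theta1) = theta1
    s_rb += rho * th2                  # E[theta1 | theta2] in closed form

plain, rb = s_plain / T, s_rb / T      # one estimate per replication
print(plain.var(), rb.var())           # Rao-Blackwellized variance is smaller
```

Across replications both estimators are centred at the true mean 0, and the variance of the Rao–Blackwellized version is smaller by a factor close to $\rho^2$, in line with the always-positive interleaving correlations of Liu et al. (1994).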
We are thus facing the difficult classification task of separating what is
Rao–Blackwellization from what is not Rao–Blackwellization in simulation and
in particular MCMC settings.
The difficulty resides in setting the limits as
* •
there is no clear notion of sufficiency in simulation and, further,
conditioning may increase the variance of the resulting estimator or slow down
convergence;
* •
variance reduction and unbiasedness are not always relevant (as shown by the
infamous harmonic mean estimator, Neal, , 1999; Robert and Wraith, , 2009), as
for instance in infinite variance importance sampling (Chatterjee and
Diaconis, , 2018; Vehtari et al., , 2019);
* •
there are (too) many forms of potential conditioning in simulation settings to
hope for a ranking (see, e.g., the techniques of partitioning, antithetic or
auxiliary variables, control variates as in Berg et al., , 2019, delayed
acceptance as in Banterle et al., , 2019; Beskos et al., 2006a, , adaptive
mixtures as in Owen and Zhou, , 2000; Cornuet et al., , 2012; Elvira et al., ,
2019, the latter more closely connected to Rao–Blackwellization);
* •
the large literature on the approximation of normalising constants and Bayes
factors (Robert and Marin, , 2008; Marin and Robert, , 2010, 2011) contains
many proposals that relate to Rao–Blackwellization, as, e.g., through the
simulation of auxiliary samples from instrumental distributions as initiated
in Geyer, (1993) and expanded into bridge sampling by Chopin and Robert,
(2010) and noise-contrastive estimation by Gutmann and Hyvärinen, (2012);
* •
in connection with the above, many versions of demarginalization such as slice
sampling (Roberts and Rosenthal, , 1999; Mira et al., , 2001) introduce
auxiliary variables that could be exploited towards bringing a variance
reduction;555This may however be seen as a perversion of Rao–Blackwellization
in that the dimension of the random variable used in the simulation is
increased, with the resulting estimate being obtained by the so-called Law of
the Unconscious Statistician.
* •
there is no optimal solution in simulation as, mathematically, a quantity such
as an expectation is uniquely and exactly defined once the distribution is
known: if computation time is not accounted for, the exact value is the
optimal solution;
* •
while standing outside a probabilistic framework, quasi-Monte Carlo techniques
(Liao, , 1998) can also be deemed to constitute an ultimate form of
Rao–Blackwellization, with the proposal of Kong et al., (2003) being an
intermediate solution;666As mentioned by the authors, the “group-averaged
estimator may be interpreted as Rao–Blackwellization given the orbit, so group
averaging cannot increase the variance” (p. 592)
but we will not cover any further these aspects here.
The rest of this review discusses Gibbs sampling in Section 2, other MCMC
settings in Section 3, retrospective continuous-time methods in Section 4,
particle filters and SMC in Section 5, and concludes in Section 6.
## 2 Gibbs sampling
Let us recall that a Gibbs sampler (Geman and Geman, , 1984) is a specific way
of building a Markov chain with stationary density $\pi(\cdot)$ through the
iterative generation from conditional densities associated with the joint
$\pi(\cdot)$. Its simplest version consists in partitioning the argument
$\theta$ into $\theta=(\theta_{1},\theta_{2})$ and generating alternatively
from $\pi_{1}(\theta_{1}|\theta_{2})$ and from
$\pi_{2}(\theta_{2}|\theta_{1})$. This binary version is sometimes called data
augmentation in reference to Tanner and Wong, (1987), who implemented an
algorithm related to the Gibbs sampler for latent variable models.
When proposing this algorithm as a way of simulating from marginal densities
and (hence) posterior distributions, Gelfand and Smith (1990) explicitly
relate to the Rao–Blackwell theorem, as shown by the following quote777The
text has been retyped and may hence contain typos. The notations are those
introduced by Gelfand and Smith, (1990) and used for a while in the
literature, see e.g. Spiegelhalter et al., (1995) with $[X\mid Y]$ denoting
the conditional density of $X$ given $Y$. The double indexation of the
sequence is explained below.
> …we consider the problem of calculating a final form of marginal density
> from the final sample produced by either the substitution or Gibbs sampling
> algorithms. Since for any estimated marginal the corresponding full
> conditional has been assumed available, efficient inference about the
> marginal should clearly be based on using this full conditional
> distribution. In the simplest case of two variables, this implies that
> $[X\mid Y]$ and the $y^{(i)}_{j}$’s $(j=1,\ldots,m)$ should be used to make
> inferences about $[X]$, rather than imputing $X^{(i)}_{j}$ $(j=1,\ldots,nm)$
> and basing inference on these $X^{(i)}_{j}$’s. Intuitively, this follows,
> because to estimate $[X]$ using the $x^{(i)}_{j}$’s requires a kernel
> density estimate. Such an estimate ignores the known form $[X\mid Y]$ that
> is mixed to obtain $[X]$. The formal argument is essentially based on the
> Rao–Blackwell theorem. We sketch a proof in the context of the density
> estimator itself. If $X$ is a continuous p-dimensional random variable,
> consider any kernel density estimator of $[X]$ based on the $X^{(i)}_{j}$’s
> (e.g., see Devroye and Györfi, 1985) evaluated at $x_{0}$:
> $\Delta^{(i)}_{x_{0}}=(1/mh_{m}^{p})\sum_{j=1}^{m}K[(x_{0}-X^{(i)}_{j})/h_{m}]$,
> say, where $K$ is a bounded density on $\mathbb{R}^{p}$ and the sequence
> $\\{h_{m}\\}$ is such that as $m\rightarrow\infty$, $h_{m}\rightarrow 0$,
> whereas $mh_{m}\rightarrow\infty$. To simplify notation, set
> $Q_{m,x_{0}}(X)=(1/h_{m}^{p})K[(x_{0}-X)/h_{m}]$ so that
> $\Delta^{(i)}_{x_{0}}=(1/m)\sum_{j=1}^{m}Q_{m,x_{0}}(X_{j}^{(i)})$. Define
> $\gamma_{x_{0}}^{i}=(1/m)\sum_{j=1}^{m}\mathbb{E}[Q_{m,x_{0}}(X)\mid
> Y^{(i)}_{j}]$. By our earlier theory, both $\Delta^{(i)}_{x_{0}}$ and
> $\gamma_{x_{0}}^{i}$ have the same expectation. By the Rao–Blackwell
> theorem, $\text{var}\,\mathbb{E}[Q_{m,x_{0}}(X)\mid
> Y]\leq\text{var}\,Q_{m,x_{0}}(X)$, and hence
> $\text{MSE}(\gamma_{x_{0}}^{i})\leq\text{MSE}(\Delta^{(i)}_{x_{0}})$, where
> MSE denotes the mean squared error of the estimate of $[X]$.
This Section 2.6. of the paper calls for several precisions:
* •
the simulations $x^{(i)}_{j}$ and $y^{(i)}_{j}$ are double-indexed because the
authors consider $m$ parallel and independent runs of the Gibbs sampler, $i$
being the number of iterations since the initial step, in continuation of
Tanner and Wong, (1987),
* •
the Rao–Blackwell argument is more specifically a conditional expectation
step,
* •
as later noted by Geyer, (1994), the conditioning argument is directed at
(better) approximating the entire density $[X]$, even though the authors
mention on the following page that the argument is “simpler for estimation of”
a posterior expectation,
* •
they compare the mean squared errors of the expected density estimate rather
than the rates of convergence of a non-parametric kernel estimator (in
$n^{\nicefrac{{-1}}{{4+d}}}$) versus an unbiased parametric density estimator
(in $n^{\nicefrac{{-1}}{{2}}}$), which does not call for a
Rao–Blackwellization argument,
* •
they do not (yet) mention “Rao–Blackwellization” as a technique,
* •
and they do not envision (more) ergodic averages across iterations, possibly
fearing the potential impact of the correlation between the terms for a given
chain.
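The kernel-versus-mixture comparison in the quoted passage is easy to reproduce numerically. The following is a minimal sketch on an invented Gaussian pair (not the setting of Gelfand and Smith): $Y\sim\mathcal{N}(0,1)$ and $X\mid Y=y\sim\mathcal{N}(y,1)$, so that $[X]=\mathcal{N}(0,2)$ is known and both density estimates at a point `x0` can be checked against the truth.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
m = 5000
y = rng.normal(0.0, 1.0, m)        # Y ~ N(0, 1)
x = rng.normal(y, 1.0)             # X | Y = y ~ N(y, 1), hence X ~ N(0, 2)

x0 = 0.0
true_density = normal_pdf(x0, 0.0, np.sqrt(2.0))

# kernel density estimate of [X] at x0 (Gaussian kernel, bandwidth h assumed)
h = 0.3
kde = normal_pdf((x0 - x) / h, 0.0, 1.0).mean() / h

# Rao-Blackwellized "mixture" estimate: average of the known conditionals [X|Y]
rb = normal_pdf(x0, y, 1.0).mean()

print(true_density, kde, rb)
```

The mixture-of-conditionals estimate is unbiased and converges at the parametric $\sqrt{m}$ rate, whereas the kernel estimate carries the usual bandwidth bias, which is the point of the Rao–Blackwell argument above.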
A more relevant step in the use of Rao–Blackwellization techniques for the
Gibbs sampler is found in Liu et al., (1994). This later article establishes
in particular that, for the two-step Gibbs sampler, Rao–Blackwellization
always produces a decrease in the variance of the empirical averages. This is
established in a most elegant manner by showing that each extra conditioning
(or further lag) decreases the correlation, which is always positive.888The
“always” qualification applies to every transform of the chain and to every
time lag. The proof relies on the associated notion of interleaving and
expresses the above correlation as the variance of a multiply conditioned
expectation:
$\text{cov}\big(h(\theta_{1}^{(0)}),h(\theta_{1}^{(n)})\big)=\text{var}\,\big(\mathbb{E}[\ldots\mathbb{E}[\mathbb{E}\\{h(\theta_{1})\mid\theta_{2}\\}]\ldots]\big)\,,$
where the number of conditional expectations on the rhs is $n$. The authors
also warn that a “fast mixing scheme gains an extra factor in efficiency if
the mixture estimate can be easily computed” and give a counter-example when
Rao–Blackwellization increases the variance.999The function leading to the
counter-example is however a function of both $\theta_{1}$ and $\theta_{2}$,
which may be considered as less relevant in latent variable settings. This
counter-example is exploited in a contemporary paper by Geyer,
(1994)101010Geyer, (1994) also points out that a similar Rao–Blackwellization
was proposed by Pearl, (1987). where a necessary and sufficient but highly
theoretical condition is given for an improvement. As the author puts it in
his conclusion,
> The point of this article is not that Rao–Blackwellized estimators are a
> good thing or a bad thing. They may be better or worse than simple averaging
> of the functional of interest without conditioning. The point is that, when
> the autocorrelation structure of the Markov chain is taken into account, it
> is not a theorem that Rao–Blackwellized estimators are always better than
> simple averaging. Hence the name Rao–Blackwellized should be avoided,
> because it brings to mind optimality properties that these estimators do not
> really possess. Perhaps “averaging a conditional expectation” is a better
> name.
but his recommendation was not particularly popular, to judge from the
subsequent literature resorting to this denomination.
Another connection between Rao–Blackwellization and Gibbs sampling can be
found in Chib, (1995), where his approximation to the marginal likelihood
$m(x)=\dfrac{\pi(\theta^{*})f(x|\theta^{*})}{\hat{\pi}(\theta^{*}|x)}$
is generally based on an estimate111111Chib, (1995) mentions this connection
(p.1314) but seems to restrict it to the two-stage Gibbs sampler. In the
earlier version known as “the candidate’s formula”, due to a Durham student
coming up with it, Besag, (1989) points out the possibility of using an
approximation such as a Laplace approximation, rather than an MCMC
estimation.121212A question found on the statistics forum Cross-Validated
illustrates the difficulty with understanding demarginalisation and joint
simulation: “Chib suggests that we can insert the Gibbs sampling outputs of
$\mu$ into the summation [of the full conditionals]. But aren’t the outputs
obtained from Gibbs about the joint posterior $p(\mu,\phi|y)$? Why suddenly
can we use the results from joint distribution to replace the marginal
distribution?” of the posterior density using a latent (or auxiliary)
variable, as in Gelfand and Smith, (1990),
$\hat{\pi}(\theta^{*}|x)=\nicefrac{{1}}{{T}}\sum_{t=1}^{T}\pi(\theta^{*}|x,z^{(t)})\,.$
The stabilisation brought by this parametric approximation is notable when
compared with kernel estimates, even though it requires that the marginal
distribution on $z$ is correctly simulated (Neal, , 1999).
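This Rao–Blackwellized posterior-density estimate can be sketched on an invented conjugate latent-variable model (the model, the prior variance `tau2` and all names are illustrative, not from Chib, 1995): $\theta\sim\mathcal{N}(0,\tau^2)$, $z_i\mid\theta\sim\mathcal{N}(\theta,1)$, $x_i\mid z_i\sim\mathcal{N}(z_i,1)$, so that marginally $x_i\mid\theta\sim\mathcal{N}(\theta,2)$ and the exact posterior is available as a check.

```python
import numpy as np

def npdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
n = 20
x = rng.normal(1.0, np.sqrt(2.0), n)       # data, true theta = 1
tau2 = 100.0                               # prior variance of theta

post_prec = n / 2 + 1 / tau2               # exact posterior precision of theta|x
post_mean = (x.sum() / 2) / post_prec
theta_star = post_mean                     # evaluation point, as in Chib (1995)

# Gibbs sampler on (theta, z) with a Rao-Blackwellized density estimate
T, theta = 20000, 0.0
cond_prec = n + 1 / tau2                   # precision of theta | z
dens = np.empty(T)
for t in range(T):
    z = rng.normal((theta + x) / 2, np.sqrt(0.5))      # z | theta, x
    mu_t = z.sum() / cond_prec
    theta = rng.normal(mu_t, np.sqrt(1 / cond_prec))   # theta | z
    dens[t] = npdf(theta_star, mu_t, np.sqrt(1 / cond_prec))

pi_hat = dens[500:].mean()                 # \hat{pi}(theta*|x), burn-in dropped
pi_true = npdf(theta_star, post_mean, np.sqrt(1 / post_prec))
print(pi_hat, pi_true)
```

Plugging `pi_hat` into $m(x)=\pi(\theta^{*})f(x|\theta^{*})/\hat{\pi}(\theta^{*}|x)$ then gives Chib's marginal-likelihood estimate; here we only verify the density estimate against the known posterior.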
## 3 Markov chain Monte Carlo methods
In the more general setting of Markov chain Monte Carlo (MCMC) algorithms
(Robert and Casella, , 2004), further results characterise the improvement
brought by Rao–Blackwellization. Let us briefly recall that the concept behind
MCMC is to create a Markov sequence $(\theta^{(n)})_{n}$ of dependent variables
that converge (in distribution) to the distribution of interest (also called
target). One of the most ubiquitous versions of an MCMC algorithm is the
Metropolis–Hastings algorithm (Metropolis et al., , 1953; Hastings, , 1970;
Green et al., , 2015).
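For later reference, a minimal random-walk Metropolis–Hastings sampler can be sketched as follows (a toy $\mathcal{N}(0,1)$ target and all tuning values are our own assumptions):

```python
import numpy as np

def metropolis_hastings(log_target, theta0, scale, T, rng):
    """Random-walk Metropolis-Hastings: propose eta ~ q(.|theta) = N(theta, scale^2);
    since q is symmetric, accept with probability min(1, pi(eta)/pi(theta))."""
    theta, chain = theta0, np.empty(T)
    for t in range(T):
        eta = rng.normal(theta, scale)                   # proposed value eta_t
        if np.log(rng.uniform()) < log_target(eta) - log_target(theta):
            theta = eta                                  # accept the proposal
        chain[t] = theta                                 # on rejection, repeat theta
    return chain

rng = np.random.default_rng(3)
chain = metropolis_hastings(lambda th: -0.5 * th ** 2, 0.0, 2.5, 100_000, rng)
print(chain.mean(), chain.var())   # close to 0 and 1 for the N(0,1) target
```

The uniform decision variates drawn inside the acceptance step are exactly the quantities that the Rao–Blackwellization schemes discussed next attempt to integrate out.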
One direct exploitation of the Rao–Blackwell theorem is found in McKeague and
Wefelmeyer, (2000), who show in particular that, when estimating the mean of
$\theta$ under the target distribution, a Rao–Blackwellized version based on
$\mathbb{E}[h(\theta^{(n)})\mid\theta^{(n-1)}]$ will improve the asymptotic
variance of the ordinary empirical estimator when the chain $(\theta^{(n)})$
is reversible. While the setting may appear quite restrictive, the authors
manage to recover data augmentation with a double conditional expectation
(when compared with Liu et al., , 1994) as well as reversible Gibbs and
Metropolis samplers of the Ising model. The difficulty in applying the method
resides in computing the conditional expectation, since a replacement with a
Monte Carlo approximation cancels its appeal.
Casella and Robert, (1996) consider an altogether different form of
Rao–Blackwellization for both accept-reject and Metropolis–Hastings samples.
The core idea is to integrate out via a global conditional expectation the
Uniform variates used to accept or reject the proposed values.
A sample produced by the Metropolis–Hastings algorithm,
$\theta^{(1)},\ldots,\allowbreak\theta^{(T)}$, is in fact based on two
simulated samples, the sample of proposed values $\eta_{1},\ldots,\eta_{T}$
and the sample of decision variates $u_{1},\ldots,u_{T}$, with $\eta_{t}\sim
q(y|\theta^{(t-1)})$ and $u_{t}\sim\mathcal{U}([0,1])$. Since $\theta^{(t)}$
is equal to one of the earlier proposed values, an empirical average
associated with this sample can be written131313In order to avoid additional
notations, we assume a continuous model where all $\eta_{i}$’s are different
with probability one.
$\delta^{MH}=\nicefrac{{1}}{{T}}\sum_{t=1}^{T}h(\theta^{(t)})=\nicefrac{{1}}{{T}}\sum_{t=1}^{T}\sum_{i=1}^{t}\;\mathbb{I}_{\theta^{(t)}=\eta_{i}}\;h(\eta_{i})\,.$
Therefore, taking a conditional expectation of the above by integrating the
decision variates,
$\displaystyle\delta^{RB}$
$\displaystyle=\displaystyle{\nicefrac{{1}}{{T}}\sum_{i=1}^{T}\;h(\eta_{i})\;\mathbb{E}\left[\sum_{t=i}^{T}\;\mathbb{I}_{\theta^{(t)}=\eta_{i}}\bigg{|}\eta_{1},\ldots,\eta_{T}\right]}$
$\displaystyle=\displaystyle{\nicefrac{{1}}{{T}}\sum_{i=1}^{T}\;h(\eta_{i})\;\sum_{t=i}^{T}\;\mathbb{P}(\theta^{(t)}=\eta_{i}|\eta_{1},\ldots,\eta_{T})}\,,$
leads to an improvement of the empirical average, $\delta^{MH}$, under convex
losses.
While, for the independent Metropolis–Hastings algorithm, the conditional
probability can be obtained in closed form (see also Atchadé and Perron, ,
2005 and Jacob et al., , 2011), the general case, based on an arbitrary
proposal distribution $q(\cdot|\theta)$ is such that $\delta^{RB}$ is less
tractable but Casella and Robert, (1996) derive a tractable recursive
expression for the weights of $h(\eta_{i})$ in $\delta^{RB}$, with complexity
of order $\mathcal{O}(T^{2})$. Follow-up papers are Perron, (1999) and
Casella et al., (2004).
While again attempting at integrating out the extraneous uniform variates
exploited by the Metropolis–Hastings algorithm, Douc and Robert, (2011)
derive another Rao–Blackwellized improvement over the regular
Metropolis–Hastings algorithm by following a different representation of
$\delta^{MH}$, using the accepted chain $(\xi_{i})_{i}$ instead of the
proposed sequence of the $\eta_{i}$’s as in Casella and Robert, (1996). The
version based on accepted values is indeed rewritten as
$\delta^{MH}=\nicefrac{{1}}{{T}}\,\sum_{i=1}^{M}\mathfrak{n}_{i}h(\xi_{i})\,,$
where the $\xi_{i}$’s are the accepted $\eta_{j}$’s, $M$ is the number of
accepted $\eta_{j}$’s till iteration $T$, and $\mathfrak{n}_{i}$ is the number
of times $\xi_{i}$ appears in the sequence $(\theta^{(t)})_{t}$. This
representation is also exploited in Sahu and Zhigljavsky, (1998); Gåsemyr,
(2002); Sahu and Zhigljavsky, (2003), and Malefaki and Iliopoulos, (2008).
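The accepted-values representation is an exact rewriting of the ordinary average, which the following sketch checks numerically (toy $\mathcal{N}(0,1)$ target and test function $h(\theta)=\theta^2$, both assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
log_target = lambda th: -0.5 * th ** 2      # N(0,1) target, symmetric proposal
T, scale, theta = 10000, 2.0, 0.0

chain = np.empty(T)
xi = [theta]                                # accepted values xi_i (xi_0 = start)
counts = [0]                                # occupancy numbers n_i
for t in range(T):
    eta = rng.normal(theta, scale)          # proposal
    if np.log(rng.uniform()) < log_target(eta) - log_target(theta):
        theta = eta
        xi.append(theta); counts.append(1)  # a new accepted value starts a block
    else:
        counts[-1] += 1                     # current value repeated one more time
    chain[t] = theta

h = lambda v: v ** 2
delta_mh = np.mean(h(chain))                            # ordinary average
delta_rep = sum(c * h(v) for c, v in zip(counts, xi)) / T  # (1/T) sum n_i h(xi_i)
print(delta_mh, delta_rep)                  # identical by construction
```

The two numbers agree exactly, since $\mathfrak{n}_i$ simply counts how long the chain sits on each accepted value $\xi_i$.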
The Rao–Blackwellisation construct of Douc and Robert, (2011) exploits the
following properties:
1. 1.
$(\xi_{i},\mathfrak{n}_{i})_{i}$ is a Markov chain;
2. 2.
$\xi_{i+1}$ and $\mathfrak{n}_{i}$ are independent given $\xi_{i}$;
3. 3.
$\mathfrak{n}_{i}$ is distributed as a Geometric random variable with
probability parameter
$p(\xi_{i}):=\int\alpha(\xi_{i},\eta)\,q(\eta|\xi_{i})\,\mathrm{d}\eta\,;$ (1)
4. 4.
$(\xi_{i})_{i}$ is a Markov chain with transition kernel
$\tilde{Q}(\xi,\mathrm{d}\eta)=\tilde{q}(\eta|\xi)\mathrm{d}\eta$ and
stationary distribution $\tilde{\pi}$ such that
$\tilde{q}(\cdot|\xi)\propto\alpha(\xi,\cdot)\,q(\cdot|\xi)\quad\mbox{and}\quad\tilde{\pi}(\cdot)\propto\pi(\cdot)p(\cdot)\,.$
Since the Metropolis–Hastings estimator $\delta^{MH}$ only involves the
$\xi_{i}$’s, i.e. the accepted $\eta_{t}$’s, an optimal weight for those
random variables is the importance weight $1/p(\xi_{i})$, leading to the
corresponding importance sampling estimator
$\delta^{IS}=\nicefrac{{1}}{{T}}\,\sum_{i=1}^{M}\dfrac{h(\xi_{i})}{p(\xi_{i})}\,,$
but this quantity is almost invariably unavailable in closed form and need be
estimated by an unbiased estimator. The geometric $\mathfrak{n}_{i}$ is the de
facto solution that is used in the original Metropolis-Hastings estimate, but
solutions with smaller variance also are available, based on the property that
(if $\alpha(\xi,\eta)$ denotes the Metropolis–Hastings acceptance probability)
$\hat{\zeta}_{i}=1+\sum_{j=1}^{\infty}\prod_{\ell\leq
j}\left\\{1-\alpha(\xi_{i},\eta_{\ell})\right\\}\qquad\eta_{\ell}\stackrel{{\scriptstyle\text{i.i.d.}}}{{\sim}}q(\eta|\xi_{i})$
is an unbiased estimator of $1/p(\xi_{i})$ whose variance, conditional on
$\xi_{i}$, is lower than the conditional variance of $\mathfrak{n}_{i}$,
$\\{1-p(\xi_{i})\\}/p^{2}(\xi_{i})$. For practical implementation, in the
event $\alpha(\xi,\eta)$ is too rarely equal to one, the number of terms where
the indicator function is replaced with its expectation
$\alpha(\xi_{i},\eta_{\ell})$ may be limited, without jeopardising the
variance domination.
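The variance domination of $\hat{\zeta}_i$ over the geometric $\mathfrak{n}_i$ can be checked by simulation. The sketch below uses an invented toy setting (our assumption, not from Douc and Robert, 2011) where proposals are $\eta\sim\mathcal{U}(0,1)$ and $\alpha(\xi,\eta)=\eta$, so that $p(\xi)=\mathbb{E}[\alpha]=\nicefrac{1}{2}$ and $1/p(\xi)=2$:

```python
import numpy as np

rng = np.random.default_rng(5)

# toy setting: eta ~ U(0,1) and alpha(xi, eta) = eta, hence p(xi) = 1/2
N = 20000
zeta = np.empty(N)
for k in range(N):
    total, prod = 1.0, 1.0
    while prod > 1e-10:                   # truncate the a.s.-finite series
        prod *= 1.0 - rng.uniform()       # factor {1 - alpha(xi, eta_l)}
        total += prod                     # running sum of the products
    zeta[k] = total                       # one draw of zeta-hat

geom = rng.geometric(0.5, size=N)         # the de facto estimator n_i, mean 1/p

print(zeta.mean(), zeta.var(), geom.mean(), geom.var())
```

Both estimators are unbiased for $1/p(\xi)=2$, but in this toy case $\text{var}(\hat{\zeta})=\nicefrac{1}{2}$ against $\text{var}(\mathfrak{n})=\{1-p\}/p^{2}=2$ for the geometric count, illustrating the conditional variance reduction.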
## 4 Retrospective: Continuous time Monte Carlo methods
Retrospective simulation (Beskos et al., 2006a, ) is an attempt to take
advantage of the redundancy inherent in modern simulation algorithms
(particularly MCMC, rejection sampling) by subverting the traditional order of
algorithm steps. It is connected to demarginalisation and pseudo-marginal
(Andrieu and Roberts, , 2009) techniques in that it replaces a probability of
acceptance with an unbiased estimation of the said probability, hence creating
an auxiliary variable in the process. In the case of the Metropolis-Hastings
algorithm, this means substituting the ratio
$\dfrac{\pi(\theta^{\prime})q(\theta^{(t)}|\theta^{\prime})}{\pi(\theta^{(t)})q(\theta^{\prime}|\theta^{(t)})}$
with
$\dfrac{\hat{\pi}^{\prime}q(\theta^{(t)}|\theta^{\prime})}{\hat{\pi}^{(t)}q(\theta^{\prime}|\theta^{(t)})}$
where $\hat{\pi}^{\prime}$ is an auxiliary variable such that
$\mathbb{E}[\hat{\pi}^{\prime}|\theta^{\prime}]=\kappa\pi(\theta^{\prime})\qquad\mathbb{E}[\hat{\pi}^{(t)}|\theta^{(t)}]=\kappa\pi(\theta^{(t)})$
Retrospective simulation is most powerful in infinite dimensional contexts,
where its natural competitors are approximate and computationally expensive.
The solution advanced by Beskos et al., 2006a and Beskos et al., 2006b to
simulate diffusions in an exact manner (for a finite number of points) relies
on an auxiliary and bounding Poisson process. The selected points produced
this way actually act as a random sufficient statistic in the sense that the
stochastic process can be generated from Brownian bridges between these points
and closed-form estimators conditional on these points may be available with
a smaller variance. See also Fearnhead et al., (2017) for related
results on continuous-time importance sampling (CIS). This includes a
sequential importance sampling procedure with a random variable whose
expectation is equal to the importance weight.141414One difficulty with the
approach is the possible occurrence of negative importance weights (Jacob and
Thiery, , 2015).
## 5 Rao–Blackwellized particle filters
Also known as particle filtering151515An early instance, called bootstrap
filter (Gordon et al., , 1993), involved one of the authors of Gelfand and
Smith, (1990), who thus contributed to the birth of two major advances in the
field. sequential Monte Carlo (Liu and Chen, , 1998; Doucet et al., , 1999;
Del Moral et al., , 2006) is another branch of the Monte Carlo methodology
where the concept of Rao–Blackwellisation has had an impact. We briefly recall
here that sequential Monte Carlo is used in state-space and other Bayesian
dynamic models where the magnitude of the latent variable prevents the call to
traditional Monte Carlo (and MCMC) techniques. It is also relevant for dealing
with complex static problems by creating a sequence of intermediate and
artificial models, a technique called tempering (Marinari and Parisi, , 1992).
Doucet et al., (2000) introduce a general version of the Rao–Blackwellized
particle filter by commenting on the inherent inefficiency of particle filters
in large dimensions, compounded by the dynamic nature of the sampling scheme.
The central filtering equation is a Bayesian update of the form
$p(z_{1:t}|y_{1:t})\propto
p(z_{1:(t-1)}|y_{1:(t-1)})p(y_{t}|z_{t})p(z_{t}|z_{t-1})$ (2)
in a state-space formulation where $(z_{t})_{t}$ (also denoted $z_{1:T}$) is
the latent Markov chain and $(y_{t})_{t}$ the observed sequence. In this
update, the conditional densities of $z_{1:t}$ and $z_{1:(t-1)}$ are usually
unavailable and need be approximated by sampling solutions.
If some marginalisation of the sampling is available for the model at hand,
this reduces the degeneracy phenomenon at the core of particle filters. The
example provided in Doucet et al., (2000) is one where $z_{t}=(x_{t},r_{t})$,
with
$p(x_{1:t},r_{1:t}|y_{1:t})=p(x_{1:t}|y_{1:t},r_{1:t})p(r_{1:t}|y_{1:t})$
and $p(x_{1:t}|y_{1:t},r_{1:t})$ available in closed form.161616Doucet et al.,
(2000) provide a realistic illustration for a neural network where the
manageable part is obtained via a Kalman filter. This component can then be
used in the approximation of the filtering distribution (2), instead of
weighted Dirac masses, which improves its precision if only by bringing a
considerable decrease in the dimension of the particles (Doucet et al., ,
2000, Proposition 2). It is indeed sufficient to resort only to particles for
the intractable part.
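A minimal sketch of this marginalisation idea is given below, on an invented switching linear-Gaussian toy model (all model choices, parameter values and names are our assumptions, not the neural-network example of Doucet et al., 2000): particles carry only the discrete regime $r_t$, while $x_t$ is handled in closed form by one Kalman step per particle.

```python
import numpy as np

rng = np.random.default_rng(6)

# toy model, in the spirit of z_t = (x_t, r_t): r_t in {0,1} is a two-state
# Markov regime, x_t = x_{t-1} + w_t with w_t ~ N(0, q[r_t]),
# and y_t = x_t + v_t with v_t ~ N(0, 1)
q = np.array([0.1, 2.0])          # regime-dependent state-noise variances
stay = 0.95                       # P(r_t = r_{t-1})

# simulate a trajectory
Tn, reg_true, x = 100, 0, 0.0
xs, ys = [], []
for t in range(Tn):
    if rng.uniform() > stay:
        reg_true = 1 - reg_true
    x += rng.normal(0.0, np.sqrt(q[reg_true]))
    xs.append(x)
    ys.append(x + rng.normal())

# Rao-Blackwellized particle filter: each particle carries only r_t plus the
# Kalman sufficient statistics (m, P) of x_t given r_{1:t} and y_{1:t}
Np = 500
reg = np.zeros(Np, dtype=int)
m, P = np.zeros(Np), np.ones(Np)
est = []
for y in ys:
    flip = rng.uniform(size=Np) > stay           # propagate the regimes
    reg = np.where(flip, 1 - reg, reg)
    Pp = P + q[reg]                              # Kalman predict for x_t
    S = Pp + 1.0                                 # innovation variance
    w = np.exp(-0.5 * (y - m) ** 2 / S) / np.sqrt(S)  # p(y_t | r_{1:t}, y_{1:t-1})
    K = Pp / S                                   # Kalman gain
    m, P = m + K * (y - m), Pp * (1.0 - K)       # Kalman update
    w /= w.sum()
    est.append(float(np.sum(w * m)))             # filtered mean of x_t
    idx = rng.choice(Np, size=Np, p=w)           # multinomial resampling
    reg, m, P = reg[idx], m[idx], P[idx]

rmse = np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2))
print(rmse)
```

Only the intractable discrete component is represented by particles; the weights use the predictive likelihood of the marginalised $x_t$ rather than weighted Dirac masses, which is the source of the dimension (and variance) reduction.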
See Andrieu et al., (2001); Johansen et al., (2012); Lindsten, (2011);
Lindsten et al., (2011) for further extensions on this principle. In
particular, the PhD thesis of Lindsten, (2011) contains the following and
relevant paragraph:
> Moving from [the particle estimator] to [its Rao–Blackwellized version]
> resembles a Rao–Blackwellisation of the estimator (see also Lehmann, 1983).
> In some sense, we move from a Monte Carlo integration to a partially
> analytical integration. However, it is not clear that the Rao–Blackwellized
> particle filter truly is a Rao-Blackwellisation of [the original], in the
> factual meaning of the concept. That is, it is not obvious that the
> conditional expectation of [the original] results in [its
> Rao–Blackwellized version]. This is due to the nontrivial relationship
> between the normalised weights generated by the [particle filter], and those
> generated by [its Rao–Blackwellized version]. It can thus be said that [it]
> has earned its name from being inspired by the Rao–Blackwell theorem, and
> not because it is a direct application of it.
Nonetheless, any exploitation of conditional properties that does not induce a
(significant) bias is bound to bring stability and faster convergence to
particle filters.
## 6 Conclusion
The term Rao–Blackwellization is therefore common enough in the MCMC
literature to be considered as a component of the MCMC toolbox. As we pointed
out in the introduction, many tricks and devices introduced in the past can
fall under the hat of that term and, while a large fraction of them does not
come with a demonstrated improvement over earlier proposals, promoting the
concepts of conditioning and demarginalising as central to the field should be
seen as essential for researchers and students alike. Linking such concepts,
shared by statistics and Monte Carlo, with an elegant and historical result
like the Rao–Blackwell theorem stresses both the universality and the
resilience of the idea.
11institutetext: Instituto de Física Fundamental (CSIC). Calle Serrano
121-123, 28006, Madrid, Spain. 11email: [email protected]
22institutetext: Facultad de Ciencias. Universidad Autónoma de Madrid, 28049
Madrid, Spain. 33institutetext: Institut de Radioastronomie Millimétrique
(IRAM), Grenoble, France. 44institutetext: LERMA, Observatoire de Paris, PSL
Research University, CNRS, Sorbonne Universités, 92190 Meudon, France.
55institutetext: Observatorio Astronómico Nacional (OAN), Alfonso XII, 3,
28014 Madrid, Spain. 66institutetext: Max-Planck-Institut für Radioastronomie,
Auf dem Hügel 69, 53121 Bonn, Germany. 77institutetext: OASU/LAB-UMR5804,
CNRS, Université Bordeaux, 33615 Pessac, France. 88institutetext: European
Southern Observatory, Alonso de Cordova 3107, Vitacura, Santiago, Chile.
# Bottlenecks to interstellar sulfur chemistry
Sulfur-bearing hydrides in UV-illuminated gas and grains
J. R. Goicoechea 11 A. Aguado 22 S. Cuadrado 11 O. Roncero 11 J. Pety 33 E.
Bron 44
A. Fuente 55 D. Riquelme 66 E. Chapillon 3377 C. Herrera 33 C. A. Duran 6688
(Received 25 October 2020 / Accepted 23 December 2020)
Hydride molecules lie at the base of interstellar chemistry, but the synthesis
of sulfuretted hydrides is poorly understood and their abundances often
crudely constrained. Motivated by new observations of the Orion Bar
photodissociation region (PDR) – 1′′ resolution ALMA images of SH+; IRAM 30m
detections of bright H${}_{2}^{32}$S, H${}_{2}^{34}$S, and H${}_{2}^{33}$S
lines; H3S+ (upper limits); and SOFIA/GREAT observations of SH (upper limits)
– we perform a systematic study of the chemistry of sulfur-bearing hydrides.
We self-consistently determine their column densities using coupled
excitation, radiative transfer as well as chemical formation and destruction
models. We revise some of the key gas-phase reactions that lead to their
chemical synthesis. This includes ab initio quantum calculations of the
vibrational-state-dependent reactions $\rm
SH^{+}+H_{2}({\it{v}})\rightleftarrows H_{2}S^{+}+H$ and $\rm
S\,+\,H_{2}\,({\it{v}})\rightleftarrows SH\,+\,H$. We find that reactions of
UV-pumped H2($v$ $\geq$ 2) molecules with S+ ions explain the presence of SH+
in a high thermal-pressure gas component, $P_{\rm th}/k$ $\approx$ $10^{8}$ cm$^{-3}$ K,
close to the H2 dissociation front (at $A_{V}$ $<$ 2 mag). These PDR layers
are characterized by no or very little depletion of elemental sulfur from the
gas. However, subsequent hydrogen abstraction reactions of SH+, H2S+, and S
atoms with vibrationally excited H2, fail to form enough H2S+, H3S+, and SH to
ultimately explain the observed H2S column density ($\sim$2.5$\times$10$^{14}$
cm$^{-2}$, with an ortho-to-para ratio of 2.9 $\pm$ 0.3; consistent with the high-
temperature statistical value). To overcome these bottlenecks, we build PDR
models that include a simple network of grain surface reactions leading to the
formation of solid H2S (s-H2S). The higher adsorption binding energies of S
and SH suggested by recent studies imply that S atoms adsorb on grains (and
form s-H2S) at warmer dust temperatures ($T_{d}$ $<$ 50 K) and closer to the
UV-illuminated edges of molecular clouds. We show that wherever s-H2S
mantles form(ed), gas-phase H2S emission lines will be detectable.
Photodesorption and, to a lesser extent, chemical desorption, produce roughly
the same H2S column density (a few 1014 cm-2) and abundance peak (a few 10-8)
nearly independently of $n_{\rm H}$ and $G_{0}$. This agrees with the observed
H2S column density in the Orion Bar as well as at the edges of dark clouds
without invoking substantial depletion of elemental sulfur abundances.
###### Key Words.:
Astrochemistry — line: identification — ISM: clouds — (ISM:) photon-dominated
region (PDR)
## 1 Introduction
Hydride molecules play a pivotal role in interstellar chemistry (e.g., Gerin
et al., 2016), being among the first molecules to form in diffuse interstellar
clouds and at the UV-illuminated edges of dense star-forming clouds, so-called
photodissociation regions (PDRs; Hollenbach & Tielens, 1997). Sulfur is on the
top ten list of most abundant cosmic elements and it is particularly relevant
for astrochemistry and star-formation studies. Its low ionization potential
(10.4 eV) makes the photoionization of S atoms a dominant source of electrons
in molecular gas at intermediate visual extinctions $A_{V}$ $\simeq$ 2 - 4 mag
(Sternberg & Dalgarno, 1995; Goicoechea et al., 2009; Fuente et al., 2016).
The sulfur abundance, [S/H], in diffuse clouds (e.g., Howk et al., 2006) is
very close to the [S/H] measured in the solar photosphere
(${\rm[S/H]}_{\odot}$ $\simeq$ 1.4$\times$10$^{-5}$; Asplund et al., 2009). Still,
the observed abundances of S-bearing molecules in diffuse and translucent
molecular clouds ($n_{\rm H}$ $\simeq$ $10^{2}$ $-$ $10^{3}$ cm$^{-3}$) make up a
very small fraction, $<$ 1 $\%$, of the sulfur nuclei (mostly locked as S+;
Tieftrunk et al., 1994; Turner, 1996; Lucas & Liszt, 2002; Neufeld et al.,
2015). In colder dark clouds and dense cores shielded from stellar UV
radiation, most sulfur is expected in molecular form. However, the result of
adding the abundances of all detected gas-phase S-bearing molecules is
typically a factor of $\sim$10$^{2}$-10$^{3}$ lower than [S/H]⊙ (e.g., Fuente et al.,
2019). Hence, it has historically been assumed that sulfur species deplete on grain
mantles at cold temperatures and high densities (e.g., Graedel et al., 1982;
Millar & Herbst, 1990; Agúndez & Wakelam, 2013). However, recent chemical
models predict that the major sulfur reservoir in dark clouds can be either
gas-phase neutral S atoms (Vidal et al., 2017; Navarro-Almaida et al., 2020)
or organo-sulfur species trapped on grains (Laas & Caselli, 2019).
Unfortunately, it is difficult to overcome this dichotomy from an
observational perspective. In particular, no ice carrier of an abundant sulfur
reservoir other than solid OCS (hereafter s-OCS, with an abundance of
$\sim$10$^{-8}$ with respect to H nuclei; Palumbo et al., 1997) has been
convincingly identified. Considering the large abundances of water ice (s-H2O)
grain mantles in dense molecular clouds and cold protostellar envelopes (see
reviews by van Dishoeck, 2004; Gibb et al., 2004; Dartois, 2005), one may also
expect hydrogen sulfide (s-H2S) to be the dominant sulfur reservoir. Indeed,
s-H2S is the most abundant S-bearing ice in comets such as
67P/Churyumov–Gerasimenko (Calmonte et al., 2016). However, only upper limits
to the s-H2S abundance of $\lesssim$1 % relative to water ice have so far been
estimated toward a few interstellar sightlines (e.g., Smith, 1991; Jiménez-
Escobar & Muñoz Caro, 2011). These values imply a maximum s-H2S ice abundance
of several 10$^{-6}$ with respect to H nuclei. Still, this upper limit could be
higher if s-H2S ices are well mixed with s-H2O and s-CO ices (Brittain et al.,
2020).
The bright rims of molecular clouds illuminated by nearby massive stars are
intermediate environments between diffuse and cold dark clouds. Such
environments host the transition from ionized S+ to neutral atomic S, as well
as the gradual formation of S-bearing molecules (Sternberg & Dalgarno, 1995).
In one prototypical low-illumination PDR, the edge of the Horsehead nebula,
Goicoechea et al. (2006) inferred very modest gas-phase sulfur depletions. In
addition, the detection of narrow sulfur radio recombination lines in dark
clouds (implying the presence of S+; Pankonin & Walmsley, 1978) is an argument
against large sulfur depletions in the mildly illuminated surfaces of these
clouds. The presence of new S-bearing molecules such as S2H, the first (and so
far only) doubly sulfuretted species detected in a PDR (Fuente et al., 2017),
suggests that the chemical pathways leading to the synthesis of sulfuretted
species are not well constrained, and that the list of S-bearing molecules is
likely not complete.
Figure 1: Overview of the Orion Bar. The (0′′, 0′′) position corresponds to
$\mathrm{\alpha_{2000}=05^{h}\,35^{m}\,20.1^{s}\,}$;
$\mathrm{\delta_{2000}=-\,05^{\circ}25^{\prime}07.0^{\prime\prime}}$. Left
panel: Integrated line intensity maps in the 13CO $J$ = 3-2 (color scale) and
SO 89-78 emission (gray contours; from 6 to 23.5 K km s$^{-1}$ in steps of 2.5
K km s$^{-1}$) obtained with the IRAM 30 m telescope at 8′′ resolution. The white
dotted contours delineate the position of the H2 dissociation front as traced by
the infrared H2 $v$ = 1–0 $S$(1) line (from 1.5 to 4.0 $\times$ 10$^{-4}$ erg
s$^{-1}$ cm$^{-2}$ sr$^{-1}$ in steps of 0.5 $\times$ 10$^{-4}$ erg s$^{-1}$
cm$^{-2}$ sr$^{-1}$; from Walmsley et al.,
2000). The black-dashed rectangle shows the smaller FoV imaged with ALMA (Fig.
3). The DF position has been observed with SOFIA, IRAM 30 m, and Herschel.
Cyan circles represent the $\sim$15′′ beam at 168 GHz. Right panel: H2S lines
detected toward three positions of the Orion Bar.
Interstellar sulfur chemistry is unusual compared to that of other elements in
that none of the simplest species, X = S, S+, SH, SH+, or H2S+, reacts
exothermically with H2 ($v$ = 0) in the initiation reactions X + H2
$\rightarrow$ XH + H (so-called hydrogen abstraction reactions). Hence, one
would expect a slow sulfur chemistry and very low abundances of SH+
(sulfanylium) and SH (mercapto) radicals in cold interstellar gas. However,
H2S (Lucas & Liszt, 2002), SH+ (Menten et al., 2011; Godard et al., 2012), and
SH (Neufeld et al., 2012, 2015) have been detected in low-density diffuse
clouds ($n_{\rm H}\lesssim 100$ cm$^{-3}$) through absorption measurements of
their ground-state rotational lines (SH was first reported by IR spectroscopy
toward the circumstellar envelope around the evolved star R Andromedae;
Yamamura et al., 2000). In UV-illuminated gas, most sulfur atoms are
ionized, but the very high endothermicity of reaction
${\rm S^{+}\,(^{4}{\it{S}})+H_{2}\,(^{1}\Sigma^{+},\nu=0)\rightleftarrows
SH^{+}\,(^{3}\Sigma^{-})+H\,(^{2}{\it{S}})}$ (1)
($E\,/\,k$ = 9860 K, e.g., Zanchet et al., 2013a, 2019) prevents this reaction
from being efficient unless the gas is heated to very high temperatures. In
diffuse molecular clouds (on average at $T_{\rm k}$ $\sim$ 100 K), the
formation of SH+ and SH only seems possible in the context of local regions of
overheated gas subjected to magnetized shocks (Pineau des Forets et al., 1986)
or in dissipative vortices of the interstellar turbulent cascade (Godard et
al., 2012, 2014). In these tiny pockets ($\sim$100 AU in size), the gas would
attain the hot temperatures ($T_{\rm k}\,$$\simeq$ 1000 K) and/or ion-neutral
drift needed to overcome the endothermicities of the above hydrogen
abstraction reactions (see, e.g., Neufeld et al., 2015).
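The scale of this bottleneck is set by the Boltzmann factor $\exp(-E/kT)$ with $E/k$ = 9860 K from reaction (1). The short sketch below is purely illustrative back-of-the-envelope arithmetic; it uses only the endothermicity quoted in the text and ignores the actual state-specific rate coefficients:

```python
import math

E_OVER_K = 9860.0  # endothermicity of S+ + H2(v=0) -> SH+ + H, in kelvin (from the text)

def suppression(t_kelvin):
    """Boltzmann factor exp(-E/kT) limiting the endothermic rate at temperature T."""
    return math.exp(-E_OVER_K / t_kelvin)

# Average diffuse-cloud gas (~100 K) versus shock/vortex-heated pockets (~1000 K):
for temperature in (100.0, 500.0, 1000.0):
    print(f"T = {temperature:6.0f} K  ->  exp(-E/kT) = {suppression(temperature):.2e}")
```

At 100 K the factor is vanishingly small, which is why only overheated pockets, ion-neutral drift, or vibrationally excited H2 can make reaction (1) proceed.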
Dense PDRs ($n_{\rm H}$ $\simeq 10^{3}-10^{6}$ cm$^{-3}$) offer a complementary
environment to study the first steps of sulfur chemistry. Because of their
higher densities and more quiescent gas, fast shocks or turbulence dissipation
do not contribute to the gas heating. Instead, the molecular gas is heated to
$T_{\rm k}\lesssim$ 500 K by mechanisms that depend on the flux of far-UV
photons (FUV; $E$ $<$13.6 eV). A different perspective of the H2 ($v$)
reactivity emerges because certain endoergic reactions become exoergic and
fast when a significant fraction of the H2 reagents are radiatively pumped to
vibrationally excited states $v\geq 1$ (Stecher & Williams, 1972; Freeman &
Williams, 1982; Tielens & Hollenbach, 1985; Sternberg & Dalgarno, 1995). In
this case, state-specific reaction rates for H2 ($\nu,J$) are needed to make
realistic predictions of the abundance of the product XH (Agúndez et al.,
2010; Zanchet et al., 2013b; Faure et al., 2017). The presence of abundant
FUV-pumped H2 ($v$ $\geq$ 1) triggers a nonthermal “hot” chemistry. Indeed,
CH+ and SH+ emission lines have been detected in the Orion Bar PDR (Nagy et
al., 2013; Goicoechea et al., 2017) where H2 lines up to $v$ = 10 have been
detected as well (Kaplan et al., 2017).
In this work we present a systematic (observational and modeling) study of
the chemistry of S-bearing hydrides in FUV-illuminated gas. We try to answer
the question of whether gas-phase reactions of S atoms and SH+ molecules with
vibrationally excited H2 can ultimately explain the presence of abundant H2S,
or if grain surface chemistry has to be invoked.
The paper is organized as follows. In Sects. 2 and 3 we report on new
observations of H${}_{2}^{32}$S, H${}_{2}^{34}$S, H${}_{2}^{33}$S, SH+, SH,
and H3S+ emission lines toward the Orion Bar. In Sect. 4 we study their
excitation and derive their column densities. In Sect. 6 we discuss their
abundances in the context of updated PDR models, with emphasis on the role of
hydrogen abstraction reactions
${\rm SH^{+}\,(^{3}\Sigma^{-})+H_{2}\,(^{1}\Sigma^{+})\rightleftarrows
H_{2}S^{+}\,(^{2}A^{\prime})+H\,(^{2}{\it{S}})},$ (2)
$\rm{H_{2}S^{+}\,(^{2}{\it{A^{\prime}}})+H_{2}\,(^{1}\Sigma^{+})\rightleftarrows
H_{3}S^{+}\,(X^{1}{\it{A_{\rm 1}}})+H\,(^{2}{\it{S}})},$ (3)
$\rm{S\,({\it{{}^{3}P}})+H_{2}\,(^{1}\Sigma^{+})\rightleftarrows
SH\,(X^{2}\Pi)+H\,({\it{{}^{2}S}})},$ (4)
photoreactions, and grain surface chemistry. In Sect. 5 we summarize the ab
initio quantum calculations we carried out to determine the state-dependent
rates of reactions (2) and (4). Details of these calculations are given in
Appendices A and B.
## 2 Observations of S-bearing hydrides
### 2.1 The Orion Bar
At an adopted distance of $\sim$414 pc, the Orion Bar is the interface between
the Orion molecular cloud and the Huygens H ii region that surrounds the Trapezium
cluster (Genzel & Stutzki, 1989; O’Dell, 2001; Bally, 2008; Goicoechea et al.,
2019, 2020; Pabst et al., 2019, 2020). The Orion Bar is a prototypical
strongly illuminated dense PDR. The impinging flux of stellar FUV photons
($G_{0}$) is a few 104 times the mean interstellar radiation field (Habing,
1968). The Bar is seen nearly edge-on with respect to the FUV illuminating
sources, mainly $\theta^{1}$ Ori C, the most massive star in the Trapezium.
This favorable orientation allows observers to spatially resolve the H+-to-H
transition (the ionization front or IF; see, e.g., Walmsley et al., 2000;
Pellegrini et al., 2009) from the H-to-H2 transition (the dissociation front
or DF; see, e.g., Allers et al., 2005; van der Werf et al., 1996, 2013;
Wyrowski et al., 1997; Cuadrado et al., 2019). It also allows one to study the
stratification of different molecular species as a function of cloud depth
(i.e., as the flux of FUV photons is attenuated; see, e.g., Tielens et al.,
1993; van der Wiel et al., 2009; Habart et al., 2010; Goicoechea et al., 2016;
Parikka et al., 2017; Andree-Labsch et al., 2017).
Regarding sulfur (sulfur has four stable isotopes, in decreasing order of
abundance: 32S ($I_{\rm N}$ = 0), 34S ($I_{\rm N}$ = 0), 33S ($I_{\rm N}$ =
3/2), and 36S ($I_{\rm N}$ = 0), where $I_{\rm N}$ is the nuclear spin; the
most abundant isotope is here simply referred to as S), several studies
previously reported the detection of S-bearing molecules in the Orion Bar.
These include CS, C34S, SO, SO2, and H2S (Hogerheijde et al., 1995; Jansen et
al., 1995), SO+ (Fuente et al., 2003), C33S, HCS+, H2CS, and NS (Leurini et
al., 2006), and SH+ (Nagy et al., 2013). These detections refer to modest
angular resolution pointed observations using single-dish telescopes. Higher-
angular-resolution interferometric imaging of SH+, SO, and SO+ (Goicoechea et
al., 2017) was possible thanks to the Atacama Compact Array (ACA).
### 2.2 Observations of H2S isotopologues and H3S+
We observed the Orion Bar with the IRAM 30 m telescope at Pico Veleta (Spain).
We used the EMIR receivers in combination with the Fast Fourier Transform
Spectrometer (FTS) backends at 200 kHz resolution ($\sim$0.4 km s$^{-1}$, $\sim$0.3
km s$^{-1}$, and $\sim$0.2 km s$^{-1}$ at $\sim$168 GHz, $\sim$217 GHz, and $\sim$293
GHz, respectively). These observations are part of a complete line survey
covering the frequency range 80 $-$ 360 GHz (Cuadrado et al., 2015, 2016,
2017, 2019) and include deep integrations at 168 GHz toward three positions of
the PDR located at a distance of 14′′, 40′′, and 65′′ from the IF (see Fig.
1). Their offsets with respect to the IF position at
$\mathrm{\alpha_{2000}=05^{h}\,35^{m}\,20.1^{s}\,}$,
$\mathrm{\delta_{2000}=-\,05^{\circ}25^{\prime}07.0^{\prime\prime}}$ are
(+10′′, -10′′), (+30′′, -30′′), and (+35′′, -55′′). The first position is the
DF.
We carried out these observations in the position switching mode taking a
distant reference position at ($-$600′′, 0′′). The half power beam width
(HPBW) at $\sim$168 GHz, $\sim$217 GHz, and $\sim$293 GHz is $\sim$15′′,
$\sim$11′′, and $\sim$8′′, respectively. The latest observations (those at 168
GHz) were performed in March 2020. The data were first calibrated in the
antenna temperature scale $T^{*}_{\rm A}$ and then converted to the main beam
temperature scale, $T_{\rm mb}$, using $T_{\rm mb}$ = $T^{*}_{\rm
A}/\upeta_{\rm mb}$, where $\upeta_{\rm mb}$ is the antenna efficiency
($\upeta_{\rm mb}$ = 0.74 at $\sim$168 GHz). We reduced and analyzed the data
using the GILDAS software as described in Cuadrado et al. (2015). The typical
rms noise of the spectra is $\sim$3.5, 5.3, and 7.8 mK per velocity channel at
$\sim$168 GHz, $\sim$217 GHz, and $\sim$293 GHz, respectively. Figures 1 and 2
show the detection of $o$-H2S $1_{1,0}-1_{0,1}$ (168.7 GHz), $p$-H2S
$2_{2,0}-2_{1,1}$ (216.7 GHz), and $o$-H234S $1_{1,0}-1_{0,1}$ lines (167.9
GHz) (see Table 7 for the line parameters), as well as several $o$-H233S
$1_{1,0}-1_{0,1}$ hyperfine lines (168.3 GHz).
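The antenna-to-main-beam temperature conversion quoted above is a one-line scaling; a minimal sketch using the efficiency quoted in the text (the function name is ours):

```python
def to_main_beam(t_a_star, eta_mb):
    """Convert antenna temperature T_A* to main-beam temperature T_mb = T_A* / eta_mb."""
    if not 0.0 < eta_mb <= 1.0:
        raise ValueError("main-beam efficiency must lie in (0, 1]")
    return t_a_star / eta_mb

# At ~168 GHz the text quotes eta_mb = 0.74, so a 1 K antenna temperature
# corresponds to ~1.35 K on the main-beam scale.
print(to_main_beam(1.0, 0.74))
```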
We complemented our dataset with higher frequency H2S lines detected by the
Herschel Space Observatory (Nagy et al., 2017) toward the “CO+ peak” position
(Stoerzer et al., 1995), which is located at only $\sim$4′′ from our DF
position (i.e., within the HPBW of these observations). These observations
were carried out with the HIFI receiver (de Graauw et al., 2010) at a
spectral resolution of 1.1 MHz (0.7 km s$^{-1}$ at 500 GHz). HIFI's HPBWs range from
$\sim$42′′ to $\sim$20′′ in the 500 - 1000 GHz window (Roelfsema et al.,
2012). The list of additional hydrogen sulfide lines detected by Herschel
includes the $o$-H2S $2_{2,1}-2_{1,2}$ (505.5 GHz), $2_{1,2}-1_{0,1}$ (736.0
GHz), and $3_{0,3}-2_{1,2}$ (993.1 GHz), as well as the $p$-H2S
$2_{0,2}-1_{1,1}$ (687.3 GHz) line. We used the line intensities, in the
$T_{\rm mb}$ scale, shown in Table A.1 of Nagy et al. (2017).
In order to get a global view of the Orion Bar, we also obtained 2.5′ $\times$
2.5′ maps of the region observed by us with the IRAM 30 m telescope using the
330 GHz EMIR receiver and the FTS backend at 200 kHz spectral resolution
($\sim$0.2 km s$^{-1}$). On-the-fly (OTF) scans were obtained along and
perpendicular to the Bar. The resulting spectra were gridded to a data cube
through convolution with a Gaussian kernel providing a final resolution of
$\sim$8′′. The total integration time was $\sim$6 h. The achieved rms noise is
$\sim$1 K per resolution channel. Figure 1 shows the spatial distribution of
the 13CO $J$=3-2 (330.5 GHz) and SO 89-78 (346.5 GHz) integrated line
intensities.
### 2.3 ALMA imaging of Orion Bar edge in SH+ emission
We carried out mosaics of a small field of the Orion Bar using twenty-seven
ALMA 12 m antennas in band 7 (at $\sim$346 GHz). These unpublished
observations belong to project 2012.1.00352.S (P.I.: J. R. Goicoechea) and
consisted of a 27-pointing mosaic centered at $\alpha$(2000) = 5h35m20.6s;
$\delta$(2000) = -05°25′20′′. The total field-of-view (FoV) is
58′′$\times$52′′ (shown in Fig. 1). The two hyperfine line components of the
SH+ $N_{J}$ = $1_{0}-0_{1}$ transition were observed with correlators
providing $\sim$500 kHz resolution (0.4 km s$^{-1}$) over a 937.5 MHz bandwidth.
The total observation time with the ALMA 12 m array was $\sim$2 h. In order to
recover the large-scale extended emission filtered out by the interferometer,
we used deep and fully sampled single-dish maps, obtained with the total-power
(TP) antennas at 19′′ resolution, as zero- and short-spacings. Data
calibration procedures and image synthesis steps are described in Goicoechea
et al. (2016). The synthesized beam is $\sim$1′′. This is a factor of $\sim$4
better than previous interferometric SH+ observations (Goicoechea et al.,
2017). Figure 3 shows the resulting image of the SH+ $1_{0}-0_{1}$ $F$ =
1/2-3/2 hyperfine emission line at 345.944 GHz. We rotated this image 37.5°
clockwise to bring the FUV illumination in the horizontal direction. The
typical rms noise of the final cube is $\sim$ 80 mK per velocity channel and
1′′-beam. As expected from their Einstein coefficients, the other $F$ =
1/2-1/2 hyperfine line component at 345.858 GHz is a factor of $\sim$2 fainter
(see Table 8) and the resulting image has low signal-to-noise (S/N).
We complemented the SH+ dataset with the higher frequency lines observed by
HIFI (Nagy et al., 2013, 2017) at $\sim$526 GHz and $\sim$683 GHz (upper
limit). These pointed observations have HPBWs of $\sim$41′′ and $\sim$32′′,
respectively; thus they do not spatially resolve the SH+ emission. To
determine their beam coupling factors ($f_{\rm b}$), we smoothed the larger
4′′-resolution ACA + TP SH+ image shown in Goicoechea et al. (2017) to the
different HIFI’s HPBWs. We obtain $f_{\rm b}$ $\simeq$ 0.4 at $\sim$526 GHz
and $f_{\rm b}$ $\simeq$ 0.6 at $\sim$683 GHz. The corrected intensities are
computed as $W_{\rm corr}$ = $W_{\rm HIFI}$ / $f_{\rm b}$. These correction
factors are only a factor of $\lesssim$ 2 lower than simply assuming uniform
SH+ emission from a 10′′ width filament.
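The beam-coupling correction applied to the HIFI intensities is similarly simple; a sketch with the factors quoted above (helper names are ours):

```python
def beam_corrected(w_hifi, f_b):
    """Correct a HIFI line intensity for beam dilution: W_corr = W_HIFI / f_b."""
    if not 0.0 < f_b <= 1.0:
        raise ValueError("beam coupling factor must lie in (0, 1]")
    return w_hifi / f_b

# Coupling factors estimated from the smoothed ACA + TP SH+ image (from the text):
corrections = {526e9: 0.4, 683e9: 0.6}   # frequency (Hz) -> f_b
for freq, f_b in corrections.items():
    print(f"{freq / 1e9:.0f} GHz: W_corr = {beam_corrected(1.0, f_b):.2f} x W_HIFI")
```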
### 2.4 SOFIA/GREAT search for SH emission
We finally used the GREAT receiver (Heyminck et al., 2012) on board the
Stratospheric Observatory For Infrared Astronomy (SOFIA; Young et al., 2012)
to search for the lowest-energy rotational lines of SH (${}^{2}\Pi_{3/2}$ $J$
= 3/2-1/2) at 1382.910 and 1383.241 GHz (e.g., Klisch et al., 1996; Martin-
Drumel et al., 2012). These lines lie in a frequency gap that Herschel/HIFI
could not observe from space. These SOFIA observations belong to project
$07\_0115$ (P.I.: J. R. Goicoechea). The SH lines were searched for in the
lower sideband of 4GREAT band 3. We employed the 4GREAT/HFA frontends and 4GFFT
spectrometers as backends. The HPBW of SOFIA at 1.3 THz is $\sim$20′′, thus
comparable with IRAM 30 m/EMIR and Herschel/HIFI observations. We also
employed the total power mode with a reference position at ($-$600′′,0′′). The
original plan was to observe during two flights in November 2019, but due to
bad weather conditions, only $\sim$70 min of observations were carried out in
a single flight.
After calibration, data reduction included the removal of a first-order
spectral baseline, dropping scans with problematic receiver response,
rms-weighted averaging of the spectral scans, and conversion to the $T_{\rm mb}$
intensity scale ($\upeta_{\rm mb}$ = 0.71). The final spectrum, smoothed to a
velocity resolution of 1 km s-1, has an rms noise of $\sim$50 mK (shown in Fig. 4). Two
emission peaks are seen at the frequencies of the $\Lambda$-doublet lines.
Unfortunately, the achieved rms is not sufficient to ensure an unambiguous
detection of each component of the doublet. Although the stacked spectrum does
display a single line (suggesting a tentative detection), the resulting line
width ($\Delta$v $\simeq$ 7 km s-1) is a factor of $\sim$3 broader than
expected in the Orion Bar (see Table 9). Hence, this spectrum provides
stringent upper limits to the SH column density but deeper integrations would
be needed to confirm the detection.
## 3 Observational results
Figure 2: Detection of H${}_{2}^{33}$S (at $\sim$168.3 GHz) toward the DF position of
the Orion Bar. Red lines indicate hyperfine components. Blue lines show
interloping lines from 13CCH. The length of each line is proportional to the
transition line strength (taken from the Cologne Database for Molecular
Spectroscopy, CDMS; Endres et al., 2016).
Figure 3: ALMA 1′′-resolution images
zooming into the edge of the Orion Bar in 12CO 3-2 (left panel, Goicoechea et
al., 2016) and SH+ 10-01 $F$ = 1/2-3/2 line (middle panel, integrated line
intensity). The right panel shows the H2 $v$ = 1–0 $S$(1) line (Walmsley et
al., 2000). We rotated these images (all showing the same FoV) with respect to
Fig. 1 to bring the FUV illuminating direction in the horizontal direction
(from the right). The circle shows the DF position targeted with SOFIA in SH
(20′′ beam) and with the IRAM 30m telescope in H2S and H3S+.
### 3.1 H${}_{2}^{32}$S, H${}_{2}^{34}$S, and H${}_{2}^{33}$S across the PDR
Figure 1 shows an expanded view of the Orion Bar in the 13CO ($J$ = 3-2)
emission. FUV radiation from the Trapezium stars comes from the upper-right
corner of the image. The FUV radiation field is attenuated in the direction
perpendicular to the Bar. The infrared H2 $v$ = 1–0 $S$(1) line emission
(white contours) delineates the position of the H-to-H2 transition, the DF.
Many molecular species, such as SO, specifically emit from deeper inside the
PDR where the flux of FUV photons has considerably decreased. In contrast,
H2S, and even its isotopologue H${}_{2}^{34}$S, show bright 11,0-10,1 line
emission toward the DF (right panels in Fig. 1; see also Jansen et al., 1995).
Rotationally excited H2S lines have also been detected toward this position
(Nagy et al., 2017), implying the presence of warm H2S close to the irradiated
cloud surface (i.e., at relatively low extinctions). The presence of
moderately large H2S column densities in the PDR is also demonstrated by the
unexpected detection of the rare isotopologue H${}_{2}^{33}$S toward the DF
(at the correct LSR velocity of the PDR: $v_{\rm LSR}$ $\simeq$ 10.5 km s-1). Figure 2
shows the H${}_{2}^{33}$S 11,0-10,1 line and its hyperfine splittings
(produced by the 33S nuclear spin). To our knowledge, H${}_{2}^{33}$S lines
had only been reported toward the hot cores in Sgr B2 and Orion KL before
(Crockett et al., 2014).
The observed $o$-H2S/$o$-H${}_{2}^{34}$S 11,0-10,1 line intensity ratio toward
the DF is 15 $\pm$ 2, below the solar isotopic ratio of 32S/34S $=$ 23 (e.g.,
Anders & Grevesse, 1989). The observed ratio thus implies optically thick
$o$-H2S line emission at $\sim$168 GHz. However, the observed
$o$-H${}_{2}^{34}$S/$o$-H${}_{2}^{33}$S 11,0-10,1 intensity ratio is 6 $\pm$
1, thus compatible with the solar isotopic ratio (34S/33S $=$ 5.5) and with
H${}_{2}^{34}$S and H${}_{2}^{33}$S optically thin emission.
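The optical-depth inference behind this comparison can be sketched with the standard isotopologue-ratio argument, assuming both lines share the same excitation. The helper names are ours and the calculation is illustrative, not taken from the paper:

```python
# Minimal sketch of the standard isotopologue-ratio opacity estimate implied
# above: if H2S and H2-34S share the same excitation, the observed intensity
# ratio R(tau) = (1 - e^-tau) / (1 - e^-(tau/23)) falls below the intrinsic
# 32S/34S = 23 once the main line turns optically thick.

import math

def line_ratio(tau, iso=23.0):
    """Expected main-to-rare isotopologue intensity ratio for opacity tau."""
    # expm1 keeps precision in the optically thin limit (ratio -> iso)
    return math.expm1(-tau) / math.expm1(-tau / iso)

def tau_from_ratio(r_obs, iso=23.0, lo=1e-6, hi=50.0):
    """Invert line_ratio by bisection (the ratio decreases monotonically)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if line_ratio(mid, iso) > r_obs:
            lo = mid          # line still too thin: need larger tau
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The observed ratio of 15 +/- 2 corresponds to a moderate main-line opacity
tau_168 = tau_from_ratio(15.0)
```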
Figure 4: Search for the SH 2$\Pi_{3/2}$ $J$=5/2-3/2 doublet (at $\sim$1383
GHz) toward the DF position with SOFIA/GREAT. Vertical magenta lines indicate
the position of hyperfine splittings taken from CDMS.
### 3.2 SH+ emission from the PDR edge
Figure 3 zooms into a small field of the Bar edge. The ALMA image of the CO
$J$ = 3-2 line peak temperature was first presented by Goicoechea et al.
(2016). Because the CO $J$ = 3-2 emission is nearly thermalized and optically
thick from the DF to the molecular cloud interior, the line peak temperature
scale ($T_{\rm peak}$) is a good proxy of the gas temperature ($T_{\rm k}$
$\simeq$ $T_{\rm ex}$ $\simeq$ $T_{\rm peak}$). The CO image implies small
temperature variations around $T_{\rm k}$ $\simeq$ 200 K. The middle panel in
Fig. 3 shows the ALMA image of the SH+ $N_{J}$ = 10-01 $F$ = 1/2-3/2 hyperfine
line at 345.944 GHz. Compared to CO, the SH+ emission follows the edge of the
molecular PDR, akin to a filament of $\sim$10′′ width (for the spatial
distribution of other molecular ions, see, Goicoechea et al., 2017). The SH+
emission shows localized small-scale emission peaks (density or column density
enhancements) that match, or are very close to, the vibrationally excited H2
($v$ = 1-0) emission (Fig. 3). We note that while some H2 ($v$ = 1-0) emission
peaks likely coincide with gas density enhancements (e.g., Burton et al.,
1990), the region also shows extended emission from FUV-pumped H2 ($v$ = 2-1)
(van der Werf et al., 1996) that does not necessarily coincide with the H2
($v$ = 1-0) emission peaks.
### 3.3 Search for SH, H3S+, and H2S $\nu_{2}$ = 1 emission
We used SOFIA/GREAT to search for SH 2$\Pi_{3/2}$ $J$=5/2-3/2 lines toward the
DF (Fig. 4). This would have been the first time that interstellar SH
rotational lines were seen in emission. Unfortunately, the achieved rms of the
observation does not allow a definitive confirmation of these lines, so here
we will only discuss upper limits to the SH column density. The red, green,
and blue curves in Fig. 4 show radiative transfer models for $n_{\rm H}$ = 106
cm-3, $T_{\rm k}$ = 200 K, and different SH column densities (see Sect. 4 for
more details).
Our IRAM 30 m observations toward the DF did not result in a detection of
H3S+ either, a key gas-phase precursor of H2S. The $\sim$293.4 GHz spectrum around
the targeted H3S+ $1_{0}$-$0_{0}$ line is shown in Fig. 5. Again, the low rms
achieved allows us to provide a sensitive upper limit to the H3S+ column
density: $N$(H3S+) = (5.5-7.5)$\times$1010 cm-2 (5$\sigma$)
assuming an excitation temperature range $T_{\rm ex}$ = 10-30 K and extended
emission. Given the bright H2S emission close to the edge of the Orion Bar,
and because H2S formation at the DF might be driven by very exoergic
processes, we also searched for the 11,0-10,1 line of vibrationally excited
H2S (in the bending mode $\nu_{2}$). The frequency of this line lies at
$\sim$181.4 GHz (Azzam et al., 2013), thus at the end of our 2 mm-band
observations of the DF (rms $\simeq$ 16 mK). However, we do not detect this
line either.
Figure 5: Search for H3S+ toward the Orion Bar with the IRAM 30 m telescope.
The blue curve shows the expected position of the line.
## 4 Coupled nonlocal excitation and chemistry
In this section we study the rotational excitation of the observed S-bearing
hydrides (readers interested only in the chemistry of these species and in
depth-dependent PDR models can jump directly to Sect. 6). We determine the
SH+, SH (upper limit), and H2S column densities in the Orion Bar, and the
“average” gas physical conditions, in the sense that we search for the
combination of a single $T_{\rm k}$, $n_{\rm H}$, and $N$ that best reproduces
the observed line intensities (the so-called “single-slab” approach). In Sect. 6
we expand these excitation models to multi-slab calculations that take into
account the expected steep gradients in a PDR.
In the ISM, rotationally excited levels are typically populated by inelastic
collisions. However, the lifetime of very reactive molecules can be so short
that the details of their formation and destruction need to be taken into
account when determining how these levels are actually populated (Black,
1998). Reactive collisions (collisions that lead to a reaction and thus to
molecule destruction) influence the excitation of these species when their
timescales become comparable to those of nonreactive collisions. The lifetime
of reactive molecular ions observed in PDRs (e.g., Fuente et al., 2003; Nagy
et al., 2013; van der Tak et al., 2013; Goicoechea et al., 2017, 2019) can be
so short that they do not get thermalized by nonreactive collisions or by
absorption of the background radiation field (Black, 1998). In these cases, a
proper treatment of the molecule excitation requires including chemical
formation and destruction rates in the statistical equilibrium equations
(d$n_{i}$ / d$t$ = 0) that determine the level populations:
$\displaystyle\sum_{j>i}n_{j}\,A_{ji}+\sum_{j\neq
i}n_{j}\left(B_{ji}\,\bar{J}_{ji}+C_{ji}\right)+F_{i}=$ (5)
$\displaystyle=n_{i}\left(\sum_{j<i}A_{ij}+\sum_{j\neq
i}\left(B_{ij}\,\bar{J}_{ij}+C_{ij}\right)\,+\,D_{i}\right),$ (6)
where $n_{i}$ $[\rm cm^{-3}]$ is the population of rotational level $i$,
$A_{ij}$ and $B_{ij}$ are the Einstein coefficients for spontaneous and
induced emission, $C_{ij}$ $[\rm s^{-1}]$ is the rate of inelastic
collisions ($C_{ij}$ = $\sum_{k}\gamma_{ij,\,k}\,n_{k}$, where
$\gamma_{ij,\,k}(T)$ $[\rm cm^{3}s^{-1}]$ are the collisional rate
coefficients and $k$ stands for H2, H, and $e^{-}$), and $\bar{J}_{ij}$ is the
mean intensity of the total radiation field over the line profile. We use the
following inelastic collision rate coefficients $\gamma_{ij}$:
$\bullet$ SH+–$e^{-}$, including hyperfine splittings (Hamilton et al., 2018).
$\bullet$ SH+–$o$-H2 and $p$-H2, including hyperfine splittings (Dagdigian, 2019).
$\bullet$ SH+–H, including hyperfine splittings (Lique et al., 2020).
$\bullet$ $o$-H2S and $p$-H2S with $o$-H2 and $p$-H2 (Dagdigian, 2020).
$\bullet$ SH–He, including fine-structure splittings (Kłos et al., 2009).
In these equations, $n_{i}\,D_{i}$ is the
destruction rate per unit volume of the molecule in level $i$, and $F_{i}$ its
formation rate per unit volume (both in $\rm cm^{-3}s^{-1}$). When state-to-
state formation rates are not available, and assuming that the destruction
rate is the same in every level ($D_{i}$ = $D$), one can use the total
destruction rate $D\,[{\rm s^{-1}}]$ ($=\,\sum_{k}n_{k}\,k_{k}(T)$ +
photodestruction rate, where $k_{k}$ $[\rm cm^{3}s^{-1}]$ is the state-
averaged rate of the two-body chemical reaction with species $k$) and consider
that the level populations of the nascent molecule follow a Boltzmann
distribution at an effective formation temperature $T_{\rm form}$:
$F_{i}=F\,g_{i}\,e^{-E_{i}/kT_{\rm form}}\,/\,Q(T_{\rm form}).$ (7)
In this formalism, $F$ $[\rm cm^{-3}\,s^{-1}]$ is the state-averaged formation
rate per unit volume, $g_{i}$ the degeneracy of level $i$, and $Q(T_{\rm
form})$ is the partition function at $T_{\rm form}$ (van der Tak et al.,
2007).
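Eq. (7) can be written down compactly; a minimal sketch follows, using a hypothetical three-level set of degeneracies and energies (the real calculation uses the full SH+ level structure, e.g., from CDMS):

```python
# Minimal sketch of Eq. (7): the chemical formation rate is spread over the
# rotational levels with a Boltzmann distribution at T_form. The three-level
# example below is hypothetical, for illustration only.

import math

def formation_rates(F_total, g, E_over_k, T_form):
    """F_i = F g_i exp(-E_i / k T_form) / Q(T_form), with Q summed over
    the included levels (energies given as E_i/k in K)."""
    boltz = [gi * math.exp(-Ei / T_form) for gi, Ei in zip(g, E_over_k)]
    Q = sum(boltz)                       # partition function over these levels
    return [F_total * b / Q for b in boltz]

# Hypothetical example with T_form = 2000 K (the value adopted in Sect. 4.1)
F_i = formation_rates(F_total=1.0e-10, g=[2, 4, 6],
                      E_over_k=[0.0, 30.0, 90.0], T_form=2000.0)
# By construction, the state-specific rates sum back to the total rate F
```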
This “formation pumping” formalism has been previously implemented in large
velocity gradient codes to treat, for example, the local excitation of the
very reactive ion CH+ (Nagy et al., 2013; Godard & Cernicharo, 2013; Zanchet
et al., 2013b; Faure et al., 2017). However, interstellar clouds are
inhomogeneous and gas velocity gradients are typically modest at small spatial
scales. This means that line photons can be absorbed and reemitted several
times before leaving the cloud. Here we implemented this formalism in a Monte
Carlo code that explicitly models the nonlocal behavior of the excitation and
radiative transfer problem (see Appendix of Goicoechea et al., 2006).
Although radiative pumping by dust continuum photons does not generally
dominate in PDRs, for completeness we also included radiative excitation by a
modified blackbody at a dust temperature of $\sim$50 K and a dust opacity
$\tau_{\lambda}$ = 0.03 $(150/\lambda[\upmu{\rm m}])^{1.6}$ (which reproduces the
observed intensity and wavelength dependence of the dust emission in the Bar;
Arab et al., 2012). The molecular gas fraction, $f$(H2) = 2$n$(H2)/$n_{\rm
H}$, is set to 2/3, where $n_{\rm H}$ = $n$(H) + 2$n$(H2) is the total density
of H nuclei. This choice is appropriate for the dissociation front and implies
$n$(H2) = $n$(H). As most electrons in the DF come from the ionization of
carbon atoms, the electron density $n_{e}$ is set to $n_{e}$ $\simeq$ $n$(C+)
= 1.4$\times$10-4 $n_{\rm H}$ (e.g., Cuadrado et al., 2019). For the inelastic
collisions with $o$-H2 and $p$-H2, we assumed that the H2 ortho-to-para (OTP)
ratio is thermalized to the gas temperature.
### 4.1 SH+ excitation and column density
We start by assuming that the main destruction pathways of SH+ are reactions
with H atoms and recombination with electrons (see Sect. 6.1). Hence, the SH+
destruction rate is $D$ $\simeq$ $n_{e}$ $k_{\rm e}$($T$) + $n$(H) $k_{\rm
H}$($T$) (see Table 1 for the relevant chemical destruction rates). For
$T_{\rm k}$ = $T_{\rm e}$ = 200 K and $n_{\rm H}$ = 106 cm-3 (e.g., Goicoechea
et al., 2016) this implies $D$ $\simeq$ 10-4 s-1 (i.e., the lifetime of an SH+
molecule in the Bar is less than 3 h). At these temperatures and densities,
$D$ is about ten times smaller than the rate of radiative and inelastic
collisional transitions that depopulate the lowest-energy rotational levels of
SH+. Hence, formation pumping does not significantly alter the excitation of
the observed SH+ lines, but it does influence the population of higher-energy
levels. Formation pumping effects have been readily seen in CH+ because this
species is more reactive (CH+ reacts exothermically with H2($v$=0), producing
CH${}_{2}^{+}$ at $k$ = 1.2$\times$10-9 cm3 s-1, Anicich, 2003, and its
reaction with H is faster, $k$ = 7.5$\times$10-10 cm3 s-1) and because its
rotationally excited levels lie at higher energies (i.e., their inelastic
collision pumping rates are slower, e.g., Zanchet et al., 2013b).
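The destruction rate and lifetime quoted above can be checked with a short numerical sketch, using the Arrhenius fits of Table 1 (taking $\gamma$ = 0 for the recombination entry, whose $\gamma$ column carries only a source note) and the densities adopted in the text; this is an illustration, not the paper's code:

```python
# Rough numerical check of the SH+ destruction rate: Table 1 fits for
# SH+ + H and SH+ + e-, with n_H = 1e6 cm-3, f(H2) = 2/3 (so n(H) = n_H/3)
# and n_e = 1.4e-4 n_H, as adopted in Sect. 4.

import math

def k_arrhenius(alpha, beta, gamma, T):
    """Rate coefficient k(T) = alpha (T/300 K)^beta exp(-gamma/T) [cm3 s-1]."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

T = 200.0                      # gas (= electron) temperature in K
n_H = 1.0e6                    # total density of H nuclei, cm-3
n_atomic_H = n_H / 3.0         # n(H) = n(H2) when f(H2) = 2/3
n_e = 1.4e-4 * n_H             # electrons from C+ ionization

k_H = k_arrhenius(1.86e-10, -0.41, 27.3, T)   # SH+ + H  -> S+ + H2
k_e = k_arrhenius(2.00e-07, -0.50, 0.0, T)    # SH+ + e- -> S + H

D = n_atomic_H * k_H + n_e * k_e   # ~1e-4 s-1
lifetime_h = 1.0 / D / 3600.0      # under 3 h, as stated in the text
```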
Figure 6: Non-LTE excitation models of SH+. The horizontal lines mark the
observed line intensities in the Orion Bar. Dotted curves are for a standard
model ($F=D=0$). Continuous curves are for a model that includes chemical
destruction by H atoms and $e^{-}$ (model $F,D$). Dashed lines are for a model
in which destruction rates are multiplied by ten (model $F,D$ $\times$10). The
vertical black line marks the best model.
Figure 6 shows results of several models: without formation pumping (dotted
curves for model “$F=D$ = 0”), adding formation pumping with SH+ destruction
by H and $e^{-}$ (continuous curves for model “$F,D$”), and using a factor of
ten higher SH+ destruction rates (simulating a dominant role of SH+
photodissociation or destruction by reactions with vibrationally excited H2;
dashed curves for model “$F,D$ $\times$10”). Since the formation of SH+ is
driven by reaction (1) when H2 molecules are in $v$ $\geq$ 2, here we adopted
$T_{\rm form}$ $\simeq$ $E$($v$ = 2, $J$ = 0) / $k$ $-$ 9860 K $\approx$ 2000
K. Because these are constant column density $N$(SH+) excitation and radiative
transfer models, we used a normalized formation rate $F$ = $\sum F_{i}$ that
assumes steady-state SH+ abundances consistent with the varying gas density in
each model. That is, $F$ = $\sum F_{i}$ = $x$(SH+) $n_{\rm H}$ $D$ $[\rm
cm^{-3}s^{-1}]$, where $x$ refers to the abundance with respect to H nuclei.
The detected SH+ rotational lines connect the fine-structure levels $N_{J}$ =
10-01 (345 GHz) and 12-01 (526 GHz). Upper limits also exist for the 11-01
(683 GHz) lines. SH+ critical densities ($n_{\rm cr}$ = $A_{ij}$ /
$\gamma_{ij}$) for inelastic collisions with H or H2 are of the same order and
equal to several 106 cm-3. As for many molecular ions (e.g., Desrousseaux et
al., 2021), SH+–H2 (and SH+–H) inelastic collisional rate coefficients are
large ($\gamma_{ij}$ $\gtrsim$ 10-10 cm3 s-1). Thus, collisions with H (at low
$A_{V}$) and H2 (at higher $A_{V}$) generally dominate over collisions with
electrons ($\gamma_{ij}$ of a few 10-7 cm3 s-1). At low densities (meaning
$n_{\rm H}$ $<$ $n_{\rm cr}$) formation pumping increases the population of
the higher-energy levels (and their $T_{\rm ex}$), but there are only minor
effects in the low-energy submillimeter lines. At high densities, $n_{\rm H}$
$>$ 107 cm-3, formation pumping with $T_{\rm form}$ = 2000 K produces lower
intensities in these lines because the lowest-energy levels ($E_{\rm u}/k$ $<$
$T_{\rm k}$ $<$ $T_{\rm form}$) are less populated.
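The critical-density estimate above can be put in numbers; the Einstein coefficient used below is an assumed illustrative value chosen so that $n_{\rm cr}$ lands at several 10^6 cm-3 for the $\gamma_{ij}$ quoted in the text, and is not taken from the paper:

```python
# Critical density n_cr = A_ij / gamma_ij: the density at which collisional
# de-excitation matches spontaneous decay. A is illustrative only.

def critical_density(A_ij, gamma_ij):
    """n_cr for one collision partner, both inputs in cgs (s-1, cm3 s-1)."""
    return A_ij / gamma_ij

A = 5.0e-4                                    # s-1, assumed for illustration
n_cr_neutrals = critical_density(A, 1e-10)    # collisions with H or H2
n_cr_electrons = critical_density(A, 3e-7)    # electrons: far lower n_cr
```

Electrons have much larger $\gamma_{ij}$ and hence a far lower critical density, but their tiny abundance still leaves H and H2 as the dominant collision partners.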
The best fit to the observed lines in model $F,D$ is for $N$(SH+) $\simeq$
1.1$\times$1013 cm-2, $n_{\rm H}$ $\simeq$ 3$\times$105 cm-3, and $T_{\rm k}$
$\simeq$ 200 K. This is shown by the vertical dotted line in Fig. 6. This
model is consistent with the upper limit intensity of the 683 GHz line (Nagy
et al., 2013). In this comparison, and following the morphology of the SH+
emission revealed by ALMA (Fig. 3), we corrected the intensities of the
SH+ lines detected by Herschel/HIFI with the beam coupling factors discussed
in Sect. 2.3. The observed 12-01/10-01 line ratio ($R$ =
$W$(526.048)/$W$(345.944) $\simeq$ 2) is sensitive to the gas density. In
these models, $R$ is 1.1 for $n_{\rm H}$ = 105 cm-3 and 3.0 for $n_{\rm H}$
=106 cm-3. We note that $n_{\rm H}$ could be lower if SH+
formation/destruction rates were faster, as in the $F,D$ $\times$10 model.
This could happen if SH+ photodissociation or destruction reactions with
H2($v$ $\geq$2) were faster than reactions of SH+ with H atoms or with
electrons. In Sec. 6 we show that this is not the case.
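A back-of-the-envelope check of this density sensitivity can be made by log-log interpolation between the two model values; the interpolation scheme is our illustration, not the paper's fitting method:

```python
# Log-log interpolation between the model line ratios (R = 1.1 at
# n_H = 1e5 cm-3 and R = 3.0 at 1e6 cm-3) places the observed R ~ 2 at a
# few 1e5 cm-3, consistent with the best-fit model. Illustrative only.

import math

def density_from_ratio(r_obs, n1=1.0e5, r1=1.1, n2=1.0e6, r2=3.0):
    """Interpolate n_H from the 1_2-0_1 / 1_0-0_1 line intensity ratio."""
    frac = math.log(r_obs / r1) / math.log(r2 / r1)
    return n1 * (n2 / n1) ** frac

n_est = density_from_ratio(2.0)   # a few 1e5 cm-3
```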
### 4.2 SH excitation and column density
SH is a ${}^{2}\Pi$ open-shell radical with fine-structure,
$\Lambda$-doubling, and hyperfine splittings (e.g., Martin-Drumel et al.,
2012). However, the frequency separation of the SH ${}^{2}\Pi_{3/2}$ $J$ =
5/2-3/2 hyperfine components is too small to be spectrally resolved in
observations of the Orion Bar (see Fig. 4). The available rate coefficients for
inelastic collisions of SH with helium atoms do not resolve the hyperfine
splittings. Hence, we first determined line frequencies, level degeneracies,
and Einstein coefficients of an SH molecule without hyperfine structure. To do
this, we took the complete set of hyperfine levels tabulated in CDMS. Lacking
specific inelastic collision rate coefficients, we scaled the available SH–He
rates of Kłos et al. (2009) by the square root of the reduced-mass ratios and
estimated the SH–H and SH–H2 collisional rates.
The scaled rate coefficients are about an order of magnitude smaller than
those of SH+. However, the chemical destruction rate of SH at the PDR edge
(reactions with H, photodissociation, and photoionization, see Sect. 6.1) is
also slower (we take the rates of SH–H reactive collisions from Zanchet et
al., 2019). We determine $D$ $\simeq$ 3$\times$10-6 s-1 for $n_{\rm H}$ = 106
cm-3, $T_{\rm k}$ = 200 K, and $A_{V}$ $\simeq$ 0.7 mag. Models in Fig. 7
include these chemical rates for $T_{\rm form}$ =$T_{\rm k}$ (a lower limit to
the unknown formation temperature). Formation pumping enhances the intensity
of the ${}^{2}\Pi_{3/2}$ $J$ = 5/2-3/2 ground-state lines by a few percent
only.
Figure 7: Non-LTE excitation models of SH emission lines targeted with
SOFIA/GREAT. Horizontal dashed lines refer to observational limits, assuming
extended emission (lower intensities) and for a 10′′ width emission filament
at the PDR surface (higher intensities).
To estimate the SH column density in the Orion Bar we compare with the upper
limit intensities of the SH lines targeted by SOFIA. If SH and SH+ arise from
roughly the same gas at similar physical conditions ($n_{\rm H}$ $\simeq$ 106
cm-3 and $T_{\rm k}$ $\simeq$ 200 K), the best-model column density is $N$(SH)
$\leq$ (0.6-1.6)$\times$1014 cm-2. If densities were lower, around $n_{\rm H}$
$\simeq$ 105 cm-3, the upper limit on $N$(SH) would be a factor of ten higher.
Figure 8: Non-LTE excitation models for $o$-H2S and $p$-H2S. Thin horizontal
lines show the observed intensities assuming either extended emission (lower
limit) or emission that fills the 15′′ beam at 168.7 GHz. The vertical line
marks the best model, resulting in an OTP ratio of 2.9 $\pm$ 0.3.
### 4.3 H2S excitation and column density
H2S has a $\tilde{X}\,^{1}A_{1}$ ground electronic state and two nuclear spin symmetries
that we treat separately, $o$-H2S and $p$-H2S. Previous studies of the H2S
line excitation have used collisional rate coefficients scaled from those of
the H2O–H2 system. Dagdigian (2020) recently carried out specific
calculations of the cross sections of $o$-H2S and $p$-H2S inelastic collisions
with $o$-H2 and $p$-H2 at different temperatures. The new and the scaled
rates behave differently, and the difference depends on the H2 OTP ratio
(i.e., on the gas temperature) because the collisional cross sections differ
for the $o$-H2–H2S and $p$-H2–H2S systems. At the warm temperatures of the PDR,
collisions with $o$-H2 dominate, resulting in rate coefficients for the
$\sim$168 GHz $o$-H2S line that are up to a factor of $\sim$2.5 smaller than
those scaled from H2O–H2.
H2S is not a reactive molecule. At the edge of the PDR its destruction is
driven by photodissociation. We determine that the radiative and collisional
pumping rates are typically a factor of $\sim$100 higher than $D$ $\approx$
2$\times$10-6 s-1 (for $n_{\rm H}$ = 106 cm-3, $T_{\rm k}$ = 200 K, $G_{0}$
$\simeq$104, and $A_{V}$ $\simeq$ 0.7 mag). Figure 8 shows non-LTE $o$-H2S and
$p$-H2S excitation and radiative transfer models. As H2S may have its
abundance peak deeper inside the PDR and display more extended emission than
SH+ (e.g., Sternberg & Dalgarno, 1995), we show results for $T_{\rm k}$ = 200
and 100 K. When comparing with the observed line intensities, we considered
either emission that fills all beams, or a correction that assumes that the
H2S emission only fills the 15′′ beam of the IRAM 30m telescope at 168 GHz.
The vertical dotted lines in Fig. 8 show the best model, $N$(H2S) =
$N$($o$-H2S)+$N$($p$-H2S) = 2.5$\times$1014 cm-2, with an OTP ratio of 2.9
$\pm$ 0.3, thus consistent with the high-temperature statistical ratio of 3/1
(see discussion at the end of Sect. 6.4). Models with lower densities, $n_{\rm
H}$ $\simeq$ 105 cm-3, show worse agreement, and would translate into even
higher $N$(H2S) of $\gtrsim$ 1015 cm-2. In either case, these calculations
imply large columns of warm H2S toward the PDR. They result in a limit to the
SH to H2S column density ratio of $\leq$ 0.2-0.6. This upper limit is already
lower than the $N$(SH)/$N$(H2S) = 1.1-3.0 ratios observed in diffuse clouds
(Neufeld et al., 2015). This difference suggests an enhanced H2S formation
mechanism in FUV-illuminated dense gas.
Figure 9: Minimum energy paths for reactions (1), (2), and (3). Points
correspond to RCCSD(T)-F12a calculations and lines to fits (Appendix A). The
reaction coordinate, $s$, is defined independently for each path. The
geometries of each species at $s$=0 are different.
## 5 New results on sulfur-hydride reactions
In this section we summarize the ab initio quantum calculations we carried out
to determine the vibrational-state-dependent rates of gas-phase reactions of
H2($v$ $>$ 0) with several S-bearing species. We recall that all hydrogen
abstraction reactions,
${\rm S}^{+}\xrightarrow[(1)]{+{\rm H_{2}}}{\rm SH}^{+}\xrightarrow[(2)]{+{\rm H_{2}}}{\rm H_{2}S^{+}}\xrightarrow[(3)]{+{\rm H_{2}}}{\rm H_{3}S^{+}},\qquad{\rm S}\xrightarrow[(4)]{+{\rm H_{2}}}{\rm SH},$
are very endoergic for H2 ($v$ = 0), with endothermicities in Kelvin units
that are significantly higher than $T_{\rm k}$ even in PDRs. This is markedly
different to O+ chemistry, for which all hydrogen abstraction reactions
leading to H3O+ are exothermic and fast (Gerin et al., 2010; Neufeld et al.,
2010; Hollenbach et al., 2012).
The endothermicity of reactions involving H$_{n}$S$^{+}$ ions decreases as the number of
hydrogen atoms increases. The potential energy surfaces (PES) of these
reactions possess shallow wells at the entrance and product channels (shown
in Fig. 9). In addition, these PESs show saddle points between the energy
walls of reactants and products whose heights increase with the number of H
atoms. For reaction (2), the saddle point has an energy of 0.6 eV
($\simeq$7,000 K) and is slightly below the energy of the products. However,
for reaction (3), the saddle point is above the energy of the products and is
a reaction barrier. These saddle points act as a bottleneck in the gas-phase
hydrogenation of S+.
If one considers the state-dependent reactivity of vibrationally excited H2,
the formation of SH+ through reaction (1) becomes exoergic when $v$ $\geq$ 2
(Zanchet et al., 2019); in terms of rovibrational levels, the reaction also
becomes exoergic for $v$ = 0, $J$ $\geq$ 11 and for $v$ = 1, $J$ $\geq$ 7.
The detection of bright H2S emission in the Orion Bar
(Figs. 1 and 4) might suggest that subsequent hydrogen abstraction reactions
with H2 ($v$ $\geq$ 2) proceed as well. Motivated by these findings, and
before carrying out any PDR model, we studied reaction (2) and the reverse
process in detail. This required building a full-dimensional quantum PES of
the H3S+ ($\tilde{X}\,^{1}A_{1}$) system (see Appendix A).
In addition, we studied reaction (4) (and its reverse) through quantum
calculations. Details of these ab initio calculations and of the resulting
reactive cross sections are given in Appendix B. Table 1 summarizes the
updated reaction rate coefficients that we will include later in our PDR
models.
Table 1: Relevant rate coefficients from a fit of the Arrhenius-like form
$k\,(T)$ = $\alpha\,(T/300\,{\rm K})^{\beta}\,{\rm exp}(-\gamma/T)$ to the
calculated reaction rates.
Reaction | $\alpha$ (cm3 s-1) | $\beta$ | $\gamma$ (K)
---|---|---|---
SH+ + H2 ($v$=1) $\rightarrow$ H2S+ + H | 4.97e-11 | 0 | 1973.4 (a)
SH+ + H2 ($v$=2) $\rightarrow$ H2S+ + H | 5.31e-10 | $-$0.17 | 0 (a)
SH+ + H2 ($v$=3) $\rightarrow$ H2S+ + H | 9.40e-10 | $-$0.16 | 0 (a)
SH+ + H $\rightarrow$ S+ + H2 | 1.86e-10 | $-$0.41 | 27.3 (b)
SH+ + $e^{-}$ $\rightarrow$ S + H | 2.00e-07 | $-$0.50 | (c)
H2S+ + H $\rightarrow$ SH+ + H2 | 6.15e-10 | $-$0.34 | 0 (a)
S + H2 ($v$=2) $\rightarrow$ SH + H | $\sim$8.6e-13 | $\sim$2.3 | $\sim$2500 (a)
S + H2 ($v$=3) $\rightarrow$ SH + H | $\sim$1.7e-12 | $\sim$2.0 | $\sim$1500 (a)
SH + H $\rightarrow$ S + H2 | 5.7e-13 | 2.48 | 1600 (a,†)
 | 7.7e-14 | 0.39 | $-$1.3 (a,†)
S+ + H2 ($v$=2) $\rightarrow$ SH+ + H | 2.88e-10 | $-$0.15 | 42.9 (b)
S+ + H2 ($v$=3) $\rightarrow$ SH+ + H | 9.03e-10 | $-$0.11 | 26.2 (b)
S+ + H2 ($v$=4) $\rightarrow$ SH+ + H | 1.30e-09 | $-$0.04 | 40.8 (b)
S+ + H2 ($v$=5) $\rightarrow$ SH+ + H | 1.21e-09 | 0.09 | 34.5 (b)
Notes: (a) This work. (b) From Zanchet et al. (2019). (c) From Prasad & Huntress
(1980). † Total rate is the sum of the two expressions.
The H2S+ formation rate through reaction (2) with H2 ($v$ = 0) is very slow.
For H2 ($v$ = 1), the rate constant increases at $\approx$ 500 K,
corresponding to the opening of the H2S+ + H threshold. For H2 ($v$ = 2) and
H2 ($v$ = 3), the reaction rate is much faster, close to the Langevin limit
(see Appendix A.2). However, our estimated vibrational-state specific rates
for SH formation through reaction (4) (S + H2) are considerably smaller than
for reactions (1) and (2), and show an energy barrier even for H2 ($v$ = 2)
and H2 ($v$ = 3). We anticipate that this reaction is not a relevant formation
route for SH.
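The vibrational threshold effect described above can be illustrated with the Table 1 fits for reaction (2); the evaluation is a sketch for illustration only:

```python
# At T_k = 200 K the SH+ + H2(v=2) rate is orders of magnitude faster than
# the v=1 rate, whose Table 1 fit still carries a ~2000 K activation term.

import math

def rate(alpha, beta, gamma, T):
    """Arrhenius-like fit of Table 1: k(T) = alpha (T/300 K)^beta exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

T = 200.0
k_v1 = rate(4.97e-11, 0.0, 1973.4, T)   # SH+ + H2(v=1) -> H2S+ + H
k_v2 = rate(5.31e-10, -0.17, 0.0, T)    # SH+ + H2(v=2) -> H2S+ + H
enhancement = k_v2 / k_v1               # several 1e5 at 200 K
```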
In FUV-illuminated environments, collisions with H atoms are very important
because they compete with electron recombination in destroying molecular
ions and also contribute to their excitation. An important result of our
calculations is that the destruction rates of H2S+ (SH+) in reactions with H
atoms are a factor of $\geq$ 3.5 ($\geq$ 1.7) faster (at $T_{\rm k}$ $\leq$
200 K) than those previously used in astrochemical models (Millar et al.,
1986). Conversely, we find that destruction of SH in reactions with H atoms
(Appendix B) is slower than previously assumed.
## 6 PDR models of S-bearing hydrides
We now investigate the chemistry of S-bearing hydrides and the effect of the
new reaction rates in PDR models adapted to the Orion Bar conditions. In this
analysis we used version 1.5.4 of the Meudon PDR code (Le Petit et al., 2006;
Bron et al., 2014). Following our previous studies, we model the Orion Bar as
a stationary PDR at constant thermal-pressure (i.e., with density and
temperature gradients). When compared to time-dependent hydrodynamic PDR
models (e.g., Hosokawa & Inutsuka, 2006; Bron et al., 2018; Kirsanova & Wiebe,
2019), stationary isobaric models provide a good description of the most exposed
and compressed gas layers of the PDR, from $A_{V}$ $\approx$ 0.5 to $\approx$
5 mag (Goicoechea et al., 2016; Joblin et al., 2018).
In our models, the FUV radiation field incident at the PDR edge is $G_{0}$ =
2$\times$104 (e.g., Marconi et al., 1998). We adopted an extinction-to-color-index
ratio, $R_{V}$ = $A_{V}$/$E_{B-V}$, of 5.5 (Joblin et al., 2018),
consistent with the flatter extinction curve observed in Orion (Lee, 1968;
Cardelli et al., 1989). This choice implies slightly more penetration of FUV
radiation into the cloud (e.g., Goicoechea & Le Bourlot, 2007). The main input
parameters and elemental abundances of these PDR models are summarized in
Table 2. Figure 10 shows the resulting H2, H, and electron density profiles,
as well as the $T_{\rm k}$ and $T_{\rm d}$ gradients.
Our chemical network is that of the Meudon code updated with the new reaction
rates listed in Table 1. This network includes updated photoreaction rates
from Heays et al. (2017). To increase the accuracy of our abundance
predictions, we included the explicit integration of wavelength-dependent SH,
SH+, and H2S photodissociation cross sections ($\sigma_{\rm diss}$), as well
as SH and H2S photoionization cross sections ($\sigma_{\rm ion}$). These cross
sections are shown in Fig. 23 of the Appendix. The integration is performed
over the specific FUV radiation field at each position of the PDR. In
particular, we took $\sigma_{\rm ion}$(SH) from Hrodmarsson et al. (2019) and
$\sigma_{\rm diss}$(H2S) from Zhou et al. (2020), both determined in
laboratory experiments. Figure 11 summarizes the relevant chemical network
that leads to the formation of S-bearing hydrides and that we discuss in the
following sections.
Table 2: Main parameters used in the PDR models of the Orion Bar.
Model parameter | Value | Note
---|---|---
FUV illumination, $G_{0}$ | 2$\times$104 Habing | (a)
Total depth $A_{\rm V}$ | 10 mag |
Thermal pressure $P_{\rm th}/k$ | 2$\times$108 cm-3 K |
Density $n_{\rm H}$ = $n$(H) + 2$n$(H2) | $n_{\rm H}$ = $P_{\rm th}\,/\,kT_{\rm k}$ | Varying
Cosmic-ray ionization rate $\zeta_{\rm CR}$ | 10-16 H2 s-1 | (b)
$R_{\rm V}$ = $A_{\rm V}$/$E_{\rm B-V}$ | 5.5 | Orion (c)
$M_{\rm gas}/M_{\rm dust}$ | 100 | Local ISM
Abundance O / H | 3.2$\times$10-4 |
Abundance C / H | 1.4$\times$10-4 | Orion (d)
Abundance S / H | 1.4$\times$10-5 | Solar (e)
Notes: (a) Marconi et al. (1998). (b) Indriolo et al. (2015). (c) Cardelli et
al. (1989). (d) Sofia et al. (2004). (e) Asplund et al. (2009).
Figure 10: Structure of an isobaric PDR representing the most FUV-irradiated
gas layers of the Orion Bar (see Table 2 for the adopted parameters). This
plot shows the H2, H, and electron density profiles (left axis scale), and the
gas and dust temperatures (right axis scale). Figure 11: Main gas and grain
reactions leading to the formation of sulfur hydrides. Red arrows represent
endoergic reactions (endothermicity given in units of K). Dashed arrows are
uncertain radiative associations (see Sect. A.3), $\gamma$ stands for a FUV
photon, and “s-” for solid.
### 6.1 Pure gas-phase PDR model results
Figure 12 shows results of the “new gas-phase” model using the reaction rates
in Table 1. The continuous curves display the predicted fractional abundance
profiles as a function of cloud depth in magnitudes of visual extinction
($A_{V}$). The dashed curves are for a model that uses the standard thermal
rates previously adopted in the literature (see, e.g., Neufeld et al., 2015).
As noted by Zanchet et al. (2013a, 2019), the inclusion of H2 ($v$ $\geq$ 2)
state-dependent quantum rates for reaction (1) enhances the formation of SH+
in a narrow layer at the edge of the PDR ($A_{V}$ $\simeq$ 0 to 2 mag). This
agrees with the morphology of the SH+ emission revealed by ALMA images (Fig.
3). For H2 ($v$ $=$ 2), the reaction rate enhancement with respect to the
thermal rate $\Delta k$ = $k_{2}(T)/k_{0}(T)$ (see discussion by Agúndez et
al., 2010) is about 4$\times$108 at $T_{\rm k}$ = 500 K (Millar et al., 1986).
Indeed, when the fractional abundance of H2 ($v$ = 2) with respect to H2 ($v$
= 0), defined as $f_{\,2}$ = $n$ (H2 $v$ = 2)/$n$ (H2 $v$ = 0), exceeds a few
times 10-9, meaning $\Delta k\cdot f_{\,2}>1$, reaction (1) with H2 ($v$
$\geq$ 2) dominates SH+ formation. This reaction enhancement takes place only
at the edge of the PDR, where FUV-pumped H2 ($v$ $\geq$ 2) molecules are
abundant enough (gray dashed curves in Fig. 12) and drive the formation of
SH+. The resulting SH+ column density increases by an order of magnitude
compared to models that use the thermal rate.
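The dominance criterion above can be evaluated directly. The following is an illustrative sketch (not the paper's code), using the quoted enhancement $\Delta k$ $\sim$ 4$\times$10$^{8}$ at $T_{\rm k}$ = 500 K:

```python
# Sketch of the dominance criterion for SH+ formation via vibrationally
# excited H2: reaction (1) with H2(v >= 2) wins when Delta_k * f_2 > 1.
# DELTA_K ~ 4e8 is the enhancement quoted in the text for T_k = 500 K.

DELTA_K = 4e8  # k_2(T) / k_0(T) at T_k = 500 K

def vibrational_route_dominates(f2: float) -> bool:
    """True if the H2(v >= 2) route dominates SH+ formation."""
    return DELTA_K * f2 > 1.0

# f_2 = n(H2, v=2) / n(H2, v=0): a few 1e-9 suffices at the FUV-pumped edge
print(vibrational_route_dominates(5e-9))   # True
print(vibrational_route_dominates(1e-10))  # False (deeper layers)
```

This makes explicit why the enhancement is confined to the PDR edge: only there does FUV pumping keep $f_{2}$ above a few 10-9.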
In this isobaric model, the SH+ abundance peak occurs at $A_{V}$ $\simeq$ 0.7
mag, where the gas density has increased from $n_{\rm H}$ $\simeq$
6$\times$104 cm-3 at the PDR edge (the IF) to $\sim$5$\times$105 cm-3 (at the
DF). At this point, SH+ destruction is dominated by recombination with
electrons and by reactive collisions with H atoms. This implies $D$(SH+)
$[$s-1$]$ $\sim$ $n_{e}$ $k_{e}$ $\simeq$ $n_{\rm H}$ $k_{\rm H}$ $\gg$ $n$(H2
$v$$\geq$ 2) $k_{2}$, as we assumed in the single-slab SH+ excitation models
(Sec. 4.1). Therefore, only a small fraction of SH+ molecules further react
with H2 ($v$ $\geq$ 2) to form H2S+. The resulting low H2S+ abundances limit
the formation of abundant SH from dissociative recombinations of H2S+ (recall
that we estimated that reaction S + H2 ($v$ $\geq$2) $\rightarrow$ SH + H is
very slow). The SH abundance peak is shifted deeper inside the cloud, at about
$A_{V}$ $\simeq$ 1.8 mag, where SH forms by dissociative recombination of H2S+
and it is destroyed by FUV photons and reactions with H atoms. In these gas-phase models the H2S abundance peaks even deeper inside the PDR, at $A_{V}$ $\simeq$ 5 mag, where it forms by recombinations of H2S+ and H3S+ with electrons as well as by charge exchange S + H2S+. However, the new rate of reaction H2S+ + H is higher than assumed in the past, so the new models predict lower H2S+ abundances at intermediate PDR depths (and thus less H3S+ and H2S; see Fig. 12).
Table 3: Column density predictions from different PDR models (up to $A_{V}$ =
10 mag) and estimated values from observations (single-slab approach).
| log $N$ (cm$^{-2}$) | | | | |
---|---|---|---|---|---
Type of PDR model | SH+ | SH | H2S | H2S+ | H3S+
Standard gas-phase | 11.0–12.2 | 11.4–12.5 | 11.3–12.4 | 9.9–11.1 | 7.8–9.0
New gas-phase (Table 1) | 12.1–13.2 | 11.4–12.5 | 10.6–11.7 | 9.9–11.0 | 7.7–8.9
Gas-grain (low $E_{\rm b}$, $\epsilon$ = 1%) | 12.0–13.2 | 13.2–14.4 | 12.9–14.1 | 9.6–10.7 | 10.1–11.2
Gas-grain (high $E_{\rm b}$, $\epsilon$ = 1%) | 12.0–13.1 | 13.6–14.8 | 13.7–14.8 | 9.9–11.0 | 10.8–12.0
Estimated from observations | $\sim$13.1 | $<$13.8 | $\sim$14.4 | – | $<$10.7
Notes. In each range, the lower value is for a face-on PDR and the upper value for an edge-on PDR with a tilt angle $\alpha$ = 4$^{\circ}$, which gives the maximum expected geometrical enhancement.
Figure 12: Pure gas-phase PDR models of the Orion Bar. Continuous curves show
fractional abundances as a function of cloud depth, in logarithm scale to
better display the irradiated edge of the PDR, using the new reaction rates
listed in Table 1. The gray dotted curve shows $f_{2}$, the fraction of H2
that is in vibrationally excited levels $v$ $\geq$ 2 (right axis scale).
Dashed curves are for a model using standard reaction rates.
The SH column density predicted by the new gas-phase model is below the upper
limit determined from SOFIA. However, the predicted H2S column density is much
lower than the value we derive from observations (Table 3) and the predicted
H2S line intensities are too faint (see Sect. 6.4).
Because the cross sections of the different H2S photodissociation channels
have different wavelength dependences (Zhou et al., 2020), the H2S and SH
abundances between $A_{V}$ $\approx$ 2 and 6 mag are sensitive to the specific
shape of the FUV radiation field (determined by line blanketing, dust
absorption, and grain scattering; e.g., Goicoechea & Le Bourlot, 2007). Still,
we checked that using steeper extinction curves does not increase H2S column
density any closer to the observed levels. This disagreement between the observationally inferred $N$(H2S) column density and the predictions of gas-phase PDR models (older gas-phase models also predicted low H2S column densities; Jansen et al., 1995; Sternberg & Dalgarno, 1995) is even worse if one considers the uncertain rates of the radiative association reactions S+ + H2 $\rightarrow$ H2S+ + $h\nu$ and SH+ + H2 $\rightarrow$ H3S+ + $h\nu$ included in the new gas-phase model. For the latter reaction, the main
problem is that the electronic states of the reactants do not correlate with
the ${}^{1}A_{1}$ ground electronic state of the activated complex H3S+∗
(denoted by $*$). Instead, H3S+∗ forms in an excited triplet state
(${}^{3}A$). Herbst et al. (1989) proposed that a spin-flip followed by a
radiative association can occur in interstellar conditions and form
H3S+∗($X^{1}A_{1}$) (Millar & Herbst, 1990). In Appendix A.3, we give
arguments against this mechanism. For similar reasons, Prasad & Huntress (1982) did not include the S+ + H2 radiative association in their models.
Removing these reactions in pure gas-phase models drastically decreases the
H2S+ and H3S+ abundances, and thus those of SH and H2S (by a factor of
$\sim$100 in these models). The alternative H2S+ formation route through
reaction SH+ \+ H2($v$ = 2) is only efficient at the PDR surface ($A_{V}$ $<$
1 mag). This is due to the large H2($v$ = 2) fractional abundances, $f_{2}$
$>$ 10-6 at $T_{k}$ $>$ 500 K, required to enhance the H2S+ production.
Therefore, and contrary to S+ destruction, reaction of SH+ with H2 is not the
dominant destruction pathway for SH+. Only deeper inside the PDR, reactions of
S with H${}_{3}^{+}$ produce small abundances of SH+ and H2S+, but the
hydrogenation of HnS+ ions is not efficient and limits the gas-phase
production of H2S.
### 6.2 Grain surface formation of solid H2S
Similarly to the formation of water ice (s-H2O) on grains (e.g., Hollenbach et
al., 2009, 2012), the formation of H2S may be dominated by grain surface
reactions followed by desorption back to the gas (e.g., Charnley, 1997).
Indeed, water vapor is relatively abundant in the Bar ($N$(H2O) $\approx$ 1015
cm-2; Choi et al., 2014; Putaud et al., 2019) and large-scale maps show that
the H2O abundance peaks close to cloud surfaces (Melnick et al., 2020).
To investigate the s-H2S formation on grains, we updated the chemical model by
allowing S atoms to deplete onto grains as the gas temperature drops inside
the molecular cloud (for the basic grain chemistry formalism, see, Hollenbach
et al., 2009). The timescale of this process ($\tau_{\rm gr,\,S}$) goes as
$x({\rm S})^{-1}$ $n_{\rm H}^{-1}\,T_{\rm k}^{-1/2}$, where $x$(S) is the
abundance of neutral sulfur atoms with respect to H nuclei. In a PDR, the abundance of H atoms is typically higher than that of S atoms (we only consider the depletion of neutral S atoms; S+ ions are expected to be more abundant than S atoms at the edge of the Orion Bar, at $A_{V}$ $\lesssim$ 2 mag, where $T_{\rm k}$ and $T_{\rm d}$ are too high, and the FUV radiation field too strong, to allow the formation of abundant grain mantles), and H atoms stick on grains more frequently than S atoms unless $x$(H) $<$ 0.18 $x$(S). An adsorbed H atom (s-H) is weakly bound, mobile, and can
diffuse throughout the grain surface until it finds an adsorbed S atom (s-S).
If the timescale for a grain to be hit by an H atom ($\tau_{\rm gr,\,H}$) is shorter than the timescale for an s-S atom to photodesorb ($\tau_{\rm photodes,\,S}$) or sublimate ($\tau_{\rm subl,\,S}$), then the reaction of s-H with
s-S will proceed and form a s-SH radical roughly upon “collision” and without
energy barriers (e.g., Tielens & Hagen, 1982; Tielens, 2010). Likewise, if
$\tau_{\rm gr,\,H}$ $<$ $\tau_{\rm photodes,\,SH}$ and $\tau_{\rm gr,\,H}$ $<$ $\tau_{\rm subl,\,SH}$, a newly adsorbed s-H atom can diffuse, find a grain site with an s-SH radical, and react without barriers to form s-H2S. In these
surface processes, a significant amount of S is ultimately transferred to
s-H2S (e.g., Vidal et al., 2017), which can subsequently desorb: thermally, by
FUV photons, or by cosmic rays. In addition, laboratory experiments show that
the excess energy of certain exothermic surface reactions can promote the
direct desorption of the product (Minissale et al., 2016). In particular,
reaction s-H + s-SH directly desorbs H2S with a maximum efficiency of $\sim$
60 % (as observed in experiments, Oba et al., 2018). Due to the high flux of
FUV photons in PDRs, chemical desorption may not always compete with
photodesorption. However, it can be a dominant process inside molecular clouds (Garrod et al., 2007; Esplugues et al., 2016; Vidal et al., 2017; Navarro-Almaida et al., 2020).
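The 0.18 factor quoted above follows from thermal speeds: the rate at which atoms strike a grain scales as $n\,\sqrt{T/m}$, so the mass ratio of H and S sets the abundance threshold. A minimal check (our illustration, not the paper's code):

```python
import math

# Grain collision rates scale as n * v_th, with thermal speed v_th ~ sqrt(T/m).
# For H and S at the same temperature, H atoms hit grains more often unless
# x(H)/x(S) falls below sqrt(m_H/m_S).

m_H, m_S = 1.008, 32.06  # atomic masses in amu

threshold = math.sqrt(m_H / m_S)
print(f"x(H)/x(S) threshold = {threshold:.2f}")  # 0.18, as quoted in the text
```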
The photodesorption timescale of an ice mantle is proportional to $Y^{-1}$
$G_{0}^{-1}$ exp (+$b$ $A_{V}$), where $Y$ is the photodesorption yield (the
number of desorbed atoms or molecules per incident photon) and $b$ is a dust-
related FUV field absorption factor. The timescale for mantle sublimation
(thermal desorption) goes as $\nu_{\rm ice}^{-1}$ exp (+$E_{\rm b}$ /
$k\,T_{\rm d}$), where $\nu_{\rm ice}$ is the characteristic vibrational
frequency of the solid lattice, $T_{\rm d}$ is the dust grain temperature, and
$E_{\rm b}/k$ is the adsorption binding energy of the species (in K). Binding
energies play a crucial role in model predictions because they determine the
freezing temperatures and sublimation timescales. Table 4 lists the $E_{\rm
b}/k$ and $Y$ values considered here.
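To illustrate how strongly the sublimation timescale depends on $E_{\rm b}$, here is a rough numerical sketch of the expression above ($\nu_{\rm ice}$ = 10$^{12}$ s$^{-1}$ is an assumed typical lattice frequency, not a value given in this section):

```python
import math

# Thermal desorption timescale tau_subl = nu_ice^-1 * exp(E_b / (k * T_d)),
# as given in the text. NU_ICE = 1e12 s^-1 is an assumed typical value.

NU_ICE = 1e12  # s^-1 (assumption)

def tau_subl(E_b_over_k: float, T_d: float) -> float:
    """Sublimation timescale in seconds, for E_b/k and T_d in kelvin."""
    return math.exp(E_b_over_k / T_d) / NU_ICE

# Binding energies for s-S from Table 4, at the T_d ~ 50 K of the Bar edge:
for E_b in (1100.0, 2600.0):
    print(f"E_b/k = {E_b:4.0f} K -> tau_subl = {tau_subl(E_b, 50.0):.1e} s")
```

With the low binding energy an s-S atom sublimates almost instantly at 50 K, whereas the high binding energy keeps it on the grain for $\sim$10$^{3}$ yr; this is why only the high-$E_{\rm b}$ network lets sulfur freeze out at the warm dust temperatures of the Bar.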
Figure 13: Representative timescales relevant to the formation of s-H2S and
s-H2O as well as their freeze-out depths. Upper panel: The continuous black
curve is the timescale for a grain to be hit by an H atom. Once in the grain
surface, the H atom diffuses and can react with an adsorbed S atom to form
s-SH. The dashed magenta curves show the timescale for thermal desorption of
an s-S atom ($E_{\rm b}/k$ (S) = 1100 K left curve, and 2600 K right curve)
and of an s-O atom (blue curve; $E_{\rm b}/k$ (O) = 1800 K). The gray dotted
curve is the photodesorption timescale of s-S. At $G_{0}$ values where the
continuous line is below the dashed and dotted lines, s-O and s-S atoms remain
on grain surfaces sufficiently long to combine with an adsorbed H atom and
form s-OH and s-SH (and then s-H2O and s-H2S). These timescales are for
$n_{\rm H}$ = 105 cm-3 and $n$(H) = 100 cm-3. Bottom panel: Freeze-out depth
at which most O and S are incorporated as s-H2O and s-H2S (assuming no
chemical desorption and $T_{\rm k}$ = $T_{\rm d}$).
Representative timescales of the basic grain processes described above are
summarized in the upper panel of Fig. 13. In this plot, $T_{\rm d}$ is a characteristic dust temperature inside the PDR, $T_{\rm d}$ = (3$\times$10$^{4}$ + 2$\times$10$^{3}$ $G_{0}^{1.2}$)$^{0.2}$ K, taken from Hollenbach et al. (2009). In the
upper panel, the continuous black curve is the timescale for a grain to be hit
by an H atom ($\tau_{\rm gr,\,H}$). The dashed magenta curves show the
timescale for thermal desorption of an s-S atom ($\tau_{\rm subl,\,S}$) (left
curve for $E_{\rm b}/k$ (S) = 1100 K and right curve for $E_{\rm b}/k$ (S) =
2600 K), and the same for an s-O atom (blue curve). The gray dotted curve is
the timescale for s-S atom photodesorption ($\tau_{\rm photodes,\,S}$) at
$A_{V}$ = 5 mag. At $G_{0}$ strengths where the continuous line is below the
dashed and dotted lines, an adsorbed s-S atom remains on the grain surface
sufficiently long to react with a diffusing s-H atom, form s-SH, and
ultimately s-H2S.
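As a quick consistency check of the dust temperature expression used in Fig. 13 (our sketch, not the paper's code):

```python
# Characteristic dust temperature inside the PDR, from the expression quoted
# in the text (Hollenbach et al. 2009):
#   T_d = (3e4 + 2e3 * G0**1.2)**0.2

def dust_temperature(G0: float) -> float:
    """Characteristic T_d in K for an FUV field of strength G0 (Habing units)."""
    return (3.0e4 + 2.0e3 * G0 ** 1.2) ** 0.2

print(f"T_d(G0 = 2e4) = {dust_temperature(2e4):.0f} K")  # ~49 K
```

For the Bar's $G_{0}$ = 2$\times$10$^{4}$ (Table 2) this gives $T_{\rm d}$ $\approx$ 49 K, close to the observed $\simeq$50 K at the edge of the Bar.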
Figure 13 shows that, if one takes $E_{\rm b}/k$ (S) = 1100 K (the most common
value in the literature; Hasegawa & Herbst, 1993), the formation of s-H2S is
possible inside clouds illuminated by modest FUV fields, when grains are
sufficiently cold ($T_{\rm d}$ $<$ 22 K). However, recent calculations of s-S
atoms adsorbed on water ice surfaces suggest higher binding energies ($\sim$
2600 K; Wakelam et al., 2017). This would imply that S atoms freeze at higher
$T_{\rm d}$ ($\lesssim$ 50 K) and that s-H2S mantles form in more strongly
illuminated PDRs (the observed $T_{\rm d}$ at the edge of the Bar is $\simeq$
50 K and decreases to $\simeq$35 K behind the PDR; see, Arab et al., 2012).
The freeze-out depth for sulfur in a PDR, the $A_{V}$ at which most sulfur is incorporated as S-bearing solids (s-H2S in our simple model), can be estimated by equating $\tau_{\rm gr,\,S}$ and $\tau_{\rm photodes,\,H_{2}S}$. This implicitly assumes that H2S chemical desorption does not dominate in FUV-irradiated regions, which is in line with the particularly large FUV
absorption cross section of s-H2S measured in laboratory experiments (Cruz-
Diaz et al., 2014). With these assumptions, the lower panel of Fig. 13 shows
the predicted s-H2S and s-H2O freeze-out depths. Owing to the lower abundance and higher atomic mass of sulfur atoms (i.e., grains are hit less frequently by S atoms than by O atoms), the H2S freeze-out depth lies slightly deeper than that
of water ice. For the FUV-illumination conditions in the Bar, the freeze-out
depth of sulfur is expected at $A_{V}$ $\gtrsim$ 6 mag. This implies that
photodesorption of s-H2S can produce enhanced abundances of gaseous H2S at
$A_{V}$ $<$ 6 mag.
Table 4: Adopted binding energies and photodesorption yields.
Species | $E_{\rm b}/k$ (K) | Yield (FUV photon)$^{-1}$
---|---|---
S | 1100 $(a)$ / 2600 $(b)$ | 10$^{-4}$
SH | 1500 $(a)$ / 2700 $(b)$ | 10$^{-4}$
H2S | 2700 $(b,c)$ | 1.2$\times$10$^{-3}$ $(g)$ (as H2S)
CO | 1300 $(d)$ | 3$\times$10$^{-3}$ $(h)$
O | 1800 $(e)$ | 10$^{-4}$ $(h)$
O2 | 1200 $(d)$ | 10$^{-3}$ $(h)$
OH | 4600 $(a)$ | 10$^{-3}$ $(h)$
H2O | 4800 $(f)$ | 10$^{-3}$ $(h)$ (as H2O)
| | 2$\times$10$^{-3}$ $(h)$ (as OH)
Notes. $(a)$ Hasegawa & Herbst (1993). $(b)$ Wakelam et al. (2017). $(c)$ Collings et al. (2004). $(d)$ Minissale et al. (2016). $(e)$ He et al. (2015). $(f)$ Sandford & Allamandola (1988). $(g)$ Fuente et al. (2017). $(h)$ See Hollenbach et al. (2009).
Figure 14: Gas-grain PDR models leading to the formation of s-H2S (shown as
black curves). Continuous colored curves show gas-phase fractional abundances
as a function of depth into the cloud. $\epsilon$ refers to the efficiency of
the chemical desorption reaction s-H + s-H2S $\rightarrow$ SH + H2 (see text).
Left panel: Gas-grain high $E_{\rm b}$ model (high adsorption binding energies
for S and SH, see Table 4). Right panel: Low $E_{\rm b}$ model.
FUV-irradiation and thermal desorption of H2S ice mantles have been studied in
the laboratory (e.g., Cruz-Diaz et al., 2014; Jiménez-Escobar & Muñoz Caro,
2011). These experiments show that pure s-H2S ices thermally desorb around 82
K, and at higher temperatures for H2S–H2O ice mixtures. These experiments
determine a photodesorption yield of $Y_{\rm H_{2}S}$ $\sim\,$1.2$\times$10-3
molecules per FUV photon (see also Fuente et al., 2017). Regarding surface
grain chemistry, experiments show that reaction s-H + s-SH $\rightarrow$ s-H2S
is exothermic (Oba et al., 2018), whereas reaction s-H + s-H2S, although it has an activation energy barrier of $\sim$1500 K, may directly desorb gaseous SH. Finally, reaction s-SH + s-SH $\rightarrow$ s-H2S2 may trigger the
formation of doubly sulfuretted species, but it requires mobile s-SH radicals
(e.g., Jiménez-Escobar & Muñoz Caro, 2011; Fuente et al., 2017). Here we will
only consider surface reactions with mobile s-H.
### 6.3 Gas-grain PDR model results
Here we show PDR model results in which we add a simple network of gas-grain
reactions for a small number of S-bearing (S, SH, and H2S) and O-bearing (O,
OH, H2O, O2, and CO) species. These species can adsorb on grains as
temperatures drop, photodesorb by FUV photons (stellar and secondary), desorb
by direct impact of cosmic-rays, or sublimate at a given PDR depth (depending
on $T_{\rm d}$ and on their $E_{\rm b}$). Grain size distributions ($n_{\rm
gr}$ $\propto$ $a^{-3.5}$, where $a$ is the grain radius) and gas-grain
reactions are treated within the Meudon code formalism (see, Le Petit et al.,
2006; Goicoechea & Le Bourlot, 2007; Le Bourlot et al., 2012; Bron et al.,
2014). As grain surface chemistry reactions we include s-H + s-X $\rightarrow$
s-XH and s-H + s-XH $\rightarrow$ s-H2X, where s-X refers to s-S and s-O. In
addition, we add the direct chemical desorption reaction s-H + s-SH
$\rightarrow$ H2S with an efficiency of 50 $\%$ per reactive event, and also
tested different efficiencies ($\epsilon$) for the chemical desorption process
s-H + s-H2S $\rightarrow$ SH + H2.
In our models we compute the relevant gas-grain timescales and atomic
abundances at every depth $A_{V}$ of the PDR. If the timescale for a grain to
be struck by an H atom ($\tau_{\rm gr,\,H}$) is shorter than the timescales to
sublimate or to photodesorb an s-X atom or an s-XH molecule, and if H atoms stick on grains more frequently than X atoms, we simply assume that these surface reactions proceed instantaneously. At $A_{V}$ larger than the freeze-out depth, this grain chemistry builds abundant s-H2O and s-H2S ice mantles.
Figure 14 shows results of two types of gas-grain models. The only difference
between them is the adopted adsorption binding energies for s-S and s-SH. The left panel is for a “high $E_{\rm b}$” model and the right panel for a “low $E_{\rm b}$” model (see Table 4). We note that these models do not include the gas-phase radiative association reactions S+ + H2 $\rightarrow$ H2S+ + $h\nu$ and SH+ + H2 $\rightarrow$ H3S+ + $h\nu$, although their effect is smaller than in pure gas-phase models.
The chemistry of the most exposed PDR surface layers ($A_{V}$ $\lesssim$ 2 mag) is the same as that of the gas-phase models discussed in Sect. 6.1.
Photodesorption keeps dust grains free of ice mantles, and fast gas-phase ion-
neutral reactions, photoreactions, and reactions with FUV-pumped H2 drive the
chemistry. The resulting SH+ abundance profile is nearly identical and there
is no need to invoke depletion of elemental sulfur from the gas-phase to
explain the observed SH+ emission (see Fig. 15). Beyond these first PDR
irradiated layers, the chemistry does change because the formation of s-H2S on
grains and subsequent desorption alters the chemistry of the other S-bearing
hydrides.
Figure 15: Line intensity predictions for different isobaric PDR models.
Calculations were carried out in a multi-slab Monte Carlo code (Sect. 4) that
uses the output of the PDR model. Blue stars show the line intensities
observed toward the Bar (corrected by beam dilution). Left panel: SH+ emission
models for PDRs of different $P_{\rm th}$ values and $\alpha$ = 5$^{\circ}$. Right
panel: SH and H2S (adopting an OTP ratio of 3) emission from: high $E_{\rm b}$
(magenta squares), low $E_{\rm b}$ (gray triangles), and gas-phase (cyan
circles) PDR models, all with $P_{\rm th}$ / $k$ = 2$\times$108 K cm-3. Upper
limit intensity predictions are for a PDR with an inclination angle of
$\alpha$ = 5$^{\circ}$ with respect to an edge-on geometry. Lower limit intensities refer to a face-on PDR model.
In model high $E_{\rm b}$, S atoms start to freeze out closer to the PDR edge
($T_{\rm d}$ $<$ 50 K). Because of the increasing densities and decreasing
temperatures, the s-H2S abundance with respect to H nuclei reaches $\sim$10-6
at $A_{V}$ $\simeq$ 4 mag. In model low $E_{\rm b}$, this level of s-H2S
abundance is only reached beyond an $A_{V}$ of 7 mag. At lower $A_{V}$, the
formation of s-H2S on bare grains and subsequent photodesorption produces more
H2S than pure gas-phase models, independently of whether H2S chemical desorption is included or not. In these intermediate PDR layers, at $A_{V}$
$\simeq$ 2-7 mag for the strong irradiation conditions in the Bar, the flux of
FUV photons drives much of the chemistry, desorbing grain mantles, preventing
complete freeze out, and dissociating the gas-phase products.
There are two H2S abundance peaks at $A_{V}$ $\simeq$ 4 and 7 mag. The H2S
abundance in these “photodesorption peaks” depends on the amount of s-H2S
mantles formed on grains and on the balance between s-H2S photodesorption and
H2S photodissociation (which now becomes the major source of SH). The enhanced
H2S abundance modifies the chemistry of H2S+ and H3S+ as well: H2S
photoionization (with a threshold at $\sim$10.4 eV) becomes the dominant
source of H2S+ at $A_{V}$ $\simeq$ 4 mag because the H2 ($v$ $\geq$2)
abundance is too low to make reaction (2) competitive. Besides, reactions of
H2S with abundant molecular ions such as HCO+, H${}_{3}^{+}$, and H3O+
dominate the H3S+ production.
Our gas-grain models predict that other S-bearing molecules, such as SO2 and
SO, can be the major sulfur reservoirs at these intermediate PDR depths.
However, their abundances strongly depend on those of O2 and OH through
reactions S + O2 $\rightarrow$ SO + O and SO + OH $\rightarrow$ SO2 \+ H (see
e.g., Sternberg & Dalgarno, 1995; Fuente et al., 2016, 2019). These reactions
link the chemistry of S- and O-bearing neutral molecules (Prasad & Huntress,
1982) and are an important sink of S atoms at $A_{V}$ $\gtrsim$ 5 mag.
However, while large column densities of OH have been detected in the Orion
Bar ($\gtrsim$ 1015 cm-2; Goicoechea et al., 2011), O2 remains undetected
despite deep searches (Melnick et al., 2012). Furthermore, the inferred upper
limit $N$(O2) columns are below the expectations of PDR models (Hollenbach et
al., 2009). This discrepancy likely implies that these gas-grain models miss
details of the grain surface chemistry leading to O2 (for other environments
and modeling approaches see, e.g., Ioppolo et al., 2008; Taquet et al., 2016).
Here we will not discuss SO2, SO, or O2 further.
At large cloud depths, $A_{V}$ $\gtrsim$ 8 mag, the FUV flux is largely
attenuated, temperatures drop, the chemistry becomes slower, and other
chemical processes dominate. The H2S abundance is controlled by the chemical
desorption reaction s-H + s-SH $\rightarrow$ H2S. This process keeps a floor
of detectable H2S abundances ($>$10-9) in regions shielded from stellar FUV
radiation. In addition, and although not energetically favorable, the chemical
desorption s-H + s-H2S $\rightarrow$ SH + H2 enhances the SH production at
large $A_{V}$ (the enhancement depends on the desorption efficiency
$\epsilon$), which in turn boosts the abundances of other S-bearing species,
including that of neutral S atoms.
The H2S abundances predicted by the high $E_{\rm b}$ model reproduce the H2S
line intensities observed in the Bar (Sect. 6.4). In this model s-H2S becomes
the main sulfur reservoir. However, we stress that here we do not consider the
formation of more complex S-bearing ices such as s-OCS, s-H2S2, s-Sn, s-SO2 or
s-HSO (Jiménez-Escobar & Muñoz Caro, 2011; Vidal et al., 2017; Laas & Caselli,
2019). Together with our steady-state solution of the chemistry, this implies
that our predictions are not precise deep inside the PDR. However, we recall
that our observations refer to the edge of the Bar, so it is not plausible
that the model conditions at $A_{V}$ $\gtrsim$ 8 mag represent the line of
sight we observe.
Model low $E_{\rm b}$ produces less H2S in the PDR layers below $A_{V}$
$\lesssim$ 8 mag because S atoms do not freeze until the dust temperature
drops deep inside the PDR. Even beyond these layers, thermal desorption of s-S
maintains higher abundances of S atoms at large depths. Indeed, model low
$E_{\rm b}$ predicts that the major sulfur reservoir deep inside the cloud is gas-phase atomic sulfur. This agrees with recent chemical models of cold dark clouds
(Vidal et al., 2017; Navarro-Almaida et al., 2020).
Figure 16: Constant density gas-grain PDR models using the high $E_{\rm b}$
chemical network and undepleted sulfur elemental abundances. Left panel:
Effects of changing the FUV radiation field. Right panel: Effects of varying
the gas density.
### 6.4 Line intensity comparison and H2S ortho-to-para ratio
We now specifically compare the SH+, SH, and H2S line intensities implied by
the different PDR models with the intensities observed toward the DF position
of the Bar. We used the output of the PDR models – $T_{\rm k}$, $T_{\rm d}$,
$n$(H2), $n$(H), $n_{e}$, $n$(SH+), $n$(SH), and $n$(H2S) profiles from
$A_{V}$ = 0 to 10 mag – as input for a multi-slab Monte Carlo model of their
line excitation, including formation pumping (formalism presented in Sect. 4)
and radiative transfer. As the Orion Bar is not perfectly edge-on, this comparison requires knowledge of the tilt angle ($\alpha$) with respect to a pure edge-on PDR. Different studies suggest $\alpha$ $\approx$ 5$^{\circ}$ (e.g., Jansen et al., 1995; Melnick et al., 2012; Andree-Labsch et al., 2017). This
inclination implies an increase in line-of-sight column density, compared to a
face-on PDR, by a geometrical factor (sin $\alpha$)-1. It also means that
optically thin lines are limb-brightened.
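The geometrical enhancement is simple to evaluate; a minimal sketch (our illustration, not the paper's code):

```python
import math

# Line-of-sight column density enhancement of a tilted, nearly edge-on PDR
# over the face-on case: a factor (sin alpha)^-1, as stated in the text.

def geometric_enhancement(alpha_deg: float) -> float:
    """Column density enhancement for a tilt angle alpha (in degrees)."""
    return 1.0 / math.sin(math.radians(alpha_deg))

for alpha in (4.0, 5.0):
    print(f"alpha = {alpha:.0f} deg -> factor = {geometric_enhancement(alpha):.1f}")
```

For $\alpha$ = 4$^{\circ}$ the factor is $\approx$14, that is $\approx$1.2 dex, consistent with the spread between the face-on and edge-on column densities in Table 3.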
The left panel of Fig. 15 shows SH+ line intensity predictions for isobaric
PDR models of different $P_{\rm th}$ values (leading to different $T_{\rm k}$
and $n_{\rm H}$ profiles). Since the bulk of the SH+ emission arises from the
PDR edge ($A_{V}$ $\simeq$ 0 to 2 mag) all models (gas-phase or gas-grain)
give similar results. The best fit is for $P_{\rm th}$ $\simeq$ (1–2)$\times$10$^{8}$ cm$^{-3}$ K and $\alpha$ $\simeq$ 5$^{\circ}$. These high pressures, at
least close to the DF, agree with those inferred from ALMA images of HCO+ ($J$
= 4-3) emission (Goicoechea et al., 2016), Herschel observations of high-$J$
CO lines (Joblin et al., 2018), and IRAM 30 m detections of carbon
recombination lines (Cuadrado et al., 2019).
The right panel of Fig. 15 shows SH and H2S line emission predictions for the high $E_{\rm b}$ gas-grain model (magenta squares), the low $E_{\rm b}$ gas-grain model (gray triangles), and a pure gas-phase model (cyan circles). For each model, the upper limit intensities refer to radiative transfer calculations with an inclination angle $\alpha$ = 5$^{\circ}$. The lower intensity limits refer to a face-on
PDR. Gas-phase models largely underestimate the observed H2S intensities.
Model low $E_{\rm b}$ produces higher H2S columns and brighter H2S lines, but
still below the observed levels (by up to a factor of ten). Model high $E_{\rm
b}$ provides good agreement with observations; the two possible inclinations bracket the observed intensities, and it should be considered the reference model of the Bar. It is also consistent with the observational SH upper
limits.
Our observations and models provide a (line-of-sight)
$N$($o$-H2S)/$N$($p$-H2S) OTP ratio of 2.9 $\pm$ 0.3, consistent with the
(gas-phase) high-temperature statistical equilibrium value. However, the cold
“nuclear-spin-temperatures” ($T_{\rm spin}$ $\ll$ $T_{\rm k}$; see definition
in eq. 16) implied by the low water vapor OTP ratios observed in some sources
($<$ 2.5) have been associated with the temperature of the ice mantles where
H2O molecules might have formed (i.e., $T_{\rm spin}$ $\simeq$ $T_{\rm d}$;
Mumma et al., 1987; Lis et al., 2013). In the case of H2S, our derived OTP ratio toward the DF position implies any $T_{\rm spin}$ above 30 $\pm$ 10 K (see Fig. 24). Hence, this temperature might also be compatible with s-H2S formation in warm grains if $T_{\rm spin}$ $\simeq$ $T_{\rm d}$ upon formation is preserved in the gas phase after photodesorption (e.g., Guzmán et al., 2013). We note that Crockett et al. (2014) inferred $N$($o$-H2S)/$N$($p$-H2S) = 2.5 $\pm$ 0.8 in the hot core of Orion KL using LTE rotational diagrams, although they favored an OTP ratio of 1.7 $\pm$ 0.8 based on the column density ratio of selected pairs of rotational levels with similar energies. This latter OTP ratio implies $T_{\rm spin}$(H2S) $\simeq$ 12 K (Fig. 24), perhaps related to much colder dust grains than in PDRs, or to colder gas conditions just before the hot core phase, so that reactive collisions did not have time to establish the statistical equilibrium value. The observed OTP ratios of H2CO, H2CS, and H2CCO in the Bar are also $\sim$3 (Cuadrado et al., 2017).
Interestingly, the H2O OTP ratio derived from observations of the Orion Bar is
2.8 $\pm$ 0.1 (Putaud et al., 2019) and implies $T_{\rm spin}$(H2O) = 35 $\pm$
2 K. This value is compatible with $T_{\rm spin}$(H2S) and might reflect the
similar $T_{\rm d}$ of the PDR layers where most s-H2O and s-H2S form and
photodesorb. Nevertheless, laboratory experiments have challenged this $T_{\rm
spin}$ $\simeq$ $T_{\rm d}$ association, at least for s-H2O: cold water ice
surfaces, at 10 K, photodesorb H2O molecules with an OTP ratio of $\sim$3
(Hama et al., 2016). Follow-up observations of $p$-H2S lines across the Bar
will allow us to study possible variations of the OTP ratio as $G_{0}$
diminishes and grains get colder.
### 6.5 Generalization to different $G_{0}$ and $n_{\rm H}$ conditions
In this section we generalize our results to a broader range of gas densities
and FUV illumination conditions (i.e., to clouds with different $G_{0}$ /
$n_{\rm H}$ ratios). We run several PDR models using the high $E_{\rm b}$ gas-
grain chemistry. The main difference compared to the Orion Bar models is that
here we model constant density clouds with standard interstellar grain
properties ($R_{V}$ = 3.1). Figure 16 (left panel) shows models of clouds with constant $n_{\rm H}$ = 10$^{4}$ cm$^{-3}$ and varying FUV radiation fields, while Fig. 16 (right panel) shows models of constant FUV illumination ($G_{0}$ = 100) and varying gas densities (in these models we consider undepleted [S/H] abundances and only the chemical desorption s-H + s-SH $\rightarrow$ H2S, with a 50 % efficiency). The main result of this study is the similar gas-phase
H2S column density (a few 1014 cm-2 up to $A_{V}$ = 10) and H2S abundance peak
(a few 10-8 close to the FUV-irradiated cloud edge) predicted by these models
nearly irrespective of $G_{0}$ and $n_{\rm H}$. A similar conclusion was
reached previously for water vapor in FUV-illuminated clouds (Hollenbach et
al., 2009, 2012). Increasing $G_{0}$ shifts the position of the H2S abundance
peak to larger $A_{V}$ until the rate of S atoms sticking on grains balances
the H2S photodissociation rate (the dominant H2S destruction mechanism except
in shielded gas; see also Fig. 13). Since s-H2S photodesorption and H2S
photodissociation rates depend on $G_{0}$, the peak H2S abundance in the PDR
is roughly the same independently of $G_{0}$. On the other hand, the formation
rate of s-H2S mantles depends on the product $n$(S) $n_{\rm gr}$ $\propto$
$n_{\rm H}^{2}$, whereas the H2S photodesorption rate depends on $n_{\rm gr}$
$\propto$ $n_{\rm H}$. Hence, the H2S abundance peak moves toward the cloud
surface for denser PDRs (like the Orion Bar). The exact abundance value
depends on the adopted grain-size distribution and on the H2S photodesorption
yield (which is well constrained by experiments; see, Cruz-Diaz et al., 2014;
Fuente et al., 2017).
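The scaling argument above can be sketched numerically. In this toy balance the photodissociation rate scales as $G_{0}\,e^{-\gamma A_{V}}\,n_{\rm H}$ while S sticking on grains scales as $n_{\rm H}^{2}$; every coefficient below is illustrative and is not taken from the paper's PDR models.

```python
import math

# Purely illustrative coefficients, not values from the PDR models.
GAMMA = 2.0      # FUV attenuation per magnitude of visual extinction
C_DISS = 1e-10   # photodissociation prefactor (rate scales with G0)
C_STICK = 1e-18  # sticking prefactor (rate scales as n_H**2)

def peak_av(g0, n_h):
    """Depth A_V where attenuated photodissociation (~ G0 e^{-gamma A_V} n_H)
    balances S sticking on grains (~ n_H**2): roughly the H2S abundance peak."""
    return max(0.0, math.log(C_DISS * g0 * n_h / (C_STICK * n_h**2)) / GAMMA)

def peak_abundance(g0, y_pd=1e-3, x_gr=1e-12, k_diss=1e-10):
    """Steady state at the peak: x(H2S) * k_diss * F_FUV = y_pd * F_FUV * x_gr.
    The local FUV flux (hence G0) cancels, so the peak abundance is flat."""
    return y_pd * x_gr / k_diss

# Larger G0 pushes the peak deeper; larger n_H pulls it to the surface.
print(peak_av(1e4, 1e4), peak_av(1e2, 1e4), peak_av(1e2, 1e5))
```

This reproduces only the qualitative trends of the text: the peak moves to larger $A_{V}$ with increasing $G_{0}$, moves toward the surface with increasing $n_{\rm H}$, and its value is roughly independent of $G_{0}$.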
The role of chemical desorption increases and can dominate beyond the
photodesorption peak as the flux of stellar FUV photons is attenuated. Here we
do not carry out an exhaustive study of this mechanism, which is hard to model
in full detail because its efficiency decreases considerably with the
properties of grain surfaces (bare vs. icy; see e.g., Minissale & Dulieu,
2014). In our models, and depending on $\zeta_{\rm CR}$, photodesorption by
secondary FUV photons can also be important in cloud interiors. These
processes limit the conversion of most of the sulfur reservoir into S-bearing
ices and increase the abundance of other gas-phase species deep inside clouds,
notably S atoms and H2S molecules.
The H2S abundance in shielded gas depends on the destruction rate by gas-phase
reactions different than photodissociation, in particular H2S reactions with
H${}_{3}^{+}$. The H${}_{3}^{+}$ abundance increases with $\zeta_{\rm CR}$ and
decreases with the electron density. Figure 16 (right) shows models of
constant $G_{0}$ and constant $\zeta_{\rm CR}$ in which the H2S abundance at
large depths increases with decreasing density (more penetration of FUV
photons, more ionization, more electrons, less H${}_{3}^{+}$). The lowest gas
density model, $n_{\rm H}$ = $10^{3}$ cm$^{-3}$, shows the highest H2S abundance at
large $A_{V}$. Because S freeze-out is less efficient at low densities, the
low-density model shows higher gas-phase S abundances at large depths, making
atomic S a dominant gas-phase sulfur reservoir. Unfortunately, direct
observation of atomic S in cold gas is complicated, which makes it difficult
to benchmark this prediction.
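The H${}_{3}^{+}$ dependence quoted here follows from a simple steady-state balance: formation by cosmic-ray ionization of H2 against destruction by dissociative recombination with electrons. A minimal sketch, with an illustrative recombination coefficient:

```python
K_REC = 1e-7  # illustrative dissociative-recombination coefficient, cm^3 s^-1

def n_h3p(zeta_cr, n_h2, n_e):
    """Steady-state H3+ density: zeta_CR * n(H2) = K_REC * n_e * n(H3+).
    H3+ rises linearly with zeta_CR and falls with the electron density,
    so deeper FUV penetration (more electrons) means less H3+."""
    return zeta_cr * n_h2 / (K_REC * n_e)
```

In the models of Fig. 16 this is why the lowest-density cloud, with the deepest FUV penetration and the highest fractional ionization, destroys H2S least efficiently at large $A_{V}$.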
In warm PDRs, in addition to S radio recombination lines (e.g., Smirnov et
al., 1995), the ${}^{3}P$ fine-structure lines of atomic sulfur, the [S i] 25,
56 $\upmu$m lines, can be interesting diagnostics of gas physical conditions
and of [S/H] abundances. Unfortunately, the low sensitivity of previous
infrared telescopes was not sufficient to detect the [S i] 25 $\upmu$m line
($\Delta E_{12}$ = 570 K) in the Orion Bar (Rosenthal et al., 2000); although
it is detected in protostellar outflows (e.g., Neufeld et al., 2009;
Goicoechea et al., 2012). Moreover, the ${}^{3}P_{2}$-${}^{1}D_{2}$ forbidden
line of atomic sulfur at 1.082 $\upmu$m can be an interesting tracer of the
ionization and dissociation fronts in PDRs. Some of these lines will be
accessible to high-angular-resolution and high-sensitivity observations with
JWST.
#### 6.5.1 The origin of H2S emission in other environments
Irrespective of $n_{\rm H}$ and $G_{0}$, grain surface formation of s-H2S and
photodesorption back to the gas-phase lead to H2S column densities of a few
$10^{14}$ cm$^{-2}$ in PDRs. This is in agreement with the observed column in the Bar
($G_{0}$ $\approx$ $10^{4}$) as well as at the mildly illuminated rims of the TMC-1 and
Barnard 1b clouds ($G_{0}$ $\approx$ 10; Navarro-Almaida et al., 2020). The
inferred H2S abundance in the shielded interior of these dark clouds ($A_{V}$
$>$ 10 mag) drops to a few $10^{-9}$, but the species clearly does not disappear
from the gas ($N$(H2S) of a few $10^{13}$ cm$^{-2}$; Navarro-Almaida et al., 2020).
Interestingly, in the Bar the H2S line emission at $\sim$168 GHz does not
decrease much behind the PDR either (Fig. 1), even though the flux of FUV
photons is largely attenuated compared to the irradiated PDR edge.
Although oxygen is $\sim$25 times more abundant than sulfur, the H2O to H2S
column density ratio in the Orion Bar PDR is only $\sim$5. This
similarity must also reflect the higher abundance of CO compared to that of CS.
Furthermore, the H2S column density in cold cores is strikingly similar to
that of water vapor (Caselli et al., 2010, 2012). This coincidence points to a
more efficient desorption mechanism of s-H2S compared to s-H2O in gas shielded
from stellar FUV photons. Navarro-Almaida et al. (2020) argue that chemical
desorption is able to reproduce the observed H2S abundance floor if the
efficiency of this process diminishes as ice grain mantles get thicker inside
cold dense cores.
Turning back to warmer star-forming environments, our predicted H2S abundance
in FUV-illuminated gas is comparable to that observed toward many hot cores
($\sim$$10^{-9}$–$10^{-8}$; van der Tak et al., 2003; Herpin et al., 2009). In these
massive protostellar environments, thermal desorption of icy mantles, suddenly
heated to $T_{\rm d}$ $\gtrsim$ 100 K by the luminosity of the embedded
massive protostar, drives the H2S production. Early in their evolution, young
hot cores ($\lesssim$ $10^{4}$ yr) can show even higher abundances of recently
desorbed H2S (before further chemical processing takes place in the gas-phase;
e.g., Charnley, 1997; Hatchell et al., 1998; Jiménez-Serra et al., 2012;
Esplugues et al., 2014). Indeed, Crockett et al. (2014) report a gas-phase
H2S abundance of several $10^{-6}$ toward the hot core in Orion KL. This high value
likely reflects the minimum abundance of s-H2S locked in ice mantles just
before thermal desorption. In addition, the H2S abundance in the Orion Bar is
only slightly lower than that inferred in protostellar outflows (several
$10^{-8}$). In these regions, fast shocks erode and sputter the grain mantles,
releasing a large fraction of their molecular content and activating a high-
temperature gas-phase chemistry that quickly reprocesses the gas (e.g.,
Holdship et al., 2019). All in all, it seems reasonable to conclude that
everywhere s-H2S grain mantles form, or already formed in a previous
evolutionary stage, emission lines from gas-phase H2S will be detectable.
In terms of its detectability with single-dish telescopes, H2S rotational
lines are bright in hot cores ($T_{\rm peak,\,168\,GHz}$ $\simeq$ 30 K in
Orion KL but $\simeq$ 1-3 K toward most hot cores; Tercero et al., 2010; van
der Tak et al., 2003; Herpin et al., 2009), in strongly irradiated PDRs
($\simeq$ 6 K, this work), and in lower-illumination PDRs such as the
Horsehead ($\simeq$ 1 K; Rivière-Marichalar et al., 2019). The H2S emission is
fainter toward cold dark clouds ($\simeq$ 0.2 K in TMC-1; Navarro-Almaida et
al., 2020) and protostellar outflows ($\simeq$ 0.6 K in L1157; Holdship et
al., 2019). These line intensity differences are mostly produced by different
gas physical conditions and not by enormous changes of the H2S abundance.
Finally, H2S is also detected outside the Milky Way (first by Heikkilä et
al., 1999). Lacking sufficient spatial resolution, it is more difficult to
determine the origin of the extragalactic H2S emission. The derived abundances
in starburst galaxies such as NGC 253 ($\sim$$10^{-9}$; Martín et al., 2006) might
be interpreted as arising from a collection of spatially unresolved hot cores
(Martín et al., 2011). However, hot cores have low filling factors at star-
forming cloud scales. Our study suggests that much of this emission can arise
from (the most common) extended molecular gas illuminated by stellar FUV
radiation (e.g., Goicoechea et al., 2019).
## 7 Summary and conclusions
We carried out a self-consistent observational and modeling study of the
chemistry of S-bearing hydrides in FUV-illuminated gas. We obtained the
following results:
– ALMA images of the Orion Bar show that SH+ is confined to narrow gas layers
of the PDR edge, close to the H2 dissociation front. Pointed observations
carried out with the IRAM 30m telescope show bright H${}_{2}^{32}$S,
H${}_{2}^{34}$S, H${}_{2}^{33}$S emission toward the PDR (but no H3S+, a key
gas precursor of H2S) as well as behind the Bar, where the flux of FUV photons
is largely attenuated. SOFIA observations provide tight limits to the SH
emission.
– The SH+ line emission arises from a high-pressure gas component, $P_{\rm
th}$ $\simeq$ (1–2)$\times$$10^{8}$ cm$^{-3}$ K, where SH+ ions are destroyed by
reactive collisions with H atoms and electrons (as most HnS+ ions do). We
derive $N$(SH+) $\simeq$ $10^{13}$ cm$^{-2}$ and an abundance peak of several
$10^{-9}$. H2S shows larger column densities toward the PDR, $N$(H2S) =
$N$($o$-H2S) + $N$($p$-H2S) $\simeq$ 2.5$\times$$10^{14}$ cm$^{-2}$. Our tentative
detection of SH translates into an upper limit column density ratio
$N$(SH)/$N$(H2S) of $<$ 0.2–0.6, already lower than the ratio of 1.1–3.0
observed in low-density diffuse molecular clouds (Neufeld et al., 2015). This
implies an enhanced H2S production mechanism in FUV-illuminated dense gas.
– All gas-phase reactions X + H2($v$=0) $\rightarrow$ XH + H (with X = S+, S,
SH+, or H2S+) are highly endoergic. While reaction of FUV-pumped H2($v$ $\geq$
2) molecules with S+ ions becomes exoergic and explains the observed levels of
SH+, further reactions of H2($v$ $\geq$ 2) with SH+ or with neutral S atoms,
both reactions studied here through ab initio quantum calculations, do not
form enough H2S+ or H3S+ to ultimately produce abundant H2S. In particular,
pure gas-phase models underestimate the H2S column density observed in the
Orion Bar by more than two orders of magnitude. This implies that these models
miss the main H2S formation route. The disagreement is even worse because,
after considering the potential energy surfaces of the H2S+∗ and H3S+∗
complexes, we favor the view that the radiative associations S+ \+ H2
$\rightarrow$ H2S+ \+ $h\nu$ and SH+ \+ H2 $\rightarrow$ H3S+ \+ $h\nu$ either
do not occur or are slower than the rates considered in the literature.
– To overcome these bottlenecks, we built PDR models that include a simple
network of gas-grain and grain surface reactions. The higher binding energies
of S and SH suggested by recent studies imply that bare grains start to grow
s-H2S mantles not far from the illuminated edges of molecular clouds. Indeed,
the observed $N$(H2S) in the Orion Bar can only be explained by the freeze-out
of S atoms, grain surface formation of s-H2S mantles, and subsequent
photodesorption back to the gas phase. The inferred H2S OTP ratio of 2.9 $\pm$
0.3 (equivalent to $T_{\rm spin}$ $\geq$ 30 K) is compatible with the high-
temperature statistical ratio as well as with warm grain surface formation if
$T_{\rm spin}$ $\simeq$ $T_{\rm d}$ and if $T_{\rm spin}$ is preserved in the
gas-phase after desorption.
– Comparing observations with chemical and excitation models, we conclude that
the SH+-emitting layers at the edge of the Orion Bar ($A_{V}$ $<$ 2 mag) are
characterized by no or very little depletion of sulfur from the gas-phase. At
intermediate PDR depths ($A_{V}$ $<$ 8 mag) the observed H2S column densities
do not require depletion of elemental (cosmic) sulfur abundances either.
– We conclude that everywhere s-H2S grain mantles form (or formed) gas-phase
H2S will be present in detectable amounts. Independently of $n_{\rm H}$ and
$G_{0}$, FUV-illuminated clouds produce roughly the same H2S column density (a
few $10^{14}$ cm$^{-2}$) and H2S peak abundances (a few $10^{-8}$). This agrees with the H2S
column densities derived in the Orion Bar and at the edges of mildly
illuminated clouds. Deep inside molecular clouds ($A_{V}$ $>$ 8 mag), H2S
still forms by direct chemical desorption and photodesorption by secondary FUV
photons. These processes alter the abundances of other S-bearing species and
make it difficult to predict the dominant sulfur reservoir in cloud interiors.
In this study we focused on S-bearing hydrides. Still, many subtle details
remain to be fully understood: radiative associations, electron
recombinations, and formation of multiply sulfuretted molecules. For example,
the low-temperature ($T_{\rm k}$ $<$ 1000 K) rates of the radiative and dielectronic
recombination of S+ used in PDR models may still not be accurate enough
(Badnell, 1991). In addition, the main ice-mantle sulfur reservoirs are not
fully constrained observationally. Thus, parts of this picture remain
speculative. Similarly, reactions of S+ with abundant organic molecules
desorbed from grains (such as s-H2CO, not considered in our study) may
contribute to enhance the H2S+ abundance through gas-phase reactions (e.g., S+
\+ H2CO $\rightarrow$ H2S+ \+ CO; Prasad & Huntress, 1982). Future
observations of the abundance and freeze out depths of the key ice carriers
with JWST will clearly help on these fronts.
###### Acknowledgements.
We warmly thank Prof. György Lendvay for interesting discussions and for
sharing the codes related to their S(${}^{3}P$) + H2(${}^{1}\Sigma_{g}^{+}$,v)
PES. We thank Paul Dagdigian, François Lique, and Alexandre Faure for sharing
their H2S–H2, SH+–H, and SH+–e- inelastic collisional rate coefficients and
for interesting discussions in Grenoble and Salamanca. We thank Helgi
Hrodmarsson for sending his experimental SH photoionization cross section in
tabulated format. We finally thank our referee, John H. Black, for encouraging
and insightful suggestions. This paper makes use of the ALMA data
ADS/JAO.ALMA#2012.1.00352.S. ALMA is a partnership of ESO (representing its
member states), NSF (USA), and NINS (Japan), together with NRC (Canada), and
NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint
ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. It also includes IRAM
30 m telescope observations. IRAM is supported by INSU/CNRS (France), MPG
(Germany), and IGN (Spain). We thank the staff at the IRAM 30m telescope and
the work of the USRA and NASA staff of the Armstrong Flight Research Center in
Palmdale and of the Ames Research Center in Mountain View (California), and
the Deutsches SOFIA Institut. We thank the Spanish MICIU for funding support
under grants AYA2016-75066-C2-2-P, AYA2017-85111-P, FIS2017-83473-C2
PID2019-106110GB-I00, and PID2019-106235GB-I00 and the French-Spanish
collaborative project PICS (PIC2017FR). We finally acknowledge computing time
at Finisterrae (CESGA) under RES grant ACCT-2019-3-0004.
## References
* Aguado et al. (2010) Aguado, A., Barragan, P., Prosmiti, R., et al. 2010, J. Chem. Phys., 133, 024306
* Aguado & Paniagua (1992) Aguado, A. & Paniagua, M. 1992, J. Chem. Phys., 96, 1265
* Aguado et al. (2001) Aguado, A., Tablero, C., & Paniagua, M. 2001, Comput. Phys. Comm., 134, 97
* Agúndez et al. (2010) Agúndez, M., Goicoechea, J. R., Cernicharo, J., Faure, A., & Roueff, E. 2010, ApJ, 713, 662
* Agúndez & Wakelam (2013) Agúndez, M. & Wakelam, V. 2013, Chemical Reviews, 113, 8710
* Allers et al. (2005) Allers, K. N., Jaffe, D. T., Lacy, J. H., Draine, B. T., & Richter, M. J. 2005, ApJ, 630, 368
* Anders & Grevesse (1989) Anders, E. & Grevesse, N. 1989, Geochim. Cosmochim. Acta., 53, 197
* Andree-Labsch et al. (2017) Andree-Labsch, S., Ossenkopf-Okada, V., & Röllig, M. 2017, A&A, 598, A2
* Anicich (2003) Anicich, V. G. 2003, JPL Publication 03-19, 1-1194
* Arab et al. (2012) Arab, H., Abergel, A., Habart, E., et al. 2012, A&A, 541, A19
* Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
* Azzam et al. (2013) Azzam, A. A. A., Yurchenko, S. N., Tennyson, J., Martin-Drumel, M.-A., & Pirali, O. 2013, J. Quant. Spec. Radiat. Transf., 130, 341
* Badnell (1991) Badnell, N. R. 1991, ApJ, 379, 356
* Bally (2008) Bally, J. 2008, Overview of the Orion Complex, ed. B. Reipurth, 459
* Bañares et al. (2003) Bañares, L., Aoiz, F. J., Honvault, P., Bussery-Honvault, B., & Launay, J.-M. 2003, J. Chem. Phys., 118, 565
* Bañares et al. (2004) Bañares, L., Aoiz, F. J., Honvault, P., & Launay, J.-M. 2004, J. Phys. Chem., 108, 1616
* Black (1998) Black, J. H. 1998, Faraday Discussions, 109, 257
* Bonnet & Rayez (1997) Bonnet, L. & Rayez, J.-C. 1997, Chem. Phys. Lett., 277, 183
* Bonnet & Rayez (2004) Bonnet, L. & Rayez, J.-C. 2004, Chem. Phys. Lett., 397, 106
* Brittain et al. (2020) Brittain, A., Coolbroth, K., & Boogert, A. 2020, in American Astronomical Society Meeting Abstracts, Vol. 236, American Astronomical Society Meeting Abstracts #236, 247.08
* Bron et al. (2018) Bron, E., Agúndez, M., Goicoechea, J. R., & Cernicharo, J. 2018, ArXiv e-prints
* Bron et al. (2014) Bron, E., Le Bourlot, J., & Le Petit, F. 2014, A&A, 569, A100
* Buckinghan (1967) Buckinghan, A. D. 1967, Adv. Chem. Phys., 12, 107
* Burton et al. (1990) Burton, M. G., Hollenbach, D. J., & Tielens, A. G. G. M. 1990, ApJ, 365, 620
* Calmonte et al. (2016) Calmonte, U., Altwegg, K., Balsiger, H., et al. 2016, MNRAS, 462, S253
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
* Caselli et al. (2012) Caselli, P., Keto, E., Bergin, E. A., et al. 2012, ApJ, 759, L37
* Caselli et al. (2010) Caselli, P., Keto, E., Pagani, L., et al. 2010, A&A, 521, L29
* Charnley (1997) Charnley, S. B. 1997, ApJ, 481, 396
* Choi et al. (2014) Choi, Y., van der Tak, F. F. S., Bergin, E. A., & Plume, R. 2014, A&A, 572, L10
* Collings et al. (2004) Collings, M. P., Anderson, M. A., Chen, R., et al. 2004, MNRAS, 354, 1133
* Crockett et al. (2014) Crockett, N. R., Bergin, E. A., Neill, J. L., et al. 2014, ApJ, 781, 114
* Cruz-Diaz et al. (2014) Cruz-Diaz, G. A., Muñoz Caro, G. M., Chen, Y. J., & Yih, T. S. 2014, A&A, 562, A119
* Cuadrado et al. (2017) Cuadrado, S., Goicoechea, J. R., Cernicharo, J., et al. 2017, A&A, 603, A124
* Cuadrado et al. (2015) Cuadrado, S., Goicoechea, J. R., Pilleri, P., et al. 2015, A&A, 575, A82
* Cuadrado et al. (2016) Cuadrado, S., Goicoechea, J. R., Roncero, O., et al. 2016, A&A, 596, L1
* Cuadrado et al. (2019) Cuadrado, S., Salas, P., Goicoechea, J. R., et al. 2019, A&A, 625, L3
* Dagdigian (2019) Dagdigian, P. J. 2019, MNRAS, 487, 3427
* Dagdigian (2019) Dagdigian, P. J. 2019, J. Chem. Phys., 150, 084308
* Dagdigian (2020) Dagdigian, P. J. 2020, MNRAS, 494, 5239
* Dartois (2005) Dartois, E. 2005, Space Sci. Rev., 119, 293
* Davidson (1975) Davidson, E. R. 1975, J. Comp. Phys., 17, 87
* de Graauw et al. (2010) de Graauw, T., Helmich, F. P., Phillips, T. G., et al. 2010, A&A, 518, L6
* Desrousseaux et al. (2021) Desrousseaux, B., Lique, F., Goicoechea, J. R., Quintas-Sánchez, E., & Dawes, R. 2021, A&A, 645, A8
* Endres et al. (2016) Endres, C. P., Schlemmer, S., Schilke, P., Stutzki, J., & Müller, H. S. P. 2016, Journal of Molecular Spectroscopy, 327, 95
* Esplugues et al. (2016) Esplugues, G. B., Cazaux, S., Meijerink, R., Spaans, M., & Caselli, P. 2016, A&A, 591, A52
* Esplugues et al. (2014) Esplugues, G. B., Viti, S., Goicoechea, J. R., & Cernicharo, J. 2014, A&A, 567, A95
* Farah et al. (2012) Farah, K., Muller-Plathe, F., & Bohm, M. C. 2012, Chem. Phys. Chem., 13, 1127
* Faure et al. (2017) Faure, A., Halvick, P., Stoecklin, T., et al. 2017, MNRAS, 469, 612
* Freeman & Williams (1982) Freeman, A. & Williams, D. A. 1982, Ap&SS, 83, 417
* Fuente et al. (2016) Fuente, A., Cernicharo, J., Roueff, E., et al. 2016, A&A, 593, A94
* Fuente et al. (2017) Fuente, A., Goicoechea, J. R., Pety, J., et al. 2017, ApJ, 851, L49
* Fuente et al. (2019) Fuente, A., Navarro, D. G., Caselli, P., et al. 2019, A&A, 624, A105
* Fuente et al. (2003) Fuente, A., Rodrıguez-Franco, A., Garcıa-Burillo, S., Martın-Pintado, J., & Black, J. H. 2003, A&A, 406, 899
* Garrod et al. (2007) Garrod, R. T., Wakelam, V., & Herbst, E. 2007, A&A, 467, 1103
* Genzel & Stutzki (1989) Genzel, R. & Stutzki, J. 1989, ARA&A, 27, 41
* Gerin et al. (2010) Gerin, M., de Luca, M., Black, J., et al. 2010, A&A, 518, L110
* Gerin et al. (2016) Gerin, M., Neufeld, D. A., & Goicoechea, J. R. 2016, ARA&A, 54, 181
* Gibb et al. (2004) Gibb, E. L., Whittet, D. C. B., Boogert, A. C. A., & Tielens, A. G. G. M. 2004, ApJS, 151, 35
* Godard & Cernicharo (2013) Godard, B. & Cernicharo, J. 2013, A&A, 550, A8
* Godard et al. (2012) Godard, B., Falgarone, E., Gerin, M., et al. 2012, A&A, 540, A87
* Godard et al. (2014) Godard, B., Falgarone, E., & Pineau des Forêts, G. 2014, A&A, 570, A27
* Goicoechea et al. (2012) Goicoechea, J. R., Cernicharo, J., Karska, A., et al. 2012, A&A, 548, A77
* Goicoechea et al. (2017) Goicoechea, J. R., Cuadrado, S., Pety, J., et al. 2017, A&A, 601, L9
* Goicoechea et al. (2011) Goicoechea, J. R., Joblin, C., Contursi, A., et al. 2011, A&A, 530, L16
* Goicoechea & Le Bourlot (2007) Goicoechea, J. R. & Le Bourlot, J. 2007, A&A, 467, 1
* Goicoechea et al. (2020) Goicoechea, J. R., Pabst, C. H. M., Kabanovic, S., et al. 2020, A&A, 639, A1
* Goicoechea et al. (2016) Goicoechea, J. R., Pety, J., Cuadrado, S., et al. 2016, Nature, 537, 207
* Goicoechea et al. (2009) Goicoechea, J. R., Pety, J., Gerin, M., Hily-Blant, P., & Le Bourlot, J. 2009, A&A, 498, 771
* Goicoechea et al. (2006) Goicoechea, J. R., Pety, J., Gerin, M., et al. 2006, A&A, 456, 565
* Goicoechea et al. (2019) Goicoechea, J. R., Santa-Maria, M. G., Bron, E., et al. 2019, A&A, 622, A91
* Gómez-Carrasco & Roncero (2006) Gómez-Carrasco, S. & Roncero, O. 2006, J. Chem. Phys., 125, 054102
* Graedel et al. (1982) Graedel, T. E., Langer, W. D., & Frerking, M. A. 1982, ApJS, 48, 321
* Grozdanov & Solov’ev (1982) Grozdanov, T. P. & Solov’ev, E. A. 1982, J. Phys. B, 15, 1195
* Guzmán et al. (2013) Guzmán, V. V., Goicoechea, J. R., Pety, J., et al. 2013, A&A, 560, A73
* Habart et al. (2010) Habart, E., Dartois, E., Abergel, A., et al. 2010, A&A, 518, L116
* Habing (1968) Habing, H. J. 1968, Bull. Astron. Inst. Netherlands, 19, 421
* Hama et al. (2016) Hama, T., Kouchi, A., & Watanabe, N. 2016, Science, 351, 65
* Hamilton et al. (2018) Hamilton, J. R., Faure, A., & Tennyson, J. 2018, MNRAS, 476, 2931
* Hasegawa & Herbst (1993) Hasegawa, T. I. & Herbst, E. 1993, MNRAS, 261, 83
* Hatchell et al. (1998) Hatchell, J., Thompson, M. A., Millar, T. J., & MacDonald, G. H. 1998, A&A, 338, 713
* He et al. (2015) He, J., Shi, J., Hopkins, T., Vidali, G., & Kaufman, M. J. 2015, ApJ, 801, 120
* Heays et al. (2017) Heays, A. N., Bosman, A. D., & van Dishoeck, E. F. 2017, A&A, 602, A105
* Heikkilä et al. (1999) Heikkilä, A., Johansson, L. E. B., & Olofsson, H. 1999, A&A, 344, 817
* Herbst et al. (1989) Herbst, E., DeFrees, D. J., & Koch, W. 1989, MNRAS, 237, 1057
* Herpin et al. (2009) Herpin, F., Marseille, M., Wakelam, V., Bontemps, S., & Lis, D. C. 2009, A&A, 504, 853
* Heyminck et al. (2012) Heyminck, S., Graf, U. U., Güsten, R., et al. 2012, A&A, 542, L1
* Hogerheijde et al. (1995) Hogerheijde, M. R., Jansen, D. J., & van Dishoeck, E. F. 1995, A&A, 294, 792
* Holdship et al. (2019) Holdship, J., Jimenez-Serra, I., Viti, S., et al. 2019, ApJ, 878, 64
* Hollenbach et al. (2009) Hollenbach, D., Kaufman, M. J., Bergin, E. A., & Melnick, G. J. 2009, ApJ, 690, 1497
* Hollenbach et al. (2012) Hollenbach, D., Kaufman, M. J., Neufeld, D., Wolfire, M., & Goicoechea, J. R. 2012, ApJ, 754, 105
* Hollenbach & Tielens (1997) Hollenbach, D. J. & Tielens, A. G. G. M. 1997, ARA&A, 35, 179
* Hosokawa & Inutsuka (2006) Hosokawa, T. & Inutsuka, S.-i. 2006, ApJ, 646, 240
* Howk et al. (2006) Howk, J. C., Sembach, K. R., & Savage, B. D. 2006, ApJ, 637, 333
* Hrodmarsson et al. (2019) Hrodmarsson, H. R., Garcia, G. A., Nahon, L., Loison, J.-C., & Gans, B. 2019, Physical Chemistry Chemical Physics (Incorporating Faraday Transactions), 21, 25907
* Indriolo et al. (2015) Indriolo, N., Neufeld, D. A., Gerin, M., et al. 2015, ApJ, 800, 40
* Ioppolo et al. (2008) Ioppolo, S., Cuppen, H. M., Romanzin, C., van Dishoeck, E. F., & Linnartz, H. 2008, ApJ, 686, 1474
* Jansen et al. (1995) Jansen, D. J., Spaans, M., Hogerheijde, M. R., & van Dishoeck, E. F. 1995, A&A, 303, 541
* Jiménez-Escobar & Muñoz Caro (2011) Jiménez-Escobar, A. & Muñoz Caro, G. M. 2011, A&A, 536, A91
* Jiménez-Serra et al. (2012) Jiménez-Serra, I., Zhang, Q., Viti, S., Martín-Pintado, J., & de Wit, W. J. 2012, ApJ, 753, 34
* Joblin et al. (2018) Joblin, C., Bron, E., Pinto, C., et al. 2018, A&A, 615, A129
* Johnson (1987) Johnson, B. R. 1987, J. Chem. Phys., 86, 1445
* Kaplan et al. (2017) Kaplan, K. F., Dinerstein, H. L., Oh, H., et al. 2017, ApJ, 838, 152
* Karplus et al. (1965) Karplus, M., Porter, R. N., & Sharma, R. D. 1965, J. Chem. Phys., 43, 3259
* Kirsanova & Wiebe (2019) Kirsanova, M. S. & Wiebe, D. S. 2019, MNRAS, 486, 2525
* Klisch et al. (1996) Klisch, E., Klaus, T., Belov, S. P., et al. 1996, ApJ, 473, 1118
* Kłos et al. (2009) Kłos, J., Lique, F., & Alexander, M. H. 2009, Chemical Physics Letters, 476, 135
* Knizia et al. (2009) Knizia, G., Adler, T. B., & Werner, H. J. 2009, J. Chem. Phys., 130, 054104
* Laas & Caselli (2019) Laas, J. C. & Caselli, P. 2019, A&A, 624, A108
* Le Bourlot et al. (2012) Le Bourlot, J., Le Petit, F., Pinto, C., Roueff, E., & Roy, F. 2012, A&A, 541, A76
* Le Petit et al. (2006) Le Petit, F., Nehmé, C., Le Bourlot, J., & Roueff, E. 2006, ApJS, 164, 506
* Lee (1968) Lee, T. A. 1968, ApJ, 152, 913
* Leurini et al. (2006) Leurini, S., Rolffs, R., Thorwirth, S., et al. 2006, A&A, 454, L47
* Levine & Bernstein (1987) Levine, R. D. & Bernstein, R. B. 1987, Molecular Reaction Dynamics and Chemical Reactivity (Oxford University Press)
* Lique et al. (2020) Lique, F., Zanchet, A., Bulut, N., Goicoechea, J. R., & Roncero, O. 2020, A&A, 638, A72
* Lis et al. (2013) Lis, D. C., Bergin, E. A., Schilke, P., & van Dishoeck, E. F. 2013, Journal of Physical Chemistry A, 117, 9661
* Lucas & Liszt (2002) Lucas, R. & Liszt, H. S. 2002, A&A, 384, 1054
* Maiti et al. (2004) Maiti, B., Schatz, G. C., & Lendvay, G. 2004, Journal of Physical Chemistry A, 108, 8772
* Marconi et al. (1998) Marconi, A., Testi, L., Natta, A., & Walmsley, C. M. 1998, A&A, 330, 696
* Martín et al. (2011) Martín, S., Krips, M., Martín-Pintado, J., et al. 2011, A&A, 527, A36
* Martín et al. (2006) Martín, S., Mauersberger, R., Martín-Pintado, J., Henkel, C., & García-Burillo, S. 2006, ApJS, 164, 450
* Martin-Drumel et al. (2012) Martin-Drumel, M. A., Eliet, S., Pirali, O., et al. 2012, Chemical Physics Letters, 550, 8
* Melnick et al. (2012) Melnick, G. J., Tolls, V., Goldsmith, P. F., et al. 2012, ApJ, 752, 26
* Melnick et al. (2020) Melnick, G. J., Tolls, V., Snell, R. L., et al. 2020, ApJ, 892, 22
* Menten et al. (2011) Menten, K. M., Wyrowski, F., Belloche, A., et al. 2011, A&A, 525, A77
* Millar et al. (1986) Millar, T. J., Adams, N. G., Smith, D., Lindinger, W., & Villinger, H. 1986, MNRAS, 221, 673
* Millar & Herbst (1990) Millar, T. J. & Herbst, E. 1990, A&A, 231, 466
* Minissale & Dulieu (2014) Minissale, M. & Dulieu, F. 2014, J. Chem. Phys., 141, 014304
* Minissale et al. (2016) Minissale, M., Dulieu, F., Cazaux, S., & Hocuk, S. 2016, A&A, 585, A24
* Mumma et al. (1987) Mumma, M. J., Weaver, H. A., & Larson, H. P. 1987, A&A, 187, 419
* Nagy & Lendvay (2017) Nagy, T. & Lendvay, G. 2017, J. Phys. Chem. Lett., 8, 4621
* Nagy et al. (2017) Nagy, Z., Choi, Y., Ossenkopf-Okada, V., et al. 2017, A&A, 599, A22
* Nagy et al. (2013) Nagy, Z., Van der Tak, F. F. S., Ossenkopf, V., et al. 2013, A&A, 550, A96
* Navarro-Almaida et al. (2020) Navarro-Almaida, D., Le Gal, R., Fuente, A., et al. 2020, A&A, 637, A39
* Neufeld et al. (2012) Neufeld, D. A., Falgarone, E., Gerin, M., et al. 2012, A&A, 542, L6
* Neufeld et al. (2015) Neufeld, D. A., Godard, B., Gerin, M., et al. 2015, A&A, 577, A49
* Neufeld et al. (2010) Neufeld, D. A., Goicoechea, J. R., Sonnentrucker, P., et al. 2010, A&A, 521, L10
* Neufeld et al. (2009) Neufeld, D. A., Nisini, B., Giannini, T., et al. 2009, ApJ, 706, 170
* Oba et al. (2018) Oba, Y., Tomaru, T., Lamberts, T., Kouchi, A., & Watanabe, N. 2018, Nature Astronomy, 2, 228
* O’Dell (2001) O’Dell, C. R. 2001, ARA&A, 39, 99
* Pabst et al. (2019) Pabst, C., Higgins, R., Goicoechea, J. R., et al. 2019, Nature, 565, 618
* Pabst et al. (2020) Pabst, C. H. M., Goicoechea, J. R., Teyssier, D., et al. 2020, A&A, 639, A2
* Palumbo et al. (1997) Palumbo, M. E., Geballe, T. R., & Tielens, A. G. G. M. 1997, ApJ, 479, 839
* Pankonin & Walmsley (1978) Pankonin, V. & Walmsley, C. M. 1978, A&A, 64, 333
* Parikka et al. (2017) Parikka, A., Habart, E., Bernard-Salas, J., et al. 2017, A&A, 599, A20
* Pellegrini et al. (2009) Pellegrini, E. W., Baldwin, J. A., Ferland, G. J., Shaw, G., & Heathcote, S. 2009, ApJ, 693, 285
* Peterson et al. (2008) Peterson, K. A., Adler, T. B., & Werner, H. J. 2008, J. Chem. Phys., 128, 084102
* Pineau des Forets et al. (1986) Pineau des Forets, G., Flower, D. R., Hartquist, T. W., & Dalgarno, A. 1986, MNRAS, 220, 801
* Prasad & Huntress (1980) Prasad, S. S. & Huntress, W. T., J. 1980, ApJS, 43, 1
* Prasad & Huntress (1982) Prasad, S. S. & Huntress, W. T., J. 1982, ApJ, 260, 590
* Putaud et al. (2019) Putaud, T., Michaut, X., Le Petit, F., Roueff, E., & Lis, D. C. 2019, A&A, 632, A8
* Qu & Bowman (2016) Qu, C. & Bowman, J. M. 2016, J. Phys. Chem. A, 120, 4988
* Rivière-Marichalar et al. (2019) Rivière-Marichalar, P., Fuente, A., Goicoechea, J. R., et al. 2019, A&A, 628, A16
* Roelfsema et al. (2012) Roelfsema, P. R., Helmich, F. P., Teyssier, D., et al. 2012, A&A, 537, A17
* Roncero et al. (2018) Roncero, O., Zanchet, A., & Aguado, A. 2018, Phys. Chem. Chem. Phys., 20, 25951
* Rosenthal et al. (2000) Rosenthal, D., Bertoldi, F., & Drapatz, S. 2000, A&A, 356, 705
* Sandford & Allamandola (1988) Sandford, S. A. & Allamandola, L. J. 1988, Icarus, 76, 201
* Sanz-Sanz et al. (2013) Sanz-Sanz, C., Roncero, O., Paniagua, M., & Aguado, A. 2013, J. Chem. Phys., 139, 184302
* Shiozaki & Werner (2013) Shiozaki, T. & Werner, H.-J. 2013, Mol. Phys., 111, 607
* Smirnov et al. (1995) Smirnov, G. T., Sorochenko, R. L., & Walmsley, C. M. 1995, A&A, 300, 923
* Smith (1991) Smith, R. G. 1991, MNRAS, 249, 172
* Sofia et al. (2004) Sofia, U. J., Lauroesch, J. T., Meyer, D. M., & Cartledge, S. I. B. 2004, ApJ, 605, 272
* Stecher & Williams (1972) Stecher, T. P. & Williams, D. A. 1972, ApJ, 177, L141
* Sternberg & Dalgarno (1995) Sternberg, A. & Dalgarno, A. 1995, ApJS, 99, 565
* Stoerzer et al. (1995) Stoerzer, H., Stutzki, J., & Sternberg, A. 1995, A&A, 296, L9
* Stowe et al. (1990) Stowe, G. F., Schultz, R. H., Wright, C. A., & Armentrout, P. B. 1990, Int. J. Mass Spectrom. Ion Proc., 100, 377
* Tablero et al. (2001) Tablero, C., Aguado, A., & Paniagua, M. 2001, Comput. Phys. Comm., 140, 412
* Taquet et al. (2016) Taquet, V., Furuya, K., Walsh, C., & van Dishoeck, E. F. 2016, MNRAS, 462, S99
* Tercero et al. (2010) Tercero, B., Cernicharo, J., Pardo, J. R., & Goicoechea, J. R. 2010, A&A, 517, A96
* Tieftrunk et al. (1994) Tieftrunk, A., Pineau des Forets, G., Schilke, P., & Walmsley, C. M. 1994, A&A, 289, 579
* Tielens (2010) Tielens, A. G. G. M. 2010, The Physics and Chemistry of the Interstellar Medium
* Tielens & Hagen (1982) Tielens, A. G. G. M. & Hagen, W. 1982, A&A, 114, 245
* Tielens & Hollenbach (1985) Tielens, A. G. G. M. & Hollenbach, D. 1985, ApJ, 291, 722
* Tielens et al. (1993) Tielens, A. G. G. M., Meixner, M. M., van der Werf, P. P., et al. 1993, Science, 262, 86
* Turner (1996) Turner, B. E. 1996, ApJ, 468, 694
* van der Tak et al. (2007) van der Tak, F. F. S., Black, J. H., Schöier, F. L., Jansen, D. J., & van Dishoeck, E. F. 2007, A&A, 468, 627
* van der Tak et al. (2003) van der Tak, F. F. S., Boonman, A. M. S., Braakman, R., & van Dishoeck, E. F. 2003, A&A, 412, 133
* van der Tak et al. (2013) van der Tak, F. F. S., Nagy, Z., Ossenkopf, V., et al. 2013, A&A, 560, A95
* van der Werf et al. (2013) van der Werf, P. P., Goss, W. M., & O’Dell, C. R. 2013, ApJ, 762, 101
* van der Werf et al. (1996) van der Werf, P. P., Stutzki, J., Sternberg, A., & Krabbe, A. 1996, A&A, 313, 633
* van der Wiel et al. (2009) van der Wiel, M. H. D., van der Tak, F. F. S., Ossenkopf, V., et al. 2009, A&A, 498, 161
* van Dishoeck (2004) van Dishoeck, E. F. 2004, ARA&A, 42, 119
* Velilla et al. (2008) Velilla, L., Lepetit, B., Aguado, A., Beswick, J., & Paniagua, M. 2008, J. Chem. Phys., 129, 084307
* Vidal et al. (2017) Vidal, T. H. G., Loison, J.-C., Jaziri, A. Y., et al. 2017, MNRAS, 469, 435
* Wakelam et al. (2017) Wakelam, V., Loison, J. C., Mereau, R., & Ruaud, M. 2017, Molecular Astrophysics, 6, 22
* Walmsley et al. (2000) Walmsley, C. M., Natta, A., Oliva, E., & Testi, L. 2000, A&A, 364, 301
* Werner & Knowles (1988a) Werner, H. J. & Knowles, P. J. 1988a, J. Chem. Phys., 89, 5803
* Werner & Knowles (1988b) Werner, H. J. & Knowles, P. J. 1988b, Chem. Phys. Lett., 145, 514
* Werner et al. (2012) Werner, H.-J., Knowles, P. J., Knizia, G., Manby, F. R., & Schütz, M. 2012, WIREs Comput Mol Sci, 2, 242
* Wyrowski et al. (1997) Wyrowski, F., Schilke, P., Hofner, P., & Walmsley, C. M. 1997, ApJ, 487, L171
* Yamamura et al. (2000) Yamamura, I., Kawaguchi, K., & Ridgway, S. T. 2000, ApJ, 528, L33
* Young et al. (2012) Young, E. T., Becklin, E. E., Marcum, P. M., et al. 2012, ApJ, 749, L17
* Zanchet et al. (2013a) Zanchet, A., Agúndez, M., Herrero, V. J., Aguado, A., & Roncero, O. 2013a, AJ, 146, 125
* Zanchet et al. (2018) Zanchet, A., del Mazo, P., Aguado, A., et al. 2018, PCCP, 20, 5415
* Zanchet et al. (2013b) Zanchet, A., Godard, B., Bulut, N., et al. 2013b, ApJ, 766, 80
* Zanchet et al. (2019) Zanchet, A., Lique, F., Roncero, O., Goicoechea, J. R., & Bulut, N. 2019, A&A, 626, A103
* Zanchet et al. (2009) Zanchet, A., Roncero, O., González-Lezana, T., et al. 2009, Journal of Physical Chemistry A, 113, 14488
* Zhou et al. (2020) Zhou, J., Zhao, Y., Hansen, C. S., et al. 2020, Nature Communications, 11, 1547
## Appendix A H2S+ formation and destruction
In this Appendix we give details of how we calculated the H2 vibrational-
state-dependent rates of reaction (2) and of the reverse reaction, the
destruction of H2S+ (${}^{2}A^{\prime}$) by reactive collisions with H
(${}^{2}S$) atoms (summarized in Fig. 17).
We first built a full dimensional potential energy surface (PES) of the
triplet H3S+ (${}^{3}A$) system by fitting more than 150,000 ab initio points,
including the long range interactions in the reactants and products channels.
The main topological features of the PES are summarized in the minimum energy
path between reactants and products (see middle panel of Fig. 9). These ab
initio points were calculated with the explicitly correlated restricted
coupled cluster method with single, double, and perturbative triple
excitations (RCCSD(T)-F12a; Knizia et al. 2009). The analytical fit has an
overall rms error of $\simeq$ 0.01 eV (Fig. 18). Appendix A.1 provides more details.
Reaction (2) is endothermic by 0.672 eV, and the PES of the triplet state
shows two shallow wells in the H2 \+ SH+ entrance channel (named
${}^{3}W_{1a}$ and ${}^{3}W_{1b}$, with a depth of $\simeq$ 0.118 eV) and
another one near the H + H2S+ products (named ${}^{3}W_{2}$, with a depth of
0.08 eV). Between the reactants and products wells there is a saddle point,
with an energy of 0.601 eV. This saddle point, slightly below the products,
has a geometry similar to ${}^{3}W_{2}$ in which the H–H distance is strongly
elongated compared to that of H2. These features are also present in the
maximum multiplicity PES of reactions H2 \+ S${}^{+}(^{4}S)$ and H2 \+
H2S+(${}^{2}A$) (see Fig. 9). We determine the state-dependent rates of
reaction (2) and of the reverse reaction using a quasi-classical trajectory
(QCT) method on our ground triplet PES. We provide more details on how the
reactive cross sections for fixed collision energies were calculated in
Appendix A.2.
The formation rate of H2S+ from H2 ($v$ = 0) is very slow. For H2 ($v$ = 1),
the rate constant increases significantly at $\approx$ 500 K, corresponding
to the opening of the H2S+ \+ H threshold. At this point, it is important to
consider the zero-point energy (ZPE) of the products (see next section for
details). For H2 ($v$ = 2) and H2 ($v$ = 3), reaction rates are faster, close
to the Langevin limit. Finally, the H2S+ destruction rate constant is very
similar to that of its formation from H2 ($v$ = 2). In Appendix A.3 we provide
more information about the destruction of HnS+ ions through radiative
association and spin flip mechanisms.
Figure 17: Calculated rate constants as a function of temperature (for
translation and rotation) for the SH+ ($v$ = 0, $j$ = 0) + H2 ($v$ = 1, 2, 3,
$j$ = 0) and H2S+ ($v$ = 0, $j$ = 0) + H reactions (lavender), using the
ZPE-corrected QCT method. Dotted curves are fits of the form
$k(T)=\alpha\,(T/300)^{\beta}\,\exp(-\gamma/T)$. Rate coefficients are listed
in Table 1.
Figure 18: Rms error as a function of total energy, showing the number of ab
initio points used to evaluate the error in the PES calculation. Arrows
indicate selected critical points in the PES and provide an estimate of the
error in each region. TS means transition state.
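For reference, the Arrhenius-like fitting form used throughout these figure captions can be evaluated in a few lines of Python. This is only a sketch: the coefficient values below are illustrative placeholders, not the fitted values tabulated in Table 1.

```python
import math

def arrhenius_rate(T, alpha, beta, gamma):
    """Evaluate k(T) = alpha * (T/300)**beta * exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# Illustrative (hypothetical) coefficients, not the values of Table 1:
alpha, beta, gamma = 1.0e-10, 0.5, 500.0  # cm^3 s^-1, dimensionless, K
for T in (100.0, 300.0, 1000.0):
    print(f"T = {T:6.1f} K   k = {arrhenius_rate(T, alpha, beta, gamma):.3e} cm^3 s^-1")
```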
### A.1 Ab initio calculations and PES
Dagdigian (2019) presented a PES for the SH+-H2 system that includes
4-dimensions and is based on RCCSD(T)-F12a ab initio calculations. This PES
was used to study SH+–H2 inelastic collisions using a rigid rotor approach in
which the two diatomic molecules are kept fixed at their equilibrium
distances. However, in order to study the reactivity of the collision, the two
diatomic distances have to be included to account for the breaking and
formation of new bonds.
Reaction (2) corresponds to a triplet state H3S+ (${}^{3}A$). The H2S+
(${}^{2}A^{\prime}$) + H (${}^{2}S$) products can form a triplet and a singlet
state. The triplet state can lead to the destruction of H2S+ through reaction
with H atoms. The singlet state, however, produces very excited states of the
reactants. Thus, it only leads to inelastic collisions but not to the
destruction of H2S${}^{+}(^{2}A^{\prime})$. In consequence, here we only
consider the ground triplet electronic state of the system. In addition, the
H${}_{3}^{+}$ \+ S (${}^{3}P$) channel is about 2.4 eV above the H2 \+ SH+
asymptote, and will not be included in the present study.
In order to study the regions where several electronic states intersect, we
performed an explicitly correlated internally contracted multireference
configuration interaction (ic-MRCI-F12) calculation (Shiozaki & Werner 2013;
Werner & Knowles 1988a, b) including the Davidson correction (icMRCI-F12+Q;
Davidson 1975). The ic-MRCI-F12 calculations were carried out using state-
averaged complete active space self-consistent field (SA-CASSCF) orbitals with
all the CAS configurations as the reference configuration state functions. We
used a triple zeta correlation consistent basis set for explicitly correlated
wave functions (cc-pVTZ-F12; Peterson et al. 2008). In order to avoid orbital
flipping between core and valence orbitals, SA-CASSCF calculations averaged
over the three lowest triplet states were carried out including the core and
valence orbitals in the active space (18 electrons in 11 orbitals). For the
ic-MRCI-F12 calculation, the core orbitals were kept doubly occupied,
resulting in about
$2.5\times 10^{6}$ $(9\times 10^{7})$ contracted (uncontracted)
configurations. All ab initio calculations were performed with MOLPRO (Werner
et al. 2012).
Our ic-MRCI-F12 calculations show that the crossings with electronic excited
states are 2 eV above the energy of the reactants. This energy interval is
sufficient to study reaction (2). In these low-energy regions, RCCSD(T)-F12a
calculations were also performed. They are in good agreement with the
ic-MRCI-F12 results, and the t1 diagnostic is always below 0.03. This allows
us to conclude that for energies below 2 eV the RCCSD(T)-F12a method performs
well, converges straightforwardly, and, being size consistent, is well
adapted to the present case. This is the same method employed in the inelastic
collision calculations by Dagdigian (2019).
We performed extensive RCCSD(T)-F12a calculations in all accessible regions to
properly describe the six-dimensional phase space. More than 150,000 ab initio
points were fitted to a multidimensional analytic function that generates the six-
dimensional PES represented as
$\displaystyle H=H^{diab}+H^{MB}$ (8)
(Aguado et al. 2010; Sanz-Sanz et al. 2013; Zanchet et al. 2018; Roncero et
al. 2018), where $H^{diab}$ is an electronic diabatic matrix in which each
diagonal matrix element describes a rearrangement channel – six in this case,
three equivalent for SH+ \+ H2 channels, and three equivalent for H2S+ \+ H
fragments (we omitted the H${}_{3}^{+}$ \+ S channel) – as an extension of the
reactive force field approach (Farah et al. 2012). In each diagonal term, the
molecular fragments (SH+, H2 and H2S+) are described by 2 or 3 body fits
(Aguado & Paniagua 1992), and the interaction among them is described by a sum
of atom-atom terms plus the long range interaction. The non diagonal terms of
$H^{diab}$ are described as previously (Zanchet et al. 2018; Roncero et al.
2018) and the parameters are fitted to approximately describe the saddle
points along the minimum energy path in the right geometry.
In the reactants channel, the leading long range interaction
$\mbox{SH}^{+}(X^{3}\Sigma^{-})+\mbox{H}_{2}(X^{1}\Sigma_{g}^{+})$ corresponds
to charge-quadrupole and charge-induced dipole interactions (Buckingham 1967):
$\displaystyle V_{\mbox{charge}}(\mathbf{r}_{HH},\mathbf{R})=\Theta_{2}(r_{HH})P_{2}(\cos\theta_{2})R^{-3}-\left[\frac{1}{2}\alpha_{0}(r_{HH})+\frac{1}{3}\left(\alpha_{\parallel}(r_{HH})-\alpha_{\perp}(r_{HH})\right)P_{2}(\cos\theta_{2})\right]R^{-4}$ (9)
and the dipole-quadrupole interactions (Buckingham 1967):
$\displaystyle V_{\mbox{dipole}}(\mathbf{r}_{SH},\mathbf{r}_{HH},\mathbf{R})=3\mu_{1}(r_{SH})\Theta_{2}(r_{HH})\left[\cos\theta_{1}P_{2}(\cos\theta_{2})+\sin\theta_{1}\sin\theta_{2}\cos\theta_{2}\cos\phi\right]R^{-4},$ (10)
where $\Theta_{2}(r_{HH})$ is the quadrupole moment of
$\mbox{H}_{2}(X^{1}\Sigma_{g}^{+})$, $\alpha_{0}(r_{HH})$,
$\alpha_{\parallel}(r_{HH})$, and $\alpha_{\perp}(r_{HH})$ are the average,
parallel, and perpendicular polarizabilities of
$\mbox{H}_{2}(X^{1}\Sigma_{g}^{+})$, respectively, and $\mu_{1}(r_{SH})$ is
the dipole moment of $\mbox{SH}^{+}(X^{3}\Sigma^{-})$. $P_{2}(\cos\theta)$
represents the Legendre polynomial of degree 2. The dependence of the
molecular properties of H2 on the interatomic distance $r_{HH}$ is taken
from Velilla et al. (2008). The dipole moment of $\mbox{SH}^{+}$ depends on
the origin of coordinates. Since $\mbox{SH}^{+}(X^{3}\Sigma^{-})$ dissociates
in $\mbox{S}^{+}(^{4}S)+\mbox{H}(^{2}S)$, we select the origin of coordinates
in the $S$ atom, so that the dipole moment tends to zero when R goes to
infinity.
In the products channel, the long range interaction
$\mbox{H}_{2}\mbox{S}^{+}\,(X^{2}A^{\prime\prime})\,+\,\mbox{H}\,(^{2}S)$
corresponds to the isotropic charge-induced dipole and charge-induced
quadrupole dispersion terms
$V_{\mbox{disp}}(R)=-\frac{9}{4}R^{-4}-\frac{15}{4}R^{-6}.$
These long range terms diverge at $R$=0. To avoid this behavior, we replace
$R$ by ${\cal R}$:
${\cal R}=R+R_{0}e^{-(R-R_{e})}\quad\quad{\rm with}\quad\quad R_{0}=10\,{\rm
bohr.}$
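The effect of this regularization is easy to verify numerically. In the sketch below, $R_{0}$ = 10 bohr is taken from the text, while the reference distance $R_{e}$ is not specified here, so the value used is an assumed placeholder.

```python
import math

def regularized_distance(R, R_e, R0=10.0):
    """Map R -> R + R0 * exp(-(R - R_e)) to tame the R**-n divergence at R = 0.

    R0 = 10 bohr is from the text; R_e is an equilibrium-like reference
    distance that must be supplied (an assumed placeholder below).
    """
    return R + R0 * math.exp(-(R - R_e))

for R in (0.0, 2.0, 5.0, 20.0):
    # The regularized distance stays bounded away from zero at short range
    # and converges to R at long range, leaving the asymptotics untouched.
    print(f"R = {R:5.1f} bohr   cal-R = {regularized_distance(R, R_e=2.5):9.4f} bohr")
```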
In Eq. (8), $H^{MB}$ is the many-body term, which is described by
permutationally invariant polynomials following the method of Aguado and
collaborators (Aguado & Paniagua 1992; Tablero et al. 2001; Aguado et al.
2001). This many-body term improves the accuracy of the PES, especially in the
region of the reaction barriers (as shown in Fig. 9). Features of the
stationary points are listed in Table 5.
Table 5: RCCSD(T)-F12a and fit stationary points on the PES.
Stationary point | Geometry | Energy [cm-1] | Energy [eV]
---|---|---|---
Reactants | SH${}^{+}+$ H2 | 0.0 | 0.0
Minimum 1 | SH${}^{+}-$ H2 | $-$950.2 | $-$0.1178
TS12 | SH+ $\cdot\cdot$ H2 | $-$579.5 | $-$0.0719
Minimum 2 | SH${}^{+}-$ H2 | $-$937.9 | $-$0.1163
TS13 | SH+ $\cdot\cdot$ H $\cdot\cdot$ H | 4843.9 | 0.6006
Minimum 3 | H2S${}^{+}-$ H | 4766.5 | 0.5910
Products | H2S${}^{+}+$ H | 5422.3 | 0.6723
### A.2 Determination of reactive collision rates
We studied the reaction dynamics using a quasi-classical trajectory (QCT)
method with the code miQCT (Zanchet et al. 2018; Roncero et al. 2018). In this
method, the initial vibrational energy of the reactants is included using the
adiabatic switching method (AS) (Grozdanov & Solov’ev 1982; Johnson 1987; Qu &
Bowman 2016; Nagy & Lendvay 2017). Energies are listed in Table 6. The initial
distance between the center-of-mass of the reactants (H2 \+ SH+ or H2S+ \+ H)
is set to 85 bohr, and the initial impact parameter is set randomly within a
disk, the radius of which is set according to a capture model (Levine &
Bernstein 1987) using the corresponding long-range interaction. The
orientation among the two reactants is set randomly.
Table 6: $E_{v}$ of reactants and products, and adiabatic switching energies for the QCT initial conditions.
System (vibration) | Exact $E_{v}$ (eV) | AS energy (eV) |
---|---|---|---
H2 ($v$ = 0) | 0.270 | 0.269 |
H2 ($v$ = 1) | 0.786 | 0.785 |
H2 ($v$ = 2) | 1.272 | 1.272 |
H2 ($v$ = 3) | 1.735 | 1.730 |
SH+ ($v$ = 0) | 0.157 | 0.157 |
H2S+ ($v$ = 0) | 0.389 | 0.388 |
A first exploration of the reaction dynamics is done at fixed collision
energy, for H2 ($v$ = 0, 1, 2 , 3) + SH+ ($v$ = 0) and H + H2S+($v$ = 0), and
the reactive cross section is calculated as in Karplus et al. (1965)
$\displaystyle\sigma_{vj}(E)=\pi b_{max}^{2}P_{r}(E)\quad{\rm with}\quad
P_{r}(E)={N_{r}\over N_{tot}},$ (11)
where $N_{tot}$ is the total number of trajectories with initial impact
parameter lower than $b_{max}$, the maximum impact parameter for which the
reaction takes place, and $N_{r}$ is the number of trajectories leading to
products. Fig. 19 shows results for $N_{tot}>20000$ at all collision energies
and initial vibrational states of the reactants.
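Eq. (11) amounts to a simple Monte Carlo estimate and can be sketched in a few lines. The trajectory counts and impact parameter below are made-up numbers for illustration only.

```python
import math

def qct_cross_section(b_max, n_reactive, n_total):
    """Eq. (11): sigma = pi * b_max**2 * P_r, with P_r = N_r / N_tot.

    b_max is the maximum impact parameter (bohr) for which reaction
    occurs; the counts used below are illustrative, not from the paper.
    """
    p_r = n_reactive / n_total
    return math.pi * b_max ** 2 * p_r

sigma = qct_cross_section(b_max=8.0, n_reactive=1200, n_total=20000)
print(f"sigma = {sigma:.2f} bohr^2")
```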
For the SH+ ($v$ = 0, $j$ = 0) + H2 ($v$, $j$ = 0) reaction there is a strong
dependence on the initial vibrational state. For H2 ($v$ = 0), there are nearly
no reactive events; only at about 1 eV do some reactive trajectories appear. For
H2 ($v$ = 2 and 3), however, the reaction shows a relatively large cross
section that decreases with increasing collision energy, as expected for
exoergic reactions. Energies below 10-100 meV are dominated by long range
interactions, leading to an increase in the maximum impact parameter,
$b_{max}$, consistent with the variation of the cross section.
Figure 19: Reaction cross section (in bohr2) as a function of collision energy
(in meV) for the SH+ ($v$ = 0, $j$ = 0) + H2 ($v$ = 1, 2, 3, $j$ = 0) and H2S+
($v$ = 0, $j$=0) + H collisions. Filled symbols are obtained counting all
trajectories leading to products, while open symbols correspond to the ZPE
corrected ones.
Reaction SH+ ($v$ = 0, $j$ = 0) + H2 ($v$ = 1, $j$ = 0) shows an unexpected
behavior that deserves some discussion. At energies below 40 meV, the cross
section is large and decreases with increasing energy. In the 40-200 meV
range, the reactive cross section drops to zero, showing a threshold at 200
meV that is consistent with the endothermicity of the reaction.
In order to analyze the reaction mechanism for H2 ($v$ = 1) below 40 meV, we
carried out an extensive analysis of the trajectories. A typical one is
presented in Fig. 20 for 10 meV. The H2 and SH+ reactants are attracted to
each other by long range interactions until they get trapped in the
${}^{3}W_{1}$ wells, as shown by the evolution of $R$, the distance
between the centers of mass of the two molecules. The trapping lasts for 8 ps, thus
allowing several collisions between H2 and SH+ and permitting the energy
transfer between them. The H2 molecule ultimately breaks, and leaves SH+ with
less vibrational energy. This can be inferred from the decrease in the
amplitudes of the SH+ distance. The energy of the H2S+ product is below the
ZPE (see Table 6). This is a clear indication of ZPE leakage in the QCT
method, due to the energy transfer promoted by the long-lived collision
complex.
Figure 20: H-H, SH+ and $R$ distances (in bohr) versus time (in ps), for a
typical reactive trajectory for the SH+ ($v$ = 0, $j$ = 0) + H2($v$ = 1, $j$
=0) collision at 10 meV.
Several methods exist to correct for ZPE leakage. One is Gaussian
binning (Bonnet & Rayez 1997, 2004; Bañares et al. 2003, 2004). Here we have
applied a simplification of this method, which assigns a weight ($w$) for each
trajectory as
$\displaystyle w=\left\\{\begin{array}[]{ccc}1&{\rm for}&E_{vib}>ZPE\\\
e^{-\gamma(E_{vib}-ZPE)^{2}}&{\rm for}&E_{vib}<ZPE\end{array}\right.,$ (14)
where $E_{vib}$ is the vibrational energy of reactants (adding those of H2 and
SH+) or H2S+ products at the end of each trajectory. These new weights are
used to calculate $N_{r}$ and $N_{tot}$ in Eq. (11). ZPE-corrected results are
shown in Fig. 19 with open symbols. This plot shows that all values are nearly
the same as those calculated by simply counting each trajectory as an integer
(as done in the normal binning method; see filled symbols in Fig. 19). The only
exception is the case of SH+ \+ H2 ($v$ = 1) below 400 meV, whose cross section
becomes zero when the ZPE of the fragments at the end of the trajectories is
taken into account.
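The simplified Gaussian-binning weight of Eq. (14) can be sketched as follows; the width parameter $\gamma$ is an assumed value for illustration, not one taken from the paper.

```python
import math

def zpe_weight(e_vib, zpe, gamma=100.0):
    """Simplified Gaussian-binning weight of Eq. (14).

    Trajectories ending with vibrational energy above the ZPE keep full
    weight; those ending below it are exponentially suppressed. gamma
    (here in eV^-2) is an assumed width parameter, not a paper value.
    """
    if e_vib > zpe:
        return 1.0
    return math.exp(-gamma * (e_vib - zpe) ** 2)

# ZPE of H2S+ (v = 0) from Table 6 is 0.389 eV:
print(zpe_weight(0.50, 0.389))  # above the ZPE: full weight
print(zpe_weight(0.30, 0.389))  # below the ZPE: suppressed weight
```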
The thermal rate constant for a specific initial vibrational state of the
reactants is calculated by running a minimum of $10^{5}$ trajectories per
temperature, with fixed vibrational states of the reactants, assuming a
Boltzmann distribution over translational and rotational degrees of freedom,
and following the ZPE-corrected method as:
$\displaystyle
k_{v}(T)=\sqrt{{8k_{B}T\over\pi\mu}}\quad\pi\,b^{2}_{max}(T)\quad P_{r}(T).$
(15)
The results of these calculations are shown in Fig. 17.
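Eq. (15) combines the mean thermal velocity with the capture area and the reaction probability. The minimal sketch below uses an approximate reduced mass for the H2 + SH+ pair together with made-up $b_{max}$ and $P_{r}$ values, so only the formula itself is faithful.

```python
import math

K_B = 1.380649e-23        # Boltzmann constant [J/K]
AMU = 1.66053906660e-27   # atomic mass unit [kg]
BOHR = 5.29177210903e-11  # bohr radius [m]

def thermal_rate(T, mu_amu, b_max_bohr, p_r):
    """Eq. (15): k_v(T) = sqrt(8 k_B T / (pi mu)) * pi * b_max**2 * P_r.

    Returns the rate in cm^3 s^-1. The inputs used below (reduced mass,
    b_max, P_r) are rough illustrative values only.
    """
    v_mean = math.sqrt(8.0 * K_B * T / (math.pi * mu_amu * AMU))  # m/s
    sigma = math.pi * (b_max_bohr * BOHR) ** 2                    # m^2
    return v_mean * sigma * p_r * 1.0e6                           # m^3 -> cm^3

# Reduced mass of H2 + SH+ is about 2*33/35 ~ 1.9 amu:
print(f"k(500 K) = {thermal_rate(500.0, 1.9, 8.0, 0.05):.2e} cm^3 s^-1")
```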
### A.3 On the radiative associations of HnS+
Herbst et al. (1989) and Millar & Herbst (1990) proposed that the radiative
association HnS+ \+ H2 $\rightarrow$ Hn+1S+ \+ $h\nu$ is a viable process at low
gas temperatures. Although this chemical route is widely used in astrochemical
models, here we question the viability of this process. The lower multiplicity
(L) PESs of H2S+ (${}^{2}A^{\prime\prime}$) and H3S+ (${}^{1}A$) are L = 1/2
and 0 respectively. These are shown in Fig. 9, together with the minimum
multiplicity electronic state of H4S+ (bottom panel). This state does not have
a deep well or any higher multiplicity state that could connect to higher
states of reactants and products.
For H3S+ formation through radiative association, this process assumes that
an H3S${}^{+}(^{3}A)$∗ complex forms in a triplet state, the high spin state
considered here. According to our calculations, such a complex is formed after
low-energy H2 ($v$ = 0, 1) + SH+ reactions (below 40 meV). The complex is
formed in the ${}^{3}W_{1}$ well, corresponding to geometries very far from
those of the low spin well, the ${}^{1}W$ well. Therefore, a radiative spin
flip and decay through phosphorescence is not possible. Herbst et al. (1989)
proposed a second step, in which the spin flips from the triplet to the
singlet state, followed by a radiative association, finally leading to the
H3S${}^{+}(^{1}A)$ product.
The origin of the spin flip must be the spin-orbit couplings, very relevant
for S-bearing species, that favor the spin transition when singlet and triplet
states are close in energy. Using the PESs calculated here, the lowest
crossing region is at $\simeq$ 0.25 eV, very close to the zero-point energy of H2 ($v$ = 0). At
low temperatures, the H3S${}^{+}(^{3}A)$∗ complex formed by H2 ($v$ = 0) + SH+
reactions might allow a transition between the two electronic states with
different spin. However, the spin flip probability is proportional to the
square of the overlap $|\langle{\rm H_{3}S^{+}}\,(^{3}A)^{*}\,|\,{\rm
H_{3}S^{+}}\,(^{1}A)^{*}\rangle|^{2}$. This probability is very small because
the two wells, ${}^{3}W_{1}$ and ${}^{1}W_{1}$, correspond to very different
geometries. In consequence, we conclude that this radiative association
mechanism must be negligible, especially at the high gas temperatures of PDR
edges where the H3S+(${}^{3}A$)∗ complex is not formed.
As an alternative, a spin flip in a direct collision (not forming a
H3S${}^{+}(^{3}A)^{*}$ complex) may be more efficient and should be further
investigated. Indeed, experimental measurements of the S${}^{+}(^{4}S)$ \+ H2
($v$ = 0) cross section show a maximum at about 1 eV of collisional energy
attributed to spin-orbit transitions leading to spin flip (Stowe et al. 1990).
## Appendix B Reaction $\rm S\,(^{3}P)\,+\,H_{2}\,({\it{v}})\rightleftarrows
SH\,+\,H$
This reaction involves open shell reactants, S (${}^{3}P$), and products, SH
(${}^{2}\Pi$). Neglecting spin flipping, there are three states that correlate
to S(${}^{3}P$), two of them connect to the SH (${}^{2}\Pi$). These two
electronic states are of ${}^{3}A^{\prime}$ and ${}^{3}A^{\prime\prime}$
symmetry, and have been studied in detail by Maiti et al. (2004). Here we use
the adiabatic PES calculated by Maiti et al. (2004). Reaction S + H2
$\rightarrow$ SH + H is endothermic by $\simeq$ 1.02 eV (without zero-point
energy corrections), very similar to the endothermicity of reaction S+ \+ H2
$\rightarrow$ SH+ \+ H (Zanchet et al. 2013a, 2019). The main difference is
the presence of a barrier, of $\simeq$ 78 meV ($\simeq$ 905 K) with respect to
the SH + H asymptote.
Figure 21: Calculated rate constants as a function of temperature for reaction
S(${}^{3}P$) + H2($v$) $\rightarrow$ SH + H. Dotted curves are fits of the
form $k(T)=\alpha\,(T/300)^{\beta}\,\exp(-\gamma/T)$. Rate coefficients are
listed in Table 1.
We performed quantum wave packet calculations for the reactions S + H2 ($v$ =
2, 3, $j$=0) and SH ($v$ = 0, $j$=0) + H. We used MADWAVE3 (Gómez-Carrasco &
Roncero 2006; Zanchet et al. 2009) to calculate the reaction probabilities for
each initial vibrational state of the diatomic reactant (in the ground
rotational state, $j$ = 0). We employed the usual partial wave expansion to
calculate the reaction cross section. We calculated only a few total angular
momenta of the triatomic system, $J$ = 0, 10, and 20. The other $J$ values needed in
the partial wave expansion were obtained using the $J$-shifting-interpolation
method (see Zanchet et al. 2013a). The initial-state-specific rate constants
are obtained by numerical integration of the cross section using a Boltzmann
distribution (Zanchet et al. 2013a). The resulting reaction rate constants are
shown in Figs. 21 and 22. The numerical values of the rate constants are
fitted to the usual analytical Arrhenius-like expression (shown as dotted
curves). We note that the shoulder in the rate constants of reaction SH
($v$=0) + H requires two functions in the temperature range of 200-800 K. Rate
coefficients are tabulated in Table 1.
Figure 22: Calculated rate constants as a function of temperature for reaction
SH ($v$=0) + H $\rightarrow$ S + H2. The best fit to the calculated rate
requires two Arrhenius-like expressions (one for low temperatures and one for
high temperatures). Rate coefficients of these fits are listed in Table 1.
## Appendix C SH and H2S photoionization and photodissociation cross sections
Figure 23 shows the experimental SH and H2S photoionization and
photodissociation cross sections (cm2) used in our PDR models. We integrate
these cross sections over the specific FUV radiation field at each $A_{V}$
depth of the PDR to obtain the specific photoionization and photodissociation
rates (s-1).
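This integration can be sketched with a simple trapezoidal rule. All arrays below are illustrative placeholders, not the measured cross sections of Fig. 23 or a real FUV radiation field.

```python
def photo_rate(wavelengths_nm, sigma_cm2, flux):
    """Trapezoidal integration of sigma(lambda) * F(lambda) over wavelength.

    flux is the photon flux density [photons cm^-2 s^-1 nm^-1]; the
    result is a photodestruction rate [s^-1]. The inputs used below are
    placeholders, not the measured cross sections of Fig. 23.
    """
    rate = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dw = wavelengths_nm[i + 1] - wavelengths_nm[i]
        rate += 0.5 * dw * (sigma_cm2[i] * flux[i]
                            + sigma_cm2[i + 1] * flux[i + 1])
    return rate

# A flat 1e-17 cm^2 cross section in a flat photon field over 100 nm:
print(f"rate = {photo_rate([100.0, 150.0, 200.0], [1e-17] * 3, [1e7] * 3):.2e} s^-1")
```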
Figure 23: Photoionization and photodissociation cross sections. Top panel:
$\sigma_{\rm ion}$(SH) (blue curve from laboratory experiments by Hrodmarsson
et al. 2019). The pink curve is $\sigma_{\rm diss}$(SH) (Heays et al. 2017,
and references therein). Bottom panel: $\sigma_{\rm ion}$(H2S) (blue curve)
and $\sigma_{\rm diss}$(H2S) (gray and pink curves; from Zhou et al. 2020).
## Appendix D H2S ortho-to-para ratio and $T_{\rm spin}$
The OTP ratio is sometimes related to a nuclear-spin-temperature ($T_{\rm
spin}$, e.g., Mumma et al. 1987) defined, for H2O or H2S, as:
${{\rm{OTP}}=\frac{3\sum{(2J+1)\,{\rm{exp}}(-E_{o}(J)/T_{\rm
spin})}}{\sum{(2J+1)\,{\rm{exp}}(-E_{p}(J)/T_{\rm spin})}}}.$ (16)
Here, $E_{o}(J)$ and $E_{p}(J)$ are the energies (in Kelvin) of $o$-H2S and
$p$-H2S rotational levels (with the two ground rotational states separated by
$\Delta E$ = 19.8 K). Figure 24 shows the OTP ratio of the two H2S nuclear
spin isomers as a function of $T_{\rm spin}$. The OTP ratio we infer toward
the DF position of the Bar, 2.9 $\pm$ 0.3, is consistent with the statistical
ratio of 3/1, and implies $T_{\rm spin}$ $\geq$ 30 $\pm$ 10 K.
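Eq. (16) can be evaluated numerically. The sketch below uses the 19.8 K ground-state splitting quoted above, but the higher rotational levels and their degeneracies are truncated, illustrative values, so the resulting curve is only qualitative; it nevertheless shows the approach to the statistical ratio of 3 at high $T_{\rm spin}$.

```python
import math

# Lowest rotational levels (in K) of the two nuclear-spin isomers.
# The 19.8 K ortho/para ground-state splitting is from the text; the
# higher levels and the (2J+1) degeneracies are assumed placeholders.
E_ORTHO = [19.8, 55.0, 110.0]
E_PARA = [0.0, 40.0, 95.0]
G = [1, 3, 5]

def otp_ratio(t_spin):
    """Eq. (16): OTP = 3 * Z_ortho(T_spin) / Z_para(T_spin)."""
    z_o = sum(g * math.exp(-e / t_spin) for g, e in zip(G, E_ORTHO))
    z_p = sum(g * math.exp(-e / t_spin) for g, e in zip(G, E_PARA))
    return 3.0 * z_o / z_p

for T in (10.0, 30.0, 100.0):
    # The ratio approaches the statistical value of 3 at high T_spin.
    print(f"T_spin = {T:5.1f} K   OTP = {otp_ratio(T):.2f}")
```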
Figure 24: OTP ratio of H2S as a function of spin temperature (eq. 16).
## Appendix E Line parameters of IRAM 30m, ALMA, and SOFIA observations
Table 7: Parameters of H2S and H${}_{2}^{34}$S lines detected with the IRAM 30
m telescope toward three positions of the Orion Bar.
Position | Species | Transition | Frequency | $E_{\rm u}$/k | $A_{\rm ul}$ | $S_{\rm ul}$ | $g_{\rm u}$ | $\displaystyle{\int}T_{\rm mb}$dv | vLSR | $\Delta$v | $T_{\rm mb}$
---|---|---|---|---|---|---|---|---|---|---|---
| | $J_{K_{\rm a},K_{\rm c}}$ | [GHz] | [K] | [s-1] | | | [K km s-1] | [km s-1] | [km s-1] | [K]
(+10, $-$10) | $o$-H2S | 11,0 – 10,1 | 168.763 | 8.1 | 2.68 $\times$ 10-5 | 1.5 | 3 | 18.32 (0.01) | 10.5 (0.1) | 2.5 (0.1) | 7.03
| $o$-H${}_{2}^{34}$S | 11,0 – 10,1 | 167.911 | 8.1 | 2.62 $\times$ 10-5 | 1.5 | 3 | 1.22 (0.01) | 10.5 (0.1) | 2.0 (0.1) | 0.57
| $p$-H2S | 22,0 – 21,1 | 216.710 | 84.0 | 4.87 $\times$ 10-5 | 2.2 | 5 | 0.35 (0.01) | 10.4 (0.1) | 2.1 (0.1) | 0.16
(+30, $-$30) | $o$-H2S | 11,0 – 10,1 | 168.763 | 8.1 | 2.68 $\times$ 10-5 | 1.5 | 3 | 17.16 (0.02) | 10.3 (0.1) | 2.4 (0.1) | 6.85
| $o$-H${}_{2}^{34}$S | 11,0 – 10,1 | 167.911 | 8.1 | 2.62 $\times$ 10-5 | 1.5 | 3 | 1.28 (0.01) | 10.4 (0.1) | 1.9 (0.1) | 0.63
(+35, $-$55) | $o$-H2S | 11,0 – 10,1 | 168.763 | 8.1 | 2.68 $\times$ 10-5 | 1.5 | 3 | 3.57 (0.02) | 9.6 (0.1) | 3.1 (0.1) | 1.08
| $o$-H${}_{2}^{34}$S | 11,0 – 10,1 | 167.911 | 8.1 | 2.62 $\times$ 10-5 | 1.5 | 3 | 0.18 (0.02) | 9.8 (0.1) | 2.7 (0.3) | 0.06
Parentheses indicate the uncertainty obtained by the Gaussian fitting
programme.
Table 8: Parameters of SH+ targeted with ALMA toward the DF position.
Position | Species | Transition | Frequency | $E_{\rm u}$/k | $A_{\rm ul}$ | $\displaystyle{\int}T_{\rm mb}$dv | vLSR | $\Delta$v | $T_{\rm mb}$
---|---|---|---|---|---|---|---|---|---
| | | [GHz] | [K] | [s-1] | [K km s-1] | [km s-1] | [km s-1] | [K]
(+10, $-$10) | SH+ | $N_{J}$=10-01 $F$=1/2-1/2 | 345.858 | 16.6 | 1.14$\times$10-4 | 0.36a (0.03) | 10.7 (0.2) | 2.7 (0.3) | 0.12
| SH+ | $N_{J}$=10-01 $F$=1/2-3/2b | 345.944 | 16.6 | 2.28$\times$10-4 | 0.70a (0.03) | 10.4 (0.1) | 2.5 (0.1) | 0.26
(a) Integrated over a 5′′ aperture to increase the S/N of the line
profiles. (b) Line integrated intensity map shown in Fig. 3.
Table 9: Parameters of SH lines (neglecting HFS) targeted with SOFIA toward
the DF position.
Position | Species | Transition | Frequency | $E_{\rm u}$/k | $A_{\rm ul}$ | $\displaystyle{\int}T_{\rm mb}$dv | vLSR | $\Delta$v | $T_{\rm mb}$
---|---|---|---|---|---|---|---|---|---
| | | [GHz] | [K] | [s-1] | [K km s-1] | [km s-1] | [km s-1] | [K]
(+10, $-$10) | SH | ${}^{2}\Pi_{3/2}$ $J$=5/2+–3/2- | 1382.911 | 66.4 | 4.72 $\times$ 10-3 | $<$1.11a (0.20) | 12.1a (0.8) | 7.9a (1.3) | 0.16
| SH | ${}^{2}\Pi_{3/2}$ $J$=5/2-–3/2+ | 1383.242 | 66.4 | 4.72 $\times$ 10-3 | $<$0.34 (0.12) | 11.7 (0.5) | 2.3 (0.8) | 0.14
(a) Uncertain fit.
# Inverse semigroup from metrics on doubles III. Commutativity and
(in)finiteness of idempotents
V. Manuilov Moscow Center for Fundamental and Applied Mathematics and Moscow
State University, Leninskie Gory 1, Moscow, 119991, Russia
[email protected]
###### Abstract.
We have shown recently that, given a metric space $X$, the coarse equivalence
classes of metrics on the two copies of $X$ form an inverse semigroup $M(X)$.
Here we study the property of idempotents in $M(X)$ of being finite or
infinite, which is similar to this property for projections in
$C^{*}$-algebras. We show that if $X$ is a free group then the unit of $M(X)$
is infinite, while if $X$ is a free abelian group then it is finite. As a by-
product, we show that the inverse semigroup $M(X)$ is not a quasi-isometry
invariant. More examples of finite and infinite idempotents are provided. We
also give a geometric description of spaces, for which their inverse semigroup
$M(X)$ is commutative.
### 1\. Introduction
Given metric spaces $(X,d_{X})$ and $(Y,d_{Y})$, a metric $d$ on $X\sqcup Y$
that extends the metrics $d_{X}$ on $X$ and $d_{Y}$ on $Y$ depends only on
the values $d(x,y)$, $x\in X$, $y\in Y$, and it may not be easy to check which
functions $d:X\times Y\to(0,\infty)$ determine a metric on $X\sqcup Y$. The
problem of describing all such metrics is difficult due to the lack of a
nice algebraic structure on the set of metrics, but, after passing to coarse
equivalence of metrics, we obtain an algebraic structure, namely that of an
inverse semigroup [4]. Recall that two metrics, $b$ and $d$, on a space $Z$ are
coarsely equivalent if there exist monotone functions
$\varphi,\psi:[0,\infty)\to[0,\infty)$ such that
$\lim\nolimits_{t\to\infty}\varphi(t)=\lim\nolimits_{t\to\infty}\psi(t)=\infty$
and
$\varphi(d(z_{1},z_{2}))\leq b(z_{1},z_{2})\leq\psi(d(z_{1},z_{2}))$
for any $z_{1},z_{2}\in Z$. We denote the coarse equivalence class of a metric
$d$ by $[d]$.
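The defining inequalities can be made concrete on sample pairs of points. A finite sample can refute, but never prove, coarse equivalence, so the sketch below only illustrates the definition on a toy example.

```python
def coarsely_dominated(b, d, phi, psi, pairs):
    """Check phi(d(z1, z2)) <= b(z1, z2) <= psi(d(z1, z2)) on sample pairs.

    A finite sample can refute, but never prove, coarse equivalence;
    this only makes the defining inequalities concrete.
    """
    return all(phi(d(z1, z2)) <= b(z1, z2) <= psi(d(z1, z2))
               for z1, z2 in pairs)

d = lambda p, q: abs(p - q)        # usual metric on the real line
b = lambda p, q: 2.0 * abs(p - q)  # rescaled metric, coarsely equivalent to d
pairs = [(0.0, 1.0), (0.0, 10.0), (-3.0, 5.0)]
# phi(t) = t and psi(t) = 2t are monotone and tend to infinity:
print(coarsely_dominated(b, d, lambda t: t, lambda t: 2.0 * t, pairs))
```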
Let $\mathcal{M}(X,Y)$ denote the set of all metrics $d$ on $X\sqcup Y$ such
that
* •
the restrictions of $d$ to $X$ and $Y$ are $d_{X}$ and $d_{Y}$, respectively;
* •
$\inf_{x\in X,y\in Y}d(x,y)>0$.
Coarse equivalence classes of metrics in $\mathcal{M}(X,Y)$ can be considered
as morphisms from $X$ to $Y$ [3], where the composition $\rho\circ d$ of a metric
$d$ on $X\sqcup Y$ and a metric $\rho$ on $Y\sqcup Z$ is given by the metric
determined by
$(\rho\circ d)(x,z)=\inf\nolimits_{y\in Y}[d(x,y)+\rho(y,z)],\quad x\in X,\
z\in Z.$
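On finite toy spaces the infimum becomes a minimum, so the composition formula can be computed directly. The sketch below illustrates only the formula, not the metric axioms or the coarse-equivalence machinery of the paper.

```python
def compose(d, rho, X, Y, Z):
    """(rho o d)(x, z) = inf over y in Y of [ d(x, y) + rho(y, z) ].

    d and rho are dicts of distances between finite toy spaces; this
    is a direct transcription of the composition formula above.
    """
    return {(x, z): min(d[(x, y)] + rho[(y, z)] for y in Y)
            for x in X for z in Z}

X, Y, Z = ["x0", "x1"], ["y0", "y1"], ["z0"]
d = {("x0", "y0"): 1, ("x0", "y1"): 3, ("x1", "y0"): 2, ("x1", "y1"): 1}
rho = {("y0", "z0"): 5, ("y1", "z0"): 1}
print(compose(d, rho, X, Y, Z))
```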
When $Y=X$, we call $X\sqcup X$ the double of $X$. In what follows we identify
$X\sqcup X$ with $X\times\\{0,1\\}$, and write $X$ for $X\times\\{0\\}$
(resp., $x$ for $(x,0)$) and $X^{\prime}$ for $X\times\\{1\\}$ (resp.,
$x^{\prime}$ for $(x,1)$).
The main result of [4] is that the semigroup $M(X)$ (with respect to this
composition) of coarse equivalence classes of metrics on the double of $X$ is
an inverse semigroup with the unit element ${\mathbf{1}}$ and the zero element
${\mathbf{0}}$, and the unique pseudo-inverse for $[d]\in M(X)$ is the coarse
equivalence class of the metric $d^{*}$ given by
$d^{*}(x,y^{\prime})=d(x^{\prime},y)$, $x,y\in X$.
Recall that a semigroup $S$ is an inverse semigroup if for any $s\in S$ there
exists a unique $t\in S$ (denoted by $s^{*}$ and called a pseudo-inverse) such
that $s=sts$ and $t=tst$ [2]. Philosophically, inverse semigroups describe
local symmetries in a similar way as groups describe global symmetries, and
technically, the construction of the (reduced) group $C^{*}$-algebra of a
group generalizes to that of the (reduced) inverse semigroup $C^{*}$-algebra
[6]. It is known that any two idempotents in an inverse semigroup $S$ commute,
and that there is a partial order on $S$ defined by $s\leq t$ if $s=ss^{*}t$.
Our standard reference for inverse semigroups is [2].
The close relation between inverse semigroups and $C^{*}$-algebras allows one
to use the classification of projections in $C^{*}$-algebras for idempotents
in inverse semigroups. Namely, as in $C^{*}$-algebra theory, we call two idempotents,
$e,f\in E(S)$, von Neumann equivalent (and write $e\sim f$) if there exists
$s\in S$ such that $s^{*}s=e$ and $ss^{*}=f$. An idempotent $e\in E(S)$ is called
infinite if there exists $f\in E(S)$ such that $f\leq e$, $f\neq e$, and
$f\sim e$; otherwise $e$ is finite. An inverse semigroup is finite if every
idempotent is finite, and is weakly finite if it is unital and the unit is
finite. A commutative inverse semigroup is patently finite.
In [5] we gave a geometric description of idempotents in the inverse semigroup
$M(X)$ (there are two types of idempotents, named type I and type II) and
showed in Lemma 3.3 that the type is invariant under the von Neumann
equivalence. In the first part of this paper, we study the property of weak
finiteness for $M(X)$ (i.e. finiteness of the unit element) and discuss its
relation to geometric properties of $X$.
We start with several examples of finite or infinite idempotents, and then
show that if $X$ is a free group then $M(X)$ is not weakly finite, while if
$X$ is a free abelian group then it is weakly finite. We also show that the
inverse semigroup $M(X)$ is not a quasi-isometry invariant. The property of
being weakly finite is also not a coarse invariant. We don’t know if it is a
quasi-isometry invariant.
In the second part of this paper, we give a geometric description of spaces,
for which the inverse semigroup $M(X)$ is commutative.
## Part I Weak finiteness of $M(X)$
### 2\. Some examples
The following example shows that in $M(X)$, for an appropriate $X$, we can
imitate examples of partial isometries and projections in a Hilbert space.
###### Example 2.1.
Let $l^{1}(\mathbb{N})$ be the space of infinite $l^{1}$ sequences, with the
metric given by the $l^{1}$-norm, and let
$X_{n}=\\{(0,\ldots,0,t,0,\ldots):t\in[0,\infty)\\}$
with $t$ at the $n$-th place, $n\in\mathbb{N}$. Set
$X=\cup_{n\in\mathbb{N}}X_{n}\subset l^{1}(\mathbb{N})$.
Let $x=(0,\ldots,0,t,0,\ldots)\in X_{n}$, $y=(0,\ldots,0,s,0,\ldots)\in
X_{m}$. Define metrics $d$, $e$, $f$ on $X\sqcup X^{\prime}$ by
$d(x,y^{\prime})=\left\\{\begin{array}[]{cl}|s-t|+1,&\mbox{if\ }m=n+1;\\ s+t+1,&\mbox{if\ }m\neq n+1,\end{array}\right.$
$e(x,y^{\prime})=\left\\{\begin{array}[]{cl}|s-t|+1,&\mbox{if\ }m=n;\\ s+t+1,&\mbox{if\ }m\neq n,\end{array}\right.$
$f(x,y^{\prime})=\left\\{\begin{array}[]{cl}|s-t|+1,&\mbox{if\ }m=n\geq 2;\\ s+t+1,&\mbox{if\ }m\neq n\mbox{\ or\ }m=n=1.\end{array}\right.$
It is easy to see that $d$, $e$, $f$ are metrics on $X\sqcup X^{\prime}$, and
that $d^{*}d=e$, $dd^{*}=f$ are idempotents, and that $e={\mathbf{1}}\neq f$.
In particular, ${\mathbf{1}}$ is infinite. Although $d$ seems similar to a
one-sided shift in a Hilbert space, it behaves differently: $f$ is
orthogonally complemented, i.e. there exists $h$ such that $f\lor
h={\mathbf{1}}$, $f\land h={\mathbf{0}}$, but the complement is not a minimal
idempotent, i.e. there exist many idempotents $j\in E(M(X))$ such that
$j\leq h$, $j\neq h$.
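The coarse behaviour of these three metrics can be probed numerically. The sketch below (hypothetical helper names; random finite samples stand in for the infinite space) checks that $d$, $e$, $f$ satisfy the triangle inequalities on $X\sqcup X^{\prime}$, and that $e(x,x^{\prime})\equiv 1$ while $f(x,x^{\prime})$ grows without bound along $X_{1}$, which is what separates $f$ from the unit ${\mathbf{1}}$:

```python
import itertools
import random

random.seed(0)

def within_X(p, q):
    # l^1 distance between points of X = union of the axes X_n;
    # a point is encoded as (n, t) with t >= 0 sitting on the n-th axis
    (n, t), (m, s) = p, q
    return abs(s - t) if n == m else s + t

def make_cross(rule):
    # rule(n, m) tells when the cross-distance is |s-t|+1 rather than s+t+1
    def cross(p, q):
        (n, t), (m, s) = p, q
        return abs(s - t) + 1 if rule(n, m) else s + t + 1
    return cross

d = make_cross(lambda n, m: m == n + 1)
e = make_cross(lambda n, m: m == n)
f = make_cross(lambda n, m: m == n and n >= 2)

def full(metric, a, b):
    # metric on X ⊔ X': a point is ((n, t), side), side 0 for X and 1 for X'
    (p, sa), (q, sb) = a, b
    if sa == sb:
        return within_X(p, q)
    x, y = (p, q) if sa == 0 else (q, p)
    return metric(x, y)

# triangle inequalities on a random finite sample of the double
pts = [((random.randint(1, 5), random.uniform(0.0, 10.0)), side)
       for side in (0, 1) for _ in range(15)]
for metric in (d, e, f):
    for a, b, c in itertools.permutations(pts, 3):
        assert full(metric, a, c) <= full(metric, a, b) + full(metric, b, c) + 1e-9

# e(x, x') is identically 1, while f(x, x') is unbounded along the axis X_1
assert all(full(e, (p, 0), (p, 1)) == 1 for p, _ in pts)
assert full(f, ((1, 100.0), 0), ((1, 100.0), 1)) == 201.0
```

The failure of $f$ to be the unit is visible in the last line: along $X_{1}$ the self-distance $f(x,x^{\prime})=2t+1$ is unbounded.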
On the other hand, if $X\subset[0,\infty)$ with the standard metric then the
inverse semigroup $M(X)$ is commutative (Prop. 7.1 in [4]), hence any
idempotent can be equivalent only to itself, hence is finite. In Part II we
shall give a geometric description of all metric spaces with commutative
$M(X)$, which is patently finite.
The next result shows that the picture may be more complicated.
###### Theorem 2.2.
There exists an amenable space $X$ of bounded geometry and $s\in M(X)$ such
that $s^{*}s=\mathbf{1}$, but $ss^{*}\neq\mathbf{1}$.
###### Proof.
Let
$x_{n}=(\log 2,\log 3,\ldots,\log n,\log(n+1),0,0,\ldots),$
$x^{\prime}_{n}=(\log 2,\log 2,\log 3,\log 3,\ldots,\log n,\log
n,\log(n+1),\log(n+1),0,0,\ldots),$
and let
$X=\\{x_{n}:n\in\mathbb{N}\\},\quad
X^{\prime}=\\{x^{\prime}_{n}:n\in\mathbb{N}\\},$
$X,X^{\prime}\subset l^{\infty}(\mathbb{N})$ with the metric
$d(x,y)=\sup\nolimits_{k}|x_{k}-y_{k}|,\quad x=(x_{1},x_{2},\ldots),\quad
y=(y_{1},y_{2},\ldots).$
Take $m>n$; then $d(x_{n},x_{m})=\log(m+1)=d(x^{\prime}_{n},x^{\prime}_{m})$,
hence the restrictions of $d$ to the two copies of $X$ coincide (thus
determining the metric $d_{X}$ on $X$), and $d\in\mathcal{M}(X)$.
We have, for $n$ even, $n=2k$,
$d(x_{n},X^{\prime})=\inf\nolimits_{m\in\mathbb{N}}d(x_{2k},x^{\prime}_{m})=d(x_{2k},x^{\prime}_{k})=\max\nolimits_{i\leq k}|\log(i+1)-\log(2i+1)|\ \leq\ \log 2,$
and for $n$ odd, $n=2k-1$,
$d(x_{2k-1},x^{\prime}_{m})\geq\log(k+1)$
for any $m\in\mathbb{N}$, hence
$d(x_{n},X^{\prime})=\inf\nolimits_{m\in\mathbb{N}}d(x_{2k-1},x^{\prime}_{m})=d(x_{2k-1},x^{\prime}_{k})=\log(k+1),$
i.e.
$\lim_{k\to\infty}d(x_{2k-1},X^{\prime})=\infty.$
On the other hand,
$d(x^{\prime}_{n},X)=\inf\nolimits_{m\in\mathbb{N}}d(x^{\prime}_{n},x_{m})\leq d(x^{\prime}_{n},x_{2n})=\log(2n+1)-\log(n+1)\ \leq\ \log 2.$
Let $X_{+}=\\{x_{2k}:k\in\mathbb{N}\\}$,
$X_{-}=\\{x_{2k-1}:k\in\mathbb{N}\\}$. Then
$d^{*}d(x,x^{\prime})=\inf\nolimits_{y\in X}[d(x,y^{\prime})+d^{*}(y,x^{\prime})]=\inf\nolimits_{y\in X}[d(x,y^{\prime})+d(x,y^{\prime})]=2d(x,X^{\prime})\ \leq\ 2\log 2$
for any $x\in X_{+}$ and
$\lim_{x\in X_{-};\ x\to\infty}d^{*}d(x,x^{\prime})=\lim_{x\in X_{-};\
x\to\infty}2d(x,X^{\prime})=\infty,$
while
$dd^{*}(x,x^{\prime})=2d(x^{\prime},X)\leq\log 2$
for any $x\in X$.
Let $d_{+},d_{-}\in\mathcal{M}(X)$ be the idempotent selfadjoint metrics
defined by
$d_{\pm}(x,y^{\prime})=\inf\nolimits_{u\in X_{\pm}}[d_{X}(x,u)+1+d_{X}(u,y)].$
Then $[d^{*}d]=[d_{+}]$ and $[dd^{*}]=\mathbf{1}$, where $[d_{+}]$ is strictly
smaller than $\mathbf{1}$. Hence $s=[d^{*}]$ satisfies $s^{*}s=\mathbf{1}$ and
$ss^{*}=[d_{+}]\neq\mathbf{1}$, so the unit of $M(X)$ is infinite and $M(X)$ is
not weakly finite.
Note that $X$ is amenable. Set $F_{n}=\\{x_{1},\ldots,x_{n}\\}\subset X$. Let
$N_{r}(A)$ denote the $r$-neighborhood of the set $A$. Then
$N_{r}(F_{n})\setminus F_{n}$ is empty when $\log(n+2)>r$, hence
$\\{F_{n}\\}_{n\in\mathbb{N}}$ is a Følner sequence. For $r=\log m$, the ball
$B_{r}(x_{n})$ of radius $r$ centered at $x_{n}$ contains either no other
points besides $x_{n}$ (if $n\geq m+1$), or it consists of the points
$x_{1},\ldots,x_{m}$ (if $n\leq m$), hence the metric on $X$ is of bounded
geometry. In fact, this space is of asymptotic dimension zero.
∎
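The distance estimates in the proof can be verified on finite truncations. In the sketch below (hypothetical helper names; the infimum over $m$ is truncated at $M=200$, an assumption harmless for the tested range), the even points $x_{2k}$ stay within $\log 2$ of $X^{\prime}$, the odd points satisfy $d(x_{2k-1},X^{\prime})=\log(k+1)$, and every $x^{\prime}_{n}$ stays within $\log 2$ of $X$:

```python
import math

def x(n):
    # x_n = (log 2, log 3, ..., log(n+1), 0, 0, ...), kept as a finite list
    return [math.log(k + 1) for k in range(1, n + 1)]

def xp(n):
    # x'_n repeats every entry of x_n twice
    return [v for k in range(1, n + 1) for v in (math.log(k + 1),) * 2]

def sup_dist(u, v):
    # the sup-metric of l^infty, with vectors padded by zeros
    L = max(len(u), len(v))
    u = u + [0.0] * (L - len(u))
    v = v + [0.0] * (L - len(v))
    return max(abs(a - b) for a, b in zip(u, v))

M = 200  # truncation of the infimum over m (assumed large enough here)

def dist_to_Xp(n):
    return min(sup_dist(x(n), xp(m)) for m in range(1, M + 1))

def dist_to_X(n):
    return min(sup_dist(xp(n), x(m)) for m in range(1, M + 1))

log2 = math.log(2)
for k in range(1, 16):
    assert dist_to_Xp(2 * k) <= log2 + 1e-12                      # even points: close to X'
    assert abs(dist_to_Xp(2 * k - 1) - math.log(k + 1)) < 1e-12   # odd points: drift away
for n in range(1, 31):
    assert dist_to_X(n) <= log2 + 1e-12                           # every x'_n is close to X
```

This reproduces numerically the asymmetry $2d(x,X^{\prime})$ unbounded on $X_{-}$ versus $2d(x^{\prime},X)\leq 2\log 2$ everywhere, which drives the proof.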
### 3\. Case of free groups
Let $X=\Gamma$ be a finitely generated group with the word length metric
$d_{X}$. Consider the following property (I):
* (i1)
$X=Y\sqcup Z$, and for any $D>0$ there exists $z\in Z$ such that
$d_{X}(z,Y)>D$;
* (i2)
there exist $g,h\in\Gamma$ such that $gY\subset Y$, $hZ\subset Y$ and $gY\cap
hZ=\emptyset$;
* (i3)
there exists $C>0$ such that $|d_{X}(gy,hz)-d_{X}(y,z)|<C$ for any $y\in Y$,
$z\in Z$.
The property (I) is neither stronger nor weaker than non-amenability. If we
require additionally that $Y\sim Z$ then it would imply non-amenability.
###### Lemma 3.1.
The free group $\mathbb{F}_{2}$ on two generators satisfies the property (I).
###### Proof.
Let $a$ and $b$ be the generating elements of $\mathbb{F}_{2}$, and let
$Y\subset X$ be the set of all reduced words in $a$, $a^{-1}$, $b$ and
$b^{-1}$ that begin with $a$ or $a^{-1}$, $Z=X\setminus Y$. Let $g=ab$,
$h=a^{2}$. Clearly, $gY\subset Y$ and $hZ\subset Y$.
If $z$ begins with $b^{n}$, $n>D$, then $d_{X}(z,Y)\geq n$.
If $y\in Y$, $z\in Z$ then
$d_{X}(aby,a^{2}z)=|y^{-1}b^{-1}a^{-1}a^{2}z|=|y^{-1}b^{-1}az|=|y^{-1}z|+2=d_{X}(y,z)+2,$
as the word $y^{-1}b^{-1}az$ cannot be reduced any further ($y^{-1}$ ends with
$a^{\pm}$, and $z$ either begins with $b^{\pm}$, or is an empty word).
∎
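The computation in the proof can be machine-checked on all short reduced words. The sketch below (hypothetical helper names; words over the generators are encoded as strings, with capital letters for inverses) verifies $d_{X}(gy,hz)=d_{X}(y,z)+2$ for $g=ab$, $h=a^{2}$ and all $y\in Y$, $z\in Z$ of length at most $3$:

```python
import itertools

INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}  # capitals denote inverses

def reduce_word(w):
    # free reduction: cancel adjacent inverse pairs
    out = []
    for c in w:
        if out and out[-1] == INV[c]:
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def inverse(w):
    return ''.join(INV[c] for c in reversed(w))

def dist(u, v):
    # word-length metric on F_2: d(u, v) = |u^{-1} v|
    return len(reduce_word(inverse(u) + v))

# all reduced words of length <= 3
words = sorted({reduce_word(''.join(p))
                for L in range(4)
                for p in itertools.product('aAbB', repeat=L)})
Y = [w for w in words if w[:1] in ('a', 'A')]
Z = [w for w in words if w[:1] not in ('a', 'A')]  # includes the empty word

g, h = 'ab', 'aa'  # g = ab, h = a^2
for y in Y:
    for z in Z:
        assert dist(reduce_word(g + y), reduce_word(h + z)) == dist(y, z) + 2
```

The exhaustive check confirms that the word $y^{-1}b^{-1}az$ never reduces further, exactly as the proof argues.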
###### Theorem 3.2.
Let $X=\Gamma$ be a group with the property (I). Then $M(X)$ is not weakly
finite.
###### Proof.
We shall prove that there exists $d\in\mathcal{M}(X)$ such that
$[d^{*}d]=\mathbf{1}$ and $[dd^{*}]\neq\mathbf{1}$.
Let $X=Y\sqcup Z$, $g,h\in\Gamma$ satisfy the conditions of the property (I).
Define a map $f:X\to X$ by setting
$f(x)=\left\\{\begin{array}[]{cl}gx,&\mbox{if\ }x\in Y;\\\ hx,&\mbox{if\ }x\in
Z.\end{array}\right.$
The maps $f|_{Y}$ and $f|_{Z}$ are left multiplications by $g$ and $h$,
respectively, hence are isometries. If $y\in Y$, $z\in Z$ then (i3) holds for
some $C>0$, hence $|d_{X}(f(x),f(y))-d_{X}(x,y)|<C$ holds for any $x,y\in X$.
Set
$d(x,y^{\prime})=\inf\nolimits_{u\in X}[d_{X}(x,u)+C+d_{X}(f(u),y)].$
It is easy to check that $d$ satisfies all triangle inequalities, hence $d$ is
a metric, $d\in\mathcal{M}(X)$. Then
$d^{*}d(x,x^{\prime})=2d(x,X^{\prime})=2\inf\nolimits_{u,y\in X}[d_{X}(x,u)+C+d_{X}(f(u),y)]=2C$
for any $x\in X$, and
$dd^{*}(x,x^{\prime})=2d(x^{\prime},X)=2\inf\nolimits_{z,u\in X}[d_{X}(z,u)+C+d_{X}(f(u),x)]=2(C+d_{X}(f(X),x))\ \geq\ 2C+2d_{X}(Y,x)$
is not bounded. Thus, $[d^{*}d]=\mathbf{1}$, while $[dd^{*}]\neq\mathbf{1}$.
∎
### 4\. Case of abelian groups
A positive result is given by the following theorem.
###### Theorem 4.1.
Let $X=\mathbb{R}^{n}$, with a norm $\|\cdot\|$, and let the metric $d_{X}$ be
determined by the norm $\|\cdot\|$. If $s\in M(X)$, $s^{*}s=\mathbf{1}$ then
$ss^{*}=\mathbf{1}$.
###### Proof.
Let $d\in\mathcal{M}(X)$, $[d]=s$. As $[d^{*}d]=\mathbf{1}$, there exists
$C>0$ such that $2d(x,X^{\prime})<C$ for any $x\in X$. It suffices to show
that there exists $D>0$ such that $2d(x^{\prime},X)<D$ for any $x\in X$.
Suppose the contrary: for any $n\in\mathbb{N}$ there exists $x_{n}\in X$ such
that $d(x^{\prime}_{n},X)>2n$. Then $d(y^{\prime},X)>n$ for any $y\in X$ such
that $d_{X}(y,x_{n})\leq n$.
As $2d(x,X^{\prime})<C$ for any $x\in X$, there is a (not continuous) map
$f:X\to X$ such that $d_{X}(x,f(x)^{\prime})<C/2$ for any $x\in X$. This map
satisfies
$|d_{X}(f(x),f(y))-d_{X}(x,y)|<C\quad\mbox{for\ any\ }x,y\in X.$
Then there exists a continuous map $g:X\to X$ such that $d_{X}(f(x),g(x))<C$
for any $x\in X$. This map satisfies
$|d_{X}(g(x),g(y))-d_{X}(x,y)|<2C\quad\mbox{for\ any\ }x,y\in X,$
and $d(x,g(x)^{\prime})<3C/2$ for any $x\in X$.
Let $x_{0}=0$ denote the origin of $X$, and let $S_{R}$ be the sphere of
radius $R$ centered at the origin,
$S_{R}=\\{x\in X:d_{X}(x_{0},x)=R\\}.$
Set $x^{\prime}_{0}=f(x_{0})^{\prime}\in X^{\prime}$. For $x\in S_{R}$, we
have
$|d_{X}(x_{0},g(x))|=|d_{X}(x^{\prime}_{0},g(x)^{\prime})-R|\leq 2C.$
For $R>3C$, set $h_{R}(x)=R\,\frac{g(x)}{\|g(x)\|}$. Then $h_{R}$ is a
continuous map from $S_{R}$ to $S_{R}$ for any $R>3C$, and we have
$|d_{X}(h_{R}(x),h_{R}(y))-d_{X}(x,y)|<3C\quad\mbox{for\ any\ }x,y\in S_{R},$
and $d(x,h_{R}(x)^{\prime})<5C$ for any $x\in X$.
Let $R_{n}=d_{X}(x^{\prime}_{0},x^{\prime}_{n})$. Then $x_{n}\in S_{R_{n}}$.
If $d_{X}(y,x_{n})\leq n$ then $d(y^{\prime},X)>n$, hence $y\notin
h_{R_{n}}(S_{R_{n}})$, so the map $h_{R_{n}}$ is not surjective. Then, by the
Borsuk–Ulam Theorem, there exists a pair of antipodal points $y_{1},y_{2}\in
S_{R_{n}}$ such that $h_{R_{n}}(y_{1})=h_{R_{n}}(y_{2})=z$. As
$d(y_{i},z^{\prime})<5C$, $i=1,2$, and $d_{X}(y_{1},y_{2})=2R_{n}$, the
triangle inequality (for the triangle with the vertices $y_{1}$, $y_{2}$,
$z^{\prime}$) is violated when $2R_{n}>10C$. This contradiction proves the
claim.
∎
###### Corollary 4.2.
Let $X=\mathbb{Z}^{n}$ with an $l_{p}$-metric, $1\leq p\leq\infty$, and let
$s\in M(X)$. Then $s^{*}s=\mathbf{1}$ implies $ss^{*}=\mathbf{1}$.
###### Proof.
By Proposition 9.2 of [4], $M(\mathbb{Z}^{n})=M(\mathbb{R}^{n})$.
∎
### 5\. $M(X)$ doesn’t respect equivalences
###### Proposition 5.1.
The inverse semigroup $M(X)$ is not a coarse invariant.
###### Proof.
The space $X$ from Theorem 2.2 is coarsely equivalent to the space
$Y=\\{n^{2}:n\in\mathbb{N}\\}$ with the standard metric, which we denote by
$b_{X}$. Indeed, for $n<m$, we have $b_{X}(x_{n},x_{m})=m^{2}-n^{2}$ and
$d_{X}(x_{n},x_{m})=\log(m+1)$. As $m^{2}-(m-1)^{2}=2m-1>\log(m+1)$ for $m>1$,
we have $d_{X}(x,y)\leq b_{X}(x,y)$ for any $x,y\in X$, and taking
$f(t)=e^{2t}$, we have $b_{X}(x,y)\leq f(d_{X}(x,y))$ for any $x,y\in X$.
For the metric $d_{X}$ from Theorem 2.2, the inverse semigroup $M(X,d_{X})$ is
not commutative ($[d^{*}d]\neq[dd^{*}]$), while the inverse semigroup
$M(X,b_{X})$ is commutative by Prop. 7.1 of [5].
∎
###### Theorem 5.2.
The inverse semigroup $M(X)$ is not a quasi-isometry invariant.
###### Proof.
Let $X=\mathbb{N}$ be endowed with the metric $b_{X}$ given by
$b_{X}(n,m)=|2^{n}-2^{m}|$, $n,m\in\mathbb{N}$, and let
$y_{n}=s(n)4^{[\frac{n}{2}]}$, where $s(n)=(-1)^{[\frac{n-1}{2}]}$ and $[t]$
is the greatest integer not exceeding $t$. Let $d_{X}$ be the metric on $X$
given by $d_{X}(n,m)=|y_{n}-y_{m}|$, $n,m\in\mathbb{N}$. The two metrics are
quasi-isometric. Indeed, suppose that $n>m$. If $s(n)=-s(m)$ then
$d_{X}(n,m)=4^{[\frac{n}{2}]}+4^{[\frac{m}{2}]}\leq 4^{\frac{n}{2}+1}+4^{\frac{m}{2}+1}=4(2^{n}+2^{m})\leq 12b_{X}(n,m);$
$d_{X}(n,m)=4^{[\frac{n}{2}]}+4^{[\frac{m}{2}]}\geq
4^{\frac{n}{2}}+4^{\frac{m}{2}}\geq 2^{n}-2^{m}=b_{X}(n,m).$
We use here that $\frac{2^{r}+1}{2^{r}-1}\leq 3$ for any $r=n-m\in\mathbb{N}$.
If $s(n)=s(m)$ then
$d_{X}(n,m)=4^{[\frac{n}{2}]}-4^{[\frac{m}{2}]}\leq
4^{\frac{n}{2}+1}-4^{\frac{m}{2}}=4\cdot 2^{n}-2^{m}\leq 7b_{X}(n,m).$
We use here that $\frac{4\cdot 2^{r}-1}{2^{r}-1}\leq 7$ for any
$r=n-m\in\mathbb{N}$. To obtain an estimate in other direction, note that
$s(n)=s(m)$ implies that $[n/2]\geq[m/2]+1$, and that $n-m\neq 2$. If $n=m+1$
then
$d_{X}(m+1,m)=3\cdot 4^{[m/2]}\geq\frac{3}{2}\cdot 2^{m}=\frac{3}{2}\ b_{X}(m+1,m).$
If $n\geq m+3$ then $[\frac{n}{2}]\geq[\frac{m}{2}]+1$. If
$[\frac{n}{2}]=[\frac{m}{2}]+1$ then $m$ is even and $n=m+3$, hence
$d_{X}(n,m)=3\cdot 4^{\frac{m}{2}}=3\cdot 2^{m}=\frac{3}{7}\ b_{X}(n,m),$
and if $[\frac{n}{2}]\geq[\frac{m}{2}]+2$ then
$d_{X}(n,m)=4^{[\frac{n}{2}]}-4^{[\frac{m}{2}]}\geq\frac{15}{16}\cdot 4^{[\frac{n}{2}]}\geq\frac{15}{16}\cdot 2^{n-1}=\frac{15}{32}\cdot 2^{n}\geq\frac{15}{32}\ b_{X}(n,m)\geq\frac{3}{7}\ b_{X}(n,m).$
Thus,
$\frac{3}{7}\ b_{X}(n,m)\leq d_{X}(n,m)\leq 12\cdot b_{X}(n,m)$
for any $n,m\in\mathbb{N}$, so the two metrics are quasi-isometric.
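The two-sided estimate can be confirmed by brute force; the lower constant $3/7$ is actually attained, e.g. at $(n,m)=(5,2)$. A sketch (hypothetical helper names):

```python
def s(n):
    # the sign s(n) = (-1)^[(n-1)/2]
    return (-1) ** ((n - 1) // 2)

def y(n):
    # y_n = s(n) * 4^[n/2]
    return s(n) * 4 ** (n // 2)

def dX(n, m):
    return abs(y(n) - y(m))

def bX(n, m):
    return abs(2 ** n - 2 ** m)

# check (3/7) b_X <= d_X <= 12 b_X on a large initial range
for n in range(2, 41):
    for m in range(1, n):
        r = dX(n, m) / bX(n, m)
        assert 3 / 7 - 1e-12 <= r <= 12 + 1e-12

# the lower constant 3/7 is attained, e.g. at (n, m) = (5, 2):
# d_X = |16 - 4| = 12 and b_X = |32 - 4| = 28
assert abs(dX(5, 2) / bX(5, 2) - 3 / 7) < 1e-12
```

On this range the ratio in fact stays between $3/7$ and $2$, so the constants in the displayed estimate are comfortable.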
We already know that $M(X,b_{X})$ is commutative, so it remains to exhibit two
non-commuting elements in $M(X,d_{X})$.
Let
$X=\\{(y_{n},0):n\in\mathbb{N}\\},\quad
X^{\prime}=\\{(-y_{n},1):n\in\mathbb{N}\\},$
and let $d$ be the metric on $X\sqcup X^{\prime}$ induced from the standard
metric on the plane $\mathbb{R}^{2}$, $s=[d]$. Note that $-y_{n}=y_{n-1}$ if
$y_{n}>0$ and $n>1$, and $-y_{n}=y_{n+1}$ if $y_{n}<0$. Hence, $d^{*}=d$ and
$s^{2}=\mathbf{1}$.
Let
$A_{+}=\\{y_{n}:n\in\mathbb{N};y_{n}>0\\},\quad
A_{-}=\\{y_{n}:n\in\mathbb{N};y_{n}<0\\},$
$X=A_{+}\sqcup A_{-}$, and let the metrics $d_{+}$ and $d_{-}$ on $X\sqcup
X^{\prime}$ be given by
$d_{\pm}(n,m^{\prime})=\inf\nolimits_{k\in A_{\pm}}[d_{X}(n,k)+1+d_{X}(k,m)],$
$e=[d_{+}]$, $f=[d_{-}]$. Then $es=\mathbf{0}$, while $se=f$, i.e. $e$ and $s$
do not commute.
∎
## Part II When $M(X)$ is commutative
### 6\. R-spaces
###### Definition 6.1.
A metric space $X$ is an R-space (R for rigid) if, for any $C>0$ and any two
sequences $\\{x_{n}\\}_{n\in\mathbb{N}}$, $\\{y_{n}\\}_{n\in\mathbb{N}}$ of
points in $X$ satisfying
$|d_{X}(x_{n},x_{m})-d_{X}(y_{n},y_{m})|<C\quad\mbox{for\ any\
}n,m\in\mathbb{N}$ (6.1)
there exists $D>0$ such that $d_{X}(x_{n},y_{n})<D$ for any $n\in\mathbb{N}$.
###### Example 6.2.
As $M(X)$ is commutative for any $X\subset[0,\infty)$, it follows from
Theorem 7.2 below that such $X$ is an R-space. A less trivial example is a
planar spiral $X$ given by $r=e^{\varphi}$ in polar coordinates with the
metric induced from the standard metric on the plane. Indeed, take any two
sequences $\\{x_{n}\\}_{n\in\mathbb{N}}$, $\\{y_{n}\\}_{n\in\mathbb{N}}$, in
$X$. Without loss of generality we may assume that $x_{1}=y_{1}=0$ is the
origin. If these sequences satisfy (6.1) then
$|d_{X}(0,x_{n})-d_{X}(0,y_{n})|<C$
for some fixed $C>0$ (we take $m=1$). If $x_{n}=(r_{n},\varphi_{n})$,
$y_{n}=(s_{n},\psi_{n})$ then $d_{X}(0,x_{n})=r_{n}$, $d_{X}(0,y_{n})=s_{n}$,
and we have $|r_{n}-s_{n}|<C$. Then $x_{n}$ and $y_{n}$ lie in a ring of width
$C$, say $R\leq r\leq R+C$. If $R$ is sufficiently large then
$d_{X}(x_{n},y_{n})\leq(\log(R+C)-\log R)(R+C),$
which is bounded as a function of $R$.
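The boundedness claim for the ring can be checked numerically. In the sketch below (hypothetical helper names; a grid sample of the ring $R\leq r\leq R+C$ stands in for the full ring), the maximal chord distance between spiral points whose radii differ by at most $C=1$ stays below $2C$ no matter how large $R$ is:

```python
import math

def pt(phi):
    # point of the spiral r = e^phi in Cartesian coordinates
    r = math.exp(phi)
    return (r * math.cos(phi), r * math.sin(phi))

def chord(p, q):
    # the metric induced from the standard metric on the plane
    return math.hypot(p[0] - q[0], p[1] - q[1])

C = 1.0
worst = 0.0
for R in (10.0, 1e2, 1e4, 1e8):
    # the arc of the spiral lying in the ring R <= r <= R+C
    lo, hi = math.log(R), math.log(R + C)
    grid = [lo + (hi - lo) * i / 50 for i in range(51)]
    worst = max(worst, max(chord(pt(a), pt(b)) for a in grid for b in grid))

# uniformly bounded in R (in fact the distances approach C * sqrt(2))
assert 0.0 < worst <= 2 * C
```

The bound reflects that on this spiral the radius determines the angle, so a thin ring meets the spiral in a single short arc.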
Recall that two maps $f,g:X\to X$ are equivalent if there exists $C>0$ such
that $d_{X}(f(x),g(x))<C$ for any $x\in X$. A map $f:X\to X$ is an almost
isometry if there exists $C>0$ such that
$|d_{X}(f(x),f(y))-d_{X}(x,y)|<C$
for any $x,y\in X$ and if for any $y\in X$ there exists $x\in X$ such that
$d_{X}(f(x),y)<C$ (the latter condition provides existence of an ‘inverse’ map
$g:X\to X$ such that $f\circ g$ and $g\circ f$ are equivalent to the identity
map). The set $AI(X)$ of all equivalence classes of almost isometries of $X$
is a group with respect to the composition. A metric space $X$ is called AI-
rigid [1] if the group $AI(X)$ is trivial.
###### Proposition 6.3.
A countable R-space $X$ is AI-rigid.
###### Proof.
Let $\\{x_{n}\\}_{n\in\mathbb{N}}$ be a sequence of all points of $X$, and let
$f:X\to X$ be an almost isometry. Set $y_{n}=f(x_{n})$. Then there exists
$C>0$ such that
$|d_{X}(f(x_{n}),f(x_{m}))-d_{X}(x_{n},x_{m})|<C$
for any $n,m\in\mathbb{N}$, hence there exists $D>0$ such that
$d_{X}(x_{n},f(x_{n}))=d_{X}(x_{n},y_{n})<D$
for any $n\in\mathbb{N}$, i.e. $f$ is equivalent to the identity map, hence
$X$ is AI-rigid.
∎
###### Example 6.4.
Euclidean spaces $\mathbb{R}^{n}$, $n\geq 1$, are not R-spaces, as they have a
non-trivial symmetry. The Archimedean spiral $r=\varphi$ is not an R-space, as
it is $\pi$-dense in $\mathbb{R}^{2}$.
### 7\. Criterion of commutativity
Let $a,b:T\to[0,\infty)$ be two functions on a set $T$. We say that $a\preceq
b$ if there exists a monotonically increasing function
$\varphi:[0,\infty)\to[0,\infty)$ with $\lim_{s\to\infty}\varphi(s)=\infty$
(we call such functions reparametrizations) such that $a(t)\leq\varphi(b(t))$
for any $t\in T$.
The following Lemma should be known, but we could not find a reference.
###### Lemma 7.1.
Let $a,b:T\to[0,\infty)$ be two functions. If $a\preceq b$ is not true then
there exists $C>0$ and a sequence $(t_{n})_{n\in\mathbb{N}}$ of points in $T$
such that $b(t_{n})<C$ for any $n\in\mathbb{N}$ and
$\lim_{n\to\infty}a(t_{n})=\infty$.
###### Proof.
If $a\preceq b$ is not true then for any reparametrization $\varphi$ there
exists $t\in T$ such that $a(t)>\varphi(b(t))$. Suppose that for any $C>0$,
the value $\sup\\{a(t):b(t)\leq C\\}$ is finite. Then set
$f(C)=\max(\sup\\{a(t):b(t)\leq C\\},C).$
This gives a reparametrization $f$ satisfying $a(t)\leq f(b(t))$ for any
$t\in T$ — a contradiction. Thus, there exists $C>0$ such that
$\sup\\{a(t):b(t)\leq C\\}=\infty$. It remains to choose a sequence
$(t_{n})_{n\in\mathbb{N}}$ in the set $\\{t\in T:b(t)\leq C\\}$ such that
$a(t_{n})>n$.
∎
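The dichotomy in the proof can be illustrated on two concrete pairs of functions, a sketch with hypothetical names: for $a(t)=t$, $b(t)=1$ the relation $a\preceq b$ fails and the witness sequence is explicit, while for $a(t)=2t$, $b(t)=t$ the reparametrization $f$ from the proof works:

```python
# a ⪯ b fails for a(t) = t, b(t) = 1 on T = N: b is bounded on all of T
# while a is unbounded, so the witness sequence of the lemma is t_n = n
a = lambda t: t
b = lambda t: 1
C = 2
witness = list(range(1, 101))
assert all(b(t) < C for t in witness)
assert a(witness[-1]) == 100  # a -> infinity along the witness sequence

# when sup{a(t) : b(t) <= C} is finite for every C, the reparametrization
# f(C) = max(sup{a(t) : b(t) <= C}, C) from the proof yields a <= f(b(t))
a2 = lambda t: 2 * t
b2 = lambda t: t
T = range(1, 1001)

def f(c):
    vals = [a2(t) for t in T if b2(t) <= c]
    return max(max(vals, default=0), c)

assert all(a2(t) <= f(b2(t)) for t in T)
```

Here the finite range stands in for an infinite index set; the point is only to exhibit the two branches of the argument.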
###### Theorem 7.2.
$X$ is an R-space if and only if $M(X)$ is commutative.
###### Proof.
Let $X$ be an R-space. We shall show that any $s\in M(X)$ is a projection. It
would follow that $M(X)$ is commutative. First, we shall show that any $s\in
M(X)$ is selfadjoint. Let $d\in\mathcal{M}(X)$, $[d]=s$. Suppose that
$[d^{*}]\neq[d]$. This means that either $d^{*}\preceq d$ or $d\preceq d^{*}$
is not true, where $d$ and $d^{*}$ are considered as functions on $T=X\times
X^{\prime}$. Without loss of generality we may assume that $d^{*}\preceq d$ is
not true. Then there exist sequences $(x_{n})_{n\in\mathbb{N}}$ in $X$ and
$(y^{\prime}_{n})_{n\in\mathbb{N}}$ in $X^{\prime}$ and $L>0$ such that
$d(x_{n},y^{\prime}_{n})<L$ for any $n\in\mathbb{N}$ and
$\lim_{n\to\infty}d(y_{n},x^{\prime}_{n})=\infty$ (recall that
$d^{*}(x,y^{\prime}):=d(y,x^{\prime})$).
Take $n,m\in\mathbb{N}$. Since $d(x_{n},y^{\prime}_{n})<L$,
$d(x_{m},y^{\prime}_{m})<L$, we have
$|d_{X}(x_{n},x_{m})-d_{X}(y_{n},y_{m})|=|d_{X}(x_{n},x_{m})-d_{X}(y^{\prime}_{n},y^{\prime}_{m})|<2L,$
and, since $X$ is an R-space, there exists $D>0$ such that
$d_{X}(x_{n},y_{n})<D$ for any $n\in\mathbb{N}$.
Then, using the triangle inequality for the quadrangle
$x_{n}y_{n}x^{\prime}_{n}y^{\prime}_{n}$, we get
$d(y_{n},x^{\prime}_{n})\leq d_{X}(y_{n},x_{n})+d(x_{n},y^{\prime}_{n})+d_{X}(y^{\prime}_{n},x^{\prime}_{n})=d_{X}(y_{n},x_{n})+d(x_{n},y^{\prime}_{n})+d_{X}(y_{n},x_{n})<D+L+D,$
which contradicts the condition
$\lim_{n\to\infty}d(y_{n},x^{\prime}_{n})=\infty$.
Now, let us show that $[d]\in M(X)$ is idempotent if $X$ is an R-space. Let
$a(x)=d(x,X^{\prime})$, $b(x)=d(x,x^{\prime})$. It was shown in [4] (Theorem
3.1 and remark at the end of Section 11) that if $[d]$ is selfadjoint then it
is idempotent if and only if $b\preceq a$. Suppose that the latter is not
true. Then there exists $L>0$ and a sequence $\\{x_{n}\\}_{n\in\mathbb{N}}$ of
points in $X$ such that $d(x_{n},X^{\prime})<L$ for any $n\in\mathbb{N}$ and
$\lim_{n\to\infty}d(x_{n},x^{\prime}_{n})=\infty$. In particular, this means
that there exists a sequence $\\{y_{n}\\}_{n\in\mathbb{N}}$ of points in $X$
such that $d(x_{n},y^{\prime}_{n})<L$ for any $n\in\mathbb{N}$. Since $[d]$ is
selfadjoint, for any $L>0$ there exists $R>0$ such that if $d(x,y^{\prime})<L$
then $d(x^{\prime},y)<R$.
It follows from the triangle inequality for the quadrangle
$x_{n}x_{m}y^{\prime}_{n}y^{\prime}_{m}$ that
$|d_{X}(x_{n},x_{m})-d_{X}(y_{n},y_{m})|=|d_{X}(x_{n},x_{m})-d_{X}(y^{\prime}_{n},y^{\prime}_{m})|\leq d(x_{n},y^{\prime}_{n})+d(x_{m},y^{\prime}_{m})\ <\ 2L$
for any $n,m\in\mathbb{N}$, hence, the property of being an R-space implies
that there exists $D>0$ such that $d_{X}(x_{n},y_{n})<D$ for any
$n\in\mathbb{N}$. Therefore,
$d(x_{n},x^{\prime}_{n})\leq d_{X}(x_{n},y_{n})+d(y_{n},x^{\prime}_{n})<D+R$
for any $n\in\mathbb{N}$ — a contradiction with
$\lim_{n\to\infty}d(x_{n},x^{\prime}_{n})=\infty$.
In the opposite direction, suppose that $X$ is not an R-space, i.e. that there
exists $C>0$ and sequences $\\{x_{n}\\}_{n\in\mathbb{N}}$,
$\\{y_{n}\\}_{n\in\mathbb{N}}$ of points in $X$ such that (6.1) holds and
$\lim_{n\to\infty}d_{X}(x_{n},y_{n})=\infty$.
Note that these sequences cannot be bounded. Indeed, if there exists $R>0$
such that $d_{X}(x_{1},x_{n})<R$ for any $n\in\mathbb{N}$ then
$d_{X}(y_{1},y_{n})\leq d_{X}(x_{1},x_{n})+C=R+C$
for any $n\in\mathbb{N}$, but then
$d_{X}(x_{n},y_{n})\leq
d_{X}(x_{n},x_{1})+d_{X}(x_{1},y_{1})+d_{X}(y_{1},y_{n})<R+d_{X}(x_{1},y_{1})+R+C,$
which contradicts $\lim_{n\to\infty}d_{X}(x_{n},y_{n})=\infty$. Passing to a
subsequence, we may assume that
$d_{X}(x_{k},x_{n})>k,\ d_{X}(x_{k},y_{n})>k,\ d_{X}(y_{k},x_{n})>k,\
d_{X}(y_{k},y_{n})>k$
for any $n<k$, and $d_{X}(x_{k},y_{k})>k$ for any $k\in\mathbb{N}$. In
particular, this means that
$d_{X}(x_{k},y_{n})>k\quad\mbox{for\ any\ }k,n\in\mathbb{N}.$ (7.1)
Let us define two metrics on $X$ and show that they don’t commute. For $x,y\in
X$ set
$d_{1}(x,y^{\prime})=\min\nolimits_{n\in\mathbb{N}}[d_{X}(x,x_{n})+C+d_{X}(y_{n},y)];$
$d_{2}(x,y^{\prime})=\min\nolimits_{n\in\mathbb{N}}[d_{X}(x,y_{n})+C+d_{X}(x_{n},y)]$
(it is clear that the minimum is attained on some $n\in\mathbb{N}$ as
$x_{n},y_{n}\to\infty$). Let us show that $d_{1}$ is a metric on $X\sqcup
X^{\prime}$ (the case of $d_{2}$ is similar).
Due to symmetry, it suffices to check the two triangle inequalities for the
triangle $xzy^{\prime}$, $z\in X$:
$d_{1}(x,y^{\prime})+d_{1}(z,y^{\prime})=\min\nolimits_{n\in\mathbb{N}}[d_{X}(x,x_{n})+C+d_{X}(y_{n},y)]+\min\nolimits_{m\in\mathbb{N}}[d_{X}(z,x_{m})+C+d_{X}(y_{m},y)]=d_{X}(x,x_{n_{x}})+d_{X}(y_{n_{x}},y)+d_{X}(y,y_{n_{z}})+d_{X}(z,x_{n_{z}})+2C\geq d_{X}(x,x_{n_{x}})+d_{X}(y_{n_{x}},y_{n_{z}})+d_{X}(z,x_{n_{z}})+2C\geq d_{X}(x,x_{n_{x}})+(d_{X}(x_{n_{x}},x_{n_{z}})-C)+d_{X}(z,x_{n_{z}})+2C=d_{X}(x,x_{n_{x}})+d_{X}(x_{n_{x}},x_{n_{z}})+d_{X}(z,x_{n_{z}})+C\geq d_{X}(x,z)+C\ \geq\ d_{X}(x,z),$
where $n_{x}$ and $n_{z}$ are indices realizing the two minima,
and
$d_{1}(x,y^{\prime})=\min\nolimits_{n\in\mathbb{N}}[d_{X}(x,x_{n})+C+d_{X}(y_{n},y)]\leq d_{X}(x,x_{n_{z}})+d_{X}(y_{n_{z}},y)+C\leq d_{X}(x,z)+d_{X}(z,x_{n_{z}})+d_{X}(y_{n_{z}},y)+C=d_{X}(x,z)+d_{1}(z,y^{\prime}).$
Let us evaluate $(d_{2}\circ d_{1})(x_{k},x^{\prime}_{k})$ and $(d_{1}\circ
d_{2})(x_{k},x^{\prime}_{k})$.
Taking fixed values $n=m=k$, $u=y_{k}$, we get
$(d_{2}\circ d_{1})(x_{k},x^{\prime}_{k})=\inf\nolimits_{u\in X}\\{\min\nolimits_{n\in\mathbb{N}}[d_{X}(x_{k},x_{n})+C+d_{X}(y_{n},u)]+\min\nolimits_{m\in\mathbb{N}}[d_{X}(u,y_{m})+C+d_{X}(x_{m},x_{k})]\\}\leq[d_{X}(x_{k},x_{k})+C+d_{X}(y_{k},y_{k})]+[d_{X}(y_{k},y_{k})+C+d_{X}(x_{k},x_{k})]=2C.$
Using the triangle inequality for the triangle $x_{n}x_{m}u$ and (7.1), we get
$(d_{1}\circ d_{2})(x_{k},x^{\prime}_{k})=\inf\nolimits_{u\in X}\\{\min\nolimits_{n\in\mathbb{N}}[d_{X}(x_{k},y_{n})+C+d_{X}(x_{n},u)]+\min\nolimits_{m\in\mathbb{N}}[d_{X}(u,x_{m})+C+d_{X}(y_{m},x_{k})]\\}\geq\inf\nolimits_{u\in X}\\{\min\nolimits_{n\in\mathbb{N}}[d_{X}(x_{k},y_{n})+d_{X}(x_{n},u)]+\min\nolimits_{m\in\mathbb{N}}[d_{X}(u,x_{m})+d_{X}(y_{m},x_{k})]\\}\geq\min\nolimits_{n,m\in\mathbb{N}}[d_{X}(x_{k},y_{n})+d_{X}(x_{n},x_{m})+d_{X}(y_{m},x_{k})]>2k.$
Thus, for the sequence $\\{x_{k}\\}_{k\in\mathbb{N}}$ of points in $X$, the
distances $(d_{2}\circ d_{1})(x_{k},x^{\prime}_{k})$ are uniformly bounded,
while $\lim_{k\to\infty}(d_{1}\circ d_{2})(x_{k},x^{\prime}_{k})=\infty$,
hence the metrics $d_{2}\circ d_{1}$ and $d_{1}\circ d_{2}$ are not coarsely
equivalent, i.e. $[d_{2}][d_{1}]\neq[d_{1}][d_{2}]$.
∎
### References
* [1] A. Kar, J.-F. Lafont, B. Schmidt. Rigidity of Almost-Isometric Universal Covers. Indiana Univ. Math. J. 65 (2016), 585–613.
* [2] M. V. Lawson. Inverse Semigroups: The Theory of Partial Symmetries. World Scientific, 1998.
* [3] V. Manuilov. Roe bimodules as morphisms of discrete metric spaces. Russian J. Math. Phys., 26 (2019), 470–478.
* [4] V. Manuilov. Metrics on doubles as an inverse semigroup. J. Geom. Anal., to appear.
* [5] V. Manuilov. Metrics on doubles as an inverse semigroup II. J. Math. Anal. Appl., to appear.
* [6] A. L. T. Paterson. Groupoids, Inverse Semigroups, and their Operator Algebras. Springer, 1998.
2101.01018
Anomaly constraint on chiral central charge of (2+1)d topological order
Ryohei Kobayashi
Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8583, Japan
In this short paper, we argue that the chiral central charge $c_{-}$ of a
$(2+1)$d topologically ordered state is sometimes strongly constrained by the
’t Hooft anomaly of an anti-unitary global symmetry. For example, if a
$(2+1)$d fermionic TQFT has a time reversal anomaly with $T^{2}=(-1)^{F}$
labeled by $\nu\in\mathbb{Z}_{16}$, the TQFT must have $c_{-}=1/4$ mod $1/2$
for odd $\nu$, while $c_{-}=0$ mod $1/2$ for even $\nu$. This generalizes to
fermionic cases the fact that a bosonic TQFT with a $T$ anomaly in a particular
class must carry $c_{-}=4$ mod $8$. We also study such a constraint for a
fermionic TQFT with $U(1)\times CT$ symmetry, which is regarded as a gapped
surface of the topological superconductor in class AIII.
## 1 Introduction
The ’t Hooft anomaly in quantum field theory is a mild violation of the
conservation law due to quantum effects. It is well known that the ’t Hooft
anomaly constrains the low energy behavior of the system, since nontrivial
degrees of freedom are needed in the IR to match the given anomaly. For example,
the seminal Lieb-Schultz-Mattis theorem [1, 2, 3] and its generalizations [4,
5, 6, 7, 8] provide strong spectral constraints on lattice quantum systems,
which are understood as the consequence of an ’t Hooft anomaly involving
lattice spatial symmetries that act internally in the infrared [9, 10, 11, 12,
13, 14, 15].
The ’t Hooft anomaly is typically matched by a symmetry broken or gapless
phase (e.g., in the case of a perturbative anomaly), but in some cases the
anomaly is known to be matched by a symmetry preserving gapped phase, realized
by a Topological Quantum Field Theory (TQFT) enriched by the global symmetry
[16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. This implies
that an anomaly in some particular class can be carried by topological degrees
of freedom rather than by gapless particles, and in particular a system with
such an anomaly can have an energy gap. Recently, it was also discovered that
some global anomalies cannot be matched by a symmetry preserving TQFT and lead
to even stronger spectral constraints [31, 32, 33].
In this paper, we study symmetry preserving TQFTs with ’t Hooft anomalies in
$(2+1)$ dimensions, and explore the constraints on the gapped phase enforced
by the anomaly. We find that the chiral central charge $c_{-}$ of the TQFT is
strongly constrained by the ’t Hooft anomaly of an anti-unitary global
symmetry. This can be understood as a constraint on the thermal Hall
conductance observed on the surface state of a topological superconductor based
on time reversal symmetry. The result of this paper also implies that a
$(2+1)$d topologically ordered state with an anomalous $T$ symmetry of a
specific index must have a quantized energy current on the $(1+1)$d boundary,
which is proportional to the chiral central charge [34].
Here, let us illustrate what we mean by the chiral central charge of a
$(2+1)$d gapped phase. If the $(2+1)$d gapped phase has a boundary realized by
a $(1+1)$d CFT, we can define the chiral central charge $c_{-}$ via the chiral
central charge of the $(1+1)$d CFT on the boundary. As a canonical example,
the gravitational Chern-Simons theory $\mathrm{CS}_{\mathrm{grav}}$ has
$c_{-}=1/2$ on the boundary. We can also observe the chiral nature of a
$(2+1)$d gapped theory as a sort of quantum anomaly of the $(2+1)$d gapped
phase, even without introducing a $(1+1)$d boundary. Namely, the gravitational
Chern-Simons term has the framing anomaly characterized by the bulk
Chern-Simons term has the framing anomaly characterized by the bulk
topological action $\mathrm{Tr}(R\wedge R)/(192\pi)$, since the gravitational
Chern-Simons theory is defined on a spin manifold as
$\displaystyle\int_{M=\partial
W}\mathrm{CS}_{\mathrm{grav}}=\pi\int_{W}\widehat{A}(R)=\frac{1}{192\pi}\int_{W}\mathrm{Tr}(R\wedge
R).$ (1.1)
Once we know the anomaly $\mathrm{Tr}(R\wedge R)/(192\pi)$ can be expressed as
the gravitational Chern-Simons theory, and once we know
$\mathrm{CS}_{\mathrm{grav}}$ has $c_{-}=1/2$, then we can combine them
together to find that $\mathrm{Tr}(R\wedge R)/(192\pi)$ implies $c_{-}=1/2$.
Thus we can say that the $(2+1)$d TQFT has the chiral central charge $c_{-}$
if the theory has the framing anomaly given by
$\displaystyle\frac{c_{-}}{96\pi}\mathrm{Tr}(R\wedge R).$ (1.2)
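As a consistency check on the normalization, setting $c_{-}=1/2$ in the expression above reproduces the coefficient of the bulk action in (1.1):

```latex
\frac{c_{-}}{96\pi}\,\mathrm{Tr}(R\wedge R)\Big|_{c_{-}=1/2}
  = \frac{1}{192\pi}\,\mathrm{Tr}(R\wedge R),
```

so a single copy of $\mathrm{CS}_{\mathrm{grav}}$ indeed carries $c_{-}=1/2$.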
Now we summarize the result of this paper. We start with time reversal
symmetry with $T^{2}=(-1)^{F}$ of fermionic TQFT (known as class DIII [35, 36,
37]), whose anomaly is classified by $\mathbb{Z}_{16}$ [17]. We show that, if
the TQFT has a $T$ anomaly labeled by an odd index $\nu\in\mathbb{Z}_{16}$,
the TQFT must carry $c_{-}=1/4$ mod $1/2$, while for even
$\nu\in\mathbb{Z}_{16}$, the TQFT must instead carry $c_{-}=0$ mod $1/2$.
We also consider the $T$ anomaly in bosonic TQFT, and show that we must have
$c_{-}=4$ mod $8$ for some particular class of the anomaly, while $c_{-}=0$
mod $8$ for the other class. This result in the bosonic case is essentially
known in [38], but we provide an alternative understanding of this phenomenon,
which is also applicable to fermionic cases. We also study a more involved
fermionic TQFT with $U(1)\times CT$ symmetry (known as class AIII), and obtain
a constraint $c_{-}=1/2$ mod $1$ for a specific class of the anomaly regarded
as a surface state of a topological superconductor [39].
## 2 SPT phases on time reversal symmetry defects
Let us consider a $(2+1)$d TQFT with the time reversal symmetry $T$ that
suffers from an ’t Hooft anomaly. In our discussion, we couple the anomalous
$(2+1)$d TQFT with a $(3+1)$d SPT phase based on the $T$ symmetry, and regard
the anomalous $(2+1)$d TQFT as a boundary state of the $(3+1)$d SPT phase.
(Footnote 1: In this paper, we do not distinguish between the SPT phases and
the invertible field theories; what are referred to as SPT phases in the main
text should more properly be called invertible phases. An invertible phase is
a quantum field theory with a unique ground state on an arbitrary closed
spatial manifold. An SPT phase is usually defined as an equivalence class of
short-range-entangled gapped Hamiltonian systems with a specified symmetry.
An SPT phase in this sense determines an equivalence class of invertible
phases by isolating its ground state, but it is a difficult and unsolved
problem whether an arbitrary invertible phase associated with a global
symmetry can be realized as an SPT phase in this sense. Invertible phases also
include, e.g., the low-energy limit of the Kitaev wire, which is not counted
as an SPT phase in the standard usage of the condensed matter literature, but
is often called an SPT phase in the high energy physics literature.)
In a
general quantum theory with a global symmetry, there exists a codimension one
topological operator that implements the symmetry action. We call this
topological operator a symmetry defect. In the $(3+1)$d $T$ SPT phase, let us
consider a symmetry defect for the $T$ symmetry, which implements the
orientation reversal of the spacetime.
In general, for a $d$-dimensional SPT phase with the $T$ symmetry, the $T$
symmetry defect itself becomes an SPT phase with the unitary $\mathbb{Z}_{2}$
symmetry. This phenomenon can be understood in the phase where the $T$ symmetry
is spontaneously broken. Then, the $T$ defect is realized as a $T$ domain wall
separating two distinct vacua of the symmetry broken phase.
Concretely, let us consider an infinite system of the $(3+1)$d $T$ SPT phase,
and make up a $T$ domain wall of the SPT phase by breaking the $T$ symmetry,
dividing the system into the left and right domain. We are interested in a
theory supported on the $T$ domain wall in this setup. To study the localized
degrees of freedom on the domain wall, it is important to ask what the global
symmetry of the domain wall is.
Throughout the paper, we assume that the theory is Lorentz invariant. In that
case, we can find a global symmetry induced on the domain wall, with the help of
the $CPT$ symmetry [40, 41]. Concretely, if the $T$ domain wall is located in
a reflection symmetric fashion, the combined transformation of $T$ and
$CP_{\perp}T$ fixes the domains, and thus acts solely on the domain wall.
Here, $P_{\perp}$ denotes a spatial reflection fixing the configuration of the
domain wall, see Fig. 1. Since the $CPT$ is anti-unitary, the combined
transformation $T\cdot(CP_{\perp}T)$ turns out to behave as a unitary
$\mathbb{Z}_{2}$ symmetry on the domain wall. The theory on the $T$ domain
wall is based on this induced $\mathbb{Z}_{2}$ symmetry.
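In a schematic notation (a convention introduced here only for illustration), each anti-unitary operator can be written as a unitary matrix times the complex conjugation $K$, which makes the unitarity of the combined transformation immediate:

```latex
T = U_{T}K, \qquad CP_{\perp}T = U_{CPT}K
\quad\Longrightarrow\quad
T\cdot(CP_{\perp}T) = U_{T}K\,U_{CPT}K = U_{T}\,\overline{U_{CPT}},
```

so the composition contains no complex conjugation and acts as a unitary $\mathbb{Z}_{2}$ symmetry on the domain wall.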
Figure 1: Illustration of the $T$ domain wall. The $T$ domain wall
separates the two distinct vacua in the $T$ broken phase. The $T$ symmetry
acts on the figure by changing two vacua (i.e., yellow $\leftrightarrow$
purple). Since the $CPT$ commutes with $T$ (up to fermion parity), the $CPT$
leaves the vacua of the $T$-broken phase invariant, and acts as the parity
that reflects the figure across the domain wall. $T$ alone cannot be a
symmetry on the domain wall since it flips the domain, but
$T\cdot(CP_{\perp}T)$ works as the symmetry on the wall, since $CP_{\perp}T$
reflects back the configuration of domains.
In fact, there is a linear relation between the classification of the $(2+1)$d
SPT phase on the $T$ symmetry defect and that of $(3+1)$d $T$ SPT phases [40,
41]. This relationship allows us to determine the classification of the
$(3+1)$d SPT phase from a given theory on the symmetry defect. This linear
map between SPT classifications is nicely formulated in terms of the
classification scheme of SPT phases given by cobordism group [42]. Here, we
briefly review the formulation of the map.
First, SPT phases in $(d+1)$ spacetime dimension are classified by the
cobordism group $\Omega^{d+1}_{\mathrm{str}}$, where $\mathrm{str}$ stands for
spacetime structure that corresponds to the global symmetry, i.e., the choice
of internal symmetry and the spacetime symmetry such as fermion parity and/or
time reversal [43, 42, 37, 44]. If the structure group is the direct product
of the internal symmetry $G$ and the spacetime symmetry, we sometimes write
the cobordism group in the form of $\Omega^{d+1}_{\mathrm{spacetime}}(BG)$,
where $\mathrm{spacetime}$ denotes the spacetime symmetry.
Then, for a given $(d+1)$d SPT phase with a structure group $\mathrm{str}$ and
a codimension one symmetry defect of the $\mathbb{Z}_{2}$ global symmetry, we
can define the linear map based on the induced structure on the symmetry
defect,
$\displaystyle\Omega^{d}_{\mathrm{str^{\prime}}}\to\Omega^{d+1}_{\mathrm{str}},$
(2.1)
where $\mathrm{str^{\prime}}$ denotes the structure for the induced symmetry
on the symmetry defect. This map of cobordism groups is called the Smith map.
For example, if we have a unitary $\mathbb{Z}_{2}$ symmetry for a fermionic
phase in a $(d+1)$d spacetime $X$, then $X$ is equipped with a $\mathrm{spin}$
structure on $TX$ and a $\mathbb{Z}_{2}$ gauge field. The SPT
classification in $(d+1)$d is
$\Omega^{d+1}_{\mathrm{str}}=\Omega_{\mathrm{spin}}^{d+1}(B\mathbb{Z}_{2})$.
If we consider the $\mathbb{Z}_{2}$ symmetry defect $Y$ in $X$, the induced
structure on $Y$ from that of $X$ is a $\mathrm{spin}$ structure on $TY\oplus
NY$, since $TX$ decomposes into the tangent and normal bundles on $Y$. This
structure is shown to be equivalent to a $\mathrm{pin}^{-}$ structure on $Y$.
Thus, we have the Smith map
$\displaystyle\Omega^{d}_{\mathrm{pin}^{-}}\to\Omega^{d+1}_{\mathrm{spin}}(B\mathbb{Z}_{2}),$
(2.2)
which reflects that anti-unitary symmetry $T^{2}=1$ is induced on the symmetry
defect from the unitary $\mathbb{Z}_{2}$ symmetry, via the $CPT$ theorem. A detailed
description of the properties of the Smith map can be found in [40]. In the
following discussions, we determine this linear Smith map by considering
several cases that span the SPT classification we are interested in.
### 2.1 $(3+1)$d bosonic $T$ SPT phase
In the bosonic case, the Smith map determines the classification of $(3+1)$d
$T$ SPT phase from that of $(2+1)$d $\mathbb{Z}_{2}$ SPT phase on the $T$
symmetry defect, expressed as
$\displaystyle\Omega_{\mathrm{SO}}^{3}(B\mathbb{Z}_{2})\to\Omega_{\mathrm{O}}^{4},$
(2.3)
where $\mathrm{SO}$ and $\mathrm{O}$ denote the oriented and unoriented
structure, respectively. The SPT classification is
$\Omega_{\mathrm{SO}}^{3}(B\mathbb{Z}_{2})=\mathbb{Z}_{2}\times\mathbb{Z}$,
and $\Omega_{\mathrm{O}}^{4}=\mathbb{Z}_{2}\times\mathbb{Z}_{2}$. We label the
elements of $\Omega_{\mathrm{SO}}^{3}(B\mathbb{Z}_{2})$ as
$(n_{DW},n_{E})\in\mathbb{Z}_{2}\times\mathbb{Z}$. The generators are
described as follows:
* •
$(1,0)$ corresponds to the $\mathbb{Z}_{2}$ SPT phase given by the classical
action
$\displaystyle\exp\left(\pi i\int a^{3}\right)$ (2.4)
with a $\mathbb{Z}_{2}$ gauge field $a$, which characterizes a nontrivial
element of $H^{3}(B\mathbb{Z}_{2},U(1))=\mathbb{Z}_{2}$.
* •
$(0,1)$ corresponds to the $E_{8}$ state [45] with chiral central charge
$c_{-}=8$.
Meanwhile, we label the $(3+1)$d $T$ SPT classification by
$(m_{1},m_{2})\in\mathbb{Z}_{2}\times\mathbb{Z}_{2}$, whose generators are
described as follows:
* •
$(1,0)$ corresponds to the classical action
$\displaystyle\exp\left(\pi i\int w_{1}^{4}\right),$ (2.5)
where $[w_{1}]\in H^{1}(M,\mathbb{Z}_{2})$ is the first Stiefel-Whitney class
of the spacetime $M$.
* •
$(0,1)$ corresponds to the classical action
$\displaystyle\exp\left(\pi i\int w_{2}^{2}\right),$ (2.6)
with $[w_{2}]\in H^{2}(M,\mathbb{Z}_{2})$ the second Stiefel-Whitney class.
The Smith map
$\mathbb{Z}_{2}\times\mathbb{Z}\to\mathbb{Z}_{2}\times\mathbb{Z}_{2}$ for
(2.3) is given in the form of
$\displaystyle(n_{DW},n_{E})\to(\alpha_{1}n_{DW}+\alpha_{2}n_{E},\beta_{1}n_{DW}+\beta_{2}n_{E}),$
(2.7)
for coefficients $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\in\mathbb{Z}_{2}$.
We determine these coefficients by finding which $(2+1)$d phases map to the
actions (2.5) and (2.6), respectively. We will see that
$\alpha_{1}=1,\alpha_{2}=0,\beta_{1}=0,\beta_{2}=1$ in the following
discussions.
We find the theory on the $T$ symmetry defect for (2.5) by twisted
compactification of (2.5) with respect to the $T$ symmetry. It turns out that
the restriction of the $T$ gauge field to the $T$ symmetry defect is regarded
as the $\mathbb{Z}_{2}$ gauge field $a$, and the compactified action is given
by (2.4). This determines $\alpha_{1}=1,\alpha_{2}=0$.
To find the theory on the $T$ symmetry defect for (2.6), it is convenient to
consider the $(2+1)$d gapped boundary of the SPT phase that preserves the $T$
symmetry. The gapped boundary is realized by the $\mathbb{Z}_{2}$ gauge theory
given by the action
$\displaystyle\exp\left(\pi i\int a\cup\delta b+a\cup w_{2}+b\cup
w_{2}\right),$ (2.8)
with dynamical $\mathbb{Z}_{2}$ gauge fields $a,b$. This action realizes a
TQFT known as the 3-fermion state [46, 38, 47]; a $(2+1)$d $\mathbb{Z}_{2}$
gauge theory whose electric and magnetic particle are both fermions. In
general, a $(2+1)$d bosonic topologically ordered state is described by the
fusion and braiding properties of anyons, which are characterized by an
algebraic theory of anyons known as a unitary modular tensor category (UMTC).
For a given UMTC that describes a $(2+1)$d TQFT, there is a way to compute the
chiral central charge $c_{-}$ modulo 8 known as the Gauss-Milgram formula,
given by $e^{2\pi ic_{-}/8}=\sum_{a}d_{a}^{2}\theta_{a}/\mathcal{D}$ [45].
(Footnote 2: We can see the correspondence of the Gauss-Milgram formula with
the framing anomaly based on the following argument. Starting from the UMTC
$\mathcal{C}$, we can construct the $(3+1)$d Walker-Wang model [48], a
$(3+1)$d SPT phase whose boundary is given by a $(2+1)$d TQFT described by the
UMTC $\mathcal{C}$. Then, it has been shown in [38] that the partition
function of the Walker-Wang TQFT on the complex projective space
$\mathbb{CP}^{2}$ produces the Gauss-Milgram formula
$\displaystyle Z(\mathbb{CP}^{2})=\frac{1}{\mathcal{D}}\sum_{a}d_{a}^{2}\theta_{a}.$ (2.9)
Meanwhile, we can see that
$\displaystyle\exp\left(\int_{\mathbb{CP}^{2}}\frac{ic_{-}}{96\pi}\mathrm{Tr}(R\wedge R)\right)=e^{2\pi ic_{-}/8},$ (2.10)
by recalling that $\mathbb{CP}^{2}$ has signature 1. Hence, supposing that the
Walker-Wang model is effectively described by the $R\wedge R$ action, the
Gauss-Milgram formula exactly computes the framing anomaly $c_{-}$.)
Here, the sum is over the anyons of the UMTC, $d_{a}$ is the quantum dimension,
$\theta_{a}$ is the topological spin, and $\mathcal{D}$ is the total dimension
given by $\mathcal{D}^{2}=\sum_{a}d_{a}^{2}$. According to this formula, we
can immediately see that the 3-fermion state has the chiral central charge
$c_{-}=4$ mod $8$.
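As a quick numerical sanity check of this statement, we can evaluate the Gauss-Milgram sum for the 3-fermion state directly, using its standard anyon data: four abelian anyons with $d_{a}=1$, where the three nontrivial anyons are fermions with $\theta_{a}=-1$. The script below is an illustrative check of the arithmetic, not part of the derivation:

```python
import cmath

# 3-fermion state: four abelian anyons {1, f1, f2, f3}; the three
# nontrivial anyons are all fermions with topological spin -1.
quantum_dims = [1, 1, 1, 1]           # d_a
topological_spins = [1, -1, -1, -1]   # theta_a

# Total quantum dimension D, with D^2 = sum_a d_a^2 (here D = 2).
total_dim = sum(d ** 2 for d in quantum_dims) ** 0.5

# Gauss-Milgram formula: e^{2 pi i c_-/8} = (1/D) sum_a d_a^2 theta_a
gauss_milgram = sum(d ** 2 * t
                    for d, t in zip(quantum_dims, topological_spins)) / total_dim

print(gauss_milgram)  # -1.0, which equals e^{2 pi i * 4/8}, so c_- = 4 mod 8
assert abs(gauss_milgram - cmath.exp(2j * cmath.pi * 4 / 8)) < 1e-12
```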
Then, let us break the $T$ symmetry simultaneously in the $(3+1)$d bulk and
the $(2+1)$d boundary, such that the $T$ domain wall in the bulk terminates on
a domain wall at the boundary. The $T$ domain wall separates the left and
right domain, see Fig. 2. Let us assume that one specific realization of our
system has $c_{-}=4+8m$ for $m\in\mathbb{Z}$, on the boundary of the left
domain.
Since the right domain can be prepared as the partner of the left domain,
conjugate under the $T$ symmetry, the boundary of the right domain carries
$c_{-}=-(4+8m)$, since the orientation of the right domain gets reversed by
the $T$ action. This implies the $(3+1)$d SPT action given by
$\displaystyle\frac{c_{-}}{96\pi}\mathrm{Tr}(R\wedge R)$ (2.11)
has a kink of $c_{-}$ from $c_{-}=4+8m$ to $c_{-}=-(4+8m)$ on the domain wall,
thus we obtain the gravitational Chern-Simons theory $(8+16m)\cdot
2\mathrm{CS}_{\mathrm{grav}}$ on the $T$ domain wall, which carries $c_{-}=8$
mod $16$. We note that the bulk action (2.11) with $c_{-}=4$ mod $8$ gives the
same action as (2.6) on a closed oriented manifold, since $w_{2}^{2}$ is
related to the Pontrjagin class as [49, 46]
$\displaystyle w_{2}^{2}=p_{1}\ \mod 2,$ (2.12)
and then we have
$\displaystyle\exp\left(\pi i\int
w_{2}^{2}\right)=\exp\left(\frac{i}{24\pi}\int\mathrm{Tr}(R\wedge R)\right).$
(2.13)
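Both sides of (2.13) can be checked explicitly on $\mathbb{CP}^{2}$, using $\int_{\mathbb{CP}^{2}}p_{1}=3$, the relation (2.12), and the evaluation (2.10) at $c_{-}=4$:

```latex
\exp\left(\pi i\int_{\mathbb{CP}^{2}} w_{2}^{2}\right)
  = \exp\left(\pi i\int_{\mathbb{CP}^{2}} p_{1}\right) = e^{3\pi i} = -1,
\qquad
\exp\left(\frac{i}{24\pi}\int_{\mathbb{CP}^{2}}\mathrm{Tr}(R\wedge R)\right)
  = e^{2\pi i\cdot 4/8} = -1,
```

so the two actions indeed agree on this closed oriented manifold.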
We can also understand the chiral domain wall with $c_{-}=8$ mod $16$ as
follows. We denote the $(2+1)$d spacetime for the left domain on the gapped
boundary as $X$, see Fig. 2. The $T$ domain wall in the bulk ends on $\partial
X$ at the $(2+1)$d boundary. Since the $T$ symmetry is preserved on the
boundary, $\partial X$ must support a $T$ defect operator of the $(2+1)$d TQFT
on the boundary. Because the boundary is a gapped TQFT, the $T$ defect on
$\partial X$ must be topological and carry gapped degrees of freedom, which
must lead to $c_{-}=0$ on $\partial X$.
Now, the left and right domains of the TQFT on the boundary together contribute
$c_{-}=8$ mod $16$ to $\partial X$, and this must be cancelled by the bulk
contribution. Thus, the $T$ domain wall in the $(3+1)$d SPT phase must carry
$c_{-}=8$ mod $16$. We identify the $c_{-}=8$ phase on the $T$ domain wall as
the $E_{8}$ state that generates the free part of
$\Omega_{\mathrm{SO}}^{3}(B\mathbb{Z}_{2})=\mathbb{Z}_{2}\times\mathbb{Z}$.
Thus, we conclude that $\beta_{2}=1$ in the Smith map. We can further see that
$\beta_{1}=0$: the $(3+1)$d action obtained by decorating the $(2+1)$d action
(2.4) on the $T$ domain wall evaluates to $Z(\mathbb{CP}^{2})=1$ since
$\mathbb{CP}^{2}$ is oriented, and hence cannot generate the action (2.6).
Summarizing, the Smith map (2.3) is given by
$\displaystyle(n_{DW},n_{E})\to(n_{DW},n_{E})\quad\mod 2.$ (2.14)
### 2.2 $(3+1)$d fermionic $T$ SPT phase: $T^{2}=(-1)^{F}$
In the fermionic case, the $T$ symmetry $T^{2}=(-1)^{F}$ corresponds to
$\mathrm{pin}^{+}$ structure of the spacetime. The Smith map determines the
classification of $(3+1)$d $T$ SPT phase from that of $(2+1)$d
$\mathbb{Z}_{2}$ SPT phase on the $T$ domain wall, expressed as
$\displaystyle\Omega_{\mathrm{spin}}^{3}(B\mathbb{Z}_{2})\to\Omega_{\mathrm{pin}^{+}}^{4}.$
(2.15)
This gives a linear map $\mathbb{Z}_{8}\times\mathbb{Z}\to\mathbb{Z}_{16}$,
where the $\mathbb{Z}$ part of $\Omega_{\mathrm{spin}}^{3}(B\mathbb{Z}_{2})$
is generated by the $p+ip$ superconductor with $c_{-}=1/2$. The
$\mathbb{Z}_{8}$ part corresponds to the $\mathbb{Z}_{2}$ SPT phase described
by the decoration of the Kitaev wire [50]. If we label elements as
$(n,k)\in\mathbb{Z}_{8}\times\mathbb{Z}$ and $\nu\in\mathbb{Z}_{16}$, the map
is determined by [40] in the form of
$\displaystyle\nu=2n-k\quad\mod 16.$ (2.16)
In particular, the above formula dictates that the odd $\nu$ must carry odd
$k$ on the $T$ domain wall. Namely, $c_{-}$ of the SPT phase on the $T$ domain
wall must be $c_{-}=1/2$ mod $1$ when $\nu$ is odd, and $c_{-}=0$ mod $1$ when
$\nu$ is even.
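Since $2n$ is always even, the parity of $\nu=2n-k$ mod $16$ equals the parity of $k$; a brute-force enumeration (an illustrative check of (2.16), with the $\mathbb{Z}$ factor sampled over a finite window) confirms this:

```python
# Smith map (2.16): nu = 2n - k mod 16, with (n, k) in Z_8 x Z labeling
# the Z_2 SPT phase on the T domain wall; k counts p+ip layers (c_- = k/2).
for n in range(8):
    for k in range(-16, 17):  # finite window of the Z factor
        nu = (2 * n - k) % 16
        # nu is odd exactly when k is odd, i.e. c_- = k/2 is 1/2 mod 1
        # for odd nu and 0 mod 1 for even nu.
        assert (nu % 2) == (k % 2)

print("parity of nu equals parity of k")
```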
## 3 Constraint on $(2+1)$d $\mathrm{pin}^{+}$ and bosonic TQFT
We argue that the $(2+1)$d TQFT on the boundary of a $(3+1)$d SPT phase has a
restricted value of $c_{-}$, depending on the chiral phase on the $T$ domain
wall controlled by the Smith map. For simplicity, we focus on the
$\mathrm{pin}^{+}$ anomaly classified by $\mathbb{Z}_{16}$. The generalization
to the bosonic case is
straightforward.
Let us consider a $(2+1)$d $\mathrm{pin}^{+}$ TQFT $\mathcal{T}$ on the
boundary of a $(3+1)$d $T$ SPT phase, classified by $\nu\in\mathbb{Z}_{16}$.
We again work on the geometry described in Fig. 2, i.e., we break the $T$
symmetry simultaneously in the $(3+1)$d bulk and the $(2+1)$d boundary, such
that the $T$ domain wall in the bulk terminates on a domain wall at the
boundary.
Figure 2: Illustration of our setup. We have a $T$ domain wall (red
plane) in the $(3+1)$d bulk, which ends at $\partial X$ on the boundary.
If the boundary TQFT on the left domain $X$ has the chiral central charge
$c_{-}=c_{\mathcal{T}}+m/2$ for $m\in\mathbb{Z}$, the right domain
$\overline{X}$ has $c_{-}=-(c_{\mathcal{T}}+m/2)$ since it is the $T$ partner
of the left domain. This implies the $(3+1)$d SPT action given by
$\displaystyle\frac{c_{-}}{96\pi}\mathrm{Tr}(R\wedge R)$ (3.1)
has a kink of $c_{-}$ from $c_{-}=c_{\mathcal{T}}+m/2$ to
$c_{-}=-(c_{\mathcal{T}}+m/2)$ on the $T$ domain wall, thus we obtain the
gravitational Chern-Simons theory $(2c_{\mathcal{T}}+m)\cdot
2\mathrm{CS}_{\mathrm{grav}}$ on the $T$ domain wall, which carries
$c_{-}=2c_{\mathcal{T}}$ mod $1$. Meanwhile, according to the Smith map
(2.16), $c_{-}$ carried by the $T$ domain wall must be $c_{-}=\nu/2$ mod $1$.
This leads to $c_{\mathcal{T}}=\nu/4$ mod $1/2$.
We can also understand the chiral domain wall with $c_{-}=1/4$ mod $1/2$ as
follows. The $T$ domain wall in the bulk ends on $\partial X$ at the $(2+1)$d
boundary. Since the $T$ symmetry is preserved on the boundary, $\partial X$
must support a $T$ defect operator of the $(2+1)$d TQFT on the boundary.
Because the boundary is a gapped TQFT, the $T$ defect on $\partial X$ must be
topological and carry gapped degrees of freedom, which must lead to $c_{-}=0$
on $\partial X$. Due to the Smith map (2.16), the $T$ defect from the bulk
contributes as $c_{-}=1/2$ mod $1$ to $\partial X$, when $\nu$ is odd in
$\mathbb{Z}_{16}$. Meanwhile, the TQFT on the boundary contributes
$c_{-}=(c_{\mathcal{T}}+m/2)+(c_{\mathcal{T}}+m/2)=2c_{\mathcal{T}}$ mod $1$.
Thus, in order to have $c_{-}=0$ on $\partial X$ for odd
$\nu\in\mathbb{Z}_{16}$, we must have $2c_{\mathcal{T}}=1/2$ mod $1$, so
$c_{\mathcal{T}}=1/4$ mod $1/2$. For even $\nu\in\mathbb{Z}_{16}$, the $T$
defect instead carries $c_{-}=0$ mod $1$, so we have $c_{\mathcal{T}}=0$ mod
$1/2$.
For the bosonic case, a similar argument shows that $c_{-}=4$ mod $8$ if the
$(2+1)$d TQFT has an anomaly characterized by the SPT action $\int w_{2}^{2}$,
otherwise $c_{-}=0$ mod $8$. This is also understood by the fact that $c_{-}$
mod $8$ is diagnosed by the partition function of the bulk $(3+1)$d SPT phase
on $\mathbb{CP}^{2}$ that corresponds to the Gauss-Milgram formula [38],
$Z_{\mathrm{SPT}}(\mathbb{CP}^{2})=e^{2\pi ic_{-}/8}$, which is $-1$ for $\int
w_{2}^{2}$ and $1$ for $\int w_{1}^{4}$.
## 4 $(3+1)$d topological superconductor in class AIII
Here, let us apply our argument to the $(3+1)$d SPT phase with $U(1)\times CT$
symmetry (called class AIII). This structure corresponds to the structure
group $\mathrm{pin}^{c}:=(\mathrm{pin}^{\pm}\times U(1))/\mathbb{Z}_{2}$,
where $CT$ corresponds to the orientation reversing element of
$\mathrm{pin}^{\pm}$ which commutes with $U(1)$.
Let us consider the $CT$ defect of the $\mathrm{pin}^{c}$ $(3+1)$d SPT phase.
To see the induced structure on the $CT$ domain wall, it is convenient to
regard $\mathrm{pin}^{c}$ as a $\mathrm{pin}^{+}$ structure twisted by $U(1)$.
$\mathrm{pin}^{+}$ induces an oriented $\mathrm{spin}$ structure equipped with
the $\mathbb{Z}_{2}$ symmetry on the domain wall, and we also have a $U(1)$
symmetry that twists the induced $\mathrm{spin}$ structure. Then, the induced
structure on the domain wall becomes $\mathrm{spin}^{c}$ with $\mathbb{Z}_{2}$
symmetry.
Therefore, we have the Smith map for cobordism groups
$\displaystyle\Omega^{3}_{\mathrm{spin}^{c}}(B\mathbb{Z}_{2})\to\Omega_{\mathrm{pin}^{c}}^{4}.$
(4.1)
The bordism or cobordism groups for these structures are studied in [51, 52,
53], and given by
$\Omega^{3}_{\mathrm{spin}^{c}}(B\mathbb{Z}_{2})=\mathbb{Z}_{4}\times\mathbb{Z}\times\mathbb{Z}$,
and $\Omega_{\mathrm{pin}^{c}}^{4}=\mathbb{Z}_{8}\times\mathbb{Z}_{2}$. We
label the elements of $\Omega^{3}_{\mathrm{spin}^{c}}(B\mathbb{Z}_{2})$ as
$(n_{4},n_{CS},n_{E})\in\mathbb{Z}_{4}\times\mathbb{Z}\times\mathbb{Z}$. The
generators are described as follows:
* •
$(0,1,0)$ corresponds to the $\mathrm{spin}^{c}$ Chern-Simons theory at level
$1$, defined via the $(3+1)$d $\mathrm{spin}^{c}$ $\theta$-term in (4.5). This
theory carries $c_{-}=1$.
* •
$(0,0,1)$ corresponds to the $E_{8}$ state, which carries $c_{-}=8$.
* •
$(1,0,0)$ generates the $\mathbb{Z}_{4}$ group, and we believe that it should
be formulated in a similar way to the Gu-Wen $\mathbb{Z}_{2}$ SPT phase based
on $\mathrm{spin}$ structure, which is labeled by a pair of cohomological data
[54, 55]. Actually, if we compute the cobordism group by using the toolkit of
the Atiyah-Hirzebruch spectral sequence (AHSS) [52], we see that it can also
be described by a pair of cohomological data, which should be regarded as the
$\mathrm{spin}^{c}$ version of the Gu-Wen phase. Namely, the group
$\mathbb{Z}_{4}$ is the nontrivial extension of
$\displaystyle
H^{2}(B\mathbb{Z}_{2},\Omega_{\mathrm{spin}^{c}}^{1})=H^{2}(B\mathbb{Z}_{2},\mathbb{Z})=\mathbb{Z}_{2}$
(4.2)
by
$\displaystyle
H^{4}(B\mathbb{Z}_{2},\Omega_{\mathrm{spin}^{c}}^{-1})=H^{3}(B\mathbb{Z}_{2},U(1))=\mathbb{Z}_{2}.$
(4.3)
So, we expect that the $\mathbb{Z}_{2}$ subgroup of the $\mathbb{Z}_{4}$ is
given by the bosonic $\mathbb{Z}_{2}$ SPT phase with the classical action
$\displaystyle\exp\left(\pi i\int a^{3}\right),$ (4.4)
which characterizes the nontrivial element of
$H^{3}(B\mathbb{Z}_{2},U(1))=\mathbb{Z}_{2}$. Based on the analogy with the
Gu-Wen $\mathrm{spin}$ SPT phase, we believe that
$H^{2}(B\mathbb{Z}_{2},\Omega_{\mathrm{spin}^{c}}^{1})$ is associated with the
physical description that a $(0+1)$d $\mathrm{spin}^{c}$ SPT phase (in this
case a complex fermion with charge $1$) is decorated on the junction of
$\mathbb{Z}_{2}$ defects, and the way for the decoration is controlled by
$H^{2}(B\mathbb{Z}_{2},\Omega_{\mathrm{spin}^{c}}^{1})$. Though we have not
constructed any action for the $\mathbb{Z}_{4}$ generator, we believe that
there exists a good state-sum definition of this theory on the lattice, like
the Gu-Wen $\mathrm{spin}$ SPT phases, which carries $c_{-}=0$.
Meanwhile, if we label the element of $\Omega_{\mathrm{pin}^{c}}^{4}$ as
$(m_{8},m_{2})\in\mathbb{Z}_{8}\times\mathbb{Z}_{2}$, the actions for the
generators are described as follows:
* •
If the spacetime is oriented, the generator of $\mathbb{Z}_{8}$, $(1,0)$, is
described by the $\theta$-term for $\mathrm{spin}^{c}$ gauge field at
$\theta=\pi$ [56, 57], given by
$\displaystyle S[a]=i\theta\left(\frac{1}{2(2\pi)^{2}}\int f\wedge
f-\frac{\sigma}{8}\right),$ (4.5)
where $a$ is a $\mathrm{spin}^{c}$ gauge field with the Dirac quantization
condition
$\displaystyle\int_{C}\frac{f}{2\pi}=\frac{1}{2}\int_{C}w_{2}\quad\mod 1,$
(4.6)
for any oriented 2d cycle $C$ in the spacetime. $\sigma$ denotes the signature of
the manifold. Also, for later convenience, we note that
$m_{8}=4\in\mathbb{Z}_{8}$, $(4,0)$ is given by
$\displaystyle\exp\left(\pi i\int w_{1}^{4}\right).$ (4.7)
* •
The generator of $\mathbb{Z}_{2}$, $(0,1)$ is given by
$\displaystyle\exp\left(\pi i\int w_{2}^{2}\right).$ (4.8)
Then, we can almost completely determine the Smith map (4.1) by considering a
$CT$ domain wall of the $(3+1)$d action. First, since we know in Sec. 2.1 that
the $(3+1)$d action (4.7) localizes the action (4.4) on the domain wall, we
expect that $(2,0,0)$ is mapped to $(4,0)$ by the Smith map. Due to linearity
of the Smith map, this shows that the $\mathbb{Z}_{4}$ part of
$\Omega^{3}_{\mathrm{spin}^{c}}(B\mathbb{Z}_{2})$ is mapped to the
$\mathbb{Z}_{4}$ subgroup of the $\mathbb{Z}_{8}$ part in
$\Omega_{\mathrm{pin}^{c}}^{4}$. According to Sec. 2.1, we also know that the
$(3+1)$d action (4.8) for $(0,1)$ localizes the $E_{8}$ state on the domain
wall, so it also determines how $(0,0,1)$ transforms. Finally, for $(1,0)$ in
$\Omega_{\mathrm{pin}^{c}}^{4}$, the $CT$ domain wall for the $(3+1)$d
$\theta$-term induces a kink of $\theta$ from $\theta=+\pi$ to $\theta=-\pi$,
so we obtain a Chern-Simons theory at level $1$ on the domain wall. So, we
know how $(0,1,0)$ transforms. Thus, our Smith map is obtained as
$\displaystyle(n_{4},n_{CS},n_{E})\to(n_{CS}+(2+4p)n_{4},n_{E}),$ (4.9)
where $p$ is an undetermined integer.
According to the Smith map, the odd element of $\mathbb{Z}_{8}$ in
$\Omega_{\mathrm{pin}^{c}}^{4}$ must carry $c_{-}=1$ mod $2$ on the $CT$
domain wall. So, by using the same logic as the $\mathrm{pin}^{+}$ anomaly, we
can see that the odd phase in $\mathbb{Z}_{8}$ must have $c_{-}=1/2$ mod $1$.
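The parity consequence of the Smith map (4.9) can also be checked by brute force. On the $CT$ domain wall, the generators $(1,0,0)$, $(0,1,0)$, $(0,0,1)$ carry $c_{-}=0,1,8$ respectively, so a wall phase labeled by $(n_{4},n_{CS},n_{E})$ carries $c_{-}=n_{CS}+8n_{E}$; the illustrative script below verifies that an odd $m_{8}=n_{CS}+(2+4p)n_{4}$ mod $8$ forces odd $c_{-}$, for either parity of the undetermined integer $p$:

```python
# Smith map (4.9): (n4, nCS, nE) -> (m8, m2) with
# m8 = nCS + (2 + 4p) * n4 mod 8 and m2 = nE mod 2.
for p in range(2):  # the undetermined integer p only matters mod 2 here
    for n4 in range(4):
        for nCS in range(-8, 9):
            for nE in range(-2, 3):
                m8 = (nCS + (2 + 4 * p) * n4) % 8
                c_wall = nCS + 8 * nE  # c_- carried by the CT domain wall
                if m8 % 2 == 1:
                    # Odd element of Z_8 implies c_- = 1 mod 2 on the wall.
                    assert c_wall % 2 == 1

print("odd m8 implies c_- = 1 mod 2 on the CT domain wall")
```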
## 5 Discussion
In this paper, we found the anomaly constraint on chiral central charge of
$(2+1)$d topological order with $T$ symmetry. The constraint comes from a
chiral state localized on the $T$ domain wall in the bulk SPT phase. It should
be interesting to study such a constraint on the $(2+1)$d TQFT enriched by
more generic global symmetry, though we have only studied the cases for $T$
and $U(1)\times CT$. For example, by using the AHSS, $(d+1)$d fermionic SPT
phases with $G_{b}$ symmetry are generally labeled by the set of cohomological
data [58, 59]
$\displaystyle n_{p}\in H^{p}(BG_{b},\Omega_{\mathrm{spin}}^{d+1-p}),$ (5.1)
for $0\leq p\leq d+2$. Here, $G_{b}$ can contain time reversal, and the full
global symmetry is described by $G_{f}$, defined as the symmetry extension by
fermion parity $\mathbb{Z}_{2}^{F}\to G_{f}\to G_{b}$. The data $n_{p}$ is
associated with the description of the SPT phase based on the decorated domain
wall; $n_{p}$ controls the way to decorate an $((d-p)+1)$d SPT phase on the
domain wall of $G_{b}$. In particular, the nontrivial $n_{1}$ implies the
decoration of $p+ip$ superconductor on the $G_{b}$ domain wall. We expect that
this description of decorated domain wall leads to a unified formulation of
the anomaly constraints on the Hilbert space for a broad class of global
symmetries. See also [60].
It is also very interesting to study the constraint on the chiral central
charge for crystalline symmetries. In that case, there is a simple way to
reduce the (3+1)d SPT phase to the lower dimensional system with internal
symmetries [61]. For example, consider a unitary reflection symmetry across
the (2+1)d plane which protects the (3+1)d SPT phase. Then, we can apply a
unitary circuit respecting the reflection symmetry away from the reflection
plane, which disentangles the SPT phase away from the reflection plane. In the
end, we obtain the reduced (2+1)d SPT phase supported on the
reflection plane, where the reflection symmetry acts internally. Based on this
logic, [62] obtains a similar constraint on chiral central charge for the
(3+1)d fermionic SPT phase with the spatial reflection symmetry.
In the present paper, we worked in a setup with Lorentz invariance and did
not discuss lattice effects. Since lattice systems studied in the context of
SPT phases are not Lorentz invariant in general, it would be interesting to
ask what the Smith map and the global symmetry on the symmetry defects look
like in lattice systems. There is a lattice model for the (3+1)d SPT
phase with $T^{2}=(-1)^{F}$ [63], where in the $T$ broken phase we observe a
unitary global $\mathbb{Z}_{2}$ symmetry on the domain wall, at the level of a
lattice model. We believe that this global symmetry reflects the induced
$\mathbb{Z}_{2}$ symmetry via the CPT theorem of the effective field theory.
It is interesting to study such a lattice model in more detail, e.g., its
gapped boundary, to see how the anomaly constraint works.
Finally, while we have worked on gapped topologically ordered states in the
present paper, another interesting direction is to ask whether our constraint
on $c_{-}$ is applicable to generic gapless phases. We leave these problems to
future work.
## 6 Acknowledgements
The author is grateful to Maissam Barkeshli and Kantaro Ohmori for
enlightening discussions. The author also thanks Yunqin Zheng for helpful
comments on the manuscript. The author is supported by the Japan Society for
the Promotion of Science (JSPS) through Grant No. 19J20801.
## References
* [1] E. Lieb, T. Schultz, and D. Mattis, Two soluble models of an antiferromagnetic chain, Annals of Physics 16 (1961) 407.
* [2] M. Hastings, Lieb-Schultz-Mattis in higher dimensions, Physical Review B 69 (2004) 104431, arXiv:cond-mat/0305505.
* [3] M. Oshikawa, Commensurability, excitation gap and topology in quantum many-particle systems on a periodic lattice, Physical Review Letters 84 (2000) 1535, arXiv:cond-mat/9911137.
* [4] S. A. Parameswaran, A. M. Turner, D. P. Arovas, and A. Vishwanath, Topological order and absence of band insulators at integer filling in non-symmorphic crystals, Nature Physics 9 (2013) 299, arXiv:1212.0557.
* [5] H. Watanabe, H. C. Po, A. Vishwanath, and M. P. Zaletel, Filling constraints for spin-orbit coupled insulators in symmorphic and non-symmorphic crystals, Proc. Natl. Acad. Sci. 112 (2015) 14551, arXiv:1505.04193.
* [6] C.-M. Jian, Z. Bi, and C. Xu, Lieb-Schultz-Mattis theorem and its generalizations from the perspective of the symmetry-protected topological phase, Physical Review B 97 (2018) 054412, arXiv:1705.00012.
* [7] R. Kobayashi, K. Shiozaki, Y. Kikuchi, and S. Ryu, Lieb-Schultz-Mattis type theorem with higher-form symmetry and the quantum dimer models, Physical Review B 99 (2019) 014402, arXiv:1805.05367.
* [8] M. Cheng, Fermionic Lieb-Schultz-Mattis theorems and weak symmetry-protected phases, Physical Review B 99 (2019) 075143, arXiv:1804.10122.
* [9] S. C. Furuya and M. Oshikawa, Symmetry protection of critical phases and global anomaly in 1+1 dimensions, Physical Review Letters 118 (2017) 021601, arXiv:1503.07292.
* [10] G. Y. Cho, C.-T. Hsieh, and S. Ryu, Anomaly manifestation of Lieb-Schultz-Mattis theorem and topological phases, Physical Review B 96 (2017) 195105, arXiv:1705.03892.
* [11] M. Cheng, M. Zaletel, M. Barkeshli, A. Vishwanath, and P. Bonderson, Translational symmetry and microscopic constraints on symmetry-enriched topological phases: A view from the surface, Physical Review X 6 (Dec, 2016), arXiv:1511.02263.
* [12] M. A. Metlitski and R. Thorngren, Intrinsic and emergent anomalies at deconfined critical points, Physical Review B 98 (2018) 085140, arXiv:1707.07686.
* [13] Y. Yao, C.-T. Hsieh, and M. Oshikawa, Anomaly matching and symmetry-protected critical phases in SU(N) spin systems in 1+1 dimensions, Physical Review Letters 123 (2019) 180201, arXiv:1805.06885.
* [14] Y. Tanizaki and T. Sulejmanpasic, C-P-T anomaly matching in bosonic quantum field theory and spin chains, Physical Review B 97 (2018) 144201, arXiv:1802.02153.
* [15] Y. Tanizaki and T. Sulejmanpasic, Anomaly and global inconsistency matching: $\theta$-angles, $SU(3)/U(1)^{2}$ nonlinear sigma model, $SU(3)$ chains and its generalizations, Physical Review B 98 (2018) 115126, arXiv:1805.11423.
* [16] F. J. Burnell, X. Chen, L. Fidkowski, and A. Vishwanath, Exactly Soluble Model of a 3D Symmetry Protected Topological Phase of Bosons with Surface Topological Order, Phys. Rev. B 90 (2014) 245122, arXiv:1302.7072v3.
* [17] L. Fidkowski, X. Chen, and A. Vishwanath, Non-Abelian topological order on the surface of a 3d topological superconductor from an exactly solved model, Physical Review X 3 (2014) 041016, arXiv:1305.5851v4.
* [18] C. Wang, A. C. Potter, and T. Senthil, Gapped Symmetry Preserving Surface-State for the Electron Topological Insulator, Physical Review B 88 (2013) 115137, arXiv:1306.3223v3.
* [19] P. Bonderson, C. Nayak, and X.-l. Qi, A Time-Reversal Invariant Topological Phase at the Surface of a 3D Topological Insulator, Journal of Statistical Mechanics: Theory and Experiment 2013 (2013) P09016, arXiv:1306.3230v2.
* [20] X. Chen, L. Fidkowski, and A. Vishwanath, Symmetry Enforced Non-Abelian Topological Order at the Surface of a Topological Insulator, Physical Review B 89 (2014) 165132, arXiv:1306.3250v2.
* [21] M. A. Metlitski, C. Kane, and M. Fisher, A symmetry-respecting topologically-ordered surface phase of 3d electron topological insulators, arXiv:1306.3286.
* [22] X. Chen, F. J. Burnell, A. Vishwanath, and L. Fidkowski, Anomalous symmetry fractionalization and surface topological order, Physical Review X 5 (2015) 1–21, arXiv:1403.6491v2.
* [23] A. Kapustin and R. Thorngren, Anomalous discrete symmetries in three dimensions and group cohomology, Physical Review Letters 112 (2014) 1–31, arXiv:1404.3230v2.
* [24] M. A. Metlitski, L. Fidkowski, X. Chen, and A. Vishwanath, Interaction effects on 3D topological superconductors: surface topological order from vortex condensation, the 16 fold way and fermionic Kramers doublets, arXiv:1406.3032.
* [25] R. Thorngren and C. von Keyserlingk, Higher SPT’s and a generalization of anomaly in-flow, arXiv:1511.02929.
* [26] E. Witten, The "Parity" Anomaly on an Unorientable Manifold, Phys. Rev. B94 (2016) 195150, arXiv:1605.02391 [hep-th].
* [27] J. Wang, X.-G. Wen, and E. Witten, Symmetric Gapped Interfaces of SPT and SET States: Systematic Constructions, Phys. Rev. X 8 (2018) 031048, arXiv:1705.06728 [cond-mat.str-el].
* [28] R. Kobayashi, K. Ohmori, and Y. Tachikawa, On gapped boundaries for spt phases beyond group cohomology, Journal of High Energy Physics 11 (2019) 131, arXiv:1905.05391.
* [29] R. Kobayashi, Pin TQFT and Grassmann integral, Journal of High Energy Physics 12 (2019) 014, arXiv:1905.05902.
* [30] D. Bulmash and M. Barkeshli, Absolute anomalies in (2+1)d symmetry-enriched topological states and exact (3+1)d constructions, Physical Review Research 2 (Oct, 2020) , arXiv:2003.11553.
* [31] C. Córdova and K. Ohmori, Anomaly obstructions to symmetry preserving gapped phases, arXiv:1910.04962.
* [32] C. Córdova and K. Ohmori, Anomaly constraints on gapped phases with discrete chiral symmetry, Physical Review D 102 (2020) 025011, arXiv:1912.13069.
* [33] R. Thorngren, Tqft, symmetry breaking, and finite gauge theory in 3+1d, Physical Review B 101 (2020) 245160, arXiv:2001.11938.
* [34] A. Kapustin and L. Spodyneiko, Thermal hall conductance and a relative topological invariant of gapped two-dimensional systems, Physical Review B 101 (Jan, 2020) , arXiv:1905.06488.
* [35] A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Classification of topological insulators and superconductors in three spatial dimensions, Physical Review B 78 (Nov, 2008) , arXiv:0803.2786.
* [36] A. Kitaev, Periodic table for topological insulators and superconductors, AIP Conference Proceedings (2009) , arXiv:0901.2686.
* [37] D. S. Freed and M. J. Hopkins, Reflection Positivity and Invertible Topological Phases, arXiv:1604.06527 [hep-th].
* [38] M. Barkeshli, P. Bonderson, C.-M. Jian, M. Cheng, and K. Walker, Reflection and time reversal symmetry enriched topological phases of matter: path integrals, non-orientable manifolds, and anomalies, arXiv:1612.07792.
* [39] C. Wang and T. Senthil, Interacting fermionic topological insulators/superconductors in three dimensions, Physical Review B 89 (2020) 195124, arXiv:1401.1142.
* [40] I. Hason, Z. Komargodski, and R. Thorngren, Anomaly matching in the symmetry broken phase: Domain walls, cpt, and the smith isomorphism, SciPost Physics 8 (2020) 062, arXiv:1910.14039.
* [41] C. Córdova, K. Ohmori, S.-H. Shao, and F. Yan, Decorated $\mathbb{Z}_{2}$ symmetry defects and their time-reversal anomalies, arXiv:1910.14046.
* [42] A. Kapustin, R. Thorngren, A. Turzillo, and Z. Wang, Fermionic Symmetry Protected Topological Phases and Cobordisms, JHEP 12 (2015) 052, arXiv:1406.7329 [cond-mat.str-el].
* [43] A. Kapustin, Symmetry Protected Topological Phases, Anomalies, and Cobordisms: Beyond Group Cohomology, arXiv:1403.1467 [cond-mat.str-el].
* [44] K. Yonekura, On the Cobordism Classification of Symmetry Protected Topological Phases, arXiv:1803.10796 [hep-th].
* [45] A. Kitaev, Anyons in an exactly solved model and beyond, Annals of Physics 321 (2006) 2–111, arXiv:cond-mat/0506438.
* [46] P.-S. Hsin and S.-H. Shao, Lorentz symmetry fractionalization and dualities in (2+1)d, SciPost Physics 8 (Feb, 2020) .
* [47] R. Thorngren, Framed wilson operators, fermionic strings, and gravitational anomaly in 4d, Journal of High Energy Physics 2015 (Feb, 2015) .
* [48] K. Walker and Z. Wang, (3+1)-tqfts and topological insulators, Frontier of Physics 7 (2012) 150, arXiv:1104.2632 [cond-mat.str-el].
* [49] O. Aharony, N. Seiberg, and Y. Tachikawa, Reading between the lines of four-dimensional gauge theories, Journal of High Energy Physics 2013 (Aug, 2013) , arXiv:1305.0318.
* [50] N. Tarantino and L. Fidkowski, Discrete spin structures and commuting projector models for 2d fermionic symmetry protected topological phases, Phys. Rev. B 94 (2016) 115115, arXiv:1604.02145 [cond-mat.str-el].
* [51] M. Guo, P. Putrov, and J. Wang, Time reversal, $su(n)$ yang-mills and cobordisms: Interacting topological superconductors/insulators and quantum spin liquids in 3+1d, Annals of Physics 394 (2018) 244 – 293, arXiv:1711.11587.
* [52] I. García-Etxebarria and M. Montero, Dai-freed anomalies in particle physics, Journal of High Energy Physics 8 (2019) 003, arXiv:1808.00009.
* [53] P. B. Gilkey, The geometry of spherical space form groups, Series in Pure Mathematics 28 508\.
* [54] Z.-C. Gu and X.-G. Wen, Symmetry-protected topological orders for interacting fermions: Fermionic topological nonlinear $\sigma$ models and a special group supercohomology theory, Phys. Rev. B90 (2014) 115141, arXiv:1201.2648 [cond-mat.str-el].
* [55] D. Gaiotto and A. Kapustin, Spin TQFTs and Fermionic Phases of Matter, Int. J. Mod. Phys. A31 (2016) 1645044, arXiv:1505.05856 [cond-mat.str-el].
* [56] N. Seiberg and E. Witten, Gapped boundary phases of topological insulators via weak coupling, Progress of Theoretical and Experimental Physics 2016 (2016) 1–86, arXiv:1602.04251v3.
* [57] M. A. Metlitski, S-duality of $u(1)$ gauge theory with $\theta=\pi$ on non-orientable manifolds: Applications to topological insulators and superconductors, arXiv:1510.05663.
* [58] Q.-R. Wang and Z.-C. Gu, Construction and classification of symmetry protected topological phases in interacting fermion systems, Physical Review X 10 (2018) 031055, arXiv:1811.00536.
* [59] R. Thorngren, Anomalies and bosonization, arXiv:1810.04414.
* [60] D. Delmastro, D. Gaiotto, and J. Gomis, Global anomalies on the hilbert space, arXiv:2101.02218 [hep-th].
* [61] H. Song, S.-J. Huang, L. Fu, and M. Hermele, Topological phases protected by point group symmetry, Physical Review X 7 (Feb, 2017) , arXiv:1604.08151 [cond-mat.str-el].
* [62] B.-B. Mao and C. Wang, Mirror anomaly in fermionic topological orders, Physical Review Research 2 (Jun, 2020) , arXiv:2002.07714.
* [63] R. Kobayashi, Commuting projector models for ( 3+1 )-dimensional topological superconductors via a string net of ( 1+1 )-dimensional topological superconductors, Physical Review B 102 (Aug, 2020) , arXiv:2006.00159.
# A survey of heavy-antiheavy hadronic molecules
Xiang-Kun Dong1,2 [email protected] Feng-Kun Guo1,2 [email protected] Bing-
Song Zou1,2,3 [email protected] 1CAS Key Laboratory of Theoretical Physics,
Institute of Theoretical Physics,
Chinese Academy of Sciences, Beijing 100190, China
2School of Physical Sciences, University of Chinese Academy of Sciences,
Beijing 100049, China
3School of Physics, Central South University, Changsha 410083, China
###### Abstract
Many efforts have been made to reveal the nature of the overabundant resonant
structures observed by experiments worldwide in the last two decades.
Hadronic molecules attract special attention because many of these seemingly
unconventional resonances are located close to the threshold of a pair of
hadrons. To give an overall picture of the spectrum of hadronic molecules
composed of a heavy-antiheavy hadron pair, namely, which pairs can possibly
form molecular states, we take charmed hadrons as an example to
investigate the interaction between them and search for poles by solving the
Bethe-Salpeter equation. We consider all possible combinations of hadron pairs
of the $S$-wave singly-charmed mesons and baryons as well as the narrow
$P$-wave charmed mesons. The interactions, which are assumed to be meson-
exchange saturated, are described by constant contact terms which are resummed
to generate poles. It turns out that if the light-meson exchange renders a
system attractive near threshold, there is a pole close to threshold
corresponding to a bound state or a virtual state, depending on the strength
of the interaction and the cutoff. In total, 229 molecular states are
predicted. The observed near-threshold hidden-charm structures, like the famous $X(3872)$ and
$P_{c}$ states, fit into the spectrum we obtain. We also highlight a
$\Lambda_{c}\bar{\Lambda}_{c}$ bound state whose pole is consistent with the
cross section of $e^{+}e^{-}\to\Lambda_{c}\bar{\Lambda}_{c}$ precisely
measured by the BESIII Collaboration.
###### Contents
1. I Introduction
2. II Lagrangian from heavy quark spin symmetry
1. II.1 Heavy mesons
2. II.2 Heavy baryons
3. III Potentials
1. III.1 Conventions
2. III.2 Potentials from light vector exchange
3. III.3 Potentials from vector charmonia exchange
4. IV Molecular states from constant interactions
1. IV.1 Poles
2. IV.2 Discussions of selected systems
1. IV.2.1 $D^{(*)}\bar{D}^{(*)}$: $X(3872)$, $Z_{c}(3900)$ and their partners
2. IV.2.2 $D_{s}^{(*)}\bar{D}_{s}^{(*)}$ virtual states
3. IV.2.3 $D^{(*)}\bar{D}_{s}^{(*)}$: $Z_{cs}$ as virtual states
4. IV.2.4 $D^{(*)}\bar{D}_{1,2}$: $Y(4260)$ and related states
5. IV.2.5 $\Lambda_{c}\bar{\Lambda}_{c}$: analysis of the BESIII data and more baryon-antibaryon bound states
6. IV.2.6 $\bar{D}^{(*)}\Sigma_{c}^{(*)}$: $P_{c}$ states
7. IV.2.7 $\bar{D}^{(*)}\Xi_{c}^{(\prime)}$: $P_{cs}$ and related states
5. V Summary and discussion
6. A Vertex factors for direct processes
7. B List of the potential factor $F$
8. C Amplitude calculation for cross processes
## I Introduction
The formation of hadrons from quarks and gluons is governed by quantum
chromodynamics (QCD), which is nonperturbative at low energies. Therefore, it is
difficult to tackle the hadron spectrum model-independently. The
traditional quark model Gell-Mann (1964); Zweig (1964), where hadrons are
classified as mesons, composed of $q\bar{q}$, and baryons, composed of $qqq$,
provides a quite satisfactory description of the hadrons observed in the last
century. The last two decades witnessed the emergence of plenty of states or
resonant structures in experiments, many of which do not fit the hadron
spectrum predicted by the naive quark model and are thus candidates for the
so-called exotic states. Many efforts have been devoted to understanding the nature
of these states, but most of them still remain controversial (see Refs. Chen
_et al._ (2016a); Hosaka _et al._ (2016); Richard (2016); Lebed _et al._
(2017); Esposito _et al._ (2017); Guo _et al._ (2018); Ali _et al._ (2017);
Olsen _et al._ (2018); Altmannshofer _et al._ (2019); Kalashnikova and
Nefediev (2019); Cerri _et al._ (2019); Liu _et al._ (2019a); Brambilla _et
al._ (2020); Guo _et al._ (2020); Yang _et al._ (2020a); Ortega and Entem
(2020) for recent reviews).
The observation that most of these structures are located near the threshold
of a pair of heavy-antiheavy hadrons may shed light on identifying the nature
of these structures. To name a few famous examples, let us mention the
$X(3872)$ (also known as $\chi_{c1}(3872)$) Choi _et al._ (2003) and $Z_{c}(3900)^{\pm}$
Ablikim _et al._ (2013a); Liu _et al._ (2013); Ablikim _et al._ (2014a)
around the $D\bar{D}^{*}$ threshold, the $Z_{c}(4020)^{\pm}$ Ablikim _et al._
(2014b, 2013b) near the $D^{*}\bar{D}^{*}$ threshold, the $Z_{b}(10610)^{\pm}$
and $Z_{b}(10650)^{\pm}$ Bondar _et al._ (2012); Garmash _et al._ (2016)
near the $B\bar{B}^{*}$ and $B^{*}\bar{B}^{*}$ thresholds, the
$Z_{cs}(3985)^{-}$ Ablikim _et al._ (2020a) near the $\bar{D}_{s}D^{*}$ and
$\bar{D}_{s}^{*}D$ thresholds, and the $P_{c}$ states Aaij _et al._ (2019)
near the $\bar{D}^{(*)}\Sigma_{c}$ thresholds. These resonant
structures were widely investigated in many models, including assigning them
to be the molecular states of the corresponding systems. We refer to Ref. Guo
_et al._ (2018) for a comprehensive review of hadronic molecules.
Although these near-threshold resonant structures have been explored by many
works using various methods, our understanding of these structures will
greatly benefit from a complete and systematic spectrum of heavy-antiheavy
hadronic molecules based on a single model. In this paper, we provide such a
spectrum of hadronic molecules composed of a pair of heavy-antiheavy hadrons
including $S,P$-wave heavy mesons and $S$-wave heavy baryons. The interactions
between these hadron pairs are assumed to be contact terms saturated by meson
exchanges as in, e.g., Refs. Ecker _et al._ (1989); Epelbaum _et al._
(2002); Peng _et al._ (2020a). In order to have a unified treatment in all
systems, we will not consider coupled-channel effects. In Ref. Peng _et al._
(2020a) a similar study, focusing on the possible bound states composed of
$\bar{D}^{(*)}$ and $\Sigma_{c}^{(*)}$, was performed. Also note that such
interactions have been obtained in many other works, e.g. Refs. Liu _et al._
(2008); Ding (2009); Wu _et al._ (2010, 2011); Yang _et al._ (2012); Pavon
Valderrama (2020), to look for possible bound states associated with the near-
threshold resonant structures. These works used different models and
conventions and in some cases the results are inconsistent with each other.
In this work we consider the resonance saturation of the contact interaction
by one-boson exchange (of light pseudoscalar and vector mesons), together
with heavy quark spin symmetry (HQSS), chiral symmetry and SU(3) flavor
symmetry, to estimate the potentials of the heavy-antiheavy systems, and
special attention is paid to the signs of coupling constants to get self-
consistent results. Heavy quark flavor symmetry (HQFS) implies that the
potentials between bottomed hadron pairs are the same as those between charmed
ones, and thus we take charmed hadrons as an example. The obtained potentials are
used to solve the Bethe-Salpeter equation in the on-shell form to determine
the pole positions of these heavy-antiheavy hadron systems and a systematic
spectrum of these hadronic molecules is obtained.
Besides the possible molecular states, the interaction between a pair of
heavy-antiheavy hadrons at threshold is crucial to understand the line shape
of the invariant mass distribution of related channels, as discussed in Ref.
Dong _et al._ (2020a). It is known that unitarity of the $S$ matrix requires
the threshold to be a branch point of the scattering amplitude. Therefore, the
modulus squared of the amplitude always shows a cusp at the threshold and a
nontrivial structure may appear in the line shape of the invariant mass
distribution. Whether the cusp appears as a peak, a dip, or is buried in
the background depends on the interaction of the two relevant particles near
the threshold. More specifically, the cusp at the threshold of a heavy-
antiheavy hadron pair will generally show up as a peak in the invariant mass
distribution of a heavy quarkonium and light hadron(s), which couple to the
heavy-antiheavy hadron pair, if the interaction is attractive at threshold and
a nearby virtual state comes into existence (for subtleties and more detailed
discussions, we refer to Ref. Dong _et al._ (2020a)). If the attraction is
strong and a bound state is formed, the peak will be located at the mass of
the bound state below threshold. Therefore, the potentials and the pole
positions obtained in this work are of great significance for the study of
near-threshold line shapes.
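As a minimal numerical sketch (not the amplitude used in this paper), the near-threshold behaviour can be illustrated in the scattering-length approximation, $T(E)\propto 1/(-1/a-ik)$ with $k=\sqrt{2\mu E}$: for a negative scattering length, corresponding to a nearby virtual state, $|T|^{2}$ indeed peaks right at threshold. The values of $a$ and $\mu$ below are purely illustrative.

```python
import math

def T_mod2(E, a, mu):
    """|T|^2 for T = 1/(-1/a - i k) in the scattering-length
    approximation; E is the energy relative to threshold (GeV),
    a the scattering length (GeV^-1), mu the reduced mass (GeV).
    Below threshold the momentum is continued as k -> i*kappa
    with kappa = sqrt(-2*mu*E)."""
    if E >= 0:
        k = math.sqrt(2.0 * mu * E)
        return 1.0 / (1.0 / a**2 + k * k)
    kappa = math.sqrt(-2.0 * mu * E)
    return 1.0 / (kappa - 1.0 / a) ** 2

# a < 0: nearby virtual state -> |T|^2 is maximal right at threshold,
# falling off on both sides, i.e. the cusp shows up as a peak.
a, mu = -2.0, 1.0   # illustrative values in GeV units
assert T_mod2(0.0, a, mu) > T_mod2(0.05, a, mu)
assert T_mod2(0.0, a, mu) > T_mod2(-0.05, a, mu)
```

For $a>0$ the same formula instead develops a pole at $\kappa=1/a$ below threshold, reproducing the statement above that a bound state shifts the peak below threshold.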
Note that the leading-order interaction between a heavy-antiheavy hadron
pair, which is just a constant, works well for both purposes mentioned above
when resummed, as long as we focus only on the near-threshold pole and line shape.
Therefore, we only keep the leading order interaction, which is saturated by
the vector meson exchange as discussed in the following, and its resummation.
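A rough sketch of this resummation, under simplifying assumptions (a constant contact term $V$, a nonrelativistic two-body loop with a sharp momentum cutoff, and illustrative parameter values; this is not the exact on-shell Bethe-Salpeter setup of the paper): the resummed amplitude is $T=V/(1-VG(E))$, and a bound state corresponds to a real solution of $1-VG(E)=0$ below threshold.

```python
import math

def loop_G(E, mu, cutoff):
    """Nonrelativistic two-body loop function below threshold with a
    sharp momentum cutoff (E < 0 in GeV, measured from threshold):
        G(E) = (mu/pi^2) * (-cutoff + kappa*atan(cutoff/kappa)),
    with kappa = sqrt(-2*mu*E)."""
    kappa = math.sqrt(-2.0 * mu * E)
    return (mu / math.pi**2) * (-cutoff + kappa * math.atan(cutoff / kappa))

def bound_state(V, mu, cutoff, E_lo=-0.2, E_hi=-1e-9, tol=1e-12):
    """Find E with 1 - V*G(E) = 0, i.e. the pole of T = V/(1 - V*G),
    by bisection; returns the binding energy (GeV), or None when the
    attraction is too weak to bind."""
    f = lambda E: 1.0 - V * loop_G(E, mu, cutoff)
    if f(E_lo) * f(E_hi) > 0.0:
        return None
    while E_hi - E_lo > tol:
        mid = 0.5 * (E_lo + E_hi)
        if f(E_lo) * f(mid) <= 0.0:
            E_hi = mid
        else:
            E_lo = mid
    return 0.5 * (E_lo + E_hi)

mu, cutoff = 0.97, 1.0   # GeV: roughly a charmed-meson-pair reduced mass, 1 GeV cutoff
print(bound_state(-15.0, mu, cutoff))  # attractive enough: a pole ~30 MeV below threshold
print(bound_state(-5.0, mu, cutoff))   # too weak: None
```

When the attraction is below the critical strength, no real solution exists below threshold and the pole moves to the unphysical sheet as a virtual state, mirroring the bound/virtual dichotomy described above.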
This paper is organized as follows. In Section II, the Lagrangians of heavy
hadron and light meson coupling are presented. In Section III, the potentials
of different systems are obtained. We calculate the pole positions of
different systems and compare them with experimental and other theoretical
results in Section IV. Section V is devoted to a brief summary. Some details
regarding the potentials are relegated to Appendices A, B and C.
## II Lagrangian from heavy quark spin symmetry
For hadrons containing one or more heavy quarks, additional symmetries emerge
in the low energy effective field theory for QCD due to the large heavy quark
mass Isgur and Wise (1990, 1989). On the one hand, the strong suppression of
the chromomagnetic interaction, which is proportional to
${\bm{\sigma}\cdot\bm{B}}/{m_{Q}}\sim{\Lambda_{\rm QCD}}/{m_{Q}}$ with
$\Lambda_{\rm QCD}\sim 200-300$ MeV the typical QCD scale, implies that the
spin of the heavy quark $s_{Q}$ is decoupled from the angular momentum
$s_{\ell}$ of light quarks in the limit of $m_{Q}\to\infty$. Therefore, HQSS
emerges, which means that the interaction is invariant under a transformation
of $s_{Q}$. On the other hand, the change of the velocity of the heavy quark
in a singly-heavy hadron during the interaction, $\Delta v={\Delta
p}/{m_{Q}}\sim{\Lambda_{\rm QCD}}/{m_{Q}}$, vanishes in the limit of
$m_{Q}\to\infty$, and the heavy quark behaves like a static color source,
independent of the quark flavor. Therefore, it is expected that the potentials
between bottomed hadron pairs are the same as those of the charmed ones, and
in turn it is sufficient to focus on the charm sector.
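To put rough numbers on the suppression factors above (the quark masses are illustrative PDG values, not taken from this paper):

```python
# Rough size of the HQSS/HQFS-breaking corrections, Lambda_QCD / m_Q.
LAMBDA_QCD = 0.25                         # GeV, middle of the 200-300 MeV range quoted above
masses = {"charm": 1.27, "bottom": 4.18}  # GeV, illustrative PDG quark masses

for flavor, m_Q in masses.items():
    print(f"{flavor}: Lambda_QCD/m_Q ~ {LAMBDA_QCD / m_Q:.2f}")
# charm:  ~0.20, so the heavy quark symmetries hold at the ~20% level;
# bottom: ~0.06, i.e. even better in the bottom sector.
```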
### II.1 Heavy mesons
To construct a Lagrangian that is invariant under the heavy quark spin
transformation and chiral transformation, it is convenient to represent the
ground states of charmed mesons as the following superfield Wise (1992);
Casalbuoni _et al._ (1992, 1997)
$\displaystyle
H_{a}^{(Q)}=\frac{1+\not{v}}{2}\left[P_{a}^{*(Q)\mu}\gamma_{\mu}-P_{a}^{(Q)}\gamma_{5}\right],$
(1)
where $a$ is the SU(3) flavor index,
$\displaystyle P^{(Q)}=(D^{0},D^{+},D_{s}^{+}),\quad
P^{*(Q)}_{\mu}=(D^{*0}_{\mu},D^{*+}_{\mu},D^{*+}_{s\mu}),$ (2)
and $v^{\mu}=p^{\mu}/M$ is the four-velocity of the heavy meson satisfying
$v\cdot v=1$. The heavy field operators contain a factor $\sqrt{M_{H}}$ and
have dimension 3/2. The superfield that creates heavy mesons is constructed as
$\bar{H}_{a}^{(Q)}=\gamma_{0}H_{a}^{(Q)\dagger}\gamma_{0}.$ (3)
The superfields that annihilate or create mesons containing an antiheavy quark
are not $\bar{H}_{a}^{(Q)}$ or $H_{a}^{(Q)}$ but the following ones Grinstein
_et al._ (1992):
$\displaystyle H_{a}^{(\bar{Q})}$
$\displaystyle=C\left(\mathcal{C}H_{a}^{(Q)}\mathcal{C}^{-1}\right)^{T}C^{-1}$
$\displaystyle=\left[P_{a\mu}^{*(\bar{Q})}\gamma^{\mu}-P_{a}^{(\bar{Q})}\gamma_{5}\right]\frac{1-\not{v}}{2},$
(4) $\displaystyle\bar{H}_{a}^{(\bar{Q})}$
$\displaystyle=\gamma_{0}H_{a}^{(\bar{Q})\dagger}\gamma_{0},$ (5)
with
$\displaystyle P^{(\bar{Q})}=(\bar{D}^{0},D^{-},D_{s}^{-}),\quad
P_{\mu}^{*(\bar{Q})}=(\bar{D}_{\mu}^{*0},D_{\mu}^{*-},D^{*-}_{s\mu}).$ (6)
$\mathcal{C}$ is the charge conjugation operator and $C=i\gamma^{2}\gamma^{0}$
is the charge conjugation matrix, where we have taken the phase convention for
charge conjugation as
$\mathcal{C}P_{a}^{(Q)}\mathcal{C}^{-1}=P_{a}^{(\bar{Q})}$ and
$\mathcal{C}P_{a}^{*(Q)}\mathcal{C}^{-1}=-P_{a}^{*(\bar{Q})}$.
The $P$-wave heavy mesons have two spin multiplets, one with $s_{\ell}=1/2$
represented by $S$ and the other with $s_{\ell}=3/2$ represented by $T$ Falk
(1992); Falk and Luke (1992),
$\displaystyle S_{a}^{(Q)}=$
$\displaystyle\,\frac{1+\not{v}}{2}\left[P_{1a}^{\prime(Q)\mu}\gamma_{\mu}\gamma_{5}-P_{0a}^{*(Q)}\right],$
(7) $\displaystyle T_{a}^{(Q)\mu}=$
$\displaystyle\,\frac{1+\not{v}}{2}\bigg{[}P_{2a}^{*(Q)\mu\nu}\gamma_{\nu}$
$\displaystyle-\sqrt{\frac{3}{2}}P_{1a\nu}^{(Q)}\gamma_{5}\left(g^{\mu\nu}-\frac{1}{3}\gamma^{\nu}\left(\gamma^{\mu}-v^{\mu}\right)\right)\bigg{]}.$
(8)
In analogy with Eqs. (3)-(5), we have
$\displaystyle\bar{S}_{a}^{(Q)}=$
$\displaystyle\,\gamma_{0}S_{a}^{(Q)\dagger}\gamma_{0},$ (9)
$\displaystyle\bar{T}_{a}^{(Q)\mu}=$
$\displaystyle\,\gamma_{0}T_{a}^{(Q)\mu\dagger}\gamma_{0},$ (10)
$\displaystyle S_{a}^{(\bar{Q})}=$
$\displaystyle\,\left[P_{1a}^{\prime(\bar{Q})\mu}\gamma_{\mu}\gamma_{5}-P_{0a}^{*(\bar{Q})}\right]\frac{1-\not{v}}{2},$
(11) $\displaystyle T_{a}^{(\bar{Q})\mu}=$
$\displaystyle\,\left[P_{2a}^{*(\bar{Q})\mu\nu}\gamma_{\nu}-\sqrt{\frac{3}{2}}P_{1a\nu}^{(\bar{Q})}\right.$
$\displaystyle\left.\times\gamma_{5}\left(g^{\mu\nu}-\frac{1}{3}\left(\gamma^{\mu}-v^{\mu}\right)\gamma^{\nu}\right)\right]\frac{1-\not{v}}{2},$
(12) $\displaystyle\bar{S}_{a}^{(\bar{Q})}=$
$\displaystyle\,\gamma_{0}S_{a}^{(\bar{Q})\dagger}\gamma_{0},$ (13)
$\displaystyle\bar{T}_{a\mu}^{(\bar{Q})}=$
$\displaystyle\,\gamma_{0}T_{a\mu}^{(\bar{Q})\dagger}\gamma_{0}.$ (14)
The mesons in the $T$ multiplet are
$\displaystyle P_{1}^{(Q)}$
$\displaystyle=(D_{1}(2420)^{0},D_{1}(2420)^{+},D_{s1}(2536)^{+}),$
$\displaystyle P_{2}^{*(Q)}$
$\displaystyle=(D_{2}^{*}(2460)^{0},D_{2}^{*}(2460)^{+},D_{s2}^{*}(2573)^{+}),$ (15)
which can couple to $D^{*}\pi/K$ only in $D$-wave in the heavy quark limit.
While the $P$-wave charmed mesons with $s_{\ell}=1/2$ can couple to
$D^{*}\pi/K$ in $S$-wave without violating HQSS, there are issues in
identifying them. The $D_{0}^{*}(2300)$ and $D_{1}(2430)$ listed in the Review
of Particle Physics (RPP) Zyla _et al._ (2020) could be candidates for the
charm-nonstrange ones. However, on the one hand, they have rather large widths
such that they would have decayed before they could be bound together with
another heavy hadron Filin _et al._ (2010); Guo and Meißner (2011); on the
other hand, they were extracted using the Breit-Wigner parameterization which
has deficiencies in the current case Du _et al._ (2019) and has been
demonstrated Du _et al._ (2020a) to lead to resonance parameters for the
$D_{0}^{*}(2300)$ in conflict with the precise LHCb data of the $B^{-}\to
D^{+}\pi^{-}\pi^{-}$ process Aaij _et al._ (2016). For the ones with
strangeness, the lowest positive-parity $D_{s0}^{*}(2317)$ and $D_{s1}(2460)$
are widely considered as molecular states of $DK$ and $D^{*}K$ Barnes _et
al._ (2003); van Beveren and Rupp (2003); Kolomeitsev and Lutz (2004); Chen
and Li (2004); Guo _et al._ (2006, 2007), see Ref. Guo (2019) for a recent
review collecting evidence for such an interpretation. This multiplet $S$,
therefore, will not be considered in the rest of this work. For studies of
three-body hadronic molecular states involving the $D_{s0}^{*}(2317)$ as a
$DK$ subsystem, we refer to Refs. Ma _et al._ (2019); Martínez Torres _et
al._ (2019); Wu _et al._ (2019, 2020).
The light pseudoscalar meson octet can be introduced using the nonlinear
realization of the spontaneous chiral symmetry breaking of QCD as
$\Sigma=\xi^{2}$ and $\xi=e^{i\Pi/(\sqrt{2}F_{\pi})}$ with $F_{\pi}=92$ MeV
the pion decay constant and
$\displaystyle\Pi=\left(\begin{array}[]{ccc}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}&\pi^{+}&K^{+}\\\
\pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}&K^{0}\\\
K^{-}&\bar{K}^{0}&-\sqrt{\frac{2}{3}}\eta\end{array}\right).$ (19)
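As a quick consistency check of the octet matrix in Eq. (19), one can verify numerically that $\Pi$ is traceless and hermitian for arbitrary field values (a sketch assuming NumPy is available; the random numbers are placeholders for the field values):

```python
import numpy as np

rng = np.random.default_rng(0)
pi0, eta = rng.normal(size=2)                      # neutral fields are real
pip, Kp, K0 = rng.normal(size=3) + 1j * rng.normal(size=3)
s2, s6 = np.sqrt(2.0), np.sqrt(6.0)

# The Goldstone octet matrix Pi of Eq. (19); complex conjugation
# implements pi^- = (pi^+)^*, K^- = (K^+)^*, Kbar^0 = (K^0)^*.
Pi = np.array([
    [pi0 / s2 + eta / s6, pip,                  Kp],
    [np.conj(pip),        -pi0 / s2 + eta / s6, K0],
    [np.conj(Kp),         np.conj(K0),          -np.sqrt(2.0 / 3.0) * eta],
])

assert abs(np.trace(Pi)) < 1e-12      # traceless: Pi lives in the SU(3) octet
assert np.allclose(Pi, Pi.conj().T)   # hermitian field matrix
```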
The effective Lagrangian for the coupling of heavy mesons and light
pseudoscalar mesons is constructed by imposing invariance under both heavy
quark spin transformation and chiral transformation Wise (1992); Falk and Luke
(1992); Grinstein _et al._ (1992),
$\displaystyle\mathcal{L}_{PP\Pi}$ $\displaystyle=\,ig\left\langle
H_{b}^{(Q)}\not{\mathcal{A}}_{ba}\gamma_{5}\bar{H}_{a}^{(Q)}\right\rangle+ik\left\langle
T_{b}^{(Q)\mu}\not{\mathcal{A}}_{ba}\gamma_{5}\bar{T}_{a\mu}^{(Q)}\right\rangle+i\tilde{k}\left\langle
S_{b}^{(Q)}\not{\mathcal{A}}_{ba}\gamma_{5}\bar{S}_{a}^{(Q)}\right\rangle+\left[ih\left\langle
S_{b}^{(Q)}\not{\mathcal{A}}_{ba}\gamma_{5}\bar{H}_{a}^{(Q)}\right\rangle\right.$
$\displaystyle\left.+\,i\tilde{h}\left\langle T_{b}^{(Q)\mu}\mathcal{A}_{\mu
ba}\gamma_{5}\bar{S}_{a}^{(Q)}\right\rangle+i\frac{h_{1}}{\Lambda_{\chi}}\left\langle
T_{b}^{(Q)\mu}\left(D_{\mu}\not{\mathcal{A}}\right)_{ba}\gamma_{5}\bar{H}_{a}^{(Q)}\right\rangle+i\frac{h_{2}}{\Lambda_{\chi}}\left\langle
T_{b}^{(Q)\mu}\left(\not{D}\mathcal{A}_{\mu}\right)_{ba}\gamma_{5}\bar{H}_{a}^{(Q)}\right\rangle+h.c.\right]$
$\displaystyle+ig\left\langle\bar{H}_{a}^{(\bar{Q})}\not{\mathcal{A}}_{ab}\gamma_{5}H_{b}^{(\bar{Q})}\right\rangle+ik\left\langle\bar{T}_{a}^{(\bar{Q})\mu}\not{\mathcal{A}}_{ab}\gamma_{5}T_{b\mu}^{(\bar{Q})}\right\rangle+i\tilde{k}\left\langle\bar{S}_{a}^{(\bar{Q})}\not{\mathcal{A}}_{ab}\gamma_{5}S_{b}^{(\bar{Q})}\right\rangle+{\bigg{[}}ih\left\langle\bar{H}_{a}^{(\bar{Q})}\not{\mathcal{A}}_{ab}\gamma_{5}S_{b}^{(\bar{Q})}\right\rangle$
$\displaystyle+i\tilde{h}\left\langle\bar{S}_{a}^{(\bar{Q})}\mathcal{A}_{\mu
ab}\gamma_{5}T_{b}^{(\bar{Q})\mu}\right\rangle+i\frac{h_{1}}{\Lambda_{\chi}}\left\langle\bar{H}_{a}^{(\bar{Q})}\Big{(}\not{\mathcal{A}}\stackrel{{\scriptstyle\leftarrow}}{{D_{\mu}^{\prime}}}\Big{)}_{ab}\gamma_{5}T_{b}^{(\bar{Q})\mu}\right\rangle+i\frac{h_{2}}{\Lambda_{\chi}}\left\langle\bar{H}_{a}^{(\bar{Q})}\Big{(}\mathcal{A}_{\mu}\stackrel{{\scriptstyle\leftarrow}}{{\not{D}^{\prime}}}\Big{)}_{ab}\gamma_{5}T_{b}^{(\bar{Q})\mu}\right\rangle+\text{h.c.}{\bigg{]}},$
(20)
where $D_{\mu}=\partial_{\mu}+\mathcal{V}_{\mu}$,
$D^{\prime}_{\mu}=\partial_{\mu}-\mathcal{V}_{\mu}$, $\langle\cdots\rangle$
denotes tracing over the Dirac $\gamma$ matrices, $\Lambda_{\chi}\simeq 4\pi
F_{\pi}$ is the chiral symmetry breaking scale, and
$\displaystyle\mathcal{V}_{\mu}$
$\displaystyle=\frac{1}{2}\left(\xi^{\dagger}\partial_{\mu}\xi+\xi\partial_{\mu}\xi^{\dagger}\right),$
(21) $\displaystyle\mathcal{A}_{\mu}$
$\displaystyle=\frac{1}{2}\left(\xi^{\dagger}\partial_{\mu}\xi-\xi\partial_{\mu}\xi^{\dagger}\right)$
(22)
are the vector and axial currents which contain an even and odd number of
pseudoscalar mesons, respectively.
The coupling of heavy mesons and light vector mesons can be introduced by
using the hidden local symmetry approach Bando _et al._ (1985, 1988); Meißner
(1988), and the Lagrangian reads Casalbuoni _et al._ (1992, 1993, 1997)
$\displaystyle\mathcal{L}_{PPV}=$ $\displaystyle\,i\beta\left\langle
H_{b}^{(Q)}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ba}\bar{H}_{a}^{(Q)}\right\rangle+i\lambda\left\langle
H_{b}^{(Q)}\sigma^{\mu\nu}F_{\mu\nu}(\rho)_{ba}\bar{H}_{a}^{(Q)}\right\rangle+i\beta_{1}\left\langle
S_{b}^{(Q)}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ba}\bar{S}_{a}^{(Q)}\right\rangle$
$\displaystyle+i\lambda_{1}\left\langle
S_{b}^{(Q)}\sigma^{\mu\nu}F_{\mu\nu}(\rho)_{ba}\bar{S}_{a}^{(Q)}\right\rangle+i\beta_{2}\left\langle
T_{b}^{(Q)\lambda}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ba}\bar{T}_{a\lambda}^{(Q)}\right\rangle+i\lambda_{2}\left\langle
T_{b}^{(Q)\lambda}\sigma^{\mu\nu}F_{\mu\nu}(\rho)_{ba}\bar{T}_{a\lambda}^{(Q)}\right\rangle$
$\displaystyle+\left[i\zeta\left\langle
H_{b}^{(Q)}\gamma^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ba}\bar{S}_{a}^{(Q)}\right\rangle+i\mu\left\langle
H_{b}^{(Q)}\sigma^{\lambda\nu}F_{\lambda\nu}(\rho)_{ba}\bar{S}_{a}^{(Q)}\right\rangle+i\zeta_{1}\left\langle
T_{b}^{(Q)\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ba}\bar{H}_{a}^{(Q)}\right\rangle\right.$
$\displaystyle\left.+\mu_{1}\left\langle
T_{b}^{(Q)\mu}\gamma^{\nu}F_{\mu\nu}(\rho)_{ba}\bar{H}_{a}^{(Q)}\right\rangle+h.c.\right]$
$\displaystyle-i\beta\left\langle\bar{H}_{a}^{(\bar{Q})}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ab}H_{b}^{(\bar{Q})}\right\rangle+i\lambda\left\langle\bar{H}_{a}^{(\bar{Q})}\sigma^{\mu\nu}F_{\mu\nu}(\rho)_{ab}H_{b}^{(\bar{Q})}\right\rangle-i\beta_{1}\left\langle\bar{S}_{a}^{(\bar{Q})}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ab}S_{b}^{(\bar{Q})}\right\rangle$
$\displaystyle+i\lambda_{1}\left\langle\bar{S}_{a}^{(\bar{Q})}\sigma^{\mu\nu}F_{\mu\nu}(\rho)_{ab}S_{b}^{(\bar{Q})}\right\rangle-i\beta_{2}\left\langle\bar{T}_{a\lambda}^{(\bar{Q})}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ab}T_{b}^{(\bar{Q})\lambda}\right\rangle+i\lambda_{2}\left\langle\bar{T}_{a\lambda}^{(\bar{Q})}\sigma^{\mu\nu}F_{\mu\nu}(\rho)_{ab}T_{b}^{(\bar{Q})\lambda}\right\rangle$
$\displaystyle+\left[i\zeta\left\langle\bar{S}_{a}^{(\bar{Q})}\gamma^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ab}H_{a}^{(\bar{Q})}\right\rangle+i\mu\left\langle\bar{S}_{a}^{(\bar{Q})}\sigma^{\lambda\nu}F_{\lambda\nu}(\rho)_{ab}H_{b}^{(\bar{Q})}\right\rangle-i\zeta_{1}\left\langle\bar{H}_{a}^{(\bar{Q})}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)_{ab}T_{b}^{(\bar{Q})\mu}\right\rangle\right.$
$\displaystyle\left.+\mu_{1}\left\langle\bar{H}_{a}^{(\bar{Q})}\gamma^{\nu}F_{\mu\nu}(\rho)_{ab}T_{b}^{(\bar{Q})\mu}\right\rangle+\text{h.c.}\right],$
(23)
with
$F_{\mu\nu}=\partial_{\mu}\rho_{\nu}-\partial_{\nu}\rho_{\mu}+\left[\rho_{\mu},\rho_{\nu}\right]$,
and
$\displaystyle\rho=i\frac{g_{V}}{\sqrt{2}}V=i\frac{g_{V}}{\sqrt{2}}\left(\begin{array}[]{ccc}\frac{\omega}{\sqrt{2}}+\frac{\rho^{0}}{\sqrt{2}}&\rho^{+}&K^{*+}\\\
\rho^{-}&\frac{\omega}{\sqrt{2}}-\frac{\rho^{0}}{\sqrt{2}}&K^{*0}\\\
K^{*-}&\bar{K}^{*0}&\phi\end{array}\right),$ (27)
which satisfies $\mathcal{C}V\mathcal{C}^{-1}=-V^{T}$.
Recall that in the following we are only interested in the potential near
threshold and will not consider coupled channels. Therefore, Lagrangian terms
that result in potentials proportional to the transferred momentum
$\bm{q}^{2}$ contribute little. At the leading order of the
chiral expansion, the light pseudoscalar mesons, being Goldstone bosons,
couple only through derivatives, as demonstrated in Eq. (20), so all pseudoscalar
exchanges have subleading contributions near threshold in comparison with the
constant contact term that can generate a near-threshold pole after
resummation. Moreover, coupled channels are not taken into account here, and
we do not consider the $s_{\ell}=1/2$ mesons. Consequently, we can keep only the
$\beta$, $\beta_{2}$ and $\zeta_{1}$ terms in Eq. (23). Expanding these terms
we obtain
$\displaystyle\mathcal{L}_{PPV}=-\sqrt{2}\beta
g_{V}\left(P_{a}^{(Q)}P_{b}^{(Q)\dagger}-P_{b}^{(\bar{Q})}P_{a}^{(\bar{Q})\dagger}\right)v_{\mu}V^{\mu}_{ab}$
$\displaystyle+\sqrt{2}\beta
g_{V}\left(P_{a}^{*(Q)\nu}P_{b\nu}^{*(Q)\dagger}-P_{b}^{*(\bar{Q})\nu}P_{a\nu}^{*(\bar{Q})\dagger}\right)v_{\mu}V^{\mu}_{ab}$
$\displaystyle-\sqrt{2}\beta_{2}g_{V}\left(P_{1a}^{(Q)\nu}P_{1b\nu}^{(Q)\dagger}-P_{1b}^{(\bar{Q})\nu}P_{1a\nu}^{(\bar{Q})\dagger}\right)v_{\mu}V^{\mu}_{ab}$
$\displaystyle+\sqrt{2}\beta_{2}g_{V}\left(P_{2a}^{(Q)\alpha\beta}P_{2b\alpha\beta}^{(Q)\dagger}-P_{2b}^{(\bar{Q})\alpha\beta}P_{2a\alpha\beta}^{(\bar{Q})\dagger}\right)v_{\mu}V^{\mu}_{ab}$
$\displaystyle+\Big{[}\sqrt{2}\zeta_{1}g_{V}\left(P_{2a}^{(Q)\mu\nu}P_{b\nu}^{(Q)*\dagger}+P_{2b}^{(\bar{Q})\mu\nu}P_{a\nu}^{(\bar{Q})*\dagger}\right)V_{ab\mu}$
$\displaystyle-\frac{i\zeta_{1}g_{V}}{\sqrt{3}}\epsilon_{\alpha\beta\gamma\delta}\left(P_{1a}^{(Q)\alpha}P_{b}^{(Q)*\dagger\beta}+P_{1b}^{(\bar{Q})\alpha}P_{a}^{(\bar{Q})*\dagger\beta}\right)v^{\gamma}V_{ab}^{\delta}$
$\displaystyle-\frac{2\zeta_{1}g_{V}}{\sqrt{3}}\left(P_{1a\mu}^{(Q)}P_{b}^{(Q)\dagger}-P_{1b\mu}^{(\bar{Q})}P_{a}^{(\bar{Q})\dagger}\right)V_{ab}^{\mu}+{\rm
h.c.}\Big{]}.$ (28)
Assuming vector meson dominance, the coupling constants $g_{V}$ and $\beta$
were estimated to be $5.8$ Bando _et al._ (1988) and $0.9$ Isola _et al._
(2003), respectively. In Ref. Dong _et al._ (2020b), it is estimated that
$\beta_{2}\approx-\beta=-0.9$ under the assumption that the coupling of
$D_{1}D_{1}V$ is the same as that of $DDV$ and $\zeta_{1}\approx 0.16$ from
the decay of $K_{1}\to K\rho$.
### II.2 Heavy baryons
In the heavy quark limit, the ground states of heavy baryons $Qqq$ form an
SU(3) antitriplet with $J^{P}=\frac{1}{2}^{+}$ denoted by $B^{(Q)}_{\bar{3}}$
and two degenerate sextets with $J^{P}=(\frac{1}{2},\frac{3}{2})^{+}$ denoted
by $(B^{(Q)}_{6},B^{(Q)*}_{6})$ Yan _et al._ (1992),
$\displaystyle B^{(Q)}_{\bar{3}}$
$\displaystyle=\left(\begin{array}[]{ccc}0&\Lambda_{c}^{+}&\Xi_{c}^{+}\\ -\Lambda_{c}^{+}&0&\Xi_{c}^{0}\\ -\Xi_{c}^{+}&-\Xi_{c}^{0}&0\end{array}\right),$ (32)
$\displaystyle B^{(Q)}_{6}$
$\displaystyle=\left(\begin{array}[]{ccc}\Sigma_{c}^{++}&\frac{1}{\sqrt{2}}\Sigma_{c}^{+}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime+}\\ \frac{1}{\sqrt{2}}\Sigma_{c}^{+}&\Sigma_{c}^{0}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime 0}\\ \frac{1}{\sqrt{2}}\Xi_{c}^{\prime+}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime 0}&\Omega_{c}^{0}\end{array}\right),$ (36)
$\displaystyle B_{6}^{(Q)*}$
$\displaystyle=\left(\begin{array}[]{ccc}\Sigma_{c}^{*++}&\frac{1}{\sqrt{2}}\Sigma_{c}^{*+}&\frac{1}{\sqrt{2}}\Xi_{c}^{*+}\\ \frac{1}{\sqrt{2}}\Sigma_{c}^{*+}&\Sigma_{c}^{*0}&\frac{1}{\sqrt{2}}\Xi_{c}^{*0}\\ \frac{1}{\sqrt{2}}\Xi_{c}^{*+}&\frac{1}{\sqrt{2}}\Xi_{c}^{*0}&\Omega_{c}^{*0}\end{array}\right).$ (40)
Here we do not consider the $P$-wave heavy baryons since they are not well
established experimentally. The two sextets are collected into the superfield
$S_{\mu}$,
$\displaystyle S^{(Q)}_{\mu}$
$\displaystyle=B_{6\mu}^{(Q)*}-\frac{1}{\sqrt{3}}\left(\gamma_{\mu}+v_{\mu}\right)\gamma^{5}B^{(Q)}_{6},$
(41) $\displaystyle\bar{S}^{(Q)}_{\mu}$
$\displaystyle=\bar{B}_{6\mu}^{(Q)*}+\frac{1}{\sqrt{3}}\bar{B}^{(Q)}_{6}\gamma^{5}\left(\gamma_{\mu}+v_{\mu}\right),$
(42)
where $B_{6\mu}$ is the Rarita-Schwinger vector-spinor field Rarita and
Schwinger (1941). The fields that annihilate anti-baryons are obtained by
taking the charge conjugation of $B^{(Q)}_{\bar{3}}$, $B^{(Q)}_{6}$ and
$B_{6}^{(Q)*}$,
$\displaystyle B^{(\bar{Q})}_{{3}}$
$\displaystyle=\left(\begin{array}[]{ccc}0&\Lambda_{c}^{-}&\Xi_{c}^{-}\\ -\Lambda_{c}^{-}&0&\bar{\Xi}_{c}^{0}\\ -\Xi_{c}^{-}&-\bar{\Xi}_{c}^{0}&0\end{array}\right),$ (46)
$\displaystyle B^{(\bar{Q})}_{\bar{6}}$
$\displaystyle=\left(\begin{array}[]{ccc}\Sigma_{c}^{--}&\frac{1}{\sqrt{2}}\Sigma_{c}^{-}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime-}\\ \frac{1}{\sqrt{2}}\Sigma_{c}^{-}&\bar{\Sigma}_{c}^{0}&\frac{1}{\sqrt{2}}\bar{\Xi}_{c}^{\prime 0}\\ \frac{1}{\sqrt{2}}\Xi_{c}^{\prime-}&\frac{1}{\sqrt{2}}\bar{\Xi}_{c}^{\prime 0}&\bar{\Omega}_{c}^{0}\end{array}\right),$ (50)
$\displaystyle B_{\bar{6}}^{(\bar{Q})*}$
$\displaystyle=\left(\begin{array}[]{ccc}\Sigma_{c}^{*--}&\frac{1}{\sqrt{2}}\Sigma_{c}^{*-}&\frac{1}{\sqrt{2}}\Xi_{c}^{*-}\\ \frac{1}{\sqrt{2}}\Sigma_{c}^{*-}&\bar{\Sigma}_{c}^{*0}&\frac{1}{\sqrt{2}}\bar{\Xi}_{c}^{*0}\\ \frac{1}{\sqrt{2}}\Xi_{c}^{*-}&\frac{1}{\sqrt{2}}\bar{\Xi}_{c}^{*0}&\bar{\Omega}_{c}^{*0}\end{array}\right),$ (54)
where we have used the phase conventions such that
$\mathcal{C}B^{(Q)}\mathcal{C}^{-1}=B^{(\bar{Q})}$. The corresponding
superfields now read
$\displaystyle S^{(\bar{Q})}_{\mu}$
$\displaystyle=B_{\bar{6}\mu}^{(\bar{Q})*}-\frac{1}{\sqrt{3}}\left(\gamma_{\mu}+v_{\mu}\right)\gamma^{5}B^{(\bar{Q})}_{\bar{6}},$
(55) $\displaystyle\bar{S}^{(\bar{Q})}_{\mu}$
$\displaystyle=\bar{B}_{\bar{6}\mu}^{(\bar{Q})*}+\frac{1}{\sqrt{3}}\bar{B}^{(\bar{Q})}_{\bar{6}}\gamma^{5}\left(\gamma_{\mu}+v_{\mu}\right).$
(56)
The Lagrangian for the coupling of heavy baryons and light mesons is
constructed as Liu and Oka (2012)
$\displaystyle\mathcal{L}_{\mathcal{B}}=$
$\displaystyle\,\mathcal{L}_{B_{\bar{3}}}+\mathcal{L}_{S}+\mathcal{L}_{\text{int}},$
(57) $\displaystyle\mathcal{L}_{B_{{\bar{3}}}}=$
$\displaystyle\,\frac{1}{2}\operatorname{tr}\left[\bar{B}^{(Q)}_{\bar{3}}(iv\cdot
D)B^{(Q)}_{\bar{3}}\right]$
$\displaystyle+i\beta_{B}\operatorname{tr}\left[\bar{B}^{(Q)}_{\bar{3}}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)B^{(Q)}_{\bar{3}}\right],$
(58) $\displaystyle\mathcal{L}_{S}=$
$\displaystyle\,-\operatorname{tr}\left[\bar{S}^{(Q)\alpha}\left(iv\cdot
D-\Delta_{B}\right)S^{(Q)}_{\alpha}\right]$
$\displaystyle+\frac{3}{2}g_{1}\left(iv_{\kappa}\right)\epsilon^{\mu\nu\lambda\kappa}\operatorname{tr}\left[\bar{S}^{(Q)}_{\mu}\mathcal{A}_{\nu}S^{(Q)}_{\lambda}\right]$
$\displaystyle+i\beta_{S}\operatorname{tr}\left[\bar{S}^{(Q)}_{\mu}v_{\alpha}\left(\mathcal{V}^{\alpha}-\rho^{\alpha}\right)S^{(Q)\mu}\right]$
$\displaystyle+\lambda_{S}\operatorname{tr}\left[\bar{S}^{(Q)}_{\mu}F^{\mu\nu}S^{(Q)}_{\nu}\right],$
(59) $\displaystyle\mathcal{L}_{\rm int}=$
$\displaystyle\,g_{4}\operatorname{tr}\left[\bar{S}^{(Q)\mu}\mathcal{A}_{\mu}B^{(Q)}_{\bar{3}}\right]$
$\displaystyle+i\lambda_{I}\epsilon^{\mu\nu\lambda\kappa}v_{\mu}\operatorname{tr}\left[\bar{S}^{(Q)}_{\nu}F_{\lambda\kappa}B^{(Q)}_{\bar{3}}\right]+{\rm h.c.},$ (60)
where $D_{\mu}B=\partial_{\mu}B+\mathcal{V}_{\mu}B+B\mathcal{V}_{\mu}^{T}$, and $\Delta_{B}=m_{6}-m_{\bar{3}}$ is the mass difference between the anti-triplet and sextet baryons. The coupling constants $\beta_{B}$ and $\beta_{S}$ were estimated in Ref. Liu and Oka (2012): $\beta_{S}=-2\beta_{B}=1.44$ or $2.06$ from the quark model, and $\beta_{S}=-2\beta_{B}=1.74$ from the vector meson dominance assumption. However, there is a sign ambiguity, since only the absolute value of the $\rho NN$ coupling was determined from $NN$ scattering Machleidt _et al._ (1987). It turns out that the sign choice of Ref. Liu and Oka (2012) yields potentials for the anti-charmed meson and charmed baryon systems with a sign opposite to those obtained from SU(4) relations Wu _et al._ (2011). It also conflicts with the famous $P_{c}$ states Aaij _et al._ (2019), which are believed to be molecular states of $\bar{D}^{(*)}\Sigma_{c}^{(*)}$ with isospin $1/2$: with a positive $\beta_{S}$, these systems would be repulsive (see below). These issues can be fixed by choosing the signs of $\beta_{B}$ and $\beta_{S}$ opposite to those taken in Ref. Liu and Oka (2012), as done in Ref. Chen _et al._ (2019a).
For the coupling of antiheavy baryons and light mesons, by taking the charge
conjugation transformation of the above ones, we have
$\displaystyle\mathcal{L}^{\prime}_{\mathcal{B}}=$
$\displaystyle\,\mathcal{L}^{\prime}_{B_{\bar{3}}}+\mathcal{L}^{\prime}_{S}+\mathcal{L}^{\prime}_{\text{int}},$
(61) $\displaystyle\mathcal{L}^{\prime}_{B_{{\bar{3}}}}=$
$\displaystyle\,\frac{1}{2}\operatorname{tr}\left[\bar{B}^{(\bar{Q})}_{{3}}(iv\cdot
D)^{T}B^{(\bar{Q})}_{{3}}\right]$
$\displaystyle-i\beta_{B}\operatorname{tr}\left[\bar{B}^{(\bar{Q})}_{{3}}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)^{T}B^{(\bar{Q})}_{{3}}\right],$
(62) $\displaystyle\mathcal{L}^{\prime}_{S}=$
$\displaystyle\,-\operatorname{tr}\left[\bar{S}^{(\bar{Q})\alpha}\left(iv\cdot
D-\Delta_{B}\right)^{T}S^{(\bar{Q})}_{\alpha}\right]$
$\displaystyle+\frac{3}{2}g_{1}\left(iv_{\kappa}\right)\epsilon^{\mu\nu\lambda\kappa}\operatorname{tr}\left[\bar{S}^{(\bar{Q})}_{\mu}\mathcal{A}_{\nu}^{T}S^{(\bar{Q})}_{\lambda}\right]$
$\displaystyle-i\beta_{S}\operatorname{tr}\left[\bar{S}^{(\bar{Q})}_{\mu}v_{\alpha}\left(\mathcal{V}^{\alpha}-\rho^{\alpha}\right)^{T}S^{(\bar{Q})\mu}\right]$
$\displaystyle+\lambda_{S}\operatorname{tr}\left[\bar{S}^{(\bar{Q})}_{\mu}(F^{\mu\nu})^{T}S^{(\bar{Q})}_{\nu}\right],$
(63) $\displaystyle\mathcal{L}^{\prime}_{\rm int}=$
$\displaystyle\,g_{4}\operatorname{tr}\left[\bar{S}^{(\bar{Q})\mu}\mathcal{A}^{T}_{\mu}B^{(\bar{Q})}_{{3}}\right]$
$\displaystyle+i\lambda_{I}\epsilon^{\mu\nu\lambda\kappa}v_{\mu}\operatorname{tr}\left[\bar{S}^{(\bar{Q})}_{\nu}(F_{\lambda\kappa})^{T}B^{(\bar{Q})}_{{3}}\right]+{\rm h.c.},$ (64)
with the transpose acting on the SU(3) flavor matrix. Notice that the spinor
for an antibaryon is $u$ instead of $v$ since here the fields of heavy baryons
and heavy antibaryons are treated independently.
As with the Lagrangian for heavy mesons, we can focus on the vector-exchange contributions; only the following terms are relevant,
$\displaystyle\mathcal{L}_{BBV}$
$\displaystyle=\,i\beta_{B}\mathrm{tr}\left[\bar{B}^{(Q)}_{\bar{3}}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)B^{(Q)}_{\bar{3}}\right]$
$\displaystyle-i\beta_{B}\mathrm{tr}\left[\bar{B}^{(\bar{Q})}_{{3}}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)^{T}B^{(\bar{Q})}_{{3}}\right]$
$\displaystyle+i\beta_{S}\operatorname{tr}\left[\bar{S}_{\nu}^{(Q)}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)S^{(Q)\nu}\right]$
$\displaystyle-i\beta_{S}\operatorname{tr}\left[\bar{S}_{\nu}^{(\bar{Q})}v^{\mu}\left(\mathcal{V}_{\mu}-\rho_{\mu}\right)^{T}S^{(\bar{Q})\nu}\right].$
(65)
## III Potentials
### III.1 Conventions
In this paper, we take the following charge conjugation conventions:
$\displaystyle\mathcal{C}\left|D\right\rangle=\left|\bar{D}\right\rangle,\quad\mathcal{C}\left|D^{*}\right\rangle=-\left|\bar{D}^{*}\right\rangle,\quad\mathcal{C}\left|D_{1}\right\rangle=\left|\bar{D}_{1}\right\rangle,$
$\displaystyle\mathcal{C}\left|D_{2}\right\rangle=-\left|\bar{D}_{2}\right\rangle,\quad\mathcal{C}\left|B_{\bar{3}}\right\rangle=\left|\bar{B}_{\bar{3}}\right\rangle,\quad\mathcal{C}\left|B^{(*)}_{6}\right\rangle=\left|\bar{B}^{(*)}_{6}\right\rangle,$ (66)
which are consistent with the Lagrangians in Section II. Within these
conventions, the flavor wave functions of the flavor-neutral systems that are
charge conjugation eigenstates, including $\left|DD^{*}\right\rangle_{c}$,
$\left|DD_{1}\right\rangle_{c}$, $\left|D^{*}D_{1}\right\rangle_{c}$,
$\left|DD_{2}\right\rangle_{c}$, $\left|D^{*}D_{2}\right\rangle_{c}$,
$\left|D_{1}D_{2}\right\rangle_{c}$ and
$\left|B_{6}B_{6}^{*}\right\rangle_{c}$, can be expressed as
$\displaystyle\left|A_{1}A_{2}\right\rangle_{c}=\frac{1}{\sqrt{2}}\left(\left|A_{1}\bar{A}_{2}\right\rangle\pm(-1)^{J-J_{1}-J_{2}}c\,c_{1}c_{2}\left|A_{2}\bar{A}_{1}\right\rangle\right),$
(67)
where $J_{i}$ is the spin of $\left|A_{i}\right\rangle$, $J$ is the total spin
of the system $\left|A_{1}A_{2}\right\rangle_{c}$, $c_{i}$ is defined by
$\mathcal{C}\left|A_{i}\right\rangle=c_{i}\left|\bar{A}_{i}\right\rangle$, and
the plus and minus between the two terms are for boson-boson and fermion-
fermion systems, respectively. These systems satisfy
$\mathcal{C}\left|A_{1}A_{2}\right\rangle_{c}=c\left|A_{1}A_{2}\right\rangle_{c}$
with $c=\pm 1$.
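The phase in Eq. (67) is pure bookkeeping and can be evaluated mechanically. Below is a minimal sketch (the function name and the worked example are ours; the $c_{i}$ values follow the charge conjugation conventions of Eq. (66)):

```python
def cross_term_sign(J, J1, J2, c, c1, c2, fermions=False):
    """Sign multiplying the |A2 A1bar> term in Eq. (67):
    +/- (-1)^(J - J1 - J2) * c * c1 * c2, with '+' for boson-boson
    and '-' for fermion-fermion systems."""
    statistics = -1 if fermions else +1
    return statistics * (-1) ** (J - J1 - J2) * c * c1 * c2

# D Dbar* with total spin J = 1 and C = +1; from Eq. (66): c1 = +1 (D),
# c2 = -1 (D*).  The result, -1, means the C = + combination is
# (|D Dbar*> - |D* Dbar>)/sqrt(2) in these conventions.
sign = cross_term_sign(J=1, J1=0, J2=1, c=+1, c1=+1, c2=-1)
```

Flipping `c` to $-1$ flips the relative sign, as expected for the opposite $C$-parity eigenstate.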
We use the following isospin conventions:
$\displaystyle\left|u\right\rangle=\left|\frac{1}{2},\frac{1}{2}\right\rangle,\quad\left|d\right\rangle=\left|\frac{1}{2},-\frac{1}{2}\right\rangle,$
$\displaystyle\left|\bar{d}\right\rangle=\left|\frac{1}{2},\frac{1}{2}\right\rangle,\quad\left|\bar{u}\right\rangle=-\left|\frac{1}{2},-\frac{1}{2}\right\rangle.$ (68)
Consequently, we have
$\displaystyle\left|D^{+}\right\rangle=\left|\frac{1}{2},\frac{1}{2}\right\rangle,\quad\left|D^{-}\right\rangle=\left|\frac{1}{2},-\frac{1}{2}\right\rangle,$
$\displaystyle\left|D^{0}\right\rangle=-\left|\frac{1}{2},-\frac{1}{2}\right\rangle,\quad\left|\bar{D}^{0}\right\rangle=\left|\frac{1}{2},\frac{1}{2}\right\rangle,$
$\displaystyle\left|D_{s}^{+}\right\rangle=\left|0,0\right\rangle,\quad\left|D_{s}^{-}\right\rangle=\left|0,0\right\rangle,$
$\displaystyle\left|\Lambda_{c}^{+}\right\rangle=\left|0,0\right\rangle,\quad\left|\Lambda_{c}^{-}\right\rangle=-\left|0,0\right\rangle,$
$\displaystyle\left|\Xi_{c}^{(^{\prime}*)+}\right\rangle=\left|\frac{1}{2},\frac{1}{2}\right\rangle,\quad\left|\Xi_{c}^{(^{\prime}*)-}\right\rangle=\left|\frac{1}{2},-\frac{1}{2}\right\rangle,$
$\displaystyle\left|\Xi_{c}^{(^{\prime}*)0}\right\rangle=\left|\frac{1}{2},-\frac{1}{2}\right\rangle,\quad\left|\bar{\Xi}_{c}^{(^{\prime}*)0}\right\rangle=-\left|\frac{1}{2},\frac{1}{2}\right\rangle,$
$\displaystyle\left|\Sigma_{c}^{(*)++}\right\rangle=\left|1,1\right\rangle,\quad\left|\Sigma_{c}^{(*)--}\right\rangle=\left|1,-1\right\rangle,$
$\displaystyle\left|\Sigma_{c}^{(*)+}\right\rangle=\left|1,0\right\rangle,\quad\left|\Sigma_{c}^{(*)-}\right\rangle=-\left|1,0\right\rangle,$
$\displaystyle\left|\Sigma_{c}^{(*)0}\right\rangle=\left|1,-1\right\rangle,\quad\left|\bar{\Sigma}_{c}^{(*)0}\right\rangle=\left|1,1\right\rangle,$
$\displaystyle\left|\Omega_{c}^{(*)0}\right\rangle=\left|0,0\right\rangle,\quad\left|\bar{\Omega}_{c}^{(*)0}\right\rangle=\left|0,0\right\rangle.$ (69)
The isospin states of $D^{*}$, $D_{1}$ and $D_{2}^{*}$ are the same as those
of $D$. The flavor wave functions of the systems considered below with certain
isospins can be easily computed using Clebsch-Gordan coefficients with these
conventions.
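As an illustration of the last remark, the $I=0$ $D\bar{D}$ flavor wave function can be assembled with SymPy's Clebsch-Gordan coefficients (a sketch; the SymPy usage is ours, while the phases are those of Eqs. (68) and (69)):

```python
from sympy import S, sqrt, simplify
from sympy.physics.quantum.cg import CG

half = S.Half
# Couple two isospin-1/2 states to I = 0, I3 = 0 (Condon-Shortley phases):
c_up_down = CG(half, half, half, -half, 0, 0).doit()    #  1/sqrt(2)
c_down_up = CG(half, -half, half, half, 0, 0).doit()    # -1/sqrt(2)

# With |D+> = |1/2,1/2>, |D-> = |1/2,-1/2>, |D0> = -|1/2,-1/2> and
# |D0bar> = |1/2,1/2> from Eqs. (68)-(69), the I = 0 combination becomes
#   (|D+ D-> + |D0 D0bar>)/sqrt(2),
# the extra minus sign in |D0> flipping the sign of the second term.
```

The same bookkeeping applies to the baryonic systems, e.g. $\bar{D}\Sigma_{c}$ with $I=1/2$.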
The potential we calculate is $V=-\mathcal{M}$, with $\mathcal{M}$ the $2\to 2$ invariant scattering amplitude, so that a negative $V$ means an attractive interaction. This convention is the same as the one widely used in the on-shell Bethe-Salpeter equation $T=V+VGT$ Oller and Oset (1997), and it also agrees with the nonrelativistic potential in the Schrödinger equation up to a mass factor.
### III.2 Potentials from light vector exchange
With the Lagrangian and conventions presented above, we are ready to calculate the potentials of different systems. We use the resonance saturation model to approximate the potentials as constant contact terms. Note that resonance saturation is known to approximate well the low-energy constants (LECs) in the higher-order Lagrangians of chiral perturbation theory Ecker _et al._ (1989); Donoghue _et al._ (1989), and it turns out that, whenever vector mesons contribute, they dominate the numerical values of the LECs at the scale around the $\rho$-meson mass; this is called the modern version of vector meson dominance.
The general form of $\mathcal{M}$ for a process
$A_{1}(p_{1})\bar{A}_{2}(p_{2})\to A_{1}(k_{1})\bar{A}_{2}(k_{2})$ by the
vector meson exchange reads
$\displaystyle
i\mathcal{M}=ig_{1}v_{\mu}\frac{-i(g^{\mu\nu}-{q^{\mu}q^{\nu}}/{m_{\rm
ex}^{2}})}{q^{2}-m_{\rm
ex}^{2}+i\epsilon}v_{\nu}ig_{2}\approx-i\frac{g_{1}g_{2}}{m_{\rm ex}^{2}},$
(70)
where $g_{1}$ and $g_{2}$ account for the vertex information of $A_{1}$ and $\bar{A}_{2}$, respectively, $q=p_{1}-k_{1}$, and we have neglected terms suppressed by $\mathcal{O}(\vec{q}\,^{2}/m_{\text{ex}}^{2})$. The values of $g_{1}$ and $g_{2}$ for the different particles are collected in Appendix A. It is worth mentioning that the spin information of the component particles is irrelevant here, since the exchanged vectors only carry momentum information; see Eqs. (28) and (65). Hence, for a given system with different total spins, the potentials at threshold are the same. With the vertex factors evaluated, the potentials of the different systems have a uniform expression,
$V\approx-F\tilde{\beta}_{1}\tilde{\beta}_{2}g_{V}^{2}\frac{2m_{1}m_{2}}{m_{\rm
ex}^{2}},$ (71)
where $m_{1},m_{2}$ and $m_{\rm ex}$ are the masses of the two heavy hadrons
and the exchanged particle, respectively. $\tilde{\beta}_{1}$ and
$\tilde{\beta}_{2}$ are the coupling constants for the two heavy hadrons with
the vector mesons, and, given explicitly, $\tilde{\beta}_{i}=\beta$ for the
$S$-wave charmed mesons, $\tilde{\beta}_{i}=-\beta$ for the $P$-wave charmed
mesons, $\tilde{\beta}_{i}=\beta_{B}$ for the anti-triplet baryons, and
$\tilde{\beta}_{i}=-\beta_{S}/2$ for the sextet baryons. $F$ is a group theory
factor accounting for the light-flavor SU(3) information, and in our
convention a positive $F$ means an attractive interaction. The values of $F$
are listed in Tables 8 and 9 in Appendix B for all combinations of heavy-
antiheavy hadron pairs.
For a system that can have different $C$-parities, like those in Eq. (67), the potential is expressed as
$V=V_{d}\pm(-1)^{J-J_{1}-J_{2}}c\,c_{1}c_{2}V_{c}$ (72)
with $V_{d}$ the potential from the _direct_ process, e.g., $D\bar{D}^{*}\to
D\bar{D}^{*}$ and $V_{c}$ from the _cross_ one, e.g., $D\bar{D}^{*}\to
D^{*}\bar{D}$. $V_{d}$’s for these systems are covered by Eq. (71), while for
the cross processes, it turns out that $V_{c}$’s for
$\left|DD^{*}\right\rangle_{c}$, $\left|DD_{2}\right\rangle_{c}$,
$\left|D_{1}D_{2}\right\rangle_{c}$ and
$\left|B_{6}B_{6}^{*}\right\rangle_{c}$ systems vanish at threshold. Explicit
calculation shows that for the other three systems,
$\left|DD_{1}\right\rangle_{c}$, $\left|D^{*}D_{1}\right\rangle_{c}$, and
$\left|D^{*}D_{2}\right\rangle_{c}$,
$\displaystyle V_{c}\approx
F_{c}F\zeta_{1}^{2}g_{V}^{2}\frac{m_{1}m_{2}}{m_{\rm ex}^{2}-\Delta m^{2}},$
(73)
where $F$ is the same as in Eq. (71), $\Delta m^{2}=(m_{1}-m_{2})^{2}$, and the additional factor $F_{c}$ accounts for the spin information; its values are given in Appendix C. However, $V_{c}$ is much smaller than $V_{d}$ and has little influence on the pole positions compared with the cutoff dependence (see below).
Table 1: Values of the coupling parameters used in the calculations.
$g_{V}$ | $\beta$ | $\beta_{2}$ | $\zeta_{1}$ | $\beta_{B}$ | $\beta_{S}$
---|---|---|---|---|---
5.8 | 0.9 | $-0.9$ | 0.16 | 0.87 | $-1.74$
Bando _et al._ (1988) | Isola _et al._ (2003) | Dong _et al._ (2020b) | Dong _et al._ (2020b) | Liu and Oka (2012); Chen _et al._ (2019a) | Liu and Oka (2012); Chen _et al._ (2019a)
In Table 1, we collect the numerical values of the coupling parameters used in
our calculations.
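Eq. (71) with the couplings of Table 1 is straightforward to evaluate numerically. A small sketch (the group-theory factor `F = 1` below is a placeholder for illustration only; the actual values must be taken from Tables 8 and 9 in Appendix B):

```python
def contact_potential(F, beta1, beta2, m1, m2, m_ex, g_V=5.8):
    """Constant contact potential of Eq. (71); masses in GeV.
    In our convention a positive F (attraction) gives a negative V."""
    return -F * beta1 * beta2 * g_V**2 * 2.0 * m1 * m2 / m_ex**2

# D Dbar via rho exchange: beta_i = beta = 0.9 for S-wave charmed mesons
# (Table 1); F = 1 is a placeholder, not a value from Appendix B.
V = contact_potential(F=1.0, beta1=0.9, beta2=0.9,
                      m1=1.87, m2=1.87, m_ex=0.775)
```

Note the sign pattern: the $P$-wave charmed mesons enter with $\tilde{\beta}_{i}=-\beta$ and the sextet baryons with $\tilde{\beta}_{i}=-\beta_{S}/2$, so the sign of the product $\tilde{\beta}_{1}\tilde{\beta}_{2}$ depends on the pair considered.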
### III.3 Potentials from vector charmonia exchange
In principle, the $J/\psi$, as well as the excited $\psi$ vector charmonia,
can also be exchanged between charmed and anti-charmed hadrons. Being vector
mesons, their couplings to the charmed mesons have the same spin-momentum
behavior as that of the light vectors. According to Eq. (71), such contributions are suppressed relative to the light-vector exchange by a factor of ${m_{\rho}^{2}}/{m_{\psi}^{2}}\sim 0.1$, owing to the much larger masses of the $\psi$ states, up to the difference of coupling constants. Therefore, the exchange of light mesons, if not vanishing, dominates the potentials at threshold. For systems where the contributions from light vectors vanish (or the $\rho$ and $\omega$ exchanges cancel each other), however, the vector charmonia exchange, as the sub-leading term, plays an important role in the near-threshold potentials.
To be more precise, let us take the $J/\psi$ exchange as an example, for which the Lagrangian reads
$\displaystyle\mathcal{L}_{{DD}J/\psi}=ig_{DDJ/\psi}\psi^{\mu}\left(\partial_{\mu}D^{\dagger}D-D^{\dagger}\partial_{\mu}D\right),$
(74)
with $g_{DDJ/\psi}\approx 7.64$ Lin and Ko (2000). The resulting potential in
the nonrelativistic limit is
$\displaystyle V\sim-g_{DDJ/\psi}^{2}\frac{4m_{1}m_{2}}{m_{\rm ex}^{2}},$ (75)
which is about 40% of the potential from the $\phi$ exchange between $D_{s}$ and $\bar{D}_{s}$. The contributions from the other vector charmonia will be similar, since their masses are of the same order. Notice that for all charmed and anti-charmed hadron systems, the vector charmonia exchange yields attractive interactions. Unfortunately, it is not easy to estimate their contributions quantitatively, because the masses of these charmonia are much larger than the energy scale of interest and there is no hierarchy among them to help select the dominant ones. Nevertheless, it could be possible to use, e.g., the $Z_{c}(3900)$ as a benchmark to estimate the overall contribution of the charmonia exchange. Given the controversy regarding its pole position Ablikim _et al._ (2013a, 2014a); Albaladejo _et al._ (2016a); Pilloni _et al._ (2017); Gong _et al._ (2018), we refrain from doing so here (for further discussion, see Section IV.2.1).
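The two numerical estimates above are easy to check explicitly (masses in GeV; the comparison assumes the $\phi$ couples to charmed-strange mesons with strength $\beta g_{V}$ as in Eq. (28), and small differences from the quoted "about 40%" reflect rounding):

```python
m_rho, m_phi, m_psi = 0.775, 1.019, 3.097   # GeV
m_Ds = 1.968                                # GeV
g_DDJpsi = 7.64
beta, g_V = 0.9, 5.8                        # Table 1

# Mass suppression of charmonium exchange relative to light vectors, Eq. (71):
suppression = m_rho**2 / m_psi**2           # of order 0.1

# J/psi exchange, Eq. (75), vs phi exchange, Eq. (71), between D_s and Dbar_s:
V_psi = g_DDJpsi**2 * 4.0 * m_Ds**2 / m_psi**2
V_phi = beta**2 * g_V**2 * 2.0 * m_Ds**2 / m_phi**2
ratio = V_psi / V_phi                       # roughly 0.4 to 0.5
```

Both numbers confirm that charmonium exchange is subleading but not negligible where the light-vector contribution cancels.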
## IV Molecular states from constant interactions
### IV.1 Poles
Now that we have obtained the constant interactions between a pair of heavy-
antiheavy hadrons, we can give a rough picture of the spectrum of possible
molecular states. We search for poles of the scattering amplitude by solving the single-channel Bethe-Salpeter equation, which, for a constant potential, factorizes into an algebraic equation,
$\displaystyle T=\frac{V}{1-VG},$ (76)
where $G$ is the one loop two-body propagator. Here we adopt the dimensional
regularization (DR) to regularize the loop integral Veltman (2012),
$\displaystyle G(E)=$
$\displaystyle\frac{1}{16\pi^{2}}\bigg\{a(\mu)+\log\frac{m_{1}^{2}}{\mu^{2}}+\frac{m_{2}^{2}-m_{1}^{2}+s}{2s}\log\frac{m_{2}^{2}}{m_{1}^{2}}$
$\displaystyle+\frac{k}{E}\Big[\log\left(2kE+s+\Delta\right)+\log\left(2kE+s-\Delta\right)$
$\displaystyle-\log\left(2kE-s+\Delta\right)-\log\left(2kE-s-\Delta\right)\Big]\bigg\},$ (77)
where $s=E^{2}$, $\Delta=m_{1}^{2}-m_{2}^{2}$, $m_{1}$ and $m_{2}$ are the particle masses, and
$\displaystyle k=\frac{1}{2E}\lambda^{1/2}(E^{2},m_{1}^{2},m_{2}^{2})$ (78)
is the corresponding three-momentum with
$\lambda(x,y,z)=x^{2}+y^{2}+z^{2}-2xy-2yz-2xz$ for the Källén triangle
function. Here $\mu$, chosen to be 1 GeV, denotes the DR scale, and $a(\mu)$
is a subtraction constant. The branch cut of $k$ from the threshold to
infinity along the positive real $E$ axis splits the whole complex energy
plane into two Riemann sheets (RSs) defined as Im$(k)>0$ on the first RS while
Im$(k)<0$ on the second RS. Another way to regularize the loop integral is
inserting a Gaussian form factor, namely,
$\displaystyle G(E)=$
$\displaystyle\,\int\frac{l^{2}dl}{4\pi^{2}}\frac{\omega_{1}+\omega_{2}}{\omega_{1}\omega_{2}}\frac{e^{-2l^{2}/\Lambda^{2}}}{E^{2}-(\omega_{1}+\omega_{2})^{2}+i\epsilon},$
(79)
with $\omega_{i}=\sqrt{m_{i}^{2}+l^{2}}$. The cutoff $\Lambda$ is usually taken in the range $0.5$--$1.0$ GeV. The subtraction constant $a(\mu)$ in DR is determined by matching the values of $G$ from these two methods at threshold. We will use the DR loop with the subtraction constant so determined for the numerical calculations.
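The threshold matching just described can be sketched numerically. The code below evaluates the Gaussian loop of Eq. (79) at $E=m_{1}+m_{2}$ with a simple Simpson rule and extracts $a(\mu)$ from Eq. (77), where the term multiplied by $k/E$ vanishes at threshold (a sketch under our own numerical choices; only the Python standard library is used):

```python
import math

MU = 1.0  # DR scale in GeV

def g_gauss_threshold(m1, m2, Lam, n=2000):
    """Gaussian-regularized loop, Eq. (79), evaluated at E = m1 + m2.
    The denominator is factorized to avoid cancellation near l = 0."""
    E = m1 + m2

    def f(l):
        if l == 0.0:
            return -1.0 / (4.0 * math.pi**2 * (m1 + m2))  # analytic l -> 0 limit
        w1 = math.sqrt(m1**2 + l**2)
        w2 = math.sqrt(m2**2 + l**2)
        diff = -l**2 / (m1 + w1) - l**2 / (m2 + w2)       # = E - (w1 + w2)
        return (l**2 / (4.0 * math.pi**2) * (w1 + w2) / (w1 * w2)
                * math.exp(-2.0 * l**2 / Lam**2) / (diff * (E + w1 + w2)))

    h = 10.0 * Lam / n                                    # Simpson's rule
    s = f(0.0) + f(10.0 * Lam)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

def subtraction_constant(m1, m2, Lam):
    """a(mu) fixed by matching Eq. (77) to Eq. (79) at threshold (k = 0)."""
    s = (m1 + m2)**2
    logs = (math.log(m1**2 / MU**2)
            + (m2**2 - m1**2 + s) / (2.0 * s) * math.log(m2**2 / m1**2))
    return 16.0 * math.pi**2 * g_gauss_threshold(m1, m2, Lam) - logs

a = subtraction_constant(1.87, 1.87, 1.0)   # D Dbar masses, Lambda = 1 GeV
```

For these inputs the resulting $a(1\,\text{GeV})$ comes out of order $-2$, a typical size for such subtraction constants.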
For a single channel, if the interaction is attractive and strong enough to
form a bound state, the pole will be located below threshold on the first RS.
If it is not strong enough, the pole will move onto the second RS as a virtual
state, still below threshold. In Tables 2, 3, 4, 5, 6 and 7, we list all the
pole positions of the heavy-antiheavy hadron systems which have attractive
interactions, corresponding to the masses of hadronic molecules. For better
illustration, these states, together with some hadronic molecule candidates
observed in experiments, are also shown in Figs. 1, 2, 3, 4, 5 and 6. In total, considering constant contact interactions saturated by the light vector mesons and neglecting coupled-channel effects, we obtain a spectrum of 229 hadronic molecules.
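The bound-state condition $1-VG(E)=0$ can be solved by bisection below threshold, where $G(E)$ is real on the first RS. A self-contained sketch using the Gaussian-regularized loop of Eq. (79) (the constant $V=-400$ is an illustrative attractive strength chosen by us, not a value derived from the tables below):

```python
import math

def g_loop(E, m1, m2, Lam=1.0, n=2000):
    """Gaussian-regularized loop, Eq. (79), for real E strictly below
    threshold (the integrand is then regular); Simpson's rule on [0, 10*Lam]."""
    def f(l):
        w1 = math.sqrt(m1**2 + l**2)
        w2 = math.sqrt(m2**2 + l**2)
        return (l**2 / (4.0 * math.pi**2) * (w1 + w2) / (w1 * w2)
                * math.exp(-2.0 * l**2 / Lam**2)
                / (E**2 - (w1 + w2)**2))
    h = 10.0 * Lam / n
    s = f(0.0) + f(10.0 * Lam)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

def bound_state_pole(V, m1, m2, Lam=1.0):
    """Bisection for 1 - V*G(E) = 0 below threshold on the first RS."""
    th = m1 + m2
    lo, hi = th - 0.2, th - 1e-9       # bracket: f(lo) > 0 > f(hi) if bound
    f_lo = 1.0 - V * g_loop(lo, m1, m2, Lam)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        f_mid = 1.0 - V * g_loop(mid, m1, m2, Lam)
        if f_lo * f_mid <= 0.0:        # sign change in [lo, mid]
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# Illustrative attractive potential (not taken from the tables below):
E_pole = bound_state_pole(V=-400.0, m1=1.87, m2=1.87, Lam=1.0)
binding = 1.87 + 1.87 - E_pole          # binding energy in GeV
```

Weakening $|V|$ moves the zero of $1-VG$ toward threshold and eventually onto the second RS, reproducing the bound-to-virtual transition described above.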
Figure 1: The spectrum of hadronic molecules consisting of a pair of charmed-
anticharmed hadrons with $(I,S)=(0,0)$. $0^{--}$, $1^{-+}$ and $3^{-+}$ are
exotic quantum numbers. The colored rectangle, green for a bound state and
orange for a virtual state, covers the range of the pole position for a given
system with the cutoff $\Lambda$ varying in the range $[0.5,1.0]$ GeV.
Thresholds are marked by dotted horizontal lines. The rectangle closest to,
but below, the threshold corresponds to the hadronic molecule in that system.
In some cases where the pole positions of two systems overlap, small
rectangles are used with the left (right) one for the system with the higher
(lower) threshold. The blue line (band) represents the center value (error) of
the mass of the experimental candidate of the corresponding molecule. The
averaged central value and error of the $\psi(4230)$ mass are taken from RPP
Zyla _et al._ (2020).
Figure 2: The spectrum of hadronic molecules consisting of a pair of charmed-anticharmed hadrons with $(I,S)=(0,0)$. $2^{+-}$ is an exotic quantum number. The parameters of the $X(3872)$ and $\tilde{X}(3872)$ are taken from RPP Zyla _et al._ (2020) and Ref. Aghasyan _et al._ (2018), respectively. See the caption for Fig. 1.
Figure 3: The spectrum of hadronic molecules consisting of a pair of charmed-
anticharmed hadrons with $(I,S)=(\frac{1}{2},0)$ and unit baryon number. The
left orange band and right green band for each pole represent that the pole
moves from a virtual state on the second RS to a bound state on the first RS
when the cutoff $\Lambda$ changes from 0.5 to 1.0 GeV. The parameters of these
three $P_{c}$ states are taken from Ref. Aaij _et al._ (2019), whose $J^{P}$
have not been determined experimentally. See the caption for Fig. 1.
Figure 4: The spectrum of hadronic molecules consisting of a pair of charmed-
anticharmed hadrons with $(I,S)=(0,1)$ and unit baryon number. The parameters
of $P_{cs}(4459)$ are taken from Ref. Aaij _et al._ (2020a), whose $J^{P}$
have not been determined experimentally. See the caption for Fig. 3. Figure 5:
The spectrum of hadronic molecules consisting of a pair of charmed-anticharmed
hadrons with $(I,S)=(1,0)$. See the caption for Fig. 1.
Figure 6: The spectrum of hadronic molecules consisting of a pair of charmed-anticharmed hadrons with $(I,S)=(\frac{1}{2},1)$. See the captions for Figs. 1 and 3.
Table 2: Pole positions of heavy-antiheavy hadron systems with $(I,S)=(0,0)$. $E_{\rm th}$ in the second column is the threshold in MeV. The number 0.5 (1.0) in the third (fourth) column means that the cutoff $\Lambda=0.5$ ($1.0$) GeV in Eq. (79) is used to determine the subtraction constant $a(\mu)$ in Eq. (77). In the last two columns, the first number in the parentheses gives the RS on which the pole is located, and the second is the distance between the pole position and the corresponding threshold, namely $E_{\rm th}-E_{\rm pole}$, in MeV.
System | $E_{\rm th}$ | $J^{PC}$ | Pole (0.5) | Pole (1.0)
---|---|---|---|---
$D\bar{D}$ | 3734 | $0^{++}$ | (1, 1.31) | (1, 35.8)
$D\bar{D}^{*}$ | 3876 | $1^{+\pm}$ | (1, 1.56) | (1, 36.2)
$D_{s}\bar{D}_{s}$ | 3937 | $0^{++}$ | (2, 35.5) | (2, 4.72)
$D^{*}\bar{D}^{*}$ | 4017 | $(0,2)^{++},1^{+-}$ | (1, 1.82) | (1, 36.6)
$D_{s}\bar{D}_{s}^{*}$ | 4081 | $1^{+\pm}$ | (2, 31.0) | (2, 3.15)
$D_{s}^{*}\bar{D}_{s}^{*}$ | 4224 | $(0,2)^{++},1^{+-}$ | (2, 26.7) | (2, 1.92)
$D\bar{D}_{2}$ | 4330 | $2^{-\pm}$ | (1, 2.2) | (1, 36.7)
$D_{s}\bar{D}_{s2}$ | 4537 | $2^{-\pm}$ | (2, 21.3) | (2, 0.713)
$D_{1}\bar{D}_{1}$ | 4844 | $(0,2)^{++},1^{+-}$ | (1, 3.01) | (1, 36.7)
$D_{1}\bar{D}_{2}$ | 4885 | $(1,2,3)^{+\pm}$ | (1, 3.06) | (1, 36.6)
$D_{2}\bar{D}_{2}$ | 4926 | $(0,2,4)^{++},(1,3)^{+-}$ | (1, 3.1) | (1, 36.6)
$D_{s1}\bar{D}_{s1}$ | 5070 | $(0,2)^{++},1^{+-}$ | (2, 11.7) | (1, 0.074)
$D_{s1}\bar{D}_{s2}$ | 5104 | $(1,2,3)^{+\pm}$ | (2, 11.3) | (1, 0.104)
$D_{s2}\bar{D}_{s2}$ | 5138 | $(0,2,4)^{++},(1,3)^{+-}$ | (2, 10.9) | (1, 0.139)
$\Lambda_{c}\bar{\Lambda}_{c}$ | 4573 | $0^{-+},1^{--}$ | (1, 1.98) | (1, 33.8)
$\Sigma_{c}\bar{\Sigma}_{c}$ | 4907 | $0^{-+},1^{--}$ | (1, 11.1) | (1, 60.8)
$\Xi_{c}\bar{\Xi}_{c}$ | 4939 | $0^{-+},1^{--}$ | (1, 4.72) | (1, 42.2)
$\Sigma_{c}^{*}\bar{\Sigma}_{c}$ | 4972 | $1^{-\pm},2^{-\pm}$ | (1, 11.0) | (1, 60.1)
$\Sigma_{c}^{*}\bar{\Sigma}_{c}^{*}$ | 5036 | $(0,2)^{-+},(1,3)^{--}$ | (1, 10.9) | (1, 59.5)
$\Xi_{c}\bar{\Xi}_{c}^{\prime}$ | 5048 | $0^{-\pm},1^{-\pm}$ | (1, 4.79) | (1, 41.9)
$\Xi_{c}\bar{\Xi}_{c}^{*}$ | 5115 | $1^{-\pm},2^{-\pm}$ | (1, 4.84) | (1, 41.6)
$\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}$ | 5158 | $0^{-+},1^{--}$ | (1, 4.87) | (1, 41.5)
$\Xi_{c}^{*}\bar{\Xi}_{c}^{\prime}$ | 5225 | $1^{-\pm},2^{-\pm}$ | (1, 4.91) | (1, 41.3)
$\Xi_{c}^{*}\bar{\Xi}_{c}^{*}$ | 5292 | $(0,2)^{-+},(1,3)^{--}$ | (1, 4.95) | (1, 41.0)
$\Omega_{c}\bar{\Omega}_{c}$ | 5390 | $0^{-+},1^{--}$ | (1, 4.17) | (1, 38.0)
$\Omega_{c}^{*}\bar{\Omega}_{c}$ | 5461 | $1^{-\pm},2^{-\pm}$ | (1, 4.22) | (1, 37.8)
$\Omega_{c}^{*}\bar{\Omega}_{c}^{*}$ | 5532 | $(0,2)^{-+},(1,3)^{--}$ | (1, 4.26) | (1, 37.6)
Table 3: Pole positions of heavy-antiheavy hadron systems with $(I,S)=(0,0)$. See the caption for Table 2. In these systems, different total spins and $C$-parities yield slightly different pole positions.
System | $E_{\rm th}$ | $J^{PC}$ | Pole (0.5) | Pole (1.0)
---|---|---|---|---
$D\bar{D}_{1}$ | 4289 | $1^{-+}$ | (1, 1.78) | (1, 34.9)
| | $1^{--}$ | (1, 2.53) | (1, 38.4)
$D^{*}\bar{D}_{1}$ | 4431 | $0^{-+}$ | (1, 2.55) | (1, 37.4)
| | $0^{--}$ | (1, 2.29) | (1, 36.3)
| | $1^{-+}$ | (1, 2.36) | (1, 36.6)
| | $1^{--}$ | (1, 2.49) | (1, 37.1)
| | $2^{-+}$ | (1, 2.36) | (1, 36.6)
| | $2^{--}$ | (1, 2.49) | (1, 37.1)
$D^{*}\bar{D}_{2}$ | 4472 | $1^{-+}$ | (1, 2.54) | (1, 37.1)
| | $1^{--}$ | (1, 2.4) | (1, 36.5)
| | $2^{-+}$ | (1, 2.68) | (1, 37.7)
| | $2^{--}$ | (1, 2.26) | (1, 35.9)
| | $3^{-+}$ | (1, 2.89) | (1, 38.6)
| | $3^{--}$ | (1, 2.05) | (1, 34.9)
$D_{s}\bar{D}_{s1}$ | 4503 | $1^{-+}$ | (2, 24.2) | (2, 1.4)
| | $1^{--}$ | (2, 19.6) | (2, 0.402)
$D_{s}^{*}\bar{D}_{s1}$ | 4647 | $0^{-+}$ | (2, 17.5) | (2, 0.179)
| | $0^{--}$ | (2, 19.2) | (2, 0.402)
| | $1^{-+}$ | (2, 18.7) | (2, 0.402)
| | $1^{--}$ | (2, 17.9) | (2, 0.227)
| | $2^{-+}$ | (2, 18.7) | (2, 0.342)
| | $2^{--}$ | (2, 17.9) | (2, 0.402)
$D_{s}^{*}\bar{D}_{s2}$ | 4681 | $1^{-+}$ | (2, 17.4) | (2, 0.177)
| | $1^{--}$ | (2, 18.3) | (2, 0.402)
| | $2^{-+}$ | (2, 16.6) | (2, 0.402)
| | $2^{--}$ | (2, 19.2) | (2, 0.418)
| | $3^{-+}$ | (2, 15.4) | (2, 0.023)
| | $3^{--}$ | (2, 20.6) | (2, 0.402)
Table 4: Pole positions of heavy-antiheavy hadron systems with $(I,S)=(1/2,0)$ and unit baryon number. See the caption for Table 2.
System | $E_{\rm th}$ | $J^{P}$ | Pole (0.5) | Pole (1.0)
---|---|---|---|---
$\bar{D}\Sigma_{c}$ | 4321 | $\frac{1}{2}^{-}$ | (2, 2.04) | (1, 7.79)
$\bar{D}\Sigma_{c}^{*}$ | 4385 | $\frac{3}{2}^{-}$ | (2, 1.84) | (1, 8.1)
$\bar{D}^{*}\Sigma_{c}$ | 4462 | $(\frac{1}{2},\frac{3}{2})^{-}$ | (2, 1.39) | (1, 8.95)
$\bar{D}^{*}\Sigma_{c}^{*}$ | 4527 | $(\frac{1}{2},\frac{3}{2},\frac{5}{2})^{-}$ | (2, 1.23) | (1, 9.26)
$\bar{D}_{1}\Sigma_{c}$ | 4876 | $(\frac{1}{2},\frac{3}{2})^{+}$ | (2, 0.417) | (1, 11.5)
$\bar{D}_{2}\Sigma_{c}$ | 4917 | $(\frac{3}{2},\frac{5}{2})^{+}$ | (2, 0.366) | (1, 11.7)
$\bar{D}_{1}\Sigma_{c}^{*}$ | 4940 | $(\frac{1}{2},\frac{3}{2},\frac{5}{2})^{+}$ | (2, 0.34) | (1, 11.8)
$\bar{D}_{2}\Sigma_{c}^{*}$ | 4981 | $(\frac{1}{2},\frac{3}{2},\frac{5}{2},\frac{7}{2})^{+}$ | (2, 0.294) | (1, 12.0)
Table 5: Pole positions of heavy-antiheavy hadron systems with $(I,S)=(0,1)$ and unit baryon number. See the caption for Table 2.
System | $E_{\rm th}$ | $J^{P}$ | Pole (0.5) | Pole (1.0)
---|---|---|---|---
$\bar{D}\Xi_{c}$ | 4337 | $\frac{1}{2}^{-}$ | (2, 2.14) | (1, 7.53)
$\bar{D}\Xi_{c}^{\prime}$ | 4446 | $\frac{1}{2}^{-}$ | (2, 1.82) | (1, 8.05)
$\bar{D}^{*}\Xi_{c}$ | 4478 | $(\frac{1}{2},\frac{3}{2})^{-}$ | (2, 1.47) | (1, 8.69)
$\bar{D}\Xi_{c}^{*}$ | 4513 | $\frac{3}{2}^{-}$ | (2, 1.65) | (1, 8.34)
$\bar{D}^{*}\Xi_{c}^{\prime}$ | 4587 | $(\frac{1}{2},\frac{3}{2})^{-}$ | (2, 1.21) | (1, 9.21)
$\bar{D}^{*}\Xi_{c}^{*}$ | 4655 | $(\frac{1}{2},\frac{3}{2},\frac{5}{2})^{-}$ | (2, 1.08) | (1, 9.51)
$\bar{D}_{1}\Xi_{c}$ | 4891 | $(\frac{1}{2},\frac{3}{2})^{+}$ | (2, 0.455) | (1, 11.3)
$\bar{D}_{2}\Xi_{c}$ | 4932 | $(\frac{3}{2},\frac{5}{2})^{+}$ | (2, 0.4) | (1, 11.5)
$\bar{D}_{1}\Xi_{c}^{\prime}$ | 5001 | $(\frac{1}{2},\frac{3}{2})^{+}$ | (2, 0.326) | (1, 11.8)
$\bar{D}_{2}\Xi_{c}^{\prime}$ | 5042 | $(\frac{3}{2},\frac{5}{2})^{+}$ | (2, 0.28) | (1, 12.0)
$\bar{D}_{1}\Xi_{c}^{*}$ | 5068 | $(\frac{1}{2},\frac{3}{2},\frac{5}{2})^{+}$ | (2, 0.262) | (1, 12.1)
$\bar{D}_{2}\Xi_{c}^{*}$ | 5109 | $(\frac{1}{2},\frac{3}{2},\frac{5}{2},\frac{7}{2})^{+}$ | (2, 0.222) | (1, 12.3)
Table 6: Pole positions of heavy-antiheavy hadron systems with $(I,S)=(1/2,1)$. See the caption for Table 2.
System | $E_{\rm th}$ | $J^{P}$ | Pole (0.5) | Pole (1.0)
---|---|---|---|---
$\Lambda_{c}\bar{\Xi}_{c}$ | 4756 | $(0,1)^{-}$ | (2, 1.29) | (1, 8.42)
$\Lambda_{c}\bar{\Xi}_{c}^{\prime}$ | 4865 | $(0,1)^{-}$ | (2, 1.05) | (1, 8.93)
$\Xi_{c}\bar{\Sigma}_{c}$ | 4923 | $(0,1)^{-}$ | (1, 5.98) | (1, 46.4)
$\Lambda_{c}\bar{\Xi}_{c}^{*}$ | 4932 | $(1,2)^{-}$ | (2, 0.92) | (1, 9.23)
$\Xi_{c}\bar{\Sigma}_{c}^{*}$ | 4988 | $(1,2)^{-}$ | (1, 6.01) | (1, 46.1)
$\Sigma_{c}\bar{\Xi}_{c}^{\prime}$ | 5032 | $(0,1)^{-}$ | (1, 6.03) | (1, 45.9)
$\Sigma_{c}^{*}\bar{\Xi}_{c}^{\prime}$ | 5097 | $(1,2)^{-}$ | (1, 6.06) | (1, 45.6)
$\Sigma_{c}\bar{\Xi}_{c}^{*}$ | 5100 | $(1,2)^{-}$ | (1, 6.05) | (1, 45.6)
$\Sigma_{c}^{*}\bar{\Xi}_{c}^{*}$ | 5164 | $(0,1,2,3)^{-}$ | (1, 6.08) | (1, 45.2)
$\Xi_{c}\bar{\Omega}_{c}$ | 5165 | $(0,1)^{-}$ | (2, 8e-5) | (1, 15.9)
$\Xi_{c}\bar{\Omega}_{c}^{*}$ | 5235 | $(1,2)^{-}$ | (1, 0.002) | (1, 16.2)
$\Xi_{c}^{\prime}\bar{\Omega}_{c}$ | 5274 | $(0,1)^{-}$ | (1, 0.006) | (1, 16.4)
$\Xi_{c}^{*}\bar{\Omega}_{c}$ | 5341 | $(1,2)^{-}$ | (1, 0.016) | (1, 16.6)
$\Xi_{c}^{\prime}\bar{\Omega}_{c}^{*}$ | 5345 | $(1,2)^{-}$ | (1, 0.016) | (1, 16.6)
$\Xi_{c}^{*}\bar{\Omega}_{c}^{*}$ | 5412 | $(0,1,2,3)^{-}$ | (1, 0.030) | (1, 16.8)
Table 7: Pole positions of heavy-antiheavy hadron systems with $(I,S)=(1,0)$. See the caption for Table 2.
System | $E_{\rm th}$ | $J^{PC}$ | Pole (0.5) | Pole (1.0)
---|---|---|---|---
$\Lambda_{c}\bar{\Sigma}_{c}$ | 4740 | $(0,1)^{-\pm}$ | (1, 2.19) | (1, 33.9)
$\Lambda_{c}\bar{\Sigma}_{c}^{*}$ | 4805 | $(1,2)^{-\pm}$ | (1, 2.27) | (1, 33.9)
$\Sigma_{c}\bar{\Sigma}_{c}$ | 4907 | $0^{-+},1^{--}$ | (1, 8.28) | (1, 53.3)
$\Xi_{c}\bar{\Xi}_{c}$ | 4939 | $0^{-+},1^{--}$ | (2, 18.2) | (2, 0.39)
$\Sigma_{c}^{*}\bar{\Sigma}_{c}$ | 4972 | $(1,2)^{-\pm}$ | (1, 8.27) | (1, 52.8)
$\Sigma_{c}^{*}\bar{\Sigma}_{c}^{*}$ | 5036 | $(0,2)^{-+},(1,3)^{--}$ | (1, 8.25) | (1, 52.3)
$\Xi_{c}\bar{\Xi}_{c}^{\prime}$ | 5048 | $(0,1)^{-\pm}$ | (2, 16.5) | (2, 0.19)
$\Xi_{c}\bar{\Xi}_{c}^{*}$ | 5115 | $(1,2)^{-\pm}$ | (2, 15.6) | (2, 0.11)
$\Xi_{c}^{\prime}\bar{\Xi}_{c}^{\prime}$ | 5158 | $0^{-+},1^{--}$ | (2, 14.9) | (2, 0.061)
$\Xi_{c}^{*}\bar{\Xi}_{c}^{\prime}$ | 5225 | $(1,2)^{-\pm}$ | (2, 14.0) | (2, 0.020)
$\Xi_{c}^{*}\bar{\Xi}_{c}^{*}$ | 5292 | $(0,2)^{-+},(1,3)^{--}$ | (2, 13.2) | (2, 0.002)
### IV.2 Discussions of selected systems
It is worth noticing that the overwhelming majority of the predicted
spectrum lies in an energy region that has not yet been explored
experimentally in detail. Searching for these states at BESIII, Belle-II, LHCb and
other planned experiments will be important to establish a clear pattern of
the hidden-charm states and to understand how QCD organizes the hadron
spectrum.
In the following, we discuss a few interesting systems that have experimental
candidates.
#### IV.2.1 $D^{(*)}\bar{D}^{(*)}$: $X(3872)$, $Z_{c}(3900)$ and their
partners
Within the mechanism considered here, the interactions of $D\bar{D}$,
$D\bar{D}^{*}$ and $D^{*}\bar{D}^{*}$ are identical. For the isoscalar meson
pairs, the attraction is strong enough to form bound states with
similar binding energies, see Table 2 and Fig. 2, while for the isovector
pairs, the contributions from the $\rho$ and $\omega$ exchanges cancel each
other, see Table 8 in Appendix B.
The $X(3872)$ observed by the Belle Collaboration Choi _et al._ (2003) is
widely suggested to be an isoscalar $D\bar{D}^{*}$ molecule with $J^{PC}=1^{++}$
Törnqvist (2003); Wong (2004); Swanson (2004); Törnqvist (2004) (for reviews,
see, e.g., Refs. Chen _et al._ (2016a); Guo _et al._ (2018); Kalashnikova
and Nefediev (2019); Brambilla _et al._ (2020)). In fact, such a hadronic
molecule was predicted by Törnqvist ten years before the discovery, based on
the one-pion exchange Törnqvist (1994). Our results show that the light vector
exchange also leads to a near-threshold isoscalar bound state that can be
identified with the $X(3872)$, together with a negative $C$-parity
partner of the $X(3872)$ with the same binding energy (see also Refs. Gamermann
_et al._ (2007); Gamermann and Oset (2007)). There is experimental evidence of
such a negative $C$-parity state, named $\tilde{X}(3872)$ (it should be
called $h_{c}(3872)$ according to the RPP nomenclature), reported by the
COMPASS Collaboration Aghasyan _et al._ (2018). A recent study of the
$D_{(s)}^{(*)}\bar{D}_{(s)}^{(*)}$ molecular states using the method of QCD
sum rules also finds both $1^{++}$ and $1^{+-}$ $D\bar{D}^{*}$ states Wang
(2020a).
The potential predicting the $X(3872)$ as an isoscalar $D\bar{D}^{*}$ bound
state also predicts the existence of isoscalar $D\bar{D}$ and
$D^{*}\bar{D}^{*}$ bound states. Imposing only HQSS, there are two
independent contact terms in the $D^{(*)}\bar{D}^{(*)}$ interactions for each
isospin AlFiky _et al._ (2006); Nieves and Valderrama (2012), which can be
defined as Guo _et al._ (2018)
$\displaystyle C_{H\bar{H},0}$
$\displaystyle=\left\langle\frac{1}{2},\frac{1}{2},0\right|\mathcal{H}_{I}\left|\frac{1}{2},\frac{1}{2},0\right\rangle,$
$\displaystyle C_{H\bar{H},1}$
$\displaystyle=\left\langle\frac{1}{2},\frac{1}{2},1\right|\mathcal{H}_{I}\left|\frac{1}{2},\frac{1}{2},1\right\rangle,$
(80)
where $\mathcal{H}_{I}$ is the interaction Hamiltonian, and $\left|s_{\ell
1},s_{\ell 2},s_{\ell}\right\rangle$ denotes the charmed meson pair with
$s_{\ell}$ being the total angular momentum of the light degrees of freedom in
the two-meson system and $s_{\ell 1,\ell 2}$ for the individual mesons. Such
an analysis leads to the prediction of a $2^{++}$ $D^{*}\bar{D}^{*}$ tensor
state as the HQSS partner of the $X(3872)$ considering the physical charmed
meson masses Nieves and Valderrama (2012); Hidalgo-Duque _et al._ (2013a);
Guo _et al._ (2013) and three partners with $0^{++},1^{+-}$ and $2^{++}$ in
the strict heavy quark limit Hidalgo-Duque _et al._ (2013b); Baru _et al._
(2016), all of which depend on the same contact term as the $X(3872)$. The resonance
saturation by the light vector mesons in fact leads to the relation
$C_{H\bar{H},0}=C_{H\bar{H},1}$, and consequently to six $S$-wave
$D^{(*)}\bar{D}^{(*)}$ bound states.
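For reference, counting the $C$-parity eigenstates of each $S$-wave channel, the six isoscalar configurations implied by $C_{H\bar{H},0}=C_{H\bar{H},1}$ are

```latex
D\bar{D}\ (0^{++}), \qquad
D\bar{D}^{*}\ (1^{++},\,1^{+-}), \qquad
D^{*}\bar{D}^{*}\ (0^{++},\,1^{+-},\,2^{++}),
```

with the same set of $J^{PC}$ quantum numbers recurring in the isovector sector discussed below.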
The existence of an isoscalar $D\bar{D}$ bound state has been predicted by
various phenomenology models Zhang _et al._ (2006); Gamermann _et al._
(2007); Liu _et al._ (2009); Wong (2004); Nieves and Valderrama (2012);
Hidalgo-Duque _et al._ (2013a), and more recently by lattice QCD calculations
Prelovsek _et al._ (2020). Despite attempts Gamermann and Oset (2008); Dai
_et al._ (2020); Wang _et al._ (2020a) to dig out hints for such a state from
the available experimental data Uehara _et al._ (2006); Pakhlov _et al._
(2008); Aubert _et al._ (2010), no clear evidence has yet been found.
However, this could be because its mass is below the $D\bar{D}$ threshold so
that no easily detectable decay modes are available.
As for the isoscalar $2^{++}$ $D^{*}\bar{D}^{*}$ bound state, it can decay
into $D\bar{D}$ in $D-$wave, and the width was predicted to be in the range
from a few to dozens of MeV Albaladejo _et al._ (2015); Baru _et al._
(2016). No evidence has been found so far. One possible reason is that the
coupling to ordinary charmonia could either move the $2^{++}$ pole deep into
the complex energy plane, rendering it invisible Cincioglu _et al._ (2016)
(for a discussion of the intricate interplay between a meson-meson channel and
multiple quark model states, see Ref. Hammer _et al._ (2016)), or make the
$D^{*}\bar{D}^{*}$ interaction in the $2^{++}$ sector unbound Ortega _et al._
(2018); Ortega and Entem (2020). Since mixing of two energy levels pushes them
further apart, mixing of the $D^{*}\bar{D}^{*}$ state with a lower-mass
$\chi_{c2}(2P)$ can effectively provide a repulsive contribution to the
$D^{*}\bar{D}^{*}$ interaction. For more discussions regarding the mixing of
charmonia with meson-meson channels, we refer to Refs. Kalashnikova (2005);
Zhou and Xiao (2017); Cincioglu _et al._ (2020).
The $Z_{c}(3900)$ Ablikim _et al._ (2013a); Liu _et al._ (2013); Ablikim
_et al._ (2014a) was also suggested to be an isovector $D\bar{D}^{*}$ molecule
with quantum numbers $J^{PC}=1^{+-}$ Wang _et al._ (2013); Guo _et al._
(2013) even though the light vector exchange vanishes in this case. Recall
that the vector charmonia exchange will also yield an attractive interaction,
as discussed in Section III.3, which can possibly lead to a virtual state
below threshold. In fact, it has been suggested that the $J/\psi$-exchange is
essential in the formation of the $Z_{c}(3900)$ in Ref. Aceti _et al._
(2014). It was shown in Refs. Albaladejo _et al._ (2016a); He and Chen
(2018); Ortega _et al._ (2019) that a virtual state assignment for the
$Z_{c}(3900)$ is consistent with the experimental data. (Notice that the
coupling of the $D\bar{D}^{*}$ to a lower channel, which is $J/\psi\pi$ in
this case, induces a finite width for the virtual state pole so that it
behaves like a resonance, i.e. a pole in the complex plane off the real axis.
The same is true for all other poles generated here.) Furthermore, it was shown
in Ref. Albaladejo _et al._ (2016b) that the finite volume energy levels are
also consistent with the lattice QCD results which did not report an
additional state Prelovsek _et al._ (2015). Similarly, the
$Z_{c}(4020)^{\pm}$ Ablikim _et al._ (2013b, 2014b) with isospin-1 near the
$D^{*}\bar{D}^{*}$ threshold can be a virtual state as well. It was recently
argued that a near-threshold virtual state needs to be understood as a
hadronic molecule Matuschek _et al._ (2020). Analysis of the Belle data on
the $Z_{b}$ states Bondar _et al._ (2012); Garmash _et al._ (2016) using
constant contact terms to construct the unitary $T$-matrix also supports the
$Z_{b}$ states as hadronic molecules Cleven _et al._ (2011); Hanhart _et
al._ (2015); Guo _et al._ (2016); Wang _et al._ (2018a); Baru _et al._
(2020). The molecular explanation of the $Z_{c}$ and $Z_{b}$ states is further
supported by their decay patterns studied using a quark exchange model Wang
_et al._ (2019); Xiao _et al._ (2020). Without a quantitative calculation, as
commented in Section III.3, we postulate that there can be six isovector
hadronic molecules (with the same $J^{PC}$ as the isoscalar ones) as virtual
states of $D^{(*)}\bar{D}^{(*)}$, which will show up as prominent threshold
cusps (see Ref. Dong _et al._ (2020a) for a general discussion of the line
shape behavior in the near-threshold region).
#### IV.2.2 $D_{s}^{(*)}\bar{D}_{s}^{(*)}$ virtual states
Here we find that the potential from the $\phi$ exchange is probably not
enough to form bound states of $D_{s}^{(*)}\bar{D}_{s}^{(*)}$. Instead,
virtual states are obtained, see Table 2 and Fig. 2.
In contrast, based on the two prerequisites that
* 1)
the $X(3872)$ is a marginally bound state of $D^{0}\bar{D}^{*0}$ with a binding
energy of $0\sim 1$ MeV, and
* 2)
$D_{s}\bar{D}_{s}$ can form a bound state with a binding energy of $2.4\sim
12.9$ MeV, as inferred from the lattice result Prelovsek _et al._ (2020) and the
$\chi_{c0}(3930)$ mass determined by LHCb Aaij _et al._ (2020b, c),
Ref. Meng _et al._ (2020a) obtained $D_{s}^{(*)}\bar{D}_{s}^{(*)}$ bound
systems with binding energies up to 80 MeV.
Figure 7: Thresholds of charm-strange meson pairs in the energy range relevant
for the $B^{+}\to J/\psi\phi K^{+}$. Here, $D_{s0}^{*}$ denotes
$D_{s0}^{*}(2317)$, $D_{s1}$ and $D_{s1}^{\prime}$ denote $D_{s1}(2536)$ and
$D_{s1}(2460)$, respectively, and $D_{s2}$ denotes $D_{s2}(2573)$. The data
are taken from Ref. Aaij _et al._ (2017a).
The $X(4140)$ first observed by the CDF Collaboration Aaltonen _et al._
(2009) was considered as a molecule of $D^{*}_{s}\bar{D}_{s}^{*}$ with
$J^{PC}=0^{++}$ or $2^{++}$ in Refs. Liu and Zhu (2009); Branz _et al._
(2009); Albuquerque _et al._ (2009); Ding (2009); Zhang and Huang (2010);
Chen _et al._ (2015); Karliner and Rosner (2016). This assignment is, however,
disfavored by the results of LHCb Aaij _et al._ (2017a, b), where the $J^{PC}$
of the $X(4140)$ was suggested to be $1^{++}$ (the $X(4140)$ is thus named
$\chi_{c1}(4140)$ in the latest version of the RPP). Indeed, in our
calculation it is unlikely for the $D^{*}_{s}\bar{D}_{s}^{*}$ to form such
a deeply bound state, given that the $X(4140)$ lies about 80 MeV below the
$D^{*}_{s}\bar{D}_{s}^{*}$ threshold. Instead, it is interesting to notice
that just at the $D^{*}_{s}\bar{D}_{s}^{*}$ threshold there is evidence for a
peak in the invariant mass distribution of $J/\psi\phi$, see Fig. 7. (This
structure around the $D^{*}_{s}\bar{D}^{*}_{s}$ threshold drew the attention
of Ref. Wang _et al._ (2018b), where two resonances, a narrow $X(4140)$ and a
broad $X(4160)$, were introduced to fit the $J/\psi\phi$ invariant mass
distribution from the threshold to about 4250 MeV; there the broad $X(4160)$
was considered a $D^{*}_{s}\bar{D}^{*}_{s}$ molecule.) Following the
analysis in Ref. Dong _et al._ (2020a), if the interaction of the
$D^{*}_{s}\bar{D}_{s}^{*}$ is attractive but not strong enough to form a bound
state, a peak will appear just at the $D^{*}_{s}\bar{D}_{s}^{*}$ threshold in
the invariant mass distribution of $J/\psi\phi$, and the peak is narrow if
there is a nearby virtual state pole. A detailed study of this threshold
structure can tell us whether the attraction between a pair of charm-strange
mesons is strong enough to form a bound state or not.
The difference between the $J/\psi\phi$ and $D_{s}\bar{D}_{s}^{*}$ thresholds
is merely 36 MeV. Thus, the shallow $D_{s}\bar{D}_{s}^{*}$ virtual state with
$1^{++}$ could be responsible for the quick rise of the $J/\psi\phi$ invariant
mass distribution just above threshold observed in the LHCb data Aaij _et
al._ (2017a).
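For orientation, the quoted 36 MeV gap follows directly from the meson masses (RPP values assumed here):

```latex
(m_{J/\psi}+m_{\phi}) - (m_{D_{s}}+m_{D_{s}^{*}})
\approx (3096.9 + 1019.5) - (1968.3 + 2112.2)~\text{MeV}
\approx 36~\text{MeV},
```

so the $J/\psi\phi$ threshold sits only slightly above the $D_{s}\bar{D}_{s}^{*}$ one.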
#### IV.2.3 $D^{(*)}\bar{D}_{s}^{(*)}$: $Z_{cs}$ as virtual states
No light vector can be exchanged here and the attractive interaction from
vector charmonia exchange is crucial, similar to the isovector
$D\bar{D}^{(*)}$ systems. A virtual state pole could exist below threshold. In
particular, if the $Z_{c}(3900)$ exists as a virtual state, the same
interaction would induce $D^{(*)}\bar{D}_{s}^{(*)}$ virtual states.
Recently, a near-threshold enhancement in the invariant mass distribution of
$D_{s}^{-}D^{*0}+D_{s}^{*-}D^{0}$ was reported by the BESIII Collaboration
Ablikim _et al._ (2020a) and an exotic state $Z_{cs}(3985)^{-}$ was claimed.
This state has been widely investigated Wang _et al._ (2020b); Wan and Qiao
(2020); Wang _et al._ (2020c); Meng _et al._ (2020b); Yang _et al._
(2020b); Chen and Huang (2020); Du _et al._ (2020b); Cao _et al._ (2020);
Sun and Xiao (2020); Wang _et al._ (2020d, e); Wang (2020b); Azizi and Er
(2020); Jin _et al._ (2020); Simonov (2020); Süngü _et al._ (2020); Ikeno
_et al._ (2020); Xu _et al._ (2020), some of which regard it as a molecule of
$D_{s}^{-}D^{*0}+D_{s}^{*-}D^{0}$ while others object to such an
explanation. In Ref. Yang _et al._ (2020b), it was found that a virtual or
resonant pole together with a triangle singularity can well reproduce the line
shape of the BESIII data, consistent with the analysis here.
#### IV.2.4 $D^{(*)}\bar{D}_{1,2}$: $Y(4260)$ and related states
It is possible for the isoscalar $D\bar{D}_{1}$ pair to form a bound state
with a binding energy from a few MeV to dozens of MeV. Note that this system
can have $J^{PC}=1^{-\pm}$, and the $1^{--}$ state is slightly more deeply
bound than the $1^{-+}$ one, which carries exotic quantum numbers.
The $Y(4260)$ was discovered by the BABAR Collaboration Aubert _et al._
(2005) with a mass of $(4259\pm 8^{+2}_{-6})$ MeV and a width of 50 $\sim$ 90
MeV and later confirmed by other experiments He _et al._ (2006); Yuan _et
al._ (2007). Now it is called $\psi(4230)$ due to a lower mass from the more
precise BESIII data and a combined analysis in four channels,
$e^{+}e^{-}\rightarrow\omega\chi_{c0}$ Ablikim _et al._ (2016),
$\pi^{+}\pi^{-}h_{c}$ Ablikim _et al._ (2017a), $\pi^{+}\pi^{-}J/\psi$
Ablikim _et al._ (2017b) and $D^{0}D^{*-}\pi^{+}+c.c.$ Ablikim _et al._
(2019), yielding a mass of $(4219.6\pm 3.3\pm 5.1)$ MeV and a width of
$(56.0\pm 3.6\pm 6.9)$ MeV Gao _et al._ (2017). This state is a good
candidate for an exotic state (for reviews, see, e.g., Refs. Chen _et al._ (2016a);
Guo _et al._ (2018); Brambilla _et al._ (2020)). It was argued that the
isoscalar $D\bar{D}_{1}$ plays an important role in the structure of the $Y(4260)$
in, e.g., Refs. Wang _et al._ (2013); Qin _et al._ (2016); Chen _et al._
(2019b). The binding energy of the isoscalar $D\bar{D}_{1}$ system with
$J^{PC}=1^{-\pm}$ via the vector meson exchange was calculated by solving the
Schrödinger equation in coordinate space in Ref. Dong _et al._ (2020b), and the
results are consistent with this work, except that the sign of $V_{c}$, which
has a minor impact, is incorrect there. Note that the mass of the isoscalar
$D\bar{D}_{1}$ bound state obtained here is larger than the nominal mass of
the $\psi(4230)$, see Fig. 1, but the mixing of the $D\bar{D}_{1}$ molecule
with a $D$-wave vector charmonium Lu _et al._ (2017) may resolve this
discrepancy.
From the results in Table 2 and Fig. 1, the isoscalar $D\bar{D}_{1}$ bound
state has quite some partners, either of HQSS or of SU(3) flavor. In
particular, several of them have vector quantum numbers, including a
$D^{*}\bar{D}_{1}$ bound state with a mass about 4.39 $\sim$ 4.43 GeV, a
$D^{*}\bar{D}_{2}$ bound state with a mass about 4.43 $\sim$ 4.47 GeV, and
three virtual states of $D_{s}\bar{D}_{s1}$, $D^{*}_{s}\bar{D}_{s1}$ and
$D^{*}_{s}\bar{D}_{s2}$.
The current status of the vector charmonium spectrum around 4.4 GeV is not
clear, and the peak structures in exclusive and inclusive $R$-value
measurements are different (for a compilation of the relevant data, see Ref.
Yuan). Thus, it is unclear which structure(s) can be identified as the
candidate(s) for the $D^{*}\bar{D}_{1(2)}$ bound states. Nevertheless, the
$Y(4360)$, aka $\psi(4360)$, and the $\psi(4415)$ have been suggested to
correspond to the $D_{1}\bar{D}^{*}$ and $D_{2}\bar{D}^{*}$ states,
respectively Wang _et al._ (2014); Ma _et al._ (2015); Cleven _et al._
(2015); Hanhart and Klempt (2020). A determination of the poles around 4.4 GeV
would require a thorough analysis of the full data sets including these open-
charm channels; first steps in this direction have been taken in Refs. Cleven _et al._
(2014); Olschewsky (2018).
As for the virtual states with hidden-strangeness, they are expected to show
up as narrow threshold cusps in final states like $J/\psi f_{0}(980)$ and
$\psi(2S)f_{0}(980)$. They could play an important role in generating the
$Y(4660)$, aka $\psi(4660)$, peak observed in the
$\psi(2S)f_{0}(980)\to\psi(2S)\pi^{+}\pi^{-}$ invariant mass distribution Lees
_et al._ (2014); Wang _et al._ (2015). (The $Y(4660)$ was suggested to be a
$\psi(2S)f_{0}(980)$ bound state in Ref. Guo _et al._ (2008) to explain why
it was seen only in the $\psi(2S)\pi^{+}\pi^{-}$ final state with the pion pair
coming from the $f_{0}(980)$.) Although it was proposed Cotugno _et al._
(2010); Guo _et al._ (2010) that the $Y(4630)$ structure observed in the
$\Lambda_{c}\bar{\Lambda}_{c}$ spectrum Pakhlova _et al._ (2008) could be the
same state as the $Y(4660)$, the much more precise BESIII data Ablikim
_et al._ (2018) show a different behavior up to 4.6 GeV in the
$\Lambda_{c}\bar{\Lambda}_{c}$ invariant mass distribution (see below).
Further complications come from the $1^{--}$ structures around 4.63 GeV
reported in the $D_{s}\bar{D}_{s1}+c.c.$ Jia _et al._ (2019) and
$D_{s}\bar{D}_{s2}+c.c.$ Jia _et al._ (2020) distributions, the former of
which has been proposed to be due to a molecular state from the
$D_{s}^{*}\bar{D}_{s1}$-$D_{s}\bar{D}_{s1}$ interaction He _et al._ (2020).
Suffice it to say that the situation of the $Y(4630)$ remains ambiguous. With
the more precise data to be collected at BESIII and Belle-II, we suggest
searching for line shape irregularities (either peaks or dips) at the
$D_{s}^{*}\bar{D}_{s1}$ and $D_{s}^{*}\bar{D}_{s2}$ thresholds in open-charm-
strangeness final states such as $D_{s}^{(*)}\bar{D}_{s}^{(*)}$ and
$D_{s}\bar{D}_{s1(s2)}$.
There are also hints in data for positive $C$-parity $D_{s}\bar{D}_{s1}$ and
$D_{s}^{*}\bar{D}_{s1}$ virtual states (see Table 3 and Fig. 1), whose
thresholds are at 4503 MeV and 4647 MeV, respectively. As can be seen from
Fig. 7, there is a peak around 4.51 GeV and a dip around 4.65 GeV in the
$J/\psi\phi$ energy distribution measured by the LHCb Collaboration Aaij _et
al._ (2017a), and the energy difference between the dip and the peak
approximately equals the mass splitting between the $D_{s}^{*}$ and the
$D_{s}$. We also notice that the highest peak in the same data appears at the
$D_{s0}^{*}(2317)\bar{D}_{s}$ threshold. (The coincidence of the peak
position with the threshold and the highly asymmetric line shape suggest a
$D_{s0}^{*}(2317)\bar{D}_{s}$ virtual state; such systems will be studied in a
future work.) All these channels, together with the $D_{s}\bar{D}_{s}^{*}$ and
$D_{s}^{*}\bar{D}_{s}^{*}$ discussed in Section IV.2.2, need to be considered
in a reliable analysis of the $B^{+}\to J/\psi\phi K^{+}$ data, which is
however beyond the scope of this paper.
Notice that because the $D_{1(2)}$ and $D_{s1(s2)}$ have finite widths, the
molecular states containing one of them can decay easily through the decays of
$D_{1(2)}$ or $D_{s1(s2)}$. The structures at the thresholds of
$D_{s}^{(*)}\bar{D}_{s1(s2)}$, for which virtual states are predicted, will
get smeared by the widths of $D_{s1(s2)}$. Thus, the $D_{s}^{(*)}\bar{D}_{s2}$
threshold structures should be broader and smoother than the
$D_{s}^{(*)}\bar{D}_{s1}$ ones since the width of the $D_{s2}$, $(16.9\pm
0.7)$ MeV Zyla _et al._ (2020), is much larger than that of the $D_{s1}$,
$(0.92\pm 0.05)$ MeV Zyla _et al._ (2020).
#### IV.2.5 $\Lambda_{c}\bar{\Lambda}_{c}$: analysis of the BESIII data and
more baryon-antibaryon bound states
From Table 2 and Fig. 1, in the spectrum of the isoscalar $1^{--}$ states, in
addition to those made of a pair of charmed mesons, we predict more than 10
baryon-antibaryon molecules. The lowest one is the
$\Lambda_{c}\bar{\Lambda}_{c}$ bound state, and the others are above 4.85 GeV.
While those above 4.85 GeV are beyond the current reach of BESIII (there is a
BESIII data-taking plan in the energy region above 4.6 GeV Ablikim _et al._
(2020b)), there is strong evidence for the existence of a
$\Lambda_{c}\bar{\Lambda}_{c}$ bound state in the BESIII data Ablikim _et
al._ (2018).
The $\Lambda_{c}\bar{\Lambda}_{c}$ system can form a bound state with a
binding energy in the range from a few MeV to dozens of MeV, depending on the
cutoff. Therefore, we predict that there is a pole below the
$\Lambda_{c}\bar{\Lambda}_{c}$ threshold and the pole position can be
extracted from the line shape of the $\Lambda_{c}\bar{\Lambda}_{c}$ invariant
mass distribution near threshold.
The cross section of $e^{+}e^{-}\to\Lambda_{c}\bar{\Lambda}_{c}$ was first
measured using the initial state radiation by Belle Pakhlova _et al._ (2008)
and a vector charmonium-like structure $Y(4630)$ was observed. The BESIII
Collaboration measured such cross sections at four energy points just above
threshold much more precisely Ablikim _et al._ (2018). The energy dependence
of the cross sections at these four points shows a peculiar behavior: it is
almost flat. This can be understood as the combined consequence of the
Sommerfeld factor, which makes the distribution nonvanishing even exactly at
threshold, and the existence of a near-threshold pole, which counteracts the
increasing trend of the phase space multiplied by the Sommerfeld factor,
resulting in an almost flat distribution. Here we fit the BESIII data to
estimate where the pole is located.
The Sommerfeld factor Sommerfeld (1931) accounting for the multi-photon
exchange between the $\Lambda_{c}^{+}$ and $\bar{\Lambda}_{c}^{-}$ reads,
$\displaystyle S_{0}(E)=\frac{2\pi x}{1-e^{-2\pi x}},$ (81)
where $x=\alpha\mu/k$ with $\alpha\approx 1/137$, $\mu=m_{\Lambda_{c}}/2$, and
$k$ is defined in Eq. (78). The cross section of
$e^{+}e^{-}\to\Lambda_{c}\bar{\Lambda}_{c}$ is now parameterized as
$\displaystyle\sigma(E)=N\cdot
S_{0}(E)\cdot|f(E)|^{2}\cdot\frac{\rho(E)}{E^{2}},$ (82)
with $N$ a normalization constant and $\rho(E)=k/(8\pi E)$ the phase space.
Here $f(E)$ denotes the nonrelativistic scattering amplitude, and the $S$-wave
one,
$\displaystyle
f_{0}(E)=\left(\frac{1}{a_{0}}-i\sqrt{2\mu(E-2m_{\Lambda_{c}})}\right)^{-1},$
(83)
is sufficient in the immediate vicinity of the threshold. Note that we take
the scattering length $a_{0}$ complex to take into account the couplings
between the $\Lambda_{c}\bar{\Lambda}_{c}$ and lower channels Dong _et al._
(2020a). Finally, we have three parameters, Re$(1/a_{0})$, Im$(1/a_{0})$ and $N$,
with which to fit the four experimental data points.
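As a minimal numerical sketch of Eqs. (81)-(83): the $\Lambda_{c}$ mass below is the RPP value, the momentum $k$ is approximated nonrelativistically (Eq. (78) is not reproduced here), and the value of $1/a_{0}$ used in the usage example is an illustrative placeholder rather than a fitted result.

```python
import cmath
import math

ALPHA = 1.0 / 137.0   # QED fine-structure constant
M_LC = 2286.46        # Lambda_c mass [MeV] (RPP value, assumed here)
MU = M_LC / 2.0       # reduced mass of the Lambda_c Lambda_c-bar pair
E_TH = 2.0 * M_LC     # Lambda_c Lambda_c-bar threshold


def k_cm(E):
    """C.m. momentum above threshold [MeV], nonrelativistic approximation."""
    return math.sqrt(2.0 * MU * (E - E_TH))


def sommerfeld(E):
    """Sommerfeld factor S0(E) of Eq. (81), with x = alpha*mu/k."""
    x = ALPHA * MU / k_cm(E)
    return 2.0 * math.pi * x / (1.0 - math.exp(-2.0 * math.pi * x))


def f0(E, inv_a0):
    """S-wave scattering-length amplitude of Eq. (83); inv_a0 = 1/a0 is
    complex to mimic the coupling to lower channels."""
    return 1.0 / (inv_a0 - 1j * cmath.sqrt(2.0 * MU * (E - E_TH)))


def sigma(E, N, inv_a0):
    """Cross section of Eq. (82), with phase space rho(E) = k/(8*pi*E)."""
    rho = k_cm(E) / (8.0 * math.pi * E)
    return N * sommerfeld(E) * abs(f0(E, inv_a0)) ** 2 * rho / E ** 2


# Illustrative evaluation just above threshold (1/a0 value is a placeholder).
value = sigma(E_TH + 1.0, 1.0, complex(0.02, -0.05))
```

Note that near threshold $S_{0}\sim 2\pi\alpha\mu/k$ while $\rho\sim k$, so their product stays finite and nonzero at $E=E_{\rm th}$, which is the mechanism behind the nonvanishing cross section at threshold mentioned above.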
Figure 8: Top: pole positions of Eq. (83) on the first RS for different
scattering length ($a_{0}$) values; the color represents the $\chi^{2}$ (on a
logarithmic scale for better illustration) of the fit to the BESIII data Ablikim
_et al._ (2018). The pole on the second RS is at the same position if we
change the sign of Re$(1/a_{0})$, which does not change the fit. Bottom:
examples of some fits, which yield poles below threshold at $4456-19i$ MeV
(red), $4468-14i$ MeV (blue dash-dotted) and $4566-6i$ MeV (green dashed).
The fit results are shown in Fig. 8. The best fit leads to
a pole located close to the real axis but above threshold. A pole below
threshold, as we predicted, is also possible; see the bottom panel of Fig. 8.
These fits, though with larger $\chi^{2}$, are reasonable given that we have only
four data points. The $a_{0}$ values obtained from these fits yield poles several
MeV below threshold with imaginary parts of dozens of MeV. Such poles are
located on the first RS corresponding to a bound state, which moves from the
real axis onto the complex plane due to the coupling to lower channels. There
is another pole located at the symmetric position on the second RS,
corresponding to a virtual state. Actually, with the scattering length
approximation in Eq. (83), we cannot determine on which RS the pole is located
since the poles on different RSs below threshold have the same behavior above
threshold. More data are needed to pin down the exact pole position
corresponding to our predicted $\Lambda_{c}\bar{\Lambda}_{c}$ bound state,
which should be different from the $Y(4630)$ or $Y(4660)$. In Ref. Dai _et
al._ (2017), the BESIII Ablikim _et al._ (2018) and Belle Pakhlova _et al._
(2008) data are fitted together using an amplitude with a pole around 4.65
GeV. While the Belle data of the $Y(4630)$ peak can be well described, the
much more precise BESIII data points in the near-threshold region cannot. We
conclude that the data from $\Lambda_{c}\bar{\Lambda}_{c}$ threshold up to 4.7
GeV should contain signals of at least two states: the
$\Lambda_{c}\bar{\Lambda}_{c}$ molecule and another one with a mass around
4.65 GeV.
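The RS ambiguity can be made explicit for real $a_{0}$: continuing Eq. (83) below threshold with $\sqrt{2\mu(E-2m_{\Lambda_{c}})}=\pm i\kappa$, $\kappa>0$, on the first/second RS, the pole condition gives

```latex
\kappa_{\rm pole} = \mp\frac{1}{a_{0}}, \qquad
E_{\rm pole} = 2m_{\Lambda_{c}} - \frac{1}{2\mu a_{0}^{2}},
```

so flipping the sign of $1/a_{0}$ moves the pole between the first RS (bound state) and the second RS (virtual state) without changing $E_{\rm pole}$ or the amplitude above threshold.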
As for the isoscalar vector states above 4.85 GeV, the structures could be
more easily identified from data than those around 4.3 GeV. This is because
the charmonium states in that mass region should be very broad while these
hadronic molecules are narrower due to the small binding energies,
corresponding to large spatial extensions. We expect the
$\Sigma_{c}\bar{\Sigma}_{c}$, $\Xi_{c}\bar{\Xi}_{c}$ and
$\Sigma_{c}\bar{\Sigma}_{c}^{*}$ below 5 GeV to be seen in the forthcoming
BESIII measurements, and the ones higher than 5 GeV can be searched for in
future super tau-charm facilities Barniakov (2019); Peng _et al._ (2020b).
There are isovector $1^{--}$ baryon-antibaryon molecular states above 4.7 GeV,
see Table 7 and Fig. 5. It is more difficult to observe these states than the
isoscalar ones in $e^{+}e^{-}$ collisions since the main production mechanism
of vector states should be driven by a vector $\bar{c}\gamma_{\mu}c$ current,
which is an isoscalar, coupled to the virtual photon. However, they could be
produced together with a pion, and thus can be searched for in future super
tau-charm facilities with center-of-mass energies above 5 GeV.
#### IV.2.6 $\bar{D}^{(*)}\Sigma_{c}^{(*)}$: $P_{c}$ states
The $\bar{D}^{(*)}\Sigma_{c}^{(*)}$ systems with isospin-$1/2$ are attractive,
and a near-threshold pole can be found for each combination. This pole is a
virtual state or (mostly) a bound state depending on the cutoff, see Table 4
and Fig. 3.
Such systems have drawn lots of attention Wu _et al._ (2010, 2011, 2012a);
Wang _et al._ (2011); Yang _et al._ (2012); Wu _et al._ (2012b); Xiao _et
al._ (2013); Karliner and Rosner (2015) especially after the pentaquark
states, $P_{c}(4450)$ and $P_{c}(4380)$, were observed by LHCb Aaij _et al._
(2015). In the updated measurement Aaij _et al._ (2019), the $P_{c}(4450)$
signal splits into two narrower peaks, $P_{c}(4440)$ and $P_{c}(4457)$. There
is no clear evidence for the previous broad $P_{c}(4380)$, and meanwhile a new
narrow resonance $P_{c}(4312)$ shows up. Several models have been applied by
tremendous works to understand the structures of these states, and the
$\bar{D}^{(*)}\Sigma_{c}^{(*)}$ molecular explanation stands out as it can
explain the three states simultaneously, see e.g. Refs. Liu _et al._ (2019b);
Xiao _et al._ (2019a); Du _et al._ (2020c). Particularly in Ref. Du _et
al._ (2020c), the LHCb data are described quite well by the interaction
constructed with heavy quark spin symmetry and actually four $P_{c}$ states,
instead of three, show up, corresponding to $\bar{D}\Sigma_{c}$,
$\bar{D}\Sigma_{c}^{*}$ and $\bar{D}^{*}\Sigma_{c}$ molecules. A hint of a
narrow $P_{c}(4380)$ was reported in the analysis of Ref. Du _et al._
(2020c). The rest three $P_{c}$ states related to $\bar{D}^{*}\Sigma_{c}^{*}$
predicted there have no signals up to now.
In the vector meson saturation model considered here, the two contact terms
constructed considering only HQSS Liu _et al._ (2018, 2019b); Sakai _et al._
(2019); Du _et al._ (2020c), corresponding to the total angular momentum of
the light degrees of freedom to be $1/2$ and $3/2$, are the same, similar to
the $H\bar{H}$ interaction discussed in Section IV.2.1. As a result, seven
$\bar{D}^{(*)}\Sigma_{c}^{(*)}$ molecular states Xiao _et al._ (2013) with
similar binding energies are obtained, and the two $\bar{D}^{*}\Sigma_{c}$
states with different total spins, corresponding to the $P_{c}(4440)$ and
$P_{c}(4457)$, are degenerate. The degeneracy will be lifted by considering the
exchange of the pion and other mesons and by keeping the momentum-dependent terms
of the light vector exchange.
#### IV.2.7 $\bar{D}^{(*)}\Xi_{c}^{(\prime)}$: $P_{cs}$ and related states
It is natural for the isoscalar $\bar{D}^{*}\Xi_{c}$ to form bound states if
the above $P_{c}$ states are considered as the isospin-$1/2$
$\bar{D}^{(*)}\Sigma_{c}^{(*)}$ molecules since the interactions from the
light vector exchange are the same in these two cases, see Table 8. Note that
such states have been predicted by various works Hofmann and Lutz (2005); Chen
_et al._ (2017); Anisovich _et al._ (2015); Wang (2016); Feijoo _et al._
(2016); Lu _et al._ (2016); Xiao _et al._ (2019b); Chen _et al._ (2016b);
Wang _et al._ (2020f); Zhang _et al._ (2020).
Recently, Ref. Aaij _et al._ (2020a) reported an exotic state named
$P_{cs}(4459)$ in the invariant mass distribution of $J/\psi\Lambda$ in
$\Xi_{b}^{-}\to J/\psi K^{-}\Lambda$. Even though the significance is only
3.1$\sigma$, several works Liu _et al._ (2020); Chen (2020); Wang (2020c);
Peng _et al._ (2020c); Chen _et al._ (2020) have explored the possibility of
$P_{cs}(4459)$ being a molecule of $\bar{D}^{*}\Xi_{c}$, and the finding here
supports such an explanation: the structure could be caused by two
isoscalar $\bar{D}^{*}\Xi_{c}$ molecules.
Furthermore, Ref. Wang _et al._ (2020g) moved on to the double-strangeness
systems and claimed that $\bar{D}_{s}^{*}\Xi_{c}^{\prime}$ and
$\bar{D}_{s}^{*}\Xi_{c}^{*}$ may form bound states with $J^{P}=3/2^{-}$ and
$5/2^{-}$, respectively. The $\phi$ exchange for such systems yields a repulsive
interaction at leading order, see Table 8, and the bound states obtained in
Ref. Wang _et al._ (2020g) result from other contributions, including the
exchange of pseudoscalar and scalar mesons, the subleading momentum dependence
of the $\phi$ exchange, and coupled-channel effects.
## V Summary and discussion
The whole spectrum of hadronic molecules of a pair of charmed and anticharmed
hadrons, considering all the $S$-wave singly-charmed mesons and baryons as
well as the $s_{\ell}=3/2$ $P$-wave charmed mesons, is systematically obtained
using $S$-wave constant contact potentials saturated by the exchange of vector
mesons. The couplings of the charmed hadrons to light mesons are constructed
by implementing HQSS, chiral symmetry, and SU(3) flavor symmetry.
The spectrum predicted here should be regarded as the leading approximation of
the heavy-antiheavy molecular spectrum, giving only its general overall
features. Specific systems may differ from the predictions here due to the
limitations of our treatment: we considered neither the effects of coupled
channels, nor the spin-dependent interactions, which arise from
momentum-dependent terms that are of higher order in the very-near-threshold
region, nor the contribution from the exchange of pseudoscalar and scalar
mesons, nor the mixing with charmonia. Nevertheless, the spectrum shows a
pattern different from that obtained by considering only the one-pion exchange
(see, e.g., Ref. Karliner and Rosner (2015)), which does not allow molecular
states in systems such as $D\bar{D}$ and $\Sigma_{c}\bar{D}$, where the
one-pion exchange is forbidden without coupled channels.
In total, 229 hidden-charm hadronic molecules (bound or virtual) are
predicted, many of which deserve attention:
* 1)
The pole positions of the isoscalar $D\bar{D}^{*}$ with positive and negative
$C$-parity are consistent with the molecular explanation of $X(3872)$ and
$\tilde{X}(3872)$, respectively. There is a shallow bound state in the
isoscalar $D\bar{D}$ system, consistent with the recent lattice QCD result
Prelovsek _et al._ (2020).
* 2)
The spectrum of the $\bar{D}^{(*)}\Sigma_{c}^{(*)}$ systems is consistent with
the molecular explanations of famous $P_{c}$ states: the $P_{c}(4312)$ as an
isospin-$1/2$ $\bar{D}\Sigma_{c}$ molecule, and $P_{c}(4440)$ and
$P_{c}(4457)$ as isospin-$1/2$ $\bar{D}^{*}\Sigma_{c}$ molecules. With the
resonance saturation from the vector mesons, the two $\bar{D}^{*}\Sigma_{c}$
molecules are degenerate. In addition, there is an isospin-$1/2$
$\bar{D}\Sigma_{c}^{*}$ molecule, consistent with the narrow $P_{c}(4380)$
advocated in Ref. Du _et al._ (2020c), and three isospin-$1/2$
$\bar{D}^{*}\Sigma_{c}^{*}$ molecules, consistent with the results in the
literature.
* 3)
There are two isoscalar $\bar{D}^{*}\Xi_{c}$ molecules, which may be related
to the recently announced $P_{cs}(4459)$. In addition, more negative-parity
isoscalar $P_{cs}$-type molecules are predicted: one in $\bar{D}\Xi_{c}$, one
in $\bar{D}\Xi_{c}^{\prime}$, one in $\bar{D}\Xi_{c}^{*}$, two in
$\bar{D}^{*}\Xi_{c}^{\prime}$, and three in $\bar{D}^{*}\Xi_{c}^{*}$.
* 4)
Instead of associating the $X(4140)$ with a $D_{s}^{*}\bar{D}_{s}^{*}$
molecule, as some other works do, our results favor the
$D_{s}^{*}\bar{D}_{s}^{*}$ system forming a virtual state. The peak in the invariant
mass distribution of $J/\psi\phi$ measured by LHCb just at the
$D_{s}^{*}\bar{D}_{s}^{*}$ threshold is consistent with this scenario,
according to the discussion in Ref. Dong _et al._ (2020a).
* 5)
The isoscalar $D^{(*)}\bar{D}_{1(2)}$ can form negative-parity bound states
with both positive and negative $C$ parities. The $D\bar{D}_{1}$ bound state
is the lowest one in this family, and the $1^{--}$ one is consistent with the
sizeable $D\bar{D}_{1}$ molecular component in the $\psi(4230)$.
* 6)
$\Lambda_{c}\bar{\Lambda}_{c}$ bound states with $J^{PC}=0^{-+}$ and $1^{--}$
are predicted. The vector one should be responsible for the almost flat line
shape of the $e^{+}e^{-}\to\Lambda_{c}\bar{\Lambda}_{c}$ cross section in the
near-threshold region observed by BESIII Ablikim _et al._ (2018).
* 7)
In the isovector $D^{(*)}\bar{D}^{(*)}$ systems, the light vector meson
exchanges vanish due to the cancellation between $\rho$ and $\omega$, while in
the $D^{(*)}\bar{D}_{s}^{(*)}$ systems they are not allowed. However, the vector charmonia
exchanges may play an important role as pointed out in Ref. Aceti _et al._
(2014), and the $Z_{c}(3900,4020)$ and $Z_{cs}(3985)$ could well be the
$D^{(*)}\bar{D}^{*}$ and $D^{*}\bar{D}_{s}$ – $D\bar{D}_{s}^{*}$ virtual
states.
When the light vector meson exchange is allowed, the results reported here are
generally consistent with the results from a more complete treatment of the
one-boson exchange model (e.g., by solving the Schrödinger equation). For
example, the binding energy of the isoscalar $D\bar{D}$ Zhang _et al._ (2006)
and $D\bar{D}_{1}$ Dong _et al._ (2020b) bound states from the $\rho$ and
$\omega$ exchanges fit the spectrum well; a similar pattern of molecular
states related to the $X(3872)$ was obtained in Ref. Liu _et al._ (2009) and
the light vector exchange was found necessary to bind $D\bar{D}^{*}$ together;
the $\bar{D}^{*}\Sigma_{c}$ bound states corresponding to the $P_{c}(4440)$
and $P_{c}(4457)$ were obtained via one boson exchange in Ref. Liu _et al._
(2019c), and the degeneracy of the two states with $J=1/2$ and $3/2$ is lifted
by the pion exchange and higher order contributions from the $\rho$ and
$\omega$ exchange. We should also notice that there can be systems whose
contact terms receive important contributions from the scalar-meson exchanges.
We expect that there should be structures in the near-threshold region for all
the heavy-antiheavy hadron pairs that have attractive interactions at
threshold. The structure can be either exactly at threshold, if the attraction
is not strong enough to form a bound state, or below threshold, if a bound
state is formed. Moreover, the structures are not necessarily peaks, and they
can be dips in invariant mass distributions, depending on the pertinent
production mechanism as discussed in our recent work Dong _et al._ (2020a).
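The bound-versus-virtual distinction invoked above can be made concrete in the scattering-length approximation, where the near-threshold amplitude is $f(k)=1/(-1/a-ik)$ with a pole at $k=i/a$: a bound state on the physical sheet for $a>0$, a virtual state on the second sheet for $a<0$, in both cases at $E=-1/(2\mu a^{2})$ relative to threshold. A minimal Python sketch (illustrative only; the scattering lengths below are not fitted to any system in the text, and the $D$-meson mass is rounded):

```python
import math

HBARC = 197.3269804  # MeV fm, converts a length in fm to an inverse momentum

def near_threshold_pole(a_fm, mu_mev):
    """Classify the pole of f(k) = 1/(-1/a - i k) and return its energy
    relative to threshold (MeV). a > 0: bound state (pole at k = +i/a,
    physical sheet); a < 0: virtual state (pole at k = -i/|a|, second sheet)."""
    kappa = HBARC / abs(a_fm)            # |binding momentum| in MeV
    energy = -kappa**2 / (2.0 * mu_mev)  # E = -kappa^2 / (2 mu)
    return ("bound" if a_fm > 0 else "virtual"), energy

# Illustrative: D Dbar reduced mass with m_D ~ 1867 MeV
mu_DD = 1867.0 / 2.0
print(near_threshold_pole(+1.0, mu_DD))  # shallow bound state below threshold
print(near_threshold_pole(-1.0, mu_DD))  # virtual state at the same |E|, second sheet
```

Whether such a pole shows up as a peak or a dip in an invariant mass distribution depends on the production mechanism, as noted above.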
When the predicted states have ordinary quantum numbers as those of charmonia,
the molecular states must mix with charmonia, and the mixing can have an
important impact on the spectrum. Yet, in the energy region higher than 4.8
GeV, where plenty of states are predicted as shown in Figs. 1 and 2, normal
charmonia should be very broad due to the huge phase space while the molecular
states should be relatively narrow due to the large distance between the
constituent hadrons. Thus, narrow structures to be discovered in this energy
region should be mainly due to the molecular structures, being either bound or
virtual states.
Among the 229 structures predicted here, only a minority is in the energy
region that has been studied in detail. The largest data sets from the current
experiments have the following energy restrictions: direct production of the
$1^{--}$ sector in $e^{+}e^{-}$ collisions goes up to 4.6 GeV at BESIII; the
hidden-charm $XYZ$ states produced through the weak process $b\to c\bar{c}s$
in $B\to K$ decays should be below 4.8 GeV; the hidden-charm $P_{c}$
pentaquarks produced in $\Lambda_{b}\to K$ decays should be below 5.1 GeV. To
find more states in the predicted spectrum, we need both higher-statistics
data in these processes and data from other experiments, such as prompt
production at hadron colliders, PANDA, electron-ion collisions, and
$e^{+}e^{-}$ collisions above 5 GeV at super tau-charm facilities.
Due to HQFS, the potentials in the bottom sector are the same as those in the
charm sector if the nonrelativistic field normalization is used, and we
expect the same number of molecular states in the analogous systems therein.
Because of the much heavier reduced masses of hidden-bottom systems, the
virtual states in the charm sector will move closer to the thresholds or even
become bound states in the bottom sector, and the bound states in the charm
sector will be more deeply bound in the bottom sector. There may even be
excited states for some deeply bound systems. For these deeply bound systems,
the constant contact term approximation considered here will not be
sufficient. However, due to their large masses, such states are more
difficult to produce than those in the charm sector.
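The claim that the heavier reduced masses deepen the binding in the bottom sector can be illustrated with a toy constant-contact-term model: a sharp-cutoff two-body loop function $G(E)=-(\mu/\pi^{2})\left[\Lambda-\kappa\arctan(\Lambda/\kappa)\right]$ with $\kappa=\sqrt{-2\mu E}$, and the bound-state condition $1-VG(E_{B})=0$ for an attractive $V=-|V|$. The cutoff and coupling below are illustrative choices, not the values used in this work:

```python
import math

def loop_G(kappa, mu, cutoff):
    """Sharp-cutoff two-body loop for E = -kappa^2/(2 mu) < 0 (all in MeV):
    G(E) = -(mu/pi^2) * (Lambda - kappa * arctan(Lambda/kappa))."""
    return -(mu / math.pi**2) * (cutoff - kappa * math.atan(cutoff / kappa))

def binding_energy(strength, mu, cutoff):
    """Solve 1 - V G(E_B) = 0 for V = -strength (strength > 0) by bisection
    in kappa; returns the binding energy in MeV, or None if too weak to bind."""
    f = lambda kappa: strength * (-loop_G(kappa, mu, cutoff)) - 1.0
    lo, hi = 1e-6, 10.0 * cutoff
    if f(lo) < 0.0:               # attraction below the critical strength
        return None
    for _ in range(200):          # f decreases monotonically in kappa
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    kappa = 0.5 * (lo + hi)
    return kappa**2 / (2.0 * mu)

LAMBDA = 1000.0                                   # MeV, illustrative cutoff
V0 = 1.1 * math.pi**2 / (967.0 * LAMBDA)          # just above the charm critical strength
mu_charm = 1867.0 * 2009.0 / (1867.0 + 2009.0)    # D Dbar* reduced mass (MeV)
mu_bottom = 5279.0 * 5325.0 / (5279.0 + 5325.0)   # B Bbar* reduced mass (MeV)
print(binding_energy(V0, mu_charm, LAMBDA))   # shallow binding in the charm sector
print(binding_energy(V0, mu_bottom, LAMBDA))  # much deeper binding in the bottom sector
```

With the same attraction, the larger bottom-sector reduced mass turns a barely bound charm-sector pole into a deeply bound one, in line with the qualitative statement above; for such deep binding the constant contact term is, as noted, no longer sufficient.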
###### Acknowledgements.
We would like to thank Chang-Zheng Yuan for useful discussions, and thank Fu-
Lai Wang for a communication regarding Ref. Wang _et al._ (2020g). This work
is supported in part by the Chinese Academy of Sciences (CAS) under Grant No.
XDB34030000 and No. QYZDB-SSW-SYS013, by the National Natural Science
Foundation of China (NSFC) under Grant No. 11835015, No. 12047503 and No.
11961141012, by the NSFC and the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) through the funds provided to the Sino-German
Collaborative Research Center TRR110 “Symmetries and the Emergence of
Structure in QCD” (NSFC Grant No. 12070131001, DFG Project-ID 196253076), and
by the CAS Center for Excellence in Particle Physics (CCEPP).
## Appendix A Vertex factors for direct processes
The vertex factors in Eq. (70) for the different particles are calculated as
follows:
* •
$D$
$\displaystyle g_{1}$ $\displaystyle=-\sqrt{2}\beta
g_{V}m_{D}(P_{a}^{(Q)}V_{ab}P^{(Q)T}_{b}),$ (84) $\displaystyle g_{2}$
$\displaystyle=\sqrt{2}\beta
g_{V}m_{D}(P_{a}^{(\bar{Q})}V_{ab}P^{(\bar{Q})T}_{b}).$ (85)
* •
$D^{*}$
$\displaystyle g_{1}$ $\displaystyle=\sqrt{2}\beta
g_{V}m_{D^{*}}\epsilon\cdot\epsilon^{*}(P_{a}^{*(Q)}V_{ab}P^{*(Q)T}_{b})$
$\displaystyle\approx-\sqrt{2}\beta
g_{V}m_{D^{*}}(P_{a}^{*(Q)}V_{ab}P^{*(Q)T}_{b}),$ (86) $\displaystyle g_{2}$
$\displaystyle=-\sqrt{2}\beta
g_{V}m_{D^{*}}\epsilon\cdot\epsilon^{*}(P_{a}^{*(\bar{Q})}V_{ab}P^{*(\bar{Q})T}_{b})$
$\displaystyle\approx\sqrt{2}\beta
g_{V}m_{D^{*}}(P_{a}^{*(\bar{Q})}V_{ab}P^{*(\bar{Q})T}_{b}).$ (87)
* •
$D_{1}$
$\displaystyle g_{1}$
$\displaystyle=-\sqrt{2}\beta_{2}g_{V}m_{D_{1}}\epsilon\cdot\epsilon^{*}(P^{(Q)}_{1a}V_{ab}P^{(Q)T}_{1b}),$
$\displaystyle\approx\sqrt{2}\beta_{2}g_{V}m_{D_{1}}(P^{(Q)}_{1a}V_{ab}P^{(Q)T}_{1b}),$
(88) $\displaystyle g_{2}$
$\displaystyle=\sqrt{2}\beta_{2}g_{V}m_{D_{1}}\epsilon\cdot\epsilon^{*}(P^{(\bar{Q})}_{1a}V_{ab}P^{(\bar{Q})T}_{1b}),$
$\displaystyle\approx-\sqrt{2}\beta_{2}g_{V}m_{D_{1}}(P^{(\bar{Q})}_{1a}V_{ab}P^{(\bar{Q})T}_{1b}).$
(89)
* •
$D_{2}^{*}$
$\displaystyle g_{1}$
$\displaystyle=\sqrt{2}\beta_{2}g_{V}m_{D_{2}^{*}}\epsilon^{\mu\nu}\epsilon_{\mu\nu}^{*}(P^{*(Q)}_{2a}V_{ab}P^{*(Q)T}_{2b})$
$\displaystyle\approx\sqrt{2}\beta_{2}g_{V}m_{D_{2}^{*}}(P^{*(Q)}_{2a}V_{ab}P^{*(Q)T}_{2b}),$
(90) $\displaystyle g_{2}$ $\displaystyle=-\sqrt{2}\beta_{2}
g_{V}m_{D_{2}^{*}}\epsilon^{\mu\nu}\epsilon_{\mu\nu}^{*}(P^{*(\bar{Q})}_{2a}V_{ab}P^{*(\bar{Q})T}_{2b})$
$\displaystyle\approx-\sqrt{2}\beta_{2}
g_{V}m_{D_{2}^{*}}(P^{*(\bar{Q})}_{2a}V_{ab}P^{*(\bar{Q})T}_{2b}).$ (91)
* •
$B_{\bar{3}}$
$\displaystyle g_{1}$
$\displaystyle=\frac{1}{\sqrt{2}}\beta_{B}g_{V}\bar{u}(k_{1})u(p_{1}){\rm
tr}\left[B_{\bar{3}}^{(Q)T}VB_{\bar{3}}^{(Q)}\right]$
$\displaystyle\approx\sqrt{2}\beta_{B}g_{V}m_{B_{\bar{3}}}{\rm
tr}\left[B_{\bar{3}}^{(Q)T}VB_{\bar{3}}^{(Q)}\right],$ (92) $\displaystyle
g_{2}$
$\displaystyle=-\frac{1}{\sqrt{2}}\beta_{B}g_{V}\bar{u}(k_{2})u(p_{2}){\rm
tr}\left[B_{3}^{(\bar{Q})T}V^{T}B_{3}^{(\bar{Q})}\right]$
$\displaystyle\approx-\sqrt{2}\beta_{B}g_{V}m_{B_{\bar{3}}}{\rm
tr}\left[B_{3}^{(\bar{Q})T}V^{T}B_{3}^{(\bar{Q})}\right].$ (93)
* •
$B_{6}$
$\displaystyle g_{1}$
$\displaystyle=-\frac{1}{3}\frac{\beta_{S}g_{V}}{\sqrt{2}}\bar{u}(k_{1})\gamma^{5}(\gamma^{\mu}+v^{\mu})^{2}\gamma^{5}u(p_{1})$
$\displaystyle\quad\times{\rm tr}\left[B_{6}^{(Q)T}VB^{(Q)}_{6}\right]$
$\displaystyle\approx-\sqrt{2}m_{B_{6}}\beta_{S}g_{V}{\rm
tr}\left[B_{6}^{(Q)T}VB^{(Q)}_{6}\right],$ (94) $\displaystyle g_{2}$
$\displaystyle=\frac{1}{3}\frac{\beta_{S}g_{V}}{\sqrt{2}}\bar{u}(k_{2})\gamma^{5}(\gamma^{\mu}+v^{\mu})^{2}\gamma^{5}u(p_{2})$
$\displaystyle\quad\times{\rm
tr}\left[B_{6}^{(\bar{Q})T}V^{T}B^{(\bar{Q})}_{6}\right]$
$\displaystyle\approx\sqrt{2}m_{B_{6}}\beta_{S}g_{V}{\rm
tr}\left[B_{6}^{(\bar{Q})T}V^{T}B^{(\bar{Q})}_{6}\right].$ (95)
* •
$B_{6}^{*}$
$\displaystyle g_{1}$
$\displaystyle=\frac{\beta_{S}g_{V}}{\sqrt{2}}\bar{u}_{\mu}^{*}(k_{1})u^{*\mu}(p_{1}){\rm
tr}\left[B_{6}^{*(Q)T}VB_{6}^{*(Q)}\right]$
$\displaystyle\approx-\sqrt{2}m_{B_{6}^{*}}\beta_{S}g_{V}{\rm
tr}\left[B_{6}^{*(Q)T}VB_{6}^{*(Q)}\right],$ (96) $\displaystyle g_{2}$
$\displaystyle=-\frac{\beta_{S}g_{V}}{\sqrt{2}}\bar{u}_{\mu}^{*}(k_{2})u^{*\mu}(p_{2}){\rm
tr}\left[B_{6}^{*(\bar{Q})T}V^{T}B_{6}^{*(\bar{Q})}\right]$
$\displaystyle\approx\sqrt{2}m_{B_{6}^{*}}\beta_{S}g_{V}{\rm
tr}\left[B_{6}^{*(\bar{Q})T}V^{T}B_{6}^{*(\bar{Q})}\right].$ (97)
In the above derivations we have used $\epsilon\cdot\epsilon^{*}=-1$,
$\epsilon^{\mu\nu}\cdot\epsilon_{\mu\nu}^{*}=1$, $\bar{u}(k_{1})u(p_{1})=2m$
and $\bar{u}_{6\mu}^{*}u_{6}^{*\mu}=-2m$ at threshold. Note that factors
such as $P_{a}^{(Q)}V_{ab}P^{(Q)T}_{b}$ and ${\rm
tr}\left[B_{\bar{3}}^{(Q)T}VB_{\bar{3}}^{(Q)}\right]$ in the above expressions
contain only the SU(3) flavor information; the properties of the
corresponding fields have already been extracted.
## Appendix B List of the potential factor $F$
The details of interactions between all combinations of heavy-antiheavy hadron
pairs are listed in Tables 8 and 9.
Table 8: The group theory factor $F$, defined in Eq. (71), for the interaction of charmed-anticharmed hadron pairs with only the light vector-meson exchanges. Here both charmed hadrons are the $S$-wave ground states. $I$ is the isospin and $S$ is the strangeness. Positive $F$ means attractive. For the systems with $F=0$, the sub-leading exchanges of vector-charmonia also lead to an attractive potential at threshold. System | $(I,S)$ | Thresholds (MeV) | Exchanged particles | $F$
---|---|---|---|---
$D^{(*)}\bar{D}^{(*)}$ | (0,0) | $(3734,3876,4017)$ | $\rho,\omega$ | $\frac{3}{2},\frac{1}{2}$
| (1,0) | | $\rho,\omega$ | $-\frac{1}{2},\frac{1}{2}$
$D_{s}^{(*)}\bar{D}^{(*)}$ | $(\frac{1}{2},1)$ | $(3836,3977,3979,4121)$ | $-$ | $0$
$D^{(*)}_{s}\bar{D}^{(*)}_{s}$ | (0,0) | $(3937,4081,4224)$ | $\phi$ | $1$
$\bar{D}^{(*)}\Lambda_{c}$ | $(\frac{1}{2},0)$ | $(4154,4295)$ | $\omega$ | $-1$
$\bar{D}_{s}^{(*)}\Lambda_{c}$ | $(0,-1)$ | $(4255,4399)$ | $-$ | $0$
$\bar{D}^{(*)}\Xi_{c}$ | $(1,-1)$ | $(4337,4478)$ | $\rho,\omega$ | $-\frac{1}{2},-\frac{1}{2}$
| $(0,-1)$ | | $\rho,\omega$ | $\frac{3}{2},-\frac{1}{2}$
$\bar{D}_{s}^{(*)}\Xi_{c}$ | $(\frac{1}{2},-2)$ | $(4438,4582)$ | $\phi$ | $-1$
$\bar{D}^{(*)}\Sigma_{c}^{(*)}$ | $(\frac{3}{2},0)$ | $(4321,4385,4462,4527)$ | $\rho,\omega$ | $-1,-1$
| $(\frac{1}{2},0)$ | | $\rho,\omega$ | $2,-1$
$\bar{D}_{s}^{(*)}\Sigma_{c}^{(*)}$ | $(1,-1)$ | $(4422,4486,4566,4630)$ | $-$ | $0$
$\bar{D}^{(*)}\Xi_{c}^{{}^{\prime}(*)}$ | $(1,-1)$ | $(4446,4513,4587,4655)$ | $\rho,\omega$ | $-\frac{1}{2},-\frac{1}{2}$
| $(0,-1)$ | | $\rho,\omega$ | $\frac{3}{2},-\frac{1}{2}$
$\bar{D}_{s}^{(*)}\Xi_{c}^{{}^{\prime}(*)}$ | $(\frac{1}{2},-2)$ | $(4547,4614,4691,4758)$ | $\phi$ | $-1$
$\bar{D}^{(*)}\Omega_{c}^{(*)}$ | $(\frac{1}{2},-2)$ | $(4562,4633,4704,4774)$ | $-$ | $0$
$\bar{D}_{s}^{(*)}\Omega_{c}^{(*)}$ | $(0,-3)$ | $(4664,4734,4807,4878)$ | $\phi$ | $-2$
$\Lambda_{c}\bar{\Lambda}_{c}$ | $(0,0)$ | $(4573)$ | $\omega$ | $2$
$\Lambda_{c}\bar{\Xi}_{c}$ | $(\frac{1}{2},1)$ | $(4756)$ | $\omega$ | $1$
$\Xi_{c}\bar{\Xi}_{c}$ | $(1,0)$ | $(4939)$ | $\rho,\omega,\phi$ | $-\frac{1}{2},\frac{1}{2},1$
| $(0,0)$ | | $\rho,\omega,\phi$ | $\frac{3}{2},\frac{1}{2},1$
$\Lambda_{c}\bar{\Sigma}_{c}^{(*)}$ | $(1,0)$ | $(4740,4805)$ | $\omega$ | $2$
$\Lambda_{c}\bar{\Xi}_{c}^{{}^{\prime}(*)}$ | $(\frac{1}{2},1)$ | $(4865,4932)$ | $\omega$ | $1$
$\Lambda_{c}\bar{\Omega}_{c}^{(*)}$ | $(0,2)$ | $(4982,5052)$ | $-$ | $0$
$\Xi_{c}\bar{\Sigma}_{c}^{(*)}$ | $(\frac{3}{2},-1)$ | $(4923,4988)$ | $\rho,\omega$ | $-1,1$
| $(\frac{1}{2},-1)$ | | $\rho,\omega$ | $2,1$
$\Xi_{c}\bar{\Xi}_{c}^{{}^{\prime}(*)}$ | $(1,0)$ | $(5048,5115)$ | $\rho,\omega,\phi$ | $-\frac{1}{2},\frac{1}{2},1$
| $(0,0)$ | | $\rho,\omega,\phi$ | $\frac{3}{2},\frac{1}{2},1$
$\Xi_{c}\bar{\Omega}_{c}^{(*)}$ | $(\frac{1}{2},1)$ | $(5165,5235)$ | $\phi$ | $2$
$\Sigma_{c}^{(*)}\bar{\Sigma}_{c}^{(*)}$ | $(2,0)$ | $(4907,4972,5036)$ | $\rho,\omega$ | $-2,2$
| $(1,0)$ | | $\rho,\omega$ | $2,2$
| $(0,0)$ | | $\rho,\omega$ | $4,2$
$\Sigma_{c}^{(*)}\bar{\Xi}^{{}^{\prime}(*)}_{c}$ | $(\frac{3}{2},1)$ | $(5032,5097,5100,5164)$ | $\rho,\omega$ | $-1,1$
| $(\frac{1}{2},1)$ | | $\rho,\omega$ | $2,1$
$\Sigma_{c}^{(*)}\bar{\Omega}^{(*)}_{c}$ | $(0,2)$ | $(5149,5213,5219,5284)$ | $-$ | $0$
$\Xi_{c}^{{}^{\prime}(*)}\bar{\Xi}_{c}^{{}^{\prime}(*)}$ | $(1,0)$ | $(5158,5225,5292)$ | $\rho,\omega,\phi$ | $-\frac{1}{2},\frac{1}{2},1$
| $(0,0)$ | | $\rho,\omega,\phi$ | $\frac{3}{2},\frac{1}{2},1$
$\Xi^{{}^{\prime}(*)}_{c}\bar{\Omega}_{c}^{(*)}$ | $(\frac{1}{2},1)$ | $(5272,5341,5345,5412)$ | $\phi$ | $2$
$\Omega_{c}^{(*)}\bar{\Omega}_{c}^{(*)}$ | $(0,0)$ | $(5390,5461,5532)$ | $\phi$ | $4$
Table 9: The group theory factor $F$, defined in Eq. (71), for the interaction of charmed-anticharmed hadron pairs with only the light vector-meson exchanges. Here one of the charmed hadrons is an $s_{\ell}=3/2$ charmed meson. See the caption of Table 8. System | $(I,S)$ | Thresholds (MeV) | Exchanged particles | $F$
---|---|---|---|---
$D^{(*)}\bar{D}_{1,2}$ | (0,0) | $(4289,4330,4431,4472)$ | $\rho,\omega$ | $\frac{3}{2},\frac{1}{2}$
| (1,0) | | $\rho,\omega$ | $-\frac{1}{2},\frac{1}{2}$
$D^{(*)}\bar{D}_{s1,s2}$ | $(\frac{1}{2},-1)$ | $(4390,4431,4534,4575)$ | $-$ | $0$
$D_{s}^{(*)}\bar{D}_{1,2}$ | $(\frac{1}{2},1)$ | $(4402,4436,4544,4578)$ | $-$ | $0$
$D^{(*)}_{s}\bar{D}_{s1,s2}$ | (0,0) | $(4503,4537,4647,4681)$ | $\phi$ | $1$
$D_{1,2}\bar{D}_{1,2}$ | (0,0) | $(4844,4885,4926)$ | $\rho,\omega$ | $\frac{3}{2},\frac{1}{2}$
| (1,0) | | $\rho,\omega$ | $-\frac{1}{2},\frac{1}{2}$
$D_{s1,s2}\bar{D}_{1,2}$ | $(\frac{1}{2},1)$ | $(4957,4991,4998,5032)$ | $-$ | $0$
$D_{s1,s2}\bar{D}_{s1,s2}$ | (0,0) | $(5070,5104,5138)$ | $\phi$ | $1$
$\Lambda_{c}\bar{D}_{1,2}$ | $(\frac{1}{2},0)$ | $(4708,4750)$ | $\omega$ | $-1$
$\Lambda_{c}\bar{D}_{s1,s2}$ | $(0,-1)$ | $(4822,4856)$ | $-$ | $0$
$\Xi_{c}\bar{D}_{1,2}$ | $(1,-1)$ | $(4891,4932)$ | $\rho,\omega$ | $-\frac{1}{2},-\frac{1}{2}$
| $(0,-1)$ | | $\rho,\omega$ | $\frac{3}{2},-\frac{1}{2}$
$\Xi_{c}\bar{D}_{s1,s2}$ | $(\frac{1}{2},-2)$ | $(5005,5039)$ | $\phi$ | $-1$
$\Sigma_{c}^{(*)}\bar{D}_{1,2}$ | $(\frac{3}{2},0)$ | $(4876,4917,4940,4981)$ | $\rho,\omega$ | $-1,-1$
| $(\frac{1}{2},0)$ | | $\rho,\omega$ | $2,-1$
$\Sigma_{c}^{(*)}\bar{D}_{s1,s2}$ | $(1,-1)$ | $(4989,5023,5053,5087)$ | $-$ | $0$
$\Xi_{c}^{{}^{\prime}(*)}\bar{D}_{1,2}$ | $(1,-1)$ | $(5001,5042,5068,5109)$ | $\rho,\omega$ | $-\frac{1}{2},-\frac{1}{2}$
| $(0,-1)$ | | $\rho,\omega$ | $\frac{3}{2},-\frac{1}{2}$
$\Xi_{c}^{{}^{\prime}(*)}\bar{D}_{s1,s2}$ | $(\frac{1}{2},-2)$ | $(5114,5148,5181,5215)$ | $\phi$ | $-1$
$\Omega_{c}^{(*)}\bar{D}_{1,2}$ | $(\frac{1}{2},-2)$ | $(5117,5158,5188,5229)$ | $-$ | $0$
$\Omega_{c}^{(*)}\bar{D}_{s1,s2}$ | $(0,-3)$ | $(5230,5264,5301,5335)$ | $\phi$ | $-2$
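The isospin dependence of the $\rho$-exchange entries in Tables 8 and 9 can be read off in closed form: they follow $F_{\rho}=-2\,\bm{I}_{1}\cdot\bm{I}_{2}$ with $\bm{I}_{1}\cdot\bm{I}_{2}=\frac{1}{2}\left[I(I+1)-I_{1}(I_{1}+1)-I_{2}(I_{2}+1)\right]$. A short Python check of this pattern against a few table rows (an observation about the tabulated values, not a statement about the normalization conventions used in the text):

```python
def isospin_dot(I, I1, I2):
    """I1.I2 = [I(I+1) - I1(I1+1) - I2(I2+1)] / 2 for total isospin I."""
    return 0.5 * (I * (I + 1) - I1 * (I1 + 1) - I2 * (I2 + 1))

def f_rho(I, I1, I2):
    """rho-exchange factor matching the pattern of Tables 8 and 9."""
    return -2.0 * isospin_dot(I, I1, I2)

# Spot checks against Table 8 (all values are dyadic rationals, so == is exact):
assert f_rho(0.0, 0.5, 0.5) == 1.5   # D Dbar, I = 0
assert f_rho(1.0, 0.5, 0.5) == -0.5  # D Dbar, I = 1
assert f_rho(0.5, 0.5, 1.0) == 2.0   # Dbar Sigma_c, I = 1/2
assert f_rho(1.5, 0.5, 1.0) == -1.0  # Dbar Sigma_c, I = 3/2
assert f_rho(0.0, 1.0, 1.0) == 4.0   # Sigma_c Sigmabar_c, I = 0
assert f_rho(2.0, 1.0, 1.0) == -2.0  # Sigma_c Sigmabar_c, I = 2
```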
## Appendix C Amplitude calculation for cross processes
In the following we show the derivation of Eq. (73).
* •
$D\bar{D}_{1}$ and $D_{s}\bar{D}_{s1}$
$\displaystyle V$
$\displaystyle=i\left(i\frac{-2\zeta_{1}g_{V}}{\sqrt{3}}\right)\left(i\frac{2\zeta_{1}g_{V}}{\sqrt{3}}\right)m_{D}m_{D_{1}}$
$\displaystyle\times\epsilon_{1}^{*\mu}\frac{-i(g_{\mu\nu}-q_{\mu}q_{\nu}/m_{\rm
ex}^{2})}{q^{2}-m_{\rm ex}^{2}+i\epsilon}\epsilon_{2}^{\nu}F$
$\displaystyle\approx FF_{c}\zeta_{1}^{2}g_{V}^{2}\frac{m_{D}m_{D_{1}}}{m_{\rm
ex}^{2}-\Delta{m}^{2}}$ (98)
with $F_{c}=4/3$.
* •
$D^{*}\bar{D}_{1}$ and $D_{s}^{*}\bar{D}_{s1}$
$\displaystyle V$
$\displaystyle=i\left(i\frac{i\zeta_{1}g_{V}}{\sqrt{3}}\right)\left(i\frac{-i\zeta_{1}g_{V}}{\sqrt{3}}m_{D^{*}}m_{D_{1}}\right)\epsilon_{\alpha\beta\gamma\delta}\epsilon_{\alpha_{1}\beta_{1}\gamma_{1}\delta_{1}}$
$\displaystyle\times\epsilon_{1}^{\beta}\epsilon_{2}^{*\alpha}v^{\gamma}\frac{-i(g^{\delta\delta_{1}}-q^{\delta}q^{\delta_{1}}/m_{\rm
ex}^{2})}{q^{2}-m_{\rm
ex}^{2}+i\epsilon}\epsilon_{3}^{\alpha_{1}}\epsilon_{4}^{*\beta_{1}}v^{\gamma_{1}}F$
$\displaystyle\approx\frac{1}{3}\zeta_{1}^{2}g_{V}^{2}\frac{m_{D^{*}}m_{D_{1}}}{m_{\rm
ex}^{2}-\Delta{m}^{2}}(\bm{\epsilon}_{1}\times\bm{\epsilon}_{2}^{*})\cdot(\bm{\epsilon}_{3}\times\bm{\epsilon}_{4}^{*})F$
$\displaystyle=-\frac{1}{3}F\zeta_{1}^{2}g_{V}^{2}\frac{m_{D^{*}}m_{D_{1}}}{m_{\rm
ex}^{2}-\Delta{m}^{2}}\bm{S}_{1}\cdot\bm{S}_{2}$
$\displaystyle=FF_{c}\zeta_{1}^{2}g_{V}^{2}\frac{m_{D^{*}}m_{D_{1}}}{m_{\rm
ex}^{2}-\Delta{m}^{2}},$ (99)
where $\bm{S}_{i}$ is the spin-1 operator. Explicitly,
$\bm{S}_{1}\cdot\bm{S}_{2}=-2,-1$ and $1$ for total spin $J=0,1$ and $2$,
respectively, and in turn we obtain $F_{c}=2/3,1/3$ and $-1/3$.
* •
$D^{*}\bar{D}_{2}$ and $D_{s}^{*}\bar{D}_{s2}$
$\displaystyle V$
$\displaystyle=i\left(i\sqrt{2}\zeta_{1}g_{V}\right)\left(i\sqrt{2}\zeta_{1}g_{V}\right)m_{D^{*}}m_{D_{2}}$
$\displaystyle\times\epsilon_{1\mu}\epsilon_{2}^{*\mu\nu}\frac{-i(g_{\nu\alpha}-q_{\nu}q_{\alpha}/m_{\rm
ex}^{2})}{q^{2}-m_{\rm
ex}^{2}+i\epsilon}\epsilon_{3}^{\alpha\beta}\epsilon_{4\beta}F$
$\displaystyle\approx{2\zeta_{1}^{2}g_{V}^{2}}\frac{m_{D^{*}}m_{D_{2}}}{m_{\rm
ex}^{2}-\Delta{m}^{2}}\epsilon_{1\mu}\epsilon_{2}^{*\mu\nu}\epsilon_{3\nu}^{\beta}\epsilon_{4\beta}F$
$\displaystyle=FF_{c}\zeta_{1}^{2}g_{V}^{2}\frac{m_{D^{*}}m_{D_{2}}}{m_{\rm
ex}^{2}-\Delta{m}^{2}},$ (100)
where $F_{c}=-1/3,1$ and $-2$ for total spin $J=1,2$ and $3$, respectively.
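The spin-1–spin-1 values used in Eq. (99) can be cross-checked numerically (a consistency check, not part of the derivation itself): $\bm{S}_{1}\cdot\bm{S}_{2}=\frac{1}{2}\left[J(J+1)-S_{1}(S_{1}+1)-S_{2}(S_{2}+1)\right]$, and $F_{c}=-\frac{1}{3}\bm{S}_{1}\cdot\bm{S}_{2}$:

```python
def spin_dot(J, S1=1, S2=1):
    """S1.S2 = [J(J+1) - S1(S1+1) - S2(S2+1)] / 2 for total spin J."""
    return 0.5 * (J * (J + 1) - S1 * (S1 + 1) - S2 * (S2 + 1))

# Values quoted after Eq. (99) for two spin-1 particles:
assert [spin_dot(J) for J in (0, 1, 2)] == [-2, -1, 1]
# The resulting F_c = -(1/3) S1.S2:
assert [-spin_dot(J) / 3 for J in (0, 1, 2)] == [2/3, 1/3, -1/3]
```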
## References
* Gell-Mann (1964) M. Gell-Mann, Phys. Lett. 8, 214 (1964).
* Zweig (1964) G. Zweig, in _DEVELOPMENTS IN THE QUARK THEORY OF HADRONS. VOL. 1. 1964 - 1978_ , edited by D. Lichtenberg and S. P. Rosen (1964) pp. 22–101.
* Chen _et al._ (2016a) H.-X. Chen, W. Chen, X. Liu, and S.-L. Zhu, Phys. Rept. 639, 1 (2016a), arXiv:1601.02092 [hep-ph] .
* Hosaka _et al._ (2016) A. Hosaka, T. Iijima, K. Miyabayashi, Y. Sakai, and S. Yasui, PTEP 2016, 062C01 (2016), arXiv:1603.09229 [hep-ph] .
* Richard (2016) J.-M. Richard, Few Body Syst. 57, 1185 (2016), arXiv:1606.08593 [hep-ph] .
* Lebed _et al._ (2017) R. F. Lebed, R. E. Mitchell, and E. S. Swanson, Prog. Part. Nucl. Phys. 93, 143 (2017), arXiv:1610.04528 [hep-ph] .
* Esposito _et al._ (2017) A. Esposito, A. Pilloni, and A. Polosa, Phys. Rept. 668, 1 (2017), arXiv:1611.07920 [hep-ph] .
* Guo _et al._ (2018) F.-K. Guo, C. Hanhart, U.-G. Meißner, Q. Wang, Q. Zhao, and B.-S. Zou, Rev. Mod. Phys. 90, 015004 (2018), arXiv:1705.00141 [hep-ph] .
* Ali _et al._ (2017) A. Ali, J. S. Lange, and S. Stone, Prog. Part. Nucl. Phys. 97, 123 (2017), arXiv:1706.00610 [hep-ph] .
* Olsen _et al._ (2018) S. L. Olsen, T. Skwarnicki, and D. Zieminska, Rev. Mod. Phys. 90, 015003 (2018), arXiv:1708.04012 [hep-ph] .
* Altmannshofer _et al._ (2019) W. Altmannshofer _et al._ (Belle-II), PTEP 2019, 123C01 (2019), [Erratum: PTEP 2020, 029201 (2020)], arXiv:1808.10567 [hep-ex] .
* Kalashnikova and Nefediev (2019) Y. S. Kalashnikova and A. Nefediev, Phys. Usp. 62, 568 (2019), arXiv:1811.01324 [hep-ph] .
* Cerri _et al._ (2019) A. Cerri _et al._ , CERN Yellow Rep. Monogr. 7, 867 (2019), arXiv:1812.07638 [hep-ph] .
* Liu _et al._ (2019a) Y.-R. Liu, H.-X. Chen, W. Chen, X. Liu, and S.-L. Zhu, Prog. Part. Nucl. Phys. 107, 237 (2019a), arXiv:1903.11976 [hep-ph] .
* Brambilla _et al._ (2020) N. Brambilla, S. Eidelman, C. Hanhart, A. Nefediev, C.-P. Shen, C. E. Thomas, A. Vairo, and C.-Z. Yuan, Phys. Rept. 873, 1 (2020), arXiv:1907.07583 [hep-ex] .
* Guo _et al._ (2020) F.-K. Guo, X.-H. Liu, and S. Sakai, Prog. Part. Nucl. Phys. 112, 103757 (2020), arXiv:1912.07030 [hep-ph] .
* Yang _et al._ (2020a) G. Yang, J. Ping, and J. Segovia, Symmetry 12, 1869 (2020a), arXiv:2009.00238 [hep-ph] .
* Ortega and Entem (2020) P. G. Ortega and D. R. Entem, (2020), arXiv:2012.10105 [hep-ph] .
* Choi _et al._ (2003) S. Choi _et al._ (Belle), Phys. Rev. Lett. 91, 262001 (2003), arXiv:hep-ex/0309032 .
* Ablikim _et al._ (2013a) M. Ablikim _et al._ (BESIII), Phys. Rev. Lett. 110, 252001 (2013a), arXiv:1303.5949 [hep-ex] .
* Liu _et al._ (2013) Z. Liu _et al._ (Belle), Phys. Rev. Lett. 110, 252002 (2013), [Erratum: Phys. Rev. Lett. 111, 019901 (2013)], arXiv:1304.0121 [hep-ex] .
* Ablikim _et al._ (2014a) M. Ablikim _et al._ (BESIII), Phys. Rev. Lett. 112, 022001 (2014a), arXiv:1310.1163 [hep-ex] .
* Ablikim _et al._ (2014b) M. Ablikim _et al._ (BESIII), Phys. Rev. Lett. 112, 132001 (2014b), arXiv:1308.2760 [hep-ex] .
* Ablikim _et al._ (2013b) M. Ablikim _et al._ (BESIII), Phys. Rev. Lett. 111, 242001 (2013b), arXiv:1309.1896 [hep-ex] .
* Bondar _et al._ (2012) A. Bondar _et al._ (Belle), Phys. Rev. Lett. 108, 122001 (2012), arXiv:1110.2251 [hep-ex] .
* Garmash _et al._ (2016) A. Garmash _et al._ (Belle), Phys. Rev. Lett. 116, 212001 (2016), arXiv:1512.07419 [hep-ex] .
* Ablikim _et al._ (2020a) M. Ablikim _et al._ (BESIII), (2020a), arXiv:2011.07855 [hep-ex] .
* Aaij _et al._ (2019) R. Aaij _et al._ (LHCb), Phys. Rev. Lett. 122, 222001 (2019), arXiv:1904.03947 [hep-ex] .
* Ecker _et al._ (1989) G. Ecker, J. Gasser, A. Pich, and E. de Rafael, Nucl. Phys. B 321, 311 (1989).
* Epelbaum _et al._ (2002) E. Epelbaum, U.-G. Meißner, W. Glöckle, and C. Elster, Phys. Rev. C 65, 044001 (2002), arXiv:nucl-th/0106007 .
* Peng _et al._ (2020a) F.-Z. Peng, M.-Z. Liu, M. Sánchez Sánchez, and M. Pavon Valderrama, Phys. Rev. D 102, 114020 (2020a), arXiv:2004.05658 [hep-ph] .
* Liu _et al._ (2008) X. Liu, Y.-R. Liu, W.-Z. Deng, and S.-L. Zhu, Phys. Rev. D 77, 034003 (2008), arXiv:0711.0494 [hep-ph] .
* Ding (2009) G.-J. Ding, Eur. Phys. J. C 64, 297 (2009), arXiv:0904.1782 [hep-ph] .
* Wu _et al._ (2010) J.-J. Wu, R. Molina, E. Oset, and B. S. Zou, Phys. Rev. Lett. 105, 232001 (2010), arXiv:1007.0573 [nucl-th] .
* Wu _et al._ (2011) J.-J. Wu, R. Molina, E. Oset, and B. S. Zou, Phys. Rev. C 84, 015202 (2011), arXiv:1011.2399 [nucl-th] .
* Yang _et al._ (2012) Z.-C. Yang, Z.-F. Sun, J. He, X. Liu, and S.-L. Zhu, Chin. Phys. C 36, 6 (2012), arXiv:1105.2901 [hep-ph] .
* Pavon Valderrama (2020) M. Pavon Valderrama, Eur. Phys. J. A 56, 109 (2020), arXiv:1906.06491 [hep-ph] .
* Dong _et al._ (2020a) X.-K. Dong, F.-K. Guo, and B.-S. Zou, (2020a), arXiv:2011.14517 [hep-ph] .
* Isgur and Wise (1990) N. Isgur and M. B. Wise, Phys. Lett. B 237, 527 (1990).
* Isgur and Wise (1989) N. Isgur and M. B. Wise, Phys. Lett. B 232, 113 (1989).
* Wise (1992) M. B. Wise, Phys. Rev. D 45, 2188 (1992).
* Casalbuoni _et al._ (1992) R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio, and G. Nardulli, Phys. Lett. B 292, 371 (1992), arXiv:hep-ph/9209248 .
* Casalbuoni _et al._ (1997) R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio, and G. Nardulli, Phys. Rept. 281, 145 (1997), arXiv:hep-ph/9605342 [hep-ph] .
* Grinstein _et al._ (1992) B. Grinstein, E. E. Jenkins, A. V. Manohar, M. J. Savage, and M. B. Wise, Nucl. Phys. B 380, 369 (1992), arXiv:hep-ph/9204207 .
* Falk (1992) A. F. Falk, Nucl. Phys. B 378, 79 (1992).
* Falk and Luke (1992) A. F. Falk and M. E. Luke, Phys. Lett. B 292, 119 (1992), arXiv:hep-ph/9206241 .
* Zyla _et al._ (2020) P. Zyla _et al._ (Particle Data Group), PTEP 2020, 083C01 (2020).
* Filin _et al._ (2010) A. Filin, A. Romanov, V. Baru, C. Hanhart, Y. Kalashnikova, A. Kudryavtsev, U.-G. Meißner, and A. Nefediev, Phys. Rev. Lett. 105, 019101 (2010), arXiv:1004.4789 [hep-ph] .
* Guo and Meißner (2011) F.-K. Guo and U.-G. Meißner, Phys. Rev. D 84, 014013 (2011), arXiv:1102.3536 [hep-ph] .
* Du _et al._ (2019) M.-L. Du, F.-K. Guo, and U.-G. Meißner, Phys. Rev. D 99, 114002 (2019), arXiv:1903.08516 [hep-ph] .
* Du _et al._ (2020a) M.-L. Du, F.-K. Guo, C. Hanhart, B. Kubis, and U.-G. Meißner, (2020a), arXiv:2012.04599 [hep-ph] .
* Aaij _et al._ (2016) R. Aaij _et al._ (LHCb), Phys. Rev. D 94, 072001 (2016), arXiv:1608.01289 [hep-ex] .
* Barnes _et al._ (2003) T. Barnes, F. Close, and H. Lipkin, Phys. Rev. D 68, 054006 (2003), arXiv:hep-ph/0305025 .
* van Beveren and Rupp (2003) E. van Beveren and G. Rupp, Phys. Rev. Lett. 91, 012003 (2003), arXiv:hep-ph/0305035 .
* Kolomeitsev and Lutz (2004) E. Kolomeitsev and M. Lutz, Phys. Lett. B 582, 39 (2004), arXiv:hep-ph/0307133 .
* Chen and Li (2004) Y.-Q. Chen and X.-Q. Li, Phys. Rev. Lett. 93, 232001 (2004), arXiv:hep-ph/0407062 .
* Guo _et al._ (2006) F.-K. Guo, P.-N. Shen, H.-C. Chiang, R.-G. Ping, and B.-S. Zou, Phys. Lett. B 641, 278 (2006), arXiv:hep-ph/0603072 .
* Guo _et al._ (2007) F.-K. Guo, P.-N. Shen, and H.-C. Chiang, Phys. Lett. B 647, 133 (2007), arXiv:hep-ph/0610008 .
* Guo (2019) F.-K. Guo, EPJ Web Conf. 202, 02001 (2019).
* Ma _et al._ (2019) L. Ma, Q. Wang, and U.-G. Meißner, Chin. Phys. C 43, 014102 (2019), arXiv:1711.06143 [hep-ph] .
* Martínez Torres _et al._ (2019) A. Martínez Torres, K. Khemchandani, and L.-S. Geng, Phys. Rev. D 99, 076017 (2019), arXiv:1809.01059 [hep-ph] .
* Wu _et al._ (2019) T.-W. Wu, M.-Z. Liu, L.-S. Geng, E. Hiyama, and M. Pavon Valderrama, Phys. Rev. D 100, 034029 (2019), arXiv:1906.11995 [hep-ph] .
* Wu _et al._ (2020) T.-W. Wu, M.-Z. Liu, and L.-S. Geng, (2020), arXiv:2012.01134 [hep-ph] .
* Bando _et al._ (1985) M. Bando, T. Kugo, S. Uehara, K. Yamawaki, and T. Yanagida, Phys. Rev. Lett. 54, 1215 (1985).
* Bando _et al._ (1988) M. Bando, T. Kugo, and K. Yamawaki, Phys. Rept. 164, 217 (1988).
* Meißner (1988) U.-G. Meißner, Phys. Rept. 161, 213 (1988).
* Casalbuoni _et al._ (1993) R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio, and G. Nardulli, Phys. Lett. B 299, 139 (1993), arXiv:hep-ph/9211248 .
* Isola _et al._ (2003) C. Isola, M. Ladisa, G. Nardulli, and P. Santorelli, Phys. Rev. D 68, 114001 (2003), arXiv:hep-ph/0307367 .
* Dong _et al._ (2020b) X.-K. Dong, Y.-H. Lin, and B.-S. Zou, Phys. Rev. D 101, 076003 (2020b), arXiv:1910.14455 [hep-ph] .
* Yan _et al._ (1992) T.-M. Yan, H.-Y. Cheng, C.-Y. Cheung, G.-L. Lin, Y. Lin, and H.-L. Yu, Phys. Rev. D 46, 1148 (1992), [Erratum: Phys. Rev. D 55, 5851 (1997)].
* Rarita and Schwinger (1941) W. Rarita and J. Schwinger, Phys. Rev. 60, 61 (1941).
# A Two Sub-problem Decomposition for the
Optimal Design of Filterless Optical Networks
Brigitte Jaumard and Yan Wang
B. Jaumard and Y. Wang are with the Department of Computer Science and Software Engineering, Concordia University, Montreal, QC, H3G 1M8, Canada (e-mail: [email protected]). Manuscript received December 31, 2020.
###### Abstract
Filterless optical transport networks rely on passive optical interconnections between nodes, i.e., on splitters/couplers and amplifiers. While different studies have investigated their design, none of them offers an optimal design method. We propose a one-step solution scheme which combines network provisioning, i.e., routing and wavelength assignment, within a single mathematical model. A decomposition into two different types of sub-problems is then used in order to devise an exact solution scheme: the first type of sub-problem generates filterless subnetworks, while the second one takes care of their wavelength assignment.
Numerical experiments demonstrate the improved performance of the proposed optimization model and algorithm over the state of the art, improving the best known solutions for several open-source data sets.
###### Index Terms:
Filterless Networks, All-optical Networks, Network Provisioning, Routing and
Wavelength Assignment.
## I Introduction
The idea of filterless optical networks goes back to the seminal articles of
[1, 2]. These networks rely on broadcast-and-select nodes equipped with
coherent transceivers, as opposed to active switching networks, which today
use Reconfigurable Optical Add-Drop Multiplexers (ROADMs). Filterless optical
networks exhibit several advantages and are currently considered for, e.g.,
metro regional networks [3] or submarine networks [4]. They allow a reduction in energy consumption and therefore lead to a reduction in both cost and carbon footprint. In addition, they improve network robustness while simplifying several aspects of impairment-aware design [5], although at the price of a reduced spectral efficiency due to their inherent channel broadcasting, which may be mitigated with the use of blockers, see, e.g., Dochhan et al. [6].
In order to solve the filterless network design problem, Tremblay et al. [7] proposed a two-step solution process: (1) a Genetic Algorithm to generate fiber trees, (2) a Tabu Search algorithm for solving the routing and wavelength assignment problem on the trees selected among those generated in step (1). Both steps are solved with a heuristic, while efficient exact
algorithms exist today, at least for the routing and wavelength assignment
problem, see, e.g., [8, 9].
Ayoub et al. [10] also devised a two-step algorithm: (1) A first heuristic
algorithm to generate edge-disjoint fiber trees, (2) An ILP (Integer Linear
Program) model to solve the routing and spectrum assignment problem.
Unfortunately, they did not compare their results with those of [7].
In this paper, we improve on the solution process we previously proposed in [11], in which we had an exact algorithm to generate fiber trees but still relied on a two-step process whose second step, as in previous studies, solves the routing and wavelength assignment problem. We are now able to propose a first exact one-step solution process, based on a large-scale optimization model and algorithm, which are described in Sections III and IV, respectively. Numerical results then show improved filterless network designs over all previous studies.
The paper is organized as follows. The problem statement of the design of
filterless optical networks is recalled in Section II. In Section III, the
mathematical model of our one-step design is proposed. In Section IV, we
discuss our solution scheme: generation of an initial solution, column
generation technique to solve the linear relaxation and derivation of an
integer solution. Computational results are summarized in Section V, including
a comparison with the results of [7]. Conclusions are drawn in the last
section.
## II Design of Filterless Optical Networks
Consider an optical network represented by its physical network $G=(V,L)$,
where $V$ is the set of nodes (indexed by $v$), and $L$ is the set of fiber
links (indexed by $\ell$). We denote by $\textsc{in}(v)$ and $\textsc{out}(v)$
the set of incoming and outgoing links of $v$, respectively.
In the sequel, we will use undirected trees and adopt the following terminology: we use the term "edge" for an undirected link, and the term "link" for a directed link. We denote by $\omega(v)$ the co-cycle of $v$, i.e., the set of edges adjacent to $v$ in an undirected graph.
The traffic is described by a set $K$ of unit requests, where each request $k$ is described by its source and destination nodes, $s_{k}$ and $d_{k}$, respectively.
A filterless network solution consists of a set of filterless sub-network (f-subnet or FSN for short) solutions, where each f-subnet is a directed graph supported by an undirected tree, and f-subnets are pairwise fiber-link disjoint. We provide an illustration in Figure 1. Figure 1(a) represents the node connectivity of the original optical network. We assume that each pair of connected nodes has two directed fibers, one in each direction. Figure 1(b) depicts a filterless optical network with three f-subnets, each supported by an undirected tree: the red one (FSN1), supported by tree
$T_{1}=\\{\\{v_{1},v_{3}\\},\\{v_{2},v_{3}\\},\\{v_{3},v_{4}\\},\\{v_{3},v_{5}\\}\\}$,
the blue one (FSN2), supported by tree
$T_{2}=\\{\\{v_{3},v_{7}\\},\\{v_{2},v_{7}\\}\\}$, and the green one (FSN3),
supported by tree
$T_{3}=\\{\\{v_{3},v_{7}\\},\\{v_{2},v_{7}\\},\\{v_{4},v_{7}\\},\\{v_{4},v_{5}\\},\\{v_{4},v_{6}\\}\\}$.
Fiber links drawn with two arrows indicate that the filterless optical network uses two directional fibers, one in each direction, while links with a single arrow each use a single directional fiber. The required passive splitters and combiners are added to interconnect the fiber links supporting the request provisioning; e.g., node $v_{4}$ requires three splitters/combiners with two ports in order to accommodate the provisioning of the requests using the green filterless network. Note that all three sub-networks are pairwise fiber-link disjoint. Observe that two node connections are unused: edges $\\{v_{1},v_{2}\\}$ and $\\{v_{1},v_{5}\\}$ of the physical network, which may be an indication of a poor network design.
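The link-disjointness requirement can be checked mechanically. In the sketch below, the directed-fiber assignment on the edges shared by FSN2 and FSN3 is hypothetical (the actual orientation is only visible in Figure 1(b)); the point is that two f-subnets may share undirected edges as long as they use opposite directional fibers:

```python
def pairwise_link_disjoint(fsns):
    """True iff no directed fiber link belongs to two f-subnets."""
    seen = set()
    for links in fsns:
        if seen & links:
            return False
        seen |= links
    return True

# Hypothetical orientation: FSN2 (blue) takes one fiber on the shared
# edges {v3,v7} and {v2,v7}, FSN3 (green) takes the opposite one.
fsn2 = {("v3", "v7"), ("v7", "v2")}
fsn3 = {("v7", "v3"), ("v2", "v7"),
        ("v4", "v7"), ("v7", "v4"),
        ("v4", "v5"), ("v5", "v4"),
        ("v4", "v6"), ("v6", "v4")}
```

Here `pairwise_link_disjoint([fsn2, fsn3])` holds, whereas adding the directed link `("v3", "v7")` to `fsn3` would break disjointness.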
Each request is provisioned on one of the sub-networks. Observe that if we use more than one f-subnet, the routing is not necessarily unique. For instance, in the example of Figure 1(b), the request from $v_{2}$ to $v_{3}$ can be provisioned either on the FSN1 (red) filterless sub-network or on FSN3 (the green sub-network).
We are interested in establishing a set of fiber link disjoint f-subnets such
that the provisioning of all the requests minimizes the maximum fiber link
load.
(a) Undirected graph supporting the physical network
(b) Three different FSNs
Figure 1: Construction of a filterless network solution
The objective is to find f-subnets and a provisioning such that we minimize the overall number of wavelengths. Each request must be provisioned on a lightpath, i.e., a route and a wavelength satisfying the continuity constraint (the same wavelength from source to destination), while accounting for the broadcast effect (the same wavelength on all outgoing links of the source node and their descendants).
We next illustrate the impacts of the broadcast effect on the wavelength
assignment. Consider the following example with 7 requests on FSN1 in Figure
1(b): $k_{13}$ from $v_{1}$ to $v_{3}$, and with the same notations, $k_{53}$,
$k_{35}$, $k_{21}$, $k_{12}$, $k_{42}$, and $k_{34}$. Figure 2 depicts a
provisioning of them, which requires 4 wavelengths. Plain lines correspond to
the provisioning between the source and the destination nodes, while dotted
lines indicate the broadcast effect beyond the destination nodes.
Figure 2: Wavelength request provisioning
Observe that the wavelength assignment takes into account that requests that overlap only in their broadcast effect beyond their destination nodes can share the same wavelength. For instance, request $k_{13}$ can share the same wavelength as request $k_{53}$.
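This sharing rule can be stated operationally: two requests conflict when their filtered (source-to-destination) segments overlap on a directed link, or when the filtered segment of one overlaps the unfiltered broadcast of the other; broadcast-broadcast overlap alone is harmless. A sketch, where the filtered and unfiltered link sets of $k_{13}$ and $k_{53}$ are derived from tree $T_{1}$ assuming both fiber directions are lit on each tree edge:

```python
def in_conflict(filtered, unfiltered, k1, k2):
    """Conflict = filtered/filtered or filtered/unfiltered overlap on
    some directed link; unfiltered/unfiltered overlap is allowed."""
    f1, f2 = filtered[k1], filtered[k2]
    return bool(f1 & f2 or f1 & unfiltered[k2] or f2 & unfiltered[k1])

def greedy_colour(requests, filtered, unfiltered):
    """Assign the smallest wavelength index free of conflicts to each request."""
    colour = {}
    for k in requests:
        used = {colour[j] for j in colour
                if in_conflict(filtered, unfiltered, k, j)}
        colour[k] = min(c for c in range(len(requests)) if c not in used)
    return colour

# Requests k13 and k53 on FSN1 (tree T1): their filtered segments are
# disjoint, and each broadcast only reuses the other's broadcast links.
filtered = {"k13": {("v1", "v3")}, "k53": {("v5", "v3")}}
unfiltered = {"k13": {("v3", "v2"), ("v3", "v4"), ("v3", "v5")},
              "k53": {("v3", "v1"), ("v3", "v2"), ("v3", "v4")}}
```

With this data, `greedy_colour(["k13", "k53"], filtered, unfiltered)` gives both requests the same colour, consistent with the observation above.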
## III Mathematical Model
We propose a mathematical model, called DFOP, which relies on two different
sets of configurations: the FSN configurations and the wavelength
configurations. Each FSN configuration consists of: (i) a subgraph fsn of $G$
supported by an undirected tree backbone, and (ii) a set of requests
provisioned on fsn. Each wavelength configuration consists of a potential
wavelength assignment for the overall set of requests.
The design of a filterless network amounts then to the selection of a set of
FSN configurations, as many as the desired number of FSNs, and of wavelength
configurations, as many as the number of required wavelengths, which jointly
minimizes the number of required wavelengths.
The proposed decomposition consists of a master problem coordinating a set of
pricing problems which are solved alternately, see Section IV for the details
of the solution scheme. We next describe the master problem and the pricing
problems.
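The alternation between the master and the pricing problems follows the generic column generation pattern; a schematic loop (the function names and the `reduced_cost` attribute are placeholders, not the authors' implementation):

```python
def column_generation(solve_restricted_master_lp, price_out, max_iter=100):
    """Alternate: solve the LP relaxation of the restricted master over
    the current columns, pass its dual values to the pricing problems,
    and add every returned configuration with negative reduced cost;
    stop when no improving column exists or the iteration cap is hit."""
    columns = []
    for _ in range(max_iter):
        duals = solve_restricted_master_lp(columns)
        improving = [c for c in price_out(duals) if c.reduced_cost < -1e-9]
        if not improving:
            break
        columns.extend(improving)
    return columns
```

An integer solution is then derived from the generated columns, e.g., by solving the master ILP restricted to them.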
### III-A Master Problem
Let fsn be an f-subnet configuration. It is formally characterized by:
* •
$a_{\ell}^{\textsc{fsn}}=1$ if link $\ell$ is in f-subnet fsn, 0 otherwise
* •
$b_{k}^{\textsc{fsn}}=1$ if request $k$ is routed on fsn, 0 otherwise
* •
$\theta^{\textsc{fsn}}_{kk^{\prime}}=1$ if request $k$ and request
$k^{\prime}$ are in conflict on fsn, 0 otherwise.
Let $\lambda$ be a wavelength configuration. It is characterized by:
* •
$\beta^{\lambda}_{k}=1$ if request $k$ uses wavelength (color) configuration
$\lambda$, 0 otherwise.
There are two sets of decision variables, one for each set of configurations:
* •
$z_{\textsc{fsn}}=1$ if f-subnet FSN, together with its provisioned requests,
is selected as a filterless sub-network, 0 otherwise.
* •
$x_{\lambda}=1$ if wavelength (color) configuration $\lambda$ is used, 0
otherwise. One wavelength configuration is defined by all the requests with
the same wavelength assignment.
The objective corresponds to the minimization of the total number of
wavelengths.
$\min\sum\limits_{\lambda\in\Lambda}x_{\lambda}$ (1)
subject to:
$\displaystyle\sum_{\textsc{fsn}\in\mathcal{FSN}}b_{k}^{\textsc{fsn}}z_{\textsc{fsn}}\geq 1\qquad k\in K$ (2)

$\displaystyle\sum\limits_{\textsc{fsn}\in\mathcal{FSN}}a_{\ell}^{\textsc{fsn}}z_{\textsc{fsn}}\leq 1\qquad \ell\in L$ (3)

$\displaystyle\sum\limits_{\textsc{fsn}\in\mathcal{FSN}}z_{\textsc{fsn}}\leq n_{\mathcal{FSN}}$ (4)

$\displaystyle\sum\limits_{\lambda\in\Lambda}\beta^{\lambda}_{k}\>\beta^{\lambda}_{k^{\prime}}x_{\lambda}\leq 1-\sum\limits_{\textsc{fsn}\in\mathcal{FSN}}\theta^{\textsc{fsn}}_{kk^{\prime}}z_{\textsc{fsn}}\qquad k,k^{\prime}\in K$ (5)

$\displaystyle\sum\limits_{\lambda\in\Lambda}\beta^{\lambda}_{k}x_{\lambda}\geq 1\qquad k\in K$ (6)

$\displaystyle z_{\textsc{fsn}}\in\\{0,1\\}\qquad \textsc{fsn}\in\mathcal{FSN}$ (7)

$\displaystyle x_{\lambda}\in\\{0,1\\}\qquad \lambda\in\Lambda.$ (8)
Constraints (2) state that each request is routed on at least one selected filterless sub-network. Although these constraints could be written as equalities, inequalities make the model easier to solve in practice for LP (Linear Program)/ILP (Integer Linear Program) solvers. Due to the objective, a request will not be routed more than once unless there is enough spare capacity to do so, in which case we can easily eliminate the superfluous provisioning in a post-processing step. Constraints (3) guarantee pairwise link-disjoint sub-networks. Constraint (4) is optional and limits the number of f-subnet configurations. Constraints (5) guarantee that conflicting requests cannot share the same wavelength (color). Constraints (6) express that each request uses at least one wavelength (color). Again, the inequality is justified as for constraints (2), and post-processing can easily eliminate a double wavelength assignment. Post-processing is worthwhile in view of the enhanced convergence of the solution scheme thanks to the inequalities.
### III-B PP${}_{\textsc{fsn}}$: FSN Pricing Problem
The FSN pricing problem aims to find a new FSN configuration which could potentially improve the objective of the continuous relaxation of the restricted master problem.
The variables of the pricing problem directly or indirectly define the coefficients of the corresponding column of the master problem; they are defined as follows:
* •
$a_{\ell}=1$ if link $\ell$ is in the f-subnet under construction, 0
otherwise, for all $\ell\in L$
* •
$a_{v}=1$ if node $v$ belongs to the f-subnet under construction, 0 otherwise, for all $v\in V$. Remember that f-subnets are usually not supported by a spanning tree.
* •
$\alpha_{e}=1$ if edge $e$ belongs to the tree supporting the f-subnet under
construction, 0 otherwise, for all $e\in E$.
* •
$x_{k}=1$ if request $k$ is routed on the f-subnet under construction, 0 otherwise, for all $k\in K$
* •
$\varphi_{k\ell}=1$ if the routing of request $k$ goes through link $\ell$, or
if the channel used in the routing of request $k$ propagates on link $\ell$
because it is not filtered, 0 otherwise, for all $\ell\in L,k\in K$.
* •
$\psi_{k\ell}=1$ if the routing of request $k$ goes through link $\ell$
between its source and destination, 0 otherwise, for all $\ell\in L,k\in K$.
We need both variables $\varphi_{k\ell}$ and $\psi_{k\ell}$ in order to identify the unfiltered wavelength links.
* •
$\theta_{kk^{\prime}}=1$ if request $k$ and request $k^{\prime}$ are in
conflict on the f-subnet under construction, 0 otherwise.
* •
$\omega_{kk^{\prime}\ell}=\psi_{k\ell}\,\varphi_{k^{\prime}\ell}$, for all $\ell\in L,\ k,k^{\prime}\in K$. In other words, $\omega_{kk^{\prime}\ell}=1$ identifies a link on which $k$ and $k^{\prime}$ conflict, either between their source and destination nodes, or with one of them overlapping the unfiltered broadcast of the other request, and $\omega_{kk^{\prime}\ell}=0$ otherwise (no conflict).
Objective: it corresponds to the reduced cost of variable $z_{\textsc{fsn}}$, which is the objective function of this pricing problem, i.e., of the decomposition sub-problem. The reader who is not familiar with linear programming concepts, and in particular with the concept of reduced cost, is referred to the book of Chvatal [12]. The coefficient 0 indicates that the variable $z_{\textsc{fsn}}$ does not appear in the objective function (1) of the master problem, and therefore its coefficient contribution to the reduced cost is null.
$\min\>0-\sum\limits_{k\in K}u_{k}^{(2)}x_{k}+\sum\limits_{\ell\in L}u_{\ell}^{(3)}a_{\ell}+u^{(4)}+\sum\limits_{k,k^{\prime}\in K}u_{kk^{\prime}}^{(5)}\theta_{kk^{\prime}}$ (9)

where $u^{(2)}$, $u^{(3)}$, $u^{(4)}$ and $u^{(5)}$ denote the values of the dual variables of constraints (2), (3), (4) and (5), respectively.
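Given the dual values of the restricted master LP, the reduced cost (9) of a candidate column is a direct evaluation; a sketch with hypothetical dual values (the names `u2`–`u5` indicate which constraint each dual comes from):

```python
def reduced_cost(routed, used_links, conflict_pairs, u2, u3, u4, u5):
    """Evaluate (9) for a candidate FSN column: routed requests (x_k = 1),
    used links (a_l = 1) and conflicting request pairs (theta_kk' = 1)."""
    return (0.0
            - sum(u2[k] for k in routed)
            + sum(u3[l] for l in used_links)
            + u4
            + sum(u5[p] for p in conflict_pairs))

# Hypothetical duals for a two-request, one-link candidate column.
u2 = {"k1": 2.0, "k2": 1.5}     # duals of constraints (2)
u3 = {("v1", "v2"): 0.5}        # duals of constraints (3)
u4 = 0.25                       # dual of constraint (4)
u5 = {("k1", "k2"): 1.0}        # duals of constraints (5)
rc = reduced_cost({"k1", "k2"}, {("v1", "v2")}, set(), u2, u3, u4, u5)
```

Here `rc` is negative ($-2.75$), so adding this column may improve the master LP; the pricing problem searches for the most negative such column.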
We now describe the set of constraints.
Building an undirected tree
$\sum\limits_{e=\{v,v^{\prime}\}\in E:\,v,v^{\prime}\in V^{\prime}}\alpha_{e}\leq|V^{\prime}|-1\qquad V^{\prime}\subset V,\ |V^{\prime}|\geq 3$ (11)
$\sum\limits_{v\in V}a_{v}=\sum\limits_{e\in E}\alpha_{e}+1$ (12)
$2\alpha_{e}\leq a_{v}+a_{v^{\prime}}\qquad v,v^{\prime}\in V,\ e=\{v,v^{\prime}\}$ (13)
$\sum\limits_{e\in\omega(v)}\alpha_{e}\geq a_{v}\qquad v\in V$ (14)
$a_{\ell}\leq\alpha_{e},\>a_{\overline{\ell}}\leq\alpha_{e}\qquad\ell=(v,v^{\prime}),\ \overline{\ell}=(v^{\prime},v),\ v,v^{\prime}\in V,\ e=\{v,v^{\prime}\}$ (15)
Routing of the provisioned requests
$\varphi_{k\ell}\leq a_{\ell}\qquad k\in K,\ \ell\in L$ (16)
$a_{\ell}\leq\sum\limits_{k\in K}\varphi_{k\ell}\qquad\ell\in L$ (17)
$\varphi_{k\ell}+\varphi_{k\overline{\ell}}\leq 1\qquad\ell=(v,v^{\prime}),\ \overline{\ell}=(v^{\prime},v):\ v,v^{\prime}\in V$ (18)
$\sum\limits_{\ell\in\textsc{in}(d_{k})}\varphi_{k\ell}=\sum\limits_{\ell\in\textsc{out}(s_{k})}\varphi_{k\ell}=x_{k}\qquad k\in K$ (19)
$\sum\limits_{\ell\in\textsc{in}(s_{k})}\varphi_{k\ell}=0\qquad k\in K$ (20)
Propagation of unfiltered channels
$\varphi_{k\ell^{\prime}}\leq\sum\limits_{\ell\in\textsc{in}(v)}\varphi_{k\ell}+1-a_{\ell^{\prime}}\qquad k\in K,\ v\in V\setminus\{s_{k}\},\ \ell^{\prime}\in\textsc{out}(v)$ (21)
$\varphi_{k\ell}\leq\varphi_{k\ell^{\prime}}+2-a_{\ell}-a_{\ell^{\prime}}\qquad k\in K,\ v\in V,\ \ell\in\textsc{in}(v),\ \ell^{\prime}\in\textsc{out}(v)$ (22)
"Filtered" provisioning of requests
$\sum\limits_{\ell\in\textsc{in}(d_{k})}\psi_{k\ell}=\sum\limits_{\ell\in\textsc{out}(s_{k})}\psi_{k\ell}=x_{k}\qquad k\in K$ (23)
$\sum\limits_{\ell\in\textsc{in}(v)}\psi_{k\ell}=\sum\limits_{\ell\in\textsc{out}(v)}\psi_{k\ell}\leq x_{k}\qquad k\in K,\ v\in V\setminus\{s_{k},d_{k}\}$ (24)
$\sum\limits_{\ell\in\textsc{out}(d_{k})}\psi_{k\ell}=\sum\limits_{\ell\in\textsc{in}(s_{k})}\psi_{k\ell}=0\qquad k\in K$ (25)
Reach distance
$\sum\limits_{\ell\in L}\textsc{dist}_{\ell}\,\psi_{k\ell}\leq\textsc{reach\_dist}\qquad k\in K$ (26)
Identifying wavelength conflicting lightpaths
$\psi_{k\ell}\leq\varphi_{k\ell}\qquad k\in K,\ \ell\in L$ (27)
$\theta_{kk^{\prime}}\geq\psi^{\textsc{fsn}}_{k\ell}+\psi^{\textsc{fsn}}_{k^{\prime}\ell}-1\qquad\ell\in L,\ \lambda\in\Lambda,\ k,k^{\prime}\in K$ (28)
$\theta_{kk^{\prime}}\geq\psi^{\textsc{fsn}}_{k\ell}+\varphi^{\textsc{fsn}}_{k^{\prime}\ell}-1\qquad\ell\in L,\ \lambda\in\Lambda,\ k,k^{\prime}\in K$ (29)
$\theta_{kk^{\prime}}\geq\psi^{\textsc{fsn}}_{k^{\prime}\ell}+\varphi^{\textsc{fsn}}_{k\ell}-1\qquad\ell\in L,\ \lambda\in\Lambda,\ k,k^{\prime}\in K$ (30)
$\theta_{kk^{\prime}}\leq\sum\limits_{\ell\in L}(\omega_{kk^{\prime}\ell}+\omega_{k^{\prime}k\ell})\qquad k,k^{\prime}\in K$ (31)
$\omega_{kk^{\prime}\ell}\leq\psi^{\textsc{fsn}}_{k\ell}\qquad\ell\in L,\ k,k^{\prime}\in K$ (32)
$\omega_{kk^{\prime}\ell}\leq\varphi^{\textsc{fsn}}_{k^{\prime}\ell}\qquad\ell\in L,\ k,k^{\prime}\in K$ (33)
$\omega_{kk^{\prime}\ell}\geq\psi^{\textsc{fsn}}_{k\ell}+\varphi^{\textsc{fsn}}_{k^{\prime}\ell}-1\qquad\ell\in L,\ k,k^{\prime}\in K$ (34)
Domains of the variables
$\alpha_{e}\in\{0,1\}\qquad e=\{v,v^{\prime}\}\in E$ (35)
$a_{v}\in\{0,1\}\qquad v\in V$ (36)
$a_{\ell}\in\{0,1\}\qquad\ell\in L$ (37)
$\varphi_{k\ell}\in\{0,1\}\qquad\ell\in L,\ k\in K$ (38)
$\psi_{k\ell}\in\{0,1\}\qquad\ell\in L,\ k\in K$ (39)
$x_{k}\in\{0,1\}\qquad k\in K$ (40)
$\theta_{kk^{\prime}}\in\{0,1\}\qquad k,k^{\prime}\in K$ (41)
$\omega_{kk^{\prime}\ell}\in\{0,1\}\qquad k,k^{\prime}\in K,\ \ell\in L.$ (42)
Constraints (11) are the classical subtour elimination constraints that
guarantee an acyclic graph structure (i.e., a supporting tree for the f-subnet
under construction) [13]. Constraints (12) force the generation of a single
f-subnet. Constraints (13) and (14) guarantee the consistency between node and
edge variables: an edge is used in the undirected tree if and only if its two
endpoints belong to it. Constraints (15) take care of the consistency between
link and edge variables: they guarantee that if a link is used, its associated
edge belongs to the undirected tree.
The next block of constraints, (16)-(20), takes care of the routing of the
requests, including the links hosting the unfiltered channels. Constraints
(16) and (17) ensure that if a request is routed over link $\ell$, then that
link belongs to the output f-subnet and, vice versa, if a link belongs to the
output f-subnet structure, then at least one routing uses it. Constraints (18)
prevent the use of both $\ell$ and $\overline{\ell}$ (the link in the opposite
direction of $\ell$) in the routing of a given request. Constraints (19)-(20)
are flow constraints that take care of the "filtered" part of the routing of
the requests.
The next two sets of constraints, (21)-(22), take care of the propagation of
the unfiltered channels. Constraints (21) enforce, for every request $k$ and
every node different from its source node ($s_{k}$), that if none of the
incoming links is on the route or broadcast effect of request $k$, then none
of the outgoing links can be used for either the routing or the broadcast
effect of request $k$. Constraints (23)-(24) are flow constraints that take
care of the "filtered" routing of request $k$ between its source and
destination. Constraints (25) impose no wavelength assignment on the incoming
links of the source and on the outgoing links of the destination for the
"filtered" routing of $k$.
Constraint (26) is the reach constraint: for each request, the routing
distance between source and destination must not exceed a maximum distance
(1,500 km). Constraints (27) define the relation between variables
$\varphi_{k\ell}$ and $\psi_{k\ell}$. Constraints (28)-(30) are wavelength
conflict constraints: either $k$ and $k^{\prime}$ share a link between their
source and destination nodes, see (28), or one of the requests is routed to
the broadcast part of the other request, see (29) and (30). Constraints
(31)-(34) identify the no conflict case, i.e., when the routes between the
sources and destinations do not overlap and none of the requests is routed to
the broadcast part of the other request.
### III-C PP${}^{\textsc{h}}_{\textsc{color}}$: Relaxed Wavelength Pricing
Problem
The wavelength pricing problem aims to find a new wavelength configuration
which could potentially improve the objective of the continuous relaxation of
the restricted master problem.
The output of this pricing problem is a set of requests which can share the
same wavelength.
Variables
$\beta_{k}=1$ if request $k$ uses the wavelength associated with the current
wavelength pricing problem, and 0 otherwise, for all $k\in K$.
$\alpha_{kk^{\prime}}=\beta_{k}\>\beta_{k^{\prime}}$: it is equal to 1 if $k$
and $k^{\prime}$ are both assigned the wavelength of the current pricing
problem, and 0 otherwise.
Objective:
$\min\>1-\sum\limits_{k\in K}u_{k}^{\eqref{eq:one_request_at_least_one_color}}\beta_{k}+\sum\limits_{k,k^{\prime}\in K}u_{kk^{\prime}}^{\eqref{eq:conflict_color}}\alpha_{kk^{\prime}}$ (43)
subject to:
$\beta_{k}+\beta_{k^{\prime}}\leq 1\qquad\text{if }\sum\limits_{\textsc{fsn}\in\mathcal{FSN}}\theta^{\textsc{fsn}}_{kk^{\prime}}z^{\textsc{fsn}}>0,\quad k,k^{\prime}\in K$ (44)
$\alpha_{kk^{\prime}}\leq\beta_{k}\qquad k,k^{\prime}\in K$ (45)
$\alpha_{kk^{\prime}}\leq\beta_{k^{\prime}}\qquad k,k^{\prime}\in K$ (46)
$\beta_{k}+\beta_{k^{\prime}}\leq\alpha_{kk^{\prime}}+1\qquad k,k^{\prime}\in K$ (47)
$\alpha_{kk^{\prime}}\in\{0,1\}\qquad k,k^{\prime}\in K$ (48)
$\beta_{k}\in\{0,1\}\qquad k\in K.$ (49)
Constraints (44) identify wavelength conflicts between two requests, and
consequently make sure that wavelength conflicting requests are not assigned
the same wavelength. Constraints (45)-(47) are the linearization of
$\alpha_{kk^{\prime}}=\beta_{k}\>\beta_{k^{\prime}}$.
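The effect of the linearization (45)-(47) can be checked exhaustively over binary values; the enumeration below is ours, written for illustration, and is not part of the model:

```python
# Exhaustive check that inequalities (45)-(47) force alpha = beta * beta'
# once all variables are binary.
from itertools import product

for beta, beta_p in product((0, 1), repeat=2):
    feasible = [alpha for alpha in (0, 1)
                if alpha <= beta                  # (45)
                and alpha <= beta_p               # (46)
                and beta + beta_p <= alpha + 1]   # (47)
    # the only feasible value of alpha is the product beta * beta'
    assert feasible == [beta * beta_p]
```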
### III-D PP${}^{\textsc{e}}_{\textsc{color}}$: Exact Coloring Pricing
Problem
The exact wavelength pricing problem is obtained from the heuristic one by
omitting constraints (44).
## IV Solution Scheme
We now describe the solution process of the Master problem described in
Section III-A, in coordination with the pricing problems
PP${}_{\textsc{fsn}}$, PP${}^{\textsc{h}}_{\textsc{color}}$ and
PP${}^{\textsc{e}}_{\textsc{color}}$, described in Sections III-B, III-C and
III-D.
### IV-A Column Generation and ILP Solution
Column generation is a well-known technique for efficiently solving
large-scale linear programming problems [14]. In order to derive an ILP
solution, we can then use either a branch-and-price method [15] or heuristic
methods [16], the accuracy of which can be estimated.
We first discuss the column generation technique and the novelty we introduced
with our mathematical formulation. It allows the solution of the linear
relaxation of the Master Problem, i.e., (1)-(8). In order to do so, we first
define the so-called Restricted Master Problem (RMP), i.e., the Master Problem
with a very limited number of variables or columns, and a so-called Pricing
Problem, i.e., a configuration generator. Here, in our modelling, contrary to
the classical Dantzig-Wolfe decomposition, we consider two different types of
pricing problems: the FSN one and the wavelength one.
The objective function of the Pricing Problems is defined by the Reduced Cost
of the decision variable associated with the newly generated configuration: if
its minimum value is negative, the addition of the latter configuration will
allow a decrease of the optimal LP value of the current RMP, otherwise, if its
minimum value is positive for both pricing problems (i.e., the LP optimality
condition), the optimal LP value ($z_{\textsc{lp}}^{\star}$) of problem
(1)-(8) has been reached. In other words, the solution process consists in
alternately solving the RMP and the pricing problems until the LP optimality
condition is satisfied.
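This alternation can be sketched as a generic loop; `solve_rmp` and the pricing callbacks below are stand-ins of our own devising, not the actual RMP and PP${}_{\textsc{fsn}}$/PP${}_{\textsc{color}}$ models:

```python
# Generic column generation loop: alternate RMP solves with pricing until
# no pricing problem returns a configuration with a negative reduced cost.
def column_generation(solve_rmp, pricing_problems, columns, max_iter=1000):
    z_lp = None
    for _ in range(max_iter):
        duals, z_lp = solve_rmp(columns)       # solve the current RMP
        improved = False
        for pricing in pricing_problems:       # e.g. PP_fsn, PP^h_color, PP^e_color
            column, reduced_cost = pricing(duals)
            if reduced_cost < -1e-9:           # improving configuration found
                columns.append(column)
                improved = True
                break                          # re-solve the RMP first
        if not improved:                       # LP optimality condition met
            break
    return z_lp, columns
```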
Figure 3: Flowchart: column generation with three different pricing problems
Once the optimal LP solution of the MP has been reached, we can then derive an
ILP solution, i.e., a selection of pairwise link-disjoint filterless sub-
networks which provision all the requests, jointly with a wavelength
assignment for all the requests. This is done by solving the ILP of the last
RMP, in which the domains of the variables have been changed from continuous
to integer requirements. Denote by ${\tilde{z}}_{\textsc{ilp}}$ the value of
that ILP solution: it is not necessarily an optimal ILP solution, but it is
guaranteed to have an accuracy gap not larger than
$\varepsilon=({\tilde{z}}_{\textsc{ilp}}-z^{\star}_{\textsc{lp}})/z^{\star}_{\textsc{lp}}.$
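A small numeric illustration of this accuracy bound (the function name is ours):

```python
def accuracy_gap(z_ilp, z_lp):
    """Relative gap epsilon = (z_ilp - z_lp) / z_lp of an ILP solution of
    value z_ilp against the optimal LP value z_lp of the relaxation."""
    return (z_ilp - z_lp) / z_lp

# e.g. an ILP solution of value 23 against an LP bound of 20.0 is
# guaranteed to be within 15% of the ILP optimum
gap = accuracy_gap(23, 20.0)
```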
### IV-B Detailed Flowchart and Algorithm
Having outlined the general process of a column generation algorithm in the
previous section, together with the derivation of an $\varepsilon$-optimal ILP
solution, we now provide the detailed flowchart of our solution process in
Figure 3. The three pricing problems are solved in alternation until the LP
optimality condition is satisfied, i.e., no configuration with a negative
reduced cost can be found. Different strategies can be defined for this
alternation; we describe below the one which gave the best results among the
ones we tested.
1. Generate an initial solution with a single FSN supported by a spanning tree, and a first set of wavelength assignments. We use the algorithm of Welsh-Powell [17] to minimize the number of wavelengths, applying it to a wavelength conflict graph on which we minimize the number of colors.
2. Apply the column generation algorithm in order to solve the linear relaxation of model DFOP:
   (a) Solve the restricted master problem with the current FSN and wavelength assignment configurations.
   (b) Solve pricing problem PP${}_{\textsc{fsn}}$. If it generates an improving FSN configuration, add it to the RMP and return to Step 2a.
   (c) Solve pricing problem PP${}^{\textsc{h}}_{\textsc{color}}$. If it generates an improving wavelength configuration, add it to the RMP and return to Step 2a.
   (d) Solve pricing problem PP${}^{\textsc{e}}_{\textsc{color}}$. If it generates an improving wavelength configuration, add it to the RMP and return to Step 2a.
3. Once the continuous relaxation of the Master Problem has been solved optimally, solve the last generated restricted master problem with integer requirements on the variables, and derive an ILP solution.
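Step 1 relies on Welsh-Powell greedy coloring of the wavelength conflict graph; a minimal sketch follows (the adjacency-dict representation is our assumption, not the paper's data structure):

```python
# Welsh-Powell greedy coloring: visit vertices in order of decreasing degree
# and give each the smallest color unused by its already-colored neighbours.
# `conflicts` maps each request to the set of requests it conflicts with.
def welsh_powell(conflicts):
    order = sorted(conflicts, key=lambda v: len(conflicts[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in conflicts[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Three pairwise-conflicting requests need three wavelengths
triangle = {"k1": {"k2", "k3"}, "k2": {"k1", "k3"}, "k3": {"k1", "k2"}}
assert len(set(welsh_powell(triangle).values())) == 3
```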
## V Numerical Results
We now report the performance of our proposed mathematical model and
algorithm, and compare the quality of its solutions with those of Tremblay et
al. [7].
### V-A Data Sets
We use the Italy, California and Germany topologies as in Tremblay et al. [7]
(see [18] for distances), as well as the Cost239 and USA topologies, using the
link distances of [19] and [20], respectively. We recall their characteristics
in Table I. We consider unit uniform demands, as in [7], unless otherwise
indicated.
TABLE I: Data set characteristics
Networks | # nodes | # links
---|---|---
Italy | 10 | 15
California | 17 | 20
Germany17 | 17 | 26
Cost239 | 11 | 26
USA | 12 | 15
TABLE II: Network parameters for filterless solutions
 | Tremblay et al. [7] | | Model DFOP: a single tree | | | | Model DFOP: two trees | | | | Model DFOP: three trees | | |
Networks | # Trees | W | $z^{\star}_{\textsc{lp}}$ | W | Max Load (R) | Max Load (N) | $z^{\star}_{\textsc{lp}}$ | W | Max Load (R) | Max Load (N) | $z^{\star}_{\textsc{lp}}$ | W | Max Load (R) | Max Load (N)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Italy | 2 | 125 | 141.0 | 141 | 19 | 32 | 120.0 | 123 | 17 | 16 | - | - | - | -
California | 3 | 120 | 125.6 | 126 | 16 | 110 | 1117.4 | 122 | 16 | 106 | 1113.4 | 120 | 16 | 104
Germany17 | 2 | 188 | 120.5 | 125 | 16 | 109 | 162.3 | 173 | 16 | 67 | - | - | - | -
Cost239 | - | - | 149.3 | 151 | 10 | 41 | 122.7 | 128 | 16 | 22 | 115.7 | 125 | 17 | 18
USA | - | - | 161.0 | 161 | 11 | 50 | 141.2 | 153 | 17 | 46 | - | - | - | -
### V-B Performance of the Proposed DFOP Model
We tested our model/algorithm on the same data sets as Tremblay et al. [7].
Results are summarized in Table II.
Available numerical results of [7] are reported in columns 2 and 3, while
results or our DFOP model are reported in the remaining columns. We provide
the results of the DFOP model in Columns 4 to 7 (2 trees for Italian and
Germany17, 3 trees for California), for the first three data sets.
Figure 4: Two filterless sub-network solutions with optimized trees compared
to those of [7]. Panels: (a) Italy, (b) California, (c) Germany17.
Figure 5: Three filterless sub-networks on the California network. Panels:
(a) uniform traffic (272 unit requests, 1 request per node pair),
(b) non-uniform traffic (338 unit requests, several requests per node pair).
We observe that we improve on the values of Tremblay et al. [7] when using the
same two supporting trees for the FSNs of two data sets, i.e., 23 wavelengths
instead of 25 for the Italy network, and 73 wavelengths instead of 88 for the
Germany17 network. In Figure 4, we provide the details of the solutions of
Model DFOP with different trees than those of [7]. Numbers on the links
indicate the number of wavelengths, differentiating between "filtered" and
"unfiltered" wavelengths.
As we did not develop a branch-and-price algorithm and as we limited the
number of iterations in the column generation algorithm, the solutions of
Model DFOP are not necessarily optimal, except when $z^{\star}_{\textsc{lp}}$
is equal to the ILP optimal value. We observe that several solutions with one
tree are optimal, i.e., those for the Italy, California and USA networks. We
also provide results with one tree (optimized selection): we observe that the
number of wavelengths for one tree exceeds the number required for two or
three trees.
Lastly, in Figure 5, we provide results with three f-subnets for the
California network, first with uniform traffic in Figure 5(a), and then with
non-uniform traffic in Figure 5(b) (number of requests randomly generated
between 1 and 3 for each node pair). It is interesting to observe that the
optimal solution for the non-uniform traffic does not use one link, and that
while one f-subnet is always a spanning tree, the two other f-subnets are very
small, most likely due to the characteristics of the California topology. The
bold red arrow indicates the link with the largest number of wavelengths.
## VI Conclusions
This paper presents a one-step decomposition model and algorithm, which can
exactly solve the filterless network design problem. While computational times
are sometimes quite high (i.e., sometimes a couple of hours), we hope to soon
be able to further improve the modelling and algorithm in order to provide
more scalable solutions. Nevertheless, this is already a very significant
first step towards the exact design of filterless optical networks.
## Acknowledgment
B. Jaumard has been supported by a Concordia University Research Chair (Tier
I) on the Optimization of Communication Networks and by an NSERC (Natural
Sciences and Engineering Research Council of Canada) grant.
## References
* [1] A. Tzanakaki and M. O’Mahony, “Analysis of filterless wavelength converters employing cross-gain modulation in semiconductor optical amplifiers,” in _Conference on Lasers and Electro-Optics (CLEO)_ , Baltimore, MD, USA, 1999, pp. 433 – 434.
* [2] J.-P. Savoie, C. Tremblay, D. Plant, and M. Bélanger, “Physical layer validation of filterless optical networks,” in _European Conference on Optical Communication (ECOC)_ , Torino, Italy, Sept. 2010, pp. 1 – 3.
* [3] P. Pavon-Marino, F.-J. Moreno-Muro, M. Garrich, M. Quagliotti, E. Riccardi, A. Rafel, and A. Lord, “Techno-economic impact of filterless data plane and agile control plane in the 5G optical metro,” _Journal of Lightwave Technology_ , vol. 38, no. 15, pp. 3801 – 3814, 2020.
* [4] M. Nooruzzaman, N. Alloune, C. Tremblay, P. Littlewood, and M. P. Bélanger, “Resource savings in submarine networks using agility of filterless architectures,” _IEEE Communications Letters_ , vol. 21, no. 3, pp. 512–515, March 2017.
* [5] C. Tremblay, P. Littlewood, M. Bélanger, L. Wosinska, and J. Chen, “Agile filterless optical networking,” in _Conference on Optical Network Design and Modeling - ONDM_ , 2017, pp. 1 – 4.
* [6] A. Dochhan, R. Emmerich, P. W. Berenguer, C. Schubert, J. Fischer, M. Eiselt, and J.-P. Elbers, “Flexible metro network architecture based on wavelength blockers and coherent transmission,” in _45th European Conference on Optical Communication (ECOC)_ , Dublin, Ireland, 2019, pp. 1 – 4.
* [7] C. Tremblay, E. Archambault, M. Bélanger, J.-P. Savoie, F. Gagnon, and D. Plant, “Passive filterless core networks based on advanced modulation and electrical compensation technologies,” _Telecommunication Systems_ , vol. 52, no. 4, pp. 167–181, January 2013.
* [8] C. Duhamel, P. Mahey, A. Martins, R. Saldanha, and M. de Souza, “Model-hierarchical column generation and heuristic for the routing and wavelength assignment problem,” _4OR_ , pp. 1 – 20, 2016.
* [9] B. Jaumard and M. Daryalal, “Efficient spectrum utilization in large scale rwa problems,” _IEEE/ACM Transactions on Networking_ , vol. PP, pp. 1–16, March 2017.
* [10] O. Ayoub, S. Shehata, F. Musumeci, and M. Tornatore, “Filterless and semi-filterless solutions in a metro-HAUL network architecture,” _20th International Conference on Transparent Optical Networks (ICTON)_ , pp. 1 – 4, 2018.
* [11] B. Jaumard, Y. Wang, and N. Huin, “Optimal design of filterless optical networks,” _20th International Conference on Transparent Optical Networks (ICTON)_ , pp. 1 – 5, 2018.
* [12] V. Chvatal, _Linear Programming_. Freeman, 1983.
* [13] G. Nemhauser and L. Wolsey, _Integer and Combinatorial Optimization_. Wiley, 1999, reprint of the 1988 edition.
* [14] M. Lübbecke and J. Desrosiers, “Selected topics in column generation,” _Operations Research_ , vol. 53, pp. 1007–1023, 2005.
* [15] C. Barnhart, E. Johnson, G. Nemhauser, M. Savelsbergh, and P. Vance, “Branch-and-price: Column generation for solving huge integer programs,” _Operations Research_ , vol. 46, no. 3, pp. 316–329, 1998.
* [16] R. Sadykov, F. Vanderbeck, A. Pessoa, I. Tahiri, and E. Uchoa, “Primal heuristics for branch and price: The assets of diving methods,” _INFORMS Journal on Computing_ , vol. 31, no. 2, pp. 251 – 267, 2019.
* [17] D. Welsh and M. B. Powell, “An upper bound for the chromatic number of a graph and its application to timetabling problems,” _The Computer Journal_ , vol. 10, pp. 85 – 86, 1967.
* [18] E. Archambault, “Design and simulation platform for optical filterless networks,” Master’s thesis, Ecole de Technologie Supérieure (ETS), Montreal, Canada, 2008.
* [19] M. Hadi and M. Pakravan, “Resource allocation for elastic optical networks using geometric optimization,” _Journal of Optical Communications and Networking_ , vol. 9, no. 10, pp. 889–899, January 2017.
* [20] SNDlib, “Germany50 problem,” http://sndlib.zib.de/home.action/, October 2005.
# Anomalies and symmetric mass generation for Kähler-Dirac fermions
Nouman Butt, Department of Physics, University of Illinois at Urbana-Champaign, 1110 W Green St, Urbana, IL 61801
Simon Catterall, Department of Physics, Syracuse University, Syracuse, NY 13244, USA
Arnab Pradhan, Department of Physics, Syracuse University, Syracuse, NY 13244, USA
Goksu Can Toga, Department of Physics, Syracuse University, Syracuse, NY 13244, USA
###### Abstract
We show that massless Kähler-Dirac fermions exhibit a mixed gravitational
anomaly involving an exact $U(1)$ symmetry which is unique to Kähler-Dirac
fields. Under this $U(1)$ symmetry the partition function transforms by a
phase depending only on the Euler character of the background space.
Compactifying flat space to a sphere we learn that the anomaly vanishes in odd
dimensions but breaks the symmetry down to $Z_{4}$ in even dimensions. This
$Z_{4}$ is sufficient to prohibit bilinear terms from arising in the fermionic
effective action. Four fermion terms are allowed but require multiples of two
flavors of Kähler-Dirac field. In four dimensional flat space each Kähler-
Dirac field can be decomposed into four Dirac spinors and hence these anomaly
constraints ensure that eight Dirac fermions or, for real representations,
sixteen Majorana fermions are needed for a consistent interacting theory.
These constraints on fermion number agree with known results for topological
insulators and recent work on discrete anomalies rooted in the Dai-Freed
theorem. Our work suggests that Kähler-Dirac fermions may offer an independent
path to understanding these constraints. Finally we point out that this
anomaly survives intact under discretization and hence is relevant in
understanding recent numerical results on lattice models possessing massive
symmetric phases.
## I Introduction
The Kähler-Dirac equation gives an alternative to the Dirac equation for
describing fermions in which the physical degrees of freedom are carried by
antisymmetric tensors rather than spinors. These tensors transform under a
twisted rotation group that corresponds to the diagonal subgroup of the usual
(Euclidean) Lorentz group and a corresponding flavor symmetry. In flat space
the Kähler-Dirac field can be decomposed into a set of degenerate Dirac
spinors but this equivalence is lost in a curved background since the coupling
to gravity differs from the Dirac case. Indeed, unlike the case of Dirac
fermions, Kähler-Dirac fermions can be defined on an arbitrary manifold
without the need to introduce a frame and spin connection. These facts were
emphasized many years ago by Banks et al. where an attempt was made to
identify the four Dirac fermions residing in each four dimensional Kähler-
Dirac field with the number of generations in the Standard Model Banks et al.
(1982). With precision LEP data ruling out a fourth generation this idea had
to be abandoned. However, in this paper, we will argue that there may be
another natural interpretation for this degeneracy - it is required in order
to write down anomaly free, and hence consistent, theories of interacting
fermions.
These constraints arise because the partition function of a massless free
Kähler-Dirac field in curved space transforms by a phase under a particular
global $U(1)$ symmetry with the phase being determined by the Euler character
$\chi$ of the background. The appearance of $\chi$ shows that the anomaly is
gravitational in origin but is distinct from the usual mixed gravitational
anomaly of Weyl fermions Fujikawa (1980). If we compactify flat space to a
sphere this anomaly breaks the $U(1)$ to $Z_{4}$ in even dimensions which is
sufficient to prohibit the appearance of fermion bilinear terms in the quantum
effective action. Four fermion terms are however allowed and the simplest
operator of this type requires two flavors of Kähler-Dirac fermion. Since a
massless Kähler-Dirac field can be decomposed into two so-called reduced
Kähler-Dirac fields, each carrying half the original degrees of freedom, four
reduced fields are required for the minimal four fermion interaction.
In even dimensions a reduced Kähler-Dirac field, in the flat space limit, can
be decomposed into $2^{D/2}$ Majorana spinors and we learn that consistent
interacting theories in even dimensions possess $2^{D/2+2}$ Majorana fields.
These fermion numbers agree with a series of anomaly cancellation conditions
associated with certain discrete symmetries in dimensions two and four - see
table 1 and García-Etxebarria and Montero (2019); Wan and Wang (2020).
D | Symmetry | Critical number of Majoranas
---|---|---
2 | Chiral fermion parity | 8
4 | Spin-$Z_{4}$ | 16
Table 1: Number of Majorana fermions needed for consistent interacting
theories in $D=2$ and $D=4$
Since cancellation of all ’t Hooft anomalies is a necessary condition for
fermions to acquire mass without breaking symmetries this suggests it may be
possible to build models with precisely this fermion content where all
fermions are gapped in the IR. In fact there are examples in condensed matter
physics where precisely this occurs; see Fidkowski and Kitaev (2010); Ryu and
Zhang (2012); Kapustin et al. (2015); You and Xu (2015); Morimoto et al.
(2015); You et al. (2018); Wang and Wen (2020); Guo et al. (2020). Similar
results have been obtained in staggered fermions in lattice gauge theory Ayyar
and Chandrasekharan (2015); Ayyar and Chandrasekharan (2016a, b); Catterall
(2016); Catterall and Butt (2018); Catterall et al. (2018).
The plan of the paper is as follows. We start in section II by giving a brief
introduction to Kähler-Dirac fermions exhibiting their connection to Dirac
fermions, and showing how, in the case of massless fields, the theory is
invariant under a $U(1)$ symmetry whose generator $\Gamma$ anticommutes with
the Kähler-Dirac operator on any curved background. Using $\Gamma$ one can
project out half the degrees of freedom to obtain a reduced Kähler-Dirac
field. Section III shows that the $U(1)$ symmetry suffers from a gravitational
anomaly in even dimensions and breaks to $Z_{4}$. In section IV we show using
a spectral flow argument that this remaining $Z_{4}$ symmetry suffers from a
global anomaly in the presence of interactions unless the theory contains
multiples of four reduced Kähler-Dirac fields. In section V we point out that
a necessary condition for symmetric mass generation in such theories is that
these anomalies cancel and we give examples of possible four fermion
interactions that might be capable of achieving this. Section VI points out
the connections between these continuum ideas and lattice fermions and a final
summary appears in section VII.
## II Review of Kähler-Dirac Fields
The Kähler-Dirac equation arises on taking the square root of the Laplacian
operator written in terms of exterior derivatives. We start by defining the
Kähler-Dirac operator $K=d-d^{\dagger}$ with $d^{\dagger}$ the adjoint of $d$.
Clearly $-K^{2}=dd^{\dagger}+d^{\dagger}d=\Box$ since the derivative operators
are nilpotent. This suggests an alternative to the usual Dirac equation Kahler
(1962):
$(K-m)\Phi=0$ (1)
where $\Phi=\left(\phi_{0},\phi_{1},\ldots,\phi_{p},\ldots,\phi_{D}\right)$ is a
collection of $p$-forms. The action of the derivative operators on these forms
is then given by Banks et al. (1982); Catterall et al. (2018)
$d\Phi=\Big{(}0,\,\partial_{\mu}\phi,\,\partial_{\mu_{1}}\phi_{\mu_{2}}-\partial_{\mu_{2}}\phi_{\mu_{1}},\,\ldots,\,\sum_{\text{perms}\,\pi}\left(-1\right)^{\pi}\partial_{\mu_{1}}\phi_{\mu_{2}\ldots\mu_{D}}\Big{)}$
(2)
$-d^{\dagger}\Phi=\left(\phi^{\nu},\phi^{\nu}_{\mu},\ldots,\phi^{\nu}_{\mu_{1},\ldots,\mu_{D-1}},0\right)_{;\nu}.$
(3)
An inner product of two such Kähler-Dirac fields $A$ and $B$ can be defined as
$\left[A,B\right]=\sum_{p}\frac{1}{p!}a^{{\mu_{1}}\ldots{\mu_{p}}}b_{{\mu_{1}}\ldots{\mu_{p}}}.$
(4)
Using this allows one to obtain the Kähler-Dirac equation by the variation of
the Kähler-Dirac action
$S_{\rm KD}=\int
d^{D}x\sqrt{g}\,\left[\overline{\Phi},\left(K-m\right)\Phi\right]$ (5)
where $\overline{\Phi}$ is an independent (in Euclidean space) Kähler-Dirac
field.
It is easy to see that the Kähler-Dirac operator anticommutes with a linear
operator $\Gamma$ which acts on the $p$-form fields as
$\Gamma:\quad\phi_{p}\to\left(-1\right)^{p}\phi_{p}.$ (6)
This anti-commutation property can be used to construct a $U(1)$ symmetry of
the massless Kähler-Dirac action which acts on $\Phi$ as
$\Phi\to e^{i\alpha\Gamma}\Phi.$ (7)
Furthermore, using $\Gamma$ one can define operators which project out even
and odd form fermions - so-called reduced Kähler-Dirac fields
$\Phi_{\pm}=P_{\pm}\Phi$ with
$P_{\pm}=\frac{1}{2}\left(1\pm\Gamma\right).$ (8)
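A minimal numerical check of these projectors, modeling a $D=4$ Kähler-Dirac field as one component per form degree $p=0,\ldots,4$ (an illustrative simplification of ours, ignoring the tensor indices of each $p$-form):

```python
import numpy as np

# Gamma acts as (-1)^p on the p-form component; build it as a diagonal
# matrix over form degrees and verify P± = (1 ± Gamma)/2 are orthogonal
# projectors that resolve the identity.
D = 4
Gamma = np.diag([(-1.0) ** p for p in range(D + 1)])
I = np.eye(D + 1)
P_plus, P_minus = (I + Gamma) / 2, (I - Gamma) / 2

assert np.allclose(P_plus @ P_plus, P_plus)    # idempotent
assert np.allclose(P_minus @ P_minus, P_minus)
assert np.allclose(P_plus @ P_minus, 0 * I)    # orthogonal
assert np.allclose(P_plus + P_minus, I)        # complete
```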
The Kähler-Dirac operator $K$ couples even to odd forms and hence the massless
Kähler-Dirac action separates into two independent pieces
$S=\int\left(\overline{\Phi}_{+}\,K\Phi_{-}+\overline{\Phi}_{-}\,K\Phi_{+}\right)$.
Retaining just one of these terms one obtains an action for such a reduced
Kähler-Dirac (RKD) field
$S_{\rm RKD}=\int d^{D}x\sqrt{g}\,\left[\overline{\Phi}_{-},K\Phi_{+}\right].$
(9)
Notice that the single flavor reduced theory admits no mass term since
$\overline{\Phi}_{+}\Phi_{-}=0$. Finally if we relabel
$\overline{\Phi}_{-}\to\Phi_{-}$ this reduced action can be rewritten in a
Majorana-like form
$S_{\rm RKD}=\frac{1}{2}\int d^{D}x\sqrt{g}\,\left[\Phi,K\Phi\right].$ (10)
Given that both the Dirac operator and the Kähler-Dirac operator correspond to
square roots of the Laplacian one might imagine that there is a relation
between the Kähler-Dirac field and spinor fields. To exhibit this relationship
we construct a matrix $\Psi$ by combining the p-form components of the Kähler-
Dirac field $\Phi$ with products of Dirac gamma matrices
$\Psi=\sum_{p=0}^{D}\frac{1}{p!}\gamma^{{\mu_{1}}\ldots{\mu_{p}}}\phi_{{\mu_{1}}\ldots{\mu_{p}}}.$
(11)
where
$\gamma^{{\mu_{1}}\ldots{\mu_{p}}}=\gamma^{\mu_{1}}\gamma^{\mu_{2}}\cdots\gamma^{\mu_{p}}$
are constructed using the usual (Euclidean) Dirac matrices
$\gamma^{\mu}=\gamma^{a}e^{\mu}_{a}$ (we restrict ourselves to even
dimensions in what follows; odd dimensions require twice as many spinor
degrees of freedom to match the number of components of a Kähler-Dirac field,
see Hands (2021)). In flat space it is straightforward to show that the matrix
$\Psi$ satisfies the usual Dirac equation
$\left(\gamma^{\mu}\partial_{\mu}-m\right)\Psi=0$ (12)
and describes $2^{D/2}$ degenerate Dirac spinors corresponding to the columns
of $\Psi$. This equation of motion can be derived from the action
$S=\int d^{D}x\,{\rm
Tr}\,\left[\overline{\Psi}(\gamma^{\mu}\partial_{\mu}-m)\Psi\right].$ (13)
This action is invariant under a global ${\rm Spin}(D)\times SU(2^{D/2})$
symmetry where the first factor corresponds to Euclidean Lorentz
transformations and the second to an internal flavor symmetry. In even
dimensions we can write a matrix representation of the $U(1)$ generator as
$\Gamma\equiv\gamma_{5}\otimes\gamma_{5}$ where the two factors act by left
and right multiplication on $\Psi$. The matrix representation of a reduced
Kähler-Dirac field is then given by
$\Psi_{\pm}=\frac{1}{2}\left(\Psi\pm\gamma_{5}\Psi\gamma_{5}\right).$ (14)
Similarly the reduced action can be written as
$\int d^{D}x\,{\rm
Tr}\,\left(\overline{\Psi}_{-}\gamma^{\mu}\partial_{\mu}\Psi_{+}\right).$ (15)
The condition $\overline{\Phi}_{-}=\Phi_{-}$ then implies the matrix condition
$\Psi^{*}=B\Psi B^{T}$ (16)
where $B=C\gamma_{5}$ with $C$ the usual charge conjugation matrix. Using this
one can write a Majorana-like matrix representation of the reduced action as
$\frac{1}{2}\int d^{D}x\,{\rm
Tr}\,\left(B\Psi^{T}B^{T}\gamma^{\mu}\partial_{\mu}\Psi\right).$ (17)
Notice that after this reduction the free theory in flat space corresponds to
$2^{D/2-1}$ Dirac or $2^{D/2}$ Majorana spinors.
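The projection of eqn. (14) and the even/odd split it induces can be illustrated with the same kind of gamma matrices (again an assumed representation, used only as a sketch). Since $\gamma_{5}$ anticommutes with each $\gamma^{\mu}$, conjugation gives $\gamma_{5}\gamma^{\mu_{1}\ldots\mu_{p}}\gamma_{5}=(-1)^{p}\gamma^{\mu_{1}\ldots\mu_{p}}$, so $\Psi_{+}$ collects the $1+6+1=8$ even-form components and $\Psi_{-}$ the $4+4=8$ odd-form components:

```python
import itertools
import numpy as np

# Assumed chiral representation of the D = 4 Euclidean gamma matrices.
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
gamma = [np.kron(sx, p) for p in (sx, sy, sz)] + [np.kron(sy, np.eye(2))]
g5 = gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
assert np.allclose(g5 @ g5, np.eye(4))

def proj(X, sign):
    """Psi_pm = (X pm gamma5 X gamma5)/2, as in eqn (14)."""
    return 0.5 * (X + sign * g5 @ X @ g5)

# gamma5 M gamma5 = (-1)^p M on each product gamma^{mu_1...mu_p}, so the
# projectors separate even-p from odd-p form components (8 + 8).
for p in range(5):
    for idx in itertools.combinations(range(4), p):
        M = np.eye(4, dtype=complex)
        for mu in idx:
            M = M @ gamma[mu]
        assert np.allclose(g5 @ M @ g5, (-1) ** p * M)
        assert np.allclose(proj(M, +1), M if p % 2 == 0 else 0 * M)

# proj(+) and proj(-) are complementary projectors on any matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
assert np.allclose(proj(X, 1) + proj(X, -1), X)
assert np.allclose(proj(proj(X, 1), 1), proj(X, 1))
assert np.allclose(proj(proj(X, 1), -1), 0 * X)
```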
We can gain further insight into the reality condition eqn. 16 by going to a
chiral basis for the gamma matrices. In four dimensions the full Kähler-Dirac
field $\Psi$ then takes the form
$\Psi=\left(\begin{array}[]{cc}E&O^{\prime}\\\ O&E^{\prime}\end{array}\right)$
(18)
where $O$ and $O^{\prime}$ denote $2\times 2$ blocks of odd form fields while
$E$ and $E^{\prime}$ denote corresponding even form fields. Each block
contains a doublet of Weyl fields which transform in representations of the
$SU(2)_{L}\times SU(2)_{R}$ Lorentz symmetry and of an $SU(2)\times SU(2)$
flavor symmetry. The condition $\Psi^{*}=B\Psi B^{T}$ implies
$O^{\prime}=i\sigma_{2}O^{*}i\sigma_{2}$ and
$E^{\prime}=-i\sigma_{2}E^{*}i\sigma_{2}$. This suggests that the operation
$X\to i\sigma_{2}X^{*}i\sigma_{2}$ can be interpreted as a generalized charge
conjugation operator that flips both chirality and flavor representation of a
given Weyl doublet within the Kähler-Dirac field. It also implies that both
$(O,O^{\prime})$ and $(E,E^{\prime})$ constitute doublets of Majorana spinors.
Finally we should note that while the Kähler-Dirac equation eqn. 1 written in
the language of forms does not change in curved space, its matrix
representation takes the modified form
$(e^{\mu}_{a}\gamma^{a}D_{\mu}-m)\Psi=0$ (19)
where $e_{\mu}^{a}$ is the vielbein or frame and
$D_{\mu}\Psi=\partial_{\mu}\Psi+[\omega_{\mu},\Psi]$ is the covariant
derivative associated to the spin connection.
## III A gravitational anomaly for Kähler-Dirac fields
In the previous section we showed that the Kähler-Dirac action is invariant
under a $U(1)$ symmetry. However in the quantum theory we should also be
careful to examine the invariance of the fermion measure. We will do this for
a generic four dimensional curved background using the matrix representation
of the Kähler-Dirac theory. The curved space action in $D=4$ reads
$S=\int d^{4}x\sqrt{g}\,{\rm
Tr}\,\left(\overline{\Psi}e^{\mu}_{a}\gamma^{a}D_{\mu}\Psi\right)$ (20)
where $D_{\mu}\Psi=\partial_{\mu}\Psi+[\omega_{\mu},\Psi]$ with $\omega_{\mu}$
the spin connection and we have introduced the frame field $e_{\mu}^{a}(x)$ to
translate between flat and curved space indices with
$e_{\mu}^{a}e_{\nu}^{b}\delta_{ab}=g_{\mu\nu}$. In the standard way we start
by expanding $\Psi$ and $\overline{\Psi}$ on a basis of eigenstates of the
Kähler-Dirac operator:
$\gamma^{\mu}D_{\mu}\phi_{n}=\lambda_{n}\phi_{n}$ (21)
with $\int d^{4}x\,e(x){\rm
Tr}\left(\overline{\phi}_{n}(x)\phi_{m}(x)\right)=\delta_{nm}$ and $e={\rm
det}\,(e_{\mu}^{a})$. Thus
$\displaystyle\Psi(x)$ $\displaystyle=\sum_{n}a_{n}\phi_{n}(x)$ (22)
$\displaystyle\overline{\Psi}(x)$
$\displaystyle=\sum_{n}\overline{b}_{n}\overline{\phi}_{n}(x).$ (23)
The measure is then written
$D\overline{\Psi}\,D\Psi=\prod_{n}d\overline{b}_{n}\,da_{n}$ and the variation
of this measure under the $U(1)$ transformation with parameter $\alpha(x)$ is
given by $e^{-2i\int d^{4}x\,\alpha(x)A(x)}$ where the anomaly $A$ is formally given by
$A(x)={\rm Tr}\,\sum_{n}e\overline{\phi}_{n}\Gamma\phi_{n}(x)$ (24)
where the operator $\Gamma=\Gamma_{5}\otimes\gamma_{5}$ carries a flavor
rotation matrix $\gamma_{5}$ acting on the right of the matrix field and a
curved space chiral matrix $\Gamma_{5}$ acting on the left with
$\Gamma_{5}=\gamma^{a}\gamma^{b}\gamma^{c}\gamma^{d}e^{1}_{a}e^{2}_{b}e^{3}_{c}e^{4}_{d}=\gamma^{5}e$.
We need a gauge invariant regulator for this expression so we try inserting
the factor
$e^{\frac{1}{M^{2}}\left(\gamma^{\mu}D_{\mu}\right)^{2}}$ (25)
into the expression for $A$. We can write
$\left(\gamma^{\mu}D_{\mu}\right)^{2}=D^{\mu}D_{\mu}+e^{\mu}_{c}e^{\nu}_{d}\sigma^{cd}[D_{\mu},D_{\nu}]$
(26)
where $\sigma^{cd}=\frac{1}{4}[\gamma^{c},\gamma^{d}]$ are the generators of
${\rm Spin}(4)$. Furthermore for KD fermions we have:
$[D_{\mu},D_{\nu}]\psi=[R_{\mu\nu},\psi]$
where $R_{\mu\nu}=\frac{1}{2}R_{\mu\nu}^{ab}\sigma_{ab}$. Plugging this
expression into eqn. 26 yields
$\left(\gamma^{\mu}D_{\mu}\right)^{2}\psi=D^{\mu}D_{\mu}\psi+{\frac{1}{2}}e^{\mu}_{c}e^{\nu}_{d}\sigma^{cd}R_{\mu\nu}^{ab}[\sigma_{ab},\psi].$
The anomaly can then be written
$\displaystyle A(x)$ $\displaystyle=\lim_{M\to\infty}{\rm
Tr}\,\sum_{n}e\left(\overline{\phi}_{n}{\Gamma}e^{\frac{1}{M^{2}}(\gamma^{\mu}D_{\mu})^{2}}\phi_{n}\right)$
(27) $\displaystyle=\lim_{M\to\infty}{\rm Tr}\,\left(\Gamma
e^{\frac{1}{M^{2}}(\gamma^{\mu}D_{\mu})^{2}}\sum_{n}e\phi_{n}\overline{\phi}_{n}\right)$
$\displaystyle=\lim_{x\to x^{{}^{\prime}}}\lim_{M\to\infty}{\rm
Tr}\,\left(\Gamma
e^{\frac{1}{M^{2}}(\gamma^{\mu}D_{\mu})^{2}}\delta(x-x^{{}^{\prime}})\right)$
$\displaystyle\begin{split}&=\lim_{x\to x^{{}^{\prime}}}\lim_{M\to\infty}{\rm
Tr}\,\Big{(}e\gamma^{5}e^{\frac{1}{M^{2}}(D^{\mu}D_{\mu}+{\frac{1}{2}}e^{\mu}_{c}e^{\nu}_{d}{\sigma^{cd}}R_{\mu\nu}^{ab}[\sigma_{ab},.])}\\\
&\qquad\times\delta(x-x^{{}^{\prime}})\gamma_{5}\Big{)}.\end{split}$
Expanding the exponential to $O(1/M^{4})$ to get a non-zero result for the
trace over spinor and flavor indices and acting with
$e^{\frac{1}{M^{2}}D_{\mu}D^{\mu}}$ on the delta function yields (see the
appendix for more details)
$\displaystyle\begin{split}A&=\frac{1}{16\pi^{2}}\left(\frac{1}{2!}\right)\left(\frac{1}{4}\right){\rm
tr}\,\left(e\gamma^{5}\sigma^{ab}\sigma^{cd}\right)e^{\mu}_{a}e^{\nu}_{b}e^{\rho}_{c}e^{\lambda}_{d}R_{\mu\nu}^{CD}R_{\rho\lambda}^{EF}\\\
&\qquad\times{\rm
tr}\,\left(\sigma_{CD}\sigma_{EF}\gamma_{5}\right)\end{split}$
$\displaystyle=\frac{1}{128\pi^{2}}\epsilon^{\mu\nu\rho\lambda}\epsilon_{CDEF}R_{\mu\nu}^{CD}R_{\rho\lambda}^{EF}$
(28)
where we have also employed the result:
$e\epsilon^{abcd}e^{\mu}_{a}e^{\nu}_{b}e^{\rho}_{c}e^{\lambda}_{d}=\epsilon^{\mu\nu\rho\lambda}.$
Thus the anomaly $A(x)$ is just the Euler density and we find that the phase
transformation of the partition function under the global $U(1)$ symmetry is
then
$Z\to e^{-2i\alpha\int d^{4}x\,A(x)}\,Z=e^{-2i\alpha\chi}Z.$ (29)
This result for the anomaly agrees with a previous lattice calculation that
employed a discretization of the Kähler-Dirac action on simplicial lattices
Catterall et al. (2018). The non-zero value for the anomaly originates in the
existence of exact zero modes of the Kähler-Dirac operator. Such zero modes
are eigenstates of $\Gamma$ and eqn. 24 shows that $\int
d^{4}x\,A(x)=n_{+}-n_{-}$ where $n_{\pm}$ denotes the number of zero modes
with $\Gamma=\pm 1$. Our final result is then a consequence of the index
theorem
$n_{+}-n_{-}=\chi.$ (30)
On the sphere (which can be regarded as a compactification of $R^{4}$) the
presence of this phase breaks the $U(1)$ to $Z_{4}$. This non-anomalous
$Z_{4}$ is then sufficient to prohibit fermion bilinear mass terms from
appearing in the effective action of the theory. This compactification of the
space on a sphere is similar to the strategy that is used to show the
importance of instantons in the QCD path integral. It places constraints on
the terms that can appear in the effective action as the radius of the sphere
is sent to infinity. Four fermion terms are allowed but require at least two
flavors of Kähler-Dirac field to be non-vanishing. (Of course we can go
further and demand that the anomaly be cancelled for manifolds with other
values of $\chi$: for example, on manifolds with $\chi=-4$ only eight fermion
terms are allowed, only twelve fermion terms on spaces with $\chi=-6$, etc.
Other work on higher order multifermion interactions can be found in Wu et al.
(2019); Jian and Xu (2020).) Since each massless Kähler-Dirac field can be
written in terms of two independent reduced fields this implies consistent,
interacting theories require four reduced Kähler-Dirac fields. In four
dimensions, and taking the flat space limit, each such reduced field
corresponds to four Majorana fermions and we learn that such theories contain
sixteen Majorana fermions. It is not hard to generalize this argument to any
even dimension.
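Since the integrated anomaly is the Euler characteristic, the counting above can be sanity-checked with elementary combinatorics. The sketch below (our own illustration) computes $\chi(S^{D})$ by triangulating $S^{D}$ as the boundary of a $(D+1)$-simplex and confirms that the phase $e^{-2i\alpha\chi}$ is nontrivial only in even dimensions, where $\chi=2$ restricts $\alpha$ to multiples of $\pi/2$, i.e. the surviving $Z_{4}$:

```python
import cmath
from math import comb, pi

def chi_sphere(D):
    """Euler characteristic of S^D triangulated as the boundary of a
    (D+1)-simplex: it has C(D+2, p+1) faces of dimension p, p = 0..D."""
    return sum((-1) ** p * comb(D + 2, p + 1) for p in range(D + 1))

# chi = 2 for even spheres, 0 for odd ones: the anomalous phase
# exp(-2 i alpha chi) is nontrivial only in even dimensions.
assert [chi_sphere(D) for D in range(1, 7)] == [0, 2, 0, 2, 0, 2]

# With chi = 2 the phase exp(-4 i alpha) is trivial exactly for
# alpha = k pi/2, so the U(1) is broken to Z_4.
for k in range(4):
    assert abs(cmath.exp(-2j * 2 * (k * pi / 2)) - 1) < 1e-12
assert abs(cmath.exp(-2j * 2 * 0.3) - 1) > 1e-3  # generic alpha is anomalous
```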
## IV A global $Z_{4}$ anomaly
In the last section we found that a system of free Kähler-Dirac fermions
propagating on an even dimensional space is anomalous and remains invariant
only under a $Z_{4}$ symmetry. In this section we will examine such a theory
in the presence of interactions and show that this residual $Z_{4}$ symmetry
suffers from a global anomaly unless the theory contains multiples of four
Kähler-Dirac fields.
From eqn. 10 it is clear that the effective action for a reduced Kähler-Dirac
fermion is given by a Pfaffian ${\rm Pf}(K)$. From the property
$[\Gamma,K]_{+}=0$ it is easy to show that ${\rm det}\,(K)\geq 0$. However
since ${\rm Pf}\,(K)=\pm\sqrt{{\rm det}\,(K)}$ there is an ambiguity in the
phase of the Pfaffian.
To analyze this in more detail we will consider the theory on a torus with
periodic boundary conditions and deform the theory to remove a potential zero
mode. The simplest possibility is to couple a pair of such reduced Kähler-
Dirac fields to an auxiliary real scalar field $\sigma$. The fermion operator
is then given by
$M=\delta^{ab}K+\sigma(x)\epsilon^{ab}.$ (31)
We will assume that the total action (including terms involving just $\sigma$)
is invariant under a discrete symmetry which extends the fermionic $Z_{4}$
discussed in the previous section:
$\displaystyle\Phi\to i\Gamma\Phi$ (32)
$\displaystyle\sigma\to-\sigma.$ (33)
Notice that this fermion operator is antisymmetric and real and hence all
eigenvalues of $M$ lie on the imaginary axis. Let us define the Pfaffian as
the product of the eigenvalues in the upper half plane in the background of
some reference configuration $\sigma=\sigma_{0}={\rm constant}$. By continuity
we define the Pfaffian to be the product of these same eigenvalues under
fluctuations of $\sigma$. Furthermore, it is easy to see that as a consequence
of the $Z_{4}$ symmetry
$\Gamma M\left(\sigma\right)\Gamma=-M\left(-\sigma\right).$ (34)
This result shows that the spectrum and hence the determinant is indeed
invariant under the $Z_{4}$ transformation $\sigma\to-\sigma$. But this is not
enough to show the Pfaffian itself is unchanged since there remains the
possibility that eigenvalues flow through the origin as $\sigma$ is deformed
smoothly to $-\sigma$ leading to a sign change. To understand what happens we
consider a smooth interpolation of $\sigma$:
$\sigma(s)=s\sigma_{0}\quad s\in\left(-1,+1\right).$ (35)
The question of eigenvalue flow can be decided by focusing on the behavior of
the eigenvalues of the fermion operator closest to the origin at small $s$. In
this region the eigenvalues of smallest magnitude correspond to fields which
are constant over the lattice and satisfy the eigenvalue equation:
$\sigma_{0}s\epsilon^{ab}v^{b}=\lambda v^{a}.$ (36)
The two eigenvalues are $\lambda=\pm i\sigma_{0}s$. Clearly these eigenvalues
change sign as $s$ varies from positive to negative values, leading to a
Pfaffian sign change. This can also be seen explicitly from eqn. 34 since
${\rm Pf}\,\left[M(-\sigma)\right]={\rm det}\,\left[\Gamma\right]{\rm
Pf}\,\left[M(\sigma)\right]=-{\rm Pf}\,\left[M(\sigma)\right].$ (37)
We thus learn that the Pfaffian of the 2 flavor system indeed changes sign
under the $Z_{4}$ transformation. On integration over $\sigma$ the value of
any even function of $\sigma$ including the partition function would then
yield zero rendering expectation values of $Z_{4}$ invariant operators ill-
defined. This corresponds to a global anomaly for the discrete $Z_{4}$
symmetry in the interacting theory.
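The mechanism of eqns. (31)-(37) can be illustrated on a finite toy operator. In the sketch below (entirely our own construction) $K$ is a random real antisymmetric matrix anticommuting with a grading $\Gamma$, standing in for the Kähler-Dirac operator; the check verifies eqn. (34), the non-negativity and $\sigma$-evenness of the determinant, and the sign flip of the constant-mode Pfaffian of eqn. (36):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for eqn (31): M^{ab} = delta^{ab} K + sigma * eps^{ab}, where K
# is a random real antisymmetric matrix anticommuting with a grading Gamma
# (a finite-dimensional model of the Kahler-Dirac operator, not the real one).
n = 2
A = rng.standard_normal((n, n))
K = np.block([[np.zeros((n, n)), A], [-A.T, np.zeros((n, n))]])
Gam = np.diag([1.0] * n + [-1.0] * n)
assert np.allclose(Gam @ K + K @ Gam, 0)   # [Gamma, K]_+ = 0
assert np.allclose(K.T, -K)                # K antisymmetric

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # eps^{ab} for the two flavors

def M(sigma):
    return np.kron(np.eye(2), K) + sigma * np.kron(eps, np.eye(2 * n))

G2 = np.kron(np.eye(2), Gam)
sigma0 = 0.7

# eqn (34): Gamma M(sigma) Gamma = -M(-sigma)
assert np.allclose(G2 @ M(sigma0) @ G2, -M(-sigma0))

# det(M) >= 0 and is even in sigma ...
assert np.linalg.det(M(sigma0)) >= 0
assert np.isclose(np.linalg.det(M(sigma0)), np.linalg.det(M(-sigma0)))

# ... but the constant mode of eqn (36) has eigenvalues +/- i sigma0 s, so its
# Pfaffian sigma0*s flips sign as s runs from +1 to -1.
for s in (-1.0, 1.0):
    lam = np.linalg.eigvals(sigma0 * s * eps)
    assert np.allclose(sorted(lam.imag), [-abs(sigma0 * s), abs(sigma0 * s)])
```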
Clearly this global anomaly will be cancelled for any multiple of four reduced
Kähler-Dirac fields provided that they couple via a Yukawa term of the form
$\sigma(x)\Phi^{a}C^{ab}\Phi^{b}$ with $C$ a real, antisymmetric matrix. In
that case $C$ can be brought to a canonical antisymmetric form
$\left(\lambda_{1}i\sigma_{2}\otimes\lambda_{2}i\sigma_{2}\otimes...\right)$
using a non-anomalous orthogonal transformation. Positivity of the Pfaffian
under $\sigma\to-\sigma$ then depends on the Yukawa interaction in $M$
containing an even number of such $2\times 2$ blocks. Decomposing the reduced
Kähler-Dirac field in a flat background into spinors we see that anomaly
cancellation occurs for eight or sixteen Majorana fermions in two and four
dimensions respectively.
The spectral flow argument we have given is similar to the one given by Witten
in showing that a single Weyl fermion in the fundamental representation of
$SU(2)$ is anomalous Witten (1982).
## V Symmetric mass generation
The cancellation of anomalies is crucial to the problem of giving masses to
fermions without breaking symmetries. Since anomalies originate from massless
states, any phase where all states are massive in the IR must necessarily
arise from a UV theory with vanishing anomaly. In particular, it is only
possible to accomplish such symmetric mass generation if one cancels off the
’t Hooft anomalies for all global symmetries Razamat and Tong (2021); Tong
(2021).
In the previous section we have seen that only multiples of four reduced
Kähler-Dirac fields have vanishing $Z_{4}$ anomaly. Thus we require that any
interactions we introduce in the theory respect this symmetry. The simplest
such interaction is a four fermion operator as we have already discussed. It
corresponds to adding a simple $\int\sigma^{2}$ term to the Yukawa action
discussed in the previous section.
We should note however that cancellation of anomalies is a necessary condition
for symmetric mass generation but it may not be sufficient – the fact that
four fermion terms are perturbatively irrelevant operators in dimensions
greater than two may mean that a more complicated scalar action is required –
indeed this was the finding of numerical work in four dimensions where a
continuous phase transition to a massive symmetric phase was found only by
tuning an additional scalar kinetic term Butt et al. (2018).
With this caveat it is useful to give examples of possible four fermion terms
that might lead to symmetric mass generation. For example, one can imagine
taking four reduced Kähler-Dirac fields transforming in the fundamental
representation of an $SO(4)$ symmetry and employing the term
$\int d^{D}x\,\sqrt{g}\,\left(\left[\Phi^{a},\Phi^{b}\right]_{+}\right)^{2}$
(38)
where the $+$ subscript indicates that the fermion bilinear is projected onto
the self-dual $(1,0)$ representation of $SO(4)$. In practice this can be
implemented in flat space via a Yukawa term which is given in the matrix
representation by
$\int d^{D}x\,G\,{\rm
Tr}\,\left(\Psi^{a}(x)\Psi^{b}(x)\right)_{+}\sigma_{ab}(x)+\frac{1}{2}\sigma^{2}_{ab}(x).$
(39)
Notice that this Yukawa interaction is mediated by a scalar $\sigma_{ab}^{+}$
that also transforms in the self-dual representation of $SO(4)$:
$\sigma_{ab}^{+}=\frac{1}{2}\left(\sigma_{ab}+\frac{1}{2}\epsilon_{abcd}\sigma_{cd}\right).$
(40)
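That the map in eqn. (40) really is a projection onto a three-dimensional (self-dual) subspace of the six antisymmetric components can be verified directly. The check below is our own illustration:

```python
import itertools
import numpy as np

# Levi-Civita symbol in four dimensions.
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    sgn, p = 1, list(perm)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sgn = -sgn
    eps[perm] = sgn

def P(sig):
    """Self-dual projection of eqn (40):
    (P sig)_ab = (sig_ab + eps_abcd sig_cd / 2) / 2."""
    return 0.5 * (sig + 0.5 * np.einsum('abcd,cd->ab', eps, sig))

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
sig = X - X.T                      # generic antisymmetric sigma_ab
assert np.allclose(P(P(sig)), P(sig))        # P is a projector

# The images of the 6 antisymmetric basis elements span a 3-dim subspace:
# the self-dual (1,0) representation.
rows = []
for a in range(4):
    for b in range(a + 1, 4):
        E = np.zeros((4, 4))
        E[a, b], E[b, a] = 1, -1
        rows.append(P(E)[np.triu_indices(4, 1)])
assert np.linalg.matrix_rank(np.array(rows), tol=1e-10) == 3
```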
In the next section we show how Kähler-Dirac theories can be discretized in a
manner which leaves the anomaly structure of the theory intact and results in
theories of (reduced) staggered fermions. The four fermion interaction that
results from integrating over $\sigma_{+}$ in eqn. 39 has been studied using
numerical simulation and the results of this work indeed provide evidence of a
massive symmetric phase in dimensions two and three Ayyar and Chandrasekharan
(2015); Ayyar and Chandrasekharan (2016a, b); Catterall (2016) while an
additional scalar kinetic operator was also needed in four dimensions Butt et
al. (2018).
As another example one can take eight flavors of reduced Kähler-Dirac field
which are taken to transform in the eight dimensional real spinor
representation of ${\rm Spin}(7)$. An appropriate Yukawa term which might be
used to gap those fermions is given by
$\int d^{D}x\,{\rm
Tr}\,\left(\Psi^{a}(x)\Gamma^{ab}_{\mu}\Psi^{b}(x)\right)\sigma_{\mu}(x)$ (41)
where $\Gamma_{\mu},\mu=1\ldots 7$ are the (real) Dirac matrices for ${\rm
Spin}(7)$ You and Xu (2015). This interaction was shown by Fidkowski and
Kitaev to gap out boundary Majorana modes in a (1+1)-dimensional system
without breaking symmetries Fidkowski and Kitaev (2010). This interaction may
also play a role in constructing Kähler-Dirac theories that target GUT models.
For example, if one is able to gap out the $(E,E^{\prime})$ blocks occurring
in eqn. 18 for a reduced Kähler-Dirac field valued in ${\rm Spin}(7)$ the
remaining light fields live in the representation $(8,2,1)$. If the ${\rm
Spin}(7)$ is subsequently Higgsed to ${\rm Spin}(6)=SU(4)$ then this
representation breaks to $(4,2,1)\oplus(\overline{4},1,2)$ which is the field
content of the Pati-Salam theory Pati (1975).
## VI Exact anomalies for lattice fermions
One of the most important properties of the Kähler-Dirac equation is that it
can be discretized without encountering fermion doubling Rabin (1982); Becher
and Joos (1982). Furthermore this discretization procedure can be done for any
random triangulation of the space allowing one to capture topological features
of the spectrum. The idea is to replace continuum p-forms by p-cochains or
lattice fields living on oriented p-simplices in the triangulation. The
exterior derivative and its adjoint are mapped to co-boundary and boundary
operators which act naturally on these p-simplices and retain much of the
structure of their continuum cousins – for example they are both nilpotent
operators. Homology theory can then be used to show that the spectrum of the
discrete theory evolves smoothly into its continuum cousin as the lattice
spacing is sent to zero – there are no additional lattice modes or doublers
that obscure the continuum limit. Furthermore, the number of exact zero modes
of the lattice Kähler-Dirac operator is exactly the same as found in the
continuum theory. This immediately suggests that the anomaly encountered
earlier which depends only on the topology of the background space can be
exactly reproduced in the lattice theory. This was confirmed in Catterall et
al. (2018) where a lattice calculation revealed precisely the same
gravitational anomaly derived in this paper.
If one restricts to regular lattices with the topology of the torus it is
straightforward to see that the discrete Kähler-Dirac operator discussed above
can be mapped to a staggered lattice fermion operator on a lattice with half
the lattice spacing Banks et al. (1982). One simply maps the p-form components
located in the matrix $\Psi^{a}$ into a set of single component lattice
fermions $\chi^{a}$ via
$\Psi(x)=\sum_{n_{\mu}=0,1}\chi(x+n_{\mu})\gamma^{\left(x+n_{\mu}\right)}$
(42)
where $\gamma^{x}=\prod_{i=1}^{D}\gamma_{i}^{x_{i}}$ and the summation runs
over the $2^{D}$ points in a unit hypercube of a regular lattice. If one
substitutes this expression into the continuum kinetic term, replaces the
continuum derivative with a symmetric finite difference and carries out the
trace operation one obtains the free staggered fermion action. Indeed the
operator $\Gamma$ acting on forms then becomes the site parity operator
$\epsilon(x)=\left(-1\right)^{\sum_{i=1}^{D}x_{i}}$ and the $U(1)$ symmetry of
the massless Kähler-Dirac action is just the familiar $U(1)_{\epsilon}$
symmetry of staggered fermions. Indeed, it is possible to repeat the arguments
of section IV to show that a staggered fermion theory equipped with a four
fermion term is only well-defined for multiples of four reduced staggered
fermions under which the classical $Z_{4}$ symmetry is preserved. This helps
to explain why these theories seem capable of generating a massive symmetric
phase Ayyar and Chandrasekharan (2015); Ayyar and Chandrasekharan (2016a, b);
Catterall (2016); Butt et al. (2018).
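The identification of $\Gamma$ with the site parity can be checked explicitly: since $\gamma_{5}$ anticommutes with each $\gamma_{i}$, conjugating $\gamma^{x}=\prod_{i}\gamma_{i}^{x_{i}}$ by $\gamma_{5}$ on both sides produces exactly $\epsilon(x)=(-1)^{\sum_{i}x_{i}}$. A short numerical sketch (with an assumed chiral representation of the gamma matrices):

```python
import itertools
import numpy as np

# Assumed chiral representation of the D = 4 Euclidean gamma matrices.
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
gamma = [np.kron(sx, p) for p in (sx, sy, sz)] + [np.kron(sy, np.eye(2))]
g5 = gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# gamma^x = prod_i gamma_i^{x_i} over the 2^4 corners x of the unit hypercube.
for x in itertools.product((0, 1), repeat=4):
    gx = np.eye(4, dtype=complex)
    for i, xi in enumerate(x):
        if xi:
            gx = gx @ gamma[i]
    eps_x = (-1) ** sum(x)
    # Gamma acts on Psi by gamma5 on both sides; on the basis element gamma^x
    # this is exactly multiplication by the site parity epsilon(x).
    assert np.allclose(g5 @ gx @ g5, eps_x * gx)
```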
## VII Summary
In this paper we have shown that theories of massless Kähler-Dirac fermions
suffer from a gravitational anomaly that breaks a $U(1)$ symmetry down to
$Z_{4}$ in even dimensions. We derive this anomaly by computing the symmetry
variation of the path integral for free Kähler-Dirac fermions propagating in a
background curved space. The remaining $Z_{4}$ prohibits fermion bilinear mass
terms from arising in the quantum effective action. We then use spectral flow
arguments to argue that multiples of four flavors of Kähler-Dirac are needed
to avoid a further global anomaly in this $Z_{4}$ symmetry in the presence of
interactions. Since four fermion interactions are allowed by these constraints
we argue that they may be capable of gapping such systems without breaking
symmetries.
In flat space each reduced Kähler-Dirac field transforming in a real
representation can be decomposed into $2^{D/2}$ Majorana fermions. Thus
anomaly cancellation in the interacting theory dictates a very specific
fermion content - multiples of eight and sixteen Majorana fermions in two and
four dimensions respectively. Remarkably, this fermion counting agrees with
independent constraints based on the cancellation of the chiral fermion parity
and spin-$Z_{4}$ symmetries of Weyl fermions in two and four dimensions
García-Etxebarria and Montero (2019); Wang and Wen (2020). (One can also
decompose a Kähler-Dirac fermion into two and four Majorana spinors in one and
three dimensions respectively. Building four fermion operators for these
Kähler-Dirac fields then yields theories with eight and sixteen Majorana
spinors, which is also in agreement with results from odd dimensional
topological insulators.)
Finally we discuss how this anomaly can be realized exactly in lattice
realizations of such systems and emphasize how the results in this paper shed
light on the appearance of massive symmetric phases in recent simulations of
lattice four fermion models. The appearance of an anomaly in a lattice system
is notable as it contradicts the usual folklore that anomalies only appear in
systems with an infinite number of degrees of freedom.
While the anomaly vanishes for closed odd dimensional manifolds it is non-zero
for odd dimensional manifolds with boundary. For example the Euler
characteristic of the three ball is $\chi(B^{3})=1$ and the symmetry in the
bulk is hence broken to $Z_{2}$ allowing for the presence of mass terms.
However the boundary fields living on $S^{2}$ possess an enhanced $Z_{4}$
symmetry prohibiting such bilinear terms and we learn that such boundary
fields can instead be gapped using four fermion interactions.
###### Acknowledgements.
This work was supported by the US Department of Energy (DOE), Office of
Science, Office of High Energy Physics under Award Number DE-SC0009998. SC is
grateful for helpful discussions with Erich Poppitz, David Tong and Juven
Wang.
## Appendix A Delta function
Following Fujikawa (1980)
$\delta(x-x^{{}^{\prime}})=\int\frac{d^{4}k}{(2\pi)^{4}}e^{ik_{\mu}D^{\mu}\sigma(x,x^{{}^{\prime}})}$
where $D_{\mu}$ is now the covariant derivative appropriate to the Kähler-Dirac
field and $\sigma(x,x^{{}^{\prime}})$ is the geodesic biscalar [a generalization of
$\frac{1}{2}(x-x^{{}^{\prime}})^{2}$ in flat space] defined by
$\sigma(x,x^{{}^{\prime}})=\frac{1}{2}g^{\mu\nu}D_{\mu}\sigma(x,x^{{}^{\prime}})D_{\nu}\sigma(x,x^{{}^{\prime}})$
with
$\sigma(x,x)=0$
and
$\lim_{x\to
x^{{}^{\prime}}}D_{\mu}D^{\nu}\sigma(x,x^{{}^{\prime}})=g^{\nu}_{\mu}.$
Now,
$\displaystyle\begin{split}D^{2}\delta(x-x^{{}^{\prime}})&=D^{\nu}\int\frac{d^{4}k}{(2\pi)^{4}}[ik_{\lambda}D_{\nu}D^{\lambda}\sigma\\\
&\qquad+\frac{1}{2!}\{(D_{\nu}(ik\cdot D\sigma))(ik\cdot D\sigma)\\\
&\quad\qquad+(ik\cdot D\sigma)(D_{\nu}(ik\cdot D\sigma))\}+...]\end{split}$
(43)
$\displaystyle\begin{split}&=\int\frac{d^{4}k}{(2\pi)^{4}}[D^{\nu}(ik_{\lambda}D_{\nu}D^{\lambda}\sigma)\\\
&\qquad+(ik_{\lambda}D_{\nu}D^{\lambda}\sigma)(ik_{\rho}D^{\nu}D^{\rho}\sigma)+...].\end{split}$
(44)
Taking $\lim_{x\to x^{{}^{\prime}}}$, the other terms represented by the
ellipsis vanish, and we obtain
$\displaystyle D^{2}\delta(x-x^{{}^{\prime}})$
$\displaystyle=\int\frac{d^{4}k}{(2\pi)^{4}}[(ik_{\lambda}g_{\nu}^{\lambda})(ik_{\rho}g^{\nu\rho})+D^{\nu}(ik_{\lambda}g_{\nu}^{\lambda})]$
(45)
$\displaystyle=\int\frac{d^{4}k}{(2\pi)^{4}}[(ik_{\nu})(ik^{\nu})+iD^{\nu}(k_{\nu})]$
(46) $\displaystyle=\int\frac{d^{4}k}{(2\pi)^{4}}[-k^{2}+iD^{\nu}(k_{\nu})].$
(47)
This implies
$\lim_{x\to
x^{{}^{\prime}}}e^{D^{2}}\delta(x-x^{{}^{\prime}})=\int\frac{d^{4}k}{(2\pi)^{4}}e^{[-k^{2}+iD^{\nu}(k_{\nu})]}.$
(48)
Hence,
$\displaystyle\lim_{x\to
x^{{}^{\prime}}}e^{D^{2}/M^{2}}\delta(x-x^{{}^{\prime}})$
$\displaystyle=\int\frac{d^{4}k}{(2\pi)^{4}}e^{[-k^{2}+iD^{\nu}(k_{\nu})]/M^{2}}$
(49)
$\displaystyle=M^{4}\int\frac{d^{4}k}{(2\pi)^{4}}e^{[-k^{2}+iD^{\nu}(k_{\nu})/M]}$
(50) $\displaystyle=M^{4}\frac{1}{16\pi^{2}}.$ (51)
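The final Gaussian integral is elementary and can be double-checked in two ways (our arithmetic, not from the paper):

```python
from math import pi, sqrt

# Factorized form: \int d^4k/(2 pi)^4 e^{-k^2} = (\int dk e^{-k^2} / (2 pi))^4
# = (sqrt(pi)/(2 pi))^4 = 1/(16 pi^2), the prefactor of M^4 in eqn (51).
factorized = (sqrt(pi) / (2 * pi)) ** 4

# Radial cross-check: \int d^4k e^{-k^2} = 2 pi^2 \int_0^inf dk k^3 e^{-k^2}
# = 2 pi^2 * (1/2) = pi^2, and pi^2/(2 pi)^4 = 1/(16 pi^2) again.
radial = 2 * pi ** 2 * 0.5 / (2 * pi) ** 4

assert abs(factorized - 1 / (16 * pi ** 2)) < 1e-15
assert abs(radial - factorized) < 1e-15
```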
## References
* Banks et al. (1982) T. Banks, Y. Dothan, and D. Horn, Phys. Lett. B117, 413 (1982).
* Fujikawa (1980) K. Fujikawa, Phys. Rev. D 21, 2848 (1980).
* García-Etxebarria and Montero (2019) I. García-Etxebarria and M. Montero, JHEP 08, 003 (2019), eprint 1808.00009.
* Wan and Wang (2020) Z. Wan and J. Wang, JHEP 07, 062 (2020), eprint 1910.14668.
* Fidkowski and Kitaev (2010) L. Fidkowski and A. Kitaev, Phys. Rev. B 81, 134509 (2010), eprint 0904.2197.
* Ryu and Zhang (2012) S. Ryu and S.-C. Zhang, Phys. Rev. B 85, 245132 (2012), eprint 1202.4484.
* Kapustin et al. (2015) A. Kapustin, R. Thorngren, A. Turzillo, and Z. Wang, JHEP 12, 052 (2015), eprint 1406.7329.
* You and Xu (2015) Y.-Z. You and C. Xu, Phys. Rev. B91, 125147 (2015), eprint 1412.4784.
* Morimoto et al. (2015) T. Morimoto, A. Furusaki, and C. Mudry, Phys. Rev. B92, 125104 (2015), eprint 1505.06341.
* You et al. (2018) Y.-Z. You, Y.-C. He, C. Xu, and A. Vishwanath, Phys. Rev. X 8, 011026 (2018), eprint 1705.09313.
* Wang and Wen (2020) J. Wang and X.-G. Wen, Phys. Rev. Res. 2, 023356 (2020), eprint 1809.11171.
* Guo et al. (2020) M. Guo, K. Ohmori, P. Putrov, Z. Wan, and J. Wang, Commun. Math. Phys. 376, 1073 (2020), eprint 1812.11959.
* Ayyar and Chandrasekharan (2015) V. Ayyar and S. Chandrasekharan, Phys. Rev. D 91, 065035 (2015), eprint 1410.6474.
* Ayyar and Chandrasekharan (2016a) V. Ayyar and S. Chandrasekharan, Phys. Rev. D 93, 081701 (2016a), eprint 1511.09071.
* Ayyar and Chandrasekharan (2016b) V. Ayyar and S. Chandrasekharan, JHEP 10, 058 (2016b), eprint 1606.06312.
* Catterall (2016) S. Catterall, JHEP 01, 121 (2016), eprint 1510.04153.
* Catterall and Butt (2018) S. Catterall and N. Butt, Phys. Rev. D 97, 094502 (2018), eprint 1708.06715.
* Catterall et al. (2018) S. Catterall, J. Laiho, and J. Unmuth-Yockey, JHEP 10, 013 (2018), eprint 1806.07845.
* Kahler (1962) E. Kähler, Rend. Math 3-4 21, 425 (1962).
* Hands (2021) S. Hands, Symmetry 13, 1523 (2021), eprint 2105.09646.
* Wu et al. (2019) X.-C. Wu, Y. Xu, C.-M. Jian, and C. Xu, Phys. Rev. B 100, 155138 (2019), eprint 1906.07191.
* Jian and Xu (2020) C.-M. Jian and C. Xu, Phys. Rev. B 101, 035118 (2020), eprint 1907.08613.
* Witten (1982) E. Witten, Phys. Lett. B 117, 324 (1982).
* Razamat and Tong (2021) S. S. Razamat and D. Tong, Phys. Rev. X 11, 011063 (2021), eprint 2009.05037.
* Tong (2021) D. Tong (2021), eprint 2104.03997.
* Butt et al. (2018) N. Butt, S. Catterall, and D. Schaich, Phys. Rev. D 98, 114514 (2018), eprint 1810.06117.
* Pati (1975) J. C. Pati and A. Salam, Phys. Rev. D 10, 275 (1975).
* Rabin (1982) J. M. Rabin, Nucl. Phys. B 201, 315 (1982).
* Becher and Joos (1982) P. Becher and H. Joos, Z. Phys. C 15, 343 (1982).
# Three models of non-perturbative quantum-gravitational binding
Jan Smit
Institute for Theoretical Physics, University of Amsterdam,
Science Park 904, P.O. Box 94485, 1090 GL, Amsterdam, the Netherlands.
###### Abstract:
Known quantum and classical perturbative long-distance corrections to the
Newton potential are extended into the short-distance regime using evolution
equations for a ‘running’ gravitational coupling, which is used to construct
examples of non-perturbative potentials for the gravitational binding of two
particles. Model-I is based on the complete set of the relevant Feynman
diagrams. Its potential has a singularity at a distance below which it becomes
complex and the system gets black hole-like features. Model-II is based on a
reduced set of diagrams and its coupling approaches a non-Gaussian fixed point
as the distance is reduced. Energies and eigenfunctions are obtained and used
in a study of time-dependent collapse (model-I) and bouncing (both models) of
a spherical wave packet. The motivation for such non-perturbative ‘toy’ models
stems from a desire to elucidate the mass dependence of binding energies found
25 years ago in an explorative numerical simulation within the dynamical
triangulation approach to quantum gravity. Models I and II indeed suggest an
explanation of this mass dependence, in which the Schwarzschild scale plays a
role. An estimate of the renormalized Newton coupling is made by matching with
the small-mass region. Comparison of the dynamical triangulation results for
mass renormalization with ‘renormalized perturbation theory’ in the continuum
leads to an independent estimate of this coupling, which is used in an
improved analysis of the binding energy data.
quantum gravity, lattice theory
## 1 Introduction
An explorative numerical computation of two-particle binding was performed
within the original time-space symmetrical dynamical triangulation (SDT)
approach to quantum gravity [1]. (SDT is customarily known as Euclidean
dynamical triangulation (EDT); the acronym was introduced earlier by the
author to emphasise the difference with causal dynamical triangulation (CDT).
It is not ideal, but we shall keep it here to distinguish this approach from
other versions of EDT mentioned later.) The binding energies found were
puzzling in their
dependence on the masses of the particles. In the present paper we study
models in the continuum with the aim of improving our acquaintance with
possible mass dependencies of binding energies and then return to the SDT
results.
These continuum models are derived from one-loop perturbative corrections to
the Newton potential, which include quantum gravitational contributions [2, 3]
as well as classical ones in which $\hbar$ cancels [4, 5]. At large distances
the corrections are independent of the UV regulator. The calculations were
interpreted within effective field theory [2, 3] and were subsequently also
carried out by other authors, as discussed in [6, 7], whose final results we
use in this work. In [7] it was observed that the quantum contributions
from the subset of one-particle-reducible (1PR) diagrams (‘dressed one-particle
exchange’) suggested a ‘running’ gravitational coupling depending on the
distance scale, a simple example of a renormalization-group type evolution
with a non-Gaussian fixed point [8]. Later such running was found to be not
universally applicable [9, 10]. However, similar running couplings including
also the classical contributions are employed here, solely for the
construction of non-perturbative (‘toy’) models of quantum gravitational
binding.
The models are specified by a running potential
$V_{\rm r}=-\tilde{G}m^{2}r\,,$ (1)
in which a dimensionless running coupling $\tilde{G}$ satisfies an evolution
equation with an asymptotic condition
$-r\frac{\partial\tilde{G}}{\partial
r}=\beta(\tilde{G},\sqrt{G}\,m)\,,\qquad\tilde{G}\to\frac{G}{r^{2}}\,,\quad
r\to\infty\,,$ (2)
where $G$ is the Newton constant. (We use units in which $\hbar=c=1$; we
shall also use a Planck length $\ell_{\rm P}=G^{1/2}$, a Planck mass $m_{\rm
P}=G^{-1/2}$ and, when convenient, units $G=1$.) For convenience, we shall
call models using the potential $-Gm^{2}/r$ ‘Newton models’. The ‘beta
function’ $\beta$
depends on the (equal) mass $m$ of the particles through the classical
perturbative corrections. For large masses the classical terms in $\beta$ tend
to dominate and dropping the quantum part leads to simpler ‘classical
evolution models’.
Model-I starts from the long-distance potential including all one-loop
corrections [6]. It leads to an evolution with singularities at a distance
$r_{\rm s}$. We interpret the singularities as distributions, which enables
continuing the running past $r_{\rm s}$ to zero distance where the potential
vanishes. When $r$ passes $r_{\rm s}$ the potential gets an imaginary part.
For large particle masses $r_{\rm s}\approx 3Gm$; it is of order of the
Schwarzschild radius of the two-particle system and the model has black hole-
like features, such as absorbing probability out of the two-particle wave
function. Model-II uses only the 1PR contributions. Its evolution of
$\tilde{G}$ has a non-Gaussian fixed point, the potential is regular and real
for all $r\geq 0$ and it has a minimum at a distance $r_{\rm min}$. For large
masses $r_{\rm min}\approx Gm$, hence also of order of the Schwarzschild
radius. (The single point particles have no horizon; the long-distance
corrections and the beta functions derived from them do not contain a ‘back
reaction’ of the particles on the geometry.)
For the most part in this work the models are equipped with a non-relativistic
kinetic energy operator $K$. However, a relativistic kinetic energy operator
$K_{\rm rel}$ gives interesting qualitatively different results in model I.
For example, with the simpler classical-evolution potential, classical
particles falling in from a distance $r>r_{\rm s}$ obtain the velocity of
light when reaching the singularity at $r_{\rm s}$, as happens for particles
approaching a Schwarzschild black hole horizon [11]. In model-II such
particles’ maximal velocity stays below that of light. For brevity the models
with $K_{\rm rel}$ will be dubbed ‘relativistic models’. (Also in the
relativistic Newton model the particle velocity reaches that of light, but at
zero distance.) Models with an energy-independent potential and relativistic
kinetic energy can sometimes describe interesting physics. For example, such a
model can describe the linear relation between spin and squared-mass of
hadrons [12]. But the relativistic models I and II are qualitative and not
intended to describe merging black holes and neutron stars as done in
sophisticated Effective One Body models [13, 14, 15]. The field theoretic
introduction of particles in the SDT computation is also relativistic.
Computations of binding energies lead naturally to more general knowledge of
the spectrum of eigenvalues and eigenfunctions of the Hamiltonian. It is fun
and instructive to exploit this by studying also the time development of a
spherical wave packet released at a large distance. It shows oscillatory
bouncing and falling back in both models, and in model-I during decay.
In the SDT study [1], the binding energy $E_{\rm b}$ was found to increase
only moderately with $m$ (it had even decreased at the largest mass), a
behavior differing very much from the rapid increase of $E_{\rm
b}=G^{2}m^{5}/4$ in the Newton model. Finite-size effects, although presumably
present, were expected to diminish with increasing mass (the effective extent
of the wave function was assumed $\propto 1/m$). An important clue for a
renewed interpretation here of the results is suggested by the fact that in
models I and II, at relatively large masses, the bound state wave function is
maximal near $r_{\rm s}$ and $r_{\rm min}$, respectively. Since these scales grow
with $m$ this suggests that finite-size effects become larger with increasing
mass. We estimate the renormalized $G$ by matching to Newtonian behavior in
the small mass region. This is helped by renormalized perturbation theory,
which provides independent estimates of $G$ from the SDT results for mass
renormalization.
Mentioning some aspects of DT may be useful here, although this is not the
place to give even a brief proper review. Depending on the bare Newton
coupling, the pure gravity model has two phases. (In lattice QCD, using the
pure gauge theory for computing hadron masses is called the ‘quenched’
approximation, or ‘valence’ approximation since it lacks dynamical fermion
loops. The long-distance corrected Newton potential contains no massive scalar
loops, and our bound state calculations based on it are in this sense quenched
approximations.) Deep
in the weak-coupling phase the computer-generated simplicial configurations
contain baby universes assembled in tree-like structures with ‘branched
polymer’ characteristics—hence the name ‘elongated phase’—very different from
a four-sphere representing de Sitter space in imaginary time. Deep in the
strong-coupling phase the configurations contain ‘singular structures’, such
as a vertex embedded in a macroscopic volume within one lattice spacing—hence
the name ‘crumpled phase’. Only close to the transition between the phases do
the average spacetimes, as used in [1], have approximately the properties of a
four-sphere. The transition was found to be of first order [16, 17, 18] whereas
many researchers were looking for a second- or higher-order critical point at
which a continuum limit might be taken. Primarily for these reasons ‘causal
dynamical triangulation’ (CDT) was introduced, which has a phase showing a de
Sitter-type spacetime, with fluctuations enabling a determination of a
renormalized Newton coupling, and furthermore a distance-dependent spectral
dimension showing dimensional reduction at short distances [19, 20]. Another
continuation of dynamical triangulation research uses a ‘measure term’, which,
when written as an addition to the action involves a logarithmic dependence on
the curvature [21, 22, 23, 24]. Evidence was obtained for a non-trivial fixed
point scenario in which the above 1st-order critical point is closely passed
on the crumpled side by a trajectory in a plane of coupling constants towards
the continuum limit (cf. [24] and references therein). (The strong-coupling
side of the phase transition was also judged as physics-favoured in another
lattice formulation approach to quantum gravity, see e.g. the review [25].)
Scaling of the spectral dimension was instrumental in determining relative
lattice spacings [26] and evidence for the possibility of a continuum limit
was also found in the spectrum of Kähler-Dirac fermions [27].
Returning to the original SDT, reference [28] gives a continuum interpretation
of average SDT spacetimes in terms of an approximation by an agglomerate of
4-spheres making up a branched polymer in the elongated phase, and a four-
dimensional negatively-curved hyperbolic space in the crumpled phase. (By
modeling the continuum path integral using such approximate saddle points one
also finds a first-order phase transition [29].) A scaling analysis in the
crumpled phase, away from the transition, led to an average curvature radius
reaching a finite limit of order of the lattice spacing as the total four-
volume increased to infinity. Similar behavior is expected to hold for the
average radius of the four-spheres in the elongated phase; they have small
volumes (still containing thousands of 4-simplices) and their number increases
with the total volume. Hence also this continuum interpretation implies that
the UV cutoff given by the lattice spacing cannot be removed in SDT. However,
models with a UV cutoff may still be able to describe truly non-perturbative
aspects of quantum Einstein gravity at scales below the cutoff.
Section 2 introduces the one-loop corrected long-distance Newton potentials of
models I and II. When naively extended to short distances these potentials
become more singular than $1/r$, and with a short-distance cutoff we calculate
in section 3 perturbative corrections to the binding energy. Section 4 starts
the derivation of the evolution equation, with a discussion of its properties
in the two models. The running potential is then used to calculate s-wave
binding energies in sections 5 (model-I) and 6 (model-II), with variational
methods and with matrix-diagonalization in a discrete Fourier basis in finite
volume. A pleasant by-product of the latter method is a spectrum of
eigenvalues and eigenfunctions, which is used in section 7 in real-time
calculations of the spherical bouncing and collapse of a wave packet let loose
far from the Schwarzschild-scale region. In section 8 we return to the binding
computation in SDT with an extended discussion of the renormalized mass and
binding energy results. Relating some of the binding energy data to the very
small mass region by a simple phenomenological formula yields an estimate of
the renormalized Newton coupling $G$. The mass renormalization data are used
in independent estimates of $G$, which improve the analysis of the binding
energy results.
Results are summarized in section 9 with a conclusion in section 10. Solutions
of the evolution equations are given in appendix A. Appendix B.1 starts with a
formal definition of model-I and describes some consequences of its
Hamiltonian being symmetric but not Hermitian. Further details of analytical
and numerical treatments are in the remainder of appendix B and in appendix C.
Classical motion of the relativistic classical-evolution models is studied in
appendix D. Appendix E sketches the derivation of a relation between the
renormalized mass and the bare mass of the particles using renormalized
perturbation theory.
## 2 Perturbatively corrected Newton potential
The potential is defined by a Fourier transform of the scattering amplitude of
two scalar particles, in Minkowski spacetime, calculated to one-loop order and
after a non-relativistic reduction [4]. Its long-distance form is UV-finite
and calculable in effective field theory [2, 3]. Graviton loops give non-
analytic terms in the exchange momentum $q$ at $q=0$, which determine the
long-distance corrections. Terms analytic in $q$ correspond to short-distance
behavior. They involve UV divergences; after their subtraction, finite parts
remain with unknown coefficients, which are set to zero in our models. One-
loop effects of the massive particle belong to the analytic type and are
omitted this way. Including the long-distance corrections the potential has
the form
$V=-\frac{Gm_{1}m_{2}}{r}\left[1+d\,\frac{G(m_{1}+m_{2})}{r}+c\,\frac{G}{r^{2}}\right]+\mathcal{O}(G^{3})\,,$
(3)
where $G$ is the Newton coupling. Actually, the $d$ term is a classical
contribution (independent of $\hbar$) coming from classical General Relativity
[4, 5, 7]; the $c$ term is a quantum correction of order $\hbar$.
Calculations were performed in harmonic gauge.
Intuitively one may think that the potential corresponds to dressed one-particle
exchange. This leads to the so-called one-particle-reducible
(1PR) potential [30]. The 1PR scattering amplitude does not include all one-
loop diagrams and it is not gauge invariant. Since we are primarily interested
in models that provide examples of bound-state energies, we accept this lack
of gauge invariance and study also models based on the 1PR potential.
Including all diagrams one arrives at a ‘complete’ potential which may lead to
gauge invariant results when calculating gauge-invariant observables. The
dimensionless ratio of the bound-state energy to the mass of the constituent
particles may be such an observable. The potential is not gauge invariant, as
discussed in [6].
The constants $c$ and $d$ are given by [6]
$\displaystyle d$ $\displaystyle=$ $\displaystyle 3,\qquad
c=\frac{41}{10\pi}\simeq 1.3\,,\;\;\;\;\qquad\qquad\mbox{model-I (complete)}$
(4) $\displaystyle d$ $\displaystyle=$
$\displaystyle-1,\quad\,c=-\frac{167}{30\pi}\simeq-1.8\,.\qquad\qquad\mbox{model-
II (1PR)}$ (5)
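As a quick numerical sanity check of (3)-(5) (a minimal sketch, in units $\hbar=c=1$; the evaluation point $r=10$ and $m=1$ are illustrative choices, not values used elsewhere in the paper):

```python
import numpy as np

# constants of eqs. (4)-(5); model-I: complete one-loop, model-II: 1PR only
models = {"I": (3.0, 41/(10*np.pi)), "II": (-1.0, -167/(30*np.pi))}

def V(r, m, d, c, G=1.0):
    """One-loop corrected potential (3) for equal masses m1 = m2 = m."""
    return -(G*m**2/r)*(1 + 2*d*G*m/r + c*G/r**2)

for name, (d, c) in models.items():
    # size of the corrections relative to the Newton term at r = 10, m = 1
    ratio = V(10.0, 1.0, d, c)/(-1.0/10.0)
    print(f"model-{name}: d = {d:+.0f}, c = {c:+.3f}, V/V_Newton = {ratio:.3f}")
```

At large $r$ the bracket in (3) tends to 1, recovering the Newton potential.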
## 3 Calculations with the 1-loop potential
We continue with equal masses, $m_{1}=m_{2}=m$ and turn to the computation of
the binding energy of the positronium-like system in which $Gm^{2}$ plays the
role of the fine-structure constant. In terms of $f(r)=r\psi(\vec{r})$, with
$\psi(\vec{r})$ the wave function, the time-independent non-relativistic
radial s-wave Schrödinger equation is taken to be
$Hf(r)=E\,f(r),\quad H=K+V_{\rm reg}(r),\quad
K=-\frac{1}{m}\,\frac{\partial^{2}}{\partial r^{2}}\,,$ (6)
where the potential $V_{\rm reg}$ is a regularized version of $V$ in order to
deal with the singular behavior of the $c$ and $d$ terms at the origin. Note
that $m$ is twice the reduced mass. The binding energy is defined as the
negative of the minimum energy
$E_{\rm b}=-E_{\rm min}\,.$ (7)
In case the average squared-velocity $v^{2}=\langle K/m\rangle\gtrsim 1$ we
also study relativistic models with kinetic energy operator
$K_{\rm rel}=2\sqrt{m^{2}-\partial_{r}^{2}},$ (8)
with
$E_{\rm b}=-(E_{\rm min}-2m)\,,\quad v_{\rm
rel}^{2}=\langle-\partial_{r}^{2}/(m^{2}-\partial_{r}^{2})\rangle\,.$ (9)
In the following we shall tacitly be dealing with the non-relativistic model
unless mentioned otherwise.
The zero-loop potential
$V_{0}=-\frac{Gm^{2}}{r}\,,$ (10)
gives the bound-state energy spectrum of the Hydrogen atom with $\alpha\to
Gm^{2}$ and reduced mass $m/2$,
$E_{n}=-\frac{1}{4}\,G^{2}m^{5}\,\frac{1}{n^{2}}\,,\quad n=1,2,\ldots\,,\quad
E_{\rm min}=E_{1}\,.$ (11)
The eigenfunctions are given by
$u_{n}(r,a)=\frac{2ar}{(an)^{5/2}}\,L^{1}_{n-1}\left(\frac{2r}{an}\right)\exp\left(\frac{-r}{an}\right)\,,$
(12)
where $L^{1}_{n-1}$ is the associated Laguerre polynomial ($L^{1}_{0}=1$) and
$a$ the Bohr radius
$a_{\rm B}=\frac{2}{Gm^{3}}\,.$ (13)
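The exactly known spectrum (11) provides a test bed for the matrix-diagonalization method in a discrete Fourier-sine basis in finite volume used in later sections. Below is a minimal sketch of such a check (my own implementation, not the paper's code; units $G=1$, and the box size $L$ and basis size $N$ are illustrative). The Coulomb matrix elements in the sine basis follow from $\int_0^\pi[\cos(ax)-\cos(bx)]\,dx/x=\mathrm{Cin}(b\pi)-\mathrm{Cin}(a\pi)$, with $\mathrm{Cin}(z)=\gamma+\ln z-\mathrm{Ci}(z)$:

```python
import numpy as np
from scipy.special import sici

def cin(z):
    """Cin(z) = gamma + ln z - Ci(z), with Cin(0) = 0."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    nz = z > 0
    out[nz] = np.euler_gamma + np.log(z[nz]) - sici(z[nz])[1]
    return out

def ground_state_energy(m=1.0, L=40.0, N=300, G=1.0):
    """Lowest eigenvalue of H = -(1/m) d^2/dr^2 - G m^2/r in the basis
    phi_k(r) = sqrt(2/L) sin(k pi r/L), k = 1..N, on [0, L]."""
    k = np.arange(1, N + 1)
    # kinetic part: diagonal, (k pi / L)^2 / m
    K = np.diag((k*np.pi/L)**2/m)
    # potential part: V_kl = -(G m^2/L) [Cin((k+l) pi) - Cin(|k-l| pi)]
    kk, ll = np.meshgrid(k, k, indexing="ij")
    V = -(G*m**2/L)*(cin((kk + ll)*np.pi) - cin(np.abs(kk - ll)*np.pi))
    return np.linalg.eigvalsh(K + V)[0]

E0 = ground_state_energy()
print(E0)   # close to E_1 = -G^2 m^5 / 4 = -0.25 of eq. (11)
```

The variational character of the truncation guarantees $E_0\geq E_1$, and the error shrinks rapidly with $N$.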
The non-relativistic binding energy $E_{\rm b}=-E_{1}=G^{2}m^{5}/4$. It
becomes very large as $m$ increases beyond the Planck mass $G^{-1/2}$, and
then also the average squared-velocity of the particles becomes much larger
than the light velocity: $v^{2}=\langle K/m\rangle=G^{2}m^{4}/4$. In the
relativistic version of the model the binding energy can be estimated by a
variational calculation using $u_{1}(r,a)$ as trial wave function with
variational parameter $a$:
$E_{\rm b}=2m-\mathcal{E}(a_{\rm
min})\,,\quad\mathcal{E}(a)=\int_{0}^{\infty}u_{1}(r,a)\,(K_{\rm
rel}+V_{0})\,u_{1}(r,a)=\langle K_{\rm rel}\rangle-\frac{Gm^{2}}{a}\,,$ (14)
where $a_{\rm min}$ is the value of $a$ where $\mathcal{E}(a)$ is minimal.
Starting from small masses, when $m$ increases $a$ decreases from large values
near the Bohr radius towards $a=0$. (The non-relativistic variational result
is exact: $a_{\rm min}=a_{\rm B}$ and $\mathcal{E}(a_{\rm min})=E_{1}$.) The
relativistic $\langle K_{\rm rel}\rangle$ scales like $(\mbox{const.})/a$ as
$a\to 0$, like the potential part of $\mathcal{E}(a)$ but with a
$(\mbox{const.})$ independent of $m$. Consequently there is a maximum mass
$m_{\rm c}$ at which $a$ has reached zero and beyond which there is no minimum
anymore. Since the variational energy provides an upper bound to the exact
energy, the relativistic Newton model has no ground state for $m>m_{\rm c}$.
The calculation is described in appendix B.2:
$m_{\rm c}=\frac{4}{\sqrt{3\pi\,G}}\,,\quad\lim_{m\uparrow m_{\rm
c}}\mathcal{E}=0\,,\quad E_{\rm b}=2m_{\rm c}\,,\qquad\mbox{Newton model}$
(15)
and $v_{\rm rel}^{2}\to 1$ as $m\uparrow m_{\rm c}$ since
$\langle-\partial^{2}\rangle\to\infty$.
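The mechanism behind (15) can be made explicit numerically. In momentum space the trial state $u_1(r,a)$ has the density $|g(p)|^2=(2/\pi)\,16p^2a^3/(1+a^2p^2)^4$ (a standard sine-transform result for the hydrogen-like ground state), from which $a\,\mathcal{E}(a)\to 16/(3\pi)-Gm^2$ as $a\to 0$; the limit changes sign precisely at $m_{\rm c}=4/\sqrt{3\pi G}$. A sketch in units $G=1$ (my own numerics, independent of the appendix B.2 route; the probe values of $a$ and $m$ are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def K_rel_avg(a, m):
    """<K_rel> in the trial state u1(r,a), computed in momentum space
    after substituting u = a p in the density (2/pi) 16 p^2 a^3/(1+a^2 p^2)^4."""
    val, _ = quad(lambda u: np.sqrt((a*m)**2 + u**2)*u**2/(1.0 + u**2)**4,
                  0.0, np.inf)
    return 64.0/(np.pi*a)*val

def E_var(a, m, G=1.0):
    # variational energy (14): <K_rel> - G m^2 <1/r>, with <1/r> = 1/a
    return K_rel_avg(a, m) - G*m**2/a

m_c = 4.0/np.sqrt(3.0*np.pi)        # eq. (15) with G = 1

# m < m_c: E(a) -> +infinity as a -> 0, and a bound state (E < 2m) exists
print(E_var(1e-6, 1.0) > 0, E_var(2.0, 1.0) < 2.0)
# m > m_c: a E(a) -> 16/(3 pi) - m^2 < 0, so E(a) is unbounded below
print(m_c, E_var(1e-6, 1.4) < 0)
```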
Writing $V=V_{0}+V_{1}$ and treating the one-loop contribution $V_{1}$ (order
$G^{2}$ in (3)) as a perturbation, with a simple short-distance cutoff,
$\displaystyle V_{1\,\rm reg}(r)$ $\displaystyle=$ $\displaystyle
V_{1}(r),\quad r>\ell$ (16) $\displaystyle=$ $\displaystyle V_{1}(\ell),\quad
0<r<\ell,$
the perturbative change in the minimum energy is given by
$\displaystyle\Delta E_{1}$ $\displaystyle=$
$\displaystyle\int_{0}^{\infty}dr\,u_{1}(r,a)^{2}\,V_{1\,\rm reg}(r)\,,\qquad
a=a_{\rm B}\,,$ (17) $\displaystyle=$
$\displaystyle-G^{2}m^{2}\left[\frac{4dm}{a^{2}}+\frac{4c}{a^{3}}\left(\frac{1}{3}-\ln\\!\left(\frac{2\ell}{a}\right)-\gamma\right)\right]\left(1+\mathcal{O}(\ell/a)\right)\,,$
(18)
where $\gamma$ is the Euler constant and we assumed $\ell/a\ll 1$. (The $d$
term in (18) is finite as $\ell\to 0$, its presence in $V_{1}$ did not need a
UV cutoff for this calculation.) Choosing $\ell$ equal to the Planck length,
$\sqrt{G}$, this gives for small masses $(m\sqrt{G})^{3}\ll 1$,
$\frac{\Delta
E_{1}}{m}=-d(m\sqrt{G})^{8}-c\left(\frac{1}{6}-\frac{3}{2}\,\ln(m\sqrt{G})-\frac{1}{2}\,\gamma\right)\,(m\sqrt{G})^{10}+\mathcal{O}((m\sqrt{G})^{12})\,.$
(19)
For masses smaller than $\simeq 0.54/\sqrt{G}$ this asymptotic expression is
accurate to better than 10%. The ratio of the $c$ and $d$ terms in (19) is
maximal for $m\sqrt{G}=0.56$.
The perturbative evaluation loses its validity when $|\Delta E_{1}/E_{1}|>1$, which
happens for $m\sqrt{G}\gtrsim 0.54$ and $m\sqrt{G}\gtrsim 0.66$, respectively
in model-I and model-II. At these values the ratio of the binding energy to
the mass is still small, $|E_{1}+\Delta E_{1}|/m=0.041$ in model-I, $\approx$
zero in model-II (for which $\Delta E_{1}$ is positive), while the Bohr radii
are still much larger than the short-distance cutoff: $a_{\rm B}\simeq
13\sqrt{G}$, respectively $\simeq 9\sqrt{G}$.
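The asymptotic result (19) can be compared with direct quadrature of (17) using the cutoff potential (16). A minimal sketch in units $G=1$ with $\ell=\sqrt{G}$ (the mass $m=0.3$ and the integration range are illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

G, ell = 1.0, 1.0             # units G = 1, cutoff ell = sqrt(G)
d, c = 3.0, 41/(10*np.pi)     # model-I constants, eq. (4)

def V1(r, m):
    # one-loop part of (3) for equal masses: V - V0
    return -(G*m**2/r)*(2*d*G*m/r + c*G/r**2)

def V1_reg(r, m):
    # short-distance regularization (16)
    return V1(max(r, ell), m)

def dE1_quad(m):
    """Perturbative shift (17), integrating u1(r, a_B)^2 V1_reg(r)."""
    a = 2.0/(G*m**3)          # Bohr radius (13)
    dens = lambda r: 4.0*r**2/a**3*np.exp(-2.0*r/a)
    val, _ = quad(lambda r: dens(r)*V1_reg(r, m), 0.0, 60.0*a,
                  points=[ell], limit=400)
    return val

def dE1_asym(m):
    """Small-mass asymptotics (19)."""
    x = m*np.sqrt(G)
    return m*(-d*x**8
              - c*(1/6 - 1.5*np.log(x) - 0.5*np.euler_gamma)*x**10)

m = 0.3
print(dE1_quad(m), dE1_asym(m))   # agree at the percent level
```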
There is no physics reason to go to larger masses and treat $V_{1}$ non-
perturbatively, but it is interesting to see what happens. A first estimate is
obtained in a variational calculation using $u_{1}(r,a)$ as a trial wave
function with $a$ as a variational parameter, as in (14) with $K_{\rm rel}\to
K$, $E_{\rm b}\to-\mathcal{E}$, $V_{0}\to V_{0\,{\rm reg}}+V_{1\,{\rm reg}}$
(putting the same cutoff on $V_{0}$ as on $V_{1}$). This estimate can be
improved somewhat by using the $u_{n}(r,a_{\rm min})$, $n=1,\ldots,N$, to
compute matrix elements $H_{mn}$ for conversion to an $N\times N$ matrix
problem (keeping $a_{\rm min}$ fixed by the variational problem at $N=1$). For
$N\geq 3$ basis functions the minimum eigenvalue of this Hamiltonian matrix
appears to converge rapidly to a limiting value, $E_{\rm b}^{(N)}-E_{\rm
b}^{(\infty)}\propto N^{-2}$ and the difference $E_{\rm b}^{(1)}-E_{\rm
b}^{(\infty)}$ is only a few percent outside a crossover regime between small
and large masses. But this convergence is misleading: the exponential fall-off
of the $u_{n}(r,a)$ sets in at increasingly larger $r$ ($\propto n^{2}$), such
that the region around $a_{\rm min}$ where the ground-state wave function is
large is not sampled well at large $n$. Calculations with the Fourier-sine
basis introduced later in (48) indicate that $E_{\rm b}^{(1)}$ is accurate to
about 10% for model-I and 20% for model-II. Here we shall only record
that the large-mass results of the variational calculation are asymptotic to
$\displaystyle\frac{E_{\rm b}}{m}$ $\displaystyle\simeq$
$\displaystyle\frac{6m^{2}G^{2}}{\ell^{2}}\,,\qquad\mbox{model-I, non-running}$ (20) $\displaystyle\simeq$
$\displaystyle\frac{1}{16}\,.\qquad\qquad\mbox{model-II, non-running}$ (21)
(This can be understood as follows. For model-I the result is simply the
absolute minimum of the (large-mass approximation of the) regularized
potential, $V_{\rm reg}(\ell)\simeq-2dm^{3}G^{2}/\ell^{2}=-6m^{3}G^{2}/\ell^{2}$. For
model-II the opposite sign of $V_{1}$ causes it to act like a small-$a$
‘barrier’ in $\mathcal{E}(a)$, similar to the kinetic energy ‘barrier’
$\langle K\rangle=1/(ma^{2})$. At small masses this effect is negligible,
since the kinetic energy pushes $a_{\rm min}$ towards the then large $a_{\rm
B}$. When $m$ increases $a_{\rm min}$ decreases but it does not fall below
$\simeq 11\sqrt{G}$, after which it increases due to the $d$ term in $V_{1}$.
At large masses the potential may be approximated by
$V\simeq-Gm^{2}/r+2m^{3}G^{2}/r^{2}$, which gives
$\mathcal{E}(a)\simeq-m^{2}G/a+4m^{3}G^{2}/a^{2}$ and results in a relatively
small variational $\mathcal{E}(a_{\rm min})\simeq-m/16$, which is $\ell$ and
$G$ independent.)
## 4 Running potential models I & II
A distance-dependent coupling $G^{(1)}_{r}$ can be identified by writing
$V=-\frac{G^{(1)}_{r}m^{2}}{r}\;,$ (22)
and from this a dimensionless $\tilde{G}$:
$\tilde{G}\equiv\frac{G^{(1)}_{r}}{r^{2}}=\frac{G}{r^{2}}+\frac{2dmG^{2}}{r^{3}}+c\frac{G^{2}}{r^{4}}+\mathcal{O}(G^{3})\,.$
(23)
We identify a beta-function for $\tilde{G}$ to order $G^{2}$:
$\displaystyle-r\frac{\partial\tilde{G}}{\partial r}$ $\displaystyle=$
$\displaystyle
2\frac{G}{r^{2}}+6dm\frac{G^{2}}{r^{3}}+4c\frac{G^{2}}{r^{4}}+\mathcal{O}(G^{3})=\beta(\tilde{G},m\sqrt{G})+\mathcal{O}(G^{3})\,,$
(24) $\displaystyle\beta(\tilde{G},m\sqrt{G})$ $\displaystyle=$ $\displaystyle
2\tilde{G}+2dm\sqrt{G}\,\tilde{G}^{3/2}+2c\,\tilde{G}^{2}\,.$ (25)
Here (23) is used to eliminate $r$ to order $G^{2}$ on the r.h.s. of (24).
(For instance, solving (23) for $G/r^{2}$ by iteration:
$G/r^{2}=\tilde{G}-2dm\sqrt{G}\,(G/r^{2})^{3/2}-c\,(G/r^{2})^{2}+\mathcal{O}(G^{3})=\tilde{G}-2dm\sqrt{G}\,\tilde{G}^{3/2}-c\,\tilde{G}^{2}+\mathcal{O}(G^{3})$.
Alternatively, we can divide the l.h.s. and r.h.s. of (23) by $G$, use
Mathematica to solve for $1/r$, insert in (24) and expand in $G$.) We now
redefine the running coupling $\tilde{G}(r)$ to be the solution of
$-r\frac{\partial\tilde{G}}{\partial r}=\beta(\tilde{G},m\sqrt{G})\,,$ (26)
with the boundary condition
$G_{r}\equiv\tilde{G}r^{2}\to G,\quad r\to\infty\,.$ (27)
The corresponding running potential is defined as
$V_{\rm r}=-\frac{G_{r}m^{2}}{r}=-\tilde{G}\,m^{2}\,r.$ (28)
A model-II type beta-function without the $d$-term but with the same negative
$c$ was mentioned in [8] as a simple example generating a flow with a UV-
attractive fixed point.
Figure 1: Left: beta functions $\beta(\tilde{G},m\sqrt{G})$; top to bottom:
$m=1$ (model-I, brown), $m=0$ (model-I, blue), $m=0$ (model-II, green), $m=1$
(model-II, red). Right: $r_{\rm s}$ (model-I, blue) and $r_{\rm min}$ (model-
II, brown) versus $m$. Also shown is the Bohr radius $a_{\rm B}$ (magenta).
Units $G=1$.
Figure 1 shows a plot of the betas for two values of $m$. In model-I there is
only the IR-attractive ($r\to\infty$) fixed point at $\tilde{G}=0$, for all
$m$. In model-II there is in addition a UV-attractive ($r\to 0$) fixed point
at positive $\tilde{G}$. It moves towards zero as $m$ increases:
$\displaystyle\tilde{G}_{*}$ $\displaystyle=$
$\displaystyle\frac{1}{|c|}+\frac{1}{2c^{2}}\left(Gm^{2}-m\sqrt{G}\sqrt{4|c|+Gm^{2}}\right),\qquad\mbox{model-
II}$ (29) $\displaystyle=$ $\displaystyle\frac{1}{|c|}=0.56,\quad m=0\,,$ (30)
$\displaystyle=$
$\displaystyle\frac{1}{Gm^{2}}-\frac{2|c|}{G^{2}m^{4}}+\cdots,\quad
m\to\infty\,.$ (31)
Note that $\tilde{G}_{*}$ is not very large and it can even be close to zero
for $m\sqrt{G}\gg 1$. For convenience, we use units $G=1$ in the following.
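The flow defined by (25)-(27) is easy to integrate numerically in the variable $t=\ln r$. A sketch for model-II in units $G=1$ (my own numerics; the starting radius, mass and tolerances are illustrative), exhibiting the crossover from the infrared behavior $\tilde G\simeq G/r^2$ to the UV fixed point $\tilde G_*$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

G = 1.0
d, c = -1.0, -167/(30*np.pi)   # model-II constants, eq. (5)
m = 1.0

def beta(Gt):
    # eq. (25)
    return 2*Gt + 2*d*m*np.sqrt(G)*Gt**1.5 + 2*c*Gt**2

# UV fixed point: the positive zero of beta
Gstar = brentq(beta, 1e-6, 10.0)

# -r dGt/dr = beta  =>  dGt/dt = -beta with t = ln r; integrate inward
# from r0, where Gt ~ G/r0^2 implements the boundary condition (27)
r0, r1 = 1e4, 1e-3
sol = solve_ivp(lambda t, y: [-beta(y[0])],
                (np.log(r0), np.log(r1)), [G/r0**2],
                rtol=1e-10, atol=1e-14, dense_output=True)
print(Gstar, sol.y[0, -1])     # the flow settles at the fixed point
```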
The evolution equation (26) is solved in appendix A. For $d=0$, the solution
simplifies to
$\tilde{G}(r)=\frac{1}{r^{2}-c}\,.$ (32)
In model-II $c$ is negative and one recognizes the small-$m$ limit of the UV-
fixed point as $r\to 0$. In model-I $c$ is positive and when $r$ moves in from
infinity towards zero, $\tilde{G}$ blows up at $r=r_{\rm s}=\sqrt{c}$. For
non-zero masses $r_{\rm s}$ moves to larger values (figure 1), which can be
macroscopic,
$\displaystyle r_{\rm s}$ $\displaystyle=$ $\displaystyle
3m+\frac{c}{3m}[2\ln(3m)-\ln(c)-1]+\cdots,\quad
m\to\infty\,,\qquad\qquad\mbox{model-I}$ (33) $\displaystyle=$
$\displaystyle\sqrt{c}+3m\,\frac{\pi}{4}+\cdots,\quad m\to 0\,.$ (34)
For general $m$, the running coupling has the expansion near $r_{\rm s}$:
$\tilde{G}=\frac{r_{\rm s}}{2c(r-r_{\rm
s})}-\frac{3\sqrt{2}\,m}{2c^{3/2}}\,\frac{\sqrt{r_{\rm s}}}{\sqrt{r-r_{\rm
s}}}+\mathcal{O}(1)\,,\qquad\mbox{ model-I}$ (35)
which shows an integrable square-root singularity in addition to a pole.
Hence, $\tilde{G}$ and the potential $V_{\rm r}$ are complex in $0<r<r_{\rm
s}$, and the corresponding Hamiltonian is not Hermitian.
Dropping the $c$-term in $\beta$ altogether at large $m$ gives a classical
beta function (independent of $\hbar$) with a simple solution to its evolution
equation,
$\beta=2\tilde{G}+2dm\tilde{G}^{3/2}\Rightarrow\tilde{G}=\frac{1}{(r-dm)^{2}}\,,$
(36)
and corresponding classical-evolution potentials,
$\displaystyle V_{\mbox{\scriptsize CE-I}}$ $\displaystyle=$
$\displaystyle-\frac{m^{2}r}{(r-3m)^{2}}\,,\qquad\qquad\mbox{CE-I model}$ (37)
$\displaystyle V_{\mbox{\scriptsize CE-II}}$ $\displaystyle=$
$\displaystyle-\frac{m^{2}r}{(r+m)^{2}}\,.\;\,\qquad\qquad\mbox{CE-II model}$
(38)
How to interpret the singularity in model-I? In usual terminology one might
say that $\tilde{G}$ has a Landau pole at $r_{\rm s}$ and a small-distance
cutoff might have to be introduced to avoid it. However, it seems odd to put a
UV cutoff near $r_{\rm s}$ when it is macroscopic. One option is to disallow
macroscopic values of $m$, i.e. huge values of the $dm$ term in the beta
function, and require $m\ll 1$; then $r_{\rm s}=\mathcal{O}(1)$ whereas the
important distances are on the scale of the Bohr radius $2/m^{3}$ and a
minimal distance $\ell$ of order of the Planck length would avoid problems.
But then one would essentially be back to the previous section while the
region $m>1$ is interesting.
Encouraged by the following features we shall assume that the singularity
represents black hole-like physics: At large $m$, $r_{\rm s}\simeq 3m$ which
is the right order of magnitude for the horizon when two heavy particles merge
into a black hole. In the relativistic classical-evolution model, particles at
rest released from a distance $r>r_{\rm s}$ gain a relativistic velocity
approaching the light-velocity as $r\downarrow r_{\rm s}$ (cf. appendix D).
(The same happens with a test particle in the gravitational field of a heavy
one, when $m=m_{2}\lll m_{1}$.) Also for a Schwarzschild black hole the
relativistic $|$velocity$|$ of a massive test particle approaches 1 at the
horizon in finite (proper) time. A non-Hermitian Hamiltonian also occurs with
the Dirac equation in Schwarzschild spacetime when expressed in Hamiltonian
form [31].
We shall interpret the singularity in the potential as a distribution. For the
pole in (35) this is the Cauchy principal value. One way to define the
distributions is [32],
$\frac{1}{(r-r_{\rm s})^{n}}=(-1)^{n-1}\frac{\partial^{n}}{\partial
r^{n}}\,\ln|r-r_{\rm s}|\,,\quad n=1,2\,$ (39)
($n=2$ refers to the CE-I model (37)). In terms of wave functions,
$\int_{0}^{\infty}dr\,\phi^{*}(r)\,\frac{1}{(r-r_{\rm
s})^{n}}\,\psi(r)=-\int_{0}^{\infty}dr\,\ln|r-r_{\rm
s}|\,\frac{\partial^{n}}{\partial
r^{n}}\,\left[\phi^{*}(r)\psi(r)\right]\,,\quad n=1,2\,.$ (40)
The wave functions are required to be smooth and to vanish sufficiently fast
at the boundaries of the integration domain, such that the above partial
integrations are valid as shown. Eq. (40) suggests that matrix elements of the
potential are particularly sensitive to derivatives of wave functions near
$r_{\rm s}$.
Figure 2: Running potentials $V_{\rm r}/m^{2}$ for $m=0.6$ (blue) and $m=2$
(red); also shown is the Newton form $-1/r$ (brown). Left: model-I (real
part); the dashed vertical lines indicate the position $r_{\rm s}$ of the
singularity. Right: model-II.
Figure 2 shows the running potentials in models I and II for two masses (to
facilitate visual comparison $V_{\rm r}$ was divided by $m^{2}$). They vanish
at $r=0$ and at large distances they approach the Newton potential, which is
also shown. In the left plot for model-I at $m=2$ one can imagine how the
negative-definite classical double pole (36) is ameliorated in the quantum
model (35) into a single pole, leaving a deep—still negative—minimum on its
left flank and a steeper descent on its right flank. This minimum has nearly
disappeared (just visible near the origin) for the smaller $m=0.6$. In model-
II, the potential is negative-definite and smooth with a minimum at $r_{\rm
min}$:
$\displaystyle r_{\rm min}$ $\displaystyle\to$
$\displaystyle\sqrt{|c|},\quad\frac{V_{\rm
min}}{m^{2}}\to-\frac{1}{2\sqrt{|c|}}\,,\qquad m\to 0,\qquad\mbox{model-II}$
(41) $\displaystyle\simeq$ $\displaystyle m,\qquad\;V_{\rm
min}\simeq-\frac{m}{4}\,,\qquad\qquad\;m\gg 1.$ (42)
Remarkable here is the fact that $r_{\rm min}$ is for large $m$ also of order
of the Schwarzschild horizon scale, which suggests that this model might
illustrate a ‘horizonless black hole’. The right plot in figure 1 shows that
$r_{\rm s}$ and $r_{\rm min}$ are rather featureless functions of $m$.
It is clear from figure 2 that when $m$ increases, ${\rm Re}[V_{{\rm r,I}}]$
can approach $V_{\mbox{\scriptsize CE-I}}$ only non-uniformly in $r$, since
their singularity structures differ. One cannot expect simultaneous
convergence of matrix elements. In case there is a UV cutoff on the wave
functions that limits their first two derivatives one may expect uniform
convergence of a finite number of matrix elements. On the other hand, the
approach of $V_{{\rm r,II}}$ to $V_{\mbox{\scriptsize CE-II}}$ is uniform in
$r$ (as can be clearly illustrated by plotting their ratio). In this case also
the approach of the $\beta$-functions is uniform in $\tilde{G}$, since the
latter is restricted to $\tilde{G}<\tilde{G}_{*}$ and $\tilde{G}_{*}\to 0$.
## 5 Binding energy in model-I
Appendix B describes details of the numerical treatment of the singularity;
special aspects of the non-Hermitian but symmetric Hamiltonian are the subject
of appendix B.1.
We start with variational calculations using $N$ s-wave bound state
eigenfunctions $u_{n}(r,a)$ of the hydrogen atom, (12). Let $E_{\rm min}(N,a)$
be the eigenvalue with minimal real part of the hamiltonian matrix
$H_{mn}=\int_{0}^{\infty}dr\,u_{m}(r,a)\,(K+V_{\rm
r})\,u_{n}(r,a),\quad(m,n)=1,\,2,\,\ldots,N\,.$ (43)
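As a sanity check of the basis (12) and of matrices of the form (43), one can replace $V_{\rm r}$ by the plain Newton potential $V_0=-m^2/r$, for which the exact answer is known: at $a=a_{\rm B}$ the matrix must come out diagonal with entries $E_n$ of (11). A sketch in units $G=1$ (my own numerics; indices renamed $j,k$ to avoid a clash with the mass $m$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre

def u(n, r, a):
    """s-wave eigenfunctions (12)."""
    x = 2.0*r/(a*n)
    return 2.0*a*r/(a*n)**2.5*genlaguerre(n - 1, 1)(x)*np.exp(-x/2.0)

def du(n, r, a):
    """d u_n / d r."""
    x = 2.0*r/(a*n)
    L = genlaguerre(n - 1, 1)
    C = 2.0*a/(a*n)**2.5
    return C*np.exp(-x/2.0)*(L(x) + r*(2.0/(a*n))*(L.deriv()(x) - 0.5*L(x)))

def H(N, m=1.0):
    """Matrix (43) with V_r -> V0 = -m^2/r; kinetic part via
    <u_j|K|u_k> = (1/m) int u_j' u_k' dr (boundary terms vanish)."""
    a = 2.0/m**3                         # Bohr radius (13)
    M = np.zeros((N, N))
    for j in range(1, N + 1):
        for k in range(1, N + 1):
            kin, _ = quad(lambda r: du(j, r, a)*du(k, r, a)/m,
                          0, np.inf, limit=200)
            pot, _ = quad(lambda r: u(j, r, a)*(-m**2/r)*u(k, r, a),
                          0, np.inf, limit=200)
            M[j - 1, k - 1] = kin + pot
    return M

print(np.round(H(3), 6))   # expect diag(-1/4, -1/16, -1/36) for m = 1
```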
In section 3 we mentioned that keeping $a$ fixed by the variational method
with $N=1$ leads to a (probably misleading) fast convergence when increasing
$N$. Here we allow $a$ to depend also on $N$. The eigenvectors
$f_{jn}(a)$ of the Hamiltonian matrix determine eigenfunctions
$f_{j}(r,a)=\sum_{n}f_{jn}(a)u_{n}(r,a)$. Using the eigenfunction
corresponding to $E_{\rm min}(N,a)$ as a trial function in the energy
functional $\mathcal{E}$ (cf. appendix B.1) and its real part for minimization
the variational method becomes
$\displaystyle{\rm Re}[\mathcal{E}]$ $\displaystyle=$ $\displaystyle{\rm
Re}[E_{{\rm min}}(N,a)]\equiv F_{N}(a)\,,$ (44)
$\displaystyle\frac{\partial}{\partial a}F_{N}(a)|_{a=a_{\rm min}}$
$\displaystyle=$ $\displaystyle 0,\quad E_{\rm min}=E_{{\rm min}}(N,a_{\rm
min})\,,$ (45)
where $a_{\rm min}$ corresponds to the deepest local minimum of $F_{N}(a)$.
Figure 3: Left: Variational estimates of $-{\rm Re}[E_{\rm min}]/m$. The
lowest blue curve is obtained with $F_{1}(a)$ and $u_{1}(r,a)$, with
asymptotes into the small and large mass regions (black, dashed). Next in
height in $m>2$ is an estimate using a Gaussian wave function $f_{\rm
G}(r,a,s)$ at fixed variance $s=0.18$ with asymptote (dashed) provided by the
CE-I model, (149). Also shown are ‘variational bounds’ obtained with Gaussian
and Breit-Wigner functions (appendix B.5), $\mathcal{E}_{\rm GP}(a,s)$ and
${\rm Re}[\mathcal{E}_{\rm BWPSR}(a,s)]$ (highest and next highest blue curves
in $m>2$) with enclosed GP-asymptote (black, dashed). Right: imaginary parts.
The small-mass asymptote (black-dashed) represents (47). The large-mass
asymptote to the variational $-{\rm Im}[\mathcal{E}_{\rm min}]/m$ (lowest
black-dashed line in $m>1)$ is a fit $0.20\,m^{2}$ to the numerical data. The
highest asymptote and curve represent the BWPSR result. (Absent is the CE-I
model which has a real potential.)
In the simplest approximation, $N=1$,
$E_{\rm min}(1,a)=H_{11}(a)\,.$ (46)
For small masses we find again a single minimum $a_{\rm min}\simeq a_{\rm B}$
with binding energy $E_{\rm b}=-{\rm Re}[E_{\rm min}]\simeq m^{5}/4$, shown in
the left plot of figure 3. (Figure 3 shows many other results for the binding
energy, which will be explained in due course.) New in model-I is the
imaginary part of $E_{\rm min}$, shown in the right plot. Its asymptotic form
for small $m$ is approximately given by
$\Gamma_{\rm b}\equiv-2\,{\rm Im}[E_{\rm
min}]\approx\frac{32\sqrt{2}}{105}\,d\sqrt{c}\,m^{12}=1.48\,m^{12}$ (47)
(cf. appendix B.2).
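The small-mass minimization can be sketched numerically. The snippet below is a minimal grid-search version of (45) for the hydrogen-like Newton-model form of the variational function, $F_{1}(a)=1/(ma^{2})-m^{2}/a$ (an assumed simplification consistent with the quoted $a_{\rm B}=1/4$ at $m=2$ and $E_{\rm b}\simeq m^{5}/4$; it is not the full model-I computation with the complex running potential):

```python
import numpy as np

def F1_newton(a, m):
    # Hydrogen-like variational energy for the trial function u_1(r, a):
    # kinetic term 1/(m a^2) (reduced mass m/2) plus <-m^2/r> = -m^2/a.
    return 1.0 / (m * a**2) - m**2 / a

def deepest_minimum(m, a_grid):
    # Grid-search version of (45): locate the deepest minimum of F_1(a).
    F = F1_newton(a_grid, m)
    i = np.argmin(F)
    return a_grid[i], F[i]

m = 0.5                                   # small-mass region
a_min, E_min = deepest_minimum(m, np.linspace(1.0, 40.0, 400001))
# Analytically a_min = a_B = 2/m^3 and E_b = -E_min = m^5/4 in this limit.
```

For $m=0.5$ the search returns $a_{\rm min}=2/m^{3}=16$ and $E_{\rm b}=m^{5}/4$, the small-mass behaviour quoted above.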
When raising $m$ beyond $0.9$ a second local minimum appears in $F_{1}(a)$ at
much smaller $a$ (figure 4). This second minimum becomes the lowest one when
$m$ increases between 1.8 and 1.9 – the new global minimum for determining
$E_{\rm min}$. The resulting extension of $E_{\rm b}/m$ into the large mass
region is shown in the left plot of figure 3, wherein the dashed horizontal
asymptotes come from the classical-evolution potential for $m\to\infty$ (cf.
(114)). The right plot in figure 3 shows the imaginary part.
Increasing $N$, for masses $m\geq 2$, each step $\Delta N=1$ introduces a new
local minimum at still smaller $a$, while the previous minima change somewhat
and then stabilize. (This does not happen in the small mass region, where the
kinetic energy contribution $1/(ma^{2})$ to the variational function allows
only one H-like minimum near $a_{\rm B}$; in the Newton model with potential
$-m^{2}/r$, new local minima do not appear when raising $N$.) For
example, for $N=6$ and $m=2$, $F_{6}(a)$ has seven local minima (right plot in
figure 4); the sixth is the lowest, with $E_{\rm min}/m\simeq-4.2-2.4\,i$ and
an $|{\rm eigenfunction}|^{2}$ consisting of two Gaussian-like peaks to the
left and right of $r_{\rm s}$. The energy of the first minimum is much higher
and has changed little; its corresponding eigenfunction has the qualitative
shape of the first hydrogen s-wave – it is still H-like. Increasing $N$
further leads to even lower values of ${\rm Re}[E_{\rm min}]$ and this line of
investigation rapidly becomes numerically and humanly challenging. We could
not decide this way whether ${\rm Re}[E_{\rm min}]$, at fixed $m\geq 2$,
reaches a finite limit or goes to minus infinity as $N\to\infty$.
Figure 4: Left: Variational function $F_{1}(a)/m$ versus $a/r_{\rm s}$ for
$m=0.9$, 1, 1.5, 2 (top to bottom near $a/r_{\rm s}=0.2$), and the classical-
evolution limit function (114) for $m\to\infty$ (dashed). The first minimum is
the shallow one in the region $2<a/r_{\rm s}<3$. Right: $F_{6}(a)/m$ for
$m=2$; the 2nd to 7th minima are shown, the first minimum is outside of the
plot.
The s-wave Hydrogen eigenfunctions have nice asymptotic behavior for
$r\to\infty$, but they are not well suited to investigate the evidently
important region around the singularity at $r_{\rm s}$. For this region much
better sampling is obtained with the Fourier-sine modes
$b_{n}(r,L)=\sqrt{\frac{2}{L}}\,\sin\left(\frac{n\pi
r}{L}\right)\theta(L-r),\quad n=1,\,\ldots,\,N\,,$ (48)
where $\theta$ is the unit-step function. The modes are chosen to vanish at
$L$ which is large relative to the region where the wave function under
investigation is substantial; $L$ controls finite-size effects. The sampling
density is controlled by the minimum half-wavelength $\lambda_{\rm
min}/2=L/N$; the equivalent maximum momentum $p_{\rm max}=\pi N/L$ serves as a
UV cutoff on derivatives of the basis functions.
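The construction of the basis (48) is straightforward; a short sketch (with illustrative values of $L$ and $N$, not those of the paper's computations) builds the modes on a grid and checks their orthonormality and the cutoff relations:

```python
import numpy as np

L, N = 32.0, 16                       # illustrative values
r = np.linspace(0.0, L, 20001)
dr = r[1] - r[0]

# Fourier-sine modes b_n(r, L) of (48); the step theta(L - r) is implicit
# because the grid is restricted to [0, L].
B = np.array([np.sqrt(2.0 / L) * np.sin(n * np.pi * r / L)
              for n in range(1, N + 1)])

gram = B @ B.T * dr                   # should be the N x N identity
p_max = np.pi * N / L                 # UV cutoff on derivatives
lam_min = 2.0 * L / N                 # minimum wavelength, lam_min/2 = L/N
```

On a uniform grid the discrete orthogonality of the sines makes the Gram matrix the identity to rounding accuracy, and $p_{\rm max}\lambda_{\rm min}=2\pi$ by construction.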
Figure 5: Left: Absolute value of ${\rm Re}[E_{j}]/m$ of the first 40
eigenvalues for $m=2$, $L=32\,r_{\rm s}$, $N=128$ ($E_{j}$ is negative for
$j<30$). Right: Corresponding $-{\rm Im}[E_{j}]/m$.
Some results now follow, first for the case $m=2$ in the large-mass region,
for which $r_{\rm s}=6.6$ and $a_{\rm B}=1/4$. Figure 5 shows part of the
eigenvalue spectrum for $L=32\,r_{\rm s}$, $N=128$, ordered by increasing
${\rm Re}[E_{j}]$. The real parts of the eigenvalues start negative and change
sign near mode number $j=30$, beyond which they increase roughly quadratically
with $j$ (linearly when using $K_{\rm rel}$) where they correspond to the
unbound modes. The mode number where the eigen-energy changes sign increases
with $L$ at fixed $\lambda_{\rm min}$. The binding energy $|{\rm Re}[E_{1}]|$
is large compared to $m$, and the first few ${\rm Re}[E_{j}]$ look a bit irregular;
their imaginary part is very large at $j=3$, 4 and 6. Exceptional is the $j=N$
eigenvalue: the last three eigenvalues are $E_{126}=1.73-0.00088\,i$,
$E_{127}=1.77-0.00070\,i$, $E_{128}=40.8-201\,i$.
Figure 6: Left: first six eigenfunctions $r_{\rm s}|f_{j}(r)|^{2}$ and $r_{\rm
s}|f_{128}(r)|^{2}$, as a function of $\bar{r}=r/r_{\rm s}$, for $m=2$,
$L/r_{\rm s}=32$, $N=128$. From left to right:
$j=6,\,4,\,3,\,128,\,1,\,2,\,5$. Right: $r_{\rm s}|f_{22}(r)|^{2}$ (blue,
smallest peak at $\bar{r}\approx 23$) and $r_{\rm s}u_{18}(r,a_{\rm B})^{2}$
(brown).
The left plot in figure 6 shows the first six eigenfunctions $r_{\rm
s}|f_{j}(r)|^{2}$ versus $\bar{r}=r/r_{\rm s}$, normalized under Hermitian
conjugation (the factor $r_{\rm s}$ stems from the Jacobian in $dr=r_{\rm
s}\,d\bar{r}$). Also added is the last one, i.e. $f_{128}(r)$; it straddles
$r_{\rm s}$ and reaches into $r<r_{\rm s}$ where ${\rm Im}[V_{\rm r}]$ is
large. The eigenfunctions $f_{j}(r)$, $j=1$, 2, 5, tunnel a little through the
pole barrier into the region $r<r_{\rm s}$ and become small for $r\lesssim
0.95\,r_{\rm s}$.
Eigenfunctions for which $|{\rm Im}[E_{j}]|>|{\rm Im}[E_{1}]|$ peak in the
region $0<r<r_{\rm s}$ where the potential is complex. At smaller $L$ such
very large $-{\rm Im}[E_{j}]$ ‘outliers’ also occur in the unbound part of the
spectrum, whereas their ${\rm Re}[E_{j}]$ appear mildly affected relative to
neighboring $j$. The mode numbers of the outliers vary wildly when varying $L$
or $\lambda_{\rm min}$, but their number appears to be roughly given by the
sampling density times $r_{\rm s}$: $(N/L)r_{\rm s}$. In figures 5 and 6,
$Nr_{\rm s}/L=128/32=4$ and including $j=N$ there are four outliers.
With increasing $j>6$ the negative-energy eigenfunctions slowly become H-like,
but without support in the region $0<\bar{r}\lesssim 1$ and with relatively
small imaginary parts, ${\rm Im}[E]/{\rm Re}[E]\ll 1$. The right plot in
figure 6 shows an example in which $f_{22}(r)$ is compared with
$u_{n}(r,a_{\rm B})$, $n=18$, chosen to give a rough match at the largest
peak.
Finite-size effects appear under control when the wave function fits
comfortably in $0<r<L$, which is true in the left plot of figure 6, and
reasonably well also in the right plot. Beyond $j=23$ the wave functions get
squeezed in the limited volume and finite-size effects become large. The large
$j$ eigenfunctions (except $f_{N}(r)$) look a bit like the sine functions of a
free particle in the region $r_{\rm s}<r<L$. A domain size of the minimal-
energy eigenfunction can be defined by the distance $r_{90}$ containing 90% of
the probability, which is for the current example given by
$\int_{0}^{r_{90}}dr\,|f_{1}(r)|^{2}=0.9\,,\qquad r_{90}\simeq 1.95\,r_{\rm
s}\simeq 13,\qquad\qquad\mbox{($m=2$, model-I)}$ (49)
much smaller indeed than $L=32\,r_{\rm s}\simeq 211$. But the peak of
$|f_{1}(r)|^{2}$ is just outside $r_{\rm s}$ (figure 6). For larger $N/L$ most
of this $r_{90}$ consists of $r_{\rm s}$ since then the width of the peak is
much smaller than $r_{\rm s}$ (appendix B.4), and the same holds in general
for larger masses (appendix B.5).
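The domain size (49) is easy to evaluate from a discretized wave function. As a sketch, the helper below is applied to the normalized hydrogen-like ground state $u_{1}(r,a)=2a^{-3/2}\,r\,e^{-r/a}$ (used here purely for illustration, not the model-I eigenfunction), for which $r_{90}\simeq 2.66\,a$:

```python
import numpy as np

def r90(prob_density, r):
    # Distance containing 90% of the probability, as in (49):
    # smallest r with integral_0^r dr' |f(r')|^2 >= 0.9.
    cum = np.cumsum(prob_density) * (r[1] - r[0])
    return r[np.searchsorted(cum, 0.9)]

a = 1.0
r = np.linspace(0.0, 30.0 * a, 300001)
u1 = 2.0 * a**-1.5 * r * np.exp(-r / a)   # normalized: integral of u1^2 is 1
d90 = r90(u1**2, r)
```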
Figure 7: Shifted spectra for $m=2$, $L/r_{\rm s}=32$, $N=64$, 96, 128,
equivalently $\lambda_{\rm min}/r_{\rm s}=1$, 2/3, 1/2. The shifts $\sigma$
are respectively 0, 2, 4 (blue, red, brown dots or upper, middle, lower dots
at $\kappa=1$).
The minimal energy ${\rm Re}[E_{1}]$ is quite sensitive to $N/L$ because
matrix elements $V_{mn}$ are sensitive to the derivatives of the basis
functions at the singularity. Comparing different $N/L$ we can shift the
sequence $j$ by an amount $\sigma$ and label $E$ by $\kappa=j-\sigma$ with
$\kappa$ ‘anchored’ at some value where the energy and eigenfunction are
H-like: $f_{j}(r)\approx u_{\kappa}(r)$ and ${\rm
Re}[E]\approx-m^{5}/(4\kappa^{2})$. For example, $f_{22}$ was compared to
$u_{18}$ in figure 6 and thus $\kappa=18$ and $\sigma=4$. Figure 7 shows
shifted spectra for three values of $N/L$. The sequence reaching to
$\kappa_{\rm min}=-3$ ($\sigma=4$) corresponds to the case shown in figures 5 and
6. The dots match visually at $\kappa=6$, 7, …, where UV-cutoff effects are
reasonably small.
The question whether the binding energy is bounded is investigated further in
appendix B.4 where we come to the conclusion that it is finite in the non-
relativistic model. But it is huge for large masses, $E_{\rm b}/m\approx
7m^{8}$, and the squared average velocity $v^{2}=\langle K\rangle/m$ is of the
same order of magnitude. (In figure 5, $E_{\rm b}/m$ is already very large
for $m=2$, but $v^{2}=0.51$ is still moderate; repeating the computation for
the relativistic model gave $v^{2}=0.52$, $v_{\rm rel}^{2}=0.30$, whereas the
other results changed little compared to the non-relativistic model.) The
number of eigenfunctions with dominant support in the region $r\lesssim r_{\rm
s}$ is expected to stay finite but large in the limit $N/L\to\infty$, with a
finite large negative minimal $\kappa\equiv\kappa_{\rm min}$, in the shifted
labeling.
In the relativistic model-I (with the kinetic energy operator $K_{\rm rel}$)
we find that there is no lower bound on the energy spectrum in the large mass
region (in the small mass region $m\lesssim 0.61$ the binding energy with
$K_{\rm rel}$ is finite and approaches that with $K$ as $m\to 0$). When
$N/L\to\infty$, all energies ${\rm Re}[E_{j}]$ near the ground state move to
$-\infty$; in the shifted labeling $\kappa_{\rm min}$ moves to $-\infty$.
Figure 8: Examples of the mass dependence of $E_{\rm b}/m$ at various fixed
minimal wavelengths $\lambda_{\rm min}$, and for comparison also the
variational result obtained with $u_{1}(r,a)$ shown earlier in figure 3
(lowest blue dashed curve); higher at $m=2$, in succession: $\lambda_{\rm
min}=19.8$ with nonrelativistic $K$ (red); $\lambda_{\rm min}=3.29$ with the
relativistic $K_{\rm rel}$ (magenta); $\lambda_{\rm min}=1$ with $K_{\rm rel}$
(magenta), $K$ (red dots) and asymptote (50) from the CE-I model (red,
dashed).
Figure 9: Left: real and imaginary parts of $-E_{1}$ vs. $m$, for
$\lambda_{\rm min}=3.29$ (respectively blue dots and black dashed straight
line segments connecting data points). Right: $r_{\rm s}|f_{1}(r)|^{2}$ vs.
$\bar{r}=r/r_{\rm s}$; large to small peak-heights around $\bar{r}=1.2$:
$m=2.2$ (black), 1.9 (blue), 2 (red), 2.1 (brown).
But one may question whether it makes sense to allow arbitrarily large
derivatives in non-relativistic eigenfunctions when the binding energy is so
sensitive to this. Let us put a cutoff on the Fourier momenta,
$p_{\rm max}=N\pi/L$, or equivalently require a minimum wavelength
$\lambda_{\rm min}$. Examples
are shown in figure 8. For comparison, also shown is the earlier variational
result obtained with $u_{1}(r,a)$ (same as in figure 3), and the large mass
result obtained in the CE-I model with $\lambda_{\rm min}=1$ (cf. appendix
B.3),
$E_{\rm b}/m\simeq 48\,m^{2}\,\qquad\qquad\qquad\qquad\mbox{CE-I model }\,.$
(50)
This large mass result seems quite far off; comparison with results using
Gaussian variational trial functions with a fixed width supports this (cf. end of
appendix B.5; a quadratic dependence $E_{\rm b}/m\propto m^{2}$ was found
earlier in (20)). The surprising dips in the mass dependence are accompanied
by large variations in the imaginary part of $E_{1}$, as shown in the close-up
in figure 9. Large $|{\rm Im}[E_{1}]|$ imply eigenfunctions that are
substantial in $r<r_{\rm s}$ (cf. the right plot), which diminishes the
contribution to ${\rm Re}[E_{1}]$ from the right flank of the singularity. The
occurrence of substantial contributions to $f_{1}(r)$ in $r<r_{\rm s}$ is
perhaps an effect of rendering it orthogonal (under transposition) to all
other eigenfunctions, a property involving also the imaginary part of the
Hamiltonian and its eigenfunctions. In the CE-I model the potential is real;
the potential and the ground-state wave function are nearly symmetrical around
$r_{\rm s}$ and we found no dips in the binding energy as a function of $m$.
For large masses the variational trial function $u_{1}(r,a)$ is evidently
wrong in its estimate of a small and constant $E_{\rm b}/m\simeq 0.23$. Its
only parameter $a$ cannot simultaneously capture two properties of the wave
function, in particular a large derivative near the singularity.
## 6 Binding energy in model-II
Figure 10: Left: Variational binding energy of model-II, with its large-$m$
asymptote. The slightly higher black-dashed line represents $1/4$. (At $m=2$
the Fourier-sine basis with $N=128$ and $L=64$ gives a 4 % larger $E_{\rm
b}/m$ than the variational estimate; the relativistic value is another 28 %
higher.) Right: excitation spectrum $\Delta_{n}=(E_{n+1}-E_{1})/m$ near the
ground state for $m=10$, $L=64$, $N=64$. The dashed line shows
$n\omega/m=n/(2m^{2})$.
Figure 10 shows the variational estimate of the binding energy with the s-wave
trial function $u_{1}(r,a)$. For small masses $E_{\rm b}/m$ is again close to
the perturbative values in section 3. At large $m$ it becomes constant as in
model-I where this behavior was misleading. However, here the mismatch of the
variational value ($\simeq 0.23$) with the ideal value ($1/4$) is moderate
because the running potential approaches uniformly that of the classical-
evolution model CE-II (section 4, (38)), for which $E_{\rm b}/m$ becomes
constant at large $m$. The spectrum near the ground state is approximately
that of a harmonic oscillator (HO), which can be understood from the expansion
of the CE-II potential near its minimum at $r=m$,
$\displaystyle V_{\mbox{\scriptsize CE-II}}(r)$ $\displaystyle=$
$\displaystyle-\frac{m^{2}r}{(r+m)^{2}}=-\frac{m}{4}+\frac{(r-m)^{2}}{16m}-\frac{(r-m)^{3}}{16m^{2}}+\cdots$
(51) $\displaystyle=$ $\displaystyle-\frac{m}{4}+\frac{m_{\rm
red}}{2}\,\omega^{2}(r-m)^{2}+\dots\;,\qquad\omega=\frac{1}{2m}\,$
(recall $m_{\rm red}=m/2$). Hence, we expect the large-$m$ spectrum near the
ground state to be approximately given by
$\frac{E_{n+1}}{m}\simeq-\frac{1}{4}+\left(n+\frac{1}{2}\right)\frac{1}{2m^{2}},\quad
n=0,\,1,\,2,\,\ldots\,$ (52)
($j=n+1$), with corrections primarily of order $\mathcal{O}(m^{-4})$ from the
terms omitted in (51). These can be substantial because the potential is quite
asymmetrical (figure 2) with its $1/r$ tail at large $r$ where the true
eigenfunctions fall off slower than a Gaussian. There are also exponentially
small corrections due to the fact that the eigenfunctions of this anharmonic
oscillator have to vanish at the origin. The right plot in figure 10 compares
the excitation spectrum near the ground state with (52), for $m=10$ ($r_{\rm
min}=10.1$), using the basis of sine functions. The ground state energy
$E_{1}$ differs by only 0.5% from the $-1/4$ in (52) (which may be compared with
the $-1/6$ in (21)). The first few eigenfunctions are closely HO-like; for
large $n$ they should become H-like, $\approx u_{n}(r,a_{\rm B})$, but it
would require much larger $L$ and $N$ to verify this.
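The harmonic-oscillator parameters in (51) and (52) follow from the CE-II potential alone; a short numerical check of the minimum value and curvature (a sketch, using only the potential quoted in (51)):

```python
import math

def V_CE2(r, m):
    # Classical-evolution potential of model II, as expanded in (51).
    return -m**2 * r / (r + m) ** 2

m, h = 10.0, 1e-3
V_min = V_CE2(m, m)                          # minimum at r = m, value -m/4
V2 = (V_CE2(m + h, m) - 2.0 * V_CE2(m, m) + V_CE2(m - h, m)) / h**2
omega = math.sqrt(V2 / (m / 2.0))            # m_red * omega^2 = V''(m)
level_spacing = omega / m                    # step in (52): 1/(2 m^2)
```

For $m=10$ this reproduces $V(m)=-m/4$, $\omega=1/(2m)=0.05$, and the level spacing $1/(2m^{2})$ shown as the dashed line in the right plot of figure 10.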
At substantially smaller masses the spectrum near the ground state is neither
closely HO-like nor H-like. For $m=2$ ($r_{\rm min}=2.52$), part of the
spectrum is shown in the left plot of figure 11; in this case even the first
few eigenfunctions are still H-like (right plot). The $r_{90}$ domain size of
the ground state for $m=2$:
$\int_{0}^{r_{90}}dr\,|f_{1}(r)|^{2}=0.9\,,\qquad r_{90}\simeq
7\,,\qquad\qquad\mbox{($m=2$, model-II)}$ (53)
is somewhat smaller than the 13 in (49) for model-I; it approaches
$1/\omega=2m=2r_{\rm min}$ for larger masses.
Figure 11: Left: model-II spectrum for $m=2$, $L=211$, $N=128$ ($\lambda_{\rm
min}=3.3$); the remaining positive energies increase approximately
quadratically. Right: first two eigenfunctions, $j=1,2$.
## 7 Spherical bounce and collapse
Using the spectrum and eigenfunctions obtained with the Fourier-sine basis we
study here the time development of a spherically symmetric two-particle state.
Consider a Gaussian wave packet at a distance $r_{0}$ from the origin, at time
$t=0$,
$\psi(r,0)=\mu^{-1/2}\exp\left[-\frac{(r-r_{0})^{2}}{4s_{0}^{2}}\right],\qquad\int_{0}^{\infty}dr\,\psi(r,0)^{2}=1\,.$
(54)
Assuming $r_{0}$ sufficiently far from the origin and $s_{0}/r_{0}$
sufficiently small, $\psi(0,0)$ is negligible, so that $\psi$ qualifies as a
radial wave function; extending the normalization integral to minus infinity
gives $\mu=\sqrt{2\pi}\,s_{0}$. The Fourier-sine basis at finite $L$ and $N$ is
accurate (visibly) provided that $L/r_{0}$ is large enough and the wave packet
not too narrow. We can then replace $\psi(r,0)$ by its approximation in terms
of the models’ eigenfunctions (for model-I these are here normalized under
transposition). Using the notation of appendix B.1, let
$\displaystyle\psi_{n}$ $\displaystyle=$
$\displaystyle\int_{0}^{L}dr\,b_{n}(r)\,\psi(r,0)\,,$ (55)
$\displaystyle\psi_{j}$ $\displaystyle=$
$\displaystyle\int_{0}^{L}dr\,f_{j}(r)\,\psi(r,0)=\sum_{n=1}^{N}f_{jn}\psi_{n}\,.$
(56)
We now redefine $\psi(r,0)$,
$\psi(r,0)=\mu^{-1/2}\sum_{n=1}^{N}\psi_{n}\,b_{n}(r)=\mu^{-1/2}\sum_{j=1}^{N}\psi_{j}\,f_{j}(r)\,,$
(57)
with $\mu$ such that $\psi(r,0)$ is normalized again,
$\sum_{n=1}^{N}\psi_{n}^{2}=\sum_{j=1}^{N}\psi_{j}^{2}=1\,.$ (58)
The coefficients $\psi_{j}$ are real in model-II and complex in model-I (in
the latter $\sum_{j}{\rm Im}[\psi_{j}^{2}]=0$). This initial wave function
satisfies the boundary conditions at $r=\\{0,L\\}$ and it should be an
accurate approximation to the original Gaussian. The time-dependent wave
function is given by
$\psi(r,t)=\sum_{j=1}^{N}\psi_{j}\,f_{j}(r)\,e^{-iE_{j}t}\,.$ (59)
The case with the pure Newton potential is informative for interpreting the
results, as is also the ‘free-particle’ case $V=0$ in $0<r<L$. In addition to
looking at detailed shapes the packet may take in the course of time,
quantitative observables are useful: the squared norm $\nu$, average distance
$d$ and its root-mean-square deviation $s$ that we shall call spread:
$\displaystyle\nu(t)$ $\displaystyle=$
$\displaystyle||\psi||^{2}=\int_{0}^{L}dr\,|\psi(r,t)|^{2}\,,$ (60)
$\displaystyle d(t)$ $\displaystyle=$ $\displaystyle\langle
r\rangle=\nu(t)^{-1}\int_{0}^{L}dr\,|\psi(r,t)|^{2}\,r\,,$ (61) $\displaystyle
s(t)$ $\displaystyle=$ $\displaystyle\sqrt{\langle r^{2}-\langle
r\rangle^{2}\rangle}=\left\\{\nu(t)^{-1}\int_{0}^{L}dr\,|\psi(r,t)|^{2}\,\left[r^{2}-d(t)^{2}\right]\right\\}^{1/2}\,.$
(62)
Following the norm is only interesting for model-I with its non-Hermitian
Hamiltonian; for the other models (II, Newton, free) it stays put at $\nu=1$.
The free pseudo-particle with reduced mass $m/2$ is not entirely free because
of the boundaries at $r=0$ and $L$. As time progresses the wave packet
broadens. When it reaches the origin its composing waves scatter back and the
average $\langle r\rangle$ increases. Similar scattering starts when the
packet reaches $L$. After some ‘equilibration’ time $|\psi(r,t)|^{2}$ becomes
roughly uniform with fluctuations, and $\langle r\rangle\approx L/2$, $\langle
r^{2}-\langle r\rangle^{2}\rangle\approx L^{2}/12$.
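The spectral evolution (54)-(62) can be sketched for this free case, where the $b_{n}$ themselves are the eigenfunctions with $E_{n}=p_{n}^{2}/m$ (reduced mass $m/2$). The parameter values follow the text; this is a sketch of the procedure, not the full model-I/II computation:

```python
import numpy as np

m, L, N = 2.0, 211.0, 128
r = np.linspace(0.0, L, 4001)
dr = r[1] - r[0]
B = np.array([np.sqrt(2.0 / L) * np.sin(n * np.pi * r / L)
              for n in range(1, N + 1)])
E = (np.arange(1, N + 1) * np.pi / L) ** 2 / m   # free spectrum, m_red = m/2

r0, s0 = 65.9, 6.6
c = B @ np.exp(-(r - r0) ** 2 / (4.0 * s0**2)) * dr   # coefficients, eq. (55)
c /= np.sqrt(np.sum(c**2))                            # normalization, eq. (58)

def observables(t):
    # Squared norm, average distance and spread, eqs. (59)-(62).
    p2 = np.abs((c * np.exp(-1j * E * t)) @ B) ** 2
    nu = np.sum(p2) * dr
    d = np.sum(p2 * r) * dr / nu
    s = np.sqrt(np.sum(p2 * (r**2 - d**2)) * dr / nu)
    return nu, d, s
```

At $t=0$ this reproduces $\nu=1$, $d\simeq r_{0}$ and $s\simeq s_{0}$; with a Hermitian Hamiltonian $\nu(t)$ stays at 1 at all times, whereas in model-I it decays.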
Examples in models I and II now follow for mass $m=2$, which implies $a_{\rm
B}=1/4$, $r_{\rm s}=6.59$, $r_{\rm min}=2.52$, with $N=128$, $L=32\,r_{\rm
s}=211$, which implies $\lambda_{\rm min}=2L/N=3.29$. These values of $m$, $L$
and $N$ are also used in figures 6 and 11. Let us start with model-II in which
the Hamiltonian is Hermitian.
Figure 12: Left: model-II coefficients $\psi_{j}^{2}$ ($\psi_{1}^{2}=5\times
10^{-7}$). Right: model-I coefficients ${\rm Re}[\psi_{j}^{2}]$, for the first
10 modes these vary from $\mathcal{O}(10^{-20})$ to $\mathcal{O}(10^{-8})$.
### 7.1 Bouncing with model-II
The parameters of $\psi(r,0)$ are $r_{0}=10\,r_{\rm min}=25.2$ and
$s_{0}=r_{\rm min}=2.52\,$. With these the initial Gaussian is negligible at
the origin and $\lambda_{\rm min}=1.3\,s_{0}$ turns out to be sufficiently
small to enable a reasonably accurate approximation in the basis of sine
functions or eigenfunctions $f_{j}(r)$. Figure 12 (left plot) shows
coefficients $\psi_{j}^{2}$; the dominant modes are $j=4$ and 5. The ratio
$r_{0}/a_{\rm B}$ is large (100.8), and in the Newton case 39 bound states
$u_{n}(r,a_{\rm B})$ enable a good representation of $\psi(r,0)$. The energy
$\langle H\rangle=-0.59\,m$.
Figure 13: Left: $d(t)/r_{\rm min}$ (upper curves) and $s(t)/r_{\rm min}$
(lower curves) of model-II (fully drawn) and Newton (dashed); $m=2$, $r_{\rm
min}=2.52$. Right: $r_{\rm min}|\psi(r,t)|^{2}$ vs. $r/r_{\rm min}$ at the
time of the first bounce (blue, $t_{{\rm b},1}=73$) and at the time of the
first fall-back (brown, $t_{{\rm f},1}=220$). The initial $|\psi|^{2}$ is also
shown (black, dashed).
Initially the packet spreads and moves towards the origin with roughly the
classical acceleration, then it decelerates and bounces back to a distance
near the starting point, after which the process repeats. The left plot in
figure 13 shows the oscillation of $d(t)$ (upper curves). The initial
acceleration $d^{\prime\prime}(t)$ in model-II is smaller than that of Newton
whose force is stronger (figure 2). The spread $s(t)$ (lower curves) has
similar oscillations; its maximum values are much smaller than the free-
particle value $\sqrt{L^{2}/12}=61$ and the scattered wave from the boundary
at $L$ is negligible. The right plot in figure 13 shows the packet at the time
of the first bounce (minimum of $d(t)$) and at the time of the subsequent
fall-back (maximum $d(t)$): $\\{t_{{\rm b},1},t_{{\rm f},1}\\}=\\{73,220\\}$.
The four to five large maxima may reflect that the modes $j=4$, 5 dominate in the
expansion (57).
These plots will not change much in the limit $\lambda_{\rm min}\to 0$ or in
the infinite volume limit $L\to\infty$.
### 7.2 Bouncing collapse with model-I
The parameters here are those of model-II with $r_{\rm min}\to r_{\rm s}$:
$r_{0}=10\,r_{\rm s}=65.9$ and $s_{0}=r_{\rm s}=6.6$ (here $\lambda_{\rm
min}/s_{0}=1/2$ and $r_{0}/a_{\rm B}=L\simeq 211$). With $r_{0}$ here larger
than in model-II the dominant $\psi_{j}$ are around $j=15$ (figure 12, right
plot); the energy, $\langle H\rangle=-0.036\,m$ is smaller in magnitude and
the time scale on which things change is larger. But the major difference is
the imaginary part in the eigenvalues $E_{j}$, which leads to a rapid decay of
all eigenfunctions with a sizable imaginary part, typically those with support
in $r\lesssim r_{\rm s}$ (figures 5 and 6).
Figure 14: Left: Time-dependence of the squared-norm in model-I. Right:
$r_{\rm s}|\psi(r,t)|^{2}$ at $t=0$ (dashed), and at $t=124.2$, when
$\dot{\nu}(t)=-0.001$ and $\nu(t)=0.987$.
Figure 14 shows the squared norm $\nu(t)$ (left plot). Up to times of about
100 it hardly changes; the wave packet has not reached the region $r\approx
r_{\rm s}$ yet. Beyond that the norm starts diving down. The ‘norm-velocity’
$\dot{\nu}(t)\equiv d\,\nu(t)/dt$ is maximal at $t=201$,
$\dot{\nu}(201)=-0.0053$. At the earlier $t=124$ this velocity is already
-0.001 and although the norm has changed little, $|\psi(r,t)|^{2}$ has changed
quite a lot as can be seen in the right plot of figure 14.
Figure 15: As in figure 13, here for model-I; for $m=2$ ($r_{\rm s}=6.6$); the
first bounce and fall-back time are $t_{{\rm b},1}=231$, $t_{{\rm f},1}=518$.
The right plot shows $|\psi(r,t)|^{2}/\nu(t)$ at $t_{{\rm b},1}$ (blue) and
$t_{{\rm f},1}$ (brown).
The distance and spread shown in the left plot of figure 15 display similar
bouncing and falling back as for model-II in figure 13. The Newton force is in
this case the smaller one. Wave functions at the first bounce and fall-back
times are shown in the right plot. A gap in the region $0<r\lesssim r_{\rm s}$
is clearly visible. Also remarkable is the approximate recovery of the initial
shape of $|\psi|^{2}$ at the fall-back time ($|\psi(r,t)|^{2}$ in the right
plot is ‘renormalized’ by $\nu(t)$, the remaining total probability at the
fall-back time is $\nu(518)=0.16$).
Here too in model-I, the infinite volume limit $L\to\infty$ at fixed
$\lambda_{\rm min}$ will have little effect on $|\psi(r,t)|^{2}$. With
$\lambda_{\rm min}\to 0$ it is useful to revert to the shifted labeling
$\kappa=j-\sigma$, as in figure 7. With the anchoring of that plot we expect
the important contributing modes to stay put around the $\kappa$ value
corresponding to $j=15$ in the right plot of figure 12. For example, in figure
7, this $\kappa=j-\sigma=15-4=11$ for the sequence with the same $N$ and $L$
as here (brown dots); the modes with $j\leq 10$, $\kappa\leq 6$ are
negligible. In the relativistic model $\kappa_{\rm min}=-\infty$ and the
contribution of the modes $\kappa=6,\,5,\,4,\,\ldots,\,-\infty$, is also
expected to be negligible. In particular, huge negative imaginary parts of
energy eigenvalues make all such modes irrelevant after times small compared
to the Planck time $\sqrt{G}$.
## 8 Revisiting SDT results
A few lattice details: configurations contributing to the imaginary-time path
integral regulated by the simplicial lattice were generated by numerical
simulation, with lattice action $S=-\kappa_{2}N_{2}$; $N_{2}$ is the number of
triangles contained in a total number $N_{4}$ of equilateral four-simplices.
The bare Newton coupling $G_{0}$ is related to $\kappa_{2}$ by
$G_{0}=\frac{4v_{2}}{\kappa_{2}},\quad v_{2}=\frac{\sqrt{3}\,a^{2}}{4}\,,$
(63)
where $v_{2}$ is the area of a triangle and $a$ is the lattice spacing (called
$\ell$ in [1]). The scalar field was put on the dual lattice formed by the
centers of the four-simplices; the dual lattice spacing
$\tilde{a}=a/\sqrt{10}$. The inverse propagator of the scalar field depends on
a bare mass parameter $m_{0}$. The ‘renormalized’ mass $m$ was ‘measured’ from
the (nearly) exponential decay of the propagator at large distance. The
lattice-geodesic-distance between two centers is defined as the minimal number
of dual-lattice links connecting the centers, times $\tilde{a}$. Not too far
away from the phase transition point the propagators on the dual lattice do
not seem to be affected by singular structures or fractal branched polymers.
The numerical simulations were carried out with $N_{4}=32000$ and two values
of $\kappa_{2}$ on either side of, but close to, the phase transition at
$\kappa_{2}^{\rm c}\simeq 1.258$: $\kappa_{2}=1.255$
($G_{0}=0.863\,\tilde{a}^{2}$) in the crumpled phase and $\kappa_{2}=1.259$
($G_{0}=0.860\,\tilde{a}^{2}$) in the elongated phase. A way of envisioning
the generated spacetimes was suggested by their similarity to a four-sphere
(de Sitter space in imaginary time), stemming from a comparison of an averaged
volume-distance relation with that of a $D$-sphere of radius $r_{0}$, up to an
intermediate distance, which gave $\\{D,r_{0}\\}=\\{4.2,13.4\,\tilde{a}\\}$
and $\\{3.7,\,14.2\,\tilde{a}\\}$ respectively at $\kappa_{2}=1.255$ and
$\kappa_{2}=1.259$. More local analyses, strictly in $D=4$ dimensions, of
such volume-distance relations led to comparisons with four-spheres in the
elongated phase and 4D hyperbolic spaces in the crumpled phase [28]. A factor
$\lambda$ was proposed that converts the zigzag-hopping lattice-geodesic
distance $d_{\ell}$ to an effective continuum geodesic-distance $d_{\rm c}$
through the interior of the lattice (the value of $\lambda$ depends somewhat
on $\kappa_{2}$ and the lattice size, but much more on its application: the
so-called A-fit [28] is appropriate here for comparison with the exponential
decay of the propagators in [1]):
$d_{\rm c}=\lambda\,d_{\ell},\qquad\lambda\simeq 0.45\,.$ (64)
In the remainder of this section we use cutoff units $\tilde{a}=1$.
Figure 16: Renormalized mass $m$ vs. bare mass $m_{0}$. The dashed straight
lines are fits of $m=x\,m_{0}^{y}$ to only the data points at $m_{0}=0.1$ and
0.316, with $x(1.255)=1.24$, $y(1.255)=0.63$ (upper data, red) and
$x(1.259)=1.25$, $y(1.259)=0.66$ (lower data, blue). The curves represent
(68) with (70). Units $\tilde{a}=1$.
From Tables 1 and 2 in [1] we find the binding energies and masses:
$\begin{array}[]{lcccllccc}\kappa_{2}=1.255&m_{0}&m&E_{\rm
b}&&\kappa_{2}=1.259&m_{0}&m&E_{\rm b}\\\
&0.0316&0.14&0.035(2)&&&0.0316&0.12&0.019(2)\\\
&0.1&0.29&0.064(2)&&&0.1&0.27&0.038(2)\\\
&0.316&0.60&0.078(2)&&&0.316&0.58&0.053(1)\\\
&1&1.21&0.054(1)&&&1&1.20&0.045(1)\end{array}$ (65)
It is interesting to focus first on the renormalized mass, which represents in
perturbation theory a binding of a ‘cloud of gravitons’ to a bare particle.
Based on the shift symmetry of the scalar field action it was argued in [33]
that the mass-renormalization should be multiplicative and not additive. A
power-like relation compatible with this was noted in [1]: $m\propto
m_{0}^{y}$, with $y=\ln(2.1)/\ln(\sqrt{10})=0.64$ (the values of $m_{0}^{2}$
used in the computation differed by factors of 10). A check on this is in the
log-log plot of figure 16, where the dashed straight lines are fits to only the
intermediate data points at $m_{0}=0.1$ and $1/\sqrt{10}=0.316$; the lines miss
the other data points only by a few percent or less. Similar fits to all four
data points support also remarkably precise power behavior.
However, if the power $y$ stays constant in the limit $m_{0}\to 0$, this is
only compatible with the absence of additive renormalization; multiplicative
renormalization suggests that $y$ should approach 1 in the zero-mass limit.
Numerical evidence for this was presented in [34] using so-called degenerate
triangulations in which finite-size effects are reduced compared to SDT.
Estimating by eye, the plots in that work appear compatible for small masses
$m_{0}\leq 0.1$ with a multiplicative relation $m=f\,m_{0}$, $f\approx 1.8$.
To see whether the numerical results can be interpreted by comparing with
‘renormalized perturbation theory’ we have calculated the bare $m_{0}^{2}$ as
a function of the renormalized $m^{2}$ to 1-loop order in the renormalized $G$
using dimensional regularization in the continuum (cf. appendix E).
Surprisingly, the result comes out UV- and IR-finite:
$m_{0}^{2}=m^{2}+\frac{5}{2\pi}\,Gm^{4}\,.\qquad\qquad\qquad\qquad\qquad\mbox{(continuum)}$
(66)
Transferring this relation to the SDT lattice while keeping its right-hand
side unambiguous, only the coefficient of the bare mass $m_{0}^{2}$ may be
affected by the lattice regularization, which differs very much from
dimensional regularization in the continuum. This suggests that
$\displaystyle f^{2}m_{0}^{2}$ $\displaystyle=$ $\displaystyle
m^{2}\left(1+\frac{5}{2\pi}\,G\,m^{2}\right)\qquad\qquad\qquad\qquad\qquad\mbox{(lattice)}$
(67)
where $f$ depends on $G_{0}/a^{2}$ (or equivalently $\kappa_{2}$) but not on
$m_{0}$ or $m$ (in the currently quenched approximation). (In SDT, the
permutations of the labels assigned to vertices form a remnant of the
diffeomorphism gauge group, which is effectively summed over in the numerical
computations; the renormalized $m$ and $G$ are defined in terms of
gauge-invariant observables.) In the quenched approximation we can
alternatively think of $G$ as defined by the terms of order $m^{2}$ and $m^{4}$ in an
expansion of $m_{0}^{2}$ vs. $m^{2}$, as in (67), and compare with the
binding-energy definition.
Using the renormalized Planck lengths $\ell_{\rm P}=\sqrt{G}$ in (73) obtained
from the binding energy, a fit of $f$ to the renormalized mass at the smallest
bare mass $m_{0}=0.0316$ gives $f(1.255)=5.25$, $f(1.259)=4.19\,$. With these
$f$ and $G$ the formula (67) turns out to describe surprisingly well also the
other three masses (within a few percent for the next two larger masses and
within 20% for the largest). By fitting the data at more masses it is also
possible to estimate $\ell_{\rm P}$. A fit of (67) to the renormalized masses at the
two smaller bare masses ($m_{0}=0.0316$ and 0.1) yields similar values for $f$
and the Planck lengths come out as $\ell_{\rm P}(1.255)=6.6$, $\ell_{\rm
P}(1.259)=5.3\,$, and again (67) fares quite well for the two other masses.
However, at the two fitted masses the factor $1+5Gm^{2}/(2\pi)$ comes out much
larger than 1: $\\{1.7,\,3.9\\}$ and $\\{1.3,\,2.6\\}$, respectively for
$\kappa_{2}=1.255$ and $1.259$; for the not fitted masses this factor is even
very much larger. The perturbative formula appears to work too well, as if it
were nearly exact, which is of course hard to believe. We avoided this problem
by using a rational function representation of the renormalization ratio
$\frac{m_{0}^{2}}{m^{2}}=\frac{1}{f^{2}}\,R(m^{2}),\qquad
R(m^{2})=\frac{1+p\,m^{2}}{1+q\,m^{2}}\,,$ (68)
and identified $G$ from the expansion
$R(m^{2})=1+(p-q)m^{2}+(q^{2}-pq)m^{4}+\cdots\,,\qquad G=(2\pi/5)(p-q)\,,$
(69)
in which we can think of the $\mathcal{O}(Gm^{2})$ term as applying to very
small masses. Fitting $R(m^{2})/f^{2}$ to the renormalization ratio of the
three smaller masses gives
$\displaystyle\\{f,\,p,\,q\\}$ $\displaystyle=$
$\displaystyle\\{6.2,\,55.7,\,2.6\\},\quad\ell_{\rm
P}=8.1\,,\qquad\kappa_{2}=1.255\,,$ $\displaystyle\\{f,\,p,\,q\\}$
$\displaystyle=$ $\displaystyle\\{4.5,\,32.9,\,3.0\\},\quad\ell_{\rm
P}=6.1\,,\qquad\kappa_{2}=1.259\,.$ (70)
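The expansion (69) and the resulting identification of $G$ are easy to check symbolically. The following is a minimal sketch (Python with sympy, an illustration rather than the original analysis) that verifies the series coefficients of (68) and recomputes $\ell_{\rm P}=\sqrt{G}$ from the fitted $p$, $q$ at $\kappa_{2}=1.255$:

```python
import math
import sympy as sp

# Symbolic check of the small-mass expansion (69) of the rational
# renormalization function R(m^2) = (1 + p m^2)/(1 + q m^2) of eq. (68).
x, p, q = sp.symbols('x p q')          # x stands for m^2
R = (1 + p*x) / (1 + q*x)
series = sp.series(R, x, 0, 3).removeO()
expected = 1 + (p - q)*x + (q**2 - p*q)*x**2
assert sp.expand(series - expected) == 0

# Identify G = (2*pi/5)*(p - q) and ell_P = sqrt(G) from the fitted
# values {p, q} = {55.7, 2.6} at kappa_2 = 1.255 quoted in (70).
G = (2*math.pi/5) * (55.7 - 2.6)
ell_P = math.sqrt(G)
print(round(ell_P, 1))                 # close to the ell_P = 8.1 of (70)
```

The printed value agrees with (70) up to the rounding of the quoted $p$ and $q$.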
The fit is shown in figure 16 as $m$ versus $m_{0}$ (which can be obtained
easily in exact form from (68)). In Planck units the resulting renormalized
masses $m\ell_{\rm P}$ are given by
$\begin{array}[]{llcllccc}\kappa_{2}=1.255&m_{0}&m\ell_{\rm P}&&\kappa_{2}=1.259&m_{0}&m\ell_{\rm P}&\mbox{(from mass renormalization)}\\ &0.0316&1.13&&&0.0316&0.736&\\ &0.1&2.35&&&0.1&1.66&\\ &0.316&4.86&&&0.316&3.56&\\ &1&9.80&&&1&7.36&\end{array}$ (71)
They are in the intermediate to large mass regions of models I and II.
Figure 17: Left: Binding energy vs. $m$ . Right: Binding-energy ratio $E_{\rm
b}/m$ vs. $m$ . Upper data (red): $\kappa_{2}=1.255$ , lower data (blue):
$\kappa_{2}=1.259$ .
For the mass renormalization, the data at the smallest bare mass appears to
still make sense when neglecting finite-size effects. Yet, there are good
reasons to distrust the binding energy data at $m_{0}=0.0316$, and at
$m_{0}=1$: the renormalized mass of the first is too small for a reasonable
determination of binding energies on the distance scale of the
simulations,161616In figure 4 of [1], the propagators show exponential fall-off
at large distances for all masses, but the effective $E_{\rm b}(r)$ of the
smallest mass in figure 5 lacks a stationary region as for the other masses
($\kappa_{2}$ is the same in both figures.) and the renormalized mass of the
second is so large that strong lattice artefacts are to be expected. There is
no reason to suspect the data at the other two mass values. The left plot in
figure 17 shows the binding energy versus the renormalized masses. At the
largest mass the binding energies have dropped, which seems odd. The ratio $E_{\rm b}/m$ in
the right plot of figure 17 shows an almost linear behavior. But a linear
extrapolation of the first (left) three points towards $m=0$ would give silly
physics, since one expects that $E_{\rm b}/m$ vanishes rapidly as $m$ goes to
zero. These plots strengthen our suspicion of the binding energy data at the
smallest (and largest) mass.
Assuming that the SDT data for the two intermediate masses ($m_{0}=0.1$ and
0.316) can be connected with the Newtonian behavior $E_{\rm b}/m\to
G^{2}m^{4}/4$ as $m\to 0$, consider fitting them by functions of the form
$\frac{E_{\rm b}}{m}=F_{n}(m\ell_{\rm P})=\frac{(m\ell_{\rm
P})^{4}}{4P_{n}(m\ell_{\rm P})}\,,\quad
P_{n}(x)=1+\sum_{k=1}^{n}c_{k}\,x^{k}\,,$ (72)
in which $\ell_{\rm P}$ is the renormalized Planck length in lattice units.
With $n=5$ the polynomial in the denominator can implement the trend of
$E_{\rm b}/m$ falling with increasing $m$. Without further input the minimal
Ansatz $P_{5}(m\ell_{\rm P})=1+c_{5}(m\ell_{\rm P})^{5}$ leads to the fit:
$\displaystyle\\{\ell_{\rm P},c_{5}\\}$ $\displaystyle=$
$\displaystyle\\{5.10,0.625\\},\qquad\kappa_{2}=1.255\,,$ (73)
$\displaystyle=$ $\displaystyle\\{4.37,1.07\\},\qquad\kappa_{2}=1.259\,.$
The ordering in magnitude, $\ell_{\rm P}(1.255)>\ell_{\rm P}(1.259)$ follows
that of the bare Planck lengths $\ell_{\rm P0}(1.255)=0.9287$, $\ell_{\rm
P0}(1.259)=0.9273$ (this applies also to (70)). The fits are shown in figure
18. Implicit in the form of the fit function is the assumption that
sufficiently to the left of its maximum it represents continuum behavior of
$E_{\rm b}/m$ on huge lattices with negligible finite-size effects.
Figure 18: Double-logarithmic plot of $E_{\rm b}/m$ vs. $m\ell_{\rm P}$
obtained with the minimal-Ansatz fit (73) which uses only data at $m_{0}=0.1$
and 0.316 . Upper data and curves (red): $\kappa_{2}=1.255$ , lower (blue):
$\kappa_{2}=1.259$ . Downward shifted smallest mass data are indicated by
blank spots on the curves.
In the ‘minimal Ansatz’ fit (73) the renormalized masses come out in Planck
units as
$\begin{array}[]{llcllccc}\kappa_{2}=1.255&m_{0}&m\ell_{\rm P}&&\kappa_{2}=1.259&m_{0}&m\ell_{\rm P}&\mbox{(from $E_{\rm b}/m$ minimal Ansatz)}\\ &0.0316&0.71&&&0.0316&0.52&\\ &0.1&1.5&&&0.1&1.2&\\ &0.316&3.1&&&0.316&2.5&\\ &1&6.2&&&1&5.2&\end{array}$ (74)
The smallest renormalized masses (left out of the fits, $m_{0}=0.0316$) are
not particularly small in Planck units—more like in the intermediate mass
region of models I and II. Their $E_{\rm b}/m$ ratios are also shown in figure
18. The blank spots on the curves indicate values they should have on huge
lattices, assuming the curves are right. These shifted values seem rather
small, too different from the actual (although distrusted) numerical data. One
would like to take into account also the smallest mass $E_{\rm b}/m$ data,
somehow. Including it in a chi-squared fit does not work: the resulting curves
turn out to go practically through the smallest mass point while missing the
other two by several standard deviations. This puts the minimal Ansatz into
question. Using a modified $P_{5}(m\ell_{\rm P})=1+c_{4}(m\ell_{\rm P})^{4}+c_{5}(m\ell_{\rm P})^{5}$ in a three-parameter fit ($c_{4}$, $c_{5}$
and $\ell_{\rm P}$) leads to satisfactory-looking fit curves, with $\ell_{\rm P}(1.255)=9.9$, $\ell_{\rm P}(1.259)\simeq 11$. However, these are rather
large Planck lengths, which moreover violate the ordering $\ell_{\rm P}(1.255)>\ell_{\rm P}(1.259)$. A good compromise is found to be: fix
$\ell_{\rm P}$ by the mass-renormalization results (70) and perform a two-parameter
($c_{4}$ and $c_{5}$) chi-squared fit to the three smaller-mass data; then
$\displaystyle\\{\ell_{\rm P},c_{4},c_{5}\\}$ $\displaystyle=$
$\displaystyle\\{8.1,0.179,0.368\\},\qquad\kappa_{2}=1.255\,,\qquad\qquad\mbox{(mixed
fit)}$ (75) $\displaystyle=$
$\displaystyle\\{6.1,0.589,0.603\\},\qquad\kappa_{2}=1.259\,.$
The results are shown and compared with the minimal Ansatz fits in figures 19
and 20. The left plots show $E_{\rm b}/m$ versus the bare masses. We see that
for $\kappa_{2}=1.255$ the downward shift of the smallest-mass data to the
curves has been reduced to an acceptable size; for $\kappa_{2}=1.259$ it is still
substantial.
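The structure of the mixed fit can be sketched in Python with scipy (an illustration, not the original fitting code; since the underlying binding-energy data are not tabulated here, the ‘data’ points are regenerated from the published parameters (75) at the $\kappa_{2}=1.255$ masses of (71), so the fit simply recovers them):

```python
import numpy as np
from scipy.optimize import curve_fit

# Mixed fit of eq. (75): ell_P is held fixed at the mass-renormalization
# value of (70) and only c4, c5 are fitted in
#   E_b/m = F5(x) = x^4 / (4*(1 + c4*x^4 + c5*x^5)),  x = m*ell_P.
def F5(x, c4, c5):
    return x**4 / (4.0 * (1.0 + c4 * x**4 + c5 * x**5))

x = np.array([1.13, 2.35, 4.86])  # m*ell_P of the three smaller masses, (71)
y = F5(x, 0.179, 0.368)           # stand-in for the E_b/m data points

(c4, c5), _ = curve_fit(F5, x, y, p0=(0.1, 0.5))
assert np.allclose([c4, c5], [0.179, 0.368], atol=1e-4)
```

With real data one would of course weight the residuals by the quoted errors; the sketch only fixes the bookkeeping of which parameters are fitted and which are held.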
Figure 19: Plots showing results of the mixed fit (75) and the minimal-Ansatz
fit (73); $\kappa_{2}=1.255$. Left: $E_{\rm b}/m$ vs. the bare mass $m_{0}$
(upper curve left of the maxima: mixed fit). Downward shifted smallest mass
data are again indicated by blank spots on the curves. Right: double-
logarithmic plot. Error bars have been left out in the minimal-Ansatz case
(lower curve right of the maxima) for easier recognition of corresponding data
points.
Figure 20: As in figure 19, here for $\kappa_{2}=1.259$.
Applying the conversion factor (64) to the Planck lengths, their continuum
version is $\ell_{\rm P,c}=\lambda\,\ell_{\rm P}$ :
$\ell_{\rm P,c}(1.255)=3.6=1.15\,a\,,\quad\ell_{\rm P,c}(1.259)=2.8=0.87\,a\,.$ (76)
The two intermediate masses in the minimal Ansatz fits are already in the
large-mass region of models I and II. Inspired by these models for
interpreting the data here, realizing full well that the jump from the
continuum to SDT is a big one, we recall the size of the bound states,
$13\,\ell_{\rm P}$ and $7\,\ell_{\rm P}$ respectively in model-I
($\lambda_{\rm min}=3.3\,\ell_{\rm P}$) and model-II, for $m\ell_{\rm P}=2$
(cf. (49) and (53)). These bound-state sizes are similar to half the
circumference of the above-mentioned four-spheres approximating the average
SDT spacetimes, e.g. for $\kappa_{2}=1.259$, $r_{0}\pi\approx 45\,\approx
10\,\ell_{\rm P}$ . Hence, the SDT bound-state could well be
squeezed—suffering from a finite-size effect—which raises the energy and
lowers $E_{\rm b}$. In models I and II the size of the bound states increases
with increasing $m$ (since $r_{\rm s}$ and $r_{\rm min}$ are roughly
proportional to $m$) and such squeezing may explain the curious lowering of
$E_{\rm b}/m$ with increasing $m$, here in SDT.
Since models I and II illustrate such different possibilities as,
respectively, a potential singular at $r_{\rm s}>0$, and a slowly varying
potential with a minimum at $r_{\rm min}>0$ reflecting a running coupling with
a UV fixed point, it is interesting to compare with SDT some more of their
qualitative features:
1. 1.
At small $m\ell_{\rm P}$ in both models, the position $r_{\rm max}$ of the
maximum of the ground-state wave function’s magnitude $|f_{1}(r)|^{2}$ is near
the Bohr radius $a_{\rm B}=2\ell_{\rm P}/(m\ell_{\rm P})^{3}$; for $m\ell_{\rm
P}=0.1$ this is about $2000\,\ell_{\rm P}$ (!). As $m$ increases $r_{\rm max}$
first goes to a minimal value after which it increases asymptotically $\propto
m$. In model-I the scale of $r_{\rm max}$ is then set by $r_{\rm
s}=3m\ell_{\rm P}^{2}$, in model-II by $r_{\rm min}=m\ell_{\rm P}^{2}$.
2. 2.
In the small mass region $m\ell_{\rm P}\lesssim 0.4$ the ratio $E_{\rm b}/m$
in model-I increases faster with $m$ than the Newtonian $(m\ell_{\rm
P})^{4}/4$; in model-II this increase is slower.
3. 3.
Different implementations of a UV cutoff in the relativistic model-I imply
different versions of the model. Binding energies for minimum wavelength
cutoffs $\lambda_{\rm min}/\ell_{\rm P}=1$, 3.3 and 19.8 were shown in figure
8. Effects of the singularity come to the fore when $\lambda_{\rm min}\ll
r_{\rm s}$, i.e. $\lambda_{\rm min}/\ell_{\rm P}\ll 3m\ell_{\rm P}$.
Typically, $E_{\rm b}/m$ rises rapidly above 1 when $m\ell_{\rm P}$ increases
beyond 0.6 and at large masses it increases $\propto m^{2}$.
4. 4.
Model-II’s intermediate mass region is rather broad (figure 10) and the ratio
$E_{\rm b}/m$ rises slowly to a limiting value 0.25, a value much smaller than
typical in model-I.
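The striking number in point 1 is elementary arithmetic and can be spot-checked in one line (Python, in units $\ell_{\rm P}=1$):

```python
# Equal-mass Bohr radius of point 1, a_B = 2*ell_P/(m*ell_P)^3,
# evaluated at m*ell_P = 0.1 in units ell_P = 1.
m_lP = 0.1
a_B = 2.0 / m_lP**3
print(round(a_B))   # 2000, the 'about 2000 ell_P (!)' quoted above
```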
Above we arrived at the major conjecture of this paper: the decrease of
$E_{\rm b}/m$ with increasing mass in our SDT results is due to finite-size
effects that grow as a reflection of the Schwarzschild scale. This
incorporates the large-mass aspects of point 1. For point 2: expansion of the fit
function, $F_{5}(m\ell_{\rm P})=(1/4)(m\ell_{\rm P})^{4}(1-c_{4}(m\ell_{\rm
P})^{4}-c_{5}(m\ell_{\rm P})^{5}+\cdots)$, shows that model-II is favoured
since $c_{4,5}>0$. Point 3: With the Planck lengths $\ell_{\rm P}\approx 6$
and 8 of the mass-renormalization fit (70), $m\ell_{\rm P}>1.5$ for the two
intermediate masses in (71). On the dual lattice the minimal wavelength is
$2\tilde{a}=2$; hence $(\lambda_{\rm min}/\ell_{\rm P})_{\rm lat}\lesssim
1/3$. Then $m\ell_{\rm P}$ is large enough for the intermediate masses to
satisfy $\lambda_{\rm min}/\ell_{\rm P}\ll 3m\ell_{\rm P}$. When comparing
with model-I features the case $\lambda_{\rm min}=19.8$ is not relevant. For
$\lambda_{\rm min}/\ell_{\rm P}=3.3$ the ratio $E_{\rm b}/m$ shoots up rapidly
when $m\ell_{\rm P}\gtrsim 4$, way beyond the values found here in the SDT study.
Smaller $\lambda_{\rm min}$ gave binding-energy ratios which, already in
the intermediate mass region, are orders of magnitude larger than those found in SDT.
There is no indication of model-I behavior in the SDT data. Point 4: the
magnitude of the numerical $E_{\rm b}/m$ is smaller than $0.25$, as in model-
II.
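The expansion invoked for point 2 is easy to verify symbolically; a minimal sympy sketch (an illustration, not part of the original analysis):

```python
import sympy as sp

# With x = m*ell_P, expand F5(x) = x^4/(4*(1 + c4*x^4 + c5*x^5)) at small x:
#   F5(x) = (1/4)*x^4*(1 - c4*x^4 - c5*x^5 + ...),
# so positive c4, c5 make E_b/m rise more slowly than the Newtonian x^4/4,
# favouring model-II behaviour as argued above.
x, c4, c5 = sp.symbols('x c4 c5', positive=True)
F5 = x**4 / (4*(1 + c4*x**4 + c5*x**5))
expansion = sp.series(F5, x, 0, 10).removeO()
expected = sp.expand(sp.Rational(1, 4)*x**4*(1 - c4*x**4 - c5*x**5))
assert sp.expand(expansion - expected) == 0
```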
All in all the explorative SDT data are compatible with model-II behavior, not
with model-I’s.
## 9 Summary
In the previous sections, binding energies in models I and II were found to
depend very much on whether $m$ is in a small-mass region or a large-mass
region. At very small masses it approaches the Newtonian form $E_{\rm
b}=G^{2}m^{5}/4$. Requiring that perturbative one-loop corrections are smaller
than the zero-loop value gives $m/m_{\rm P}\lesssim 0.54$ ($0.66$) in case I
(II).171717The numbers depend logarithmically on a UV-cutoff length $\ell$ in
the potential, which was chosen equal to the Planck length (cf. (18)). This
gives an idea of where the small mass regions end. The relativistic Newton
model turns out to have no ground state, $E_{\rm min}=-\infty$, for $m>m_{\rm
c}\simeq 1.3\,m_{\rm P}$, and the average squared-velocity approaches one when
$m\uparrow m_{\rm c}$.
Model-I’s energy eigenvalues have a non-vanishing imaginary part, a
probability decay rate $\Gamma=-2\,{\rm Im}[E]$. In the small-mass region,
the decay rate of the ground state $\Gamma_{\rm b}\propto G^{11/2}\,m^{12}$
(cf. (47)). In the large-mass region the binding energy $E_{\rm b}\equiv-{\rm
Re}[E_{\rm min}]$ is huge in the non-relativistic model, $E_{\rm b}\propto
G^{4}m^{9}$, $\Gamma_{\rm b}\propto G^{3}m^{7}$ (figure 3 and appendix B.5).
The relativistic model-I lacks a ground state for $m\gtrsim 0.61\,m_{\rm P}$ ,
and this number certainly represents the end of its small-mass region. With a
UV cutoff on the derivative of the wave function $E_{\rm b}$ is finite. With a
minimum wavelength $\lambda_{\rm min}$, $E_{\rm b}$ and $\Gamma_{\rm b}$
approach infinity as $\lambda_{\rm min}\to 0$. For $\lambda_{\rm
min}=\ell_{\rm P}$ and $m\gtrsim 2\,m_{\rm P}$, the binding energy is even
close to the non-relativistic one (figure 8). Peculiar undulations occur in
the mass dependence of ${\rm Re}[E_{\rm min}]$, which are accompanied by a
wildly varying ${\rm Im}[E_{\rm min}]$ (figure 9). At fixed $\lambda_{\rm
min}$, $E_{\rm b}\propto m^{3}$ for large masses.
Model-I’s ground state $|\mbox{eigenfunction}|^{2}$ peaks near $r_{\rm
s}\simeq 3Gm$ in the large-mass region. Eigenfunctions with a large decay rate
have their domain in the inside region, $r\lesssim r_{\rm s}$, and they are
not excited when an initial wave packet does not penetrate this region. In the
study of collapse (figure 14) during bouncing (figure 15) the time scales stem
from the excited modes181818Mode numbers $j$ around 15 in the right plots of
figures 12 and 5. which have decay rates $\Gamma\lll m_{\rm P}$. After
dividing out the absorption effect on the norm of the wave function, the
bouncing is for $m=2\,m_{\rm P}$ qualitatively similar to the Newton case.
Model-II has a singularity-free potential with a minimum at a finite distance
$r_{\rm min}$ that increases with $m$; at large masses $r_{\rm min}\simeq Gm$
and $V_{\rm min}\simeq-m/4$. Its non-relativistic and relativistic versions
differ only substantially in an intermediate mass region ($\lesssim 30$% for
$E_{\rm b}/m$). The ground state $|\mbox{eigenfunction}|^{2}$ is large near
$r_{\rm min}$ and the hydrogen-like spectrum in the small-mass region changes
slowly to that of an anharmonic oscillator at large masses, where $E_{\rm
b}/m\to 1/4$, a value much smaller than is typical in model-I. For $m=2\,m_{\rm
P}$ its bouncing behavior of an in-falling wave packet appears to deviate
somewhat more from the Newton case than model-I (figure 13).
Model-I shares the absorption effect with black holes. The classical motion in
the classical-evolution (CE) models (in which the quantum term in the beta
function is neglected) can be extended through the singularity into one of
perennial bouncing and falling back (appendix D). In the CE-I case, the
relativistic velocity of particles falling-in from a distance $r_{0}>r_{\rm
s}$ reaches that of light at $r_{\rm s}$. (In the relativistic Newton model
the particles also reach the light velocity, but only strictly at the origin
where they may pass each other—the model has no inside region.) In case II
both properties are absent (no absorption and $v_{\rm rel}<0.46$ even when
falling in from infinity). The model still shares the interesting possibility
of quantum physics at macroscopic distances $\propto m$ where the bound-state
wave function is maximal. Since the potential in both models is regular at the
origin, they show features similar to ‘regular black holes’ [35, 36].
In reanalyzing the SDT results, the data at the largest renormalized mass was
not used since one expects its value to cause large lattice artefacts. The
remaining mass-renormalization results were compared to a formula derived from
renormalized perturbation theory to order $G$ and adapted to the lattice. The
formula described the results surprisingly well, too well to be believed, and
it was therefore re-interpreted as the $\mathcal{O}(G)$ term in the expansion
of a phenomenological function fitted to the data. This led to an estimate of
the renormalized Newton coupling from mass renormalization.
The binding-energy results at the smallest mass were treated with caution
since their determination in [1] is not convincing. Discarding them initially,
phenomenological fits with the Newtonian constraint $E_{\rm b}/m\to
G^{2}m^{4}/4$ as $m\to 0$ led to estimates of $\sqrt{G}$ somewhat smaller than
the ones from mass-renormalization. Treating the latter as fiducial values in
improved fits which included also the smallest-mass data finally led to a
reasonable understanding of the binding-energy results. The values $m\ell_{\rm
P}$ of the trusted masses for the binding energy came out as lying clearly in
the large-mass region of models I and II. This offered the explanation of the
puzzling mass dependence in the $E_{\rm b}/m$ data as a large-mass finite-size
effect. Further comparison with characteristic features of models I and II, in
particular the magnitude of $E_{\rm b}/m$, then led to the conclusion that the
explorative SDT results are compatible with model-II behavior, and not with
that of model-I.
## 10 Conclusion
Models I and II are interesting in their own right. Model-I, with its pole and
inverse square-root singularities at $r_{\rm s}$, required considerable
numerical effort. The imaginary part of its potential depends on the presence
of both classical and quantum corrections in the beta function. It occurs in
the region $r<r_{\rm s}$ and is maximal near $r_{\rm s}$, which is a finite
distance from the origin for all mass values.191919This is different from the
Dirac Hamiltonian in a Schwarzschild geometry in which the non-hermitian part
is concentrated at the origin [31]. For small masses the ground state decays
slowly202020$\Gamma_{\rm b}$ is $\mathcal{O}(G^{1/2}\,m)$ smaller than the
two-graviton decay rate of equal-mass ‘gravitational atoms’, $\Gamma_{\rm
atom}=(41/(128\,\pi^{2}))\,G^{5}\,m^{11}$, which depends primarily on the wave
function at the origin [37]. at a rate $\Gamma_{\rm b}\approx
1.5\,G^{11/2}\,m^{12}$. For large masses the relativistic model lacks a ground
state. Yet, a spherical wave packet state falling in from a distance $r\gg
r_{\rm s}$ is primarily composed of excited states with small decay rates and
the packet still exhibits bouncing and falling back during its slow decay. It
is desirable to extend the model by including decay channels into gravitons.
The non-trivial UV fixed point in model-II leads to a regular potential at all
$r$. The increase of its minimum at $r_{\rm min}$ with $m$ suggests the
possibility of a macroscopic anharmonic oscillator when $r_{\rm min}$ becomes
of order of the Schwarzschild scale. In-falling spherical states keep their
norm while bouncing. Some of the local probability should diminish eventually
by the familiar ‘spreading of the wave packet’.
Using the SDT results in [1], Planck lengths obtained with perturbative mass
renormalization or with matching binding energies to the Newtonian region were
similar; the first were actually employed to improve the analysis of the
latter.212121The renormalized ‘continuum Planck lengths’ in lattice units,
$\ell_{\rm Pc}/a\approx 1.15$ to $0.9$ happen to be larger than the value 0.48
found in CDT, based on a different method ([19] section 11). The magnitude of
the binding energy is roughly compatible with values found in model-II. The
growth of $r_{\rm s}$ and $r_{\rm min}$ in models I and II suggested a
reasonable interpretation of the binding energy data. The relevance of the
Schwarzschild scale in this interpretation came as a surprise.
Simulations on larger lattices are necessary to see whether these conclusions
hold up to further scrutiny. This should be possible with current
computational resources when carried out in a large-mass region, and may tell
us something non-perturbative about black holes in the quantum theory.222222As
the volume increases $E_{\rm b}/m$ vs. $m$ should stop decreasing; it might
flatten as in model-II or even increase as in model-I. In a plot like figure 5
of [1] one might see an oscillation in the effective $E_{\rm b}(r)$ beyond
$r=6$ indicating a complex energy (and its conjugate), something like
$\exp(-{\rm Re}[E]\tau)\cos({\rm Im}[E]\tau)$ with $\tau=r+{\rm const}$.
Simulations at small masses, $m\ell_{\rm P}\ll 1$, aiming at observing binding
energies of Newtonian magnitude seem very difficult because of the rapid
increase of the equal-mass Bohr radius $2\ell_{\rm P}/(m\ell_{\rm P})^{3}$.
## Note added
Shortly after the previous version of this article, a new EDT computation of
the quenched binding energy of two scalar particles appeared in [38]. The
authors used the ‘measure term’ and an extended class of ‘degenerate’
triangulations as in [24, 34]. Their analysis included short distances in
which dimensional reduction was expected to influence the results. This was
taken into account by assuming a corresponding mass dependence of the binding
energy, $E_{\rm b}=G^{2}m^{\alpha}/4$. Subsequently an infinite-volume
extrapolation and a continuum extrapolation led to the Newtonian $\alpha=5$ in
four dimensions and a renormalized Newton coupling $G$ with relatively small
statistical errors. The computation used very small masses and is in this
sense complementary to [1] in which (as concluded here) binding energies were
computed in a large-mass region. A follow-up article [39] addressed the
relation of the Newton coupling to the lattice spacing more closely and
described also the computation of a differently defined $G$, which agreed
quite well with [38].
## Acknowledgements
Many thanks to the Institute of Theoretical Physics of the University of
Amsterdam for its hospitality and the use of its facilities contributing to
this work. I thank Raghav Govind Jha for drawing my attention to the results
in [34].
## Appendix A Evolution equation
The equation $-r\partial\tilde{G}/\partial r=\beta(\tilde{G})$ simplifies when
$\beta$ in (25) is expressed in terms of $\sqrt{\tilde{G}}\equiv z$ (units
$G=1$; the notation $b=dm/2$ is introduced for convenience):
$\displaystyle-\frac{r\partial z}{\partial r}$ $\displaystyle\equiv$
$\displaystyle\beta_{z}=z+2bz^{2}+cz^{3},\quad z=\sqrt{\tilde{G}},\quad
b=dm/2,$ (77) $\displaystyle=$ $\displaystyle cz(z-z_{1})(z-z_{2}),\quad
z_{1}=\frac{-b-\sqrt{b^{2}-c}}{c},\quad z_{2}=\frac{-b+\sqrt{b^{2}-c}}{c}$
(78)
We note that $c$, and $b$ are positive in model-I and negative in model-II
((4), (5)). The critical coupling in model-II is
$z_{*}=z_{1}\,,$ (79)
and (29)–(31) in the main text follow. Separating variables, integrating and
imposing the boundary condition $z\to 1/r$ for $r\to\infty$, the solution can
be obtained in the form
$\displaystyle\ln(r)$ $\displaystyle=$ $\displaystyle-\ln(z)-f(z)+f(0),$ (80)
$\displaystyle f(z)$ $\displaystyle=$
$\displaystyle\frac{\ln(z-z_{1})}{cz_{1}(z_{1}-z_{2})}+\frac{\ln(z-z_{2})}{cz_{2}(z_{2}-z_{1})}\qquad\qquad\mbox{(model-I)}$
(81) $\displaystyle=$
$\displaystyle\frac{\ln(z_{1}-z)}{cz_{1}(z_{1}-z_{2})}+\frac{\ln(z-z_{2})}{cz_{2}(z_{2}-z_{1})}\qquad\qquad\mbox{(model-
II)}\,.$ (82)
The second form is chosen for model-II to avoid $f(0)$ being complex, since
$z<z_{1}$ in this case. For $b=0$ ($m=0$), $f(z)$ simplifies to
$-(1/2)\ln(z^{2}+1/c)$, resulting in $z^{2}=1/(r^{2}-c)$, as used in (32).
(This follows more easily directly from $\beta(\tilde{G},0)$).
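Both algebraic statements of this appendix, the factorization (78) and the $b=0$ simplification used in (32), can be verified with a short sympy sketch (an illustration, not part of the original text):

```python
import sympy as sp

# (i) Factorization (78): z + 2b z^2 + c z^3 = c z (z - z1)(z - z2).
z, b, c = sp.symbols('z b c')
z1 = (-b - sp.sqrt(b**2 - c)) / c
z2 = (-b + sp.sqrt(b**2 - c)) / c
beta_z = z + 2*b*z**2 + c*z**3
assert sp.simplify(beta_z - c*z*(z - z1)*(z - z2)) == 0

# (ii) For b = 0, f(z) = -(1/2) ln(z^2 + 1/c), so (80) gives
#   r = exp(-ln z - f(z) + f(0)) = sqrt(c) sqrt(z^2 + 1/c) / z,
# i.e. r^2 = c + 1/z^2, equivalently z^2 = 1/(r^2 - c) as used in (32).
r_of_z = sp.sqrt(c) * sp.sqrt(z**2 + 1/c) / z
assert sp.simplify(r_of_z**2 - (c + 1/z**2)) == 0
```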
## Appendix B More on model-I
Since $f(z)\to-\ln(z)+\mathcal{O}(1/z^{2})$ for $z\to\infty$, the position of
the singularity is given by
$r_{\rm s}=\exp[f(0)]\,.$ (83)
Expanding $r$ as a function of $z$ for $z\to\infty$ gives
$\bar{r}\equiv\frac{r}{r_{\rm
s}}=\frac{1}{z}\,e^{-f(z)}=1+\frac{1}{2cz^{2}}-\frac{2b}{3c^{2}z^{3}}+\mathcal{O}(z^{-4}),$
(84)
with the inversion
$\displaystyle z$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2c}\,\sqrt{\bar{r}-1}}-\frac{2b}{3c}-\frac{8b^{2}-3c}{12\sqrt{2}\,c^{3/2}}\,\sqrt{\bar{r}-1}+\mathcal{O}(\bar{r}-1)\,,$
(85) $\displaystyle z^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{2c(\bar{r}-1)}-\frac{2\sqrt{2}\,b}{3c^{3/2}}\,\frac{1}{\sqrt{\bar{r}-1}}-\frac{40b^{2}-9c}{36c^{2}}+\mathcal{O}(\sqrt{\bar{r}-1})\,.$
(86)
Coefficients of even (odd) powers $(\sqrt{\bar{r}-1})^{k}$ in the expansion
(86) happen to be even (odd) polynomials in $b$ of order $k+2$. For later use
we note that keeping only the terms linear in $b$ gives a series that
converges in $\bar{r}\in(0,1)$, where $z^{2}$ is imaginary.
Keeping in $V_{\rm r}=-m^{2}rz^{2}$ only the first term of the expansion (86),
or the first two terms, gives models which can be used to study the effect of
the singularity on the binding energy: the pole model, respectively the
pole+square-root model:
$\displaystyle V_{\rm P}$ $\displaystyle=$ $\displaystyle-m^{2}r\,\frac{r_{\rm
s}}{2c(r-r_{\rm
s})}\,,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\mbox{(P model)}$ (87)
$\displaystyle V_{\rm PSR}$ $\displaystyle=$
$\displaystyle-m^{2}r\left[\frac{r_{\rm s}}{2c(r-r_{\rm
s})}-\frac{\sqrt{2}\,m}{c^{3/2}}\,\sqrt{\frac{r_{\rm s}}{r-r_{\rm
s}}}\right]\,\qquad\qquad\qquad\;\mbox{(PSR model)}$ (88)
(we used $b=3m/2$). These potentials do not vanish as $r\to\infty$ and are
intended to be used only in matrix elements that focus on a neighborhood of
the singularity. For the PSR potential at large $m$ the square-root
contribution should not overwhelm that of the pole, because if it did, then
all terms left out in the expansion (86) would contribute substantially and we
would be back to model-I.
For values of $r$ not close to $r_{\rm s}$ the dependence of $z$ on $r$ was
determined by solving (80) numerically for the real and imaginary parts of $z$
as a function of $r$ in the region $0<r<2r_{\rm s}$ ($z$ is real for $r_{\rm
s}<r<2r_{\rm s}$). There are two solutions with opposite signs of ${\rm
Im}[z]$. The one with ${\rm Im}[z]<0$ is chosen to get a decaying time
dependence of the eigenfunctions of the Hamiltonian. For the remaining integral
$\int_{2r_{\rm s}}^{\infty}$ we used the inverse of a small-$z$ expansion of
$r$, or changed variables from $r$ to $z$.
Numerical evaluation of matrix elements of the running potential is delicate
because of the singularity at $r=r_{\rm s}$. Singular terms were subtracted
from $V_{\rm r}$ and their contribution was evaluated separately as follows
($F(r)$ is a smooth trial wave function or a product of basis functions):
$\int_{0}^{2r_{\rm s}}dr\,V_{\rm r}(r)F(r)=\int_{0}^{2r_{\rm s}}dr\,V_{\rm
reg}(r)F(r)+\int_{0}^{2r_{\rm s}}dr\,V_{\rm PSR}(r)F(r),\quad V_{\rm
reg}=V_{\rm r}-V_{\rm PSR}\,.$ (89)
The first integral on the right hand side was done numerically, the second
analytically. The regularized potential $V_{\rm reg}$ is finite but develops
at larger masses ($m>2$) a deep trough around $r_{\rm s}$ as a sort of
premonition of the double pole in the CE-I model, which slows numerical
integration. Distributional aspects in the analytic evaluation can be taken
care of in various ways: as in (40), or
$\left(\int_{0}^{r_{\rm s}-\epsilon}dr+\int_{r_{\rm s}+\epsilon}^{2r_{\rm
s}}dr\right)\frac{r_{\rm s}}{r-r_{\rm s}}\,F(r),\quad\epsilon\downarrow 0\,.$
(90)
for the pole, or
${\rm Re}\left[\int dr\,\frac{1}{(r-r_{\rm
s}+i\epsilon)^{n}}\,F(r)\right]_{\epsilon\downarrow 0},\qquad n=1,2\,,$ (91)
(assuming real $F(r)$ in the ‘$i\epsilon$ method’). Numerically, the
principal value in the symmetric integration $\int_{0}^{2r_{\rm s}}$ around
$r_{\rm s}$ can be obtained conveniently by a subtraction in the integrand,
$F(r)\to F(r)-F(r_{\rm s})$. The methods lead to identical results.
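The subtraction trick can be illustrated numerically (a Python sketch with scipy, using a toy trial function rather than the actual basis functions):

```python
import numpy as np
from scipy.integrate import quad

# Principal value of the pole integral int_0^{2 r_s} dr F(r)/(r - r_s)
# via the subtraction F(r) -> F(r) - F(r_s). With r_s = 1 and the toy
# trial function F(r) = r^2 one has exactly
#   PV int_0^2 dr r^2/(r-1) = int_0^2 dr (r+1) + PV int_0^2 dr 1/(r-1)
#                           = 4 + 0.
r_s = 1.0
F = lambda r: r**2

def subtracted(r):
    if np.isclose(r, r_s):
        return 2.0 * r_s          # limit F'(r_s) of (F(r)-F(r_s))/(r-r_s)
    return (F(r) - F(r_s)) / (r - r_s)

val, err = quad(subtracted, 0.0, 2.0 * r_s)
print(round(val, 6))              # 4.0
```

The subtracted integrand is regular at $r=r_{\rm s}$, so an ordinary quadrature suffices; the term $F(r_{\rm s})/(r-r_{\rm s})$ integrates to zero by the symmetry of the interval.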
### B.1 Orthogonality under transposition and variational method
The interpretation of the singular potential as a distribution is
implemented when evaluating matrix elements of the Hamiltonian,
$\langle\phi|H|\psi\rangle=\int dr\,\phi(r)H\psi(r)$. Starting formally,
consider basis functions $b_{n}(r)$ forming a complete set, and
$H_{mn}=\int_{0}^{\infty}dr\,b_{m}^{*}(r)\,(K+V)\,b_{n}(r)\equiv
K_{mn}+V_{mn}\,.$ (92)
The basis functions can be the s-wave Hydrogen eigenfunctions (including the
unbound states), or Fourier-sine functions (the $b_{n}(r)$ have to vanish at
the origin). We assume them to be real and orthonormal,
$\int_{0}^{\infty}dr\,b_{m}(r)b_{n}(r)=\delta_{mn},\quad\sum_{n}b_{n}(r)b_{n}(r^{\prime})=\delta(r-r^{\prime})\,.$
(93)
For simplicity we use a notation in which the labels $m$ and $n$ are discrete
and which has to be suitably adapted in case of continuous labeling. When the
$K_{mn}$ integrals diverge at infinite $r$ we assume them to be regularized by
$b_{n}(r)\to b_{n}(r)\exp(-\epsilon r)$ with the limit $\epsilon\downarrow 0$
taken at a suitable place. Then $K_{mn}$ and $V_{mn}$ are symmetric in
$m\leftrightarrow n$.
Since the potential is complex for $r<r_{\rm s}$, $V_{mn}\neq V_{nm}^{*}$, the
Hamiltonian is not Hermitian and its eigenvalues $E$, eigenvectors $f_{En}$
and eigenfunctions $f_{E}(r)$ are complex. The eigenvalue problem takes the
form
$f_{E}(r)=\sum_{n}f_{En}\,b_{n}(r),\quad\sum_{n}H_{mn}f_{En}=Ef_{Em}.$ (94)
The symmetry $H_{mn}=H_{nm}$ invites an inner product under transposition,
without complex conjugation. Using matrix notation
$f_{E^{\prime}}^{T}Hf_{E}=E^{\prime}f_{E^{\prime}}^{T}f_{E}=Ef_{E^{\prime}}^{T}f_{E}\to(E^{\prime}-E)f_{E^{\prime}}^{T}f_{E}=0$,
where we used $H^{T}=H$. Eigenvectors belonging to different eigenvalues are
still orthogonal and normalizing them to 1 (under transposition), we have in
more explicit notation232323Characters $j,k$ refer to eigenvectors of the
Hamiltonian, characters $m,n$ refer to basis vectors.
$\displaystyle f_{j}(r)$ $\displaystyle=$
$\displaystyle\sum_{n}f_{jn}\,b_{n}(r)\,,\quad\sum_{n}H_{mn}\,f_{jn}=E_{j}\,f_{jm}\,,$
(95) $\displaystyle\sum_{n}f_{jn}f_{kn}$ $\displaystyle=$
$\displaystyle\delta_{jk}\,,\quad\sum_{j}f_{jm}f_{jn}=\delta_{mn}\,,\quad
b_{n}(r)=\sum_{j}f_{jn}\,f_{j}(r)\,,\quad$ (96)
$\displaystyle\int_{0}^{\infty}dr\,f_{j}(r)f_{k}(r)$ $\displaystyle=$
$\displaystyle\delta_{jk}\,,\quad\sum_{j}f_{j}(r)f_{j}(r^{\prime})=\delta(r-r^{\prime})\,.$
(97)
For finite matrices $f_{jn}$ (which will be the case in our approximations)
the second equation in (96) follows from the first ($ff^{T}=1\\!\\!1\to
f^{T}f=1\\!\\!1$ since a right-inverse is also a left-inverse); at the formal
level with infinitely many basis functions it is an assumption. We also have
$\sum_{j}E_{j}f_{jm}f_{jn}=H_{mn}\,,\quad\,\sum_{j}E_{j}\,f_{j}(r)f_{j}(r^{\prime})=H(r,r^{\prime})\,.$
(98)
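Transpose-orthogonality is easy to exhibit numerically; the following numpy sketch (an illustration on a random matrix, not the model Hamiltonian) checks (96) and (98) for a complex symmetric, non-Hermitian $H$:

```python
import numpy as np

# Build a random complex symmetric matrix: H^T = H but H != H^dagger.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.T) / 2

E, f = np.linalg.eig(H)                  # columns f[:, j] are eigenvectors
f = f / np.sqrt(np.sum(f * f, axis=0))   # normalize under transposition

# Orthonormality without complex conjugation, f^T f = 1 (eq. 96):
assert np.allclose(f.T @ f, np.eye(4))
# Spectral representation sum_j E_j f_j f_j^T = H (eq. 98):
assert np.allclose((f * E) @ f.T, H)
```

Note the renormalization step: `np.linalg.eig` returns eigenvectors of unit length under the conjugate inner product, so they must be rescaled by $\sqrt{v^{T}v}$ (generically nonzero for a random matrix) before the transpose relations hold.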
In model-I the $f_{jn}$ and $f_{j}(r)$ are complex; they are real for model-II
and the other models with a real potential. In the discrete part of the
spectrum the labels $j$ on the eigenvectors will be assigned according to
${\rm Re}[E_{1}]<{\rm Re}[E_{2}]<{\rm Re}[E_{3}]<\cdots\,,$ (99)
assuming no degeneracy at zero angular momentum. An arbitrary wave function
$\psi(r)$ in radial Hilbert space can be decomposed as242424Note that in Dirac
notation $\langle n|\psi\rangle=\psi_{n}$ but $\langle
j|\psi\rangle=\int_{0}^{\infty}dr\,f_{j}(r)^{*}\,\psi(r)\neq\psi_{j}$, in
model-I.
$\psi(r)=\sum_{n}\psi_{n}\,b_{n}(r)=\sum_{j}\psi_{j}\,f_{j}(r)\,,\quad\psi_{j}=\int_{0}^{\infty}dr\,f_{j}(r)\,\psi(r)\,.$
(100)
Conventionally, the functional depending on a variational trial function
$\psi$ is
$\mathcal{E}[\psi]=\frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}=\frac{\sum_{mn}\psi^{*}_{m}H_{mn}\psi_{n}}{\sum_{n}\psi_{n}^{*}\psi_{n}}=\frac{\sum_{mn}(\rho_{m}H_{mn}\rho_{n}+\sigma_{m}H_{mn}\sigma_{n})}{\sum_{n}(\rho_{n}\rho_{n}+\sigma_{n}\sigma_{n})}\,,$
(101)
where $\psi_{n}=\rho_{n}+i\sigma_{n}$ (real $\rho$ and $\sigma$) and the
symmetry of $H_{mn}$ is used. The variational equations become
$\sum_{n}H_{mn}\rho_{n}=\mathcal{E}\rho_{m}\,,\quad\sum_{n}H_{mn}i\sigma_{n}=\mathcal{E}i\sigma_{m}\,.$
(102)
The sum of these equations appears equivalent to (95), their difference to the
complex conjugate of (95) without conjugating $E$. Hence they are not
equivalent to (95) unless $E$ and $\mathcal{E}$ are real, i.e. only for real
potentials. On the other hand,
$\mathcal{E}[\psi]=\frac{\sum_{mn}\psi_{m}H_{mn}\psi_{n}}{\sum_{n}\psi_{n}\psi_{n}}$
(103)
leads to the correct equation
$\sum_{n}H_{mn}(\rho_{n}+i\sigma_{n})=\mathcal{E}\,(\rho_{m}+i\sigma_{m})\,,$
(104)
implying that $\mathcal{E}$ is an eigenvalue. In variational estimates we
shall minimize the real part of $\mathcal{E}[\psi(a,\ldots)]$ with respect to
variational parameters $a,\ldots$. However the corresponding theorem in case
of a Hermitian Hamiltonian, $\mathcal{E}\geq E_{1}$, does not appear to hold
true with a complex symmetric Hamiltonian: using a transpose-normalized trial
function $\psi$ ($\sum_{n}\psi_{n}^{2}=1$, $\sum_{j}\psi_{j}^{2}=1$) gives
${\rm Re}[\mathcal{E}]-{\rm Re}[E_{1}]={\rm Re}\left[\sum_{j}\left(E_{j}-{\rm
Re}[E_{1}]\right)\,\psi_{j}^{2}\right]\,,$ (105)
from which one cannot conclude positivity since the individual $\psi_{j}^{2}$
are complex. In the conventional case with a real and symmetric $H_{mn}$,
eigenvalues are real, transpose-normalized eigenvectors are real and with a
real $\psi$ trial function $\psi_{j}^{2}\geq 0$; then, since $E_{j}-E_{1}>0$
for $j\geq 2$, the r.h.s. is positive. (It is comforting that with
variational functions $\psi(r)$ lying entirely in the subspace spanned by the
Fourier-sine $b_{n}(r,L)$ with finite $N$, we did find ${\rm Re}[\mathcal{E}]>{\rm
Re}[E_{1}]$ in model-I.)
A finite discrete set of basis functions can be used for approximations that
diagonalize $H_{mn}$. For eigenfunctions $f_{j}(r)$ which are negligible when
$r>L$ (typically those near the ground state $j=1$), Fourier-sine functions in
finite volume $r<L$ should be able to give a good approximation,
$b_{n}(r,L)=\sqrt{\frac{2}{L}}\,\sin\left(\frac{n\pi
r}{L}\right)\theta(L-r)\,,\quad n=1,\,\ldots,\,N$ (106)
($\theta$ is the unit-step function), which form a complete set in $r\in(0,L)$
with Dirichlet boundary conditions when $N\to\infty$. Their simplicity is
useful in numerical computations with finite $N$, in which $L$ controls
finite-size effects and $p_{\rm max}=N\pi/L$ is a cutoff on the mode momenta.
Such a UV cutoff can be avoided in variational calculations.
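A quick numerical sanity check of the orthonormality of the basis (106); $L$, $N$, and the grid are arbitrary illustrative values (since $b_{n}$ vanishes at both endpoints, a rectangle-rule Gram matrix suffices):

```python
import numpy as np

L, N = 10.0, 40
r = np.linspace(0.0, L, 8001)
n = np.arange(1, N + 1)
B = np.sqrt(2 / L) * np.sin(np.outer(n, np.pi * r / L))  # rows are b_n(r, L)
dr = r[1] - r[0]
G = B @ B.T * dr            # Gram matrix int_0^L dr b_m(r) b_n(r)
assert np.allclose(G, np.eye(N), atol=1e-3)
```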
### B.2 H-like trial function
Here follow a few variational calculations using $u_{1}(r,a)$ in (12) as a
normalized trial wave function with variational parameter $a$ and variational
energy
$\mathcal{E}(a)=\int_{0}^{\infty}dr\,u_{1}(r,a)\left(K+V(r)\right)u_{1}(r,a)=\langle
K\rangle+\langle V\rangle\,,$ (107)
and similar with $K\to K_{\rm rel}$. The first concerns the relativistic model
with the classical Newton potential $V_{\rm N}(r)=-m^{2}/r$ (units $G=1$). The
potential energy in the state $u_{1}$ equals
$\langle V_{\rm N}\rangle=-\frac{m^{2}}{a}\,.$ (108)
Using the Fourier-Sine representation
$u_{1}(r,a)=\frac{2}{\pi}\int_{0}^{\infty}dp\,\sin(pr)\,\frac{4pa^{3/2}}{(1+a^{2}p^{2})^{2}}\,,$
(109)
the relativistic energy is found to be
$\displaystyle\langle K_{\rm rel}\rangle$ $\displaystyle=$
$\displaystyle\frac{4}{3\pi}\,\left(\frac{4-4a^{2}m^{2}+3a^{4}m^{4}}{a(a^{2}m^{2}-1)^{2}}+\frac{3a^{3}m^{4}(a^{2}m^{2}-2)\,{\rm
arcsec}(am)}{(a^{2}m^{2}-1)^{5/2}}\right)\,.$ (110) $\displaystyle=$
$\displaystyle 2m+\frac{1}{ma^{2}}+\mathcal{O}(m^{-3})\,,\quad m\to\infty\,,$
(111) $\displaystyle=$ $\displaystyle\frac{16}{3\pi
a}+\frac{16m^{2}a}{3\pi}+\mathcal{O}(a^{3})\,,\quad a\to 0\,.$ (112)
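The normalization of $u_{1}$, the potential average (108), and the sine representation (109) can be verified numerically. A sketch, assuming for $u_{1}(r,a)$ in (12) the standard hydrogen-like radial form $2a^{-3/2}\,r\,e^{-r/a}$ (eq. (12) itself lies outside this appendix; the test values of $a$, $m$, and $r_{0}$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

a, m = 1.3, 1.0
u1 = lambda r: 2 * a**-1.5 * r * np.exp(-r / a)   # assumed H-like form of eq. (12)

norm = quad(lambda r: u1(r)**2, 0, np.inf)[0]
VN = quad(lambda r: -m**2 / r * u1(r)**2, 0, np.inf)[0]   # <V_N>, eq. (108)
assert abs(norm - 1.0) < 1e-8
assert abs(VN + m**2 / a) < 1e-8

# pointwise check of the Fourier-sine representation (109)
r0 = 0.7
val = quad(lambda p: np.sin(p * r0) * 4 * p * a**1.5 / (1 + a**2 * p**2)**2,
           0, 500, limit=500)[0]
assert abs(2 / np.pi * val - u1(r0)) < 1e-4
```

The $p$-integral is truncated at a finite upper limit; the $p^{-3}$ tail contributes well below the tolerance.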
As $m$ increases from 0, the value of $a$ at which $\mathcal{E}(a,m)=\langle
K_{\rm rel}\rangle+\langle V_{\rm N}\rangle$ has its minimum moves from
$a\simeq a_{\rm B}$ towards zero. Keeping the first two terms in (112) one
finds that the position of the minimum of $\mathcal{E}(a,m)$, $a_{\rm min}$,
reaches zero when the mass reaches a critical value $m_{\rm c}$:
$m_{\rm c}=\frac{4}{\sqrt{3\pi}}\,,\quad a_{\rm min}=\frac{\sqrt{m_{\rm
c}^{2}-m^{2}}}{mm_{\rm c}}\,,\quad\mathcal{E}(a_{\rm min},m)=2mm_{\rm
c}\sqrt{m_{\rm c}^{2}-m^{2}}\,.$ (113)
Since $a_{\rm min}$ and also the minimal $\mathcal{E}$ vanish as $m\uparrow
m_{\rm c}$, the limiting variational binding energy is $E_{\rm
b}=2m-\mathcal{E}=2m_{\rm c}\simeq 2.61$. By the variational theorems
$\mathcal{E}$ is an upper bound to the energy of the ground state. Since
$\lim_{a\downarrow 0}\mathcal{E}(a,m)=-\infty$ for $m>m_{\rm c}$, the
relativistic Hamiltonian with the Newton potential is unbounded from below.
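The critical-mass result (113) follows from minimizing the two leading terms; a numerical sketch comparing a grid minimization of the truncated $\mathcal{E}(a)$ with the closed forms in (113) (mass and grid chosen by hand):

```python
import numpy as np

m_c = 4 / np.sqrt(3 * np.pi)          # critical mass of eq. (113)
m = 0.9 * m_c
# first two terms of (112) for <K_rel>, plus <V_N> = -m^2/a, dropping the constant 2m
E = lambda a: 16 / (3 * np.pi * a) + 16 * m**2 * a / (3 * np.pi) - m**2 / a

a = np.linspace(1e-3, 5.0, 500000)
a_num = a[np.argmin(E(a))]
a_pred = np.sqrt(m_c**2 - m**2) / (m * m_c)          # a_min of (113)
assert abs(a_num - a_pred) < 1e-3
assert abs(E(a_pred) - 2 * m * m_c * np.sqrt(m_c**2 - m**2)) < 1e-9
```

With $m_{\rm c}^{2}=16/(3\pi)$ the truncated energy is $(m_{\rm c}^{2}-m^{2})/a+m^{2}m_{\rm c}^{2}a$, whose minimum reproduces (113) exactly.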
Next calculation: With the potential $V_{\mbox{\scriptsize CE-I}}$ in (37) the
variational function (46) becomes, in terms of $\bar{a}=a/(3m)$,
$\frac{\mathcal{E}_{1,{\mbox{\scriptsize
CE-I}}}(a)}{m}=\frac{\mathcal{E}_{K}(\bar{a})}{m}+\frac{1}{3}\left[-\frac{1}{\bar{a}}-\frac{4}{\bar{a}^{2}}+\frac{4}{\bar{a}^{3}}+\left(\frac{12}{\bar{a}^{3}}-\frac{8}{\bar{a}^{4}}\right)e^{-2/\bar{a}}{\rm
Ei}\left(\frac{2}{\bar{a}}\right)\right]\,,\quad\frac{\mathcal{E}_{K}(\bar{a})}{m}=\frac{1}{9\bar{a}^{2}m^{4}}\,,$
(114)
where $\mathcal{E}_{K}$ corresponds to the non-relativistic operator $K$ (the
second term in (111)). Neglecting the latter, the above expression is
represented by the dashed curve in the left plot of figure 4. Its two minima
are at $\bar{a}_{1}=2.85$, $\bar{a}_{2}=0.23$. In model-I, the positions of
the two minima are for $m=2$ already close to these values; using them to
estimate the average non-relativistic squared velocity gives
$v^{2}=\mathcal{E}_{K}(\bar{a})/m=8.6\times 10^{-3}$ and $0.13$, respectively
at $\bar{a}_{1}$ and $\bar{a}_{2}$.
Last calculation in this section: Leading small-mass dependence of the
imaginary part of the variational energy. The potential gets an imaginary part
in $0<r<r_{\rm s}$ and as mentioned earlier the expansion (86) converges when
keeping only the leading (linear) terms in $b=3m/2$ as $m\to 0$. Consider
first the term $\propto 1/\sqrt{r-r_{\rm s}}$ in (86), which corresponds to
the square-root term of the PSR model (88),
$\displaystyle\int_{0}^{r_{\rm s}}dr\,{\rm Im}[V_{\rm PSR}]\,u_{1}(r,a)^{2}$
$\displaystyle=$
$\displaystyle\frac{m^{3}ax}{8\sqrt{2}\,c^{3/2}}\,\left[x(15+8x^{2}+4x^{4})\right.$
$\displaystyle\left.-(15+18x^{2}+12x^{4}+8x^{6}){\rm DawsonF}(x)\right]\,,$
$\displaystyle x$ $\displaystyle=$ $\displaystyle\sqrt{\frac{2r_{\rm
s}}{a}}\,.$ (115)
Using the small $m$ forms $a=a_{\rm B}$, $r_{\rm s}=\sqrt{c}$, and further
expansion to leading order in $m$ leads to the decay rate
$\Gamma_{\rm b}\equiv-2\,{\rm Im}[E_{\rm
min}]\approx\frac{32\sqrt{2}}{105}\,d\sqrt{c}\,m^{12}=1.48\,m^{12}\,,$ (116)
where $d=2b/m=3$ indicates the perturbative order in the parameters of
model-I. Continuing the expansion (86) up to $\mathcal{O}((\sqrt{r-r_{\rm
s}})^{11})$ and keeping again only terms linear in $b$, gives instead of (116)
(avoiding quoting fractions of excessively large integers)
$\Gamma_{\rm b}=1.38329\,m^{12}\,.$ (117)
(The $\mathcal{O}((\sqrt{r-r_{\rm s}})^{11})$ contribution is only about
$10^{-5}$ of (116).)
Above, instead of finding the minimum $a_{\rm min}$ of the variational
integral $\mathcal{E}(a)$, we simply used the Bohr radius, which means the
calculation is really a perturbative evaluation of the imaginary part of the
Hamiltonian in the ground state wave function. The result (116) is quoted in
(47).
### B.3 CE-I model with the Fourier-sine basis
The kinetic energy matrix is diagonal in the basis of sine functions (106),
$\frac{K_{mn}}{m}=\frac{1}{m^{2}r_{\rm
s}^{2}}\left(\frac{n\pi}{\bar{L}}\right)^{2}\delta_{mn}\,,\quad\bar{L}\equiv\frac{L}{r_{\rm
s}}\,.$ (118)
In the CE-I model $r_{\rm s}=3m$. Using $\bar{r}=r/r_{\rm s}$ as integration
variable, the potential matrix becomes
$\frac{V_{\mbox{\scriptsize
CE-I},mn}}{m}=-\frac{2}{3\bar{L}}\int_{0}^{\bar{L}}d\bar{r}\,\frac{\bar{r}}{(\bar{r}-1)^{2}}\,\sin\left(\frac{m\pi\bar{r}}{\bar{L}}\right)\sin\left(\frac{n\pi\bar{r}}{\bar{L}}\right)\,,$
(119)
which can be evaluated analytically into a host of terms (using the
$i\epsilon$ method to implement the distributional interpretation of the
double pole), too many to record here. The explicit dependence on the mass has
canceled in (119). The binding energy ratio $E_{\rm b}/m$ can now be
considered a function of $1/m^{4}$ coming from $K_{mn}$, of $N/\bar{L}=2r_{\rm
s}/\lambda_{\rm min}\equiv\rho$, and of $\bar{L}$. Assuming $\bar{L}$ is large
enough that finite-size effects may be neglected, and that $m$ is large
enough to neglect the kinetic energy contribution, there remains only the
dependence on $\rho$. This was tested in two ways (t1, t2) using three
computations (c1, c2, c3):
* c1
computed the mass dependence of $E_{\rm b}/m$ at fixed $\lambda_{\rm min}=1$,
for $m=1$, 2, …, 10;
* c2
computed the $\rho$ dependence of $E_{\rm b}/m$ at $m=2$; data ranging from
$\rho=2$ to 512;
* c3
computed the $\rho$ dependence of $E_{\rm b}/m$ while leaving out the
contribution from $K_{mn}$ $\propto 1/m^{4}$; data ranging from $\rho=2$ to
64, which were fitted by $E_{\rm b}/m=1.339\,\rho^{2}$;
* t1
The data in c3 are consistently 3% higher than those in c2 which indicates
that already at $m=2$ the effect of the kinetic energy is only 3%;
* t2
Substituting $\rho=2r_{\rm s}/\lambda_{\rm min}=6m$ in the fit from c3 gives
$E_{\rm b}/m=48.2\,m^{2}$, which describes the data in c1 well within a few %
for $m\geq 2$.
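A minimal sketch of the sine-basis setup behind these computations. The $i\epsilon$ prescription for the double pole is implemented here by a small imaginary shift and a simple rectangle-rule quadrature; `eps` and the grid size are illustrative choices, not the values used for the quoted data:

```python
import numpy as np

def H_ce1(N, m, Lbar, eps=1e-3, n_r=200001):
    """Sine-basis matrix H_mn = K_mn + V_mn of eqs. (118)-(119) for CE-I."""
    r_s = 3 * m
    n = np.arange(1, N + 1)
    K = np.diag((n * np.pi / Lbar)**2) / (m**2 * r_s**2)   # eq. (118), divided by m
    rb = np.linspace(0.0, Lbar, n_r)                       # rbar = r / r_s
    S = np.sin(np.outer(n, np.pi * rb / Lbar))
    w = rb / (rb - 1.0 - 1j * eps)**2                      # regularized double pole
    drb = rb[1] - rb[0]
    V = -(2 / (3 * Lbar)) * (S * w) @ S.T * drb            # eq. (119), divided by m
    return m * (K + V)

H = H_ce1(16, 2.0, 8.0)
assert H.shape == (16, 16) and np.allclose(H, H.T)   # complex symmetric
E1 = np.sort_complex(np.linalg.eigvals(H))[0]        # lowest Re[E], ordering (99)
```

The matrix is complex symmetric rather than Hermitian, so it is diagonalized with a general eigensolver and the eigenvalues ordered by real part as in (99).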
### B.4 Bounds on $E_{\rm b}/m$
Figure 21: Left: results for $w/r_{\rm s}$ and fitting function (120);
$L/r_{\rm s}=\\{32,8,2\\}\leftrightarrow{\rm\\{Magenta,Blue,Red\\}}$. Right:
$r_{\rm s}|f_{1}(r)|^{2}$ (blue) versus $\bar{r}=r/r_{\rm s}$ for $L/r_{\rm
s}=2$, $N=128$, $m=2$, fitted by a by a Gaussian
$\propto\exp[-(r-a)^{2}/(2s^{2})]$, $a=1.0089\,r_{\rm s}$, $s=0.0059\,r_{\rm
s}$ (brown). In this case $\\{r_{-},\,r_{+}\\}=\\{0.992\,r_{\rm
s},\,1.026\,r_{\rm s}\\}$, $w=0.034\,r_{\rm s}$,
$\int_{r_{-}}^{r_{+}}dr\,|f_{1}(r)|^{2}=0.94$, $\int_{r_{-}}^{r_{+}}dr\,{\rm
Re}[f_{1}(r)]^{2}=0.91$ .
The question whether the binding energy is bounded was followed up using the
basis of sine functions, which then helped to choose improved trial functions
for the variational method. With the sine functions the UV cutoff was raised
by reducing $L$, since going beyond $N=128$ was numerically impractical.
Results were obtained for $N=16$, 24, 32, 48, 64, 96, 128, and $L/r_{\rm
s}=2$, 8, 32. To make sure that the ground state $f_{1}(r)$ fits easily
within the smaller $L$ domains, we studied its width. A convenient measure of the
width is the distance between the two minima of $|f_{1}(r)|^{2}$ closest to
$r_{\rm s}$. For example, in figure 6 these minima are at $r_{-}=0.938\,r_{\rm
s}$ and $r_{+}=1.371\,r_{\rm s}$, giving a width $w=r_{+}-r_{-}=0.433\,r_{\rm
s}$. The left plot in figure 21 shows results for the width
as a function of $Nr_{\rm s}/L=\rho$, with data at each $\rho$ selected to
correspond to the largest available $L$. The curve is a fit to $w/r_{\rm s}$
by a rational function
$R_{w}(\rho)=\frac{6.41+0.0146\,\rho}{1+0.534\rho},\qquad\rho=Nr_{\rm
s}/L=2r_{\rm s}/\lambda_{\rm min}$ (120)
(the first point was left out of the fit to improve agreement with the data at
larger $\rho$). The fit indicates a finite width as $\lambda_{\rm min}\to 0$:
$R_{w}(\infty)=0.044$. As $r_{\rm s}/\lambda_{\rm min}$ increases,
$|f_{1}(r)|^{2}$ looks more and more like a Gaussian, narrowing in width and
the position of its maximum approaching $r_{\rm s}$. The right plot in figure
21 shows an example. The fit gives a standard deviation $s\simeq
0.00588\,r_{\rm s}$, from which we deduce a conversion factor between $w$ and
the standard deviation $s$:
$w/s\simeq 5.76\,.$ (121)
The left plot in figure 22 shows ${\rm Re}[E_{1}]/m$ obtained from the same
selected $\\{N,\,L/r_{\rm s}\\}$ values. The results are fitted well (using
all data points for ${\rm Re}[E_{1}]/m$ and omitting the first three for ${\rm
Im}[E_{1}]/m$) by the rational functions
$\displaystyle R_{{\rm Re}[E]}(\rho)$ $\displaystyle=$
$\displaystyle-\frac{0.375322+0.727212\,\rho+0.349396\,\rho^{2}}{1+0.0410593\,\rho+0.000172826\,\rho^{2}}\,,$
(122) $\displaystyle R_{{\rm Im}[E]}(\rho)$ $\displaystyle=$
$\displaystyle-\frac{3.20098+0.901784\,\rho+0.0977789\,\rho^{2}}{1+0.0448932\,\rho+0.00112354\,\rho^{2}}\,.$
(123)
The second derivative $R_{{\rm Re}[E]}^{\prime\prime}(\rho)$ is negative at
the smaller $\rho$, changes sign at $\rho\simeq 35$, reaches a maximum at
$\rho\approx 73$ — properties almost within the data region — and then slowly
falls to zero while the function becomes constant. This suggests that ${\rm
Re}[E_{1}]/m$ is finite; extrapolation gives $R_{E}(\infty)=-2022$. The
corresponding fit to the imaginary part of $E_{1}$ has similar properties with
a relatively moderate limit $R_{{\rm Im}[E]}(\infty)=-87$. Extrapolation to,
say, within 20% of the infinite $\rho$ limits would involve values of $\rho$
into the many hundreds, which still might seem preposterously far from the
computed results. To substantiate the finiteness of the binding energy we need
data in this region, but going beyond $\rho=64$ is numerically difficult.
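The extrapolated limit and the curvature behaviour quoted above follow directly from the fitted coefficients of (122); a short check (the grid for the numerical second derivative is an arbitrary choice):

```python
import numpy as np

a0, a1, a2 = 0.375322, 0.727212, 0.349396   # numerator coefficients of (122)
b1, b2 = 0.0410593, 0.000172826             # denominator coefficients of (122)
R_re = lambda rho: -(a0 + a1 * rho + a2 * rho**2) / (1 + b1 * rho + b2 * rho**2)

assert round(-a2 / b2) == -2022              # extrapolated Re[E_1]/m at rho -> inf

rho = np.linspace(2.0, 200.0, 200001)
d2 = np.gradient(np.gradient(R_re(rho), rho), rho)
assert d2[10] < 0 < d2[-10]                  # sign change of R'' between the ends
```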
Figure 22: Data for ${\rm Re}[E_{1}]/m$ (left) and ${\rm Im}[E_{1}]/m$ (right)
with fits by the functions in (122), (123). Same color coding as
in figure 21.
The lowest-energy eigenfunction $f_{1}(r)$ receives most of its normalization
integral from the region $r_{\rm s}\lesssim r\lesssim r_{\rm s}+w$ and the
small ratios $w/r_{\rm s}$ in figure 21 suggest that the large binding
energies found thus far are caused by the singularity at $r_{\rm s}$. Changing
tactics, we focus in appendices B.4 and B.5 on the region around $r_{\rm s}$
by studying simpler models: the pole model (P), the pole+square-root model
(PSR). The good approximation of the Gaussian to $|f_{1}(r)|^{2}$ in figure 21
suggests using a Gaussian for a variational approximation in the large-mass
region:
$\displaystyle f_{\rm G}(r)$ $\displaystyle=$ $\displaystyle\mu^{-1/2}_{\rm
G}\left(\exp\left[-\frac{(r-a)^{2}}{2s^{2}}\right]\right)^{1/2}\,,\quad\int_{0}^{2r_{\rm
s}}dr\,f_{\rm G}(r)^{2}=1\,,$ (124) $\displaystyle\mathcal{E}_{\rm GP}(a,s)$
$\displaystyle=$ $\displaystyle\int_{0}^{2r_{\rm s}}dr\,f_{\rm G}(r)H_{\rm
P}f_{\rm G}(r),$ (125)
for the P-model; the normalization integral determines $\mu_{\rm G}$. With
upper integration limit $2\,r_{\rm s}$ we can compare with results using the
sine basis functions with $L=2\,r_{\rm s}$. Extending the integration range to
$-\infty<r<\infty$ facilitates analytical evaluation of the resulting
variational integral, denoted $\mathcal{E}(a,s)$. This extension is
permitted if $f_{\rm G}(r)$ is small enough at $r=\\{0,\,2r_{\rm s}\\}$ to
satisfy the boundary conditions to sufficient accuracy when $\\{a,\,s\\}$
is near the minimum of $\mathcal{E}(a,s)$, which may replace $\mathcal{E}_{\rm
GP}(a,s)$ under these circumstances.
The PSR-model potential also contains a square-root term; this
appears to inhibit analytic evaluation. A rational form of $f(r)^{2}$,
$\displaystyle f_{\rm BW}(r)$ $\displaystyle=$ $\displaystyle\mu^{-1/2}_{\rm
BW}\,r(2r_{\rm
s}-r)\left(\frac{1}{(r-a)^{2}+s^{2}}\right)^{1/2},\quad\int_{0}^{2r_{\rm
s}}dr\,f_{\rm BW}(r)^{2}=1\,,$ (126) $\displaystyle\mathcal{E}_{\rm
BWPSR}(a,s)$ $\displaystyle=$ $\displaystyle\int_{0}^{2r_{\rm s}}dr\,f_{\rm
BW}(r)H_{\rm PSR}f_{\rm BW}(r),$ (127)
allows analytic evaluation of the variational integral $\mathcal{E}_{\rm
BWPSR}$ (the factor $r(2r_{\rm s}-r)$ has been added to satisfy the boundary
conditions even at the lower end of the large-mass region where $f_{\rm
BW}^{2}$ without this factor would be rather broad). We dub $f_{\rm BW}$ the
Breit-Wigner (BW) trial function. Note that $f_{\rm G}(r)$ and $f_{\rm BW}(r)$
approach the square root of a Dirac delta function as $s\to 0$.
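The normalization integrals in (124) and (126) fix $\mu_{\rm G}$ and $\mu_{\rm BW}$; numerically, the narrow peak near $r=a$ must be pointed out to the quadrature routine. A sketch using the fit values of figure 21 (with $r_{\rm s}=1$ as an arbitrary unit):

```python
import numpy as np
from scipy.integrate import quad

r_s = 1.0
a, s = 1.0089 * r_s, 0.0059 * r_s      # Gaussian fit values from figure 21

g2 = lambda r: np.exp(-(r - a)**2 / (2 * s**2))
mu_G = quad(g2, 0, 2 * r_s, points=[a])[0]          # normalization in (124)
f_G = lambda r: np.sqrt(g2(r) / mu_G)
assert abs(quad(lambda r: f_G(r)**2, 0, 2 * r_s, points=[a])[0] - 1) < 1e-6

bw2 = lambda r: (r * (2 * r_s - r))**2 / ((r - a)**2 + s**2)
mu_BW = quad(bw2, 0, 2 * r_s, points=[a])[0]        # normalization in (126)
f_BW = lambda r: r * (2 * r_s - r) / np.sqrt(((r - a)**2 + s**2) * mu_BW)
assert abs(quad(lambda r: f_BW(r)**2, 0, 2 * r_s, points=[a])[0] - 1) < 1e-6
```

The `points=[a]` argument tells the adaptive quadrature where the sharp feature sits, which matters once $s\ll r_{\rm s}$.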
Figure 23: Running potentials $V_{\rm r}/m^{2}$: model-I, the P-model and the
PSR-model, for $m=0.6$ (blue, dashed-blue and dashed-purple) and for $m=2$
(red, dashed-red and dashed-magenta). Left: real part; Right: imaginary parts,
in which to the eye the blue and dashed-purple curves overlap.
Figure 23 shows again the potential in the critical region, here with the
potentials of the P-model and the PSR-model included for comparison. In the
left plot, the dashed curve for the PSR-model lies above that of model-I;
hence its variational energy is definitely above ${\rm Re}[E_{1}]$ of model-I,
and it produces a lower bound on the binding energy. The dashed curve for the
P-model lies below that of model-I. Assuming the Gaussian variational energy
to be accurate for large masses for the P-model we may expect its variational
energy to lie below ${\rm Re}[E_{1}]$ of model-I, hence to produce – or to be
close to – an upper bound on its binding energy. This putative upper bound and
the lower bound are shown in figure 3. The large mass asymptotes are given by
(see appendix B.5 for their evaluation)
$\displaystyle-{\rm Re}[\mathcal{E}]/m$ $\displaystyle\simeq$ $\displaystyle
6.96\,m^{8}\,,\qquad\qquad\qquad\qquad\qquad\quad\;\;\,\mbox{(GP)}$ (128)
$\displaystyle-{\rm Re}[\mathcal{E}]/m$ $\displaystyle\simeq$ $\displaystyle
4.17\,m^{8}\,,\quad-{\rm Im}[E]/m\simeq 4.17\,m^{6}\,.\quad\mbox{(BWPSR)}$
(129)
The asymptote with the Gaussian trial function is shown dashed between the
P-model Gauss curve and the PSR-model BW-curve.
Turning to the estimates for $m=2$ obtained with the basis of sine functions,
$-R_{{\rm Re}[E]}(\infty)=2022$ lies indeed between the variational 1427
(BWPSR) and 2596 (GP). Furthermore, the conversion factor 5.76 in (121) from
the width $w$ of the wave function to the fitted Gaussian standard-deviation
$s$, gives, when applied to the extrapolated width,
$R_{w}(\infty)/5.76=0.00077$, remarkably close to the GP value $s_{\rm
min}/r_{\rm s}=0.00075$ (cf. below (135)).
The importance of the singularity for the binding energy helps to explain
the change of sign of the second derivative of $R_{{\rm Re}[E]}(\rho)$ in
(122). Using the sine basis for the P-model with $L=2\,r_{\rm s}$, a rational
function of the form (122) fitted to its numerical data of $E_{1}/m$ has a
positive second derivative for all $2<\rho<\infty$, with a finite
$R_{E}(\infty)=-4336$. In similar fashion the PSR-model yields $R_{{\rm
Re}[E]}(\infty)=-3666$ and $R_{{\rm Im}[E]}(\infty)=-205$. In the classical-
evolution model CE-I, the data clearly indicate a diverging limit
$E_{1}/m\to-\infty$: a purely quadratic form with
$R_{E}^{\prime\prime}(\rho)=-1.30$ gives a good fit over the whole range
$2\leq\rho\leq 512$. This divergence reflects the stronger singularity of the
double pole in this model. In the quantum model-I the diverging and converging
behaviors compete: since $\lambda_{\rm min}=(2/\rho)\,r_{\rm s}$, the smallest
wavelength modes still average the double-pole behavior of the potential when
$\rho\ll 35$, thus the classical-evolution behavior ($R_{{\rm
Re}[E]}^{\prime\prime}(\rho)<0$) wins, whereas at larger $\rho\gg 35$ the true
single-pole+square-root singularity ($R_{{\rm Re}[E]}^{\prime\prime}(\rho)>0$)
wins with $R_{{\rm Re}[E]}^{\prime\prime}(\rho)\downarrow 0$, $R_{{\rm
Re}[E]}^{\prime}(\rho)\uparrow 0$ as $\rho\to\infty$.
### B.5 Gaussian and Breit-Wigner trial functions
For the P-model (87) and the Gaussian in (124) we can use the implementation
(40) of the principal value in the variational integral:
$\displaystyle\mathcal{E}_{\rm GP}$ $\displaystyle=$
$\displaystyle\mathcal{E}_{K}+\mathcal{E}_{V}\,,$ (130)
$\displaystyle\mathcal{E}_{K,\rm GP}$ $\displaystyle=$
$\displaystyle-\frac{1}{m}\int_{-\infty}^{\infty}dr\,f_{\rm
G}\frac{\partial^{2}}{\partial r^{2}}\,f_{\rm G}=\frac{1}{4ms^{2}}\,,$ (131)
$\displaystyle\mathcal{E}_{V,{\rm GP}}$ $\displaystyle=$
$\displaystyle\frac{m^{2}r_{\rm
s}}{2c}\int_{-\infty}^{\infty}dr\,\ln(|r-r_{\rm s}|)\,\frac{\partial}{\partial
r}\left(rf_{\rm G}^{2}\right)\,.$ (132)
After writing
$s=\bar{s}\,r_{\rm s},\quad a=(1+\bar{s}y)r_{\rm s}\,,$ (133)
the potential part can be worked into the form
$\displaystyle\mathcal{E}_{V,{\rm GP}}$ $\displaystyle=$
$\displaystyle-\frac{m^{2}r_{\rm
s}}{2c}\left(1+\frac{h(y)}{\bar{s}}\right)\,,$ (134) $\displaystyle h(y)$
$\displaystyle=$
$\displaystyle\frac{y}{2}\left[2+M^{(1,0,0)}\left(0,\frac{1}{2},-\frac{y^{2}}{2}\right)-M^{(1,0,0)}\left(0,\frac{3}{2},-\frac{y^{2}}{2}\right)\right]\,,$
(135)
where $M$ is the Kummer confluent hypergeometric function and its superscript
denotes differentiation with respect to its first argument. The odd function
$-h(y)$ has a minimum at $y=y_{\rm min}=1.307$, $h(y_{\rm min})=0.765$. The
pair of variational equations
$\\{\partial_{\bar{s}}\mathcal{E}=0,\,\partial_{y}\mathcal{E}=0\\}$ was solved
numerically and the resulting binding energy is plotted in figure 3. For
$m=2$, $\bar{s}=0.000745$, $y=1.31$, and $\mathcal{E}_{\rm GP}/m=-2596$. When
$m$ increases $\bar{s}$ approaches zero. The asymptotic form for $\bar{s}\to
0$,
$\frac{\mathcal{E}_{\rm GP}^{\rm as}}{m}=\frac{1}{4m^{2}r_{\rm
s}^{2}\bar{s}^{2}}-\frac{mr_{\rm s}\,h_{\rm min}}{2c\bar{s}}\,,$ (136)
gives with $r_{\rm s}\to 3m$ a finite minimum at large $m$:
$\bar{s}\to 0.0632\,m^{-6},\quad\frac{\mathcal{E}_{\rm GP}^{\rm
as}}{m}\to-6.96\,m^{8}\,.$ (137)
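The asymptotic minimum is that of $A/\bar{s}^{2}-B/\bar{s}$, located at $\bar{s}=2A/B$ with value $-B^{2}/(4A)$. A numerical sketch, taking $h_{\rm min}=0.765$ from (135) and, as an assumption, the model constant $c\simeq 1.305$ (the value consistent with the quoted coefficients; $c$ itself is defined earlier in the paper):

```python
import numpy as np

h_min, c = 0.765, 1.305        # h(y_min) from (135); c inferred, see lead-in
def E_as(sbar, m):             # eq. (136) with r_s = 3m
    return 1 / (36 * m**4 * sbar**2) - 3 * m**2 * h_min / (2 * c * sbar)

m = 5.0
s = np.linspace(1e-8, 1e-4, 2000001)
s_min = s[np.argmin(E_as(s, m))]
assert abs(s_min * m**6 - 0.0632) < 5e-4         # sbar_min of (137)
assert abs(E_as(s_min, m) / m**8 + 6.96) < 0.01  # minimum value of (137)
```

Analytically $\bar{s}_{\rm min}m^{6}=c/(27h_{\rm min})$ and $\mathcal{E}_{\rm min}/m^{9}=-81h_{\rm min}^{2}/(4c^{2})$, which reproduce the coefficients in (137).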
Numerical results in $m\gtrsim 1$ are plotted in figure 3.
With the Breit-Wigner trial function (126) the principal value in the P-model
was treated with the definition (90). In this case the resulting expressions
for $\mathcal{E}_{K}$ and $\mathcal{E}_{V}$ do not involve functions more
sophisticated than logarithms but they are too long to record here. A
representation in terms of the analogue $\bar{s}$ and $y$ for this trial
function is also useful here. At $m=2$, the minimum is at $y=1$ (machine
precision), $\bar{s}=0.000761$, with $\mathcal{E}_{\rm BWP}/m=-2220$, which is
somewhat higher than the above value $-2596$ for GP. Indeed, the BW trial
function is less accurate than the Gaussian one. The leading asymptotic form
for $\bar{s}\to 0$ of $\mathcal{E}_{\rm BWP}$ simplifies to
$\frac{\mathcal{E}_{\rm BWP}^{\rm as}}{m}=\frac{1}{8m^{2}r_{\rm
s}^{2}\bar{s}^{2}}-\frac{mr_{\rm s}y}{2c(1+y^{2})\bar{s}}\,,$ (138)
in which the dependence on $1/\bar{s}$ and $y$ has again decoupled in the
potential term, which has its minimum at $y=1$. Using $r_{\rm s}=3m$ for large
$m$ the solution for the minimum becomes
$y\to 1\,,\quad\bar{s}=0.0483\,m^{-6}\,,\quad\mathcal{E}_{\rm BWP}^{\rm
as}/m=-5.94\,m^{8}\,.$ (139)
Turning to the square-root term in the PSR-potential (88), Mathematica does
not give an analytic form for the variational integral with a Gaussian, but
the Breit-Wigner form poses no further difficulty for the integral
$-\frac{\sqrt{2}\,m^{3}}{c^{3/2}}\int_{0}^{2r_{\rm s}}dr\,\sqrt{\frac{r_{\rm
s}}{r-r_{\rm s}}}\,rf_{\rm BW}(r)^{2}\,.$ (140)
The real part of the above expression is added to $\mathcal{E}_{\rm BWP}$ to
make ${\rm Re}[\mathcal{E}_{\rm BWPSR}]$, the imaginary part is evaluated at
the minimum of the latter. Numerical results in $m\geq 1$ are plotted in
figure 3. For $m=2$, ${\rm Re}[\mathcal{E}_{\rm min}]/m\simeq-1427$, ${\rm
Im}[\mathcal{E}_{\rm min}]/m\simeq-312$. As $m$ increases, $y_{\rm min}$ and
$\bar{s}_{\rm min}$ rapidly approach 1 and 0 respectively. The asymptotic form
for $\bar{s}\to 0$ can be worked in the form
$\displaystyle\frac{{\rm Re}[\mathcal{E}_{\rm BWPSR}^{\rm as}]}{m}$
$\displaystyle=$ $\displaystyle\frac{1}{8m^{2}r_{\rm
s}^{2}\bar{s}^{2}}-\frac{mr_{\rm s}y}{2c(1+y^{2})\bar{s}}+\frac{m^{2}r_{\rm
s}}{c^{3/2}\sqrt{\bar{s}}}\sqrt{\frac{y+\sqrt{1+y^{2}}}{1+y^{2}}}\,,$ (141)
$\displaystyle\frac{{\rm Im}[\mathcal{E}_{\rm BWPSR}^{\rm as}]}{m}$
$\displaystyle=$ $\displaystyle-\frac{m^{2}r_{\rm
s}}{c^{3/2}\sqrt{\bar{s}}}\,\sqrt{\frac{-y+\sqrt{1+y^{2}}}{1+y^{2}}}\,.$ (142)
The square-root term in $\mathcal{E}_{{\rm BWPSR}}^{\rm as}$ is of order
$m\sqrt{\bar{s}}$ relative to the pole term and (139) also gives the leading
behavior of $\mathcal{E}_{{\rm BWPSR}}^{\rm as}/m$ as $m\to\infty$. The
imaginary part behaves as
${\rm Im}[E_{\rm min}]/m\to-4.17\,m^{6}\,.$ (143)
In the relativistic version of model-I with the Gaussian trial function the
first term in (136) is replaced by
$\displaystyle\frac{\langle K_{\rm rel}\rangle}{m}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{2}}{mr_{\rm
s}\bar{s}}\;U\left(-\frac{1}{2},\,0,\,2m^{2}r_{\rm
s}^{2}\bar{s}^{2}\right)=\frac{\sqrt{2}}{\sqrt{\pi}\,mr_{\rm
s}\bar{s}}+\mathcal{O}(\ln\bar{s})$ $\displaystyle\frac{\mathcal{E}_{\rm
GPrel}^{\rm as}}{m}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{2}}{\sqrt{\pi}\,mr_{\rm s}\bar{s}}-\frac{h_{\rm
min}mr_{\rm s}}{2c\bar{s}}\,,$ (144)
where $U$ is Kummer’s confluent hypergeometric function. The relativistic
kinetic-energy contribution scales like $1/\bar{s}$, in contrast to the non-
relativistic $1/\bar{s}^{2}$. The potential contribution is unchanged, of
order $1/\bar{s}$. Comparing coefficients of $1/\bar{s}$ it follows that the
variational integral goes to negative infinity as $\bar{s}\to 0$ for masses
greater than 0.612 (the asymptotic forms (136) and (144) assumed only $\bar{s}\to
0$; they hold for all $m$). In the Breit-Wigner case we can use a simpler
trial function without the factor $r(2r_{\rm s}-r)$ in (126) when focussing on
the limit $\bar{s}\to 0$. This simplifies analytical evaluation of $\langle
K_{\rm rel}\rangle$, which can be expressed in a Meijer G-function, and
$\frac{\mathcal{E}_{\rm BWPSRrel}^{\rm as}}{m}=\frac{\mathcal{E}_{\rm
BWPrel}^{\rm as}}{m}=\frac{4}{\pi^{2}mr_{\rm s}\bar{s}}-\frac{mr_{\rm
s}}{4c\bar{s}}\,$ (145)
(we inserted $y_{\rm min}=1$ in the potential part of (138)). Comparing
coefficients of $1/\bar{s}$ again indicates no lower bound on $E_{\rm b}$
already for $m>0.565$.
Figure 24: Left: $\mathcal{E}_{\mbox{\scriptsize GCE-I}}/m$ in (147) for
$\bar{s}=0.1$. Right: Numerically evaluated $E_{\rm b}/m^{3}$ of model-I with
Gaussian trial wave function (dots) fitted at $m=\\{10,11,\ldots,20\\}$ by the
function $(p_{0}+c_{h2}\,q_{1}\,m)/(1+q_{1}m)$ (curve), $p_{0}=-12.51$,
$q_{1}=0.1647$.
The classical-evolution model (37) can be treated in similar fashion. With the
Gaussian trial function, the potential part
$\mathcal{E}_{V,\mbox{\scriptsize
GCE-I}}=m^{2}\int_{-\infty}^{\infty}dr\,\ln(|r-r_{\rm
s}|)\frac{\partial^{2}}{\partial r^{2}}\left(rf_{G}(r)^{2}\right)\,,\quad
r_{\rm s}=3m,$ (146)
has after the substitution (133) the form
$\frac{\mathcal{E}_{V,\mbox{\scriptsize
GCE-I}}}{m}=\frac{h_{1}(y)}{\bar{s}}+\frac{h_{2}(y)}{\bar{s}^{2}}\,,$ (147)
where $h_{1}(y)$ and $h_{2}(y)$ are respectively odd and even in $y\to-y$, of
similar size, and expressible in Kummer functions, as in (135). The left plot in
figure 24 shows $\mathcal{E}_{\mbox{\scriptsize GCE-I},V}$ for $\bar{s}=0.1$.
The value of its left minimum approaches that of the right minimum as
$\bar{s}\to 0$. We could use a trial function with two maxima, whose square
would turn into two delta functions, each with prefactor 1/2, as
$\bar{s}\to 0$. For finite $\bar{s}$ the single Gaussian picks out the right
(lowest) minimum and would still approach the same limiting $E_{\rm b}/m$ of
order $m^{2}$.
The leading $s$-dependence in $\mathcal{E}_{V}$ is of order $s^{-2}$, and
$\frac{\mathcal{E}_{\mbox{\scriptsize GCE-I}}^{\rm
as}}{m}\simeq\frac{1}{36m^{4}\bar{s}^{2}}-\frac{h_{2}(y_{\rm
min})}{\bar{s}^{2}}\,,\quad y_{\rm min}=2.12\,,\quad h_{2}(y_{\rm
min})=0.0949\,,$ (148)
which has no finite minimum in $\bar{s}>0$ for large $m$ (in fact for
$m>0.736$). This confirms the lack of ground state found for this model with
the basis of sine functions.
The idea of setting a maximum on the derivative of wave functions (realized at
the end of section 5 by a minimal wavelength $\lambda_{\rm min}$) can be
implemented also by a mass-independent width parameter $s$. Then $\bar{s}$ can
diminish with increasing mass only slowly, $\bar{s}=s/r_{\rm s}\simeq s/(3m)$,
and the P and PSR models cannot be used for estimating the binding energy: the
terms in the potential part of (141) are of order $m^{3}$ and $m^{7/2}$,
implying that the square-root term inevitably overtakes the pole term
(resulting in a negative binding energy). But all other terms in the expansion
(86) would have become relatively large as well—the expansion would not
converge. The CE-I model can be used because (147) holds for all $\bar{s}$
such that the boundary condition for $f_{G}(r,a,s)$ at the origin is fulfilled
to sufficient accuracy, which is typically the case for large masses.
The question arises whether the binding energy of model-I approaches that of
the CE-I model under these circumstances. This was investigated as follows. A
fit with a Gaussian to the ground-state wave function of model-I determined
with the Fourier-sine basis at $\lambda_{\rm min}=1$ and $m=3$ gives
$s=0.1801$ ($r_{\rm s}=9.49$, $\bar{s}=0.0190$). Using this $s$ in
$\bar{s}=s/(3m)$, neglecting the kinetic energy contribution and evaluating
(147) at the minimum of $h_{2}(y)$ gives
$E_{\rm b}^{\rm as}/m=c_{h1}\,m+c_{h2}\,m^{2}\,,\quad c_{h1}=3.359\,,\quad
c_{h2}=26.34\qquad\mbox{(CE-I model)}\,.$ (149)
This estimate is shown in the left plot of figure 3 by the black dashed line a
little above the numerically evaluated model-I curve of the variational
Gaussian estimate. Numerical $\lambda_{\rm min}=1$ data for $E_{\rm b}/m^{3}$
at $m=10$, 11, …20 are well fitted by the rational function
$(p_{0}+p_{1}m)/(1+q_{1}m)$. Its extrapolation $m\to\infty$ differs by only
2% from the $c_{h2}$ in (149) of the CE-I model. An equally good-looking fit
with the constraint $p_{1}/q_{1}=c_{h2}$ is shown in the right plot of figure 24.
There is no reason to doubt that, in the case of a UV cutoff on the wave
function, the CE-I model gives the correct asymptotic mass dependence of
model-I. (The Gaussian trial binding energy with $s=0.1801$ is higher than
$E_{\rm b}$ obtained with the Fourier basis. The reason may be that the
maximum derivative of the Gaussian, 7.5, is larger than the $2\pi$ of
$\sin(2\pi r/\lambda_{\rm min})$.)
## Appendix C More on model-II
In model-II, the running potential has a minimum determined by
$V_{\rm r}=-z^{2}rm^{2},\quad 0=\frac{\partial V_{\rm r}}{\partial
r}=-z^{2}m^{2}+2z\beta_{z}m^{2}.$ (150)
The relevant solution is given by
$z_{\rm min}=-\frac{2b+\sqrt{4b^{2}-2c}}{2c},$ (151)
from which $r_{\rm min}$ and $V_{r\,{\rm min}}$ follow using (82), and also
their asymptotic behavior in (42). At large $m$ the potential approaches its
classical-evolution form
$V_{\mbox{\scriptsize CE-II}}=-m^{2}r\,\frac{1}{(r+m)^{2}}$ (152)
uniformly. Matrix elements of the potential can be conveniently computed using
the transformation of variables $r\to z$ as given by the solution in (82),
with Jacobian $\partial r/\partial z=-r/\beta_{z}$.
## Appendix D Classical motion in the relativistic CE and Newton models
The classical motion in the CE models is expected to approximate the motion of
the wave packet at very large masses $m\gg 1$, with initial spread $s_{0}\ll
m$ and initial distance $r_{0}\gg m$. Consider the particles released at rest
from a large mutual distance $r=r_{0}$ with the dynamics specified by the
Hamiltonians
$H(r,p)=2\sqrt{m^{2}+p^{2}}-\frac{m^{2}r}{(r-3m)^{2}}\,,\qquad\mbox{CE-I model}$ (153)
$H(r,p)=2\sqrt{m^{2}+p^{2}}-\frac{m^{2}r}{(r+m)^{2}}\,,\qquad\mbox{CE-II model}$ (154)
$H(r,p)=2\sqrt{m^{2}+p^{2}}-\frac{m^{2}}{|r|}\,.\qquad\mbox{Newton model}$ (155)
$\dot{r}=\frac{\partial H}{\partial p}\,,\qquad\dot{p}=-\frac{\partial H}{\partial r}\,.$ (156)
We start with model CE-I. Numerical integration rapidly shows that, starting
from $r_{0}$, $r\downarrow r_{\rm s}=3m$, $p\to-\infty$ and the velocity
$v_{\rm rel}\to-1$, at a time $t_{\rm c}$. Similarly, releasing the particles
near the origin at a distance $r_{0}^{\prime}$ determined by energy
conservation
$H(r_{0}^{\prime},0)=H(r_{0},0),$ (157)
gives a motion $r\uparrow r_{\rm s}$, $p\to+\infty$, $v_{\rm rel}\to+1$, in a
time $t_{\rm c}^{\prime}$. We can extend the first falling-in motion by gluing
to it the time-reversed second motion and in this way continue it towards the
origin, where it reaches $r_{0}^{\prime}$ and reverses (‘bounces’) back
towards $r_{\rm s}$. The motion can then be extended again by gluing a time-
reversed version of the first motion, after which $r$ reaches $r_{0}$ again.
The process can be repeated such that a non-linear oscillating motion emerges.
The gluing implies that the limit points of $r(t)$ and $\dot{r}(t)=2\,v_{\rm
rel}(t)$ at the gluing times $t=t_{\rm c}$, $t_{\rm c}+2t_{\rm c}^{\prime}$,
$2t_{\rm c}+2t_{\rm c}^{\prime}$, … are added, which renders these functions
continuous at these times. Figure 25 shows $r(t)$ over one period for $m=2$
and $r_{0}=10\,r_{\rm s}=60$. Figure 26 shows the velocity. The gluing
procedure has turned the ‘black hole’ interval $0\leq t<t_{\rm c}$ into a
black-hole–white-hole cyclic dependence on time.
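The fall-in just described is easy to reproduce numerically. The sketch below is our own minimal fixed-step RK4 integrator, not code from the paper; the step size and the stopping radius slightly above $r_{\rm s}$ are arbitrary choices of this illustration. It integrates Hamilton's equations (156) for the CE-I Hamiltonian (153), with $m=2$ and release from rest at $r_{0}=60$:

```python
import math

def ce1_rhs(r, p, m=2.0):
    """Hamilton's equations for the CE-I model:
    H = 2*sqrt(m^2 + p^2) - m^2*r/(r - 3m)^2, so
    dr/dt = dH/dp = 2p/sqrt(m^2+p^2),
    dp/dt = -dH/dr = -m^2*(r + 3m)/(r - 3m)^3."""
    drdt = 2.0 * p / math.sqrt(m * m + p * p)
    dpdt = -m * m * (r + 3.0 * m) / (r - 3.0 * m) ** 3
    return drdt, dpdt

def fall_in(r0=60.0, m=2.0, dt=0.01, r_stop=6.5, max_steps=10**6):
    """Release the particles at rest at r0 and integrate (RK4)
    until r drops below r_stop, chosen slightly above r_s = 3m = 6."""
    r, p, t = r0, 0.0, 0.0
    for _ in range(max_steps):
        if r <= r_stop:
            break
        k1 = ce1_rhs(r, p, m)
        k2 = ce1_rhs(r + 0.5 * dt * k1[0], p + 0.5 * dt * k1[1], m)
        k3 = ce1_rhs(r + 0.5 * dt * k2[0], p + 0.5 * dt * k2[1], m)
        k4 = ce1_rhs(r + dt * k3[0], p + dt * k3[1], m)
        r += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        p += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        t += dt
    v_rel = p / math.sqrt(m * m + p * p)  # v_rel = rdot/2
    return r, p, v_rel, t
```

Running `fall_in()` leaves $r$ just above $r_{\rm s}=6$ with $p$ large and negative and $v_{\rm rel}$ close to $-1$, in line with the behaviour stated in the text.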
Figure 25: Relativistic CE-I model, $m=2$, $r_{0}=60$. Left: One cycle of
falling in and bouncing back; $t_{\rm c}=197.2$, $t_{\rm c}^{\prime}=6.5$. The
gluing times are $t_{\rm c}$ and $t_{\rm c}+2t_{\rm c}^{\prime}$. Right:
close-up around $t_{\rm c}+t_{\rm c}^{\prime}=203.7$; the minimum at $t=t_{\rm
c}+t_{\rm c}^{\prime}$ is $r_{0}^{\prime}=0.6$.
Figure 26: As in figure 25 for $v_{\rm rel}(t)$.
Figure 27: Relativistic Newton model with Cartesian coordinates $r$ and $p$;
$m=2$, $r_{0}=60$. Left: One cycle of $r(t)$; the velocity $v_{\rm
rel}(t)=r^{\prime}(t)/2=-1$, $+1$ at the gluing times $t_{\rm c}=51.25$,
$3t_{\rm c}$. Right: momentum $p$ diverging at gluing times.
In the CE-II model the plots look similar, except that the maximum velocity
does not reach $\pm 1$, even when falling in from infinity, and the flattening
of $v_{\rm rel}$ in the CE-I model at $v_{\rm rel}=\pm 1$ is rounded off in
the CE-II model. Using $2m=H(\infty,0)=H(r,p)$ gives $|p|$ as a function of
$r$ which is maximal at $r=m$; $|p_{\rm max}|=m\sqrt{17}/8$, $|v_{\rm
rel,\,max}|=\sqrt{17}/9\simeq 0.46$.
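The quoted maxima in the CE-II model follow from energy conservation alone. The short scan below is illustrative (the grid and the function and variable names are ours); it solves $2\sqrt{m^{2}+p^{2}}=H(\infty,0)+m^{2}r/(r+m)^{2}=2m+m^{2}r/(r+m)^{2}$ for $|p|$ on a grid in $r$:

```python
import math

def p_of_r(r, m=2.0):
    """|p|(r) in the CE-II model from 2*sqrt(m^2+p^2) = 2m + m^2*r/(r+m)^2."""
    e = m + m * m * r / (2.0 * (r + m) ** 2)  # this is sqrt(m^2 + p^2)
    return math.sqrt(e * e - m * m)

m = 2.0
grid = [0.01 * i for i in range(1, 5001)]          # r from 0.01 to 50
p_max, r_max = max((p_of_r(r, m), r) for r in grid)
v_max = p_max / math.sqrt(m * m + p_max * p_max)   # v_rel = p/sqrt(m^2+p^2)
```

The maximum sits at $r=m$ with $|p_{\rm max}|=m\sqrt{17}/8$ and $|v_{\rm rel,\,max}|=\sqrt{17}/9\simeq 0.46$, matching the values above.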
In the Newton model the potential is singular at the origin and this point is
reached at a finite time $t_{\rm c}$ with any initial distance
$0<r_{0}<\infty$. The infinite force at the origin is attractive, not
repelling as needed for a bounce, so the evolution seemingly has to stop at
$t_{\rm c}$. However, re-interpreting $r$ as a Cartesian coordinate that may
become negative, as anticipated by the absolute value in the denominator in
(155), and assuming that the point particles may occupy the same point, the
motion can be continued through the origin with continuous $r(t)$ and $v_{\rm
rel}(t)$, as shown in figure 27.
## Appendix E Perturbative mass renormalization
Figure 28: Diagrams for the scalar selfenergy $\Sigma_{0}$; the dashed lines
represent gravitons. The graviton loop in the tadpole diagram $c$ is to be
accompanied by a ghost loop (not shown).
In the perturbative vacuum $\langle g_{\mu\nu}\rangle=\eta_{\mu\nu}={\rm
diag}(-1,1,1,1)$, $\langle\phi\rangle=0$, the renormalized selfenergy
$\Sigma(p^{2})$ of the scalar field $\phi$, Wick-rotated to Euclidean momentum
space, is related to the renormalized scalar-field propagator $G(p)$ by
$G(p)^{-1}=m^{2}+p^{2}+\Sigma(p^{2}).$ (158)
Relevant one-loop scalar selfenergy diagrams are shown in figure 28 (a ghost
loop should be added to the tadpole). A possible scalar field tadpole closed
loop is left out since in the comparison with SDT we are interested in effects
caused by the pure gravity model without ‘back reaction’ of the scalar
field—the quenched approximation. To one-loop order pure gravity can be
renormalized by itself [40]; here we assume this has been done such that $\langle
g_{\mu\nu}\rangle=\eta_{\mu\nu}$. We use the graviton propagator and vertex
functions in harmonic gauge given in [41] and dimensional regularization. The
closed graviton loops in diagrams $b$ and $c$ (and its ghost companion) are
often declared zero with dimensional regularization. However, for the tadpole
diagram $c$ this leads to the ambiguous result $0/0$ (the zero in the
denominator comes from the zero mass of the graviton in its propagator), which
was analyzed in [42, 43, 44, 45]. A graviton mass parameter $\lambda$
regulates infrared divergencies. It induces a violation of gauge invariance
that disappears in infrared-safe quantities where the limit $\lambda\to 0$ can
be taken.
The diagrams correspond to the unrenormalized selfenergy $\Sigma_{0}$, which
differs from $\Sigma$ by the counterterms for mass renormalization and field
rescaling, $\delta_{m}$ and $\delta_{Z}$,
$\Sigma(p^{2})=\Sigma_{0}(p^{2})+\delta_{m}+\delta_{Z}\,p^{2},$ (159)
and which are chosen such that the expansion around the zero of
$\Sigma(p^{2})$ at $p^{2}=-m^{2}$ (‘on shell’) has the form
$\Sigma(p^{2})=0+\mathcal{O}((m^{2}+p^{2})^{2})$. (Footnote 27: Part of the notation here follows [46], section 10.2.) This implies
$\Sigma_{0}(-m^{2})+\delta_{m}-\delta_{Z}m^{2}=0,\qquad\Sigma^{\prime}_{0}(-m^{2})+\delta_{Z}=0.$
(160)
The counterterms $\Delta S$ are introduced by rewriting the bare action
$S_{0}$ in terms of the renormalized field and mass, $S_{0}=S+\Delta S$,
$\phi_{0}=\sqrt{Z}\,\phi=\sqrt{1+\delta_{Z}}\,\phi$,
$m_{0}^{2}=Z^{-1}(m^{2}+\delta_{m})$. In one-loop order,
$m_{0}^{2}=m^{2}-\delta_{Z}m^{2}+\delta_{m}=m^{2}-\Sigma_{0}(-m^{2}).$ (161)
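The counterterm algebra leading to (161) is elementary; as a sanity check, the purely illustrative snippet below draws random values for $m^{2}$, $\Sigma_{0}(-m^{2})$ and $\Sigma^{\prime}_{0}(-m^{2})$, imposes the on-shell conditions (160), and verifies that (161) indeed reduces to $m_{0}^{2}=m^{2}-\Sigma_{0}(-m^{2})$:

```python
import random

random.seed(1)
for _ in range(100):
    m2 = random.uniform(0.5, 4.0)    # renormalized mass squared m^2
    S0 = random.uniform(-1.0, 1.0)   # Sigma_0(-m^2)
    S0p = random.uniform(-1.0, 1.0)  # Sigma_0'(-m^2)
    # On-shell conditions (160):
    #   Sigma_0(-m^2) + delta_m - delta_Z m^2 = 0,
    #   Sigma_0'(-m^2) + delta_Z = 0.
    delta_Z = -S0p
    delta_m = delta_Z * m2 - S0
    # First form of (161): m_0^2 = m^2 - delta_Z m^2 + delta_m.
    m02 = m2 - delta_Z * m2 + delta_m
    assert abs(m02 - (m2 - S0)) < 1e-12  # m_0^2 = m^2 - Sigma_0(-m^2)
```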
Applying the usual techniques, diagram $a$ can be worked into the form
$\Sigma_{0a}=-8\pi G\,\mu^{4-d}\int_{0}^{1}dx\int\frac{d^{d}k}{(2\pi)^{d}}\,\frac{a_{1}k^{2}p^{2}+a_{2}(p^{2})^{2}(1-x)^{2}+a_{3}(1-x)p^{2}m^{2}+a_{4}m^{4}}{[k^{2}+x(1-x)p^{2}+xm^{2}+(1-x)\lambda^{2}]^{2}}\,,$ (162)
$a_{1}=2+\frac{(4-d)(d-2)}{2d}\,,\quad a_{2}=2+\frac{(4-d)(d-2)}{2}\,,\quad a_{3}=-(d^{2}-4d+4)\,,\quad a_{4}=-\frac{d(d-2)}{2}\,.$
Here $\mu$ is the conventional mass parameter that keeps the dimension of
$\Sigma$ independent of spacetime dimension $d$. Similarly, diagram $b$ (with
symmetry factor 1/2) corresponds to
$\Sigma_{0b}=-8\pi G\,(b_{1}p^{2}+b_{2}m^{2})\,\mu^{4-d}\int\frac{d^{d}k}{(2\pi)^{d}}\,\frac{1}{k^{2}+\lambda^{2}}\,,$ (163)
$b_{1}=\frac{3d^{2}}{4}-\frac{5d}{2}+2\,,\qquad b_{2}=\frac{3d^{2}}{4}-\frac{d}{2}\,.$ (164)
The tadpole diagram comes out as
$\Sigma_{0c}=-8\pi G\,\left(\frac{2-d}{4}\right)\left(\frac{3d^{2}}{4}-3d+1\right)(dm^{2}+(d-1)p^{2})\,\frac{1}{\lambda^{2}}\,\mu^{4-d}\int\frac{d^{d}k}{(2\pi)^{d}}\,\frac{k^{2}}{k^{2}+\lambda^{2}}\,,$ (165)
where the factor $1/\lambda^{2}$ comes from the graviton propagator attached
to the tadpole tail. As $d\to 4$ the loop integral produces a factor
$\lambda^{4}$ and $\Sigma_{0c}$ vanishes in the limit $\lambda\to 0$. The same
should happen in the ghost tadpole, since it cancels unphysical contributions
in the graviton loop. Near $d=4$, $\Sigma_{0b}$ is proportional to
$\lambda^{2}$, including the residue of the pole at $d=4$, hence also
$\Sigma_{0b}$ vanishes as $\lambda\to 0$.
For generic $p^{2}$, the limit $\lambda\to 0$ of $\Sigma_{0a}$ is not zero
near $d=4$, and furthermore, the residue of the pole at $d=4$ is proportional
to $(m^{2}+p^{2})$ and vanishes on-shell: $\Sigma_{0a}(-m^{2})$ is finite.
Hence $\Sigma_{0}(-m^{2})=\Sigma_{0a}(-m^{2})$; we find
$\Sigma_{0}(-m^{2})=-\frac{5}{2\pi}\,G\,m^{4}\,,$ (166)
$m_{0}^{2}=m^{2}+\frac{5}{2\pi}\,Gm^{4}\,.$ (167)
The cancelation of poles at $d=4$ was noted earlier in [47], where it was
found to occur also in Yukawa models of fermions and scalars coupled to
gravity. The question arose if these cancelations occurred only in the
harmonic gauge. Gauge independence has been contested in [48] where a
dependence was found on a gauge parameter $\omega$. Finiteness was mentioned
for the harmonic gauge in which $\omega=0$ and a second gauge parameter
$\alpha=1$. But one can see from the results in this work that for $\omega=0$
the cancelation is in fact independent of $\alpha$. A similar phenomenon
occurs in the work [49]. Hence, the on-shell relation (167) is gauge-
independent in a restricted class of gauges. However, in the way we have
defined $m^{2}$ it is the position of the pole as a function of $p^{2}$ in the
renormalized Green function. Such a ‘pole mass’ is a physical gauge-invariant
quantity that also describes the position of poles in analytically continued
S-matrix elements. Since $m_{0}$ and Newton’s coupling $G$ are gauge
invariant, (167) is a gauge-invariant (but regularization-dependent) relation.
The derivative $\Sigma^{\prime}_{0}(p^{2})$ is UV-divergent at $d=4$ and it
has an IR-divergence on shell as $\lambda\to 0$,
$\Sigma^{\prime}_{0}(-m^{2})=\frac{8\pi Gm^{2}}{16\pi^{2}}\left(\frac{8}{4-d}-4\gamma_{\rm E}+4\ln(4\pi)-\ln\frac{\lambda^{2}}{\mu^{2}}-3\ln\frac{m^{2}}{\mu^{2}}\right)+\mathcal{O}((4-d)^{2},\lambda^{2}).$ (168)
It will not be gauge independent. This IR-divergence is to be resolved
similarly to the case of QED.
## References
* [1] B. V. de Bakker and J. Smit, _Gravitational binding in 4D dynamical triangulation_ , _Nucl. Phys._ B484 (1997) 476 [hep-lat/9604023].
* [2] J. F. Donoghue, _Leading quantum correction to the Newtonian potential_ , _Phys.Rev.Lett._ 72 (1994) 2996 [gr-qc/9310024].
* [3] J. F. Donoghue, _General relativity as an effective field theory: The leading quantum corrections_ , _Phys.Rev._ D50 (1994) 3874 [gr-qc/9405057].
* [4] Y. Iwasaki, _Quantum theory of gravitation vs. classical theory. - fourth-order potential_ , _Prog. Theor. Phys._ 46 (1971) 1587.
* [5] B. R. Holstein and J. F. Donoghue, _Classical physics and quantum loops_ , _Phys. Rev. Lett._ 93 (2004) 201602 [hep-th/0405239].
* [6] N. Bjerrum-Bohr, J. F. Donoghue and B. R. Holstein, _Quantum gravitational corrections to the nonrelativistic scattering potential of two masses_ , _Phys. Rev. D_ 67 (2003) 084033 [hep-th/0211072]; [Erratum: Phys.Rev.D 71, 069903 (2005)].
* [7] N. E. J. Bjerrum-Bohr, J. F. Donoghue and B. R. Holstein, _Quantum corrections to the Schwarzschild and Kerr metrics_ , _Phys. Rev. D_ 68 (2003) 084005 [hep-th/0211071]; [Erratum: Phys.Rev.D 71, 069904 (2005)].
* [8] A. Codello, R. Percacci and C. Rahmede, _Investigating the Ultraviolet Properties of Gravity with a Wilsonian Renormalization Group Equation_ , _Annals Phys._ 324 (2009) 414 [0805.2909].
* [9] M. M. Anber and J. F. Donoghue, _On the running of the gravitational constant_ , _Phys.Rev._ D85 (2012) 104016 [1111.2875].
* [10] J. F. Donoghue, _A Critique of the Asymptotic Safety Program_ , _Front. in Phys._ 8 (2020) 56 [1911.02967].
* [11] L. Landau and E. Lifshitz, _The classical theory of fields_ , _Pergamon Press_ (1971) .
* [12] J. Smit, _Introduction to quantum fields on a lattice: A robust mate_ , _Cambridge Lect. Notes Phys._ 15 (2002) [Erratum: https://staff.fnwi.uva.nl/j.smit/].
* [13] A. Buonanno and T. Damour, _Effective one-body approach to general relativistic two-body dynamics_ , _Phys. Rev. D_ 59 (1999) 084006 [gr-qc/9811091].
* [14] A. Buonanno and T. Damour, _Transition from inspiral to plunge in binary black hole coalescences_ , _Phys. Rev. D_ 62 (2000) 064015 [gr-qc/0001013].
* [15] G. Schäfer and P. Jaranowski, _Hamiltonian formulation of general relativity and post-Newtonian dynamics of compact binaries_ , _Living Rev. Rel._ 21 (2018) 7 [1805.07240].
* [16] P. Bialas, Z. Burda, A. Krzywicki and B. Petersson, _Focusing on the fixed point of 4d simplicial gravity_ , _Nucl. Phys._ B472 (1996) 293 [hep-lat/9601024].
* [17] B. V. de Bakker, _Further evidence that the transition of 4D dynamical triangulation is 1st order_ , _Phys. Lett._ B389 (1996) 238 [hep-lat/9603024].
* [18] T. Rindlisbacher and P. de Forcrand, _Euclidean Dynamical Triangulation revisited: is the phase transition really 1st order? (extended version)_ , _JHEP_ 1505 (2015) 138 [1503.03706].
* [19] J. Ambjorn, A. Goerlich, J. Jurkiewicz and R. Loll, _Nonperturbative Quantum Gravity_ , _Phys. Rept._ 519 (2012) 127 [1203.3591].
* [20] R. Loll, _Quantum Gravity from Causal Dynamical Triangulations: A Review_ , _Class. Quant. Grav._ 37 (2020) 013002 [1905.08669].
* [21] B. Bruegmann and E. Marinari, _4-d simplicial quantum gravity with a nontrivial measure_ , _Phys.Rev.Lett._ 70 (1993) 1908 [hep-lat/9210002].
* [22] J. Ambjorn, K. Anagnostopoulos and J. Jurkiewicz, _Abelian gauge fields coupled to simplicial quantum gravity_ , _JHEP_ 9908 (1999) 016 [hep-lat/9907027].
* [23] J. Ambjorn, L. Glaser, A. Goerlich and J. Jurkiewicz, _Euclidian 4d quantum gravity with a non-trivial measure term_ , _JHEP_ 10 (2013) 100 [1307.2270].
* [24] J. Laiho, S. Bassler, D. Coumbe, D. Du and J. Neelakanta, _Lattice Quantum Gravity and Asymptotic Safety_ , _Phys. Rev. D_ 96 (2017) 064015 [1604.02745].
* [25] H. W. Hamber, _Vacuum Condensate Picture of Quantum Gravity_ , _Symmetry_ 11 (2019) 87 [1707.08188].
* [26] D. N. Coumbe and J. Jurkiewicz, _Evidence for Asymptotic Safety from Dimensional Reduction in Causal Dynamical Triangulations_ , _JHEP_ 03 (2015) 151 [1411.7712].
* [27] S. Catterall, J. Laiho and J. Unmuth-Yockey, _Kähler-Dirac fermions on Euclidean dynamical triangulations_ , _Phys. Rev. D_ 98 (2018) 114503 [1810.10626].
* [28] J. Smit, _Continuum interpretation of the dynamical-triangulation formulation of quantum Einstein gravity_ , _JHEP_ 08 (2013) 016 [1304.6339]; [Erratum: JHEP 09, 048 (2015)].
* [29] J. Smit, _A phase transition in quantum Einstein gravity_ , _Talks: San Francisco & Syracuse, USA, & Swansea, UK_ (2016) .
* [30] C. Burgess, _Quantum gravity in everyday life: General relativity as an effective field theory_ , _Living Rev. Rel._ 7 (2004) 5 [gr-qc/0311082].
* [31] A. Lasenby, C. Doran, J. Pritchard, A. Caceres and S. Dolan, _Bound states and decay times of fermions in a Schwarzschild black hole background_ , _Phys. Rev. D_ 72 (2005) 105014 [gr-qc/0209090].
* [32] M. Lighthill, _Fourier Analysis and Generalized Functions_. Cambridge University Press, Cambridge, UK, 1958.
* [33] M. E. Agishtein and A. A. Migdal, _Critical behavior of dynamically triangulated quantum gravity in four-dimensions_ , _Nucl. Phys._ B385 (1992) 395 [hep-lat/9204004].
* [34] R. G. Jha, J. Laiho and J. Unmuth-Yockey, _Lattice quantum gravity with scalar fields_ , _PoS_ LATTICE2018 (2018) 043 [1810.09946].
* [35] V. P. Frolov, _Notes on nonsingular models of black holes_ , _Phys. Rev. D_ 94 (2016) 104056 [1609.01758].
* [36] A. Bonanno, A.-P. Khosravi and F. Saueressig, _Regular black holes with stable cores_ , _Phys. Rev. D_ 103 (2021) 124027 [2010.04226].
* [37] N. G. Nielsen, A. Palessandro and M. S. Sloth, _Gravitational Atoms_ , _Phys. Rev. D_ 99 (2019) 123011 [1903.12168].
* [38] M. Dai, J. Laiho, M. Schiffer and J. Unmuth-Yockey, _Newtonian binding from lattice quantum gravity_ , _Phys. Rev. D_ 103 (2021) 114511 [2102.04492].
* [39] S. Bassler, J. Laiho, M. Schiffer and J. Unmuth-Yockey, _The de Sitter Instanton from Euclidean Dynamical Triangulations_ , _Phys. Rev. D_ 103 (2021) 114504 [2103.06973].
* [40] G. ’t Hooft and M. Veltman, _One loop divergencies in the theory of gravitation_ , _Annales Poincare Phys.Theor._ A20 (1974) 69.
* [41] N. E. J. Bjerrum-Bohr, J. F. Donoghue and B. R. Holstein, _Quantum gravitational corrections to the nonrelativistic scattering potential of two masses_ , _Phys. Rev._ D67 (2003) 084033 [hep-th/0211072].
* [42] B. de Wit and R. Gastmans, _On the Induced Cosmological Term in Quantum Gravity_ , _Nucl. Phys. B_ 128 (1977) 294.
* [43] I. Antoniadis, J. Iliopoulos and T. Tomaras, _Gauge Invariance in Quantum Gravity_ , _Nucl. Phys. B_ 267 (1986) 497.
* [44] D. Johnston, _Gauge Independence of Mass Counterterms in the Abelian Higgs Model and Gravity_ , _Nucl. Phys. B_ 293 (1987) 229.
* [45] B. de Wit and N. Hari Dass, _Gauge independence in quantum gravity_ , _Nucl.Phys._ B374 (1992) 99.
* [46] M. E. Peskin and D. V. Schroeder, _An Introduction to quantum field theory_. Addison-Wesley, Reading, USA, 1995.
* [47] A. Rodigast and T. Schuster, _Gravitational Corrections to Yukawa and phi**4 Interactions_ , _Phys. Rev. Lett._ 104 (2010) 081301 [0908.2422].
* [48] P. T. Mackay and D. J. Toms, _Quantum gravity and scalar fields_ , _Phys. Lett. B_ 684 (2010) 251 [0910.1703].
* [49] D. Capper, _A general gauge graviton loop calculation_ , _J. Phys. A_ 13 (1980) 199.
aainstitutetext: Department of Physics, POSTECH, Pohang 37673, Korea
bbinstitutetext: Asia Pacific Center for Theoretical Physics, 67 Cheongam-ro, Nam-gu, Pohang 37673, Korea
ccinstitutetext: School of Physics, University of Electronic Science and Technology of China, No. 2006 Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China
# Topological vertex for 6d SCFTs with $\mathbb{Z}_{2}$-twist
Hee-Cheol Kim a Minsung Kim c Sung-Soo Kim
###### Abstract
We compute the partition function for 6d $\mathcal{N}=1$ $SO(2N)$ gauge
theories compactified on a circle with $\mathbb{Z}_{2}$ outer automorphism
twist. We perform the computation based on 5-brane webs with two O5-planes
using topological vertex with two O5-planes. As representative examples, we
consider 6d $SO(8)$ and $SU(3)$ gauge theories with $\mathbb{Z}_{2}$ twist. We
confirm that these partition functions obtained from the topological vertex
with O5-planes indeed agree with the elliptic genus computations.
## 1 Introduction
In Kim:2019dqn , 5-brane webs Aharony:1997ju ; Aharony:1997bh were proposed
for 6d $\mathcal{N}=(1,0)$ superconformal field theories (SCFTs) with $SO(N)$
gauge symmetry coupled to a tensor multiplet on a circle Heckman:2015bfa .
Such 5-brane webs are constructed with two O5-planes whose
separation can be naturally identified with the Kaluza-Klein (KK) momentum
Hayashi:2015vhy ; Kim:2017jqn . Given a 5-brane web, one can compute the
prepotential Witten:1996qb ; Intriligator:1997pq for the corresponding
theories on the Coulomb branch. It was checked in Kim:2019dqn that the
prepotentials obtained from the proposed 5-brane webs indeed agree with the
prepotentials that one can compute from the triple intersection numbers in
their geometric descriptions Jefferson:2018irk ; Bhardwaj:2019fzv .
Of particular significance to this 5-brane construction of 6d SCFTs with
$SO(N)$ gauge symmetry is a realization of such 6d SCFTs with $\mathbb{Z}_{2}$
twist, providing a new perspective on RG flows on Higgs branches of D-type
conformal matters Heckman:2013pva ; DelZotto:2014hpa . With non-zero
holonomies turned on, one introduces a light charged scalar mode carrying
non-zero KK momentum along the 6d circle, giving rise to a new Higgs branch
associated with a vev of the light mode. In particular, RG flows from
Higgsings on two O5-planes lead to 5-brane webs for twisted compactifications
of 6d theories.
In this paper, we compute $\mathbb{R}^{4}\times T^{2}$ partition functions for
6d theories with $\mathbb{Z}_{2}$ twist, based on 5-brane configurations
proposed in Kim:2019dqn . As representative examples, we consider two 6d SCFTs
with $\mathbb{Z}_{2}$ twist: one is the 6d $SO(8)$ gauge theory with
$\mathbb{Z}_{2}$ twist and the other is the 6d $SU(3)$ theory with
$\mathbb{Z}_{2}$ twist. The $SO(8)$ theory is obtained through the Higgsing
sequence from the 6d $SO(10)$ gauge theory with two hypermultiplets in the
fundamental representation (flavors) to the $SO(9)$ gauge theory with a flavor
and then to the $SO(8)$ theory with $\mathbb{Z}_{2}$ twist, while the $SU(3)$
theory is obtained through the Higgsing sequence from 6d $G_{2}$ gauge theory
with a flavor to the $SU(3)$ theory with $\mathbb{Z}_{2}$ twist, whose 5-brane
configuration yields 5d $SU(3)_{9}$ gauge theory with the Chern-Simons level
$9$, as predicted in Jefferson:2017ahm ; Jefferson:2018irk ; Hayashi:2018lyv .
As a main tool of computation, we implement topological vertex formalism
Aganagic:2003db ; Iqbal:2007ii ; Awata:2008ed with an O5-plane Kim:2017jqn
applying these Higgsing sequences. We check the obtained results against the
elliptic genus Haghighat:2013tka ; Kim:2014dza ; Haghighat:2014vxa ;
Gadde:2015tra computations by applying the Higgsings leading to the twisted
theories or by directly twisting.
The organization of the paper is as follows. In section 2, we discuss the
construction of 5-brane webs for the 6d $SO(8)$ gauge theory with
$\mathbb{Z}_{2}$ twist and compute the partition function using topological
vertex based on 5-brane webs as well as using the ADHM method. In a similar
way, the 6d $SU(3)$ gauge theory with $\mathbb{Z}_{2}$ twist is discussed in
section 3. We then summarize the results and discuss some subtle issues in
section 4. In appendices A and B, we discuss properties of $\mathbb{Z}_{2}$
twisting of 6d theories and the perturbative partition function from the
perspective of twisted affine Lie algebras, and also provide various
identities that are useful in actual computations.
While completing this paper, we became aware of Hayashi:2020hhb which has
some overlap with this paper.
## 2 $SO(8)$ theory with $\mathbb{Z}_{2}$ twist
We first consider twisted compactification of 6d pure $SO(8)$ gauge theory on
$-4$ curve. As proposed in Kim:2019dqn , a 5-brane configuration for the
twisted compactification of the 6d pure $SO(8)$ gauge theory can be obtained
from a Higgsing of 6d $SO(10)$ gauge theory with two hypermultiplets in the
fundamental representation ($SO(10)+2\mathbf{F}$) on $-4$ curves. Before we
discuss twisting, let us recall the standard Higgsing, which is to give vevs
to two fundamental scalars charged under a color D5-brane. This leads to
the 6d $SO(8)$ gauge theory on $-4$ curve. To realize twisted
compactification, one instead gives independent vev’s to two individual
scalars. Namely, first give a vev to one fundamental scalar to yield the 6d
$SO(9)$ gauge theory with a fundamental hypermultiplet ($SO(9)+1\mathbf{F}$),
and then give a different vev to another fundamental scalar carrying a unit KK
momentum. This leads to the 6d $SO(8)$ gauge theory on $-4$ curve with
${\mathbb{Z}}_{2}$ twist. In this section, we first review construction of
5-brane configuration leading to the twisted compactification of the 6d pure
$SO(8)$ gauge theory on $-4$ curve, and then compute the partition function of
the twisted $SO(8)$ gauge theory on $\mathbb{R}^{4}\times T^{2}$, using
topological vertex Aganagic:2003db on a 5-brane web with two O5-planes
Kim:2017jqn . We compare our result with Higgsing or twisting of the elliptic
genus from the ADHM construction using localization technique, introduced in
Benini:2013nda ; Benini:2013xpa .
Figure 1: A 5-brane web for 6d $SO(10)$ theory with two fundamentals. Two
fundamental hypermultiplets are represented by two external D5-branes ending
on a D7-brane (black dot), and the monodromy cuts of the D7-branes point in
the outward direction.
### 2.1 5-brane web for 6d $SO(8)$ gauge theory with $\mathbb{Z}_{2}$ twist
Let us begin with a 5-brane configuration for the 6d $SO(10)$ gauge theory
with two fundamental hypermultiplets on $-4$ curve. It is depicted in Figure
1, where there are two O5-planes whose separation naturally gives a
compactification direction associated with 6d circle. Two fundamental
hypermultiplets are denoted by two D5-branes ending on D7-branes. Their masses
are $m_{1},m_{2}$. The W-bosons $W_{i}$ of the theory are denoted by wiggly
lines connecting color D5-branes in Figure 1. The masses of the W-bosons are
given by
$m_{W_{1}}=2\phi_{0}-\phi_{2}\,,\quad m_{W_{2}}=2\phi_{1}-\phi_{2}\,,\quad m_{W_{3}}=2\phi_{2}-\phi_{0}-\phi_{1}-\phi_{3}\,,$
$m_{W_{4}}=2\phi_{3}-\phi_{2}-\phi_{4}-\phi_{5}\,,\quad m_{W_{5}}=2\phi_{4}-\phi_{3}\,,\quad m_{W_{6}}=2\phi_{5}-\phi_{3}\,,$ (1)
where $\phi_{i}$ ($i=1,\cdots,5$) are the Coulomb branch vevs and $\phi_{0}$ is
the tensor branch vev. As it is a configuration for the 6d theory on a circle,
one can see that the W-boson masses are consistent with the affine Cartan
matrix (Footnote 1: The affine Cartan matrix is defined as
$\mathcal{C}_{ij}=2\frac{(\alpha_{i},\alpha_{j})}{(\alpha_{j},\alpha_{j})}$
with the simple roots $\alpha_{i}$ of affine Lie algebras.) of the untwisted
affine Lie algebra $D_{5}^{(1)}$,
$m_{W_{i}}=\big{(}\mathcal{C}_{D_{5}^{(1)}}\big{)}_{ij}\,\phi_{j-1}\,,\qquad\mathcal{C}_{D_{5}^{(1)}}=\begin{pmatrix}2&0&-1&0&0&0\\ 0&2&-1&0&0&0\\ -1&-1&2&-1&0&0\\ 0&0&-1&2&-1&-1\\ 0&0&0&-1&2&0\\ 0&0&0&-1&0&2\end{pmatrix}.$ (2)
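Relations (1) and (2) can be cross-checked mechanically. The snippet below is our own illustrative script (variable and function names are not from the paper); it verifies that $\mathcal{C}_{D_{5}^{(1)}}$ acting on $(\phi_{0},\ldots,\phi_{5})$ reproduces the masses in (1), and that the matrix annihilates the vector of marks $(1,1,2,2,1,1)$, as any affine Cartan matrix must:

```python
# Affine Cartan matrix of D_5^(1), eq. (2); columns follow phi_0,...,phi_5.
C_D5 = [
    [ 2,  0, -1,  0,  0,  0],
    [ 0,  2, -1,  0,  0,  0],
    [-1, -1,  2, -1,  0,  0],
    [ 0,  0, -1,  2, -1, -1],
    [ 0,  0,  0, -1,  2,  0],
    [ 0,  0,  0, -1,  0,  2],
]

def w_boson_masses(C, phi):
    """m_{W_i} = C_ij * phi_{j-1}, as in eq. (2)."""
    return [sum(C[i][j] * phi[j] for j in range(len(phi))) for i in range(len(C))]

phi = [3, 5, 7, 11, 13, 17]  # arbitrary integer test values
masses = w_boson_masses(C_D5, phi)
expected = [
    2 * phi[0] - phi[2],                    # m_W1 in eq. (1)
    2 * phi[1] - phi[2],                    # m_W2
    2 * phi[2] - phi[0] - phi[1] - phi[3],  # m_W3
    2 * phi[3] - phi[2] - phi[4] - phi[5],  # m_W4
    2 * phi[4] - phi[3],                    # m_W5
    2 * phi[5] - phi[3],                    # m_W6
]
```

Here `masses == expected`, and `w_boson_masses(C_D5, [1, 1, 2, 2, 1, 1])` returns the zero vector.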
Figure 2: (a) Another 5-brane web of 6d $SO(10)$ theory with two flavors after
applying a series of flop transitions in Figure 1. (b) The bottom part of (a)
after applying Hanany-Witten transition. The black circle and black zigzag
line denote D7-brane and corresponding monodromy cut, respectively. (c)
Putting a D7-brane and a color D5-brane to the orientifold plane on the
bottom. A D7-brane is split into two half D7-branes and generates
$\widetilde{\mathrm{O}5}^{-}$ plane. The half D7-branes and corresponding half
monodromy cuts are denoted by blue dots and blue zigzag lines. The half
D5-brane is denoted by violet line. (d) A 5-brane configuration after Higgsing
away two half D5-branes between D7-branes and then taking two half D7-branes
to the left and right infinities, respectively.
As discussed, to twist, we perform the Higgsing such that we first bring a
D7-brane down to one of the O5-planes to obtain $SO(9)$ gauge theory with a
fundamental as depicted in Figure 2, and then push up the remaining D7-brane
to the other O5-plane to give a different vev to the remaining scalar, as in
Figure 3. More precisely, the Higgsing from the $SO(10)$ gauge theory with two
flavors to the $SO(9)$ gauge theory with one flavor is achieved as follows:
Through successive flop transitions, one can bring a D7-brane near the bottom
O5-plane as depicted in Figure 2(a). Then one brings the D7-brane inside the
Coulomb branch of the $SO(10)$ gauge theory, where the D7-brane becomes
floating as in Figure 2(b), since the flavor D5-brane is annihilated due to the
Hanany-Witten transition. We then give a vev to the flavor, which places the
flavor D7-brane and a color D5-brane on an O5-plane; that is, we set the mass
of the flavor hypermultiplet and that of the associated W-boson
$\phi_{1}-\phi_{0}$ to zero,
$\displaystyle m_{1}=0\ ,\qquad\phi_{1}=\phi_{0}\ .$ (3)
We note that when it is placed on an O5-plane as depicted in Figures 2(c) and
2(d), a D7-brane (black dot) is split into two half D7-branes (blue dots),
creating a half D5-brane between the two half D7-branes stuck on the O5-plane,
which hence turns the O5-plane between these split half D7-branes into an
$\widetilde{{\rm O5}}^{-}$-plane Evans:1997hk ; Giveon:1998sr ; Feng:2000eq
; Bertoldi:2002nn . In doing so, a new Higgs branch opens up such that two
half color D5-branes (or a full color D5-brane) suspended between the two half
D7-branes on the $\widetilde{{\rm O5}}^{-}$-plane can be Higgsed away,
resulting in a 5-brane configuration for the $SO(9)$ gauge theory. After moving
the half D7-branes away from each other in opposite directions along the
orientifold plane, one gets a 5-brane configuration with an $\widetilde{{\rm
O5}}$-plane as in Figure 2(d).
Figure 3: (a) The 5-brane web obtained by applying Figure 2(b)-(d) to the
bottom part of Figure 2(a). (b) Flop transition of (a).
Figure 3(a) is the resulting 5-brane web diagram describing the 6d $SO(9)$
gauge theory on a circle with a fundamental hypermultiplet of mass $m_{2}$,
implementing (3). Here we have redefined the scalars $\phi_{i}$
$\displaystyle\phi_{0}\to\phi_{4}\ ,\qquad\phi_{2}\to\phi_{3}\
,\qquad\phi_{3}\to\phi_{2}\ ,\qquad\phi_{4}\to\phi_{0}\
,\qquad\phi_{5}\to\phi_{1}\ ,$ (4)
to make the masses of the W-bosons form the affine Cartan matrix of untwisted
affine Lie algebra $B_{4}^{(1)}$,
$m_{W_{1}}=2\phi_{0}-\phi_{2}\,,\quad m_{W_{2}}=2\phi_{1}-\phi_{2}\,,\quad m_{W_{3}}=2\phi_{2}-\phi_{3}-\phi_{0}-\phi_{1}\,,$
$m_{W_{4}}=2\phi_{3}-2\phi_{4}-\phi_{2}\,,\quad m_{W_{5}}=2\phi_{4}-\phi_{3}\,,$ (5)
or more explicitly,
$m_{W_{i}}=\big{(}\mathcal{C}_{B_{4}^{(1)}}\big{)}_{ij}\,\phi_{j-1}\,,\qquad\mathcal{C}_{B_{4}^{(1)}}=\begin{pmatrix}2&0&-1&0&0\\ 0&2&-1&0&0\\ -1&-1&2&-1&0\\ 0&0&-1&2&-2\\ 0&0&0&-1&2\end{pmatrix}.$ (6)
We further Higgs the remaining fundamental hypermultiplet. As the 5-brane
configuration given in Figure 3 has two different types of orientifold planes,
O5- and $\widetilde{\rm O5}$-planes, we have two ways of Higgsing the
remaining hypermultiplet. In the previous Higgsing $SO(10)+2\mathbf{F}\to
SO(9)+1\mathbf{F}$, the Higgsing procedure converts an O5-plane to an
$\widetilde{\rm O5}$-plane or vice versa. In a similar fashion, if one brings
a flavor D5-brane (or equivalently a flavor D7-brane) down to the
$\widetilde{\rm O5}$-plane, this performs the same kind of Higgsing, and hence
yields a 5-brane web for a pure $SO(8)$ theory on a circle. This is
the standard Higgsing $SO(9)+1\mathbf{F}\to SO(8)$, hence corresponding to
untwisted compactification $SO(10)+2\mathbf{F}\to SO(9)+1\mathbf{F}\to SO(8)$.
Indeed, shifting $\phi_{3}\to\phi_{3}+\phi_{4}$ straightforwardly yields the
W-boson masses forming the affine Cartan matrix of untwisted affine Lie
algebra $D_{4}^{(1)}$, as expected,
$m_{W_{1}}=2\phi_{0}-\phi_{2}\,,\quad m_{W_{2}}=2\phi_{1}-\phi_{2}\,,\quad m_{W_{3}}=2\phi_{2}-\phi_{3}-\phi_{4}-\phi_{0}-\phi_{1}\,,$
$m_{W_{4}}=2\phi_{3}-\phi_{2}\,,\quad m_{W_{5}}=2\phi_{4}-\phi_{2}\,.$ (7)
Figure 4: The 5-brane web of $SO(8)$ theory with $\mathbb{Z}_{2}$ twist.
Now, let us take the flavor D5-brane and one of the color D5-branes to the
${\rm O5}$-plane instead. One can do the same kind of Higgsing as in the
previous Higgsing of the $SO(10)$ theory to the $SO(9)$ theory, which then
makes a 5-brane configuration with two $\widetilde{\rm O5}$-planes. As
discussed in Kim:2019dqn , this is the Higgsing in which one Higgses the
$SO(9)$ theory with one fundamental hypermultiplet by giving a vev to a scalar
field carrying Kaluza-Klein momentum, which leads to the $SO(8)$ gauge theory
with a $\mathbb{Z}_{2}$ twist. Namely, we first revive the 6d circle radius
$R$ in Figure 3(a) by introducing Kaluza-Klein momentum, which can be done by
redefining Kähler parameters to be
$\displaystyle 2\phi_{0}-\phi_{2}\to 2\phi_{0}-\phi_{2}+\frac{1}{R}\
,\quad\qquad\phi_{1}-\phi_{0}\to\phi_{1}-\phi_{0}-\frac{1}{2R}\ ,$ (8)
where the other parameters in Figure 3(a) are unaltered. By successive flop
transitions, the fundamental hypermultiplet can be placed near the O5-plane
located at the top in Figure 3(a). This yields the 5-brane configuration in
Figure 3(b). We then Higgs by setting
$\displaystyle m_{2}=\frac{1}{2R}\
,\qquad\qquad\phi_{1}=\phi_{0}+\frac{1}{2R}\ .$ (9)
The resulting 5-brane configuration for the 6d $SO(8)$ gauge theory with a
$\mathbb{Z}_{2}$ twist is depicted in Figure 4. Here the W-boson masses are
given by
$\displaystyle m_{W_{1}}$ $\displaystyle=2\phi_{0}-\phi_{2}+\frac{1}{R}\ ,$
$\displaystyle m_{W_{2}}$
$\displaystyle=2\phi_{2}-2\phi_{0}-\phi_{3}-\frac{1}{2R}\ ,$ $\displaystyle
m_{W_{3}}$ $\displaystyle=2\phi_{3}-2\phi_{4}-\phi_{2}\ ,$ $\displaystyle
m_{W_{4}}$ $\displaystyle=2\phi_{4}-\phi_{3}\ ,$ (10)
which form the affine Cartan matrix of twisted affine Lie algebra
$D_{4}^{(2)}$,
$\displaystyle\mathcal{C}_{D_{4}^{(2)}}=\left(\begin{array}{rrrr}2&-1&0&0\\ -2&2&-1&0\\ 0&-1&2&-2\\ 0&0&-1&2\end{array}\right).$ (15)
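As a quick cross-check of this identification, one can read off the coefficient matrix of the W-boson masses (10) in the moduli $(\phi_{0},\phi_{2},\phi_{3},\phi_{4})$ and compare it with (15); a minimal sketch in plain Python (variable names are ours):

```python
# Coefficients of (phi_0, phi_2, phi_3, phi_4) in the W-boson masses of
# eq. (10); the 1/R pieces do not enter the Cartan matrix.
masses = [
    {"phi0":  2, "phi2": -1},              # m_W1 = 2 phi0 - phi2 + 1/R
    {"phi0": -2, "phi2":  2, "phi3": -1},  # m_W2 = 2 phi2 - 2 phi0 - phi3 - 1/(2R)
    {"phi2": -1, "phi3":  2, "phi4": -2},  # m_W3 = 2 phi3 - 2 phi4 - phi2
    {"phi3": -1, "phi4":  2},              # m_W4 = 2 phi4 - phi3
]
moduli = ["phi0", "phi2", "phi3", "phi4"]
C = [[m.get(p, 0) for p in moduli] for m in masses]

# Affine Cartan matrix of D_4^(2), eq. (15)
C_D4_twisted = [[ 2, -1,  0,  0],
                [-2,  2, -1,  0],
                [ 0, -1,  2, -2],
                [ 0,  0, -1,  2]]
assert C == C_D4_twisted
```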
### 2.2 Partition function from 5-brane webs
We now compute the partition function based on the 5-brane configuration for
the 6d $SO(8)$ gauge theory with a $\mathbb{Z}_{2}$ twist, using the
topological vertex with an O5-plane Kim:2017jqn . With an O5-plane, we can
easily realize
a 5d $SO(N)$ or $Sp(N)$ gauge theory with hypermultiplets in the fundamental
representation. For $SO(N)$ gauge theories, we can also depict 5-brane
configurations for hypermultiplets in the spinor representation Zafrir:2015ftn
. Our strategy for carrying out the topological vertex computation with
O5-planes is first to introduce auxiliary spinor hypermultiplets and then to
decouple them after the computation.
Figure 5: (a) Coupling two spinors after Higgsing Figure 2(c). (b) Moving two
half D7-branes and the corresponding monodromy cuts to the far left. (c)
Applying a generalized flop transition to the spinors.
Figure 6: A 5-brane web diagram for $\mathbb{Z}_{2}$ twisted compactification
of 6d $SO(8)$ theory with four auxiliary spinors.
To introduce four auxiliary spinors, we start with a 5-brane configuration
before twisting. Namely, we consider the 5-brane web for the 6d $SO(10)$ gauge
theory with two hypermultiplets in the fundamental representation as well as
four auxiliary hypermultiplets in the spinor representation. One can imagine
that, given a 5-brane web for the 6d $SO(10)$ gauge theory with two
fundamentals as in Figure 2(a), one introduces each spinor as a
charge-conserving distant 5-brane on the left and on the right of the main web
for the $SO(10)$ gauge
theory. On the bottom O5-plane, they are a 5-brane of charge $(2,-1)$ on the
left and a 5-brane of charge $(2,1)$ on the right. On the top O5-plane, they
are a 5-brane of charge $(2,-1)$ on the left and a 5-brane of charge $(2,-1)$
on the right. Higgsing toward the $SO(8)$ gauge theory with a twist is
straightforward, except that additional half D5-branes are introduced along the
O5-planes due to Hanany-Witten transitions when taking a half D7-brane to
infinity. For instance, in Figure 5, Higgsing from the $SO(10)$ gauge theory
with a fundamental is depicted in the presence of two spinors: a distant
5-brane of charge $(1,-1)$ due to the monodromy of the D7-brane (fundamental
hyper) on the left, and another distant 5-brane of charge $(2,1)$ on the right
of the bottom O5-plane. Figure 5(a) shows the configuration when the Higgsing
is performed, where two half D7-branes lie on the orientifold plane. Unlike the
previous case, where we took each half D7-brane in opposite directions to
realize an $\widetilde{\rm O5}$-plane, this time, for ease of computation, we
take the two half D7-branes in the same direction, to the left, so as to keep
the orientifold plane an O5-plane. This is depicted in Figure 5(b), where half
D5-branes are created due to the Hanany-Witten transitions. As the spinor
5-branes are taken closer to the center of the O5-plane, they undergo
“generalized flop” transitions Hayashi:2017btw while still preserving charge
conservation, as given in Figure 5(c). Repeating the same procedure on the top O5-plane,
one finds a 5-brane web for the 6d $SO(8)$ gauge theory with four auxiliary
spinors with a $\mathbb{Z}_{2}$ twist, as in Figure 6. Again, to obtain the
partition function for the 6d $SO(8)$ gauge theory with a $\mathbb{Z}_{2}$
twist, we first compute the partition function based on the 5-brane web for
the 6d $SO(8)$ gauge theory with four spinors with the twist in Figure 6, and
then decouple the spinors by taking all the masses of the spinor matter to
infinity.
To perform the topological vertex method with an O5-plane, we assign arrows,
Young diagrams $\lambda,\mu_{i},\nu_{i}$, and Kähler parameters $Q_{i}$ to the
edges as in Figure 7. The topological string partition function $Z$ can be
computed by evaluating the edge factor and the vertex factor:
$\displaystyle
Z=\sum_{\lambda,\mu,\nu,\cdots}\quantity(\prod\mathrm{Edge\>Factor})\quantity(\prod\mathrm{Vertex\>Factor})\
.$ (16)
The Edge Factor is given by the product of Kähler parameters and the framing
factor of the associated Young diagram
$\lambda=(\lambda_{1},\lambda_{2},\cdots,\lambda_{\ell(\lambda)})$,
$\displaystyle(-Q)^{|\lambda|}f^{n}_{\lambda}\ ,$ (17)
where the framing factor is defined as
$\displaystyle
f_{\lambda}=(-1)^{\absolutevalue{\lambda}}g^{\frac{1}{2}(\norm*{\lambda^{t}}^{2}-\norm*{\lambda}^{2})}=f_{\lambda^{t}}^{-1}\
,$ (18)
with
$\displaystyle\absolutevalue{\lambda}=\sum_{i=1}^{\ell(\lambda)}\lambda_{i}\
,\qquad\norm{\lambda}^{2}=\sum_{i=1}^{\ell(\lambda)}\lambda_{i}^{2}\ .$ (19)
Here, $g=e^{-\epsilon}$ is the unrefined $\Omega$-deformation parameter with
$\epsilon_{1}=-\epsilon_{2}=\epsilon$, and $\lambda^{t}$ denotes the transposed
Young diagram, which corresponds to the conjugate partition of $\lambda$. The
power $n$ of the framing factor is determined by the incoming and outgoing
edges connected to $Q$ through their antisymmetric product; for instance, in
Figure 7, $n=u_{1}\wedge v_{2}=\det(u_{1},v_{2})$.
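The partition data entering (17)-(19) are straightforward to manipulate programmatically; a small sketch in plain Python (function names are ours), checking $f_{\lambda}=f_{\lambda^{t}}^{-1}$ at the level of the exponent of $g$:

```python
# |lambda|, ||lambda||^2, the conjugate partition, and the g-exponent of the
# framing factor f_lambda = (-1)^{|lambda|} g^{(||lambda^t||^2 - ||lambda||^2)/2}
def size(lam):
    return sum(lam)

def norm2(lam):
    return sum(part ** 2 for part in lam)

def transpose(lam):
    # column lengths of the Young diagram = conjugate partition
    return [sum(1 for part in lam if part > j) for j in range(lam[0])] if lam else []

def framing_exponent(lam):
    # power of g in f_lambda; the overall sign is (-1)^{|lambda|}
    return (norm2(transpose(lam)) - norm2(lam)) / 2

lam = [3, 1]
assert transpose(lam) == [2, 1, 1]
assert size(lam) == size(transpose(lam)) == 4
# f_lambda = f_{lambda^t}^{-1}, eq. (18): opposite exponents, equal signs
assert framing_exponent(lam) == -framing_exponent(transpose(lam))
```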
Figure 7: An example of a 5-brane web where $\lambda,\mu_{i},\nu_{i}$ are Young
diagrams associated with the edges, $Q$ is a Kähler parameter associated with
the edge $\lambda$, and $u_{i},v_{i}$ are the $(p,q)$-charges of four edges.
The Vertex Factor is given by
$\displaystyle
C_{\lambda\mu\nu}=g^{\frac{1}{2}(-\norm*{\mu^{t}}^{2}+\norm*{\mu}^{2}+\norm*{\nu}^{2})}\tilde{Z}_{\nu}(g)\sum_{\eta}s_{\lambda^{t}/\eta}(g^{-\rho-\nu})s_{\mu/\eta}(g^{-\rho-\nu^{t}})\
,$ (20)
where $\lambda,\mu,\nu$ are the Young diagrams assigned to the three outgoing
edges connected to a given vertex, arranged in clockwise order such that the
last Young diagram $\nu$ is assigned to the preferred direction, which is
associated with the instanton sum. We note that
$C_{\lambda\mu\nu}=C_{\mu\nu\lambda}=C_{\nu\lambda\mu}$ for unrefined vertex
factors. When the orientation of an edge is reversed, the corresponding Young
diagram is transposed. Here, $\tilde{Z}_{\nu}(g)$ is defined by
$\displaystyle\tilde{Z}_{\nu}(g)$
$\displaystyle=\tilde{Z}_{\nu^{t}}(g)=\prod_{i=1}^{\ell(\nu)}\prod_{j=1}^{\nu_{i}}\quantity(1-g^{\nu_{i}+\nu_{j}^{t}-i-j+1})^{-1},$
(21)
where $s_{\lambda/\eta}(x)$ denotes the skew Schur functions and
$\rho=(-\frac{1}{2},-\frac{3}{2},-\frac{5}{2},\cdots)$. When summing over skew
Schur functions, one needs to repeatedly use the Cauchy identities
$\displaystyle\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\lambda/\eta_{1}}(g^{-\rho-\nu_{1}})s_{\lambda/\eta_{2}}(g^{-\rho-\nu_{2}})$
$\displaystyle\quad=\mathcal{R}_{\nu_{2}\nu_{1}}(Q)^{-1}\sum_{\lambda}Q^{\absolutevalue{\eta_{1}}+\absolutevalue{\eta_{2}}-\absolutevalue{\lambda}}s_{\eta_{2}/\lambda}(g^{-\rho-\nu_{1}})s_{\eta_{1}/\lambda}(g^{-\rho-\nu_{2}})\
,$ (22)
$\displaystyle\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\lambda/\eta_{1}^{t}}(g^{-\rho-\nu_{1}})s_{\lambda^{t}/\eta_{2}}(g^{-\rho-\nu_{2}})$
$\displaystyle\quad=\mathcal{R}_{\nu_{2}\nu_{1}}(-Q)\sum_{\lambda}Q^{\absolutevalue{\eta_{1}}+\absolutevalue{\eta_{2}}-\absolutevalue{\lambda}}s_{\eta_{2}^{t}/\lambda}(g^{-\rho-\nu_{1}})s_{\eta_{1}/\lambda^{t}}(g^{-\rho-\nu_{2}})\
,$ (23)
where
$\displaystyle\mathcal{R}_{\lambda\mu}(Q)$
$\displaystyle=\prod^{\infty}_{i,j=1}\left(1-Q\,g^{i+j-\mu_{i}-\lambda_{j}-1}\right).$
(24)
See also Appendix B for various other Cauchy identities and related special
functions.
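To see the Cauchy identity (22) at work in its simplest instance $\nu_{1}=\nu_{2}=\eta_{1}=\eta_{2}=\emptyset$, one can check numerically that $\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\lambda}(g^{-\rho})^{2}=\mathcal{R}_{\emptyset\emptyset}(Q)^{-1}$; a sketch in plain Python with finite truncations of $g^{-\rho}$ and of the sums and products (all names are ours):

```python
g, Q, N, KMAX = 0.3, 0.2, 40, 10
x = [g ** (i + 0.5) for i in range(N)]   # truncation of g^{-rho} = (g^{1/2}, g^{3/2}, ...)

# power sums -> complete homogeneous h_k via Newton's identity k h_k = sum_i p_i h_{k-i}
p = [sum(xi ** k for xi in x) for k in range(KMAX + 1)]
h = [1.0]
for k in range(1, KMAX + 1):
    h.append(sum(p[i] * h[k - i] for i in range(1, k + 1)) / k)

def det(M):
    # Laplace expansion; fine for the small matrices used here
    if not M:
        return 1.0
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def schur(lam):
    """Jacobi-Trudi: s_lambda = det(h_{lam_i - i + j})."""
    n = len(lam)
    return det([[h[lam[i] - i + j] if 0 <= lam[i] - i + j <= KMAX else 0.0
                 for j in range(n)] for i in range(n)])

def partitions(k, mx=None):
    if k == 0:
        yield ()
        return
    mx = k if mx is None else mx
    for first in range(min(k, mx), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

# LHS of (22) with all nu, eta empty: sum over |lambda| <= 6
lhs = sum(Q ** k * schur(lam) ** 2 for k in range(7) for lam in partitions(k))

# RHS is R_{empty,empty}(Q)^{-1} with R as in eq. (24): prod_{i,j}(1 - Q g^{i+j-1})
R = 1.0
for i in range(1, 60):
    for j in range(1, 60):
        R *= 1.0 - Q * g ** (i + j - 1)
assert abs(lhs - 1.0 / R) < 1e-5
```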
Now, we assign the Young diagrams and Kähler parameters to a 5-brane web
diagram for the 6d $SO(8)$ gauge theory with four auxiliary spinors as in
Figure 8. We compute the partition function based on this 5-brane web with two
O5-planes as follows: First we reflect the 5-brane web with respect to each
O5-plane, which creates a mirror image of the original 5-brane web. We also
cut the 5-brane configuration in half so that we can glue over the color
D5-branes associated with Young diagrams $\mu_{1},\mu_{2},\mu_{3}$. As we have
mirror images, we can choose a convenient fundamental 5-brane configuration,
which contains both the original and the mirror images such that when crossing
the O5-plane, 5-brane webs are smoothly connected. Together with the mirror
images, we then see that we can extract two pieces of strip diagrams given in
Figure 9.
Figure 8: 5-brane web diagram for $SO(8)$ theory with a $\mathbb{Z}_{2}$ twist
where we first couple four auxiliary spinors and then later decouple all the
spinors after obtaining the partition function. In this way, one can compute
the partition function for 6d $SO(8)$ theory with a $\mathbb{Z}_{2}$ twist.
Figure 9: Strip diagrams for 6d $SO(8)$ gauge theory with $\mathbb{Z}_{2}$
twist, which is coupled to 4 auxiliary spinors. They are the left part (a) and
the right part (b) of Figure 8. We use mirror images to cast each part in the
form of a strip diagram.
We note here in Figure 9 that, unlike the edge factors for $\mu_{i}$, which
follow the conventional rule, the edge factors for $\nu_{i}$ should be treated
carefully.
When $\nu_{i}$ are reflected over the orientifold planes, they get the
additional factor $(-1)^{\absolutevalue{\nu_{i}}}f_{\nu_{i}}^{\pm 1}$, where
$\pm 1$ is determined from the orientation of the edges. For the vertex
factor, the assigned arrows associated with Young diagrams are reversed when
they are reflected over, which makes the ordering of the corresponding Young
diagrams counterclockwise. It also follows from the identity Kim:2017jqn
$\displaystyle
C_{\lambda\mu\nu}=(-1)^{|\lambda|+|\mu|+|\nu|}f_{\lambda}^{-1}(g)f_{\mu}^{-1}(g)f_{\nu}^{-1}(g)C_{\mu^{t}\lambda^{t}\nu^{t}}\
,$ (25)
that the reversal of the Young diagrams is translated into the transposition
of the Young diagrams, and in turn, it is equivalent to changing the direction
of the arrows.
The topological string partition function is then obtained by gluing the two
strip diagrams in Figure 9, i.e., by summing over the Young diagrams $\mu_{i}$
and $\nu_{i}$, which are associated with the preferred direction
$\displaystyle Z$
$\displaystyle=\sum_{\mu_{i},\nu_{i}}Q_{1}^{\absolutevalue{\nu_{1}}}Q_{3}^{\absolutevalue{\nu_{2}}}Q_{5}^{\absolutevalue{\nu_{3}}}Q_{7}^{\absolutevalue{\nu_{4}}}(-Q_{B}Q_{10})^{\absolutevalue{\mu_{1}}}(-Q_{B})^{\absolutevalue{\mu_{2}}}(-Q_{B}Q_{9})^{\absolutevalue{\mu_{3}}}$
$\displaystyle\qquad\quad\times
f_{\mu_{1}}^{2}f_{\mu_{3}}^{-2}f_{\nu_{1}}^{3}f_{\nu_{2}}f_{\nu_{3}}f_{\nu_{4}}^{3}Z^{\rm
strip}_{\mathrm{left}}(\mu_{1},\mu_{2},\mu_{3},\nu_{1},\nu_{2})Z^{\rm
strip}_{\mathrm{right}}(\mu_{1},\mu_{2},\mu_{3},\nu_{3},\nu_{4})\ ,$ (26)
where the two strip parts are as follows: with the shorthand notation
$Q_{i,j,k,\cdots}=Q_{i}Q_{j}Q_{k}\cdots$, the left strip part, in Figure 9(a),
is given by
$\displaystyle Z_{\mathrm{left}}^{\rm
strip}(\mu_{1},\mu_{2},\mu_{3},\nu_{1},\nu_{2})$
$\displaystyle=g^{\frac{1}{2}(\norm*{\mu_{1}}^{2}+\norm*{\mu_{2}}^{2}+\norm*{\mu_{3}}^{2}+2\norm*{\nu_{1}}^{2}+2\norm*{\nu_{2}^{t}}^{2})}\tilde{Z}_{\mu_{1}}(g)\tilde{Z}_{\mu_{2}}(g)\tilde{Z}_{\mu_{3}}(g)\tilde{Z}_{\nu_{1}}(g)^{2}\tilde{Z}_{\nu_{2}}(g)^{2}$
$\displaystyle~{}~{}\times\frac{\mathcal{R}_{\emptyset\emptyset}(Q_{1,2,3,4,9,10})\mathcal{R}_{\emptyset\mu_{1}}(Q_{3,4,9,10})\mathcal{R}_{\emptyset\mu_{2}}(Q_{3,4,9})\mathcal{R}_{\emptyset\mu_{3}}(Q_{3,4})\mathcal{R}_{\emptyset\nu_{1}}(Q_{1,1,2,3,4,9,10})}{\mathcal{R}_{\emptyset\mu_{1}^{t}}(Q_{1,2})\mathcal{R}_{\emptyset\mu_{2}^{t}}(Q_{1,2,10})\mathcal{R}_{\emptyset\mu_{3}^{t}}(Q_{1,2,9,10})\mathcal{R}_{\emptyset\nu_{1}^{t}}(Q_{2,3,4,9,10})\mathcal{R}_{\emptyset\nu_{2}}(Q_{1,2,3,3,4,9,10})\mathcal{R}_{\mu_{1}\mu_{2}^{t}}(Q_{10})}$
$\displaystyle~{}~{}\times\frac{\mathcal{R}_{\emptyset\nu_{2}^{t}}(Q_{1,2,4,9,10})\mathcal{R}_{\mu_{1}^{t}\nu_{1}^{t}}(Q_{2})\mathcal{R}_{\mu_{2}^{t}\nu_{1}^{t}}(Q_{2,10})\mathcal{R}_{\mu_{3}^{t}\nu_{1}^{t}}(Q_{2,9,10})\mathcal{R}_{\mu_{1}\nu_{2}^{t}}(Q_{4,9,10})\mathcal{R}_{\mu_{2}\nu_{2}^{t}}(Q_{4,9})}{\mathcal{R}_{\mu_{1}\mu_{3}^{t}}(Q_{9,10})\mathcal{R}_{\mu_{2}\mu_{3}^{t}}(Q_{9})\mathcal{R}_{\mu_{1}^{t}\nu_{1}}(Q_{1,1,2})\mathcal{R}_{\mu_{2}^{t}\nu_{1}}(Q_{1,1,2,10})\mathcal{R}_{\mu_{3}^{t}\nu_{1}}(Q_{1,1,2,9,10})\mathcal{R}_{\mu_{1}\nu_{2}}(Q_{3,3,4,9,10})}$
$\displaystyle~{}~{}\times\frac{\mathcal{R}_{\mu_{3}\nu_{2}^{t}}(Q_{4})\mathcal{R}_{\nu_{1}\nu_{1}}(Q_{1,1})\mathcal{R}_{\nu_{2}\nu_{2}}(Q_{3,3})\mathcal{R}_{\nu_{1}^{t}\nu_{2}}(Q_{2,3,3,4,9,10})\mathcal{R}_{\nu_{1}\nu_{2}^{t}}(Q_{1,1,2,4,9,10})}{\mathcal{R}_{\mu_{2}\nu_{2}}(Q_{3,3,4,9})\mathcal{R}_{\mu_{3}\nu_{2}}(Q_{3,3,4})\mathcal{R}_{\nu_{1}\nu_{2}}(Q_{1,1,2,3,3,4,9,10})\mathcal{R}_{\nu_{1}^{t}\nu_{2}^{t}}(Q_{2,4,9,10})}\
,$ (27)
and the right strip part, in Figure 9(b), is given by
$\displaystyle Z_{\mathrm{right}}^{\rm
strip}(\mu_{1},\mu_{2},\mu_{3},\nu_{3},\nu_{4})$
$\displaystyle=g^{\frac{1}{2}(\norm*{\mu_{1}^{t}}^{2}+\norm*{\mu_{2}^{t}}^{2}+\norm*{\mu_{3}^{t}}^{2}+2\norm*{\nu_{3}^{t}}^{2}+2\norm*{\nu_{4}}^{2})}\tilde{Z}_{\mu_{1}}(g)\tilde{Z}_{\mu_{2}}(g)\tilde{Z}_{\mu_{3}}(g)\tilde{Z}_{\nu_{3}}(g)^{2}\tilde{Z}_{\nu_{4}}(g)^{2}$
$\displaystyle~{}~{}\times\frac{\mathcal{R}_{\emptyset\emptyset}(Q_{5,6,7,8,9,10})\mathcal{R}_{\emptyset\mu_{1}^{t}}(Q_{5,6})\mathcal{R}_{\emptyset\mu_{2}^{t}}(Q_{5,6,10})\mathcal{R}_{\emptyset\mu_{3}^{t}}(Q_{5,6,9,10})\mathcal{R}_{\emptyset\nu_{3}^{t}}(Q_{6,7,8,9,10})}{\mathcal{R}_{\emptyset\mu_{1}}(Q_{7,8,9,10})\mathcal{R}_{\emptyset\mu_{2}}(Q_{7,8,9})\mathcal{R}_{\emptyset\mu_{3}}(Q_{7,8})\mathcal{R}_{\emptyset\nu_{3}}(Q_{5,5,6,7,8,9,10})\mathcal{R}_{\emptyset\nu_{4}^{t}}(Q_{5,6,8,9,10})\mathcal{R}_{\mu_{1}\mu_{2}^{t}}(Q_{10})}$
$\displaystyle~{}~{}\times\frac{\mathcal{R}_{\emptyset\nu_{4}}(Q_{5,6,7,7,8,9,10})\mathcal{R}_{\mu_{1}^{t}\nu_{3}^{t}}(Q_{6})\mathcal{R}_{\mu_{2}^{t}\nu_{3}^{t}}(Q_{6,10})\mathcal{R}_{\mu_{3}^{t}\nu_{3}^{t}}(Q_{6,9,10})\mathcal{R}_{\mu_{1}\nu_{4}^{t}}(Q_{8,9,10})\mathcal{R}_{\mu_{2}\nu_{4}^{t}}(Q_{8,9})}{\mathcal{R}_{\mu_{1}\mu_{3}^{t}}(Q_{9,10})\mathcal{R}_{\mu_{2}\mu_{3}^{t}}(Q_{9})\mathcal{R}_{\mu_{1}^{t}\nu_{3}}(Q_{5,5,6})\mathcal{R}_{\mu_{2}^{t}\nu_{3}}(Q_{5,5,6,10})\mathcal{R}_{\mu_{3}^{t}\nu_{3}}(Q_{5,5,6,9,10})\mathcal{R}_{\mu_{1}\nu_{4}}(Q_{7,7,8,9,10})}$
$\displaystyle~{}~{}\times\frac{\mathcal{R}_{\mu_{3}\nu_{4}^{t}}(Q_{8})\mathcal{R}_{\nu_{3}\nu_{3}}(Q_{5,5})\mathcal{R}_{\nu_{4}\nu_{4}}(Q_{7,7})\mathcal{R}_{\nu_{3}^{t}\nu_{4}}(Q_{6,7,7,8,9,10})\mathcal{R}_{\nu_{3}\nu_{4}^{t}}(Q_{5,5,6,8,9,10})}{\mathcal{R}_{\mu_{2}\nu_{4}}(Q_{7,7,8,9})\mathcal{R}_{\mu_{3}\nu_{4}}(Q_{7,7,8})\mathcal{R}_{\nu_{3}\nu_{4}}(Q_{5,5,6,7,7,8,9,10})\mathcal{R}_{\nu_{3}^{t}\nu_{4}^{t}}(Q_{6,8,9,10})}\
.$ (28)
As we are computing the partition function of a 6d theory on the
$\Omega$-background which is compactified on a circle with a twist, it is a
partition function on $\mathbb{R}^{4}\times T^{2}$. Hence, it can be compared
with the elliptic genus of 6d theories. To this end, we expand the partition
function (2.2) as
$\displaystyle
Z=Z_{\mathrm{pert}}\bigg{(}1+\sum_{n=1}^{\infty}u^{n}Z_{n}\bigg{)}\ ,$ (29)
where $Z_{\mathrm{pert}}$ is the perturbative part, $Z_{n}$ is the $n$-string
elliptic genus, and $u$ is the string fugacity, which is given by a product of
Kähler parameters.
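The split (29) amounts to a power-series division in $u$; a generic sketch in plain Python (toy coefficients, names ours):

```python
# Extracting the n-string pieces: given Z and Z_pert as truncated power series
# in the string fugacity u, eq. (29) gives Z/Z_pert = 1 + u Z_1 + u^2 Z_2 + ...
def series_div(num, den, order):
    """Long division of coefficient lists in u; requires den[0] != 0."""
    out = []
    num = list(num) + [0.0] * order
    for k in range(order + 1):
        c = num[k] / den[0]
        out.append(c)
        for j in range(1, min(order - k, len(den) - 1) + 1):
            num[k + j] -= c * den[j]
    return out

# toy check: Z = (1+u)(1+2u+3u^2) and Z_pert = 1+u gives Z_1 = 2, Z_2 = 3
Z = [1, 3, 5, 3]          # (1+u)(1+2u+3u^2) = 1 + 3u + 5u^2 + 3u^3
assert series_div(Z, [1, 1], 2) == [1.0, 2.0, 3.0]
```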
We proceed with the computation by expressing the Kähler parameters in terms
of physical parameters. First, we eliminate the dependence on $\phi_{0}$ of
the W-boson masses. For that, we perform the following shifts
$\displaystyle\phi_{2}\to\phi_{2}+2\phi_{0}+\frac{1}{2R}\
,\quad\phi_{3}\to\phi_{3}+2\phi_{0}+\frac{1}{2R}\
,\quad\phi_{4}\to\phi_{4}+\phi_{0}+\frac{1}{4R}\ ,$ (30)
which yield
$\displaystyle-\log\alpha_{1}=2\phi_{4}-\phi_{3}\
,\quad\qquad-\log\alpha_{2}=2\phi_{3}-2\phi_{4}-\phi_{2}\ ,$
$\displaystyle-\log\alpha_{3}=2\phi_{2}-\phi_{3}\
,\quad\qquad-\log\alpha_{4}=-\phi_{2}+\frac{\tau}{2}\ ,$ $\displaystyle-\log
Q_{B}=\phi_{2}+\phi_{3}+4\phi_{0}+\tau\ ,$ (31)
where $\alpha_{i}$ and $Q_{B}$ are the Kähler parameters assigned in Figure 8.
Here, we have identified $1/R$ with $\tau$, since it parametrizes the
KK-momentum. One can see the Cartan matrix of $\mathfrak{so}(7)$ from the
W-bosons in (2.2); $\mathfrak{so}(7)$ is the invariant subalgebra under the
order-two outer automorphism of the $\mathfrak{so}(8)$ algebra. The string
fugacity $u$ is
$Q_{B}\alpha_{1}^{-2}\alpha_{2}^{-2}\alpha_{3}^{-1}\alpha_{4}$, and in terms
of $u$,
$\displaystyle-\log Q_{B}=-\log u+\phi_{2}+\phi_{3}-\frac{\tau}{2}\ .$ (32)
As the perturbative part of the partition function $Z_{\mathrm{pert}}$ is of
zeroth order in $u$, we obtain $Z_{\mathrm{pert}}$ by setting
$\mu_{i}=\emptyset$ in (2.2). It is convenient to separate out the spinor
matter parts, as we will decouple them. To this end, we first sum over
$\nu_{i}$ and express the Kähler parameters in terms of the $\alpha_{i}$
associated with the W-bosons, e.g., $Q_{2}=\alpha_{4}Q_{1}^{-1}$, etc. Then we
see that $Z_{\mathrm{pert}}$ is written as a function of $g$, $\alpha_{i}$ and
$Q_{1}$, $Q_{3}$, $Q_{5}$, $Q_{7}$ that are the Kähler parameters associated
with the spinor matter. Next, we take the plethystic logarithm, defined in
Appendix B, and expand $Z_{\mathrm{pert}}$ in terms of $\alpha_{i}$ and
$Q_{i}$. The shift (2.2) then yields
$\displaystyle Z_{\mathrm{pert}}$
$\displaystyle=\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\quantity(\chi_{\Delta_{+}}^{\mathfrak{so}(7)}+(\chi_{\mathbf{7}}^{\mathfrak{so}(7)}-1)q^{1/2}+\cdots)]\
,$ (33)
where $\operatorname{PE}$ stands for the plethystic exponential,
$\displaystyle\operatorname{PE}[f(x)]=\exp(\sum_{n=1}^{\infty}\frac{1}{n}f(x^{n}))\
,$ (34)
and $q=e^{-\tau}$; the terms denoted by $\cdots$ depend on the spinor matter.
Here the characters for the positive roots and the fundamental representation
of the $\mathfrak{so}(7)$ algebra take the form
$\displaystyle\chi_{\Delta_{+}}^{\mathfrak{so}(7)}$
$\displaystyle=x_{2}+\frac{x_{2}^{2}}{x_{3}}+x_{3}+\frac{x_{3}}{x_{2}}+\frac{x_{2}x_{3}}{x_{4}^{2}}+\frac{x_{3}^{2}}{x_{2}x_{4}^{2}}+\frac{x_{4}^{2}}{x_{2}}+\frac{x_{4}^{2}}{x_{3}}+\frac{x_{2}x_{4}^{2}}{x_{3}}\
,$ (35) $\displaystyle\chi_{\mathbf{7}}^{\mathfrak{so}(7)}$
$\displaystyle=1+x_{2}+\frac{1}{x_{2}}+\frac{x_{2}}{x_{3}}+\frac{x_{3}}{x_{2}}+\frac{x_{3}}{x_{4}^{2}}+\frac{x_{4}^{2}}{x_{3}}\
,$ (36)
where $x_{i}=e^{-\phi_{i}}$. We note that the $\cdots$ terms in (33) denote
the terms that involve $Q_{1}$, $Q_{3}$, $Q_{5}$ and $Q_{7}$ which we will
decouple. In the 5d limit where the KK-momentum becomes very large, the states
with KK-momenta are truncated. Thus, only $q^{0}$ terms in $Z_{\mathrm{pert}}$
survive. It is then easy to see that $Z_{\mathrm{pert}}$ involves the
contribution from the positive roots of $\mathfrak{so}(7)$, which is expected
on the Coulomb branch where the masses of the W-bosons are all positive. This
shows that the corresponding 5d theory is an $SO(7)$ gauge theory, as
expected.
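The plethystic exponential (34) can be implemented on truncated one-variable power series; a minimal sketch in plain Python (names ours), checked on $\operatorname{PE}[x]=1/(1-x)$:

```python
# Plethystic exponential of a one-variable series f(x) with f(0) = 0,
# truncated at a fixed order: PE[f] = exp(sum_n f(x^n)/n), cf. eq. (34).
def pe(coeffs, order):
    """coeffs[k] = coefficient of x^k (coeffs[0] must be 0); returns PE[f] series."""
    # accumulate log PE[f] = sum_n f(x^n)/n as a truncated series
    logpe = [0.0] * (order + 1)
    for n in range(1, order + 1):
        for k in range(1, order // n + 1):
            if k < len(coeffs):
                logpe[n * k] += coeffs[k] / n
    # exponentiate the series via g' = L' g, i.e. m g_m = sum_k k L_k g_{m-k}
    out = [1.0] + [0.0] * order
    for m in range(1, order + 1):
        out[m] = sum(k * logpe[k] * out[m - k] for k in range(1, m + 1)) / m
    return out

series = pe([0.0, 1.0], 6)        # PE[x]
assert all(abs(c - 1.0) < 1e-9 for c in series)   # 1/(1-x) = 1 + x + x^2 + ...
```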
The partition function for 6d self-dual strings can be expanded in terms of
the string fugacity
$\displaystyle
Z_{\mathrm{string}}=\frac{Z}{Z_{\mathrm{pert}}}=1+uZ_{1}+u^{2}Z_{2}+\cdots\ ,$
(37)
where $Z_{n}$ is the $n$-string elliptic genus. Again, with the decoupling of
the spinor contributions in mind, when summing over the Young diagrams in
(2.2), we express the result as a function of $\alpha_{i}$ and $Q_{1}$, $Q_{3}$,
$Q_{5}$, $Q_{7}$. To decouple the spinors, one needs to go through a flop
transition, where the masses of the spinors are proportional to $-\log$ of
$Q_{1}^{-1}$, $Q_{3}^{-1}$, $Q_{5}^{-1}$, and $Q_{7}^{-1}$. We take each spinor
mass to infinity, or equivalently the corresponding Kähler parameter to zero,
while keeping $\alpha_{1},\alpha_{4}$ finite. After decoupling the spinors, we
thus obtain the desired $n$-string elliptic genus.
To organize the $n$-string elliptic genus, we expand $Z_{n}$ in terms of the
Kähler parameters associated with the W-bosons, given in (2.2). In particular,
using $\alpha_{4}=q^{1/2}(\alpha_{1}\alpha_{2}\alpha_{3})^{-1}$, we expand
$Z_{n}$ in terms of $q$ and $\alpha_{1}$. The one-string elliptic genus
$Z_{1}$ is expressed as
$\displaystyle Z_{1}$
$\displaystyle=\\!\Bigg{(}\\!-\frac{g\alpha_{2}^{3}\alpha_{3}^{2}\Big{(}\alpha_{3}(\alpha_{3}+\\!1)\alpha_{2}^{2}+\\!(\alpha_{3}^{2}-6\alpha_{3}+1)\alpha_{2}\\!+\\!\alpha_{3}\\!+\\!1\Big{)}}{(1-g)^{2}(1-\alpha_{2})^{2}(1-\alpha_{3})^{2}(1-\alpha_{2}\alpha_{3})}\alpha_{1}^{3}+O(\alpha_{1}^{4})\Bigg{)}q^{-1/2}$
$\displaystyle~{}~{}+\\!\Bigg{(}-\frac{4g\alpha_{2}^{2}\alpha_{3}^{2}\Big{(}(\alpha_{3}^{2}-\alpha_{3}+1)\alpha_{2}^{2}-(\alpha_{3}+1)\alpha_{2}+1\Big{)}}{(1-g)^{2}(1-\alpha_{2})^{2}(1-\alpha_{3})^{2}(1-\alpha_{2}\alpha_{3})^{2}}\alpha_{1}^{2}$
$\displaystyle\quad~{}~{}-\frac{4g\alpha_{2}^{3}\alpha_{3}^{2}\Big{(}\alpha_{3}(\alpha_{3}\\!+\\!1)\alpha_{2}^{2}+(\alpha_{3}^{2}\\!-\\!6\alpha_{3}\\!+\\!1)\alpha_{2}\\!+\\!\alpha_{3}\\!+\\!1\Big{)}}{(1-g)^{2}(1-\alpha_{2})^{2}(1-\alpha_{3})^{2}(1-\alpha_{2}\alpha_{3})^{2}}\alpha_{1}^{3}+\\!O(\alpha_{1}^{4})\\!\Bigg{)}+\\!O(q^{1/2}).$
(38)
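As a quick sanity check on the relation $\alpha_{4}=q^{1/2}(\alpha_{1}\alpha_{2}\alpha_{3})^{-1}$ used in this expansion, one can verify it numerically from the parametrization (31); a sketch in plain Python at a random test point (names ours):

```python
import math
import random

random.seed(0)
phi2, phi3, phi4, tau = (random.uniform(0.1, 1.0) for _ in range(4))

# Kahler parameters from eq. (31): -log alpha_i as listed there
a1 = math.exp(-(2 * phi4 - phi3))
a2 = math.exp(-(2 * phi3 - 2 * phi4 - phi2))
a3 = math.exp(-(2 * phi2 - phi3))
a4 = math.exp(-(-phi2 + tau / 2))
q = math.exp(-tau)

# alpha_1 alpha_2 alpha_3 = e^{-phi_2}, hence alpha_4 = q^{1/2} (a1 a2 a3)^{-1}
assert abs(a4 - math.sqrt(q) / (a1 * a2 * a3)) < 1e-12
```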
The two-string elliptic genus $Z_{2}$ is rather complicated, so we write only
the leading terms in $\alpha_{1}$:
$\displaystyle Z_{2}$
$\displaystyle=\Bigg{(}\Bigg{[}\frac{g^{5}\alpha_{2}^{6}\alpha_{3}^{4}}{(1\\!-\\!g)^{4}(1\\!+\\!g)^{2}(1\\!-\\!\alpha_{2})^{2}(1\\!-\\!\alpha_{3})^{2}}\quantity(\frac{1}{(1\\!-\\!g\alpha_{2})^{2}(g-\alpha_{3})^{2}}+\frac{1}{(g\\!-\\!\alpha_{2})^{2}(1-g\alpha_{3})^{2}})$
$\displaystyle+\\!\frac{g^{4}\alpha_{2}^{6}\alpha_{3}^{5}}{(1-g)^{4}(1-\alpha_{2})^{2}(1-\alpha_{2}\alpha_{3})^{2}(g-\alpha_{3})^{2}(1-\alpha_{3}g)^{2}}$
$\displaystyle+\\!\frac{g^{5}\alpha_{2}^{6}\alpha_{3}^{6}}{(1\\!-\\!g)^{4}(1\\!+\\!g)^{2}(1\\!-\\!\alpha_{3})^{2}(1\\!-\\!\alpha_{2}\alpha_{3})^{2}}\quantity(\frac{1}{(g\\!-\\!\alpha_{3})^{2}(g\\!-\\!\alpha_{2}\alpha_{3})^{2}}\\!+\\!\frac{1}{(1\\!-\\!\alpha_{3}g)^{2}(1\\!-\\!\alpha_{2}\alpha_{3}g)^{2}}\\!)$
$\displaystyle+\\!\frac{g^{4}\alpha_{2}^{7}\alpha_{3}^{4}}{(1-g)^{4}(g-\alpha_{2})^{2}(1-\alpha_{2}g)^{2}(1-\alpha_{3})^{2}(1-\alpha_{2}\alpha_{3})^{2}}$
$\displaystyle+\\!\frac{g^{4}\alpha_{2}^{7}\alpha_{3}^{5}}{(1-g)^{4}(1-\alpha_{2})^{2}(1-\alpha_{3})^{2}(g-\alpha_{2}\alpha_{3})^{2}(1-\alpha_{2}\alpha_{3}g)^{2}}$
$\displaystyle+\\!\frac{g^{5}\alpha_{2}^{8}\alpha_{3}^{4}}{(1\\!-\\!g)^{4}(1\\!+\\!g)^{2}(1\\!-\\!\alpha_{2})^{2}(1\\!-\\!\alpha_{2}\alpha_{3})^{2}}\\!\quantity(\\!\frac{1}{(g\\!-\\!\alpha_{2})^{2}(g\\!-\\!\alpha_{2}\alpha_{3})^{2}}\\!+\\!\frac{1}{(1\\!-\\!g\alpha_{2})^{2}(1\\!-\\!g\alpha_{2}\alpha_{3})^{2}}\\!)\\!\Bigg{]}\\!\alpha_{1}^{6}$
(39) $\displaystyle+O(\alpha_{1}^{7})\Bigg{)}q^{-1}+O(q^{-1/2})\ .$ (40)
In the following subsections, we will compare the result obtained above with
the elliptic genus computations in two different ways: First, as explained
through 5-brane webs, we perform the Higgsing $SO(10)+2\mathbf{F}\to
SO(9)+1\mathbf{F}\to SO(8)$ with the $\mathbb{Z}_{2}$ twist. Secondly, we apply
the $\mathbb{Z}_{2}$ twist directly to the elliptic genus of the 6d $SO(8)$
theory.
### 2.3 Partition function from the Higgsing sequence
We now compute the partition function of the $SO(8)$ theory with
$\mathbb{Z}_{2}$ twist from direct Higgsing of the partition function for the
6d $SO(10)$ gauge theory with two fundamental hypermultiplets. The
perturbative part is given by
$\displaystyle
Z_{\mathrm{pert}}^{\mathrm{gauge}}=\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\quantity(\chi_{\Delta_{+}}^{\mathfrak{so}(10)}+q\chi_{\Delta_{-}}^{\mathfrak{so}(10)})\frac{1}{1-q}]\
,$ (41)
where $\chi_{\Delta_{+}}^{\mathfrak{so}(10)}$ and
$\chi_{\Delta_{-}}^{\mathfrak{so}(10)}$ are the positive and negative root
parts of the character for the adjoint representation of $\mathfrak{so}(10)$,
respectively. More explicitly, for instance,
$\chi_{\Delta_{+}}^{\mathfrak{so}(10)}=\sum_{\alpha}e^{-\alpha}$ for the
positive roots $\alpha$, which can also be expressed in terms of the
$\phi_{i}$ in Figure 2(a).
To realize the Higgsing $SO(10)+2\mathbf{F}\to SO(9)+1\mathbf{F}$, let us
consider the
contribution of the first hypermultiplet of mass $m_{1}$ to the perturbative
part, which is given by
$\displaystyle
Z_{\mathrm{pert}}^{\mathrm{hyper1}}=\operatorname{PE}\quantity[-\frac{g}{(1-g)^{2}}\quantity(M_{1}\chi_{\mathbf{10}}^{\mathfrak{so}(10)}+\frac{q}{M_{1}}\chi_{\mathbf{10}}^{\mathfrak{so}(10)})\frac{1}{1-q}]\
,$ (42)
where $\chi_{\mathbf{10}}^{\mathfrak{so}(10)}$ is the character for the
fundamental representation $\mathbf{10}$ of $\mathfrak{so}(10)$, and
$M_{1}=e^{-m_{1}}$ is the fugacity for the mass of the fundamental
hypermultiplet $m_{1}$. To Higgs the first hypermultiplet, we recover the
affine node $\phi_{0}$ using Figure 2(a) and impose (3). If one changes the
variables as in Figure 3(a) and eliminates the affine node $\phi_{0}$, one
finds that the perturbative part becomes
$\displaystyle
Z_{\mathrm{pert}}^{\mathrm{gauge}}Z_{\mathrm{pert}}^{\mathrm{hyper1}}\to\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\quantity(\chi_{\Delta_{+}}^{\mathfrak{so}(9)}+q\chi_{\Delta_{-}}^{\mathfrak{so}(9)})\frac{1}{1-q}]\
,$ (43)
up to the Cartan part, where $\chi_{\Delta_{\pm}}^{\mathfrak{so}(9)}$ are the
positive and negative parts of the characters for the adjoint representation
of $\mathfrak{so}(9)$, respectively. This reflects that the theory becomes the
$SO(9)$ gauge theory as in Figure 3(a).
Higgsing the second hypermultiplet of mass $m_{2}$ needs more caution because
of the $1/R$ factor in Figure 3(b). From the string tensions associated with
the hypermultiplet in Figure 3(b), one can read off that the contribution of
the second hypermultiplet after Higgsing the first hypermultiplet becomes
$\displaystyle Z_{\mathrm{pert}}^{\mathrm{hyper2}}\to$
$\displaystyle\operatorname{PE}\bigg{[}-\frac{g}{(1-g)^{2}}\bigg{(}M_{2}\quantity(\frac{x_{1}}{q}+\frac{x_{1}}{x_{2}}+\frac{x_{2}}{x_{3}}+\frac{x_{3}}{x_{4}^{2}}+1+\frac{x_{4}^{2}}{x_{3}}+\frac{x_{3}}{x_{2}}+\frac{x_{2}}{x_{1}}+\frac{q}{x_{1}})$
$\displaystyle+\frac{q}{M_{2}}\quantity(\frac{x_{1}}{q}+\frac{x_{1}}{x_{2}}+\frac{x_{2}}{x_{3}}+\frac{x_{3}}{x_{4}^{2}}+1+\frac{x_{4}^{2}}{x_{3}}+\frac{x_{3}}{x_{2}}+\frac{x_{2}}{x_{1}}+\frac{q}{x_{1}})\bigg{)}\frac{1}{1-q}\bigg{]}\
,$ (44)
where $x_{i}=e^{-\phi_{i}}$ for $\phi_{i}$ in Figure 3(b). To Higgs the second
hypermultiplet, we reintroduce the affine node $\phi_{0}$ and impose (9).
Lastly, shifting the variables as in (30) gives the perturbative part
$\displaystyle
Z_{\mathrm{pert}}^{\mathrm{gauge}}Z_{\mathrm{pert}}^{\mathrm{hyper1}}Z_{\mathrm{pert}}^{\mathrm{hyper2}}\to\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\quantity((\chi_{\Delta_{+}}^{\mathfrak{so}(7)}-2)+(\chi_{\mathbf{7}}^{\mathfrak{so}(7)}+1)q^{1/2}+\cdots)\\!],$
(45)
which is the same result as (33) up to the Cartan part.
Next, consider the instanton part. The 6d $SO(8+2p)$ gauge theories on a $-4$
curve have $Sp(p)\times Sp(p)$ flavor symmetry. In the 2d worldsheet theory on
a self-dual string introduced in Haghighat:2014vxa , both the $SO(8+2p)$ gauge
symmetry and the $Sp(p)$ flavor symmetries become flavor symmetries. The gauge
symmetry of the 2d theory is $Sp(n)=USp(2n)$, where $n$ is the string number.
The field content of this theory can be written in terms of $\mathcal{N}=(0,2)$
multiplets. The charged fields are the vector multiplet $V$, three Fermi
multiplets $\Lambda^{\Phi},\Lambda^{Q},\Lambda^{\tilde{Q}}$, and four chiral
multiplets $B,\tilde{B},Q,\tilde{Q}$. Their charges under the symmetries on the
worldsheet are summarized in Table 1.
| $Sp(n)$ | $SO(8+2p)$ | $Sp(p)_{1}$ | $Sp(p)_{2}$ | $U(1)_{\epsilon_{1}}$ | $U(1)_{\epsilon_{2}}$
---|---|---|---|---|---|---
$V$ | $\mathbf{adj}$ | $\mathbf{1}$ | $\mathbf{1}$ | $\mathbf{1}$ | $0$ | $0$
$\Lambda^{\Phi}$ | $\mathbf{adj}$ | $\mathbf{1}$ | $\mathbf{1}$ | $\mathbf{1}$ | $-1$ | $-1$
$B$ | $\mathbf{antisymm}$ | $\mathbf{1}$ | $\mathbf{1}$ | $\mathbf{1}$ | $1$ | $0$
$\tilde{B}$ | $\mathbf{antisymm}$ | $\mathbf{1}$ | $\mathbf{1}$ | $\mathbf{1}$ | $0$ | $1$
$Q$ | $\mathbf{n}$ | $\mathbf{8+2p}$ | $\mathbf{1}$ | $\mathbf{1}$ | $1/2$ | $1/2$
$\tilde{Q}$ | $\bar{\mathbf{n}}$ | $\overline{\mathbf{8+2p}}$ | $\mathbf{1}$ | $\mathbf{1}$ | $1/2$ | $1/2$
$\Lambda^{Q}$ | $\mathbf{n}$ | $\mathbf{1}$ | $\mathbf{p}$ | $\mathbf{1}$ | $0$ | $0$
$\Lambda^{\tilde{Q}}$ | $\bar{\mathbf{n}}$ | $\mathbf{1}$ | $\mathbf{1}$ | $\bar{\mathbf{p}}$ | $0$ | $0$
Table 1: The $\mathcal{N}=(0,2)$ multiplets and symmetries on worldsheet in 6d
$SO(8+2p)$ gauge theory.
The $U(1)_{\epsilon_{1}}$ and $U(1)_{\epsilon_{2}}$ are the Cartans of the
$SO(4)$ rotating the $\mathbb{R}^{4}$ transverse to the worldsheet.
Using the field content in Table 1, one can calculate the elliptic genus of
strings in the 6d $SO(10)$ theory using the localization technique of
Benini:2013xpa . The $n$-string elliptic genus is expressed as
$\displaystyle Z_{n}=\frac{1}{(2\pi
i)^{n}}\frac{1}{\absolutevalue{\mathrm{Weyl}\quantity[G]}}\oint
Z_{\mathrm{1-loop}}\ ,$ (46)
where $G$ is the gauge group on the 2d worldsheet. The contour integral is
performed using the Jeffrey-Kirwan (JK) residue prescription in
1993alg.geom..7001J .
The one-loop determinant $Z_{\mathrm{1-loop}}$ is the product of the
contributions from each multiplet. The $\mathcal{N}=(0,2)$ vector multiplet
contribution in $Z_{\mathrm{1-loop}}$ is given as a product over all the roots
$\alpha$ of $G$ of rank $r$,
$\displaystyle
Z_{\mathrm{vec}}=\quantity(\frac{2\pi\eta^{2}}{i})^{r}\prod_{\alpha}\frac{i\theta_{1}(\alpha)}{\eta}\prod_{j=1}^{r}du_{j}\
,$ (47)
where $\eta$ is the Dedekind eta function and $\theta_{1}(x)$ is the Jacobi
theta function. See Appendix B for the definition and some useful properties of the
theta function. The $\mathcal{N}=(0,2)$ chiral and Fermi multiplet
contributions are given as
$\displaystyle
Z_{\mathrm{chiral}}=\prod_{\rho}\frac{i\eta}{\theta_{1}(\rho)},\qquad
Z_{\mathrm{Fermi}}=\prod_{\rho}\frac{i\theta_{1}(\rho)}{\eta}\ ,$ (48)
where $\rho$ is the weight vector of the representation of each multiplet in
gauge and flavor symmetries.
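The building blocks (47) and (48) are ratios of $\theta_{1}$ and $\eta$; a small numerical sketch of these functions via truncated $q$-products (standard product-form conventions assumed; names ours), checking the oddness of $\theta_{1}$ and the residue normalization $\theta_{1}'(0)=2\pi\eta^{3}$:

```python
import cmath
import math

def theta1(z, tau, nmax=40):
    """Jacobi theta_1 via its (truncated) product form, additive convention."""
    q = cmath.exp(2j * math.pi * tau)
    val = 2 * q ** 0.125 * cmath.sin(math.pi * z)
    for n in range(1, nmax):
        val *= (1 - q ** n) * (1 - q ** n * cmath.exp(2j * math.pi * z)) \
               * (1 - q ** n * cmath.exp(-2j * math.pi * z))
    return val

def eta(tau, nmax=40):
    """Dedekind eta function via its (truncated) product form."""
    q = cmath.exp(2j * math.pi * tau)
    val = q ** (1 / 24)
    for n in range(1, nmax):
        val *= 1 - q ** n
    return val

tau = 1.3j          # pure imaginary modulus for a quick numerical check
z = 0.17
# theta_1 is odd, theta_1(-x) = -theta_1(x):
assert abs(theta1(-z, tau) + theta1(z, tau)) < 1e-10
# theta_1'(0) = 2*pi*eta^3 (central finite difference):
h = 1e-5
d = (theta1(h, tau) - theta1(-h, tau)) / (2 * h)
assert abs(d - 2 * math.pi * eta(tau) ** 3) < 1e-6
```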
In the case of the $SO(10)$ theory, the 2d gauge group is $Sp(n)$. Note that
in the orthogonal basis $\{e_{i}\}_{i=1}^{n}$ of the $\mathfrak{sp}(n)$
algebra, the root vectors are given by $\pm 2e_{i}$ and $\pm e_{i}\pm e_{j}$
($i\neq j$), while the weight vectors of the fundamental and the antisymmetric
representations are $\pm e_{i}$ and $\pm e_{i}\pm e_{j}$ ($i\neq j$),
respectively. Let the chemical potentials of the $\mathfrak{sp}(n)$ gauge
algebra be $u_{i}$, and those of the $\mathfrak{so}(10)$,
$\mathfrak{sp}(1)_{1}$, and $\mathfrak{sp}(1)_{2}$ flavor symmetry algebras be
$v_{i}$, $m_{1}$, and $m_{2}$, respectively. Then the $n$-string elliptic genus
from (46) is
$\displaystyle Z_{n}^{SO(10)}$ $\displaystyle=\frac{1}{(2\pi
i)^{n}}\frac{1}{\absolutevalue{\mathrm{Weyl}[Sp(n)]}}\oint\prod_{I=1}^{n}du_{I}\cdot\quantity(\prod_{I=1}^{n}\frac{i^{2}\theta_{1}(\pm
2u_{I})}{\eta^{2}})\quantity(\prod_{I<J}^{n}\frac{i^{4}\theta_{1}(\pm u_{I}\pm
u_{J})}{\eta^{4}})$
$\displaystyle\quad\times\quantity(\frac{i\theta_{1}(-2\epsilon_{+})}{\eta})^{n}\quantity(\prod_{I=1}^{n}\frac{i^{2}\theta_{1}(-2\epsilon_{+}\pm
2u_{I})}{\eta^{2}})\quantity(\prod_{I<J}^{n}\frac{i^{4}\theta_{1}(-2\epsilon_{+}\pm
u_{I}\pm u_{J})}{\eta^{4}})$
$\displaystyle\quad\times\quantity(\frac{i^{2}\eta^{2}}{\theta_{1}(\epsilon_{1,2})})^{n}\quantity(\prod_{I<J}^{n}\frac{i^{8}\eta^{8}}{\theta_{1}(\epsilon_{1,2}\pm
u_{I}\pm
u_{J})})\quantity(\prod_{I=1}^{n}\prod_{J=1}^{5}\frac{i^{4}\eta^{4}}{\theta_{1}(\epsilon_{+}\pm
u_{I}\pm v_{J})})$
$\displaystyle\quad\times\quantity(\prod_{I=1}^{n}\frac{i^{4}\theta_{1}(\pm
u_{I}+m_{1})\theta_{1}(\pm u_{I}+m_{2})}{\eta^{4}})\ ,$ (49)
where $\epsilon_{\pm}=\frac{\epsilon_{1}\pm\epsilon_{2}}{2}$ and we use
shorthand notation $\theta_{1}(\pm
2u_{i})=\theta_{1}(2u_{i})\theta_{1}(-2u_{i})$,
$\theta_{1}(\epsilon_{1,2})=\theta_{1}(\epsilon_{1})\theta_{1}(\epsilon_{2})$,
etc. For example, $\theta_{1}(\epsilon_{1,2}\pm u_{I}\pm
u_{J})=\displaystyle\prod_{n=1}^{2}\theta_{1}(\epsilon_{n}+u_{I}+u_{J})\theta_{1}(\epsilon_{n}+u_{I}-u_{J})\theta_{1}(\epsilon_{n}-u_{I}+u_{J})\theta_{1}(\epsilon_{n}-u_{I}-u_{J})$.
To evaluate the integral (49), one needs to find the residues of the integrand.
However, not all poles contribute to the elliptic genus. The contributing
residues are called Jeffrey-Kirwan (JK) residues, determined as follows.
Suppose the singular hyperplanes $H_{n}$ are
$\sum_{I=1}^{r}a_{I}^{(n)}u_{I}+\cdots=0$. Fix an auxiliary vector $\zeta$ in
the space which contains the vectors $a^{(n)}$. If $\zeta$ can be written as a
linear combination of $a^{(n_{1})},\cdots,a^{(n_{l})}$ with positive
coefficients, then the singularities $H_{n_{1}},\cdots,H_{n_{l}}$ contribute to
the integral.
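The positive-cone test above is easy to sketch numerically. The following
snippet is an illustrative aid only (the function name and the sample charge
vectors are ours, not from the text): it checks whether $\zeta$ can be written
as a positive linear combination of a candidate set of hyperplane charge
vectors $a^{(n_{k})}$.

```python
import numpy as np

def contributes(charges, zeta):
    """Return True if zeta = sum_k c_k * charges[k] with all c_k > 0."""
    A = np.array(charges, dtype=float).T          # columns are the a^(n_k)
    c, residual, *_ = np.linalg.lstsq(A, np.array(zeta, dtype=float), rcond=None)
    # accept only exact solutions with strictly positive coefficients
    if not np.allclose(A @ c, np.array(zeta, dtype=float)):
        return False
    return bool(np.all(c > 1e-12))

# rank-2 toy example mimicking the n=2 case: (u1-coefficient, u2-coefficient)
print(contributes([(-1, 0), (-1, -1)], (-1, -0.3)))   # zeta inside the cone

# a pair whose positive cone misses zeta does not contribute
print(contributes([(1, 0), (0, 1)], (-1, -0.3)))
```

In practice one runs this test over all subsets of $r$ hyperplanes meeting at a
given pole and keeps only the subsets passing it.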
We explicitly calculate the one-string and two-string elliptic genera to
compare with the topological vertex calculation. For $n=1$, the singular
hyperplanes are
$\displaystyle\pm u_{1}\pm v_{J}+\epsilon_{+}=0\ .$ (50)
If one takes an auxiliary vector $\zeta=(1)$, only the singular hyperplanes
$+u_{1}\pm v_{J}+\epsilon_{+}=0$ contribute to the elliptic genus. Using
$\displaystyle\frac{1}{2\pi
i}\oint_{u_{1}=0}\frac{du_{1}}{\theta_{1}(u_{1})}=\frac{1}{2\pi\eta^{3}}\
,\qquad\theta_{1}(-x)=-\theta_{1}(x)\ ,$ (51)
the one-string elliptic genus is
$\displaystyle Z_{1}^{SO(10)}$
$\displaystyle=-\frac{\eta^{12}}{2}\sum_{I=1}^{5}\Bigg{(}\frac{\theta_{1}(2\epsilon_{+}+2v_{I})\theta_{1}(4\epsilon_{+}+2v_{I})\theta_{1}(\epsilon_{+}+v_{I}\pm
m_{1})\theta_{1}(\epsilon_{+}+v_{I}\pm
m_{2})}{\theta_{1}(\epsilon_{1,2})\prod_{J\neq I}^{5}\theta_{1}(v_{I}\pm
v_{J})\theta_{1}(2\epsilon_{+}+v_{I}\pm v_{J})}$
$\displaystyle\qquad\qquad\qquad\qquad+(v_{I}\to-v_{I})\Bigg{)}.$ (52)
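The $\theta_{1}$ identities in (51) are easy to verify numerically. A minimal
sketch (the helper names `theta1` and `eta` are ours; the infinite products are
truncated) uses the standard triple-product form of $\theta_{1}$ and the
Dedekind $\eta$ function:

```python
import cmath
from math import pi

def theta1(z, tau, nmax=200):
    """Jacobi theta_1(z|tau) via its triple-product form, q = exp(2*pi*i*tau)."""
    q = cmath.exp(2j * pi * tau)
    out = 2 * q**0.125 * cmath.sin(pi * z)
    for n in range(1, nmax + 1):
        out *= (1 - q**n) * (1 - q**n * cmath.exp(2j*pi*z)) * (1 - q**n * cmath.exp(-2j*pi*z))
    return out

def eta(tau, nmax=200):
    """Dedekind eta function, eta = q^{1/24} prod (1 - q^n)."""
    q = cmath.exp(2j * pi * tau)
    out = q**(1/24)
    for n in range(1, nmax + 1):
        out *= 1 - q**n
    return out

tau = 0.3j           # sample modulus with Im(tau) > 0
z = 0.17 + 0.05j     # generic argument

# theta_1 is odd: theta_1(-z) = -theta_1(z)
print(abs(theta1(-z, tau) + theta1(z, tau)) < 1e-10)

# residue identity: theta_1'(0) = 2*pi*eta(tau)^3, so the residue of
# 1/theta_1 at u = 0 is 1/(2*pi*eta^3); check via a central difference
h = 1e-5
d = (theta1(h, tau) - theta1(-h, tau)) / (2 * h)
print(abs(d - 2 * pi * eta(tau)**3) < 1e-6)
```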
For $n=2$, the singular hyperplanes are
$\displaystyle\pm u_{1}\pm u_{2}+\epsilon_{1,2}=0,\qquad\pm u_{I}\pm
v_{J}+\epsilon_{+}=0\ ,$ (53)
for $I=1,2$ and $J=1,\cdots,5$. We choose an auxiliary vector $\zeta$ as in
Figure 10, so that the contributing poles are
$\displaystyle(i)\left\\{\begin{array}[]{l}-u_{1}\pm v_{I}+\epsilon_{+}=0\\\
-u_{1}-u_{2}+\epsilon_{1,2}=0\end{array}\right.$
$\displaystyle(ii)\left\\{\begin{array}[]{l}-u_{1}\pm v_{I}+\epsilon_{+}=0\\\
u_{1}-u_{2}+\epsilon_{1,2}=0\end{array}\right.$ (58)
$\displaystyle(iii)\left\\{\begin{array}[]{l}-u_{1}\pm v_{I}+\epsilon_{+}=0\\\
-u_{2}\pm v_{J}+\epsilon_{+}=0\\\ \end{array}\right.$
$\displaystyle(iv)\left\\{\begin{array}[]{l}-u_{1}+u_{2}+\epsilon_{1,2}=0\\\
-u_{1}-u_{2}+\epsilon_{1,2}=0\end{array}\right.$ (63)
$\displaystyle(v)\left\\{\begin{array}[]{l}-u_{1}+u_{2}+\epsilon_{1,2}=0\\\
-u_{2}\pm v_{I}+\epsilon_{+}=0\\\ \end{array}\right.$
$\displaystyle(vi)\left\\{\begin{array}[]{l}u_{2}\pm v_{I}+\epsilon_{+}=0\\\
-u_{1}-u_{2}+\epsilon_{1,2}=0\end{array}\right.\ .$ (68)
Figure 10: A choice of an auxiliary vector $\zeta$ for JK-residue calculation
of two-string elliptic genus.
The residue at the poles (58)(i) is
$\displaystyle\mathrm{Res}_{1}$
$\displaystyle=\frac{\eta^{24}}{8}\sum_{I=1}^{5}\bigg{[}\frac{\theta_{1}(3\epsilon_{+}-\epsilon_{-})\theta_{1}(2\epsilon_{-}-2v_{I})\theta_{1}(2\epsilon_{1}-2v_{I})\theta_{1}(4\epsilon_{+}+2v_{I})\theta_{1}(\epsilon_{2}+2v_{I})}{\theta_{1}(\epsilon_{1})\theta_{1}(\epsilon_{2})^{2}\theta_{1}(2\epsilon_{1})\theta_{1}(2\epsilon_{-})\theta_{1}(2v_{I})}$
$\displaystyle\qquad\quad\times\\!\\!\frac{\theta_{1}(3\epsilon_{+}-\epsilon_{-}+2v_{I})\theta_{1}(\epsilon_{+}+v_{I}\pm
m_{1,2})\theta_{1}(\epsilon_{-}-v_{I}\pm m_{1,2})}{\prod_{J\neq
I}^{5}\theta_{1}(v_{I}\pm v_{J})\theta_{1}(\epsilon_{1}-v_{I}\pm
v_{J})\theta_{1}(\epsilon_{2}+v_{I}\pm v_{J})\theta_{1}(2\epsilon_{+}+v_{I}\pm
v_{J})}$ $\displaystyle\qquad\qquad+(v_{I}\to-
v_{I})\bigg{]}+(\epsilon_{1}\leftrightarrow\epsilon_{2},\>\epsilon_{-}\to-\epsilon_{-})\
.$ (69)
The poles (58)(ii) give
$\displaystyle\mathrm{Res}_{2}$
$\displaystyle=-\frac{\eta^{24}}{8}\sum_{I=1}^{5}\bigg{[}\frac{\theta_{1}(3\epsilon_{+}+\epsilon_{-}+2v_{I})\theta_{1}(4\epsilon_{+}+2\epsilon_{-}+2v_{I})\theta_{1}(5\epsilon_{+}+\epsilon_{-}+2v_{I})}{\theta_{1}(\epsilon_{1,2})\theta_{1}(2\epsilon_{1})\theta_{1}(2\epsilon_{-})}$
$\displaystyle\qquad\quad\times\\!\\!\frac{\theta_{1}(6\epsilon_{+}\\!+2\epsilon_{-}+2v_{I})\theta_{1}(\epsilon_{+}\\!+v_{I}\pm
m_{1,2})\theta_{1}(2\epsilon_{+}\\!+\epsilon_{-}\\!+v_{I}\pm
m_{1,2})}{\prod_{J\neq I}^{5}\theta_{1}(v_{I}\pm
v_{J})\theta_{1}(\epsilon_{1}\\!+v_{I}\pm
v_{J})\theta_{1}(2\epsilon_{+}\\!+v_{I}\pm
v_{J})\theta_{1}(3\epsilon_{+}\\!+\epsilon_{-}\\!+v_{I}\pm v_{J})}$
$\displaystyle\qquad\qquad+(v_{I}\to-
v_{I})\bigg{]}+(\epsilon_{1}\leftrightarrow\epsilon_{2},\>\epsilon_{-}\to-\epsilon_{-})\
,$ (70)
and the poles (58)(iii) contribute
$\displaystyle\mathrm{Res}_{3}$ $\displaystyle=\frac{\eta^{24}}{8}\sum_{I\neq
J}^{5}\bigg{[}\frac{\theta_{1}(2\epsilon_{+}+2v_{I,J})\theta_{1}(4\epsilon_{+}+2v_{I,J})\theta_{1}(4\epsilon_{+}+v_{I}+v_{J})}{\theta_{1}(\epsilon_{1,2})^{2}\theta_{1}(v_{I}+v_{J})\theta_{1}(\epsilon_{1,2}+v_{I}+v_{J})\theta_{1}(\epsilon_{1,2}\pm(v_{I}-v_{J}))}$
$\displaystyle\qquad\quad\times\\!\\!\frac{\theta_{1}(\epsilon_{+}+v_{I,J}\pm
m_{1,2})}{\theta_{1}(3\epsilon_{+}\pm\epsilon_{-}+v_{I}+v_{J})\prod_{K\neq
I,J}^{5}\theta_{1}(v_{I,J}\pm v_{K})\theta_{1}(2\epsilon_{+}+v_{I,J}\pm
v_{K})}$ $\displaystyle\quad\qquad+(v_{I}\to-v_{I})+(v_{J}\to-
v_{J})+(v_{I}\to-v_{I},\>v_{J}\to-v_{J})\bigg{]},$ (71)
where
$\theta_{1}(2\epsilon_{+}+2v_{I,J})=\theta_{1}(2\epsilon_{+}+2v_{I})\theta_{1}(2\epsilon_{+}+2v_{J})$.
The residue of the poles (58)(iv) is zero. The cases (58)(v) and (vi) are
related to the case (ii) by $u_{1}\leftrightarrow u_{2}$ and by $u_{1}\to
u_{2},\>u_{2}\to-u_{1}$, respectively. These transformations do not change the
residues. As a result, the two-string elliptic genus reads
$\displaystyle
Z_{2}^{SO(10)}=\mathrm{Res}_{1}+3\times\mathrm{Res}_{2}+\mathrm{Res}_{3}\ .$
(72)
We now Higgs these $SO(10)$ elliptic genera. We only consider the unrefined
case, $\epsilon_{1}=-\epsilon_{2}$. The brane web construction gives hints for
the Higgsing procedure. In the elliptic genus calculated from the field theory
construction, there are $\mathfrak{so}(10)$ parameters $v_{i}$. They are
written in the orthogonal basis, so we first convert them to the fundamental
weight basis (Dynkin basis) and recover the affine node:
$\displaystyle v_{1}=\phi_{1}-\phi_{0},\qquad
v_{2}=-\phi_{0}-\phi_{1}+\phi_{2},$ $\displaystyle
v_{3}=\phi_{3}-\phi_{2},\qquad v_{4}=-\phi_{3}+\phi_{4}+\phi_{5},\qquad
v_{5}=\phi_{5}-\phi_{4}$ (73)
Then we can apply the Higgsing conditions (3) and (9). Lastly, we shift the
variables as in (30) to compare with the topological vertex calculation. The
result is
$\displaystyle v_{1}=0,\quad v_{2}=-2\phi_{4}+\phi_{3},\quad
v_{3}=\phi_{2}-\phi_{3},\quad v_{4}=-\phi_{2},\quad v_{5}=\frac{\tau}{2}$ (74)
Note that $v_{2}$, $v_{3}$, $v_{4}$ form an orthogonal basis of the
$\mathfrak{so}(7)$ algebra. Thus, after a redefinition of the variables, the
Higgsing sets the parameters as
$\displaystyle m_{1}=0\ ,\qquad m_{2}=\frac{\tau}{2}\ ,\qquad
v_{4}=\frac{\tau}{2}\ ,\qquad v_{5}=0\ .$ (75)
Substituting (75) into (52) and (72) and taking the unrefined limit
$\epsilon_{1}=-\epsilon_{2}$, we find that the one-string elliptic genus is
given by
$\displaystyle
Z_{1}=\\!\frac{\eta^{12}}{2\theta_{1}(\epsilon_{-})^{2}}\sum_{I=1}^{3}\\!\quantity(\frac{\theta_{1}(2v_{I})^{2}}{\theta_{1}(v_{I})^{2}\theta_{1}(v_{I}\\!-\\!\frac{\tau}{2})\theta_{1}(v_{I}\\!+\\!\frac{\tau}{2})\prod_{J\neq
I}^{3}\theta_{1}(v_{I}\pm v_{J})^{2}}+(v_{I}\\!\to\\!-v_{I})),$ (76)
and the two-string elliptic genus is given by
$\displaystyle
Z_{2}=\mathrm{Res}_{1}+3\times\mathrm{Res}_{2}+\mathrm{Res}_{3}\ ,$ (77)
where
$\displaystyle\mathrm{Res}_{1}$
$\displaystyle=\frac{\eta^{24}}{8}\sum_{I=1}^{3}\bigg{[}\frac{\theta_{1}(\epsilon_{-}-2v_{I})^{2}\theta_{1}(2\epsilon_{-}-2v_{I})^{2}}{\theta_{1}(\epsilon_{-})^{2}\theta_{1}(2\epsilon_{-})^{2}\theta_{1}(v_{I})^{2}\theta_{1}(\epsilon_{-}-v_{I})^{2}\theta_{1}(\pm
v_{I}+\frac{\tau}{2})\theta_{1}(\pm(\epsilon_{-}-v_{I})+\frac{\tau}{2})}$
$\displaystyle\qquad\qquad\times\prod_{J\neq
I}^{3}\frac{1}{\theta_{1}(\epsilon_{-}-v_{I}-v_{J})^{2}\theta_{1}(v_{I}\pm
v_{J})^{2}\theta_{1}(\epsilon_{-}-v_{I}+v_{J})^{2}}+(v_{I}\to-v_{I})\bigg{]}$
$\displaystyle\qquad+(\epsilon_{-}\to-\epsilon_{-})\ ,$ (78)
$\displaystyle\mathrm{Res}_{2}$
$\displaystyle=\frac{\eta^{24}}{8}\sum_{I=1}^{3}\bigg{[}\frac{\theta_{1}(\epsilon_{-}+2v_{I})^{2}\theta_{1}(2\epsilon_{-}+2v_{I})^{2}}{\theta_{1}(\epsilon_{-})^{2}\theta_{1}(2\epsilon_{-})^{2}\theta_{1}(v_{I})^{2}\theta_{1}(\epsilon_{-}+v_{I})^{2}\theta_{1}(\pm
v_{I}+\frac{\tau}{2})\theta_{1}(\pm(\epsilon_{-}+v_{I})+\frac{\tau}{2})}$
$\displaystyle\qquad\qquad\times\prod_{J\neq
I}^{3}\frac{1}{\theta_{1}(v_{I}\pm v_{J})^{2}\theta_{1}(\epsilon_{-}+v_{I}\pm
v_{J})^{2}}+(v_{I}\to-v_{I})\bigg{]}$
$\displaystyle\quad+(\epsilon_{-}\to-\epsilon_{-})\ ,$ (79)
$\displaystyle\mathrm{Res}_{3}$ $\displaystyle=\frac{\eta^{24}}{8}\sum_{I\neq
J}^{3}\bigg{[}\frac{\theta_{1}(2v_{I,J})^{2}}{\theta_{1}(\epsilon_{-})^{4}\theta_{1}(v_{I,J})^{2}\theta_{1}(\epsilon_{-}\pm
v_{I}\pm v_{J})^{2}\theta_{1}(\pm v_{I,J}+\frac{\tau}{2})\prod_{K\neq
I,J}^{3}\theta_{1}(v_{I,J}\pm v_{K})^{2}}$
$\displaystyle\qquad\qquad+(v_{I}\to-v_{I})+(v_{J}\to-v_{J})+(v_{I}\to-
v_{I},\>v_{J}\to-v_{J})\bigg{]}.$ (80)
Since $v_{i}$ are the $\mathfrak{so}(7)$ parameters, we change the variables
$\displaystyle v_{1}$ $\displaystyle=-\log\alpha_{1}\ ,\qquad
v_{2}=-\quantity(\log\alpha_{1}+\log\alpha_{2})\ ,$ (81) $\displaystyle v_{3}$
$\displaystyle=-\quantity(\log\alpha_{1}+\log\alpha_{2}+\log\alpha_{3})\ ,$
(82)
and expand $Z_{1}$ and $Z_{2}$ in terms of $q$ and $\alpha_{1}$ to compare
with the topological vertex calculation in the previous subsection. By an
explicit calculation, we see that the two results are in complete agreement.
### 2.4 Partition function from a direct twisting of the $SO(8)$ theory
There is yet another way of obtaining the partition function of the $SO(8)$
theory with $\mathbb{Z}_{2}$ twist from the field theory construction. The
perturbative part of the partition function of the 6d $SO(8)$ theory is given
by
$\displaystyle
Z_{\mathrm{pert}}^{SO(8)}=\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\quantity(\chi_{\Delta_{+}}^{\mathfrak{so}(8)}+q\chi_{\Delta_{-}}^{\mathfrak{so}(8)})\frac{1}{1-q}]\
,$ (83)
where $\chi_{\Delta_{\pm}}^{\mathfrak{so}(8)}$ are the positive and negative
parts of the character for the adjoint representation of $\mathfrak{so}(8)$,
$\displaystyle\chi_{\Delta_{+}}^{\mathfrak{so}(8)}=\sum_{i<j}^{4}\quantity(x_{i}x_{j}+\frac{x_{i}}{x_{j}}),\qquad\chi_{\Delta_{-}}^{\mathfrak{so}(8)}=\sum_{i<j}^{4}\quantity(\frac{1}{x_{i}x_{j}}+\frac{x_{j}}{x_{i}})\
,$ (84)
for the fugacities $x_{i}$ of the orthonormal basis of $\mathfrak{so}(8)$.
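A quick consistency check of (84) can be done numerically. The following
sketch (the function names are ours) verifies that each part counts the 12
positive or negative roots of $\mathfrak{so}(8)$, and that the two parts are
exchanged by $x_{i}\to 1/x_{i}$:

```python
from itertools import combinations
import random

def chi_plus(x):
    """Positive-root part of the so(8) adjoint character, eq. (84)."""
    return sum(x[i]*x[j] + x[i]/x[j] for i, j in combinations(range(4), 2))

def chi_minus(x):
    """Negative-root part of the so(8) adjoint character, eq. (84)."""
    return sum(1/(x[i]*x[j]) + x[j]/x[i] for i, j in combinations(range(4), 2))

# at x_i = 1 each part counts its 12 roots; adding the 4 Cartan generators
# reproduces dim so(8) = 24 + 4 = 28
ones = [1.0] * 4
print(chi_plus(ones) + chi_minus(ones) + 4)

# the two parts are exchanged by x_i -> 1/x_i, as expected for
# positive vs. negative roots
x = [random.uniform(0.5, 2.0) for _ in range(4)]
print(abs(chi_minus(x) - chi_plus([1/v for v in x])) < 1e-9)
```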
The elliptic genus of the 6d pure $SO(8)$ theory can be obtained from the 2d
worldsheet theory with the matter content in Table 1, with $p=0$. As there are
no Fermi multiplets $\Lambda^{Q}$ and $\Lambda^{\tilde{Q}}$ for $p=0$, the
$n$-string elliptic genus is given by
$\displaystyle Z_{n}^{SO(8)}$ $\displaystyle=\frac{1}{(2\pi
i)^{n}}\frac{1}{\absolutevalue{\mathrm{Weyl}[Sp(n)]}}\oint\prod_{I=1}^{n}du_{I}\cdot\quantity(\prod_{I=1}^{n}\frac{i^{2}\theta_{1}(\pm
2u_{I})}{\eta^{2}})\quantity(\prod_{I<J}^{n}\frac{i^{4}\theta_{1}(\pm u_{I}\pm
u_{J})}{\eta^{4}})$
$\displaystyle\quad\times\quantity(\frac{i\theta_{1}(-2\epsilon_{+})}{\eta})^{n}\quantity(\prod_{I=1}^{n}\frac{i^{2}\theta_{1}(-2\epsilon_{+}\pm
2u_{I})}{\eta^{2}})\quantity(\prod_{I<J}^{n}\frac{i^{4}\theta_{1}(-2\epsilon_{+}\pm
u_{I}\pm u_{J})}{\eta^{4}})$
$\displaystyle\quad\times\quantity(\frac{i^{2}\eta^{2}}{\theta_{1}(\epsilon_{1,2})})^{n}\quantity(\prod_{I<J}^{n}\frac{i^{8}\eta^{8}}{\theta_{1}(\epsilon_{1,2}\pm
u_{I}\pm
u_{J})})\quantity(\prod_{I=1}^{n}\prod_{J=1}^{4}\frac{i^{4}\eta^{4}}{\theta_{1}(\epsilon_{+}\pm
u_{I}\pm v_{J})}).$ (85)
Here $v_{J}$ are the chemical potentials for the $SO(8)$ symmetry, and they
can be regarded as being in the orthogonal basis of $\mathfrak{so}(8)$.
Let the eight states in the fundamental representation be $\ket{\pm e_{i}}$
where $i=1,2,3,4$. They satisfy $H^{j}\ket{\pm e_{i}}=\pm\delta_{ij}\ket{\pm
e_{i}}$ for the orthonormal basis $H^{j}$ of the Cartan subalgebra of
$\mathfrak{so}(8)$. The order two outer automorphism $\sigma$ of the
$\mathfrak{so}(8)$ algebra exchanges the simple roots $\alpha_{3}$ and
$\alpha_{4}$. In terms of the orthonormal basis of the Cartan subalgebra,
$\sigma(H^{4})=-H^{4}$. Thus, the $\sigma$-invariant states are $\ket{\pm
e_{1}}$, $\ket{\pm e_{2}}$, $\ket{\pm e_{3}}$ and the combination
$\ket{e_{4}}+\ket{-e_{4}}$. On the other hand, the combination
$\ket{e_{4}}-\ket{-e_{4}}$ has $\sigma$-eigenvalue $-1$. Upon
compactification, $\ket{e_{4}}$ and $\ket{-e_{4}}$ are identified so that the
invariant states $\ket{\pm e_{1}}$, $\ket{\pm e_{2}}$, $\ket{\pm e_{3}}$ and
$\ket{0}$ form the fundamental representation of the invariant algebra
$\mathfrak{so}(7)$ under the order two outer automorphism. The state of
eigenvalue $-1$ becomes a singlet and acquires an additional half-unit shift
of KK-momentum Tachikawa:2011ch .
Consequently, we change $(x_{4}+\frac{1}{x_{4}})\sum_{i=1}^{3}x_{i}$ to
$(1+q^{1/2})\sum_{i=1}^{3}x_{i}$ in $\chi_{\Delta_{+}}^{\mathfrak{so}(8)}$ and
$(x_{4}+\frac{1}{x_{4}})\sum_{i=1}^{3}\frac{1}{x_{i}}$ to
$(1+q^{-1/2})\sum_{i=1}^{3}\frac{1}{x_{i}}$ in
$\chi_{\Delta_{-}}^{\mathfrak{so}(8)}$. After this replacement, the
perturbative part becomes
$\displaystyle
Z_{\mathrm{pert}}=\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\quantity(\chi_{\Delta_{+}}^{\mathfrak{so}(7)}+\quantity(\chi_{\mathbf{7}}^{\mathfrak{so}(7)}-1)q^{1/2}+\cdots)]\
,$ (86)
which agrees with the perturbative part obtained from 5-brane webs in (33).
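The plethystic exponential used in (83) and (86),
$\operatorname{PE}[f]=\exp\quantity(\sum_{n\geq 1}\frac{1}{n}f(\cdot^{n}))$,
can be sanity-checked on a toy single-variable series. A minimal sketch in
exact rational arithmetic (the helper names and the truncation order are
ours) verifies the textbook identity $\operatorname{PE}[q]=1/(1-q)$:

```python
from fractions import Fraction
from math import factorial

ORDER = 8  # work with series truncated at q^ORDER

def mul(a, b):
    """Multiply two truncated power series given as coefficient lists."""
    out = [Fraction(0)] * ORDER
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < ORDER and ai and bj:
                out[i + j] += ai * bj
    return out

def plethystic_exp(f):
    """PE[f](q) = exp(sum_{n>=1} f(q^n)/n) for a series f with f[0] == 0."""
    s = [Fraction(0)] * ORDER
    for n in range(1, ORDER):
        for i, c in enumerate(f):
            if i * n < ORDER:
                s[i * n] += c / n
    # exponentiate the truncated series: exp(s) = sum_k s^k / k!
    out = [Fraction(0)] * ORDER
    power = [Fraction(0)] * ORDER
    power[0] = Fraction(1)
    for k in range(ORDER):
        if k > 0:
            power = mul(power, s)
        out = [o + p / factorial(k) for o, p in zip(out, power)]
    return out

# PE of a single boson: PE[q] = 1/(1-q) = 1 + q + q^2 + ...
f = [Fraction(0)] * ORDER
f[1] = Fraction(1)
print([int(c) for c in plethystic_exp(f)])
```

The same routine, fed the multi-variable characters as series in $q$, produces
the expansions quoted in the text.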
For the instanton partition function, we replace $\pm v_{4}$ in (85) by
$\frac{\tau}{2}$ and $0$. The elliptic genus is then expressed as
$\displaystyle Z_{n}$ $\displaystyle=\frac{1}{(2\pi
i)^{n}}\frac{1}{\absolutevalue{\mathrm{Weyl}[Sp(n)]}}\oint\prod_{I=1}^{n}du_{I}\cdot\quantity(\prod_{I=1}^{n}\frac{i^{2}\theta_{1}(\pm
2u_{I})}{\eta^{2}})\quantity(\prod_{I<J}^{n}\frac{i^{4}\theta_{1}(\pm u_{I}\pm
u_{J})}{\eta^{4}})$
$\displaystyle\quad\times\quantity(\frac{i\theta_{1}(-2\epsilon_{+})}{\eta})^{n}\quantity(\prod_{I=1}^{n}\frac{i^{2}\theta_{1}(-2\epsilon_{+}\pm
2u_{I})}{\eta^{2}})\quantity(\prod_{I<J}^{n}\frac{i^{4}\theta_{1}(-2\epsilon_{+}\pm
u_{I}\pm u_{J})}{\eta^{4}})$
$\displaystyle\quad\times\quantity(\frac{i^{2}\eta^{2}}{\theta_{1}(\epsilon_{1,2})})^{n}\quantity(\prod_{I<J}^{n}\frac{i^{8}\eta^{8}}{\theta_{1}(\epsilon_{1,2}\pm
u_{I}\pm
u_{J})})\quantity(\prod_{I=1}^{n}\prod_{J=1}^{3}\frac{i^{4}\eta^{4}}{\theta_{1}(\epsilon_{+}\pm
u_{I}\pm v_{J})})$
$\displaystyle\quad\times\prod_{I=1}^{n}\frac{i^{4}\eta^{4}}{\theta_{1}(\epsilon_{+}\pm
u_{I})\theta_{1}(\epsilon_{+}\pm u_{I}+\frac{\tau}{2})}\ .$ (87)
For $n=1$, the singular hyperplanes are
$\displaystyle\pm u_{1}+\epsilon_{+}\pm v_{J}=0\ ,\qquad\pm
u_{1}+\epsilon_{+}=0\ ,\qquad\pm u_{1}+\epsilon_{+}+\frac{\tau}{2}=0\ .$ (88)
Take an auxiliary vector $\zeta=(1)$. The residues of the last two singular
hyperplanes are zero since $\theta_{1}(k\tau)=0$ for $k\in\mathbb{Z}$. Thus,
only the poles $+u_{1}+\epsilon_{+}\pm v_{J}=0$ contribute to the one-string
elliptic genus. One can find that the JK-residue sum for $Z_{1}$ in the
unrefined limit $\epsilon_{1}=-\epsilon_{2}$ gives the same expression as
(76). A similar calculation for the $n=2$ case leads to (77) as well.
## 3 $SU(3)$ theory with $\mathbb{Z}_{2}$ twist
Figure 11: A 5-brane web for 6d $G_{2}$ gauge theory with one fundamental.
The 6d $SU(3)$ gauge theory with $\mathbb{Z}_{2}$ twist can be obtained by
Higgsing from the 6d $G_{2}$ gauge theory with a hypermultiplet in the
fundamental representation. The Higgsing procedure can be realized through
5-brane webs as given in Kim:2019dqn . It is also worth noting that, from the
perspective of 5d SCFTs, this theory is the 5d pure $SU(3)$ gauge theory at
Chern-Simons level $\kappa=9$ Jefferson:2018irk . The corresponding 5-brane
web is given in Hayashi:2018lyv . To avoid confusion with the 6d $SU(3)$
theory, we denote this 5d theory by $SU(3)_{9}$. In this section, we compute
the partition function for the 6d $SU(3)$ gauge theory with $\mathbb{Z}_{2}$
twist from 5-brane webs using the topological vertex, and also by performing a
Higgsing on the elliptic genera of self-dual strings in the 6d $G_{2}$ theory.
### 3.1 5-brane web for 6d $SU(3)$ gauge theory with $\mathbb{Z}_{2}$ twist
We first review Type IIB 5-brane construction of the 6d pure $SU(3)$ gauge
theory with $\mathbb{Z}_{2}$ twist. We start from the 6d $G_{2}$ gauge theory
with one fundamental hypermultiplet, whose 5-brane configuration is given in
Figure 11 (this 5-brane web is obtained by taking the monodromy cut of
D7-branes on O5-planes differently from the 5-brane web given in Kim:2019dqn ;
one can think of it as an $SL(2,\mathbb{Z})$ transformed web diagram). The
mass of the fundamental hyper is denoted by $m$ and the masses of the W-bosons
$W_{i}$ are expressed in terms of the scalar fields as
$\displaystyle m_{W_{1}}=2\phi_{0}-\phi_{2}\ ,\qquad
m_{W_{2}}=2\phi_{1}-\phi_{2}\ ,\qquad m_{W_{3}}=-\phi_{0}-3\phi_{1}+2\phi_{2}\
,$ (89)
which form the affine Cartan matrix of untwisted affine Lie algebra
$G_{2}^{(1)}$.
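As a quick sanity check (a short sketch; the matrix layout and names are
ours), the coefficient matrix of $(\phi_{0},\phi_{1},\phi_{2})$ in (89) should
be degenerate, since every affine Cartan matrix has a null direction
corresponding to the imaginary root:

```python
def det3(m):
    """Determinant of a 3x3 integer matrix."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# coefficients of (phi_0, phi_1, phi_2) in the W-boson masses of eq. (89)
M = [( 2,  0, -1),
     ( 0,  2, -1),
     (-1, -3,  2)]

# an affine Cartan matrix is singular, and its diagonal entries are 2,
# as for any generalized Cartan matrix
print(det3(M), [row[k] for k, row in enumerate(M)])
```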
To obtain the $SU(3)$ gauge theory with $\mathbb{Z}_{2}$ twist from the
$G_{2}$ gauge theory with a fundamental, we Higgs the hypermultiplet of the
$G_{2}$ gauge theory by giving vevs to the scalar fields which carry KK-
momentum. To this end, we redefine the scalars as
$\displaystyle 2\phi_{0}-\phi_{2}\to 2\phi_{0}-\phi_{2}+\frac{1}{R}\
,\qquad\phi_{1}-\phi_{0}\to\phi_{1}-\phi_{0}-\frac{1}{2R}\ ,$ (90)
while other parameters in Figure 11 remain unaltered. This recovers the 6d
circle radius $R$ in the brane configuration. The Higgsing procedure to the 6d
$SU(3)$ gauge theory with $\mathbb{Z}_{2}$ twist is then similar to the
Higgsing explained through Figure 2(b)-(d) in the previous section. We first
bring the flavor D7-brane down to the lower O5-plane as depicted in Figure 12.
The Higgsing condition is then given by
$\displaystyle m=\frac{1}{2R},\qquad\phi_{1}=\phi_{0}+\frac{1}{2R}\ .$ (91)
Figure 12: Lowering the flavor brane in Figure 11 through a series of flop
transitions, so as to apply the Higgsing procedure discussed in Figure
2(b)-(d).
The first condition in (91) comes from putting the flavor D7-brane on the
lower $\mathrm{O}5$-plane. As discussed in the previous section, this D7-brane
is brought to the lower chamber of the Coulomb branch and then placed on the
$\mathrm{O}5^{-}$-plane at the bottom. This full D7-brane then splits into two
half D7-branes, generating an $\widetilde{\mathrm{O}5}^{-}$-plane in between
them. The second condition in (91) comes from putting the lower color D5-brane
on the $\mathrm{O}5^{-}$-plane in the middle, which also becomes two half
D5-branes. Through the Higgsing, one of the half D5-branes remains intact,
while the other half D5-brane is recombined onto the two half D7-branes on the
$\mathrm{O}5^{-}$-plane. There are then effectively three half D5-branes in
between the two half D7-branes, so that two half D5-branes can be Higgsed
away. Now, by taking the two half D7-branes in opposite directions, leaving
the half D7-brane monodromy cut extending from the right to the left, one
obtains the 5-brane configuration given in Figure 13. The masses of the
W-bosons after the Higgsing are then given by
Higgsing are then given by
$\displaystyle m_{W_{1}}=2\phi_{2}-4\phi_{0}-\frac{3}{2R}\ ,\qquad
m_{W_{2}}=\frac{1}{R}+2\phi_{0}-\phi_{2}\ ,$ (92)
which form the affine Cartan matrix of twisted affine Lie algebra
$A_{2}^{(2)}$.
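A small check (with our own variable names): the $\phi$-dependent coefficients
in (92) form a degenerate matrix, as required for an affine Cartan structure,
and the null combination $m_{W_{1}}+2m_{W_{2}}$ retains only the KK-scale
shift $\frac{1}{2R}$:

```python
from fractions import Fraction

# phi-dependent coefficients (phi_0, phi_2) in the W-boson masses of eq. (92)
M = [(-4,  2),
     ( 2, -1)]

# singular, so the two masses are phi-dependent along a single direction
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
print(det)

# the null combination m_{W_1} + 2 m_{W_2} is phi-independent and keeps
# only the KK shifts: -3/(2R) + 2*(1/R) = 1/(2R)   (in units of 1/R)
kk = Fraction(-3, 2) + 2 * Fraction(1, 1)
print(kk)
```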
Figure 13: A 5-brane web for 6d $SU(3)$ theory with $\mathbb{Z}_{2}$ twist.
The half monodromy cut extends from the right to the left.
### 3.2 Partition function from 5-brane webs
With this 5-brane web, we are ready to compute the topological string
partition function for the 6d $SU(3)$ gauge theory with $\mathbb{Z}_{2}$
twist. As with the 5-brane configurations for the $SO(8)$ theory with
$\mathbb{Z}_{2}$ twist in the previous section, this 5-brane configuration is
not smoothly connected to its reflected image over the orientifold plane. In
order to obtain a 5-brane configuration in which the original 5-brane and its
reflected mirror image are smoothly connected, we introduce an auxiliary
spinor hypermultiplet on the 5-brane configuration, as done in the previous
section. This then allows us to apply the topological vertex formulation with
an $O5$-plane to compute the partition function for the 6d $SU(3)$ gauge
theory with $\mathbb{Z}_{2}$ twist, followed by taking the decoupling limit of
the newly introduced hypermultiplet. For instance, see Figure 14. The lower part
of the 5-brane configuration for the 6d $SU(3)$ gauge theory with
$\mathbb{Z}_{2}$ twist is given in Figure 14(a), where we introduced a
hypermultiplet on the left-hand side of the web configuration (from the
$G_{2}$ gauge theory point of view, this hypermultiplet is a fundamental hyper
which originates from a hypermultiplet in the spinor representation of an
$SO(7)$ gauge theory). Here, two half D7-branes, denoted by blue dots, are
still in the middle of the brane web, and their monodromy cuts are the wiggly
lines pointing to the left. For computational ease, we take these half
D7-branes to the far left instead of taking them in opposite directions. This
makes a 5-brane configuration with only an O5-plane, as depicted in Figure
14(b). When taking all the half
D7-branes to the left, half D5-branes are created as a result of the Hanany-
Witten transitions, which is depicted in Figure 14(b), where half D5-branes
are solid lines in violet. In actual computation, they can be viewed as
follows: one full external D5-brane (or two half D5-branes) is connected to
the right $(0,1)$ and $(1,1)$ 5-branes, and an additional full D5-brane is
connected to the left $(0,1)$ and $(1,1)$ 5-branes.
Figure 14: (a) Coupling a spinor $(-1,1)$ 5-brane to $SU(3)$ theory with
$\mathbb{Z}_{2}$ twist. (b) Applying the generalized flop transition and
moving the half D7-branes and monodromy cuts to the right. Several half
D5-branes are created due to the Hanany-Witten transition.
For the upper part of the 5-brane web in Figure 13, the 5-brane configuration
is not smoothly connected to its mirror image either. One may again couple
another hypermultiplet as in Figure 15. It is also possible to use the
symmetry of the 5-brane web. Instead of computing the partition function based
on Figure 15, we use the following trick: we recycle the lower part of the
5-brane configuration with a hyper for the upper part computation. In other
words, when implementing the topological vertex for the 5-brane configuration
on the upper orientifold plane, we re-use the lower part configuration with a
hyper as if it were a suitable 5-brane configuration for the upper part, and
perform the partition function computation by properly gluing 5-branes. For instance,
as in Figure 16, we first consider strip diagrams. For the upper part, in
particular, we use the strip diagram of the lower part and glue the upper and
lower part together. In the figure, the two edges marked with
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}||}$
symbols are identified. This trick is not harmful, as the point of introducing
extra hypermultiplets is to arrange the 5-brane configuration in a way that we
can apply the topological vertex. Moreover, we decouple the contribution of
these newly introduced hypermultiplets.
Figure 15: A brane web of $SU(3)$ theory with $\mathbb{Z}_{2}$ twist coupled
with two spinors. Figure 16: Strip diagrams for 5-brane webs for $SU(3)$
theory with $\mathbb{Z}_{2}$ twist given in Figure 15. The left part (a) and
the right part (b) of the web diagram in Figure 15. $Z_{\mathrm{left}}$ is
based on (a) and $Z_{\mathrm{right}}$ is based on (b). The edges denoted with
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}||}$ are
identified.
It then follows that the strip diagram in Figure 16(a) yields
$\displaystyle Z_{\mathrm{left}}(\mu_{1},\mu_{2},\nu_{1},\nu_{2})$ (93)
$\displaystyle=g^{\frac{1}{2}(\norm*{\mu_{1}}^{2}+\norm*{\mu_{2}}^{2}+2\norm*{\nu_{1}}^{2}+2\norm*{\nu_{2}^{t}}^{2})}\tilde{Z}_{\mu_{1}}(g)\tilde{Z}_{\mu_{2}}(g)\tilde{Z}_{\nu_{1}}(g)^{2}\tilde{Z}_{\nu_{2}}(g)^{2}\frac{\mathcal{R}_{\emptyset\mu_{1}}(Q_{3,4,5})\mathcal{R}_{\emptyset\mu_{1}^{t}}(Q_{1,2})}{\mathcal{R}_{\emptyset\emptyset}(Q_{1,2,3,4,5})}$
$\displaystyle\quad\times\frac{\mathcal{R}_{\emptyset\mu_{2}}(Q_{4,5})\mathcal{R}_{\emptyset\mu_{2}^{t}}(Q_{1,2,3})\mathcal{R}_{\nu_{1}\emptyset}(Q_{1,1,2,3,4,5})\mathcal{R}_{\emptyset\nu_{2}}(Q_{1,2,3,4,5,5})\mathcal{R}_{\mu_{1}^{t}\nu_{1}^{t}}(Q_{2})\mathcal{R}_{\mu_{2}^{t}\nu_{1}^{t}}(Q_{2,3})}{\mathcal{R}_{\emptyset\nu_{1}^{t}}(Q_{2,3,4,5})\mathcal{R}_{\emptyset\nu_{2}^{t}}(Q_{1,2,3,4})\mathcal{R}_{\mu_{1}\mu_{2}^{t}}(Q_{3})\mathcal{R}_{\mu_{1}^{t}\nu_{1}}(Q_{1,1,2})}$
$\displaystyle\quad\times\frac{\mathcal{R}_{\mu_{1}\nu_{2}^{t}}(Q_{3,4})\mathcal{R}_{\mu_{2}\nu_{2}^{t}}(Q_{4})\mathcal{R}_{\nu_{1}\nu_{1}}(Q_{1,1})\mathcal{R}_{\nu_{1}^{t}\nu_{2}}(Q_{2,3,4,5,5})\mathcal{R}_{\nu_{2}\nu_{2}}(Q_{5,5})\mathcal{R}_{\nu_{1}\nu_{2}^{t}}(Q_{1,1,2,3,4})}{\mathcal{R}_{\mu_{2}^{t}\nu_{1}}(Q_{1,1,2,3})\mathcal{R}_{\mu_{1}\nu_{2}}(Q_{3,4,5,5})\mathcal{R}_{\mu_{2}\nu_{2}}(Q_{4,5,5})\mathcal{R}_{\nu_{1}\nu_{2}}(Q_{1,1,2,3,4,5,5})\mathcal{R}_{\nu_{1}^{t}\nu_{2}^{t}}(Q_{2,3,4})},$
where we used the Kähler parameters $Q_{i}$ assigned in Figure 15 with the
convention that $Q_{i,j,\cdots}=Q_{i}Q_{j}\cdots$.
The strip on Figure 16(b) can be treated in a similar way except that the
upper and lower edges marked with
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}||}$ symbol
are identified.
The topological string partition function for the strip in Figure 16(b) is a
bit involved but can be obtained in a straightforward manner following
Haghighat:2013gba ; macdonald_symmetric_1995 . One needs to repeatedly use
various Cauchy identities in Appendix B. The contribution of the strip diagram
in Figure 16(b) then reads
$\displaystyle
Z_{\mathrm{right}}(\mu_{1},\mu_{2})=g^{\frac{1}{2}(\norm*{\mu_{1}^{t}}^{2}+\norm*{\mu_{2}^{t}}^{2})}\tilde{Z}_{\mu_{1}}(g)\tilde{Z}_{\mu_{2}}(g)\times$
$\displaystyle\quad\times\prod_{m=1}^{\infty}\frac{1}{1-q^{m}}\frac{1}{\mathcal{R}_{\emptyset\emptyset}(q^{2m})^{2}\mathcal{R}_{\emptyset\emptyset}(q^{2m-1/2})^{2}\mathcal{R}_{\emptyset\emptyset}(q^{2m-1})^{2}\mathcal{R}_{\emptyset\emptyset}(q^{2m-3/2})^{2}}$
$\displaystyle\quad\times\frac{1}{\mathcal{R}_{\emptyset\mu_{1}}(q^{2m-2}Q_{3,4,5})\mathcal{R}_{\emptyset\mu_{1}}(q^{2m-1}Q_{3,4,5})\mathcal{R}_{\emptyset\mu_{1}}(q^{2m}Q_{1,2}^{-1})\mathcal{R}_{\emptyset\mu_{1}}(q^{2m-1}Q_{1,2}^{-1})}$
$\displaystyle\quad\times\frac{1}{\mathcal{R}_{\emptyset\mu_{1}^{t}}(q^{2m-2}Q_{1,2})\mathcal{R}_{\emptyset\mu_{1}^{t}}(q^{2m-1}Q_{1,2})\mathcal{R}_{\emptyset\mu_{1}^{t}}(q^{2m-2}Q_{1,1,2,2,3,4,5})\mathcal{R}_{\emptyset\mu_{1}^{t}}(q^{2m-1}Q_{1,1,2,2,3,4,5})}$
$\displaystyle\quad\times\frac{1}{\mathcal{R}_{\emptyset\mu_{2}}(q^{2m-2}Q_{4,5})\mathcal{R}_{\emptyset\mu_{2}}(q^{2m-1}Q_{4,5})\mathcal{R}_{\emptyset\mu_{2}}(q^{2m}Q_{1,2,3}^{-1})\mathcal{R}_{\emptyset\mu_{2}}(q^{2m-1}Q_{1,2,3}^{-1})}$
$\displaystyle\quad\times\frac{1}{\mathcal{R}_{\emptyset\mu_{2}^{t}}(q^{2m}Q_{4,5}^{-1})\mathcal{R}_{\emptyset\mu_{2}^{t}}(q^{2m-1}Q_{1,2,3})\mathcal{R}_{\emptyset\mu_{2}^{t}}(q^{2m-1}Q_{4,5}^{-1})\mathcal{R}_{\emptyset\mu_{2}^{t}}(q^{2m-2}Q_{1,2,3})}$
$\displaystyle\quad\times\frac{1}{\mathcal{R}_{\mu_{1}\mu_{1}^{t}}(q^{2m})\mathcal{R}_{\mu_{1}\mu_{1}^{t}}(q^{2m-1})\mathcal{R}_{\mu_{2}\mu_{2}^{t}}(q^{2m})\mathcal{R}_{\mu_{2}\mu_{2}^{t}}(q^{2m-1})}$
$\displaystyle\quad\times\frac{1}{\mathcal{R}_{\mu_{1}^{t}\mu_{2}}(q^{2m}Q_{3}^{-1})\mathcal{R}_{\mu_{1}^{t}\mu_{2}}(q^{2m-1}Q_{3}^{-1})\mathcal{R}_{\mu_{1}\mu_{2}^{t}}(q^{2m-1}Q_{3})\mathcal{R}_{\mu_{1}\mu_{2}^{t}}(q^{2m-2}Q_{3})}\
,$ (94)
where $q=(Q_{1}Q_{2}Q_{3}Q_{4}Q_{5})^{2}$.
The full topological string partition function $Z$ is obtained by combining
$Z_{\mathrm{left}}$ and $Z_{\mathrm{right}}$,
$\displaystyle
Z=\sum_{\mu_{i},\nu_{i}}(-Q_{B})^{\absolutevalue{\mu_{1}}+\absolutevalue{\mu_{2}}}Q_{1}^{\absolutevalue{\nu_{1}}}Q_{5}^{\absolutevalue{\nu_{2}}}f_{\mu_{1}}f_{\mu_{2}}^{-1}f_{\nu_{1}}^{3}f_{\nu_{2}}Z_{\mathrm{left}}(\mu_{1},\mu_{2},\nu_{1},\nu_{2})Z_{\mathrm{right}}(\mu_{1},\mu_{2}).$
(95)
It is convenient to further rescale the scalars in Figure 13 as
$\displaystyle\phi_{2}\to\phi_{2}+2\phi_{0}+\frac{3}{4R}\ ,$ (96)
which expresses the Kähler parameters in terms of the physical parameters as
$\displaystyle-\log Q_{1}Q_{2}$ $\displaystyle=-\log
Q_{4}Q_{5}=-\phi_{2}+\frac{\tau}{4}\ ,$ $\displaystyle-\log Q_{3}$
$\displaystyle=2\phi_{2}\ ,$ (97) $\displaystyle-\log Q_{B}$
$\displaystyle=\log u+2\phi_{2}\ ,$ (98)
where $u$ is the string fugacity. We note that the relation
$q=(Q_{1}Q_{2}Q_{3}Q_{4}Q_{5})^{2}=e^{-\tau}$ is not changed under the
rescaling. With $A=e^{-\phi_{2}}$, we can write the Kähler parameters $Q_{2}$
and $Q_{4}$ as $Q_{2}=q^{1/4}A^{-1}Q_{1}^{-1}$ and
$Q_{4}=q^{1/4}A^{-1}Q_{5}^{-1}$ where $Q_{1},Q_{5}$ are the Kähler parameters
associated with the spinors which we will decouple.
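The parametrization (97)-(98) can be checked numerically. In the sketch below
(the sample values of `phi2`, `tau` and `Q1` are arbitrary illustrative
choices of ours), we verify that $q=(Q_{1}Q_{2}Q_{3}Q_{4}Q_{5})^{2}=e^{-\tau}$
and that the splitting $Q_{2}=q^{1/4}A^{-1}Q_{1}^{-1}$ is consistent:

```python
from math import exp, isclose

# sample values for the physical parameters (illustrative only)
phi2, tau = 0.7, 1.3

Q12 = exp(-(-phi2 + tau/4))   # -log(Q1 Q2) = -phi2 + tau/4, eq. (97)
Q45 = exp(-(-phi2 + tau/4))   # -log(Q4 Q5) = -phi2 + tau/4
Q3  = exp(-2*phi2)            # -log(Q3)    = 2 phi2

# q = (Q1 Q2 Q3 Q4 Q5)^2 = e^{-tau}
q = exp(-tau)
print(isclose((Q12 * Q3 * Q45)**2, q))

# with A = e^{-phi2}, the splitting Q2 = q^{1/4} A^{-1} Q1^{-1}
# reproduces the same product Q1*Q2 for any choice of Q1
A = exp(-phi2)
Q1 = 0.45                     # arbitrary; drops out of Q1*Q2
Q2 = q**0.25 / (A * Q1)
print(isclose(Q1 * Q2, Q12))
```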
The perturbative partition function $Z_{\mathrm{pert}}$ is obtained by setting
$\mu_{1}=\mu_{2}=\emptyset$ in (95) and summing over the Young diagrams
$\nu_{i}$; it is a function of $g$, $A$, $q$, $Q_{1}$ and $Q_{5}$. As an
expansion in $q$, we find that the perturbative part is given by
$\displaystyle
Z_{\mathrm{pert}}\\!=\operatorname{PE}\\!\bigg{[}\frac{2gA^{2}}{(1\\!-g)^{2}}\\!+\frac{g(2A^{-1}\\!+2A)q^{1/4}}{(1-g)^{2}}\\!+\frac{4gq^{1/2}}{(1\\!-g)^{2}}\\!+\frac{2g(A\\!+A^{-1})q^{3/4}}{(1-g)^{2}}\\!+\cdots\\!\bigg{]},$
(99)
where the $\cdots$ denotes the terms involving $Q_{1}$ and $Q_{5}$, which we
will decouple later. This is the expected result, as in Appendix A. If we take
the large radius limit corresponding to $q\to 0$, the states carrying
KK-momentum are truncated, and as a result only the $q^{0}$ term remains.
The partition function for the 6d self-dual strings $Z_{\mathrm{string}}$ is
obtained by summing over all Young diagrams in (95), expanding in $q$ and $A$,
and decoupling the auxiliary spinor hypermultiplets that we introduced for
computational ease, $Q_{1},Q_{5}\to\infty$, as discussed in the previous
section,
$\displaystyle
Z_{\mathrm{string}}=\frac{Z}{Z_{\mathrm{pert}}}=1+uZ_{1}+u^{2}Z_{2}+\cdots,$
(100)
where $Z_{n}$ corresponds to the $n$-string elliptic genus,
$\displaystyle Z_{1}$
$\displaystyle=\frac{2gA^{2}}{(1-g)^{2}(1-A^{2})^{2}}+\frac{2gA(1+A^{2})}{(1-g)^{2}(1-A^{2})^{2}}q^{1/4}+\frac{6gA^{2}}{(1-g)^{2}(1-A^{2})^{2}}q^{1/2}$
$\displaystyle\quad+\frac{6gA(1+A^{2})}{(1-g)^{2}(1-A^{2})^{2}}q^{3/4}+\frac{6A^{4}g+2A^{2}(2g^{2}+7g+2)+6g}{(1-g)^{2}(1-A^{2})^{2}}q+O(q^{5/4})\
,$ $\displaystyle Z_{2}$
$\displaystyle=\frac{g^{4}A^{4}\Big{(}A^{4}(3g^{2}+2g+3)-2A^{2}(g^{2}+6g+1)+3g^{2}+2g+3\Big{)}}{(1-g)^{4}(1+g)^{2}(1-A^{2})^{2}(A^{2}-g)^{2}(1-A^{2}g)^{2}}$
$\displaystyle\quad+\frac{2g^{3}A^{3}(A^{2}+1)\big{(}2A^{4}g+A^{2}(g^{2}-6g+1)+2g\big{)}}{(1-g)^{4}(1-A^{2})^{2}(A^{2}-g)^{2}(1-A^{2}g)^{2}}q^{1/4}$
$\displaystyle\quad+\frac{A^{2}g^{3}\Big{(}A^{4}(3g^{2}+2g+3)-2A^{2}(g^{2}+6g+1)+3g^{2}+2g+3\Big{)}}{(1-g)^{4}(1+g)^{2}(1-A^{2})^{2}(A^{2}-g)^{2}(1-A^{2}g)^{2}}\times$
$\displaystyle\qquad\times\big{(}A^{4}g+2A^{2}(g+1)^{2}+g\big{)}q^{1/2}+O(q^{3/4})\
.$ (101)
Though we presented the results for the one- and two-string elliptic genera
only, higher $n$-string elliptic genera can be computed in a straightforward manner.
### 3.3 Elliptic genus by Higgsing 6d $G_{2}+1\mathbf{F}$
In this subsection, we compute the elliptic genus to cross-check the partition
function for the $SU(3)$ theory with $\mathbb{Z}_{2}$ twist obtained from 5-brane webs
in the previous subsection. As discussed, the $SU(3)$ theory with
$\mathbb{Z}_{2}$ twist can be obtained by Higgsing the 6d $G_{2}$ gauge theory
with one fundamental hypermultiplet ($G_{2}+1\mathbf{F}$). We hence start by
computing the elliptic genus for the 6d $G_{2}$ gauge theory with one
fundamental hypermultiplet and apply the Higgsing that leads to the $SU(3)$
theory with $\mathbb{Z}_{2}$ twist.
The perturbative part of the partition function comes from the contributions
of the vector multiplet and the hypermultiplet,
$Z_{\mathrm{pert}}^{G_{2}+1\mathbf{F}}=Z_{\mathrm{pert}}^{\mathrm{gauge}}Z_{\mathrm{pert}}^{\mathrm{hyper}}$.
As we have the corresponding 5-brane web in Figure 12, we can readily obtain the
perturbative part. The vector multiplet contribution to the perturbative part
for $G_{2}+1\mathbf{F}$ takes the following form
$\displaystyle
Z_{\mathrm{pert}}^{\mathrm{gauge}}=\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\quantity(\chi_{\Delta_{+}}^{G_{2}}+q\chi_{\Delta_{-}}^{G_{2}})\frac{1}{1-q}]\
,$ (102)
where $\chi_{\Delta_{\pm}}^{G_{2}}$ are the positive and negative parts of
the character of the adjoint representation of $G_{2}$. The hypermultiplet
contribution to the perturbative part is given by
$\displaystyle
Z_{\mathrm{pert}}^{\mathrm{hyper}}\\!=\\!\operatorname{PE}\\!\quantity[-\frac{g}{(1-g)^{2}}\quantity(M\\!+\\!\frac{q}{M})\quantity(\frac{x_{1}}{q}\\!+\frac{x_{1}}{x_{2}}\\!+\frac{x_{2}}{x_{1}^{2}}\\!+1\\!+\frac{x_{1}^{2}}{x_{2}}\\!+\frac{x_{2}}{x_{1}}\\!+\frac{q}{x_{1}})\frac{1}{1-q}],$
(103)
where $M=e^{-m}$ and $x_{i}=e^{-\phi_{i}}$. These perturbative parts can be
easily computed from the web diagram in Figure 12.
To perform the Higgsing leading to the $SU(3)$ gauge theory with
$\mathbb{Z}_{2}$ twist, we give a vev to the hypermultiplet carrying KK
momentum. For that, we recover the affine node $\phi_{0}$ as in Figure 12, and
impose the Higgsing conditions (91). By rescaling the scalars (96), we obtain
the perturbative part of the partition function for the $SU(3)$ gauge theory
with $\mathbb{Z}_{2}$ twist:
$\displaystyle
Z_{\mathrm{pert}}\\!=\\!\operatorname{PE}\\!\bigg{[}\frac{2g}{(1-g)^{2}}$
$\displaystyle\Big{(}\\!(A^{2}\\!-\\!2)+(A\\!+\\!A^{-1})q^{1/4}+q^{1/2}+(A\\!+\\!A^{-1})q^{3/4}+\\!\cdots\\!\Big{)}\bigg{]},$
(104)
where $A=x_{2}=e^{-\phi_{2}}$. This result is the same as that obtained from
the topological vertex calculation (99) up to the Cartan parts.
We now compute the elliptic genus of the 6d $G_{2}$ theory with one
fundamental hypermultiplet, following Kim:2018gjo . The symmetries of the 2d
worldsheet theory on the self-dual strings are a $U(n)$ gauge symmetry for
string number $n$, an $SU(3)$ global symmetry which is enhanced to $G_{2}$ in
the IR, and $U(1)_{\sf J}$ and $SU(2)_{l}$ global symmetries. Here, for $SU(2)_{R}$
R-symmetry and $SO(4)=SU(2)_{l}\times SU(2)_{r}$ rotation symmetry of the
transverse $\mathbb{R}^{4}$, the charge ${\sf J}$ is identified as the sum of
the $SU(2)_{r}$ charge and the $SU(2)_{R}$ charge, ${\sf J}=J_{r}+J_{R}$. The
worldsheet fields can be written in $\mathcal{N}=(0,2)$ multiplets. There are
the vector multiplet $V$, the Fermi multiplets $\lambda$,
$(\hat{\lambda},\check{\lambda})$, $(\Psi,\bar{\Psi})$, and the chiral
multiplets $(q,\tilde{q})$, $(a,\tilde{a})$, $(\phi_{i},\phi_{4})$,
$(b,\tilde{b})$. Their charges under the symmetries in the 2d worldsheet
theory are summarized in Table 2.
| $U(n)$ | $SU(3)$ | $SU(2)_{l}$ | $U(1)_{\sf J}$
---|---|---|---|---
$V$ | $\mathbf{adj}$ | $\mathbf{1}$ | $\mathbf{1}$ | $0$
$\lambda$ | $\mathbf{adj}$ | $\mathbf{1}$ | $\mathbf{1}$ | $-1$
$(q,\tilde{q})$ | $(\mathbf{n},\bar{\mathbf{n}})$ | $(\bar{\mathbf{3}},\mathbf{3})$ | $\mathbf{1}$ | $1/2$
$(a,\tilde{a})$ | $\mathbf{adj}$ | $\mathbf{1}$ | $\mathbf{2}$ | $1/2$
$(\phi_{i},\phi_{4})$ | $\bar{\mathbf{n}}$ | $(\bar{\mathbf{3}},\mathbf{1})$ | $\mathbf{1}$ | $1/2$
$(b,\tilde{b})$ | $\overline{\mathbf{anti}}$ | $\mathbf{1}$ | $\mathbf{2}$ | $1/2$
$(\hat{\lambda},\check{\lambda})$ | $\overline{\mathbf{sym}}$ | $\mathbf{1}$ | $\mathbf{1}$ | $(0,-1)$
$(\Psi,\bar{\Psi})$ | $(\mathbf{n},\bar{\mathbf{n}})$ | $\mathbf{1}$ | $\mathbf{1}$ | $0$
Table 2: The $\mathcal{N}=(0,2)$ multiplets and symmetries in ADHM formalism
of 6d $G_{2}$ gauge theory.
Using the matter content in Table 2, one can write down the $n$-string
elliptic genus:
$\displaystyle Z_{n}$ $\displaystyle=\frac{1}{n!}\frac{1}{(2\pi
i)^{n}}\oint\prod_{I=1}^{n}du_{I}\cdot\Big{(}\frac{2\pi\eta^{2}}{i}\Big{)}^{n}\quantity(\prod_{I\neq
J}^{n}\frac{i\theta_{1}(u_{I}-u_{J})}{\eta})\quantity(\prod_{I,J=1}^{n}\frac{i\theta_{1}(-2\epsilon_{+}+u_{I}-u_{J})}{\eta})$
$\displaystyle\quad\times\quantity(\prod_{I=1}^{n}\prod_{J=1}^{3}\frac{i^{2}\eta^{2}}{\theta_{1}(\epsilon_{+}\pm(u_{I}-v_{J}))})\quantity(\prod_{I,J=1}^{n}\frac{i^{2}\eta^{2}}{\theta_{1}(\epsilon_{1,2}+u_{I}-u_{J})})$
$\displaystyle\quad\times\quantity(\prod_{I=1}^{n}\prod_{J=1}^{3}\frac{i\eta}{\theta_{1}(\epsilon_{+}-u_{I}-v_{J})}\frac{i\eta}{\theta_{1}(\epsilon_{+}-u_{I})})\quantity(\prod_{I<J}^{n}\frac{i\eta}{\theta_{1}(\epsilon_{1,2}-u_{I}-u_{J})})$
$\displaystyle\quad\times\quantity(\prod_{I\leq
J}^{n}\frac{i\theta_{1}(u_{I}+u_{J})}{\eta}\frac{i\theta_{1}(-2\epsilon_{+}+u_{I}+u_{J})}{\eta})\quantity(\prod_{I=1}^{n}\frac{i^{2}\theta_{1}(\pm
u_{I}+m)}{\eta^{2}}).$ (105)
Here, $u_{I}$ are the $U(n)$ parameters, $\eta$ is the Dedekind eta function,
and $v_{I}$ are the $SU(3)$ parameters subject to $v_{1}+v_{2}+v_{3}=0$. First
consider the $n=1$ case. We choose an auxiliary vector $\zeta=(1)$. Then the
contributing poles are from $\epsilon_{+}+u-v_{J}=0$, and the contour integral
is converted to the JK-residue sum. We thus obtain the one-string elliptic
genus for the $G_{2}$ gauge theory with one fundamental hypermultiplet
$\displaystyle
Z_{1}^{G_{2}+1\mathbf{F}}\\!=\\!\sum_{I=1}^{3}\frac{\eta^{6}\,\theta_{1}(2v_{I}-4\epsilon_{+})\,\theta_{1}\\!(m\pm(\epsilon_{+}\\!-\\!v_{I}))}{\theta_{1}(\epsilon_{1,2})\,\theta_{1}(2\epsilon_{+}\\!-\\!v_{I})\displaystyle{\prod_{J\neq
I}}\theta_{1}(v_{I}\\!-\\!v_{J})\,\theta_{1}(2\epsilon_{+}\\!-\\!v_{I}\\!+v_{J})\,\theta_{1}(2\epsilon_{+}\\!-\\!v_{J})},$
(106)
where the $SU(3)$ condition $v_{1}+v_{2}+v_{3}=0$ is used. For the $n=2$ case,
we choose an auxiliary vector $\zeta$ as in Figure 17.
Figure 17: An auxiliary vector $\zeta$ for the JK-residue calculation of two-
string elliptic genus of the $G_{2}$ gauge theory with one fundamental
hypermultiplet.
The poles which survive in the JK-residue sum are
$\displaystyle(i)\left\\{\begin{array}[]{l}\epsilon_{+}+u_{2}-v_{I}=0\\\
\epsilon_{+}+u_{1}-v_{J}=0\end{array}\right.$
$\displaystyle(ii)\left\\{\begin{array}[]{l}\epsilon_{+}+u_{1}-v_{J}=0\\\
\epsilon_{1,2}+u_{1}-u_{2}=0\end{array}\right.$
$\displaystyle(iii)\left\\{\begin{array}[]{l}\epsilon_{1,2}+u_{2}-u_{1}=0\\\
\epsilon_{+}+u_{1}-v_{J}=0\ .\end{array}\right.$ (113)
The first poles $(i)$ in (113) give
$\displaystyle\mathrm{Res}_{1}$ $\displaystyle=\sum_{I\neq
J}^{3}\frac{\eta^{12}\theta_{1}(2v_{I,J}-4\epsilon_{+})\theta_{1}(\epsilon_{+}-v_{I,J}\pm
m)}{2\,\theta_{1}\\!(\epsilon_{1,2})^{2}\,\theta_{1}\\!(2\epsilon_{+}\pm
v_{I,J})\,\theta_{1}\\!(\epsilon_{1,2}\pm v_{I,J})}$ (114)
$\displaystyle\quad\\!\times\\!\\!\prod_{K\neq
I,J}\\!\frac{\theta_{1}(4\epsilon_{+}+v_{K})}{\theta_{1}\\!(2\epsilon_{+}+v_{K})\,\theta_{1}(3\epsilon_{+}\pm\epsilon_{-}\\!+\\!v_{K})\,\theta_{1}\\!(v_{I,J}\\!-v_{K})\,\theta_{1}\\!(2\epsilon_{+}\\!-v_{I,J}\\!+v_{K})}.$
The second poles $(ii)$ give
$\displaystyle\mathrm{Res}_{2}$
$\displaystyle=\\!\frac{\eta^{12}}{2}\sum_{I=1}^{3}\\!\Bigg{(}\\!\frac{\theta_{1}(5\epsilon_{+}\\!+\\!\epsilon_{-}\\!-2v_{I})\theta_{1}(6\epsilon_{+}\\!+\\!2\epsilon_{-}\\!-\\!2v_{I})\theta_{1}(\epsilon_{+}\\!-\\!v_{I}\pm\\!m)\theta_{1}(2\epsilon_{+}\\!+\\!\epsilon_{-}\\!-\\!v_{I}\\!\pm
m)}{\theta_{1}(\epsilon_{1,2})\theta_{1}(2\epsilon_{1})\theta_{1}(-2\epsilon_{-})\theta_{1}(2\epsilon_{+}-v_{I})\theta_{1}(3\epsilon_{+}+\epsilon_{-}-v_{I})}$
$\displaystyle\quad\qquad\times\prod_{J\neq
I}\frac{1}{\theta_{1}(v_{I}-v_{J})\theta_{1}(\epsilon_{1}-v_{I}+v_{J})\theta_{1}(2\epsilon_{+}-v_{I}+v_{J})\theta_{1}(3\epsilon_{+}+\epsilon_{-}-v_{I}+v_{J})}$
$\displaystyle\quad\qquad\quad\times\frac{1}{\theta_{1}(2\epsilon_{+}+v_{J})\theta_{1}(3\epsilon_{+}+\epsilon_{-}+v_{J})}+(\epsilon_{1}\leftrightarrow\epsilon_{2},\>\epsilon_{-}\to-\epsilon_{-})\bigg{)}\
.$ (115)
Notice that the third poles $(iii)$ can be obtained from $(ii)$ by
$u_{1}\leftrightarrow u_{2}$. It follows that the residue for $(iii)$ is the
same as that for $(ii)$, $\mathrm{Res}_{3}=\mathrm{Res}_{2}$. The two-string
elliptic genus of the $G_{2}$ theory with one fundamental hypermultiplet is
then given by
$\displaystyle
Z_{2}^{G_{2}+1\mathbf{F}}=\mathrm{Res}_{1}+2\times\mathrm{Res}_{2}\ .$ (116)
We now Higgs the 6d $G_{2}$ theory. The procedure is similar to that for the
$SO(10)$ case in the previous section. As the elliptic genus results are
written in terms of the $SU(3)$ parameters rather than the $G_{2}$
parameters, some additional steps are required for the proper Higgsing from
the 6d $G_{2}$ theory to the $SU(3)$ theory with $\mathbb{Z}_{2}$ twist.
We begin by identifying the relation between the $SU(3)$ parameters $v_{I}$
and the $G_{2}$ fundamental weights. It follows from the embedding
$\displaystyle G_{2}$ $\displaystyle\supset SU(3)$ (117) $\displaystyle{\bf
7}$ $\displaystyle={\bf 1+3+\bar{3}}\,$ (118)
that the character of the fundamental representation of $G_{2}$ is expressed
in terms of the $SU(3)$ characters as $1+\sum_{I=1}^{3}(e^{v_{I}}+e^{-v_{I}})$,
and the parameter map between the $SU(3)$ parameters $v_{I}$ and the $G_{2}$
fundamental weights is $v_{1}\to\phi_{1}$, $v_{2}\to\phi_{1}-\phi_{2}$ and
$v_{3}\to\phi_{2}-2\phi_{1}$. By applying the Higgsing conditions from the
5-brane web diagram (91) as well as the reparameterization (96), one finds
that the proper Higgsing from the $G_{2}$ theory to the $SU(3)$ gauge theory
with $\mathbb{Z}_{2}$ twist is given by
$\displaystyle v_{1}\to\frac{\tau}{2},\qquad
v_{2}\to-\phi_{2}-\frac{\tau}{4},\qquad v_{3}\to\phi_{2}-\frac{\tau}{4},\qquad
m\to\frac{\tau}{2}.$ (119)
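As a quick consistency check (our own sketch using sympy, not part of the original computation), both the $G_{2}\supset SU(3)$ parameter map above and the Higgsed values (119) should respect the $SU(3)$ condition $v_{1}+v_{2}+v_{3}=0$ used throughout:

```python
# Consistency checks (illustration only) on the parameter maps used in the
# Higgsing from the 6d G2 theory to the SU(3) theory with Z2 twist.
import sympy as sp

phi1, phi2, tau = sp.symbols('phi1 phi2 tau')

# G2 -> SU(3) parameter map read off from 7 = 1 + 3 + 3bar
v_map = [phi1, phi1 - phi2, phi2 - 2*phi1]
assert sp.simplify(sum(v_map)) == 0            # SU(3) condition v1+v2+v3 = 0

# Higgsed values (119) for the SU(3) theory with Z2 twist
v_higgs = [tau/2, -phi2 - tau/4, phi2 - tau/4]
assert sp.simplify(sum(v_higgs)) == 0          # condition preserved

# the character 1 + sum_I (e^{v_I} + e^{-v_I}) has dimension 7 at the origin
chi7 = 1 + sum(sp.exp(v) + sp.exp(-v) for v in v_map)
assert chi7.subs({phi1: 0, phi2: 0}) == 7
```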
Substituting these into the one-string elliptic genus for the $G_{2}$ theory
given in (106), we get the one-string elliptic genus for the $SU(3)$ gauge
theory with $\mathbb{Z}_{2}$ twist:
$\displaystyle
Z_{1}=\frac{2\,\eta^{6}\,\theta_{1}(-2\phi_{2}+\frac{\tau}{2})}{\theta_{1}(\epsilon_{-})^{2}\theta_{1}(2\phi_{2})^{2}\theta_{1}(\frac{\tau}{2})\theta_{1}(-\phi_{2}+\frac{\tau}{4})\theta_{1}(\phi_{2}-\frac{3\tau}{4})}\
.$ (120)
It follows that the two-string elliptic genus for the $SU(3)$ gauge theory
with $\mathbb{Z}_{2}$ twist is given by
$\displaystyle Z_{2}$ $\displaystyle=\frac{\eta^{12}\,\theta_{1}(\pm
2\phi_{2}+\frac{\tau}{2})}{\theta_{1}(\epsilon_{-})^{4}\theta_{1}(\epsilon_{-}\pm
2\phi_{2})^{2}\theta_{1}(\pm\phi_{2}+\frac{\tau}{4})\theta_{1}(\pm\epsilon_{-}+\frac{\tau}{2})\theta_{1}(\pm\phi_{2}+\frac{3\tau}{4})}$
$\displaystyle\quad+\bigg{(}\frac{2\eta^{12}\theta_{1}(\epsilon_{-}-2\phi_{2}+\frac{\tau}{2})\theta_{1}(2\epsilon_{-}-2\phi_{2}+\frac{\tau}{2})}{\theta_{1}(\epsilon_{-})^{2}\theta_{1}(2\epsilon_{-})^{2}\theta_{1}(2\phi_{2})^{2}\theta_{1}(\epsilon_{-}-2\phi_{2})^{2}\theta_{1}(-\phi_{2}+\frac{\tau}{4})\theta_{1}(\epsilon_{-}-\phi_{2}+\frac{\tau}{4})}$
$\displaystyle\qquad\quad\times\frac{1}{\theta_{1}(\frac{\tau}{2})\theta_{1}(\epsilon_{-}+\frac{\tau}{2})\theta_{1}(-\phi_{2}+\frac{3\tau}{4})\theta_{1}(\epsilon_{-}-\phi_{2}+\frac{3\tau}{4})}+(\epsilon_{-}\to-\epsilon_{-})\bigg{)}.$
(121)
It is straightforward to see that these one- and two-string elliptic genera,
(120) and (121), agree with the $Z_{1}$ and $Z_{2}$ obtained from the
topological vertex in (101), by double expanding (120) and (121) in terms of
$q$ and $A$ and also taking the unrefined limit.
## 4 Conclusion
In this paper, we computed the partition functions of 6d $SO(8)$ and $SU(3)$
gauge theories with $\mathbb{Z}_{2}$ outer automorphism twist from two
different perspectives. One is to use their Type IIB 5-brane webs, where 6d
conformal
matter theories of D-type gauge symmetry can be engineered as 5-brane webs
with two $\mathrm{O}5$-planes. Among various RG flows, we discussed the
Higgsing procedure on 5-brane webs giving rise to the 6d $SO(8)$ and $SU(3)$
gauge theories with $\mathbb{Z}_{2}$ twist, from the 6d $SO(10)$ gauge theory
with two flavors and from the 6d $G_{2}$ gauge theory with a flavor,
respectively. We computed the partition functions of these theories following
the topological vertex formalism in the presence of $\mathrm{O}5$-planes
developed in Kim:2017jqn , by introducing and decoupling spinor matter fields
to implement the topological vertex as done in Hayashi:2018bkd . The other is to
directly apply the outer automorphism twists on the elliptic genera for the
$SO(10)$ and $G_{2}$ gauge theories, by Higgsing the hypermultiplets with
proper KK-momentum dependence. We checked that the partition functions based
on the topological vertex with O5-planes agree with the elliptic genera after
the Higgsing. As the elliptic genera are fully refined, we compared them in the
unrefined limit as well as by double expanding each in terms of the KK
momentum and the Coulomb branch parameters.
When computing the elliptic genus for the $SU(3)$ theory with $\mathbb{Z}_{2}$
twist in section 3.3, one may wonder whether one can apply the twisting
directly on the elliptic genus for the $SU(3)$ theory, implementing the KK-
momentum shifts as done for the $SO(8)$ case in section 2.4. In the case of
$SO(8)$ theory with $\mathbb{Z}_{2}$ twist, the order two outer automorphism
maps fundamental representation to itself, and hence KK-momentum shifts can be
easily understood. For the $\mathfrak{su}(3)$ algebra, however, the
fundamental representation $\mathbf{3}$ maps to the anti-fundamental
representation $\bar{\mathbf{3}}$ under the order two outer automorphism.
From the ADHM perspective, there is one chiral multiplet which transforms as
$\mathbf{3}$ and two chiral multiplets which transform as
$\bar{\mathbf{3}}$, and hence it is not clear how to perform the automorphism
twist on these $SU(3)$ states in the ADHM construction. A more systematic
study in this direction would be desirable, as it could also be applicable to
the order three outer automorphism twist.
As yet another independent check, in Kim:2020hhh we computed the BPS spectrum
of the 6d theories with $\mathbb{Z}_{2}$ twist using the bootstrap method via
the Nakajima-Yoshioka blowup equations, which provides fully refined
partition functions. We checked our results against the BPS spectrum from the
blowup formula and found that the two results completely match.
## Acknowledgements
We would like to thank Kimyeong Lee, Kaiwen Sun, Xing-Yue Wei, and Futoshi
Yagi for useful discussions. S.K. thanks APCTP, KIAS, and POSTECH for
hospitality for his visit. The research of HK and MK is supported by the POSCO
Science Fellowship of POSCO TJ Park Foundation and the National Research
Foundation of Korea (NRF) Grant 2018R1D1A1B07042934.
## Appendix A Twisted boundary condition and affine Lie algebras
Consider a 6d gauge theory with a simple Lie algebra $\mathfrak{g}$
compactified on a circle $S^{1}$ of radius $R$. For the gauge fields
$A_{\mu}=A_{\mu}^{a}T^{a}$, where $T^{a}$ are the generators of
$\mathfrak{g}$ in the adjoint representation, we impose the periodic boundary
condition
$\displaystyle A_{\mu}(x^{i},x^{6}+2\pi R)=A_{\mu}(x^{i},x^{6})\ .$ (122)
Here, $i=1,2,\cdots,5$ and $x^{6}$ is the coordinate along $S^{1}$. The
Fourier expansion of the gauge fields preserving the periodic boundary
condition takes the form
$\displaystyle
A_{\mu}(x^{i},x^{6})=A_{\mu}^{a}(x^{i},x^{6})T^{a}=\sum_{n\in\mathbb{Z}}e^{i\frac{nx^{6}}{R}}A_{\mu,n}^{a}(x^{i})T^{a}=\sum_{n\in\mathbb{Z}}A_{\mu,n}^{a}(x^{i})T^{a}_{n},$
(123)
where $T^{a}_{n}=T^{a}e^{inx^{6}/R}$. The new basis $T^{a}_{n}$ satisfies
$\displaystyle\commutator{T^{a}_{n}}{T^{b}_{m}}=f^{abc}\,T^{c}_{n+m}\ ,$ (124)
where $f^{abc}$ is the structure constant of $\mathfrak{g}$,
$\commutator*{T^{a}}{T^{b}}=f^{abc}T^{c}$. The basis $\\{T^{a}_{n}\\}$ with
possible central extensions generates an untwisted affine Lie algebra. For a
simple Lie algebra of type $X_{\ell}$, the untwisted affine Lie algebra
constructed using the periodic boundary condition (122) is denoted by
$X_{\ell}^{(1)}$, and their Dynkin diagrams are shown in Figure 18. The
black circles are the affine nodes. Excluding the affine node gives the
Dynkin diagram of the corresponding simple Lie algebra $X_{\ell}$.
Figure 18: Dynkin diagrams of untwisted affine Lie algebras, where the black
node $\bullet$ represents the affine node.
Instead of the periodic boundary condition, one can impose a twisted boundary
condition
$\displaystyle A_{\mu}(x^{i},x^{6}+2\pi R)=\sigma(A_{\mu}(x^{i},x^{6}))\ ,$
(125)
where $\sigma$ is a finite order automorphism of $\mathfrak{g}$. If $\sigma$
is an order $m$ automorphism, i.e., $\sigma^{m}=1$, then the Fourier expansion
of the gauge fields preserving the twisted boundary condition (125) is
$\displaystyle
A_{\mu}(x^{i},x^{6})=\sum_{n,k}e^{i\frac{x^{6}}{R}(n+\frac{k}{m})}A_{\mu,n,k}^{a}(x^{i})T^{a}=\sum_{n,k}A_{\mu,n,k}^{a}T^{a}_{n+k/m}\
,$ (126)
where $T^{a}_{n+k/m}=T^{a}e^{i\frac{x^{6}}{R}(n+\frac{k}{m})}$. The
commutation relation of the new basis is given by
$\displaystyle\commutator{T^{a}_{n+k/m}}{T^{b}_{n^{\prime}+k^{\prime}/m}}=f^{abc}T^{c}_{n+n^{\prime}+(k+k^{\prime})/m}\
.$ (127)
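The mode algebra (124) and (127) can be made concrete in a toy example. The following Python sketch (our own illustration, not part of the paper) takes $\mathfrak{g}=\mathfrak{su}(2)$ with $f^{abc}=\epsilon_{abc}$ and models a mode generator $T^{a}_{s}=T^{a}e^{isx^{6}/R}$ as a pair of a matrix and a mode number, so that the commutator acts on the matrix part while the mode numbers simply add, for integer and fractional modes alike:

```python
# Toy model (illustration only) of the mode algebra (124)/(127), with
# g = su(2) realized by 2x2 matrices and f^{abc} = eps_{abc}.
from fractions import Fraction
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-1j * s / 2 for s in sigma]        # [T^a, T^b] = eps_{abc} T^c

def eps(a, b, c):
    """Totally antisymmetric structure constant of su(2)."""
    if {a, b, c} != {0, 1, 2}:
        return 0.0
    return float(np.sign((b - a) * (c - b) * (c - a)))

def mode_comm(a, s, b, t):
    """[T^a_s, T^b_t]: returns (matrix part, mode number s + t)."""
    return T[a] @ T[b] - T[b] @ T[a], s + t

# untwisted (integer) and twisted (fractional) modes both close on
# [T^a_s, T^b_t] = f^{abc} T^c_{s+t}
for s, t in [(1, 2), (Fraction(1, 2), 3), (Fraction(1, 2), Fraction(1, 2))]:
    for a in range(3):
        for b in range(3):
            M, mode = mode_comm(a, s, b, t)
            assert np.allclose(M, sum(eps(a, b, c) * T[c] for c in range(3)))
            assert mode == s + t
```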
If $\sigma$ is a conjugation, it is called an inner automorphism. In this
case, the resultant algebra is the same as the untwisted affine Lie algebra
Fuchs:1992nq . If $\sigma$ is not an inner automorphism, it is called an
outer automorphism, and the resultant algebra differs from the untwisted
affine Lie algebras. The new algebra generated by $\\{T^{a}_{n+k/m}\\}$ with
possible central extensions is called a twisted affine Lie algebra.
An outer automorphism can be viewed as a graph automorphism of Dynkin
diagrams. Only the simple Lie algebras of types $A_{n}$, $D_{n}$ and $E_{6}$
have nontrivial outer automorphisms. $A_{n}$ has a $\mathbb{Z}_{2}$ Dynkin
diagram automorphism which exchanges the simple roots $\alpha_{i}$ and
$\alpha_{n+1-i}$ as in Figure 19(a) and Figure 19(b). $D_{n}$ has an order two
outer automorphism which exchanges the simple roots $\alpha_{n-1}$ and
$\alpha_{n}$ as in Figure 19(c). The exceptional algebra $E_{6}$ also has an
order two outer automorphism shown in Figure 19(e). The $D_{4}$ algebra
additionally has an order three automorphism as Figure 19(d). For these simple
Lie algebras $X_{\ell}$, twisted affine Lie algebras associated with order
$r=2,3$ diagram automorphism are denoted by $X_{\ell}^{(r)}$, and their Dynkin
diagrams are given in Figure 20. Note that, unlike the untwisted case,
excluding the affine node does not yield the $X_{\ell}$ algebra. For example,
deleting the affine node from the $A_{2\ell}^{(2)}$ and $D_{\ell+1}^{(2)}$
algebras gives the Dynkin diagrams of the $C_{\ell}$ and $B_{\ell}$ algebras,
respectively.
Figure 19: Dynkin diagrams and graph outer automorphisms of simple Lie
algebras of type (a) $A_{2n+1}$, (b) $A_{2n}$, (c) $D_{n}$, (d) $D_{4}$ and
(e) $E_{6}$. Figure 20: Dynkin diagrams of twisted affine Lie algebras.
Since a 6d gauge theory compactified on a circle naturally has an affine Lie
algebra structure, its perturbative spectrum can be read off from the
representation theory of Lie algebras. One can write down the perturbative
part of the partition function either by using the affine root system or by
decomposing the representations of the 6d gauge algebra into representations
of the invariant subalgebra under the automorphism. For more details, see
Kac:1990gs ; Kim:2004xx and also Appendix A of Kim:2020hhh .
We list some relevant results for the perturbative part of the partition
function $Z_{\rm pert}$ used in this paper.
$(i)$ The untwisted compactification: for a 6d gauge theory with gauge algebra
$\mathfrak{g}$, the W-boson contribution to the perturbative part is given by,
in the unrefined limit,
$\displaystyle
Z_{\mathrm{pert}}=\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\frac{1}{1-q}\quantity(\chi_{\Delta_{+}}^{\mathfrak{g}}+q\,\chi_{\Delta_{-}}^{\mathfrak{g}})]\
,$ (128)
where $\chi_{\Delta_{\pm}}^{\mathfrak{g}}$ are the characters associated
with the positive and negative roots, respectively. The twisted
compactifications that
we discussed in the main text are $SO(8)$ gauge theory with $\mathbb{Z}_{2}$
twist and $SU(3)$ gauge theory with $\mathbb{Z}_{2}$ twist. Their invariant
subalgebras carry fractional KK-momentum. These fractional momenta contribute
to the perturbative part.
$(ii)$ $SO(8)$ gauge theory with $\mathbb{Z}_{2}$ twist: the adjoint
representation of $\mathfrak{so}(8)$ decomposes into the adjoint and the
fundamental representations of $\mathfrak{so}(7)$. The adjoint representation
of $\mathfrak{so}(7)$ carries integer KK charge, while the fundamental
representation carries half-integer KK charge,
$\displaystyle D_{4}$ $\displaystyle\to B_{3}$ $\displaystyle\mathbf{28}$
$\displaystyle\to\mathbf{21}_{0}\oplus\mathbf{7}_{1/2}\ ,$ (129)
where the subscript denotes the KK charge. From this, one finds that the
perturbative contribution to the partition function for $SO(8)$ gauge theory
with $\mathbb{Z}_{2}$ twist is given by
$\displaystyle
Z_{\mathrm{pert}}=\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\frac{1}{1-q}\quantity(\chi_{\Delta_{+}}^{\mathfrak{so}(7)}+q^{1/2}\chi_{\mathbf{7}}^{\mathfrak{so}(7)}+q\,\chi_{\Delta_{-}}^{\mathfrak{so}(7)})]\,.$
(130)
In the dimensional reduction limit where $R\to 0$, all the states with non-
zero KK-momentum are truncated so that only the adjoint representation of
the $\mathfrak{so}(7)$ algebra remains. We note that the invariant
subalgebra $\mathfrak{so}(7)$ can be readily read off, as it is nothing but
the algebra obtained by removing the affine node of the Dynkin diagram of
$D_{4}^{(2)}$.
$(iii)$ $SU(3)$ gauge theory with $\mathbb{Z}_{2}$ twist: the adjoint
representation of $\mathfrak{su}(3)$ decomposes into the representations of
$\mathfrak{su}(2)$ as
$\displaystyle A_{2}$ $\displaystyle\to A_{1}$ $\displaystyle\mathbf{8}$
$\displaystyle\to\mathbf{3}_{0}\oplus\mathbf{2}_{1/4}\oplus\mathbf{2}_{3/4}\oplus\mathbf{1}_{1/2}\,.$
(131)
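As an elementary sanity check (our own illustration), the branchings (129) and (131) preserve dimensions, and the listed KK charges are multiples of $1/2$ and $1/4$, matching the powers of $q$ appearing in (130) and (132):

```python
# Dimension and KK-charge bookkeeping (illustration only) for the
# branchings so(8) -> so(7), 28 = 21 + 7, and su(3) -> su(2),
# 8 = 3 + 2 + 2 + 1.
from fractions import Fraction

so8_branching = [(21, Fraction(0)), (7, Fraction(1, 2))]   # (dim, KK charge)
su3_branching = [(3, Fraction(0)), (2, Fraction(1, 4)),
                 (2, Fraction(3, 4)), (1, Fraction(1, 2))]

assert sum(d for d, _ in so8_branching) == 28   # dim adj so(8)
assert sum(d for d, _ in su3_branching) == 8    # dim adj su(3)

# KK charges: multiples of 1/2 for the so(8) twist and of 1/4 for su(3),
# matching the q^{1/2} in (130) and the q^{1/4}, q^{3/4} in (132)
assert all((2 * c).denominator == 1 for _, c in so8_branching)
assert all((4 * c).denominator == 1 for _, c in su3_branching)
```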
The perturbative contribution to the partition function for $SU(3)$ gauge
theory with $\mathbb{Z}_{2}$ twist is then given by
$\displaystyle
Z_{\mathrm{pert}}=\operatorname{PE}\quantity[\frac{2g}{(1-g)^{2}}\frac{1}{1-q}\quantity(\chi_{\Delta_{+}}^{\mathfrak{su}(2)}+(q^{1/4}+q^{3/4})\chi_{\mathbf{2}}^{\mathfrak{su}(2)}+q\,\chi_{\Delta_{-}}^{\mathfrak{su}(2)})]\
,$ (132)
up to the Cartan part. Again, in the dimensional reduction limit, only the
adjoint representation of $\mathfrak{su}(2)$ survives, and this
$\mathfrak{su}(2)$ is the algebra obtained by removing the affine node of the
Dynkin diagram of $A_{2}^{(2)}$.
## Appendix B Special functions
The Plethystic exponential is defined by
$\displaystyle\operatorname{PE}[f(x)]=\exp(\sum_{n=1}^{\infty}\frac{1}{n}f(x^{n}))\
.$
Its inverse, the Plethystic logarithm, is given by
$\displaystyle\operatorname{PL}\quantity[f(x)]=\operatorname{PE}^{-1}\quantity[f(x)]=\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\log
f(x^{n})\ ,$ (133)
where $\mu(n)$ is the Möbius function defined by
$\displaystyle\mu(n)=\left\\{\begin{array}[]{ll}(-1)^{p}&~{}\text{if $n$ is a
square-free positive integer with $p$ prime factors,}\\\
0&~{}\text{if~{}}n\text{ has a squared prime factor.}\end{array}\right.$ (136)
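These definitions can be checked order by order in a series expansion. The following Python sketch (our own illustration using sympy, not part of the paper) implements $\mu(n)$ and truncated versions of $\operatorname{PE}$ and $\operatorname{PL}$, and verifies the inversion $\operatorname{PL}[\operatorname{PE}[f]]=f$ for a sample $f$:

```python
# Plethystic exponential/logarithm as truncated power series (illustration).
import sympy as sp

x = sp.symbols('x')
N = 8  # truncation order

def mobius(n):
    """Mobius function mu(n) as in (136)."""
    facs = sp.factorint(n)
    if any(e > 1 for e in facs.values()):
        return 0
    return (-1) ** len(facs)

def PE(f, order=N):
    # PE[f] = exp(sum_n f(x^n)/n), truncated at the given order
    s = sum(f.subs(x, x**n) / n for n in range(1, order))
    return sp.exp(s).series(x, 0, order).removeO()

def PL(F, order=N):
    # PL[F] = sum_n mu(n)/n log F(x^n), truncated at the given order
    s = sum(sp.Rational(mobius(n), n) * sp.log(F.subs(x, x**n))
            for n in range(1, order))
    return sp.series(s, x, 0, order).removeO()

f = 2*x + 3*x**2         # a sample "single-letter" counting function
assert sp.expand(PL(PE(f)) - f).series(x, 0, N).removeO() == 0
```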
In topological vertex formalism, we use the following special functions for
integer partitions
$\lambda=(\lambda_{1},\lambda_{2},\cdots,\lambda_{\ell(\lambda)})$ and
$\mu=(\mu_{1},\mu_{2},\cdots,\mu_{\ell(\mu)})$,
$\displaystyle\tilde{Z}_{\nu}(g)$
$\displaystyle=\tilde{Z}_{\nu^{t}}(g)=\prod_{i=1}^{\ell(\nu)}\prod_{j=1}^{\nu_{i}}\quantity(1-g^{\nu_{i}+\nu_{j}^{t}-i-j+1})^{-1},$
(137) $\displaystyle\mathcal{R}_{\lambda\mu}(Q)$
$\displaystyle=\mathcal{M}(Q)^{-1}\mathcal{N}_{\lambda^{t}\mu}(Q)\ ,$ (138)
$\displaystyle\mathcal{M}(Q)$
$\displaystyle=\prod_{i,j=1}^{\infty}(1-Qg^{i+j-1})^{-1}=\operatorname{PE}\quantity[\frac{gQ}{(1-g)^{2}}]\
,$ (139) $\displaystyle\mathcal{N}_{\lambda\mu}(Q)$
$\displaystyle=\quantity[\prod_{i=1}^{\ell(\lambda)}\prod_{j=1}^{\lambda_{i}}\quantity(1-Qg^{\lambda_{i}+\mu^{t}_{j}-i-j+1})]\quantity[\prod_{i=1}^{\ell(\mu)}\prod_{j=1}^{\mu_{i}}\quantity(1-Qg^{-\lambda^{t}_{j}-\mu_{i}+i+j-1})]\
,$ (140)
where $g$ is the unrefined $\Omega$-deformation parameter, $\nu^{t}$ denotes
the transposed partition of $\nu$, and $Q$ is a Kähler parameter. In
practical calculations, the following Cauchy identities
macdonald_symmetric_1995 are also handy:
$\displaystyle\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\lambda/\eta_{1}}(\mathbf{x})s_{\lambda/\eta_{2}}(\mathbf{y})$
$\displaystyle=\prod_{i,j}\frac{1}{1-Qx_{i}y_{j}}\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\eta_{2}/\lambda}(\mathbf{x})s_{\eta_{1}/\lambda}(\mathbf{y})\
,$ (141)
$\displaystyle\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\lambda/\eta_{1}^{t}}(\mathbf{x})s_{\lambda^{t}/\eta_{2}}(\mathbf{y})$
$\displaystyle=\prod_{i,j}\quantity(1+Qx_{i}y_{j})\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\eta_{2}^{t}/\lambda}(\mathbf{x})s_{\eta_{1}/\lambda^{t}}(\mathbf{y})\
.$ (142)
When $\mathbf{x}=g^{-\rho-\nu_{1}}$ and $\mathbf{y}=g^{-\rho-\nu_{2}}$ for
$\rho=(-\frac{1}{2},-\frac{3}{2},-\frac{5}{2},\cdots)$, as discussed in the
main text,
$\displaystyle\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\lambda/\eta_{1}}(g^{-\rho-\nu_{1}})s_{\lambda/\eta_{2}}(g^{-\rho-\nu_{2}})=\mathcal{R}_{\nu_{2}\nu_{1}}(Q)^{-1}\sum_{\lambda}Q^{\absolutevalue{\eta_{1}}+\absolutevalue{\eta_{2}}-\absolutevalue{\lambda}}s_{\eta_{2}/\lambda}(g^{-\rho-\nu_{1}})s_{\eta_{1}/\lambda}(g^{-\rho-\nu_{2}}),$
$\displaystyle\sum_{\lambda}Q^{\absolutevalue{\lambda}}s_{\lambda/\eta_{1}^{t}}(g^{-\rho-\nu_{1}})s_{\lambda^{t}/\eta_{2}}(g^{-\rho-\nu_{2}})=\mathcal{R}_{\nu_{2}\nu_{1}}(-Q)\sum_{\lambda}Q^{\absolutevalue{\eta_{1}}+\absolutevalue{\eta_{2}}-\absolutevalue{\lambda}}s_{\eta_{2}^{t}/\lambda}(g^{-\rho-\nu_{1}})s_{\eta_{1}/\lambda^{t}}(g^{-\rho-\nu_{2}}).$
We also note that the skew Schur function $s_{\lambda/\mu}$ is zero unless
$\lambda\supset\mu$.
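Two of the identities above can be verified directly. The following sympy sketch (our own check, not part of the paper) confirms the transpose symmetry $\tilde{Z}_{\nu}=\tilde{Z}_{\nu^{t}}$ in (137), which holds because the exponent $\nu_{i}+\nu^{t}_{j}-i-j+1$ is the hook length of the box $(i,j)$, and the identity $\mathcal{M}(Q)=\operatorname{PE}[gQ/(1-g)^{2}]$ in (139) as a truncated double series:

```python
# Checks (illustration only) of the identities (137) and (139).
import sympy as sp

g, Q = sp.symbols('g Q')

def transpose(nu):
    return [sum(1 for part in nu if part > j) for j in range(max(nu))]

def Z_tilde(nu, gval):
    # product over boxes (i,j) of 1/(1 - g^{nu_i + nu^t_j - i - j + 1});
    # the exponent is the hook length of the box, hence Z~_nu = Z~_{nu^t}
    nut = transpose(nu)
    out = sp.Integer(1)
    for i, ni in enumerate(nu, start=1):
        for j in range(1, ni + 1):
            out *= 1 / (1 - gval**(ni + nut[j - 1] - i - j + 1))
    return out

nu = [4, 2, 1]
gnum = sp.Rational(1, 3)   # a generic numerical value for g
assert Z_tilde(nu, gnum) == Z_tilde(transpose(nu), gnum)

# M(Q) = prod_{i,j} (1 - Q g^{i+j-1})^{-1} = PE[gQ/(1-g)^2], eq. (139),
# compared as a double series with the products truncated at i, j <= 4
Nt = 4
M_prod = sp.Integer(1)
for i in range(1, Nt + 1):
    for j in range(1, Nt + 1):
        M_prod *= 1 / (1 - Q * g**(i + j - 1))
M_PE = sp.exp(sum((g**n * Q**n) / (n * (1 - g**n)**2)
              for n in range(1, Nt + 1)))
cp = sp.series(M_prod, Q, 0, 3).removeO()
ce = sp.series(M_PE, Q, 0, 3).removeO()
for k in range(3):
    dk = sp.series(sp.expand(cp.coeff(Q, k) - ce.coeff(Q, k)), g, 0, 5).removeO()
    assert sp.expand(dk) == 0
```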
For the periodic strip diagram given in Figure 16, the explicit form of such
a periodic vertex can be obtained using the method in Haghighat:2013gba ;
macdonald_symmetric_1995 . Using the definitions of the edge factor and the
vertex factor, one needs to evaluate an expression of the form
$\displaystyle\mathcal{G}(\mathbf{x}_{1},\\!\cdots\\!,\mathbf{x}_{4},\mathbf{y}_{1},\\!\cdots\\!,\mathbf{y}_{4})$
$\displaystyle=\sum_{\lambda_{i},\eta_{i}}q_{1}^{\absolutevalue{\lambda_{1}}}q_{2}^{\absolutevalue{\lambda_{2}}}q_{3}^{\absolutevalue{\lambda_{3}}}q_{4}^{\absolutevalue{\lambda_{4}}}s_{\lambda_{1}/\eta_{1}}(\mathbf{x}_{1})s_{\lambda_{1}/\eta_{4}}(\mathbf{y}_{1})s_{\lambda_{2}/\eta_{1}}(\mathbf{x}_{2})$
$\displaystyle~{}~{}\times
s_{\lambda_{2}/\eta_{2}}(\mathbf{y}_{2})s_{\lambda_{3}/\eta_{2}}(\mathbf{x}_{3})s_{\lambda_{3}/\eta_{3}}(\mathbf{y}_{3})s_{\lambda_{4}/\eta_{3}}(\mathbf{x}_{4})s_{\lambda_{4}/\eta_{4}}(\mathbf{y}_{4}).$
(143)
A successive use of the Cauchy identities (141) yields
$\displaystyle\mathcal{G}(\mathbf{x}_{1},\cdots,\mathbf{y}_{4})=\mathcal{F}(\mathbf{x}_{1},\cdots,\mathbf{y}_{4})\mathcal{G}(Q\mathbf{x}_{1},\cdots,Q\mathbf{x}_{4},Q\mathbf{y}_{1},\cdots,Q\mathbf{y}_{4}),$
(144)
where $Q=q_{1}q_{2}q_{3}q_{4}$ and
$\mathcal{F}(\mathbf{x}_{1},\cdots,\mathbf{y}_{4})=F(\mathbf{x}_{1},\cdots,\mathbf{y}_{4})F(Q^{1/2}\mathbf{x}_{1},\cdots,Q^{1/2}\mathbf{y}_{4})$
for
$\displaystyle F(\mathbf{x}_{1},\cdots,\mathbf{y}_{4})$ (145)
$\displaystyle=\prod\Bigg{[}\frac{1}{\left(1-q_{1}\mathbf{x}_{1}\mathbf{y}_{1}\right)\left(1-q_{2}\mathbf{x}_{2}\mathbf{y}_{2}\right)\left(1-q_{3}\mathbf{x}_{3}\mathbf{y}_{3}\right)\left(1-q_{4}\mathbf{x}_{4}\mathbf{y}_{4}\right)\left(1-q_{1,2}\mathbf{y}_{1}\mathbf{y}_{2}\right)}$
$\displaystyle\times\frac{1}{\left(1-q_{1,4}\mathbf{x}_{1}\mathbf{x}_{4}\right)\left(1-q_{2,3}\mathbf{x}_{2}\mathbf{y}_{3}\right)\left(1-q_{3,4}\mathbf{x}_{3}\mathbf{y}_{4}\right)\left(1-q_{1,2,3}\mathbf{y}_{1}\mathbf{y}_{3}\right)\left(1-q_{1,2,4}\mathbf{y}_{2}\mathbf{x}_{4}\right)}$
$\displaystyle\times\frac{1}{\left(1-q_{1,3,4}\mathbf{x}_{3}\mathbf{x}_{1}\right)\left(1-q_{2,3,4}\mathbf{x}_{2}\mathbf{y}_{4}\right)\left(1-Q\mathbf{x}_{2}\mathbf{x}_{1}\right)\left(1-Q\mathbf{y}_{2}\mathbf{x}_{3}\right)\left(1-Q\mathbf{x}_{4}\mathbf{y}_{3}\right)\left(1-Q\mathbf{y}_{1}\mathbf{y}_{4}\right)}\Bigg{]}.$
Here, $q_{i,j,\cdots}=q_{i}q_{j}\cdots$ and
$\prod(1-Q\mathbf{x}_{i}\mathbf{y}_{j})=\prod_{m,n}\big{(}1-(\mathbf{x}_{i})_{m}(\mathbf{y}_{j})_{n}Q\big{)}$
for integer partitions $\mathbf{x}_{i}$ and $\mathbf{y}_{j}$. After
repeating this procedure $m$ times, $\mathbf{x}_{i}$ and $\mathbf{y}_{j}$ are
replaced by $Q^{m}\mathbf{x}_{i}$ and $Q^{m}\mathbf{y}_{j}$, respectively,
and hence,
$\displaystyle\mathcal{G}(\mathbf{x}_{1},\cdots,\mathbf{y}_{4})=\quantity[\prod_{m=0}^{n}\mathcal{F}(Q^{m}\mathbf{x}_{1},\cdots,Q^{m}\mathbf{y}_{4})]\mathcal{G}(Q^{n+1}\mathbf{x}_{1},\cdots,Q^{n+1}\mathbf{y}_{4}).$
(146)
Since $Q^{n}\to 0$ as $n\to\infty$, which is the condition used for deriving
the Cauchy identities, the only contributing terms in
$\mathcal{G}(Q^{n}\mathbf{x}_{1},\cdots,Q^{n}\mathbf{y}_{4})$ in this limit
are those with $\lambda_{i}=\eta_{j}$ for all $i$ and $j$:
$\displaystyle\lim_{n\to\infty}\mathcal{G}(Q^{n}\mathbf{x}_{1},\cdots,Q^{n}\mathbf{y}_{4})=\sum_{\lambda}Q^{\absolutevalue{\lambda}}=\prod_{m=1}^{\infty}\frac{1}{1-Q^{m}}\
.$ (147)
Hence, it follows that
$\displaystyle\mathcal{G}(\mathbf{x}_{1},\cdots,\mathbf{y}_{4})=\prod_{n=1}^{\infty}\frac{\mathcal{F}(Q^{n-1}\mathbf{x}_{1},\cdots,Q^{n-1}\mathbf{y}_{4})}{1-Q^{n}}\
.$ (148)
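Equation (144) is a functional recursion $\mathcal{G}(x)=\mathcal{F}(x)\,\mathcal{G}(Qx)$ solved by iterating until the argument is driven to zero, as in (146)-(148). The following toy Python sketch (our own illustration with a scalar placeholder $F$, normalized so that $G\to 1$ at zero argument rather than by (147)) shows the mechanism:

```python
# Toy solver (illustration only) for a recursion G(x) = F(x) G(Q x) with
# |Q| < 1 and F(0) = 1, iterated until the remaining factors are ~ 1.
import math

def solve_recursion(F, x, Q, tol=1e-15, max_iter=10_000):
    """Return G(x) = prod_{n>=0} F(Q^n x)."""
    G, xn = 1.0, x
    for _ in range(max_iter):
        G *= F(xn)
        xn *= Q
        if abs(F(xn) - 1.0) < tol:
            break
    return G

# example: F(x) = 1/(1-x) gives the infinite product prod_n (1 - Q^n x)^{-1}
Q, x0 = 0.3, 0.5
G = solve_recursion(lambda x: 1.0 / (1.0 - x), x0, Q)
ref = math.prod(1.0 / (1.0 - Q**n * x0) for n in range(200))
assert abs(G - ref) < 1e-10
```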
In the localization computation, the Dedekind eta function $\eta$ and the
Jacobi theta function $\theta_{1}(x)$ are defined as follows. For the complex
structure $\tau$ of a torus, with $q=e^{2\pi i\tau}$,
$\displaystyle\eta$
$\displaystyle=q^{\frac{1}{24}}\prod_{n=1}^{\infty}(1-q^{n})\ ,$ (149)
$\displaystyle\theta_{1}(x)$
$\displaystyle=-iq^{\frac{1}{8}}y^{\frac{1}{2}}\prod_{n=1}^{\infty}(1-q^{n})(1-yq^{n})(1-y^{-1}q^{n-1})\
,$ (150)
where $y=e^{2\pi ix}$. They satisfy
$\displaystyle\theta_{1}(-x)=-\theta_{1}(x),\quad\theta_{1}(n\tau)=0\quad(n\in\mathbb{Z})\
,$ $\displaystyle\frac{1}{2\pi
i}\oint_{u=0}\frac{du}{\theta_{1}(u)}=\frac{1}{2\pi\eta^{3}}\ ,$ (151)
where $\oint_{u=0}$ means that the integration contour is taken around
$u=0$ so that only the residue at $u=0$ contributes. These properties are
useful when evaluating the JK-residues.
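The product formulas (149)-(150) and the properties (151) can be checked numerically. The following Python sketch (our own illustration, with the products truncated at a finite number of factors) verifies that $\theta_{1}$ is odd, vanishes at $x=0$, and behaves as $\theta_{1}(u)\sim 2\pi\eta^{3}u$ near $u=0$, which is exactly what produces the residue in (151):

```python
# Numerical check (illustration only) of the product formulas (149)-(150)
# and the properties (151), with products truncated at N factors.
import cmath

def eta(tau, N=200):
    q = cmath.exp(2j * cmath.pi * tau)
    out = q ** (1 / 24)
    for n in range(1, N):
        out *= 1 - q**n
    return out

def theta1(x, tau, N=200):
    q = cmath.exp(2j * cmath.pi * tau)
    y = cmath.exp(2j * cmath.pi * x)
    out = -1j * q ** (1 / 8) * y ** 0.5
    for n in range(1, N):
        out *= (1 - q**n) * (1 - y * q**n) * (1 - q**(n - 1) / y)
    return out

tau0 = 0.1 + 0.8j                 # a point in the upper half-plane, |q| < 1
x0 = 0.23 + 0.11j
assert abs(theta1(-x0, tau0) + theta1(x0, tau0)) < 1e-10  # theta_1 is odd
assert abs(theta1(0, tau0)) < 1e-12                       # zero at x = 0

# residue identity underlying (151): theta_1(u) ~ 2 pi eta^3 u near u = 0
u = 1e-6
assert abs(theta1(u, tau0) / u - 2 * cmath.pi * eta(tau0) ** 3) < 1e-4
```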
# The KPP equation as a scaling limit of locally interacting Brownian particles

Keywords and phrases: Fisher-KPP equation; scaling limits; interacting diffusions; local interaction; proliferation.
AMS subject classification (2020): 82C22, 82C21, 60K40.
Franco Flandoli and Ruojun Huang
###### Abstract
The Fisher-KPP equation is proved to be the scaling limit of a system of
Brownian particles with local interaction. Particles proliferate and die
depending on the local concentration of other particles. In contrast to
discrete models, controlling the concentration of particles is a major
difficulty for interacting Brownian particles; local interactions, as opposed
to mean-field or moderate ones, make it harder to implement
law-of-large-numbers properties. The approach taken here to overcome these
difficulties is largely inspired by that of A. Hammond and F. Rezakhanlou [10],
developed there for the mean-free-path regime rather than the local
interaction regime.
## 1 Introduction
We consider the scaling limit (in a “local” interaction regime) of the
empirical measure of an interacting Brownian particle system in
$\mathbb{R}^{d}$, $d\geq 1$, describing the proliferation mechanism of cells,
where only approximately a constant number of particles interact with a given
particle at any given time. We connect the evolution of its empirical measure
process to the Fisher-KPP (f-kpp) equation.
The f-kpp equation is related to particle systems in several ways. One of them
is the probabilistic representation by branching processes, see [12] which
originated a large literature. Others have to do with scaling limits of
interacting particles. In the case of discrete particles occupying the sites
of a lattice, with local interaction some of the main classical works are [3,
5, 6]; see also the recent contributions [4, 9].
The discrete setting, as is known for many other systems (see for instance
[11]), offers special opportunities due to the simplicity of certain invariant
or underlying measures, often of Bernoulli type; the technology in that case
has become very rich and deeply developed. The case of interacting diffusions
is less developed. Mean-field theory for diffusions is flexible and elegant
[19], but localizing the interactions is very difficult; see for instance
[21, 20] for a few of the attempts. When the
interaction is moderate, namely intermediate between mean field and local,
there are more technical opportunities, widely discovered by K. Oelschläger in
a series of works including [16, 17]. Along these lines, also the f-kpp
equation has been obtained as a scaling limit in [8]. Let us mention also [13,
1, 14, 15, 18] for related works.
In the present work we fill this gap and prove that the f-kpp equation is
also the scaling limit of locally interacting diffusions. In a sense, this is
the analog of the discrete particle results of [3] and the other references
above. The proof is not based on special reference measures of Bernoulli type
as in the discrete case (not invariant in the present proliferation case, but
still fundamental), but it is strongly inspired by the work of [10], which
deals with locally interacting diffusions in the mean-free-path regime, that
we adapt to the local regime (the former requires that a particle meets a
finite number of others, on average, in a unit amount of time; the latter
requires that a particle has a finite number of others, on average, in its own
neighborhood, where interaction takes place). Compared to the discrete setting
[3], where the dynamics is a superposition of simple-exclusion process (which
leads to the diffusion operator) and spin-flip dynamics (leading to the
reaction term) and the number of particles per site is either zero or one, we
have to worry about concentration of particles, one of the main difficulties
for the investigation of interacting diffusions.
After this short introduction to the subject, let us give some more technical
details. We start with the formal definition of a closely-related model
already studied in [8] in the so-called moderate interaction regime. Then we
introduce our slightly altered model.
###### Definition 1
A configuration of the system is a sequence
$\eta=\left(x_{i},a_{i}\right)_{i\in\mathbb{N}}\in\left(\mathbb{R}^{d}\times\left\\{L,N\right\\}\right)^{\mathbb{N}}$
with the following constraint: there exists
$i_{\max}\left(\eta\right)\in\mathbb{N}$ such that $a_{i}=L$ for $i\leq
i_{\max}\left(\eta\right)$, $a_{i}=N$ and $x_{i}=0$ for
$i>i_{\max}\left(\eta\right)$.
The heuristic meaning is that particles with index $i\leq
i_{\max}\left(\eta\right)$ exist, are alive ($=L$), and occupy position
$x_{i}$; particles with $i>i_{\max}\left(\eta\right)$ do not exist yet ($=N$),
but we formally include them in the description; they are placed at $x_{i}=0$.
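The definition above can be transcribed directly into code. The following minimal sketch (ours, with illustrative names) stores only the finite list of alive positions and reconstructs the pair $(x_{i},a_{i})$ for any index:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Configuration:
    """A configuration eta: alive particles occupy positions x_1, ..., x_{i_max}."""
    alive: List[Tuple[float, ...]]  # positions in R^d; label L is implicit

    def i_max(self) -> int:
        return len(self.alive)

    def coord(self, i: int) -> Tuple[Tuple[float, ...], str]:
        """Return (x_i, a_i): label L for i <= i_max, else (0, N) by convention."""
        if 1 <= i <= self.i_max():
            return self.alive[i - 1], "L"
        d = len(self.alive[0]) if self.alive else 1
        return (0.0,) * d, "N"

eta = Configuration(alive=[(0.5,), (-1.2,)])  # two alive particles in d = 1
```

Indices beyond $i_{\max}(\eta)$ return the placeholder $(0,N)$, matching the convention in Definition 1.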
Test functions $F$ are functions on
$\left(\mathbb{R}^{d}\times\left\\{L,N\right\\}\right)^{\mathbb{N}}$ which
depend only on a finite number of coordinates,
$F=F\left(x_{1},...,x_{n},a_{1},...,a_{n}\right)$ with
$\left(x_{i},a_{i}\right)\in\mathbb{R}^{d}\times\left\\{L,N\right\\}$ and are
smooth in $\left(x_{1},...,x_{n}\right)\in\mathbb{R}^{dn}$.
###### Definition 2
The infinitesimal generator $\mathcal{L}_{N}$, parametrized by
$N\in\mathbb{N}$, is given by
$\displaystyle\left(\mathcal{L}_{N}F\right)\left(\eta\right)=\sum_{i\leq
i_{\max}\left(\eta\right)}\frac{1}{2}\Delta_{x_{i}}F\left(\eta\right)+\sum_{j\leq
i_{\max}\left(\eta\right)}\lambda_{N}^{j}\left(\eta\right)\left[F\left(\eta^{j}\right)-F\left(\eta\right)\right]$
(1)
where, if $\eta=\left(x_{i},a_{i}\right)_{i\in\mathbb{N}}$, then
$\eta^{j}=(x_{i}^{j},a_{i}^{j})_{i\in\mathbb{N}}$ is given by
$\displaystyle(x_{i}^{j},a_{i}^{j})$
$\displaystyle=\left(x_{i},a_{i}\right)\text{ for }i\neq
i_{\max}\left(\eta\right)+1$
$\displaystyle\big{(}x_{i_{\max}\left(\eta\right)+1}^{j},a_{i_{\max}\left(\eta\right)+1}^{j}\big{)}$
$\displaystyle=\left(x_{j},L\right).$
The rate $\lambda_{N}^{j}\left(\eta\right)$ is given by
$\displaystyle\lambda_{N}^{j}\left(\eta\right)=\Big{(}1-\frac{1}{N}\sum_{k\leq
i_{\max}\left(\eta\right)}\theta_{N}\left(x_{j}-x_{k}\right)\Big{)}^{+}$ (2)
where $\theta_{N}$ are smooth, compactly supported mollifiers converging to
the Dirac delta at zero, at a rate specified in the sequel.
The heuristic behind this definition is the following: i) existing particles
move at random as independent Brownian motions; ii) a new particle can be created
at the position of an existing particle $j$, with rate proportional to the
empty space in a neighborhood of $x_{j}$, neighborhood described by the
support of $\theta_{N}$. Our aim is to choose the scaling of $\theta_{N}$,
namely the neighborhood of interaction, such that only a small finite number
of particles different from $j$ are in that neighborhood.
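The two-part heuristic above can be sketched in a toy simulation. The following is ours, not the paper's code: a crude Euler discretization in $d=1$ of Brownian moves plus proliferation at the rate (2), where `theta_eps`, the time step, and all parameter values are our own illustrative choices.

```python
import math
import random

def theta_eps(x, eps):
    """Mollifier theta^eps(x) = eps^{-1} theta(x/eps) with theta = (1/2) 1_[-1,1]."""
    return 0.5 / eps if abs(x) <= eps else 0.0

def step(positions, N, eps, dt, rng):
    """One Euler step: i) Brownian displacement; ii) births at rate (2)."""
    positions = [x + math.sqrt(dt) * rng.gauss(0.0, 1.0) for x in positions]
    born = []
    for xj in positions:
        crowding = sum(theta_eps(xj - xk, eps) for xk in positions) / N
        rate = max(1.0 - crowding, 0.0)  # the positive part in (2)
        if rng.random() < rate * dt:     # birth at the parent's position x_j
            born.append(xj)
    return positions + born

rng = random.Random(0)
N = 100
eps = 1.0 / N                            # local regime: eps = N^{-1/d}, d = 1
positions = [rng.uniform(-1.0, 1.0) for _ in range(N)]
for _ in range(20):
    positions = step(positions, N, eps, dt=0.01, rng=rng)
```

Since this sketch has no killing, the population can only grow; with the initial density well below $1$, the birth rate stays close to $1$ in empty regions.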
In the classical studies of continuum interacting particle systems, where
interactions are modulated by a potential, one usually takes
$\displaystyle\theta_{N}(x)=N^{\beta}\theta(N^{\beta/d}x)$
for some smooth compactly supported function $\theta(\cdot)$, where $N$ is the
order of the number of particles in the system. The case $\beta=0$ is called
mean-field, since all particles interact with each other at any given time.
The case $\beta\in(0,1)$ is called moderate: not all particles interact at
any given time, but the number of interacting particles nevertheless diverges
with $N$. The case
$\beta=1$ is called local, as one would expect that in a neighborhood of
radius $N^{-1/d}$, only a constant number of particles interact. Of course,
here we are assuming that particles are relatively homogeneously distributed
in space at all times down to the microscopic scale (which is not always
proven). For the system with generator (1), the moderate scaling regime with
$\beta\in(0,1/2)$ has been studied and its scaling limit to f-kpp equation
established in [8], with earlier results [17] for a shorter range of $\beta$,
and so is the mean-field case whose limit is a different kind of equation [2,
7]; our aim here is to study the local regime, subject to a modification of
the rate (2).
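The neighbor counts behind this terminology can be checked on the back of an envelope (our illustration, not the paper's): with $N$ particles spread at unit density, a ball of radius $N^{-\beta/d}$ around a given particle contains on the order of $N\cdot N^{-\beta}=N^{1-\beta}$ other particles, up to a constant volume factor.

```python
def expected_neighbors(N, beta):
    """Order of the number of particles interacting with a given one."""
    return N ** (1.0 - beta)

counts = {beta: [expected_neighbors(N, beta) for N in (10**2, 10**4, 10**6)]
          for beta in (0.0, 0.5, 1.0)}
# beta = 0   (mean-field): grows like N
# beta = 0.5 (moderate):   diverges like sqrt(N)
# beta = 1   (local):      stays O(1)
```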
We introduce a scale parameter $\epsilon\in(0,1]$ describing the length scale
where two particles can interact. In particular, in the local regime,
$\epsilon=N^{-1/d}$, but our result is more general, covering also the whole
moderate regime ([8, 17]). Then we consider the mollifier
$\displaystyle\theta^{\epsilon}(x):=\epsilon^{-d}\theta(\epsilon^{-1}x)$
built from a given, nonnegative, Hölder continuous and compactly supported
function $\theta:\mathbb{R}^{d}\to\mathbb{R}_{+}$ with $\int\theta=1$. The
rate of proliferation (2) can be written as
$\displaystyle\lambda^{j}_{N}(\eta)=\Big{(}1-\frac{1}{N}\sum_{k\leq
i_{\text{max}}(\eta)}\theta^{\epsilon}(x_{j}-x_{k})\Big{)}^{+}$ (3)
whereby the proliferation part of the generator is
$\displaystyle(\mathcal{L}_{C}F)(\eta)$ $\displaystyle:=\sum_{j\leq
i_{\text{max}}(\eta)}\Big{[}1-\frac{1}{N}\sum_{k\leq
i_{\text{max}}(\eta)}\theta^{\epsilon}(x_{j}-x_{k})\Big{]}^{+}[F(\eta^{j})-F(\eta)].$
Throughout the paper any sum is only over particles alive in the system (whose
cardinality is always finite), hence we do not discuss the label $a_{j}$ of
particle $j$.
Heuristically, the positive part in the rate (3) should be insignificant: if
we start with a density profile not larger than $1$, we would guess that the
density of particles subsequently remains no larger than $1$ everywhere.
This is the case for the f-kpp equation (see (8) below). However, at the
microscopic level we do not have effective control on the scale of $\epsilon$,
even a posteriori. Hence in this paper we consider a slightly altered model,
namely one without the positive part in the rate.
Note that now the proliferation rate can be negative, which we will interpret
to mean, in terms of the proliferation part of the generator,
$\displaystyle(\widetilde{\mathcal{L}}_{C}F)(\eta)$ $\displaystyle=\sum_{j\leq
i_{\text{max}}(\eta)}[F(\eta^{j})-F(\eta)]+\frac{1}{N}\sum_{j,k\leq
i_{\text{max}}(\eta)}\theta^{\epsilon}(x_{j}-x_{k})[F(\eta^{-j})-F(\eta)],$
(4)
where $\eta^{-j}$ signifies deleting particle $j$ from the collection $\eta$.
Thus, the infinitesimal generator of our particle system under study is
$\displaystyle(\widetilde{\mathcal{L}}_{N}F)(\eta)=\sum_{j\leq
i_{\text{max}}(\eta)}\frac{1}{2}\Delta_{x_{j}}F(\eta)+(\widetilde{\mathcal{L}}_{C}F)(\eta).$
(5)
###### Condition 3
The function $u_{0}$ appearing as initial condition satisfies:
(a). It is compactly supported in $\mathbb{B}(0,R)$, the open ball of radius
$R$ around the origin.
(b). $0\leq u_{0}(x)\leq\gamma$ for some finite constant $\gamma$ and all
$x\in\mathbb{R}^{d}$.
In particular, $u_{0}\in L^{1}_{+}(\mathbb{R}^{d})$ (the space of nonnegative
integrable functions), with $\|u_{0}\|_{L^{1}}\leq\gamma|\mathbb{B}(0,R)|$.
Denoting by $\eta(t)$ the collection of alive particles at time $t\geq 0$, and
$N(t)$ its cardinality, we distribute at time $t=0$,
$i_{\max}(\eta(0))=N_{0}:=N\int_{\mathbb{R}^{d}}u_{0}$
number of points independently with identical probability density $(\int
u_{0})^{-1}u_{0}$ on $\mathbb{R}^{d}$, for some $u_{0}$ satisfying Condition
3. In particular, $u_{0}(x)dx$ is the weak limit, in probability, of the
initial (normalized) empirical measure:
$\displaystyle\frac{1}{N}\sum_{j\leq
N_{0}}\delta_{x_{j}(0)}(x)\stackrel{{\scriptstyle
w}}{{\Rightarrow}}u_{0}(x)dx.$ (6)
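The weak convergence (6) is easy to illustrate numerically (our sketch; the choices of $u_{0}$ and the test function are ours): sample $N_{0}=N\int u_{0}$ points from the density $(\int u_{0})^{-1}u_{0}$ and pair the normalized empirical measure with a test function. Here $u_{0}=1_{[0,1]}$ (so $\gamma=1$, $\int u_{0}=1$) and $\phi(x)=x$, for which $\langle u_{0}(x)dx,\phi\rangle=1/2$.

```python
import random

rng = random.Random(1)
N = 200000
N0 = N                                  # N_0 = N * int(u_0) = N here
points = [rng.random() for _ in range(N0)]   # i.i.d. samples from u_0
pairing = sum(points) / N               # <(1/N) sum_j delta_{x_j}, phi>, phi(x) = x
```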
We introduce the sequence of space-time (normalized) empirical measures
$\displaystyle\xi^{N}\left(dt,dx\right):=\frac{1}{N}dt\sum_{j\leq
N(t)}\delta_{x_{j}(t)}(dx),\quad N\in\mathbb{N},$ (7)
taking values in the space $\mathcal{M}=\mathcal{M}_{T}$ of nonnegative finite
measures on $[0,T]\times\mathbb{R}^{d}$ endowed with the weak topology. Since
these are random measures, the mappings
$\displaystyle\omega\mapsto\xi^{N}(dt,dx,\omega)$
induce probability measures $\mathcal{P}^{N}$ on the space $\mathcal{M}$. That
is, $\mathcal{P}^{N}\in\mathcal{P}(\mathcal{M})$.
We also introduce the f-kpp equation on $[0,T]\times\mathbb{R}^{d}$:
$\displaystyle\partial_{t}u=\frac{1}{2}\Delta u+u-u^{2},\quad
u(0,\cdot)=u_{0},$ (8)
with $u_{0}$ satisfying Condition 3. We take as weak formulation of (8) that
for any $\phi\in C_{c}^{1,2}([0,T)\times\mathbb{R}^{d})$,
$\displaystyle 0$
$\displaystyle=\int_{\mathbb{R}^{d}}u_{0}(x)\phi(0,x)dx+\int_{0}^{T}\int_{\mathbb{R}^{d}}u(t,x)\partial_{t}\phi(t,x)\,dxdt$
$\displaystyle+\int_{0}^{T}\int_{\mathbb{R}^{d}}\left[\frac{1}{2}u(t,x)\Delta\phi(t,x)+(u-u^{2})(t,x)\phi(t,x)\right]dxdt\,.$
(9)
More precisely, see Definition 14. The choice of this weak formulation,
together with the uniqueness of the f-kpp equation is discussed in Section 3.
Our main result is the following theorem, which is proved in Section 2.
###### Theorem 4
Suppose that $\epsilon=\epsilon(N)$ is such that $\epsilon^{-d}\leq CN$ for
some finite constant $C$ and $\epsilon(N)\to 0$ as $N\to\infty$. Then, for
every finite $T$ and $d\geq 1$, the sequence of probability measures
$\\{\mathcal{P}^{N}\\}_{N}$ induced by
$\\{\omega\mapsto\xi^{N}(dt,dx,\omega)\\}_{N}$ converges weakly in the space
$\mathcal{P}(\mathcal{M})$ to a Dirac measure on $\xi(dt,dx)\in\mathcal{M}$.
The measure $\xi$ is absolutely continuous with respect to the Lebesgue
measure on $[0,T]\times\mathbb{R}^{d}$, i.e. $\xi(dt,dx)=u(t,x)dtdx$. The
density $u(t,x)$ is the unique weak solution to the f-kpp equation (8), in the
sense of (9).
## 2 Proof of the main result
Our proof is based on adapting the strategy of [10], which deals with scaling
limits to coagulation-type PDEs. The key to the proof of the main result is an
Itô-Tanaka trick, well known in the setting of SDEs. Specifically, for every
$\epsilon$ and $T$, we define an auxiliary function
$r^{\epsilon}(t,x)=r^{\epsilon,T}(t,x):[0,T]\times\mathbb{R}^{d}\to\mathbb{R}_{+}$,
which is the unique solution to the PDE terminal value problem:
$\displaystyle\begin{cases}\partial_{t}r^{\epsilon}(t,x)+\Delta
r^{\epsilon}(t,x)+\theta^{\epsilon}(x)=0\\\ r^{\epsilon}(T,x)=0\end{cases}.$
(10)
Denoting by $C_{0}$ the maximum radius of the compact support of $\theta$, we
have the following estimates for $r^{\epsilon}$ and $\nabla r^{\epsilon}$.
###### Proposition 5
There exists finite constant $C(d,T,C_{0})$ such that for any
$x\in\mathbb{R}^{d},\epsilon>0,t\in[0,T]$ we have that
$\displaystyle|r^{\epsilon}(t,x)|\leq\begin{cases}Ce^{-C|x|^{2}}1_{\\{|x|\geq
1\\}}+C\left(|x|\vee\epsilon\right)^{2-d}1_{\\{|x|<1\\}},\quad d\neq 2\\\
Ce^{-C|x|^{2}}1_{\\{|x|\geq
1\\}}+C|\log\left(|x|\vee\epsilon\right)|1_{\\{|x|<1\\}},\quad d=2\end{cases}$
(11) $\displaystyle|\nabla_{x}r^{\epsilon}(t,x)|\leq
Ce^{-C|x|^{2}}1_{\\{|x|\geq
1\\}}+C\left(|x|\vee\epsilon\right)^{1-d}1_{\\{|x|<1\\}},\quad d\geq 1.$ (12)
Proof. We first demonstrate (11). Write
$r^{\epsilon}\left(t,x\right)=u^{\epsilon}\left(T-t,x\right)$
with
$\displaystyle\begin{cases}\partial_{t}u^{\epsilon}\left(t,x\right)=\Delta
u^{\epsilon}\left(t,x\right)+\theta^{\epsilon}\left(x\right)\\\
u^{\epsilon}\left(0,x\right)=0.\end{cases}$
For each fixed $\epsilon>0$, the function $u^{\epsilon}\left(t,x\right)$ is of
class $C^{1,2}([0,T]\times\mathbb{R}^{d})$ (since we assumed that $\theta\in
C^{\alpha}(\mathbb{R}^{d})$ for some $\alpha\in(0,1)$) and it is given by the
explicit formula
$u^{\epsilon}\left(t,x\right)=\int_{0}^{t}\int_{\mathbb{R}^{d}}p_{t-s}\left(x-y\right)\theta^{\epsilon}\left(y\right)dyds$
where
$p_{t}\left(x\right):=\left(4\pi
t\right)^{-d/2}\exp\left(-\frac{\left|x\right|^{2}}{4t}\right)\qquad\text{for
}t>0.$
We also have
$\displaystyle u^{\epsilon}\left(t,x\right)$
$\displaystyle=\int_{0}^{t}\int_{\mathbb{R}^{d}}p_{s}\left(y\right)\theta^{\epsilon}\left(x-y\right)dyds$
$\displaystyle=\int_{\mathbb{R}^{d}}\theta^{\epsilon}\left(x-y\right)\left[\int_{0}^{t}p_{s}\left(y\right)ds\right]dy.$
This reformulation is crucial for understanding the “singularity” of
$u^{\epsilon}$ (let us repeat: $u^{\epsilon}$ is smooth, but it becomes
singular at $x=0$ as $\epsilon\rightarrow 0$). Call
$\displaystyle K\left(t,x\right):=\int_{0}^{t}p_{s}\left(x\right)ds$ (13)
the kernel of this formula, such that
$u^{\epsilon}\left(t,x\right)=\int_{\mathbb{R}^{d}}\theta^{\epsilon}\left(x-y\right)K\left(t,y\right)dy.$
For $x\neq 0$ the function $K\left(t,x\right)$ is well defined and smooth:
notice that $s\mapsto p_{s}\left(x\right)$ is integrable at $s=0$, and on any
set $\left[0,T\right]\times\overline{\mathcal{O}}$ with
$0\not\in\overline{\mathcal{O}}$, the function $p_{s}\left(x\right)$ is
uniformly continuous with all its derivatives in $x$ (extended equal to zero
for $t=0$). But for $x=0$ it is well defined only in dimension $d=1$.
We have, for $x\neq 0$,
$\displaystyle K\left(t,x\right)$ $\displaystyle=\int_{0}^{t}\left(4\pi
s\right)^{-d/2}\exp\left(-\frac{\left|x\right|^{2}}{4s}\right)ds$
$\displaystyle\stackrel{{\scriptstyle
r=\frac{\left|x\right|^{2}}{4s}}}{{=}}\int_{\frac{\left|x\right|^{2}}{4t}}^{\infty}\left(\frac{\pi\left|x\right|^{2}}{r}\right)^{-d/2}\exp\left(-r\right)\frac{\left|x\right|^{2}}{4r^{2}}dr$
$\displaystyle=\frac{1}{4\pi^{d/2}\left|x\right|^{d-2}}\int_{\frac{\left|x\right|^{2}}{4t}}^{\infty}r^{\frac{d-4}{2}}\exp\left(-r\right)dr.$
Therefore
$\displaystyle K\left(t,x\right)$
$\displaystyle=\frac{1}{\left|x\right|^{d-2}}G\left(t,x\right)$ $\displaystyle
G\left(t,x\right)$
$\displaystyle:=\frac{1}{4\pi^{d/2}}\int_{\frac{\left|x\right|^{2}}{4t}}^{\infty}r^{\frac{d-4}{2}}\exp\left(-r\right)dr.$
Since
$\displaystyle G\left(t,x\right)$
$\displaystyle:=\frac{1}{4\pi^{d/2}}\int_{\frac{\left|x\right|^{2}}{4t}}^{\infty}r^{\frac{d-4}{2}}\exp\left(-r\right)dr$
$\displaystyle\leq\frac{1}{4\pi^{d/2}}\int_{\frac{\left|x\right|^{2}}{4T}}^{\infty}r^{\frac{d-4}{2}}\exp\left(-r\right)dr$
$\displaystyle\overset{d\neq
2}{\leq}A\exp\left(-\alpha\left|x\right|^{2}\right)+B\left|x\right|^{d-2}1_{\\{|x|<1\\}}$
$G\left(t,x\right)\overset{d=2}{\leq}A\exp\left(-\alpha\left|x\right|^{2}\right)-B\log\left|x\right|1_{\\{|x|<1\\}}$
for some $A,B,\alpha>0$. Therefore
$\displaystyle K\left(t,x\right)\overset{d\neq
2}{\leq}\frac{1}{\left|x\right|^{d-2}}\left[A\exp\left(-\alpha\left|x\right|^{2}\right)+B\left|x\right|^{d-2}1_{\\{|x|<1\\}}\right]$
$\displaystyle\leq\frac{1}{\left|x\right|^{d-2}}A\exp\left(-\alpha\left|x\right|^{2}\right)+B1_{\\{|x|<1\\}}$
$K\left(t,x\right)\overset{d=2}{\leq}A-B\log\left|x\right|1_{\\{|x|<1\\}}.$
It follows
$\displaystyle u^{\epsilon}\left(t,x\right)$ $\displaystyle\overset{d\neq
2}{\leq}A\int_{\mathbb{R}^{d}}\theta^{\epsilon}\left(x-y\right)\frac{1}{\left|y\right|^{d-2}}\exp\left(-\alpha\left|y\right|^{2}\right)dy+B1_{\\{|x|<1\\}}$
$\displaystyle\leq Ce^{-C|x|^{2}}1_{\\{|x|\geq
1\\}}+C\left(|x|\vee\epsilon\right)^{2-d}1_{\\{|x|<1\\}}$
$u^{\epsilon}\left(t,x\right)\overset{d=2}{\leq}Ce^{-C|x|^{2}}1_{\\{|x|\geq
1\\}}+C|\log\left(|x|\vee\epsilon\right)|1_{\\{|x|<1\\}}.$
for some $C>0$.
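As an independent sanity check (ours, not part of the proof): in $d=3$ the substitution above gives the closed form $K(t,x)=\mathrm{erfc}(|x|/(2\sqrt{t}))/(4\pi|x|)$, since $\int_{a}^{\infty}r^{-1/2}e^{-r}dr=\sqrt{\pi}\,\mathrm{erfc}(\sqrt{a})$; this can be compared numerically against the defining integral (13).

```python
import math

def p(s, x):
    """Heat kernel of Delta in d = 3 at radius x."""
    return (4.0 * math.pi * s) ** -1.5 * math.exp(-x * x / (4.0 * s))

def K_riemann(t, x, n=100000):
    """Midpoint Riemann sum of the defining integral (13)."""
    ds = t / n
    return sum(p((i + 0.5) * ds, x) for i in range(n)) * ds

def K_closed(t, x):
    """Closed form of K(t, x) in d = 3 via the incomplete gamma / erfc identity."""
    return math.erfc(x / (2.0 * math.sqrt(t))) / (4.0 * math.pi * x)
```

As $t\to\infty$, $\mathrm{erfc}(0)=1$ recovers the Green function $1/(4\pi|x|)$ of the Laplacian in $d=3$, consistent with the $|x|^{2-d}$ singularity in (11).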
To prove (12), we note that
$\displaystyle\nabla_{x}u^{\epsilon}(t,x)$
$\displaystyle=\int_{0}^{t}\int_{\mathbb{R}^{d}}\nabla
p_{s}(y)\theta^{\epsilon}(x-y)dyds$
$\displaystyle=\int_{\mathbb{R}^{d}}\theta^{\epsilon}(x-y)\left[\int_{0}^{t}\nabla
p_{s}(y)ds\right]dy.$
Since
$|\nabla p_{t}(x)|\leq
2^{-1}(4\pi)^{-d/2}|x|t^{-\frac{d}{2}-1}\exp\left(-\frac{|x|^{2}}{4t}\right)$
we have that
$\displaystyle\left|\int_{0}^{t}\nabla p_{s}(x)ds\right|$ $\displaystyle\leq
2^{-1}(4\pi)^{-d/2}\int_{0}^{t}|x|s^{-\frac{d}{2}-1}\exp\left(-\frac{|x|^{2}}{4s}\right)ds$
$\displaystyle\leq
2^{-1}\pi^{-d/2}|x|^{1-d}\int_{\frac{|x|^{2}}{4T}}^{\infty}r^{\frac{d}{2}-1}e^{-r}dr$
$\displaystyle\leq|x|^{1-d}\left[A\exp(-\alpha|x|^{2})+B|x|^{d}1_{\\{|x|<1\\}}\right]$
$\displaystyle\leq|x|^{1-d}A\exp(-\alpha|x|^{2})+B1_{\\{|x|<1\\}}$
for some $A,B,\alpha>0$. Thus, we have
$\displaystyle|\nabla_{x}u^{\epsilon}(t,x)|$ $\displaystyle\leq
A\int\theta^{\epsilon}(x-y)|y|^{1-d}\exp(-\alpha|y|^{2})dy+B1_{\\{|x|<1\\}}$
$\displaystyle\leq Ce^{-C|x|^{2}}1_{\\{|x|\geq
1\\}}+C\left(|x|\vee\epsilon\right)^{1-d}1_{\\{|x|<1\\}}$
for some $C>0$.
We need the following preliminary lemma.
###### Lemma 6
For any $d\geq 1$ and finite $T$, there exists some finite
$C=C(T,||u_{0}||_{L^{1}})$ such that
$\displaystyle\mathbb{E}\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))dt\leq C.$ (14)
Proof. By Itô's formula applied to the process $N(t)$ (the cardinality of alive
particles) and taking expectations, the martingale term vanishes and we get that
$\displaystyle\mathbb{E}N(T)=\mathbb{E}N_{0}+\mathbb{E}\int_{0}^{T}\sum_{j\leq
N(t)}\left[1-\frac{1}{N}\sum_{k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))\right]dt$
implying that
$\displaystyle\mathbb{E}\int_{0}^{T}\frac{1}{N}\sum_{j,k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))dt\leq\mathbb{E}N_{0}+\mathbb{E}\int_{0}^{T}N(t)dt.$
The rhs is dominated from above by the same quantity calculated for a particle
system with pure proliferation of unit rate (and no killing), and thereby is
bounded by $e^{T}N\int u_{0}$.
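For completeness, the last bound is the standard Grönwall estimate for the pure proliferation system (our elaboration of the comparison argument):

```latex
\frac{d}{dt}\,\mathbb{E}N(t)\;\leq\;\mathbb{E}\Big[\sum_{j\leq N(t)}1\Big]\;=\;\mathbb{E}N(t)
\quad\Longrightarrow\quad
\mathbb{E}N(t)\;\leq\;\mathbb{E}N_{0}\,e^{t}\;=\;N\Big(\int u_{0}\Big)e^{t},\qquad t\in[0,T].
```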
We proceed to derive the limiting equation. Fixing any $\phi(t,x)\in
C_{c}^{1,2}([0,T)\times\mathbb{R}^{d})$, we consider the time-dependent
functional on $\eta$
$\displaystyle Q^{N}(t,\eta):=\frac{1}{N}\sum_{j\leq
i_{\text{max}}(\eta)}\phi(t,x_{j}).$ (15)
By Itô's formula applied to the process $Q^{N}(t,\eta(t))$, we get that
$\displaystyle Q^{N}(T,\eta(T))-Q^{N}(0,\eta(0))=$
$\displaystyle\int_{0}^{T}\frac{1}{N}\sum_{j\leq
N(t)}\big{(}\partial_{t}+\frac{1}{2}\Delta_{x_{j}}\big{)}\phi(t,x_{j}(t))\,dt$
$\displaystyle+\int_{0}^{T}\frac{1}{N}\sum_{j\leq
N(t)}\Big{[}1-\frac{1}{N}\sum_{k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))\Big{]}\phi(t,x_{j}(t))\,dt+\widetilde{M}_{T}$
$\displaystyle=$
$\displaystyle\Big{\langle}\xi^{N}(dt,dx),\big{(}\partial_{t}+\frac{1}{2}\Delta+1\big{)}\phi(t,x)\Big{\rangle}$
$\displaystyle-\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))\phi(t,x_{j}(t))\,dt+\widetilde{M}_{T}$
(16)
where $\\{\widetilde{M}_{t}\\}$ is a martingale. We can readily control the
martingale via its quadratic variation
$\mathbb{E}[\widetilde{M}_{T}^{2}]\leq
4\int_{0}^{T}\mathbb{E}[A^{(1)}_{t}+A^{(2)}_{t}]\,dt$
where
$\displaystyle A^{(1)}_{t}:=$ $\displaystyle\frac{1}{N^{2}}\sum_{j\leq
N(t)}\left|\nabla_{x_{j}}\phi(t,x_{j}(t))\right|^{2},$ $\displaystyle
A^{(2)}_{t}:=$ $\displaystyle\frac{1}{N^{2}}\sum_{j\leq
N(t)}\Big{[}1+\frac{1}{N}\sum_{k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))\Big{]}\phi(t,x_{j}(t))^{2}.$
Since $\phi$ is a test function and $\mathbb{E}N(t)\leq Ne^{t}\int u_{0}$,
combined with Lemma 6 we arrive at
$\mathbb{E}\int_{0}^{T}[A^{(1)}_{t}+A^{(2)}_{t}]\,dt\leq\frac{C_{T,\phi}}{N}.$
Therefore, the martingale vanishes in $L^{2}(\mathbb{P})$ (and in probability)
in the limit $N\to\infty$. Further, since $\phi(T,\cdot)=0$, we have that
$Q^{N}(T,\eta(T))=0$; whereas by our assumption on the initial condition, we
have that $Q^{N}(0,\eta(0))\to\int\phi(0,x)u_{0}(x)dx$ in probability.
Regarding the last term of (16), i.e.
$\displaystyle\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}\left(x_{j}(t)-x_{k}(t)\right)\phi\left(t,x_{j}(t)\right)\,dt,$
(17)
we shall prove the following approximation in steps. To state it, let us
denote
$\displaystyle\eta^{\delta}(x):=\delta^{-d}\eta(\delta^{-1}x)$ (18)
for a smooth, nonnegative, compactly supported function
$\eta:\mathbb{R}^{d}\to\mathbb{R}_{+}$ with $\int\eta=1$. Fix also two smooth,
compactly supported functions
$\phi,\psi:\mathbb{R}^{d}\times[0,T)\to\mathbb{R}$.
###### Proposition 7
Suppose that $\epsilon=\epsilon(N)$ is such that $\epsilon^{-d}\leq CN$ for
some finite constant $C$ and $\epsilon(N)\to 0$ as $N\to\infty$. Then, for any
$d\geq 1$ and finite $T$, we have that
$\displaystyle\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}\left(x_{j}(t)-x_{k}(t)\right)\phi\left(t,x_{j}(t)\right)\psi\left(t,x_{k}(t)\right)dt$
$\displaystyle=\int_{0}^{T}\int_{\mathbb{R}^{d}}\phi(t,w)\psi(t,w)\left(\xi^{N}*_{x}\eta^{\delta}\right)(t,w)^{2}dwdt+Err(\epsilon,N,\delta)\,,$
for some error term that vanishes in the following limit
$\limsup_{\delta\to
0}\limsup_{N\to\infty}\;\mathbb{E}|Err(\epsilon,N,\delta)|=0,$
and any $\eta:\mathbb{R}^{d}\to\mathbb{R}_{+}$ smooth, nonnegative, compactly
supported with $\int\eta=1$. Here we used the shorthand
$\left(\xi^{N}*_{x}\eta^{\delta}\right)(t,w):=\frac{1}{N}\sum_{j\leq
N(t)}\eta^{\delta}(w-x_{j}(t)).$
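To make this shorthand concrete, here is a minimal numerical transcription (ours; the Epanechnikov bump and all values are illustrative choices) of $(\xi^{N}*_{x}\eta^{\delta})$ at a fixed time, in $d=1$:

```python
def eta(x):
    """Compactly supported bump on [-1, 1] with integral 1 (Epanechnikov kernel)."""
    return 0.75 * (1.0 - x * x) if abs(x) <= 1.0 else 0.0

def mollified_density(points, N, w, delta):
    """(xi^N *_x eta^delta)(w) = (1/N) sum_j eta^delta(w - x_j)."""
    return sum(eta((w - xj) / delta) / delta for xj in points) / N

points = [0.1 * j for j in range(10)]  # ten particles on [0, 0.9]
rho = mollified_density(points, N=10, w=0.45, delta=0.5)
```

With ten equally spaced particles and total normalized mass $10/N=1$, the mollified density near the center of the cloud is close to $1$, as expected.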
Step I. Fix $\epsilon$ and $T$, and consider the time-dependent functional on $\eta$,
indexed by $z\in\mathbb{R}^{d}$:
$X^{N}_{z}(t,\eta):=\frac{1}{N^{2}}\sum_{j,k\leq
i_{\text{max}}(\eta)}r^{\epsilon}\left(t,x_{j}-x_{k}+z\right)\phi(t,x_{j})\psi(t,x_{k})$
where $r^{\epsilon}(t,x)$ is the auxiliary function defined in (10). By Itô's
formula applied to the process $(X^{N}_{z}-X^{N}_{0})(t,\eta(t))$, we get that
$\displaystyle(X^{N}_{z}-X^{N}_{0})(T,\eta(T))-(X^{N}_{z}-X^{N}_{0})(0,\eta(0))=\int_{0}^{T}\left((\partial_{t}+\widetilde{\mathcal{L}}_{N})(X^{N}_{z}-X^{N}_{0})\right)(t,\eta(t))\,dt+M_{T}$
where $\\{M_{t}\\}$ is a martingale. Written out in detail, the lhs has one
term
$\displaystyle H_{0}:=-\frac{1}{N^{2}}\sum_{j,k\leq
N_{0}}\big{[}r^{\epsilon}(0,x_{j}(0)-x_{k}(0)+z)-r^{\epsilon}(0,x_{j}(0)-x_{k}(0))\big{]}\phi(0,x_{j}(0))\psi(0,x_{k}(0))$
and we have the following terms in the integrand of the rhs:
$\displaystyle H_{t}(t)$ $\displaystyle:=\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\big{[}r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\big{]}\partial_{t}\Big{(}\phi(t,x_{j}(t))\psi(t,x_{k}(t))\Big{)}.$
$\displaystyle H_{xx}(t)$ $\displaystyle:=\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\Big{[}(\partial_{t}+\Delta)\big{(}r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t)\big{)}\Big{]}\phi(t,x_{j}(t))\psi(t,x_{k}(t))$
$\displaystyle=\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\big{[}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))-\theta^{\epsilon}(x_{j}(t)-x_{k}(t)+z)\big{]}\phi(t,x_{j}(t))\psi(t,x_{k}(t)).$
$\displaystyle H_{J}(t)$ $\displaystyle:=\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\big{[}r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\big{]}$
$\displaystyle\quad\quad\quad\cdot\frac{1}{2}\big{(}\Delta\phi(t,x_{j}(t))\psi(t,x_{k}(t))+\Delta\psi(t,x_{k}(t))\phi(t,x_{j}(t))\big{)}.$
$\displaystyle H_{x}(t)$ $\displaystyle:=\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\big{[}\nabla r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-\nabla
r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\big{]}$
$\displaystyle\quad\quad\quad\cdot\frac{1}{2}\big{(}\nabla\phi(t,x_{j}(t))\psi(t,x_{k}(t))-\nabla\psi(t,x_{k}(t))\phi(t,x_{j}(t))\big{)}.$
$\displaystyle H_{C}(t):=$ $\displaystyle\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\Big{[}1-\frac{1}{N}\sum_{i\leq
N(t)}\theta^{\epsilon}\left(x_{j}(t)-x_{i}(t)\right)\Big{]}$
$\displaystyle\quad\quad\quad\cdot\left[r^{\epsilon}\left(t,x_{j}(t)-x_{k}(t)+z\right)-r^{\epsilon}\left(t,x_{j}(t)-x_{k}(t)\right)\right]\phi(t,x_{j}(t))\psi(t,x_{k}(t))$
$\displaystyle+\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\Big{[}1-\frac{1}{N}\sum_{i\leq
N(t)}\theta^{\epsilon}\left(x_{k}(t)-x_{i}(t)\right)\Big{]}$
$\displaystyle\quad\quad\quad\cdot\left[r^{\epsilon}\left(t,x_{j}(t)-x_{k}(t)+z\right)-r^{\epsilon}\left(t,x_{j}(t)-x_{k}(t)\right)\right]\phi(t,x_{j}(t))\psi(t,x_{k}(t)).$
The martingale term can be controlled via its quadratic variation
$\displaystyle\mathbb{E}[M^{2}_{T}]\leq
4\int_{0}^{T}\mathbb{E}[B^{(1)}_{t}+B^{(2)}_{t}]dt$
where
$\displaystyle B^{(1)}_{t}:=$ $\displaystyle\frac{1}{N^{4}}\sum_{j\leq
N(t)}\left|\nabla_{x_{j}}\Big{(}\sum_{k\leq
N(t)}\big{[}r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\big{]}\phi(t,x_{j}(t))\psi(t,x_{k}(t))\Big{)}\right|^{2}$
$\displaystyle+\frac{1}{N^{4}}\sum_{k\leq
N(t)}\left|\nabla_{x_{k}}\Big{(}\sum_{j\leq
N(t)}\big{[}r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\big{]}\phi(t,x_{j}(t))\psi(t,x_{k}(t))\Big{)}\right|^{2}.$
(19) $\displaystyle B^{(2)}_{t}:=$ $\displaystyle\frac{1}{N^{4}}\sum_{j\leq
N(t)}\Big{[}1+\frac{1}{N}\sum_{i\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{i}(t))\Big{]}$
$\displaystyle\quad\quad\quad\cdot\left|\sum_{k\leq
N(t)}\big{[}r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\big{]}\phi(t,x_{j}(t))\psi(t,x_{k}(t))\right|^{2}$
$\displaystyle+\frac{1}{N^{4}}\sum_{k\leq N(t)}\Big{[}1+\frac{1}{N}\sum_{i\leq
N(t)}\theta^{\epsilon}(x_{k}(t)-x_{i}(t))\Big{]}$
$\displaystyle\quad\quad\quad\cdot\left|\sum_{j\leq
N(t)}\big{[}r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\big{]}\phi(t,x_{j}(t))\psi(t,x_{k}(t))\right|^{2}.$
(20)
Step II. We show that among the previous terms, only $H_{xx}$ is significant,
in a sense to be made precise. To this end, we need to bound the various other
terms, which share significant similarities: the first type of term is a
double sum involving the difference of $r^{\epsilon}$; the second is a
double sum involving the difference of $\nabla r^{\epsilon}$; and the third
is a triple sum involving the difference of $r^{\epsilon}$.
We first prove a general proposition about a pure proliferation system that is
naturally coupled to our system, from which some of our desired conclusions
immediately follow.
###### Proposition 8
Let $d\geq 1$ and $\left(x_{i}\left(t\right)\right)$ be the pure proliferation
model with unit rate (no killing), with $N_{0}$ initial particles distributed
independently with density $(\int u_{0})^{-1}u_{0}$, for $u_{0}$ satisfying
Condition 3. Let $T>0$ be given and let
$f(t,x):[0,T]\times\mathbb{R}^{d}\rightarrow\mathbb{R}_{+}$,
$g(x):\mathbb{R}^{d}\to\mathbb{R}_{+}$ be bounded non-negative functions, with
$g\in L^{1}(\mathbb{R}^{d})$. Then there are constants $C_{T},c=c(d,T)>0$,
independent of $N$ and $f,g$, such that for any $t\in[0,T]$
$\displaystyle\mathbb{E}\left[\sum_{i,j}f\left(t,x_{i}\left(t\right)-x_{j}\left(t\right)\right)\right]\leq
N_{0}2C^{2}_{T}\left\|f\right\|_{\infty}+N^{2}C_{T}^{2}\gamma^{2}e^{cR}\int_{\mathbb{R}^{d}}f\left(t,x\right)e^{-c\left|x\right|}dx,
(21)
and
$\displaystyle\mathbb{E}\left[\sum_{i,j,k}f\left(t,x_{i}\left(t\right)-x_{j}\left(t\right)\right)g\left(x_{j}\left(t\right)-x_{k}\left(t\right)\right)\right]$
$\displaystyle\leq$ $\displaystyle
N_{0}5C_{T}^{3}\left\|f\right\|_{\infty}\left\|g\right\|_{\infty}$
$\displaystyle+N^{2}2C_{T}^{3}\gamma^{2}e^{cR}\left(\left\|f\right\|_{\infty}\int_{\mathbb{R}^{d}}g\left(x\right)e^{-c\left|x\right|}dx+\left\|g\right\|_{\infty}\int_{\mathbb{R}^{d}}f\left(t,x\right)e^{-c\left|x\right|}dx\right)\text{
}$
$\displaystyle+N^{3}C_{T}^{3}\gamma^{3}e^{cR}\left\|g\right\|_{L^{1}}\int_{\mathbb{R}^{d}}f\left(t,x\right)e^{-c\left|x\right|}dx,$
(22)
where the sums extend over all particles alive at time $t$. The constant
$C_{T}\,(=e^{T})$ is the average number of particles alive at time $T$, when
starting from a single initial particle.
Proof. Step 1. Essential for the proof is the fact that the exponential clocks
of proliferation can be constructed a priori; let us record a few details
in this direction for completeness. Particles, previously indexed by $i$, will
be indexed below by a multi-index $a$ of the form
$a=\left(a_{1},...,a_{n}\right)$
with $n$ positive integer, $a_{1}\in\left\\{1,...,N_{0}\right\\}$,
$a_{2},...,a_{n}\in\left\\{1,2\right\\}$ (if $n\geq 2$). Denote by
$\Lambda^{N}$ the set of all such multi-indexes. Given $a\in\Lambda^{N}$, we
denote by $n\left(a\right)$ the length of the string
$a=\left(a_{1},...,a_{n}\right)$ defining $a$. We set
$a^{-1}=\left(a_{1},...,a_{n-1}\right)$
when $n\geq 2$. The heuristic idea behind these notations is that $a_{1}$
labels the progenitor at time $t=0$, while $a_{2},...,a_{n}$ encode the
subsequent genealogy: particle $a$ is a direct descendant of particle
$a^{-1}$.
Each particle $a$ lives for a finite random time. On a probability space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$, we assume there is a countable
family of independent exponential r.v.'s $\tau^{a}$ of parameter $\lambda=1$,
indexed by $a\in\Lambda^{N}$. The time $\tau^{a}$ is the life span of particle
$a$; its interval of existence will be denoted by $[T_{0}^{a},T_{f}^{a})$ with
$T_{f}^{a}=T_{0}^{a}+\tau^{a}$. The random times $T_{0}^{a}$ are defined
recursively in $n\in\mathbb{N}$: if $n\left(a\right)=1$, $T_{0}^{a}=0$; if
$n\left(a\right)>1$,
$T_{0}^{a}=T_{0}^{a^{-1}}+\tau^{a^{-1}}=T_{f}^{a^{-1}}.$
We may now define the set of particles alive at time $t$: it is the set
$\Lambda_{t}^{N}=\left\\{a\in\Lambda^{N}:t\in[T_{0}^{a},T_{f}^{a})\right\\}.$
Initial particles have a random initial position in the space
$\mathbb{R}^{d}$: we assume that on the probability space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$ there are r.v.’s
$X_{0}^{1},...,X_{0}^{N_{0}}$ distributed with density $(\int u_{0})^{-1}u_{0}$
independent among themselves and with respect to the random times $\tau^{a}$,
$a\in\Lambda^{N}$.
Particles move as Brownian motions: we assume that on the probability space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$ there is a countable family of
independent Brownian motions $W^{a}$, $a\in\Lambda^{N}$, independent among
themselves and with respect to the random times $\tau^{a}$, $a\in\Lambda^{N}$
and the initial positions $X_{0}^{1},...,X_{0}^{N_{0}}$. The position
$x_{t}^{a}$ of particle $a$ during its existence interval
$[T_{0}^{a},T_{f}^{a})$ is defined recursively in $n\in\mathbb{N}$ as follows:
if $n\left(a\right)=1$, $x_{t}^{a}=X_{0}^{a_{1}}+W_{t}^{a}$ for
$t\in[T_{0}^{a},T_{f}^{a})$; if $n\left(a\right)>1$
$x_{t}^{a}=x_{T_{0}^{a}}^{a^{-1}}+W_{t-T_{0}^{a}}^{a}\qquad\text{for
}t\in[T_{0}^{a},T_{f}^{a}).$
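The recursive construction above can be sketched numerically. The following minimal Monte Carlo sketch (ours, not the paper's) assumes binary offspring $(a,1),(a,2)$ born at time $T_{f}^{a}$, as the multi-index structure suggests, and checks that the average number of particles alive at time $T$, starting from a single progenitor, is close to $C_{T}=e^{T}$; the helper name `alive_at` is hypothetical.

```python
import math
import random

def alive_at(T, rng):
    """Simulate the descendants of a single progenitor in the pure
    proliferation model: each particle a lives an Exp(1) time tau^a and
    is then replaced by its two children (a,1) and (a,2), born at time
    T_f^a = T_0^a + tau^a.  Returns the multi-indices alive at time T."""
    alive = []
    stack = [((1,), 0.0)]  # (multi-index a, birth time T_0^a)
    while stack:
        a, t0 = stack.pop()
        tf = t0 + rng.expovariate(1.0)  # T_f^a = T_0^a + tau^a
        if tf > T:
            alive.append(a)             # T in [T_0^a, T_f^a): a is alive
        else:
            stack.append((a + (1,), tf))
            stack.append((a + (2,), tf))
    return alive

rng = random.Random(0)
T, trials = 1.0, 20000
mean = sum(len(alive_at(T, rng)) for _ in range(trials)) / trials
print(mean, math.exp(T))  # Monte Carlo estimate of C_T versus e^T
```

The estimate should agree with $C_{T}=e^{T}$ up to Monte Carlo error, reflecting that the expected population of this branching process solves $\frac{d}{dt}\mathbb{E}[N(t)]=\mathbb{E}[N(t)]$.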
Step 2. Given $a=\left(a_{1},...,a_{n}\right)\in\Lambda^{N}$, the process $x_{t}^{a}$ is
formally defined only for $t\in[T_{0}^{a},T_{f}^{a})$. Call
$\widetilde{x}_{t}^{a}$ the related process, defined for all $t\geq 0$ as
follows: for each $b=\left(a_{1},...,a_{m}\right)$ with $m\leq n$, on the
interval $[T_{0}^{b},T_{f}^{b})$ it is given by $x_{t}^{b}$; and on
$[T_{f}^{a},\infty)$ it is given by
$x_{T_{0}^{a}}^{a^{-1}}+W_{t-T_{0}^{a}}^{a}$. The process
$\widetilde{x}_{t}^{a}$ is a Brownian motion with initial position
$X_{0}^{a_{1}}$. More precisely, if $\mathcal{G}$ denotes the $\sigma$-algebra
generated by the family $\left\\{\tau^{a};a\in\Lambda^{N}\right\\}$, then the
law of $\widetilde{x}_{t}^{a}$ conditioned to $\mathcal{G}$ is the law of a
Brownian motion with initial position $X_{0}^{a_{1}}$.
Step 3. With the notations of Step 1 above, we have to handle
$\mathbb{E}\left[\sum_{a,b\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)\right]=\sum_{a,b\in\Lambda^{N}}\mathbb{E}\left[1_{a\in\Lambda_{t}^{N}}1_{b\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)\right].$
As explained in the previous step, let us denote the components of $a,b$ as
$a=\left(a_{1},...a_{n}\right)$, $b=\left(b_{1},...b_{m}\right)$, with
integers $n,m>0$, $a_{1},b_{1}\in\left\\{1,...,N_{0}\right\\}$ and all the
other entries in $\left\\{1,2\right\\}$. Then
$\displaystyle\sum_{a,b\in\Lambda^{N}}\mathbb{E}\left[1_{a\in\Lambda_{t}^{N}}1_{b\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)\right]$
$\displaystyle=\sum_{a,b\in\Lambda^{N}:a_{1}=b_{1}}\mathbb{E}\left[1_{a\in\Lambda_{t}^{N}}1_{b\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)\right]+\sum_{a,b\in\Lambda^{N}:a_{1}\neq
b_{1}}\mathbb{E}\left[1_{a\in\Lambda_{t}^{N}}1_{b\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)\right].$
In the following computation, when we decompose a multi-index
$a=\left(a_{1},...a_{n}\right)$ in the form $\left(a_{1},a^{\prime}\right)$ we
understand that $a^{\prime}$ does not exist in the case $n=1$, while
$a^{\prime}=\left(a_{2},...a_{n}\right)$ if $n\geq 2$. We simply bound
$\displaystyle\sum_{a,b\in\Lambda^{N}:a_{1}=b_{1}}\mathbb{E}\left[1_{a\in\Lambda_{t}^{N}}1_{b\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)\right]$
$\displaystyle=\sum_{a_{1}=1}^{N_{0}}\sum_{a^{\prime},b^{\prime}\in
I}\mathbb{E}\left[1_{\left(a_{1},a^{\prime}\right)\in\Lambda_{t}^{N}}1_{\left(a_{1},b^{\prime}\right)\in\Lambda_{t}^{N}}f\left(t,x_{t}^{\left(a_{1},a^{\prime}\right)}-x_{t}^{\left(a_{1},b^{\prime}\right)}\right)\right]$
$\displaystyle\leq\left\|f\right\|_{\infty}\sum_{a_{1}=1}^{N_{0}}\sum_{a^{\prime},b^{\prime}\in
I}\mathbb{E}\left[1_{\left(a_{1},a^{\prime}\right)\in\Lambda_{t}^{N}}1_{\left(a_{1},b^{\prime}\right)\in\Lambda_{t}^{N}}\right]$
$\displaystyle=N_{0}\left\|f\right\|_{\infty}\sum_{a^{\prime},b^{\prime}\in
I}\mathbb{E}\left[1_{\left(1,a^{\prime}\right)\in\Lambda_{t}^{N}}1_{\left(1,b^{\prime}\right)\in\Lambda_{t}^{N}}\right]$
where $I$ denotes the set of binary sequences of finite length, and the last
identity is due to the fact that the quantity
$\mathbb{E}\left[1_{\left(a_{1},a^{\prime}\right)\in\Lambda_{t}^{N}}1_{\left(a_{1},b^{\prime}\right)\in\Lambda_{t}^{N}}\right]$
is independent of $a_{1}$. The latter expression is in turn equal to
$=N_{0}\left\|f\right\|_{\infty}\sum_{a^{\prime},b^{\prime}\in
I}\mathbb{E}\left[1_{\left(1,a^{\prime}\right)\in\Lambda_{t}^{1}}1_{\left(1,b^{\prime}\right)\in\Lambda_{t}^{1}}\right]$
where $\Lambda_{t}^{1}$ is the set of indices corresponding to the case of a single
initial particle; the identity holds because the presence of further initial
particles does not affect the expected value of the previous expression.
Finally, the previous quantity is equal to
$\displaystyle=N_{0}\left\|f\right\|_{\infty}\mathbb{E}\left[\sum_{a,b\in\Lambda_{t}^{1}}1\right]\leq
N_{0}\left\|f\right\|_{\infty}\mathbb{E}\left[\left|\Lambda_{t}^{1}\right|^{2}\right]$
$\displaystyle\leq
N_{0}\left\|f\right\|_{\infty}\mathbb{E}\left[\left|\Lambda_{T}^{1}\right|^{2}\right]\leq
N_{0}\|f\|_{\infty}(C^{2}_{T}+C_{T}),$
where we have denoted by $\left|\Lambda_{t}^{1}\right|$ the cardinality of the
set $\Lambda_{t}^{1}$, which is a Poisson random variable with finite mean
$C_{T}=\mathbb{E}\left[\left|\Lambda_{T}^{1}\right|\right]$, and we get one
addend of the inequality stated in the proposition.
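The second-moment bound above follows from the elementary identity for a Poisson random variable $X$ with mean $\lambda$ (as claimed for $|\Lambda_{T}^{1}|$), and, since $C_{T}\geq 1$, it is consistent with the constant $2C_{T}^{2}$ appearing in (21):

```latex
\mathbb{E}[X^{2}] = \operatorname{Var}(X) + \mathbb{E}[X]^{2} = \lambda + \lambda^{2},
\qquad\text{so}\qquad
\mathbb{E}\big[|\Lambda_{T}^{1}|^{2}\big] \leq C_{T} + C_{T}^{2} \leq 2C_{T}^{2}.
```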
Concerning the other sum,
$\displaystyle\sum_{a,b\in\Lambda^{N}:a_{1}\neq
b_{1}}\mathbb{E}\left[1_{a\in\Lambda_{t}^{N}}1_{b\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)\right]$
$\displaystyle=\sum_{a_{1}\neq b_{1}}\sum_{a^{\prime},b^{\prime}\in
I}\mathbb{E}\left[1_{\left(a_{1},a^{\prime}\right)\in\Lambda_{t}^{N}}1_{\left(b_{1},b^{\prime}\right)\in\Lambda_{t}^{N}}f\left(t,x_{t}^{\left(a_{1},a^{\prime}\right)}-x_{t}^{\left(b_{1},b^{\prime}\right)}\right)\right]$
$\displaystyle\leq N_{0}^{2}\sum_{a^{\prime},b^{\prime}\in
I}\mathbb{E}\left[1_{\left(1,a^{\prime}\right)\in\Lambda_{t}^{2}}1_{\left(2,b^{\prime}\right)\in\Lambda_{t}^{2}}f\left(t,x_{t}^{\left(1,a^{\prime}\right)}-x_{t}^{\left(2,b^{\prime}\right)}\right)\right]$
where the last inequality, involving a system with only two initial particles,
can be justified similarly to what was done above. Recalling the notation of Step
2 above, the previous expression is equal to
$=N_{0}^{2}\sum_{a^{\prime},b^{\prime}\in
I}\mathbb{E}\left[1_{\left(1,a^{\prime}\right)\in\Lambda_{t}^{2}}1_{\left(2,b^{\prime}\right)\in\Lambda_{t}^{2}}f\left(t,\widetilde{x}_{t}^{\left(1,a^{\prime}\right)}-\widetilde{x}_{t}^{\left(2,b^{\prime}\right)}\right)\right].$
Now we use the fact that the laws of processes indexed by 1 and 2 are
independent and the law of $\widetilde{x}_{t}^{\left(1,a^{\prime}\right)}$
conditioned to $\mathcal{G}^{1}$ is a Brownian motion with initial position
$X_{0}^{1}$, where $\mathcal{G}^{1}$ is the $\sigma$-algebra generated by the
family
$\left\\{\tau^{\left(1,a^{\prime}\right)};\left(1,a^{\prime}\right)\in\Lambda^{1}\right\\}$;
and similarly for $\widetilde{x}_{t}^{\left(2,b^{\prime}\right)}$ with respect
to $\mathcal{G}^{2}$, similarly defined. Thus, after taking conditional
expectation with respect to $\mathcal{G}^{1}\vee\mathcal{G}^{2}$ inside the
previous expected value, we get that the previous expression is equal to
$=N_{0}^{2}\sum_{a^{\prime},b^{\prime}\in
I}\mathbb{P}\left(\left(1,a^{\prime}\right)\in\Lambda_{t}^{2}\right)\mathbb{P}\left(\left(2,b^{\prime}\right)\in\Lambda_{t}^{2}\right)\mathbb{E}\left[f\left(t,W_{t}^{1}-W_{t}^{2}+X_{0}^{1}-X_{0}^{2}\right)\right]$
where $W_{t}^{i}$, $i=1,2$ are two independent Brownian motions in
$\mathbb{R}^{d}$, independent also of $X_{0}^{1},X_{0}^{2}$. We may simplify
the previous expression to
$=N_{0}^{2}\left(\sum_{a^{\prime}\in
I}\mathbb{P}\left(\left(1,a^{\prime}\right)\in\Lambda_{t}^{1}\right)\right)^{2}\mathbb{E}\left[f\left(t,\sqrt{2}W_{t}+X_{0}^{1}-X_{0}^{2}\right)\right]$
where $W_{t}$ is a Brownian motion in $\mathbb{R}^{d}$ independent of
$X_{0}^{1},X_{0}^{2}$. One has
$\displaystyle\sum_{a^{\prime}\in
I}\mathbb{P}\left(\left(1,a^{\prime}\right)\in\Lambda_{t}^{1}\right)$
$\displaystyle=\mathbb{E}\left[\sum_{a\in\Lambda_{t}^{1}}1\right]=\mathbb{E}\left[\left|\Lambda_{t}^{1}\right|\right]$
$\displaystyle\leq\mathbb{E}\left[\left|\Lambda_{T}^{1}\right|\right]=C_{T}.$
Moreover, denoting $\overline{u_{0}}\left(x\right)=u_{0}\left(-x\right)$,
$\displaystyle\mathbb{E}\left[f\left(t,\sqrt{2}W_{t}+X_{0}^{1}-X_{0}^{2}\right)\right]=\|u_{0}\|_{L^{1}}^{-2}\int\mathbb{E}\left[f\left(t,\sqrt{2}W_{t}+x\right)\right]\left(\overline{u_{0}}\ast
u_{0}\right)\left(x\right)dx$
$\displaystyle=\|u_{0}\|_{L^{1}}^{-2}\left\langle
e^{t\Delta}f(t,\cdot),\overline{u_{0}}\ast
u_{0}\right\rangle=\|u_{0}\|_{L^{1}}^{-2}\left\langle
f(t,\cdot),e^{t\Delta}\left(\overline{u_{0}}\ast u_{0}\right)\right\rangle$
$\displaystyle=\|u_{0}\|_{L^{1}}^{-2}\int
f\left(t,x\right)\mathbb{E}\left[\left(\overline{u_{0}}\ast
u_{0}\right)\left(\sqrt{2}W_{t}+x\right)\right]dx.$
Now we may estimate
$\displaystyle\mathbb{E}\left[\left(\overline{u_{0}}\ast
u_{0}\right)\left(\sqrt{2}W_{t}+x\right)\right]$
$\displaystyle\leq\gamma^{2}\mathbb{E}\left[1_{B\left(0,R\right)}\left(\sqrt{2}W_{t}+x\right)\right]$
$\displaystyle=\gamma^{2}\mathbb{P}\left(\sqrt{2}W_{t}\in
B\left(x,R\right)\right)$
$\displaystyle\leq\gamma^{2}\mathbb{P}\left(\left|\sqrt{2}W_{t}\right|\geq\left|x\right|-R\right)$
$\displaystyle\leq\gamma^{2}e^{cR}e^{-c\left|x\right|}$
for some constant $c=c(d,T)>0$. Therefore, summarizing,
$\displaystyle N_{0}^{2}\left(\sum_{a^{\prime}\in
I}\mathbb{P}\left(\left(1,a^{\prime}\right)\in\Lambda_{t}^{1}\right)\right)^{2}\mathbb{E}\left[f\left(t,\sqrt{2}W_{t}+X_{0}^{1}-X_{0}^{2}\right)\right]$
$\displaystyle\leq
N_{0}^{2}C_{T}^{2}\|u_{0}\|_{L^{1}}^{-2}\gamma^{2}e^{cR}\int
f\left(t,x\right)e^{-c\left|x\right|}dx.$
This completes the proof of (21).
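For completeness, the exponential tail estimate used above can be sketched via Markov's inequality applied to $e^{c|\sqrt{2}W_{t}|}$; the finite constant $\sup_{t\leq T}\mathbb{E}\,e^{c|\sqrt{2}W_{t}|}$, which depends only on $d$ and $T$, is absorbed into the prefactor:

```latex
\mathbb{P}\left(\left|\sqrt{2}W_{t}\right| \geq |x| - R\right)
\leq e^{-c(|x|-R)}\,\mathbb{E}\left[e^{c|\sqrt{2}W_{t}|}\right]
\leq C(d,T)\, e^{cR}\, e^{-c|x|}.
```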
Step 4. Now we turn to the proof of (22), i.e.
$\displaystyle\mathbb{E}\left[\sum_{a,b,c\in\Lambda_{t}^{N}}f(t,x_{t}^{a}-x_{t}^{b})g(x_{t}^{a}-x_{t}^{c})\right]=\sum_{a,b,c\in\Lambda^{N}}\mathbb{E}\left[1_{a\in\Lambda_{t}^{N}}1_{b\in\Lambda_{t}^{N}}1_{c\in\Lambda_{t}^{N}}f(t,x_{t}^{a}-x_{t}^{b})g(x_{t}^{a}-x_{t}^{c})\right].$
We divide the above sum into five cases:
$\displaystyle\sum_{a,b,c\in\Lambda^{N}:a_{1}=b_{1}=c_{1}}+\sum_{a,b,c\in\Lambda^{N}:a_{1}=b_{1}\neq
c_{1}}+\sum_{a,b,c\in\Lambda^{N}:a_{1}=c_{1}\neq
b_{1}}+\sum_{a,b,c\in\Lambda^{N}:b_{1}=c_{1}\neq
a_{1}}+\sum_{a,b,c\in\Lambda^{N}:a_{1}\neq b_{1},a_{1}\neq c_{1},b_{1}\neq
c_{1}}$ $\displaystyle:=S_{1}+S_{2}+S_{3}+S_{4}+S_{5}.$
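As a quick sanity check (ours, not the paper's), one can verify that the five cases above partition all triples of progenitor labels:

```python
from itertools import product

def classify(a1, b1, c1):
    """Return which of the five cases S1..S5 a triple of progenitor
    labels (a_1, b_1, c_1) falls into."""
    if a1 == b1 == c1:
        return 1
    if a1 == b1:
        return 2  # a_1 = b_1 != c_1
    if a1 == c1:
        return 3  # a_1 = c_1 != b_1
    if b1 == c1:
        return 4  # b_1 = c_1 != a_1
    return 5      # a_1, b_1, c_1 pairwise distinct

N0 = 5
counts = [0] * 6
for triple in product(range(N0), repeat=3):
    counts[classify(*triple)] += 1

print(counts[1:])  # [5, 20, 20, 20, 60] -> sums to N0**3 = 125
```

The diagonal case contributes $N_{0}$ triples, each of the three single-coincidence cases $N_{0}(N_{0}-1)$, and the all-distinct case $N_{0}(N_{0}-1)(N_{0}-2)$, matching the growth rates $N_{0}$, $N_{0}^{2}$, $N_{0}^{3}$ of the bounds for $S_{1}$ through $S_{5}$.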
Firstly,
$\displaystyle S_{1}$
$\displaystyle\leq\|f\|_{\infty}\|g\|_{\infty}\sum_{a_{1}=1}^{N_{0}}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{E}\left[1_{(a_{1},a^{\prime})\in\Lambda_{t}^{N}}1_{(a_{1},b^{\prime})\in\Lambda_{t}^{N}}1_{(a_{1},c^{\prime})\in\Lambda_{t}^{N}}\right]$
$\displaystyle\leq
N_{0}\|f\|_{\infty}\|g\|_{\infty}\mathbb{E}\left[\left|\Lambda_{t}^{1}\right|^{3}\right]\leq
N_{0}\|f\|_{\infty}\|g\|_{\infty}5C_{T}^{3}.$
Secondly,
$\displaystyle S_{2}$
$\displaystyle\leq\|f\|_{\infty}\sum_{a,b,c\in\Lambda^{N}:a_{1}=b_{1}\neq
c_{1}}\mathbb{E}\left[1_{a,b,c\in\Lambda_{t}^{N}}g\left(x_{t}^{a}-x_{t}^{c}\right)\right]$
$\displaystyle=\|f\|_{\infty}\sum_{a_{1}\neq
c_{1}}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{E}\left[1_{(a_{1},a^{\prime})\in\Lambda_{t}^{N}}1_{(a_{1},b^{\prime})\in\Lambda_{t}^{N}}1_{(c_{1},c^{\prime})\in\Lambda_{t}^{N}}g\left(x_{t}^{(a_{1},a^{\prime})}-x_{t}^{(c_{1},c^{\prime})}\right)\right]$
$\displaystyle\leq\|f\|_{\infty}N_{0}^{2}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{E}\left[1_{(1,a^{\prime})\in\Lambda_{t}^{1}}1_{(1,b^{\prime})\in\Lambda_{t}^{1}}1_{(2,c^{\prime})\in\Lambda_{t}^{2}}g\left(x_{t}^{(1,a^{\prime})}-x_{t}^{(2,c^{\prime})}\right)\right]$
$\displaystyle=\|f\|_{\infty}N_{0}^{2}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{E}\left[1_{(1,a^{\prime})\in\Lambda_{t}^{1}}1_{(1,b^{\prime})\in\Lambda_{t}^{1}}1_{(2,c^{\prime})\in\Lambda_{t}^{2}}g\left(\widetilde{x}_{t}^{(1,a^{\prime})}-\widetilde{x}_{t}^{(2,c^{\prime})}\right)\right].$
Noting that $\widetilde{x}_{t}^{(1,\cdot)},\widetilde{x}_{t}^{(2,\cdot)}$ are
independent processes,
$\displaystyle=\|f\|_{\infty}N_{0}^{2}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{P}\left((1,a^{\prime}),(1,b^{\prime})\in\Lambda_{t}^{1}\right)\mathbb{P}\left((2,c^{\prime})\in\Lambda_{t}^{2}\right)\mathbb{E}\left[g\left(W_{t}^{1}-W_{t}^{2}+X_{0}^{1}-X_{0}^{2}\right)\right]$
$\displaystyle\leq\|f\|_{\infty}N_{0}^{2}\mathbb{E}\left[\left|\Lambda_{t}^{1}\right|^{2}\right]\mathbb{E}\left[\left|\Lambda_{t}^{2}\right|\right]\mathbb{E}\left[g\left(W_{t}^{1}-W_{t}^{2}+X_{0}^{1}-X_{0}^{2}\right)\right]$
for two independent auxiliary Brownian motions $W_{t}^{1},W_{t}^{2}$.
Similarly to the analysis in Step 3, this is bounded by
$\displaystyle\leq\|f\|_{\infty}N_{0}^{2}2C_{T}^{3}\|u_{0}\|_{L^{1}}^{-2}\gamma^{2}e^{cR}\int
g(x)e^{-c|x|}dx.$
Thirdly,
$\displaystyle S_{3}$
$\displaystyle\leq\|g\|_{\infty}\sum_{a,b,c\in\Lambda^{N}:a_{1}=c_{1}\neq
b_{1}}\mathbb{E}\left[1_{a,b,c\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)\right]$
$\displaystyle=\|g\|_{\infty}\sum_{a_{1}\neq
b_{1}}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{E}\left[1_{(a_{1},a^{\prime})\in\Lambda_{t}^{N}}1_{(b_{1},b^{\prime})\in\Lambda_{t}^{N}}1_{(a_{1},c^{\prime})\in\Lambda_{t}^{N}}f\left(x_{t}^{(a_{1},a^{\prime})}-x_{t}^{(b_{1},b^{\prime})}\right)\right]$
$\displaystyle\leq\|g\|_{\infty}N_{0}^{2}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{E}\left[1_{(1,a^{\prime})\in\Lambda_{t}^{1}}1_{(2,b^{\prime})\in\Lambda_{t}^{2}}1_{(1,c^{\prime})\in\Lambda_{t}^{1}}f\left(x_{t}^{(1,a^{\prime})}-x_{t}^{(2,b^{\prime})}\right)\right]$
$\displaystyle=\|g\|_{\infty}N_{0}^{2}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{P}\left((1,a^{\prime}),(1,c^{\prime})\in\Lambda_{t}^{1}\right)\mathbb{P}\left((2,b^{\prime})\in\Lambda_{t}^{2}\right)\mathbb{E}\left[f\left(W_{t}^{1}-W_{t}^{2}+X_{0}^{1}-X_{0}^{2}\right)\right]$
$\displaystyle=\|g\|_{\infty}N_{0}^{2}\mathbb{E}\left[\left|\Lambda_{t}^{1}\right|^{2}\right]\mathbb{E}\left[\left|\Lambda_{t}^{2}\right|\right]\mathbb{E}\left[f\left(W_{t}^{1}-W_{t}^{2}+X_{0}^{1}-X_{0}^{2}\right)\right]$
$\displaystyle\leq\|g\|_{\infty}N_{0}^{2}2C_{T}^{3}\|u_{0}\|_{L^{1}}^{-2}\gamma^{2}e^{cR}\int
f(t,x)e^{-c|x|}dx.$
The analysis of $S_{4}$ is analogous to $S_{3}$, and finally,
$\displaystyle S_{5}$ $\displaystyle=\sum_{a,b,c\in\Lambda^{N}:a_{1},b_{1},c_{1}\text{ pairwise distinct}}\mathbb{E}\left[1_{a,b,c\in\Lambda_{t}^{N}}f\left(t,x_{t}^{a}-x_{t}^{b}\right)g\left(x_{t}^{a}-x_{t}^{c}\right)\right]$
$\displaystyle\leq N_{0}^{3}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{E}\left[1_{(1,a^{\prime})\in\Lambda_{t}^{1}}1_{(2,b^{\prime})\in\Lambda_{t}^{2}}1_{(3,c^{\prime})\in\Lambda_{t}^{3}}f\left(t,x_{t}^{(1,a^{\prime})}-x_{t}^{(2,b^{\prime})}\right)g\left(x_{t}^{(1,a^{\prime})}-x_{t}^{(3,c^{\prime})}\right)\right]$
$\displaystyle=N_{0}^{3}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{E}\left[1_{(1,a^{\prime})\in\Lambda_{t}^{1}}1_{(2,b^{\prime})\in\Lambda_{t}^{2}}1_{(3,c^{\prime})\in\Lambda_{t}^{3}}f\left(t,\widetilde{x}_{t}^{(1,a^{\prime})}-\widetilde{x}_{t}^{(2,b^{\prime})}\right)g\left(\widetilde{x}_{t}^{(1,a^{\prime})}-\widetilde{x}_{t}^{(3,c^{\prime})}\right)\right]$
Noting that
$\widetilde{x}_{t}^{(1,\cdot)},\widetilde{x}_{t}^{(2,\cdot)},\widetilde{x}_{t}^{(3,\cdot)}$
are independent processes,
$\displaystyle=N_{0}^{3}\sum_{a^{\prime},b^{\prime},c^{\prime}\in
I}\mathbb{P}\left((1,a^{\prime})\in\Lambda_{t}^{1}\right)$
$\displaystyle\mathbb{P}\left((2,b^{\prime})\in\Lambda_{t}^{2}\right)\mathbb{P}\left((3,c^{\prime})\in\Lambda_{t}^{3}\right)$
$\displaystyle\cdot\mathbb{E}\left[f\left(t,W_{t}^{1}-W_{t}^{2}+X_{0}^{1}-X_{0}^{2}\right)g\left(W_{t}^{1}-W_{t}^{3}+X_{0}^{1}-X_{0}^{3}\right)\right]$
$\displaystyle=N_{0}^{3}\mathbb{E}\left[\left|\Lambda_{t}^{1}\right|\right]\mathbb{E}\left[\left|\Lambda_{t}^{2}\right|\right]\mathbb{E}\left[\left|\Lambda_{t}^{3}\right|\right]$
$\displaystyle\mathbb{E}\left[f\left(t,W_{t}^{1}-W_{t}^{2}+X_{0}^{1}-X_{0}^{2}\right)g\left(W_{t}^{1}-W_{t}^{3}+X_{0}^{1}-X_{0}^{3}\right)\right],$
for three independent auxiliary Brownian motions
$W_{t}^{1},W_{t}^{2},W_{t}^{3}$. Conditioning on
$W_{t}^{1},W_{t}^{2},X_{0}^{1},X_{0}^{2}$, we compute
$\displaystyle\mathbb{E}\left[g\left(W_{t}^{1}-W_{t}^{3}+X_{0}^{1}-X_{0}^{3}\right)\;\Big{|}\;W_{t}^{1},W_{t}^{2},X_{0}^{1},X_{0}^{2}\right]$
$\displaystyle=\|u_{0}\|_{L^{1}}^{-1}\mathbb{E}\left[\int
g\left(W_{t}^{1}+X_{0}^{1}-x\right)\left(e^{\frac{t}{2}\Delta}u_{0}\right)(x)dx\;\Big{|}\;W_{t}^{1},W_{t}^{2},X_{0}^{1},X_{0}^{2}\right]$
$\displaystyle\leq\|u_{0}\|_{L^{1}}^{-1}\gamma\|g\|_{L^{1}}.$
Thus, we obtain that
$\displaystyle S_{5}$ $\displaystyle\leq
N_{0}^{3}C_{T}^{3}\|u_{0}\|_{L^{1}}^{-1}\gamma\|g\|_{L^{1}}\mathbb{E}\left[f\left(t,W_{t}^{1}-W_{t}^{2}+X_{0}^{1}-X_{0}^{2}\right)\right]$
$\displaystyle\leq
N_{0}^{3}C_{T}^{3}\|u_{0}\|_{L^{1}}^{-3}\gamma^{3}\|g\|_{L^{1}}e^{cR}\int
f(t,x)e^{-c|x|}dx.$
This completes the proof of (22).
###### Corollary 9
Let $d\geq 1$ and let $\epsilon=\epsilon(N)$ be as in the statement of the main
theorem. For any finite $T$ and $\phi,\psi\in
C_{c}^{\infty}([0,T)\times\mathbb{R}^{d})$, we have that
$\displaystyle\limsup_{|z|\to 0}\limsup_{N\to\infty}$
$\displaystyle\quad\mathbb{E}\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\left|r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\right||\phi|(t,x_{j}(t))|\psi|(t,x_{k}(t))dt=0.$
(23) $\displaystyle\limsup_{|z|\to 0}\limsup_{N\to\infty}$
$\displaystyle\quad\mathbb{E}\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\left|\nabla r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-\nabla
r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\right||\phi|(t,x_{j}(t))|\psi|(t,x_{k}(t))dt=0.$
(24) $\displaystyle\limsup_{|z|\to 0}\limsup_{N\to\infty}$
$\displaystyle\mathbb{E}\int_{0}^{T}\frac{1}{N^{3}}\sum_{i,j,k\leq
N(t)}\theta^{\epsilon}\left(x_{j}(t)-x_{k}(t)\right)\left|r^{\epsilon}(t,x_{j}(t)-x_{i}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{i}(t))\right||\phi|(t,x_{i}(t))|\psi|(t,x_{j}(t))dt=0.$
(25)
Proof. Note that our particle system can be coupled with a system of pure
proliferation of unit rate (with no killing), so that the former is a strict
subset of the latter. Hence, computing (23)-(25) for the pure proliferation
process yields upper bounds. We do so in the rest of the proof; abusing
notation, we still write $x_{j}(t)$ for the particle positions (now of a
different system), and $N(t)$ for the number of particles.
Firstly, by (21) applied to the function
$f(t,x):=|r^{\epsilon}(t,x+z)-r^{\epsilon}(t,x)|$, and by (11), upon bounding
$\phi,\psi$ by constants, we get that
$\displaystyle C_{\phi,\psi}\mathbb{E}\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\left|r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\right|dt$
$\displaystyle\leq
C_{\phi,\psi,T,u_{0}}\frac{1}{N}\left\|r^{\epsilon}\left(\cdot,\cdot+z\right)-r^{\epsilon}\left(\cdot,\cdot\right)\right\|_{\infty}$
$\displaystyle\quad\quad\quad+C_{\phi,\psi,T,u_{0}}e^{cR}\left(\int_{0}^{T}\int\left|r^{\epsilon}\left(t,x+z\right)-r^{\epsilon}\left(t,x\right)\right|e^{-c\left|x\right|}dxdt\right)$
$\displaystyle\leq\begin{cases}C_{\phi,\psi,T,u_{0}}^{\prime}\left(\frac{\epsilon^{2-d}}{N}+\int_{0}^{T}\int\left|r^{\epsilon}\left(t,x+z\right)-r^{\epsilon}\left(t,x\right)\right|e^{-c\left|x\right|}dxdt\right),\quad
d\neq 2,\\\ \\\
C_{\phi,\psi,T,u_{0}}^{\prime}\left(\frac{|\log\epsilon|}{N}+\int_{0}^{T}\int\left|r^{\epsilon}\left(t,x+z\right)-r^{\epsilon}\left(t,x\right)\right|e^{-c\left|x\right|}dxdt\right),\quad
d=2.\end{cases}$
The first term is negligible for our range of $\epsilon(N)$: since
$\epsilon^{-d}\leq CN$, we have $\epsilon^{2-d}/N\leq C\epsilon^{2}\to 0$ (and
likewise $|\log\epsilon|/N\to 0$ when $d=2$). For the second
term, recalling the kernel $K$ defined at (13),
$\displaystyle\int_{0}^{T}\int\left|r^{\epsilon}\left(t,x+z\right)-r^{\epsilon}\left(t,x\right)\right|e^{-c\left|x\right|}dxdt$
$\displaystyle=\int_{0}^{T}\int\left|\left(\theta^{\epsilon}\ast\left(K\left(t,\cdot+z\right)-K\left(t,\cdot\right)\right)\right)\left(x\right)\right|e^{-c\left|x\right|}dxdt$
$\displaystyle\leq\int_{0}^{T}\int\left(\theta^{\epsilon}\ast\left|K\left(t,\cdot+z\right)-K\left(t,\cdot\right)\right|\right)\left(x\right)e^{-c\left|x\right|}dxdt$
$\displaystyle=\int_{0}^{T}\int\left|K\left(t,x+z\right)-K\left(t,x\right)\right|\left(\theta^{\epsilon}\ast
e^{-c\left|\cdot\right|}\right)\left(x\right)dxdt.$
Now we take the two limits. As $N\to\infty$, hence
$\epsilon=\epsilon(N)\rightarrow 0$, the Lebesgue dominated convergence theorem
gives
$\rightarrow\int_{0}^{T}\int\left|K\left(t,x+z\right)-K\left(t,x\right)\right|e^{-c\left|x\right|}dxdt.$
Then, as $\left|z\right|\rightarrow 0$, again by the Lebesgue dominated
convergence theorem, the limit is zero.
Next, the proof of (24) is similar, and only involves a minor change. We apply
(21) with the new function $f(t,x):=|\nabla r^{\epsilon}(t,x+z)-\nabla
r^{\epsilon}(t,x)|$, and by (12) we obtain for all $d\geq 1$,
$\displaystyle C_{\phi,\psi}\mathbb{E}\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\left|\nabla r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-\nabla
r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\right|$ $\displaystyle\leq
C_{\phi,\psi,T,u_{0}}^{\prime}\left(\frac{\epsilon^{1-d}}{N}+\int_{0}^{T}\int\left|\nabla
r^{\epsilon}\left(t,x+z\right)-\nabla
r^{\epsilon}\left(t,x\right)\right|e^{-c\left|x\right|}dxdt\right).$
Then, we have that
$\displaystyle\int_{0}^{T}\int|\nabla r^{\epsilon}(t,x+z)-\nabla
r^{\epsilon}(t,x)|e^{-c|x|}dxdt$ $\displaystyle\leq\int_{0}^{T}\int|\nabla
K(t,x+z)-\nabla K(t,x)|\left(\theta^{\epsilon}*e^{-c|\cdot|}\right)(x)dxdt$
still converges to zero as $N\to\infty$ followed by $|z|\to 0$, by the
dominated convergence theorem. Indeed, $|\nabla K(t,x)|$ has a singularity of
order $|x|^{1-d}$ near $0$, which is integrable for every $d\geq 1$.
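Indeed, in polar coordinates the singularity is harmless: with $\omega_{d-1}$ the surface measure of the unit sphere,

```latex
\int_{|x|\leq 1} |x|^{1-d}\,dx
= \omega_{d-1}\int_{0}^{1} r^{1-d}\, r^{d-1}\,dr
= \omega_{d-1}\int_{0}^{1} dr
= \omega_{d-1} < \infty.
```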
Lastly, we turn to (25). By (22), applied to the functions
$f(t,x):=|r^{\epsilon}(t,x+z)-r^{\epsilon}(t,x)|$,
$g(x)=\theta^{\epsilon}(x)$, and by (11), upon bounding $\phi,\psi$ by
constants, we get that
$\displaystyle
C_{\phi,\psi}\mathbb{E}\int_{0}^{T}\frac{1}{N^{3}}\sum_{i,j,k\leq
N(t)}\theta^{\epsilon}\left(x_{j}(t)-x_{k}(t)\right)\left|r^{\epsilon}(t,x_{j}(t)-x_{i}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{i}(t))\right|dt$
$\displaystyle\leq
C_{\phi,\psi,T,u_{0}}\frac{1}{N^{2}}\epsilon^{2-2d}\|\theta\|_{\infty}$
$\displaystyle+C_{\phi,\psi,T,u_{0}}\frac{1}{N}\left(\epsilon^{2-d}\int\theta^{\epsilon}(x)e^{-|x|}dx+\epsilon^{-d}\int_{0}^{T}\int|r^{\epsilon}(t,x+z)-r^{\epsilon}(t,x)|e^{-c|x|}dxdt\right)$
$\displaystyle+C_{\phi,\psi,T,u_{0}}\|\theta^{\epsilon}\|_{L^{1}}\int_{0}^{T}\int|r^{\epsilon}(t,x+z)-r^{\epsilon}(t,x)|e^{-c|x|}dxdt$
$\displaystyle\leq
C^{\prime}_{\phi,\psi,T,u_{0}}\epsilon^{2}(\|\theta\|_{\infty}+\|\theta^{\epsilon}\|_{L^{1}})+C^{\prime}_{\phi,\psi,T,u_{0}}\int_{0}^{T}\int|r^{\epsilon}(t,x+z)-r^{\epsilon}(t,x)|e^{-c|x|}dxdt$
if $d\neq 2$, and when $d=2$ there is a $|\log\epsilon|$ correction, where we
also used the relation $\epsilon(N)^{-d}\leq CN$. Since the first term is
negligible in $\epsilon$, and the second term has already been analyzed in (23),
converging to zero as $N\to\infty$ followed by $|z|\to 0$, the corollary is
proved.
###### Remark 10
Though Corollary 9 does not give a rate of convergence for the quantities
involved, a different proof yields quantitative estimates that may be
of independent interest: there exists some finite constant
$C=C(T,d,C_{0},R,\gamma)$ such that for any $0<\epsilon\leq|z|$ small enough,
we have
$\displaystyle\mathbb{E}\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\left|r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\right||\phi|(t,x_{j}(t))|\psi|(t,x_{k}(t))dt$
$\displaystyle\leq\begin{cases}C\left(|z|^{\frac{2}{d+1}}+\frac{\epsilon^{2-d}}{N}\right),\quad
d\neq 2,\\\ \\\ C\left(|z|^{\frac{2}{3}}+\frac{|\log\epsilon|}{N}\right),\quad
d=2\end{cases}$ (26)
and
$\displaystyle\mathbb{E}\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\left|\nabla r^{\epsilon}(t,x_{j}(t)-x_{k}(t)+z)-\nabla
r^{\epsilon}(t,x_{j}(t)-x_{k}(t))\right||\phi|(t,x_{j}(t))|\psi|(t,x_{k}(t))dt$
$\displaystyle\leq
C\left(|z|^{\frac{1}{d+1}}+\frac{\epsilon^{1-d}}{N}\right),\quad d\geq 1$ (27)
and
$\displaystyle\mathbb{E}\int_{0}^{T}\frac{1}{N^{3}}\sum_{i,j,k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))\left|r^{\epsilon}(t,x_{j}(t)-x_{i}(t)+z)-r^{\epsilon}(t,x_{j}(t)-x_{i}(t))\right||\phi|(t,x_{i}(t))|\psi|(t,x_{j}(t))dt$
$\displaystyle\leq\begin{cases}C\left(|z|^{\frac{2}{d+1}}+\frac{\epsilon^{-d}}{N}|z|^{\frac{2}{d+1}}+\frac{\epsilon^{2-d}}{N}\right),\quad
d\neq 2,\\\ \\\
C\left(|z|^{\frac{2}{d+1}}+\frac{\epsilon^{-d}}{N}|z|^{\frac{2}{d+1}}+\frac{|\log\epsilon|}{N}\right),\quad
d=2.\end{cases}$ (28)
###### Remark 11
When we proceed to bound the martingale terms $B^{(1)}$, $B^{(2)}$ in (19)-(20), we
are faced with a minor problem not present in Corollary 9, namely, after
applying the elementary inequality $(\sum_{i=1}^{n}a_{i})^{2}\leq
n\sum_{i=1}^{n}a_{i}^{2}$, we have sums of square terms $|r^{\epsilon}|^{2}$
or $|\nabla r^{\epsilon}|^{2}$. This can be dealt with by bounding one factor of
$|r^{\epsilon}|$ (resp. $|\nabla r^{\epsilon}|$) in the square crudely by
$C\epsilon^{2-d}$ ($d\neq 2$) or $C|\log\epsilon|$ ($d=2$) (resp.
$C\epsilon^{1-d}$), and keeping the other, to which one
of the three statements of Corollary 9 applies. Also note that we are saved by the
prefactor $N^{-4}$ in this case.
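The elementary inequality quoted in the remark is an instance of Cauchy-Schwarz:

```latex
\Big(\sum_{i=1}^{n} a_{i}\Big)^{2}
= \Big(\sum_{i=1}^{n} 1\cdot a_{i}\Big)^{2}
\leq \Big(\sum_{i=1}^{n} 1^{2}\Big)\Big(\sum_{i=1}^{n} a_{i}^{2}\Big)
= n\sum_{i=1}^{n} a_{i}^{2}.
```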
Step III. Given Proposition 8 and Corollary 9, we can proceed to finish the
proof of Proposition 7, as in [10, page 42-43]. By applying this corollary,
together with Remark 11, we see that the terms coming out of the application
of the Itô-Tanaka trick, namely $H_{0},H_{t},H_{J},H_{x},H_{C},B^{(1)},B^{(2)}$,
all vanish in the limit as $N\to\infty$ followed by $|z|\to 0$ (where
$\epsilon=\epsilon(N)\to 0$ is such that $\epsilon^{-d}\leq CN$). The only
outstanding term is $H_{xx}$, whereby we get that
$\displaystyle\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))\phi(t,x_{j}(t))\psi(t,x_{k}(t))dt$
$\displaystyle=\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t)+z)\phi(t,x_{j}(t))\psi(t,x_{k}(t))dt+Err(\epsilon,N,|z|)$
(29)
where $Err(\epsilon,N,|z|)$ vanishes in the following limit
$\displaystyle\limsup_{|z|\to
0}\limsup_{N\to\infty}\mathbb{E}\left|Err(\epsilon,N,|z|)\right|=0.$
Since the lhs of (29) is independent of $z$, taking any nonnegative, smooth,
compactly supported function $\eta:\mathbb{R}^{d}\to\mathbb{R}_{+}$ with
$\int_{\mathbb{R}^{d}}\eta=1$, we have
$\displaystyle\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}\left(x_{j}(t)-x_{k}(t)\right)\phi(t,x_{j}(t))\psi(t,x_{k}(t))$
$\displaystyle=\int_{0}^{T}\iint_{\mathbb{R}^{2d}}\frac{1}{N^{2}}\sum_{j,k}\theta^{\epsilon}\left(x_{j}(t)-x_{k}(t)-z_{1}+z_{2}\right)\eta^{\delta}(z_{1})\eta^{\delta}(z_{2})\phi(t,x_{j}(t))\psi(t,x_{k}(t))dz_{1}dz_{2}dt+Err(\epsilon,N,\delta)$
(30)
with
$\limsup_{\delta\to
0}\limsup_{N\to\infty}\mathbb{E}|Err(\epsilon,N,\delta)|=0.$
Shifting the arguments of $\phi(t,\cdot),\psi(t,\cdot)$ in (30) by $z_{1}$ and
$z_{2}$, respectively (with the latter two in the support of $\eta^{\delta}$),
it can be shown that this causes an error of $O(\delta)$ in expectation, whereby
we rewrite (30) as
$\displaystyle\int_{0}^{T}\iint_{\mathbb{R}^{2d}}\frac{1}{N^{2}}\sum_{j,k}\theta^{\epsilon}\left(x_{j}(t)-x_{k}(t)-z_{1}+z_{2}\right)\eta^{\delta}(z_{1})\eta^{\delta}(z_{2})\phi(t,x_{j}(t)-z_{1})\psi(t,x_{k}(t)-z_{2})dz_{1}dz_{2}dt+Err_{1}(\epsilon,N,\delta)$
$\displaystyle=\frac{1}{N^{2}}\int_{0}^{T}\iint_{\mathbb{R}^{2d}}\theta^{\epsilon}(w_{1}-w_{2})\phi(t,w_{1})\psi(t,w_{2})\sum_{j\leq
N(t)}\eta^{\delta}\left(x_{j}(t)-w_{1}\right)\sum_{k\leq
N(t)}\eta^{\delta}\left(x_{k}(t)-w_{2}\right)dw_{1}dw_{2}dt+Err_{1}(\epsilon,N,\delta)$
$\displaystyle=\int_{0}^{T}\int_{\mathbb{R}^{2d}}\theta^{\epsilon}(w_{1}-w_{2})\phi(t,w_{1})\psi(t,w_{2})\left(\eta^{\delta}*_{x}\xi^{N}\right)(t,w_{1})\left(\eta^{\delta}*_{x}\xi^{N}\right)(t,w_{2})dw_{1}dw_{2}dt+Err_{1}(\epsilon,N,\delta),$
where the second equality is a change of the order of integration, and
$\mathbb{E}|Err_{1}(\epsilon,N,\delta)|=\mathbb{E}|Err(\epsilon,N,\delta)|+O(\delta).$
Since $\eta^{\delta},\phi,\psi$ are all smooth, and
$|w_{1}-w_{2}|<2C_{0}\epsilon$ within the support of $\theta^{\epsilon}$,
changing the $w_{2}$ to $w_{1}$ in the argument of $\eta^{\delta}$ and
$\psi(t,\cdot)$ can be shown to cause an error on the order
$O(\epsilon\delta^{-2d-1})$ in expectation, and we can rewrite the above
further
$\displaystyle\int_{0}^{T}dt\int_{\mathbb{R}^{d}}dw_{1}\left(\int_{\mathbb{R}^{d}}dw_{2}\;\theta^{\epsilon}(w_{1}-w_{2})\right)\phi(t,w_{1})\psi(t,w_{1})\left(\eta^{\delta}*_{x}\xi^{N}\right)(t,w_{1})^{2}+Err_{2}(\epsilon,N,\delta)$
$\displaystyle=\int_{0}^{T}dt\int_{\mathbb{R}^{d}}dw_{1}\phi(t,w_{1})\psi(t,w_{1})\left(\eta^{\delta}*_{x}\xi^{N}\right)(t,w_{1})^{2}+Err_{2}(\epsilon,N,\delta)\,,$
since $\int\theta^{\epsilon}=1$, where
$\mathbb{E}|Err_{2}(\epsilon,N,\delta)|=\mathbb{E}|Err_{1}(\epsilon,N,\delta)|+O(\epsilon\delta^{-2d-1}).$
Since $\epsilon(N)\to 0$ as $N\to\infty$, $\mathbb{E}|Err_{2}(\epsilon,N,\delta)|$
vanishes in the iterated limit $N\to\infty$ followed by $\delta\to 0$. This
completes the proof of Proposition 7. ∎
To complete the proof of Theorem 4, another ingredient is the tightness of the
sequence of measures $\{\mathcal{P}^{N}\}_{N}$ in
$\mathcal{P}(\mathcal{M})$, and the properties of its weak subsequential
limits, as discussed next.
###### Lemma 12
Let $d\geq 1$ and $T$ finite. The sequence $\{\mathcal{P}^{N}\}_{N}$
induced by $\{\omega\mapsto\xi^{N}(dt,dx,\omega)\}_{N}$ is tight in
$\mathcal{P}(\mathcal{M})$, hence relatively compact in the weak topology.
Proof. Firstly, note that subsets of the form
$\displaystyle
K_{A}:=\left\{\mu\in\mathcal{M}:\;\int_{[0,T]\times\mathbb{R}^{d}}\left(1+|x|\right)\mu(dt,dx)\leq
A\right\}$
are relatively compact in $\mathcal{M}$. Indeed, uniformly for $\mu\in K_{A}$,
we have that
$\displaystyle\mu\left(\left([0,T]\times\overline{\mathbb{B}}(0,L)\right)^{c}\right)\leq
L^{-1}\int_{\left([0,T]\times\overline{\mathbb{B}}(0,L)\right)^{c}}|x|\mu(dt,dx)\leq
L^{-1}A$
for any $L>0$. Secondly, by coupling to a system of pure proliferation at unit
rate, and by a proof similar to that of Proposition 8, we can show that
$\displaystyle\mathbb{E}\left[\int_{0}^{T}\sum_{j\leq
N(t)}\left(1+|x_{j}(t)|\right)dt\right]\leq C_{*}N_{0}$
for some constant $C_{*}=C_{*}(d,T,R)$ finite. Thus, for any $\epsilon>0$, we
can find $A=A(\epsilon)$ such that uniformly for all $N$,
$\displaystyle\mathcal{P}^{N}\left({\overline{K}_{A}}^{c}\right)$
$\displaystyle\leq\mathbb{P}\left(\int_{[0,T]\times\mathbb{R}^{d}}\left(1+|x|\right)\xi^{N}(dt,dx)>A\right)$
$\displaystyle\leq A^{-1}\mathbb{E}\left[\int_{0}^{T}\frac{1}{N}\sum_{j\leq
N(t)}\left(1+|x_{j}(t)|\right)dt\right]\leq
A^{-1}C_{*}\|u_{0}\|_{L^{1}}<\epsilon$
by Markov’s inequality. This implies that the sequence
$\{\mathcal{P}^{N}\}_{N}$ is tight.
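The tightness bound above rests on nothing more than Markov's inequality for a nonnegative random variable, $\mathbb{P}(X\geq A)\leq A^{-1}\mathbb{E}X$. As a quick numerical illustration (not part of the proof; the exponential distribution and parameters are arbitrary choices), one can check the inequality by Monte Carlo:

```python
import random

rng = random.Random(1)
# X >= 0 with mean 2.0: exponential with rate 1/2 (illustrative choice)
mean, A, n = 2.0, 10.0, 100_000
samples = [rng.expovariate(0.5) for _ in range(n)]
empirical_tail = sum(x >= A for x in samples) / n
markov_bound = mean / A  # Markov: P(X >= A) <= E[X] / A
```

Here the true tail $e^{-5}\approx 0.0067$ sits well below the Markov bound $0.2$; the bound is crude but, as in the proof, it is uniform over the whole sequence.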
###### Lemma 13
Let $d\geq 1$ and $T$ finite. Any weak subsequential limit $\mathcal{P}^{*}$
of the sequence $\{\mathcal{P}^{N}\}_{N}$ is supported on the subset of
$\mathcal{M}$ consisting of measures that are absolutely continuous with
respect to the Lebesgue measure on $[0,T]\times\mathbb{R}^{d}$, with density
bounded by a deterministic constant.
Proof. We adapt the proof strategy of [10, Lemmas 4.1, 4.2] to our case.
Taking a smooth approximation $\{\psi_{n}\}_{n}$ to the function
$(x-k-2)_{+}$, where $k=k(d,T,u_{0})$ is a constant to be determined, we fix a
$C^{2}$ function $\psi:\mathbb{R}\to\mathbb{R}_{+}$ that is non-decreasing,
convex, with $\psi(0)=0$, $\psi^{\prime\prime}$ bounded, and $\psi^{\prime}=0$
for $x\leq k$ and $\psi^{\prime}\leq 1$ for $x>k$. For simplicity, we denote
$f^{\delta}(t,x)dt:=\left(\xi^{N}*_{x}\eta^{\delta}\right)(t,x)=\frac{1}{N}\sum_{j\leq
N(t)}\eta^{\delta}\left(x-x_{j}(t)\right)dt,$
where $\eta$ is a smooth bump function as in (18).
By Itô formula applied to the process
$\int_{\mathbb{R}^{d}}\psi\left(f^{\delta}(t,x)\right)dx$, and then taking
expectation, the martingale vanishes and we get that
$\displaystyle\mathbb{E}\int\psi\left(f^{\delta}(T,x)\right)dx\leq\mathbb{E}\int\psi\left(f^{\delta}(0,x)\right)dx+\mathbb{E}\int_{0}^{T}\int\sum_{j\leq
N(t)}\frac{1}{2}\Delta_{x_{j}}\psi\left(f^{\delta}(t,x)\right)dxdt$
$\displaystyle\quad\quad\quad+\mathbb{E}\int_{0}^{T}\int\sum_{j\leq
N(t)}\left[\psi\left(f^{\delta}(t,x)+\frac{1}{N}\eta^{\delta}(x-x_{j}(t))\right)-\psi(f^{\delta}(t,x))\right]\,dxdt$
$\displaystyle\leq\mathbb{E}\int\psi(f^{\delta}(0,x))dx+C\frac{N^{-1}}{\delta^{d+2}}\int|\nabla\eta|^{2}dx+\mathbb{E}\int_{0}^{T}\int\frac{1}{N}\sum_{j\leq
N(t)}\eta^{\delta}(x-x_{j}(t))1_{\{f^{\delta}(t,x)>k\}}dxdt$
$\displaystyle=\mathbb{E}\int\psi(f^{\delta}(0,x))dx+C\frac{N^{-1}}{\delta^{d+2}}\int|\nabla\eta|^{2}dx+\mathbb{E}\int_{0}^{T}\int
f^{\delta}(t,x)1_{\{f^{\delta}(t,x)>k\}}dxdt,$ (31)
where the analysis of the second term is the same as done in [10, page 46], in
particular utilizing the identity [10, (4.7)]. We argue that the rhs of (2)
converges to zero as $N\to\infty$ for some constant $k=k(T,d,u_{0})$ and every
fixed $\delta$. With that we can get the desired conclusion by repeating the
argument of [10, Lemma 4.2]. In particular, the constant $k+2$ is the upper
bound on the density.
To this end, fixing $t$ and $x$, we consider $\mathbb{B}(x,\delta)$, the
open $\delta$-ball around $x$. We denote by $Z(t,x,\delta)$ the number of
particles in an auxiliary binary Branching Brownian motion (bbm) of unit
branching rate that fall into $\mathbb{B}(x,\delta)$ at time $t$, when
starting with a single particle at $t=0$ distributed with density $(\int
u_{0})^{-1}u_{0}$.
Recall that our particle system starts with $N_{0}=N\int u_{0}$ number of
independent points with density $(\int u_{0})^{-1}u_{0}$, and each particle
generates its own lineage, stochastically dominated from above by a bbm. If we
denote $Z^{(i)}(t,x,\delta)$ the number of particles in our proliferation
system that fall into $\mathbb{B}(x,\delta)$ at time $t$ that come from the
$i$-th lineage, for $i=1,2,...,N_{0}$, then $Z^{(i)}$ are dominated by i.i.d.
copies $Z_{i}$ of $Z$.
Note that each $Z_{i}$ is a Poisson variable with constant mean $e^{T}$. By
Chernoff’s bound for sums of independent Poisson variables, for fixed
$\delta,t,x$, some $C=C(d,T,u_{0})$, any $N$ and $s>\mathbb{E}Z$ large enough,
we have the tail estimates
$\displaystyle\mathbb{P}\Big{(}f^{\delta}(t,x)\geq
s\frac{\delta^{-d}||\eta||_{\infty}}{\int
u_{0}}\Big{)}\leq\mathbb{P}\Big{(}N_{0}^{-1}\sum_{i=1}^{N_{0}}Z^{(i)}\geq
s\Big{)}$
$\displaystyle\leq\mathbb{P}\Big{(}N_{0}^{-1}\sum_{i=1}^{N_{0}}Z_{i}\geq
s\Big{)}\leq C\exp\{-CN(s-\mathbb{E}Z)\log(s-\mathbb{E}Z)\}$ (32)
where the first inequality comes from unraveling the definition of
$f^{\delta}$. Further, due to the joint continuity of Brownian motion
transition densities, $(t,x)\mapsto\mathbb{E}[Z(t,x,\delta)]$ is continuous.
Since $u_{0}$ is bounded and compactly supported, this quantity
decays to zero as $|x|\to\infty$, uniformly on $[0,T]$. Further, the limit as
$\delta\to 0$ of $\delta^{-d}\mathbb{E}[Z(t,x,\delta)]$ exists and is (up to a
constant) the occupation density of a bbm at $(t,x)$. Thus, the following
constant is universal:
$\displaystyle
k_{1}(d,T,u_{0}):=\sup_{t\in[0,T],\;x\in\mathbb{R}^{d},\,\delta\in(0,1]}\left\{\delta^{-d}\mathbb{E}[Z(t,x,\delta)]\right\}<\infty.$
In particular, upon choosing
$k=k(d,T,u_{0}):=2\gamma\vee\frac{2k_{1}||\eta||_{\infty}}{\int u_{0}}$
we have by (32) that for some $C=C(k)$ finite,
$\mathbb{E}\left[f^{\delta}(t,x)1_{\{f^{\delta}(t,x)>k\}}\right]=\int_{k}^{\infty}\mathbb{P}(f^{\delta}(t,x)>u)du\leq
Ce^{-CN}.$
To conclude the proof, we note that the random variable
$f^{\delta}(t,x)1_{\{f^{\delta}(t,x)>k\}}$ is dominated from above by
$f^{\delta}(t,x)$ which is uniformly integrable. Indeed, it is simple to check
that
$\mathbb{E}\int_{0}^{T}\int f^{\delta}(t,x)dxdt\leq e^{T}\int
u_{0},\quad\forall\delta,N>0.$
Thus, by the dominated convergence theorem, the third term of (2) converges to
zero as $N\to\infty$, for every fixed $\delta$. Since $k\geq 2\gamma$ where
$\gamma=\|u_{0}\|_{\infty}$, we also have the first term of (2) converging to
zero.
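The exponential tail (32) used in this proof follows from the standard Chernoff bound for Poisson variables: a sum of $n$ i.i.d. Poisson($\lambda$) variables is Poisson($n\lambda$), and $\mathbb{P}(\mathrm{Poisson}(\mu)\geq x)\leq e^{-\mu}(e\mu/x)^{x}$ for $x>\mu$. The following sketch (with illustrative parameters, not the constants of the lemma) compares this bound with a Monte Carlo estimate:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's inversion method; adequate for small lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def poisson_sum_tail_bound(lam, n, s):
    # P(mean of n iid Poisson(lam) >= s) <= exp(-n*(lam - s + s*log(s/lam))), s > lam
    return math.exp(-n * (lam - s + s * math.log(s / lam)))

rng = random.Random(0)
lam, n, s, trials = 1.0, 50, 2.0, 2000
hits = sum(
    1 for _ in range(trials)
    if sum(poisson_sample(lam, rng) for _ in range(n)) >= n * s
)
empirical = hits / trials
bound = poisson_sum_tail_bound(lam, n, s)
```

Even for $n=50$ the bound is already of order $10^{-9}$, which is the exponential-in-$N$ decay exploited above.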
Now we complete the proof of Theorem 4. Since the sequence
$\{\mathcal{P}^{N}\}_{N}$ is tight in $\mathcal{P}(\mathcal{M})$, we can
take a weakly converging subsequence
$\mathcal{P}^{N_{k}}\Rightarrow\mathcal{P}^{*}$, and we denote by $\xi(dt,dx)$
a random variable taking values in $\mathcal{M}$ that is distributed according
to the measure $\mathcal{P}^{*}$. For any $\iota>0$ we have that
$\displaystyle\mathbb{P}\left(\left|\int
u_{0}(x)\phi(0,x)dx+\left\langle\xi,(\partial_{t}+\frac{1}{2}\Delta+1)\phi\right\rangle-\left\langle\left(\xi*_{x}\eta^{\delta}\right)^{2},\phi\right\rangle\right|>3\iota\right)$
$\displaystyle\leq\liminf_{N\to\infty}\mathbb{P}\left(\left|\int
u_{0}(x)\phi(0,x)dx+\left\langle\xi^{N},(\partial_{t}+\frac{1}{2}\Delta+1)\phi\right\rangle-\left\langle\left(\xi^{N}*_{x}\eta^{\delta}\right)^{2},\phi\right\rangle\right|>3\iota\right)$
$\displaystyle\leq\liminf_{N\to\infty}\mathbb{P}\left(\left|Q^{N}(0,\eta(0))-\int\phi(0,x)u_{0}(x)dx\right|>\iota\right)+\liminf_{N\to\infty}\mathbb{P}\left(|\widetilde{M}_{T}|>\iota\right)$
$\displaystyle\quad\quad\quad+\liminf_{N\to\infty}\mathbb{P}\left(\left|\left\langle(\xi^{N}*_{x}\eta^{\delta})^{2},\phi\right\rangle-\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))\phi(t,x_{j}(t))\,dt\right|>\iota\right)$
where we used the identity (2). We already have the first two limits equal to
zero, and furthermore, by Proposition 7, we have that
$\displaystyle\limsup_{\delta\to
0}\limsup_{N\to\infty}\mathbb{E}\left|\int_{0}^{T}\frac{1}{N^{2}}\sum_{j,k\leq
N(t)}\theta^{\epsilon}(x_{j}(t)-x_{k}(t))\phi(t,x_{j}(t))\,dt-\left\langle(\xi^{N}*_{x}\eta^{\delta})^{2},\phi\right\rangle\right|=0$
By Lemma 13, we can write $\xi(dt,dx)=u(t,x)dtdx$ for some (possibly random)
function $u(t,x)$ with a deterministic bound. With $u*_{x}\eta^{\delta}$
bounded uniformly in $\delta$, we have by the dominated convergence theorem
that as $\delta\to 0$,
$\displaystyle\mathbb{E}\left|\left\langle\phi(t,x),(u*_{x}\eta^{\delta})(t,x)^{2}\right\rangle-\left\langle\phi(t,x),u(t,x)^{2}\right\rangle\right|\to
0.$
Taken together, we get that with probability one,
$\int u_{0}(x)\phi(0,x)dx+\left\langle
u,(\partial_{t}+\frac{1}{2}\Delta+1)\phi\right\rangle-\left\langle
u^{2},\phi\right\rangle=0$
holds, whereby $u(t,x)$ is a weak solution of the f-kpp equation, in the sense
of (1). Since we will prove in Section 3 that weak solutions to the f-kpp
equation are unique, the limit $\mathcal{P}^{*}$ must be unique and is a Dirac
measure on $u(t,x)dtdx$. This completes the proof of our main result.
## 3 Uniqueness of weak solutions of f-kpp equation
Denote by $L_{+}^{1}\left(\mathbb{R}^{d}\right)$ the set of nonnegative
integrable functions.
As a preliminary, let us recall that we have denoted by
$\xi^{N}\left(dt,dx\right)$ the space-time empirical measure and we have
remarked that it has converging subsequences. Thus assume that
$\xi^{N_{k}}\left(dt,dx\right)$ weakly converges to a space-time finite
measure $\xi\left(dt,dx\right)$:
$\lim_{k\rightarrow\infty}\int_{0}^{T}\int_{\mathbb{R}^{d}}K\left(t,x\right)\xi^{N_{k}}\left(dt,dx\right)=\int_{0}^{T}\int_{\mathbb{R}^{d}}K\left(t,x\right)\xi\left(dt,dx\right)$
for every bounded continuous function $K$. Moreover, we know that there exists
$u\in
L^{\infty}\left(\left[0,T\right];L^{\infty}\left(\mathbb{R}^{d}\right)\cap
L_{+}^{1}\left(\mathbb{R}^{d}\right)\right)$
such that
$\int_{0}^{T}\int_{\mathbb{R}^{d}}K\left(t,x\right)\xi\left(dt,dx\right)=\int_{0}^{T}\int_{\mathbb{R}^{d}}K\left(t,x\right)u\left(t,x\right)dxdt.$
Moreover, we also know that the finite measure $\mu_{0}^{N}\left(dx\right)$
defined on $\mathbb{R}^{d}$ by
$\mu_{0}^{N}\left(dx\right)=\frac{1}{N}\sum_{j}\delta_{x_{j}\left(0\right)}$
converges weakly to a finite measure $\mu_{0}\left(dx\right)$ with density
$u_{0}\in L_{+}^{1}\left(\mathbb{R}^{d}\right)$:
$\lim_{N\rightarrow\infty}\int_{\mathbb{R}^{d}}\phi\left(x\right)\mu_{0}^{N}\left(dx\right)=\int_{\mathbb{R}^{d}}\phi\left(x\right)u_{0}\left(x\right)dx$
for every bounded continuous function $\phi$. Finally, let us also introduce
the notation $\mu_{t}^{N}\left(dx\right)$ for the family of finite measures on
$\mathbb{R}^{d}$, indexed by $t$, such that
$\xi^{N}\left(dt,dx\right)=\mu_{t}^{N}\left(dx\right)dt$
namely, more explicitly,
$\mu_{t}^{N}\left(dx\right)=\frac{1}{N}\sum_{j}\delta_{x_{j}\left(t\right)}\left(dx\right)$.
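The convergence of $\mu_{0}^{N}$ to $u_{0}$ is an instance of the law of large numbers for empirical measures: for i.i.d. samples $x_{j}(0)$ with density $u_{0}$ (normalised here so that $\int u_{0}=1$), $\langle\mu_{0}^{N},\phi\rangle\to\int\phi u_{0}$ for bounded continuous $\phi$. A minimal numerical sketch, taking for illustration $u_{0}$ a standard Gaussian density and $\phi(x)=\cos x$, for which $\int\phi u_{0}=e^{-1/2}$:

```python
import math
import random

rng = random.Random(42)
N = 200_000
# i.i.d. samples from the density u_0 (standard Gaussian, illustrative choice)
samples = [rng.gauss(0.0, 1.0) for _ in range(N)]

phi = math.cos
empirical = sum(phi(x) for x in samples) / N   # <mu_0^N, phi>
exact = math.exp(-0.5)                         # E[cos X] for X ~ N(0,1)
```

The error is of order $N^{-1/2}$, consistent with the CLT fluctuations of the empirical measure.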
We have, for $K\in
C_{c}^{1,2}\left(\left[0,T\right]\times\mathbb{R}^{d}\right)$ (compact support
is here a restriction only in space, since we take $\left[0,T\right]$ closed),
$\displaystyle\left\langle\mu_{T}^{N},K\left(T\right)\right\rangle$
$\displaystyle=\left\langle\mu_{0}^{N},K\left(0\right)\right\rangle+\int_{0}^{T}\left\langle
\mu_{t}^{N},\left(\partial_{t}+\frac{1}{2}\Delta+1\right)K\left(t\right)\right\rangle
dt$
$\displaystyle-N^{-2}\int_{0}^{T}\sum_{j,k}\theta^{\epsilon}\left(x_{j}\left(t\right)-x_{k}\left(t\right)\right)K\left(t,x_{j}\left(t\right)\right)dt+M_{T}.$
Notice that we may express the first time integral by means of
$\xi^{N}\left(dt,dx\right)$, but we cannot do the same for the term
$\left\langle\mu_{T}^{N},K\left(T\right)\right\rangle$, since this is not a
space-time integral. Therefore we have
$\displaystyle\left\langle\mu_{T}^{N},K\left(T\right)\right\rangle$
$\displaystyle=\left\langle\mu_{0}^{N},K\left(0\right)\right\rangle+\int_{0}^{T}\int_{\mathbb{R}^{d}}\left(\partial_{t}+\frac{1}{2}\Delta+1\right)K\left(t,x\right)\xi^{N}\left(dt,dx\right)$
$\displaystyle-N^{-2}\int_{0}^{T}\sum_{j,k}\theta^{\epsilon}\left(x_{j}\left(t\right)-x_{k}\left(t\right)\right)K\left(t,x_{j}\left(t\right)\right)dt+M_{T}.$
Now, from the convergence property of $\xi^{N_{k}}$ above, we have
$\displaystyle\lim_{k\rightarrow\infty}\int_{0}^{T}\int_{\mathbb{R}^{d}}\left(\partial_{t}+\frac{1}{2}\Delta+1\right)K\left(t,x\right)\xi^{N_{k}}\left(dt,dx\right)$
$\displaystyle=\int_{0}^{T}\int_{\mathbb{R}^{d}}\left(\partial_{t}+\frac{1}{2}\Delta+1\right)K\left(t,x\right)u\left(t,x\right)dxdt.$
Similarly, we have
$\lim_{k\rightarrow\infty}\left\langle\mu_{0}^{N_{k}},K\left(0\right)\right\rangle=\int_{\mathbb{R}^{d}}K\left(0,x\right)u_{0}\left(x\right)dx.$
The last term is the one carefully analyzed in the paper. But we cannot state
anything about the convergence of
$\left\langle\mu_{T}^{N_{k}},K\left(T\right)\right\rangle$, unless we
investigate the tightness of the empirical measures $\mu_{\cdot}^{N}$ in
$\mathcal{D}\left(\left[0,T\right];M_{+}\left(\mathbb{R}^{d}\right)\right)$, a
fact that we prefer to avoid.
Therefore the only choice is to assume that
$K\left(T\right)=0.$
We deduce
$\displaystyle 0$
$\displaystyle=\int_{\mathbb{R}^{d}}K\left(0,x\right)u_{0}\left(x\right)dx$
$\displaystyle+\int_{0}^{T}\int_{\mathbb{R}^{d}}\left(\partial_{t}+\frac{1}{2}\Delta+1\right)K\left(t,x\right)u\left(t,x\right)dxdt$
$\displaystyle-\lim_{k\rightarrow\infty}N^{-2}\int_{0}^{T}\sum_{j,k}\theta^{\epsilon}\left(x_{j}\left(t\right)-x_{k}\left(t\right)\right)K\left(t,x_{j}\left(t\right)\right)dt$
(using also the fact that the martingale goes to zero). This motivates the
next definition.
###### Definition 14
Assume $u_{0}\in L^{\infty}\left(\mathbb{R}^{d}\right)\cap
L_{+}^{1}\left(\mathbb{R}^{d}\right)$. We say that
$u\in
L^{\infty}\left(\left[0,T\right];L^{\infty}\left(\mathbb{R}^{d}\right)\cap
L_{+}^{1}\left(\mathbb{R}^{d}\right)\right)$
is a weak solution of the Cauchy problem
$\displaystyle\partial_{t}u$ $\displaystyle=\frac{1}{2}\Delta
u+u\left(1-u\right)$ (33) $\displaystyle u|_{t=0}$ $\displaystyle=u_{0}$
if
$\displaystyle 0$
$\displaystyle=\int_{\mathbb{R}^{d}}K\left(0,x\right)u_{0}\left(x\right)dx+\int_{0}^{T}\int_{\mathbb{R}^{d}}\left(\partial_{t}+\frac{1}{2}\Delta+1\right)K\left(t,x\right)u\left(t,x\right)dxdt$
$\displaystyle-\int_{0}^{T}\int_{\mathbb{R}^{d}}u^{2}\left(t,x\right)K\left(t,x\right)dxdt$
for all test functions $K\in
C_{c}^{1,2}\left(\left[0,T\right]\times\mathbb{R}^{d}\right)$ such that
$K\left(T,\cdot\right)=0$.
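To make the Cauchy problem (33) concrete, here is a minimal one-dimensional explicit finite-difference sketch (the grid, domain, time step, and step-function initial datum are illustrative choices, not taken from the paper). With diffusion coefficient $\frac{1}{2}$, the FKPP front invades the unstable state $u=0$ at asymptotic speed $\sqrt{2}$:

```python
# Explicit Euler for  du/dt = (1/2) u_xx + u (1 - u)  on [-L, L],
# with u(0, x) = 1 for x < 0 and 0 for x >= 0 (illustrative choices).
L, nx, T = 20.0, 401, 5.0
dx = 2.0 * L / (nx - 1)
dt = 0.4 * dx * dx            # satisfies the stability condition dt <= dx^2 for (1/2) u_xx
steps = int(T / dt)

u = [1.0 if -L + i * dx < 0.0 else 0.0 for i in range(nx)]
for _ in range(steps):
    new = u[:]
    for i in range(1, nx - 1):
        lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
        new[i] = u[i] + dt * (0.5 * lap + u[i] * (1.0 - u[i]))
    u = new

# position where the front crosses the level 1/2
front = next(-L + i * dx for i in range(nx) if u[i] < 0.5)
```

At $T=5$ the front has moved a few units to the right, below the asymptotic prediction $\sqrt{2}\,T$ because of the well-known logarithmic delay of FKPP fronts.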
A priori, a weak solution does not have continuity properties in time and thus
the value at zero is not properly defined. Implicitly it is defined by the
previous identity, but we can do better.
###### Lemma 15
Given $\phi\in C_{c}^{2}\left(\mathbb{R}^{d}\right)$, the measurable bounded
function $t\mapsto\left\langle u\left(t\right),\phi\right\rangle$ has a
continuous modification, with value equal to $\left\langle
u_{0},\phi\right\rangle$ at time zero. Moreover, denoting by
$t\mapsto\left\langle u\left(t\right),\phi\right\rangle$ the continuous
modification, we have
$\left\langle u\left(t\right),\phi\right\rangle=\left\langle
u_{0},\phi\right\rangle+\frac{1}{2}\int_{0}^{t}\left\langle
u\left(s\right),\Delta\phi\right\rangle ds+\int_{0}^{t}\left\langle
u\left(s\right)\left(1-u\left(s\right)\right),\phi\right\rangle ds$ (34)
for all $t\in\left[0,T\right]$.
Proof. Given $t_{0}\in[0,T)$, $h>0$ such that $t_{0}+h\leq T$ and $\phi\in
C_{c}^{2}\left(\mathbb{R}^{d}\right)$, consider the function
$K\left(t,x\right)=\phi\left(x\right)\chi\left(t\right)$, where
$\chi\left(t\right)$ is equal to 1 for $t\in\left[0,t_{0}\right]$,
$1-\frac{1}{h}\left(t-t_{0}\right)$ for $t\in\left[t_{0},t_{0}+h\right]$, zero
for $t\in\left[t_{0}+h,T\right]$. This function is only Lipschitz continuous
in time but it is not difficult to approximate it by a $C^{1}$ function of
time (this is not really needed, since Lipschitz continuity in time of the
test functions would be sufficient in the definition above). We have
$\partial_{t}K\left(t,x\right)=\phi\left(x\right)\chi^{\prime}\left(t\right)$
where $\chi^{\prime}\left(t\right)$ is equal to zero outside
$\left[t_{0},t_{0}+h\right]$ and to $-h^{-1}$ inside, with only lateral
derivatives at $t_{0}$ and $t_{0}+h$. Using this test function above we get
$\displaystyle 0$
$\displaystyle=\int_{\mathbb{R}^{d}}\phi\left(x\right)u_{0}\left(x\right)dx+\int_{0}^{T}\chi^{\prime}\left(t\right)\int_{\mathbb{R}^{d}}\phi\left(x\right)u\left(t,x\right)dxdt$
$\displaystyle+\int_{0}^{T}\chi\left(t\right)\left(\left\langle
u\left(t\right),\frac{1}{2}\Delta\phi\right\rangle+\left\langle
u\left(t\right)\left(1-u\left(t\right)\right),\phi\right\rangle\right)dt$
namely
$\displaystyle 0$
$\displaystyle=\int_{\mathbb{R}^{d}}\phi\left(x\right)u_{0}\left(x\right)dx-\frac{1}{h}\int_{t_{0}}^{t_{0}+h}v\left(t\right)dt$
$\displaystyle+\int_{0}^{t_{0}}\left(\left\langle
u\left(t\right),\frac{1}{2}\Delta\phi\right\rangle+\left\langle
u\left(t\right)\left(1-u\left(t\right)\right),\phi\right\rangle\right)dt$
$\displaystyle-\frac{1}{h}\int_{t_{0}}^{t_{0}+h}\left(t-t_{0}\right)\left(\left\langle
u\left(t\right),\frac{1}{2}\Delta\phi\right\rangle+\left\langle
u\left(t\right)\left(1-u\left(t\right)\right),\phi\right\rangle\right)dt$
where we have denoted by $v\left(t\right)$ the bounded measurable function
$\int_{\mathbb{R}^{d}}\phi\left(x\right)u\left(t,x\right)dx$. Since
$u\left(t\right)$ is bounded, the function equal to
$\left(t-t_{0}\right)\left(\left\langle
u\left(t\right),\frac{1}{2}\Delta\phi\right\rangle+\left\langle
u\left(t\right)\left(1-u\left(t\right)\right),\phi\right\rangle\right)$ for
$t\in\left[t_{0},T\right]$ and equal to zero at $t_{0}$ is continuous at
$t=t_{0}$, hence
$\lim_{h\rightarrow
0}\frac{1}{h}\int_{t_{0}}^{t_{0}+h}\left(t-t_{0}\right)\left(\left\langle
u\left(t\right),\frac{1}{2}\Delta\phi\right\rangle+\left\langle
u\left(t\right)\left(1-u\left(t\right)\right),\phi\right\rangle\right)dt=0.$
By the Lebesgue differentiation theorem, the following limit exists for a.e.
$t_{0}$:
$\lim_{h\rightarrow
0}\frac{1}{h}\int_{t_{0}}^{t_{0}+h}v\left(t\right)dt=v\left(t_{0}\right).$
Therefore we get
$v\left(t_{0}\right)=\int_{\mathbb{R}^{d}}\phi\left(x\right)u_{0}\left(x\right)dx+\int_{0}^{t_{0}}\left(\left\langle
u\left(t\right),\frac{1}{2}\Delta\phi\right\rangle+\left\langle
u\left(t\right)\left(1-u\left(t\right)\right),\phi\right\rangle\right)dt$
for a.e. $t_{0}$. The right-hand side of this identity is a continuous
function of $t_{0}$, hence the function $v$ has a continuous modification,
and its value at $t_{0}=0$ is
$\int_{\mathbb{R}^{d}}\phi\left(x\right)u_{0}\left(x\right)dx$.
We can now prove the main result of this section.
###### Proposition 16
Two weak solutions of the Cauchy problem (33) coincide a.s.
Proof. Step 1. Let $u$ be a weak solution. Let $e^{t\frac{1}{2}\Delta}$ be the
heat semigroup, defined for instance on bounded measurable functions. In this
step we are going to prove that
$u\left(t\right)=e^{t\frac{1}{2}\Delta}u_{0}+\int_{0}^{t}e^{\left(t-s\right)\frac{1}{2}\Delta}\left[u\left(s\right)\left(1-u\left(s\right)\right)\right]ds.$
(35)
Let $\left(\sigma_{\epsilon}\right)_{\epsilon\in\left(0,1\right)}$ be
classical smooth compact support mollifiers; set
$u_{\epsilon}\left(t\right)=\sigma_{\epsilon}\ast u\left(t\right).$
Given $\psi\in C_{c}^{2}\left(\mathbb{R}^{d}\right)$, take
$\phi=\sigma_{\epsilon}^{-}\ast\psi$ in (34), where
$\sigma_{\epsilon}^{-}\left(x\right)=\sigma_{\epsilon}\left(-x\right)$. Then,
being
$\displaystyle\left\langle
u\left(t\right),\sigma_{\epsilon}^{-}\ast\psi\right\rangle$
$\displaystyle=\left\langle\sigma_{\epsilon}\ast
u\left(t\right),\psi\right\rangle=\left\langle
u_{\epsilon}\left(t\right),\psi\right\rangle$ $\displaystyle\left\langle
u_{0},\sigma_{\epsilon}^{-}\ast\psi\right\rangle$
$\displaystyle=\left\langle\sigma_{\epsilon}\ast u_{0},\psi\right\rangle$
$\displaystyle\left\langle
u\left(s\right),\Delta\sigma_{\epsilon}^{-}\ast\psi\right\rangle$
$\displaystyle=\left\langle
u\left(s\right),\sigma_{\epsilon}^{-}\ast\Delta\psi\right\rangle=\left\langle
u_{\epsilon}\left(s\right),\Delta\psi\right\rangle=\left\langle\Delta
u_{\epsilon}\left(s\right),\psi\right\rangle$
we get
$\left\langle
u_{\epsilon}\left(t\right),\psi\right\rangle=\left\langle\sigma_{\epsilon}\ast
u_{0},\psi\right\rangle+\frac{1}{2}\int_{0}^{t}\left\langle\Delta
u_{\epsilon}\left(s\right),\psi\right\rangle
ds+\int_{0}^{t}\left\langle\sigma_{\epsilon}\ast\left[u\left(s\right)\left(1-u\left(s\right)\right)\right],\psi\right\rangle
ds$
and therefore
$u_{\epsilon}\left(t\right)=\sigma_{\epsilon}\ast
u_{0}+\frac{1}{2}\int_{0}^{t}\Delta
u_{\epsilon}\left(s\right)ds+\int_{0}^{t}\sigma_{\epsilon}\ast\left[u\left(s\right)\left(1-u\left(s\right)\right)\right]ds$
which implies that $t\mapsto u_{\epsilon}\left(t,x\right)$ is differentiable,
for every $x\in\mathbb{R}^{d}$. With classical arguments we can rewrite the
equation in the form
$u_{\epsilon}\left(t\right)=e^{t\frac{1}{2}\Delta}\sigma_{\epsilon}\ast
u_{0}+\int_{0}^{t}e^{\left(t-s\right)\frac{1}{2}\Delta}\sigma_{\epsilon}\ast\left[u\left(s\right)\left(1-u\left(s\right)\right)\right]ds.$
Notice that $e^{t\frac{1}{2}\Delta}$ is defined by a convolution with a smooth
kernel, for $t>0$, and thus by commutativity between convolutions we have
$e^{t\frac{1}{2}\Delta}\sigma_{\epsilon}\ast u_{0}=\sigma_{\epsilon}\ast
e^{t\frac{1}{2}\Delta}u_{0}$ and similarly under the integral sign. Hence we
can also write
$u_{\epsilon}\left(t\right)=\sigma_{\epsilon}\ast
e^{t\frac{1}{2}\Delta}u_{0}+\int_{0}^{t}\sigma_{\epsilon}\ast
e^{\left(t-s\right)\frac{1}{2}\Delta}\left[u\left(s\right)\left(1-u\left(s\right)\right)\right]ds.$
Given $\phi\in C_{c}\left(\mathbb{R}^{d}\right)$, we deduce
$\left\langle
u\left(t\right),\sigma_{\epsilon}^{-}\ast\phi\right\rangle=\left\langle
e^{t\frac{1}{2}\Delta}u_{0},\sigma_{\epsilon}^{-}\ast\phi\right\rangle+\int_{0}^{t}\left\langle
e^{\left(t-s\right)\frac{1}{2}\Delta}\left[u\left(s\right)\left(1-u\left(s\right)\right)\right],\sigma_{\epsilon}^{-}\ast\phi\right\rangle
ds.$
Since $\sigma_{\epsilon}^{-}\ast\phi$ converges uniformly to $\phi$, from the
dominated convergence theorem we deduce
$\left\langle u\left(t\right),\phi\right\rangle=\left\langle
e^{t\frac{1}{2}\Delta}u_{0},\phi\right\rangle+\int_{0}^{t}\left\langle
e^{\left(t-s\right)\frac{1}{2}\Delta}\left[u\left(s\right)\left(1-u\left(s\right)\right)\right],\phi\right\rangle
ds$
and thus we get (35).
Step 2. Assume that $u^{\left(i\right)}$ are two weak solutions. Then, from
(35),
$u^{\left(1\right)}\left(t\right)-u^{\left(2\right)}\left(t\right)=\int_{0}^{t}e^{\left(t-s\right)\frac{1}{2}\Delta}\left(u^{\left(1\right)}\left(s\right)-u^{\left(2\right)}\left(s\right)\right)\left(1-u^{\left(1\right)}\left(s\right)-u^{\left(2\right)}\left(s\right)\right)ds$
hence
$\left\|u^{\left(1\right)}\left(t\right)-u^{\left(2\right)}\left(t\right)\right\|_{\infty}\leq\int_{0}^{t}\left\|u^{\left(1\right)}\left(s\right)-u^{\left(2\right)}\left(s\right)\right\|_{\infty}\left(1+\left\|u^{\left(1\right)}\left(s\right)\right\|_{\infty}+\left\|u^{\left(2\right)}\left(s\right)\right\|_{\infty}\right)ds.$
Since, by assumption, $u^{\left(i\right)}$ are bounded, we deduce
$\left\|u^{\left(1\right)}\left(t\right)-u^{\left(2\right)}\left(t\right)\right\|_{\infty}=0$
by Grönwall's lemma.
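For completeness, the Grönwall step can be spelled out: writing $g\left(t\right):=\left\|u^{\left(1\right)}\left(t\right)-u^{\left(2\right)}\left(t\right)\right\|_{\infty}$ and $C:=1+\sup_{s\leq T}\left\|u^{\left(1\right)}\left(s\right)\right\|_{\infty}+\sup_{s\leq T}\left\|u^{\left(2\right)}\left(s\right)\right\|_{\infty}$, the last display reads $g\left(t\right)\leq C\int_{0}^{t}g\left(s\right)ds$ with $g$ bounded, and iterating this inequality $n$ times gives
$\displaystyle g\left(t\right)\leq\frac{\left(Ct\right)^{n}}{n!}\sup_{s\leq T}g\left(s\right)\longrightarrow 0\quad\text{as }n\to\infty,$
hence $g\equiv 0$ on $\left[0,T\right]$.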
## References
* [1] V. Bansaye and S. Méléard. Stochastic models for structured populations, volume 1 of Mathematical Biosciences Institute Lecture Series. Stochastics in Biological Systems. Springer, Cham; MBI Mathematical Biosciences Institute, Ohio State University, Columbus, OH, 2015. Scaling limits and long time behavior.
* [2] N. Champagnat and S. Méléard. Invasion and adaptive evolution for individual-based spatially structured populations. J. Math. Biol., 55(2):147–188, 2007.
* [3] A. De Masi, P. A. Ferrari, and J. L. Lebowitz. Reaction-diffusion equations for interacting particle systems. J. Statist. Phys., 44(3-4):589–644, 1986.
* [4] A. De Masi, T. Funaki, E. Presutti, and M. E. Vares. Fast-reaction limit for Glauber-Kawasaki dynamics with two components. ALEA Lat. Am. J. Probab. Math. Stat., 16(2):957–976, 2019.
* [5] A. De Masi and E. Presutti. Mathematical methods for hydrodynamic limits, volume 1501 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1991.
* [6] J. Farfán, C. Landim, and K. Tsunoda. Static large deviations for a reaction-diffusion model. Probab. Theory Related Fields, 174(1-2):49–101, 2019.
* [7] F. Flandoli and M. Leimbach. Mean field limit with proliferation. Discrete Contin. Dyn. Syst. Ser. B, 21(9):3029–3052, 2016.
* [8] F. Flandoli, M. Leimbach, and C. Olivera. Uniform convergence of proliferating particles to the FKPP equation. J. Math. Anal. Appl., 473(1):27–52, 2019.
* [9] T. Funaki and K. Tsunoda. Motion by mean curvature from Glauber-Kawasaki dynamics. J. Stat. Phys., 177(2):183–208, 2019.
* [10] A. Hammond and F. Rezakhanlou. The kinetic limit of a system of coagulating Brownian particles. Arch. Ration. Mech. Anal., 185(1):1–67, 2007.
* [11] C. Kipnis and C. Landim. Scaling limits of interacting particle systems, volume 320 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1999.
* [12] H. P. McKean. Application of Brownian motion to the equation of Kolmogorov-Petrovskii-Piskunov. Comm. Pure Appl. Math., 28(3):323–331, 1975.
* [13] S. Méléard and S. Roelly-Coppoletta. A propagation of chaos result for a system of particles with moderate interaction. Stochastic Process. Appl., 26(2):317–332, 1987.
* [14] M. Métivier. Quelques problèmes liés aux systèmes infinis de particules et leurs limites. In Séminaire de Probabilités, XX, 1984/85, volume 1204 of Lecture Notes in Math., pages 426–446. Springer, Berlin, 1986.
* [15] G. Nappo and E. Orlandi. Limit laws for a coagulation model of interacting random particles. Ann. Inst. H. Poincaré Probab. Statist., 24(3):319–344, 1988.
* [16] K. Oelschläger. A law of large numbers for moderately interacting diffusion processes. Z. Wahrsch. Verw. Gebiete, 69(2):279–322, 1985.
* [17] K. Oelschläger. On the derivation of reaction-diffusion equations as limit dynamics of systems of moderately interacting stochastic processes. Probab. Theory Related Fields, 82(4):565–586, 1989.
* [18] A. Stevens. The derivation of chemotaxis equations as limit dynamics of moderately interacting stochastic many-particle systems. SIAM J. Appl. Math., 61(1):183–212, 2000.
* [19] A.-S. Sznitman. Topics in propagation of chaos. In École d’Été de Probabilités de Saint-Flour XIX—1989, volume 1464 of Lecture Notes in Math., pages 165–251. Springer, Berlin, 1991.
* [20] K. Uchiyama. Pressure in classical statistical mechanics and interacting Brownian particles in multi-dimensions. Ann. Henri Poincaré, 1(6):1159–1202, 2000.
* [21] S. R. S. Varadhan. Scaling limits for interacting diffusions. Comm. Math. Phys., 135(2):313–353, 1991.
Authors' address: Scuola Normale Superiore di Pisa, Piazza dei Cavalieri 7,
56126 Pisa PI, Italia.
E-mails: {franco.flandoli, ruojun.huang}@sns.it.
Corentin Barloy, École Normale Supérieure de Paris, PSL, [email protected].
Supported by the Polish NCN grant 2017/26/D/ST6/00201.
Lorenzo Clemente, University of Warsaw, [email protected],
https://orcid.org/0000-0003-0578-9103. Partially supported by the Polish NCN
grant 2017/26/D/ST6/00201.
ACM classification: Theory of computation – Automata over infinite objects.
# Bidimensional linear recursive sequences and universality of unambiguous
register automata
Corentin Barloy Lorenzo Clemente
(January 2021)
###### Abstract
We study the universality and inclusion problems for register automata over
equality data $(\mathbb{A},=)$. We show that the universality
$L(B)=(\Sigma\times\mathbb{A})^{*}$ and inclusion problems $L(A)\subseteq
L(B)$ can be solved with 2-EXPTIME complexity when both automata are without
guessing and $B$ is unambiguous, improving on the currently best-known
2-EXPSPACE upper bound by Mottet and Quaas. When the number of registers of
both automata is fixed, we obtain a lower EXPTIME complexity, also improving
the EXPSPACE upper bound from Mottet and Quaas for fixed number of registers.
We reduce inclusion to universality, and then we reduce universality to the
problem of counting the number of orbits of runs of the automaton. We show
that the orbit-counting function satisfies a system of bidimensional linear
recursive equations with polynomial coefficients (linrec), which generalises
analogous recurrences for the Stirling numbers of the second kind, and then we
show that universality reduces to the zeroness problem for linrec sequences.
While such a counting approach is classical and has successfully been applied
to unambiguous finite automata and grammars over finite alphabets, its
application to register automata over infinite alphabets is novel.
We provide two algorithms to decide the zeroness problem for bidimensional
linear recursive sequences arising from orbit-counting functions. Both
algorithms rely on techniques from linear non-commutative algebra. The first
algorithm performs variable elimination and has elementary complexity. The
second algorithm is a refined version of the first one and it relies on the
computation of the Hermite normal form of matrices over a skew polynomial
field. The second algorithm yields an EXPTIME decision procedure for the
zeroness problem of linrec sequences, which in turn yields the claimed bounds
for the universality and inclusion problems of register automata.
###### keywords:
unambiguous register automata, universality and inclusion problems, multi-
dimensional linear recurrence sequences.
## 1 Introduction
#### Register automata.
_Register automata_ extend finite automata with finitely many registers
holding values from an infinite _data domain $\mathbb{A}$_ which can be
compared against the data appearing in the input. The study of register
automata arises naturally in automata theory as a conservative generalisation
of finite automata over finite alphabets $\Sigma$ to richer but well-behaved
classes of infinite alphabets. The seminal work of Kaminski and Francez
introduced _finite-memory automata_ as the study of register automata over the
data domain $(\mathbb{A},=)$ consisting of an infinite set $\mathbb{A}$ and
the equality relation [29]. The recent book [4] studies automata theory over
other data domains such as $(\mathbb{Q},\leq)$, and more generally homogeneous
[36] or even $\omega$-categorical relational structures. Another motivation
for the study of register automata comes from the area of database theory: XML
documents can naturally be modelled as finite unranked trees where data values
from an infinite alphabet are necessary to model the _attribute values_ of the
document (c.f. [41] and the survey [47]).
The central verification question for register automata is the _inclusion
problem_, which, for two given automata $A,B$, asks whether $L(A)\subseteq
L(B)$. In full generality the problem is undecidable and this holds already in
the special case of the _universality problem_
$L(B)=(\Sigma\times\mathbb{A})^{*}$ [41, Theorem 5.1], when $B$ has only two
registers [4, Theorem 1.8] (or even just one register in the more powerful
model _with guessing_ [4, Exercise 9], i.e., non-deterministic reassignment in
the terminology of [30]). One way to obtain decidability is to restrict the
automaton $B$. One such restriction requires that $B$ is _deterministic_:
Since deterministic register automata are effectively closed under
complementation, the inclusion problem reduces to non-emptiness of
$L(A)\cap(\Sigma\times\mathbb{A})^{*}\setminus L(B)$, which can be checked in
PSPACE. Another, incomparable, restriction demands that $B$ has only one
register: In this case the problem becomes decidable [29, Appendix A] and
non-primitive recursive [22, Theorem 5.2]. (Decidability even holds for the
so-called “two-window register automata”, which, combined with the restriction
in [29] demanding that the last data value read must always be stored in some
register, boils down to a slightly more general class of
“$1\frac{1}{2}$-register automata”.)
#### Unambiguity.
_Unambiguous automata_ are a natural class of automata intermediate between
deterministic and nondeterministic automata. An automaton is unambiguous if
there is at most one accepting run on every input word. Unambiguity has often
been used to generalise decidability results for deterministic automata at the
price of a usually modest additional complexity. For instance, the
universality problem for finite automata, which is PSPACE-complete in general
[52], is NL-complete for deterministic finite automata, while for unambiguous
finite automata it is in PTIME [51, Corollary 4.7], and even in NC$^{2}$ [55]. An even more dramatic
example is provided by universality of context-free grammars, which is
undecidable in general [28, Theorem 9.22], PTIME-complete for deterministic
context-free grammars, and decidable for unambiguous context-free grammars
[45, Theorem 5.5] (even in PSPACE [15, Theorem 10]). (The more general
equivalence problem is decidable for deterministic context-free grammars [48],
but it is currently an open problem whether equivalence is decidable for
unambiguous context-free grammars, as well as for the more general
_multiplicity equivalence_ of context-free grammars [33].) Other applications
of unambiguity for universality and inclusion problems in automata theory
include Büchi automata [7, 2], probabilistic automata [21], Parikh automata
[9, 5], vector addition systems [20], and several others (c.f. also [18, 19]).
#### Number sequences and the counting approach.
The universality problem for a language over finite words
$L\subseteq\Sigma^{*}$ is equivalent to whether its associated _word counting
function_ $f_{L}(n):=\left|L\cap\Sigma^{n}\right|$ equals
$\left|\Sigma\right|^{n}$ for every $n$. The most classical way of exploiting
unambiguity of a computation model $A$ (finite automaton, context-free
grammar, …) is to use the fact that it yields a bijection between the
recognised language $L(A)$ and the set of accepting runs. In this way,
$f_{L}(n)$ is also the number of accepting runs of length $n$, and for the
latter recursive descriptions usually exist. When the class of number
sequences to which $f_{L}$ belongs contains $\left|\Sigma\right|^{n}$ and is
closed under difference, this is equivalent to the _zeroness_ problem for
$g(n):=\left|\Sigma\right|^{n}-f_{L}(n)$, which amounts to deciding whether
$g=0$. This approach was pioneered by Chomsky and Schützenberger [14], who
showed that the generating function
$g_{L}(x)=\sum_{n=0}^{\infty}f_{L}(n)\cdot x^{n}$ associated to an unambiguous
context-free language $L$ is algebraic (c.f. [8]). A similar observation by
Stearns and Hunt [51] shows that $g_{L}(x)$ is rational [50, Chapter 4] when
$L$ is regular; more recently, Bostan et al. [5] have shown that
$g_{L}(x)$ is holonomic [49] when $L$ is recognised by an unambiguous Parikh
automaton. Since the zeroness problem for rational, algebraic, and holonomic
generating functions is decidable, one obtains decidability of the
corresponding universality problems.
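To make the counting approach concrete in the classical finite-alphabet setting, the following Python sketch (our own illustration; the automata, names, and encoding are not from the cited works) counts accepting runs of an unambiguous finite automaton by dynamic programming and decides universality by comparing against $\left|\Sigma\right|^{n}$. Since both sequences satisfy linear recurrences of order at most the number of states plus one, checking lengths up to that bound suffices.

```python
def count_accepting_runs(states, initial, final, delta, sigma, n):
    """Count accepting runs of length n in an NFA by dynamic programming.
    delta: dict (state, letter) -> list of successor states."""
    # runs[q] = number of runs of the current length from an initial state to q
    runs = {q: (1 if q in initial else 0) for q in states}
    for _ in range(n):
        new = {q: 0 for q in states}
        for q in states:
            for a in sigma:
                for r in delta.get((q, a), []):
                    new[r] += runs[q]
        runs = new
    return sum(runs[q] for q in final)

def is_universal_unambiguous(states, initial, final, delta, sigma):
    """For an unambiguous NFA, L = Sigma^* iff the number of accepting runs
    of length n equals |Sigma|^n for every n; both sides satisfy linear
    recurrences of order <= len(states) + 1, so checking small n suffices."""
    return all(count_accepting_runs(states, initial, final, delta, sigma, n)
               == len(sigma) ** n
               for n in range(len(states) + 2))

# DFA for words over {a,b} ending in 'a' (not universal) vs a universal DFA.
sigma = ['a', 'b']
d1 = {('0', 'a'): ['1'], ('0', 'b'): ['0'],
      ('1', 'a'): ['1'], ('1', 'b'): ['0']}
print(is_universal_unambiguous(['0', '1'], {'0'}, {'1'}, d1, sigma))  # False
d2 = {('0', 'a'): ['0'], ('0', 'b'): ['0']}
print(is_universal_unambiguous(['0'], {'0'}, {'0'}, d2, sigma))       # True
```

Deterministic automata are a special case, since every word then labels at most one run.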
#### Unambiguous register automata.
Returning to register automata, Mottet and Quaas have recently shown that the
inclusion problem in the case where $B$ is an unambiguous register automaton
over equality data (without guessing) can be decided in 2-EXPSPACE, and in
EXPSPACE when the number of registers of $B$ is fixed [37, Theorem 1]. Note
that already decidability is interesting, since unambiguous register automata
without guessing are not closed under complement in the class of
nondeterministic register automata without guessing [30, Example 4], and thus
the classical approach via complementing $B$ fails for register automata. (In
the more general class of register automata with guessing, an unproved
conjecture proposed by Colcombet states that unambiguous register automata
with guessing are effectively closed under complement [19, Theorem 12],
implying decidability of the universality and containment problems for
unambiguous register automata with guessing and, a posteriori, for unambiguous
register automata without guessing as considered in this paper; no published
proof of this conjecture has appeared as of yet.) In fact, even complementation
of unambiguous finite automata cannot lead to a PTIME universality algorithm,
thanks to Raskin’s recent super-polynomial lower bound for the complementation
problem for unambiguous finite automata in the class of non-deterministic
finite automata [44]. Mottet and Quaas obtain their
result by showing that inclusion can be decided by checking a reachability
property of a suitable graph of triply-exponential size obtained by taking the
product of $A$ and $B$, and then applying the standard NL algorithm for
reachability in directed graphs.
#### Our contributions.
In view of the widespread success of the counting approach to unambiguous
models of computation, one may wonder whether it can be applied to register
automata as well. This is the topic of our paper. A naïve counting approach
for register automata immediately runs into trouble since there are infinitely
many data words of length $n$. The natural remedy is to use the fact that
$\mathbb{A}^{n}$, albeit infinite, is _orbit-finite_ [4, Sec. 3.2], which is a
crucial notion generalising finiteness to the realm of relational structures
used to model data. In this way, we naturally count the number of _orbits_ of
words/runs of a given length, which in the context of model theory is
sometimes known as the _Ryll-Nardzewski function_ [46]. For example, in the
case of equality data $(\mathbb{A},=)$, the number of orbits of words of
length $n$ is the well-known _Bell number $B(n)$_, and for $(\mathbb{Q},\leq)$
one obtains the _ordered Bell numbers_ (a.k.a. _Fubini numbers_); c.f.
Cameron’s book for more examples [11, Ch. 7].
When considering orbits of runs, the run length $n$ seems insufficient to
obtain recurrence equations. To this end, we also consider the number of
distinct data values $k$ that appear in the word labelling the run. For
instance, in the case of equality data, the corresponding orbit-counting
function is the well-known sequence of _Stirling numbers of the second kind_
$S(n,k):\mathbb{Q}^{\mathbb{N}^{2}}$, which satisfies $S(0,0)=1$,
$S(m,0)=S(0,m)=0$ for $m\geq 1$, and
$\displaystyle S(n,k)=S(n-1,k-1)+k\cdot S(n-1,k),\quad\text{ for }n,k\geq 1.$
(1)
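For concreteness, recurrence (1) can be checked against a brute-force orbit count, taking as orbit representatives the words in “restricted growth” form, i.e. words over the naturals in which each newly introduced value is the smallest unused one. The following Python sketch uses an encoding of our own choosing:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling numbers of the second kind, via recurrence (1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return S(n - 1, k - 1) + k * S(n - 1, k)

def orbit_reps(n):
    """One representative per orbit of A^n under automorphisms of (A, =):
    words whose first occurrences appear in increasing order 0, 1, 2, ..."""
    reps = [[]]
    for _ in range(n):
        reps = [w + [v] for w in reps for v in range(max(w, default=-1) + 2)]
    return reps

# S(n, k) counts the orbits of words of length n using exactly k values,
# and the row sums are the Bell numbers 1, 1, 2, 5, 15, 52, ...
for n in range(7):
    for k in range(n + 1):
        assert S(n, k) == sum(1 for w in orbit_reps(n) if len(set(w)) == k)
print(S(4, 2))  # 7
```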
These intuitions lead us to define the class of _bidimensional linear
recursive sequences with polynomial coefficients_ (linrec; c.f. (5)) which are
a class of number sequences in $\mathbb{Q}^{\mathbb{N}^{2}}$ satisfying a
system of shift equations with polynomial coefficients generalising (1).
Linrec are sufficiently general to model the orbit-counting functions of
register automata and yet amenable to algorithmic analysis. Our first result
is a complexity upper bound for the zeroness problem for a class of linrec
sequences which suffices to model register automata.
###### Theorem 1.1.
The zeroness problem for linrec sequences with univariate polynomial
coefficients from $\mathbb{Q}[k]$ is in EXPTIME.
This is obtained by modelling linrec equations as systems of linear equations
with _skew polynomial coefficients_ (introduced by Ore [43]) and then using
complexity bounds on the computation of the Hermite normal form of skew
polynomial matrices by Giesbrecht and Kim [26]. Our second result is a
reduction of the universality and inclusion problems to the zeroness problem
of a system of linrec equations of exponential size. Together with Theorem
1.1, this yields improved upper bounds on the former problems.
###### Theorem 1.2.
The universality $L(B)=(\Sigma\times\mathbb{A})^{*}$ and the inclusion problem
$L(A)\subseteq L(B)$ for register automata $A,B$ without guessing with $B$
unambiguous are in 2-EXPTIME, and in EXPTIME for a fixed number of registers
of $A,B$. The same holds for the equivalence problem $L(A)=L(B)$ when both
automata are unambiguous.
The rest of the paper is organised as follows. In Sec. 2, we introduce linrec
sequences (c.f. Sec. A.3 for a comparison with well known sequence families
from the literature such as the C-recursive, P-recursive, and the more recent
polyrec sequences [10]). In Sec. 3, we introduce unambiguous register automata
and we present an efficient reduction of the inclusion (and thus equivalence)
problem to the universality problem, which allows us to concentrate on the
latter in the rest of the paper. In Sec. 4, we present a reduction of the
universality problem to the zeroness problem for linrec. In Sec. 5, we show
with a simple argument based on elimination that the zeroness problem for
linrec is decidable, and in Sec. 6 we derive a complexity upper bound using
non-commutative linear algebra. Finally, in Sec. 7 we conclude with further
work and an intriguing conjecture. Full proofs, additional definitions, and
examples are provided in Appendices A, B, C, D and E.
#### Notation.
Let $\mathbb{N}$, $\mathbb{Z}$, and $\mathbb{Q}$ be the sets of non-negative
integers, integers, and rationals, respectively. The _height_ of an integer $k\in\mathbb{Z}$ is
$\left|{k}\right|_{\infty}=\left|{k}\right|$, and for a rational number
$a\in\mathbb{Q}$ uniquely written as $a=\frac{p}{q}$ with
$p\in\mathbb{Z},q\in\mathbb{N}$ co-prime we define
$\left|{a}\right|_{\infty}=\max\\{\left|{p}\right|_{\infty},\left|{q}\right|_{\infty}\\}$.
Let $\mathbb{Q}[n,k]$ denote the ring of bivariate polynomials. The
_(combined) degree_ $\deg P$ of
$P=\sum_{i,j}a_{ij}n^{i}k^{j}\in\mathbb{Q}[n,k]$ is the maximum $i+j$ s.t.
$a_{ij}\neq 0$ and the _height_ $\left|{P}\right|_{\infty}$ is
$\max_{i,j}\left|{a_{ij}}\right|_{\infty}$. For a nonempty set $A$ and
$n\in\mathbb{N}$, let $A^{n}$ be the set of sequences of elements from $A$ of
length $n$. In particular, $A^{0}=\\{\varepsilon\\}$ contains only the empty
sequence $\varepsilon$. Let $A^{*}=\bigcup_{n\in\mathbb{N}}A^{n}$ be the set
of all finite sequences over $A$. We use the _soft-Oh_ notation
$\tilde{O}({f(n)})$ to denote $\bigcup_{c\geq 0}O(f(n)\cdot\log^{c}f(n))$.
## 2 Bidimensional linear recursive sequences with polynomial coefficients
Let $f(n,k):\mathbb{Q}^{\mathbb{N}^{2}}$ be a bidimensional sequence. For
$L\in\mathbb{N}$, the _first $L$-section_ of $f$ is the one-dimensional
sequence $f(L,k):\mathbb{Q}^{\mathbb{N}}$ obtained by fixing its first
component to $L$; the _second $L$-section_ $f(n,L)$ is defined similarly. The
two _shift operators_
$\partial_{1},\partial_{2}:\mathbb{Q}^{\mathbb{N}^{2}}\to\mathbb{Q}^{\mathbb{N}^{2}}$
are
$\displaystyle(\partial_{1}f)(n,k)=f(n+1,k)\quad\text{ and
}\quad(\partial_{2}f)(n,k)=f(n,k+1),\quad\text{ for all }n,k\geq 0.$
An _affine operator_ is a formal expression of the form
$A=p_{00}+p_{01}\cdot\partial_{1}+p_{10}\cdot\partial_{2}$ where
$p_{00},p_{01},p_{10}\in\mathbb{Q}[n,k]$ are bivariate polynomials over $n,k$
with rational coefficients. Let $\\{f_{1},\dots,f_{m}\\}$ be a set of
variables denoting bidimensional sequences (we abuse notation and silently
identify variables denoting sequences with the sequences they denote). A
_system of linear shift equations_ over $f_{1},\dots,f_{m}$ consists of $m$
equations of the form
$\displaystyle\left\\{\begin{array}{rcl}\partial_{1}\partial_{2}f_{1}&=&A_{1,1}\cdot f_{1}+\cdots+A_{1,m}\cdot f_{m},\\ &\vdots&\\ \partial_{1}\partial_{2}f_{m}&=&A_{m,1}\cdot f_{1}+\cdots+A_{m,m}\cdot f_{m},\end{array}\right.$ (5)
where the $A_{i,j}$’s are affine operators. A bidimensional sequence
$f:\mathbb{Q}^{\mathbb{N}^{2}}$ is _linear recursive of order $m$, degree $d$,
and height $h$_ (abbreviated, linrec) if the following two conditions hold:
1. 1)
there are auxiliary bidimensional sequences
$f_{2},\dots,f_{m}:\mathbb{Q}^{\mathbb{N}^{2}}$ which together with $f=f_{1}$
satisfy a system of linear shift equations as in (5) where the polynomial
coefficients have (combined) degree $\leq d$ and height $\leq h$.
2. 2)
for every $1\leq i\leq m$ there are constants denoted $f_{i}(0,\geq
1),f_{i}(\geq 1,0)\in\mathbb{Q}$ s.t. $f_{i}(0,k)=f_{i}(0,\geq 1)$ and
$f_{i}(n,0)=f_{i}(\geq 1,0)$ for every $n,k\geq 1$.
If we additionally fix the initial values $f_{1}(0,0),\dots,f_{m}(0,0)$, then
the system (5) has a unique solution, which is computable in PTIME.
###### Lemma 2.1.
The values $f_{i}(n,k)$’s are computable in deterministic time
$\tilde{O}({m\cdot n\cdot k})$.
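The dynamic program behind Lemma 2.1 follows the shape of (5): filling $f_{i}(n+1,k+1)$ only needs values at $(n,k)$, $(n+1,k)$, and $(n,k+1)$, all available when the table is filled row by row. A Python sketch (the encoding of the affine operators and all names are ours, not part of the formal development), instantiated with the Stirling system:

```python
def eval_linrec(m, affine, init, row0, col0, N, K):
    """Evaluate a system of linear shift equations (5) by dynamic programming.
    affine(i, j, n, k) -> (p00, p01, p10): coefficients of the affine operator
    A_{i,j} = p00 + p01*d1 + p10*d2, evaluated at the point (n, k).
    init[i] = f_i(0,0); row0[i] = f_i(n,0) for n >= 1; col0[i] = f_i(0,k)
    for k >= 1. Returns a table f with f[i][n][k] for n <= N, k <= K."""
    f = [[[0] * (K + 1) for _ in range(N + 1)] for _ in range(m)]
    for i in range(m):
        f[i][0][0] = init[i]
        for n in range(1, N + 1):
            f[i][n][0] = row0[i]
        for k in range(1, K + 1):
            f[i][0][k] = col0[i]
    for n in range(N):
        for k in range(K):
            for i in range(m):
                total = 0
                for j in range(m):
                    p00, p01, p10 = affine(i, j, n, k)
                    # f[j][n+1][k] is already available: it was filled at
                    # step (n, k-1), or it is a boundary value when k == 0.
                    total += (p00 * f[j][n][k] + p01 * f[j][n + 1][k]
                              + p10 * f[j][n][k + 1])
                f[i][n + 1][k + 1] = total
    return f

# Stirling instance: S(n+1,k+1) = S(n,k) + (k+1)*S(n,k+1), i.e. A = 1 + (k+1)*d2.
stirling = eval_linrec(1, lambda i, j, n, k: (1, 0, k + 1),
                       init=[1], row0=[0], col0=[0], N=6, K=6)
print(stirling[0][4][2])  # 7
```

The triple loop runs in time $O(m^{2}\cdot N\cdot K)$ arithmetic operations, in line with the lemma.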
In the following we will use the effective closure of linrec sequences under taking sections.
###### Lemma 2.2.
If $f:\mathbb{Q}^{\mathbb{N}^{2}}$ is linrec of order $\leq m$, degree $\leq
d$, and height $\leq h$, then its $L$-sections
$f(L,k),f(n,L):\mathbb{Q}^{\mathbb{N}}$ are linrec of order $\leq
m\cdot(L+3)$, degree $\leq d$, and height $\leq h\cdot L^{d}$.
We are interested in the following central algorithmic problem for linrec.
Zeroness problem.
Input: A system of linrec equations (5) together with all initial conditions.
Output: Is it the case that $f_{1}=0$?
In Sec. 4 we use linrec sequences to model the orbit-counting functions of
register automata, which we introduce next.
## 3 Unambiguous register automata
We consider register automata over the relational structure $(\mathbb{A},=)$
consisting of a countable set $\mathbb{A}$ equipped with equality as the only
relational symbol. Let $\bar{a}=a_{1}\cdots a_{n}\in\mathbb{A}^{n}$ be a
finite sequence of $n$ data values. An _$\bar{a}$ -automorphism_ of
$\mathbb{A}$ is a bijection $\alpha:\mathbb{A}\to\mathbb{A}$ s.t.
$\alpha(a_{i})=a_{i}$ for every $1\leq i\leq n$, which is extended pointwise
to $\bar{a}\in\mathbb{A}^{n}$ and to $L\subseteq\mathbb{A}^{*}$. For
$\bar{b},\bar{c}\in\mathbb{A}^{n}$, we write $\bar{b}\sim_{\bar{a}}\bar{c}$
whenever there is an $\bar{a}$-automorphism $\alpha$ s.t.
$\alpha(\bar{b})=\bar{c}$. The _$\bar{a}$ -orbit_ of $\bar{b}$ is the
equivalence class
$[\bar{b}]_{\bar{a}}=\\{\bar{c}\in\mathbb{A}^{n}\mid\bar{b}\sim_{\bar{a}}\bar{c}\\}$,
and the set of $\bar{a}$-orbits of sequences in $L\subseteq\mathbb{A}^{*}$ is
$\mathsf{orbits}_{\bar{a}}(L)=\\{[\bar{b}]_{\bar{a}}\mid\bar{b}\in L\\}$. In
the special case when $\bar{a}=\varepsilon$ is the empty tuple, we just speak
about _automorphism_ $\alpha$ and _orbit_ $[\bar{b}]$. A set $X$ is _orbit-
finite_ if $\mathsf{orbits}(X)$ is a finite set [4, Sec. 3.2]. All definitions
above extend to $\mathbb{A}_{\bot}:=\mathbb{A}\cup\\{\bot\\}$ with
$\bot\not\in\mathbb{A}$ in the expected way. A _constraint_ $\varphi$ is a
quantifier-free formula (since $(\mathbb{A},=)$ is a homogeneous relational
structure, and thus admits quantifier elimination, we would obtain the same
expressive power by allowing more general first-order formulas) generated by
$\varphi,\psi::=x=\bot\mid
x=y\mid\varphi\lor\psi\mid\varphi\land\psi\mid\lnot\varphi$, where $x,y$ are
variables and $\bot$ is a special constant denoting an undefined value. The
semantics of a constraint $\varphi(x_{1},\dots,x_{n})$ with $n$ free variables
$x_{1},\dots,x_{n}$ is the set of tuples of $n$ elements which satisfies:
$\left\llbracket\varphi\right\rrbracket=\\{a_{1},\dots,a_{n}\in\mathbb{A}_{\bot}^{n}\mid\mathbb{A}_{\bot},x_{1}:a_{1},\dots,x_{n}:a_{n}\models\varphi\\}$.
A _register automaton_ of _dimension_ $d\in\mathbb{N}$ is a tuple
$A=(d,\Sigma,\mathsf{L},\mathsf{L}_{I},\mathsf{L}_{F},\xrightarrow{})$ where
$d$ is the number of registers, $\Sigma$ is a finite alphabet, $\mathsf{L}$ is
a finite set of _control locations_, of which we distinguish those which are
_initial_ $\mathsf{L}_{I}\subseteq\mathsf{L}$, resp., _final_
$\mathsf{L}_{F}\subseteq\mathsf{L}$, and “$\xrightarrow{}$” is a set of rules
of the form $p\xrightarrow{\sigma,\varphi}q$, where $p,q\in\mathsf{L}$ are
control locations, $\sigma\in\Sigma$ is an input symbol from the finite
alphabet, and
$\varphi(x_{1},\dots,x_{d},y,x_{1}^{\prime},\dots,x_{d}^{\prime})$ is a
constraint relating the current register values $x_{i}$’s, the current input
symbol (represented by the variable $y$), and the next register values of
${x_{i}^{\prime}}$’s.
###### Example 3.1.
Let $A$ over $\left|\Sigma\right|=1$ have one register $x$, and four control
locations $p,q,r,s$, of which $p$ is initial and $s$ is final. The transitions
are $p\xrightarrow{x=\bot\land x^{\prime}=y}q$, $p\xrightarrow{x=\bot\land
x^{\prime}=y}r$, $q\xrightarrow{x\neq y\land x^{\prime}=x}q$,
$q\xrightarrow{x=y\land x^{\prime}=x}s$, $r\xrightarrow{x=y\land
x^{\prime}=x}r$, and $r\xrightarrow{x\neq y\land x^{\prime}=x}s$. The
automaton accepts all words of the form $a(\mathbb{A}\setminus\\{a\\})^{*}a$
or $aa^{*}(\mathbb{A}\setminus\\{a\\})$ with $a\in\mathbb{A}$.
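Since $\left|\Sigma\right|=1$, a data word can be modelled by its sequence of data values alone. The following Python sketch (our encoding; the guard predicates and names are not part of the formal model) simulates the automaton nondeterministically and confirms, on all short words over a small data set, both the stated language and the fact that no word has more than one accepting run:

```python
from itertools import product

BOT = None  # the undefined register value, bottom
# Transitions of Example 3.1: (source, guard(x, y), target, update(x, y)),
# where x is the register, y the input datum, and update gives x'.
TRANS = [
    ('p', lambda x, y: x is BOT, 'q', lambda x, y: y),
    ('p', lambda x, y: x is BOT, 'r', lambda x, y: y),
    ('q', lambda x, y: x is not BOT and x != y, 'q', lambda x, y: x),
    ('q', lambda x, y: x == y, 's', lambda x, y: x),
    ('r', lambda x, y: x == y, 'r', lambda x, y: x),
    ('r', lambda x, y: x is not BOT and x != y, 's', lambda x, y: x),
]

def accepting_runs(word):
    """Number of accepting runs of the Example 3.1 automaton on a data word."""
    configs = {('p', BOT): 1}  # configuration -> number of runs reaching it
    for y in word:
        new = {}
        for (loc, x), cnt in configs.items():
            for src, guard, tgt, update in TRANS:
                if src == loc and guard(x, y):
                    key = (tgt, update(x, y))
                    new[key] = new.get(key, 0) + cnt
        configs = new
    return sum(c for (loc, _), c in configs.items() if loc == 's')

def in_language(w):
    # a (A \ {a})* a   or   a a* (A \ {a})
    return len(w) >= 2 and (
        (w[0] == w[-1] and all(v != w[0] for v in w[1:-1])) or
        (all(v == w[0] for v in w[:-1]) and w[-1] != w[0]))

for n in range(6):
    for w in product(range(4), repeat=n):
        runs = accepting_runs(w)
        assert runs <= 1                       # unambiguity
        assert (runs == 1) == in_language(w)   # recognised language
print("checks passed")
```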
A register automaton is _orbitised_ if every constraint $\varphi$ appearing in
some transition thereof denotes an orbit
$\left\llbracket\varphi\right\rrbracket\in\mathsf{orbits}(\mathbb{A}^{2\cdot
d+1}_{\bot})$. For example, when $d=1$ the constraint $\varphi\equiv
x=x^{\prime}$ is not orbitised, however
$\left\llbracket\varphi\right\rrbracket=\left\llbracket\varphi_{0}\right\rrbracket\cup\left\llbracket\varphi_{1}\right\rrbracket$
splits into two disjoint orbits for the orbitised constraints
$\varphi_{0}\equiv x=x^{\prime}\land x=y$ and $\varphi_{1}\equiv
x=x^{\prime}\land x\neq y$. The automaton from Example 3.1 is orbitised. Every
register automaton can be transformed into orbitised form by replacing every
transition $p\xrightarrow{\sigma,\varphi}q$ with exponentially many
transitions
$p\xrightarrow{\sigma,\varphi_{1}}q,\dots,p\xrightarrow{\sigma,\varphi_{n}}q$,
for each orbit $\left\llbracket\varphi_{i}\right\rrbracket$ of
$\left\llbracket\varphi\right\rrbracket\subseteq\mathbb{A}^{2\cdot
d+1}_{\bot}$.
A _register valuation_ is a tuple of (possibly undefined) values
$\bar{a}=(a_{1},\dots,a_{d})\in\mathbb{A}^{d}_{\bot}$. A _configuration_ is a
pair $(p,\bar{a})$, where $p\in\mathsf{L}$ is a control location and
$\bar{a}\in\mathbb{A}^{d}_{\bot}$ is a register valuation; it is _initial_ if
$p\in\mathsf{L}_{I}$ is initial and all registers are initially undefined
$\bar{a}=(\bot,\dots,\bot)$, and it is _final_ whenever $p\in\mathsf{L}_{F}$
is so. The _semantics_ of a register automaton $A$ is the infinite transition
system $\left\llbracket A\right\rrbracket=(C,C_{I},C_{F},\xrightarrow{})$
where $C$ is the set of configurations, of which $C_{I},C_{F}\subseteq C$ are
the initial, resp., final ones, and ${\xrightarrow{}}\subseteq
C\times(\Sigma\times\mathbb{A})\times C$ is the set of all transitions of the
form
$\displaystyle(p,\bar{a})\xrightarrow{\sigma,a}(q,\bar{a}^{\prime}),\qquad\text{with
}\sigma\in\Sigma,a\in\mathbb{A},\text{ and
}\bar{a},\bar{a}^{\prime}\in\mathbb{A}^{d}_{\bot},$
s.t. there exists a rule $p\xrightarrow{\sigma,\varphi}q$ whose constraint is
satisfied:
$\mathbb{A}_{\bot},\bar{x}:\bar{a},y:a,\bar{x}^{\prime}:\bar{a}^{\prime}\models\varphi$.
A _data word_ is a sequence
$w=(\sigma_{1},a_{1})\cdots(\sigma_{n},a_{n})\in(\Sigma\times\mathbb{A})^{*}$.
A _run over_ a data word $w$ _starting at_ $c_{0}\in C$ and _ending at_
$c_{n}\in C$ is a sequence $\pi$ of transitions of $\left\llbracket
A\right\rrbracket$ of the form
$\pi=c_{0}\xrightarrow{\sigma_{1},a_{1}}c_{1}\xrightarrow{\sigma_{2},a_{2}}\cdots\xrightarrow{\sigma_{n},a_{n}}c_{n}.$
We denote with $\mathsf{Runs}(c_{0};w;c_{n})$ the set of runs over $w$
starting at $c_{0}$ and ending in $c_{n}$, and with
$\mathsf{Runs}(C_{I};w;c_{n})$ the set of _initial runs_, i.e., those runs
over $w$ starting at some initial configuration $c_{0}\in C_{I}$ and ending in
$c_{n}$. The run $\pi$ is _accepting_ if $c_{n}\in C_{F}$. The language
$L(A,c)$ recognised from configuration $c\in C$ is the set of data words
labelling some accepting run starting at $c$; the language recognised from a
set of configurations $D\subseteq C$ is $L(A,D)=\bigcup_{c\in D}L(A,c)$, and
the language recognised by the register automaton $A$ is $L(A)=L(A,C_{I})$.
Similarly, the _backward language_ $L^{\mathsf{R}}(A,c)$ is the set of words
labelling some run starting at an initial configuration and ending at $c$.
Thus, we also have $L(A)=L^{\mathsf{R}}(A,C_{F})$. A register automaton is
_deterministic_ if for every input word there exists at most one initial run,
and _unambiguous_ if for every input word there is at most one initial and
accepting run. A register automaton is _without guessing_ if, for every
initial run $(p,\bot^{d})\xrightarrow{w}(q,\bar{a})$, every non-$\bot$ data
value in $\bar{a}$ occurs in the input $w$, written $\bar{a}\subseteq w$. In
the rest of the paper we will study exclusively automata without guessing. A
deterministic automaton is unambiguous and without guessing. These semantic
properties can be decided in PSPACE with simple reachability analyses (c.f.
[19]).
###### Example 3.2.
The automaton from Example 3.1 is unambiguous and without guessing. An example
of a language which can be recognised only by ambiguous register automata is
the set of words in which some data value appears twice: $L=\\{u\cdot a\cdot
v\cdot a\cdot w\mid a\in\mathbb{A};u,v,w\in\mathbb{A}^{*}\\}$.
###### Lemma 3.3.
If $A$ is an unambiguous register automaton, then there is a bijection between
the language it recognises $L(A)=L(A,C_{I})=L^{\mathsf{R}}(A,C_{F})$ and the
set of runs starting at some initial configuration in $C_{I}$ and ending at
some final configuration in $C_{F}$.
We are interested in the following decision problem.
Inclusion problem.
Input: Two register automata $A,B$ over the same input alphabet $\Sigma$.
Output: Is it the case that $L(A)\subseteq L(B)$?
The _universality problem_ asks $L(A)=(\Sigma\times\mathbb{A})^{*}$, and the
_equivalence problem_ $L(A)=L(B)$. In general, universality reduces to
equivalence, which in turn reduces to inclusion. In our context, inclusion
reduces to universality and thus all three problems are equivalent.
###### Lemma 3.4.
Let $A$ and $B$ be two register automata.
1. 1.
The inclusion problem $L(A)\subseteq L(B)$ with $A$ orbitised and without
guessing reduces in PTIME to the case where $A$ is deterministic. The
reduction preserves whether $B$ is 1) unambiguous, 2) without guessing, and 3)
orbitised.
2. 2.
The inclusion problem $L(A)\subseteq L(B)$ with $A$ deterministic reduces in
PTIME to the universality problem for some register automaton $C$. If $B$ is
unambiguous, then so is $C$. If $B$ is without guessing, then so is $C$. If
$A$ and $B$ are orbitised, then so is $C$.
## 4 Universality of unambiguous register automata without guessing
We reduce universality of unambiguous register automata without guessing to
zeroness of bidimensional linrec sequences with univariate polynomial
coefficients. The _width_ of a sequence of data values $\bar{a}=a_{1}\cdots
a_{n}\in\mathbb{A}^{n}$ is
$0pt{\bar{a}}=\left|\\{a_{1},\dots,a_{n}\\}\right|$, for a word
$w=(\sigma_{1},a_{1})\cdots(\sigma_{n},a_{n})\in(\Sigma\times\mathbb{A})^{*}$
we set $0ptw=0pt{(a_{1}\cdots a_{n})}$, and for a run $\pi$ over $w$ we set
$0pt\pi=0ptw$. Let the _Ryll-Nardzewski function_ $G_{p,\bar{a}}(n,k)$ of a
configuration $(p,\bar{a})\in C=\mathsf{L}\times\mathbb{A}^{d}_{\bot}$ count
the number of $\bar{a}$-orbits of initial runs of length $n$ and width $k$
ending in $(p,\bar{a})$:
$\displaystyle G_{p,\bar{a}}(n,k)=\left|\\{[\pi]_{\bar{a}}\mid
w\in(\Sigma\times\mathbb{A})^{n},\pi\in\mathsf{Runs}(C_{I};w;p,\bar{a}),0ptw=k\\}\right|.$
(6)
###### Lemma 4.1.
Let $\bar{a},\bar{b}\in\mathbb{A}_{\bot}^{d}$. If $[\bar{a}]=[\bar{b}]$, then
$G_{p,\bar{a}}(n,k)=G_{p,\bar{b}}(n,k)$ for every $n,k\geq 0$.
We thus overload the notation and write $G_{p,[\bar{a}]}$ instead of
$G_{p,\bar{a}}$. Since $\mathbb{A}^{d}_{\bot}$ is orbit-finite, this yields
finitely many variables $G_{p,[\bar{a}]}$’s. By slightly abusing notation, let
$G_{C_{F}}(n,k)=\sum_{[(p,\bar{a})]\in\mathsf{orbits}(C_{F})}G_{p,[\bar{a}]}(n,k)$
be the sum of the Ryll-Nardzewski function over all orbits of accepting
configurations. When the automaton is unambiguous, thanks to Lemma 3.3,
$G_{C_{F}}(n,k)$ is also the number of orbits of accepted words of length $n$
and width $k$.
###### Lemma 4.2.
Let $A$ be an unambiguous register automaton without guessing over $\Sigma$ and
let $S_{\Sigma}(n,k)$ be the number of orbits of all words of length $n$ and
width $k$. We have $L(A)=(\Sigma\times\mathbb{A})^{*}$ if, and only if, $\forall
n,k\in\mathbb{N}\cdot G_{C_{F}}(n,k)=S_{\Sigma}(n,k)$.
In other words, universality of $A$ reduces to zeroness of
${G:=S_{\Sigma}-G_{C_{F}}}$. The sequence $S_{\Sigma}$ is linrec since it
satisfies the recurrence in Figure 2 with initial conditions
$S_{\Sigma}(0,0)=1$ and $S_{\Sigma}(n+1,0)=S_{\Sigma}(0,k+1)=0$ for $n,k\geq
0$. We show that all the sequences of the form $G_{p,[\bar{a}]}$ are also
linrec and thus also $G$ will be linrec.
Figure 1: Last-step decomposition.
We perform a last-step decomposition of an initial run; c.f. Figure 1.
Starting from some initial configuration $(p_{0},\bot^{d})$, the automaton has
read a word $w$ of length $n-1$ leading to $(p,\bar{a})$. Then, the automaton
reads the last letter $(\sigma,a)$ and goes to $(p^{\prime},\bar{a}^{\prime})$
via the transition
$t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime})$. The question
is in how many distinct ways an orbit of the run over $w$ can be extended into
an orbit of the run over $w\cdot(\sigma,a)$. We distinguish three cases.
1. I:
Assume that $a$ appears in register $\bar{a}_{i}=a$. Since the automaton is
without guessing, $a\in w$ has appeared earlier in the input word and
$\bar{a}^{\prime}\subseteq\bar{a}$ (ignoring $\bot$’s). Thus, each
$\bar{a}$-orbit of runs $[p_{0},\bot^{d}\xrightarrow{w}p,\bar{a}]_{\bar{a}}$
yields, via the fixed $t$, an $\bar{a}^{\prime}$-orbit of runs
$[p_{0},\bot^{d}\xrightarrow{w}p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]_{\bar{a}^{\prime}}$
of the same width in just one way.
2. II:
Assume that $a$ is globally fresh $a\not\in w$, and thus in particular
$a\not\in\bar{a}$ since the automaton is without guessing. Each
$\bar{a}$-orbit of runs $[p_{0},\bot^{d}\xrightarrow{w}p,\bar{a}]_{\bar{a}}$
of width $\mathsf{width}(w)$ yields, via the fixed $t$, a single $\bar{a}^{\prime}$-orbit
of runs
$[p_{0},\bot^{d}\xrightarrow{w}p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]_{\bar{a}^{\prime}}$
of width $\mathsf{width}(w\cdot a)=\mathsf{width}(w)+1$.
3. III:
Assume that $a\in w$ is not globally fresh, but it does not appear in any
register, $a\not\in\bar{a}$. Since the automaton is without guessing, every
value in $\bar{a}$ appears in $w$. Consequently, $a$ can be any of the $\mathsf{width}(w)$
distinct values in $w$, with the exception of the $\mathsf{width}(\bar{a})$ values held in the registers. Each
$\bar{a}$-orbit of runs $[p_{0},\bot^{d}\xrightarrow{w}p,\bar{a}]_{\bar{a}}$ of
width $\mathsf{width}(w)$ yields $\mathsf{width}(w)-\mathsf{width}(\bar{a})\geq 0$ $\bar{a}^{\prime}$-orbits of
runs
$[p_{0},\bot^{d}\xrightarrow{w}p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]_{\bar{a}^{\prime}}$
of the same width.
(As expected, we do not need unambiguity at this point, since we are counting
orbits of runs.) We obtain the equations in Figure 2, where the sums range
over orbits of transitions. This set of equations is finite since there are
finitely many orbits $[\bar{a}]\in\mathsf{orbits}(\mathbb{A}^{d}_{\bot})$ of
register valuations, and moreover we can effectively represent each orbit by a
constraint [4, Ch. 4]. Strictly speaking, the equations are not linrec due to
the “$\max$” operator; however, they can easily be transformed to linrec by
considering $G_{p,[\bar{a}]}(n,K)$ separately for $1\leq K<d$; in the interest
of clarity, we omit the full linrec expansion. The initial condition is
$G_{p,[\bar{a}]}(0,0)=1$ if $p\in\mathsf{L}_{I}$ is initial, and $G_{p,[\bar{a}]}(0,0)=0$
otherwise. The two $0$-sections satisfy $G_{p,[\bar{a}]}(n+1,0)=0$ for $n\geq
0$ (if the word is nonempty, then there is at least one data value) and
$G_{p,[\bar{a}]}(0,k+1)=0$ for $k\geq 0$ (an empty word does not have any data
value).
$\displaystyle\begin{aligned}G_{p^{\prime},[\bar{a}^{\prime}]}(n+1,k+1)=&\ \sum_{[p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]:\;a\in\bar{a}}\underbrace{G_{p,[\bar{a}]}(n,k+1)}_{\textsf{\bf I}}\ +\\ &\ \sum_{[p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]:\;a\not\in\bar{a}}\left(\underbrace{G_{p,[\bar{a}]}(n,k)}_{\textsf{\bf II}}+\underbrace{\max(k+1-\mathsf{width}([\bar{a}]),0)\cdot G_{p,[\bar{a}]}(n,k+1)}_{\textsf{\bf III}}\right),\\ S_{\Sigma}(n+1,k+1)=&\ \left|\Sigma\right|\cdot S_{\Sigma}(n,k)+\left|\Sigma\right|\cdot(k+1)\cdot S_{\Sigma}(n,k+1),\\ G(n,k)=&\ S_{\Sigma}(n,k)-\sum_{[p,\bar{a}]\in\mathsf{orbits}(C_{F})}G_{p,[\bar{a}]}(n,k).\end{aligned}$
Figure 2: Linrec automata equations.
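The recurrence for $S_{\Sigma}$ in Figure 2 can be sanity-checked numerically: the orbit of a word in $(\Sigma\times\mathbb{A})^{n}$ is determined by its $\Sigma$-projection together with the equality pattern of its data part, so $S_{\Sigma}(n,k)$ should equal $\left|\Sigma\right|^{n}\cdot S(n,k)$ (both sides satisfy the same recurrence and initial conditions). A minimal Python sketch of our own:

```python
from functools import lru_cache

SIGMA = 3  # |Sigma|; any positive value works

@lru_cache(maxsize=None)
def S_sigma(n, k):
    """Orbits of words of length n and width k, via the Figure 2 recurrence."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return SIGMA * S_sigma(n - 1, k - 1) + SIGMA * k * S_sigma(n - 1, k)

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling numbers of the second kind, recurrence (1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return S(n - 1, k - 1) + k * S(n - 1, k)

# Sigma-projection (|Sigma|^n choices) times equality pattern (S(n, k) orbits).
for n in range(8):
    for k in range(8):
        assert S_sigma(n, k) == SIGMA ** n * S(n, k)
```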
###### Lemma 4.3.
The sequences $G_{p,[\bar{a}]}$’s satisfy the system of equations in Figure 2.
###### Example 4.4.
The equations corresponding to the automaton in Example 3.1 are as follows.
(Since the automaton is orbitised, we can omit the orbit.) We have
$G_{p}(0,0)=1$, $G_{q}(0,0)=G_{r}(0,0)=G_{s}(0,0)=0$ and for $n,k\geq 0$:
$\displaystyle\begin{aligned}G_{p}(n+1,k+1)&=0,\\ G_{q}(n+1,k+1)&=\underbrace{G_{p}(n,k)}_{\textsf{\bf II}}+\underbrace{(k+1)\cdot G_{p}(n,k+1)}_{\textsf{\bf III}}+\underbrace{G_{q}(n,k)}_{\textsf{\bf II}}+\underbrace{k\cdot G_{q}(n,k+1)}_{\textsf{\bf III}},\\ G_{r}(n+1,k+1)&=\underbrace{G_{p}(n,k)}_{\textsf{\bf II}}+\underbrace{(k+1)\cdot G_{p}(n,k+1)}_{\textsf{\bf III}}+\underbrace{G_{r}(n,k+1)}_{\textsf{\bf I}},\\ G_{s}(n+1,k+1)&=\underbrace{G_{q}(n,k+1)}_{\textsf{\bf I}}+\underbrace{G_{r}(n,k)}_{\textsf{\bf II}}+\underbrace{k\cdot G_{r}(n,k+1)}_{\textsf{\bf III}}.\end{aligned}$
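These recurrences can be validated against a brute-force orbit count: since the automaton is unambiguous, $G_{s}(n,k)$ must equal the number of orbits of accepted words of length $n$ and width $k$ (Lemma 3.3 together with (6)). A Python sketch of ours, taking restricted-growth words as orbit representatives and restating the language of Example 3.1 as a closed-form membership test:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def G(loc, n, k):
    """The sequences of Example 4.4 (|Sigma| = 1, so words are data words)."""
    if n == 0 and k == 0:
        return 1 if loc == 'p' else 0
    if n == 0 or k == 0:
        return 0
    n, k = n - 1, k - 1  # evaluate the right-hand sides at the shifted point
    if loc == 'p':
        return 0
    if loc == 'q':
        return (G('p', n, k) + (k + 1) * G('p', n, k + 1)
                + G('q', n, k) + k * G('q', n, k + 1))
    if loc == 'r':
        return G('p', n, k) + (k + 1) * G('p', n, k + 1) + G('r', n, k + 1)
    return G('q', n, k + 1) + G('r', n, k) + k * G('r', n, k + 1)  # loc 's'

def orbit_reps(n):
    """Restricted-growth words: one representative per orbit of A^n."""
    reps = [[]]
    for _ in range(n):
        reps = [w + [v] for w in reps for v in range(max(w, default=-1) + 2)]
    return reps

def accepted(w):  # language of Example 3.1: a(A\{a})*a or aa*(A\{a})
    return len(w) >= 2 and (
        (w[0] == w[-1] and all(v != w[0] for v in w[1:-1])) or
        (all(v == w[0] for v in w[:-1]) and w[-1] != w[0]))

# G_s(n, k) counts orbits of accepted words of length n and width k.
for n in range(8):
    for k in range(8):
        brute = sum(1 for w in orbit_reps(n) if accepted(w) and len(set(w)) == k)
        assert G('s', n, k) == brute
```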
###### Lemma 4.5.
Let $A$ be an unambiguous register automaton over equality atoms without
guessing with $d$ registers and $\ell$ control locations. The universality
problem for $A$ reduces to the zeroness problem of the linrec sequence $G$
defined by the system of equations in Figure 2 containing $O(\ell\cdot
2^{d\cdot\log d})$ variables and equations and constructible in PSPACE. If $A$
is already orbitised, then the system of equations has size $O(\ell)$.
## 5 Decidability of the zeroness problem
In this section, we present an algorithm to solve the zeroness problem of
bidimensional linrec sequences with univariate polynomial coefficients, which
is sufficient for linrec sequences from Figure 2. We first give a general
presentation on elimination for bivariate polynomial coefficients, and then we
use the univariate assumption to obtain a decision procedure. We model the
non-commutative operators appearing in the definition of linrec sequences (5)
with Ore polynomials (a.k.a. skew polynomials) [43]. (The general definition
of the Ore polynomial ring $R[\partial;\sigma,\delta]$ uses an additional
component $\delta:R\to R$ in order to model differential operators; we present
a simplified version which is enough for our purposes.) Let $R$ be a (not
necessarily commutative) ring and $\sigma$ an automorphism of $R$. The ring of
_(shift) skew polynomials_ $R[\partial;\sigma]$ is defined as the ring of
polynomials but where the multiplication operation satisfies the following
commutation rule: For a coefficient $a\in R$ and the unknown $\partial$, we
have
$\partial\cdot a=\sigma(a)\cdot\partial.$
(The usual ring of polynomials is recovered when $\sigma$ is the identity.)
The multiplication extends to monomials as $a\partial^{k}\cdot
b\partial^{l}=a\sigma^{k}(b)\cdot\partial^{k+l}$ and to the whole ring by
distributivity. The _degree_ of a skew monomial $a\cdot\partial^{k}$ is $k$,
and the degree $\deg P$ of a skew polynomial $P$ is the maximum of the degrees
of its monomials. The degree function satisfies the expected identities
$\deg(P\cdot Q)=\deg P+\deg Q$ and $\deg(P+Q)\leq\max(\deg P,\deg Q)$. A skew
polynomial is _monic_ if the coefficient of its monomial of highest degree is
$1$. The crucial and only property that we need in this section is that skew
polynomial rings admit a Euclidean pseudo-division algorithm, which in turn
allows one to find common left multiples. A skew polynomial ring
$R[\partial;\sigma]$ has _pseudo-division_ if for any two skew polynomials
$A,B\in R[\partial;\sigma]$ with $\deg A\geq\deg B$ there is a coefficient
$a\in R$ and skew polynomials $P,Q\in R[\partial;\sigma]$ s.t. $a\cdot
A=P\cdot B+Q$ and $\deg Q<\deg B$. We say that a ring $R$ has the _common left
multiple_ (CLM) property if for every $a,b\neq 0$, there exists $c,d\neq 0$
such that $c\cdot a=d\cdot b$.
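The commutation rule can be made concrete with a small Python model of $R[\partial;\sigma]$ for $R=\mathbb{Q}[k]$ (integer coefficients suffice for the check; all names below are ours, not the paper's): a skew polynomial is a map from $\partial$-degrees to coefficient polynomials, and multiplication pushes coefficients through powers of $\partial$ by applying $\sigma$.

```python
from itertools import zip_longest
from math import comb

# A polynomial in k is a coefficient list [c0, c1, ...] meaning c0 + c1*k + ...
def pshift(p):
    # sigma: p(k) -> p(k + 1), expanding (k + 1)^i binomially
    out = [0] * len(p)
    for i, c in enumerate(p):
        for j in range(i + 1):
            out[j] += c * comb(i, j)
    return out

def padd(p, q):
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# A skew polynomial is a dict {d: poly}, i.e. sum_d poly_d(k) * D^d, where D
# stands for the unknown; multiplication uses the rule D * a(k) = a(k+1) * D.
def smul(P, Q):
    out = {}
    for i, a in P.items():
        for j, b in Q.items():
            for _ in range(i):       # push b through D^i, i.e. apply sigma^i
                b = pshift(b)
            out[i + j] = padd(out.get(i + j, []), pmul(a, b))
    return out

def apply(P, f, k):
    # Interpret D as the shift on sequences: (P f)(k) = sum_d poly_d(k) * f(k+d).
    return sum(sum(c * k ** e for e, c in enumerate(p)) * f(k + d)
               for d, p in P.items())

D = {1: [1]}       # the unknown (the shift)
K = {0: [0, 1]}    # the coefficient k
assert smul(D, K) == {1: [1, 1]}   # D * k = (k + 1) * D
P = {1: [0, 1], 0: [1]}            # k*D + 1
Q = {1: [1], 0: [0, 1]}            # D + k
f = lambda k: k * k + 1            # a sample sequence
# the skew product agrees with composition of the operators on sequences
assert all(apply(smul(P, Q), f, k) == apply(P, lambda j: apply(Q, f, j), k)
           for k in range(10))
```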
###### Theorem 5.1 (c.f. [42, Sec. 1]).
If $R$ has the CLM property, then 1) $R[\partial;\sigma]$ has a pseudo-
division, and 2) $R[\partial;\sigma]$ also has the CLM property.
The most important instances of skew polynomials are the _first_ and _second
Weyl algebras_ :
$\displaystyle
W_{1}=\mathbb{Q}[n,k][\partial_{1};\sigma_{1}]\quad\text{and}\quad
W_{2}=W_{1}[\partial_{2};\sigma_{2}]=\mathbb{Q}[n,k][\partial_{1};\sigma_{1}][\partial_{2};\sigma_{2}],$
(7)
where $\mathbb{Q}[n,k]$ is the ring of bivariate polynomials, and the shifts
satisfy $\sigma_{1}(p(n,k)):=p(n+1,k)$ and
$\sigma_{2}\left(\sum_{i}p_{i}(n,k)\partial_{1}^{i}\right):=\sum_{i}p_{i}(n,k+1)\partial_{1}^{i}$.
Skew polynomials in $W_{2}$ act on bidimensional sequences
$f:\mathbb{Q}^{\mathbb{N}^{2}}$ by interpreting $\partial_{1}$ and
$\partial_{2}$ as the two shifts. A linrec system of equations (5) can thus be
interpreted as a system of linear equations with variables $f_{1},\dots,f_{m}$
and coefficients in $W_{2}$.
###### Example 5.2.
Continuing our running Example 4.4, we obtain the following linear system of
equations with $W_{2}$ coefficients:
$\displaystyle\begin{array}[]{rrrrrrrr}\partial_{1}\partial_{2}\cdot
G_{p}&&&&=0,\\\ -(1+(k+1)\partial_{2})\cdot
G_{p}&+(\partial_{1}\partial_{2}-k\partial_{2}-1)\cdot G_{q}&&&=0,\\\
-(1+(k+1)\partial_{2})\cdot
G_{p}&&+(\partial_{1}\partial_{2}-\partial_{2})\cdot G_{r}&&=0,\\\
&-\partial_{2}\cdot G_{q}&-(1+k\partial_{2})\cdot
G_{r}&+\partial_{1}\partial_{2}\cdot G_{s}&=0,\end{array}$
$\displaystyle(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)\cdot S_{1}=0,$
$\displaystyle G_{s}-S_{1}+G=0.$
Since $W_{0}=\mathbb{Q}[n,k]$ is commutative, it obviously has the CLM
property. By two applications of Theorem 5.1, we have (see Sec. D.1 for CLM
examples):
###### Corollary 5.3.
The two Weyl algebras $W_{1}$ and $W_{2}$ have the CLM property.
A (linear) _cancelling relation_ (CR) for a bidimensional sequence
$f:\mathbb{Q}^{\mathbb{N}^{2}}$ is a linear equation of the form
$\displaystyle
p_{i^{*},j^{*}}(n,k)\cdot\partial_{1}^{i^{*}}\partial_{2}^{j^{*}}f=\sum_{(i,j)<_{\text{lex}}(i^{*},j^{*})}p_{i,j}(n,k)\cdot\partial_{1}^{i}\partial_{2}^{j}f,$
(CR-2)
where $p_{i^{*},j^{*}}(n,k),p_{i,j}(n,k)\in\mathbb{Q}[n,k]$ are bivariate
polynomial coefficients and $<_{\text{lex}}$ is the lexicographic ordering.
Cancelling relations for a one-dimensional sequence
$g:\mathbb{Q}^{\mathbb{N}}$ are defined analogously (we use the second
variable $k$ as the index for convenience):
$\displaystyle q_{j^{*}}(k)\cdot\partial_{2}^{j^{*}}g=\sum_{0\leq
j<j^{*}}q_{j}(k)\cdot\partial_{2}^{j}g.$ (CR-1)
We use cancelling relations as certificates of zeroness for $f$ when the
$p_{i,j}$'s are univariate. We do not need to construct any cancelling
relation: merely knowing that one exists with the required bounds suffices.
###### Lemma 5.4.
Consider a bidimensional linrec sequence $f:\mathbb{Q}^{\mathbb{N}^{2}}$ of
order $\leq m$ with univariate polynomial coefficients in $\mathbb{Q}[k]$.
Suppose that $f$ admits some cancelling relation (CR-2) with leading
coefficient $p_{i^{*},j^{*}}(k)\in\mathbb{Q}[k]$ of degree $\leq e$ and height
$\leq h$, and that each of the one-dimensional sections
$f(M,k)\in\mathbb{Q}^{\mathbb{N}}$ for $1\leq M\leq i^{*}$ also admits some
cancelling relation (CR-1) of $\partial_{2}$-degree $\leq d$ with leading
polynomial coefficient of degree $\leq e$ and height $\leq h$. Then the
zeroness problem for $f$ is decidable in deterministic time
$\tilde{O}({p(m,i^{*},j^{*},d,e,h)})$ for some polynomial $p$.
Elimination already yields decidability with elementary complexity for the
zeroness problem and thus for the universality/equivalence/inclusion problems
of unambiguous register automata without guessing.
###### Theorem 5.5.
The zeroness problem for linrec sequences with univariate polynomial
coefficients from $\mathbb{Q}[k]$ (or from $\mathbb{Q}[n]$) is decidable.
###### Example 5.6.
Continuing our running Example 5.2, we subsequently eliminate
$G_{p},G_{s},G_{r},G_{q},S_{1}$, finally obtaining (c.f. Example D.12 in Sec. D.2
for details)
$\displaystyle\begin{array}[]{ll}G(n+4,k+4)=&(k+3)\cdot
G(n+3,k+4)+G(n+3,k+3)\;+\\\ &-(k+2)\cdot G(n+2,k+4)-G(n+2,k+3).\end{array}$
(10)
As expected, all coefficients are polynomials in $\mathbb{Q}[k]$ and in
particular they do not involve the variable $n$. Moreover, we note that the
relation above is _monic_ , in the sense that the lexicographically leading
term $G(n+4,k+4)$ has coefficient $1$ (c.f. Sec. 7). (C.f. Example D.13 for
elimination in a two-register automaton and Example D.14 for a one-register
automaton accepting all words of length $\geq 2$.)
We omit a precise complexity analysis of elimination because better bounds can
be obtained by resorting to linear non-commutative algebra, which is the topic
of the next section.
## 6 Complexity of the zeroness problem
In this section we present an EXPTIME algorithm to solve the zeroness problem
and we apply this result to register automata. We compute the _Hermite normal
form_ (HNF) of the matrix with skew polynomial coefficients associated to (5)
in order to do elimination in a more efficient way. The complexity bounds
provided by Giesbrecht and Kim [26] on the computation of the HNF lead to the
following bounds for cancelling relations; c.f. Appendix E for further details
and full proofs.
###### Lemma 6.1.
A linrec sequence $f\in\mathbb{Q}^{\mathbb{N}^{2}}$ of order $\leq m$, degree
$\leq d$, and height $\leq h$ admits a cancelling relation (CR-2) with the
orders $i^{*},j^{*}$ and the degree of $p_{i^{*},j^{*}}$ polynomially bounded,
and with height $\left|{p_{i^{*},j^{*}}}\right|_{\infty}$ exponentially
bounded. Similarly, its one-dimensional sections
$f(0,k),\dots,f(i^{*},k)\in\mathbb{Q}^{\mathbb{N}}$ also admit cancelling
relations (CR-1) of polynomially bounded orders and degree, and exponentially
bounded height.
This allows us to prove below the EXPTIME upper bound for zeroness of Theorem
1.1, and the 2-EXPTIME algorithm for inclusion of Theorem 1.2.
###### Proof 6.2 (Proof of Theorem 1.1).
Thanks to the bounds from Lemma 6.1, $i^{*},j^{*}$ are polynomially bounded;
we can find a polynomial bound $d$ on the $\partial_{2}$-degrees of the
cancelling relations $R_{0},\dots,R_{i^{*}}$ for the sections
$f(0,k),\dots,f(i^{*},k)$, respectively; we can find a polynomial bound $e$ on
the degrees of $p_{i^{*},j^{*}}(k)$ and the leading polynomial coefficients of
the $R_{i}$’s; and an exponential bound $h$ on
$\left|{p_{i^{*},j^{*}}}\right|_{\infty}$ and the heights of the leading
polynomial coefficients of the $R_{i}$’s. We thus obtain an EXPTIME algorithm
by Lemma 5.4.
This yields the announced upper bounds for the inclusion problem for register
automata.
###### Proof 6.3 (Proof of Theorem 1.2).
For the universality problem $L(B)=(\Sigma\times\mathbb{A})^{*}$, let $d$ be
the number of registers and $\ell$ the number of control locations of $B$. By
Lemma 4.5, the universality problem reduces in PSPACE to zeroness of a linrec
system with polynomial coefficients in $\mathbb{Q}[k]$ containing $O(\ell\cdot
2^{d\cdot\log d})$ variables $G_{p,[\bar{a}]}$ and the same number of
equations. By Theorem 1.1, we get a 2-EXPTIME algorithm. When the number of
registers $d$ is fixed, we get an EXPTIME algorithm. For the inclusion problem
$L(A)\subseteq L(B)$, we first orbitise $A$ into an equivalent orbitised
register automaton without guessing $A^{\prime}$. A close inspection of the
two constructions leading to $C$ in the proof of Lemma 3.4 reveals that
transitions in $C$ are either transitions from $A^{\prime}$ (and thus already
orbitised), or pairs of a transition in $B$ together with a transition in
$A^{\prime}$, the second of which is already orbitised. It follows that
orbitising $C$ incurs an exponential blow-up w.r.t. the number of registers
of $B$, but only polynomial w.r.t. the number of registers of $A^{\prime}$
(and thus of $A$), since the $A^{\prime}$-part in $C$ is already orbitised.
Consequently, we can write (in PSPACE) a system of linrec equations for the
universality problem of $C$ of size exponential in the number of registers of
$A$ and of $B$. By reasoning as in the first part of the proof, we obtain an
EXPTIME algorithm for the universality problem of $C$, and thus a 2-EXPTIME
algorithm for the original inclusion problem $L(A)\subseteq L(B)$. If the
numbers of registers of both $A$ and $B$ are fixed, we get an EXPTIME algorithm.
The equivalence problem $L(A)=L(B)$ with both automata $A,B$ unambiguous
reduces to two inclusion problems.
## 7 Further remarks and conclusions
We say that $P=\sum_{i,j}p_{i,j}(n,k)\cdot\partial_{1}^{i}\partial_{2}^{j}$ is
_monic_ if $p_{i^{*},j^{*}}=1$ where $(i^{*},j^{*})$ is the lexicographically
largest pair $(i,j)$ s.t. $p_{i,j}\neq 0$. The cancelling relation (CR-2) in
our examples (10), (24), (25), (29) happens to be monic in this sense.
###### Conjecture 7.1 (Monicity conjecture).
There always exists a _monic_ cancelling relation (CR-2) for linrec systems
obtained from automata equations in Figure 2, and similarly for their sections
(CR-1).
Conjecture 7.1 has important algorithmic consequences. The exponential complexity in
Theorem 1.1 comes from the exponential growth of the rational number
coefficients (heights) in the HNF. This is due to the use of Lemma 5.4, whose
complexity depends on the maximal root of the leading polynomial
$p_{i^{*},j^{*}}(n,k)$ from (CR-2). If Conjecture 7.1 holds, then
$p_{i^{*},j^{*}}(n,k)=1$, Lemma 5.4 would yield a PTIME algorithm for
zeroness, and consequently all complexities in Theorem 1.2 would drop by one
exponential. This provides ample motivation to investigate the monicity
conjecture.
In order to obtain the lower EXPTIME complexity for $L(A)\subseteq L(B)$ in
Theorem 1.2 we have to fix the number of registers in _both_ automata $A$ and
$B$. The EXPSPACE upper bound of Mottet and Quaas [37] holds already when only
the number of registers of $B$ is fixed, while we only obtain a 2-EXPTIME
upper bound in this case. It is left for future work whether the counting
approach can yield better bounds without fixing the number of registers of
$A$.
The fact that the automata are non-guessing is crucial in each of the cases I,
II, and III of the equations in Figure 2 in order to correctly count the
number of orbits of runs. For automata with guessing, from the fact that the
current input $a$ is stored in a register we cannot deduce that $a$ actually
appeared previously in the input word $w$; this knowledge is needed in the
last-step decomposition, since we need to know that all values in $\bar{a}$
occur in the word read so far. Thus our current parametrisation in terms of
length and width does not lead to a recursive characterisation.
Finally, it is also left for further work to extend the counting approach to
other data domains such as total order atoms, random graph atoms, etc., and,
more generally, to arbitrary homogeneous and $\omega$-categorical atoms under
suitable computability assumptions (c.f. [16]), and to other models of
computation such as register pushdown automata [13, 39].
## References
  * [1] Ronald Alter and K. K. Kubota. Prime and prime power divisibility of Catalan numbers. Journal of Combinatorial Theory, Series A, 15(3):243–256, 1973.
* [2] Christel Baier, Stefan Kiefer, Joachim Klein, Sascha Klüppelholz, David Müller, and James Worrell. Markov Chains and Unambiguous Büchi Automata. In Swarat Chaudhuri and Azadeh Farzan, editors, Proc. of CAV’16, pages 23–42, Cham, 2016. Springer International Publishing.
* [3] M. Benedikt, T. Duff, A. Sharad, and J. Worrell. Polynomial automata: Zeroness and applications. In Proc. of LICS’17, pages 1–12, June 2017. doi:10.1109/LICS.2017.8005101.
  * [4] Mikołaj Bojańczyk. Slightly Infinite Sets. 2019. URL: https://www.mimuw.edu.pl/~bojan/paper/atom-book.
* [5] Alin Bostan, Arnaud Carayol, Florent Koechlin, and Cyril Nicaud. Weakly-Unambiguous Parikh Automata and Their Link to Holonomic Series. In Artur Czumaj, Anuj Dawar, and Emanuela Merelli, editors, Proc. of ICALP’20, volume 168 of LIPIcs, pages 114:1–114:16, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
  * [6] Alin Bostan, Frédéric Chyzak, Bruno Salvy, and Ziming Li. Fast computation of common left multiples of linear ordinary differential operators. In Proc. of ISSAC’12, pages 99–106, New York, NY, USA, 2012. ACM.
  * [7] Nicolas Bousquet and Christof Löding. Equivalence and inclusion problem for strongly unambiguous Büchi automata. In Adrian-Horia Dediu, Henning Fernau, and Carlos Martín-Vide, editors, Proc. of LATA’10, pages 118–129, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg.
* [8] Mireille Bousquet-Mélou. Algebraic generating functions in enumerative combinatorics and context-free languages. In Volker Diekert and Bruno Durand, editors, Proc. of STACS’05, pages 18–35, Berlin, Heidelberg, 2005. Springer Berlin Heidelberg.
* [9] Michaël Cadilhac, Alain Finkel, and Pierre McKenzie. Unambiguous constrained automata. In Hsu-Chun Yen and Oscar H. Ibarra, editors, Proc. of DLT’12, volume 7410 of LNCS, pages 239–250. Springer Berlin Heidelberg, 2012.
* [10] Michaël Cadilhac, Filip Mazowiecki, Charles Paperman, Michał Pilipczuk, and Géraud Sénizergues. On Polynomial Recursive Sequences. In Artur Czumaj, Anuj Dawar, and Emanuela Merelli, editors, Proc. of ICALP’20, volume 168 of LIPIcs, pages 117:1–117:17, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
* [11] Peter J. Cameron. Notes on Counting: An Introduction to Enumerative Combinatorics. Australian Mathematical Society Lecture Series. Cambridge University Press, 1 edition, 2017.
* [12] Giusi Castiglione and Paolo Massazza. On a class of languages with holonomic generating functions. Theoretical Computer Science, 658:74–84, 2017.
* [13] Edward Y. C. Cheng and Michael Kaminski. Context-free languages over infinite alphabets. Acta Inf., 35(3):245–267, 1998.
* [14] N. Chomsky and M. P. Schützenberger. The algebraic theory of context-free languages. In P. Braffort and D. Hirschberg, editors, Computer Programming and Formal Systems, volume 35 of Studies in Logic and the Foundations of Mathematics, pages 118–161. Elsevier, 1963.
* [15] Lorenzo Clemente. On the complexity of the universality and inclusion problems for unambiguous context-free grammars. In Laurent Fribourg and Matthias Heizmann, editors, Proceedings 8th International Workshop on Verification and Program Transformation and 7th Workshop on Horn Clauses for Verification and Synthesis, Dublin, Ireland, 25-26th April 2020, volume 320 of EPTCS, pages 29–43. Open Publishing Association, 2020. doi:10.4204/EPTCS.320.2.
* [16] Lorenzo Clemente and Slawomir Lasota. Reachability analysis of first-order definable pushdown systems. In Stephan Kreutzer, editor, Proc. of CSL’15, volume 41 of LIPIcs, pages 244–259, Dagstuhl, 2015.
* [17] P. M. Cohn. Skew Fields: Theory of General Division Rings, volume 57 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1995.
* [18] Thomas Colcombet. Forms of Determinism for Automata (Invited Talk). In Christoph Dürr and Thomas Wilke, editors, Proc. of STACS’12, volume 14 of LIPIcs, pages 1–23, Dagstuhl, Germany, 2012. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [19] Thomas Colcombet. Unambiguity in automata theory. In Jeffrey Shallit and Alexander Okhotin, editors, Descriptional Complexity of Formal Systems, pages 3–18, Cham, 2015. Springer International Publishing.
  * [20] Wojciech Czerwiński, Diego Figueira, and Piotr Hofman. Universality Problem for Unambiguous VASS. In Igor Konnov and Laura Kovács, editors, Proc. of CONCUR’20, volume 171 of LIPIcs, pages 36:1–36:15, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
  * [21] Laure Daviaud, Marcin Jurdzinski, Ranko Lazic, Filip Mazowiecki, Guillermo A. Pérez, and James Worrell. When is Containment Decidable for Probabilistic Automata? In Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella, editors, Proc. of ICALP’18, volume 107 of LIPIcs, pages 121:1–121:14, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [22] Stéphane Demri and Ranko Lazić. LTL with the freeze quantifier and register automata. ACM Trans. Comput. Logic, 10(3):16:1–16:30, April 2009.
* [23] Philippe Flajolet, Stefan Gerhold, and Bruno Salvy. On the non-holonomic character of logarithms, powers, and the nth prime function. Electr. J. Comb., 11(2), 2005.
* [24] Stefan Gerhold. On some non-holonomic sequences. Electr. J. Comb., 11(1), 2004.
* [25] M. Giesbrecht. Factoring in skew-polynomial rings over finite fields. Journal of Symbolic Computation, 26(4):463–486, 1998. URL: http://www.sciencedirect.com/science/article/pii/S0747717198902243, doi:https://doi.org/10.1006/jsco.1998.0224.
* [26] Mark Giesbrecht and Myung Sub Kim. Computing the Hermite form of a matrix of Ore polynomials. Journal of Algebra, 376:341–362, 2013.
* [27] Vesa Halava, Tero Harju, Mika Hirvensalo, and Juhani Karhumäki. Skolem’s problem - on the border between decidability and undecidability, 2005.
* [28] John Hopcroft, Rajeev Motwani, and Jeffrey Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, 2000.
* [29] Michael Kaminski and Nissim Francez. Finite-memory automata. Theoretical Computer Science, 134(2):329–363, 1994.
* [30] Michael Kaminski and Daniel Zeitlin. Finite-memory automata with non-deterministic reassignment. International Journal of Foundations of Computer Science, 21(05):741–760, 2010.
* [31] Ravindran Kannan and Achim Bachem. Polynomial algorithms for computing the Smith and Hermite normal forms of an integer matrix. SIAM Journal on Computing, 8(4):499–507, 1979.
* [32] Martin Klazar. Bell numbers, their relatives, and algebraic differential equations. Journal of Combinatorial Theory, Series A, 102(1):63–87, 2003. URL: http://www.sciencedirect.com/science/article/pii/S0097316503000141, doi:https://doi.org/10.1016/S0097-3165(03)00014-1.
  * [33] Werner Kuich. On the multiplicity equivalence problem for context-free grammars. In Proceedings of the Colloquium in Honor of Arto Salomaa on Results and Trends in Theoretical Computer Science, pages 232–250, Berlin, Heidelberg, 1994. Springer-Verlag.
* [34] George Labahn, Vincent Neiger, and Wei Zhou. Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix. Journal of Complexity, 42:44–71, 2017.
* [35] Leonard Lipshitz. D-finite power series. Journal of Algebra, 122(2):353–373, 1989.
* [36] Dugald Macpherson. A survey of homogeneous structures. Discrete Math., 311(15):1599–1634, August 2011.
* [37] Antoine Mottet and Karin Quaas. The containment problem for unambiguous register automata and unambiguous timed automata. Theory of Computing Systems, 2020. doi:10.1007/s00224-020-09997-2.
* [38] T. Mulders and A. Storjohann. On lattice reduction for polynomial matrices. Journal of Symbolic Computation, 35(4):377–401, 2003.
* [39] A.S. Murawski, S.J. Ramsay, and N. Tzevelekos. Reachability in pushdown register automata. Journal of Computer and System Sciences, 87:58–83, 2017.
  * [40] Vincent Neiger, Johan Rosenkilde, and Grigory Solomatov. Computing Popov and Hermite forms of rectangular polynomial matrices. In Proc. of ISSAC’18, pages 295–302, New York, NY, USA, 2018. ACM.
  * [41] Frank Neven, Thomas Schwentick, and Victor Vianu. Finite state machines for strings over infinite alphabets. ACM Trans. Comput. Logic, 5(3):403–435, July 2004.
* [42] Oystein Ore. Linear equations in non-commutative fields. Annals of Mathematics, 32(3):463–477, 1931. URL: http://www.jstor.org/stable/1968245.
* [43] Oystein Ore. Theory of non-commutative polynomials. Annals of Mathematics, 34(3):480–508, 1933. URL: http://www.jstor.org/stable/1968173.
* [44] Mikhail Raskin. A Superpolynomial Lower Bound for the Size of Non-Deterministic Complement of an Unambiguous Automaton. In Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella, editors, Proc. of ICALP’18, volume 107 of LIPIcs, pages 138:1–138:11, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [45] Arto Salomaa and Marti Soittola. Automata-theoretic aspects of formal power series. Texts and Monographs in Computer Science. Springer, 1978.
* [46] James Schmerl. A decidable $\aleph_{0}$-categorical theory with a non-recursive Ryll-Nardzewski function. Fundamenta Mathematicae, 98(2):121–125, 1978.
* [47] Luc Segoufin. Automata and logics for words and trees over an infinite alphabet. In Zoltán Ésik, editor, Computer Science Logic, volume 4207 of LNCS, pages 41–57. Springer Berlin Heidelberg, 2006.
* [48] Géraud Sénizergues. The equivalence problem for deterministic pushdown automata is decidable. In Pierpaolo Degano, Roberto Gorrieri, and Alberto Marchetti-Spaccamela, editors, Proc. of ICALP’97, pages 671–681, Berlin, Heidelberg, 1997. Springer Berlin Heidelberg.
* [49] Richard P. Stanley. Differentiably finite power series. European Journal of Combinatorics, 1(2):175–188, 1980.
* [50] Richard P. Stanley. Enumerative Combinatorics. The Wadsworth & Brooks/Cole Mathematics Series 1. Springer, 1 edition, 1986.
* [51] R. Stearns and H. Hunt. On the equivalence and containment problems for unambiguous regular expressions, grammars, and automata. In Proc. of SFCS’81, pages 74–81, Washington, DC, USA, 1981. IEEE Computer Society. URL: http://dx.doi.org/10.1109/SFCS.1981.29, doi:10.1109/SFCS.1981.29.
* [52] L. J. Stockmeyer and A. R. Meyer. Word problems requiring exponential time (preliminary report). In Proc. of STOC’73, pages 1–9, New York, NY, USA, 1973. ACM.
* [53] Wen-Guey Tzeng. A polynomial-time algorithm for the equivalence of probabilistic automata. SIAM J. Comput., 21(2):216–227, April 1992.
  * [54] G. Villard. Computing Popov and Hermite forms of polynomial matrices. In Proc. of ISSAC’96, pages 250–258, New York, NY, USA, 1996. Association for Computing Machinery.
* [55] Tzeng Wen-Guey. On path equivalence of nondeterministic finite automata. Information Processing Letters, 58(1):43–46, 1996.
## Appendix A Additional material for Sec. 2
### A.1 One-dimensional linear recursive sequences
Let $f:\mathbb{Q}^{\mathbb{N}}$ be a one-dimensional sequence. The shift
operator $\partial:\mathbb{Q}^{\mathbb{N}}\to\mathbb{Q}^{\mathbb{N}}$ is
defined as $(\partial f)(n)=f(n+1)$ for every $n\in\mathbb{N}$. A one-
dimensional sequence $f$ is _linear recursive_ (linrec) if there are auxiliary
sequences $f=f_{1},f_{2},\dots,f_{m}:\mathbb{Q}^{\mathbb{N}}$ satisfying a
system of equations of the form
$\displaystyle\left\\{\begin{array}[]{rcl}\partial f_{1}&=&p_{1,1}\cdot
f_{1}+\cdots+p_{1,m}\cdot f_{m},\\\ &\vdots&\\\ \partial f_{m}&=&p_{m,1}\cdot
f_{1}+\cdots+p_{m,m}\cdot f_{m},\end{array}\right.$ (14)
where the $p_{i,j}\in\mathbb{Q}[n]$ are univariate polynomials. The _order_ of
a linrec sequence is the smallest $m$ s.t. it admits a description as above.
Allowing terms on the r.h.s. of the form $p\in\mathbb{Q}[n]$ does not increase
the expressive power, since univariate polynomials are already linrec and
thus $p$ could be replaced by introducing an auxiliary variable for it. If we
fix the initial conditions $f_{1}(0),\dots,f_{m}(0)$, then the system above
has a unique solution, and we can moreover compute all the values $f_{i}(n)$
by unfolding the definition. Amongst innumerable others, the _Fibonacci
sequence_ $\partial^{2}f=\partial f+f$ is linrec (even constant recursive)
since we can introduce an auxiliary sequence $g$ and write $\partial f=f+g$
and $\partial g=f$. An example using non-constant polynomial coefficients is
provided by the number $t(n)$ of _involutions_ of $\\{1,\dots,n\\}$ (a.k.a.
_telephone numbers_) since $\partial^{2}t=\partial t+(n+1)\cdot t$; by
introducing an auxiliary sequence $s(n)$, we have a linrec system $\partial
t=t+n\cdot s$ and $\partial s=t$.
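Both examples can be checked by directly unfolding their linrec systems; a short Python sketch (function names are ours):

```python
def fibonacci(n):
    # linrec system for the Fibonacci recurrence: shift(f) = f + g, shift(g) = f
    f, g = 1, 0
    for _ in range(n):
        f, g = f + g, f
    return f

def telephone(n):
    # linrec system with a non-constant polynomial coefficient:
    # shift(t)(m) = t(m) + m * s(m), shift(s) = t
    t, s = 1, 0
    for m in range(n):
        t, s = t + m * s, t
    return t

print([telephone(n) for n in range(6)])  # [1, 1, 2, 4, 10, 26]
```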
### A.2 Examples of bidimensional linrec sequences
There is a wealth of examples of linrec sequences. The power sequence $n^{k}$
is bidimensional linrec since for $n,k\geq 1$, $n^{k}=n\cdot n^{k-1}$ and the
two sections $0^{k}$ and $n^{0}$ are certainly constant after the first
element. The sequence of _binomial coefficients_ ${n\choose k}$ is linrec
since ${n\choose k}={n-1\choose k-1}+{n-1\choose k}$ for $n,k\geq 1$ and the
two sections satisfy ${n\choose 0}=1$ for $n\geq 0$ and ${0\choose k}=0$ for
$k\geq 1$. The _Stirling numbers of the first kind $s(n,k)$_ are linrec since
$s(n,k)=s(n-1,k-1)-(n-1)\cdot s(n-1,k)$ for $n,k\geq 1$ and the two sections
$s(n,0)=s(0,k)=0$ are constant for $n,k\geq 1$. Similar recurrences appear for
the Stirling numbers of the second kind $S(n,k)$ (as remarked in the
introduction), the _Eulerian numbers_ $A(n,k)=(n-k)\cdot A(n-1,k-1)+(k+1)\cdot
A(n-1,k)$, the _triangle numbers_ $T(n,k)=k\cdot T(n-1,k-1)+k\cdot T(n-1,k)$,
and many more.
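These recurrences are straightforward to unfold with the stated sections as base cases; for instance (a Python sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, k):
    # Pascal's recurrence, with the two sections as base cases
    if k == 0:
        return 1
    if n == 0:
        return 0
    return binom(n - 1, k - 1) + binom(n - 1, k)

@lru_cache(maxsize=None)
def s1(n, k):
    # (signed) Stirling numbers of the first kind:
    # s(n, k) = s(n-1, k-1) - (n-1) * s(n-1, k), with vanishing sections
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

print(binom(5, 2), s1(4, 2))  # 10 11
```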
As an additional example, consider the _Bell numbers_ $B(n)$, which count the
number of partitions of a set of $n$ elements into non-empty blocks. Notice
that $B(n)$ is not linrec, in fact not even P-recursive [32, 24]. The
well-known relationship $B(n)=\sum_{k=0}^{n}S(n,k)$ suggests considering the
partial sums $C(n,k)=\sum_{i=0}^{k-1}S(n-1,i)$. We have $C(n,0)=0$ and
$C(n+1,k+1)=S(n,k)+C(n+1,k)$, thus $C$ is linrec and $B(n)=C(n+1,n+1)$ is its
diagonal (shifted by one).
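A numerical check of the partial-sum construction (a Python sketch; we index the summand as $S(n-1,i)$, the reading under which the stated recurrence and the diagonal identity $B(n)=C(n+1,n+1)$ both come out):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind, with vanishing sections
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

@lru_cache(maxsize=None)
def C(n, k):
    # partial sums C(n, k) = sum_{i < k} S(n-1, i), which are linrec:
    # C(n, 0) = 0 and C(n+1, k+1) = S(n, k) + C(n+1, k)
    if k == 0:
        return 0
    return stirling2(n - 1, k - 1) + C(n, k - 1)

def bell(n):
    return C(n + 1, n + 1)   # the diagonal, shifted by one

print([bell(n) for n in range(6)])  # [1, 1, 2, 5, 15, 52]
```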
### A.3 Comparison with other classes of sequences
#### Linrec vs. C-recursive.
A sequence $f:\mathbb{Q}^{\mathbb{N}^{d}}$ is _C-recursive_ if it satisfies a
recursion as in (5) where the affine operators $A_{i,j}$ are restricted to be
of the form $c_{i,j,0}+c_{i,j,1}\partial_{1}+c_{i,j,2}\partial_{2}$ for some
constants $c_{i,j,0},c_{i,j,1},c_{i,j,2}\in\mathbb{Q}$. Thus bidimensional
C-recursive sequences are linrec by definition. Since the asymptotic growth of
a 1-dimensional C-recursive sequence $f(n)$ is $O(r^{n})$ for some constant
$r\in\mathbb{Q}$, the sequence $n!=n\cdot(n-1)!$ is linrec but not
C-recursive, and thus the inclusion is strict. A useful fact is that zeroness
of C-recursive sequences can be solved in PTIME [51, 53].
###### Lemma A.1.
The zeroness problem for a one-dimensional C-recursive sequence can be solved
in PTIME.
###### Proof A.2.
It is well-known that a one-dimensional C-recursive sequence $f$ of order $m$
represented as in (14) where the $p_{i,j}$’s are rational numbers in
$\mathbb{Q}$, can be transformed into a single recurrence
$\displaystyle\partial^{m}f=c_{0}\cdot\partial^{0}f+\cdots+c_{m-1}\cdot\partial^{m-1}f,$
where $c_{0},\cdots,c_{m-1}\in\mathbb{Q}$. C.f. the proof of [27, Lemma 1]
relying on the Cayley-Hamilton theorem, or the more recent proof of [10,
Proposition 1] relying on a linear independence argument. It follows that
$f=0$ if, and only if, $f(n)=0$ for $0\leq n\leq m-1$. The latter condition
can be checked in PTIME by Lemma 2.1.
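The resulting decision procedure is trivial to implement once the single recurrence is available; a Python sketch (helper names are ours):

```python
def unfold(coeffs, init, N):
    # Unfold f(n + m) = c_{m-1} * f(n + m - 1) + ... + c_0 * f(n),
    # with coeffs = [c_0, ..., c_{m-1}] and init = [f(0), ..., f(m - 1)].
    vals = list(init)
    m = len(coeffs)
    while len(vals) < N:
        vals.append(sum(c * v for c, v in zip(coeffs, vals[-m:])))
    return vals

def is_zero_crecursive(coeffs, init):
    # By the argument of Lemma A.1, a C-recursive sequence of order m
    # vanishes identically iff its first m values vanish.
    return all(v == 0 for v in init)

print(is_zero_crecursive([1, 1], [0, 1]))   # Fibonacci: not the zero sequence
print(unfold([1, 1], [0, 1], 10))           # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```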
#### Linrec vs. P-recursive.
In dimension one, linrec sequences are a special case of _P-recursive
sequences_ [49]. The latter class can be defined as those sequences
$f:\mathbb{Q}^{\mathbb{N}}$ satisfying a linear equation of the form
$p_{k}(n)f(n)+p_{k-1}(n)f(n-1)+\cdots+p_{0}(n)f(n-k)=0$ for every $n\geq k$,
where $p_{k}(n),\dots,p_{0}(n)\in\mathbb{Q}[n]$. Thus linrec corresponds to
P-recursive with leading polynomial coefficient $p_{k}(n)=1$. The inclusion is
strict. The Catalan numbers $C(n)$ are P-recursive since they satisfy
$(n+2)\cdot C(n+1)=(4n+2)\cdot C(n)$ for every $n\geq 0$. However, they are
not linrec, and in fact not even polyrec (a more general class, c.f. below),
since 1) by [10, Theorem 6] polyrec (and thus linrec) sequences are
ultimately periodic modulo every sufficiently large prime, and 2) $C(n)$ is
not ultimately periodic modulo any prime $p$ [1].
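The Catalan recurrence is easy to unfold; note that, unlike in the linrec case, the non-constant leading coefficient $n+2$ forces a division (a Python sketch):

```python
from fractions import Fraction

def catalan(N):
    # Unfold (n + 2) * C(n + 1) = (4n + 2) * C(n) starting from C(0) = 1;
    # we work with exact rationals and the results are always integers.
    c = [Fraction(1)]
    for n in range(N - 1):
        c.append(Fraction(4 * n + 2, n + 2) * c[n])
    return [int(x) for x in c]

print(catalan(7))  # [1, 1, 2, 5, 14, 42, 132]
```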
In dimension two, linrec and P-recursive sequences [35] are incomparable. The
sequence $f(m,n)=m^{n}$ is linrec since $f(m+1,n+1)=(m+1)\cdot f(m+1,n)$,
$f(m,0)=1$, and $f(0,n+1)=0$. The diagonal of $f$ is thus $f(n,n)=n^{n}$.
Since P-recursive sequences are closed under taking diagonals [35, Theorem
3.8] and $n^{n}$ is not P-recursive [23, Section 1, page 5], it follows that
$m^{n}$ is not P-recursive either (as a two-dimensional sequence).
#### Linrec vs. polyrec
A one-dimensional sequence $f:\mathbb{Q}^{\mathbb{N}}$ is _polynomial
recursive_ (polyrec) if it satisfies a system of equations as in (14) where
the rhs’ are polynomial expressions in $\mathbb{Q}[f_{1}(n),\dots,f_{m}(n)]$
[10, Definition 3]666Since polynomial coefficients can already be defined in
this formalism, we would obtain the same class by allowing more general
expressions in $\mathbb{Q}[n][f_{1}(n),\dots,f_{m}(n)]$.. In dimension one,
the class of linrec sequences is strictly included in the class of polyrec
sequences. Consider the sequence $f(n)=2^{2^{n}}$. On the one hand, it is
polyrec since $f(n+1)=f(n)^{2}$. On the other hand, it is not linrec, and in
fact not even P-recursive, since a P-recursive sequence $g(n)$ has growth rate
$O((n!)^{c})$ for some constant $c\in\mathbb{N}$ [35, Proposition 3.11]. To
the best of our knowledge, polyrec sequences in higher dimension have not been
studied yet.
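A minimal Python illustration of the polyrec witness $2^{2^{n}}$:

```python
def doubly_exp(n):
    # polyrec: f(n + 1) = f(n)**2 with f(0) = 2, so f(n) = 2**(2**n),
    # which outgrows every P-recursive (hence every linrec) sequence
    f = 2
    for _ in range(n):
        f = f * f
    return f

print([doubly_exp(n) for n in range(4)])  # [2, 4, 16, 256]
```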
### A.4 Zeroness problem
Zeroness of one-dimensional C-recursive sequences is decidable in NC$^{2}$ [53]
(and thus in polylogarithmic space); we recalled a simple argument leading to
a PTIME algorithm in Lemma A.1. Zeroness of one-dimensional P-recursive
sequences is decidable (c.f. [12] and the corrections in [5, Section 5]).
Zeroness of one-dimensional polyrec sequences is decidable, and in fact the
more general zeroness problem for polynomial automata is decidable with non-
primitive recursive complexity [3] (polyrec sequences correspond to polynomial
automata over a unary alphabet $\Sigma=\\{a\\}$).
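The simple argument behind the PTIME bound for C-recursive sequences reduces zeroness to inspecting the initial values: once the first $d$ terms of a sequence of order $d$ vanish, every later term vanishes by induction on the recurrence. A schematic sketch (ours):

```python
def is_zero_crec(init, coeffs):
    """Zeroness of a C-recursive sequence of order d = len(init), given by
    f(n + d) = sum(coeffs[i] * f(n + i) for i in range(d)).

    If the first d terms are zero, every later term is zero by induction,
    so zeroness reduces to checking the initial values."""
    return all(v == 0 for v in init)

# Fibonacci: f(n+2) = f(n+1) + f(n) with initial terms (0, 1) -> not zero.
assert not is_zero_crec((0, 1), (1, 1))
assert is_zero_crec((0, 0), (1, 1))
```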
### A.5 Proofs for Sec. 2
See 2.2
###### Proof A.3.
We prove the lemma for the $L$-section $f^{L}(n)$ defined as $f(n,L)$. Let the
auxiliary sequences be $f=f_{1},\dots,f_{m}$ as in (5), and fix the initial
conditions $f_{j}(0,\geq 1),f_{j}(\geq 1,0),f_{j}(0,0)\in\mathbb{Q}$ for every
$1\leq j\leq m$. Let $f^{K}_{j}(n)$ be a new variable denoting the $K$-section
$f_{j}(n,K)$, for every $1\leq j\leq m$ and $0\leq K\leq L$. We show by
induction on $K$ that all the $f^{K}_{j}$’s are linrec. In the base case
$K=0$, $f^{0}_{j}(n)$ is linrec by setting
$f^{0}_{j}(0)=f_{j}(0,0)\in\mathbb{Q}$ and
$\partial_{1}f^{0}_{j}(n)=f_{j}(n+1,0)=f_{j}(\geq 1,0)\in\mathbb{Q}$. Notice
that, strictly speaking, the latter is not a legal linrec equation since
constants are allowed only in the base case and not in (14) (which are linear
systems and not affine ones). To this end, we introduce an extra variable
$g_{j}(n)$ and we define $g_{j}(0)=f_{j}(\geq 1,0)\in\mathbb{Q}$, and we have
the linrec equations
$\displaystyle\partial_{1}f^{0}_{j}(n)$ $\displaystyle=g_{j}(n),$
$\displaystyle\partial_{1}g_{j}(n)$ $\displaystyle=g_{j}(n).$
For the inductive step, we write
$\displaystyle\partial_{1}f^{M+1}_{j}(n)$
$\displaystyle=\partial_{1}\partial_{2}f_{j}(n,M)$
$\displaystyle=\sum_{i}(p_{i00}(n,M)+p_{i01}(n,M)\cdot\partial_{1}+p_{i11}(n,M)\cdot\partial_{2})f_{i}(n,M)$
$\displaystyle=\sum_{i}\left((p_{i00}(n,M)+p_{i01}(n,M)\cdot\partial_{1})f^{M}_{i}(n)+p_{i11}(n,M)\cdot
f^{M+1}_{i}(n)\right).$
By induction, each $f^{M}_{i}$ is one-dimensional linrec, and we can thus
adjoin their corresponding systems of equations. We have introduced
$m\cdot(L+1)$ new variables $f^{K}_{j}$’s and $m$ variables $g_{j}$’s (thus
$m+m\cdot(L+1)+m=m\cdot(L+3)$ in total), and the same number of additional
equations. The initial condition for the new variables $f^{M}_{j}$ is
$f^{M}_{j}(0)=f_{j}(0,M)$, which can be computed in PTIME by Lemma 2.1.
Moreover every polynomial coefficient appears already in the original system,
but with the second parameter fixed to some $0\leq M\leq L$. Therefore the
degree does not increase and the height is bounded by $h\cdot L^{d}$.
## Appendix B Proofs for Sec. 3
See 3.4
The two reductions in Lemma 3.4 are sufficiently generic to be useful also in
other contexts. For instance, in the context of nondeterministic finite
automata they imply that the inclusion problem $L(A)\subseteq L(B)$ with $A$
nondeterministic and $B$ unambiguous reduces in PTIME to the universality
problem of an unambiguous finite automaton. Since the latter problem is in
PTIME [51, Corollary 4.7], the inclusion problem is in PTIME as well. Notice
that we did not assume that $A$ is unambiguous, as is often done in
analogous circumstances [51], [5, Section 5]. A similar reduction has recently
been used in the context of inclusion problems between context-free grammars
and finite automata [15, Sec. 3.1]. In the context of register automata, the
results of [37] do not make any unambiguity assumption on $A$.
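For plain NFAs, the letter-refinement idea behind the reduction can be made concrete: each letter of the new alphabet is a transition rule of $A$, and a homomorphism $h$ recovers the original letter. The sketch below (ours; `relabel`, `accepts`, `included` are hypothetical helper names, and inclusion is checked by brute force up to a length bound rather than by a decision procedure):

```python
from itertools import product

# NFAs as (states, initials, finals, delta), delta a set of (p, letter, q).
def relabel(A, B):
    """Letter refinement: the letters of the refined alphabet are the
    transition rules of A, and h maps each rule back to its letter.
    A' becomes deterministic since each rule names its own successor."""
    dA = A[3]
    Ap = (A[0], A[1], A[2], {(t[0], t, t[2]) for t in dA})
    Bp = (B[0], B[1], B[2],
          {(p, t, q) for t in dA for (p, s, q) in B[3] if s == t[1]})
    return Ap, Bp

def accepts(N, w):
    _, init, final, delta = N
    cur = set(init)
    for a in w:
        cur = {q for p in cur for (pp, aa, q) in delta if pp == p and aa == a}
    return bool(cur & set(final))

def included(N, M, alphabet, max_len):
    # Brute-force check of L(N) <= L(M) on words up to length max_len.
    return all(accepts(M, w) for L in range(max_len + 1)
               for w in product(alphabet, repeat=L) if accepts(N, w))

A = ({0}, {0}, {0}, {(0, "a", 0)})               # L(A) = a*
B = ({0}, {0}, {0}, {(0, "a", 0), (0, "b", 0)})  # L(B) = (a|b)*
Ap, Bp = relabel(A, B)
assert included(A, B, ("a", "b"), 4) == included(Ap, Bp, tuple(A[3]), 4)
Bp2, Ap2 = relabel(B, A)
assert included(B, A, ("a", "b"), 4) == included(Bp2, Ap2, tuple(B[3]), 4)
```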
###### Proof B.1.
Consider two register automata $A$ and $B$ over finite alphabet $\Sigma$ with
transition relations $\xrightarrow{}_{A}$, resp., $\xrightarrow{}_{B}$. We
assume w.l.o.g. that they have the same number of registers. Regarding the
first point, consider the new finite alphabet
$\Sigma^{\prime}={\xrightarrow{}_{A}}$ which equals exactly the set of
transition rules of $A$. Let $h:\Sigma^{\prime}\to\Sigma$ be the surjective
homomorphism allowing us to recover the original letter and defined as
$h(p\xrightarrow{\sigma,\varphi}q)=\sigma$. We extend $h$ to a function
$\hat{h}:(\Sigma^{\prime}\times\mathbb{A})\to(\Sigma\times\mathbb{A})$ by
preserving the data value $\hat{h}(t,a)=(h(t),a)$. Consider the automaton
$A^{\prime}$ obtained from $A$ by replacing every transition rule
$t=(p\xrightarrow{\sigma,\varphi}_{A}q)$ of $A$ with
$p\xrightarrow{t,\varphi}_{A^{\prime}}q$. Since $A^{\prime}$ has the same set
of control locations and number of transitions as $A$, it is clearly of
polynomial size. Since $A$ is without guessing and orbitised, $\varphi$
uniquely determines the next register contents given the current configuration
and input $(\sigma,a)$. Thus the only source of nondeterminism in $A$ resides
in the fact that there may be several transitions over the same $\sigma$. This
nondeterminism is removed in $A^{\prime}$, since $\sigma$ is replaced by the
transition $t$ itself. Consequently, $A^{\prime}$ is deterministic.
Consider the automaton $B^{\prime}$ obtained from $B$ by replacing every
transition rule $p\xrightarrow{\sigma,\varphi}_{B}q$ with _all_ transitions of
the form $p\xrightarrow{t,\varphi}_{B^{\prime}}q$ s.t. $h(t)=\sigma$. Clearly,
$B^{\prime}$ has the same control locations as $B$ and number of transitions
$O(\left|\xrightarrow{}_{A}\right|\cdot\left|\xrightarrow{}_{B}\right|)$.
Moreover, if $B$ is orbitised, then so is $B^{\prime}$. Thus $B^{\prime}$ is
of polynomial size and by definition $L(B^{\prime})=\hat{h}^{-1}(L(B))$ and
$L(B)=\hat{h}(L(B^{\prime}))$. The correctness of the reduction follows from
the following claims. $L(A)\subseteq L(B)$ if, and only if,
$L(A^{\prime})\subseteq L(B^{\prime})$.
###### Proof B.2 (Proof of the claim).
For the “only if” direction, assume $L(A)\subseteq L(B)$ and let $w\in
L(A^{\prime})$. By the definition of $A^{\prime}$, $\hat{h}(w)\in L(A)$, and
thus $\hat{h}(w)\in L(B)$ by assumption. It follows that
$w\in\hat{h}^{-1}(L(B))=L(B^{\prime})$, as required.
For the “if” direction, assume $L(A^{\prime})\subseteq L(B^{\prime})$ and let
$w=(\sigma_{1},a_{1})\cdots(\sigma_{n},a_{n})\in L(A)$. Let the corresponding
accepting run in $A$ be
$\displaystyle\pi=(p_{0},\bar{a}_{0})\xrightarrow{\sigma_{1},a_{1}}\cdots\xrightarrow{\sigma_{n},a_{n}}(p_{n},\bar{a}_{n}).$
induced by the sequence of transitions
$t_{1}=(p_{0}\xrightarrow{\sigma_{1},\varphi_{1}}p_{1}),\dots,t_{n}=(p_{n-1}\xrightarrow{\sigma_{n},\varphi_{n}}p_{n})$.
By the definition of $A^{\prime}$, $\rho:=(t_{1},a_{1})\cdots(t_{n},a_{n})\in
L(A^{\prime})$, and thus $\rho\in L(B^{\prime})$ by assumption. By definition
of $B^{\prime}$, $w=\hat{h}(\rho)\in\hat{h}(L(B^{\prime}))=L(B)$, as required.
If $B$ is unambiguous, then so is $B^{\prime}$.
###### Proof B.3 (Proof of the claim).
If there are two distinct accepting runs in $B^{\prime}$ over the same input
word $w\in(\Sigma^{\prime}\times\mathbb{A})^{*}$, then applying $\hat{h}$ yields
two distinct accepting runs in $B$ over
$\hat{h}(w)\in(\Sigma\times\mathbb{A})^{*}$.
If $B$ is without guessing, then so is $B^{\prime}$.
###### Proof B.4 (Proof of the claim).
If there is a reachable transition in $\left\llbracket
B^{\prime}\right\rrbracket$ of the form
$(p,\bar{a})\xrightarrow{t,a}(q,\bar{a}^{\prime})$ s.t. some fresh
$a^{\prime}_{i}$ occurs in $\bar{a}^{\prime}$, then the same holds for
$(p,\bar{a})\xrightarrow{h(t),a}(q,\bar{a}^{\prime})$ in $\left\llbracket
B\right\rrbracket$.
We now show the second point, and we thus assume that $A$ is deterministic. By
pure set-theoretic manipulations, we have
$\displaystyle L(A)\subseteq L(B)\text{ iff }L(B)\cup
L(A)^{c}=(\Sigma\times\mathbb{A})^{*}\text{ iff }(L(B)\cap L(A))\cup
L(A)^{c}=(\Sigma\times\mathbb{A})^{*},$
where $L(A)^{c}$ denotes $(\Sigma\times\mathbb{A})^{*}\setminus L(A)$. It suffices
to observe that 1) $L(A)^{c}$ is recognisable by a deterministic (and thus
unambiguous and without guessing) register automaton constructible in PTIME,
2) $L(B)\cap L(A)$ is recognisable by an unambiguous and without guessing
automaton of polynomial size (since $A$ is deterministic and $B$ unambiguous
and without guessing), and 3) the disjoint union of two unambiguous and
without guessing languages is unambiguous and without guessing, and the
complexity is again polynomial. We thus take as $C$ any unambiguous and
without guessing automaton of polynomial size s.t. $L(C)=(L(B)\cap L(A))\cup
L(A)^{c}$. Finally, if $A$ and $B$ are orbitised, then $C$ is also orbitised.
## Appendix C Proofs for Sec. 4
See 4.1
###### Proof C.1.
Let $R_{p,\bar{a}}(n,k)$ be the set whose cardinality is counted by
$G_{p,\bar{a}}(n,k)$:
$\displaystyle R_{p,\bar{a}}(n,k)=\\{[\pi]_{\bar{a}}\mid
w\in(\Sigma\times\mathbb{A})^{n},\pi\in\mathsf{Runs}(C_{I};w;p,\bar{a}),\mathsf{width}(w)=k\\}.$
(15)
Let $\alpha:\mathbb{A}\to\mathbb{A}$ be an automorphism s.t.
$\alpha(\bar{a})=\bar{b}$. We claim that there exists a bijective function
from $R_{p,\bar{a}}(n,k)$ to $R_{p,\bar{b}}(n,k)$. Consider the function $f$
that maps $\bar{a}$-orbits of runs to $\bar{b}$-orbits of runs defined as
$\displaystyle
f([\pi]_{\bar{a}})=[\alpha(\pi)]_{\alpha(\bar{a})}=[\alpha(\pi)]_{\bar{b}}.$
Since runs $\pi\in R_{p,\bar{a}}(n,k)$ are $\bar{a}$-supported and $f$
preserves the length of the run and the width of the data word labelling it,
$f$ has the right type $f:R_{p,\bar{a}}(n,k)\to R_{p,\bar{b}}(n,k)$. We claim
that $f$ is injective on $R_{p,\bar{a}}(n,k)$. Towards a contradiction, assume
$[\pi]_{\bar{a}}\neq[\rho]_{\bar{a}}$ but
$[\alpha(\pi)]_{\bar{b}}=[\alpha(\rho)]_{\bar{b}}$. There exists a
$\bar{b}$-automorphism $\beta:\mathbb{A}\to\mathbb{A}$ s.t.
$\beta(\alpha(\pi))=\alpha(\rho)$. Consequently,
$\alpha^{-1}(\beta(\alpha(\pi)))=\rho$ maps $\pi$ to $\rho$. Moreover,
$\alpha^{-1}\beta\alpha$ is an $\bar{a}$-automorphism since
$\displaystyle\alpha^{-1}(\beta(\alpha(\bar{a})))$
$\displaystyle=\alpha^{-1}(\beta(\bar{b}))$ (def. of $\alpha$)
$\displaystyle=\alpha^{-1}(\bar{b})$ ($\beta$ is a $\bar{b}$-automorphism)
$\displaystyle=\bar{a}$ $\displaystyle\text{(def. of $\alpha$)}.$
It follows that $[\pi]_{\bar{a}}=[\rho]_{\bar{a}}$, which is a contradiction.
Thus, $f$ is injective. By a symmetric argument, there exists also an
injective function $g:R_{p,\bar{b}}(n,k)\to R_{p,\bar{a}}(n,k)$. Since both
sets are finite, the two injections yield
$\left|R_{p,\bar{a}}(n,k)\right|=\left|R_{p,\bar{b}}(n,k)\right|$, as required.
See 4.3
###### Proof C.2.
We show that $G_{p,[\bar{a}]}(n,k)$ counts the number of orbits of initial
runs over words of length $n$ and width $k$ ending in a configuration in the
orbit $(p,[\bar{a}])$. Let $S_{p,\bar{a}}(n,k)$ be the set of initial runs
ending in $(p,\bar{a})$ over words $w$ of length $n$ and width $k$:
$\displaystyle S_{p,\bar{a}}(n,k)=\\{\pi\mid
w\in(\Sigma\times\mathbb{A})^{n},\pi\in\mathsf{Runs}(C_{I};w;p,\bar{a}),\mathsf{width}(w)=k\\}.$
(16)
We have
$R_{p,\bar{a}}(n,k)=\mathsf{orbits}_{\bar{a}}(S_{p,\bar{a}}(n,k))=\\{[\pi]_{\bar{a}}\mid\pi\in
S_{p,\bar{a}}(n,k)\\}$. We observe the following decomposition for $n,k\geq
0$:
$\displaystyle S_{p^{\prime},\bar{a}^{\prime}}(n+1,k+1)=$
$\displaystyle\underbrace{\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\in\bar{a}}\\{\pi\cdot
t\mid\pi\in S_{p,\bar{a}}(n,k+1)\\}}_{\textsf{\bf I}}\ \cup$
$\displaystyle\underbrace{\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\not\in\bar{a}}\\{\pi\cdot
t\mid\pi\in S_{p,\bar{a}}(n,k),a\not\in\pi\\}}_{\textsf{\bf II}}\ \cup$
$\displaystyle\underbrace{\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\not\in\bar{a}}\\{\pi\cdot
t\mid\pi\in S_{p,\bar{a}}(n,k+1),a\in\pi\\}}_{\textsf{\bf III}},$
where the three unions marked by $\textsf{\bf I},\textsf{\bf II},\textsf{\bf
III}$ are mutually disjoint. When we pass to their $\bar{a}^{\prime}$-orbits,
we also get a disjoint union of orbits:
$\displaystyle R_{p^{\prime},\bar{a}^{\prime}}(n+1,k+1)=$
$\displaystyle\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\in\bar{a}}{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in S_{p,\bar{a}}(n,k+1)\\}}\ \cup$
$\displaystyle\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\not\in\bar{a}}{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in S_{p,\bar{a}}(n,k),a\not\in\pi\\}}\ \cup$
$\displaystyle\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\not\in\bar{a}}{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in S_{p,\bar{a}}(n,k+1),a\in\pi\\}}.$
By taking cardinalities on both sides, we get
$\displaystyle\left|R_{p^{\prime},\bar{a}^{\prime}}(n+1,k+1)\right|=$
$\displaystyle\left|\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\in\bar{a}}\underbrace{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in S_{p,\bar{a}}(n,k+1)\\}}_{R^{\textsf{\bf
I}}_{t}}\right|\ +$
$\displaystyle\left|\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\not\in\bar{a}}\underbrace{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in
S_{p,\bar{a}}(n,k),a\not\in\pi\\}}_{R^{\textsf{\bf II}}_{t}}\right|\ +$
$\displaystyle\left|\bigcup_{t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),a\not\in\bar{a}}\underbrace{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in
S_{p,\bar{a}}(n,k+1),a\in\pi\\}}_{R^{\textsf{\bf III}}_{t}}\right|.$
###### Claim 1.
Fix two transitions
$t_{1}=(p_{1},\bar{a}_{1}\xrightarrow{\sigma_{1},a_{1}}p^{\prime},\bar{a}^{\prime})$
and
$t_{2}=(p_{2},\bar{a}_{2}\xrightarrow{\sigma_{2},a_{2}}p^{\prime},\bar{a}^{\prime})$.
If $R^{\textsf{\bf I}}_{t_{1}}\cap R^{\textsf{\bf I}}_{t_{2}}\not=\emptyset$
then $[t_{1}]=[t_{2}]$.
###### Proof C.3 (Proof of the claim).
Let $[\pi_{1}\cdot t_{1}]_{\bar{a}^{\prime}}=[\pi_{2}\cdot
t_{2}]_{\bar{a}^{\prime}}$ for two runs $\pi_{1}\in
S_{p_{1},\bar{a}_{1}}(n-1,k)$ and $\pi_{2}\in S_{p_{2},\bar{a}_{2}}(n-1,k)$.
There exists an ($\bar{a}^{\prime}$-)automorphism $\alpha$ s.t.
$\alpha(\pi_{1}\cdot t_{1})=\pi_{2}\cdot t_{2}$. In particular,
$\alpha(t_{1})=t_{2}$, i.e., $[t_{1}]=[t_{2}]$ as required.
The claim above implies that the $R^{\textsf{\bf I}}_{t}$’s are disjoint for
distinct orbits $[t]$’s, and similarly for $R^{\textsf{\bf II}}_{t}$ and
$R^{\textsf{\bf III}}_{t}$. We thus obtain the equations
$\displaystyle\left|R_{p^{\prime},\bar{a}^{\prime}}(n+1,k+1)\right|=$
$\displaystyle\sum_{[t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime})]:\;a\in\bar{a}}|\underbrace{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in S_{p,\bar{a}}(n,k+1)\\}}_{R^{\textsf{\bf
I}}_{t}}|\ +$
$\displaystyle\sum_{[t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime})]:\;a\not\in\bar{a}}|\underbrace{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in
S_{p,\bar{a}}(n,k),a\not\in\pi\\}}_{R^{\textsf{\bf II}}_{t}}|\ +$
$\displaystyle\sum_{[t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime})]:\;a\not\in\bar{a}}|\underbrace{\\{[\pi\cdot
t]_{\bar{a}^{\prime}}\mid\pi\in
S_{p,\bar{a}}(n,k+1),a\in\pi\\}}_{R^{\textsf{\bf III}}_{t}}|.$
###### Claim 2.
The set of orbits $R^{\textsf{\bf I}}_{t}$ is in bijection with the set of
orbits
$R_{p,\bar{a}}(n,k+1)=\\{[\pi]_{\bar{a}}\mid\pi\in S_{p,\bar{a}}(n,k+1)\\}.$
###### Proof C.4 (Proof of the claim).
Indeed, consider the mapping $f:R^{\textsf{\bf I}}_{t}\to
R_{p,\bar{a}}(n,k+1)$ defined as
$\displaystyle f([\pi\cdot t]_{\bar{a}^{\prime}})=[\pi]_{\bar{a}}\quad\text{
with }t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}).$
First of all $f$ is well-defined as a function: Assume $[\pi_{1}\cdot
t]_{\bar{a}^{\prime}}=[\pi_{2}\cdot t]_{\bar{a}^{\prime}}$ for two paths
$\pi_{1},\pi_{2}$ both ending in configuration $(p,\bar{a})$. There exists an
$\bar{a}^{\prime}$-automorphism $\alpha$ s.t. $\alpha(\pi_{1}\cdot
t)=\pi_{2}\cdot t$. In particular, $\alpha(\pi_{1})=\pi_{2}$ and since
$\pi_{1},\pi_{2}$ end up in the same configuration $(p,\bar{a})$,
$\alpha(\bar{a})=\bar{a}$. Thus $\alpha$ is in fact an $\bar{a}$-automorphism
and $[\pi_{1}]_{\bar{a}}=[\pi_{2}]_{\bar{a}}$ as required. Secondly, $f$ is of
the right type since $[\pi]_{\bar{a}}\in R_{p,\bar{a}}(n,k+1)$: $\pi\cdot t$
is a run over a word $w\cdot a$ of width $k+1$ and thus $\pi$ is a run over a
word $w$ also of width $k+1$ because $a\in\bar{a}$, implying $a\in w$ since
the automaton is non-guessing. We argue that $f$ is a bijection. First of all,
$f$ is injective: If $f([\pi_{1}\cdot t]_{\bar{a}^{\prime}})=f([\pi_{2}\cdot
t]_{\bar{a}^{\prime}})$, then by definition of $f$ we have
$[\pi_{1}]_{\bar{a}}=[\pi_{2}]_{\bar{a}}$. There exists an
$\bar{a}$-automorphism $\alpha$ s.t. $\alpha(\pi_{1})=\pi_{2}$. Since the
automaton is without guessing, $\bar{a}^{\prime}\subseteq\bar{a}$, and thus
$\alpha$ is also an $\bar{a}^{\prime}$-automorphism. Since $\alpha(t)=t$ (due
to the fact that $a\in\bar{a}$ and thus $\alpha(a)=a$), $\alpha(\pi_{1}\cdot
t)=\pi_{2}\cdot t$ and thus $[\pi_{1}\cdot t]_{\bar{a}^{\prime}}=[\pi_{2}\cdot
t]_{\bar{a}^{\prime}}$ as required.
The mapping $f$ is also surjective. Indeed, let $[\pi]_{\bar{a}}\in
R_{p,\bar{a}}(n,k+1)$. Thus $\pi$ ends in configuration $(p,\bar{a})$ and
therefore $\pi\cdot t$ is a run. Consequently, $[\pi\cdot
t]_{\bar{a}^{\prime}}\in R^{\textsf{\bf I}}_{t}$. This is enough since, by the
definition of $f$, $[\pi]_{\bar{a}}=f([\pi\cdot t]_{\bar{a}^{\prime}})$.
###### Claim 3.
The set of orbits $R^{\textsf{\bf II}}_{t}$ is in bijection with the set of
orbits
$R_{p,\bar{a}}(n,k)=\\{[\pi]_{\bar{a}}\mid\pi\in S_{p,\bar{a}}(n,k)\\}.$
###### Proof C.5 (Proof of the claim).
Consider the mapping
$\displaystyle f([\pi\cdot t]_{\bar{a}^{\prime}})=[\pi]_{\bar{a}},\quad\text{
with }t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}).$
First of all, $f$ is well-defined as a function, and the argument is as in the
previous point. Secondly, $f$ has the right type. If $\pi\cdot t$ is a run
over a word $w\cdot a$ of width $k+1$, then $\pi$ is a run over $w$ of width
$k$ since $a\not\in w$. Thus $f$ is indeed a mapping from $R^{\textsf{\bf
II}}_{t}$ to $R_{p,\bar{a}}(n,k)$. We argue that $f$ is bijective. First of all,
$f$ is injective. Consider $\bar{a}^{\prime}$-orbits of runs $[\pi_{1}\cdot
t]_{\bar{a}^{\prime}},[\pi_{2}\cdot t]_{\bar{a}^{\prime}}\in R^{\textsf{\bf
II}}_{t}$ with $a\not\in\pi_{1}\cup\pi_{2}$. If $f([\pi_{1}\cdot
t]_{\bar{a}^{\prime}})=f([\pi_{2}\cdot t]_{\bar{a}^{\prime}})$, then by
definition of $f$ we have $[\pi_{1}]_{\bar{a}}=[\pi_{2}]_{\bar{a}}$. There
exists an $\bar{a}$-automorphism $\alpha$ s.t. $\alpha(\pi_{1})=\pi_{2}$.
Since $a\not\in\pi_{1}\cup\pi_{2}$, there is an automorphism $\beta$ s.t.
$\beta$ agrees with $\alpha$ on every data value in $\pi_{1}$ (in particular,
$\beta(\pi_{1})=\pi_{2}$ and $\beta(\bar{a})=\bar{a}$), and $\beta(a)=a$.
Since the automaton is without guessing,
$\bar{a}^{\prime}\subseteq\bar{a}\cup\\{a\\}$. Thus, $\beta$ is an
$\bar{a}^{\prime}$-automorphism and $\beta(\pi_{1}\cdot
t)=\beta(\pi_{1})\cdot\beta(t)=\pi_{2}\cdot t$, i.e., $[\pi_{1}\cdot
t]_{\bar{a}^{\prime}}=[\pi_{2}\cdot t]_{\bar{a}^{\prime}}$ as required. The
mapping $f$ is surjective by an argument as in the proof of Claim 2.
###### Claim 4.
The set of orbits $R^{\textsf{\bf III}}_{t}$ with $k+1\geq\mathsf{width}(\bar{a})$
(where $\mathsf{width}(\bar{a})$ denotes the number of distinct data values in
$\bar{a}$) is in bijection with $k+1-\mathsf{width}(\bar{a})$ disjoint copies of
the set of orbits
$R_{p,\bar{a}}(n,k+1)=\\{[\pi]_{\bar{a}}\mid\pi\in S_{p,\bar{a}}(n,k+1)\\},$
and it is empty otherwise, i.e., if $k+1<\mathsf{width}(\bar{a})$.
###### Proof C.6 (Proof of the claim).
If $k+1<\mathsf{width}(\bar{a})$, then $R^{\textsf{\bf III}}_{t}=\emptyset$:
since the automaton is non-guessing, it cannot store more distinct data values
$\mathsf{width}(\bar{a})$ in the registers than the number $k+1$ of distinct
data values in the input. In the following, thus
assume $k+1\geq\mathsf{width}(\bar{a})$. Let $w=a_{1}\cdots a_{n}\in\mathbb{A}^{n}$ be
the sequence of data values labelling the run $\pi$, and consider the non-
contiguous subsequence $D_{\pi}=a_{i_{1}}\cdots a_{i_{k+1-\mathsf{width}(\bar{a})}}$ of
$w$ consisting of the $k+1-\mathsf{width}(\bar{a})$ distinct elements in
$w\setminus\bar{a}$ in their order of appearance in $w$ (and thus in $\pi$).
Consider the function $f$ defined as
$\displaystyle f([\pi\cdot
t]_{\bar{a}^{\prime}})=(j,[\pi]_{\bar{a}})\quad\text{ with
}t=(p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}),$
where $a\not\in\bar{a}$ equals the unique $a_{i_{j}}\in D_{\pi}$. First of
all, $f$ is well-defined as a function: Assume $([\pi_{1}\cdot
t]_{\bar{a}^{\prime}},(j_{1},[\pi_{1}]_{\bar{a}})),([\pi_{2}\cdot
t]_{\bar{a}^{\prime}},(j_{2},[\pi_{2}]_{\bar{a}}))\in f$ with $[\pi_{1}\cdot
t]_{\bar{a}^{\prime}}=[\pi_{2}\cdot t]_{\bar{a}^{\prime}}$. There is an
$\bar{a}^{\prime}$-automorphism $\alpha$ s.t. $\alpha(\pi_{1}\cdot
$t)=\pi_{2}\cdot t$. In particular, $\alpha(\pi_{1})=\pi_{2}$ and
$\alpha(t)=t$, which also implies $\alpha(a)=a$. From
$\alpha(\pi_{1})=\pi_{2}$, we even have that $\alpha$ is an
$\bar{a}$-automorphism, and thus $[\pi_{1}]_{\bar{a}}=[\pi_{2}]_{\bar{a}}$. We
now argue that $j_{1}=j_{2}$. Assume $a$ appears in position $j_{1}$ in
$D_{\pi_{1}}$ and in position $j_{2}$ in $D_{\pi_{2}}$. Assume by way of
contradiction that $j_{1}\neq j_{2}$. We have that $\alpha(a)=a$ appears in
position $j_{1}$ in $\alpha(D_{\pi_{1}})=D_{\alpha(\pi_{1})}=D_{\pi_{2}}$,
i.e., $a$ also appears in position $j_{1}$ in $D_{\pi_{2}}$. This is a
contradiction, since all elements in $D_{\pi_{2}}$ are distinct. Thus $f$ is
indeed a mapping from $R^{\textsf{\bf III}}_{t}$ to
$\\{1,\dots,k+1-\mathsf{width}(\bar{a})\\}\times R_{p,\bar{a}}(n,k+1)$.
We argue that $f$ is bijective. First of all, $f$ is injective. Consider
$\bar{a}^{\prime}$-orbits of runs $[\pi_{1}\cdot
t]_{\bar{a}^{\prime}},[\pi_{2}\cdot t]_{\bar{a}^{\prime}}\in R^{\textsf{\bf
III}}_{t}$ with $a\not\in\bar{a},a\in\pi_{1},a\in\pi_{2}$. Assume
$f([\pi_{1}\cdot t]_{\bar{a}^{\prime}})=f([\pi_{2}\cdot
t]_{\bar{a}^{\prime}})$. By the definition of $f$, we have
$[\pi_{1}]_{\bar{a}}=[\pi_{2}]_{\bar{a}}$, and $a$ occurs in the same position
$j$ in $D_{\pi_{1}}$, resp., $D_{\pi_{2}}$. There exists an
$\bar{a}$-automorphism $\alpha$ s.t. $\alpha(\pi_{1})=\pi_{2}$. Consequently
$\alpha(a)$ occurs at position $j$ in
$\alpha(D_{\pi_{1}})=D_{\alpha(\pi_{1})}=D_{\pi_{2}}$, and thus
$\alpha(a)=a$. Since the automaton is without guessing,
$\bar{a}^{\prime}\subseteq\bar{a}\cup\\{a\\}$, and thus $\alpha$ is even an
$\bar{a}^{\prime}$-automorphism. This means
$[\pi_{1}]_{\bar{a}^{\prime}}=[\pi_{2}]_{\bar{a}^{\prime}}$ and $\alpha(t)=t$,
and thus $[\pi_{1}\cdot t]_{\bar{a}^{\prime}}=[\pi_{2}\cdot
t]_{\bar{a}^{\prime}}$ as required. The mapping $f$ is surjective by an
argument analogous as in the proof of Claim 2.
Thanks to Claims 2, 3 and 4, we obtain the equations
$\displaystyle\left|R_{p^{\prime},\bar{a}^{\prime}}(n+1,k+1)\right|=$
$\displaystyle\sum_{[p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]:\;a\in\bar{a}}\left|R_{p,\bar{a}}(n,k+1)\right|\
+$
$\displaystyle\sum_{[p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]:\;a\not\in\bar{a}}\left|R_{p,\bar{a}}(n,k)\right|\
+$
$\displaystyle\sum_{[p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]:\;a\not\in\bar{a}}\left|\\{1,\dots,k+1-\mathsf{width}(\bar{a})\\}\times
R_{p,\bar{a}}(n,k+1)\right|.$
By recalling the definition
$G_{p,\bar{a}}(n+1,k+1)=\left|R_{p,\bar{a}}(n+1,k+1)\right|$, we obtain, as
required,
$\displaystyle G_{p^{\prime},\bar{a}^{\prime}}(n+1,k+1)=$
$\displaystyle\sum_{[p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]:\;a\in\bar{a}}G_{p,\bar{a}}(n,k+1)\
+$
$\displaystyle\sum_{[p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]:\;a\not\in\bar{a}}(G_{p,\bar{a}}(n,k)+\max\\{k+1-\mathsf{width}(\bar{a}),0\\}\cdot
G_{p,\bar{a}}(n,k+1)).$
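As a sanity check of the recurrence, consider a toy 1-register automaton (our own hypothetical example, not from the paper) with configuration orbits $q_{0}$ (empty register) and $q_{1}$ (one stored value) that always stores the last datum read. Its run orbits of length $n$ and width $k$ correspond to equality patterns of data words, so the dynamic program below should reproduce Stirling numbers of the second kind:

```python
# Transition orbits of the toy automaton:
#   t1: q0 --(fresh a, store)--> q1   (a not in regs, source width 0)
#   t2: q1 --(fresh a, store)--> q1   (a not in regs, source width 1)
#   t3: q1 --(a = stored)------> q1   (a in regs)
def orbit_counts(n_max, k_max):
    G = {(q, n, k): 0 for q in "01"
         for n in range(n_max + 1) for k in range(k_max + 1)}
    G[("0", 0, 0)] = 1  # the empty initial run
    for n in range(n_max):
        for k in range(k_max):
            G[("1", n + 1, k + 1)] = (
                G[("1", n, k + 1)]                               # t3
                + G[("0", n, k)] + (k + 1) * G[("0", n, k + 1)]  # t1
                + G[("1", n, k)] + k * G[("1", n, k + 1)])       # t2
    return G

G = orbit_counts(5, 5)
# Stirling numbers of the second kind S(n, k).
assert G[("1", 3, 2)] == 3 and G[("1", 4, 2)] == 7
assert G[("1", 4, 3)] == 6 and G[("1", 5, 3)] == 25
```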
See 4.5
###### Proof C.7.
We can effectively enumerate all orbits of transitions
$[p,\bar{a}\xrightarrow{\sigma,a}p^{\prime},\bar{a}^{\prime}]$ by enumerating
all the exponentially many constraints up to logical equivalence [4, Ch. 4],
which can be done in PSPACE since this is the complexity of first-order logic
over the equality relation. Recall that the Bell number $B(n)$ counts the
number of non-empty partitions of a set of $n$ elements. The system in Figure
2 contains $\ell\cdot B(d)+2=O(\ell\cdot 2^{d\cdot\log d})$ equations and
variables.
## Appendix D Proofs and additional material for Sec. 5
See 5.1
###### Proof D.1.
We adapt a proof by Giesbrecht given in the case when $R$ is a field, for
which there even is a _least_ common left multiple [25, Sec. 2] (cf. also
[43, Sec. 2]). We consider the more general case where $R$ is a ring, in which
case we will not have any minimality guarantee for the common left multiple.
We first prove that $R[\partial;\sigma]$ has pseudo-division. Consider the
nonzero skew polynomials
$A=a_{m}\cdot\partial^{m}+\cdots+a_{0}\quad\text{and}\quad
B=b_{n}\cdot\partial^{n}+\cdots+b_{0}$
where $m\geq n$. Let $R_{0}=A$. The leading term of $B$ is
$b_{n}\cdot\partial^{n}$ and thus the leading term of $\partial^{m-n}\cdot B$
is $\partial^{m-n}\cdot
b_{n}\cdot\partial^{n}=\sigma^{m-n}(b_{n})\cdot\partial^{m}$. Since $R$ is
CLM, there are $a_{0}^{\prime}$ and $b_{0}^{\prime}$ s.t. $a_{0}^{\prime}\cdot
a_{m}=b_{0}^{\prime}\cdot\sigma^{m-n}(b_{n})$. Therefore,
$R_{1}:=a_{0}^{\prime}\cdot R_{0}-b_{0}^{\prime}\cdot\partial^{m-n}\cdot B$
has degree strictly less than $m_{0}:=m=\deg{R_{0}}$. We repeat this operation
obtaining a sequence of remainders:
$\displaystyle a_{0}^{\prime}\cdot R_{0}$
$\displaystyle=b_{0}^{\prime}\cdot\partial^{m_{0}-n}\cdot B+R_{1},$
$\displaystyle a_{1}^{\prime}\cdot R_{1}$
$\displaystyle=b_{1}^{\prime}\cdot\partial^{m_{1}-n}\cdot B+R_{2},$
$\displaystyle\ \ \vdots$ $\displaystyle a_{k-1}^{\prime}\cdot R_{k-1}$
$\displaystyle=b_{k-1}^{\prime}\cdot\partial^{m_{k-1}-n}\cdot B+R_{k},$
$\displaystyle a_{k}^{\prime}\cdot R_{k}$
$\displaystyle=b_{k}^{\prime}\cdot\partial^{m_{k}-n}\cdot B+R_{k+1},$
where $m_{i}:=\deg{R_{i}}$, $R_{i+1}:=a_{i}^{\prime}\cdot
R_{i}-b_{i}^{\prime}\cdot\partial^{m_{i}-n}\cdot B$, and the degrees satisfy
$m_{0}>m_{1}>\cdots>m_{k}>n>m_{k+1}$. By defining
$a=a_{k}^{\prime}a_{k-1}^{\prime}\cdots a_{0}^{\prime}\in R$, taking as
quotient the skew polynomial
$P=b_{k}^{\prime}\cdot\partial^{m_{k}-n}+a_{k}^{\prime}b_{k-1}^{\prime}\cdot\partial^{m_{k-1}-n}+\cdots+a_{k}^{\prime}a_{k-1}^{\prime}\cdots
a_{1}^{\prime}b_{0}^{\prime}\cdot\partial^{m_{0}-n}\in R[\partial;\sigma]$
and as a remainder $Q=R_{k+1}$ we have, as required, $\deg Q<n$ and
$\displaystyle a\cdot A=P\cdot B+Q.$
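The pseudo-division loop can be prototyped over $R=\mathbb{Z}$ with $\sigma=\mathrm{id}$, where the common-left-multiple step $a^{\prime}\cdot a_{m}=b^{\prime}\cdot\sigma^{m-n}(b_{n})$ is realised by cross-multiplication (a sketch under these simplifying assumptions; the function names are ours):

```python
def skew_mul(P, B, sigma):
    # (p_i d^i)(b_j d^j) = p_i * sigma^i(b_j) * d^(i+j); coefficient lists.
    out = [0] * (len(P) + len(B) - 1)
    for i, p in enumerate(P):
        for j, b in enumerate(B):
            s = b
            for _ in range(i):
                s = sigma(s)
            out[i + j] += p * s
    return out

def pseudo_divide(A, B, sigma=lambda x: x):
    # Returns (a, P, Q) with a*A = P*B + Q and deg Q < deg B, over R = ZZ.
    n = len(B) - 1
    a, P, R = 1, [0] * max(len(A) - n, 1), list(A)
    while len(R) - 1 >= n:
        m = len(R) - 1
        lb = B[n]
        for _ in range(m - n):
            lb = sigma(lb)               # sigma^(m-n)(b_n)
        a0, b0 = lb, R[m]                # cross-multipliers: a0*R[m] = b0*lb
        a *= a0
        P = [a0 * c for c in P]
        P[m - n] += b0
        S = skew_mul([0] * (m - n) + [b0], B, sigma)
        R = [a0 * r - S[i] for i, r in enumerate(R)]
        while len(R) > 1 and R[-1] == 0:  # the degree strictly drops
            R.pop()
        if R == [0]:
            break
    return a, P, R

a, P, Q = pseudo_divide([1, 0, 1], [1, 2])   # A = 1 + d^2, B = 1 + 2d
assert (a, P, Q) == (4, [-1, 2], [5])        # 4*A = (-1 + 2d)*B + 5
```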
We now show that $R[\partial;\sigma]$ has the CLM property. To this end, let
$A_{1},A_{2}\in R[\partial;\sigma]$ with $\deg{A_{1}}\geq\deg{A_{2}}$ be
given. We apply the pseudo-division algorithm above to obtain the sequence
$\displaystyle a_{1}\cdot A_{1}$ $\displaystyle=Q_{1}\cdot A_{2}+A_{3},$
$\displaystyle a_{2}\cdot A_{2}$ $\displaystyle=Q_{2}\cdot A_{3}+A_{4},$
$\displaystyle\ \ \vdots$ $\displaystyle a_{k-2}\cdot A_{k-2}$
$\displaystyle=Q_{k-2}\cdot A_{k-1}+A_{k},$ $\displaystyle a_{k-1}\cdot
A_{k-1}$ $\displaystyle=Q_{k-1}\cdot A_{k}+A_{k+1},$
with $a_{1},\ldots,a_{k-1}\in R$, $A_{k+1}=0$, and the degrees of the
$A_{i}$’s are strictly decreasing:
$\deg{A_{2}}>\deg{A_{3}}>\cdots>\deg{A_{k}}$. Consider the following two
sequences of skew polynomials
$\displaystyle S_{1}=1,\quad S_{2}=0,\quad S_{i}=a_{i-2}\cdot
S_{i-2}-Q_{i-2}\cdot S_{i-1},\textrm{ and }$ $\displaystyle T_{1}=0,\quad
T_{2}=1,\quad T_{i}=a_{i-2}\cdot T_{i-2}-Q_{i-2}\cdot T_{i-1}.$
It can easily be verified that $S_{i}\cdot A_{1}+T_{i}\cdot A_{2}=A_{i}$ for
every $1\leq i\leq k+1$: The base cases $i=1$ and $i=2$ are clear;
inductively, we have
$\displaystyle S_{i}A_{1}+T_{i}A_{2}$ $\displaystyle=(a_{i-2}\cdot
S_{i-2}-Q_{i-2}\cdot S_{i-1})A_{1}+(a_{i-2}\cdot T_{i-2}-Q_{i-2}\cdot
T_{i-1})A_{2}=$
$\displaystyle=a_{i-2}(S_{i-2}A_{1}+T_{i-2}A_{2})-Q_{i-2}(S_{i-1}A_{1}+T_{i-1}A_{2})=$
$\displaystyle=a_{i-2}A_{i-2}-Q_{i-2}A_{i-1}=A_{i}.$
In particular, at the end $S_{k+1}\cdot A_{1}+T_{k+1}\cdot A_{2}=0$, as
required.
It remains to check that $S_{k+1}$ is nonzero. We show the stronger property
that $\deg{S_{i}}=\deg{A_{2}}-\deg{A_{i-1}}$ for every $3\leq i\leq k+1$. The
base case $i=3$ is clear. For the inductive step, notice that
$\deg{Q_{i-2}}=\deg{A_{i-2}}-\deg{A_{i-1}}>0$. Thus $\deg({Q_{i-2}}\cdot
S_{i-1})=\deg{A_{i-2}}-\deg{A_{i-1}}+\deg{A_{2}}-\deg{A_{i-2}}=\deg{A_{2}}-\deg{A_{i-1}}$.
Moreover, $\deg(a_{i-2}\cdot S_{i-2})=\deg
S_{i-2}=\deg{A_{2}}-\deg{A_{i-3}}<\deg{A_{2}}-\deg{A_{i-2}}$. Thus,
$\deg{S_{i}}=\deg({Q_{i-2}}\cdot S_{i-1})=\deg{A_{2}}-\deg{A_{i-1}}$, as
required.
See 5.4
###### Proof D.2.
We recall Lagrange’s classical bound on the roots of univariate polynomials.
###### Theorem D.3 (Lagrange, 1769).
The roots of a complex polynomial $p(z)=\sum_{i=0}^{d}a_{i}\cdot z^{i}$ of
degree $d$ are bounded by $1+\sum_{0\leq i\leq d-1}\frac{|a_{i}|}{|a_{d}|}$.
In particular, the maximal root of a polynomial $p(k)\in\mathbb{Q}[k]$ with
integral coefficients is at most $1+d\cdot\max_{i}|a_{i}|$.
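A quick numerical check of the bound (the example polynomial is ours):

```python
def lagrange_bound(coeffs):
    # coeffs = [a_0, ..., a_d]; Lagrange bound 1 + sum_{i<d} |a_i| / |a_d|.
    return 1 + sum(abs(c) for c in coeffs[:-1]) / abs(coeffs[-1])

# p(k) = k^2 - 5k + 6 = (k - 2)(k - 3), with roots 2 and 3.
assert lagrange_bound([6, -5, 1]) == 12
assert max(2, 3) <= lagrange_bound([6, -5, 1])
```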
By Theorem D.3, the largest root of the leading polynomial coefficient
$p_{i^{*},j^{*}}(k)$ is $\leq
1+\deg_{k}p_{i^{*},j^{*}}\cdot\left|{p_{i^{*},j^{*}}}\right|_{\infty}<2+e\cdot
h$ and similarly the roots of all the leading polynomial coefficients of the
cancelling relations for the sections $f(0,n),\dots,f(i^{*},n)$ are $<2+e\cdot
h$. In the following, let
$\displaystyle K=2+j^{*}+e\cdot h.$
###### Claim 5.
The one-dimensional section $f(n,L)\in\mathbb{Q}^{\mathbb{N}}$ for a fixed
$L\geq 0$ is identically zero if, and only if,
$f(0,L)=f(1,L)=\cdots=f(m\cdot(L+3),L)=0$.
###### Proof D.4 (Proof of the claim).
The “only if” direction is obvious. By Lemma 2.2, for any fixed
$L\in\mathbb{N}$ the 1-dimensional $L$-section $f(n,L)$ is linrec of order
$\leq m\cdot(L+3)$. In fact, it is C-recursive of the same order since the
coefficients do not depend on $n$ and are thus constants. It follows that if
$f(0,L)=f(1,L)=\cdots=f(m\cdot(L+3),L)=0$, then in fact $f(n,L)=0$ for every
$n\in\mathbb{N}$ (cf. the proof of Lemma A.1).
###### Claim 6.
The one-dimensional section $f(M,k)\in\mathbb{Q}^{\mathbb{N}}$ for a fixed
$0\leq M\leq i^{*}$ is identically zero if, and only if,
$f(M,0)=f(M,1)=\cdots=f(M,d+e\cdot h)=0$.
###### Proof D.5 (Proof of the claim).
The “only if” direction is obvious. By assumption, $f(M,k)$ admits a
cancelling relation (CR-1) of $\partial_{2}$-degree $\ell^{*}\leq d$ and
leading polynomial coefficient $q_{\ell^{*}}(k)$ of degree $\leq e$ and height
$\leq h$. By Theorem D.3, the roots of $q_{\ell^{*}}(k)$ are bounded by
$O(e\cdot h)$. It follows that if $f(M,0)=f(M,1)=\cdots=f(M,d+e\cdot h)=0$
then $f(M,n)$ is identically zero.
###### Claim 7.
$f=0$ if, and only if, all the one-dimensional sections
$f(n,0),\dots,f(n,K),f(0,k),\dots,f(i^{*},k)\in\mathbb{Q}^{\mathbb{N}}$
are identically zero.
###### Proof D.6 (Proof of the claim).
The “only if” direction is obvious. For the “if” direction, assume all the
sections above are identically zero as one-dimensional sequences. By way of
contradiction, let $(n,k)$ be the pair of indices which is minimal for the
lexicographic order s.t. $f(n,k)\neq 0$. By assumption, we necessarily have
$n>i^{*}$ and $k>K$. By (CR-2) we have
$\displaystyle p_{i^{*},j^{*}}(k-j^{*})\cdot
f(n,k)=\sum_{(i,j)<_{\text{lex}}(i^{*},j^{*})}p_{i,j}(n-i^{*},k-j^{*})\cdot
f(n-(i^{*}-i),k-(j^{*}-j)).$
Since $k>K$, we have $k-j^{*}>K-j^{*}=2+e\cdot h$, and thus
$p_{i^{*},j^{*}}(k-j^{*})\neq 0$, since the largest root of $p_{i^{*},j^{*}}$
is $<2+e\cdot h$. Consequently, there exists
$(i,j)<_{\text{lex}}(i^{*},j^{*})$ s.t. $f(n-(i^{*}-i),k-(j^{*}-j))\neq 0$,
which contradicts the minimality of $(n,k)$.
By putting together the three claims above it follows that $f$ is identically
zero if, and only if, $f$ is zero on the set of inputs
$\displaystyle\\{0,\dots,m\cdot(K+3)\\}\times\\{0,\dots,K\\}\cup\\{0,\dots,i^{*}\\}\times\\{0,\dots,d+e\cdot
h\\}.$
Let $N=1+\max\\{m\cdot(K+3),i^{*}\\}$ and $K^{\prime}=1+\max\\{K,d+e\cdot
h\\}$. The condition above can be verified by computing $O(N\cdot K^{\prime})$
values for $f(n,k)$, each of which can be done in deterministic time
$\tilde{O}({m\cdot N\cdot K^{\prime}})$ thanks to Lemma 2.1, together yielding
$\tilde{O}({m\cdot N^{2}\cdot(K^{\prime})^{2}})$ which is
$\tilde{O}({p(m,i^{*},j^{*},d,e,h)})$ for a suitable polynomial $p$.
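The finite test grid can be transcribed directly (a sketch; parameter names follow the proof):

```python
def zeroness_test_grid(m, i_star, j_star, d, e, h):
    # Finite grid {0..N-1} x {0..K'-1} of inputs on which f must be checked.
    K = 2 + j_star + e * h
    N = 1 + max(m * (K + 3), i_star)
    K_prime = 1 + max(K, d + e * h)
    return N, K_prime

assert zeroness_test_grid(1, 1, 1, 1, 1, 1) == (8, 5)
```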
See 5.5
###### Proof D.7.
We interpret the system of equations (5) as the following linear system of
equations with coefficients $P_{i,j}\in W_{2}$.
$\displaystyle\left\\{\begin{array}[]{rcl}P_{1,1}\cdot
f_{1}+\cdots+P_{1,m}\cdot f_{m}&=&0,\\\ &\vdots&\\\ P_{m,1}\cdot
f_{1}+\cdots+P_{m,m}\cdot f_{m}&=&0.\end{array}\right.$ (20)
The idea is to eliminate the variables $f_{m},\dots,f_{2}$ from (20) until a
CR for $f_{1}$ remains. W.l.o.g., we show how to remove the last variable
$f_{m}$. The skew polynomial coefficients of $f_{m}$ in equations $1,\dots,m$
are $P_{1,m},\dots,P_{m,m}\in W_{2}$. By $m$ applications of Corollary 5.3, we
can find left multipliers $Q_{1},\dots,Q_{m}\in W_{2}$ s.t. $Q_{1}\cdot
P_{1,m}=Q_{2}\cdot P_{2,m}=\cdots=Q_{m}\cdot P_{m,m}$. We obtain the new
system not containing $f_{m}$
$\displaystyle\left\\{\begin{array}[]{rrrcl}(Q_{1}P_{1,1}-Q_{m}P_{m,1})\cdot
f_{1}+\cdots&+(Q_{1}P_{1,m-1}-Q_{m}P_{m,m-1})\cdot f_{m-1}&=&0,\\\
&&&\vdots&\\\ (Q_{m-1}P_{m-1,1}-Q_{m}P_{m,1})\cdot
f_{1}+\cdots&+(Q_{m-1}P_{m-1,m-1}-Q_{m}P_{m,m-1})\cdot
f_{m-1}&=&0.\end{array}\right.$
After eliminating all the other variables $f_{m-1},\dots,f_{2}$ in the same
way, we are finally left with an equation $R\cdot f_{1}=0$ with $R\in W_{2}$.
Thanks to a linear-independence argument that will be presented in Lemma E.1,
the operator $R$ is not zero. (Notice that the univariate assumption is not
necessary to carry out the elimination procedure and obtain a cancelling
relation.) Notice that the polynomial coefficients in $R$ are univariate
polynomials in $\mathbb{Q}[k]$. Let $p_{i^{*},j^{*}}(k)$ be the leading
polynomial coefficient of $R$ when put in the form (CR-2). By an analogous elimination
argument we can find cancelling relations $R_{1},\dots,R_{i^{*}}$ for each of
the one-dimensional sections
$f(0,k),\dots,f(i^{*},k)\in\mathbb{Q}^{\mathbb{N}}$ (which are effectively
one-dimensional linrec sequences by Lemma 2.2) respectively. We then conclude
by Lemma 5.4.
The elimination algorithm presented so far suffices to decide the
universality, inclusion, and equivalence problems for unambiguous register
automata without guessing.
###### Corollary D.8.
The universality and equivalence problems for unambiguous register automata
without guessing are decidable. The inclusion problem $L(A)\subseteq L(B)$ for
register automata without guessing is decidable when $B$ is unambiguous.
Notice that in the inclusion problem $L(A)\subseteq L(B)$ we do not assume
that $A$ is unambiguous.
###### Proof D.9.
By Lemma 3.4, inclusion and equivalence reduce to universality. By Lemma 4.2,
the universality problem reduces to the zeroness problem of the sequence $G$
from Figure 2, which is linrec by its definition and Lemma 4.3. Since the
polynomial coefficients in Figure 2 are univariate, we can decide zeroness of
$G$ by Theorem 5.5.
### D.1 CLM examples
In this section we illustrate the CLM property with two examples, the first
for $W_{1}$ and the second for $W_{2}$.
###### Example D.10.
We give an example of application of the CLM property in $W_{1}$. Consider the
two polynomials $F_{1}=\partial_{1}^{2}-(k+1)\partial_{1}$ and
$F_{2}=-\partial_{1}^{2}+\partial_{1}$. Since $k$ and $\partial_{1}$ commute,
$F_{2}\cdot F_{1}=F_{1}\cdot F_{2}$ is a trivial common left multiple whose
multipliers have degree $2$. The CLM algorithm finds multipliers of degree $1$:
$\displaystyle 1\cdot F_{1}$ $\displaystyle=(-1)\cdot F_{2}+F_{3}$
$\displaystyle\textrm{with }F_{3}=-k\partial_{1},$ $\displaystyle k\cdot
F_{2}$ $\displaystyle=\partial_{1}\cdot F_{3}+F_{4}$
$\displaystyle\textrm{with }F_{4}=k\partial_{1},$ $\displaystyle 1\cdot F_{3}$
$\displaystyle=(-1)\cdot F_{4}.$
We have $s_{1}=1,s_{2}=0,s_{3}=1,s_{4}=-\partial_{1},s_{5}=-\partial_{1}+1$
and $t_{1}=0,t_{2}=1,t_{3}=1,t_{4}=k-\partial_{1},t_{5}=k-\partial_{1}+1$.
We can thus verify that $s_{5}\cdot F_{1}=-t_{5}\cdot F_{2}$.
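The pseudo-divisions above can be checked numerically by representing elements of $W_{1}$ as operators acting on bivariate sequences, with $\partial_{1}$ the shift $f(n,k)\mapsto f(n+1,k)$ and $k$ acting by pointwise multiplication (a sketch; the sequence `f` below is an arbitrary test case):

```python
# Elements of W_1 as operators on sequences f(n, k):
# d1 shifts n, the coefficient k multiplies pointwise.
def F1(f): return lambda n, k: f(n + 2, k) - (k + 1) * f(n + 1, k)  # d1^2 - (k+1) d1
def F2(f): return lambda n, k: -f(n + 2, k) + f(n + 1, k)           # -d1^2 + d1
def F3(f): return lambda n, k: -k * f(n + 1, k)                     # -k d1
def F4(f): return lambda n, k: k * f(n + 1, k)                      # k d1

def agree(g, h, pts):  # compare two sequences on sample points
    return all(g(n, k) == h(n, k) for n, k in pts)

f = lambda n, k: 3 ** n + n * n * k + 7 * k  # arbitrary test sequence
pts = [(n, k) for n in range(6) for k in range(6)]

# 1 * F1 = (-1) * F2 + F3
assert agree(F1(f), lambda n, k: -F2(f)(n, k) + F3(f)(n, k), pts)
# k * F2 = d1 * F3 + F4
assert agree(lambda n, k: k * F2(f)(n, k),
             lambda n, k: F3(f)(n + 1, k) + F4(f)(n, k), pts)
# 1 * F3 = (-1) * F4
assert agree(F3(f), lambda n, k: -F4(f)(n, k), pts)
# the resulting relation: (-d1 + 1) . F1 = (d1 - k - 1) . F2
assert agree(lambda n, k: -F1(f)(n + 1, k) + F1(f)(n, k),
             lambda n, k: F2(f)(n + 1, k) - (k + 1) * F2(f)(n, k), pts)
```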
###### Example D.11.
We give an example of the CLM property in $W_{2}$. Consider the skew polynomials
$G_{1}=(-\partial_{1}^{2}+\partial_{1})\partial_{2}^{2}$ and
$G_{2}=(\partial_{1}^{2}-k\partial_{1})\partial_{2}-\partial_{1}$. Since
$\partial_{2}G_{2}=(\partial_{1}^{2}-(k+1)\partial_{1})\partial_{2}^{2}-\partial_{1}\partial_{2}$,
thanks to Example D.10 we have
$\displaystyle(\partial_{1}-k-1)\cdot G_{1}$
$\displaystyle=(-\partial_{1}+1)\partial_{2}\cdot G_{2}+G_{3},$
$\displaystyle\textrm{with
}G_{3}=(-\partial_{1}^{2}+\partial_{1})\partial_{2},$
which gives the first pseudo-division. Analogously, since
$(-\partial_{1}+1)\cdot(\partial_{1}^{2}-k\partial_{1})=(\partial_{1}-k)\cdot(-\partial_{1}^{2}+\partial_{1})$,
we have the second and third pseudo-divisions
$\displaystyle(-\partial_{1}+1)\cdot G_{2}$
$\displaystyle=(\partial_{1}-k)\cdot G_{3}+G_{4},$ $\displaystyle\textrm{with
}G_{4}=\partial_{1}^{2}-\partial_{1},$ $\displaystyle 1\cdot G_{3}$
$\displaystyle=-\partial_{2}\cdot G_{4}.$
We thus have
$\displaystyle s_{1}=1,\qquad s_{2}=0,\qquad s_{3}=\partial_{1}-k-1,\qquad s_{4}=-(\partial_{1}-k)\cdot(\partial_{1}-k-1),$
$\displaystyle s_{5}=(\partial_{1}-k-1)-\partial_{2}\cdot(\partial_{1}-k)\cdot(\partial_{1}-k-1)=(\partial_{1}-k-1)-(\partial_{1}-k-1)\cdot(\partial_{1}-k-2)\partial_{2}$
and
$\displaystyle t_{1}=0,\qquad t_{2}=1,\qquad t_{3}=-(-\partial_{1}+1)\partial_{2},\qquad t_{4}=(-\partial_{1}+1)+(\partial_{1}-k)\cdot(-\partial_{1}+1)\partial_{2},$
$\displaystyle t_{5}=-(-\partial_{1}+1)\partial_{2}+\partial_{2}\cdot((-\partial_{1}+1)+(\partial_{1}-k)\cdot(-\partial_{1}+1)\partial_{2})=(\partial_{1}-k-1)\cdot(-\partial_{1}+1)\partial_{2}^{2}.$
One can check that $s_{5}\cdot G_{1}=-t_{5}\cdot G_{2}$.
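As a sanity check, the first pseudo-division can be verified numerically by interpreting the skew polynomials as operators on bivariate sequences, with $\partial_{1},\partial_{2}$ the shifts in $n,k$ and $k$ acting by multiplication on the left (a sketch with an arbitrary test sequence):

```python
f = lambda n, k: (n + 1) ** 2 * 2 ** k + 5 * n * k  # arbitrary test sequence

# operators: d1 shifts n, d2 shifts k, coefficients in k multiply on the left
def G1(f): return lambda n, k: -f(n + 2, k + 2) + f(n + 1, k + 2)  # (-d1^2+d1) d2^2
def G2(f): return lambda n, k: f(n + 2, k + 1) - k * f(n + 1, k + 1) - f(n + 1, k)
def G3(f): return lambda n, k: -f(n + 2, k + 1) + f(n + 1, k + 1)  # (-d1^2+d1) d2

def lhs(n, k):  # (d1 - k - 1) . G1
    g = G1(f)
    return g(n + 1, k) - (k + 1) * g(n, k)

def rhs(n, k):  # (-d1 + 1) d2 . G2 + G3
    h = G2(f)
    return -h(n + 1, k + 1) + h(n, k + 1) + G3(f)(n, k)

assert all(lhs(n, k) == rhs(n, k) for n in range(6) for k in range(6))
```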
### D.2 CR examples
In this section we present detailed examples of CR.
###### Example D.12.
We continue our running Example 5.2. Recall the starting equations:
$\displaystyle\begin{array}[]{rrrrrrrr}\partial_{1}\partial_{2}\cdot
G_{p}&&&&=0,\\\ -(1+(k+1)\partial_{2})\cdot
G_{p}&+(\partial_{1}\partial_{2}-k\partial_{2}-1)\cdot G_{q}&&&=0,\\\
-(1+(k+1)\partial_{2})\cdot
G_{p}&&+(\partial_{1}\partial_{2}-\partial_{2})\cdot G_{r}&&=0,\\\
&-\partial_{2}\cdot G_{q}&-(1+k\partial_{2})\cdot
G_{r}&+\partial_{1}\partial_{2}\cdot G_{s}&=0,\end{array}$
$\displaystyle(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)\cdot S_{1}=0,$
$\displaystyle G_{s}-S_{1}+G=0.$
In order to eliminate $G_{p}$, we need to find a common left multiple of
$a_{0}=\partial_{1}\partial_{2}$ and $b_{0}=1+(k+1)\partial_{2}$, i.e., we
need to find skew polynomials $c,d$ s.t. $c\cdot a_{0}=d\cdot b_{0}$. It can
be verified that taking $c=1+(k+2)\partial_{2}$ and
$d=\partial_{1}\partial_{2}$ fits the bill. We thus remove the first equation
and left-multiply by $d$ the second and third equations (with $S_{1}=S$ for
simplicity from now on):
$\displaystyle\begin{array}[]{rrrrrrr}\underbrace{(\partial_{1}^{2}\partial_{2}^{2}-(k+1)\partial_{1}\partial_{2}^{2}-\partial_{1}\partial_{2})}_{a_{1}}\cdot
G_{q}&&&=0,\\\
&+(\partial_{1}^{2}\partial_{2}^{2}-\partial_{1}\partial_{2}^{2})\cdot
G_{r}&&=0,\\\ -\underbrace{\partial_{2}}_{b_{1}}\cdot
G_{q}&-(1+k\partial_{2})\cdot G_{r}&+\partial_{1}\partial_{2}\cdot
G_{s}&=0,\end{array}$
$\displaystyle(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)\cdot S=0,$
$\displaystyle G_{s}-S+G=0.$
We now remove $G_{q}$. Since its coefficient
$a_{1}=\partial_{1}^{2}\partial_{2}^{2}-(k+1)\partial_{1}\partial_{2}^{2}-\partial_{1}\partial_{2}$
in the first equation is already a left multiple of its coefficient
$b_{1}=\partial_{2}$ in the third equation, it suffices to remove the first
equation and left-multiply the third equation by
“$\partial_{1}^{2}\partial_{2}-(k+1)\partial_{1}\partial_{2}-\partial_{1}$”:
$\displaystyle\begin{array}[]{rrrrrr}\underbrace{(\partial_{1}^{2}\partial_{2}^{2}-\partial_{1}\partial_{2}^{2})}_{a_{2}}\cdot
G_{r}&&=0,\\\
-\underbrace{(\partial_{1}^{2}\partial_{2}-(k+1)\partial_{1}\partial_{2}-\partial_{1})(1+k\partial_{2})}_{b_{2}}\cdot
G_{r}&+(\partial_{1}^{2}\partial_{2}-(k+1)\partial_{1}\partial_{2}-\partial_{1})\partial_{1}\partial_{2}\cdot
G_{s}&=0,\end{array}$
$\displaystyle(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)\cdot S=0,$
$\displaystyle G_{s}-S+G=0.$
We now remove $G_{r}$, and thus we need to find a CLM of
$a_{2}=\partial_{1}^{2}\partial_{2}^{2}-\partial_{1}\partial_{2}^{2}=(\partial_{1}-1)\partial_{1}\partial_{2}^{2}$
and
$b_{2}=(\partial_{1}^{2}\partial_{2}-(k+1)\partial_{1}\partial_{2}-\partial_{1})(1+k\partial_{2})=(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)(1+k\partial_{2})\partial_{1}$.
It can be checked that for $d=(\partial_{1}-1)\partial_{2}^{2}$ there exists
some $c$ (whose exact value is not relevant here) s.t. $c\cdot a_{2}=d\cdot
b_{2}$. We can thus remove the first equation and left-multiply the second one
by $d$:
$\displaystyle\underbrace{(\partial_{1}-1)\partial_{2}^{2}(\partial_{1}^{2}\partial_{2}-(k+1)\partial_{1}\partial_{2}-\partial_{1})\partial_{1}\partial_{2}}_{a_{3}}\cdot
G_{s}$ $\displaystyle=0,$
$\displaystyle(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)\cdot S$
$\displaystyle=0,$ $\displaystyle G_{s}-S+G$ $\displaystyle=0.$
We can now immediately remove $G_{s}$ by left-multiplying the last equation by
its coefficient $a_{3}$ in the first equation:
$\displaystyle\underbrace{(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)}_{b_{3}}\cdot
S$ $\displaystyle=0,$
$\displaystyle\underbrace{(\partial_{1}-1)\partial_{2}^{2}(\partial_{1}^{2}\partial_{2}-(k+1)\partial_{1}\partial_{2}-\partial_{1})\partial_{1}\partial_{2}}_{a_{3}}\cdot(-S+G)$
$\displaystyle=0.$
In order to finish it remains to remove $S$. The general approach is to find a
CLM of $a_{3}$ and $b_{3}$, but we would like to avoid performing too many
calculations here. Since $b_{3}\cdot S=0$, we also have
$b_{3}\partial_{1}^{2}\partial_{2}\cdot S=0$ (since
$\partial_{1}^{2}\partial_{2}\cdot S$ is just a shifted version of $S$). Since
$a_{3}$ can be written as
$a_{3}=(\partial_{1}-1)\partial_{2}^{2}(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)\partial_{1}^{2}\partial_{2}=(\partial_{1}-1)\partial_{2}^{2}\cdot
b_{3}\cdot\partial_{1}^{2}\partial_{2}$, it follows that $a_{3}\cdot S=0$ and
we immediately have
$\displaystyle\underbrace{(\partial_{1}-1)\partial_{2}^{2}(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)\partial_{1}^{2}\partial_{2}}_{a_{4}}\cdot
G$ $\displaystyle=0.$
Since $a_{4}$ can be expanded as a sum of products,
$\displaystyle a_{4}$
$\displaystyle=(\partial_{1}-1)\partial_{2}^{2}(\partial_{1}\partial_{2}-(k+1)\partial_{2}-1)\partial_{1}^{2}\partial_{2}=$
$\displaystyle=(\partial_{1}\partial_{2}-(k+3)\partial_{2}-1)\partial_{1}^{2}(\partial_{1}-1)\partial_{2}^{3}=$
$\displaystyle=\partial_{1}^{4}\partial_{2}^{4}-(k+3)\partial_{1}^{3}\partial_{2}^{4}-\partial_{1}^{3}\partial_{2}^{3}-\partial_{1}^{3}\partial_{2}^{4}+(k+3)\partial_{1}^{2}\partial_{2}^{4}+\partial_{1}^{2}\partial_{2}^{3}=$
$\displaystyle=\partial_{1}^{4}\partial_{2}^{4}-(k+4)\partial_{1}^{3}\partial_{2}^{4}-\partial_{1}^{3}\partial_{2}^{3}+(k+3)\partial_{1}^{2}\partial_{2}^{4}+\partial_{1}^{2}\partial_{2}^{3},$
the sought cancelling relation for $G$, obtained by expanding the equation
above, is
$\displaystyle G(n+4,k+4)=$ $\displaystyle\ (k+4)\cdot
G(n+3,k+4)+G(n+3,k+3)\;+$ $\displaystyle-(k+3)\cdot G(n+2,k+4)-G(n+2,k+3).$
###### Example D.13.
We show a CR example coming from a two-register deterministic automaton. There
are three control locations $p,q,r$, which are all accepting and $p$ is
initial. When going from $p$ to $q$ the automaton stores the input in its
first register $x_{1}$. When going from $q$ to $r$, the automaton checks that
the input is different from what is stored in $x_{1}$ and stores it in
$x_{2}$. Then the automaton loops from $r$ to $r$ by reading an input
$y$ different from both registers and updating $x_{1}^{\prime}=x_{2}$ and
$x_{2}^{\prime}=y$. In this way the automaton accepts all words s.t. any three
consecutive data values are pairwise distinct. We have the counting equations:
$\displaystyle G_{p}(n+1,k+1)$ $\displaystyle=0,$ $\displaystyle
G_{q}(n+1,k+1)$ $\displaystyle=G_{p}(n,k)+(k+1)\cdot G_{q}(n,k+1),$
$\displaystyle G_{r}(n+1,k+1)$ $\displaystyle=G_{q}(n,k)+k\cdot
G_{q}(n,k+1)+G_{r}(n,k)+(k-1)\cdot G_{r}(n,k+1),$ $\displaystyle G(n,k)$
$\displaystyle=S(n,k)-G_{p}(n,k)-G_{q}(n,k)-G_{r}(n,k).$
We find the following CR:
$\displaystyle\begin{array}[]{rl}G(n+4,k+3)\ =&(2k+4)\cdot G(n+3,k+3)+2\cdot
G(n+3,k+2)\ +\\\ &-(k^{2}+4k+3)\cdot G(n+2,k+3)\ +\\\ &-(2k+3)\cdot
G(n+2,k+2)-G(n+2,k+1).\end{array}$ (24)
In the last example we consider an automaton which is almost universal.
###### Example D.14.
Consider the following register automaton $A$ with one register $x$ over a
unary finite alphabet ($\left|\Sigma\right|=1$). There are four control locations
$p,q,r,s$ of which $p$ is initial and $s$ is final. The automaton accepts all
words of length $\geq 2$ by unambiguously guessing whether or not the last two
letters are equal. The transitions are $p\xrightarrow{x=\bot\land
x^{\prime}=\bot}p$, $p\xrightarrow{x=\bot\land x^{\prime}=y}q$,
$p\xrightarrow{x=\bot\land x^{\prime}=y}r$, $q\xrightarrow{x=y\land
x^{\prime}=x}s$, $r\xrightarrow{x\neq y\land x^{\prime}=x}s$. Equations:
$\displaystyle G_{p}(n+1,k+1)$ $\displaystyle=G_{p}(n,k)+(k+1)\cdot
G_{p}(n,k+1),$ $\displaystyle G_{q}(n+1,k+1)$
$\displaystyle=G_{p}(n,k)+(k+1)\cdot G_{p}(n,k+1)=G_{p}(n+1,k+1),$
$\displaystyle G_{r}(n+1,k+1)$ $\displaystyle=G_{p}(n,k)+(k+1)\cdot
G_{p}(n,k+1)=G_{p}(n+1,k+1),$ $\displaystyle G_{s}(n+1,k+1)$
$\displaystyle=G_{q}(n,k+1)+G_{r}(n,k)+k\cdot G_{r}(n,k+1)=$
$\displaystyle=(k+1)G_{p}(n,k+1)+G_{p}(n,k),$ $\displaystyle G(n,k)$
$\displaystyle=S(n,k)-G_{s}(n,k).$
We find the following CR:
$\displaystyle G(n+3,k+3)=G(n+2,k+2)+(k+3)\cdot G(n+2,k+3).$ (25)
Thanks to the relation above, after manually checking that
$G(2,0)=G(2,1)=G(2,2)=0$ we can conclude that $G(n,k)=0$ for every $n,k\geq
2$. Indeed, the automaton accepts all words of length $\geq 2$.
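The relation (25) can be sanity-checked numerically by filling finite grids according to the counting equations, with arbitrary boundary values on the row $n=0$ and the column $k=0$ (a sketch; we assume here, as for $S_{1}$ in Example D.12, that $S$ satisfies $S(n+1,k+1)=(k+1)\cdot S(n,k+1)+S(n,k)$):

```python
import random

random.seed(1)
N = K = 9
def grid():
    return [[random.randint(-4, 4) for _ in range(K + 2)] for _ in range(N + 2)]
Gp, Gq, Gr, Gs, S = grid(), grid(), grid(), grid(), grid()

for n in range(N + 1):
    for k in range(K + 1):
        Gp[n + 1][k + 1] = Gp[n][k] + (k + 1) * Gp[n][k + 1]
        Gq[n + 1][k + 1] = Gp[n][k] + (k + 1) * Gp[n][k + 1]
        Gr[n + 1][k + 1] = Gp[n][k] + (k + 1) * Gp[n][k + 1]
        Gs[n + 1][k + 1] = Gq[n][k + 1] + Gr[n][k] + k * Gr[n][k + 1]
        S[n + 1][k + 1] = S[n][k] + (k + 1) * S[n][k + 1]  # assumed recurrence for S

G = [[S[n][k] - Gs[n][k] for k in range(K + 2)] for n in range(N + 2)]

# check the CR (25) on the interior of the grid
for n in range(N - 1):
    for k in range(K - 1):
        assert G[n + 3][k + 3] == G[n + 2][k + 2] + (k + 3) * G[n + 2][k + 3]
```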
## Appendix E Hermite forms
In this section we present an elimination algorithm based on the computation
of the Hermite normal form for matrices of skew polynomials. An easy but
important observation in order to get good bounds is that the first Weyl
algebra $W_{1}=\mathbb{Q}[k][\partial_{1};\sigma_{1}]$ from Sec. 5 is in fact
isomorphic to the (commutative) ring of bivariate polynomials
$\mathbb{Q}[k,\partial_{1}]$. In places where we need to obtain good
complexity bounds, we will use $W_{1}^{\prime}$ instead of $W_{1}$ and
$W_{2}^{\prime}$ instead of $W_{2}$, where
$\displaystyle W_{1}^{\prime}=\mathbb{Q}[k,\partial_{1}]\quad\text{and}\quad
W_{2}^{\prime}=W_{1}^{\prime}[\partial_{2};\sigma_{2}]=\mathbb{Q}[k,\partial_{1}][\partial_{2};\sigma_{2}].$
(26)
A skew polynomial $P\in W_{2}$ (or $W_{2}^{\prime}$) can be written in a
unique way as a finite sum
$\sum_{i,j,l}a_{i,j,l}z^{i}\partial_{1}^{j}\partial_{2}^{l}$ with
$a_{i,j,l}\in\mathbb{Q}$, where $z$ stands for the commutative variable $k$. We
define $\deg_{z}P$ as the largest $i$ s.t. $a_{i,j,l}\not=0$ for some $j,l$;
$\deg_{\partial_{1}}$ and $\deg_{\partial_{2}}$ are defined similarly. The
_combined degree_ $\deg_{\partial_{1}+\partial_{2}}P$ is the largest $j+l$ s.t.
$a_{i,j,l}\not=0$ for some $i$, and similarly for $\deg_{z+\partial_{1}}$. The
_height_ of $P$ is
$\left|{P}\right|_{\infty}=\max_{i,j,k}{\left|{a_{i,j,k}}\right|_{\infty}}$.
#### Rational skew fields.
The improved elimination algorithm does not work in the skew polynomial ring,
but in its rational field extension. To this end we need to introduce skew
fields. A _skew field_ $\mathbb{F}$ is a field where multiplication is not
necessarily commutative [17]. (Skew fields are sometimes called _division
rings_ since they are noncommutative rings where multiplicative inverses
exist.) In the same way as the ring of polynomials $\mathbb{F}[x]$ over a
field $\mathbb{F}$ can be extended to a rational polynomial field
$\mathbb{F}(x)$, a skew polynomial ring $\mathbb{F}[\partial;\sigma]$ over a
skew field $\mathbb{F}$ can be extended to a _rational skew field_
$\mathbb{F}(\partial;\sigma)$. Its elements are formal fractions
$\frac{P}{Q}=Q^{-1}P$ quotiented by $Q^{-1}P\sim S^{-1}R$ if there exist
$A,B\in\mathbb{F}[\partial;\sigma]$ s.t. $A\cdot P=B\cdot R$ and $A\cdot
Q=B\cdot S$. Given $P,Q,R,S\in\mathbb{F}[\partial;\sigma]$ s.t. $S_{1}\cdot
Q=Q_{1}\cdot S$ and $S_{1}\cdot P=P_{1}\cdot S$ for some
$P_{1},S_{1},Q_{1}\in\mathbb{F}[\partial;\sigma]$, we can define the
operations:
$\displaystyle\frac{P}{Q}+\frac{R}{S}=\frac{S_{1}\cdot P+Q_{1}\cdot
R}{S_{1}\cdot
Q},\qquad\frac{P}{Q}\cdot\frac{R}{S}=\frac{P_{1}R}{S_{1}Q},\qquad\left(\frac{P}{Q}\right)^{-1}=\frac{Q}{P}.$
It was shown by O. Ore that this yields a well-defined skew field structure on
$\mathbb{F}(\partial;\sigma)$ and that unique reduced representations
$\frac{P}{Q}$ exist [42]. (Actually, Ore considered formal quotients of the
form $PQ^{-1}$, but we found it more convenient to work with the symmetric
definition.) In our context, we define the skew fields
$\displaystyle\mathbb{F}(W_{1}^{\prime})=\mathbb{Q}(k,\partial_{1})\quad\text{and}\quad\mathbb{F}(W_{2}^{\prime})=\mathbb{F}(W_{1}^{\prime})(\partial_{2};\sigma_{2})=\mathbb{Q}(k,\partial_{1})(\partial_{2};\sigma_{2})$
(27)
associated to the corresponding iterated Weyl algebras $W_{1}^{\prime}$ and
$W_{2}^{\prime}$. Note that $\mathbb{F}(W_{1}^{\prime})$ is in fact just a
rational (commutative) field of bivariate polynomials. For
$R=\frac{P}{Q}\in\mathbb{F}(W_{1}^{\prime})$ or $\mathbb{F}(W_{2}^{\prime})$
written in reduced form, we define
$\left|{R}\right|_{\infty}=\max\\{\left|{P}\right|_{\infty},\left|{Q}\right|_{\infty}\\}$.
#### Non-commutative linear algebra.
Let $\mathbb{F}$ be a skew field. We denote by $\mathbb{F}^{n\times m}$ the
ring of matrices $A$ with $n$ rows and $m$ columns with entries in
$\mathbb{F}$, equipped with the usual matrix operations “$+$” and “$\cdot$”.
The _height_ of $A\in\mathbb{F}^{n\times m}$ is
$\left|{A}\right|_{\infty}=\max_{i,j}\left|{A_{i,j}}\right|_{\infty}$. The
_left $\mathbb{F}$-module_ spanned by the rows of $A=(u_{1},\dots,u_{n})$ is
the set of vectors in $\mathbb{F}^{m}$ of the form $a_{1}\cdot
u_{1}+\cdots+a_{n}\cdot u_{n}$ for some $a_{1},\dots,a_{n}\in\mathbb{F}$. The
_rank_ of $A$ is the dimension of the left $\mathbb{F}$-module spanned by its
rows. In other words, the rank of $A$ is the largest integer $r$ s.t. we can
extract $r$ rows $u_{i_{1}},\ldots,u_{i_{r}}$ that are free: for every
$a_{1},\ldots,a_{r}\in\mathbb{F}$, $a_{1}\cdot u_{i_{1}}+\cdots+a_{r}\cdot
u_{i_{r}}=0$ implies $a_{1}=\cdots=a_{r}=0$. A square matrix
$A\in\mathbb{F}^{n\times n}$ is _non-singular_ if there exists a matrix $B$
such that $A\cdot B=I$, where $I\in\mathbb{F}^{n\times n}$ is the identity
matrix.
The following lemma implies that matrices arising from linrec systems have
full rank. We used this lemma to justify why the elimination algorithm in the
proof of Theorem 5.5 successfully produces a non-zero CR.
###### Lemma E.1.
Let $A\in W_{2}^{n\times n}$, with
$W_{2}=\mathbb{Q}[n,k][\partial_{1};\sigma_{1}][\partial_{2};\sigma_{2}]$,
be a matrix of skew polynomials s.t. the combined degree
$\deg_{\partial_{1}+\partial_{2}}A_{i,i}$ of each diagonal entry is strictly
larger than the combined degree $\deg_{\partial_{1}+\partial_{2}}A_{j,i}$ of
every other entry $j\neq i$ in the same column $i$. Then $A$ has rank $n$.
Indeed, the combined degree of the diagonal entries $\partial_{1}\partial_{2}$ in
a system of linrec equations (5) is $2$, while every other entry has the form
$p(n,k)$, $p(n,k)\cdot\partial_{1}$, or $p(n,k)\cdot\partial_{2}$ with
$p(n,k)\in\mathbb{Q}[n,k]$ and thus has combined degree at most $1$.
###### Proof E.2.
We denote by $A_{i}$ the $i^{\text{th}}$ row of $A$ and write $\deg$ for the
combined degree $\deg_{\partial_{1}+\partial_{2}}$. By contradiction, assume
$A$ does not have full rank. Then there exist rows $A_{i_{1}},\ldots,A_{i_{k}}$
and nonzero coefficients $P_{i_{1}},\ldots,P_{i_{k}}\in W_{2}$ such that:
$P_{i_{1}}\cdot A_{i_{1}}+\cdots+P_{i_{k}}\cdot A_{i_{k}}=0.$
Choose an index $j$ among $i_{1},\ldots,i_{k}$ with $\deg P_{j}$ maximal, and
consider column $j$ of the dependency above. The diagonal term $P_{j}\cdot
A_{j,j}$ has combined degree exactly $\deg P_{j}+\deg A_{j,j}$ (leading terms
cannot cancel, since the coefficients come from an integral domain), while
every other term $P_{i_{r}}\cdot A_{i_{r},j}$ with $i_{r}\neq j$ has combined
degree $\deg P_{i_{r}}+\deg A_{i_{r},j}\leq\deg P_{j}+\deg A_{j,j}-1$ by the
assumption on column $j$. Hence column $j$ of the dependency cannot sum to
zero, a contradiction.
#### Hermite normal forms.
Let $A\in\mathbb{F}[\partial;\sigma]^{n\times n}$ be a skew polynomial square
matrix. Let $\deg_{\partial}A=\max_{i,j}\deg_{\partial}A_{i,j}$. We say that
$A$ is _unimodular_ if it is invertible in
$\mathbb{F}(\partial;\sigma)^{n\times n}$ and moreover the inverse matrix
$A^{-1}$ has coefficients already in the skew polynomial ring
$\mathbb{F}[\partial;\sigma]$. We say that $A$ of rank $r$ is in _Hermite
form_ if a) exactly its first $r$ rows are non-zero, and the first (leading)
non-zero entry in each row satisfies the following conditions: b.1) it is a
monic skew polynomial (its leading coefficient is $1\in\mathbb{F}$), b.2) all
entries below it are zero, and b.3) all entries above it have strictly lower
degree. (In particular, a matrix in Hermite form is upper triangular.) The
_Hermite normal form_ (HNF) of a skew polynomial matrix $A$ of full rank $n$
is the (unique) matrix $H\in\mathbb{F}[\partial;\sigma]^{n\times n}$ in
Hermite form which can be obtained by applying a (also unique) unimodular
transformation $U\in\mathbb{F}[\partial;\sigma]^{n\times n}$ as $H=U\cdot A$.
Existence of $U$ (and thus of $H$) has been shown in [26, Theorem 2.4], and
uniqueness in [26, Theorem 2.5]. The Hermite form $H$ yields directly a
cancelling relationship (CR-2) for the $n$-th linrec variable $f_{n}$, as we
show in the following example. (By reordering the equations, we can get an
analogous relationship for $f_{1}$.)
###### Example E.3.
Consider the following system of linrec equations:
$\displaystyle\left\\{\begin{array}[]{rrr}(\partial_{1}-1)\partial_{2}\cdot
G_{r}&-\partial_{2}\cdot G_{s}&=0,\\\ -(k\partial_{2}+1)\cdot
G_{r}&+\partial_{1}\partial_{2}\cdot G_{s}&=0.\end{array}\right.$
In matrix form we have
$\displaystyle\underbrace{\begin{pmatrix}(\partial_{1}-1)\partial_{2}&-\partial_{2}\\\
-k\partial_{2}-1&\partial_{1}\partial_{2}\end{pmatrix}}_{A\in W_{2}^{2\times
2}}\cdot\underbrace{\begin{pmatrix}G_{r}\\\ G_{s}\end{pmatrix}}_{x}=0.$ (28)
The matrix $A$ above is not in Hermite form; one reason is that
$(\partial_{1}-1)\partial_{2}$ is not monic as a polynomial in $W_{2}$
(because its leading coefficient is $\partial_{1}-1\neq 1$); another reason is
that the entry $-k\partial_{2}-1$ below it is nonzero. We show in Example E.14
that the Hermite form $H=U\cdot A$ of $A$ is
$\displaystyle
H=\begin{pmatrix}1&(\frac{k}{\partial_{1}-1}-\partial_{1})\partial_{2}\\\
0&\partial_{2}^{2}-\frac{1}{\partial_{1}^{2}-\partial_{1}-(k+1)}\partial_{2}\end{pmatrix}.$
This allows us to immediately obtain a cancelling relation for the variable
$G_{s}$ corresponding to the last row. Going back to our initial matrix
equation $A\cdot x=0$, we have $UAx=Hx=0$ where $x=(G_{r}\ G_{s})^{T}$,
yielding
$\displaystyle\left(\partial_{2}^{2}-\frac{1}{\partial_{1}^{2}-\partial_{1}-(k+1)}\partial_{2}\right)\cdot
G_{s}=0.$
By clearing out the denominator (an ordinary bivariate polynomial from
$\mathbb{Q}[k,\partial_{1}]$), we obtain
$\displaystyle((\partial_{1}^{2}-\partial_{1}-(k+1))\cdot\partial_{2}^{2}-\partial_{2})\cdot
G_{s}=(\partial_{1}^{2}\partial_{2}^{2}-\partial_{1}\partial_{2}^{2}-(k+1)\partial_{2}^{2}-\partial_{2})\cdot
G_{s}=0$ (29)
yielding the sought cancelling relation for $G_{s}$ not mentioning any other
sequence:
$\displaystyle G_{s}(n+2,k+2)=G_{s}(n+1,k+2)+(k+1)\cdot
G_{s}(n,k+2)+G_{s}(n,k+1).$
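The cancelling relation just derived can be tested numerically: fill finite grids for $G_{r},G_{s}$ according to the two equations of the system, with arbitrary values on the boundary row $n=0$ and column $k=0$, and check the relation on the interior (a sketch):

```python
import random

random.seed(0)
N = K = 8
# free boundary values on row n = 0 and column k = 0
Gr = [[random.randint(-5, 5) for _ in range(K + 1)] for _ in range(N + 1)]
Gs = [[random.randint(-5, 5) for _ in range(K + 1)] for _ in range(N + 1)]

for n in range(N):
    for k in range(K):
        # (d1 - 1) d2 . Gr - d2 . Gs = 0
        Gr[n + 1][k + 1] = Gr[n][k + 1] + Gs[n][k + 1]
        # -(k d2 + 1) . Gr + d1 d2 . Gs = 0
        Gs[n + 1][k + 1] = k * Gr[n][k + 1] + Gr[n][k]

# cancelling relation for Gs, not mentioning Gr
for n in range(N - 1):
    for k in range(K - 1):
        assert Gs[n + 2][k + 2] == (Gs[n + 1][k + 2]
                                    + (k + 1) * Gs[n][k + 2] + Gs[n][k + 1])
```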
In order to bound the complexity of the Hermite form $H$ in our case of
interest, we will use results from [26], instantiated in the special case of
Ore shift polynomials. These results generalise to skew polynomials analogous
complexity bounds for the HNF over integer matrices $\mathbb{Z}^{n\times n}$
[31] and integer univariate polynomial matrices $\mathbb{Z}[z]^{n\times n}$
[54, 38, 34, 40].
###### Theorem E.4.
Let $A\in\mathbb{F}[\partial;\sigma]^{n\times n}$ of full rank $n$ with HNF
$H=U\cdot A\in\mathbb{F}[\partial;\sigma]^{n\times n}$.
1. 1.
$\sum_{i}\deg_{\partial}H_{i,i}\leq n\cdot\deg_{\partial}A$ [26, Theorem 4.7,
point (a)]. In particular,
$\displaystyle\deg_{\partial}H\leq n\cdot\deg_{\partial}A.$ (30)
2. 2.
For $A\in\mathbb{F}[z][\partial;\sigma]^{n\times n}$ and
$H\in\mathbb{F}(z)[\partial;\sigma]^{n\times n}$ [26, Theorem 5.6, point (a)],
$\displaystyle\deg_{z}H=O(n^{2}\cdot\deg_{z}A\cdot\deg_{\partial}A)$ (31)
3. 3.
For $A\in\mathbb{Z}[z][\partial;\sigma]^{n\times n}$ and
$H\in\mathbb{Q}(z)[\partial;\sigma]^{n\times n}$ we have [26, Corollary 5.9],
$\displaystyle\log\left|{H}\right|_{\infty}=\tilde{O}({n^{2}\cdot\deg_{z}A\cdot(\deg_{\partial}A+\log\left|{A}\right|_{\infty})}).$
(32)
We lift the results of Theorem E.4 from univariate polynomial rings
$\mathbb{F}[z],\mathbb{Z}[z]$ to the bivariate polynomial rings
$\mathbb{F}[k,\partial_{1}],\mathbb{Z}[k,\partial_{1}]$ that we need in our
complexity analysis by noticing that the latter behave like the former if we
replace $\deg_{z}$ with $\deg_{k+\partial_{1}}$. The formal result that we
need is the following.
###### Lemma E.5.
Let $A$ be an invertible matrix in $\mathbb{Z}[k,\partial_{1}]^{n\times n}$.
Then $\deg_{k+\partial_{1}}A^{-1}\leq n\cdot\deg_{k+\partial_{1}}A$ and
$\log|A^{-1}|_{\infty}\leq
n^{2}(1+\log|A|_{\infty}+\log\deg_{k+\partial_{1}}A)$.
###### Proof E.6.
By Cramer’s formula, every coefficient of $A^{-1}$ is the quotient of the
determinant of a submatrix of $A$ and the determinant of $A$. By the Leibniz
formula we have
$\det(A)=\sum_{\sigma}\text{sign}(\sigma)A_{1,\sigma_{1}}\cdots
A_{n,\sigma_{n}}$, where $\text{sign}(\sigma)\in\\{-1,1\\}$ and $\sigma$
ranges over all permutations of $\\{1,\ldots,n\\}$. Each such determinant is a
sum of at most $n!$ products of $n$ entries of $A$, so its combined degree is
at most $n\cdot\deg_{k+\partial_{1}}A$ and the logarithm of its height is at
most $\log n!+n\log\left|{A}\right|_{\infty}+2n\log(\deg_{k+\partial_{1}}A+1)$,
which is within the claimed bound.
The two bounds in Lemma E.7 below are obtained from the last two bounds in
Theorem E.4 by inspecting the proofs in [26] and using the bounds on
inversion of matrices of bivariate polynomials from Lemma E.5.
###### Lemma E.7.
1. 1.
For $A\in\mathbb{F}[k,\partial_{1}][\partial;\sigma]^{n\times n}$ and
$H\in\mathbb{F}(k,\partial_{1})[\partial;\sigma]^{n\times n}$,
$\displaystyle\deg_{k+\partial_{1}}H=O(n^{2}\cdot\deg_{k+\partial_{1}}A\cdot\deg_{\partial}A)$
(33)
2. 2.
For $A\in\mathbb{Z}[k,\partial_{1}][\partial;\sigma]^{n\times n}$ and
$H\in\mathbb{Q}(k,\partial_{1})[\partial;\sigma]^{n\times n}$ we have
$\displaystyle\log\left|{H}\right|_{\infty}=\tilde{O}({n^{2}\cdot\deg_{k+\partial_{1}}A\cdot(\deg_{\partial}A+\log\left|{A}\right|_{\infty})}).$
(34)
Putting everything together, the bound from point 1 of Theorem E.4 and the
two bounds from Lemma E.7 yield the following corollary.
###### Corollary E.8.
Let $A\in(W_{2}^{\prime})^{m\times
m}=\mathbb{Q}[k,\partial_{1}][\partial_{2};\sigma_{2}]^{m\times m}$ of full
rank $m$ with HNF $H=U\cdot
A\in\mathbb{Q}(k,\partial_{1})[\partial_{2};\sigma_{2}]^{m\times m}$. We have:
$\displaystyle\deg_{\partial_{2}}H$ $\displaystyle\leq
m\cdot\deg_{\partial_{2}}A,$ $\displaystyle\deg_{k+\partial_{1}}H$
$\displaystyle=O(m^{2}\cdot\deg_{k+\partial_{1}}A\cdot\deg_{\partial_{2}}A),$
$\displaystyle\log\left|{H}\right|_{\infty}$
$\displaystyle=\tilde{O}({m^{2}\cdot\deg_{\partial_{2}}A\cdot(\deg_{k+\partial_{1}}A+\log\left|{A}\right|_{\infty})}).$
Thus, the degrees of the HNF are polynomially bounded, and the heights are
exponentially bounded. The bounds from Corollary E.8 yield the complexity
upper-bound on the zeroness problem that we are after.
See 6.1
###### Proof E.9.
Let $f$ be a linrec sequence of order $\leq m$, degree $\leq d$, and height
$\leq h$. Since $\deg_{\partial_{2}}A=\deg_{\partial_{1}}A=1$ for the matrix
$A$ arising from the linrec system, thanks to Corollary E.8 the Hermite form $H$ has
$\deg_{\partial_{2}}H\leq m$, $\deg_{k+\partial_{1}}H$ is polynomially bounded
(and thus $\deg_{k}H$ and $\deg_{\partial_{1}}H$ as well), and
$\left|{H}\right|_{\infty}$ is exponentially bounded. Thanks to the fact that
the Hermite form is triangular, we can immediately extract from $H\cdot x=0$
the existence of a cancelling relation (CR-2) for $f_{1}$ where $i^{*},j^{*}$
are polynomially bounded, the degree of $p_{i^{*},j^{*}}$ is polynomially
bounded, and the height of $\left|{p_{i^{*},j^{*}}}\right|_{\infty}$ is
exponentially bounded.
Moreover, consider the one-dimensional sections
$f(0,k),\dots,f(i^{*},k)\in\mathbb{Q}^{\mathbb{N}}$. By Lemma 2.2, they are
linrec of order $\leq m\cdot(i^{*}+3)$, degree $\leq d$, and height $\leq
h\cdot(i^{*})^{d}$, and thus there are associated matrices
$A_{0},\dots,A_{i^{*}}$ of the appropriate dimensions
$\leq(m\cdot(i^{*}+3))\times(m\cdot(i^{*}+3))$ with coefficients in
$\mathbb{Q}[k][\partial_{2};\sigma_{2}]$. The bounds from Corollary E.8 can be
applied to this case as well and we obtain for each $0\leq i\leq i^{*}$ a
cancelling relation (CR-1) $R_{i}$ with leading polynomial coefficient
$q_{i,\ell_{i}^{*}}(k)$ where $\ell_{i}^{*}$ is polynomially bounded, its
degree in $k$ is polynomially bounded, and the height
$\left|{q_{i,\ell_{i}^{*}}}\right|_{\infty}$ is exponentially bounded.
### E.1 Extended example
We conclude this section with an extended example showing how to compute the
Hermite form of a skew polynomial matrix, thus illustrating the techniques of
Giesbrecht and Kim [26] leading to Theorem E.4. We apply the algorithm to our
running example. For $n\in\mathbb{N}$, denote with
$\mathbb{F}[\partial;\sigma]_{n}$ the set of skew polynomials of degree
at most $n$ with coefficients in the field $\mathbb{F}$. Let
$\phi_{n}:\mathbb{F}[\partial;\sigma]_{n}\to\mathbb{F}^{n+1}$ be the bijection
that associates to a skew polynomial of degree $\leq n$ the vector of its
coefficients, starting from the one of highest degree. For instance,
$\phi_{5}(5\cdot\partial^{3}+4\cdot\partial^{2}+7)=(0,0,5,4,0,7).$
The _$m$ -Sylvester matrix_ of a skew polynomial
$P\in\mathbb{F}[\partial;\sigma]_{n-m}$ of degree $\leq n-m$ is the matrix
$S_{n}^{m}(P)\in\mathbb{F}^{(m+1)\times(n+1)}$ defined by
$\displaystyle S_{n}^{m}(P)=\begin{pmatrix}\phi_{n}(\partial^{m}P)\\\
\phi_{n}(\partial^{m-1}P)\\\ \vdots\\\ \phi_{n}(\partial^{0}P)\end{pmatrix}.$
(35)
For example, for $P=5\cdot\partial^{3}+4\cdot\partial^{2}+7$ we have
$\displaystyle S_{5}^{2}(P)=\begin{pmatrix}\phi_{5}(\partial^{2}P)\\\
\phi_{5}(\partial^{1}P)\\\
\phi_{5}(\partial^{0}P)\end{pmatrix}=\begin{pmatrix}5&4&0&7&0&0\\\
0&5&4&0&7&0\\\ 0&0&5&4&0&7\end{pmatrix}.$
The next lemma shows that sufficiently large Sylvester matrices can be used to
express products of skew polynomials in terms of products of matrices. This
crucial idea allows one to transform problems on skew polynomials in
$\mathbb{F}[\partial;\sigma]$ into linear algebra problems over the underlying
field $\mathbb{F}$.
###### Lemma E.10 (c.f. [6, Sec. 1, eq. (1)]).
Let $P,Q\in\mathbb{F}[\partial;\sigma]$ and $n,m\in\mathbb{N}$ s.t. $\deg
Q\leq m$ and $\deg P\leq n-m$. Then,
$\displaystyle\phi_{n}(Q\cdot P)=\phi_{m}(Q)\cdot S_{n}^{m}(P).$
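In the special case of constant coefficients (where $\sigma$ acts trivially), the lemma is plain linear algebra and can be checked directly (a sketch; $Q$ below is an arbitrary test polynomial):

```python
def phi(P, n):  # coefficient vector of length n+1, highest degree first
    return [P.get(d, 0) for d in range(n, -1, -1)]

def sylvester(P, n, m):  # rows phi_n(d^m P), ..., phi_n(d^0 P)
    return [phi({d + r: c for d, c in P.items()}, n) for r in range(m, -1, -1)]

def polymul(P, Q):
    R = {}
    for a, c in P.items():
        for b, e in Q.items():
            R[a + b] = R.get(a + b, 0) + c * e
    return R

P = {3: 5, 2: 4, 0: 7}   # 5 d^3 + 4 d^2 + 7, as in the example above
assert sylvester(P, 5, 2) == [[5, 4, 0, 7, 0, 0],
                              [0, 5, 4, 0, 7, 0],
                              [0, 0, 5, 4, 0, 7]]

Q = {2: 2, 1: -1, 0: 3}  # any Q with deg Q <= m = 2
S = sylvester(P, 5, 2)
lhs = phi(polymul(Q, P), 5)                                     # phi_n(Q P)
rhs = [sum(q * s for q, s in zip(phi(Q, 2), col)) for col in zip(*S)]
assert lhs == rhs        # phi_n(Q P) = phi_m(Q) . S_n^m(P)
```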
We extend both $\phi_{n}$ and $S_{n}^{m}$ to skew polynomial matrices in
$\mathbb{F}[\partial;\sigma]_{n}^{k\times k}$ by point-wise application and
then merging all the obtained matrices into a single one.
###### Example E.11.
For instance, $\phi_{2}(A)$ with
$A\in\mathbb{Q}[k][\partial_{1};\sigma_{1}][\partial_{2};\sigma_{2}]^{2\times
2}$ from (28) equals
$\displaystyle\phi_{2}(A)$
$\displaystyle=\phi_{2}\begin{pmatrix}(\partial_{1}-1)\partial_{2}&-\partial_{2}\\\
-k\partial_{2}-1&\partial_{1}\partial_{2}\end{pmatrix}=\begin{pmatrix}\phi_{2}((\partial_{1}-1)\partial_{2})&\phi_{2}(-\partial_{2})\\\
\phi_{2}(-k\partial_{2}-1)&\phi_{2}(\partial_{1}\partial_{2})\end{pmatrix}$
$\displaystyle=\begin{pmatrix}0&\partial_{1}-1&0&0&{-1}&0\\\
0&{-k}&{-1}&0&\partial_{1}&0\end{pmatrix}\in\mathbb{Q}[k][\partial_{1};\sigma_{1}]^{2\times
6}$
and thus $S_{2}^{1}(A)\in\mathbb{Q}[k][\partial_{1};\sigma_{1}]^{4\times 6}$
is
$\displaystyle S_{2}^{1}(A)$
$\displaystyle=S_{2}^{1}\begin{pmatrix}(\partial_{1}-1)\partial_{2}&-\partial_{2}\\\
-k\partial_{2}-1&\partial_{1}\partial_{2}\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}S_{2}^{1}((\partial_{1}-1)\partial_{2})&S_{2}^{1}(-\partial_{2})\\\
S_{2}^{1}(-k\partial_{2}-1)&S_{2}^{1}(\partial_{1}\partial_{2})\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}\begin{pmatrix}\phi_{2}(\partial_{2}(\partial_{1}-1)\partial_{2})\\\
\phi_{2}((\partial_{1}-1)\partial_{2})\end{pmatrix}&\begin{pmatrix}\phi_{2}(\partial_{2}(-\partial_{2}))\\\
\phi_{2}(-\partial_{2})\end{pmatrix}\\\
\begin{pmatrix}\phi_{2}(\partial_{2}(-k\partial_{2}-1))\\\
\phi_{2}(-k\partial_{2}-1)\end{pmatrix}&\begin{pmatrix}\phi_{2}(\partial_{2}\partial_{1}\partial_{2})\\\
\phi_{2}(\partial_{1}\partial_{2})\end{pmatrix}\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}\begin{pmatrix}(\partial_{1}-1\ 0\ 0)\\\ (0\
\partial_{1}-1\ 0)\end{pmatrix}&\begin{pmatrix}({-1}\ 0\ 0)\\\ (0\ {-1}\
0)\end{pmatrix}\\\ \begin{pmatrix}({-(k+1)}\ {-1}\ 0)\\\ (0\ {-k}\
{-1})\end{pmatrix}&\begin{pmatrix}(\partial_{1}\ 0\ 0)\\\ (0\ \partial_{1}\
0)\end{pmatrix}\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}\partial_{1}-1&0&0&{-1}&0&0\\\
0&\partial_{1}-1&0&0&{-1}&0\\\ {-(k+1)}&{-1}&0&\partial_{1}&0&0\\\
0&{-k}&{-1}&0&\partial_{1}&0\end{pmatrix}.$
By definition of the Hermite form, we have that $H=U\cdot A$. By (30), the
degree of every skew polynomial appearing therein is bounded by $n\cdot\deg A$.
Hence, setting $\rho=n\cdot\deg A$, we have the following matrix equation with
coefficients in $\mathbb{F}$:
$\displaystyle\phi_{\rho+d}(H)=\phi_{\rho}(U)\cdot S^{\rho}_{\rho+d}(A).$
The _diagonal degree vector_ of the Hermite form for $A$ is the unique vector
$d$ s.t. $d_{i}=\deg H_{i,i}$. The algorithm will guess such a vector, and it
can detect whether the guess was correct or not. If it is the right one, then
$H$ and $U$ can be computed.
###### Example E.12.
The correct diagonal degree vector for our running example is $(0,2)$. The
Hermite normal form $H=U\cdot A$ of the $2\times 2$ matrix $A$ from our
running example has the form
$\displaystyle H=\begin{pmatrix}H_{11}&H_{12}\\\ 0&H_{22}\end{pmatrix},\quad
U=\begin{pmatrix}U_{11}&U_{12}\\\
U_{21}&U_{22}\end{pmatrix}\in\mathbb{Q}[k][\partial_{1};\sigma_{1}][\partial_{2};\sigma_{2}]^{2\times
2}$
where
$H_{11},H_{22}\in\mathbb{Q}[k][\partial_{1};\sigma_{1}][\partial_{2};\sigma_{2}]$
are _monic_ skew polynomials of degree respectively $0$ and $2$ and
$H_{12},U_{11},U_{12},U_{21},U_{22}\in\mathbb{Q}[k][\partial_{1};\sigma_{1}][\partial_{2};\sigma_{2}]$
are skew polynomials of degree $1$. It follows that
$\displaystyle\phi_{2}(H)$
$\displaystyle=\begin{pmatrix}\phi_{2}(H_{11})&\phi_{2}(H_{12})\\\
0&\phi_{2}(H_{22})\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}\phi_{2}(1)&\phi_{2}(a_{121}\partial_{2}+a_{120})\\\
0&\phi_{2}(\partial_{2}^{2}+a_{221}\partial_{2}+a_{220})\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}0&0&1&0&a_{121}&a_{120}\\\
0&0&0&1&a_{221}&a_{220}\end{pmatrix}\in\mathbb{Q}[k][\partial_{1};\sigma_{1}]^{2\times
6}.$
Similarly,
$\displaystyle\phi_{1}(U)$
$\displaystyle=\begin{pmatrix}\phi_{1}(U_{11})&\phi_{1}(U_{12})\\\
\phi_{1}(U_{21})&\phi_{1}(U_{22})\end{pmatrix}=\begin{pmatrix}\phi_{1}(u_{111}\partial_{2}+u_{110})&\phi_{1}(u_{121}\partial_{2}+u_{120})\\\
\phi_{1}(u_{211}\partial_{2}+u_{210})&\phi_{1}(u_{221}\partial_{2}+u_{220})\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}u_{111}&u_{110}&u_{121}&u_{120}\\\
u_{211}&u_{210}&u_{221}&u_{220}\end{pmatrix}\in\mathbb{Q}[k][\partial_{1};\sigma_{1}]^{2\times
4}.$
By putting the pieces together, we obtain the following matrix equation with
entries in $\mathbb{Q}[k][\partial_{1};\sigma_{1}]$
$\displaystyle\underbrace{\begin{pmatrix}0&0&1&0&a_{121}&a_{120}\\\
0&0&0&1&a_{221}&a_{220}\end{pmatrix}}_{\phi_{2}(H)}=$
$\displaystyle\qquad\underbrace{\begin{pmatrix}u_{111}&u_{110}&u_{121}&u_{120}\\\
u_{211}&u_{210}&u_{221}&u_{220}\end{pmatrix}}_{\phi_{1}(U)}\cdot\underbrace{\begin{pmatrix}\partial_{1}-1&0&0&{-1}&0&0\\\
0&\partial_{1}-1&0&0&{-1}&0\\\ {-(k+1)}&{-1}&0&\partial_{1}&0&0\\\
0&{-k}&{-1}&0&\partial_{1}&0\end{pmatrix}}_{S^{1}_{2}(A)}.$
It is shown in [26, Theorem 5.2] that if we guessed the diagonal degree vector
right, then we can remove columns from $\phi_{\rho+d}(H)$ corresponding to
under-determined entries, and corresponding columns in $S^{\rho}_{\rho+d}(A)$,
in order to obtain two matrices $\tilde{A}$ and $\tilde{H}$ such that:
* •
$\tilde{H}$ is only made of $0$’s and $1$’s.
* •
$\tilde{A}$ is a square matrix.
* •
The matrix equation $T\tilde{A}=\tilde{H}$ with unknown $T$ (of the same
dimensions as $\phi_{\rho}(U)$) has a unique solution. In particular,
$\tilde{A}$ has full rank and hence is invertible.
###### Example E.13.
The reduced system $\tilde{H}=\phi_{1}(U)\cdot\tilde{A}$ in our running
example is obtained by removing columns $5,6$ from $\phi_{2}(H)$ and
correspondingly from $S_{2}^{1}(A)$:
$\displaystyle\underbrace{\begin{pmatrix}0&0&1&0\\\
0&0&0&1\end{pmatrix}}_{\tilde{H}}=\underbrace{\begin{pmatrix}u_{111}&u_{110}&u_{121}&u_{120}\\\
u_{211}&u_{210}&u_{221}&u_{220}\end{pmatrix}}_{\phi_{1}(U)}\cdot\underbrace{\begin{pmatrix}\partial_{1}-1&0&0&-1\\\
0&\partial_{1}-1&0&0\\\ -(k+1)&-1&0&\partial_{1}\\\
0&-k&-1&0\end{pmatrix}}_{\tilde{A}}.$
Now the obtained $\tilde{A}$ is invertible. Hence we can determine $U$ from
the equation $\phi_{1}(U)=\tilde{H}\tilde{A}^{-1}$.
###### Example E.14.
In the example, we obtain
$\displaystyle T=\begin{pmatrix}-\frac{k}{\partial_{1}-1}&-1\\\
\frac{k+1}{\partial_{1}^{2}-\partial_{1}-(k+1)}\partial_{2}+\frac{1}{\partial_{1}^{2}-\partial_{1}-(k+1)}&\frac{1}{\partial_{1}^{2}-\partial_{1}-(k+1)}\partial_{2}\end{pmatrix},$
yielding the Hermite form:
$\displaystyle H=T\cdot A$
$\displaystyle=\begin{pmatrix}-\frac{k}{\partial_{1}-1}&-1\\\
\frac{k+1}{\partial_{1}^{2}-\partial_{1}-(k+1)}\partial_{2}+\frac{1}{\partial_{1}^{2}-\partial_{1}-(k+1)}&\frac{1}{\partial_{1}^{2}-\partial_{1}-(k+1)}\partial_{2}\end{pmatrix}\cdot\begin{pmatrix}(\partial_{1}-1)\partial_{2}&-\partial_{2}\\\
-k\partial_{2}-1&\partial_{1}\partial_{2}\end{pmatrix}$
$\displaystyle=\begin{pmatrix}1&(\frac{k}{\partial_{1}-1}-\partial_{1})\partial_{2}\\\
0&\partial_{2}^{2}-\frac{1}{\partial_{1}^{2}-\partial_{1}-(k+1)}\partial_{2}\end{pmatrix}.$
(36)
# Sidon sets for linear forms
Melvyn B. Nathanson Lehman College (CUNY), Bronx, New York 10468
[email protected]
###### Abstract.
Let $\varphi(x_{1},\ldots,x_{h})=c_{1}x_{1}+\cdots+c_{h}x_{h}$ be a linear
form with coefficients in a field $\mathbf{F}$, and let $V$ be a vector space
over $\mathbf{F}$. A nonempty subset $A$ of $V$ is a _$\varphi$ -Sidon set_ if
$\varphi(a_{1},\ldots,a_{h})=\varphi(a^{\prime}_{1},\ldots,a^{\prime}_{h})$
implies $(a_{1},\ldots,a_{h})=(a^{\prime}_{1},\ldots,a^{\prime}_{h})$ for all
$h$-tuples $(a_{1},\ldots,a_{h})\in A^{h}$ and
$(a^{\prime}_{1},\ldots,a^{\prime}_{h})\in A^{h}$. There exist infinite Sidon
sets for the linear form $\varphi$ if and only if the set of coefficients of
$\varphi$ has distinct subset sums. In a normed vector space with
$\varphi$-Sidon sets, every infinite sequence of vectors is asymptotic to a
$\varphi$-Sidon set of vectors. Results on $p$-adic perturbations of
$\varphi$-Sidon sets of integers and bounds on the growth of $\varphi$-Sidon
sets of integers are also obtained.
###### Key words and phrases:
Sidon set, sumset, sum of dilates, distinct subset sums, representation
functions.
###### 2010 Mathematics Subject Classification:
11B13, 11B34, 11B75, 11P99
Supported in part by a grant from the PSC-CUNY Research Award Program.
## 1\. Linear forms with property $N$
Let $\mathbf{F}$ be a field and let $h$ be a positive integer. We consider
linear forms
(1) $\varphi(x_{1},\ldots,x_{h})=c_{1}x_{1}+\cdots+c_{h}x_{h}$
where $c_{i}\in\mathbf{F}$ for all $i\in\\{1,\ldots,h\\}$.
Let $V$ be a vector space over the field $\mathbf{F}$. For every nonempty
subset $A$ of $V$, let
$A^{h}=\left\\{(a_{1},\ldots,a_{h}):a_{i}\in A\text{ for all
}i\in\\{1,\ldots,h\\}\right\\}$
be the set of all $h$-tuples of elements of $A$. For $c\in\mathbf{F}$, the
_$c$ -dilate_ of $A$ is the set
$c\ast A=\\{ca:a\in A\\}.$
The _$\varphi$ -image of $A$_ is the set
$\displaystyle\varphi(A)$
$\displaystyle=\left\\{\varphi(a_{1},\ldots,a_{h}):(a_{1},\ldots,a_{h})\in
A^{h}\right\\}$
$\displaystyle=\left\\{c_{1}a_{1}+\cdots+c_{h}a_{h}:(a_{1},\ldots,a_{h})\in
A^{h}\right\\}$ $\displaystyle=c_{1}\ast A+\cdots+c_{h}\ast A.$
Thus, $\varphi(A)$ is a sum of dilates. We define
$\varphi(\emptyset)=\\{0\\}$.
A nonempty subset $A$ of $V$ is a _Sidon set for the linear form $\varphi$_
or, simply, a _$\varphi$ -Sidon set_ if it satisfies the following property:
For all $h$-tuples $(a_{1},\ldots,a_{h})\in A^{h}$ and
$(a^{\prime}_{1},\ldots,a^{\prime}_{h})\in A^{h}$, if
$\varphi(a_{1},\ldots,a_{h})=\varphi(a^{\prime}_{1},\ldots,a^{\prime}_{h})$
then $(a_{1},\ldots,a_{h})=(a^{\prime}_{1},\ldots,a^{\prime}_{h})$, that is,
$a_{i}=a^{\prime}_{i}$ for all $i\in\\{1,\ldots,h\\}$. Thus, $A$ is a
$\varphi$-Sidon set if the linear form $\varphi$ is one-to-one on $A^{h}$.
Two cases of special interest are $V=\mathbf{F}$ with $\varphi$-Sidon sets
contained in $\mathbf{F}$, and $V=\mathbf{F}=\mathbf{Q}$ with $\varphi$-Sidon
sets of positive integers.
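For small finite sets the definition can be checked directly by brute force: $A$ is a $\varphi$-Sidon set exactly when the map $(a_{1},\ldots,a_{h})\mapsto c_{1}a_{1}+\cdots+c_{h}a_{h}$ is injective on $A^{h}$. A minimal sketch over the integers (the function name is illustrative):

```python
from itertools import product

def is_phi_sidon(A, coeffs):
    # A is phi-Sidon iff (a_1, ..., a_h) -> sum c_i a_i is one-to-one on A^h
    seen = {}
    for tup in product(A, repeat=len(coeffs)):
        value = sum(c * a for c, a in zip(coeffs, tup))
        if value in seen and seen[value] != tup:
            return False  # two distinct h-tuples collide
        seen[value] = tup
    return True
```

For instance, with $\varphi=x_{1}+2x_{2}$, the set $\\{1,3,9\\}$ is $\varphi$-Sidon, while $\\{1,2,4\\}$ is not, since $2+2\cdot 2=6=4+2\cdot 1$.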
For the linear form $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$, every set with one
element is a $\varphi$-Sidon set. There is a simple obstruction to the
existence of $\varphi$-Sidon sets with more than one element. For every
nonempty subset $I$ of $\\{1,\ldots,h\\}$, define the _subset sum_
(2) $s(I)=\sum_{i\in I}c_{i}.$
Let $s({\emptyset})=0$. Suppose there exist disjoint subsets $I_{1}$ and
$I_{2}$ of $\\{1,\ldots,h\\}$ with $I_{1}$ and $I_{2}$ not both empty such
that
(3) $s({I_{1}})=\sum_{i\in I_{1}}c_{i}=\sum_{i\in I_{2}}c_{i}=s({I_{2}}).$
Let $I_{3}=\\{1,\ldots,h\\}\setminus(I_{1}\cup I_{2})$. Let $A$ be a subset of
$V$ with $|A|\geq 2$. Choose vectors $u,v,w\in A$ with $u\neq v$, and define
$a_{i}=\begin{cases}u&\text{ if $i\in I_{1}$}\\\ v&\text{ if $i\in I_{2}$}\\\
w&\text{ if $i\in I_{3}$}\end{cases}$
and
$a^{\prime}_{i}=\begin{cases}v&\text{ if $i\in I_{1}$ }\\\ u&\text{ if $i\in
I_{2}$}\\\ w&\text{ if $i\in I_{3}$.}\end{cases}$
We have
$(a_{1},\ldots,a_{h})\neq(a^{\prime}_{1},\ldots,a^{\prime}_{h})$
because $I_{1}\cup I_{2}\neq\emptyset$ and $a_{i}\neq a^{\prime}_{i}$ for all
$i\in I_{1}\cup I_{2}$.
The sets $I_{1}$, $I_{2}$, $I_{3}$ are pairwise disjoint. Condition (3)
implies
$\displaystyle\varphi(a_{1},\ldots,a_{h})$ $\displaystyle=\sum_{i\in
I_{1}}c_{i}a_{i}+\sum_{i\in I_{2}}c_{i}a_{i}+\sum_{i\in I_{3}}c_{i}a_{i}$
$\displaystyle=\left(\sum_{i\in I_{1}}c_{i}\right)u+\left(\sum_{i\in
I_{2}}c_{i}\right)v+\left(\sum_{i\in I_{3}}c_{i}\right)w$
$\displaystyle=\left(\sum_{i\in I_{2}}c_{i}\right)u+\left(\sum_{i\in
I_{1}}c_{i}\right)v+\left(\sum_{i\in I_{3}}c_{i}\right)w$
$\displaystyle=\sum_{i\in I_{1}}c_{i}a^{\prime}_{i}+\sum_{i\in
I_{2}}c_{i}a^{\prime}_{i}+\sum_{i\in I_{3}}c_{i}a^{\prime}_{i}$
$\displaystyle=\varphi(a^{\prime}_{1},\ldots,a^{\prime}_{h})$
and so $A$ is not a $\varphi$-Sidon set.
We say that the linear form (1) has _property $N$_ if there do _not_ exist
disjoint subsets $I_{1}$ and $I_{2}$ of $\\{1,\ldots,h\\}$ that satisfy
condition (3) with $I_{1}$ and $I_{2}$ not both empty. If the linear form
$\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ has property $N$, then
$\sum_{i\in I_{1}}c_{i}=s(I_{1})\neq s({\emptyset})=0$
for every nonempty subset $I_{1}$ of $\\{1,\ldots,h\\}$. In particular,
choosing $I_{1}=\\{i\\}$ shows that $c_{i}\neq 0$ for all
$i\in\\{1,\ldots,h\\}$.
For example, if $h\geq 1$ and $c_{i}=2^{i-1}$ for all $i\in\\{1,\ldots,h\\}$,
then the linear form
$\varphi=\sum_{i=1}^{h}c_{i}x_{i}=x_{1}+2x_{2}+4x_{3}+\cdots+2^{h-1}x_{h}$
has property $N$.
If $h\geq 2$ and $c_{i}=1$ for all $i\in\\{1,\ldots,h\\}$, then the linear
form
$\psi=\sum_{i=1}^{h}c_{i}x_{i}=x_{1}+x_{2}+x_{3}+\cdots+x_{h}$
does not have property $N$ because the nonempty disjoint sets $I_{1}=\\{1\\}$
and $I_{2}=\\{2\\}$ satisfy
$\sum_{i\in I_{1}}c_{i}=c_{1}=1=c_{2}=\sum_{i\in I_{2}}c_{i}.$
In Section 3 we prove that, for every infinite vector space $V$, there exist
infinite $\varphi$-Sidon sets for the linear form $\varphi$ if and only if
$\varphi$ has property $N$.
For related work on additive number theory for linear forms, see Bukh[2] and
Nathanson [12, 13, 14, 15, 16, 18].
Let $\varphi(x_{1},\ldots,x_{h})=c_{1}x_{1}+\cdots+c_{h}x_{h}$, where
$c_{i}\in\mathbf{F}$ for $i\in\\{1,2,\ldots,h\\}$. Let $J_{1}$ and $J_{2}$ be
distinct subsets of $\\{1,2,\ldots,h\\}$ such that $\sum_{i\in
J_{1}}c_{i}=\sum_{i\in J_{2}}c_{i}$ and let $J=J_{1}\cap J_{2}$. The sets
$I_{1}=J_{1}\setminus J$ and $I_{2}=J_{2}\setminus J$ are distinct and
disjoint subsets of $\\{1,2,\ldots,h\\}$. Moreover, $\sum_{i\in
I_{1}}c_{i}=\sum_{i\in I_{2}}c_{i}$. It follows that the linear form $\varphi$
has property $N$ if and only if the set $\\{c_{1},\ldots,c_{h}\\}$ has
distinct subset sums.
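The equivalence just proved gives a finite test: $\varphi$ has property $N$ exactly when all $2^{h}$ subset sums of its coefficients are pairwise distinct (including the empty subset, whose sum is $0$). A sketch (the function name is illustrative):

```python
from itertools import combinations

def has_property_N(coeffs):
    # property N <=> the coefficients have distinct subset sums,
    # the empty subset (sum 0) included
    sums = [sum(sub) for r in range(len(coeffs) + 1)
            for sub in combinations(coeffs, r)]
    return len(sums) == len(set(sums))
```

The coefficients $(1,2,4,8)$ pass the test, while $(1,1)$ (the form $\psi=x_{1}+x_{2}$) and $(1,2,3)$ (where $1+2=3$) fail.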
Let $g(n)$ be the size of the largest subset of $\\{1,2,\ldots,n\\}$ that has
distinct subset sums. A famous unsolved problem of Paul Erdős and Leo Moser
asks if
$g(n)=\frac{\log n}{\log 2}+O(1).$
See Erdős [5, pp. 136–137], Guy [6, Section C8], and Dubroff, Fox, and Xu [4].
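For small $n$, the function $g(n)$ can be computed by exhaustive search over all subsets of $\\{1,\ldots,n\\}$ (a sketch only; the scan is exponential and infeasible beyond small $n$):

```python
from itertools import combinations

def g(n):
    # size of the largest subset of {1, ..., n} with distinct subset sums,
    # searched from the largest candidate size downward
    for size in range(n, 0, -1):
        for cand in combinations(range(1, n + 1), size):
            sums = [sum(sub) for r in range(size + 1)
                    for sub in combinations(cand, r)]
            if len(sums) == len(set(sums)):
                return size
    return 0
```

For example, $g(4)=3$ (witnessed by $\\{1,2,4\\}$) and $g(8)=4$ (witnessed by $\\{1,2,4,8\\}$; a $5$-element subset of $\\{1,\ldots,8\\}$ is impossible by pigeonhole, since $2^{5}=32$ subset sums cannot fit in the range $0,\ldots,30$).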
## 2\. Classical Sidon sets
The idea of a Sidon set for a linear form derives from the classical
definition of a Sidon set of integers. In additive number theory, a _Sidon
set_ (also called a $B_{2}$-set) is a set $A$ of positive integers such that,
if $a_{1},a_{2},a^{\prime}_{1},a^{\prime}_{2}\in A$ and
$a_{1}+a_{2}=a^{\prime}_{1}+a^{\prime}_{2}$
then $\\{a_{1},a_{2}\\}=\\{a^{\prime}_{1},a^{\prime}_{2}\\}$. More generally,
let $G$ be an additive abelian group or semigroup, and let $A$ be a subset of
$G$. For $h\geq 2$, the _$h$ -fold sumset_ of $A$ is the set $hA$ of all sums
of $h$ not necessarily distinct elements of $A$. A nonempty set $A$ is an _$h$
-Sidon set_ (or a $B_{h}$-set) if every element of the sumset $hA$ has an
essentially unique representation as the sum of $h$ elements of $A$, in the
following sense: If $\\{a_{i}:i\in I\\}$ is a set of pairwise distinct
elements of $A$ and if $\\{u_{i}:i\in I\\}$ and $\\{v_{i}:i\in I\\}$ are sets
of nonnegative integers such that
$h=\sum_{i\in I}u_{i}=\sum_{i\in I}v_{i}$
and
$\sum_{i\in I}u_{i}a_{i}=\sum_{i\in I}v_{i}a_{i}$
then $u_{i}=v_{i}$ for all $i\in I$.
The sumset $hA$ is associated with the linear form
$\psi=\psi(x_{1},\ldots,x_{h})=x_{1}+\cdots+x_{h}$
and
$hA=\psi(A)=\left\\{a_{1}+\cdots+a_{h}:a_{i}\in A\text{ for all
}i\in\\{1,\ldots,h\\}\right\\}.$
The linear form $\psi$ does not have property $N$, and there exists no
$\psi$-Sidon set $A$ with $\operatorname{\text{card}}(A)\geq 2$.
The literature on classical Sidon sets is huge. Two surveys of results on
classical Sidon sets are Halberstam and Roth [7] and O’Bryant [19]. For recent
work, see [3, 8, 9, 10, 11, 20, 21, 23, 25].
## 3\. Contractions of linear forms
Let $\mathbf{F}$ be a field and let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a
linear form in $h$ variables with coefficients $c_{i}\in\mathbf{F}$.
Associated to every subset $J$ of $\\{1,\ldots,h\\}$ is the linear form in
$\operatorname{\text{card}}(J)$ variables
$\varphi_{J}=\sum_{j\in J}c_{j}x_{j}.$
We have $\varphi_{\emptyset}=0$ and $\varphi_{J}=\varphi$ if
$J=\\{1,\ldots,h\\}$. The linear form $\varphi_{J}$ is called a _contraction_
of the linear form $\varphi$.
Let $V$ be a vector space over the field $\mathbf{F}$. For every nonempty
subset $A$ of $V$, let
$\varphi_{J}(A)=\left\\{\sum_{j\in J}c_{j}a_{j}:a_{j}\in A\text{ for all }j\in
J\right\\}.$
If $A$ is a $\varphi$-Sidon set, then $A$ is a $\varphi_{J}$-Sidon set for
every nonempty subset $J$ of $\\{1,\ldots,h\\}$.
For every subset $X$ of $V$ and vector $v\in V$, the _translate_ of $X$ by $v$
is the set
$X+v=\\{x+v:x\in X\\}.$
For every subset $J$ of $\\{1,\ldots,h\\}$, let
$J^{c}=\\{1,\ldots,h\\}\setminus J$ be the complement of $J$ in
$\\{1,\ldots,h\\}$. For every subset $A$ of $V$ and $b\in V\setminus A$, we
define
(4) $\Phi_{J}(A,b)=\varphi_{J}(A)+\left(\sum_{j\in
J^{c}}c_{j}\right)b=\varphi_{J}(A)+s(J^{c})b$
to be the translate of the set $\varphi_{J}(A)$ by the vector $s(J^{c})b$. We
have $\Phi_{\emptyset}(A,b)=\left(\sum_{j=1}^{h}c_{j}\right)b$ and
$\Phi_{J}(A,b)=\varphi(A)$ if $J=\\{1,\ldots,h\\}$.
###### Lemma 1.
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with coefficients in
the field $\mathbf{F}$. Let $V$ be a vector space over $\mathbf{F}$. For every
subset $A$ of $V$ and $b\in V\setminus A$,
(5)
$\varphi\left(A\cup\\{b\\}\right)=\bigcup_{J\subseteq\\{1,\ldots,h\\}}\Phi_{J}(A,b).$
If $A\cup\\{b\\}$ is a $\varphi$-Sidon set, then
(6) $\left\\{\Phi_{J}(A,b):J\subseteq\\{1,\ldots,h\\}\right\\}$
is a set of pairwise disjoint sets.
If $A$ is a $\varphi$-Sidon set and (6) is a set of pairwise disjoint sets,
then $A\cup\\{b\\}$ is a $\varphi$-Sidon set.
###### Proof.
If $w\in\varphi\left(A\cup\\{b\\}\right)$, then there exist vectors
$v_{1},\ldots,v_{h}\in A\cup\\{b\\}$ such that
$w=\varphi(v_{1},\ldots,v_{h})=\sum_{i=1}^{h}c_{i}v_{i}.$
Let $J=\\{j\in\\{1,\ldots,h\\}:v_{j}=a_{j}\in A\\}$. We have
$J^{c}=\\{j\in\\{1,\ldots,h\\}:v_{j}=b\\}$ and
$\displaystyle w$ $\displaystyle=\sum_{i=1}^{h}c_{i}v_{i}=\sum_{j\in
J}c_{j}a_{j}+\sum_{j\in
J^{c}}c_{j}b\in\varphi_{J}(A)+s(J^{c})b=\Phi_{J}(A,b).$
Conversely, if $w\in\Phi_{J}(A,b)$ for some $J\subseteq\\{1,\ldots,h\\}$, then
there exist $a_{j}\in A$ for all $j\in J$ such that
$w=\sum_{j\in J}c_{j}a_{j}+\sum_{j\in
J^{c}}c_{j}b\in\varphi\left(A\cup\\{b\\}\right).$
This proves (5). It follows that if $A\cup\\{b\\}$ is a $\varphi$-Sidon set,
then (6) is a set of pairwise disjoint sets.
Suppose that $A$ is a Sidon set and that the sets $\Phi_{J}(A,b)$ are pairwise
disjoint for all $J\subseteq\\{1,\ldots,h\\}$. Let
$u_{1},\ldots,u_{h},v_{1},\ldots,v_{h}\in A\cup\\{b\\}$. Consider the sets
$J_{1}=\\{j\in\\{1,\ldots,h\\}:u_{j}\neq
b\\}\operatorname{\qquad\text{and}\qquad}J_{2}=\\{j\in\\{1,\ldots,h\\}:v_{j}\neq
b\\}$
and the complementary sets
$J_{1}^{c}=\\{j\in\\{1,\ldots,h\\}:u_{j}=b\\}\operatorname{\qquad\text{and}\qquad}J_{2}^{c}=\\{j\in\\{1,\ldots,h\\}:v_{j}=b\\}.$
We have
$\varphi(u_{1},\ldots,u_{h})=\sum_{j\in J_{1}}c_{j}u_{j}+\left(\sum_{j\in
J_{1}^{c}}c_{j}\right)b\in\Phi_{J_{1}}(A,b)$
and
$\varphi(v_{1},\ldots,v_{h})=\sum_{j\in J_{2}}c_{j}v_{j}+\left(\sum_{j\in
J_{2}^{c}}c_{j}\right)b\in\Phi_{J_{2}}(A,b).$
If $J_{1}\neq J_{2}$, then $\Phi_{J_{1}}(A,b)\cap\Phi_{J_{2}}(A,b)=\emptyset$
and $\varphi(u_{1},\ldots,u_{h})\neq\varphi(v_{1},\ldots,v_{h})$.
If $J_{1}=J_{2}=\emptyset$, then
$(u_{1},\ldots,u_{h})=(b,\ldots,b)=(v_{1},\ldots,v_{h})$.
If $J_{1}=J_{2}\neq\emptyset$, then $J_{1}^{c}=J_{2}^{c}$ and
$\sum_{j\in J_{1}^{c}}c_{j}=\sum_{j\in J_{2}^{c}}c_{j}.$
It follows that
$\sum_{j\in J_{1}}c_{j}u_{j}=\sum_{j\in J_{1}}c_{j}v_{j}.$
Because $A$ is a $\varphi_{J_{1}}$-Sidon set, we have $u_{j}=v_{j}$ for all
$j\in J_{1}$, hence $u_{i}=v_{i}$ for all $i\in\\{1,\ldots,h\\}$. Thus, if $A$
is a Sidon set and the sets $\Phi_{J}(A,b)$ are pairwise disjoint, then
$A\cup\\{b\\}$ is a $\varphi$-Sidon set. This completes the proof. ∎
###### Lemma 2.
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with coefficients in
the field $\mathbf{F}$. Let $V$ be a vector space over $\mathbf{F}$, let $X$
be an infinite subset of $V$, and let $B$ be a finite subset of $X$. If the
linear form $\varphi$ has property $N$, then there exists $b\in X$ such that,
for all subsets $J$ of $\\{1,\ldots,h\\}$, the sets $\Phi_{J}(B,b)$ are
pairwise disjoint.
###### Proof.
Let $J_{1}$ and $J_{2}$ be distinct subsets of $\\{1,\ldots,h\\}$. For all
$x\in X$, we have
(7) $\Phi_{J_{1}}(B,x)\cap\Phi_{J_{2}}(B,x)\neq\emptyset$
if and only if there exist elements $b_{1,j}\in B$ for all $j\in J_{1}$ and
$b_{2,j}\in B$ for all $j\in J_{2}$ such that
(8) $\sum_{j\in J_{1}}c_{j}b_{1,j}+\left(\sum_{j\in
J^{c}_{1}}c_{j}\right)x=\sum_{j\in J_{2}}c_{j}b_{2,j}+\left(\sum_{j\in
J^{c}_{2}}c_{j}\right)x.$
Let $K=J^{c}_{1}\cap J^{c}_{2}$. The sets $I_{1}=J^{c}_{1}\setminus K$ and
$I_{2}=J^{c}_{2}\setminus K$ are disjoint. If $I_{1}=I_{2}=\emptyset$, then
$J^{c}_{1}=K=J^{c}_{2}$ and $J_{1}=J_{2}$, which is absurd. Therefore, $I_{1}$
and $I_{2}$ are disjoint sets, not both empty.
Because the linear form $\varphi$ has property $N$, we have
$\sum_{j\in I_{1}}c_{j}\neq\sum_{j\in I_{2}}c_{j}$
and so
$c=\sum_{j\in I_{2}}c_{j}-\sum_{j\in I_{1}}c_{j}\neq 0.$
Thus, $c\in\mathbf{F}\setminus\\{0\\}$ and so $c$ is invertible in
$\mathbf{F}$. From (8) we obtain
$\displaystyle\sum_{j\in J_{1}}c_{j}b_{1,j}-\sum_{j\in J_{2}}c_{j}b_{2,j}$
$\displaystyle=\left(\sum_{j\in J^{c}_{2}}c_{j}\right)x-\left(\sum_{j\in
J^{c}_{1}}c_{j}\right)x$ $\displaystyle=\left(\sum_{j\in
I_{2}}c_{j}-\sum_{j\in I_{1}}c_{j}\right)x$ $\displaystyle=cx$
and so
(9) $x=c^{-1}\left(\sum_{j\in I_{1}}c_{j}b_{1,j}-\sum_{j\in
I_{2}}c_{j}b_{2,j}\right).$
Because the set $B$ is finite, the set $B^{\prime}$ of elements in $X$ of the
form (9) is also finite. Because the set $X$ is infinite, the set
$X\setminus(B\cup B^{\prime})$ is infinite. For all $b\in X\setminus(B\cup
B^{\prime})$, the set $\\{\Phi_{J}(B,b):J\subseteq\\{1,\ldots,h\\}\\}$
consists of pairwise disjoint sets. This completes the proof. ∎
###### Theorem 1.
Let $\mathbf{F}$ be a field, let $V$ be an infinite vector space over the
field $\mathbf{F}$, and let $X$ be an infinite subset of $V$. Let
$\varphi(x_{1},\ldots,x_{h})=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with
nonzero coefficients $c_{i}\in\mathbf{F}$. The following are equivalent:
1. (i)
The set $X$ contains an infinite $\varphi$-Sidon set $A$.
2. (ii)
The set $X$ contains a $\varphi$-Sidon set $A$ with $|A|\geq 2$.
3. (iii)
The linear form $\varphi$ has property $N$.
###### Proof.
Condition (i) implies (ii). It was proved in Section 1 that (ii) implies
(iii). We shall prove that (iii) implies (i).
Suppose that the linear form $\varphi$ has property $N$. We construct
inductively an infinite $\varphi$-Sidon set $A$ contained in $X$. For all
$a_{1}\in X$, the set $A_{1}=\\{a_{1}\\}$ is a $\varphi$-Sidon set, because
every set with one element is $\varphi$-Sidon. Let
$A_{n}=\\{a_{1},\ldots,a_{n}\\}$ be a $\varphi$-Sidon set contained in
$X$. By Lemma 2, there exists $a_{n+1}\in X$ such that
$\Phi_{J_{1}}(A_{n},a_{n+1})\cap\Phi_{J_{2}}(A_{n},a_{n+1})=\emptyset$
if $J_{1}$ and $J_{2}$ are distinct subsets of $\\{1,\ldots,h\\}$. It follows
from Lemma 1 that the set $A_{n+1}=A_{n}\cup\\{a_{n+1}\\}$ is a
$\varphi$-Sidon set. This completes the proof. ∎
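The inductive proof is effectively a greedy algorithm. Over the positive integers it can be realised by repeatedly adjoining the least integer that keeps the set $\varphi$-Sidon, testing the condition by brute force (a sketch under that realisation; the names are illustrative, and the brute-force test replaces the disjointness criterion of Lemma 2):

```python
from itertools import product

def is_phi_sidon(A, coeffs):
    # brute-force test: (a_1, ..., a_h) -> sum c_i a_i injective on A^h
    seen = {}
    for tup in product(A, repeat=len(coeffs)):
        v = sum(c * a for c, a in zip(coeffs, tup))
        if v in seen and seen[v] != tup:
            return False
        seen[v] = tup
    return True

def greedy_phi_sidon(coeffs, size):
    # extend A by the least positive integer b with A ∪ {b} still phi-Sidon,
    # mirroring the induction in the proof of Theorem 1
    A, b = [], 1
    while len(A) < size:
        if is_phi_sidon(A + [b], coeffs):
            A.append(b)
        b += 1
    return A
```

For $\varphi=x_{1}+2x_{2}$ the greedy construction begins $1,2,5,6,\ldots$ (e.g. $3$ is rejected because $1+2\cdot 2=5=3+2\cdot 1$).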
## 4\. Perturbations of linear forms
An absolute value on a field $\mathbf{F}$ is a function $|\
|:\mathbf{F}\rightarrow\mathbf{R}$ such that
1. (i)
$|c|\geq 0$ for all $c\in\mathbf{F}$, and $|c|=0$ if and only if $c=0$,
2. (ii)
$|c_{1}c_{2}|=|c_{1}|\ |c_{2}|$ for all $c_{1},c_{2}\in\mathbf{F}$,
3. (iii)
$|c_{1}+c_{2}|\leq|c_{1}|+|c_{2}|$ for all $c_{1},c_{2}\in\mathbf{F}$.
The absolute value $|\ |$ on $\mathbf{F}$ is _trivial_ if $|c|=1$ for all
$c\neq 0$, and _nontrivial_ if $|c|\neq 1$ for some $c\neq 0$. The usual
absolute value and the $p$-adic absolute values are the nontrivial absolute
values on $\mathbf{Q}$.
Let $V$ be a vector space over $\mathbf{F}$. A _norm_ on $V$ with respect to
an absolute value $|\ |$ on $\mathbf{F}$ is a function $\|\
\|:V\rightarrow\mathbf{R}$ such that
1. (i)
$\|v\|\geq 0$ for all $v\in V$, and $\|v\|=0$ if and only if $v=0$,
2. (ii)
$\|cv\|=|c|\ \|v\|$ for all $c\in\mathbf{F}$ and $v\in V$,
3. (iii)
$\|v+w\|\leq\|v\|+\|w\|$ for all $v,w\in V$.
For example, if $|\ |$ is an absolute value on $\mathbf{F}$ and
$V=\mathbf{F}^{n}$, then, for every vector
$\mathbf{x}=\operatorname{\left(\begin{smallmatrix}x_{1}\\\ \vdots\\\
x_{n}\end{smallmatrix}\right)}\in V$, the functions
$\|\mathbf{x}\|_{1}=\sum_{j=1}^{n}|x_{j}|$
and
$\|\mathbf{x}\|_{\infty}=\max\\{|x_{j}|:j=1,\ldots,n\\}$
are norms on $V$ with respect to $|\ |$.
If $|\ |$ is a nontrivial absolute value on $\mathbf{F}$, then there exists
$c\in\mathbf{F}$ with $|c|\neq 0$ and $|c|\neq 1$. If $|c|>1$, then
$0<|1/c|=1/|c|<1$. If $0<|c_{0}|<1$, then
$0<\left|c_{0}^{n+1}\right|=\left|c_{0}\right|^{n+1}<\left|c_{0}\right|^{n}=\left|c_{0}^{n}\right|$
for all $n\in\mathbf{N}$. Thus, the field $\mathbf{F}$ is infinite and
(10)
$\inf\\{|c|:c\in\mathbf{F}\setminus\\{0\\}\\}=\inf\\{|c_{0}^{n}|:n=1,2,3,\ldots\\}=0.$
Let $V$ be a nonzero normed vector space with respect to a nontrivial absolute
value on the field $\mathbf{F}$. Let $v_{0}\in V\setminus\\{0\\}$. Let
$c_{0}\in\mathbf{F}$ with $0<|c_{0}|<1$. For all $n\in\mathbf{N}$ we have
$c_{0}^{n}v_{0}\neq 0$ and
$0<\left\|c_{0}^{n+1}v_{0}\right\|=\left|c_{0}^{n+1}\right|\left\|v_{0}\right\|<\left|c_{0}^{n}\right|\left\|v_{0}\right\|=\left\|c_{0}^{n}v_{0}\right\|.$
Thus, the vector space $V$ is infinite and
(11) $\inf\\{\|x\|:x\in
V\setminus\\{0\\}\\}=\inf\\{\|c_{0}^{n}v_{0}\|:n=1,2,3,\ldots\\}=0.$
###### Lemma 3.
Let $\mathbf{F}$ be a field with a nontrivial absolute value. Let $V$ be a
nonzero vector space over $\mathbf{F}$ that has a norm with respect to the
absolute value on $\mathbf{F}$. Let $A^{\prime}$ be a finite subset of $V$ and
let $b\in V$.
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with coefficients
$c_{i}\in\mathbf{F}$. If the linear form $\varphi$ has property $N$, then for
every $\varepsilon>0$ there are infinitely many nonzero vectors $a\in V$ such
that
$\|a-b\|<\varepsilon$
and, for all subsets $J$ of $\\{1,\ldots,h\\}$, the sets
$\Phi_{J}(A^{\prime},a)=\varphi_{J}(A^{\prime})+\left(\sum_{j\in
J^{c}}c_{j}\right)a$
are pairwise disjoint.
###### Proof.
If $A^{\prime}=\emptyset$, then $\varphi_{J}(A^{\prime})=\\{0\\}$ for all
$J\subseteq\\{1,\ldots,h\\}$ and
$\Phi_{J}(A^{\prime},a)=\left\\{\left(\sum_{j\in
J^{c}}c_{j}\right)a\right\\}$. Because $\varphi$ has property $N$, for every
nonzero vector $a\in V$ the vectors $\left(\sum_{j\in
J^{c}}c_{j}\right)a=s(J^{c})a$ are distinct and so the sets
$\Phi_{J}(A^{\prime},a)$ are pairwise disjoint. Choose any of the infinitely
many nonzero vectors $a$ such that $\|a-b\|<\varepsilon$.
Let $A^{\prime}\neq\emptyset$ and $x\in V$. For distinct subsets $J_{1}$ and
$J_{2}$ of $\\{1,\ldots,h\\}$, we have
(12)
$\Phi_{J_{1}}(A^{\prime},b+x)\cap\Phi_{J_{2}}(A^{\prime},b+x)\neq\emptyset$
if and only if there exist vectors $a_{1,j}\in A^{\prime}$ for all $j\in
J_{1}$ and $a_{2,j}\in A^{\prime}$ for all $j\in J_{2}$ such that
(13) $\sum_{j\in J_{1}}c_{j}a_{1,j}+\sum_{j\in J^{c}_{1}}c_{j}(b+x)=\sum_{j\in
J_{2}}c_{j}a_{2,j}+\sum_{j\in J^{c}_{2}}c_{j}(b+x).$
Let $K=J^{c}_{1}\cap J^{c}_{2}$. The sets $I_{1}=J^{c}_{1}\setminus K$ and
$I_{2}=J^{c}_{2}\setminus K$ are disjoint. If $I_{1}=I_{2}=\emptyset$, then
$K=J^{c}_{1}=J^{c}_{2}$ and so $J_{1}=J_{2}$, which is absurd. Therefore, the
sets $I_{1}$ and $I_{2}$ are disjoint sets, not both empty.
Because the linear form $\varphi$ has property $N$, we have
$\sum_{j\in I_{1}}c_{j}=s(I_{1})\neq s(I_{2})=\sum_{j\in I_{2}}c_{j}$
and so
$c=\sum_{j\in I_{2}}c_{j}-\sum_{j\in I_{1}}c_{j}\neq 0.$
Thus, the scalar $c$ is invertible in $\mathbf{F}$. From (13) we obtain
$\displaystyle\sum_{j\in J_{1}}c_{j}a_{1,j}-\sum_{j\in J_{2}}c_{j}a_{2,j}$
$\displaystyle=\sum_{j\in J^{c}_{2}}c_{j}(b+x)-\sum_{j\in
J^{c}_{1}}c_{j}(b+x)$ $\displaystyle=\sum_{j\in I_{2}}c_{j}(b+x)-\sum_{j\in
I_{1}}c_{j}(b+x)$ $\displaystyle=\left(\sum_{j\in I_{2}}c_{j}-\sum_{j\in
I_{1}}c_{j}\right)(b+x)$ $\displaystyle=c(b+x)$
and
(14) $x=c^{-1}\left(\sum_{j\in J_{1}}c_{j}a_{1,j}-\sum_{j\in
J_{2}}c_{j}a_{2,j}\right)-b.$
Because the set $A^{\prime}$ is nonempty and finite, the set $X$ of vectors
$x$ in $V$ of the form (14) is also nonempty and finite. If $X=\\{0\\}$, let
$\delta=1$. If $X\neq\\{0\\}$, let
(15) $\delta=\min\\{\|x\|:x\in X\setminus\\{0\\}\\}>0$
and let
(16) $\varepsilon^{\prime}=\min(\delta,\varepsilon)>0.$
By (11), there are infinitely many vectors $x_{0}$ in $V$ such that
(17) $0<\|x_{0}\|<\varepsilon^{\prime}.$
It follows from (15) and (16) that each such vector satisfies $x_{0}\notin X$,
and so
$\Phi_{J_{1}}(A^{\prime},b+x_{0})\cap\Phi_{J_{2}}(A^{\prime},b+x_{0})=\emptyset$
for all distinct subsets $J_{1}$ and $J_{2}$ of $\\{1,\ldots,h\\}$. Choosing
$a=b+x_{0}$ completes the proof. ∎
Let $\mathbf{F}$ be a field with a nontrivial absolute value, and let $V$ be a
vector space over $\mathbf{F}$ that has a norm with respect to the absolute
value on $\mathbf{F}$. Let $\mathbf{N}=\\{1,2,3,\ldots\\}$ be the set of
positive integers. Let $A=\\{a_{k}:k\in\mathbf{N}\\}$ and
$B=\\{b_{k}:k\in\mathbf{N}\\}$ be sets of not necessarily distinct vectors in
$V$. Let $\varepsilon=\\{\varepsilon_{k}:k\in\mathbf{N}\\}$ be a set of
positive real numbers. The set $B$ is an _$\varepsilon$ -perturbation_ of the
set $A$ if
$\|a_{k}-b_{k}\|<\varepsilon_{k}$
for all $k\in\mathbf{N}$.
###### Theorem 2.
Let $\mathbf{F}$ be a field with a nontrivial absolute value and let $V$ be a
vector space over $\mathbf{F}$ that has a norm with respect to the absolute
value on $\mathbf{F}$. Let $\varepsilon=\\{\varepsilon_{k}:k=1,2,3,\ldots\\}$
be a set of positive real numbers. Let $\varphi$ be a linear form with
coefficients in $\mathbf{F}$ that has property $N$. For every set
$B=\\{b_{k}:k=1,2,3,\ldots\\}$ of vectors in $V$, there is a $\varphi$-Sidon
set $A=\\{a_{k}:k=1,2,3,\ldots\\}$ of vectors in $V$ such that
(18) $\|a_{k}-b_{k}\|<\varepsilon_{k}$
for all $k=1,2,3,\ldots$.
###### Proof.
We construct the set $A$ inductively. Begin by choosing $a_{1}=b_{1}$. Every
set with one element is a $\varphi$-Sidon set, and so $A_{1}=\\{a_{1}\\}$ is a
$\varphi$-Sidon set such that $\|a_{1}-b_{1}\|=0<\varepsilon_{1}$.
Let $n\geq 1$, and let $A_{n}=\\{a_{1},\ldots,a_{n}\\}$ be a $\varphi$-Sidon
set that satisfies inequality (18) for all $k\in\\{1,\ldots,n\\}$. Applying
Lemma 3 to the finite set $A^{\prime}=A_{n}$ and the vector $b=b_{n+1}$, we
obtain a vector $a_{n+1}\in V$ such that
$\|a_{n+1}-b_{n+1}\|<\varepsilon_{n+1}$ and the sets $\Phi_{J}(A_{n},a_{n+1})$
are pairwise disjoint for all $J\subseteq\\{1,\ldots,h\\}$. The set $A_{n}$ is
$\varphi$-Sidon, and so, by Lemma 1, the set $A_{n+1}=A_{n}\cup\\{a_{n+1}\\}$
is a $\varphi$-Sidon set. This completes the proof. ∎
###### Theorem 3.
Let $\mathbf{F}$ be a field with a nontrivial absolute value, and let
$\varphi$ be a linear form with coefficients in $\mathbf{F}$ that has property
$N$. Let $V$ be a vector space over $\mathbf{F}$ that has a norm with respect
to absolute value on $\mathbf{F}$. For every set
$B=\\{b_{k}:k=1,2,3,\ldots\\}$ of vectors in $V$, there exists a
$\varphi$-Sidon set $A=\\{a_{k}:k=1,2,3,\ldots\\}$ in $V$ such that
$\lim_{k\rightarrow\infty}\|a_{k}-b_{k}\|=0.$
###### Proof.
This follows from Theorem 2 applied to any sequence
$\varepsilon=\\{\varepsilon_{k}:k=1,2,3,\ldots\\}$ of positive numbers such
that $\lim_{k\rightarrow\infty}\varepsilon_{k}=0$. ∎
## 5\. $p$-adic $\varphi$-Sidon sets
Let $\mathbf{P}=\\{2,3,5,\ldots\\}$ be the set of prime numbers. For every
prime number $p$, let $|\ |_{p}$ be the usual $p$-adic absolute value with
$|p|_{p}=1/p$. Every integer $r$ satisfies $|r|_{p}\leq 1$.
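For concreteness, a minimal sketch of the $p$-adic absolute value restricted to the integers (the function name is illustrative; on rationals one would subtract the valuations of numerator and denominator):

```python
def padic_abs(r, p):
    # |r|_p = p^(-v) where p^v exactly divides r; by convention |0|_p = 0
    if r == 0:
        return 0.0
    v = 0
    while r % p == 0:
        r //= p
        v += 1
    return p ** -v
```

For example $|12|_{2}=1/4$, $|12|_{3}=1/3$, and $|7|_{5}=1$, illustrating that $|r|_{p}\leq 1$ for every integer $r$.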
###### Lemma 4.
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with rational
coefficients $c_{i}$ that satisfies property $N$. Let $\mathbf{P}_{0}$ be a
nonempty finite set of prime numbers. Let $A^{\prime}$ be a finite set of
integers and let $b$ be an integer. For every $\varepsilon>0$ there are
infinitely many positive integers $a$ such that
$|a-b|_{p}<\varepsilon$
for all $p\in\mathbf{P}_{0}$ and the sets
$\Phi_{J}(A^{\prime},a)=\varphi_{J}(A^{\prime})+\left(\sum_{j\in
J^{c}}c_{j}\right)a$
are pairwise disjoint for all subsets $J$ of $\\{1,\ldots,h\\}$.
###### Proof.
Let $\varepsilon^{\prime}>0$. Choose a positive integer $k$ such that
$\frac{1}{2^{k}}<\varepsilon^{\prime}.$
The integer $b$ is not necessarily positive, but for all sufficiently large
positive integers $r$ we have
(19) $a=b+r\prod_{p\in\mathbf{P}_{0}}p^{k}>0.$
Let $p\in\mathbf{P}_{0}$. For all integers $r$ satisfying (19) we have
$|a-b|_{p}=|r|_{p}\prod_{q\in\mathbf{P}_{0}}|q^{k}|_{p}\leq|p^{k}|_{p}=\frac{1}{p^{k}}\leq\frac{1}{2^{k}}<\varepsilon^{\prime}.$
The proof of Lemma 4 is the same as the proof of Lemma 3 until the choice of
$x_{0}$, at which point we choose a positive integer
$x_{0}=r\prod_{p\in\mathbf{P}_{0}}p^{k}$ that satisfies inequality (19). This
completes the proof. ∎
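The construction in the proof of Lemma 4 is explicit enough to check numerically. The following Python sketch is ours, not part of the paper: the helper `padic_abs` and the concrete choices of $b$, $k$, and $\mathbf{P}_{0}$ are illustrative assumptions. It builds $a=b+r\prod_{p\in\mathbf{P}_{0}}p^{k}$ and verifies $|a-b|_{p}<\varepsilon^{\prime}$ for every $p\in\mathbf{P}_{0}$.

```python
from fractions import Fraction

def padic_abs(n, p):
    """p-adic absolute value |n|_p = p^(-v), where p^v exactly divides n."""
    if n == 0:
        return Fraction(0)
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return Fraction(1, p ** v)

# Construction from the proof: a = b + r * prod(p^k for p in P0).
P0 = [2, 3, 5]
b = -7                      # b need not be positive
eps = Fraction(1, 100)
k = 7                       # 1/2^7 = 1/128 < 1/100
prod = 1
for p in P0:
    prod *= p ** k
r = 1
while b + r * prod <= 0:    # choose r large enough that a > 0 (inequality (19))
    r += 1
a = b + r * prod

# |a - b|_p = |r|_p * |p^k|_p <= 1/p^k <= 1/2^k < eps for each p in P0.
for p in P0:
    assert padic_abs(a - b, p) <= Fraction(1, p ** k) < eps
print(a, [padic_abs(a - b, p) for p in P0])
```

Since $|r|_{p}\leq 1$ for every integer $r$, the bound does not depend on how large $r$ had to be taken to make $a$ positive.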
###### Theorem 4.
Let $\varphi$ be a linear form with rational coefficients that satisfies
property $N$. Let $\\{\varepsilon_{k}:k=1,2,3,\ldots\\}$ be a sequence of
positive real numbers and let $\\{p_{k}:k=1,2,3,\ldots\\}$ be a sequence of
prime numbers. For every sequence of integers $B=\\{b_{k}:k=1,2,3,\ldots\\}$,
there exists a strictly increasing sequence of positive integers
$A=\\{a_{k}:k=1,2,3,\ldots\\}$ such that $A$ is a $\varphi$-Sidon set and
$|a_{k}-b_{k}|_{p_{j}}<\varepsilon_{k}$
for all $k\in\mathbf{N}$ and $j\in\\{1,\ldots,k\\}$.
###### Proof.
The proof of Theorem 4 is an inductive construction based on Lemma 4. Choose a
positive integer $k_{1}$ such that $1/p_{1}^{k_{1}}<\varepsilon_{1}$ and
$b_{1}+p_{1}^{k_{1}}>0$. Let $a_{1}=b_{1}+p_{1}^{k_{1}}$. The set
$A_{1}=\\{a_{1}\\}$ is a $\varphi$-Sidon set and
$|a_{1}-b_{1}|_{p_{1}}<\varepsilon_{1}$.
For $n\geq 1$, let $A_{n}=\\{a_{1},\ldots,a_{n}\\}$ be a set of positive
integers with $a_{1}<\cdots<a_{n}$ such that $A_{n}$ is a $\varphi$-Sidon set
and
$|a_{k}-b_{k}|_{p_{j}}<\varepsilon_{k}$
for all $k\in\\{1,\ldots,n\\}$ and $j\in\\{1,\ldots,k\\}$. We apply Lemma 4 to
the set $A^{\prime}=A_{n}$, the integer $b=b_{n+1}$, the finite set of primes
$\mathbf{P}_{0}=\\{p_{1},\ldots,p_{n},p_{n+1}\\}$, and
$\varepsilon^{\prime}=\varepsilon_{n+1}>0$ to obtain an integer
$a_{n+1}>a_{n}$ such that
$|a_{n+1}-b_{n+1}|_{p_{j}}<\varepsilon_{n+1}$
for all $j\in\\{1,\ldots,n,n+1\\}$ and the sets $\Phi_{J}(A_{n},a_{n+1})$
are pairwise disjoint for all $J\subseteq\\{1,\ldots,h\\}$. It follows from
Lemma 1 that $A_{n+1}$ is a $\varphi$-Sidon set. This completes the proof. ∎
###### Theorem 5.
Let $\varphi$ be a linear form with rational coefficients that satisfies
property $N$. Let $B=\\{b_{k}:k=1,2,3,\ldots\\}$ be a sequence of integers.
There exists a strictly increasing $\varphi$-Sidon set of positive integers
$A=\\{a_{k}:k=1,2,3,\ldots\\}$ such that, for every prime number $p$, the set
$A$ is $p$-adically asymptotic to $B$ in the sense that
$\lim_{k\rightarrow\infty}|a_{k}-b_{k}|_{p}=0.$
###### Proof.
This follows from Theorem 4 applied to an enumeration of the set of all prime numbers and any
sequence $\varepsilon=\\{\varepsilon_{k}:k=1,2,3,\ldots\\}$ of positive
numbers such that $\lim_{k\rightarrow\infty}\varepsilon_{k}=0$. ∎
## 6\. Growth of $\varphi$-Sidon sets
Let $f(t)$ be a real-valued or complex-valued function defined for $t\geq
t_{0}$. Let $g(t)$ be a positive function defined for $t\geq t_{0}$. We write
$f(t)\ll g(t)$
if there exist constants $C_{1}>0$ and $t_{1}\geq t_{0}$ such that $|f(t)|\leq
C_{1}g(t)$ for all $t\geq t_{1}$. We write
$f(t)\gg g(t)$
if there exist constants $C_{2}>0$ and $t_{2}\geq t_{0}$ such that $|f(t)|\geq
C_{2}g(t)$ for all $t\geq t_{2}$.
Let $A$ be a set of positive integers. The _growth function_ or _counting
function_ of $A$ is the function $A(n)$ that counts the number of positive
integers in the set $A\cap\\{1,\ldots,n\\}$. The number of $h$-fold sums of
integers taken from the set $A\cap\\{1,\ldots,n\\}$ is
$\binom{A(n)+h-1}{h}$
and each of these sums is at most $hn$. If $A$ is a classical $h$-Sidon set,
then these sums are distinct and
$\frac{A(n)^{h}}{h!}\leq\binom{A(n)+h-1}{h}\leq hn.$
This simple counting argument proves that
$A(n)\ll n^{1/h}.$
The upper bound is tight. Bose and Chowla [1] proved that for every positive
integer $n$ there exist finite Sidon sets $A$ with
$A\subseteq\\{1,\ldots,n\\}\qquad\text{and}\qquad\operatorname{\text{card}}(A)\gg
n^{1/h}.$
We do not have best possible upper bounds for infinite Sidon sets. Erdős (in
Stöhr [24]) constructed an infinite Sidon set $A$ of order 2 with
$\limsup_{n\rightarrow\infty}\frac{A(n)}{\sqrt{n}}\geq\frac{1}{2}$
and so $A(n)\gg\sqrt{n}$ for infinitely many $n$, but he also proved that
every classical Sidon set of order 2 satisfies
$\liminf_{n\rightarrow\infty}A(n)\sqrt{\frac{\log n}{n}}\ll 1$
and so $A(n)\ll\sqrt{n/\log n}$ for infinitely many $n$.
It is of interest to obtain upper bounds for the size of $\varphi$-Sidon sets.
Let $\mathbf{F}$ be a field with an absolute value. The _counting function_ of
a subset $X$ of $\mathbf{F}$ is
$X(t)=\operatorname{\text{card}}\left\\{x\in X:|x|\leq t\right\\}.$
###### Theorem 6.
Let $\mathbf{F}$ be a field with an absolute value. Let
$\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with coefficients in
$\mathbf{F}$, and let $C=\sum_{i=1}^{h}|c_{i}|$. Let $X$ be a subset of
$\mathbf{F}$ such that $\varphi(X)\subseteq X$. If $A$ is a $\varphi$-Sidon
subset of $X$, then
$A(t)\leq X(Ct)^{1/h}$
for all $t\geq 0$.
###### Proof.
Let $A^{\prime}=\\{a\in A:|a|\leq t\\}$. We have
$A(t)=\operatorname{\text{card}}(A^{\prime})$ and, because $A$ is a
$\varphi$-Sidon set,
$A(t)^{h}=\operatorname{\text{card}}(\varphi(A^{\prime})).$
If $a_{1},\ldots,a_{h}\in A^{\prime}$, then
$b=\varphi(a_{1},\ldots,a_{h})\in\varphi(A^{\prime})\subseteq X$ and
$\displaystyle|b|$
$\displaystyle=\left|\varphi(a_{1},\ldots,a_{h})\right|=\left|\sum_{i=1}^{h}c_{i}a_{i}\right|$
$\displaystyle\leq\sum_{i=1}^{h}\left|c_{i}a_{i}\right|\leq\sum_{i=1}^{h}\left|c_{i}\right|\max(|a_{i}|:i=1,\ldots,h)$ $\displaystyle\leq Ct.$
Therefore,
$A(t)^{h}=\operatorname{\text{card}}(\varphi(A^{\prime}))\leq\operatorname{\text{card}}\\{x\in
X:|x|\leq Ct\\}=X(Ct)$
and
$A(t)\leq X(Ct)^{1/h}.$
This completes the proof. ∎
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with nonzero rational
coefficients. Let $m$ be a common multiple of the denominators of the
coefficients $c_{1},\ldots,c_{h}$, and let $d$ be the greatest common divisor
of the integers $mc_{1},\ldots,mc_{h}$. For $i\in\\{1,\ldots,h\\}$, let
$c^{\prime}_{i}=mc_{i}/d$. The integers $c^{\prime}_{i}$ are nonzero and
relatively prime. Consider the linear form
$\varphi^{\prime}=\sum_{i=1}^{h}c^{\prime}_{i}x_{i}$. We have
$\varphi=\frac{d}{m}\sum_{i=1}^{h}\frac{mc_{i}}{d}x_{i}=\frac{d}{m}\sum_{i=1}^{h}c^{\prime}_{i}x_{i}=\frac{d}{m}\varphi^{\prime}.$
It follows that a set is a $\varphi$-Sidon set if and only if it is a
$\varphi^{\prime}$-Sidon set. Thus, in the study of $\varphi$-Sidon sets, a
linear form with nonzero rational coefficients can be replaced with a linear
form with nonzero relatively prime integer coefficients.
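This reduction is easy to mechanize. The following Python sketch is ours, not the paper's; the helper name `integerize` is hypothetical. It computes the relatively prime integer coefficients $c^{\prime}_{i}=mc_{i}/d$ from the rational coefficients $c_{i}$.

```python
from fractions import Fraction
from math import gcd, lcm

def integerize(coeffs):
    """Replace rational coefficients c_i by relatively prime integers
    c'_i = m*c_i/d, where m is a common denominator of the c_i and
    d is the greatest common divisor of the integers m*c_i."""
    coeffs = [Fraction(c) for c in coeffs]
    m = lcm(*(c.denominator for c in coeffs))
    mc = [int(m * c) for c in coeffs]      # exact: m clears all denominators
    d = gcd(*mc)
    return [c // d for c in mc]            # exact division by d

# phi = (3/4)x1 - (1/2)x2 + (5/6)x3  ->  phi' = 9x1 - 6x2 + 10x3
print(integerize([Fraction(3, 4), Fraction(-1, 2), Fraction(5, 6)]))
```

A set is $\varphi$-Sidon if and only if it is $\varphi^{\prime}$-Sidon, so nothing is lost by this normalization.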
###### Theorem 7.
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with integer
coefficients. If $A$ is a $\varphi$-Sidon set of integers, then
$A(t)=\operatorname{\text{card}}\\{a\in A:|a|\leq t\\}\ll t^{1/h}.$
###### Proof.
We have $\varphi(\mathbf{Z})\subseteq\mathbf{Z}$. Let $[t]$ denote the integer
part of the real number $t$. With the usual absolute value, the counting
function of $\mathbf{Z}$ is $\mathbf{Z}(t)=2[t]+1\leq 2t+1$. Applying Theorem
6 with $X=\mathbf{Z}$, we obtain
$A(t)\leq\mathbf{Z}(Ct)^{1/h}\leq(2Ct+1)^{1/h}\ll t^{1/h}.$
This completes the proof. ∎
###### Theorem 8.
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with integer
coefficients that satisfies condition $N$. There exists an infinite
$\varphi$-Sidon set $A=\\{a_{k}:k\in\mathbf{N}\\}$ of distinct positive
integers such that
(20) $a_{k+1}\leq 4^{h}k^{2h-1}+k$
for all $k\in\mathbf{N}$.
###### Proof.
We construct the $\varphi$-Sidon set $A=\\{a_{k}:k\in\mathbf{N}\\}$
inductively. The set $A_{1}=\\{a_{1}\\}$ is a $\varphi$-Sidon set for every
integer $a_{1}$. Let $a_{1}=1$.
Let $k\geq 1$ and let $A_{k}=\\{a_{1},\ldots,a_{k}\\}$ be a $\varphi$-Sidon
set of positive integers. Let $b$ be a positive integer. By Lemma 1, the set
$A_{k}\cup\\{b\\}$ is a $\varphi$-Sidon set if and only if the sets
$\Phi_{J}(A_{k},b)=\varphi_{J}(A_{k})+\left(\sum_{j\in J^{c}}c_{j}\right)b$
are pairwise disjoint for all $J\subseteq\\{1,\ldots,h\\}$.
Let $J_{1}$ and $J_{2}$ be distinct subsets of $\\{1,\ldots,h\\}$. The sets
$J_{1}\setminus(J_{1}\cap J_{2})$ and $J_{2}\setminus(J_{1}\cap J_{2})$ are
distinct and disjoint. We have
$\Phi_{J_{1}}(A_{k},b)\cap\Phi_{J_{2}}(A_{k},b)\neq\emptyset$
if and only if there exist integers $a_{1,j}\in A_{k}$ for all $j\in J_{1}$
and $a_{2,j}\in A_{k}$ for all $j\in J_{2}$ such that
(21) $\sum_{j\in J_{1}}c_{j}a_{1,j}+\left(\sum_{j\in
J^{c}_{1}}c_{j}\right)b=\sum_{j\in J_{2}}c_{j}a_{2,j}+\left(\sum_{j\in
J^{c}_{2}}c_{j}\right)b.$
The integer
$\displaystyle c$ $\displaystyle=\sum_{j\in J^{c}_{2}}c_{j}-\sum_{j\in
J^{c}_{1}}c_{j}=s(J_{2}^{c})-s(J_{1}^{c})$
$\displaystyle=s\left(J_{1}\setminus(J_{1}\cap
J_{2})\right)-s\left(J_{2}\setminus(J_{1}\cap J_{2})\right)$
is nonzero because the linear form $\varphi$ satisfies condition $N$. The
integer $b$ satisfies equation (21) if and only if
(22) $cb=\sum_{j\in J_{1}}c_{j}a_{1,j}-\sum_{j\in J_{2}}c_{j}a_{2,j}.$
Thus, there is at most one integer $b$ that satisfies equation (22).
Let $\operatorname{\text{card}}(J_{1})=j_{1}$ and
$\operatorname{\text{card}}(J_{2})=j_{2}$. The sets $J_{1}$ and $J_{2}$ are
distinct subsets of $\\{1,\ldots,h\\}$ and so
$j_{1}+j_{2}\leq 2h-1.$
The number of integers of the form
$\sum_{j\in J_{1}}c_{j}a_{1,j}-\sum_{j\in J_{2}}c_{j}a_{2,j}$
with $a_{1,j}\in A_{k}$ and $a_{2,j}\in A_{k}$ is at most ${k}^{j_{1}+j_{2}}$.
The number of ordered pairs $(J_{1},J_{2})$ of subsets of $\\{1,\ldots,h\\}$
of cardinalities $j_{1}$ and $j_{2}$, respectively, is
$\binom{h}{j_{1}}\binom{h}{j_{2}}.$
Thus, the number of equations of the form (22) is at most
$\displaystyle\underbrace{\sum_{j_{1}=0}^{h}\sum_{j_{2}=0}^{h}}_{j_{1}+j_{2}\leq
2h-1}\binom{h}{j_{1}}\binom{h}{j_{2}}{k}^{j_{1}+j_{2}}$
$\displaystyle\leq\sum_{j_{1}=0}^{h}\binom{h}{j_{1}}\sum_{j_{2}=0}^{h}\binom{h}{j_{2}}{k}^{2h-1}$
$\displaystyle=4^{h}k^{2h-1}$
and so there are at most $4^{h}k^{2h-1}+k$ positive integers $b$ such that
$b\notin A_{k}$ and $A_{k}\cup\\{b\\}$ is not a $\varphi$-Sidon set. It
follows that there exists a positive integer $a_{k+1}$ such that
1. (i)
$a_{k+1}\notin A_{k}$,
2. (ii)
$A_{k+1}=A_{k}\cup\\{a_{k+1}\\}$ is a $\varphi$-Sidon set,
3. (iii)
$a_{k+1}\leq 4^{h}k^{2h-1}+k$.
This completes the proof. ∎
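The proof of Theorem 8 is effectively a greedy algorithm. The following Python sketch is ours, not the paper's; the concrete form $\varphi=x_{1}+2x_{2}$ is an assumption chosen because its coefficient subset sums $0,1,2,3$ are distinct, so condition $N$ holds. It runs the inductive construction, always taking the least admissible next element, and checks bound (20) at every step.

```python
from itertools import product

def is_phi_sidon(A, coeffs):
    """A is phi-Sidon iff phi = sum c_i x_i is injective on h-tuples from A."""
    vals = [sum(c * a for c, a in zip(coeffs, t))
            for t in product(A, repeat=len(coeffs))]
    return len(vals) == len(set(vals))

coeffs = (1, 2)        # phi = x1 + 2*x2; condition N holds for (1, 2)
h = len(coeffs)

# Inductive construction: extend A_k by the least admissible integer.
A = [1]
while len(A) < 12:
    k = len(A)
    b = A[-1] + 1
    while not is_phi_sidon(A + [b], coeffs):
        b += 1
    assert b <= 4 ** h * k ** (2 * h - 1) + k   # bound (20), comfortably met
    A.append(b)
print(A)
```

For $h=2$ the bound $4^{h}k^{2h-1}+k=16k^{3}+k$ is far from tight for small $k$; the greedy values grow much more slowly.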
###### Theorem 9.
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with integer
coefficients that satisfies condition $N$. There exists an infinite
$\varphi$-Sidon set $A$ of positive integers such that
$A(t)\gg t^{1/(2h-1)}.$
###### Proof.
This follows from inequality (20): the set $A$ of Theorem 8 satisfies $a_{k}\ll k^{2h-1}$, and so $A(t)\gg t^{1/(2h-1)}$. ∎
## 7\. Open problems
1. (1)
Let $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ be a linear form with integer
coefficients. Let $\mathbf{P}$ be the set of prime numbers and let $A=\\{\log
p:p\in\mathbf{P}\\}$. Consider the $h$-tuple
$(p_{1},\ldots,p_{h})\in\mathbf{P}^{h}$ of not necessarily distinct prime
numbers, and let $\mathbf{P}_{0}=\\{p\in\mathbf{P}:p=p_{i}\text{ for some
}i\in\\{1,\ldots,h\\}\\}$. For each $p\in\mathbf{P}_{0}$, let
$I_{p}=\\{i\in\\{1,\ldots,h\\}:p_{i}=p\\}\qquad\text{and}\qquad s(I_{p})=\sum_{i\in
I_{p}}c_{i}.$
We have
$\varphi(p_{1},\ldots,p_{h})=\sum_{i=1}^{h}c_{i}\log
p_{i}=\sum_{p\in\mathbf{P}_{0}}s(I_{p})\log
p=\log\prod_{p\in\mathbf{P}_{0}}p^{s(I_{p})}.$
If the linear form $\varphi$ satisfies property $N$, then, by the fundamental
theorem of arithmetic, the set $A=\\{\log p:p\in\mathbf{P}\\}$ is a
$\varphi$-Sidon set.
For the linear form $\psi=x_{1}+\cdots+x_{h}$, Ruzsa [22] used the set $A$ to
construct large classical Sidon sets of positive integers. Are such
constructions also possible for $\varphi$-Sidon sets of positive integers?
2. (2)
Let $A=\\{a_{k}:k=1,2,3,\ldots\\}$ and $B=\\{b_{k}:k=1,2,3,\ldots\\}$ be
sequences of integers. The set $A$ is a _polynomial perturbation_ of $B$ if
$|a_{k}-b_{k}|<k^{r}$
for some $r>0$ and all $k\geq k_{0}$. The set $A$ is a _bounded perturbation_
of $B$ if
$|a_{k}-b_{k}|<m_{0}$
for some $m_{0}>0$ and all $k\geq k_{0}$.
Let $\varphi$ be a linear form with integer coefficients that satisfies
condition $N$. Let $B$ be a set of integers. Does there exist a
$\varphi$-Sidon set of integers that is a polynomial perturbation of $B$?
Does there exist a $\varphi$-Sidon set of integers that is a bounded
perturbation of $B$?
3. (3)
Let $\varphi$ be a linear form with integer coefficients that satisfies
condition $N$. For every positive integer $n$, determine the cardinality of
the largest $\varphi$-Sidon subset of $\\{1,2,\ldots,n\\}$.
4. (4)
There exists $c>0$ such that, for every positive integer $n$, there is a
classical Sidon set $A\subseteq\\{1,\ldots,n\\}$ with $A(n)\geq c\sqrt{n}$.
However, there is no infinite classical Sidon set $A$ of positive integers
such that $A(n)\geq c\sqrt{n}$ for some $c>0$ and all $n\geq n_{0}$. Indeed,
Erdős (in Stöhr [24]) proved that every infinite classical Sidon set satisfies
$\liminf_{n\rightarrow\infty}A(n)\sqrt{\frac{\log n}{n}}\ll 1.$
Are there analogous lower bounds for infinite $\varphi$-Sidon sets of positive
integers associated with binary linear forms $\varphi=c_{1}x_{1}+c_{2}x_{2}$
or with linear forms $\varphi=\sum_{i=1}^{h}c_{i}x_{i}$ for $h\geq 3$?
5. (5)
Consider sets of integers. One might expect that the elements of a set $A$ of
integers that is “sufficiently random” or “in general position” will be a
classical Sidon set, that is, will not contain a nontrivial solution of the
equation $x_{1}+x_{2}=x_{3}+x_{4}$. Equivalently, the set $A$ will be one-to-
one (up to transposition) on the function $f(x_{1},x_{2})=x_{1}+x_{2}$. There
is nothing special about the function $x_{1}+x_{2}$. One could ask if $A$ is
one-to-one (up to permutation) on some symmetric function, or one-to-one on a
function that is not symmetric. The functions considered in this paper are
linear forms in $h$ variables.
Conversely, given the set $A$ of integers, we can ask what are the functions
(in some particular set $\mathcal{F}$ of functions) with respect to which the
set $A$ is one-to-one. This inverse problem is considered in Nathanson [17].
## References
* [1] R. C. Bose and S. Chowla, _Theorems in the additive theory of numbers_ , Comment. Math. Helv. 37 (1962/63), 141–147.
* [2] B. Bukh, _Sums of dilates_ , Combin. Probab. Comput. 17 (2008), no. 5, 627–639.
* [3] J. Cilleruelo, O. Serra, and M. Wötzel, _Sidon set systems_ , Rev. Mat. Iberoam. 36 (2020), no. 5, 1527–1548.
* [4] Q. Dubroff, J. Fox, and M. Q. Xu, _A note on the Erdős distinct subset sums problem_ , SIAM J. Discrete Math. 35 (2021), 322–324.
* [5] P. Erdős, Problems and results in additive number theory, _Colloque sur la Théorie des Nombres, Bruxelles, 1955_ , Georges Thone, Liège; Masson & Cie, Paris, 1956, 127–137.
* [6] R. K. Guy, _Unsolved Problems in Number Theory_ , Springer-Verlag, New York, 2004.
* [7] H. Halberstam and K. F. Roth, _Sequences, Vol. 1_ , Oxford University Press, Oxford, 1966, Reprinted by Springer-Verlag, Heidelberg, in 1983.
* [8] S. Z. Kiss and C. Sándor, _Generalized asymptotic Sidon basis_ , Discrete Math. 344 (2021), no. 2, 112208, 5.
* [9] Y. Kohayakawa, S. J. Lee, C. G. Moreira, and V. Rödl, _Infinite Sidon sets contained in sparse random sets of integers_ , SIAM J. Discrete Math. 32 (2018), no. 1, 410–449.
* [10] M. Kovačević and V. Y. F. Tan, _Improved bounds on Sidon sets via lattice packings of simplices_ , SIAM J. Discrete Math. 31 (2017), no. 3, 2269–2278.
* [11] H. Liu and P. P. Pach, _The number of multiplicative Sidon sets of integers_ , J. Combin. Theory Ser. A 165 (2019), 152–175.
* [12] M. B. Nathanson, _Problems in additive number theory. I_ , Additive Combinatorics, CRM Proc. Lecture Notes, vol. 43, Amer. Math. Soc., Providence, RI, 2007, pp. 263–270.
* [13] by same author, _Representation functions of bases for binary linear forms_ , Funct. Approx. Comment. Math. 37 (2007), no. part 2, 341–350.
* [14] by same author, _Inverse problems for linear forms over finite sets of integers_ , J. Ramanujan Math. Soc. 23 (2008), no. 2, 151–165.
* [15] by same author, _Problems in additive number theory. II. Linear forms and complementing sets_ , J. Théor. Nombres Bordeaux 21 (2009), no. 2, 343–355.
* [16] by same author, _Comparison estimates for linear forms in additive number theory_ , J. Number Theory 184 (2018), 1–26.
* [17] by same author, _An inverse problem for Sidon sets_ , arXiv:2104.06501, 2021.
* [18] M. B. Nathanson, K. O’Bryant, B. Orosz, I. Ruzsa, and M. Silva, _Binary linear forms over finite sets of integers_ , Acta Arith. 129 (2007), no. 4, 341–361.
* [19] K. O’Bryant, _A complete annotated bibliography of work related to Sidon sequences_ , Electronic J. Combinatorics (2004), Dynamic Surveys DS 11.
* [20] P. P. Pach, _An improved upper bound for the size of the multiplicative 3-Sidon sets_ , Int. J. Number Theory 15 (2019), no. 8, 1721–1729.
* [21] P. P. Pach and C. Sándor, _On infinite multiplicative Sidon sets_ , European J. Combin. 76 (2019), 37–52.
* [22] I. Z. Ruzsa, _An infinite Sidon sequence_ , J. Number Theory 68 (1998), no. 1, 63–71.
* [23] T. Schoen and I. D. Shkredov, _An upper bound for weak $B_{k}$-sets_, SIAM J. Discrete Math. 33 (2019), no. 2, 837–844.
* [24] A. Stöhr, _Gelöste und ungelöste Fragen über Basen der natürlichen Zahlenreihe. I, II_ , J. Reine Angew. Math. 194 (1955), 40–65, 111–140.
* [25] W. Xu, _Popular differences and generalized Sidon sets_ , J. Number Theory 186 (2018), 103–120.
# VIS30K: A Collection of Figures and Tables from IEEE Visualization
Conference Publications
Jian Chen, Meng Ling, Rui Li, Petra Isenberg, Tobias Isenberg, Michael
Sedlmair, Torsten Möller, Robert S. Laramee, Han-Wei Shen, Katharina Wünsche,
and Qiru Wang J. Chen, M. Ling, R. Li, and H.-W. Shen are with The Ohio State
University, USA. E-mails: {chen.8028 $|$ ling.253 $|$ li.8950 $|$
shen.94}@osu.edu. P. Isenberg and T. Isenberg are with Inria, France. E-mails:
{petra.isenberg $|$ tobias.isenberg}@inria.fr. M. Sedlmair is with University
of Stuttgart, Germany. E-mail: [email protected]. T.
Möller and K. Wünsche are with University of Vienna, Austria. E-mails:
{torsten.moeller $|$ katharina.wuensche}@univie.ac.at. R. S. Laramee and Q.
Wang are with the University of Nottingham, UK. E-mail: {robert.laramee $|$
qiru.wang}@nottingham.ac.uk.
###### Abstract
We present the VIS30K dataset, a collection of 29,689 images that represents
30 years of figures and tables from each track of the IEEE Visualization
conference series (Vis, SciVis, InfoVis, VAST). VIS30K’s comprehensive
coverage of the scientific literature in visualization not only reflects the
progress of the field but also enables researchers to study the evolution of
the state-of-the-art and to find relevant work based on graphical content. We
describe the dataset and our semi-automatic collection process, which couples
convolutional neural networks (CNN) with curation. Extracting figures and
tables semi-automatically allows us to verify that no images are overlooked or
extracted erroneously. To improve quality further, we engaged in a peer-search
process for high-quality figures from early IEEE Visualization papers. With
the resulting data, we also contribute VISImageNavigator (VIN,
visimagenavigator.github.io), a web-based tool that facilitates searching and
exploring VIS30K by author names, paper keywords, title and abstract, and
years.
###### Index Terms:
Visualization, IEEE VIS, InfoVis, SciVis, VAST, dataset, bibliometrics,
images, figures, tables.
## 1 Introduction
Visualization is a discipline that inherently relies on images and videos to
explain and showcase its research. Images are thus an essential component of
scientific publications in our field. They facilitate comprehension of complex
scientific concepts [11, 41] and enable authors to refer to their proposed
visualization solutions, alternatives, and competing approaches or to
graphically explain algorithms, techniques, workflows, and study results.
Browsing a domain’s images can reveal temporal trends and common practices. It
facilitates the comparison of sub-disciplines [24]. Although figures are
ubiquitous in visualization publications, they are embedded in PDFs and remain
largely inaccessible via scholarly search tools such as digital libraries,
Google Scholar, CiteSeerX, or MS Academic Search. The primary goal of our
work—similar to that of past work on IEEE VIS papers [15], keywords [16], or
EuroVis papers [39]—is to extend the corpora of data we can use for studies of
the visualization field.
Our primary contribution is a dataset we call VIS30K (Fig. 1). It contains
images and tables from 30 years (1990–2019) of the IEEE VIS conference,
spanning all tracks: Vis, InfoVis, SciVis, and VAST. IEEE VIS is the longest-
running and largest conference focusing on visualization, and its images
reflect the evolution of the field. Our primary data sources include IEEE
Xplore, conference CDs, and hard copies of the conference proceedings from
which we obtain the images in their best possible quality. In addition to
images, we include tables as a special form of data organization that can be
informative to the community. Our dataset can serve many purposes. It enables
researchers to study the visual evolution of the field from an objective,
image-centric point of view. It assists teaching about visualization by
providing fast visual access to refereed research images and contributions. It
also can serve as a data source for researchers in other fields such as
computer vision or machine learning. And, finally, it supports visualization
researchers when browsing and discovering new work.
Collecting these figures and tables was challenging. We optimized data quality
with a hybrid solution. We first extracted figures via convolutional neural
networks (CNNs), followed by expert curation. This way we ensure reliable data
(i. e., completeness and image locations/dimensions), while at the same time
requiring a manageable amount of manual cleaning and verification.
Our secondary contribution is a web-based tool, VISImageNavigator (VIN,
visimagenavigator.github.io), that allows people to search and explore our
dataset. We cross-link VIN to the metadata of KeyVis [16] and VisPubData [15]
and their detailed bibliometric metadata. This metadata associated to papers,
and thus all images, allows us to support searching using text-based queries.
Figure 1: A timeline of selected images from all 30 years (1990–2019) of the IEEE
Visualization conference showing diverse and trending research work. Best
viewed electronically, zoomed in.
## 2 Related Work
Previous work from three areas inspired our own. The first surveys past work
and offers visual access to past publications. A second group collects and
analyzes metadata derived from visualization research papers. The third group
relates to the CNN-based extraction algorithms we employed for our data
extraction.
Visual Collections of Visualization Research. We are not the first to attempt
a visual overview of the visualization field. Yet past work generally focuses
on specific subareas of the research, each of which provides an overview of
work on the subtopic rather than a comprehensive and browsable image database.
For instance, some work has focused on providing references and representative
images of specific data or layout techniques: Schulz’s 300 tree-layout methods
[35], Kucher and Kerren’s more than 470 text visualizations [21], Aigner et
al.’s over 100 temporal data visualizations [1], and Kehrer and Hauser’s
multivariate and multifaceted data of more than 160 images [18]. Others examine
specific visualization applications, e. g., Kerren et al.’s biological data
[19], Kucher et al.’s sentiment analysis [22], Chatzimparmpas and Jusufi’s
trustworthy machine learning [4], and Diehl et al.’s VisGuides [9] on advice
and recommendations on visual design. In contrast to these focused
perspectives, VIS30K provides a broader coverage of all 30 years of IEEE VIS.
Our downloadable dataset comprises all the images contained in each paper,
rather than just a few samples per publication or approach.
The work most closely related to ours is Deng et al.’s VisImages collection of
IEEE InfoVis and IEEE VAST images [8] and Zeng et al.’s VISstory [44]. Both
sets of authors plan to release their datasets but only provide a subset of
our data. VISstory only covers data from 2009–2018, while VisImages does not
include IEEE Vis and SciVis paper images. Our work also differs in its
approach to quality control. We rely on expert input to check the capture of
all images, while VisImages uses crowd-sourcing. VISstory only tests a subset
of images for quality. Similar to these tools, we provide a web-based tool to
explore the image data, although we focus on different aspects. VisImages
categorizes image content in addition to metadata, and VISstory focuses on
paper-centered rather than image-centered views, in which each paper is encoded as a ring
with sectors standing for individual images.
Meta-Analysis of Visualization Publications. Another direction of research
centers on meta-analyses of the visualization field, without focusing on
visual content. Lam et al. [23], e. g., established seven empirical evaluation
scenarios by analyzing 850 papers appearing in the ‘information visualization’
subcommunity of IEEE VIS. Isenberg et al. [17] later extended this historical
analysis of evaluation practices to all tracks of IEEE VIS in a systematic
review of 581 papers. Isenberg et al. [16] further collected IEEE VIS paper
keywords to derive visualization topics from a metadata collection of IEEE VIS
publications [15]. We make use of metadata from this collection in our work to
gather paper PDFs prior to automatic extraction. Conceptually, our new VIS30K
extends this line of work by leveraging new image-based extraction methods
[27] and search tools to make the figure and table data accessible.
CNN-based Extraction Algorithms. Using data-driven CNN algorithms to train
classifiers to extract figures and tables is becoming increasingly popular
[31, 37, 6]. Current approaches to fine-grained recognition involve the
important step of preparing the annotated training data with ground-truth
labeling prior to training a model for prediction. Problems in this area have
inspired research in three major directions. The first is crowdsourcing to
annotate the document manually [37]. We did not use this solution for two
reasons. First, cleaning noisy crowdsourced annotations is time-consuming in
itself and also needs effective quality control [38]. Second, crowdsourcing
lacks flexibility: often we must know in advance if we are to extract figures,
equations, texts, tables or all of these. It may not be realistic to determine
complete categories in advance. Another solution is to mine information from
an XML schema [6, 29] or from LaTeX [38] or PDF [25] syntax. We could not use
this approach since early IEEE VIS PDF papers lack corresponding LaTeX or
XML source files. The most popular figure-extraction algorithms rely on
manually defined rules and assumptions. These techniques are typically
successful for the particular types of figures for which the rules were
designed, but suffer from the classical problem with rule-based approaches:
when rules are broken, the algorithm fails. For example, an intuitively
reasonable rule is to assume captions always exist. An algorithm can locate a
figure by searching for caption terms such as _Fig._ and _Table_ [6, 25].
However, about $2\%$ of our VIS30K images do not satisfy this assumption and
thus can cause the extraction algorithm to fail. Choudhury et al.’s algorithm
[5] focuses on specific figure types, such as line charts and scatter plots,
while our goal is to extract a comprehensive collection of figures and tables.
For these reasons, annotated ground-truth data are not publicly available for
automatic figure and table extraction.
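As an illustration of the caption-based heuristic discussed above (our sketch, not the algorithm of [6, 25]; the regular expression and function names are hypothetical), a few lines of Python show both how such a rule locates figures and tables in extracted text and why it misses uncaptioned figures:

```python
import re

# A toy version of the caption-based rule: scan extracted PDF text for
# caption markers such as "Fig. 1:" or "Table 2.".
CAPTION_RE = re.compile(
    r'^(Fig(?:ure)?\.?|Table)\s+(\d+)\s*[:.]\s*(.*)', re.IGNORECASE)

def find_captions(lines):
    """Return (line index, kind, number, caption text) for each caption hit."""
    hits = []
    for i, line in enumerate(lines):
        m = CAPTION_RE.match(line.strip())
        if m:
            kind = 'table' if m.group(1).lower().startswith('t') else 'figure'
            hits.append((i, kind, int(m.group(2)), m.group(3)))
    return hits

page = [
    "Fig. 1: A timeline of selected images.",
    "Some body text follows here.",
    "Table 2. Extraction accuracy per year.",
    "An uncaptioned inline figure appears here.",   # the ~2% failure case
]
print(find_captions(page))
```

The last line of `page` is a figure with no caption marker, so the rule silently skips it; this is exactly the failure mode that motivates a learned, image-based extractor.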
(a) Total # of images (figures and tables), by year.
(b) Average # of images (figures and tables) per page, by year. We included
potential color plate pages from early years (1990–2001) in the page count for
this analysis, and only counted an image once if that same image appeared in
both the paper and its color plate.
Figure 2: We extracted 29,689 images (26,776 figures, 2,913 tables) from the
2,916 IEEE Visualization conference papers, spanning 30 years (Vis: 13,509;
SciVis: 3,232; InfoVis: 7,834; VAST: 5,114). Numbers for the joint conference
are depicted as wide pale gray bars. The individual tracks are overlaid on
top. On average, Vis/SciVis has more images per paper page than InfoVis and
VAST.
## 3 Dataset Description
We now describe the data format and information stored for each figure and
table in our database and the decisions we made concerning the figure
extraction task. But we start by defining the terms we use throughout the
remainder of the paper.
### 3.1 Terms
IEEE VIS. Over its history, IEEE VIS has undergone a number of name changes
(Fig. 2). It started out in 1990 as IEEE Visualization (Vis), then added IEEE
InfoVis in 1995 followed by IEEE VAST in 2006. In 2008–2012, all three venues
were jointly called IEEE VisWeek, and since 2013 the blanket name IEEE VIS has
been used. From 2013 onward, the IEEE Vis track ceased to exist, replaced by the
IEEE Scientific Visualization (SciVis) conference. Here we use VIS to refer to
all four venues: Vis, InfoVis, VAST, and SciVis, for the entire time period
covered by our dataset.
Figures and Tables. We refer to a figure as a container for graphical
representations. These representations can be images, screenshots of
visualization techniques, user interfaces, photos, diagrams, and others. We
classify algorithms, pseudocode, and equations as textual content and thus do
not include them in VIN. Including these additions is left as future work. A
table is a row-column representation of relations among related data concepts
or categories [13], usually composed of cells [20, 28].
### 3.2 Image Data Collection
We collected 29,689 images (26,776 figures and 2,913 tables) from 2,916
conference and journal publications of the IEEE VIS conference from 1990 to
2019 (Fig. 2). Our collection also includes case studies and late-breaking
results from earlier years as they are included in the digital library. We do
not include the more recent short papers as only 3 years of data are
available. We also exclude posters as they do not appear consistently in the
IEEE Xplore digital library, our primary data source for the paper PDFs.
We include tables as a separate category alongside figures because we consider
them, unlike unstructured text, a form of structured and visual data
representation that might be useful to analyze. Not only may the visual layout of
tables be interesting, but also, more importantly, the relative frequency
of tables in published research results and the amount of space tables occupy
in papers. Data stored in tables can be further extracted and cross-linked
into a knowledge base. Tables can be filtered out for other use cases such as
searching for related work or searching for images for teaching.
(a) Sicat et al. [36], Fig. 7
Goal: comparison of volume rendering coarse levels with subcaptions embedded
in the figure
(b) Isenberg and Carpendale [14], Fig. 2
Goal: showing two different tree layouts and labeling without subcaption
(c) Yu et al. [43], Fig. 11
Goal: showing interaction technique by embedded views
(d) He et al. [12], Fig. 7
Goal: comparing transfer function design; many views
(e) Perer and Shneiderman [32], Fig. 3
Goal: saving space by placing the figure caption into an empty corner; we did
not remove such captions
(f) Dasgupta et al. [7], Fig. 3
Goal: tabular view of textual and figure elements that we classified as a
table
(g) Isenberg et al. [16], Table 2
Goal: table lens view of quantitative data; mix of table & figure elements
which we classified as a figure
Figure 3: The use of figures and tables shows great variation. Here, we place
subfigures side-by-side for comparison to present different techniques, as in
(a). Subfigures may not have subcaptions (b). They can be embedded (c) or
contain tabular views of different parameter choices (d). Figure captions
sometimes appear inside the figure’s rectangular bounding box (e). Tables
often contain visual separators, but the content can be hierarchical and can
contain figures (f) or use table lenses (g). These variations lead us to
retain composite figures and tables in our data cohort to preserve the
functional values of these paper elements. All images © IEEE, used with
permission.
### 3.3 Choosing Figures and Tables
Scholarly articles are often structured based on a template and are properly
referenced, yet authors use varying approaches to generate figures and tables
and to embed them in their papers (Fig. 3). These varying practices required
us to decide which types of visual representations to include in and exclude
from our database:
High Variation in Composition of Figures and Tables. Authors often treat
algorithms, pseudocode, and tables as figures with figure numbers. In our data
collection, we separated algorithms and pseudocode from figures and tagged
tables and figures separately. While both pseudocode and algorithms are
important scientific content in papers, they generally consist of text and are
not the forms of visual data representations we target with VIS30K.
Occasionally, authors placed figures and tables wrapped within the text flow
without captions or figure/table numbers. We collected such figures and tables
nonetheless but excluded small, often repetitive word-scale visualizations and
word-scale graphics such as those in Blascheck et al.’s work [2]. In our
dataset, we classify representations that contain primarily text, but
sometimes also small inline images (e. g., Fig. 3(f)), as tables. We classify
other column-row representations, such as heatmap matrices and table lenses
(e. g., Fig. 3(g)), that use a primarily graphical encoding as figures.
Handling of Subfigures and Subcaptions. A figure can be composed of multiple
images or be a combination of images and text. Such composite figures are
common in visualization papers (e. g., Fig. 3). We initially hoped to
disassemble these composite figures into subfigures but ultimately chose not
to due to their variable degree of separability. Composite figures, e. g., are used to
report related sets of design results (Fig. 3(a)). They sometimes do not have
subcaptions (Fig. 3(b)). In other cases, the subfigure indices just label
different views of the same data (e. g., (b) is a magnified view of (a)) and
are monolithic (e. g., Fig. 3(c)). Composite figures also sometimes place
subfigures side-by-side to compare techniques or parameters (Fig. 3(d)).
Separating these subfigures would defeat the functional value of these figure
compositions.
Composite figures can contain subcaptions that are explicitly associated with
subfigures through spatial proximity (e. g., Fig. 3(a)). Subfigures and
subcaptions in the same composite figure often have similar content.
Subcaptions can contain a few lines (like our own Fig. 3), a brief term, or
merely an index (e. g., (a)–(g) as in Fig. 3(a)). Because we maintain
composite figures and do not split them into subfigures, we have no choice but
to include subcaptions in our collection—even though we did remove the main
caption of the figures, except when the caption was inside the figure’s
bounding box (Fig. 3(e)). We also retained the markers of index-only
subcaptions to help viewers to identify the subdivision of composite figures.
Low Quality and Noisy Figures. Images in IEEE Xplore papers from 2001 onward
generally have excellent visual quality. However, we found errors and many
unclear figures in earlier papers. We sought to correct these in our data to
provide a more reliable source for IEEE VIS publication figures. In
particular, images from papers from 2000 and earlier often are of low quality.
We replaced images from these early years with better versions when the paper
copy in the ACM DL had better quality, when we could find it on the conference
CD or proceedings, or when we found a better (author) copy online. Papers
published in 1995 or earlier often have color-plate pages, causing IEEE and
ACM to list different page numbers for some papers. Also, figures in these
color-plate pages may or may not be the same as the figures on the main paper
pages. When they were the same, we used the color version. Otherwise we
collected both. We corrected errors such as missing pages in the printed or
digital library version (e. g., Dovey [10]) and added additional pages found
in conference proceedings. We also found entries on IEEE Xplore that linked to
a paper under a different title or in which the last page was the first page
of the next paper. Some papers contained blank pages or duplicated pages,
which we excluded from the total page count.
## 4 Figure and Table Collection Procedure
To avoid manual labeling, we designed and implemented a new CNN-based,
data-driven solution to harvest figures and tables embedded in IEEE VIS
research papers. The input to our CNNs consists of the paper pages; the output
is a structured representation of all figures and tables within the input
files, together with their bounding box locations.
### 4.1 Overview
The main idea behind our approach is to train a CNN with automatically
synthesized papers. Our approach works by ‘pasting’ different paper component
parts including figures, tables, and text onto a white image to create a
“pseudo-paper” collection. These pseudo-papers are sufficient to guide CNNs to
detect and localize the figures and tables from real documents. While the
simulation approach has been used in other realistic environments [42], ours
is to our knowledge the first use for scholarly document analysis.
Our approach leverages the simple assumption that the form and structural
content of a page are more important for detecting images than the factual
content. The advantage is that, in theory, it allows a CNN algorithm to act on
any document layout and labels, even new and unknown ones. Rule-based or
XML-based methods would require us to keep stipulating new rules or defining
suitable XML tags to cope with complex documents. Our method, in contrast,
generates its own synthetic appearance to minimize the differences between the
training data and the real papers, which improves prediction accuracy because
it produces accurate “ground-truth” data (Fig. 4).
### 4.2 Training Data Preparation: Pseudo-Papers
(a) An inner page with two figures each occupying a single column.
(b) A first page without figure, and an inner page with a table crossing two
columns and a single-column figure.
Figure 4: Automatically rendered pseudo-paper pages in our training data
generation with ground-truth labels. The text content in 4(a) is grammatically
correct but not semantically meaningful in the visualization domain. In the
page samples of 4(b), header, title, abstract, body text, figure, table,
captions, and other document components are shown. We diversified the page layout
structures to render pages both with and without images. When images are
shown, they appear in single or double columns.
The essential part of our approach is to treat training data as a composition
of individual document elements, where the goals are (1) to record bounding
boxes for each of the labels and components in a PDF image to produce high-
quality labels for the training data; and (2) to synthesize appearance to
reduce the differences between the training data and the real papers to some
extent.
Image and Text Corpora. To reduce the number of training images needed and to
increase training data diversity, we used image collections from Borkin et
al.’s MASSVIS dataset [3] and from the SciVis memorability data by Li and Chen
[26]. Early papers are often black and white and may contain salt-and-pepper
image noise. This variation in image appearance (brightness, contrast) can
reduce our CNN’s image detection accuracy. To match such visual variations, we
doubled the figure/table samples by converting these images to black and
white, with a range of gray-scale variations. We assembled the text corpus
using Stribling et al.’s SCIgen [40] so that the textual content remains
coherent, though not necessarily relevant to IEEE VIS (Fig. 4(a)).
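The doubling of samples via black-and-white conversion can be sketched as follows. The exact luma weights and brightness variations used are not stated in the text, so the ITU-R BT.601 weights and the scaling factors below are assumptions for illustration:

```python
def to_grayscale(pixels):
    """Convert an RGB raster (rows of (r, g, b) tuples) to grayscale
    using the standard ITU-R BT.601 luma weights (an assumed choice)."""
    return [[min(255, round(0.299 * r + 0.587 * g + 0.114 * b))
             for (r, g, b) in row] for row in pixels]

def gray_variations(gray, factors=(0.6, 0.8, 1.0, 1.2)):
    """Produce brightness-scaled variants of a grayscale raster to mimic
    the varied appearance (contrast, fading) of early scanned papers."""
    return [[[min(255, round(v * f)) for v in row] for row in gray]
            for f in factors]
```

Each color image thus yields one grayscale counterpart plus several brightness variants, increasing the appearance diversity of the training corpus.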
Pseudo-paper Corpora. We used this image and text corpus to automatically
synthesize a large set of papers that depict paper titles, abstracts, body
text, document headers, figures, tables, and captions (Fig. 4). Our
document-production algorithm inserted the text, figures, and tables into white pages
of particular size and coordinates with particular fonts and styles, to match
the IEEE VIS paper structure. We also inserted bullets and equations because
pilot tests revealed that, without them, bullets and equations from the real
papers were often misclassified as point-based visualizations [3], a figure
type containing dot or scatter plots or similar elements.
In total, we generated 13,000 pages (10,000 for training, 3,000 for
validation), each with 1075 × 1400 pixel resolution and labeled as selected
categories of figure, table, or text (Fig. 4). Each component on these pages
features accurate bounding boxes.
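A minimal sketch of this layout-synthesis idea follows. The 1075 × 1400 page size is stated above; the margins, column widths, and component heights are illustrative assumptions, not the authors’ actual parameters:

```python
import random

PAGE_W, PAGE_H = 1075, 1400  # pixel resolution of the synthetic pages

def synthesize_page_layout(two_column=True, with_figure=True, seed=None):
    """Sketch of a pseudo-paper page: returns (label, bbox) pairs with
    bbox = (x0, y0, x1, y1) in page pixels, usable as ground-truth labels."""
    rng = random.Random(seed)
    margin, col_gap = 80, 40  # assumed layout constants
    col_w = (PAGE_W - 2 * margin - col_gap) // 2 if two_column else PAGE_W - 2 * margin
    labels = [
        ("title",    (margin, 60, PAGE_W - margin, 120)),
        ("abstract", (margin, 140, PAGE_W - margin, 260)),
    ]
    y = 290
    if with_figure:
        fig_h = rng.randint(200, 400)  # vary figure size for layout diversity
        labels.append(("figure",  (margin, y, margin + col_w, y + fig_h)))
        labels.append(("caption", (margin, y + fig_h + 10, margin + col_w, y + fig_h + 60)))
        y += fig_h + 80
    labels.append(("body_text", (margin, y, margin + col_w, PAGE_H - margin)))
    return labels
```

Rendering the components at these coordinates and keeping the (label, bbox) pairs yields exactly the kind of accurately labeled training page the paragraph describes.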
### 4.3 Integrating CNN Models for Figure Extraction
We trained two complementary CNNs, YOLOv3 [33] and Faster R-CNN [34],
independently for subsequent figure extraction from the actual papers. One may
think of this combination as a means to boost performance of our learning
algorithm as we only used a very small set of labeled examples, compared to
millions of training samples in other solutions [38]. YOLOv3 is a single-stage
detector network—fast and accurate for object detection. Faster R-CNN [34] is
a multi-stage proposal and sampling-based approach where a certain number of
candidates were sampled from a large pool of generated ROIs. Both YOLOv3 and
Faster R-CNN returned the four coordinates of each bounding box, along with
class labels. We trained both models under TensorFlow and executed them on a
single NVIDIA GeForce RTX 2080 Ti GPU with 11 GB of memory.
In the prediction stage, we downloaded the paper PDFs by following the links
in the VisPubData database [15]. We excluded short papers, posters, panels,
and keynote files, so our collection comprised 2,916 full papers for the years
1990–2019. We first converted these PDFs to PNG pixel images using the convert
command with $\text{dpi}=300$ and pixel resolution up to $2353\times 3213$.
This conversion was necessary to capture all images in their camera-ready
rendered visual form in the paper PDF, including scanned pages from early
years, vector images, pixel graphics, simple text versions, and any
combinations thereof. We then fed these images into the CNNs to extract
figures, tables, captions, etc. For each paper page, we thus produced the
bounding boxes of the 17 classes (6 textual content, 11 figures/tables).
After model prediction, we used heuristics in [27] to combine both models’
results by merging the bounding boxes from Faster R-CNN (better localization)
with any additional images/bounding boxes detected by YOLOv3 (better
detection) into an initial set of labeled bounding boxes. We further tightened
or expanded these bounding boxes to acquire accurate regions for each figure
and table. Since the visualization type is not of current interest and since
CNN models make mistakes, we combined the 10 figure classes into a single
figure label type in our post-processing.
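The merging heuristics of [27] are not reproduced here in detail; a plausible sketch that keeps the Faster R-CNN boxes (better localization) and adds YOLOv3 boxes (better detection) that do not overlap any of them could look like this. The 0.5 overlap threshold is an assumption:

```python
def iou(a, b):
    """Intersection over union of two boxes (x0, y0, x1, y1)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0

def merge_detections(frcnn_boxes, yolo_boxes, iou_thresh=0.5):
    """Keep all Faster R-CNN boxes; add each YOLOv3 box that does not
    substantially overlap any of them (iou_thresh is an assumed value)."""
    merged = list(frcnn_boxes)
    for yb in yolo_boxes:
        if all(iou(yb, fb) < iou_thresh for fb in frcnn_boxes):
            merged.append(yb)
    return merged
```

The resulting set is the initial collection of labeled bounding boxes, which the post-processing then tightens or expands.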
### 4.4 Fine-Grained Recognition and Data Validation
Figure 5: Fine-grained human recognition to correct CNN errors. The orange
boxes show the machine prediction and the green boxes the human results to
curate bounding box regions.
Fine-grained recognition refers to the task of distinguishing very similar
categories or correcting the results to obtain the ground truth. CNN results
can lead to errors or imprecision in image detection and localization [31]. We
obtained the final cohort by manually cleaning the CNNs’ predictions using a
collaborative annotation tool with two interfaces: one enables us to edit the
content of the dataset and the other provides an overview of the data
collection by year (see supplementary materials Sec. B for these two
interfaces). We used the first interface to examine all pages in our dataset
and check the labeling to remove, add, move, and resize the machine-generated
bounding boxes as needed. After this individual pass, the first two authors of
this paper used the second interface to verify the results for the entire
dataset. Through this process we cleaned 26,776 figures and 2,913 tables from
all 30 years of the IEEE Visualization conference, as described in Sec. 3.
Using the manually cleaned data as the ground truth, we evaluated our
CNN-based labeling following the evaluation metrics of PDFFigures 2.0 [6], using an
intersection-over-union threshold ($IoU=\text{area of overlap}/\text{area of union}$) of 0.8. We
found the overall recall of our CNN-based extraction approach on the VIS30K
images to be 0.84, with precision 0.94 and F1 score 0.89. For this analysis we
only used the “image” label in our dataset because our training phase images
came from two limited datasets—they did not capture the full range of images
in visualization papers. Nonetheless, we analyzed the entire dataset,
including the early years with their low-quality images for which other
algorithms would fail. We counted predicted figures and tables that did not
exist in the final human-curated labels as false positives. Since a figure can
be composite, we also counted it as detected if its parts were detected in the
form of several bounding boxes.
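The evaluation protocol above can be sketched as a greedy IoU matching; the greedy matching strategy is our assumption here, while PDFFigures 2.0 [6] defines the exact procedure:

```python
def iou(a, b):
    """Intersection over union of two boxes (x0, y0, x1, y1)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0

def evaluate(preds, gts, iou_thresh=0.8):
    """A prediction counts as a true positive when it overlaps an unmatched
    ground-truth box with IoU >= 0.8; returns precision, recall, F1."""
    matched, tp = set(), 0
    for pb in preds:
        for i, gb in enumerate(gts):
            if i not in matched and iou(pb, gb) >= iou_thresh:
                matched.add(i)
                tp += 1
                break
    p = tp / len(preds) if preds else 0.0
    r = tp / len(gts) if gts else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

With the reported precision of 0.94 and recall of 0.84, the same F1 formula gives 2 · 0.94 · 0.84 / (0.94 + 0.84) ≈ 0.89, matching the value above.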
These measurements mean that our CNN-based approach requires at least 22%
manual effort (to add the $16\%$ false negatives and remove the $6\%$ false
positives). Removing false positives requires us to detect duplicate
“detection frames,” while false negatives are images that go undetected. In
addition to the cost of cleaning (22%), there are aspects of the manual labor
that are difficult to measure, i. e., fine-tuning results that are considered
correct by a machine’s standard ($IoU\geq 80\%$) but not by human-centered
heuristics. Fig. 5 shows three example cases in which a user needs to relocate
and resize bounding boxes (region errors) or update class labels manually
(class errors). Region errors occur, for instance, when subcaptions are
erroneously excluded from or included in the prediction of composite views;
class errors occur when a table is inferred to be a figure. We estimate
that about $20\%$ of the images required a final adjustment to fine-tune the
CNNs’ output.
## 5 VISImageNavigator (VIN): Exploring Figures and Tables in The Literature
(a) Image-centric view using a “brick wall” layout.
(b) Timeline-centric view of paper image cards. Here, the results show all
images from papers whose title and abstract contain the term “evaluation” and
whose authors field contains “Stasko T. John”.
(c) Paper-centric view using a paper layout for a query of papers that
appeared between 2017 and 2019.
Figure 6: Our VISImageNavigator (VIN) interface and search engine. We arranged
figures and tables using 6(a) a “brick wall” layout, 6(b) a timeline view, or
6(c) a paper list layout. We color-coded the image frames based on the
conference types. Users can query the database by terms (using authors’
keywords or terms in titles and abstracts), by image type (figure, table, or
both), by conference category (Vis, SciVis, InfoVis, VAST), or by year. A
click on an image displays article details including authors and a hyperlink
to the full PDF in IEEE’s digital library.
We posit that our ability to extract data must be accompanied by the
community’s ability to use, further classify, manage, and reason about the
content of the figures and tables. Our second contribution is thus the design
and implementation of VISImageNavigator (VIN; see Fig. 6 and
visimagenavigator.github.io), a lightweight online browser to view and query
the dataset and its metadata. VIN can be used to explore VIS figures and
tables, VIS publication venues, keywords, and authors over time.
### 5.1 Browsing the Image Collection
The VIN interface has three styles: The default browser layout (Fig. 6(a)) was
inspired by the VizioMetrix search engine [24]. Figures are arranged next to
each other following the “brick wall” metaphor. The second, timeline-centric
view of paper piles facilitates viewing temporal trends (Fig. 6(b)), while the last
one presents a paper-centric view (Fig. 6(c)). Figure and table captions are
available on demand. The images are ordered by conference year and by the
order of appearance in the proceedings. VIN is not designed to support
dedicated statistical analyses of the data itself. We implemented the backend
by indexing the authors, captions, and author keywords, cross-linked to the
paper keywords in keyvis.org [16]. We also implemented term-based search in
titles and abstracts. The by-title-and-abstract mode enables users to search
and explore directly by title and abstract, and it often returns more complete
results than searching images by author keywords, likely because not all
papers include author keywords or because these keywords do not cover all
aspects.
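As an illustration of such term-based search over titles and abstracts, consider a toy inverted index; VIN’s actual backend is not described at this level of detail, so names and structure here are our own:

```python
from collections import defaultdict

def build_index(papers):
    """Build a toy inverted index mapping lowercase tokens of each
    paper's title and abstract to the set of paper IDs containing them."""
    index = defaultdict(set)
    for pid, meta in papers.items():
        for token in (meta["title"] + " " + meta["abstract"]).lower().split():
            index[token].add(pid)
    return index

def search(index, term):
    """Return the sorted paper IDs whose title or abstract contains term."""
    return sorted(index.get(term.lower(), set()))
```

A query for “evaluation” then returns every paper mentioning the term in either field, which mirrors why this mode can return more complete results than author-keyword search.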
Naturally, the results can be sorted in several ways: by author to study people’s
presentation styles or by year to retrieve the most recent images for a given
search term. Using VIN, the user can answer questions such as ‘Which figures
are used in evaluation papers?’ by searching for “evaluation” and reading the
results. One can also examine just the figures but not the tables, or ask ‘What are
the result figures in John T. Stasko’s evaluation papers?’ by filtering both
the authors and terms fields (Fig. 6(b)).
Since VIS is the premier conference of our field, decisions about what topics
appear and what methods are published can profoundly influence applications.
Consider questions like ‘What illustrative visualization techniques have been
developed?’ and ‘What are the techniques in domains such as “brain imaging”
and “quantum physics”?’ (cf. Fig. 10 in Sect. D of the supplemental material).
To answer these questions using VIN, the user can query the term ‘brain’ to reveal
diverse advances in showing tomography in the nineties, tensor lines, as well
as metaphorical maps. In contrast, searching “quantum” returns fewer results
with a major focus on depicting symmetrical structures. Switching to the
paper-centric view reveals the paper titles that confirm that most ‘quantum’
papers use volume rendering to show particle interactions.
### 5.2 Use-Cases for VIS30K
We envision the following use-cases for VIN and VIS30K:
1\. Identifying Related Work. Typically, when researchers search for related
work, they either rely on text search in digital libraries or they manually
follow trails of citations from one paper to the next. In addition to offering
a text-based search in paper metadata, VIN offers a focused, visual way to
quickly browse related work that is impossible with other research databases
or generic online image search tools. This visual search can complement the
traditional related work search and enable researchers to stumble upon papers
with similar layouts, data representations, or interfaces that may not show up
in text searches. Image overviews also help them to see and describe
differences in data visualization styles spanning multiple years that may be
more difficult to grasp from images confined to individual papers.
2\. Teaching and Communication. Our image database and VIN can also be used to
quickly find images for teaching and communication. By filtering out later
years of the conference, for example, historic examples from the community can
be retrieved and compared to the current state-of-the-art. Browsing the most
recent years reveals new contributions and the latest advances. Our images are
stored in a lossless format at a resolution that supports their use for
teaching and communication. In addition, extracting paper references is made
simple through the VIN interface (Fig. 6(c)). Further, there may be users
outside the community who are interested in the types of representations
published by the community but lack easy access to papers or are not
accustomed to reading scientific content. For these groups of users, VIN can
serve as an entry point to research in the visualization community and spark
interest in exploring its work further. Thus, VIN also serves as a bridge to
other communities.
3\. Understanding VIS. Both the Visualization community itself and external
researchers interested in the history of visual data analysis may be curious
about the evolution of the field. Past efforts on understanding practices in
the community were listed in Sec. 2. Complementing this past work, our data
and tool now offer overviews of our community’s visual output in the form of
both figures and tables. Researchers can either assess and analyze the data
qualitatively using VIN or download it to build additional tools that use
their own metrics to support quantitative study of the image content.
4\. Tool Building and Testing. Our database can be used by others to extend
VIN or build novel tools for other types of image analysis tasks. For example,
future projects could build a dedicated image similarity search tool on top of
VIN, use the database as training data for machine learning algorithms, or
look into visualizing image content (e. g., visual question answering and
visualization re-targeting). The database can also be used for computer vision
projects. Our results demonstrate that state-of-the-art CNN solutions for
figure and table extraction do not achieve human-level accuracy. This finding
suggests that our VIS30K dataset could present a grand challenge for future
benchmarking of machine learning research.
## 6 Discussion and Conclusion
We introduce VIS30K, a curated and complete dataset of all figures and tables
used in IEEE VIS conference publications over its 30-year history. We also
provide a data exploration tool, VIN, that facilitates interactive exploration
of this scholarly resource as well as a collection of the relevant metadata.
For the first time, our VIN tool enables researchers and practitioners to
search for approaches related to their own or solutions for their data
analysis problems in a visual way—after all, most of us remember images we
have seen in the past much better than the specific names of the relevant
papers. Our search also enables researchers to quickly find related work they
may not even be aware of, without requiring them to read and download several
possibly related papers from digital libraries. In addition to these immediate
benefits of our interactive search, our dataset will allow us to explore a
number of interesting research questions in the future. For example, how has
visual encoding been used in the past, and has this changed over time? Do the
three conference tracks use specific forms of encoding in a similar way or are
there differences? How can we create a visualization with a similar style?
Our work also has implications that arise from our specific extraction
approach. We used CNNs to extract the image and table locations via generated
pseudo papers, followed by a manual cleaning step to ensure quality. Without
CNNs, a huge amount of manual work would have been needed. Without our manual
cleaning, similarly, we could not have ensured our high data quality. While we
worked on published papers, our hybrid CNN-manual approach is not limited to
such documents: it could well be applied more broadly, e. g., to XML-based
solutions such as GROBID [29]. We also anticipate that DeepPaperComposer [27],
a newer model for non-textual content extraction, can provide a scalable
solution for information extraction from future VIS publications. Our
constructive experience could inspire future work on pipelines that seek to
extract images and tables from documents.
Naturally, our work is not without limitations. Our dataset does not represent
all of visualization scholarship. We examined papers in only a single venue
and did not collect scholarly figures presented at other venues, e. g.,
EuroVis, PacificVis, CHI, and other related conferences. We also did not
collect visualization-related journal articles in the IEEE Transactions on
Visualization and Computer Graphics and in IEEE Computer Graphics and
Applications. As the visualization field in itself is cross-disciplinary, we
also did not examine domain-specific journals that provide applications and
real-world impact (see an excellent review of visualization uses in studying
the human connectome by Margulies et al. [30]).
Reproducibility. We have released three data collections and our CNN models.
The main contribution of this work is the VIS30K image collection with
ground-truth bounding box types and locations of all images, released through
IEEE DataPort at DOI 10.21227/4hy6-vh52. Metadata for our VIS30K dataset is
accessible via a public Google spreadsheet (go.osu.edu/vis30k). The 13K
training and validation data of the synthetic pages and their ground-truth and
the text and image corpora are accessible via go.osu.edu/vis30ktrainingdata.
The TensorFlow models we used are accessible online through the VIN website.
We have also released the pre-trained CNN models at
go.osu.edu/vis30kpretrainedmodels.
## Acknowledgments
We thank Roger Crawfis for his hard copies of early conference proceedings and
David H. Laidlaw for conference proceedings CDs. This work was partly
supported by NSF OAC-1945347, NIST MSE-10NANB12H181, NSF CNS-1531491, NSF
IIS-1302755 and the FFG ICT of the Future program via the ViSciPub project
(no. 867378).
## References
* [1] W. Aigner, S. Miksch, H. Schumann, and C. Tominski, “Survey of visualization techniques,” in _Visualization of Time-Oriented Data_. London: Springer, 2011, ch. 7, pp. 147–254. doi: 10.1007/978-0-85729-079-3_7
* [2] T. Blascheck, L. Besançon, A. Bezerianos, B. Lee, and P. Isenberg, “Glanceable visualization: Studies of data comparison performance on smartwatches,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 25, no. 1, pp. 616–629, Jan. 2019. doi: 10.1109/TVCG.2018.2865142
* [3] M. A. Borkin, A. A. Vo, Z. Bylinskii, P. Isola, S. Sunkavalli, A. Oliva, and H. Pfister, “What makes a visualization memorable?” _IEEE Transactions on Visualization and Computer Graphics_, vol. 19, no. 12, pp. 2306–2315, Dec. 2013. doi: 10.1109/TVCG.2013.234
* [4] A. Chatzimparmpas and I. Jusufi, “The state of the art in enhancing trust in machine learning models with the use of visualizations,” _Computer Graphics Forum_, vol. 39, no. 3, pp. 713–756, Jun. 2020. doi: 10.1111/cgf.14034
* [5] S. R. Choudhury, P. Mitra, and C. L. Giles, “Automatic extraction of figures from scholarly documents,” in _Proc. DocEng_, 2015, pp. 47–50. doi: 10.1145/2682571.2797085
* [6] C. Clark and S. Divvala, “PDFFigures 2.0: Mining figures from research papers,” in _Proc. ACM/IEEE-CS Joint Conference on Digital Libraries_. New York: ACM, 2016, pp. 143–152. doi: 10.1145/2910896.2910904
* [7] A. Dasgupta, H. Wang, N. O’Brien, and S. Burrows, “Separating the wheat from the chaff: Comparative visual cues for transparent diagnostics of competing models,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 26, no. 1, pp. 1043–1053, Jan. 2020. doi: 10.1109/TVCG.2019.2934540
* [8] D. Deng, Y. Wu, X. Shu, M. Xu, J. Wu, S. Fu, and Y. Wu, “VisImages: A large-scale, high-quality image corpus in visualization publications,” arXiv preprint 2007.04584, Jul. 2020.
* [9] A. Diehl, A. Abdul-Rahman, M. El-Assady, B. Bach, D. A. Keim, and M. Chen, “VisGuides: A forum for discussing visualization guidelines,” in _EuroVis Short Papers_. Goslar, Germany: Eurographics Association, 2018, pp. 61–65. doi: 10.2312/eurovisshort.20181079
* [10] D. Dovey, “Vector plots for irregular grids,” in _Proc. Visualization_, 1995, pp. 248–253. doi: 10.1109/VISUAL.1995.480819
* [11] E. E. Fırat and R. S. Laramee, “Towards a survey of interactive visualization for education,” in _Proc. Computer Graphics and Visual Computing_, 2018, pp. 91–101. doi: 10.2312/cgvc.20181211
* [12] T. He, L. Hong, A. Kaufman, and H. Pfister, “Generation of transfer functions with stochastic search techniques,” in _Proc. Visualization_, 1996, pp. 227–234. doi: 10.1109/VISUAL.1996.568113
* [13] M. Hurst, “Towards a theory of tables,” _International Journal of Document Analysis and Recognition_, vol. 8, no. 2–3, pp. 123–131, Mar. 2006. doi: 10.1007/s10032-006-0016-y
* [14] P. Isenberg and S. Carpendale, “Interactive tree comparison for co-located collaborative information visualization,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 13, no. 6, pp. 1232–1239, Nov./Dec. 2007. doi: 10.1109/TVCG.2007.70568
* [15] P. Isenberg, F. Heimerl, S. Koch, T. Isenberg, P. Xu, C. D. Stolper, M. Sedlmair, J. Chen, T. Möller, and J. Stasko, “Vispubdata.org: A metadata collection about IEEE visualization (VIS) publications,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 23, no. 9, pp. 2199–2206, Sep. 2016. doi: 10.1109/TVCG.2016.2615308
* [16] P. Isenberg, T. Isenberg, M. Sedlmair, J. Chen, and T. Möller, “Visualization as seen through its research paper keywords,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 23, no. 1, pp. 771–780, Jan. 2016. doi: 10.1109/TVCG.2016.2598827
* [17] T. Isenberg, P. Isenberg, J. Chen, M. Sedlmair, and T. Möller, “A systematic review on the practice of evaluating visualization,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 19, no. 12, pp. 2818–2827, Dec. 2013. doi: 10.1109/TVCG.2013.126
* [18] J. Kehrer and H. Hauser, “Visualization and visual analysis of multifaceted scientific data: A survey,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 19, no. 3, pp. 495–513, Mar. 2013. doi: 10.1109/TVCG.2012.110
* [19] A. Kerren, K. Kucher, Y.-F. Li, and F. Schreiber, “BioVis Explorer: A visual guide for biological data visualization techniques,” _PLoS One_, vol. 12, no. 11, pp. e0187341:1–e0187341:14, Nov. 2017. doi: 10.1371/journal.pone.0187341
* [20] S. Khusro, A. Latif, and I. Ullah, “On methods and tools of table detection, extraction and annotation in PDF documents,” _Journal of Information Science_, vol. 41, no. 1, pp. 41–57, Feb. 2015. doi: 10.1177/0165551514551903
* [21] K. Kucher and A. Kerren, “Text visualization techniques: Taxonomy, visual survey, and community insights,” in _Proc. IEEE Pacific Visualization Symposium_, 2015, pp. 117–121. doi: 10.1109/PACIFICVIS.2015.7156366
* [22] K. Kucher, C. Paradis, and A. Kerren, “The state of the art in sentiment visualization,” _Computer Graphics Forum_, vol. 37, no. 1, pp. 71–96, Feb. 2018. doi: 10.1111/cgf.13217
* [23] H. Lam, E. Bertini, P. Isenberg, C. Plaisant, and S. Carpendale, “Empirical studies in information visualization: Seven scenarios,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 18, no. 9, pp. 1520–1536, Sep. 2012. doi: 10.1109/TVCG.2011.279
* [24] P. Lee, J. D. West, and B. Howe, “Viziometrics: Analyzing visual information in the scientific literature,” _IEEE Transactions on Big Data_, vol. 4, no. 1, pp. 117–129, Mar. 2017. doi: 10.1109/TBDATA.2017.2689038
* [25] P. Li, X. Jiang, and H. Shatkay, “Figure and caption extraction from biomedical documents,” _Bioinformatics_, vol. 35, no. 21, pp. 4381–4388, 2019. doi: 10.1093/bioinformatics/btz228
* [26] R. Li and J. Chen, “Toward a deep understanding of what makes a scientific visualization memorable,” in _Short Papers of IEEE Visualization/SciVis_, 2018, pp. 26–31. doi: 10.1109/SciVis.2018.8823764
* [27] M. Ling and J. Chen, “DeepPaperComposer: A simple solution for training data preparation for parsing research papers,” in _Proc. EMNLP/Scholarly Document Processing_. Stroudsburg, PA, USA: ACL, 2020, pp. 91–96. doi: 10.18653/v1/2020.sdp-1.10
* [28] V. Long, R. Dale, and S. Cassidy, “A model for detecting and merging vertically spanned table cells in plain text documents,” in _Proc. International Conference on Document Analysis and Recognition_, 2005, pp. 1242–1246. doi: 10.1109/ICDAR.2005.21
* [29] P. Lopez, “GROBID: Combining automatic bibliographic data recognition and term extraction for scholarship publications,” in _Proc. International Conference on Theory and Practice of Digital Libraries_. Berlin: Springer, 2009, pp. 473–474. doi: 10.1007/978-3-642-04346-8_62
* [30] D. S. Margulies, J. Böttger, A. Watanabe, and K. J. Gorgolewski, “Visualizing the human connectome,” _NeuroImage_, vol. 80, no. 15, pp. 445–461, Oct. 2013. doi: 10.1016/j.neuroimage.2013.04.111
* [31] S. S. Paliwal, D. Vishwanath, R. Rahul, M. Sharma, and L. Vig, “TableNet: Deep learning model for end-to-end table detection and tabular data extraction from scanned document images,” in _Proc. International Conference on Document Analysis and Recognition_, 2019, pp. 128–133. doi: 10.1109/ICDAR.2019.00029
* [32] A. Perer and B. Shneiderman, “Balancing systematic and flexible exploration of social networks,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 12, no. 5, pp. 693–700, Sep./Oct. 2006. doi: 10.1109/TVCG.2006.122
* [33] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in _Proc. IEEE Conference on Computer Vision and Pattern Recognition_, 2016, pp. 779–788. doi: 10.1109/CVPR.2016.91
* [34] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 39, no. 6, pp. 1137–1149, Jun. 2017. doi: 10.1109/TPAMI.2016.2577031
* [35] H.-J. Schulz, “Treevis.net: A tree visualization reference,” _IEEE Computer Graphics and Applications_, vol. 31, no. 6, pp. 11–15, Nov./Dec. 2011. doi: 10.1109/MCG.2011.103
* [36] R. Sicat, J. Krüger, T. Möller, and M. Hadwiger, “Sparse PDF volumes for consistent multi-resolution volume rendering,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 20, no. 12, pp. 2417–2426, Dec. 2014. doi: 10.1109/TVCG.2014.2346324
* [37] N. Siegel, Z. Horvitz, R. Levin, S. Divvala, and A. Farhadi, “FigureSeer: Parsing result-figures in research papers,” in _Proc. European Conference on Computer Vision_. Berlin: Springer, 2016, pp. 664–680. doi: 10 . 1007/978-3-319-46478-7_41
* [38] N. Siegel, N. Lourie, R. Power, and W. Ammar, “Extracting scientific figures with distantly supervised neural networks,” in _Proc. ACM/IEEE-CS Joint Conference on Digital Libraries_. New York: ACM, 2018, pp. 223–232. doi: 10 . 1145/3197026 . 3197040
* [39] L. F. Smith and S. Jänicke, “The impact of EuroVis publications,” in _EuroVis Posters_, 2020, pp. 25–27. doi: 10 . 2312/eurp . 20201120
* [40] J. Stribling, M. Krohn, and D. Aguayo, “SCIgen – An automatic CS paper generator,” Online tool: https://pdos.csail.mit.edu/archive/scigen/, 2005.
* [41] H. Strobelt, D. Oelke, C. Rohrdantz, A. Stoffel, D. A. Keim, and O. Deussen, “Document cards: A top trumps visualization for documents,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 15, no. 6, pp. 1145–1152, Nov./Dec. 2009. doi: 10 . 1109/TVCG . 2009 . 139
* [42] A. Tsirikoglou, G. Eilertsen, and J. Unger, “A survey of image synthesis methods for visual machine learning,” in _Computer Graphics Forum_, vol. 39, no. 6, 2020, pp. 426–451. doi: 10 . 1111/cgf . 14047
* [43] L. Yu, P. Svetachov, P. Isenberg, M. H. Everts, and T. Isenberg, “FI3D: Direct-touch interaction for the exploration of 3D scientific visualization spaces,” _IEEE Transactions on Visualization and Computer Graphics_, vol. 16, no. 6, pp. 1613–1622, Nov./Dec. 2010. doi: 10 . 1109/TVCG . 2010 . 157
* [44] W. Zeng, A. Dong, X. Chen, and Z. Cheng, “VIStory: Interactive storyboard for exploring visual information in scientific publications,” _Journal of Visualization_, 2021, to appear. doi: 10 . 1007/s12650-020-00688-1
Additional material
## A. Databases
Our dataset collection includes figures and tables in the 2,916 conference and
journal publications of the IEEE VIS conference from 1990 to 2019. We store
the IEEE VIS paper images in PNG format in our VIS30K data. Metadata of the
dataset is accessible via a public Google spreadsheet (go.osu.edu/vis30k)
whose columns A–D are:
1. A
The paper DOI as a unique identifier to cross-link to other databases such as
VisPubData [15], KeyVis [16], and the Practice of Evaluating Visualization
[17, 23].
2. B
A thumbnail of each image: a low-resolution version extracted from the paper.
This provides a gateway to data analyses, e.g., through Google Colab.
3. C
The image type: either figure (F) or table (T).
4. D
An image link that points to its web storage address where a full-resolution
version is accessible through IEEE DataPort. Please note that all image files
are copyrighted, and for most the copyright is owned by IEEE. Some have
creative commons licenses or are in the public domain. Yet other images are
subject to different, specific copyrights as indicated in the figure caption
in the paper.
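As a hypothetical illustration, a metadata sheet with columns like A–D above could be consumed programmatically; the column names and the rows below are invented for this sketch and are not the actual contents of the public spreadsheet.

```python
import csv
import io

# Invented rows mirroring the four columns described above:
# A = paper DOI, B = thumbnail, C = image type (F/T), D = full-resolution link.
SHEET = """doi,thumbnail,type,link
10.1109/TVCG.2010.157,thumb_001.png,F,https://example.org/img_001.png
10.1109/TVCG.2010.157,thumb_002.png,T,https://example.org/img_002.png
"""

def load_metadata(text):
    """Parse the sheet and group image records by paper DOI,
    so all figures/tables of one paper can be fetched together."""
    by_doi = {}
    for row in csv.DictReader(io.StringIO(text)):
        by_doi.setdefault(row["doi"], []).append(
            {"type": row["type"], "link": row["link"]}
        )
    return by_doi

records = load_metadata(SHEET)
```

The DOI in column A then serves as the join key for cross-linking to VisPubData, KeyVis, and the other databases mentioned above.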
## B. Interactive Label Cleaning Tools
Fig. 7 shows screenshots of two tools we used during our interactive cleaning
process.
## C. Image Distribution in the Training Data Corpus
The diversity of images in the training data is critical for developing
useful image data extraction algorithms. We employed two databases collected
by Borkin et al. [3] and Li and Chen [26], removing images that contained
abundant text. The result is an image dataset with the 12 categories shown in
Table I.
TABLE I: The image class distribution of our training data corpus. Our pseudo-paper page composer randomly pastes 10 classes of figures, one table class, and one bullets-and-equations class onto white paper pages. These images are a subset of the figures and tables from the MASSVIS and scientific visualization images collected by Borkin et al. [3] and Li and Chen [26].

Table | Area and circles | Bars | Bullets and equations | Line chart | Maps
---|---|---|---|---|---
232 | 148 | 362 | 380 | 330 | 268

Matrix and parallel coordinates | Multiple types | Photos | Point-based | Scientific data visualization | Tree and networks
---|---|---|---|---|---
62 | 460 | 120 | 120 | 262 | 134
Figure 7: Screenshots of the tools used to clean the results of CNN-based
labels: (a) interface to verify results for individual coders; (b) interface
to examine all coders’ results. Although large labeled databases of natural
scenes are becoming standard, they are comparatively rare for scholarly
documents. This tool is used for easily correcting, adding, and annotating
figures and tables using the orange bounding boxes and tags.
## D. Additional Use-Cases
Figure 8: VIN use-case scenario: identify work related to “illustrative
visualization.” This scenario illustrates a user identifying related work.
VIN offers a focused, visual way to browse related work quickly and
progressively. It can complement the traditional text-based related-work
search.
In the paper we present three interfaces in an interactive web-based tool,
VIN, that allows the general public to perform their own analyses on the
entire 30 years of IEEE visualization image data. In this section we show
example questions the VIN tool can help answer.
What are the illustrative visualization techniques in literature? Looking at
images allows scholars to “see” the techniques invented over the years and
obtain a gist of the development of techniques over time. Fig. 8 shows an
exploratory process for someone, here Jerry, who has taken a visualization
class in graduate school to explore techniques related to “illustrative
visualization”. Jerry puts “illustrative visualization” in the query by author
keywords and sees that the tool returns 243 figures and 13 table images
ranging from see-through views to distorted rendering to depth-based
techniques. Jerry first observes that most of the early techniques were about
transfer functions and then newer techniques address interactive exploration
and augmented depth perception (Fig. 8(a)–(b)). The most recent work in 2019
is different enough to catch Jerry’s attention in that only one paper at
InfoVis was about non-spatial data (Fig. 8(a)). Wondering what that paper is
about, Jerry switches to the paper-centric view to learn that it was about
setting parameters in a high-dimensional space. Apparently this paper has a
novel use of terminology compared to other papers, which largely focused on
spatial data representation. Curious why he did not see Ebert and Rheingans’
work on shading-based illustrative visualization, Jerry tries a similar term,
‘non-photorealistic rendering’, and adds ‘Penny Rheingans’ to the author-name
field; now the tool returns several works by Rheingans and her students
(Fig. 8(b)). Trying the same keywords and author field but filtering by
‘title and abstract’ instead returns fewer results (Fig. 8(c)). Jerry learns
that next time he should try both search options. Further exploration by
updating the keywords to “illustration” reveals that this term was largely
used in hand-drawn techniques (Fig. 8(d)).
Figure 9: VIN use-case scenario: Students searching for scholarly work by
Prof. Sheelagh Carpendale at IEEE Vis and InfoVis. This scenario describes how
VIN can facilitate learning and communication. It shows the types of
representations published by a specific scholar in VIS.
What work has been published by Sheelagh Carpendale? Assume that André is a
new PhD student working with Prof. Carpendale at Simon Fraser University who
would like a quick understanding of Prof. Carpendale’s work in interactive
visualization before reading her other HCI (human-computer interaction)
journal and conference papers. André selects the author’s name from the author
category and then clicks the timeline view to obtain an overview of the work
(Fig. 9). He sees diverse contributions in visual representations and novel
interaction techniques on tabletop and large displays. Since André likes
interactive techniques, he quickly begins to explore the early papers related
to focus$+$context. Curious about the concepts these focus$+$context
techniques describe, André switches to the paper view. Here he learns that
Dr. Carpendale published several focus$+$context applications in the area of
biology.
Interested, André decides that he should read these papers to learn about the
details.
Figure 10: VIN use-case scenario: Paper images containing “brain” or “quantum”
in the paper title or abstract. We see significant advances in brain
visualization compared to those for exploring quantum data.
What visualization applications center around quantum physics? For Emma, a
scholar who examines complex structures in quantum physics data, our VIN tool
is a resource for research techniques she could adapt or reuse rather than
reinventing the wheel. Emma knows that she could
query “quantum” from the authors’ keywords or more specific terms in abstract
and title that target specific design choices (e. g., showing the data with
line or volume rendering) or stylistic decisions (e. g., color space and line
styles) (Fig. 10). She is interested in the most frequent visual encodings
(e.g., what data attributes are mapped to which marks and channels) and best
practices (e.g. the use of transfer functions in volume graphics). She finds
that the brick wall interface provides a better understanding of the design
patterns, and shows which techniques are most frequently used to show
topological structures. She understands that she could create new methods to
meet the quantum physicists’ data exploration and design goals.
Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC
BY 4.0).
BIR 2021: 11th International Workshop on Bibliometric-enhanced Information
Retrieval at ECIR 2021, April 1, 2021, online
# Improving reference mining in patents with BERT
Ken Voskuil Suzan Verberne Leiden Institute of Advanced Computer Science,
Leiden University
(2021)
###### Abstract
In this paper we address the challenge of extracting scientific references
from patents. We approach the problem as a sequence labelling task and
investigate the merits of BERT models to the extraction of these long
sequences. References in patents to scientific literature are relevant to
study the connection between science and industry. Most prior work only uses
the front-page citations for this analysis, which are provided in the metadata
of patent archives. In this paper we build on prior work using Conditional
Random Fields (CRF) and Flair for reference extraction. We improve the quality
of the training data and train three BERT-based models on the labelled data
(BERT, bioBERT, sciBERT). We find that the improved training data leads to a
large improvement in the quality of the trained models. In addition, the BERT
models beat CRF and Flair, with recall scores around 97% obtained with cross
validation. With the best model we label a large collection of 33 thousand
patents, extract the citations, and match them to publications in the Web of
Science database. We extract 50% more references than with the old training
data and methods: 735 thousand references in total. With these
patent–publication links, follow-up research will further analyze which types
of scientific work lead to inventions.
###### keywords:
Patent analysis Information Extraction Reference mining BERT
## 1 Introduction
References in patents to scientific literature provide relevant information
for studying the relation between science and technological inventions. These
references allow us to answer questions about the types of scientific work
that lead to inventions. Most prior work analysing the citations between
patents and scientific publications focuses on the front-page citations, which
are well structured and provided in the metadata of patent archives such as
Google Patents. It has been argued that in-text references provide valuable
information in addition to front-page references: they have little overlap
with front-page references [1] and are a better indication of knowledge flow
between science and patents [2, 3, 1].
In the 2019 paper by Verberne et al. [4], the authors evaluate two sequence
labelling methods for extracting in-text references from patents: Conditional
Random Fields (CRF) and Flair. In this paper we extend that work, by (1)
improving the quality of the training data and (2) applying BERT models to the
problem. We use error analysis throughout our work to find problems in the
dataset, improve our models and analyze the types of errors different models
are susceptible to.
We first discuss the prior work in Section 2. We describe the improvements we
make in the dataset in Section 3, and the new models proposed for this task in
Section 4. We compare the results of our new models with previous results,
both on the labelled dataset and a larger unlabelled corpus (Section 5). We
end with a discussion on the characteristics of the results of our new models
(Section 6), followed by a conclusion.
Our code and improved dataset are released under an open-source license on
GitHub: https://github.com/kaesve/patent-citation-extraction
## 2 Prior work
Reference analysis in patents has primarily been done using the references
that are listed on the patent’s front page. Patents often contain many more
references in the patent text themselves, but these are more difficult to
extract and analyze because their formatting is not standardized. Verberne et
al. [4] introduce a new labelled dataset consisting of 22 patents and 1,952
hand-labelled references. They apply two sequence labelling methods to the
reference extraction tasks.
Conditional Random Fields (CRF) model sequence labelling problems as an
undirected graph of observed and hidden variables, to find an optimal sequence
of hidden variables (labels) given a sequence of feature vectors [5]. Feature
vectors usually consist of several manually designed heuristics on the level
of individual tokens and small neighborhoods of tokens. For extracting
references, Verberne et al. [4] use a set of $11 + 6 \times 4$ features. This
includes 11 features derived from the current token, ranging from the
part-of-speech (POS) tag (extracted with NLTK) to lexical features such as
whether the token starts with a capital or is a number, and pattern-based
features that mark tokens looking like a year or a page number (the features
are similar to the ones used in
https://sklearn-crfsuite.readthedocs.io/en/latest/tutorial.html). It also
includes a subset of 6 features for each of the two preceding and following
tokens.
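As an illustration of this kind of feature template (not the exact feature set of [4]), token-level CRF features might look like the following sketch; all feature names here are invented for the example.

```python
import re

def token_features(tokens, i):
    """Illustrative CRF features for token i: surface form, casing,
    digit/year/page-range patterns, plus a subset of features for the
    two preceding and two following tokens."""
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "is_capitalized": tok[:1].isupper(),
        "is_number": tok.isdigit(),
        "looks_like_year": bool(re.fullmatch(r"(19|20)\d\d", tok)),
        "looks_like_pages": bool(re.fullmatch(r"\d+-\d+", tok)),
    }
    for offset in (-2, -1, 1, 2):
        j = i + offset
        if 0 <= j < len(tokens):
            feats[f"{offset}:lower"] = tokens[j].lower()
            feats[f"{offset}:is_capitalized"] = tokens[j][:1].isupper()
    return feats

feats = token_features(["Eskildsen", "et", "al.", "2003"], 3)
```

A CRF library such as sklearn-crfsuite consumes one such feature dict per token.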
As the authors note, CRF has limited capabilities to take context into
account. They chose to compare CRF with the Flair framework, which is better
able to use token contexts. Flair uses a BiLSTM-CRF model in combination with
pre-trained word embeddings [6]. One downside of Flair models is that they
are memory intensive, which limits the maximum sequence length they can
process at once. Where the CRF model can analyze a complete patent at once,
the Flair models required splitting sequences into subsequences of 20 to 40
tokens
[4]. Verberne et al. used the IOB labels during training to prevent splitting
within a reference.
The models were evaluated by measuring precision and recall using cross
validation on the labelled data. CRF performed better than Flair in all
measures except the recall of I-labels. The models were also applied to a
large corpus of 33,338 unlabelled USPTO biotech patents, and the resulting
extracted references were matched against the Web of Science (WoS) database.
Here, Flair performed significantly better. Counting references with a
definitive match in WoS that were not included on the patent front page, CRF
found 125,631 such references, compared to 493,583 found by Flair.
Recent developments in transfer learning have improved the state of the art in
numerous NLP tasks. BERT [7] is a large transformer model that is pre-trained
on a large corpus for multiple language modelling tasks. The resulting model
can be used as a basis for new tasks on different data sets. Even when the
contents of these data sets or the task deviate significantly from the pre-
training corpus and tasks, the pre-training is still beneficial. Several
authors have trained models with the same architecture as BERT on different,
more domain-specific corpora. These include SciBERT [8] and BioBERT [9].
## 3 Improving data quality
While exploring the results of our models, we found that several prediction
errors seemed to be caused by mistakes in the labelled data. These mistakes
result in a more pessimistic evaluation of our models and, more importantly,
could influence the effectiveness of training our models. We noticed two
types of problems: inconsistent or missing labels, and inconsistent
tokenization. We
include examples of both kinds of problems below, and describe our attempts to
improve the data quality.
### 3.1 Pre-processor inconsistencies
The patent dataset contains text from 22 patents taken from Google Patents.
Labels were added manually by one annotator using the BRAT annotation
tool (http://brat.nlplab.org/), and the text was subsequently transformed
into IOB files using a pipeline that splits the text into sentences, then
into tokens, and adds IOB and POS tags. Because tokenization was applied
after annotation, the labels produced by BRAT needed to be aligned with the
produced tokens. In some cases, this was done by recombining tokens. When
comparing the source text with the IOB data, we found that some sequences of
tokens seemed to have been accidentally reordered. An example of this is
shown in Figure 1. After reviewing the pre-processing pipeline we were able
to find the likely cause of this problem. We chose to replace this pipeline
with a simpler procedure that does no sentence splitting or combining of
tokens.
Besides sentence boundaries, our method also ignores paragraph boundaries and
white space in general.
(a) Original text:

(Eskildsen et al., Nuc. Acids Res. 31:3166-3173, 2003;
Kakuta et al., J. Interferon & Cytokine Res. 22:981-993, 2002.)

(b) Original tokenization:

Token | Eskildsen | et | al., | Nuc. | Acids | Res. | … | Res. | 22:981-993, | 2002.)(
---|---|---|---|---|---|---|---|---|---|---
Label | B | I | I | I | I | I | … | I | I | O

(c) New tokenization:

Token | ( | Eskildsen | et | al. | , | Nuc | . | Acids | … | Res | . | 22:981-993 | , | 2002 | . | )
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Label | O | B | I | I | I | I | I | I | … | I | I | I | I | I | I | O
Figure 1: Comparing the original with the new tokenization. Note that
punctuation marks are now treated as separate tokens. Also note the labelling
of the parentheses, and the wrongly labelled last token in the original
tokenization. Example taken from patent US8133710B2.
### 3.2 Inconsistent labelling
After improving the pre-processing, we still found examples of label
inconsistencies. Moreover, our models found several references that were not
included in the annotations. Finally, we found multiple instances of
references to patents and other non-academic literature. These are often hard
to distinguish from scientific literature references. We manually looked at
each difference between predicted and expected labels, and changed the
annotations where necessary. We repeated this process several times, with
different models and after retraining on the updated data. In this process, we
labelled 330 new references, resulting in a total of 2,318 references and
32,359 (I)nside tokens. We chose to include patent references when they
included author names or titles, and other non-literature references, when the
reference shares the format of a literature reference. This simplifies the
task, as the model does not have to disambiguate references by their type.
Since these extracted non-literature references will not match with the
publications in WoS, they will be filtered out in the next step of the
pipeline.
While we think this process has improved the data quality significantly, our
method does introduce biases in the training and evaluation of our models. By
only fixing labelling mistakes that our models find, we may overlook
unlabelled references that our models miss. This leads to an overestimation in
our evaluation, and biases in our model due to the feedback loop in the
training process. By using multiple different models to find incorrect
labels, we mitigate this effect to some extent. Besides an intrinsic
evaluation
using the labelled data, we will also evaluate our models on an extrinsic task
using unlabelled data. This allows us to still compare our model performance
with previous results, without biases in the dataset or overestimations.
## 4 Extracting references with BERT, BioBERT and SciBERT
We compare three different pre-trained models for extracting references from
our data set: BERT, BioBERT and SciBERT. Since our data set consists of
patents from the biomedical domain, we expect that these more domain-specific
pre-training corpora will have a positive effect on our task. Before comparing
the results between these models, we describe our method for fine tuning the
pre-trained model for reference extraction.
### 4.1 Pre-processing
BERT-based models have two characteristics that require additional
pre-processing of our dataset. BERT uses its own (WordPiece) subword
tokenization. Our dataset is already tokenized into words, as described
above, so we apply the BERT tokenizer to each word token in our dataset.
Transformer-based models such as
BERT also work on fixed sequence lengths, using padding for shorter sequences,
and are memory intensive. The models we train use a maximum sequence length of
64 tokens, limited by the memory available. Though this can be configured to
be higher depending on the available hardware and the size of the model, it is
infeasible to apply these models on complete patents, which can contain tens
of thousands of subword tokens. There are several common strategies to divide
text into shorter sequences. A natural approach is to use paragraph or
sentence splitting. We found this insufficient, as many sentences in our data
set run much longer than the limit of 64 tokens. Our data set contains not
only long sentences; even references, the entities we are looking to
extract, can be longer than 64 tokens. Because of this, we decided not to
use any semantic or structural information when splitting our text, except
for the original token boundaries.
Our BERT-specific pre-processing can be summarized in the following steps:

1. Collect the sequence of tokens $T$ and their respective labels $L$ for a given patent.
2. Create two empty lists $T^{\prime}$ and $L^{\prime}$.
3. Add the sequence start token to $T^{\prime}$.
4. While there are tokens left in $T$:
   1. Get the next token $t$ and label $l$.
   2. Use the word tokenizer to get sub-tokens $t^{\prime}_{1},...,t^{\prime}_{n}$.
   3. If $|T^{\prime}|+n+1$ is larger than our limit of 64 tokens, or when we reach the end of the document: add the sequence end token to $T^{\prime}$, pad both sequences, and add them to the data set; then set $T^{\prime}$ and $L^{\prime}$ to new empty lists.
   4. Add $t^{\prime}_{1},...,t^{\prime}_{n}$ to $T^{\prime}$ and add $l$ to $L^{\prime}$.
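The steps above can be sketched in plain Python. The `toy_subwords` tokenizer below is a stand-in for BERT's WordPiece tokenizer, `[CLS]`/`[SEP]`/`[PAD]` stand in for the special tokens, and label padding/loss masking is omitted for brevity; all names are illustrative.

```python
def chunk_patent(tokens, labels, subword_tokenize, max_len=64):
    """Greedily fill fixed-length subword sequences, flushing (with end token
    and padding) whenever the next word would not fit. Words are never split
    across sequences; labels stay one-per-word."""
    sequences = []
    cur_toks, cur_labs = ["[CLS]"], []
    for tok, lab in zip(tokens, labels):
        pieces = subword_tokenize(tok)
        if len(cur_toks) + len(pieces) + 1 > max_len:  # +1 for [SEP]
            cur_toks.append("[SEP]")
            cur_toks += ["[PAD]"] * (max_len - len(cur_toks))
            sequences.append((cur_toks, cur_labs))
            cur_toks, cur_labs = ["[CLS]"], []
        cur_toks += pieces
        cur_labs.append(lab)
    cur_toks.append("[SEP]")
    cur_toks += ["[PAD]"] * (max_len - len(cur_toks))
    sequences.append((cur_toks, cur_labs))
    return sequences

# Toy subword tokenizer: split every word into pieces of at most 3 characters.
def toy_subwords(tok):
    return [tok[i:i + 3] for i in range(0, len(tok), 3)]

seqs = chunk_patent(["Eskildsen", "et", "al"], ["B", "I", "I"],
                    toy_subwords, max_len=6)
```

With `max_len=6`, the word "al" no longer fits in the first sequence, so the reference is split across two sequences, which is exactly the situation discussed below.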
We note that the retokenization changes our task from a one-to-one to a many-
to-many sequence-to-sequence task, as there could now be multiple subword
tokens associated with one label. Another implication of these pre-processing
steps, is that the entities that we seek to extract can be split across
multiple sequences of 64 subword tokens. As mentioned earlier, we have a total
of 2,318 references and 32,359 tokens labelled as (I)nside. This gives us a
total of $34,677$ reference tokens (labelled either B or I). We find that the
average reference contains $\frac{34,677}{2,318}\approx 15$ word tokens, and
thus at least that many subword tokens. We can expect a large number of
references to be split across two or more sequences. We expect that this could
have a significant effect on the performance of our models, as the model will
not always have access to the context of a reference.
### 4.2 Training the BERT models
We fine-tuned three different BERT models on our labelled data: BERT-base,
BioBERT, and SciBERT (all cased). To fine-tune the BERT models, we use the
open-source BERT implementation by HuggingFace
(https://huggingface.co/transformers/model_doc/bert.html#bertfortokenclassification),
with a token classification head consisting of a single linear layer. In the
case that an input sequence is shorter than 64 tokens (which only occurs at
the end of a patent), we mask out the loss for the output past the input
sequence. We train the models for three epochs through our training data, with
a batch size of 32. (We published the trained models on
https://github.com/kaesve/patent-citation-extraction.)
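The loss masking for short final sequences can be illustrated with a plain-Python sketch (this is not the HuggingFace implementation; `masked_token_loss` and its inputs are illustrative names): positions whose mask is 0 contribute nothing to the mean cross-entropy.

```python
import math

def masked_token_loss(logits, labels, mask):
    """Mean cross-entropy over real token positions only; padded positions
    past the end of the input sequence (mask == 0) are skipped entirely."""
    total, count = 0.0, 0
    for scores, label, m in zip(logits, labels, mask):
        if m == 0:
            continue
        # Numerically stable log-sum-exp for the log-partition term.
        mx = max(scores)
        log_z = mx + math.log(sum(math.exp(s - mx) for s in scores))
        total += log_z - scores[label]
        count += 1
    return total / count

# Three label classes (e.g. O, B, I); the last position is padding
# and is ignored no matter how wrong its logits are.
loss = masked_token_loss(
    logits=[[2.0, 0.1, 0.1], [0.1, 2.0, 0.1], [9.0, 9.0, 9.0]],
    labels=[0, 1, 0],
    mask=[1, 1, 0],
)
```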
## 5 Results
### 5.1 Intrinsic evaluation
We evaluate our models using a leave-one-out training scheme. For each patent
in the data set we train a new model using the other 21 patents as the
training data. Aside from the maximum sequence length, we used the default
hyperparameter configurations provided by the chosen framework. We evaluate on
both the original and updated dataset.
Table 1: Comparing three BERT models and two baseline models using leave-one-out evaluation on 22 patents (micro averages). Best results are printed in boldface. * indicates results as reported by [4].

Model | Dataset | Label | Precision | Recall | F1 | Support
---|---|---|---|---|---|---
BERT | Original | B | 0.849 | 0.927 | 0.886 | 1,988
| | I | 0.896 | 0.955 | 0.924 | 28,449
SciBERT | Original | B | 0.865 | 0.925 | 0.894 | 1,988
| | I | 0.898 | 0.944 | 0.920 | 28,449
BioBERT | Original | B | 0.843 | 0.929 | 0.884 | 1,988
| | I | 0.894 | 0.952 | 0.922 | 28,449
BERT | Updated | B | 0.934 | 0.948 | 0.941 | 2,318
| | I | 0.985 | 0.972 | 0.978 | 32,359
SciBERT | Updated | B | 0.947 | 0.954 | 0.950 | 2,318
| | I | 0.986 | 0.976 | 0.981 | 32,359
BioBERT | Updated | B | 0.944 | 0.957 | 0.951 | 2,318
| | I | 0.986 | 0.974 | 0.980 | 32,359
CRF* | Original | B | 0.890 | 0.824 | 0.856 | 1,988
| | I | 0.914 | 0.870 | 0.891 | 28,449
CRF | Updated | B | 0.922 | 0.893 | 0.907 | 2,318
| | I | 0.964 | 0.938 | 0.951 | 32,359
Flair* | Original | B | 0.762 | 0.702 | 0.731 | 1,988
(Flair embeddings) | | I | 0.814 | 0.890 | 0.850 | 28,449
Flair* | Original | B | 0.722 | 0.647 | 0.682 | 1,988
(Glove embeddings) | | I | 0.789 | 0.840 | 0.814 | 28,449
Table 1 shows the results of evaluating the models on the labelled data using
leave-one-out validation. We also include the results of [4] as a baseline;
however, the results are not directly comparable, as they used five-fold
cross-validation and their models were therefore trained on less data.
Finally, we include the results of applying the original CRF implementation on
our updated dataset, using the same leave-one-out validation strategy.
We see that our new models perform reasonably well on the original dataset.
Comparing to the baseline methods, we see that the BERT models consistently
achieve a much higher recall. This is especially useful for the WoS matching
task, as was discussed earlier.
When we compare the results of our models obtained with the updated dataset to
those obtained with the original data, we see that the changes in the dataset
lead to improvements in every metric. Especially in the precision column, we
see a large jump in quality. This jump is in part the direct result of our
relabelling process. Most changes in the dataset concerned changing labels
from ‘O’ to ‘I’ or ‘B’ tokens, where our models found references that were
missed during labelling.
Comparing the BERT-based models with each other, we find that the differences
are small. With the updated data the SciBERT and BioBERT models seem to
perform slightly better than the plain BERT model.
Finally, we can compare the results of the CRF model on the original and
updated dataset. We again see a clear jump in performance. This comparison
does suffer from the training bias and different evaluation strategy mentioned
earlier. Furthermore, the CRF model uses features designed for the original
dataset. As we changed the tokenization process, this means that some of the
pattern based features do not work as intended. Still, we think the results do
show that the changes to the dataset make this task easier.
### 5.2 Extrinsic evaluation
We also apply each model to an unlabelled data set of 33,338 patents [4]. For
this application, the models are trained on the complete labelled data set.
The references produced by these models are matched against the Web of Science
database, using the same procedures as reported in [4].
From the set of 33,338 patents, we extract references to papers published in
the years 1980–2010 (the ‘focus years’). This results in a list of extracted
references. We parse them into separate fields: first author, second author,
year, journal title, volume/issue, and page numbers. Then we match those
fields to publications in the database. If we find a non-ambiguous match for a
subset of the fields, we count this as a ‘definite match’ [4].
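A minimal sketch of such field parsing, assuming simple regular-expression heuristics (this is not the authors' actual parsing or matching code; the patterns and field names are invented for illustration):

```python
import re

def parse_reference(ref):
    """Pull a few bibliographic fields out of an extracted reference string
    with naive patterns: first capitalized word as first author, a 19xx/20xx
    year, and a page range."""
    year = re.search(r"\b(19|20)\d\d\b", ref)
    pages = re.search(r"\b(\d+)\s*[-–]\s*(\d+)\b", ref)
    first_author = re.match(r"\(?\s*([A-Z][\w-]+)", ref)
    return {
        "first_author": first_author.group(1) if first_author else None,
        "year": year.group(0) if year else None,
        "pages": pages.group(0) if pages else None,
    }

fields = parse_reference("(Eskildsen et al., Nuc. Acids Res. 31:3166-3173, 2003)")
```

In the actual pipeline, a combination of such fields is then matched against WoS records, and only a non-ambiguous match counts as a 'definite match'.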
Figure 2: The number of found, parsed and matched references in the unlabelled
dataset. The results for CRF and Flair were obtained for the original dataset.
In-text citations are citations that only occur in the text and are not
included in the patent front page. The ‘focus years’ are the years 1980–2010,
for which we extract references. A ‘definite match’ is a non-ambiguously
matched publication to the reference. (also see [4] for details about the
matching procedure)
The results are displayed in Figure 2. There is a clear difference between the
new BERT-based models and the previous CRF and Flair models, but these results
are not directly comparable since CRF and Flair were trained on the original
data. The figure also shows that the three BERT models perform nearly
identically to each other. As with the results from our intrinsic evaluation,
SciBERT seems to perform better than the other two BERT models by a small
margin.
We found that our models do not always produce clean sequences of IOB tokens;
sometimes the beginning is not marked as a B, or a word in the middle of a
reference is labelled as O. We extract references from sequences of I tokens
starting with a B token or an I token preceded by an O token, and ending
before an O or B token. In the case that our model misses a word in the middle
of a reference, this means that we split this reference in two references
during extraction. Our matching script reports unique matches per patent, so
this does not lead to double-counting references. On the other hand, it could
mean that neither part of the split reference contains enough information to
make a definite match in the WoS database.
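The extraction rule described above can be sketched as follows (an illustrative reimplementation, not the exact script used in our pipeline):

```python
def extract_references(labels):
    """Extract reference spans (start, end) from a token label sequence.

    A reference starts at a B token, or at an I token that follows an O
    (models sometimes omit the B); it ends before the next O or B token.
    An O in the middle of a true reference therefore splits it in two.
    """
    spans = []
    start = None
    for i, label in enumerate(labels):
        if label == "B" or (label == "I" and start is None):
            if start is not None:        # a B closes the previous span
                spans.append((start, i))
            start = i
        elif label == "O" and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:                # close a span at end of sequence
        spans.append((start, len(labels)))
    return spans
```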
## 6 Discussion
Our results show that our BERT-based models outperform both CRF and Flair,
especially after improving the training data. While the gains in precision
and recall are likely overestimated in our intrinsic evaluation, the new models
also perform better in our extrinsic evaluation, which does not have the same
training biases. Our models were able to extract roughly 240,000 more
references that could be matched with the WoS database from the unlabelled
data than Flair could, an increase of almost 50%.
The difference between the numbers of matched publications found by CRF and
BERT is striking given the small differences in quality of the models measured
with leave-one-out validation (Table 1). This can in large part be
explained by the improved training data, but also by the higher recall for the
BERT models. In addition, we investigate two characteristics of errors made by
our models, and show the differences between BERT and CRF. We focus on
prediction errors _within_ references, as these have the largest effect on the
downstream task of parsing references. Specifically, we look at cases where
the model labels a token as O when that token is labelled as B or I in the
ground truth.
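The two error characteristics analysed here — the relative position of an O error within a gold reference, and the lengths of consecutive runs of such errors — can be computed roughly as follows (a sketch assuming token-level IOB labels; the helper names are ours):

```python
def gold_spans(gold):
    """Token index spans (start, end) of references in the gold IOB labels."""
    spans, start = [], None
    for i, g in enumerate(gold):
        if g == "B":
            if start is not None:
                spans.append((start, i))
            start = i
        elif g == "O" and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(gold)))
    return spans

def o_error_stats(pred, gold):
    """Relative positions (0..1) of predicted 'O' tokens inside gold
    references, and the lengths of consecutive runs of such errors."""
    positions, run_lengths = [], []
    for start, end in gold_spans(gold):
        run = 0
        for i in range(start, end):
            if pred[i] == "O":
                positions.append((i - start) / max(end - start - 1, 1))
                run += 1
            elif run:
                run_lengths.append(run)
                run = 0
        if run:                       # error run reaching the span's end
            run_lengths.append(run)
    return positions, run_lengths
```

Histograms of the two returned lists correspond to Figures 3 and 4, respectively.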
Figure 3: The relative positions of predicted ‘O’ labels within references,
per model.
Figure 3 shows the relative position of errors within references. This data
was captured during the leave-one-out evaluation. One major difference between
BERT and CRF-based models is that CRF explicitly learns ordered patterns in
sequences. We would expect CRF models to make errors by starting or ending a
label sequence too early or too late, but we do not usually expect errors to
occur in the middle of a reference, as CRF learns that an I never follows an
O. Without this structural prior, we expect the errors to occur more uniformly
across the references for the BERT models. The histograms seem to confirm
these intuitions. Leaving out mistakes in the first word, we see that the
distributions for the SciBERT and BioBERT models in particular seem uniform. The
CRF model shows a clear drop in the first third of the distribution, and a
steady increase in the second half.
By manually looking at references where CRF predicts an O close to the middle,
we found we could categorize these mistakes almost completely into two groups:
CRF only labelled the first or last few tokens as part of the reference, or
the reference is very long and CRF finds two references at beginning and end
of the reference. In both scenarios CRF does produce coherent sequences of a B
label followed by I labels. On the other hand, our BERT models sometimes do
not predict a B at all, or in the wrong place. The models are also prone to
missing an I label in the middle of a reference.
Figure 4 is another way to visualize this difference. Here we plot the lengths
of sequences of O’s found within references. The median error sequence length
is one or two for the BERT models, and four for CRF. In other words, BERT
models not only make fewer mistakes than CRF, but the mistakes are smaller on
average, and more uniformly spread across the reference. We speculate that
this helps with the ultimate task of parsing and matching the references. CRF
errors almost always include the first or last few tokens, which often contain
important information for parsing the reference, such as the publication year
and the author names.
Figure 4: The lengths of sequences of predicted ‘O’ labels within references,
per model.
## 7 Conclusion
We applied BERT-based models to extract references from patent texts. We found
that these models achieve better recall than CRF and Flair. We use an external
database of publications to match these references, which means that recall is
more important than precision, as imprecisions will be resolved during
matching. During the development of our models, we found that the original
dataset for this task had errors in labelling and pre-processing. We used our
models interactively to find these mistakes, and repaired them.
We find that the improved training data leads to a large improvement in the
quality of the trained models. In addition, the BERT models beat CRF and
Flair, with recall scores around 97% obtained with cross validation. Our
models were also applied to a large unlabelled dataset, and were able to
extract 50% more references than previous methods.
We also show that BERT models are prone to a different kind of error than CRF
models. Combining these methods could potentially lead to a stronger model. We
think that the limited maximal sequence size that BERT can handle affects its
performance, due to the average length of references. Recent work focuses on
modifying the attention architecture underlying BERT to better accommodate
longer sequences. This includes new models such as the Reformer, Longformer,
Linformer, Big Bird and the Performer [10]. We think these models could
achieve even better results, with little modification to our method.
## References
* Bryan et al. [2019] K. A. Bryan, Y. Ozcan, B. N. Sampat, In-Text Patent Citations: A User’s Guide, Technical Report, National Bureau of Economic Research, 2019.
* Nagaoka and Yamauchi [2015] S. Nagaoka, I. Yamauchi, The use of science for inventions and its identification: Patent level evidence matched with survey, Research Institute of Economy, Trade and Industry (RIETI) (2015).
* Bryan and Ozcan [2016] K. A. Bryan, Y. Ozcan, The impact of open access mandates on invention, Mimeo, Toronto (2016).
* Verberne et al. [2019] S. Verberne, I. Chios, J. Wang, Extracting and matching patent in-text references to scientific publications, in: Proceedings of the 4th Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2019), 2019, pp. 56–69.
* Wallach [2004] H. M. Wallach, Conditional random fields: An introduction, Technical Reports (CIS) (2004) 22\.
* Akbik et al. [2019] A. Akbik, T. Bergmann, D. Blythe, K. Rasul, S. Schweter, R. Vollgraf, Flair: An easy-to-use framework for state-of-the-art NLP, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), 2019, pp. 54–59.
* Devlin et al. [2019] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, pp. 4171–4186.
* Beltagy et al. [2019] I. Beltagy, A. Cohan, K. Lo, SciBERT: Pretrained contextualized embeddings for scientific text, CoRR abs/1903.10676 (2019). URL: http://arxiv.org/abs/1903.10676. arXiv:1903.10676.
* Lee et al. [2019] J. Lee, W. Yoon, S. Kim, D. Kim, S. Kim, C. H. So, J. Kang, BioBERT: A pre-trained biomedical language representation model for biomedical text mining, CoRR abs/1901.08746 (2019). URL: http://arxiv.org/abs/1901.08746. arXiv:1901.08746.
* Tay et al. [2020] Y. Tay, M. Dehghani, D. Bahri, D. Metzler, Efficient transformers: A survey, arXiv preprint arXiv:2009.06732 (2020).
# Spacing Homogenization in Lamellar Eutectic Arrays with Anisotropic
Interphase Boundaries
M. Ignacio, M. Plapp Laboratoire de Physique de la Matière Condensée, Ecole
Polytechnique, CNRS, 91128 Palaiseau, France.
(25 November 2019)
###### Abstract
We analyze the effect of interphase boundary anisotropy on the dynamics of
lamellar eutectic solidification fronts, in the limit that the lamellar
spacing varies slowly along the envelope of the front. In the isotropic case,
it is known that the spacing obeys a diffusion equation, which can be obtained
theoretically by making two assumptions: (i) the lamellae always grow normal
to the large-scale envelope of the front, and (ii) the Jackson-Hunt law that
links lamellar spacing and front temperature remains locally valid. For
anisotropic boundaries, we replace hypothesis (i) by the symmetric pattern
approximation, which has recently been found to yield good predictions for
lamellar growth direction in presence of interphase anisotropy. We obtain a
generalized Jackson-Hunt law for tilted lamellae, and an evolution equation
for the envelope of the front. The latter contains a propagative term if the
initial lamellar array is tilted with respect to the direction of the
temperature gradient. However, the propagation velocity of these wave modes
is found to be small, so that the dynamics of the front can be
reasonably described by a diffusion equation with a diffusion coefficient that
is modified with respect to the isotropic case.
## I Introduction
Eutectic alloys solidify into two-phase composite solids for a wide range of
compositions. The geometric structure of the composite is the result of a
pattern-formation process that takes place at the solid-liquid interface. The
patterns are shaped by the interplay between solute diffusion through the
liquid and capillary forces at the interfaces. This leads to the emergence of
lamellae if the volume fractions of the two phases are comparable. For
strongly different volume fractions, fibers of the minority phase inside a
matrix of the majority phase are found.
Eutectic solidification can be studied under well-controlled conditions by
directional solidification of thin samples Jackson _et al._ (1966);
Seetharaman and Trivedi (1988); Ginibre _et al._ (1997). In this geometry,
most often the lamellar morphology emerges, and the crystallization front is
quasi-one-dimensional. In the absence of external perturbations and boundary
effects, the lamellar pattern generally becomes more regular with time, that
is, the lamellar spacing gets more and more homogeneous.
In a seminal paper, Jackson and Hunt have analyzed steady-state growth of
eutectic composites Jackson _et al._ (1966). They established a relation
between the average undercooling at the solidification front $\Delta T$ – the
difference between the front temperature and the eutectic temperature – and
the lamellar spacing $\lambda$. The curve $\Delta T(\lambda)$ exhibits a
single minimum; the spacings observed in experiments on extended samples are
typically distributed in a narrow range around the spacing $\lambda_{m}$ that
corresponds to this minimum Trivedi _et al._ (1991).
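In its standard form (quoted here for orientation; the constants $K_{1}$ and $K_{2}$ lump together phase-diagram and capillary parameters and are not part of this paper's notation), the Jackson-Hunt relation reads

$\Delta T(\lambda)=K_{1}V\lambda+\frac{K_{2}}{\lambda},\qquad\lambda_{m}=\sqrt{\frac{K_{2}}{K_{1}V}},\qquad\Delta T(\lambda_{m})=2\sqrt{K_{1}K_{2}V},$

so that the minimum-undercooling spacing scales as $\lambda_{m}\propto V^{-1/2}$.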
Jackson and Hunt also qualitatively analyzed the stability of lamellar arrays,
under the hypothesis (which they attributed to Cahn) that lamellae always grow
in the direction that is perpendicular to the large-scale envelope of the
lamellar front. Then, in a (convex) bump of the front, the spacing gets larger
when solidification proceeds. If the undercooling increases with the spacing,
then the bump recedes in the temperature gradient and the front is stable; in
contrast, if the undercooling decreases with increasing spacing, the bump
advances further and the front is unstable. Eventually, the amplification of
the front deformation will lead to lamella pinchoff and elimination. The
stability of the front hence depends on the slope of the curve $\Delta
T(\lambda)$.
These arguments were later formalized by Langer and co-workers Langer (1980);
Datye and Langer (1981). They established that the spacing obeys a diffusion
equation, as is generally the case for one-dimensional pattern-forming systems
that exhibit a characteristic length scale Manneville (1991); Cross and
Hohenberg (1993). The spacing diffusion coefficient is proportional to
$d\Delta T/d\lambda$ and becomes negative for spacings smaller than
$\lambda_{m}$. This means that the array is unstable for spacings smaller than
$\lambda_{m}$.
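Schematically (a sketch of the structure of the result, not the precise expressions of Langer (1980); Datye and Langer (1981)), the local spacing $\lambda(x,t)$ along the front evolves as

$\partial_{t}\lambda=\partial_{x}\left[D(\lambda)\,\partial_{x}\lambda\right],\qquad D(\lambda)\propto\frac{\mathrm{d}\Delta T}{\mathrm{d}\lambda},$

so that spacing perturbations decay diffusively when $D>0$ ($\lambda>\lambda_{m}$) and are amplified when $D<0$ ($\lambda<\lambda_{m}$).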
When experiments and numerical simulations became precise enough to directly
test these predictions, it was found that the normal growth hypothesis was not
exactly satisfied: the trijunctions also slightly move along the front
envelope, which gives an additional contribution to the spacing diffusion
coefficient that is always positive and hence stabilizing Akamatsu _et al._
(2002, 2004). Whereas it is likely that this lateral drift of the trijunctions
originates from the interaction of the diffusion field in the liquid, which
depends on the local lamellar spacing, and the shape of the solidification
front at the scale of the individual lamellae, no quantitative analytic
expression for this contribution has been obtained so far. Instead, a single
phenomenological parameter was fitted, which could reproduce the results of
both simulations and experiments.
All the theoretical analyses cited above neglect crystallographic effects, and
assume that all the interfaces are isotropic. This is also a standard
assumption made in numerical simulations Kassner and Misbah (1991); Karma and
Sarkissian (1996); Parisi and Plapp (2008, 2010). However, crystallographic
effects are often important. This is obviously the case for irregular
eutectics, in which one or both of the solid-liquid interfaces are facetted.
But even in alloys where both solid-liquid interfaces are microscopically
rough, crystallographic effects can come into play through the solid-solid
interfaces. In a eutectic grain, the two solid phases have a fixed relative
orientation with respect to each other, which can differ between different
grains. A distinction has been made between “floating” grains, in which the
solid-solid interfaces (interphase boundaries, IB) are isotropic, and “locked”
grains, in which they are anisotropic and tend to follow certain
crystallographic directions Caroli _et al._ (1992). In locked grains,
lamellae can grow tilted with respect to the direction of the temperature
gradient, which clearly violates the hypothesis of normal growth.
This behavior was recently studied in more detail by the new method of
rotating directional solidification Akamatsu _et al._ (2012a). The results
can be interpreted by taking into account the torque that is exerted on the
triple line by the anisotropy of the solid-solid interfaces. Instead of the
interphase boundary itself, it is now the generalized surface tension vector
$\vec{\sigma}$ Hoffman and Cahn (1972), which combines surface tension and
torque, that is perpendicular to the front envelope. Since this entails that,
in steady state, the solid-liquid interfaces have a mirror-symmetric shape
with respect to the center of each lamellae, this hypothesis was called
symmetric pattern approximation (SPA) Akamatsu _et al._ (2012b). The SPA
makes it possible to predict the growth direction of the lamellae in steady
state if the anisotropic surface free energy of the IB is known. Good
agreement between the SPA and numerical simulations using boundary-integral
and phase-field techniques was found Ghosh _et al._ (2015).
Here, we analyze how this torque alters the “geometric part” of the spacing
relaxation mechanism. In other words, we examine the consequences of
replacing Cahn’s ansatz with the SPA. In a first step, we generalize the
Jackson-Hunt calculation, taking into account that in the presence of
interphase boundary anisotropy the steady state is tilted. We demonstrate that
the relation between undercooling and spacing keeps the same form, with the
value of the minimum undercooling and the corresponding spacing depending on
the tilting angle.
The tilt has a dramatic effect on the spacing dynamics because it induces a
breaking of the parity (right-left) symmetry in the base state. In the case
where the growth direction is aligned with an extremum of the interphase
boundary energy, the torque and thus the tilt angle are zero. Then, the
evolution equation for the lamellar spacing is again a diffusion equation, but
with a diffusion coefficient that is modified by the interfacial anisotropy.
In contrast, when the base state is tilted, no closed-form evolution equation
for the spacing can be written down. Instead, an equation for the front shape
can be formulated, which is shown to have propagative solutions that can be
damped or amplified with time.
## II Model
### II.1 Directional Solidification
We consider the solidification of a binary eutectic alloy into two distinct
solid phases called $\alpha$ and $\beta$. The sample is solidified by pulling
it with a constant velocity $V$ from a hot to a cold zone; the externally
imposed temperature gradient is aligned with the pulling direction, and its
magnitude is denoted by $G$. For a sufficiently thin sample, a two-dimensional
treatment is appropriate. The solid consists of a succession of pairs of
lamellae of the phases $(\alpha,\beta)$. In order to write down the system of
equations ruling the evolution of the composition field, we assume that:
* •
The molar densities of the solid and liquid phases are the same so that the
total volume remains constant in time.
* •
Diffusion in the solid phases is neglected (one-sided model).
* •
Solute transport in the liquid is much slower than heat transport (i.e. high
Lewis number limit).
* •
Convection in the liquid is neglected (solute transport occurs only by
diffusion). This is appropriate for a thin sample.
* •
Elasticity and plasticity in the solid phases are neglected.
* •
Heat conductivities are equal in all phases, so that the temperature field is
independent of the shape of the solid-liquid interface.
* •
The latent heat rejected during solidification can be neglected.
Consequently (last two points), the temperature field is given by the frozen
temperature approximation,
$T(x,z,t)=T_{E}+G(z-Vt),$ (1)
in the sample frame $\mathcal{R}_{0}(\hat{x}_{0},\hat{z}_{0})$, where
$\hat{z}_{0}$ is the direction of the pulling velocity and the temperature
gradient. For convenience, we have chosen that the coordinate $z=0$
corresponds to the eutectic temperature $T_{E}$ at $t=0$.
### II.2 Free-Boundary Problem
Under the assumptions listed above, the fundamental free-boundary problem that
describes eutectic solidification is readily written down. In the liquid, the
concentration field $C(x,z,t)$ obeys the diffusion equation,
$\frac{\partial C}{\partial t}=D\vec{\nabla}^{2}C,$ (2)
with $D$ the solute diffusivity in the liquid. This equation has to be solved
subject to the Gibbs-Thomson equation at the solid-liquid interface. The shape
of the solid-liquid interface is described by the function $z_{\rm int}(x,t)$;
the interface undercooling is given by
$\Delta T=T_{E}-T(z_{\rm int}(x,t))=\Delta T_{D}+\Delta T_{c}+\Delta T_{k}$
(3)
with $T_{E}$ the eutectic temperature, $T(z_{\rm int}(x,t))$ the temperature
at the solid/liquid interface, and $\Delta T_{D}$, $\Delta T_{c}$, $\Delta
T_{k}$ stand respectively for the diffusion, capillary and kinetic
contributions. The first term links the concentration at the interface to the
interface temperature according to
$\Delta T_{D}=-m_{i}(C_{i}(x,z_{\rm int}(x,t),t)-C_{E}),$ (4)
with $m_{i}=\mathrm{d}T/\mathrm{d}C_{i}$ the liquidus slope of phase $i$,
$C_{i}(x,z_{\rm int})$ the concentration on the liquid side of the interface
and $C_{E}$ the eutectic composition. The term $\Delta T_{c}$ arises from the
capillary force that shifts the melting point by an amount that is
proportional to the interface curvature $\kappa$ (we recall that we assume
that the solid-liquid interfaces are isotropic):
$\Delta T_{c}=\frac{\gamma_{iL}T_{E}}{L_{i}}\kappa,$ (5)
with $\gamma_{iL}$ the solid/liquid surface tension and $L_{i}$ the latent
heat per unit volume for phase $i$.
Finally, the kinetic contribution reads
$\Delta T_{k}=\frac{V_{n}}{\mu_{i}},$ (6)
with $V_{n}$ the local velocity normal to the interface and $\mu_{i}$ the
linear kinetic coefficient (the interface mobility).
The free-boundary problem is completed by the Young-Herring equation, to be
discussed below, and the Stefan condition that expresses the conservation of
solute at the moving solid-liquid interface,
$V_{n}\Delta C_{sl}^{i}=-\hat{n}\cdot D\vec{\nabla}C,$ (7)
where $\Delta C_{sl}^{i}$ is the concentration difference between the solid
$i$ ($i=\alpha,\beta$) and the liquid, $\hat{n}$ is the unit vector normal to
the S/L interface, and $V_{n}$ the normal velocity of the interface. Since it
turns out that the solid-liquid interfaces always remain close to the eutectic
temperature for slow growth, it is a good approximation to set $\Delta
C_{sl}^{i}$ equal to the equilibrium concentration differences at $T_{E}$.
Since the temperature field is set by Eq. (1), the interface position
satisfies
$\Delta T=-G(z_{\rm int}-Vt).$ (8)
One can obtain a dimensionless formulation of the Gibbs-Thomson law by
defining the dimensionless composition field
$c(x,z,t)=\frac{C(x,z,t)-C_{E}}{\Delta C},$ (9)
where $\Delta C=C^{s}_{\beta}-C^{s}_{\alpha}$ is the width of the eutectic
plateau in the phase diagram, with $C_{\alpha}^{s}$ and $C_{\beta}^{s}$ the
concentrations of the solid phases.
Using Eqs. (8) and (9), the dimensionless Gibbs-Thomson law becomes (the minus
sign is for the $\alpha-$phase, the plus for the $\beta-$phase),
$\frac{\Delta T}{|m_{i}|\Delta C}=-\frac{z_{\rm int}-Vt}{\ell_{T}^{i}}=\mp
c_{i}(x,z_{\rm int})+d_{i}\kappa+\beta_{i}V_{n}$ (10)
where
$\ell_{T}^{i}=\frac{|m_{i}|\Delta C}{G}$ (11)
are the thermal lengths,
$d_{i}=\frac{\gamma_{iL}T_{E}}{|m_{i}|L_{i}\Delta C}$ (12)
are the capillary lengths, and
$\beta_{i}=\frac{1}{|m_{i}|\mu_{i}\Delta C}$ (13)
are the kinetic coefficients. We also introduce the diffusion length
$\ell_{D}=\frac{D}{V}$ (14)
with $D$ the diffusion coefficient of the solute in the liquid phase.
For most metallic alloys and their organic analogs that have microscopically
rough solid-liquid interfaces, the kinetic term $\Delta T_{k}$ can be
neglected compared to both $\Delta T_{D}$ and $\Delta T_{c}$ Kramer and Tiller
(1965). This term will be dropped from now on.
### II.3 Young-Herring Equation
As already mentioned above, the free-boundary problem is completed by the
Young-Herring equation, which is a statement of capillary force balance at the
triple lines (triple points in the quasi-two-dimensional approximation).
Before stating it, let us make a few more comments on the crystallography of
eutectics.
For alloy systems with microscopically rough solid-liquid interfaces, the
interface free energy of the solid-liquid interfaces depends only weakly on
the interface orientation – it varies typically only by a few percent.
Therefore, we will assume in this work that the solid-liquid interfaces are
isotropic. In contrast, the solid-solid interfaces (interphase boundaries, IB)
may be strongly anisotropic. A eutectic composite consists of eutectic grains.
In each grain, all the domains of a given phase ($\alpha$ or $\beta$) have the
same orientation. The relative orientation of $\alpha$ and $\beta$ is
therefore fixed in a given grain, but may vary between different grains.
However, the IB can still freely choose its orientation; therefore, an IB
energy $\gamma_{\alpha\beta}(\hat{n}_{\alpha\beta})$ may be defined as a
function of orientation. Here, $\hat{n}_{\alpha\beta}$ is the unit normal
vector of the IB, which is equivalent to two polar angles in three dimensions.
However, in the quasi-two-dimensional approximation for a thin sample, the IB
are supposed to remain perpendicular to the sample walls, and therefore the IB
can explore only the orientations that lie within the sample plane. As a
consequence, $\gamma_{\alpha\beta}$ is a function of a single angle $\phi$
(the polar angle in the sample plane), and the function
$\gamma_{\alpha\beta}(\phi)$ is the intersection of the full three-dimensional
$\gamma$-plot and the sample plane.
In the following, we denote by $\gamma_{\alpha\beta}(\phi)$ the orientation-
dependent IB energy in the crystallographic frame, that is, with respect to
some reference axis of the crystal. If the sample is rotated with respect to
the laboratory frame by an angle $\phi_{R}$, as can be done in the method of
rotating directional solidification Akamatsu _et al._ (2012a), the
orientation-dependent IB energy in the laboratory frame will be given by
$\gamma_{\alpha\beta}(\phi-\phi_{R})$, where $\phi$ is the angle between the
IB orientation and the direction of the temperature gradient. We choose that
$\phi_{R}=0$ corresponds to a state in which a minimum of the IB energy is
aligned with the temperature gradient. As a generic example, we use $n$-fold
harmonic functions of the form
$\gamma_{\alpha\beta}(\phi)=\gamma_{0}[1-\epsilon\cos(n(\phi-\phi_{R}))]$ (15)
where $\epsilon$ is the anisotropy strength and $\gamma_{0}$ the average
surface tension, which will be set to $1$ in the following. It should be mentioned
that for a eutectic consisting of crystals with centrosymmetric unit cells, a
two-fold symmetry ($n=2$) of the IB energy is always present.
The force balance at trijunctions can be easily stated using the Cahn-Hoffman
formalism Hoffman and Cahn (1972); Wheeler (1999). Let $\hat{n}_{\alpha\beta}$
be the unit normal vector to the solid interphase and
$\hat{t}_{\alpha\beta}=-\mathrm{d}\hat{n}_{\alpha\beta}/\mathrm{d}\phi$ the
unit tangential vector to the solid interphase. With these definitions, the
Cahn-Hoffman vector reads
$\vec{\xi}_{\alpha\beta}=\gamma_{\alpha\beta}(\phi)\hat{n}_{\alpha\beta}-\gamma_{\alpha\beta}^{\prime}(\phi)\hat{t}_{\alpha\beta}.$
(16)
In addition, in two dimensions, one can define a unique generalized surface
tension vector as
$\vec{\sigma}_{\alpha\beta}=\gamma_{\alpha\beta}(\phi)\hat{t}_{\alpha\beta}+\gamma_{\alpha\beta}^{\prime}(\phi)\hat{n}_{\alpha\beta}.$
(17)
The equilibrium shape of a $\beta$ inclusion in an $\alpha$ crystal (or an
$\alpha$ inclusion in a $\beta$ crystal) is given by the inner envelope of the
polar plot of $\xi$ ($\xi-$plot). When the stiffness
$\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta}\leq 0$, a range of
orientations is excluded from the equilibrium shape (missing orientations,
MO), which then exhibits sharp corners. For an $n$-fold solid-solid interfacial
free energy of the form given by Eq. (15), the condition for negative
stiffness reads $\epsilon\geq(n^{2}-1)^{-1}$. We display the polar plots of
$\gamma$ ($\gamma-$plot) and $\xi$ ($\xi-$plot) in Fig. 1, for $\epsilon=0.05$
(without MO) and for $\epsilon=0.15$ (with MO) for a 4-fold interface free
energy.
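The negative-stiffness condition can be checked numerically (a quick illustrative sketch; for the $4$-fold energy of Eq. (15), the threshold is $\epsilon=1/15\approx 0.067$, so $\epsilon=0.05$ gives no missing orientations while $\epsilon=0.15$ does):

```python
import numpy as np

def stiffness(phi, n=4, eps=0.05, phi_R=0.0):
    """Interphase-boundary stiffness gamma + gamma'' for the n-fold
    anisotropy gamma(phi) = 1 - eps*cos(n*(phi - phi_R)).
    Analytically this equals 1 + eps*(n**2 - 1)*cos(n*(phi - phi_R)),
    which turns negative somewhere once eps > 1/(n**2 - 1)."""
    psi = phi - phi_R
    gamma = 1.0 - eps * np.cos(n * psi)
    gamma_pp = eps * n**2 * np.cos(n * psi)   # second derivative of gamma
    return gamma + gamma_pp

phi = np.linspace(0.0, 2.0 * np.pi, 2001)
no_mo = stiffness(phi, eps=0.05).min()    # eps < 1/15: positive everywhere
with_mo = stiffness(phi, eps=0.15).min()  # eps > 1/15: missing orientations
```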
The vector $\vec{\sigma}_{\alpha\beta}$ gives the surface tension force. It
allows us to write the Young-Herring equation at the trijunction
$\vec{\gamma}_{\alpha\ell}+\vec{\gamma}_{\beta\ell}+\vec{\sigma}_{\alpha\beta}(\phi)=0$
(18)
where $\vec{\gamma}_{\alpha\ell}=\gamma_{\alpha\ell}\hat{t}_{\alpha\ell}$ and
$\vec{\gamma}_{\beta\ell}=\gamma_{\beta\ell}\hat{t}_{\beta\ell}$, with
$\hat{t}_{\alpha\ell}$ and $\hat{t}_{\beta\ell}$, respectively the tangential
unit vectors to the $\alpha-$ and $\beta-$liquid interfaces pointing away from
the trijunction.
Figure 1: $\gamma$-plot and $\xi$-plot of the interfacial free energy given by
$\gamma_{\alpha\beta}=1-\epsilon\cos(n(\phi-\phi_{R}))$ with $\phi_{R}=\pi/8$
and $n=4$. Left: $\epsilon=0.02$ (without missing orientation). Right:
$\epsilon=0.2$ (with missing orientations). The orange dots indicate the
positions of the minima of $\gamma_{\alpha\beta}$.
## III Symmetric Pattern Approximation (SPA)
Consider a steady-state lamellar array with a regular spacing $\lambda_{0}$
(called “undeformed state” in the following). In presence of an interphase
anisotropy, the solid interphase can exhibit a tilting angle $\phi_{0}$ with
respect to the direction of the thermal gradient. We introduce the frame of
study $\mathcal{R}(\hat{x},\hat{z})$, moving at constant velocity
$\vec{V}=V(\hat{z}+\tan\phi_{0}\hat{x})$ with respect to the sample frame
$\mathcal{R}_{0}(\hat{x}_{0},\hat{z}_{0})$. This means that the trijunction
points drift laterally with a velocity $V_{\parallel}=V\tan\phi_{0}$, see Fig.
2 for the notations.
Figure 2: Schematics of a tilted reference state $\\{\lambda_{0},\phi_{0}\\}$
under directional solidification conditions.
Experiments and numerical simulations show that the “heads” of the lamellae
are approximately mirror symmetric with respect to the midplane of the
lamellae. Therefore, the contact angles (the angles between the solid-liquid
interfaces and the direction of the isotherms $\hat{x}$) of the solid-liquid
interfaces at the trijunctions are also approximately the same on both sides
of a lamella. This is only possible if the surface tension vector
$\vec{\sigma}_{\alpha\beta}$ is approximately perpendicular to the envelope of
the solid-liquid front. The assumption that $\vec{\sigma}_{\alpha\beta}$ is
exactly perpendicular to the front was called in Ref. Akamatsu _et al._
(2012b) the symmetric pattern approximation (SPA). Introducing the unit
vectors parallel $\hat{t}_{f}$ and perpendicular $\hat{n}_{f}$ to the large-
scale solid-liquid front, the SPA reads
$\vec{\sigma}_{\alpha\beta}\cdot\hat{t}_{f}=0.$ (19)
Consequently, the Young-Herring condition Eq. (18) expressed in the basis
formed by $\\{\hat{t}_{f},\hat{n}_{f}\\}$ reads
$\gamma_{\alpha\ell}\cos(\theta_{\alpha})-\gamma_{\beta\ell}\cos(\theta_{\beta})=0,$ (20)
$\gamma_{\alpha\ell}\sin(\theta_{\alpha})+\gamma_{\beta\ell}\sin(\theta_{\beta})=|\vec{\sigma}_{\alpha\beta}(\phi_{0}-\phi_{R})|.$
where $\theta_{\alpha}$ and $\theta_{\beta}$ are the contact angles, both
taken positive. In the following, we investigate the consequences of the SPA
first for a front that is perpendicular to the pulling direction, and then for
a tilted solid-liquid front, see Fig. 3.
Figure 3: Schematics of the angles and vectors using the Symmetric Pattern
Approximation (SPA) for a) a planar and b) a tilted S/L front.
### III.1 Base State: Front Perpendicular to the Growth Direction
For an undeformed steady-state, see Fig. 3(a), the SPA given by Eq. (19) leads
to
$\phi_{0}=-\arctan\left(\frac{\gamma_{\alpha\beta}^{\prime}(\phi_{0}-\phi_{R})}{\gamma_{\alpha\beta}(\phi_{0}-\phi_{R})}\right).$
(21)
The solution of Eq. (21) gives the steady-state tilt angle as a function of
the orientation of the bicrystal, $\phi_{R}$. It is worth noting that Eq. (21)
has one single solution if the stiffness
$\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta}>0$ for all
orientations, and can have up to three solutions if
$\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta}<0$ for some range of
orientations. One may distinguish stable, meta-stable and unstable branches
that can be associated with the features of the $\xi$-plot Cabrera (1964);
Philippe _et al._ (2018), see appendix A for details. For cases with MO, the
system will select one of the two stable branches for a fixed $\phi_{R}$. In
contrast, if the system is brought to an initial state located on the unstable
branch, we expect that the Herring instability Herring (1951) will appear.
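Since Eq. (21) is an implicit equation for $\phi_{0}$, it must be solved numerically; for weak anisotropy a simple fixed-point iteration suffices (an illustrative sketch, not the solver used for Fig. 4):

```python
import math

def tilt_angle(phi_R, n=4, eps=0.02, iters=200):
    """Solve Eq. (21) for the steady-state tilt angle phi_0 by fixed-point
    iteration, for gamma(psi) = 1 - eps*cos(n*psi). The iteration map is a
    contraction for weak anisotropy (roughly eps*n**2 < 1)."""
    phi0 = 0.0
    for _ in range(iters):
        psi = phi0 - phi_R
        gamma = 1.0 - eps * math.cos(n * psi)
        gamma_prime = eps * n * math.sin(n * psi)
        phi0 = -math.atan2(gamma_prime, gamma)
    return phi0
```

For strong anisotropy with missing orientations, a single fixed-point iteration is insufficient, since Eq. (21) then admits up to three branches.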
In figure 4, we plot the solution of Eq. (21) for $\phi_{0}$ as a function of
$\phi_{R}$ for a surface energy with $4$-fold symmetry with
$\epsilon=0.02,0.05$ (without MO) and $0.15$ (with MO). The symbols indicate
the limit of the metastable branches. The amplitude of the variation of
$\phi_{0}$ always increases with $\epsilon$.
Figure 4: Steady-state tilting angle $\phi_{0}$ as a function of the rotation
angle $\phi_{R}$ within the SPA given by Eq. (21). The solid/solid surface
tension corresponds to
$\gamma_{\alpha\beta}=1-\epsilon\cos(4(\phi_{0}-\phi_{R}))$ with
$\epsilon=0.02$, $0.05$ and $0.15$. For $\epsilon=0.15$, the dashed curves
correspond to the stable and the metastable branches. The green $\blacksquare$
indicate the beginning of the metastable branches. The dotted curve
corresponds to the unstable branch. The limit between metastable and unstable
branches is marked by the green $\bullet$.
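As a numerical illustration of Eq. (21), the following Python sketch (our own construction; the function names and the bisection bracket are illustrative choices, not part of the analysis) solves for the steady-state tilt angle in the single-solution regime of positive stiffness:

```python
import math

def gamma(theta, eps, n=4):
    """Interphase energy gamma_ab = 1 - eps*cos(n*theta), theta = phi0 - phi_R."""
    return 1.0 - eps * math.cos(n * theta)

def dgamma(theta, eps, n=4):
    """Derivative gamma'_ab with respect to theta."""
    return n * eps * math.sin(n * theta)

def steady_tilt(phi_R, eps, n=4):
    """Solve Eq. (21), phi0 = -arctan(gamma'/gamma), by bisection.
    For positive stiffness the root is unique, and since
    |phi0| <= arctan(n*eps/(1-eps)), that bound brackets it."""
    B = math.atan(n * eps / (1.0 - eps)) + 1e-12
    def g(phi0):
        th = phi0 - phi_R
        return phi0 + math.atan2(dgamma(th, eps, n), gamma(th, eps, n))
    a, b = -B, B
    for _ in range(80):
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)
```

For a 4-fold energy with eps = 0.05 (no missing orientations), phi0 vanishes both at the minima (phi_R = 0) and at the maxima (phi_R = pi/4) of the anisotropy, and lies between 0 and phi_R in between, consistent with Fig. 4.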
Furthermore, applying the transformation
$\phi_{R}\rightarrow\phi_{R}+\delta\phi_{R}$ and
$\phi_{0}\rightarrow\phi_{0}+\delta\phi_{0}$, Eq. (21) leads to the relation
$\delta\phi_{0}=\frac{\gamma^{\prime\prime}_{\alpha\beta}\gamma_{\alpha\beta}-\gamma^{{}^{\prime}2}_{\alpha\beta}}{\gamma_{\alpha\beta}(\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta})}\delta\phi_{R}.$
(22)
For cases without MO, according to Eq. (22), the values of $\phi_{R}$
corresponding to a sign change of the slope of the curve $\phi_{0}(\phi_{R})$
are given by the solutions of
$\gamma^{\prime\prime}_{\alpha\beta}\gamma_{\alpha\beta}-\gamma^{{}^{\prime}2}_{\alpha\beta}=0$,
see Fig. 5. Close to a minimum of anisotropy (i.e. $\phi_{R}$
mod$[2\pi/n]=0$), the sign of $\delta\phi_{0}/\delta\phi_{R}$ is positive,
which means that the tilting angle $\phi_{0}$ tends to “follow” the rotation
$\delta\phi_{R}$. Conversely, around a maximum of anisotropy (i.e. $\phi_{R}$
mod$[2\pi/n]=\pi/n$), the slope of $\phi_{0}(\phi_{R})$ is negative, and
therefore, any change of the crystallographic angle $\delta\phi_{R}$ will
lead to a change of the tilting angle in the opposite direction.
Figure 5: Evolution of the slope $\delta\phi_{0}/\delta\phi_{R}$ with respect
to the crystallographic angle $\phi_{R}$ within the SPA given by Eq. (22). The
S/S surface tension corresponds to
$\gamma_{\alpha\beta}=1-\epsilon\cos(4(\phi_{0}-\phi_{R}))$ with
$\epsilon=0.02$, $0.05$ and $0.15$. The dotted curves give the metastable and
the unstable branches for $\epsilon=0.15$.
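The sign structure of Eq. (22) at the two symmetry orientations can be checked directly; the snippet below (an illustrative sketch with our own naming, for the 4-fold energy used in the figures) evaluates the closed-form slope at a minimum (theta = 0) and at a maximum (theta = -pi/4) of the anisotropy:

```python
import math

def slope_dphi0_dphiR(theta, eps, n=4):
    """Right-hand-side factor of Eq. (22), evaluated at theta = phi0 - phi_R,
    for gamma_ab = 1 - eps*cos(n*theta)."""
    g  = 1.0 - eps * math.cos(n * theta)
    g1 = n * eps * math.sin(n * theta)
    g2 = n * n * eps * math.cos(n * theta)
    return (g2 * g - g1 * g1) / (g * (g + g2))

eps = 0.05
s_min = slope_dphi0_dphiR(0.0, eps)           # at a minimum of anisotropy
s_max = slope_dphi0_dphiR(-math.pi / 4, eps)  # at a maximum of anisotropy
```

One finds s_min = 16 eps/(1 + 15 eps) > 0 (the tilt follows the rotation) and s_max = -16 eps/(1 - 15 eps) < 0, in line with the discussion above.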
### III.2 Inclined Front
Considering an inclined planar front, see Fig. 3(b), characterized by the
angle $\alpha$ between $\hat{z}$ and $\hat{n}_{f}$, the unit vectors normal
and tangential to the front read
$\displaystyle\hat{n}_{f}$ $\displaystyle=$
$\displaystyle-\sin(\alpha)\hat{x}+\cos(\alpha)\hat{z},$ (23)
$\displaystyle\hat{t}_{f}$ $\displaystyle=$
$\displaystyle\cos(\alpha)\hat{x}+\sin(\alpha)\hat{z}.$
The coordinates of the interphase unit vectors in the frame
$\mathcal{R}(\hat{x},\hat{z})$ read
$\displaystyle\hat{n}_{\alpha\beta}$ $\displaystyle=$
$\displaystyle\cos(\phi)\hat{x}+\sin(\phi)\hat{z},$ (24)
$\displaystyle\hat{t}_{\alpha\beta}$ $\displaystyle=$
$\displaystyle-\frac{\mathrm{d}\hat{n}_{\alpha\beta}}{\mathrm{d}\phi}=\sin(\phi)\hat{x}-\cos(\phi)\hat{z}.$
The SPA reads
$\hat{\sigma}_{\alpha\beta}\cdot\hat{t}_{f}=0\Rightarrow-\tan\alpha=\frac{\gamma_{\alpha\beta}\tan\phi+\gamma^{\prime}_{\alpha\beta}}{\gamma^{\prime}_{\alpha\beta}\tan\phi-\gamma_{\alpha\beta}}$
(25)
Introducing the angle $\psi$ between the solid interphase and
$\vec{\sigma}_{\alpha\beta}$, such that
$\tan\psi=-\gamma^{\prime}_{\alpha\beta}/\gamma_{\alpha\beta}$, into Eq.
(19) leads to the simple geometric relation
$\alpha=\phi-\psi.$ (26)
In addition, for the isotropic case (i.e. $\gamma_{\alpha\beta}^{\prime}=0$),
one gets $\alpha=\phi$ which corresponds to Cahn’s ansatz (the solid
interphase remains perpendicular to the solid/liquid front during growth).
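The equivalence between Eq. (25) and the geometric relation (26) can be verified numerically; the following sketch (our own check, with arbitrary illustrative values of gamma and gamma') compares both sides:

```python
import math

def check_spa_angle_identity(phi, gam, gam1):
    """Verify that alpha = phi - psi (Eq. 26), with tan(psi) = -gam1/gam,
    satisfies the SPA condition of Eq. (25):
    -tan(alpha) = (gam*tan(phi) + gam1) / (gam1*tan(phi) - gam)."""
    psi = math.atan2(-gam1, gam)
    alpha = phi - psi                       # Eq. (26)
    rhs = (gam * math.tan(phi) + gam1) / (gam1 * math.tan(phi) - gam)
    return math.isclose(-math.tan(alpha), rhs, rel_tol=1e-9)
```

In the isotropic case gam1 = 0, one has psi = 0 and alpha = phi, recovering Cahn's ansatz.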
Applying the transformation $\phi\rightarrow\phi_{0}+\delta\phi$ and
$\alpha\rightarrow\alpha_{0}+\delta\alpha$ (with $\alpha_{0}=0$) to Eq.
(25), one obtains
$\displaystyle\delta\phi$ $\displaystyle=$
$\displaystyle\left(1-\left.\frac{\mathrm{d}\psi}{\mathrm{d}\phi}\right|_{\phi_{0}}\right)^{-1}\delta\alpha,$
$\displaystyle=$ $\displaystyle A_{SPA}(\phi_{R})\delta\alpha,$
where the anisotropy function $A_{SPA}(\phi_{R})$ within the SPA is given by
$A_{SPA}(\phi_{R})=\left[1+\left(\frac{\gamma^{\prime}_{\alpha\beta}}{\gamma_{\alpha\beta}}\right)^{2}\right]\frac{\gamma_{\alpha\beta}}{\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta}}.$
(28)
The anisotropy function corresponds to the proportionality factor between the
variation of the tilting angle $\phi$ and the variation of the angle $\alpha$
characterizing locally the deformation of the S/L front.
Some interesting points should be noted:
* •
The sign of the anisotropy function $A_{SPA}(\phi_{R})$ is imposed by the sign
of the stiffness $\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta}$.
* •
The equation $A_{SPA}(\phi_{R})=1$ has solutions for
$\frac{\mathrm{d}\psi}{\mathrm{d}\phi}=0$ or equivalently
$\gamma^{\prime\prime}_{\alpha\beta}-(\gamma^{\prime}_{\alpha\beta}/\gamma_{\alpha\beta})^{2}=0$.
It has one trivial solution corresponding to the isotropic case
($\gamma_{\alpha\beta}$ constant).
* •
The function $A_{SPA}(\phi_{R})$ diverges when
$\frac{\mathrm{d}\psi}{\mathrm{d}\phi}=1$, which corresponds to the case where
the stiffness $\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta}$
equals $0$.
* •
For positive stiffness, the extrema of $A_{SPA}(\phi_{R})$ are solutions of the
equation $\frac{\mathrm{d}^{2}\psi}{\mathrm{d}\phi^{2}}=0$. In addition, using
Eq. (15), the minimum and maximum values of the anisotropy function are
$A_{SPA}^{min}=(1+\frac{n^{2}\epsilon}{1-\epsilon})^{-1}$ and
$A_{SPA}^{max}=(1-\frac{n^{2}\epsilon}{1+\epsilon})^{-1}$.
We illustrate the behavior of the anisotropy function $A_{SPA}(\phi_{R})$ for
an interface energy with 4-fold symmetry in Fig. 6.
Figure 6: Evolution of the anisotropy function $A_{SPA}(\phi_{R})$ using
$\gamma_{\alpha\beta}=1-\epsilon\cos(4(\phi_{0}-\phi_{R}))$ with
$\epsilon=0.02$, $0.05$ and $0.15$. The dotted curve gives the unstable branch
for $\epsilon=0.15$.
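The extremal values quoted above can be checked by sampling Eq. (28) over one period; the following sketch (illustrative, for the 4-fold energy of Fig. 6 with eps = 0.05) does so:

```python
import math

def a_spa(theta, eps, n=4):
    """Anisotropy function of Eq. (28) for gamma_ab = 1 - eps*cos(n*theta),
    with theta = phi0 - phi_R."""
    g  = 1.0 - eps * math.cos(n * theta)
    g1 = n * eps * math.sin(n * theta)
    g2 = n * n * eps * math.cos(n * theta)
    return (1.0 + (g1 / g) ** 2) * g / (g + g2)

eps = 0.05
# sample one period, theta in [0, pi/2] for n = 4
vals = [a_spa(i * math.pi / 8000.0, eps) for i in range(4001)]
a_min, a_max = min(vals), max(vals)
```

The minimum $(1+n^{2}\epsilon/(1-\epsilon))^{-1}$ is reached at the anisotropy minimum and the maximum $(1-n^{2}\epsilon/(1+\epsilon))^{-1}$ at the anisotropy maximum; both straddle the isotropic value $A_{SPA}=1$.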
## IV Jackson-Hunt Law with Interphase Anisotropy
The analytical solution of the free-boundary problem with the real interface
shape is not known. In the Jackson-Hunt theory, the diffusion field is
calculated using a simplified interface shape, namely, a planar front.
Moreover, the contributions to the interface undercooling are averaged over
each individual lamella.
In the moving frame $\mathcal{R}$, the average position of the interface is
given by $\zeta_{0}=\frac{1}{\lambda_{0}}\int_{0}^{\lambda_{0}}(z_{\rm
int}(x)-Vt)\mathrm{d}x$. We introduce average quantities over one pair of
lamellae, such as
$\langle\dots\rangle=\left\\{\begin{array}[]{l}\frac{1}{\eta\lambda_{0}}\int_{0}^{\eta\lambda_{0}}\dots\mathrm{d}x~{}~{}\mbox{
for $\alpha-$phase}\\\
\frac{1}{(1-\eta)\lambda_{0}}\int_{\eta\lambda_{0}}^{\lambda_{0}}\dots\mathrm{d}x~{}~{}\mbox{
for $\beta-$phase}\\\ \end{array}\right.$ (29)
with $\eta$ the nominal volume fraction of the $\alpha$ phase at the eutectic
temperature, which is related to $c_{\infty}$, the reduced composition of the
melt infinitely far ahead of the solidification front, by $c_{\infty}=\eta
c_{\alpha}+(1-\eta)c_{\beta}$.
The expression of the capillary contribution to the undercooling, given by the
average of Eq. (5), $\langle\Delta
T_{c,i}\rangle\propto\langle\kappa_{i}\rangle$, is directly obtained by
averaging the local curvature $\kappa=-\partial_{xx}z_{\rm
int}/(1+\partial_{x}{z_{\rm int}}^{2})^{3/2}$. One gets
$\langle\kappa\rangle=\left\\{\begin{array}[]{l}\frac{2}{\eta\lambda_{0}}\sin(\theta_{\alpha}(\phi_{0}))~{}~{}\mbox{for
$\alpha$-phase}\\\
\frac{2}{(1-\eta)\lambda_{0}}\sin(\theta_{\beta}(\phi_{0}))~{}~{}\mbox{for
$\beta$-phase}\\\ \end{array}\right.$ (30)
where $\theta_{i}$, the contact angles at the trijunction points, are fixed by
the equilibrium condition of the capillary forces at the trijunctions (Young-
Herring equation), Eq. (20). Note that in the SPA the contact angles on the
two sides of each lamella are identical, even for a tilted steady state.
In order to calculate the average of the diffusion term, given by Eq. (4),
$\langle\Delta T_{D,i}\rangle$, one assumes a flat S/L interface Datye and
Langer (1981); Langer (1980) (i.e., herein, the curvature can be seen as a
perturbation). In steady state and in the frame of reference $\mathcal{R}$,
the diffusion equation for the concentration field reads Kassner and Misbah
(1992),
$\nabla^{2}c+\frac{1}{\ell_{D}}\left(\frac{\partial c}{\partial
z}+\tan\phi_{0}\frac{\partial c}{\partial x}\right)=0.$ (31)
In addition, for a planar S/L front perpendicular to the temperature gradient,
the condition of mass conservation at the interface (Stefan’s condition Stefan
(1889)) imposes
$\left.\frac{\partial c}{\partial
z}\right|_{\zeta_{0}}=-\frac{1}{\ell_{D}}[c(x,\zeta_{0})-c^{s}_{i}],$ (32)
with $c_{i}^{s}$ the reduced concentration of the solid phase $i$
($i=\alpha,\beta$).
Let us now proceed to the solution of the diffusion equation. For simplicity
of notations, we will set $\zeta_{0}=0$ in the following (that is, $z=0$
corresponds to the interface position). The general solution of Eq. (31) for a
system with the spatial periodicity $\lambda_{0}$ on $x$ reads
$c(x,z)=c_{\infty}+\sum_{n=-\infty}^{+\infty}B_{n}\exp(-Q_{n}z)\exp(ik_{n}x)$
(33)
with $k_{n}=2\pi n/\lambda_{0}$ the wave number of the mode $n$. Inserting the
general solution inside Eq. (31) and keeping only the positive root for
$Q_{n}$ (since the concentration field must tend towards a constant far from
the front), one gets
$Q_{n}=\frac{1}{2\ell_{D}}\left(1+\sqrt{1+4\ell_{D}^{2}k_{n}^{2}\left[1-i\frac{\tan\phi_{0}}{k_{n}\ell_{D}}\right]}\right).$
(34)
The Peclet number is introduced as the ratio of the lamellar spacing and the
diffusion length, $Pe=\lambda_{0}/\ell_{D}$. In the limit of small Peclet
number (that is, for slow growth), $Pe\sim(|k_{n}|\ell_{D})^{-1}\ll 1$ for
$n\neq 0$, Eq. (34) can be simplified by keeping only the terms up to the
first order in $Pe$, which yields
$\displaystyle Q_{n}$ $\displaystyle\approx$
$\displaystyle\frac{1}{2\ell_{D}}+|k_{n}|\exp\left(-\frac{i\tan\phi_{0}}{2\ell_{D}k_{n}}\right)$
(35) $\displaystyle\approx$
$\displaystyle\frac{1}{2\ell_{D}}+|k_{n}|-i\frac{\tan\phi_{0}}{2\ell_{D}}sign(n).$
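The accuracy of the small-Peclet expansion (35) against the exact root (34) can be gauged numerically; below is a minimal sketch (our own check, with illustrative parameter values $\lambda_{0}=1$, $\ell_{D}=100$, i.e. $Pe=0.01$):

```python
import cmath
import math

def q_exact(k, l_D, phi0):
    """Positive root Q_n of Eq. (34) for wave number k = k_n."""
    rad = 1.0 + 4.0 * l_D ** 2 * k ** 2 * (1.0 - 1j * math.tan(phi0) / (k * l_D))
    return (1.0 + cmath.sqrt(rad)) / (2.0 * l_D)

def q_approx(k, l_D, phi0):
    """First-order small-Peclet expansion, Eq. (35)."""
    return (1.0 / (2.0 * l_D) + abs(k)
            - 1j * math.tan(phi0) / (2.0 * l_D) * math.copysign(1.0, k))
```

For $\lambda_{0}=1$, $\ell_{D}=100$ and $\phi_{0}=0.2$, the $n=\pm 1$ modes ($k=\pm 2\pi$) agree to better than 0.1%.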
Then, inserting Eq. (33) inside the continuity equation Eq. (32) and
integrating over $x$ (from $x=0$ to $x=\eta\lambda_{0}$ for $\alpha$ and from
$x=\eta\lambda_{0}$ to $x={\lambda_{0}}$ for $\beta$), one obtains the
coefficients $B_{n}$ Datye and Langer (1981),
$\displaystyle B_{n}$ $\displaystyle=$ $\displaystyle\frac{2\exp(i\eta
k_{n}\lambda_{0}/2)\sin(\eta
k_{n}\lambda_{0}/2)}{\ell_{D}\lambda_{0}k_{n}(Q_{n}-1/\ell_{D})},~{}~{}(\forall
n)$ $\displaystyle\approx$ $\displaystyle\frac{2\exp(i\eta
k_{n}\lambda_{0}/2)\sin(\eta
k_{n}\lambda_{0}/2)}{\ell_{D}\lambda_{0}k_{n}|k_{n}|},~{}~{}(n\neq 0).$
at the $0^{th}$ order in $Pe$. The only difference with respect to the
original Jackson-Hunt calculation is the presence of the imaginary part in the
expression of $Q_{n}$ in Eq. (35) which produces oscillations in the $z$
direction on the typical length $2\ell_{D}/\tan\phi_{0}$. Those oscillations
can be understood by realizing that the concentration field is created by the
rejection and absorption of solute at the moving interface. Since the
distribution of the sources and sinks drifts laterally along the front, the
flux lines are slightly inclined with respect to the solution for non-tilted
growth. We have checked that the inclination angle is vanishingly small in the
small Peclet number regime.
The calculation of the average composition in front of each lamella yields
$\langle
c_{\alpha}\rangle=\frac{1}{\eta\lambda_{0}}\int_{0}^{\eta\lambda_{0}}c(x,\zeta_{0})\mathrm{d}x=c_{\infty}+B_{0}+\frac{\lambda_{0}}{\ell_{D}\eta}P(\eta),$
(37) $\langle
c_{\beta}\rangle=\frac{1}{(1-\eta)\lambda_{0}}\int_{\eta\lambda_{0}}^{\lambda_{0}}c(x,\zeta_{0})\mathrm{d}x=c_{\infty}+B_{0}-\frac{\lambda_{0}}{\ell_{D}(1-\eta)}P(\eta),$
(38)
with
$P(\eta)=\sum_{n=1}^{\infty}\frac{\sin^{2}(\pi\eta n)}{(n\pi)^{3}}.$ (39)
This is the same result as for non-tilted growth. This fact is not surprising
since, for a planar interface, no coupling between the interface shape and the
lateral diffusion fluxes can occur Akamatsu _et al._ (2012a). From the
average compositions, one directly deduces the average diffusion undercooling
$\langle\Delta T_{D,i}\rangle=-\Delta Cm_{i}\langle c_{i}\rangle$. It is worth
noting that at this stage, the Fourier coefficient $B_{0}$, which corresponds
to the amplitude of a uniform boundary layer of thickness $\ell_{D}$ moving
ahead of the front, remains undetermined. The problem is closed by assuming
that neighboring lamellae are at the same temperature $\langle\Delta
T_{\alpha}\rangle=\langle\Delta T_{\beta}\rangle$, which allows to determine
the average undercooling without knowing the analytical form of $B_{0}$.
From this, we deduce the Jackson-Hunt law in the presence of anisotropic
interphases
$\Delta
T(\lambda_{0},\phi_{0})=VK_{1}\lambda_{0}+K_{2}(\phi_{0})\lambda_{0}^{-1}.$
(40)
with
$K_{1}=\frac{\bar{m}P(\eta)}{D}\frac{\Delta C}{\eta(1-\eta)}$ (41)
and
$K_{2}(\phi_{0})=2\bar{m}\Delta
C\left(\frac{d_{\alpha}\sin(\theta_{\alpha}(\phi_{0}))}{\eta}+\frac{d_{\beta}\sin(\theta_{\beta}(\phi_{0}))}{(1-\eta)}\right)$
(42)
with
$\frac{1}{\bar{m}}=\frac{1}{|m_{\alpha}|}+\frac{1}{|m_{\beta}|}.$ (43)
Equivalently, introducing
$\lambda_{m}(\phi_{0})=\sqrt{K_{2}(\phi_{0})/(VK_{1})}$ and the minimum
undercooling $\Delta
T_{m}(\phi_{0})=2\sqrt{VK_{1}K_{2}(\phi_{0})}=2VK_{1}\lambda_{m}(\phi_{0})$,
one has
$\Delta T(\lambda_{0},\phi_{0})=\frac{\Delta
T_{m}(\phi_{0})}{2}\left(\frac{\lambda_{0}}{\lambda_{m}(\phi_{0})}+\frac{\lambda_{m}(\phi_{0})}{\lambda_{0}}\right).$
(44)
It is instructive to consider as an example a fictitious eutectic alloy with
symmetric phase diagram and identical surface tensions for the two solid-
liquid interfaces, at the eutectic composition. Indeed, the above expressions
can be further simplified in that case: we have $\eta=1/2$,
$|m_{\alpha}|=|m_{\beta}|=m$, $\theta_{\alpha}=\theta_{\beta}=\theta_{\ell}$;
the expression for $K_{1}$ reduces to $K_{1}=2P(1/2)m\Delta C/D$, with
$P(\eta=1/2)\approx 0.0339$.
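The value $P(1/2)\approx 0.0339$ follows from the series (39); as a quick check (a small sketch of ours; for $\eta=1/2$ only odd terms survive, so the sum equals $(7/8)\zeta(3)/\pi^{3}$):

```python
import math

def P(eta, n_terms=100000):
    """Truncated series of Eq. (39): P(eta) = sum_n sin^2(pi*eta*n)/(n*pi)^3."""
    return sum(math.sin(math.pi * eta * n) ** 2 / (n * math.pi) ** 3
               for n in range(1, n_terms + 1))

APERY = 1.2020569031595943   # zeta(3), for the closed form (7/8)*zeta(3)/pi^3
p_half = P(0.5)              # ~ 0.0339
```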
The Young-Herring law yields
$\sin(\theta_{\ell})=\frac{1}{2\gamma_{\ell}}|\vec{\sigma}_{\alpha\beta}(\phi_{0}-\phi_{R})|.$
(45)
The function $K_{2}$ given by Eq. (42) becomes $K_{2}(\phi_{0})=4m\Delta
Cd_{\ell}\sin(\theta_{\ell}(\phi_{0}))$ with $d_{\alpha}=d_{\beta}=d_{\ell}$
(the capillary length). One directly obtains the expressions
$\lambda_{m}(\phi_{0})=\sqrt{\frac{2d_{\ell}\ell_{D}\sin(\theta_{\ell}(\phi_{0}))}{P(1/2)}}$
(46)
and
$\Delta T_{m}(\phi_{0})=4\sqrt{2}m\Delta
C\sqrt{\frac{d_{\ell}\sin(\theta_{\ell}(\phi_{0}))P(1/2)}{\ell_{D}}}.$ (47)
It turns out that $\Delta T_{m}$ and $\lambda_{m}$ are proportional to
$|\vec{\sigma}_{\alpha\beta}(\phi_{0}-\phi_{R})|^{1/2}$ according to
$\displaystyle\frac{\Delta T_{m}(\phi_{0})}{m\Delta C}$ $\displaystyle=$
$\displaystyle 4P(1/2)\frac{\lambda_{m}(\phi_{0})}{\ell_{D}},$
$\displaystyle=$ $\displaystyle
4\sqrt{\frac{d_{\ell}P(1/2)}{\gamma_{\ell}\ell_{D}}}|\vec{\sigma}_{\alpha\beta}(\phi_{0}-\phi_{R})|^{1/2}.$
This fact can be easily understood: when
$|\vec{\sigma}_{\alpha\beta}(\phi_{0}-\phi_{R})|$ increases, the wetting angle
$\theta_{\ell}$, and obviously the average curvature $\langle\kappa\rangle$,
has to increase as well to maintain the equilibrium of the capillary force at
the trijunctions. Therefore, the capillary contribution in the Gibbs-Thomson
law becomes stronger, which shifts $\Delta T_{m}$ and $\lambda_{m}$ towards
higher values.
We plot in Fig. 7 the function
$|\vec{\sigma}_{\alpha\beta}(\phi_{0}-\phi_{R})|^{1/2}$ versus $\phi_{R}$
using an IB energy of
$\gamma_{\alpha\beta}=1-\epsilon\cos(4(\phi_{0}-\phi_{R}))$.
Figure 7: Curve of $|\vec{\sigma}_{\alpha\beta}(\phi_{0}-\phi_{R})|^{1/2}$
versus $\phi_{R}$ for a symmetric phase diagram. We recall that both $\Delta
T_{m}(\phi_{0})$ and $\lambda_{m}(\phi_{0})$ are proportional to
$|\vec{\sigma}_{\alpha\beta}|^{1/2}$. The IB energy is
$\gamma_{\alpha\beta}=1-\epsilon\cos(4(\phi_{0}-\phi_{R}))$ for anisotropy
strengths $\epsilon=0.02$, $0.05$ and $0.15$. For $\epsilon=0.15$, the dashed
curves correspond to the stable and the metastable branches, and the dotted
curve to the unstable branch.
## V Evolution of Modulated Fronts
We now wish to examine the time evolution of large-scale fronts for which the
spacing and the inclination of the lamellae may vary with position and time.
We restrict our attention to weakly modulated fronts, which can be described
as perturbed steady-state fronts. As in previous works, the deformation of the
front with respect to the steady state may be described by the displacements
of the trijunction points, both parallel ($\delta_{x}$) and perpendicular
($\delta_{z}$) to the steady-state front. The large-scale envelope of the
front (that is, a smooth curve that is an interpolation of the average
position $\zeta_{0}$ of each individual lamella) is denoted by $\zeta(x,t)$ in
the moving frame $\mathcal{R}(\hat{x},\hat{z})$. When the displacements of the
trijunction points vary slowly along the front, it is possible to take a
continuum approach, in which the lateral displacements, the lamellar spacings
and the tilting angles can be represented by slowly varying functions of $x$
and $t$ denoted here by $\delta_{x}(x,t)$, $\lambda(x,t)$ and $\phi(x,t)$,
respectively. The local orientation of the front envelope is described by the
unit normal vector $\hat{n}_{f}\propto(-\partial_{x}\zeta,1)$. We introduce
the angle $\alpha=-\arctan(\partial_{x}\zeta)\approx\partial_{x}\zeta$ between
$z$ and $\hat{n}_{f}$.
### V.1 Fundamental Equations
In the long-wavelength limit, $|\partial_{x}\lambda|/\lambda_{0}\ll 1$, the
local spacing $\lambda(x,t)$ of the deformed state is written as
$\lambda(x,t)=\lambda_{0}+\delta\lambda(x,t)$, which can be rewritten Datye
and Langer (1981); Langer (1980)
$\lambda(x,t)\approx\lambda_{0}\left(1+\frac{\partial\delta_{x}}{\partial
x}\right).$ (49)
This of course implies
$\frac{\partial\lambda}{\partial t}=\lambda_{0}\frac{\partial}{\partial
x}\frac{\partial\delta_{x}}{\partial t}.$ (50)
In the following, we will assume that all functions satisfy Schwarz's
theorem, such that the order of the derivatives with respect to $x$ and $t$
can be interchanged. Furthermore, we assume that the generalized Jackson-Hunt law,
Eq. (44), remains locally valid for a smoothly varying spacing. Then, the
undercooling at the S/L interface reads
$-\zeta(x,t)G=\frac{\Delta
T_{m}(\phi(x,t))}{2}\left(\frac{\lambda(x,t)}{\lambda_{m}(\phi)}+\frac{\lambda_{m}(\phi)}{\lambda(x,t)}\right).$
(51)
The evolutions of $\delta_{x}$ and $\zeta$ are linked by the SPA. Indeed, if
the local inclination of the lamellae changes, this modifies the lateral drift
velocity of the trijunctions. In the moving frame,
$\displaystyle\frac{\partial\delta_{x}}{\partial t}$ $\displaystyle=$
$\displaystyle V\tan(\phi-\phi_{0}),$ $\displaystyle=$ $\displaystyle
V\tan\left(\frac{\alpha}{1-\frac{\partial\psi}{\partial\phi}}\right),$
since
$\alpha=\delta\phi-\delta\psi=\delta\phi(1-\frac{\partial\psi}{\partial\phi})$.
Under the assumption that the argument inside the tangent function remains
close to $0$ for all $\phi_{R}$ (valid in the limit of small front slopes),
the expression may be linearized to yield
$\displaystyle\frac{\partial\delta_{x}}{\partial t}$ $\displaystyle\approx$
$\displaystyle-V\frac{\partial\zeta}{\partial
x}\frac{1}{1-\frac{\partial\psi}{\partial\phi}},$ $\displaystyle\approx$
$\displaystyle-V\frac{\partial\zeta}{\partial x}A_{SPA}(\phi_{R}).$
For isotropic interfaces, these equations may be combined to yield an
evolution equation for the local spacing. However, for anisotropic interfaces,
the inclination angle $\phi$ provides a supplementary degree of freedom. Since
both $\lambda$ and $\phi$ can be expressed in terms of $\zeta$, here it is
more convenient to write an equation for $\zeta(x,t)$ rather than
$\lambda(x,t)$.
Expanding the expression of the undercooling around the homogeneous
undeformed state of spacing $\lambda_{0}$ and angle $\phi_{0}$, we obtain
$\Delta T=\Delta T_{0}+\left.\frac{\partial\Delta
T}{\partial\lambda}\right|_{\lambda_{0},\phi_{0}}(\lambda-\lambda_{0})+\left.\frac{\partial\Delta
T}{\partial\phi}\right|_{\lambda_{0},\phi_{0}}(\phi-\phi_{0}).$ (54)
The deviation of the angle is replaced by
$\phi-\phi_{0}=-A_{SPA}(\phi_{R})\frac{\partial\zeta}{\partial x}.$ (55)
Taking the time derivative of Eq. (51) and injecting the linear expansion Eq.
(54), one has
$-G\frac{\partial\zeta}{\partial t}=\left.\frac{\partial\Delta
T}{\partial\lambda}\right|_{\lambda_{0},\phi_{0}}\frac{\partial\lambda}{\partial
t}+\left.\frac{\partial\Delta
T}{\partial\phi}\right|_{\lambda_{0},\phi_{0}}\frac{\partial\phi}{\partial
t}.$ (56)
Finally, replacing $\partial_{t}\lambda$ with Eqs. (50) and (V.1), and
$\partial_{t}\phi$ with the time derivative of Eq. (55), one obtains the
linear and homogeneous partial differential equation (PDE)
$\frac{\partial}{\partial t}\left(\zeta-\ell_{0}\frac{\partial\zeta}{\partial
x}\right)=D_{0}\frac{\partial^{2}\zeta}{\partial x^{2}}$ (57)
with a diffusion coefficient
$\displaystyle D_{0}$ $\displaystyle=$
$\displaystyle\frac{\lambda_{0}V}{G}A_{SPA}(\phi_{R})\left.\frac{\partial\Delta
T}{\partial\lambda}\right|_{\lambda_{0},\phi_{0}},$ $\displaystyle=$
$\displaystyle\frac{\lambda_{0}V^{2}K_{1}}{G}A_{SPA}(\phi_{R})\left(1-\frac{1}{\Lambda_{0}^{2}(\phi_{R})}\right),$
with $\Lambda_{0}(\phi_{R})=\lambda_{0}/\lambda_{m}(\phi_{R})$, and a length
scale related to the anisotropy,
$\displaystyle\ell_{0}$ $\displaystyle=$
$\displaystyle\frac{1}{G}A_{SPA}(\phi_{R})\left.\frac{\partial\Delta
T}{\partial\phi}\right|_{\lambda_{0},\phi_{0}},$ $\displaystyle=$
$\displaystyle\frac{1}{G\lambda_{0}}A_{SPA}(\phi_{R})\left.\frac{\partial
K_{2}}{\partial\phi}\right|_{\phi_{0}}.$
Again, it is useful to examine the specialization of these expressions to the
symmetric eutectic alloy. We have
$D_{0}=\frac{2P(1/2)\lambda_{0}V\ell_{T}}{\ell_{D}}A_{SPA}(\phi_{R})\left(1-\frac{1}{\Lambda_{0}^{2}(\phi_{R})}\right)$
(60)
with $\ell_{T}$ the thermal length, Eq. (11). Since
$K_{2}\propto\sin(\theta_{\ell})\propto|\vec{\sigma}_{\alpha\beta}|$, the
anisotropic length becomes
$\displaystyle\ell_{0}$ $\displaystyle=$
$\displaystyle\frac{2\ell_{T}d_{\ell}}{\gamma_{\ell}\lambda_{0}}A_{SPA}(\phi_{R})\left.\frac{\partial|\vec{\sigma}_{\alpha\beta}|}{\partial\phi}\right|_{\phi_{0}}$
(61) $\displaystyle=$
$\displaystyle\frac{2\ell_{T}d_{\ell}}{\gamma_{\ell}\lambda_{0}}\frac{\gamma^{\prime}_{\alpha\beta}}{\gamma_{\alpha\beta}}|\vec{\sigma}_{\alpha\beta}|$
since
$\left.\frac{\partial|\vec{\sigma}_{\alpha\beta}|}{\partial\phi}\right|_{\phi_{0}}=\frac{\gamma^{\prime}_{\alpha\beta}(\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta})}{|\vec{\sigma}_{\alpha\beta}|}.$
(62)
We plot in Fig. 8 the function
$|\vec{\sigma}_{\alpha\beta}|\gamma^{\prime}_{\alpha\beta}/\gamma_{\alpha\beta}$
versus $\phi_{R}$ using an IB energy
$\gamma_{\alpha\beta}=1-\epsilon\cos(4(\phi_{0}-\phi_{R}))$. It turns out that
$\ell_{0}$ is a decreasing function of the tilting angle $\phi_{0}$.
Figure 8: Plot of
$|\vec{\sigma}_{\alpha\beta}|\gamma^{\prime}_{\alpha\beta}/\gamma_{\alpha\beta}$
versus $\phi_{R}$ for an isotropic phase diagram and under the SPA. We recall
that the anisotropic length $\ell_{0}$ is proportional to
$|\vec{\sigma}_{\alpha\beta}|\gamma^{\prime}_{\alpha\beta}/\gamma_{\alpha\beta}$.
The IB energy is $\gamma_{\alpha\beta}=1-\epsilon\cos(4(\phi_{0}-\phi_{R}))$
for anisotropy strengths $\epsilon=0.02$, $0.05$ and $0.15$. For
$\epsilon=0.15$, the dashed curves correspond to the stable and the metastable
branches, and the dotted curve corresponds to the unstable branch.
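The passage from the first to the second line of Eq. (61) relies on Eq. (62) together with $|\vec{\sigma}_{\alpha\beta}|=(\gamma_{\alpha\beta}^{2}+\gamma_{\alpha\beta}^{\prime 2})^{1/2}$; this identity can be verified numerically (a sketch of ours, assuming that standard expression for $|\vec{\sigma}_{\alpha\beta}|$):

```python
import math

def check_l0_identity(theta, eps, n=4):
    """Check A_SPA * d|sigma|/dphi == (gamma'/gamma) * |sigma|
    (Eqs. 61-62), with |sigma| = sqrt(gamma^2 + gamma'^2) and
    gamma = 1 - eps*cos(n*theta)."""
    g  = 1.0 - eps * math.cos(n * theta)
    g1 = n * eps * math.sin(n * theta)
    g2 = n * n * eps * math.cos(n * theta)
    sigma = math.hypot(g, g1)
    a_spa = (1.0 + (g1 / g) ** 2) * g / (g + g2)
    lhs = a_spa * g1 * (g + g2) / sigma   # Eq. (62) inserted in Eq. (61)
    rhs = (g1 / g) * sigma
    return math.isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-15)
```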
The PDE for the front shape has strongly different properties for $\ell_{0}=0$
and $\ell_{0}\neq 0$. For $\ell_{0}=0$ and $D_{0}>0$, the PDE is parabolic and
reduces to the well-known diffusion equation which is of the same form as in
the isotropic case. Indeed, the condition $\ell_{0}(\lambda_{0},\phi_{0})=0$
is satisfied if the temperature gradient is aligned with a direction
corresponding to an extremum of $\gamma_{\alpha\beta}$, (i.e.
$\gamma^{\prime}_{\alpha\beta}(\phi_{0}-\phi_{R})=0$ ). This situation hence
yields the well-known phase diffusion equation for $\lambda$,
$\frac{\partial\lambda}{\partial t}=D_{0}\frac{\partial^{2}\lambda}{\partial
x^{2}}$ (63)
with
$D_{0}=A_{SPA}(\phi_{R})D_{\lambda}=A_{SPA}(\phi_{R})\frac{\lambda_{0}V}{G}\left.\frac{\partial\Delta
T}{\partial\lambda}\right|_{\lambda_{0},\phi_{0}}$. That is, the behavior of
such a front is identical to the one of an isotropic front, but with a phase
diffusion coefficient that is multiplied by the anisotropy function $A_{SPA}$.
For evolution around a minimum of $\gamma_{\alpha\beta}$, the anisotropy
function is lower than unity, and the dynamics of spacing relaxation is slowed
down. Conversely, when the front evolves around a maximum of
$\gamma_{\alpha\beta}$ the dynamics is accelerated.
In this case, any long-wavelength and small-amplitude perturbation (i.e., for
$k\lambda_{0}\ll 1$ and $\delta\ll\lambda_{0}$) of the form
$\lambda(x,t)=\lambda_{0}[1+\delta\exp(ikx+\omega_{k}t)]$ (64)
will grow with a rate
$\omega_{k}=-D_{0}k^{2}.$ (65)
Therefore, according to Eq. (65) the system is stable if $D_{0}>0$, and
unstable otherwise. One deduces that, as long as the anisotropy function
$A_{SPA}$ remains positive, the threshold of stability is given by
$\Lambda_{0}(\phi_{R})=\lambda_{0}/\lambda_{m}(\phi_{R})=1$, which is the same
as for isotropic IB energy (up to the different expression of $\lambda_{m}$).
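The relaxation predicted by Eqs. (63)-(65) can be illustrated with a simple explicit finite-difference integration (our own discretization, with illustrative parameters; a cosine perturbation of wave number $k$ should decay as $\exp(-D_{0}k^{2}t)$):

```python
import math

def diffuse(profile, D0, dx, dt, steps):
    """Explicit Euler integration of the phase diffusion equation (63)
    on a periodic grid."""
    N = len(profile)
    u = list(profile)
    r = D0 * dt / dx ** 2          # must be < 1/2 for stability
    for _ in range(steps):
        u = [u[i] + r * (u[(i + 1) % N] - 2.0 * u[i] + u[(i - 1) % N])
             for i in range(N)]
    return u

L, N, D0, k = 2.0 * math.pi, 128, 1.0, 1.0
dx = L / N
dt = 0.1 * dx ** 2 / D0
steps = round(0.5 / dt)            # integrate to t ~ 0.5
u0 = [0.01 * math.cos(k * i * dx) for i in range(N)]
u  = diffuse(u0, D0, dx, dt, steps)
decay = max(abs(v) for v in u) / 0.01   # ~ exp(-D0 * k**2 * 0.5)
```

For $D_{0}>0$ the amplitude decays, while a negative $D_{0}$ (spacing below the stability threshold) would make the same scheme amplify the perturbation.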
In contrast, for $\ell_{0}\neq 0$, the term
$\ell_{0}\partial_{t}\partial_{x}\zeta$ in the evolution equation Eq. (57)
breaks the parity symmetry $x\rightarrow-x$. The PDE is hyperbolic, and
therefore corresponds to a wave-propagation equation, see appendix B for
details. In both cases, $\ell_{0}=0$ and $\ell_{0}\neq 0$, the PDE reduces to
an initial-value problem.
### V.2 Normal Mode Analysis
To perform the normal-mode analysis, we write the Fourier
representation of the front shape $\zeta(x,t)$ as
$\zeta(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}A_{k}(t)\exp(-ikx)\mathrm{d}k$
(66)
with $k$ the wave number and $A_{k}(t)$ the amplitude of the mode $k$. In
reciprocal space, the PDE for $\zeta(x,t)$ given by Eq. (57) reduces to an
ordinary differential equation in time for the amplitudes
$\dot{A}_{k}(t)-\omega A_{k}(t)=0$ (67)
with the dispersion relation $\omega=-\frac{D_{0}k^{2}}{1+i\ell_{0}k}$. This
can be rewritten as
$\omega=\omega_{R}+i\omega_{I}=\frac{-D_{0}k^{2}}{1+\ell_{0}^{2}k^{2}}+i\frac{D_{0}\ell_{0}k^{3}}{1+\ell_{0}^{2}k^{2}}.$
(68)
The phase velocity is given by
$v_{p}=\omega_{I}/k=D_{0}\ell_{0}k^{2}/(1+\ell_{0}^{2}k^{2})$ and the group
velocity by
$v_{g}=\mathrm{d}\omega_{I}/\mathrm{d}k=D_{0}\ell_{0}k^{2}(3+\ell_{0}^{2}k^{2})/(1+\ell_{0}^{2}k^{2})^{2}$.
Both of these velocities tend to the constant $D_{0}/\ell_{0}$ for
$k\rightarrow\infty$ and behave like $\sim D_{0}\ell_{0}k^{2}$ for
$k\rightarrow 0$. The solution of the Fourier amplitudes reads
$A_{k}(t)=A_{k}(0)\exp(\omega t)$ (69)
with
$A_{k}(t=0)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\zeta(x,t=0)\exp(ikx)\mathrm{d}x$.
Therefore, for an initial state
$\zeta(x,t=0)=a_{0}\cos(2\pi x/\lambda_{p})$, where
$\lambda_{p}\gg\lambda_{0}$ is the wavelength of the perturbation, one gets
$\displaystyle\zeta(x,t)$ $\displaystyle=$
$\displaystyle\frac{a_{0}}{2}\int_{-\infty}^{+\infty}[\delta(k-k_{p})+\delta(k+k_{p})]$
(70) $\displaystyle\times\mathrm{e}^{\omega t}\mathrm{e}^{-ikx}\mathrm{d}k,$
$\displaystyle=$ $\displaystyle
a_{0}\mathrm{e}^{\omega_{R}(k_{p})t}\cos(\omega_{I}(k_{p})t-k_{p}x)$
with $k_{p}=2\pi/\lambda_{p}$ the wave number of the perturbation. For this,
we have used the parity properties $\omega_{R}(k)=\omega_{R}(-k)$ and
$\omega_{I}(k)=-\omega_{I}(-k)$. Clearly, Eq. (70) corresponds to a time-
damped or -amplified wave where the sign of $\ell_{0}$ gives the direction of
propagation.
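The properties of the dispersion relation (68) and of the phase velocity discussed above can be confirmed numerically (a sketch of ours, with arbitrary illustrative values of $D_{0}$ and $\ell_{0}$):

```python
def omega(k, D0, l0):
    """Complex growth rate omega = -D0 k^2 / (1 + i l0 k), cf. Eq. (67)."""
    return -D0 * k ** 2 / (1.0 + 1j * l0 * k)

def omega_split(k, D0, l0):
    """Real/imaginary decomposition of Eq. (68)."""
    den = 1.0 + l0 ** 2 * k ** 2
    return complex(-D0 * k ** 2 / den, D0 * l0 * k ** 3 / den)

D0, l0 = 1.0, 0.05
ks = [0.1, 1.0, 10.0, 200.0]
v_p = [omega(k, D0, l0).imag / k for k in ks]   # phase velocity omega_I / k
```

The check confirms that $v_{p}\sim D_{0}\ell_{0}k^{2}$ for small $k$, that $v_{p}\rightarrow D_{0}/\ell_{0}$ for large $k$, and the parity properties $\omega_{R}(k)=\omega_{R}(-k)$, $\omega_{I}(k)=-\omega_{I}(-k)$ used in Eq. (70).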
## VI Discussion
Up to now, we have shown that the anisotropy of the interphase boundary
changes the nature of the evolution equation for the front shape. This
equation is a wave equation for anisotropic interphase boundaries, in contrast
to the previously known diffusion equation in the isotropic case. In order to
investigate more quantitatively the influence of the cross term in Eq. (57),
one may compare the phase velocity in the small wave number limit,
$v_{p}(k\rightarrow 0)\sim D_{0}\ell_{0}k^{2}$, to the pulling velocity $V$.
For the sake of simplicity, we perform the calculations for a eutectic alloy
with symmetric phase diagram; no qualitative changes are expected if this
restriction is relaxed. For a symmetric eutectic alloy, one has
$\frac{v_{p}(k\rightarrow
0)}{V}\sim\left(2P(1/2)\lambda_{0}\frac{\ell_{T}}{\ell_{D}}\right)^{2}A_{SPA}\frac{\gamma_{\alpha\beta}^{\prime}}{\gamma_{\alpha\beta}}\left(\frac{\Lambda_{0}^{2}-1}{\Lambda_{0}^{4}}\right)k^{2}$
(71)
We examine a perturbation that has a wavelength of ten lamellar spacings, that
is, $k\approx 2\pi/(10\lambda_{0})$. For typical values of the other
parameters, $A_{SPA}\approx 1$,
$\gamma_{\alpha\beta}^{\prime}/\gamma_{\alpha\beta}\sim n\epsilon\approx 0.2$,
$\Lambda_{0}\approx 1.1$, $\ell_{T}/\ell_{D}\approx 4$, one obtains
$v_{p}/V\approx 8\times 10^{-4}$ and $\ell_{0}/\lambda_{0}\approx 5\times
10^{-2}$. This means that the propagation of the wave induced by the
anisotropy is very slow, and will be difficult to observe on the typical time
scale of directional solidification experiments. Note that according to Eq.
(71) the phase velocity depends on the distance of the initial spacing from
the minimum-undercooling spacing through the factor that depends on
$\Lambda_{0}$. However, since $\Lambda_{0}$ typically remains close to unity,
our conclusion is not limited to the particular value of $\Lambda_{0}$ taken
in the calculation.
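The order of magnitude quoted above can be reproduced directly from Eq. (71) (a back-of-the-envelope sketch; $\lambda_{0}$ drops out through the product $k\lambda_{0}$):

```python
import math

# parameter values quoted in the text
P_half = 0.0339                 # P(1/2)
A_spa = 1.0                     # A_SPA ~ 1
g1_over_g = 0.2                 # gamma'/gamma ~ n*epsilon
Lam0 = 1.1                      # Lambda_0 = lambda_0 / lambda_m
lT_over_lD = 4.0                # l_T / l_D
k_lam0 = 2.0 * math.pi / 10.0   # k * lambda_0 for a ten-spacing wavelength

vp_over_V = ((2.0 * P_half * lT_over_lD) ** 2 * A_spa * g1_over_g
             * (Lam0 ** 2 - 1.0) / Lam0 ** 4 * k_lam0 ** 2)
# vp_over_V ~ 8e-4, consistent with the estimate in the text
```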
Since the propagation of waves is very slow, we can reasonably neglect the
cross term in Eq. (57) for the description of experiments. Therefore, the
phase diffusion equation given by Eq. (63) remains a good approximation, even
if the temperature gradient is not aligned with a direction corresponding to
an extremum of $\gamma_{\alpha\beta}$ (i.e., for tilted interphases).
Two further comments can be added at this point. First, the lateral
propagation of patterns and the evolution of spacings have been recently
studied in experiments and phase-field simulations of cellular and dendritic
arrays in dilute binary alloys Song _et al._ (2018), and an evolution
equation for the spacing has been extracted, which also contains propagative
and diffusive terms. However, in contrast to our findings, for dendrites the
propagative term dominates over the diffusive one. This difference points to
the very different roles that crystalline anisotropy plays for the selection
of dendritic and eutectic patterns. Second, we have relied here on the
symmetric pattern approximation. Evolution equations for the spacing have been
derived directly from the free-boundary problem in the limit of high
temperature gradients for eutectics Caroli _et al._ (1990), and by a
perturbation analysis of the boundary integral equation for cells Brattkus and
Misbah (1990). While it might be possible to use similar methods for a more
rigorous derivation of the front evolution equation obtained here, this would
certainly be a difficult undertaking. Moreover, a more rigorous treatment
would probably not decisively alter the order-of-magnitude estimates obtained
above.
## VII Conclusion
We have developed an evolution equation for the envelope of lamellar eutectic
solidification fronts in two dimensions, which corresponds to experiments in
thin samples, taking into account the anisotropy of the solid-solid interphase
boundaries. This generalizes previous works on fronts with isotropic
interfaces Jackson _et al._ (1966); Langer (1980); Datye and Langer (1981).
By replacing Cahn’s hypothesis (lamellae always grow normal to the envelope)
used for isotropic systems by the Symmetric Pattern Approximation (SPA) that
is derived from the balance of torques at the trijunction points, we have
demonstrated that the evolution equation contains a propagative term which
involves a new characteristic length scale $\ell_{0}$. This is striking,
because a local effect changes the nature of the equations describing the
evolution of the system at large scales. Moreover, the diffusive term that is
already present for isotropic interfaces gets multiplied by a factor that
depends on the anisotropy of the interphase boundaries.
A quantitative analysis of the new equation reveals that for typical
directional solidification conditions, the phase velocity of the propagative
modes is too slow to be observable. Therefore, the propagative evolution
equation can be reasonably replaced by a spacing diffusion equation as in the
isotropic case, where the spacing diffusion coefficient is multiplied by the
anisotropy function $A_{SPA}$. In addition, the dependence of the minimum
undercooling spacing on the tilting angle must also be taken into account in
order to correctly evaluate the reduced initial spacing.
It should be recalled that direct experimental measurements and phase field
simulations for lamellar eutectics with isotropic interphases Plapp and Karma
(2002); Akamatsu _et al._ (2004) have shown that Cahn’s hypothesis is not
strictly valid for isotropic systems: in addition to the normal growth, the
trijunctions also move slightly in the direction parallel to the envelope of
the composite front, with a velocity that is proportional to the
local gradient of the spacing. Although this effect is small, it
introduces a stabilizing term in the diffusion equation that leads to an
overstability with respect to the theory. We expect a similar contribution in
the anisotropic case, but since no analytical description of this phenomenon
is available, only numerical simulations could clarify this issue.
We hope to report on the results of such simulations in the near future.
## VIII Acknowledgements
The authors thank S. Akamatsu, G. Faivre, and S. Bottin-Rousseau for many
useful discussions. This research was supported by the Centre National
d’Études Spatiales (France) and by the ANR ANPHASES project
(M-era.Net:ANR-14-MERA-0004).
## Appendix A Limit of Stability of $\phi_{0}(\phi_{R})$ within the SPA
As mentioned in Section III.1, for an undeformed steady state with a
planar front, the SPA (i.e., $\vec{\sigma}_{\alpha\beta}\cdot\hat{t}_{f}=0$)
imposes
$\phi_{0}=-\arctan\left(\frac{\gamma_{\alpha\beta}^{\prime}(\phi_{0}-\phi_{R})}{\gamma_{\alpha\beta}(\phi_{0}-\phi_{R})}\right).$
(72)
Using the formalism of dynamical systems Manneville (1991), the problem can
be tackled by writing
$\frac{\partial\phi}{\partial t}=F(\phi;\phi_{R})$ (73)
with
$F(\phi;\phi_{R})=\phi+\arctan\left(\frac{\gamma_{\alpha\beta}^{\prime}(\phi-\phi_{R})}{\gamma_{\alpha\beta}(\phi-\phi_{R})}\right).$
(74)
which is a nonlinear function of $\phi$. The dynamics is fully determined by the nature
and the position of the fixed points of $F$ given by
$F(\phi_{0};\phi_{R})=0.$ (75)
The problem reduces to determining how the fixed points $\phi_{0}$ depend on
$\phi_{R}$ (seen as a control parameter). Therefore, the fixed points have to
be viewed as implicit functions of $\phi_{R}$. Interestingly, for an $n$-fold
anisotropy function $\gamma_{\alpha\beta}$, the function $F(\phi_{0};\phi_{R})$ is
symmetric with respect to the transformation $\phi_{R}\rightarrow
2\pi/n-\phi_{R}$ and $\phi_{0}\rightarrow-\phi_{0}$.
Furthermore, applying a perturbation around a fixed point,
$\phi_{R}\rightarrow\phi_{R}+\delta\phi_{R}$ and
$\phi_{0}\rightarrow\phi_{0}+\delta\phi_{0}$, Eq. (72) leads to the relation
$\delta\phi_{0}=-\frac{\partial F/\partial\phi_{R}}{\partial F/\partial\phi_{0}}\,\delta\phi_{R}=\frac{\gamma^{\prime\prime}_{\alpha\beta}\gamma_{\alpha\beta}-\gamma^{\prime 2}_{\alpha\beta}}{\gamma_{\alpha\beta}(\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta})}\,\delta\phi_{R}.$ (76)
The limit of stability corresponds to the turning points given by the
conditions $\frac{\partial F}{\partial\phi_{R}}\neq 0$ and $\frac{\partial
F}{\partial\phi_{0}}=0$ or equivalently
$\gamma_{\alpha\beta}+\gamma^{\prime\prime}_{\alpha\beta}=0$.
Unfortunately, this local investigation of the stability does not allow one to
obtain the limit between the stable and the metastable branches. However,
using symmetry considerations, those points are given by
$\\{\phi_{R}=2\pi/n,\phi_{0}(2\pi/n)\\}$.
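For illustration, the fixed-point problem above can be solved numerically. The sketch below is our addition, not part of the original analysis: it assumes a weak $n$-fold anisotropy $\gamma_{\alpha\beta}(\theta)=1+\varepsilon\cos(n\theta)$ with illustrative values $\varepsilon=0.05$ and $n=2$, solves Eq. (72) for $\phi_{0}(\phi_{R})$, and evaluates the sensitivity $\delta\phi_{0}/\delta\phi_{R}$ from the closed-form expression derived above:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical n-fold anisotropy of the interphase boundary energy
# (eps and n are illustrative values, not taken from the text).
eps, n = 0.05, 2
gamma = lambda th: 1.0 + eps * np.cos(n * th)
dgamma = lambda th: -n * eps * np.sin(n * th)
ddgamma = lambda th: -n ** 2 * eps * np.cos(n * th)

def F(phi, phi_R):
    """F(phi; phi_R) of Eq. (74), whose zeros are the steady angles phi_0."""
    u = phi - phi_R
    return phi + np.arctan(dgamma(u) / gamma(u))

def phi0(phi_R):
    """Fixed point phi_0(phi_R) of Eq. (72), found by bracketing the root."""
    return brentq(F, -1.0, 1.0, args=(phi_R,))

def dphi0_dphiR(phi_R):
    """Closed-form sensitivity d(phi_0)/d(phi_R) derived in the appendix."""
    u = phi0(phi_R) - phi_R
    g, gp, gpp = gamma(u), dgamma(u), ddgamma(u)
    return (gpp * g - gp ** 2) / (g * (g + gpp))

# For this weak anisotropy, gamma + gamma'' = 1 + eps*(1 - n**2)*cos(n*u)
# never vanishes, so the branch phi_0(phi_R) is single-valued and stable.
```

A finite-difference check of `dphi0_dphiR` against `phi0` confirms the implicit-function relation.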
## Appendix B Transformation of the Equation of Evolution
As written in the main text, the partial differential equation (PDE)
$\frac{\partial}{\partial t}\left(\zeta-\ell_{0}\frac{\partial\zeta}{\partial
x}\right)=D_{0}\frac{\partial^{2}\zeta}{\partial x^{2}}$ (77)
governing the evolution of the average front has an unusual form from a
physicist's point of view. In this appendix, we demonstrate that, with a
suitable change of variables, this PDE can be written in a more common
form without a cross derivative, which allows a characteristic time to be extracted.
The discriminant of the characteristic polynomial of Eq. (77) is
$\Delta_{\zeta}=\ell_{0}^{2}$, which leads to the roots $r_{1}=0$ and
$r_{2}=\ell_{0}/D_{0}$.
One introduces the characteristic coordinates $\xi=t-r_{1}x=t$ and
$\eta=t-r_{2}x=t-\ell_{0}x/D_{0}$. Using those variables, the linear
differential operator becomes symmetric and Eq. (77) reads
$\frac{\ell_{0}^{2}}{D_{0}}\partial_{\xi\eta}\zeta+\partial_{\xi}\zeta+\partial_{\eta}\zeta=0.$
(78)
Finally, in order to remove the cross derivative, one sets
$\alpha=\xi+\eta=2t-\ell_{0}x/D_{0}$ and $\beta=\xi-\eta=\ell_{0}x/D_{0}$ and
obtains
$\partial_{\alpha\alpha}\zeta-\partial_{\beta\beta}\zeta+\frac{1}{\tau_{ab}}\partial_{\alpha}\zeta=0,$
(79)
where $\tau_{ab}=\frac{\ell_{0}^{2}}{2D_{0}}$ is a characteristic time.
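The change of variables can be verified symbolically. The following check, which is our addition using SymPy, applies the chain rule for $\alpha=2t-\ell_{0}x/D_{0}$ and $\beta=\ell_{0}x/D_{0}$ to Eq. (77) and confirms that it reduces to Eq. (79) up to an overall factor $\ell_{0}^{2}/D_{0}$:

```python
import sympy as sp

l0, D0 = sp.symbols('ell_0 D_0', positive=True)
u, v = sp.symbols('alpha beta')
Z = sp.Function('zeta')(u, v)

# Chain rule for alpha = 2t - (l0/D0) x and beta = (l0/D0) x:
# d/dt -> 2 d/dalpha,   d/dx -> (l0/D0) (d/dbeta - d/dalpha)
Dt = lambda f: 2 * sp.diff(f, u)
Dx = lambda f: (l0 / D0) * (sp.diff(f, v) - sp.diff(f, u))

# Eq. (77):  d/dt (zeta - l0 * dzeta/dx) - D0 * d2zeta/dx2 = 0
lhs = Dt(Z - l0 * Dx(Z)) - D0 * Dx(Dx(Z))

# Eq. (79) multiplied by l0**2/D0, with tau_ab = l0**2 / (2 D0)
tau = l0 ** 2 / (2 * D0)
target = (l0 ** 2 / D0) * (sp.diff(Z, u, 2) - sp.diff(Z, v, 2)
                           + sp.diff(Z, u) / tau)

residual = sp.expand(lhs - target)  # vanishes identically
```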
## References
  * Jackson _et al._ (1966) K. Jackson, J. D. Hunt, and K. H. Jackson, _Trans. Met. Soc. AIME_, 236 (1966).
* Seetharaman and Trivedi (1988) V. Seetharaman and R. Trivedi, Metallurgical Transactions A 19, 2955 (1988).
* Ginibre _et al._ (1997) M. Ginibre, S. Akamatsu, and G. Faivre, Phys. Rev. E 56, 780 (1997).
* Trivedi _et al._ (1991) R. Trivedi, J. T. Mason, J. D. Verhoeven, and W. Kurz, Metallurgical Transactions A 22, 2523 (1991).
* Langer (1980) J. S. Langer, Phys. Rev. Lett. 44, 1023 (1980).
* Datye and Langer (1981) V. Datye and J. S. Langer, Phys. Rev. B 24, 4155 (1981).
  * Manneville (1991) P. Manneville, _Dissipative Structures and Weak Turbulence_ (Academic Press, 1991).
* Cross and Hohenberg (1993) M. C. Cross and P. C. Hohenberg, Rev. Mod. Phys. 65, 851 (1993).
* Akamatsu _et al._ (2002) S. Akamatsu, M. Plapp, G. Faivre, and A. Karma, Phys. Rev. E 66, 030501 (2002).
* Akamatsu _et al._ (2004) S. Akamatsu, G. Faivre, M. Plapp, and A. Karma, Metallurgical and Materials Transactions A 35, 1815 (2004).
* Kassner and Misbah (1991) K. Kassner and C. Misbah, Phys. Rev. A 44, 6513 (1991).
* Karma and Sarkissian (1996) A. Karma and A. Sarkissian, Metallurgical and Materials Transactions A 27, 635 (1996).
* Parisi and Plapp (2008) A. Parisi and M. Plapp, Acta Materialia 56, 1348 (2008).
* Parisi and Plapp (2010) A. Parisi and M. Plapp, EPL 90, 26010 (2010).
* Caroli _et al._ (1992) B. Caroli, C. Caroli, G. Faivre, and J. Mergy, Journal of Crystal Growth 118, 135 (1992).
* Akamatsu _et al._ (2012a) S. Akamatsu, S. Bottin-Rousseau, M. Şerefoğlu, and G. Faivre, Acta Materialia 60, 3206 (2012a).
* Hoffman and Cahn (1972) D. W. Hoffman and J. W. Cahn, Surface Science 31, 368 (1972).
* Akamatsu _et al._ (2012b) S. Akamatsu, S. Bottin-Rousseau, M. Şerefoğlu, and G. Faivre, Acta Materialia 60, 3199 (2012b).
* Ghosh _et al._ (2015) S. Ghosh, A. Choudhury, M. Plapp, S. Bottin-Rousseau, G. Faivre, and S. Akamatsu, Phys. Rev. E 91, 022407 (2015).
* Kramer and Tiller (1965) J. Kramer and W. Tiller, The Journal of Chemical Physics 42, 257 (1965).
* Wheeler (1999) A. A. Wheeler, Journal of Statistical Physics 95, 1245 (1999).
* Cabrera (1964) N. Cabrera, Surface Science 2, 320 (1964).
* Philippe _et al._ (2018) T. Philippe, H. Henry, and M. Plapp, Journal of Crystal Growth 503, 20 (2018).
* Herring (1951) C. Herring, Phys. Rev. 82, 87 (1951).
* Kassner and Misbah (1992) K. Kassner and C. Misbah, Phys. Rev. A 45, 7372 (1992).
  * Stefan (1889) J. Stefan, Abteilung 2, Mathematik, Astronomie, Physik, Meteorologie und Technik 98, 965 (1889).
* Song _et al._ (2018) Y. Song, S. Akamatsu, S. Bottin-Rousseau, and A. Karma, Phys. Rev. Materials 2, 053403 (2018).
* Caroli _et al._ (1990) B. Caroli, C. Caroli, and B. Roulet, Journal de Physique 51, 1865 (1990).
* Brattkus and Misbah (1990) K. Brattkus and C. Misbah, Phys. Rev. Lett. 64, 1935 (1990).
* Plapp and Karma (2002) M. Plapp and A. Karma, Phys. Rev. E 66, 061608 (2002).
# Fast Ensemble Learning Using Adversarially-Generated Restricted Boltzmann
Machines
Gustavo H. de Rosa [email protected] Mateus Roder [email protected]
João Paulo Papa [email protected] Department of Computing
São Paulo State University
Bauru, Brazil
###### Abstract
Machine Learning has been applied in a wide range of tasks throughout the last
years, ranging from image classification to autonomous driving and natural
language processing. Restricted Boltzmann Machine (RBM) has received recent
attention and relies on an energy-based structure to model data probability
distributions. Notwithstanding, such a technique is susceptible to adversarial
manipulation, i.e., slightly or profoundly modified data. An alternative to
overcome the adversarial problem lies in the Generative Adversarial Networks
(GAN), capable of modeling data distributions and generating adversarial data
that resemble the original ones. Therefore, this work proposes to artificially
generate RBMs using Adversarial Learning, where pre-trained weight matrices
serve as the GAN inputs. Furthermore, it proposes to sample copious amounts of
matrices and combine them into ensembles, alleviating the burden of training
new models. Experimental results demonstrate the suitability of the proposed
approach under image reconstruction and image classification tasks, and
describe how artificial-based ensembles are alternatives to pre-training vast
amounts of RBMs.
###### keywords:
Machine Learning , Adversarial Learning , Generative Adversarial Network ,
Restricted Boltzmann Machine , Ensemble Learning
††journal: ArXiv
## 1 Introduction
Artificial Intelligence (AI) has become one of the most cherished areas
throughout the last decades, essentially due to its capability in supporting
humans in decision-making tasks and automating repetitive jobs [1]. The key-
point for AI’s remarkable results lies in a subarea commonly known as Machine
Learning (ML). Such area is in charge of researching and developing algorithms
that are proficient in solving tasks through models that can learn from
examples, such as image classification [2], object recognition [3], autonomous
driving [4], and natural language processing [5], among others.
An algorithm that has received spotlights is the Restricted Boltzmann Machine
(RBM) [6], which is an energy-based architecture that attempts to represent
the data distribution relying upon physical and probabilistic theories. It can
also reconstruct data in a learned latent space, acting as an encoder-
decoder capable of extracting new features from the original data [7].
Nevertheless, such a network, along with other ML algorithms, often suffers
when reconstructing manipulated data, e.g., slightly or profoundly modified
information, being unfeasible when employed in real-world
scenarios111Usually, real-world data have higher variance than experimental
data and often result in poor performance. [8].
A noteworthy perspective that attempts to overcome such a problem is
Adversarial Learning (AL), which introduces adversarial data (noisy data) to a
network’s training, aiding in its ability to deal with high variance
information [9]. Notwithstanding, it is not straightforward to produce noisy
data and feed it to the network, as too much noise will impact the architecture’s
performance, while too little noise will not help the network in learning
distinct patterns [10]. Goodfellow et al. [11] introduced the Generative
Adversarial Network (GAN) as a solution to AL’s problem, where a two-network
system competes in a zero-sum approach. In summary, a GAN is composed of both
discriminative and generative networks, commonly known as the discriminator
and the generator. The generator is in charge of producing fake data, while
the discriminator estimates the probability of the fake data being real.
One can observe in the literature that several works have successfully applied
GANs in adversarial data generation, specifically images. For instance, Ledig
et al. [12] introduced the Super-Resolution Generative Adversarial Network
(SRGAN), capable of inferring photo-realistic images from upscaling factors by
employing a perceptual loss function that can guide the network in recovering
textures from downsampled images. Choi et al. [13] proposed the StarGAN, which
is a scalable approach for performing image-to-image translations over
multiple domains. They proposed a unified architecture that simultaneously
trains several datasets with distinct domains, leading to a superior quality
of the translated images. Moreover, Ma et al. [14] presented a novel framework
that fuses features from infrared and visible images, known as FusionGAN.
Essentially, their generator creates fused images with high infrared
intensities and additional visible features, while their discriminator forces
the images to have more details than the visible ones.
Notwithstanding, only a few works prosperously employed GANs and energy-based
models, e.g., RBMs. Zhao et al. [15] presented an Energy-based Generative
Adversarial Network (EBGAN) to generate high-resolution images, which uses an
energy-based discriminator that assigns high energy values to generated
samples and a generator that produces contrastive samples with minimal energy
values, allowing a more stable training behavior than traditional GANs. Luo et
al. [16] proposed an alternative training procedure for ClassRBMs in the
context of image classification, denoted as Generative Objective and
Discriminative Objective (ANGD). Essentially, they used GAN-based training
concepts, where the parameters are firstly updated according to the generator,
followed by the discriminator’s updates. Furthermore, Zhang et al. [17]
introduced the Adversarial Restricted Boltzmann Machine (ARBM) in high-quality
color image generation, which minimizes an adversarial loss between data
distribution and the model’s distribution without resorting to explicit
gradients. Nonetheless, to the best of the authors’ knowledge, no single work
attempts to use GANs to generate artificial RBMs.
Therefore, this work proposes to pre-train RBMs in a training set and use
their weight matrices as the input data for GANs. In other words, the idea is
to learn feasible representations of the input weight matrices and further
generate artificial matrices, where each generated matrix will be used to
construct a new RBM and evaluate its performance in a testing set.
Additionally, this paper proposes to sample vast amounts of artificial-based
RBMs and construct heterogeneous ensembles to alleviate the burden of pre-
training plentiful amounts of new RBMs. Therefore, this work has three main
contributions:
* 1.
Introduce a methodology to generate artificial RBMs;
* 2.
Apply artificially-generated RBMs in the context of image reconstruction and
classification;
* 3.
Construct heterogeneous ensembles to lessen the computational load of pre-
training new models.
The remainder of the paper is organized as follows. Section 2 introduces the
main concepts related to Restricted Boltzmann Machines and Generative
Adversarial Networks. Section 3 describes the proposed approach considering
the Adversarially-Generated RBMs, while Section 4 details the methodology.
Section 5 presents the experimental results and some insightful discussions.
Finally, Section 6 states conclusions and future work.
## 2 Theoretical Background
In this section, we present a brief explanation of Restricted Boltzmann
Machines and Generative Adversarial Networks.
### 2.1 Restricted Boltzmann Machines
Restricted Boltzmann Machines are stochastic neural networks inspired by
physical concepts, such as energy and entropy. Usually, RBMs are
trained through unsupervised algorithms and applied in image-based tasks, such
as collaborative filtering, feature extraction, image reconstruction, and
classification.
An RBM comprises a visible layer $\mathbf{v}$ composed of $m$ units, which
deals directly with the input data, and a hidden layer $\mathbf{h}$
constituted of $n$ units, which is in charge of learning features and the
input data’s probabilistic distribution. Also, let $\mathbf{W}_{m\times n}$ be
the matrix that models the weights between visible and hidden units, where
$w_{ij}$ stands for the connection between visible unit $v_{i}$ and hidden
unit $h_{j}$. Figure 1 illustrates a standard RBM architecture222Note that an
RBM does not have connections between units in the same layer..
Figure 1: Standard RBM architecture.
A deriving model from the RBM, often known as Bernoulli-Bernoulli RBM (BBRBM),
assumes that all units in layers $\mathbf{v}$ and $\mathbf{h}$ are binary and
sampled from a Bernoulli distribution [18], i.e., $\mathbf{v}\in\\{0,1\\}^{m}$
and $\mathbf{h}\in\\{0,1\\}^{n}$. Under such an assumption, Equation 1
describes the energy function of a BBRBM:
$E(\mathbf{v},\mathbf{h})=-\sum_{i=1}^{m}a_{i}v_{i}-\sum_{j=1}^{n}b_{j}h_{j}-\sum_{i=1}^{m}\sum_{j=1}^{n}v_{i}h_{j}w_{ij},$
(1)
where $\mathbf{a}$ and $\mathbf{b}$ stand for the visible and hidden units
biases, respectively.
Additionally, the joint probability of a given configuration
$(\mathbf{v},\mathbf{h})$ is modeled by Equation 2 , as follows:
$P(\mathbf{v},\mathbf{h})=\frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z},$ (2)
where $Z$ stands for the partition function, responsible for normalizing the
probability over all possible visible and hidden units configurations.
Moreover, the probability of a visible (input) vector is described by Equation
3, as follows:
$P(\mathbf{v})=\frac{\displaystyle\sum_{\mathbf{h}}e^{-E(\mathbf{v},\mathbf{h})}}{Z}.$
(3)
One can perceive that a BBRBM is a bipartite graph; hence, it allows
information to flow from visible to hidden units and vice versa. Therefore,
it is possible to employ Equations 4 and 5 to formulate mutually independent
activations for both visible and hidden units, as follows:
$P(\mathbf{v}|\mathbf{h})=\prod_{i=1}^{m}P(v_{i}|\mathbf{h})$ (4)
and
$P(\mathbf{h}|\mathbf{v})=\prod_{j=1}^{n}P(h_{j}|\mathbf{v}),$ (5)
where $P(\mathbf{v}|\mathbf{h})$ and $P(\mathbf{h}|\mathbf{v})$ are the
probability of the visible layer given the hidden states and the probability of
the hidden layer given the visible states, respectively.
Moreover, from Equations 4 and 5, one can achieve the probability of
activating a single visible unit $i$ given hidden states and the probability
of achieving a single hidden unit $j$ given visible states. Such activations
are described by Equations 6 and 7, as follows:
$P(v_{i}=1|\mathbf{h})=\sigma\left(\sum_{j=1}^{n}w_{ij}h_{j}+a_{i}\right)$ (6)
and
$P(h_{j}=1|\mathbf{v})=\sigma\left(\sum_{i=1}^{m}w_{ij}v_{i}+b_{j}\right),$
(7)
where $\sigma(\cdot)$ is the logistic-sigmoid function.
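For concreteness, Eqs. (6) and (7) can be sketched in NumPy as follows; the layer sizes and the random parameters below are illustrative stand-ins for a trained $\theta=(\mathbf{W},\mathbf{a},\mathbf{b})$, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: m visible units, n hidden units.
m, n = 784, 64
W = rng.normal(scale=0.01, size=(m, n))  # weight matrix W_{m x n}
a = np.zeros(m)                          # visible biases
b = np.zeros(n)                          # hidden biases

def p_h_given_v(v):
    """Eq. (7): activation probability of each hidden unit given v."""
    return sigmoid(v @ W + b)

def p_v_given_h(h):
    """Eq. (6): activation probability of each visible unit given h."""
    return sigmoid(h @ W.T + a)

v = rng.integers(0, 2, size=m).astype(float)  # a binary visible vector
h_prob = p_h_given_v(v)
h = (rng.random(n) < h_prob) * 1.0            # Bernoulli sample of h
```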
The training algorithm of a BBRBM learns a set of parameters
$\theta=(\mathbf{W},\mathbf{a},\mathbf{b})$ through an optimization problem,
which aims at maximizing the product of probabilities derived from a training
set ${\cal D}$. Equation 8 models such a problem, as follows:
$\operatorname*{\arg\\!\max}_{\theta}\prod_{\mathbf{d}\in{\cal
D}}P(\mathbf{d}).$ (8)
An alternative way to solve such an optimization is to minimize the negative
of its logarithm, known as the Negative Log-Likelihood (NLL), which measures
the discrepancy between the reconstructed and the original data distributions. A
better alternative proposed by Hinton et al. [6], known as Contrastive
Divergence (CD), uses the training data as the initial visible units and the
Gibbs sampling methods to infer the hidden and reconstructed visible layers.
Finally, one can apply derivatives and formulate the parameters update rule,
described by Equations 9, 10 and 11, as follows:
$\mathbf{W}^{s+1}=\mathbf{W}^{s}+\eta(\mathbf{v}P(\mathbf{h}|\mathbf{{v}})-\mathbf{\tilde{v}}P(\mathbf{\tilde{h}}|{\mathbf{\tilde{v}}})),$
(9)
$\mathbf{a}^{s+1}=\mathbf{a}^{s}+\eta(\mathbf{v}-\mathbf{\tilde{v}})$ (10)
and
$\mathbf{b}^{s+1}=\mathbf{b}^{s}+\eta(P(\mathbf{h}|\mathbf{v})-P(\mathbf{\tilde{h}}|\mathbf{\tilde{v}})),$
(11)
where $s$ stands for the current epoch, $\eta$ is the learning rate,
$\mathbf{\tilde{v}}$ stands for the reconstruction of the visible layer given
$\mathbf{h}$, and $\mathbf{\tilde{h}}$ is an approximation of the hidden
vector $\mathbf{h}$ given $\mathbf{\tilde{v}}$.
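The update rules above can be condensed into a single CD-1 step. The NumPy sketch below is our reading of Eqs. (9)-(11) for a single sample, with the products realised as outer products and the learning rate written explicitly in all three updates:

```python
import numpy as np

def cd1_step(v0, W, a, b, eta=0.1, rng=np.random.default_rng(0)):
    """One Contrastive Divergence (CD-1) update following Eqs. (9)-(11):
    positive phase on the data v0, negative phase on the one-step Gibbs
    reconstruction v_tilde."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    ph0 = sigmoid(v0 @ W + b)                   # P(h | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + a)                 # reconstruction v_tilde
    ph1 = sigmoid(pv1 @ W + b)                  # P(h_tilde | v_tilde)

    # Outer products realise the v P(h|v) terms of Eq. (9) for vectors.
    W = W + eta * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a = a + eta * (v0 - pv1)                    # Eq. (10)
    b = b + eta * (ph0 - ph1)                   # Eq. (11)
    return W, a, b
```

Iterating `cd1_step` on a fixed binary sample drives the reconstruction error down, which is the behaviour the training algorithm relies on.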
### 2.2 Generative Adversarial Networks
Goodfellow et al. [11] introduced the Generative Adversarial Networks,
essentially composed of discriminative and generative networks that compete
among themselves in a zero-sum game. The discriminative part estimates the
probability of a fake sample being a real one, while the generative part
produces the fake samples. Figure 2 illustrates an example of a standard
Generative Adversarial Network.
Figure 2: Standard architecture of a Generative Adversarial Network.
Regarding its training algorithm, the discriminative network $D$ is trained to
maximize the probability of classifying data as real images, regardless of
them being real or generated. Concurrently, the generative network $G$ is
trained to minimize the divergence between real and fake data distributions,
i.e., $\log(1-D(G(\mathbf{\zeta})))$. As mentioned before, the two neural
networks compete between themselves in a zero-sum game, trying to achieve an
equilibrium, which is represented by Equation 12:
$\min_{G}\max_{D}C(D,G)=\mathbb{E}_{\mathbf{x}}[\log D(\mathbf{x})]+\mathbb{E}_{\mathbf{\zeta}}[\log(1-D(G(\mathbf{\zeta})))],$
(12)
where $C(D,G)$ stands for the objective of the two-player game, $D(\mathbf{x})$
stands for the estimated probability of a real sample $\mathbf{x}$ being real,
$\mathbb{E}_{\mathbf{x}}$ is the expectation over all samples from
the real data set $\mathcal{X}$, $G(\mathbf{\zeta})$ stands for the generated
data given the noise vector $\mathbf{\zeta}$, $D(G(\mathbf{\zeta}))$ is the
estimated probability of a fake sample $G(\mathbf{\zeta})$ being real, and
$\mathbb{E}_{\mathbf{\zeta}}$ is the expectation over all random
generator inputs, i.e., the expected value over all fake samples generated by
$G$.
Notwithstanding, Equation 12 imposes a problem where GANs can get trapped in
local optima when the discriminator faces an easy task. Essentially, at the
early training iterations, when $G$ still does not know how to generate
adequate samples, $D$ might reject the generated samples with high
probabilities333Such a problem saturates the function
$\log(1-D(G(\mathbf{\zeta})))$., leading to inadequate training. Therefore, an
alternative to overcome such a problem is to train $G$ to maximize
$\log(D(G(\mathbf{\zeta})))$, improving initial gradients and mitigating the
possibility of getting trapped in local optima.
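The saturation argument can be made concrete by looking at the gradient of the two generator objectives with respect to $p=D(G(\mathbf{\zeta}))$; the snippet below is a small illustration of ours, not part of the original formulation:

```python
import numpy as np

# Gradient of the two generator objectives with respect to
# p = D(G(zeta)), the discriminator's output on a fake sample.
def saturating_grad(p):
    # d/dp log(1 - p): the original minimax objective of Equation 12.
    return -1.0 / (1.0 - p)

def nonsaturating_grad(p):
    # d/dp log(p): the alternative objective that avoids saturation.
    return 1.0 / p

# Early in training the discriminator easily rejects fakes, so p is tiny:
p_early = 1e-3
weak = abs(saturating_grad(p_early))       # ~1: almost no learning signal
strong = abs(nonsaturating_grad(p_early))  # ~1000: strong learning signal
```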
## 3 Proposed Approach
This section provides a more in-depth explanation of how the proposed approach
works, divided into three parts: pre-training, adversarial learning, and
sampling. Additionally, we describe how to construct artificially-based
ensembles. Figure 3 illustrates the proposed approach pipeline to provide a
more precise visualization.
Figure 3: Proposed approach pipeline.
### 3.1 Pre-Training RBMs
Let $t$ be a sample that belongs to a training set $\mathcal{T}$, $K$ be a set
of RBMs, as well as $\mathbf{W}_{k}$ be the weight matrix from a given RBM
$R_{k}$. Initially, every RBM is trained on every sample from the training set
$\mathcal{T}$, resulting in $K$ pre-trained RBMs and, hence, $K$ pre-trained
weight matrices. Further, the $K$ weight matrices are concatenated into a new
training set, denoted as $\mathcal{T}_{A}$, which is used to feed the
Adversarial Learning training. Equations 13 and 14 formulate such a process,
as follows:
$\mathbf{W}_{k}=R_{k}(t)\mid\forall t\in\mathcal{T}\text{,
where}\>k\in\\{1,2,\ldots,K\\}$ (13)
and
$\mathcal{T}_{A}=[\mathbf{W}_{1},\mathbf{W}_{2},\ldots,\mathbf{W}_{K}]^{T}.$
(14)
Therefore, one can see that the intended approach aims to use pre-trained
weight matrices as the GAN’s input to learn their patterns and sample new
matrices.
### 3.2 Adversarial Learning
After performing the RBMs pre-training, it is possible to train a GAN with the
new training set $\mathcal{T}_{A}$. In other words, the idea is to use the
pre-trained weight matrices as the GAN’s input to train its generator and
learn how to generate artificial matrices. Nevertheless, due to the difficulty
in establishing an equilibrium when training GANs, we opted to employ an
additional validation set, denoted as $\mathcal{V}$, which is used to assess
whether an artificial matrix is suitable.
Before the GAN’s training process, we randomly select an original RBM
$R_{r}$, where $r\in[1,K]$ is a randomly sampled integer, and reset its
biases. Then, we sample a new weight matrix $\mathbf{\tilde{W}}_{r}$ from its
generator for every training epoch $d$, such that $d\in\\{1,2,\ldots,D\\}$ and
$D$ is the maximum number of epochs. Such a procedure is described by Equation
15:
$\mathbf{\tilde{W}}_{r}=G(\mathbf{\zeta}_{d})\mid\text{where}\>d\in\\{1,2,\ldots,D\\}.$
(15)
Afterward, $\mathbf{\tilde{W}}_{r}$ replaces the weight matrix of $R_{r}$,
which then reconstructs the validation set $\mathcal{V}$. Finally, after
reconstructing $D$ weight matrices, we select the epoch that achieved the
lowest mean squared error (MSE) as our final GAN, denoted as $S^{\ast}$.
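The selection loop can be sketched as follows; `generator_sample` is a hypothetical stand-in for the trained generator $G(\mathbf{\zeta}_{d})$, and the sizes are illustrative:

```python
import numpy as np

# At each GAN epoch d, a matrix sampled from the generator replaces the
# weights of the chosen RBM (biases reset), the validation set is
# reconstructed, and the epoch with the lowest MSE is kept as S*.
rng = np.random.default_rng(0)
m, n, D = 8, 4, 5

def generator_sample(epoch):
    # Placeholder for G(zeta_d), Eq. (15).
    return rng.normal(scale=0.01, size=(m, n))

def reconstruction_mse(W, data):
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    recon = sig(sig(data @ W) @ W.T)  # biases reset to zero
    return float(np.mean((data - recon) ** 2))

validation = (rng.random((16, m)) < 0.5) * 1.0
errors = [reconstruction_mse(generator_sample(d), validation)
          for d in range(1, D + 1)]
best_epoch = int(np.argmin(errors)) + 1  # the epoch whose generator is S*
```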
### 3.3 Sampling New RBMs
Finally, one can now sample $L$ weight matrices from $S^{\ast}$ and create
$L$ new artificial RBMs. Furthermore, to verify whether the artificially
generated RBMs are suitable, they reconstruct the testing set $\mathcal{Z}$
and are compared against the $K$ original RBMs.
Note that the proposed approach is extensible to the task of image
classification, where features are extracted from an RBM’s hidden layer and
fed to a classifier.
### 3.4 Constructing Ensembles
Alternatively, with a pre-trained GAN in hands, one can sample a vast amount
of artificial RBMs and compose a heterogeneous ensemble along with the
original RBMs. Let $H$ be an ensemble composed of $K$ original and $L$
artificial RBMs. Additionally, let $\hat{y}_{i}$, where
$i\in\\{1,2,\ldots,K+L\\}$, be the prediction of the $i$-th RBM over a testing
set $\mathcal{Z}$. One can construct an array of predicted labels, denoted as
$\mathbf{y}(z)$, which holds the predictions of all $K$ and $L$ RBMs over
sample $z\in\mathcal{Z}$. Finally, the heterogeneous ensemble combines all
predictions using majority voting, as follows:
$H(z)=\operatorname*{\arg\\!\max}_{j\in\\{1,2,\ldots,C\\}}\sum_{i=1}^{K+L}\hat{y}_{i,j}(z),$
(16)
where $C$ stands for the number of classes.
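Equation 16 amounts to a per-sample majority vote over the $K+L$ predictions, which can be sketched as:

```python
import numpy as np

def majority_vote(predictions):
    """Eq. (16): combine per-model class predictions by majority voting.
    `predictions` has shape (K + L, num_samples) and holds integer labels."""
    predictions = np.asarray(predictions)
    num_classes = predictions.max() + 1
    # Count, per sample, the votes each class j received from all K+L RBMs.
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=num_classes), 0, predictions)
    return votes.argmax(axis=0)

# Three models voting on four samples (illustrative labels):
preds = [[0, 1, 2, 2],
         [0, 1, 1, 2],
         [1, 1, 2, 0]]
combined = majority_vote(preds)
```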
## 4 Methodology
In this section, we discuss the methodology used to conduct this work, the
employed dataset, the evaluation tasks, and the experimental setup444The
source code is available at https://github.com/gugarosa/synthetic_rbms..
### 4.1 Evaluation Tasks
The proposed approach is evaluated over distinct tasks, such as image
reconstruction and image classification. Regarding image reconstruction, the
objective function is the mean squared error, which stands for the
error between the original and the reconstructed images. Furthermore,
concerning the image classification task, we opted to use an additional
classifier, such as the Support Vector Machine [19] with radial kernel and
without parameter optimization, instead of a Discriminative Restricted
Boltzmann Machine [20]. Therefore, instead of reconstructing samples with the
original or artificial RBMs, we use them as feature extractors, where the
original samples are passed to the hidden layer, and their activations are
passed to the classifier. Afterward, the classifier outputs an accuracy value,
which stands for the percentage of correct labels assigned by a classifier and
guides the proposed task. Additionally, we opted to construct original and
artificial RBMs ensembles to provide a more in-depth comparison and enhance
the proposed approach.
### 4.2 Dataset
We opted to consider only one toy dataset to conduct the proposed experiments,
as we wanted to explore the suitability of the proposed method and how it would
behave under a controlled scenario. The dataset is the well-known
MNIST555http://yann.lecun.com/exdb/mnist [21], which is composed of a set of
$28\times 28$ grayscale images of handwritten digits, containing a training
set with $60,000$ images from digits ‘0’-‘9’, as well as a testing set with
$10,000$ images. Additionally, as our proposed approach uses a validation set,
we opted to split the training set into two new sets: (i) $48,000$ training
images and (ii) $12,000$ validation images.
### 4.3 Experimental Setup
Table 1 describes the experimental setup used to pre-train the RBMs,
implemented using the Learnergy [22] library. Note that as we are trying to
understand whether the proposed approach is suitable or not in the evaluated
context, we opted to use the simplest version of RBM, i.e., without any
regularization, such as momentum, weight decay, and temperature.
Table 1: RBM pre-training hyperparameter configuration. Hyperparameter | Value
---|---
$m$ (number of visible units) | $784$
$n$ (number of hidden units) | $[32,64,128]$
$steps$ (number of CD steps) | $1$
$\eta$ (learning rate) | $0.1$
$bs$ (batch size) | $128$
$epochs$ (number of training epochs) | $10$
Table 2 describes the experimental setup used to train the GANs, implemented
using the NALP666https://github.com/gugarosa/nalp library. Before the real
experimentation, we opted to conduct a grid-search procedure to find adequate
values for the number of noise dimensions, the batch size, and training
epochs. Additionally, we opted to use a low learning rate due to the high
number of training epochs, aiming at a more stable convergence. Regarding the GAN
architecture, we used a linear architecture composed of three down-samplings
($512,256,128$ units), ReLU activations ($0.01$ threshold), and a single-unit
output layer for the discriminator, as well as three up-samplings
($128,256,512$), ReLU activations ($0.01$ threshold) and an output layer with
a hyperbolic tangent activation. Considering the ensemble construction in
image classification, we opted to use its most straightforward approach, i.e.,
majority voting, which employs $K$ models to create an array of $K$
predictions and takes the most voted prediction as the final label.
Table 2: GAN pre-training hyperparameter configuration. Hyperparameter | Value
---|---
$n_{z}$ (number of noise dimensions) | $10000$
$\eta_{D}$ (discriminator learning rate) | $0.0001$
$\eta_{G}$ (generator learning rate) | $0.0001$
$K$ (number of pre-trained RBMs) | $[32,64,128,256,512]$
$bs$ (batch size) | $\frac{K}{16}$
$E$ (number of training epochs) | $4000$
## 5 Experimental Results
This section presents the experimental results concerning the tasks of image
reconstruction and image classification.
### 5.1 Image Reconstruction
Table 3 describes the mean reconstruction errors and their standard deviations over the MNIST testing set (note that we sampled $K$ artificial weight matrices, i.e., the same amount as pre-trained RBMs), concerning the original RBMs ($R$) and the artificially-generated ones ($S^{\ast}$). According to the Wilcoxon signed-rank test with $5\%$ significance, the bolded cells are statistically equivalent, and the underlined ones denote the lowest mean reconstruction errors.
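In practice the test above would be run with `scipy.stats.wilcoxon`; the following minimal numpy sketch computes only the $W$ statistic of the paired test, discarding zero differences and ranking ties in order of appearance (both simplifications are ours):

```python
import numpy as np

def wilcoxon_statistic(a, b):
    """W statistic of the Wilcoxon signed-rank test for paired samples a, b.

    Minimal version: zero differences are discarded, ties get no midranks,
    and no p-value is computed.
    """
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    d = d[d != 0.0]
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks 1..N of |d|
    return float(min(ranks[d > 0].sum(), ranks[d < 0].sum()))

# Hypothetical per-run reconstruction errors of two compared models:
w = wilcoxon_statistic([88.1, 88.4, 88.6, 88.7], [110.8, 103.5, 120.1, 121.9])
```

A small $W$ (here every difference has the same sign, so $W=0$) is evidence against the hypothesis that the two models perform equivalently.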
Table 3: Mean reconstruction error and standard deviation over the MNIST testing set.
Hidden Units | Number of RBMs | $\mathbf{R}$ | $\mathbf{S^{\ast}}$
---|---|---|---
$32$ | $32$ | $\underline{\mathbf{88.173\pm 1.053}}$ | $110.779\pm 4.120$
$32$ | $64$ | $\underline{\mathbf{88.418\pm 1.134}}$ | $103.484\pm 1.015$
$32$ | $128$ | $\underline{\mathbf{88.607\pm 1.091}}$ | $120.138\pm 10.217$
$32$ | $256$ | $\underline{\mathbf{88.639\pm 1.040}}$ | $121.924\pm 13.171$
$32$ | $512$ | $\underline{\mathbf{88.662\pm 1.063}}$ | $152.059\pm 17.867$
$64$ | $32$ | $\underline{\mathbf{75.219\pm 0.881}}$ | $92.628\pm 6.344$
$64$ | $64$ | $\underline{\mathbf{75.351\pm 0.969}}$ | $90.071\pm 2.443$
$64$ | $128$ | $\underline{\mathbf{75.393\pm 0.910}}$ | $102.536\pm 7.133$
$64$ | $256$ | $\underline{\mathbf{75.326\pm 0.941}}$ | $128.106\pm 22.970$
$64$ | $512$ | $\underline{\mathbf{75.274\pm 0.887}}$ | $137.621\pm 29.069$
$128$ | $32$ | $\underline{\mathbf{64.768\pm 0.596}}$ | $72.607\pm 5.870$
$128$ | $64$ | $\underline{\mathbf{64.748\pm 0.616}}$ | $76.660\pm 7.823$
$128$ | $128$ | $\underline{\mathbf{64.756\pm 0.643}}$ | $71.581\pm 5.830$
$128$ | $256$ | $\underline{\mathbf{64.776\pm 0.677}}$ | $95.401\pm 18.083$
$128$ | $512$ | $\underline{\mathbf{64.725\pm 0.683}}$ | $106.867\pm 30.411$
Artificial RBMs are generated from noise and may lose performance when sampled from large sets of input data. A considerable $K$ value indicates that the artificial RBMs were sampled from a GAN trained with $K$ RBMs, hence introducing a diversity factor and increasing their reconstruction errors and standard deviations. One can perceive that the $K=\\{256,512\\}$ artificial RBMs achieved the highest reconstruction errors and standard deviations, indicating a performance loss and a worse reconstruction when compared to the $K=\\{32,64,128\\}$ experiments. On the other hand, such behavior cannot be detected in standard RBMs, as they are trained from scratch with the same data and often achieve similar results.
Another interesting point is that a larger number of hidden units ($n=128$) decreased the gap between the $R$ and $S^{\ast}$ reconstruction errors, especially for lower amounts of $K$, e.g., $K=\\{32,64,128\\}$. As a higher number of hidden units provides more consistent training and better reconstructions, GANs trained with these RBMs are more likely to perform better sampling, which results in lower reconstruction errors.
#### 5.1.1 Analyzing GANs Convergence
Generative Adversarial Networks are known for their generation capabilities,
and even though they are capable of providing interesting results, they are
still black-boxes concerning their discriminators’ and generators’ stability.
A practical tool is to inspect their convergence visually and derive insights from their outputs. Thus, we selected two contrasting experiments for this analysis: Figures 4 and 5 illustrate the training-set loss convergence of GANs with $n=32$ and $n=128$, respectively, using $K=128$ pre-trained inputs.
Figure 4: Training-set loss convergence of a GAN with $n=32$ and $K=128$ pre-trained inputs.
Figure 5: Training-set loss convergence of a GAN with $n=128$ and $K=128$ pre-trained inputs.
Glancing at Figure 4, one can perceive that even though the discriminator converged smoothly, the generator suffered from spiking values throughout the training procedure. Such a fact is explained by the small number of hidden units used to create the pre-trained RBMs, resulting in smaller-sized data fed to the GAN, which cannot be adequately sampled with a large number of noise dimensions ($n_{z}=10000$). On the other hand, Figure 5 illustrates a satisfactory training convergence, where both discriminator and generator suffered from spiking losses only in their early and final iterations and attained stable values during the rest of the training. Such behavior is reflected in the performance, where $S^{\ast}$ with $n=128$ and $K=128$ achieved a mean reconstruction error of $71.581$ against $120.138$ from $S^{\ast}$ with $n=32$ and $K=128$ (cf. Table 3).
#### 5.1.2 Influence of $K$ Pre-Trained RBMs
To provide a more in-depth analysis of how the number of pre-trained RBMs influences the experimentation, we plot the convergence of the validation-set reconstruction error obtained by GANs with $n=128$, as illustrated by Figure 6. The figure exemplifies the difficulty in training GANs, whose training is often accompanied by instability. Moreover, larger $K$ values, such as $K=256$ and $K=512$, brought more instability to the learning process, visually perceptible in the valleys and ridges of the reconstruction plots. On the other hand, a smaller $K$ value, such as $K=32$, did not suffer from much instability, yet it did not minimize the reconstruction error as well as $K=64$ and $K=128$ did.
Figure 6: Validation-set MSE convergence of GANs with $n=128$ and varying numbers of pre-trained inputs $K$.
#### 5.1.3 Quality of Reconstruction
When analyzing a network’s reconstruction capacity, it is common to inspect a sample of a reconstructed image. Figure 7 illustrates a random reconstructed sample obtained by an RBM and a GAN using $n=128$ and $K=128$, as well as by an RBM and a GAN using $n=128$ and $K=512$. One can perceive that the original RBM reconstructions, (a) and (c), were the most pleasant ones, as expected due to their lower MSE. Furthermore, the closest an artificial RBM came to the traditional RBMs is depicted by (b), which could almost reconstruct the sample in the same way as the original versions did. On the other hand, the worst artificial RBM is illustrated by (d), showing that when a learning procedure carries too much noise, it often does not learn the correct patterns it was supposed to learn.
Figure 7: Random sample reconstructed by (a) RBM and (b) GAN using $n=128$ and $K=128$, and by (c) RBM and (d) GAN using $n=128$ and $K=512$.
### 5.2 Image Classification
Regarding the image classification task, we employed an additional set of ensemble architectures, where $H_{R}$ stands for an ensemble composed only of original RBMs, while $H_{S^{\ast}}$ stands for an artificial-based ensemble. Additionally, we propose the following heterogeneous ensembles:
* 1.
$H_{\alpha}$: composed of $0.5K$ original and $0.5K$ artificial RBMs;
* 2.
$H_{\beta}$: composed of $0.25K$ original and $0.75K$ artificial RBMs;
* 3.
$H_{\gamma}$: composed of $K$ original and $5K$ artificial RBMs.
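The ensemble compositions above can be sketched as follows, with integers standing in for trained models; which members of each pool are selected is an illustrative choice of ours.

```python
def build_ensembles(original, artificial):
    """Compose the evaluated ensembles from a pool of K original RBMs and a
    larger pool of artificial RBMs, following the definitions above."""
    K = len(original)
    return {
        "H_R": list(original),
        "H_S*": artificial[:K],
        "H_alpha": original[: K // 2] + artificial[: K // 2],
        "H_beta": original[: K // 4] + artificial[: 3 * K // 4],
        "H_gamma": original + artificial[: 5 * K],
    }

# Stand-in pools: integers in place of trained models (K = 32).
pools = build_ensembles(list(range(32)), list(range(100, 100 + 5 * 32)))
```

Note that $H_{\alpha}$ and $H_{\beta}$ have the same total size $K$, whereas $H_{\gamma}$ grows to $6K$ members.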
Table 4 describes the mean classification accuracies and their standard
deviation over the MNIST testing set. According to the Wilcoxon signed-rank
test with $5\%$ significance, the bolded cells are statistically equivalent,
and the underlined ones are the highest mean classification accuracies.
Table 4: Mean classification accuracy (%) and standard deviation over MNIST
testing set.
Hidden Units | Number of RBMs | $\mathbf{R}$ | $\mathbf{S^{\ast}}$ | $\mathbf{H_{R}}$ | $\mathbf{H_{S^{\ast}}}$ | $\mathbf{H_{\alpha}}$ | $\mathbf{H_{\beta}}$ | $\mathbf{H_{\gamma}}$
---|---|---|---|---|---|---|---|---
$32$ | $32$ | $90.94\pm 0.45$ | $91.68\pm 0.22$ | $\underline{\mathbf{94.44\pm 0.00}}$ | $91.98\pm 0.00$ | $92.95\pm 0.00$ | $92.23\pm 0.00$ | $92.29\pm 0.00$
$32$ | $64$ | $90.88\pm 0.49$ | $86.45\pm 0.21$ | $\underline{\mathbf{94.61\pm 0.00}}$ | $86.75\pm 0.00$ | $90.93\pm 0.00$ | $87.70\pm 0.00$ | $87.23\pm 0.00$
$32$ | $128$ | $90.86\pm 0.46$ | $86.69\pm 0.70$ | $\underline{\mathbf{94.70\pm 0.00}}$ | $87.89\pm 0.00$ | $91.96\pm 0.00$ | $89.07\pm 0.00$ | $88.60\pm 0.00$
$32$ | $256$ | $90.88\pm 0.43$ | $83.34\pm 0.71$ | $\underline{\mathbf{94.67\pm 0.00}}$ | $85.54\pm 0.00$ | $91.78\pm 0.00$ | $87.76\pm 0.00$ | $86.83\pm 0.00$
$32$ | $512$ | $90.90\pm 0.41$ | $85.19\pm 1.03$ | $\underline{\mathbf{94.73\pm 0.00}}$ | $88.03\pm 0.00$ | $92.59\pm 0.00$ | $90.04\pm 0.00$ | $89.13\pm 0.00$
$64$ | $32$ | $94.17\pm 0.22$ | $93.86\pm 0.28$ | $\underline{\mathbf{95.83\pm 0.00}}$ | $93.96\pm 0.00$ | $94.88\pm 0.00$ | $94.16\pm 0.00$ | $94.01\pm 0.00$
$64$ | $64$ | $94.17\pm 0.21$ | $92.88\pm 0.27$ | $\underline{\mathbf{95.82\pm 0.00}}$ | $93.21\pm 0.00$ | $94.47\pm 0.00$ | $93.67\pm 0.00$ | $93.47\pm 0.00$
$64$ | $128$ | $94.18\pm 0.21$ | $94.41\pm 0.14$ | $\underline{\mathbf{95.78\pm 0.00}}$ | $94.60\pm 0.00$ | $95.07\pm 0.00$ | $94.72\pm 0.00$ | $94.71\pm 0.00$
$64$ | $256$ | $94.19\pm 0.21$ | $91.89\pm 0.53$ | $\underline{\mathbf{95.88\pm 0.00}}$ | $92.68\pm 0.00$ | $94.56\pm 0.00$ | $93.36\pm 0.00$ | $93.06\pm 0.00$
$64$ | $512$ | $94.18\pm 0.21$ | $92.12\pm 0.30$ | $\underline{\mathbf{95.85\pm 0.00}}$ | $93.25\pm 0.00$ | $94.98\pm 0.00$ | $93.92\pm 0.00$ | $93.67\pm 0.00$
$128$ | $32$ | $95.39\pm 0.15$ | $95.21\pm 0.08$ | $\underline{\mathbf{96.20\pm 0.00}}$ | $95.25\pm 0.00$ | $95.47\pm 0.00$ | $95.39\pm 0.00$ | $95.30\pm 0.00$
$128$ | $64$ | $95.37\pm 0.13$ | $95.26\pm 0.08$ | $\underline{\mathbf{96.24\pm 0.00}}$ | $95.33\pm 0.00$ | $95.61\pm 0.00$ | $95.41\pm 0.00$ | $95.35\pm 0.00$
$128$ | $128$ | $95.37\pm 0.15$ | $95.42\pm 0.07$ | $\underline{\mathbf{96.23\pm 0.00}}$ | $95.54\pm 0.00$ | $95.82\pm 0.00$ | $95.66\pm 0.00$ | $95.62\pm 0.00$
$128$ | $256$ | $95.37\pm 0.16$ | $94.34\pm 0.19$ | $\underline{\mathbf{96.27\pm 0.00}}$ | $94.79\pm 0.00$ | $95.85\pm 0.00$ | $95.15\pm 0.00$ | $95.02\pm 0.00$
$128$ | $512$ | $95.37\pm 0.15$ | $93.84\pm 0.26$ | $\underline{\mathbf{96.26\pm 0.00}}$ | $94.35\pm 0.00$ | $95.49\pm 0.00$ | $94.72\pm 0.00$ | $94.65\pm 0.00$
According to Table 4, standard RBMs $R$ outperformed artificial RBMs $S^{\ast}$ in $12$ out of $15$ experiments, the exceptions being $n=32$ and $K=32$, $n=64$ and $K=128$, and $n=128$ and $K=128$. On the other hand, the standard ensemble $H_{R}$ outperformed the artificial ensemble $H_{S^{\ast}}$ in every experiment, attaining the highest mean accuracies. An interesting fact is that the accuracy gap between the $H_{R}$ and $H_{S^{\ast}}$ ensembles decreased as the number of hidden units increased. Additionally, a high number of $K$ did not impact $H_{R}$ as much as it impacted $H_{S^{\ast}}$, mainly because the noise present in $S^{\ast}$ propagated to their ensembles.
When comparing the heterogeneous ensembles (mixes of standard and artificial RBMs), one can perceive that fewer artificial RBMs improved the ensemble performance, where the best one, $H_{\alpha}$, is composed of only $50\%$ artificial RBMs. An elevated proportion of artificial RBMs, as in $H_{\beta}$ and $H_{\gamma}$, considerably degraded the ensembles’ performance, yet this effect was attenuated as the number of hidden units increased. Furthermore, an excessive number of artificial RBMs ($5K$) did not degrade performance much when applied in conjunction with every original RBM ($K$), as depicted in the $H_{\gamma}$ column.
### 5.3 Complexity Analysis
Let $\alpha$ be the complexity of pre-training an RBM, and let $\beta$ and $\gamma$ be the training and validation complexities of a GAN, respectively. Since $K$ RBMs are pre-trained and fed to the GAN learning procedure, the complexity of the whole learning procedure is given by Equation 17, as follows:
$\alpha\cdot K+\beta+\gamma.$ (17)
The proposed approach intends to sample new artificial RBMs from a pre-trained GAN and compare them against the standard pre-trained RBMs. Let $\iota$ be the complexity of sampling a single RBM; the complexity of sampling $L$ RBMs is then described by Equation 18, as follows:
$\iota\cdot L\,.$ (18)
Therefore, summing both Equations 17 and 18 together, it is possible to
achieve the proposed approach complexity, as described by Equation 19:
$\alpha\cdot K+\beta+\gamma+\iota\cdot L.$ (19)
Note that $\iota\cdot L$ is insignificant compared to $\alpha\cdot K$, even when $L$ is significantly larger than $K$. The advantage of the proposed approach arises when $\beta+\gamma$ is comparable to $\alpha\cdot K$: as $K$ grows, the cost of training and validating GANs becomes smaller than that of training new sets of RBMs, and thus sampling new RBMs from GANs will be less expensive than training new RBMs.
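A toy calculation illustrates this regime. The unit costs below are hypothetical values of ours, chosen only to make Equations (17)-(19) concrete, not measured numbers.

```python
# Hypothetical unit costs in arbitrary time units; the numbers are ours,
# chosen only to illustrate the regimes of Equations (17)-(19).
alpha = 50.0               # pre-training one RBM
beta, gamma = 400.0, 40.0  # training and validating the GAN
iota = 0.01                # sampling one RBM from the trained GAN

def cost_naive(num_rbms):
    """Cost of obtaining num_rbms RBMs by pre-training each one from scratch."""
    return alpha * num_rbms

def cost_proposed(K, L):
    """Equation (19): pre-train K RBMs, train/validate the GAN, sample L RBMs."""
    return alpha * K + beta + gamma + iota * L

# Once the GAN is available, each additional RBM costs iota instead of alpha,
# so enlarging the pool via sampling is far cheaper than via training.
```

For instance, with these costs, obtaining $512$ pre-trained plus $4096$ sampled RBMs is cheaper than pre-training $1024$ RBMs from scratch.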
## 6 Conclusion
This work addressed the generation of artificial Restricted Boltzmann Machines
through Generative Adversarial Networks. The overall idea was to alleviate the
burden of pre-training vast amounts of RBMs by learning a GAN-based latent
space representing the RBMs’ weight matrices, where new matrices could be
sampled from a random noise input. Additionally, such an approach was employed
to construct heterogeneous ensembles and further improve standard- and
artificial-based RBMs’ recognition ability.
The experimental results were evaluated over a classic literature dataset, known as MNIST, and conducted over two different tasks: image reconstruction and image classification. In the former task, RBMs were trained to minimize the reconstruction error, while the latter used RBMs as feature extractors followed by an additional Support Vector Machine to perform the classification.
Considering the image reconstruction task, original RBMs reconstructed images better in all the evaluated configurations, yet in some cases the artificial RBMs achieved almost equal reconstructions. A thorough analysis of the number of hidden units ($n$) and the number of sampled RBMs ($K$) showed that artificial RBMs reconstruct better when $n$ is as high as possible and $K$ is neither too high nor too low. Such behavior carries over from the GAN training procedure, where fewer hidden units result in smaller-sized data that cannot be accurately sampled with a high number of noise dimensions. Additionally, a large $K$ brought instability to the learning procedure, as it increased the diversity and caused the loss function to spike (creating several valleys and ridges) during the training epochs.
Regarding the image classification task, ensembles of original RBMs classified the employed dataset better. Nevertheless, it is essential to remark that a high number of hidden units decreased the classification gap between original and artificial RBMs, e.g., $95.39\%\times 95.21\%$ with $n=128$ and $K=32$, $95.37\%\times 95.26\%$ with $n=128$ and $K=64$, and $95.37\%\times 95.42\%$ with $n=128$ and $K=128$. On the other hand, one can perceive that a high number of artificial RBMs in the ensembles decreased performance, while fewer artificial RBMs attained the second-best performance. Such behavior is explained by the base models $R$ and $S^{\ast}$, where $R$ was better in $12$ out of $15$ experiments, thus having the greater impact on the ensembles’ performance.
For future work, we aim to extend the proposed approach to more challenging image datasets, such as CIFAR and Caltech101, and to employ it on text datasets, such as IMDB and SST. Moreover, we aim to increase the number of hidden units and sampled RBMs to verify whether the improvement trend continues. Finally, we intend to use more complex GAN-based architectures, such as DCGAN and WGAN, to overcome training instability.
## Conflicts of Interest
The authors declare that there are no conflicts of interest.
## Acknowledgments
The authors are grateful to São Paulo Research Foundation (FAPESP) grant
#2019/02205-5.
# Comparing different subgradient methods for solving convex optimization
problems with functional constraints
Thi Lan Dinh111Torus-Actions SAS; 3 avenue Didier Daurat, F-31400 Toulouse;
France. Ngoc Hoang Anh Mai222CNRS; LAAS; 7 avenue du Colonel Roche, F-31400
Toulouse; France.
###### Abstract
We provide a dual subgradient method and a primal-dual subgradient method for
standard convex optimization problems with complexity
$\mathcal{O}(\varepsilon^{-2})$ and $\mathcal{O}(\varepsilon^{-2r})$, for all
$r>1$, respectively. They are based on recent Metel-Takeda’s work in
[arXiv:2009.12769, 2020, pp. 1-12] and Boyd’s method in [Lecture notes of
EE364b, Stanford University, Spring 2013-14, pp. 1-39]. The efficiency of our
methods is numerically illustrated in a comparison to the others.
Keywords: convex optimization, nonsmooth optimization, subgradient method
###### Contents
1. 1 Introduction
2. 2 Preliminaries
3. 3 Subgradient method (SG)
4. 4 Dual subgradient method (DSG)
5. 5 Primal-dual subgradient method (PDS)
6. 6 Numerical experiments
1. 6.1 Randomly generated test problems
2. 6.2 Linearly inequality constrained minimax problems
3. 6.3 Least absolute deviations (LAD)
4. 6.4 Support vector machine (SVM)
7. 7 Conclusion
8. 8 Appendix
1. 8.1 Dual subgradient method
2. 8.2 Primal-dual subgradient method
## 1 Introduction
The results presented in this paper are not very new and mainly based on
Metel-Takeda’s and Boyd’s ideas.
Given a convex function $f_{0}:{\mathbb{R}}^{n}\to{\mathbb{R}}$ and a closed
convex domain $C\subset{\mathbb{R}}^{n}$, consider the convex optimization
problem (COP):
$\inf_{x\in C}\ f_{0}(x)\,,$ (1.1)
where $f_{0}$ might be non-differentiable. It is well known that the optimal
value and an optimal solution of problem (1.1) can be approximated as closely
as desired by using subgradient methods. In some cases we can use interior
point or Newton methods to solve problem (1.1). Although developed very early
by Shor in the Soviet Union (see [8]), subgradient methods are still highly competitive with state-of-the-art methods such as interior point and Newton methods. They can be applied to instances of problem (1.1) with a very large number of variables $n$ because of their modest memory requirements.
We might classify problem (1.1) into three groups as follows:
1. (a)
Unconstrained case: $C={\mathbb{R}}^{n}$. It can be efficiently solved by
various methods, in particular with Nesterov’s method of weighted dual
averages in [5]. His method has optimal convergence rate
$\mathcal{O}(\varepsilon^{-2})$.
2. (b)
Inexpensive projection on $C$. In this case, the projected subgradient method introduced by Alber, Iusem and Solodov in [1] can be applied to extremely large problems for which interior point or Newton methods are intractable.
3. (c)
COP with functional constraints (also known as standard form in the
literature):
$C=\\{x\in{\mathbb{R}}^{n}:f_{i}(x)\leq 0\,,\,i=1,\dots,m\,,\,Ax=b\\}\,,$
(1.2)
where $f_{i}$ is a convex function on ${\mathbb{R}}^{n}$, for $i=1,\dots,m$,
$A$ is a real matrix and $b$ is a real vector. Metel and Takeda recently suggested the method of weighted dual averages [4] for solving this type of problem, based on Nesterov’s idea. They also obtained the optimal convergence rate $\mathcal{O}(\varepsilon^{-2})$ for their method. Earlier, Boyd proposed a primal-dual subgradient method in his lecture notes [2] without a complexity analysis. We show that Boyd’s method is suboptimal.
#### Contribution.
Our contribution with respect to the previous group (c) is twofold:
1. 1.
In Section 4, we provide an alternative dual subgradient method with several
dual variables based on Metel-Takeda’s method [4]. Our dual subgradient method
has the same complexity $\mathcal{O}(\varepsilon^{-2})$ as Metel-Takeda’s. Moreover, it is much more efficient than Metel-Takeda’s in practice, as shown in
Section 6.
2. 2.
In Section 5, we provide a primal-dual subgradient method based on the
nonsmooth penalty method and Boyd’s method in [2, Section 8]. Our primal-dual
subgradient method converges with complexity $\mathcal{O}(\varepsilon^{-2r})$
for all $r>1$.
We recall Nesterov’s subgradient method [6, Section 3.2.4] in Section 3 for comparison purposes. Our experiments in Section 6 show that Nesterov’s subgradient method and Metel-Takeda’s dual subgradient method are not suitable in practice, although they have the optimal convergence rate $\mathcal{O}(\varepsilon^{-2})$. In contrast, our dual subgradient method and primal-dual subgradient method are very efficient in practice, even for COPs with a large number of variables ($n\geq 1000$).
## 2 Preliminaries
Let $f:{\mathbb{R}}^{n}\to{\mathbb{R}}$ be a real-valued function on the Euclidean space ${\mathbb{R}}^{n}$. The function $f$ is called convex if
$f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)\,,\quad\forall\
(x,y)\in{\mathbb{R}}^{n}\times{\mathbb{R}}^{n}\,,\,\forall\ t\in[0,1]\,.$
(2.3)
If $f:{\mathbb{R}}^{n}\to{\mathbb{R}}$ is a convex function, the
subdifferential at a point $\overline{x}$ in ${\mathbb{R}}^{n}$, denoted by
$\partial f(\overline{x})$, is defined by the set
$\partial f(\overline{x}):=\\{g\in{\mathbb{R}}^{n}\,:\,f(x)-f(\overline{x})\geq g^{\top}(x-\overline{x})\,,\ \forall x\in{\mathbb{R}}^{n}\\}\,.$ (2.4)
A vector $g$ in $\partial f(\overline{x})$ is called a subgradient at
$\overline{x}$. The subdifferential is always a nonempty convex compact set.
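Definition (2.4) can be checked numerically on sample points. The sketch below (ours, for illustration) is only a necessary-condition test, since a finite sample cannot certify the inequality for all $x$:

```python
def is_subgradient(g, xbar, f, sample_points, tol=1e-12):
    """Check the defining inequality (2.4), f(x) - f(xbar) >= g * (x - xbar),
    on a finite set of 1-D sample points (necessary, not sufficient)."""
    return all(f(x) - f(xbar) >= g * (x - xbar) - tol for x in sample_points)

# For f(x) = |x|, the subdifferential at 0 is the interval [-1, 1]:
pts = [-3.0, -1.0, -0.25, 0.5, 2.0]
inside = is_subgradient(0.3, 0.0, abs, pts)    # 0.3 lies in [-1, 1]
outside = is_subgradient(1.5, 0.0, abs, pts)   # 1.5 lies outside [-1, 1]
```

At the kink of $|x|$ the subdifferential is a whole interval, illustrating that $\partial f(\overline{x})$ is generally a set rather than a single gradient.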
From now on, we focus on solving the constrained convex optimization problem
with functional constraints:
$p^{\star}=\inf_{x\in{\mathbb{R}}^{n}}\\{f_{0}(x)\ :\ f_{i}(x)\leq
0\,,\,i=1,\dots,m\,,\quad Ax=b\\}\,,$ (2.5)
where $f_{i}:{\mathbb{R}}^{n}\to{\mathbb{R}}$ is a convex function, for
$i=0,1,\dots,m$, $A\in{\mathbb{R}}^{l\times n}$ and $b\in{\mathbb{R}}^{l}$.
Let $x^{\star}$ be an optimal solution of problem (2.5), i.e.,
$f_{i}(x^{\star})\leq 0$, for $i=1,\dots,m$, $Ax^{\star}=b$ and
$f_{0}(x^{\star})=p^{\star}$.
## 3 Subgradient method (SG)
This section recalls the simple subgradient method for problem (2.5). The method was introduced by Nesterov in [6, Section 3.2.4] for convex problems with only inequality constraints. We extend Nesterov’s method to convex problems with both inequality and equality constraints in a trivial way.
It is fairly easy to see that problem (2.5) is equivalent to the problem:
$p^{\star}=\inf_{x\in{\mathbb{R}}^{n}}\\{f_{0}(x)\,:\,\overline{f}(x)\leq
0\\}\,,$ (3.6)
where
$\overline{f}(x)=\max\ \\{\
f_{1}(x),\dots,f_{m}(x),|a_{1}^{\top}x-b_{1}|,\dots,|a_{l}^{\top}x-b_{l}|\
\\}\,.$ (3.7)
Here $a_{j}^{\top}$ is the $j$th row of $A$, i.e.,
$A=\begin{bmatrix}a_{1}^{\top}\\\ \dots\\\ a_{l}^{\top}\end{bmatrix}$. If
$\overline{x}$ is an optimal solution of problem (3.6), $\overline{x}$ is also
an optimal solution of problem (2.5).
Let $g_{0}(x)\in\partial f_{0}(x)$,
$\overline{g}(x)\in\partial\overline{f}(x)$ and consider the following method
to solve problem (3.6):
SG Initialization: $\varepsilon>0$, $x^{(0)}\in{\mathbb{R}}^{n}$.
For $k=0,1,\dots,K$ do:
1. If $\overline{f}(x^{(k)})\leq\varepsilon$, then $x^{(k+1)}:=x^{(k)}-\frac{\varepsilon}{\|g_{0}(x^{(k)})\|_{2}^{2}}g_{0}(x^{(k)})$;
2. Else, $x^{(k+1)}:=x^{(k)}-\frac{\overline{f}(x^{(k)})}{\|\overline{g}(x^{(k)})\|_{2}^{2}}\overline{g}(x^{(k)})$. (3.8)
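Method (3.8) can be sketched on an illustrative toy instance of ours (not from the text): minimize $f_{0}(x)=x_{1}+x_{2}$ over the unit disk, so that $\overline{f}(x)=x_{1}^{2}+x_{2}^{2}-1$ (no equality constraints) and the optimum is $p^{\star}=-\sqrt{2}$ at $x^{\star}=(-1/\sqrt{2},-1/\sqrt{2})$.

```python
import numpy as np

# Toy instance of (3.6): minimize f0(x) = x1 + x2 subject to
# fbar(x) = x1^2 + x2^2 - 1 <= 0, with optimum p* = -sqrt(2).
f0 = lambda x: x[0] + x[1]
g0 = lambda x: np.array([1.0, 1.0])     # (sub)gradient of f0
fbar = lambda x: float(x @ x) - 1.0
gbar = lambda x: 2.0 * x                # (sub)gradient of fbar

eps, K = 1e-3, 20000
x = np.zeros(2)
best = np.inf                           # running p^(K)_eps from (3.9)
for _ in range(K):
    if fbar(x) <= eps:                  # near-feasible: objective step
        best = min(best, f0(x))
        g = g0(x)
        x = x - (eps / (g @ g)) * g
    else:                               # infeasible: Polyak-type constraint step
        g = gbar(x)
        x = x - (fbar(x) / (g @ g)) * g
```

On this symmetric instance the iterates march along the diagonal to the boundary and then oscillate around $x^{\star}$, so `best` approaches $p^{\star}+\varepsilon$ as Theorem 3.2 predicts.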
In order to guarantee the convergence for method (3.8), let the following
assumption hold:
###### Assumption 3.1.
The norms of the subgradients of $f_{0},f_{1},\dots,f_{m}$ and the values of
$f_{1},\dots,f_{m}$ are bounded on any compact subsets of ${\mathbb{R}}^{n}$.
For every $\varepsilon>0$ and $K\in{\mathbb{N}}$, let $\mathcal{I}_{\varepsilon}(K)$ be the set of iterations with constraint violation at most $\varepsilon$, and let $p^{(K)}_{\varepsilon}$ be the best objective value over those iterations:
$\mathcal{I}_{\varepsilon}(K):=\\{k\in\\{0,1,\dots,K\\}:\overline{f}(x^{(k)})\leq\varepsilon\\}\qquad\text{and}\quad
p^{(K)}_{\varepsilon}:=\min_{k\in\mathcal{I}_{\varepsilon}(K)}f_{0}(x^{(k)})\,.$
(3.9)
The optimal worst-case performance guarantee of method (3.8) is stated in the
following theorem:
###### Theorem 3.2.
If the number of iterations $K$ in method (3.8) is big enough,
$K\geq\mathcal{O}(\varepsilon^{-2})$, then
$\mathcal{I}_{\varepsilon}(K)\neq\emptyset\quad\text{ and }\quad
p^{(K)}_{\varepsilon}\leq p^{\star}+\varepsilon\,.$ (3.10)
The proof of Theorem 3.2 is similar to the proof of [6, Theorem 3.2.3].
## 4 Dual subgradient method (DSG)
In this section, we provide an alternative dual subgradient method for problem
(2.5). Originally developed by Metel and Takeda in [4], the dual subgradient method involves a single dual variable, since they consider the Lagrangian function of problem (3.6) with the single constraint $\overline{f}(x)\leq 0$. Our dual subgradient method generalizes Metel-Takeda’s result and is based on the Lagrangian function of problem (2.5) with several dual variables.
Consider the following Lagrangian function of problem (2.5):
$L(x,\lambda,\nu)=f_{0}(x)+\lambda^{\top}F(x)+\nu^{\top}(Ax-b)\,,$ (4.11)
for $x\in{\mathbb{R}}^{n}$, $\lambda\in{\mathbb{R}}_{+}^{m}$ and
$\nu\in{\mathbb{R}}^{l}$. Here
$F(x):=(F_{1}(x),\dots,F_{m}(x))\text{ with
}F_{i}(x)=\max\\{f_{i}(x),0\\}\,,\,i=1,\dots,m\,.$ (4.12)
The dual of problem (2.5) reads as:
$d^{\star}=\sup_{\lambda\in{\mathbb{R}}_{+}^{m},\ \nu\in{\mathbb{R}}^{l}}\
\inf_{x\in{\mathbb{R}}^{n}}L(x,\lambda,\nu)\,.$ (4.13)
Let the following assumption hold:
###### Assumption 4.1.
Strong duality holds for primal-dual (2.5)-(4.13), i.e., $p^{\star}=d^{\star}$
and (4.13) has an optimal solution $(\lambda^{\star},\nu^{\star})$.
It implies that $(x^{\star},\lambda^{\star},\nu^{\star})$ is a saddle-point of
the Lagrangian function $L$, i.e.,
$L(x^{\star},\lambda,\nu)\leq
L(x^{\star},\lambda^{\star},\nu^{\star})=p^{\star}\leq
L(x,\lambda^{\star},\nu^{\star})\,,$ (4.14)
for all $x\in{\mathbb{R}}^{n}$, $\lambda\in{\mathbb{R}}_{+}^{m}$ and
$\nu\in{\mathbb{R}}^{l}$.
Given $C$ as a subset of ${\mathbb{R}}^{n}$, denote by
$\operatorname{conv}(C)$ the convex hull generated by $C$, i.e.,
$\operatorname{conv}(C)=\\{\sum_{i=1}^{r}{t_{i}a_{i}}:a_{i}\in
C\,,\,t_{i}\in[0,1]\,,\,\sum_{i=1}^{r}t_{i}=1\\}$.
With $z=(x,\lambda,\nu)$, we use the following notation:
* •
$g_{0}(x)\in\partial f_{0}(x)$;
* •
$g_{i}(x)\in\partial F_{i}(x)=\begin{cases}\partial f_{i}(x)&\text{ if
}f_{i}(x)>0\,,\\\ \text{conv}(\partial f_{i}(x)\cup\\{0\\})&\text{ if
}f_{i}(x)=0\,,\\\ \\{0\\}&\text{ otherwise};\end{cases}$
* •
$G_{x}(z)\in\partial_{x}L(z)=\partial
f_{0}(x)+\sum_{i=1}^{m}\lambda_{i}\partial F_{i}(x)+A^{\top}\nu$;
* •
$G_{\lambda}(z)=\nabla_{\lambda}L(z)=F(x)$ and
$G_{\nu}(z)=\nabla_{\nu}L(z)=Ax-b$;
* •
$G(z)=(G_{x}(z),-G_{\lambda}(z),-G_{\nu}(z))$.
Letting $z^{(k)}:=(x^{(k)},\lambda^{(k)},\nu^{(k)})$, we consider the
following method:
DSG Initialization: $z^{(0)}=(x^{(0)},\lambda^{(0)},\nu^{(0)})\in{\mathbb{R}}^{n}\times{\mathbb{R}}^{m}_{+}\times{\mathbb{R}}^{l}$; $s^{(0)}=0$; $\hat{x}^{(0)}=0$; $\delta_{0}=0$; $\beta_{0}=1$.
For $k=0,1,\dots,K$ do:
1. $s^{(k+1)}=s^{(k)}+\frac{G(z^{(k)})}{\|G(z^{(k)})\|_{2}}$;
2. $z^{(k+1)}=z^{(0)}-\frac{s^{(k+1)}}{\beta_{k}}$;
3. $\beta_{k+1}=\beta_{k}+\frac{1}{\beta_{k}}$;
4. $\delta_{k+1}=\delta_{k}+\frac{1}{\|G(z^{(k)})\|_{2}}$;
5. $\hat{x}^{(k+1)}=\hat{x}^{(k)}+\frac{x^{(k)}}{\|G(z^{(k)})\|_{2}}$;
6. $\overline{x}^{(k+1)}=\delta_{k+1}^{-1}\hat{x}^{(k+1)}$. (4.15)
Since $G_{\lambda}(z^{(k)})=F(x^{(k)})\geq 0$, it holds that
$\lambda^{(k+1)}_{i}\geq\lambda^{(k)}_{i}\geq\lambda^{(0)}_{i}\geq 0$, for
$i=1,\dots,m$.
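Method (4.15) can be sketched on the same illustrative toy instance used for SG (one inequality constraint, no equality constraints, the instance being ours): minimize $x_{1}+x_{2}$ over the unit disk, so $z=(x,\lambda)$ and $G(z)=(G_{x}(z),-G_{\lambda}(z))$.

```python
import numpy as np

# Toy instance: min x1 + x2 s.t. f1(x) = x1^2 + x2^2 - 1 <= 0.
F1 = lambda x: max(float(x @ x) - 1.0, 0.0)            # F_1 = max(f_1, 0)

def G(x, lam):
    g1 = 2.0 * x if float(x @ x) - 1.0 > 0.0 else np.zeros(2)  # element of dF_1
    gx = np.array([1.0, 1.0]) + lam * g1                        # element of d_x L
    return np.concatenate([gx, [-F1(x)]])

z0 = np.zeros(3)                        # (x^(0), lambda^(0)) = (0, 0, 0)
z, s, beta = z0.copy(), np.zeros(3), 1.0
delta, xhat = 0.0, np.zeros(2)
for _ in range(20000):
    x, lam = z[:2], z[2]
    g = G(x, lam)
    nrm = np.linalg.norm(g)
    s += g / nrm                        # step 1
    delta += 1.0 / nrm                  # step 4
    xhat += x / nrm                     # step 5
    z = z0 - s / beta                   # step 2
    beta += 1.0 / beta                  # step 3
xbar = xhat / delta                     # step 6: weighted primal average
```

Since the $\lambda$-component of $s$ only accumulates $-F_{1}(x^{(k)})/\|G(z^{(k)})\|_{2}\leq 0$, the iterates keep $\lambda\geq 0$ automatically, matching the remark above.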
Under Assumption 3.1, we obtain a convergence rate of order $\mathcal{O}(K^{-1/2})$, as stated in the following theorem:
###### Theorem 4.2.
Let $\varepsilon>0$. If the number of iterations $K$ in method (4.15) is big
enough, $K\geq\mathcal{O}(\varepsilon^{-2})$, then
$\|F(\overline{x}^{(K+1)})\|_{2}+\|A\overline{x}^{(K+1)}-b\|_{2}\leq\varepsilon\text{
and }\quad f_{0}(\overline{x}^{(K+1)})\leq p^{\star}+\varepsilon\,.$ (4.16)
The proof of Theorem 4.2, based on Lemma 8.3, proceeds exactly as the proofs in [4].
## 5 Primal-dual subgradient method (PDS)
In this section, we extend the primal-dual subgradient method introduced in
Boyd’s lecture notes [2, Section 8]. The idea of our method is to replace the
augmentation of the augmented Lagrangian considered in [2, Section 8] by a
more general penalty term. We also prove the convergence guarantee and provide
convergence rate order for this method.
Let $s\in[1,2]$ and $\rho>0$ be fixed. With $F(x)$ defined as in (4.12),
consider an equivalent problem of problem (2.5):
$\begin{array}[]{rl}p^{\star}=\inf\limits_{x\in{\mathbb{R}}^{n}}&f_{0}(x)+\rho(\|F(x)\|_{2}^{s}+\|Ax-b\|_{2}^{s})\\\
\text{s.t.}&F_{i}(x)\leq 0\,,\,i=1,\dots,m\,,\,Ax=b\,.\end{array}$ (5.17)
Since $x^{\star}$ is an optimal solution of problem (2.5), $x^{\star}$ is also
an optimal solution of problem (5.17).
###### Remark 5.1.
Instead of using the augmentation with the square of $l_{2}$-norm
$\|\cdot\|_{2}^{2}$ according to the definition of augmented problem in [2,
Section 8] we use the additional penalty term with $l_{2}$-norm to the $s$th
power $\|\cdot\|_{2}^{s}$ in problem (5.17). This strategy has been discussed
in [7, pp. 513] for the case of $s=1$.
The Lagrangian of problem (5.17) has the form:
$L_{\rho}(x,\lambda,\nu)=f_{0}(x)+\lambda^{\top}F(x)+\nu^{\top}(Ax-b)+\rho(\|F(x)\|_{2}^{s}+\|Ax-b\|_{2}^{s})\,,$
(5.18)
for $x\in{\mathbb{R}}^{n}$, $\lambda\in{\mathbb{R}}_{+}^{m}$ and
$\nu\in{\mathbb{R}}^{l}$.
Let us define a set-valued mapping
$T:{\mathbb{R}}^{n}\times{\mathbb{R}}^{m}_{+}\times{\mathbb{R}}^{l}\to
2^{{\mathbb{R}}^{n}\times{\mathbb{R}}^{m}_{+}\times{\mathbb{R}}^{l}}$ by
$\begin{array}[]{rl}T_{\rho}(x,\lambda,\nu)=&\partial_{x}L_{\rho}(x,\lambda,\nu)\times(-\partial_{\lambda}L_{\rho}(x,\lambda,\nu))\times(-\partial_{\nu}L_{\rho}(x,\lambda,\nu))\\\
=&\partial_{x}L_{\rho}(x,\lambda,\nu)\times\\{-F(x)\\}\times\\{b-Ax\\}\,,\end{array}$
(5.19)
where $\partial_{x}L_{\rho}(x,\lambda,\nu)=\partial
f_{0}(x)+\sum_{i=1}^{m}\lambda_{i}\partial
F_{i}(x)+A^{\top}\nu+\rho\partial\|F_{i}(\cdot)\|_{2}^{s}(x)+\rho\partial\|A\cdot-b\|_{2}^{s}(x)$.
The explicit formulas of the subgradients in the subdifferentials
$\partial\|F_{i}(\cdot)\|_{2}^{s}(x)$ and $\partial\|A\cdot-b\|_{2}^{s}(x)$
are provided in Appendix 8.2.
We do the simple iteration:
$z^{(k+1)}=z^{(k)}-\alpha_{k}T^{(k)}\,,$ (5.20)
where $z^{(k)}=(x^{(k)},\lambda^{(k)},\nu^{(k)})$ is the $k$th iterate of the
primal and dual variables, $T^{(k)}$ is any element of $T_{\rho}(z^{(k)})$,
$\alpha_{k}>0$ is the $k$th step size.
By expanding (5.20) out, we can also write the method as:
PDS Initialization: $x^{(0)}\in{\mathbb{R}}^{n}$,
$\lambda^{(0)}\in{\mathbb{R}}^{m}_{+}$, $\nu^{(0)}\in{\mathbb{R}}^{l}$ and
$(\alpha_{k})_{k\in{\mathbb{N}}}\subset{\mathbb{R}}_{+}$.
For $k=0,1,\dots,K$ do:
1. $\varrho^{(k)}=\begin{cases}s\|F^{(k)}\|_{2}^{s-2}F^{(k)}&\text{ if }F^{(k)}\neq 0\,,\\\ 0&\text{ otherwise;}\end{cases}$
2. $\varsigma^{(k)}=\begin{cases}s\|Ax^{(k)}-b\|_{2}^{s-2}(Ax^{(k)}-b)&\text{ if }Ax^{(k)}\neq b\,,\\\ 0&\text{ otherwise;}\end{cases}$
3. $x^{(k+1)}=x^{(k)}-\alpha_{k}[g_{0}^{(k)}+\sum_{i=1}^{m}(\lambda_{i}^{(k)}+\rho\varrho^{(k)}_{i})g_{i}^{(k)}+A^{\top}(\nu^{(k)}+\rho\varsigma^{(k)})]$;
4. $\lambda^{(k+1)}=\lambda^{(k)}+\alpha_{k}F^{(k)}\text{ and }\nu^{(k+1)}=\nu^{(k)}+\alpha_{k}(Ax^{(k)}-b)$. (5.21)
Here we note:
* •
$g_{0}^{(k)}\in\partial f_{0}(x^{(k)})$, $g_{i}^{(k)}\in\partial
F_{i}(x^{(k)})$ and $F_{i}^{(k)}:=F_{i}(x^{(k)})$, $i=1,\dots,m$.
* •
$T^{(k)}=\begin{bmatrix}g_{0}^{(k)}+\sum_{i=1}^{m}(\lambda_{i}^{(k)}+\rho\varrho^{(k)}_{i})g_{i}^{(k)}+A^{\top}(\nu^{(k)}+\rho\varsigma^{(k)})\\\
-F^{(k)}\\\ b-Ax^{(k)}\end{bmatrix}$.
Note that $\lambda^{(k)}\geq 0$ since $F_{i}^{(k)}\geq 0$, $i=1,\dots,m$.
The case of $s=2$ is the standard primal-dual subgradient method in [2,
Section 8].
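A minimal NumPy sketch of method (5.21), combined with the step-size rule (5.24) of Theorem 5.2, is given below. The callables `g0`, `gF`, `F` and all names are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def pds(g0, gF, F, A, b, x0, lam0, nu0, s, rho, K, delta=0.5):
    """Minimal sketch of the primal-dual subgradient method (5.21)
    with the step-size rule (5.24).

    g0(x): a subgradient of f_0 at x; F(x): the vector (F_1(x),...,F_m(x));
    gF(x): an (m, n) matrix whose i-th row is a subgradient of F_i at x.
    Names are illustrative, not the paper's released code.
    """
    x = np.array(x0, dtype=float)
    lam = np.array(lam0, dtype=float)
    nu = np.array(nu0, dtype=float)
    for k in range(K + 1):
        Fk, rk = F(x), A @ x - b
        nF, nr = np.linalg.norm(Fk), np.linalg.norm(rk)
        # steps 1-2: subgradients of the penalties ||F(.)||_2^s and ||A.-b||_2^s
        varrho = s * nF ** (s - 2) * Fk if nF > 0 else np.zeros_like(Fk)
        varsig = s * nr ** (s - 2) * rk if nr > 0 else np.zeros_like(rk)
        gx = g0(x) + gF(x).T @ (lam + rho * varrho) + A.T @ (nu + rho * varsig)
        Tk = np.concatenate([gx, -Fk, -rk])   # an element of T_rho(z^{(k)})
        alpha = (k + 1) ** (-1 + delta / 2) / np.linalg.norm(Tk)  # rule (5.24)
        x = x - alpha * gx                    # step 3
        lam = lam + alpha * Fk                # step 4 (F >= 0 keeps lam >= 0)
        nu = nu + alpha * rk
    return x, lam, nu
```

When $m=0$ (no inequality constraints), `F` and `gF` return empty blocks and the $\lambda$-related terms vanish automatically.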
Let the Assumptions 3.1 and 4.1 hold. For every $\varepsilon>0$ and
$K\in{\mathbb{N}}$, let
$\mathcal{I}_{\varepsilon}(K):=\\{k\in\\{0,1,\dots,K\\}\,:\,\|F(x^{(k)})\|_{2}+\|Ax^{(k)}-b\|_{2}\leq\varepsilon\\}$
(5.22)
and
$p^{(K)}_{\varepsilon}:=\min_{k\in\mathcal{I}_{\varepsilon}(K)}f_{0}(x^{(k)})\,.$
(5.23)
The following theorem states the convergence of method (5.21):
###### Theorem 5.2.
Let $\delta\in(0,1)$, and let the step size be given by the rule
$\alpha_{k}=\frac{\gamma_{k}}{\|T^{(k)}\|_{2}}\quad\text{ with
}\quad\gamma_{k}=(k+1)^{-1+\delta/2}\,.$ (5.24)
Let $\varepsilon>0$. If the number of iterations $K$ in method (5.21) is large
enough, namely $K\geq\mathcal{O}(\varepsilon^{-2s/\delta})$, then
$\mathcal{I}_{\varepsilon}(K)\neq\emptyset\quad\text{ and }\quad
p^{(K)}_{\varepsilon}\leq p^{\star}+\varepsilon\,.$ (5.25)
The proof of Theorem 5.2 is based on Lemma 8.4. This proof is similar in
spirit to the convergence proof of the standard primal-dual subgradient method
in [2, Section 8]. A mathematical mistake in the proof in [2, Section 8] is
corrected.
## 6 Numerical experiments
In this section we report the results of numerical experiments obtained by
solving convex optimization problems (COP) with functional constraints. The
experiments are performed in Python 3.9.1. The implementation of methods
(3.8), (4.15) and (5.21) is available online at
https://github.com/dinhthilan/COP.
We use a desktop computer with an Intel(R) Pentium(R) CPU N4200 @ 1.10GHz and
4.00 GB of RAM. The notation for the numerical results is given in Table 1.
Table 1: The notation.
$n$ | the number of variables of the COP
---|---
$m$ | the number of inequality constraints of the COP
$l$ | the number of equality constraints of the COP
SG | the COP solved by the subgradient method (3.8)
SingleDSG | the COP solved by the dual subgradient method with single dual variable [4, Algorithm 1]
MultiDSG | the COP solved by the dual subgradient method with multi-dual-variables (4.15)
PDS | the COP solved by the primal-dual subgradient method (5.21)
$s$ | the power of $l_{2}$-norm in the additional penalty term for PDS
$\rho$ | the penalty coefficients for PDS
val | the approximate optimal value of the COP
val⋆ | the exact optimal value of the COP
gap | the relative optimality gap w.r.t. the exact value val⋆, i.e., $\text{gap}=|\text{val}-\text{val}^{\star}|/{(1+\max\\{|\text{val}^{\star}|,|\text{val}|\\})}$
infeas | the infeasibility of the approximate optimal solution
time | the running time in seconds
$\varepsilon$ | the desired accuracy of the approximate solution
$K$ | the number of iterations
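The gap entry of Table 1 is straightforward to compute; the function name below is illustrative:

```python
def optimality_gap(val, val_star):
    """Relative optimality gap w.r.t. the exact value, as defined in Table 1:
    |val - val*| / (1 + max{|val*|, |val|})."""
    return abs(val - val_star) / (1 + max(abs(val_star), abs(val)))
```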
The value and the infeasibility of SG, SingleDSG, MultiDSG and PDS at the
$k$th iteration are computed as in Table 2.
Table 2: The value and the infeasibility at the $k$th iteration.
Method | Complexity | val | infeas
---|---|---|---
SG | $\mathcal{O}(\varepsilon^{-2})$ | $f_{0}(x^{(k)})$ | $\max\\{\overline{f}(x^{(k)}),0\\}$
SingleDSG | $\mathcal{O}(\varepsilon^{-2})$ | $f_{0}(\overline{x}^{(k)})$ | $\max\\{\overline{f}(\overline{x}^{(k)}),0\\}$
MultiDSG | $\mathcal{O}(\varepsilon^{-2})$ | $f_{0}(\overline{x}^{(k)})$ | $\|F(\overline{x}^{(k)})\|_{2}+\|A\overline{x}^{(k)}-b\|_{2}$
PDS | $\mathcal{O}(\varepsilon^{-2r})\,,\,\forall\ r>1$ | $f_{0}(x^{(k)})$ | $\|F({x}^{(k)})\|_{2}+\|A{x}^{(k)}-b\|_{2}$
### 6.1 Randomly generated test problems
We construct randomly generated test problems in the form:
$\min_{x\in{\mathbb{R}}^{n}}\\{c^{\top}x\ :\ x\in\Omega\,,\,Ax=b\\}\,,$ (6.26)
where $c\in{\mathbb{R}}^{n}$, $A\in{\mathbb{R}}^{l\times n}$,
$b\in{\mathbb{R}}^{l}$ and $\Omega$ is a convex domain such that:
* •
Every entry of $c$ and $A$ is taken in $[-1,1]$ with uniform distribution.
* •
The domain $\Omega$ is chosen in the following two cases:
* –
Case 1: $\Omega:=\\{x\in{\mathbb{R}}^{n}:\|x\|_{1}\leq 1\\}$.
* –
Case 2: $\Omega:=\\{x\in[-1,1]^{n}:\max\\{-\log(x_{1}+1),x_{2}\\}\leq 1\\}$.
* •
With a random point $\overline{x}$ in $\Omega$, we take $b:=A\overline{x}$.
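This construction can be sketched as follows. The paper does not specify how the random point $\overline{x}\in\Omega$ is drawn, so the sampling below is an illustrative choice, as are all names.

```python
import numpy as np

def random_cop(n, case, seed=0):
    """Sketch of the random test problems (6.26); `case` selects Omega
    as in Case 1 / Case 2. Names and sampling choices are illustrative."""
    rng = np.random.default_rng(seed)
    l = int(np.ceil(n / 4))                       # setting from Table 3
    c = rng.uniform(-1, 1, n)                     # entries of c in [-1, 1]
    A = rng.uniform(-1, 1, (l, n))                # entries of A in [-1, 1]
    if case == 1:
        # Omega = {x : ||x||_1 <= 1}: scaled simplex point with random signs
        w = rng.dirichlet(np.ones(n)) * rng.uniform(0, 1)
        x_bar = w * rng.choice([-1.0, 1.0], n)
    else:
        # Omega = {x in [-1,1]^n : max{-log(x_1+1), x_2} <= 1}
        x_bar = rng.uniform(-1, 1, n)
        x_bar[0] = rng.uniform(np.exp(-1) - 1, 1)  # ensures -log(x_1+1) <= 1
    b = A @ x_bar                                  # b := A x_bar (feasibility)
    return c, A, b, x_bar
```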
Let us apply SG, SingleDSG, MultiDSG and PDS to solve problem (6.26). The size
of the test problems and the setting of our software are given in Table 3.
Table 3: Randomly generated test problems.
* •
Setting: $l=\lceil n/4\rceil$, $\varepsilon=10^{-3}$, $\rho=1/s$.
Id | Case | $\delta$ | $K$ | Size
---|---|---|---|---
$n$ | $m$ | $l$
1 | 1 | 0.5 | $10^{4}$ | 10 | 1 | 2
2 | 1 | 0.5 | $10^{5}$ | 100 | 1 | 15
3 | 1 | 0.99 | $10^{6}$ | 1000 | 1 | 143
4 | 2 | 0.5 | $10^{4}$ | 10 | 21 | 2
5 | 2 | 0.5 | $10^{5}$ | $100$ | 201 | 15
6 | 2 | 0.99 | $10^{5}$ | $1000$ | 2001 | 143
The numerical results are displayed in Table 4.
Table 4: Numerical results for randomly generated test problems.
Id | SG | SingleDSG | MultiDSG
---|---|---|---
val | infeas | time | val | infeas | time | val | infeas | time
1 | -0.6363 | 0.2584 | 1 | -0.9609 | 0.0806 | 1 | -0.8385 | 0.0140 | 1
2 | 0.2711 | 0.2389 | 19 | -0.9672 | 0.1832 | 35 | -0.9003 | 0.0177 | 19
3 | -0.0032 | 0.0849 | 2356 | -0.9938 | 0.0748 | 4704 | -0.8649 | 0.0153 | 1380
4 | -2.2255 | 2.3955 | 1 | -5.7702 | 2.0534 | 2 | -4.1255 | 0.0056 | 5
5 | 7.6548 | 7.7407 | 432 | -141.43 | 7.0433 | 84 | -37.712 | 0.0097 | 575
6 | -1.4176 | 38.806 | 1079 | -316.62 | 35.684 | 2382 | -401.07 | 0.0389 | 8059
Id | PDS with $s=1$ | PDS with $s=1.5$ | PDS with $s=2$
val | infeas | time | val | infeas | time | val | infeas | time
1 | -0.8213 | 0.0007 | 1 | -0.8226 | 0.0034 | 1 | -0.8209 | 0.0003 | 1
2 | -0.8830 | 0.0008 | 20 | -0.9004 | 0.0957 | 20 | -0.8913 | 0.0262 | 20
3 | -0.8550 | 0.0241 | 2434 | -0.8428 | 0.0043 | 2673 | -0.8430 | 0.0046 | 2424
4 | -4.0708 | 0.0025 | 2 | -4.1017 | 0.0029 | 2 | -4.1037 | 0.0041 | 2
5 | -37.022 | 0.0009 | 404 | -37.453 | 0.0214 | 416 | -37.506 | 0.0451 | 413
6 | -398.71 | 0.0435 | 4802 | -399.63 | 0.0475 | 4889 | -399.76 | 0.0323 | 4602
The convergences of SG, SingleDSG, MultiDSG and PDS with $s\in\\{1,1.5,2\\}$
are illustrated in Figures 1, 2 and 3 for Case 1 and Figures 4, 5 and 6 for
Case 2.
Figure 1: Illustration for Case 1 with $n=10$ (Id 1).
Figure 2: Illustration for Case 1 with $n=100$ (Id 2).
Figure 3: Illustration for Case 1 with $n=1000$ (Id 3).
Figure 4: Illustration for Case 2 with $n=10$ (Id 4).
Figure 5: Illustration for Case 2 with $n=100$ (Id 5).
Figure 6: Illustration for Case 2 with $n=1000$ (Id 6).
These figures show that:
* •
In Case 1, the values returned by SingleDSG, MultiDSG and PDS with
$s\in\\{1,1.5,2\\}$ converge to the same limit as the number of iterations
increases, except for Id 1, where the value returned by SingleDSG converges to
a limit different from that of the other methods. We point out that the value
of PDS with $s=2$ has the fastest convergence rate. In this case the
infeasibilities of all methods converge to zero, and the infeasibility of SG
converges fastest.
* •
In Case 2, the values returned by MultiDSG and PDS with $s\in\\{1,1.5,2\\}$
converge to the same limit, while the values provided by SG and SingleDSG
converge to different limits as the number of iterations increases. Moreover,
the value of MultiDSG has the fastest convergence rate. We observe a similar
behavior for the infeasibilities of these methods.
Regarding the running times in Table 4, SingleDSG is the slowest and MultiDSG
the fastest method in Case 1. In Case 2, however, MultiDSG is the most
time-consuming.
### 6.2 Minimax problems with linear inequality constraints
We solve several test problems from [3, Table 4.1], including MAD8, Wong2 and
Wong3, by using SG, SingleDSG, MultiDSG and PDS.
The size of the test problems and the setting of our software are given in
Table 5.
Table 5: Minimax test problems with linear inequality constraints.
* •
Setting: $l=0$, $\varepsilon=10^{-3}$, $\delta=0.5$, $\rho=1/s$, $K=10^{5}$.
Id | Problem | val⋆ | Size
---|---|---|---
$n$ | $m$
7 | MAD8 | 0.5069 | 20 | 10
8 | Wong2 | 24.3062 | 10 | 3
9 | Wong3 | -37.9732 | 20 | 4
The numerical results are displayed in Table 6.
Table 6: Numerical results for minimax problems with linear inequality constraints.
Id | SG | SingleDSG | MultiDSG
---|---|---|---
val | infeas | val | infeas | val | infeas
7 | 0.5065 | 0.0006 | 0.4629 | 0.0325 | 0.5037 | 0.0038
8 | 653.00 | 0.0000 | 22.352 | 0.6159 | 23.964 | 0.1017
9 | 198.03 | 0.0000 | -39.638 | 0.9788 | -38.697 | 0.2851
Id | PDS with $s=1$ | PDS with $s=1.5$ | PDS with $s=2$
val | infeas | val | infeas | val | infeas
7 | 0.5073 | 0.0000 | 0.5070 | 0.0000 | 0.5071 | 0.0000
8 | 24.305 | 0.0013 | 24.127 | 0.0975 | 24.003 | 0.1360
9 | -37.969 | 0.0009 | -38.436 | 0.6214 | -38.628 | 0.8841
Id | SG | SingleDSG | MultiDSG
gap | time | gap | time | gap | time
7 | 0.03% | 204 | 2.92% | 203 | 0.21% | 221
8 | 96.1% | 48 | 7.72% | 54 | 1.35% | 57
9 | 118% | 76 | 4.10% | 86 | 1.82% | 91
Id | PDS with $s=1$ | PDS with $s=1.5$ | PDS with $s=2$
gap | time | gap | time | gap | time
7 | 0.01% | 113 | 0.01% | 116 | 0.01% | 113
8 | 0.00% | 29 | 0.71% | 21 | 1.20% | 27
9 | 0.01% | 45 | 1.18% | 48 | 1.65% | 43
The convergences of SG, SingleDSG, MultiDSG and PDS with $s\in\\{1,1.5,2\\}$
are plotted in Figures 7, 8 and 9 for the MAD8, Wong2 and Wong3 problems,
respectively.
Figure 7: Illustration for MAD8 problem (Id 7).
Figure 8: Illustration for Wong2 problem (Id 8).
Figure 9: Illustration for Wong3 problem (Id 9).
As one can see from Table 6, the value returned by PDS with $s=1$ has the
smallest gap w.r.t. the exact value while requiring less total time than SG,
SingleDSG and MultiDSG. Moreover, the infeasibility of PDS with $s=1$
converges to zero faster than those of SG, SingleDSG and MultiDSG as the
number of iterations increases.
Figures 7, 8 and 9 show that the values returned by PDS with
$s\in\\{1,1.5,2\\}$ converge at the same rate in this case, but the
infeasibility of PDS with $s=1$ converges to zero fastest.
### 6.3 Least absolute deviations (LAD)
Consider the following problem:
$\min\limits_{x\in{\mathbb{R}}^{\overline{n}}}\|Dx-w\|_{1}\,,$ (6.27)
where $D\in{\mathbb{R}}^{\overline{m}\times\overline{n}}$ and
$w\in{\mathbb{R}}^{\overline{m}}$ have entries taken in $[-1,1]$ with uniform
distribution.
By adding slack variables $y=Dx-w$, (6.27) is equivalent to the COP:
$\begin{array}[]{rl}\min\limits_{x,y}&\|y\|_{1}\\\
\text{s.t.}&y=Dx-w\,.\end{array}$ (6.28)
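The slack-variable reformulation (6.28) can be sketched with the stacked variable $v=(x,y)$; the function and variable names below are illustrative, not the released code.

```python
import numpy as np

def lad_as_cop(D, w):
    """Sketch: rewrite the LAD problem (6.27) in the equality-constrained
    form (6.28) with stacked variable v = (x, y)."""
    mbar, nbar = D.shape
    def f0(v):
        # objective ||y||_1 acts on the slack block of v = (x, y)
        return np.abs(v[nbar:]).sum()
    def g0(v):
        # a subgradient of f0: zero on the x block, sign(y) on the y block
        g = np.zeros_like(v)
        g[nbar:] = np.sign(v[nbar:])
        return g
    # equality constraint y = Dx - w, i.e. [D  -I] v = w
    A = np.hstack([D, -np.eye(mbar)])
    b = w
    return f0, g0, A, b
```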
Let us solve (6.28) by using SG, SingleDSG, MultiDSG and PDS. The size of the
test problems and the setting of our software are given in Table 7.
Table 7: Least absolute deviations test problems.
* •
Setting: $n=3\overline{n}$, $m=0$, $l=\overline{m}=2\overline{n}$, $K=10^{5}$,
$\varepsilon=10^{-3}$, $\rho=1/s$, $\delta=0.99$.
Id | Size
---|---
$\overline{n}$ | $n$ | $l$
10 | 10 | 30 | 20
11 | 100 | 300 | 200
12 | 1000 | 3000 | 2000
The numerical results are displayed in Table 8.
Table 8: Numerical results for least absolute deviations.
Id | SG | SingleDSG | MultiDSG
---|---|---|---
val | infeas | time | val | infeas | time | val | infeas | time
10 | 0.0004 | 0.0010 | 23 | 0.0030 | 0.0017 | 39 | 0.0017 | 0.0013 | 21
11 | 51.917 | 0.0014 | 252 | 0.1084 | 0.0641 | 507 | 0.0945 | 0.0013 | 108
12 | 810.46 | 0.5941 | 3682 | 87.849 | 8.4173 | 7578 | 11.535 | 0.0013 | 1858
Id | PDS with $s=1$ | PDS with $s=1.5$ | PDS with $s=2$
val | infeas | time | val | infeas | time | val | infeas | time
10 | 0.0065 | 0.0033 | 14 | 0.0062 | 0.0017 | 15 | 0.0074 | 0.0019 | 14
11 | 0.0159 | 0.0151 | 132 | 0.0222 | 0.0021 | 116 | 0.0237 | 0.0020 | 120
12 | 0.0434 | 43.281 | 1787 | 0.0749 | 0.0044 | 1520 | 0.0739 | 0.0020 | 1443
The convergences of SG, SingleDSG, MultiDSG and PDS with $s\in\\{1,1.5,2\\}$
are illustrated in Figures 10, 11 and 12.
Figure 10: Illustration for LAD with $\overline{n}=10$ (Id 10).
Figure 11: Illustration for LAD with $\overline{n}=100$ (Id 11).
Figure 12: Illustration for LAD with $\overline{n}=1000$ (Id 12).
The value returned by PDS with $s=2$ has the fastest convergence rate while
requiring less total time than SG, SingleDSG and MultiDSG in all cases.
Moreover, PDS with $s=2$ returns the smallest infeasibility at the final
iteration.
### 6.4 Support vector machine (SVM)
Consider the following problem:
$\min_{w,u}\left[{{\frac{1}{N}}\sum_{i=1}^{N}\max\\{0,1-y_{i}(z_{i}^{\top}w-u)\\}+\frac{1}{2}\|w\|^{2}_{2}}\right]\,,$
(6.29)
where $z_{i}\in{\mathbb{R}}^{\overline{n}}$ and $y_{i}\in\\{-1,1\\}$ are taken
as follows:
* •
For $i=1,\dots,\lfloor N/2\rfloor$, we choose $y_{i}=1$ and take $z_{i}$ in
$[0,1]^{\overline{n}}$ with uniform distribution.
* •
For $i=\lfloor N/2\rfloor+1,\dots,N$, we choose $y_{i}=-1$ and take $z_{i}$ in
$[-1,0]^{\overline{n}}$ with uniform distribution.
Letting $\tau_{i}=z_{i}^{\top}w-u$, problem (6.29) is equivalent to the
problem:
$\begin{array}[]{rl}\min\limits_{w,u,\tau}&{{\frac{1}{N}}\sum_{i=1}^{N}\max\\{0,1-y_{i}\tau_{i}\\}+\frac{1}{2}\|w\|^{2}_{2}}\\\
\text{s.t.}&\tau=Zw-ue\,,\end{array}$ (6.30)
where $Z=\begin{bmatrix}z_{1}^{\top}\\\ \dots\\\ z_{N}^{\top}\end{bmatrix}$
and $e=\begin{bmatrix}1\\\ \dots\\\ 1\end{bmatrix}$.
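The data generation and the objective (6.29) can be sketched as follows; the function names are illustrative, not the released code.

```python
import numpy as np

def svm_data(nbar, N, seed=0):
    """Sketch of the data generation for (6.29): the first floor(N/2) points
    lie in [0,1]^nbar with label +1, the rest in [-1,0]^nbar with label -1."""
    rng = np.random.default_rng(seed)
    half = N // 2
    Z = np.vstack([rng.uniform(0, 1, (half, nbar)),
                   rng.uniform(-1, 0, (N - half, nbar))])
    y = np.concatenate([np.ones(half), -np.ones(N - half)])
    return Z, y

def svm_objective(Z, y, w, u):
    """Hinge loss plus ridge term of (6.29), with tau = Zw - ue as in (6.30)."""
    margins = 1 - y * (Z @ w - u)
    return np.maximum(margins, 0).mean() + 0.5 * w @ w
```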
Let us solve (6.30) by using SG, SingleDSG, MultiDSG and PDS. The size of test
problems and the setting of our software are given in Table 9.
Table 9: Support vector machine test problems.
* •
Setting: $N=200\times{\overline{n}}$, $m=0$, $\varepsilon=10^{-3}$,
$\delta=0.5$, $\rho=1/s$, $\delta=0.99$, $K=10^{4}$.
Id | Size
---|---
$\overline{n}$ | $n$ | $l$
13 | 2 | 403 | 400
14 | 3 | 604 | 600
15 | 5 | 1006 | 1000
The numerical results are displayed in Table 10.
Table 10: Numerical results for support vector machine.
Id | SG | SingleDSG | MultiDSG
---|---|---|---
val | infeas | time | val | infeas | time | val | infeas | time
13 | 0.9764 | 0.0014 | 254 | 0.9823 | 0.2576 | 316 | 0.9956 | 0.0610 | 268
14 | 0.9821 | 0.0012 | 516 | 0.9793 | 0.3068 | 610 | 0.9930 | 1.3704 | 515
15 | 0.9978 | 0.0014 | 778 | 0.9892 | 0.3616 | 991 | 0.9979 | 6.0966 | 846
Id | PDS with $s=1$ | PDS with $s=1.5$ | PDS with $s=2$
val | infeas | time | val | infeas | time | val | infeas | time
13 | 0.9574 | 0.0998 | 140 | 0.9492 | 0.0957 | 139 | 0.9019 | 0.0955 | 144
14 | 0.9678 | 3.2489 | 229 | 0.9612 | 0.1171 | 255 | 0.9436 | 0.1170 | 246
15 | 0.9945 | 12.655 | 430 | 0.9823 | 0.1954 | 452 | 0.9831 | 0.1736 | 422
Figures 13, 14 and 15 show the progress of SG, SingleDSG, MultiDSG and PDS
with $s\in\\{1,1.5,2\\}$.
Figure 13: Illustration for SVM with $\overline{n}=2$ (Id 13).
Figure 14: Illustration for SVM with $\overline{n}=3$ (Id 14).
Figure 15: Illustration for SVM with $\overline{n}=5$ (Id 15).
## 7 Conclusion
We have tested the performance of different subgradient methods for solving
COPs with functional constraints. We emphasize that PDS with $s\in[1,2]$ and
MultiDSG are typically the best choices for users, as they provide better
approximations in less computational time. For COP (2.5) with more inequality
constraints than equality ones ($m>l$), users should probably choose PDS with
$s$ close to $1$ (see MAD8, Wong2 and Wong3). When the number of equality
constraints of COP (2.5) is much larger than the number of inequality ones
($l\gg m$), PDS with $s$ close to $2$ would be a good choice (see Case 1 and
LAD). For COP (2.5) with a large number of both inequality and equality
constraints ($m\gg 1$ and $l\gg 1$), the best method would probably be
MultiDSG (see Case 2). The reason why SG and SingleDSG are much less efficient
than the other methods is that they use the alternative single constraint
$\overline{f}(x)\leq 0$ with $\overline{f}$ defined as in (3.7) and hence lose
information about the dual problem.
As a direction for further applications, we would like to use MultiDSG and PDS
for solving large-scale semidefinite programs (SDP) of the form:
$p^{\star}\,:=\,\inf\limits_{y\in{\mathbb{R}}^{n}}\left\\{f^{\top}y\
\left|\begin{array}[]{rl}&C_{i}+\mathcal{B}_{i}y\preceq
0\,,\,i=1,\dots,m\,,\\\ &Ay=b\end{array}\right.\right\\}\,,$ (7.31)
where $f\in{\mathbb{R}}^{n}$, $C_{i}\in{\mathbb{S}}^{s_{i}}$, and
$\mathcal{B}_{i}:{\mathbb{R}}^{n}\to{\mathbb{S}}^{s_{i}}$ is a linear operator
defined by $\mathcal{B}_{i}y=\sum_{j=1}^{n}y_{j}B_{i}^{(j)}$ with
$B_{i}^{(j)}\in{\mathbb{S}}^{s_{i}}$, $A\in{\mathbb{R}}^{l\times n}$, and
$b\in{\mathbb{R}}^{l}$. Here we denote by ${\mathbb{S}}^{s}$ the set of real
symmetric matrices of size $s$. Obviously, SDP (7.31) is equivalent to the COP
with functional constraints:
$p^{\star}\,:=\,\inf\limits_{y\in{\mathbb{R}}^{n}}\left\\{f^{\top}y\
\left|\begin{array}[]{rl}&\lambda_{\max}(C_{i}+\mathcal{B}_{i}y)\leq
0\,,\,i=1,\dots,m\,,\\\ &Ay=b\end{array}\right.\right\\}\,,$ (7.32)
where $\lambda_{\max}(A)$ stands for the largest eigenvalue of a given
symmetric matrix $A$.
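The scalarized constraints in (7.32) can be sketched as follows, using the fact that `numpy.linalg.eigvalsh` returns eigenvalues in ascending order; all names are illustrative.

```python
import numpy as np

def sdp_ineq_constraints(C, B, y):
    """Sketch of the scalarized SDP constraints in (7.32): the i-th constraint
    C_i + B_i(y) <= 0 becomes lambda_max(C_i + sum_j y_j B_i^(j)) <= 0.
    C[i] is a symmetric (s_i, s_i) matrix; B[i][j] is its j-th coefficient
    matrix B_i^(j). Names are illustrative."""
    vals = []
    for Ci, Bi in zip(C, B):
        M = Ci + sum(yj * Bij for yj, Bij in zip(y, Bi))
        vals.append(np.linalg.eigvalsh(M)[-1])  # largest eigenvalue
    return np.array(vals)
```

Since $\lambda_{\max}$ of a symmetric matrix is convex, each scalarized constraint is a convex functional constraint, as required for COP (2.5).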
Finally, another interesting application is to solve large-scale nonsmooth
optimization problems arising from machine learning by using MultiDSG and PDS.
## 8 Appendix
### 8.1 Dual subgradient method
In each iteration $z^{(k+1)}=z^{(0)}-\frac{s^{(k+1)}}{\beta_{k}}$ is the
maximizer of
$\displaystyle
U^{s}_{\beta}(z):=-s^{\top}(z-z^{(0)})-\frac{\beta}{2}\|z-z^{(0)}\|^{2}_{2}$
(8.33)
for $s=s^{(k+1)}$ and $\beta=\beta_{k}$, with
$\displaystyle
U_{\beta_{k}}^{s^{(k+1)}}(z^{(k+1)})=\frac{\|s^{(k+1)}\|^{2}_{2}}{2\beta_{k}}.$
(8.34)
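For completeness, (8.34) can be verified from the first-order optimality condition of the concave quadratic (8.33):

```latex
\nabla U^{s}_{\beta}(z) = -s - \beta\,(z - z^{(0)}) = 0
\;\Longrightarrow\;
z = z^{(0)} - \frac{s}{\beta},
\qquad
U^{s}_{\beta}\Big(z^{(0)} - \frac{s}{\beta}\Big)
= \frac{\|s\|^{2}_{2}}{\beta} - \frac{\beta}{2}\,\frac{\|s\|^{2}_{2}}{\beta^{2}}
= \frac{\|s\|^{2}_{2}}{2\beta}\,.
```

With $s=s^{(k+1)}$ and $\beta=\beta_{k}$, the maximizer is exactly the iterate $z^{(k+1)}$ in step 2 of (4.15).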
In addition, $U^{s}_{\beta}(z)$ is strongly concave in $z$ with parameter
$\beta$,
$\displaystyle U_{\beta}^{s}(z)\leq U_{\beta}^{s}(z^{\prime})+\nabla
U_{\beta}^{s}(z^{\prime})^{\top}(z-z^{\prime})-\frac{\beta}{2}\|z-z^{\prime}\|^{2}_{2}.$
(8.35)
We define $\overline{G}^{(k)}:=\frac{G(z^{(k)})}{\|G(z^{(k)})\|_{2}}$. Note
that $\|\overline{G}^{(k)}\|_{2}=1$.
###### Lemma 8.1.
In method (4.15), it holds that
$\|z^{(k)}-z^{\star}\|_{2}\leq\|z^{(0)}-z^{\star}\|_{2}+1$.
###### Proof.
From (8.34),
$\displaystyle U_{\beta_{k}}^{s^{(k+1)}}(z^{(k+1)})$
$\displaystyle=\frac{\beta_{k-1}}{\beta_{k}}\frac{\|s^{(k+1)}\|^{2}_{2}}{2\beta_{k-1}}=\frac{\beta_{k-1}}{\beta_{k}}\frac{\|s^{(k)}+\overline{G}^{(k)}\|^{2}_{2}}{2\beta_{k-1}}$
$\displaystyle=\frac{\beta_{k-1}}{\beta_{k}}\left(\frac{\|s^{(k)}\|^{2}_{2}}{2\beta_{k-1}}+\frac{1}{\beta_{k-1}}s^{(k)\top}\overline{G}^{(k)}+\frac{\|\overline{G}^{(k)}\|^{2}_{2}}{2\beta_{k-1}}\right)$
$\displaystyle=\frac{\beta_{k-1}}{\beta_{k}}\left(U_{\beta_{k-1}}^{s^{(k)}}(z^{(k)})+\frac{1}{\beta_{k-1}}s^{(k)\top}\overline{G}^{(k)}+\frac{1}{2\beta_{k-1}}\right)$
$\displaystyle=\frac{\beta_{k-1}}{\beta_{k}}\left(U_{\beta_{k-1}}^{s^{(k)}}(z^{(k)})+(z^{(0)}-z^{(k)})^{\top}\overline{G}^{(k)}+\frac{1}{2\beta_{k-1}}\right).$
Rearranging,
$\displaystyle(z^{(k)}-z^{(0)})^{\top}\overline{G}^{(k)}$
$\displaystyle=U_{\beta_{k-1}}^{s^{(k)}}(z^{(k)})-\frac{\beta_{k}}{\beta_{k-1}}U_{\beta_{k}}^{s^{(k+1)}}(z^{(k+1)})+\frac{1}{2\beta_{k-1}}$
$\displaystyle\leq
U_{\beta_{k-1}}^{s^{(k)}}(z^{(k)})-U_{\beta_{k}}^{s^{(k+1)}}(z^{(k+1)})+\frac{1}{2\beta_{k-1}},$
since $\beta_{k}$ is increasing. Telescoping these inequalities for
$k=1,\dots,K$, and using the fact that $\|s^{(1)}\|_{2}=1$,
$\displaystyle\sum_{k=1}^{K}(z^{(k)}-z^{(0)})^{\top}\overline{G}^{(k)}$
$\displaystyle\leq
U_{\beta_{0}}^{s^{(1)}}(z^{(1)})-U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})+\sum_{k=1}^{K}\frac{1}{2\beta_{k-1}}$
(8.36)
$\displaystyle=\frac{1}{2\beta_{0}}-U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})+\sum_{k=0}^{K-1}\frac{1}{2\beta_{k}}$
$\displaystyle=-U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})+\frac{1}{2}\left(\sum_{k=0}^{K-1}\frac{1}{\beta_{k}}+\beta_{0}\right).$
Expanding the recursion $\beta_{k}=\frac{1}{\beta_{k-1}}+\beta_{k-1}$,
$\displaystyle\sum_{k=1}^{K}(z^{(k)}-z^{(0)})^{\top}\overline{G}^{(k)}$
$\displaystyle\leq-U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})+\frac{\beta_{K}}{2}.$
(8.37)
Given the convexity of $L(x,\lambda,\nu)$ in $x$ and linearity in
$(\lambda,\nu)$,
$\displaystyle L(x^{\star},\lambda^{(k)},\nu^{(k)})\geq
L(x^{(k)},\lambda^{(k)},\nu^{(k)})$
$\displaystyle+G_{x}(x^{(k)},\lambda^{(k)},\nu^{(k)})^{\top}(x^{\star}-x^{(k)})$
(8.38) $\displaystyle
L(x^{(k)},\lambda^{\star},\nu^{\star})=L(x^{(k)},\lambda^{(k)},\nu^{(k)})$
$\displaystyle+G_{\lambda}(x^{(k)},\lambda^{(k)},\nu^{(k)})^{\top}(\lambda^{\star}-\lambda^{(k)})$
$\displaystyle+G_{\nu}(x^{(k)},\lambda^{(k)},\nu^{(k)})^{\top}(\nu^{\star}-\nu^{(k)}).$
(8.39)
Subtracting (8.38) from (8.39) and using (4.13),
$\displaystyle
0\leq(G_{x}(z^{(k)}),-G_{\lambda}(z^{(k)}),-G_{\nu}(z^{(k)}))^{\top}(z^{(k)}-z^{\star}).$
(8.40)
It follows that
$\displaystyle 0\leq$
$\displaystyle\sum_{k=1}^{K}(z^{(k)}-z^{\star})^{\top}\overline{G}^{(k)}$
$\displaystyle=$
$\displaystyle\sum_{k=1}^{K}(z^{(0)}-z^{\star})^{\top}\overline{G}^{(k)}+\sum_{k=1}^{K}(z^{(k)}-z^{(0)})^{\top}\overline{G}^{(k)}$
$\displaystyle\leq$
$\displaystyle\sum_{k=1}^{K}(z^{(0)}-z^{\star})^{\top}\overline{G}^{(k)}-U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})+\frac{\beta_{K}}{2}$
$\displaystyle=$
$\displaystyle(z^{(0)}-z^{\star})^{\top}s^{(K+1)}-U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})+\frac{\beta_{K}}{2}\,,$
(8.41)
where the second inequality uses (8.37), and the second equality follows since
$s^{(k+1)}=s^{(k)}+\overline{G}^{(k)}$. Considering inequality (8.35) with
$s=s^{(K+1)}$, $\beta=\beta_{K}$, $z=z^{\star}$, and $z^{\prime}=z^{(K+1)}$,
$\displaystyle U_{\beta_{K}}^{s^{(K+1)}}(z^{\star})$ $\displaystyle\leq
U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})+\nabla
U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})^{\top}(z^{\star}-z^{(K+1)})-\frac{\beta_{K}}{2}\|z^{\star}-z^{(K+1)}\|^{2}_{2}$
$\displaystyle=U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})-\frac{\beta_{K}}{2}\|z^{\star}-z^{(K+1)}\|^{2}_{2},$
given that $z^{(K+1)}$ is the maximizer of $U_{\beta_{K}}^{s^{(K+1)}}(z)$.
Applying this inequality in (8.41),
$\displaystyle 0\leq$
$\displaystyle(z^{(0)}-z^{\star})^{\top}s^{(K+1)}-U_{\beta_{K}}^{s^{(K+1)}}(z^{\star})-\frac{\beta_{K}}{2}\|z^{\star}-z^{(K+1)}\|^{2}_{2}+\frac{\beta_{K}}{2}$
$\displaystyle=$
$\displaystyle(z^{(0)}-z^{\star})^{\top}s^{(K+1)}+s^{(K+1)\top}(z^{\star}-z^{(0)})+\frac{\beta_{K}}{2}\|z^{\star}-z^{(0)}\|^{2}_{2}-\frac{\beta_{K}}{2}\|z^{\star}-z^{(K+1)}\|^{2}_{2}+\frac{\beta_{K}}{2}$
$\displaystyle=$
$\displaystyle\frac{\beta_{K}}{2}\|z^{\star}-z^{(0)}\|^{2}_{2}-\frac{\beta_{K}}{2}\|z^{\star}-z^{(K+1)}\|^{2}_{2}+\frac{\beta_{K}}{2},$
where the first equality uses the definition of
$U_{\beta_{K}}^{s^{(K+1)}}(z^{\star})$ (8.33). Rearranging,
$\displaystyle\|z^{\star}-z^{(K+1)}\|^{2}_{2}\leq$
$\displaystyle\|z^{\star}-z^{(0)}\|^{2}_{2}+1.$ (8.42)
Since the derivation via (8.36) requires $K\geq 1$, inequality (8.42) covers
the iterates $z^{(k)}$ with $k\geq 2$. Considering now the case $k=1$,
$\displaystyle\|z^{(1)}-z^{\star}\|^{2}_{2}=$
$\displaystyle\|z^{(0)}-z^{\star}-\overline{G}^{(0)}\|^{2}_{2}=\|z^{(0)}-z^{\star}\|^{2}_{2}-2(z^{(0)}-z^{\star})^{\top}\overline{G}^{(0)}+1$
$\displaystyle\leq$ $\displaystyle\|z^{(0)}-z^{\star}\|^{2}_{2}+1\,,$
where the last line uses (8.40). Now for all $k$,
$\displaystyle(\|z^{\star}-z^{(0)}\|_{2}+1)^{2}=$
$\displaystyle\|z^{\star}-z^{(0)}\|^{2}_{2}+2\|z^{\star}-z^{(0)}\|_{2}+1\geq\|z^{\star}-z^{(k)}\|^{2}_{2}+2\|z^{\star}-z^{(0)}\|_{2}\,,$
so that $\|z^{\star}-z^{(0)}\|_{2}+1\geq\|z^{\star}-z^{(k)}\|_{2}$. ∎
In order to prove the convergence result for method (4.15), we need to bound
the norm of the subgradients $G(z^{(k)})$.
###### Lemma 8.2.
There exists a constant $C>0$ such that $\|G(z^{(k)})\|_{2}\leq C$, for all
$k\in{\mathbb{N}}$.
###### Proof.
Recall that $g_{0}(x)\in\partial f_{0}(x)$ and $g_{i}(x)\in\partial F_{i}(x)$,
$\displaystyle\|G(z^{(k)})\|_{2}=$
$\displaystyle\|(G_{x}(z^{(k)}),-G_{\lambda}(z^{(k)}),-G_{\nu}(z^{(k)}))\|_{2}$
$\displaystyle=$
$\displaystyle\left\|\left(g_{0}(x^{(k)})+\sum_{i=1}^{m}\lambda_{i}^{(k)}g_{i}(x^{(k)})+A^{\top}\nu^{(k)},-F(x^{(k)}),-Ax^{(k)}+b\right)\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\|g_{0}(x^{(k)})\|_{2}+\sum_{i=1}^{m}\lambda_{i}^{(k)}\|g_{i}(x^{(k)})\|_{2}+\|A^{\top}\|\|\nu^{(k)}\|_{2}+\|F(x^{(k)})\|_{2}+\|Ax^{(k)}-b\|_{2}.$
Here we note
$\|A^{\top}\|:=\max_{u\in{\mathbb{R}}^{l}}\\{\|A^{\top}u\|_{2}/\|u\|_{2}\\}$.
The iterates of method (4.15) are bounded in a convex compact region,
$z^{(k)}\in D:=\\{z:\|z-z^{\star}\|_{2}\leq\|z^{(0)}-z^{\star}\|_{2}+1\\}$.
This implies that $x^{(k)}\in
D_{x}:=\\{x:\|x-x^{\star}\|_{2}\leq\|z^{(0)}-z^{\star}\|_{2}+1\\}$,
$\lambda^{(k)}\in
D_{\lambda}:=\\{\lambda:\|\lambda-\lambda^{\star}\|_{2}\leq\|z^{(0)}-z^{\star}\|_{2}+1\\}$
and $\nu^{(k)}\in
D_{\nu}:=\\{\nu:\|\nu-\nu^{\star}\|_{2}\leq\|z^{(0)}-z^{\star}\|_{2}+1\\}$.
The desired result follows due to Assumption 3.1. ∎
###### Lemma 8.3.
There exists a constant $C>0$ such that for any $K\in{\mathbb{N}}$ iterations
in method (4.15),
$\displaystyle
f_{0}(\overline{x}^{(K+1)})-p^{\star}\leq\frac{C(\|z^{(0)}-z^{\star}\|^{2}_{2}+1)}{2(K+1)}\left(\frac{1}{1+\sqrt{3}}+\sqrt{2K+1}\right)$
and
$\displaystyle\|F(\overline{x}^{(K+1)})\|_{2}+\|A\overline{x}^{(K+1)}-b\|_{2}\leq\frac{C(4(\|z^{(0)}-z^{\star}\|_{2}+1)^{2}+1)}{2(K+1)}\left(\frac{1}{1+\sqrt{3}}+\sqrt{2K+1}\right).$
###### Proof.
Using equation (8.37), and recalling that $z^{(K+1)}$ maximizes
$U_{\beta_{K}}^{s^{(K+1)}}(z)$ defined by (8.33),
$\displaystyle\frac{\beta_{K}}{2}\geq$
$\displaystyle\sum_{k=1}^{K}(z^{(k)}-z^{(0)})^{\top}\overline{G}^{(k)}+U_{\beta_{K}}^{s^{(K+1)}}(z^{(K+1)})$
$\displaystyle=$
$\displaystyle\sum_{k=1}^{K}(z^{(k)}-z^{(0)})^{\top}\overline{G}^{(k)}+\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{-\left(\sum_{k=0}^{K}\overline{G}^{(k)}\right)^{\top}(z-z^{(0)})-\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\}$
$\displaystyle=$
$\displaystyle\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{\sum_{k=0}^{K}-\overline{G}^{(k)\top}(z-z^{(k)})-\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\}.$
(8.43)
Analogously to $\overline{x}^{(K+1)}$, let
$\overline{z}^{(K+1)}:=\delta_{K+1}^{-1}\sum_{k=0}^{K}\frac{z^{(k)}}{\|G(z^{(k)})\|_{2}}$,
$\overline{\lambda}^{(K+1)}:=\delta_{K+1}^{-1}\sum_{k=0}^{K}\frac{\lambda^{(k)}}{\|G(z^{(k)})\|_{2}}$
and
$\overline{\nu}^{(K+1)}:=\delta_{K+1}^{-1}\sum_{k=0}^{K}\frac{\nu^{(k)}}{\|G(z^{(k)})\|_{2}}$.
Multiplying both sides of (8.43) by $\delta_{K+1}^{-1}$,
$\displaystyle\delta_{K+1}^{-1}\frac{\beta_{K}}{2}$ $\displaystyle\geq$
$\displaystyle\delta_{K+1}^{-1}\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{\sum_{k=0}^{K}-\overline{G}^{(k)\top}(z-z^{(k)})-\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\}$
$\displaystyle=$
$\displaystyle\delta_{K+1}^{-1}\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{\sum_{k=0}^{K}-\frac{G_{x}(z^{(k)})^{\top}}{\|G(z^{(k)})\|_{2}}(x-x^{(k)})+\frac{G_{\lambda}(z^{(k)})^{\top}}{\|G(z^{(k)})\|_{2}}(\lambda-\lambda^{(k)})\right.$
$\displaystyle+\left.\frac{G_{\nu}(z^{(k)})^{\top}}{\|G(z^{(k)})\|_{2}}(\nu-\nu^{(k)})-\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\}$
$\displaystyle\geq$
$\displaystyle\delta_{K+1}^{-1}\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{\sum_{k=0}^{K}\frac{L(x^{(k)},\lambda^{(k)},\nu^{(k)})-L(x,\lambda^{(k)},\nu^{(k)})}{\|G(z^{(k)})\|_{2}}\right.$
$\displaystyle\left.+\frac{L(x^{(k)},\lambda,\nu)-L(x^{(k)},\lambda^{(k)},\nu^{(k)})}{\|G(z^{(k)})\|_{2}}-\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\}$
$\displaystyle=$
$\displaystyle\delta_{K+1}^{-1}\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{\sum_{k=0}^{K}\frac{L(x^{(k)},\lambda,\nu)-L(x,\lambda^{(k)},\nu^{(k)})}{\|G(z^{(k)})\|_{2}}-\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\}$
$\displaystyle=$
$\displaystyle\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{\delta_{K+1}^{-1}\sum_{k=0}^{K}\frac{L(x^{(k)},\lambda,\nu)-L(x,\lambda^{(k)},\nu^{(k)})}{\|G(z^{(k)})\|_{2}}-\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\}$
$\displaystyle\geq$
$\displaystyle\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{L(\overline{x}^{(K+1)},\lambda,\nu)-L(x,\overline{\lambda}^{(K+1)},\overline{\nu}^{(K+1)})-\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\}$
$\displaystyle=$
$\displaystyle\max\limits_{z\in{\mathbb{R}}^{n+m+l}}\left\\{f_{0}(\overline{x}^{(K+1)})+F(\overline{x}^{(K+1)})^{\top}\lambda+(A\overline{x}^{(K+1)}-b)^{\top}\nu\right.$
$\displaystyle\left.-f_{0}(x)-F(x)^{\top}\overline{\lambda}^{(K+1)}-(Ax-b)^{\top}\overline{\nu}^{(K+1)}-\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\|z-z^{(0)}\|^{2}_{2}\right\\},$
(8.44)
where the third inequality uses Jensen’s inequality. Since the right-hand side
takes a maximum over $z$, the inequality holds in particular for any specific
choice of $z$. We consider two cases, the first being $x=\overline{x}^{(K+1)}$,
$\lambda=\overline{\lambda}^{(K+1)}+\frac{F(\overline{x}^{(K+1)})}{2\|F(\overline{x}^{(K+1)})\|_{2}}$
and
$\nu=\overline{\nu}^{(K+1)}+\frac{A\overline{x}^{(K+1)}-b}{2\|A\overline{x}^{(K+1)}-b\|_{2}}$.
From (8.44),
$\displaystyle\delta_{K+1}^{-1}\frac{\beta_{K}}{2}$ $\displaystyle\geq$
$\displaystyle
f_{0}(\overline{x}^{(K+1)})+\left(\overline{\lambda}^{(K+1)}+\frac{F(\overline{x}^{(K+1)})}{2\|F(\overline{x}^{(K+1)})\|_{2}}\right)^{\top}F(\overline{x}^{(K+1)})$
$\displaystyle+\left(\overline{\nu}^{(K+1)}+\frac{A\overline{x}^{(K+1)}-b}{2\|A\overline{x}^{(K+1)}-b\|_{2}}\right)^{\top}(A\overline{x}^{(K+1)}-b)$
$\displaystyle-
f_{0}(\overline{x}^{(K+1)})-F(\overline{x}^{(K+1)})^{\top}\overline{\lambda}^{(K+1)}-\overline{\nu}^{(K+1)\top}(A\overline{x}^{(K+1)}-b)$
$\displaystyle-\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\left\|\left(\overline{x}^{(K+1)},\overline{\lambda}^{(K+1)}+\frac{F(\overline{x}^{(K+1)})}{2\|F(\overline{x}^{(K+1)})\|_{2}},\overline{\nu}^{(K+1)}+\frac{A\overline{x}^{(K+1)}-b}{2\|A\overline{x}^{(K+1)}-b\|_{2}}\right)-z^{(0)}\right\|^{2}_{2}$
$\displaystyle=$
$\displaystyle\frac{1}{2}(\|F(\overline{x}^{(K+1)})\|_{2}+\|A\overline{x}^{(K+1)}-b\|_{2})$
$\displaystyle-\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\left\|\left(\overline{x}^{(K+1)},\overline{\lambda}^{(K+1)}+\frac{F(\overline{x}^{(K+1)})}{2\|F(\overline{x}^{(K+1)})\|_{2}},\overline{\nu}^{(K+1)}+\frac{A\overline{x}^{(K+1)}-b}{2\|A\overline{x}^{(K+1)}-b\|_{2}}\right)-z^{(0)}\right\|^{2}_{2}.$
(8.45)
Further,
$\displaystyle\left\|\left(\overline{x}^{(K+1)},\overline{\lambda}^{(K+1)}+\frac{F(\overline{x}^{(K+1)})}{2\|F(\overline{x}^{(K+1)})\|_{2}},\overline{\nu}^{(K+1)}+\frac{A\overline{x}^{(K+1)}-b}{2\|A\overline{x}^{(K+1)}-b\|_{2}}\right)-z^{(0)}\right\|^{2}_{2}$
$\displaystyle\leq$
$\displaystyle\|\overline{z}^{(K+1)}-z^{(0)}\|_{2}+\frac{\|F(\overline{x}^{(K+1)})\|_{2}}{2\|F(\overline{x}^{(K+1)})\|_{2}}+\frac{\|A\overline{x}^{(K+1)}-b\|_{2}}{2\|A\overline{x}^{(K+1)}-b\|_{2}}$
$\displaystyle=$
$\displaystyle\|\overline{z}^{(K+1)}-z^{\star}+z^{\star}-z^{(0)}\|_{2}+1$
$\displaystyle\leq$
$\displaystyle\|\overline{z}^{(K+1)}-z^{\star}\|_{2}+\|z^{\star}-z^{(0)}\|_{2}+1$
$\displaystyle\leq$
$\displaystyle\delta_{K+1}^{-1}\sum_{k=0}^{K}\frac{\|z^{(k)}-z^{\star}\|_{2}}{\|G(z^{(k)})\|_{2}}+\|z^{\star}-z^{(0)}\|_{2}+1$
$\displaystyle\leq$
$\displaystyle\delta_{K+1}^{-1}\sum_{k=0}^{K}\frac{\|z^{(0)}-z^{\star}\|_{2}+1}{\|G(z^{(k)})\|_{2}}+\|z^{\star}-z^{(0)}\|_{2}+1$
$\displaystyle=$ $\displaystyle 2(\|z^{(0)}-z^{\star}\|_{2}+1),$ (8.46)
where the third inequality uses Jensen’s inequality and the fourth inequality
uses Lemma 8.1. Combining (8.45) and (8.46),
$\displaystyle\|F(\overline{x}^{(K+1)})\|_{2}+\|A\overline{x}^{(K+1)}-b\|_{2}\leq\delta_{K+1}^{-1}\frac{\beta_{K}}{2}(4(\|z^{(0)}-z^{\star}\|_{2}+1)^{2}+1).$
In the second case, we take $z=z^{\star}$. Starting from (8.44),
$\displaystyle\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\geq$ $\displaystyle
f_{0}(\overline{x}^{(K+1)})+F(\overline{x}^{(K+1)})^{\top}\lambda^{\star}+(A\overline{x}^{(K+1)}-b)^{\top}\nu^{\star}$
$\displaystyle-
f_{0}(x^{\star})-F(x^{\star})^{\top}\overline{\lambda}^{(K+1)}-(Ax^{\star}-b)^{\top}\overline{\nu}^{(K+1)}-\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\|z^{\star}-z^{(0)}\|^{2}_{2}$
$\displaystyle=$ $\displaystyle
f_{0}(\overline{x}^{(K+1)})-f_{0}(x^{\star})-\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\|z^{\star}-z^{(0)}\|^{2}_{2},$
since $F(x^{\star})=0$ and $Ax^{\star}=b$. Rearranging,
$\displaystyle
f_{0}(\overline{x}^{(K+1)})-f_{0}(x^{\star})\leq\delta_{K+1}^{-1}\frac{\beta_{K}}{2}(\|z^{(0)}-z^{\star}\|^{2}_{2}+1).$
Using Lemma 8.2 and [5, Lemma 3], $\delta_{K+1}^{-1}\frac{\beta_{K}}{2}$ can
be bounded as follows.
$\displaystyle\delta_{K+1}^{-1}\frac{\beta_{K}}{2}\leq$
$\displaystyle\frac{1}{2}\left(\sum_{k=0}^{K}\frac{1}{\|G(z^{(k)})\|_{2}}\right)^{-1}\left(\frac{1}{1+\sqrt{3}}+\sqrt{2K+1}\right)$
$\displaystyle\leq$
$\displaystyle\frac{1}{2}\left(\sum_{k=0}^{K}\frac{1}{C}\right)^{-1}\left(\frac{1}{1+\sqrt{3}}+\sqrt{2K+1}\right)\leq\frac{C}{2(K+1)}\left(\frac{1}{1+\sqrt{3}}+\sqrt{2K+1}\right).$
∎
### 8.2 Primal-dual subgradient method
#### Subgradient computing.
Given $C$ as a subset of ${\mathbb{R}}^{n}$, recall that
$\operatorname{conv}(C)$ stands for the convex hull generated by $C$.
The subdifferential of $F_{i}$ is computed by the formula:
$\partial F_{i}(x)=\begin{cases}\partial f_{i}(x)&\text{ if }f_{i}(x)>0\,,\\\
\operatorname{conv}(\partial f_{i}(x)\cup\\{0\\})&\text{ if }f_{i}(x)=0\,,\\\
\\{0\\}&\text{ otherwise}.\end{cases}$ (8.47)
Since the subdifferential of $\|\cdot\|^{s}_{2}$ function is computed by the
formula
$\partial\|\cdot\|_{2}^{s}(z)=\begin{cases}\\{2{z}\\}&\text{ if }s=2\,,\\\
s\|z\|_{2}^{s-2}{z}&\text{ if }z\neq 0\text{ and }s\in[0,2)\,,\\\
\\{sg\in{\mathbb{R}}^{m}:\|g\|_{2}\leq 1\\}&\text{ if }z=0\text{ and
}s\in[0,2),\end{cases}$ (8.48)
we have
$\begin{array}[]{rl}\partial\|F(\cdot)\|_{2}^{s}(x)&\supset\operatorname{conv}\\{\sum_{i=1}^{m}a_{i}b_{i}\,:\,a\in\partial\|\cdot\|_{2}^{s}(F(x))\,,\,b_{i}\in\partial
F_{i}(x)\\}\\\
&\supset\begin{cases}{s\|F(x)\|_{2}^{s-2}\sum_{i=1}^{m}F_{i}(x)\partial
F_{i}(x)}&\text{ if }F(x)\neq 0\,,\\\ \\{0\\}&\text{
otherwise}\,,\end{cases}\end{array}$ (8.49)
and
$\begin{array}[]{rl}\partial\|A\cdot-b\|_{2}^{s}(x)&\supset\operatorname{conv}\\{A^{\top}u\,:\,u\in\partial\|\cdot\|_{2}^{s}(Ax-b)\\}\\\
&\supset\begin{cases}{s\|Ax-b\|_{2}^{s-2}A^{\top}(Ax-b)}&\text{ if }Ax-b\neq
0\,,\\\ \\{0\\}&\text{ otherwise}\,.\end{cases}\end{array}$ (8.50)
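As an illustration, the subgradient selections (8.47)–(8.50) can be sketched numerically. The helper below is hypothetical (not from the paper): it returns one valid subgradient of $x\mapsto\rho(\|F(x)\|_{2}^{s}+\|Ax-b\|_{2}^{s})$, picking the zero element of the subdifferential whenever the corresponding norm vanishes.

```python
import numpy as np

def penalty_subgrad(x, fs, gs, A, b, rho, s):
    """One subgradient of x -> rho*(||F(x)||_2^s + ||Ax-b||_2^s),
    with F_i(x) = max(f_i(x), 0), following (8.47)-(8.50); the zero
    element is chosen whenever the corresponding norm vanishes."""
    F = np.array([max(f(x), 0.0) for f in fs])
    g = np.zeros_like(x)
    nF = np.linalg.norm(F)
    if nF > 0:
        # s*||F||^{s-2} * sum_i F_i * (a subgradient of f_i), cf. (8.49);
        # F_i > 0 implies a subgradient of F_i is one of f_i, cf. (8.47)
        g += s * nF ** (s - 2) * sum(Fi * gi(x) for Fi, gi in zip(F, gs) if Fi > 0)
    r = A @ x - b
    nr = np.linalg.norm(r)
    if nr > 0:
        g += s * nr ** (s - 2) * (A.T @ r)   # cf. (8.50)
    return rho * g
```

For $s=2$, a single constraint $f_{1}(x)=x_{1}-1$, $A=I$, $b=0$ and $x=(2,0)$, this reproduces the gradient of $(x_{1}-1)^{2}+\|x\|_{2}^{2}$, namely $(6,0)$.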
#### Proof of Theorem 5.2.
In [2, Section 8], the author uses the incorrect statement that if
$(a_{k})_{k=1}^{\infty}$ and $(b_{k})_{k=1}^{\infty}$ are two nonnegative real
sequences such that $\sum_{k=1}^{\infty}a_{k}b_{k}$ is bounded and
$\sum_{k=1}^{\infty}a_{k}$ diverges, then $b_{k}\to 0$ as $k\to\infty$. As a
counterexample, take $a_{k}=1/k$ and
$b_{k}=\begin{cases}1&\text{if }k=i^{2}\text{ for some }i\in{\mathbb{N}}\,,\\\
0&\text{otherwise.}\end{cases}$ (8.51)
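A quick numerical check of this counterexample (a hypothetical verification script, not part of the original argument): the partial sums of $a_{k}b_{k}$ stay below $\pi^{2}/6$, the harmonic sums of $a_{k}$ grow without bound, yet $b_{k}=1$ at every perfect square, so $b_{k}\not\to 0$.

```python
import math

# Counterexample (8.51): a_k = 1/k, b_k = 1 exactly at perfect squares.
N = 100_000
a = lambda k: 1.0 / k
b = lambda k: 1.0 if math.isqrt(k) ** 2 == k else 0.0

s_ab = sum(a(k) * b(k) for k in range(1, N + 1))  # = sum of 1/i^2 over i^2 <= N
s_a = sum(a(k) for k in range(1, N + 1))          # harmonic sum, grows like ln N
print(s_ab, s_a, b(316 ** 2))  # s_ab bounded by pi^2/6, s_a diverges, b_k = 1 infinitely often
```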
It is not hard to prove Theorem 5.2 by relying on the following lemma:
###### Lemma 8.4.
Let $K=\infty$ in method (5.21). For every $k\in{\mathbb{N}}$, let $i_{k}$ be
any element belonging to
$\arg\min_{1\leq i\leq k}T^{(i)\top}(z^{(i)}-z^{\star})\,.$ (8.52)
Then
$|f_{0}(x^{(i_{k})})-p^{\star}|\to 0\quad\text{ and
}\quad\|F(x^{(i_{k})})\|_{2}+\|Ax^{(i_{k})}-b\|_{2}\to 0$ (8.53)
as $k\to\infty$ at a rate of at least $\mathcal{O}((k+1)^{-\delta/(2s)})$.
###### Proof.
Let $z^{\star}:=(x^{\star},\lambda^{\star},\nu^{\star})$. Let $R>0$ be large
enough such that $R\geq\|z^{(1)}\|_{2}$ and $R\geq\|z^{\star}\|_{2}$. We start
by writing out a basic identity
$\begin{array}[]{rl}\|z^{(k+1)}-z^{\star}\|_{2}^{2}&=\|z^{(k)}-z^{\star}\|_{2}^{2}-2\alpha_{k}T^{(k)\top}(z^{(k)}-z^{\star})+\alpha_{k}^{2}\|T^{(k)}\|_{2}^{2}\\\
&=\|z^{(k)}-z^{\star}\|_{2}^{2}-2\gamma_{k}\frac{T^{(k)\top}}{\|T^{(k)}\|_{2}}(z^{(k)}-z^{\star})+\gamma_{k}^{2}\,.\end{array}$
(8.54)
By summing it over $k$ and rearranging the terms, we get
$\|z^{(k+1)}-z^{\star}\|_{2}^{2}+2\sum_{i=1}^{k}\gamma_{i}\frac{T^{(i)\top}}{\|T^{(i)}\|_{2}}(z^{(i)}-z^{\star})=\|z^{(1)}-z^{\star}\|^{2}_{2}+\sum_{i=1}^{k}\gamma_{i}^{2}\leq
4R^{2}+S.$ (8.55)
The latter inequality is due to $\sum_{i=1}^{\infty}\gamma_{i}^{2}=S<\infty$
(see Abel’s summation formula).
We argue that the sum on the left-hand side is nonnegative. First, we estimate
a lower bound for
$\begin{array}[]{rl}T^{(k)\top}(z^{(k)}-z^{\star})=&\partial_{x}L_{\rho}(z^{(k)})^{\top}(x^{(k)}-x^{\star})\\\
&-\partial_{\lambda}L_{\rho}(z^{(k)})^{\top}(\lambda^{(k)}-\lambda^{\star})-\partial_{\nu}L_{\rho}(z^{(k)})^{\top}(\nu^{(k)}-\nu^{\star})\,.\end{array}$
(8.56)
With $F^{(k)}\neq 0$ and $Ax^{(k)}\neq b$, the first term further expands to
$\begin{array}[]{rl}&\partial_{x}L_{\rho}(z^{(k)})^{\top}(x^{(k)}-x^{\star})\\\
=&g_{0}^{(k)\top}(x^{(k)}-x^{\star})\\\
&+\sum\limits_{i=1}^{m}\left(\lambda_{i}^{(k)}+\rho
s\|F^{(k)}\|_{2}^{s-2}F_{i}^{(k)}\right)g_{i}^{(k)\top}(x^{(k)}-x^{\star})\\\
&+\nu^{(k)\top}A(x^{(k)}-x^{\star})+\rho
s\|Ax^{(k)}-b\|_{2}^{s-2}(Ax^{(k)}-b)^{\top}A(x^{(k)}-x^{\star})\\\
\geq&f_{0}(x^{(k)})-p^{\star}+\lambda^{(k)\top}F^{(k)}+\rho
s\|F^{(k)}\|_{2}^{s}\\\ &+\nu^{(k)\top}(Ax^{(k)}-b)+\rho
s\|Ax^{(k)}-b\|_{2}^{s}\,.\qquad\text{(since $Ax^{\star}=b$)}\end{array}$
(8.57)
By the definition of the subgradient, for the objective function we have
$g_{0}^{(k)\top}(x^{(k)}-x^{\star})\geq f_{0}(x^{(k)})-p^{\star}\,,$ (8.58)
and for the constraints
$g_{i}^{(k)\top}(x^{(k)}-x^{\star})\geq
F_{i}^{(k)}-F_{i}(x^{\star})=F_{i}^{(k)}\,,\,i=1,\dots,m\,.$ (8.59)
Notice that $\lambda_{i}^{(k)}+\rho s\|F^{(k)}\|_{2}^{s-2}F_{i}^{(k)}$ is
nonnegative since $\lambda_{i}^{(k)}$ and $F_{i}^{(k)}$ are nonnegative. Next,
we have
$-\partial_{\lambda}L_{\rho}(z^{(k)})^{\top}(\lambda^{(k)}-\lambda^{\star})=-F^{(k)\top}(\lambda^{(k)}-\lambda^{\star})$
(8.60)
and
$-\partial_{\nu}L_{\rho}(z^{(k)})^{\top}(\nu^{(k)}-\nu^{\star})=(b-Ax^{(k)})^{\top}(\nu^{(k)}-\nu^{\star})\,.$
(8.61)
Using these and subtracting, we obtain
$\begin{array}[]{rl}T^{(k)\top}(z^{(k)}-z^{\star})\geq&f_{0}(x^{(k)})-p^{\star}+\lambda^{\star\top}F^{(k)}+\nu^{\star\top}(Ax^{(k)}-b)\\\
&+\rho s(\|F^{(k)}\|_{2}^{s}+\|Ax^{(k)}-b\|_{2}^{s})\\\
=&L(x^{(k)},\lambda^{\star},\nu^{\star})-L(x^{\star},\lambda^{\star},\nu^{\star})\\\
&+\rho s(\|F^{(k)}\|_{2}^{s}+\|Ax^{(k)}-b\|_{2}^{s})\\\ \geq&0\,.\end{array}$
(8.62)
The latter inequality follows from (4.14). Note that (8.62)
remains true even if $F^{(k)}=0$ or $Ax^{(k)}=b$.
Since both terms on the left-hand side of (8.55) are nonnegative, for all $k$,
we have
$\|z^{(k+1)}-z^{\star}\|_{2}^{2}\leq 4R^{2}+S\quad\text{and}\quad
2\sum_{i=1}^{k}\gamma_{i}\frac{T^{(i)\top}}{\|T^{(i)}\|_{2}}(z^{(i)}-z^{\star})\leq
4R^{2}+S\,.$ (8.63)
The first inequality yields that there exists a positive real $D$ satisfying
$\|z^{(k)}\|_{2}\leq D$, namely $D=R+\sqrt{4R^{2}+S}$. By assumption, the norms of the
subgradients $g_{i}^{(k)}$, $i=0,1,\dots,m$, are bounded on the bounded set
containing the iterates $x^{(k)}$, so it follows that $\|T^{(k)}\|_{2}$ is
bounded by some positive real $C$ independent of $k$.
The second inequality of (8.63) implies that
$T^{(i_{k})\top}(z^{(i_{k})}-z^{\star})\leq\frac{C(4R^{2}+S)}{2\sum_{i=1}^{k}\gamma_{i}}\leq\frac{W}{(k+1)^{\delta/2}}\,.$
(8.64)
for some $W>0$ independent of $k$. The latter inequality is due to the
asymptotic behavior of the zeta function. Moreover, (8.62) shows that
$\begin{array}[]{rl}&(L(x^{(i_{k})},\lambda^{\star},\nu^{\star})-L(x^{\star},\lambda^{\star},\nu^{\star}))+\rho
s\|F(x^{(i_{k})})\|_{2}^{s}+\rho s\|Ax^{(i_{k})}-b\|_{2}^{s}\\\
\leq&T^{(i_{k})\top}(z^{(i_{k})}-z^{\star})\leq\frac{W}{(k+1)^{\delta/2}}\,.\end{array}$
(8.65)
Since all three terms on the left-hand side are nonnegative, we obtain
$\|F(x^{(i_{k})})\|_{2}\leq\frac{W^{1/s}}{(\rho
s)^{1/s}(k+1)^{\delta/(2s)}}\,,\qquad\|Ax^{(i_{k})}-b\|_{2}\leq\frac{W^{1/s}}{(\rho
s)^{1/s}(k+1)^{\delta/(2s)}}\,,$ (8.66)
and
$\begin{array}[]{rl}\frac{W}{(k+1)^{\delta/2}}\geq&L(x^{(i_{k})},\lambda^{\star},\nu^{\star})-L(x^{\star},\lambda^{\star},\nu^{\star})\\\
=&f_{0}(x^{(i_{k})})-p^{\star}+\lambda^{\star\top}F(x^{(i_{k})})+\nu^{\star\top}(Ax^{(i_{k})}-b)\geq
0\,.\end{array}$ (8.67)
Using these, we have
$\frac{W}{(k+1)^{\delta/2}}+\xi_{i_{k}}\geq
f_{0}(x^{(i_{k})})-p^{\star}\geq-\xi_{i_{k}}\,.$ (8.68)
where
$\xi_{i_{k}}=\|\lambda^{\star}\|_{2}\|F(x^{(i_{k})})\|_{2}+\|\nu^{\star}\|_{2}\|Ax^{(i_{k})}-b\|_{2}$.
Since $\xi_{i_{k}}\to 0$ as $k\to\infty$ at a rate of at least
$\mathcal{O}((k+1)^{-\delta/(2s)})$, this proves (8.53). ∎
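To make the iteration concrete, here is a minimal sketch of a normalized primal-dual subgradient loop of the kind analysed above, run on a hypothetical toy instance; the problem, step sizes $\gamma_{k}$ and tolerance are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Toy instance of min f0(x) s.t. F(x) <= 0 (no equality constraints):
# minimize x1 + x2 subject to x1^2 + x2^2 <= 1,
# with optimum x* = (-1/sqrt(2), -1/sqrt(2)) and p* = -sqrt(2).
f0 = lambda x: x[0] + x[1]
g0 = lambda x: np.array([1.0, 1.0])          # gradient of the objective
f1 = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0   # constraint f1(x) <= 0
g1 = lambda x: 2.0 * x                       # gradient of f1

rho, s = 1.0, 2.0
x, lam, best = np.zeros(2), 0.0, np.inf
for k in range(1, 50001):
    F = max(f1(x), 0.0)                      # violation F_1(x)
    gF = g1(x) if f1(x) > 0 else np.zeros(2) # one subgradient of F_1, cf. (8.47)
    Tx = g0(x) + (lam + rho * s * F ** (s - 1)) * gF  # d_x L_rho
    T = np.concatenate([Tx, [-F]])           # T = (d_x L, -d_lambda L)
    nT = np.linalg.norm(T)
    if nT == 0:
        break                                # stationary point reached
    gamma = 1.0 / k ** 0.75                  # sum gamma_k = inf, sum gamma_k^2 < inf
    x = x - gamma * Tx / nT                  # normalized descent in x
    lam = max(lam + gamma * F / nT, 0.0)     # normalized ascent in lambda >= 0
    best = min(best, abs(f0(x) + np.sqrt(2)) + max(f1(x), 0.0))
print(best)  # combined optimality + feasibility gap of the best iterate
```

The tracked quantity combines the objective gap and the constraint violation, the two quantities Lemma 8.4 drives to zero.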
## References
* [1] Y. I. Alber, A. N. Iusem, and M. V. Solodov. On the projected subgradient method for nonsmooth convex optimization in a hilbert space. Mathematical Programming, 81(1):23–35, 1998.
* [2] S. Boyd. Subgradient methods. Lecture notes of EE364b, Stanford University, Spring 2013-14:1–39, 2014.
* [3] L. Lukšan and J. Vlcek. Test problems for nonsmooth unconstrained and linearly constrained optimization. Technická zpráva, 798, 2000.
* [4] M. R. Metel and A. Takeda. Dual subgradient method for constrained convex optimization problems. arXiv preprint arXiv:2009.12769, 2020.
* [5] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical programming, 120(1):221–259, 2009.
* [6] Y. Nesterov. Lectures on convex optimization, volume 137. Springer, 2018.
* [7] J. Nocedal and S. Wright. Numerical optimization. Springer Science & Business Media, 2006.
* [8] N. Z. Shor. Minimization methods for non-differentiable functions, volume 3. Springer Science & Business Media, 2012.
_Present address:_ Advanced Research Center for Nano-lithography (ARCNL), Science Park 106, 1098 XG, The Netherlands. Deceased September 08, 2020.
# Induced THz transitions in Rydberg caesium atoms for application in
antihydrogen experiments
M. Vieille-Grosjean, Z. Mazzotta, D. Comparat (Université Paris-Saclay, CNRS,
Laboratoire Aimé Cotton, 91405, Orsay, France); E. Dimova (Bulgarian Academy of
Sciences, 72 Tzarigradsko Chaussée Blvd., 1784 Sofia, Bulgaria); T. Wolz and C.
Malbrunot (Physics Department, CERN, Genève 23, 1211, Switzerland)
###### Abstract
Antihydrogen atoms are produced at CERN in highly excited Rydberg states.
However, precision measurements require anti-atoms in the ground state. Whereas
experiments currently rely on spontaneous emission only, simulations have
shown that THz light can be used to stimulate the decay towards ground state
and thus increase the number of anti-atoms available for measurements. We
review different possibilities at hand to generate light in the THz range
required for the purpose of stimulated deexcitation. We demonstrate the effect
of a blackbody-type light source, which, however, presents drawbacks for this
application, including strong photoionization. Further, we report on the first
THz transitions in a beam of Rydberg caesium atoms induced by photomixers and
conclude with the implications of the results for the antihydrogen case.
## I Introduction
After years of technical developments, antihydrogen ($\bar{\rm{H}}$) atoms can
be regularly produced at CERN’s Antiproton Decelerator complex
AlP_Hbar_Accumulation ; KUR14 ; PhysRevLett.108.113002 . This anti-atom is
used for stringent tests of the Charge-Parity-Time (CPT) symmetry as well as
for the direct measurements of the effect of the Earth’s gravitational
acceleration on antimatter. For precision measurements towards both of these
goals, ground-state antihydrogen atoms are needed.
The atoms are mostly synthesized using either a charge exchange (CE) reaction
where an excited positronium (Ps) atom (bound state of an electron and a
positron) releases its positron to an antiproton or a so-called three-body
recombination reaction (3BR) where a large number of positrons and antiprotons
are brought together to form antihydrogen.
Both formation mechanisms produce antihydrogen atoms in highly excited Rydberg
states with so-far best achieved temperatures of $\sim 40\,\mathrm{K}$
AlP_Hbar_Accumulation (corresponding to a mean velocity of $\sim
1000\,\mathrm{m/s}$) and in the presence of relatively strong magnetic fields
($\mathcal{O}(1\,\mathrm{T})$) to confine the charged particles and, in some
cases, trap the antihydrogen atoms. Although experimentally not well studied,
the antihydrogen atoms formed via 3BR are expected to cover a broad range of
principal quantum numbers up to $n\sim 100$ Gabrielse2002 ; rsa_Mal_18 ; ROB08
; radics2014scaling ; Jonsell_2019 . Highly excited states will be ionized by
the electric field present at the edges of the charged clouds so that in
general only antihydrogen with $n<50$ can escape the formation region. Via the
CE reaction, specific $n\sim 30$ values can be targeted resulting in a
narrower spread in $n$ that is mainly determined by the velocity and velocity
distribution of the impinging Ps Krasnicky_2019 ; 2016PhRvA..94b2714K ;
2012CQGra..29r4009D . In either case, all ($k,m$) substates are populated
where $m$ is the magnetic quantum number and $k$ a labeling index according to
the strength of the substate’s diamagnetic interaction that becomes in a
field-free environment the angular momentum quantum number $l$. The field-free
lifetime $\tau_{n,l}$ of the Rydberg states produced
$\tau_{n,l}\approx\left(\frac{n}{30}\right)^{3}\left(\frac{l+1/2}{30}\right)^{2}\times
2.4\,\mathrm{ms}$ (1)
is of the order of several milliseconds PhysRevA.31.495 and can be considered
a good approximation in the presence of a magnetic field $B\sim 1\,\mathrm{T}$
Topccu2006 . Given the currently achieved formation temperatures, this
results, for experiments that rely on an antihydrogen beam, in a large
fraction of atoms remaining in excited states before escaping the formation
region which complicates beam formation and hinders in-situ measurements.
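As a worked instance of Eq. (1), the short sketch below (the helper name is hypothetical) evaluates the field-free lifetime for a near-circular and a low-$l$ state at $n=30$, showing why high-$l$ states dominate the excited-state population.

```python
def tau_ms(n, l):
    """Field-free Rydberg lifetime estimate of Eq. (1), in milliseconds."""
    return (n / 30.0) ** 3 * ((l + 0.5) / 30.0) ** 2 * 2.4

# Near-circular states live for milliseconds; low-l states decay much faster.
print(tau_ms(30, 29))  # ~2.3 ms
print(tau_ms(30, 1))   # ~0.006 ms, i.e. a few microseconds
```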
In a previous publication wolz2019stimulated the stimulation of atomic
transitions in (anti-)hydrogen using appropriate light in order to couple the
initial population to fast spontaneously decaying levels was studied. Indeed,
such techniques make it possible to increase the ground-state fraction within a few
microseconds, which corresponds to an average flight path of the atoms on the
order of centimeters. Different deexcitation schemes, making use of THz light,
microwaves and visible lasers were investigated. Microwave sources and lasers
at the wavelengths and intensities identified in wolz2019stimulated are
commercially available and measurements of single Rydberg-Rydberg transitions
have been reported gallagher_1994 . The simultaneous generation of multiple
powerful light frequencies in the high GHz to THz regime however still remains
a technical challenge. After providing some insights into the antihydrogen
deexcitation schemes dealt with and clarifying which light intensities and
wavelengths are required in section II, we analyse in section III the
suitability of different THz light sources for this purpose. We report in
section IV on the effect of a broadband lamp and on the first observation of
highly selective THz light stimulated population transfer between Rydberg
states with a photomixer in a proof-of-principle experiment with a beam of
excited caesium atoms.
## II THz-induced antihydrogen deexcitation and state mixing
For both production schemes, CE and 3BR, the idea of stimulated deexcitation
of antihydrogen comes down to mixing many initially populated long-lived
states and simultaneously driving transitions to fewer short-lived levels from
where the spontaneous cascade decay towards the ground state is fast. Relying
on a pulsed CE production scheme the initially populated states can be mixed
by applying an additional electric field to the already present magnetic one
COM181 . A deexcitation/mixing scheme based on the stimulation of atomic
transitions via light in the THz frequency range is thoroughly discussed in
wolz2019stimulated for the 3BR case. This latter scheme is equally applicable
to a pulsed CE production.
When coupling a distribution of $N$ long-lived levels with an average lifetime
of $\tau_{\rm{N}}$ to $N^{\prime}$ levels with an average deexcitation time to
ground state of $\tau_{\rm{N^{\prime}}}^{\rm{GS}}\ll\tau_{\rm{N}}$, the
minimum achievable time $t_{\rm{deex}}$ for the entire system to decay to
ground state can be approximated by
$t_{\rm{deex}}\approx\frac{N}{N^{\prime}}\times\tau_{N^{\prime}}^{\rm{GS}}.$
(2)
In (anti)hydrogen, the average decay time of a $n^{\prime}$-manifold with
$N^{\prime}$ fully mixed states to ground state can be approximated, for low
$n^{\prime}$, by the average lifetime of the manifold:
$\tau_{N^{\prime}}^{\rm{GS}}\sim 2\,\mathrm{\mu
s}\times\left(n^{\prime}/10\right)^{4.5}$ COM181 . Consequently, when coupling
some thousands of initially populated Rydberg antihydrogen levels ($n\sim 30$)
to a low lying manifold this intrinsic limit would lead to a best deexcitation
time towards ground state of roughly a few tens of $\mathrm{\mu s}$. Within
such a time interval the atoms move only by a few tens of $\mathrm{mm}$ and
thus stay close to the formation region from where, once deexcited, a beam can
be efficiently formed.
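The estimate above can be reproduced from Eq. (2) and the quoted manifold decay time; the value $N=4000$ below is an assumed round figure standing in for the "some thousands" of populated levels, and $N^{\prime}=n^{\prime 2}$ counts the $(l,m)$ sublevels of the target manifold with spin ignored.

```python
# Illustrative numbers: N = 4000 initially populated levels is an assumed
# round figure; the target manifold is n' = 5.
n_prime = 5
tau_gs_us = 2.0 * (n_prime / 10.0) ** 4.5  # avg decay time of the n'=5 manifold, in us
N, N_prime = 4000, n_prime ** 2            # n'^2 = 25 mixed (l, m) sublevels
t_deex_us = N / N_prime * tau_gs_us        # Eq. (2)
print(tau_gs_us, t_deex_us)  # ~0.09 us and ~14 us: a few tens of microseconds
```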
Figure 1: Binding energy of (anti-)hydrogen Rydberg states in a
$1\,\mathrm{T}$ magnetic field as a function of the magnetic quantum number
$m$. Inter-$n$ manifold transitions in the THz region are indicated by
continuous arrows. Dashed arrows illustrate some examples of spontaneous
transitions. The figure is adapted from Ref. wolz2019stimulated .
Figure 1 shows the binding energy diagram of antihydrogen states in a
$1\,\mathrm{T}$ magnetic field. Recalling the $|\Delta m|=1$ selection rule,
it becomes apparent that, especially to address high angular momentum states
that are incidentally the longest lived ones, all $\Delta n=-1$ THz-
transitions need to be driven to achieve an efficient mixing. In Ref.
wolz2019stimulated the efficiency of stimulating simultaneously all $\Delta
n=-1$ inter-$n$ manifold transitions from Rydberg levels $(n,k,m)$ down
towards a manifold $n^{\prime}$ that is rapidly depopulated to ground state by
spontaneous emission (in the following referred to as THz deexcitation) is
studied.
For $n=30$ and $n^{\prime}=5$ it is found that the total (summed over all
driven transitions) light intensity necessary is $>10\,\mathrm{mW/cm^{2}}$,
covering a frequency range from $\sim 200\,\mathrm{GHz}$ to well within the
few $\mathrm{THz}$ region (the frequencies range from over 40 THz for
$n=6\rightarrow 5$ to 0.26 THz for $n=30\rightarrow 29$).
As an alternative scheme (in the following referred to as THz mixing), it was
proposed to restrict the THz light to a certain fraction of the initially
populated levels in order to mix all ($k,m$) sublevels within, for example,
$25\leq n\leq 35$. Keeping the levels equipopulated allows a narrowband
deexcitation laser to couple the Rydberg state distribution directly to the
$n=3$ manifold which decays on a nanosecond timescale. This results in a
reduction of the total THz light intensity required by more than an order of
magnitude to $1\,\mathrm{mW/cm^{2}}$.
In summary, THz mixing or deexcitation requires the simultaneous generation of
multiple light frequencies in the mW power regime. As derived in Ref.
wolz2019stimulated , optimal conditions to transfer population are given when
sending equally intense light to stimulate the desired individual
$n\rightarrow n-1$ transitions.
## III THz sources
The spectral range in the THz region – also called far-infrared or sub-mm
region, depending on the community (1 THz corresponds to 33 cm-1, to $\sim 4$
meV, and to a wavelength of 0.3 mm) – is situated at frequencies at which
powerful sources are not easily available and mW power is roughly the
bottleneck even if THz technology is a fast moving field (see for instance the
reviews given in Latzel2017 ; 2015JaJAP..54l0101H ; dhillon20172017 ;
zhong2017optically ; elsasser2019concepts ). The multi-frequency light
necessary for deexcitation or state mixing in antihydrogen can be generated
via two techniques: either via several narrowband sources that emit a sharp
spectrum at the wavelengths required to drive single or few transitions, or
via a single broadband source that covers the frequency range of all required
transitions. In the following, we will give a general overview of the
constraints and limitations of either solution.
A first general constraint, whatever the source, is that
all transitions should be driven simultaneously (and not sequentially) in
order to avoid a mere population exchange between the levels. In the
following, we will thus restrict ourselves to fixed-frequency sources. We
first study the possible narrowband sources and then the broadband ones.
### III.1 Narrowband THz sources
In the case of narrowband sources, particular atomic transitions can be
targeted and therefore the power provided by the source at those wavelengths
is entirely used to drive the transition. Thus, ionization due to off-resonant
wavelengths can be minimized. The use of multiple sources makes it possible to
implement the correct power scaling as a function of output frequency, thereby
increasing the efficiency of the deexcitation. However, when stimulating all
$\Delta n=-1$ transitions from $n=30$ down to $n^{\prime}=5$, a total of 25
wavelengths is required. In the presence of a magnetic field, which leads to
significant degeneracy lifting of the levels, the necessary number of (very)
narrowband sources can even increase further due to the spectral broadening of
the atomic transitions. In view of the high number of desired wavelengths that
need to be produced the usage of expensive direct synthesis such as quantum
cascade or molecular lasers is not an option. Furthermore, as mentioned
earlier, the exact distribution of quantum states populated during
antihydrogen synthesis is not well known and thus a versatile apparatus is
needed to adapt the frequencies generated and used for mixing. Given this
point, CMOS-based terahertz sources or powerful diodes ($>1\,\mathrm{mW}$) are
not versatile enough solutions, due to the requirement of several frequency
multiplications and the necessity of many waveguides given their cut-off
frequencies.
In contrast, photomixing or optical rectification 2003OExpr..11.2486A seems
to be an attractive option. Given $n_{0}$ input laser signals at different
frequencies $\nu_{i}$, the optical beatnote produces, in the ultra-fast
semiconductor material of the photomixer, THz waves at all difference
frequencies $\nu_{i}-\nu_{j}$, the number of which is $n_{0}(n_{0}-1)/2$. Photomixing can
nowadays reach the $\mathrm{mW}$ level, shared by all generated frequencies
2011JAP…109f1301P . The $n_{0}$ laser inputs can be produced using pulse
shaping from a single broadband laser source liu1996terahertz ;
metcalf2013fully ; hamamda2015ro ; finneran2015decade . Given the limitation
of a photomixer’s total output power, the device is an especially attractive
solution for THz mixing purposes (and not necessarily deexcitation towards low
$n^{\prime}$) where the total power is divided up into fewer frequencies. As
mentioned in section II, this is the case for schemes relying on deexcitation
lasers. Additionally, the maximum achievable output power rapidly decreases
towards the few-THz frequency region, rendering the device unfit for the
deexcitation purpose at $n<15$. We conclude that, in particular for the THz
mixing scheme, photomixers exhibit very attractive characteristics.
Furthermore the photomixer simply reproduces the beating in the laser spectrum
and can thus also be used as a broadband source.
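A small sketch of the beatnote counting: for a hypothetical comb of $n_{0}=4$ laser lines (offsets in GHz from a common carrier, chosen purely for illustration), the pairwise differences give the $n_{0}(n_{0}-1)/2$ generated frequencies.

```python
from itertools import combinations

# Hypothetical comb-line offsets in GHz from a common carrier near 852 nm;
# the absolute optical frequencies cancel in the beatnotes.
offsets_ghz = [0.0, 97.0, 205.0, 320.0]
beats = sorted(abs(a - b) for a, b in combinations(offsets_ghz, 2))
n0 = len(offsets_ghz)
print(len(beats), beats)  # n0*(n0-1)/2 = 6 difference frequencies
```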
### III.2 Broadband THz sources
Using a broadband source has the main advantage that a single device might be
able to drive many transitions significantly facilitating the experimental
implementation. The obvious drawback, however, is that most of the power will
not be emitted at resonant frequencies and thus much higher total power would
be required to drive the needed transitions. This might lead to significant
losses due to ionization glukhov2010blackbody ; merkt2016 even if filters can
be used to reduce this effect. As pointed out earlier, the source output power
should ideally be constant over the exploited range of emitted wavelengths
which is more difficult to implement with a single broadband source.
Portable synchrotron griffiths2006instrumentation or table-top Free Electron
Laser sources hooker2013developments ; shibata1997broadband ;
2015PJAB…91..223N ; wetzels2006far would be ideal broadband sources with
intense radiance in the far-infrared region, but the costs of such apparatus
are still prohibitive. A possible alternative is the use of femtosecond mode-
locked lasers to generate very short THz pulses using optical rectification,
surface emitters or photoconductive (Auston) switches. However, we can only
use sources with fast repetition rates in order for the spontaneous emission
to depopulate all levels. Unfortunately, even though photoconductive switches
with $\mathrm{mW}$ THz average output power exist brown2008milliwatt , and THz
bandwidth in excess of 4 THz with power up to $64\,\mathrm{\mu W}$ as well as
optical-power-to-THz-power conversion efficiencies of $\sim 10^{-3}$ have been
demonstrated 2013ApPhL.103f1103D , the efficiency drops to $\sim 10^{-5}$ for
fast repetition rates and the correspondingly low femtosecond pulse energies. Thus, if using a
standard oscillator providing for instance 1 W average power, no more than
$10\,\mathrm{\mu W}$ total output power is expected 2015JaJAP..54l0101H . Such
sources have been tested to drive transitions between Rydberg states, but with
only $10\%$ population transfer from the $n=50$ initial states down to
$n<40$ mandal2010half ; kopyciuk2010deexcitation ; takamine2017population .
A simple solution would consist of a blackbody emitter which efficiently
radiates in the THz range brundermann2012terahertz . A 1000 K blackbody emits
in the far-infrared region of 0.1-5 THz with a band radiance of $4\,\mathrm{mW/cm^{2}}$,
which seems perfectly compatible with the requirements found for the
antihydrogen deexcitation purpose. Such a radiation source has been proposed
in order to cool internal degrees of freedom of MgH+ molecular ions
2004JPhB…37.4571V . Between about $400$ and $100\,\mathrm{cm^{-1}}$, the
radiant power emitted by a silicon carbide (Globar) source is as high as any
conventional infrared source, but below $100\,\mathrm{cm^{-1}}$, as for Nernst
lamps or glowers that become transparent below about $200\,\mathrm{cm^{-1}}$,
the emissivity is low. In the region between $\sim 50-10\,\mathrm{cm^{-1}}$ it
is thus customary to use a high-pressure mercury lamp with a spectrum close to
a blackbody one of effective temperature of 1000-5000 K
griffiths2006instrumentation ; brundermann2012terahertz ; cann1969light ;
wolfe1978infrared ; 1996InPhT..37..471K ; buijs2002incandescent ;
2003PhRvL..90i4801A .
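The quoted band radiance can be checked by integrating Planck's law over 0.1-5 THz at 1000 K; the sketch below uses a simple trapezoidal rule and yields roughly $3.7\,\mathrm{mW/cm^{2}}$, consistent with the $\sim 4\,\mathrm{mW/cm^{2}}$ quoted above.

```python
import math

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def spectral_radiance(nu, T):
    """Planck spectral radiance B_nu in W / (m^2 sr Hz)."""
    return 2.0 * h * nu ** 3 / c ** 2 / math.expm1(h * nu / (kB * T))

# Band exitance of a T = 1000 K blackbody over 0.1-5 THz: M = pi * int B_nu dnu
T, nu_lo, nu_hi, n = 1000.0, 0.1e12, 5.0e12, 20000
dnu = (nu_hi - nu_lo) / n
vals = [spectral_radiance(nu_lo + i * dnu, T) for i in range(n + 1)]
M = math.pi * dnu * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoidal rule
print(M * 0.1)  # W/m^2 -> mW/cm^2; close to the quoted 4 mW/cm^2
```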
## IV Experimental caesium test setup
Figure 2: Illustration of the experimental caesium beam setup.
In order to experimentally assess the potential of the discussed source types,
to evaluate realistic power outputs, and to study the suitability of the
sources for application to antihydrogen state mixing and deexcitation, we have
tested, on a beam of excited Rydberg caesium atoms, the narrowband and broadband
solutions that seemed most promising. The reason for using caesium rather than
hydrogen atoms is mainly that, compared to hydrogen, light to
manipulate caesium atoms is much easier to generate and off-the-shelf
solutions readily exist. Moreover, alkali Rydberg atoms, such as caesium,
have a behavior close to that of hydrogen.
In our experimental setup, illustrated in Fig. 2, a caesium effusive beam
emitted out of an oven enters a vacuum chamber. The atoms are excited by a cw
diode at 852 nm from the $6\mathrm{S}_{1/2}$ to the $6\mathrm{P}_{3/2}$ level.
A second tunable pulsed laser (OPO pumped by a Nd:YAG) then addresses the $n$S
or $n$D Rydberg level. The excitation lasers are sent perpendicular to the
beam direction. Two grids opposing each other perpendicular to the beam
direction introduce an electric field to field ionize the atoms and study the
population of each $(n,l)$ state. The THz radiation emitted by a narrowband
photomixer outside the chamber can be sent through a THz transparent viewport
towards the excited Cs atoms. Alternatively, a broadband lamp is mounted
inside the chamber in proximity to the measurement region to stimulate a
population transfer.
The caesium state population was studied by applying a high voltage pulse to
the lower grid (cf. Fig. 2, the other grid was grounded) of the field ionizer
surrounding the atomic beam at a given delay time $t_{D}$ with respect to the
laser excitation pulse. The ionizing field was ramped making use of an RC
circuit with a rise time of $4\,\mathrm{\mu s}$. Since each state ionizes at a
given electric field strength, the state distribution can be probed by
collecting either the ions or electrons from the ionization on a Chevron stack
micro-channel plate (MCP) charge detector ducas1979detection ;
hollberg1984measurement .
We tested a commercial (GaAs Toptica) photomixer acting as a THz source
stimulating the $97\,\mathrm{GHz}$ transition between the initially excited
$36\mathrm{S}_{1/2}$ state towards the $36\mathrm{P}_{3/2}$ Rydberg state.
This transition was chosen due to a strong dipole transition, easy laser
excitation and a well defined field ionization signal. Undoubtedly, much more
cost-effective, convenient and efficient ways to induce a $97\,\mathrm{GHz}$
transition would have been to use a voltage-controlled oscillator (VCO), a
semiconductor diode (Gunn or IMPATT), a backward-wave oscillator or a
submillimeter-wave source based on harmonic generation of microwave radiation. However, our
goal was not to drive specifically this transition, but to demonstrate the use
of a photomixer to drive Rydberg transitions. This technology makes it
possible to create a spectrum of multiple sharp frequencies that
simultaneously drive many transitions in antihydrogen, which ultimately results in a deexcitation of the
atoms. In the context of this proof-of-principle experiment, mixing of near
$852\,\mathrm{nm}$ laser lines from a Ti:Sa laser and a diode laser was used
to produce $\sim 1\,\mathrm{\mu W}$ THz output power at $97\,\mathrm{GHz}$
with a spectral linewidth that reproduces that of the input lasers
($<5\,\mathrm{MHz}$). The THz light was sent through a TPX viewport
(transparent to THz radiation) and illuminated the sample for $\sim
10\,\mathrm{\mu s}$. A population transfer is clearly visible in Fig. 3 and
amounts to $\sim 15\%$ corresponding to a stimulated transition rate of the
order of $10^{4}\,\mathrm{s^{-1}}$. In theory, the large Cs dipole matrix
element ($554.4ea_{0}$ for the $36\mathrm{S}_{1/2}\rightarrow
36\mathrm{P}_{3/2}$ transition SIBALIC2017319 ) should lead to a much faster
transition rate of $\Omega\sim 10^{8}\,\mathrm{s^{-1}}$ when assuming a light
intensity of $\sim 1\,\mathrm{\mu W/cm^{2}}$. The experimentally observed
lower rate is mainly explained by a transition broadening due to large field
inhomogeneities in the region traversed by the Cs beam. Indeed, in our
geometry, the MCP produces a fringe field between the two field-ionizer plates
that can reach tens of $\mathrm{V/cm}$ leading to a broadening of tens of GHz
for the transition addressed SIBALIC2017319 . This measurement demonstrates
the first, to our knowledge, use of a photomixer to stimulate Rydberg
transitions in caesium atoms.
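The quoted rate $\Omega\sim 10^{8}\,\mathrm{s^{-1}}$ follows from $\Omega=dE/\hbar$ with field amplitude $E=\sqrt{2I/(\varepsilon_{0}c)}$; a sketch of the arithmetic, using only the dipole moment and intensity quoted in the text:

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J s
EPS0 = 8.8541878e-12  # vacuum permittivity, F/m
C    = 2.9979246e8    # speed of light, m/s
QE   = 1.6021766e-19  # elementary charge, C
A0   = 5.2917721e-11  # Bohr radius, m

d = 554.4 * QE * A0                      # dipole matrix element from the text, C m
I = 1e-6 / 1e-4                          # ~1 uW/cm^2 converted to W/m^2
E_field = math.sqrt(2 * I / (EPS0 * C))  # field amplitude, V/m
omega = d * E_field / HBAR               # Rabi frequency, rad/s

print(f"E = {E_field:.2f} V/m, Omega = {omega:.2e} s^-1")
```

This recovers a field of a few V/m and $\Omega$ just above $10^{8}\,\mathrm{s^{-1}}$, consistent with the estimate in the text.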
Figure 3: Caesium population transfer from the $36\mathrm{S}_{1/2}$ to the
$36\mathrm{P}_{3/2}$ level. The obtained MCP signal is plotted for case (1)
where the photomixer is switched on (triangle) and case (2) where the
photomixer is turned off (square). We indicate on the x-axis the time
reference of the signal to the high voltage ramp that is applied to the field
ionizer grids. The Cs atoms ionize at a given electric field strength and
accelerate towards the MCP. The ionization rate of the $36\mathrm{P}_{3/2}$
level peaks around $\sim 3.15\,\mathrm{\mu s}$ after the high voltage ramp is
started. The detection rate of ions originating from the ionization of the
$36\mathrm{S}_{1/2}$ level reaches its maximum approximately
$250\,\mathrm{ns}$ later. To improve the readability, the signals are averaged
over $0.4\,\mathrm{\mu s}$
Fig. 4 shows results obtained using a globar type (ASB-IR-12K from Spectral
Products) lamp which is a silicon nitride emitter mounted in a
$1\,\mathrm{inch}$ parabolic reflector that is small enough to be placed
inside the vacuum chamber $\sim 2\,\mathrm{cm}$ away from where the caesium
atoms are excited and ionized. Here, the delay time $t_{D}$ of the applied
ionizing field ramp with respect to the excitation laser was varied to study
the population of a given state (that ionizes at a given field strength) as a
function of time. To probe the population of these states we integrate the
signal in a $\sim 200\,\mathrm{ns}$ time window around the mean arrival time
of the field ionization signal. The desired signal can thus be slightly
contaminated by the ionization signal from nearby states. We compare the
lifetimes of the state for stimulated population transfer (lamp on) and sole
spontaneous emission (lamp off). Fig. 4 shows the results obtained for the
$40\mathrm{D}_{5/2}$ level. This level was chosen because $n\sim 40$ is close
to the highest level that we would hope to transfer in the case of
antihydrogen wolz2019stimulated . Although the decay curves are non-
exponential, we indicate the $1/e$ depopulation time that decreases from
$11\,\mathrm{\mu s}$ to $3.5\,\mathrm{\mu s}$ using the lamp. To interpret
this result, we simulate the spontaneous and light induced depopulation of the
$40\mathrm{D}_{5/2}$ state within the caesium atomic system. Since we deal
with incoherent light sources, we work in the low-saturation limit and
reduce the optical Bloch equations to a much simpler set of rate equations.
The resulting matrix system is numerically solved for a few hundred atoms as
detailed in 2014PhRvA..89d3410C . The simulations indicate that the
enhancement of the decay achieved experimentally is comparable to the
simulation result obtained by implementing a light source that emits an
isotropic blackbody spectrum of $\sim 1100\,\mathrm{K}$. This is close to,
and even slightly higher than, the temperature of the collimated lamp.
Since the device is mounted in close proximity to the caesium beam, it is
possible that the radiative spectrum is indeed focused on the atoms. In
addition, we observed that $\sim 50\%$ of the atoms are either excited to
higher levels or photoionized vieil2018 . However, we note that filters, such
as TPX, PTFE or Teflon brundermann2012terahertz , can be used to cut out the
low (to avoid $n\rightarrow n+1$ transitions) and high (to avoid direct
photoionization) frequency parts of the spectrum that lead to these effects
vieil2018 .
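A minimal sketch of the rate-equation idea described above (a toy three-level chain with made-up rates and frequencies, not the cited simulation): stimulated rates are the spontaneous rates multiplied by the blackbody occupation number $\bar{n}(\nu,T)$, and the populations are propagated with simple Euler steps.

```python
import math

H = 6.62607e-34    # Planck constant, J s
KB = 1.380649e-23  # Boltzmann constant, J/K

def nbar(nu, T):
    """Blackbody photon occupation number at frequency nu (Hz), temperature T (K)."""
    return 1.0 / math.expm1(H * nu / (KB * T))

# Toy three-level chain 2 -> 1 -> 0 with illustrative spontaneous rates (s^-1)
# and transition frequencies (Hz); none of these numbers are from the text.
A_RATES = {(2, 1): 1e5, (1, 0): 3e5}
NU = {(2, 1): 2e12, (1, 0): 3e12}

def one_over_e_time(T=None, t_max=50e-6, dt=1e-8):
    """1/e depopulation time of level 2; T=None means spontaneous decay only."""
    pop = [0.0, 0.0, 1.0]
    t = 0.0
    while t < t_max and pop[2] > math.exp(-1.0):
        for (hi, lo), a in A_RATES.items():
            rate = a * (1.0 + (nbar(NU[(hi, lo)], T) if T else 0.0))
            flow = rate * pop[hi] * dt
            pop[hi] -= flow
            pop[lo] += flow
        t += dt
    return t

print(f"lamp off: {one_over_e_time() * 1e6:.2f} us, "
      f"lamp on (1100 K): {one_over_e_time(T=1100) * 1e6:.2f} us")
```

The stimulated term shortens the 1/e time, qualitatively reproducing the trend of Fig. 4; the actual simulation of 2014PhRvA..89d3410C propagates a much larger matrix of caesium levels.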
We note that in the cryogenic environment of an antihydrogen experiment the
installation of such a high temperature lamp in the vicinity of the atoms
remains hypothetical. However, using transport of THz radiation by, for
example, a metallic light pipe is simple and efficient
brundermann2012terahertz . We investigated the transmission efficiency of the
lamp’s broadband spectrum with a $30\,\mathrm{cm}$ long copper tube (diameter:
$\sim 1\,\mathrm{cm}$) and could transfer $\geq 94.5\%$ of the radiation vieil2018 .
Figure 4: Experimentally measured lifetimes of the caesium
$40\mathrm{D}_{5/2}$ level with and without a lamp (globar type). The time
$t_{D}$, given on the x-axis, indicates the time delay of the field ionization
ramp with respect to the excitation laser. We include simulation results for a
1100 K blackbody spectrum (gray). To improve the readability, the experimental
signals are averaged over $0.1\,\mathrm{\mu s}$
## V Conclusions
This work reviewed different methods of generating light in the THz region to
stimulate the decay of Rydberg antihydrogen atoms towards ground state.
We commissioned a beamline to study Rydberg population transfer in alkaline
atoms. Caesium atoms are much easier to produce than (anti-)hydrogen and are
thus ideal for proof-of-principle studies. A $\sim 15\%$ population transfer
within the $n=36$ manifold was demonstrated using photomixing at $\sim
0.1\,\mathrm{THz}$. Such THz transitions between Rydberg states can be used to
mix the states of antihydrogen atoms while a laser deexcites the Rydberg state
distribution towards low $n$-manifolds wolz2019stimulated . Because
antihydrogen is formed (by collisions) in many Rydberg levels this mixing and
deexcitation requires several frequencies. The photomixer frequency range of
$\sim 0-2\,\mathrm{THz}$ is especially adapted to this purpose. An attractive
solution to generate a continuum around the few hundred $\mathrm{GHz}$ region
(needed for Rydberg states around $25\leq n\leq 30$) could be the use of
tapered amplifiers, i.e., semiconductor optical amplifiers, or the amplified
spontaneous emission output of an optical amplifier, as radiation inputs
towards the photomixer.
A deexcitation of the Cs $40\mathrm{D}_{5/2}$ level was observed using a
blackbody-type light source. However, the ionization fraction for a broadband source
lies around $\sim 50\%$ and is significantly elevated compared to the use of
narrowband light sources that emit sharp frequencies targeted towards single
$n\rightarrow n-1$ transitions. In combination with the lack of flexibility in
the source output power distribution as a function of the emitted wavelength,
broadband sources seem consequently rather unfit for the antimatter
application. In particular, the high ionization fraction must be pointed out
as a very harmful effect in the context of antihydrogen experiments, where
atoms are only available a few hundred at a time following a complicated
synthesis procedure at CERN’s Antiproton Decelerator.
We conclude that photomixing in particular has potential for application in
experiments aiming for a deexcitation of antihydrogen atoms. As pointed out,
the field of THz light sources is rapidly evolving and output powers in the mW
range have been demonstrated. In view of the theoretical studies published in
wolz2019stimulated and the complementary experimental conclusions drawn in
this manuscript, we next aim to demonstrate the first experimental
deexcitation of Rydberg hydrogen atoms to the ground state on a timescale of a
few tens of microseconds. Addressing this long-standing issue in the
antihydrogen community has the potential to pave the way for further
antimatter precision measurements at CERN.
## VI Acknowledgements
We dedicate this work to the memory of our co-author Emiliya Dimova who passed
away at the age of 50.
This work has been sponsored by the Wolfgang Gentner CERN Doctoral Student
Program of the German Federal Ministry of Education and Research (grant no.
05E15CHA, university supervision by Norbert Pietralla). It was supported by
the Studienstiftung des Deutschen Volkes and the Bulgarian Science Fund Grant
DN 18/14.
## VII Author contribution statement
All authors contributed to the work reported in this manuscript.
## References
* [1] M. Ahmadi, B. X. R. Alves, C. J. Baker, William Bertsche, et al. Antihydrogen accumulation for fundamental symmetry tests. Nature Communications, 8:681, Dec 2017.
* [2] N. Kuroda, S. Ulmer, D. J. Murtagh, S. Van Gorp, et al. A source of antihydrogen for in-flight hyperfine spectroscopy. Nature Communications, 5:3089, Jan 2014.
* [3] G. Gabrielse, R. Kalra, W. S. Kolthammer, R. McConnell, et al. Trapped antihydrogen in its ground state. Phys. Rev. Lett., 108:113002, Mar 2012.
* [4] G. Gabrielse, N. S. Bowden, P. Oxley, A. Speck, et al. Background-free observation of cold antihydrogen with field-ionization analysis of its states. Phys. Rev. Lett., 89(21):213401, Oct 2002.
* [5] C. Malbrunot, C. Amsler, S. Arguedas Cuendis, H. Breuker, et al. The ASACUSA antihydrogen and hydrogen program: results and prospects. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2116):20170273, Feb 2018.
* [6] F. Robicheaux. Atomic processes in antihydrogen experiments: a theoretical and computational perspective. Journal of Physics B: Atomic, Molecular and Optical Physics, 41(19):192001, Sep 2008.
* [7] B. Radics, D. J. Murtagh, Y. Yamazaki, and F. Robicheaux. Scaling behavior of the ground-state antihydrogen yield as a function of positron density and temperature from classical-trajectory monte carlo simulations. Phys. Rev. A, 90:032704, Sep 2014.
* [8] S. Jonsell and M. Charlton. Formation of antihydrogen beams from positron–antiproton interactions. New Journal of Physics, 21(7):073020, Jul 2019.
* [9] D. Krasnický, G. Testera, and N. Zurlo. Comparison of classical and quantum models of anti-hydrogen formation through charge exchange. Journal of Physics B: Atomic, Molecular and Optical Physics, 52(11):115202, May 2019.
* [10] D. Krasnický, R. Caravita, C. Canali, and G. Testera. Cross section for Rydberg antihydrogen production via charge exchange between Rydberg positroniums and antiprotons in a magnetic field. Phys. Rev. A, 94(2):022714, Aug 2016.
* [11] M. Doser, C. Amsler, A. Belov, G. Bonomi, et al. Exploring the WEP with a pulsed cold beam of antihydrogen. Classical and Quantum Gravity, 29(18):184009, Aug 2012.
* [12] Edward S. Chang. Radiative lifetime of hydrogenic and quasihydrogenic atoms. Phys. Rev. A, 31:495–498, Jan 1985.
* [13] T. Topçu and F. Robicheaux. Radiative cascade of highly excited hydrogen atoms in strong magnetic fields. Phys. Rev. A, 73(4):043405, Apr 2006.
* [14] T. Wolz, C. Malbrunot, M. Vieille-Grosjean, and D. Comparat. Stimulated decay and formation of antihydrogen atoms. Phys. Rev. A, 101:043412, Apr 2020.
* [15] T. F. Gallagher. Rydberg Atoms. Cambridge Monographs on Atomic, Molecular and Chemical Physics. Cambridge University Press, 1994.
* [16] D. Comparat and C. Malbrunot. Laser stimulated deexcitation of Rydberg antihydrogen atoms. Phys. Rev. A, 99:013418, Jan 2019.
* [17] P. Latzel, F. Pavanello, S. Bretin, M. Billet, et al. High efficiency UTC photodiode for high spectral efficiency THz links. In 2017 42nd International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz), pages 1–2, Aug 2017.
* [18] M. Hangyo. Development and future prospects of terahertz technology. Japanese Journal of Applied Physics, 54(12):120101, Dec 2015.
* [19] S. S. Dhillon, M. S. Vitiello, E. H. Linfield, A.G. Davies, et al. The 2017 terahertz science and technology roadmap. Journal of Physics D: Applied Physics, 50(4):043001, Jan 2017.
* [20] K. Zhong, W. Shi, D. Xu, P. Liu, et al. Optically pumped terahertz sources. Science China Technological Sciences, 60(12):1801–1818, Jun 2017.
* [21] T. Elsässer, K. Reimann, and M. Woerner. Concepts and Applications of Nonlinear Terahertz Spectroscopy. Morgan & Claypool Publishers, Feb 2019.
* [22] J. Ahn, A. V. Efimov, R. D. Averitt, and A. T. Taylor. Terahertz waveform synthesis via optical rectification of shaped ultrafast laser pulses. Optics Express, 11:2486, Oct 2003.
* [23] S. Preu, G. H. Döhler, S. Malzer, L. J. Wang, et al. Tunable, continuous-wave Terahertz photomixer sources and applications. Journal of Applied Physics, 109(6):061301–061301, Mar 2011.
* [24] Y. Liu, S. Park, and A. M. Weiner. Terahertz waveform synthesis via optical pulse shaping. Selected Topics in Quantum Electronics, IEEE Journal of, 2(3):709–719, Sep 1996.
* [25] A. J. Metcalf, V. R. Supradeepa, D. E. Leaird, A. M. Weiner, et al. Fully programmable two-dimensional pulse shaper for broadband line-by-line amplitude and phase control. Optics express, 21(23):28029–28039, Nov 2013.
* [26] M. Hamamda, P. Pillet, H. Lignier, and D. Comparat. Ro-vibrational cooling of molecules and prospects. Journal of Physics B: Atomic, Molecular and Optical Physics, 48(18):182001, Aug 2015.
* [27] A. I. Finneran, J. T. Good, D. B. Holland, P. B. Carroll, et al. Decade-spanning high-precision terahertz frequency comb. Phys. Rev. Lett., 114(16):163902, Apr 2015.
* [28] I. L. Glukhov, E. A. Nekipelov, and V. D. Ovsiannikov. Blackbody-induced decay, excitation and ionization rates for rydberg states in hydrogen and helium atoms. Journal of Physics B: Atomic, Molecular and Optical Physics, 43(12):125002, Jun 2010.
* [29] C. Seiler, J. A. Agner, P. Pillet, and F. Merkt. Radiative and collisional processes in translationally cold samples of hydrogen rydberg atoms studied in an electrostatic trap. Journal of Physics B: Atomic, Molecular and Optical Physics, 49(9):094006, Apr 2016.
* [30] P. R. Griffiths and C. C. Homes. Instrumentation for Far-Infrared Spectroscopy. Wiley Online Library, 2006.
* [31] S. M. Hooker. Developments in laser-driven plasma accelerators. Nature Photonics, 7(10):775–782, Sep 2013.
* [32] Y. Shibata, K. Ishi, S. Ono, Y. Inoue, et al. Broadband free electron laser by the use of prebunched electron beam. Phys. Rev. Lett., 78(14):2740, Apr 1997.
* [33] K. Nakajima. Laser-driven electron beam and radiation sources for basic, medical and industrial sciences. Proceeding of the Japan Academy, Series B, 91:223–245, 2015.
* [34] A. Wetzels, A. Gürtler, L. D. Noordam, and F. Robicheaux. Far-infrared Rydberg-Rydberg transitions in a magnetic field: Deexcitation of antihydrogen atoms. Phys. Rev. A, 73(6):062507, Jun 2006.
* [35] E. R. Brown. Milliwatt THz average output power from a photoconductive switch. In 2008 33rd International Conference on Infrared, Millimeter and Terahertz Waves (IRMMW-THz), pages 1–2. IEEE, 2008.
* [36] R. J. B. Dietz, B. Globisch, M. Gerhard, A. Velauthapillai, et al. 64 $\mu$W pulsed terahertz emission from growth optimized InGaAs/InAlAs heterostructures with separated photoconductive and trapping regions. Applied Physics Letters, 103(6):061103, Aug 2013.
* [37] P. K. Mandal and A. Speck. Half-cycle-pulse-train induced state redistribution of rydberg atoms. Phys. Rev. A, 81(1):013401, Oct 2010.
* [38] T. Kopyciuk. Deexcitation of one-dimensional Rydberg atoms with a chirped train of half-cycle pulses. Physics Letters A, 374(34):3464–3467, Jul 2010.
* [39] A. Takamine, R. Shiozuka, and H. Maeda. Population redistribution of cold rydberg atoms. In Proceedings of the 12th International Conference on Low Energy Antiproton Physics (LEAP2016), page 011025, Nov 2017.
* [40] E. Bründermann, H. Hübers, and M. F. Kimmitt. Terahertz Techniques, volume 151. Springer, 2012.
* [41] I. S. Vogelius, L. B. Madsen, and M. Drewsen. Rotational cooling of molecules using lamps. Journal of Physics B, 37:4571–4574, Nov 2004.
* [42] M. W. P. Cann. Light sources in the 0.15–20-$\mu$ spectral range. Applied optics, 8(8):1645–1661, Aug 1969.
* [43] W. L. Wolfe and G. J. Zissis. The infrared handbook, volume 1. Spie Press, 1978.
* [44] M. F. Kimmitt, J. E. Walsh, C. L. Platt, K. Miller, et al. Infrared output from a compact high pressure arc source. Infrared Physics and Technology, 37:471–477, Jun 1996.
* [45] H. Buijs. Incandescent Sources for Mid-and Far-Infrared Spectrometry. Wiley Online Library, 2002.
* [46] M. Abo-Bakr, J. Feikes, K. Holldack, P. Kuske, et al. Brilliant, Coherent Far-Infrared (THz) Synchrotron Radiation. Phys. Rev. Lett., 90(9):094801–+, Mar 2003.
* [47] T. W. Ducas, W. P. Spencer, A G. Vaidyanathan, W. H. Hamilton, et al. Detection of far-infrared radiation using rydberg atoms. Appl. Phys. Lett., 35(5):382–384, Aug 1979.
* [48] L. Hollberg and J. L. Hall. Measurement of the shift of Rydberg energy levels induced by blackbody radiation. Phys. Rev. Lett., 53(3):230, Jul 1984.
* [49] N. Šibalić, J. D. Pritchard, C. S. Adams, and K. J. Weatherill. ARC: An open-source library for calculating properties of alkali Rydberg atoms. Computer Physics Communications, 220:319–331, 2017.
* [50] D. Comparat. Molecular cooling via Sisyphus processes. Phys. Rev. A, 89(4):43410, Apr 2014.
* [51] M. Vieille Grosjean. Atomes de Rydberg : Etude pour la production d’une source d’électrons monocinétique. Désexcitation par radiation THz pour l’antihydrogène. PhD thesis, Laboratoire Aimé Cotton, Orsay, France, 2018.
2101.01050
Affiliations: (1) Department of Theoretical Physics, Baku State University, Z. Khalilov st. 23, AZ1148, Baku, Azerbaijan; (2) Department of Physics, Karadeniz Technical University, TR61080, Trabzon, Turkey; (3) Institute for Physical Problems, Baku State University, Z. Khalilov st. 23, AZ1148, Baku, Azerbaijan; (4) Azerbaijan State University of Economics, Istiqlaliyyat st. 6, AZ1001, Baku, Azerbaijan
# Analytical bound state solutions of the Dirac equation with the Hulthén plus
a class of Yukawa potential including a Coulomb-like tensor interaction
A. I. Ahmadov ([email protected]), M. Demirci ([email protected], corresponding author) (2), M. F. Mustamin (2), S. M. Aslanova (1), M. Sh. Orujova (4)
(Received: )
###### Abstract
We examine the bound state solutions of the Dirac equation under the spin and
pseudospin symmetries for a newly suggested combined potential, the Hulthén plus a
class of Yukawa potential including a Coulomb-like tensor interaction. An
improved scheme is employed to deal with the centrifugal (pseudo-centrifugal)
term. Using the Nikiforov-Uvarov and SUSYQM methods, we analytically derive
the relativistic energy eigenvalues and the associated Dirac spinor components
of the wave functions. We find that both methods give exactly the same results.
The reducibility of our results to some particular potential cases, useful for
other physical systems, is also discussed. We obtain complete agreement with
the findings of previous works. The spin and pseudospin bound state energy
spectra for various levels are presented in the absence as well as the
presence of tensor coupling. Both energy spectra are sensitive to the quantum
numbers $\kappa$ and $n$, as well as to the parameter $\delta$. We
also notice that the degeneracies between Dirac spin and pseudospin doublet
eigenstate partners are completely removed by the tensor interaction. Finally,
we present the parameter space of allowed bound-state regions of the potential
strength $V_{0}$ together with the constants $C_{S}$ and $C_{PS}$ of the two
considered symmetry limits.
###### Keywords:
Dirac equation, Hulthén and a class of Yukawa potentials, Nikiforov-Uvarov
Method, Supersymmetric Quantum Mechanics
###### pacs:
03.65.Ge Solutions of wave equations: bound states and 03.65.Pm Relativistic
wave equations
## 1 Introduction
In relativistic quantum mechanics (QM), the Dirac equation is used to describe
the dynamics of many composite and non-composite subatomic systems that possess
spin $1/2$ Dirac ; Herman ; Bagrov1 ; Greiner . The equation has been applied
to investigate physical phenomena in a wide range of topics, especially in
nuclear and hadronic physics. In the advancement of these areas, two kinds of
symmetries are introduced to the Dirac equation: spin and pseudospin
Ginocchio0 ; Ginocchio1 ; Ginocchio2 ; Ginocchio3 ; Ring . The spin symmetry
produces two degeneracies of states with quantum numbers ($n,l,j=l\mp s$),
where $n,l,s$, and $j$ denote the radial, orbital, spin, and total angular
momentum quantum numbers, respectively, allowing it to be considered
as a spin doublet. For the pseudospin symmetry case, there is a
quasi-degeneracy in which the degenerate states differ by two units of
orbital angular momentum: ($n,l,j=l+1/2$) and ($n-1,l+2,j=l+3/2$). These
pairs can also be regarded as a pseudospin doublet with quantum numbers
($\tilde{n}=n-1,\tilde{l}=l+1,\tilde{j}=\tilde{l}\pm\tilde{s}$). Here, the
pseudospin and pseudo-orbital angular momentum are denoted as $\tilde{s}=1/2$
and $\tilde{l}$, respectively Arima ; Hecht . Furthermore, the pseudo-orbital
angular momentum can be interpreted as the orbital angular momentum of the
Dirac spinor lower component Ginocchio0 .
Many works have been done on several applications of these two symmetries,
such as explaining the antinucleon spectrum of a nucleus Ginocchio1 ;
Ginocchio2 ; Ginocchio3 ; Ring , the process of nuclear deformation Bohr as
well as nuclear superdeformation Dudek , effective nucleus shell-model Trol ,
and small spin-orbit splitting in hadrons Page . Particularly, the pseudospin
symmetry has been implemented with several kinds of potentials, like the
harmonic oscillator Lisboa , Woods-Saxon Guo , Hulthén Soylu ; Ikhdair11 ;
Ikhdair112 ; Haouat08 , Yukawa Aydogdu11 ; Ikhdair12 ; Pakdel , Morse
Berkdemir ; Qiang , Mie-type AydogduMie ; HamzaviMie , Pöschl-Teller Chen ;
Jia1 or the Manning-Rosen Gao ; Yanar and recently with hyperbolic-type
potentials Karayer . Moreover, the influence of tensor interaction potential
on both types of symmetries shows that all doublets lose their degeneracies
Tensor04 . In regards to this idea, the inclusion of tensor potentials for
solving the Dirac equation has been carried out in many studies Akcay09 ;
Aydogdu10 ; Hamzavi10 ; Ikot15 ; Mousavi .
Supplementing the Dirac equation with scalar, vector, and tensor interactions
to describe various systems in an effective way may help us understand the
complexity of nuclear structure. A direct, full-scale investigation with
quantum chromodynamics seems hopelessly difficult at present. A many-body
physics approach is possible, but still does not offer a simple route. As a result, a
simplified, mean-field type approach is always valuable. An extreme version of
this would be to assume an effective potential generated by the full set of
particles, and look at the energy levels of a single nucleon in this effective
field. Since involved interactions might depend on the individual nucleon spin
as well, the most general structure allows these scalar, vector and tensor
potentials. Regarding these backgrounds, our main concern in the present study
is to examine the bound state solutions of the Dirac equation under the spin
and pseudospin symmetry limits with a newly suggested potential. The potential
consists of the Hulthén Hulten1 ; Hulten2
$V_{H}(r)=-\frac{Ze^{2}\delta e^{-\delta r}}{(1-e^{-\delta r})},$ (1)
plus a class of Yukawa-type potential
$\displaystyle V_{CY}(r)=-\frac{Ae^{-\delta r}}{r}-\frac{Be^{-2\delta
r}}{r^{2}}.$ (2)
The parameter $Z$ is the atomic number, while $\delta$ and $r$ are the
screening parameter and separation distance of the potential, respectively.
The parameters $A$ and $B$ indicate the interaction strengths. The Hulthén
potential is classified as a short-range potential, extensively applied to
describe the continuum and bound states of interacting systems. It has
been implemented in atomic, nuclear, and particle physics, particularly to
deal with strong coupling, where this potential can play a significant role
for a particle. The Yukawa potential Yukawa , on the other hand, is a
well-known effective potential that has successfully described the strong
interactions between nucleons. It is widely used in plasma physics, in which
it represents the potential for a charged particle affected by weakly non-
ideal plasma, and also in electrolytes and colloids. Briefly speaking, both
potentials are two simple representations of the screened Coulomb potential,
i.e., they show Coulombic behavior for small $r$ and decrease exponentially
as $r$ increases.
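These two limits can be checked numerically from Eqs. (1)-(2); the parameter values below are arbitrary illustrations, in units where $e^{2}=1$:

```python
import math

Z, E2, DELTA, A, B = 1.0, 1.0, 0.15, 1.0, 0.5  # illustrative parameters (e^2 = 1)

def V_hulthen(r):
    """Hulthen potential, Eq. (1)."""
    return -Z * E2 * DELTA * math.exp(-DELTA * r) / (1.0 - math.exp(-DELTA * r))

def V_yukawa_class(r):
    """Class of Yukawa potential, Eq. (2)."""
    return -A * math.exp(-DELTA * r) / r - B * math.exp(-2 * DELTA * r) / r**2

def V_total(r):
    """Combined potential, Eq. (3)."""
    return V_hulthen(r) + V_yukawa_class(r)

# Small r: the Hulthen part approaches the Coulomb potential -Z e^2 / r
for r in (1e-3, 1e-4, 1e-5):
    print(f"r={r:.0e}: r*V_H = {r * V_hulthen(r):+.6f}  (Coulomb limit: {-Z * E2:+.1f})")

# Large r: the total potential decays exponentially towards zero
print(f"V(10) = {V_total(10.0):.3e}, V(50) = {V_total(50.0):.3e}")
```

The product $r\,V_{H}(r)$ tends to $-Ze^{2}$ as $r\to 0$, while the combined potential is exponentially suppressed at large separations.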
Some previous attempts have provided satisfactory energy bound states by
examining both potentials separately. However, no one has considered them as a
linear combination so far. In this study, we propose their combination for the
first time, in order to obtain bound state solutions of the Dirac equation.
The various phenomena mentioned earlier can be investigated with this
combination, offering alternative perspectives. Explicitly, from the two
potentials above, we have
$V(r)=-\frac{Ze^{2}\delta e^{-\delta r}}{(1-e^{-\delta r})}-\frac{Ae^{-\delta
r}}{r}-\frac{Be^{-2\delta r}}{r^{2}}.$ (3)
For the tensor interaction, we use the following Coulomb-like potential
$U(r)=-\frac{H}{r},~{}H=\frac{Z_{a}Z_{b}}{4\pi\varepsilon_{0}},~{}r\geq
R_{c},$ (4)
where $Z_{a}$ and $Z_{b}$ represent the charges of the projectile $a$ and the
target nucleus $b$, respectively. $R_{c}$ denotes the Coulomb radius, whose
value is $7.78$ fm. In this work, we discuss the relativistic bound states in the
arbitrary $\kappa$-wave Dirac equation by using this newly proposed potential in
order to provide a more subtle formulation of physical properties,
particularly on the energy of bound and continuum states for any interacting
quantum systems.
In examining this system, we use two different widely used methods. The first
is the Nikiforov-Uvarov (NU) method Nikiforov within ordinary QM. The
procedure is based on solving a second-order linear differential equation by
transforming it into a generalized hypergeometric-type form. The second is the
Supersymmetry QM (SUSYQM) method Gendenshtein1 ; Gendenshtein2 . Supersymmetry
itself emerged as an attempt to unify all basic interactions in nature,
first introduced to relate the bosonic and fermionic sectors. Various standard QM
phenomena have been successfully formulated with this ambitious model Cooper1
; Cooper2 . This method has also been implemented to obtain the spin and
pseudospin solutions of the Dirac equation under various potentials (see
Refs. Zarrinkamar ; Maghsoodi ; Feizi ). In what follows, we provide the
relativistic bound state solutions for the above-mentioned combined system,
obtained by using both methods and compare their results.
We begin our discussion by constructing the Dirac equation in Sec. 2. We
separately present its spin symmetry case in Sec. 2.1 and pseudospin symmetry
case in Sec. 2.2. In Sec. 3, we provide our analytic results. We examine bound
state solutions for both symmetry cases by using the NU method in Sec. 3.1 and
then by using the SUSYQM method in Sec. 3.2. In Sec. 4, the reducibility of
our results to some particular potential cases is discussed. After that, we provide
the numerical predictions for the dependence of energy spectra on $\delta$,
$n$, $\kappa$ as well as other potential parameters in Sec. 5. Lastly, we
summarize our work and give concluding remarks in Sec. 6.
## 2 Governing Equation
In a relativistic description, the Dirac equation of a particle with mass $M$
influenced by a repulsive vector potential $V(\vec{r})$, an attractive scalar
potential $S(\vec{r})$, and a tensor potential $U(r)$ can be expressed in the
following general form (with units such that $\hbar=c=1$)
$\Big{[}\vec{\alpha}\cdot\vec{p}+\beta\Big{(}M+S(\vec{r})\Big{)}-i\beta\vec{\alpha}\cdot\hat{r}U(r)\Big{]}\psi(r,\theta,\phi)=\Big{[}E-V(\vec{r})\Big{]}\psi(r,\theta,\phi),$
(5)
where $\vec{p}=-i\vec{\nabla}$ and $E$ are respectively the momentum operator
and the relativistic energy of the system. $\vec{\alpha}$ and $\beta$ are the
$4\times 4$ Dirac matrices which are defined as
$\displaystyle\vec{\alpha}=\begin{pmatrix}0&\vec{\sigma}\\\
\vec{\sigma}&0\end{pmatrix},\qquad\beta=\begin{pmatrix}I&0\\\
0&-I\end{pmatrix},$ (6)
with $\vec{\sigma}$ and $I$ being the $2\times 2$ Pauli spin matrices and the
$2\times 2$ unit matrix, respectively.
For a particle within a spherical field, we can define the spin-orbit coupling
operator $\hat{K}=-\beta(\vec{\sigma}\cdot\vec{L}+1)$ with eigenvalue $\kappa$
alongside the total angular momentum operator $\vec{J}$. This operator
commutes with the Dirac Hamiltonian, regardless of the symmetry case under
consideration. Moreover, it forms a complete set of conserved quantities
together with $H^{2},K,J^{2},J_{z}$. The quantum number $\kappa$ is then used
to label the eigenstates, rather than the (pseudo-)orbital angular momentum. In the case of
spherical symmetry, the potentials $V(\vec{r})$ and $S(\vec{r})$ in Eq.(5)
depend only on the radial coordinate, such that $V(\vec{r})=V(r)$ and
$S(\vec{r})=S(r)$ where $r=|\vec{r}|$. The Dirac spinors can then be
classified in accordance with $\kappa$ and $n$ as
$\displaystyle\psi_{n\kappa}(r,\theta,\phi)=\frac{1}{r}\left(\begin{array}[]{ll}F_{n\kappa}(r)Y_{jm}^{l}(\theta,\varphi)\\\
iG_{n\kappa}(r)Y_{jm}^{\tilde{l}}(\theta,\varphi)\end{array}\right).$ (9)
In this equation, $F_{n\kappa}(r)$ represents the upper and $G_{n\kappa}(r)$
the lower components of the radial wave function. There is also the spin
spherical harmonic function $Y_{jm}^{l}(\theta,\varphi)$ and its pseudospin
counterpart $Y_{jm}^{\tilde{l}}(\theta,\varphi)$. Here, $m$ denotes the
angular momentum projection on the z-axis. For a given $\kappa=\pm 1,\pm
2,\ldots$, the spin and pseudospin cases have respectively
$l=|\kappa+1/2|-1/2$ and $\tilde{l}=|\kappa-1/2|-1/2$ to indicate their
orbital angular momentum, while $j=|\kappa|-1/2$ for their total angular
momentum. To connect $\kappa$ with the other quantum numbers, we have for the
spin symmetry
$\kappa=\left\\{\begin{array}[]{llcl}l=+(j+\frac{1}{2}),&~{}(p_{1/2},d_{3/2},\text{etc.}),&j=l-\frac{1}{2},&\hbox{unaligned
spin}~{}(\kappa>0),\\\
-(l+1)=-(j+\frac{1}{2}),&~{}(s_{1/2},p_{3/2},\text{etc.}),&j=l+\frac{1}{2},&\hbox{aligned
spin}~{}(\kappa<0),\end{array}\right.$ (10)
and for the pseudospin symmetry
$\kappa=\left\\{\begin{array}[]{llcl}+(\tilde{l}+1)=(j+\frac{1}{2}),&~{}(d_{3/2},f_{5/2},\text{etc.}),&j=\tilde{l}+\frac{1}{2},&\hbox{unaligned
spin}~{}(\kappa>0),\\\
-\tilde{l}=-(j+\frac{1}{2}),&~{}(s_{1/2},p_{3/2},\text{etc.}),&j=\tilde{l}-\frac{1}{2},&\hbox{aligned
spin}~{}(\kappa<0).\end{array}\right.$ (11)
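The bookkeeping of Eqs. (10)-(11), together with the centrifugal identities $\kappa(\kappa+1)=l(l+1)$ and $\kappa(\kappa-1)=\tilde{l}(\tilde{l}+1)$ used below, can be verified with a few lines of code (labels purely for illustration):

```python
def spin_labels(kappa):
    """(l, j) for the spin-symmetry convention of Eq. (10)."""
    if kappa > 0:        # unaligned spin, j = l - 1/2
        l = kappa
    else:                # aligned spin, j = l + 1/2
        l = -kappa - 1
    return l, abs(kappa) - 0.5

def pseudospin_labels(kappa):
    """(l_tilde, j) for the pseudospin-symmetry convention of Eq. (11)."""
    if kappa > 0:        # kappa = l_tilde + 1
        lt = kappa - 1
    else:                # kappa = -l_tilde
        lt = -kappa
    return lt, abs(kappa) - 0.5

for kappa in (-3, -2, -1, 1, 2, 3):
    l, j = spin_labels(kappa)
    lt, _ = pseudospin_labels(kappa)
    # centrifugal identities appearing after Eq. (16)
    assert kappa * (kappa + 1) == l * (l + 1)
    assert kappa * (kappa - 1) == lt * (lt + 1)
    print(f"kappa={kappa:+d}: l={l}, j={j}, l~={lt}")
```

For example, $\kappa=-1$ gives $(l,j)=(0,1/2)$, the $s_{1/2}$ aligned-spin state, while $\kappa=2$ gives the pseudospin state with $\tilde{l}=1$ and $j=3/2$, consistent with the tables above.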
Using the following identities Bjorken
$\displaystyle\begin{split}&\vec{\sigma}\cdot\vec{p}=\vec{\sigma}\cdot\hat{r}\left(\hat{r}\cdot\vec{p}+i\frac{\vec{\sigma}\cdot\vec{L}}{r}\right),\\\
&(\vec{\sigma}\cdot\vec{L})Y_{jm}^{\tilde{l}}(\theta,\varphi)=(\kappa-1)Y_{jm}^{\tilde{l}}(\theta,\varphi),\\\
&(\vec{\sigma}\cdot\vec{L})Y_{jm}^{l}(\theta,\varphi)=-(\kappa+1)Y_{jm}^{l}(\theta,\varphi),\\\
&(\vec{\sigma}\cdot\hat{r})Y_{jm}^{l}(\theta,\varphi)=-Y_{jm}^{\tilde{l}}(\theta,\varphi),\\\
&(\vec{\sigma}\cdot\hat{r})Y_{jm}^{\tilde{l}}(\theta,\varphi)=-Y_{jm}^{l}(\theta,\varphi)\end{split}$
(12)
inserting Eq.(9) into Eq.(5), and then separating the angular parts of the two spinor components, we can write the coupled radial Dirac equations as
$\displaystyle\left(\frac{d}{dr}+\frac{\kappa}{r}-U(r)\right)F_{n\kappa}(r)=\Big{(}M+E_{n\kappa}-\Delta(r)\Big{)}G_{n\kappa}(r),$
(13)
and
$\displaystyle\left(\frac{d}{dr}-\frac{\kappa}{r}+U(r)\right)G_{n\kappa}(r)=\Big{(}M-E_{n\kappa}+\Sigma(r)\Big{)}F_{n\kappa}(r),$
(14)
where $\Delta(r)=V(r)-S(r)$ and $\Sigma(r)=V(r)+S(r)$ have been used. Eliminating $G_{n\kappa}(r)$ and $F_{n\kappa}(r)$ in turn between Eq.(13) and Eq.(14), we obtain the following equations
$\displaystyle\begin{split}\bigg{[}\frac{d^{2}}{dr^{2}}-\frac{\kappa(\kappa+1)}{r^{2}}+\frac{2\kappa}{r}U(r)-\frac{dU(r)}{dr}-U^{2}(r)&-\big{(}M+E_{n\kappa}-\Delta(r)\big{)}\big{(}M-E_{n\kappa}+\Sigma(r)\big{)}\\\
&+\frac{\frac{d\Delta(r)}{dr}(\frac{d}{dr}+\frac{\kappa}{r}-U(r))}{M+E_{n\kappa}-\Delta(r)}\bigg{]}F_{n\kappa}(r)=0,\end{split}$
(15)
$\displaystyle\begin{split}\bigg{[}\frac{d^{2}}{dr^{2}}-\frac{\kappa(\kappa-1)}{r^{2}}+\frac{2\kappa}{r}U(r)+\frac{dU(r)}{dr}-U^{2}(r)&-\big{(}M+E_{n\kappa}-\Delta(r)\big{)}\big{(}M-E_{n\kappa}+\Sigma(r)\big{)}\\\
&-\frac{\frac{d\Sigma(r)}{dr}(\frac{d}{dr}-\frac{\kappa}{r}+U(r))}{M-E_{n\kappa}+\Sigma(r)}\bigg{]}G_{n\kappa}(r)=0,\end{split}$
(16)
with $\kappa(\kappa+1)=l(l+1)$ and $\kappa(\kappa-1)=\tilde{l}(\tilde{l}+1)$.
Two different limiting cases can be specified for these equations: Eq.(15) corresponds to the spin symmetry limit, while Eq.(16) corresponds to the pseudospin symmetry limit. In addition to the previously mentioned applications, these symmetries also play an important role in the magnetic moment and identical bands of nuclear structure.
### 2.1 Spin Symmetry Limit
The spin symmetry limit occurs when $d\Delta(r)/dr=0$, so that $\Delta(r)=C_{S}=\text{constant}$ Meng1 ; Meng2 , and hence Eq.(15) becomes
$\begin{split}\bigg{[}\frac{d^{2}}{dr^{2}}-\frac{\kappa(\kappa+1)}{r^{2}}+\frac{2\kappa}{r}U(r)&-\frac{dU(r)}{dr}-U^{2}(r)-\Big{(}M+E_{n\kappa}-C_{S}\Big{)}\Sigma(r)\\\
&+\Big{(}E_{n\kappa}^{2}-M^{2}+C_{S}(M-E_{n\kappa})\Big{)}\bigg{]}F_{n\kappa}(r)=0,\end{split}$
(17)
where $\kappa=l$ for $\kappa>0$ and $\kappa=-(l+1)$ for $\kappa<0$. The energy $E_{n\kappa}$ thus depends on $n$ and $l$, the quantum numbers associated with the spin symmetry. From Eq.(13), the lower spinor component can be expressed as
$\begin{split}G_{n\kappa}(r)=\frac{1}{M+E_{n\kappa}-C_{S}}\Big{(}\frac{d}{dr}+\frac{\kappa}{r}-U(r)\Big{)}F_{n\kappa}(r),\end{split}$
(18)
where, since $E_{n\kappa}\neq-M$ for $C_{S}=0$ (exact spin symmetry), only a real positive energy spectrum exists.
### 2.2 Pseudospin Symmetry Limit
In this limit, $d\Sigma/dr=0$, so that $\Sigma(r)=C_{PS}=\text{constant}$
Meng1 ; Meng2 . Eq.(16) then becomes
$\begin{split}\bigg{[}\frac{d^{2}}{dr^{2}}-\frac{\kappa(\kappa-1)}{r^{2}}+\frac{2\kappa}{r}U(r)&+\frac{dU(r)}{dr}-U^{2}(r)+\Big{(}M-E_{n\kappa}+C_{PS}\Big{)}\Delta(r)\\\
&-\Big{(}M^{2}-E_{n\kappa}^{2}+C_{PS}(M+E_{n\kappa})\Big{)}\bigg{]}G_{n\kappa}(r)=0\end{split}$
(19)
where $\kappa=-\tilde{l}$ for $\kappa<0$, and $\kappa=\tilde{l}+1$ for
$\kappa>0$. The energy $E_{n\kappa}$ depends on $n$ and $\tilde{l}$, the quantum numbers associated with the pseudospin symmetry. Note that the case $\tilde{l}\neq 0$ produces
degenerate states with $j=\tilde{l}\pm 1/2$. This is classified as $SU(2)$
pseudospin symmetry. From Eq. (14), the corresponding upper-spinor component
can be written as
$\begin{split}F_{n\kappa}(r)=\frac{1}{M-E_{n\kappa}+C_{PS}}\Big{(}\frac{d}{dr}-\frac{\kappa}{r}+U(r)\Big{)}G_{n\kappa}(r)\end{split}$
(20)
where now, since $E_{n\kappa}\neq M$ for $C_{PS}=0$ (exact pseudospin symmetry), only a real negative energy spectrum exists.
## 3 Analytical Treatment: Bound State Solutions
In this section, we treat the Dirac equation under the influence of the
proposed potential and find its bound state solutions through the NU and
SUSYQM methods.
### 3.1 Implementation of Nikiforov-Uvarov Method
#### 3.1.1 Spin Symmetry Case
We first consider Eq.(17), which contains the Hulthén plus a class of Yukawa potentials together with a Coulomb-like tensor potential. It can be solved exactly only for $\kappa=0$ or $\kappa=-1$ in the absence of the tensor interaction ($H=0$), since only then does the centrifugal term (proportional to $\kappa(\kappa+1)/r^{2}$) vanish. For arbitrary $\kappa$, an appropriate approximation must be employed for the centrifugal terms. We use the following improved approximation Greene , valid for $\delta r\ll 1$,
$\displaystyle\frac{1}{r}\approx\frac{2\delta e^{-\delta r}}{(1-e^{-2\delta
r})},\qquad\frac{1}{r^{2}}\approx\frac{4\delta^{2}e^{-2\delta
r}}{(1-e^{-2\delta r})^{2}}.$ (21)
This approximation provides good accuracy for small values of the potential parameter and has been commonly used to tackle the same issue (see Refs. Ahmadov1 ; Ahmadov2 and references therein). Under this approximation (for convenience, we substitute $\delta\rightarrow 2\delta$ in the Hulthén potential), our combined potential becomes
$V^{\prime}(r)=-\frac{(V_{0}+V_{0}^{\prime})e^{-2\delta r}}{1-e^{-2\delta
r}}-\frac{B^{\prime}e^{-4\delta r}}{(1-e^{-2\delta r})^{2}},$ (22)
with $V_{0}=2\delta Ze^{2}$, $V_{0}^{\prime}=2A\delta$, and
$B^{\prime}=4B\delta^{2}$.
Figure 1: The effect of approximation on our potential as a function of
separation distance $r$ for some values of parameter $\delta$.
To quantify the effect of the approximation on the potential, the total potential (3), its approximation (22), and the difference between them are depicted in Fig. 1 as functions of $r$ for different values of $\delta$. Here, we set $V_{0}=2$ fm$^{-1}$, $A=1$ fm$^{-1}$ and $B=1$ fm$^{-1}$. It is evident that the approximation becomes more accurate for small $\delta$: the difference is of order $10^{-3}$ and is almost independent of $r$. This confirms that Eq.(21) is a good approximation for the centrifugal term as the parameter $\delta$ becomes small.
For the general form of the spin symmetry case, we now consider the above
approximation scheme and the tensor potential in Eq. (4), so that Eq.(17)
becomes
$\begin{split}\bigg{[}\frac{d^{2}}{dr^{2}}-\frac{4\delta^{2}e^{-2\delta
r}}{(1-e^{-2\delta r})^{2}}\Big{(}\kappa(\kappa+1)&+2\kappa
H+H+H^{2}\Big{)}-\Big{(}M+E_{n\kappa}-C_{S}\Big{)}\Sigma(r)\\\
&+\Big{(}E_{n\kappa}^{2}-M^{2}+C_{S}(M-E_{n\kappa})\Big{)}\bigg{]}F_{n\kappa}(r)=0,\end{split}$
(23)
where $\Sigma(r)$ is taken as the potential (22).
Introducing the variable $s=e^{-2\delta r}$, which maps $r\in[0,\infty)$ onto $s\in(0,1]$, we can express the general form as
$\begin{split}\frac{d^{2}F_{n\kappa}}{ds^{2}}+\frac{1}{s}\frac{dF_{n\kappa}}{ds}+\bigg{[}&\frac{(V_{0}+V_{0}^{\prime})}{4\delta^{2}s(1-s)}(M+E_{n\kappa}-C_{S})+\frac{1}{4\delta^{2}s^{2}}\big{(}E_{n\kappa}^{2}-M^{2}+C_{S}(M-E_{n\kappa})\big{)}\\\
&+\frac{B^{\prime}(M+E_{n\kappa}-C_{S})}{4\delta^{2}(1-s)^{2}}-\frac{2\kappa
H+H+H^{2}+\kappa(\kappa+1)}{s(1-s)^{2}}\bigg{]}F_{n\kappa}=0.\end{split}$ (24)
We can further simplify this by defining
$\begin{split}&\alpha^{2}=\frac{(V_{0}+V_{0}^{\prime})(M+E_{n\kappa}-C_{S})}{4\delta^{2}},\\\
&\beta^{2}=\frac{M^{2}-E_{n\kappa}^{2}-C_{S}(M-E_{n\kappa})}{4\delta^{2}},\\\
&\gamma^{2}=-\frac{B^{\prime}(M+E_{n\kappa}-C_{S})}{4\delta^{2}},\\\
&\eta_{\kappa}=\kappa+H,\end{split}$ (25)
thus we arrive at the following form
$\begin{split}F_{n\kappa}^{\prime\prime}(s)+\frac{1-s}{s(1-s)}F_{n\kappa}^{\prime}(s)+\biggl{[}\frac{1}{s(1-s)}\biggr{]}^{2}\biggl{[}\alpha^{2}s(1-s)-\gamma^{2}s^{2}-\beta^{2}(1-s)^{2}-\eta_{\kappa}(\eta_{\kappa}+1)s\biggr{]}F_{n\kappa}(s)=0.\end{split}$
(26)
The tensor potential generates a new spin–orbit centrifugal term $\eta_{\kappa}(\eta_{\kappa}+1)$. The solutions of this equation must satisfy the boundary conditions $F_{n\kappa}(r=0)=0$ (i.e. at $s=1$) and $F_{n\kappa}(r\rightarrow\infty)\rightarrow 0$ (i.e. at $s=0$).
The above equation can be easily solved by means of the NU method. At this
stage, we follow the procedure presented in appendix A. Firstly, comparing
Eq.(26) with Eq.(126), we obtain
$\displaystyle\begin{split}&\tilde{\tau}(s)=1-s,\\\ &\sigma(s)=s(1-s),\\\
&\tilde{\sigma}(s)=\alpha^{2}s(1-s)-\beta^{2}(1-s)^{2}-\gamma^{2}s^{2}-\eta_{\kappa}(\eta_{\kappa}+1)s.\end{split}$
(27)
Following the factorization in Eq.(127), we then have
$\displaystyle F_{n\kappa}(s)=y_{n\kappa}(s)\phi(s),$ (28)
so that Eq.(26) reduces to a hypergeometric-type equation as in Eq.(128), with $y_{n\kappa}(s)$ identified as one of its solutions. Considering the condition in Eq.(129) for a suitable function $\phi(s)$, we obtain from relation (130)
$\pi(s)=\frac{{-s}}{2}\pm\sqrt{(a-k)s^{2}-(b-k)s+c},$ (29)
with
$\displaystyle\begin{split}&a=\frac{1}{4}+\alpha^{2}+\beta^{2}+\gamma^{2},\\\
&b=\alpha^{2}+2\beta^{2}-\eta_{\kappa}(\eta_{\kappa}+1),\\\
&c=\beta^{2}.\end{split}$ (30)
Requiring the discriminant of the quadratic under the square root in Eq.(29) to vanish, the constant $k$ is determined as
$k_{\pm}=(b-2c)\pm 2\sqrt{c(a-b)+c^{2}}.$ (31)
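The role of Eq.(31) is precisely to make the quadratic under the square root in Eq.(29) a perfect square; a quick numerical check with arbitrary sample values of $a$, $b$ and $c$ (chosen only so that the square root in Eq.(31) is real):

```python
import math

def k_pm(a, b, c):
    """The two k values of Eq.(31) that make the discriminant of Eq.(29) vanish."""
    root = math.sqrt(c * (a - b) + c**2)
    return (b - 2 * c) + 2 * root, (b - 2 * c) - 2 * root

def discriminant(a, b, c, k):
    """Discriminant of the quadratic (a-k)s^2 - (b-k)s + c under the root in Eq.(29)."""
    return (b - k)**2 - 4 * (a - k) * c

a, b, c = 2.0, 1.2, 0.49   # arbitrary sample values with c(a-b) + c^2 >= 0
for k in k_pm(a, b, c):
    assert abs(discriminant(a, b, c, k)) < 1e-12
```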
From Eq.(29), we obtain the following possibilities for each value of $k$
$\pi(s)=\frac{-s}{2}\pm\left\{\begin{array}{l}\Big{(}\sqrt{c}-\sqrt{c+a-b}\Big{)}s-\sqrt{c}\quad\text{for}\quad k_{+},\\ \Big{(}\sqrt{c}+\sqrt{c+a-b}\Big{)}s-\sqrt{c}\quad\text{for}\quad k_{-}.\end{array}\right.$ (32)
Here we have found four possible values of $\pi(s)$ from the NU method. We select the one for which $\tau(s)$ has a negative derivative, corresponding to the $k_{-}$ case, while the others are discarded for lack of physical significance. We then obtain
$\pi(s)=\sqrt{c}-s\left(\frac{1}{2}+\sqrt{c}+\sqrt{a-b+c}\right),$ (33)
$\tau(s)=1+2\sqrt{c}-2s\left(1+\sqrt{c}+\sqrt{a-b+c}\right).$ (34)
Considering all of these and using Eq.(131), we find the eigenvalue as
$\begin{split}\lambda=b-2c-2\sqrt{c^{2}+c(a-b)}-\left(\frac{1}{2}+{\sqrt{c}+\sqrt{a-b+c}}\right).\end{split}$
(35)
The hypergeometric-type equation admits a unique polynomial solution of degree $n$ for non-negative integer $n$, as in Eq.(133), with $\lambda_{m}\neq\lambda_{n}$ for $m=0,1,2,\ldots,n-1$. Consequently,
$\begin{split}\lambda_{n}&=2n\left[{1+\left({\sqrt{c}+\sqrt{a-b+c}}\right)}\right]+n(n-1).\end{split}$
(36)
By inserting Eq.(35) into Eq.(36) and explicitly solving this with
$c=\beta^{2}$, we find
$\begin{split}M^{2}-&E_{n\kappa}^{2}-C_{S}(M-E_{n\kappa})=\left[\frac{\alpha^{2}-\eta_{\kappa}(\eta_{\kappa}+1)-1/2-n(n+1)-(2n+1)\sqrt{\frac{1}{4}+\gamma^{2}+\eta_{\kappa}(\eta_{\kappa}+1)}}{n+\frac{1}{2}+\sqrt{\frac{1}{4}+\gamma^{2}+\eta_{\kappa}(\eta_{\kappa}+1)}}\delta\right]^{2},\end{split}$
(37)
where $n=0,1,2,\ldots$, $M>E_{n\kappa}$ and $E_{n\kappa}+M>C_{S}$. Note that the expression under the square root must be non-negative; otherwise there are no bound state solutions. We obtain two different energy solutions for each value of $n$ and $\kappa$, but the valid one is that which yields positive-energy bound states Ginocchio3 . Furthermore, degenerate states with different quantum numbers $n$ and $\kappa$ (or $l$) but the same energy eigenvalues are possible when $H=0$.
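Since $\alpha^{2}$ and $\gamma^{2}$ in Eq.(25) themselves depend on $E_{n\kappa}$, Eq.(37) is transcendental in the energy and must in practice be solved numerically. The sketch below does this by bisection for an illustrative parameter set in natural units ($M=1$); the values are not taken from the text:

```python
import math

# Illustrative parameters (natural units); not values used in the paper.
M, delta, n, kappa, H, CS = 1.0, 0.05, 0, -1, 0.0, 0.0
Ze2, A, B = 0.0, 0.1, 0.0  # strengths in V0 = 2*delta*Z*e^2, V0' = 2*A*delta, B' = 4*B*delta^2
V0, V0p, Bp = 2 * delta * Ze2, 2 * A * delta, 4 * B * delta**2

def residual(E):
    """Left-hand side minus right-hand side of the energy equation (37)."""
    eta = kappa + H
    alpha2 = (V0 + V0p) * (M + E - CS) / (4 * delta**2)
    gamma2 = -Bp * (M + E - CS) / (4 * delta**2)
    w = eta * (eta + 1)
    Q = math.sqrt(0.25 + gamma2 + w)
    num = alpha2 - w - 0.5 - n * (n + 1) - (2 * n + 1) * Q
    den = n + 0.5 + Q
    return M**2 - E**2 - CS * (M - E) - (num / den * delta)**2

# Bisection on (0, M): the residual changes sign, so a bound state exists there.
lo, hi = 0.0, M
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
E = 0.5 * (lo + hi)
assert abs(residual(E)) < 1e-8 and 0.99 < E < 1.0
```

For this parameter set the bound state lies just below $M$, consistent with the weak-coupling limit.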
Now, we attempt to obtain the associated wave function for the proposed
potential. By placing $\pi(s)$ and $\sigma(s)$ into Eq.(129), and then solving
the first order differential equation, one part of the factorization is found
to be
$\phi(s)=s^{\beta}(1-s)^{\xi},$ (38)
with
$\xi=\frac{1}{2}+\sqrt{\frac{1}{4}+\gamma^{2}+\eta_{\kappa}(\eta_{\kappa}+1)}.$
(39)
The other part with the hypergeometric-type function $y_{n\kappa}(s)$ has
polynomial solutions that can be obtained from the Rodrigues relation of
Eq.(134). Solving Eq.(135) for the spin symmetry case, we find
$\rho(s)=s^{2\beta}(1-s)^{2\xi-1},$ (40)
and then substituting this into Eq. (134), we obtain
$\begin{split}y_{n\kappa}(s)&=C_{n\kappa}(1-s)^{-2\xi+1}s^{-2\beta}\frac{{d^{n}}}{{ds^{n}}}\left[{s^{{2\beta}+n}(1-s)^{2\xi-1+n}}\right].\end{split}$
(41)
It is possible to simplify this by introducing the Jacobi polynomials
Abramowitz
$\begin{split}P_{n}^{(a,b)}(s)&=\frac{(-1)^{n}}{2^{n}n!(1-s)^{a}(1+s)^{b}}\frac{d^{n}}{ds^{n}}\left[{(1-s)^{a+n}(1+s)^{b+n}}\right].\end{split}$
(42)
From this relation we find
$P_{n}^{(a,b)}(1-2s)=\frac{1}{n!s^{a}(1-s)^{b}}\frac{d^{n}}{ds^{n}}\left[s^{a+n}(1-s)^{b+n}\right],$
(43)
thus
$\frac{d^{n}}{ds^{n}}\left[s^{a+n}(1-s)^{b+n}\right]=n!s^{a}(1-s)^{b}P_{n}^{(a,b)}(1-2s).$
(44)
By comparing the last expression with Eq. (41), one gets
$y_{n\kappa}(s)=C_{n\kappa}P_{n}^{(2\beta,2\xi-1)}(1-2s).$ (45)
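Relation (44), which underlies the identification of $y_{n\kappa}(s)$ with a Jacobi polynomial, can be checked numerically; the sketch below compares an $n=2$ finite-difference derivative of $s^{a+n}(1-s)^{b+n}$ with the right-hand side of Eq.(44), using the explicit finite-sum form of the Jacobi polynomial (the values of $a$, $b$ and $s$ are illustrative):

```python
import math

def binom(x, k):
    """Generalized binomial coefficient C(x, k) for integer k >= 0."""
    return math.gamma(x + 1) / (math.gamma(k + 1) * math.gamma(x - k + 1))

def jacobi(n, a, b, x):
    """Jacobi polynomial P_n^{(a,b)}(x) via its explicit finite sum."""
    return sum(binom(n + a, n - m) * binom(n + b, m)
               * ((x - 1) / 2)**m * ((x + 1) / 2)**(n - m)
               for m in range(n + 1))

n, a, b, s, h = 2, 0.7, 1.2, 0.3, 1e-4
f = lambda t: t**(a + n) * (1 - t)**(b + n)

# n = 2: second derivative by central differences vs. the right side of Eq.(44).
lhs = (f(s + h) - 2 * f(s) + f(s - h)) / h**2
rhs = math.factorial(n) * s**a * (1 - s)**b * jacobi(n, a, b, 1 - 2 * s)
assert abs(lhs - rhs) < 1e-4 * max(1.0, abs(rhs))
```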
Putting $\phi(s)$ of Eq.(38) and $y_{n\kappa}(s)$ of Eq.(45) into Eq.(28)
leads to
$F_{n\kappa}(s)=C_{n\kappa}s^{\beta}(1-s)^{\xi}P_{n}^{(2\beta,2\xi-1)}(1-2s).$
(46)
Implementing the identity of Jacobi polynomials Abramowitz
$\begin{split}P_{n}^{(a,b)}&(1-2s)=\frac{{\Gamma(n+a+1)}}{{n!\Gamma(a+1)}}\mathop{{}_{2}F_{1}}\left({-n,a+b+n+1,1+a;s}\right),\end{split}$
(47)
we can write down the upper component of the spinor in terms of the
hypergeometric polynomial as
$\begin{split}F_{n\kappa}(s)=&C_{n\kappa}s^{\beta}(1-s)^{\xi}\frac{\Gamma(n+2\beta+1)}{n!\Gamma(2\beta+1)}\mathop{{}_{2}F_{1}}\left({-n,{2\beta}+2\xi+n,1+{2\beta};s}\right).\end{split}$
(48)
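Identity (47), which converts the Jacobi polynomial into a terminating hypergeometric series, can be verified in the same spirit (sample values are again illustrative):

```python
import math

def binom(x, k):
    """Generalized binomial coefficient C(x, k) for integer k >= 0."""
    return math.gamma(x + 1) / (math.gamma(k + 1) * math.gamma(x - k + 1))

def jacobi(n, a, b, x):
    """Jacobi polynomial P_n^{(a,b)}(x) via its explicit finite sum."""
    return sum(binom(n + a, n - m) * binom(n + b, m)
               * ((x - 1) / 2)**m * ((x + 1) / 2)**(n - m)
               for m in range(n + 1))

def poch(x, m):
    """Pochhammer symbol (x)_m."""
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def hyp2f1_terminating(n, c2, c3, z):
    """2F1(-n, c2; c3; z): the series terminates after n + 1 terms."""
    return sum(poch(-n, m) * poch(c2, m) / (poch(c3, m) * math.factorial(m)) * z**m
               for m in range(n + 1))

n, a, b, s = 3, 0.6, 1.1, 0.37   # illustrative values
lhs = jacobi(n, a, b, 1 - 2 * s)
rhs = (math.gamma(n + a + 1) / (math.factorial(n) * math.gamma(a + 1))
       * hyp2f1_terminating(n, a + b + n + 1, a + 1, s))
assert abs(lhs - rhs) < 1e-10
```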
One can obtain $F_{n\kappa}(r)$ by substituting $s=e^{-2\delta r}$ into the above equation. Inserting this $r$-dependent function into Eq.(18) then yields the corresponding lower component as
$\displaystyle\begin{split}G_{n\kappa}(r)=\frac{D_{n\kappa}}{M+E_{n\kappa}-C_{S}}&\bigg{[}\frac{2\delta
n(2\beta+2\xi+n)\left(e^{-2\delta
r}\right)^{\beta+1}}{(2\beta+1)\left(1-e^{-2\delta
r}\right)^{-\xi}}\mathop{{}_{2}F_{1}}\left(1-n,2\xi+n+2\beta+1,2\beta+2;e^{-2\delta
r}\right)\\\ &+\left(\frac{2\delta\xi e^{-2\delta r}}{1-e^{-2\delta
r}}-2\beta\delta+\frac{\kappa+H}{r}\right)F_{n\kappa}(r)\bigg{]}.\end{split}$
(49)
Finally, using the following normalization
$\begin{split}\int\limits_{0}^{\infty}|R(r)|^{2}r^{2}dr&=\int\limits_{0}^{\infty}|\chi(r)|^{2}dr=\frac{1}{2\delta}\int\limits_{0}^{1}\frac{1}{s}|\chi(s)|^{2}ds=1,\end{split}$
(50)
and the following integral identity Abramowitz
$\begin{split}\int\limits_{0}^{1}&{dz(1-z)^{2(\nu+1)}z^{{2\mu}-1}}\biggl{[}{\mathop{{}_{2}F_{1}}(-n,2(\nu+\mu+1)+n,2\mu+1;z)}\biggr{]}^{2}\\\
&=\frac{{(n+{\nu}+1)n!\Gamma(n+{2\nu}+2)\Gamma(2\mu)\Gamma({2\mu}+1)}}{{(n+{\nu}+{\mu}+1)\Gamma(n+{2\mu}+1)\Gamma(2({\nu}+{\mu}+1)+n)}},\end{split}$
(51)
with $\nu>-3/2$ and $\mu>0$, we obtain the normalization constant of the spin
symmetric wave function as
$C_{n\kappa}=\sqrt{\frac{2\delta
n!(n+\xi+\beta)\Gamma({2\beta}+1)\Gamma(n+{2\beta}+2\xi)}{(n+\xi)\Gamma(2\beta)\Gamma(n+2\beta+1)\Gamma(n+2\xi)}}.$
(52)
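As a consistency check of Eq.(52), the normalization integral (50) can be evaluated numerically for the wave function (48). For the illustrative values below ($n=1$, $\beta=1$, $\xi=3/2$, $\delta=1/2$, not taken from the paper) the hypergeometric series terminates and composite Simpson quadrature recovers unity:

```python
import math

# Illustrative sample parameters (not taken from the paper).
n, beta, xi, delta = 1, 1.0, 1.5, 0.5
g = math.gamma

def poch(x, m):
    """Pochhammer symbol (x)_m."""
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def hyp2f1_terminating(n_, c2, c3, z):
    """2F1(-n_, c2; c3; z): a terminating series."""
    return sum(poch(-n_, m) * poch(c2, m) / (poch(c3, m) * math.factorial(m)) * z**m
               for m in range(n_ + 1))

# Normalization constant squared, from Eq.(52).
C2 = (2 * delta * math.factorial(n) * (n + xi + beta) * g(2 * beta + 1)
      * g(n + 2 * beta + 2 * xi)) / ((n + xi) * g(2 * beta)
      * g(n + 2 * beta + 1) * g(n + 2 * xi))

def F(s):
    """Normalized upper component, Eq.(48), in the variable s = exp(-2*delta*r)."""
    pre = g(n + 2 * beta + 1) / (math.factorial(n) * g(2 * beta + 1))
    return (math.sqrt(C2) * s**beta * (1 - s)**xi * pre
            * hyp2f1_terminating(n, 2 * beta + 2 * xi + n, 1 + 2 * beta, s))

# Composite Simpson evaluation of (1/(2*delta)) * int_0^1 F(s)^2/s ds, cf. Eq.(50);
# the integrand vanishes at both endpoints for these parameters.
N = 2000
h = 1.0 / N
total = 0.0
for i in range(1, N):
    s = i * h
    total += (4 if i % 2 else 2) * F(s)**2 / s
I = (h / 3) * total / (2 * delta)
assert abs(I - 1.0) < 1e-6
```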
#### 3.1.2 Pseudospin Symmetry Case
We now consider Eq.(19) and follow steps similar to those of the spin symmetry case. As before, the equation cannot be solved exactly for $\kappa\neq 0$ or $\kappa\neq 1$ in the absence of the tensor interaction. Applying the same approximation, Eq.(21), to the centrifugal terms of Eq.(19), the general differential equation for the pseudospin symmetry becomes
$\begin{split}\biggl{[}\frac{d^{2}}{dr^{2}}-\frac{4\delta^{2}e^{-2\delta
r}}{(1-e^{-2\delta r})^{2}}\Big{(}\kappa(\kappa-1)&+2\kappa
H-H+H^{2}\Big{)}+\Big{(}M-E_{n\kappa}+C_{PS}\Big{)}\Delta(r)\\\
&-\Big{(}M^{2}-E_{n\kappa}^{2}+C_{PS}(M+E_{n\kappa})\Big{)}\biggr{]}G_{n\kappa}(r)=0,\end{split}$
(53)
where we consider the potential in Eq.(22) for $\Delta(r)$ and a Coulomb-like
potential in Eq. (4) for the tensor interaction. We can simplify Eq.(53) by
defining $s=e^{-2\delta r}$ to obtain
$\begin{split}\frac{d^{2}G_{n\kappa}}{ds^{2}}+\frac{1}{s}\frac{dG_{n\kappa}}{ds}+&\bigg{[}\frac{-(V_{0}+V_{0}^{\prime})}{4\delta^{2}s(1-s)}(M-E_{n\kappa}+C_{PS})+\frac{1}{4\delta^{2}s^{2}}\bigg{(}E_{n\kappa}^{2}-M^{2}-C_{PS}(M+E_{n\kappa})\bigg{)}\\\
&-\frac{B^{\prime}(M-E_{n\kappa}+C_{PS})}{4\delta^{2}(1-s)^{2}}-\frac{2\kappa
H-H+H^{2}+\kappa(\kappa-1)}{s(1-s)^{2}}\bigg{]}G_{n\kappa}=0\end{split}$ (54)
Using the following definitions
$\begin{split}&\tilde{\alpha}^{2}=-\frac{(V_{0}+V_{0}^{\prime})(M-E_{n\kappa}+C_{PS})}{4\delta^{2}},\\\
&\tilde{\beta}^{2}=\frac{M^{2}-E_{n\kappa}^{2}+C_{PS}(M+E_{n\kappa})}{4\delta^{2}},\\\
&\tilde{\gamma}^{2}=\frac{B^{\prime}(M-E_{n\kappa}+C_{PS})}{4\delta^{2}},\\\
&\tilde{\eta}_{\kappa}=\kappa+H,\end{split}$ (55)
we can rewrite Eq.(54) as
$\begin{split}&\frac{d^{2}G_{n\kappa}}{ds^{2}}+\frac{1}{s}\frac{dG_{n\kappa}}{ds}+\frac{1}{s^{2}(1-s)^{2}}\bigg{[}\tilde{\alpha}^{2}s(1-s)-\tilde{\gamma}^{2}s^{2}-\tilde{\beta}^{2}(1-s)^{2}-\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)s\bigg{]}G_{n\kappa}=0.\end{split}$
(56)
As in the previous treatment, the solution must satisfy the boundary conditions $G_{n\kappa}(r=0)=0$ (i.e. at $s=1$) and $G_{n\kappa}(r\rightarrow\infty)\rightarrow 0$ (i.e. at $s=0$).
Comparing Eq.(56) with Eq.(126), we obtain
$\displaystyle\begin{split}&\tilde{\tau}(s)=1-s,\\\ &\sigma(s)=s(1-s),\\\
&\tilde{\sigma}(s)=\tilde{\alpha}^{2}s(1-s)-\tilde{\beta}^{2}(1-s)^{2}-\tilde{\gamma}^{2}s^{2}-\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)s,\end{split}$
(57)
which have a form similar to Eq.(27) and differ only in the last term. We then factorize the general solution as
$\displaystyle G_{n\kappa}(s)=\tilde{y}_{n\kappa}(s)\tilde{\phi}(s),$ (58)
and by using Eq.(57) as well as Eq.(129), we find
$\pi(s)=\frac{{-s}}{2}\pm\sqrt{(\tilde{a}-k)s^{2}-(\tilde{b}-k)s+\tilde{c}},$
(59)
where
$\begin{split}&\tilde{a}=\frac{1}{4}+\tilde{\alpha}^{2}+\tilde{\beta}^{2}+\tilde{\gamma}^{2},\\\
&\tilde{b}=\tilde{\alpha}^{2}+2\tilde{\beta}^{2}-\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1),\\\
&\tilde{c}=\tilde{\beta}^{2}.\end{split}$ (60)
Following the same procedure as in Eqs.(31)-(36), we obtain
$\begin{split}M^{2}-E_{n\kappa}^{2}&+C_{PS}(M+E_{n\kappa})=\left[\frac{\tilde{\alpha}^{2}-\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)-\frac{1}{2}-n(n+1)-(2n+1)\sqrt{\frac{1}{4}+\tilde{\gamma}^{2}+\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)}}{n+\frac{1}{2}+\sqrt{\frac{1}{4}+\tilde{\gamma}^{2}+\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)}}\delta\right]^{2},\end{split}$
(61)
where $n=0,1,2,\ldots$, $M>-E_{n\kappa}$ and $E_{n\kappa}<C_{PS}+M$. This relation shows that the pseudospin limit produces a quadratic eigenvalue equation, as in the previous case. Bound state solutions can only be achieved if the expression inside the square root is non-negative. Again we have two different energy solutions for each value of $n$ and $\kappa$; however, in this symmetry limit only the negative energy eigenvalues are valid, and there are no bound states among the positive ones ($E\neq M+C$) Ginocchio3 . Furthermore, we encounter degenerate states with different quantum numbers $n$ and $\kappa$ (or $\tilde{l}$) but the same energy spectrum when $H=0$.
Let us now examine the radial part of the eigenfunctions. By using Eq.(135),
the corresponding weight function can be written as
$\tilde{\rho}(s)=s^{2\tilde{\beta}}(1-s)^{2\tilde{\xi}-1},$ (62)
with
$\tilde{\xi}=\frac{1}{2}+\sqrt{\frac{1}{4}+\tilde{\gamma}^{2}+\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)},$
(63)
so that
$\begin{split}\tilde{y}_{n\kappa}(s)&=\tilde{C}_{n\kappa}(1-s)^{1-2\tilde{\xi}}s^{-2\tilde{\beta}}\frac{{d^{n}}}{{ds^{n}}}\left({s^{{2\tilde{\beta}}+n}(1-s)^{2\tilde{\xi}-1+n}}\right).\end{split}$
(64)
As in the spin symmetry case, applying the Jacobi polynomials gives
$\tilde{y}_{n\kappa}(s)=\tilde{C}_{n\kappa}P_{n}^{(2\tilde{\beta},2\tilde{\xi}-1)}(1-2s).$
(65)
From $\tilde{\phi}(s)$ and $\tilde{y}_{n\kappa}(s)$, the lower component of
the spinor wave function becomes
$G_{n\kappa}(s)=\tilde{C}_{n\kappa}s^{\tilde{\beta}}(1-s)^{\tilde{\xi}}P_{n}^{(2\tilde{\beta},2\tilde{\xi}-1)}(1-2s).$
(66)
Proceeding further using Eq.(47), we can express this equation in terms of the hypergeometric polynomial as
$\begin{split}G_{n\kappa}(s)=&\tilde{C}_{n\kappa}s^{\tilde{\beta}}(1-s)^{\tilde{\xi}}\frac{\Gamma(n+2\tilde{\beta}+1)}{n!\Gamma(2\tilde{\beta}+1)}\mathop{{}_{2}F_{1}}\left({-n,{2\tilde{\beta}}+2\tilde{\xi}+n,1+{2\tilde{\beta}};s}\right).\end{split}$
(67)
We can express this as an $r$-dependent function by substituting $s=e^{-2\delta r}$; inserting it into Eq.(20), we obtain the other component as
$\displaystyle\begin{split}F_{n\kappa}(r)=\frac{\tilde{D}_{n\kappa}}{M-E_{n\kappa}+C_{PS}}&\bigg{[}\frac{2\delta
n(2\tilde{\beta}+2\tilde{\xi}+n)\left(e^{-2\delta
r}\right)^{\tilde{\beta}+1}}{(2\tilde{\beta}+1)\left(1-e^{-2\delta
r}\right)^{-\tilde{\xi}}}\mathop{{}_{2}F_{1}}\left(1-n,2\tilde{\xi}+2\tilde{\beta}+n+1,2\tilde{\beta}+2;e^{-2\delta
r}\right)\\\ &+\left(\frac{2\delta\tilde{\xi}e^{-2\delta r}}{1-e^{-2\delta
r}}-2\tilde{\beta}\delta-\frac{\kappa+H}{r}\right)G_{n\kappa}(r)\bigg{]}.\end{split}$
(68)
Implementing the normalization condition (50) and the integral identity (51), the normalization constant for the pseudospin symmetry becomes
$\tilde{C}_{n\kappa}=\sqrt{\frac{2\delta
n!(n+\tilde{\xi}+\tilde{\beta})\Gamma(2\tilde{\beta}+1)\Gamma(n+2\tilde{\beta}+2\tilde{\xi})}{(n+\tilde{\xi})\Gamma(2\tilde{\beta})\Gamma(n+2\tilde{\beta}+1)\Gamma(n+2\tilde{\xi})}}.$
(69)
As a final remark on the NU method, notice that the following replacements
$\begin{split}&\kappa(\kappa+1)\leftrightarrow\kappa(\kappa-1)~{}(\text{or}~{}\kappa\leftrightarrow\kappa\pm
1),\\\ &F_{n\kappa}\leftrightarrow
G_{n\kappa},~{}E^{+}_{n\kappa}\leftrightarrow-E^{-}_{n\kappa},\\\
&(V_{0}+V_{0}^{\prime})\leftrightarrow-(V_{0}+V_{0}^{\prime}),C_{S}\leftrightarrow-
C_{PS},\\\
&\beta^{2}\leftrightarrow\tilde{\beta}^{2},~{}\gamma^{2}\leftrightarrow-\tilde{\gamma}^{2},~{}\alpha^{2}\leftrightarrow-\tilde{\alpha}^{2},\end{split}$
(70)
enable us to produce the negative energy solutions of the pseudospin symmetry case directly from the positive energy solutions of the spin symmetry case. That is, under the above replacements Eqs.(37) and (48) yield Eqs.(61) and (67), respectively, and vice versa.
### 3.2 Implementation of the SUSYQM Method
We now implement the SUSYQM method for both symmetry cases. The discussion follows the conventions of appendix A of Ref. Ahmadov1 .
#### 3.2.1 Spin Symmetry Case
According to SUSYQM, the ground-state wave function $F_{0}(r)$ of Eq.(17) satisfies
$F_{0}(r)=N\exp\left(-\int W(r)dr\right),$ (71)
with normalization constant $N$. $W(r)$ is known as the superpotential and can
be used to define the supersymmetric partner potentials Cooper1 ; Cooper2
$V_{\pm}(r)=W^{2}(r)\pm W^{\prime}(r),$ (72)
which is also known as the Riccati equation. Here we take its particular
solution as
$W(r)=A-\frac{Be^{-2\delta r}}{1-e^{-2\delta r}},$ (73)
with unknown constants $A$ and $B$. To find the solution of Eq.(23) via
SUSYQM, we rewrite the equation in general form as
$\frac{d^{2}F_{n\kappa}}{dr^{2}}=\Big{(}V_{\rm eff}(r)-E\Big{)}F_{n\kappa}.$
(74)
Substituting $V_{-}(r)=V_{\rm eff}(r)-E_{0}$ (where $E_{0}$ denotes the ground-state energy) and Eq.(73) into Eq.(72), and comparing corresponding terms on the left- and right-hand sides, we obtain
$\displaystyle A^{2}$ $\displaystyle=4\delta^{2}\beta^{2},$ (75)
$\displaystyle 2AB+2\delta B$
$\displaystyle=4\delta^{2}\alpha^{2}-4\delta^{2}\eta_{\kappa}(\eta_{\kappa}+1),$
(76) $\displaystyle 2AB+B^{2}$
$\displaystyle=4\delta^{2}\left(\alpha^{2}+\gamma^{2}\right).$ (77)
The requirements $B>0$ and $A<0$ are needed for the wave function to behave properly at the boundaries. Then, from Eqs.(76) and (77) we find the following relations
$\begin{split}B&=\frac{2\delta\pm\sqrt{4\delta^{2}+16\delta^{2}(\gamma^{2}+\eta_{\kappa}(\eta_{\kappa}+1))}}{2}=\delta\pm
2\delta\sqrt{\frac{1}{4}+\eta_{\kappa}(\eta_{\kappa}+1)+\gamma^{2}}.\end{split}$
(78) $A=-\frac{B}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B}.$ (79)
Returning to Eq.(73), we note that $W(r)\rightarrow A$ as $r\rightarrow\infty$. Inserting Eq.(73) into Eq.(72), the supersymmetric partner potentials take the forms
$\displaystyle\begin{split}V_{-}(r)&=\biggl{[}A^{2}-\frac{(2AB+2\delta
B)e^{-2\delta r}}{1-e^{-2\delta r}}+\frac{(B^{2}-2\delta B)e^{-4\delta
r}}{(1-e^{-2\delta r})^{2}}\biggr{]},\\\
V_{+}(r)&=\left[A^{2}-\frac{(2AB-2\delta B)e^{-2\delta r}}{1-e^{-2\delta
r}}+\frac{(B^{2}+2\delta B)e^{-4\delta r}}{(1-e^{-2\delta
r})^{2}}\right].\end{split}$ (80)
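That Eq.(80) indeed follows from inserting the superpotential (73) into Eq.(72) is easy to confirm numerically, using a finite-difference derivative for $W^{\prime}(r)$ (the parameter values are arbitrary):

```python
import math

A, B, delta = -1.3, 2.1, 0.4   # arbitrary sample values with A < 0, B > 0

def W(r):
    """Superpotential of Eq.(73)."""
    s = math.exp(-2 * delta * r)
    return A - B * s / (1 - s)

def V_pm(r, sign):
    """Partner potentials of Eq.(80): sign = +1 gives V_+, sign = -1 gives V_-."""
    s = math.exp(-2 * delta * r)
    return (A**2 - (2 * A * B - sign * 2 * delta * B) * s / (1 - s)
            + (B**2 + sign * 2 * delta * B) * s**2 / (1 - s)**2)

h = 1e-5
for r in (0.7, 1.9):
    Wp = (W(r + h) - W(r - h)) / (2 * h)      # numerical W'(r)
    assert abs(W(r)**2 - Wp - V_pm(r, -1)) < 1e-6   # V_- = W^2 - W'
    assert abs(W(r)**2 + Wp - V_pm(r, +1)) < 1e-6   # V_+ = W^2 + W'
```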
From these two equations, which differ from each other only by additive constants, we can introduce their invariant forms as Gendenshtein1 ; Gendenshtein2
$\displaystyle\begin{split}R(B_{1})&=V_{+}(B,r)-V_{-}(B_{1},r)=\left[A^{2}-A_{1}^{2}\right]\\\
&=\left[-\frac{B}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B}\right]^{2}-\left[-\frac{B+2\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B+2\delta}\right]^{2},\end{split}$
(81)
or more generally
$\displaystyle\begin{split}R(B_{i})&=V_{+}(B+(i-1)2\delta,r)-V_{-}(B+i2\delta,r)\\\
&=\Bigg{[}-\frac{B+(i-1)2\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B+(i-1)2\delta}\Bigg{]}^{2}-\Bigg{[}-\frac{B+i2\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B+i2\delta}\Bigg{]}^{2}.\end{split}$
(82)
Continuing this procedure and substituting
$\,B_{n}=B_{n-1}+2\delta=B+2n\delta$, the whole discrete spectrum of
Hamiltonian $H_{-}(B)$ in general becomes
$4\delta^{2}\beta^{2}=E_{0}^{2}+\sum_{i=1}^{n}R(B_{i}).$ (83)
By setting $E_{0}=0$, we find
$\displaystyle\begin{split}4&\delta^{2}\beta^{2}=\sum\limits_{i=1}^{n}R(B_{i})\\\
&=\left(-\frac{B}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B}\right)^{2}-\left(-\frac{B}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B}\right)^{2}\\\
&+\left(-\frac{B+2\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B+2\delta}\right)^{2}-\left(-\frac{B+2\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B+2\delta}\right)^{2}\\\
&+\cdots-\\\
&+\biggl{(}-\frac{B+2(n-1)\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B+2(n-1)\delta}\biggr{)}^{2}-\left(-\frac{B+2(n-1)\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B+2(n-1)\delta}\right)^{2}\\\
&+\left(-\frac{B+2n\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{(B+2n\delta)}\right)^{2},\end{split}$
(84)
so that
$\beta^{2}=\frac{1}{4\delta^{2}}\left[-\frac{B+2n\delta}{2}+\frac{2\delta^{2}(\alpha^{2}+\gamma^{2})}{B+2n\delta}\right]^{2}.$
(85)
By using Eq.(78) and inserting $\beta^{2}$ of Eq.(25) into this expression, we
obtain the corresponding energy equation as
$\begin{split}M^{2}-E_{n\kappa}^{2}-&C_{S}(M-E_{n\kappa})\\\
&=\delta^{2}\bigg{[}\frac{2\left(\alpha^{2}+\gamma^{2}\right)}{1+2n+2\sqrt{\gamma^{2}+\eta_{\kappa}(\eta_{\kappa}+1)+\frac{1}{4}}}-\frac{1}{2}\left(1+2n+2\sqrt{\gamma^{2}+\eta_{\kappa}(\eta_{\kappa}+1)+\frac{1}{4}}\right)\bigg{]}^{2},\end{split}$
(86)
which is identical to the previous result of the NU method, Eq.(37).
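The equivalence of the SUSYQM result (86) with the NU result (37) can also be confirmed numerically for random values of $\alpha^{2}$, $\gamma^{2}$, $\eta_{\kappa}$, $n$ and $\delta$:

```python
import math
import random

def rhs_nu(alpha2, gamma2, eta, n, delta):
    """Right-hand side of the NU energy equation (37)."""
    w = eta * (eta + 1)
    Q = math.sqrt(0.25 + gamma2 + w)
    num = alpha2 - w - 0.5 - n * (n + 1) - (2 * n + 1) * Q
    return ((num / (n + 0.5 + Q)) * delta)**2

def rhs_susy(alpha2, gamma2, eta, n, delta):
    """Right-hand side of the SUSYQM energy equation (86)."""
    w = eta * (eta + 1)
    D = 1 + 2 * n + 2 * math.sqrt(gamma2 + w + 0.25)
    return delta**2 * (2 * (alpha2 + gamma2) / D - 0.5 * D)**2

random.seed(0)
for _ in range(100):
    alpha2 = random.uniform(0.0, 5.0)
    gamma2 = random.uniform(0.0, 2.0)
    eta = random.uniform(-3.0, 3.0)   # eta*(eta+1) >= -1/4, so the sqrt stays real
    n = random.randint(0, 4)
    delta = random.uniform(0.01, 0.5)
    a = rhs_nu(alpha2, gamma2, eta, n, delta)
    b = rhs_susy(alpha2, gamma2, eta, n, delta)
    assert abs(a - b) <= 1e-9 * max(1.0, abs(a))
```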
Using the superpotential of Eq.(73) in Eq.(71), the corresponding ground-state component of the wave function can be written as
$\begin{split}F_{0}(r)&=N\exp\left(-\int W(r)dr\right)\\\
&=N\exp\left[\int\left(-A+\frac{Be^{-2\delta r}}{1-e^{-2\delta
r}}\right)dr\right]\\\
&=Ne^{-Ar}\exp\left[\frac{B}{2\delta}\int\frac{d(1-e^{-2\delta
r})}{1-e^{-2\delta r}}\right]\\\ &=Ne^{-Ar}(1-e^{-2\delta
r})^{\frac{B}{2\delta}}.\end{split}$ (87)
We can see that $F_{0}(r)\rightarrow 0$ for $r\rightarrow 0$ since $B>0$, while $F_{0}(r)\rightarrow 0$ for $r\rightarrow\infty$ since $A<0$.
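Finally, the ground state (87) satisfies $F_{0}^{\prime}(r)/F_{0}(r)=-W(r)$, as Eq.(71) requires; a quick numerical confirmation with arbitrary sample parameters:

```python
import math

A, B, delta = -0.9, 1.7, 0.3   # arbitrary sample values with A < 0, B > 0

def W(r):
    """Superpotential of Eq.(73)."""
    s = math.exp(-2 * delta * r)
    return A - B * s / (1 - s)

def log_F0(r):
    """Logarithm of the ground state (87), dropping the constant N."""
    return -A * r + (B / (2 * delta)) * math.log(1 - math.exp(-2 * delta * r))

h = 1e-6
for r in (0.8, 1.5, 3.0):
    dlogF0 = (log_F0(r + h) - log_F0(r - h)) / (2 * h)   # numerical (log F0)'
    assert abs(dlogF0 + W(r)) < 1e-5                     # (log F0)' = -W(r)
```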
#### 3.2.2 Pseudospin Symmetry Case
Within SUSYQM, the ground-state wave function $G_{0}(r)$ of Eq.(53) can be written as
$G_{0}(r)=\tilde{N}\exp\left(-\int\tilde{W}(r)dr\right),$ (88)
where the normalization constant is now $\tilde{N}$. The supersymmetric
partner potentials for the current consideration can be written as
$\displaystyle\tilde{V}_{\pm}(r)=\tilde{W}^{2}(r)\pm\tilde{W}^{\prime}(r).$
(89)
The particular solution is now
$\tilde{W}(r)=\tilde{A}-\frac{\tilde{B}e^{-2\delta r}}{1-e^{-2\delta r}},$
(90)
with the unknown constants $\tilde{A}$ and $\tilde{B}$. We can rewrite Eq.(53)
in the following general form
$\frac{d^{2}G_{n\kappa}}{dr^{2}}=\Big{(}\tilde{V}_{\rm
eff}(r)-\tilde{E}\Big{)}G_{n\kappa}.$ (91)
Substituting $\tilde{V}_{-}(r)=\tilde{V}_{\rm eff}(r)-\tilde{E}_{0}$
($\tilde{E}_{0}$ represents the ground-state energy) and Eq.(90) into Eq.(89)
we find
$\displaystyle\tilde{A}^{2}=$ $\displaystyle 4\delta^{2}\tilde{\beta}^{2},$
(92) $\displaystyle 2\tilde{A}\tilde{B}+2\delta\tilde{B}=$ $\displaystyle
4\delta^{2}\tilde{\alpha}^{2}-4\delta^{2}\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1),$
(93) $\displaystyle 2\tilde{A}\tilde{B}+\tilde{B}^{2}=$ $\displaystyle
4\delta^{2}\left(\tilde{\alpha}^{2}+\tilde{\gamma}^{2}\right).$ (94)
The same argument as in the spin symmetry case leads to the conditions $\tilde{A}<0$ and $\tilde{B}>0$, so that from Eqs.(93) and (94) we obtain
$\begin{split}\tilde{B}=\frac{2\delta\pm\sqrt{4\delta^{2}+16\delta^{2}(\tilde{\gamma}^{2}+\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1))}}{2}=\delta\pm
2\delta\sqrt{\frac{1}{4}+\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)+\tilde{\gamma}^{2}},\end{split}$
(95)
$\tilde{A}=-\frac{\tilde{B}}{2}+\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}}.$
(96)
Returning to Eq.(90), we note that $\tilde{W}(r)\rightarrow\tilde{A}$ as $r\rightarrow\infty$. Substituting Eq.(90) into Eq.(89) leads to
$\begin{split}\tilde{V}_{-}(r)&=\biggl{[}\tilde{A}^{2}-\frac{(2\tilde{A}\tilde{B}+2\delta\tilde{B})e^{-2\delta
r}}{1-e^{-2\delta r}}+\frac{(\tilde{B}^{2}-2\delta\tilde{B})e^{-4\delta
r}}{(1-e^{-2\delta r})^{2}}\biggr{]},\\\
\tilde{V}_{+}(r)&=\left[\tilde{A}^{2}-\frac{(2\tilde{A}\tilde{B}-2\delta\tilde{B})e^{-2\delta
r}}{1-e^{-2\delta r}}+\frac{(\tilde{B}^{2}+2\delta\tilde{B})e^{-4\delta
r}}{(1-e^{-2\delta r})^{2}}\right].\end{split}$ (97)
By using these relations, their invariant forms can be introduced as
Gendenshtein1 ; Gendenshtein2
$\begin{split}R(\tilde{B}_{1})&=\tilde{V}_{+}(\tilde{B},r)-\tilde{V}_{-}(\tilde{B}_{1},r)=\left(\tilde{A}^{2}-\tilde{A}_{1}^{2}\right)\\\
&=\bigg{[}\frac{\tilde{B}}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}}\bigg{]}^{2}-\bigg{[}\frac{\tilde{B}+2\delta}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}+2\delta}\bigg{]}^{2},\end{split}$
(98)
or
$\displaystyle\begin{split}R(\tilde{B}_{i})&=\tilde{V}_{+}\big{(}\tilde{B}+(i-1)2\delta,r\big{)}-\tilde{V}_{-}\big{(}\tilde{B}+i2\delta,r\big{)}\\\
&=\bigg{[}\frac{\tilde{B}+(i-1)2\delta}{2}-\frac{2(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})\delta^{2}}{\tilde{B}+(i-1)2\delta}\bigg{]}^{2}-\bigg{[}\frac{\tilde{B}+i2\delta}{2}-\frac{2(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})\delta^{2}}{\tilde{B}+i2\delta}\bigg{]}^{2}.\end{split}$
(99)
Continuing this and using
$\tilde{B}_{n}=\tilde{B}_{n-1}+2\delta=\tilde{B}+2n\delta$, the complete
spectrum of $H_{-}(\tilde{B})$ becomes
$4\delta^{2}\tilde{\beta}^{2}=\tilde{E}_{0}^{2}+\sum_{i=1}^{n}R(\tilde{B}_{i}).$
(100)
Setting $\tilde{E}_{0}=0$, we obtain
$\displaystyle\begin{split}4\delta^{2}\tilde{\beta}^{2}&=\sum\limits_{i=1}^{n}R(\tilde{B}_{i})\\\
&=\left(\frac{\tilde{B}}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}}\right)^{2}-\bigg{(}\frac{\tilde{B}}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}}\bigg{)}^{2}\\\
&+\left(\frac{\tilde{B}+2\delta}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}+2\delta}\right)^{2}-\left(\frac{\tilde{B}+2\delta}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}+2\delta}\right)^{2}\\\
&+\cdots-\\\
&+\biggl{(}\frac{\tilde{B}+2(n-1)\delta}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}+2(n-1)\delta}\biggr{)}^{2}-\left(\frac{\tilde{B}+2(n-1)\delta}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}+2(n-1)\delta}\right)^{2}\\\
&+\left(\frac{\tilde{B}+2n\delta}{2}-\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{(\tilde{B}+2n\delta)}\right)^{2},\end{split}$
(101)
and hence
$\tilde{\beta}^{2}=\frac{1}{4\delta^{2}}\left(-\frac{\tilde{B}+2n\delta}{2}+\frac{2\delta^{2}(\tilde{\alpha}^{2}+\tilde{\gamma}^{2})}{\tilde{B}+2n\delta}\right)^{2}.$
(102)
Using $\tilde{\beta}$ in Eq.(55) and $\tilde{B}$ of Eq.(95), the energy
spectrum equation becomes
$\begin{split}M^{2}-E_{n\kappa}^{2}&+C_{PS}(M+E_{n\kappa})\\\
&=\delta^{2}\bigg{[}\frac{2\left(\tilde{\alpha}^{2}+\tilde{\gamma}^{2}\right)}{1+2n+2\sqrt{\tilde{\gamma}^{2}+\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)+\frac{1}{4}}}-\frac{1}{2}\left(1+2n+2\sqrt{\tilde{\gamma}^{2}+\tilde{\eta}_{\kappa}(\tilde{\eta}_{\kappa}-1)+\frac{1}{4}}\right)\bigg{]}^{2},\end{split}$
(103)
which is identical to Eq.(61). From the superpotential $\tilde{W}(r)$, we can
express the eigenfunction $G_{0}(r)$ as
$\begin{split}G_{0}(r)&=\tilde{N}\exp\left(-\int\tilde{W}(r)dr\right)\\\
&=\tilde{N}\exp\left[-\int\left(\tilde{A}-\frac{\tilde{B}e^{-2\delta
r}}{1-e^{-2\delta r}}\right)dr\right]\\\
&=\tilde{N}e^{-\tilde{A}r}\exp\left[\frac{\tilde{B}}{2\delta}\int\frac{d(1-e^{-2\delta
r})}{1-e^{-2\delta r}}\right]\\\ &=\tilde{N}e^{-\tilde{A}r}(1-e^{-2\delta
r})^{\frac{\tilde{B}}{2\delta}},\end{split}$ (104)
where now, for $r\rightarrow 0$, $G_{0}(r)\rightarrow 0$ and $\tilde{B}>0$,
whilst for $r\rightarrow\infty$, $G_{0}(r)\rightarrow 0$ and $\tilde{A}<0$.
## 4 Particular cases
Now we examine some particular cases of the bound state energy eigenvalues in
Eq.(37) and Eq.(61). By adjusting the relevant parameters in both cases, we
can recover several well-known potentials that are useful for other physical
systems. We then compare the corresponding energy spectra with previous works.
1\. S-wave case:
The s-wave cases are directly obtained for $l=0$ and $\tilde{l}=0$ ($\kappa=1$
for pseudospin symmetry and $\kappa=-1$ for spin symmetry), so that the
spin–orbit coupling term vanishes. The corresponding energy eigenvalue
equation reduces to
$\displaystyle\begin{split}M^{2}-E_{n,-1}^{2}-C_{S}(M-E_{n,-1})=\delta^{2}\left[\frac{\alpha^{2}-1/2-H(H-1)-n(n+1)-(2n+1)\sqrt{\frac{1}{4}+\gamma^{2}+H(H-1)}}{n+\frac{1}{2}+\sqrt{\frac{1}{4}+\gamma^{2}+H(H-1)}}\right]^{2}\end{split}$
(105)
for the spin symmetry, and
$\displaystyle\begin{split}&M^{2}-E_{n,1}^{2}+C_{PS}(M+E_{n,1})=\delta^{2}\left[\frac{\tilde{\alpha}^{2}-1/2-H(H+1)-n(n+1)-(2n+1)\sqrt{\frac{1}{4}+\tilde{\gamma}^{2}+H(H+1)}}{n+\frac{1}{2}+\sqrt{\frac{1}{4}+\tilde{\gamma}^{2}+H(H+1)}}\right]^{2}\end{split}$
(106)
for the pseudospin symmetry limit. Their corresponding wave functions are
$\displaystyle\begin{split}F_{n,-1}(r)=N_{n}&e^{-r\vartheta}(1-e^{-2\delta
r})^{(1+\zeta)/2}P_{n}^{\left(\vartheta/\delta,\zeta\right)}(1-2e^{-2\delta
r}),\\\ G_{n,1}(r)=\tilde{N}_{n}&e^{-r\tilde{\vartheta}}(1-e^{-2\delta
r})^{(1+\tilde{\zeta})/2}P_{n}^{\left(\tilde{\vartheta}/\delta,\tilde{\zeta}\right)}(1-2e^{-2\delta
r}),\end{split}$ (107)
where we have introduced the following relations
$\displaystyle\begin{split}\vartheta=&\sqrt{M^{2}-E_{n,-1}^{2}-C_{S}(M-E_{n,-1})},\qquad\zeta=\sqrt{1+4\gamma^{2}+4H(H-1)},\\\
\tilde{\vartheta}=&\sqrt{M^{2}-E_{n,1}^{2}+C_{PS}(M+E_{n,1})},\qquad\tilde{\zeta}=\sqrt{1+4\tilde{\gamma}^{2}+4H(H+1)}.\end{split}$
(108)
2\. Dirac-Hulthén problem:
For $V^{\prime}_{0}=B=0$, our potential reduces to the Hulthén potential, and
the energy eigenvalue for the spin symmetry becomes
$\displaystyle
M^{2}-E_{n\kappa}^{2}-C_{S}(M-E_{n\kappa})=4\delta^{2}\left[\frac{Ze^{2}(M+E_{n\kappa}-C_{S})}{4\delta(n+\kappa+H+1)}-\frac{(n+\kappa+H+1)}{2}\right]^{2},$
(109)
while for the pseudospin symmetry
$\displaystyle
M^{2}-E_{n\kappa}^{2}+C_{PS}(M+E_{n\kappa})=\delta^{2}\left[\frac{-Ze^{2}(M-E_{n\kappa}+C_{PS})}{2\delta(n+\kappa+H)}-(n+\kappa+H)\right]^{2}.$
(110)
In the limit of vanishing tensor interaction ($H=0$), these results coincide
with the expressions obtained in Eq.(35) and Eq.(47) of Ref. Soylu, and also
with the results in Ref. Ikhdair11. The corresponding wave functions for both
symmetry cases can be expressed as
$\displaystyle\begin{split}F_{n\kappa}(r)&=N_{n\kappa}e^{-r\xi}(1-e^{-2\delta
r})^{\eta_{\kappa}+1}P_{n}^{\left(\xi/\delta,2\eta_{\kappa}+1\right)}(1-2e^{-2\delta
r}),\\\ G_{n\kappa}(r)&=\tilde{N}_{n\kappa}e^{-r\tilde{\xi}}(1-e^{-2\delta
r})^{\eta_{\kappa}}P_{n}^{\left(\tilde{\xi}/\delta,2\eta_{\kappa}-1\right)}(1-2e^{-2\delta
r}),\end{split}$ (111)
with
$\displaystyle\begin{split}\xi=\sqrt{M^{2}-E_{n\kappa}^{2}-C_{S}(M-E_{n\kappa})},\qquad\tilde{\xi}=\sqrt{M^{2}-E_{n\kappa}^{2}+C_{PS}(M+E_{n\kappa})}.\end{split}$
(112)
3\. Dirac-Yukawa problem:
We also notice that setting $V_{0}=B=0$ gives us the energy spectrum for the
Yukawa potential
$\displaystyle
M^{2}-E_{n\kappa}^{2}-C_{S}(M-E_{n\kappa})=4\delta^{2}\left[\frac{A(M+E_{n\kappa}-C_{S})}{4\delta(n+\kappa+H+1)}-\frac{(n+\kappa+H+1)}{2}\right]^{2}$
(113)
for the spin symmetry, and
$\displaystyle
M^{2}-E_{n\kappa}^{2}+C_{PS}(M+E_{n\kappa})=\delta^{2}\left[\frac{-A(M-E_{n\kappa}+C_{PS})}{2\delta(n+\kappa+H)}-(n+\kappa+H)\right]^{2}$
(114)
for the pseudospin symmetry case. For $H=0$, these equations are identical to
Eq.(30) and Eq.(31) of Ref. Aydogdu11 for the spin symmetry, and to Eq.(25)
and Eq.(43) of Ref. Ikhdair12 for the pseudospin symmetry case. Their spinor
wave functions have the same form as Eq.(111) for each case.
4\. Dirac-Coulomb-like problem:
Taking the limit $\delta\rightarrow 0$ in the Yukawa potential, we obtain the
well-known Coulomb-like potential $V(r)=-A/r$. The energy spectra for the two
symmetry cases are, respectively,
$\displaystyle\begin{split}(E_{n\kappa}-M)(E_{n\kappa}+M-C_{S})=-\frac{A^{2}(-C_{S}+E_{n\kappa}+M)^{2}}{4(n+\kappa+H+1)^{2}}\Rightarrow
E^{S}_{n\kappa}=\frac{A^{2}(C_{S}-M)+4M(n+\kappa+H+1)^{2}}{A^{2}+4(n+\kappa+H+1)^{2}},\end{split}$
(115)
$\displaystyle\begin{split}(E_{n\kappa}+M)(C_{PS}-E_{n\kappa}+M)=\frac{A^{2}(C_{PS}-E_{n\kappa}+M)^{2}}{4(n+\kappa+H)^{2}}\Rightarrow
E^{PS}_{n\kappa}=\frac{A^{2}(C_{PS}+M)-4M(n+\kappa+H)^{2}}{A^{2}+4(n+\kappa+H)^{2}}.\end{split}$
(116)
For $H=0$, these results are respectively identical to Eqs.(59) and (60) of
Ref. AydogduMie with the replacement $B\rightarrow A$. They also agree with
the results in Eqs.(53) and (56) of Ref. Ikhdair12. Moreover, if
$C_{S}=C_{PS}=0$, these results reduce to the Dirac–Coulomb problem as
$\displaystyle\begin{split}&E^{S}_{n\kappa}=M\left[\frac{4(n+\kappa+1)^{2}-A^{2}}{4(n+\kappa+1)^{2}+A^{2}}\right],~{}~{}E^{PS}_{n\kappa}=-M\left[\frac{4(n+\kappa)^{2}-A^{2}}{4(n+\kappa)^{2}+A^{2}}\right]\end{split}$
(117)
We note that the same expressions can also be obtained by taking the limit
$\delta\rightarrow 0$ of the Hulthén potential under the replacement
$A\leftrightarrow Ze^{2}$.
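The explicit Coulomb-like energy in Eq.(115) can be cross-checked numerically against the implicit quadratic relation it was solved from. The following is a minimal Python sketch (not part of the original derivation); the parameter values are illustrative, borrowed from the benchmarks used later in Section 5 ($A=1$, $C_{S}=5$, $M=4.76$ fm$^{-1}$):

```python
def E_spin_coulomb(n, kappa, H, M, Cs, A):
    """Explicit spin-symmetry Coulomb-like energy, Eq. (115)."""
    N2 = (n + kappa + H + 1) ** 2
    return (A * A * (Cs - M) + 4 * M * N2) / (A * A + 4 * N2)

def residual(E, n, kappa, H, M, Cs, A):
    """Implicit relation of Eq. (115); vanishes at the eigenvalue."""
    N2 = (n + kappa + H + 1) ** 2
    return (E - M) * (E + M - Cs) + A * A * (E + M - Cs) ** 2 / (4 * N2)

E = E_spin_coulomb(n=0, kappa=1, H=0, M=4.76, Cs=5.0, A=1.0)  # about 4.494 fm^-1
```

Substituting the explicit value back into the implicit relation gives a residual at the level of floating-point round-off, confirming the algebraic solution.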
5\. Dirac-inversely quadratic Yukawa problem:
When the parameters $V_{0}$ and $V^{\prime}_{0}$ are set to zero, we find the
energy spectrum for the inversely quadratic Yukawa potential as follows:
$\displaystyle\begin{split}M^{2}&-E_{n\kappa}^{2}-C_{S}(M-E_{n\kappa})=\left[\frac{\eta_{\kappa}(\eta_{\kappa}+1)+\frac{1}{2}+n(n+1)+(2n+1)\sqrt{\frac{1}{4}+\gamma^{2}+\eta_{\kappa}(\eta_{\kappa}+1)}}{n+\frac{1}{2}+\sqrt{\frac{1}{4}+\gamma^{2}+\eta_{\kappa}(\eta_{\kappa}+1)}}\delta\right]^{2}\end{split}$
(118)
for the spin symmetry limit and
$\displaystyle\begin{split}M^{2}&-E_{n\kappa}^{2}+C_{PS}(M+E_{n\kappa})=\left[\frac{\eta_{\kappa}(\eta_{\kappa}-1)+\frac{1}{2}+n(n+1)+(2n+1)\sqrt{\frac{1}{4}+\tilde{\gamma}^{2}+\eta_{\kappa}(\eta_{\kappa}-1)}}{n+\frac{1}{2}+\sqrt{\frac{1}{4}+\tilde{\gamma}^{2}+\eta_{\kappa}(\eta_{\kappa}-1)}}\delta\right]^{2}\end{split}$
(119)
for the pseudospin symmetry limit.
6\. Dirac-Kratzer–Fues problem:
In the limit $\delta\to 0$, the inversely quadratic Yukawa potential can be
approximated as
$\displaystyle V_{CY}=\lim_{\delta\to 0}\left(-\frac{Ae^{-\delta
r}}{r}-\frac{Be^{-2\delta
r}}{r^{2}}\right)\simeq-\frac{A}{r}-\frac{B}{r^{2}},$ (120)
where $A=2r_{e}D_{e}$ and $B=-r_{e}^{2}D_{e}$. This form is well known as the
Kratzer–Fues potential. The energy spectra for both symmetries with this
potential are
$\displaystyle\begin{split}M^{2}&-E_{n\kappa}^{2}-C_{S}(M-E_{n\kappa})=\frac{A^{2}(M-C_{S}+E_{n\kappa})^{2}}{\left(2n+1+2\sqrt{B(C_{S}-E_{n\kappa}-M)+(\eta_{\kappa}+\frac{1}{2})^{2}}\right)^{2}}\end{split}$
(121)
and
$\displaystyle\begin{split}M^{2}&-E_{n\kappa}^{2}+C_{PS}(M+E_{n\kappa})=\frac{A^{2}(C_{PS}-E_{n\kappa}+M)^{2}}{\left(2n+1+2\sqrt{B(C_{PS}-E_{n\kappa}+M)+(\eta_{\kappa}-\frac{1}{2})^{2}}\right)^{2}}\end{split}$
(122)
for the spin and pseudospin symmetry cases, respectively. These results are
exactly the same as Eqs.(38) and (30) of Ref. HamzaviMie for $C=0$,
$A\rightarrow B$ and $B\rightarrow-A$.
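The limit in Eq.(120) can be checked numerically by comparing the screened form with its $\delta\to 0$ limit at a small screening parameter. This is an illustrative sketch, not from the paper; the sample values of $r$, $A$, and $B$ are arbitrary:

```python
import math

def V_CY(r, A, B, delta):
    """Screened potential before taking the limit in Eq. (120)."""
    return -A * math.exp(-delta * r) / r - B * math.exp(-2 * delta * r) / r ** 2

def V_kratzer_fues(r, A, B):
    """Kratzer-Fues form, the delta -> 0 limit of Eq. (120)."""
    return -A / r - B / r ** 2

# For a tiny delta, the two forms agree to high accuracy at moderate r.
diff = abs(V_CY(1.5, 2.0, 3.0, 1e-9) - V_kratzer_fues(1.5, 2.0, 3.0))
```

For $\delta r\ll 1$ the exponentials differ from unity by $O(\delta r)$, so the two potentials agree to that order, consistent with Eq.(120).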
7\. Non-relativistic limit:
By setting $C_{S}=H=0$ and replacing $E_{n\kappa}+M\rightarrow 2m$ and
$E_{n\kappa}-M\rightarrow E_{nl}$ in Eqs.(25), (26) and (37), we obtain the
non-relativistic solutions of the Hulthén plus a class of Yukawa potential.
The resulting energy eigenvalue is
$\displaystyle\begin{split}E_{nl}=-\frac{1}{2m}\left[\frac{\delta\left(-\frac{m\left(A+Ze^{2}\right)}{\delta}+(2n+1)\sqrt{l^{2}+l+\frac{1}{4}-2mB}+l(l+1)+n(n+1)+\frac{1}{2}\right)}{\sqrt{l^{2}+l+\frac{1}{4}-2mB}+n+\frac{1}{2}}\right]^{2}.\end{split}$
(123)
Furthermore, setting $B=0$ simplifies the above equation to
$\displaystyle
E_{nl}=-\frac{1}{2m}\left[\frac{\delta(l+n+1)^{2}-m\left(A+Ze^{2}\right)}{(l+n+1)}\right]^{2},~{}(n,l=0,1,2,\ldots),$
(124)
which coincides with Eq.(67) of Ref. Ikhdair112 upon setting $d_{0}=0$,
$V_{0}\rightarrow A+Ze^{2}$ and $\delta\rightarrow 2\delta$. The same result
is obtained from Eq.(27) of Ref. Haouat08 if we replace $\delta\rightarrow
2\delta$ and $\alpha\rightarrow A+Ze^{2}$, and it is also identical to
Eq.(35) of Ref. Ikhdair12 if we set $A\rightarrow A+Ze^{2}$. We note that,
for the s-wave case ($l=0$), Eq.(124) provides the exact result in the
familiar nonrelativistic limit. Finally, setting $\delta\rightarrow 0$ and
$A=0$ in Eq.(124) yields the energy spectrum of the nonrelativistic Coulomb
field as
$\displaystyle E_{nl}=-\frac{m\left(Ze^{2}\right)^{2}}{2(l+n+1)^{2}}.$ (125)
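The chain of special cases in Eqs.(123)-(125) is explicit enough to verify numerically. The following Python sketch (illustrative only; the parameter values are arbitrary) implements the three formulas and checks that Eq.(123) reduces to Eq.(124) for $B=0$, and that Eq.(124) reduces to Eq.(125) for $\delta\to 0$ and $A=0$:

```python
import math

def E_123(n, l, m, A, Ze2, B, delta):
    """Eq. (123): nonrelativistic spectrum of the Hulthen plus
    class-of-Yukawa potential (requires delta != 0)."""
    s = math.sqrt(l * l + l + 0.25 - 2.0 * m * B)
    num = delta * (-m * (A + Ze2) / delta
                   + (2 * n + 1) * s + l * (l + 1) + n * (n + 1) + 0.5)
    return -(num / (s + n + 0.5)) ** 2 / (2.0 * m)

def E_124(n, l, m, A, Ze2, delta):
    """Eq. (124): the B = 0 special case of Eq. (123)."""
    N = l + n + 1
    return -((delta * N * N - m * (A + Ze2)) / N) ** 2 / (2.0 * m)

def E_125(n, l, m, Ze2):
    """Eq. (125): nonrelativistic Coulomb spectrum (delta -> 0, A = 0)."""
    return -m * Ze2 ** 2 / (2.0 * (l + n + 1) ** 2)
```

The reduction of Eq.(123) to Eq.(124) follows from $\sqrt{l^{2}+l+1/4}=l+1/2$ when $B=0$, which collapses the numerator to $\delta(l+n+1)^{2}-m(A+Ze^{2})$.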
## 5 Numerical Evaluations and Discussion
In this section, we perform numerical evaluations of our analytical results.
We analyze the dependence of the energy spectrum on the potential parameters
for several quantum numbers. Since natural units are used in this study, the
eigenvalues could be expressed in arbitrary units; nevertheless, we use
fm$^{-1}$ units for the parameters in our calculations to obtain more
realistic descriptions.
Table 1: Bound state energy eigenvalues (in fm$^{-1}$) of the spin symmetry case for various values of $n$ and $l$ in the absence ($H=0$) and presence ($H=5$) of the tensor interaction. $l$ | $n,\kappa<0$ | $(l,j=l+1/2)$ | $E_{n,\kappa<0}(H=0)$ | $E_{n,\kappa<0}(H=5)$ | $n,\kappa>0$ | $(l,j=l-1/2)$ | $E_{n,\kappa>0}(H=0)$ | $E_{n,\kappa>0}(H=5)$
---|---|---|---|---|---|---|---|---
1 | 0,-2 | 0$p_{3/2}$ | 0.24181258 | 0.24725816 | 0,1 | 0$p_{1/2}$ | 0.24181258 | 0.26229015
2 | 0,-3 | 0$d_{5/2}$ | 0.24408024 | 0.24408024 | 0,2 | 0$d_{3/2}$ | 0.24408024 | 0.26915052
3 | 0,-4 | 0$f_{7/2}$ | 0.24725817 | 0.24181258 | 0,3 | 0$f_{5/2}$ | 0.24725817 | 0.27694672
4 | 0,-5 | 0$g_{9/2}$ | 0.25134955 | 0.24045289 | 0,4 | 0$g_{7/2}$ | 0.25134955 | 0.28568689
1 | 1,-2 | 1$p_{3/2}$ | 0.24407876 | 0.25134791 | 1,1 | 1$p_{1/2}$ | 0.24407876 | 0.26914833
2 | 1,-3 | 1$d_{5/2}$ | 0.24725666 | 0.24725666 | 1,2 | 1$d_{3/2}$ | 0.24725666 | 0.27694432
3 | 1,-4 | 1$f_{7/2}$ | 0.25134791 | 0.24407876 | 1,3 | 1$f_{5/2}$ | 0.25134791 | 0.28568428
4 | 1,-5 | 1$g_{9/2}$ | 0.25635671 | 0.24181038 | 1,4 | 1$g_{7/2}$ | 0.25635671 | 0.29537745
1 | 2,-2 | 2$p_{3/2}$ | 0.24725314 | 0.25635387 | 2,1 | 2$p_{1/2}$ | 0.24725314 | 0.27694119
2 | 2,-3 | 2$d_{5/2}$ | 0.25134496 | 0.25134496 | 2,2 | 2$d_{3/2}$ | 0.25134496 | 0.28568098
3 | 2,-4 | 2$f_{7/2}$ | 0.25635387 | 0.24725314 | 2,3 | 2$f_{5/2}$ | 0.25635387 | 0.29537396
4 | 2,-5 | 2$g_{9/2}$ | 0.26228527 | 0.24407133 | 2,4 | 2$g_{7/2}$ | 0.26228527 | 0.30603052
1 | 3,-2 | 3$p_{3/2}$ | 0.25133807 | 0.26228075 | 3,1 | 3$p_{1/2}$ | 0.25133807 | 0.28567666
2 | 3,-3 | 3$d_{5/2}$ | 0.25634875 | 0.25634875 | 3,2 | 3$d_{3/2}$ | 0.25634875 | 0.29536954
3 | 3,-4 | 3$f_{7/2}$ | 0.26228075 | 0.25133807 | 3,3 | 3$f_{5/2}$ | 0.26228075 | 0.30602597
4 | 3,-5 | 3$g_{9/2}$ | 0.26914103 | 0.24723546 | 3,4 | 3$g_{7/2}$ | 0.26914103 | 0.31765756
Table 2: Bound state energy eigenvalues (in fm$^{-1}$) of the pseudospin symmetry case for various values of $n$ and $\widetilde{l}$ in the absence ($H=0$) and presence ($H=5$) of the tensor interaction. $\widetilde{l}$ | $n,\kappa<0$ | $(l,j)$ | $E_{n,\kappa<0}(H=0)$ | $E_{n,\kappa<0}(H=5)$ | $n-1,\kappa>0$ | $(l+2,j+1)$ | $E_{n,\kappa>0}(H=0)$ | $E_{n,\kappa>0}(H=5)$
---|---|---|---|---|---|---|---|---
1 | 1,-1 | 1$s_{1/2}$ | -0.24665137 | -0.25853490 | 0,2 | 0$d_{3/2}$ | -0.24665137 | -0.28786907
2 | 1,-2 | 1$p_{3/2}$ | -0.25183976 | -0.25183976 | 0,3 | 0$f_{5/2}$ | -0.25183976 | -0.30082457
3 | 1,-3 | 1$d_{5/2}$ | -0.25853490 | -0.24665137 | 0,4 | 0$g_{7/2}$ | -0.25853490 | -0.31543002
4 | 1,-4 | 1$f_{7/2}$ | -0.26675519 | -0.24295721 | 0,5 | 0$h_{9/2}$ | -0.26675519 | -0.33173176
1 | 2,-1 | 2$s_{1/2}$ | -0.25184913 | -0.26676283 | 1,2 | 1$d_{3/2}$ | -0.25184913 | -0.30083316
2 | 2,-2 | 2$p_{3/2}$ | -0.25854279 | -0.25854279 | 1,3 | 1$f_{5/2}$ | -0.25854279 | -0.31543915
3 | 2,-3 | 2$d_{5/2}$ | -0.26676283 | -0.25184913 | 1,4 | 1$g_{7/2}$ | -0.26676283 | -0.33174151
4 | 2,-4 | 2$f_{7/2}$ | -0.27653162 | -0.24667097 | 1,5 | 1$h_{9/2}$ | -0.27653162 | -0.34979406
1 | 3,-1 | 3$s_{1/2}$ | -0.25856120 | -0.27654386 | 2,2 | 2$d_{3/2}$ | -0.25856120 | -0.31545110
2 | 3,-2 | 3$p_{3/2}$ | -0.26677657 | -0.26677657 | 2,3 | 2$f_{5/2}$ | -0.26677657 | -0.33175386
3 | 3,-3 | 3$d_{5/2}$ | -0.27654386 | -0.25856119 | 2,4 | 2$g_{7/2}$ | -0.27654386 | -0.34980694
4 | 3,-4 | 3$f_{7/2}$ | -0.28788894 | -0.25189558 | 2,5 | 2$h_{9/2}$ | -0.28788894 | -0.36967266
1 | 4,-1 | 4$s_{1/2}$ | -0.25856120 | -0.28790739 | 3,2 | 3$d_{3/2}$ | -0.25856120 | -0.33177001
2 | 4,-2 | 4$p_{3/2}$ | -0.26677657 | -0.27656587 | 3,3 | 3$f_{5/2}$ | -0.26677657 | -0.34982325
3 | 4,-3 | 4$d_{5/2}$ | -0.27654386 | -0.26680859 | 3,4 | 3$g_{7/2}$ | -0.27654386 | -0.36968936
4 | 4,-4 | 4$f_{7/2}$ | -0.28788894 | -0.25865185 | 3,5 | 3$h_{9/2}$ | -0.28788894 | -0.39144042
In Tables 1 and 2, we present several energy levels $E_{n,\kappa}$ for the
spin symmetry and pseudospin symmetry cases. We perform this calculation by
using Eq.(37) for the spin symmetry case and Eq.(61) for the pseudospin
symmetry case. The results cover both the absence and the presence of the
tensor coupling. In the calculation, we set $C_{S}=5$ fm$^{-1}$, $C_{PS}=-5$
fm$^{-1}$, $A=B=1$ fm$^{-1}$, $V_{0}=2$ fm$^{-1}$, and $\delta=0.05$
fm$^{-1}$ for convenience. These parameters can vary according to the
considered bound states; here they solely represent widely used benchmarks
for numerical purposes. As for the nucleon mass, the corresponding value is
$M=939$ MeV $\approx 4.76$ fm$^{-1}$. We have chosen these values to match
the appropriate range of nuclear studies, particularly single-nucleon states.
From both tables, we notice that $E_{n,\kappa}$ increases with increasing
$|\kappa|$ in both symmetry cases for a given $n$. In the absence of the
tensor interaction ($H=0$), the spin symmetry case exhibits degeneracy in
some Dirac spin-doublet eigenstate partners: ($np_{3/2},np_{1/2}$),
($nd_{5/2},nd_{3/2}$), ($nf_{7/2},nf_{5/2}$), ($ng_{9/2},ng_{7/2}$), etc.
Each of these spin-doublet pairs has the same $n,l$. In the same case,
degeneracy also occurs in the pseudospin symmetry for some pseudospin-doublet
partners: ($ns_{1/2}$,$(n-1)d_{3/2}$), ($np_{3/2}$,$(n-1)f_{5/2}$),
($nd_{5/2}$,$(n-1)g_{7/2}$), ($nf_{7/2}$,$(n-1)h_{9/2}$), etc. Again, each of
these pairs has the same $\tilde{n}$ and $\tilde{l}$. However, when the
tensor interaction is present, all these degeneracies in both symmetry cases
vanish.
We present the dependence of $E_{n\kappa}$ on $\delta$ for different $n$ and
$\kappa$, with the other parameters set to the previous benchmark values, in
Fig.2 for the (a) spin and (b) pseudospin symmetry cases. The behavior of
$E_{n\kappa}$ is shown by varying $\delta$ from $0$ to $0.30$ fm$^{-1}$ in
steps of $0.01$ fm$^{-1}$. Note that increasing $\delta$ implies a less
attractive interaction. As $\delta$ rises for a short-range potential, the
bound state energy eigenvalues increase for the spin symmetry case and
decrease for the pseudospin symmetry case. The increasing trend indicates
more tightly bound states, while the decreasing behavior indicates the
opposite.
Figure 2: The variation of $E_{n=0,1,\kappa=1,2,3,4}$ with respect to the
screening parameter $\delta$ for the (a) spin and (b) pseudospin symmetry
cases with $H=5$, $C_{S}=5$ fm$^{-1}$, $C_{PS}=-5$ fm$^{-1}$, $M=4.76$
fm$^{-1}$, $A=B=1$ fm$^{-1}$ and $V_{0}=2$ fm$^{-1}$.
Figure 3: The energy eigenvalues of the spin symmetry states (a) $0p_{3/2}$
and (b) $0p_{1/2}$, and the pseudospin symmetry states (c) $0s_{1/2}$ and (d)
$0d_{3/2}$ in the plane of $(V_{0},C_{S,PS})$. The color heat map corresponds
to the energy eigenvalue (in fm$^{-1}$) of the scanned region.
In particular, we can see that the binding energies move apart from each
other as $\delta$ increases in both symmetries. The lines of the following
pairs of states overlap: ($0d_{3/2},1p_{1/2}$), ($0f_{5/2},1d_{3/2}$) and
($0g_{7/2},1f_{5/2}$) for the spin symmetry case, and ($0f_{5/2},1d_{3/2}$),
($0g_{7/2},1f_{5/2}$), and ($0h_{9/2},1g_{7/2}$) for the pseudospin symmetry
case.
In Fig.3, we illustrate the parameter space of the energy eigenvalues for the
spin symmetry states $0p_{3/2}$ and $0p_{1/2}$, and the pseudospin symmetry
states $0s_{1/2}$ and $0d_{3/2}$. The energy eigenvalues are scanned in the
$(V_{0},C_{S})$ and $(V_{0},C_{PS})$ planes, respectively. For this purpose,
we set $V_{0}=A=B$, $H=5$, $\delta=0.05$ fm$^{-1}$, and $M=4.76$ fm$^{-1}$.
The scan parameters are varied from $0.0$ to $20.0$ fm$^{-1}$ for $V_{0}$,
and from $-20.0$ to $20.0$ fm$^{-1}$ for $C_{S,PS}$, in steps of $0.5$
fm$^{-1}$. The white region represents non-real energy eigenvalues; that is,
no bound states occur in this domain. It is clear that the energy spectra
depend strongly on the choice of $C_{S}$ and $C_{PS}$. The positive bound
state energies of the spin symmetry case are obtained in the regions $5\leq
C_{S}<10$ with $M\geq E_{n\kappa}$ and $E_{n\kappa}+M\geq C_{S}$, as well as
$10\leq C_{S}\leq 20$ with $M<E_{n\kappa}$ and $E_{n\kappa}+M\leq C_{S}$.
Meanwhile, the negative bound state energy eigenvalues of the pseudospin
symmetry case are reached in the regions $-10<C_{PS}\leq-5$ with
$M>-E_{n\kappa}$ and $E_{n\kappa}<C_{PS}+M$, as well as $-20<C_{PS}\leq-10$
with $M<-E_{n\kappa}$ and $E_{n\kappa}<C_{PS}+M$. These results also hold for
other quantum states drawn from the same distributions as the ones discussed.
Figure 4: Variation of the normalized upper and lower spinor components
$F_{n,\kappa}(r)$ and $G_{n,\kappa}(r)$ with respect to $r$ for the (a) spin
and (b) pseudospin symmetry cases with $H=5$.
Finally, we present the lower and upper spinor wave functions of the
$np_{1/2}$ and $nd_{3/2}$ states as functions of $r$ with $n=0,1,2$ in Fig. 4
for the (a) spin and (b) pseudospin symmetry cases. We have implemented
Eqs.(48), (49), (67) and (68) for this purpose, using the same parameter
values as in Tables 1 and 2. We can see that the lower and upper components
of the spin symmetry case have $n$ nodes, while those of the pseudospin case
have $n+1$ nodes. The potential strengths (i.e., $V_{0}$, $A$, $B$) do not
change the number of radial nodes, yet they still influence the wavelength
and magnitude of the corresponding solutions.
## 6 Summary and Conclusions
In this work, we have examined the bound state solutions of the Dirac
equation with a newly suggested combined potential, the Hulthén plus a class
of Yukawa potential, including a Coulomb-like tensor coupling, under the
conditions of spin and pseudospin symmetry. For this purpose, we have
implemented the NU and SUSYQM methods. The tensor coupling preserves the form
of the combined potential but produces a new spin-orbit centrifugal term
$\eta(\eta\pm 1)r^{-2}$, where $\eta$ denotes a new spin-orbit quantum
number. This offers the possibility of establishing a different form of
spin-orbit coupling terms that may admit further physical interpretation.
For an arbitrary spin-orbit coupling quantum number $\kappa$, we have
obtained analytical expressions for the energy eigenvalues and the associated
upper- and lower-spinor wave functions in both the spin and pseudospin
symmetry cases. The results from the two methods are identical. Both methods
are systematic and practical for solving the considered symmetries and are
regarded as among the most reliable methods for this subject. The wave
functions are expressed in terms of hypergeometric functions, together with
their normalization constants. Although the energy spectra coincide, the wave
functions obtained from the SUSYQM method are more compact than those from
the NU method. Hence, the validity of the SUSYQM method and its general
principles has been confirmed.
Furthermore, we have shown that our results reduce to several special cases
(the s-wave case and the Dirac-Hulthén, Dirac-Yukawa, Dirac-Coulomb-like,
Dirac-inversely quadratic Yukawa, and Dirac-Kratzer–Fues problems) and have
compared them with the literature. They are fully consistent with previous
findings. Additionally, we have considered the nonrelativistic limit of the
energy spectrum for the proposed potential by making appropriate replacements
in the spin symmetry solution.
We have numerically investigated the dependence of the energy spectra on the
screening parameter $\delta$, the potential strength, and the parameters
$C_{S}$ and $C_{PS}$. We found that both the spin and pseudospin bound state
energies are sensitive to $\delta$, as well as to $C_{S}$ and $C_{PS}$, for
given quantum numbers $\kappa$ and $n$. In the absence of the tensor
coupling, the Dirac spin- and pseudospin-doublet eigenstate partners exhibit
degeneracy for some states. However, these degeneracies are completely
eliminated when the tensor interaction is involved. The allowed bound state
regions for both symmetries in the parameter space of the potential strength
$V_{0}$ versus $C_{S}$ and $C_{PS}$ are also presented. Finally, the
normalized wave function components of both symmetries, influenced by the
tensor interaction, are shown as functions of $r$.
In conclusion, a newly suggested combined potential, the Hulthén plus a class
of Yukawa potential including a Coulomb-like tensor interaction, has been
analytically solved. Our results deserve particular attention and may find
relevance in more applied branches of physics, especially in hadronic and
nuclear physics.
## Appendix A Nikiforov-Uvarov Method
We briefly introduce the NU method Nikiforov, a useful way to solve a
hypergeometric-type second-order differential equation by transforming it
into the following form
$\frac{d^{2}\chi(s)}{ds^{2}}+\frac{\tilde{\tau}(s)}{\sigma(s)}\frac{d\chi(s)}{ds}+\frac{\tilde{\sigma}(s)}{\sigma^{2}(s)}\chi(s)=0.$
(126)
All coefficients here are polynomials: $\sigma(s)$ and $\tilde{\sigma}(s)$
are at most of second order, while $\tilde{\tau}(s)$ is at most of first
order. To obtain a particular solution of the above equation, the function
$\chi(s)$ can be decomposed as
$\chi(s)=y(s)\phi(s),$ (127)
and then by substituting this into Eq. (126), we find a hypergeometric-type
equation as follows
$\sigma(s)\frac{d^{2}y(s)}{ds^{2}}+\tau(s)\frac{dy(s)}{ds}+\lambda y(s)=0.$
(128)
The function $\phi(s)$ needs to satisfy
$\frac{1}{\phi(s)}\frac{d\phi(s)}{ds}=\frac{\pi(s)}{\sigma(s)},$ (129)
with
$\pi(s)=\frac{\sigma^{\prime}(s)-\tilde{\tau}(s)}{2}\pm\sqrt{\left[\frac{\sigma^{\prime}(s)-\tilde{\tau}(s)}{2}\right]^{2}-\tilde{\sigma}(s)+k\sigma(s)},$
(130)
where primes denote derivatives with respect to $s$; $\pi(s)$ can be at most
of first order. The discriminant of the second-order polynomial under the
square root is set to zero, and solving the resulting equation yields an
expression for $k$ within the NU method.
Consequently, the equation reduces to a hypergeometric-type equation, one of
whose solutions is $y(s)$. The polynomial expression
$\bar{\sigma}(s)=\tilde{\sigma}(s)+\pi^{2}(s)+\pi(s)[\tilde{\tau}(s)-\sigma^{\prime}(s)]+\pi^{\prime}(s)\sigma(s)$
must be divisible by $\sigma(s)$, such that
$\bar{\sigma}(s)/\sigma(s)=\lambda$. Here, we use the following relations
$\displaystyle\lambda=k+\frac{d\pi(s)}{ds},$ (131)
$\tau(s)=\tilde{\tau}(s)+2\pi(s),$ (132)
where $\tau(s)$ must have a negative derivative. For an integer $n\geq 0$, a
unique $n$-degree polynomial solution of the hypergeometric-type equation is
obtained if
$\displaystyle\lambda\equiv\lambda_{n}=-n\frac{d\tau}{ds}-\frac{n(n-1)}{2}\frac{d^{2}\sigma}{ds^{2}},\qquad
n=0,1,2,\ldots$ (133)
On the other hand, the polynomial $y(s)$ satisfies the following Rodrigues
formula
$y_{n}(s)=\frac{C_{n}}{\rho(s)}\frac{d^{n}}{ds^{n}}\Big{[}\rho(s)\sigma^{n}(s)\Big{]}.$
(134)
The parameter $C_{n}$ denotes the normalization constant, while $\rho(s)$
stands for the weight function, which obeys
$\frac{d\left[\sigma(s)\rho(s)\right]}{ds}=\tau(s)\rho(s),$ (135)
which is commonly known as the Pearson differential equation.
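As a brief illustration, Eq.(133) reproduces the familiar eigenvalue of the Gauss hypergeometric equation. Taking $\sigma(s)=s(1-s)$ and $\tau(s)=c-(a+b+1)s$, so that $d\tau/ds=-(a+b+1)$ and $d^{2}\sigma/ds^{2}=-2$, Eq.(133) gives
$\lambda_{n}=n(a+b+1)+n(n-1)=n(n+a+b),$
which is the standard condition for an $n$-degree (Jacobi-type) polynomial solution of the hypergeometric equation.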
## References
* (1) P. A. M. Dirac, The Principles of Quantum Mechanics, Oxford University Press, Oxford, 1930.
* (2) H. Feshbach and F. Villars, Rev. Mod. Phys. 30, 24, (1958).
* (3) V. G. Bagrov, D. M. Gitman, Exact Solutions of Relativistic Wave Equations, Kluwer Academic Publishers, Dordrecht, 1990.
* (4) W. Greiner, Relativistic Quantum Mechanics, 3rd edition, Springer, Berlin, 2000.
* (5) J. N. Ginocchio, Phys. Rev. Lett. 78, 436 (1997).
* (6) J. N. Ginocchio, Phys. Rep. 315, 231-240 (1999).
* (7) J. N. Ginocchio, Phys. Rev. C 69, 034318 (2004).
* (8) J. N. Ginocchio, Phys. Rep. 414, 165-261 (2005).
* (9) S. G. Zhou, J. Meng, and P. Ring, Phys. Rev. Lett. 91, 262501 (2003).
* (10) A. Arima, M. Harvey, and K. Shimizu, Phys. Lett. B 30, 517 (1969).
* (11) K. T. Hecht and A. Adler, Nucl. Phys. A 137, 129 (1969).
* (12) A. Bohr, I. Hamamoto, and B.R. Mottelson, Phys. Scr. 26, 267 (1982).
* (13) J. Dudek, W. Nazarewicz, Z. Szymanski, and G.A. Leander, Phys. Rev. Lett. 59, 1405 (1987).
* (14) D. Troltenier, C. Bahri, and J. P. Draayer, Nucl. Phys. A 586, 53–72 (1995).
* (15) P. R. Page, T. Goldman, and J. N. Ginocchio, Phys. Rev. Lett. 86, 204 (2001).
* (16) R. Lisboa, M. Malheiro, A.S. de Castro, P. Alberto, and M. Fiolhais, Phys. Rev. C 69, 024319 (2004).
* (17) J. Y. Guo and Z.Q. Sheng, Phys. Let. A 338, 90-96 (2005).
* (18) A. Soylu, O. Bayrak and I. Boztosun, J. Math. Phys. 48, 082302 (2007).
* (19) S. M. Ikhdair, C. Berkdemir and R. Sever, App. Math. Compt. 217, 9019 (2011).
* (20) S. M. Ikhdair, and R. Sever, J. Phys. A: Math. Theor. 44, 355301 (2011).
* (21) S. Haouat and L. Chetouani, Phys. Scr. 77, 025005 (2008).
* (22) O. Aydoğdu, and R. Sever, Phys. Scr. 84, 025005 (2011).
* (23) S. M. Ikhdair, Cent. Eur. J. Phys. 10, 361-381 (2012).
* (24) F. Pakdel, A. A. Rajabi and M. Hamzavi, Advances in High Energy Phys. 2014, 867483 (2014).
* (25) C. Berkdemir, Nucl. Phys. A 770, 32 (2006).
* (26) W. C. Qiang, R. S. Zhou, Y. Gao, J. Phys. A: Math. Theor. 40, 1677 (2007).
* (27) O. Aydoğdu, and R. Sever, Annal. Phys. 325, 373 (2010).
* (28) M. Hamzavi, H. Hassanabadi and A. A. Rajabi, Mod. Phys. Lett. A 25, 2447 (2010).
* (29) C. S. Jia, P. Gao, Y. F. Diao, L. Z. Yi, X. J. Xie, Eur. Phys. J. A 34, 41 (2007).
* (30) C. S. Jia, T. Chen and L. G. Cui, Phys. Lett. A 373, 1621 (2009).
* (31) G. F. Wei and S. H. Dong, Phys. Lett. A 373, 49-53 (2008).
* (32) H. Yanar and A. Havare, Advances in High Energy Physics 2015, 915796 (2015).
* (33) H. Karayer, Eur. Phys. J. Plus 134, 452 (2019).
* (34) R. Lisboa, M. Malheiro, A. S. de Castro, P. Alberto and M. Fiolhais, Phys. Rev. C 69, 024319 (2004).
* (35) H. Akçay, Phys. Lett. A 373, 616 (2009).
* (36) O. Aydoğdu, and R. Sever, Few-Body Syst. 47, 193 (2010).
* (37) M. Hamzavi, A.A. Rajabi, and H. Hassanabadi, Few Body Syst. 48, 171-182 (2010).
* (38) A. N. Ikot, H. Hassanabadi and T. M. Abbey, Commun. Theor. Phys. 64, 637 (2015).
* (39) M. Mousavi and M. R. Shojaei, Commun. Theor. Phys. 66, 483-490 (2016).
* (40) L. Hulthén, Ark. Mat. Astron. Fys. A 28, 5 (1942).
* (41) L. Hulthén, Ark. Mat. Astron. Fys. B 29, 1 (1942).
* (42) H. Yukawa, Proc. Phys. Math. Soc. Jap. 17, 48 (1935).
* (43) A. F. Nikiforov and V. B. Uvarov, Special Functions of Mathematical Physics, Birkhäuser, Basel 1988.
* (44) L. E. Gendenshtein, JETP Lett. 38, 356 (1983).
* (45) L. E. Gendenshtein and I. V. Krive, Sov. Phys. Usp. 28, 645 (1985).
* (46) F. Cooper, A. Khare and U. Sukhatme, Supersymmetry in Quantum Mechanics, World Scientific, 2001.
* (47) F. Cooper, A. Khare, U. Sukhatme, Phys. Rep. 251, 267 (1995).
* (48) S. Zarrinkamar, A. A. Rajabi, and H. Hassanabadi, Annals of Physics 325, 2522 (2010).
* (49) E. Maghsoodi, H. Hassanabadi, and O. Aydogdu, Phys. Scr. 86, 015005 (2012).
* (50) H. Feizi and A.H. Ranjbar, Eur. Phys. J. Plus 128, 3 (2013).
* (51) J. D. Bjorken and S. D. Drell, Relativistic Quantum Mechanics, McGraw–Hill, New York, 1964.
* (52) J. Meng, K. Sugawara-Tanabe, S. Yamaji, P. Ring, and A. Arima, Phys. Rev. C 58, R628(R) (1998).
* (53) J. Meng, K. Sugawara-Tanabe, S. Yamaji, and A. Arima, Phys. Rev. C 59, 154-163 (1999).
* (54) R. L. Greene and C. Aldrich, Phys. Rev. A 14, 2363 (1976).
* (55) A. I. Ahmadov, M. Demirci, S. M. Aslanova, and M. F. Mustamin, Phys. Lett. A 384, 126372 (2020).
* (56) A. I. Ahmadov, S. M. Aslanova, M. Sh. Orujova, S. V. Badalov and Shi-Hai Dong, Phys. Lett. A 383, 3010 (2019).
* (57) M. Abramowitz, I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, Dover, New York, 1964.
arXiv:2101.01051
# A passively pumped vacuum package sustaining cold atoms for more than 200
days
Bethany J. Little [email protected] Gregory W. Hoth Justin Christensen
Chuck Walker Dennis J. De Smet Sandia National Laboratories, Albuquerque, NM
87185, USA Grant W. Biedermann Department of Physics and Astronomy,
University of Oklahoma, Norman, Oklahoma 73019, USA Jongmin Lee Sandia
National Laboratories, Albuquerque, NM 87185, USA Peter D. D. Schwindt
Sandia National Laboratories, Albuquerque, NM 87185, USA
###### Abstract
Compact cold-atom sensors depend on vacuum technology. One of the major
limitations to miniaturizing these sensors is the active pumps—typically ion
pumps—required to sustain the low pressure needed for laser cooling. Although
passively pumped chambers have been proposed as a solution to this problem,
technical challenges have prevented successful operation at the levels needed
for cold-atom experiments. We present the first demonstration of a vacuum
package that remains independent of ion pumps for more than a week; our
vacuum package is capable of sustaining a cloud of cold atoms in a
magneto-optical trap (MOT) for more than 200 days using only non-evaporable
getters and a rubidium dispenser. Measurements of the MOT lifetime indicate
the package
maintains a pressure of better than $2\times 10^{-7}$ Torr. This result will
significantly impact the development of compact atomic sensors, including
those sensitive to magnetic fields, where the absence of an ion pump will be
advantageous.
††preprint: APS/123-QED
## I Introduction
Atomic sensors, which make use of the inherent precision of atomic energy
levels, are moving from the laboratory, where they have pushed the limits of
precision measurements [1, 2, 3], to applications in the field, where the
requirements are shifted toward portability, reliability, and integration [4].
Successful miniaturization of vacuum technology for cold-atom sensors will
impact a wide range of applications including gravimeters, accelerometers,
gyroscopes, clocks, magnetometers, and gravity gradiometers.
These sensors must include an ultra-high vacuum (UHV) system capable of
sustaining a cloud of cold atoms. This cloud is often produced using a
magneto-optic trap (MOT). With existing UHV technology, both the pumping
apparatus and the gauges require many cubic centimeters at best [5], in
addition to the chamber itself. While passively pumped vacuum packages
utilizing non-evaporable getters (NEGs) have been proposed [6, 7], there has
been doubt as to whether or not they are sustainable over the time scales
needed for cold-atom sensors [8, 5]. Getters have been used in miniature ion-
clock chambers [9, 10]; however, these systems take advantage of a getter-
pump’s inability to pump noble gases, a severe disadvantage for applications
involving MOTs.
We present a vacuum chamber that uses passive pumping to maintain pressures
sufficient for a MOT in excess of 200 days. Using the MOT as a rough pressure
gauge, we estimate that the system sustains a background pressure of better
than $2\times 10^{-7}$ Torr [11, 12]. Although this pressure is relatively
high for cold-atom systems, it is sufficient for applications such as high
data rate atom interferometry [13]. While efforts are being made by others to
miniaturize vacuum systems for the purpose of atomic sensors [14, 8, 7], to
our knowledge this is the first demonstration of a passively pumped chamber
sustaining a MOT for more than a week [15].
## II Vacuum Design and Fabrication
Figure 1: (a) The vacuum package consists of a titanium body with six
sapphire windows and other components laser-welded onto it. (False
transparency reveals the inside components of the arms, shown in the insets in
more detail.) A copper pinch-off tube allows pumping down with standard turbo
and ion pumps and has been successfully pinched off to create a cold-welded
seal that preserves the vacuum. (b) Two rubidium dispensers are held in place
with alumina spacers. (c) Detail of one of the non-evaporable getters.
The primary challenge of the design is to make a chamber that can sustain a
high level of vacuum without dependence on active pumping, such as an ion
pump, while allowing the optical access needed for cold atom experiments. We
utilize non-evaporable getters, which passively pump the chamber by means of
chemisorption[16].
Since the NEGs do not pump rare gases, any helium permeation will limit the
lifetime of the vacuum. Although it is easy to find materials for the body of
the chamber that will not allow helium permeation, finding a transparent
material for the windows with this property is more challenging.
Aluminosilicate [7] and alumina [17, 18] perform better than borosilicate;
however, the latter presents some fabrication challenges. We utilize sapphire,
which has no documented helium permeation that the authors are aware of, and
use C-cut windows to minimize birefringence and maximize strength. The body of
the package is fabricated from commercially pure titanium since it matches
well with the coefficient of thermal expansion of sapphire, has a much lower
hydrogen outgassing rate than stainless steel[19], and is nonmagnetic.
The configuration of the vacuum package is shown in Figure 1. The sapphire
windows (MPF Products, Inc.) brazed into titanium frames are laser-welded
into the electropolished titanium package body. Four corner tubes support
operation: two with getters (SAES St172/HI/7.5-7), one with two dispensers
(SAES Rb/NF/3.4/12FT10+10), and a copper tube which is connected to a standard
UHV pumping apparatus during initial pump-down and testing, but is later
pinched off to form a cold-welded seal. The getters and dispensers are housed
in commercially pure titanium tubing and are connected to the outside of the
chamber via custom electrical feedthroughs, as shown in Figure 1. The external
volume of the package is approximately 70 mL.
The entire assembly is baked in a vacuum furnace at 400°C for seven days at
$1\times 10^{-7}$ Torr [20]. During this time, the package interior volume is
evacuated independently with total pressure and residual gas pressure
monitoring to assure sufficient exhaust parameters are achieved and that
unwanted gases from the getters and dispensers are removed as they are
electrically activated. Final post-exhaust pressures of $3.2\times 10^{-9}$
Torr are observed on the vacuum package near the turbo pump. Helium leak tests
are performed throughout the fabrication and exhaust process to ensure vacuum
integrity. On the final package, the ion pump current of 0.7 nA gives an
estimated pressure of $2.7\times 10^{-11}$ Torr.
## III Test Setup and Protocol
Figure 2: The vacuum package is tested using fluorescence from atoms in a
MOT. (a) Six circularly polarized 780-nm laser beams (four of which are
indicated by the large arrows) locked 8 MHz to the red of the F=2 to F’=3 line
in 87Rb provide cooling. A second laser tuned to the F=1 to F’=2 line is mixed
in with the axial cooling beams and serves as a repump. Anti-Helmholtz coils
(AHC) are mounted directly on the chamber. We observe loading curves by
switching the current to these coils off and then back on. The cloud of atoms
is imaged on a CCD; load curves (b) are obtained from the CCD and photodiode
(PD). A 795 nm probe beam is split on a beamsplitter (BS), with one arm
passing through the interior of the chamber, while the other serves as a
reference. The difference measurement of the absorption probe is sent from the
balance detector (BD) to a lock-in amplifier; the amplitude of the signal
shown in (c) is used to calculate the rubidium pressure.
We utilize a MOT to characterize our vacuum package. Not only does this
demonstrate the viability of the chamber for cold-atom experiments, but it
serves as an indication of the evolution of the background pressure over time.
The trap loading dynamics are well described by [12, 11]:
$\frac{dN}{dt}=R-\Gamma N,$ (1)
where $N$ is the number of atoms in the MOT, $R$ is the loading rate,
proportional to the rubidium pressure $P_{Rb}$, and
$\Gamma=\gamma_{Rb}P_{Rb}+\gamma_{bk}P_{bk}+\Gamma_{0}$ is the loss rate. This
loss rate depends on the rate $\gamma_{Rb}$ of collisions with rubidium, the
rate $\gamma_{bk}$ of collisions with other background gasses of pressure
$P_{bk}$, and a density-dependent factor $\Gamma_{0}$ accounting for two-body
collisions within the cold cloud. In a typical MOT, the density of the trapped
atoms is limited by light scattering forces; in this constant density limit
$\Gamma_{0}$ can be approximated as constant and one obtains an exponential
loading curve [11, 12]:
$N=N_{0}(1-e^{-t/\tau}),$ (2)
where $N_{0}=R\tau$ is the total number of atoms loaded with time constant
$\tau=1/\Gamma$. $N_{0}$ and $\tau$ are measured by observing the fluorescence
of the atoms as a MOT is loaded [21]. A representative loading curve and fit
is shown in Figure 2 (b), which was collected after switching on the anti-
Helmholtz coils. The CCD counts are calibrated to determine the number of
atoms [21].
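A fit of Equation 2 to a loading curve can be sketched as follows. This is a minimal example with synthetic data standing in for the calibrated CCD counts; the times, atom numbers, and noise level are hypothetical, and SciPy is assumed available:

```python
import numpy as np
from scipy.optimize import curve_fit

def loading_curve(t, n0, tau):
    """Equation 2: N(t) = N0 * (1 - exp(-t / tau))."""
    return n0 * (1.0 - np.exp(-t / tau))

# Synthetic stand-in for a calibrated CCD loading curve; in the experiment
# these values come from fluorescence counts converted to atom number
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)  # seconds after the coils switch on
n_meas = loading_curve(t, 7e5, 0.1) + rng.normal(0.0, 1e4, t.size)

# Fit N0 and tau; the fitted tau then feeds the pressure bound of Equation 3
(n0_fit, tau_fit), _ = curve_fit(loading_curve, t, n_meas,
                                 p0=[n_meas.max(), 0.2])
```

Each point in Figure 3 (a) and (b) corresponds to one such fit.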
Arpornthip et al. [11] showed that $\tau$ can be used to estimate the
background pressure in typical UHV systems. Specifically, the pressure inside
the chamber is well approximated by
$P_{\text{vacuum}}<(2\times 10^{-8}\text{ Torr}\cdot\text{s})/\tau.$ (3)
Despite a few caveats [11], this gives a good upper bound on the background
pressure of the vacuum system. We also monitor the pressure of Rb in the
vacuum package directly using a probe laser on the D1 line.
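Equation 3 turns a measured loading time directly into an upper bound on the background pressure; as a quick numeric sketch:

```python
def pressure_upper_bound_torr(tau_s):
    """Upper bound on background pressure from the MOT loading time,
    Equation 3: P < (2e-8 Torr * s) / tau."""
    return 2e-8 / tau_s

# A loading time of ~0.1 s (a typical value after pinch-off) bounds the
# pressure at ~2e-7 Torr, the figure quoted in the abstract
p_bound = pressure_upper_bound_torr(0.1)
```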
A schematic of the test setup is shown in Figure 2. The 780 nm cooling laser
is locked 8 MHz to the red of the F=2 to F’=3 resonance of 87Rb. The repump
light is resonant with the F=1 to F’=2 transition of 87Rb. Both lasers are
coupled into polarization maintaining fibers and distributed to the vacuum
test setup via splitters.
A CCD and a photodiode are used to measure loading curves. The majority of the
data presented utilizes the results from the CCD; the photodiode serves to
confirm particularly short loading times (Figure 5). The trapping magnetic
field is generated with a pair of circular anti-Helmholtz coils which are
switched via software control. This method allows for background subtraction
of fluorescence from atoms in the chamber, compared to those loaded into the
trap.
The density of rubidium in the chamber is measured via the absorption of
another laser beam as it is swept through the D1 F=3 to F’=2,3 transition of
85Rb, for which the cross-section is calculated to be $9.2\times
10^{-16}\text{m}^{2}$ [22]. From this density, a pressure is calculated. We
use a balanced detector to reduce the noise due to power fluctuations. The
signal to noise is further improved by use of a lock-in detector; the current
of the laser is modulated at 100 kHz while it is swept across the resonance at
0.5 Hz. The resulting amplitude of the dispersive lock-in signal shown in
Figure 2 (c) is proportional to the absorption of the probe.
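The conversion from probe absorption to rubidium pressure follows the Beer-Lambert law with the quoted cross-section. A sketch of the arithmetic, where the transmission, path length, and temperature are hypothetical illustrative values rather than the actual experimental parameters:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
PA_PER_TORR = 133.322

def rb_pressure_torr(transmission, sigma_m2, path_m, temp_k):
    """Rb pressure from probe absorption: Beer-Lambert gives the density
    n = -ln(I/I0) / (sigma * L); the ideal-gas law gives P = n * kB * T."""
    density = -math.log(transmission) / (sigma_m2 * path_m)  # atoms / m^3
    return density * K_B * temp_k / PA_PER_TORR

# Hypothetical: 1% absorption over a 2 cm path at room temperature, using
# the 85Rb D1 cross-section of 9.2e-16 m^2 quoted in the text
p_rb = rb_pressure_torr(0.99, 9.2e-16, 0.02, 295.0)  # ~1.7e-8 Torr
```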
The vacuum package was initially set up with an ion pump. After establishing
testing procedures, the ion pump was switched off. Following successful
operation with the pump off for a month, the ion pump was switched back on in
preparation for pinching off the copper tube. During the month of pumpless
operation, the loading times and MOT atom number were around 100 ms and
$7\times 10^{5}$, respectively; prior to pinch-off, they were around 1 s and
$7\times 10^{6}$. A pneumatic pinch-off tool (CPS HY-187-F) was used to pinch
off the copper tube. Results of the measured MOT loading parameters following
pinch-off are shown in Figure 3.
## IV Results
Figure 3: Following pinch-off, (a) the number of atoms $N$ and (b) the
characteristic loading times $\tau$ of a MOT in the passively pumped chamber
are monitored over the course of 200 days. Other activity on the optical table
required the laser enclosure curtains to be opened during the day, causing
drifts like the one shown in (d), which shows the loading time over the course
of a long weekend. Many of these transients have been removed from (a) and (b)
to better show the trends. Major variations due to large changes in the
rubidium dispenser current are highlighted in red, with the corresponding
dispenser current used shown in (c). Figure 4: The pressure of the
background vapor in our vacuum package is estimated using Equation 3, while
the pressures of both isotopes of rubidium in the chamber are estimated using
the absorption probe measurement. Data is shown since calibration of this
measurement on day 60.
The primary result of this work is the demonstration that a passively-pumped
vacuum chamber can support cold atom physics experiments for months. Figures 3
(a) and (b) show the number of atoms in the trap $N$ and the loading time
constant $\tau$ over a period of 200 days following pinch-off. Each data point
is obtained by fitting Equation 2 to a measured loading curve, as in Figure 2 (b).
Experiments involving large changes of the dispenser current are highlighted
in red. The dispenser current is shown in Figure 3 (c). Using the absorption probe and
Equation 3, we estimate the pressure of rubidium and the background gases in
our chamber, as shown in Figure 4. After initial changes immediately following
the pinch-off, the MOT loading parameters change relatively slowly, indicating
that the passive pumping is able to maintain a vacuum in the system for many
months. Variation in the plots is due to a number of factors, which we divide
into short-term transients and long-term trends.
The short day-to-day variability in MOT loading parameters is dominated by
temperature changes caused by opening and closing the curtained enclosure
around the optics table and other work going on in the lab. This is
highlighted in the example shown in Figure 3 (d), which shows a data run taken
over a long weekend; there is an initial thermalization period which we
exclude from our analysis. We attribute many of the short-term transients to a
combination of temperature-caused misalignments and changes in the rubidium
pressure due to ambient temperature. To test the temperature dependence, we
placed a temperature logger on the table next to the test setup. Temperature
changes of a few tenths of a degree result in significant changes to both the
atom number $N$ and the loading time constant $\tau$. These variations are
consistent with the observed variations in the Rb pressure. More details can
be found in the Supplemental Material [23].
Long-term trends are both more difficult to explain and more interesting. The
variation in the average values in Figure 3 (a-b) may be a result of
hysteresis in the alignment in various parts of the test setup, as well as
disturbances to the alignment due to other activity in the lab. The alignment
of the system has been optimized several times to maximize the atom number.
These realignments tend to cause step changes in $N$, for example near day 140
in Figure 3 (a). Our absorption measurement of the rubidium density indicates
an increase in the amount of rubidium in the chamber over the 200 days; this
likely plays a role in the increased MOT atom number and decreased loading
time seen over this period. For example, from day 60 to 180, the rubidium
pressure increased by around $1.4\times 10^{-8}$ Torr (Fig 4); based on an
estimated loss coefficient for Rb-Rb collisions [11], this could contribute to
the MOT loss rate $\Gamma=1/\tau$ by 0.6 s$^{-1}$. During the same time, $\tau$
decreases from 120 ms to 90 ms (Fig 3), corresponding to an increase
$\Delta\Gamma\approx 3$ s$^{-1}$ in the loss rate. Thus while the increase in
rubidium plays a role, there are likely other effects contributing to changes
in $\tau$.
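The comparison in the preceding paragraph can be checked with a line of arithmetic:

```python
# Change in the MOT loss rate Gamma = 1/tau as tau falls from 120 ms to
# 90 ms between day 60 and day 180 (Fig. 3)
tau_day60, tau_day180 = 0.120, 0.090  # seconds
delta_gamma = 1.0 / tau_day180 - 1.0 / tau_day60  # ~2.8 s^-1

# The estimated Rb-Rb contribution of ~0.6 s^-1 covers only a fraction of
# this change, supporting the conclusion that other effects also contribute
```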
In order to understand the role of the rubidium dispenser in our vacuum
package, several dispenser current variation experiments were performed. The
behavior of $N$ and $\tau$ during these experiments suggests that the Rb
dispenser plays a significant role in maintaining the pressure. The result of
one of these tests is shown in Figure 5. Naively one would expect from
Equation 1 that when the dispenser is turned off, the number of trapped atoms
would begin to decrease as the amount of rubidium decreases, while the loading
time would increase [12]. Surprisingly, the two parameters trend in the same
direction; we hypothesize that this is indicative of a dispenser pumping
effect—due to the gettering material of the dispenser, and possibly also due
to the increased number of alkali atoms, which may serve to reduce the
pressure of other gases in the chamber.
This can be seen by plotting the background pressure estimated from loading
curve fits and Equation 3 alongside the results of the absorption measurement,
in Figure 5 (b). When the dispenser is turned off (day 94), the increase in
the background pressure appears to be a larger effect than the decrease in
rubidium pressure. When the dispenser is turned back on, there is a spike in
background pressure: this effect has been noted by others, and is likely due
to a release of gas from both the chamber walls and the dispenser material as
the dispenser temperature increases [24]. After some time, the pressures
return to their values prior to the cycling experiment.
While the dispenser pumping effects are significant, it is worth noting that
this experiment also demonstrates that even after three days of leaving the
dispenser off, the chamber still supports a cloud of around $5\times 10^{5}$
cold atoms.
Figure 5: The rubidium dispenser is cycled off and on between days 93 and 98
at times marked with vertical lines. (a) As expected, the number of atoms in
the MOT drops, although this number trends toward a non-zero cloud size,
indicating that the chamber could sustain operation without the dispensers for
a significant period of time. The loading time also decreases; this is used to
estimate the rising background pressure (Eq. 3) shown in (b), along with the
rubidium pressures calculated from the absorption measurement.
## V Conclusion
We have demonstrated the successful design and testing of a portable vacuum
package that can sustain vacuum levels low enough for cold atoms using only
passive pumping via non-evaporable getters. The success of our design is
undoubtedly due to a combination of factors, including the low helium-
permeability of the materials and the fabrication procedures. We are in the
process of building and testing other vacuum chambers with different
parameters; the results of these future tests will shed light on which parts
of the design and fabrication are most critical.
Our design has a number of advantages for atomic sensors. While some have
proposed the use of other non-magnetic ion pumping mechanisms [8], eliminating
the need for an ion pump altogether presents a clear advantage. We have
successfully driven 6.8 GHz microwave transitions in rubidium in the chamber,
demonstrating that the metal body is not an obstacle to such atomic state
manipulation. Finally, in contrast to chip-focused designs [14], the six
windows allow optical access for the counter-propagating beams typically used
in atom interferometer applications.
The behavior of the MOT loading time and atom number in response to different
changes, such as the current in the dispenser or the temperature of the
environment will be the subject of future study. In such a small chamber,
subtleties arising from the likely pumping of the dispenser present both
challenges and advantages that we would like to understand better. The
dispenser requires 2.15 A current to maintain optimal conditions, or 2.22 W of
power. Continued investigation is required to develop techniques to achieve
substantially reduced power consumption while maintaining the appropriate
alkali [25, 24] and background pressures.
Our result represents significant progress toward reducing the size, weight,
and power consumption of atomic sensors. We expect that it will drive the
development of more robust sensors which can be used in a wide range of
applications, from fundamental physics such as measurements of gravity in
space, to more application-focused developments in civil engineering. It is
particularly well-suited for inertial sensing devices, where the demand is
high for a robust and compact vacuum package.
## Acknowledgments
The authors would like to thank Melissa Revelle for her contributions to the
testing, and members of John Kitching’s group at NIST for their ideas and
feedback. This work was supported by the Laboratory Directed Research and
Development program at Sandia National Laboratories. Sandia National
Laboratories is a multimission laboratory managed and operated by National
Technology $\&$ Engineering Solutions of Sandia, LLC, a wholly owned
subsidiary of Honeywell International Inc., for the U.S. Department of
Energy’s National Nuclear Security Administration under contract DE-NA0003525.
This paper describes objective technical results and analysis. Any subjective
views or opinions that might be expressed in the paper do not necessarily
represent the views of the U.S. Department of Energy or the United States
Government.
## Author’s Contributions
G. W. Hoth and B. J. Little contributed equally to this work.
## References
* Parker _et al._ [2018] R. H. Parker, C. Yu, W. Zhong, B. Estey, and H. Müller, Measurement of the fine-structure constant as a test of the standard model, Science 360, 191 (2018).
* Rosi _et al._ [2014] G. Rosi, F. Sorrentino, L. Cacciapuoti, M. Prevedelli, and G. Tino, Precision measurement of the newtonian gravitational constant using cold atoms, Nature 510, 518 (2014).
* Brewer _et al._ [2019] S. Brewer, J.-S. Chen, A. Hankin, E. Clements, C.-w. Chou, D. Wineland, D. Hume, and D. Leibrandt, Al+ 27 quantum-logic clock with a systematic uncertainty below 10- 18, Physical review letters 123, 033201 (2019).
* Bongs _et al._ [2019] K. Bongs, M. Holynski, J. Vovrosh, P. Bouyer, G. Condon, E. Rasel, C. Schubert, W. P. Schleich, and A. Roura, Taking atom interferometric quantum sensors from the laboratory to real-world applications, Nature Reviews Physics 1, 731 (2019).
* Basu and Velásquez-García [2016] A. Basu and L. F. Velásquez-García, An electrostatic ion pump with nanostructured si field emission electron source and ti particle collectors for supporting an ultra-high vacuum in miniaturized atom interferometry systems, Journal of Micromechanics and Microengineering 26, 124003 (2016).
* Rushton _et al._ [2014] J. A. Rushton, M. Aldous, and M. D. Himsworth, Contributed review: The feasibility of a fully miniaturized magneto-optical trap for portable ultracold quantum technology, Review of Scientific Instruments 85, 121501 (2014).
* Dellis _et al._ [2016] A. T. Dellis, V. Shah, E. A. Donley, S. Knappe, and J. Kitching, Low helium permeation cells for atomic microsystems technology, Optics Letters 41, 2775 (2016).
* Sebby-Strabley _et al._ [2016] J. Sebby-Strabley, C. Fertig, R. Compton, K. Salit, K. Nelson, T. Stark, C. Langness, and R. Livingston, Design innovations towards miniaturized gps-quality clocks, in _2016 IEEE International Frequency Control Symposium (IFCS)_ (IEEE, 2016) pp. 1–6.
* Gulati _et al._ [2018] G. K. Gulati, S. Chung, T. Le, J. Prestage, L. Yi, R. Tjoelker, N. Nyu, and C. Holland, Miniatured and low power mercury microwave ion clock, in _2018 IEEE International Frequency Control Symposium (IFCS)_ (IEEE, 2018) pp. 1–2.
* Jau _et al._ [2012] Y.-Y. Jau, H. Partner, P. Schwindt, J. Prestage, J. Kellogg, and N. Yu, Low-power, miniature 171yb ion clock using an ultra-small vacuum package, Applied Physics Letters 101, 253518 (2012).
* Arpornthip [2012] T. Arpornthip, Vacuum-pressure measurement using a magneto-optical trap, Physical Review A 85, 10.1103/PhysRevA.85.033420 (2012).
* Moore _et al._ [2015] R. W. G. Moore, L. A. Lee, E. A. Findlay, L. Torralbo-Campo, G. D. Bruce, and D. Cassettari, Measurement of vacuum pressure with a magneto-optical trap: A pressure-rise method, Review of Scientific Instruments 86, 093108 (2015).
* Rakholia [2014] A. V. Rakholia, Dual-axis high-data-rate atom interferometer via cold ensemble exchange, Physical Review Applied 2, 10.1103/PhysRevApplied.2.054012 (2014).
* McGilligan _et al._ [2020] J. McGilligan, K. Moore, A. Dellis, G. Martinez, E. de Clercq, P. Griffin, A. Arnold, E. Riis, R. Boudot, and J. Kitching, Laser cooling in a chip-scale platform, Applied Physics Letters 117, 054001 (2020).
* Boudot _et al._ [2020] R. Boudot, J. P. McGilligan, K. R. Moore, V. Maurice, G. D. Martinez, A. Hansen, E. de Clercq, and J. Kitching, Enhanced observation time of magneto-optical traps using micro-machined non-evaporable getter pumps, Scientific Reports 10, 1 (2020).
* SAE [2020] _St 171®and St 172 - Sintered Porous Getters_ , SAES Getters (2020).
* O’Hanlon [2005] J. F. O’Hanlon, _A user’s guide to vacuum technology_ (John Wiley & Sons, 2005).
* Perkins [1973] W. Perkins, Permeation and outgassing of vacuum materials, Journal of vacuum science and technology 10, 543 (1973).
* Takeda _et al._ [2011] M. Takeda, H. Kurisu, S. Yamamoto, H. Nakagawa, and K. Ishizawa, Hydrogen outgassing mechanism in titanium materials, Applied surface science 258, 1405 (2011).
* [20] We are uncertain whether baking the miniature vacuum package within a larger vacuum furnace is necessary. We did this to ensure there was no helium permeated into any of the materials, but none of the materials should allow helium permeation, so this step may have been unnecessary.
* Steck [2019] D. A. Steck, Rubidium 85 D line data, available online at http://steck.us/alkalidata (2019).
* Siddons _et al._ [2008] P. Siddons, C. S. Adams, C. Ge, and I. G. Hughes, Absolute absorption on rubidium D lines: comparison between theory and experiment, Journal of Physics B: Atomic, Molecular and Optical Physics 41, 155004 (2008).
* [23] See supplemental material at [url will be inserted by publisher] for further data and discussion on the role of temperature variance in our measurements.
* Kohn Jr _et al._ [2020] R. N. Kohn Jr, M. S. Bigelow, M. Spanjers, B. K. Stuhl, B. L. Kasch, S. E. Olson, E. A. Imhof, D. A. Hostutler, and M. B. Squires, Clean, robust alkali sources by intercalation within highly oriented pyrolytic graphite, Review of Scientific Instruments 91, 035108 (2020).
* Kang _et al._ [2017] S. Kang, R. P. Mott, K. A. Gilmore, L. D. Sorenson, M. T. Rakher, E. A. Donley, J. Kitching, and C. S. Roper, A low-power reversible alkali atom source, Applied Physics Letters 110, 244101 (2017).
# A Note on the value distribution of a differential monomial and some
normality criteria
Sudip Saha1 and Bikash Chakraborty2 1Department of Mathematics, Ramakrishna
Mission Vivekananda Centenary College, Rahara, West Bengal 700 118, India.
[email protected] 2Department of Mathematics, Ramakrishna Mission
Vivekananda Centenary College, Rahara, West Bengal 700 118, India.
[email protected], [email protected]
###### Abstract.
In this paper, we prove some value distribution results which lead to some
normality criteria for a family of analytic functions. These results improve
some recent results.
††footnotetext: 2010 Mathematics Subject Classification: 30D45, 30D30, 30D20,
30D35.††footnotetext: Key words and phrases: Value distribution theory, Normal
family, Meromorphic functions, Differential monomials.
## 1\. Introduction and Main Results
Throughout this paper, we assume that the reader is familiar with the theory
of normal families ([11, 13]) of meromorphic functions on a domain
$D\subseteq\mathbb{C}\cup\\{\infty\\}$ and the value distribution theory
([3]). Further, it will be convenient to let $E$ denote any set of
positive real numbers of finite Lebesgue measure, not necessarily the same at
each occurrence. For any non-constant meromorphic function $f$, we denote by
$S(r,f)$ any quantity satisfying
$S(r,f)=o(T(r,f))~{}~{}\text{as}~{}~{}r\to\infty,~{}r\not\in E.$
Let $f$ be a non-constant meromorphic function. A meromorphic function
$a(z)(\not\equiv 0,\infty)$ is called a “small function” with respect to $f$
if $T(r,a(z))=S(r,f)$. For example, polynomial functions are small functions
with respect to any transcendental entire function.
A family $\mathscr{G}$ of meromorphic functions in a domain
$D\subset\mathbb{C}\cup\\{\infty\\}$ is said to be normal in $D$ if every
sequence $\\{g_{n}\\}\subset\mathscr{G}$ contains a subsequence which
converges spherically uniformly on every compact subset of $D$.
In 1959, Hayman proved the following theorem:
###### Theorem A.
([2]) If $f$ is a transcendental meromorphic function and $n\geq 3$, then
$f^{n}f^{\prime}$ assumes all finite values except possibly zero infinitely
often.
Moreover, Hayman ([2]) conjectured that Theorem A remains valid for the
cases $n=1,~{}2$. In 1979, Mues ([9]) confirmed Hayman's conjecture for
$n=2$, i.e., for a transcendental meromorphic function $f(z)$ in the open
plane, $f^{2}f^{\prime}-1$ has infinitely many zeros. This is a qualitative
result. But, in 1992, Q. Zhang ([14]) gave a quantitative version of Mues’s
result as follows:
###### Theorem B.
([14]) For a transcendental meromorphic function $f$, the following inequality
holds :
$T(r,f)\leq 6N\bigg{(}r,\frac{1}{f^{2}f^{\prime}-1}\bigg{)}+S(r,f).$
Using Mues's result ([9]), in 1989, Pang ([10]) gave a normality criterion
as follows:
###### Theorem C.
([10]) Let $\mathscr{F}$ be a family of meromorphic functions on a domain $D$.
If each $f\in\mathscr{F}$ satisfies $f^{2}f^{\prime}\neq 1$, then
$\mathscr{F}$ is normal in $D$.
By replacing $f^{\prime}$ with $f^{(k)}$, in 2005, Huang and Gu ([5]) extended
the results of Q. Zhang ([14]) as follows:
###### Theorem D.
([5]) Let $f$ be a transcendental meromorphic function and $k$ be a positive
integer. Then
$T(r,f)\leq 6N\bigg{(}r,\frac{1}{f^{2}f^{(k)}-1}\bigg{)}+S(r,f).$
Consequently, they ([5]) obtained the following normality criterion.
###### Theorem E.
([5]) Let $\mathscr{F}$ be a family of meromorphic functions on a domain $D$
and let $k$ be a positive integer. If for each $f\in\mathscr{F},f$ has only
zeros of multiplicity at least $k$ and $f^{2}f^{(k)}\neq 1$, then
$\mathscr{F}$ is normal on domain $D$.
In this paper, we extend and improve Theorem E. Moreover, we prove some
value distribution results. To state our next results, we recall some well
known definitions.
###### Definition 1.1.
([12]) Let $a\in\mathbb{C}\cup\\{\infty\\}$. For a positive integer $k$, we
denote
1. i)
by $N_{k)}\left(r,a;f\right)$ the counting function of $a$-points of $f$ whose
multiplicities are not greater than $k$,
2. ii)
by $N_{(k}\left(r,a;f\right)$ the counting function of $a$-points of $f$ whose
multiplicities are not less than $k$.
Similarly, the reduced counting functions $\overline{N}_{k)}(r,a;f)$ and
$\overline{N}_{(k}(r,a;f)$ are defined.
###### Definition 1.2.
([7]) For a positive integer $k$, we denote $N_{k}(r,0;f)$ the counting
function of zeros of $f$, where a zero of $f$ with multiplicity $q$ is counted
$q$ times if $q\leq k$, and is counted $k$ times if $q>k$.
###### Theorem 1.1.
Let $f$ be a transcendental meromorphic function such that
$N_{1)}(r,\infty;f)=S(r,f)$ and $\alpha(\not\equiv 0,\infty)$ be a small
function of $f$. Also, let $k~{}(\geq 1),q_{0}~{}(\geq 2),q_{i}~{}(\geq
0)~{}(i=1,2,\cdots,k-1),q_{k}~{}(\geq 1)$ be integers. Then for any
small function $a(\not\equiv 0,\infty)$,
$\displaystyle T(r,f)$ $\displaystyle\leq$
$\displaystyle\frac{2}{2q_{0}-3}\overline{N}\left(r,\frac{1}{\alpha
f^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}}-a}\right)+S(r,f).$
###### Remark 1.1.
Theorem 1.1 improves and extends the recent result of Karmakar and Sahoo ([6])
for a particular class of transcendental meromorphic functions which have
finitely many simple poles. Also, Theorem 1.1 significantly improves the
recent result of Chakraborty et al. ([1]).
As an application of Theorem 1.1, we prove the following normality criterion:
###### Theorem 1.2.
Let $\mathscr{F}$ be a family of analytic functions in a domain $D$ and also
let $k~{}(\geq 1),q_{0}~{}(\geq 2),q_{i}~{}(\geq
0)~{}(i=1,2,\cdots,k-1),q_{k}~{}(\geq 1)$ be integers. If for each
$f\in\mathscr{F}$
* (a)
$f$ has only zeros of multiplicity at least $k$ and
* (b)
$\displaystyle{f^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}}}\neq 1$,
then $\mathscr{F}$ is normal on domain $D$.
###### Remark 1.2.
Clearly, Theorem 1.2 extends and improves Theorem E for a family of analytic
functions.
Moreover, in a recent result of W. Lü and B. Chakraborty ([8]), the lower
bound of $q_{0}$ was $3$. Thus our result also improves theirs by reducing the
lower bound of $q_{0}$.
The following example shows that the condition on multiplicity of zeros of $f$
in Theorem 1.2 is necessary.
###### Example 1.1.
Let $\mathscr{F}=\\{f_{n}(z)=nz~{}:~{}n\in\mathbb{N}\\}$ and $D$ be any domain
containing the origin. Further suppose that $k~{}(\geq 2),q_{0}~{}(\geq
2),q_{i}~{}(\geq 0)~{}(i=1,2,\cdots,k-1),q_{k}~{}(\geq 1)$ are integers.
Now, we observe that for each $f\in\mathscr{F}$
$\displaystyle{f^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}}}\neq 1.$
Moreover, $f_{n}(0)\rightarrow 0$ but $f_{n}(z)\rightarrow\infty$ as
$n\rightarrow\infty$ for $z\not=0$. Hence $\mathscr{F}$ cannot be normal in
any domain containing the origin.
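The mechanism behind Example 1.1 is that for $f_{n}(z)=nz$ and $k\geq 2$ we have $f_{n}^{(k)}\equiv 0$, so since $q_{k}\geq 1$ the monomial vanishes identically and can never take the value $1$. A quick symbolic check (SymPy assumed available; sample exponents chosen to satisfy the hypotheses, with $q_{1}=0$ for brevity):

```python
import sympy as sp

z, n = sp.symbols('z n')
f = n * z  # a member of the family in Example 1.1

# Sample values with k >= 2, q0 >= 2, qk >= 1 (intermediate q_i taken as 0)
k, q0, qk = 2, 2, 1
monomial = f**q0 * sp.diff(f, z, k)**qk  # f^{q0} * (f^{(k)})^{qk}

# f''(z) = 0, so the monomial is identically zero and never equals 1
is_zero = sp.simplify(monomial) == 0
```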
## 2\. Necessary Lemmas
###### Lemma 2.1.
([4]) Let $A>1$, then there exists a set $M(A)$ of upper logarithmic density
at most $\delta(A)=\min\\{(2e^{(A-1)}-1)^{-1},1+e(A-1)\exp(e(1-A))\\}$ such
that for $k=1,2,3,\cdots$
$\limsup\limits_{r\to\infty,~{}r\notin M(A)}\frac{T(r,f)}{T(r,f^{(k)})}\leq
3eA.$
###### Lemma 2.2.
Let $f$ be a transcendental meromorphic function and $\alpha~{}(\not\equiv
0,\infty)$ be a small function of $f$. Let
$M[f]=\alpha(f)^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}}$, where
$q_{0},q_{1},\cdots,q_{k}~{}(q_{k}\geq 1)$ are non-negative integers and $k\geq 1$. Then
$M[f]$ is not identically constant.
###### Proof.
Since $\alpha$ is a small function of $f$, we have $T(r,\alpha)=S(r,f)$.
Therefore the proof follows from Lemma 3.4 of ([1]). ∎
###### Lemma 2.3.
Let $f$ be a transcendental meromorphic function and $\alpha~{}(\not\equiv
0,\infty)$ be a small function of $f$. Let,
$M[f]=\alpha(f)^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}}$, where
$q_{0},q_{1},\cdots,q_{k}~{}(q_{k}\geq 1)$ are non-negative integers and $k\geq 1$. Then
$T(r,M[f])\leq\left\\{q_{0}+2q_{1}+\cdots+(k+1)q_{k}\right\\}T(r,f)+S(r,f).$
###### Proof.
The proof is obvious. ∎
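For the reader's convenience, a short sketch: the bound follows by combining $T(r,\alpha)=S(r,f)$ with the standard estimate $T(r,f^{(j)})\leq(j+1)T(r,f)+S(r,f)$:

```latex
T(r,M[f]) \le T(r,\alpha)+\sum_{j=0}^{k} q_j\,T\bigl(r,f^{(j)}\bigr)+O(1)
          \le \sum_{j=0}^{k}(j+1)\,q_j\,T(r,f)+S(r,f)
          = \bigl\{q_0+2q_1+\cdots+(k+1)q_k\bigr\}\,T(r,f)+S(r,f).
```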
###### Lemma 2.4.
Let $f(z)$ be a transcendental meromorphic function and $\alpha(z)(\not\equiv
0,\infty)$ be a small function of $f(z)$. Also, let $q_{0},q_{1},\cdots,q_{k}$
be non-negative integers. Define
$M[f]=\alpha(f)^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}},$
where $k(\geq 1),q_{i}(i=0,1,\cdots,k)$ are non-negative integers. If
$a(z)(\not\equiv 0,\infty)$ is another small function of $f$, then
$\displaystyle\mu T(r,f)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,0;f)+\overline{N}(r,a;M[f])+\overline{N}(r,\infty;f)+q_{1}N_{1}(r,0;f)$
$\displaystyle+q_{2}N_{2}(r,0;f)+\cdots+q_{k}N_{k}(r,0;f)+S(r,f),$
where $\mu=\sum\limits_{i=0}^{k}q_{i}$.
###### Proof.
Using the lemma on the logarithmic derivative, we have
(1) $\displaystyle T(r,f^{\mu})$ $\displaystyle=$ $\displaystyle
N(r,0;f^{\mu})+m\left(r,\frac{1}{f^{\mu}}\right)+O(1)$ $\displaystyle\leq$
$\displaystyle N(r,0;f^{\mu})+m\left(r,\frac{1}{M[f]}\right)+S(r,f)$
$\displaystyle\leq$ $\displaystyle
N(r,0;f^{\mu})+T(r,M[f])-N(r,0;M[f])+S(r,f).$
Now, using Nevanlinna's second fundamental theorem and Lemma 2.3, we
have
$\displaystyle T(r,f^{\mu})$ $\displaystyle\leq$ $\displaystyle
N(r,0;f^{\mu})+\overline{N}(r,0;M[f])+\overline{N}(r,\infty;M[f])$
$\displaystyle+\overline{N}(r,a;M[f])-N(r,0;M[f])+S(r,M[f])+S(r,f)$
$\displaystyle\leq$ $\displaystyle
N(r,0;f^{\mu})+\overline{N}(r,0;M[f])+\overline{N}(r,\infty;f)$
$\displaystyle+\overline{N}(r,a;M[f])-N(r,0;M[f])+S(r,f).$
Let $z_{0}$ be a zero of $f(z)$ with multiplicity $q~{}(\geq 1)$. Then $z_{0}$
is a zero of $f^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}}$ of order
at least
$\displaystyle qq_{0}+(q-1)q_{1}+(q-2)q_{2}+\cdots+2q_{q-2}+q_{q-1}$
$\displaystyle=$ $\displaystyle q(q_{0}+q_{1}+\cdots+q_{q-1})-(1\cdot
q_{1}+2\cdot q_{2}+\cdots+(q-1)\cdot
q_{q-1})~{}~{}~{}~{}\text{if}~{}~{}~{}~{}q\leq k,$
and
$\displaystyle qq_{0}+(q-1)q_{1}+(q-2)q_{2}+\cdots+(q-k)q_{k}$
$\displaystyle=$ $\displaystyle q(q_{0}+q_{1}+\cdots+q_{k})-(1\cdot
q_{1}+2\cdot q_{2}+\cdots+k\cdot q_{k})~{}~{}~{}~{}\text{if}~{}~{}~{}~{}q>k.$
Therefore $z_{0}$ is a zero of $M[f]$ of order at least
$q(q_{0}+q_{1}+\cdots+q_{q-1})-(1\cdot q_{1}+2\cdot q_{2}+\cdots+(q-1)\cdot
q_{q-1})+r$ if $q\leq k$ and $q(q_{0}+q_{1}+\cdots+q_{k})-(1\cdot q_{1}+2\cdot
q_{2}+\cdots+k\cdot q_{k})+r$ if $q>k$ respectively, (where $r=0$ if
$\alpha(z)$ does not have a zero or pole at $z_{0}$; $r=s$ if $\alpha(z)$ has
a zero of order $s$ at $z_{0}$ and $r=-s$ if $\alpha(z)$ has a pole of order
$s$ at $z_{0}$).
Now,
$\displaystyle q\mu+1-\\{q(q_{0}+q_{1}+\cdots+q_{q-1})-(1\cdot q_{1}+2\cdot
q_{2}+\cdots+(q-1)\cdot q_{q-1})\\}-r$ $\displaystyle=$ $\displaystyle
1+(1\cdot q_{1}+2\cdot q_{2}+\cdots+(q-1)\cdot
q_{q-1})+q(q_{q}+q_{q+1}+\ldots+q_{k})-r~{}~{}~{}~{}\text{if}~{}~{}~{}~{}q\leq
k.$
and
$\displaystyle q\mu+1-\\{q(q_{0}+q_{1}+\cdots+q_{k})-(1\cdot q_{1}+2\cdot
q_{2}+\cdots+k\cdot q_{k})\\}-r$ $\displaystyle=$ $\displaystyle 1+1\cdot
q_{1}+2\cdot q_{2}+\cdots+k\cdot q_{k}-r~{}~{}~{}~{}\text{if}~{}~{}~{}~{}q>k.$
Therefore
$\displaystyle N(r,0;f^{\mu})+\overline{N}(r,0;M[f])-N(r,0;M[f])$
$\displaystyle\leq$
$\displaystyle\overline{N}(r,0;f)+q_{1}N_{1}(r,0;f)+q_{2}N_{2}(r,0;f)+\cdots+q_{k}N_{k}(r,0;f)+S(r,f).$
Therefore, combining the above estimates, we obtain
$\displaystyle\mu T(r,f)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,\infty;f)+\overline{N}(r,a;M[f])+\overline{N}(r,0;f)+q_{1}N_{1}(r,0;f)$
$\displaystyle+q_{2}N_{2}(r,0;f)+\cdots+q_{k}N_{k}(r,0;f)+S(r,f).$
This completes the proof. ∎
###### Lemma 2.5.
([11, 13]) Let $\mathscr{F}$ be a family of meromorphic functions on the unit
disc $\Delta$ such that all zeros of functions in $\mathscr{F}$ have
multiplicity at least $k$. Let $\alpha$ be a real number satisfying
$0\leq\alpha<k$. Then $\mathscr{F}$ is not normal in any neighbourhood of
$z_{0}\in\Delta$ if and only if there exist
* i)
points $z_{n}\in\Delta$, $z_{n}\to z_{0}$;
* ii)
positive numbers $\rho_{n},~{}\rho_{n}\to 0;$ and
* iii)
functions $f_{n}\in\mathscr{F}$
such that $\displaystyle{\rho_{n}^{-\alpha}f_{n}(z_{n}+\rho_{n}\zeta)\to
g(\zeta)}$ spherically uniformly on compact subsets of $\mathbb{C}$, where $g$
is a non-constant meromorphic function.
## 3\. Proof of the Theorems
###### Proof of Theorem 1.1.
Assume
$M[f]=\alpha f^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}}.$
Since $a~{}(\not\equiv 0,\infty)$ is a small function of $f$, from Lemma
2.4 we get
$\displaystyle\mu T(r,f)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,\infty;f)+\overline{N}(r,a;M[f])+\overline{N}(r,0;f)+q_{1}N_{1}(r,0;f)$
$\displaystyle+q_{2}N_{2}(r,0;f)+\cdots+q_{k}N_{k}(r,0;f)+S(r,f).$
Now, the above inequality can be written as
(4)
$\displaystyle(q_{0}-1)T(r,f)\leq\overline{N}(r,\infty;f)+\overline{N}(r,a;M[f])+S(r,f).$
Since $N_{1)}(r,\infty;f)=S(r,f)$, (4) can be written as
$\displaystyle\left(q_{0}-\frac{3}{2}\right)T(r,f)$ $\displaystyle\leq$
$\displaystyle\overline{N}(r,\infty;f)-\frac{1}{2}N_{(2}(r,\infty;f)+\overline{N}(r,a;M[f])+S(r,f)$
$\displaystyle\leq$ $\displaystyle\overline{N}(r,a;M[f])+S(r,f).$
Thus
$\displaystyle T(r,f)$ $\displaystyle\leq$
$\displaystyle\frac{2}{(2q_{0}-3)}\overline{N}\left(r,\frac{1}{M[f]-a}\right)+S(r,f).$
This completes the proof. ∎
###### Proof of Theorem 1.2.
Given that $\mathscr{F}$ is the family of analytic functions in a domain $D$
such that for each $f\in\mathscr{F}$
* (a)
$f$ has only zeros of multiplicity at least $k$ and
* (b)
$\displaystyle{f^{q_{0}}(f^{\prime})^{q_{1}}\cdots(f^{(k)})^{q_{k}}}\neq 1$,
where $k~{}(\geq 1),q_{0}~{}(\geq 2),q_{i}~{}(\geq
0)~{}(i=1,2,\cdots,k-1),q_{k}(\geq 1)$ are positive integers.
Our claim is that the family $\mathscr{F}$ is normal on the domain $D$. Since
normality is a local property, we may assume that $D=\Delta$, the unit disc.
Thus it suffices to show that $\mathscr{F}$ is normal in $\Delta$.
On the contrary, assume that $\mathscr{F}$ is not normal in $\Delta$. Now we
define a real number as
$\alpha=\frac{\mu_{*}}{\mu},$
where $\mu=q_{0}+q_{1}+\cdots+q_{k}$ and $\mu_{*}=q_{1}+2q_{2}+\cdots+kq_{k}$.
Since $q_{0}\geq 2$, $q_{i}\geq 0~{}(i=1,2,\cdots,k-1)$ and $q_{k}\geq 1$, we have
$\mu_{*}\leq k(\mu-q_{0})<k\mu$, and hence $0\leq\alpha<k$.
Since $\mathscr{F}$ is not normal in $\Delta$, by Lemma 2.5 there exist
$\\{f_{n}\\}\subset\mathscr{F}$, $z_{n}\in\Delta$ and positive numbers $\rho_{n}$
with $\rho_{n}\to 0$ such that
$u_{n}(\zeta)=\rho_{n}^{-\alpha}f_{n}(z_{n}+\rho_{n}\zeta)\to u(\zeta),$
spherically uniformly on every compact subset of $\mathbb{C}$, where
$u(\zeta)$ is a non-constant meromorphic function. Now define
$V_{n}(\zeta)=(u_{n}(\zeta))^{q_{0}}(u_{n}^{{}^{\prime}}(\zeta))^{q_{1}}\cdots(u_{n}^{(k)}(\zeta))^{q_{k}},$
and
$V(\zeta)=(u(\zeta))^{q_{0}}(u^{{}^{\prime}}(\zeta))^{q_{1}}\cdots(u^{(k)}(\zeta))^{q_{k}}.$
Therefore
(5) $\displaystyle V_{n}(\zeta)$ $\displaystyle=$
$\displaystyle(u_{n}(\zeta))^{q_{0}}(u_{n}^{{}^{\prime}}(\zeta))^{q_{1}}\cdots(u_{n}^{(k)}(\zeta))^{q_{k}}$
$\displaystyle=$
$\displaystyle\rho_{n}^{\mu_{*}-\alpha\mu}(f_{n}(z_{n}+\rho_{n}\zeta))^{q_{0}}(f_{n}^{{}^{\prime}}(z_{n}+\rho_{n}\zeta))^{q_{1}}\cdots(f_{n}^{(k)}(z_{n}+\rho_{n}\zeta))^{q_{k}}$
$\displaystyle=$
$\displaystyle(f_{n}(z_{n}+\rho_{n}\zeta))^{q_{0}}(f_{n}^{{}^{\prime}}(z_{n}+\rho_{n}\zeta))^{q_{1}}\cdots(f_{n}^{(k)}(z_{n}+\rho_{n}\zeta))^{q_{k}}.$
Since $u_{n}(\zeta)\to u(\zeta)$ locally uniformly in the spherical metric,
$V_{n}(\zeta)\to V(\zeta)$ locally uniformly in the spherical metric as well.
Since $\\{f_{n}\\}$ is a sequence of analytic functions and the $\rho_{n}$ are
positive numbers, $\\{u_{n}(\zeta)\\}$ is a sequence of analytic functions
which converges locally uniformly and spherically to $u(\zeta)$. Since
$u(\zeta)$ is non-constant, $u(\zeta)$ must be a non-constant analytic
function.
Since any zero of $f_{n}$ has multiplicity at least $k$, Hurwitz's theorem
implies that any zero of $u(\zeta)$ also has multiplicity at least $k$. Thus
obviously $V(\zeta)\not\equiv 0$.
Again, since $V_{n}(\zeta)\neq 1$ and $V_{n}(\zeta)\to V(\zeta)$ locally
uniformly and spherically, Hurwitz's theorem gives $V(\zeta)\neq 1$.
Hence $u(\zeta)$ cannot be transcendental; otherwise Theorem 1.1 would imply
that $V(\zeta)=1$ has infinitely many solutions, which is impossible.
Thus $u(\zeta)$ must be a non-constant polynomial function, say
$u(\zeta)=c_{0}+c_{1}\cdot\zeta+\cdots+c_{r}\cdot\zeta^{r}.$
Since any zero of $u(\zeta)$ has multiplicity at least $k$, the degree $r$
must be at least $k$. Thus $u(\zeta)$ is a polynomial of degree at least $k$;
but then $V(\zeta)$ is a non-constant polynomial, so the equation
$V(\zeta)=1$ must have a root, which contradicts $V(\zeta)\neq 1$. Thus our
assumption is wrong, and we obtain our result.
∎
Acknowledgement
The authors are grateful to the anonymous referee for his/her valuable
suggestions which considerably improved the presentation of the paper.
The first author is thankful to the Council of Scientific and Industrial
Research, HRDG, India for granting Junior Research Fellowship (File No.:
08/525(0003)/2019-EMR-I) during the tenure of which this work was done.
The research work of the second author is supported by the Department of
Higher Education, Science and Technology & Biotechnology, Govt. of West Bengal
under the sanction order no. 216(sanc)/ST/P/S&T/16G-14/2018 dated 19/02/2019.
## References
* [1] B. Chakraborty, S. Saha, A. K. Pal and J. Kamila, Value distribution of some differential monomials, Filomat, Accepted for publication.
* [2] W. K. Hayman, Picard values of meromorphic functions and their derivatives, Ann. Math., 70(1959), 9-42.
* [3] W. K. Hayman, Meromorphic Functions, The Clarendon Press, Oxford (1964).
* [4] W. K. Hayman and J. Miles, On the growth of a meromorphic function and its derivatives, Complex Variables, 12 (1989), 245–260.
* [5] X. Huang and Y. Gu, On the value distribution of $f^{2}f^{(k)}$, J. Aust. Math. Soc., 78(2005), 17-26.
* [6] H. Karmakar and P. Sahoo, On the Value Distribution of $f^{n}f^{(k)}-1$, Results Math., 73(2018), doi:10.1007/s00025-018-0859-9.
* [7] I. Lahiri and S. Dewan, Inequalities arising out of the value distribution of a differential monomial, J. Inequal. Pure Appl. Math. 4(2003), no. 2, Art. 27.
* [8] W. Lü and B. Chakraborty, On the value distribution of a differential monomial and some normality criteria, arXiv:1903.10940.
* [9] E. Mues, Über ein Problem von Hayman, Math. Z., 164(1979), 239-259.
* [10] X. C. Pang, Bloch’s principle and normal criterion, Sci. China Ser. A, 33 (1989), 782-791.
* [11] J. Schiff, Normal Families, Springer-Verlag, Berlin, 1993.
* [12] C. C. Yang and H. X. Yi, Uniqueness Theory of Meromorphic Functions, Kluwer Academic Publishers, Dordrecht, The Netherlands, (2003).
* [13] L. Zalcman, Normal families: new perspectives, Bull. Amer. Math. Soc., 35 (1998), 215-230.
* [14] Q. D. Zhang, A growth theorem for meromorphic functions, J. Chengdu Inst. Meteor., 20(1992), 12-20.
# Personal Privacy Protection via Irrelevant Faces Tracking and Pixelation in
Video Live Streaming
Jizhe Zhou, Chi-Man Pun
###### Abstract
To date, privacy-protection oriented pixelation tasks are still labor-
intensive and largely unstudied. With the prevalence of video live streaming,
establishing an online face pixelation mechanism during streaming is urgent.
In this paper, we develop a new method called Face Pixelation in Video Live
Streaming (FPVLS) to generate automatic personal privacy filtering during
unconstrained streaming activities. Simply applying multi-face trackers
encounters problems of target drifting, computing efficiency, and over-
pixelation. Therefore, for fast and accurate pixelation of irrelevant people's
faces, FPVLS is organized in a frame-to-video structure with two core stages.
On individual frames, FPVLS utilizes image-based face detection and embedding
networks to yield face vectors. In the raw trajectory generation stage, the
proposed Positioned Incremental Affinity Propagation (PIAP) clustering
algorithm leverages face vectors and position information to quickly
associate the same person's faces across frames. Such frame-wise accumulated
raw trajectories are likely to be intermittent and unreliable at the video
level. Hence, we further introduce the trajectory refinement stage, which
merges a proposal network with a two-sample test based on the Empirical
Likelihood Ratio (ELR) statistic to refine the raw trajectories. A Gaussian
filter is laid on the refined trajectories for final pixelation. On the video
live streaming dataset we collected, FPVLS obtains satisfying accuracy and
real-time efficiency, and contains the over-pixelation problem.
###### Index Terms:
face pixelation, video live streaming, privacy protection, positioned
incremental affinity propagation, empirical likelihood ratio
## I Introduction
Figure 1: Differences in the drifting and over-pixelation issues as handled by
FPVLS (yellow) and the tracker-based YouTube Studio tool (red). The original
snapshots of the live streaming scene are listed in the upper row from left to
right. The middle row, with red circular mosaics, shows the offline pixelation
results of the YouTube Studio tool. The bottom row, with yellow rectangular
mosaics, shows the online pixelation results of FPVLS.
Video live streaming has never been as prevalent as in the past two or three
years. Live streaming platforms, sites, and streamers have rapidly become part
of our daily life. Benefiting from the popularity of smartphones and cheaper
5G networks, online video live streaming profoundly shapes our daily life [1].
Streaming techniques endow people with the capability to instantly record and
broadcast real scenes to audiences. The primary hosts of live streaming
activities have shifted from TV stations, news bureaus, and professional
agencies to ordinary people. Without imposed censorship, these private
streaming channels severely disregard personal privacy rights [2]. As a
result, streamers, together with live streaming platforms, have raised a host
of privacy infringement issues. Despite different privacy laws in different
regions, these privacy infringements are always concerning and sometimes even
amount to crimes [3].
Apart from the indifference of streamers, the handcrafted and labor-intensive
pixelation process is in fact the main reason leading to rampant privacy
violations amid streaming [4]. Streaming platforms are simply incapable of
allocating adequate human labor to executing censorship anytime, anywhere.
As a major hosting service provider, YouTube is already aware of this problem
and published its offline face pixelation tool in the latest YouTube Creator
Studio [5]. When uploading videos, creators are now required to pixelate the
faces of children under age 13 through YouTube Creator Studio unless parental
consent is obtained [6]. YouTube Live and Facebook Live experienced explosive
growth during the global COVID-19 pandemic, and how to enforce similar privacy-
protection policies in streaming is under heated discussion. As far as we
know, current studies, including YouTube Studio [5] and Microsoft Azure [7],
mainly focus on processing offline videos, while leaving face pixelation for
online video live streaming underexplored. Thus, in this paper, we build a
privacy-protection method named Face Pixelation in Video Live Streaming
(FPVLS) to blur irrelevant people's faces amid streaming. FPVLS protects
personal privacy rights as well as promotes the sound development of the
video live streaming industry.
The tangible reference for FPVLS is the mosaics manually placed in TV shows.
People who are unwilling to appear in front of the audience have their faces
pixelated in the show's post-production process. Hence, FPVLS can be
intuitively viewed as an automated replica of the manual face pixelation
process. Furthermore, the manual face pixelation process is remarkably close
to multi-face tracking (MFT) algorithms, because both involve continuous
identification of target faces across frames. Fine-tuning MFT algorithms on
live streaming therefore seems to be a straightforward solution for FPVLS.
Following this interpretation, we directly migrated and tuned multi-face
trackers on our collected video live streaming dataset. However, extensive
tests reveal that migrated multi-face trackers suffer from a severe drifting
issue. Fig. 1 demonstrates a typical street-streaming scene from left to
right. A street hawker is waving his hands at the female streamer in greeting.
Without the hawker's permission, his face shall be pixelated for privacy
protection. Meanwhile, a passerby in a light green shirt is passing between
the streamer and the hawker. YouTube Studio, which relies on MFT algorithms,
generates the middle row's mosaics circled in red. During the crossover of the
passerby and the hawker (middle three snapshots), the hawker's face mosaics
drift manifestly. At the rightmost snapshot, the mosaics are even brought to
the passerby's neck region and leave the hawker's face completely exposed.
Such a drifting issue is mainly caused by the underperformance of face
detection algorithms in unconstrained live-streaming scenes. State-of-the-art
MFT studies adopt a similar tracking-by-detection structure [8, 9, 10], which
places face detectors ahead of trackers. Detectors specify the bounding boxes
of faces on frames and pass the results to trackers. Trackers either designate
an existing tracklet or initiate a new tracklet for each detection, and
further merge tracklets into an intact trajectory. The detector's outcomes
primarily determine the quality of the tracklets, and trackers accept
tracklets as priors when making scene-based matching or reasoning. In the
overall performance of an MFT algorithm, the detector plays as pivotal a role
as the tracker. However, detection accuracy is seldom mentioned or discussed
in current MFT algorithms [8, 9, 10, 11, 12]. Benchmark tests for these MFT
algorithms are carried out on music videos, movie clips, TV sitcoms, or CCTV
footage recorded by professional, high-resolution, steady, or stably moving
cameras with various shot changes. Face detectors function well in these
videos; accordingly, tracklets can be reliably generated. Clean face detection
results and reliable primal tracklets are taken for granted, and the major
concern of current MFT is how to establish consistent, accurate associations
among tracklets under frequently switching shots.
In practice, live streaming scenes are commonly recorded by handheld mobile
phone cameras with only a single or a few shots. Likewise, the video quality
can only be maintained at a relatively low level (720p/30FPS or lower).
Crowded scenarios, frequent irregular camera motions, and abrupt broadcasting
resolution conversions are everywhere in live streaming. All these harsh
realities indicate that face detection networks are no longer able to yield
accurate results for the generation of reliable tracklets. Once scattered or
unreliable tracklets are initialized, trackers are prone to fast corruption,
which jeopardizes the efficacy of MFT-based pixelation methods. Only very few
previous works [13, 11] noticed the uncertainty caused by unreliable detection
results, and they hardly explored solutions.
Besides the drifting, an apparent over-pixelation problem can be observed in
the middle snapshots of Fig. 1. Introduced by the tracking algorithms and
inherited by tracking-based pixelation methods, over-pixelation occurs when
pixelation methods generate unnecessary and excessive mosaics for
unidentifiable faces. Heavy or full occlusion and massive motion blur are the
common causes of unidentifiable faces. Over-pixelation is an intrinsic problem
when migrating tracking algorithms to pixelation tasks. MFT algorithms benefit
from estimating the locations of temporarily unidentifiable faces [14, 12].
Such an evidence accumulation process contributes substantially to
constructing long enough trajectories, thereby avoiding frequent face ID
switches. Unlike tracking, while redacting irrelevant faces out, pixelation
tasks are eager to preserve as much of the original content for the audience
as possible.
Consequently, to accomplish the irrelevant people's face tracking and
pixelation task in streaming scenes, FPVLS adopts a brand new, clustering and
re-detection based, frame-to-video framework. The core of this framework is
the raw face trajectory generation stage and the trajectory refinement stage.
In a nutshell, once streaming starts, face detection and embedding
(recognition) networks are alternately applied on frames to yield face
vectors. Noise caused both by false positives in detection and by variations
in the embedding networks is concealed in these face vectors. Thereupon,
Positioned Incremental Affinity Propagation (PIAP) is proposed to cope with
this noise and generate raw face trajectories. PIAP inherits the ability of
clustering under ill-defined cluster numbers from classic Affinity
Propagation (AP) [15]. We introduce position information to revise the
affinities, thereby countering the embedding variations and excluding the
noise as outliers. We further employ incremental clustering to accelerate the
consensus-reaching process. Besides the noise, false negatives in detection
cause intermittences in the raw trajectories. The trajectory refinement stage
aims to elaborately compensate for the intermittences without provoking
over-pixelation. We construct a proposal network to re-detect the faces in the
intermittences. The two-sample test based on the Empirical Likelihood Ratio
(ELR) is then applied to cull the non-faces from the re-detection results. The
proposal network and the two-sample test collaborate to yield the final
trajectories. Lastly, the post-process places Gaussian filters on the refined
trajectories for pixelation. An example of FPVLS is presented in the bottom
row of Fig. 1. The mosaics marked by yellow rectangles remain on the hawker's
face after the crossover; the drifting issue is greatly alleviated. Further,
as shown in the middle pictures of the third row, while the target face is
fully occluded, FPVLS does not produce baffling, over-pixelated mosaics like
YouTube Studio or other tracking-based methods.
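As a concrete illustration of the final pixelation step, the sketch below applies a separable Gaussian blur to one face bounding box on a grayscale frame. The function names, kernel size, and sigma are our own illustrative choices, not part of FPVLS; a deployed system would blur every box on a refined non-streamer trajectory.

```python
import numpy as np

def gaussian_kernel(ksize, sigma):
    """1-D normalized Gaussian kernel of length `ksize`."""
    x = np.arange(ksize) - (ksize - 1) / 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def pixelate_box(frame, box, ksize=15, sigma=4.0):
    """Blur one (y1, y2, x1, x2) face box of a 2-D grayscale frame."""
    y1, y2, x1, x2 = box
    roi = frame[y1:y2, x1:x2].astype(float)
    k = gaussian_kernel(ksize, sigma)
    # separable Gaussian: convolve each row, then each column of the box
    roi = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, roi)
    roi = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, roi)
    out = frame.copy()
    out[y1:y2, x1:x2] = roi
    return out

# toy frame: a bright 10x10 "face" inside a dark background
frame = np.zeros((40, 40))
frame[10:20, 10:20] = 255.0
blurred = pixelate_box(frame, (5, 25, 5, 25))
```

Pixels outside the box are untouched, while the bright peak inside the box is flattened below its original value.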
Specifically, this paper makes the following contributions:
* •
We build the first online pixelation framework: Face Pixelation in Video Live
Streaming (FPVLS). FPVLS adopts PIAP clustering and the ELR-based two-sample
test to alleviate trajectory drifting as well as contain the over-pixelation
problem.
* •
We propose the Positioned Incremental Affinity Propagation (PIAP) clustering
algorithm to generate raw trajectories from inaccurate face vectors. PIAP
spontaneously handles the cluster-number generation problem, and is endowed
with noise-resistance and time-saving merits through position information and
incremental clustering.
* •
A non-parametric two-sample test based on the empirical likelihood ratio (ELR)
statistic is introduced. Cooperating with the proposal network, the two-sample
test compensates for the deep networks' insufficiency through the ELR
statistic and avoids over-pixelation in the trajectory refinement. Such an
error rejection algorithm can also serve other face detection algorithms.
* •
We collected a diverse video live streaming dataset from streaming platforms
and manually labeled dense annotations (51,040 face labels) on frames to
conduct our experiments. This dataset will be released to the public and could
serve as a benchmark dataset for future studies on video live streaming.
The remainder of the paper is organized as follows. Section II reviews related
work; Section III details the methods of FPVLS; experiments, results, and
discussion can be found in Section IV; Section V concludes and discusses
future work.
## II Related Works
To the best of our knowledge, face pixelation in video live streaming has not
been studied before. Referring to existing techniques, we first tried to
implement FPVLS in an end-to-end manner by replacing the 2D deep CNN with its
extension, the 3D CNN [16]. Taking an entire video as input, a 3D CNN handles
spatial-temporal information simultaneously and achieves great success in the
video segmentation [17, 18] and action recognition areas [16, 19]. However,
tested on the live streaming dataset we collected, 3D CNN models are quite
time-consuming and extremely sensitive to video quality and length.
Furthermore, without sufficient training data, and since they are designed to
cope with high-level abstractions of video content, 3D CNNs cannot precisely
handle the pixelation task at the individual frame level. Therefore, the face
mosaics generated by the 3D CNN model constantly blink during streaming.
The other natural way of realization is the multi-object tracking (MOT) or
multi-face tracking (MFT) algorithms discussed in the Introduction. Current
offline pixelation methods, including YouTube Studio and Microsoft Azure, are
mainly implemented on MFT. Although MOT and MFT are claimed to be thoroughly
studied [14], their tracking accuracy remains challenging in unconstrained
videos. Reviewing state-of-the-art trackers, we find two main categories:
offline and online trackers. The former category assumes the object detection
in all frames has already been conducted, and clean tracklets are established
by linking different detections in offline mode [20]. This property of offline
trackers allows global optimization of the path [21] but makes them incapable
of dealing with online tasks.
State-of-the-art online MOT algorithms [10, 9, 22, 23, 24] combine continuous
motion prediction, online learning, and the correlation filter to find an
operator that gradually updates the tracker at each timestamp. KCF [10] and
ECO [9] are representatives of this kind. Within the same shot, these
trackers assume a predictable position for targets in adjacent frames.
Although this assumption is generally applicable in conventional videos, it
does not hold in video live streaming due to frequent camera shakes. Tests
of such tracking algorithms are usually conducted on high-resolution music
video datasets in the absence of massive camera moves and motion blur [25].
Moreover, MOT sometimes encounters a size issue when directly applied to the
face tracking field. Faces may be so small (e.g., 12×12 pixels) that the
online-learned operator is not robust enough to progressively locate the time-
invariant features of the face. Therefore, correlation filter based MOT
algorithms also suffer drastic drifting issues on face pixelation tasks.
State-of-the-art MFT algorithms adopt the tracking-by-detection structure [26,
11, 27, 28]; the IOU tracker [26] and the POI tracker [11] are outstanding
examples. The IOU tracker applies detection networks on the current frame. The
bounding box with the largest IoU value is assigned to the corresponding
trajectory, referring to the tracking list. The IOU tracker is agile in
processing frames but relies heavily on the detection results and does not
consider the temporal connection between frames. Thereby, false positives and
false negatives in detection are hard to eliminate and cause instantly
drifting mosaics. The POI tracker replaces the correlation filters with deep
embedding networks. The online POI tracker applies detection networks in all
frames and divides the tracking video into multiple disjoint segments. It
then uses the dense neighbors (DN) search to associate the detection responses
into short tracklets in each segment. Several nearby segments are merged into
a longer segment, and the DN search is applied again in each longer segment to
associate existing tracklets into longer tracklets. To some degree, the POI
tracker shares a cognate idea with our raw face trajectory generation stage in
the video segmentation, detection, and embedding aspects. However, POI uses a
DN search instead of clustering algorithms and does not apply further
refinement. Therefore, the POI tracker is also profoundly affected by
inaccurate detections containing frequent false positives and negatives.
Besides, widely accepted solutions of MOT and MFT [29, 30] in unconstrained
videos assume some priors, including the number of people showing up in the
video, the initial positions of the people, no fast motions, no camera
movements, and a high recording resolution. The state-of-the-art studies on
MFT [31] manage to exclude as many priors as possible but still require the
full-length video in advance for analysis.
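For reference, the IoU matching criterion that the IOU tracker builds on can be sketched as follows. This is a generic formulation with our own function names, not the tracker's actual code: a detection on the current frame is assigned to the tracked box it overlaps most.

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_detection(det, tracked):
    """Index of the tracked box with the largest IoU against `det`."""
    return max(range(len(tracked)), key=lambda i: iou(det, tracked[i]))
```

As the paragraph above notes, this greedy per-frame assignment is fast but carries every detector error straight into the trajectory.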
Figure 2: Proposed framework of FPVLS.
## III Proposed Methods
Considering the aforementioned pros and cons, we construct FPVLS as depicted
in Fig. 2. Along with a preprocessing and a postprocessing stage, FPVLS has
two core stages: the raw face trajectory generation stage and the trajectory
refinement stage. In Fig. 2, stages are enclosed by yellow rectangles, and
the pivotal intermediate results or algorithms are surrounded by dark
rectangles.
Specifically: (1) Preprocessing: We design a buffer section at the beginning
of each live stream. The buffer section enables FPVLS to slice an entire video
stream into video segments, thereby enabling the later trajectory refinement
stage. Besides, the buffering time can be leveraged to designate the
streamers' faces (e.g., by touching the phone screen). Face detection and
embedding networks are also applied in this stage to generate face vectors.
(2) Raw Face Trajectory Generation: Raw face trajectories link the same
person's face vectors across frames through clustering. For the clustering at
hand, the number of people showing up in the stream cannot be known in
advance, the face vectors contain noise, and real-time efficiency must be met.
Hence, we propose PIAP, built upon AP [15]. Position information revises the
affinities, thereby countering the embedding variations and excluding the
noise as outliers. The incremental propagation accelerates the consensus-
reaching process, thereby guaranteeing efficiency. (3) Trajectory Refinement:
Besides the noise, raw trajectories incur intermittences due to false
negatives in detection. A proposal network is built to propose suspicious face
areas. Afterward, the two-sample test based on the empirical likelihood ratio
statistic is applied to reject the inappropriate bounding boxes yielded by the
proposal network and fill the accepted ones into the intermittences for
refinement. The discontinuity of raw trajectories is therefore carefully
eliminated without provoking the over-pixelation problem. (4) Postprocessing:
Gaussian filters with various kernel values can be placed on the
non-streamers' trajectories to generate the final pixelated stream, which is
then broadcast to the audience.
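Since PIAP builds on classic AP, a minimal NumPy sketch of plain Affinity Propagation (the standard responsibility/availability updates of Frey and Dueck, without our positional revision of affinities and without the incremental extension) illustrates the clustering core. The similarity matrix, damping value, and iteration count below are illustrative choices of ours.

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Classic Affinity Propagation on a similarity matrix S.
    Returns the exemplar index chosen for each point."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = np.argmax(AS, axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * R_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        A_new = Rp.sum(axis=0, keepdims=True) - Rp
        diag = np.diag(A_new).copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1 - damping) * A_new
    return np.argmax(A + R, axis=1)

# toy 1-D "face vectors": two well-separated groups of three points
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(x[:, None] - x[None, :]) ** 2
np.fill_diagonal(S, np.median(S))   # shared preference on the diagonal
labels = affinity_propagation(S)
```

Note that the number of clusters is never specified; it emerges from the preference values on the diagonal, which is exactly the property PIAP inherits for an unknown number of people in the stream.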
### III-A Preprocessing
#### III-A1 Video Segments
FPVLS deals with video live streaming; the processing speed of FPVLS, counted
in pixelated frames-per-second (FPS), shall be greater than or equal to the
stream's original broadcasting speed, which is also counted in FPS. Usually,
the broadcasting FPS is constant. Therefore, if we denote the broadcasting FPS
as ${FPS}$, we have pixelation FPS $\geq{FPS}$ as a standing requirement.
Then, we can stack every $\mathcal{N}$ frames into a short video segment by
demanding an at least $(2*\mathcal{N})$-frame buffer section at the very
beginning of the stream, without causing discontinuities in the broadcasting
after pixelation (Proposition 1).
Afterward, leveraging image-based face detection and recognition networks
frame-by-frame, the face vectors in a segment are obtained. Video segments
reform the commonly hours-long video live stream into numerous segments at
the seconds level. In this manner, we alleviate the conflict between accuracy
and efficiency by transforming it into a broadcasting latency. Considering the
primal latency brought by communication and compression, such a buffering
latency is minor and may not be noticed. However, the length of the segments
shall still be restrained within a rational interval to balance the
performance of FPVLS and the user-experienced latency.
###### Proposition 1.
If video segments length is $\mathcal{N}$ frames, a $(2*\mathcal{N})$ frames
buffer section can ensure the broadcasting continuity after pixelation.
###### Proof.
The recording of a video is continuous, and normally the recording FPS $={FPS}$. FPVLS receives this video as raw input and processes it in units of video segments. A frame $Fr$ recorded at frame number $f$ is stacked by FPVLS into segment $Sg$; the entire segment $Sg$ is then processed and finally broadcast.
Thus, we can have:
* •
The recording of all the frames belonging to $Sg$ is completed at time $\lceil\frac{f}{\mathcal{N}}\rceil*\frac{\mathcal{N}}{FPS}$.
* •
FPVLS takes at most $\frac{\mathcal{N}}{FPS}$ seconds to process $Sg$.
* •
$Sg$ is then broadcast, and an extra $\frac{f}{FPS}-\lfloor\frac{f}{\mathcal{N}}\rfloor*\frac{\mathcal{N}}{FPS}$ seconds are needed to display $Fr$, as $Sg$ is broadcast from its beginning frame.
The broadcasting frame number of $Fr$ after FPVLS is $FPS$ times the sum of the three durations above:
$FPS*\{\lceil\frac{f}{\mathcal{N}}\rceil*\frac{\mathcal{N}}{FPS}+\frac{\mathcal{N}}{FPS}+\frac{f}{FPS}-\lfloor\frac{f}{\mathcal{N}}\rfloor*\frac{\mathcal{N}}{FPS}\}=f+\mathcal{N}+\mathcal{N}*(\lceil\frac{f}{\mathcal{N}}\rceil-\lfloor\frac{f}{\mathcal{N}}\rfloor)\leq f+2*\mathcal{N}$
That is, when the segment length is $\mathcal{N}$ frames, FPVLS broadcasts any frame with a lag of at most $(2*\mathcal{N})$ frames. We can cover this lag entirely by demanding a $(2*\mathcal{N})$-frame buffer section at the beginning of the stream. ∎
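The arithmetic of the proof can be checked numerically. The sketch below (illustrative only, not part of FPVLS itself) evaluates the three terms of the proof for every frame index and confirms that the worst-case lag is exactly $2*\mathcal{N}$ frames:

```python
import math

def broadcast_lag_frames(f, n):
    """Broadcasting lag (in frames) of frame f for segment length n,
    following the three terms in the proof of Proposition 1."""
    record_done = math.ceil(f / n) * n      # recording of Sg completes
    process = n                             # worst-case processing time
    display = f - math.floor(f / n) * n     # Sg plays from its first frame
    return record_done + process + display - f

# the lag is N on segment boundaries and 2N otherwise,
# so the worst case over any stream is exactly 2N frames
worst = max(broadcast_lag_frames(f, 30) for f in range(1, 2000))
print(worst)  # 60
```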
#### III-A2 Face Detection and Recognition on Frames
In this paper, we use MTCNN [29] and CosFace [32] to process every frame in turn. MTCNN and CosFace are chosen for the convenience of the later affinity computation in PIAP and the training of the proposal network. However, note that the detection and embedding networks could be substituted by other state-of-the-art work such as PyramidBox [33] and ArcFace [34]. Also, instead of indulging in comparing or tuning pre-trained face detection and embedding algorithms, we intentionally avoid the most advanced algorithms to demonstrate the performance of FPVLS under relatively poor priors. MTCNN accepts inputs of arbitrary size and detects faces larger than $12*12$ pixels. CosFace reads the face detection results as input and generates a $512$-dimensional feature vector for each face. Face alignment is applied right after detection: every detected face is cropped out of the frame and aligned to a frontal pose through an affine transformation before embedding. Since the refinement stage recovers missed detections, the detection threshold is initialized to a higher value to better suppress false positives.
### III-B Raw Face Trajectories Generation
The embedded face vectors are further clustered to connect the same face across frames and segments. DBSCAN [35], FCM [36], Affinity Propagation (AP) [15], and their extensions are the top candidates for object clustering in data streams. As noisy detection results are common in videos and the data size for clustering is unbalanced, the density-based DBSCAN is ruled out. The fuzzy-logic-based FCM mandatorily requires the cluster number for initialization. The remaining AP can handle an ill-defined cluster number and cluster under unbalanced data sizes, but it has intrinsic noise-sensitivity and time-consumption defects. Therefore, in the proposed PIAP, pairwise affinities based on deep feature vectors are revised according to position information, and the time to reach consensus is significantly shortened by assigning responsibilities and availabilities to newly arriving vectors according to those in the last segment. PIAP produces the face clusters, and an inner-cluster sequential link then quickly generates the faces’ raw trajectories.
#### III-B1 Affinity Propagation
The key of AP [15] is to select the exemplars, which are the representatives of clusters, and then assign the remaining nodes to their preferred exemplars. As in traditional clustering algorithms, the very first step of AP is measuring the distances between data nodes, also called similarities or affinities. Following the common notation in AP clustering, $i$ and $k$ ($i,k\in R^{D}$) are two data nodes, and $S(i,k)$ denotes the similarity between nodes ${i}$ and ${k}$, indicating how well node $k$ is suited to be the exemplar for node $i$. $S$ is the similarity matrix that stores the similarities between every two nodes; $S(i,k)$ is the element in row $i$, column $k$ of $S$. Similar notations are used below. The diagonal entry $S(k,k)$ denotes the preference for selecting $k$ as an exemplar; it is called the preference value $p(k)$ in some studies [37].
A series of messages is then passed among all data nodes to reach an agreement on the exemplars’ selection. These messages are the responsibilities $R(i,k)$ and availabilities $A(i,k)$; $R$ and $A$ are the responsibility and availability matrices. Node $i$ passes $R(i,k)$ to its potential exemplar $k$, indicating the current willingness of $i$ to choose $k$ as its exemplar considering all the other potential exemplars. Correspondingly, $k$ responds with $A(i,k)$ to $i$, implying the current willingness of $k$ to accept $i$ as its member considering all the other potential members. Then, $R(i,k)$ is updated according to $A(i,k)$, and a new iteration of message passing is issued. The consensus of this sum-product message-passing process ($R(i,k)$ and $A(i,k)$ remain the same after an iteration) indicates that the final agreement of all nodes on exemplar selection and cluster association has been reached. The sum of $R(i,k)$ and $A(i,k)$ directly gives the fitness of choosing $k$ as the exemplar of $i$. Apart from handling an ill-defined cluster number, AP is not sensitive to initialization settings, selects real data nodes as exemplars, allows an asymmetric matrix as input, and is more accurate when measured by the sum of squared errors. Therefore, considering the subspace distribution (measured by least-squares regression), AP is effective in generating robust and accurate clustering results for high-dimensional data like face vectors.
We initialize both the responsibility and availability matrices to zero at the commencement, and following [15], the computations of $R$ and $A$ are:
$R(i,k)\leftarrow S(i,k)-\max_{k^{\prime},s.t.k^{\prime}\neq k}\{A(i,k^{\prime})+S(i,k^{\prime})\}$ (1)
$A(i,k)\leftarrow\min\{0,R(k,k)+\sum_{i^{\prime},i^{\prime}\notin\{i,k\}}\max\{0,R(i^{\prime},k)\}\}$ (2)
Equation (3) fills in the elements on the diagonal of the availability matrix:
$A(k,k)\leftarrow\sum_{i^{\prime},s.t.i^{\prime}\neq k}\max\{0,R(i^{\prime},k)\}$ (3)
Responsibilities and availabilities are updated according to (1), (2), and (3) until convergence; then the criterion matrix $C$, which holds the exemplars as well as the clustering results, is simply the element-wise sum of the availability and responsibility matrices. The highest value in each row of $C$ designates that node’s exemplar. Naturally, as each row stands for a data node, data nodes that share the same exemplar belong to the same cluster. Therefore, the exemplars $\hat{c}=\{c_{1},...,c_{i},...,c_{n}\},\hat{c}\in C$ can be computed as:
$c_{i}=\arg\max_{k}\\{A(i,k)+R(i,k)\\}$ (4)
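The updates (1)-(4) can be condensed into a short, vectorized loop. The following sketch is a generic damped AP implementation (the damping factor and iteration cap are our illustrative choices, not values prescribed by the paper):

```python
import numpy as np

def affinity_propagation(S, max_iter=200, damping=0.5):
    """Damped sum-product message passing of Eqs. (1)-(4).

    S: (n, n) similarity matrix with preferences p(k) on the diagonal.
    Returns, for every node, the index of its chosen exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    idx = np.arange(n)
    for _ in range(max_iter):
        # Eq. (1): R(i,k) = S(i,k) - max_{k' != k} {A(i,k') + S(i,k')}
        AS = A + S
        best = AS.argmax(axis=1)
        first = AS[idx, best]
        AS[idx, best] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[idx, best] = S[idx, best] - second
        R = damping * R + (1 - damping) * R_new
        # Eqs. (2)-(3): availabilities from positive responsibilities
        Rp = np.maximum(R, 0)
        Rp[idx, idx] = R[idx, idx]
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = A_new[idx, idx].copy()
        A_new = np.minimum(A_new, 0)
        A_new[idx, idx] = diag
        A = damping * A + (1 - damping) * A_new
    # Eq. (4): the exemplar of node i maximizes A(i,k) + R(i,k)
    return (A + R).argmax(axis=1)
```

On a toy set of four 2-D points with negative squared Euclidean distances as similarities, the loop recovers the two evident clusters.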
#### III-B2 Positioned Incremental Affinity Propagation
(1)-(4) are the original equation for classic AP. In most trackers, the
critical assumption is that the motion of the same face between consecutive
frames is minor within a single shot. Correlation filters, IOU trackers, or
other online learning networks are all working under such an assumption.
However, in live streaming videos, the frequent camera shakes and the
resolution conversion invalid this assumption. The only fact that helps
exclude outliers from the classic AP algorithm is that any face can only
appear in one position in a single frame. Therefore, faces belonging to the
same frame should have small enough similarities. Cosine similarity is then
brought for measurements. Let $j$ stands for the other face vectors that
belong to the same frame as $i$. The similarity matrix can be computed as:
$S(i,k)=\left\\{\begin{array}[]{lr}\frac{i\cdot k}{\|i\|\|k\|}-1,\quad\mbox{
if }\ k\notin{j}\\\ \\\ -1,\quad\mbox{ if }\ k\in j\end{array}\right.$ (5)
$S(i,k)\in[-1,0]$, since the negative similarity is more decent for the
convergence of message passing.
We intentionally avoid directly manipulating the vectors’ values as in some other face clustering works [38]. Although the embedding network is not perfect and will surely generate outliers due to many factors, it is still fairly reliable considering the face vectors’ subspace distribution. Instead of any rushed tuning of the face vectors, we solely force the similarities to the minimum if the faces come from a single frame.
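Eq. (5) amounts to a cosine-similarity matrix shifted into $[-1,0]$ with same-frame pairs clamped to the minimum. A minimal sketch (leaving the preference diagonal $p(k)$ at its shifted self-similarity of zero is our simplification; in practice it would be set separately):

```python
import numpy as np

def positioned_similarity(vectors, frame_ids):
    """Eq. (5): cosine similarity shifted into [-1, 0], with pairs of
    faces from the same frame forced to the minimum similarity -1."""
    V = np.asarray(vectors, dtype=float)
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    S = (V @ V.T) / (norms @ norms.T) - 1.0
    same_frame = np.equal.outer(frame_ids, frame_ids)
    np.fill_diagonal(same_frame, False)   # keep the preference diagonal
    S[same_frame] = -1.0
    return S
```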
Moreover, $p(k)$, the diagonal of the similarity matrix $S$, reflects how suitable $k$ is as a potential exemplar for itself. $p(k)$ is directly assigned to $R(k,k)$ according to (1). Therefore, in (2), the evidence term on the right-hand side comprises how much $k$ can stand for itself ($R(k,k)$) and how much $k$ can stand for the others ($\sum_{i^{\prime},i^{\prime}\notin\{i,k\}}\max\{0,R(i^{\prime},k)\}$). Every positive responsibility toward $k$ contributes to $A(i,k)$. As mentioned above, any face can have only one position in a single frame, so $A(i,k)$ should not count $j$’s ($j\in k$) choice as part of $i$’s availability. One step further, the choice of $j$ should actually repel $i$ in availability, and equation (2) is rewritten as:
$A(i,k)\leftarrow\min\{0,R(k,k)+\sum_{i^{\prime},i^{\prime}\notin\{i,j,k\}}\max\{0,R(i^{\prime},k)\}-\sum_{i^{\prime},i^{\prime}\in\{j\}}\max\{0,R(i^{\prime},k)\}\}$ (6)
(5) and (6) ensure that $i$ and $j$ are mutually exclusive in the clustering process. If $i$ drifts and aggregates into another cluster that contains $j$, the two will repel each other during the affinity propagation process. The significant drop in their availabilities causes the corresponding responsibilities to drop drastically as well. This chain reaction does not stop until one of them is reassigned to another cluster through the message passing. Also, as $S(i,j)=-1$, the responsibility in (1) guarantees that $j$ will not be selected as a potential exemplar of $i$ during initialization.
Besides the position information, we employ incremental clustering to address the efficiency issue. The key challenge in building incremental AP is that the previously computed data nodes are already connected through nonzero responsibilities and availabilities, whereas the newly arrived ones remain at zero (detached). Data nodes arriving at different timestamps therefore stay in varying statuses with disproportionate relationships. Simply rolling their affinities forward at each timestamp cannot yield a solution for incremental AP.
In our case, video frames are strongly correlated data. According to the previous segment, we can assign proper values to newly arrived faces in adjacent segments without affecting the clustering purity. The face vectors belonging to a particular person are supposed to stay close to each other; that is, the same person’s faces should gather within a small area of the feature space. Thus, our incremental AP algorithm is based on the following fact: if two detected faces are in adjacent segments and refer to one person, they should be clustered into the same group and have similar responsibilities and availabilities. Rather than starting from zero, we can assign the same responsibilities and availabilities to the newly arriving vectors and considerably trim the message-passing process. This fact is not well considered in past studies of incremental affinity propagation [38, 39].
Following the commonly used notation, the similarity matrix at time $t-t^{\prime}$ is denoted $S_{t-t^{\prime}}$ with dimension $(M_{t-t^{\prime}}*M_{t-t^{\prime}})$, where $(t-t^{\prime})=\frac{\mathcal{N}}{FPS}$. The responsibility and availability matrices at time $t-t^{\prime}$ are $R_{t-t^{\prime}}$ and $A_{t-t^{\prime}}$, with the same dimension as $S_{t-t^{\prime}}$. Marking the closest face vector to the newly arrived $i$ as $i^{\prime}$, we have:
$i^{\prime}=\arg\max_{i^{\prime},i^{\prime}\leq M_{t-t^{\prime}}}\left\{S(i,i^{\prime})\right\}$ (7)
Then, the incremental updates of $R_{t}$ from $R_{t-t^{\prime}}$ and $A_{t}$ from $A_{t-t^{\prime}}$ are:
$R_{t}(i,k)=\left\{\begin{array}[]{lr}R_{t-t^{\prime}}(i,k),\quad i\leq M_{t-t^{\prime}},k\leq M_{t-t^{\prime}}\\ R_{t-t^{\prime}}(i^{\prime},k),\quad i>M_{t-t^{\prime}},k\leq M_{t-t^{\prime}}\\ R_{t-t^{\prime}}(i,k^{\prime}),\quad i\leq M_{t-t^{\prime}},k>M_{t-t^{\prime}}\\ 0,\quad i>M_{t-t^{\prime}},k>M_{t-t^{\prime}}\end{array}\right.$ (8)
$A_{t}(i,k)=\left\{\begin{array}[]{lr}A_{t-t^{\prime}}(i,k),\quad i\leq M_{t-t^{\prime}},k\leq M_{t-t^{\prime}}\\ A_{t-t^{\prime}}(i^{\prime},k),\quad i>M_{t-t^{\prime}},k\leq M_{t-t^{\prime}}\\ A_{t-t^{\prime}}(i,k^{\prime}),\quad i\leq M_{t-t^{\prime}},k>M_{t-t^{\prime}}\\ 0,\quad i>M_{t-t^{\prime}},k>M_{t-t^{\prime}}\end{array}\right.$ (9)
$R_{t}(i,k)$ and $A_{t}(i,k)$ are the responsibilities and availabilities of the newly arrived face vectors at time $t$, and $M_{t-t^{\prime}}$ is the number of faces at time $t-t^{\prime}$. Note that the dimensions of all the matrices grow with time.
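Eqs. (7)-(9) can be sketched as a warm start of the message matrices; mapping the new columns through the same nearest-neighbour index as the new rows is our simplifying assumption:

```python
import numpy as np

def warm_start(R_prev, A_prev, S_new, M_prev):
    """Extend R and A to newly arrived face vectors per Eqs. (7)-(9).

    S_new is the (M, M) similarity matrix over old + new vectors
    (old vectors occupy the first M_prev indices). Each new vector
    copies the messages of its most similar old vector; the new-new
    block starts at zero.
    """
    M = S_new.shape[0]
    R = np.zeros((M, M))
    A = np.zeros((M, M))
    R[:M_prev, :M_prev] = R_prev
    A[:M_prev, :M_prev] = A_prev
    for i in range(M_prev, M):
        ip = int(np.argmax(S_new[i, :M_prev]))  # Eq. (7)
        R[i, :M_prev] = R_prev[ip, :]           # Eq. (8), new row
        A[i, :M_prev] = A_prev[ip, :]           # Eq. (9), new row
        R[:M_prev, i] = R_prev[:, ip]           # Eq. (8), new column
        A[:M_prev, i] = A_prev[:, ip]           # Eq. (9), new column
    return R, A
```

The warm-started matrices then enter the regular message passing, which only has to correct the assignments rather than build them from scratch.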
Denote $Z_{p}^{q}=\{Z_{1}^{q},Z_{2}^{q},Z_{3}^{q}...Z_{p}^{q}\}$ as the set of all $p$ face vectors extracted in segment $q$ at time $t$. The full PIAP process is summarized in Algorithm 1.
Algorithm 1 Positioned Incremental Affinity Propagation
Input: $R_{t-t^{\prime}}$,$A_{t-t^{\prime}}$,$C_{t-t^{\prime}},Z_{p}^{q}$
Output: $R_{t},A_{t},C_{t}$
1: while not end of a live-streaming do
2: Compute similarity matrix according to (5).
3: if the first video segment of a live-stream then
4: Assign zeros to all responsibilities and availabilities.
5: else
6: Compute responsibilities and availabilities for $Z_{p}^{q}$ according to
equation (7), (8) and (9).
7: Extend responsibilities matrix $R_{t-t^{\prime}}$ to $R_{t}$, and
availabilities $A_{t-t^{\prime}}$ to $A_{t}$.
8: end if
9: Message-passing according to equation (1), (6), (3) and (4) until
convergence.
10: end while
With the proposed PIAP algorithm, we can quickly cluster the faces in new segments, exclude outliers, and discover newly spotted faces. Unlike newly spotted faces, false positives are quickly isolated from the real faces during clustering and thus never form sufficiently long and steady trajectories. As a result, the scattered faces referring to the same person can be linked together sequentially to form the raw face trajectories.
### III-C Trajectory Refinement
If we blurred non-streamers’ faces directly on the raw trajectories, fast-blinking mosaics would be observed everywhere. Owing to inadequate training samples, poor streaming quality, and noisy scenes, false negatives are manifest in frame-wise face detection results. The resulting raw trajectories are therefore intermittent and incur blinking mosaics. Little breaks that span only a few frames of a raw trajectory can be quickly smoothed through interpolation; the real challenge lies in the gaps accumulated frame after frame by false negatives in detection.
Direct smoothing over gaps leads to massive over-pixelation, as we cannot tell whether missed detections or heavy occlusions caused the gap. To eliminate the false negatives while dodging over-pixelation, we introduce the trajectory refinement stage. Concretely, this stage is grounded on a proposal network and a corresponding two-sample test based on the empirical likelihood ratio (ELR) statistic to identify the missed detections among a set of suspicious face areas. The proposal network is a relatively shallow CNN that proposes suspicious face areas on the gap frames at a high recall rate. We cull the non-faces from the proposed areas using the ELR-based two-sample test since (i) it avoids the computational burden of another embedding network while capturing the extra richness of the detection, and (ii) it does not require the proposal network’s results to follow a Gaussian distribution and hence imposes no parametric assumptions. Combining the two-sample test with the proposal network, we reject inappropriate bounding boxes and accept proper ones for refinement.
#### III-C1 The Proposal Network
The proposal network shares some characteristics with cascaded face detection networks. The cascaded architecture introduces multiple deep convolution networks to predict faces in a coarse-to-fine manner; the first networks in such architectures are likewise dedicated to proposing suspicious face areas, just as our proposal network does [29, 40, 41]. For simplicity, we peel the first network off MTCNN [29] to construct our proposal network. MTCNN leverages three cascaded deep convolution networks to predict faces and their landmark locations; the very first P-net in MTCNN and our proposal network serve the same goal.
Moreover, the loss function of the proposal network aims at the highest recall rate, whereas ordinary face detection networks focus on the highest accuracy. A higher recall rate implies more false positives in the results. Therefore, a proposal network built and trained single-handedly may suffer from low precision; that is, when measured by the IoU of bounding boxes, it tends to produce many overlapping false positives. MTCNN is trained as a whole, and the performance of the P-net is constrained by the later R-net and O-net to achieve high accuracy. Thus, the P-net retains a high recall rate without inciting the precision problem. The structure of the proposal network is shown in Fig. 3.
Figure 3: The structure of the proposal network. The input and network layers are depicted in yellow, and the convolution kernels are rendered in blue. The output is a binary classification result; the output of the second-to-last layer is used for the trajectory refinement.
If any missed detections are captured by the proposal network and accepted by the two-sample test, they are filled back into the raw trajectories. The gaps are thereby patched into tiny breaks, and the interpolation reruns to smooth the trajectories. In this manner, we can recursively and seamlessly refine the trajectories. Moreover, this scheme does not require the proposal network to spot every missed detection; we can therefore raise the proposal network’s detection threshold and use Non-Maximum Suppression (NMS) [42] to suppress false positives while maintaining a high recall rate.
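The NMS step is the standard greedy procedure; a generic sketch over $(x_1,y_1,x_2,y_2)$ boxes (the IoU threshold is illustrative):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy Non-Maximum Suppression: keep the highest-scoring box,
    drop boxes overlapping it above iou_thresh, repeat."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep
```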
#### III-C2 Two-sample test Based on Empirical Likelihood Ratio
Sending the gap areas of a raw trajectory to the proposal network, we obtain some of the missed detections along with other false positives. Associated through the Hungarian algorithm [43], all the proposed faces form candidate trajectories, which correspond to the orange dashed lines in Fig. 4. Generally, with false positives yielded by the proposal network, there are likely to be multiple candidate trajectories. Mark an observation of a raw trajectory as the sequence $z=\{z_{1},z_{2},...,z_{m}\}$, and mark one of its possible candidate trajectories as the sequence $z^{\prime}=\{z^{\prime}_{1},z^{\prime}_{2},...,z^{\prime}_{m^{\prime}}\}$ $(m^{\prime}<m)$. $z$ is yielded by MTCNN, and likewise, $z^{\prime}$ is yielded by the foremost network of MTCNN. We use the second-to-last layer’s output of the proposal network as the vectors of $z^{\prime}$ and trace back the output of the same layer for $z$. As that layer has size $1*1*32$ in Fig. 3, $z$ and $z^{\prime}$ are $32*m$ and $32*m^{\prime}$ matrices.
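The Hungarian association of proposed faces can be sketched with SciPy’s `linear_sum_assignment`; using the squared distance between box centres as the matching cost is our illustrative assumption:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_centers, next_centers):
    """Match face centres of consecutive frames by minimizing total
    squared distance (Hungarian algorithm). Returns index pairs."""
    P = np.asarray(prev_centers, dtype=float)
    Q = np.asarray(next_centers, dtype=float)
    cost = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```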
The expense of a high recall rate is the massive number of false positives. A direct method to exclude them would be to train another face embedding network and feed the results to PIAP once again. Although this might be theoretically feasible, we cannot ensure real-time efficiency, since the number of proposed faces can be substantial. Furthermore, there is no guarantee that the deep network would project false positives as outliers, because the embedding network is mainly trained on face datasets. Besides embedding networks, the Siamese network [44, 45, 46] might be applicable: it is designed to identify slight differences between two similar inputs through a completely weight-sharing dual structure. However, owing to that same shared-weight structure, the Siamese network is extremely susceptible to noise and therefore incapable of sieving lightly occluded faces out of the proposed suspicious faces.
Figure 4: $z$ (solid line) and all the possible $z^{\prime}$ (orange dashed lines). Orange dots are the suspicious faces proposed by the proposal network. Red areas on $z$ are the breaks recovered by interpolation.
In Fig. 4, the solid line corresponding to $z$ indicates the seamless trajectory after interpolation; the red areas on the solid line are breaks recovered by interpolation. Under well-established face detection networks, $|z|>|z^{\prime}|$ should hold for any correct or acceptable candidate trajectory $z^{\prime}$, meaning the number of missed faces should be less than the number of detected faces in the raw trajectory. Putting the detection network’s insufficiency aside, $z^{\prime}$ is caused by objective factors including distortions, rotations, illumination changes, and so on. We can therefore infer that the original network could detect the missed faces after some reverse transformation of those factors. In essence, under ideal conditions, $z^{\prime}$ would be captured directly by the detection network, but some noise $\varepsilon_{z^{\prime}}$ stops the deep CNN from functioning, and detection under this noise requires the help of the proposal network. Hence, we can deduce $z^{\prime}$ from $z$ according to:
$z^{\prime}=f(z,\theta)+\varepsilon_{z^{{}^{\prime}}}$ (10)
where $f$ is the transformation function with hyperparameter $\theta$, and the independent noise term $\varepsilon_{z^{\prime}}\sim N(0,\sigma_{z^{\prime}}^{2}I)$, where $I$ is the identity matrix.
That is, as depicted in Fig. 4, trajectory refinement resembles linking dense dots (proposed faces) into dashed lines (candidate trajectories) and then into a solid line (the refined trajectory). Some state-of-the-art studies in the MFT field [31] introduced a similar concept for trajectory optimization but assume both $z$ and $z^{\prime}$ follow a Gaussian distribution: they set $f=f_{G}$, where $f_{G}$ is a Gaussian Process (GP) model, and infer the correlation between tracklets through Maximum-Likelihood Estimation (MLE) over $\theta$.
The GP model suffices in most offline tracking cases [31, 47, 48, 49]. However, in video live streaming, trajectories are refined on short video segments containing only a small number of frames. We cannot impose $f=f_{G}$ because (i) the number of proposed faces within a segment is not significantly larger than the dimensionality of the face vectors, and (ii) it is insufficient to guarantee that the proposed face sample follows a Gaussian distribution. Inspired by [50], for better robustness and to avoid imposing parametric assumptions, we propose a two-sample test based on the empirical likelihood ratio (ELR) statistic to reject the proposed false positives.
The goal of a two-sample test is to determine, based on samples, whether two distributions differ. In our case, we test whether the candidate trajectory and the corresponding raw face trajectory are two face samples drawn from the same population (distribution). The statistical null hypothesis is $\mathcal{H}_{0}:P_{z}=P_{z^{\prime}}$; it stands if and only if $z$ and $z^{\prime}$ are from the same distribution. Otherwise, $z$ and $z^{\prime}$ differ at a significance level $\alpha$, and we should reject $\mathcal{H}_{0}$ and the corresponding candidate trajectory. Let the domain of face vectors be a compact set defined in a Gaussian Reproducing Kernel Hilbert Space (RKHS) $\mathit{H}$ with reproducing-kernel function class $\mathcal{F}$. Following the common Maximum Mean Discrepancy (MMD), we measure the distributions of $z$ and $z^{\prime}$ by embedding them into this Gaussian RKHS [51]. Denoting the mean embeddings of $z$ and $z^{\prime}$ in $\mathit{H}$ as $\mu z$ and $\mu z^{\prime}$, finding optimal estimators of MMD amounts to constructing statistically optimal kernel-based two-sample tests. With the projection function $f\in\mathcal{F}$, MMD is represented by the supremum of the difference between the mean embeddings of $z$ and $z^{\prime}$:
$MMD_{m,m^{\prime}}[\mathcal{F},z,z^{\prime}]:=\sup_{f\in\mathcal{F}}(\frac{1}{m}\sum_{i=1}^{m}[f(x_{i})]-\frac{1}{m^{\prime}}\sum_{i=1}^{m^{\prime}}[f(y_{i})])$
(11)
$x_{i}$ and $y_{j}$ are the elements of $z$ and $z^{\prime}$ after the projection. Referring to [50, 52], $\mathcal{H}_{0}$ can be written as:
$\mathcal{H}_{0}:P_{z}=P_{z^{\prime}}\Leftrightarrow MMD_{m,m^{\prime}}[\mathcal{F},z,z^{\prime}]^{2}=0$ (12)
According to equation (11), the right side of (12) is:
$\displaystyle MMD_{m,m^{\prime}}[\mathcal{F},z,z^{\prime}]^{2}=\langle\mu
z-\mu z^{\prime},\mu z-\mu z^{\prime}\rangle_{\mathit{H}}$ (13)
$\displaystyle=\frac{1}{m(m-1)}\sum_{i\neq
j}^{m}k(x_{i},x_{j})+\frac{1}{m^{\prime}(m^{\prime}-1)}\sum_{i\neq
j}^{m^{\prime}}k(y_{i},y_{j})$
$\displaystyle-\frac{2}{mm^{\prime}}\sum_{i,j=1}^{m,m^{\prime}}k(x_{i},y_{j})$
$k(x_{i},y_{j})$ stands for the Gaussian kernel function of $\mathit{H}$.
As $m^{\prime}<m$, we pick the $m^{\prime}$ face vectors in $z$ nearest in time and, following (13), set $h(z,z^{\prime})=\frac{1}{m(m-1)}\sum_{i\neq j}^{m}k(x_{i},x_{j})+\frac{1}{m^{\prime}(m^{\prime}-1)}\sum_{i\neq j}^{m^{\prime}}k(y_{i},y_{j})-\frac{2}{mm^{\prime}}\sum_{i,j=1}^{m,m^{\prime}}k(x_{i},y_{j})$ for simplification; the linear-time unbiased estimator of MMD was proposed in [53]. The unbiased estimator $MMD_{ub}$ of $z$ and $z^{\prime}$ functions similarly to the MLE but does not presume that $z$ and $z^{\prime}$ are densely sampled under a multivariate Gaussian process.
$MMD_{ub}[\mathcal{F},z,z^{\prime}]=\frac{1}{\lfloor
m^{\prime}/2\rfloor}\sum_{i=1}^{{\lfloor
m^{\prime}/2\rfloor}}h(z_{2i-1},z_{2i})$ (14)
Denote $h_{i}=h(z_{2i-1},z_{2i})$ for further simplification. $z$ and $z^{\prime}$ are i.i.d., which implies the $h_{i}$ are also i.i.d. observations; under $\mathcal{H}_{0}$, their mean satisfies $\bar{h}=0$. Hence, we introduce the empirical likelihood ratio (ELR) statistic [52, 54] with weights $p_{i}$ to solve the two-sample test of $\mathcal{H}_{0}:\bar{h}=0$:
$L(p_{i})=\sup_{p_{i}}\{\prod_{i=1}^{m^{\prime}}p_{i}\big{|}\sum_{i=1}^{m^{\prime}}p_{i}=1,\sum_{i}^{m^{\prime}}p_{i}h_{i}=0\}$ (15)
where $p_{i}$ is subject to the normalization and $\bar{h}=0$ constraints; $\sum_{i}^{m^{\prime}}p_{i}h_{i}=0$ ensures the empirical mean of $h$ equals zero. Therefore, any subtle difference between $z$ and $z^{\prime}$ can be captured by the ELR statistic $p_{i}$ through the pairwise discrepancy $MMD_{ub}$. The explicit solution of (15) is attained via a Lagrange multiplier argument $\lambda$ according to [54]:
$\displaystyle p_{i}=\frac{1}{m}\frac{1}{1+\lambda h_{i}}\ $ (16)
$\displaystyle s.t.$ $\displaystyle\sum_{i=1}^{m}\frac{h_{i}}{1+\lambda
h_{i}}=0$
Through (15) and (16), the supremum is transferred to the ELR test statistic $T_{ELR}$ for $p_{i}$. If the null hypothesis $\mathcal{H}_{0}$ holds, we have:
$\displaystyle T_{ELR}=-2\log L(p_{i})\stackrel{{\scriptstyle
d}}{{\rightarrow}}\mathcal{X}_{1}^{2}$ (17)
where $\mathcal{X}_{1}^{2}$ is the chi-square distribution with one degree of freedom, to which $T_{ELR}$ converges in distribution by Wilks’ theorem. Otherwise, we reject the null hypothesis $\mathcal{H}_{0}$ when $T_{ELR}\geq\mathcal{X}_{\alpha}^{2}$, where $\mathcal{X}_{\alpha}^{2}$ is the critical value:
$Pr(\mathcal{X}_{1}^{2}\geq\mathcal{X}_{\alpha}^{2})=\alpha$ (18)
The rejection threshold in (18) can be computed with off-the-shelf Python packages. Leveraging $p_{i}$ to eliminate the false positives raised by the proposal network is therefore extremely time-saving: the cost of obtaining $p_{i}$ is solely that of computing $T_{ELR}$, which, according to (13), is linear in the sample size. Our ELR-based two-sample test achieves impressive results in real tests.
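A compact sketch of the test in (15)-(18): the Lagrange multiplier of (16) is found by bisection, and $T_{ELR}$ is compared against the $\mathcal{X}_{1}^{2}$ quantile via SciPy. The bisection scheme and library choice are our implementation decisions, not specified by the paper:

```python
import numpy as np
from scipy.stats import chi2

def el_test_statistic(h, tol=1e-10):
    """T_ELR for H0: mean(h) = 0, i.e. -2 log of the ratio of the
    constrained likelihood (15) to the unconstrained one (p_i = 1/m).

    The Lagrange multiplier of Eq. (16) is found by bisection on
    g(lam) = sum_i h_i / (1 + lam * h_i), monotone on the feasible
    interval where all 1 + lam * h_i > 0.
    """
    h = np.asarray(h, dtype=float)
    if h.min() >= 0 or h.max() <= 0:
        return np.inf  # zero lies outside the convex hull of h
    lo = -1.0 / h.max() + tol
    hi = -1.0 / h.min() - tol
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.sum(h / (1.0 + mid * h)) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * h))

def reject_h0(h, alpha=0.05):
    """Reject H0 when T_ELR >= the chi-square(1) quantile, Eq. (18)."""
    return bool(el_test_statistic(h) >= chi2.ppf(1.0 - alpha, df=1))
```

A sample of $h_{i}$ centred on zero keeps $T_{ELR}$ near zero, while a clearly shifted sample is rejected.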
With (17) and (18), each accepted $z^{\prime}$ is added to $z$’s trajectory. Once again, fast interpolation fills up the tiny breaks that remain, generating the refined trajectory.
### III-D Postprocessing
The final pixelation is applied to the refined non-streamers’ trajectories through Gaussian filters; different Gaussian kernels offer different pixelation styles. The pixelated video segments are directly broadcast to the audience. The overall procedure of FPVLS is described in Algorithm 2.
Algorithm 2 Overall Procedures of the FPVLS
Input: Raw video live stream
Output: Pixelated video live stream
1: while not end of a live stream do
2: Stack the streaming frames into video segments.
3: Use the face detection and embedding networks to generate face vectors
within a segment.
4: if in the buffer section then
5: Designate the streamers’ faces (for example, by touching the phone screen).
6: else
7: PIAP clustering on face vectors to form the raw face trajectories.
8: Interpolation on raw face trajectories to recover the breaks.
9: Apply the proposal network and the ELR-based two-sample test to compensate for missed detections.
10: Fill the gaps with the recovered detections and interpolate again to form refined trajectories.
11: end if
12: Blur the non-streamers’ trajectories with Gaussian filters and broadcast to the audience.
13: end while
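The blurring step can be sketched as follows; we use block-averaging as a stand-in for the Gaussian-kernel filtering described in the paper, and the $(x,y,w,h)$ box format is our assumption:

```python
import numpy as np

def pixelate_faces(frame, boxes, block=8):
    """Mosaic the given face boxes of a frame by block-averaging.

    frame: (H, W, C) array; boxes: iterable of (x, y, w, h).
    Block-averaging stands in for the Gaussian filtering of FPVLS;
    larger `block` values give a coarser pixelation style.
    """
    out = frame.copy()
    for (x, y, w, h) in boxes:
        roi = out[y:y + h, x:x + w]          # view into `out`
        H, W = roi.shape[:2]
        for by in range(0, H, block):
            for bx in range(0, W, block):
                patch = roi[by:by + block, bx:bx + block]
                patch[...] = patch.mean(axis=(0, 1), keepdims=True)
    return out
```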
## IV Experiments and Discussions
TABLE I: Details of the live video streaming dataset

Dataset | Quantity of Videos | Category | Resolution | People Occurred | Frames | Total Face Labels* | Streamers’ Face Labels* | Irrelevant People’s Face Labels*
---|---|---|---|---|---|---|---|---
$HS$ | 4 | a,b,c,d | 720p/1080p | $\gg$ 2 | 4133 | 17366 | 7989 | 9377
$LS$ | 8 | a,b,c,d | 360p/480p | $\gg$ 2 | 4680 | 16112 | 7090 | 9022
$LN$ | 4 | a,b,c,d | 360p/480p | $\leq$ 2 | 5692 | 9849 | 5038 | 4811
$HN$ | 4 | a,b,c,d | 720p | $\leq$ 2 | 4867 | 7713 | 3956 | 3757

*: the value counts occurrences of faces, i.e., the same face is counted repeatedly across frames.
### IV-A Dataset
As there are no available datasets or benchmarks for reference, we collected a video live streaming dataset from the YouTube and Facebook platforms and manually labeled dense annotations (51040 face labels in total, 26967 of which are irrelevant people’s or privacy-sensitive face labels). We followed the annotation paradigm of MOT15 [55] and MOT17 [56], which are benchmark datasets in tracking. The major difference is that we de-associate heavily occluded and other temporarily unidentifiable faces from their trajectories and mark them with the ’over-pixelation’ label, whereas the tracking datasets credit trackers that produce continuous trajectories under heavy occlusion.
In total, 20 streaming video fragments from 4 different streaming categories (a: street interview; b: dancing; c: street streaming; d: flash activities) are collected. As shown in Table I, these labelled video fragments are further divided into four groups according to the broadcasting resolution and the number of people appearing in the stream. Live streaming videos with at least 720p resolution are marked as high-resolution ($H$), and the rest as low-resolution ($L$). Similarly, live streaming videos containing more than two people are sophisticated scenes ($S$), and the rest are naive ones ($N$).
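The grouping rule above can be sketched as a small helper; the function name and signature are illustrative, not from the paper:

```python
def subdataset_label(resolution_p: int, max_people: int) -> str:
    """Assign a clip to a Table I group: at least 720p -> high-resolution (H),
    more than two people on screen -> sophisticated scene (S)."""
    res = "H" if resolution_p >= 720 else "L"
    scene = "S" if max_people > 2 else "N"
    return res + scene
```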
TABLE II: Pixelation results on the collected video live streaming dataset

| Method | MFPA$\uparrow$ ($HS$) | MFPA$\uparrow$ ($LS$) | MFPA$\uparrow$ ($HN$) | MFPA$\uparrow$ ($LN$) | MFPP$\uparrow$ ($HS$) | MFPP$\uparrow$ ($LS$) | MFPP$\uparrow$ ($HN$) | MFPP$\uparrow$ ($LN$) | MP$\uparrow$ (frames) | OPR$\downarrow$ | FPS$\uparrow$ | NoP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| YouTube [5] | 0.45 | 0.42 | 0.68 | 0.47 | 0.53 | 0.47 | 0.77 | 0.63 | 238 | 0.31 | N/A | $\times$ |
| Azure [7] | 0.43 | 0.47 | 0.64 | 0.56 | 0.50 | 0.53 | 0.70 | 0.68 | 203 | 0.30 | N/A | $\times$ |
| KCF [10] | 0.35 | 0.32 | 0.38 | 0.31 | 0.41 | 0.40 | 0.44 | 0.40 | 113 | 0.28 | $>$100 | ✓ |
| ECO [9] | 0.27 | 0.28 | 0.34 | 0.31 | 0.37 | 0.37 | 0.41 | 0.40 | 148 | 0.24 | 30–50 | ✓ |
| IOU [26] | 0.58 | 0.53 | 0.60 | 0.57 | 0.68 | 0.64 | 0.71 | 0.68 | 241 | 0.23 | 30–50 | ✓ |
| POI [11] | 0.62 | 0.61 | 0.70 | 0.67 | 0.70 | 0.70 | 0.83 | 0.81 | 275 | 0.27 | 30–50 | ✓ |
| FPVLS1 (without refinement) | 0.56 | 0.52 | 0.67 | 0.64 | 0.68 | 0.64 | 0.80 | 0.77 | 254 | 0.09 | 20–40 | ✓ |
| FPVLS2 (base CNNs) | 0.68 | 0.67 | 0.72 | 0.70 | 0.80 | 0.77 | 0.86 | 0.83 | 342 | 0.12 | 20–40 | ✓ |
| FPVLS3 (advanced CNNs) | 0.68 | 0.66 | 0.72 | 0.71 | 0.80 | 0.77 | 0.86 | 0.85 | 367 | 0.12 | 20–40 | ✓ |

$\uparrow$ and $\downarrow$ stand for the higher the better and the lower the better, respectively. FPVLS1 adopts the base CNN models without activating the trajectory refinement stage. FPVLS2 adopts the base CNN models MTCNN [29] + CosFace [32]. FPVLS3 adopts the advanced CNN models PyramidBox [33] + ArcFace [34].
### IV-B Parameters
All the parameters remain unchanged for entire live-streaming video tests.
$\mathcal{N}=150$ for every segment contains 150 frames. 10 seconds for
buffering as $2*\mathcal{N}/30$. CosFace resizes the cropped face to $112*96$;
weight decay is 0.0005. The learning rate is 0.005 for initialization and
drops at every 60K iterations. The damping factor for PIAP is default 0.5 as
it in AP. Detection threshold for MTCNN is [0.7,0.8,0.9]. The proposal network
inherent the threshold from MTCNN, and the resize factor is [0.702]; the IoU
rate for NMS is [0.7]. $\alpha=0.95$ for ELR statistic.
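The relation between segment length and buffering time stated above (the buffer section is twice the segment length, at the default 30 FPS) amounts to:

```python
def buffering_seconds(segment_frames: int, fps: int = 30) -> float:
    """Buffering time implied by Proposition 1: the buffer section
    spans two segments, so its duration is 2 * N / FPS seconds."""
    return 2 * segment_frames / fps
```

With the adopted $\mathcal{N}=150$ this gives the 10-second buffer used throughout the experiments.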
### IV-C Experiments Breadth
As this is the first work on face pixelation in video live streaming, we could hardly find similar methods for comparison. We therefore examined potential algorithms from the online MOT and MFT fields and migrated suitable ones to the pixelation task. In short, thorough experiments are conducted by (i) employing currently deployed commercial pixelation methods, namely YouTube Studio [5] and Microsoft Azure [7]; (ii) migrating suitable state-of-the-art MOT methods, KCF [10] and ECO [9], to face pixelation; (iii) migrating suitable state-of-the-art MFT methods, IOU [26] and POI [11], to face pixelation; and (iv) adopting FPVLS with base and with more advanced CNNs.
TABLE III: FPVLS performance under various segment lengths

| Segment Length ($\mathcal{N}$) | MFPA$\uparrow$ ($HS$) | MFPA$\uparrow$ ($LS$) | MFPA$\uparrow$ ($HN$) | MFPA$\uparrow$ ($LN$) | MFPP$\uparrow$ ($HS$) | MFPP$\uparrow$ ($LS$) | MFPP$\uparrow$ ($HN$) | MFPP$\uparrow$ ($LN$) | MP$\uparrow$ (frames) | OPR$\downarrow$ | Buffering Time** |
|---|---|---|---|---|---|---|---|---|---|---|---|
| IOU* [26] | 0.58 | 0.53 | 0.60 | 0.57 | 0.68 | 0.64 | 0.71 | 0.68 | 241 | 0.23 | N/A |
| 0 frames (without refinement) | 0.56 | 0.52 | 0.67 | 0.64 | 0.68 | 0.64 | 0.80 | 0.77 | 254 | 0.09 | 0s |
| 30 frames | 0.61 | 0.59 | 0.68 | 0.65 | 0.72 | 0.68 | 0.82 | 0.78 | 278 | 0.20 | 2s |
| POI* [11] | 0.62 | 0.61 | 0.70 | 0.67 | 0.70 | 0.70 | 0.83 | 0.81 | 275 | 0.27 | N/A |
| 60 frames | 0.63 | 0.63 | 0.70 | 0.68 | 0.75 | 0.72 | 0.83 | 0.80 | 278 | 0.16 | 4s |
| 90 frames | 0.65 | 0.65 | 0.71 | 0.68 | 0.77 | 0.73 | 0.85 | 0.80 | 301 | 0.14 | 6s |
| 120 frames | 0.67 | 0.67 | 0.72 | 0.69 | 0.79 | 0.75 | 0.86 | 0.81 | 325 | 0.13 | 8s |
| 150 frames | 0.68 | 0.67 | 0.72 | 0.70 | 0.80 | 0.77 | 0.86 | 0.83 | 342 | 0.12 | 10s |
| 180 frames | 0.68 | 0.67 | 0.72 | 0.70 | 0.80 | 0.77 | 0.86 | 0.83 | 342 | 0.12 | 12s |
| 210 frames | 0.68 | 0.67 | 0.72 | 0.70 | 0.81 | 0.78 | 0.86 | 0.84 | 342 | 0.12 | 14s |
| 240 frames | 0.68 | 0.67 | 0.73 | 0.70 | 0.81 | 0.78 | 0.87 | 0.84 | 342 | 0.12 | 16s |
| 270 frames | 0.69 | 0.68 | 0.73 | 0.70 | 0.82 | 0.79 | 0.87 | 0.84 | 360 | 0.11 | 18s |
| 300 frames | 0.69 | 0.68 | 0.73 | 0.71 | 0.82 | 0.79 | 0.87 | 0.84 | 360 | 0.11 | 20s |

*: the IOU and POI trackers are listed for comparison; no segment or buffer section exists in their algorithms.
**: the buffer section length is twice the segment length (Proposition 1), and the default FPS is 30.
### IV-D Evaluation Metrics
Experiments are conducted on the collected dataset with all the aforementioned algorithms. Following the widely accepted metrics in the tracking field [57], we propose the Multi-Face Pixelation Accuracy (MFPA), Multi-Face Pixelation Precision (MFPP), and Most Pixelated (MP) frames to indicate the overall performance of an algorithm. Moreover, the Over-Pixelation Ratio (OPR) and the ability to handle a varying Number of People (NoP) are two further metrics customized for pixelation tasks. OPR represents the degree of over-pixelation and is inversely related to how much of the original picture is preserved for the audience. OPR is vital for counteracting the one-sidedness of evaluations based solely on accuracy (MFPA) and precision (MFPP): higher MFPA and MFPP values do not imply better pixelation performance unless the algorithm also keeps OPR low. NoP is crucial because frequent human intervention cannot be expected during live-streaming pixelation; it is therefore key to bringing a pixelation algorithm into real applications. This set of metrics is grounded in the essence of face pixelation tasks and fits other privacy-protection-related pixelation tasks as well.
Specifically:
$MFPA=1-\frac{\sum_{t}(m_{t}+fp_{t}+mm_{t})}{\sum_{t}g_{t}}$
where $m_{t}$, $fp_{t}$, $mm_{t}$, and $g_{t}$ correspond to the missed pixelations, false positives in pixelation, mismatched pixelations, and total pixelations labelled in frame $t$.
$MFPP=\frac{\sum_{i,t}d_{i,t}}{\sum_{t}c_{t}}$
where $d_{i,t}$ is the distance between the pixelation of face $i$ and its ground truth in frame $t$; $d_{i,t}$ is measured through the bounding-box overlap ratio in our paper, so the higher, the better. $c_{t}$ is the total number of matched pixelations in frame $t$. Over all face tracks, MP is the greatest length of a consecutively and correctly pixelated frame sequence. The remaining metric, OPR, indicates the degree of the over-pixelation problem:
$OPR=\frac{\sum_{t}op_{t}}{\sum_{t}g_{t}}$
where $op_{t}$ is the number of matched over-pixelations in frame $t$ according to the 'over-pixelation' label.
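A minimal sketch of the three formula-based metrics, operating on per-frame counts (the function and argument names are ours, not the paper's):

```python
def mfpa(missed, false_pos, mismatched, labelled):
    """MFPA = 1 - (sum of per-frame errors m_t + fp_t + mm_t) / (total labelled g_t)."""
    errors = sum(m + fp + mm for m, fp, mm in zip(missed, false_pos, mismatched))
    return 1.0 - errors / sum(labelled)

def mfpp(overlaps, matched):
    """MFPP: mean bounding-box overlap d_{i,t} over all matched pixelations c_t."""
    return sum(overlaps) / sum(matched)

def opr(over_pixelated, labelled):
    """OPR: matched 'over-pixelation' labels op_t over all labelled pixelations g_t."""
    return sum(over_pixelated) / sum(labelled)
```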
### IV-E Experiment Results
The pixelation results of the methods listed in Section IV-C are evaluated through the metrics proposed in Section IV-D, on the live streaming dataset built in Section IV-A; the results are listed in Table II. YouTube and Azure are offline pixelation methods that adopt tracker-based algorithms without calibration. KCF and ECO are pre-trained here with face datasets and optimized for the face pixelation task. Although ECO applies higher-dimensional features than KCF, its MFPA is lower: the problem with combining correlation filters and an online learning strategy is that any noisy samples yielded by the correlation filters profoundly affect the learning network. Among state-of-the-art tracking-by-detection methods, the IOU and POI trackers achieve quite attractive results in comparison. The POI tracker has two versions; the online version with Mask R-CNN is used here. The POI tracker shares some cognate theories with FPVLS on detection, embedding, and the subsequent trajectory generation, so unsurprisingly, FPVLS without the trajectory refinement stage achieves performance similar to the POI tracker.
Overall, FPVLS achieves better MFPA, MFPP, MP, and OPR on every single subdataset. In particular, FPVLS noticeably increases the MP metric on the entire dataset, indicating that FPVLS yields longer pixelation sequences. In the last two rows of Table II, FPVLS with advanced CNNs substitutes the base face detection and embedding networks with more advanced deep networks. PyramidBox [33] gains around 3% accuracy over MTCNN [29] on their benchmark tests; likewise, ArcFace [34] gains 1%-7% accuracy over CosFace [32] on their benchmark tests. However, the advanced CNN models did not bring FPVLS a significant boost on any of the five metrics, which shows that the FPVLS framework is robust to the selection of CNN models. On the other hand, if sufficiently poor detection and embedding networks are adopted, FPVLS will deteriorate badly according to our further tests.
TABLE IV: Gains brought by the two-sample test based on ELR and the Gaussian Process model

| Segment Length ($\mathcal{N}$) | Proposal Network Recall Rate | Method | MFPA Gain*$\uparrow$ | MFPP Gain*$\uparrow$ | MP Gain*$\uparrow$ | OPR Gain*$\downarrow$ |
|---|---|---|---|---|---|---|
| 30 | 0.33 | $T_{ELR}$ | 0.05 | 0.04 | 30 | 0.11 |
| 30 | 0.33 | GP | -0.02 | -0.03 | 0 | 0.16 |
| 90 | 0.33 | $T_{ELR}$ | 0.08 | 0.07 | 57 | 0.05 |
| 90 | 0.33 | GP | 0.03 | 0.03 | 19 | 0.08 |
| 150 | 0.32 | $T_{ELR}$ | 0.10 | 0.08 | 88 | 0.03 |
| 150 | 0.32 | GP | 0.05 | 0.04 | 31 | 0.08 |
| 210 | 0.33 | $T_{ELR}$ | 0.10 | 0.09 | 88 | 0.03 |
| 210 | 0.33 | GP | 0.08 | 0.07 | 57 | 0.05 |
| 300 | 0.32 | $T_{ELR}$ | 0.11 | 0.10 | 94 | 0.02 |
| 300 | 0.32 | GP | 0.10 | 0.10 | 69 | 0.04 |

$T_{ELR}$ is the two-sample test based on ELR; GP is the Gaussian Process model.
*: gains are the boosts brought by $T_{ELR}$ or the GP model through the trajectory refinement stage.
### IV-F Ablation Study
The performance of FPVLS is the joint effort of the raw face trajectory generation and trajectory refinement stages. Our ablation study focuses on the improvement brought by each stage. We therefore discuss, in sequence, the effects of the parameters (preprocessing), the clustering algorithm (raw trajectory generation), and the two-sample test based on ELR (trajectory refinement).
Parameters. Except for the buffering length, the remaining parameters belong to the pre-trained face detection and embedding models. Several parameter sets are published for these pre-trained models; in fact, parameter tuning affects their benchmark results by less than $1\%$, while the base CNN model substitution in Table II introduces $3$-$7\%$ boosts. So, just as FPVLS is robust to the selection of CNN models, it is also robust to the parameter selection of the pre-trained models.
The buffer section length is a user-defined setting in our method. Ideally, a longer buffer section helps accuracy but degrades the user experience. The performance of FPVLS with different segment lengths $\mathcal{N}$ is shown in Table III. On our entire dataset, there is only a slight difference (+0.011 in MFPA) when increasing the segment length from 5s to 10s, whereas using 4s or less incurs a drop of up to -0.12 in MFPA. We adopt the 5s (150-frame) segment length to achieve the shortest buffering time with the smallest drawbacks, thereby balancing efficiency and user experience. Besides, network capacity and video compression already introduce latency at the seconds level. Low latency matters for live streaming mainly when remote face-to-face interaction happens; in online video live streaming, the only unilateral interaction is typing live comments. Therefore, at a reasonable setting (commonly less than 10 seconds), the buffering lag is transparent to both streamers and audiences.
Moreover, we include the metrics of the IOU and POI trackers in Table III for comparison. FPVLS, even with a segment length of 0 frames (i.e., entirely without the trajectory refinement stage), surpasses the IOU tracker, and FPVLS with a segment length of 30 frames (2s buffering time) reaches MFPA and MFPP values comparable to the POI tracker. With a segment length of 60 frames (4s buffering time), FPVLS surpasses the POI tracker. The POI tracker applies DN search [58], which excludes outliers and smooths the trajectories according to conjunctive tracklets, and thus retains higher MFPA and MFPP values than FPVLS with extremely short segment lengths (0 or 30 frames). However, the POI tracker's notably higher OPR implies that the DN search also causes massive over-pixelation while smoothing the trajectories. In other words, compared with state-of-the-art MFT-based pixelation methods, FPVLS performs better even when a very short buffer section is set.
Subdatasets. According to Tables II and III, the High-resolution Naive-scene ($HN$) subdataset is the most favorable for all the tested algorithms: every algorithm achieves its best performance on $HN$. Regarding Table III, when $\mathcal{N}$ increases linearly, the changing rate of MFPP on the $LN$ and $HN$ subdatasets is flatter than that of MFPA, and MFPP also reaches its turning point more quickly on these two subdatasets. In other words, the number of people appearing in the videos ($S$ or $N$) has a stronger impact on MFPP than the resolution ($H$ or $L$), while MFPA is influenced by both the resolution and the number of people appearing in the videos.
TABLE V: Clustering accuracy and efficiency

| Dataset | Method | Purity | Accuracy | Cluster Number (Clustered/Truth) | Time (s) |
|---|---|---|---|---|---|
| $HS$ | AP* | 0.81 | 0.79 | 22/17 | 1.82 |
| $HS$ | PAP | 0.86 | 0.83 | 18/17 | 1.82 |
| $HS$ | PIAP | 0.86 | 0.83 | 18/17 | 0.08 |
| $LS$ | AP* | 0.72 | 0.70 | 64/43 | 3.05 |
| $LS$ | PAP | 0.79 | 0.73 | 49/43 | 3.05 |
| $LS$ | PIAP | 0.78 | 0.73 | 49/43 | 0.19 |
| $HN$ | AP* | 0.86 | 0.79 | 8/6 | 0.75 |
| $HN$ | PAP | 0.91 | 0.89 | 6/6 | 0.75 |
| $HN$ | PIAP | 0.90 | 0.89 | 6/6 | 0.04 |
| $LN$ | AP* | 0.89 | 0.86 | 11/7 | 0.80 |
| $LN$ | PAP | 0.92 | 0.90 | 9/7 | 0.80 |
| $LN$ | PIAP | 0.91 | 0.89 | 9/7 | 0.03 |

*: the detection results are cleaned before being fed to AP.
Figure 5: The promotion brought by each stage of FPVLS. (a) is the live video pixelated solely through the face detection and embedding networks. (b) is obtained by processing the result of (a) through PIAP clustering. (c) processes the result of (b) through the proposal network. (d) is the result of processing (c) through the two-sample test based on ELR.
Clustering Algorithm. Table V compares the clustering performance of classic AP, PAP (AP with positional information), and PIAP on the collected live streaming dataset, which lets us see how PIAP facilitates the performance of the entire FPVLS. As no comparable or similar clustering algorithm handles object clustering under noise and an ill-defined cluster number, we conduct longitudinal comparisons among AP, PAP, and PIAP under $\mathcal{N}=150$. If detections full of false positives are fed to AP directly, the noise-sensitive AP produces meaningless indexes. Since PIAP excludes false positives as outliers, we trace back the raw vectors of the faces remaining in the PIAP clustering results and feed them to AP; in Table V, AP therefore only handles the embedding variations. Comparing AP with PAP, the positional affinities significantly boost the clustering purity, indicating that the embedding variations are fixed to a large extent. Comparing PIAP with PAP, our incremental message passing algorithm solves the time-consumption problem of AP almost without affecting the purity. The Time column reports the seconds needed to reach consensus.
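Purity, as reported in Table V, can be computed by letting each cluster vote for its majority ground-truth identity. This is the standard definition; the sketch below is ours, not the paper's code:

```python
from collections import Counter

def purity(assignments, truths):
    """Fraction of samples matching the majority ground-truth label
    of the cluster they were assigned to."""
    clusters = {}
    for cluster_id, truth in zip(assignments, truths):
        clusters.setdefault(cluster_id, []).append(truth)
    majority_hits = sum(Counter(members).most_common(1)[0][1]
                        for members in clusters.values())
    return majority_hits / len(truths)
```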
Two-sample test based on ELR & GP model. We also explicitly explore the metric gains brought by the other core stage of FPVLS, the trajectory refinement, comparing the two-sample test based on ELR with the Gaussian Process (GP) model. Since trajectory refinement is the downstream stage of raw trajectory generation, the comparison is conducted under the same segment length to eliminate discrepancies in the underlying raw trajectories. Furthermore, the proposal network's recall rate is also examined in case its behavior varies under different segment lengths.
Table IV shows the differences in the five metrics when applying the proposed ELR-based two-sample test and the commonly used Gaussian Process model. The recall rate of the proposal network is not affected by the segment length, so the raw input to the two-sample test and the GP model is essentially the same. The metric gains are the boosts brought by the trajectory refinement stage using either the two-sample test or the GP model. The ELR-based two-sample test, denoted $T_{ELR}$ in the table, outperforms the GP model on every metric under every segment length. The GP model even yields negative gains in MFPA and MFPP under extremely short segment lengths, where its initial assumption that the proposed faces follow a Gaussian distribution cannot hold.
Figure 6: Pixelation comparisons between FPVLS (yellow) and the POI tracker
(red) in low-resolution scenes.
### IV-G Qualitative Analysis
Fig. 5 shows the specific function of each stage of FPVLS in a real streaming scene. Fig. 5(a) shows the same scene as Fig. 1, pixelated solely through the face detection and recognition networks. For comparison, we fed around 1000 pictures of the streamer in advance to pre-train the recognition network, and the classification result of the recognition network is used directly for pixelation. In the rightmost snapshot of Fig. 5(a), the detection and recognition networks fail to capture the hawker's face and produce a false positive (purple rectangle); in the second and fourth snapshots, the detection network also fails to locate the hawker's face. We then fix the results of (a) through PIAP clustering: the rightmost snapshot in (b) shows that PIAP excludes the false positives caused by the detection network. (b) corresponds to the results after the raw trajectory generation stage. Further, the proposal network is applied to the raw trajectories to retrieve the lost detections. In (c), the proposal network yields the lost faces along with some other non-face areas (brown rectangles). The ELR statistics manage to cull the non-faces through the two-sample test in (d). The result in (d) is fully processed by FPVLS except for the final pixelation. The effect of both the raw face trajectory generation and trajectory refinement stages is explicit and significant across (a)-(d).
More qualitative results are shown in Fig. 6 (further video results: https://FPVLS.github.io) to demonstrate the performance of FPVLS in noisy, low-resolution live streaming scenes. The POI tracker's mosaics are marked with red circles and FPVLS's with yellow rectangles. The upper-left row shows that the same over-pixelation problem also occurs with the POI tracker: in the red circles, excessive and puzzling mosaics are placed on the frontal streamer's face when two people's faces cross over. FPVLS, in contrast, does not pixelate the temporarily invisible non-streamer's face and therefore leaves the streamer's face clean. The right-side rows show a crowded flash event, which is hard to deal with. Limited by tracklets, trackers cannot instantly recover from drifting in noisy scenes; besides other misplaced mosaics, the tracker blurs the face of the streamer wearing a pink trench coat. Although not perfect, FPVLS manages to pixelate most of the faces correctly and, in particular, leaves the streamer's face (marked with a blue rectangle) un-pixelated.
### IV-H Efficiency
The main cost of FPVLS lies in the face embedding algorithm: embedding one face takes 10-15ms on our i7-7800X, dual-GTX1080, 32GB RAM machine. Face detection, including compensation, takes 3ms per frame. Another time-consuming part is the initialization of PIAP, which takes 10-30ms depending on the case; each incremental propagation loop takes 3-5ms. FPVLS generally satisfies the real-time efficiency requirement, and under extreme circumstances with many faces, the sampling rate of the video frames can be reduced to improve the efficiency of FPVLS.
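As a back-of-envelope check of the real-time claim, using the worst-case timings quoted above (the function is illustrative, not from the paper):

```python
def max_realtime_faces(fps: int = 30, detect_ms: float = 3.0,
                       embed_ms: float = 15.0) -> int:
    """How many faces can be embedded within one frame interval,
    after subtracting the per-frame detection cost."""
    budget_ms = 1000.0 / fps - detect_ms
    return int(budget_ms // embed_ms)
```

At 30 FPS with worst-case timings, only about two faces fit per frame, which is consistent with the suggestion to reduce the frame sampling rate in crowded scenes.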
## V Conclusions
Leveraging PIAP clustering to generate raw face trajectories and the ELR-based two-sample test to refine them, FPVLS accomplishes the tracking and pixelation of irrelevant people's faces in video live streaming. PIAP deals with the ill-defined cluster number issue and endows classic AP with noise resistance and time efficiency. The two-sample test based on empirical likelihood ratio statistics outperforms the widely used Gaussian process in trajectory refinement. With the most advanced mobile phone chips, such as the Apple A12, FPVLS could be deployed on smartphones and brought into real applications without much burden.
As discussed in Section IV, low-resolution videos remain quite challenging because deep networks can drop to deficient performance and break the robustness of FPVLS in low-resolution scenarios. Our future work will focus on improving accuracy and robustness in low-resolution streaming scenes.
## VI Acknowledgment
The authors would like to thank Sin-Teng Wong for her attentive work on the live streaming dataset annotations.
This work was partly supported by the University of Macau under Grants:
MYRG2018-00035-FST and MYRG2019-00086-FST, and the Science and Technology
Development Fund, Macau SAR (File no. 0034/2019/AMJ, 0019/2019/A).
## References
* [1] N.-S. Vo, T. Q. Duong, H. D. Tuan, and A. Kortun, “Optimal video streaming in dense 5g networks with d2d communications,” _IEEE Access_ , vol. 6, pp. 209–223, 2017.
* [2] C. Faklaris, F. Cafaro, S. A. Hook, A. Blevins, M. O’Haver, and N. Singhal, “Legal and ethical implications of mobile live-streaming video apps,” in _Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct_. ACM, 2016, pp. 722–729.
* [3] F. Zimmer, K. J. Fietkiewicz, and W. G. Stock, “Law infringements in social live streaming services,” in _International Conference on Human Aspects of Information Security, Privacy, and Trust_. Springer, 2017, pp. 567–585.
* [4] D. R. Stewart and J. Littau, “Up, periscope: Mobile streaming video technologies, privacy in public, and the right to record,” _Journalism & Mass Communication Quarterly_, vol. 93, no. 2, pp. 312–331, 2016.
* [5] YouTube Help, _Blur your videos_ , 2020 (accessed Apr 17th, 2020). [Online]. Available: https://support.google.com/youtube/answer/9057652?hl=en
* [6] YouTube Official Blog, _Better protecting kids’ privacy on YouTube_ , January 6, 2020 (accessed Apr 17th, 2020). [Online]. Available: https://youtube.googleblog.com/2020/01/betterprotecting-kids-privacy-on-YouTube.html
* [7] Juliako, _Redact faces with Azure Media Analytics_ , 2020 (accessed Apr 17th, 2020). [Online]. Available: https://docs.microsoft.com/en-us/azure/media-services/previous/media-services-face-redaction
* [8] E. Sánchez-Lozano, B. Martinez, G. Tzimiropoulos, and M. Valstar, “Cascaded continuous regression for real-time incremental face tracking,” in _European Conference on Computer Vision_. Springer, 2016, pp. 645–661.
* [9] M. Danelljan, G. Bhat, F. Shahbaz Khan, and M. Felsberg, “Eco: Efficient convolution operators for tracking,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 6638–6646.
* [10] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 37, no. 3, pp. 583–596, 2014.
* [11] F. Yu, W. Li, Q. Li, Y. Liu, X. Shi, and J. Yan, “Poi: Multiple object tracking with high performance detection and appearance feature,” in _European Conference on Computer Vision_. Springer, 2016, pp. 36–42.
* [12] H. Shen, L. Huang, C. Huang, and W. Xu, “Tracklet association tracker: An end-to-end learning-based association approach for multi-object tracking,” _arXiv preprint arXiv:1808.01562_ , 2018.
* [13] C. Huang, B. Wu, and R. Nevatia, “Robust object tracking by hierarchical association of detection responses,” in _European Conference on Computer Vision_. Springer, 2008, pp. 788–801.
* [14] T. Zhang and H. M. Gomes, “Technology survey on video face tracking,” in _Imaging and Multimedia Analytics in a Web and Mobile World 2014_ , vol. 9027. International Society for Optics and Photonics, 2014, p. 90270F.
* [15] B. J. Frey and D. Dueck, “Clustering by passing messages between data points,” _science_ , vol. 315, no. 5814, pp. 972–976, 2007.
* [16] S. Ji, W. Xu, M. Yang, and K. Yu, “3d convolutional neural networks for human action recognition,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 35, no. 1, pp. 221–231, 2013.
* [17] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, “Beyond short snippets: Deep networks for video classification,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 4694–4702.
* [18] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_ , 2014, pp. 1725–1732.
* [19] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3d convolutional networks,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 4489–4497.
* [20] J. Berclaz, F. Fleuret, E. Turetken, and P. Fua, “Multiple object tracking using k-shortest paths optimization,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 33, no. 9, pp. 1806–1819, 2011.
* [21] A. R. Zamir, A. Dehghan, and M. Shah, “Gmcp-tracker: Global multi-object tracking using generalized minimum clique graphs,” in _European Conference on Computer Vision_. Springer, 2012, pp. 343–356.
* [22] M. Mueller, N. Smith, and B. Ghanem, “Context-aware correlation filter tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 1396–1404.
* [23] J. Valmadre, L. Bertinetto, J. Henriques, A. Vedaldi, and P. H. Torr, “End-to-end representation learning for correlation filter based tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 2805–2813.
* [24] S. Liu, T. Zhang, X. Cao, and C. Xu, “Structural correlation filter for robust visual tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 4312–4320.
* [25] S. Zhang, Y. Gong, J.-B. Huang, J. Lim, J. Wang, N. Ahuja, and M.-H. Yang, “Tracking persons-of-interest via adaptive discriminative features,” in _European conference on computer vision_. Springer, 2016, pp. 415–433.
* [26] E. Bochinski, V. Eiselein, and T. Sikora, “High-speed tracking-by-detection without using image information,” in _2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)_. IEEE, 2017, pp. 1–6.
* [27] S. Sridhar, F. Mueller, A. Oulasvirta, and C. Theobalt, “Fast and robust hand tracking using detection-guided optimization,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2015, pp. 3213–3221.
* [28] P. Weinzaepfel, Z. Harchaoui, and C. Schmid, “Learning to track for spatio-temporal action localization,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 3164–3172.
* [29] Z. Zhang, P. Luo, C. C. Loy, and X. Tang, “Joint face representation adaptation and clustering in videos,” in _European conference on computer vision_. Springer, 2016, pp. 236–251.
* [30] E. Insafutdinov, M. Andriluka, L. Pishchulin, S. Tang, E. Levinkov, B. Andres, and B. Schiele, “Arttrack: Articulated multi-person tracking in the wild,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 6457–6465.
* [31] C.-C. Lin and Y. Hung, “A prior-less method for multi-face tracking in unconstrained videos,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 538–547.
* [32] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu, “Cosface: Large margin cosine loss for deep face recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 5265–5274.
# Latitudinal variation of methane mole fraction above clouds in Neptune’s
atmosphere from VLT/MUSE-NFM: Limb-darkening reanalysis
###### Abstract
We present a reanalysis of visible/near-infrared (480 – 930 nm) observations
of Neptune, made in 2018 with the Multi Unit Spectroscopic Explorer (MUSE)
instrument at the Very Large Telescope (VLT) in Narrow Field Adaptive Optics
mode, reported by Irwin et al., Icarus, 311, 2019. We find that the inferred
variation of methane abundance with latitude in our previous analysis, which
was based on central meridian observations only, underestimated the retrieval
errors when compared with a more complete assessment of Neptune’s limb
darkening. In addition, our previous analysis introduced spurious latitudinal
variability of both the abundance and its uncertainty, which we reassess here.
Our reanalysis of these data incorporates the effects of limb-darkening based
upon the Minnaert approximation model, which provides a much stronger
constraint on the cloud structure and methane mole fraction, makes better use
of the available data and is also more computationally efficient. We find that
away from discrete cloud features, the observed reflectivity spectrum from 800
– 900 nm is very well approximated by a background cloud model that is
latitudinally varying, but zonally symmetric, consisting of a H2S cloud layer,
based at 3.6 – 4.7 bar with variable opacity and scale height, and a
stratospheric haze. The background cloud model matches the observed limb
darkening seen at all wavelengths and latitudes and we find that the mole
fraction of methane at 2–4 bar, above the H2S cloud, but below the methane
condensation level, varies from 4–6% at the equator to 2–4% at south polar
latitudes, consistent with previous analyses, with an equator/pole ratio of
$1.9\pm 0.2$ for our assumed cloud/methane vertical distribution model. The
spectra of discrete cloudy regions are fitted, to a very good approximation,
by the addition of a single vertically thin methane ice cloud with opacity
ranging from 0 – 0.75 and pressure less than $\sim 0.4$ bar.
Icarus
Department of Physics, University of Oxford, Parks Rd, Oxford, OX1 3PU, UK
Instituto Nacional de Técnica Aeroespacial (INTA), 28850, Torrejón de Ardoz
(Madrid), Spain. School of Earth Sciences, University of Bristol, Wills
Memorial Building, Queens Road, Bristol, BS8 1RJ, UK School of Physics &
Astronomy, University of Leicester, University Road, Leicester, LE1 7RH, UK
Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove
Drive, Pasadena, CA 91109, USA University of the Basque Country UPV/EHU, 48013
Bilbao, Spain
Patrick [email protected]
Minnaert limb-darkening analysis improves modelling of Neptune’s reflectivity
spectrum in visible/near-IR.
General cloud distribution can be modelled with zonally-symmetric H2S cloud
and stratospheric haze.
Mole fraction of methane at 2–4 bar (above H2S cloud) found to decrease from
4–6% at the equator to 2–4% at the south pole.
Discrete cloud features can be fitted with an additional methane ice cloud at
pressures less than $\sim$ 0.4 bar.
## 1 Introduction
The visible and near-infrared spectrum of Neptune is formed by the reflection
of sunlight from the atmosphere, modulated primarily by the absorption of
gaseous methane, but also, to a lesser extent, H2S [Irwin et al. (2018)]. Measured
spectra can thus be inverted to determine the cloud structure as a function of
location and altitude, providing we know the vertical and latitudinal
distribution of methane. Although for some years the vertical profiles of
methane determined from Voyager 2 radio-occultation observations were used at
all latitudes, HST/STIS observations of Uranus recorded in 2002 [Karkoschka and Tomasko (2009)] and similar observations of Neptune recorded in 2003 [Karkoschka and Tomasko (2011)] both showed that the tropospheric cloud-top (i.e.,
above the H2S cloud) methane mole fraction varies significantly with latitude
on both planets, later confirmed for Uranus by several follow-up studies
[Sromovsky et al. (2011, 2014, 2019)]. These HST/STIS
observations used the collision-induced absorption (CIA) bands of H2–H2 and
H2–He near 825 nm, which allow variations of CH4 mole fraction to be
differentiated from cloud-top pressure variations of the H2S cloud.
Karkoschka and Tomasko (2009, 2011) found that the methane mole fraction above the main observable
H2S cloud tops at 2–4 bar varies from $\sim 4$% at equatorial latitudes to
$\sim 2$% polewards of $\sim 40^{\circ}$ N,S for both planets.
More recently, an analysis of VLT/MUSE Narrow-Field Mode (NFM) observations
(770 – 930 nm) along Neptune’s central meridian [Irwin et al. (2019)] found a
similar latitudinal variation of cloud-top methane mole fraction, with values
of 4–5% reported at equatorial latitudes, reducing to 3–4% at polar latitudes,
but with considerable pixel-to-pixel variation that was not understood at the
time. In this study we reanalyse these data using a new limb-darkening
approximation model, which makes much better use of all the data from a given
latitude, observed at many different zenith angles, and which we find
considerably improves our methane mole fraction determinations. We also find
that having constrained the smooth latitudinal variation of opacity of the
tropospheric cloud and stratospheric haze, we are able to efficiently retrieve
the additional opacity of discrete upper tropospheric (0.1 - 0.5 bar) methane
clouds seen in our observations.
## 2 MUSE Observations
As reported by Irwin et al. (2019), commissioning-mode observations of Neptune were made
on 19th June 2018 with the Multi Unit Spectroscopic Explorer (MUSE) instrument
[Bacon et al. (2010)] at ESO’s Very Large Telescope (VLT) in Chile, in Narrow-Field
Mode (NFM). MUSE is an integral-field spectrograph, which records 300 $\times$
300 pixel ‘cubes’, where each ‘spaxel’ contains a complete visible/near-
infrared spectrum (480 – 930 nm) with a spectral resolving power of 2000 –
4000. MUSE’s Narrow-Field Mode has a field of view of 7.5” $\times$ 7.5”,
giving a spaxel size of 0.025”, and uses Adaptive Optics to achieve a spatial
resolution less than 0.1”. These commissioning observations are summarised in
Table 1 of Irwin et al. (2019). The spatial resolution was estimated to have a full-width-half-maximum of 0.06” at 800 nm. The observed spectra were smoothed to the
resolution of the IRTF/SpeX instrument, which has a triangular instrument
function with FWHM = 2 nm, sampled at 1 nm, in order to increase the signal-
to-noise ratio without losing the essential shape of the observed spectra.
This resolution was also more consistent with the spectral resolution of the
methane gaseous absorption data used, which are described in section 3.2.
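The smoothing step described above amounts to convolving each spaxel's spectrum with a normalised triangular kernel. A minimal sketch (function names hypothetical; assumes a uniform 1 nm wavelength grid):

```python
import numpy as np

def triangular_kernel(fwhm_nm, step_nm):
    """Normalised triangular line shape: peak at zero offset, falling to zero
    at +/- FWHM (a triangle with FWHM f has a full base of 2f)."""
    offsets = np.arange(-fwhm_nm, fwhm_nm + step_nm, step_nm)
    weights = np.clip(1.0 - np.abs(offsets) / fwhm_nm, 0.0, None)
    return weights / weights.sum()

def smooth_spectrum(wavelengths_nm, spectrum, fwhm_nm=2.0):
    """Convolve a spectrum on a uniform wavelength grid with a triangular
    instrument function (FWHM = 2 nm, matching the IRTF/SpeX resolution)."""
    step_nm = wavelengths_nm[1] - wavelengths_nm[0]
    return np.convolve(spectrum, triangular_kernel(fwhm_nm, step_nm), mode="same")
```

Because the kernel is normalised, a spectrally flat signal is left unchanged away from the array edges, so the smoothing raises the signal-to-noise ratio without rescaling the reflectivities.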
In our previous analysis of these data [Irwin et al. (2019)], spectra recorded from
single pixels along the central meridian of one of the longer integration time
observations (120s) were fitted with our NEMESIS retrieval model [Irwin et al. (2008)] to determine latitudinal variations of methane and cloud structure.
The cloud-top (i.e., immediately above the H2S cloud) methane mole fractions
were found to be consistent with HST/STIS observations [Karkoschka and Tomasko (2011)], but were not very well constrained, with significant latitudinal
variation that we attributed at the time to the random noise from single-pixel
retrievals. However, they also did not make full use of the limb-darkening
behaviour visible in these IFU observations, although we verified that our
cloud parameterization reproduced the observed limb-darkening well at 5 –
$10^{\circ}$S.
Since making our initial report on the MUSE-NFM Neptune observations, an
analysis of HST/WFC3 observations for Jupiter has been conducted by Pérez-Hoyos et al. (2020),
which makes much better use of the limb-darkening information content of
multi-spectral observations using a Minnaert limb darkening approximation
scheme. We have adapted this technique for use with our Neptune MUSE-NFM
observations and find that it greatly improves the quality of our fits and our
estimates of the latitudinal variation of cloud-top methane mole fraction at
2–4 bar in Neptune’s atmosphere. This reanalysis has also highlighted a spurious retrieval artefact at some locations in our previous work [Irwin et al. (2019)], which can now be explained.
## 3 Reanalysis
### 3.1 Minnaert Limb-darkening analysis
The dependence of the observed reflectivity from a location on a planet on the
incidence and emission angles can be well approximated using an empirical law
first introduced by Minnaert (1941). For an observation at a particular wavelength,
the observed reflectivity, $I/F$ ($I/F=\pi R/F$, where $R$ is the reflected radiance in W cm$^{-2}$ sr$^{-1}$ $\mu$m$^{-1}$ and $F$ is the incident solar irradiance at the planet in W cm$^{-2}$ $\mu$m$^{-1}$), can be approximated as:
$\frac{I}{F}=\left(\frac{I}{F}\right)_{0}\mu_{0}^{k}\mu^{k-1}$ (1)
where $(I/F)_{0}$ is the nadir-viewing reflectivity, $k$ is the limb-darkening
parameter, and $\mu$ and $\mu_{0}$ are, respectively, the cosines of the
emission and solar incidence angles. With this model a value of $k>0.5$
indicates limb-darkening, while $k<0.5$ indicates limb-brightening. Taking
logarithms, Eq. 1 can be re-expressed as:
$\ln\left(\mu\frac{I}{F}\right)=\ln\left(\frac{I}{F}\right)_{0}+k\ln(\mu\mu_{0})$ (2)
and we can see that it is possible to fit the Minnaert parameters $(I/F)_{0}$
and $k$ if we perform a least-squares fit on a set of measurements of $\ln(\mu
I/F)$ as a function of $\ln(\mu\mu_{0})$.
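Since Eq. 2 is linear in $\ln(\mu\mu_{0})$, fitting the Minnaert parameters reduces to an ordinary least-squares straight-line fit on the log-transformed measurements. The sketch below (function and variable names hypothetical, not from the authors' pipeline) illustrates the procedure, including the $\mu\mu_{0}\geq 0.09$ disc-edge cut used later, and checks it on noiseless synthetic data:

```python
import numpy as np

def fit_minnaert(mu, mu0, refl, min_mumu0=0.09):
    """Fit Minnaert parameters (I/F)_0 and k by least squares on
    ln(mu * I/F) = ln((I/F)_0) + k * ln(mu * mu0)   (Eq. 2),
    keeping only points with mu*mu0 >= min_mumu0 to avoid the disc edge."""
    keep = mu * mu0 >= min_mumu0
    x = np.log(mu[keep] * mu0[keep])
    y = np.log(mu[keep] * refl[keep])
    k, ln_if0 = np.polyfit(x, y, 1)   # slope = k, intercept = ln((I/F)_0)
    return np.exp(ln_if0), k

# Synthetic check: generate reflectivities from known parameters and recover them.
rng = np.random.default_rng(0)
mu = rng.uniform(0.2, 1.0, 500)
mu0 = rng.uniform(0.2, 1.0, 500)
if0_true, k_true = 0.35, 0.62          # k > 0.5: limb darkening
refl = if0_true * mu0**k_true * mu**(k_true - 1.0)
if0_fit, k_fit = fit_minnaert(mu, mu0, refl)
```

In practice one such fit is performed per wavelength per latitude band.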
Figure 1: Appearance of Neptune on June 19th 2018 at 09:43:21UT (Observation ‘3’ of Irwin et al. (2019)) at 830 nm (top row) and 840 nm (bottom row). The left-hand
column shows the observed images and the middle column shows the images
reconstructed following a Minnaert limb-darkening analysis. Here, for each
location with latitude $\phi$ and cos(zenith) angles $\mu_{0}$ and $\mu$ the
reflectivity was calculated as $I/F=R(\phi)\mu_{0}^{k(\phi)}\mu^{k(\phi)-1}$
where $R(\phi)$ and $k(\phi)$ are the interpolated values of Minnaert
parameters $\left(I/F\right)_{0}$ and $k$ at that latitude. The right-hand
column shows the difference (reconstructed - observed). The equator and
$60^{\circ}$S latitude circles are indicated by the red-dashed and blue-dashed
lines, respectively. The dark spot seen at continuum wavelengths near the
equator is an artefact of the reduction pipeline of unknown origin. We found
it to have negligible effect on our analysis. Figure 2: Appearance of Neptune on June 19th 2018 at 09:43:21UT (Observation ‘3’ of Irwin et al. (2019)) at 840 nm, showing the regions masked when fitting the Minnaert curves (top) and the areas selected for later additional discrete cloud fitting (bottom).
We analysed the same ‘cube’ of Neptune as was studied by Irwin et al. (2019), namely Observation ‘3’, recorded by VLT/MUSE at 09:43:21 (UT) on 19th June 2018. We
analysed the spectra in this cube in the wavelength range 800 – 900 nm and
Fig. 1 shows the observed appearance of the planet at 830 and 840 nm, which
are wavelengths of weak and strong methane absorption, respectively. The limb-darkening behaviour of the observed spectra was analysed in latitude bands of width $10^{\circ}$, spaced every $5^{\circ}$ to achieve Nyquist sampling. For
each latitude band, the observed reflectivities were used to construct plots
of $\ln(\mu I/F)$ against $\ln(\mu\mu_{0})$ and straight lines fitted to
deduce $(I/F)_{0}$ and $k$ for each wavelength. Locations on the disc where
there were bright clouds were masked out (Fig. 2) and examples of the fits at
830 and 840 nm for latitude bands centred on the equator and $60^{\circ}$S are
shown in Fig. 3. Here it can be seen that the Minnaert empirical law provides
a very accurate approximation of the observed dependence of reflectivity with
viewing zenith angles. Although all the measurements are plotted, only those
measurements with $\mu\mu_{0}\geq 0.09$ (i.e., $\mu$, $\mu_{0}>\sim 0.3$) were
used to fit $(I/F)_{0}$ and $k$ to make sure that the fitting procedure was
not overly affected by points measured near the disc edge and thus potentially
more ‘diluted’ with space. Also plotted in Fig. 3 are the reflectivities
calculated with our radiative transfer and retrieval model from our best-fit
retrieved cloud and methane mole fractions at these latitudes, reported in
section 3.3. It can be seen that there is very good agreement between the
reflectivities calculated with our multiple-scattering matrix operator model
and the Minnaert limb-darkening approximation to the observations for zenith
angles less than $\sim 70^{\circ}$.
Extending this analysis to all wavelengths under consideration, Fig. 4 shows a
contour plot of the fitted values of $(I/F)_{0}$ and $k$ for all wavelengths
and latitude bands. It can be seen that at wavelengths near 830 nm, the fitted
$k$ values are greater than 0.5, indicating limb darkening, while at longer
wavelengths, values of $k$ less than 0.5 are fitted, indicating limb
brightening. It can also just be seen in Fig. 4 that the reflectance peak of $(I/F)_{0}$ is noticeably wider at latitudes southwards of
20 – $40^{\circ}$S, a trend that is also just discernible in the fitted $k$
values near the reflectance peak.
Figure 3: Minnaert analysis of observed limb-darkening curves at 830 nm (top
row) and 840 nm (bottom row). The left-hand column shows all observations,
while the right-hand column is limited to points with $\mu\mu_{0}>0.1$. The
red points are the measurements at the equator, while those coloured blue are
at $60^{\circ}$S. The red and blue solid lines are the lines fitted with the
Minnaert analysis for points with $\mu\mu_{0}\geq 0.09$. The vertical blue
dotted lines indicate the values of $\mu\mu_{0}$ corresponding to our 5-point
quadrature ordinates (with the vertical blue dashed line indicating the
smallest value of $\mu\mu_{0}$ used in our retrieval analysis). The blue
diamonds and red squares are simulated reflectivities from our best-fit cloud
structure and methane mole fractions at these latitudes, reported in section
3.3, showing excellent consistency between the Minnaert approximation and the
reflectivities simulated with our matrix operator multiple-scattering model
for $\mu\mu_{0}>0.1$. Figure 4: Minnaert analysis of observed limb-darkening
curves from 800 to 900 nm and at all latitudes visible on Neptune’s disc. The
left-hand column shows the fitted values of $(I/F)_{0}$ and $k$ at the equator
(red) and $60^{\circ}$S (blue), while the right-hand column shows a contour
plot of these fitted parameters at all latitudes. The horizontal lines in the left-hand column indicate the value of the white colour in the contour plots in the right-hand column, and for the $k$ contour plot indicate the transition from limb-darkening $(k>0.5)$ to limb-brightening $(k<0.5)$. The horizontal lines in the right-hand column indicate the equator (red) and $60^{\circ}$S (blue), respectively.
Having fitted values of $(I/F)_{0}$ and $k$ for all wavelengths and latitude
bands, it is then possible to reconstruct the apparent image of the planet at
any observation geometry. Using the measured observation values of $\mu$ and
$\mu_{0}$ we reconstructed the images of Neptune at 830 and 840 nm, where for
each location with latitude $\phi$ and cos(zenith) angles $\mu_{0}$ and $\mu$
the reflectivity is calculated as
$I/F=R(\phi)\mu_{0}^{k(\phi)}\mu^{k(\phi)-1}$, where $R(\phi)$ and $k(\phi)$
are the interpolated values of Minnaert parameters $\left(I/F\right)_{0}$ and
$k$ at that latitude. We compare these reconstructed images with the observed
images in Fig. 1, which also shows the differences between the observed and
reconstructed images. As can be seen the observed general dependence of
reflectivity with latitude and position on disc is well reproduced compared
with the original MUSE observations and the differences are very small, except
at: 1) locations of known artefacts in the reduced data; 2) locations of the
small discrete clouds, which were masked out when fitting the zonally-averaged
Minnaert limb-darkening curves; and 3) off-disc, where the observed images are
not corrected for the instrument point-spread function (PSF). We will return
to these discrete clouds in section 3.5.
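The reconstruction step described above can be sketched as follows (names hypothetical; assumes the fitted band-centre Minnaert parameters are supplied on an ascending latitude grid):

```python
import numpy as np

def reconstruct_image(lat_px, mu_px, mu0_px, lat_bands, if0_bands, k_bands):
    """Reconstruct I/F at each pixel from zonally fitted Minnaert parameters:
    I/F = R(phi) * mu0**k(phi) * mu**(k(phi) - 1), with R and k linearly
    interpolated in latitude between the fitted band centres."""
    R = np.interp(lat_px, lat_bands, if0_bands)
    k = np.interp(lat_px, lat_bands, k_bands)
    return R * mu0_px**k * mu_px**(k - 1.0)
```

Applied to arrays of per-pixel latitude and cos(zenith) angles, this yields the middle-column images of Fig. 1; subtracting the observed image gives the right-hand residual column.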
### 3.2 Retrieval model
Having applied the Minnaert model to the observations we then used the fitted
$(I/F)_{0}$ and $k$ parameters to reconstruct synthetic spectra of Neptune for
all visible latitude bands and fitted these as synthetic ‘observations’ using
our radiative transfer and retrieval model, NEMESIS [Irwin et al. (2008)]. There
are two main advantages in doing this: 1) the spectra reconstructed using the
fitted Minnaert parameters have smaller random error values as they have been
reconstructed from values fitted to a combination of all the points in a
latitude band; and 2) we can reconstruct the apparent spectrum of Neptune at
any set of angles that is convenient for modelling, which can greatly reduce
computation time. In our previous approach [Irwin et al. (2019)], where we did not
assume to know the zenith-angle dependence, we tried to fit simultaneously to
the observations at several different zenith angles near the equator. For
modelling near-infrared reflectivity observations, NEMESIS employs a plane-parallel matrix operator multiple-scattering model [Plass et al. (1973)]. In this
model, integration over zenith angle is done with a Gauss-Lobatto quadrature
scheme, while the azimuth integration is done with Fourier decomposition. For
most calculations, not too near to the disc edge, we have found that five
zenith angles are usually sufficient and the reflectivity at a particular
zenith angle is linearly interpolated between calculations done at the two
closest zenith angles. Although this provides a general purpose functionality,
this approach has some drawbacks: 1) it requires two sets of calculations at
two different solar zenith angles for each location; 2) the linear
interpolation can lead to interpolation errors at larger zenith angles; and 3)
for points near the disc edge, the number of Fourier components needed to
fully resolve the azimuth dependence increases, which can greatly increase
computation time. By reconstructing spectra using the fitted Minnaert
$(I/F)_{0}$ and $k$ parameters we can simulate spectra measured as if they
exactly coincided with the angles in our quadrature scheme, thus avoiding
interpolation error. In addition, if we assume the Minnaert approximation to
be true, which has a linear dependence in logarithmic space, we only need to
fit spectra calculated at two different angles, not several, and can test how
well the linear approximation applies in post-processing. Hence, in the
retrievals presented here we reconstructed two spectra for each latitude band
with viewing zenith angle $\theta_{V}$, solar zenith angle $\theta_{S}$ and
azimuth angle $\phi$ values of ($0^{\circ}$, $0^{\circ}$, $180^{\circ}$) and
($61.45^{\circ}$, $61.45^{\circ}$, $180^{\circ}$), respectively. Here,
$\phi=180^{\circ}$ indicates back-scattering, while $\theta=61.45^{\circ}$ is
the second zenith angle in our five-point Gauss-Lobatto scheme, summarised in
Table 1, and $\theta=0^{\circ}$ is the fifth. This second
zenith angle is sufficiently high to probe limb-darkening or limb-brightening, but not so high that we need an excessive number of Fourier components in the azimuth-angle decomposition to model it properly, which would make the
computation excessively slow. For each latitude band the two synthetic
observations at $0^{\circ}$ and $61.45^{\circ}$ zenith angle were then fitted
simultaneously to determine the vertical cloud structure and tropospheric
methane mole fraction.
Errors in the fitted $(I/F)_{0}$ and $k$ parameters were propagated into the
errors on these reconstructed spectra at $0^{\circ}$ and $61.45^{\circ}$
zenith angle as normal, and were seen to increase towards the poles where the
curves were less well sampled. However, even then, because the synthetic
spectra are derived from linear fits to a large number of data points the
random error is very small and we found that we were unable to fit the synthetic observations to within these errors. We attribute this to ‘forward-modelling’ systematic errors due to deficiencies in the methane absorption coefficients of Karkoschka and Tomasko (2011) and also in our chosen cloud and methane
parameterization schemes, described below. In order to achieve final
$\chi^{2}/n$ fits of $\sim 1$ at all latitudes in our retrievals (necessary to
derive representative error values on the retrieved parameters) we found it
necessary to multiply these errors by a factor of $\sim 15$. Although this may
appear to be an alarmingly high factor, we will see later in Fig. 11 that this
leads to error bars on the synthetic $I/F$ reflectivity spectra of only 0.5 –
1.0 %, which is perfectly reasonable given the likely accuracy of the
absorption coefficients used and also the simplicity of our retrieval scheme.
We also decided that this approach was more appropriate than our usual
procedure of simply adding a forward modelling error (which here would have
been 0.5 – 1.0% at all wavelengths and locations) since this would miss the
fact that the Minnaert-fitting errors are dependent on both wavelength and
latitude.
Table 1: Five-point Gauss-Lobatto quadrature scheme used in this study.
Index | $\mu$ | $\theta$ (∘) | Weight
---|---|---|---
1 | 0.1652790 | 80.4866 | 0.3275398
2 | 0.4779249 | 61.4500 | 0.2920427
3 | 0.7387739 | 42.3729 | 0.2248893
4 | 0.9195339 | 23.1420 | 0.1333060
5 | 1.0000000 | 0.00000 | 0.0222222
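The scheme in Table 1 can be encoded directly. The short sketch below (a hypothetical helper, not the NEMESIS implementation) checks that the weights sum to unity and that the tabulated angles are simply $\arccos\mu$, and illustrates the linear interpolation between ordinates described in section 3.2:

```python
import numpy as np

# Five-point quadrature ordinates and weights from Table 1.
mu_quad = np.array([0.1652790, 0.4779249, 0.7387739, 0.9195339, 1.0000000])
weights = np.array([0.3275398, 0.2920427, 0.2248893, 0.1333060, 0.0222222])

# Sanity checks: the weights sum to unity, and the tabulated zenith angles
# are simply arccos(mu).
assert abs(weights.sum() - 1.0) < 1e-6
theta_deg = np.degrees(np.arccos(mu_quad))  # recovers the theta column of Table 1

def interp_reflectivity(mu, refl_at_quad):
    """Linearly interpolate reflectivities computed at the quadrature
    ordinates to an arbitrary cos(zenith angle), as described in section 3.2."""
    return np.interp(mu, mu_quad, refl_at_quad)
```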
As with our previous analysis [Irwin et al. (2019)], we modelled the atmosphere of
Neptune using 39 layers spaced equally in log pressure between $\sim 10$ and
$0.001$ bar. We ran NEMESIS in correlated-k mode and for methane absorption
used a methane k-table generated from the band model of Karkoschka and Tomasko (2010). The collision-induced absorption of H2-H2 and H2-He near 825 nm was modelled with the coefficients of Borysow et al. (1989a, 1989b, 2000), assuming a thermally-equilibrated ortho:para hydrogen ratio. Rayleigh scattering was included as described in Irwin et al. (2019) and the effects of polarization and Raman scattering were
again justifiably neglected at these wavelengths. We used the solar spectrum of Chance and Kurucz (2010), smoothed with a triangular line shape of FWHM = 2 nm, and took Neptune’s distance from the Sun on the date of observation to be 29.94 AU. The
reference temperature and mole fraction profile is the same as that used by Irwin et al. (2019) and is based on the ‘N’ profile determined by Voyager-2 radio-occultation measurements [Lindal (1992)], with He:H2 = 0.177 (15:85),
including 0.3% mole fraction of N2.
For the methane profile, we adopted a simple model with a variable deep mole
fraction, limited to 100% relative humidity above the condensation level and
further limited to a maximum stratospheric mole fraction of $1.5\times
10^{-3}$ [Lellouch et al. (2010)], as shown in Fig. 5. Several authors [e.g., Karkoschka and Tomasko (2011), Sromovsky et al. (2019)] have pointed out that such a simple “step” model is not physically well based for either Neptune or Uranus when extended to great depths; in particular, Tollefson et al. (2019) note that such strong deep methane latitudinal gradients would induce humidity winds (additional to the thermal wind equation), which are not seen. Instead, Sromovsky et al. (2019) favour a
“descended profile” model, where downwards motion suppresses the methane mole
fraction in the 2–4 bar region, but which then recovers to a uniform deep mole
fraction at depth. This profile is compared with our “step” model in Fig. 5.
Although this profile is smoother and may be more physically plausible than
the “step” model, we do not have the vertical resolution in the MUSE data to
be able to discriminate between the two and we also cannot see clearly through
the H2S cloud to deeper pressures. This can be seen in Fig. 5, where we have
also plotted the two-way vertical transmission to space through the cloud
only, which shows the fitted cloud to be nearly opaque. In addition, we have
also plotted in Fig. 5 the functional derivatives with respect to methane
abundance, i.e., the rate of change of the calculated radiance spectrum with
respect to the methane abundance at each level if we were to assume a
continuous profile. Here we can see that we are only significantly sensitive
to the methane abundance in the 1–4 bar region. In fact, we see that with the
MUSE data we are only really sensitive to the column abundance of methane
above the H2S cloud and this column abundance will depend on the vertical
distribution of both the methane mole fraction and the cloud; since we do not
have precise constraints on either, there is a degeneracy. Hence, in this study we decided to use the simpler “step” model for methane, which has the added advantage of returning a mean value for the methane mole fraction in the
2–4 bar region, which is easy to understand, interpret and compare with
previous studies. It is worth noting that Tollefson et al. (2019), who were able to probe
to slightly deeper pressures than we were, also adopted a simple “step” model
of methane.
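The “step” parameterization can be sketched as below (a hypothetical helper; the saturation mole-fraction profile and the pressure used to delimit the stratosphere are assumed inputs for illustration, not values from this work):

```python
import numpy as np

STRAT_CAP = 1.5e-3  # maximum stratospheric CH4 mole fraction (Lellouch et al. 2010)

def step_methane_profile(p_bar, q_sat, deep_q, p_strat_bar=0.1, strat_cap=STRAT_CAP):
    """'Step' CH4 profile: a constant deep mole fraction, limited to 100%
    relative humidity above the condensation level, and capped at `strat_cap`
    in the stratosphere (taken here as p < `p_strat_bar`, an assumed boundary).
    `q_sat` is the 100%-RH mole fraction at each pressure level."""
    q = np.minimum(deep_q, q_sat)               # 100% relative-humidity limit
    strat = p_bar < p_strat_bar
    q[strat] = np.minimum(q[strat], strat_cap)  # stratospheric ceiling
    return q
```

The single free parameter `deep_q` is then the retrieved 2–4 bar mole fraction that is compared with previous studies.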
Figure 5: Left hand panel: Methane profiles considered in this analysis. Our
simple “step” model is shown for two different values of the ‘deep’ methane
mole fraction (3% and 5%, respectively) and compared with the “descended”
methane profile of sromovsky19 for a deep mole fraction of 5%, a deep pressure
of $P_{d}=5$ bar and scaling coefficient, $vx=3.0$ (see Eq. 3 of sromovsky19
for details). The shaded region is the cloud opacity/bar of our nominal
equatorial cloud distribution model, which has a base pressure of 4.66 bar.
Note that both models have been limited to 100% relative humidity at all
pressures and the stratospheric mole fraction is limited to not exceed
$1.5\times 10^{-3}$. Also plotted for reference is the two-way transmission
from space to each level through the main cloud only, showing the cloud to be
mostly opaque. The haze distribution has been omitted from this figure for
clarity. Right-hand panel: sensitivity of calculated radiances on the
abundance of methane at each pressure level, showing that our main sensitivity
is from 1–4 bar.
For clouds/hazes we again adopted the parameterized model used by Irwin et al. (2016) to
model VLT/SINFONI and Gemini/NIFS H-band observations of Neptune, which was
found to provide good limb-darkening/limb-brightening behaviour. In this model
particles in the troposphere are modelled with a cloud near the H2S
condensation level (which is thus presumed to be rich in H2S ice [Irwin et al. (2019)]), with a variable base pressure ($\sim 3.6$ – 4.7 bar) and a scale height retrieved as a fraction of the pressure scale height (called the
fractional scale height). Scattering from haze particles is modelled with a
second layer with base pressure fixed at $0.03$ bar and fixed fractional scale
height of 0.1. Although the base pressure of the stratospheric haze may in
reality vary with latitude, we found that the precise pressure level did not
significantly affect the calculated spectra at these wavelengths (since the
transmission to space is close to unity at the tropopause level from 800 to
900 nm) and so fixed it to the representative value stated above. The
scattering properties of the cloud were calculated using Mie scattering and a
retrievable imaginary refractive index spectrum. For cases where we allow the
imaginary refractive index to vary with wavelength (as in our previous report)
we use a Kramers-Kronig analysis to construct the real part of the refractive
index spectrum, assuming $n_{real}=1.4$ at 800 nm. Here, however, for
simplicity we forced the imaginary refractive indices to be the same at all
wavelengths across the 800 – 900 nm range considered, and hence the real
refractive index was fixed to 1.4 over the whole range also. The Mie-
calculated phase functions were again approximated with combined Henyey-
Greenstein functions for computational simplicity and also to smooth over
features peculiar to spherical particles, such as the back-scattering ‘glory’.
Table 2: Preset grid of cloud parameterization values used in test retrievals of cloud opacity, cloud fractional scale height, stratospheric haze opacity and cloud-top methane mole fraction at the equator and $60^{\circ}$S.
Parameter | Number of values | Values
---|---|---
Mean cloud radius | 4 | 0.05, 0.1, 0.5, 1.0 $\mu$m
Cloud radius variance | 2 | 0.05, 0.3
Cloud $n_{imag}$ | 3 | 0.001, 0.01, 0.1
Mean haze radius | 4 | 0.05, 0.1, 0.5, 1.0 $\mu$m
Haze radius variance | 2 | 0.05, 0.3
Haze $n_{imag}$ | 3 | 0.001, 0.01, 0.1
$p_{base}$ | 3 | 3.65, 4.15, 4.66 bar
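The preset grid of Table 2 can be enumerated directly; a minimal sketch (dictionary keys hypothetical) confirming the number of setups per latitude band:

```python
from itertools import product

# Preset grid values from Table 2.
grid = {
    "cloud_radius_um": [0.05, 0.1, 0.5, 1.0],
    "cloud_variance":  [0.05, 0.3],
    "cloud_n_imag":    [0.001, 0.01, 0.1],
    "haze_radius_um":  [0.05, 0.1, 0.5, 1.0],
    "haze_variance":   [0.05, 0.3],
    "haze_n_imag":     [0.001, 0.01, 0.1],
    "p_base_bar":      [3.65, 4.15, 4.66],
}

# Cartesian product of all preset values: one retrieval setup per combination.
setups = [dict(zip(grid, combo)) for combo in product(*grid.values())]
assert len(setups) == 1728   # 4 * 2 * 3 * 4 * 2 * 3 * 3
```

Each setup then fixes the scattering properties while the four free variables (cloud opacity, cloud fractional scale height, haze opacity, cloud-top methane mole fraction) are retrieved.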
### 3.3 Retrieval analysis
To ‘tune’ our retrieval model we first concentrated on the latitude bands at
the equator and $60^{\circ}$S. We were aware from our previous study that
there was likely to be a high degree of degeneracy in our best-fit solutions
with respect to assumed particle sizes and other parameters in the 800 – 900
nm range. Hence, we first analysed these two latitude bands for a grid of
preset values of: 1) mean cloud particle radius; 2) variance of cloud radius
distribution; 3) cloud imaginary refractive index; 4) mean haze particle
radius; 5) variance of haze radius distribution; 6) haze imaginary refractive
index; and 7) cloud base pressure, as listed in Table 2.
From Table 2 we can see that the number of grid values is
$4\times 2\times 3\times 4\times 2\times 3\times 3=1728$ setups for each
latitude band. For each setup we retrieved simultaneously four variables from
the synthetic spectra reconstructed at 0∘ and 61.45∘ emission angle: 1) cloud
opacity; 2) cloud fractional scale height; 3) haze opacity; and 4) the cloud-
top methane mole fraction. As explained earlier, we assumed that the set
imaginary refractive indices applied at all wavelengths simultaneously. After
fitting we plotted the $\chi^{2}$ of the fits as a function of the grid
parameters, together with the retrieved cloud-top methane mole fraction, which
we show in Fig. 6 for the equator and Fig. 7 for $60^{\circ}$S. Here we
can see that the goodness of fit depends little on either the assumed deep
pressure of the cloud or the variance of the size distribution of the cloud and
haze particles. However, we can see that there is a strong preference for
solutions with a haze particle mean radius of 0.05 – 0.1 $\mu$m and high
imaginary refractive index of 0.1. For the cloud, it can be seen that the
preference is for a low imaginary refractive index of 0.001. Retrievals where
the cloud imaginary refractive index was set to 0.1 had $\chi^{2}$ in excess
of 1000 and so are not visible in Figs. 6 and 7. The constraint on the cloud
mean radius is not strong, with values of 0.05 – 0.1 $\mu$m slightly favoured
over 1.0 $\mu$m; a cloud mean radius of 0.5 $\mu$m is least favoured.
In addition, we checked to see if the limb-darkening curves modelled with our
radiative transfer model at all zenith angles were consistent with the
Minnaert law and found very good correspondence for zenith angles less than
$\sim 70^{\circ}$ (noted in Fig. 3) for schemes using both five and nine
zenith angles, adding confidence to our approach.
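The Minnaert law used in the consistency check above, $(I/F)=(I/F)_{0}\,\mu_{0}^{k}\,\mu^{k-1}$, can be fitted by linear regression in log space. A minimal sketch with synthetic nadir-geometry data (all parameter values illustrative):

```python
import numpy as np

def fit_minnaert(mu, mu0, i_over_f):
    """Fit the Minnaert law (I/F) = (I/F)_0 * mu0**k * mu**(k-1) by
    linear regression of ln(mu * I/F) against ln(mu * mu0)."""
    x = np.log(mu * mu0)
    y = np.log(mu * i_over_f)
    k, ln_i0 = np.polyfit(x, y, 1)       # slope = k, intercept = ln (I/F)_0
    return k, np.exp(ln_i0)

# Synthetic test: mu = mu0 geometry with known coefficients, restricted to
# zenith angles < ~70 deg (mu > ~0.34), as in the consistency check above.
mu = np.linspace(0.34, 1.0, 50)
k_true, i0_true = 0.85, 0.12
i_over_f = i0_true * mu**k_true * mu**(k_true - 1.0)

k_fit, i0_fit = fit_minnaert(mu, mu, i_over_f)
print(k_fit, i0_fit)
```

With noiseless synthetic data the regression recovers the input coefficients exactly, which makes this a convenient self-test before fitting real limb-darkening curves.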
Figure 6: Variation of the goodness-of-fit of our retrievals ($\chi^{2}$) in
the equatorial band (variable cloud fractional scale height, stratospheric
haze opacity and cloud-top methane mole fraction) as a function of the fixed
grid values of the other cloud and haze properties defined in Table 2.
The first three panels of the top row show the $\chi^{2}$
values of all the fits for different fixed values of the mean cloud particle
radius, cloud particle imaginary refractive index, and variance of the cloud
particle radius distribution, while the first three panels of the bottom row
show the $\chi^{2}$ values for the corresponding haze particle properties. The
top right panel shows the $\chi^{2}$ values for different set values of the
cloud base pressure, while the bottom right panel shows the fitted values of
the cloud-top methane mole fraction, with the cloud base pressure of each case
colour-coded as indicated in the panel above to show that there is no simple
correlation between retrieved methane mole fraction and set cloud base
pressure. As we are fitting simultaneously to $2\times 101=202$ points in
total (i.e., two spectra at $0^{\circ}$ and $61.45^{\circ}$ zenith angle,
respectively), $\chi^{2}\sim 200$ indicates a good fit.
Figure 7: As Fig. 6, but for the $60^{\circ}$S latitude band.
Although there is a wide range of best-fit $\chi^{2}$ values, it is apparent
that the best fits are achieved for a cloud-top methane mole fraction of
$\sim$4–6% at the equator and $\sim$2–4% at $60^{\circ}$S. However,
although we clearly retrieve lower methane mole fractions near the south pole
than at the equator, it can be seen that there is a wide range of possible
cloud solutions that give equally good fits to the data but rather different
methane abundances. Hence, although it would appear that the polar cloud-top
methane mole fraction is $\sim 0.5$ times that at the equator, we are less
certain of the absolute cloud-top methane mole fraction at the equator and
pole.
Having surveyed the range of cloud properties that best match the observed
limb darkening at the equator and $60^{\circ}$S, we then took one of the best-
fit setup cases and applied this to all latitudes. We chose to fix
$p_{base}=4.66$ bar, $r_{cloud}=0.1$ $\mu$m with 0.05 variance and imaginary
refractive index $n_{imag}=0.001$. For the haze we chose to fix $r_{haze}=0.1$
$\mu$m with 0.3 variance and imaginary refractive index $n_{imag}=0.1$. We
then fitted to the synthetic spectra generated from our fitted Minnaert limb-
darkening coefficients for all the latitude bands sampled by the Neptune MUSE
observations and fitted once more for 1) cloud opacity; 2) cloud fractional
scale height; 3) haze opacity; and 4) the cloud-top methane mole fraction. The
resulting fitted methane cloud-top mole fractions as a function of latitude
are shown in Fig. 8, where we also show the methane mole fraction variation
derived in our previous analysis (Irwin et al., 2019) and that derived by
Karkoschka & Tomasko (2011).
In Fig. 8 we can see that our derived latitudinal methane distribution for our
default model (indicated as Model 1) varies much more smoothly with latitude
than our previous analysis (Irwin et al., 2019) and has more smoothly-varying
error bars. In addition, it can be seen that our new retrieved methane
variation more closely resembles that determined by Karkoschka & Tomasko
(2011). The greatest
discrepancy occurs at 20 – 40∘S and we found here that our fits had the
highest $\chi^{2}/n$ values. To introduce additional flexibility into our
model, we ran our retrievals a second time, but additionally allowed the model
to vary the imaginary refractive indices of the cloud and haze particles
(Model 2), where the imaginary refractive indices were still assumed to be
wavelength-invariant. It can be seen that Model 2 retrieves lower methane mole
fractions at 20 – 40∘S and even more closely resembles the results of
Karkoschka & Tomasko (2011).
Figure 8: Fitted methane mole fractions as a function of latitude. The results
from our previous work (Irwin et al., 2019) are shown for reference and
compared with our new model: 1) where the imaginary refractive indices of the
haze and cloud are fixed to 0.1 and 0.001, respectively; and 2) where the
imaginary refractive indices of the haze and cloud are allowed to vary (kept
constant with wavelength). Also shown are the methane mole fractions estimated
by Karkoschka & Tomasko (2011), scaled to match our estimates, and
recalculations with Model 2 where the base cloud pressure has been reduced to
4.15 and 3.65 bar, respectively.
The difference is not large for $p_{base}=4.15$ bar, but for $p_{base}=3.65$
bar it can be seen that the methane retrieval becomes unstable, for the
reasons described in the text.
Fig. 9 shows the latitudinal variation of the wavelength-invariant values of
$n_{imag}$ retrieved by Model 2 for the cloud and haze, and also shows the
retrieved 2-D (i.e., latitude – altitude) cloud structure. We can see that
$n_{imag}$ for the cloud is poorly constrained, but that of the haze is well
estimated and it would appear that to best match the observations at 20 – 40∘S
the haze particles are required to have slightly lower $n_{imag}$ values than
those found at other latitudes. This is easily understood by looking at Fig. 1,
where we can see that this latitude band has numerous bright, high, discrete
clouds. Although we masked the observations to focus the retrievals on the
background smooth latitudinal variation, to mask completely the brighter
clouds at these latitudes would have left us with no data to analyse at all
(Fig. 2). Hence, we would expect Model 2, which allows the cloud/haze particle
reflectivity to vary, to better incorporate the additional reflectivity from
these upper tropospheric methane clouds and so fit the observations more
accurately and also retrieve a more reliable latitudinal variation in cloud-
top methane mole fraction. Please note that the cloud opacity plot in Fig. 9
shows opacity below the 4.66-bar cloud base pressure for two reasons: 1) we
assume the opacity to diminish with a scale height of 1 km below the
condensation level rather than cutting off sharply; and 2) we show here the
opacity in the 39 model atmospheric layers, which are split equally between
$\sim 10$ and $0.001$ bar and so do not coincide exactly with the base
pressures of the cloud and haze.
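The below-base opacity treatment described above can be sketched as a simple exponential decay; heights, the base location and the above-base value are illustrative, and the above-base cloud structure (fractional scale height) is handled separately in the retrieval:

```python
import numpy as np

H_DECAY_KM = 1.0  # scale height of opacity decay below the condensation level

def opacity_profile(z_km, z_base_km, opacity_above):
    """Illustrative vertical opacity profile: 'opacity_above' applies at and
    above the cloud base, decaying exponentially with a 1 km scale height
    below it rather than cutting off sharply. Heights increase upwards, so
    z < z_base is *below* the base."""
    z = np.asarray(z_km, dtype=float)
    return np.where(z >= z_base_km,
                    opacity_above,
                    opacity_above * np.exp((z - z_base_km) / H_DECAY_KM))

z = np.array([-3.0, -1.0, 0.0, 2.0])   # km relative to the cloud base
op = opacity_profile(z, 0.0, 1.0)
print(op)
```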
Figure 9: Fitted latitudinal variation of cloud opacity/bar (at 800 nm) and
imaginary refractive indices of the cloud and haze particles. The top panel
shows a contour plot of the fitted cloud opacity/bar profiles (darker regions
indicate greater cloud density), while the bottom panel shows the latitudinal
variation of the retrieved imaginary refractive indices of the cloud particles
(red) and haze particles (blue), together with error ranges (grey). The
imaginary refractive index is poorly constrained for the cloud (large error
bars): we just need the particles to be highly scattering. However, the
imaginary refractive index of the haze is well
constrained, and shows these particles are more scattering at 20–40∘S. The red
line in the top plot indicates the cloud top pressure (i.e., level where
overlying cloud opacity at 800 nm is unity). The cloud contour map indicates
that the main cloud top lies at similar pressure levels at all latitudes, with
a cloud-top pressure of $\sim$ 3–4 bar. We can also see a very slight increase
in stratospheric haze opacity at 20 – 40∘S, associated with the cloudy zone
and then clearing slightly towards the north and south.
In addition to providing a better constrained retrieval of cloud-top methane
mole fraction, showing its mole fraction to decrease from equator to south
pole, our new retrieval scheme appears to detect noticeably lower mole
fractions of methane near 60∘S. This is more easily seen in Fig. 10, which
shows the spatial variation of tropospheric cloud opacity, tropospheric cloud
fractional scale height, stratospheric haze opacity and cloud-top methane
mole fraction projected onto the disc of Neptune as seen by VLT/MUSE. It is
difficult to be certain if this is a real feature as we have much less
geometrical coverage of the limb-darkening curves as we approach the south
pole. If it is a real feature, it may be related to the South Polar Feature
(SPF), e.g., Tollefson et al. (2019). We will return to this question in the
next section.
Figure 10: Fitted tropospheric cloud opacity (Cloud Opac.), tropospheric cloud
fractional scale height (Cloud FSH), stratospheric haze opacity (Haze Opac.)
and cloud-top methane mole fraction (Methane VMR) projected onto Neptune’s
disc as seen by VLT/MUSE. Also plotted are the opacity (Meth. Cloud Opac.) and
mean pressure level (Meth. Cloud Press.) of additional methane clouds used to
fit the discrete cloud regions. The methane mole fraction plot clearly shows
lower values of methane polewards of $30-40^{\circ}$S and a possible local
minimum at $60^{\circ}$S.
Finally, in Fig. 11 we show the fitted spectra from Model 2 at $0^{\circ}$ and
$61.45^{\circ}$ zenith angle, compared with the synthetic ‘observed’ spectra
at the equator and $60^{\circ}$S. The error bars of the synthetic observations
are shown as the lighter colour-shaded regions and it is apparent here why the
errors in the synthetic observations had to be inflated to enable the
retrieval model to fit to an accuracy of $\chi^{2}/n\sim 1$: even when
inflated the reflectivity errors are still small ($\sim 0.5$ %) compared with
the likely accuracy of the gaseous absorption coefficients used and the
simplicity of our cloud parameterization scheme.
Figure 11: Spectra fitted by NEMESIS to synthetic observations at $0^{\circ}$
(red) and $61.45^{\circ}$ (blue) zenith angle generated from the Minnaert
limb-darkening analysis at $60^{\circ}$S and the equator. The Minnaert-
modelled synthetic observed spectra and error limits (as described in the
text) are shown as the light shaded regions, while the spectra fitted by our
retrieval model are shown as the solid, darker lines.
### 3.4 Comparison with previous retrievals
The differences between our new methane retrievals and our previous estimates
(Irwin et al., 2019) are for some locations greater than 3-$\sigma$, although
it should be remembered that these are random errors only, and do not account
for systematic errors arising from differences in the assumed methane/cloud
models. These differences mostly occur in the cloud belt near 20 – 40∘S, where
we have unaccounted-for upper tropospheric methane ice clouds, but we wondered
whether there might be other effects that could explain the sharper
latitudinal changes in methane abundance and estimated errors of Irwin et al.
(2019). Our previous retrievals assumed a base cloud pressure of
$p_{base}=4.23$ bar, rather than $p_{base}=4.66$ bar as we assumed here.
Hence, we re-ran our retrievals using the two other base pressures listed in
Table 2, namely 4.15 and 3.65 bar.
The results for Model 2 (where we also fit for $n_{imag}$ of both cloud and
haze) for all three cloud base pressures are shown in Fig. 8. As can be seen,
the results for $p_{base}=4.15$ bar are very similar to those for
$p_{base}=4.66$ bar, but those for $p_{base}=3.65$ are very different and
appear, in terms of scatter and inflated error bars, more like our previous
results (Irwin et al., 2019). We believe this to be caused by an artefact of our
retrieval model, where we have assumed a single cloud with fixed base pressure
and variable scale height combined with our simple “step” methane model. For
Model 2 with $p_{base}=4.66$ bar it can be seen in Fig. 9 that the retrieved
level of unit cloud optical depth is in the range 3–4 bar, depending on
latitude, comfortably greater than the methane condensation pressure. However,
when the cloud base pressure is lowered to $p_{base}=3.65$ bar the cloud
opacity has to be greater at lower pressure levels in order to give enough
overall reflectivity. This pushes the level of unit optical depth to lower
pressures and, depending on the deep methane abundance, can at some latitudes
become similar to the methane condensation pressure. In such circumstances the
sensitivity of the calculated reflectivity to the deep methane mole fraction
is reduced and the retrieved mole fraction may need to be greatly increased
(and have greater error bars) to give enough methane absorption, exactly as we
see. The retrieved pressure levels of unit optical depth from Irwin et al.
(2019) are shown in Fig. 9 of that paper to be in the range 1.8–3 bar, which
is indeed rather close to the methane condensation pressure level and so would
unfortunately have suffered from this same systematic artefact. However,
Irwin et al. (2019) assumed $p_{base}=4.23$ bar, a value that gave consistent
results in our new retrievals, which indicates that there must be an additional
difference between the two analyses that caused Irwin et al. (2019) to retrieve
unit optical depth values near the methane condensation level. We have
identified this difference: rather than using a priori values of
$n_{imag}=0.001$ and 0.1 for the cloud and haze respectively, as used in this
study, Irwin et al. (2019) assumed the imaginary refractive index spectra to be
fixed at all latitudes to those retrieved from their limb-darkening analysis at
5 – 10∘S. Figure 6 of Irwin et al. (2019) shows that the haze particles were
found at the
equator to be rather dark ($n_{imag}$ in the range 0.076 to 0.159, depending
on wavelength). These darker, wavelength-dependent haze particles, combined
with the extended wavelength region of 770 – 930 nm (compared with 800 – 900
nm considered here) led to larger retrieved haze opacities and consequently
required larger cloud opacities to match the peak reflectivity at continuum
wavelengths. This then led to the retrieved unit cloud optical depth levels
approaching the methane condensation pressure.
To demonstrate this effect we repeated the analysis of the central meridian
observations of Irwin et al. (2019), using exactly the same setup as in that
previous study, but substituting the a priori cloud scattering properties with
those used by our new Model 1 (fixed $n_{imag}$) and Model 2 (variable
$n_{imag}$). Our re-fitted methane mole fractions are shown in Fig. 12. Here,
we can see that when reprocessed in this way the set of spectra along
Neptune’s meridian return a latitudinal variation in deep methane mole
fraction that is much more consistent with our new analysis and also with the
HST/STIS determinations (Karkoschka & Tomasko, 2011). The apparently small
retrieved errors of the
reanalysis towards the south pole should be viewed with caution. Such
latitudes are only seen over a very small range of zenith angles and so we
cannot say anything about the limb-darkening here. Hence, we are much more
dependent at these latitudes on the assumed cloud and methane profile, and so
our methane estimates are prone to larger systematic error. In addition, for
the central meridional analysis we used the MUSE pipeline radiance errors
(scaled to give $\chi^{2}/n\sim 1$ for fits to spectra near the equator),
which are smaller near the pole giving smaller apparent methane mole fraction
errors. In contrast our new limb-darkening analysis has fewer points to define
the limb-darkening and so assigns larger error bars to the reconstructed
spectra near the pole. Hence, at these latitudes the retrieved deep mole
fraction is retrieved with larger error.
Returning to the question of the apparent minimum of methane at 60∘S in our
new analysis, with larger retrieval errors towards the south pole the solution
might be expected to partially relax back to the a priori value of ($4\pm
4$)% (note that this parameter is treated logarithmically within NEMESIS, and
hence it is the fractional error, i.e., 1.0, that is used in the covariance
matrix). However, when we repeated the retrievals using a lower a priori
methane mole fraction of ($2\pm 2$)% (i.e., the same fractional error), the
same latitudinal behaviour was determined, as can be seen in Fig. 12
(Model 2A), so this cannot
be the cause. Instead, we believe this apparent methane feature may arise from
the limited range of zenith angles sampled to fit the Minnaert parameters at
these latitudes, since they will appear only at higher zenith angles and our
analysis further omits observations with $\mu<0.3$ ($\mu$ is the cosine of the
zenith angle), to avoid locations too near the disc edge. Figure 4 shows
(bottom right panel) that at these latitudes the Minnaert limb-darkening
parameter, $k$, appears to tend to $\sim 0.5$ at $\sim 80^{\circ}$S at all
methane-absorbing wavelengths, but there is no clear difference in the
appearance of Neptune at this latitude at any wavelength. Hence, we believe
this effect to be a geometrical artefact of our limb-darkening analysis, in
the same way that the central meridional analysis shows a continuing decrease
towards the pole as we view the locations at higher and higher zenith angle.
Only observations recorded with even higher spatial resolution would be able
to better constrain the latitudinal variability of methane at such high
latitudes. In the meantime, our new determinations of methane mole fraction at
polar latitudes have larger retrieval errors properly indicating this greater
uncertainty.
Figure 12: As Fig. 8, but comparing the results from our previous work
(Irwin et al., 2019) with the results of a reprocessing of the central
meridian spectra considered in that study, where the cloud/haze scattering
properties have been replaced by those used in our new analysis and either
fixed at all latitudes (Reproc. Model 1), or allowed to vary with latitude
(Reproc. Model 2). In
addition, we show the results of our limb-darkening analysis with Model 2
(previously shown in Fig. 8 and which has an a priori methane mole fraction of
4%) and a revised version of Model 2, where the a priori methane mole fraction
has been reduced to 2% (Model 2A).
Finally, we return to the question of assumed methane and cloud profile
parameterizations and the absolute accuracy of our methane retrievals. As
noted earlier, with spectral observations such as these in this wavelength
region what we are actually sensitive to is the column abundance of methane
above the cloud top. Sadly, the vertical resolution of such nadir/near-nadir
observations cannot physically be less than $\sim$ one scale height, which
means that it is very difficult to discriminate between the “step” methane
model used here and more sophisticated models such as the “descended profile”
model favoured by, e.g., Karkoschka & Tomasko (2011) and Sromovsky et al. (2019). This is especially the case
when considering that we do not have a good ab initio model for the vertical
cloud structure either. It is apparent that for models with higher cloud
opacity at lower pressures, the mole fraction of methane will need to be
higher to give the same column abundance and so it can be seen that there
exists a wide range of possible cloud and methane vertical distribution models
that could fit our observations equally well and give the same methane column
abundance for a given latitude. For the nominal Model 2 retrievals presented
here, with cloud base at 4.66 bar we see a clear reduction in the column
abundance of methane towards the pole, which we interpret here in terms of a
deep mole fraction varying from ($5.1\pm 0.3$)% at the equator to ($2.6\pm
0.2$)% at $60^{\circ}$S, i.e., a reduction factor of $1.9\pm 0.2$. However,
the absolute cloud-top methane mole fraction depends on the assumed methane
profile, cloud profile and reference pressure level, and so could conceivably
vary by as much as $\sim\pm 1$%. Hence, we estimate the equatorial deep mole
fraction of methane to be in the range 4–6%, with the abundance at polar
latitudes reduced from the equatorial value by a factor of $1.9\pm 0.2$.
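The quoted reduction factor follows from standard error propagation for a ratio of independent quantities; a minimal sketch:

```python
import math

def ratio_with_error(a, sig_a, b, sig_b):
    """Ratio a/b with 1-sigma error from standard propagation for
    independent random errors: (sig_r/r)^2 = (sig_a/a)^2 + (sig_b/b)^2."""
    r = a / b
    sig_r = r * math.sqrt((sig_a / a) ** 2 + (sig_b / b) ** 2)
    return r, sig_r

# Equator-to-pole reduction in cloud-top methane mole fraction:
# (5.1 +/- 0.3)% at the equator, (2.6 +/- 0.2)% at 60 S.
r, sig = ratio_with_error(5.1, 0.3, 2.6, 0.2)
print(f"{r:.2f} +/- {sig:.2f}")
```

This gives a ratio of about 1.96 ± 0.19, consistent with the quoted $1.9\pm 0.2$.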
### 3.5 Extension to discrete cloud retrievals
Having greatly improved our fit to the background atmospheric state, we
wondered if it might be possible to retrieve, in addition, the cloud profiles
for the discrete cloud regions masked out in our analysis so far. Discrete
clouds such as these are known to exist at pressures from 0.5 – 0.1 bar
(e.g., Irwin et al., 2011) and as such are almost certainly clouds of methane ice. It can
be seen from Fig. 1 that in our observations these clouds are mostly
restricted to latitudes 30 – 40∘S and are of highly variable reflectivity and
only cover a small range of central meridian longitudes. Hence, our Minnaert
limb-darkening analysis, which assumes that the clouds at a particular
latitude are zonally-symmetric and do not vary with central meridian
longitude, was not appropriate for analysing these discrete clouds. Instead,
assuming that the cloud properties of the background tropospheric and
stratospheric clouds would not be different from their zonally-retrieved
Minnaert values, we looked to see what opacity and pressure level additional
discrete methane ice clouds might need to have to best match the observed
spectra in the previously masked pixels. We assumed that the methane ice
particles at such low pressures were likely small, with a size distribution
of mean radius 0.1 $\mu$m and variance 0.3. Methane ice is
highly scattering at short wavelengths (Martonchik & Orton, 1994; Grundy et
al., 2002) and so the complex refractive index was set to $1.3+0.00001i$, with
scattering properties calculated via Mie theory and phase functions again
approximated by combined Henyey-Greenstein functions. For vertical location,
we assumed that the cloud had a Gaussian dependence of specific density
(particles/gram) with altitude and had a variable peak pressure and opacity.
The a priori pressure level of the opacity peak was set to 0.3 bar. We
reanalysed the areas that had previously been masked and extended the area
slightly to capture some of the thinner discrete clouds seen (Fig. 2). Then,
for each pixel in this extended area, we set the tropospheric cloud,
stratospheric haze and cloud-top methane mole fraction to that determined from
the zonally-averaged Minnaert analysis for that latitude and retrieved the
opacity and peak pressure of an additional methane ice cloud.
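The Gaussian specific-density parameterization of the discrete methane clouds can be sketched as follows; the vertical grid and width are illustrative assumptions, while the peak level and column opacity are the two retrieved quantities:

```python
import numpy as np

def methane_cloud_density(z_km, z_peak_km, sigma_km, total_opacity):
    """Illustrative Gaussian specific-density profile for a discrete methane
    ice cloud: the density peaks at z_peak_km with width sigma_km, and is
    scaled so the vertically integrated profile equals the retrieved column
    opacity."""
    z = np.asarray(z_km, dtype=float)
    shape = np.exp(-0.5 * ((z - z_peak_km) / sigma_km) ** 2)
    return total_opacity * shape / np.trapz(shape, z)

z = np.linspace(0.0, 100.0, 2001)             # km, arbitrary vertical grid
profile = methane_cloud_density(z, 50.0, 5.0, 0.75)
print(np.trapz(profile, z))                    # recovers the column opacity
```

Normalising by the numerical integral (rather than the analytic Gaussian norm) keeps the column opacity exact even when the grid truncates the Gaussian tails.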
Our retrieved methane ice cloud properties can be seen in Fig. 10, where we
retrieve opacities of up to 0.75. Although there are clearly some regions of
thick methane ice cloud, the median value of this additional opacity is found
to be only 0.0063 and so it only makes a significant difference to the
observed radiances in the small discrete regions seen. Figure 10 also shows an
apparent variation in mean pressure of these discrete clouds, remaining near
the a priori pressure of $\sim 0.3$ bar for the thinnest clouds, but reducing
to as low as 0.15 bar for the thickest clouds. However, we find this pressure
variation to be insignificant compared with the retrieval errors. Reviewing
the two-way transmission-to-space within the 800 – 900 nm wavelength region
examined here, we find that we are only weakly sensitive to the actual
pressure level of detached methane clouds in the 0.5 – 0.1 bar region. Our
chosen wavelength band includes the strong methane absorption band at 887 nm,
but even here the two-way transmission to space only reduces to 0.5 at the
0.35 bar level for nominal cloud/haze conditions. To really probe the
altitudes of such clouds we need to use observations in the much stronger
methane bands at 1.7 $\mu$m in the H-band, as has been done by numerous
previous authors (e.g., Irwin et al., 2011, 2016; Luszcz-Cook et al., 2016),
but this spectral region is not observable by MUSE. We examined the raw MUSE
observations near 727 nm and 887 nm, but could not see any clear difference in
the brightness of the discrete clouds with wavelength for either weak or
bright detached clouds. Hence, all we can really say with the MUSE
observations is that the discrete clouds (for all opacities) must lie
somewhere at pressures less than $\sim 0.4$ bar.
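The sensitivity argument above can be illustrated with a toy two-way transmission calculation; the cumulative opacity profile here is an arbitrary assumption, not the real 887 nm methane band:

```python
import numpy as np

def pressure_at_half_transmission(p_bar, tau_nadir):
    """Pressure at which the two-way (down-and-up) nadir transmission
    exp(-2*tau) drops to 0.5, i.e. where tau = ln(2)/2 ~ 0.35.
    p_bar and tau_nadir give cumulative optical depth from the top of the
    atmosphere downwards."""
    t_two_way = np.exp(-2.0 * tau_nadir)
    # np.interp needs increasing x, so reverse the (decreasing) transmission.
    return np.interp(0.5, t_two_way[::-1], p_bar[::-1])

# Illustrative cumulative opacity profile (an assumption): tau grows as a
# power of pressure.
p = np.linspace(0.01, 1.0, 1000)     # bar
tau = 1.0 * p ** 1.5                  # cumulative nadir optical depth
p_half = pressure_at_half_transmission(p, tau)
print(p_half)
```

Clouds well above this half-transmission level leave almost no imprint on the transmission, which is why the pressures of the thin detached clouds are so weakly constrained.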
With the addition of discrete methane clouds our forward-modelled
reconstructed images of Neptune at 830 and 840 nm are compared with the MUSE
observations in Fig. 13. Comparing with Fig. 1, it can be seen that we achieve
a very good fit at all locations on Neptune’s disc. Although we show the fit
at only two wavelengths here, it was found to be very good across the whole
800 – 900 nm range. Fig. 14 shows the root-mean-square (RMS) reflectivity
differences of our fits at all wavelengths: we match the observed spectra to
an RMS of typically 0.05%, increasing to only 0.15% in the brightest methane
ice clouds. At these locations the model may be struggling with the fact that
the stratospheric haze opacity was set and fixed to that derived from
zonally-averaged fits in which the discrete clouds had not been entirely
masked.
Figure 13: As Fig. 1, but here the reconstructed images are generated from
NEMESIS forward-modelling calculations from the fitted cloud and methane
profiles, including fitting of discrete cloud regions using an additional thin
methane ice cloud layer. As can be seen the residuals are reduced greatly and
are small at all locations on the planet’s disc. Figure 14: Root-mean-square
reflectivity (I/F) differences of our final fit to the observations of Neptune
over the entire visible disc from 800 – 900 nm.
## 4 Conclusions
In this work, we have reanalysed VLT/MUSE-NFM observations of Neptune, made in
June 2018 (Irwin et al., 2019), with a Minnaert limb-darkening analysis
recently developed for Jupiter studies (Pérez-Hoyos et al., 2020). We find
that the new
scheme allows us to use the observations much more effectively than simply
analysing along the central meridian, as we had previously done, as it also
accounts for the observed limb-darkening/limb-brightening at different
wavelengths and latitudes. Having fitted the general latitudinal variation of
cloud and haze with this analysis, we are then able to fit for the properties
of discrete methane clouds seen in our observations, allowing us to fit all
locations on the visible disc to a reflectivity (I/F) precision of 0.5 – 1.0%
(RMS $<0.15$%). Our main conclusions are:
* •
We find that we are able to fit the background reflectivity spectrum of
Neptune from 800 – 900 nm with a simple two-cloud zonally symmetric model
comprising a deep cloud based at 4.66 bar, with variable fractional scale
height and a layer of stratospheric haze based near 0.03 bar.
* •
The cloud-top mole fraction of methane at 2–4 bar (i.e., above the H$_2$S
cloud, based at 4.66 bar) is found to decrease by a factor of $1.9\pm 0.2$
from equator to pole, from 4–6% at the equator to 2–4% at the south pole.
* •
While this latitudinal decrease in methane mole fraction is well defined, the
absolute mole fractions at different latitudes depend on the precise choice
of cloud parameterization (and indeed methane parameterization), for which a
wide range of setups give similar goodness of fit.
* •
The previous retrievals along the central meridian of these data reported by
Irwin et al. (2019) appear to have suffered from an unfortunate retrieval
artefact at some locations due to a less sophisticated incorporation of
limb-darkening. Our new methane retrievals are more robust, vary more smoothly
with latitude, and give a clearer and more conservative estimate of the likely
error limits.
* •
The opacity of both the tropospheric cloud and stratospheric haze is found to
be maximum at 20 – $40^{\circ}$S and 20 – $40^{\circ}$N, which are also the
latitudes of the discrete methane clouds seen. While this may be a real
feature, it may also be that the limb-darkening curves analysed by our
Minnaert scheme were contaminated by thin discrete clouds at these latitudes.
* •
Adding localised methane clouds to our zonal model allows us to additionally
retrieve the properties of the discrete cloud locations; we find these clouds
must lie at pressures less than 0.4 bar and have opacities of up to 0.75.
* •
The latitudinal variation of cloud-top methane mole fraction observed here in
2018 seems little changed from that seen by HST/STIS in 2003 (Karkoschka &
Tomasko, 2011), indicating that this latitudinal distribution of cloud-top
methane has not varied significantly in the intervening fifteen years.
Having now developed a scheme that matches the observed spectra of Neptune
from 800 – 900 nm, the next step will be to reproduce the appearance of
Neptune at other wavelengths to see if our fitted cloud/methane model is more
generally applicable. There will be two main challenges to this:
1. 1.
Firstly, when extending to shorter wavelengths, the contribution of Rayleigh
scattering becomes more important, which limits our ability to see down to
deeper cloud layers. However, more important for the MUSE-NFM observations is
the fact that the full-width-half-maximum (FWHM) of the point-spread-function
(PSF) becomes significantly worse. It can be seen in Fig. 1 that even at
800–900 nm, there is a considerable residual off-disc signal due to the PSF.
We could have tried to model this in our fitting procedure, but concluded that
it was simpler to limit ourselves to locations not too near the disc edge in
the Minnaert analysis. However, this would become more difficult to justify at
shorter wavelengths. Instead, in future work we hope to take our existing
model, calculate the appearance at shorter wavelengths, and convolve with a
PSF model, which still needs to be developed. By thus simulating the shortwave
observations we will be able to see if we can discount more of the possible
cloud/haze parameterisation setups, which will help refine our methane
retrievals.
2. 2.
When extending to longer wavelengths, the contribution of Rayleigh scattering
becomes smaller, and the increasing strength of the methane absorption bands
means that it is possible to probe the vertical extent and location of clouds more
precisely. We would need to extend to J, H and K-band observations to see
which of our cloud/haze setups can be discounted, using the fact that the
opacity of small cloud particles will fall more quickly with wavelength than
large particles. However, this will mean trying to analyse observations taken
at different apparitions and with different instruments, so the cloud
distribution will not be the same and systematic errors in the photometric
calibration and PSF characterisation will arise. In addition, the
complex refractive indices of the particles will be different from those we
have derived at these wavelengths.
Although these difficulties are not insurmountable, they will require careful
evaluation and effort to overcome, which is why we leave them as future work.
However, it is clear from this study that splitting the atmosphere of Neptune
into a background zonal part (that can be fitted with the Minnaert limb-
darkening scheme) and an additional spatially-varying part due to discrete
methane clouds greatly simplifies the retrieval process and allows us to
efficiently reconstruct the entire 3-D structure of Neptune’s clouds and
methane cloud-top mole fraction. Our model is then simultaneously consistent
with the observations at all wavelengths under consideration and all locations
on Neptune’s disc. The approach could also be applied to observations of
Neptune at other wavelengths and also Uranus and Saturn, which to a first
approximation appear zonally symmetric with additional discrete cloud features
at visible/near-infrared wavelengths. It could also be further applied to
Jupiter observations, building upon the work of Pérez-Hoyos et al. (2020), at wavelengths and
latitudes that appear relatively homogeneous.
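For reference, the Minnaert limb-darkening scheme takes the standard form $(I/F)=(I/F)_{0}\,\mu_{0}^{k}\,\mu^{k-1}$, where $\mu_{0}$ and $\mu$ are the cosines of the solar and observer zenith angles. A minimal sketch (the function and parameter names are ours, not taken from the paper's retrieval code):

```python
def minnaert(if0, k, mu0, mu):
    """Minnaert limb-darkening law: I/F = (I/F)_0 * mu0**k * mu**(k - 1).

    if0 -- nadir reflectivity (I/F)_0
    k   -- Minnaert limb-darkening exponent
    mu0 -- cosine of the solar zenith angle
    mu  -- cosine of the observer (emission) zenith angle
    """
    return if0 * mu0 ** k * mu ** (k - 1)

# At disc centre (mu = mu0 = 1) the law returns the nadir value itself.
print(minnaert(0.1, 0.8, 1.0, 1.0))  # 0.1
```

For $k>0.5$ the law darkens towards the limb, which is what makes it a convenient one-parameter background model for a zonally symmetric disc.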
###### Acknowledgements.
We are grateful to the United Kingdom Science and Technology Facilities
Council for funding this research. Glenn Orton was supported by NASA funding
to the Jet Propulsion Laboratory, California Institute of Technology. Leigh
Fletcher was supported by a Royal Society Research Fellowship and European
Research Council Consolidator Grant (under the European Union’s Horizon 2020
research and innovation programme, grant agreement No 723890) at the
University of Leicester. The observations reported in this paper have the ESO
ID: 60.A-9100(K).
## References
* Bacon, R., Accardo, M., Adjali, L., Anwand, H., Bauer, S., Biswas, I., … Yerle, N. (2010). The MUSE second-generation VLT instrument. Proc. SPIE, 7735, 773508. doi:10.1117/12.856027
* Borysow, A., Borysow, J., & Fu, Y. (2000). Semi-empirical Model of Collision-Induced Absorption Spectra of H2–H2 Complexes in the Second Overtone Band of Hydrogen at Temperatures from 50 to 500 K. Icarus, 145(2), 601–608. doi:10.1006/icar.2000.6384
* Borysow, A., & Frommhold, L. (1989). Collision-induced Infrared Spectra of H2–He Pairs at Temperatures from 18 to 7000 K. II. Overtone and Hot Bands. ApJ, 341, 549. doi:10.1086/167515
* Borysow, A., Frommhold, L., & Moraldi, M. (1989). Collision-induced Infrared Spectra of H2–He Pairs Involving 0–1 Vibrational Transitions and Temperatures from 18 to 7000 K. ApJ, 336, 495. doi:10.1086/167027
* Chance, K., & Kurucz, R. L. (2010). An improved high-resolution solar reference spectrum for earth's atmosphere measurements in the ultraviolet, visible, and near infrared. J. Quant. Spec. Radiat. Transf., 111(9), 1289–1295. doi:10.1016/j.jqsrt.2010.01.036
* Grundy, W. M., Schmitt, B., & Quirico, E. (2002). The Temperature-Dependent Spectrum of Methane Ice I between 0.7 and 5 $\mu$m and Opportunities for Near-Infrared Remote Thermometry. Icarus, 155(2), 486–496. doi:10.1006/icar.2001.6726
* Irwin, P. G. J., Fletcher, L. N., Tice, D., Owen, S. J., Orton, G. S., Teanby, N. A., & Davis, G. R. (2016). Time variability of Neptune's horizontal and vertical cloud structure revealed by VLT/SINFONI and Gemini/NIFS from 2009 to 2013. Icarus, 271, 418–437. doi:10.1016/j.icarus.2016.01.015
* Irwin, P. G. J., Teanby, N. A., Davis, G. R., Fletcher, L. N., Orton, G. S., Tice, D., … Calcutt, S. B. (2011). Multispectral imaging observations of Neptune's cloud structure with Gemini-North. Icarus, 216(1), 141–158. doi:10.1016/j.icarus.2011.08.005
* Irwin, P. G. J., Teanby, N. A., de Kok, R., Fletcher, L. N., Howett, C. J. A., Tsang, C. C. C., … Parrish, P. D. (2008). The NEMESIS planetary atmosphere radiative transfer and retrieval tool. J. Quant. Spec. Radiat. Transf., 109, 1136–1150. doi:10.1016/j.jqsrt.2007.11.006
* Irwin, P. G. J., Toledo, D., Braude, A. S., Bacon, R., Weilbacher, P. M., Teanby, N. A., … Orton, G. S. (2019). Latitudinal variation in the abundance of methane (CH4) above the clouds in Neptune's atmosphere from VLT/MUSE Narrow Field Mode Observations. Icarus, 331, 69–82. doi:10.1016/j.icarus.2019.05.011
* Irwin, P. G. J., Toledo, D., Garland, R., Teanby, N. A., Fletcher, L. N., Orton, G. S., & Bézard, B. (2018). Detection of hydrogen sulfide above the clouds in Uranus's atmosphere. Nature Astronomy, 2, 420–427. doi:10.1038/s41550-018-0432-1
* Karkoschka, E., & Tomasko, M. (2009). The haze and methane distributions on Uranus from HST-STIS spectroscopy. Icarus, 202(1), 287–309. doi:10.1016/j.icarus.2009.02.010
* Karkoschka, E., & Tomasko, M. G. (2010). Methane absorption coefficients for the jovian planets from laboratory, Huygens, and HST data. Icarus, 205(2), 674–694. doi:10.1016/j.icarus.2009.07.044
* Karkoschka, E., & Tomasko, M. G. (2011). The haze and methane distributions on Neptune from HST-STIS spectroscopy. Icarus, 211(1), 780–797. doi:10.1016/j.icarus.2010.08.013
* Lellouch, E., Hartogh, P., Feuchtgruber, H., Vandenbussche, B., de Graauw, T., Moreno, R., … Wildeman, K. (2010). First results of Herschel-PACS observations of Neptune. A&A, 518, L152. doi:10.1051/0004-6361/201014600
* Lindal, G. F. (1992). The Atmosphere of Neptune: an Analysis of Radio Occultation Data Acquired with Voyager 2. AJ, 103, 967. doi:10.1086/116119
* Luszcz-Cook, S. H., de Kleer, K., de Pater, I., Adamkovics, M., & Hammel, H. B. (2016). Retrieving Neptune's aerosol properties from Keck OSIRIS observations. I. Dark regions. Icarus, 276, 52–87. doi:10.1016/j.icarus.2016.04.032
* Martonchik, J. V., & Orton, G. S. (1994). Optical constants of liquid and solid methane. Appl. Opt., 33(36), 8306–8317. doi:10.1364/AO.33.008306
* Minnaert, M. (1941). The reciprocity principle in lunar photometry. ApJ, 93, 403–410. doi:10.1086/144279
* Pérez-Hoyos, S., Sánchez-Lavega, A., Sanz-Requena, J. F., Barrado-Izagirre, N., Carrión-González, O., Anguiano-Arteaga, A., … Braude, A. S. (2020). Color and aerosol changes in Jupiter after a North Temperate Belt disturbance. Icarus, 352, 114031. doi:10.1016/j.icarus.2020.114031
* Plass, G. N., Kattawar, G. W., & Catchings, F. E. (1973). Matrix operator theory of radiative transfer. 1: Rayleigh scattering. Appl. Opt., 12, 314–329. doi:10.1364/AO.12.000314
* Sromovsky, L. A., Fry, P. M., & Kim, J. H. (2011). Methane on Uranus: The case for a compact CH4 cloud layer at low latitudes and a severe CH4 depletion at high latitudes based on re-analysis of Voyager occultation measurements and STIS spectroscopy. Icarus, 215(1), 292–312. doi:10.1016/j.icarus.2011.06.024
* Sromovsky, L. A., Karkoschka, E., Fry, P. M., de Pater, I., & Hammel, H. B. (2019). The methane distribution and polar brightening on Uranus based on HST/STIS, Keck/NIRC2, and IRTF/SpeX observations through 2015. Icarus, 317, 266–306. doi:10.1016/j.icarus.2018.06.026
* Sromovsky, L. A., Karkoschka, E., Fry, P. M., Hammel, H. B., de Pater, I., & Rages, K. (2014). Methane depletion in both polar regions of Uranus inferred from HST/STIS and Keck/NIRC2 observations. Icarus, 238, 137–155. doi:10.1016/j.icarus.2014.05.016
* Tollefson, J., de Pater, I., Luszcz-Cook, S., & DeBoer, D. (2019). Neptune's Latitudinal Variations as Viewed with ALMA. AJ, 157(6), 251. doi:10.3847/1538-3881/ab1fdf
Annihilators and associated varieties of Harish-Chandra modules for $Sp(p,q)$
WILLIAM MCGOVERN
Department of Mathematics, Box 354350, University of Washington, Seattle, WA
98195
## Introduction
The purpose of this paper is to extend the recipes of [5] from the group
$SU(p,q)$ to the group Sp$(p,q)$; we will compute annihilators and associated
varieties of simple Harish-Chandra modules for the latter group. We will
appeal to the classification of primitive ideals in enveloping algebras of
types $B$ and $C$ in [4] via their generalized $\tau$-invariants; thus
(inspired by [5]) we will define a map $H$ taking (parameters of) simple
Harish-Chandra modules $X$ of trivial infinitesimal character to pairs
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ where $\mathbf{T}_{1}$ is a
domino tableau and $\overline{\mathbf{T}}_{2}$ an equivalence class of signed
tableaux of the same shape as $\mathbf{T}_{1}$. Then $\mathbf{T}_{1}$ will
parametrize the annihilator of $X$ (via the classification of primitive ideals
in [4]) while any representative of $\overline{\mathbf{T}}_{2}$, suitably
normalized, parametrizes its associated variety (via the classification of
nilpotent orbits in Lie Sp$(p,q)$ in [1, 9.3.5]). The proof of these
properties will rest primarily on the commutativity of the maps $H$ with both
$\tau$-invariants and wall-crossing operators $T_{\alpha\beta}$, defining the
latter operators as in [10]. We will parametrize our modules $X$ via signed
involutions of signature $(p,q)$ and construct the tableaux from the
involutions.
## Cartan subgroups and Weyl groups
For $G=\,\,$Sp$(p,q),n=p+q,p\leq q$ we set $\mathfrak{g}_{0}=\,\,$Lie $G$ and
we let $\mathfrak{g}$ be its complexification. Let $\theta$ be the usual
Cartan involution of $G$ or $\mathfrak{g}$ and let $\mathfrak{k}+\mathfrak{p}$
be the corresponding Cartan decomposition. Denote by $K$ the subgroup of the
complexification of $G$ corresponding to $\mathfrak{k}$. Let $H_{0}$ be a
compact Cartan subgroup of $G$ with complexified Lie algebra $\mathfrak{h}$.
As a choice of simple roots in $\mathfrak{g}$ relative to $\mathfrak{h}$ we
take $2e_{1},e_{2}-e_{1},\ldots,e_{n-1}-e_{n-2},e_{n}-e_{n-1}$, following [4].
There are $p+1$ conjugacy classes of Cartan subgroups of $G$. If we take
$H_{0}$ to be a compact Cartan subgroup and define $H_{i}$ inductively for
$i>0$ as the Cayley transform of $H_{i-1}$ through $e_{p-i+1}-e_{p+i}$ for
$1\leq i\leq p$, then the $H_{i}$ furnish a complete set of representatives
for the conjugacy classes of Cartan subgroups of $G$. The real Weyl group
$W(H_{i})$ of the $i$th Cartan subgroup $H_{i}$ is isomorphic to
$W_{p-i}\ltimes S_{2}^{p-i}\times W_{i}\times W_{q-p+i}$, where $W_{r},S_{j}$
respectively denote the hyperoctahedral group of rank $r$ and the symmetric
group on $j$ letters; here $W(H_{i})$ embeds into the complex Weyl group $W$
of $\mathfrak{g}$ (relative to $\mathfrak{h}$) by permuting, interchanging,
and changing the signs of the first $p-i$ pairs of coordinates and then
permuting and changing the signs of the next $i$ and then the next $q-p+i$
coordinates [6, 7]. The subgroups $H_{i}$ are all connected and there is a
single block of simple Harish-Chandra modules for $G$ with trivial
infinitesimal character.
## The $\mathcal{D}$ set and Cartan involutions
Using Vogan’s classification of simple Harish-Chandra modules with trivial
infinitesimal character by $\mathbb{Z}/2\mathbb{Z}$-data [11], we parametrize
such modules for these groups combinatorially, as follows. Define
$\mathcal{S}_{n}$, the set of signed involutions on $n$ letters, to be the set
of all sets $\\{s_{1},\ldots,s_{m}\\}$, where each $s_{i}$ takes one of the
forms $(i,+),(i,-),(i,j)^{+},(i,j)^{-}$, where $i,j$ lie between 1 and $n$,
the pairs $(i,j)$ are ordered with $i<j$, and each index $i$ between 1 and $n$
appears in exactly one element of exactly one of the above types. We say that
$\sigma\in\mathcal{S}_{n}$ has signature $(p,q)$ or lies in
$\mathcal{S}_{n,p}$, if the total number of singletons $(i,+)$ and pairs
$(i,j)^{+}$ or $(i,j)^{-}$ in $\sigma$ is $p$. Let $\mathcal{I}_{n}$ denote
the set of all involutions in the complex Weyl group $W_{n}$. Identify an
element $\iota\in\mathcal{I}_{n}$ with an element $\sigma\in\mathcal{S}_{n}$
by decreeing that $(j,+)\in\sigma$ if $\iota(j)=j,(j,-)\in\sigma$ if
$\iota(j)=-j,(i,j)^{+}\in\sigma$ if the (positive) indices $i,j$ are flipped
by $\iota,(i,j)^{-}\in\sigma$ if instead the indices $i,-j$ are flipped by
$\iota$. Then $W_{n}$ acts on $\mathcal{I}_{n}$ by conjugation and we may
transfer this action to $\mathcal{S}_{n}$ via the above identification. The
set of involutions flipping $p-r$ pairs of indices in $\\{\pm
1,\ldots,\pm(p+q)\\}$ is stable under conjugation and has cardinality equal
to the index of $W(H_{p-r})$, whence we may take $\mathcal{S}_{n,p}$ as a
parametrizing set $\mathcal{D}$ for the set of simple Harish-Chandra modules
for Sp$(p,q)$ with trivial infinitesimal character. More generally, as needed
for inductive arguments below, we define for any subset $M$ of
$\\{1,\ldots,n\\}$ the set $\mathcal{S}_{M,p}$ in the same way as
$\mathcal{S}_{n,p}$, replacing the numbers $1,\ldots,n$ by the numbers in $M$.
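The set $\mathcal{S}_{n,p}$ is easy to model concretely. The following sketch encodes singletons $(i,\epsilon)$ as `('s', i, sign)` and pairs $(i,j)^{\epsilon}$ as `('p', i, j, sign)`; this tuple encoding is ours, not the paper's notation:

```python
def is_valid(sigma, n):
    """Check that each index 1..n occurs exactly once across the elements
    of a signed involution sigma, with pairs ordered i < j."""
    seen = []
    for el in sigma:
        if el[0] == 's':              # singleton (i, +) or (i, -)
            seen.append(el[1])
        else:                         # pair (i, j)^+ or (i, j)^- with i < j
            _, i, j, _ = el
            if not i < j:
                return False
            seen += [i, j]
    return sorted(seen) == list(range(1, n + 1))

def signature_p(sigma):
    """p = number of singletons (i, +) plus number of pairs of either sign."""
    return sum(1 for el in sigma
               if (el[0] == 's' and el[2] == '+') or el[0] == 'p')

# The paper's second worked example: {(1,+), (2,-), (3,4)^+, (5,+)} in S_{5,3}
sigma = [('s', 1, '+'), ('s', 2, '-'), ('p', 3, 4, '+'), ('s', 5, '+')]
print(is_valid(sigma, 5), signature_p(sigma))  # True 3
```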
The Cartan involution $\theta$ corresponding to any
$\sigma\in\mathcal{S}_{n,p}$ fixes a unit coordinate vector $e_{i}$ whenever
$(i,+)\in\sigma$ or $(i,-)\in\sigma$. If $(i,j)^{+}\in\sigma$, then $\theta$
flips the vectors $e_{i}$ and $e_{j}$, while if $(i,j)^{-}\in\sigma$, then
$\theta$ flips $e_{i}$ and $-e_{j}$, unless $j-i=1$, in which case $\theta$
sends both $e_{i}$ and $e_{j}$ to their negatives. Thus a simple root
$e_{i+1}-e_{i}$ is compact imaginary for $\sigma$ if and only if
$(i,\epsilon)$ and $(i+1,\epsilon)$ both lie in $\sigma$ for some sign
$\epsilon$, while $2e_{1}$ is compact imaginary if and only if $(1,\epsilon)$
lies in $\sigma$ for some sign $\epsilon$.
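The rules above for the Cartan involution $\theta$ associated to $\sigma$ can be tabulated directly. A sketch, again using our tuple encoding, returning $\theta$ as a map $i\mapsto(j,s)$ meaning $\theta(e_{i})=s\,e_{j}$:

```python
def theta_map(sigma):
    """Cartan involution on basis vectors, following the rules in the text."""
    t = {}
    for el in sigma:
        if el[0] == 's':                 # (i, +/-): theta fixes e_i
            t[el[1]] = (el[1], 1)
        else:
            _, i, j, eps = el
            if eps == '+':               # (i, j)^+: theta swaps e_i and e_j
                t[i], t[j] = (j, 1), (i, 1)
            elif j - i == 1:             # adjacent (i, j)^-: both negated
                t[i], t[j] = (i, -1), (j, -1)
            else:                        # (i, j)^-: theta flips e_i and -e_j
                t[i], t[j] = (j, -1), (i, -1)
    return t

print(theta_map([('p', 1, 2, '-')]))  # {1: (1, -1), 2: (2, -1)}
print(theta_map([('p', 1, 3, '-'), ('s', 2, '+')]))
```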
## The cross action and the Cayley transform
For an element $\sigma$ of $\mathcal{S}_{n,p}$ and a pair of indices $i,j$
between 1 and $n$, we define In$(i,j,\sigma)$ as in [5, Definition 1.9.1],
interchanging the unique occurrences of $i$ and $j$ in $\sigma$ and leaving
all others unchanged (so that $\sigma$ is unchanged if $(i,j)^{+}$ or
$(i,j)^{-}$ occurs in it). We define SC$(1,\sigma)$ to be $\sigma$ with the
pair $(1,i)^{\epsilon}$ in it replaced by $(1,i)^{-\epsilon}$, where
$-\epsilon$ is the opposite sign to $\epsilon$, if there is such a pair in
$\sigma$; otherwise SC$(1,\sigma)=\sigma$. We define In$(1,2,\sigma)^{\prime}$ to be $\sigma$ with
the pairs $(1,i)^{\epsilon},(2,j)^{{\epsilon}^{\prime}}$ replaced by
$(1,j)^{{-{\epsilon}^{\prime}}},(2,i)^{-\epsilon}$ if such pairs occur in
$\sigma$; otherwise we set In$(1,2,\sigma)^{\prime}=\,$In$(1,2,\sigma)$.
###### 3.1. PROPOSITION.
Let $s$ be the simple root $e_{i}-e_{i-1},\sigma\in\mathcal{S}_{n,p}$. Then
$s\times\sigma$, the cross action of $s$ on the parameter $\sigma$, is given
by In$(i-1,i,\sigma)$. For $t=2e_{1}$ and $\sigma\in\mathcal{S}_{n,p}$ we have
$t\times\sigma=\,$SC$(1,\sigma)$.
###### Proof.
This may be computed directly, along the lines of [5, §§1.8,9]. ∎
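The operators In and SC can be sketched on the tuple encoding used above. This is a simplification of [5, Definition 1.9.1]: we simply reorder a pair's indices increasingly and keep its sign, glossing over any further sign conventions there:

```python
def In(i, j, sigma):
    """Interchange the occurrences of i and j in sigma, leaving everything
    else unchanged; sigma is returned unchanged if i and j are paired
    with each other."""
    if any(el[0] == 'p' and {el[1], el[2]} == {i, j} for el in sigma):
        return list(sigma)
    swap = {i: j, j: i}
    out = []
    for el in sigma:
        if el[0] == 's':
            out.append(('s', swap.get(el[1], el[1]), el[2]))
        else:
            a, b = swap.get(el[1], el[1]), swap.get(el[2], el[2])
            out.append(('p', min(a, b), max(a, b), el[3]))
    return out

def SC(sigma):
    """Flip the sign of the pair (1, i)^eps if one occurs; else identity."""
    flip = {'+': '-', '-': '+'}
    return [('p', 1, el[2], flip[el[3]])
            if el[0] == 'p' and el[1] == 1 else el
            for el in sigma]

print(In(1, 2, [('s', 1, '+'), ('s', 2, '-')]))  # [('s', 2, '+'), ('s', 1, '-')]
print(SC([('p', 1, 3, '+'), ('s', 2, '-')]))     # [('p', 1, 3, '-'), ('s', 2, '-')]
```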
As in [5], we will also need to compute Cayley transforms of parameters
$\sigma$ through simple roots. It suffices to consider the simple root
$s=e_{i+1}-e_{i}$. Define $c^{i}(\sigma)$ to be $\sigma$ with the pairs
$(i,\epsilon),(i+1,-\epsilon)$ replaced by $(i,i+1)^{\epsilon}$ for
$\sigma\in\mathcal{S}_{n,p}$, if such pairs occur in $\sigma$; otherwise
$c^{i}(\sigma)$ is undefined. Thus $c^{i}(\sigma)$ is defined if and only if
$e_{i+1}-e_{i}$ is an imaginary noncompact root for $\sigma$. In a similar
way, define $c_{i}(\sigma)$ if $e_{i+1}-e_{i}$ is real for $\sigma$ to be the
involution obtained from $\sigma$ by replacing $(i,i+1)^{\epsilon}$
by $(i,\epsilon),(i+1,-\epsilon)$.
###### 3.2. PROPOSITION.
If $e_{i+1}-e_{i}$ is imaginary noncompact for a parameter $\sigma$ in
$\mathcal{S}_{n,p}$, then the Cayley transform of $\sigma$ through this root
is given by $c^{i}(\sigma)$. If this root is real for $\sigma$, then the
Cayley transform through it is given by $c_{i}(\sigma)$.
###### Proof.
Again this follows from a direct computation, along the lines of [5, 1.12]. ∎
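The Cayley transforms $c^{i}$ and $c_{i}$ are inverse replacement rules, which makes them straightforward to sketch on the tuple encoding (returning `None` where the transform is undefined):

```python
def c_upper(i, sigma):
    """c^i: replace (i, eps), (i+1, -eps) by (i, i+1)^eps; None when
    e_{i+1} - e_i is not imaginary noncompact for sigma (i.e. undefined)."""
    singles = {el[1]: el[2] for el in sigma if el[0] == 's'}
    if i in singles and i + 1 in singles and singles[i] != singles[i + 1]:
        rest = [el for el in sigma
                if not (el[0] == 's' and el[1] in (i, i + 1))]
        return rest + [('p', i, i + 1, singles[i])]
    return None

def c_lower(i, sigma):
    """c_i: replace (i, i+1)^eps by (i, eps), (i+1, -eps); None if undefined."""
    flip = {'+': '-', '-': '+'}
    for el in sigma:
        if el[0] == 'p' and (el[1], el[2]) == (i, i + 1):
            rest = [e for e in sigma if e != el]
            return rest + [('s', i, el[3]), ('s', i + 1, flip[el[3]])]
    return None

sig = [('s', 1, '+'), ('s', 2, '-')]
print(c_upper(1, sig))               # [('p', 1, 2, '+')]
print(c_lower(1, c_upper(1, sig)))   # [('s', 1, '+'), ('s', 2, '-')]
```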
## $\tau$-invariants and wall-crossing operators
We define the $\tau$-invariant $\tau(\sigma)$ of a parameter $\sigma$ in
$\mathcal{S}_{n,p}$ in the same way as in [5, 1.13]: it consists of the simple
roots that are either real, compact imaginary, or complex and sent to negative
roots by the Cartan involution $\theta$. We extend this definition to
$\mathcal{S}_{M,p}$ as in [5, 1.15]. Similarly, if $\alpha,\beta$ are
nonorthogonal simple roots of the same length, then we define the wall-
crossing operator $T_{\alpha\beta}$ on $\mathcal{S}_{n,p}$ as in [5, 1.14] and
[10]. It is single-valued. If $\alpha,\beta$ are nonorthogonal but have
different lengths we define $T_{\alpha\beta}$ on $\mathcal{S}_{n,p}$ in the
same way; in this setting it takes either one or two values. In both cases
$T_{\alpha\beta}$ sends parameters with $\alpha$ not in the $\tau$-invariant
but $\beta$ in it to parameters with the opposite property. In more detail, if
a parameter $\sigma$ includes $(1,2)^{+}$, then the effect of
$T_{\alpha\beta}$ on it is to replace $(1,2)^{+}$ by either $1^{+},2^{-}$ or
$1^{-},2^{+}$, and vice versa; otherwise we interchange $1$ and $2$ in
$\sigma$ if at least one of them is paired with another index but they are not
paired with each other, or we change the sign attached to the pair
$(1,i)^{\epsilon}$ in $\sigma$, whichever (or both) of these operations has
the desired effect on the $\tau$-invariant of $\sigma$.
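The $(1,2)^{+}$ branch of the wall-crossing description above is easy to make concrete. A sketch of this one branch only, on the tuple encoding; the $\tau$-invariant bookkeeping that selects among the branches is omitted:

```python
def T_split_pair(sigma):
    """The branch of T_{alpha,beta} in which sigma contains (1, 2)^+: the
    pair is replaced by either (1,+),(2,-) or (1,-),(2,+), giving the
    operator's two-valued result. Other branches are not modelled."""
    for el in sigma:
        if el == ('p', 1, 2, '+'):
            rest = [e for e in sigma if e != el]
            return (rest + [('s', 1, '+'), ('s', 2, '-')],
                    rest + [('s', 1, '-'), ('s', 2, '+')])
    return None

v1, v2 = T_split_pair([('p', 1, 2, '+'), ('s', 3, '+')])
print(v1)  # [('s', 3, '+'), ('s', 1, '+'), ('s', 2, '-')]
print(v2)  # [('s', 3, '+'), ('s', 1, '-'), ('s', 2, '+')]
```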
## The algorithm
We now describe the algorithm we will use to compute for a given
$\sigma\in\mathcal{S}_{n,p}$ the annihilator and associated variety of the
corresponding Harish-Chandra module for Sp$(p,q)$. We will attach an ordered
pair $(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ to $\sigma$, where
$\mathbf{T}_{1}$ is a domino tableau in the sense of [2] and
$\overline{\mathbf{T}}_{2}$ is an equivalence class of signed tableaux of
signature $(2p,2q)$ and the same shape as $\mathbf{T}_{1}$. The shape of
$\mathbf{T}_{1}$ will be a doubled partition of $2(p+q)$; that is, a partition
of $2(p+q)$ in which the parts occur in equal pairs. Any tableau in
$\overline{\mathbf{T}}_{2}$ will thus also have rows occurring in pairs,
called double rows, of equal length; moreover the two rows in a double row
will begin with the same sign. The equivalence relation defining
$\overline{\mathbf{T}}_{2}$ will be that we can change all signs in any pair
of double rows of the same even length, or in any pair $D_{1},D_{2}$ of double
rows of different even lengths whenever there is an open cycle of
$\mathbf{T}_{1}$ in the sense of [4, 3.1.1] (so not including its smallest
domino) with its hole in one of $D_{1},D_{2}$ and its corner in the other.
Here we allow the second double row $D_{2}$ to have length 0, so that, for
example, if
$\mathbf{T}_{1}=$ $1$ $2$
then the two signed tableaux of the same shape (one with its rows beginning
with $+$, the other with $-$) both lie in $\overline{\mathbf{T}}_{2}$, since
if $\mathbf{T}_{1}$ is moved through the open cycle of its 2-domino, then that
domino is given a clockwise quarter-turn, so that it now occupies one square
of a double row that was empty in $\mathbf{T}_{1}$. On the other hand, if
$\mathbf{T}_{1}$ is replaced by its transpose, then the two signed tableaux of
this shape lie in separate classes $\overline{\mathbf{T}}_{2}$, since in that
case the open cycle of the 2-domino does not intersect the empty double row.
In addition to this equivalence relation, we decree as usual for signed
tableaux that any two of them are identified whenever one can be obtained from
the other by interchanging pairs of double rows of the same length. The
signature of $\overline{\mathbf{T}}_{2}$ (i.e. the number of $+$ signs in it)
will be $2p$; note that this is an invariant of $\overline{\mathbf{T}}_{2}$.
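The signature invariant just mentioned can be computed directly if one represents each row of a signed tableau by its starting sign and length, with signs alternating along the row (a standard convention; the row encoding itself is ours):

```python
from math import ceil

def plus_count(rows):
    """Number of + signs in a signed tableau whose rows are given as
    (starting_sign, length) pairs, signs alternating along each row."""
    total = 0
    for sign, length in rows:
        total += ceil(length / 2) if sign == '+' else length // 2
    return total

# Two double rows of length 2, both starting with '+': each row reads + -,
# so each row contributes exactly one '+' sign.
print(plus_count([('+', 2), ('+', 2), ('+', 2), ('+', 2)]))  # 4
```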
To construct $\mathbf{T}_{1}$ and $\overline{\mathbf{T}}_{2}$ we follow a
similar recipe to [5, Chap. 3], replacing the tableaux occurring there with
domino tableaux and using insertion and bumping for domino tableaux as in [2].
###### 5.1. DEFINITION.
Let $\sigma\in\mathcal{S}_{n,p}$. Order the elements of $\sigma$ by increasing
size of their largest numbers. We construct the pair
$H(\sigma)=(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ attached to $\sigma$
inductively, starting from a pair of empty tableaux. At each step we insert
the next element $(i,\epsilon)$ or $(i,j)^{\epsilon}$ into the current pair of
tableaux. Assume first that the next element of $\sigma$ is $(i,\epsilon)$
(with $\epsilon$ a sign) and choose any representative $\mathbf{T}_{2}$ of
$\overline{\mathbf{T}}_{2}$.
1. (1)
If the first double row of $\mathbf{T}_{2}$ ends in $-\epsilon$, then add
$\epsilon$ to the end of both of its rows and add a vertical domino labelled
$i$ to the end of the first double row of $\mathbf{T}_{1}$.
2. (2)
If not and if the first double row of $\mathbf{T}_{2}$ has (rows of) even
length, then we look first for a lower double row of $\mathbf{T}_{2}$ with the
same length ending in $-\epsilon$; if there is such a double row, we
interchange it with the first double row in $\mathbf{T}_{2}$ and then proceed
as above. Otherwise we start over, trying to insert $(i,\epsilon)$ into the
highest double row of $\mathbf{T}_{2}$ strictly shorter than its first double
row. (In the end, we may have to insert a domino labelled $i$ into a new
double row of $\mathbf{T}_{1}$, using $\epsilon$ for both signs in the new
double row of $\mathbf{T}_{2}$.)
3. (3)
If not and the first (or first available) double row of $\mathbf{T}_{2}$ has
odd length but there is more than one double row of this length, but none
ending in $-\epsilon$, then we change all signs in the first two double rows
of $\mathbf{T}_{2}$ of this length and then proceed as in the previous case.
4. (4)
Otherwise the highest available double row $R$ in $\mathbf{T}_{2}$ has even
length, ends in $\epsilon$, and is the only double row of this length. In this
case we look at the domino in $\mathbf{T}_{1}$ occupying the last square in
the lower row of $R$. If we move $\mathbf{T}_{1}$ through the open cycle of
this domino, we find that its shape changes by removing this square and adding
a square either at the end of the higher row of some double row $R^{\prime}$
of $\mathbf{T}_{1}$ or else in a new row, not in $\mathbf{T}_{1}$. If it lies
in a new row, then change all signs in $R$ and proceed as above. If it does
not lie in a new row and $R^{\prime}\neq R$, then change the signs of
$\mathbf{T}_{2}$ in both $R$ and $R^{\prime}$ and proceed as above (again not
actually moving $\mathbf{T}_{1}$ through the cycle). Finally, if
$R=R^{\prime}$, then move $\mathbf{T}_{1}$ through the open cycle, place a new
horizontal domino labelled $i$ at the end of the lower row of $R$ in
$\mathbf{T}_{1}$, and choose the signs in $\mathbf{T}_{2}$ so that both rows
of $R$ now end in $\epsilon$ while all other rows of $\mathbf{T}_{2}$ have the
same signs as before.
###### 5.2. DEFINITION.
Retain the notation of the previous definition but assume now that the next
element of $\sigma$ is $(i,j)^{\epsilon}$. We begin by inserting a horizontal
domino labelled $i$ at the end of the first row of $\mathbf{T}_{1}$ if
$\epsilon=+$, or a vertical domino labelled $i$ at the end of the first column
of $\mathbf{T}_{1}$ if $\epsilon=-$, following the procedure of [2] (and thus
bumping dominos with higher labels as needed). We obtain a new tableau
$\mathbf{T}^{\prime}$, whose shape is obtained from that of $\mathbf{T}_{1}$ by
adding a single domino $D$, lying either in some double row $R$ of
$\mathbf{T}_{1}$ or else in a new row (in which case $D$ must be horizontal).
Let $\ell$ be the length of $R$ (before $D$ was added).
1. (1)
If $D$ is horizontal and $\ell$ is even, then add a domino labelled $j$ to
$\mathbf{T}^{\prime}$ immediately below the position of $D$, in the lower row
of $R$. Choose signs in $\mathbf{T}_{2}$ so that both rows of $R$ now end in a
different sign than they did before; leave all other signs the same. If $D$
lies in a new row, then we have a new double row in $\mathbf{T}_{2}$, which
can begin with either sign; to make a particular choice, we decree that both
rows in the new double row begin with $-$.
2. (2)
If $D$ is horizontal and $\ell$ is odd, then $\mathbf{T}^{\prime}$ does not
have special shape in the sense of [2], but its shape becomes special if one
moves through just one open cycle. Move through this cycle and choose the
signs in $\mathbf{T}_{2}$ so that $R$ is now a genuine double row and its rows
end in a different sign than they did before. Then $\mathbf{T}_{2}$ has either
two more $+$ signs than before or two more $-$ signs. Insert a vertical domino
labelled $j$ to the first available double row in $\mathbf{T}_{1}$ strictly
below $R$, following the procedure of the previous definition. The sign
attached to $j$ is $-$ if $\mathbf{T}_{2}$ gained two $+$ signs and is $+$
otherwise.
3. (3)
If $D$ is vertical and $\ell$ is even, then $R$ is still a double row; choose
signs so that its rows end in the same sign as they did before, leaving all
other signs unchanged. If a new double row was created by inserting the new
domino, then its rows can begin with either sign; to make a particular choice,
we decree that this sign is $+$. Now add a new vertical domino $j$ to the
first available double row strictly below $R$, as in the previous case, giving
it the same sign as in that case.
4. (4)
If $D$ is vertical and $\ell$ is odd, then proceed as in the previous case.
One can check that either choice of sign made in Definition 5.2(1) or (3)
gives rise to equivalent tableaux $\mathbf{T}_{2}$, so that in the end we get
a well-defined equivalence class $\overline{\mathbf{T}}_{2}$. To compute the
associated variety of the Harish-Chandra module corresponding to $\sigma$, we
choose any representative of $\overline{\mathbf{T}}_{2}$ and normalize it so
that all even rows begin with $+$; this is because this variety is the closure
of one nilpotent $K$-orbit in $\mathfrak{p}^{*}$ [9, 5.2] and such orbits (via
the Kostant-Sekiguchi bijection between them and nilpotent orbits in
$\mathfrak{g}_{0}$) are parametrized by signed tableaux of signature $(2p,2q)$
in which even rows begin with $+$ [1, 9.3.5]. Later we will show that the map
$H$ defines a bijection between $\mathcal{S}_{n,p}$ and pairs
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ with $\overline{\mathbf{T}}_{2}$
of signature $(2p,2q)$.
We give two examples. First let $\sigma=\\{(1,2)^{+}\\}\in\mathcal{S}_{2,1}$.
Then we get
$\mathbf{T}_{1}=$ $1$ $2$
while $\overline{\mathbf{T}}_{2}$ consists of both signed tableaux of this
shape, as noted above. In the other example, we let
$\sigma=\\{(1,+),(2,-),(3,4)^{+},(5,+)\\}\in\mathcal{S}_{5,3}$. Then
$\mathbf{T}_{1}=$ $1$ $2$ $3$ $4$ $5$ and $\mathbf{T}_{2}=$ $+$ $-$ $+$ $-$
$+$
where we denote signed tableaux by tableaux tiled by vertical dominos, each
labelled with the (common) sign of each of its squares; note that here
$\overline{\mathbf{T}}_{2}$ consists of a single tableau.
## $\tau$-invariants and $T_{\alpha\beta}$ on tableaux
We define the $\tau$-invariant
$\tau(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ of a pair
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$, or just of its domino tableau
$\mathbf{T}_{1}$, in the same way as [3, 2.1.9]; thus for example $2e_{1}$
lies in the $\tau$-invariant of $\mathbf{T}_{1}$ in type $C$ if and only if
the 1-domino in it is vertical. If $\alpha,\beta$ are simple roots of the form
$e_{i}-e_{i-1},e_{i+1}-e_{i}$ for some $i$, then we define $T_{\alpha\beta}$
on a domino tableau $\mathbf{T}_{1}$ lying in the domain $D_{\alpha\beta}$ of
this operator as in [3, 2.1.10] and extend this to a pair
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ by making the operator act
trivially on $\overline{\mathbf{T}}_{2}$. We now define $T_{\alpha\beta}$ in
the other cases, following the notation of [3, 2.3.4], and as in [3] defining
this operator on pairs rather than single tableaux.
###### 6.1. DEFINITION.
Suppose that $\alpha=2e_{1},\beta=e_{2}-e_{1}$. If $\mathbf{T}_{1}\in
D_{\alpha\beta}$, then either $F_{2}\subseteq\mathbf{T}_{1}$ or
$\tilde{F}_{2}\subseteq\mathbf{T}_{1}$.
1. (1)
In the first case, let $\mathbf{T}_{1}^{\prime}$ be obtained from
$\mathbf{T}_{1}$ by replacing $F_{2}$ by $F_{1}$. If the 2-domino of
$\mathbf{T}_{1}^{\prime}$ lies in an open cycle not including the 1-domino and
if the equivalence class $\overline{\mathbf{T}}_{2}$ breaks up into two
classes
$\overline{\mathbf{T}}_{2}^{\prime},\overline{\mathbf{T}}_{2}^{\prime\prime}$
with respect to $\mathbf{T}_{1}^{\prime}$, then we set
$T_{\alpha\beta}(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})=((\mathbf{T}_{1}^{\prime},\overline{\mathbf{T}}_{2}^{\prime}),(\mathbf{T}_{1}^{\prime},\overline{\mathbf{T}}_{2}^{\prime\prime}))$.
If the 2-domino lies in a closed cycle $c$, then let
$\tilde{\mathbf{T}}_{1}^{\prime}$ be the tableau obtained from
$\mathbf{T}_{1}^{\prime}$ by moving through $c$ and we set
$T_{\alpha\beta}(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})=((\tilde{\mathbf{T}}_{1}^{\prime},\overline{\mathbf{T}}_{2}),(\mathbf{T}_{1}^{\prime},\overline{\mathbf{T}}_{2}))$.
Otherwise set
$T_{\alpha\beta}(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})=(\mathbf{T}_{1}^{\prime},\overline{\mathbf{T}}_{2})$.
2. (2)
If instead $\tilde{F}_{2}\subseteq\mathbf{T}_{1}$, then the 2-domino of
$\mathbf{T}_{1}$ lies in a closed cycle $c$, since $\mathbf{T}_{1}$ has the
(special) shape of a doubled partition; if this cycle were open, it would have
to be simultaneously an up and down cycle in the sense of [4, §3], a
contradiction. Let $\tilde{\mathbf{T}}_{1}$ be obtained from $\mathbf{T}_{1}$
by moving through $c$ and let $\tilde{\mathbf{T}}_{1}^{\prime}$ be obtained
from $\tilde{\mathbf{T}}_{1}$ by replacing $F_{2}$ by $F_{1}$. Then set
$T_{\alpha\beta}(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})=(\tilde{\mathbf{T}}_{1}^{\prime},\overline{\mathbf{T}}_{2})$.
3. (3)
If instead $\alpha=e_{2}-e_{1},\beta=2e_{1}$, then define
$T_{\alpha\beta}(\mathbf{T}_{1})$ for $\mathbf{T}_{1}\in D_{\alpha\beta}$ as
above, replacing $F_{1},F_{2}$ throughout by
$\tilde{F}_{1},\tilde{F}_{2}$.
For example, if $\mathbf{T}_{1}$ is as in the first example in the last
section, so that $\overline{\mathbf{T}}_{2}$ consists of both signed tableaux
of this shape, then $T_{2e_{1},e_{2}-e_{1}}$ sends
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ to the pair
$((\mathbf{T}^{\prime},\mathbf{T}_{2}^{\prime}),(\mathbf{T}^{\prime},\mathbf{T}_{2}^{\prime\prime}))$,
where $\mathbf{T}^{\prime}$ is the transpose of $\mathbf{T}_{1}$ and
$\mathbf{T}_{2}^{\prime},\mathbf{T}_{2}^{\prime\prime}$ are the two signed
tableaux in $\overline{\mathbf{T}}_{2}$. Note also that, unlike [3, 2.3.4], we
must not move through any open cycles, as all of our tableaux must have
doubled partition shape. There are no right domino tableaux and so there is no
notion of extended cycle.
## $H$ commutes with $\tau$-invariants
As in [5], we prove that our algorithm $H$ computes the annihilators of simple
Harish-Chandra modules by showing that it commutes with taking
$\tau$-invariants and applying wall-crossing operators. In this section we
deal with $\tau$-invariants.
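For reference in the case analysis below, the simple roots in play are those of type $C_{n}$ in the coordinates the paper already uses; the following normalization is the standard one and is assumed here rather than stated explicitly in the text.

```latex
% Simple roots of type C_n (n = p+q) in the coordinates e_1, ..., e_n
% used throughout: one long root and n-1 short roots. (Standard
% normalization; assumed, not spelled out in the text.)
\[
  \alpha_{0} = 2e_{1}, \qquad
  \alpha_{i} = e_{i+1} - e_{i} \quad (1 \le i \le n-1).
\]
```

Every simple root $\alpha$ treated in the cases below has one of these two shapes.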
###### 7.1. PROPOSITION.
Let $\sigma\in\mathcal{S}_{n,p}$ and $\alpha$ a simple root for Sp$(p,q)$.
Then $\alpha\in\tau(\sigma)$ if and only if $\alpha\in\tau(H(\sigma))$.
###### Proof.
We enumerate all possible ways in which $\alpha$ can lie in $\tau(\sigma)$, or
fail to lie in this set, and then check directly that the conclusion holds in
each case. Suppose first that $\alpha\in\tau(\sigma)$.
1. (1)
If $\alpha=e_{i+1}-e_{i}$ is compact imaginary, then we must have
$(i,\epsilon),(i+1,\epsilon)\in\sigma$ for some sign $\epsilon$. The
$i$-domino starts out vertical and in the first double row of
$\mathbf{T}_{1}$; eventually either the $i$- and $(i+1)$-dominos wind up
horizontal with the first on top of the second, or the $(i+1)$-domino is added
vertically to a lower double row. In both cases the $i$-domino winds up above
the $(i+1)$-domino, as desired. If $\alpha=2e_{1}$ is compact imaginary, then
$(1,\epsilon)\in\sigma$ for some sign $\epsilon$, so that the 1-domino is
vertical in $\mathbf{T}_{1}$.
2. (2)
If $\alpha=e_{i+1}-e_{i}$ is real, then we must have $(i,i+1)^{+}\in\sigma$.
It is clear from the algorithm that the $i$-domino winds up below the
$(i+1)$-domino.
3. (3)
Otherwise $\alpha$ is complex. If $\alpha=2e_{1}$, then we must have
$(1,j)^{-}\in\sigma$ for some $j$, and then it is clear that the 1-domino
winds up vertical in the first double row of $\mathbf{T}_{1}$, while the
$j$-domino lies below this double row.
4. (4)
We are now reduced to the case where $\alpha=e_{i+1}-e_{i}$ is complex.
If $(i,\epsilon),(j,i+1)^{\epsilon^{\prime}}\in\sigma$ for some $j<i$ and signs
$\epsilon,\epsilon^{\prime}$, and if the $i$-domino is vertical when adjoined
to $\mathbf{T}_{1}$, then it is added to the end of some double row $R$ such
that the double rows above it end in the same sign as $R$ in $\mathbf{T}_{2}$
(since the $i$-domino was not put into a higher row). When the $j$-domino is
inserted, adding a domino $D$ to the shape of $\mathbf{T}_{1}$, the additional
signs added to $\mathbf{T}_{2}$, if $D$ is vertical, are both $-\epsilon$,
whence the $(i+1)$-domino is now inserted vertically with sign $\epsilon$ and
winds up in a row below $R$ (since all higher rows end with the same sign as
they did before the $j$-domino was inserted). If $D$ is horizontal, then it
lies in the bottom row of $\mathbf{T}_{1}$, and once again, the $(i+1)$-domino
lies below it. The argument is similar if instead the $i$-domino ends up
horizontal (lying directly below another horizontal domino) when it is
adjoined to $\mathbf{T}_{1}$.
5. (5)
If $(i+1,\epsilon),(i,k)^{+}\in\sigma$ for some $k>i+1$, then the
$(i+1)$-domino is vertical at the end of some double row of $\mathbf{T}_{1}$;
the $i$-domino is adjoined horizontally either to this double row or a higher
one; if it is adjoined to this double row, it bumps the $(i+1)$-domino so that
the latter lies below the $i$-domino, as desired.
6. (6)
If $(j,i+1)^{\epsilon},(i,k)^{+}\in\sigma$ and $j<i<k$, then the $i$-domino is
added to the first row of $\mathbf{T}_{1}$, while the $(i+1)$-domino in all
cases lies below this row. A similar argument applies if instead
$(j,i)^{\epsilon},(i+1,k)^{-}\in\sigma$.
7. (7)
If $(j_{2},i)^{+},(j_{1},i+1)^{\epsilon}\in\sigma$ and $j_{1}<j_{2}<i$, then
adding the $j_{1}$-domino bumps the dominos that were previously bumped by
adding the $j_{2}$-domino, together with at least one domino in the double row
of the $i$-domino, so that the $(i+1)$-domino winds up below the $i$-domino.
8. (8)
If $(j_{1},i)^{\epsilon},(j_{2},i+1)^{-}\in\sigma$ and $j_{1}<j_{2}<i$, then
since the $j_{2}$-domino is inserted vertically into the first column of
$\mathbf{T}_{1}$, the $(i+1)$-domino is again forced to lie below the
$i$-domino, as in the previous case.
9. (9)
If $(j_{2},i)^{+},(j_{1},i+1)^{-}\in\sigma$ and $j_{1}<j_{2}$, then again the
$(i+1)$-domino winds up below the $i$-domino, similarly to the previous case.
10. (10)
If $(i+1,k_{1})^{\epsilon},(i,k_{2})^{+}\in\sigma$ and $i+1<k_{1}<k_{2}$,
then either the $i$-domino bumps the $(i+1)$-domino, or else the $i$-domino
winds up horizontal in the first double row, while the $(i+1)$-domino winds up
vertical in the first column, in either case lying below the $i$-domino.
11. (11)
If $(i,k_{1})^{-},(i+1,k_{2})^{-}\in\sigma$ and $i+1<k_{1}<k_{2}$, then the
$(i+1)$-domino is inserted below the $i$-domino.
12. (12)
If $(i,k_{1})^{+},(i+1,k_{2})^{-}\in\sigma$ and $i+1<k_{1}<k_{2}$, then as
above the $i$-domino winds up in the first double row while the $(i+1)$-domino
winds up in the first column and lies below the former.
This exhausts all cases where $\alpha\in\tau(\sigma)$. Now suppose the
contrary. The cases where $\alpha=2e_{1}$ are easily dealt with, so assume
that $\alpha=e_{i+1}-e_{i}$.
1. (1)
If $\alpha$ is noncompact imaginary, so that
$(i,\epsilon),(i+1,-\epsilon)\in\sigma$, then the $(i+1)$-domino is inserted
either next to the $i$-domino or in a higher row.
2. (2)
If $(j,i)^{-},(i+1,\epsilon)\in\sigma$ and $j<i$, then either the
$(i+1)$-domino is added vertically at the end of the first double row, or
there is at least one double row of odd length ending in $-\epsilon$ in
$\mathbf{T}_{1}$, whence the $(i+1)$-domino is added to a row not below the
$i$-domino.
3. (3)
If $(i,\epsilon),(i+1,k)^{+}\in\sigma$ and $i+1<k$, then the
$(i+1)$-domino is inserted horizontally into the first row and cannot lie
below the $i$-domino.
4. (4)
If $(i+1,\epsilon),(i,k)^{-}\in\sigma$ and $i<k$, then the $(i+1)$-domino is
inserted vertically into the first column and cannot lie below the $i$-domino.
5. (5)
If $(j,i)^{\epsilon},(i+1,m)^{+}\in\sigma$ and $j<i$, then the $(i+1)$-domino
is inserted horizontally into the first row and cannot lie below the
$i$-domino.
6. (6)
If $(j_{1},i)^{\epsilon},(j_{2},i+1)^{+}\in\sigma$ and $j_{1}<j_{2}<i$, then
either the $(i+1)$-domino is inserted vertically into a double row no further
down than the one in which the $i$-domino appears, or the $i$-domino is in the
first double row and the $(i+1)$-domino is also inserted into this double row,
not directly below the former. In both cases the $(i+1)$-domino is not below
the $i$-domino.
7. (7)
If $(j_{2},i)^{-},(j_{1},i+1)^{+}\in\sigma$ and $j_{1}<j_{2}<i$, then the
$i$-domino is in the lowest double row of $\mathbf{T}_{1}$ and the
$(i+1)$-domino cannot lie below it.
8. (8)
If $(j_{2},i)^{-},(j_{1},i+1)^{-}\in\sigma$ and $j_{1}<j_{2}<i$, then the
$i$-domino is in the lowest double row of $\mathbf{T}_{1}$ and the $(i+1)$-domino
is not below this double row.
9. (9)
If $(i,k_{1})^{\epsilon},(i+1,k_{2})^{+}\in\sigma$ and $i+1<k_{1}<k_{2}$, then
the $(i+1)$-domino is inserted in the first double row of $\mathbf{T}_{1}$ and
does not lie below the $i$-domino.
10. (10)
If $(i+1,k_{1})^{-},(i,k_{2})^{-}\in\sigma$ and $i+1<k_{1}<k_{2}$, then the
$i$-domino bumps the $(i+1)$-domino vertically and does not wind up below the
latter.
11. (11)
If $(i+1,k_{1})^{+},(i,k_{2})^{-}\in\sigma$ and $i+1<k_{1}<k_{2}$, then the
$(i+1)$-domino is inserted into the first double row and cannot lie below the
$i$-domino.
This exhausts all cases and concludes the proof.
∎
## $H$ commutes with $T_{\alpha\beta}$
Again following [5], we complete our program of showing that the map $H$
computes annihilators by showing that it commutes with wall-crossing
operators.
###### 8.1. PROPOSITION.
Let $\alpha,\beta$ be nonorthogonal simple roots and let
$\sigma\in\mathcal{S}_{n,p}$ lie in the domain of the operator
$T_{\alpha\beta}$. Then
$H(T_{\alpha\beta}(\sigma))=T_{\alpha\beta}(H(\sigma))$.
###### Proof.
As in [5] we enumerate all ways in which $\sigma$ can lie in the domain of
$T_{\alpha\beta}$ and check that the conclusion holds in all cases. Let
$\mathbf{T}_{1},\mathbf{T}_{2}$ respectively denote the domino tableau and a
representative of the class of signed tableaux attached to $\sigma$ by the
algorithm.
Suppose first that $\{\alpha,\beta\}=\{2e_{1},e_{2}-e_{1}\}$. If
$(1,2)^{+}\in\sigma$, then $F_{2}\subseteq\mathbf{T}_{1}$ and
$T_{\alpha\beta}(\sigma)$ consists of the two involutions
$\sigma_{1},\sigma_{2}$ obtained from $\sigma$ by replacing $(1,2)^{+}$ by
$(1,+),(2,-)$ and $(1,-),(2,+)$ in turn. Clearly the domino tableaux attached
to $\sigma_{1},\sigma_{2}$ are both equal to the tableau
$\mathbf{T}_{1}^{\prime}$ defined in Definition 6.1 (1). The signed tableaux
attached to these involutions at the second step of the algorithm are not
equivalent at that step, whence by the algorithm they remain inequivalent at
its end. Hence $H(\sigma_{1})\neq H(\sigma_{2})$, as desired. The cases where
$(1,+),(2,-)\in\sigma$ or $(1,-),(2,+)\in\sigma$ are similar. Finally, in the
case where at least one of the indices 1 and 2 is paired with another index
but 1 and 2 are not paired with each other, one clearly moves the 1- and
2-dominos in $\mathbf{T}_{1}$ in the desired fashion, whence one can check
that if any other dominos move, they are the ones in the closed cycle
containing the 2-domino of $\mathbf{T}_{1}$ and in fact the two domino
tableaux produced are those specified by Definition 6.1 (cf. [3,
2.2.9,2.3.7]). If the 2-domino of $\mathbf{T}_{1}$ does not lie in a closed
cycle, then only one domino tableau is produced, which again agrees with that
given by this definition.
Henceforth we assume that $\alpha=e_{i}-e_{i-1},\beta=e_{i+1}-e_{i}$ for some
$i\geq 2$. Set $\sigma^{\prime}=T_{\alpha\beta}(\sigma)$ and let
$\mathbf{T}_{1}^{\prime},\mathbf{T}_{2}^{\prime}$ respectively denote the
domino tableau and a representative of the class of signed tableaux attached
to $\sigma^{\prime}$ by the algorithm. The cases in our discussion below are
parallel to the corresponding cases in the proof of [5, Proposition 4.2.1].
That proof shows that the desired result holds whenever none of the indices
$i-1,i,i+1$ occurs in an ordered pair $(a,b)^{-}$ in $\sigma$ and
$T_{\alpha\beta}$ does not act on $\mathbf{T}_{1}$ by an $F$-type interchange
in the sense of [3], using [3, 2.1.20,2.1.21] in place of the results in
Section 2.5 of [5]: in all cases either the $(i-1)$- and $i$- or the $i$- and
$(i+1)$-dominos are interchanged in $\mathbf{T}_{1}$, whichever of these has
the desired effect on $\tau$-invariants. Apart from this one changes signs and
moves through open cycles in the same way in the constructions of
$\mathbf{T}_{1},\mathbf{T}_{1}^{\prime}$ and
$\mathbf{T}_{2},\mathbf{T}_{2}^{\prime}$, so that $\mathbf{T}_{2}^{\prime}$ is
equivalent to $\mathbf{T}_{2}$, as desired. If an ordered pair $(a,b)^{-}$
involving one of the indices $i-1,i,i+1$ does occur in $\sigma$, then one
checks directly that $\mathbf{T}_{1}^{\prime}$ is obtained from
$\mathbf{T}_{1}$ by interchanging either the $(i-1)$- and $i$- or the $i$- and
$(i+1)$-dominos, and we may take $\mathbf{T}_{2}^{\prime}=\mathbf{T}_{2}$, as
desired; note that ordered pairs $(a,b)^{-}$ have no analogue in [5]. It only
remains to show that the desired result holds whenever
$T_{\alpha\beta}$ acts on $\mathbf{T}_{1}$ by an $F$-type interchange
(again, there is no analogue of such an interchange in [5]). In each case
below, we indicate how many subcases involve an $F$-type interchange; then the
result follows by a direct calculation in each such subcase. Throughout we
denote by $j,j_{1},j_{2},j_{3}$ indices less than $i-1$ with
$j_{1}<j_{2}<j_{3}$, and similarly by $k,k_{1},k_{2},k_{3}$ indices greater
than $i+1$ with $k_{1}<k_{2}<k_{3}$.
1. (1)
Suppose first that $(i-1,\epsilon),(i,-\epsilon),(i+1,-\epsilon)$ all lie in
$\sigma$ for some sign $\epsilon$, which for definiteness we take to be $+$.
Then $\sigma^{\prime}$ is obtained from $\sigma$ by replacing the terms
$(i-1,+),(i,-)$ by the single term $(i-1,i)^{+}$. Let $\tilde{\sigma}$ consist
of the terms of $\sigma$ involving only indices less than $i-1$ and let
$\tilde{\mathbf{T}}_{1},\tilde{\mathbf{T}}_{2}$ be the domino tableau and a
representative of the class of signed tableaux attached to $\tilde{\sigma}$ by
the algorithm. There are four subcases, according as the top double row of
$\tilde{\mathbf{T}}_{2}$ has even or odd length and ends with $+$ or $-$, but
only one of these has $T_{\alpha\beta}$ acting on $\mathbf{T}_{1}$ by
an $F$-type interchange. One checks directly that the conclusion holds in this
case.
2. (2)
If instead $(i-1,\epsilon),(i,i+1)^{+}\in\sigma$, so that
$(i-1,\epsilon),(i,\epsilon),(i+1,-\epsilon)\in\sigma^{\prime}$, then again
only one subcase out of four has $T_{\alpha\beta}$ acting on $\mathbf{T}_{1}$
by an $F$-type interchange, and the conclusion holds in that case.
3. (3)
If $(j,i-1)^{\epsilon},(i,i+1)^{+}\in\sigma$, so that
$(j,i)^{\epsilon},(i-1,i+1)^{+}\in\sigma^{\prime}$, then no $F$-type
interchange ever takes place.
4. (4)
If $(i-1,i+1)^{\epsilon},(i,k)^{+}\in\sigma$, so that
$(i-1,i)^{\epsilon},(i+1,k)^{+}\in\sigma^{\prime}$, then no $F$-type
interchange ever takes place.
5. (5)
If
$(j,i-1)^{\epsilon},(i,\epsilon^{\prime}),(i+1,\epsilon^{\prime})\in\sigma$,
so that
$(i-1,\epsilon^{\prime}),(j,i)^{\epsilon},(i+1,\epsilon^{\prime})\in\sigma^{\prime}$,
then there are eight subcases, depending as in case 1 on the length parity and
sign at the end of the top double row, and this time also on whether the
$(i-1)$- and $(i+1)$-dominos are bumped into the same double row. Two subcases
involve an $F$-type interchange and the desired result holds in both of them.
6. (6)
If $(i-1,\epsilon),(i,-\epsilon),(j,i+1)^{\epsilon^{\prime}}\in\sigma$, so
that
$(i-1,\epsilon),(j,i)^{\epsilon^{\prime}},(i+1,-\epsilon)\in\sigma^{\prime}$,
then no $F$-type interchange takes place.
7. (7)
If $(i-1,\epsilon),(i+1,\epsilon),(i,k)^{+}\in\sigma$, so that
$(i-1,\epsilon),(i,\epsilon),(i+1,k)^{+}\in\sigma^{\prime}$, then there is one
case where an $F$-type interchange occurs and the result holds in that case.
8. (8)
If $(i-1,\epsilon),(i+1,-\epsilon),(i,k)^{+}\in\sigma$, so that
$(i,\epsilon),(i+1,-\epsilon),(i-1,k)^{+}\in\sigma^{\prime}$, then an $F$-type
interchange always occurs and the result holds in all cases.
9. (9)
If $(j_{1},i-1)^{+},(i,\epsilon),(j_{2},i+1)^{+}\in\sigma$, so that
$(i-1,\epsilon),(j_{1},i)^{+},(j_{2},i+1)^{+}\in\sigma^{\prime}$, then there
is one case where an $F$-type interchange occurs and the result holds in it.
10. (10)
If $(j_{2},i-1)^{+},(i,\epsilon),(j_{1},i+1)^{+}\in\sigma$, so that
$(j_{2},i-1)^{+},(j_{1},i)^{+},(i+1,\epsilon)\in\sigma^{\prime}$, then an
$F$-type interchange never arises.
11. (11)
If $(j,i-1)^{+},(i+1,\epsilon),(i,k)^{+}\in\sigma$, so that
$(j,i)^{+},(i+1,\epsilon),(i-1,k)^{+}\in\sigma^{\prime}$, then an $F$-type
interchange never arises.
12. (12)
If $(i-1,\epsilon),(j,i+1)^{+},(i,k)^{+}\in\sigma$, so that
$(i-1,\epsilon),(j,i)^{+},(i+1,k)^{+}\in\sigma^{\prime}$, then an $F$-type
interchange never arises.
13. (13)
If $(i+1,\epsilon),(i-1,k_{1})^{+},(i,k_{2})^{+}\in\sigma$, so that
$(i,\epsilon),(i-1,k_{1})^{+},(i+1,k_{2})^{+}\in\sigma^{\prime}$, then there are two
subcases where an $F$-type interchange arises and the result holds in both of
them.
14. (14)
If $(i-1,\epsilon),(i+1,k_{1})^{+},(i,k_{2})^{+}\in\sigma$, so that
$(i,\epsilon),(i+1,k_{1})^{+},(i-1,k_{2})^{+}\in\sigma^{\prime}$, then there
are two subcases where an $F$-type interchange arises and the result holds in
both of them.
15. (15)
If $(j_{2},i-1)^{+},(j_{3},i)^{+},(j_{1},i+1)^{+}\in\sigma$, so that
$(j_{2},i-1)^{+},(j_{1},i)^{+},(j_{3},i+1)^{+}\in\sigma^{\prime}$, then no
$F$-type interchange occurs.
16. (16)
If $(j_{1},i-1)^{+},(j_{3},i)^{+},(j_{2},i+1)^{+}\in\sigma$, so that
$(j_{3},i-1)^{+},(j_{1},i)^{+},(j_{2},i+1)^{+}\in\sigma^{\prime}$, then no
$F$-type interchange occurs.
17. (17)
If $(j_{2},i-1)^{+},(j_{1},i+1)^{+},(i,k)^{+}\in\sigma$, so that
$(j_{2},i-1)^{+},(j_{1},i)^{+},(i+1,k)^{+}\in\sigma^{\prime}$, then no
$F$-type interchange occurs.
18. (18)
If $(j_{1},i-1)^{+},(j_{2},i+1)^{+},(i,k)^{+}\in\sigma$, so that
$(j_{1},i)^{+},(j_{2},i+1)^{+},(i-1,k)^{+}\in\sigma^{\prime}$, then no
$F$-type interchange occurs.
19. (19)
If $(j,i+1)^{+},(i-1,k_{1})^{+},(i,k_{2})^{+}\in\sigma$, so that
$(j,i)^{+},(i-1,k_{1})^{+},(i+1,k_{2})^{+}\in\sigma^{\prime}$, then no
$F$-type interchange occurs.
20. (20)
If $(j,i-1)^{+},(i+1,k_{1})^{+},(i,k_{2})^{+}\in\sigma$, so that
$(j,i)^{+},(i+1,k_{1})^{+},(i-1,k_{2})^{+}\in\sigma^{\prime}$, then no
$F$-type interchange occurs.
21. (21)
If $(i+1,k_{1})^{+},(i-1,k_{2})^{+},(i,k_{3})^{+}\in\sigma$, so that
$(i,k_{1})^{+},(i-1,k_{2})^{+},(i+1,k_{3})^{+}\in\sigma^{\prime}$, then no
$F$-type interchange occurs.
22. (22)
If $(i-1,k_{1})^{+},(i+1,k_{2})^{+},(i,k_{3})^{+}\in\sigma$, so that
$(i,k_{1})^{+},(i+1,k_{2})^{+},(i-1,k_{3})^{+}\in\sigma^{\prime}$, then no
$F$-type interchange occurs.
This exhausts all cases and concludes the proof.
∎
###### 8.2. THEOREM.
Let $\sigma\in\mathcal{S}_{n,p}$. Then the first coordinate $\mathbf{T}_{1}$
of $H(\sigma)$ parametrizes the annihilator of the simple Harish-Chandra
module corresponding to $\sigma$ via the classification of [4, Theorem
3.5.11].
###### Proof.
Thanks to Propositions 7.1 and 8.1, we know that the primitive ideal $I$
corresponding to $\mathbf{T}_{1}$ has the same generalized $\tau$-invariant as
the Harish-Chandra module $X$ corresponding to $\sigma$, whence by [4, Theorem
3.5.9] $I$ is indeed the annihilator of $X$, since primitive ideals of trivial
infinitesimal character in type $C$ are uniquely determined by their
generalized $\tau$-invariants. ∎
We also see that, since the wall-crossing operators $T_{\alpha\beta}$ generate
the Harish-Chandra cells for Sp$(p,q)$ [7, Theorem 1], modules in the same
Harish-Chandra cell for this group (and trivial infinitesimal character) have
the same signed tableaux $\mathbf{T}_{2}$ attached to them, up to changing the
signs in double rows whose rows have even length.
## $H$ is a bijection
###### 9.1. THEOREM.
The map $H$ defines a bijection between $\mathcal{S}_{n,p}$ and ordered pairs
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$, where $\mathbf{T}_{1}$ is a
domino tableau with shape a doubled partition of $2(p+q)$ and
$\overline{\mathbf{T}}_{2}$ is an equivalence class of signed tableaux of
signature $(2p,2q)$ and the same shape as $\mathbf{T}_{1}$.
###### Proof.
We first show that any ordered pair
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ as in the hypothesis lies in the
range of $H$, by induction on $p+q$. Assuming that this holds for all pairs
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ if $\mathbf{T}_{1}$ has fewer
than $n=p+q$ dominos, let $\mathbf{T}_{1},\overline{\mathbf{T}}_{2}$ be a pair
with $n$ dominos in $\mathbf{T}_{1}$. Let $\mathbf{T}_{1}^{\prime}$ be
$\mathbf{T}_{1}$ with the $n$-domino removed.
If the $n$-domino in $\mathbf{T}_{1}$ is horizontal and lies in a row of even
length, then the next to last row $R$ of $\mathbf{T}_{1}^{\prime}$ has two
more squares than its last row. By [2, 1.2.13], there is a domino tableau
$\mathbf{T}$ whose shape is that of $\mathbf{T}_{1}^{\prime}$ with the last
two squares removed from $R$ such that inserting a horizontal $i$-domino for a
suitable index $i$ into the first row of $\mathbf{T}$ produces the tableau
$\mathbf{T}_{1}^{\prime}$, or else there is such a tableau $\mathbf{T}$ and an
index $i$ such that inserting a vertical $i$-domino into the first column of
$\mathbf{T}$ produces $\mathbf{T}_{1}^{\prime}$. In either case there is a
pair
$(\mathbf{T}_{1}^{\prime\prime},\overline{\mathbf{T}}_{2}^{\prime\prime})=H(\sigma^{\prime})$
in the range of $H$, and if we add $(i,n)^{+}$ or $(i,n)^{-}$ to
$\sigma^{\prime}$ to get $\sigma$ (the first pair if the $i$-domino is
horizontal, the second if it is vertical), then
$H(\sigma)=(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$, as desired. If instead
the $n$-domino in $\mathbf{T}_{1}$ is horizontal but lies in a row of odd
length, then we can move $\mathbf{T}_{1}^{\prime}$ through a suitable open
cycle to produce a new tableau $\mathbf{T}_{1}^{\prime\prime}$ with shape a
doubled partition such that the shape of $\mathbf{T}_{1}$ differs from that of
$\mathbf{T}_{1}^{\prime\prime}$ by a single vertical domino. We then reduce to the case
covered in the following paragraph.
Now suppose that the $n$-domino in $\mathbf{T}_{1}$ is vertical, so that the
shape of $\mathbf{T}_{1}^{\prime}$ is that of a doubled partition. Let
$\mathbf{T}_{2}$ be a representative of $\overline{\mathbf{T}}_{2}$; assume
for definiteness that the squares in $\mathbf{T}_{2}$ corresponding to those
of the $n$-domino in $\mathbf{T}_{1}$ are labelled $+$. Look at all the double
rows in $\mathbf{T}_{2}$ above the one corresponding to the double row with
the $n$-domino in $\mathbf{T}_{1}$. If every such double row consists of rows
of odd length ending in $+$, then one checks immediately that there is a class
$\overline{\mathbf{T}}_{2}^{\prime}$ such that the pair
$(\mathbf{T}_{1},\overline{\mathbf{T}}_{2}^{\prime})=H(\sigma^{\prime})$ lies
in the range of $H$ and if we add $(n,+)$ to $\sigma^{\prime}$ the resulting
involution $\sigma$ satisfies
$H(\sigma)=(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ as desired. Otherwise,
if the lowest such double row $D$ has rows of odd length and ends in $-$, let
$\tilde{\mathbf{T}}_{1}^{\prime}$ be obtained from $\mathbf{T}_{1}^{\prime}$
by removing the last squares of the rows of $D$. There is a domino tableau
$\mathbf{T}$ with the same shape as $\tilde{\mathbf{T}}_{1}^{\prime}$ and an
index $i$ such that inserting a suitably oriented $i$-domino into $\mathbf{T}$
gives $\mathbf{T}_{1}^{\prime}$. As above there is a class
$\overline{\mathbf{T}}_{2}^{\prime\prime}$ of signed tableaux such that
$(\mathbf{T},\overline{\mathbf{T}}_{2}^{\prime\prime})=H(\sigma^{\prime})$ and
then there is $\sigma$ with
$H(\sigma)=(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$, as desired. If the
lowest such double row has rows of even length ending in $+$, then look at the
open cycle through the largest domino in the corresponding double row. The
argument of the last paragraph produces the desired $\sigma$. If the lowest
such double row $D$ has rows of even length ending in $-$, then look at the
open cycle of $\mathbf{T}_{1}^{\prime}$ through the largest domino in the
corresponding double row. If this open cycle has its hole and corner in
different double rows $D_{1},D_{2}$, then change all signs in these double
rows of $\mathbf{T}_{2}$ and argue as in the previous case. Finally, if this
open cycle has its hole and corner both in $D$, then move through this open
cycle in $\mathbf{T}_{1}^{\prime}$ and argue as in the case where the
$n$-domino in $\mathbf{T}_{1}$ is horizontal. In this case adjoining the
$i$-domino initially produces a domino tableau where the first row of $D$ has
length two more than its second row; moving through the open cycle, as
specified by Definition 5.2 (2), gives $D$ the shape it has in
$\mathbf{T}_{1}$ and then bumping the $n$-domino into the next lower double
row yields $\mathbf{T}_{1}$, as desired.
Now we know that $H$ is surjective. To show that it is injective, it is enough
to show that its domain and range have the same cardinality. To this end, we
appeal to [7]. The cells of Harish-Chandra modules for Sp$(p,q)$ span complex
vector spaces which carry the structure of representations of the Weyl group
$W$ of type $C_{p+q}$. Let ${\mathbf{p}}$ be a doubled partition of $2(p+q)$,
with Lusztig symbol $s$, and let $\pi$ be the corresponding irreducible
representation of $W$. Enumerate the distinct even parts of $\mathbf{p}$ as
$r_{1},\ldots,r_{k}$ and denote by $\mathbf{p}_{1},\ldots,\mathbf{p}_{2^{k}}$
the $2^{k}$ partitions obtained from $\mathbf{p}$ by either replacing the
block $r_{i},\ldots,r_{i}$ of parts of $\mathbf{p}$ equal to $r_{i}$ by
$r_{i}+1,r_{i},\ldots,r_{i},r_{i}-1$ or leaving this block unchanged, for all
$i$ between 1 and $k$. Then the $\mathbf{p}_{i}$ correspond (via their Lusztig
symbols) to the representations in the complex double cell of $\pi$ of
Springer type in the sense of [7]. From this and [7, Corollary 3] it follows
that the number of equivalence classes ${\overline{\mathbf{T}}}_{2}$ of shape
$\mathbf{p}$ relative to a fixed domino tableau $\mathbf{T}_{1}$ of this shape
equals the number of modules in any Harish-Chandra cell $C$ with annihilator
the primitive ideal corresponding to $\mathbf{T}_{1}$, provided that $C$ has
at least one such module. Hence the domain and range of $H$ have the same
cardinality and $H$ is a bijection. ∎
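To illustrate the counting step in the proof, here is a small worked example of our own (the choice of $\mathbf{p}$ is ours, not taken from the text).

```latex
% Illustration (our own example). Take the doubled partition
% p = (4,4,2,2) of 12. Its distinct even parts are r_1 = 4 and
% r_2 = 2, so k = 2 and there are 2^k = 4 partitions p_i: for each
% r_i independently, either leave the block of parts equal to r_i
% unchanged, or replace it by r_i + 1, r_i, ..., r_i, r_i - 1
% (here each block has two parts, so (r_i, r_i) becomes
% (r_i + 1, r_i - 1)).
\[
  \mathbf{p}_{1} = (4,4,2,2), \quad
  \mathbf{p}_{2} = (5,3,2,2), \quad
  \mathbf{p}_{3} = (4,4,3,1), \quad
  \mathbf{p}_{4} = (5,3,3,1).
\]
```

Via their Lusztig symbols, these four partitions index the representations in the complex double cell of Springer type attached to $\mathbf{p}$.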
Fix a signed tableau $\mathbf{T}_{2}^{\prime}$ whose rows of even length all
begin with $+$. It follows that signed involutions $\sigma$ such that the
normalization (in the sense of the paragraph after Definition 5.2) of the
second coordinate of $H(\sigma)$ is $\mathbf{T}_{2}^{\prime}$ correspond
bijectively to modules in a Harish-Chandra cell for Sp$(p,q)$ and that all
such cells (of modules with trivial infinitesimal character) arise in this
way; in particular, and in accordance with [7, Theorem 6], there are as many
such cells as there are nilpotent orbits in $\mathfrak{g}_{0}$. It remains to
show that all modules in the cell corresponding to $\mathbf{T}_{2}^{\prime}$
have associated variety equal to the closure of the corresponding $K$-orbit in
$\mathfrak{p}$ via [1, 9.3.5]. This we will do in the next and final section.
## Associated varieties
Our final result is
###### 10.1. THEOREM.
Let $\sigma\in\mathcal{S}_{n,p}$ correspond to the Harish-Chandra module $Z$.
Then the associated variety of $Z$ is the closure of the $K$-orbit
corresponding to $\mathbf{T}_{2}^{\prime}$, where
$H(\sigma)=(\mathbf{T}_{1},\overline{\mathbf{T}}_{2}),\mathbf{T}_{2}$ is a
representative of $\overline{\mathbf{T}}_{2}$, and $\mathbf{T}_{2}^{\prime}$
is its normalization as defined after Definition 5.2 (obtained from
$\mathbf{T}_{2}$ by changing signs as necessary in all rows of even length so
that they begin with $+$).
###### Proof.
Let $\mathfrak{q}$ be a $\theta$-stable parabolic subalgebra of $\mathfrak{g}$
whose corresponding Levi subgroup of $G$ is Sp$(p^{\prime},q^{\prime})\times
U(p_{1},q_{1})\times\cdots\times U(p_{r},q_{r})$, where the $p_{i},q_{i}$ are
such that $p^{\prime}+\sum_{i}p_{i}=p$ and $q^{\prime}+\sum_{i}q_{i}=q$. There is a simple
derived functor module $A_{\mathfrak{q}}$ of trivial infinitesimal character
whose associated variety is the closure of the Richardson orbit $\mathcal{O}$
attached to $\mathfrak{q}$ in the sense of [8]. The corresponding clan
$\sigma^{\prime}$ is obtained as follows. Its first block of terms corresponds
to the factor Sp$(p^{\prime},q^{\prime})$, taking the form
$(1,+),\ldots,(p^{\prime}-q^{\prime},+),(p^{\prime}-q^{\prime}+1,p^{\prime}-q^{\prime}+2)^{-},\ldots,(p^{\prime}+q^{\prime}-1,p^{\prime}+q^{\prime})^{-}$
if $p^{\prime}\geq q^{\prime}$ or
$(1,-),\ldots,(q^{\prime}-p^{\prime},-),(q^{\prime}-p^{\prime}+1,q^{\prime}-p^{\prime}+2)^{-},\ldots,(q^{\prime}+p^{\prime}-1,q^{\prime}+p^{\prime})^{-}$
if $q^{\prime}>p^{\prime}$. Its next block of terms takes the form
$(m+1,+),\ldots,(m+p_{1}-q_{1},+),(m,m+p_{1}-q_{1}+1)^{+},\ldots,(p^{\prime}+q^{\prime}+1,p^{\prime}+q^{\prime}+p_{1}+q_{1})^{+}$,
if $p_{1}\geq q_{1}$, or
$(m+1,-),\ldots,(m+q_{1}-p_{1},-),(m,m+q_{1}-p_{1}+1)^{+},\ldots,(p^{\prime}+q^{\prime}+1,p^{\prime}+q^{\prime}+p_{1}+q_{1})^{+}$
if $q_{1}>p_{1}$ (where
$m+1=\lfloor(p^{\prime}+q^{\prime}+p_{1}+q_{1}+1)/2\rfloor$); the remaining
blocks of $\sigma^{\prime}$ correspond similarly to the remaining factors
$U(p_{i},q_{i})$. Letting
$H(\sigma^{\prime})=(\mathbf{T}_{1},\overline{\mathbf{T}}_{2})$ and defining
$\mathbf{T}_{2}^{\prime}$ as above, one checks immediately that the orbit
corresponding to $\mathbf{T}_{2}^{\prime}$ is indeed $\mathcal{O}$. More
generally, let $X^{\prime}$ be any simple Harish-Chandra module for
Sp$(p^{\prime},q^{\prime})$ with trivial infinitesimal character and
associated variety $\bar{\mathcal{O}}$. Then there is a simple Harish-Chandra
module $X$ for $G$ obtained from $X^{\prime}$ by cohomological parabolic
induction from $\mathfrak{q}$, whose associated variety is the closure of the
orbit induced from $\mathcal{O}$ in the sense of [8]. Its signed involution
$\sigma(X)$ is obtained from that of $X^{\prime}$ by adding the blocks of
terms corresponding to the $U(p_{i},q_{i})$ factors in the above construction
of $\sigma^{\prime}$, and if the theorem holds for $X^{\prime}$ and its associated
variety, then the same is true for $X$.
Given $Z$ as in the theorem, let $\bar{\mathcal{O}}$ be its associated
variety. If $\mathcal{O}$ is a Richardson orbit, say the one
attached to the $\theta$-stable parabolic subalgebra $\mathfrak{q}$, then the
module $A_{\mathfrak{q}}$ above lies in the same Harish-Chandra cell as $Z$
and the theorem holds for $A_{\mathfrak{q}}$, whence it holds for $Z$. In
general, using [8, Proposition 2.3 (3)] and induction by stages, we can induce
$\mathcal{O}$ to an orbit $\mathcal{O}^{\prime}$ for a higher rank group
$G^{\prime}$ such that all even parts in the partition corresponding to
$\mathcal{O}^{\prime}$ have multiplicity at most $4$, whence
$\mathcal{O}^{\prime}$ is Richardson by [8, Corollary 5.2]. Then the result
holds for the module $Z^{\prime}$ correspondingly induced from $Z$. But now
the orbit $\mathcal{O}$ of Sp$(p,q)$ is the only one inducing to
$\mathcal{O}^{\prime}$ relative to a suitable $\theta$-stable parabolic
subalgebra $\mathfrak{q}$ of Lie $G^{\prime}$ with Levi subgroup having
Sp$(p,q)$ as its only factor of type $C$. It follows that the theorem holds
for $Z$, as desired. ∎
## References
* [1] D. Collingwood and W. McGovern. Nilpotent orbits in semisimple Lie algebras, Chapman and Hall, 1993.
* [2] D. Garfinkle. On the classification of primitive ideals for complex classical Lie algebras, I, Compositio Math., 75:135–169, 1990.
* [3] D. Garfinkle. On the classification of primitive ideals for complex classical Lie algebras, II, Compositio Math., 81:307–336, 1992.
* [4] D. Garfinkle. On the classification of primitive ideals for complex classical Lie algebras, III. Compositio Math., 88:187–234, 1993.
* [5] D. Garfinkle. The annihilators of irreducible Harish-Chandra modules for $SU(p,q)$ and other type $A_{n-1}$ groups, Amer. J. Math., 115:305–369, 1993.
* [6] A. W. Knapp. Lie groups beyond an introduction, Progress in Math.,140, Birkhäuser, Boston, 2002.
* [7] W. McGovern. Cells of Harish-Chandra modules for real classical groups, Amer. J. Math., 120:211–228, 1998.
* [8] P. Trapa. Richardson orbits for real classical groups. J. Alg., 286:361–385, 2005.
* [9] P. Trapa. Leading-term cycles of Harish-Chandra modules and partial orders on components of the Springer fiber, Comp. Math., 143:515–540, 2007.
* [10] D. A. Vogan. Irreducible characters of semisimple Lie groups I, Duke Math. J., 46:61–108, 1979.
* [11] D. A. Vogan. Representations of real reductive Lie groups, Progress in Math., 15, Birkhäuser, Boston, 1981.
# Traveling pulses in Class-I excitable media
Andreu Arinyo-i-Prats1,2 Pablo Moreno-Spiegelberg1 Manuel A. Matías1 Damià
Gomila1 [email protected] 1IFISC (CSIC-UIB), Instituto de Física
Interdisciplinar y Sistemas Complejos, E-07122 Palma de Mallorca, Spain
2 Institute of Computer Science, Czech Academy of Sciences, 182 07 Praha 8,
Czech Republic
###### Abstract
We study Class-I excitable $1$-dimensional media, showing the appearance of
propagating traveling pulses. We consider a general model exhibiting Class-I
excitability mediated by two different scenarios: a homoclinic (saddle-loop)
and a SNIC (Saddle-Node on the Invariant Circle) bifurcation. The distinct
properties of Class-I with respect to Class-II excitability confer unique
properties on traveling pulses in Class-I excitable media. We show how the
pulse shape inherits the infinite period of the homoclinic and SNIC
bifurcations at threshold, exhibiting scaling behaviors in the spatial
thickness of the pulses that are equivalent to the scaling behaviors of
characteristic times in the temporal case.
Excitability is a nonlinear dynamical regime that is ubiquitous in Nature.
Excitable systems, whose unperturbed dynamics is stationary, are characterized
by their response to external stimuli relative to a threshold. Thus, stimuli below
the threshold exhibit linear damping in their return to the fixed point, while
stimuli exceeding the threshold are characterized by a nontrivial trajectory
in phase space before returning to the fixed point. This differentiated
response to external perturbations endows excitable systems with unique
information-processing capabilities, as well as the ability to filter out
noise below the threshold level. Moreover, excitable media, which are spatially extended
systems that locally exhibit excitable dynamics, can propagate information, as
happens in neuronal fibers Keener and Sneyd (1998) or the heart tissue Zykov
(1990).
From a dynamical systems point of view, excitability is typically associated
with the sudden creation (or destruction) of a limit cycle, whose remnant traces
in phase space constitute the excitable excursion (Izhikevich, 2007). The
route (i.e. bifurcation) through which a limit cycle is created or destroyed,
leads to differences in the excitable dynamics. A basic classification of
excitability is in two classes (types), depending on the response to external
perturbations (Izhikevich, 2007). Class-I excitable systems are characterized
by a frequency response that starts from zero, leading to a (theoretically)
infinite response time at the threshold. On the other hand, Class-II excitable
systems are characterized by a frequency response that occurs in a relatively
narrow interval, and thus the response time is bounded. Regarding the
bifurcations that originate these two types of excitable systems, Class-I
excitability occurs in certain bifurcations that involve a saddle fixed point
when creating/destroying a limit cycle, as are the cases of a homoclinic
(saddle-loop) or SNIC (Saddle Node on the Invariant Circle), also known as
SNIPER (Saddle-Node Infinite Period), bifurcations. In turn, Class-II
excitability is mediated by transitions involving a Hopf bifurcation such that
in a relatively narrow parameter range a large-amplitude cycle is created, as is
the case of subcritical Hopf bifurcations (typically close to the transition
from sub- to supercritical Hopf bifurcation) and also the case of a
supercritical Hopf bifurcation followed by a canard, i.e., a sudden growth of
the cycle that sometimes happens in fast-slow systems. This excludes the regular
supercritical Hopf bifurcation, characterized by a gentle growth of the limit
cycle amplitude.
Excitable media, obtained by coupling spatially dynamical systems that are
locally excitable, show different regimes in which local excitations exceeding
a threshold can propagate across the medium Mikhailov (1990); Meron (1992).
Many studies have been carried out in Class-II excitable media, but much less
is known about pulse propagation in the Class-I case. Excitable regimes are
relevant in a number of physical, chemical and biological systems, namely
semiconductor lasers, chemical clocks, the heart, and signal transmission in
neural fibers. $1$-D pulse propagation is also behind nerve impulse
transmission.
In one spatial dimension both pulse propagation and periodic wave train
regimes are found in Class-II (Keener and Sneyd, 1998; Mikhailov, 1990), and
their instabilities have been characterized for representative models
Zimmermann _et al._ (1997); Or-Guil _et al._ (2001). In $2$ and $3$ spatial
dimensions further regimes are reported in Class-II, like spiral waves,
including spiral breakup leading to spatiotemporal chaos (Bär and Or-Guil,
1999) and Winfree turbulence (Alonso _et al._ , 2003). These transitions are
relevant in the study of excitable waves in heart tissue Zykov (1990);
Panfilov (1998); Bär (2019), where they are associated with certain pathologies.
Class-I excitability is much less studied and appears in models of population
Baurmann _et al._ (2007); Iuorio and Veerman (2020) or neural Pietras _et
al._ (2019) dynamics, and evidence of pulse propagation has been found in
seagrasses Ruiz-Reynés (2019). Class-I pulse propagation has also recently
been studied in arrays of coupled semiconductor lasers Alfaro-Bittner _et
al._ (2020). The different properties of Class-I and Class-II excitability,
especially the divergence of the period at threshold, can significantly modify
the properties of spatiotemporal structures in excitable media. In this Letter
we characterize traveling pulses in Class-I excitable media and show their
distinct properties and instabilities.
To address this problem we propose a general model based on the normal form of
a codimension-$3$ bifurcation Dumortier _et al._ (2006) which is the simplest
continuous model one can write with Class-I excitable behavior that can be
accessed either through a homoclinic (saddle-loop) or a SNIC bifurcation. To
this normal form we add $1$-D diffusion to study spatial propagation:
$\partial_{t}u=v+D_{11}\partial_{xx}u$ (1)
$\partial_{t}v=\varepsilon_{1}u^{3}+\mu_{2}u+\mu_{1}+v(\nu+bu-\varepsilon_{2}u^{2})+D_{22}\partial_{xx}v\ .$
We choose $\varepsilon_{1}=\varepsilon_{2}=-1$ to ensure asymptotic stability,
and in the present work we fix the parameters $\nu=1$, $b=2.4$ and
$D_{11}=D_{22}=1$, considering $\mu_{1}$ and $\mu_{2}$ as control parameters.
Figure 1: Simplified phase diagram of the temporal system
($\partial_{xx}u=\partial_{xx}v=0$) in the parameter space
($\mu_{1},\mu_{2}$). The diagram is organized by three main codimension-$2$
points: i) a Cusp (blue dot), where the Saddle-Node bifurcation lines (blue)
meet; ii) a Saddle-Node Separatrix Loop bifurcation (SNSL-, red dot), at which
a SNIC bifurcation line (blue dot-dashed line) and a homoclinic bifurcation
line of a stable cycle ($L_{-}$, red dashed line) join; and iii) a bifurcation
(DL-, red triangle) at which $L_{-}$ meets a homoclinic bifurcation of an
unstable cycle ($L_{+}$, red solid line). The dotted end of $L_{+}$ indicates
that the line continues (not shown) but is not relevant for this work.
The dynamical regimes exhibited by the temporal system (local dynamics), which
is the system that describes the time evolution of homogeneous solutions
($\partial_{xx}u=\partial_{xx}v=0$), are shown in Fig. 1, where only the most
relevant transitions for our study are shown. The fixed points of this temporal
system have $v^{\star}=0$, with $u^{\star}$ determined by the solutions of the
cubic equation formed by the first $3$ terms of the second equation in (1),
which corresponds to the normal form of the cusp codimension-$2$ bifurcation.
The two blue lines are saddle-node bifurcations that mark the boundary between
the inside region with $3$ real roots and the outside region with $1$ real
root plus a pair of complex conjugate ones; these $2$ lines join at a cusp
point at $\mu_{1}=\mu_{2}=0$ (blue circle in Fig. 1) with a triply degenerate
root.
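The counting of real roots of this cubic can be checked directly with standard root finding; a small sketch (Python/NumPy, not from the paper; $(\mu_1,\mu_2)=(0.3,1.0)$ are the Fig. 2 values, the second pair is an illustrative choice outside the cusp region):

```python
import numpy as np

def fixed_points(mu1, mu2, eps1=-1.0):
    """Real roots of eps1*u^3 + mu2*u + mu1 = 0; fixed points are (u*, v* = 0)."""
    r = np.roots([eps1, 0.0, mu2, mu1])
    return np.sort(r[np.abs(r.imag) < 1e-9].real)

print(len(fixed_points(0.3, 1.0)))    # 3: inside the cusp region
print(len(fixed_points(0.3, -1.0)))   # 1: outside, one real root and a complex pair
```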
There are other codimension-$2$ points, in addition to the cusp, that organize
the scenario of interest to our study. One of them is the SNSL- point (Saddle-
Node Separatrix Loop) Schecter (1987), (red circle in Fig. 1), that is
characterized by a nascent (i.e. with a zero eigenvalue) homoclinic (saddle
loop) bifurcation. It is precisely from this SNSL- point that the two
principal boundaries of the Class-I excitability region emerge: a SNIC line
(blue dot-dashed line), going upwards, and a homoclinic (saddle-loop) line
(red dashed line), $L_{-}$, going downwards.
Another relevant codimension-$2$ point is the DL- (red triangle upside down),
representing a homoclinic (saddle-loop) bifurcation to a neutral (resonant)
saddle (also known as a resonant side switching point Champneys and Kuznetsov
(1994), characterized by the fact that the absolute values of the leading
eigenvalues of the saddle are equal). It implies a transition between $L_{-}$
(which involves a stable cycle) and $L_{+}$ (red line, which involves an
unstable cycle), and it also leads to the emergence of a fold-of-cycles
bifurcation line, not shown in Fig. 1 for simplicity. The left SN line, both
saddle-loop bifurcations ($L_{-}$ and $L_{+}$) and the SNIC curve delimit the
Class-I excitable region (shaded grey area in Fig. 1). In this region a
perturbation around the lower fixed point (black dot in Fig. 2c) that crosses
the threshold, i.e. the stable manifold of the middle fixed point (cross in
Fig. 2c), will trigger an excitable trajectory that comes back to the lower
fixed point [panels a) and c) in Fig. 2].
The two above-mentioned bifurcation lines, SNIC and $L_{-}$, are the ones of
special interest to our study, as they mediate two different ways of entering
the Class-I excitable region, cf. (Gaspard, 1990; Jacobo _et al._ , 2008),
and this will be reflected in the behavior of the pulses considered below.
Furthermore, there are other codimension-$2$ points that are not relevant for
our analysis P. Moreno-Spiegelberg and et al. (2021).
Figure 2: Comparison of the temporal dynamics of an excitable excursion and
the spatial dynamics of a $1$-D pulse sustained by this excitable dynamics. a)
temporal excitable trajectory of $u$ and $v$ (spatially homogeneous system)
starting from an initial condition just above the saddle point; c)
representation of the same excitable excursion on the $(u,v)$ phase space; b)
stable $1$-D pulse as a function of the spatial coordinate in the moving
reference frame; d) the same pulse in the $(u,v)$ (sub)phase space. Dot,
cross, and circle in c) and d) indicate lower, middle and upper fixed points
respectively. Here $\mu_{1}=0.3$ and $\mu_{2}=1.0$.
By initializing the system with a strong enough localized perturbation around
the lower homogeneous solution, a pair of solitary (traveling) pulses that
propagate with fixed shape and constant, opposite velocities is generated.
One such pulse, the one moving to the left, is shown in Fig. 2b). A
convenient way of characterizing this pulse is using a moving reference frame,
$\xi=x-ct$, where $c$ is the velocity of the pulse yet to be determined. In
this coordinate system the partial differential equations (1) become ordinary
differential equations, and in our case we get,
$du/d\xi=u_{\xi}\qquad;\qquad dv/d\xi=v_{\xi}$ (2)
$du_{\xi}/d\xi=-(v+c\,u_{\xi})$
$dv_{\xi}/d\xi=u^{3}-\mu_{2}u-\mu_{1}-v(1+bu+u^{2})-c\,v_{\xi}\ .$
Trajectories of this system describe stationary solutions of (1) in the
reference frame moving with velocity $c$ Jaïbi _et al._ (2020). Only bounded
trajectories have a physical meaning. In particular, excitable pulses are
represented in this system as homoclinic trajectories originating from the
lower fixed point (panels b and d in Fig. 2). $c$ is computed numerically
simultaneously with the field profiles, and it varies weakly with parameters
in the excitable region.
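A homoclinic orbit requires the lower fixed point of the $4$-dimensional spatial system (2) to be a saddle, with eigenvalues on both sides of the imaginary axis. This is easy to check from the Jacobian of (2); a sketch (Python/NumPy) with the Fig. 2 parameters and an illustrative guess $c=1$ (the actual velocity is computed numerically together with the profile, as stated above):

```python
import numpy as np

mu1, mu2, b, c = 0.3, 1.0, 2.4, 1.0   # c = 1 is an illustrative guess

# Lower fixed point of the local dynamics: smallest real root of -u^3 + mu2*u + mu1 = 0.
roots = np.roots([-1.0, 0.0, mu2, mu1])
u0 = min(r.real for r in roots if abs(r.imag) < 1e-9)

# Jacobian of the spatial system (2) at (u0, 0, 0, 0), ordering (u, v, u_xi, v_xi).
J = np.array([
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, -1.0, -c, 0.0],
    [3.0 * u0**2 - mu2, -(1.0 + b * u0 + u0**2), 0.0, -c],
])
ev = np.linalg.eigvals(J)
print(np.sort(ev.real))   # real parts on both sides of zero: a saddle(-focus)
```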
Figure 3: Phase diagram showing the bifurcation lines of $1$-D pulses.
Traveling pulses are stable in the green region. The main instabilities
discussed in this work are the Heteroclinic I (black line) and the SNIC (blue
dot-dashed line). The other lines that bound the stability region are the
Heteroclinic II and the Hopf of pulses (not discussed in this work). The SN,
$L_{-}$ and $L_{+}$ lines are shown in order to compare the diagram with
respect to Fig. 1. The $\times$, $*$ and $+$ symbols mark the parameter values
studied in Figs. 2, 4, and 6 respectively.
Although the dynamical system describing temporal dynamics for homogeneous
solutions and the spatial dynamical system (2) are different, one may observe
important similarities in their solutions. Roughly speaking, a traveling pulse
somehow transcribes the temporal dynamics in space, such that the spatial
profile of the pulse resembles the excitable trajectory in time. Thus, Fig.
2(a) shows an excitable (open) trajectory in the temporal dynamics, while Fig.
2(b) shows an excitable pulse in the spatial dynamics (a homoclinic orbit), for
the same parameter values. In Figs. 2c) and 2d) the trajectories from Fig. 2a)
and 2b) are represented in the $(u,v)$ phase space respectively. The
similarity of panels c) and d) of Fig. 2 anticipates the results presented in
this work.
Next we analyze how the infinite period bifurcations leading to Class-I
excitability in the temporal system, namely the homoclinic and SNIC
bifurcations, affect the shape of the traveling pulses. To do so we study the
domain of stability of pulses in the $(\mu_{1},\mu_{2})$ parameter space,
shown in Fig. 3, where the cusp and saddle-node lines of Fig. 1 are also
included in the diagram for comparison. This domain is delimited by several
bifurcations at which the pulse is destroyed or made unstable: Heteroclinics I
and II, SNIC, and Hopf of pulses. Here we focus on the Heteroclinic I and SNIC
bifurcations, which are connected to the $L_{-}$ (homoclinic) and the SNIC
bifurcations of the temporal system respectively.
Let us first consider the Heteroclinic I curve, represented as a black line in
Fig. 3. Approaching this bifurcation the pulse shape changes drastically,
generating a plateau at the value of the middle (saddle) homogeneous solution
(Fig. 4b). As the spatial trajectory approaches the saddle point (through its
stable manifold) there is a slowing down of the spatial dynamics, inherited
from the temporal homoclinic (Fig. 4a), that manifests as a plateau in the
spatial profile. The plateau becomes clearer the closer one is to the
bifurcation (black line). Fig. 4c) shows the temporal excitable excursion in
the $(u,v)$ (sub)phase space, where it can be seen that the trajectory gets
closer and closer to the saddle point, marked with a cross. The spatial
counterpart (Fig. 4d) behaves analogously, leading to the formation of a
double heteroclinic at threshold, where the size of the plateau diverges.
This slowing down obeys a characteristic logarithmic scaling law relating the
width of the plateau to the parameter distance to the bifurcation (this
logarithmic scaling law in the spatial coordinate $\xi$ is analogous to the
temporal logarithmic scaling law of the homoclinic (saddle-loop) bifurcation
Gaspard (1990)), as shown in Fig. 5a). The red line represents the scaling
expected from theory, namely the logarithm of the parameter distance divided
by the (independently obtained) leading unstable eigenvalue of the saddle
point, and the agreement is excellent.
The fact that the Heteroclinic I bifurcation curve follows closely the $L_{-}$
and $L_{+}$ lines of the temporal dynamics (Fig. 1) and that the quantitative
scaling for the width of the pulse has the same form as that of a homoclinic
bifurcation in time indicate how the bifurcations of the temporal dynamics
permeate the spatial dynamical description of the pulse, even though the
connection is not straightforward from the equations.
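The scaling test reduces to a linear fit of the measured width against the logarithm of the parameter distance. A schematic version of the procedure with synthetic widths (Python/NumPy; the eigenvalue value and the offset are invented for illustration only):

```python
import numpy as np

lam1 = 0.7                             # stand-in for the leading eigenvalue
dmu = np.logspace(-12, -3, 10)         # parameter distances to the bifurcation
gamma = 25.0 - np.log(dmu) / lam1      # widths obeying the logarithmic law

# Fit width against log-distance; the slope should recover -1/lambda_1.
slope, _ = np.polyfit(np.log(dmu), gamma, 1)
print(slope, -1.0 / lam1)
```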
Figure 4: a) Divergence of the duration of the excitable excursion in the
temporal system approaching the Homoclinic bifurcation, and b) divergence of
the plateau in the pulses approaching the Heteroclinic I bifurcation. Here
$\mu_{2}=0.4$ and $\mu_{1}=\mu_{1c}-\Delta\mu_{1}$ where the homoclinic
bifurcation occurs at $\mu_{1c}=0.07560395587$ for the temporal system, and
the Heteroclinic I at $\mu_{1c}=0.08107876002$ for the spatial system, and
$\Delta\mu_{1}=10^{-3}$ (grey), $\Delta\mu_{1}=10^{-5}$ (green),
$\Delta\mu_{1}=10^{-12}$ (black) in all panels. Panels c) and d) show a zoom
in of the most relevant region of the phase space $(u,v)$ for the temporal and
spatial dynamics respectively. Figure 5: (a) Scaling of the pulse width,
$\gamma$, approaching the Heteroclinic bifurcation. $\gamma$ is defined as the
distance from the point where the pulse departs by $10^{-2}$ from the stable
homogeneous solution to the point where it returns to within the same
distance. The expected scaling is
$\gamma=\frac{1}{\lambda_{1}}\log(|\mu_{1}-\mu_{1c}|)$ (shown in red in the plot),
where $\lambda_{1}$ is the eigenvalue closest to zero of the middle fixed
point in the spatial dynamics. b) Scaling of the eigenvalue, $\lambda$, that
becomes $0$ at the SNIC as a function of the parameter distance to the SNIC
bifurcation. The expected scaling is a power law with exponent $1/2$:
$\lambda\propto\sqrt{|\mu_{1}-\mu_{1c}|}$ (red line).
The second instability of pulses that we consider is the SNIC bifurcation
(blue dash-dotted line in Fig. 3). At this bifurcation a cycle is created
when a saddle and a node collide, namely the lower and middle fixed points.
As the pulse approaches the SNIC, there is a slowing down in the approach to
the stable fixed point in the spatial dynamics, as the spatial eigenvalue
tends to zero too.
Figure 6: Same as in Fig. 4 for the SNIC bifurcation. $u-u_{0}$ is plotted in
panels a) and b), where $u_{0}$ is the bottom stable fixed point. Here
$\mu_{2}=2.0$ and $\mu_{1c}=1.0886621079036347$ with
$\Delta\mu_{1}=-10^{-4},10^{-2},10^{-1}$ for black, green and grey curves
respectively.
In the temporal case the power law manifests in the divergence of the
characteristic time to reach the stable fixed point Jacobo _et al._ (2008).
Analogously, one would expect a power-law scaling of the pulse thickness,
which would diverge at the onset of the bifurcation. However, due to the
exponential approach to the saddle close to the bifurcation, the thickness of
the pulse is not well defined. We have therefore turned to measuring the
approach rate, which is proportional to the leading eigenvalue that becomes
zero at the bifurcation. This scaling is shown in Fig. 5b) and, as expected
for a saddle-node bifurcation, it follows a power law with exponent $1/2$. The
behavior of the system on the other side of the bifurcation corresponds to a
wave train, which in this case is the analog of temporal periodic behavior,
the SNIC thus marking a transition from wave trains to pulses.
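The square-root law can be checked directly on the temporal side, where the same saddle-node mechanism operates for the homogeneous solutions. A sketch (Python/NumPy) using the $\mu_2=2$ saddle-node location quoted in the Fig. 6 caption; note this probes the temporal eigenvalue, whereas the paper measures its spatial counterpart:

```python
import numpy as np

mu2, b = 2.0, 2.4
mu1c = 1.0886621079036347            # saddle-node location at mu2 = 2 (Fig. 6)

def lam_min(mu1):
    # Lower fixed point: smallest real root of -u^3 + mu2*u + mu1 = 0.
    r = np.roots([-1.0, 0.0, mu2, mu1])
    u = float(np.min(r.real))
    # Jacobian of the local dynamics at (u, 0); return the eigenvalue closest to zero.
    J = np.array([[0.0, 1.0],
                  [mu2 - 3.0 * u**2, 1.0 + b * u + u**2]])
    ev = np.linalg.eigvals(J)
    return float(np.min(np.abs(ev)))

dmu = np.array([1e-8, 1e-7, 1e-6])
slope = np.polyfit(np.log(dmu), np.log([lam_min(mu1c - d) for d in dmu]), 1)[0]
print(round(slope, 2))               # close to 0.5, the saddle-node exponent
```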
In conclusion, we have shown the existence of $1$-D pulses in a model with
excitable behavior corresponding to Class-I excitability mediated by two
different bifurcations, SNIC and homoclinic (saddle-loop). We have
characterized the region in parameter space in which the pulse is stable and
two specific instability regimes of pulses: a heteroclinic that occurs for
parameter values close to the temporal homoclinic and the SNIC, both defining
Class-I excitability. These instabilities exhibit at the transition the same
scaling behaviors for the width of the pulses as those found in the period of
the oscillations of the temporal case, namely logarithmic and power-law for
the heteroclinic and SNIC bifurcations respectively, unveiling a profound
relation between temporal systems and the spatiotemporal structures of partial
differential equations. Further instabilities of pulses in this system, marked
as Heteroclinic II and Hopf bifurcations in Fig. 3, are beyond the scope of
the present work and will be studied elsewhere (P. Moreno-Spiegelberg and et
al., 2021).
This work sets the ground for the study and exploration of spatio-temporal
structures in Class-I excitable media, both $1$- and $2$-dimensional, a field
hitherto unexplored.
We acknowledge financial support from FEDER/Ministerio de Ciencia, Innovación
y Universidades - Agencia Estatal de Investigación through the SuMaEco project
(RTI2018-095441-B-C22) and the María de Maeztu Program for Units of Excellence
in R&D (No. MDM-2017-0711). AAiP and PMS have contributed equally to this
work.
## References
* Keener and Sneyd (1998) J. Keener and J. Sneyd, _Mathematical Physiology_ (Springer-Verlag, Berlin, Heidelberg, 1998).
* Zykov (1990) V. S. Zykov, Ann. N. Y. Acad. Sci. 591, 75 (1990).
* Izhikevich (2007) E. M. Izhikevich, _Dynamical Systems in Neuroscience_ (MIT Press, Cambridge (MA), 2007).
* Mikhailov (1990) A. S. Mikhailov, _Foundations of Synergetics. I. Distributed Active Systems_ (Springer, Berlin, 1990).
* Meron (1992) E. Meron, Physics Reports 218, 1 (1992).
* Zimmermann _et al._ (1997) M. G. Zimmermann, S. O. Firle, M. A. Natiello, M. Hildebrand, M. Eiswirth, M. Bär, A. K. Bangia, and I. G. Kevrekidis, Physica D 110, 92 (1997).
* Or-Guil _et al._ (2001) M. Or-Guil, J. Krishnan, I. G. Kevrekidis, and M. Bär, Phys. Rev. E 64, 046212 (2001).
* Bär and Or-Guil (1999) M. Bär and M. Or-Guil, Phys. Rev. Lett. 82, 1160 (1999).
* Alonso _et al._ (2003) S. Alonso, F. Sagués, and A. S. Mikhailov, Science 299, 1722 (2003).
* Panfilov (1998) A. V. Panfilov, Chaos 8, 57 (1998).
* Bär (2019) M. Bär, _Reaction-Diffusion Patterns and Waves: From Chemical Reactions to Cardiac Arrhythmias. In: Tsuji K., Müller S. (eds) Spirals and Vortices. The Frontiers Collection_ (Springer, Cham, 2019).
* Baurmann _et al._ (2007) M. Baurmann, T. Gross, and U. Feudel, J. Theor. Bio. 245, 220 (2007).
* Iuorio and Veerman (2020) A. Iuorio and F. Veerman, bioRxiv (2020), 10.1101/2020.07.29.226522.
* Pietras _et al._ (2019) B. Pietras, F. Devalle, A. Roxin, A. Daffertshofer, and E. Montbrió, Phys. Rev. E 100, 042412 (2019).
* Ruiz-Reynés (2019) D. Ruiz-Reynés, _Dynamics of Posidonia oceanica meadows_ , Ph.D. thesis, Universitat de les Illes Balears (2019).
* Alfaro-Bittner _et al._ (2020) K. Alfaro-Bittner, S. Barbay, and M. Clerc, Chaos 30, 083136 (2020).
* Dumortier _et al._ (2006) F. Dumortier, R. Roussarie, J. Sotomayor, and H. Zoladek, _Bifurcations of planar vector fields: Nilpotent Singularities and Abelian Integrals_ (Springer, Berlin, 2006).
* Schecter (1987) S. Schecter, SIAM J. Math. Anal. 18, 1142 (1987).
* Note (1) Also known as resonant side switching point Champneys and Kuznetsov (1994), characterized by the fact that the absolute value of the leading eigenvalues of the saddle are equal.
* Gaspard (1990) P. Gaspard, J. Phys. Chem. 94, 1 (1990).
* Jacobo _et al._ (2008) A. Jacobo, D. Gomila, M. A. Matías, and P. Colet, Phys. Rev. A 78, 053821 (2008).
* P. Moreno-Spiegelberg and et al. (2021) P. Moreno-Spiegelberg and et al., “Bifurcation structure of traveling pulses in class-i excitable media,” (2021), unpublished.
* Jaïbi _et al._ (2020) O. Jaïbi, A. Doelman, M. Chirilus-Bruckner, and E. Meron, Physica D: Nonlinear Phenomena 412, 132637 (2020).
* Note (2) The logarithmic scaling law in the spatial coordinate $\xi$ is analogous to the temporal logarithmic scaling law of the homoclinic (saddle-loop) bifurcation Gaspard (1990).
* Champneys and Kuznetsov (1994) A. R. Champneys and Y. A. Kuznetsov, Int. J. Bif. Chaos 4, 785 (1994).
# HyperDegrade: From GHz to MHz Effective CPU Frequencies
Alejandro Cabrera Aldaya Billy Bob Brumley
Tampere University, Tampere, Finland
{alejandro.cabreraaldaya,billy.brumley}@tuni.fi
###### Abstract
Performance degradation techniques are an important complement to side-channel
attacks. In this work, we propose HyperDegrade—a combination of previous
approaches and the use of simultaneous multithreading (SMT) architectures. In
addition to the new technique, we investigate the root causes of performance
degradation using cache eviction, discovering a previously unknown slowdown
origin. The slowdown produced is significantly higher than that of previous
approaches, which translates into an increased time granularity for
Flush+Reload attacks. We evaluate HyperDegrade on different Intel
microarchitectures, yielding significant slowdowns that achieve, in select
microbenchmark cases, three orders of magnitude improvement over state-of-the-
art. To evaluate the efficacy of performance degradation in side-channel
amplification, we propose and evaluate leakage assessment metrics. The results
evidence that HyperDegrade increases time granularity without a meaningful
impact on trace quality. Additionally, we designed a fair experiment that
compares three performance degradation strategies when coupled with
Flush+Reload from an attacker perspective. We developed an attack on an
unexploited vulnerability in OpenSSL in which HyperDegrade excels—reducing by
three times the number of required Flush+Reload traces to succeed. Regarding
cryptography contributions, we revisit the recently proposed Raccoon attack on
TLS-DH key exchanges, demonstrating its application to other protocols. Using
HyperDegrade, we developed an end-to-end attack that shows how a Raccoon-like
attack can succeed with real data, filling a missing gap from previous
research.
## 1 Introduction
Side Channel Analysis (SCA) is a cryptanalytic technique that targets the
implementation of a cryptographic primitive rather than its formal
mathematical description. Microarchitecture attacks are an SCA subclass that
focus on vulnerabilities within the hardware implementation of an Instruction
Set Architecture (ISA). While more recent trends exploit speculation [38, 35],
classical trends exploit contention within different components and at various
levels. Specifically for our work, the most relevant is cache contention.
Percival [48] and Osvik et al. [43] pioneered access-driven L1 data cache
attacks in the mid 2000s, then Acıiçmez et al. [3] extended to the L1
instruction cache setting in 2010. Most of the threat models considered only
SMT architectures such as Intel’s HyperThreading (HT), where victim and spy
processes naturally execute in parallel. Yarom and Falkner [60] removed this
requirement with their groundbreaking Flush+Reload technique utilizing cache
line flushing [30], encompassing cross-core attacks in the threat model by
exploiting (inclusive) Last Level Cache (LLC) contention.
In this work, we examine the following Research Questions (RQ).
RQ 1: With respect to SMT architectures, are CPU topology and affinity
factors in performance degradation attacks?
Allan et al. [6] proposed Degrade as a general performance degradation
technique, but mainly as a companion to Flush+Reload attacks. They identify
hot spots in code and repeatedly flush to slow down victims—in the
Flush+Reload case, with the main goal of amplifying trace granularity. Pereida
García and Brumley [49] proposed an alternate framework for hot spot
identification. We explore Section 1 in Section 3 to understand what role
physical and logical cores in SMT architectures play in performance
degradation. Along the way, we discover the root cause of Degrade which we
subsequently amplify. This leads to our novel HyperDegrade technique, and
Section 4 shows its efficacy, with slowdown factors in select microbenchmark
cases remarkably exceeding three orders of magnitude.
RQ 2: Does performance degradation lead to Flush+Reload traces with
statistically more information leakage?
Nowadays, Flush+Reload coupled with Degrade is a standard offensive technique
for academic research. While both Allan et al. [6] and Pereida García and
Brumley [49] give convincing use-case specific motivation for why Degrade is
useful, neither actually show the information-theoretic advantage of Degrade.
Section 5 closes this gap and partially answers RQ 2 by utilizing an existing SCA metric to demonstrate the efficacy of Degrade as an SCA trace amplification technique. We then extend our analysis to our HyperDegrade technique to fully resolve RQ 2. At a high level, it shows HyperDegrade leads
to slightly noisier individual measurements yet positively disproportionate
trace granularity.
RQ 3: Can HyperDegrade reduce adversary effort when
attacking crypto implementations?
Section 5 compared HyperDegrade with previous approaches from a theoretical
point of view. In Section 6 we compare the three approaches from an _applied_
perspective, showing a clear advantage for HyperDegrade over the others.
RQ 4: Can a Raccoon attack (variant) succeed with real
data?
Merget et al. [39] recently proposed the Raccoon attack (e.g. CVE-2020-1968),
a timing attack targeting recovery of TLS 1.2 session keys by exploiting DH
key-dependent padding logic. Yet the authors only model the SCA data and
abstract away the protocol messages. Section 6 answers RQ 4 by developing
a microarchitecture timing attack variant of Raccoon, built upon Flush+Reload
and our new HyperDegrade technique. Our end-to-end attack uses real SCA traces
and real protocol (CMS) messages to recover session keys, leading to loss of
confidentiality. We conclude in Section 7.
## 2 Background
### 2.1 Memory Hierarchy
Fast memory is expensive; therefore, computer system designers place small, fast caches in front of larger, slower main memory to benefit from locality without a huge price increase. A modern microprocessor has several caches (L1,
L2, LLC) forming a cache hierarchy [44, Sect. 8.1.2], the L1 being the fastest
one but smaller and tightly coupled to the processor. Caches are organized in
cache lines of fixed size (e.g., 64 bytes). Two L1 caches typically exist, one
for storing instructions and the other for data. For this work, we are mainly interested in the L1 instruction cache and the remaining cache levels.
When the processor needs to fetch some data (or instructions) from memory, it
first checks if they are already cached in the L1. If the desired cache line
is in the L1, a _cache hit_ occurs and the processor gets the required data
quickly. On the contrary, if it is not in the L1, a _cache miss_ occurs and the
processor tries to fetch it from the next, slower, cache levels or in the
worst case, from main memory. When gathering data, the processor caches it to
reduce latency in future loads of the same data, backed by the principle of
locality [44, Sect. 8.1.5].
### 2.2 Performance Degradation
In contrast to generic CPU monopolization methods like the “cheat” attack by
Tsafrir et al. [55] that exploit the OS process scheduler, several works have
addressed the problem of degrading the performance of a victim using
microarchitecture components [28, 32, 41, 29]. However, in most cases it is
not clear whether SCA-based attackers gain benefits from the proposed
techniques.
On the other hand, Allan et al. [6] proposed a cache-eviction based
performance degradation technique that enhances Flush+Reload attack SCA
signals (traces). This method has been widely employed in previous works to mount SCA attacks on cryptography implementations, for instance RSA [9], ECDSA [7], DSA [50], SM2 [57], AES [17], and ECDH [23].
The performance degradation strategy proposed by Allan et al. [6], Degrade
from now on, consists of an attacker process that causes cache contention by
continuously issuing clflush instructions. It is an unprivileged instruction
that receives a virtual memory address as an operand and evicts the
corresponding cache line from the entire memory hierarchy [1].
This attack applies to shared library scenarios, which are common in many OSs.
This allows an attacker to load the same library used by the victim and obtain a virtual address that points to the same physical address and, thus, the same cache line. Therefore, if the attacker evicts said cache line from the
cache, when the victim accesses it (e.g., executes the code contained within
it), a cache miss will result, thus the microprocessor must fetch the content
from slower main memory.
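The eviction mechanism above can be illustrated with a toy cache model. The following is a minimal Python sketch, not a real microarchitecture attack: the line size and victim address are illustrative, and a set of resident line indices stands in for the real hierarchy.

```python
# Toy model of the Degrade strategy: a victim repeatedly executes code in one
# cache line while an attacker evicts ("flushes") that line between accesses.

LINE_SIZE = 64  # bytes per cache line, as in the text

def line_of(addr):
    """Map a byte address to its cache line index."""
    return addr // LINE_SIZE

class ToyCache:
    def __init__(self):
        self.resident = set()  # line indices currently cached
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        line = line_of(addr)
        if line in self.resident:
            self.hits += 1
        else:
            self.misses += 1
            self.resident.add(line)  # fetched from memory, now cached

    def clflush(self, addr):
        self.resident.discard(line_of(addr))  # evict from the hierarchy

def run(iterations, degrade):
    cache = ToyCache()
    victim_code = 0x5000  # hypothetical address of the victim's hot loop
    for _ in range(iterations):
        cache.access(victim_code)       # victim fetches its loop body
        if degrade:
            cache.clflush(victim_code)  # attacker evicts it again
    return cache

baseline = run(1000, degrade=False)
degraded = run(1000, degrade=True)
```

Without the attacker, only the initial cold miss occurs; with the attacker flushing between accesses, every victim fetch misses and must be served from (simulated) main memory.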
### 2.3 Leakage Assessment
Pearson’s correlation coefficient, Welch’s T-test, Test Vector Leakage
Assessment (TVLA), and Normalized Inter-Class Variance (NICV) are established
statistical tools in the SCA field. Leakage assessment leverages these
statistical tools to identify leakage in procured traces for SCA. A short
summary follows.
Pearson’s correlation coefficient measures the linear similarity between two
random variables. It is generally useful for leakage assessment [18, Sect.
3.5] and Point of Interest (POI) identification within traces, for example in
template attacks [16] or used directly in Correlation Power Analysis (CPA)
[15]. POIs are the subset of points in an SCA trace that leak sensitive
information.
Welch’s T-test is a statistical measure to determine if two sample sets were
drawn from populations with similar means. Goodwill et al. [25] proposed TVLA
that utilizes the T-test for leakage assessment by comparing sets of traces
with fixed vs. random cryptographic keys and data.
Lastly, Bhasin et al. [10] propose NICV for leakage assessment. It is an
ANalysis Of VAriance (ANOVA) F-test, a statistical measure to determine if a
number of sample sets were drawn from populations with similar variances.
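As a concrete illustration of T-test-based assessment, the following is a minimal Python sketch. The sample values are made up, and the 4.5 threshold is the value commonly used in TVLA practice.

```python
# Hand-rolled Welch's t statistic, as used by TVLA-style leakage assessment:
# compare a set of measurements with fixed input against a set with
# random (or shifted) input at one point in time.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two samples with possibly unequal variances."""
    va, vb = variance(a), variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Toy "trace point" samples: similar means -> |t| small (no evident leak),
# shifted means -> |t| large (a common TVLA threshold is |t| > 4.5).
fixed   = [100, 102, 98, 101, 99, 100]
rand    = [100, 101, 99, 102, 98, 100]
shifted = [110, 112, 108, 111, 109, 110]
```

Real assessments repeat this per point in time over many traces; here a single point suffices to show the mechanics.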
### 2.4 Key Agreement and SCA
Merget et al. [39] recently proposed the Raccoon attack that exploits a
specification-level weakness in protocols that utilize Diffie-Hellman key
exchange. The key insight is that some standards, including TLS 1.2 and below,
dictate stripping leading zero bytes from the shared DH key (session key, or
pre-master secret in TLS nomenclature). This introduces an SCA attack vector
since, at a low level, this behavior trickles down to several measurable time
differences in components like compression functions for hash functions. In
fixed DH public key scenarios, an attacker observes one TLS handshake (the
target) then repeatedly queries the victim using a large number of TLS
handshakes with chosen inputs. Detecting shorter session keys through timing
differences, the authors use these inputs to construct a lattice problem to
recover the target session key, hence compromising confidentiality for the
target TLS session.
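The stripping behavior that Raccoon exploits can be sketched as follows. This is an illustrative Python model with toy 32-byte secrets, not real DH; the point is that roughly 1 in 256 secrets loses a byte when stripped and therefore takes a measurably different code path downstream.

```python
# Roughly 1/256 of uniformly random shared secrets start with a zero byte;
# stripping makes those inputs one byte shorter for subsequent hashing,
# which is the timing signal the Raccoon attack detects.
import random

random.seed(1)
SECRET_LEN = 32  # toy secret size in bytes

def stripped_len(secret: bytes) -> int:
    """Length after stripping leading zero bytes, as TLS <= 1.2 dictates."""
    return len(secret.lstrip(b"\x00"))

secrets = [random.randbytes(SECRET_LEN) for _ in range(100_000)]
short = sum(1 for s in secrets if stripped_len(s) < SECRET_LEN)
rate = short / len(secrets)  # expected to be close to 1/256
```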
## 3 HyperDegrade: Concept
The objective of HyperDegrade is to improve performance degradation offered by
Degrade when targeting a victim process, resulting in enhanced SCA traces when
coupled with a Flush+Reload attack. Under a classical Degrade attack, the
degrading process continuously evicts a cache line from the cache hierarchy,
forcing the microprocessor to fetch the cache line from main memory when the
victim needs it.
We first evaluate the efficacy of the Degrade strategy, seeking avenues for improvement. The root cause of Degrade as presented in [6] is that the cache produces more misses during victim execution—we present novel results on this later. Therefore, the ratio of cache misses to executed instructions is a reasonable metric to evaluate its performance.
For this task, we developed a proof-of-concept victim that executes custom
code located in a shared library. This harness receives as input a cache line
index, then executes a tight loop in said cache line several times. Figure 1 shows the assembly code of this loop together with the disassembled code of one cache line.
For our experiments, the number of iterations executed is $2^{16}$ (defined by rsi). Therefore, we expect the number of instructions executed in the selected cache line to be about 1M. Under normal circumstances, every time the processor
needs to fetch this code from memory, the L1 cache should serve it very
quickly.
Assembly source:

.p2align 12
L0:
.rept 64
.rept 6
add $1, %rsi
sub $1, %rsi
.endr
add $1, %rsi
sub $2, %rsi
jz END
jmp *%rdi ; L0
.p2align 6
.endr

Disassembly of one cache line:

5000: add $0x1,%rsi
5004: sub $0x1,%rsi
5008: add $0x1,%rsi
500c: sub $0x1,%rsi
5010: add $0x1,%rsi
5014: sub $0x1,%rsi
5018: add $0x1,%rsi
501c: sub $0x1,%rsi
5020: add $0x1,%rsi
5024: sub $0x1,%rsi
5028: add $0x1,%rsi
502c: sub $0x1,%rsi
5030: add $0x1,%rsi
5034: sub $0x2,%rsi
5038: je 6000 <END>
503e: jmpq *%rdi ; L0

Figure 1: Victim single cache line loop (code and disassembly).
### 3.1 Degrade Revisited
On the Degrade attacker side, we developed a degrading process that loads the
same shared library and continuously evicts the victim executed cache line
using clflush. We use the Linux perf tool to gather statistics about victim
execution under a Degrade attack. For this task, we used the perf (commit
13311e74) FIFO-based performance counters control to sync their sampling with
the victim and degrade processes. perf uses two FIFOs for this task, one for
enabling/disabling the performance counters and another for giving ACKs. The
sync procedure in our measurement tooling is the following:
1. The degrade process executes and blocks until receiving an ACK packet from perf using FIFO A.
2. perf executes with counters disabled (“-D -1” option), using FIFO C for control and A for ACKs. Then it runs taskset, which executes the victim pinned to a specific core.
3. The victim enables the counters by writing to C, then blocks until it receives an ACK from the degrade process using another FIFO.
4. When perf receives the enable-counters command, it sends an ACK using A to the degrade process. When the latter receives the ACK, it forwards it to the victim. When the victim receives this packet, it starts executing its main loop (Figure 1).
5. Once the victim finishes, it disables the counters in perf.
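The handshake above can be modeled as follows. This is a minimal Python sketch in which queues and threads stand in for the named FIFOs and the separate perf, degrade, and victim processes; it is illustrative only, not the real tooling.

```python
# Toy model of the measurement synchronization: A carries perf's ACK to the
# degrade process, B carries the forwarded ACK to the victim, and C is the
# counters control channel into perf.
import threading, queue

A, B, C = queue.Queue(), queue.Queue(), queue.Queue()
events, lock = [], threading.Lock()

def log(e):
    with lock:
        events.append(e)

def perf():
    assert C.get() == "enable"   # victim asks to enable counters
    log("counters_on")
    A.put("ack")                 # ACK goes to the degrade process
    assert C.get() == "disable"  # victim finished
    log("counters_off")

def degrade():
    A.get()                      # blocks until perf's ACK arrives
    B.put("ack")                 # forward the ACK to the victim
    log("degrade_running")       # flushing would start here

def victim():
    C.put("enable")              # enable counters via the control channel
    B.get()                      # wait for the forwarded ACK
    log("victim_loop")           # main loop (Figure 1) runs here
    C.put("disable")

threads = [threading.Thread(target=f) for f in (perf, degrade, victim)]
for t in threads: t.start()
for t in threads: t.join()
```

The ordering guarantees match the text: counters are enabled before the victim's main loop starts, and disabled only after it finishes.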
This procedure considerably reduces measurement tooling overhead, but some
remains. The NoDegrade strategy does not use a degrade process, however we
used a dummy process that follows the FIFO logic to unify the sync procedure
among experiments. We repeated each experiment 100 times, gathering the
average and relative standard deviation. In all reported cases the latter was
less than 4%, therefore we used the average for our analysis. We recorded the
number of L1 instruction cache misses and the number of instructions retired
by the microprocessor. For these experiments, we used the environment setup
Coffee Lake detailed in Table 3.
We collected data while the victim was running standalone (i.e., NoDegrade
strategy) and while it was under Degrade effect. Table 1 shows the results for
each perf event. The number of retired instructions is roughly the same
between both experiments, where the difference from expected (1M) is likely
due to the measurement tooling overhead. Nevertheless, the number of L1
instruction cache misses was 4k for the NoDegrade test and 33k for Degrade.
However, 33k is still far below one cache-miss per executed instruction (1M).
Table 1: NoDegrade and Degrade statistics.
Parameter | NoDegrade | Degrade
---|---|---
inst_retired.any | 1.5M | 1.5M
L1-icache-load-misses | 4,115 | 33,785
### 3.2 The HyperDegrade Technique
In order to increase the performance impact of Degrade, we attempt to maximize
the number of cache misses. For this task, we hypothesized that in an SMT architecture, if the degrade process is pinned to the victim’s sibling core, then the number of cache misses will increase.
According to an expired Intel patent concerning clflush [45], the microarchitectural implementation of this instruction distinguishes whether the flushed cache line is already present in the L1 or not. While it is not explicitly stated in that document, as there is no latency analysis, it is our belief that the flushed cache line would be evicted from the L1 before the other caches, e.g., due to its proximity compared with, for instance, the LLC controller.
Figure 2 illustrates this idea, where the arrows represent clflush actions and
the dashed ones are slower than the others.
Figure 2: Degrade vs HyperDegrade from clflush perspective.
Following this hypothesis, we present HyperDegrade as a cache-evicting degrade strategy that runs on the victim’s sibling core in a microarchitecture with SMT support. From an architecture perspective, it does the same task as Degrade, but in the same physical core as the victim. However, the behavior at the microarchitecture level is quite different because, if our hypothesis is correct, it should produce more cache misses due to the local proximity of the L1. To support this claim, we repeated the previous experiment while pinning the degrade process to the victim’s sibling core.
Table 2 shows the results of HyperDegrade in comparison with the previous
experiment. Note that with HyperDegrade there are about 33x more cache misses (after subtracting NoDegrade cache misses to remove non-targeted code activity) than with Degrade, translating to a considerable increase in the
number of CPU cycles the processor spends executing the victim. At the same
time, the number of observed cache misses increased considerably, approaching
the desired rate. This result, while not conclusive proof, supports our
hypothesis that sharing the L1 with the victim process should produce higher
performance degradation.
Table 2: HyperDegrade improvement.
Parameter | NoDegrade | Degrade | HyperDegrade
---|---|---|---
inst_retired.any | 1.5M | 1.5M | 1.5M
L1-icache-load-misses | 4,115 | 33,785 | 992,074
cycles | 1,252,211 | 12,935,389 | 504,395,314
machine_clears.smc | $<1$ | 28,375 | 983,348
On the other hand, note that the number of CPU cycles increases by an even higher factor (43x), which led us to suspect that another factor was influencing the performance degradation and warranted further investigation. After repeating the experiment for several perf parameters, we found an interesting performance counter that helps explain this behavior.
It is the number of _machine clears_ produced by _self-modifying code_ or SMC
(machine_clears.smc). According to Intel, a _machine clear or nuke_ causes the
entire pipeline to be cleared, thus producing a _severe performance penalty_
[1, 19-112].
Regarding the SMC classification of the machine clear, when the attacker
evicts a cache line, it invalidates a cache line from the victim L1
instruction cache. This might be detected by the microprocessor as an SMC
event.
The machine clears flush the pipeline, forcing the victim to re-fetch some
instructions from memory, thus increasing the number of L1 cache misses due to
the degrade process action. Therefore, it amplifies the effect produced by a
cache miss, because sometimes the same instructions are fetched more than
once.
Moreover, this analysis reveals a previously unknown performance degradation root cause
of both Degrade and HyperDegrade, thus complementing the original research on
Degrade in [6]. The performance degradation occurs due to an increased number
of cache misses and due to increased machine clears, where the latter is
evidenced by the significant increase from zero (NoDegrade) to 28k (Degrade).
Likewise, HyperDegrade increases the number of cache misses and machine
clears, thus, further amplifying the performance degradation produced by
Degrade. This demonstrates that the topology of the microprocessor and the
affinity of the degrade process have significant influence in the performance
degradation impact, answering RQ 1.
We identified SMC machine clears as an additional root cause for both Degrade
and HyperDegrade, however, there could be others. In this regard, we highlight
that our root cause analysis, albeit sound, is not complete. Moreover, achieving such completeness is challenging due to the undocumented nature of the microarchitecture, leaving an interesting direction for future research. Indeed, in concurrent work, Ragab et al. [53] analyze
machine clears in the context of transient execution.
Contention test and pure SMC scenario. For the sake of completeness, we
compared the CPU cycles employed by different experiments using the previous
setup. However, in this case, we vary the number of iterations in the tight
loop over a single cache line. We ranged this value over the set $\{2^{16},2^{17},\ldots,2^{25}\}$. Therefore, the number of instructions executed by the victim will be $\text{victim\_num\_inst} = 16 \times \text{num\_iter}$.
This comparison involves five experiments: one for each degrade strategy, plus
a contention test and a pure SMC scenario (presented later). The contention
test is equivalent to HyperDegrade; however, this time the clflush instruction flushes a cache line _not used_ by the victim. This test allows us to evaluate the performance impact of co-locating a degrade process while it does not modify the victim’s cache state.
Figure 3 visualizes the results of this comparison, where both axes are
$\log_{2}$-scaled. The $y$-axis represents the ratio cycles per
victim_num_inst. It can be appreciated that the contention test and NoDegrade
have very similar performance behavior (i.e., their curves overlap at the
bottom). Hence, the performance degradation of a HyperDegrade process will
only be effective if it flushes a victim cache line, whereas additional
resource contention related to executing the clflush instruction in a sibling
core can be neglected.
Figure 3: Experiments performance comparison ($\log_{2}$-scale).
We included a pure SMC scenario as an additional degrade strategy. Figure 4
illustrates the degrade process core. This code continuously triggers machine
clears due to its self-modifying code behavior. Its position in Figure 3 shows
it has about the same performance degrading power as Degrade. However, this
pure SMC alternative does not depend on shared memory between the victim and
degrade processes, thus the presence of—and finding—hot cachelines [6] is not
a requirement.
mov $0x40, %dil ; 0x40 is part of the opcode at L0
lea L0(%rip), %rcx ; rcx points to L0
L0: mov %dil, (%rcx)
jmp L0
Figure 4: Main SMC degrade process instructions.
Limitations. HyperDegrade offers a significant slowdown wrt. previous
performance degradation strategies. On the other hand, it is tightly coupled
to SMT architectures because it requires physical core co-location with the
victim process. Therefore, it is only applicable to microprocessors with this
feature. In this regard, HyperDegrade has the same limitation as previous
works that exploit SMT [48, 3, 61, 26, 5, 11]. SCA attacks enabled by SMT
often have no target shared library requirement, which is a hard requirement
for Flush+Reload to move to cross-core application scenarios. For example,
neither the L1 dcache spy [48] nor the L1 icache spy [3] require victims
utilizing any form of shared memory on SMT architectures. Yet HyperDegrade
retains this shared library requirement, since our applet is based on clflush
to induce the relevant microarchitecture events. However, since SMT is a
common feature in modern microarchitectures and shared libraries are even more
common, HyperDegrade is another tool on the attacker’s belt for performing
Flush+Reload attacks.
## 4 HyperDegrade: Performance
With our HyperDegrade applet from Section 3, the goal of this section is to
evaluate the efficacy of HyperDegrade as a technique to degrade the
performance of victim applications that link against shared libraries. Section
5 will later explore the use of HyperDegrade in SCA, but here we focus purely
on the slowdown effect. Applied in isolation, HyperDegrade is useful to effectively monopolize the CPU relative to the victim, and also to increase the CPU time billed to the victim for the same computations.
Allan et al. [6, Sect. 4] use the SPEC 2006CPU benchmark suite, specifically
29 individual benchmark applications, to establish the efficacy of their
Degrade technique as a performance degradation mechanism. In our work, we
choose a different suite motivated from several directions.
First, unfortunately SPEC benchmarks are not free and open-source software
(FOSS). In the interest of Open Science, we instead utilize the BEEBS
benchmark suite by Pallister et al. [46, 47], which is freely available (https://github.com/mageec/beebs). The original intention of BEEBS is microbenchmarking of typical embedded applications (sometimes representative) to facilitate device power consumption measurements. Nevertheless, it suits our purposes remarkably well.
These 77 benchmark applications also differ in that they are not built with debug symbols, which the Allan et al. [6] methodology requires. While debug symbols themselves should not affect application performance, they often imply less aggressive compiler optimizations that, in the end, result in less efficient binaries, which might paint an unrealistic picture for performance degradation techniques outside of research environments.
We used the BEEBS benchmark suite off-the-shelf, with one minor modification.
By default, BEEBS statically links the individual benchmark libraries whereas
HyperDegrade (and originally Degrade) target shared libraries. Hence, we added
a new option to additionally compile each benchmark as a shared library and
dynamically link the benchmark application against it.
### 4.1 Experiment
Before presenting and discussing the empirical results, we first describe our
experiment environment. Since HyperDegrade targets HT architectures
specifically, we chose four consecutive chip generations, all featuring HT.
Table 3 gives an overview, from oldest to newest models.
Table 3: Various SMT architectures used in our experiments.
Family | Model | Base | Cores / | Details
---|---|---|---|---
| | Freq. | Threads |
Skylake | i7-6700 | 3.4 GHz | 4 / 8 | Ubuntu 18, 32 GB RAM
Kaby Lake | i7-7700HQ | 2.8 GHz | 4 / 8 | Ubuntu 20, 32 GB RAM
Coffee Lake | i7-9850H | 2.6 GHz | 6 / 12 | Ubuntu 18, 32 GB RAM
Whiskey Lake | i7-8665UE | 1.7 GHz | 4 / 8 | Ubuntu 20, 16 GB RAM
Our experiment consists of the following steps. We used the perf utility to
measure performance, including clock cycle counts. In an initial
profiling step, we exhaustively search (guided by perf metrics) for the most
efficient cache line to target during eviction. We then run three different
tests: a baseline NoDegrade, classical Degrade, and our HyperDegrade from
Section 3. Each test that involves degradation profiles for the target cache
line independently: i.e. the target cache line for Degrade is perhaps not the
same as HyperDegrade. We then iterate each test to gather statistics, then
repeat for all 77 BEEBS benchmarks, and furthermore across the four target
architectures. We used the taskset utility to pin to separate physical cores
in the Degrade case, and same physical core in the HyperDegrade case.
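The aggregation performed in the next subsection can be sketched as follows. This is a minimal Python example; the cycle counts are made-up placeholders, not measurements, and the slowdown factor is simply degraded cycles divided by baseline cycles.

```python
# Per-benchmark slowdown factors and their summary statistics,
# mirroring the columns of Table 4 (Median / Min / Max / Mean / Stdev).
from statistics import median, mean, stdev

def slowdowns(nodegrade_cycles, degrade_cycles):
    """Slowdown factor per benchmark: degraded cycles over baseline cycles."""
    return [d / n for n, d in zip(nodegrade_cycles, degrade_cycles)]

def summarize(factors):
    return {
        "median": median(factors),
        "min": min(factors),
        "max": max(factors),
        "mean": mean(factors),
        "stdev": stdev(factors),
    }

# Placeholder cycle counts for four hypothetical benchmarks.
base = [1000, 2000, 1500, 800]
hyper = [250_000, 700_000, 300_000, 96_000]
stats = summarize(slowdowns(base, hyper))
```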
### 4.2 Results
While Table 8 and Table 9 contain the full statistics per architecture,
strategy, and BEEBS microbenchmark, Table 4 and Figure 5 provide high level
overviews of the aggregate data. Table 4 shows the efficacy of HyperDegrade
over classical Degrade is striking, with median slowdown factors ranging from
254 to 364, and maximum slowdown factors ranging from 1060 to 1349. These
maximum slowdowns are what our title alludes to—for example, in the Skylake
i7-6700 case (maximum), reducing the 3.4 GHz base frequency to a 3.1 MHz
effective frequency when observed from the victim application perspective.
Table 4: Statistics (aggregated from Table 8 and Table 9) for different
performance degradation strategies targeting BEEBS shared library benchmarks,
across architectures.
Family | Method | Median | Min | Max | Mean | Stdev
---|---|---|---|---|---|---
Skylake | Degrade | 11.1 | 1.4 | 33.1 | 13.1 | 8.0
Skylake | HyperDegrade | 254.0 | 10.4 | 1101.9 | 306.3 | 226.7
Kaby Lake | Degrade | 10.6 | 1.4 | 36.5 | 12.0 | 7.5
Kaby Lake | HyperDegrade | 266.4 | 10.2 | 1060.1 | 330.6 | 229.0
Coffee Lake | Degrade | 12.2 | 1.5 | 39.0 | 14.0 | 7.9
Coffee Lake | HyperDegrade | 317.5 | 13.0 | 1143.7 | 382.5 | 246.9
Whiskey Lake | Degrade | 12.5 | 1.5 | 43.9 | 14.4 | 9.2
Whiskey Lake | HyperDegrade | 364.3 | 13.5 | 1349.3 | 435.8 | 280.9
Figure 5 visualizes the aggregate statistics from Table 8 and Table 9. Due to
the magnitude of the slowdowns, the $x$-axis is logarithmic. Please note these
data points are for identifying general trends; the location of individual
points within separate distributions (i.e. different benchmarks) may vary.
Figure 5: Distributions (computed from Table 8 and Table 9) for different
performance degradation strategies targeting BEEBS shared library benchmarks,
across architectures. Note the $x$-axis is logarithmic ($\log_{2}$).
Finally, Table 10 and Table 11 for the PARSEC [12] macrobenchmark suite are
analogous to Table 8 and Table 9 for the BEEBS microbenchmarks. In this case,
the slowdowns have a noticeably smaller magnitude. We attribute the difference
to benchmarking goals. While BEEBS microbenchmarks are typically CPU-bound and
capable of running on bare metal, that is not the case for PARSEC
macrobenchmarks where the focus is parallelism. The combined results from the
two benchmark suites demonstrate that while a typical binary will not
experience a slowdown of three orders of magnitude, microbenchmarks with
small, tight loops usually exhibit more significant slowdowns. This is
convenient since the typical application of performance degradation mechanisms
is in conjunction with side-channel attacks that target such hot spots.
In summary, the empirical data in this section validates the HyperDegrade
concept and answers RQ 1 authoritatively. The data shows a clear
advantage—even reaching three orders of magnitude in select microbenchmark
cases—of HyperDegrade over classical Degrade. Therefore, as a pure performance
degradation mechanism, HyperDegrade outperforms Degrade.
## 5 HyperDegrade: Assessment
Applying the HyperDegrade concept from Section 3, Section 4 subsequently
showed the efficacy of HyperDegrade as a performance degradation technique.
Similar to the classical Degrade technique, we see the main application of
HyperDegrade in the SCA area to improve the granularity of microarchitecture
timing traces. That is the focus of this section.
We first enumerate some of the shortcomings in previous work on performance
degradation. Allan et al. [6, Sect. 5] show that decreasing the Flush+Reload
wait time—while indeed increasing granularity—generally leads to a higher
number of missed accesses concerning the targeted line. This was in fact the
main motivation for their Degrade technique. Applying Degrade [6, Sect. 7],
the authors argue why missed accesses are detrimental to their end-to-end
cryptanalytic attack. While the intuition for their argument is logical, the
authors provide no evidence, empirical or otherwise, that Degrade actually
leads to traces containing statistically more information, which is in fact
the main purpose of performance degradation techniques. The motivation and
intuition by Pereida García and Brumley [49] is similar—albeit with a
different framework for target cache line identification—and equally lacks
evidence.
The goal of this section is to answer RQ 2, rectifying these shortcomings
inspired by information-theoretic methods. We do so by utilizing an
established SCA metric to demonstrate that classical Degrade leads to
statistically more leakage than Flush+Reload in isolation. Additionally, our
HyperDegrade technique further amplifies this leakage.
### 5.1 Experiment
Figure 6 depicts the shared library we constructed to use throughout the
experiments in this section. The code has two functions x64_victim_0 and
x64_victim_1 that are essentially the same, but separated by 512 bytes. The
functions set a counter (r10) from a constant (CNT, in this case 2k), then
proceed through several effective nops (add and sub instructions that cancel),
then finally decrement the counter and iterate.
1200 <x64_victim_0>: 1400 <x64_victim_1>:
1200: mov $CNT,%r10 1400: mov $CNT,%r10
1207: add $0x1,%r10 1407: add $0x1,%r10
120b: sub $0x1,%r10 140b: sub $0x1,%r10
... ...
12e7: add $0x1,%r10 14e7: add $0x1,%r10
12eb: sub $0x1,%r10 14eb: sub $0x1,%r10
12ef: sub $0x1,%r10 14ef: sub $0x1,%r10
12f3: jnz 1207 <x64_victim_0+0x7> 14f3: jnz 1407 <x64_victim_1+0x7>
12f9: retq 14f9: retq
Figure 6: Functions of a shared library (objdump view) used to construct an
ideal victim for our SCA leakage assessment experiments.
We designed and implemented an ideal victim application linking against this
shared library. The victim either makes two sequential x64_victim_0 calls
(“0-0”) or x64_victim_0 followed by x64_victim_1 (“0-1”). We then used the
stock Flush+Reload technique, probing the start of x64_victim_0 (i.e. at hex
offset 1200).
Pinning the victim and spy to separate physical cores, we then procured 20k
traces, in two sets of 10k for each of 0-0 and 0-1, and took the mean of the
sets to arrive at the average trace. Figure 7 (Top) represents these two
baseline Flush+Reload cases with NoDegrade strategy as the two plots on the
far left.
The next experiment was analogous, yet with the classical Degrade strategy. We
degraded two cache lines—one in x64_victim_0 and one in x64_victim_1, both in the
middle of their respective functions. These are the two middle plots in Figure
7 (Top). Here the victim, spy, and degrade processes are all pinned to
different physical cores.
Our final experiment was analogous, yet with our novel HyperDegrade strategy
and pinning the victim and degrade processes to two logical cores of the same
physical core—degrading the same two cache lines—and the spy to a different
physical core. These are the two plots on the far right in Figure 7 (Top).
What can be appreciated in Figure 7 (Top), is that both performance
degradation strategies are working as intended—they are stretching the traces.
The remainder of this section focuses on quantifying this effect. In fact
HyperDegrade stretches the traces to such an extreme that the NoDegrade data
on the far left is scantily discernible in this visualization.
### 5.2 Results
Recalling Section 2.3, NICV is particularly well suited to our purposes,
since it is designed to work with only public data and is agnostic to leakage
models [10]. The latter fact makes NICV pertinent as a metric to compare the
quality of traces [10, Sect. 3]. The metric—in the interval $[0,1]$—is defined
by
$\mathrm{NICV}(X,Y)=\frac{\mathrm{Var}[\mathrm{E}[Y|X]]}{\mathrm{Var}[Y]}$ (1)
with traces $Y$, classes $X$, and $\mathrm{E}$ the expectation (mean). The
square root of the NICV metric, or the correlation ratio, is an upper bound
for Pearson’s correlation coefficient [52, Corollary 8]. Two classes (0-0 and
0-1) suffice for our purposes, simplifying Equation 1 as follows.
$\mathrm{NICV}(X,Y)=\frac{(\mathrm{E}[Y|X=0]-\mathrm{E}[Y|X=1])^{2}}{4\cdot\mathrm{Var}[Y]}$
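The equivalence of the two forms (for balanced classes) is easy to verify numerically; below is a minimal Python sketch with made-up per-point trace values standing in for real Flush+Reload measurements.

```python
# General NICV definition vs. its two-class simplification, assuming
# balanced classes (equally many 0-0 and 0-1 traces).
from statistics import mean, pvariance

def nicv_general(y0, y1):
    """NICV(X,Y) = Var[E[Y|X]] / Var[Y], with |y0| == |y1|."""
    y = y0 + y1
    cond = [mean(y0)] * len(y0) + [mean(y1)] * len(y1)  # E[Y|X] per trace
    return pvariance(cond) / pvariance(y)

def nicv_two_class(y0, y1):
    """Simplified form: (E[Y|X=0] - E[Y|X=1])^2 / (4 * Var[Y])."""
    y = y0 + y1
    return (mean(y0) - mean(y1)) ** 2 / (4 * pvariance(y))

y0 = [5.0, 6.0, 5.5, 6.5]  # e.g. reload latencies for class 0-0
y1 = [8.0, 9.0, 8.5, 9.5]  # and for class 0-1
```

Both functions agree, and the metric stays in $[0,1]$ as the definition requires.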
Figure 7 (Bottom) illustrates applying this metric to the two sets of
measurements for each degrade strategy—baseline NoDegrade, Degrade, and
HyperDegrade—and visualizing the square root, or maximum correlation. With
simple thresholding to identify POIs (i.e., those points that exceed a fixed
value), this leads to the POI statistics in Table 5. To give one extremely
narrow interpretation, with CNT set to 2k in Figure 6, less than 2k POIs
indicates information is being lost, i.e. the victim is running faster than
the spy is capable of measuring. With this particular victim in this
particular environment, it implies neither NoDegrade nor Degrade achieve
sufficient trace granularity to avoid information loss, while HyperDegrade
does so with ease.
In conclusion, this definitively answers RQ 2; the Degrade strategy leads
to statistically more information leakage over stock Flush+Reload due to the
significant POI increase. Similarly, it shows HyperDegrade leads to
significantly more POIs compared to Degrade, but at the same time (on average)
slightly lower maximum correlation for each POI.
Figure 7: Top: averaged traces across different degrade strategies and
different victim execution paths (i.e. classes, 0-0 and 0-1). The legend
corresponds to the plots from left to right. Bottom: the NICV metric’s square
root, or maximum correlation. The legend again corresponds to the plots from
left to right. The plots align and display the same time slice.
Table 5: POI counts and ratios at various NICV thresholds across degrade
strategies (see Figure 7). The ratios (x) are between the different strategies.
Threshold | NoDegrade | Degrade | HyperDegrade
---|---|---|---
0.1 | 233 | 1212 (5.2x) | 13151 (56.4x, 10.9x)
0.2 | 188 | 1149 (6.1x) | 11664 (62.0x, 10.2x)
0.3 | 167 | 1097 (6.6x) | 11159 (66.8x, 10.2x)
0.4 | 147 | 1049 (7.1x) | 10194 (69.3x, 9.7x)
0.5 | 117 | 969 (8.3x) | 6003 (51.3x, 6.2x)
## 6 HyperDegrade: Exploitation
While Section 5 shows that HyperDegrade leads to more leakage due to the
significant increase in POIs, the Figure 6 shared library and the victim
application linking against it are purely synthetic. While this is ideal for
leakage assessment, it does not represent the use of HyperDegrade in a real
end-to-end SCA attack scenario. What remains is to demonstrate that
HyperDegrade applies in end-to-end attack scenarios and that it has a
quantifiable advantage over other degrade strategies wrt. attacker effort.
That is the purpose of this section.
The leak. Recalling Section 2.4, the original Raccoon attack exploits the fact
that Diffie-Hellman as used in TLS 1.2 and below dictates stripping leading
zeros of the shared DH key during session key derivation. The authors note
that _not_ stripping is not foolproof either and can also lead to oracles [39,
Sect. 3.5], pointing at an OpenSSL function that is potentially vulnerable to
microarchitecture attacks [39, Appx. B]. They leave the investigation of said
function—unrelated to TLS—as future work: a gap which this section fills.
Figure 8 shows that function, which is our target within the current (as of
this writing) state-of-the-art OpenSSL 1.1.1h DH shared secret key derivation.
The shared secret is computed at line 36; however, OpenSSL internals strip the
leading zero bytes of this result. Therefore, at line 40 this function checks
if the computed shared secret needs to be padded. Padding is needed if the
number of bytes of the shared secret and the DH modulus differ.
Figure 8: The target vulnerability in OpenSSL 1.1.1h Diffie-Hellman shared key
derivation for our end-to-end attack.
The leakage model. Considering a theoretical leakage model, the binary result
of the line 40 condition leaks whether the shared secret has at least eight
leading zero bits (branch taken) or not (branch not taken). This model—capable
of extracting at most eight bits of information—affects the key sizes that are
in scope. While 2048/256-bit DH parameters are more consistent with current
key size recommendations, the original Raccoon attack [39, Table 3] is unable
to target eight bits of leakage in this setting: the authors explicitly leave
it as an open problem [39, Sect. 6.2]. They instead target legacy
(1024/160-bit) or non-standard (1036/160-bit) DH parameters for an eight-bit
leak. We follow suit, targeting legacy keys (see Section 6.1) for the exact
same reasons.
The victims. Our next task was to identify callers to the Figure 8 code from
the application and protocol levels, since it is unrelated to TLS. We
successfully identified PKCS #7 (RFC 2315 [34]) and CMS (RFC 5652 [33]) as
standards where Figure 8 might apply. We subsequently used the TriggerFlow
tool [27] to verify that OpenSSL’s cms and smime command line utilities have
the Figure 8 function in their call stacks.
### 6.1 Attack Outline and Threat Model
In our end-to-end attack, all message encryptions and decryptions are with
OpenSSL’s command line cms utility. We furthermore assume Alice has a static
DH public key in an X.509 certificate and, wlog., the DH parameters are the
fixed 1024/160-bit variant from RFC 5114 [37]. OpenSSL supports these natively
as named parameters, used implicitly. We carried out all experiments on the
Coffee Lake machine from Table 3.
Our Raccoon attack variant consists of the following steps. (i) Obtain a
target CMS-encrypted message from Bob to Alice. (ii) Based on the target,
construct many chosen ciphertexts and submit them to Alice for decryption.
(iii) Monitor Alice’s decryptions of these ciphertexts with HyperDegrade and
Flush+Reload to detect the key-dependent padding. (iv) Use the resulting
information to construct a lattice problem and recover the original target
session key between Bob and Alice, leading to loss of confidentiality for the
target message. The original Raccoon attack [39] abstracts away most of these
steps, using only simulated SCA data.
Threat model. Our attack makes several assumptions discussed below, which we
borrow directly from the existing literature. (i) Our threat model assumes the
attacker is able to co-locate on the same system with Alice (victim), and
furthermore execute on the same logical and physical cores in parallel to
Alice. See the end of Section 3 for a discussion of this standard assumption.
(ii) We also assume that Alice decrypts messages non-interactively, due to the
number of queries required. This is a fair assumption not only because DH is
literally Non-Interactive Key Exchange (NIKE) [21] from the theory
perspective, but also because CMS (the evolution of PKCS #7) has ubiquitous
use cases, e.g. including S/MIME. Chosen ciphertext decryption is a standard
assumption from the applied SCA literature [22, Sect. 1.1] [23, Sect. 1.4]
[54, Sect. 3]. (iii) We assume the attacker is able to observe one encrypted
message from Bob to Alice. This is a passive variant of the standard Dolev-Yao
adversary [20] that is Man-in-the-Middle (MitM) capable of eavesdropping, and
the exact same assumption from the original Raccoon attack [39, Fig. 1]. Ronen
et al. [54, Sect. 3] call this _privileged network position_ since it is a
weak assumption compared to full MitM capabilities. To summarize, the overall
threat model used by Ronen et al. [54, Sect. 3] is extremely similar to ours
and encompasses all of the above assumptions. The only slight difference is a
stronger notion of co-location in our case—from same CPU to same physical
core.
Case study: triggering oracle decryptions. We briefly explored the non-
interactive requirement discussed above. Specifically, two arenas: automated
email decryption, and automated decryption of certificate-related messages.
Recent changes in Thunderbird (v78+) migrate from the Enigmail plugin to
native support for email encryption and/or authentication (PGP, S/MIME).
Automated, non-interactive decryption for various purposes (e.g., filtering)
appears to be a non-default (yet supported) option
(https://bugzilla.mozilla.org/show_bug.cgi?id=1644085). Quoting from
that thread: “A lot of companies e.g. in the finance sector decrypt the
messages at a central gateway and then forward them internally to the
respective recipient.”
We also found explicit code meeting our non-interactive requirement in the
realm of automated certificate management. (i) The Simple Certificate
Enrollment Protocol (SCEP, RFC 8894 [31]) supports exchanging (public key
encrypted, CMS formatted) confidential messages over an insecure channel, such
as HTTP or generally out-of-band. This is in contrast to the Automatic
Certificate Management Environment (ACME) protocol (RFC 8555 [8], e.g., Let’s
Encrypt [2]), which relies on the confidentiality and authenticity guarantees
of TLS. The open source Apache module mod_scep
(https://redwax.eu/rs/docs/latest/mod/mod_scep.html) dynamically links against
OpenSSL to provide this functionality. (ii) The Certificate Management
Protocol (CMP, RFC 4210 [40])
provides similar functionality (i.e., public key encrypted, CMS formatted
messages) with similar motivations (automated certificate management over
insecure channels). Yet the implementation integrated into upcoming OpenSSL
3.0 does not currently support encrypted protocol messages
(https://www.openssl.org/docs/manmaster/man1/openssl-cmp.html).
### 6.2 Degrade Strategies Compared
This section aims at answering Section 1 by means of comparing three
performance degradation strategies (NoDegrade, Degrade, HyperDegrade) when
paired with a Flush+Reload attack to exploit this vulnerability. We reuse the
following setup and adversary plan later during the end-to-end attack (Section
6.3).
Experiment. We monitor the cache line corresponding to the memmove function
call and its surrounding instructions, i.e. near line 41 of Figure 8. If
memmove is executed, at least two cache hits should be observed: (i) when the
function is called, (ii) then when the function finishes (ret instruction).
Therefore, if two cache hits are observed in a trace _close_ to each other,
that would mean the shared secret was padded, whereas a single cache hit only
detects control flow surrounding line 41 of Figure 8.
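The two-close-hits criterion can be sketched as follows; the trace values, threshold, and distance below are illustrative, not measured data.

```python
def detect_padding(latencies, t, d):
    """Flag a trace as 'padded' when two cache hits (reload latency below
    the threshold t, in cycles) occur within d Flush+Reload samples."""
    hits = [i for i, lat in enumerate(latencies) if lat < t]
    return any(b - a <= d for a, b in zip(hits, hits[1:]))

# Hits at samples 1 and 4 (3 apart): padded for d >= 3, a lone hit for d < 3.
trace = [200, 60, 210, 220, 70, 200]
print(detect_padding(trace, t=100, d=5), detect_padding(trace, t=100, d=2))
```

The threshold `t` and distance `d` are precisely the tunables swept in Table 6.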
We select the first cache line where the function memmove is located as the
degrading cache line. It is the stock, unmodified, uninstrumented memmove
available system-wide as part of the shared C standard library libc. Degrading
during memmove execution should increase the time window the spy process has
to detect the second cache hit (i.e., increase time granularity).
We strive for a _fair_ comparison between the three degradation strategies
during a Flush+Reload attack. It is challenging to develop an optimal attack
for each degradation strategy and even harder to maintain fairness. Therefore,
we developed a single attack plan and swept its parameters in order to provide
a meaningful and objective comparison.
Table 6 summarizes the attack parameters and the explored search space. The
first parameter, $r$, affects trace capturing—it specifies the number of
iterations the Flush+Reload wait loop should iterate. The remaining parameters
belong to the trace processing tooling. The second parameter, $t$, refers to
the threshold in CPU clock cycles used to distinguish a cache hit from a miss.
After some manual trace inspection, we observed this threshold varies between
degradation strategy and Flush+Reload wait time; we decided to add it to the
search space. The last parameter, $d$, specifies the distance (in number of
Flush+Reload samples) between two cache hits to consider them as close.
Table 6: Attack parameter search space.
Parameter | Range
---|---
Flush+Reload wait time ($r$) | $\{128,256\}$
Cache hit/miss threshold ($t$) | $\{50,100,150,200\}$
Cache hits closeness distance ($d$) | $\{1,5,10,\ldots,95\}$
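Enumerating this search space is straightforward; reading the $d$ range as 1 followed by 5 through 95 in steps of 5 (our reading of the table), the sweep covers 160 triplets:

```python
from itertools import product

r_vals = (128, 256)                  # Flush+Reload wait time
t_vals = (50, 100, 150, 200)         # cache hit/miss threshold (cycles)
d_vals = (1, *range(5, 100, 5))      # closeness distance: 1, 5, 10, ..., 95

# Every (r, t, d) triplet evaluated during the sweep.
grid = list(product(r_vals, t_vals, d_vals))
print(len(grid))  # 160
```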
We explore this parameter search space, and for each parameter set—i.e.,
triplet $(r,t,d)$—evaluate the attack performance, estimating the true
positive (TP) and false positive rates (FP). For this task, we generated two
pairs of DH keys (i.e., attacker and victim). We selected one of these pairs
such that the shared secret needs padding after a DH key exchange, while for
the other it does not.
We then captured 1k traces for each key pair, parameter set, and degradation
strategy under consideration and estimated the TP and FP rates. We are
interested in finding which parameter sets lead to more efficient attacks in
terms of number of traces to capture, i.e. number of attacker queries.
Therefore, we focused on those results with zero false positives for the
comparison, making this a best-case analysis for all degradation strategies.
Results. For 1024-bit DH, the lattice-based cryptanalysis requires 173 samples
where padding occurred (explained later). Therefore, the following equation
defines the average number of traces that need to be captured, where
$\Pr[\text{pad}]=1/177\approx 0.00565$ with the fixed RFC 5114 [37]
parameters.
$\text{num\_traces}=173/(\Pr[\text{TP}]\cdot\Pr[\text{pad}])$
Considering the very low probability the leakage occurs and the increased
complexity of lattice-based cryptanalysis in the presence of errors (see [4,
Sect. 6]), reducing the number of traces is important for attack
effectiveness.
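The $1/177$ figure can be reproduced from the top byte of the RFC 5114 1024-bit prime (which begins 0xB10B8F96...): a uniform secret modulo $p$ has at least eight leading zero bits with probability roughly $2^{1016}/p=1/\mathtt{0xB1}$.

```python
# Top byte of the RFC 5114 1024-bit MODP prime (it begins 0xB10B8F96...).
top_byte = 0xB1                     # = 177
# A uniform shared secret in [1, p) has >= 8 leading zero bits with
# probability about 2^1016 / p = 1 / 0xB1.
pr_pad = 1 / top_byte
print(top_byte, round(pr_pad, 5))   # 177 0.00565
```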
Table 7 shows the best parameter set results for each degrade strategy that
could lead to a successful attack. Note that HyperDegrade clearly reduced the
number of required traces to succeed by at least a factor of $3.3$ when
compared with Degrade (the second best performer). This translates into a
considerable reduction in the number of traces: from 181k to 53k. Moreover,
Figure 9 shows there is not just a single parameter set where HyperDegrade
performs better than Degrade, but rather there are $88$ of them. These results
provide evidence that HyperDegrade can perform better than the other two
degrade strategies for mounting Flush+Reload attacks on cryptography
applications, answering Section 1.
Table 7: Best results for degrade strategies.
Strategy | Trace count | Param. set $(r,d,t)$
---|---|---
NoDegrade | 651510 | $(128,1,100)$
Degrade | 181189 | $(256,1,170)$
HyperDegrade | 53721 | $(256,1,170)$
Figure 9: Degrade strategies comparison: how many parameter sets can be used
to mount an attack using $x$-number of traces.
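Plugging the Table 7 trace counts back into the num_traces equation yields the implied true-positive rate for each strategy's best parameter set; a small sanity-check sketch (derived values, not reported in the paper):

```python
# Invert num_traces = 173 / (TP * Pr[pad]) using the Table 7 best-case counts.
pr_pad = 1 / 177           # fixed RFC 5114 1024-bit parameters
needed = 173               # padded samples required by the lattice stage
best = {"NoDegrade": 651510, "Degrade": 181189, "HyperDegrade": 53721}
implied_tp = {s: needed / (n * pr_pad) for s, n in best.items()}
for s, tp in implied_tp.items():
    print(s, round(tp, 3))  # NoDegrade 0.047, Degrade 0.169, HyperDegrade 0.57
```

The ordering mirrors the Section 6.2 conclusion: HyperDegrade's higher implied TP rate is what drives the 3.3x reduction in traces versus Degrade.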
### 6.3 End-to-End Attack Instance
The remainder of this section answers Section 1. We begin with lattice
details, then finish with the results of our end-to-end attack.
Lattice construction. Alice’s public key $g^{a}$ is readily available and the
attacker observes $g^{b}$ from the original target query, along with
ciphertext encrypted under the shared session key $g^{ab}$ (private). Then the
attacker proceeds with chosen queries, crafting ciphertexts $g^{b}g^{r_{i}}$
with random $r_{i}$ and submitting them to Alice for decryption. Alice then
computes $(g^{b}g^{r_{i}})^{a}=g^{ab}\cdot(g^{a})^{r_{i}}$ with the attacker measuring
if padding occurs. This is an instance of the hidden number problem (HNP) by
Boneh and Venkatesan [13]—to recover $\alpha=g^{ab}$ given many
$t_{i}=(g^{a})^{r_{i}}$.
We use the lattice construction by Nguyen and Shparlinski [42] verbatim,
stated here for completeness. Restricting to the $t_{i}$ where padding
occurred, our SCA data tells us $0<\alpha t_{i}\bmod p<p/{2^{\ell}}$, where we set
$\ell=8$ due to the nature of this particular side channel; recall “branch
taken” in Figure 8 says at least the top eight bits are clear. Denoting
$u_{i}=p/2^{\ell+1}$ yields $v_{i}=\lvert\alpha t_{i}-u_{i}\rvert_{p}\leq
p/{2^{\ell+1}}$ where $\lvert x\rvert_{p}$ is signed modulo $p$ reduction
centered around zero. Then there are integers $\lambda_{i}$ where
$\operatorname{abs}(\alpha t_{i}-u_{i}-\lambda_{i}p)\leq p/{2^{\ell+1}}$ holds,
and this is the key observation for lattice attacks; the $u_{i}$ approximate
$\alpha t_{i}$ since they are closer than a random integer modulo $p$.
Consider the rational $(d+1)$-dimensional lattice generated by the rows of the
following matrix.
$B=\begin{bmatrix}2Wp&0&\dots&\dots&0\\ 0&2Wp&\ddots&&\vdots\\ \vdots&\ddots&\ddots&0&\vdots\\ 0&\dots&0&2Wp&0\\ 2Wt_{1}&\dots&\dots&2Wt_{d}&1\end{bmatrix}$
When we set $W=2^{\ell}$, $\vec{x}=(\lambda_{1},\ldots,\lambda_{d},\alpha)$,
$\vec{y}=(2Wv_{1},\ldots,2Wv_{d},\alpha)$, and
$\vec{u}=(2Wu_{1},\ldots,2Wu_{d},0)$ we get the linear relationship
$\vec{x}B-\vec{u}=\vec{y}$. Solving the Closest Vector Problem (CVP) with
inputs $B$ and $\vec{u}$ yields $\vec{x}$, and hence the target session key
$\alpha$. We also use the traditional CVP-to-SVP (Shortest Vector Problem)
embedding by Goldreich et al. [24, Sec. 3.4]. Pereida García et al. [51]
suggest weighting on the average logarithm, hence we set $W=2^{\ell+1}$ in our
$\ell=8$ scenario. Lastly, to set the lattice dimension we use the heuristic
from [58, Sect. 9.1] verbatim, $d=\lceil c\cdot\lg p/\ell\rceil$. With their
suggested confidence factor $c=1.35$ (to improve key bit independence), in our
case it leads to $d=173$; this explains the constant from Section 6.2. We use
the same BKZ block size parameter as [39, Table 3], $\beta=60$.
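A toy instance (small prime, $d=4$, $\ell=3$ instead of the paper's sizes) can check the linear relationship $\vec{x}B-\vec{u}=\vec{y}$ and the bound on the first $d$ coordinates; this sketch is purely illustrative and performs no lattice reduction:

```python
from fractions import Fraction as F
import random

random.seed(2)
# Toy HNP lattice instance: a small prime, d = 4, ell = 3 (the paper uses a
# 1024-bit p, ell = 8, d = 173). We only verify x*B - u = y and |y_i| <= p.
p, ell, d = 1009, 3, 4
W = 2 ** ell
alpha = random.randrange(1, p)                 # the hidden number

# Keep only multipliers t_i with 0 < alpha*t_i mod p < p/2^ell, mimicking
# the "padding occurred" filter from the side channel.
ts = []
while len(ts) < d:
    t = random.randrange(1, p)
    if 0 < (alpha * t) % p < p / W:
        ts.append(t)

u_const = F(p, 2 ** (ell + 1))                 # the constant approximations u_i
lams = [-(alpha * t // p) for t in ts]         # integers absorbing mod-p wraps

# Rows of B: d scaled-modulus rows, then the multiplier row.
B = [[F(2 * W * p) if i == j else F(0) for j in range(d)] + [F(0)]
     for i in range(d)]
B.append([F(2 * W * t) for t in ts] + [F(1)])

x = [F(l) for l in lams] + [F(alpha)]
u = [2 * W * u_const] * d + [F(0)]
y = [sum(x[k] * B[k][j] for k in range(d + 1)) - u[j] for j in range(d + 1)]

# First d entries are bounded by 2W * p/2^(ell+1) = p; the last recovers alpha.
print(all(abs(v) <= p for v in y[:d]), y[d] == alpha)
```

In the real attack this shortness of $\vec{y}$ is what lets BKZ find $\vec{x}$ (and hence $\alpha$) from $B$ and $\vec{u}$ alone.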
Results. Following the results of Section 6.2, we proceeded to capture 60k
traces using HyperDegrade and the parameter set shown in Table 7. Our capture
tooling implements precisely steps (i) to (iii) in Section 6.1: obtaining a
target ciphertext, constructing chosen ciphertexts, and querying the oracle.
In the end, at each capture iteration the attacker only needs to modify a
public key field in an ASN.1 structure to produce a new chosen ciphertext,
then take the measurements while Alice performs the decryption.
Considering the padding probability $\Pr[\text{pad}]=1/177$, the expected
number of padded traces in the $60\text{k}$ set is $339$. After processing
each trace, our tooling detected $3611$ paddings, indicating that, with high
probability, these also contain false positives. Albrecht and Heninger [4]
focus on lattice-based cryptanalysis of ECDSA and suggest adjusting lattice
parameters at the cost of increased computation to compensate for errors. We
instead use a different approach in the DH setting—not applicable in the ECDSA
setting due to its usage of nonces—to counteract those false positives.
To reduce the FP rate, for each trace where we detected padding, we retry the
query seven times and majority vote the result. From the $3611$ traces
detected as padded, only $239$ passed the majority voting. Therefore, the
total number of traces captured was $60\text{k}+7\cdot 3611=85277$.
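The trace accounting above can be checked directly:

```python
captured = 60_000                     # initial HyperDegrade captures
pr_pad = 1 / 177
expected_padded = captured * pr_pad   # genuinely padded traces to expect
flagged = 3611                        # paddings detected, incl. false positives
retries = 7                           # majority-vote re-queries per flagged trace
total = captured + retries * flagged
print(round(expected_padded), total)  # 339 85277
```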
Even with the majority voting, some false positives could remain; we sorted
the $239$ samples by the vote count. Then we selected the highest ranked 173
samples to build the HNP instances. For the sake of completeness, we verified
there were 47 false positives in the $239$ set; however, all had the lowest
vote count of four.
We implemented our lattice using BKZ reduction from fpylll
(https://github.com/fplll/fpylll), a Python wrapper for the fplll C++ library
[19]. We constructed 24 lattice instances from our SCA data, and executed
these in parallel on a 2.1 GHz dual CPU Intel Xeon Silver 4116 (24 cores, 48
threads across 2 CPUs) running Ubuntu 20 with 256 GB memory. The first
instance to recover the session key did so in one hour and five minutes with a
single BKZ reduction. With no abstractions, utilizing real trace data at the
application level, and real protocol messages, our end-to-end attack resolves
Section 1.
OpenSSL disclosure. We contacted the OpenSSL security team to disclose our
results regarding the exploitability of this leak. We also designed,
implemented and tested a fix (https://github.com/openssl/openssl/pull/13772)
that avoids executing the leaky branch. We achieved this by changing the
default behavior of the dh->meth->compute_key function pointer to always
return a fixed-length array (i.e., the public byte length of $p$) in constant
time, ensuring variable pad is zero in Figure 8 (i.e., BN_num_bytes(dh->p) and
rv are equal). Retaining the branch and not simply removing it is due to
backwards compatibility issues with OpenSSL engines [56]. OpenSSL merged our
fix on 10 January 2021, included as of version 1.1.1j.
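Conceptually, the fix amounts to always serializing the shared secret at the public byte length of $p$; the following Python sketch illustrates the idea (it is not OpenSSL's C implementation).

```python
def dh_secret_fixed_length(secret: int, p: int) -> bytes:
    """Return the shared secret as a big-endian buffer whose length is the
    public byte length of p, so no secret-dependent padding branch runs."""
    size = (p.bit_length() + 7) // 8
    return secret.to_bytes(size, "big")

# A secret with eight leading zero bits still yields a full-length buffer.
print(dh_secret_fixed_length(0x00FF, 0xB123).hex())  # 00ff
```

Because the output length depends only on public $p$, the `pad` check in Figure 8 becomes dead code, closing the side channel.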
## 7 Conclusion
HyperDegrade increases performance degradation with respect to the state of
the art. The difference depends on the targeted process, but we achieved
slowdown factors of up to three orders of magnitude in select microbenchmark
cases. In addition to increased cache misses, we discovered that the root
cause of the cache-based performance degradation is the increased number of
machine clears produced when the processor detects a cache line flush from L1
as self-modifying code (Section 1). We analyzed the impact of Degrade and
HyperDegrade on Flush+Reload traces from a theoretical point of view using
leakage assessment tools, demonstrating that HyperDegrade tremendously
increases the number of POIs, which is reflected in increased time granularity
(Section 1).
From an applied perspective, we designed a fair experiment that compares the
three degrade strategies NoDegrade, Degrade, and HyperDegrade when coupled
with a Flush+Reload attack wrt. the number of traces needed to recover a
secret from a cryptography implementation (Section 1). Our resulting data
demonstrates the benefits of HyperDegrade, requiring three times fewer traces
and attacker queries to succeed, the latter being the standard metric in the
applied SCA literature. Regarding cryptography, we answered an open problem
from the recently published Raccoon attack, providing experimental evidence
that such an attack applies with real data (Section 1).
Future work. Our work reinforces existing research directions and illuminates
several new avenues for continued research.
In Section 6.2, we noted how the cache hit threshold varies depending on
various spy parameters. We have also observed this behavior in other
Flush+Reload scenarios outside this work; investigating its root cause is an
interesting direction for future research.
In general, our off-the-shelf applied lattice techniques in Section 6.3, while
serving their purpose for proof-of-concept, are likely not optimal.
Fundamental lattice-based cryptanalysis improvements (e.g., the recent [4])
are beyond the scope of our work, but could reduce dimension and subsequently
attacker queries. Similar to our Raccoon variant, the original Raccoon attack
[39] is unable to target 2048/256-bit DH with eight bits or less of leakage.
The authors leave this as an open problem, and we concur; indeed, improved
lattice methods that compensate for these significantly larger finite field
elements are an interesting research direction.
Several previous studies gather widespread certificate and key usage
statistics. For example, the original Raccoon attack authors gather statistics
for static DH keys in X.509 certificates for TLS 1.2 (and lower)
authentication, and/or ephemeral-static DH keys in TLS 1.2 (and lower) cipher
suites [39, Sect. 7] from public services. Bos et al. [14] gather publicly-
available elliptic curve keys from protocols and services such as TLS, SSH,
Bitcoin, and the Austrian e-identity card; Valenta et al. [59] consider IPSec,
as well. Not specific to any particular public key cryptosystem, Lenstra et
al. [36] gather publicly-available PGP keys and X.509 certificates for TLS 1.2
(and lower) authentication. Along these lines, although it is beyond the scope
of our work, we call for future studies that gather and share S/MIME key usage
statistics, paying particular attention to legal and privacy issues since,
unlike PGP, we are not aware of any general public (distributed)
repositories for S/MIME keys.
Acknowledgments. This project has received funding from the European Research
Council (ERC) under the European Union’s Horizon 2020 research and innovation
programme (grant agreement No 804476). Supported in part by CSIC’s i-LINK+
2019 “Advancing in cybersecurity technologies” (Ref. LINKA20216).
Table 8: BEEBS performance degradation results (cycles, thousands) on Skylake
and Kaby Lake.
| Skylake | Kaby Lake
---|---|---
Benchmark | NoDegrade | Degrade | (x) | HyperDegrade | (x) | NoDegrade | Degrade | (x) | HyperDegrade | (x)
aha-compress | 70583 | 848723 | (12.0x) | 16222580 | (229.8x) | 69754 | 697353 | (10.0x) | 18585074 | (266.4x)
aha-mont64 | 21954 | 463471 | (21.1x) | 12066723 | (549.6x) | 22148 | 356097 | (16.1x) | 11607976 | (524.1x)
bs | 2032 | 8991 | (4.4x) | 171277 | (84.3x) | 2180 | 7137 | (3.3x) | 152106 | (69.8x)
bubblesort | 332386 | 9244976 | (27.8x) | 249121317 | (749.5x) | 305248 | 9197521 | (30.1x) | 229279506 | (751.1x)
cnt | 13237 | 191420 | (14.5x) | 4482733 | (338.6x) | 13488 | 187061 | (13.9x) | 5202446 | (385.7x)
compress | 8878 | 104299 | (11.7x) | 3248479 | (365.9x) | 9001 | 109617 | (12.2x) | 3767946 | (418.6x)
cover | 6982 | 172070 | (24.6x) | 2777996 | (397.9x) | 7165 | 95582 | (13.3x) | 2592937 | (361.9x)
crc | 7671 | 139690 | (18.2x) | 3745933 | (488.3x) | 7839 | 152098 | (19.4x) | 3516855 | (448.6x)
crc32 | 46875 | 1458728 | (31.1x) | 31199559 | (665.6x) | 47004 | 1716899 | (36.5x) | 33339067 | (709.3x)
ctl-stack | 37481 | 528762 | (14.1x) | 11150759 | (297.5x) | 38447 | 577541 | (15.0x) | 15785236 | (410.6x)
ctl-string | 31088 | 981144 | (31.6x) | 14581397 | (469.0x) | 32189 | 799759 | (24.8x) | 17316754 | (538.0x)
ctl-vector | 30742 | 367393 | (12.0x) | 7275579 | (236.7x) | 31266 | 372204 | (11.9x) | 8280240 | (264.8x)
cubic | 33333 | 201498 | (6.0x) | 3240751 | (97.2x) | 30686 | 184586 | (6.0x) | 4497482 | (146.6x)
dijkstra | 1965916 | 39394667 | (20.0x) | 962126944 | (489.4x) | 1979452 | 36038128 | (18.2x) | 1032841427 | (521.8x)
dtoa | 13236 | 76977 | (5.8x) | 1374173 | (103.8x) | 13422 | 77959 | (5.8x) | 1838910 | (137.0x)
duff | 6332 | 60486 | (9.6x) | 2488543 | (393.0x) | 6448 | 56663 | (8.8x) | 1876609 | (291.0x)
edn | 192602 | 4037466 | (21.0x) | 65758984 | (341.4x) | 191705 | 2978126 | (15.5x) | 86795945 | (452.8x)
expint | 29724 | 395738 | (13.3x) | 4513790 | (151.9x) | 30193 | 377277 | (12.5x) | 5138636 | (170.2x)
fac | 4595 | 64388 | (14.0x) | 1719178 | (374.1x) | 4761 | 60222 | (12.6x) | 1815137 | (381.2x)
fasta | 2218271 | 28380647 | (12.8x) | 628593367 | (283.4x) | 2216456 | 33215414 | (15.0x) | 644852892 | (290.9x)
fdct | 7759 | 23118 | (3.0x) | 935889 | (120.6x) | 7863 | 33953 | (4.3x) | 993673 | (126.4x)
fibcall | 2889 | 19083 | (6.6x) | 751337 | (260.0x) | 3054 | 19906 | (6.5x) | 824186 | (269.8x)
fir | 764841 | 18665980 | (24.4x) | 731941919 | (957.0x) | 764893 | 17441064 | (22.8x) | 685505153 | (896.2x)
frac | 12426 | 138238 | (11.1x) | 3326194 | (267.7x) | 12614 | 153789 | (12.2x) | 3927731 | (311.4x)
huffbench | 1400373 | 12636520 | (9.0x) | 185693684 | (132.6x) | 1424397 | 10053959 | (7.1x) | 187339402 | (131.5x)
insertsort | 4381 | 76538 | (17.5x) | 1937214 | (442.1x) | 4518 | 82371 | (18.2x) | 2423159 | (536.3x)
janne_complex | 2443 | 17672 | (7.2x) | 387808 | (158.7x) | 2589 | 15907 | (6.1x) | 403711 | (155.9x)
jfdctint | 11742 | 117469 | (10.0x) | 2469100 | (210.3x) | 11917 | 106261 | (8.9x) | 3063148 | (257.0x)
lcdnum | 2247 | 20278 | (9.0x) | 283506 | (126.1x) | 2419 | 14790 | (6.1x) | 357861 | (147.9x)
levenshtein | 151336 | 3413532 | (22.6x) | 94467885 | (624.2x) | 148115 | 3324556 | (22.4x) | 92479156 | (624.4x)
ludcmp | 8941 | 86599 | (9.7x) | 2064081 | (230.8x) | 9190 | 71764 | (7.8x) | 2182355 | (237.5x)
matmult-float | 67852 | 1760965 | (26.0x) | 45235894 | (666.7x) | 68377 | 1481444 | (21.7x) | 53988799 | (789.6x)
matmult-int | 438567 | 13438448 | (30.6x) | 371621846 | (847.4x) | 444120 | 9045445 | (20.4x) | 424979930 | (956.9x)
mergesort | 519862 | 9598330 | (18.5x) | 179589127 | (345.5x) | 517294 | 8497198 | (16.4x) | 207490953 | (401.1x)
miniz | 3405 | 17987 | (5.3x) | 224863 | (66.0x) | 3547 | 12738 | (3.6x) | 290558 | (81.9x)
minver | 6391 | 53104 | (8.3x) | 1275079 | (199.5x) | 6824 | 44293 | (6.5x) | 1331331 | (195.1x)
nbody | 250992 | 5342357 | (21.3x) | 164185274 | (654.1x) | 253584 | 5439927 | (21.5x) | 162955284 | (642.6x)
ndes | 113555 | 1740050 | (15.3x) | 34799379 | (306.5x) | 119918 | 1563197 | (13.0x) | 53176247 | (443.4x)
nettle-aes | 113306 | 489386 | (4.3x) | 14676852 | (129.5x) | 113123 | 488887 | (4.3x) | 21739421 | (192.2x)
nettle-arcfour | 87327 | 1784265 | (20.4x) | 22378097 | (256.3x) | 87136 | 1515589 | (17.4x) | 25846101 | (296.6x)
nettle-cast128 | 13957 | 23609 | (1.7x) | 198077 | (14.2x) | 13263 | 23692 | (1.8x) | 166519 | (12.6x)
nettle-des | 9179 | 27907 | (3.0x) | 250080 | (27.2x) | 9336 | 30056 | (3.2x) | 202710 | (21.7x)
nettle-md5 | 7482 | 35234 | (4.7x) | 549028 | (73.4x) | 7604 | 29737 | (3.9x) | 732806 | (96.4x)
nettle-sha256 | 14957 | 55160 | (3.7x) | 1459782 | (97.6x) | 15014 | 56483 | (3.8x) | 1314044 | (87.5x)
newlib-exp | 3667 | 23027 | (6.3x) | 344297 | (93.9x) | 3815 | 23708 | (6.2x) | 484350 | (126.9x)
newlib-log | 3176 | 14916 | (4.7x) | 352798 | (111.1x) | 3304 | 15313 | (4.6x) | 417836 | (126.4x)
newlib-mod | 2280 | 10949 | (4.8x) | 203870 | (89.4x) | 2486 | 11338 | (4.6x) | 221979 | (89.3x)
newlib-sqrt | 9448 | 140972 | (14.9x) | 5497784 | (581.8x) | 9608 | 143752 | (15.0x) | 3787532 | (394.2x)
ns | 27810 | 920498 | (33.1x) | 30642549 | (1101.9x) | 24270 | 718504 | (29.6x) | 25730490 | (1060.1x)
nsichneu | 12465 | 17453 | (1.4x) | 129175 | (10.4x) | 12680 | 17799 | (1.4x) | 128992 | (10.2x)
prime | 58915 | 934324 | (15.9x) | 11517642 | (195.5x) | 57920 | 922405 | (15.9x) | 14618565 | (252.4x)
qsort | 3868 | 34437 | (8.9x) | 636532 | (164.5x) | 4014 | 26922 | (6.7x) | 831894 | (207.2x)
qurt | 5456 | 56121 | (10.3x) | 972214 | (178.2x) | 5588 | 59091 | (10.6x) | 1089667 | (195.0x)
recursion | 7983 | 149886 | (18.8x) | 4806422 | (602.0x) | 7365 | 140916 | (19.1x) | 5193849 | (705.1x)
rijndael | 1831062 | 11743208 | (6.4x) | 235853943 | (128.8x) | 1830953 | 9728816 | (5.3x) | 280555490 | (153.2x)
select | 2499 | 18750 | (7.5x) | 286758 | (114.7x) | 2615 | 15980 | (6.1x) | 310185 | (118.6x)
sglib-arraybinsearch | 32695 | 941448 | (28.8x) | 24948746 | (763.1x) | 32073 | 913077 | (28.5x) | 25650916 | (799.7x)
sglib-arrayheapsort | 73813 | 964227 | (13.1x) | 28155013 | (381.4x) | 74316 | 1056981 | (14.2x) | 27101576 | (364.7x)
sglib-arrayquicksort | 35820 | 584648 | (16.3x) | 21360149 | (596.3x) | 35771 | 445614 | (12.5x) | 19761832 | (552.4x)
sglib-dllist | 103228 | 931802 | (9.0x) | 19577829 | (189.7x) | 103490 | 784713 | (7.6x) | 21655708 | (209.3x)
sglib-hashtable | 73515 | 720144 | (9.8x) | 13448462 | (182.9x) | 75750 | 672251 | (8.9x) | 16372567 | (216.1x)
sglib-listinsertsort | 148962 | 4155691 | (27.9x) | 57359384 | (385.1x) | 147850 | 4116964 | (27.8x) | 82759796 | (559.8x)
sglib-listsort | 82649 | 851066 | (10.3x) | 21974191 | (265.9x) | 83363 | 786974 | (9.4x) | 22087737 | (265.0x)
sglib-queue | 79274 | 976908 | (12.3x) | 20131755 | (254.0x) | 79976 | 999905 | (12.5x) | 28994719 | (362.5x)
sglib-rbtree | 194861 | 1795057 | (9.2x) | 39956917 | (205.1x) | 198968 | 1518989 | (7.6x) | 47673755 | (239.6x)
slre | 81356 | 961313 | (11.8x) | 22990277 | (282.6x) | 81068 | 885753 | (10.9x) | 27643530 | (341.0x)
sqrt | 278604 | 5427855 | (19.5x) | 66636453 | (239.2x) | 275896 | 4302655 | (15.6x) | 78476306 | (284.4x)
st | 46255 | 758154 | (16.4x) | 13044156 | (282.0x) | 45667 | 563498 | (12.3x) | 14099803 | (308.7x)
statemate | 5587 | 36137 | (6.5x) | 989900 | (177.2x) | 5768 | 36597 | (6.3x) | 1074219 | (186.2x)
stb_perlin | 170355 | 2261082 | (13.3x) | 64860091 | (380.7x) | 141765 | 2404186 | (17.0x) | 63274557 | (446.3x)
stringsearch1 | 15819 | 91324 | (5.8x) | 4618112 | (291.9x) | 16061 | 102834 | (6.4x) | 4935053 | (307.3x)
strstr | 5200 | 48645 | (9.4x) | 1224891 | (235.5x) | 5354 | 51939 | (9.7x) | 1345225 | (251.2x)
tarai | 2884 | 28585 | (9.9x) | 754962 | (261.7x) | 2965 | 18523 | (6.2x) | 682388 | (230.1x)
trio-snprintf | 16952 | 78512 | (4.6x) | 1100793 | (64.9x) | 17144 | 72171 | (4.2x) | 1273947 | (74.3x)
trio-sscanf | 21499 | 152353 | (7.1x) | 2933134 | (136.4x) | 22001 | 120026 | (5.5x) | 3577032 | (162.6x)
ud | 11215 | 69845 | (6.2x) | 1724508 | (153.8x) | 11106 | 75100 | (6.8x) | 2196806 | (197.8x)
whetstone | 335501 | 2891946 | (8.6x) | 55686230 | (166.0x) | 301591 | 2700492 | (9.0x) | 61531596 | (204.0x)
Table 9: BEEBS performance degradation results (cycles, thousands) on Coffee
Lake and Whiskey Lake.
| Coffee Lake | Whiskey Lake
---|---|---
Benchmark | NoDegrade | Degrade | (x) | HyperDegrade | (x) | NoDegrade | Degrade | (x) | HyperDegrade | (x)
aha-compress | 71096 | 813585 | (11.4x) | 21102464 | (296.8x) | 69668 | 816475 | (11.7x) | 23975673 | (344.1x)
aha-mont64 | 21931 | 468459 | (21.4x) | 12821078 | (584.6x) | 21687 | 492008 | (22.7x) | 15890278 | (732.7x)
bs | 1874 | 9676 | (5.2x) | 183221 | (97.8x) | 1763 | 8435 | (4.8x) | 207217 | (117.5x)
bubblesort | 335556 | 9468033 | (28.2x) | 322437859 | (960.9x) | 302489 | 11286756 | (37.3x) | 369824760 | (1222.6x)
cnt | 13046 | 231693 | (17.8x) | 5177506 | (396.8x) | 13030 | 240288 | (18.4x) | 6270126 | (481.2x)
compress | 8728 | 115043 | (13.2x) | 4971456 | (569.6x) | 8552 | 115566 | (13.5x) | 5197671 | (607.7x)
cover | 6779 | 115233 | (17.0x) | 3332393 | (491.6x) | 7151 | 118538 | (16.6x) | 2716975 | (379.9x)
crc | 7526 | 160400 | (21.3x) | 4230637 | (562.1x) | 7601 | 177394 | (23.3x) | 3880102 | (510.4x)
crc32 | 47198 | 1842254 | (39.0x) | 36548242 | (774.4x) | 46386 | 2035556 | (43.9x) | 37694482 | (812.6x)
ctl-stack | 37440 | 616068 | (16.5x) | 14702668 | (392.7x) | 37847 | 725898 | (19.2x) | 21964505 | (580.3x)
ctl-string | 31313 | 853770 | (27.3x) | 18846973 | (601.9x) | 31845 | 704006 | (22.1x) | 19457277 | (611.0x)
ctl-vector | 30486 | 436212 | (14.3x) | 9325473 | (305.9x) | 30914 | 442518 | (14.3x) | 10432897 | (337.5x)
cubic | 33872 | 209368 | (6.2x) | 9250622 | (273.1x) | 30295 | 215268 | (7.1x) | 9827263 | (324.4x)
dijkstra | 1970036 | 44216908 | (22.4x) | 1253757348 | (636.4x) | 1961091 | 42750800 | (21.8x) | 1470626853 | (749.9x)
dtoa | 13163 | 88265 | (6.7x) | 2015472 | (153.1x) | 13020 | 78937 | (6.1x) | 2348685 | (180.4x)
duff | 6198 | 73171 | (11.8x) | 2860949 | (461.6x) | 6043 | 63287 | (10.5x) | 2248195 | (372.0x)
edn | 194838 | 3877224 | (19.9x) | 86990331 | (446.5x) | 196602 | 4120570 | (21.0x) | 104584239 | (532.0x)
expint | 29623 | 468879 | (15.8x) | 4908536 | (165.7x) | 29713 | 453217 | (15.3x) | 5642536 | (189.9x)
fac | 4334 | 77443 | (17.9x) | 1858161 | (428.7x) | 4226 | 62654 | (14.8x) | 2150388 | (508.8x)
fasta | 2241455 | 36760694 | (16.4x) | 718824003 | (320.7x) | 2285382 | 38691950 | (16.9x) | 770811211 | (337.3x)
fdct | 7519 | 25085 | (3.3x) | 1528669 | (203.3x) | 7465 | 38172 | (5.1x) | 2103383 | (281.7x)
fibcall | 2713 | 20411 | (7.5x) | 827897 | (305.2x) | 2731 | 22243 | (8.1x) | 893172 | (327.0x)
fir | 772946 | 17543810 | (22.7x) | 804645036 | (1041.0x) | 855949 | 22698227 | (26.5x) | 827569902 | (966.8x)
frac | 12313 | 172861 | (14.0x) | 5878181 | (477.4x) | 12136 | 185299 | (15.3x) | 5985378 | (493.2x)
huffbench | 1417998 | 14160500 | (10.0x) | 216244934 | (152.5x) | 1449947 | 13955966 | (9.6x) | 229499477 | (158.3x)
insertsort | 4212 | 81626 | (19.4x) | 2250385 | (534.2x) | 4065 | 95022 | (23.4x) | 2758031 | (678.3x)
janne_complex | 2272 | 17739 | (7.8x) | 449771 | (197.9x) | 2171 | 18005 | (8.3x) | 484653 | (223.2x)
jfdctint | 11636 | 142194 | (12.2x) | 2969790 | (255.2x) | 11497 | 123960 | (10.8x) | 3773542 | (328.2x)
lcdnum | 2065 | 22535 | (10.9x) | 323111 | (156.4x) | 1950 | 16860 | (8.6x) | 395945 | (203.0x)
levenshtein | 153352 | 3282550 | (21.4x) | 108037727 | (704.5x) | 149114 | 4310603 | (28.9x) | 114304863 | (766.6x)
ludcmp | 8823 | 82479 | (9.3x) | 2434265 | (275.9x) | 8765 | 83908 | (9.6x) | 3208591 | (366.0x)
matmult-float | 68462 | 1689146 | (24.7x) | 50348408 | (735.4x) | 67895 | 1762057 | (26.0x) | 57860913 | (852.2x)
matmult-int | 443996 | 9399015 | (21.2x) | 375279612 | (845.2x) | 443126 | 11765284 | (26.6x) | 501250720 | (1131.2x)
mergesort | 525439 | 10632506 | (20.2x) | 222978650 | (424.4x) | 514642 | 10430846 | (20.3x) | 281787323 | (547.5x)
miniz | 3270 | 14691 | (4.5x) | 412024 | (126.0x) | 3143 | 13570 | (4.3x) | 487135 | (155.0x)
minver | 6194 | 53678 | (8.7x) | 1413440 | (228.2x) | 6373 | 52712 | (8.3x) | 1506640 | (236.4x)
nbody | 256466 | 6453615 | (25.2x) | 191264270 | (745.8x) | 252409 | 5888723 | (23.3x) | 216254299 | (856.8x)
ndes | 114508 | 1772845 | (15.5x) | 54788991 | (478.5x) | 119261 | 1683039 | (14.1x) | 65749412 | (551.3x)
nettle-aes | 114333 | 567288 | (5.0x) | 31052424 | (271.6x) | 112622 | 576548 | (5.1x) | 34106112 | (302.8x)
nettle-arcfour | 88005 | 2120577 | (24.1x) | 26954748 | (306.3x) | 86661 | 1436640 | (16.6x) | 35980953 | (415.2x)
nettle-cast128 | 13974 | 26723 | (1.9x) | 206736 | (14.8x) | 12886 | 23546 | (1.8x) | 317716 | (24.7x)
nettle-des | 9078 | 32316 | (3.6x) | 255601 | (28.2x) | 8921 | 23980 | (2.7x) | 238207 | (26.7x)
nettle-md5 | 7350 | 32253 | (4.4x) | 675813 | (91.9x) | 7217 | 29839 | (4.1x) | 937431 | (129.9x)
nettle-sha256 | 14889 | 65280 | (4.4x) | 1735502 | (116.6x) | 14623 | 62231 | (4.3x) | 1652764 | (113.0x)
newlib-exp | 3658 | 25479 | (7.0x) | 661410 | (180.8x) | 3352 | 18812 | (5.6x) | 639414 | (190.7x)
newlib-log | 3038 | 16828 | (5.5x) | 511174 | (168.2x) | 2908 | 12749 | (4.4x) | 546217 | (187.8x)
newlib-mod | 2098 | 12596 | (6.0x) | 286197 | (136.4x) | 2010 | 13121 | (6.5x) | 337327 | (167.8x)
newlib-sqrt | 9430 | 186240 | (19.7x) | 6294054 | (667.4x) | 9174 | 183378 | (20.0x) | 5230626 | (570.1x)
ns | 28058 | 823601 | (29.4x) | 32090464 | (1143.7x) | 23796 | 943435 | (39.6x) | 32109538 | (1349.3x)
nsichneu | 12323 | 18872 | (1.5x) | 160230 | (13.0x) | 12236 | 18332 | (1.5x) | 164601 | (13.5x)
prime | 58925 | 944634 | (16.0x) | 16115583 | (273.5x) | 57233 | 1005099 | (17.6x) | 18090661 | (316.1x)
qsort | 3625 | 34736 | (9.6x) | 810665 | (223.6x) | 3543 | 28630 | (8.1x) | 1046006 | (295.2x)
qurt | 5333 | 67598 | (12.7x) | 1442175 | (270.4x) | 5140 | 64387 | (12.5x) | 1418908 | (276.0x)
recursion | 7838 | 177321 | (22.6x) | 5943982 | (758.3x) | 6813 | 147406 | (21.6x) | 5670809 | (832.3x)
rijndael | 1846763 | 10954085 | (5.9x) | 298854160 | (161.8x) | 1829801 | 11877918 | (6.5x) | 301644496 | (164.9x)
select | 2304 | 17312 | (7.5x) | 335812 | (145.7x) | 2182 | 18485 | (8.5x) | 369037 | (169.1x)
sglib-arraybinsearch | 32531 | 1061435 | (32.6x) | 28361404 | (871.8x) | 31769 | 1107230 | (34.9x) | 31999324 | (1007.2x)
sglib-arrayheapsort | 74568 | 1174730 | (15.8x) | 41340255 | (554.4x) | 73905 | 1196702 | (16.2x) | 50085952 | (677.7x)
sglib-arrayquicksort | 36200 | 583738 | (16.1x) | 26300739 | (726.5x) | 35305 | 545617 | (15.5x) | 25654013 | (726.6x)
sglib-dllist | 104108 | 947183 | (9.1x) | 26737508 | (256.8x) | 102530 | 930507 | (9.1x) | 31315926 | (305.4x)
sglib-hashtable | 72956 | 752795 | (10.3x) | 23759580 | (325.7x) | 74857 | 798844 | (10.7x) | 27269711 | (364.3x)
sglib-listinsertsort | 150571 | 3901838 | (25.9x) | 79469309 | (527.8x) | 151002 | 5126991 | (34.0x) | 99854495 | (661.3x)
sglib-listsort | 83492 | 865237 | (10.4x) | 26506487 | (317.5x) | 83432 | 879144 | (10.5x) | 31338183 | (375.6x)
sglib-queue | 79923 | 1171442 | (14.7x) | 33365763 | (417.5x) | 79308 | 1213322 | (15.3x) | 41832831 | (527.5x)
sglib-rbtree | 197550 | 1975689 | (10.0x) | 55392435 | (280.4x) | 198785 | 1797773 | (9.0x) | 59307610 | (298.3x)
slre | 81343 | 966902 | (11.9x) | 30416350 | (373.9x) | 79819 | 1022986 | (12.8x) | 34449484 | (431.6x)
sqrt | 281429 | 6231013 | (22.1x) | 91157327 | (323.9x) | 286888 | 4148972 | (14.5x) | 87278192 | (304.2x)
st | 46787 | 879737 | (18.8x) | 13216784 | (282.5x) | 45113 | 591158 | (13.1x) | 16071550 | (356.2x)
statemate | 5426 | 48227 | (8.9x) | 1187334 | (218.8x) | 5319 | 41665 | (7.8x) | 1302996 | (244.9x)
stb_perlin | 170913 | 2951012 | (17.3x) | 81001658 | (473.9x) | 141537 | 3078432 | (21.7x) | 85481578 | (604.0x)
stringsearch1 | 15688 | 95590 | (6.1x) | 5010590 | (319.4x) | 15624 | 116866 | (7.5x) | 6019351 | (385.3x)
strstr | 5072 | 59257 | (11.7x) | 1662703 | (327.8x) | 4943 | 51456 | (10.4x) | 2064634 | (417.6x)
tarai | 2698 | 25798 | (9.6x) | 898884 | (333.1x) | 2566 | 24645 | (9.6x) | 937342 | (365.3x)
trio-snprintf | 16752 | 89957 | (5.4x) | 1578752 | (94.2x) | 16670 | 80103 | (4.8x) | 1886009 | (113.1x)
trio-sscanf | 21441 | 170602 | (8.0x) | 4218744 | (196.8x) | 21561 | 121913 | (5.7x) | 4357713 | (202.1x)
ud | 11003 | 70482 | (6.4x) | 2121308 | (192.8x) | 10692 | 77272 | (7.2x) | 2755305 | (257.7x)
whetstone | 337043 | 3331038 | (9.9x) | 87417253 | (259.4x) | 301729 | 3001540 | (9.9x) | 99761886 | (330.6x)
Table 10: PARSEC performance degradation results (cycles, thousands) on
Skylake and Kaby Lake.
| Skylake | Kaby Lake
---|---|---
Benchmark | NoDegrade | Degrade | (ratio) | HyperDegrade | (ratio) | NoDegrade | Degrade | (ratio) | HyperDegrade | (ratio)
blackscholes | 1054281 | 1780761 | (1.7x) | 18930407 | (18.0x) | 1076981 | 1865230 | (1.7x) | 23938125 | (22.2x)
streamcluster | 1615085 | 5743487 | (3.6x) | 135072031 | (83.6x) | 1609993 | 5117245 | (3.2x) | 149581491 | (92.9x)
fluidanimate | 1811088 | 8432568 | (4.7x) | 152698071 | (84.3x) | 1819377 | 8398743 | (4.6x) | 176033235 | (96.8x)
swaptions | 1530971 | 5362703 | (3.5x) | 85922802 | (56.1x) | 1534618 | 5586172 | (3.6x) | 112188654 | (73.1x)
freqmine | 2287791 | 4632607 | (2.0x) | 59327806 | (25.9x) | 2312730 | 4426167 | (1.9x) | 70071636 | (30.3x)
canneal | 2682216 | 9233551 | (3.4x) | 104439434 | (38.9x) | 3017818 | 8427129 | (2.8x) | 123489532 | (40.9x)
Table 11: PARSEC performance degradation results (cycles, thousands) on Coffee
Lake and Whiskey Lake.
| Coffee Lake | Whiskey Lake
---|---|---
Benchmark | NoDegrade | Degrade | (ratio) | HyperDegrade | (ratio) | NoDegrade | Degrade | (ratio) | HyperDegrade | (ratio)
blackscholes | 960317 | 1653042 | (1.7x) | 23307722 | (24.3x) | 897093 | 1728380 | (1.9x) | 35757406 | (39.9x)
streamcluster | 1612074 | 5753689 | (3.6x) | 150934680 | (93.6x) | 1438805 | 5510738 | (3.8x) | 263693538 | (183.3x)
fluidanimate | 1616088 | 7346424 | (4.5x) | 172996775 | (107.0x) | 1626180 | 9221194 | (5.7x) | 212095965 | (130.4x)
swaptions | 1614836 | 5397065 | (3.3x) | 92169463 | (57.1x) | 1356684 | 6204397 | (4.6x) | 149384195 | (110.1x)
freqmine | 2152394 | 4007422 | (1.9x) | 69069190 | (32.1x) | 2113832 | 4481302 | (2.1x) | 80329274 | (38.0x)
canneal | 2567086 | 7895577 | (3.1x) | 112026498 | (43.6x) | 2566824 | 9135742 | (3.6x) | 127757591 | (49.8x)
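As a note on reading the tables above, the parenthesized factors are simply the ratio of degraded to baseline cycle counts. A minimal Python sketch, using the canneal row on Coffee Lake from Table 11 as the example:

```python
def slowdown(degraded_cycles, baseline_cycles):
    """Ratio of degraded to baseline execution time (same units)."""
    return degraded_cycles / baseline_cycles

# canneal on Coffee Lake (Table 11): NoDegrade -> HyperDegrade
print(f"({slowdown(112026498, 2567086):.1f}x)")  # -> (43.6x)
```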
# Counter-terrorism analysis
using cooperative game theory
Sung Chan Choi Department of Mathematics, University of Utah, 155 S. 1400 E.,
Salt Lake City, UT 84112, USA. e-mail: [email protected]
###### Abstract
Game theory has been applied in many fields of study, especially economics and
political science. Arce M. and Sandler (2005) analyzed counter-terrorism using
non-cooperative game theory (the players are, for example, the US and the EU),
which assumes that communication among the players is not allowed, or, if it
is allowed, then there is no mechanism to enforce any agreement the players
may make. The only solution in the non-cooperative setting would be a Nash
equilibrium because the players adopt only self-enforcing strategies. Here we
analyze counter-terrorism using cooperative game theory, because there are
ways to communicate among the players and to make binding agreements; indeed,
countries that oppose terrorism are closely connected to each other in many
respects, both economically and in terms of international politics.
## 1 Introduction
Arce M. and Sandler (2005) classified counter-terrorism policies into
preemption, no action, and deterrence. Preemption is a proactive policy in
which terrorists and their assets are attacked to curb subsequent terrorist
campaigns. It can protect all potential targets from terrorists. Deterrence
comprises more defensive or passive counter-terrorism measures that include
installing technological barriers such as metal detectors or bomb-sniffing
equipment at airports, fortifying potential targets, and securing borders.
These defensive policies are intended to deter an attack by either making
success more difficult or increasing the likelihood of negative consequences
for the terrorists.
The reason why many countries facing terrorism are more inclined to choose the
deterrence policy rather than preemption, despite the greater social gain
using preemption, is that the famous “prisoner’s dilemma” is hidden in the
game, as we will point out below.
Since preemption can protect all potential targets, it provides public
benefits. In contrast, deterrence imposes public costs because it can deflect
the attack to relatively less-guarded targets. We assume that each preemption
gives a public benefit of 4 for player 1 and player 2 at a private cost of 6
to the player who uses preemption. Deterrence, in contrast, imposes a public
cost of 4 on both the deterrer and the other player, because the non-deterrer
suffers the deflection cost of becoming the target of choice; it gives a
private gain of 6 to the deterrer alone, who is motivated by a gain greater
than the cost. The payoff bimatrix from Arce M. and Sandler (2005), in which
the row player is player 1 (e.g., the US) and the column player is player 2
(e.g., the EU), is given by
$\bordermatrix{&\text{Preempt}&\text{Status
Quo}&\text{Deter}\cr\text{Preempt}&(2,2)&(-2,4)&(-6,6)\cr\text{Status
Quo}&(4,-2)&(0,0)&(-4,2)\cr\text{Deter}&(6,-6)&(2,-4)&(-2,-2)}.$ (1)
* •
(Preempt, Preempt)
Each player confers a public benefit $(=4)$ on both players, so each receives
a total benefit of 8, while each pays a private cost $(=6)$. Therefore each
payoff equals $2$ $(=4+4-6)$.
* •
(Preempt, Status Quo) or (Status Quo, Preempt)
The preemptor gains the public benefit $(=4)$ of his own preemption but pays
the private cost $(=6)$, so his payoff is $-2$ $(=4-6)$. The player adopting
the status quo receives the benefit $(=4)$ created by the preemptor at no
cost, so the payoff to the player doing nothing is $4$ $(=4-0)$.
* •
(Preempt, Deter) or (Deter, Preempt)
The payoff to the preemptor is $-6$ $(=4-6-4)$: he enjoys his own public
benefit $(=4)$ but pays both his private cost $(=6)$ and the public cost
$(=4)$ imposed by the deterrer. The deterrer, on the other hand, attains his
private benefit $(=6)$ plus the public benefit $(=4)$ created by the
preemptor, while paying only the public cost $(=4)$ of his own deterring, so
the payoff to the deterrer is $6$ $(=6+4-4)$.
* •
(Deter, Status Quo) or (Status Quo, Deter)
The only deterrer gets a private benefit $(=6)$ and pays a public cost $(=4)$.
So the payoff to the deterrer is $2$ $(=6-4)$. The player adopting the status
quo bears the public cost $(=4)$ without receiving any benefit, so that
player's payoff is $-4$ $(=0-4)$.
* •
(Deter, Deter)
The payoff to the players is $-2$ $(=6-4-4)$ because each can get a private
benefit of 6 but they impose a public cost of 4 on each other.
Notice that (Deter, Deter) is a pure Nash equilibrium because Deter is a
dominant strategy for both players. Yet both players receive higher payoffs
from (Preempt, Preempt) and from (Status Quo, Status Quo), so this is a
classic prisoner’s dilemma situation.
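The payoff rules above can be checked mechanically. The following sketch (our illustration, not part of the original paper; the function name and strategy labels are our own) builds bimatrix (1) from the stated benefits and costs and confirms that Deter is a dominant strategy:

```python
# Strategy labels: 'P' = Preempt, 'S' = Status Quo, 'D' = Deter.
B_pub, c_priv = 4, 6   # public benefit / private cost of preemption
b_priv, C_pub = 6, 4   # private benefit / public cost of deterrence

def payoffs(s1, s2):
    """Payoff pair (player 1, player 2) implied by the rules in the text."""
    u1 = u2 = 0
    if s1 == 'P': u1 += B_pub - c_priv; u2 += B_pub
    if s2 == 'P': u1 += B_pub;          u2 += B_pub - c_priv
    if s1 == 'D': u1 += b_priv - C_pub; u2 -= C_pub
    if s2 == 'D': u1 -= C_pub;          u2 += b_priv - C_pub
    return u1, u2

# Reproduce bimatrix (1) row by row (rows/columns ordered P, S, D).
matrix = [[payoffs(r, c) for c in 'PSD'] for r in 'PSD']
print(matrix)
# [[(2, 2), (-2, 4), (-6, 6)], [(4, -2), (0, 0), (-4, 2)], [(6, -6), (2, -4), (-2, -2)]]

# Deter is dominant for player 1: it beats 'P' and 'S' against every column.
assert all(payoffs('D', c)[0] > payoffs(r, c)[0]
           for c in 'PSD' for r in 'PS')
```

By the symmetry of the game the same dominance holds for player 2, so (Deter, Deter) is the pure Nash equilibrium noted above.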
Our aim here is to apply cooperative game theory to this model instead of non-
cooperative game theory. There are at least three kinds of solution in
cooperative game theory, namely, the TU (transferable utility) solution, the
NTU solution based on the Nash Bargaining Model, and the NTU solution based on
the lambda transfer approach. Ferguson (2014) is recommended for background on
this theory.
### 1.1 TU solution
In a cooperative game with payoff bimatrix $(\bm{A},\bm{B})$, the players will
agree to play so as to achieve $\sigma=\max_{i,j}(a_{ij}+b_{ij})$, and then
will divide $\sigma$ between them in some way. If the threat strategies are
$\bm{p}$ for Player 1 and $\bm{q}$ for Player 2, Player 1 will accept no less
than $D_{1}=\bm{p}^{\textsf{T}}\bm{A}\bm{q}$ and Player 2 will accept no less
than $D_{2}=\bm{p}^{\textsf{T}}\bm{B}\bm{q}$ since the players can receive
them without agreement. The players will negotiate about which point on the
line segment $u+v=\sigma$ from $(D_{1},\sigma-D_{1})$ to $(\sigma-
D_{2},D_{2})$ is the TU solution. It should be the midpoint of the interval,
i.e.,
$\bm{\varphi}=(\varphi_{1},\varphi_{2})=\bigg{(}\frac{\sigma+D_{1}-D_{2}}{2},\frac{\sigma-(D_{1}-D_{2})}{2}\bigg{)}.$
This shows that Player 1 wants to maximize $D_{1}-D_{2}$, while Player 2 wants
to minimize it. Since $D_{1}-D_{2}=\bm{p}^{\textsf{T}}(\bm{A}-\bm{B})\bm{q}$,
we see that the optimal threat strategies are given by the solution
$(\bm{p}^{*},\bm{q}^{*})$ of the matrix game $\bm{A}-\bm{B}$. With
$\delta=\text{Val}(\bm{A}-\bm{B})=(\bm{p}^{*})^{\textsf{T}}(\bm{A}-\bm{B})\bm{q}^{*},$
the TU solution becomes
$\bm{\varphi}^{*}=(\varphi_{1}^{*},\varphi_{2}^{*})=\bigg{(}\frac{\sigma+\delta}{2},\frac{\sigma-\delta}{2}\bigg{)}.$
Let
$\bm{A}=\begin{pmatrix}2&-2&-6\\\ 4&0&-4\\\
6&2&-2\end{pmatrix}\quad\textrm{and}\quad\bm{B}=\begin{pmatrix}2&4&6\\\
-2&0&2\\\ -6&-4&-2\end{pmatrix}$
as in (1). Then the difference matrix
$\bm{A}-\bm{B}=\begin{pmatrix}0&-6&-12\\\ 6&0&-6\\\ 12&6&0\end{pmatrix}$
has a saddle point at the lower right with value $\delta=0$. So
$\bm{p}^{*}=(0,0,1)^{\textsf{T}}$ and $\bm{q}^{*}=(0,0,1)^{\textsf{T}}$ are
the threat strategies and the disagreement point is $(D_{1},D_{2})=(-2,-2)$,
which is the Nash equilibrium in the non-cooperative game. Also, we can get
value $\sigma=\max(a_{ij}+b_{ij})=4$. Therefore the TU solution is
$\bm{\varphi}^{*}=\bigg{(}\frac{\sigma+\delta}{2},\frac{\sigma-\delta}{2}\bigg{)}=\bigg{(}\frac{4+0}{2},\frac{4-0}{2}\bigg{)}=(2,2).$
Since the cooperative strategy gives (2,2), this does not require any side
payment.
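This computation is easy to reproduce numerically. The sketch below (our illustration; variable names are arbitrary) finds the saddle-point value of $\bm{A}-\bm{B}$ as a matching maximin/minimax pair and then forms the TU solution:

```python
A = [[2, -2, -6], [4, 0, -4], [6, 2, -2]]
B = [[2,  4,  6], [-2, 0, 2], [-6, -4, -2]]

# Difference matrix A - B, whose game value gives delta.
D = [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

# A pure saddle point exists when maximin equals minimax; their common
# value is delta = Val(A - B).
maximin = max(min(row) for row in D)
minimax = min(max(D[i][j] for i in range(3)) for j in range(3))
assert maximin == minimax
delta = maximin                                   # 0

sigma = max(A[i][j] + B[i][j] for i in range(3) for j in range(3))  # 4

phi = ((sigma + delta) / 2, (sigma - delta) / 2)
print(phi)  # (2.0, 2.0) -- the TU solution, so no side payment is needed
```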
### 1.2 NTU solution based on the Nash Bargaining Model
This model assumes that two elements are given and known to the players.
One element is a compact (i.e., closed and bounded), convex set $S$ in the
plane. We refer to $S$ as the NTU-feasible set. Another is a threat point,
$(u^{*},v^{*})\in S$. Given an NTU-feasible set $S$ and a threat point
$(u^{*},v^{*})\in S$, we can find a unique NTU solution $(\bar{u},\bar{v})\in
S$ that maximizes $f(u,v)=(u-u^{*})(v-v^{*})$, as suggested by Nash.
###### Theorem 1.
If there exists a point $(u,v)\in S$ with $u>u^{*}$ and $v>v^{*}$ then
$\max_{u>u^{*},v>v^{*},(u,v)\in S}(u-u^{*})(v-v^{*})$
is attained at a unique point $(\bar{u},\bar{v})$.
###### Proof.
Suppose there are two different points $(u_{1},v_{1}),(u_{2},v_{2})\in S$ that
maximize $f(u,v)=(u-u^{*})(v-v^{*})$, and let $M$ be the maximum value. Since
$M>0$, $u_{1}=u_{2}$ would imply $v_{1}=v_{2}$, so without loss of generality
we can suppose that $u_{1}<u_{2}$, in which case $v_{1}>v_{2}$. Put
$(u,v)=\frac{1}{2}(u_{1},v_{1})+\frac{1}{2}(u_{2},v_{2})=\frac{1}{2}(u_{1}+u_{2},v_{1}+v_{2})$;
by convexity of $S$, $(u,v)\in S$.
Now
$\displaystyle f(u,v)$
$\displaystyle=\bigg{(}\frac{u_{1}+u_{2}}{2}-u^{*}\bigg{)}\bigg{(}\frac{v_{1}+v_{2}}{2}-v^{*}\bigg{)}$
$\displaystyle=\frac{(u_{1}-u^{*})+(u_{2}-u^{*})}{2}\cdot\frac{(v_{1}-v^{*})+(v_{2}-v^{*})}{2}$
$\displaystyle=\frac{2(u_{1}-u^{*})(v_{1}-v^{*})-(u_{1}-u^{*})(v_{1}-v^{*})+2(u_{2}-u^{*})(v_{2}-v^{*})}{4}$
$\displaystyle\quad{}+\frac{-(u_{2}-u^{*})(v_{2}-v^{*})+(u_{1}-u^{*})(v_{2}-v^{*})+(u_{2}-u^{*})(v_{1}-v^{*})}{4}$
$\displaystyle=\bigg{(}\frac{(u_{1}-u^{*})(v_{1}-v^{*})}{2}+\frac{(u_{2}-u^{*})(v_{2}-v^{*})}{2}\bigg{)}+\frac{(u_{1}-u_{2})(v_{2}-v_{1})}{4}$
$\displaystyle=M+\frac{(u_{1}-u_{2})(v_{2}-v_{1})}{4}.$
Since $u_{1}<u_{2}$ and $v_{1}>v_{2}$, the last fraction is positive, hence
$f(u,v)>M$, which is a contradiction to the assumption that $M$ is the maximum
value. Therefore, the point $(\bar{u},\bar{v})$ is unique. ∎
Figure 1: TU feasible set: $-u-4\leq v\leq-u+4$. NTU feasible set: the shaded
rhombus with vertices $(2,2)$, $(6,-6)$, $(-2,-2)$, and $(-6,6)$. Axes show
$u$ (Player 1's payoff) and $v$ (Player 2's payoff); level curves
$(u+2)(v+2)=c$ are drawn for $c=8,16,24$.
We can show our bimatrix geometrically in Figure 1. First, we consider the
disagreement point $(u^{*},v^{*})=(-2,-2)$ in the TU solution section as the
threat point. The set of Pareto optimal points consists of the two line
segments from $(-6,6)$ to $(2,2)$ and from $(2,2)$ to $(6,-6)$. The NTU
solution is that point along this path which maximizes $(u+2)(v+2)$. Let
$f(u)=(u+2)(v+2)$. Now, the line segment from $(-6,6)$ to $(2,2)$ has the
equation, $v=-\frac{1}{2}u+3$. So we can rewrite
$f(u)=(u+2)(v+2)=(u+2)(-\frac{1}{2}u+5)=-\frac{1}{2}u^{2}+4u+10$. It has its
maximum in $u\in[-6,2]$ at $u=2$ where $v$ has the value 2. Similarly, the
line segment from $(2,2)$ to $(6,-6)$ satisfies the equation $v=-2u+6$. So we
can write $f(u)=-2u^{2}+4u+16$. In this case, it has its maximum in
$u\in[2,6]$ at $u=2$ and $v=2$ too. Hence, $f(u)=(u+2)(v+2)$ is maximized
along the Pareto boundary at $(\bar{u},\bar{v})=(2,2)$ which is the NTU
solution of our example.
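The same maximization of $(u+2)(v+2)$ along the two Pareto segments can be done with a simple grid search (an illustrative sketch; the step size is an arbitrary choice):

```python
def nash_product(u, v, threat=(-2.0, -2.0)):
    """Nash product (u - u*)(v - v*) relative to the threat point."""
    return (u - threat[0]) * (v - threat[1])

candidates = []
# Segment from (-6, 6) to (2, 2): v = -u/2 + 3.
for i in range(8001):
    u = -6.0 + i / 1000.0
    v = -0.5 * u + 3.0
    candidates.append((nash_product(u, v), u, v))
# Segment from (2, 2) to (6, -6): v = -2u + 6.
for i in range(4001):
    u = 2.0 + i / 1000.0
    v = -2.0 * u + 6.0
    candidates.append((nash_product(u, v), u, v))

best_f, best_u, best_v = max(candidates)
print(best_u, best_v, best_f)  # 2.0 2.0 16.0 -- the NTU solution (2, 2)
```

Both quadratics peak outside their segments, so the grid maximum falls on the common endpoint $(2,2)$, in agreement with the calculus argument above.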
### 1.3 NTU solution based on the lambda transfer approach
If the original bimatrix $(\bm{A},\bm{B})$ and its utilities are not measured
in the same units, we can change it into a bimatrix to which the TU theory
applies. If an increase of one unit in Player 1’s utility is worth an increase
$\lambda$ $(>0)$ units in Player 2’s utility, then the bimatrix
$(\lambda\bm{A},\bm{B})$ has transferable utility. By the TU-solution method
with bimatrix $(\lambda\bm{A},\bm{B})$, the lambda transfer solution for the
NTU game is
$\bm{\varphi}(\lambda)=(\varphi_{1}(\lambda),\varphi_{2}(\lambda))=\bigg{(}\frac{\sigma(\lambda)+\delta(\lambda)}{2\lambda},\frac{\sigma(\lambda)-\delta(\lambda)}{2}\bigg{)},$
(2)
where $\sigma(\lambda)=\max_{i,j}(\lambda a_{ij}+b_{ij})$ and
$\delta(\lambda)=\mathrm{Val}(\lambda\bm{A}-\bm{B})=\bm{p}^{*\textsf{T}}(\lambda\bm{A}-\bm{B})\bm{q}^{*}$.
Generally, there is a unique $\lambda$, denoted by $\lambda^{*}$, such that
(2) is on the Pareto optimal boundary of the NTU feasible set. Then
$\bm{\varphi}(\lambda^{*})$ is the NTU solution.
In our example we have the transferred bimatrix
$(\lambda\bm{A},\bm{B})=\begin{pmatrix}(2\lambda,2)&(-2\lambda,4)&(-6\lambda,6)\\\
(4\lambda,-2)&(0,0)&(-4\lambda,2)\\\
(6\lambda,-6)&(2\lambda,-4)&(-2\lambda,-2)\end{pmatrix}.$
Then $\delta(\lambda)=-2\lambda+2$ can be found easily through the difference
matrix
$\lambda\bm{A}-\bm{B}=\begin{pmatrix}2\lambda-2&-2\lambda-4&-6\lambda-6\\\
4\lambda+2&0&-4\lambda-2\\\ 6\lambda+6&2\lambda+4&-2\lambda+2\end{pmatrix},$
which has a saddle point at the lower right. It is easy to check that
$\sigma(\lambda)=\max_{i,j}(\lambda a_{ij}+b_{ij})$ is given by
$\sigma(\lambda)=\begin{cases}-6\lambda+6&\text{if
$0<\lambda\leq\frac{1}{2}$}\\\ 2\lambda+2&\text{if $\frac{1}{2}\leq\lambda\leq
2$}\\\ 6\lambda-6&\text{if $\lambda\geq 2$}.\end{cases}$
Case 1: $0<\lambda\leq\frac{1}{2}$
The candidate of the solution is
$\displaystyle\bm{\varphi}(\lambda)$
$\displaystyle=\bigg{(}\frac{\sigma(\lambda)+\delta(\lambda)}{2\lambda},\frac{\sigma(\lambda)-\delta(\lambda)}{2}\bigg{)}$
$\displaystyle=\bigg{(}\frac{(-6\lambda+6)+(-2\lambda+2)}{2\lambda},\frac{(-6\lambda+6)-(-2\lambda+2)}{2}\bigg{)}$
$\displaystyle=\bigg{(}\\!\\!-4+\frac{4}{\lambda},-2\lambda+2\bigg{)},$
which does not intersect the NTU feasible set.
Case 2: $\frac{1}{2}\leq\lambda\leq 2$
Since $\sigma(\lambda)=2\lambda+2$ and $\delta=-2\lambda+2$, we get
$\bm{\varphi}(\lambda)=(2/\lambda,2\lambda)$ as the solution. Only the point
$\bm{\varphi}(1)=(2,2)$ belongs to the NTU feasible set.
Case 3: $\lambda\geq 2$
The final step is to check whether
$\bm{\varphi}(\lambda)=(2-2/\lambda,4\lambda-4)$ is a possible solution, and
it is not.
From the cases above, our final NTU solution through the lambda transfer
approach is $\bm{\varphi}(\lambda^{*})=(2,2)$ at $\lambda^{*}=1$.
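The case analysis can be verified by scanning $\lambda$ and testing whether $\bm{\varphi}(\lambda)$ lands in the NTU feasible set, the rhombus of Figure 1. A rough numerical sketch (ours, not the paper's), with the feasibility test written directly from the four edge inequalities of the rhombus:

```python
def feasible(u, v, eps=1e-9):
    """Inside the rhombus with vertices (2,2), (6,-6), (-2,-2), (-6,6)."""
    return (v <= -0.5 * u + 3 + eps and v <= -2 * u + 6 + eps and
            v >= -0.5 * u - 3 - eps and v >= -2 * u - 6 - eps)

hits = []
for i in range(500, 2001):            # lambda in [0.5, 2], i.e. Case 2
    lam = i / 1000.0
    sigma, delta = 2 * lam + 2, -2 * lam + 2
    u = (sigma + delta) / (2 * lam)   # phi_1(lambda) = 2/lambda
    v = (sigma - delta) / 2           # phi_2(lambda) = 2*lambda
    if feasible(u, v):
        hits.append(lam)

print(hits)  # [1.0] -- only lambda* = 1 puts phi(lambda) in the feasible set
```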
We conclude that all three approaches lead to the same solution, namely
(Preempt, Preempt), in contrast to the non-cooperative (Nash equilibrium)
solution, (Deter, Deter).
## 2 Generalization
The bimatrix (1) was a very specific symmetric example, which we now want to
generalize. The bimatrix
$\bordermatrix{&\text{Preempt}&\text{Status
Quo}&\text{Deter}\cr\text{Preempt}&(2B-c,2B-c)&(B-c,B)&(B-c-C,B+b-C)\cr\text{Status
Quo}&(B,B-c)&(0,0)&(-C,b-C)\cr\text{Deter}&(B+b-C,B-c-C)&(b-C,-C)&(b-2C,b-2C)}$
(3)
from Arce M. and Sandler (2005) shows the generalized payoffs. As before, the
row player is Player 1 (e.g., the US) and the column player is Player 2 (e.g.,
the EU), with $B$ and $c$ representing the public benefit and the private cost
when a player uses the preemption policy, and $b$ and $C$ denoting the private
benefit and the public cost when a player takes the deterrence action. Here
$B<c<2B$ and $C<b<2C$ are assumed. The derivation of (3) is similar to that of
(1).
To make the game easier to analyze, we make additional assumptions beyond
those of Arce M. and Sandler (2005). We assume that $B=C$, $c=\alpha B$, and
$b=\beta C$, where $1<\alpha,\beta<2$, on the basis of (1) and (3). This
reduces (3), after factoring out $B$, to
$\bordermatrix{&\text{Preempt}&\text{Status
Quo}&\text{Deter}\cr\text{Preempt}&(2-\alpha,2-\alpha)&(-(\alpha-1),1)&(-\alpha,\beta)\cr\text{Status
Quo}&(1,-(\alpha-1))&(0,0)&(-1,\beta-1)\cr\text{Deter}&(\beta,-\alpha)&(\beta-1,-1)&(-(2-\beta),-(2-\beta))}=(\bm{U},\bm{V}),$
a bimatrix with two parameters instead of four.
### 2.1 TU solution
Since
$\bm{U}=\begin{pmatrix}2-\alpha&-(\alpha-1)&-\alpha\\\ 1&0&-1\\\
\beta&\beta-1&-(2-\beta)\end{pmatrix}\quad\textrm{and}\quad\bm{V}=\begin{pmatrix}2-\alpha&1&\beta\\\
-(\alpha-1)&0&\beta-1\\\ -\alpha&-1&-(2-\beta)\end{pmatrix},$
the difference matrix
$\bm{U}-\bm{V}=\begin{pmatrix}0&-\alpha&-\alpha-\beta\\\ \alpha&0&-\beta\\\
\alpha+\beta&\beta&0\end{pmatrix}$
has a saddle point at the lower right with value $\delta=0$. So
$\bm{p}^{*}=(0,0,1)^{\textsf{T}}$ and $\bm{q}^{*}=(0,0,1)^{\textsf{T}}$ are
the threat strategies and the disagreement point is $(-(2-\beta),-(2-\beta))$,
which is the Nash equilibrium in the non-cooperative game. Also, we can get
$\bm{U}+\bm{V}=\begin{pmatrix}2(2-\alpha)&2-\alpha&-\alpha+\beta\\\
2-\alpha&0&-(2-\beta)\\\ -\alpha+\beta&-(2-\beta)&-2(2-\beta)\end{pmatrix}.$
We see that $\sigma=\max_{i,j}(u_{ij}+v_{ij})=2(2-\alpha)$ because
$2(2-\alpha)-(-\alpha+\beta)=(2-\alpha)+(2-\beta)>0$ under the condition
$1<\alpha,\beta<2$. Now the TU solution is
$\bm{\varphi}^{*}=\bigg{(}\frac{\sigma+\delta}{2},\frac{\sigma-\delta}{2}\bigg{)}=(2-\alpha,2-\alpha).$
Since the cooperative strategy gives $(u_{11},v_{11})=(2-\alpha,2-\alpha)$,
this does not require any side payment. Converting it to the original
notation, we get $\bm{\varphi}^{*}=(2B-c,2B-c)$.
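The generalized computation can be spot-checked numerically for particular parameter values (an illustrative sketch; $\alpha=1.5$, $\beta=1.3$ are arbitrary choices within $(1,2)$):

```python
def tu_solution(a, b):
    """TU solution for the reduced bimatrix (U, V) with alpha = a, beta = b."""
    U = [[2 - a, -(a - 1), -a], [1, 0, -1], [b, b - 1, -(2 - b)]]
    V = [[2 - a, 1, b], [-(a - 1), 0, b - 1], [-a, -1, -(2 - b)]]
    Dm = [[U[i][j] - V[i][j] for j in range(3)] for i in range(3)]
    # Saddle-point value of U - V via matching maximin/minimax.
    maximin = max(min(row) for row in Dm)
    minimax = min(max(Dm[i][j] for i in range(3)) for j in range(3))
    assert abs(maximin - minimax) < 1e-12
    delta = maximin                                           # 0
    sigma = max(U[i][j] + V[i][j] for i in range(3) for j in range(3))
    return ((sigma + delta) / 2, (sigma - delta) / 2)

print(tu_solution(1.5, 1.3))  # approximately (0.5, 0.5) == (2 - alpha, 2 - alpha)
```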
### 2.2 NTU solution based on the Nash Bargaining Model
First of all, we have to compare the slopes of the line segments representing
the Pareto optimal boundary to find the NTU solution because the slopes could
depend on the parameters $\alpha$ and $\beta$. We can think of two cases as in
Figure 2 and Figure 3. Figure 2 shows that the slope of the line segment
$P_{1}$ from $(-\alpha,\beta)$ to $(2-\alpha,2-\alpha)$ is less than that of
the line segment $Q_{1}$ from $(-(\alpha-1),1)$ to $(2-\alpha,2-\alpha)$,
equivalently, the slope of the line segment $P_{2}$ from $(2-\alpha,2-\alpha)$
to $(\beta,-\alpha)$ is greater than that of the line segment $Q_{2}$ from
$(2-\alpha,2-\alpha)$ to $(1,-(\alpha-1))$, and vice versa in Figure 3.
Case 1: $\text{slope}(P_{1})<\text{slope}(Q_{1})$, i.e.,
$-(\alpha+\beta-2)/2<-(\alpha-1)$ or $\alpha<\beta$; equivalently,
$\text{slope}(P_{2})>\text{slope}(Q_{2})$.
Figure 2: The Pareto optimal boundary in Case 1, shown in blue: segments
$P_{1}$ from $(-\alpha,\beta)$ to $(2-\alpha,2-\alpha)$ and $P_{2}$ from
$(2-\alpha,2-\alpha)$ to $(\beta,-\alpha)$. Axes show Player 1's payoff
against Player 2's payoff.
Let us start with Figure 2. In this case, the NTU solution should be on the
line of the equation
$v=-\frac{\alpha+\beta-2}{2}u+\frac{(2-\alpha)(\alpha+\beta)}{2},\quad-\alpha\leq
u\leq 2-\alpha,$ (4)
or that of the equation
$v=-\frac{2}{\alpha+\beta-2}u+\frac{(2-\alpha)(\alpha+\beta)}{\alpha+\beta-2},\quad
2-\alpha\leq u\leq\beta.$ (5)
We consider the disagreement point $(u^{*},v^{*})=(-(2-\beta),-(2-\beta))$ in
the TU solution section as the threat point. The set of Pareto optimal points
consists of the two line segments (4) and (5) above. The NTU solution is that
point $(u,v)$ along this path that maximizes $(u+2-\beta)(v+2-\beta)$. Now,
using the equation (4), we can rewrite this as a quadratic
$f(u):=(u+2-\beta)\bigg{(}\\!\\!-\frac{\alpha+\beta-2}{2}u+\frac{(2-\alpha)(\alpha+\beta)}{2}+2-\beta\bigg{)}.$
The maximum of $f(u)$ occurs at
$\hat{u}=\frac{-\alpha^{2}+\beta^{2}-4\beta+8}{2(\alpha+\beta-2)},$
but $\hat{u}-(2-\alpha)=(4-\alpha-\beta)^{2}/[2(\alpha+\beta-2)]>0$, so the
maximum of $f(u)$ over $-\alpha\leq u\leq 2-\alpha$ occurs at $u=2-\alpha$.
Similarly, if we substitute the linear function (5) for $v$ in
$(u+2-\beta)(v+2-\beta)$, a similar argument shows that $f(u)$ is maximized
over $[2-\alpha,\beta]$ at $u=2-\alpha$. Hence, $(u+2-\beta)(v+2-\beta)$ is
maximized along the Pareto optimal boundary at
$(\bar{u},\bar{v})=(2-\alpha,2-\alpha)$, which is the NTU solution.
Case 2: $\text{slope}(P_{1})>\text{slope}(Q_{1})$, i.e.,
$-(\alpha+\beta-2)/2>-(\alpha-1)$ or $\alpha>\beta$; equivalently,
$\text{slope}(P_{2})<\text{slope}(Q_{2})$.
Figure 3: The Pareto optimal boundary in Case 2, shown in blue: four segments
through $(-\alpha,\beta)$, $(-(\alpha-1),1)$, $(2-\alpha,2-\alpha)$,
$(1,-(\alpha-1))$, and $(\beta,-\alpha)$. Axes show Player 1's payoff against
Player 2's payoff.
We should be more careful with this case because the constraint $\alpha>\beta$
implies that the Pareto optimal boundary comprises four different line
segments (see Figure 3). Now, we consider the two (unlabeled) outer line
segments from $(-\alpha,\beta)$ to $(-(\alpha-1),1)$ and from
$(1,-(\alpha-1))$ to $(\beta,-\alpha)$, whose equations are
$v=-(\beta-1)u+\alpha+\beta-\alpha\beta,\quad-\alpha\leq u\leq-(\alpha-1),$
(6)
and
$v=-\frac{1}{\beta-1}u+\frac{\alpha+\beta-\alpha\beta}{\beta-1},\quad 1\leq
u\leq\beta.$ (7)
First, we check whether the NTU solution could be on the line of equation (6)
or (7). If we maximize $(u+2-\beta)(v+2-\beta)$ along (6), then we must
maximize $f(u):=(u+2-\beta)(-(\beta-1)u+\alpha-\alpha\beta+2)$ over
$-\alpha\leq u\leq-(\alpha-1)$. We find that the maximum of this quadratic
occurs at $\hat{u}>-(\alpha-1)$, so the maximum over $[-\alpha,-(\alpha-1)]$
occurs at $u=-(\alpha-1)$. Similarly, maximizing along (7), we must maximize
$f(u):=(u+2-\beta)(-(\beta-1)^{-1}u+(\beta-1)^{-1}(\alpha+\beta-\alpha\beta)+2-\beta)$
over $1\leq u\leq\beta$. The maximum occurs at $u=1$.
Now, we focus on two other line segments, $Q_{1}$ given by
$v=-(\alpha-1)u+\alpha(2-\alpha),\quad-(\alpha-1)\leq u\leq 2-\alpha,$ (8)
and $Q_{2}$ given by
$v=-\frac{1}{\alpha-1}u+\frac{\alpha(2-\alpha)}{\alpha-1},\quad 2-\alpha\leq
u\leq 1.$
Along the line segment (8), we can maximize $(u-\beta+2)(v-\beta+2)$ by
maximizing $f(u):=(u-\beta+2)(-(\alpha-1)u+\alpha(2-\alpha)+2-\beta)$ over
$-(\alpha-1)\leq u\leq 2-\alpha$. The maximum of the quadratic occurs at
$\hat{u}>2-\alpha$, so its maximum over $[-(\alpha-1),2-\alpha]$ occurs at
$u=2-\alpha$, and $f(2-\alpha)=(4-\alpha-\beta)^{2}$. Similarly, the maximum
of $(u-\beta+2)(v-\beta+2)$ over $2-\alpha\leq u\leq 1$ occurs at $u=2-\alpha$
with the same result. Hence, $f(u)=(u-\beta+2)(v-\beta+2)$ is maximized along
the Pareto optimal boundary at $(\bar{u},\bar{v})=(2-\alpha,2-\alpha)$, which
is the general NTU solution when $\alpha>\beta$. This coincides with our
result in the case $\alpha<\beta$, and both arguments apply when
$\alpha=\beta$.
### 2.3 NTU solution based on the lambda transfer approach
We have the transferred bimatrix,
$(\lambda\bm{U},\bm{V})=\begin{pmatrix}(\lambda(2-\alpha),2-\alpha)&(-\lambda(\alpha-1),1)&(-\lambda\alpha,\beta)\\\
(\lambda,-(\alpha-1))&(0,0)&(-\lambda,\beta-1)\\\
(\lambda\beta,-\alpha)&(\lambda(\beta-1),-1)&(-\lambda(2-\beta),-(2-\beta))\end{pmatrix}.$
and therefore
$\displaystyle\lambda\bm{U}-\bm{V}$
$\displaystyle=\begin{pmatrix}(\lambda-1)(2-\alpha)&-\lambda(\alpha-1)-1&-\lambda\alpha-\beta\\\
\lambda+\alpha-1&0&-\lambda-\beta+1\\\
\lambda\beta+\alpha&\lambda(\beta-1)+1&(1-\lambda)(2-\beta)\end{pmatrix}.$
We can verify that the $(3,3)$ entry is a saddle point, so
$\delta(\lambda)=(1-\lambda)(2-\beta)$. This involves showing that
$(1-\lambda)(2-\beta)$ is a row minimum and a column maximum for all
$0<\lambda<\infty$.
To evaluate $\sigma(\lambda)$ we need the maximal entry of
$\lambda\bm{U}+\bm{V}=\begin{pmatrix}(\lambda+1)(2-\alpha)&-\lambda(\alpha-1)+1&-\lambda\alpha+\beta\\\
\lambda-\alpha+1&0&-\lambda+\beta-1\\\
\lambda\beta-\alpha&\lambda(\beta-1)-1&-(\lambda+1)(2-\beta)\end{pmatrix},$
so let us first consider the case $0<\lambda\leq 1$. Then, comparing
$(\lambda+1)(2-\alpha)$ with each of the other entries of
$\lambda\bm{U}+\bm{V}$, we find that $\sigma(\lambda)=(\lambda+1)(2-\alpha)$
provided
$\frac{\alpha+\max(\alpha,\beta)}{2}-1\leq\lambda\leq 1.$
(We are using the fact that $\alpha>\beta/(3-\alpha)$, which holds for all
$\alpha,\beta\in(1,2)$.) In this case, the TU solution of the transferred
problem is
$\displaystyle\bm{\varphi}(\lambda)$
$\displaystyle=\bigg{(}\frac{\sigma(\lambda)+\delta(\lambda)}{2\lambda},\frac{\sigma(\lambda)-\delta(\lambda)}{2}\bigg{)}$
$\displaystyle=\bigg{(}\frac{\beta-\alpha}{2}+\frac{4-\alpha-\beta}{2\lambda},\frac{\beta-\alpha}{2}+\frac{(4-\alpha-\beta)\lambda}{2}\bigg{)},$
which reduces to $(2-\alpha,2-\alpha)$ when $\lambda=1$. Thus, $\lambda^{*}=1$
and $\bm{\varphi}(\lambda^{*})=(2-\alpha,2-\alpha)$. A similar argument
applies when $1\leq\lambda<\infty$.
## 3 Conclusion
Using cooperative game theory, we obtained a different solution than the one
found using non-cooperative game theory. Our game solution against terrorism
is to take a firm attitude toward terrorists, that is, (Preempt, Preempt),
even though there are many constraints in the real world. Arce M. and Sandler
(2005) sought to explain why countries facing terrorism take passive action
against terrorists. In contrast, this paper shows there is a positive effect
when all countries facing terrorism stand firm and cooperate with each other.
## References
* [1] Daniel G. Arce M. and Todd Sandler (2005) Counterterrorism: A Game-Theoretic Analysis. The Journal of Conflict Resolution, 49 (2), The Political Economy of Transnational Terrorism, pp. 183–200.
* [2] Thomas S. Ferguson (2014) Game Theory, Second Edition. http://www.math.ucla.edu/tom/GameTheory/Contents.html.
# Topology of the energy landscape of sheared amorphous solids and the
irreversibility transition
Ido Regev [email protected] Department of Solar Energy and Environmental
Physics, Jacob Blaustein Institutes for Desert Research, Ben-Gurion University
of the Negev, Sede Boqer Campus 84990, Israel Ido Attia Jacob Blaustein
Institutes for Desert Research, Ben-Gurion University of the Negev, Sede Boqer
Campus 84990, Israel Karin Dahmen Department of Physics, University of
Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801, USA
Srikanth Sastry Jawaharlal Nehru Centre for Advanced Scientific Research,
Jakkar Campus, 560064 Bengaluru, India Muhittin Mungan [email protected]
bonn.de Institut für angewandte Mathematik, Universität Bonn, Endenicher Allee
60, 53115 Bonn, Germany
###### Abstract
Recent experiments and simulations of amorphous solids plastically deformed by
an oscillatory drive have found a surprising behavior: for small strain
amplitudes the dynamics can be reversible, contrary to the usual notion of
plasticity as an irreversible form of deformation. This
reversibility allows the system to reach limit-cycles in which plastic events
repeat indefinitely under the oscillatory drive. It was also found that
reaching reversible limit-cycles can take a large number of driving cycles,
and it was surmised that the plastic events encountered during the transient
period are not encountered again and are thus irreversible. Using a graph
representation of the stable configurations of the system and the plastic
events connecting them, we show that the notion of reversibility in these
systems is more subtle. We find that reversible plastic events are abundant,
and that a large portion of the plastic events encountered during the
transient period are actually reversible, in the sense that they can be part
of a reversible deformation path. More specifically, we observe that the
transition graph can be decomposed into clusters of configurations that are
connected by reversible transitions. These clusters are the strongly connected
components of the transition graph and their sizes turn out to be power-law
distributed. The largest of these are grouped in regions of reversibility,
which in turn are confined by regions of irreversibility whose number
proliferates at larger strains. Our results provide an explanation for the
irreversibility transition: the divergence of the transient period at a
critical forcing amplitude. The long transients result from transitions
between clusters of reversibility in a search for a cluster large enough to contain a
limit-cycle of a specific amplitude. For large enough amplitudes, the search
time becomes very large, since the sizes of the limit cycles become
incompatible with the sizes of the regions of reversibility.
## I Introduction and Summary
Understanding the response of a configuration of interacting particles in a
disordered solid to an externally imposed force is one of the main challenges
currently facing researchers in the fields of soft matter physics and rheology
[1, 2]. As an amorphous solid adapts to the imposed forcing, it starts to
explore its complex energy landscape which gives rise to rich dynamics [3, 4].
One example of such dynamics is the response to an oscillatory driving, which,
for small amplitudes, can lead to a cyclic response: a repeated sequence of
configurations whose period is commensurate with that of the driving force [5,
6, 7, 8, 9, 10, 11]. Such cyclic responses encode information and possess
“memory” about the forcing that caused them. Memory effects of this kind have
been observed experimentally [12], as well as numerically [13, 5], in
cyclically driven (sheared) amorphous solids, colloidal suspensions [2], and
other condensed matter systems, such as superconducting vortices and
plastically deformed crystals [14, 15, 16, 17, 18, 19]. Cyclic response is
important in many applications of plastic deformation such as fatigue
experiments and the stability of geophysical structures. Large Amplitude
Oscillatory Shear (LAOS) is used extensively to characterize the rheological
properties of soft materials [20].
An important feature of cyclic response in amorphous solids is that for small
shear amplitudes, the steady state response includes plastic events that keep
reoccurring in consecutive driving cycles and are in this sense reversible.
However, before the system settles in a cyclic response, it typically
undergoes a transient period in which the dynamics is not repetitive and the
plastic events can thus be regarded as irreversible. As the amplitude of shear
is increased, transients become increasingly long and eventually, at a
critical strain amplitude, the dynamics becomes completely irreversible. Here
we will consider this critical strain to be the yield strain [21, 6], though
it is sometimes referred to as the irreversibility transition.
Since both reversible and irreversible plastic events involve particle
rearrangements, it is not clear what distinguishes one from the other [21, 22,
23, 24, 25, 26, 27, 28, 29, 30]. Recently, we have shown that in the athermal,
quasi-static (AQS) regime we can rigorously describe the dynamics of driven
disordered systems in terms of a directed state-transition graph [31, 32, 33].
The nodes of this graph, the mesostates, correspond to collections of locally
stable particle configurations that transform into each other under applied
shear via purely elastic deformations. The edges of the graph therefore
describe the plastic events. Furthermore, we have demonstrated that such
transition graphs can be readily extracted from molecular simulations of
sheared amorphous solids [33]. The ability to link the topology of the AQS
transition graph with dynamics has provided a novel means of probing the
complex energy landscape of these systems. Here we show that analysis of such
graphs in terms of their strongly connected components (SCCs) [34] allows us
to distinguish between reversible and irreversible events and better
understand the organization of memory in these materials.
In the AQS networks of sheared amorphous solids, SCCs correspond to sets of
mesostates connected by plastic deformation pathways such that each mesostate
in the SCC is reachable from each other mesostate in the SCC by a plastic
deformation path. Due to this property, a plastic transition which is part of
these paths can be reached arbitrarily many times and is reversible.
Reversible plastic events are thus events that connect states within an SCC.
Conversely, irreversible plastic events are transitions between states in
different SCCs, since by their definition, transitions between mesostates
belonging to different SCCs cannot be reversed. The ability to identify
reversible plastic events as events inside SCCs and irreversible plastic
events as transitions between SCCs allows us to better understand the
transient and reversible dynamics of amorphous solids. At the same time, this
distinction facilitates comparing the properties of the corresponding plastic
events. We observe that changes in energy and stress during irreversible
events are significantly larger than for reversible events. While many
irreversible transitions occur at high stresses and energies associated with
yielding, we also find a significant number of irreversible transitions
occurring at much lower stresses and energies.
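The decomposition into SCCs described above is a standard graph computation. As a minimal sketch (our illustration, using a toy mesostate graph rather than data from the paper), Kosaraju's two-pass algorithm identifies the SCCs, after which each plastic transition can be classified as reversible (within an SCC) or irreversible (between SCCs):

```python
from collections import defaultdict

def strongly_connected_components(edges):
    """Kosaraju's two-pass algorithm on a directed transition graph."""
    graph, rgraph, nodes = defaultdict(list), defaultdict(list), set()
    for u, v in edges:
        graph[u].append(v)
        rgraph[v].append(u)
        nodes.update((u, v))
    # Pass 1: iterative DFS on the original graph, recording finish order.
    order, seen = [], set()
    for start in nodes:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            child = next(it, None)
            if child is None:
                order.append(node)
                stack.pop()
            elif child not in seen:
                seen.add(child)
                stack.append((child, iter(graph[child])))
    # Pass 2: DFS on the reversed graph in reverse finish order;
    # each tree found is one strongly connected component.
    comps, assigned = [], set()
    for start in reversed(order):
        if start in assigned:
            continue
        comp, stack = [], [start]
        assigned.add(start)
        while stack:
            node = stack.pop()
            comp.append(node)
            for child in rgraph[node]:
                if child not in assigned:
                    assigned.add(child)
                    stack.append(child)
        comps.append(comp)
    return comps

# Toy mesostate graph: A <-> B, B -> C, C <-> D.
edges = [('A', 'B'), ('B', 'A'), ('B', 'C'), ('C', 'D'), ('D', 'C')]
comps = strongly_connected_components(edges)
comp_of = {n: i for i, c in enumerate(comps) for n in c}
# A plastic event is reversible iff it stays inside one SCC.
reversible = [e for e in edges if comp_of[e[0]] == comp_of[e[1]]]
irreversible = [e for e in edges if comp_of[e[0]] != comp_of[e[1]]]
print(irreversible)  # [('B', 'C')] -- the only transition between SCCs
```

On the toy graph the two SCCs are {A, B} and {C, D}; the single edge B → C between them can never be retraced, matching the definition of an irreversible event given above.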
We further study the properties of SCCs and find that their overall size
distribution follows a power-law. For strains near and above yield, very small
SCCs proliferate. Since the plastic events associated with cyclic response are
reversible, they must be confined to a single SCC. The statistics of SCC sizes
thus provides an estimate of the memory retention capability and its
dependence on the strain amplitude of the driving. Furthermore, these findings
also shed light on the mechanisms giving rise to the long transient dynamics
observed in cyclically sheared amorphous solids. We show that reversible
plastic events are dominant up to a strain of about $\gamma_{\rm rev}=0.085$,
which is below the yield strain $\gamma_{y}=0.135$ in this system. For strains
above $\gamma_{\rm rev}$ and approaching yielding, irreversible plastic events
become increasingly dominant. This finding suggests that there is a change in
the dynamic response of these systems as the driving crosses from the
below-yield to the near-yield regime around $\gamma_{\rm rev}=0.085$. Indeed, we
find that in the sub-yield regime $\gamma<\gamma_{\rm rev}$, large SCCs are
readily available and the transient to a limit-cycle is largely constrained by
finding the right one, i.e. a response where all plastic transitions are
reversible and thus confined to the same SCC. In the near-yield regime,
$\gamma_{\rm rev}<\gamma<\gamma_{y}$, on the other hand, the SCC size does matter.
This regime is characterized by small SCCs and hence SCCs of the required size
are rare. As a result, the transient dynamics is dominated by a search for an
SCC of the appropriate size.
Figure 1: (a) Illustration of the construction of a catalogue of mesostates
starting from the reference state $O$ at generation $g=0$. Transitions in
black/gray (red/orange) designate $\mathbf{U}$- ($\mathbf{D}$-) transitions.
Under $\mathbf{U}$- and $\mathbf{D}$-transitions we obtain $2$, $4$, and $5$ new
mesostates at generations $g=1,2$, and $3$, respectively. Transitions leading
to new mesostates at each generation have been highlighted. (b) A mesostate
transition graph generated from an initial configuration $O$ (marked as a red
dot) with several strongly connected components (SCCs) highlighted in
different colors. The largest 6 SCCs have sizes 929 (green), 222 (brown), 115
(yellow), 90 (orange), 37 (cyan), and 20 (purple). Transitions within an SCC
correspond to reversible plastic events, since for any deformation path
connecting two states in an SCC, by definition there is also a reverse path.
Irreversible plastic events are transitions between states belonging to
different SCCs. (c) The inter-SCC graph is a compressed representation of the
graph in (b), showing the SCCs as squares with colors that correspond to the
colors in (b). The arrows connecting the SCCs are the irreversible plastic
events and the inter-SCC graph is therefore acyclic.
## II Results
### II.1 Mesostates, AQS state transition graphs, and mutual reachability
Consider the athermal dynamics of an amorphous solid being subject to shear
strain along a fixed direction. After its initial preparation, before the
system is subject to any external forcing, it is in a local minimum of its
potential energy. As we increase the strain in a slow and adiabatic manner,
the energy landscape deforms and the position of the local energy minimum in
configuration space changes. For a range of strains that is dependent on the
particle configuration, the amorphous solid adapts by purely elastic
deformation to the strain increments. This elastic response lasts until we
reach a value of the strain where the particle configuration attained ceases
to be a local energy minimum and thus becomes unstable. Increasing the strain
further, the system relaxes into a new local energy minimum and this
constitutes a plastic event. Thus, given a locally stable configuration of
particles, there exists a range of strains, applied in the positive and
negative directions, over which an amorphous solid adapts to changes in the
applied strain in a purely elastic manner and which is punctuated on either
end by plastic events. In [33] we have called such a contiguous collection of
locally stable equilibria a “mesostate”. Thus with each mesostate $A$ we can
associate a range of strain values $(\gamma^{-}[A],\gamma^{+}[A])$, over which
the locally stable configurations transform elastically into each other and
that is limited by plastic events at $\gamma^{\pm}[A]$. When a plastic event
occurs, the system reaches a new, locally stable, configuration which must
necessarily belong to some other mesostate $B$. Since mesostate transitions
are triggered at either end of the stability interval
$(\gamma^{-}[A],\gamma^{+}[A])$, we call transitions under strain increase and
decrease $\mathbf{U}$-, respectively, $\mathbf{D}$-transitions. For example,
if mesostate $B$ is reached under a $\mathbf{U}$-transition from $A$, we write
this symbolically as $B=\mathbf{U}A$. The mesostate transitions under strain
increases and decreases have a natural representation as a directed graph, the
AQS state transition graph. Here each vertex is a mesostate and from each
vertex we have two outgoing directed transitions, namely one under
$\mathbf{U}$ and the other under $\mathbf{D}$. As explained above, in the
context of sheared amorphous solids, the transitions of the AQS graphs
correspond to purely plastic events. These events can be traced back to
localized regions in the sample, the soft spots, where a small number of
particles undergo a rearrangement. In the simplest picture, soft spots can be
thought of as two-level hysteretic elements [35, 36], which interact with each
other via Eshelby-type long range elastic deformation fields [37].
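The two-level hysteretic element picture can be made concrete with a minimal sketch (illustrative class and threshold names; the Eshelby-type elastic couplings of [37] are not modeled):

```python
class Hysteron:
    """Minimal two-level hysteretic element (sketch).

    The internal state s is +1 or -1; it flips to +1 once the local
    field reaches h_plus and back to -1 once it drops to h_minus.
    Because h_minus < h_plus, the response traces a hysteresis loop.
    """

    def __init__(self, h_minus, h_plus, s=-1):
        assert h_minus < h_plus, "thresholds must be ordered"
        self.h_minus, self.h_plus, self.s = h_minus, h_plus, s

    def drive(self, h):
        """Apply a local field h and return the (possibly flipped) state."""
        if h >= self.h_plus:
            self.s = +1
        elif h <= self.h_minus:
            self.s = -1
        return self.s
```

Driving such an element past $h^{+}$ and back leaves its state unchanged only if the field never drops to $h^{-}$: the single-element analogue of a reversible event.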
Since the AQS transition graph represents the plastic deformation paths under
every possible history of applied shear along a fixed axis, the dynamic
response of the amorphous solid will be encoded in the graph topology. The
connection with soft-spot interactions was already explored in [33], and our
aim here is to explore the implications of graph topology on the dynamics. To
this end, we perform a decomposition of the graph into its strongly connected
components. This decomposition is based on the relation of mutual reachability
of mesostates, which is defined as follows: two mesostates $A$ and $B$ are
said to be mutually reachable if there is a sequence of $\mathbf{U}$ and
$\mathbf{D}$ transitions that lead from $A$ to $B$ and back from $B$ to $A$.
Mutual reachability is an equivalence relation: if $A$ and $B$ are mutually
reachable and $B$ and $C$ are mutually reachable, then $A$ and $C$ are also
mutually reachable. Thus mutual reachability partitions the set of mesostates
of the AQS transition graph into (disjoint) sets of mutually reachable states.
In network theory such sets are called strongly connected components (SCCs)
[34].
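In code, the SCC decomposition of an AQS transition graph can be computed in linear time with a standard algorithm. The sketch below uses Kosaraju's two-pass method and assumes the graph is given as two dictionaries `U` and `D` mapping each mesostate to its successor (illustrative names, not the authors' implementation):

```python
from collections import defaultdict

def strongly_connected_components(U, D):
    """SCCs of an AQS transition graph via Kosaraju's two-pass algorithm.

    U, D : dicts mapping a mesostate to the mesostate it transits into
    under strain increase (U) or decrease (D).
    Returns a list of sets of mutually reachable mesostates.
    """
    nodes = set(U) | set(D)
    succ = {v: [t[v] for t in (U, D) if v in t] for v in nodes}
    pred = defaultdict(list)
    for v in nodes:
        for w in succ[v]:
            pred[w].append(v)

    # Pass 1: iterative DFS on the forward graph, recording post-order.
    order, seen = [], set()
    for root in nodes:
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(succ[root]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if w in nodes and w not in seen:
                    seen.add(w)
                    stack.append((w, iter(succ[w])))
                    advanced = True
                    break
            if not advanced:
                order.append(v)
                stack.pop()

    # Pass 2: DFS on the reversed graph in reverse post-order.
    assigned, sccs = set(), []
    for v in reversed(order):
        if v in assigned:
            continue
        comp, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x in assigned:
                continue
            assigned.add(x)
            comp.add(x)
            stack.extend(p for p in pred[x] if p not in assigned)
        sccs.append(comp)
    return sccs
```

Two mesostates are then mutually reachable exactly when they appear in the same returned component.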
### II.2 AQS transition graphs from simulations
As we have shown in [33], it is possible to extract AQS state transition
graphs from simulations of a sheared amorphous solid. The details of the
construction of such graphs can be found in Appendix A.1 and A.2, as well as
the Supporting Material of ref. [33]. Here we summarize the main procedure and
our data. We start with an initial stable particle configuration belonging to
a mesostate $O$, which we call the reference state, and we assign $O$ to
generation $g=0$. We then determine its range of stability $\gamma^{\pm}[O]$,
as well as the mesostates $\mathbf{U}O$ and $\mathbf{D}O$ that it transits
into. The latter are the mesostates of generation $g=1$. Proceeding generation
by generation, and identifying mesostates that have been encountered at a
previous generation, we can assemble a catalog of mesostates, which (i) lists
the stability range of each mesostate and (ii) identifies the mesostates
that these transit into under $\mathbf{U}$- and $\mathbf{D}$-transitions.
Fig. 1(a) illustrates the initial stages of the catalog acquisition. We have
extracted from numerical simulations $8$ catalogs, each corresponding to a
different initial configuration quenched from a liquid. These catalogs contain
a total of nearly $400$k mesostates and we identified the SCCs that they
belong to. Table 1 in the Appendix A.2 summarizes our data. Fig. 1(b) shows a
portion of an AQS state transition graph obtained from catalog #1 of the
data set. The excerpt shown contains $1542$ mesostates. The reference state
$O$, containing the initial configuration, is marked by a big circle (in red)
and nodes belonging to the same SCC have the same color. SCCs with fewer than
15 nodes are shown in dark gray. The largest 6 SCCs shown have sizes 929
(green), 222 (brown), 115 (yellow), 90 (orange), 37 (cyan), and 20 (purple).
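The generation-by-generation acquisition can be sketched as a breadth-first search; `stability_range`, `u_transition`, and `d_transition` below are hypothetical callables standing in for the numerical procedure of Appendix A.1 and A.2:

```python
from collections import deque

def build_catalog(O, stability_range, u_transition, d_transition, max_generation):
    """Breadth-first acquisition of a mesostate catalog (sketch).

    O                : reference mesostate (generation g = 0)
    stability_range  : mesostate -> (gamma_minus, gamma_plus)
    u_transition     : mesostate -> mesostate reached at gamma_plus (U-event)
    d_transition     : mesostate -> mesostate reached at gamma_minus (D-event)

    Returns (U, D, ranges): the transition tables and stability intervals.
    Mesostates first met at max_generation form the catalog boundary and
    receive a stability range but no outgoing transitions.
    """
    U, D, ranges = {}, {}, {}
    frontier = deque([(O, 0)])
    ranges[O] = stability_range(O)
    while frontier:
        A, g = frontier.popleft()
        if g >= max_generation:
            continue
        for table, step in ((U, u_transition), (D, d_transition)):
            B = step(A)
            table[A] = B
            if B not in ranges:          # first encounter: next generation
                ranges[B] = stability_range(B)
                frontier.append((B, g + 1))
    return U, D, ranges
```

In the actual construction the three callables are evaluated by athermal quasistatic simulation; here they are pure placeholders.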
Figure 2: (a) Normalized distributions of energies at the onset of reversible
(blue dots) and irreversible (red squares) plastic events (transitions). (b)
Normalized distributions of the energy drops during reversible (blue dots) and
irreversible (red squares) plastic events (transitions). The results in this
figure combine data sampled from all $8$ catalogs.
Figure 3: (a,b) Density plot of stress vs. the energy at the onset of
reversible (a) and irreversible (b) plastic events. The overall parabolic
shape of the scattered points corresponds to the bulk elastic response of the
samples. (c,d) The stress drop after a plastic event vs. the stress at the
onset of the plastic event for reversible (c) and irreversible (d)
transitions. The color bars to the right depict the color-coding of the
density from low (dark/blue) to high (bright/yellow). The results in this
figure combine data sampled from all $8$ catalogs.
### II.3 Reversible and Irreversible Plastic Events
The partition of the mesostates of an AQS transition graph into SCCs allows us
to identify two types of transitions: transitions within the same SCC and
transitions connecting different SCCs. The former transitions are plastic
deformations that can be reverted, since mutual reachability assures that for
any transition from $A$ to $B$ there exists a sequence of transitions from $B$
to $A$. We will therefore call these transitions reversible (note that this
is not reversibility in the thermodynamic sense, since plastic events involve
energy dissipation). On the other hand, transitions between two different SCCs
must necessarily be irreversible: there is a plastic deformation path from a
mesostate in one SCC to a mesostate in another SCC, but there is no
deformation path back. If there had been one, these two states would have been
mutually reachable, and therefore part of the same SCC. Further details on
identifying transitions as reversible are given in Section A.3 of the
Appendix. We can condense the transition graph by collapsing all states
belonging to an SCC into a single vertex so that only transitions between SCCs
remain [39], i.e. the irreversible transitions. The graph obtained in this way
is the inter-SCC graph, and by construction, this graph is acyclic, i.e. it
cannot contain any paths that lead out of and return to the same SCC. Fig.
1(c) shows the inter-SCC graph obtained from the mesostate transition network
shown in panel (b). The size of the vertices represents the size of the
respective SCCs with a logarithmic scaling as indicated in the legend to the
right of the figure. The color and placement of the SCC vertices follows those
of panel (b).
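Given the list of SCCs, the inter-SCC graph is obtained in one pass over the transitions (a sketch with illustrative names; only transitions whose endpoints lie in different SCCs survive, so the result is acyclic by construction):

```python
def condense(U, D, sccs):
    """Collapse each SCC into a single vertex of the inter-SCC graph.

    U, D : transition tables (mesostate -> mesostate)
    sccs : list of sets of mesostates covering the catalog

    Returns a dict: SCC index -> set of SCC indices reached by
    irreversible (inter-SCC) transitions.
    """
    comp_of = {v: i for i, comp in enumerate(sccs) for v in comp}
    dag = {i: set() for i in range(len(sccs))}
    for table in (U, D):
        for a, b in table.items():
            # keep only transitions that cross an SCC boundary
            if a in comp_of and b in comp_of and comp_of[a] != comp_of[b]:
                dag[comp_of[a]].add(comp_of[b])
    return dag
```

Any cycle in the result would make its member SCCs mutually reachable and hence a single SCC, contradicting the decomposition.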
Since the SCC decomposition allows us to distinguish reversible from
irreversible plastic events, we can use it to compare their properties. In
In Figs. 2 and 3 we compare the statistics of reversible and irreversible events
across all $8$ catalogs. In Fig. 2(a) we show the energies at the
onset of reversible (blue dots) and irreversible (red squares) plastic events.
We see that while reversible events occur predominately at low energies, the
distribution for irreversible events is bimodal: there is a concentration of
events at low energies and another concentration at high energies. Fig 3(a,b)
shows density plots of the stress at which reversible and irreversible plastic
events occur as a function of the energy. We can see that the secondary peak
of irreversible transitions at higher energies corresponds to stresses $\sigma$
close to and above the yield stress (the stress at the
yielding/irreversibility transition), which is $\sigma_{y}\sim 2.5$ in
simulation units, and that reversible events are much scarcer in this region.
In Fig 2(b) we compare the energy drops due to reversible and irreversible
plastic events. We can see that both exhibit power-law behavior. The
irreversible events, while showing a strong cutoff, give rise to much larger
energy drops in general and correspond to large collective particle
rearrangements (avalanches). In Fig 3(c,d) we show a density plot of the
stress drops $\Delta\sigma$ and stresses $\sigma$ associated with reversible
and irreversible plastic events, respectively. The figure reveals that the
events accompanied by large stress avalanches are concentrated close to and
above the yield stress and exhibit a secondary peak in the density plot of the
irreversible events. While it is obvious that close to yielding the system
experiences a large number of large irreversible events, the figure also
clearly shows the presence of a large number of irreversible events with small
stress drops at stresses much below yield. In the following we shall argue
that these events play a role in the transient dynamics observed in
simulations under oscillatory shear at sub-yield strain amplitudes [6, 5, 27,
28].
Figure 4: (a) The SCC size distribution taken from all $8$ catalogs
(in blue) exhibits a heavy tail. The solid line is a power-law with exponent
$2.67$ and serves as a guide to the eye. Colors other than blue correspond to
distributions derived from the same catalogs but only up to a maximal
generation number of $24$, $28$, $32$, and $36$, demonstrating that the
distribution of SCC
sizes becomes stable for networks significantly smaller than the ones used to
calculate the exponent. (b) Plastic deformation history leading from the
initial state $O$ to a mesostate $A$ of the catalog after $g=40$ plastic
events. Each vertical blue line is an intermediate mesostate $P$ with its
stability range $(\gamma^{-}[P],\gamma^{+}[P])$, while the horizontal line
segments in black ($\mathbf{U}$) and red ($\mathbf{D}$) that connect adjacent
mesostates indicate the strains at which the corresponding plastic events
occurred. For each mesostate $A$ and deformation history, we can identify the
largest and smallest strains under which a $\mathbf{U}$-, respectively
$\mathbf{D}$-transition occurred, $\gamma^{\pm}_{\rm max}$, as illustrated by
the extended horizontal lines. (c) Deformation path history dependence of
$k_{\rm REV}$: each dot represents a mesostate of catalog #1. The
coordinates of each dot represent the largest positive and negative strains
$\gamma_{\rm max}^{\pm}$, cf. panel (b), that were required to reach a
specific mesostate, while their color represents how many reversible
transitions $k_{\rm REV}=0,1$, or $2$, go out of it, as indicated in the
legend. The location of the yield strain in both positive and negative
direction have been marked by dotted vertical and horizontal lines. The region
highlighted by the light blue triangle contains the set of all mesostates that
can be reached without ever applying a shear strain whose magnitude exceeds
$|\gamma_{\rm max}^{\pm}|=0.085$. The prevalence of mesostates with $k_{\rm
REV}=2$ (blue dots) inside this region, implies that mesostates reached by
applying strains whose magnitudes remain below $0.085$ undergo predominantly
reversible transitions, i.e. lead to mesostates that are part of the same SCC.
(d) Scatter plot of the mesostates with $|\gamma_{\rm max}^{\pm}|\leq 0.085$
across the $5$ catalogs with $40$ or more generations. As was the case for the
single data set shown in panel (c) of this figure, the region $|\gamma_{\rm
max}^{\pm}|\leq 0.085$ shows a high degree of reversibility across all $5$
catalogs: the region contains $9298$ mesostates out of which $7728$ have
$k_{\rm REV}=2$ and $1194$ have $k_{\rm REV}=1$ outgoing reversible
transitions. Inset: mean size of the SCC that a mesostate belongs to, given that it
is stable at some strain $\gamma$ calculated from all $8$ catalogs. Error bars
represent the standard deviation of fluctuations around the mean. The figure
shows that mesostates stable at large strains tend to belong to small SCCs.
### II.4 AQS transition graph topology
Fig. 4(a) shows the size distribution of SCCs extracted from all eight
catalogs. The solid line is a power-law with exponent $2.67$ and
serves as a guide to the eye. We estimated the power-law exponent and its
uncertainty using the maximum-likelihood method described in [40], and by
considering only the $24488$ SCCs with sizes $s_{\rm SCC}\geq s_{\rm min}=4$.
This choice was motivated by the empirical observation that small SCCs
containing mesostates at the largest generations of the catalog are more
likely to increase in size if the catalog is augmented by going to a higher
number of generations. The exponent depends on the choice of cutoff $s_{\rm
min}$: for $s_{\rm min}=1,2,3$, and $4$, we obtain (number of data points
indicated in parentheses) the exponents $2.033\pm 0.003$ $(169049)$, $2.529\pm
0.005$ $(81528)$, $2.60\pm 0.01$ $(40021)$, and $2.67\pm 0.01$ $(24488)$,
respectively. The exponents for $s_{\rm min}=2,3$, and $4$ all fall into an
interval between $2.5$ and $2.7$, while the exponent of $2.033$ obtained with
the cut-off $s_{\rm min}=1$ seems to be significantly different. In fact, as
we will show shortly, close to yielding there is a proliferation of SCCs with
size one and this affects the estimate of the exponent. Thus the distribution
of SCC sizes is broad, following a power-law $s_{\rm SCC}^{-\alpha}$, with an
exponent of about $\alpha=2.67$ and with the main source of uncertainty in
$\alpha$ coming from the choice of the lower cut-off $s_{\rm min}$. Fig 4(a)
also compares this distribution against the distributions obtained by
truncating the catalogs at a maximal generation number. It is
clear that the distribution does not change significantly.
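The exponent estimate referred to above can be sketched with the continuous-approximation maximum-likelihood formula of Clauset et al., $\hat{\alpha} = 1 + n / \sum_i \ln[s_i/(s_{\rm min}-1/2)]$, with the customary half-integer shift of the cutoff for discrete data. This is a simplified stand-in for the full discrete MLE of [40]:

```python
import math

def powerlaw_mle(sizes, s_min):
    """MLE of alpha for P(s) ~ s^{-alpha}, s >= s_min, using the
    continuous approximation for discrete data (cutoff shifted to
    s_min - 1/2). Returns (alpha_hat, standard_error, n_tail)."""
    tail = [s for s in sizes if s >= s_min]
    n = len(tail)
    log_sum = sum(math.log(s / (s_min - 0.5)) for s in tail)
    alpha = 1.0 + n / log_sum
    return alpha, (alpha - 1.0) / math.sqrt(n), n
```

The standard error $(\hat{\alpha}-1)/\sqrt{n}$ only captures sampling noise; as noted above, the dominant uncertainty comes from the choice of $s_{\rm min}$.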
Next, we ask for the “location” of SCCs in the transition graph by looking for
correlations between the plastic deformation history of a mesostate $A$ and
the number of reversible transitions that are going out of it, $k_{\rm
REV}[A]$. Recall that each mesostate in our catalog is reached from the
reference configuration $O$ by a sequence of $\mathbf{U}$- and
$\mathbf{D}$-transitions. We call this the plastic deformation history path of
$A$, as illustrated in Fig. 4(b). Additional details on deformation history
are provided in Section A.3 of the Appendix. For each mesostate and
deformation history path, we can identify the largest and smallest strains
under which a $\mathbf{U}$-, respectively $\mathbf{D}$-transition occurred,
$\gamma^{\pm}_{\rm max}$. These values are indicated in Fig 4(b) by the
horizontal dashed lines. Fig 4(c) shows a scatter plot obtained from catalog
#1 of our data set. Here each dot corresponds to a mesostate $A$ that is
placed at $(\gamma^{-}_{\rm max}[A],\gamma^{+}_{\rm max}[A])$. Since
$\gamma^{-}_{\rm max}[A]<\gamma^{+}_{\rm max}[A]$, the dots are scattered
above the central diagonal of the figure. The location of the yield strain
$\gamma_{y}=0.135$ of the sample is indicated by the dashed vertical and
horizontal lines. We have color-coded the mesostates according to the number
$k_{\rm REV}[A]$ of outgoing reversible transitions, with blue, light red and
gray corresponding to $2$, $1$, and $0$ possible reversible transitions,
respectively. Note that multiple mesostates can have the same values of the
extremal strains $\gamma^{\pm}_{\rm max}$ and hence will be placed at the same
location in the scatter plot. In order to reveal correlations between the
straining history and $k_{\rm REV}$, we have first plotted the data points for
which $k_{\rm REV}=2$, next those for which $k_{\rm REV}=1$, and finally,
$k_{\rm REV}=0$. In spite of this over-plotting sequence, there appears a
prominent central “blue” region that is bounded by $\gamma^{-}_{\rm
max}\geq-0.085$ and $\gamma^{+}_{\rm max}\leq 0.085$. This region contains
$1783$ mesostates out of which $1448$ have $k_{\rm REV}=2$, $257$ have $k_{\rm
REV}=1$, while $78$ mesostates have $k_{\rm REV}=0$. Thus $88\%$ of the
transitions out of these mesostates are reversible (since the total number
of states in this region is $1783$, the total number of outgoing transitions,
being twice this number, is $3566$; the total number of reversible transitions
is $2\times 1448+257=3153$, hence the fraction of reversible transitions is
$0.88$). States with a deformation history in which the magnitude of the
applied strain never exceeded $0.085$ are therefore highly likely to deform
reversibly under $\mathbf{U}$- or $\mathbf{D}$-transitions. Since
irreversible transitions are rare in this region, and it is only these
transitions that connect different SCCs, a further implication of this finding
is that the mesostates in this regime must be organized in a small number of
SCCs, and we therefore expect these to be large. Upon inspection, we find that
the mesostates in this region belong to $199$ SCCs with the largest $8$ SCCs
having sizes $s_{\rm SCC}=929,306,222,115,90,81,33$, and $30$ (the total
size of the $199$ SCCs containing the $1783$ mesostates in this region is
$2587$; thus an additional $804$ mesostates belong to these SCCs, but are
outside the reversibility region; all $2587$ mesostates turn out to be
contained in the region $\gamma^{-}_{\rm max}\geq-0.095$ and $\gamma^{+}_{\rm
max}\leq 0.096$). The excerpt of the transition graph shown in Figs. 1(b) and
1(c) contains some of these SCCs. We have verified that such reversibility
regions are present in each of the 8 catalogs we extracted and with similar
extents in strain $|\gamma^{\pm}_{\rm max}|\lesssim 0.085$. Fig. 4(d) shows a
scatter plot of the mesostates with $|\gamma^{\pm}_{\rm max}|\leq 0.085$
sampled from the $5$ catalogs with $40$ or more generations. This region
contains $9298$ mesostates out of which $7728$ have only reversible outgoing
transitions ($k_{\rm REV}=2$), while for $1194$ mesostates one of the two
transitions is reversible ($k_{\rm REV}=1$).
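Given the SCC decomposition, $k_{\rm REV}$ follows directly, since a transition is reversible exactly when its endpoint lies in the same SCC as its origin. A sketch with illustrative names:

```python
def reversible_out_degree(U, D, sccs):
    """k_REV of each mesostate: how many of its two outgoing
    transitions (U and D) end in the same SCC as the mesostate
    itself and are therefore reversible. Returns a dict
    mesostate -> k_REV in {0, 1, 2}."""
    comp_of = {v: i for i, comp in enumerate(sccs) for v in comp}
    k_rev = {}
    for a in comp_of:
        k = 0
        for table in (U, D):
            b = table.get(a)
            if b is not None and comp_of.get(b) == comp_of[a]:
                k += 1
        k_rev[a] = k
    return k_rev
```

Scatter plots such as Fig. 4(c,d) then amount to plotting each mesostate at its extremal strains, colored by this value.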
One prominent feature of the transition graph excerpt shown in Figs. 1(b) and
(c) is the large hub-like SCC (green) with $s_{\rm SCC}=929$ mesostates and an
out-degree of $39$, i.e. $39$ $\mathbf{U}$- or $\mathbf{D}$-transitions to
neighbouring SCCs. Hubs are a common feature of scale-free networks, which
typically emerge via a stochastic growth process of self-organization [43,
34]. Such networks are characterized by heavy-tailed degree distributions.
Note that a mesostate transition graph is generated from a single disordered
initial configuration of particles, by a deterministic process for the
acquisition of mesostates and the identification of transitions between them.
The initial configuration itself has been obtained from a liquid state through
a quench to zero temperature. The transition graphs can therefore be regarded
as quenched disordered graphs, linked via the catalog acquisition process to
an ensemble of initial configurations extracted from the liquid state [32,
31]. We have computed degree distributions of the inter-SCC graphs over the
full catalog as well as when restricted to the reversibility regions. For
example, among the $199$ SCCs associated with the reversibility region of
catalog #1, Fig. 4(c), the largest $8$ SCCs also have the largest degrees,
given by $k=39,12,7,4,3,3,2$, and $2$. The reversibility regions of all $8$
catalogs display similar network features: in each of these we observe a few
SCCs with large degrees that are superposed on a background of a very large
number of SCCs with very small degrees. Note that every SCC has to have at
least two outgoing irreversible transitions, as explained in Section A.3 of
the Appendix. While these findings per se do not rule out the possibility of a
scale-free inter-SCC graph, there are not enough SCCs with large degrees in our
catalogs to deduce a heavy-tailed degree distribution.
Figure 5: Large excerpt of the inter-SCC graph, cf. Fig. 1(c), obtained from
catalog #1. Shown is the sub-graph of $3228$ SCCs (squares) that can be
reached from the SCC of the initial state by at most $15$ inter-SCC
transitions. These SCCs contain a total of $12790$ mesostates. The size of the
SCCs is indicated by the legend in the top left corner of the figure. The
coloring scheme of the SCCs indicates the (average) number of yield events in
the plastic deformation history of the mesostates constituting the SCC, i.e.
the number of mesostate transitions in their deformation history that occur at
stresses of magnitude $2.5$ and higher. The figure shows patches of SCCs with
the same number of yield events. Among these, the 'green' patch of SCCs, whose
constituent mesostates have experienced no yield events, is dominant. Note
that even for $2$ (orange) or more yield events there are relatively extended
patches of SCCs of large sizes, such as the orange patch in the top left part
of the graph. These findings suggest that the transition graph contains
multiple reversibility regions, such as the one shown in Fig. 4(c), that
differ only by the common history of past yield events of their constituent
mesostates.
The inset of Fig. 4(d) shows the (conditional) average size of the SCC to
which a mesostate $A$ belongs, given that $A$ is stable at some strain of
magnitude $|\gamma|$, i.e. we average over the sizes of the SCCs of all
mesostates $A$ for which either $\gamma^{-}[A]<|\gamma|<\gamma^{+}[A]$,
or $\gamma^{-}[A]<-|\gamma|<\gamma^{+}[A]$ holds. Further details are provided
in Section A.3 of the Appendix. The vertical error bars show the standard
deviations around the averages. For $|\gamma|\lesssim 0.05$, the mean SCC size
is around $30$. The distribution of SCC sizes in this region is very broad, as
can be seen from the standard deviations, which are much larger than the mean
values. For larger strains, the mean SCC size gradually drops to $1.2$,
accompanied by increasingly smaller standard deviations. Since all states of a
given catalog are reached from the same ancestral mesostate $O$ at zero
strain, a mesostate whose strain history has never experienced a strain of
magnitude larger than $\gamma_{\rm max}$ must be stable at some strain
$\gamma$ with $-\gamma_{\rm max}\leq\gamma\leq\gamma_{\rm max}$. Thus
mesostates inside the reversibility region are stable at strain values that
are also within these ranges. We therefore conclude that the mesostates in the
reversibility region are dominantly organized in few large SCCs whose sizes
follow a broad distribution and that mesostates stable at larger strains tend
to be part of smaller SCCs.
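The conditional average just described can be sketched as follows (illustrative names; a mesostate contributes at strain magnitude $|\gamma|$ if either $+|\gamma|$ or $-|\gamma|$ lies inside its stability interval):

```python
def mean_scc_size_at_strain(ranges, scc_size_of, gammas):
    """Mean SCC size conditioned on stability at strain +/-|gamma|,
    cf. the inset of Fig. 4(d).

    ranges      : mesostate -> (gamma_minus, gamma_plus)
    scc_size_of : mesostate -> size of the SCC containing it
    gammas      : strain magnitudes at which to evaluate the average
    """
    out = {}
    for g in gammas:
        sizes = [scc_size_of[a] for a, (lo, hi) in ranges.items()
                 if (lo < g < hi) or (lo < -g < hi)]
        out[g] = sum(sizes) / len(sizes) if sizes else float("nan")
    return out
```

The standard deviation of the same conditional sample gives the error bars shown in the inset.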
Turning now to the mesostates placed outside the reversibility region, it can
be seen from Fig 4(c) that these are more likely to have one or more outgoing
irreversible transitions, i.e. $k_{\rm REV}=1,0$. Note that mesostates in
this regime are a mixture of (i) mesostates stable at low strain values,
which, however, experienced large strains in their history and subsequently
were driven back to lower strains, and (ii), mesostates stable at large
strains. The choice of plotting these mesostates against maximal strains in
their deformation history blurs this distinction. However, we checked that the
mesostates of (i) are part of some other regions of reversibility and also
similarly organized into larger SCCs. On the other hand, mesostates in (ii)
must belong to comparatively small SCCs, as indicated by the inset of Fig
4(d). In order to support our expectations regarding the mesostates of type
(i), we have inspected the deformation history of mesostates belonging to an
SCC, counting the number $n_{Y}$ of times the magnitude of the stress exceeds
the yield stress (the stress at the yielding/irreversibility transition)
$\sigma_{y}\approx 2.5$ in their deformation history. We find that this number
is nearly constant across all mesostates constituting an SCC, differing only
from SCC to SCC. Fig. 5 shows a large excerpt of the inter-SCC graph, cf.
Fig. 1(c), obtained from catalog #1. Shown is the sub-graph of $3228$ SCCs
(squares) that can be reached from the SCC of the initial state by at most
$15$ inter-SCC transitions. These SCCs contain a total of $12790$ mesostates.
The size of the SCCs is indicated by the legend in the top left corner of the
figure. The coloring scheme of the SCCs shown on the top left indicates the
number $n_{Y}$ of yield events in the plastic deformation history of the
mesostates constituting the SCC. The figure shows patches of SCCs with the
same number of yield events. Among these, the ’green’ patch of SCCs, whose
constituent mesostates have experienced no yield events, is dominant. Note
that even for $n_{Y}=2$ (orange) or more yield events, there are relatively
extended patches of SCCs of large sizes, such as the orange patch in the top
left part of the graph.
Putting all these findings together, we conclude that the landscape of local
energy minima accessible by arbitrary plastic deformation protocols is
composed of regions of reversibility with few but relatively large SCCs. The
mesostates belonging to these patches tend to have a common deformation
history, that differs from mesostates belonging to other reversibility regions
by the near-yield or yield events they suffered in their deformation history.
These reversibility regions are surrounded by significantly smaller SCCs
containing mesostates stable at strain values closer to yield.
### II.5 Response to cyclic shear
Our findings on the topology of the energy landscape, and its organization
into patches of regions in which plastic events are reversible, have direct
implication for the response of the amorphous solid to an applied oscillatory
shear strain. An evolution towards cyclic response, i.e. limit-cycles, is a
mechanism to encode memory of the past deformation history in such systems [2]
and the loss of the capability to do so at increasingly larger amplitudes is
believed to be related to the reversibility/irreversibility transition. We
start with the observation that the mesostates forming the cyclic response to
an applied oscillatory shear are mutually reachable and therefore must belong
to the same SCC: consider a simple cycle with a lower endpoint $R$, i.e. a
mesostate $R$, such that
$R=\mathbf{D}^{m}\mathbf{U}^{n}R\,.$ (1)
The intermediate states of this cycle are the mesostates
$R,\mathbf{U}R,\mathbf{U}^{2}R,\ldots\mathbf{U}^{n}R,\mathbf{D}\mathbf{U}^{n}R,\mathbf{D}^{2}\mathbf{U}^{n}R,\ldots,\mathbf{D}^{m}\mathbf{U}^{n}R=R$.
Any pair of these states is mutually reachable, and these states must be part
of the same SCC. Indeed, we find that many SCCs of our catalog contain cycles
of the form (1). Fig. 1(b) shows three different cycles that belong to the
yellow SCC. The $\mathbf{U}$- and $\mathbf{D}$-transitions forming the
cycles have been highlighted by black and red arrows, respectively. For the
state labeled $R$ we have $R=\mathbf{D}^{12}\mathbf{U}^{13}R$: the amorphous
solid returns to state $R$ after a sequence of 13 plastic events under
increasing strain followed by 12 plastic events under decreasing strain. A
cyclic response to oscillatory shear in which the period of the driving and
response coincide (harmonic response) must be of the form (1), and will be
produced by an applied cyclic shear such that $\gamma_{\rm min}\to\gamma_{\rm
max}\to\gamma_{\rm min}\to\ldots$, for some pair of strains $\gamma_{\rm min}$
and $\gamma_{\rm max}$. To relate the length of a limit cycle $\ell=m+n$, cf.
(1), to the drive amplitude, we extract from our catalog every possible limit-
cycle of the form (1) that is compatible with oscillatory forcing given by
$0\to\gamma\to-\gamma\to\gamma\ldots,$ (2)
for some amplitude $\gamma$. Across the $8$ catalogs, we identified a total of
$44642$ distinct limit-cycles. Grouping these limit-cycles by their length
$\ell$, we show in Fig. 6(a) the range of amplitudes $\gamma$ for which they
were observed (horizontal red line) along with their average amplitude (blue
dots). As expected, we find that the length $\ell$ of the cycle increases with
the amplitude of oscillatory shear. This behavior is well described by a
power-law with an exponent of $2.5$, as indicated in the figure.
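Cycles of the form (1) are straightforward to test and to search for on a catalog; a minimal sketch (illustrative names; compatibility with a specific driving amplitude as in (2) is not checked here):

```python
def is_limit_cycle(U, D, R, n, m):
    """Check Eq. (1): does D^m U^n map mesostate R back onto itself?"""
    s = R
    for _ in range(n):
        s = U[s]
    for _ in range(m):
        s = D[s]
    return s == R

def shortest_cycle_from(U, D, R, max_n=50, max_m=50):
    """Smallest-total-length pair (n, m) with D^m U^n R = R,
    or None if no such cycle exists within the search bounds."""
    best = None
    for n in range(1, max_n + 1):
        s = R
        for _ in range(n):
            s = U[s]
        t = s
        for m in range(1, max_m + 1):
            t = D[t]
            if t == R:
                if best is None or n + m < best[0] + best[1]:
                    best = (n, m)
                break
    return best
```

For the cycle highlighted in Fig. 1(b), for instance, `is_limit_cycle` would confirm $R=\mathbf{D}^{12}\mathbf{U}^{13}R$ with $(n,m)=(13,12)$.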
### II.6 Transient response and the reversibility/irreversibility transition
From the topology of the state transition graph we can now draw conclusions
about the nature of the transients towards cyclic response under oscillatory
shear. Since limit-cycles attained at increasingly larger amplitudes are
formed by a larger number $\ell$ of mesostates, these require increasingly
larger SCCs that can contain them since a limit cycle is always a part of an
SCC. In Fig 6(b), we show a scatter plot where each dot corresponds to an SCC,
and the size of the SCC is plotted against the strain amplitude of the limit
cycle with largest $\ell$ that it contains (small red dots). We have also
indicated the right boundary of the scatter region, marked by a dotted red line
connecting the extreme data points (more specifically, for each possible SCC
size $s$ we determined the largest amplitude $\gamma$, and from these pairs of
points $(s,\gamma)$ we extracted the boundary curve as $s_{\rm
max}(\gamma)=\max_{\gamma^{\prime}\leq\gamma}\{s^{\prime}:(s^{\prime},\gamma^{\prime})\}$,
i.e. the convex hull). In Fig. 6(b) we have superposed the data points of the
inset of Fig. 4(d), enabling us to compare the available SCC sizes with the
sizes of the actual SCCs selected by the limit-cycles reached with oscillatory
shear at amplitude $\gamma$. It is apparent that for amplitudes above $0.08$
the sizes of the selected SCCs are multiple standard deviations away from the
sizes of the available SCCs. This means that for these driving amplitudes,
SCCs large enough to contain the limit-cycles are rare, and thus transients are
expected to be long, as observed in simulations [5].
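The boundary-curve construction described above amounts to a running maximum over the scatter points as the amplitude is swept upward; a minimal sketch (variable names are ours):

```python
def boundary_curve(points):
    """points: iterable of (size, gamma) pairs, one per SCC.
    Returns the staircase s_max(gamma) = max{s' : gamma' <= gamma}
    as a list of (gamma, s_max) knots."""
    curve, s_max = [], 0
    for s, g in sorted(points, key=lambda p: p[1]):  # sweep gamma upward
        if s > s_max:                                # new record size
            s_max = s
            curve.append((g, s_max))
    return curve

# toy scatter: (SCC size, amplitude of largest contained limit-cycle)
knots = boundary_curve([(3, 0.02), (10, 0.05), (4, 0.06), (30, 0.10)])
```

The point $(4, 0.06)$ is dominated by the earlier record $(10, 0.05)$ and does not contribute a knot.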
Figure 6: (a) The range of strain amplitudes of oscillatory shear of the form
(2) that give rise to a limit-cycle consisting of $\ell$ plastic
events/mesostate transitions. Blue dots refer to average strain values for the
corresponding cycle lengths. The solid black line is a power-law with exponent
$2.5$. (b) and (c): Note that mesostates forming a cyclic response must belong
to the same SCC, and consequently a limit-cycle formed by $\ell$ mesostates
can only be part of an SCC whose size $s_{\rm SCC}\geq\ell$. (b) Scatter of
SCC size $s_{\rm SCC}$, against the strain amplitude $\gamma$ generating the
limit-cycle with largest $\ell$ contained in the SCC (small red dots). The
bigger red dots connected by a dashed line outline the boundary of this
region; they are the smallest SCCs that contain a limit-cycle of a given strain
amplitude $\gamma$. Blue dots with error bars re-plot the inset of Fig. 4(d),
displaying the average sizes of SCCs that contain states stable at strain
$\gamma$. (c) Scatter plot of the SCC sizes against the length $\ell_{\rm
max}$ of the largest limit cycles, under oscillatory shear (2) that these
contain. The red curve is a prediction of the Preisach model. Refer to text
for further details. The results in this figure combine data sampled from all
$8$ catalogs.
The above observations have implications for the length of transients towards
limit-cycles under oscillatory shear. They suggest that two separate dynamics
govern the transient response. At low driving amplitudes, and hence well
within the reversibility region, sufficiently large SCCs are abundant and
cyclic response sets in when a suitable sequence of reversible plastic
transitions has been reached and the SCC has thereby “trapped” the limit-
cycle. Here SCC size is not a limiting factor. On the other hand, for larger
amplitudes, i.e. amplitudes outside of the reversibility region but still
below yield, larger SCCs are needed, which, as we have shown, become
increasingly rare. It is thus plausible to assume that the ensuing increase in
the duration of the transient is predominantly due to the search for a
sufficiently large SCC, and that the additional requirement that such an SCC,
once found, is also trapping is of secondary importance, given that the
probability of finding an SCC of the right size is already very small. These
observations are consistent with earlier findings by one of us for this
system, which showed that limit-cycles for strain amplitudes beyond
$\gamma\sim 0.13$ were either not observed or extremely rare [5]. This further
suggests that the
reversibility/irreversibility transition of the response under oscillatory
shear is governed by a cross-over of the probability of finding a limit-cycle
into a rare-event regime due to the scarcity of SCCs of sufficient size.
Having established the relation between the SCC size $s_{\rm SCC}$ and the
driving amplitudes $\gamma$, we next connect $s_{\rm SCC}$ to the length of
the limit cycles that they contain. Fig. 6(c) shows a scatter plot of SCC sizes
against the length $\ell_{\rm max}$ of the largest limit-cycles they contain.
As the figure reveals, the scatter points cover an area with a well-defined
lower boundary, i.e. the smallest SCC size that can confine a limit-cycle of a
given length $\ell$. Moreover, this boundary has a distinct concave-up shape,
and for most of the data points $s_{\rm SCC}>\ell_{\rm max}$. Thus while SCCs
of size $s_{\rm SCC}=\ell_{\rm max}$ would have sufficed to trap a limit-
cycle, we find that in general these SCCs contain many more states. As will be
discussed in the following section, this is also related to the memory
capacity of an SCC.
### II.7 SCCs and Memory Capacity
To understand the origin of these excess mesostates and their connection with
memory capacity, let us return to Fig. 1(b) and consider the “yellow” SCC. This
SCC is bounded by a cycle with endpoint $R$, which contains multiple sub-
cycles, some of which have been indicated in the figure. It therefore appears
that the largest cycles inside an SCC come with a collection of sub-cycles,
the mesostates of which are mutually reachable as well. In fact, if the
sheared amorphous solid had return point memory (RPM) [15, 31], then any cycle
of the form (1) would necessarily be organized in a hierarchy of sub-cycles,
and moreover, all of these together would be part of the same SCC (in the
context of RPM such SCCs are formed by a largest cycle and its sub-cycles;
they have been referred to as maximal loops [31]). Thus if RPM were to hold,
the mesostates forming the sub-cycles of a limit-cycle would all be part of
the same SCC. RPM can be used as a means to store information by utilizing the
hierarchy of cycles and sub-cycles [46]. Moreover, the nesting depth of the
hierarchy provides an upper limit for the amount of information that can be
encoded via RPM [46, 47].
A central finding of ref. [33] has been that for amorphous solids and up to
moderately large strain amplitudes, the limit-cycles reached under oscillatory
shear exhibit near, but not perfect, RPM. As a result, such cycles are still
accompanied by an almost hierarchical organization into sub-cycles (Fig. 2(d)
of [33] contains an example of such an organization of cycles and sub-cycles).
The deviations from RPM result from positive and negative interactions among
soft-spots via the Eshelby deformation kernels: a plastic event in one part of
the sample may bring some soft-spots closer to instability, while at the same
time stabilizing others. If such
soft-spot interactions were completely absent (or negligible), we would be in
the Preisach regime, where each soft-spot can be regarded as an independent
hysteretic two-state system and the system exhibits RPM [49, 50]. Limit-cycles
then become Preisach hysteresis loops whose cycle and sub-cycle structure is
governed by the hysteresis behavior of the individual soft-spots undergoing
plastic deformations as the cycle is reversed. Since the Preisach model
exhibits RPM, its main hysteresis loop along with its sub-cycles constitutes
an SCC. Due to the absence of interactions, the size of this SCC can be
estimated as follows. Assuming that a Preisach loop is formed by $L$ non-
interacting soft-spots, so that $\ell=2L$, and assuming further that the
switching sequence of soft-spots as the loop is traversed is maximally random
(by this we mean that if we label the soft-spots according to the sequence in
which they undergo a plastic event as the driving force is increased, they
will revert their states in some order relative to this; the latter sequence
can be thought of as a permutation of the former; in [47] we have shown that
for the Preisach model this permutation alone determines the structure of the
SCC containing the main hysteresis loop and hence its size, and assuming that
this permutation is uniformly selected from all possible permutations, i.e.
maximally random, the result (3) follows), the average size of the SCC
containing the Preisach loop is given for large $L$ as [47]
$s_{\rm Pr}=\frac{1}{2}\sqrt{\frac{1}{e\pi}}\,\frac{e^{2L^{\frac{1}{2}}}}{L^{\frac{1}{4}}}.$ (3)
In Fig. 6(c), we have superposed the Preisach prediction $s_{\rm Pr}$, assuming
that $L=\ell_{\rm max}/2$ in (3), on top of the simulation results. Despite
the rather crude assumption of identifying the SCCs of the sheared
amorphous solid with Preisach loops, the Preisach prediction broadly follows the
lower boundary of the scattered points, i.e. the minimum size of SCCs that can
support a limit-cycle of a given length $\ell_{\rm max}$.
As remarked above, the capacity for encoding memory using RPM is related to
the nesting depth $d$ of the hierarchy of cycles and sub-cycles. For the
Preisach model, the average of $d$ can be worked out explicitly and is given
as $d=2\sqrt{L}$ [47]. Comparing with the corresponding average SCC size
$s_{\rm Pr}$ of (3), this gives $d=\ln s_{\rm Pr}$ to leading order.
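The leading-order relation $d=\ln s_{\rm Pr}$ can be checked numerically under the stated Preisach assumptions; working in logarithms avoids the overflow of $e^{2\sqrt{L}}$ for large $L$ (a sketch, with function names of our choosing):

```python
import math

def log_s_preisach(L):
    """Logarithm of Eq. (3): s_Pr = (1/2) sqrt(1/(e*pi)) exp(2 sqrt(L)) / L**(1/4)."""
    return (2.0 * math.sqrt(L) - 0.25 * math.log(L)
            + math.log(0.5) - 0.5 * (1.0 + math.log(math.pi)))

def nesting_depth(L):
    """Average nesting depth d = 2 sqrt(L) of the cycle hierarchy [47]."""
    return 2.0 * math.sqrt(L)

# the ratio ln(s_Pr)/d approaches 1 as L grows: d = ln(s_Pr) to leading order
ratios = [log_s_preisach(L) / nesting_depth(L) for L in (100, 10**4, 10**6)]
```

The subleading terms $-\frac{1}{4}\ln L$ and the additive constant account for the slow approach of the ratio to $1$.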
## III Discussion
We have analyzed the structure of transition graphs characterizing the plastic
response of an amorphous solid to an applied external shear. We have focused
on the strongly connected components (SCCs) of these graphs. Physically, SCCs
correspond to collections of stable particle configurations that are
interconnected by reversible plastic events, so that it is possible to reach
any of these configurations from any other one by an appropriate sequence of
applied shear strain. The identification of SCCs thereby enabled us to
designate plastic events as reversible and irreversible, depending on whether
these connect states within the same SCC or not. The description in terms of
SCCs has also allowed us to characterize the topology of the underlying energy
landscape. Our analysis shows that the energy landscape is highly
heterogeneous: basins of a few large SCCs, containing a large number of
reversible transitions at strain values below yield (the reversibility
regions), are surrounded by a large number of very small SCCs, consisting of
local minima stable at strain values near or above yield. The overall size
distribution of SCCs is therefore rather broad, and we find it to follow a
power-law. Since the plastic events constituting any cyclic response to an
applied shear must be confined to a single SCC, and the number of such plastic
events increases with the amplitude of the driving, the size of the
corresponding confining SCCs becomes larger as well. We have shown that as the
driving amplitude approaches yielding, the sizes of the required SCCs become
so large that encountering SCCs that can still confine the response becomes a
rare event. This observation provides a mechanism for the irreversibility
transition and the associated yield strain, above which amorphous solids under
slow oscillatory shear cannot find limit-cycles and the dynamics becomes
irreversible.
To summarize, the graph-theoretical analysis of the driven dynamics of
amorphous solids under athermal conditions presented here reveals new
features of the energy landscape of glasses, which are responsible for the
memory properties of the system. Furthermore, since a transition from
reversible plasticity, that allows the system to store memory of past
deformations, to irreversible plasticity, which allows the system to “forget”
past deformations, is at the heart of the yielding transition, this analysis
provides a new framework for understanding this transition. By identifying the
SCC as a basic entity that groups reversible plastic events, our study also
provides the basis for predicting the memory storage and retrieval capability
of such systems, a topic of interest in recent experimental work [12, 52].
There are many open questions still to be addressed: specifically, how
network features are affected by the preparation protocol of the initial
configuration and by system size, and how shearing along different
orientations affects the configurations encountered. Recent studies [53, 54] have shown
that amorphous solids prepared by equilibrating supercooled liquids to very
low temperatures are “ultra-stable”, in the sense that their response is
almost purely elastic up to the point of yielding (the irreversibility
transition). In such materials, preliminary results indicate a much simpler
topology in the sub-yield regime. When the system size is increased, we expect
the opposite: the graph will become more complex. One can also construct
different networks that stem from the same initial configuration but are
sheared along directions rotated by an angle, as was done in recent
experiments [55]. One can expect that, despite the different orientations,
there will be some overlap between the networks. However, this is still to be
checked against simulation data and will be the focus of future work.
Acknowledgments MM and IR would like to thank Sylvain Patinet, Umut Salman,
Lev Truskinovsky, and Damien Vandembroucq for stimulating discussions during
their stay at ESPCI as part of the Joliot Chair visiting fellowship. IR would
also like to thank Asaf Szulc for useful discussions. MM was funded by the
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under
Projektnummer 398962893, the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) - Projektnummer 211504053 - SFB 1060, and by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s
Excellence Strategy - GZ 2047/1, Projekt-ID 390685813. IR was supported by the
Israel Science Foundation through Grant No. 1301/17. SS acknowledges support
through the J. C. Bose Fellowship (JBR/2020/000015), SERB, DST, India.
Author contributions MM and IR contributed equally to the research of this
work. IA contributed in the earlier stages of the work. KD, MM, IR and SS
analyzed the results and wrote the paper.
## Appendix A Materials and Methods
### A.1 Sample Preparation
We simulated a binary system of $1024$ point particles interacting via a soft,
radially-symmetric potential described in [56], where half of the particles
are $1.4$ times larger than the other half. For each realization, we prepared
an initial configuration at a high temperature of $T=1.0$ and ran it for $20$
simulation time units (all quantities are given in standard dimensionless
Lennard-Jones reduced units). We then ran the final configuration for another
$50$ simulation time units at $T=0.1$. This quench protocol is identical to
the one used in [5] and leads to a relatively soft glass (without a stress-
peak). The final configuration was then quenched to zero temperature using the
FIRE minimization algorithm. Such a configuration is part of a mesostate;
we denote this mesostate by $O$ and call it the reference state.
Next, we applied shear to the quenched configuration under athermal quasi-
static (AQS) conditions, increasing the strain by small strain steps of
$\delta\gamma=10^{-4}$. The straining is implemented by means of the Lees-
Edwards boundary conditions [19], and after each step we minimize the energy
of the sheared configuration using the FIRE algorithm [57]. Further details of
the system and simulation can be found in [56, 5].
### A.2 Extraction of mesostate catalogs
Starting from the initial mesostate $O$ that we obtained from the zero-
temperature quench, we continue applying strain until reaching the first
plastic event. This event corresponds to a $\mathbf{U}$ transition from the
initial configuration $O$ at strain $\gamma^{+}[O]$. Similarly, we rerun the
simulation starting from the same initial configuration and shear in the same
manner but in the negative direction until another plastic event occurs, which
corresponds to a $\mathbf{D}$ transition from $O$ at strain $\gamma^{-}[O]$.
This completes the first step of obtaining $\mathbf{U}O$ and $\mathbf{D}O$,
forming the states of generation 1. Next, for each of the states $\mathbf{U}O$
and $\mathbf{D}O$ we determine their stability ranges
$\gamma^{\pm}[\mathbf{U}O]$, $\gamma^{\pm}[\mathbf{D}O]$, as well as the
states they transit into under $\mathbf{U}$ and $\mathbf{D}$, which constitute the
states of generation $2$. We then proceed in a similar manner to generation
$3$, etc. After each transition we check whether the resulting mesostate has
been encountered before. If it has, we simply update a table of transitions;
otherwise we add the state to our collection of mesostates, which we call the
catalog of mesostates. Proceeding in this way
generation by generation, we assemble a catalog of mesostates $A$, their
stability ranges $\gamma^{\pm}[A]$, along with the $\mathbf{U}$ and
$\mathbf{D}$ transitions among them, establishing in this way the AQS state
transition graph.
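The generation-by-generation acquisition described above is essentially a breadth-first exploration of the transition graph. The following is a minimal sketch, in which the `transition` lookup is a toy stand-in for running one AQS shear simulation up to the next plastic event (all names are illustrative):

```python
from collections import deque

def build_catalog(O, transition, g_max):
    """Explore U- and D-transitions breadth-first, working out both outgoing
    transitions for every mesostate of generation g <= g_max.
    Returns {mesostate: generation} and {(state, direction): successor}."""
    catalog = {O: 0}
    edges = {}
    frontier = deque([O])
    while frontier:
        state = frontier.popleft()
        if catalog[state] > g_max:
            continue  # generation g_max+1 states stay peripheral: edges not worked out
        for direction in ("U", "D"):
            nxt = transition(state, direction)
            edges[(state, direction)] = nxt
            if nxt not in catalog:          # first encounter: record generation
                catalog[nxt] = catalog[state] + 1
                frontier.append(nxt)
    return catalog, edges

# toy transition table standing in for the sheared system
T = {(0, "U"): 1, (0, "D"): 2, (1, "U"): 3, (1, "D"): 0,
     (2, "U"): 0, (2, "D"): 2, (3, "U"): 3, (3, "D"): 1}
catalog, edges = build_catalog(0, lambda s, d: T[(s, d)], g_max=2)
```

In the toy run, mesostate $3$ is first reached in generation $2$ and, since $2\leq g_{\rm max}$, its outgoing transitions are also recorded.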
We can also associate with each mesostate $A$ the generation $g[A]$ at which
it was added to our catalog. We quantify the extent of a catalog by the
maximum number of generations $g_{\rm max}$, for which all transitions (both
$\mathbf{U}$- and $\mathbf{D}$-transitions) have been worked out. In other
words, for all mesostates in generation $g\leq g_{\rm max}$ we have identified
the mesostates that they transit into.
We have generated $8$ initial states $O$ from molecular dynamics simulations,
as described above, and used these to extract the $8$ catalogs. Table 1 shows
the number of states $N$, generations $g_{\rm max}$ and the number of SCCs
$N_{\rm SCC}$ contained in each catalog along with the overall totals. Along
with this data, we have collected, for each mesostate $A$ in our catalog, the
minimum and maximum values of strain over which a mesostate is stable, as well
as the values of the stress and energy at these two points and the changes of
these two quantities when a plastic event occurs. The analysis in the main
text is based on this data set.
Run | $g_{\rm max}$ | $N$ | $N_{\rm SCC}$
---|---|---|---
1 | 40 | 48204 | 18887
2 | 43 | 56121 | 27702
3 | 37 | 43951 | 17451
4 | 36 | 43550 | 18267
5 | 41 | 44656 | 19971
6 | 35 | 51784 | 27133
7 | 41 | 51741 | 21516
8 | 45 | 46395 | 18122
ALL | n/a | 386402 | 169049
Table 1: Properties of the $8$ mesostate catalogs, labeled 1 – 8, that were
extracted from the molecular simulations. For each catalog we list the maximum
number of generations $g_{\rm max}$. This means that the catalog contains all
mesostates that can be reached from the initial mesostate $O$ by at most
$g_{\rm max}+1$ plastic events, i.e. $\mathbf{U}$- or
$\mathbf{D}$-transitions. In the third column we specify the number $N$ of
mesostates belonging to generations $g\leq g_{\rm max}$, while the fourth
column lists the number of strongly connected components $N_{\rm SCC}$ that
these states form.
The identification of the generation $g[A]$ at which a mesostate was added to
our catalog also allows us, via back-tracking, to determine the deformation
history, i.e. the sequence of $\mathbf{U}$- and $\mathbf{D}$-transitions that
lead from the initial state $O$ to $A$. A sample deformation history is shown
in Fig. 4(b). By construction, the generation $g[A]$ is also the smallest
number of $\mathbf{U}$- and $\mathbf{D}$-transitions needed to reach $A$ from
$O$. However, such a deformation history need not be unique: with $g[A]$ being
the first generation at which mesostate $A$ appears in the catalog, $A$ must
necessarily have been reached by a transition from a mesostate belonging to
generation $g[A]-1$. However, there might be different mesostates in generation
$g[A]-1$ that transit into $A$, and each of these would provide an
alternative path from $O$ to $A$. We have verified that such degeneracies
constitute a small fraction, about 1–3%, of the transitions in our catalog.
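The back-tracking just described can be sketched as follows, given a generation index and a transition table of the kind produced by the catalog acquisition (toy data; names are ours):

```python
def deformation_history(A, catalog, edges):
    """Recover one shortest sequence of U/D transitions from O (generation 0)
    to A, walking back one generation at a time. Non-unique in general."""
    path = []
    state = A
    while catalog[state] > 0:
        # find any predecessor one generation earlier
        for (src, d), dst in edges.items():
            if dst == state and catalog[src] == catalog[state] - 1:
                path.append(d)   # record the transition that reached `state`
                state = src
                break
        else:
            raise ValueError("no predecessor found in previous generation")
    path.reverse()
    return path

# toy catalog: O -> A (U), O -> B (D), A -> C (U)
catalog = {"O": 0, "A": 1, "B": 1, "C": 2}
edges = {("O", "U"): "A", ("O", "D"): "B",
         ("A", "U"): "C", ("A", "D"): "O",
         ("B", "U"): "O", ("B", "D"): "B"}
history = deformation_history("C", catalog, edges)
```

When several predecessors exist in generation $g[A]-1$, the loop simply picks the first, which is one source of the degeneracies mentioned above.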
### A.3 Identification of strongly connected components, reversible and
irreversible transitions
Once the catalogs of mesostates have been compiled, we used an implementation
of the Kosaraju-Sharir algorithm [58] to identify the strongly connected
components (SCCs) of the transition graphs. Thus, given a catalog, we are able
to assign each of its mesostates to an SCC and thereby obtain the SCC sizes.
As discussed in the main text, transitions between mesostates belonging to the
same SCC are reversible, while those between mesostates belonging to different
SCCs are irreversible.
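For reference, the Kosaraju–Sharir two-pass scheme can be sketched as follows: a generic, iterative implementation on adjacency lists, not the code used for the paper.

```python
def kosaraju_scc(graph):
    """graph: {node: [successors]} with every node present as a key.
    Returns {node: representative}, equal for nodes in the same SCC."""
    order, visited = [], set()
    for start in graph:                       # pass 1: finishing-time order
        if start in visited:
            continue
        visited.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            u, it = stack[-1]
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(graph[w])))
                    break
            else:
                order.append(u)               # u finished: all successors done
                stack.pop()
    rev = {u: [] for u in graph}              # reversed graph for pass 2
    for u, succs in graph.items():
        for w in succs:
            rev[w].append(u)
    comp = {}
    for v in reversed(order):                 # pass 2: peel SCCs off
        if v in comp:
            continue
        stack = [v]
        comp[v] = v
        while stack:
            u = stack.pop()
            for w in rev[u]:
                if w not in comp:
                    comp[w] = v               # v is the SCC representative
                    stack.append(w)
    return comp

# toy graph: {1,2} and {3,4} are SCCs; the edge 2 -> 3 is irreversible
comp = kosaraju_scc({1: [2], 2: [3, 1], 3: [4], 4: [3]})
```

In the mesostate catalogs, each node has exactly two successors ($\mathbf{U}A$ and $\mathbf{D}A$); the sketch above accepts arbitrary out-degrees.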
From a numerical implementation point of view, in which we only sample a
finite number of mesostates and transitions, it is possible that a transition
that appears irreversible turns out to be reversible in a larger catalog of
mesostates, as more mesostates and transitions are added. Such conversions do
indeed occur, but we find that they happen predominantly at low generations.
As yielding is approached, a large number of small SCCs are generated, and the
transitions between these typically involve large plastic events. It is
therefore unlikely that some of these transitions will become reversible, and
we have verified this by inspecting our data. This is consistent with the
results of Fig. 4(a), which show that the SCC size distribution changes little
as catalogs with an increasingly larger number of generations are sampled.
Note that if a mesostate has two outgoing irreversible transitions, it must
necessarily constitute an SCC of size one, i.e. an SCC containing just this
mesostate. SCCs of size one also arise when a node in our catalog is
peripheral, i.e. it belongs to generation $g_{\rm max}+1$, and both outgoing
transitions are left undetermined and hence absent. Since this is an artifact
of the catalog acquisition procedure, we have excluded all peripheral nodes in
our analysis of SCCs.
The inter-SCC graph shown in Fig. 1(c) was obtained by collapsing each SCC
into a single vertex and keeping the irreversible transitions connecting
mesostates belonging to different SCCs. It is easy to see that each SCC
will have at least two outgoing inter-SCC transitions. Since the AQS
transition graphs formed by considering only the $\mathbf{U}$-transitions (or
$\mathbf{D}$-transitions) are acyclic, and in particular are collections of
directed trees [31, 32, 33], inside each SCC there must be at least one
$\mathbf{U}$- and one $\mathbf{D}$-tree. The corresponding transitions out of
their roots must necessarily be irreversible and hence lead out of the SCC.
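Collapsing the SCCs in this way is a one-pass operation once each mesostate's SCC assignment `comp` is known; a minimal sketch (illustrative names):

```python
def condense(graph, comp):
    """Build the inter-SCC graph: one vertex per SCC, keeping only the
    irreversible transitions, i.e. edges between different SCCs."""
    inter = set()
    for u, succs in graph.items():
        for w in succs:
            if comp[u] != comp[w]:   # transition leaves the SCC: irreversible
                inter.add((comp[u], comp[w]))
    return inter

# toy graph: SCC "a" = {1,2}, SCC "b" = {3,4}; 2 -> 3 is the only inter-SCC edge
inter = condense({1: [2], 2: [3, 1], 3: [4], 4: [3]},
                 {1: "a", 2: "a", 3: "b", 4: "b"})
```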
The inset of Fig. 4(d) shows the mean size of the SCC that a mesostate belongs
to, given that it is stable at a strain magnitude $|\gamma|$. To this end we
considered $25$ equally-spaced strain magnitudes with $0.000\leq|\gamma|\leq
0.200$, so that the spacing between successive strain magnitudes is
$\Delta=0.008$. Given a strain of magnitude $|\gamma|$, we consider all
mesostates $A$ that are stable at $\pm|\gamma|$, so that
$\gamma^{-}[A]<|\gamma|<\gamma^{+}[A]$, or
$\gamma^{-}[A]<-|\gamma|<\gamma^{+}[A]$ holds. We next perform an average over
the sizes of the SCCs that these mesostates belong to. The average SCC size
and its standard deviation obtained in this way are then plotted against
$|\gamma|$, leading to the inset of Fig. 4(d).
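The averaging procedure just described can be sketched directly (illustrative names; `gm`/`gp` stand for the stability boundaries $\gamma^{\pm}[A]$ and `scc_size[A]` for the size of the SCC containing $A$):

```python
def mean_scc_size(states, gm, gp, scc_size, gammas):
    """For each strain magnitude g in `gammas`, average the SCC sizes of all
    mesostates A stable at +g or -g, i.e. with gm[A] < +-g < gp[A]."""
    out = []
    for g in gammas:
        sizes = [scc_size[A] for A in states
                 if gm[A] < g < gp[A] or gm[A] < -g < gp[A]]
        out.append(sum(sizes) / len(sizes) if sizes else float("nan"))
    return out

# toy data: "A" is stable only near gamma = 0, "B" over a wider range
means = mean_scc_size(["A", "B"],
                      gm={"A": -0.1, "B": -0.3},
                      gp={"A": 0.1, "B": 0.3},
                      scc_size={"A": 10, "B": 2},
                      gammas=[0.0, 0.2])
```

At $|\gamma|=0$ both toy mesostates contribute, while at $|\gamma|=0.2$ only the wider-ranged one does.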
Note that the partition of the range of $|\gamma|$ values into equally spaced
strain values with spacing $\Delta$ will cause mesostates $A$ whose stability
range $\gamma^{+}[A]-\gamma^{-}[A]$ is larger than $\Delta$ to be associated
with multiple adjacent values of $|\gamma|$. The effect of this is a
smoothing of the SCC size averages. We have checked that this does not affect
our results significantly.
## References
* Bonn _et al._ [2017] D. Bonn, M. M. Denn, L. Berthier, T. Divoux, and S. Manneville, Yield stress materials in soft condensed matter, Rev. Mod. Phys. 89, 035005 (2017).
* Keim _et al._ [2019] N. C. Keim, J. D. Paulsen, Z. Zeravcic, S. Sastry, and S. R. Nagel, Memory formation in matter, Rev. Mod. Phys. 91, 035002 (2019).
* Szulc _et al._ [2020] A. Szulc, O. Gat, and I. Regev, Forced deterministic dynamics on a random energy landscape: Implications for the physics of amorphous solids, Phys. Rev. E 101, 052616 (2020).
* Schinasi-Lemberg and Regev [2020] E. Schinasi-Lemberg and I. Regev, Annealing and rejuvenation in a two-dimensional model amorphous solid under oscillatory shear, Phys. Rev. E 101, 012603 (2020).
* Regev _et al._ [2013] I. Regev, T. Lookman, and C. Reichhardt, Onset of irreversibility and chaos in amorphous solids under periodic shear, Phys. Rev. E 88, 062401 (2013).
* Fiocco _et al._ [2013] D. Fiocco, G. Foffi, and S. Sastry, Oscillatory athermal quasistatic deformation of a model glass, Phys. Rev. E 88, 020301 (2013).
* Keim and Arratia [2014] N. C. Keim and P. E. Arratia, Mechanical and microscopic properties of the reversible plastic regime in a 2d jammed material, Phys. Rev. Lett. 112, 028302 (2014).
* Schreck _et al._ [2013] C. F. Schreck, R. S. Hoy, M. D. Shattuck, and C. S. O’Hern, Particle-scale reversibility in athermal particulate media below jamming, Phys. Rev. E 88, 052205 (2013).
* Priezjev [2016] N. V. Priezjev, Reversible plastic events during oscillatory deformation of amorphous solids, Phys. Rev. E 93, 013001 (2016).
* Royer and Chaikin [2015] J. R. Royer and P. M. Chaikin, Precisely cyclic sand: Self-organization of periodically sheared frictional grains, PNAS 112, 49 (2015).
* Bandi _et al._ [2018] M. Bandi, H. G. E. Hentschel, I. Procaccia, S. Roy, and J. Zylberg, Training, memory and universal scaling in amorphous frictional granular matter, EPL (Europhysics Letters) 122, 38003 (2018).
* Keim _et al._ [2020] N. C. Keim, J. Hass, B. Kroger, and D. Wieker, Global memory from local hysteresis in an amorphous solid, Phys. Rev. Research 2, 012004 (2020).
* Fiocco _et al._ [2014] D. Fiocco, G. Foffi, and S. Sastry, Encoding of memory in sheared amorphous solids, Phys. Rev. Lett. 112, 025702 (2014).
* Brown _et al._ [2019] B. L. Brown, C. Reichhardt, and C. Reichhardt, Reversible to irreversible transitions in periodically driven skyrmion systems, New Journal of Physics 21, 013001 (2019).
* Sethna _et al._ [1993] J. P. Sethna, K. Dahmen, S. Kartha, J. A. Krumhansl, B. W. Roberts, and J. D. Shore, Hysteresis and hierarchies: Dynamics of disorder-driven first-order phase transformations, Phys. Rev. Lett. 70, 3347 (1993).
* Pine _et al._ [2005] D. J. Pine, J. P. Gollub, J. F. Brady, and A. M. Leshansky, Chaos and threshold for irreversibility in sheared suspensions, Nature 438, 997 (2005).
* Corte _et al._ [2008] L. Corte, P. M. Chaikin, J. P. Gollub, and D. J. Pine, Random organization in periodically driven systems, Nature Physics 4, 420 (2008).
* Keim and Nagel [2011] N. C. Keim and S. R. Nagel, Generic Transient Memory Formation in Disordered Systems with Noise, Phys. Rev. Lett. 107, 010603 (2011).
* Ni _et al._ [2019] X. Ni, H. Zhang, D. B. Liarte, L. W. McFaul, K. A. Dahmen, J. P. Sethna, and J. R. Greer, Yield precursor dislocation avalanches in small crystals: the irreversibility transition, Phys. Rev. Lett. 123, 035501 (2019).
* Hyun _et al._ [2011] K. Hyun, M. Wilhelm, C. O. Klein, K. S. Cho, J. G. Nam, K. H. Ahn, S. J. Lee, R. H. Ewoldt, and G. H. McKinley, A review of nonlinear oscillatory shear tests: Analysis and application of large amplitude oscillatory shear (laos), Prog. Poly. Sci. 36, 1697 (2011).
* Regev _et al._ [2015] I. Regev, J. Weber, C. Reichhardt, K. A. Dahmen, and T. Lookman, Reversibility and criticality in amorphous solids, Nature Comm. 6, 8805 (2015).
* Leishangthem _et al._ [2017] P. Leishangthem, A. D. Parmar, and S. Sastry, The yielding transition in amorphous solids under oscillatory shear deformation, Nature Comm. 8, 14653 (2017).
* Priezjev [2018] N. V. Priezjev, The yielding transition in periodically sheared binary glasses at finite temperature, Computational Materials Science 150, 162 (2018).
* Mangan _et al._ [2008] N. Mangan, C. Reichhardt, and C. O. Reichhardt, Reversible to irreversible flow transition in periodically driven vortices, Phys. Rev. Lett. 100, 187002 (2008).
* Das _et al._ [2020] P. Das, H. Vinutha, and S. Sastry, Unified phase diagram of reversible–irreversible, jamming, and yielding transitions in cyclically sheared soft-sphere packings, PNAS 117, 10203 (2020).
* Morse _et al._ [2020] P. Morse, S. Wijtmans, M. van Deen, M. van Hecke, and M. L. Manning, Differences in plasticity between hard and soft spheres, Phys. Rev. Research 2, 023179 (2020).
* Kawasaki and Berthier [2016] T. Kawasaki and L. Berthier, Macroscopic yielding in jammed solids is accompanied by a nonequilibrium first-order transition in particle trajectories, Phys. Rev. E 94, 022615 (2016).
* Regev and Lookman [2018] I. Regev and T. Lookman, Critical diffusivity in the reversibility–irreversibility transition of amorphous solids under oscillatory shear, Journal of Physics: Condensed Matter 31, 045101 (2018).
* Nagamanasa _et al._ [2014] K. H. Nagamanasa, S. Gokhale, A. Sood, and R. Ganapathy, Experimental signatures of a nonequilibrium phase transition governing the yielding of a soft glass, Phys. Rev. E 89, 062308 (2014).
* Möbius and Heussinger [2014] R. Möbius and C. Heussinger, (ir) reversibility in dense granular systems driven by oscillating forces, Soft Matter 10, 4806 (2014).
* Mungan and Terzi [2019] M. Mungan and M. M. Terzi, The structure of state transition graphs in hysteresis models with return point memory: I. general theory, Ann. Henri Poincaré 20, 2819 (2019).
* Mungan and Witten [2019] M. Mungan and T. A. Witten, Cyclic annealing as an iterated random map, Phys. Rev. E 99, 052132 (2019).
* Mungan _et al._ [2019] M. Mungan, S. Sastry, K. Dahmen, and I. Regev, Networks and hierarchies: How amorphous materials learn to remember, Phys. Rev. Lett. 123, 178002 (2019).
* Barrat _et al._ [2008] A. Barrat, M. Barthelemy, and A. Vespignani, _Dynamical processes on complex networks_ (Cambridge University Press, Cambridge, 2008).
* Manning and Liu [2011] M. L. Manning and A. J. Liu, Vibrational modes identify soft spots in a sheared disordered packing, Phys. Rev. Lett. 107, 108302 (2011).
* Falk and Langer [1998] M. L. Falk and J. S. Langer, Dynamics of viscoplastic deformation in amorphous solids, Phys. Rev. E 57, 7192 (1998).
* Maloney and Lemaître [2006] C. E. Maloney and A. Lemaître, Amorphous systems in athermal, quasistatic shear, Phys. Rev. E 74, 016118 (2006).
* Note [1] Note that this is not reversibility in the thermodynamic sense, since plastic events involve energy dissipation.
* Corominas-Murtra _et al._ [2013] B. Corominas-Murtra, J. Goñi, R. V. Solé, and C. Rodríguez-Caso, On the origins of hierarchy in complex networks, PNAS 110, 13316 (2013).
* Clauset _et al._ [2009] A. Clauset, C. R. Shalizi, and M. E. Newman, Power-law distributions in empirical data, SIAM Rev. 51, 661 (2009).
* Note [2] Since the total number of states in this region is $1783$, the total number of outgoing transitions, being twice this number, is $3566$. The total number of reversible transitions is $2\times 1448+257=3153$. Hence the fraction of reversible transitions is $0.88$.
* Note [3] The total size of the $199$ SCCs containing the $1783$ mesostates in this region is $2587$. Thus an additional $804$ mesostates belong to these SCCs, but are outside the reversibility region. All $2587$ mesostates turn out to be contained in the region $\gamma^{-}_{\rm max}\geq-0.095$ and $\gamma^{+}_{\rm max}\leq 0.096$.
* Barabási and Albert [1999] A.-L. Barabási and R. Albert, Emergence of scaling in random networks, Science 286, 509 (1999).
* Note [4] More specifically, for each possible SCC size $s$ we determined the largest amplitude $\gamma$, and from these pairs of points $(s,\gamma)$ we extracted the boundary curve as $s_{\rm max}(\gamma)=\max_{\gamma^{\prime}\leq\gamma}\{s^{\prime}:(s^{\prime},\gamma^{\prime})\}$, i.e. the convex hull.
* Note [5] In the context of RPM such SCCs are formed by a largest cycle and its sub-cycles. They have been referred to as maximal loops [31].
* Perković and Sethna [1997] O. Perković and J. P. Sethna, Improved magnetic information storage using return-point memory, J. Appl. Phys. 81, 1590 (1997).
* Terzi and Mungan [2020] M. M. Terzi and M. Mungan, State transition graph of the preisach model and the role of return-point memory, Phys. Rev. E 102, 012122 (2020).
* Note [6] Fig. 2(d) of [33] contains an example of such an organization of cycles and sub-cycles.
* Preisach [1935] F. Preisach, Über die magnetische Nachwirkung, Z. Physik 94, 277 (1935).
* Barker _et al._ [1983] J. A. Barker, D. E. Schreiber, B. G. Huth, and D. H. Everett, Magnetic hysteresis and minor loops: models and experiments, Proc. Roy. Soc. A 386, 251 (1983).
* Note [7] By this we mean that if we label the soft-spots according to the sequence in which they undergo a plastic event as the driving force is increased, they will revert their states in some order relative to this. The latter sequence can be thought of as a permutation of the former. In [47], we have shown that for the Preisach model this permutation alone determines the structure of the SCC containing the main hysteresis loop and hence its size. Assuming that this permutation is uniformly selected from all possible permutations, i.e. that this permutation is maximally random, the result (3) follows.
* Mukherji _et al._ [2019] S. Mukherji, N. Kandula, A. Sood, and R. Ganapathy, Strength of mechanical memories is maximal at the yield point of a soft glass, Physical Review Letters 122, 158001 (2019).
* Yeh _et al._ [2020] W.-T. Yeh, M. Ozawa, K. Miyazaki, T. Kawasaki, and L. Berthier, Glass stability changes the nature of yielding under oscillatory shear, Phys. Rev. Lett. 124, 225502 (2020).
* Bhaumik _et al._ [2021] H. Bhaumik, G. Foffi, and S. Sastry, The role of annealing in determining the yielding behavior of glasses under cyclic shear deformation, Proceedings of the National Academy of Sciences 118 (2021).
* Schwen _et al._ [2020] E. M. Schwen, M. Ramaswamy, C.-M. Cheng, L. Jan, and I. Cohen, Embedding orthogonal memories in a colloidal gel through oscillatory shear, Soft matter 16, 3746 (2020).
* Lerner and Procaccia [2009] E. Lerner and I. Procaccia, Locality and nonlocality in elastoplastic responses of amorphous solids, Phys. Rev. E 79, 066109 (2009).
* Bitzek _et al._ [2006] E. Bitzek, P. Koskinen, F. Gähler, M. Moseler, and P. Gumbsch, Structural relaxation made simple, Phys. Rev. Lett. 97, 170201 (2006).
* Aho _et al._ [1982] A. Aho, J. Hopcroft, and J. Ullman, _Data Structures and Algorithms_ (Addison-Wesley Reading, 1982).
2101.01086
Matthieu Jedor [email protected]
Centre Borelli, ENS Paris-Saclay & Cdiscount and Jonathan Louëdec
[email protected]
Cdiscount and Vianney Perchet [email protected]
CREST, ENSAE Paris & Criteo AI Lab
# Be Greedy in Multi-Armed Bandits
###### Abstract
The Greedy algorithm is the simplest heuristic in sequential decision problems: it carelessly takes the locally optimal choice at each round, disregarding any advantage of exploring and/or gathering information. Theoretically, it is known to sometimes have poor performance, for instance even a linear regret (with respect to the time horizon) in the standard multi-armed bandit problem. On the other hand, this heuristic performs reasonably well in practice and it even enjoys sublinear, and sometimes near-optimal, regret bounds in some very specific linear contextual and Bayesian bandit models.
We build on a recent line of work and investigate bandit settings where the
number of arms is relatively large and where simple greedy algorithms enjoy
highly competitive performance, both in theory and in practice. We first
provide a generic worst-case bound on the regret of the Greedy algorithm. When combined with some arm subsampling, we prove that it satisfies near-optimal worst-case regret bounds in continuous-, infinite- and many-armed bandit problems. Moreover, for shorter time spans, the theoretical relative suboptimality of Greedy is even reduced.
As a consequence, we subversively claim that for many interesting problems and
associated horizons, the best compromise between theoretical guarantees,
practical performances and computational burden is definitely to follow the
greedy heuristic. We support our claim by many numerical experiments that show
significant improvements compared to the state of the art, even for moderately long time horizons.
###### keywords:
Multi-armed bandits, greedy algorithm, continuous-armed bandits, infinite-
armed bandits, many-armed bandits
## 1 Introduction
Multi-armed bandits are basic instances of online learning problems with
partial feedback (Bubeck and Cesa-Bianchi, 2012; Lattimore and Szepesvári,
2020; Slivkins, 2019). In the standard stochastic bandit problem, a learning
agent sequentially chooses among a finite set of actions, or “arms”, and
observes a stochastic reward accordingly. The goal of the agent is then to
maximize its cumulative reward, or equivalently, to minimize its regret,
defined as the difference between the cumulative reward of an oracle (that
knows the mean rewards of arms) and the one of the agent. This problem
requires trading off exploitation (leveraging the information obtained so far) against exploration (gathering information on uncertain arms).
The exploration, although detrimental in the short term, is usually needed in
the worst-case as it ensures that the learning algorithm “converges” to the
optimal arm in the long run. On the other hand, the Greedy algorithm, an
exploration-free strategy, focuses on pure exploitation and pulls the
apparently best arm according to the information gathered thus far, at the
risk of sampling the true optimal arm only once. This typically happens with
Bernoulli rewards where only arms whose first reward is a 1 will be pulled
again (and the others discarded forever). As a consequence, with some non-zero
probability, the regret grows linearly with time as illustrated in the
following example.
###### Example 1.1.
Consider a relatively simple Bernoulli bandit problem consisting of $K=2$ arms
with expected rewards $0.9$ and $0.1$ respectively. With probability at least
0.01, Greedy fails to find the optimal arm. On the other hand, with
probability $0.9^{2}$ it suffers no regret after the initial pulls. This
results in a linear regret with a large variance. This typical behavior is
illustrated in Appendix A.1.
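This failure mode is easy to reproduce in simulation. The sketch below (a minimal illustration of our own, not the experimental setup of Appendix A.1) runs Greedy on the two-armed Bernoulli instance above many times; a small fraction of runs discards the optimal arm after its initial pulls and accumulates linear regret, which inflates the variance.

```python
import numpy as np

def greedy_bernoulli(mus, T, rng):
    """Greedy on Bernoulli arms; the first K rounds pull each arm once
    (the 0/0 = +infinity convention). Returns the pseudo-regret."""
    K = len(mus)
    pulls, sums = np.zeros(K), np.zeros(K)
    best, regret = max(mus), 0.0
    for t in range(T):
        a = t if t < K else int(np.argmax(sums / pulls))
        sums[a] += float(rng.random() < mus[a])
        pulls[a] += 1
        regret += best - mus[a]
    return regret

rng = np.random.default_rng(0)
T = 1_000
regrets = np.array([greedy_bernoulli([0.9, 0.1], T, rng) for _ in range(1_000)])
# Runs stuck on the suboptimal arm accumulate roughly 0.8 regret per round.
print("fraction of near-linear runs:", (regrets > 0.4 * T).mean())
print("std of the regret:", regrets.std())
```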
Two solutions have been proposed to overcome this issue. The first one is to
force the exploration; for example with an initial round-robin exploration
phase (Even-Dar et al., 2002), or by spreading the exploration uniformly over
time à la Epsilon-Greedy (Auer et al., 2002). However, both these algorithms
need to know the different parameters of the problem to perform optimally
(either to set the length of the round-robin phase or the value of
$\varepsilon$), which represents a barrier to their use in practice. The
second solution is to have a data-driven and adaptive exploration; for
example, by adding an exploration term à la UCB (Auer et al., 2002), by using
a Bayesian update à la Thompson Sampling (Thompson, 1933; Perrault et al.,
2020a), by using data- and arm-dependent stopping times for exploring à la
Explore-Then-Commit (Perchet and Rigollet, 2013; Perchet et al., 2016) or by
tracking the number of pulls of suboptimal arms (Baransi et al., 2014; Honda
and Takemura, 2010, 2015). With careful tuning, these algorithms are
asymptotically optimal for specific reward distributions. Yet this asymptotic
regime can occur after a long period of time (Garivier et al., 2019) and thus
simpler heuristics might be preferable for relatively short time horizons (Vermorel and Mohri, 2005; Kuleshov and Precup, 2014).
Conversely, the simple Greedy algorithm has recently been proved to satisfy
near-optimal regret bounds in some linear contextual model (Bastani et al.,
2017; Kannan et al., 2018; Raghavan et al., 2020) and a sublinear regret bound
in some Bayesian many-armed setting (Bayati et al., 2020). In particular, this
was possible because the Greedy algorithm benefits from “free” exploration
when the number of arms is large enough. We illustrate this behavior in the
following example.
###### Example 1.2.
Consider bandit problems where rewards are Gaussian distributions with unit
variance and mean rewards are drawn i.i.d. from a uniform distribution over
$[0,1]$. In Figure 1, we compare the regret of Greedy with the UCB algorithm
for different number of arms and time horizon. For both algorithms, we observe
a clear transition phase between problems with higher average regret (with
darker colors) and problems with lower regret (with lighter colors). In this
example, this transition takes the form of a diagonal.
This diagonal is much lower for Greedy than for UCB, meaning that Greedy performs better on the problems in between, despite UCB being optimal in the problem-dependent sense (on the other hand, when the horizon is large, UCB outperforms Greedy). The intuition is that, when the number of near-optimal arms is large enough, Greedy rapidly converges to one of them while UCB is still in its initial exploration phase. The key argument here is the short time horizon relative to the difficulty of the problem; we emphasize the “relatively” because in practice the “turning point”, that is, the time horizon beyond which UCB performs better, can be extremely large.
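A single point of such a grid can be probed with a few lines of code. The sketch below (a rough illustration with parameter choices of our own, not the exact protocol behind Figure 1) compares Greedy with a standard UCB index on one Gaussian instance with many arms and a short horizon; in this regime Greedy typically incurs a markedly lower average regret.

```python
import numpy as np

def run(index, mus, T, rng):
    """Play T rounds with the given index policy; return the pseudo-regret."""
    K = len(mus)
    n, s, regret = np.zeros(K), np.zeros(K), 0.0
    for t in range(T):
        a = t if t < K else int(np.argmax(index(s, n, t)))
        s[a] += mus[a] + rng.standard_normal()   # unit-variance Gaussian reward
        n[a] += 1
        regret += mus.max() - mus[a]
    return regret

greedy = lambda s, n, t: s / n
ucb = lambda s, n, t: s / n + np.sqrt(2 * np.log(t) / n)

rng = np.random.default_rng(1)
K, T, runs = 200, 1_000, 40                      # many arms, short horizon
g_reg = u_reg = 0.0
for _ in range(runs):
    mus = rng.uniform(0, 1, K)                   # i.i.d. uniform mean rewards
    g_reg += run(greedy, mus, T, rng) / runs
    u_reg += run(ucb, mus, T, rng) / runs
print("Greedy:", g_reg, " UCB:", u_reg)
```

With these (hypothetical) values of $K$ and $T$, UCB spends most of the horizon in its exploration phase, while Greedy quickly settles on a near-optimal arm.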
Figure 1: Bayesian regret divided by the horizon for UCB (left) and Greedy
(right) as a function of the number of arms and the horizon in Gaussian bandit
problems.
Numerous interesting problems actually lie in the bottom left corner of Figure
1, i.e., bandit problems with a large number of arms and a relatively short
time horizon; as a consequence, the Greedy algorithm should be considered a valid baseline.
#### Our results
We first provide a generic regret bound on Greedy and illustrate how to derive worst-case regret bounds from it. We then instantiate this regret bound on a uniformly sampled subset of arms and prove that this satisfies near-optimal worst-case regret bounds in the continuous-armed, infinite-armed and many-armed bandit models. As a byproduct of our analysis, we obtain that, in the first of these models, the problem of unknown smoothness parameters can be overcome by a simple discretization depending only on the time horizon. In all these settings, we repeat the experiments of previous papers and show that the Greedy algorithm outmatches the state of the art.
#### Detailed comparison with prior work on Greedy
Greedy recently regained some attention in Bayesian bandit problems with a large but finite number of arms (Bayati et al., 2020). It performs extremely well empirically when the number of arms is large, sometimes better than “optimal” algorithms; in that case, the regret of Greedy is sublinear, though not optimal. In this work, we get rid of the strong Bayesian assumptions and consider many different bandit models, for which a subsampling technique is required.
Another recent success of Greedy is in linear contextual bandit problems, as
it is asymptotically optimal for a two-armed contextual bandit with linear
rewards when a covariate diversity condition holds (Bastani et al., 2017).
This idea can be extended to rewards given by generalized linear models. If
observed contexts are selected by an adversary, but perturbed by white noise,
then Greedy can again have optimal regret guarantees (Kannan et al., 2018).
Additional assumptions can even improve those results (Raghavan et al., 2018, 2020). Those results hold because exploration is not needed thanks to the
diversity in the contexts. We do not believe this assumption is satisfied in
many practical scenarios and we are therefore rather interested in the
implicit exploration of Greedy. As a consequence, we shall not consider the contextual framework further (even if, admittedly, our results could be generalized via careful binning (Perchet and Rigollet, 2013)). Interestingly, an extensive
empirical study of contextual bandit algorithms found that Greedy is actually
the second most efficient algorithm and is extremely close to the first one
(Bietti et al., 2018).
The Greedy algorithm has already been shown to enjoy great empirical
performance in the continuous-armed bandit model (Jedor et al., 2020). In this paper, we formalize this insight. Finally, we mention that in the one-
dimensional linear bandit problem with a known prior distribution, the
cumulative regret of a greedy algorithm (under additional structural
assumptions) admits an $\mathcal{O}(\sqrt{T})$ upper bound and its Bayes risk
admits an $\mathcal{O}(\log T)$ upper bound (Mersereau et al., 2009). Linear
bandits are only considered empirically in this paper (see Appendix G.1).
#### Related work on bandit models
We also provide a short literature review on the different bandit settings
studied in this paper.
##### Continuous-armed bandits
In the continuous-armed bandit problem with nonparametric regularity
assumptions (Agrawal, 1995), lower and upper bounds match up to sub-logarithmic factors (Kleinberg, 2005). Additional structural assumptions can
be considered to lower regret, such as margin condition (Auer et al., 2007),
Lipschitz (w.r.t. some fixed metric) mean-payoff function (Kleinberg et al.,
2008), local Lipschitzness (w.r.t. some dissimilarity function) (Bubeck et
al., 2010). Adaptivity to smoothness parameters is also a crucial task (Bubeck
et al., 2011; Locatelli and Carpentier, 2018; Hadiji, 2019).
##### Infinite-armed bandits
The original infinite-armed bandit problem (Berry et al., 1997) consists of a sequence of $n$ choices from an infinite number of Bernoulli arms, with $n\rightarrow\infty$. The objective was to minimize the long-run failure rate.
The Bernoulli parameters are independent observations from a known
distribution. With a uniform prior distribution, it is possible to control the
cumulative regret (Bonald and Proutiere, 2013). A more general model has been
considered (Wang et al., 2009). In particular, rewards are usually assumed to
be uniformly bounded in $[0,1]$ and the mean reward of a randomly drawn arm is
$\varepsilon$-optimal with probability
$\mathcal{O}\left(\varepsilon^{\beta}\right)$ for some $\beta>0$.
##### Many-armed bandits
Models in many-armed bandit problems are more varied, but the main idea is
that the number of arms is large comparatively to the number of rounds
(Teytaud et al., 2007). Exploration can be enhanced by focusing on a small subset of arms (using a cross-entropy-based algorithm, though without theoretical guarantees) (Wang et al., 2017). The definition of regret can also be altered: by considering a given quantile fraction of the probability distribution over the mean rewards of arms (Chaudhuri and Kalyanakrishnan, 2018), or with respect to a “satisficing” action, the definition of which is set by the learner (Russo and Van Roy, 2018). Mean rewards can also be formulated with a semi-parametric model (Ou et al., 2019).
A setting with multiple best/near-optimal arms without any assumptions about
the structure of the bandit instance has also been considered (Zhu and Nowak,
2020). The objective there is to design algorithms that can automatically
adapt to the unknown hardness of the problem.
## 2 Preliminaries
In the stochastic multi-armed bandit model, a learning agent interacts
sequentially with a finite set of $K$ distributions
$\mathcal{V}_{1},\dots,\mathcal{V}_{K}$, called arms. At round
$t\in\mathds{N}$, the agent chooses an arm $A_{t}$, which yields a stochastic
reward $X_{t}$ drawn from the associated probability distribution
$\mathcal{V}_{A_{t}}$. The objective is to design a sequential strategy
maximizing the expected cumulative reward up to some time horizon $T$. Let
$\mu_{1},\dots,\mu_{K}$ denote the mean rewards of arms, and
$\mu^{\star}\coloneqq\max_{k\in[K]}\mu_{k}$ be the best mean reward. The goal
is equivalent to minimizing the regret, defined as the difference between the
expected reward accumulated by the oracle strategy always playing the best arm
at each round, and the one accumulated by the strategy of the agent,
$R_{T}=\mathbb{E}\left[\sum_{t=1}^{T}\left(\mu^{\star}-X_{t}\right)\right]=T\mu^{\star}-\mathbb{E}\left[\sum_{t=1}^{T}\mu_{A_{t}}\right]$
where the expectation is taken with respect to the randomness in the sequence
of successive rewards from each arm and the possible randomization in the
strategy of the agent. Let $N_{k}(T)$ be the number of pulls of arm $k$ at the
end of round $T$ and define the suboptimality gap of an arm
$k\in[K]\coloneqq\\{1,\ldots,K\\}$ as $\Delta_{k}=\mu^{\star}-\mu_{k}$. The
expected regret is equivalently written as
$R_{T}=\sum_{k=1}^{K}\Delta_{k}\mathbb{E}\left[N_{k}(T)\right]\,.$
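The equivalence of the two expressions for the regret is a direct regrouping of the sum over rounds by arms; it can be checked numerically for any fixed sequence of pulls (a quick sanity check of ours, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 5, 1_000
mus = rng.uniform(0, 1, K)               # arbitrary mean rewards
actions = rng.integers(0, K, T)          # pulls of some arbitrary strategy
mu_star, deltas = mus.max(), mus.max() - mus

lhs = T * mu_star - mus[actions].sum()   # T*mu_star - sum_t mu_{A_t}
counts = np.bincount(actions, minlength=K)
rhs = (deltas * counts).sum()            # sum_k Delta_k * N_k(T)
print(lhs, rhs)
```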
Input: a set of $K$ arms
For $t=1,\dots,T$: pull arm $\displaystyle A_{t}\in\operatorname*{argmax}_{k\in[K]}\widehat{\mu}_{k}(t-1)$
Figure 2: Greedy
#### The Greedy algorithm
Summarized in Figure 2, Greedy is probably the simplest and the most
obvious algorithm. Given a set of $K$ arms, at each round $t$, it pulls the
arm with the highest average reward
$\displaystyle\widehat{\mu}_{k}(t-1)=\frac{1}{N_{k}(t-1)}\sum_{s=1}^{t-1}X_{s}\mathbf{1}\left\\{A_{s}=k\right\\}$ (with the convention that $0/0=\infty$, so that the first $K$ pulls initialize each counter). Thus, it constantly exploits the empirically best arm.
In the rest of the paper, we assume that the stochastic reward $X_{t}$ takes the form $X_{t}=\mu_{A_{t}}+\eta_{t}$, where $\\{\eta_{t}\\}_{t=1}^{T}$ are i.i.d. 1-subgaussian white noise, and that the $\mu_{k}$ are bounded for all $k\in[K]$, without loss of generality $\mu_{k}\in[0,1]$. We further assume knowledge of the time horizon $T$; an unknown time horizon can be handled as usual in bandit problems (Besson and Kaufmann, 2018). Finally, we say that arm $k$ is $\varepsilon$-optimal for some $\varepsilon>0$ if $\mu_{k}\geq\mu^{\star}-\varepsilon$.
## 3 Generic bound on Greedy
We now present the generic worst-case regret bound on Greedy that we will use
to derive near-optimal bounds in several bandit models. The proof is provided
in Appendix B.1.
###### Theorem 3.1.
The regret of Greedy satisfies, for all $\varepsilon>0$,
$R_{T}\leq
T\exp\left(-N_{\varepsilon}\frac{\varepsilon^{2}}{2}\right)+3\varepsilon
T+\frac{6K}{\varepsilon}+\sum_{k=1}^{K}\Delta_{k}$
where $N_{\varepsilon}$ denotes the number of $\varepsilon$-optimal arms.
###### Remark 3.2.
This bound generalizes a Bayesian analysis (Bayati et al., 2020). It is
slightly looser; indeed the Bayesian assumption can be used to bound
$N_{\varepsilon}$ and further improve the third term by bounding the number of
suboptimal arms. Those techniques usually do not work in the stochastic
setting.
It is easy to see that this bound is meaningless when $N_{\varepsilon}$ is
independent of $T$ as one of the first two terms will, at least, be linear
with respect to $T$. On the other hand, $N_{\varepsilon}$ has no reason to
depend on the time horizon. The trick to obtain sublinear regret will be to
lower bound $N_{\varepsilon}$ by a function of the number of arms $K$, then to
optimize $K$ with respect to the time horizon $T$. To motivate this, consider
the following example.
###### Example 3.3.
Consider a problem with a huge number of arms $n$ with mean rewards drawn
i.i.d. from a uniform distribution over $[0,1]$. In that specific case, we
roughly have $N_{\varepsilon}\approx\varepsilon K$ for some subset of arms,
chosen uniformly at random, with cardinality $K$. Taking
$\varepsilon=\left(\frac{\log T}{K}\right)^{1/3}$, so that the first term in
the generic bound is sublinear, yields a
$\mathcal{O}\left(\max\left\\{T\left(\frac{\log
T}{K}\right)^{1/3},K\left(\frac{K}{\log T}\right)^{1/3}\right\\}\right)$
regret bound, where the two terms come from the second and third terms of the generic bound, respectively. If we sub-sample $K=T^{3/5}\left(\log T\right)^{2/5}$ arms, so that the maximum is minimized, the regret bound becomes $\mathcal{O}\left(T^{4/5}\left(\log T\right)^{1/5}\right)$; in particular, it is sublinear.
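The balancing act in this example is pure arithmetic and can be checked directly (a sanity check of the exponents, with natural logarithms assumed):

```python
import numpy as np

T = 1e6
K = T**(3 / 5) * np.log(T)**(2 / 5)           # the subsampled number of arms
term_a = T * (np.log(T) / K)**(1 / 3)         # from the second term, 3*eps*T
term_b = K * (K / np.log(T))**(1 / 3)         # from the third term, 6K/eps
balanced = T**(4 / 5) * np.log(T)**(1 / 5)    # the claimed common value
print(term_a, term_b, balanced)
```

All three quantities coincide, confirming that this choice of $K$ equalizes the two dominant terms at $T^{4/5}(\log T)^{1/5}$.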
This argument motivates this paper and will be made formal in subsequent
sections. Though this does not lead to optimal bounds – as expected by the
essence of the greedy heuristic in the multi-armed bandit model –, it will
nonetheless be highly competitive for short time spans in many practical bandit
problems.
It is possible to theoretically improve the previous result by using a
chaining/peeling type of argument. Unfortunately, it is not practical to
derive better explicit guarantees, as it involves an integral without a closed-form expression; its proof is postponed to Appendix B.2.
###### Corollary 3.4.
The regret of Greedy satisfies
$R_{T}\leq\min_{\varepsilon}\Big{\\{}3\varepsilon
T+\frac{6K}{\varepsilon}+\int_{\varepsilon}^{1}\left(3T+\frac{6K}{x^{2}}\right)\exp\left(-N_{x}\frac{x^{2}}{2}\right)dx\Big{\\}}+T\exp\left(-\frac{K}{2}\right)+\sum_{k=1}^{K}\Delta_{k}\,.$
## 4 Continuous-armed bandits
We first study Greedy in the continuous-armed bandit problem. We recall that
in this model, the number of actions is infinitely large. Formally, let
$\mathcal{A}$ be an arbitrary set and $\mathcal{F}$ a set of functions from
$\mathcal{A}\rightarrow\mathbb{R}$. The learner is given access to the action
set $\mathcal{A}$ and function class $\mathcal{F}$. In each round $t$, the
learner chooses an action $A_{t}\in\mathcal{A}$ and receives reward
$X_{t}=f(A_{t})+\eta_{t}$, where $\eta_{t}$ is some noise and
$f\in\mathcal{F}$ is fixed, but unknown. As usual in the literature
(Kleinberg, 2005; Auer et al., 2007; Hadiji, 2019), we restrict ourselves to
the case $\mathcal{A}=[0,1]$, $\eta_{t}$ is 1-subgaussian, $f$ takes values in
$[0,1]$ and $\mathcal{F}$ is the set of all functions that satisfy a Hölder condition around the maximum. Formally,
###### Assumption 4.1
There exist constants $L\geq 0$ and $\alpha>0$ such that for all $x\in[0,1]$,
$f(x^{\star})-f(x)\leq L\cdot|x^{\star}-x|^{\alpha}$
where $x^{\star}$ denotes the optimal arm.
This assumption captures the degree of continuity at the maximum, and it is needed to ensure that the maximum is not attained at a sharp peak.
Similarly to CAB1 (Kleinberg, 2005), the Greedy algorithm will work on a
discretization of the action set into a finite set of $K$ equally spaced
points $\\{1/K,2/K,\dots,1\\}$. Each point is then treated as an arm, and we can apply the standard version of Greedy to them.
###### Remark 4.2.
The same analysis holds if the algorithm instead chooses a point uniformly at random from the selected interval $\left[\frac{k-1}{K},\frac{k}{K}\right]$ for $1\leq k\leq K$; see also Auer et al. (2007).
The problem is thus to set the number of points $K$. The first regret bound on
the Greedy algorithm assumes that the smoothness parameters are known. The
proof is provided in Appendix C.
###### Theorem 4.3.
If $f:[0,1]\rightarrow[0,1]$ satisfies Assumption 4.1, then for all
$\varepsilon>0$ and a discretization of
$K\geq\left(\frac{L}{\varepsilon}\right)^{1/\alpha}$ arms, the regret of Greedy satisfies
$R_{T}\leq
T\exp\left(-\frac{K}{2L^{1/\alpha}}\varepsilon^{2+1/\alpha}\right)+4\varepsilon
T+\frac{6K}{\varepsilon}+K\,.$
In particular, the choice
$K=\left(32/27\right)^{\alpha/(4\alpha+1)}L^{2/(4\alpha+1)}T^{(2\alpha+1)/(4\alpha+1)}\left(\log
T\right)^{2\alpha/(4\alpha+1)}$
yields for $L\leq\sqrt{\frac{3}{2T}}K^{\alpha+1/2}$,
$R_{T}\leq 13L^{2/(4\alpha+1)}T^{(3\alpha+1)/(4\alpha+1)}\left(\log
T\right)^{2\alpha/(4\alpha+1)}+1\,.$
This bound is sublinear with respect to the time horizon $T$, yet suboptimal.
Indeed, the lower bound in this setting is
$\Omega\left(T^{(\alpha+1)/(2\alpha+1)}\right)$ and the MOSS algorithm run on an optimal discretization attains it, since its regret scales, up to constant
factor, as
$\mathcal{O}\left(L^{1/(2\alpha+1)}T^{(\alpha+1)/(2\alpha+1)}\right)$ (Hadiji,
2019). Yet, as mentioned previously, Greedy is theoretically competitive for short time horizons due to small constant factors. In Figure 3, we display the regret upper bounds of MOSS and Greedy as a function of time for functions that satisfy Assumption 4.1 with smoothness parameters $L=1$ and $\alpha=1$. We see that the bound on Greedy is stronger up until a moderate time horizon $T\approx 12000$.
Of course, assuming that the learner knows smoothness parameters $\alpha$ and
$L$ is often unrealistic. If we want to ensure a low regret on very regular
functions, by taking $\alpha\rightarrow\infty$, we have the following
corollary.
###### Corollary 4.4.
If $f:[0,1]\rightarrow[0,1]$ satisfies Assumption 4.1, then for a
discretization of $K=\sqrt{\frac{4}{3}T\log T}$ arms, the regret of the Greedy
algorithm satisfies, for $L\leq
3^{1/4}\left(4/3\right)^{(2\alpha+1)/4}T^{2\alpha}\left(\log
T\right)^{(\alpha+1)/2}$,
$R_{T}\leq
15\max\\{L^{1/(2\alpha+1)},L^{-1/(2\alpha+1)}\\}T^{(3\alpha+2)/(4\alpha+2)}\sqrt{\log
T}+1\,.$
###### Proof 4.5.
It is a direct consequence of Theorem 4.3 with the choice of
$\varepsilon=\left(L^{1/\alpha}\sqrt{\frac{3\log
T}{T}}\right)^{\alpha/(2\alpha+1)}$.
Once again, Greedy attains a sublinear, yet suboptimal, regret bound. In the
case of unknown smoothness parameters, the regret lower bound is
$\Omega\left(L^{1/(1+\alpha)}T^{(\alpha+2)/(2\alpha+2)}\right)$ (Locatelli and
Carpentier, 2018), which is attained by MeDZO with a
$\mathcal{O}\left(L^{1/(\alpha+1)}T^{(\alpha+2)/(2\alpha+2)}\left(\log_{2}T\right)^{3/2}\right)$
regret bound (Hadiji, 2019). This time, Greedy also has a lower polynomial dependency, which makes it even more competitive theoretically. In Figure 3, we display the regret upper bounds of MeDZO and Greedy (with unknown smoothness parameters) as a function of time for functions that satisfy Assumption 4.1 with smoothness parameters $L=1$ and $\alpha=1$. Here we cannot see the turning point, since Greedy is stronger up until an extremely large time horizon $T\approx 1.9\cdot 10^{46}$. Our numerical simulations will further support this theoretical advantage.
Figure 3: Regret upper bounds of various algorithms as a function of time in the continuous-armed bandit model with smoothness parameters $L=1$ and $\alpha=1$ (left: known smoothness; right: unknown smoothness).
## 5 Infinite-armed bandits
We now study the infinite-armed bandit problem. In this setting, we consider
the general model of Wang et al. (2009). In particular, they assume a margin condition on the mean reward of a randomly drawn arm. Formally,
###### Assumption 5.1
There exist $\mu^{\star}\in(0,1]$ and $\beta>0$ such that the mean reward
$\mu$ of a randomly drawn arm satisfies
$\mathbb{P}\left(\mu>\mu^{\star}-\varepsilon\right)=\mathcal{O}\left(\varepsilon^{\beta}\right)\text{,
for }\varepsilon\rightarrow 0\,.$
Equivalently, there exist $c_{1}>0$ and $c_{2}>0$ such that
$c_{1}\varepsilon^{\beta}\leq\mathbb{P}\left(\mu>\mu^{\star}-\varepsilon\right)\leq
c_{2}\varepsilon^{\beta}\,.$
Similarly to UCB-F (Wang et al., 2009), our strategy consists of first choosing $K$ arms at random and then running Greedy on those arms. The problem
Greedy assumes the knowledge of the parameter $\beta$ and $c_{1}$. Its proof
is deferred in Appendix D.
###### Theorem 5.2.
Suppose that Assumption 5.1 holds. The regret of Greedy satisfies, for any subsampling of $K>0$ arms and for all $\varepsilon>0$,
$R_{T}\leq
T\left[\exp\left(-\frac{c_{1}}{4}K\varepsilon^{2+\beta}\right)+\exp\left(-\frac{c_{1}}{8}K\varepsilon^{\beta}\right)\right]+4\varepsilon
T+\frac{6K}{\varepsilon}+K\,.$
In particular, the choice
$K=\left(2/3\right)^{(2+\beta)/(4+\beta)}\left(\frac{8}{c_{1}(4+\beta)}\right)^{2/(4+\beta)}T^{(2+\beta)/(4+\beta)}\left(\log
T\right)^{2/(4+\beta)}$
yields
$R_{T}\leq
20\left(c_{1}(4+\beta)\right)^{-2/(4+\beta)}T^{(3+\beta)/(4+\beta)}\left(\log
T\right)^{2/(4+\beta)}\,.$
In comparison, the lower bound in this model is $\Omega\left(T^{\beta/(1+\beta)}\right)$ for any $\beta>0$ and $\mu^{\star}\leq 1$, and UCB-F obtains a $\mathcal{O}\left(T^{\beta/(\beta+1)}\log T\right)$ regret bound in the case $\mu^{\star}=1$ or $\beta>1$, and a $\widetilde{\mathcal{O}}\left(T^{1/2}\right)$ bound otherwise (Wang et al., 2009). The regret of Greedy is once again sublinear, though suboptimal, with a lower logarithmic dependency. Our numerical simulations will further emphasize its competitive performance.
The case of unknown parameters is more complicated to handle compared to the
continuous-armed model and is, furthermore, not the main focus of this paper. One solution, proposed by Carpentier and Valko (2015), is nonetheless to perform an initial phase to estimate the parameter $\beta$.
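For illustration, the sketch below instantiates the subsampling size of Theorem 5.2 for a uniform reservoir ($\beta=1$, $c_{1}=1$, $\mu^{\star}=1$) and runs Greedy on the subsampled Bernoulli arms; the constants and horizon are our own toy choices, so treat this as a sketch rather than the paper's experimental protocol.

```python
import numpy as np

beta, c1, T = 1.0, 1.0, 10_000     # uniform reservoir on [0, 1]: beta = c1 = 1
K = int((2 / 3)**((2 + beta) / (4 + beta))
        * (8 / (c1 * (4 + beta)))**(2 / (4 + beta))
        * T**((2 + beta) / (4 + beta)) * np.log(T)**(2 / (4 + beta)))

rng = np.random.default_rng(3)
mus = rng.uniform(0, 1, K)         # subsample K arms from the infinite reservoir
n, s, regret = np.zeros(K), np.zeros(K), 0.0
for t in range(T):
    a = t if t < K else int(np.argmax(s / n))
    s[a] += float(rng.random() < mus[a])   # Bernoulli rewards
    n[a] += 1
    regret += 1.0 - mus[a]                 # mu_star = 1 under the uniform prior
print("K =", K, " average regret per round:", regret / T)
```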
## 6 Many-armed bandits
We now consider the particular many-armed bandit model of Zhu and Nowak (2020). It is somewhat related to the previous two, except that it also takes the time horizon into account. In particular, it focuses on the case where multiple best arms are present. Formally, let $T$ be the time horizon, $n$ the total number of arms and $m$ the number of best arms. We emphasize that $n$ can be arbitrarily large and $m$ is usually unknown. The following
assumption will lower bound the number of best arms.
###### Assumption 6.1
There exists $\gamma\in[0,1]$ such that the number of best arms satisfies
$\frac{n}{m}\leq T^{\gamma}\,.$
We assume that the value $\gamma$ (or at least some upper bound on it) is known in our case, even though adaptivity to it is possible (Zhu and Nowak, 2020). The following theorem bounds the regret of a Greedy algorithm that initially subsamples a set of arms. Its proof is provided in Appendix E.
###### Theorem 6.2.
Suppose that Assumption 6.1 holds and that the number of arms $n$ is large enough for the following subsampling schemes to be possible. Depending on the value of $\gamma$ and the time horizon $T$:
* •
If $T^{1-3\gamma}\leq\log T$, in particular for $\gamma\geq\frac{1}{3}$ and
$T\geq 2$, choosing $K=2T^{2\gamma}\log T$ leads to
$R_{T}\leq 14T^{\gamma+1/2}\log T+2\,.$
* •
Otherwise, the choice of $K=2\sqrt{T^{1+\gamma}\log T}$ yields
$R_{T}\leq 14T^{(3+\gamma)/4}\sqrt{\log T}+2\,.$
The previous bounds indicate that Greedy achieves a sublinear worst-case regret on the standard multi-armed bandit problem, provided that the number of arms is large and the proportion of near-optimal arms is high enough. In comparison, the MOSS algorithm run on an optimal subsampling achieves a $\mathcal{O}\left(T^{(1+\gamma)/2}\log T\right)$ regret bound for all $\gamma\in[0,1]$, which is optimal up to logarithmic factors (Zhu and Nowak, 2020). In this case, our numerical simulations will show that Greedy is competitive even when the setup is close to the limit of its theoretical guarantee.
## 7 Experiments
We now evaluate Greedy in the previously studied bandit models to highlight its competitive practical performance. For fairness to the other algorithms, and for the sake of reproducibility, we do not create new experimental setups but instead reproduce experiments found in the literature (and compare the performance of Greedy with state-of-the-art algorithms).
### 7.1 Continuous-armed bandits
In the continuous-armed bandit setting, we repeat the experiments of Hadiji
(2019). We consider three functions that are gradually sharper around their maxima and thus harder to optimize:
$\displaystyle f_{1}:x$ $\displaystyle\mapsto 0.5\sin(13x)\sin(27x)+0.5$
$\displaystyle f_{2}:x$
$\displaystyle\mapsto\max\left(3.6x(1-x),1-|x-0.05|/0.05\right)$
$\displaystyle f_{3}:x$ $\displaystyle\mapsto
x(1-x)\left(4-\sqrt{|\sin{60x}|}\right)$
These functions satisfy Assumption 4.1 with $\alpha=2,1,0.5$ and $L\approx 221,20,2$, respectively, and are plotted for convenience in Appendix A.2.
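These test functions are easy to sanity-check numerically; the snippet below evaluates them on a fine grid and verifies that they indeed take values in $[0,1]$, with $f_{2}$ peaking at exactly $1$ at $x=0.05$ (a quick check of the setup, not part of the original experiments):

```python
import numpy as np

f1 = lambda x: 0.5 * np.sin(13 * x) * np.sin(27 * x) + 0.5
f2 = lambda x: np.maximum(3.6 * x * (1 - x), 1 - np.abs(x - 0.05) / 0.05)
f3 = lambda x: x * (1 - x) * (4 - np.sqrt(np.abs(np.sin(60 * x))))

x = np.linspace(0, 1, 2001)              # grid containing x = 0.05
vals = [f1(x), f2(x), f3(x)]
print([float(v.max()) for v in vals])    # maxima of f1, f2, f3 on the grid
```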
Noise terms are drawn i.i.d. from a standard Gaussian distribution and we consider a time horizon $T=100000$. We compare the Greedy algorithm with MeDZO (Hadiji, 2019), CAB1 (Kleinberg, 2005) with MOSS (Audibert and Bubeck, 2009; Degenne and Perchet, 2016b) as the underlying algorithm, and Zooming (Kleinberg et al., 2008). For Greedy, we use the discretization of Corollary 4.4, while for CAB.MOSS we choose the optimal discretization $K=\left\lceil L^{2/(2\alpha+1)}T^{1/(2\alpha+1)}\right\rceil$. For MeDZO, we choose the parameter $B=\sqrt{T}$ suggested by the authors. We emphasize here that CAB.MOSS and Zooming require the smoothness parameters, contrary to MeDZO and Greedy. Results are averaged over $1000$ iterations and are presented in Figure 4. Shaded areas represent 5 standard deviations for each algorithm.
Figure 4: Regret of various algorithms as a function of time in continuous-armed bandit problems (panels: $f_{1}$, $f_{2}$, $f_{3}$).
We see that Greedy outperforms the other algorithms in all scenarios. The
slope of the cumulative regret of Greedy is clearly steeper than that of
CAB.MOSS, yet it achieves a lower regret by quickly concentrating on
near-optimal arms. Moreover, the difference is striking for the relatively
large time horizon considered here. Interestingly, the slope of Greedy is more
pronounced in the second scenario; this may be due to the low number of local
maxima, which reduces the number of $\varepsilon$-optimal arms available to
Greedy.
### 7.2 Infinite-armed bandits
In the infinite-armed bandit setting, we repeat the experiments of Bonald and
Proutiere (2013). We consider two Bernoulli bandit problems with a time
horizon $T=10000$. In the first scenario, mean rewards are drawn i.i.d. from
the uniform distribution over $[0,1]$, while in the second scenario they are
drawn from a Beta(1, 2) distribution. We assume the knowledge of the
parameters. We compare Greedy with UCB-F (Wang et al., 2009), a
straightforward extension of MeDZO (analyzed by Zhu and Nowak (2020) in this
model), and TwoTarget (Bonald and Proutiere, 2013), which further assumes
Bernoulli rewards and the knowledge of the underlying distribution of mean
rewards. For Greedy, we use the subsampling suggested in Theorem 5.2. Results,
averaged over 1000 runs, are displayed in Figure 5, and the shaded area
represents 0.5 standard deviations for each algorithm.
Figure 5: Regret of various algorithms as a function of time in infinite-armed bandit problems (panels: uniform prior, Beta(1, 2) prior).
Once again, we observe the excellent empirical performance of Greedy. It is
outperformed by TwoTarget in the uniform case, since the latter has been
specifically optimized for that case (and is asymptotically optimal), but
Greedy is more robust, as the second scenario shows; furthermore, TwoTarget
works only for Bernoulli rewards, unlike Greedy.
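As a side check, the two priors above satisfy Assumption 5.1 near $\mu^{\star}=1$ with $\beta=1$ (uniform) and $\beta=2$ (Beta(1, 2)), since $\mathbb{P}(\mu\geq 1-\varepsilon)$ equals $\varepsilon$ and $\varepsilon^{2}$ respectively. A quick Monte Carlo sketch, with illustrative sample sizes and inverse-transform sampling for the Beta(1, 2) draws:

```python
import random

rng = random.Random(0)
N, eps = 200_000, 0.1

uniform_means = [rng.random() for _ in range(N)]
# Beta(1, 2) has CDF F(x) = 1 - (1 - x)^2, so F^{-1}(u) = 1 - sqrt(1 - u).
beta_means = [1 - (1 - rng.random()) ** 0.5 for _ in range(N)]

frac_uniform = sum(mu >= 1 - eps for mu in uniform_means) / N
frac_beta = sum(mu >= 1 - eps for mu in beta_means) / N
# Expect frac_uniform close to eps (beta = 1) and frac_beta close to eps**2 (beta = 2).
```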
### 7.3 Many-armed bandits
In the many-armed bandit setting, we repeat the experiment of Zhu and Nowak
(2020). We consider a Bernoulli bandit problem where the best arms have a mean
reward of $0.9$, while the mean rewards of suboptimal arms are evenly
distributed among $\\{0.1,0.2,0.3,0.4,0.5\\}$. The time horizon is $T=5000$
and the total number of arms is $n=2000$. We set the hardness level at
$\gamma=0.4$, resulting in a number of best arms
$m=\left\lceil\frac{n}{T^{\gamma}}\right\rceil=64$. In this setup, Greedy is
near the limit of its theoretical guarantee. We compare OracleGreedy, the
greedy algorithm run on a subsampling of arms as analyzed previously, with
MOSS (Audibert and Bubeck, 2009), OracleMOSS (Zhu and Nowak, 2020) (which
considers an optimal subsampling for MOSS), MeDZO (Hadiji, 2019; Zhu and
Nowak, 2020) and the standard Greedy algorithm that considers all arms. For
OracleGreedy, we consider a subsampling of
$K=(1-2\gamma)T^{2\gamma}\log T/4$ arms, which corresponds to the value given
by a more careful analysis of the regret on the bad events in Theorem 6.2 for
$1/4$-subgaussian random variables. Results are averaged over 5000 runs and
displayed in Figure 6. The shaded area represents 0.5 standard deviations for
each algorithm.
Figure 6: Regret of various algorithms on a many-armed bandit problem with
hardness $\gamma=0.4$.
Once again we observe the excellent performance of Greedy on a subsampling of
arms; it outperforms OracleMOSS, its closest competitor (both assume knowledge
of the hardness parameter $\gamma$ and use a subsampling). It is also
interesting to note that the variance of OracleGreedy is much smaller than
that of OracleMOSS.
## 8 Conclusion
In this paper, we have refined the standard version of Greedy by considering a
subsampling of arms and proved sublinear worst-case regret bounds in several
bandit models. We also carried out an extensive experimental evaluation which
reveals that it outperforms the state-of-the-art for relatively short time
horizon. Besides, since its indexes are usually computed by most algorithms,
it is trivial to implement and fast to run. Consequently, the Greedy algorithm
should be considered as a standard baseline when multiple near-optimal arms
are present, which is the case in many models as we saw.
#### Interesting Direction
We leave open the question of adaptivity. Adaptivity here could refer to
adaptive subsampling or to adaptivity to unknown parameters. In particular, in
the continuous-armed bandit problem, previous work showed that the learner
pays a polynomial cost to adapt (Hadiji, 2019). Since Greedy works best for
relatively short time horizons, it would be interesting to study this cost for
a greedy strategy and to determine for which time horizons adaptation is
worthwhile.
Another interesting direction, relevant in practical problems, is to analyze
the performance of Greedy in combinatorial bandits (with a large number of
arms and thus an intractable number of actions), but with some structure on
the arms' rewards (Degenne and Perchet, 2016a; Perrault et al., 2019, 2020b).
#### Acknowledgements
The research presented was supported by the French National Research Agency
under the project BOLD (ANR19-CE23-0026-04). It was also supported in part by
a public grant as part of the Investissement d'avenir project, reference
ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with the Gaspard Monge
Program for optimization, operations research and their interactions with data
sciences.
## References
* Abbasi-Yadkori et al. (2011) Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In _Advances in Neural Information Processing Systems_ , pages 2312–2320, 2011.
* Agrawal (1995) Rajeev Agrawal. The continuum-armed bandit problem. _SIAM journal on control and optimization_ , 33(6):1926–1951, 1995.
* Agrawal and Goyal (2012) Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In _Conference on learning theory_ , pages 39–1, 2012.
* Audibert and Bubeck (2009) Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. 2009.
* Auer et al. (2002) Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. _Machine learning_ , 47(2-3):235–256, 2002.
* Auer et al. (2007) Peter Auer, Ronald Ortner, and Csaba Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. In _International Conference on Computational Learning Theory_ , pages 454–468. Springer, 2007.
* Baransi et al. (2014) Akram Baransi, Odalric-Ambrym Maillard, and Shie Mannor. Sub-sampling for multi-armed bandits. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_ , pages 115–131. Springer, 2014.
* Bastani et al. (2017) Hamsa Bastani, Mohsen Bayati, and Khashayar Khosravi. Mostly exploration-free algorithms for contextual bandits. _arXiv preprint arXiv:1704.09011_ , 2017.
* Bayati et al. (2020) Mohsen Bayati, Nima Hamidi, Ramesh Johari, and Khashayar Khosravi. Unreasonable effectiveness of greedy algorithms in multi-armed bandit with many arms. _Advances in Neural Information Processing Systems_ , 33, 2020.
* Berry et al. (1997) Donald A Berry, Robert W Chen, Alan Zame, David C Heath, and Larry A Shepp. Bandit problems with infinitely many arms. _The Annals of Statistics_ , pages 2103–2116, 1997.
* Besson and Kaufmann (2018) Lilian Besson and Emilie Kaufmann. What doubling tricks can and can’t do for multi-armed bandits. _arXiv preprint arXiv:1803.06971_ , 2018.
* Bietti et al. (2018) Alberto Bietti, Alekh Agarwal, and John Langford. A contextual bandit bake-off. _arXiv preprint arXiv:1802.04064_ , 2018.
* Bonald and Proutiere (2013) Thomas Bonald and Alexandre Proutiere. Two-target algorithms for infinite-armed bandits with bernoulli rewards. In _Advances in Neural Information Processing Systems_ , pages 2184–2192, 2013.
* Bubeck and Cesa-Bianchi (2012) Sébastien Bubeck and Nicolo Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. _arXiv preprint arXiv:1204.5721_ , 2012.
* Bubeck et al. (2010) Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvari. X-armed bandits. _arXiv preprint arXiv:1001.4475_ , 2010.
* Bubeck et al. (2011) Sébastien Bubeck, Gilles Stoltz, and Jia Yuan Yu. Lipschitz bandits without the Lipschitz constant. In _International Conference on Algorithmic Learning Theory_ , pages 144–158. Springer, 2011.
* Carpentier and Valko (2015) Alexandra Carpentier and Michal Valko. Simple regret for infinitely many armed bandits. In _International Conference on Machine Learning_ , pages 1133–1141, 2015.
* Chakrabarti et al. (2009) Deepayan Chakrabarti, Ravi Kumar, Filip Radlinski, and Eli Upfal. Mortal multi-armed bandits. In _Advances in neural information processing systems_ , pages 273–280, 2009.
* Chaudhuri and Kalyanakrishnan (2018) Arghya Roy Chaudhuri and Shivaram Kalyanakrishnan. Quantile-regret minimisation in infinitely many-armed bandits. In _UAI_ , pages 425–434, 2018.
* Cheung et al. (2019) Wang Chi Cheung, Vincent Tan, and Zixin Zhong. A Thompson sampling algorithm for cascading bandits. In _The 22nd International Conference on Artificial Intelligence and Statistics_ , pages 438–447, 2019.
* Degenne and Perchet (2016a) Rémy Degenne and Vianney Perchet. Combinatorial semi-bandit with known covariance. In _Advances in Neural Information Processing Systems_ , pages 2972–2980, 2016a.
* Degenne and Perchet (2016b) Rémy Degenne and Vianney Perchet. Anytime optimal algorithms in stochastic multi-armed bandits. In Maria Florina Balcan and Kilian Q. Weinberger, editors, _Proceedings of The 33rd International Conference on Machine Learning_ , volume 48 of _Proceedings of Machine Learning Research_ , pages 1587–1595, New York, New York, USA, 20–22 Jun 2016b. PMLR. URL http://proceedings.mlr.press/v48/degenne16.html.
* Deshpande and Montanari (2012) Yash Deshpande and Andrea Montanari. Linear bandits in high dimension and recommendation systems. In _2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_ , pages 1750–1754. IEEE, 2012.
* Even-Dar et al. (2002) Eyal Even-Dar, Shie Mannor, and Yishay Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In _International Conference on Computational Learning Theory_ , pages 255–270. Springer, 2002.
* Garivier et al. (2019) Aurélien Garivier, Pierre Ménard, and Gilles Stoltz. Explore first, exploit next: The true shape of regret in bandit problems. _Mathematics of Operations Research_ , 44(2):377–399, 2019.
* Hadiji (2019) Hédi Hadiji. Polynomial cost of adaptation for x-armed bandits. In _Advances in Neural Information Processing Systems_ , pages 1029–1038, 2019.
* Honda and Takemura (2010) Junya Honda and Akimichi Takemura. An asymptotically optimal bandit algorithm for bounded support models. In _COLT_ , pages 67–79. Citeseer, 2010.
* Honda and Takemura (2015) Junya Honda and Akimichi Takemura. Non-asymptotic analysis of a new bandit algorithm for semi-bounded rewards. _The Journal of Machine Learning Research_ , 16(1):3721–3756, 2015.
* Jedor et al. (2020) Matthieu Jedor, Jonathan Louëdec, and Vianney Perchet. Lifelong learning in multi-armed bandits. _arXiv preprint arXiv:2012.14264_ , 2020.
* Kannan et al. (2018) Sampath Kannan, Jamie H Morgenstern, Aaron Roth, Bo Waggoner, and Zhiwei Steven Wu. A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. In _Advances in Neural Information Processing Systems_ , pages 2227–2236, 2018.
* Kleinberg et al. (2008) Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-armed bandits in metric spaces. In _Proceedings of the fortieth annual ACM symposium on Theory of computing_ , pages 681–690, 2008.
* Kleinberg (2005) Robert D Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In _Advances in Neural Information Processing Systems_ , pages 697–704, 2005.
* Kuleshov and Precup (2014) Volodymyr Kuleshov and Doina Precup. Algorithms for multi-armed bandit problems. _arXiv preprint arXiv:1402.6028_ , 2014.
* Kveton et al. (2015) Branislav Kveton, Csaba Szepesvari, Zheng Wen, and Azin Ashkan. Cascading bandits: Learning to rank in the cascade model. In _International Conference on Machine Learning_ , pages 767–776, 2015.
* Lattimore and Szepesvári (2020) Tor Lattimore and Csaba Szepesvári. _Bandit algorithms_. Cambridge University Press, 2020.
* Locatelli and Carpentier (2018) Andrea Locatelli and Alexandra Carpentier. Adaptivity to smoothness in x-armed bandits. In _Conference on Learning Theory_ , pages 1463–1492, 2018.
* Mersereau et al. (2009) Adam J Mersereau, Paat Rusmevichientong, and John N Tsitsiklis. A structured multiarmed bandit problem and the greedy policy. _IEEE Transactions on Automatic Control_ , 54(12):2787–2802, 2009.
* Ou et al. (2019) Mingdong Ou, Nan Li, Cheng Yang, Shenghuo Zhu, and Rong Jin. Semi-parametric sampling for stochastic bandits with many arms. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 7933–7940, 2019.
* Perchet and Rigollet (2013) Vianney Perchet and Philippe Rigollet. The multi-armed bandit problem with covariates. _Ann. Statist._ , 41(2):693–721, 04 2013. 10.1214/13-AOS1101. URL https://doi.org/10.1214/13-AOS1101.
* Perchet et al. (2016) Vianney Perchet, Philippe Rigollet, Sylvain Chassang, and Erik Snowberg. Batched bandit problems. _Ann. Statist._ , 44(2):660–681, 04 2016. 10.1214/15-AOS1381. URL https://doi.org/10.1214/15-AOS1381.
* Perrault et al. (2019) Pierre Perrault, Vianney Perchet, and Michal Valko. Exploiting structure of uncertainty for efficient matroid semi-bandits. _arXiv preprint arXiv:1902.03794_ , 2019.
* Perrault et al. (2020a) Pierre Perrault, Etienne Boursier, Michal Valko, and Vianney Perchet. Statistical efficiency of thompson sampling for combinatorial semi-bandits. _Advances in Neural Information Processing Systems_ , 33, 2020a.
* Perrault et al. (2020b) Pierre Perrault, Michal Valko, and Vianney Perchet. Covariance-adapting algorithm for semi-bandits with application to sparse outcomes. In _Conference on Learning Theory_ , pages 3152–3184. PMLR, 2020b.
* Raghavan et al. (2018) Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, and Zhiwei Steven Wu. The externalities of exploration and how data diversity helps exploitation. _arXiv preprint arXiv:1806.00543_ , 2018.
* Raghavan et al. (2020) Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, and Zhiwei Steven Wu. Greedy algorithm almost dominates in smoothed contextual bandits. _arXiv preprint arXiv:2005.10624_ , 2020.
* Russo and Van Roy (2018) Daniel Russo and Benjamin Van Roy. Satisficing in time-sensitive bandit learning. _arXiv preprint arXiv:1803.02855_ , 2018.
* Slivkins (2019) Aleksandrs Slivkins. Introduction to multi-armed bandits. _arXiv preprint arXiv:1904.07272_ , 2019.
* Teytaud et al. (2007) Olivier Teytaud, Sylvain Gelly, and Michele Sebag. Anytime many-armed bandits. 2007.
* Thompson (1933) William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. _Biometrika_ , 25(3/4):285–294, 1933.
* Vermorel and Mohri (2005) Joannes Vermorel and Mehryar Mohri. Multi-armed bandit algorithms and empirical evaluation. In _European conference on machine learning_ , pages 437–448. Springer, 2005.
* Wang et al. (2017) Erli Wang, Hanna Kurniawati, and Dirk P Kroese. Cemab: A cross-entropy-based method for large-scale multi-armed bandits. In _Australasian Conference on Artificial Life and Computational Intelligence_ , pages 353–365. Springer, 2017.
* Wang et al. (2009) Yizao Wang, Jean-Yves Audibert, and Rémi Munos. Algorithms for infinitely many-armed bandits. In _Advances in Neural Information Processing Systems_ , pages 1729–1736, 2009.
* Xia et al. (2015) Yingce Xia, Haifang Li, Tao Qin, Nenghai Yu, and Tie-Yan Liu. Thompson sampling for budgeted multi-armed bandits. In _Twenty-Fourth International Joint Conference on Artificial Intelligence_ , 2015.
* Xia et al. (2016) Yingce Xia, Wenkui Ding, Xu-Dong Zhang, Nenghai Yu, and Tao Qin. Budgeted bandit problems with continuous random costs. In _Asian conference on machine learning_ , pages 317–332, 2016.
* Zhu and Nowak (2020) Yinglun Zhu and Robert Nowak. On regret with multiple best arms. _Advances in Neural Information Processing Systems_ , 33, 2020.
## Appendix A Additional figures
This section provides illustrations that were not included in the main
article in order not to overload it.
### A.1 Failure of Greedy
Here, we illustrate Example 1.1, i.e., the failure of Greedy. We recall that
we consider a Bernoulli bandit problem consisting of $K=2$ arms with mean
rewards $0.9$ and $0.1$, respectively. In Figure 7, we compare the regret of
Greedy with that of the Thompson Sampling algorithm (Thompson, 1933).
Figure 7: Regret of various algorithms as a function of time in a Bernoulli
bandit problem. Results are averaged over $1000$ runs and the shaded area
represents $0.1$ standard deviations.
### A.2 Continuous functions studied
This section provides the plots of the functions studied in Subsection 7.1.
These functions, displayed in Figure 8, are recalled below for convenience.
$\displaystyle f_{1}:x\mapsto 0.5\sin(13x)\sin(27x)+0.5$
$\displaystyle f_{2}:x\mapsto\max\left(3.6x(1-x),\,1-|x-0.05|/0.05\right)$
$\displaystyle f_{3}:x\mapsto x(1-x)\left(4-\sqrt{|\sin(60x)|}\right)$
Figure 8: Functions $f_{1}$, $f_{2}$ and $f_{3}$ considered in the continuous-armed bandit experiments.
## Appendix B Proofs of Section 3
### B.1 Proof of Theorem 3.1
The proof combines two techniques standard in the literature: creating a
“good” event in order to distinguish the randomness of the distributions from
the behavior of the algorithm and decomposing the arms into near-optimal and
suboptimal ones. Fix some $\varepsilon>0$.
#### Good event
Define the event $\mathfrak{E}$, through its complement, by
$\mathfrak{E}^{c}=\bigcap_{k:\Delta_{k}\leq\varepsilon}\left\{\exists
t\,:\,\widehat{\mu}_{k}(t)\leq\mu_{k}-\varepsilon\right\}\,.$
In words, $\mathfrak{E}$ is the event that at least one $\varepsilon$-optimal
arm is never underestimated by more than $\varepsilon$ below its mean reward.
Using the independence of the events along with the concentration bound of
Bayati et al. (2020), see Lemma F.2, we obtain
$\mathbb{P}\left(\mathfrak{E}^{c}\right)\leq\exp\left(-N_{\varepsilon}\frac{\varepsilon^{2}}{2}\right)\,.$
(1)
#### Bound on the number of pulls of suboptimal arms
On the event $\mathfrak{E}$, let $k\in[K]$ be an arm such that
$\Delta_{k}>3\varepsilon$. With a slight abuse of notation, we denote by
$\widehat{\mu}_{k}^{t}$ the average reward of arm $k$ after $t$ samples. The
expected number of pulls of arm $k$ is then bounded by
$\displaystyle\mathbb{E}\left[N_{k}(T)\,|\,\mathfrak{E}\right]$
$\displaystyle\leq
1+\sum_{t=1}^{\infty}\mathbb{P}\left(\widehat{\mu}_{k}^{t}\geq\mu^{\star}-2\varepsilon\right)$
$\displaystyle=1+\sum_{t=1}^{\infty}\mathbb{P}\left(\widehat{\mu}_{k}^{t}-\mu_{k}\geq\Delta_{k}-2\varepsilon\right)$
$\displaystyle\leq
1+\sum_{t=1}^{\infty}\exp\left(-t\frac{(\Delta_{k}-2\varepsilon)^{2}}{2}\right)$
$\displaystyle=1+\frac{1}{\exp\left(\frac{(\Delta_{k}-2\varepsilon)^{2}}{2}\right)-1}$
$\displaystyle\leq 1+\frac{2}{(\Delta_{k}-2\varepsilon)^{2}}$ (2)
where in the second inequality we used Lemma F.1, since
$\widehat{\mu}_{k}^{t}$ is $1/t$-subgaussian, and in the last inequality we
used that $e^{x}\geq 1+x$ for all $x\in\mathbb{R}$.
#### Putting things together
We first decompose the regret according to the event $\mathfrak{E}$
$R_{T}\leq\mathbb{E}\left[R_{T}|\mathfrak{E}^{c}\right]\mathbb{P}\left(\mathfrak{E}^{c}\right)+\mathbb{E}\left[R_{T}|\mathfrak{E}\right]\,.$
(3)
As mean rewards are bounded in $[0,1]$, the regret on the bad event is bounded
by $T$ and by Equation (1) we have
$\mathbb{E}\left[R_{T}|\mathfrak{E}^{c}\right]\mathbb{P}\left(\mathfrak{E}^{c}\right)\leq
T\exp\left(-N_{\varepsilon}\frac{\varepsilon^{2}}{2}\right)\,.$
We further decompose the second term on the right-hand side of Equation (3),
$\mathbb{E}\left[R_{T}|\mathfrak{E}\right]\leq\sum_{k:\Delta_{k}\leq
3\varepsilon}\Delta_{k}\mathbb{E}\left[N_{k}(T)|\mathfrak{E}\right]+\sum_{k:\Delta_{k}>3\varepsilon}\Delta_{k}\mathbb{E}\left[N_{k}(T)|\mathfrak{E}\right]\,.$
The first term is trivially bounded by $3\varepsilon T$, while for the second
term we have by Equation (2),
$\displaystyle\sum_{k:\Delta_{k}>3\varepsilon}\Delta_{k}\mathbb{E}\left[N_{k}(T)|\mathfrak{E}\right]$
$\displaystyle\leq\sum_{k:\Delta_{k}>3\varepsilon}\frac{2\Delta_{k}}{(\Delta_{k}-2\varepsilon)^{2}}+\sum_{k=1}^{K}\Delta_{k}$
$\displaystyle\leq\sum_{k:\Delta_{k}>3\varepsilon}\frac{6}{(\Delta_{k}-2\varepsilon)}+\sum_{k=1}^{K}\Delta_{k}$
$\displaystyle\leq\sum_{k:\Delta_{k}>3\varepsilon}\frac{6}{\varepsilon}+\sum_{k=1}^{K}\Delta_{k}\leq\frac{6K}{\varepsilon}+\sum_{k=1}^{K}\Delta_{k}$
where in the second inequality we used that $\Delta_{k}\leq
3(\Delta_{k}-2\varepsilon)$, which holds true since $\Delta_{k}\geq
3\varepsilon$. Hence the result.
### B.2 Proof of Corollary 3.4
We recall the definition of the event $\mathfrak{E}_{\varepsilon}$, through
its complement $\mathfrak{E}^{c}_{\varepsilon}$,
$\mathfrak{E}^{c}_{\varepsilon}=\bigcap_{k:\Delta_{k}\leq\varepsilon}\left\{\exists
t\,:\,\widehat{\mu}_{k}(t)\leq\mu_{k}-\varepsilon\right\}\,.$
Consider any increasing sequence $\left\\{\varepsilon_{m}\right\\}_{m=0}^{M}$
and denote $\mathfrak{E}_{m}$ the good event associated with $\varepsilon_{m}$
for $m\in\\{0,\ldots,M\\}$. By the chain rule and the previous computation of
the regret on the good event (see proof of Theorem 3.1), we have
$\displaystyle R_{T}$
$\displaystyle\leq\left(3\varepsilon_{0}T+\frac{6K}{\varepsilon_{0}}\right)\mathbb{P}\left(\mathfrak{E}_{0}\right)+\left(3\varepsilon_{1}T+\frac{6K}{\varepsilon_{1}}\right)\mathbb{P}(\mathfrak{E}_{1}\cap\mathfrak{E}_{0}^{c})+\ldots$
$\displaystyle\quad+\left(3\varepsilon_{M}T+\frac{6K}{\varepsilon_{M}}\right)\mathbb{P}(\mathfrak{E}_{M}\cap\mathfrak{E}_{M-1}^{c})+T\mathbb{P}(\mathfrak{E}_{M}^{c})+\sum_{k=1}^{K}\Delta_{k}$
$\displaystyle\leq\left[\left(3\varepsilon_{0}T+\frac{6K}{\varepsilon_{0}}\right)-\left(3\varepsilon_{1}T+\frac{6K}{\varepsilon_{1}}\right)\right]\mathbb{P}\left(\mathfrak{E}_{0}\right)+\ldots$
$\displaystyle\quad+\left[\left(3\varepsilon_{M-1}T+\frac{6K}{\varepsilon_{M-1}}\right)-\left(3\varepsilon_{M}T+\frac{6K}{\varepsilon_{M}}\right)\right]\mathbb{P}\left(\mathfrak{E}_{M-1}\right)$
$\displaystyle\quad+\left(3\varepsilon_{M}T+\frac{6K}{\varepsilon_{M}}\right)\mathbb{P}\left(\mathfrak{E}_{M}\right)+T\mathbb{P}(\mathfrak{E}_{M}^{c})+\sum_{k=1}^{K}\Delta_{k}$
where in the second inequality we used that
$\mathbf{1}\\{\mathfrak{A}\cap\mathfrak{B}^{c}\\}=\mathbf{1}\\{\mathfrak{A}\\}-\mathbf{1}\\{\mathfrak{B}\\}$
if $\mathfrak{B}\subset\mathfrak{A}$. In the proof of Theorem 3.1, we show
that
$\mathbb{P}(\mathfrak{E}_{m}^{c})\leq\exp\left(-N_{\varepsilon_{m}}\frac{\varepsilon^{2}_{m}}{2}\right)$
for $m\in\\{0,\ldots,M\\}$. Hence we obtain
$\displaystyle R_{T}$
$\displaystyle\leq\left(3\varepsilon_{0}T+\frac{6K}{\varepsilon_{0}}\right)$
$\displaystyle\quad+\sum_{m=0}^{M-1}\left[\left(3\varepsilon_{m+1}T+\frac{6K}{\varepsilon_{m+1}}\right)-\left(3\varepsilon_{m}T+\frac{6K}{\varepsilon_{m}}\right)\right]\exp\left(-N_{\varepsilon_{m}}\frac{\varepsilon^{2}_{m}}{2}\right)$
$\displaystyle\quad+T\exp\left(-\frac{K}{2}\right)+\sum_{k=1}^{K}\Delta_{k}$
The middle term is upper-bounded by
$\sum_{m=0}^{M-1}(\varepsilon_{m+1}-\varepsilon_{m})\left[3T+\frac{6K}{\varepsilon^{2}_{m}}\right]\exp\left(-N_{\varepsilon_{m}}\frac{\varepsilon^{2}_{m}}{2}\right),$
which converges, as the mesh of the sequence $\varepsilon_{m}$ goes to zero,
towards
$\int_{\varepsilon}^{1}\left(3T+\frac{6K}{x^{2}}\right)\exp\left(-N_{x}\frac{x^{2}}{2}\right)dx\,.$
Hence the result.
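The passage from the Riemann sum to the integral can be checked numerically. The sketch below takes the illustrative choice $N_{x}=Kx$ (an assumption made purely for this check, not a quantity from the proof) and compares a coarse mesh against a much finer one:

```python
import math

T, K, eps = 1000, 50, 0.05

def integrand(x):
    # (3T + 6K/x^2) * exp(-N_x x^2 / 2) with the assumed N_x = K * x.
    return (3 * T + 6 * K / x ** 2) * math.exp(-K * x ** 3 / 2)

def riemann_sum(M):
    """Left Riemann sum of the integrand over [eps, 1] with M intervals."""
    h = (1 - eps) / M
    return sum(h * integrand(eps + m * h) for m in range(M))

coarse, fine = riemann_sum(2_000), riemann_sum(200_000)
# The two values agree as the mesh is refined.
```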
## Appendix C Proof of Theorem 4.3
Let $\varepsilon>0$. The regret can be decomposed into an approximation and an
estimation term,
$Tf(x^{\star})-\sum_{t=1}^{T}f(x_{t})=T\left(f(x^{\star})-\max_{k\in[K]}f\left(\frac{k}{K}\right)\right)+\left(T\max_{k\in[K]}f\left(\frac{k}{K}\right)-\sum_{t=1}^{T}f(x_{t})\right)\,.$
By Assumption 4.1, the first term is bounded by $\varepsilon T$ when
$K\geq\left(\frac{L}{\varepsilon}\right)^{1/\alpha}$. Then, according to
Theorem 3.1, we only have to lower bound $N_{\varepsilon}$ to conclude. To do
so, we prove a lower bound on the number of arms that are
$\varepsilon$-optimal with respect to the best arm overall. Let
$N_{\varepsilon}^{C}$ denote this quantity.
#### Bound on $N_{\varepsilon}^{C}$
By Assumption 4.1, an arm $k$ is $\varepsilon$-optimal whenever it satisfies
(there can also be $\varepsilon$-optimal arms that are not around the maximum)
$L\left|x^{\star}-k/K\right|^{\alpha}\leq\varepsilon\,.$
Since $k$ is an integer, we obtain
$\left\lceil
K\left(x^{\star}-\left(\frac{\varepsilon}{L}\right)^{1/\alpha}\right)\right\rceil\leq
k\leq\left\lfloor
K\left(x^{\star}+\left(\frac{\varepsilon}{L}\right)^{1/\alpha}\right)\right\rfloor\,.$
This means that we have the following lower bound on $N_{\varepsilon}^{C}$
$N_{\varepsilon}^{C}\geq\left\lfloor
K\left(x^{\star}+\left(\frac{\varepsilon}{L}\right)^{1/\alpha}\right)\right\rfloor-\left\lceil
K\left(x^{\star}-\left(\frac{\varepsilon}{L}\right)^{1/\alpha}\right)\right\rceil+1\,.$
Thanks to Lemma F.4, we obtain
$N_{\varepsilon}^{C}\geq\left\lfloor
2K\left(\varepsilon/L\right)^{1/\alpha}\right\rfloor\,.$
Finally, using that $\left\lfloor 2x\right\rfloor\geq x$ for $x\geq 1$ (easily
verified using the assumption on $K$), we obtain the following lower bound
$N_{\varepsilon}^{C}\geq K\left(\varepsilon/L\right)^{1/\alpha}\,.$
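The floor/ceiling manipulation above is easy to stress-test. Writing $r=(\varepsilon/L)^{1/\alpha}$ for brevity, the sketch below checks on random instances that the interval-counting bound $\left\lfloor K(x^{\star}+r)\right\rfloor-\left\lceil K(x^{\star}-r)\right\rceil+1\geq\left\lfloor 2Kr\right\rfloor$ holds (the sampling ranges are arbitrary):

```python
import math
import random

def count_lower_bound_holds(K, x_star, r):
    """Number of integers in [K(x*-r), K(x*+r)] versus the bound floor(2Kr)."""
    count = math.floor(K * (x_star + r)) - math.ceil(K * (x_star - r)) + 1
    return count >= math.floor(2 * K * r)

rng = random.Random(0)
ok = all(
    count_lower_bound_holds(rng.randrange(10, 500), rng.random(), rng.random() * 0.3)
    for _ in range(10_000)
)
```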
#### Conclusion
We trivially have that $N_{\varepsilon}\geq N_{\varepsilon}^{C}$. The first
part of the Theorem then results from the fact that
$\sum_{k=1}^{K}\Delta_{k}\leq K$ since $\mu_{k}\in[0,1]$ for all $k\in[K]$. On
the other hand, the second part comes from taking $\varepsilon^{2}=3K/(2T)$
which is the value of $\varepsilon$ that minimizes the term $4\varepsilon
T+6K/\varepsilon$.
## Appendix D Proof of Theorem 5.2
Let $\varepsilon>0$. Once again, thanks to Theorem 3.1 we just have to bound
$N_{\varepsilon}$ and the result will follow by adding the approximation cost
$\varepsilon T$. We construct a good event on the expected rewards of sampled
arms. Let $I_{\varepsilon}=[\mu^{\star}-\varepsilon,\mu^{\star}]$ and
$N_{\varepsilon}^{I}=\sum_{k=1}^{K}\mathbf{1}\\{\mu_{k}\in I_{\varepsilon}\\}$ be
the number of $\varepsilon$-optimal arms with respect to all arms. Assumption
5.1 implies that
$p=\mathbb{E}\left[\mathbf{1}\\{\mu_{k}\in I_{\varepsilon}\\}\right]=\mathbb{P}(\mu_{k}\in
I_{\varepsilon})\in[c_{1}\varepsilon^{\beta},c_{2}\varepsilon^{\beta}]\,.$
Let $\delta\in[0,1)$. By Chernoff inequality we have
$\mathbb{P}\left(N_{\varepsilon}^{I}<(1-\delta)Kp\right)\leq\exp\left(-Kp\delta^{2}/2\right)\,.$
In particular, taking $\delta=\frac{1}{2}$ yields
$\mathbb{P}\left(N_{\varepsilon}^{I}<c_{1}\varepsilon^{\beta}K/2\right)\leq\exp\left(-c_{1}\varepsilon^{\beta}K/8\right)\,.$
Now we trivially have that $N_{\varepsilon}\geq N_{\varepsilon}^{I}$, and
hence we obtain
$\mathbb{P}\left(N_{\varepsilon}<c_{1}\varepsilon^{\beta}K/2\right)\leq\exp\left(-c_{1}\varepsilon^{\beta}K/8\right)\,.$
By constructing a good event based on the previous concentration bound and
using that $\sum_{k=1}^{K}\Delta_{k}\leq K$, we obtain the first part of the
Theorem. The second part results from (i) the fact that the first exponential
term dominates, since $\varepsilon^{2+\beta}\leq\varepsilon^{\beta}$ for all
$\varepsilon\in[0,1]$ and $\beta>0$, and (ii) the choice of
$\varepsilon=\sqrt{3K/(2T)}$, which is the value that minimizes $4\varepsilon
T+6K/\varepsilon$.
## Appendix E Proof of Theorem 6.2
Once again, we just need a lower bound on the number of optimal arms in the
subsampling and we construct a good event to do so. We reuse the previous
notation $N_{\varepsilon}$ to denote this value ($\varepsilon=0$ here). Let
$N_{\varepsilon}^{S}$ be the number of optimal arms with respect to all arms.
In the case of a subsampling of $K$ arms done without replacement,
$N_{\varepsilon}^{S}$ is distributed according to a hypergeometric
distribution. By Hoeffding’s inequality, see Lemma F.3, we have for $0<t<pK$,
$\mathbb{P}\left(N_{\varepsilon}^{S}\leq(p-t)K\right)\leq\exp\left(-2t^{2}K\right)$
where $p=m/n$. We want to choose $t$ such that $p-t>0$; otherwise the bound is
meaningless. In particular, the choice $t=p/2$ yields
$\mathbb{P}\left(N_{\varepsilon}^{S}\leq
pK/2\right)\leq\exp\left(-p^{2}K/2\right)\,.$
We then trivially have that $N_{\varepsilon}\geq N_{\varepsilon}^{S}$. The
regret on the bad events is then given by
$T\left[\exp\left(-pK\varepsilon^{2}/4\right)+\exp\left(-p^{2}K/2\right)\right]$
For this regret to be $\mathcal{O}(1)$, the following two inequalities must
hold:
$\displaystyle pK\varepsilon^{2}/4$ $\displaystyle\geq\log T$ $\displaystyle
p^{2}K/2$ $\displaystyle\geq\log T$
Now the term $3\varepsilon T+\frac{6K}{\varepsilon}$ of Theorem 3.1 is
minimized for $\varepsilon^{2}=2K/T$. This leads to
$\displaystyle pK^{2}/2$ $\displaystyle\geq T\log T$ $\displaystyle p^{2}K/2$
$\displaystyle\geq\log T$
Using that $p=T^{-\gamma}$, we obtain
$K\geq 2\max\left\\{\sqrt{T^{1+\gamma}},T^{2\gamma}\sqrt{\log
T}\right\\}\sqrt{\log T}\,.$
The proof is concluded by decomposing according to the value inside the max
term.
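The hypergeometric concentration step in this proof can be checked by simulation: with $p=m/n$, the empirical frequency of the bad event $\{N_{\varepsilon}^{S}\leq pK/2\}$ should fall below the Hoeffding bound $\exp(-p^{2}K/2)$. The instance below is illustrative:

```python
import math
import random

def bad_event_frequency(n, m, K, trials, rng):
    """Empirical P(N <= pK/2), N = number of good arms in a subsample of size K."""
    population = [1] * m + [0] * (n - m)
    p = m / n
    bad = sum(sum(rng.sample(population, K)) <= p * K / 2 for _ in range(trials))
    return bad / trials

rng = random.Random(0)
n, m, K = 1000, 100, 200                   # p = 0.1, so E[N] = pK = 20
freq = bad_event_frequency(n, m, K, 2000, rng)
bound = math.exp(-((m / n) ** 2) * K / 2)  # exp(-p^2 K / 2) = exp(-1)
```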
## Appendix F Useful Results
In this section, for the sake of completeness, we provide previous results
used in our analysis together with a small lemma.
###### Lemma F.1 (Corollary 5.5 of Lattimore and Szepesvári (2020)).
Let $X_{1},\ldots,X_{n}$ be $n$ independent $\sigma^{2}$-subgaussian random
variables. Then for any $\varepsilon\geq 0$, it holds that
$\mathbb{P}\left(\overline{X}\geq\varepsilon\right)\leq\exp\left(-\frac{n\varepsilon^{2}}{2\sigma^{2}}\right)$
where $\overline{X}=\frac{1}{n}\sum_{i=1}^{n}X_{i}$.
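For Gaussian variables the tail of the average is available in closed form via the complementary error function, so Lemma F.1 can be verified exactly in that special case (the parameter grid is illustrative):

```python
import math

def gaussian_avg_tail(n, eps, sigma=1.0):
    """Exact P(average of n i.i.d. N(0, sigma^2) variables >= eps)."""
    return 0.5 * math.erfc(eps * math.sqrt(n / 2.0) / sigma)

def subgaussian_bound(n, eps, sigma=1.0):
    """The upper bound of Lemma F.1: exp(-n eps^2 / (2 sigma^2))."""
    return math.exp(-n * eps * eps / (2 * sigma * sigma))

checks = [(10, 0.1), (100, 0.3), (1000, 0.05), (50, 1.0)]
all_below = all(gaussian_avg_tail(n, e) <= subgaussian_bound(n, e) for n, e in checks)
```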
###### Lemma F.2 (Lemma 2 of Bayati et al. (2020)).
Let $Q$ be a distribution with mean $\mu$ such that $Q-\mu$ is 1-subgaussian.
Let $\\{X_{i}\\}_{i=1}^{n}$ be i.i.d. samples from distribution $Q$,
$S_{n}=\sum_{i=1}^{n}X_{i}$ and $M_{n}=S_{n}/n$. Then for any $\delta>0$, we
have
$\mathbb{P}\left(\exists
n:M_{n}<\mu-\delta\right)\leq\exp\left(-\delta^{2}/2\right)\,.$
###### Lemma F.3 (Hoeffding’s inequality).
Let $X_{1},\ldots,X_{n}$ be independent bounded random variables supported in
$[0,1]$. For all $t\geq 0$, we have
$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\left(X_{i}-\mathbb{E}[X_{i}]\right)\geq
t\right)\leq\exp\left(-2nt^{2}\right)$
and
$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\left(X_{i}-\mathbb{E}[X_{i}]\right)\leq-t\right)\leq\exp\left(-2nt^{2}\right)\,.$
###### Lemma F.4.
Let a and b be two real numbers. Then the following holds true
$\left\lfloor a+b\right\rfloor-\left\lceil a-b\right\rceil\geq\left\lfloor
2b\right\rfloor-1\,.$
###### Proof F.5.
We have
$\displaystyle\left\lfloor a+b\right\rfloor-\left\lceil a-b\right\rceil$
$\displaystyle=\left\lfloor a+b\right\rfloor+\left\lfloor b-a\right\rfloor$
$\displaystyle\geq\left\lfloor a+b+b-a\right\rfloor-1$
$\displaystyle=\left\lfloor 2b\right\rfloor-1$
where we used respectively that, $\left\lceil
x\right\rceil=-\left\lfloor-x\right\rfloor$ and $\left\lfloor
x+y\right\rfloor\leq\left\lfloor x\right\rfloor+\left\lfloor
y\right\rfloor+1$.
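Since Lemma F.4 is a purely arithmetic statement, it can be sanity-checked numerically. The following sketch (ours, not part of the paper) verifies the inequality on random and integer inputs:

```python
import math
import random

def lemma_gap(a: float, b: float) -> int:
    """LHS minus RHS of Lemma F.4: floor(a+b) - ceil(a-b) - (floor(2b) - 1)."""
    lhs = math.floor(a + b) - math.ceil(a - b)
    rhs = math.floor(2 * b) - 1
    return lhs - rhs

# The lemma asserts lemma_gap(a, b) >= 0 for all real a, b.
random.seed(0)
for _ in range(10_000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    assert lemma_gap(a, b) >= 0, (a, b)

# Integer inputs: lhs = 2b and rhs = 2b - 1, so the gap is exactly 1.
for a in range(-3, 4):
    for b in range(-3, 4):
        assert lemma_gap(float(a), float(b)) == 1
```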
## Appendix G Further experiments
In this section, we evaluate the standard Greedy algorithm, which considers
all arms, in several bandit models, to highlight once again that its
performance is competitive with the state of the art in some cases.
### G.1 Linear bandits
In the linear bandit model, for each round $t$, the learner is given the
decision set $\mathcal{A}_{t}\subset\mathbb{R}^{d}$, from which she chooses an
action $A_{t}\in\mathcal{A}_{t}$ and receives reward
$X_{t}=\langle\theta_{\star},A_{t}\rangle+\eta_{t}$, where
$\theta_{\star}\in\mathbb{R}^{d}$ is an unknown parameter vector and
$\eta_{t}$ is some i.i.d. white noise, usually assumed to be 1-subgaussian. In
this model, the Greedy algorithm consists of two phases: first, it computes
the regularized least-squares estimator of $\theta$; then, it plays the arm in
the action set that maximizes the inner product with this estimator.
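The two phases can be sketched as follows; this is our minimal, plain-Python illustration (finite action sets, explicit normal-equation solve), not the code used in the experiments:

```python
def solve(V, b):
    """Solve V x = b by Gauss-Jordan elimination (V small and well conditioned)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(V)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

class GreedyLinear:
    """Greedy for linear bandits: ridge estimate of theta, then the argmax arm."""

    def __init__(self, d, lam=1.0):
        # Regularized Gram matrix V = lam * I and the vector b = sum_t X_t A_t.
        self.V = [[lam if i == j else 0.0 for j in range(d)] for i in range(d)]
        self.b = [0.0] * d

    def choose(self, actions):
        theta_hat = solve(self.V, self.b)  # regularized least-squares estimate
        scores = [sum(t * a for t, a in zip(theta_hat, act)) for act in actions]
        return max(range(len(actions)), key=scores.__getitem__)

    def update(self, a, x):
        d = len(a)
        for i in range(d):
            for j in range(d):
                self.V[i][j] += a[i] * a[j]
            self.b[i] += x * a[i]

# One step of the loop: observe a reward, re-estimate theta, pick the best arm.
alg = GreedyLinear(d=2, lam=1.0)
alg.update([1.0, 0.0], 1.0)                      # reward 1 observed on arm e1
best = alg.choose([[1.0, 0.0], [0.0, 1.0]])      # best == 0: e1 looks best
```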
Here we consider a problem whose dimension is large relative to the time
horizon. Precisely, we fix $d=50$, a time horizon $T=2500$, and standard
Gaussian noise. The set of arms is the unit ball and the parameter $\theta$ is
randomly generated on the unit sphere. We compare Greedy with LinUCB
(Abbasi-Yadkori et al., 2011) and BallExplore (Deshpande and Montanari, 2012),
an algorithm specifically designed for such a setting. The regularization term
$\lambda$ is set to 1 for Greedy and LinUCB, the confidence term
$\delta=\frac{1}{T}$ for LinUCB and the parameter $\Delta=d$ for BallExplore.
Results, displayed in Figure 9, are averaged over 50 iterations. Shaded areas
represent twice the standard deviation for each algorithm.
Figure 9: Bayesian regret of various algorithms as a function of time in a
linear bandit problem.
We see that Greedy outperforms both LinUCB and BallExplore; in particular, the
regret of Greedy is sublinear. Another point that we have not emphasized so
far is computational complexity. Until now, the differences in computation
time were rather insignificant. This is no longer the case for algorithms
designed for linear bandits, as they must solve an optimization problem at
each round. For example, in this simulation, the running time on a single-core
processor is 70 seconds for Greedy, 678 seconds for LinUCB and 1031 seconds
for BallExplore. In words, Greedy is roughly ten times faster than LinUCB and
fifteen times faster than BallExplore.
### G.2 Cascading bandits
We now consider a special, but popular, case of stochastic combinatorial
optimization under semi-bandit feedback called the cascading bandit problem.
Formally, there are $L\in\mathbb{N}$ ground items and at each round $t$, the
agent recommends a list $\left(a_{1}^{t},\dots,a_{K}^{t}\right)$ of $K\leq L$
items to the user. The user examines the list, from the first item to the
last, and clicks on the first attractive item, if any. A weight $w(l)\in[0,1]$
is associated to each item $l\in[L]$, which denotes the click probability of
the item. The reward of the agent at round $t$ is given by
$1-\prod_{k=1}^{K}\left(1-w_{t}(a_{k}^{t})\right)\in\\{0,1\\}$ and she receives
feedback for each $k\in[K]$ such that $k\leq c_{t}=\min\left\\{1\leq k\leq
K:w_{t}(a_{k}^{t})=1\right\\}$ where
$w_{t}(a_{k}^{t})\sim\text{Bernoulli}(w(a_{k}^{t}))$ and we assume that the
minimum over an empty set is $\infty$. In this setting, the Greedy algorithm
outputs a list consisting of the $K$ best empirical arms. The goal of these
experiments is to study in which regimes, as a function of $L$ and $K$, the
Greedy algorithm might be preferable to the state of the art.
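A minimal sketch of this Greedy rule under cascading feedback might look as follows (our illustration; the class name `GreedyCascade` and the optimistic initialization for unexamined items are our choices, not the paper's):

```python
class GreedyCascade:
    """Greedy for cascading bandits: recommend the K empirically best items."""

    def __init__(self, L: int, K: int):
        self.L, self.K = L, K
        self.clicks = [0] * L  # observed clicks per item
        self.views = [0] * L   # times each item was examined by the user

    def recommend(self):
        # Unexamined items get an optimistic mean of 1.0 so each is tried once.
        means = [self.clicks[i] / self.views[i] if self.views[i] else 1.0
                 for i in range(self.L)]
        order = sorted(range(self.L), key=means.__getitem__, reverse=True)
        return order[: self.K]

    def update(self, items, click_pos):
        """click_pos: position of the first clicked item in `items`, or None.

        Feedback is observed only down to (and including) the clicked item."""
        last = click_pos if click_pos is not None else len(items) - 1
        for k in range(last + 1):
            self.views[items[k]] += 1
            if k == click_pos:
                self.clicks[items[k]] += 1
```

Note that, as in the model, items placed below the first click never have their counters updated, which is exactly why conservative algorithms gather feedback slowly when $K$ is large.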
We reproduce the experiments of Kveton et al. (2015) in the Bayesian setting.
We compare Greedy with CascadeKL-UCB (Kveton et al., 2015) and TS-Cascade
(Cheung et al., 2019). Greedy and CascadeKL-UCB share the same initialization,
which is to select each item once as the first item on the list. For each
algorithm, the list is ordered from the largest index to the smallest one. We
consider two scenarios: in the first one, the prior on the mean rewards is a
uniform distribution over $[0,1]$, while in the second scenario we consider a
more realistic Beta(1, 3) distribution, so that most arms have low mean
rewards. The time horizon is set at $T=10000$. The regret and standard
deviation of each algorithm, averaged over 100 iterations, are reported in
Tables 1 and 2 for different values of $L$ and $K$.
Table 1: Bayesian regret of various algorithms in cascading bandit problems with a uniform prior.

L | K | Greedy | CascadeKL-UCB | TS-Cascade
---|---|---|---|---
16 | 2 | 176.1 $\pm$ 26.4 | 48.1 $\pm$ 2.7 | 109.7 $\pm$ 1.8
16 | 4 | 10.2 $\pm$ 1.9 | 9.9 $\pm$ 1.0 | 28.4 $\pm$ 0.9
16 | 8 | 0.7 $\pm$ 0.2 | 0.7 $\pm$ 0.1 | 3.6 $\pm$ 0.3
32 | 2 | 166.1 $\pm$ 22.8 | 58.7 $\pm$ 3.5 | 178.7 $\pm$ 2.5
32 | 4 | 6.7 $\pm$ 0.9 | 10.1 $\pm$ 0.8 | 47.0 $\pm$ 1.0
32 | 8 | 0.2 $\pm$ 0.03 | 0.7 $\pm$ 0.08 | 8.3 $\pm$ 0.4
64 | 2 | 135.5 $\pm$ 15.6 | 76.6 $\pm$ 3.7 | 288.6 $\pm$ 2.6
64 | 4 | 6.5 $\pm$ 0.5 | 12.5 $\pm$ 0.6 | 80.3 $\pm$ 1.3
64 | 8 | 0.3 $\pm$ 0.02 | 0.9 $\pm$ 0.07 | 16.6 $\pm$ 0.5
128 | 2 | 133.1 $\pm$ 12.4 | 107.4 $\pm$ 4.8 | 442.6 $\pm$ 3.4
128 | 4 | 9.4 $\pm$ 0.3 | 18.0 $\pm$ 0.8 | 127.4 $\pm$ 1.5
128 | 8 | 0.5 $\pm$ 0.02 | 1.5 $\pm$ 0.1 | 27.9 $\pm$ 0.6
256 | 2 | 137.2 $\pm$ 10.6 | 151.0 $\pm$ 5.6 | 605.7 $\pm$ 3.1
256 | 4 | 16.6 $\pm$ 0.2 | 26.9 $\pm$ 1.0 | 179.5 $\pm$ 1.4
256 | 8 | 1.0 $\pm$ 0.03 | 1.8 $\pm$ 0.1 | 39.9 $\pm$ 0.5
Table 2: Bayesian regret of various algorithms in cascading bandit problems with a Beta(1, 3) prior.

L | K | Greedy | CascadeKL-UCB | TS-Cascade
---|---|---|---|---
16 | 2 | 590.4 $\pm$ 83.5 | 207.9 $\pm$ 5.2 | 199.5 $\pm$ 3.6
16 | 4 | 304.8 $\pm$ 35.7 | 116.4 $\pm$ 4.2 | 103.2 $\pm$ 2.9
16 | 8 | 97.9 $\pm$ 11.7 | 39.6 $\pm$ 2.1 | 34.4 $\pm$ 1.6
32 | 2 | 433.1 $\pm$ 49.1 | 330.7 $\pm$ 8.3 | 333.7 $\pm$ 3.8
32 | 4 | 192.2 $\pm$ 23.1 | 166.2 $\pm$ 6.0 | 163.3 $\pm$ 3.7
32 | 8 | 38.7 $\pm$ 5.3 | 50.1 $\pm$ 2.9 | 54.6 $\pm$ 1.9
64 | 2 | 576.2 $\pm$ 55.8 | 485.8 $\pm$ 11.2 | 540.1 $\pm$ 4.8
64 | 4 | 144.2 $\pm$ 12.3 | 207.5 $\pm$ 6.8 | 246.1 $\pm$ 4.1
64 | 8 | 20.3 $\pm$ 1.8 | 49.2 $\pm$ 2.2 | 76.4 $\pm$ 1.6
128 | 2 | 575.2 $\pm$ 40.1 | 710.9 $\pm$ 16.3 | 843.4 $\pm$ 4.7
128 | 4 | 100.8 $\pm$ 5.5 | 270.6 $\pm$ 7.4 | 372.9 $\pm$ 3.7
128 | 8 | 18.0 $\pm$ 0.6 | 60.7 $\pm$ 2.0 | 115.7 $\pm$ 1.4
256 | 2 | 522.5 $\pm$ 32.4 | 1068.3 $\pm$ 26.1 | 1235.1 $\pm$ 6.3
256 | 4 | 125.1 $\pm$ 3.8 | 380.0 $\pm$ 10.3 | 551.1 $\pm$ 3.85
256 | 8 | 27.3 $\pm$ 0.4 | 86.4 $\pm$ 2.6 | 174.8 $\pm$ 1.5
As expected in the Bayesian setting, Greedy outperforms the state of the art
when the number of arms $L$ is large. Even more interesting is that, as the
number of recommended items $K$ gets larger, the regret of Greedy decreases at
a faster rate than that of the other algorithms. Our intuition is that the
conservatism of standard bandit algorithms is amplified as $K$ increases, and
this is further exacerbated by the cascade model, where items at the bottom of
the list may not receive feedback. On the contrary, the Greedy algorithm
quickly converges to a solution that depends only on the past individual
performances of arms. In addition, the contrast between the performance of
Greedy and the state of the art is even more striking in the second scenario.
This is not particularly surprising, as the Beta(1, 3) distribution gives rise
to harder problems for the considered time horizon.
### G.3 Mortal bandits
We now consider the mortal bandit problem where arms die and new ones appear
regularly (in particular, an arm is not always available contrary to the
standard model). In this setting, the Greedy algorithm pulls the best
empirical arm available. As previous work considered a large number of arms,
state-of-the-art algorithms in this setting, e.g. AdaptiveGreedy (Chakrabarti
et al., 2009), emphasis an hidden subsampling of arms due to their
initialization. They further required a careful (manual) tuning of their
parameter for optimal performance. Consequently, we compare Greedy to a
standard bandit algorithm extended to this model and we consider a small
number of arms. Similarly to the last setting, the goal is to observe in which
regimes, as a function of the mean lifetime of arms, Greedy might be
preferable.
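The protocol of this setting (arms with geometric lifetimes, immediate replacement on death, Greedy pulling the best available empirical arm) can be sketched as follows; this is a scaled-down simulation of ours, not the authors' code:

```python
import math
import random

def simulate_mortal_greedy(K=10, mean_life=200, seed=0):
    """Greedy in a mortal Bernoulli bandit: pull the alive arm with the best
    empirical mean; a dead arm is immediately replaced by a fresh one."""
    rng = random.Random(seed)
    horizon = 10 * mean_life  # horizon set at 10x the mean lifetime, as in G.3

    def new_arm():
        # Mean reward ~ Uniform[0, 1]; lifetime ~ Geometric(1 / mean_life),
        # sampled via inverse transform.
        u = 1.0 - rng.random()  # in (0, 1]
        life = 1 + int(math.log(u) / math.log(1.0 - 1.0 / mean_life))
        return {"mu": rng.random(), "life": life, "pulls": 0, "wins": 0}

    def emp(arm):
        # Empirical mean; unseen arms are tried first.
        return arm["wins"] / arm["pulls"] if arm["pulls"] else float("inf")

    arms = [new_arm() for _ in range(K)]
    total = 0.0
    for _ in range(horizon):
        a = max(arms, key=emp)
        x = 1.0 if rng.random() < a["mu"] else 0.0
        a["pulls"] += 1
        a["wins"] += x
        total += x
        for i, arm in enumerate(arms):  # every available arm ages each round
            arm["life"] -= 1
            if arm["life"] <= 0:
                arms[i] = new_arm()
    return total
```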
We repeat the experiments of Chakrabarti et al. (2009) with $K=100$ arms. The
number of arms remains fixed throughout the time horizon $T$; that is, when an
arm dies, it is immediately replaced by another one. The time horizon $T$ is
set at 10 times the mean lifetime of the arms. The lifetime of arm $k$,
denoted $L_{k}$, is drawn i.i.d. from a geometric distribution with mean
lifetime $L$; this arm dies after being available for $L_{k}$ rounds. We
consider logarithmically spaced values of mean lifetimes. We also assume that
arms are Bernoulli random variables. We consider two scenarios: in the first
one, mean rewards of arms are drawn i.i.d. from a uniform distribution over
$[0,1]$, while in the second scenario they are drawn from a Beta(1, 3)
distribution. We compare the Greedy algorithm with Thompson Sampling (Agrawal
and Goyal, 2012). Results are averaged over 100 iterations and are reported in
Figure 10. Shaded areas represent half a standard deviation for each
algorithm.
Figure 10: Bayesian regret of various algorithms as a function of the expected
lifetime of arms in mortal bandit problems (left: uniform prior; right:
Beta(1, 3) prior).
As expected, Greedy outperforms Thompson Sampling for intermediate expected
lifetimes, and vice versa for long lifetimes. For short lifetimes, as we
previously saw, a subsampling of arms could considerably improve the
performance of both algorithms.
### G.4 Budgeted bandits
We now consider the budgeted bandit problem. In this model, the pull of arm
$k$ at round $t$ entails a random cost $c_{k}(t)$. Moreover, the learner has a
budget $B$, which is a known parameter, that will constrain the total number
of pulls. In this setting, the index of an arm in the Greedy algorithm is the
average reward divided by the average cost. Like before, the objective is to
evaluate in which regimes with respect to the budget $B$, Greedy might be
preferable to a state-of-the-art algorithm.
We reproduce the experiments of Xia et al. (2016). Specifically, we study two
scenarios with $K=100$ arms in each. The first scenario considers discrete
costs: both the reward and the cost are sampled from Bernoulli distributions
with parameters randomly sampled from $(0,1)$. The second scenario considers
continuous costs: the reward and cost of an arm are sampled from two different
Beta distributions, the two parameters of each distribution being uniformly
sampled from $[1,5]$. The budget is chosen from the set
$\\{100,500,1000,5000,10000\\}$. We compare Greedy to Budget-UCB (Xia et al.,
2016) and BTS (Xia et al., 2015). The results of the simulations, displayed in
Figure 11, are averaged over 500 runs. Shaded areas represent half a standard
deviation for each algorithm.
Figure 11: Regret of various algorithms as a function of the budget in
budgeted bandit problems (left: discrete costs; right: continuous costs).
Interestingly, in this setting the interval of budgets for which Greedy
outperforms the baseline algorithms is extremely small for discrete costs and
large for continuous costs. In the latter case, even for large budgets Greedy
has a lower expected regret than BTS. Nonetheless, it suffers from a huge
variance, which makes its use risky in practice.
arXiv:2101.01090
# A higher dimensional Hilbert irreducibility theorem
Giulio Bresciani Freie Universität Berlin, Arnimallee 3, 14195, Berlin,
Germany [email protected]
###### Abstract.
Assuming the weak Bombieri-Lang conjecture, we prove that a generalization of
Hilbert’s irreducibility theorem holds for families of geometrically mordellic
varieties (for instance, families of hyperbolic curves). As an application we
prove that, assuming Bombieri-Lang, there are no polynomial bijections
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$.
The author is supported by the DFG Priority Program "Homotopy Theory and
Algebraic Geometry" SPP 1786
###### Contents
1. 1 Introduction
2. 2 Pulling families to maximal Kodaira dimension
3. 3 Higher dimensional HIT
4. 4 Polynomial bijections $\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$
## 1\. Introduction
Serre reformulated Hilbert’s irreducibility theorem as follows [Ser97, Chapter
9].
###### Theorem (Hilbert’s irreducibility, Serre’s form).
Let $k$ be finitely generated over $\mathbb{Q}$, and let
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a morphism with $X$
a scheme of finite type over $k$. Suppose that the generic fiber is finite,
and that there are no generic sections
$\operatorname{Spec}k(\mathbb{P}^{1})\to X$. Then $X(k)\to\mathbb{P}^{1}(k)$
is not surjective.
Recall that the weak Bombieri-Lang conjecture states that, if $X$ is a
positive dimensional variety of general type over a field $k$ finitely
generated over $\mathbb{Q}$, then $X(k)$ is not dense in $X$.
A variety $X$ over a field $k$ is _geometrically mordellic_ , or _GeM_ , if
every subvariety of $X_{\bar{k}}$ is of general type. More generally, a scheme
$X$ of finite type over $k$ is geometrically mordellic, or GeM, if every
subvariety of $X_{\bar{k}}$ is of general type. If the weak Bombieri-Lang
conjecture holds and $k$ is a field finitely generated over $\mathbb{Q}$, then
the set of rational points of a GeM scheme over $k$ is finite, since its
Zariski closure cannot have positive dimension.
Assuming Bombieri-Lang, we prove that Hilbert’s irreducibility theorem
generalizes to morphisms whose generic fiber is GeM.
###### Theorem A.
Let $k$ be finitely generated over $\mathbb{Q}$, and let
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a morphism with $X$
a scheme of finite type over $k$. Suppose that the generic fiber is GeM, and
that there are no generic sections $\operatorname{Spec}k(\mathbb{P}^{1})\to
X$.
Assume either that the weak Bombieri-Lang conjecture holds in every dimension,
or that it holds up to dimension equal to $\dim X$ and that there exists an
$N$ such that $|X_{v}(k)|\leq N$ for every rational point
$v\in\mathbb{P}^{1}(k)$. Then $X(k)\to\mathbb{P}^{1}(k)$ is not surjective.
There is a version of Hilbert’s irreducibility theorem over non-rational
curves, and the same is true for the higher dimensional generalization.
###### Theorem B.
Assume that the weak Bombieri-Lang conjecture holds in every dimension. Let
$k$ be finitely generated over $\mathbb{Q}$, and let
$f\mathrel{\mathop{\ordinarycolon}}X\to C$ be a morphism with $X$ any scheme
of finite type over $k$ and $C$ a geometrically connected curve. Assume that
the generic fiber is GeM, and that there are no generic sections
$\operatorname{Spec}k(C)\to X$. Then $X(h)\to C(h)$ is not surjective for some
finite extension $h/k$.
As an application of Theorem A we give an answer, conditional on the weak
Bombieri-Lang conjecture, to a long-standing MathOverflow question [Mat19]
which asks whether there exists a polynomial bijection
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$.
###### Theorem C.
Assume that the weak Bombieri-Lang conjecture for surfaces holds, and let $k$
be a field finitely generated over $\mathbb{Q}$. There are no polynomial
bijections $k\times k\to k$.
We remark that B. Poonen has proved that, assuming the weak Bombieri-Lang
conjecture for surfaces, there are polynomials giving _injective_ maps
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$, see [Poo10].
In 2019, T. Tao suggested on his blog [Tao19] a strategy to try to solve the
problem of polynomial bijections $\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$
conditional on Bombieri-Lang; let us summarize it. Given a morphism
$\mathbb{A}^{2}\to\mathbb{A}^{1}$ and a cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{A}^{1}\dashrightarrow\mathbb{A}^{1}$,
denote by $P_{c}$ the pullback of $\mathbb{A}^{2}$. If $P_{c}$ is of general
type, by Bombieri-Lang $P_{c}(\mathbb{Q})$ is not dense in $P_{c}$ and hence
by Hilbert irreducibility a generic section $\mathbb{A}^{1}\dashrightarrow
P_{c}$ exists. If $P_{c}$ is of general type for "many" covers $c$, one might
expect this to force the existence of a generic section
$\mathbb{A}^{1}\dashrightarrow\mathbb{A}^{2}$, which would contradict the
bijectivity of $\mathbb{A}^{2}(\mathbb{Q})\to\mathbb{A}^{1}(\mathbb{Q})$.
The strategy had some gaps, though. There were no results showing that the
pullback $P_{c}$ is of general type for "many" covers $c$, and it was not
clear how this would force a generic section of
$\mathbb{A}^{2}\to\mathbb{A}^{1}$. Tao started a so-called "polymath project"
in order to crowdsource a formalization. The project was active for roughly
one week in the comments section of the blog but didn’t reach a conclusion.
Partial progress was made; we cite the two most important contributions. W.
Sawin showed that $\mathbb{A}^{2}(\mathbb{Q})\to\mathbb{A}^{1}(\mathbb{Q})$
cannot be bijective if the generic fiber has genus $0$ or $1$. H. Pasten showed
that, for some morphisms $\mathbb{A}^{2}\to\mathbb{A}^{1}$ with generic fiber
of genus at least $2$, the base change of $\mathbb{A}^{2}$ along the cover
$z^{2}-b\mathrel{\mathop{\ordinarycolon}}\mathbb{A}^{1}\to\mathbb{A}^{1}$ is
of general type for a generic $b$.
Theorem A is far more general than Theorem C, but it is possible to extract
from the proof of the former the minimal arguments needed in order to prove
the latter. These minimal arguments are a formalization of the ideas described
above; hence, as far as Theorem C is concerned, we have essentially filled in
the gaps in Tao's strategy.
### Acknowledgements
I would like to thank Hélène Esnault for reading an earlier draft of the paper
and giving me a lot of valuable feedback, and Daniel Loughran for bringing to
my attention the problem of polynomial bijections
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$.
### Conventions
A variety over $k$ is a geometrically integral scheme of finite type over $k$.
A smooth, projective variety is of general type if its Kodaira dimension is
equal to its dimension: in particular, a point is a variety of general type.
We say that a variety is of general type if it is birational to a smooth,
projective variety of general type. More generally, we define the Kodaira
dimension of any variety $X$ as the Kodaira dimension of any smooth projective
variety birational to $X$.
Curves are assumed to be smooth, projective and geometrically connected. Given
a variety $X$ (resp. a scheme of finite type $X$) and $C$ a curve, a morphism
$X\to C$ is a family of varieties of general type (resp. of GeM schemes) if
the generic fiber is a variety of general type (resp. a GeM scheme). Given a
morphism $f\mathrel{\mathop{\ordinarycolon}}X\to C$, a generic section of $f$
is a morphism $s\mathrel{\mathop{\ordinarycolon}}\operatorname{Spec}k(C)\to X$
(equivalently, a rational map
$s\mathrel{\mathop{\ordinarycolon}}C\dashrightarrow X$) such that $f\circ s$
is the natural morphism $\operatorname{Spec}k(C)\to C$ (equivalently, the
identity $C\dashrightarrow C$).
## 2\. Pulling families to maximal Kodaira dimension
This section is of a purely geometric nature; thus we may assume for
simplicity that $k$ is algebraically closed of characteristic $0$. The results
then descend to non-algebraically closed fields by standard arguments.
Given a family $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ of
varieties of general type and
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ a finite
covering, let $f_{c}\mathrel{\mathop{\ordinarycolon}}X_{c}\to\mathbb{P}^{1}$
be the fiber product and, by abuse of notation,
$c\mathrel{\mathop{\ordinarycolon}}X_{c}\to X$ the base change of $c$. The
goal of this section is to obtain sufficient conditions on $c$ such that
$X_{c}$ is of general type. This goal will be reached in 2.13, which contains
all the geometry we’ll need for arithmetic applications.
Let us say that $X\to\mathbb{P}^{1}$ is birationally trivial if there exists a
birational map $X\dashrightarrow F\times\mathbb{P}^{1}$ which commutes
with the projection to $\mathbb{P}^{1}$. If $f$ is birationally trivial, then
clearly our goal is unreachable, since $X_{c}$ will have Kodaira dimension
$-\infty$ no matter which cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ we choose.
We will show that this is in fact the only exception.
Assume that $X$ is smooth and projective (we can always reduce to this case);
then the relative dualizing sheaf $\omega_{f}$ exists [Kle80, Corollary 24].
First, we show that for _every_ non-birationally trivial family there exists
an integer $m$ such that $f_{*}\omega_{f}^{m}$ has _some_ positivity (2.10).
Second, we show that if $f_{*}\omega_{f}^{m}$ has _enough_ positivity, then
$X$ is of general type (2.11). We then pass from "some" to "enough" positivity
by base changing along a cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$.
### 2.1. Positivity of $f_{*}\omega_{f}^{m}$
There are two cases: either there exists some finite cover
$c\mathrel{\mathop{\ordinarycolon}}C\to\mathbb{P}^{1}$ such that $X_{c}\to C$
is birationally trivial, or not. Let us say that
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ is _birationally
isotrivial_ in the first case, and non-birationally isotrivial in the second
case.
The non-birationally isotrivial case has been extensively studied by Viehweg
and Kollár; we don't need to do any additional work.
###### Proposition 2.1 (Kollár, Viehweg [Kol87, Theorem p.363]).
Let $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a non-
birationally isotrivial family of varieties of general type, with $X$ smooth
and projective. There exists an $m>0$ such that, in the decomposition of
$f_{*}\omega_{f}^{m}$ in a direct sum of line bundles, each factor has
positive degree.∎
We are thus left with studying the positivity of $f_{*}\omega_{f}^{m}$ in the
birationally isotrivial, non-birationally trivial case. We’ll have to deal
with various equivalent birational models of families, not always smooth, so
let us first compare their relative pluricanonical sheaves.
#### 2.1.1. Morphisms of pluricanonical sheaves
In this subsection, fix a base scheme $S$. If a morphism to $S$ is given, it
is tacitly assumed to be flat, locally projective, finitely presentable, with
Cohen-Macaulay equidimensional fibers of dimension $n$. For such a morphism
$f\mathrel{\mathop{\ordinarycolon}}X\to S$, the relative dualizing sheaf
$\omega_{f}$ exists and is coherent, see [Kle80, Theorem 21]. Recall that
$\omega_{f}$ satisfies the functorial isomorphism
$f_{*}\underline{\operatorname{Hom}}_{X}(F,\omega_{f}\otimes_{X}f^{*}N)\simeq\underline{\operatorname{Hom}}_{S}(R^{n}f_{*}F,N)$
for every quasi-coherent sheaf $F$ on $X$ and every quasi-coherent sheaf $N$
on $S$. Write $\omega_{f}^{\otimes m}$ for the $m$-th tensor power; we may
drop the superscript $\otimes$ and just write $\omega_{f}^{m}$ if
$\omega_{f}$ is a line bundle.
Every flat, projective map $f\mathrel{\mathop{\ordinarycolon}}X\to S$ of
smooth varieties over $k$ satisfies the above, see [Kle80, Corollary 24], and
in this case we can compute $\omega_{f}$ as $\omega_{X}\otimes
f^{*}\omega_{S}^{-1}$, where $\omega_{X}$ and $\omega_{S}$ are the usual
canonical bundles. Moreover, the relative dualizing sheaf behaves well under
base change along morphisms $S^{\prime}\to S$, see [Kle80, Proposition 9.iii].
Given a morphism $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ over $S$ and a
quasi-coherent sheaf $F$ over $Y$, then $R^{n}f_{*}(g_{*}F)$ is the
$E^{n,0}_{2}$ term of the Grothendieck spectral sequence $(R^{p}f_{*}\circ
R^{q}g_{*})(F)\Rightarrow R^{p+q}(f\circ g)_{*}(F)$, thus there is a natural
morphism $R^{n}f_{*}(g_{*}F)\to R^{n}(fg)_{*}F$. This induces a natural map
$\operatorname{Hom}_{Y}(F,\omega_{fg})=\operatorname{Hom}_{S}(R^{n}(fg)_{*}F,\mathcal{O}_{S})\to\operatorname{Hom}_{S}(R^{n}f_{*}(g_{*}F),\mathcal{O}_{S})=\operatorname{Hom}_{X}(g_{*}F,\omega_{f}).$
###### Definition 2.2.
If $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ is a morphism over $S$, define
$g_{\scaleto{\triangle}{0.5em},f}\mathrel{\mathop{\ordinarycolon}}g_{*}(\omega_{fg})\to\omega_{f}$
as the sheaf homomorphism induced by the identity of $\omega_{fg}$ via the
homomorphism
$\operatorname{Hom}_{Y}(\omega_{fg},\omega_{fg})\to\operatorname{Hom}_{X}(g_{*}\omega_{fg},\omega_{f})$
given above for $F=\omega_{fg}$. With an abuse of notation, call
$g_{\scaleto{\triangle}{0.5em},f}$ the induced sheaf homomorphism
$g_{*}(\omega_{fg}^{\otimes m})\to\omega_{f}^{\otimes m}$ for every $m\geq 0$.
If there is no risk of confusion, we may drop the subscript $f$ and just
write $g_{\scaleto{\triangle}{0.5em}}$.
The following facts are straightforward, formal consequences of the definition
of $g_{\scaleto{\triangle}{0.5em}}$; we omit the proofs.
###### Lemma 2.3.
Let $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ be a morphism over $S$ and
$s\mathrel{\mathop{\ordinarycolon}}S^{\prime}\to S$ any morphism,
$f^{\prime}\mathrel{\mathop{\ordinarycolon}}X^{\prime}\to S^{\prime}$,
$g^{\prime}\mathrel{\mathop{\ordinarycolon}}Y^{\prime}\to X^{\prime}$ the
pullbacks to $S^{\prime}$. By abuse of notation, call $s$ the morphisms
$Y^{\prime}\to Y$, $X^{\prime}\to X$, too. Then
$g^{\prime}_{\scaleto{\triangle}{0.5em}}=g_{\scaleto{\triangle}{0.5em}}|_{X^{\prime}}\in\operatorname{Hom}_{X^{\prime}}(g^{\prime}_{*}\omega_{f^{\prime}g^{\prime}},\omega_{f^{\prime}})=\operatorname{Hom}_{X^{\prime}}(s^{*}g_{*}\omega_{fg},s^{*}\omega_{f}).$
∎
###### Lemma 2.4.
For every quasi-coherent sheaf $F$ on $Y$, the natural map
$\operatorname{Hom}_{Y}(F,\omega_{fg})\to\operatorname{Hom}_{X}(g_{*}F,\omega_{f})$
constructed above is given by
$\varphi\mapsto g_{\scaleto{\triangle}{0.5em}}\circ
g_{*}\varphi\mathrel{\mathop{\ordinarycolon}}g_{*}F\to
g_{*}\omega_{fg}\to\omega_{f}.$
∎
###### Corollary 2.5.
Let $h\mathrel{\mathop{\ordinarycolon}}Z\to Y$,
$g\mathrel{\mathop{\ordinarycolon}}Y\to X$ be morphisms over $S$. Then, for
every $m\geq 0$,
$g_{\scaleto{\triangle}{0.5em}}\circ
g_{*}h_{\scaleto{\triangle}{0.5em}}=(gh)_{\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}(gh)_{*}\omega_{fgh}^{\otimes
m}\to g_{*}\omega_{fg}^{\otimes m}\to\omega_{f}^{\otimes m}.$
∎
###### Corollary 2.6.
Let $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ be a morphism over $S$. Suppose
that a group $H$ acts on $Y,X,S$ and $g,f$ are $H$-equivariant. Then
$g_{*}\omega_{fg}^{\otimes m},\omega_{f}^{\otimes m}$ are $H$-equivariant
sheaves and
$g_{\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}g_{*}\omega_{fg}^{\otimes
m}\to\omega_{f}^{\otimes m}$ is $H$-equivariant.∎
###### Lemma 2.7.
Let $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ be a morphism over $S$. Assume
that $Y,X$ are smooth varieties over a field $k$, and that $g$ is birational.
Then $g_{\scaleto{\triangle}{0.5em}}$ is an isomorphism.
###### Proof.
We have $\omega_{f}=\omega_{X}\otimes f^{*}\omega_{S}^{-1}$ and
$\omega_{fg}=\omega_{Y}\otimes(fg)^{*}\omega_{S}^{-1}$. Moreover,
$\omega_{Y}=g^{*}\omega_{X}\otimes\mathcal{O}_{Y}(R)$ where $R$ is some
effective divisor whose irreducible components are contracted by $g$, hence
$\omega_{fg}=g^{*}\omega_{f}\otimes\mathcal{O}_{Y}(R)$. Since
$g_{*}\mathcal{O}_{Y}(mR)\simeq\mathcal{O}_{X}$, we have a natural isomorphism
$g_{*}(\omega_{fg}^{m})\simeq\omega_{f}^{m}$ by projection formula. This is
easily checked to correspond to $g_{\scaleto{\triangle}{0.5em}}$, which is
then an isomorphism as desired. ∎
#### 2.1.2. Birationally isotrivial families
Let $C$ be a smooth projective curve and
$f\mathrel{\mathop{\ordinarycolon}}X\to C$ a birationally isotrivial family of
varieties of general type, and let $F/k$ be a smooth projective variety such
that the generic fiber of $f$ is birational to $F$. Let $H$ be the finite
group of birational automorphisms of $F$. The scheme of fiberwise birational
isomorphisms $\operatorname{Bir}(X/C,F)\to C$ restricts to an $H$-torsor on
some non-empty open subset $V$ of $C$. The action of $H$ on
$\operatorname{Bir}(X/C,F)|_{V}$ is transitive on the connected components,
thus they are all birational.
###### Definition 2.8.
In the situation above, define $b\mathrel{\mathop{\ordinarycolon}}B_{f}\to C$
as the smooth completion of any connected component of
$\operatorname{Bir}(X/C,F)|_{V}$, and $G_{f}\subseteq H$ as the subgroup of
elements mapping $B_{f}$ to itself. Let us call $B_{f}\to C$ and $G_{f}$ the
_monodromy cover_ and the _monodromy group_ of $f$ respectively.
We have that $B_{f}\to C$ is a $G_{f}$-Galois covering characterized by the
following universal property: if $C^{\prime}$ is a smooth projective curve
with a finite morphism $c\mathrel{\mathop{\ordinarycolon}}C^{\prime}\to C$,
then $X_{c}\to C^{\prime}$ is birationally trivial if and only if there exists
a factorization $C^{\prime}\to B_{f}\to C$.
###### Proposition 2.9.
Let $f\mathrel{\mathop{\ordinarycolon}}X\to C$ be a birationally isotrivial
family of varieties of general type, with $X$ smooth and projective. If $p\in
B_{f}$ is a ramification point of the monodromy cover
$b\mathrel{\mathop{\ordinarycolon}}B_{f}\to C$, then for some $m$ there exists
an injective sheaf homomorphism $\mathcal{O}_{B_{f}}(p)\to
f_{b*}\omega_{f_{b}}^{m}$.
###### Proof.
The statement is equivalent to the existence of a non-trivial section of
$\omega_{f_{b}}^{m}$ which vanishes on the fiber $X_{b,p}$. Let $F$ be as
above; $G_{f}$ acts faithfully by birational maps on $F$. By equivariant
resolution of singularities, we may assume that $G_{f}$ acts faithfully by
isomorphisms on $F$. We have that $X$ is birational to $(F\times B_{f})/G_{f}$
where $G_{f}$ acts diagonally.
By resolution of singularities, let $X^{\prime}$ be a smooth projective
variety with birational morphisms $X^{\prime}\to X$, $X^{\prime}\to(F\times
B_{f})/G_{f}$: thanks to 2.7 we may replace $X$ with $X^{\prime}$ and assume
we have a birational morphism $X\to(F\times B_{f})/G_{f}$. By equivariant
resolution of singularities again, we may find a smooth projective variety $Y$
with an action of $G_{f}$, a birational morphism
$g\mathrel{\mathop{\ordinarycolon}}Y\to X_{b}$ and a birational,
$G_{f}$-equivariant morphism $y\mathrel{\mathop{\ordinarycolon}}Y\to F\times
B_{f}$. Call $\pi\mathrel{\mathop{\ordinarycolon}}F\times B_{f}\to B_{f}$ the
projection.
(Commutative diagram: $Y\xrightarrow{g}X_{b}$, $X_{b}\xrightarrow{b}X$,
$Y\xrightarrow{y}F\times B_{f}$, $F\times B_{f}\to(F\times B_{f})/G_{f}$,
$F\times B_{f}\xrightarrow{\pi}B_{f}$, $B_{f}\xrightarrow{b}C$,
$X_{b}\xrightarrow{f_{b}}B_{f}$, with $\pi\circ y=f_{b}\circ g$.)
Recall that we are trying to find a global section of $\omega_{f_{b}}^{m}$
that vanishes on $X_{b,p}$, where $p$ is a ramification point of $b$. Thanks
to 2.7, we have that $(\pi y)_{*}\omega_{\pi
y}^{m}\simeq\pi_{*}\omega_{\pi}^{m}\simeq\mathcal{O}_{B_{f}}\otimes\operatorname{H}^{0}(F,\omega_{F}^{m})$,
thus $\operatorname{H}^{0}(Y,\omega_{\pi
y}^{m})=\operatorname{H}^{0}(F,\omega_{F}^{m})=\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})$.
The sheaf homomorphism
$g_{\scaleto{\triangle}{0.5em}}=g_{\scaleto{\triangle}{0.5em},f_{b}}\mathrel{\mathop{\ordinarycolon}}g_{*}\omega_{\pi
y}^{m}\to\omega_{f_{b}}^{m}$ induces a linear map
$g_{\scaleto{\triangle}{0.5em}}(p)\mathrel{\mathop{\ordinarycolon}}\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})=\operatorname{H}^{0}(Y,\omega_{\pi
y}^{m})\xrightarrow{g_{\scaleto{\triangle}{0.5em}}}\operatorname{H}^{0}(X_{b},\omega_{f_{b}}^{m})\xrightarrow{\bullet|_{p}}\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$
where the last map is the restriction to the fiber. Let $V\subseteq B_{f}$ be
the étale locus of $b\mathrel{\mathop{\ordinarycolon}}B_{f}\to C$. Since
$X_{b}|_{V}$ is smooth, $g_{\scaleto{\triangle}{0.5em}}$ restricts to an isomorphism on $X_{b}|_{V}$ thanks to 2.7, and thus the map
$\operatorname{H}^{0}(Y,\omega_{\pi
y}^{m})\to\operatorname{H}^{0}(X_{b},\omega_{f_{b}}^{m})$ is injective.
We want to show that the restriction map
$\operatorname{H}^{0}(X_{b},\omega_{f_{b}}^{m})\to\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$
is not injective for some $m$; since the first map in the composition defining $g_{\scaleto{\triangle}{0.5em}}(p)$ is injective, it is enough to show that
$g_{\scaleto{\triangle}{0.5em}}(p)$ is not injective. Thanks to 2.3, we have
that
$g_{\scaleto{\triangle}{0.5em}}(p)=g_{p,\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})\to\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$.
Recall now that $G_{f}$ acts on $Y$. Let $G_{f,p}$ be the stabilizer of $p\in B_{f}$; it is a non-trivial group since $p$ is a ramification point. Thanks to
2.6, the stabilizer $G_{f,p}$ acts naturally on
$\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})$,
$\operatorname{H}^{0}(F,\omega_{F}^{m})$,
$\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$, and the maps
$y_{p,\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})\simeq\operatorname{H}^{0}(F,\omega_{F}^{m})$,
$g_{p,\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})\to\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$
are $G_{f,p}$-equivariant. Moreover, the action on
$\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$ is trivial since the
action on $X_{b,p}$ is trivial. It follows that
$g_{\scaleto{\triangle}{0.5em}}(p)$ is $G_{f,p}$-invariant, and hence to show
that it is not injective for some $m$ it is enough to show that the action of
$G_{f,p}$ on
$\operatorname{H}^{0}(F,\omega_{F}^{m})=\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})$
is not trivial for some $m$.
Since $F$ is of general type,
$F\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(F,\omega_{F}^{m}))$ is
generically injective for some $m$; fix such an $m$. Since the action of $G_{f,p}$ on
$F$ is faithful, for every non-trivial $g\in G_{f,p}$ there exists a section
$s\in\operatorname{H}^{0}(F,\omega_{F}^{m})$ and a point $v\in F$ such that
$s(v)=0$ and $s(g(v))\neq 0$, in particular the action of $G_{f,p}$ on
$\operatorname{H}^{0}(F,\omega_{F}^{m})$ is not trivial and we conclude. ∎
###### Corollary 2.10.
Let $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a non-
birationally trivial family of varieties of general type, with $X$ smooth and
projective. Then there exists an $m$ with an injective homomorphism
$\mathcal{O}(1)\to f_{*}\omega_{f}^{m}$.
###### Proof.
If $f$ is not birationally isotrivial, apply 2.1. Otherwise, $f$ is
birationally isotrivial and not birationally trivial, thus the monodromy cover
$b\mathrel{\mathop{\ordinarycolon}}B_{f}\to\mathbb{P}^{1}$ is not trivial.
Since $\mathbb{P}^{1}$ has no non-trivial étale covers, we have that
$B_{f}\to\mathbb{P}^{1}$ has at least one ramification point $p$. Let $m$ be
the integer given by 2.9, and write
$f_{*}\omega_{f}^{m}=\bigoplus_{i}\mathcal{O}_{\mathbb{P}^{1}}(d_{i})$. Since
$\mathcal{O}_{B_{f}}(p)\subseteq f_{b*}\omega_{f_{b}}^{m}$ and
$\omega_{f_{b}}=b^{*}\omega_{f}$, see [Kle80, Proposition 9.iii], there exists
an $i$ with $d_{i}>0$. ∎
### 2.2. Pulling families to maximal Kodaira dimension
Now that we have established a positivity result for $f_{*}\omega_{f}^{m}$ of
any non-birationally trivial family
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$, let us use this to
pull families to maximal Kodaira dimension.
###### Proposition 2.11.
Let $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a family of
varieties of general type, with $X$ smooth and projective. Then $X$ is of
general type if and only if there exists an injective homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(1)\to f_{*}\omega_{X}^{m_{0}}$, or equivalently
$\mathcal{O}_{\mathbb{P}^{1}}(2m_{0}+1)\to f_{*}\omega_{f}^{m_{0}}$, for some
$m_{0}>0$.
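The equivalence of the two conditions in the statement is a standard twist computation; for the reader's convenience (using only $\omega_{\mathbb{P}^{1}}=\mathcal{O}_{\mathbb{P}^{1}}(-2)$ and the projection formula):

```latex
\omega_X^{m_0}
  = \omega_f^{m_0}\otimes f^*\omega_{\mathbb{P}^1}^{m_0}
  = \omega_f^{m_0}\otimes f^*\mathcal{O}_{\mathbb{P}^1}(-2m_0)
\quad\Longrightarrow\quad
f_*\omega_X^{m_0}
  = f_*\omega_f^{m_0}\otimes\mathcal{O}_{\mathbb{P}^1}(-2m_0),
```

so twisting by $\mathcal{O}_{\mathbb{P}^{1}}(2m_{0})$ identifies injective homomorphisms $\mathcal{O}_{\mathbb{P}^{1}}(1)\to f_{*}\omega_{X}^{m_{0}}$ with injective homomorphisms $\mathcal{O}_{\mathbb{P}^{1}}(2m_{0}+1)\to f_{*}\omega_{f}^{m_{0}}$.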
###### Proof.
By resolution of singularities, there exists a birational morphism
$g\mathrel{\mathop{\ordinarycolon}}X^{\prime}\to X$ with $X^{\prime}$ smooth
and projective such that the generic fiber of $X^{\prime}\to\mathbb{P}^{1}$ is
smooth and projective. We have
$\omega_{X^{\prime}}=g^{*}\omega_{X}\otimes\mathcal{O}_{X^{\prime}}(R)$ where
$R$ is some effective divisor whose irreducible components are contracted by
$g$, hence $g_{*}\omega_{X^{\prime}}^{m}=\omega_{X}^{m}\otimes g_{*}\mathcal{O}_{X^{\prime}}(mR)=\omega_{X}^{m}$ for every $m\geq 0$. We may thus replace $X$ with
$X^{\prime}$ and assume that the generic fiber is smooth. This guarantees that
$\operatorname{rank}f_{*}\omega_{X}^{m}=\operatorname{rank}f_{*}\omega_{f}^{m}$
has growth $O(m^{\dim X-1})$.
If there is no injective homomorphism $\mathcal{O}_{\mathbb{P}^{1}}(1)\to
f_{*}\omega_{X}^{m}$ for any $m>0$, then
$\operatorname{h}^{0}(\omega_{X}^{m})\leq\operatorname{rank}f_{*}\omega_{X}^{m}=\operatorname{rank}f_{*}\omega_{f}^{m}$,
and this has growth $O(m^{\dim X-1})$.
On the other hand, let $\mathcal{O}_{\mathbb{P}^{1}}(1)\to
f_{*}\omega_{X}^{m_{0}}$ be an injective homomorphism for some $m_{0}>0$. In
particular, $X$ has Kodaira dimension $\geq 0$.
For some $m$, the closure $Y$ of the image of
$X\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,\omega_{X}^{mm_{0}}))$ has
dimension equal to the Kodaira dimension of $X$ and $k(Y)$ is algebraically
closed in $k(X)$, see [Iit71, §3]. If $X^{\prime}$ is a smooth projective
variety birational to $X$, then there is a natural isomorphism
$\operatorname{H}^{0}(X,\omega_{X}^{mm_{0}})=\operatorname{H}^{0}(X^{\prime},\omega_{X^{\prime}}^{mm_{0}})$,
see [Har77, Ch. 2, Theorem 8.19]. Thus, up to replacing $X$ with some other
smooth, projective variety birational to $X$, we may assume that
$X\dashrightarrow
Y\subseteq\mathbb{P}(\operatorname{H}^{0}(X,\omega_{X}^{mm_{0}}))$ is defined
everywhere and has smooth, projective generic fiber $Z$ by resolution of
singularities. Iitaka has then shown that $Z$ has Kodaira dimension $0$, see
[Iit71, Theorem 5]. This is easy to see in the case in which
$\omega_{X}^{mm_{0}}$ is base point free, since then $\omega_{X}^{mm_{0}}$ is
the pullback of $\mathcal{O}(1)$ and thus
$\omega_{Z}^{mm_{0}}=\omega_{X}^{mm_{0}}|_{Z}$ is trivial.
Let us recall briefly Grothendieck’s convention that, if $V$ is a vector
bundle, then $\mathbb{P}(V)$ is the set (or scheme) of linear quotients $V\to
k$ up to a scalar. A non-trivial linear map $W\to V$ thus induces a rational
map $\mathbb{P}(V)\dashrightarrow\mathbb{P}(W)$ by restriction. If $L$ is a
line bundle with non-trivial global sections, the rational map
$X\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,L))$ is defined by sending
a point $x\in X$ outside the base locus to the quotient
$\operatorname{H}^{0}(X,L)\to L_{x}\simeq k$. If $L$ embeds in another line
bundle $M$, then there is a natural factorization
$X\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,M))\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,L))$,
and any point of $X$ outside the support of $M/L$ and outside the base locus
of $L$ maps to the locus of definition of
$\mathbb{P}(\operatorname{H}^{0}(X,M))\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,L))$.
Let $F\subseteq X$ be the fiber over any rational point of $\mathbb{P}^{1}$.
The injective homomorphism $\mathcal{O}_{\mathbb{P}^{1}}(1)\to
f_{*}\omega_{X}^{m_{0}}$ induces an injective homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(m)\to f_{*}\omega_{X}^{mm_{0}}$; choosing any
embedding $\mathcal{O}_{\mathbb{P}^{1}}(1)\to\mathcal{O}_{\mathbb{P}^{1}}(m)$,
these together induce an injective homomorphism
$\mathcal{O}_{X}(F)\to\omega_{X}^{mm_{0}}$. Since $\mathcal{O}_{X}(F)$ induces
the morphism $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$, the
composition
$X\to
Y\subseteq\mathbb{P}(\operatorname{H}^{0}(X,\omega_{X}^{mm_{0}}))\dashrightarrow\mathbb{P}^{1}$
coincides with $f$. Observe that the right arrow depends on the choice of the
embedding $\mathcal{O}_{X}(F)\to\omega_{X}^{mm_{0}}$, but the composition
doesn’t.
Let $\xi$ be the generic point of $\mathbb{P}^{1}$, $U\subseteq Y$ an open
subset such that $U\to\mathbb{P}^{1}$ is defined, $Y_{\xi}$ the closure of
$U_{\xi}$ in $Y$. Then the generic fiber $Z$ of $X\to Y$ is the generic fiber
of $X_{\xi}\to Y_{\xi}$, too. By hypothesis, $X_{\xi}$ is of general type,
thus by adjunction $\omega_{X_{\xi}}|_{Z}=\omega_{Z}$ is big and hence $Z$ is
of general type.
Since $Z$ is a variety of general type of Kodaira dimension $0$ over
$\operatorname{Spec}k(Y)$, then $Z=\operatorname{Spec}k(Y)$, the morphism
$X\to Y$ is generically injective and thus $X$ is of general type. ∎
###### Remark 2.12.
We don’t actually need the precision of 2.11: for our purposes it is enough to
show that, if $f_{*}\omega_{X}^{m_{0}}$ has a positive _enough_ sub-line
bundle for some $m_{0}$, then $X$ is of general type. This weaker fact has a
more direct proof, let us sketch it.
First, let us mention an elementary fact about injective sheaf homomorphisms.
Let $P,Q$ be vector bundles on $\mathbb{P}^{1}$ and $M,N$ vector bundles on
$X$, with $P$ of rank $1$. Suppose we are given injective homomorphisms
$m\in\operatorname{Hom}(P,f_{*}M)$, $n\in\operatorname{Hom}(Q,f_{*}N)$. Then
$m^{a}\otimes n\in\operatorname{Hom}(P^{\otimes a}\otimes Q,f_{*}(M^{\otimes
a}\otimes N))$ is injective for every $a>0$: this can be checked on the
generic point of $\mathbb{P}^{1}$ and thus on the generic fiber
$X_{k(\mathbb{P}^{1})}$, where the fact that $P$ has rank $1$ allows us to
reduce to the fact that the tensor product of non-zero sections of vector
bundles is non-zero on an integral scheme.
Assume we have an injective homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(3m_{0})\to f_{*}\omega_{X}^{m_{0}}$, or
equivalently $\mathcal{O}_{\mathbb{P}^{1}}(5m_{0})\to
f_{*}\omega_{f}^{m_{0}}$, we want to prove that $X$ is of general type. Let
$r(m)$ be the rank of $f_{*}\omega_{f}^{mm_{0}}$ for every $m$. Since the generic
fiber is of general type, up to replacing $m_{0}$ by a multiple
$m_{0}^{\prime}$ we may assume that the growth of $r(m)$ is $O(m^{\dim X-1})$.
The induced morphism $\mathcal{O}_{\mathbb{P}^{1}}(5m_{0}^{\prime})\to
f_{*}\omega_{f}^{m_{0}^{\prime}}$ is injective thanks to the above.
Thanks to [Vie83, Theorem III], every line bundle in the direct sum
decomposition of $f_{*}\omega_{f}^{mm_{0}}$ has non-negative degree, so we may
choose an injective homomorphism $\mathcal{O}_{\mathbb{P}^{1}}^{r(m)}\to
f_{*}\omega_{f}^{mm_{0}}$. Taking the tensor product with the $m$-th power of
the homomorphism given by hypothesis, we get a homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(5mm_{0})^{r(m)}\to f_{*}\omega_{f}^{2mm_{0}}$
which is injective thanks to the above.
Since
$f_{*}\omega_{X}^{2mm_{0}}=f_{*}\omega_{f}^{2mm_{0}}\otimes\mathcal{O}_{\mathbb{P}^{1}}(-4mm_{0})$,
we thus have an injective homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(mm_{0})^{r(m)}\to f_{*}\omega_{X}^{2mm_{0}}.$
In particular, we have
$\operatorname{h}^{0}(\omega_{X}^{2mm_{0}})\geq(mm_{0}+1)r(m)$, which has
growth $O(m^{\dim X})$, hence $X$ is of general type.
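In the last step we used $\operatorname{h}^{0}(\mathbb{P}^{1},\mathcal{O}(d))=d+1$ for $d\geq 0$; spelled out, the growth estimate reads:

```latex
\operatorname{h}^0(\omega_X^{2mm_0})
  \;\ge\; \operatorname{h}^0\!\big(\mathcal{O}_{\mathbb{P}^1}(mm_0)^{r(m)}\big)
  \;=\; (mm_0+1)\,r(m)
  \;\sim\; m\cdot m^{\dim X-1}
  \;=\; m^{\dim X},
```

which is the maximal possible growth of plurigenera for a variety of dimension $\dim X$.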
###### Corollary 2.13.
Let $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a non-
birationally trivial family of varieties of general type. Then there exists an
integer $d_{0}$ and a non-empty open subset $U\subseteq\mathbb{P}^{1}$ such
that, for every finite cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ with $\deg
c\geq d_{0}$ and such that the branch points of $c$ are contained in $U$, we
have that $X_{c}$ is of general type. If $X$ is smooth and projective, $U$ can
be chosen as the largest open subset such that $f|_{f^{-1}(U)}$ is smooth.
###### Proof.
By resolution of singularities, we may assume that $X$ is smooth and
projective. By generic smoothness, there exists an open subset
$U\subseteq\mathbb{P}^{1}$ such that $f|_{X_{U}}$ is smooth. We have that
$X_{c}$ is smooth for every
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ whose
branch points are contained in $U$ since each point of $X_{c}$ is smooth
either over $X$ or over $\mathbb{P}^{1}$.
Let $m_{0}$ be the integer given by 2.10, so that we have an injective
homomorphism $\mathcal{O}(1)\to f_{*}\omega_{f}^{m_{0}}$. Set
$d_{0}=2m_{0}+1$. For every finite cover $c$ of degree $\deg c\geq d_{0}$ we
have an induced
homomorphism $\mathcal{O}(2m_{0}+1)\to f_{c*}\omega_{f_{c}}^{m_{0}}$ and thus
$\mathcal{O}(1)\to f_{c*}\omega_{X_{c}}^{m_{0}}$. It follows that $X_{c}$ is
of general type thanks to 2.11. ∎
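The numerology $d_{0}=2m_{0}+1$ can be unpacked as follows (a sketch; the middle identity is flat base change along $c$, and $\omega_{f_{c}}$ is the pullback of $\omega_{f}$):

```latex
c^*\mathcal{O}_{\mathbb{P}^1}(1)=\mathcal{O}_{\mathbb{P}^1}(\deg c),
\qquad
c^* f_*\omega_f^{m_0}\simeq f_{c*}\omega_{f_c}^{m_0},
\qquad
f_{c*}\omega_{X_c}^{m_0}=f_{c*}\omega_{f_c}^{m_0}\otimes\mathcal{O}_{\mathbb{P}^1}(-2m_0),
```

so any embedding $\mathcal{O}(2m_{0}+1)\to\mathcal{O}(\deg c)$ (which exists exactly when $\deg c\geq 2m_{0}+1$) yields $\mathcal{O}(2m_{0}+1)\to f_{c*}\omega_{f_{c}}^{m_{0}}$ and hence $\mathcal{O}(1)\to f_{c*}\omega_{X_{c}}^{m_{0}}$.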
## 3\. Higher dimensional HIT
### 3.1. Pulling fat sets
Recall that Serre [Ser97, Chapter 9] defined a subset $S$ of
$\mathbb{P}^{1}(k)$ as _thin_ if there exists a morphism
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ with $X$ of finite type
over $k$, finite generic fiber and no generic sections
$\operatorname{Spec}k(\mathbb{P}^{1})\to X$ such that $S\subseteq f(X(k))$.
It’s immediate to check that a subset of a thin set is thin, and a finite
union of thin sets is thin. Serre’s form of Hilbert’s irreducibility theorem
says that, if $k$ is finitely generated over $\mathbb{Q}$, then
$\mathbb{P}^{1}(k)$ is not thin.
###### Definition 3.1.
A subset $S\subseteq\mathbb{P}^{1}(k)$ is _fat_ if the complement
$\mathbb{P}^{1}(k)\setminus S$ is thin.
Given a subset $S\subseteq\mathbb{P}^{1}(k)$, a finite set of finite morphisms
$D=\\{d_{i}\mathrel{\mathop{\ordinarycolon}}D_{i}\to\mathbb{P}^{1}\\}_{i}$
each of degree $>1$ with $D_{i}$ smooth, projective and geometrically
connected is a _scale_ for $S$ if
$S\cup\bigcup_{i}d_{i}(D_{i}(k))=\mathbb{P}^{1}(k)$. The set of branch points
of the scale $D$ is the union of the sets of branch points of $d_{i}$.
Using the fact that a connected scheme with a rational point is geometrically
connected [Sta20, Lemma 04KV], it’s immediate to check that a subset of
$\mathbb{P}^{1}$ is fat if and only if it has a scale. The set of branch
points of a scale gives valuable information about a fat set.
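A concrete illustration (not from the text; over $k=\mathbb{Q}$, for instance): the set $S\subseteq\mathbb{P}^{1}(\mathbb{Q})$ of non-squares is fat, with scale consisting of the single squaring map:

```latex
D=\{\,d\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^1\to\mathbb{P}^1,\ x\mapsto x^2\,\},
\qquad
S\cup d\big(\mathbb{P}^1(\mathbb{Q})\big)=\mathbb{P}^1(\mathbb{Q}),
```

since every rational point is either a non-square or a square (and $0,\infty$ are in the image of $d$); the set of branch points of this scale is $\{0,\infty\}$.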
###### Lemma 3.2.
Let $S\subseteq\mathbb{P}^{1}$ be a fat set, and let
$D=\\{d_{i}\mathrel{\mathop{\ordinarycolon}}D_{i}\to\mathbb{P}^{1}\\}_{i}$ be
a scale for $S$. Let
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ be a
morphism such that the sets of branch points of $c$ and $D$ are disjoint. Then
$c^{-1}(S)$ is fat.
###### Proof.
Let
$d_{i}^{\prime}\mathrel{\mathop{\ordinarycolon}}D_{i}^{\prime}\to\mathbb{P}^{1}$
be the base change of $d_{i}$ along
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$. By
construction,
$c^{-1}(S)\cup\bigcup_{i}d_{i}^{\prime}(D_{i}^{\prime}(k))=\mathbb{P}^{1}(k)$.
Since the sets of branch points of $c$ and $d_{i}$ are disjoint, we have that
$D_{i}^{\prime}$ is geometrically connected, see for instance [Str20, Lemma
2.8]. Moreover, $D_{i}^{\prime}$ is smooth since each point of
$D_{i}^{\prime}$ is étale either over $\mathbb{P}^{1}$ or $D_{i}$. It follows
that $d_{i}^{\prime}$ has degree $>1$ and
$\\{d_{i}^{\prime}\mathrel{\mathop{\ordinarycolon}}D_{i}^{\prime}\to\mathbb{P}^{1}\\}_{i}$
is a scale for $c^{-1}(S)$, which is thus fat. ∎
### 3.2. Decreasing the fiber dimension
Let us now prove Theorem A. Using Hilbert’s irreducibility, it’s easy to check
that Theorem A is equivalent to the following statement.
If the generic fiber of $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$
is GeM and $f(X(k))$ is fat, there exists a section
$\operatorname{Spec}k(\mathbb{P}^{1})\to X$.
We prove this statement by induction on the dimension of the generic fiber. If
the generic fiber has dimension $0$, this follows from the definition of fat
set. Let us prove the inductive step.
We define recursively a sequence of closed subschemes $X_{i+1}\subseteq X_{i}$
with $X_{0}=X$ and such that $f(X_{i}(k))\subseteq\mathbb{P}^{1}(k)$ is fat.
* •
Define $X^{\prime}_{i}$ as the closure of $X_{i}(k)$ with the reduced scheme
structure; then $f(X^{\prime}_{i}(k))=f(X_{i}(k))\subseteq\mathbb{P}^{1}(k)$ is
fat.
* •
Define $X^{\prime\prime}_{i}$ as the union of the irreducible components of
$X^{\prime}_{i}$ which dominate $\mathbb{P}^{1}$; then
$f(X^{\prime\prime}_{i}(k))\subseteq\mathbb{P}^{1}(k)$ is fat, since
$f(X_{i}^{\prime}(k))\setminus f(X_{i}^{\prime\prime}(k))$ is finite.
* •
Write $X^{\prime\prime}_{i}=\bigcup_{j}Y_{i,j}$ as a union of irreducible
components; $Y_{i,j}\to\mathbb{P}^{1}$ is dominant for every $j$. For every
$j$, there exists a finite cover $C_{i,j}\to\mathbb{P}^{1}$ with $C_{i,j}$
smooth projective and a rational map $Y_{i,j}\dashrightarrow C_{i,j}$ with
geometrically irreducible generic fiber. If $C_{i,j}\to\mathbb{P}^{1}$ is an
isomorphism, define $Z_{i,j}=Y_{i,j}$. Otherwise, there exists a non-empty
open subset $V_{i,j}\subseteq Y_{i,j}$ such that $Y_{i,j}\dashrightarrow
C_{i,j}$ is defined on $V_{i,j}$. In particular,
$f(V_{i,j}(k))\subseteq\mathbb{P}^{1}(k)$ is thin. Define
$Z_{i,j}=Y_{i,j}\setminus V_{i,j}$ and $X_{i+1}=\bigcup_{j}Z_{i,j}\subseteq
X_{i}$. By construction, $f(X_{i+1}(k))\subseteq\mathbb{P}^{1}(k)$ is fat
since $f(X_{i}^{\prime\prime}(k))\setminus f(X_{i+1}(k))$ is thin.
By noetherianity, the sequence is eventually stable; let $r$ be such that
$X_{r+1}=X_{r}$. Since $X_{r+1}=X_{r}$, the set $X_{r}(k)$ is dense in $X_{r}$,
thus every irreducible component is geometrically irreducible, see [Sta20,
Lemma 0G69]. Moreover, every irreducible component of $X_{r}$ dominates
$\mathbb{P}^{1}$ with geometrically irreducible generic fiber. Replace $X$
with $X_{r}$ and write $X=\bigcup_{j}Y_{j}$ as a union of irreducible
components, we may assume that $Y_{j}\to\mathbb{P}^{1}$ is a family of GeM
varieties for every $j$ and $Y_{j}(k)$ is dense in $Y_{j}$.
If $Y_{j}\to\mathbb{P}^{1}$ is birationally trivial for some $j$, since
$Y_{j}(k)$ is dense in $Y_{j}$ and a generic fiber of $Y_{j}\to\mathbb{P}^{1}$
has a finite number of rational points, the generic fiber has dimension $0$;
being a geometrically irreducible variety of dimension $0$ over
$k(\mathbb{P}^{1})$, it is $\operatorname{Spec}k(\mathbb{P}^{1})$, so
$Y_{j}\to\mathbb{P}^{1}$ is birational and we conclude. Otherwise, thanks to
2.13, there exists an integer $d_{0}$ and a non-empty open subset
$U\subseteq\mathbb{P}^{1}$ such that, for every finite cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ with $\deg
c\geq d_{0}$ such that the branch points of $c$ are contained in $U$, we have
that $Y_{j,c}$ is of general type for every $j$.
Let $D=\\{d_{l}\mathrel{\mathop{\ordinarycolon}}D_{l}\to\mathbb{P}^{1}\\}$ be
a scale for $f(X(k))$. Up to shrinking $U$ further, we may assume that the
set of branch points of $D$ is disjoint from $U$. Since we are assuming that
the weak Bombieri-Lang conjecture holds up to dimension $\dim X$, the
dimension of $\overline{Y_{j,c}(k)}\subseteq Y_{j,c}$ is strictly smaller than
$\dim Y_{j}$ for every $j$. Moreover, we have that
$f_{c}(X_{c}(k))=c^{-1}(f(X(k)))$ is fat thanks to 3.2. It follows that,
by induction hypothesis, there exists a generic section
$\operatorname{Spec}k(\mathbb{P}^{1})\to X_{c}$ for _every_ finite cover $c$
as above. There are a lot of such covers: let us show that we can choose them
so that the resulting sections "glue" to a generic section
$\operatorname{Spec}k(\mathbb{P}^{1})\to X$.
### 3.3. Gluing sections
Choose coordinates on $\mathbb{P}^{1}$ so that $0,\infty\in U$. For any
positive integer $n$, let
$m_{n}\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ be the
$n$-th power map; its branch points are $0$ and $\infty$, which lie in $U$. We
have shown above that there exists a rational section
$\mathbb{P}^{1}\dashrightarrow X_{m_{p}}$ for every prime $p\geq d_{0}$; call
$s_{p}\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\dashrightarrow
X_{m_{p}}\to X$ the composition.
We either assume that there exists an integer $N$ such that, for every
rational point $v\in\mathbb{P}^{1}(k)$, we have $|X_{v}(k)|\leq N$ or that the
Bombieri-Lang conjecture holds in every dimension. In the second case, the
uniform bound $N$ exists thanks to a theorem of Caporaso-Harris-Mazur and
Abramovich-Voloch [CHM97, Theorem 1.1] [AV96, Theorem 1.5] [Abr97]. Choose
$N+1$ prime numbers $p_{0},\dots,p_{N}$ greater than $d_{0}$; for each one we
have a rational section:
[Diagram: $s_{p}\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\dashrightarrow X$ with $f\circ s_{p}=m_{p}$.]
Let $Q=\prod_{i=0}^{N}p_{i}$. For every $i=0,\dots,N$, we get a rational
section $S_{p_{i}}=s_{p_{i}}\circ m_{Q/p_{i}}$ by composition:
[Diagram: $S_{p_{i}}=s_{p_{i}}\circ m_{Q/p_{i}}\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\dashrightarrow X$, with $f\circ S_{p_{i}}=m_{Q}$.]
Let $V\subseteq\mathbb{P}^{1}$ be an open subset such that $S_{p_{i}}$ is
defined on $V$ for every $i$. For every rational point $v\in V(k)$, the $N+1$
points $S_{p_{0}}(v),\dots,S_{p_{N}}(v)$ all lie in the fiber $X_{m_{Q}(v)}$,
which has at most $N$ rational points, so by pigeonhole two of them coincide.
Since there are finitely many pairs of indexes and infinitely many $v\in V(k)$,
there exists a couple of different indexes $i\neq j$ such that
$S_{p_{i}}(v)=S_{p_{j}}(v)$ for infinitely many $v\in V(k)$, hence
$S_{p_{i}}=S_{p_{j}}$. Let $Z\subseteq X$ be the image of
$S_{p_{i}}=S_{p_{j}}$; by construction we have
$k(\mathbb{P}^{1})=k(t)\subseteq k(Z)\subseteq k(t^{-p_{i}})\cap
k(t^{-p_{j}})\subseteq k(t^{-Q}).$
Using Galois theory on the cyclic extension $k(t^{-Q})/k(t)$, it is immediate
to check that $k(t^{-p_{i}})\cap k(t^{-p_{j}})=k(t)\subseteq k(t^{-Q})$ since
$p_{i},p_{j}$ are coprime, thus $k(Z)=k(t)$ and $Z\to\mathbb{P}^{1}$ is
birational. This concludes the proof of Theorem A.
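The Galois-theoretic step admits a quick computational sanity check. The sketch below (illustration only; the primes $5$ and $7$ stand in for $p_{i},p_{j}$) models the cyclic group $\operatorname{Gal}(k(t^{-Q})/k(t))\simeq\mathbb{Z}/Q\mathbb{Z}$ and verifies that the subgroup generated by the two subgroups fixing $k(t^{-p_{i}})$ and $k(t^{-p_{j}})$ is the whole group, so the intersection of the two subfields is $k(t)$:

```python
from math import gcd

# Illustration only: in the cyclic extension k(t^{-Q})/k(t) of degree Q,
# intermediate fields correspond to subgroups of Gal ≅ Z/QZ, and the
# intersection of two subfields corresponds to the subgroup *generated*
# by the two corresponding subgroups.
p_i, p_j = 5, 7            # stand-ins for two distinct primes p_i, p_j
assert gcd(p_i, p_j) == 1
Q = p_i * p_j

H_i = {(p_i * a) % Q for a in range(Q)}  # subgroup fixing k(t^{-p_i}), index p_i
H_j = {(p_j * a) % Q for a in range(Q)}  # subgroup fixing k(t^{-p_j}), index p_j

# In an abelian group, the subgroup generated by H_i and H_j is the sumset.
join = {(a + b) % Q for a in H_i for b in H_j}

# The join is all of Z/QZ, so the common fixed field is k(t) itself.
assert join == set(range(Q))
```

The same computation works for any pair of coprime degrees: the join is $\gcd(p_{i},p_{j})\cdot\mathbb{Z}/Q\mathbb{Z}$, which is everything exactly when $p_{i},p_{j}$ are coprime.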
### 3.4. Non-rational base
Let us show how Theorem A implies Theorem B. Let $C$ be a geometrically
connected curve over a field $k$ finitely generated over $\mathbb{Q}$, and let
$f\mathrel{\mathop{\ordinarycolon}}X\to C$ be a morphism of finite type whose
generic fiber is a GeM scheme. Assume that there exists a non-empty open
subset $V\subseteq C$ such that $X|_{V}(h)\to V(h)$ is surjective for every
finite extension $h/k$. We want to prove that there exists a generic section
$C\dashrightarrow X$. It’s easy to reduce to the case in which $C$ is smooth
and projective, so let us make this assumption.
Observe that, up to replacing $X$ with an affine covering, we may assume that
$X$ is affine. Choose $C\to\mathbb{P}^{1}$ any finite map: since $X$ is
affine, the Weil restriction $R_{C/\mathbb{P}^{1}}(X)\to\mathbb{P}^{1}$ exists
[BLR90, §7.6, Theorem 4]. Recall that
$R_{C/\mathbb{P}^{1}}(X)\to\mathbb{P}^{1}$ represents the functor on
$\mathbb{P}^{1}$-schemes
$S\mapsto\operatorname{Hom}_{C}(S\times_{\mathbb{P}^{1}}C,X)$.
If $L$ is a Galois closure of $k(C)/k(\mathbb{P}^{1})$ and $\Sigma$ is the set of
embeddings $\sigma\mathrel{\mathop{\ordinarycolon}}k(C)\to L$ as
$k(\mathbb{P}^{1})$ extensions, the scheme $R_{C/\mathbb{P}^{1}}(X)_{L}$ is
isomorphic to the product
$\prod_{\Sigma}X\times_{\operatorname{Spec}k(C),\sigma}\operatorname{Spec}L$
and hence is a GeM scheme, see [Bre20, Lemma 3.3]. It follows that the generic
fiber $R_{C/\mathbb{P}^{1}}(X)_{k(\mathbb{P}^{1})}$ is a GeM scheme, too.
Let $U\subseteq\mathbb{P}^{1}$ be the image of $V\subseteq C$. The fact that
$X|_{V}(h)\to V(h)$ is surjective for every finite extension $h/k$ implies
that $R_{C/\mathbb{P}^{1}}(X)|_{U}(k)\to U(k)$ is surjective. By Theorem A, we
get a generic section $\mathbb{P}^{1}\dashrightarrow R_{C/\mathbb{P}^{1}}(X)$,
which in turn induces a generic section $C\dashrightarrow X$ by the universal
property of $R_{C/\mathbb{P}^{1}}(X)$. This concludes the proof of Theorem B.
## 4\. Polynomial bijections $\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$
Let us prove Theorem C. Let $k$ be finitely generated over $\mathbb{Q}$, and
let $f\mathrel{\mathop{\ordinarycolon}}\mathbb{A}^{2}\to\mathbb{A}^{1}$ be any
morphism. Assume by contradiction that $f$ is bijective on rational points.
First, let us show that the generic fiber of $f$ is geometrically irreducible.
This is equivalent to saying that $\operatorname{Spec}k(\mathbb{A}^{2})$ is
geometrically connected over $\operatorname{Spec}k(\mathbb{A}^{1})$, or that
$k(\mathbb{A}^{1})$ is algebraically closed in $k(\mathbb{A}^{2})$. Let
$k(\mathbb{A}^{1})\subseteq L\subseteq k(\mathbb{A}^{2})$ be a subextension
algebraic over $k(\mathbb{A}^{1})$. Let $C\to\mathbb{A}^{1}$ be a finite cover
with $C$ regular and $k(C)=L$. The rational map $\mathbb{A}^{2}\dashrightarrow
C$ is defined in codimension $1$, thus there exists a finite subset
$S\subseteq\mathbb{A}^{2}$ and an extension $\mathbb{A}^{2}\setminus S\to C$.
Since the composition $(\mathbb{A}^{2}\setminus S)(k)\to
C(k)\to\mathbb{A}^{1}(k)$ is surjective up to a finite number of points, by
Hilbert’s irreducibility theorem we have that $C=\mathbb{A}^{1}$, i.e.
$L=k(\mathbb{A}^{1})$.
This leaves us with three cases: the generic fiber is a geometrically
irreducible curve of geometric genus $0$, $1$, or $\geq 2$. The first two have
been settled by W. Sawin in the polymath project [Tao19], while the third
follows from Theorem A. Let us give details for all of them.
### 4.1. Genus 0
Assume that the generic fiber of $f$ has genus $0$. By generic smoothness,
there exists an open subset $U\subseteq\mathbb{A}^{2}$ such that $f|_{U}$ is
smooth. For a generic rational point $u\in U(k)$, the fiber $f^{-1}(f(u))$ is
birational to a Brauer-Severi variety of dimension $1$ and has a smooth
rational point, thus it is birational to $\mathbb{P}^{1}$ and
$f^{-1}(f(u))(k)$ is infinite. This is absurd.
### 4.2. Genus 1
Assume now that the generic fiber has genus $1$. By resolution of
singularities, there exists an open subset $V\subseteq\mathbb{A}^{1}$, a
variety $X$ with a smooth projective morphism
$g\mathrel{\mathop{\ordinarycolon}}X\to V$ whose fibers are smooth genus $1$
curves and a compatible birational map $X\dashrightarrow\mathbb{A}^{2}$. Up to
shrinking $V$, we may suppose that the fibers of $f|_{V}$ are geometrically
irreducible. Let $U$ be a variety with open embeddings $U\subseteq X$,
$U\subseteq\mathbb{A}^{2}$; replace $V$ with $g(U)\subseteq V$, so that
$g|_{U}$ is surjective.
The morphism $X\setminus U\to V$ is finite; let $N$ be its degree. Since $f$
is bijective on rational points, the fibers of $U\to V$ have at most one
rational point, and it follows that $|X_{v}(k)|\leq N+1$ for every $v\in V(k)$.
Every smooth genus $1$ fibration is a torsor for a relative elliptic curve
(namely, its relative $\underline{\operatorname{Pic}}^{0}$), thus there exists
an elliptic curve $E\to V$ such that $X$ is an $E$-torsor. Moreover, every
torsor for an abelian variety is torsion, thus there exists a finite morphism
$\pi\mathrel{\mathop{\ordinarycolon}}X\to E$ over $V$ induced by the
$n$-multiplication map $E\to E$ for some $n$.
If $v\in V(k)$ is such that $X_{v}(k)$ is non-empty, then
$|X_{v}(k)|=|E_{v}(k)|\leq N+1$. This means that, up to composing $\pi$ with
the multiplication-by-$(N+1)!$ map $E\to E$, we may assume that $\pi(X(k))\subseteq
V(k)\subseteq E(k)$, where $V\to E$ is the identity section. In particular,
$X(k)\subseteq\pi^{-1}(V(k))$ is not dense. This is absurd, since $X$ is
birational to $\mathbb{A}^{2}$.
### 4.3. Genus $\geq 2$
Thanks to Theorem A, there exists an open subset $V\subseteq\mathbb{A}^{1}$
and a section $s\mathrel{\mathop{\ordinarycolon}}V\to\mathbb{A}^{2}$. Since
$f$ is bijective on rational points, every fiber over $V(k)$ contains exactly
one rational point, which lies on $s$; it follows that
$\mathbb{A}^{2}|_{V}(k)=s(V(k))$, which is absurd since $s(V)$ is a proper
closed subset and $\mathbb{A}^{2}|_{V}(k)$ is dense.
## References
* [Abr97] Dan Abramovich “A high fibered power of a family of varieties of general type dominates a variety of general type” In _Invent. Math._ 128.3, 1997, pp. 481–494 DOI: 10.1007/s002220050149
* [AV96] Dan Abramovich and José Felipe Voloch “Lang’s conjectures, fibered powers, and uniformity” In _New York J. Math._ 2, 1996, pp. 20–34 (electronic) URL: http://nyjm.albany.edu:8000/j/1996/2_20.html
* [BLR90] Siegfried Bosch, Werner Lütkebohmert and Michel Raynaud “Néron models” 21, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) Springer-Verlag, Berlin, 1990, pp. x+325 DOI: 10.1007/978-3-642-51438-8
* [Bre20] Giulio Bresciani “On the Bombieri-Lang Conjecture over finitely generated fields”, 2020 arXiv:2012.15765 [math.NT]
* [CHM97] Lucia Caporaso, Joe Harris and Barry Mazur “Uniformity of rational points” In _J. Amer. Math. Soc._ 10.1, 1997, pp. 1–35 DOI: 10.1090/S0894-0347-97-00195-1
* [Har77] Robin Hartshorne “Algebraic geometry” Graduate Texts in Mathematics, No. 52 Springer-Verlag, New York-Heidelberg, 1977, pp. xvi+496
* [Iit71] Shigeru Iitaka “On $D$-dimensions of algebraic varieties” In _J. Math. Soc. Japan_ 23, 1971, pp. 356–373 DOI: 10.2969/jmsj/02320356
* [Kle80] Steven L. Kleiman “Relative duality for quasicoherent sheaves” In _Compositio Math._ 41.1, 1980, pp. 39–60 URL: http://www.numdam.org/item?id=CM_1980__41_1_39_0
* [Kol87] János Kollár “Subadditivity of the Kodaira dimension: fibers of general type” In _Algebraic geometry, Sendai, 1985_ 10, Adv. Stud. Pure Math. North-Holland, Amsterdam, 1987, pp. 361–398 DOI: 10.2969/aspm/01010361
* [Mat19] Z. H. (https://mathoverflow.net/users/5098/z-h) “Polynomial bijection from $\mathbb{Q}\times\mathbb{Q}$ to $\mathbb{Q}$?”, MathOverflow, URL: https://mathoverflow.net/q/21003 (version: 2019-06-09)
* [Poo10] Bjorn Poonen “Multivariable polynomial injections on rational numbers” In _Acta Arith._ 145.2, 2010, pp. 123–127 DOI: 10.4064/aa145-2-2
* [Ser97] Jean-Pierre Serre “Lectures on the Mordell-Weil theorem”, Aspects of Mathematics Friedr. Vieweg & Sohn, Braunschweig, 1997, pp. x+218 DOI: 10.1007/978-3-663-10632-6
* [Sta20] The Stacks project authors “The Stacks project”, https://stacks.math.columbia.edu, 2020
* [Str20] Sam Streeter “Hilbert property for double conic bundles and del Pezzo varieties”, 2020 arXiv:1812.05937 [math.AG]
* [Tao19] Terence Tao et al. “Ruling out polynomial bijections over the rationals via Bombieri-Lang?”, https://terrytao.wordpress.com/2019/06/08/ruling-out-polynomial-bijections-over-the-rationals-via-bombieri-lang/, 2019
* [Vie83] Eckart Viehweg “Weak positivity and the additivity of the Kodaira dimension for certain fibre spaces” In _Algebraic varieties and analytic varieties (Tokyo, 1981)_ 1, Adv. Stud. Pure Math. North-Holland, Amsterdam, 1983, pp. 329–353 DOI: 10.2969/aspm/00110329
arXiv:2101.01093
# Breaking Ties: Regression Discontinuity Design Meets Market Design
We thank Nadiya Chadha, Andrew McClintock, Sonali Murarka, Lianna Wright, and
the staff of the New York City Department of Education for answering our
questions and facilitating access to data. Don Andrews, Tim Armstrong, Eduardo
Azevedo, Yeon-Koo Che, Glenn Ellison, Brigham Frandsen, John Friedman, Justine
Hastings, Guido Imbens, Jacob Leshno, Whitney Newey, Ariel Pakes, Pedro
Sant’Anna, Olmo Silva, Hal Varian and seminar participants at Columbia,
Montreal, Harvard, Hebrew University, Google, the NBER Summer Institute, the
NBER Market Design Working Group, the FRB of Minneapolis, CUNY, Yale,
Hitotsubashi, and Tokyo provided helpful feedback. We’re especially indebted
to Adrian Blattner, Nicolas Jimenez, Ignacio Rodriguez, and Suhas Vijaykumar
for expert research assistance and to MIT SEII team leaders Eryn Heying and
Anna Vallee for invaluable administrative support. We gratefully acknowledge
funding from the Laura and John Arnold Foundation, the National Science
Foundation (under awards SES-1056325 and SES-1426541), and the W.T. Grant
Foundation.
Atila Abdulkadiroğlu Duke University and NBER. Email:
[email protected] Joshua D. Angrist MIT and NBER. Email:
[email protected] Yusuke Narita Yale University and NBER. Email:
[email protected] Parag Pathak MIT and NBER. Email: [email protected]
Many schools in large urban districts have more applicants than seats.
Centralized school assignment algorithms ration seats at over-subscribed
schools using randomly assigned lottery numbers, non-lottery tie-breakers like
test scores, or both. The New York City public high school match illustrates
the latter, using test scores and other criteria to rank applicants at
“screened” schools, combined with lottery tie-breaking at unscreened “lottery”
schools. We show how to identify causal effects of school attendance in such
settings. Our approach generalizes regression discontinuity methods to allow
for multiple treatments and multiple running variables, some of which are
randomly assigned. The key to this generalization is a local propensity score
that quantifies the school assignment probabilities induced by lottery and
non-lottery tie-breakers. The local propensity score is applied in an
empirical assessment of the predictive value of New York City’s school report
cards. Schools that receive a high grade indeed improve SAT math scores and
increase graduation rates, though by much less than OLS estimates suggest.
Selection bias in OLS estimates is egregious for screened schools.
## 1 Introduction
Large school districts increasingly use sophisticated centralized assignment
mechanisms to match students and schools. In addition to producing fair and
transparent admissions decisions, centralized assignment offers a unique
resource for research on schools: the data these systems generate can be used
to construct unbiased estimates of school value-added. This research dividend
arises from the tie-breaking embedded in centralized assignment. Many school
assignment schemes rely on the deferred acceptance (DA) algorithm, which takes
as input information on applicant preferences and school priorities. In
settings where seats are scarce, DA rations seats at oversubscribed schools
using tie-breaking variables, producing quasi-experimental assignment of
students to schools.
Many districts break ties with a uniformly distributed random variable, often
described as a lottery number. Abdulkadiroğlu et al. (2017a) show that DA with
lottery tie-breaking assigns students to schools as if in a stratified
randomized trial. That is, conditional on preferences and priorities, the
assignments generated by such systems are randomly assigned and therefore
independent of potential outcomes. In practice, however, preferences and
priorities, which we call applicant type, are too finely distributed for full
non-parametric conditioning to be useful. We must therefore pool applicants of
different types, while avoiding any omitted variables bias that might arise
from the fact that type predicts outcomes.
The key to type pooling is the DA propensity score, defined as the probability
of school assignment conditional on applicant type. In a mechanism with
lottery tie-breaking, conditioning on the scalar DA propensity score is
sufficient to make school assignment independent of potential outcomes.
Moreover, the distribution of the scalar propensity score turns out to be much
coarser than the distribution of types.111The propensity score theorem says
that for research designs in which treatment status, $D_{i}$, is independent
of potential outcomes conditional on covariates, $X_{i}$, treatment status is
also independent of potential outcomes conditional on the propensity score,
that is, conditional on $E[D_{i}|X_{i}]$. In work building on Abdulkadiroğlu
et al. (2017a), the DA propensity score has been used to study schools
(Bergman, 2018), management training (Abebe et al., 2019), and
entrepreneurship training (Pérez Vincent and Ubfal, 2019).
This paper generalizes the propensity score to DA-based assignment mechanisms
in which tie-breaking variables are not limited to randomly assigned lottery
numbers. Selective exam schools, for instance, admit students with high test
scores, and students with higher scores tend to have better achievement and
graduation outcomes regardless of where they enroll. We refer to such
scenarios as involving general tie-breaking.222Non-lottery tie-breaking
embedded in centralized assignment schemes has been used in econometric
research on schools in Chile (Hastings et al., 2013; Zimmerman, 2019), Ghana
(Ajayi, 2014), Italy (Fort et al., 2020), Kenya (Lucas and Mbiti, 2014),
Norway (Kirkeboen et al., 2016), Romania (Pop-Eleches and Urquiola, 2013),
Trinidad and Tobago (Jackson, 2010, 2012; Beuermann et al., 2016), and the
U.S. (Abdulkadiroğlu et al., 2014; Dobbie and Fryer, 2014; Barrow et al.,
2016). These studies treat individual schools and tie-breakers in isolation,
without exploiting centralized assignment. Related methodological work
exploring regression discontinuity designs with multiple assignment variables
and multiple cutoffs includes Papay et al. (2011); Zajonc (2012); Wong et al.
(2013a); Cattaneo et al. (2016a). Matching markets with general tie-breaking
raise challenges beyond those addressed in the Abdulkadiroğlu et al. (2017a)
study of DA with lottery tie-breaking.
The most important complication raised by general tie-breaking arises from the
fact that seat assignment is no longer independent of potential outcomes
conditional on applicant type. This problem is intimately entwined with the
identification challenge raised by regression discontinuity (RD) designs,
which typically compare candidates for treatment on either side of a
qualifying test score cutoff. In particular, non-lottery tie-breakers play the
role of an RD running variable and are likewise a source of omitted variables
bias. The setting of interest here, however, is far more complex than the
typical RD design: DA may involve many treatments, tie-breakers, and cutoffs.
A further barrier to causal inference comes from the fact that the propensity
score in this general tie-breaking setting depends on the unknown distribution
of non-lottery tie-breakers conditional on type. Consequently, the propensity
score under general tie-breaking may be no coarser than the underlying high-
dimensional type distribution. When the score distribution is no coarser than
the type distribution, score conditioning is pointless.
These problems are solved here by introducing a local DA propensity score that
quantifies the probability of school assignment induced by a combination of
non-lottery and lottery tie-breakers. This score is “local” in the sense that
it is constructed using the fact that continuously distributed non-lottery
tie-breakers are locally uniformly distributed. Combining this property with
the (globally) known distribution of lottery tie-breakers yields a formula for
the assignment probabilities induced by any DA match. Conditional on the local
DA propensity score, school assignments are shown to be asymptotically
randomly assigned. Moreover, like the DA propensity score for lottery tie-
breaking, the local DA propensity score has a distribution far coarser than
the underlying type distribution.
Our analytical approach builds on Hahn et al. (2001) and other pioneering
econometric contributions to the development of non-parametric RD designs. We
also build on the more recent local random assignment interpretation of
nonparametric RD.333See, among others, Frolich (2007); Cattaneo et al. (2015,
2017); Frandsen (2017); Sekhon and Titiunik (2017); Frolich and Huber (2019);
and Arai et al. (2019). The resulting theoretical framework allows us to
quantify the probability of school assignment as a function of a few features
of student type and tie-breakers, such as proximity to the admissions cutoffs
determined by DA and the identity of key cutoffs for each applicant. By
integrating nonparametric RD with Rosenbaum and Rubin (1983)’s propensity
score theorem and large-market matching theory, our theoretical results
provide a framework suitable for causal inference in a wide variety of
applications.
The research value of the local DA propensity score is demonstrated through an
analysis of New York City (NYC) high school report cards. Specifically, we ask
whether schools distinguished by “Grade A” on the district’s school report
card indeed signify high quality schools that boost their students’
achievement and improve other outcomes. Alternatively, the good performance of
most Grade A students may reflect omitted variables bias. The distinction
between causal effects and omitted variables bias is especially interesting in
light of an ongoing debate over access to New York’s academically selective
schools, also called screened schools, which are especially likely to be
graded A (see, e.g., Brody (2019) and Veiga (2018)). We identify the causal
effects of Grade A school attendance by exploiting the NYC high school match.
NYC employs a DA mechanism integrating non-lottery screened school tie-
breaking with a common lottery tie-breaker at lottery schools. In fact, NYC
screened schools design their own tie-breakers based on middle school
transcripts, interviews, and other factors.
The effects of Grade A school attendance are estimated here using instrumental
variables constructed from the school assignment offers generated by the NYC
high school match. Specifically, our two-stage least squares (2SLS) estimators
use assignment offers as instrumental variables for Grade A school attendance,
while controlling for the local DA propensity score. The resulting estimates
suggest that Grade A attendance boosts SAT math scores modestly and may
increase high school graduation rates a little. But these effects are much
smaller than those the corresponding ordinary least squares (OLS) estimates of
Grade A value-added would suggest.
We also compare 2SLS estimates of Grade A effects computed separately for
NYC’s screened and lottery schools, a comparison that shows the two sorts of
schools to have similar effects. This finding therefore implies that OLS
estimates showing a large Grade A screened school advantage are especially
misleading. The distinction between screened and lottery schools has been
central to the ongoing debate over NYC school access and quality. Our
comparison suggests that the public concern with this sort of treatment effect
heterogeneity may be misplaced. Treatment effect heterogeneity may be limited,
supporting our assumption of constant treatment effects conditional on
observables.444The analysis here allows for treatment effect heterogeneity as
a function of observable student and school characteristics. Our working paper
shows how DA in markets with general tie-breaking identifies average causal
affects for applicants with tie-breaker values away from screened-school
cutoffs (Abdulkadiroğlu et al. (2019)). We leave other questions related to
unobserved heterogeneity for future work.
The next section shows how DA can be used to identify causal effects of school
attendance. Section 3 illustrates key ideas in a setting with a single non-
lottery tie-breaker. Section 4 derives a formula for the local DA propensity
score in a market with general tie-breaking. This section also derives a
consistent estimator of the local propensity score. Section 5 uses these
theoretical results to estimate causal effects of attending Grade A
schools.555Our theoretical analysis covers any mechanism that can be computed
by student-proposing DA. This DA class includes student-proposing DA, serial
dictatorship, the immediate acceptance (Boston) mechanism (Ergin and Sönmez,
2006), China’s parallel mechanisms (Chen and Kesten, 2017), England’s first-
preference-first mechanisms (Pathak and Sönmez, 2013), and the Taiwan
mechanism (Dur et al., 2018). In large markets satisfying regularity
conditions that imply a unique stable matching, the relevant DA class also
includes school-proposing as well as applicant-proposing DA (these conditions
are spelled out in Azevedo and Leshno (2016)). The DA class omits the Top
Trading Cycles (TTC) mechanism defined for school choice by Abdulkadiroğlu and
Sönmez (2003).
## 2 Using Centralized Assignment to Eliminate Omitted Variables Bias
The NYC school report cards published from 2007-13 graded high schools on the
basis of student achievement, graduation rates, and other criteria. These
grades were part of an accountability system meant to help parents choose high
quality schools. In practice, however, report card grades computed without
extensive control for student characteristics reflect students’ ability and
family background as well as school quality. Systematic differences in student
body composition are a powerful source of bias in school report cards. It’s
therefore worth asking whether a student who is randomly assigned to a Grade A
high school indeed learns more and is more likely to graduate as a result.
We answer this question using instrumental variables derived from NYC’s DA-
based assignment of high school seats. The NYC high school match generates a
single school assignment for each applicant as a function of applicants’
preferences over schools, school-specific priorities, and a set of tie-
breaking variables that distinguish between applicants who share preferences
and priorities.666Seat assignment at some of NYC’s selective enrollment “exam
schools” is determined by a separate match. NYC charter schools use school-
specific lotteries. Applicants are free to seek exam school and charter school
seats as well as an assignment in the traditional sector. Because they’re a
function of student characteristics like preferences and test scores, NYC
assignments are not randomly assigned. We show, however, that conditional on
the local DA propensity score, DA-generated assignments of a seat at school
$s$ provide credible instruments for enrollment at school $s$. This result
motivates a two-stage least squares (2SLS) specification where the endogenous
treatment is enrollment at any Grade A school while the instrument is DA-
generated assignment of a seat at any Grade A school.
Our identification strategy builds on the large-market “continuum” model of DA
detailed in Abdulkadiroğlu et al. (2017a). The large-market model is extended
here to allow for multiple and non-lottery tie-breakers. To that end, let
$s=0,1,...,S$ index schools, where $s=0$ represents an outside option.
Applicants are assumed to be identified by an index, $i$, drawn from the unit
interval $[0,1]$. The large market model is “large” by virtue of this
assumption.
Applicant $i$’s preferences over schools constitute a strict partial ordering,
$\succ_{i}$, where $a\succ_{i}b$ means that $i$ prefers school $a$ to school
$b$. Each applicant is also granted a priority at every school. For example,
schools may prioritize applicants who live nearby or with currently enrolled
siblings. Let $\rho_{is}\in\\{1,...,K,\infty\\}$ denote applicant $i$’s
priority at school $s$, where $\rho_{is}<\rho_{js}$ means school $s$
prioritizes $i$ over $j$. We use $\rho_{is}=\infty$ to indicate that $i$ is
ineligible for school $s.$ The vector $\rho_{i}=(\rho_{i1},...,\rho_{iS})$
records applicant $i$’s priorities at each school. Applicant type is then
defined as $\theta_{i}=(\succ_{i},\rho_{i})$, that is, the combination of an
applicant’s preferences and priorities at all schools. Let $\Theta_{s}$ denote
the set of types, $\theta$, that rank $s$.
In addition to applicant type, DA matches applicants to seats as a function of
a set of tie-breaking variables. We leave DA mechanics for Section 4; at this
point, it’s enough to establish notation for DA inputs. Most importantly, our
analysis of markets with general tie-breaking requires notation to keep track
of tie-breakers. Let $v\in\\{1,...,V\\}$ index tie-breakers and let $S_{v}$ be
the set of schools using tie-breaker $v$. We assume that each school uses a
single tie-breaker. Scalar random variable $R_{iv}$ denotes applicant $i$’s
tie-breaker $v$. Some of these are uniformly distributed lottery numbers. The
set of non-lottery $R_{iv}$ used at schools ranked by applicant $i$ is
collected in the vector $\mathcal{R}_{i}$. Without loss of generality, we
assume that ties are broken in favor of applicants with the smaller tie-
breaker value. DA uses $\theta_{i}$, $\mathcal{R}_{i}$, and the set of lottery
tie-breakers for all $i$ to assign applicants to schools.
We are interested in using the assignment variation resulting from DA to
estimate the causal effect of $C_{i}$, a variable indicating student $i$’s
attendance at (or years of enrollment in) any Grade A school. Outcome
variables, denoted $Y_{i}$, include SAT scores and high school graduation
status. In a DA match like the one in NYC, $C_{i}$ is not randomly assigned,
but rather reflects student preferences, school priorities, tie-breaking
variables, as well as decisions whether or not to enroll at school $s$ when
offered a seat there through the match. The potential for omitted variables
bias induced by the process determining $C_{i}$ can be eliminated by an
instrumental variables strategy that exploits our understanding of the
structure of matching markets.
The instruments used for this purpose are a function of individual school
assignments, indicated by $D_{i}(s)$ for the assignment of student $i$ to a
seat at school $s$. Because DA generates a single assignment for each student,
a dummy for any Grade A assignment, denoted $D_{Ai}$, is the sum of dummies
indicating all assignments to individual Grade A schools. $D_{Ai}$ provides a
natural instrument for $C_{i}$. In particular, we show below that 2SLS
consistently estimates the effect of $C_{i}$ on $Y_{i}$ in the context of a
linear constant-effects causal model that can be written as:
$Y_{i}=\beta C_{i}+f_{2}(\theta_{i},\mathcal{R}_{i},\delta)+\eta_{i},$ (1)
where $\beta$ is the causal effect of interest and the associated first stage
equation is
$C_{i}=\gamma D_{Ai}+f_{1}(\theta_{i},\mathcal{R}_{i},\delta)+\nu_{i}.$ (2)
The terms $f_{1}(\theta_{i},\mathcal{R}_{i},\delta)$ and
$f_{2}(\theta_{i},\mathcal{R}_{i},\delta)$ are functions of type and non-
lottery tie-breakers, as well as a bandwidth, $\delta\in\mathbb{R}$, that’s
integral to the local DA propensity score.
Our goal is to specify $f_{1}(\theta_{i},\mathcal{R}_{i},\delta)$ and
$f_{2}(\theta_{i},\mathcal{R}_{i},\delta)$ so that 2SLS estimates of $\beta$
are consistent. Because (1) is seen as a model for potential outcomes rather
than a regression equation, consistency requires that $D_{Ai}$ and $\eta_{i}$
be uncorrelated. The relevant identification assumption can be written:
$E[\eta_{i}D_{Ai}]\approx 0,$ (3)
where $\approx$ means asymptotic equality as $\delta\rightarrow 0$, in a
manner detailed below. Briefly, our main theoretical result establishes
limiting local conditional mean independence of school assignments from
applicant characteristics and potential outcomes, yielding (3). This result
specifies $f_{1}(\theta_{i},\mathcal{R}_{i},\delta)$ and
$f_{2}(\theta_{i},\mathcal{R}_{i},\delta)$ to be easily-computed functions of
the local propensity score and elements of $\mathcal{R}_{i}$.
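The mechanics of equations (1)-(3) can be illustrated with a small simulation. The sketch below is not the paper's estimation code: the data-generating process, the single linear score control (the paper uses the local DA propensity score), and all variable names are invented for illustration, and the just-identified 2SLS estimator is computed directly as $(Z'X)^{-1}Z'Y$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented data: a coarse stand-in for the propensity score, a Grade A
# offer D that is random given the score, enrollment C shifted by
# unobserved ability (the confound), and outcome Y with true beta = 1.
score = rng.choice([0.25, 0.5, 0.75], size=n)
D = rng.binomial(1, score).astype(float)
ability = rng.normal(size=n)
C = (rng.uniform(size=n) < 0.1 + 0.6 * D + 0.2 * (ability > 0)).astype(float)
beta = 1.0
Y = beta * C + 2.0 * score + ability + rng.normal(size=n)

X = np.column_stack([C, score, np.ones(n)])   # second-stage regressors
Z = np.column_stack([D, score, np.ones(n)])   # offer instrument + controls
b = np.linalg.solve(Z.T @ X, Z.T @ Y)         # just-identified 2SLS
ols = np.linalg.lstsq(X, Y, rcond=None)[0]    # OLS absorbs the ability confound
```

Because the offer is ignorable conditional on the score, the 2SLS coefficient `b[0]` recovers $\beta$, while the OLS coefficient on enrollment is biased upward by the omitted ability term.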
Abdulkadiroğlu et al. (2017a) derive the relevant DA propensity score for a
scenario with lottery tie-breaking only. Lottery tie-breaking obviates the
need for a bandwidth and control for components of $\mathcal{R}_{i}$. Many
applications of DA use non-lottery tie-breaking, however. The task here is to
derive the propensity score for elaborate matches like that in NYC, which
combines lottery tie-breaking with many school-specific non-lottery tie-
breakers. The resulting estimation strategy integrates propensity score
methods with the nonparametric approach to RD (introduced by Hahn et al.
(2001)), and the local random assignment model of RD (discussed by Frolich
(2007); Cattaneo et al. (2015, 2017); Frandsen (2017), among others). Our
theoretical results can also be seen as generalizing nonparametric RD to allow
for many schools (treatments), many tie-breakers (running variables), and many
cutoffs.
## 3 From Non-Lottery Tie-Breaking to Random Assignment in Serial
Dictatorship
An analysis of a market with a single non-lottery tie-breaker and no
priorities illuminates key elements of our approach. DA in this case is known
as serial dictatorship. Like the general local DA score, the local DA score
for serial dictatorship depends only on a handful of statistics, including
admissions cutoffs for schools ranked, and whether applicant $i$’s tie-breaker
is close to cutoffs for schools using non-lottery tie-breakers. Conditional on
this local propensity score, school offers are asymptotically randomly
assigned.
Serial dictatorship can be described as follows:
> Order applicants by tie-breaker. Proceeding in order, assign each applicant
> to his or her most preferred school among those with seats remaining.
Seating is constrained by a capacity vector,
$q=(q_{0},q_{1},q_{2},...,q_{S})$, where $q_{s}\in[0,1]$ is defined as the
proportion of the unit interval that can be seated at school $s$. We assume
$q_{0}=1$. Serial dictatorship is used in Boston and New York City to allocate
seats at selective public exam schools.
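The quoted description translates directly into code. The snippet below is a hypothetical toy implementation, not any district's mechanism; the applicant names, tie-breaker values, and capacities are invented.

```python
def serial_dictatorship(preferences, tie_breakers, capacities):
    """Toy serial dictatorship: order applicants by tie-breaker (smaller
    wins) and seat each at their most preferred school with room left."""
    seats = dict(capacities)
    assignment = {}
    for i in sorted(preferences, key=lambda a: tie_breakers[a]):
        assignment[i] = None                  # unassigned by default
        for s in preferences[i]:
            if seats.get(s, 0) > 0:
                seats[s] -= 1
                assignment[i] = s
                break
    return assignment

match = serial_dictatorship(
    preferences={"i": ["A", "B"], "j": ["A", "B"], "k": ["A", "B"]},
    tie_breakers={"i": 0.2, "j": 0.5, "k": 0.9},
    capacities={"A": 1, "B": 1},
)
```

With one seat each at A and B, the lowest tie-breaker takes A, the next takes B, and the last applicant goes unassigned.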
Because serial dictatorship relies on a single tie-breaker, notation for the
set of non-lottery tie-breakers, $\mathcal{R}_{i}$, can be replaced by a
scalar, $R_{i}$. As in Abdulkadiroğlu et al. (2017a), tie-breakers for
individuals are modelled as stochastic, meaning they are drawn from a
distribution for each applicant. Although $R_{i}$ is not necessarily uniform,
we assume that it’s distributed with positive density over $[0,1]$, with
continuously differentiable cumulative distribution function, $F^{i}_{R}$.
These common support and smoothness assumptions notwithstanding, tie-breakers
may be correlated with type, so that $R_{i}$ and $R_{j}$ for applicants $i$
and $j$ are not necessarily identically distributed, though they’re assumed to
be independent of one another. The probability that type $\theta$ applicants
have a tie-breaker below any value $r$ is $F_{R}(r|\theta)\equiv
E[F^{i}_{R}(r)|\theta_{i}=\theta]$, where $F^{i}_{R}(r)$ is $F^{i}_{R}$
evaluated at $r$.
The serial dictatorship allocation is characterized by a set of tie-breaker
cutoffs, denoted $\tau_{s}$ for school $s$. For any school $s$ that’s filled
to capacity, $\tau_{s}$ is given by the tie-breaker of the last (highest tie-
breaker value) student assigned to $s$. Otherwise, $\tau_{s}=1$, a non-binding
cutoff reflecting excess capacity. We say an applicant qualifies at $s$ when
they have a tie-breaker value that clears $\tau_{s}$. Under serial
dictatorship, students are assigned to $s$ if and only if they:
* qualify at $s$ (since seats are assigned in tie-breaker order)
* fail to qualify at any school they prefer to $s$ (since serial dictatorship assigns available seats at preferred schools first)
In large markets, cutoffs are constant, so stochastic variation in seat
assignments arises solely from the distribution of tie-breakers.
### 3.1 The Serial Dictatorship Propensity Score
Which cutoffs matter? Under serial dictatorship, the assignment probability
faced by an applicant of type $\theta$ at school $s$ is determined by the
cutoff at $s$ and by cutoffs at schools preferred to $s$. By virtue of single
tie-breaking, it’s enough to know only one of the latter. In particular, an
applicant who fails to clear the highest cutoff among those at schools
preferred to $s$ surely fails to do better than $s$. This leads us to define
most informative disqualification (MID), a scalar parameter for each applicant
type and school. MID tells us how the tie-breaker distribution among type
$\theta$ applicants to $s$ is truncated by disqualification at the schools
type $\theta$ applicants prefer to $s$.
Because MID for type $\theta$ at school $s$ is defined with reference to the
set of schools $\theta$ prefers to $s$, we define:
$B_{\theta s}=\{s^{\prime}\neq s\mid s^{\prime}\succ_{\theta}s\}\text{ for each }\theta\in\Theta_{s},$
(4)
the set of schools type $\theta$ prefers to $s$. For each type and school,
$MID_{\theta s}$ is a function of tie-breaker cutoffs at schools in $B_{\theta
s}$, specifically:
$MID_{\theta s}\equiv\begin{cases}0&\text{if }B_{\theta s}=\emptyset\\ \max\{\tau_{b}\mid b\in B_{\theta s}\}&\text{otherwise.}\end{cases}$ (7)
$MID_{\theta s}$ is zero when school $s$ is ranked first since all who rank
$s$ first compete for a seat there. The second line reflects the fact that an
applicant who ranks $s$ second is seated there only when disqualified at the
school they’ve ranked first, while applicants who rank $s$ third are seated
there when disqualified at their first and second choices, and so on.
Moreover, anyone who fails to clear cutoff $\tau_{b}$ is surely disqualified
at schools with less forgiving cutoffs. For example, applicants who fail to
qualify at a school with a cutoff of 0.6 are disqualified at a school with
cutoff 0.4.
Note that an applicant of type $\theta$ cannot be seated at $s$ when
$MID_{\theta s}>\tau_{s}$. This is the scenario sketched in the top panel of
Figure 1, which illustrates the forces determining SD assignment rates. On the
other hand, assignment rates when $MID_{\theta s}\leq\tau_{s}$ are given by
the probability that:
$MID_{\theta s}<R_{i}\leq\tau_{s},$
an event described in the middle panel of Figure 1. These facts are collected
in the following proposition, which is implied by a more general result for DA
proved in the online appendix.
###### Proposition 1 (Propensity Score in Serial Dictatorship).
Suppose seats in a large market are assigned by serial dictatorship. Let
$p_{s}(\theta)=E[D_{i}(s)|\theta_{i}=\theta]$ denote the type $\theta$
propensity score for assignment to $s$. For all schools $s$ and
$\theta\in\Theta_{s}$, we have:
$p_{s}(\theta)=\max\left\{0,F_{R}(\tau_{s}|\theta)-F_{R}(MID_{\theta s}|\theta)\right\}.$
Proposition 1 says that the serial dictatorship assignment probability,
positive only when the tie-breaker cutoff at $s$ exceeds $MID_{\theta s}$, is
given by the size of the group with $R_{i}$ between $MID_{\theta s}$ and
$\tau_{s}$. This is
$F_{R}(\tau_{s}|\theta)-F_{R}(MID_{\theta s}|\theta).$
With a uniformly distributed lottery number, the serial dictatorship
propensity score simplifies to $\tau_{s}-MID_{\theta s}$, a scenario noted in
Figure 1. In this case, the assignment probability for each applicant is
determined by $\tau_{s}$ and $MID_{\theta s}$ alone. Given these two cutoffs,
seats at $s$ are randomly assigned.
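Proposition 1 is straightforward to evaluate numerically. The sketch below (a hypothetical helper, not the authors' code) passes the type-conditional CDF $F_{R}(\cdot|\theta)$ as a function; the identity default is the uniform-lottery case, where the score reduces to $\tau_{s}-MID_{\theta s}$.

```python
def sd_propensity(tau_s, mid_theta_s, F=lambda r: r):
    """Proposition 1: p_s(theta) = max{0, F(tau_s) - F(MID_{theta s})}.
    F is the type-conditional tie-breaker CDF; the identity default
    corresponds to a uniform lottery number."""
    return max(0.0, F(tau_s) - F(mid_theta_s))
```

For example, with a uniform tie-breaker, a cutoff of 0.75 and MID of 0.25 give a score of 0.5, while a MID above the cutoff gives a score of zero; a non-uniform CDF such as $F(r)=r^{2}$ changes the probability mass between the same two cutoffs.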
### 3.2 Serial Dictatorship Goes Local
With non-lottery tie-breaking, the serial dictatorship propensity score
depends on the conditional distribution function, $F_{R}(\cdot|\theta)$
evaluated at $\tau_{s}$ and $MID_{\theta s}$, rather than the cutoffs
themselves. This dependence leaves us with two econometric challenges. First,
$F_{R}(\cdot|\theta)$ is unknown. This precludes computation of the propensity
score by repeatedly sampling from $F_{R}(\cdot|\theta)$. Second,
$F_{R}(\cdot|\theta)$, is likely to depend on $\theta$, so the score in
Proposition 1 need not have coarser support than does $\theta$. This is in
spite of the fact that many applicants with different values of $\theta$ share the
same $MID_{\theta s}$. Finally, although controlling for $p_{s}(\theta)$
eliminates confounding from type, assignments are a function of tie-breakers
as well as type. Confounding from non-lottery tie-breakers remains even after
conditioning on $p_{s}(\theta)$.
These challenges are met here by focusing on assignment probabilities for
applicants with tie-breaker realizations close to key cutoffs. Specifically,
for each $\tau_{s}$, define an interval, $(\tau_{s}-\delta,\tau_{s}+\delta]$,
where parameter $\delta$ is a bandwidth analogous to that used for
nonparametric RD estimation. A local propensity score treats the qualification
status of applicants inside this interval as randomly assigned. This
assumption is justified by the fact that, given continuous differentiability
of tie-breaker distributions, non-lottery tie-breakers have a limiting uniform
distribution as the bandwidth shrinks to zero.
Figure 1: Assignment Probabilities under Serial Dictatorship
Notes: This figure illustrates the assignment probability at school $s$ under
serial dictatorship. $R_{i}$ is the tie-breaker. $MID_{\theta s}$ is the most
forgiving cutoff at schools preferred to $s$ and $\tau_{s}$ is the cutoff at
$s$.
The following Proposition uses this fact to characterize the local serial
dictatorship propensity score:
###### Proposition 2 (Local Serial Dictatorship Propensity Score).
Suppose seats in a large market are assigned by serial dictatorship. Also, let
$W_{i}$ be any applicant characteristic other than type that is unchanged by
school assignment.777Let $W_{i}=\Sigma_{s}D_{i}(s)W_{i}(s)$, where $W_{i}(s)$
is the potential value of $W_{i}$ revealed when $D_{i}(s)=1$. We say $W_{i}$
is unchanged by school assignment when $W_{i}(s)=W_{i}(s^{\prime})$ for all
$s\neq s^{\prime}$. Examples include demographic characteristics and potential
outcomes. Finally, assume $\tau_{s}\neq\tau_{s^{\prime}}$ for all $s\neq
s^{\prime}$ unless both are 1. Then,
$E[D_{i}(s)|\theta_{i}=\theta,W_{i}=w]=0\text{ if }\tau_{s}<MID_{\theta s}.$
Otherwise,
$\lim_{\delta\rightarrow 0}E[D_{i}(s)|\theta_{i}=\theta,W_{i}=w,R_{i}\leq MID_{\theta s}-\delta]=\lim_{\delta\rightarrow 0}E[D_{i}(s)|\theta_{i}=\theta,W_{i}=w,R_{i}>\tau_{s}+\delta]=0,$

$\lim_{\delta\rightarrow 0}E[D_{i}(s)|\theta_{i}=\theta,W_{i}=w,R_{i}\in(MID_{\theta s}+\delta,\tau_{s}-\delta]]=1,$

$\lim_{\delta\rightarrow 0}E[D_{i}(s)|\theta_{i}=\theta,W_{i}=w,R_{i}\in(MID_{\theta s}-\delta,MID_{\theta s}+\delta]]=\lim_{\delta\rightarrow 0}E[D_{i}(s)|\theta_{i}=\theta,W_{i}=w,R_{i}\in(\tau_{s}-\delta,\tau_{s}+\delta]]=0.5.$
This follows from a more general result for DA presented in the next section.
Proposition 2 describes a key conditional independence result: the limiting
local probability of seat assignment in serial dictatorship takes on only
three values and is unrelated to applicant characteristics. Note that the
cases enumerated in the proposition (when $\tau_{s}>MID_{\theta s}$) partition
the tie-breaker line as sketched in Figure 1. Applicants with tie-breaker
values above the cutoff at $s$ are disqualified at $s$ and so cannot be seated
there, while applicants with tie-breaker values below $MID_{\theta s}$ are
qualified at a school they prefer to $s$ and so will be seated elsewhere.
Applicants with tie-breakers strictly between $MID_{\theta s}$ and $\tau_{s}$
are surely assigned to $s$. Finally, type $\theta$ applicants with tie-
breakers near either $MID_{\theta s}$ or the cutoff at $s$ are seated with
probability approximately equal to one-half. Nearness in this case means
inside the interval defined by bandwidth $\delta$.
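For the case $\tau_{s}>MID_{\theta s}$, the three limiting values in Proposition 2 partition the tie-breaker line. A hypothetical classifier (invented names, toy numbers) makes the partition concrete:

```python
def local_sd_score(r, tau_s, mid_theta_s, delta):
    """Limiting local score of Proposition 2 (assumes tau_s > MID):
    0.5 within delta of either cutoff, 1 strictly between the two
    delta-windows, 0 elsewhere (disqualified at s, or qualified at a
    school preferred to s)."""
    if mid_theta_s - delta < r <= mid_theta_s + delta:
        return 0.5
    if tau_s - delta < r <= tau_s + delta:
        return 0.5
    if mid_theta_s + delta < r <= tau_s - delta:
        return 1.0
    return 0.0
```

With $MID_{\theta s}=0.2$, $\tau_{s}=0.8$, and $\delta=0.05$, a tie-breaker of 0.5 yields a sure assignment, 0.18 or 0.82 land in a cutoff window with score one-half, and 0.1 or 0.9 yield zero.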
The driving force behind Proposition 2 is the assumption that the tie-breaker
distribution is continuously differentiable. In a shrinking window, the tie-
breaker density therefore approaches that of a uniform distribution, so the
limiting qualification rate is one-half (See Abdulkadiroğlu et al. (2017b) or
Bugni and Canay (2018) for formal proof of this claim). The assumption of a
continuously differentiable tie-breaker distribution is analogous to the
continuous running variable assumption invoked in Lee (2008) and to a local
smoothness assumption in Dong (2018). Continuity of tie-breaker distributions
implies a weaker smoothness condition asserting continuity at cutoffs of the
conditional expectation functions of potential outcomes given running
variables. We favor the stronger continuity assumption because the implied
local random assignment provides a scaffold for construction of assignment
probabilities in more complicated matching scenarios.888The connection between
continuity of running variable distributions and conditional expectation
functions is noted by Dong (2018) and Arai et al. (2019). Antecedents for the
local random assignment idea include an unpublished appendix to Frolich (2007)
and an unpublished draft of Frandsen (2017), which shows something similar for
an asymmetric bandwidth. See also Cattaneo et al. (2015) and Frolich and Huber
(2019).
## 4 The Local DA Propensity Score
Many school districts assign seats using a version of student-proposing DA,
which can be described like this:
> Each applicant proposes to his or her most preferred school. Each school
> ranks these proposals, first by priority then by tie-breaker within priority
> groups, provisionally admitting the highest-ranked applicants in this order
> up to its capacity. Other applicants are rejected.
>
> Each rejected applicant proposes to his or her next most preferred school.
> Each school ranks these new proposals together with applicants admitted
> provisionally in the previous round, first by priority and then by tie-
> breaker. From this pool, the school again provisionally admits those ranked
> highest up to capacity, rejecting the rest.
>
> The algorithm terminates when there are no new proposals (some applicants
> may remain unassigned).
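The rounds just described can be summarized in a short program. The sketch below is illustrative only (applicant names, priorities, and tie-breaker values are all invented); it orders applicants at each school by the position $\pi_{is}=\rho_{is}+R_{iv(s)}$ defined later in this section.

```python
# Student-proposing deferred acceptance with priorities and tie-breakers.
# Illustrative sketch only; all names and numbers below are invented.

def deferred_acceptance(prefs, priority, tie_breaker, capacity):
    """prefs[i]: applicant i's schools in preference order.
    priority[(i, s)]: integer priority of i at s (lower is better).
    tie_breaker[(i, s)]: tie-breaker of i at s, in (0, 1].
    Schools rank proposals by position = priority + tie-breaker."""
    next_choice = {i: 0 for i in prefs}   # next school on each list
    held = {s: [] for s in capacity}      # provisional admits, per school
    free = set(prefs)                     # applicants not held anywhere
    while free:
        i = free.pop()
        if next_choice[i] >= len(prefs[i]):
            continue                      # list exhausted: i stays unassigned
        s = prefs[i][next_choice[i]]
        next_choice[i] += 1
        held[s].append(i)
        # Rank the pool by position; priority gaps dominate tie-breakers.
        held[s].sort(key=lambda j: priority[(j, s)] + tie_breaker[(j, s)])
        if len(held[s]) > capacity[s]:
            free.add(held[s].pop())       # reject the worst-positioned applicant
    return {i: s for s, pool in held.items() for i in pool}

# Three applicants, two schools, one seat at X and two at Y.
prefs = {"Ann": ["X", "Y"], "Bob": ["X", "Y"], "Cal": ["X"]}
priority = {("Ann", "X"): 0, ("Bob", "X"): 0, ("Cal", "X"): 1,
            ("Ann", "Y"): 0, ("Bob", "Y"): 0}
tie = {("Ann", "X"): 0.7, ("Bob", "X"): 0.2, ("Cal", "X"): 0.5,
       ("Ann", "Y"): 0.7, ("Bob", "Y"): 0.2}
match = deferred_acceptance(prefs, priority, tie, {"X": 1, "Y": 2})
print(match)  # Bob wins the seat at X; Ann ends up at Y; Cal is unassigned
```

Because proposals are processed one at a time, the order in which applicants are popped from `free` does not affect the final matching, a standard property of student-proposing DA.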
Different schools may use different tie-breakers. For example, the NYC high
school match includes a diverse set of screened schools (Abdulkadiroğlu et
al., 2005, 2009). These schools order applicants using school-specific tie-
breakers that are derived from interviews, auditions, or GPA in earlier
grades, as well as test scores. The NYC match also includes many unscreened
schools, referred to here as lottery schools, that use a uniformly distributed
lottery number as tie-breaker. Lottery numbers are distributed independently
of type and potential outcomes, but non-lottery tie-breakers like entrance
exam scores almost certainly depend on these variables.
### 4.1 Key Assumptions and Main Theorem
We adopt the convention that tie-breaker indices are ordered such that lottery
tie-breakers come first. That is, $v\in\\{1,...,U\\}$, where $U\leq V$,
indexes $U$ lottery tie-breakers. Each lottery tie-breaker, $R_{iv}$ for
$v\in\\{1,...,U\\}$, is uniformly distributed over $[0,1]$. Non-lottery tie-
breakers are indexed by $v\in\\{U+1,...,V\\}$. The assumptions employed with
general tie-breaking are summarized as follows:
###### Assumption 1.
1. (i)
For any tie-breaker indexed by $v\in\\{1,...,V\\}$ and applicants $i\neq j$,
tie-breakers $R_{iv}$ and $R_{jv}$ are independent, though not necessarily
identically distributed.
2. (ii)
The unconditional joint distribution of non-lottery tie-breakers
$\\{R_{iv};v=U+1,...,V\\}$ for applicant $i$ is continuously differentiable
with positive density over $[0,1]$.
Let $v(s)$ be a function that returns the index of the tie-breaker used at
school $s$. By definition, $s\in S_{v(s)}$. To combine applicants’ priority
status and tie-breaking variables into a single number for each school, we
define applicant position at school $s$ as:
$\pi_{is}=\rho_{is}+R_{iv(s)}.$
Since the difference between any two priorities is at least 1 and tie-breaking
variables are between 0 and 1, applicant order by position at $s$ is
lexicographic, first by priority then by tie-breaker. As noted in the
discussion of serial dictatorship, we distinguish between tie-breakers and
priorities because the latter are fixed, while the former are random
variables.
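A quick numeric check of the lexicographic claim, with invented priorities and tie-breaker draws:

```python
# Sorting on the single number pi = rho + R reproduces lexicographic order:
# because priorities differ by at least 1 and tie-breakers lie in (0, 1],
# any priority gap dominates any tie-breaker difference. Values invented.
applicants = [("i", 1, 0.05), ("j", 0, 0.98), ("k", 0, 0.10)]  # (name, rho, R)
order = sorted(applicants, key=lambda a: a[1] + a[2])
print([name for name, _, _ in order])  # ['k', 'j', 'i']
```

Applicant $i$ has the best tie-breaker draw but the worst priority, so $i$ sorts last despite $R_{i}=0.05$.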
We also generalize cutoffs to incorporate priorities; these DA cutoffs are
denoted $\xi_{s}$. For any school $s$ that ends up filled to capacity,
$\xi_{s}$ is given by $\max_{i}\\{\pi_{is}|D_{i}(s)=1\\}$. Otherwise, we set
$\xi_{s}=K+1$ to indicate that $s$ has slack (recall that $K$ is the lowest
possible priority).
DA assigns a seat at school $s$ to any applicant $i$ ranking $s$ who has
$\pi_{is}\leq\xi_{s}\text{ and }\pi_{ib}>\xi_{b}\text{ for all }b\succ_{i}s.$
(8)
This is a consequence of the fact that student-proposing DA is stable.⁹ (In
particular, if an applicant is seated at $s$ but prefers $b$, she must be
qualified at $s$ and not have been assigned to $b$. Moreover, since
DA-generated assignments at $b$ are made in order of position, applicants not
assigned to $b$ must be disqualified there.) In large markets, $\xi_{s}$ is
fixed as tie-breakers are drawn and re-drawn. DA-induced school assignment
rates are therefore determined by the distribution of stochastic tie-breakers
evaluated at fixed school cutoffs. Condition (8) nests our characterization of
seat assignment under serial dictatorship since we can set $\rho_{is}=0$ for
all applicants and use a single tie-breaker to determine position. Statement
(8) then says that $R_{i}\leq\tau_{s}$ and $R_{i}>MID_{\theta s}$ for
applicants with $\theta_{i}=\theta$.
The DA propensity score is the probability of the event described by (8). This
probability is determined in part by marginal priority at school $s$, denoted
$\rho_{s}$ and defined as $\text{int}(\xi_{s})$, the integer part of the DA
cutoff. Conditional on rejection by all preferred schools, applicants to $s$
are assigned $s$ with certainty if $\rho_{is}<\rho_{s}$, that is, if they
clear marginal priority. Applicants with $\rho_{is}>\rho_{s}$ have no chance
of finding a seat at $s$. Applicants for whom $\rho_{is}=\rho_{s}$ are
marginal: these applicants are seated at $s$ when their tie-breaker values
fall below tie-breaker cutoff $\tau_{s}$. This quantity can therefore be
written as the decimal part of the DA cutoff:
$\tau_{s}=\xi_{s}-\rho_{s}.$
Applicants with marginal priority have $\rho_{is}=\rho_{s}$, so
$\pi_{is}\leq\xi_{s}\Leftrightarrow R_{iv(s)}\leq\tau_{s}.$
In addition to marginal priority, the local DA propensity score is conditioned
on applicant position relative to screened school cutoffs. To describe this
conditioning, define a set of variables, $t_{is}(\delta)$, as follows:
$t_{is}(\delta)=\left\\{\begin{array}[c]{ll}n&\text{ if }\rho_{\theta s}>\rho_{s}\text{ or, if }v(s)>U,\>\rho_{\theta s}=\rho_{s}\text{ and }R_{iv(s)}>\tau_{s}+\delta\\\ a&\text{ if }\rho_{\theta s}<\rho_{s}\text{ or, if }v(s)>U,\>\rho_{\theta s}=\rho_{s}\text{ and }R_{iv(s)}\leq\tau_{s}-\delta\\\ c&\text{ if }\rho_{\theta s}=\rho_{s}\text{ and, if }v(s)>U,\,R_{iv(s)}\in(\tau_{s}-\delta,\tau_{s}+\delta],\end{array}\right.$
where the mnemonic value labels $n,a,c$ stand for never seated, always seated,
and conditionally seated. It’s convenient to collect these variables in a
vector,
$T_{i}(\delta)=[t_{i1}(\delta),...,t_{is}(\delta),...,t_{iS}(\delta)].$
Elements of $T_{i}(\delta)$ for unscreened schools are a function only of the
partition of types determined by marginal priority. For screened schools,
however, $T_{i}(\delta)$ also encodes the relationship between tie-breakers
and cutoffs. Never-seated applicants to $s$ cannot be seated there, either
because they fail to clear marginal priority at $s$ or because they’re too far
above the cutoff when $s$ is screened. Always-seated applicants to $s$ are
assigned $s$ for sure when they can’t do better, either because they clear
marginal priority at $s$ or because they’re well below the cutoff at $s$ when
$s$ is screened. Finally, conditionally-seated applicants to $s$ are
randomized marginal priority applicants. Randomization is by lottery number
when $s$ is a lottery school or by non-lottery tie-breaker within the
bandwidth when $s$ is screened.
With this machinery in hand, the local DA propensity score is defined as
follows:
$\psi_{s}(\theta,T)=\lim_{\delta\rightarrow
0}E[D_{i}(s)|\theta_{i}=\theta,T_{i}(\delta)=T],$
for $T=[t_{1},...,t_{s},...,t_{S}]$ where $t_{s}\in\\{n,a,c\\}$ for each $s$.
This describes assignment probabilities as a function of type and cutoff
proximity at each school. As in Proposition 2, formal characterization of
$\psi_{s}(\theta,T)$ requires cutoffs be distinct:
###### Assumption 2.
$\tau_{s}\neq\tau_{s^{\prime}}$ for all $s\neq s^{\prime}$ unless both are 1.
The formula characterizing $\psi_{s}(\theta,T)$ builds on an extension of the
$MID$ idea to a general tie-breaking regime. First, the set of schools
$\theta$ prefers to $s$, $B_{\theta s}$, is partitioned by tie-breakers by
defining $B^{v}_{\theta s}\equiv\\{b\in S_{v}\mid b\succ_{\theta}s\\}$ for
each $v$. We then have:
$\displaystyle MID^{v}_{\theta s}=\left\\{\begin{array}[c]{ll}0&\text{ if }\rho_{\theta b}>\rho_{b}\text{ for all }b\in B^{v}_{\theta s}\text{ or if }B_{\theta s}^{v}=\emptyset\\\ 1&\text{ if }\rho_{\theta b}<\rho_{b}\text{ for some }b\in B^{v}_{\theta s}\\\ \max\\{\tau_{b}\mid b\in B^{v}_{\theta s}\text{ and }\rho_{\theta b}=\rho_{b}\\}&\text{ otherwise. }\end{array}\right.$
$MID^{v}_{\theta s}$ quantifies the extent to which qualification at schools
that use tie-breaker $v$ and that type $\theta$ applicants prefer to $s$
truncates the tie-breaker distribution among those contending for seats at $s$.
Next, define:
$m_{s}(\theta,T)=|\\{v>U:MID^{v}_{\theta s}=\tau_{b}\text{ and }t_{b}=c\text{
for some }b\in B^{v}_{\theta s}\\}|.$
This quantity counts the number of RD-style experiments created by the
screened schools that type $\theta$ prefers to $s$.
The last preliminary to a formulation of local DA assignment scores uses
$MID^{v}_{\theta s}$ and $m_{s}(\theta,T)$ to compute disqualification rates
at all schools preferred to $s$. We break this into two pieces: variation
generated by screened schools and variation generated by lottery schools. As
the bandwidth shrinks, the limiting disqualification probability at screened
schools in $B_{\theta s}$ converges to
$\sigma_{s}(\theta,T)=0.5^{m_{s}(\theta,T)}.$ (9)
The disqualification probability at lottery schools in $B_{\theta s}$ is
$\lambda_{s}(\theta)=\prod_{v=1}^{U}(1-MID^{v}_{\theta s}),$ (10)
without regard to bandwidth.
To recap: the local DA score for type $\theta$ applicants is determined in
part by the screened schools $\theta$ prefers to $s$. Relevant screened
schools are those determining $MID^{v}_{\theta s}$, and at which applicants
are close to tie-breaker cutoffs. The variable $m_{s}(\theta,T)$ counts the
number of tie-breakers involved in such close encounters. Applicants drawing
screened school tie-breakers close to $\tau_{b}$ for some $b\in B^{v}_{\theta
s}$ face qualification rates of $0.5$ for each tie-breaker $v$. Since screened
school disqualification is locally independent over tie-breakers, the term
$\sigma_{s}(\theta,T)$ computes the probability of not being assigned a
screened school preferred to $s$. Likewise, since the qualification rate at
preferred lottery schools is $MID^{v}_{\theta s}$, the term
$\lambda_{s}(\theta)$ computes the probability of not being assigned a lottery
school preferred to $s$.
The following theorem combines these in a formula for the local DA propensity
score:
###### Theorem 1 (Local DA Propensity Score with General Tie-breaking).
Suppose seats in a large market are assigned by DA with tie-breakers indexed
by $v$, and suppose Assumptions 1 and 2 hold. For all $s$, $\theta$, $T$,
and $w$, we have
$\psi_{s}(\theta,T)=\lim_{\delta\rightarrow
0}E[D_{i}(s)|\theta_{i}=\theta,T_{i}(\delta)=T,W_{i}=w]=0,$
if (a) $t_{s}=n$; or (b) $t_{b}=a\ \text{ for some }b\in B_{\theta s}$.
Otherwise,
$\psi_{s}(\theta,T)=\left\\{\begin{array}[c]{ll}\sigma_{s}(\theta,T)\lambda_{s}(\theta)&\text{
if }t_{s}=a\\\
\sigma_{s}(\theta,T)\lambda_{s}(\theta)\max\left\\{0,\dfrac{\tau_{s}-MID^{v(s)}_{\theta
s}}{1-MID^{v(s)}_{\theta s}}\right\\}&\text{ if }t_{s}=c\text{ and }v(s)\leq
U\\\ \sigma_{s}(\theta,T)\lambda_{s}(\theta)\times 0.5&\text{ if
}t_{s}=c\text{ and }v(s)>U.\\\ \end{array}\right.$ (11)
Theorem 1 starts with a scenario where applicants to $s$ are either
disqualified there or assigned to a preferred school for sure.¹⁰ (See the
appendix for a proof of the theorem, along with other theoretical results,
including derivation of a non-limit form of the DA propensity score.) In this
case, we need not worry about whether $s$ is a screened or lottery school. In
other scenarios where applicants are surely qualified at $s$, the probability
of assignment to $s$ is determined entirely by disqualification rates at
preferred screened schools and by truncation of lottery tie-breaker
distributions at preferred lottery schools. These sources of assignment risk
combine to produce the first line of (11). The conditional assignment
probability at any lottery $s$, described on the second line of (11), is
determined by the disqualification rate at preferred schools and the
qualification rate at $s$, where the latter is given by
$\tau_{s}-MID^{v(s)}_{\theta s}$ (to see this, note that $\lambda_{s}(\theta)$
includes the term $1-MID^{v(s)}$ in the product over lottery tie-breakers).
Similarly, the conditional assignment probability at any screened $s$, on the
third line of (11), is determined by the disqualification rate at preferred
schools and the qualification rate at $s$, where the latter is given by $0.5$.
The Theorem covers the non-lottery tie-breaking serial dictatorship scenario
in the previous section. With a single non-lottery tie-breaker,
$\lambda_{s}(\theta)=1$. When $t_{s}=n$ or $t_{b}=a$ for some $b\in B_{\theta
s}$, the local propensity score at $s$ is zero. Otherwise, suppose $t_{b}=n$
for all $b\in B_{\theta s}$, so that $m_{s}(\theta,T)=0$. If $t_{s}=a$, then
the local propensity score is $1$. If $t_{s}=c$, then the local propensity
score is $0.5$. Suppose, instead, that $MID_{\theta s}=\tau_{b}$ for some
$b\in B_{\theta s}$, so that $m_{s}(\theta,T)=1$. In this case, $t_{s}\neq c$
because cutoffs are distinct. If $t_{s}=a$, then the local propensity score is
$0.5$. Online Appendix B uses an example to illustrate the Theorem in other
scenarios.
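The case logic of Theorem 1 is compact enough to write down directly. The sketch below takes the ingredients $\sigma_{s}$, $\lambda_{s}$, $MID^{v(s)}_{\theta s}$, and $\tau_{s}$ as given, assumes $MID^{v(s)}_{\theta s}<1$ in the lottery branch, and reproduces the serial dictatorship special cases discussed above.

```python
# Local DA propensity score, equation (11) of Theorem 1 (illustrative sketch).
# Assumes mid < 1 in the lottery branch; inputs mirror the text's notation.

def local_da_score(t_s, t_preferred, m, lam, mid, tau, lottery):
    """t_s: applicant status at s, in {"n", "a", "c"};
    t_preferred: statuses t_b at schools b preferred to s;
    m: m_s(theta, T), the number of close screened-school encounters;
    lam: lambda_s(theta), equation (10); mid: MID^{v(s)} at s; tau: tau_s;
    lottery: True when s uses a lottery tie-breaker (v(s) <= U)."""
    if t_s == "n" or "a" in t_preferred:
        return 0.0                       # never seated, or surely seated upstream
    sigma = 0.5 ** m                     # equation (9)
    if t_s == "a":
        return sigma * lam               # first line of (11)
    if lottery:
        return sigma * lam * max(0.0, (tau - mid) / (1.0 - mid))  # second line
    return sigma * lam * 0.5             # third line of (11)

# Serial dictatorship with one screened tie-breaker: lam = 1 and, with no
# close encounters (m = 0), the score is 1 when t_s = a and 0.5 when t_s = c;
# with one close encounter (m = 1) and t_s = a, the score is 0.5.
assert local_da_score("a", ["n"], 0, 1.0, 0.0, 0.5, False) == 1.0
assert local_da_score("c", ["n"], 0, 1.0, 0.0, 0.5, False) == 0.5
assert local_da_score("a", ["c"], 1, 1.0, 0.0, 0.5, False) == 0.5
```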
### 4.2 Score Estimation
Theorem 1 characterizes the theoretical probability of school assignment in a
large market with a continuum of applicants. In reality, of course, the number
of applicants is finite and propensity scores must be estimated. We show here
that, in an asymptotic sequence that increases market size with a shrinking
bandwidth, a sample analog of the local DA score described by Theorem 1
converges uniformly to the corresponding local score for a finite market. Our
empirical application establishes the relevance of this asymptotic result by
showing that applicant characteristics are balanced by assignment status
conditional on estimates of the local DA propensity score.
The asymptotic sequence for the estimated local DA score works as follows:
randomly sample $N$ applicants from a continuum economy. The applicant sample
(of size $N$) includes information on each applicant’s type and the vector of
large-market school capacities, $q_{s}$, which give the proportion of $N$
seats that can be seated at $s$. We observe realized tie-breaker values for
each applicant, but not the underlying distribution of non-lottery tie-
breakers. The set of finitely many schools is unchanged along this sequence.
Fix the number of seats at school $s$ in a sampled finite market to be the
integer part of $Nq_{s}$ and run DA with these applicants and schools. We
consider the limiting behavior of an estimator computed using the estimated
$\hat{MID}^{v}_{\theta_{i}s}$, $\hat{\tau}_{s}$, and marginal priorities
generated by this single realization. Also, given a bandwidth $\delta_{N}>0$,
we compute $t_{is}(\delta_{N})$ for each $i$ and $s$, collecting these in
vector $T_{i}(\delta_{N})$. These statistics then determine:
$\hat{m}_{s}(\theta_{i},T_{i}(\delta_{N}))=|\\{v>U:\hat{MID}^{v}_{\theta_{i}s}=\hat{\tau}_{b}\text{
and }t_{ib}(\delta_{N})=c\text{ for some }b\in B^{v}_{\theta_{i}s}\\}|.$
Our local DA score estimator, denoted
$\hat{\psi}_{s}(\theta_{i},T_{i}(\delta_{N}))$, is constructed by plugging
these ingredients into the formula in Theorem 1. That is, if (a)
$\hat{t}_{is}(\delta_{N})=n$; or (b) $\hat{t}_{ib}(\delta_{N})=a\ \text{ for
some }b\in B_{\theta_{i}s}$, then
$\hat{\psi}_{s}(\theta_{i},T_{i}(\delta_{N}))=0.$ Otherwise,
$\hat{\psi}_{s}(\theta_{i},T_{i}(\delta_{N}))=\left\\{\begin{array}[c]{ll}\hat{\sigma}_{s}(\theta_{i},T_{i}(\delta_{N}))\hat{\lambda}_{s}(\theta_{i})&\text{if }t_{is}(\delta_{N})=a\\\ \hat{\sigma}_{s}(\theta_{i},T_{i}(\delta_{N}))\hat{\lambda}_{s}(\theta_{i})\max\left\\{0,\frac{\hat{\tau}_{s}-\hat{MID}^{v(s)}_{\theta_{i}s}}{1-\hat{MID}^{v(s)}_{\theta_{i}s}}\right\\}&\text{if }t_{is}(\delta_{N})=c\text{ and }v(s)\leq U\\\ \hat{\sigma}_{s}(\theta_{i},T_{i}(\delta_{N}))\hat{\lambda}_{s}(\theta_{i})\times 0.5&\text{if }t_{is}(\delta_{N})=c\text{ and }v(s)>U,\end{array}\right.$
(15)
where
$\hat{\sigma}_{s}(\theta_{i},T_{i}(\delta_{N}))=0.5^{\hat{m}_{s}(\theta_{i},T_{i}(\delta_{N}))}$
and
$\hat{\lambda}_{s}(\theta_{i})=\prod_{v=1}^{U}(1-\hat{MID}^{v}_{\theta_{i}s}).$
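For concreteness, these two plug-in ingredients reduce to simple arithmetic once the estimated $MID$s and $\hat{m}_{s}$ are in hand (all numbers below are invented):

```python
# Plug-in ingredients of the estimated score (15), with invented inputs.
U = 2                                  # two lottery tie-breakers
mid_hat = {1: 0.4, 2: 0.0}             # estimated MID^v for v = 1, 2
m_hat = 1                              # one close screened-school encounter
sigma_hat = 0.5 ** m_hat               # sigma-hat, analog of equation (9)
lam_hat = 1.0
for v in range(1, U + 1):
    lam_hat *= 1.0 - mid_hat[v]        # lambda-hat, analog of equation (10)
print(sigma_hat, lam_hat)              # sigma-hat = 0.5, lambda-hat ~ 0.6
```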
As a theoretical benchmark for the large-sample performance of
$\hat{\psi}_{s}$, consider the true local DA score for a finite market of size
$N$. This is
$\displaystyle\psi_{Ns}(\theta,T)=\lim_{\delta\rightarrow
0}E_{N}[D_{i}(s)|\theta_{i}=\theta,T_{i}(\delta)=T],$ (16)
where $E_{N}$ is the expectation induced by the joint tie-breaker distribution
for applicants in the finite market. This quantity is defined by fixing the
distribution of types and the vector of proportional school capacities, as
well as market size. $\psi_{Ns}(\theta,T)$ is then the limit of the average of
$D_{i}(s)$ across infinitely many tie-breaker draws in ever-narrowing
bandwidths for this finite market. Because tie-breaker distributions are
assumed to have continuous density in the neighborhood of any cutoff, the
finite-market local propensity score is well-defined for any positive
$\delta$.
We’re interested in the gap between the estimator
$\hat{\psi}_{s}(\theta,T(\delta_{N}))$ and the true local score
$\psi_{Ns}(\theta,T)$ as $N$ grows and $\delta_{N}$ shrinks. We show below
that $\hat{\psi}_{s}(\theta,T(\delta_{N}))$ converges uniformly to
$\psi_{Ns}(\theta,T)$ in our asymptotic sequence.
This result uses a regularity condition:
###### Assumption 3.
(Rich support) In the population continuum market, for every school $s$ and
every priority $\rho$ held by a positive mass of applicants who rank $s$, the
proportion of applicants $i$ with $\rho_{is}=\rho$ who rank $s$ first is also
positive.
Uniform convergence of $\hat{\psi}_{s}(\theta,T(\delta_{N}))$ is formalized
below:
###### Theorem 2 (Consistency of the Estimated Local DA Propensity Score).
In the asymptotic sequence described above, and maintaining Assumptions 1-3,
the estimated local DA propensity score $\hat{\psi}_{s}(\theta,T(\delta_{N}))$
is a consistent estimator of $\psi_{Ns}(\theta,T)$ in the following sense: For
any $\delta_{N}$ such that $\delta_{N}\rightarrow 0$,
$N\delta_{N}\rightarrow\infty,$ and $T(\delta_{N})\rightarrow T$,
$\sup_{\theta,s,T}|\hat{\psi}_{s}(\theta,T(\delta_{N}))-\psi_{Ns}(\theta,T)|\overset{p}{\longrightarrow}0,$
as $N\rightarrow\infty$.
This result (proved in the online appendix) justifies conditioning on an
estimated local propensity score to eliminate omitted variables bias in school
attendance effect estimates.
### 4.3 Treatment Effect Estimation
Theorems 1 and 2 provide a foundation for causal inference. In combination
with an exclusion restriction discussed below, these results imply that a
dummy variable indicating Grade A assignments is asymptotically independent of
potential outcomes (represented by the residuals in equation (1)),
conditional on an estimate of the Grade A local propensity score. Let $S_{A}$
denote the set of Grade A schools. Because DA generates a single offer, the
local propensity score for Grade A assignment can be computed as:
$\hat{\psi}_{A}(\theta_{i},T_{i}(\delta_{N}))=\sum_{s\in
S_{A}}\hat{\psi}_{s}(\theta_{i},T_{i}(\delta_{N})).$
In other words, the local score for Grade A assignment is the sum of the
scores for all Grade A schools in the match.
These considerations lead to a 2SLS estimator with second and first stage
equations that can be written in stylized form as:
$\displaystyle Y_{i}=\beta C_{i}+\sum_{x}\alpha_{2}(x)d_{i}(x)+g_{2}(\mathcal{R}_{i};\delta_{N})+\eta_{i}$ (17)
$\displaystyle C_{i}=\gamma D_{Ai}+\sum_{x}\alpha_{1}(x)d_{i}(x)+g_{1}(\mathcal{R}_{i};\delta_{N})+\nu_{i},$ (18)
where $d_{i}(x)=1\\{\hat{\psi}_{A}(\theta_{i},T_{i}(\delta_{N}))=x\\}$ and the
set of parameters denoted $\alpha_{2}(x)$ and $\alpha_{1}(x)$ provide
saturated control for the local propensity score. As detailed in the next
section, functions $g_{2}(\mathcal{R}_{i};\delta_{N})$ and
$g_{1}(\mathcal{R}_{i};\delta_{N})$ implement local linear control for
screened school tie-breakers for applicants to these schools with
$\hat{t}_{is}(\delta_{N})=c$. Linking this with the empirical strategy
sketched at the outset, equation (17) is a version of equation (1) that
sets
$f_{2}(\theta_{i},\mathcal{R}_{i},\delta)=\sum_{x}\alpha_{2}(x)d_{i}(x)+g_{2}(\mathcal{R}_{i};\delta_{N}).$
Likewise, equation (18) is a version of equation (2) with
$f_{1}(\theta_{i},\mathcal{R}_{i},\delta)$ defined similarly.
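In stylized form, the 2SLS procedure in (17) and (18) amounts to two regressions with the same score dummies on both sides. The simulation below is purely illustrative: the score values, offer rates, and enrollment process are invented, the local linear terms $g_{1}$ and $g_{2}$ are omitted, and first-stage fitted values are used directly.

```python
import numpy as np

# Stylized 2SLS of equations (17)-(18): outcome Y on enrollment C, instrumented
# by the Grade A offer D_A, with saturated dummies for an invented propensity
# score. The local linear controls g_1, g_2 are omitted for brevity.
rng = np.random.default_rng(0)
n = 5000
score = rng.choice([0.25, 0.5, 0.75], size=n)        # invented score support
d_a = (rng.random(n) < score).astype(float)          # offers at rate = score
u = rng.normal(size=n)                               # unobserved confounder
c = (rng.random(n) < 0.2 + 0.5 * d_a + 0.2 * (u > 0)).astype(float)
y = 1.0 * c + score + u + 0.5 * rng.normal(size=n)   # true effect beta = 1
dummies = (score[:, None] == np.unique(score)[None, :]).astype(float)
# First stage (18): regress C on D_A and the score dummies.
Z = np.column_stack([d_a, dummies])
c_hat = Z @ np.linalg.lstsq(Z, c, rcond=None)[0]
# Second stage (17): regress Y on fitted C and the same dummies.
X = np.column_stack([c_hat, dummies])
beta = np.linalg.lstsq(X, y, rcond=None)[0][0]
print(round(beta, 2))  # close to the true effect of 1
```

Because the score dummies span an intercept, no separate constant is needed; offers are random conditional on the score, which is what the dummy controls exploit.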
Our implementation of score-controlled instrumental variables is inspired by
the Calonico et al. (2019) analysis of RD designs with covariates. Using a mix
of simulation evidence and theoretical reasoning, Calonico et al. (2019)
argues that additive control for covariates in a local linear regression model
requires fewer assumptions and is likely to have better finite-sample behavior
than more elaborate procedures. The covariates of interest to us are a full
set of dummies for values in the support of the Grade A local propensity
score. We’d like to control for these while also benefiting from the good
performance of local linear regression estimators of conditional mean
functions near cutoffs.¹¹ (Calonico et al. (2019) discuss both sharp and
fuzzy RD designs; the conclusions for the sharp design carry over to the fuzzy
case in which cutoff clearance is used as an instrument.) Equations (17) and
(18) are said to be stylized because they omit a number of implementation
details supplied in the following section.
Note that saturated regression conditioning on the local propensity score
eliminates applicants with score values of zero or one. This is apparent from
an analogy with a fixed-effects panel model. In panel data with multiple
annual observations on individuals, estimation with individual fixed effects
is equivalent to estimation after subtracting person means from regressors.
Here, the “fixed effects” are coefficients on dummies for each possible score
value. When the score value is 0 or 1 for applicants of a given type,
assignment status is constant and observations on applicants of this type drop
out. We therefore say an applicant has Grade A risk when
$\hat{\psi}_{A}(\theta_{i},T_{i}(\delta_{N}))\in(0,1)$. The sample with risk
contributes to parameter estimation in models with saturated score control.
Propensity score conditioning facilitates control for applicant type in the
sample with risk. In practice, local propensity score conditioning yields
considerable dimension reduction compared to full-type conditioning, as we
would hope. The 2014 NYC high school match, for example, involved 52,124
applicants of 47,074 distinct types. Of these, 42,461 types listed a Grade A
school on their application to the high school match. By contrast, the local
propensity score for Grade A school assignment takes on only 2,054 values.
## 5 A Brief Report on NYC Report Cards
### 5.1 Doing DA in the Big Apple
Since the 2003-04 school year, the NYC Department of Education (DOE) has used
DA to assign rising ninth graders to high schools. Many high schools in the
match host multiple programs, each with their own admissions protocols.
Applicants are matched to programs rather than schools. Each applicant for a
ninth grade seat can rank up to twelve programs. All traditional public high
schools participate in the match, but charter schools and NYC’s specialized
exam high schools have separate admissions procedures.¹² (Some special needs
students are also matched separately. The centralized NYC high school match is
detailed in Abdulkadiroğlu et al. (2005, 2009); Abdulkadiroğlu et al. (2014)
describe NYC exam school admissions.)
The NYC match is structured like the general DA match described in Section 4:
lottery programs use a common uniformly distributed lottery number, while
screened programs use a variety of non-lottery tie-breaking variables.
Screened tie-breakers are mostly distinct, with one for each school or
program, though some screened programs share a tie-breaker. In any case, our
theoretical framework accommodates all of NYC’s many tie-breaking
protocols.¹³ (Screened tie-breakers are reported as an integer variable
encoding the underlying tie-breaker order, such as a test score or portfolio
summary score. We scale these to lie in $(0,1]$ by computing
$[R_{iv}-\min_{j}{R_{jv}}+1]/[\max_{j}R_{jv}-\min_{j}R_{jv}+1]$ for each tie-
breaker $v$. This transformation produces a positive cutoff at $s$ when only
one applicant is seated at $s$ and a cutoff of 1 when all applicants who rank
$s$ are seated there.)
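The arithmetic of this rescaling is easy to verify on a toy example (the raw scores below are invented; the orientation of the integer encoding follows the district's convention):

```python
# Rescale an integer screened tie-breaker into (0, 1] using the formula
# [R - min(R) + 1] / [max(R) - min(R) + 1]. Raw values are invented.
raw = [88, 92, 75, 92, 60]
lo, hi = min(raw), max(raw)
scaled = [(r - lo + 1) / (hi - lo + 1) for r in raw]
print(scaled)
assert all(0 < x <= 1 for x in scaled)   # every value lands in (0, 1]
assert max(scaled) == 1.0                # the largest raw value maps to 1
```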
Our analysis uses Theorems 1 and 2 to compute propensity scores for programs
rather than schools since programs are the unit of assignment. For our
purposes, a lottery school is a school hosting any lottery program. Other
schools are defined as screened.¹⁴ (Some NYC high schools sort applicants on
a coarse screening tie-breaker that allows ties, breaking these ties using the
common lottery number. Schools of this type are treated as lottery schools,
with priority groups defined by values of the screened tie-breaker. Seats for
NYC’s ed-opt programs are allocated to two groups, one of which screens
applicants using a single non-lottery tie-breaker and the other using the
common lottery number. The online appendix explains how ed-opt programs are
handled in our analysis.)
In 2007, the NYC DOE launched a school accountability system that graded
schools from A to F. This mirrors similar accountability systems in Florida
and other states. NYC’s school grades were determined by achievement levels
and, especially, achievement growth, as well as by survey- and attendance-
based features of the school environment. Growth looked at credit
accumulation, Regents test completion and pass rates; performance measures
were derived mostly from four- and six-year graduation rates. Some schools
were ungraded. Figure 2 reproduces a sample letter-graded school progress
report.¹⁵ (Walcott (2012) details the NYC grading methodology used in this
period. Note that a school’s grade for a particular year is computed using
only information from past years, so there is no feedback between school
grades and the school’s current outcomes.)
Figure 2: Sample NYC School Report Card
Notes: This figure shows the 2011/12 progress report for East Side Community
School. Source: www.crpe.org
The 2007 grading system was controversial. Proponents applauded the
integration of multiple measures of school quality while opponents objected to
the high-stakes consequences of low school grades, such as school closure or
consolidation. Rockoff and Turner (2011) provide a partial validation of the
system by showing that low grades seem to have sparked school improvement. In
2014, the DOE replaced the 2007 scheme with school quality measures that place
less weight on test scores and more on curriculum characteristics and
subjective assessments of teaching quality. The relative merits of the old and
new systems continue to be debated.
The results reported here use application data from the 2011-12, 2012-13, and
2013-14 school years (students in these application cohorts enrolled in the
following school years). Our sample includes first-time applicants seeking 9th
grade seats, who submitted preferences over programs in the main round of the
NYC high school match. We obtained data on school capacities and priorities,
lottery numbers, and screened school tie-breakers, information that allows us
to replicate the match. Details related to match replication appear in the
online appendix.¹⁶ (Our analysis assigns report card grades to a cohort’s
schools based on the report cards published in the previous year; for the
2011/12 application cohort, for instance, we used the grades published in
2010/11. Applicant SAT scores from tests taken before 9th grade are dropped.)
Students at Grade A schools have higher average SAT scores and higher
graduation rates than do students at other schools. Differences in graduation
rates across schools feature in popular accounts of socioeconomic differences
in school access (see, e.g., Harris and Fessenden (2017) and Disare (2017)).
Grade A students are also more likely than students attending other schools to
be deemed “college- and career-prepared” or “college-ready.”¹⁷ (These
composite variables are determined as a function of Regents and AP scores,
course grades, vocational or arts certification, and college admission tests.)
These and other school characteristics are documented in Table 5.1, which
reports statistics separately by school grade and admissions regime.
Achievement gaps between screened and lottery Grade A schools are especially
large, likely reflecting selection bias induced by test-based screening.
Screened Grade A schools have a majority white and Asian student body, the
only group of schools described in the table to do so (the table reports
shares black and Hispanic). These schools are also over-represented in
Manhattan, a borough that includes most of New York’s wealthiest neighborhoods
(though average family income is higher on Staten Island). Teacher experience
is similar across school types, while screened Grade A schools have somewhat
more teachers with advanced degrees.
The first two columns of Table 5.1 describe the roughly 180,000 ninth graders
enrolled in the 2012-13, 2013-14, and 2014-15 school years. Students enrolled
in a Grade A school, including those enrolled in the Grade A schools assigned
outside the match, are less likely to be black or Hispanic and have higher
baseline scores than the general population of 9th graders. The 153,000 eighth
graders who applied for ninth grade seats are described in column 3 of the
table. Roughly 130,000 listed a Grade A school for which seats are assigned in
the match on their application form and a little over a third of these were
assigned to a Grade A school.¹⁸ (The difference between total 9th grade
enrollment and the number of match participants is accounted for by special
education students outside the main match, direct-to-charter enrollment, and a
few schools that straddle 9th grade.) Applicants in the match have baseline
scores (from tests taken in 6th grade) above the overall district mean
(baseline scores are standardized to the population of test-takers). As can be
seen by comparing columns 3 and 4 in Table 5.1, however, the average
characteristics of Grade A applicants are mostly similar to those of the
entire applicant population.
The statistics in column 5 of Table 5.1 show that applicants enrolled in a
Grade A school (among schools participating in the match) are somewhat less
likely to be black and have higher baseline scores than the total applicant
pool. These gaps likely reflect systematic differences in offer rates by race
at screened Grade A schools. Column 5 of Table 5.1 also shows that most of
those attending a Grade A school were assigned there, and that most Grade A
students ranked a Grade A school first. Grade A students are about twice as
likely to go to a lottery school as to a screened school. Interestingly,
enthusiasm for Grade A schools is far from universal: just under half of all
applicants in the match ranked a Grade A school first.
### 5.2 Balance and 2SLS Estimates
Because NYC has a single lottery tie-breaker, the disqualification probability
at lottery schools in $B_{\theta s}$ described by equation (10) simplifies to
$\lambda_{s}(\theta)=(1-MID^{1}_{\theta s}),$
where $MID^{1}_{\theta s}$ is the most informative disqualification at schools
using the common lottery tie-breaker, $R_{i1}$. The local DA score described
by equation (11) therefore also simplifies, in this case to:
$\psi_{s}(\theta,T)=\begin{cases}\sigma_{s}(\theta,T)\,(1-MID^{1}_{\theta s})&\text{if }t_{s}=a,\\ \sigma_{s}(\theta,T)\,\max\left\{0,\tau_{s}-MID^{1}_{\theta s}\right\}&\text{if }t_{s}=c\text{ and }v(s)=1,\\ \sigma_{s}(\theta,T)\,(1-MID^{1}_{\theta s})\times 0.5&\text{if }t_{s}=c\text{ and }v(s)>1.\end{cases}$
(19)
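As a concrete illustration of the case logic in (19), the following Python sketch evaluates the local DA score for a single applicant type. This is a hypothetical helper, not the authors' code; `sigma`, `mid`, and `tau` stand for $\sigma_{s}(\theta,T)$, $MID^{1}_{\theta s}$, and $\tau_{s}$.

```python
def local_da_score(sigma, mid, tau, status, v_s):
    """Local DA score psi_s(theta, T) from equation (19), single lottery
    tie-breaker. sigma: marginal-priority probability sigma_s(theta, T);
    mid: MID^1 for this type at school s; tau: the school's cutoff;
    status: 'a' (always seated) or 'c' (conditionally seated);
    v_s: index of the tie-breaker school s uses (1 = common lottery)."""
    if status == 'a':
        return sigma * (1 - mid)
    if status == 'c' and v_s == 1:
        return sigma * max(0.0, tau - mid)
    if status == 'c' and v_s > 1:
        return sigma * (1 - mid) * 0.5
    raise ValueError("status must be 'a' or 'c'")

# A conditionally seated applicant at a lottery school (v_s = 1):
print(local_da_score(1.0, 0.25, 0.75, 'c', 1))  # 0.5
```

Note that the screened branch ($v(s)>1$) simply halves the lottery-survival probability, which is why the modal assignment probability discussed below is $0.5$.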
Estimates of the local DA score based on (19) reveal that roughly 35,000
applicants have Grade A risk, that is, an estimated local DA score value
strictly between 0 and 1. As can be seen in column 6 of Table 5.1, applicants
with Grade A risk have mean baseline scores and demographic characteristics
much like those of the sample enrolled at a Grade A school. The ratio of
screened to lottery enrollment among those with Grade A risk is also similar
to the corresponding ratio in the sample of enrolled students (compare
32.9/15.3 in the former group to 66.3/25.0 in the latter). Online Appendix
Figure D1 plots the distribution of Grade A assignment probabilities for
applicants with risk. The modal probability is $0.5$, reflecting the fact that
roughly 25% of those with Grade A risk rank a single Grade A school and that
this school is screened.
The balancing property of local propensity score conditioning is evaluated
using score-controlled differences in covariate means for applicants who do
and don’t receive Grade A assignments. Score-controlled differences by Grade A
assignment status are estimated in a model that includes a dummy indicating
assignments at ungraded schools as well as a dummy for Grade A assignments,
controlling for the propensity scores for both. We account for ungraded school
attendance to ensure that estimated Grade A effects compare schools with high
and low grades, omitting the ungraded. (Footnote 19: Ungraded schools were mostly new when grades were assigned or had data insufficient to determine a grade.)
Specifically, let $D_{Ai}$ denote Grade A assignments as before, and let
$D_{0i}$ indicate assignments at ungraded schools. Assignment risk for each
type of school is controlled using sets of dummies denoted $d_{Ai}(x)$ and
$d_{0i}(x)$, respectively, for score values indexed by $x$.
The covariates of interest here, denoted by $W_{i}$, are those that are
unchanged by school assignment and should therefore be mean-independent of
$D_{Ai}$ in the absence of selection bias. The balance test results reported
in Table 5.2 are estimates of parameter $\gamma_{A}$ in regressions of $W_{i}$
on $D_{Ai}$ of the form:
$W_{i}=\gamma_{A}D_{Ai}+\gamma_{0}D_{0i}+\sum_{x}\alpha_{A}(x)d_{Ai}(x)+\sum_{x}\alpha_{0}(x)d_{0i}(x)+g(\mathcal{R}_{i};\delta_{N})+\nu_{i}.$
(20)
Local piecewise linear control for screened tie-breakers is parameterized as:
$g(\mathcal{R}_{i};\delta_{N})=\sum_{s\in S\backslash S_{0}}\omega_{1s}a_{is}+k_{is}\left[\omega_{2s}+\omega_{3s}(R_{iv(s)}-\tau_{s})+\omega_{4s}(R_{iv(s)}-\tau_{s})\mathbf{1}(R_{iv(s)}>\tau_{s})\right],$
(21)
where $S\backslash S_{0}$ is the set of screened programs, $a_{is}$ indicates
whether applicant $i$ applied to screened program $s$, and
$k_{is}=1[\hat{t}_{is}(\delta_{N})=c]$. The sample used to estimate (20) is
limited to applicants with Grade A risk.
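The role of the saturated score dummies in (20) can be illustrated by their within-cell analogue: compare covariate means by assignment status within each propensity-score cell, then average the cell-level gaps. The sketch below is a hypothetical illustration only; it omits the ungraded dummies and the screened-tie-breaker controls $g(\mathcal{R}_{i};\delta_{N})$.

```python
from collections import defaultdict

def score_controlled_gap(W, D, score):
    """Covariate gap by assignment status after saturated control for the
    assignment probability: compute the mean difference in W between D = 1
    and D = 0 within each score cell, then average cell gaps weighted by
    cell size. Cells lacking both assigned and unassigned applicants
    contribute nothing, as with saturated dummies."""
    cells = defaultdict(lambda: {0: [], 1: []})
    for w, d, s in zip(W, D, score):
        cells[s][d].append(w)
    total, num = 0, 0.0
    for groups in cells.values():
        if groups[0] and groups[1]:  # cell has variation in assignment
            n = len(groups[0]) + len(groups[1])
            gap = (sum(groups[1]) / len(groups[1])
                   - sum(groups[0]) / len(groups[0]))
            num += n * gap
            total += n
    return num / total if total else 0.0
```

Under conditional random assignment, baseline covariates should be balanced within score cells, so this statistic (like $\gamma_{A}$) should be close to zero.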
Parameters in (20) and (21) vary by application cohort (three cohorts are
stacked in the estimation sample). Bandwidths are estimated two ways, as
suggested by Imbens and Kalyanaraman (2012) (IK) using a uniform kernel, and
using methods and software described in Calonico et al. (2017) (CCFT). These
bandwidths are computed separately for each program (the notation ignores
this), for the set of applicants in the relevant marginal priority
group. (Footnote 20: The IK bandwidths used here are identical to those yielded by the IK implementation referenced in Armstrong and Kolesár (2018) and distributed via the RDhonest package. Bandwidths are computed separately for each outcome variable; we use the smallest of these for each program. The bandwidth for screened programs is set to zero when there are fewer than five in-bandwidth observations on one or the other side of the relevant cutoff. The control function $g(\mathcal{R}_{i};\delta_{N})$ is unweighted and can therefore be said to use a uniform kernel. We also explored bandwidths designed to produce balance as in Cattaneo et al. (2016b); these results proved to be sensitive to implementation details such as the p-value used to establish balance.)
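The five-observation rule for screened programs can be sketched as follows (hypothetical code, not the authors' implementation; `running` is a list of tie-breaker values for the relevant marginal priority group):

```python
def effective_bandwidth(h, running, cutoff, min_per_side=5):
    """Return bandwidth h, or 0 if fewer than min_per_side in-bandwidth
    observations fall on either side of the cutoff, in which case the
    program contributes no screened comparisons."""
    left = sum(1 for r in running if cutoff - h <= r < cutoff)
    right = sum(1 for r in running if cutoff <= r <= cutoff + h)
    return h if left >= min_per_side and right >= min_per_side else 0.0

# With running values 0..20 and cutoff 10, a bandwidth of 5 leaves at
# least five observations on each side; a bandwidth of 4 does not.
print(effective_bandwidth(5, list(range(21)), 10))  # 5
print(effective_bandwidth(4, list(range(21)), 10))  # 0.0
```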
As can be seen in column 2 of Table 5.2, which reports raw differences in
means by Grade A assignment status, applicants assigned to a Grade A school
are much more likely to have ranked a Grade A school first, and ranked more
Grade A schools highly than did other applicants. These applicants are also
more likely to rank a screened Grade A school first and among their top three.
Minority and free-lunch-eligible applicants are less likely to be assigned to
a Grade A school, while those assigned to a Grade A school have much higher
baseline scores, with gaps of $0.3-0.4$ in favor of those assigned. These raw
differences notwithstanding, our theoretical results suggest that estimates of
$\gamma_{A}$ in equation (20) should be close to zero.
This is borne out by the estimates reported in column 4 of the table, which
shows small, mostly insignificant differences in covariates by assignment
status when estimated using Imbens and Kalyanaraman (2012) bandwidths.
The estimated covariate gaps in column 6, computed using Calonico et al.
(2017) bandwidths, are similar. These estimates establish the empirical
relevance of both the large-market model of DA and the local DA propensity
score derived from it. (Footnote 21: Our balance assessment relies on linear models to estimate mean differences rather than comparisons of distributions. The focus on means is justified because the IV reduced-form relationships we aspire to validate are themselves regressions. Recall that in a regression context, reduced-form causal effects are unbiased provided omitted variables are mean-independent of the instrument, $D_{Ai}$. Since treatment variable $D_{Ai}$ is a dummy, the regression of omitted control variables on it is given by the difference in conditional control variable means computed with $D_{Ai}$ switched on and off.)
Causal effects of Grade A attendance are estimated by 2SLS using assignment
dummies as instruments for years of exposure to schools of a particular type,
as suggested by equations (1) and (2). As in the setup used to establish
covariate balance, however, the 2SLS estimating equations include two
endogenous variables, $C_{Ai}$ for Grade A exposure and $C_{0i}$ measuring
exposure to an ungraded school. Exposure is measured in years for SAT
outcomes; otherwise, $C_{Ai}$ and $C_{0i}$ are enrollment dummies. As in
equation (20), local propensity score controls consist of saturated models for
Grade A and ungraded propensity scores, with local linear control for screened
tie-breakers as described by equation (21). These equations also control for
baseline math and English scores, free lunch, special education, and English
language learner dummies, and gender and race dummies (estimates without these
controls are similar, though less precise). (Footnote 22: Replacing $W_{i}$ on the left-hand side of (20) with outcome variable $Y_{i}$, equations (20) and (21) describe the reduced form for our 2SLS estimator. In an application with lottery tie-breaking, Abdulkadiroğlu et al. (2017a) compare score-controlled 2SLS estimates with semiparametric instrumental variables estimates based on Abadie (2003). The former are considerably more precise than the latter.)
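For intuition on the just-identified 2SLS setup, the single-endogenous-variable, single-instrument case reduces to the Wald/IV ratio of the reduced-form covariance to the first-stage covariance. A minimal sketch (hypothetical; the paper's estimator additionally includes the ungraded endogenous variable, propensity-score controls, and baseline covariates):

```python
def cov(x, y):
    """Sample covariance, divided by n; the normalization cancels in the ratio."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

def iv_wald(y, d, z):
    """Just-identified IV estimate of the effect of d on y using a single
    binary instrument z: cov(y, z) / cov(d, z)."""
    return cov(y, z) / cov(d, z)

# Offers (z) shift exposure (d) imperfectly; if y = 2*d exactly, the IV
# estimate recovers the coefficient 2 despite non-compliance:
z = [0, 0, 1, 1, 0, 1]
d = [0, 1, 1, 1, 0, 1]   # some unassigned applicants enroll anyway
y = [2 * di for di in d]
print(iv_wald(y, d, z))  # 2.0
```

The gap between offer and enrollment noted above is exactly why the first stage is well below one, and why 2SLS rescales the offer effect by that first stage.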
OLS estimates of Grade A effects, reported as a benchmark in the second column
of Table 5.2, indicate that Grade A attendance is associated with higher SAT
scores and graduation rates, as well as increased college and career
readiness. The OLS estimates in Table 5.2 are from models that omit local
propensity score controls, computed in a sample that includes all participants
in the high school match without regard to assignment probability. OLS
estimates of the SAT gains associated with Grade A enrollment are around 6-7
points. Estimated graduation gains are similarly modest at 2.4 points, but
effects on college and career readiness are substantial, running 7-10 points
on a base rate around 40.
The first stage effects of Grade A assignments on Grade A enrollment, shown in
columns 4 and 6 of Panel A in Table 5.2, show that Grade A offers boost Grade
A enrollment by about 1.8 years between the time of application and SAT test-
taking. Grade A assignments boost the likelihood of any Grade A enrollment by
about 67 percentage points. This can be compared with Grade A enrollment rates
of 16-19 percent among those not assigned a Grade A seat in the
match. (Footnote 23: The gap between assignment and enrollment arises from several sources. Applicants remaining in the public system may attend charter or non-match exam schools. Applicants may also reject a match-based assignment, turning instead to an ad hoc administrative assignment process later in the year.)
In contrast with the OLS estimates in column 2, the 2SLS estimates shown in
columns 4 and 6 of Table 5.2 suggest that most of the SAT gains associated
with Grade A attendance reflect selection bias. Computed with either
bandwidth, 2SLS estimates of SAT math gains are around 2 points, though still
(marginally) significant. 2SLS estimates of SAT reading effects are even
smaller and not significantly different from zero, though estimated with
similar precision. At the same time, the 2SLS estimate for graduation status
shows a statistically significant gain of 3-4 percentage points, exceeding the
corresponding OLS estimate. The estimated standard error of $0.009$ associated
with the graduation estimate in column 4 seems especially noteworthy, as this
suggests that our research design has the power to uncover even modest
improvements in high school completion rates. (Footnote 24: Estimates reported in Online Appendix Table D.2 show little difference in follow-up rates between applicants who are and aren’t offered a Grade A seat. The 2SLS estimates in Table 5.2 are therefore unlikely to be compromised by differential attrition.)
The strongest Grade A effects appear in estimates of effects on college and
career preparedness and college readiness. This may in part reflect the fact
that Grade A schools are especially likely to offer advanced courses, the
availability of which contributes to the college- and career-related composite
outcome variables (the online appendix details the construction of these
variables). 2SLS estimates of effects on these outcomes are mostly close to
the corresponding OLS estimates (three out of four are smaller). Here too,
switching bandwidth matters little for magnitudes. Throughout Table 5.2,
however, 2SLS estimates computed with an IK bandwidth are more precise than
those computed using CCFT.
### 5.3 Screened vs. Lottery Grade A Effects
In New York, education policy discussions often focus on access to
academically selective screened schools such as Townsend Harris in Queens, a
school consistently ranked among the top American high schools by U.S. News
and World Report. Public interest in screened schools motivates an analysis
that distinguishes screened from lottery Grade A effects. The possibility of
different effects within the Grade A sector also raises concerns related to
the exclusion restriction underpinning a causal interpretation of 2SLS
estimates. In the context of our causal model of Grade A effects, the
exclusion restriction fails when the offer of a Grade A seat moves applicants
between schools of different quality within the Grade A sector. We therefore
explore multi-sector models that distinguish causal effects of attendance at
different sorts of Grade A schools, focusing on differences by admissions
regime since this is widely believed to matter for school quality.
The multi-sector estimates reported in Table 5.3 are from models that include
separate endogenous variables for screened and lottery Grade A schools, along
with a third endogenous variable for the ungraded sector. Instruments in this
just-identified set-up are two dummies indicating each sort of Grade A offer,
as well as a dummy indicating the offer of a seat at an ungraded school. 2SLS
models include separate saturated local propensity score controls for screened
Grade A offer risk, unscreened Grade A offer risk, and ungraded offer risk.
These multi-sector estimates are computed in a sample limited to applicants at
risk of assignment to either a screened or lottery Grade A school. In view of
the relative precision of estimates using IK bandwidth, multi-sector estimates
using CCFT bandwidths are omitted.
OLS estimates again provide an interesting benchmark. As can be seen in the
first two columns of Table 5.3, screened Grade A students appear to reap a
large SAT advantage even after controlling for baseline achievement and other
covariates. In particular, OLS estimates of Grade A effects for schools in the
screened sector are on the order of 14-18 points. At the same time, Grade A
lottery schools appear to generate achievement gains of only about 2 points.
Yet the corresponding 2SLS estimates, reported in columns 3 and 4 of the
table, suggest the achievement gains yielded by enrollment in both sorts of
Grade A schools are equally modest. The 2SLS estimates here run less than 2
points for math scores, with smaller (not significant) negative estimates for
reading. The sole statistically significant SAT effect is that for the lottery
Grade A school impact on math scores.
The remaining 2SLS estimates in the table likewise show similar screened-
school and lottery-school effects. With one marginal exception, p-values in
the table reveal estimates for the two sectors to be statistically
indistinguishable. As in Table 5.2, the 2SLS estimates in Table 5.3 suggest
that screened and lottery Grade A schools boost graduation rates by about 3
points. Effects on college and career preparedness are larger for lottery
schools than for screened, but this impact ordering is reversed for effects on
college readiness. On the whole, Table 5.3 leads us to conclude that OLS
estimates showing a large screened Grade A advantage are driven by selection
bias.
## 6 Summary and Next Steps
Centralized student assignment opens new opportunities for the measurement of
school quality. The research potential of matching markets is enhanced here by
marrying the conditional random assignment generated by lottery tie-breaking
with RD-style variation at screened schools. The key to this intermingled
empirical framework is a local propensity score that controls for differential
assignment rates in DA matches with general tie-breakers. This new tool allows
us to exploit all sources of quasi-experimental variation arising from any
mechanism in the DA class.
Our analysis of NYC school report cards suggests Grade A schools boost SAT
math scores and high school graduation rates by a few points. OLS estimates,
by contrast, show considerably larger effects of Grade A attendance on test
scores. Grade A screened schools enroll some of the city’s highest achievers,
but large OLS estimates of achievement gains from attendance at these schools
appear to be an artifact of selection bias. Concerns about access to such
schools (expressed, for example, in Harris and Fessenden (2017)) may therefore
be overblown. On the other hand, Grade A attendance increases measures of
college and career preparedness. These results may reflect the greater
availability of advanced courses in Grade A schools, a feature that should be
replicable at other schools.
In principle, Grade A assignments may act to move applicants between schools
within the Grade A sector as well as to boost overall Grade A enrollment.
Offer-induced movement between screened and lottery Grade A schools may
violate the exclusion restriction that underpins our 2SLS results if schools
within the Grade A sector vary in quality. It’s therefore worth asking whether
screened and lottery schools should indeed be treated as having the same
effect. Perhaps surprisingly, our analysis supports the idea that screened and
lottery Grade A schools can be pooled and treated as having a common average
causal effect.
Our provisional agenda for further research prioritizes an investigation of
econometric implementation strategies for DA-founded research designs. This
work is likely to build on the asymptotic framework in Bugni and Canay (2018)
and the study of RD designs with multiple tie-breakers in Papay et al. (2011),
Zajonc (2012), Wong et al. (2013b) and Cattaneo et al. (2019). It may be
possible to extend the reasoning behind doubly robust nonparametric
estimators, such as discussed by Rothe and Firpo (2019) and Rothe (2020), to
our setting.
Statistical inference in Section 5 relies on conventional large sample
reasoning of the sort widely applied in empirical RD applications. It seems
natural to consider permutation or randomization inference along the lines
suggested by Cattaneo et al. (2015, 2017), and Canay and Kamat (2017), along
with optimal inference and estimation strategies such as those introduced by
Armstrong and Kolesár (2018) and Imbens and Wager (2019). Also on the agenda,
Narita (2020) suggests a path toward generalization of the large-market model
of DA assignment risk. Finally, we look forward to a more detailed
investigation of the consequences of heterogeneous treatment effects for
identification strategies of the sort considered here.
## References
* Abadie (2003) Abadie, A. (2003): “Semiparametric instrumental variables estimation of treatment response models,” _Journal of Econometrics_ , 113(2), 231–263.
* Abdulkadiroğlu et al. (2017a) Abdulkadiroğlu, A., J. D. Angrist, Y. Narita, and P. A. Pathak (2017a): “Research Design Meets Market Design: Using Centralized Assignment for Impact Evaluation,” _Econometrica_ , 85(5), 1373–1432.
* Abdulkadiroğlu et al. (2017b) ——— (2017b): “Impact Evaluation in Matching Markets with General Tie-breaking,” NBER Working Paper No. 24172.
* Abdulkadiroğlu et al. (2019) ——— (2019): “Breaking Ties: Regression Discontinuity Design Meets Market Design,” Cowles Foundation Discussion Paper 2170.
* Abdulkadiroğlu et al. (2014) Abdulkadiroğlu, A., J. D. Angrist, and P. A. Pathak (2014): “The Elite Illusion: Achievement Effects at Boston and New York Exam Schools,” _Econometrica_ , 82(1), 137–196.
* Abdulkadiroğlu et al. (2005) Abdulkadiroğlu, A., P. A. Pathak, and A. E. Roth (2005): “The New York City High School Match,” _American Economic Review, Papers and Proceedings_ , 95, 364–367.
* Abdulkadiroğlu et al. (2009) ——— (2009): “Strategy-Proofness versus Efficiency in Matching with Indifferences: Redesigning the New York City High School Match,” _American Economic Review_ , 99(5), 1954–1978.
* Abdulkadiroğlu and Sönmez (2003) Abdulkadiroğlu, A. and T. Sönmez (2003): “School Choice: A Mechanism Design Approach,” _American Economic Review_ , 93, 729–747.
* Abebe et al. (2019) Abebe, G., M. Fafchamps, M. Koelle, and S. Quinn (2019): “Learning Management Through Matching: A Field Experiment Using Mechanism Design,” NBER Working Paper No. 26035.
* Ajayi (2014) Ajayi, K. (2014): “Does School Quality Improve Student Performance? New Evidence from Ghana,” IED Discussion Paper No. 260.
* Arai et al. (2019) Arai, Y., Y.-C. Hsu, T. Kitagawa, I. Mourifié, and Y. Wan (2019): “Testing Identifying Assumptions in Fuzzy Regression Discontinuity Designs,” Cemmap Working Paper CWP10/19.
* Armstrong and Kolesár (2018) Armstrong, T. B. and M. Kolesár (2018): “Optimal Inference in a Class of Regression Models,” _Econometrica_ , 86, 655–683.
* Azevedo and Leshno (2016) Azevedo, E. and J. Leshno (2016): “A Supply and Demand Framework for Two-Sided Matching Markets,” _Journal of Political Economy_ , 124(5), 1235–1268.
* Barrow et al. (2016) Barrow, L., L. Sartain, and M. de la Torre (2016): “The Role of Selective High Schools in Equalizing Educational Outcomes: Heterogeneous Effects by Neighborhood Socioeconomic Status,” FRB of Chicago Working Paper No. 2016-17.
* Bergman (2018) Bergman, P. (2018): “The Risks and Benefits of School Integration for Participating Students: Evidence from a Randomized Desegregation Program,” IZA Discussion Paper.
* Beuermann et al. (2016) Beuermann, D., C. K. Jackson, and R. Sierra (2016): “Privately Managed Public Secondary Schools and Academic Achievement in Trinidad and Tobago: Evidence from Rule-Based Student Assignments,” IDB Working Paper Series No. 637.
* Brody (2019) Brody, L. (2019): “Inside the Effort to Diversify Middle School in New York,” _Wall Street Journal_ , May 18.
* Bugni and Canay (2018) Bugni, F. A. and I. A. Canay (2018): “Testing Continuity of a Density via g-order statistics in the Regression Discontinuity Design,” Cemmap Working Paper CWP20/18.
* Calonico et al. (2017) Calonico, S., M. D. Cattaneo, M. H. Farrell, and R. Titiunik (2017): “Rdrobust: Software for Regression-discontinuity Designs,” _The Stata Journal_ , 17, 372–404.
* Calonico et al. (2019) ——— (2019): “Regression Discontinuity Designs Using Covariates,” _The Review of Economics and Statistics_ , 101, 442–451.
* Canay and Kamat (2017) Canay, I. A. and V. Kamat (2017): “Approximate Permutation Tests and Induced Order Statistics in the Regression Discontinuity Design,” _Review of Economic Studies_ , 85, 1577–1608.
* Cattaneo et al. (2015) Cattaneo, M. D., B. R. Frandsen, and R. Titiunik (2015): “Randomization Inference in the Regression Discontinuity Design: An Application to Party Advantages in the US Senate,” _Journal of Causal Inference_ , 3(1), 1–24.
* Cattaneo et al. (2017) Cattaneo, M. D., R. Titiunik, and G. Vazquez-Bare (2017): “Comparing Inference Approaches for RD Designs: A Reexamination of the Effect of Head Start on Child Mortality,” _Journal of Policy Analysis and Management_ , 36(3), 643–681.
* Cattaneo et al. (2019) ——— (2019): “Analysis of Regression Discontinuity Designs with Multiple Cutoffs or Multiple Scores.”
* Cattaneo et al. (2016a) Cattaneo, M. D., R. Titiunik, G. Vazquez-Bare, and L. Keele (2016a): “Interpreting Regression Discontinuity Designs with Multiple Cutoffs,” _Journal of Politics_ , 78(4), 1229–1248.
* Cattaneo et al. (2016b) Cattaneo, M. D., G. Vazquez-Bare, and R. Titiunik (2016b): “Inference in regression discontinuity designs under local randomization,” _Stata Journal_ , 16, 331–367(37).
* Chen and Kesten (2017) Chen, Y. and O. Kesten (2017): “Chinese College Admissions and School Choice Reforms: A Theoretical Analysis,” _Journal of Political Economy_ , 125, 99–139.
* Disare (2017) Disare, M. (2017): “City to Eliminate High School Admissions Method that Favored Families with Time and Resources,” _Chalkbeat_ , June 6.
* Dobbie and Fryer (2014) Dobbie, W. and R. G. Fryer (2014): “Exam High Schools and Academic Achievement: Evidence from New York City,” _American Economic Journal: Applied Economics_ , 6(3), 58–75.
* Dong (2018) Dong, Y. (2018): “Alternative Assumptions to Identify LATE in Fuzzy Regression Discontinuity Designs,” _Oxford Bulletin of Economics and Statistics_ , 80, 1020–1027.
* Dur et al. (2018) Dur, U., P. A. Pathak, F. Song, and T. Sönmez (2018): “Deduction Dilemmas: The Taiwan Assignment Mechanism,” NBER Working Paper No. 25024.
* Ergin and Sönmez (2006) Ergin, H. and T. Sönmez (2006): “Games of School Choice under the Boston Mechanism,” _Journal of Public Economics_ , 90, 215–237.
* Fort et al. (2020) Fort, M., A. Ichino, and G. Zanella (2020): “Cognitive and Non-Cognitive Costs of Daycare 0-2 for Children in Advantaged Families,” _Journal of Political Economy_ , 128.
* Frandsen (2017) Frandsen, B. R. (2017): “Party Bias in Union Representation Elections: Testing for Manipulation in the Regression Discontinuity Design When the Running Variable is Discrete,” in _Regression Discontinuity Designs: Theory and Applications_ , Emerald Publishing Limited, 281–315.
* Frolich (2007) Frolich, M. (2007): “Regression Discontinuity Design with Covariates (Unpublished Appendix),” IZA Discussion Paper No. 3024.
* Frolich and Huber (2019) Frolich, M. and M. Huber (2019): “Including Covariates in the Regression Discontinuity Design,” _Journal of Business and Economic Statistics_ , 37, 736–748.
* Hahn et al. (2001) Hahn, J., P. Todd, and W. Van der Klaauw (2001): “Identification and Estimation of Treatment Effects with a Regression-Discontinuity Design,” _Econometrica_ , 69(1), 201–209.
* Harris and Fessenden (2017) Harris, E. and F. Fessenden (2017): “The Broken Promises of Choice in New York City Schools,” _New York Times_ , May 5.
* Hastings et al. (2013) Hastings, J., C. Neilson, and S. D. Zimmerman (2013): “Are Some Degrees Worth More than Others? Evidence from College Admission Cutoffs in Chile,” NBER Working Paper No. 19241.
* Imbens and Kalyanaraman (2012) Imbens, G. W. and K. Kalyanaraman (2012): “Optimal Bandwidth Choice for the Regression Discontinuity Estimator,” _Review of Economic Studies_ , 79(3), 933–959.
* Imbens and Wager (2019) Imbens, G. W. and S. Wager (2019): “Optimized Regression Discontinuity Designs,” _Review of Economics and Statistics_ , 101, 264–278.
* Jackson (2010) Jackson, K. (2010): “Do Students Benefit from Attending Better Schools? Evidence from Rule-based Student Assignments in Trinidad and Tobago,” _Economic Journal_ , 120(549), 1399–1429.
* Jackson (2012) ——— (2012): “Single-sex Schools, Student Achievement, and Course Selection: Evidence from Rule-based Student Assignments in Trinidad and Tobago,” _Journal of Public Economics_ , 96(1-2), 173–187.
* Kirkeboen et al. (2016) Kirkeboen, L., E. Leuven, and M. Mogstad (2016): “Field of Study, Earnings, and Self-Selection,” _Quarterly Journal of Economics_ , 131, 1057–1111.
* Lee (2008) Lee, D. S. (2008): “Randomized Experiments from Non-Random Selection in US House Elections,” _Journal of Econometrics_ , 142, 675–697.
* Lucas and Mbiti (2014) Lucas, A. and I. Mbiti (2014): “Effects of School Quality on Student Achievement: Discontinuity Evidence from Kenya,” _American Economic Journal: Applied Economics_ , 6(3), 234–263.
* Narita (2020) Narita, Y. (2020): “A Theory of Quasi-Experimental Evaluation of School Quality,” _Management Science_.
* Papay et al. (2011) Papay, J. P., J. B. Willett, and R. J. Murnane (2011): “Extending the Regression-Discontinuity Approach to Multiple Assignment Variables,” _Journal of Econometrics_ , 161(2), 203–207.
* Pathak and Sönmez (2013) Pathak, P. A. and T. Sönmez (2013): “School Admissions Reform in Chicago and England: Comparing Mechanisms by their Vulnerability to Manipulation,” _American Economic Review_ , 103(1), 80–106.
* Pérez Vincent and Ubfal (2019) Pérez Vincent, S. and D. Ubfal (2019): “Using Centralized Assignment to Evaluate Entrepreneurship and Life-Skills Training Programs in Argentina,” Working Paper.
* Pop-Eleches and Urquiola (2013) Pop-Eleches, C. and M. Urquiola (2013): “Going to a Better School: Effects and Behavioral Responses,” _American Economic Review_ , 103(4), 1289–1324.
* Rockoff and Turner (2011) Rockoff, J. and L. Turner (2011): “Short Run Impacts of Accountability of School Quality,” _American Economic Journal: Economic Policy_ , 2(4), 119–147.
* Rosenbaum and Rubin (1983) Rosenbaum, P. R. and D. B. Rubin (1983): “The Central Role of the Propensity Score in Observational Studies for Causal Effects,” _Biometrika_ , 70, 41–55.
* Rothe (2020) Rothe, C. (2020): “Flexible Covariate Adjustments in Randomized Experiments.”
* Rothe and Firpo (2019) Rothe, C. and S. Firpo (2019): “Properties of doubly robust estimators when nuisance functions are estimated nonparametrically,” _Econometric Theory_ , 35, 1048–1087.
* Sekhon and Titiunik (2017) Sekhon, J. S. and R. Titiunik (2017): “On Interpreting the Regression Discontinuity Design as a Local Experiment,” in _Regression Discontinuity Designs: Theory and Applications_ , Emerald Publishing Limited, 1–28.
* van der Vaart (2000) van der Vaart, A. W. (2000): _Asymptotic Statistics_ , Cambridge University Press.
* Veiga (2018) Veiga, C. (2018): “Brooklyn Middle Schools Eliminate ‘Screening’ as New York City Expands Integration Efforts,” _Chalkbeat_ , September 20.
* Walcott (2012) Walcott, D. (2012): “NYC Department of Education: Progress Reports for New York City Public Schools.”
* Wellner (1981) Wellner, J. A. (1981): “A Glivenko-Cantelli Theorem for Empirical Measures of Independent but Non-Identically Distributed Random Variables,” _Stochastic Processes and Their Applications_ , 11(3), 309–312.
* Wong et al. (2013a) Wong, V. C., P. M. Steiner, and T. D. Cook (2013a): “Analyzing Regression-Discontinuity Designs with Multiple Assignment Variables: A Comparative Study of Four Estimation Methods,” _Journal of Educational and Behavioral Statistics_ , 38, 107–141.
* Wong et al. (2013b) ——— (2013b): “Analyzing Regression-Discontinuity Designs With Multiple Assignment Variables: A Comparative Study of Four Estimation Methods,” _Journal of Educational and Behavioral Statistics_ , 38, 107–141.
* Zajonc (2012) Zajonc, T. (2012): “Regression Discontinuity Design with Multiple Forcing Variables,” _Essays on Causal Inference for Public Policy_ , 45–81.
* Zimmerman (2019) Zimmerman, S. D. (2019): “Elite Colleges and Upward Mobility to Top Jobs and Top Incomes,” _American Economic Review_ , 109, 1–47.
Appendix
## Appendix A Proof of Theorem 1
Let $F^{i}_{v}(r)$ denote the cumulative distribution function (CDF) of
$R_{iv}$ evaluated at $r$ and define
$F_{v}(r|\theta)=E[F^{i}_{v}(r)|\theta_{i}=\theta].$ (22)
This is the fraction of type $\theta$ applicants with tie-breaker $v$ below
$r$ (set to zero when type $\theta$ ranks no schools using tie-breaker $v$).
We may condition on additional events.
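An empirical analogue of the conditional CDF in (22) can be sketched as follows (hypothetical helper; `tiebreakers` and `types` are assumed parallel lists over applicants):

```python
def cond_cdf(r, tiebreakers, types, theta):
    """Empirical analogue of F_v(r | theta) in equation (22): the fraction
    of type-theta applicants whose tie-breaker value is at or below r.
    Returns 0 when no type-theta applicant uses this tie-breaker."""
    vals = [rv for rv, t in zip(tiebreakers, types) if t == theta]
    if not vals:
        return 0.0  # type theta ranks no schools using tie-breaker v
    return sum(1 for rv in vals if rv <= r) / len(vals)

# Two of the three type-'A' applicants have tie-breakers at or below 0.5:
print(cond_cdf(0.5, [0.2, 0.6, 0.4, 0.9], ['A', 'A', 'A', 'B'], 'A'))
```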
Recall that the joint distribution of tie-breakers for applicant $i$ is
assumed to be continuously differentiable with positive density. This
assumption has the following implication: The conditional distribution of tie-
breaker $v$, $F_{v}(r|e),$ is continuously differentiable, with
$F^{\prime}_{v}(r|e)>0$ at any $r=\tau_{1},...,\tau_{S}$. Here, the conditioning event $e$ is any event of the form $\theta_{i}=\theta$, $R_{iu}>r_{u}$ for $u=1,...,v-1$, and $T_{i}(\delta)=T$.
Take any large market with the general tie-breaking structure in Section 4.
For each $\delta>0$ and each tie-breaker $v=U+1,...,V+1$, let $e(v)$ be short-
hand notation for “$\theta_{i}=\theta,R_{iu}>MID^{u}_{\theta s}\text{ for
}u=1,...,v-1,T_{i}(\delta)=T,$ and $W_{i}=w$.” Similarly, $e(1)$ is short-hand
notation for “$\theta_{i}=\theta,T_{i}(\delta)=T,$ and $W_{i}=w$.” Let
$\psi_{s}(\theta,T,\delta,w)\equiv E[D_{i}(s)|e(1)]$ be the assignment
probability for an applicant with $\theta_{i}=\theta,T_{i}(\delta)=T,$ and
characteristics $W_{i}=w$. Our proofs use a lemma that describes this
assignment probability. To state the lemma, for $v>U$, let
$\Phi_{\delta}(v)\equiv\begin{cases}\dfrac{F_{v}(MID^{v}_{\theta s}+\delta|e(v))-F_{v}(MID^{v}_{\theta s}-\delta|e(v))}{F_{v}(MID^{v}_{\theta s}|e(v))-F_{v}(MID^{v}_{\theta s}-\delta|e(v))}&\text{if }t_{b}(\delta)=c\text{ for some }b\in B^{v}_{\theta s},\\[1.5ex] 1&\text{otherwise.}\end{cases}$
We use this object to define
$\Phi_{\delta}\equiv\prod_{v=1}^{U}(1-MID^{v}_{\theta s})\prod^{V}_{v=U+1}\Phi_{\delta}(v).$
Finally, let
$\Phi^{\prime}_{\delta}\equiv\begin{cases}\max\left\{0,\dfrac{F_{v(s)}(\tau_{s}|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}{F_{v(s)}(\tau_{s}+\delta|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}\right\}&\text{if }v(s)>U,\\[1.5ex] \max\left\{0,\dfrac{\tau_{s}-MID^{v(s)}_{\theta s}}{1-MID^{v(s)}_{\theta s}}\right\}&\text{if }v(s)\leq U.\end{cases}$
###### Lemma 1.
In the general tie-breaking setting of Section 4, for any fixed $\delta>0$
such that $\delta<\min_{\theta,s,v}|\tau_{s}-MID_{\theta,s}^{v}|$, we have:
$\psi_{s}(\theta,T,\delta,w)=\begin{cases}0&\text{if }t_{s}(\delta)=n\text{ or }t_{b}(\delta)=a\text{ for some }b\in B_{\theta s},\\ \Phi_{\delta}&\text{otherwise, if }t_{s}(\delta)=a,\\ \Phi_{\delta}\times\Phi^{\prime}_{\delta}&\text{otherwise, if }t_{s}(\delta)=c.\end{cases}$
Proof of Lemma 1. We start by verifying the first line in the expression for $\psi_{s}(\theta,T,\delta,w)$. Applicants who don’t rank $s$ have
$\psi_{s}(\theta,T,\delta,w)=0$. Among those who rank $s$, those of
$t_{s}(\delta)=n$ have $\rho_{\theta s}>\rho_{s}\text{ or, if }v(s)\neq
0,\>\rho_{\theta s}=\rho_{s}\text{ and }R_{iv(s)}>\tau_{s}+\delta$. If
$\rho_{\theta s}>\rho_{s}$, then $\psi_{s}(\theta,T,\delta,w)=0$. Even if
$\rho_{\theta s}\leq\rho_{s}$, as long as $\>\rho_{\theta s}=\rho_{s}\text{
and }R_{iv(s)}>\tau_{s}+\delta$, student $i$ never clears the cutoff at school
$s$ so $\psi_{s}(\theta,T,\delta,w)=0$.
To show the remaining cases, take as given that it is not the case that
$t_{s}(\delta)=n\text{ or }t_{b}(\delta)=a\text{ for some }b\in B_{\theta s}$.
Applicants with $t_{b}(\delta)\neq a$ for all $b\in B_{\theta s}$ and
$t_{s}(\delta)=a$ or $c$ may be assigned $b\in B_{\theta s},$ where
$\rho_{\theta b}=\rho_{b}$. Since the (aggregate) distribution of tie-breaking
variables for type $\theta$ students is
$\hat{F}_{v}(\cdot|\theta)=F_{v}(\cdot|\theta)$, conditional on
$T_{i}(\delta)=T$, the proportion of type $\theta$ applicants not assigned any
$b\in B_{\theta s}$ where $\rho_{\theta b}=\rho_{b}$ is
$\Phi_{\delta}=\prod_{v=1}^{U}(1-MID^{v}_{\theta
s})\prod^{V}_{v=U+1}\Phi_{\delta}(v)$ since each $\Phi_{\delta}(v)$ is the
probability of not being assigned to any $b\in B^{v}_{\theta s}$. To see why
$\Phi_{\delta}(v)$ is the probability of not being assigned to any $b\in
B^{v}_{\theta s}$, note that if $t_{b}(\delta)\neq c\text{ for all }b\in
B^{v}_{\theta s}$, then $t_{b}(\delta)=n$ for all $b\in B^{v}_{\theta s}$ so
that applicants are never assigned to any $b\in B^{v}_{\theta s}$. Otherwise, i.e., if $t_{b}(\delta)=c$ for some $b\in B^{v}_{\theta s}$, applicants are assigned some $b\in B^{v}_{\theta s}$ if and only if their value of tie-breaker $v$ clears the cutoff of the school that produces $MID^{v}_{\theta s}$. This event happens with probability
$\dfrac{F_{v}(MID^{v}_{\theta s}|e(v))-F_{v}(MID^{v}_{\theta
s}-\delta|e(v))}{F_{v}(MID^{v}_{\theta s}+\delta|e(v))-F_{v}(MID^{v}_{\theta
s}-\delta|e(v))},$
implying that $\Phi_{\delta}(v)$ is the probability of not being assigned to
any $b\in B^{v}_{\theta s}$.
Given this fact, to see the second line, note that every applicant of type
$t_{s}(\delta)=a$ who is not assigned a higher choice is assigned $s$ for sure
because $\rho_{\theta s}<\rho_{s}$ or $\rho_{\theta s}+R_{iv(s)}<\xi_{s}$.
Therefore, we have
$\psi_{s}(\theta,T,\delta,w)=\Phi_{\delta}.$
Finally, consider applicants with $t_{s}(\delta)=c$. The fraction of those who
are not assigned a higher choice is $\Phi_{\delta}$, as explained above. Also,
for tie-breaker $v(s)$, the tie-breaker values of these applicants are larger
(worse) than $MID^{v(s)}_{\theta s}$. If $\tau_{s}<MID^{v(s)}_{\theta s},$
then no such applicant is assigned $s.$ If $\tau_{s}\geq MID^{v(s)}_{\theta
s},$ then the fraction of applicants who are assigned $s$ conditional on
$\tau_{s}\geq MID^{v(s)}_{\theta s}$ is given by
$\max\left\\{0,\frac{F_{v(s)}(\tau_{s}|e(V+1))-\max\\{F_{v(s)}(MID^{v(s)}_{\theta
s}|e(V+1)),F_{v(s)}(\tau_{s}-\delta|e(V+1))\\}}{F_{v(s)}(\tau_{s}+\delta|e(V+1))-\max\\{F_{v(s)}(MID^{v(s)}_{\theta
s}|e(V+1)),F_{v(s)}(\tau_{s}-\delta|e(V+1))\\}}\right\\}\text{ if }v(s)>U$
and
$\max\left\\{0,\dfrac{\tau_{s}-MID^{v(s)}_{\theta s}}{1-MID^{v(s)}_{\theta
s}}\right\\}\text{ if }v(s)\leq U.$
If $MID^{v(s)}_{\theta s}<\tau_{s}$, then
$\delta<\min_{\theta,s,v}|\tau_{s}-MID_{\theta,s}^{v}|$ implies
$MID^{v(s)}_{\theta s}<\tau_{s}-\delta$. This in turn implies
$\max\\{F_{v(s)}(MID^{v(s)}_{\theta
s}|e(V+1)),F_{v(s)}(\tau_{s}-\delta|e(V+1))\\}=F_{v(s)}(\tau_{s}-\delta|e(V+1)).$
If $MID^{v(s)}_{\theta s}>\tau_{s}$, then
$\delta<\min_{\theta,s,v}|\tau_{s}-MID_{\theta,s}^{v}|$ implies
$MID^{v(s)}_{\theta s}>\tau_{s}+\delta$. By the definition of $e(V+1)$,
$R_{iu}>MID^{u}_{\theta s}\text{ for }u=1,...,V$. Therefore, there is no
applicant with $R_{iv(s)}>MID^{v(s)}_{\theta s}$ and
$R_{iv(s)}\in[\tau_{s}-\delta,\tau_{s}+\delta]$.
Hence, conditional on $t_{s}(\delta)=c$ and not being assigned a choice
preferred to $s,$ the probability of being assigned $s$ is given by
$\Phi^{\prime}_{\delta}$. Therefore, for students with $t_{s}(\delta)=c$, we
have $\psi_{s}(\theta,T,\delta,w)=\Phi_{\delta}\times\Phi^{\prime}_{\delta}.$∎
###### Lemma 2.
In the general tie-breaking setting of Section 4, for all $s$, $\theta$, and
sufficiently small $\delta>0$, we have:
$\psi_{s}(\theta,T,\delta,w)=\begin{cases}0&\text{ if }t_{s}(0)=n\text{ or }t_{b}(0)=a\text{ for some }b\in B_{\theta s},\\ \Phi^{*}_{\delta}&\text{ otherwise, if }t_{s}(0)=a,\\ \Phi^{*}_{\delta}\times\dfrac{F_{v(s)}(\tau_{s}|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}{F_{v(s)}(\tau_{s}+\delta|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}&\text{ otherwise, if }t_{s}(0)=c\text{ and }v(s)>U,\\ \Phi^{*}_{\delta}\times\max\left\{0,\dfrac{\tau_{s}-MID^{v(s)}_{\theta s}}{1-MID^{v(s)}_{\theta s}}\right\}&\text{ otherwise, if }t_{s}(0)=c\text{ and }v(s)\leq U.\end{cases}$ (23)
where
$\Phi^{*}_{\delta}(v)\equiv\begin{cases}\dfrac{F_{v}(MID^{v}_{\theta s}+\delta|e(v))-F_{v}(MID^{v}_{\theta s}|e(v))}{F_{v}(MID^{v}_{\theta s}+\delta|e(v))-F_{v}(MID^{v}_{\theta s}-\delta|e(v))}&\text{ if }MID^{v}_{\theta s}=\tau_{b}\text{ and }t_{b}=c\text{ for some }b\in B^{v}_{\theta s},\\ 1&\text{ otherwise}\end{cases}$
and
$\Phi^{*}_{\delta}\equiv\prod_{v=1}^{U}(1-MID^{v}_{\theta
s})\prod^{V}_{v=U+1}\Phi^{*}_{\delta}(v).$
Proof of Lemma 2. The first line follows from Lemma 1 and the fact that
$t_{s}(0)=n\text{ or }t_{b}(0)=a\text{ for some }b\in B_{\theta s}$ imply
$t_{s}(\delta)=n\text{ or }t_{b}(\delta)=a\text{ for some }b\in B_{\theta s}$
for sufficiently small $\delta>0$.
For the remaining lines, first note that conditional on $t_{s}(0)\neq n\text{
and }t_{b}(0)\neq a\text{ for all }b\in B_{\theta s}$, we have
$\Phi^{*}_{\delta}(v)=\Phi_{\delta}(v)$ and so
$\Phi^{*}_{\delta}=\Phi_{\delta}$ holds for small enough $\delta$.
$\Phi^{*}_{\delta}$ therefore is the probability of not being assigned to a
school preferred to $s$ in the last three cases.
The second line is then by the fact that $t_{s}(0)=a$ implies
$t_{s}(\delta)=a$ for small enough $\delta>0$. The third line is by the fact
that for small enough $\delta>0$,
$\displaystyle\Phi^{\prime}_{\delta}$
$\displaystyle=\max\bigg{\\{}0,\frac{F_{v(s)}(\tau_{s}|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}{F_{v(s)}(\tau_{s}+\delta|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}\bigg{\\}}$
$\displaystyle=\frac{F_{v(s)}(\tau_{s}|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}{F_{v(s)}(\tau_{s}+\delta|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))},$
where we invoke Assumption 2, which implies $MID^{v}_{\theta s}\neq\tau_{s}$.
The last line directly follows from Lemma 1. ∎
We use Lemma 2 to derive Theorem 1. We characterize $\lim_{\delta\rightarrow
0}\psi_{s}(\theta,T,\delta,w)$ and show that it coincides with
$\psi_{s}(\theta,T)$ in the main text. In the first case in Lemma 2,
$\psi_{s}(\theta,T,\delta,w)$ is constant (0) for any small enough $\delta$.
The constant value is also $\lim_{\delta\rightarrow
0}\psi_{s}(\theta,T,\delta,w)$ in this case.
To characterize $\lim_{\delta\rightarrow 0}\psi_{s}(\theta,T,\delta,w)$ in the
remaining cases, note that by the differentiability of $F_{v}(\cdot|e(v))$
(recall the continuous differentiability of $F^{i}_{v}(r|e)$), L’Hopital’s
rule implies:
$\lim_{\delta\rightarrow 0}\dfrac{F_{v(s)}(\tau_{s}|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}{F_{v(s)}(\tau_{s}+\delta|e(V+1))-F_{v(s)}(\tau_{s}-\delta|e(V+1))}=\dfrac{F^{\prime}_{v(s)}(\tau_{s}|e(V+1))}{2F^{\prime}_{v(s)}(\tau_{s}|e(V+1))}=0.5$
and
$\lim_{\delta\rightarrow 0}\dfrac{F_{v}(MID^{v}_{\theta s}+\delta|e(v))-F_{v}(MID^{v}_{\theta s}|e(v))}{F_{v}(MID^{v}_{\theta s}+\delta|e(v))-F_{v}(MID^{v}_{\theta s}-\delta|e(v))}=\dfrac{F^{\prime}_{v}(MID^{v}_{\theta s}|e(v))}{2F^{\prime}_{v}(MID^{v}_{\theta s}|e(v))}=0.5.$
This implies $\lim_{\delta\rightarrow
0}\Phi^{*}_{\delta}(v)=0.5^{1\\{MID^{v}_{\theta s}=\tau_{b}\text{ and
}t_{b}=c\text{ for some }b\in B^{v}_{\theta s}\\}}$ since $1\\{MID^{v}_{\theta
s}=\tau_{b}\text{ and }t_{b}=c\text{ for some }b\in B^{v}_{\theta s}\\}$ does
not depend on $\delta$. Therefore
$\lim_{\delta\rightarrow 0}\Phi^{*}_{\delta}=\prod_{v=1}^{U}(1-MID^{v}_{\theta
s})0.5^{m_{s}(\theta,T)}$
where $m_{s}(\theta,T)=|\\{v>U:MID^{v}_{\theta s}=\tau_{b}\text{ and
}t_{b}=c\text{ for some }b\in B^{v}_{\theta s}\\}|$.
Combining these limiting facts with the fact that the limit of a product of
functions equals the product of the limits of the functions, we obtain the
following: $\lim_{\delta\rightarrow 0}\psi_{s}(\theta,T,\delta,w)=0$ if (a) $t_{s}=n$ or (b) $t_{b}=a\text{ for some }b\in B_{\theta s}$. Otherwise,
$\lim_{\delta\rightarrow 0}\psi_{s}(\theta,T,\delta,w)=\begin{cases}\sigma_{s}(\theta,T)\lambda_{s}(\theta)&\text{ if }t_{s}=a,\\ \sigma_{s}(\theta,T)\lambda_{s}(\theta)\max\left\{0,\dfrac{\tau_{s}-MID^{v(s)}_{\theta s}}{1-MID^{v(s)}_{\theta s}}\right\}&\text{ if }t_{s}=c\text{ and }v(s)\leq U,\\ 0.5\,\sigma_{s}(\theta,T)\lambda_{s}(\theta)&\text{ if }t_{s}=c\text{ and }v(s)>U.\end{cases}$
This expression coincides with $\psi_{s}(\theta,T)$, completing the proof of
Theorem 1.
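As a quick numerical sanity check on the two L’Hopital limits used above (our own illustration, not part of the proof), the sketch below evaluates the windowed ratio for a logistic CDF standing in for the conditional tie-breaker distribution; the function names are ours.

```python
import math

def F(x):
    # Logistic CDF: a smooth stand-in for a conditional tie-breaker CDF F_v(.|e)
    return 1.0 / (1.0 + math.exp(-x))

def windowed_ratio(tau, delta):
    # (F(tau) - F(tau - delta)) / (F(tau + delta) - F(tau - delta))
    return (F(tau) - F(tau - delta)) / (F(tau + delta) - F(tau - delta))

# By L'Hopital's rule, the ratio tends to F'(tau) / (2 F'(tau)) = 0.5 as delta -> 0.
for delta in [0.5, 0.1, 0.01, 0.001]:
    print(delta, windowed_ratio(0.3, delta))
```

The analogous ratio centered at $MID^{v}_{\theta s}$ behaves the same way, since only smoothness of the CDF at the center point matters.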
Online Appendices
## Appendix B Understanding Theorem 1
Figure B1 illustrates Theorem 1 for an applicant who ranks screened schools 1, 3, 5, and 6 and lottery schools 2 and 4, where school $k$ is the applicant’s $k$-th choice. The line next to each school represents applicant position (priority plus tie-breaker) at that school. Schools whose lines share a color share a tie-breaker: schools 1 and 5 use screened tie-breaker 2, schools 2 and 4 use lottery tie-breaker 1, and schools 3 and 6 use screened tie-breaker 3. Since school 1 has only one priority group, positions run from 1 to 2; school 2 has two priority groups, so positions run from 1 to 3. Figure B1 marks the applicant’s position $\pi$ with an arrow. At screened schools, the brackets around the DA cutoff $\xi$ represent the $\delta$-neighborhood around the cutoff.
Figure B1: Illustrating Theorem 1
Notes: This figure illustrates Theorem 1 for one applicant listing six
schools. The applicant has marginal priority (shown in bold) at each. Dashes
mark intervals in which offer risk is strictly between 0 and 1. The set of
applicants subject to random assignment includes everyone with marginal
priority at lottery schools and applicants with tie-breakers inside the
relevant bandwidth at screened schools. Same-color tie-breakers are shared.
Schools 1, 3, 5, and 6 are screened, while 2 and 4 have lottery tie-breakers.
The applicant’s preferences are 1 $\succ_{i}$ 2 $\succ_{i}$ 3 $\succ_{i}$ 4
$\succ_{i}$ 5 $\succ_{i}$ 6. Arrows mark $\pi_{is}=\rho_{is}+R_{iv(s)}$, the
applicant’s position at each school $s$. Lower $\pi_{is}$ is better. Integers
indicate priorities $\rho_{s}$, and tick marks indicate the DA cutoff,
$\xi_{s}=\rho_{s}+\tau_{s}$. Note that $t_{\mathit{6}}=a$, so this applicant
is sure to be seated somewhere. The assignment probability therefore sums to
1: if $\tau_{\mathit{2}}\geq\tau_{\mathit{4}},$ the probability of any
assignment is $\tau_{\mathit{2}}+0.5\times(1-\tau_{\mathit{2}})+0+2\times
0.5^{2}\times(1-\tau_{\mathit{2}})=1$; if
$\tau_{\mathit{2}}<\tau_{\mathit{4}},$ this probability is
$\tau_{\mathit{2}}+0.5\times(1-\tau_{\mathit{2}})+0.5\times(\tau_{\mathit{4}}-\tau_{\mathit{2}})+2\times
0.5^{2}\times(1-\tau_{\mathit{4}})=1$.
The applicant is never seated at school 1 since his position is to the right
of the $\delta$-neighborhood, conditionally seated at schools 2 and 4 since
his priority is equal to the marginal priority at each school, conditionally
seated at schools 3 and 5 since his position is within the
$\delta$-neighborhood at each school, and always seated at school 6 since his
position is to the left of the $\delta$-neighborhood.
The columns next to the lines record the tie-breaker cutoff, $\tau$; the disqualification probability at lottery schools, $\lambda$, and the schools contributing to it; the disqualification probability at screened schools, $\sigma$, and the schools contributing to it; and the assignment probability.
The local score at each school is computed as follows:
* School 1:
The local score at school 1 is zero because $t_{i\mathit{1}}(\delta)=n$.
* School 2:
MID at school 2 is zero because this applicant ranks no other lottery school
higher. Hence, the second line of (11) applies and probability is given by the
tie-breaker cutoff at school 2, which is $\tau_{\mathit{2}}$.
* School 3:
Since $t_{i\mathit{3}}(\delta)=c$, the third line of (11) applies. The local
score at school 3 is the probability of not being assigned to school 2, that
is, $1-\tau_{\mathit{2}}$, times $0.5$. This last term is the probability
associated with being local to the cutoff at school 3.
* School 4:
MID at school 4 is determined by the tie-breaker cutoff at school 2. When MID exceeds the tie-breaker cutoff at school 4, the school 4 assignment probability is zero. Otherwise, since $t_{i\mathit{4}}(\delta)=c$ and school 4 is a lottery school, the second line of (11) applies. The probability is therefore $0.5$ times the difference between the cutoff at school 4 and MID.
* School 5:
MID at school 5 is determined by the larger of the tie-breaker cutoffs at
school 2 and school 4. Since $t_{i\mathit{5}}(\delta)=c$, the third line of
(11) applies, and the probability is determined by $(0.5)^{2}$ times
$\lambda$, the disqualification probability at lottery schools.
* School 6:
Finally, since $t_{i\mathit{6}}(\delta)=a$, the first line of (11) applies and
the local score becomes $(0.5)^{2}$ times $\lambda$.
Since $t_{i\mathit{6}}(\delta)=a$, the probabilities sum to 1. If
$\tau_{\mathit{2}}\geq\tau_{\mathit{4}}$, the probability of any assignment is
$\tau_{\mathit{2}}+0.5\times(1-\tau_{\mathit{2}})+2\times(0.5)^{2}\times(1-\tau_{\mathit{2}})=1$.
If $\tau_{\mathit{2}}<\tau_{\mathit{4}}$, the probability is $\tau_{\mathit{2}}+0.5\times(1-\tau_{\mathit{2}})+0.5\times(\tau_{\mathit{4}}-\tau_{\mathit{2}})+2\times 0.5^{2}\times(1-\tau_{\mathit{4}})=1$.
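The bookkeeping in this example is easy to mechanize. The following sketch (our own illustration; `local_scores` simply transcribes the bullet points above) computes the six local scores and confirms that they sum to 1 under either ordering of the lottery cutoffs.

```python
def local_scores(tau2, tau4):
    # Local scores for the six-school example in Figure B1.
    # Schools 1, 3, 5, 6 are screened; 2 and 4 share a common lottery.
    mid4 = tau2                       # MID at school 4: lottery cutoff ranked above it
    lam = 1.0 - max(tau2, tau4)       # disqualification probability at lottery schools
    s1 = 0.0                          # t_1(delta) = n: never seated
    s2 = tau2                         # marginal at lottery school 2, MID = 0
    s3 = 0.5 * (1.0 - tau2)           # local at screened school 3
    s4 = 0.5 * max(0.0, tau4 - mid4)  # lottery school 4, net of school-3 risk
    s5 = 0.5**2 * lam                 # local at screened schools 3 and 5
    s6 = 0.5**2 * lam                 # always seated at 6 once schools 1-5 miss
    return [s1, s2, s3, s4, s5, s6]

for tau2, tau4 in [(0.6, 0.4), (0.3, 0.7)]:
    scores = local_scores(tau2, tau4)
    print(scores, sum(scores))  # the six scores sum to 1 in both orderings
```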
## Appendix C Additional Results and Proofs
### C.1 The DA Propensity Score
This appendix derives the DA propensity score defined as the probability of
assignment conditional on type for all applicants, without regard to cutoff
proximity. The serial dictatorship propensity score discussed in Section 3.1
is a special case of this.
$MID^{v}_{\theta s}$ and priority status determine the DA propensity score with general tie-breakers. For this proposition, we assume that tie-breakers
$R_{iv}$ and $R_{iv^{\prime}}$ are independent for $v\neq v^{\prime}$.
###### Proposition 3 (The DA Propensity Score with General Tie-breaking).
Consider DA with multiple tie-breakers indexed by $v$, distributed
independently of one another according to $F_{v}(r|\theta)$. For all $s$ and
$\theta$ in this match,
$p_{s}(\theta)=\begin{cases}0&\text{ if }\rho_{\theta s}>\rho_{s},\\ \prod_{v}(1-F_{v}(MID^{v}_{\theta s}|\theta))&\text{ if }\rho_{\theta s}<\rho_{s},\\ \prod_{v\neq v(s)}(1-F_{v}(MID^{v}_{\theta s}|\theta))\times\max\left\{0,F_{v(s)}(\tau_{s}|\theta)-F_{v(s)}(MID^{v(s)}_{\theta s}|\theta)\right\}&\text{ if }\rho_{\theta s}=\rho_{s},\end{cases}$
where $F_{v(s)}(\tau_{s}|\theta)=\tau_{s}$ and $F_{v(s)}(MID^{{v(s)}}_{\theta
s}|\theta)=MID^{v(s)}_{\theta s}$ when $v(s)\in\\{1,...,U\\}$.
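To make the three cases of Proposition 3 concrete, the sketch below transcribes the formula; `da_score`, its argument layout, and the uniform CDFs (mimicking pure lottery tie-breaking) are our own illustrative choices, not code from the paper.

```python
def da_score(rho_theta_s, rho_s, mids, F, vs, tau_s):
    """DA propensity score of Proposition 3 (sketch).

    mids: dict v -> MID^v_{theta s}; F: dict v -> CDF of tie-breaker v given theta;
    vs: index v(s) of the tie-breaker used at school s; tau_s: cutoff at s.
    """
    if rho_theta_s > rho_s:          # less-than-marginal priority: no chance
        return 0.0
    not_better = 1.0                 # prob. of not being seated at a preferred school
    for v, mid in mids.items():
        if rho_theta_s < rho_s or v != vs:
            not_better *= 1.0 - F[v](mid)
    if rho_theta_s < rho_s:          # seated at s for sure when nothing better works
        return not_better
    # marginal priority: must also fall between MID^{v(s)} and tau_s
    return not_better * max(0.0, F[vs](tau_s) - F[vs](mids[vs]))

# Uniform lottery tie-breakers: F_v(r) = r, so the score depends only on MIDs.
U = lambda r: r
F = {1: U, 2: U}
print(da_score(1, 2, {1: 0.3, 2: 0.5}, F, 1, 0.6))  # rho_theta < rho_s
print(da_score(2, 2, {1: 0.3, 2: 0.5}, F, 1, 0.6))  # marginal priority
```

With uniform tie-breakers the score reduces to products of $(1-MID^{v}_{\theta s})$ terms, as in the common-lottery case discussed below.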
Proposition 3, which generalizes an earlier multiple lottery tie-breaker
result in Abdulkadiroğlu et al. (2017a), covers three sorts of applicants.
First, applicants with less-than-marginal priority at $s$ have no chance of
being seated there. The second line of the proposition gives the probability of not being seated at any school preferred to $s$ for applicants who are surely seated at $s$ when they can’t do better. Since tie-breakers are assumed independent, the
probability of not doing better than $s$ is described by a product over tie-
breakers, $\prod_{v}(1-F_{v}(MID^{v}_{\theta s}|\theta))$. If type $\theta$ is
sure to do better than $s$, then $MID^{v}_{\theta s}=1$ and the probability at
$s$ is zero.
Finally, the probability for applicants with $\rho_{\theta_{i}s}=\rho_{s}$
multiplies the term
$\prod_{v\neq v(s)}(1-F_{v}(MID^{v}_{\theta s}|\theta))$
by
$\max\left\\{0,F_{v(s)}(\tau_{s}|\theta)-F_{v(s)}(MID^{{v(s)}}_{\theta
s}|\theta)\right\\}.$
The first of these is the probability of failing to improve on $s$ by virtue
of being seated at schools using a tie-breaker other than $v(s)$. The second
parallels assignment probability in single-tie-breaker serial dictatorship: to
be seated at $s$, applicants in $\rho_{\theta_{i}s}=\rho_{s}$ must have
$R_{iv(s)}$ between $MID^{{v(s)}}_{\theta s}$ and $\tau_{s}$.
Proposition 3 allows for single tie-breaking, lottery tie-breaking, or a mix
of non-lottery and lottery tie-breakers as in the NYC high school match. With
a single tie-breaker, the propensity score formula simplifies, omitting
product terms over $v$:
###### Corollary 3 (Abdulkadiroğlu et al. (2017a)).
Consider DA using a single tie-breaker, $R_{i}$, distributed according to
$F_{R}(r|\theta)$ for type $\theta$. For all $s$ and $\theta$ in this market,
we have:
$p_{s}(\theta)=\left\\{\begin{array}[c]{ll}0&\text{ if }\rho_{\theta
s}>\rho_{s},\\\ 1-F_{R}(MID_{\theta s}|\theta)&\text{ if }\rho_{\theta
s}<\rho_{s},\\\ (1-F_{R}(MID_{\theta
s}|\theta))\times\max\left\\{0,\dfrac{F_{R}(\tau_{s}|\theta)-F_{R}(MID_{\theta
s}|\theta)}{1-F_{R}(MID_{\theta s}|\theta)}\right\\}&\text{ if }\rho_{\theta
s}=\rho_{s},\\\ \end{array}\right.$
where $p_{s}(\theta)=0$ when $MID_{\theta s}=1$ and $\rho_{\theta
s}=\rho_{s}$, and $MID_{\theta s}$ is as defined in Section 3, applied to a
single tie-breaker.
Common lottery tie-breaking for all schools further simplifies the DA
propensity score. When $v(s)=1$ for all $s$, $F_{R}(MID_{\theta s}|\theta)=MID_{\theta s}$ and $F_{R}(\tau_{s}|\theta)=\tau_{s}$, as in the Denver
match analyzed by Abdulkadiroğlu et al. (2017a). In this case, the DA
propensity score is a function only of $MID_{\theta s}$ and the classification
of applicants into being never, always, and conditionally seated. This
contrasts with the scores in Proposition 3 and Corollary 3, which depend on the unknown
and unrestricted conditional distributions of tie-breakers given type
($F_{R}(\tau_{s}|\theta)$ and $F_{R}(MID_{\theta s}|\theta)$ with a single
tie-breaker; $F_{v}(\tau_{s}|\theta)$ and $F_{v}(MID_{\theta s}|\theta)$ with
general tie-breakers). We therefore turn again to the local propensity score
to isolate assignment variation that is independent of type and potential
outcomes.
#### Proof of Proposition 3
We prove Proposition 3 using a strategy similar to that used in the proof of Theorem 1
in Abdulkadiroğlu et al. (2017a). Note first that admissions cutoffs
$\mathbf{\xi}$ in a large market do not depend on the realized tie-breakers
$r_{iv}$’s: DA in the large market depends on the $r_{iv}$’s only through
$G(I_{0})$, defined as the fraction of applicants in set $I_{0}=\\{i\in
I\hskip 2.84544pt|\hskip 2.84544pt\theta_{i}\in\Theta_{0},r_{iv}\leq
r_{v}\text{ for all }v\\}$ with various choices of $\Theta_{0}$ and $r_{v}$.
In particular, $G(I_{0})$ doesn’t depend on tie-breaker realizations in the
large market. For the empirical CDF of each tie-breaker conditional on each
type, $\hat{F}_{v}(\cdot|\theta)$, the Glivenko-Cantelli theorem for
independent but non-identically distributed random variables implies
$\hat{F}_{v}(\cdot|\theta)=F_{v}(\cdot|\theta)$ for any $v$ and $\theta$
(Wellner, 1981). Since cutoffs $\mathbf{\xi}$ are constant, marginal priority
$\rho_{s}$ is also constant for every school $s$.
Now, consider the propensity score for school $s.$ First, applicants who don’t rank $s$ have $p_{s}(\theta)=0$. Among those who rank $s$, an applicant of type $\theta$ with $\rho_{\theta s}>\rho_{s}$ never clears the admissions cutoff at $s$. Therefore,
$p_{s}(\theta)=0\text{ if }\rho_{\theta s}>\rho_{s}\text{ or $\theta$ does not rank $s$}.$
Second, if $\rho_{\theta s}\leq\rho_{s}$, then the type $\theta$ applicant may
be assigned a preferred school $\tilde{s}\in B_{\theta s},$ where
$\rho_{\theta\tilde{s}}=\rho_{\tilde{s}}$. For each tie-breaker $v$, the
proportion of type $\theta$ applicants assigned some $\tilde{s}\in
B^{v}_{\theta s}$ where $\rho_{\theta\tilde{s}}=\rho_{\tilde{s}}$ is
$F_{v}(MID^{v}_{\theta s}|\theta)$. This means that for each $v$, the
probability of not being assigned any $\tilde{s}\in B^{v}_{\theta s}$ where
$\rho_{\theta\tilde{s}}=\rho_{\tilde{s}}$ is $1-F_{v}(MID^{v}_{\theta
s}|\theta).$ Since tie-breakers are assumed to be distributed independently of
one another, the probability of not being assigned any $\tilde{s}\in B_{\theta
s}$ where $\rho_{\theta\tilde{s}}=\rho_{\tilde{s}}$ for a type $\theta$
applicant is $\Pi_{v}(1-F_{v}(MID^{v}_{\theta s}|\theta)).$ Every applicant of
type $\rho_{\theta s}<\rho_{s}$ who is not assigned a preferred choice is
assigned $s$ because $\rho_{\theta s}<\rho_{s}.$ So
$p_{s}(\theta)=\Pi_{v}(1-F_{v}(MID^{v}_{\theta s}|\theta))\text{ if
}\rho_{\theta s}<\rho_{s}.$
Finally, consider applicants of type $\rho_{\theta s}=\rho_{s}$ who are not
assigned a choice preferred to $s$. The fraction of applicants $\rho_{\theta
s}=\rho_{s}$ who are not assigned a preferred choice is
$\Pi_{v}(1-F_{v}(MID^{v}_{\theta s}|\theta))$. Also, the values of the tie-
breaking variable $v(s)$ of these applicants are larger than
$MID^{v(s)}_{\theta s}$. If $\tau_{s}<MID^{v(s)}_{\theta s},$ then no such
applicant is assigned $s.$ If $\tau_{s}\geq MID^{v(s)}_{\theta s},$ then the
fraction of applicants who are assigned $s$ within this set is given by
$\frac{F_{v(s)}(\tau_{s}|\theta)-F_{v(s)}(MID^{v(s)}_{\theta
s}|\theta)}{1-F_{v(s)}(MID^{v(s)}_{\theta s}|\theta)}.$ Hence, conditional on
$\rho_{\theta s}=\rho_{s}$ and not being assigned a choice higher than $s,$
the probability of being assigned $s$ is given by
$\max\\{0,\frac{F_{v(s)}(\tau_{s}|\theta)-F_{v(s)}(MID^{v(s)}_{\theta
s}|\theta)}{1-F_{v(s)}(MID^{v(s)}_{\theta s}|\theta)}\\}.$ Therefore,
$p_{s}(\theta)=\prod_{v\neq v(s)}(1-F_{v}(MID^{v}_{\theta
s}|\theta))\times\max\left\\{0,F_{v(s)}(\tau_{s}|\theta)-F_{v(s)}(MID^{{v(s)}}_{\theta
s}|\theta)\right\\}\text{ if }\rho_{\theta s}=\rho_{s}.$
### C.2 Proof of Theorem 2
The proof uses lemmas established below. The first lemma shows that the vector
of DA cutoffs computed for the sampled market, $\hat{\xi}_{N}$, converges to
the vector of cutoffs in the continuum.
###### Lemma 3.
(Cutoff almost sure convergence)
$\hat{\mathbf{\xi}}_{N}\overset{a.s.}{\longrightarrow}\mathbf{\xi}$ where
$\mathbf{\xi}$ denotes the vector of continuum market cutoffs.
This result implies that the estimated score converges to the large-market
local score as market size grows and bandwidth shrinks.
###### Lemma 4.
(Estimated local propensity score almost sure convergence) For all
$\theta\in\Theta,s\in S,$ and $T\in\\{a,c,n\\}^{S}$, we have
$\hat{\psi}_{s}(\theta,T(\delta_{N}))\overset{a.s.}{\longrightarrow}\psi_{s}(\theta,T)$
as $N\rightarrow\infty$ and $\delta_{N}\rightarrow 0$.
The next lemma shows that the true finite market score with a fixed bandwidth,
defined as $\psi_{Ns}(\theta,T;\delta_{N})\equiv
E_{N}[D_{i}(s)|\theta_{i}=\theta,T_{i}(\delta_{N})=T]$, also converges to
$\psi_{s}(\theta,T)$ as market size grows and bandwidth shrinks.
###### Lemma 5.
(Bandwidth-specific propensity score almost sure convergence) For all
$\theta\in\Theta,s\in S,$ $T\in\\{a,c,n\\}^{S}$, and $\delta_{N}$ such that
$\delta_{N}\rightarrow 0$ and $N\delta_{N}\rightarrow\infty$ as
$N\rightarrow\infty$, we have
$\psi_{Ns}(\theta,T;\delta_{N})\overset{p}{\longrightarrow}\psi_{s}(\theta,T)$
as $N\rightarrow\infty$.
Finally, the definitions of $\psi_{Ns}(\theta,T;\delta_{N})$ and
$\psi_{Ns}(\theta,T)$ imply that
$|\psi_{Ns}(\theta,T;\delta_{N})-\psi_{Ns}(\theta,T)|\overset{a.s.}{\longrightarrow}0$
as $\delta_{N}\rightarrow 0$. Combining these results shows that for all
$\theta\in\Theta,s\in S,$ and $T$, as $N\rightarrow\infty$ and
$\delta_{N}\rightarrow 0$ with $N\delta_{N}\rightarrow\infty$, we have
$\displaystyle|\hat{\psi}_{s}(\theta,T(\delta_{N}))-\psi_{Ns}(\theta,T)|$
$\displaystyle=|\hat{\psi}_{s}(\theta,T(\delta_{N}))-\psi_{Ns}(\theta,T;\delta_{N})+\psi_{Ns}(\theta,T;\delta_{N})-\psi_{Ns}(\theta,T)|$
$\displaystyle\leq|\hat{\psi}_{s}(\theta,T(\delta_{N}))-\psi_{Ns}(\theta,T;\delta_{N})|+|\psi_{Ns}(\theta,T;\delta_{N})-\psi_{Ns}(\theta,T)|$
$\displaystyle\overset{p}{\longrightarrow}|\psi_{s}(\theta,T)-\psi_{s}(\theta,T)|+0=0.$
This yields the theorem since $\Theta,S$, and $\\{n,c,a\\}^{S}$ are finite.
#### Proof of Lemma 3
The proof of Lemma 3 is analogous to the proof of Lemma 3 in Abdulkadiroğlu et
al. (2017a) and available upon request. The main difference is that to deal
with multiple non-lottery tie-breakers, the proof of Lemma 3 needs to invoke
the continuous differentiability of $F^{i}_{v}(r|e)$ and the Glivenko-Cantelli
theorem for independent but non-identically distributed random variables
(Wellner, 1981).
#### Proof of Lemma 4
$\hat{\psi}_{s}(\theta,T(\delta_{N}))$ is almost everywhere continuous in
finite sample cutoffs $\hat{\mathbf{\xi}}_{N}$, finite sample MIDs
($MID^{v}_{\theta s}$), and bandwidth $\delta_{N}$. Since every
$MID^{v}_{\theta s}$ is almost everywhere continuous in finite sample cutoffs
$\hat{\mathbf{\xi}}_{N}$, $\hat{\psi}_{s}(\theta,T(\delta_{N}))$ is almost
everywhere continuous in finite sample cutoffs $\hat{\mathbf{\xi}}_{N}$ and
bandwidth $\delta_{N}$. Recall $\delta_{N}\rightarrow 0$ by assumption while
$\hat{\mathbf{\xi}}_{N}\overset{a.s.}{\longrightarrow}\mathbf{\xi}$ by Lemma
3. Therefore, by the continuous mapping theorem, as $N\rightarrow\infty$,
$\hat{\psi}_{s}(\theta,T(\delta_{N}))$ almost surely converges to
$\hat{\psi}_{s}(\theta,T(\delta_{N}))$ with $\mathbf{\xi}$ replacing
$\hat{\mathbf{\xi}}_{N}$, which converges to $\psi_{s}(\theta,T)$ as
$\delta_{N}\rightarrow 0$.
#### Proof of Lemma 5
We use the following fact, which is implied by Example 19.29 in van der Vaart
(2000).
###### Lemma 6.
Let $X$ be a random variable distributed according to some CDF $F$ over
$[0,1]$. Let $F(\cdot|X\in[x-\delta,x+\delta])$ be the conditional version of
$F$ conditional on $X$ being in a small window $[x-\delta,x+\delta]$ where
$x\in[0,1]$ and $\delta\in(0,1]$. Assume $F$ is differentiable at $x$ with density $f$ satisfying $f(x)>0$. Let $X_{1},...,X_{N}$ be iid draws from $F$.
Let $\hat{F}_{N}$ be the empirical CDF of $X_{1},...,X_{N}$. Let
$\hat{F}_{N}(\cdot|X\in[x-\delta,x+\delta])$ be the conditional version of
$\hat{F}_{N}$ conditional on a subset of draws falling in
$[x-\delta,x+\delta]$, i.e.,
$\\{X_{i}|i=1,...,N,X_{i}\in[x-\delta,x+\delta]\\}$. Suppose $(\delta_{N})$ is
a sequence with $\delta_{N}\downarrow 0$ and $\delta_{N}\times
N\rightarrow\infty$. Then $\hat{F}_{N}(\cdot|X\in[x-\delta_{N},x+\delta_{N}])$
uniformly converges to $F(\cdot|X\in[x-\delta_{N},x+\delta_{N}])$, i.e.,
$\sup_{x^{\prime}\in[0,1]}|\hat{F}_{N}(x^{\prime}|X\in[x-\delta_{N},x+\delta_{N}])-F(x^{\prime}|X\in[x-\delta_{N},x+\delta_{N}])|\rightarrow_{p}0\text{
as }N\rightarrow\infty\text{ and }\delta_{N}\rightarrow 0.$
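A small Monte Carlo illustration of Lemma 6 (our own sketch, with $X\sim\mathrm{Uniform}[0,1]$ so the conditional CDF on the window is linear): the sup-gap between the conditional empirical CDF and its population counterpart shrinks as $N$ grows with $\delta_{N}\rightarrow 0$ and $N\delta_{N}\rightarrow\infty$.

```python
import random

def sup_conditional_gap(N, x, delta, grid=200, seed=0):
    # Compare the empirical and true CDFs conditional on X in [x-delta, x+delta]
    # for X ~ Uniform[0,1], over a grid of evaluation points in the window.
    rng = random.Random(seed)
    draws = [rng.random() for _ in range(N)]
    window = [d for d in draws if x - delta <= d <= x + delta]
    gap = 0.0
    for k in range(grid + 1):
        t = x - delta + 2 * delta * k / grid
        emp = sum(1 for d in window if d <= t) / len(window)
        true = (t - (x - delta)) / (2 * delta)  # uniform conditional CDF is linear
        gap = max(gap, abs(emp - true))
    return gap

# The sup-gap shrinks as N grows while delta falls with N * delta -> infinity.
print(sup_conditional_gap(N=2000, x=0.5, delta=0.2))
print(sup_conditional_gap(N=200000, x=0.5, delta=0.02))
```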
###### Proof of Lemma 6.
We first prove the statement for $x\in(0,1)$. Let $P_{X}$ be the probability measure of $X$ and $\hat{P}_{N}$ be the empirical measure of $X_{1},...,X_{N}$. Note that
$\displaystyle\sup_{x^{\prime}\in[0,1]}|\hat{F}_{N}(x^{\prime}|X\in[x-\delta_{N},x+\delta_{N}])-F(x^{\prime}|X\in[x-\delta_{N},x+\delta_{N}])|$
$\displaystyle=$
$\displaystyle\sup_{t\in[-1,1]}|\hat{F}_{N}(x+t\delta_{N}|X\in[x-\delta_{N},x+\delta_{N}])-F(x+t\delta_{N}|X\in[x-\delta_{N},x+\delta_{N}])|$
$\displaystyle=$
$\displaystyle\sup_{t\in[-1,1]}|\frac{\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]}{\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]}-\frac{P_{X}[x-\delta_{N},x+t\delta_{N}]}{P_{X}[x-\delta_{N},x+\delta_{N}]}|$
$\displaystyle=$
$\displaystyle\frac{1}{\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]P_{X}[x-\delta_{N},x+\delta_{N}]}$
$\displaystyle\times\sup_{t\in[-1,1]}|\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]P_{X}[x-\delta_{N},x+\delta_{N}]-\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]P_{X}[x-\delta_{N},x+t\delta_{N}]|$
$\displaystyle=$
$\displaystyle\frac{1}{\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]P_{X}[x-\delta_{N},x+\delta_{N}]}$
$\displaystyle\times\sup_{t\in[-1,1]}|\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}](P_{X}[x-\delta_{N},x+\delta_{N}]-\hat{P}_{N}[x-\delta_{N},x+\delta_{N}])$
$\displaystyle~{}~{}~{}~{}+\hat{P}_{N}[x-\delta_{N},x+\delta_{N}](\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]-P_{X}[x-\delta_{N},x+t\delta_{N}])|$
$\displaystyle\leq$
$\displaystyle\frac{1}{\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]P_{X}[x-\delta_{N},x+\delta_{N}]}$
$\displaystyle\times\\{\sup_{t\in[-1,1]}\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]|\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]-P_{X}[x-\delta_{N},x+\delta_{N}]|$
$\displaystyle~{}~{}~{}~{}+\sup_{t\in[-1,1]}\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]|\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]-P_{X}[x-\delta_{N},x+t\delta_{N}]|\\}$
$\displaystyle=\frac{1}{P_{X}[x-\delta_{N},x+\delta_{N}]}\times\\{|\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]-P_{X}[x-\delta_{N},x+\delta_{N}]|$
$\displaystyle+\sup_{t\in[-1,1]}|\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]-P_{X}[x-\delta_{N},x+t\delta_{N}]|\\}$
$\displaystyle=$
$\displaystyle\frac{A_{N}}{P_{X}[x-\delta_{N},x+\delta_{N}]},$
where
$A_{N}=|\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]-P_{X}[x-\delta_{N},x+\delta_{N}]|+\sup_{t\in[-1,1]}|\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]-P_{X}[x-\delta_{N},x+t\delta_{N}]|.$
The above inequality holds by the triangle inequality, and the second-to-last equality holds because
$\sup_{t\in[-1,1]}\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]=\hat{P}_{N}[x-\delta_{N},x+\delta_{N}]$.
We show that $A_{N}/P_{X}[x-\delta_{N},x+\delta_{N}]\stackrel{{\scriptstyle
p}}{{\longrightarrow}}0$. Example 19.29 in van der Vaart (2000) implies that
the sequence of processes
$\{\sqrt{N/\delta_{N}}(\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]-P_{X}[x-\delta_{N},x+t\delta_{N}]):t\in[-1,1]\}$
converges in distribution to a Gaussian process in the space of bounded
functions on $[-1,1]$ as $N\rightarrow\infty$. We denote this Gaussian process
by $\\{\mathbb{G}_{t}:t\in[-1,1]\\}$. We then use the continuous mapping
theorem to obtain
$\sqrt{N/\delta_{N}}A_{N}\stackrel{d}{\longrightarrow}|\mathbb{G}_{1}|+\sup_{t\in[-1,1]}|\mathbb{G}_{t}|$
as $N\rightarrow\infty$. Since $\\{\mathbb{G}_{t}:t\in[-1,1]\\}$ has bounded
sample paths, it follows that $|\mathbb{G}_{1}|<\infty$ and
$\sup_{t\in[-1,1]}|\mathbb{G}_{t}|<\infty$ for sure. By the continuous mapping
theorem, under the condition that $N\delta_{N}\rightarrow\infty$,
$\displaystyle(1/\delta_{N})A_{N}$
$\displaystyle=(1/\sqrt{N\delta_{N}})\times\sqrt{N/\delta_{N}}A_{N}$
$\displaystyle\stackrel{{\scriptstyle
d}}{{\longrightarrow}}0\times(|\mathbb{G}_{1}|+\sup_{t\in[-1,1]}|\mathbb{G}_{t}|)$
$\displaystyle=0.$
This implies that $(1/\delta_{N})A_{N}\stackrel{{\scriptstyle
p}}{{\longrightarrow}}0$, because for any $\epsilon>0$,
$\displaystyle\Pr(|(1/\delta_{N})A_{N}|>\epsilon)$
$\displaystyle=\Pr((1/\delta_{N})A_{N}<-\epsilon)+\Pr((1/\delta_{N})A_{N}>\epsilon)$
$\displaystyle\leq\Pr((1/\delta_{N})A_{N}\leq-\epsilon)+1-\Pr((1/\delta_{N})A_{N}\leq\epsilon)$
$\displaystyle\rightarrow\Pr(0\leq-\epsilon)+1-\Pr(0\leq\epsilon)$
$\displaystyle=0,$
where the convergence holds since $(1/\delta_{N})A_{N}\stackrel{{\scriptstyle
d}}{{\longrightarrow}}0$. To show that
$A_{N}/P_{X}[x-\delta_{N},x+\delta_{N}]\stackrel{{\scriptstyle
p}}{{\longrightarrow}}0$, it is therefore enough to show that
$\lim_{N\rightarrow\infty}(1/\delta_{N})P_{X}[x-\delta_{N},x+\delta_{N}]>0$.
We have
$\displaystyle(1/\delta_{N})P_{X}[x-\delta_{N},x+\delta_{N}]$
$\displaystyle=(1/\delta_{N})(F_{X}(x+\delta_{N})-F_{X}(x-\delta_{N}))$
$\displaystyle=(1/\delta_{N})(2f(x)\delta_{N}+o(\delta_{N}))$
$\displaystyle=2f(x)+o(1)$ $\displaystyle\rightarrow 2f(x)$ $\displaystyle>0,$
where we use Taylor’s theorem for the second equality and the assumption of
$f(x)>0$ for the last inequality.
We next prove the statement for $x=0$. Since both conditional distribution functions below equal one for $x^{\prime}\geq\delta_{N}$, the supremum is attained on $[0,\delta_{N}]$, so, substituting $x^{\prime}=t\delta_{N}$ and noting that $X\geq 0$,
$\displaystyle\sup_{x^{\prime}\in[0,1]}|\hat{F}_{N}(x^{\prime}|X\in[-\delta_{N},\delta_{N}])-F(x^{\prime}|X\in[-\delta_{N},\delta_{N}])|$
$\displaystyle=$
$\displaystyle\sup_{t\in[0,1]}|\hat{F}_{N}(t\delta_{N}|X\in[0,\delta_{N}])-F(t\delta_{N}|X\in[0,\delta_{N}])|$
$\displaystyle=$
$\displaystyle\sup_{t\in[0,1]}\left|\frac{\hat{F}_{N}(t\delta_{N})}{\hat{F}_{N}(\delta_{N})}-\frac{F_{X}(t\delta_{N})}{F_{X}(\delta_{N})}\right|$
$\displaystyle=$
$\displaystyle\frac{1}{\hat{F}_{N}(\delta_{N})F_{X}(\delta_{N})}\sup_{t\in[0,1]}|\hat{F}_{N}(t\delta_{N})F_{X}(\delta_{N})-\hat{F}_{N}(\delta_{N})F_{X}(t\delta_{N})|$
$\displaystyle=$
$\displaystyle\frac{1}{\hat{F}_{N}(\delta_{N})F_{X}(\delta_{N})}\sup_{t\in[0,1]}|\hat{F}_{N}(t\delta_{N})(F_{X}(\delta_{N})-\hat{F}_{N}(\delta_{N}))+\hat{F}_{N}(\delta_{N})(\hat{F}_{N}(t\delta_{N})-F_{X}(t\delta_{N}))|$
$\displaystyle\leq$
$\displaystyle\frac{1}{\hat{F}_{N}(\delta_{N})F_{X}(\delta_{N})}\{\sup_{t\in[0,1]}\hat{F}_{N}(t\delta_{N})|\hat{F}_{N}(\delta_{N})-F_{X}(\delta_{N})|+\sup_{t\in[0,1]}\hat{F}_{N}(\delta_{N})|\hat{F}_{N}(t\delta_{N})-F_{X}(t\delta_{N})|\}$
$\displaystyle=$
$\displaystyle\frac{1}{F_{X}(\delta_{N})}\{|\hat{F}_{N}(\delta_{N})-F_{X}(\delta_{N})|+\sup_{t\in[0,1]}|\hat{F}_{N}(t\delta_{N})-F_{X}(t\delta_{N})|\}=\frac{A_{N}^{0}}{F_{X}(\delta_{N})},$
where
$A_{N}^{0}=|\hat{F}_{N}(\delta_{N})-F_{X}(\delta_{N})|+\sup_{t\in[0,1]}|\hat{F}_{N}(t\delta_{N})-F_{X}(t\delta_{N})|$.
By the argument used in the above proof for $x\in(0,1)$, we have
$(1/\delta_{N})A_{N}^{0}\stackrel{p}{\longrightarrow}0$. It
also follows that
$\displaystyle(1/\delta_{N})F_{X}(\delta_{N})$
$\displaystyle=(1/\delta_{N})(f(0)\delta_{N}+o(\delta_{N}))$
$\displaystyle=f(0)+o(1)$ $\displaystyle\rightarrow f(0)$ $\displaystyle>0.$
Thus, $\frac{A_{N}^{0}}{F_{X}(\delta_{N})}\stackrel{p}{\longrightarrow}0$, and hence
$\sup_{x^{\prime}\in[0,1]}|\hat{F}_{N}(x^{\prime}|X\in[-\delta_{N},\delta_{N}])-F(x^{\prime}|X\in[-\delta_{N},\delta_{N}])|\stackrel{p}{\longrightarrow}0$. The proof for $x=1$ follows from the same argument.
∎
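The rate argument at the heart of this proof is easy to probe numerically. The sketch below is purely illustrative (not from the paper): it draws Uniform(0,1) samples, so that $f(x)=1$, and checks that the scaled local deviation $\sup_{t\in[-1,1]}\sqrt{N/\delta_{N}}\,|\hat{P}_{N}[x-\delta_{N},x+t\delta_{N}]-P_{X}[x-\delta_{N},x+t\delta_{N}]|$ stays bounded as $N$ grows with the assumed bandwidth $\delta_{N}=N^{-1/3}$:

```python
import numpy as np

# Illustrative check (not from the paper): with X ~ Uniform(0,1) and f(x) = 1,
# the scaled local deviation of the empirical measure stays bounded as N grows,
# while delta_N = N^(-1/3) satisfies delta_N -> 0 and N*delta_N -> infinity.
rng = np.random.default_rng(0)
x = 0.5  # interior evaluation point

def scaled_local_deviation(N, delta):
    X = rng.uniform(0.0, 1.0, N)
    ts = np.linspace(-1.0, 1.0, 101)
    # empirical and population measures of the interval [x - delta, x + t*delta]
    emp = np.array([np.mean((X >= x - delta) & (X <= x + t * delta)) for t in ts])
    pop = (1.0 + ts) * delta
    return np.sqrt(N / delta) * np.max(np.abs(emp - pop))

for N in [5_000, 50_000, 500_000]:
    print(N, scaled_local_deviation(N, N ** (-1 / 3)))  # O_p(1), no blow-up
```

Across the three sample sizes the statistic fluctuates around a constant of order one, consistent with convergence to the supremum of the limiting Gaussian process.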
Consider any deterministic sequence of economies $\{g_{N}\}$ such that
$g_{N}\in\mathcal{G}$ for all $N$ and $g_{N}\rightarrow G$ in the
$(\mathcal{G},d)$ metric space. Let $(\delta_{N})$ be an associated sequence
of positive numbers (bandwidths) such that $\delta_{N}\rightarrow 0$ and
$N\delta_{N}\rightarrow\infty$ as $N\rightarrow\infty$. Let
$\psi_{Ns}(\theta,T;\delta_{N})\equiv
E_{N}[D_{i}(s)|\theta_{i}=\theta,T_{i}(\delta_{N})=T]$ be the (finite-market,
deterministic) bandwidth-specific propensity score for a particular $g_{N}$ and
$\delta_{N}$.
For Lemma 5, it is enough to show deterministic convergence of this finite-
market score, that is,
$\psi_{Ns}(\theta,T;\delta_{N})\rightarrow\psi_{s}(\theta,T)$ as
$g_{N}\rightarrow G$ and $\delta_{N}\rightarrow 0$. To see this, let $G_{N}$
be the distribution over $I(\Theta_{0},r_{0},r_{1})$’s induced by randomly
drawing $N$ applicants from $G$, where
$I(\Theta_{0},r_{0},r_{1})\equiv\{i|\theta_{i}\in\Theta_{0},r_{0}<r_{i}\leq
r_{1}\}$. Note that $G_{N}$ is random and that
$G_{N}\overset{a.s.}{\rightarrow}G$ by Wellner’s (1981) Glivenko-Cantelli
theorem for independent but non-identically distributed random variables.
$G_{N}\overset{p}{\rightarrow}G$ and
$\psi_{Ns}(\theta,T;\delta_{N})\rightarrow\psi_{s}(\theta,T)$ allow us to
apply the Extended Continuous Mapping Theorem (Theorem 18.11 in van der Vaart
(2000)) to obtain
$\tilde{\psi}_{Ns}(\theta,T;\delta_{N})\overset{p}{\longrightarrow}\psi_{s}(\theta,T)$
where $\tilde{\psi}_{Ns}(\theta,T;\delta_{N})$ is the random version of
$\psi_{Ns}(\theta,T;\delta_{N})$ defined for $G_{N}$.
For notational simplicity, consider the single-school RD case, where there is
only one school $s$ making assignments based on a single non-lottery tie-
breaker $v(s)$ (without using any priority). A similar argument with
additional notation shows the result for DA with general tie-breaking.
For any $\delta_{N}>0,$ whenever $T_{i}(\delta_{N})=a$, it is the case that
$D_{i}(s)=1$. As a result,
$\psi_{Ns}(\theta,a;\delta_{N})\equiv
E_{N}[D_{i}(s)|\theta_{i}=\theta,T_{i}(\delta_{N})=a]=1\equiv\psi_{s}(\theta,a).$
Therefore, $\psi_{Ns}(\theta,a;\delta_{N})\rightarrow\psi_{s}(\theta,a)$ as
$N\rightarrow\infty$. Similarly, for any $\delta_{N}>0,$ whenever
$T_{i}(\delta_{N})=n$, it is the case that $D_{i}(s)=0$. As a result,
$\psi_{Ns}(\theta,n;\delta_{N})\equiv
E_{N}[D_{i}(s)|\theta_{i}=\theta,T_{i}(\delta_{N})=n]=0\equiv\psi_{s}(\theta,n).$
Therefore, $\psi_{Ns}(\theta,n;\delta_{N})\rightarrow\psi_{s}(\theta,n)$ as
$N\rightarrow\infty$. Finally, when $T_{i}(\delta_{N})=c$, let
$F_{N,v(s)}(r|\theta)\equiv\dfrac{\sum^{N}_{i=1}1\{\theta_{i}=\theta\}F_{v(s)}^{i}(r)}{\sum^{N}_{i=1}1\{\theta_{i}=\theta\}}$
be the aggregate tie-breaker distribution conditional on each applicant type
$\theta$ in the finite market. $\mathbf{\tilde{\xi}}_{Ns}$ denotes the random
cutoff at school $s$ in a realized economy $g_{N}$. For any $\epsilon$, there
exists $N_{0}$ such that for any $N>N_{0}$, we have
$\displaystyle\psi_{Ns}(\theta,c;\delta_{N})$ $\displaystyle\equiv
E_{N}[D_{i}(s)|\theta_{i}=\theta,T_{i}(\delta_{N})=c]$
$\displaystyle=P_{N}[R_{iv(s)}\leq\mathbf{\tilde{\xi}}_{Ns}|\theta_{i}=\theta,R_{iv(s)}\in(\mathbf{\tilde{\xi}}_{Ns}-\delta_{N},\mathbf{\tilde{\xi}}_{Ns}+\delta_{N}]]$
$\displaystyle\in(P[R_{iv(s)}\leq\mathbf{\xi}_{s}|\theta_{i}=\theta,R_{iv(s)}\in(\mathbf{\xi}_{s}-\delta_{N},\mathbf{\xi}_{s}+\delta_{N}]]-\epsilon/2,$
$\displaystyle\ \ \ \ \
P[R_{iv(s)}\leq\mathbf{\xi}_{s}|\theta_{i}=\theta,R_{iv(s)}\in(\mathbf{\xi}_{s}-\delta_{N},\mathbf{\xi}_{s}+\delta_{N}]]+\epsilon/2),$
where $\mathbf{\xi}_{s}$ is school $s$’s continuum cutoff, $P$ is the
probability induced by the tie-breaker distributions in the continuum economy,
and the inclusion is by Assumption 2 and Lemmata 3 and 6. Again for any
$\epsilon$, there exists $N_{0}$ such that for any $N>N_{0}$, we have
$\displaystyle(P[R_{iv(s)}\leq\mathbf{\xi}_{s}|\theta_{i}=\theta,R_{iv(s)}\in(\mathbf{\xi}_{s}-\delta_{N},\mathbf{\xi}_{s}+\delta_{N}]]-\epsilon/2,$
$\displaystyle\
P[R_{iv(s)}\leq\mathbf{\xi}_{s}|\theta_{i}=\theta,R_{iv(s)}\in(\mathbf{\xi}_{s}-\delta_{N},\mathbf{\xi}_{s}+\delta_{N}]]+\epsilon/2)$
$\displaystyle=(\dfrac{F_{v(s)}(\mathbf{\xi}_{s}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}-\delta_{N}|\theta)}{F_{v(s)}(\mathbf{\xi}_{s}+\delta_{N}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}-\delta_{N}|\theta)}-\epsilon/2,$
$\displaystyle\ \ \ \ \
\dfrac{F_{v(s)}(\mathbf{\xi}_{s}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}-\delta_{N}|\theta)}{F_{v(s)}(\mathbf{\xi}_{s}+\delta_{N}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}-\delta_{N}|\theta)}+\epsilon/2)$
$\displaystyle=(\dfrac{\{F_{v(s)}(\mathbf{\xi}_{s}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}-\delta_{N}|\theta)\}/\delta_{N}}{\{F_{v(s)}(\mathbf{\xi}_{s}+\delta_{N}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}|\theta)\}/\delta_{N}+\{F_{v(s)}(\mathbf{\xi}_{s}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}-\delta_{N}|\theta)\}/\delta_{N}}-\epsilon/2,$
$\displaystyle\ \ \ \ \
\dfrac{\{F_{v(s)}(\mathbf{\xi}_{s}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}-\delta_{N}|\theta)\}/\delta_{N}}{\{F_{v(s)}(\mathbf{\xi}_{s}+\delta_{N}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}|\theta)\}/\delta_{N}+\{F_{v(s)}(\mathbf{\xi}_{s}|\theta)-F_{v(s)}(\mathbf{\xi}_{s}-\delta_{N}|\theta)\}/\delta_{N}}+\epsilon/2)$
$\displaystyle\in(0.5-\epsilon,0.5+\epsilon)$
$\displaystyle=(\psi_{s}(\theta,c)-\epsilon,\psi_{s}(\theta,c)+\epsilon),$
completing the proof.
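The limiting score of one half for marginal applicants can be illustrated with a small simulation: conditional on the tie-breaker falling within $\delta_{N}$ of the cutoff, the probability of landing below the cutoff approaches $0.5$ as $\delta_{N}$ shrinks. The Beta(2,2) tie-breaker law and the cutoff of $0.6$ below are assumptions chosen only for illustration:

```python
import numpy as np

# Illustration of psi_s(theta, c) = 0.5: among applicants whose non-lottery
# tie-breaker lands within delta of the cutoff, the share below the cutoff
# tends to one half as delta shrinks (smooth density, positive at the cutoff).
rng = np.random.default_rng(1)
xi = 0.6                                 # assumed cutoff xi_s
R = rng.beta(2.0, 2.0, 1_000_000)        # assumed smooth tie-breaker law
for delta in [0.1, 0.01, 0.001]:
    window = (R > xi - delta) & (R <= xi + delta)
    print(delta, np.mean(R[window] <= xi))   # -> 0.5 as delta -> 0
```

The small-$\delta$ estimates cluster tightly around one half, mirroring the ratio of one-sided difference quotients in the display above.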
## Appendix D Empirical Appendix
### D.1 Data
The NYC DOE provided data on students, schools, the rank-order lists submitted
by match participants, school assignments, and outcome variables. Applicants
and programs are uniquely identified by a number that can be used to merge
data sets. Students with a record in assignment files who cannot be matched to
other files are omitted.
#### D.1.1 Applicant Data
We focus on first-time applicants to the NYC public (unspecialized) high
school system who live in NYC and attended a public middle school in eighth
grade. The NYC high school match is conducted in three rounds. The data used
for the present analyses are from the first assignment round, which uses DA
and which we refer to as the main round. Applicants who were not assigned after the
main round apply to the remaining seats in a subsequent supplementary round.
Students who remain unassigned in the supplementary round are then assigned on
a case-by-case basis in the final administrative round.
Assignment, Priorities, and Ranks
Data on the assignment system come from the DOE’s enrollment office, and
report assignments for our two cohorts. The main application data set details
applicant program choices, eligibility, priority group and rank, as well as
the admission procedure used at the respective program. Lottery numbers and
details on assignments at Educational Option (Ed-Opt) programs are provided in
separate data sets.
Student Characteristics
NYC DOE students files record grade, gender, ethnicity, and whether students
attended a public middle school. Separate files include (i) student scores on
middle school standardized tests, (ii) English language learner and special
education status, and (iii) subsidized lunch status. Our baseline middle
school scores are from 6th grade math and English exams. If a student re-took
a test, the latest result is used. Our demographic characteristics come from
the DOE’s snapshot for 8th grade.
#### D.1.2 School-level Data
School Letter Grades
School grades are drawn from NYC DOE School Report Cards for 2010/11, 2011/12
and 2012/13. For each application cohort, we grade schools based on the report
cards published in the school year prior to the application school year: for
the 2011/12 application cohort, for instance, schools are assigned grades
published in 2010/11, and similarly for the other two cohorts.
School Characteristics
School characteristics were taken from report card files provided by the DOE.
These data provide information on enrollment statistics, racial composition,
attendance rates, suspensions, teacher numbers and experience, and graduating
class Regents Math and English performance. A unique identifier for each
school allows these data to be merged with data from other sources. The
analyses on teacher experience and education reported in Table 5.1 of this
publication are based on the School-Level Master File 1996-2016, a dataset
compiled by the Research Alliance for NYC Schools at New York University’s
Steinhardt School of Culture, Education, and Human Development
(www.ranycs.org). All data in the School-Level Master File are publicly
available. The Research Alliance takes no responsibility for potential errors
in the dataset or the analysis. The opinions expressed in this publication are
those of the authors and do not represent the views of the Research Alliance
for NYC Schools or the institutions that posted the original publicly
available data. [Footnote 25: Research Alliance for New York City Schools (2017).
School-Level Master File 1996-2016 [Data file and code book]. Unpublished data.]
Defining Screened and Lottery Schools
We define lottery schools as any school hosting at least one program for which
the lottery number is used as the tie-breaker. Screened schools are the
remaining schools. Some schools allow students to share a screened tie-breaker
rank, breaking screening-variable ties with lottery numbers. Propensity scores
for such schools are computed using the lottery tie-breaker, and these schools are
considered lottery schools in any analysis that makes this substantive distinction.
Specialized high schools are considered screened schools. The remaining
schools, mostly charters that conduct a separate lottery process, are
considered lottery schools.
#### D.1.3 SAT and Graduation Outcomes
SAT Tests
The NYC DOE has data on SAT scores for test-takers from 2006-17. These data
originate with the College Board. For students who took the test multiple times,
we use the first test. For applicants tested in the same month, we use the highest
score. During our sample period, the SAT was redesigned. We re-scale scores of SAT
exams taken prior to the reform according to the official re-scaling scheme provided
by the College Board. [Footnote 26: See
https://collegereadiness.collegeboard.org/educators/higher-ed/scoring/concordance
for the conversion scale.]
Graduation
The DOE Graduation file records the discharge status for public school
students enrolled from 2005-17. Because data on graduation results are not yet
available for the youngest (2013/14) cohort, graduation results are for the
two older cohorts only.
College- and Career-preparedness and College-readiness
The DOE provided us with individual-level indicators for college- and career-
preparedness as well as college-readiness for public school students enrolled
from 2005-17. Since these data are not yet available for the youngest
(2013/14) cohort, the results are for the two older cohorts only. Table D.1.3
gives an overview of the criteria for the two indicators.
#### D.1.4 Replicating the NYC Match
NYC uses the student-proposing DA algorithm to determine assignments. The
three ingredients for this algorithm are: students’ rankings of up to 12
programs, program capacities and priorities, and tie-breakers.
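The core of student-proposing DA can be sketched in a few lines. The toy preferences, priorities, and capacities below are hypothetical; the actual match additionally layers priority groups, multiple tie-breaker types, and Ed-Opt buckets on top of this skeleton:

```python
# Minimal student-proposing deferred acceptance (DA) sketch with toy data.
def deferred_acceptance(prefs, priority, capacity):
    """prefs[i]: student i's ranked program list (best first).
    priority[s][j]: program s's ranking of student j (lower is better).
    capacity[s]: number of seats at program s."""
    next_choice = {i: 0 for i in prefs}
    held = {s: [] for s in capacity}          # tentatively held students
    free = list(prefs)
    while free:
        i = free.pop()
        if next_choice[i] >= len(prefs[i]):
            continue                          # list exhausted: stays unassigned
        s = prefs[i][next_choice[i]]
        next_choice[i] += 1
        held[s].append(i)
        held[s].sort(key=lambda j: priority[s][j])
        if len(held[s]) > capacity[s]:
            free.append(held[s].pop())        # reject lowest-priority holder
    return {i: s for s, students in held.items() for i in students}

prefs = {"a": ["X", "Y"], "b": ["X", "Y"], "c": ["X"]}
priority = {"X": {"a": 1, "b": 2, "c": 3}, "Y": {"a": 1, "b": 2, "c": 3}}
capacity = {"X": 1, "Y": 2}
print(deferred_acceptance(prefs, priority, capacity))  # {'a': 'X', 'b': 'Y'}
```

In this toy example, program X has one seat and prefers applicant a, so b is bumped to Y, while c, who ranked only X, remains unassigned.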
Program Assignment Rules
Programs use a variety of assignment rules. Lottery, Limited Unscreened, and
Zoned programs order students first by priority group, and within priority
group by lottery number. Screened and Audition programs order students by
priority group and then by a non-lottery tie-breaker, referred to as running
or rank variable. We observe these in the form of an ordering of applicants
provided by Screened and Audition programs. Ed-Opt programs use two tie-breakers,
which are described in more detail below. Finally, as mentioned
above, some schools allow students to share a screened tie-breaker rank,
breaking screening-variable ties with lottery numbers.
Program Capacities and Priorities
Program capacities must be imputed. We assume program capacity equals the
number of assignments extended. Program type determines priorities. The
priority group is a number assigned by the NYC DOE depending on addresses,
program location, siblings, among other considerations, including, in some
cases, whether applicants attended an information session or open house (for
Limited Unscreened programs).
Lottery Numbers
The lottery numbers are provided by the NYC DOE in a separate data set.
Lottery tie-breakers are reported as unique alphanumeric strings and scaled to
$[0,1]$. Lottery numbers are missing for some applicants; we assign these applicants a
randomly drawn lottery number and use it in our replicated match. It is this
replicated match that is used to construct assignment instruments and their
associated propensity scores.
Ranks
Screened, Audition, and half of the seats at Ed-Opt programs assign students a
rank based on varying criteria, such as prior test performance.
Ranks are reported as an integer reflecting raw tie-breaker order within this
group. We scale these so as to lie in $(0,1]$ by transforming raw tie-breaking
realizations $R_{iv}$ into
$[R_{iv}-\min_{j}{R_{jv}}+1]/[\max_{j}R_{jv}-\min_{j}R_{jv}+1]$ for each tie-
breaker $v$. At some screened programs, the rank numbers of applicants have
gaps, i.e. the distribution of running variable values is discontinuous.
Potential reasons include i) human error when school principals submit
applicant rankings to the NYC DOE, and ii) while running variables are
assigned at the program level, applications at Ed-Opt programs are treated as
six separate buckets (i.e. distinct application choices), leading to
artificial gaps in rank distributions (see discussion of assignment at Ed-Opt
programs below).
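The affine rescaling above is simple to implement; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def rescale_ranks(raw):
    """Map raw integer tie-breaker ranks into (0, 1] via
    (R - min + 1) / (max - min + 1), as in the text."""
    raw = np.asarray(raw, dtype=float)
    lo, hi = raw.min(), raw.max()
    return (raw - lo + 1.0) / (hi - lo + 1.0)

print(rescale_ranks([3, 7, 12]))  # smallest rank maps to 0.1, largest to 1.0
```

Gaps in the raw ranks (as discussed above) survive this transformation: only the minimum and maximum are used, so a discontinuous raw distribution stays discontinuous on $(0,1]$.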
Assignment at Educational Option programs
Ed-Opt programs use two tie-breakers. Applicants are first categorized into
high performers, middle performers, and low performers by scores on a seventh
grade reading test. Ed-Opt programs aim to have an enrollment distribution of
16% high performers, 68% middle performers and 16% low performers. Half of Ed-
Opt seats are assigned using the lottery tie-breaker. These seats are called
“random.” The other half uses a rank variable such as those used by other
screened programs. These seats are called “select.”
We refer to the resulting six combinations as “buckets.” Ed-Opt applicants are
treated as applying to all six. A separate data set details which bucket
applicants were offered. Buckets have their own priorities and capacities. The
latter are imputed based on the observed assignments to buckets.
Tables D.1.4 and D.1.4 show applicants’ choice order of and priorities at Ed-
Opt buckets, respectively. Both are based on consultations with the NYC DOE
and our simulations of the match.
High performers rank high buckets first, while medium and low performers apply
to medium and low buckets first, respectively.
High performers have highest priority (priority group 1) at high buckets,
while medium and low performers receive highest priority at medium and low
buckets, respectively.
Miscellaneous Sample Restrictions The analysis sample is limited to first-time
eighth grade applicants for ninth grade seats. Ineligible applications (as
indicated in the main application data set) are dropped. Applicants with
special education status compete for a different set of seats and are thus
dropped in the analysis.
Students in the top 2% of scorers on the seventh grade reading test are
automatically admitted into any Ed-Opt program they rank first. We gather
these assignments in a separate Ed-Opt bucket, thereby leaving the admission
process for the other six buckets unaffected.
Table D.1.4 records the proportion of applicants for which our match
replication was successful.
### D.2 Additional Empirical Results
Grade A risk has a mode at 0.5, but takes on many other values as well. A
probability of 0.5 arises when the overall Grade A propensity score is
generated by a single Grade A screened school. This can be seen in Figure D1,
which tabulates the estimated probability of assignment to a Grade A school
for applicants in all cohorts (2012-2014) with a probability strictly between
0 and 1, calculated using the formula in Theorem 1. There are 24,966 students
with an estimated assignment probability equal to 1, 86,494 students with a
propensity score equal to 0, and 41,647 students with Grade A risk.
Figure D1: Distribution of Grade A Risk
Notes: This figure shows the histogram of the estimated probability of
assignment to a Grade A school for at-risk applicants in all sample cohorts
(2012-2014), calculated using Theorem 1. The full sample includes 24,966
applicants with a Grade A propensity score equal to 1, 86,494 applicants with
propensity score equal to 0, and 35,102 students with Grade A risk. The at-
risk sample is used to compute the balance estimates reported in Table 5.2.
Table D.2 reports estimates of the effect of Grade A assignments on attrition,
computed by estimating models like those used to gauge balance. Applicants who
receive Grade A school assignments have a slightly higher likelihood of taking
the SAT. Decomposing Grade A schools into screened and lottery schools,
applicants who receive lottery Grade A school assignments are 1.6 percent more
likely to have SAT scores, while assignments to Grade A screened schools do
not correspond to a statistically significant difference in the likelihood of
having follow-up SAT scores. This modest difference seems unlikely to bias the
2SLS Grade A estimates reported in Tables 5.2 and 5.3.
Table D.2 reports estimates of the effect of enrollment in an ungraded high
school. These use models like those used to compute the estimates presented in
Table 5.2. OLS estimates show a small positive effect of ungraded school
attendance on SAT scores and a strong negative effect on graduation outcomes.
2SLS estimates, by contrast, suggest ungraded school attendance is unrelated
to these outcomes.
# Simulating quantum vibronic dynamics at finite temperatures with many body
wave functions at 0K
Angus J. Dunnett [email protected] Alex W. Chin Sorbonne
Université, CNRS, Institut des NanoSciences de Paris, 4 place Jussieu, 75005
Paris, France
###### Abstract
For complex molecules, nuclear degrees of freedom can act as an environment
for the electronic ‘system’ variables, allowing the theory and concepts of
open quantum systems to be applied. However, when molecular system-environment
interactions are non-perturbative and non-Markovian, numerical simulations of
the complete system-environment wave function become necessary. These many
body dynamics can be very expensive to simulate, and extracting finite-
temperature results - which require running and averaging over many such
simulations - becomes especially challenging. Here, we present numerical
simulations that exploit a recent theoretical result that allows dissipative
environmental effects at finite temperature to be extracted efficiently from a
single, zero-temperature wave function simulation. Using numerically exact
time-dependent variational matrix product states, we verify that this approach
can be applied to vibronic tunneling systems and provide insight into the
practical problems lurking behind the elegance of the theory, such as the
rapidly growing numerical demands that can arise at high temperatures over
the course of long computations.
## I Introduction
The dissipative quantum dynamics of electronic processes play a crucial role
in the physics and chemistry of materials and biological life, particularly in
the ultra-fast and non-equilibrium conditions typical of photophysics,
nanoscale charge transfer and glassy, low-temperature phenomena (Miller _et
al._ , 1983). Indeed, the through-space tunneling of electrons, protons and
their coupled dynamics critically determine how either ambient energy is
transduced, or stored energy is utilised in supramolecular ‘devices’, and
real-time dynamics are especially important when the desired processes occur
against thermodynamical driving forces, or at the single-to-few particle level
(Devault, 1980; May and Kühn, 2008).
In many physico-chemical systems, a reaction, energy transfer, or similar event
proceeds in the direction of a free energy gradient, necessitating the
dissipation of energy and the generation of entropy (Dubi and Di Ventra,
2011; Benenti _et al._ , 2017). A powerful way of modelling the microscopic
physics at work during these irreversible dynamics is the concept of an ‘open’
quantum system (Breuer _et al._ , 2002; Weiss, 2012). Here a few essential
and quantized degrees of freedom constituting the ‘system’ are identified and
explicitly coupled to a much larger number of ‘environmental’ degrees of
freedom. Equations of motion for the coupled system and environment variables
are then derived and solved, with the goal of obtaining the behaviour of the
‘system’ degrees of freedom once the unmeasurable environmental variables are
averaged over their uncertain initial and final states. It is in this ‘tracing
out’ of the environment that the originally conservative, reversible dynamics
of the global system gives way to apparently irreversible dynamics in the
behaviour of the system’s observable variables. The effective behaviour of the
system ‘opened’ to the environment is entirely contained within its so-called
reduced density matrix, which we shall later define. Important examples of the
emergent phenomenology of reduced density matrices include the ubiquitous
processes of thermalization, dephasing and decoherence.
In the solid state, a typical electronic excitation will interact weakly with
the lattice vibrations of the material, particularly the long-wavelength, low
frequency modes. Under such conditions it is often possible to treat the
environment with low-order perturbation theory and - given that the lattice
‘environment’ relaxes back to equilibrium very rapidly - it is possible to
derive a Markovian master equation for the reduced density matrix, such as the
commonly used Bloch-Redfield theory (Breuer _et al._ , 2002; May and Kühn,
2008; Weiss, 2012). However, in sufficiently complex molecular systems, such
as organic bio-molecules, the primary environmental degrees of freedom acting
on electronic states are typically the stochastic vibrational motions of the
atomic nuclear coordinates. Unlike the solid state, these vibrations can: (1)
couple non-perturbatively to electronic states, (2) relax back to equilibrium
on timescales that are longer than the dynamics they induce in the system, and
(3) have frequencies $\omega$ such that $\hbar\omega\gg k_{B}T$, where $T$ is
the environmental temperature, and so must be treated quantum mechanically
(zero-point energy and nuclear quantum effects). In this regime, the theory
and numerical simulation of open quantum systems becomes especially
challenging, as the detailed dynamics of the interacting system and
environmental quantum states need to be obtained, essentially requiring the
solution of a correlated (entangled) many body problem.
One well known and powerful approach to this problem in theoretical chemistry
is the Multi-layer Multiconfigurational Time-dependent Hartree (ML-MCTDH)
technique, which enables vibronic wave functions to be efficiently represented
and propagated without the a priori limitations due to the ‘curse of
dimensionality’ associated with many body quantum systems (Wang and Shao,
2019; Lubich, 2015). However, computationally demanding methods based on the
propagation of a large wave function from a definite initial state will
typically struggle when dealing with finite-temperature environments (vide
infra), as the probability distribution of initial states requires extensive
sampling. For this reason, the majority of ML-MCTDH studies have effectively
been limited to zero-temperature systems.
In this article we will explore a recent and intriguing development in an
alternative approach to real-time dynamics and chemical rate prediction. This
approach is based on the highly efficient representation and manipulation of
large, weakly entangled wave functions with DMRG, Matrix-Product and Tensor-
Network-State methods (Orus, 2014). These methods, widely used in condensed
matter, quantum information and cold atom physics, have recently been applied
to a range of open system models, including chemical systems, but - as wave
function methods - are typically used at zero-temperature (Prior _et al._ ,
2010, 2013; Chin _et al._ , 2013; Xie _et al._ , 2019; Alvertis _et al._ ,
; Schröder _et al._ , 2019). However, a remarkable new result due to
Tamascelli et al. shows that it is indeed possible to obtain the _finite-
temperature_ reduced dynamics of a system based on a simulation of a ‘pure’,
i.e. zero-temperature wave function (Tamascelli _et al._ , 2019).
In principle, this opens the way for many existing wave function methods to be
extended into finite temperature regimes, although the present formulation of
Tamascelli et al.’s T-TEDOPA mapping is most easily implemented with matrix
product states (MPS). In this article, we shall investigate this extension to
finite temperature in the regime of relevance for molecular quantum dynamics,
that is, non-perturbative vibrational environments, and present numerical data
that verifies the elegance and utility of the method, as well as some of the
potential issues arising in implementation.
The structure of the article is as follows. In section II we will summarise
Tamascelli et al.’s T-TEDOPA mapping. In section III we verify the theory by
comparing numerical simulations against an exactly solvable open system model,
and also employ further numerical investigations to provide some insight into
the manner in which finite temperatures are handled within this method. By
looking at the observables of the environment, we find that the number of
excitations in the simulations grows continuously over time, which may place
high demands on computational resources in some problems. In section IV we
will present results for a model system inspired by electron transfer in a
multi-dimensional vibrational environment, and show how the temperature-driven
transition from quantum tunneling to classical barrier transfer are
successfully captured by this new approach. This opens a potentially fruitful
new phase for the application of tensor network and related many body
approaches for the simulation of non-equilibrium dynamics in a wide variety of
vibronic materials and molecular reactions.
## II T-TEDOPA
Figure 1: (a) A generic open quantum system contains a few-level ‘system’ (S)
that interacts with a much larger thermal heat bath of bosonic oscillators
(the environment, E). The continuum of oscillator modes are initially
uncorrelated with the system and each is thermally occupied with
characteristic temperature $T=\beta^{-1}$. Coupling and stochastic
fluctuations of the environment lead to the effective thermalization of the
system, once the environmental states have been traced over. (b) In the
T-TEDOPA approach, the harmonic environment is extended to include modes of
negative frequency, and all modes (positive and negative frequency) are
initially in their ground states. It can be formally demonstrated that the
thermalization of S in (a) can always be obtained from the pure zero-
temperature state in (b), provided the spectral density of the original
environment is known. Figure 2: (a) The extended proxy environment of Fig. 1
(a) is described by an effective, temperature-dependent spectral density
$J_{\beta}(\omega)$. Once the effective $J_{\beta}(\omega)$ has been
specified, new oscillator modes can be found that provide a unitary
transformation to a linear chain representation of the environment with
nearest neighbour interactions. The non-perturbative wave function dynamics
for such a many-body 1D system can be very efficiently simulated with MPS
methods. (b) $J_{\beta}(\omega)$ for a physical Ohmic environment at three
representative temperatures. At very low temperature ($\omega_{c}\beta\gg 1$)
there is essentially no coupling to the negative frequency modes, as
excitation of these modes leads to an effective absorption of heat _from_ the
environment. At higher temperatures, $J_{\beta}(\omega)$ becomes increasingly
symmetric for the positive and negative modes.
In this section we shall summarise the essential features of the T-TEDOPA
approach, closely following the original notation and presentation of
Tamascelli et al. (Tamascelli _et al._ , 2019). Our starting point is the
generic Hamiltonian for a system coupled to a bosonic environment consisting
of a continuum of harmonic oscillators
$H_{SE}=H_{S}+H_{E}+H_{I},$ (1)
where
$H_{I}=A_{S}\otimes\int_{0}^{\infty}d\omega\,\hat{O}_{\omega},\qquad H_{E}=\int_{0}^{\infty}d\omega\,\omega a_{\omega}^{\dagger}a_{\omega}.$ (2)
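As a concrete toy illustration of this Hamiltonian, the continuum of oscillators can be discretized into a handful of truncated modes. All parameters below (frequencies, couplings, truncations) are arbitrary assumptions for the sketch, with the couplings playing the role of the integrated spectral density and $A_{S}=\sigma_{z}$ coupling to the mode displacements:

```python
import numpy as np

# Toy discretization of H_SE = H_S + H_E + H_I (hypothetical parameters):
# a two-level system coupled to n_b harmonic modes, each with n_f Fock states.
n_b, n_f = 3, 4
sz = np.diag([1.0, -1.0])                      # A_S: system coupling operator
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
a = np.diag(np.sqrt(np.arange(1.0, n_f)), 1)   # truncated annihilation operator
disp = a + a.T                                 # displacement a + a-dagger

def kron_chain(mats):
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def mode_op(op, k):
    """op on mode k, identity on the system and all other modes."""
    return kron_chain([np.eye(2)] + [op if j == k else np.eye(n_f) for j in range(n_b)])

omega = np.array([0.5, 1.0, 1.5])              # assumed mode frequencies
g = 0.2 * np.sqrt(omega)                       # assumed couplings per mode

H_S = 0.5 * kron_chain([sx] + [np.eye(n_f)] * n_b)
H_E = sum(omega[k] * mode_op(a.T @ a, k) for k in range(n_b))
H_I = sum(g[k] * kron_chain([sz] + [disp if j == k else np.eye(n_f) for j in range(n_b)])
          for k in range(n_b))
H = H_S + H_E + H_I                            # dense 128 x 128 Hermitian matrix
print(H.shape, np.allclose(H, H.T))
```

Such dense constructions scale exponentially in the number of modes, which is precisely why the MPS/tensor-network representations discussed below are needed for realistic environments.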
The Hamiltonian $H_{S}$ is the free system Hamiltonian, which for chemical
systems, molecular photophysics and related problems will often be a
description of a few of the most relevant diabatic states at some reference
geometry of the environment(s) (May and Kühn, 2008). $A_{S}$ is the system
operator which couples to the bath. For the bath operators we take the
displacements
$\hat{O}_{\omega}=\sqrt{J(\omega)}(a_{\omega}+a_{\omega}^{\dagger}),$ (3)
thus defining the spectral density $J(\omega)$. This has been written here as
a continuous function, but coupling to a discrete set of vibrational modes in,
say, a molecular chromophore, can be included within this description by
adding suitable structure to the spectral density, i.e. sets of Lorentzian
peaks or Dirac functions (Wilhelm _et al._ , 2004; Schulze and Kuhn, 2015;
Mendive-Tapia _et al._ , 2018). The state of the system+environment at time
$t$ is described by a mixed state described by a density matrix
$\rho_{SE}(t)$. The initial condition is assumed to be a product of system and
environment states $\rho_{SE}(0)=\rho_{S}(0)\otimes\rho_{E}(0)$ where
$\rho_{S}(0)$ is an arbitrary density matrix for the system and
$\rho_{E}(0)=\exp(-H_{E}\beta)/\mathcal{Z}$, with the environment partition
function given by $\mathcal{Z}=\operatorname{Tr}\{\exp(-H_{E}\beta)\}$. Such
a product state is commonly realised in photophysics, where the reference
geometry for the environment is the electronic ground state and the electronic
system is excited according to the Franck-Condon principle into some manifold
of electronic excited states without nuclear motion (May and Kühn, 2008;
Mukamel, 1995). Indeed, this can also occur following any sufficiently rapid
non-adiabatic event, such as ultra-fast charge separation at a donor-acceptor
interface (Gélinas _et al._ , 2014; Smith and Chin, 2015). The environment
thus begins in a thermal equilibrium state with inverse temperature $\beta$,
and the energy levels of each harmonic mode are statistically populated, as
shown in Fig. 1a. For a very large number (continuum) of modes, the number of
possible thermal configurations of the initial probability distribution grows
extremely rapidly with temperature, essentially making a naive sampling of
these configurations impossible for full wave function simulations. We note,
however, that some significantly better sampling methods involving sparse
grids and/or stochastic mean-field approaches have been proposed and
demonstrated (Alvermann and Fehske, 2009; Binder and Burghardt, 2019).
The initial thermal condition of the environmental oscillators is also a
Gaussian state, for which it is further known that the influence functional
(Weiss, 2012) - which is a full description of the influence of the bath on
the system - will depend only on the two-time correlation function of the bath
operators
$S(t)=\int_{0}^{\infty}d\omega\langle O_{\omega}(t)O_{\omega}(0)\rangle.$ (4)
Any two environments with the same $S(t)$ will have the same influence
functional and thus give rise to the same reduced system dynamics, i.e. the
same $\rho_{S}(t)=\operatorname{Tr}_{E}\{\rho_{SE}(t)\}$. That the reduced
system dynamics are completely specified by the spectral density and
temperature of a Gaussian environment has been known for a long time (Weiss,
2012), but the key idea of the equivalence - and thus the possibility of the
interchange - of environments with the same correlation functions has only
recently been demonstrated by Tamascelli et al. (Tamascelli _et al._ , 2018).
The time dependence in eq. 4 refers to the interaction picture so that the
bath operators evolve under the free bath Hamiltonian:
$O_{\omega}(t)=e^{iH_{E}t}O_{\omega}(0)e^{-iH_{E}t}$. Using eq. 3 and $\langle
a_{\omega}^{\dagger}a_{\omega}\rangle=n_{\beta}(\omega)$ we have
$S(t)=\int_{0}^{\infty}d\omega\,J(\omega)[e^{-i\omega
t}(1+n_{\beta}(\omega))+e^{i\omega t}n_{\beta}(\omega)].$ (5)
Making use of the relation
$\frac{1}{2}(1+\coth(\omega\beta/2))\equiv\begin{cases}1+n_{\beta}(\omega),\omega\geq
0\\ -n_{\beta}(|\omega|),\omega<0\end{cases}$ (6)
we can write eq. 5 as an integral over all positive and negative $\omega$
$S(t)=\int_{-\infty}^{\infty}d\omega\operatorname{sign}(\omega)\frac{J(|\omega|)}{2}(1+\coth(\frac{\omega\beta}{2}))e^{-i\omega
t}.$ (7)
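The equivalence of eqs. 5 and 7 is straightforward to verify numerically. The following sketch (our own illustration, not code from the original work, assuming an Ohmic $J(\omega)=2\alpha\omega$ with a hard cut-off at $\omega_{c}$) integrates both expressions and confirms that they agree:

```python
# Numerical check that the thermal correlation function of the physical bath
# (eq. 5) equals the zero-temperature correlation function of the extended
# environment (eq. 7).  Illustrative Ohmic bath, hard cut-off at wc.
import numpy as np
from scipy.integrate import quad

alpha, wc, beta = 0.1, 1.0, 2.0

def J(w):
    # Ohmic spectral density; the cut-off is enforced by the integration limits
    return 2.0 * alpha * w

def n(w):
    # Bose occupation n_beta(w)
    return 1.0 / np.expm1(beta * w)

def S_physical(t):
    # Eq. 5: real part J(1+2n)cos(wt), imaginary part -J sin(wt)
    re = quad(lambda w: J(w) * (1.0 + 2.0 * n(w)) * np.cos(w * t), 0.0, wc)[0]
    im = quad(lambda w: -J(w) * np.sin(w * t), 0.0, wc)[0]
    return re + 1j * im

def J_beta(w):
    # Eq. 8: temperature-weighted spectral density of the extended bath
    return np.sign(w) * J(abs(w)) / 2.0 * (1.0 + 1.0 / np.tanh(w * beta / 2.0))

def S_extended(t):
    # Eq. 7: vacuum correlation function of the positive/negative frequency bath
    re = quad(lambda w: J_beta(w) * np.cos(w * t), -wc, wc, points=[0.0])[0]
    im = quad(lambda w: -J_beta(w) * np.sin(w * t), -wc, wc, points=[0.0])[0]
    return re + 1j * im
```

The `points=[0.0]` argument splits the integral at $\omega=0$, where $J_{\beta}(\omega)$ has only a removable discontinuity.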
But eq. 7 is exactly the two-time correlation function one would get if the
system was coupled to a bath, now containing positive and negative
frequencies, at zero temperature, with a temperature weighted spectral density
given by
$J_{\beta}(\omega)=\operatorname{sign}(\omega)\frac{J(|\omega|)}{2}(1+\coth(\frac{\omega\beta}{2})).$
(8)
Thus, we find that our open system problem is completely equivalent to the one
governed by the Hamiltonian
$H=H_{S}+H_{E}^{\text{ext}}+H_{I}^{\text{ext}},$ (9)
in which the system couples to an extended environment, where
$\begin{split}&H_{I}^{\text{ext}}=A_{S}\otimes\int_{-\infty}^{\infty}d\omega\sqrt{J_{\beta}(\omega)}(a_{\omega}+a_{\omega}^{\dagger}),\\\
&H_{E}^{\text{ext}}=\int_{-\infty}^{\infty}d\omega\omega
a_{\omega}^{\dagger}a_{\omega},\end{split}$ (10)
and which has the initial condition
$\rho_{SE}(0)=\rho_{S}(0)\otimes\ket{0}_{E}\bra{0}$. The system now couples to
a bath consisting of harmonic oscillators of positive and negative frequencies
which are initially in their ground states, as shown in Fig. 1b. This
transformed initial condition is now far more amenable to simulation as the
environment is now described by a _pure_ , single-configuration wave function,
rather than a statistical mixed state, and so _no_ statistical sampling is
required to capture the effects of temperature on the reduced dynamics!
Analysing the effective spectral density of Eq. 8, it can be seen that the new
extended environment has thermal detailed balance between absorption and
emission processes encoded in the ratio of the coupling strengths to the
positive and negative modes in the extended _Hamiltonian_ , as opposed to the
operator statistics of a thermally occupied _state_ of the original, physical
mode, i.e.
$\frac{J_{\beta}(\omega)}{J_{\beta}(-\omega)}=\frac{\langle
a_{\omega}a^{\dagger}_{\omega}\rangle_{\beta}}{\langle
a_{\omega}^{\dagger}a_{\omega}\rangle_{\beta}}=e^{\beta\omega}.$ (11)
Indeed, from the system’s point of view, there is no difference between the
absorption from an occupied, positive energy, bath mode and the emission into
an unoccupied, negative energy, bath mode.
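The detailed balance relation of eq. 11 can likewise be checked directly from the definition of $J_{\beta}(\omega)$. The short snippet below (our own check, again assuming an Ohmic $J(\omega)=2\alpha\omega$) confirms the ratio $e^{\beta\omega}$:

```python
# Detailed balance is encoded in the Hamiltonian couplings alone (eq. 11):
# J_beta(w) / J_beta(-w) = exp(beta * w).
import numpy as np

alpha, beta = 0.1, 2.0

def J_beta(w):
    # Eq. 8 for an (illustrative) Ohmic J(w) = 2*alpha*w
    return np.sign(w) * alpha * abs(w) * (1.0 + 1.0 / np.tanh(w * beta / 2.0))

for w in (0.1, 0.5, 0.9):
    assert np.isclose(J_beta(w) / J_beta(-w), np.exp(beta * w))
```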
In fact, the equivalence between these two environments goes beyond the
reduced system dynamics as there exists a unitary transformation which links
the extended environment to the original thermal environment. This means that
one is able to reverse the transformation and calculate thermal expectations
for the actual bosonic bath such as $\langle
a_{\omega}^{\dagger}(t)a_{\omega}(t)\rangle_{\beta}$. This is particularly
useful for molecular systems in which environmental (vibrational) dynamics are
also important observables that report on the mechanisms and pathways of
physio-chemical transformations (Musser _et al._ , 2015; Schnedermann _et
al._ , 2016, 2019). This is a major advantage of many body wave function
approaches, as the full information about the environment is available, c.f.
effective master equation descriptions which are obtained _after_ averaging
over the environmental state. We note that the idea of introducing a second
environment of negative frequency oscillators to provide finite temperature
effects in pure wave functions was previously proposed in the thermofield
approach of de Vega and Bañuls (de Vega and Bañuls, ). This approach
explicitly uses the properties of two-mode squeezed states to generate thermal
reduced dynamics, but the original thermofield approach, unlike the T-TEDOPA
mapping, considered the positive and negative frequency environments as two
separate baths.
Following this transformation a further step is required to facilitate
efficient simulation of the many-body system+environment wave function. This
is to apply a unitary transformation to the bath modes which converts the
_star_ -like geometry of $H_{I}^{\text{ext}}$ into a _chain_ -like geometry,
thus allowing the use of MPS methods (Chin _et al._ , 2010, 2013; Prior _et
al._ , 2013). We thus define new modes
$c_{n}^{(\dagger)}=\int_{-\infty}^{\infty}d\omega\,U_{n}(\omega)a_{\omega}^{(\dagger)}$,
known as chain modes, via the unitary transformation
$U_{n}(\omega)=\sqrt{J_{\beta}(\omega)}p_{n}(\omega)$ where $p_{n}(\omega)$
are orthonormal polynomials with respect to the measure $d\omega
J_{\beta}(\omega)$. Thanks to the three-term recurrence relations associated
with all orthonormal polynomials $p_{n}(\omega)$, only one of these new modes,
$n=1$, will be coupled to the system, while all other chain modes will be
coupled only to their nearest neighbours (Chin _et al._ , 2010). Our
interaction and bath Hamiltonians thus become
$\begin{split}&H_{I}^{\text{chain}}=\kappa A_{S}(c_{1}+c_{1}^{\dagger}),\\\
&H_{E}^{\text{chain}}=\sum_{n=1}^{\infty}\omega_{n}c_{n}^{\dagger}c_{n}+\sum_{n=1}^{\infty}(t_{n}c_{n}^{\dagger}c_{n+1}+\text{h.c.}).\end{split}$
(12)
The chain coefficients appearing in eq. 12 are related to the
three-term recurrence parameters of the orthonormal polynomials and can be
computed using standard numerical techniques (Chin _et al._ , 2010). The full
derivation of the above Hamiltonian is given in the appendix. Since the
initial state of the bath was the vacuum state, it is unaffected by the chain
transformation.
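In practice the chain coefficients can be generated by tridiagonalising a discretised version of the extended bath. The sketch below (our own illustration) uses a plain Lanczos recursion on a fine frequency grid; production TEDOPA codes instead compute the orthonormal-polynomial recurrence coefficients directly (e.g. via the Stieltjes procedure), but both approaches converge to the same $\omega_{n}$ and $t_{n}$ as the grid is refined:

```python
# Star-to-chain mapping by Lanczos tridiagonalisation of a discretised
# extended environment.  Illustrative Ohmic J(w) = 2*alpha*w with hard cut-off.
import numpy as np

alpha, wc, beta, M = 0.1, 1.0, 2.0, 2000

def J_beta(w):
    # Eq. 8: temperature-weighted spectral density (non-negative for all w)
    return np.sign(w) * alpha * np.abs(w) * (1.0 + 1.0 / np.tanh(w * beta / 2.0))

# midpoint grid on [-wc, wc] avoids the removable point w = 0
dw = 2.0 * wc / M
w = -wc + (np.arange(M) + 0.5) * dw
g = np.sqrt(J_beta(w) * dw)        # discrete couplings; kappa^2 = sum(g^2)

kappa = np.linalg.norm(g)          # system-chain coupling of eq. 12
V = [g / kappa]                    # Lanczos vectors, starting from the couplings
omegas, hops = [], []
for _ in range(6):                 # first few chain sites
    u = w * V[-1]                  # apply diag(w), the discretised bath Hamiltonian
    a = V[-1] @ u                  # on-site chain frequency omega_n
    u = u - a * V[-1]
    if len(V) > 1:
        u -= hops[-1] * V[-2]
    b = np.linalg.norm(u)          # nearest-neighbour hopping t_n
    omegas.append(a)
    hops.append(b)
    V.append(u / b)
```

The recursion is exactly the three-term recurrence of the orthonormal polynomials, expressed in the discrete measure $g_{k}^{2}$.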
We have thus arrived at a formulation of the problem of finite-temperature
open systems in which the many-body environmental state is initialised as a
pure product of trivial ground states, whilst the effects of thermal
fluctuations and populations are encoded in the Hamiltonian chain parameters
and system-chain coupling. These parameters must be determined once for each
temperature but - in principle - the actual simulation of the many body
dynamics is now no more complex than a zero-temperature simulation. This thus
opens up the use of powerful $T=0\,$K wave function methods for open systems,
such as those based on MPS, numerical renormalisation group and ML-MCTDH (Wang
and Shao, 2019; Lubich, 2015). However, while this seems remarkable - and we
believe this mapping to be a major advance - there must be a price to be paid
elsewhere. We shall now demonstrate with numerical examples where some of the
computational costs for including finite-$T$ effects may appear and discuss
how they might affect the feasibility and precision of simulations. We also
propose a number of ways to mitigate these potential problems within the
framework of tensor network approaches.
## III Numerical tests and computational efficiency
All numerical results in the following sections are obtained by representing
the many body system-environment wave function as a MPS and evolving it using
time-dependent variational methods. All results have been converged w.r.t. the
parameters of MPS wave functions (bond dimensions, local Hilbert space
dimensions, integrator time steps), meaning that the results and discussion
should - unless explicitly stated - pertain to the essential properties of the
T-TEDOPA mapping itself. Extensive computational details and background theory
can be found in Refs. (Orus, 2014; Schollwöck, ; Lubich _et al._ , 2015;
Paeckel _et al._ , ; Haegeman _et al._ , 2016).
### III.1 Chain dynamics and chain-length truncation
Before looking at the influence of thermal bath effects on a quantum system,
we first investigate the effects of the changing chain parameters that appear
due to the inclusion of temperature in the effective spectral density
$J_{\beta}(\omega)$. As a consequence of the nearest-neighbour nature of eq.
12 (see Fig. 2), the chain mapping establishes a kind of causality
among the bath modes which is extremely convenient for simulation. Starting
from $t=0$ the system will interact first with the chain mode $n=1$ which, as
well as acting back on the system, will in turn excite the next mode along the
chain and so on. The dynamics thus have a well defined light-cone structure in
which a perturbation travels outwards from the system along the chain to
infinity. This means that we may truncate the chain at any distant mode $n=N$
without causing an error in the system or bath observables up to a certain
time $T_{LC}(N)$ which is the time it takes for the edge of the light-cone to
reach the $N$th chain mode. Beyond $T_{LC}(N)$ there will be reflections off
the end of the chain leading to error in the bath observables; however, these
reflections will not cause error in the system observables until the time
$t\approx 2T_{LC}(N)$. Figure 3 shows a snapshot of the chain mode occupations
for the Ohmic spin-boson model considered in the next section. One can see
that the velocity of the wave-front that travels outward from the system
depends on temperature, with hotter baths leading to faster propagation and
thus requiring somewhat longer chains.
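The light-cone structure can be illustrated with a toy single-particle calculation (our own illustration, not the full MPS simulation): on a uniform tight-binding chain with hopping $t_{\text{hop}}$ the wave-front travels at group velocity $2t_{\text{hop}}$, so sites beyond the front stay essentially unoccupied and the chain can be truncated there:

```python
# Toy demonstration of chain causality: an excitation injected at the system
# end of a uniform hopping chain spreads ballistically; occupation beyond the
# light-cone edge is negligible.
import numpy as np

N, t_hop, t = 200, 0.5, 40.0             # chain length, hopping, evolution time

# single-particle hopping Hamiltonian (toy stand-in for the chain of eq. 12)
H = np.diag(np.full(N - 1, t_hop), 1) + np.diag(np.full(N - 1, t_hop), -1)
evals, evecs = np.linalg.eigh(H)

psi0 = np.zeros(N)
psi0[0] = 1.0                            # excitation enters at the system end
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
occ = np.abs(psi_t) ** 2                 # site occupations at time t

front = int(2.0 * t_hop * t)             # light-cone edge: site ~ v*t = 2*t_hop*t
```

Past the front the occupations fall off super-exponentially (Bessel-function tails), which is why truncating at a distant site $N$ causes no error before $T_{LC}(N)$.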
To enable simulation we are also required to truncate the infinite Fock space
dimension of each chain mode to a finite dimension $d$, introducing an error
for which there exist rigorously derived bounds (Woods _et al._ , ). The
initial state $\ket{\Psi(0)}_{SE}=\ket{\psi(0)}_{S}\otimes\ket{0}_{E}$ (here
we specialize to the case where the system is initially in a pure state) can
then be encoded in an MPS and evolved under one of the many time-evolution
methods for MPS. We choose to use the one-site Time-Dependent-Variational-
Principle (1TDVP) as it has been shown to be an efficient method for tracking
long-time thermalization dynamics and has previously been shown to give
numerically exact results for the zero-temperature spin-boson model in the
highly challenging regime of quantum criticality (Schröder and Chin, 2016). In
our implementation of 1TDVP the edge of the light-cone is automatically
estimated throughout the simulation by calculating the overlap of the wave-
function $\ket{\Psi(t)}_{SE}$ with its initial value $\ket{\Psi(0)}_{SE}$ at
each chain site. This allows us to expand the MPS dynamically to track the
expanding light-cone, providing roughly a 2-fold speed-up compared to using a
fixed length MPS.
Figure 3: Chain mode occupations $\langle c_{n}^{\dagger}c_{n}\rangle$ at time
$\omega_{c}t=45$ for baths of several temperatures. The system, which in this
case is the Ohmic SBM, with $\omega_{0}=0.2\omega_{c}$ and $\alpha=0.1$, is
attached at site $n=1$ of the chain.
### III.2 Two-level system dynamics: dephasing and divergence of chain
occupations due to energy exchange
To confirm the accuracy of this approach in terms of reduced system dynamics
we now explore the effects of a dissipative environment on a quantum two-level
system. First, we compare the numerical results against the analytically
solvable Independent-Boson-Model (IBM) (Mahan, 2000; Breuer _et al._ , 2002).
This is a model of pure dephasing, defined by
$H_{S}=\frac{\omega_{0}}{2}\sigma_{z}$ and $A_{S}=\sigma_{z}$, where
$\{\sigma_{x},\sigma_{y},\sigma_{z}\}$ are the standard Pauli matrices. We
take an Ohmic spectral density with a hard cut-off
$J(\omega)=2\alpha\omega\Theta(\omega_{c}-\omega)$ and choose a coupling
strength of $\alpha=0.1$ and a gap of $\omega_{0}=0.2\omega_{c}$ for the two
level system (TLS). The initial state of the system is a positive
superposition of the spin-up and spin-down states, and we monitor the decay of
the TLS coherence, which is quantified by $\langle\sigma_{x}(t)\rangle$. All
results were converged using a Fock space dimension of $d=6$ for the chain
modes and maximum MPS bond-dimension $D_{\text{max}}=4$. We find that the
results obtained using the T-TEDOPA method agree very well with the exact
solution (see fig. 4) and correctly reproduce the transition from under-damped
to over-damped decay as the temperature is increased (Mahan, 2000; Breuer _et
al._ , 2002).
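For reference, the exact IBM benchmark is the textbook pure-dephasing result (see e.g. Breuer _et al._ , 2002): with the displacement-operator convention of eq. 3 the coherence decays as $\langle\sigma_{x}(t)\rangle=\cos(\omega_{0}t)\,e^{-\Gamma(t)}$ with decoherence function $\Gamma(t)=\int_{0}^{\omega_{c}}d\omega\,\frac{4J(\omega)}{\omega^{2}}\coth(\beta\omega/2)(1-\cos\omega t)$. A sketch of its numerical evaluation (our own illustration):

```python
# Exact Independent-Boson-Model dephasing for an Ohmic bath with hard cut-off,
# J(w) = 2*alpha*w for w <= wc, coupling A_S = sigma_z.
import numpy as np
from scipy.integrate import quad

alpha, wc, w0 = 0.1, 1.0, 0.2

def gamma_dephasing(t, beta):
    # Gamma(t) = int dw 4 J(w)/w^2 coth(beta w/2)(1 - cos(wt)); with the Ohmic
    # J the integrand simplifies to (8 alpha / w) coth(beta w/2)(1 - cos(wt))
    f = lambda w: (8.0 * alpha / w) / np.tanh(beta * w / 2.0) * (1.0 - np.cos(w * t))
    return quad(f, 0.0, wc)[0]

def sigma_x(t, beta):
    # coherence of a TLS prepared in the +x superposition state
    return np.cos(w0 * t) * np.exp(-gamma_dephasing(t, beta))
```

Raising the temperature (lowering $\beta$) increases $\Gamma(t)$ pointwise, which is the origin of the under-damped to over-damped crossover seen in Fig. 4.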
Figure 4: Comparison of T-TEDOPA (Black crosses) with the exact solution for
the Independent-Boson-Model at $\beta=100$ (Red), $\beta=10$ (Blue) and
$\beta=1$ (Green). $H_{S}=\frac{\omega_{0}}{2}\sigma_{z}$, $A_{S}=\sigma_{z}$,
$J(\omega)=2\alpha\omega_{c}(\frac{\omega}{\omega_{c}})^{s}\Theta(\omega_{c}-\omega)$,
$\alpha=0.1$, $s=1$, $\omega_{0}=0.2\omega_{c}$.
As a second numerical example we take the Spin-Boson-Model (SBM), identical to
the IBM considered above except that now the TLS couples to the bath via
$A_{S}=\sigma_{x}$. Unlike the previous case, the bath can now drive
transitions within the TLS, so that energy is now dynamically exchanged
between the TLS and its environment. Indeed, as $A_{S}$ no longer commutes
with $H_{S}$, no exact solution for this model is known (Weiss, 2012). It has
thus become an important testing ground for numerical approaches to non-
perturbative simulations of open systems and has been widely applied to the
physics of decoherence, energy relaxation and thermalization in diverse
physical, chemical and biological systems - see Refs. (Weiss, 2012; De Vega
and Alonso, 2017) for extensive references. In our example, we prepare the
spin in the upper spin state ($\langle\sigma_{z}\rangle=+1$) and allow the
bath to thermalize by environmental energy exchange (see Fig. 1a). Here,
instead of presenting the spin dynamics for this model, we will interest
ourselves in the observables of the bath, as these provide insight into
the manner in which a finite temperature bath is being mimicked by an
initially empty tight-binding chain. In figure 5 we plot the bath mode
occupations $\langle a_{\omega}^{\dagger}a_{\omega}\rangle$ for several
temperatures. Each observation was taken after the spin had decayed into its
thermal steady state and thus provides a kind of absorption spectrum for the
system. We note that these data refer to the modes of the extended environment
of eq. 9 rather than the original bosonic bath and thus the mode energies run
from $-\omega_{c}$ to $\omega_{c}$.
We find that for zero temperature ($\beta=\infty$) the bath absorption
spectrum contains a single peak at a frequency around
$\omega_{p}=0.17\omega_{c}$, suggesting that the spin emits into the bath at a
re-normalized frequency that is lower than the bare gap of the TLS
($\omega_{0}=0.2\omega_{c}$). This agrees well with the renormalized gap
$\omega_{0}^{r}=\omega_{0}(\omega_{0}/\omega_{c})^{\frac{\alpha}{1-\alpha}}$
predicted by the non-perturbative variational polaron theory of Silbey & Harris
(Silbey and Harris, 1984), which for the parameters used here gives
$\omega_{0}^{r}=0.167\omega_{c}$.
Moving to non-zero temperature we see that a peak begins to form at a
corresponding negative frequency, which we interpret as being due to the spin
absorbing thermal energy from the bath by the _emission_ (creation) of
negative energy quanta. In accordance with detailed balance, the ratio between
the positive and negative frequency peaks approaches unity as temperature is
increased and by $\beta\omega_{c}=2$ the two peaks have merged to form a
single, almost symmetric, distribution, reflecting the dominance of thermal
absorption and emission over spontaneous emission at high temperature. Indeed,
as shown in the right inset of figure 5 the ratio of the peak heights we
extract obeys $\frac{\langle n_{\omega}\rangle+1}{\langle
n_{-\omega}\rangle}=e^{\epsilon\beta}$ with $\epsilon=0.118$. Thus we see that
the chain is composed of two independent vacuum reservoirs of positive and
negative energy which the system emits into at rates which effectively
reproduce the emission and absorption dynamics that would be induced by a
thermal bath.
However, the introduction of positive and negative modes has an interesting
and important consequence for the computational resources required for
simulation. Shown in the left inset of figure 5 is the total mode occupation
as a function of time for some of the different temperatures simulated. One
sees that for $\beta=\infty$ (zero temperature) the total occupation of the
bath modes increases initially and then plateaus at a certain steady state
value corresponding to the total number of excitations created in the bath by
the TLS during its decay. In contrast, for finite temperature, the total mode
occupation increases indefinitely at a rate which grows with temperature. This
is despite the fact that for the finite temperature baths the total excitation
number will also reach a steady state once the TLS has decayed. The reason for
this is clear. The thermal occupation of the physical bath mode with frequency
$\omega$ is obtained by subtracting its negative, from its positive energy
counterpart in the extended mode basis, i.e. $\langle
n_{\omega}\rangle_{\beta}=\langle n_{\omega}\rangle_{\ket{0}_{E}}-\langle
n_{-\omega}\rangle_{\ket{0}_{E}}$. While $\langle n_{\omega}\rangle_{\beta}$
will reach a steady state, the components $\langle
n_{\omega}\rangle_{\ket{0}_{E}}$ and $\langle
n_{-\omega}\rangle_{\ket{0}_{E}}$ will be forever increasing, reflecting the
fact that the TLS reaches a _dynamic_ equilibrium with the bath in which
energy is continuously being absorbed from and emitted into the bath at equal
rates, thus filling up the positive and negative reservoirs. Since it is the
modes of the _extended_ environment that appear in the numerical simulation,
one will always encounter potentially large errors once the filling of the
modes exceeds their capacity set by the truncation to $d$ Fock states per
oscillator. The rate at which this filling occurs increases with temperature
and is linear in time. However, as the relaxation time of the system is also
broadly proportional to temperature for $\beta\omega_{c}\ll 1$, this may not
be a problem, if one is only interested in the short-time transient dynamics.
Where this may pose problems is for the extraction of converged properties of
relaxed, i.e. locally thermalized excited states, such as their (resonance)
fluorescence spectra, or multidimensional optical spectra (Mukamel, 1995).
While these ever-growing computational resources must - as argued above - be
present in any simulation approach, we note that one possible way to combat
the growth of local dimensions could be to use the dynamical version of Guo’s
Optimised Boson Basis (OBB) which was introduced into 1TDVP for open systems
by Schroeder et al. (Guo _et al._ , 2012; Schröder and Chin, 2016).
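The linear filling at fixed physical occupation can be illustrated with a simple rate-equation caricature (an illustration only, not the MPS dynamics): a TLS emitting into the positive- and negative-frequency vacua at rates obeying detailed balance. Once the TLS has relaxed, both reservoir populations grow linearly while their difference saturates:

```python
# Rate-equation toy model of the dynamic equilibrium: n_plus and n_minus grow
# without bound, but n_plus - n_minus (the physical thermal occupation balance)
# reaches a steady value.  eps is the (illustrative) emission frequency.
import numpy as np

eps, gamma_down, beta = 0.118, 1.0, 5.0
gamma_up = gamma_down * np.exp(-beta * eps)   # detailed balance, cf. eq. 11

p_e, p_g = 1.0, 0.0                           # spin prepared in the upper state
n_plus = n_minus = 0.0
dt, steps = 0.01, 10000
n_diff, n_tot = [], []
for _ in range(steps):
    em = gamma_down * p_e * dt   # emission into positive-energy vacuum modes
    ab = gamma_up * p_g * dt     # absorption = emission into negative-energy modes
    p_e += ab - em
    p_g += em - ab
    n_plus += em
    n_minus += ab
    n_diff.append(n_plus - n_minus)
    n_tot.append(n_plus + n_minus)
```

In the steady state the two fluxes are equal by detailed balance, so the total extended-mode occupation grows linearly in time at a temperature-dependent rate, exactly the behaviour seen in the left inset of figure 5.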
Figure 5: Bath mode occupations $\langle n_{\omega}\rangle=\langle
a_{\omega}^{\dagger}a_{\omega}\rangle$ for the extended environment after the
TLS has decayed. The TLS is governed by a Hamiltonian
$H_{S}=\frac{\omega_{0}}{2}\sigma_{z}$ where $\omega_{0}=0.2\omega_{c}$ and is
coupled to an Ohmic bath with a hard cut-off via $A_{S}=\sigma_{x}$. The
coupling strength is $\alpha=0.1$. Left inset: total mode occupation as a
function of time $\langle
n\rangle_{\text{tot}}=\int_{-\infty}^{\infty}d\omega\langle
n_{\omega}\rangle$. Right inset shows $\frac{\langle
n_{\omega_{p}}\rangle+1}{\langle n_{\omega_{n}}\rangle}$ plotted on a log
scale against the inverse temperature, demonstrating the detailed balance of
the absorption and emission rates.
## IV Electron Transfer
Figure 6: (a) Potential energy surfaces (Marcus parabolas) for $\epsilon=0$ as
a function of the reaction coordinate $x$. We consider only the case of zero
bias, i.e. when the minima of the two wells are at the same energy. (b)
Turning on the electronic coupling $\epsilon$ leads to an avoided crossing and
thus an energy barrier $E_{b}$ for the reaction. Note that this is a
simplified picture in which we treat the bath as being represented by a single
mode of frequency $\omega$ and coupling strength $g$ whereas in the actual
model we simulate there is a similar surface for all bath modes.
Having established that the T-TEDOPA mapping allows efficient computational
access to finite temperature open dynamics, we now study the chemically
relevant problem of tunneling electron transfer. Electron transfer is a
fundamental problem in chemical dynamics and plays an essential role in a vast
variety of crucial processes including the ultra-fast primary electron
transfer step in photosynthetic reaction centers and the electron transport
that powers biological respiration (Devault, 1980; Marcus, 1993; May and Kühn,
2008). The problem of modeling electron transfer between molecules comes down
to accurately treating the coupling between the electronic states and
environmental vibrational modes, and often involves the use of first principle
techniques to parameterize the total spectral functions of the vibrational and
outer solvent, or protein environment (Mendive-Tapia _et al._ , 2018;
Schröder _et al._ , 2019; Zuehlsdorff _et al._ , 2019). In many molecular
systems - and particularly biological systems where the transfer between
electronic states is affected by coupling to chromophore and protein modes -
the system-bath physics is highly non-perturbative and $J(\omega)$ has very
sharp frequency-dependence (May and Kühn, 2008; Womick _et al._ , 2011; Chin
_et al._ , 2013; Kolli _et al._ , 2012). Until recently, and even at zero
temperature, a fully quantum mechanical description of the coupling to a
continuum of environmental vibrations was challenging due to the exponential
scaling of the vibronic wave functions. However, with advances in numerical
approaches driven by developments in Tensor-Networks and ML-MCTDH, the exact
quantum simulation of continuum environment models can now be explored very
precisely at zero temperature. Given this, we now explore how the T-TEDOPA
mapping can extend this capability to finite temperature quantum tunneling.
Here, we will again adapt the spin-boson model to analyse a typical donor-
acceptor electron transfer system, as shown in Fig.6. In this model the
electron transfer process is modelled using two states representing the
reactant and product states which we take to be the eigenstates of
$\sigma_{x}$ with $\ket{\downarrow}$ representing the reactant and
$\ket{\uparrow}$ the product. We take our system Hamiltonian to be
$H_{S}=\frac{\epsilon}{2}\sigma_{z}+\lambda_{R}\frac{1+\sigma_{x}}{2}$, and
the coupling operator as $A_{S}=\frac{1+\sigma_{x}}{2}$, where $\lambda_{R}$
is the reorganization energy which for an Ohmic bath is
$\lambda_{R}=2\alpha\omega_{c}$. The electron tunnels from the
_environmentally relaxed_ reactant state to the product state by moving
through a multi-dimensional potential energy landscape along a collective
reaction coordinate which is composed of the displacements of the ensemble of
bath modes (this is effectively the coordinate associated with the mode that
is directly coupled to the system in the chain representation of the
environment). Figure 6(a) shows two potential energy surfaces - Marcus
parabolas - of the electronic system for $\epsilon=0$. Although in the actual
model we simulate the reaction coordinate is composed of the displacements of
an infinite number of modes, in figure 6 we present a simplified picture in
which the electron moves along a single reaction coordinate, $x$. The
potential minimum of the reactant state corresponds to the bath in its
undisplaced, vacuum state, whereas at the potential minimum of the product
state each bath mode is displaced by an amount depending on its frequency and
the strength of its coupling to the TLS $\sqrt{J(\omega)}/\omega$. The
presence of the reorganization energy in $H_{S}$ ensures that these two minima
are degenerate in energy and thus detailed balance will ensure an equal
forward and backward rate.
Turning on the coupling $\epsilon$ between the two levels leads to an avoided
crossing in the two energy surfaces in an adiabatic representation of the
vibronic tunneling system, leading to two potential wells. In such a semi-
classical (Born-Oppenheimer) picture, we see that the electron must overcome a
kind of effective energy barrier $E_{b}$ that scales with the total
reorganisation energy of the entire environment $\lambda_{R}$ in order for the
reaction to progress. We thus might well expect to see thermally activated
(exponential) behaviour whereby the tunneling rate $\propto\exp(-\beta
E_{b})$. However, at low temperatures this behaviour should be dramatically
quenched and dissipative quantum tunneling should become dominant and strongly
dependent on the spectral function of the environment (Weiss, 2012).
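The simplified single-mode picture of Fig. 6 can be reproduced in a few lines (our own sketch, with illustrative parameter values): two diabatic Marcus parabolas are split by $\epsilon$ into adiabatic surfaces, with the barrier top at the diabatic crossing lying $\approx\lambda_{R}/4-\epsilon/2$ above the well minima:

```python
# Adiabatic surfaces of the single-mode Marcus picture (Fig. 6).
import numpy as np

omega, lam, eps = 1.0, 1.0, 0.2     # mode frequency, reorganisation energy, coupling
x0 = np.sqrt(2.0 * lam / omega)     # product-well displacement: lam = omega*x0**2/2

x = np.linspace(-1.5, x0 + 1.5, 2001)
V_r = 0.5 * omega * x**2                    # reactant diabat
V_p = 0.5 * omega * (x - x0)**2             # product diabat (zero bias: equal minima)

half_gap = np.sqrt(((V_r - V_p) / 2.0)**2 + (eps / 2.0)**2)
E_lower = (V_r + V_p) / 2.0 - half_gap      # adiabatic ground surface (double well)
E_upper = (V_r + V_p) / 2.0 + half_gap      # adiabatic excited surface

i_top = np.argmin(np.abs(x - x0 / 2.0))     # diabatic crossing point
E_b = E_lower[i_top] - E_lower.min()        # effective barrier of Fig. 6(b)
```

The adiabatic gap is $\epsilon$ at the crossing, and the barrier height scales with the reorganisation energy, as described in the text.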
### IV.1 Numerical results
Figure 7: (a) $\langle\sigma_{x}(t)\rangle$ for several temperatures, which
represents the progress of the reaction. The decay to the steady state is
exponential at high temperature. (b) $\langle\sigma_{y}(t)\rangle$,
representing the momentum along the reaction coordinate. We encounter some
noise beyond about $\omega_{c}t=50$ in the $\beta=2$ data. This is a result
of the truncation of the local Hilbert spaces of the bath modes (cf. sec. III).
The inset shows an enlarged view of the initial fast dynamics which appear to
be broadly independent of temperature.
For our numerical investigation we take an Ohmic spectral density with
$\alpha=0.8$ for which the dynamics are expected to be incoherent at all
temperatures, i.e. the energy surfaces of figure 6(b) are well separated and
friction is such that there will be no oscillatory tunneling dynamics between
reactant and product. In figure 7 we present results for this model at several
temperatures using the T-TEDOPA mapping and 1TDVP. The expectation of
$\sigma_{x}$ can be taken to be a measure of the progress of the reaction,
starting at the value of $-1$ when the system is entirely in the reactant
state, and approaching $0$ as the electron tunnels through the barrier and the
populations thermalize. We find that as the temperature is increased the
dynamics tend to an exponential decay to the steady state, whereas non-
exponential behavior is observed for lower temperatures. In figure 7(b) we
show the expectation of $\sigma_{y}$, which is the conjugate coordinate to the
$\sigma_{x}$ and which may thus be interpreted as a kind of momentum
associated with the tunneling. We find that there is a sharp initial spike in
$\langle\sigma_{y}\rangle$ which decays with oscillations which are
increasingly damped at higher temperatures. As we might have predicted, these
transient dynamics occur on a timescale of $\tau\approx\omega_{c}^{-1}$, which
is the fastest response time of an environment with an upper cut-off frequency of
$\omega_{c}$. This is approximately the timescale over which the environment
will adjust to the sudden presence of the electron, and essentially sets the
timescale for the formation of the adiabatic landscape (or, alternatively, for
the formation of the dressed polaron states), after which the tunneling
dynamics proceed. This period is related to the slippage of initial conditions
that is sometimes used to fix issues of density matrix positivity in
perturbative Redfield Theory (Gaspard and Nagaoka, 1999), although here the
establishment of these conditions is described exactly and in real-time. We
also see that the crossover to the tunneling regime happens faster as the
temperature increases, meaning that the effective initial conditions -
particularly $\langle\sigma_{y}(t)\rangle$ - are temperature dependent.
We extract approximate reaction rates from the TLS dynamics by fitting each
$\langle\sigma_{x}(t)\rangle$ to an exponential decay $-e^{-\Gamma t}$ on
timescales $t>\tau$. We thus obtain the rates $\Gamma(\epsilon,\beta)$ for the
various values of $\beta$ and $\epsilon$ simulated. The values of $\epsilon$
were chosen to be small compared to the characteristic vibrational frequency
of the bath, $\epsilon\ll\omega_{c}$ and to the reorganisation energy,
$\epsilon\ll\lambda_{R}$ and thus lie in the non-adiabatic regime which is the
relevant regime for electron transfer. One may then perform a perturbative
expansion in $\epsilon$, otherwise known as the ‘Golden Rule’ approach which,
for an Ohmic bath, yields the following formulas for the high and low
temperature limits corresponding respectively to the classical and quantum
regimes (Weiss, 2012).
$\Gamma(\beta)=\begin{cases}\frac{\sqrt{\pi}}{4\sqrt{\alpha}}\epsilon^{2}(\frac{\pi}{\beta\omega_{c}})^{2\alpha-1},\beta\omega_{c}\gg
1\\
\frac{\epsilon^{2}}{4}\sqrt{\frac{\pi\beta\omega_{c}}{2\alpha}}\exp({-\frac{\alpha\beta\omega_{c}}{2}}),\beta\omega_{c}\ll
1\end{cases}.$ (13)
The golden rule result is based on second-order perturbation theory in the tunneling
coupling $\epsilon$, but it is exact to all orders in the system-environment
coupling $\alpha$. Additionally, the Ohmic form of the spectral function
generates a non-trivial power-law dependence of the tunneling rate on the
temperature for $\beta\omega_{c}\gg 1$ in which the rate may either decrease
or increase as the temperature is lowered, depending on the value of $\alpha$.
We plot these formulas along with the numerically evaluated rates in figure 8.
There is a good agreement in the high and low temperature limits between the
Golden Rule expressions and the T-TEDOPA results, and one clearly sees that
the temperature dependence of the rate is non-monotonic with a transition from
power-law growth (quantum, $2\alpha-1>0$) to power-law decay (classical,
$\propto\sqrt{\beta}$) as the temperature increases from $T=0$. We note that
for the parameters we present here, the intermediate regime where thermally
activated behaviour is predicted $\beta\omega_{c}\sim 1$ is not observed for
the Ohmic environment, and one essentially switches from tunneling limited by
the effect of friction on the attempt frequency to the low-temperature
polaronic tunneling of Eq. (13).
Figure 8: Log plot of the rates $\Gamma$ extracted from
$\langle\sigma_{x}(t)\rangle$ for $\epsilon=0.2$, $\epsilon=0.3$ and
$\epsilon=0.4$ as a function of $\beta$. (Dashed lines) High-temperature
($T\gg\omega_{c}$), classical limit of the Golden Rule formula. (Dotted lines)
Low-temperature ($T\ll\omega_{c}$), quantum limit of the Golden Rule formula.
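For orientation, the two limiting Golden Rule expressions of Eq. (13) can be evaluated numerically; the helper below is our illustrative sketch (names ours), and we read the prefactor of the high-temperature branch as $\epsilon^{2}/4$.

```python
import numpy as np

def golden_rule_rate(beta, eps, alpha, omega_c=1.0):
    """Evaluate the limiting Golden Rule rates of Eq. (13) for an Ohmic bath:
    the low-temperature (quantum) branch for beta*omega_c >> 1 and the
    high-temperature (classical) branch for beta*omega_c << 1."""
    x = beta * omega_c
    quantum = (np.sqrt(np.pi) / (4.0 * np.sqrt(alpha))) * eps**2 \
        * (np.pi / x)**(2.0 * alpha - 1.0)
    classical = (eps**2 / 4.0) * np.sqrt(np.pi * x / (2.0 * alpha)) \
        * np.exp(-alpha * x / 2.0)
    return np.where(x > 1.0, quantum, classical)
```

For $\alpha>1/2$ the quantum branch grows as the temperature is raised, while the classical branch decays as $\sqrt{\beta}$ at high temperature, reproducing the non-monotonic behaviour seen in figure 8.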
## V Conclusion
In this article we have shown how the combination of Tamascelli's
remarkable T-TEDOPA mapping and non-perturbative variational Tensor-Network
dynamics can be applied to chemical and photophysical systems under laboratory
conditions. Through numerical experiments we have carefully investigated how
the T-TEDOPA mapping allows the effects of finite temperatures to be obtained
efficiently without any need for costly sampling of the thermal environment
state, or the explicit use of density matrices. However, analysis of these
environmental dynamics reveals how incorporating finite temperatures can lead
to more expensive simulations, due to the filling-up of the chain modes and
the longer chains that are needed to prevent recurrence dynamics. Yet, we
believe that this method, and others like it, based on the exact quantum many-
body treatment of vibrational modes (Somoza _et al._ , 2019), could present
an attractive complementary approach to the Multi-Layer Multi-Configurational
Time-Dependent Hartree (ML-MCTDH) method commonly used in chemical dynamics.
One possible direction for this would be to consider a problem in which a
(discretized) potential surface for a reaction is contained within the system
Hamiltonian, while the environment bath provides the nuclear thermal and
quantum fluctuations that ultimately determine both real-time kinetics and
thermodynamical yields for the process, as is currently captured in methods
such as Ring Polymer Molecular Dynamics (Craig and Manolopoulos, 2004).
Furthermore, the Tensor-Network structures are not limited to the simple chain
geometries we consider here but can in fact adopt a tree structure, thus
enabling the treatment of complex coupling to multiple independent baths
(Schröder _et al._ , 2019). Such tree tensor networks have recently been
interfaced with ab initio methods to explore ultra-fast photophysics of real
molecules and their pump-probe spectra (Schnedermann _et al._ , 2019), but
such efforts have so far been limited to zero temperature. Finally, the
cooperative, antagonistic or sequential actions of different types of
environments, i.e. light and vibrations (Wertnik _et al._ , 2018), or even
the creation of new excitations, such as polaritons (Herrera and Owrutsky,
2020; Memmi _et al._ , 2017; Del Pino _et al._ , 2018), could play a key
role in sophisticated new materials for energy transduction, catalysis or
regulation (feedback) of reactions, and T-TEDOPA-based tensor networks are
currently being used to explore these developing areas.
## References
* Miller _et al._ (1983) W. H. Miller, S. D. Schwartz, and J. W. Tromp, The Journal of Chemical Physics 79, 4889 (1983), publisher: American Institute of Physics.
* Devault (1980) D. Devault, Quarterly Reviews of Biophysics 13, 387 (1980).
* May and Kühn (2008) V. May and O. Kühn, _Charge and energy transfer dynamics in molecular systems_ (John Wiley & Sons, 2008).
* Dubi and Di Ventra (2011) Y. Dubi and M. Di Ventra, Reviews of Modern Physics 83, 131 (2011).
* Benenti _et al._ (2017) G. Benenti, G. Casati, K. Saito, and R. Whitney, Physics Reports 694, 1 (2017).
* Breuer _et al._ (2002) H.-P. Breuer, F. Petruccione, _et al._ , _The theory of open quantum systems_ (Oxford University Press on Demand, 2002).
* Weiss (2012) U. Weiss, _Quantum Dissipative Systems_, 4th ed. (WORLD SCIENTIFIC, 2012).
* Wang and Shao (2019) H. Wang and J. Shao, The Journal of Physical Chemistry A 123, 1882 (2019).
* Lubich (2015) C. Lubich, Applied Mathematics Research eXpress 2015, 311 (2015).
* Orus (2014) R. Orus, Annals of Physics 349, 117 (2014), arXiv: 1306.2164.
* Prior _et al._ (2010) J. Prior, A. W. Chin, S. F. Huelga, and M. B. Plenio, Physical review letters 105, 050404 (2010).
* Prior _et al._ (2013) J. Prior, I. de Vega, A. W. Chin, S. F. Huelga, and M. B. Plenio, Physical Review A 87, 013428 (2013), arXiv: 1205.2897.
* Chin _et al._ (2013) A. Chin, J. Prior, R. Rosenbach, F. Caycedo-Soler, S. F. Huelga, and M. B. Plenio, Nature Physics 9, 113 (2013).
* Xie _et al._ (2019) X. Xie, Y. Liu, Y. Yao, U. Schollwöck, C. Liu, and H. Ma, The Journal of Chemical Physics 151, 224101 (2019).
* Alvertis _et al._ (2019) A. M. Alvertis, F. A. Y. N. Schröder, and A. W. Chin, The Journal of Chemical Physics 151 (2019), doi: 10.1063/1.5115239.
* Schröder _et al._ (2019) F. A. Y. N. Schröder, D. H. P. Turban, A. J. Musser, N. D. M. Hine, and A. W. Chin, Nature Communications 10, 1062 (2019).
* Tamascelli _et al._ (2019) D. Tamascelli, A. Smirne, J. Lim, S. F. Huelga, and M. B. Plenio, Physical Review Letters 123, 090402 (2019), arXiv: 1811.12418.
* Wilhelm _et al._ (2004) F. Wilhelm, S. Kleff, and J. Von Delft, Chemical physics 296, 345 (2004).
* Schulze and Kühn (2015) J. Schulze and O. Kühn, The Journal of Physical Chemistry B 119, 6211 (2015).
* Mendive-Tapia _et al._ (2018) D. Mendive-Tapia, E. Mangaud, T. Firmino, A. de la Lande, M. Desouter-Lecomte, H.-D. Meyer, and F. Gatti, The Journal of Physical Chemistry B 122, 126 (2018).
* Mukamel (1995) S. Mukamel, _Principles of nonlinear optical spectroscopy_ , Vol. 6 (Oxford university press New York, 1995).
* Gélinas _et al._ (2014) S. Gélinas, A. Rao, A. Kumar, S. L. Smith, A. W. Chin, J. Clark, T. S. van der Poll, G. C. Bazan, and R. H. Friend, Science 343, 512 (2014).
* Smith and Chin (2015) S. L. Smith and A. W. Chin, Physical Review B 91, 201302 (2015).
* Alvermann and Fehske (2009) A. Alvermann and H. Fehske, Physical review letters 102, 150601 (2009).
* Binder and Burghardt (2019) R. Binder and I. Burghardt, Faraday Discussions 221, 406 (2019).
* Tamascelli _et al._ (2018) D. Tamascelli, A. Smirne, S. F. Huelga, and M. B. Plenio, Physical review letters 120, 030402 (2018).
* Musser _et al._ (2015) A. J. Musser, M. Liebel, C. Schnedermann, T. Wende, T. B. Kehoe, A. Rao, and P. Kukura, Nature Physics 11, 352 (2015).
* Schnedermann _et al._ (2016) C. Schnedermann, J. M. Lim, T. Wende, A. S. Duarte, L. Ni, Q. Gu, A. Sadhanala, A. Rao, and P. Kukura, The journal of physical chemistry letters 7, 4854 (2016).
* Schnedermann _et al._ (2019) C. Schnedermann, A. M. Alvertis, T. Wende, S. Lukman, J. Feng, F. A. Schröder, D. H. Turban, J. Wu, N. D. Hine, N. C. Greenham, _et al._ , Nature communications 10, 1 (2019).
* de Vega and Bañuls (2015) I. de Vega and M.-C. Bañuls, Physical Review A 92, 052116 (2015).
* Chin _et al._ (2010) A. W. Chin, Á. Rivas, S. F. Huelga, and M. B. Plenio, Journal of Mathematical Physics 51, 092109 (2010).
* Schollwöck (2011) U. Schollwöck, Annals of Physics 326, 96 (2011).
* Lubich _et al._ (2015) C. Lubich, I. Oseledets, and B. Vandereycken, SIAM Journal on Numerical Analysis 53, 917 (2015), arXiv: 1407.2042.
* Paeckel _et al._ (2019) S. Paeckel, T. Köhler, A. Swoboda, S. R. Manmana, U. Schollwöck, and C. Hubig, Annals of Physics 411, 167998 (2019).
* Haegeman _et al._ (2016) J. Haegeman, C. Lubich, I. Oseledets, B. Vandereycken, and F. Verstraete, Physical Review B 94, 165116 (2016), arXiv: 1408.5056.
* Woods _et al._ (2015) M. Woods, M. Cramer, and M. Plenio, Physical Review Letters 115, 130401 (2015).
* Schröder and Chin (2016) F. A. Schröder and A. W. Chin, Physical Review B 93, 075105 (2016).
* Mahan (2000) G. D. Mahan, _Many-Particle Physics_ (Springer US, Boston, MA, 2000).
* De Vega and Alonso (2017) I. De Vega and D. Alonso, Reviews of Modern Physics 89, 015001 (2017).
* Silbey and Harris (1984) R. Silbey and R. A. Harris, The Journal of Chemical Physics 80, 2615 (1984).
* Guo _et al._ (2012) C. Guo, A. Weichselbaum, J. von Delft, and M. Vojta, Physical review letters 108, 160401 (2012).
* Marcus (1993) R. A. Marcus, Reviews of Modern Physics 65, 599 (1993).
* Zuehlsdorff _et al._ (2019) T. J. Zuehlsdorff, A. Montoya-Castillo, J. A. Napoli, T. E. Markland, and C. M. Isborn, The Journal of Chemical Physics 151, 074111 (2019).
* Womick _et al._ (2011) J. M. Womick, H. Liu, and A. M. Moran, The Journal of Physical Chemistry A 115, 2471 (2011).
* Kolli _et al._ (2012) A. Kolli, E. J. O’Reilly, G. D. Scholes, and A. Olaya-Castro, The Journal of chemical physics 137, 174109 (2012).
* Gaspard and Nagaoka (1999) P. Gaspard and M. Nagaoka, The Journal of chemical physics 111, 5668 (1999).
* Somoza _et al._ (2019) A. D. Somoza, O. Marty, J. Lim, S. F. Huelga, and M. B. Plenio, Physical Review Letters 123, 100502 (2019).
* Craig and Manolopoulos (2004) I. R. Craig and D. E. Manolopoulos, The Journal of chemical physics 121, 3368 (2004).
* Wertnik _et al._ (2018) M. Wertnik, A. Chin, F. Nori, and N. Lambert, The Journal of chemical physics 149, 084112 (2018).
* Herrera and Owrutsky (2020) F. Herrera and J. Owrutsky, The Journal of Chemical Physics 152, 100902 (2020).
* Memmi _et al._ (2017) H. Memmi, O. Benson, S. Sadofev, and S. Kalusniak, Physical review letters 118, 126802 (2017).
* Del Pino _et al._ (2018) J. Del Pino, F. A. Schröder, A. W. Chin, J. Feist, and F. J. Garcia-Vidal, Physical Review B 98, 165416 (2018).
## Author Contributions
AJD implemented the T-TEDOPA mapping in a bespoke 1TDVP code and performed the
numerical simulations. AJD and AWC wrote the manuscript. AWC oversaw the
project.
## Code
The codes used in this work are freely available for reasonable use at
https://github.com/angusdunnett/MPSDynamics.
## Funding
AJD is supported by the Ecole Doctorale 564 ‘Physique en Ile-de-France’. AWC
is partly supported by ANR project No. 195608/ACCEPT.
## VI Appendix
The chain mapping used in section II is based on the theory of orthogonal
polynomials. A polynomial of degree $n$ is defined by
$p_{n}(x)=\sum_{m=0}^{n}a_{m}x^{m}.$ (14)
The space of polynomials of degree $n$ is denoted $\mathbb{P}_{n}$ and is a
subset of the space of all polynomials $\mathbb{P}_{n}\subset\mathbb{P}$.
Given a measure $d\mu(x)$ which has finite moments of all orders on some
interval $[a,b]$, we may define the inner product of two polynomials
$\langle{p,q}\rangle_{\mu}=\int_{a}^{b}d\mu(x)p(x)q(x).$ (15)
This inner product gives rise to a unique set of orthonormal polynomials
$\\{\tilde{p}_{n}\in\mathbb{P}_{n},n=0,1,2,...\\}$ which all satisfy
$\langle\tilde{p}_{n},\tilde{p}_{m}\rangle=\delta_{n,m}.$ (16)
This set forms a complete basis for $\mathbb{P}$, and more specifically the
set $\\{\tilde{p}_{n}\in\mathbb{P}_{n},n=0,1,2,...m\\}$ is a complete basis
for $\bigcup_{r=1}^{m}\mathbb{P}_{r}$.
It is often useful to express the orthonormal polynomials in terms of the
orthogonal _monic_ polynomials $\pi_{n}(x)$ which are the unnormalized scalar
multiples of $\tilde{p}_{n}(x)$ whose leading coefficient is 1 ($a_{n}=1$)
$\tilde{p}_{n}(x)=\frac{\pi_{n}(x)}{||\pi_{n}||}.$ (17)
The key property of orthogonal polynomials for the construction of the chain
mapping is that they satisfy a three term recurrence relation
$\pi_{k+1}(x)=(x-\alpha_{k})\pi_{k}(x)-\beta_{k}\pi_{k-1}(x),$ (18)
where it can be easily shown that
$\alpha_{k}=\frac{\langle
x\pi_{k},\pi_{k}\rangle}{\langle\pi_{k},\pi_{k}\rangle},\beta_{k}=\frac{\langle\pi_{k},\pi_{k}\rangle}{\langle\pi_{k-1},\pi_{k-1}\rangle}.$
(19)
Now that we have defined the orthogonal polynomials we may use them to
construct the unitary transformation that will convert the star Hamiltonian of
Eq. 9 with
$H_{I}^{\text{ext}}=A_{S}\otimes\int_{-\infty}^{\infty}d\omega\sqrt{J_{\beta}(\omega)}(a_{\omega}+a_{\omega}^{\dagger}),H_{E}^{\text{ext}}=\int_{-\infty}^{\infty}d\omega\omega
a_{\omega}^{\dagger}a_{\omega},$ (20)
into the chain Hamiltonian of Eq. LABEL:eq:chain. The transformation is given
by
$c_{n}^{(\dagger)}=\int_{-\infty}^{\infty}d\omega\,U_{n}(\omega)\,a_{\omega}^{(\dagger)},$
(21)
where
$U_{n}(\omega)=\sqrt{J_{\beta}(\omega)}\tilde{p}_{n}(\omega)=\sqrt{J_{\beta}(\omega)}\frac{\pi_{n}(\omega)}{||\pi_{n}||},$
(22)
and the polynomials $\tilde{p}_{n}(\omega)$ are orthonormal with respect to
the measure $d\omega J_{\beta}(\omega)$. The unitarity of $U_{n}(\omega)$
follows immediately from the orthonormality of the polynomials.
Applying the above transformation to the interaction Hamiltonian we have
$H_{I}^{\text{ext}}=A_{S}\otimes\sum_{n=0}^{\infty}\int_{-\infty}^{\infty}d\omega
J_{\beta}(\omega)\frac{\pi_{n}(\omega)}{||\pi_{n}||}(c_{n}^{\dagger}+c_{n})$
(23)
For the zeroth order monic polynomial we have $\pi_{0}=1$ and so we may insert
this into the above expression
$H_{I}^{\text{ext}}=A_{S}\otimes\sum_{n=0}^{\infty}\int_{-\infty}^{\infty}d\omega
J_{\beta}(\omega)\frac{\pi_{n}(\omega)\pi_{0}}{||\pi_{n}||}(c_{n}^{\dagger}+c_{n}).$
(24)
Recognising the inner product in the above expression and making use of the
orthogonality of the polynomials we have
$H_{I}^{\text{ext}}=A_{S}\otimes\sum_{n=0}^{\infty}||\pi_{n}||\delta_{n,0}(c_{n}^{\dagger}+c_{n})=A_{S}\otimes||\pi_{0}||(c_{0}^{\dagger}+c_{0}),$
(25)
and thus, in the new basis, only one mode now couples to the system.
Now for the environment part of the Hamiltonian we have
$H_{E}^{\text{ext}}=\sum_{n,m=0}^{\infty}\int_{-\infty}^{\infty}d\omega
J_{\beta}(\omega)\omega\frac{\pi_{n}(\omega)\pi_{m}(\omega)}{||\pi_{n}||||\pi_{m}||}c_{n}^{\dagger}c_{m}.$
(26)
Substituting for $\omega\pi_{n}(\omega)$ from the three term recurrence
relation of Eq. 18 yields
$H_{E}^{\text{ext}}=\sum_{n,m=0}^{\infty}\int_{-\infty}^{\infty}d\omega\frac{J_{\beta}(\omega)}{||\pi_{n}||||\pi_{m}||}\Big{[}\pi_{n+1}(\omega)+\alpha_{n}\pi_{n}(\omega)+\beta_{n}\pi_{n-1}(\omega)\Big{]}\pi_{m}(\omega)c_{n}^{\dagger}c_{m}.$
(27)
Again, evaluating the inner products we have
$\begin{split}H_{E}^{\text{ext}}&=\sum_{n,m=0}^{\infty}\frac{1}{||\pi_{n}||}\Big{[}||\pi_{m}||\delta_{n+1,m}+\alpha_{n}||\pi_{m}||\delta_{n,m}+\beta_{n}||\pi_{m}||\delta_{n-1,m}\Big{]}c_{n}^{\dagger}c_{m}\\\
&=\sum_{n=0}^{\infty}\sqrt{\beta_{n+1}}c_{n}^{\dagger}c_{n+1}+\alpha_{n}c_{n}^{\dagger}c_{n}+\sqrt{\beta_{n}}c_{n}^{\dagger}c_{n-1},\end{split}$
(28)
where in the second line we have used the fact that
$\frac{||\pi_{n+1}||}{||\pi_{n}||}=\sqrt{\beta_{n+1}}.$ (29)
We thus arrive at the nearest-neighbour coupling Hamiltonian of Eq.
LABEL:eq:chain and are able to identify the chain coefficients as
$\begin{split}&\kappa=||\pi_{0}||,\\\ &\omega_{n+1}=\alpha_{n},\\\
&t_{n}=\sqrt{\beta_{n}}.\end{split}$ (30)
Note that in Eq. LABEL:eq:chain the chain sites are labeled starting from
$n=1$ and not $n=0$ as in Eq. 28. All that now remains in order to calculate
the chain coefficients for a particular spectral density $J_{\beta}(\omega)$
is to compute the recurrence coefficients, $\alpha_{n}$ and $\beta_{n}$; this
may be done iteratively using Eqs. 18 and 19, numerically evaluating the
inner-product integrals with a quadrature rule.
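A minimal sketch of this iterative computation (a Stieltjes-type procedure; the discretisation of the measure by quadrature nodes and all names are our assumptions, not the authors' released code):

```python
import numpy as np

def chain_coefficients(nodes, weights, n):
    """Recurrence coefficients alpha_k, beta_k (Eqs. 18-19) of the monic
    polynomials orthogonal w.r.t. the discrete measure sum_i w_i delta(x - x_i),
    obtained by iterating the three-term recurrence (Stieltjes procedure).
    Chain parameters follow as omega_{k+1} = alpha_k, t_k = sqrt(beta_k),
    and kappa = sqrt(beta_0), since ||pi_0||^2 = <pi_0, pi_0>."""
    alphas, betas = np.zeros(n), np.zeros(n)
    pi_prev = np.zeros_like(nodes)   # pi_{-1} = 0
    pi_curr = np.ones_like(nodes)    # pi_0 = 1
    norm_prev = 1.0
    for k in range(n):
        norm_curr = np.sum(weights * pi_curr**2)                # <pi_k, pi_k>
        alphas[k] = np.sum(weights * nodes * pi_curr**2) / norm_curr
        betas[k] = norm_curr / norm_prev                        # beta_0 = <pi_0, pi_0>
        # three-term recurrence: pi_{k+1} = (x - alpha_k) pi_k - beta_k pi_{k-1}
        pi_prev, pi_curr = pi_curr, (nodes - alphas[k]) * pi_curr - betas[k] * pi_prev
        norm_prev = norm_curr
    return alphas, betas
```

As a sanity check, for the Lebesgue measure on $[-1,1]$ (Gauss-Legendre nodes and weights) this reproduces the monic Legendre coefficients $\alpha_{k}=0$ and $\beta_{k}=k^{2}/(4k^{2}-1)$.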
# Lipschitz regularity for
degenerate elliptic integrals with $p,q$-growth
G. Cupini – P. Marcellini – E. Mascolo – A. Passarelli di Napoli
Giovanni Cupini: Dipartimento di Matematica, Università di Bologna
Piazza di Porta S.Donato 5, 40126 - Bologna, Italy [email protected]
Paolo Marcellini and Elvira Mascolo: Dipartimento di Matematica “U. Dini”,
Università di Firenze
Viale Morgagni 67/A, 50134 - Firenze, Italy [email protected]
[email protected] Antonia Passarelli di Napoli: Dipartimento di
Matematica e Appl. “R. Caccioppoli”
Università di Napoli “Federico II”
Via Cintia, 80126 Napoli, Italy [email protected]
###### Abstract.
We establish the local Lipschitz continuity and the higher differentiability
of vector-valued local minimizers of a class of energy integrals of the
Calculus of Variations. The main novelty is that we deal with possibly
degenerate energy densities with respect to the $x-$variable.
###### Key words and phrases:
Nonstandard growth conditions; $p,q$-growth; Degenerate ellipticity; Lipschitz
continuity.
###### 2010 Mathematics Subject Classification:
49N60, 35J50
Acknowledgements. The authors are members of GNAMPA (Gruppo Nazionale per
l’Analisi Matematica, la Probabilità e le loro Applicazioni) of INdAM
(Istituto Nazionale di Alta Matematica).
## 1\. Introduction
The paper deals with the regularity of minimizers of integral functionals of
the Calculus of Variations of the form
$F(u)=\int_{\Omega}f(x,Du)\,dx$ (1.1)
where $\Omega\subset\mathbb{R}^{n}$, $n\geq 2$, is a bounded open set,
$u:\Omega\rightarrow\mathbb{R}^{N}$, $N\geq 1$, is a Sobolev map. The main
feature of (1.1) is the possible degeneracy of the lagrangian $f(x,\xi)$ with
respect to the $x-$variable. We assume that the Carathéodory function
$f=f\left(x,\xi\right)$ is convex and of class $C^{2}$ with respect to
$\xi\in\mathbb{R}^{N\times n}$, with $f_{\xi\xi}\left(x,\xi\right)$, $f_{\xi
x}\left(x,\xi\right)$ also Carathéodory functions and $f(\cdot,0)\in
L^{1}(\Omega)$. We emphasize that the $N\times n$ matrix of the second
derivatives $f_{\xi\xi}\left(x,\xi\right)$ is not necessarily uniformly
elliptic and may degenerate at some $x\in\Omega$.
In the vector-valued case $N>1$ minimizers of functionals with general
structure may lack regularity, see [17],[48],[42], and it is natural to assume
a modulus-gradient dependence for the energy density; i.e. that there exists
$g=g(x,t):\Omega\times[0,+\infty)\rightarrow[0,+\infty)$ such that
$f(x,\xi)=g(x,|\xi|).$ (1.2)
Without loss of generality we can assume $g(x,0)=0$; indeed the minimizers of
$F$ are minimizers of $u\mapsto\int_{\Omega}\left(f(x,Du)-f(x,0)\right)\,dx$
too. Moreover, by (1.2) and the convexity of $f$, $g\left(x,t\right)$ is a
non-negative, convex and increasing function of $t\in\left[0,+\infty\right)$.
As far as the growth and the ellipticity assumptions are concerned, we assume
that there exist exponents $p,q$, nonnegative measurable functions $a(x),k(x)$
and a constant $L>0$ such that
$\left\\{\begin{array}[]{l}a\left(x\right)\,(1+|\xi|^{2})^{\frac{p-2}{2}}|\lambda|^{2}\leq\langle
f_{\xi\xi}(x,\xi)\lambda,\lambda\rangle\leq
L\,(1+|\xi|^{2})^{\frac{q-2}{2}}|\lambda|^{2},\quad 2\leq p\leq q,\\\
\left|f_{\xi x}(x,\xi)\right|\leq
k\left(x\right)(1+|\xi|^{2})^{\frac{q-1}{2}}\end{array}\right.$ (1.3)
for a.e. $x\in\Omega$ and for every $\xi,\lambda\in\mathbb{R}^{N\times n}$. We
allow the coefficient $a\left(x\right)$ to be zero, so that (1.3)1 is a
non-uniform ellipticity condition. As proved in Lemma 2.2, (1.3)1 implies the
following possibly degenerate $p,q-$growth conditions for $f$, for some
constant $c>0$,
$c\,a\left(x\right)(1+|\xi|^{2})^{\frac{p-2}{2}}|\xi|^{2}\leq f(x,\xi)\leq
L(1+|\xi|^{2})^{\frac{q}{2}},\;\;\;\;\text{a.e.
}x\in\Omega,\;\forall\;\xi\in\mathbb{R}^{N\times n}.$ (1.4)
Our main result concerns the local Lipschitz regularity and the higher
differentiability of the local minimizers of $F$.
###### Theorem 1.1.
Let the functional $F$ in (1.1) satisfy (1.2) and (1.3). Assume moreover that
$\frac{1}{a}\in L_{\mathrm{loc}}^{s}(\Omega),\qquad k\in
L_{\mathrm{loc}}^{r}(\Omega),$ (1.5)
with $r,s>n$ and
$\frac{q}{p}<\frac{s}{s+1}\left(1+\frac{1}{n}-\frac{1}{r}\right).$ (1.6)
If $u\in W_{\mathnormal{loc}}^{1,1}(\Omega)$ is a local minimizer of $F$, then
for every ball $B_{R_{0}}\Subset\Omega$ the following estimates
$\|Du\|_{L^{\infty}(B_{R_{0}/2})}\leq
C\mathcal{K}_{R_{0}}^{\vartheta}\left(\int_{B_{R_{0}}}\left(1+f(x,Du)\right)\,dx\right)^{\vartheta}$
(1.7)
$\int_{B_{R_{0}/2}}a(1+|Du|^{2})^{\frac{p-2}{2}}\left|D^{2}u\right|^{2}\,dx\leq
C\mathcal{K}_{R_{0}}^{\vartheta}\left(\int_{B_{R_{0}}}\left(1+f(x,Du)\right)\,dx\right)^{\vartheta},$
(1.8)
hold with the exponent $\vartheta$ depending on the data, the constant $C$
also depending on $R_{0}$ and where
$\mathcal{K}_{R_{0}}=1+\|a^{-1}\|_{L^{s}(B_{R_{0}})}\|k\|_{L^{r}(B_{R_{0}})}^{2}$.
It is well known that to get regularity under $p,q-$growth the exponents $q$
and $p$ cannot be too far apart; usually, the gap between $p$ and $q$ is
described by a condition relating $p,q$ and the dimension $n$. In our case we
take into account the possible degeneracy of $a(x)$ and the condition (1.3)2
on the mixed derivatives $f_{\xi x}$ in terms of a possibly unbounded
coefficient $k(x)$; then we deduce that the gap depends on $s$, the
summability exponent of $a^{-1}$ that “measures” how degenerate $a$ is,
and the exponent $r$ that tells us how far $k(x)$ is from being bounded. If
$s=r=\infty$ then (1.6) reduces to $\frac{q}{p}<1+\frac{1}{n}$ that is what
one expects, see [12] and for instance [39]. Moreover, if $s=\infty$ and
$n<r\leq+\infty$, then (1.6) reduces to
$\frac{q}{p}<1+\frac{1}{n}-\frac{1}{r}$ and we recover the result of [22].
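The interplay between the exponents can be made concrete with a small numerical check of condition (1.6); the helper below is purely illustrative (the naming is ours), with $r=s=\infty$ passed as `float('inf')`.

```python
def gap_condition(p, q, n, r, s):
    """Check the gap condition (1.6): q/p < s/(s+1) * (1 + 1/n - 1/r).
    For s = r = inf this reduces to the classical bound q/p < 1 + 1/n."""
    s_factor = 1.0 if s == float('inf') else s / (s + 1.0)
    return q / p < s_factor * (1.0 + 1.0 / n - 1.0 / r)
```

For instance, in dimension $n=3$ with $r=s=\infty$ the bound is $q/p<4/3$, and any finite $r$ or $s$ strictly shrinks the admissible gap.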
Motivated by applications to the theory of elasticity, recently Colombo and
Mingione [8],[9] (see also [2],[24],[18],[19]) studied the so-called double
phase integrals
$\int_{\Omega}|Du|^{p}+b(x)|Du|^{q}\,dx,\quad 1<p<q\,.$ (1.9)
The model case we have in mind here is different: we consider the degenerate
functional with non standard growth of the form
$I(u)=\int_{B_{1}(0)}a(x)(1+|Du|^{2})^{\frac{p}{2}}+b(x)(1+|Du|^{2})^{\frac{q}{2}}\,dx$
(1.10)
with $0\leq a(x)\leq b(x)\leq L$ for some $L>0$. The integrand of $I(u)$
satisfies (1.3)2 with
$k\left(x\right)=\left|Da\left(x\right)\right|+\left|Db\left(x\right)\right|$.
It is worth mentioning that in the literature $a(x)$ is usually assumed
positive and bounded away from zero, see e.g. [2],[8], which is not the case
here since $a(x)$ may vanish at some point. The counterpart is that we
consider the powers of $(1+|Du|^{2})^{\frac{1}{2}}$ instead of $|Du|$. We
notice that the regularity result of Theorem 1.1 is new also when $p=q\geq 2$,
for example for the energy integral
$F_{1}(u)=\int_{B_{1}(0)}a(x)(1+|Du|^{2})^{\frac{p}{2}}\,dx$ (1.11)
with $a(x)\geq 0$, $\frac{1}{a}\in L^{s}(\Omega)$ and $\left|Da\right|\in
L^{r}$ with $\frac{1}{s}+\frac{1}{r}<\frac{1}{n}$. As far as we know, the
results proposed here are the first approach to the study of the Lipschitz
continuity of the local minimizers in the setting of degenerate elliptic
integrals under $p,q-$growth.
As well known, weak solutions to the elliptic equation in divergence form of
the type
$-\mathrm{div}\left(A(x,Du)\right)=0\,\,\text{in}\,\,\Omega.$
are locally Lipschitz continuous provided the vector field
$A:\Omega\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ is differentiable with
respect to $\xi$ and satisfies the uniformly elliptic conditions
$\Lambda_{1}(1+|\xi|^{2})^{\frac{p-2}{2}}|\lambda|^{2}\leq\langle
A_{\xi}(x,\xi)\lambda,\lambda\rangle\leq\Lambda_{2}(1+|\xi|^{2})^{\frac{p-2}{2}}|\lambda|^{2}.$
Trudinger [49] started the study of the interior regularity of solutions to
linear elliptic equation of the form
$\sum_{i,j=1}^{n}\frac{\partial}{\partial
x_{i}}\left(a_{ij}(x)\,\frac{\partial u}{\partial{x_{j}}}(x)\right)=0\,,\qquad
x\in\Omega\subseteq\mathbb{R}^{n},$ (1.12)
where the measurable coefficients $a_{ij}$ satisfy the non-uniform condition
$\lambda(x)|\xi|^{2}\leq\sum_{i,j=1}^{n}a_{ij}(x)\xi_{i}\xi_{j}\leq
n^{2}\mu(x)|\xi|^{2}$ (1.13)
for a.e. $x\in\Omega$ and every $\xi\in\mathbb{R}^{n}$. Here $\lambda(x)$ is
the minimum eigenvalue of the symmetric matrix $A(x)=(a_{ij}(x))$ and
$\mu(x):=\sup_{ij}|a_{ij}|$. Trudinger proved that any weak solution of (1.12)
is locally bounded in $\Omega$, under the following integrability assumptions
on $\lambda$ and $\mu$
$\lambda^{-1}\in
L_{\mathrm{loc}}^{r}(\Omega)\quad\text{and}\quad\mu_{1}=\lambda^{-1}\mu^{2}\in
L_{\mathrm{loc}}^{\sigma}(\Omega)\quad\text{with
$\frac{1}{r}+\frac{1}{\sigma}<\frac{2}{n}$}.$ (1.14)
The equation (1.12) is usually called degenerate when $\lambda^{-1}\notin
L^{\infty}(\Omega)$, whereas it is called singular when $\mu\notin
L^{\infty}(\Omega)$. These names in this case refer to the degenerate and the
singular cases with respect to the $x-$variable, but in the mathematical
literature these names often refer to the gradient variable; this
happens for instance with the $p-$Laplacian operator
$-\mathrm{div}\left(\left|Du\right|^{p-2}Du\right)$. We do not study in this
paper the degenerate case with respect to the gradient variable, but we refer
for instance to the analysis made by Duzaar and Mingione [21], who studied an
$L^{\infty}-$gradient bound for solutions to non-homogeneous $p-$Laplacian
type systems and equations; see also Cianchi and Maz’ya [7] and the references
therein for the rich literature on the subject.
The result by Trudinger was extended in many settings and directions: firstly,
by Trudinger himself in [50] and later by Fabes, Kenig and Serapioni in [28];
Pingen in [47] dealt with systems. More recently for the regularity of
solutions and minimizers we refer to [3],[5],[10],[15],[16],[31]. For the
higher integrability of the gradient we refer to [32] (see also [6]). Very
recently Calderon-Zygmund’s estimates for the $p-$Laplace operator with
degenerate weights have been established in [1]. The literature concerning
non-uniformly elliptic problems is extensive; we refer the interested reader
to these works and the references therein.
The study of the Lipschitz regularity in the $p,q-$growth context started with
the papers by Marcellini [34],[35] and, since then, many and various
contributions to the subject have been provided, see the references in
[41],[39]. The vectorial homogeneous framework was considered in [36],[40] and
by Esposito, Leonetti and Mingione [26],[27]. The condition (1.3)2 for general
non-autonomous integrands $f=f(x,Du)$ was first introduced in
[22],[23],[24]. It is worth highlighting that, due to the $x-$dependence, the
study of regularity is significantly harder and the techniques more involved.
The research on this subject is intense, as confirmed by the many articles
recently published, see e.g. [11],[13],[20],[29],
[37],[38],[39],[44],[45],[46].
Let us briefly sketch the tools to get our regularity result. First, for
Lipschitz and higher differentiable minimizers, we prove a weighted
summability result for the second order derivatives of minimizers of
functionals with possibly degenerate energy densities, see Proposition 3.2.
Next in Theorem 3.3 we get an a-priori estimate for the $L^{\infty}$-norm of
the gradient. To establish the a-priori estimate we use Moser's iteration
method [43] for the gradient and the ideas of Trudinger [49]. An approximation
procedure allows us to conclude. Indeed, if $u$ is a local minimizer of
(1.1), we construct a sequence of suitable variational problems in a ball
$B_{R}\subset\subset\Omega$ with boundary value data $u$. In order to apply
the a-priori estimate to the minimizers of the approximating functionals we
prove a higher differentiability result (Theorem 4.1) for minimizers of the
class of functionals with $p,q-$growth studied in [22], where only the
Lipschitz continuity was proved. By applying the previous a-priori estimate to
the sequence of the solutions we obtain a uniform control in $L^{\infty}$ of
the gradient which allows us to transfer the local Lipschitz continuity property
to the original minimizer $u$.
Another difficulty due to the $x-$dependence of the energy density is that the
Lavrentiev phenomenon may occur. A local minimizer of $F$ is a function $u\in
W_{\mathrm{loc}}^{1,1}(\Omega)$ such that $f(x,Du)\in
L_{\mathrm{loc}}^{1}(\Omega)$ and
$\int_{\Omega}f(x,Du)\,dx\leq\int_{\Omega}f(x,Du+D\varphi)\,dx$
for every $\varphi\in C_{0}^{1}(\Omega)$. If $u$ is a local minimizer of the
functional $F$, by virtue of (1.4) we have that $a(x)|Du|^{p}\in
L_{\mathrm{loc}}^{1}(\Omega)$ and, by (1.5), $u\in
W_{\mathrm{loc}}^{1,\frac{ps}{s+1}}(\Omega)$ since
$\int_{B_{R}}|Du|^{\frac{ps}{s+1}}\,dx\leq\left(\int_{B_{R}}a|Du|^{p}\,dx\right)^{\frac{s}{s+1}}\left(\int_{B_{R}}\frac{1}{a^{s}}\,dx\right)^{\frac{1}{s+1}}<+\infty$
(1.15)
for every ball $B_{R}\subset\Omega$. Therefore in our context a-priori the
presence of the Lavrentiev phenomenon cannot be excluded. Indeed, due to the
growth assumptions on the energy density, the integral in (1.1) is well
defined if $u\in W^{1,q\frac{r}{r-1}}$, but a-priori this is not the case if
$u\in W^{1,\frac{ps}{s+1}}(\Omega)\setminus
W_{\mathrm{loc}}^{1,q\frac{r}{r-1}}(\Omega)$. However, as a consequence of
Theorem 1.1, under the stated assumptions (1.2),(1.3),(1.5),(1.6) the
Lavrentiev phenomenon for the integral functional $F$ in (1.1) cannot occur.
For the gap in the Lavrentiev phenomenon we refer to [51],[4],[27],[25].
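For completeness, the Hölder step behind estimate (1.15) uses the conjugate exponents $\frac{s+1}{s}$ and $s+1$:

```latex
\int_{B_R}|Du|^{\frac{ps}{s+1}}\,dx
=\int_{B_R}\left(a|Du|^{p}\right)^{\frac{s}{s+1}}a^{-\frac{s}{s+1}}\,dx
\leq\left(\int_{B_R}a|Du|^{p}\,dx\right)^{\frac{s}{s+1}}
\left(\int_{B_R}\frac{1}{a^{s}}\,dx\right)^{\frac{1}{s+1}},
```

both factors on the right being finite by (1.4) and (1.5).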
We conclude this introduction by observing that even in the one-dimensional
case the Lipschitz continuity of minimizers for non-uniformly elliptic
integrals is not obvious. Indeed, if we consider a minimizer $u$ to the one-
dimensional integral
$F\left(u\right)=\int_{-1}^{1}a\left(x\right)\left|u^{\prime}\left(x\right)\right|^{p}\,dx\,,\qquad
p>1,$ (1.16)
then the Euler’s first variation takes the form
$\int_{-1}^{1}a\left(x\right)p\left|u^{\prime}\left(x\right)\right|^{p-2}u^{\prime}\left(x\right)\varphi^{\prime}\left(x\right)\,dx\,=0,\;\;\;\forall\;\varphi\in
C_{0}^{1}\left(-1,1\right).$
This implies that the quantity
$a\left(x\right)\left|u^{\prime}\left(x\right)\right|^{p-2}u^{\prime}\left(x\right)$
is constant in $\left(-1,1\right)$; it is a nonzero constant, unless
$u\left(x\right)$ itself is constant in $\left(-1,1\right)$, a trivial case
that we do not consider here. In particular the sign of
$u^{\prime}\left(x\right)$ is constant and we get
$\left|u^{\prime}\left(x\right)\right|^{p-1}=\frac{c}{a\left(x\right)}\,,\;\;\;\text{a.e.}\;x\in\left(-1,1\right).$
Therefore if $a\left(x\right)$ vanishes somewhere in $\left(-1,1\right)$ then
$\left|u^{\prime}\left(x\right)\right|$ is unbounded (and vice versa),
independently of the exponent $p>1$. Thus for $n=1$ the local Lipschitz
regularity of the minimizers does not hold in general if the coefficient
$a\left(x\right)$ vanishes somewhere.
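To make the obstruction explicit, here is a worked instance under the stated assumptions: take $a(x)=|x|^{\alpha}$ with $0<\alpha<p-1$ and boundary data $u(\pm 1)=\pm 1$. Solving the Euler equation above gives

```latex
\left|u^{\prime}(x)\right|^{p-1}=\frac{c}{|x|^{\alpha}}
\quad\Longrightarrow\quad
u^{\prime}(x)=c^{\prime}\,|x|^{-\frac{\alpha}{p-1}},
\qquad
u(x)=\mathrm{sgn}(x)\,|x|^{1-\frac{\alpha}{p-1}},
```

with the normalisation $c^{\prime}=1-\frac{\alpha}{p-1}$ enforcing $u(\pm 1)=\pm 1$. The derivative $u^{\prime}$ is integrable, so the minimizer is well defined, yet it blows up at $x=0$: the minimizer fails to be Lipschitz.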
We can compare this one-dimensional fact with the general conditions
considered in Theorem 1.1. If
$a\left(x\right)=\left|x\right|^{\alpha}$ for some $\alpha\in\left(0,1\right)$,
then, taking into account the assumptions in (1.5), for the integral in (1.16)
we have
$k\left(x\right)=a^{\prime}\left(x\right)=\alpha\left|x\right|^{\alpha-2}x$
and
$\left\\{\begin{array}[]{ccccc}\frac{1}{a}\in
L_{\mathrm{loc}}^{s}\left(-1,1\right)&\Leftrightarrow&1-\alpha
s>0&\Leftrightarrow&\alpha<\frac{1}{s}\\\ k\left(x\right)=a^{\prime}\in
L_{\mathrm{loc}}^{r}\left(-1,1\right)&\Leftrightarrow&r\left(\alpha-1\right)>-1&\Leftrightarrow&\alpha>1-\frac{1}{r}.\end{array}\right.$
These conditions are compatible if and only if $1-\frac{1}{r}<\frac{1}{s}$.
Therefore, also in the one-dimensional case we have a counterexample to the
$L^{\infty}-$gradient bound in (1.7) if
$\frac{1}{r}+\frac{1}{s}>1\,.$ (1.17)
This is a condition that can be easily compared with the assumption (1.6) for
the validity of $L^{\infty}-$gradient bound (1.7) in the general
$n-$dimensional case. In fact, since $1\leq\frac{q}{p}$, (1.6) implies
$1<\frac{s}{s+1}\left(1+\frac{1}{n}-\frac{1}{r}\right)\;\;\;\;\Leftrightarrow\;\;\;\;\frac{1}{r}+\frac{1}{s}<\frac{1}{n}\,,$
which essentially is the complementary condition to (1.17) when $n=1$.
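The two integrability conditions above reduce to the elementary fact that $\int_{0}^{1}x^{e}\,dx<\infty$ if and only if $e>-1$. As a mechanical sanity check, the following sketch (the sample values of $\alpha$, $s$, $r$ are ours, chosen for illustration) tests both conditions in the compatible regime $\frac{1}{r}+\frac{1}{s}>1$ and in the incompatible one:

```python
# Integrability checks for the model coefficient a(x) = |x|^alpha near x = 0,
# using: integral_0^1 x^e dx < infinity  iff  e > -1.
def finite_near_zero(e):
    return e > -1.0

def conditions(alpha, s, r):
    # 1/a in L^s_loc: the integrand behaves like x^{-alpha*s}, i.e. alpha < 1/s
    inv_a_in_Ls = finite_near_zero(-alpha * s)
    # k = a' = alpha*|x|^{alpha-2}x, so |k|^r behaves like x^{r(alpha-1)},
    # i.e. alpha > 1 - 1/r
    k_in_Lr = finite_near_zero(r * (alpha - 1.0))
    return inv_a_in_Ls, k_in_Lr

# With r = s = 3/2 we have 1/r + 1/s = 4/3 > 1 (counterexample regime):
# alpha = 1/2 satisfies both conditions.
assert conditions(0.5, 1.5, 1.5) == (True, True)
# With r = s = 3 we have 1/r + 1/s = 2/3 < 1, and alpha = 1/2 fails both.
assert conditions(0.5, 3.0, 3.0) == (False, False)
```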
The plan of the paper is the following. In Section 2 we list some definitions
and preliminary results. In Section 3 we prove an a-priori estimate of the
$L^{\infty}$-norm of the gradient of local minimizers and a higher
differentiability result, see Theorem 3.3. In Section 4 we prove an estimate
for the second order derivatives of a minimizer of an auxiliary uniformly
elliptic functional, see Theorem 4.1. In the last section we complete the
proof of Theorem 1.1.
## 2\. Preliminary results
We shall denote by $C$ or $c$ a general positive constant that may vary on
different occasions, even within the same line of estimates. Relevant
dependencies will be suitably emphasized using parentheses or subscripts. In
what follows, $B(x,r)=B_{r}(x)=\\{y\in\mathbb{R}^{n}:\,\,|y-x|<r\\}$ will
denote the ball centered at $x$ of radius $r$. We shall omit the dependence on
the center and on the radius when no confusion arises.
To prove our higher differentiability result (see Theorem 4.1 below) we use
the finite difference operator. For a function
$u:\Omega\rightarrow\mathbb{R}^{k}$, with $\Omega$ an open subset of $\mathbb{R}^{n}$,
given $s\in\\{1,\ldots,n\\}$, we define
$\tau_{s,h}u(x):=u(x+he_{s})-u(x),\qquad x\in\Omega_{|h|},$ (2.1)
where $e_{s}$ is the unit vector in the $x_{s}$ direction, $h\in\mathbb{R}$
and
$\Omega_{|h|}:=\\{x\in\Omega\,:\,\mathrm{dist\,}(x,\partial\Omega)>|h|\\}.$
We now list the main properties of this operator.
* (i)
if $u\in W^{1,t}(\Omega)$, $1\leq t\leq\infty$, then $\tau_{s,h}u\in
W^{1,t}(\Omega_{|h|})$ and
$D_{i}(\tau_{s,h}u)=\tau_{s,h}(D_{i}u),$
* (ii)
if $f$ or $g$ has support in $\Omega_{|h|}$, then
$\int_{\Omega}f\tau_{s,h}g\,dx=\int_{\Omega}g\tau_{s,-h}f\,dx,$
* (iii)
if $u,u_{x_{s}}\in L^{t}(B_{R})$, $1\leq t<\infty$, and $0<\rho<R$, then for
every $h$, $|h|\leq R-\rho$,
$\int_{B_{\rho}}|\tau_{s,h}u(x)|^{t}\,dx\leq|h|^{t}\int_{B_{R}}|u_{x_{s}}(x)|^{t}\,dx,$
* (iv)
if $u\in L^{t}(B_{R})$, $1<t<\infty$, and for $0<\rho<R$ there exists $K>0$
such that for every $h$, $|h|<R-\rho$,
$\sum_{s=1}^{n}\int_{B_{\rho}}|\tau_{s,h}u(x)|^{t}\,dx\leq K|h|^{t},$ (2.2)
then letting $h$ go to $0$, $Du\in L^{t}(B_{\rho})$ and
$\|u_{x_{s}}\|_{L^{t}(B_{\rho})}^{t}\leq K$ for every $s\in\\{1,\ldots,n\\}$.
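Property (iii) lends itself to a direct numerical check; the following sketch (the choices $u=\sin$, $t=2$, and the radii are ours, for illustration only) compares the two sides of the inequality via midpoint quadrature on an interval:

```python
import math

def midpoint_integral(f, a, b, n=20000):
    # Simple midpoint rule, accurate enough for this smooth integrand.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

u = math.sin            # sample function, with u' = cos
R, rho, h, t = 2.0, 1.0, 0.1, 2.0

# left side: integral over B_rho of |tau_h u|^t
lhs = midpoint_integral(lambda x: abs(u(x + h) - u(x)) ** t, -rho, rho)
# right side: |h|^t times the integral over B_R of |u'|^t
rhs = abs(h) ** t * midpoint_integral(lambda x: abs(math.cos(x)) ** t, -R, R)
assert lhs <= rhs
```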
We recall the following estimate for the auxiliary function
$V_{p}(\xi):=\Bigl{(}1+|\xi|^{2}\Bigr{)}^{\frac{p-2}{4}}\xi,$ (2.3)
which is a convex function since $p\geq 2$ (see Step 2 in [33] and the
proof of [30, Lemma 8.3]).
###### Lemma 2.1.
Let $1<p<\infty$. There exists a constant $c=c(n,p)>0$ such that
$c^{-1}\Bigl{(}1+|\xi|^{2}+|\eta|^{2}\Bigr{)}^{\frac{p-2}{2}}\leq\frac{|V_{p}(\xi)-V_{p}(\eta)|^{2}}{|\xi-\eta|^{2}}\leq
c\Bigl{(}1+|\xi|^{2}+|\eta|^{2}\Bigr{)}^{\frac{p-2}{2}}$
for any $\xi$, $\eta\in\mathbb{R}^{n}$.
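Lemma 2.1 asserts a two-sided bound with a constant $c=c(n,p)$; as a sanity check, the following sketch (with $n=2$, $p=3$ chosen by us) samples the ratio $|V_{p}(\xi)-V_{p}(\eta)|^{2}/\big(|\xi-\eta|^{2}(1+|\xi|^{2}+|\eta|^{2})^{\frac{p-2}{2}}\big)$ at random and verifies that it stays within fixed positive bounds:

```python
import random

p = 3.0  # sample exponent p >= 2

def V(xi):
    # V_p(xi) = (1 + |xi|^2)^((p-2)/4) * xi
    w = (1.0 + sum(c * c for c in xi)) ** ((p - 2.0) / 4.0)
    return [w * c for c in xi]

def ratio(xi, eta):
    num = sum((a - b) ** 2 for a, b in zip(V(xi), V(eta)))
    den = sum((a - b) ** 2 for a, b in zip(xi, eta))
    scale = (1.0 + sum(c * c for c in xi)
             + sum(c * c for c in eta)) ** ((p - 2.0) / 2.0)
    return num / (den * scale)

random.seed(0)
samples = []
for _ in range(2000):
    xi = [random.uniform(-5, 5) for _ in range(2)]
    eta = [random.uniform(-5, 5) for _ in range(2)]
    if xi != eta:
        samples.append(ratio(xi, eta))
# the ratio is pinched between two positive constants, as the lemma predicts
assert 0.05 < min(samples) and max(samples) < 20.0
```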
In the next lemma we prove that (1.3)$_{1}$ implies the, possibly degenerate,
$p,q-$growth condition stated in (1.4).
###### Lemma 2.2.
Let $f=f(x,\xi)$ be convex and of class $C^{2}$ with respect to the
$\xi-$variable.
Assume (1.2) and
$a(x)\,(1+|\xi|^{2})^{\frac{p-2}{2}}|\lambda|^{2}\leq\langle
D_{\xi\xi}f(x,\xi)\lambda,\lambda\rangle\leq
b(x)\,(1+|\xi|^{2})^{\frac{q-2}{2}}|\lambda|^{2}$ (2.4)
for some exponents $2\leq p\leq q$ and nonnegative functions $a,b$. Then there
exists a constant $c$ such that
$c\,a(x)(1+|\xi|^{2})^{\frac{p-2}{2}}|\xi|^{2}\leq f(x,\xi)\leq
b(x)(1+|\xi|^{2})^{\frac{q}{2}}+f(x,0).$
###### Proof.
For $x\in\Omega$ and $t\geq 0$, let us set $\varphi(s)=g(x,st)$ for $s\in\mathbb{R}$,
where we recall that $g$ is linked to $f$ by (1.2). The assumptions on $f$ imply
that $\varphi\in C^{2}(\mathbb{R})$ and that $g_{t}$ is increasing in the
gradient variable $t\in[0,+\infty)$ with $g_{t}(x,0)=0$. Since
$\varphi^{\prime}(s)=g_{t}(x,st)\cdot
t\,,\;\;\;\;\;\varphi^{\prime\prime}(s)=g_{tt}(x,st)\cdot t^{2},$
Taylor's formula yields that there exists $\vartheta\in(0,1)$ such
that
$\varphi(1)=\varphi(0)+\varphi^{\prime}(0)+\frac{1}{2}\varphi^{\prime\prime}(\vartheta)\,.$
Recalling the definition of $\varphi$, we get
$g(x,t)=g(x,0)+g_{t}(x,0)\cdot t+\frac{1}{2}g_{tt}(x,\vartheta t)\cdot
t^{2}=g(x,0)+\frac{1}{2}g_{tt}(x,\vartheta t)\cdot t^{2}.$ (2.5)
Assumption (2.4) translates into
$a(x)\,(1+t^{2})^{\frac{p-2}{2}}\leq g_{tt}(x,t)\leq
b(x)\,(1+t^{2})^{\frac{q-2}{2}}.$ (2.6)
Inserting (2.6) in (2.5), we obtain
$\frac{a(x)}{2}\,(1+(\vartheta t)^{2})^{\frac{p-2}{2}}t^{2}+g(x,0)\leq
g(x,t)\leq g(x,0)+\frac{b(x)}{2}\,[1+(\vartheta t)^{2}]^{\frac{q-2}{2}}t^{2}.$
(2.7)
Note that, since $\vartheta<1$ and $q\geq 2$, the right hand side of (2.7) can be
controlled as follows:
$g(x,t)\leq g(x,0)+b(x)\,[1+(\vartheta t)^{2}]^{\frac{q-2}{2}}t^{2}\leq
g(x,0)+b(x)\,[1+t^{2}]^{\frac{q-2}{2}}t^{2}.$
Moreover since $g(x,0)\geq 0$ and $p\geq 2$, the left hand side of (2.7) can
be controlled from below as follows
$g(x,t)\geq\frac{a(x)}{2}\,(1+(\vartheta
t)^{2})^{\frac{p-2}{2}}t^{2}+g(x,0)\geq\frac{a(x)}{2}\,(1+(\vartheta
t)^{2})^{\frac{p-2}{2}}t^{2}$ $\geq\frac{a(x)}{2}\,(\vartheta^{2}+(\vartheta
t)^{2})^{\frac{p-2}{2}}t^{2}=\vartheta^{p-2}\frac{a(x)}{2}(1+t^{2})^{\frac{p-2}{2}}t^{2}.$
Combining the last two estimates and recalling that $f(x,\xi)=g(x,|\xi|)$, we
conclude that there exists a constant $c=c(\vartheta)$ such that
$c(\vartheta)a(x)(1+|\xi|^{2})^{\frac{p-2}{2}}|\xi|^{2}\leq f(x,\xi)\leq
b(x)\,(1+|\xi|^{2})^{\frac{q-2}{2}}|\xi|^{2}+f(x,0)$
and the conclusion follows. ∎
We end this preliminary section with a well-known property. The following
lemma has important applications in the so-called hole-filling method. Its
proof can be found for example in [30, Lemma 6.1].
###### Lemma 2.3.
Let $h:[r,R_{0}]\rightarrow\mathbb{R}$ be a nonnegative bounded function and
$0<\vartheta<1$, $A,B\geq 0$ and $\beta>0$. Assume that
$h(s)\leq\vartheta h(t)+\frac{A}{(t-s)^{\beta}}+B,$
for all $r\leq s<t\leq R_{0}$. Then
$h(r)\leq\frac{cA}{(R_{0}-r)^{\beta}}+cB,$
where $c=c(\vartheta,\beta)>0$.
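The proof of Lemma 2.3 iterates the assumed inequality along radii $t_{i}=r+(1-\lambda^{i})(R_{0}-r)$ with $\lambda\in(\vartheta^{1/\beta},1)$, so that the resulting series converges. The following sketch (our own discretisation, for illustration) computes the iterated bound and checks that it stabilises, i.e. the contribution of $h$ at the outer radius disappears in the limit:

```python
def hole_filling_bound(theta, beta, A, B, r, R0, h_R0, k=200):
    """Iterate h(t_i) <= theta*h(t_{i+1}) + A/(t_{i+1}-t_i)^beta + B along the
    radii t_i = r + (1 - lam**i)*(R0 - r), with lam chosen so theta/lam**beta < 1."""
    lam = (theta ** (1.0 / beta) + 1.0) / 2.0  # theta^(1/beta) < lam < 1
    total, fac = 0.0, 1.0
    for i in range(k):
        step = (R0 - r) * (lam ** i) * (1.0 - lam)   # t_{i+1} - t_i
        total += fac * (A / step ** beta + B)
        fac *= theta
    return total + fac * h_R0  # after k steps: h(r) <= this quantity

b1 = hole_filling_bound(0.5, 2.0, 1.0, 1.0, 0.5, 1.0, 100.0, k=100)
b2 = hole_filling_bound(0.5, 2.0, 1.0, 1.0, 0.5, 1.0, 100.0, k=200)
# the geometric series converges: the bound no longer depends on h(R0)
assert abs(b1 - b2) < 1e-6
```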
## 3\. The a-priori estimate
The main result in this section is an a-priori estimate of the
$L^{\infty}$-norm of the gradient of local minimizers of the functional $F$ in
(1.1) satisfying weaker assumptions than those in Theorem 1.1. Precisely, in
this section we consider the following growth conditions
$\left\\{\begin{array}[]{l}a(x)\,(1+|\xi|^{2})^{\frac{p-2}{2}}|\lambda|^{2}\leq\langle
f_{\xi\xi}(x,\xi)\lambda,\lambda\rangle\leq
b(x)\,(1+|\xi|^{2})^{\frac{q-2}{2}}|\lambda|^{2}\\\ |f_{\xi x}(x,\xi)|\leq
k(x)(1+|\xi|^{2})^{\frac{q-1}{2}},\end{array}\right.$ (3.1)
for a.e. $x\in\Omega$ and for every $\xi,\lambda\in\mathbb{R}^{N\times n}$.
Here, $a,b,k$ are non-negative measurable functions. We do not require $a,b\in
L^{\infty}$, but, in the main result of this section, see Theorem 3.3, we
assume the following summability properties:
$\frac{1}{a}\in L_{\mathrm{loc}}^{s}(\Omega),\qquad a\in
L_{\mathrm{loc}}^{\frac{rs}{2s+r}}(\Omega),\qquad b,\,k\in
L_{\mathrm{loc}}^{r}(\Omega),\qquad\text{with $r>n$}.$ (3.2)
Moreover, we assume (1.6). We use the following weighted Sobolev type
inequality, whose proof relies on Hölder's inequality, see e.g. [16].
###### Lemma 3.1.
Let $p\geq 2$, $s\geq 1$ and $w\in
W_{0}^{1,\frac{ps}{s+1}}(\Omega;\mathbb{R}^{N})$ ($w\in
W_{0}^{1,p}(\Omega;\mathbb{R}^{N})$ if $s=\infty$). Let
$\lambda:\Omega\to[0,+\infty)$ be a measurable function such that
$\lambda^{-1}\in L^{s}(\Omega)$. There exists a constant $c=c(n)$ such that
$\left(\int_{\Omega}|w|^{\sigma^{*}}\,dx\right)^{\frac{p}{\sigma^{*}}}\leq
c(n)\|\lambda^{-1}\|_{L^{s}(\Omega)}\int_{\Omega}\lambda|Dw|^{p}\,dx,$ (3.3)
where $\sigma=\frac{ps}{s+1}$ ($\sigma=p$ if $s=+\infty$).
In establishing the a-priori estimate, we need to deal with quantities that
involve the $L^{2}$-norm of the second derivatives of the minimizer weighted
with the function $a(x)$. The next result shows that a $W^{2,\frac{2s}{s+1}}$
assumption on the second derivatives implies that they belong to the weighted
space $L^{2}(a(x)dx)$. More precisely, we have
###### Proposition 3.2.
Consider the functional $F$ in (1.1) satisfying the assumption (3.1) with
$a,b\in L_{\mathrm{loc}}^{1}(\Omega),\,k\in
L_{\mathrm{loc}}^{\frac{2s}{s-1}}(\Omega),$ (3.4)
for some $s\geq 1$. If $u\in W_{\mathrm{loc}}^{1,\infty}(\Omega)\cap
W_{\mathrm{loc}}^{2,\frac{2s}{s+1}}(\Omega)$ is a local minimizer of $F$
then
$a(x)|D^{2}u|^{2}\in L_{\mathrm{loc}}^{1}(\Omega).$
###### Proof.
Since $u$ is a local minimizer of the functional $F$, it satisfies the
Euler system
$\int_{\Omega}\sum_{i,\alpha}f_{\xi_{i}^{\alpha}}(x,Du)\varphi_{x_{i}}^{\alpha}(x)\,dx=0\qquad\forall\varphi\in
C_{0}^{\infty}(\Omega;\mathbb{R}^{N}),$
and, using the second variation, for every $s=1,\ldots,n$ it holds
$\int_{\Omega}\left\\{\sum_{i,j,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)\varphi_{x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}+\sum_{i,\alpha}f_{\xi_{i}^{\alpha}x_{s}}(x,Du)\varphi_{x_{i}}^{\alpha}\right\\}\,dx=0\qquad\forall\varphi\in
C_{0}^{\infty}(\Omega;\mathbb{R}^{N}).$ (3.5)
Fix $s=1,\ldots,n$ and a cut off function $\eta\in C_{0}^{\infty}(\Omega)$, and
define the function
$\varphi^{\alpha}:=\eta^{4}u_{x_{s}}^{\alpha}\quad\alpha=1,\ldots,N.$
Thanks to our assumptions on the minimizer $u$, through a standard density
argument, we can use $\varphi$ as test function in the equation (3.5), thus
getting
$\displaystyle 0=$
$\displaystyle\int_{\Omega}4\eta^{3}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)\eta_{x_{i}}u_{x_{s}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
$\displaystyle+\int_{\Omega}\eta^{4}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
$\displaystyle+\int_{\Omega}4\eta^{3}\sum_{i,s,\alpha}f_{\xi_{i}^{\alpha}x_{s}}(x,Du)\eta_{x_{i}}u_{x_{s}}^{\alpha}\,dx$
$\displaystyle+\int_{\Omega}\eta^{4}\sum_{i,s,\alpha}f_{\xi_{i}^{\alpha}x_{s}}(x,Du)u_{x_{s}x_{i}}^{\alpha}\,dx$
$\displaystyle=:$ $\displaystyle J_{1}+J_{2}+J_{3}+J_{4}.$
By the Cauchy–Schwarz and Young inequalities and by virtue of the
second inequality of (3.1), we can estimate the integral $J_{1}$ as follows
$\displaystyle|J_{1}|$ $\displaystyle\leq$ $\displaystyle
4\int_{\Omega}\\!\\!\Bigg{\\{}\eta^{2}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)\eta_{x_{i}}u_{x_{s}}^{\alpha}\eta_{x_{j}}u_{x_{s}}^{\beta}\Bigg{\\}}^{\frac{1}{2}}\\!\\!\Bigg{\\{}\eta^{4}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\Bigg{\\}}^{\frac{1}{2}}$
$\displaystyle\leq$ $\displaystyle
C\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)(1+|Du|^{2})^{\frac{q}{2}}\,dx$
$\displaystyle\qquad+\frac{1}{2}\int_{\Omega}\eta^{4}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx.$
Moreover, by the last inequality in (3.1) we obtain
$\displaystyle|J_{3}|$ $\displaystyle\leq$ $\displaystyle
4\int_{\Omega}\eta^{3}k(x)(1+|Du|^{2})^{\frac{q-1}{2}}\sum_{i,s,\alpha}|\eta_{x_{i}}u_{x_{s}}^{\alpha}|\,dx$
$\displaystyle\leq$ $\displaystyle
4\int_{\Omega}\eta^{3}|D\eta|k(x)(1+|Du|^{2})^{\frac{q}{2}}\,dx,$
and also
$\displaystyle|J_{4}|$ $\displaystyle\leq$
$\displaystyle\int_{\Omega}\eta^{4}k(x)(1+|Du|^{2})^{\frac{q-1}{2}}|D^{2}u|\,dx.$
Therefore we get
$\displaystyle\int_{\Omega}\eta^{4}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx\leq\frac{1}{2}\int_{\Omega}\eta^{4}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
$\displaystyle+C\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)(1+|Du|^{2})^{\frac{q}{2}}\,dx+4\int_{\Omega}\eta^{3}|D\eta|k(x)(1+|Du|^{2})^{\frac{q}{2}}\,dx$
$\displaystyle+\int_{\Omega}\eta^{4}k(x)(1+|Du|^{2})^{\frac{q-1}{2}}|D^{2}u|\,dx.$
Reabsorbing the first integral in the right hand side by the left hand side we
obtain
$\displaystyle\int_{\Omega}\eta^{4}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
$\displaystyle\leq$ $\displaystyle
C\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)(1+|Du|^{2})^{\frac{q}{2}}\,dx+4\int_{\Omega}\eta^{3}|D\eta|k(x)(1+|Du|^{2})^{\frac{q}{2}}\,dx$
$\displaystyle+\int_{\Omega}\eta^{4}k(x)(1+|Du|^{2})^{\frac{q-1}{2}}|D^{2}u|\,dx.$
By the ellipticity assumption in (3.1) and since $u\in
W^{1,\infty}_{\mathrm{loc}}(\Omega)$ we get
$\displaystyle\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2}{2}}|D^{2}u|^{2}\,dx$
$\displaystyle\leq$ $\displaystyle
C\|1+|Du|\|_{L^{\infty}(\mathrm{supp}\eta)}^{q}\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)\,dx+C\|1+|Du|\|_{L^{\infty}(\mathrm{supp}\eta)}^{q}\int_{\Omega}\eta^{3}|D\eta|k(x)\,dx
$\displaystyle+C\|1+|Du|\|_{L^{\infty}(\mathrm{supp}\eta)}^{q-1}\int_{\Omega}\eta^{4}k(x)|D^{2}u|\,dx.$
Hölder’s inequality yields
$\displaystyle\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2}{2}}|D^{2}u|^{2}\,dx$
(3.6) $\displaystyle\leq$ $\displaystyle
C\|1+|Du|\|_{L^{\infty}(\mathrm{supp}\eta)}^{q}\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)\,dx+C\|1+|Du|\|_{L^{\infty}(\mathrm{supp}\eta)}^{q}\int_{\Omega}\eta^{3}|D\eta|k(x)\,dx
$\displaystyle+C\|1+|Du|\|_{L^{\infty}(\mathrm{supp}\eta)}^{q-1}\left(\int_{\Omega}\eta^{4}k^{\frac{2s}{s-1}}\,dx\right)^{\frac{s-1}{2s}}\left(\int_{\Omega}\eta^{4}|D^{2}u|^{\frac{2s}{s+1}}\,dx\right)^{\frac{s+1}{2s}}.$
Since $k\in L^{\frac{2s}{s-1}}_{\mathrm{loc}}(\Omega)$ and $u\in
W^{2,\frac{2s}{s+1}}_{\mathrm{loc}}(\Omega)$, estimate (3.6) implies
that
$a(x)|D^{2}u|^{2}\in L^{1}_{\mathrm{loc}}(\Omega).$
∎
We are now ready to establish the main result of this section.
###### Theorem 3.3.
Consider the functional $F$ in (1.1) satisfying the assumptions (3.1), (3.2),
(1.2) and (1.6). If $u\in W_{\mathrm{loc}}^{1,\infty}(\Omega)\cap
W_{\mathrm{loc}}^{2,{\frac{2s}{s+1}}}(\Omega)$ is a local minimizer of $F$
then for every ball $B_{R_{0}}\Subset\Omega$
$\|Du\|_{L^{\infty}(B_{R_{0}/2})}\leq
C\mathcal{K}_{R_{0}}^{\vartheta}\left(\int_{B_{R_{0}}}\left(1+f(x,Du)\right)\,dx\right)^{\vartheta}$
(3.7)
and
$\int_{B_{\rho}}a(1+|Du|^{2})^{\frac{p-2}{2}}\left|D^{2}u\right|^{2}\,dx\leq
c\left({\int_{B_{R_{0}}}}(1+f(x,Du))\,dx\right)^{\vartheta},$ (3.8)
hold for any $\rho<\frac{R_{0}}{2}$. Here
$\mathcal{K}_{R_{0}}=1+\|a^{-1}\|_{L^{s}(B_{R_{0}})}\|k+b\|_{L^{r}(B_{R_{0}})}^{2}+\|a\|_{L^{\frac{rs}{2s+r}}(B_{R_{0}})},$
$\vartheta>0$ depends on the data, $C$ depends also on $R_{0}$, and
$c$ depends also on $\rho$ and $\mathcal{K}_{R_{0}}$.
###### Proof.
Since $u$ is a local minimizer of the functional $F$, it satisfies the
Euler system
$\int_{\Omega}\sum_{i,\alpha}f_{\xi_{i}^{\alpha}}(x,Du)\varphi_{x_{i}}^{\alpha}(x)\,dx=0\qquad\forall\varphi\in
C_{0}^{\infty}(\Omega;\mathbb{R}^{N}),$
and, using the second variation, for every $s=1,\ldots,n$ it holds
$\int_{\Omega}\left\\{\sum_{i,j,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)\varphi_{x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}+\sum_{i,\alpha}f_{\xi_{i}^{\alpha}x_{s}}(x,Du)\varphi_{x_{i}}^{\alpha}\right\\}\,dx=0\qquad\forall\varphi\in
C_{0}^{\infty}(\Omega;\mathbb{R}^{N}).$ (3.9)
Fix $s=1,\ldots,n$, a cut off function $\eta\in C_{0}^{\infty}(\Omega)$ and
define for any $\gamma\geq 0$ the function
$\varphi^{\alpha}:=\eta^{4}u_{x_{s}}^{\alpha}(1+|Du|^{2})^{\frac{\gamma}{2}}\quad\alpha=1,\ldots,N.$
One can easily check that
$\displaystyle\varphi^{\alpha}_{x_{i}}=$ $\displaystyle
4\eta^{3}\eta_{x_{i}}u_{x_{s}}^{\alpha}(1+|Du|^{2})^{\frac{\gamma}{2}}+\eta^{4}u_{x_{s}x_{i}}^{\alpha}(1+|Du|^{2})^{\frac{\gamma}{2}}+\gamma\eta^{4}u_{x_{s}}^{\alpha}(1+|Du|^{2})^{\frac{\gamma-2}{2}}|Du|\big{(}|Du|\big{)}_{x_{i}}.$
Thanks to our assumptions on the minimizer $u$, through a standard density
argument, we can use $\varphi$ as test function in the equation (3.9), thus
getting
$\displaystyle 0=$
$\displaystyle\int_{\Omega}4\eta^{3}(1+|Du|^{2})^{\frac{\gamma}{2}}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)\eta_{x_{i}}u_{x_{s}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
$\displaystyle+\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma}{2}}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
$\displaystyle+\gamma\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma-2}{2}}|Du|\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}}^{\alpha}u_{x_{s}x_{j}}^{\beta}(|Du|)_{x_{i}}\,dx$
$\displaystyle+\int_{\Omega}4\eta^{3}(1+|Du|^{2})^{\frac{\gamma}{2}}\sum_{i,s,\alpha}f_{\xi_{i}^{\alpha}x_{s}}(x,Du)\eta_{x_{i}}u_{x_{s}}^{\alpha}\,dx$
$\displaystyle+\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma}{2}}\sum_{i,s,\alpha}f_{\xi_{i}^{\alpha}x_{s}}(x,Du)u_{x_{s}x_{i}}^{\alpha}\,dx$
$\displaystyle+\gamma\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma-2}{2}}|Du|\sum_{i,s,\alpha}f_{\xi_{i}^{\alpha}x_{s}}(x,Du)u_{x_{s}}^{\alpha}\big{(}|Du|\big{)}_{x_{i}}\,dx$
$\displaystyle=:$ $\displaystyle I_{1}+I_{2}+I_{3}+I_{4}+I_{5}+I_{6}.$ (3.10)
Estimate of $I_{1}$
By the Cauchy–Schwarz and Young inequalities and by virtue of the
second inequality in (3.1), we can estimate the integral $I_{1}$ as follows
$\displaystyle|I_{1}|$ $\displaystyle\leq$ $\displaystyle
4\int_{\Omega}\\!\\!(1+|Du|^{2})^{\frac{\gamma}{2}}\Bigg{\\{}\\!\\!\eta^{2}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)\eta_{x_{i}}u_{x_{s}}^{\alpha}\eta_{x_{j}}u_{x_{s}}^{\beta}\Bigg{\\}}^{\frac{1}{2}}\\!\\!\Bigg{\\{}\eta^{4}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\Bigg{\\}}^{\frac{1}{2}}$
(3.11) $\displaystyle\leq$ $\displaystyle
C(\varepsilon)\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx$
$\displaystyle\qquad+\varepsilon\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma}{2}}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx,$
where $\varepsilon>0$ will be chosen later.
Estimate of $I_{3}$
Since
$f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,\xi)=\left(\frac{g_{tt}(x,|\xi|)}{|\xi|^{2}}-\frac{g_{t}(x,|\xi|)}{|\xi|^{3}}\right)\xi_{i}^{\alpha}\xi_{j}^{\beta}+\frac{g_{t}(x,|\xi|)}{|\xi|}\delta_{ij}\delta^{\alpha\beta}$
and
$(|Du|)_{x_{i}}=\frac{1}{|Du|}\sum_{\alpha,s}u_{x_{i}x_{s}}^{\alpha}u_{x_{s}}^{\alpha}$
(3.12)
then
$\displaystyle\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}}^{\alpha}u_{x_{s}x_{j}}^{\beta}(|Du|)_{x_{i}}$
$\displaystyle=$
$\displaystyle\left(\frac{g_{tt}(x,|Du|)}{|Du|^{2}}-\frac{g_{t}(x,|Du|)}{|Du|^{3}}\right)\sum_{i,j,s,\alpha,\beta}u_{x_{s}}^{\alpha}u_{x_{s}x_{j}}^{\beta}u_{x_{i}}^{\alpha}u_{x_{j}}^{\beta}(|Du|)_{x_{i}}$
$\displaystyle+\frac{g_{t}(x,|Du|)}{|Du|}\sum_{i,s,\alpha}u_{x_{s}}^{\alpha}u_{x_{s}x_{i}}^{\alpha}(|Du|)_{x_{i}}$
$\displaystyle=$
$\displaystyle\left(\frac{g_{tt}(x,|Du|)}{|Du|}-\frac{g_{t}(x,|Du|)}{|Du|^{2}}\right)\sum_{\alpha}\left(\sum_{i}u_{x_{i}}^{\alpha}(|Du|)_{x_{i}}\right)^{2}$
$\displaystyle+g_{t}(x,|Du|)|D(|Du|)|^{2}.$ (3.13)
Thus,
$\displaystyle
I_{3}=\gamma\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma-2}{2}}|Du|$
$\displaystyle\Big{\\{}\Big{(}\frac{g_{tt}(x,|Du|)}{|Du|}-\frac{g_{t}(x,|Du|)}{|Du|^{2}}\Big{)}\sum_{\alpha}\left(\sum_{i}u_{x_{i}}^{\alpha}(|Du|)_{x_{i}}\right)^{2}$
$\displaystyle+g_{t}(x,|Du|)|D(|Du|)|^{2}\Big{\\}}\,dx.$
Using the Cauchy–Schwarz inequality, i.e.
$\sum_{\alpha}\left(\sum_{i}u_{x_{i}}^{\alpha}(|Du|)_{x_{i}}\right)^{2}\leq|Du|^{2}|D(|Du|)|^{2}$
and observing that
$g_{t}(x,|Du|)\geq 0,$
we conclude
$I_{3}\geq\gamma\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma-2}{2}}|Du|\frac{g_{tt}(x,|Du|)}{|Du|}\sum_{\alpha}\left(\sum_{i}u_{x_{i}}^{\alpha}(|Du|)_{x_{i}}\right)^{2}\,dx\geq
0.$ (3.14)
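The pointwise Cauchy–Schwarz inequality invoked above, $\sum_{\alpha}\big(\sum_{i}u_{x_{i}}^{\alpha}(|Du|)_{x_{i}}\big)^{2}\leq|Du|^{2}|D(|Du|)|^{2}$, is just the bound $|Mg|^{2}\leq|M|^{2}|g|^{2}$ for a matrix $M$ and a vector $g$; a quick randomized sanity check (the dimensions and samples are ours, for illustration):

```python
import random

random.seed(1)
N, n = 3, 4
# U stands for the matrix Du (rows indexed by alpha, columns by i)
U = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(N)]
# g stands for the vector D(|Du|)
g = [random.uniform(-1, 1) for _ in range(n)]

# sum_alpha ( sum_i U[alpha][i]*g[i] )^2  <=  |U|_F^2 * |g|^2
lhs = sum(sum(U[a][i] * g[i] for i in range(n)) ** 2 for a in range(N))
rhs = (sum(U[a][i] ** 2 for a in range(N) for i in range(n))
       * sum(x * x for x in g))
assert lhs <= rhs + 1e-12
```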
Estimate of $I_{4}$
By using the last inequality in (3.1) we obtain
$\displaystyle|I_{4}|$ $\displaystyle\leq$ $\displaystyle
4\int_{\Omega}\eta^{3}k(x)(1+|Du|^{2})^{\frac{q-1+\gamma}{2}}\sum_{i,s,\alpha}|\eta_{x_{i}}u_{x_{s}}^{\alpha}|\,dx$
(3.15) $\displaystyle\leq$ $\displaystyle
4\int_{\Omega}\eta^{3}|D\eta|k(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx.$
Estimate of $I_{5}$
Using the last inequality in (3.1) and Young’s inequality we have that
$\displaystyle|I_{5}|$ $\displaystyle\leq$
$\displaystyle\int_{\Omega}\eta^{4}k(x)(1+|Du|^{2})^{\frac{q-1+\gamma}{2}}|D^{2}u|\,dx$
(3.16) $\displaystyle\leq$
$\displaystyle\sigma\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}\,dx$
$\displaystyle+C_{\sigma}\int_{\Omega}\eta^{4}\frac{k^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p+\gamma}{2}}\,dx,$
where $\sigma\in(0,1)$ will be chosen later and $a$ is the function appearing
in (3.1).
Estimate of $I_{6}$
Using the last inequality in (3.1) and (3.12), we get
$\displaystyle|I_{6}|$ $\displaystyle\leq$
$\displaystyle\gamma\int_{\Omega}\eta^{4}k(x)(1+|Du|^{2})^{\frac{q-1+\gamma}{2}}|D(|Du|)|\,dx$
(3.17) $\displaystyle\leq$
$\displaystyle\gamma\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{q-1+\gamma}{2}}k(x)|D^{2}u|\,dx$
$\displaystyle\leq$
$\displaystyle\sigma\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}\,dx$
$\displaystyle+C_{\sigma}\gamma^{2}\int_{\Omega}\eta^{4}\frac{k^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p+\gamma}{2}}\,dx,$
where we used Young’s inequality again.
The equality (3.10) can be written as
$I_{2}+I_{3}=-I_{1}-I_{4}-I_{5}-I_{6}\,,$
hence, by virtue of (3.14), we get
$I_{2}\leq|I_{1}|+|I_{4}|+|I_{5}|+|I_{6}|\,$
and therefore, recalling the estimates (3.11), (3.15), (3.16) and (3.17), we
obtain
$\displaystyle\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma}{2}}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
(3.18) $\displaystyle\leq$
$\displaystyle\varepsilon\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma}{2}}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
$\displaystyle+4\int_{\Omega}\eta^{3}|D\eta|k(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx$
$\displaystyle+2\sigma\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}\,dx$
$\displaystyle+C_{\sigma}(1+\gamma^{2})\int_{\Omega}\eta^{4}\frac{k^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p+\gamma}{2}}\,dx$
$\displaystyle+C_{\varepsilon}\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx.$
Choosing $\varepsilon=\frac{1}{2}$, we can reabsorb the first integral in the
right hand side by the left hand side thus getting
$\displaystyle\int_{\Omega}\eta^{4}(1+|Du|^{2})^{\frac{\gamma}{2}}\sum_{i,j,s,\alpha,\beta}f_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x,Du)u_{x_{s}x_{i}}^{\alpha}u_{x_{s}x_{j}}^{\beta}\,dx$
(3.19) $\displaystyle\leq$ $\displaystyle
4\sigma\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}\,dx$
$\displaystyle+C\int_{\Omega}\eta^{3}|D\eta|k(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx+C_{\sigma}(1+\gamma^{2})\int_{\Omega}\eta^{4}\frac{k^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p+\gamma}{2}}\,dx$
$\displaystyle+C\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx.$
Now, using the ellipticity condition in (3.1) to estimate the left hand side
of (3.19), we get
$\displaystyle
c_{2}\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}\,dx$
$\displaystyle\leq$ $\displaystyle
4\sigma\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}\,dx$
$\displaystyle+C\int_{\Omega}\eta^{3}|D\eta|k(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx+C_{\sigma}(1+\gamma^{2})\int_{\Omega}\eta^{4}\frac{k^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p+\gamma}{2}}\,dx$
$\displaystyle+C\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx.$
We claim that $k\in L^{\frac{2s}{s-1}}_{\mathrm{loc}}(\Omega)$. Since by
assumption (3.2), $k\in L^{r}_{\mathrm{loc}}(\Omega)$, we need to prove that
$\frac{2s}{s-1}\leq r$, which is equivalent to $\frac{2}{r}+\frac{1}{s}\leq 1$.
This holds true because, by (1.6) and $q\geq p$, we get
$\frac{s}{s+1}\left(1+\frac{1}{n}-\frac{1}{r}\right)>1\Leftrightarrow\frac{n}{r}+\frac{n}{s}<1$
and we conclude because $n\geq 2$. Therefore, since by assumption $u\in
W^{1,\infty}_{\mathrm{loc}}(\Omega)\cap
W^{2,\frac{2s}{s+1}}_{\mathrm{loc}}(\Omega)$, we can use Proposition 3.2, that
implies that the first integral in the right hand side of previous estimate is
finite. By choosing $\sigma=\frac{c_{2}}{8}$, we can reabsorb the first
integral in the right hand side by the left hand side thus getting
$\displaystyle\int_{\Omega}\eta^{4}a(x)(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}\,dx$
(3.20) $\displaystyle\leq$ $\displaystyle
C\int_{\Omega}\eta^{3}|D\eta|k(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx+C(1+\gamma^{2})\int_{\Omega}\eta^{4}\frac{k^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p+\gamma}{2}}\,dx$
$\displaystyle+C\int_{\Omega}\eta^{2}|D\eta|^{2}b(x)(1+|Du|^{2})^{\frac{q+\gamma}{2}}\,dx$
$\displaystyle\leq$ $\displaystyle
C\int_{\Omega}\left(\eta^{2}|D\eta|^{2}+|D\eta|^{4}\right)a(x)(1+|Du|^{2})^{\frac{p+\gamma}{2}}\,dx$
$\displaystyle\quad+C(\gamma+1)^{2}\int_{\Omega}\eta^{4}\frac{k^{2}(x)+b^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p+\gamma}{2}}\,dx,$
where we used Young’s inequality again. Now, we note that
$\eta^{4}a(x)\left|D\big{(}(1+|Du|^{2})^{\frac{p+\gamma}{4}}\big{)}\right|^{2}\leq
c(p+\gamma)^{2}a(x)\eta^{4}(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}$
and so, fixing $\frac{R_{0}}{2}\leq\rho<t^{\prime}<t<R<R_{0}$ with $R_{0}$ such
that $B_{R_{0}}\Subset\Omega$, and choosing $\eta\in C_{0}^{\infty}(B_{t})$ a
cut off function between $B_{t^{\prime}}$ and $B_{t}$, by the assumption
$a^{-1}\in L^{s}_{\mathrm{loc}}(\Omega)$ we can use the Sobolev type inequality
of Lemma 3.1 with $w=\eta^{2}(1+|Du|^{2})^{\frac{p+\gamma}{4}}$, $\lambda=a$ and
$p=2$, thus obtaining
$\displaystyle\left(\int_{B_{t}}\left(\eta^{2}(1+|Du|^{2})^{\frac{p+\gamma}{4}}\right)^{\left(\frac{2s}{s+1}\right)^{*}}\,dx\right)^{\frac{2}{\left(\frac{2s}{s+1}\right)^{*}}}\leq\frac{c(n)}{(t-t^{\prime})^{2}}\int_{B_{t}}a(x)(1+|Du|^{2})^{\frac{p+\gamma}{2}}\,dx$
$\displaystyle\qquad\qquad+c(n)(p+\gamma)^{2}\int_{B_{t}}\eta^{4}a\,(1+|Du|^{2})^{\frac{p-2+\gamma}{2}}|D^{2}u|^{2}\,dx,$
with a constant $c(n)$ depending only on $n$.
Using (3.20) to estimate the last integral in the previous inequality, we obtain
$\displaystyle\left(\int_{B_{t}}\eta^{\frac{4ns}{n(s+1)-2s}}(1+|Du|^{2})^{\frac{(p+\gamma)ns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}$
$\displaystyle\leq$ $\displaystyle
c\left(\frac{(p+\gamma)^{2}}{(t-t^{\prime})^{2}}+\frac{(p+\gamma)^{2}}{(t-t^{\prime})^{4}}\right)\int_{B_{t}}a(x)(1+|Du|^{2})^{\frac{p+\gamma}{2}}\,dx$
$\displaystyle\qquad+c(p+\gamma)^{4}\int_{B_{t}}\frac{k^{2}(x)+b^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p+\gamma}{2}}\,dx$
$\displaystyle\leq$ $\displaystyle
c\left(\frac{(p+\gamma)^{2}}{(t-t^{\prime})^{2}}+\frac{(p+\gamma)^{2}}{(t-t^{\prime})^{4}}\right)\int_{B_{t}}a(x)(1+|Du|^{2})^{\frac{p+\gamma}{2}}\,dx$
$\displaystyle\quad+c(p+\gamma)^{4}\left(\int_{B_{t}}\frac{1}{a^{s}}\,dx\right)^{\frac{1}{s}}\left(\int_{B_{t}}(k^{r}+b^{r})\,dx\right)^{\frac{2}{r}}$
$\displaystyle\qquad\times\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(2q-p+\gamma)rs}{2(rs-2s-r)}}dx\right)^{\frac{rs-2s-r}{rs}},$
where we used assumptions (3.2) and Hölder’s inequality with exponents $s$,
$\frac{r}{2}$ and $\frac{rs}{rs-2s-r}$.
Using the properties of $\eta$ we obtain
$\displaystyle\left(\int_{B_{t^{\prime}}}(1+|Du|^{2})^{\frac{(p+\gamma)ns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}$
$\displaystyle\leq$ $\displaystyle
c(p+\gamma)^{4}\|a^{-1}\|_{L^{s}(B_{R_{0}})}\|k+b\|_{L^{r}(B_{R_{0}})}^{2}\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(2q-p+\gamma)rs}{2(rs-2s-r)}}dx\right)^{\frac{rs-2s-r}{rs}}$
$\displaystyle\qquad+c\left(\frac{(p+\gamma)^{2}}{(t-t^{\prime})^{2}}+\frac{(p+\gamma)^{2}}{(t-t^{\prime})^{4}}\right)\int_{B_{t}}a(x)(1+|Du|^{2})^{\frac{p+\gamma}{2}}\,dx$
$\displaystyle\leq$ $\displaystyle
c(p+\gamma)^{4}\|a^{-1}\|_{L^{s}(B_{R_{0}})}\|k+b\|_{L^{r}(B_{R_{0}})}^{2}\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(2q-p+\gamma)rs}{2(rs-2s-r)}}dx\right)^{\frac{rs-2s-r}{rs}}$
$\displaystyle\qquad+c\left(\frac{(p+\gamma)^{2}}{(t-t^{\prime})^{2}}+\frac{(p+\gamma)^{2}}{(t-t^{\prime})^{4}}\right)\|a\|_{L^{\frac{rs}{2s+r}}(B_{R_{0}})}\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(p+\gamma)rs}{2(rs-2s-r)}}\,dx\right)^{\frac{rs-2s-r}{rs}},$
where we used the assumption $a\in
L_{\mathrm{loc}}^{\frac{rs}{2s+r}}(\Omega)$. Setting
$\mathcal{K}_{R_{0}}=1+\|a^{-1}\|_{L^{s}(B_{R_{0}})}\|k+b\|_{L^{r}(B_{R_{0}})}^{2}+\|a\|_{L^{\frac{rs}{2s+r}}(B_{R_{0}})}$
(3.21)
and assuming without loss of generality that $t-t^{\prime}<1$, we can write
the previous estimate as follows
$\displaystyle\left(\int_{B_{t^{\prime}}}(1+|Du|^{2})^{\frac{(p+\gamma)ns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}$
$\displaystyle\leq
c(p+\gamma)^{4}\mathcal{K}_{R_{0}}\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(2q-p+\gamma)rs}{2(rs-2s-r)}}dx\right)^{\frac{rs-2s-r}{rs}}$
$\displaystyle\qquad+c(p+\gamma)^{2}\frac{\mathcal{K}_{R_{0}}}{(t-t^{\prime})^{4}}\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(p+\gamma)rs}{2(rs-2s-r)}}\,dx\right)^{\frac{rs-2s-r}{rs}},$
and, using the a-priori assumption $u\in W_{\mathrm{loc}}^{1,\infty}(\Omega)$,
we get
$\displaystyle\left(\int_{B_{t^{\prime}}}(1+|Du|^{2})^{\frac{(p+\gamma)ns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}$
$\displaystyle\leq
c(p+\gamma)^{4}\mathcal{K}_{R_{0}}\left(\|Du\|_{L^{\infty}(B_{R})}^{2(q-p)}+\frac{1}{(t-t^{\prime})^{4}}\right)\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(p+\gamma)rs}{2(rs-2s-r)}}dx\right)^{\frac{rs-2s-r}{rs}}.$
(3.22)
Setting now
$m=\frac{rs}{rs-2s-r}$
and noting that
$\frac{ns}{n(s+1)-2s}=\frac{1}{2}\left(\frac{2s}{s+1}\right)^{*}=:\frac{2_{s}^{*}}{2}$
we can write (3.22) as follows
$\displaystyle\left(\int_{B_{t^{\prime}}}\left((1+|Du|^{2})^{\frac{(p+\gamma)m}{2}}\right)^{\frac{2_{s}^{*}}{2m}}\,dx\right)^{\frac{2m}{2_{s}^{*}}}$
(3.23) $\displaystyle\leq$ $\displaystyle
c(p+\gamma)^{4m}\mathcal{K}^{m}_{R_{0}}\frac{\|Du\|^{2(q-p)m}_{L^{\infty}(B_{R})}}{(t-t^{\prime})^{4m}}\int_{B_{t}}(1+|Du|^{2})^{\frac{(p+\gamma)m}{2}}dx,$
where, without loss of generality, we supposed
$\|Du\|^{2(q-p)m}_{L^{\infty}(B_{R})}\geq 1$. Define now the decreasing
sequence of radii by setting
$\rho_{i}=\rho+\frac{R-\rho}{2^{i}}$
and the increasing sequence of exponents
$p_{0}=pm,\qquad{p_{i}}={p_{i-1}}\left(\frac{2_{s}^{*}}{2m}\right)=p_{0}\left(\frac{2_{s}^{*}}{2m}\right)^{i}.$
As we will prove below, the right hand side of (3.23) is finite for
$\gamma=0$. Then for every $\rho<\rho_{i+1}<\rho_{i}<R$, we may iterate it on
the concentric balls $B_{\rho_{i}}$ with exponents $p_{i}$, thus obtaining
$\displaystyle\left(\int_{B_{\rho_{i+1}}}(1+|Du|^{2})^{\frac{p_{i+1}}{2}}\,dx\right)^{\frac{1}{p_{i+1}}}$
(3.24) $\displaystyle\leq$
$\displaystyle\displaystyle{\prod_{j=0}^{i}}\left(C^{m}\mathcal{K}^{m}_{R_{0}}{\frac{p_{j}^{4m}\|Du\|^{2m(q-p)}_{L^{\infty}(B_{R})}}{(\rho_{j}-\rho_{j+1})^{4m}}}\right)^{\frac{1}{p_{j}}}\left(\int_{B_{R}}(1+|Du|^{2})^{\frac{p_{0}}{2}}\,dx\right)^{\frac{1}{p_{0}}}$
$\displaystyle=$
$\displaystyle\displaystyle{\prod_{j=0}^{i}}\left(C^{m}\mathcal{K}^{m}_{R_{0}}{\frac{4^{jm}p_{j}^{4m}\|Du\|^{2(q-p)m}_{L^{\infty}(B_{R})}}{(R-\rho)^{4m}}}\right)^{\frac{1}{p_{j}}}\left(\int_{B_{R}}(1+|Du|^{2})^{\frac{p_{0}}{2}}\,dx\right)^{\frac{1}{p_{0}}}$
$\displaystyle=$
$\displaystyle\displaystyle{\prod_{j=0}^{i}}\left(4^{jm}p_{j}^{4m}\right)^{\frac{1}{p_{j}}}\displaystyle{\prod_{j=0}^{i}}\left({\frac{C^{m}\mathcal{K}^{m}_{R_{0}}\|Du\|^{2(q-p)m}_{L^{\infty}(B_{R})}}{(R-\rho)^{4m}}}\right)^{\frac{1}{p_{j}}}\left(\int_{B_{R}}(1+|Du|^{2})^{\frac{p_{0}}{2}}\,dx\right)^{\frac{1}{p_{0}}}.$
Since
$\displaystyle{\prod_{j=0}^{i}}\left(4^{jm}p_{j}^{4m}\right)^{\frac{1}{p_{j}}}=\exp\left(\sum_{j=0}^{i}\frac{1}{p_{j}}\log(4^{jm}p_{j}^{4m})\right)\leq\exp\left(\sum_{j=0}^{+\infty}\frac{1}{p_{j}}\log(4^{jm}p_{j}^{4m})\right)\leq
c(n,r)$
and
$\displaystyle\displaystyle{\prod_{j=0}^{i}}\left(\frac{C^{m}\mathcal{K}^{m}_{R_{0}}\|Du\|^{2(q-p)m}_{L^{\infty}(B_{R})}}{{(R-\rho)^{4m}}}\right)^{\frac{1}{p_{j}}}=\left(\frac{C^{m}\mathcal{K}^{m}_{R_{0}}\|Du\|^{2(q-p)m}_{L^{\infty}(B_{R})}}{{(R-\rho)^{4m}}}\right)^{\sum_{j=0}^{i}\frac{1}{p_{j}}}$
$\displaystyle\leq$
$\displaystyle\left(\frac{C^{m}\mathcal{K}^{m}_{R_{0}}\|Du\|^{2(q-p)m}_{L^{\infty}(B_{R})}}{{(R-\rho)^{4m}}}\right)^{\sum_{j=0}^{+\infty}\frac{1}{p_{j}}}=\left(\frac{C\mathcal{K}_{R_{0}}\|Du\|^{2(q-p)}_{L^{\infty}(B_{R})}}{{(R-\rho)^{4}}}\right)^{\frac{2^{*}_{s}}{p(2^{*}_{s}-2m)}}$
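For the reader's convenience: since $p_{j}=p_{0}\left(\frac{2^{*}_{s}}{2m}\right)^{j}$ with $p_{0}=pm$, the series of reciprocal exponents is geometric,

```latex
\sum_{j=0}^{+\infty}\frac{1}{p_{j}}
  =\frac{1}{p_{0}}\sum_{j=0}^{+\infty}\left(\frac{2m}{2^{*}_{s}}\right)^{j}
  =\frac{1}{pm}\cdot\frac{2^{*}_{s}}{2^{*}_{s}-2m}
  =\frac{2^{*}_{s}}{pm\,(2^{*}_{s}-2m)},
```

so that $m\sum_{j\geq 0}\frac{1}{p_{j}}=\frac{2^{*}_{s}}{p(2^{*}_{s}-2m)}$, which is exactly the exponent appearing in the last display.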
we can let $i\to\infty$ in (3.24) thus getting
$\displaystyle\|Du\|_{L^{\infty}(B_{\rho})}\leq
C(n,r,p)\left(\frac{\mathcal{K}_{R_{0}}}{{(R-\rho)^{4}}}\right)^{\frac{2^{*}_{s}}{p(2^{*}_{s}-2m)}}\|Du\|^{\frac{2(q-p)2^{*}_{s}}{p(2^{*}_{s}-2m)}}_{L^{\infty}(B_{R})}\left(\int_{B_{R}}(1+|Du|^{2})^{\frac{pm}{2}}\,dx\right)^{\frac{1}{pm}},$
where we used that $p_{0}=pm$. Since assumption (1.6) implies that
$\frac{2(q-p)2^{*}_{s}}{p(2^{*}_{s}-2m)}<1,$ (3.25)
we can use Young’s inequality with exponents
$\frac{p(2^{*}_{s}-2m)}{2(q-p)2^{*}_{s}}>1\qquad\text{and}\qquad\frac{p(2^{*}_{s}-2m)}{p(2^{*}_{s}-2m)-2(q-p)2^{*}_{s}}$
to deduce that
$\displaystyle\|Du\|_{L^{\infty}(B_{\rho})}\leq\frac{1}{2}\|Du\|_{L^{\infty}(B_{R})}+C(n,r,p,s)\left(\frac{\mathcal{K}_{R_{0}}}{{(R-\rho)^{4}}}\right)^{\vartheta}\left(\int_{B_{R}}(1+|Du|^{2})^{\frac{pm}{2}}\,dx\right)^{\varsigma},$
(3.26)
with $\vartheta=\vartheta(p,q,n,s)$ and $\varsigma=\varsigma(p,q,n,s)$.
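For completeness, Young's inequality is used here in its standard $\varepsilon$-form: for conjugate exponents $\alpha,\beta>1$ with $\frac{1}{\alpha}+\frac{1}{\beta}=1$,

```latex
ab\leq\varepsilon\,a^{\alpha}+c(\varepsilon,\alpha)\,b^{\beta}
\qquad\text{for all }a,b\geq 0,\ \varepsilon>0,
```

with $\varepsilon$ chosen so that the term $\|Du\|_{L^{\infty}(B_{R})}$ enters (3.26) with coefficient $\frac{1}{2}$.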
We now estimate the last integral. By definition of $m$ and by the assumption
on $s$, i.e., $s>\frac{nr}{r-n}$, we get
$m<\frac{ns}{n(s+1)-2s}.$
Thus, by Hölder’s inequality,
$\int_{B_{R}}(1+|Du|^{2})^{\frac{pm}{2}}\,dx\leq
c(R_{0},n,r,s)\left(\int_{B_{R}}(1+|Du|^{2})^{\frac{pns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{r(n(s+1)-2s)}{n(rs-2s-r)}}.$
(3.27)
This last integral can be estimated by using (3.22) with $\gamma=0$. Indeed,
let us re-define $t^{\prime},t$ and $\eta$ as follows: consider $R\leq
t^{\prime}<t\leq 2R-\rho\leq R_{0}$ and $\eta$ a cut off function, $\eta\equiv
1$ on $B_{t^{\prime}}$ and $\mathrm{supp\,}\eta\subset B_{t}$. By (3.22) with
$\gamma=0$,
$\displaystyle\left(\int_{B_{t^{\prime}}}(1+|Du|^{2})^{\frac{pns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}$
(3.28) $\displaystyle\leq$ $\displaystyle
c\left(\frac{p^{2}}{(t-t^{\prime})^{2}}+\frac{p^{2}}{(t-t^{\prime})^{4}}\right)\int_{B_{R_{0}}}a(x)(1+|Du|^{2})^{\frac{p}{2}}\,dx$
$\displaystyle\qquad+cp^{4}\mathcal{K}_{B_{R_{0}}}\left(\int_{B_{R_{0}}}(k^{r}+b^{r})\,dx\right)^{\frac{2}{r}}$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\times\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(2q-p)rs}{2(rs-2s-r)}}dx\right)^{\frac{rs-2s-r}{rs}}.$
If we denote
$\tau:=\frac{(2q-p)rs}{rs-2s-r},\quad\tau_{1}:=\frac{nps}{n(s+1)-2s},\quad\tau_{2}:=\frac{ps}{s+1},$
by (1.6) and $s>\frac{rn}{r-n}$, we get
$\frac{\tau}{\tau_{1}}<1<\frac{\tau}{\tau_{2}}.$
Therefore there exists $\theta\in(0,1)$ such that
$1=\theta\frac{\tau}{\tau_{1}}+(1-\theta)\frac{\tau}{\tau_{2}}.$
The precise value of $\theta$ is
$\theta=\frac{ns(qr-pr+p)+qrn}{rs(2q-p)}.$ (3.29)
By Hölder’s inequality with exponents $\frac{\tau_{1}}{\theta\tau}$ and
$\frac{\tau_{2}}{(1-\theta)\tau}$ we get
$\displaystyle\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{(2q-p)rs}{2(rs-2s-r)}}dx\right)^{\frac{rs-2s-r}{rs}}=\left(\int_{B_{t}}(1+|Du|^{2})^{\theta\frac{\tau}{2}+(1-\theta)\frac{\tau}{2}}dx\right)^{\frac{2q-p}{\tau}}$
$\displaystyle\leq\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{\tau_{1}}{2}}dx\right)^{\frac{(2q-p)\theta}{\tau_{1}}}\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{\tau_{2}}{2}}dx\right)^{\frac{(2q-p)(1-\theta)}{\tau_{2}}}.$
Hence, we can use the inequality above to estimate the last integral of (3.28)
to deduce that
$\displaystyle\left(\int_{B_{t^{\prime}}}(1+|Du|^{2})^{\frac{pns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}$
(3.30) $\displaystyle\leq$ $\displaystyle
c\left(\frac{p^{2}}{(t-t^{\prime})^{2}}+\frac{p^{2}}{(t-t^{\prime})^{4}}\right)\int_{B_{R_{0}}}a(x)(1+|Du|^{2})^{\frac{p}{2}}\,dx$
$\displaystyle+C\mathcal{K}_{B_{R_{0}}}\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{ps}{2(s+1)}}\,dx\right)^{\frac{(1-\theta)(2q-p)(s+1)}{ps}}$
$\displaystyle\qquad\times\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{pns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{\theta(ns+n-2s)(2q-p)}{nps}}.$
Note that, again by (1.6) and (3.29), we have
$\frac{\theta(2q-p)}{p}<1.$
We can use Young’s inequality in the last term of (3.30) with exponents
$\frac{p}{p-\theta(2q-p)}$ and $\frac{p}{\theta(2q-p)}$ to obtain that for
every $\sigma<1$
$\displaystyle\left(\int_{B_{t^{\prime}}}(1+|Du|^{2})^{\frac{pns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}$
$\displaystyle\leq$ $\displaystyle
C\left(\frac{p^{2}}{(t-t^{\prime})^{2}}+\frac{p^{2}}{(t-t^{\prime})^{4}}\right)\int_{B_{R_{0}}}a(x)(1+|Du|^{2})^{\frac{p}{2}}\,dx$
$\displaystyle+C_{\sigma}\mathcal{K}_{B_{R_{0}}}^{\frac{p}{p-\theta(2q-p)}}\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{ps}{2(s+1)}}\,dx\right)^{\frac{(1-\theta)(2q-p)}{p-\theta(2q-p)}\frac{s+1}{s}}$
$\displaystyle\qquad+\sigma\left(\int_{B_{t}}(1+|Du|^{2})^{\frac{pns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}.$
By applying Lemma 2.3, and noting that $2R-\rho-R=R-\rho$, we conclude that
$\displaystyle\left(\int_{B_{R}}(1+|Du|^{2})^{\frac{pns}{2(n(s+1)-2s)}}\,dx\right)^{\frac{n(s+1)-2s}{ns}}$
$\displaystyle\leq
C\left(\frac{p^{2}}{(R-\rho)^{2}}+\frac{p^{2}}{(R-\rho)^{4}}\right)\int_{B_{R_{0}}}a(x)(1+|Du|^{2})^{\frac{p}{2}}\,dx$
$\displaystyle+C\mathcal{K}_{B_{R_{0}}}^{\frac{p}{p-\theta(2q-p)}}\left(\int_{B_{R_{0}}}(1+|Du|^{2})^{\frac{ps}{2(s+1)}}\,dx\right)^{\frac{(1-\theta)(2q-p)}{p-\theta(2q-p)}\frac{s+1}{s}}.$
(3.31)
Collecting (3.27) and (3.31) we obtain
$\displaystyle\int_{B_{R}}(1+|Du|^{2})^{\frac{pm}{2}}\,dx$ $\displaystyle\leq$
$\displaystyle
C\left(\frac{p^{2}}{(R-\rho)^{2}}+\frac{p^{2}}{(R-\rho)^{4}}\right)\int_{B_{R_{0}}}a(x)(1+|Du|^{2})^{\frac{p}{2}}\,dx$
(3.32)
$\displaystyle+C\mathcal{K}_{B_{R_{0}}}^{\frac{p}{p-\theta(2q-p)}}\left(\int_{B_{R_{0}}}(1+|Du|^{2})^{\frac{ps}{2(s+1)}}\,dx\right)^{\frac{(1-\theta)(2q-p)}{p-\theta(2q-p)}\frac{s+1}{s}}.$
Notice that the right hand side is finite, because $u$ is a local minimizer
and (1.4) and (1.15) hold. This inequality, together with (3.26), implies
$\displaystyle\|Du\|_{L^{\infty}(B_{\rho})}\leq$
$\displaystyle\frac{1}{2}\|Du\|_{L^{\infty}(B_{R})}+C\left(\frac{\mathcal{K}_{B_{R_{0}}}}{(R-\rho)^{8}}\right)^{\theta}\left(\int_{B_{R_{0}}}a(x)(1+|Du|^{2})^{\frac{p}{2}}\,dx\right)^{\tilde{\theta}}$
$\displaystyle+C\left(\frac{\mathcal{K}_{B_{R_{0}}}}{(R-\rho)^{8}}\right)^{\theta}\left(\int_{B_{R_{0}}}(1+|Du|^{2})^{\frac{ps}{2(s+1)}}\,dx\right)^{\tilde{\varsigma}}$
with the constant $C$ depending on the data. Applying Lemma 2.3 we conclude
the proof of estimate (3.7). Now, we write the estimate (3.20) for $\gamma=0$
and for a cut off function $\eta\in C_{0}^{\infty}(B_{\frac{R}{2}})$, $\eta=1$
on $B_{\rho}$ for some $\rho<\frac{R}{2}$. This yields
$\displaystyle\int_{B_{\rho}}a(x)(1+|Du|^{2})^{\frac{p-2}{2}}|D^{2}u|^{2}\,dx$
$\displaystyle\leq$ $\displaystyle
C(R)\int_{B_{\frac{R}{2}}}a(x)(1+|Du|^{2})^{\frac{p}{2}}\,dx+C\int_{B_{\frac{R}{2}}}\frac{k^{2}(x)+b^{2}(x)}{a(x)}(1+|Du|^{2})^{\frac{2q-p}{2}}\,dx$
$\displaystyle\leq$ $\displaystyle
C(R)\int_{B_{\frac{R}{2}}}f(x,Du)\,dx+C\|1+|Du|\|_{L^{\infty}(B_{\frac{R}{2}})}^{2q-p}\int_{B_{\frac{R}{2}}}\frac{k^{2}(x)+b^{2}(x)}{a(x)}\,dx$
$\displaystyle\leq$ $\displaystyle C(R)\int_{B_{\frac{R}{2}}}f(x,Du)\,dx$
$\displaystyle\quad+C(R)\|1+|Du|\|_{L^{\infty}(B_{\frac{R}{2}})}^{2q-p}\left(\int_{B_{\frac{R}{2}}}(k^{r}(x)+b^{r}(x))\,dx\right)^{\frac{2}{r}}\left(\int_{B_{\frac{R}{2}}}\frac{1}{a^{s}(x)}\,dx\right)^{\frac{1}{s}},$
where we used Hölder’s inequality, since $\frac{1}{s}+\frac{2}{r}<1$ by
assumptions. Using (3.7) to estimate the $L^{\infty}$ norm of $|Du|$ and
recalling the definition of $\mathcal{K}_{R_{0}}$ at (3.21), we get
$\int_{B_{\rho}}a(1+|Du|^{2})^{\frac{p-2}{2}}|D^{2}u|^{2}\,dx\leq
c\left(\int_{B_{R}}1+f(x,Du)\,dx\right)^{\tilde{\varrho}},$
i.e. (3.8), with $c$ depending on $p,r,s,n,\rho,R,\mathcal{K}_{R_{0}}$. ∎
## 4\. An auxiliary functional: higher differentiability estimate
Consider the functional
$H(v)=\int_{\Omega}h(x,Dv)\,dx$
where $\Omega\subset\mathbb{R}^{n}$, $n\geq 2$, is an open set and
$h:\Omega\times\mathbb{R}^{N\times n}\rightarrow[0,+\infty)$ is a Carathéodory
function, convex and of class $C^{2}$ with respect to the second variable. We
assume that there exists
$\tilde{h}:\Omega\times[0,+\infty)\rightarrow[0,+\infty)$, increasing in the
last variable such that
$h(x,\xi)=\tilde{h}(x,|\xi|).$ (4.1)
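As an illustration (not part of the assumptions), a model integrand satisfying the radial structure (4.1) is

```latex
h(x,\xi)=a(x)\,(1+|\xi|^{2})^{\frac{p}{2}},\qquad
\tilde h(x,t)=a(x)\,(1+t^{2})^{\frac{p}{2}},
```

which also fulfils the growth condition (4.2) below provided $\ell\leq a(x)\leq L_{1}$.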
Moreover, assume that there exist exponents $p,q$ with $1<p<q$ and constants
$\ell,\nu,L_{1},L_{2}$ such that
$\ell(1+|\xi|^{2})^{\frac{p}{2}}\leq h(x,\xi)\leq
L_{1}\,(1+|\xi|^{2})^{\frac{q}{2}}$ (4.2)
for a.e. $x\in\Omega$ and for every $\xi\in\mathbb{R}^{N\times n}$. We assume
that $h$ is of class $C^{2}$ with respect to the $\xi-$variable, and that the
following conditions hold
$\nu\,(1+|\xi|^{2})^{\frac{p-2}{2}}|\lambda|^{2}\leq\langle
h_{\xi\xi}(x,\xi)\lambda,\lambda\rangle\leq
L_{2}\,(1+|\xi|^{2})^{\frac{q-2}{2}}|\lambda|^{2}$ (4.3)
for a.e. $x\in\Omega$ and for every $\xi,\lambda\in\mathbb{R}^{N\times n}$.
Moreover, we assume that there exists a non-negative function $k\in
L_{\mathrm{loc}}^{r}(\Omega)$ such that
$|D_{\xi x}h(x,\xi)|\leq k(x)(1+|\xi|^{2})^{\frac{q-1}{2}}$ (4.4)
for a.e. $x\in\Omega$ and for every $\xi\in\mathbb{R}^{N\times n}$.
The following is a higher differentiability result for minimizers of $H$,
which, by the result in [22], are locally Lipschitz continuous.
###### Theorem 4.1.
Let $v\in W^{1,\infty}_{\mathrm{loc}}(\Omega)$ be a local minimizer of the
functional $H$. Assume (4.1)–(4.4) for a couple of exponents $p,q$ such that
$\frac{q}{p}<1+\frac{1}{n}-\frac{1}{r}.$ (4.5)
Then $v\in W^{2,2}_{\mathrm{loc}}(\Omega)$ and, writing $u=v$ for short, the following estimate holds
$\displaystyle\int_{B_{\rho}}|DV_{p}(Du)|^{2}\,dx$ $\displaystyle\leq$
$\displaystyle\frac{c}{(R-\rho)^{2}}\|1+|Du|\|^{2q-p}_{\infty}\int_{B_{2R}}|Du|^{2}\,dx+c\|1+|Du|\|^{2q-p}_{\infty}\int_{B_{R}}k^{2}(x)\,dx$
$\displaystyle+c\|1+|Du|\|^{q-1}_{\infty}\left(\int_{B_{R}}k^{\frac{p}{p-1}}(x)\,dx\right)^{\frac{p-1}{p}}\left(\int_{B_{2R}}|Du|^{p}\,dx\right)^{\frac{1}{p}}$
for every ball $B_{\rho}\subset B_{R}\subset B_{2R}\Subset\Omega.$
###### Proof.
Since $v$ is a local minimizer of the functional $H$, it satisfies the
Euler system
$\int_{\Omega}\sum_{i,\alpha}h_{\xi_{i}^{\alpha}}(x,Du)\varphi_{x_{i}}^{\alpha}(x)\,dx=0\qquad\forall\varphi\in
C_{0}^{\infty}(\Omega;\mathbb{R}^{N}).$
Let $B_{2R}\Subset\Omega$ and let $\eta\in C_{0}^{\infty}(B_{R})$ be a cut off
function between $B_{\rho}$ and $B_{R}$ for some $\rho<R$. Fix $1\leq s\leq
n$ and denote by $e_{s}$ the unit vector in the $x_{s}$ direction; consider
the finite difference operator $\tau_{s,h}$, see (2.1), from now on simply
denoted by $\tau_{h}$. Choosing $\varphi=\tau_{-h}(\eta^{2}\tau_{h}u)$ as test
function in the Euler system, we get, by properties (i) and (ii) of
$\tau_{h}$,
$\int_{\Omega}\sum_{i,\alpha}\tau_{h}(h_{\xi_{i}^{\alpha}}(x,Du))D_{x_{i}}(\eta^{2}\tau_{h}u^{\alpha})\,dx=0$
and so
$\int_{\Omega}\sum_{i,\alpha}\tau_{h}(h_{\xi_{i}^{\alpha}}(x,Du))(2\eta\eta_{x_{i}}\tau_{h}u^{\alpha}+\eta^{2}\tau_{h}u_{x_{i}}^{\alpha})\,dx=0.$
Exploiting the definition of $\tau_{h}$, we get
$\int_{\Omega}\big{(}h_{\xi_{i}^{\alpha}}(x+he_{s},Du(x+he_{s}))-h_{\xi_{i}^{\alpha}}(x,Du(x))\big{)}(2\eta\eta_{x_{i}}\tau_{h}u^{\alpha}+\eta^{2}\tau_{h}u_{x_{i}}^{\alpha})\,dx=0$
i.e.
$\displaystyle\int_{\Omega}\eta^{2}[h_{\xi_{i}^{\alpha}}(x+he_{s},Du(x+he_{s}))-h_{\xi_{i}^{\alpha}}(x+he_{s},Du(x))]\tau_{h}u_{x_{i}}^{\alpha}\,dx$
$\displaystyle=$
$\displaystyle\int_{\Omega}\eta^{2}[h_{\xi_{i}^{\alpha}}(x,Du(x))-h_{\xi_{i}^{\alpha}}(x+he_{s},Du(x))]\tau_{h}u_{x_{i}}^{\alpha}\,dx$
$\displaystyle-2\int_{\Omega}\eta[h_{\xi_{i}^{\alpha}}(x+he_{s},Du(x+he_{s}))-h_{\xi_{i}^{\alpha}}(x+he_{s},Du(x))]\eta_{x_{i}}\tau_{h}u^{\alpha}$
$\displaystyle+2\int_{\Omega}\eta[h_{\xi_{i}^{\alpha}}(x,Du(x))-h_{\xi_{i}^{\alpha}}(x+he_{s},Du(x))]\eta_{x_{i}}\tau_{h}u^{\alpha}.$
We can write the previous equality as follows:
$\displaystyle I_{0}:=$
$\displaystyle\int_{\Omega}\eta^{2}\int_{0}^{1}\sum_{i,j,\alpha,\beta}h_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x+he_{s},Du(x)+\sigma\tau_{h}Du(x))d\sigma\tau_{h}u_{x_{i}}^{\alpha}\tau_{h}u_{x_{j}}^{\beta}\,dx$
$\displaystyle=$ $\displaystyle
h\int_{\Omega}\eta^{2}\int_{0}^{1}\sum_{i,\alpha}h_{x_{s}\xi_{i}^{\alpha}}(x+\sigma
he_{s},Du(x))d\sigma\tau_{h}u_{x_{i}}^{\alpha}\,dx$
$\displaystyle-2\int_{\Omega}\eta\int_{0}^{1}\sum_{i,j,\alpha,\beta}h_{\xi_{i}^{\alpha}\xi_{j}^{\beta}}(x+he_{s},Du(x)+\sigma\tau_{h}Du(x))d\sigma\eta_{x_{i}}\tau_{h}u^{\alpha}\tau_{h}u_{x_{j}}^{\beta}$
$\displaystyle+2h\int_{\Omega}\eta\int_{0}^{1}\sum_{i,\alpha}h_{x_{s}\xi_{i}^{\alpha}}(x+\sigma
he_{s},Du(x))d\sigma\eta_{x_{i}}\tau_{h}u^{\alpha}$ $\displaystyle=:$
$\displaystyle I_{1}+I_{2}+I_{3},$
that implies
$I_{0}\leq|I_{1}|+|I_{2}|+|I_{3}|$ (4.6)
The ellipticity assumption (4.3) yields
$I_{0}\geq\nu\int_{\Omega}\eta^{2}(1+|Du(x)|^{2}+|Du(x+he_{s})|^{2})^{\frac{p-2}{2}}|\tau_{h}Du|^{2}\,dx$
(4.7)
By assumption (4.4) we get
$\displaystyle|I_{1}|$ $\displaystyle\leq$
$\displaystyle|h|\int_{\Omega}\eta^{2}\int_{0}^{1}k(x+\sigma
he_{s})d\sigma(1+|Du(x)|^{2})^{\frac{q-1}{2}}|\tau_{h}Du|\,dx$ (4.8)
$\displaystyle\leq$
$\displaystyle|h|\|1+|Du|\|^{\frac{2q-p}{2}}_{\infty}\int_{\Omega}\eta^{2}\int_{0}^{1}k(x+\sigma
he_{s})d\sigma(1+|Du(x)|^{2})^{\frac{p-2}{4}}|\tau_{h}Du|\,dx$
$\displaystyle\leq$
$\displaystyle\frac{\nu}{4}\int_{\Omega}\eta^{2}(1+|Du(x)|^{2})^{\frac{p-2}{2}}|\tau_{h}Du|^{2}\,dx$
$\displaystyle+c_{\nu}|h|^{2}\|1+|Du|\|^{2q-p}_{\infty}\int_{\Omega}\eta^{2}\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{2}\,dx,$
where in the last line we used Young’s inequality. The right inequality in
(4.3) yields
$\displaystyle|I_{2}|$ $\displaystyle\leq$ $\displaystyle
c(L_{2})\int_{\Omega}\eta|D\eta|(1+|Du(x)|^{2}+|Du(x+he_{s})|^{2})^{\frac{q-2}{2}}|\tau_{h}Du||\tau_{h}u|\,dx$
(4.9) $\displaystyle\leq$ $\displaystyle
c(L_{2})\|1+|Du|\|^{\frac{2q-p}{2}}_{\infty}\int_{\Omega}\eta|D\eta|(1+|Du(x)|^{2}+|Du(x+he_{s})|^{2})^{\frac{p-2}{4}}|\tau_{h}Du||\tau_{h}u|\,dx$
$\displaystyle\leq$
$\displaystyle\frac{\nu}{4}\int_{\Omega}\eta^{2}(1+|Du(x)|^{2}+|Du(x+he_{s})|^{2})^{\frac{p-2}{2}}|\tau_{h}Du|^{2}\,dx$
$\displaystyle+c_{\nu,L_{2}}\|1+|Du|\|^{2q-p}_{\infty}\int_{\Omega}|D\eta|^{2}|\tau_{h}u|^{2}\,dx.$
Finally, using again assumption (4.4) and Hölder’s inequality, we obtain
$\displaystyle|I_{3}|$ $\displaystyle\leq$ $\displaystyle
2|h|\int_{\Omega}\eta|D\eta|\int_{0}^{1}k(x+\sigma
he_{s})d\sigma(1+|Du(x)|^{2})^{\frac{q-1}{2}}|\tau_{h}u|\,dx$ (4.10)
$\displaystyle\leq$
$\displaystyle|h|\|1+|Du|\|^{q-1}_{\infty}\int_{\Omega}\eta|D\eta|\int_{0}^{1}k(x+\sigma
he_{s})d\sigma|\tau_{h}u|\,dx$ $\displaystyle\leq$
$\displaystyle|h|\|1+|Du|\|^{q-1}_{\infty}\left(\int_{\Omega}\eta|D\eta|\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{\frac{p}{p-1}}\,dx\right)^{\frac{p-1}{p}}$
$\displaystyle\qquad\times\left(\int_{\Omega}\eta|D\eta||\tau_{h}u|^{p}\,dx\right)^{\frac{1}{p}}.$
Inserting (4.7), (4.8), (4.9) and (4.10) in (4.6), we get
$\displaystyle\nu\int_{\Omega}\eta^{2}(1+|Du(x)|^{2}+|Du(x+he_{s})|^{2})^{\frac{p-2}{2}}|\tau_{h}Du|^{2}\,dx$
$\displaystyle\leq$
$\displaystyle\frac{\nu}{2}\int_{\Omega}\eta^{2}(1+|Du(x)|^{2}+|Du(x+he_{s})|^{2})^{\frac{p-2}{2}}|\tau_{h}Du|^{2}\,dx$
$\displaystyle+c_{\nu,L_{2}}\|1+|Du|\|_{\infty}^{2q-p}\int_{\Omega}|D\eta|^{2}|\tau_{h}u|^{2}\,dx$
$\displaystyle+c_{\nu}|h|^{2}\|1+|Du|\|_{\infty}^{2q-p}\int_{\Omega}\eta^{2}\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{2}\,dx$
$\displaystyle+|h|\|1+|Du|\|_{\infty}^{q-1}\left(\int_{\Omega}\eta|D\eta|\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{\frac{p}{p-1}}\,dx\right)^{\frac{p-1}{p}}$
$\displaystyle\qquad\times\left(\int_{\Omega}\eta|D\eta||\tau_{h}u|^{p}\,dx\right)^{\frac{1}{p}}.$
Reabsorbing the first integral in the right hand side by the left hand side
and recalling the properties of the cut off function $\eta$, we obtain
$\displaystyle\int_{B_{\rho}}(1+|Du(x)|^{2}+|Du(x+he_{s})|^{2})^{\frac{p-2}{2}}|\tau_{h}Du|^{2}\,dx$
$\displaystyle\leq$
$\displaystyle\frac{c}{(R-\rho)^{2}}\|1+|Du|\|_{\infty}^{2q-p}\int_{B_{R}}|\tau_{h}u|^{2}\,dx$
$\displaystyle+c|h|^{2}\|1+|Du|\|_{\infty}^{2q-p}\int_{B_{R}}\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{2}\,dx$
$\displaystyle+c|h|\|1+|Du|\|_{\infty}^{q-1}\left(\int_{B_{R}}\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{\frac{p}{p-1}}\,dx\right)^{\frac{p-1}{p}}$
$\displaystyle\qquad\times\left(\int_{B_{R}}|\tau_{h}u|^{p}\,dx\right)^{\frac{1}{p}},$
where $c=c(\nu,L_{2},n,N)$. By property (iii) of the finite difference
operator we deduce that
$\displaystyle\int_{B_{\rho}}(1+|Du(x)|^{2}+|Du(x+he_{s})|^{2})^{\frac{p-2}{2}}|\tau_{h}Du|^{2}\,dx$
(4.11) $\displaystyle\leq$
$\displaystyle\frac{c}{(R-\rho)^{2}}|h|^{2}\|1+|Du|\|^{2q-p}_{\infty}\int_{B_{2R}}|Du|^{2}\,dx$
$\displaystyle+c|h|^{2}\|1+|Du|\|^{2q-p}_{\infty}\int_{B_{R}}\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{2}\,dx$
$\displaystyle+c|h|^{2}\|1+|Du|\|^{q-1}_{\infty}\left(\int_{B_{R}}\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{\frac{p}{p-1}}\,dx\right)^{\frac{p-1}{p}}$
$\displaystyle\qquad\times\left(\int_{B_{2R}}|Du|^{p}\,dx\right)^{\frac{1}{p}}.$
Dividing (4.11) by $|h|^{2}$ and using Lemma 2.1 in the left hand side, we get
$\displaystyle\int_{B_{\rho}}\frac{|\tau_{h}V_{p}(Du)|^{2}}{|h|^{2}}\,dx$
$\displaystyle\leq$
$\displaystyle\frac{c}{(R-\rho)^{2}}\|1+|Du|\|_{\infty}^{2q-p}\int_{B_{2R}}|Du|^{2}\,dx$
$\displaystyle+c\|1+|Du|\|_{\infty}^{2q-p}\int_{B_{R}}\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{2}\,dx$
$\displaystyle+c\|1+|Du|\|_{\infty}^{q-1}\left(\int_{B_{R}}\left(\int_{0}^{1}k(x+\sigma
he_{s})d\sigma\right)^{\frac{p}{p-1}}\,dx\right)^{\frac{p-1}{p}}$
$\displaystyle\qquad\times\left(\int_{B_{2R}}|Du|^{p}\,dx\right)^{\frac{1}{p}}.$
Letting $h$ go to $0$, by property (iv) of the finite difference operator, we
conclude
$\displaystyle\int_{B_{\rho}}|DV_{p}(Du)|^{2}\,dx$ (4.12) $\displaystyle\leq$
$\displaystyle\frac{c}{(R-\rho)^{2}}\|1+|Du|\|^{2q-p}_{\infty}\int_{B_{2R}}|Du|^{2}\,dx+c\|1+|Du|\|^{2q-p}_{\infty}\int_{B_{R}}k^{2}(x)\,dx$
$\displaystyle+c\|1+|Du|\|^{q-1}_{\infty}\left(\int_{B_{R}}k^{\frac{p}{p-1}}(x)\,dx\right)^{\frac{p-1}{p}}\left(\int_{B_{2R}}|Du|^{p}\,dx\right)^{\frac{1}{p}}.$
Since $k\in L^{r}$, with $r>n\geq 2\geq\frac{p}{p-1}$, estimate (4.12)
implies the conclusion. ∎
## 5\. Proof of Theorem 1.1
Using the previous results and an approximation procedure, we can now prove
our main result.
###### Proof of Theorem 1.1.
For $f(x,\xi)$ satisfying the assumptions (1.3)–(1.6), let us introduce the
sequence
$f_{h}(x,\xi)=f(x,\xi)+\frac{1}{h}(1+|\xi|^{2})^{\frac{ps}{2(s+1)}}.$ (5.1)
Note that $f_{h}(x,\xi)$ satisfies the following set of conditions
$\frac{1}{h}(1+|\xi|^{2})^{\frac{ps}{2(s+1)}}\leq
f_{h}(x,\xi)\leq(1+L)\,(1+|\xi|^{2})^{\frac{q}{2}},$ (5.2)
$\frac{c_{1}}{h}\,(1+|\xi|^{2})^{\frac{ps}{2(s+1)}-2}|\lambda|^{2}\leq\langle
D_{\xi\xi}f_{h}(x,\xi)\lambda,\lambda\rangle,$ (5.3)
$|D_{\xi\xi}f_{h}(x,\xi)|\leq c_{2}(1+L)\,(1+|\xi|^{2})^{\frac{q-2}{2}},$
(5.4) $|D_{\xi x}f_{h}(x,\xi)|\leq k(x)(1+|\xi|^{2})^{\frac{q-1}{2}}$ (5.5)
for some constants $c_{1},c_{2}>0$, for a.e. $x\in\Omega$ and for every
$\xi\in\mathbb{R}^{N\times n}$.
Now, fix a ball $B_{R}\Subset\Omega$, and let $v_{h}\in
W^{1,\frac{ps}{s+1}}(B_{R},\mathbb{R}^{N})$ be the unique solution to the
problem
$\min\left\{\int_{B_{R}}f_{h}(x,Dv)\,dx:\,\,v\in
u+W^{1,\frac{ps}{s+1}}_{0}(B_{R},\mathbb{R}^{N})\right\}.$ (5.6)
Since $f_{h}(x,\xi)$ satisfies (5.2)–(5.5) with $k\in L^{r}$,
$r>n$, and (1.6) holds, then by the result in [22] we have that $v_{h}\in
W^{1,\infty}_{\mathrm{loc}}(B_{R})$ and by Theorem 4.1, used with $p$ replaced
by $p\frac{s}{s+1}$, we also have $v_{h}\in W^{2,2}_{\mathrm{loc}}(B_{R})$.
Since $f_{h}(x,\xi)$ satisfies (5.2), by the minimality of $v_{h}$ we get
$\displaystyle\int_{B_{R}}|Dv_{h}|^{\frac{ps}{s+1}}\,dx\leq
c_{s}\int_{B_{R}}a(x)|Dv_{h}|^{p}+c_{s}\int_{B_{R}}\frac{1}{a^{s}(x)}\,dx$
$\displaystyle\leq$ $\displaystyle
c_{s}\int_{B_{R}}f_{h}(x,Dv_{h})\,dx+c_{s}\int_{B_{R}}\frac{1}{a^{s}(x)}\,dx$
$\displaystyle\leq$ $\displaystyle
c_{s}\int_{B_{R}}f_{h}(x,Du)\,dx+c_{s}\int_{B_{R}}\frac{1}{a^{s}(x)}\,dx$
$\displaystyle=$ $\displaystyle
c_{s}\int_{B_{R}}f(x,Du)\,dx+\frac{c_{s}}{h}\int_{B_{R}}(1+|Du|)^{\frac{ps}{s+1}}\,dx+c_{s}\int_{B_{R}}\frac{1}{a^{s}(x)}\,dx$
$\displaystyle\leq$ $\displaystyle
c_{s}\int_{B_{R}}f(x,Du)\,dx+c_{s}\int_{B_{R}}(1+|Du|)^{\frac{ps}{s+1}}\,dx+c_{s}\int_{B_{R}}\frac{1}{a^{s}(x)}\,dx.$
Therefore the sequence $v_{h}$ is bounded in $W^{1,\frac{ps}{s+1}}(B_{R})$, so
there exists $v\in u+W_{0}^{1,\frac{ps}{s+1}}(B_{R})$ such that, up to
subsequences,
$v_{h}\rightharpoonup v\qquad\text{weakly in}\,\,W^{1,\frac{ps}{s+1}}(B_{R}).$
(5.7)
On the other hand, we can apply Theorem 3.3 to $f_{h}(x,\xi)$, since its
assumptions are satisfied with $b$ replaced by $1+L$. Thus we may apply
estimates (3.7) and (3.8) to the solutions $v_{h}$ to obtain
$\displaystyle\|Dv_{h}\|_{L^{\infty}(B_{\rho})}\leq
C\mathcal{K}_{R}^{\tilde{\vartheta}}\left(\int_{B_{R}}(1+f_{h}(x,Dv_{h}))\,dx\right)^{\tilde{\varsigma}}$
(5.8) $\displaystyle\leq$ $\displaystyle
C\mathcal{K}_{R}^{\tilde{\vartheta}}\left(\int_{B_{R}}(1+f_{h}(x,Du))\,dx\right)^{\tilde{\varsigma}}$
$\displaystyle=$ $\displaystyle
C\mathcal{K}_{R}^{\tilde{\vartheta}}\left(\int_{B_{R}}(1+f(x,Du)+\frac{1}{h}(1+|Du|^{2})^{\frac{ps}{2(s+1)}})\,dx\right)^{\tilde{\varsigma}}$
$\displaystyle\leq$ $\displaystyle
C\mathcal{K}_{R}^{\tilde{\vartheta}}\left(\int_{B_{R}}(1+f(x,Du)+(1+|Du|^{2})^{\frac{ps}{2(s+1)}})\,dx\right)^{\tilde{\varsigma}},$
with $C,\tilde{\vartheta},\tilde{\varsigma}$ independent of $h$ and
$0<\rho<R$. Therefore, up to subsequences,
$v_{h}\rightharpoonup v\qquad\text{weakly* in}\,\,W^{1,\infty}(B_{\rho}).$
(5.9)
Our next aim is to show that $v=u$. The lower semicontinuity of
$w\mapsto\int_{B_{R}}f(x,Dw)\,dx$ and the minimality of $v_{h}$ imply
$\displaystyle\int_{B_{R}}f(x,Dv)\,dx\leq\liminf_{h}\int_{B_{R}}f(x,Dv_{h})\,dx\leq\liminf_{h}\int_{B_{R}}f_{h}(x,Dv_{h})\,dx$
$\displaystyle\leq$ $\displaystyle\liminf_{h}\int_{B_{R}}f_{h}(x,Du)\,dx$
$\displaystyle=$
$\displaystyle\liminf_{h}\int_{B_{R}}(f(x,Du)+\frac{1}{h}(1+|Du|^{2})^{\frac{ps}{2(s+1)}})\,dx$
$\displaystyle=$ $\displaystyle\int_{B_{R}}f(x,Du)\,dx.$
The strict convexity of $f$ yields that $u=v$. Therefore passing to the limit
as $h\rightarrow\infty$ in (5.8) we get
$\|Du\|_{L^{\infty}(B_{\rho})}\leq
C\mathcal{K}_{R}^{\tilde{\vartheta}}\left(\int_{B_{R}}(1+f(x,Du)+(1+|Du|^{2})^{\frac{ps}{2(s+1)}})\,dx\right)^{\tilde{\varsigma}},$
i.e. (1.7). Moreover, we may apply estimate (3.8) to each
$v_{h}$, thus getting
$\displaystyle\int_{B_{\rho}}a(x)(1+|Dv_{h}|^{2})^{\frac{p-2}{2}}|D^{2}v_{h}|^{2}\,dx$
$\displaystyle\leq$ $\displaystyle
c\left(\int_{B_{R}}(1+f_{h}(x,Dv_{h}))\,dx\right)^{\tilde{\varrho}}$
$\displaystyle\leq$ $\displaystyle
c\left(\int_{B_{R}}(1+f_{h}(x,Du))\,dx\right)^{\tilde{\varrho}}$
$\displaystyle=$ $\displaystyle
c\left(\int_{B_{R}}\left(1+f(x,Du)+\frac{1}{h}(1+|Du|^{2})^{\frac{ps}{2(s+1)}}\right)\,dx\right)^{\tilde{\varrho}},$
where we used the minimality of $v_{h}$ and the definition of $f_{h}(x,\xi)$.
Since $v_{h}\rightarrow u$ a.e. up to a subsequence, we conclude that
$\displaystyle\int_{B_{\rho}}a(x)(1+|Du|^{2})^{\frac{p-2}{2}}|D^{2}u|^{2}\,dx\leq\liminf_{h}\int_{B_{\rho}}a(x)(1+|Dv_{h}|^{2})^{\frac{p-2}{2}}|D^{2}v_{h}|^{2}\,dx$
$\displaystyle\quad\leq
c\liminf_{h}\left(\int_{B_{R}}\left(1+f(x,Du)+\frac{1}{h}(1+|Du|^{2})^{\frac{ps}{2(s+1)}}\right)\,dx\right)^{\tilde{\varrho}}$
$\displaystyle\quad=c\left(\int_{B_{R}}\left(1+f(x,Du)\right)\,dx\right)^{\tilde{\varrho}},$
i.e. (1.8). ∎
## References
* [1] A.K. Balci, L. Diening, R. Giova, A. Passarelli di Napoli: Elliptic equations with degenerate weights, (2020), arXiv:2003.10380v1.
* [2] P. Baroni, M. Colombo, G. Mingione: Regularity for general functionals with double phase,_Calc. Var. Partial Differential_ , 57 (2018), no. 2, Paper No. 62, 48 pp.
* [3] P. Bella, M. Schäffner: Local boundedness and Harnack inequality for solutions of linear non-uniformly elliptic equations, _Comm. Pure App. Math._ (2019) to appear.
* [4] M. Belloni, G. Buttazzo: A survey of old and recent results about the gap phenomenon in the calculus of variations, _R. Lucchetti, J. Revalski (Eds.), Recent developments in well-posed variational problems, Mathematical Applications,_ $\mathbf{331}$ (1995), 1-27.
* [5] S. Biagi, G. Cupini, E. Mascolo: Regularity of quasi-minimizers for non-uniformly elliptic integrals, _J. Math Anal. Appl._ , 485 (2020), 123838, 20 pp.
* [6] M. Carozza, G. Moscariello, A. Passarelli di Napoli: Higher integrability for minimizers of anisotropic functionals _Discrete Contin. Dyn. Syst. Ser. B_ , $\mathbf{11}$ (2009), 43-55.
* [7] A. Cianchi, V. Maz’ya: Global Lipschitz regularity for a class of quasilinear elliptic equations, Comm. Partial Differential Equations, 36 (2011), 100-133.
* [8] M. Colombo, G. Mingione: Regularity for double phase variational problems,_Arch. Rat. Mech. Anal._ , 215 (2015), 443-496.
* [9] M. Colombo, G. Mingione: Bounded minimisers of double phase variational integrals, Arch. Rat. Mech. Anal., 218 (2015), 219-273.
* [10] D. Cruz-Uribe, P. Di Gironimo, C. Sbordone: On the continuity of solutions to degenerate elliptic equations, _J. Differential Equations_ 250 (2011), 2671-2686.
* [11] G. Cupini, F. Giannetti, R. Giova, A. Passarelli di Napoli: Regularity results for vectorial minimizers of a class of degenerate convex integrals, J. Differential Equations, 265 (2018), 4375-4416.
* [12] G. Cupini, M. Guidorzi, E. Mascolo: Regularity of minimizers of vectorial integrals with $p,q$${-}$growth, Nonlinear Anal., 54 (2003), 591-616.
* [13] G. Cupini, P. Marcellini, E. Mascolo: Local boundedness of solutions to quasilinear elliptic systems, Manuscripta Math., 137 (2012), 287-315.
* [14] G. Cupini, P. Marcellini, E. Mascolo: Existence and regularity for elliptic equations under $p,q$${-}$growth, Adv. Differential Equations, 19 (2014), 693-724.
* [15] G. Cupini, P. Marcellini, E. Mascolo: Regularity of minimizers under limit growth conditions, _Nonlinear Anal._ , 153 (2017), 294-310.
* [16] G. Cupini, P. Marcellini, E. Mascolo: Nonuniformly elliptic energy integrals with $p,q$-growth, _Nonlinear Anal._ , 177 (2018), part A, 312-324.
* [17] E. De Giorgi: Un esempio di estremali discontinue per un problema variazionale di tipo ellittico,_Boll. Un. Mat. Ital.,_ 1 (1968), 135-137.
* [18] C. De Filippis, G. Mingione: On the regularity of non-autonomous functionals, _J. Geom. Anal._ 30 (2020), 1584-1626.
* [19] C. De Filippis, J. Oh: Regularity for multi-phase variational problems _J. Differential Equations_ , 267 (2019), 1631-1670.
* [20] T. Di Marco, P. Marcellini: A-priori gradient bound for elliptic systems under either slow or fast growth conditions, Calc. Var. Partial Differential Equations, 59 (2020), 26 pp.
* [21] F. Duzaar, G. Mingione: Local Lipschitz regularity for degenerate elliptic systems, Ann. Inst. H. Poincaré Anal. Non Linéaire, 27 (2010), 1361-1396.
* [22] M. Eleuteri, P. Marcellini, E. Mascolo. Lipschitz estimates for systems with ellipticity conditions at infinity. _Ann. Mat. Pura Appl._ , 195 (2016), 1575-1603.
* [23] M. Eleuteri, P. Marcellini, E. Mascolo: Lipschitz continuity for functionals with variable exponents _Rend. Lincei Mat. Appl.,_ 27 (2016), 61-87.
* [24] M. Eleuteri, P. Marcellini, E. Mascolo: Regularity for scalar integrals without structure conditions, Advances in Calculus of Variations, 13 (2020), 279-300. https://doi.org/10.1515/acv-2017-0037
* [25] A. Esposito, F. Leonetti, P.V. Petricca: Absence of Lavrentiev gap for non-autonomous functionals with $(p,q)-$growth, _Adv. Nonlinear Anal.,_ 8 (2019), 73-78.
* [26] L. Esposito, F. Leonetti, G. Mingione: Regularity results for minimizers of irregular integrals with $(p,q)$ growth, _Forum Mathematicum_ , 14 (2002), 245-272.
* [27] L. Esposito, F. Leonetti, G. Mingione: Sharp regularity for functionals with $(p,q)$ growth, _J. Differential Equations_ , 204 (2004), 5-55.
* [28] E. Fabes, C. Kenig, R. Serapioni: The local regularity of solutions of degenerate elliptic equations, _Comm. Partial Diff. Eq._ , 7 (1982), 77-116.
* [29] R. Giova, A. Passarelli di Napoli: Regularity results for a priori bounded minimizers of non-autonomous functionals with discontinuous coefficients, Adv. Calc. Var., 12 (2019), 85-110.
* [30] E. Giusti: Direct methods in the calculus of variations. World scientific publishing Co. (2003) 50.
* [31] T. Iwaniec, L. Migliaccio, G. Moscariello, A. Passarelli di Napoli: A priori estimates for nonlinear elliptic complexes, Adv. Differential Equations, 8 (2003), 513-546.
* [32] T. Iwaniec, C. Sbordone: Quasiharmonic fields, _Ann. Inst. H. Poincaré Anal. Non Linéaire_ , 18 (2001), 519-572.
* [33] P. Marcellini: Approximation of quasiconvex functions, and lower semicontinuity of multiple integrals, _Manuscripta Math.,_ 51 (1985), no. 1-3, 1-28.
* [34] P. Marcellini: Regularity of minimizers of integrals in the calculus of variations with non standard growth conditions,_Arch. Rational Mech. Anal._ , 105 (1989) 267-284.
* [35] P. Marcellini: Regularity and existence of solutions of elliptic equations with $p,q$-growth conditions, _J. Differential Equations_ , 90 (1991), 1-30.
* [36] P. Marcellini: Everywhere regularity for a class of elliptic systems without growth conditions ,_Ann. Scuola Norm. Sup. Pisa Cl. Sci._ , 23 (1996), 1-25.
* [37] P. Marcellini: Regularity under general and $p,q-$growth conditions, Discrete Cont. Dinamical Systems Series S, 13 (2020), 2009-2031.
* [38] P. Marcellini: A variational approach to parabolic equations under general and $p,q-$growth conditions, Nonlinear Anal., 194 (2020), 111456, 17 pp.
* [39] P. Marcellini: Growth conditions and regularity for weak solutions to nonlinear elliptic pdes, J. Math. Anal. Appl., 2020, to appear. https://doi.org/10.1016/j.jmaa.2020.124408
* [40] P. Marcellini, G. Papi: Nonlinear elliptic systems with general growth,_J. Differential Equations_ , 221 (2006), 412-443.
* [41] G. Mingione: Regularity of minima: an invitation to the dark side of the calculus of variations,_Appl. Math.,_ 51 (2006), 355-426.
* [42] C. Mooney, O. Savin : Some singular minimizers in low dimensions in the calculus of variations,_Arch. Ration. Mech. Anal._ , 22 (2016), 1-22
* [43] J. Moser: A new proof of De Giorgi’s theorem concerning the regularity problem for elliptic differential equations, _Comm. Pure Appl. Math.,_ 13 (1960), 457-468.
* [44] A. Passarelli di Napoli: Higher differentiability of minimizers of variational integrals with Sobolev coefficients, _Adv. Calc. Var._ , 7 (2014), 59-89.
* [45] A. Passarelli di Napoli: Higher differentiability of solutions of elliptic systems with Sobolev coefficients: the case $p=n=2$, _Pot. Anal._ , 41 (2014), 715-735.
* [46] A. Passarelli di Napoli: Regularity results for non-autonomous variational integrals with discontinuous coefficients, _Atti Accad. Naz. Lincei, Rend. Lincei Mat. Appl._ , 26 (2015), (4), 475-496.
* [47] M. Pingen: Regularity results for degenerate elliptic systems, _Ann. Inst. H. Poincaré Anal. Non Linéaire_, 25 (2008), 369-380.
* [48] V. Šverák, X. Yan: A singular minimizer of a smooth strongly convex functional in three dimensions, _Calc. Var. Partial Differential Equations_, 10 (2000), 213-221.
* [49] N.S. Trudinger: On the regularity of generalized solutions of linear, non-uniformly elliptic equations, _Arch. Rational Mech. Anal._ , 42 (1971), 42-50.
* [50] N.S. Trudinger: Linear elliptic operators with measurable coefficients, _Ann. Scuola Norm. Sup. Pisa Cl. Sci._ , 27 (1973), 265-308.
* [51] V.V. Zhikov: On Lavrentiev phenomenon, _Russian J. Math. Phys._ , 3 (1995), 249-269.
arXiv:2101.01104
# How does the Combined Risk Affect the Performance of Unsupervised Domain
Adaptation Approaches?
Li Zhong1,2,∗, Zhen Fang2,∗, Feng Liu2,∗, Jie Lu2,†, Bo Yuan1, Guangquan Zhang2
(∗ Equal contribution; work done at AAII, UTS. † Corresponding author.)
###### Abstract
Unsupervised domain adaptation (UDA) aims to train a target classifier with
labeled samples from the source domain and unlabeled samples from the target
domain. Classical UDA learning bounds show that target risk is upper bounded
by three terms: source risk, distribution discrepancy, and combined risk.
Based on the assumption that the combined risk is a small fixed value, methods
based on this bound train a target classifier by only minimizing estimators of
the source risk and the distribution discrepancy. However, the combined risk
may increase when minimizing both estimators, which makes the target risk
uncontrollable. Hence, the target classifier cannot achieve ideal performance
if we fail to control the combined risk. The key challenge in controlling the
combined risk lies in the unavailability of labeled samples in the target
domain. To address this challenge, we propose a method named
E-MixNet. E-MixNet employs enhanced mixup, a generic vicinal distribution, on
the labeled source samples and pseudo-labeled target samples to calculate a
proxy of the combined risk. Experiments show that the proxy can effectively
curb the increase of the combined risk when minimizing the source risk and
distribution discrepancy. Furthermore, we show that if the proxy of the
combined risk is added into loss functions of four representative UDA methods,
their performance is also improved.
## Introduction
Figure 1: The values of the combined risk and the accuracy for the task C
$\rightarrow$ P on Image-CLEF. The left figure shows the value of the combined
risk; the right figure shows the accuracy of the task. Blue line: the
optimization of the combined risk is ignored. Green line: the combined risk is
optimized using source samples and high-confidence target samples. Orange
line: the combined risk is optimized via the proxy formulated with mixup.
Purple line: the proxy formulated with e-mixup is optimized.
_Domain Adaptation_ (DA) aims to train a target-domain classifier with samples
from source and target domains (Lu et al. 2015). When the labels of samples in
the target domain are unavailable, DA is known as _unsupervised DA_ (UDA)
(Zhong et al. 2020; Fang et al. 2020), which has been applied to address
diverse real-world problems, such as computer vision (Zhang et al. 2020c;
Dong et al. 2019, 2020b), natural language processing (Lee and Jha 2019; Guo,
Pasunuru, and Bansal 2020), and recommender systems (Zhang et al. 2017; Yu,
Wang, and Yuan 2019; Lu et al. 2020).
Significant theoretical advances have been achieved in UDA. Pioneering
theoretical work was proposed by Ben-David et al. (2007). This work shows that
the target risk is upper bounded by three terms: source risk, marginal
distribution discrepancy, and combined risk. This earliest learning bound has
been extended from many perspectives, such as considering more surrogate loss
functions (Zhang et al. 2019a) or distributional discrepancies (Mohri and
Medina 2012; Shen et al. 2018) (see (Redko et al. 2020) as a survey).
Recently, Zhang et al. (2019a) proposed a new distributional discrepancy
termed Margin Disparity Discrepancy and developed a tighter and more practical
UDA learning bound.
The UDA learning bounds proposed by (Ben-David et al. 2007, 2010) and the
recent UDA learning bounds proposed by (Shen et al. 2018; Xu et al. 2020;
Zhang et al. 2020b) consist of three terms: source risk, marginal distribution
discrepancy, and combined risk. Minimizing the source risk aims to obtain a
source-domain classifier, and minimizing the distribution discrepancy aims to
learn domain-invariant features so that the source-domain classifier can
perform well on the target domain. The combined risk embodies the adaptability
between the source and target domains (Ben-David et al. 2010). In
particular, when the hypothesis space is fixed, the combined risk is a
constant.
Based on the UDA learning bounds where the combined risk is assumed to be a small
constant, many existing UDA methods focus on learning domain-invariant
features (Fang et al. 2019; Dong et al. 2020c, a; Liu et al. 2019) by
minimizing the estimators of the source risk and the distribution discrepancy.
In the learned feature space, the source and target distributions are similar
while the source-domain classifier is required to achieve a small error.
Furthermore, the generalization error of the source-domain classifier is
expected to be small in the target domain.
However, the combined risk may increase when learning the domain-invariant
features, and the increase of the combined risk may degrade the performance of
the source-domain classifier in the target domain. As shown in Figure 1, we
calculate the value of the combined risk and the accuracy on a real-world UDA
task (see the green line). The performance worsens as the combined risk
increases. Zhao et al. (2019) also pointed out that an increase in the combined
risk causes the failure of the source-domain classifier on the target domain.
To investigate how the combined risk affects the performance on the domain-
invariant features, we rethink and develop the UDA learning bounds by
introducing feature transformations. In the new bound (see Eq. (5)), the
combined risk is a function of the feature transformation rather than a
constant (compared to the bounds in (Ben-David et al. 2010)). We also reveal
that the combined risk is deeply related to the conditional distribution
discrepancy (see Theorem 3). Theorem 3 shows that the conditional
distribution discrepancy will increase when the combined risk increases.
Hence, it is hard to achieve satisfactory target-domain accuracy if we only
focus on learning domain-invariant features and omit to control the combined risk.
The key challenge in estimating the combined risk lies in the
_unavailability_ of labeled samples in the target domain. A simple
solution is to leverage the pseudo labels with high confidence in the target
domain to estimate the combined risk. However, since samples with high
confidence are insufficient, the value of the combined risk may still increase
(see the green line in Figure 1). Inspired by semi-supervised learning
methods, an advanced solution is to directly use the mixup technique to
augment pseudo-labeled target samples, which can slightly help us estimate the
combined risk better than the simple solution (see the orange line in Figure
1).
However, the target-domain pseudo labels provided by the source-domain
classifier may be inaccurate due to the discrepancy between domains, so mixup
may not perform well with such labels. To mitigate this issue, we propose
enhanced mixup (e-mixup) as a substitute for mixup to compute a proxy of the
combined risk. The purple line in Figure 1 shows that the proxy based on
e-mixup can significantly boost the performance. Details of the proxy are
given in the Motivation section.
To this end, we design a novel UDA method referred to as E-MixNet. E-MixNet
learns the target-domain classifier by simultaneously minimizing the source
risk, the marginal distribution discrepancy, and the proxy of the combined
risk. By minimizing the proxy, we effectively curb the increase of the
combined risk and thus control the conditional distribution discrepancy
between the two domains.
We conduct experiments on three public datasets (Office-31, Office-Home, and
Image-CLEF) and compare E-MixNet with a series of existing state-of-the-art
methods. Furthermore, we introduce the proxy of the combined risk into four
representative UDA methods (i.e., DAN (Long et al. 2015), DANN (Ganin et al.
2016), CDAN (Long et al. 2018), SymNets (Zhang et al. 2019b)). Experiments
show that E-MixNet can outperform all baselines, and the four representative
methods can achieve better performance if the proxy of the combined risk is
added into their loss functions.
## Problem Setting and Concepts
In this section, we introduce the definition of UDA and some important
concepts used in this paper.
Let $\mathcal{X}\subset\mathbb{R}^{d}$ be a feature space and
$\mathcal{Y}:=\\{{\mathbf{y}}_{c}\\}_{c=1}^{K}$ be the label space, where the
label ${\mathbf{y}}_{c}\in\mathbb{R}^{{K}}$ is a one-hot vector, whose $c$-th
coordinate is $1$ and the other coordinates are $0$.
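As a concrete illustration of this label space (a minimal numpy sketch; the class count is an arbitrary example, not from the paper):

```python
import numpy as np

K = 4  # number of classes (illustrative)
# Label space Y = {y_1, ..., y_K}: y_c has a 1 in coordinate c and 0 elsewhere.
labels = np.eye(K)

# e.g. the one-hot label for class c = 2 (1-indexed as in the paper):
y_2 = labels[1]  # array([0., 1., 0., 0.])
```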
###### Definition 1 (Domains in UDA).
Given random variables $X_{s},X_{t}\in\mathcal{X}$,
$Y_{s},{Y}_{t}\in\mathcal{Y}$, the source and target domains are joint
distributions $P_{X_{s}{Y}_{s}}$ and $P_{X_{t}{Y}_{t}}$ with
$P_{X_{s}Y_{s}}\neq P_{X_{t}{Y}_{t}}$.
Then, we state the UDA problem as follows.
###### Problem 1 (UDA).
Given independent and identically distributed (i.i.d.) labeled samples
${D}_{s}=\\{(\mathbf{x}_{s}^{i},\mathbf{y}_{s}^{i})\\}^{{n}_{s}}_{i=1}$ drawn
from the source domain $P_{X_{s}{Y}_{s}}$ and i.i.d. unlabeled samples
$D_{t}=\\{\mathbf{x}_{t}^{i}\\}^{n_{t}}_{i=1}$ drawn from the target marginal
distribution $P_{X_{t}}$, the aim of UDA is to train a classifier
$f:\mathcal{X}\rightarrow\mathcal{Y}$ with ${D}_{s}$ and $D_{t}$ such that $f$
accurately classifies the target data $D_{t}$.
Given a loss function
$\ell:\mathbb{R}^{K}\times\mathbb{R}^{K}\rightarrow\mathbb{R}_{\geq 0}$ and
any scoring functions $\mathbf{C},\mathbf{C}^{\prime}$ from function space
$\\{\mathbf{C}:\mathcal{X}\rightarrow\mathbb{R}^{K}\\}$, source risk, target
risk and classifier discrepancy are
$\begin{split}&R_{s}^{\ell}(\mathbf{C}):=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim
P_{X_{s}Y_{s}}}\ell(\mathbf{C}(\mathbf{x}),\mathbf{y}),\\\
&R_{t}^{\ell}(\mathbf{C}):=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim
P_{X_{t}Y_{t}}}\ell(\mathbf{C}(\mathbf{x}),\mathbf{y}),\\\
&R_{s}^{\ell}(\mathbf{C}^{\prime},\mathbf{C}):=\mathbb{E}_{\mathbf{x}\sim
P_{X_{s}}}\ell(\mathbf{C}^{\prime}(\mathbf{x}),\mathbf{C}(\mathbf{x})),\\\
&R_{t}^{\ell}(\mathbf{C}^{\prime},\mathbf{C}):=\mathbb{E}_{\mathbf{x}\sim
P_{X_{t}}}\ell(\mathbf{C}^{\prime}(\mathbf{x}),\mathbf{C}(\mathbf{x})).\end{split}$
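In practice these population risks are approximated by sample averages. A minimal numpy sketch (the toy scoring functions and the squared loss are illustrative assumptions, not the paper's code):

```python
import numpy as np

def empirical_risk(C, X, Y, loss):
    """Empirical estimate of R^ell(C) = E[ell(C(x), y)] over labeled samples (X, Y)."""
    return np.mean([loss(C(x), y) for x, y in zip(X, Y)])

def classifier_discrepancy(C_prime, C, X, loss):
    """Empirical estimate of R^ell(C', C) = E[ell(C'(x), C(x))] over unlabeled X."""
    return np.mean([loss(C_prime(x), C(x)) for x in X])

# toy scoring functions on R^2 with K = 2 classes
C  = lambda x: np.array([x[0], x[1]])
Cp = lambda x: np.array([x[1], x[0]])       # swaps the two scores
l2 = lambda a, b: float(np.sum((a - b) ** 2))  # a simple symmetric loss

X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0]])
print(empirical_risk(C, X, Y, l2))           # 0.0: C reproduces the labels
print(classifier_discrepancy(Cp, C, X, l2))  # 2.0: C' disagrees with C everywhere
```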
Lastly, we define the disparity discrepancy based on double losses, which will
be used to design our method.
###### Definition 2 (Double Loss Disparity Discrepancy).
Given distributions $P,Q$ over some feature space $\widetilde{\mathcal{X}}$,
two losses $\ell_{s},\ell_{t}$, a hypothesis space
$\mathcal{H}\subset\\{\mathbf{C}:\widetilde{\mathcal{X}}\rightarrow\mathbb{R}^{K}\\}$,
and any scoring function $\mathbf{C}\in\mathcal{H}$, the double loss
disparity discrepancy $d_{\mathbf{C},\mathcal{H}}^{\ell_{s}\ell_{t}}(P,Q)$ is
$\begin{split}\sup_{\mathbf{C}^{\prime}\in\mathcal{H}}\big{(}R_{P}^{\ell_{t}}(\mathbf{C}^{\prime},\mathbf{C})-R_{Q}^{\ell_{s}}(\mathbf{C}^{\prime},\mathbf{C})\big{)},\end{split}$
(1)
where
$\begin{split}&R_{P}^{\ell_{t}}(\mathbf{C}^{\prime},\mathbf{C}):=\mathbb{E}_{\mathbf{x}\sim
P}\ell_{t}(\mathbf{C}^{\prime}(\mathbf{x}),\mathbf{C}(\mathbf{x})),\\\
&R_{Q}^{\ell_{s}}(\mathbf{C}^{\prime},\mathbf{C}):=\mathbb{E}_{\mathbf{x}\sim
Q}\ell_{s}(\mathbf{C}^{\prime}(\mathbf{x}),\mathbf{C}(\mathbf{x})).\end{split}$
When losses $\ell_{s},\ell_{t}$ are the margin loss (Zhang et al. 2019a), the
double loss disparity discrepancy is known as the Margin Disparity Discrepancy
(Zhang et al. 2019a).
Compared with the classical discrepancy distance (Mansour, Mohri, and
Rostamizadeh 2009):
$\begin{split}&d^{\ell}_{\mathcal{H}}(P,Q):=\sup_{\mathbf{C}^{\prime},\mathbf{C}\in\mathcal{H}}\big{|}R_{P}^{\ell}(\mathbf{C}^{\prime},\mathbf{C})-R_{Q}^{\ell}(\mathbf{C}^{\prime},\mathbf{C})\big{|},\end{split}$
(2)
the double loss disparity discrepancy is tighter and more flexible.
###### Theorem 1 (DA Learning Bound).
Given a loss $\ell$ satisfying the triangle inequality and a hypothesis space
$\mathcal{H}\subset\\{\mathbf{C}:\mathcal{X}\rightarrow\mathbb{R}^{K}\\}$,
then for any $\mathbf{C}\in\mathcal{H}$, we have
$R_{t}^{\ell}(\mathbf{C})\leq
R_{s}^{\ell}(\mathbf{C})+d^{\ell}_{\mathcal{H}}(P_{X_{s}},P_{X_{t}})+\lambda^{\ell},$
where $d^{\ell}_{\mathcal{H}}$ is the discrepancy distance defined in Eq. (2)
and
$\lambda^{\ell}:=\min_{\mathbf{C}^{*}\in\mathcal{H}}\big{(}R_{s}^{\ell}(\mathbf{C}^{*})+R_{t}^{\ell}(\mathbf{C}^{*})\big{)}$
known as the combined risk.
In Theorem 1, when the hypothesis space $\mathcal{H}$ and the loss $\ell$ are
fixed, the combined risk is a fixed constant. Note that, under certain
assumptions, the target risk can be upper bounded only by the first two terms
(Gong et al. 2016; Zhang et al. 2020a), which is also a promising research
direction.
## Theoretical Analysis
Here we introduce our main theoretical results. All proofs can be found at
https://github.com/zhonglii/E-MixNet.
### Rethinking DA Learning Bound
Many existing UDA methods (Wang and Breckon 2020; Zou et al. 2019; Tang and
Jia 2020) learn a suitable feature transformation $\mathbf{G}$ such that the
discrepancy between transformed distributions $P_{\mathbf{G}(X_{s})}$ and
$P_{\mathbf{G}(X_{t})}$ is reduced. By introducing the transformation
$\mathbf{G}$ in the classical DA learning bound, we discover that the combined
risk $\lambda^{\ell}$ is not a fixed value.
###### Theorem 2.
Given a loss $\ell$ satisfying the triangle inequality, a transformation space
$\mathcal{G}\subset\\{\mathbf{G}:\mathcal{X}\rightarrow\mathcal{X}_{\rm
new}\\}$ and a hypothesis space
$\mathcal{H}\subset\\{\mathbf{C}:\mathcal{X}_{\rm
new}\rightarrow\mathbb{R}^{K}\\}$, then for any $\mathbf{G}\in\mathcal{G}$ and
$\mathbf{C}\in\mathcal{H}$,
$R_{t}^{\ell}(\mathbf{C}\circ\mathbf{G})\leq
R_{s}^{\ell}(\mathbf{C}\circ\mathbf{G})+d^{\ell}_{\mathcal{H}}(P_{\mathbf{G}(X_{s})},P_{\mathbf{G}(X_{t})})+\lambda^{\ell}(\mathbf{G}),$
where $d^{\ell}_{\mathcal{H}}$ is the discrepancy distance defined in Eq. (2)
and
$\lambda^{\ell}(\mathbf{G}):=\min_{\mathbf{C}^{*}\in\mathcal{H}}\big{(}R_{s}^{\ell}(\mathbf{C}^{*}\circ\mathbf{G})+R_{t}^{\ell}(\mathbf{C}^{*}\circ\mathbf{G})\big{)}$
(3)
known as the combined risk.
According to Theorem 2, it is not enough to minimize the source risk and the
distribution discrepancy by seeking the optimal classifier $\mathbf{C}$ and
the optimal transformation $\mathbf{G}$ from the spaces $\mathcal{H}$ and
$\mathcal{G}$, because we cannot guarantee that the combined risk
$\lambda^{\ell}(\mathbf{G})$ stays small during the training process.
For convenience, we define
$\Lambda^{\ell}(\mathbf{C},\mathbf{G}):=R_{s}^{\ell}(\mathbf{C}\circ\mathbf{G})+R_{t}^{\ell}(\mathbf{C}\circ\mathbf{G}),$
(4)
hence,
$\lambda^{\ell}(\mathbf{G})=\min_{\mathbf{C}^{*}\in\mathcal{H}}\Lambda^{\ell}(\mathbf{C}^{*},\mathbf{G})$.
### Meaning of Combined Risk $\lambda^{\ell}(\mathbf{G})$
To further understand the meaning of the combined risk
$\lambda^{\ell}(\mathbf{G})$, we prove the following theorem.
###### Theorem 3.
Given a symmetric loss $\ell$ satisfying the triangle inequality, a feature
transformation
$\mathbf{G}\in\mathcal{G}\subset\\{\mathbf{G}:\mathcal{X}\rightarrow\mathcal{X}_{\rm
new}\\}$, a hypothesis space $\mathcal{H}\subset\\{\mathbf{C}:\mathcal{X}_{\rm
new}\rightarrow\mathbb{R}^{K}\\}$, and
$\mathbf{C}_{s}=\operatorname*{argmin}_{\mathbf{C}\in\mathcal{H}}R_{s}^{\ell}(\mathbf{C}\circ\mathbf{G}),~{}~{}\mathbf{C}_{t}=\operatorname*{argmin}_{\mathbf{C}\in\mathcal{H}}R_{t}^{\ell}(\mathbf{C}\circ\mathbf{G}),$
then
$\begin{split}2\lambda^{\ell}(\mathbf{G})-\delta&\leq
R_{s}^{\ell}(\mathbf{C}_{t}\circ\mathbf{G})+R_{t}^{\ell}(\mathbf{C}_{s}\circ\mathbf{G})\\\
&\leq
2\lambda^{\ell}(\mathbf{G})+d^{\ell}_{\mathcal{H}}(P_{\mathbf{G}(X_{s})},P_{\mathbf{G}(X_{t})})+\delta,\end{split}$
where $\lambda^{\ell}(\mathbf{G})$ is defined in Eq. (3),
$\delta:=R_{s}^{\ell}(\mathbf{C}_{s}\circ\mathbf{G})+R_{t}^{\ell}(\mathbf{C}_{t}\circ\mathbf{G})$
known as the approximation error and $d^{\ell}_{\mathcal{H}}$ is the
discrepancy distance defined in Eq. (2).
Theorem 3 implies that the combined risk $\lambda^{\ell}(\mathbf{G})$ is
deeply related to the optimal classifier discrepancy
$R_{s}^{\ell}(\mathbf{C}_{t}\circ\mathbf{G})+R_{t}^{\ell}(\mathbf{C}_{s}\circ\mathbf{G}),$
which can be regarded as the conditional distribution discrepancy between
$P_{Y_{s}|\mathbf{G}(X_{s})}$ and $P_{Y_{t}|\mathbf{G}(X_{t})}$. If
$\lambda^{\ell}(\mathbf{G})$ increases, the conditional distribution
discrepancy becomes larger.
### Double Loss DA Learning Bound
Note that there exist methods, such as MDD (Zhang et al. 2019a), whose source
and target losses are different. To understand these UDA methods and bridge
the gap between theory and algorithms, we extend the classical DA learning
bound to a more general scenario.
###### Theorem 4.
Given losses $\ell_{s}$ and $\ell_{t}$ satisfying the triangle inequality, a
transformation space
$\mathcal{G}\subset\\{\mathbf{G}:\mathcal{X}\rightarrow\mathcal{X}_{\rm
new}\\}$ and a hypothesis space
$\mathcal{H}\subset\\{\mathbf{C}:\mathcal{X}_{\rm
new}\rightarrow\mathbb{R}^{K}\\}$, then for any $\mathbf{G}\in\mathcal{G}$ and
$\mathbf{C}\in\mathcal{H}$, $R_{t}^{\ell_{t}}(\mathbf{C}\circ\mathbf{G})$
is bounded by
$\begin{split}R_{s}^{\ell_{s}}(\mathbf{C}\circ\mathbf{G})+d^{\ell_{s}\ell_{t}}_{\mathbf{C},\mathcal{H}}(P_{\mathbf{G}(X_{s})},P_{\mathbf{G}(X_{t})})+\lambda^{\ell_{s}\ell_{t}}(\mathbf{G}),\end{split}$
(5)
where $d^{\ell_{s}\ell_{t}}_{\mathbf{C},\mathcal{H}}$ is the double loss
disparity discrepancy defined in Eq. (1) and
$\lambda^{\ell_{s}\ell_{t}}(\mathbf{G})$ is the combined risk:
$\lambda^{\ell_{s}\ell_{t}}(\mathbf{G}):=\min_{\mathbf{C}^{*}\in\mathcal{H}}\Lambda^{\ell_{s}\ell_{t}}(\mathbf{C}^{*},\mathbf{G}),$ (6)
where
$\begin{split}\Lambda^{\ell_{s}\ell_{t}}(\mathbf{C}^{*},\mathbf{G}):=R^{\ell_{s}}_{s}(\mathbf{C}^{*}\circ\mathbf{G})+R^{\ell_{t}}_{t}(\mathbf{C}^{*}\circ\mathbf{G}).\end{split}$
(7)
In Theorem 4, the condition that $\ell_{s}$ and $\ell_{t}$ satisfy the
triangle inequality can be replaced by a weaker condition:
$\begin{split}&R_{t}^{\ell_{t}}(\mathbf{C}\circ\mathbf{G})\leq
R_{t}^{\ell_{t}}(\mathbf{C}^{\prime}\circ\mathbf{G},\mathbf{C}\circ\mathbf{G})+R_{t}^{\ell_{t}}(\mathbf{C}^{\prime}\circ\mathbf{G}),\\\
&R_{s}^{\ell_{s}}(\mathbf{C}^{\prime}\circ\mathbf{G},\mathbf{C}\circ\mathbf{G})\leq
R_{s}^{\ell_{s}}(\mathbf{C}^{\prime}\circ\mathbf{G})+R_{s}^{\ell_{s}}(\mathbf{C}\circ\mathbf{G}).\end{split}$
If we set $\ell_{s},\ell_{t}$ to be the margin loss, then $\ell_{s},\ell_{t}$ do not
satisfy the triangle inequality, but they do satisfy the above condition.
## Proposed Method: E-MixNet
Here we introduce the motivation for and the details of our method.
### Motivation
Theorem 3 has shown that the combined risk is related to the conditional
distribution discrepancy: as the combined risk increases, so does the
conditional distribution discrepancy. Hence, neglecting the combined risk may
negatively impact the target-domain accuracy. Figure 1 (blue line) verifies
this observation.
To control the combined risk, we consider the following problem.
$\begin{split}&\min_{\mathbf{G}\in\mathcal{G}}\lambda^{\ell}(\mathbf{G})=\min_{\mathbf{G}\in\mathcal{G},\mathbf{C}^{*}\in\mathcal{H}}\Lambda^{\ell}(\mathbf{C}^{*},\mathbf{G}),\end{split}$
(8)
where $\Lambda^{\ell}(\mathbf{C}^{*},\mathbf{G})$ is defined in Eq. (4). Eq.
(8) shows that we can control the combined risk by minimizing
$\Lambda^{\ell}(\mathbf{C}^{*},\mathbf{G})$. However, directly optimizing the
combined risk is prohibitive, since labeled target samples are indispensable
for estimating $\Lambda^{\ell}(\mathbf{C}^{*},\mathbf{G})$.
To alleviate the above issue, a simple method is to use the target pseudo
labels with high confidence to estimate
$\Lambda^{\ell}(\mathbf{C}^{*},\mathbf{G})$. Given the source samples
$\\{(\mathbf{x}^{i}_{s},\mathbf{y}^{i}_{s})\\}_{i=1}^{n_{s}}$ and the target
samples with high confidence
$\\{(\mathbf{x}^{i}_{\mathcal{T}},\mathbf{y}^{i}_{\mathcal{T}})\\}_{i=1}^{n_{\mathcal{T}}}$,
the empirical form of $\Lambda^{\ell}(\mathbf{C}^{*},\mathbf{G})$ can be
computed by
$\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\ell(\mathbf{C}^{*}\circ\mathbf{G}(\mathbf{x}_{s}^{i}),\mathbf{y}_{s}^{i})+\frac{1}{n_{\mathcal{T}}}\sum_{i=1}^{n_{\mathcal{T}}}\ell(\mathbf{C}^{*}\circ\mathbf{G}(\mathbf{x}_{\mathcal{T}}^{i}),\mathbf{y}_{\mathcal{T}}^{i}).$
However, the combined risk may still increase, as shown in Figure 1 (green
line). The reason may be that the target samples whose pseudo labels have
high confidence are insufficient.
Inspired by semi-supervised learning, an advanced solution is to use the mixup
technique (Zhang et al. 2018) to augment the pseudo-labeled target samples. Mixup
produces new samples by a convex combination: given any two samples
$(\mathbf{x}_{1},\mathbf{y}_{1})$, $(\mathbf{x}_{2},\mathbf{y}_{2})$,
$\begin{split}&\mathbf{x}=\alpha\mathbf{x}_{1}+(1-\alpha)\mathbf{x}_{2},~{}~{}~{}\mathbf{y}=\alpha\mathbf{y}_{1}+(1-\alpha)\mathbf{y}_{2},\end{split}$
where $\alpha$ is a hyper-parameter. Zhang et al. (2018) have shown that mixup
not only reduces memorization of adversarial samples but also performs
better than Empirical Risk Minimization (ERM) (Vapnik and Chervonenkis 2015).
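The convex combination above can be sketched as follows (a minimal numpy sketch; drawing the coefficient from a Beta(α, α) distribution follows Zhang et al. (2018) and is an assumption here):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convex combination of two labeled samples, as in mixup (Zhang et al. 2018)."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)      # mixing coefficient in (0, 1)
    x = lam * x1 + (1 - lam) * x2     # mixed input
    y = lam * y1 + (1 - lam) * y2     # mixed (soft) label
    return x, y

x, y = mixup(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
             np.array([0.0, 1.0]), np.array([0.0, 1.0]))
# y stays a probability distribution over the two classes
```

Since each input here equals its label vector, the mixed input and mixed label coincide, which makes the interpolation easy to inspect.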
By applying mixup on the target samples with high confidence, new samples
$\\{(\mathbf{x}^{i}_{m},\mathbf{y}^{i}_{m})\\}_{i=1}^{n}$ are produced, then
we propose a proxy $\widetilde{\Lambda}^{\ell}(\mathbf{C}^{*},\mathbf{G})$ of
$\Lambda^{\ell}$ as follows:
$\begin{split}\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\ell(\mathbf{C}^{*}\circ\mathbf{G}(\mathbf{x}_{s}^{i}),\mathbf{y}_{s}^{i})+\frac{1}{n}\sum_{i=1}^{n}\ell(\mathbf{C}^{*}\circ\mathbf{G}(\mathbf{x}_{m}^{i}),\mathbf{y}_{m}^{i}).\end{split}$
This issue can be mitigated, since mixup can be regarded as a data
augmentation technique (Zhang et al. 2018). However, the target-domain pseudo
labels provided by the source-domain classifier may be inaccurate due to the
discrepancy between domains, so mixup may not perform well with inaccurate
labels. We therefore propose enhanced mixup (e-mixup) as a substitute for
mixup to compute the proxy. E-mixup introduces the true-labeled source
samples to mitigate the issue caused by bad pseudo labels.
Furthermore, to increase the diversity of the new samples, e-mixup produces
each new sample from two distant samples, i.e., samples whose distance is
expected to be large. Compared with the ordinary mixup technique (i.e.,
producing new samples from randomly selected samples), e-mixup can produce
new examples more effectively. We also verify that e-mixup can further boost
the performance (see Table 5). Details of e-mixup are shown in Algorithm 1.
For the double loss setting, denoting by
$\\{(\mathbf{x}^{i}_{e},\mathbf{y}^{i}_{e})\\}_{i=1}^{n}$ the samples
produced by e-mixup, the proxy
$\widetilde{\Lambda}^{\ell_{s}\ell_{t}}(\mathbf{C}^{*},\mathbf{G})$
of $\Lambda^{\ell_{s}\ell_{t}}$ (defined in Eq. (7)) is
$\begin{split}\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\ell_{s}(\mathbf{C}^{*}\circ\mathbf{G}(\mathbf{x}_{s}^{i}),\mathbf{y}_{s}^{i})+\frac{1}{n}\sum_{i=1}^{n}\ell_{t}(\mathbf{C}^{*}\circ\mathbf{G}(\mathbf{x}_{e}^{i}),\mathbf{y}_{e}^{i}).\end{split}$
(9)
The purple line in Figure 1 and ablation study show that e-mixup can further
boost performance.
### Algorithm
The optimization of the combined risk plays a crucial role in UDA.
Accordingly, we propose a method based on the aforementioned analyses to
address UDA more thoroughly.
Figure 2: The network architecture of applying the proxy of the combined risk.
The left figure is the general model for adding the proxy into existing UDA
models. The right figure is a specific model based on double loss disparity
discrepancy.
#### Objective Function
Input: samples $\\{(\mathbf{x}^{i},\mathbf{y}^{i})\\}_{i=1}^{n}$.
Parameter: $\alpha$, the number of classes $K$.
Output: new samples $\\{(\mathbf{x}^{i}_{e},\mathbf{y}^{i}_{e})\\}_{i=1}^{n}$.
1: for $i=1,2,\dots,n$ do
2: $y^{i}=\operatorname*{argmin}_{c\in\\{1,...,K\\}}{y}^{i}_{c}$ % ${y}^{i}_{c}$ is the $c$-th coordinate value of vector $\mathbf{y}^{i}$.
3: Select one sample from those whose label is $y^{i}$ and denote it by $(\mathbf{x}^{i}_{r},\mathbf{y}^{i}_{r})$
4: $\mathbf{x}_{e}^{i}=\alpha\mathbf{x}^{i}+(1-\alpha)\mathbf{x}^{i}_{r}$,
$\mathbf{y}_{e}^{i}=\alpha\mathbf{y}^{i}+(1-\alpha)\mathbf{y}^{i}_{r}$
5: end for
Algorithm 1 e-mixup
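Algorithm 1 can be sketched as follows (numpy; a minimal reading of the pseudocode — the fixed α, the random choice within the candidate pool, and the fallback when the least-weighted class has no samples are our assumptions):

```python
import numpy as np

def e_mixup(X, Y, alpha=0.6, rng=None):
    """e-mixup (Algorithm 1): mix each sample with one whose hard label is the
    class its own label vector assigns the LEAST mass to, i.e. a distant sample."""
    if rng is None:
        rng = np.random.default_rng(0)
    hard = Y.argmax(axis=1)             # hard label of each sample
    Xe, Ye = [], []
    for i in range(len(X)):
        c = int(Y[i].argmin())          # step 2: least-weighted class of y^i
        pool = np.where(hard == c)[0]   # step 3: candidates with label c
        j = i if len(pool) == 0 else int(rng.choice(pool))  # fallback: self
        Xe.append(alpha * X[i] + (1 - alpha) * X[j])        # step 4
        Ye.append(alpha * Y[i] + (1 - alpha) * Y[j])
    return np.array(Xe), np.array(Ye)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = np.eye(2)                           # one sample per class
Xe, Ye = e_mixup(X, Y, alpha=0.6)
# each output mixes the two classes: labels become [0.6, 0.4] and [0.4, 0.6]
```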
According to the theoretical bound in Eq. (5), we need to solve the following
problem
$\begin{split}\min_{\mathbf{C}\in\mathcal{H},\mathbf{G}\in\mathcal{G}}\big{(}\widehat{R}_{s}^{\ell_{s}}(\mathbf{C}\circ\mathbf{G})&+d^{\ell_{s}\ell_{t}}_{\mathbf{C},\mathcal{H}}(\widehat{P}_{\mathbf{G}(X_{s})},\widehat{P}_{\mathbf{G}(X_{t})})\\\
&+\min_{\mathbf{C}^{*}\in\mathcal{H}}\widetilde{\Lambda}^{\ell_{s}\ell_{t}}(\mathbf{C}^{*},\mathbf{G})\big{)},\end{split}$
where $d^{\ell_{s}\ell_{t}}_{\mathbf{C},\mathcal{H}}$ is the double loss
disparity discrepancy defined in Eq. (1) and
$\widetilde{\Lambda}^{\ell_{s}\ell_{t}}(\mathbf{C}^{*},\mathbf{G})$ is defined
in Eq. (9).
Minimizing the double loss disparity discrepancy is a minimax game, since it
is defined as a supremum over the hypothesis space $\mathcal{H}$. Thus, we
revise the above problem as follows:
$\begin{split}\min_{\mathbf{C}\in\mathcal{H},\mathbf{G}\in\mathcal{G}}&\big{(}\widehat{R}_{s}^{\ell_{s}}(\mathbf{C}\circ\mathbf{G})+\widetilde{\Lambda}^{\ell_{s}\ell_{t}}(\widetilde{\mathbf{C}},\mathbf{G})\\\
+&\eta
d_{\mathbf{C},\mathbf{C}^{\prime},\gamma}^{\ell_{s}\ell_{t}}(\widehat{P}_{\mathbf{G}(X_{s})},\widehat{P}_{\mathbf{G}(X_{t})})\big{)},\end{split}$
(10)
where $\gamma,\eta$ are parameters to make our model more flexible,
$\begin{split}&d_{\mathbf{C},\mathbf{D},\gamma}^{\ell_{s}\ell_{t}}(\widehat{P}_{\mathbf{G}(X_{s})},\widehat{P}_{\mathbf{G}(X_{t})})\\\
&=\widehat{R}^{\ell_{t}}_{t}(\mathbf{D}\circ\mathbf{G},\mathbf{C}\circ\mathbf{G})-\gamma\widehat{R}^{\ell_{s}}_{s}(\mathbf{D}\circ\mathbf{G},\mathbf{C}\circ\mathbf{G}),\\\
\mathbf{C}^{\prime}&=\operatorname*{argmax}_{\mathbf{D}\in\mathcal{H}}~{}~{}d_{\mathbf{C},\mathbf{D},\gamma}^{\ell_{s}\ell_{t}}(\widehat{P}_{\mathbf{G}(X_{s})},\widehat{P}_{\mathbf{G}(X_{t})}),\\\
\widetilde{\mathbf{C}}&=\operatorname*{argmin}_{\mathbf{C}^{*}\in\mathcal{H}}~{}~{}\widetilde{\Lambda}^{\ell_{s}\ell_{t}}(\mathbf{C}^{*},\mathbf{G}).\end{split}$
(11)
To solve problem (10), we construct a deep network whose architecture is
shown in Fig. 2(b); it consists of a generator $\mathbf{G}$, a discriminator
$\mathbf{D}$, and two classifiers $\mathbf{C},\mathbf{C}^{*}$. Next, we
introduce the details of our method.
We use standard cross-entropy as the source loss $\ell_{s}$ and use modified
cross-entropy (Goodfellow et al. 2014; Zhang et al. 2019a) as the target loss
$\ell_{t}$.
For any scoring functions
$\mathbf{F},\mathbf{F}^{\prime}:\mathcal{X}\rightarrow\mathbb{R}^{K}$,
$\begin{split}&\ell_{s}(\mathbf{F}(\mathbf{x}),\mathbf{F}^{\prime}(\mathbf{x})):=-\log(\sigma_{{{h}^{\prime}(\mathbf{x})}}(\mathbf{F}(\mathbf{x}))),\\\
&\ell_{t}(\mathbf{F}(\mathbf{x}),\mathbf{F}^{\prime}(\mathbf{x})):=\log(1-\sigma_{{{h}^{\prime}(\mathbf{x})}}(\mathbf{F}(\mathbf{x}))),\end{split}$
(12)
where $\sigma$ is the softmax function: for any
$\mathbf{y}=[y_{1},...,y_{K}]\in\mathbb{R}^{K}$,
$\sigma_{c}(\mathbf{y})=\frac{\exp({{y}_{c}})}{\sum_{k=1}^{K}\exp({{y}_{k}})},\quad c=1,...,K,$
and
${h}^{\prime}(\mathbf{x})=\operatorname*{argmax}_{c\in\\{1,...,K\\}}F^{\prime}_{c}(\mathbf{x}),$
(13)
where $F^{\prime}_{c}$ is the $c$-th coordinate function of
$\mathbf{F}^{\prime}$.
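The two losses in Eq. (12) can be written out directly (a numpy sketch; the scoring functions are plain vectors here rather than network outputs, and the example scores are arbitrary):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())            # shifted for numerical stability
    return e / e.sum()

def ell_s(F_x, Fp_x):
    """Standard cross-entropy of F against the argmax class h'(x) of F' (Eq. (13))."""
    c = int(np.argmax(Fp_x))
    return -np.log(softmax(F_x)[c])

def ell_t(F_x, Fp_x):
    """Modified cross-entropy (Goodfellow et al. 2014; Zhang et al. 2019a)."""
    c = int(np.argmax(Fp_x))
    return np.log(1.0 - softmax(F_x)[c])

F_x  = np.array([2.0, 0.0, 0.0])       # scores of F at some x
Fp_x = np.array([5.0, 1.0, 1.0])       # F' also prefers class 1
print(ell_s(F_x, Fp_x) > 0)            # cross-entropy is positive
print(ell_t(F_x, Fp_x) < 0)            # log(1 - p) is negative
```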
Source risk. Given the source samples
$\\{(\mathbf{x}^{i}_{s},\mathbf{y}_{s}^{i})\\}_{i=1}^{n_{s}}$, then
$\widehat{R}^{\ell_{s}}_{s}(\mathbf{C}\circ\mathbf{G})=\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}-\log(\sigma_{{y}_{s}^{i}}(\mathbf{C}\circ\mathbf{G}(\mathbf{x}^{i}_{s}))),$
(14)
where ${y}_{s}^{i}$ is the label corresponding to one-hot vector
$\mathbf{y}_{s}^{i}$.
Double loss disparity discrepancy. Given the source and target samples
$\\{(\mathbf{x}^{i}_{s},\mathbf{y}_{s}^{i})\\}_{i=1}^{n_{s}},\\{\mathbf{x}^{i}_{t}\\}_{i=1}^{n_{t}}$,
then
$\begin{split}&d^{\ell_{s}\ell_{t}}_{\mathbf{C},\mathbf{D},\gamma}(\widehat{P}_{\mathbf{G}(X_{s})},\widehat{P}_{\mathbf{G}(X_{t})})\\\
=&-\frac{\gamma}{n_{s}}\sum_{i=1}^{n_{s}}\ell_{s}(\mathbf{D}\circ\mathbf{G}(\mathbf{x}^{i}_{s}),\mathbf{C}\circ\mathbf{G}(\mathbf{x}^{i}_{s}))\\\
&+\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}\ell_{t}(\mathbf{D}\circ\mathbf{G}(\mathbf{x}^{i}_{t}),\mathbf{C}\circ\mathbf{G}(\mathbf{x}^{i}_{t})),\end{split}$
(15)
where $\ell_{s},\ell_{t}$ are defined in Eq. (12).
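The estimator in Eq. (15) is just two sample averages; a minimal sketch with generic callables standing in for the networks (the identity transforms and the squared loss below are only placeholders for the losses of Eq. (12)):

```python
import numpy as np

def disparity_discrepancy(D, C, G, Xs, Xt, ell_s, ell_t, gamma=1.0):
    """Empirical double loss disparity discrepancy of Eq. (15): the target
    term minus gamma times the source term, both comparing D∘G with C∘G."""
    src = np.mean([ell_s(D(G(x)), C(G(x))) for x in Xs])
    tgt = np.mean([ell_t(D(G(x)), C(G(x))) for x in Xt])
    return tgt - gamma * src

# toy check with identity transforms and a squared loss as a stand-in
G  = lambda x: x
C  = lambda z: z
D  = lambda z: z + 1.0
sq = lambda a, b: float(np.sum((a - b) ** 2))
Xs = [np.zeros(2)]
Xt = [np.zeros(2)]
print(disparity_discrepancy(D, C, G, Xs, Xt, sq, sq, gamma=0.5))  # 2 - 0.5*2 = 1.0
```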
Combined risk. As discussed in the Motivation section, the combined risk
cannot be optimized directly. To mitigate this problem, we use the proxy
$\widetilde{\Lambda}^{\ell_{s}\ell_{t}}$ in Eq. (9) as a substitute.
Further, motivated by (Berthelot et al. 2019), we use the mean square error
($\ell_{mse}$) to calculate the proxy of the combined risk because, unlike
the cross-entropy loss, $\ell_{mse}$ is bounded and less sensitive to
incorrect predictions. Denote by
$\\{(\mathbf{x}^{i}_{e},\mathbf{y}^{i}_{e})\\}_{i=1}^{n}$ the output of
e-mixup. Then the proxy is calculated by
$\begin{split}\widetilde{\Lambda}^{\ell_{mse}}(\mathbf{C}^{*},\mathbf{G})=&\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\ell_{mse}(\mathbf{C}^{*}\circ\mathbf{G}(\mathbf{x}_{s}^{i}),\mathbf{y}_{s}^{i})\\\
&+\frac{1}{n}\sum_{i=1}^{n}\ell_{mse}(\mathbf{C}^{*}\circ\mathbf{G}(\mathbf{x}_{e}^{i}),\mathbf{y}_{e}^{i}).\end{split}$
(16)
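Eq. (16) then reduces to two mean-squared-error averages, one over the source batch and one over the e-mixup outputs (numpy sketch; the identity generator and classifier are toy stand-ins):

```python
import numpy as np

def mse(p, y):
    """Mean squared error between a prediction vector and a label vector."""
    return float(np.mean((p - y) ** 2))

def proxy_combined_risk(Cstar, G, Xs, Ys, Xe, Ye):
    """Proxy of the combined risk, Eq. (16): source-batch MSE plus e-mixup MSE."""
    src = np.mean([mse(Cstar(G(x)), y) for x, y in zip(Xs, Ys)])
    mix = np.mean([mse(Cstar(G(x)), y) for x, y in zip(Xe, Ye)])
    return src + mix

# toy check: a classifier that already outputs the labels gives a zero proxy
G = lambda x: x
Cstar = lambda z: z
Xs = Ys = [np.array([1.0, 0.0])]
Xe = Ye = [np.array([0.6, 0.4])]
print(proxy_combined_risk(Cstar, G, Xs, Ys, Xe, Ye))  # 0.0
```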
Input: source, target samples
$\\{(\mathbf{x}^{i}_{s},\mathbf{y}^{i}_{s})\\}_{i=1}^{n_{s}}$,$\\{\mathbf{x}^{i}_{t}\\}_{i=1}^{n_{t}}$.
Parameter: learning rate $l$, batch size $n_{b}$, the number of iterations $T$,
network parameters $\mathbf{\Theta}$.
Output: the predicted target label $\widehat{\mathbf{y}}_{t}$.
1: Initialize $\mathbf{\Theta}$
2: for $j=1,2,\dots,T$ do
3: Fetch source minibatch $D_{s}^{m}$
4: Fetch target minibatch $D_{t}^{m}$
5: Calculate $\widehat{R}^{\ell_{s}}_{s}$ using $D_{s}^{m}$ (Eq. (14))
6: Calculate $d^{\ell_{s}\ell_{t}}_{\mathbf{C},\mathbf{D},\gamma}$ using
$D_{s}^{m},D_{t}^{m}$ (Eq. (15))
7: Obtain highly confident target samples
$\\{(\mathbf{x}^{i}_{\mathcal{T}},\mathbf{y}^{i}_{\mathcal{T}})\\}_{i=1}^{n_{\mathcal{T}}}$
predicted by $\mathbf{C}\circ\mathbf{G}$ on $D_{t}^{m}$
8:
$\\{(\mathbf{x}^{i},\mathbf{y}^{i})\\}_{i=1}^{n}=D_{s}^{m}\cup\\{(\mathbf{x}^{i}_{\mathcal{T}},\mathbf{y}^{i}_{\mathcal{T}})\\}_{i=1}^{n_{\mathcal{T}}}$
9: $\\{(\mathbf{x}^{i}_{e},\mathbf{y}^{i}_{e})\\}_{i=1}^{n}$ =
e-mixup($\\{(\mathbf{x}^{i},\mathbf{y}^{i})\\}_{i=1}^{n}$)
10: Calculate $\widetilde{\Lambda}^{\ell_{mse}}$ using
$D_{s}^{m},\\{(\mathbf{x}^{i}_{e},\mathbf{y}^{i}_{e})\\}_{i=1}^{n}$ (Eq. (16))
11: Update $\mathbf{\Theta}$ according to Eq. (17)
12: end for
Algorithm 2 The training procedure of E-MixNet
Table 1: Results on Office-31 (ResNet-50)
Method | A$\rightarrow$W | D$\rightarrow$W | W$\rightarrow$D | A$\rightarrow$D | D$\rightarrow$A | W$\rightarrow$A | Avg
---|---|---|---|---|---|---|---
ResNet-50 (He et al. 2016) | 68.4$\pm$0.2 | 96.7$\pm$0.1 | 99.3$\pm$0.1 | 68.9$\pm$0.2 | 62.5$\pm$0.3 | 60.7$\pm$0.3 | 76.1
DAN (Long et al. 2015) | 80.5$\pm$0.4 | 97.1$\pm$0.2 | 99.6$\pm$0.1 | 78.6$\pm$0.2 | 63.6$\pm$0.3 | 62.8$\pm$0.2 | 80.4
RTN (Long et al. 2016) | 84.5$\pm$0.2 | 96.8$\pm$0.1 | 99.4$\pm$0.1 | 77.5$\pm$0.3 | 66.2$\pm$0.2 | 64.8$\pm$0.3 | 81.6
DANN (Ganin et al. 2016) | 82.0$\pm$0.4 | 96.9$\pm$0.2 | 99.1$\pm$0.1 | 79.7$\pm$0.4 | 68.2$\pm$0.4 | 67.4$\pm$0.5 | 82.2
ADDA (Tzeng et al. 2017) | 86.2$\pm$0.5 | 96.2$\pm$0.3 | 98.4$\pm$0.3 | 77.8$\pm$0.3 | 69.5$\pm$0.4 | 68.9$\pm$0.5 | 82.9
JAN (Long et al. 2013) | 86.0$\pm$0.4 | 96.7$\pm$0.3 | 99.7$\pm$0.1 | 85.1$\pm$0.4 | 69.2$\pm$0.3 | 70.7$\pm$0.5 | 84.6
MADA (Pei et al. 2018) | 90.0$\pm$0.1 | 97.4$\pm$0.1 | 99.6$\pm$0.1 | 87.8$\pm$0.2 | 70.3$\pm$0.3 | 66.4$\pm$0.3 | 85.2
SimNet (Pinheiro 2018) | 88.6$\pm$0.5 | 98.2$\pm$0.2 | 99.7$\pm$0.2 | 85.3$\pm$0.3 | 73.4$\pm$0.8 | 71.8$\pm$0.6 | 86.2
MCD (Saito et al. 2018) | 89.6$\pm$0.2 | 98.5$\pm$0.1 | 100.0$\pm$.0 | 91.3$\pm$0.2 | 69.6$\pm$0.1 | 70.8$\pm$0.3 | 86.6
CDAN+E (Long et al. 2018) | 94.1$\pm$0.1 | 98.6$\pm$0.1 | 100.0$\pm$.0 | 92.9$\pm$0.2 | 71.0$\pm$0.3 | 69.3$\pm$0.3 | 87.7
SymNets (Zhang et al. 2019b) | 90.8$\pm$0.1 | 98.8$\pm$0.3 | 100.0$\pm$.0 | 93.9$\pm$0.5 | 74.6$\pm$0.6 | 72.5$\pm$0.5 | 88.4
MDD (Zhang et al. 2019a) | 94.5$\pm$0.3 | 98.4$\pm$0.1 | 100.0$\pm$.0 | 93.5$\pm$0.2 | 74.6$\pm$0.3 | 72.2$\pm$0.1 | 88.9
E-MixNet | 93.0$\pm$0.3 | 99.0$\pm$0.1 | 100.0$\pm$.0 | 95.6$\pm$0.2 | 78.9$\pm$0.5 | 74.7$\pm$0.7 | 90.2
Table 2: Results on Image-CLEF (ResNet-50)

Method | I$\rightarrow$P | P$\rightarrow$I | I$\rightarrow$C | C$\rightarrow$I | C$\rightarrow$P | P$\rightarrow$C | Avg
---|---|---|---|---|---|---|---
ResNet-50 (He et al. 2016) | 74.8$\pm$0.3 | 83.9$\pm$0.1 | 91.5$\pm$0.3 | 78.0$\pm$0.2 | 65.5$\pm$0.3 | 91.2$\pm$0.3 | 80.7
DAN (Long et al. 2015) | 74.5$\pm$0.4 | 82.2$\pm$0.2 | 92.8$\pm$0.2 | 86.3$\pm$0.4 | 69.2$\pm$0.4 | 89.8$\pm$0.4 | 82.5
DANN (Ganin et al. 2016) | 75.0$\pm$0.6 | 86.0$\pm$0.3 | 96.2$\pm$0.4 | 87.0$\pm$0.5 | 74.3$\pm$0.5 | 91.5$\pm$0.6 | 85.0
JAN (Long et al. 2013) | 76.8$\pm$0.4 | 88.0$\pm$0.2 | 94.7$\pm$0.2 | 89.5$\pm$0.3 | 74.2$\pm$0.3 | 91.7$\pm$0.3 | 85.8
MADA (Pei et al. 2018) | 75.0$\pm$0.3 | 87.9$\pm$0.2 | 96.0$\pm$0.3 | 88.8$\pm$0.3 | 75.2$\pm$0.2 | 92.2$\pm$0.3 | 85.8
CDAN+E (Long et al. 2018) | 77.7$\pm$0.3 | 90.7$\pm$0.2 | 97.7$\pm$0.3 | 91.3$\pm$0.3 | 74.2$\pm$0.2 | 94.3$\pm$0.3 | 87.7
SymNets (Zhang et al. 2019b) | 80.2$\pm$0.3 | 93.6$\pm$0.2 | 97.0$\pm$0.3 | 93.4$\pm$0.3 | 78.7$\pm$0.3 | 96.4$\pm$0.1 | 89.9
E-MixNet | 80.5$\pm$0.4 | 96.0$\pm$0.1 | 97.7$\pm$0.3 | 95.2$\pm$0.4 | 79.9$\pm$0.2 | 97.0$\pm$0.3 | 91.0
#### Training Procedure
Finally, the UDA problem can be solved by the following minimax game.
$\begin{split}&\min_{\mathbf{C}\in\mathcal{H},\mathbf{G}\in\mathcal{G}}\big(\widehat{R}_{s}^{\ell_{s}}(\mathbf{C}\circ\mathbf{G})+\widetilde{\Lambda}^{\ell_{mse}}(\mathbf{C}^{*},\mathbf{G})+\eta\,d_{\mathbf{C},\mathbf{D},\gamma}^{\ell_{s}\ell_{t}}(\widehat{P}_{\mathbf{G}(X_{s})},\widehat{P}_{\mathbf{G}(X_{t})})\big),\\&\max_{\mathbf{D}\in\mathcal{H}}\,d_{\mathbf{C},\mathbf{D},\gamma}^{\ell_{s}\ell_{t}}(\widehat{P}_{\mathbf{G}(X_{s})},\widehat{P}_{\mathbf{G}(X_{t})}),\\&\min_{\mathbf{C}^{*}\in\mathcal{H}}\,\widetilde{\Lambda}^{\ell_{mse}}(\mathbf{C}^{*},\mathbf{G}).\end{split}$
(17)
The training procedure is shown in Algorithm 2.
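The data-side steps of the training loop (pseudo-labelling the target batch, taking the union with the source batch, and mixing the result) can be sketched as follows. This is a minimal NumPy sketch: the `pseudo_label`/`mix_step` names, the one-hot labelling, and the `max(lam, 1 - lam)` bias are illustrative assumptions rather than the paper's exact e-mixup definition, which is given earlier in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_label(X_t, classifier, n_classes):
    """Label the target batch with the current model C∘G
    (here any callable returning class scores)."""
    scores = classifier(X_t)
    return np.eye(n_classes)[scores.argmax(axis=1)]   # one-hot pseudo labels

def mix_step(X_s, Y_s, X_t, Y_t, alpha=0.6):
    """Union of the source batch and the pseudo-labelled target batch,
    then a convex combination of sample pairs (a mixup-style stand-in for
    the paper's e-mixup; the max(lam, 1-lam) bias is our assumption)."""
    X = np.concatenate([X_s, X_t]); Y = np.concatenate([Y_s, Y_t])
    lam = rng.beta(alpha, alpha, size=len(X))
    lam = np.maximum(lam, 1.0 - lam)[:, None]         # keep the first sample dominant
    idx = rng.permutation(len(X))
    return lam * X + (1 - lam) * X[idx], lam * Y + (1 - lam) * Y[idx]

# toy demonstration with random 5-dimensional "features" and 3 classes
X_s = rng.normal(size=(8, 5)); Y_s = np.eye(3)[rng.integers(0, 3, size=8)]
X_t = rng.normal(size=(6, 5))
Y_t = pseudo_label(X_t, lambda X: X @ rng.normal(size=(5, 3)), n_classes=3)
X_mix, Y_mix = mix_step(X_s, Y_s, X_t, Y_t)   # mixed labels remain convex
```

Because each mixed label is a convex combination of one-hot vectors, every row of `Y_mix` still sums to one, which is what the MSE-based proxy consumes.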
## Experiments
Table 3: Results on Office-Home (ResNet-50)

Method | A$\rightarrow$C | A$\rightarrow$P | A$\rightarrow$R | C$\rightarrow$A | C$\rightarrow$P | C$\rightarrow$R | P$\rightarrow$A | P$\rightarrow$C | P$\rightarrow$R | R$\rightarrow$A | R$\rightarrow$C | R$\rightarrow$P | Avg
---|---|---|---|---|---|---|---|---|---|---|---|---|---
ResNet-50 (He et al. 2016) | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1
DAN (Long et al. 2015) | 54.0 | 68.6 | 75.9 | 56.4 | 66.0 | 67.9 | 57.1 | 50.3 | 74.7 | 68.8 | 55.8 | 80.6 | 64.7
DANN (Ganin et al. 2016) | 44.1 | 66.5 | 74.6 | 57.9 | 62.0 | 67.2 | 55.7 | 40.9 | 73.5 | 67.5 | 47.9 | 77.7 | 61.3
JAN (Long et al. 2013) | 45.9 | 61.2 | 68.9 | 50.4 | 59.7 | 61.0 | 45.8 | 43.4 | 70.3 | 63.9 | 52.4 | 76.8 | 58.3
CDAN+E (Long et al. 2018) | 47.0 | 69.4 | 75.8 | 61.0 | 68.8 | 70.8 | 60.2 | 47.1 | 77.9 | 70.8 | 51.4 | 81.7 | 65.2
SymNets (Zhang et al. 2019b) | 46.0 | 73.8 | 78.2 | 64.1 | 69.7 | 74.2 | 63.2 | 48.9 | 80.0 | 74.0 | 51.6 | 82.9 | 67.2
MDD (Zhang et al. 2019a) | 54.9 | 73.7 | 77.8 | 60.0 | 71.4 | 71.8 | 61.2 | 53.6 | 78.1 | 72.5 | 60.2 | 82.3 | 68.1
E-MixNet | 57.7 | 76.6 | 79.8 | 63.6 | 74.1 | 75.0 | 63.4 | 56.4 | 79.7 | 72.8 | 62.4 | 85.5 | 70.6
Table 4: The results of combination experiments on Office-Home (ResNet-50)

Method | A$\rightarrow$C | A$\rightarrow$P | A$\rightarrow$R | C$\rightarrow$A | C$\rightarrow$P | C$\rightarrow$R | P$\rightarrow$A | P$\rightarrow$C | P$\rightarrow$R | R$\rightarrow$A | R$\rightarrow$C | R$\rightarrow$P | Avg
---|---|---|---|---|---|---|---|---|---|---|---|---|---
DAN (Long et al. 2015) | 54.0 | 68.6 | 75.9 | 56.4 | 66.0 | 67.9 | 57.1 | 50.3 | 74.7 | 68.8 | 55.8 | 80.6 | 64.7
DAN+$\widetilde{\Lambda}^{\ell_{mse}}$ | 57.0 | 71.0 | 77.9 | 59.9 | 72.6 | 70.1 | 58.1 | 57.1 | 77.3 | 72.7 | 64.7 | 84.6 | 68.6
DANN (Ganin et al. 2016) | 44.1 | 66.5 | 74.6 | 57.9 | 62.0 | 67.2 | 55.7 | 40.9 | 73.5 | 67.5 | 47.9 | 77.7 | 61.3
DANN+$\widetilde{\Lambda}^{\ell_{mse}}$ | 50.9 | 69.6 | 77.8 | 61.9 | 70.7 | 71.6 | 60.0 | 49.5 | 78.4 | 71.8 | 55.7 | 83.7 | 66.8
CDAN+E (Long et al. 2018) | 47.0 | 69.4 | 75.8 | 61.0 | 68.8 | 70.8 | 60.2 | 47.1 | 77.9 | 70.8 | 51.4 | 81.7 | 65.2
CDAN+E+$\widetilde{\Lambda}^{\ell_{mse}}$ | 49.5 | 70.1 | 77.8 | 64.3 | 71.3 | 74.2 | 61.6 | 50.6 | 80.0 | 73.5 | 56.6 | 84.1 | 67.8
SymNets (Zhang et al. 2019b) | 46.0 | 73.8 | 78.2 | 64.1 | 69.7 | 74.2 | 63.2 | 48.9 | 80.0 | 74.0 | 51.6 | 82.9 | 67.2
SymNets+$\widetilde{\Lambda}^{\ell_{mse}}$ | 48.8 | 74.7 | 79.7 | 64.9 | 72.5 | 75.6 | 63.9 | 47.0 | 80.8 | 73.9 | 52.4 | 83.9 | 68.2
Table 5: Ablation experiments on Image-CLEF

s | t | m | e | I$\rightarrow$P | P$\rightarrow$I | I$\rightarrow$C | C$\rightarrow$I | C$\rightarrow$P | P$\rightarrow$C | Avg
---|---|---|---|---|---|---|---|---|---|---
| | | | 80.2 | 94.2 | 96.7 | 94.7 | 79.2 | 95.5 | 90.1
| ${\surd}$ | | | 79.9 | 92.2 | 97.7 | 93.8 | 79.4 | 96.5 | 89.9
| ${\surd}$ | ${\surd}$ | | 79.7 | 93.7 | 97.5 | 94.5 | 79.7 | 96.2 | 90.2
${\surd}$ | ${\surd}$ | ${\surd}$ | | 79.4 | 95.0 | 97.8 | 94.8 | 81.4 | 96.5 | 90.8
${\surd}$ | ${\surd}$ | | ${\surd}$ | 80.5 | 96.0 | 97.7 | 95.2 | 79.9 | 97.0 | 91.0
We evaluate E-MixNet on three public datasets and compare it with several
existing state-of-the-art methods. Code will be available at
https://github.com/zhonglii/E-MixNet.
### Datasets
Three common UDA datasets are used to evaluate the efficacy of E-MixNet.
Office-31 (Saenko et al. 2010) is an object recognition dataset with $4,110$
images, which consists of three domains with a slight discrepancy: amazon (A),
dslr (D) and webcam (W). Each domain contains $31$ classes of objects, so there
are $6$ domain adaptation tasks on Office-31: A $\rightarrow$ D, A
$\rightarrow$ W, D $\rightarrow$ A, D $\rightarrow$ W, W $\rightarrow$ A, W
$\rightarrow$ D.
Office-Home (Venkateswara et al. 2017) is an object recognition dataset with
$15,500$ images, which contains four domains with more obvious domain
discrepancy than Office-31. These domains are Artistic (A), Clipart (C),
Product (P), Real-World (R). Each domain contains $65$ classes of objects, so
there are $12$ domain adaptation tasks on Office-Home: A $\rightarrow$ C, A
$\rightarrow$ P, A $\rightarrow$ R, …, R $\rightarrow$ P.
ImageCLEF-DA (http://imageclef.org/2014/adaptation/) is a dataset organized by
selecting the 12 common classes shared by three public datasets (domains):
Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P). We permute
all three domains and build six transfer tasks: I$\rightarrow$P,
P$\rightarrow$I, I$\rightarrow$C, C$\rightarrow$I, C$\rightarrow$P,
P$\rightarrow$C.
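Each benchmark's task list is simply the set of ordered (source, target) pairs of distinct domains, which can be generated mechanically:

```python
from itertools import permutations

def transfer_tasks(domains):
    """All ordered source -> target pairs of distinct domains."""
    return [f"{s}->{t}" for s, t in permutations(domains, 2)]

imageclef = transfer_tasks(["I", "P", "C"])        # the 6 ImageCLEF-DA tasks
officehome = transfer_tasks(["A", "C", "P", "R"])  # the 12 Office-Home tasks
office31 = transfer_tasks(["A", "D", "W"])         # the 6 Office-31 tasks
```

Three domains yield $3\times 2=6$ tasks and four domains yield $4\times 3=12$, matching the task counts quoted above.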
### Experimental Setup
Following the standard protocol for unsupervised domain adaptation in (Ganin
et al. 2016; Long et al. 2018), all labeled source samples and unlabeled
target samples are used in the training process and we report the average
classification accuracy over three random runs. $\gamma$ in Eq.
(15) is selected from $\{2, 4, 8\}$; it is set to 2 for Office-Home, 4 for
Office-31, and 8 for Image-CLEF.
ResNet-50 (He et al. 2016) pretrained on ImageNet is employed as the backbone
network ($\mathbf{G}$). $\mathbf{C}$, $\mathbf{D}$ and $\mathbf{C}^{*}$ each
consist of two fully connected layers with 1024 hidden units. A gradient
reversal layer between $\mathbf{G}$ and $\mathbf{D}$ is employed for adversarial
training. The algorithm is implemented in PyTorch. Mini-batch stochastic
gradient descent with momentum 0.9 is employed as the optimizer, and the
learning rate is adjusted by $l_{i}=l_{0}(1+\delta i)^{-\beta}$, where $i$
increases linearly from 0 to 1 during the training process, $\delta=10$, and
$l_{0}=0.04$. We follow (Zhang et al. 2019a) and employ a progressive strategy
for $\eta$: $\eta=\frac{2\eta_{0}}{1+\exp(-\delta i)}-\eta_{0}$, where $\eta_{0}$ is
set to 0.1. The $\alpha$ in e-mixup is set to 0.6 in all experiments.
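The two schedules can be written out as follows. Note two assumptions: the exponent $\beta$ of the learning-rate decay is not given in the text, so the value 0.75 (the choice popularized by Ganin et al. 2016 for this schedule) is assumed; and the $\eta$ formula is taken in its standard progressive form, growing smoothly from 0 to roughly $\eta_0$, since the printed expression appears garbled.

```python
import math

def lr_schedule(i, l0=0.04, delta=10.0, beta=0.75):
    """Learning-rate annealing l_i = l0 * (1 + delta*i)^(-beta) for training
    progress i in [0, 1]; beta = 0.75 is an assumed value, not from the text."""
    return l0 * (1 + delta * i) ** (-beta)

def eta_schedule(i, eta0=0.1, delta=10.0):
    """Progressive trade-off weight: 0 at the start of training, ~eta0 at the
    end, so the transfer loss is phased in gradually."""
    return 2 * eta0 / (1 + math.exp(-delta * i)) - eta0

# sanity checks on the endpoints
assert abs(lr_schedule(0.0) - 0.04) < 1e-12   # starts at l0
assert lr_schedule(1.0) < lr_schedule(0.0)    # decays over training
assert abs(eta_schedule(0.0)) < 1e-12         # transfer loss starts switched off
assert abs(eta_schedule(1.0) - 0.1) < 1e-3    # approaches eta0
```

Phasing $\eta$ in this way keeps the (initially unreliable) adversarial term from dominating early training, when the feature extractor is still close to its ImageNet initialization.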
### Results
The results on Office-31 are reported in Table 1. E-MixNet achieves the best
results and exceeds the baselines for 4 of 6 tasks. Compared to the
competitive baseline MDD, E-MixNet surpasses it by 4.3% for the difficult task
D $\rightarrow$ A.
The results on Image-CLEF are reported in Table 2. E-MixNet significantly
outperforms the baselines for 5 of 6 tasks. For the hard task C $\rightarrow$
P, E-MixNet surpasses the competitive baseline SymNets by 2.7%.
The results on Office-Home are reported in Table 3. Although Office-Home is a
challenging dataset, E-MixNet still achieves better performance than all the
baselines for 9 of 12 tasks. For the difficult tasks A $\rightarrow$ C, P
$\rightarrow$ A, and R $\rightarrow$ C, E-MixNet has significant advantages.
In order to further verify the efficacy of the proposed proxy of the combined
risk, we add the proxy into the loss functions of four representative UDA
methods. As shown in Fig. 2(a), we add a new classifier that is the same as
the classifier in the original method to formulate the proxy of the combined
risk. The results are shown in Table 4. The four methods can achieve better
performance after optimizing the proxy. It is worth noting that DANN obtains a
5.5 percentage-point increase. These experiments demonstrate that the combined
risk plays a crucial role for methods that aim to learn a domain-invariant
representation, and that the proxy can indeed curb the increase of the combined
risk.
### Ablation Study and Parameter Analysis
Ablation Study. To further verify the efficacy of the proxy of the combined
risk when calculated with mixup and e-mixup respectively, we run ablation
experiments, shown in Table 5, where s indicates that the source samples are introduced to
augment the target samples, t indicates augmenting the target samples, m
denotes mixup, and e denotes e-mixup. Table 5 shows that E-MixNet achieves the
best performance, which further indicates that we can effectively control the
combined risk via the proxy $\widetilde{\Lambda}^{{\ell}_{mse}}$.
Parameter analysis. Here we study how the parameter $\gamma$ affects the
performance, and compare _mean square error_ (MSE) with cross-entropy as the
loss for the proxy of the combined risk. Firstly, as shown in Fig. 3(a), a
relatively larger $\gamma$ can obtain better performance and faster
convergence. Secondly, when mixup interpolates between two samples, the accuracy
of the pseudo labels of the target samples is particularly important. To guard
against such adversarial samples, MSE is employed as a substitute for cross-
entropy. As shown in Fig. 3(b), MSE yields more stable and better performance.
Furthermore, the $\mathcal{A}$-distance is an important indicator of the
distribution discrepancy, defined as $dis_{\mathcal{A}}=2(1-2\epsilon)$, where
$\epsilon$ is the test error. As shown in Fig. 3(c), E-MixNet achieves a
smaller $\mathcal{A}$-distance and hence better adaptation, implying the
efficiency of the proposed proxy.
Figure 3: The impact of $\gamma$ is shown in (a). The impact of the loss
functions for the proxy of the combined risk is shown in (b). A comparison of
the $\mathcal{A}$-distance is shown in (c).
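The $\mathcal{A}$-distance proxy used in Fig. 3(c) is a one-line computation from the test error of a classifier trained to separate source features from target features:

```python
def a_distance(test_error):
    """Proxy A-distance d_A = 2 * (1 - 2*eps), where eps is the test error of
    a domain classifier. eps = 0.5 (domains indistinguishable) gives 0;
    eps = 0 (domains fully separable) gives the maximum value 2."""
    return 2.0 * (1.0 - 2.0 * test_error)

assert a_distance(0.5) == 0.0    # perfectly aligned feature distributions
assert a_distance(0.0) == 2.0    # completely separable distributions
assert a_distance(0.25) == 1.0
```

A smaller value therefore indicates a smaller distribution discrepancy in the learned feature space, which is the sense in which E-MixNet "achieves better adaptation" above.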
## Conclusion
Though numerous UDA methods have been proposed and have achieved significant
success, the issue caused by the combined risk has not been brought to the
forefront, and none of the proposed methods solve the problem. Theorem 3
reveals that the combined risk is deeply related to the conditional
distribution discrepancy and plays a crucial role in transfer performance.
Furthermore, we propose a method termed E-MixNet, which employs enhanced mixup
to calculate a proxy of the combined risk. Experiments show that our method
achieves performance comparable to existing state-of-the-art methods, and that
the performance of four representative methods can be boosted by adding the
proxy to their loss functions.
## Acknowledgments
The work presented in this paper was supported by the Australian Research
Council (ARC) under DP170101632 and FL190100149. The first author particularly
thanks the support of UTS-AAII during his visit.
## References
* Ben-David et al. (2010) Ben-David, S.; Blitzer, J.; Crammer, K.; Kulesza, A.; Pereira, F.; and Vaughan, J. W. 2010. A theory of learning from different domains. _Machine learning_ 79(1-2): 151–175.
* Ben-David et al. (2007) Ben-David, S.; Blitzer, J.; Crammer, K.; and Pereira, F. 2007. Analysis of representations for domain adaptation. In _NeurIPS_ , 137–144.
* Berthelot et al. (2019) Berthelot, D.; Carlini, N.; Goodfellow, I.; Papernot, N.; Oliver, A.; and Raffel, C. A. 2019. Mixmatch: A holistic approach to semi-supervised learning. In _NeurIPS_ , 5049–5059.
* Dong et al. (2019) Dong, J.; Cong, Y.; Sun, G.; and Hou, D. 2019. Semantic-Transferable Weakly-Supervised Endoscopic Lesions Segmentation. In _ICCV_ , 10711–10720.
* Dong et al. (2020a) Dong, J.; Cong, Y.; Sun, G.; Liu, Y.; and Xu, X. 2020a. CSCL: Critical Semantic-Consistent Learning for Unsupervised Domain Adaptation. In Vedaldi, A.; Bischof, H.; Brox, T.; and Frahm, J.-M., eds., _ECCV_ , 745–762. Cham: Springer International Publishing. ISBN 978-3-030-58598-3.
* Dong et al. (2020b) Dong, J.; Cong, Y.; Sun, G.; Yang, Y.; Xu, X.; and Ding, Z. 2020b. Weakly-Supervised Cross-Domain Adaptation for Endoscopic Lesions Segmentation. _IEEE Transactions on Circuits and Systems for Video Technology_ 1–1. doi:10.1109/TCSVT.2020.3016058.
* Dong et al. (2020c) Dong, J.; Cong, Y.; Sun, G.; Zhong, B.; and Xu, X. 2020c. What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic Lesions Segmentation. In _CVPR_ , 4022–4031.
* Fang et al. (2020) Fang, Z.; Lu, J.; Liu, F.; Xuan, J.; and Zhang, G. 2020. Open set domain adaptation: Theoretical bound and algorithm. _IEEE Transactions on Neural Networks and Learning Systems_ .
* Fang et al. (2019) Fang, Z.; Lu, J.; Liu, F.; and Zhang, G. 2019. Unsupervised domain adaptation with sphere retracting transformation. In _2019 International Joint Conference on Neural Networks (IJCNN)_ , 1–8. IEEE.
* Ganin et al. (2016) Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; and Lempitsky, V. 2016. Domain-adversarial training of neural networks. _The Journal of Machine Learning Research_ 17: 2096–2030.
* Gong et al. (2016) Gong, M.; Zhang, K.; Liu, T.; Tao, D.; Glymour, C.; and Schölkopf, B. 2016. Domain adaptation with conditional transferable components. In _ICML_ , 2839–2848.
* Goodfellow et al. (2014) Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative Adversarial Nets. In _NeurIPS_ , 2672–2680. Curran Associates, Inc.
* Guo, Pasunuru, and Bansal (2020) Guo, H.; Pasunuru, R.; and Bansal, M. 2020. Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits. In _AAAI_ , 7830–7838.
* He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In _CVPR_ , 770–778.
* Lee and Jha (2019) Lee, S.; and Jha, R. 2019. Zero-shot adaptive transfer for conversational language understanding. In _AAAI_ , volume 33, 6642–6649.
* Liu et al. (2019) Liu, F.; Lu, J.; Han, B.; Niu, G.; Zhang, G.; and Sugiyama, M. 2019. Butterfly: A panacea for all difficulties in wildly unsupervised domain adaptation. _arXiv preprint arXiv:1905.07720_ .
* Long et al. (2015) Long, M.; Cao, Y.; Wang, J.; and Jordan, M. I. 2015. Learning transferable features with deep adaptation networks. In _ICML_ , 97–105.
* Long et al. (2018) Long, M.; Cao, Z.; Wang, J.; and Jordan, M. I. 2018. Conditional adversarial domain adaptation. In _NeurIPS_ , 1640–1650.
* Long et al. (2013) Long, M.; Wang, J.; Ding, G.; Sun, J.; and Yu, P. S. 2013. Transfer feature learning with joint distribution adaptation. In _ICCV_ , 2200–2207.
* Long et al. (2016) Long, M.; Zhu, H.; Wang, J.; and Jordan, M. I. 2016. Unsupervised domain adaptation with residual transfer networks. In _NeurIPS_ , 136–144.
* Lu et al. (2015) Lu, J.; Behbood, V.; Hao, P.; Zuo, H.; Xue, S.; and Zhang, G. 2015. Transfer learning using computational intelligence: A survey. _Knowledge-Based Systems_ 80: 14–23.
* Lu et al. (2020) Lu, W.; Yu, Y.; Chang, Y.; Wang, Z.; Li, C.; and Yuan, B. 2020. A Dual Input-aware Factorization Machine for CTR Prediction. In _Proceedings of the 29th International Joint Conference on Artificial Intelligence_.
* Mansour, Mohri, and Rostamizadeh (2009) Mansour, Y.; Mohri, M.; and Rostamizadeh, A. 2009. Domain Adaptation: Learning Bounds and Algorithms. In _COLT_.
* Mohri and Medina (2012) Mohri, M.; and Medina, A. M. 2012. New analysis and algorithm for learning with drifting distributions. In _ALT_ , 124–138. Springer.
* Pei et al. (2018) Pei, Z.; Cao, Z.; Long, M.; and Wang, J. 2018. Multi-adversarial domain adaptation. _arXiv preprint arXiv:1809.02176_ .
* Pinheiro (2018) Pinheiro, P. O. 2018. Unsupervised domain adaptation with similarity learning. In _CVPR_ , 8004–8013.
* Redko et al. (2020) Redko, I.; Morvant, E.; Habrard, A.; Sebban, M.; and Bennani, Y. 2020. A survey on domain adaptation theory. _arXiv preprint arXiv:2004.11829_ .
* Saenko et al. (2010) Saenko, K.; Kulis, B.; Fritz, M.; and Darrell, T. 2010. Adapting visual category models to new domains. In _ECCV_ , 213–226. Springer.
* Saito et al. (2018) Saito, K.; Watanabe, K.; Ushiku, Y.; and Harada, T. 2018. Maximum classifier discrepancy for unsupervised domain adaptation. In _CVPR_ , 3723–3732.
* Shen et al. (2018) Shen, J.; Qu, Y.; Zhang, W.; and Yu, Y. 2018. Wasserstein distance guided representation learning for domain adaptation. In _AAAI_.
* Tang and Jia (2020) Tang, H.; and Jia, K. 2020. Discriminative Adversarial Domain Adaptation. In _AAAI_ , 5940–5947.
* Tzeng et al. (2017) Tzeng, E.; Hoffman, J.; Saenko, K.; and Darrell, T. 2017. Adversarial discriminative domain adaptation. In _CVPR_ , 7167–7176.
* Vapnik and Chervonenkis (2015) Vapnik, V. N.; and Chervonenkis, A. Y. 2015. On the uniform convergence of relative frequencies of events to their probabilities. In _Measures of complexity_ , 11–30. Springer.
* Venkateswara et al. (2017) Venkateswara, H.; Eusebio, J.; Chakraborty, S.; and Panchanathan, S. 2017. Deep hashing network for unsupervised domain adaptation. In _CVPR_ , 5018–5027.
* Wang and Breckon (2020) Wang, Q.; and Breckon, T. P. 2020. Unsupervised Domain Adaptation via Structured Prediction Based Selective Pseudo-Labeling. In _AAAI_ , 6243–6250. AAAI Press.
* Xu et al. (2020) Xu, M.; Zhang, J.; Ni, B.; Li, T.; Wang, C.; Tian, Q.; and Zhang, W. 2020. Adversarial Domain Adaptation with Domain Mixup. In _AAAI_ , 6502–6509. AAAI Press.
* Yu, Wang, and Yuan (2019) Yu, Y.; Wang, Z.; and Yuan, B. 2019. An Input-aware Factorization Machine for Sparse Prediction. In _IJCAI_ , 1466–1472.
* Zhang et al. (2018) Zhang, H.; Cisse, M.; Dauphin, Y. N.; and Lopez-Paz, D. 2018. mixup: Beyond Empirical Risk Minimization. In _ICLR_.
* Zhang et al. (2020a) Zhang, K.; Gong, M.; Stojanov, P.; Huang, B.; Liu, Q.; and Glymour, C. 2020a. Domain Adaptation As a Problem of Inference on Graphical Models. In _NeurIPS_.
* Zhang et al. (2017) Zhang, Q.; Wu, D.; Lu, J.; Liu, F.; and Zhang, G. 2017. A cross-domain recommender system with consistent information transfer. _Decision Support Systems_ 104: 49–63.
* Zhang et al. (2020b) Zhang, Y.; Deng, B.; Tang, H.; Zhang, L.; and Jia, K. 2020b. Unsupervised multi-class domain adaptation: Theory, algorithms, and practice. _arXiv preprint arXiv:2002.08681_ .
* Zhang et al. (2020c) Zhang, Y.; Liu, F.; Fang, Z.; Yuan, B.; Zhang, G.; and Lu, J. 2020c. Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation. _arXiv preprint arXiv:2007.14612_ .
* Zhang et al. (2019a) Zhang, Y.; Liu, T.; Long, M.; and Jordan, M. 2019a. Bridging Theory and Algorithm for Domain Adaptation. In Chaudhuri, K.; and Salakhutdinov, R., eds., _ICML_ , volume 97 of _PMLR_ , 7404–7413. PMLR.
* Zhang et al. (2019b) Zhang, Y.; Tang, H.; Jia, K.; and Tan, M. 2019b. Domain-symmetric networks for adversarial domain adaptation. In _CVPR_ , 5031–5040.
* Zhao et al. (2019) Zhao, H.; des Combes, R. T.; Zhang, K.; and Gordon, G. 2019. On Learning Invariant Representation for Domain Adaptation. _ICML_ .
* Zhong et al. (2020) Zhong, L.; Fang, Z.; Liu, F.; Yuan, B.; Zhang, G.; and Lu, J. 2020. Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain Adaptation. _arXiv preprint arXiv:2006.13022_ .
* Zou et al. (2019) Zou, H.; Zhou, Y.; Yang, J.; Liu, H.; Das, H. P.; and Spanos, C. J. 2019. Consensus adversarial domain adaptation. In _AAAI_ , volume 33, 5997–6004.
# The trace amplitude method and its application to the NLO QCD calculation
Zi-Qiang Chen ([email protected]) and Cong-Feng Qiao ([email protected], corresponding author)
1 School of Physics, University of Chinese Academy of Sciences, Yuquan Road 19A, Beijing 100049
2 CAS Key Laboratory of Vacuum Physics, Beijing 100049, China
###### Abstract
The trace amplitude method (TAM) provides a straightforward way to
calculate helicity amplitudes with massive fermions analytically. In this
work, we review the basic idea of this method and then discuss how it can be
applied to next-to-leading order (NLO) quantum chromodynamics (QCD)
calculations, which has not been explored before. By analyzing the singularity
structures of both virtual and real corrections, we show that the TAM can be
generalized to NLO QCD calculations straightforwardly; the only caveat is
that unitarity must be guaranteed. We also present a simple example to
demonstrate the application of this method.
PACS number(s): 12.38.–t, 12.38.Bx
## I Introduction
In high energy colliders like the Large Hadron Collider (LHC), processes with
multi-particle final states are of great importance in signal and background
analysis. To describe these processes at the same precision level as the
experimental measurements, one has to calculate the cross sections to at least
next-to-leading order (NLO). However, NLO calculations for multi-particle
processes are very challenging: as the number of external particles increases,
both the number of Feynman diagrams and the computational difficulty of each
diagram grow rapidly. The conventional amplitude squaring approach (CAS),
i.e. squaring the Feynman amplitude, summing over the spins of external
states, and taking the trace of each possible fermion string loop, proves
tedious and time consuming when the number of external particles exceeds
5. Another drawback of this approach is that it loses the spin
information of the final state particles, which is attainable in present
experimental measurements.
An alternative approach is to compute the helicity amplitude explicitly; the
amplitude squaring and polarization summation can then be performed easily
during the numerical evaluation. The development of this approach has a long
history. Many techniques have been developed for
calculating the tree- DeCausmaecker:1981jtq ; Berends:1981uq ; Nam:1983gt ;
Kleiss:1985yh ; Xu:1986xb ; Parke:1986gb ; Berends:1987me ; Chang:1992bb ;
Yehudai:1992rt ; Ballestrero:1994jn ; Vega:1995cc ; Bondarev:1997kf ;
Andreev:2001se ; Qiao:2003ue ; Cachazo:2004kj ; Britto:2004ap ; Schwinn:2005pi
and loop- Bern:1991aq ; Bern:1994zx ; Bern:1994cg ; Brandhuber:2004yw ;
Luo:2004ss ; Bena:2004xu ; Quigley:2004pw ; Bedford:2004py ; Britto:2004nc ;
Roiban:2004ix ; Bern:2005hs ; Bidder:2005ri level helicity amplitudes. It
should be noted that the literature on this subject is vast, and a complete
survey of it is beyond the scope of this paper. For reviews, see
for instance Refs. Mangano:1990by ; Dixon:1996wi ; Bern:2008ef ;
Elvang:2013cua ; Dixon:2013uaa .
For processes of fermion production or decays, Feynman amplitudes incorporate
one or more open fermion lines, which can be expressed as
$\bar{U}(p_{1},\lambda_{1})\Gamma U(p_{2},\lambda_{2})={\rm tr}[\Gamma
U(p_{2},\lambda_{2})\otimes\bar{U}(p_{1},\lambda_{1})],$ (1)
where $p_{1}$ and $p_{2}$ are the momenta of the external fermions,
$\lambda_{1}$ and $\lambda_{2}$ denote their polarization states; $\Gamma$
stands for the string of Dirac gamma matrices between the spinors;
$U(p,\lambda)$ stands for either fermion spinor $u(p,\lambda)$ or anti-fermion
spinor $v(p,\lambda)$. In $4$-dimensional spinor space, the spinor product
$U(p_{2},\lambda_{2})\otimes\bar{U}(p_{1},\lambda_{1})$ can be re-expressed in
terms of basic Dirac gamma matrices in various ways Nam:1983gt ; Kleiss:1985yh ;
Chang:1992bb ; Yehudai:1992rt ; Ballestrero:1994jn ; Vega:1995cc ;
Bondarev:1997kf ; Andreev:2001se ; Qiao:2003ue , then the trace in Eq. (1) can
be evaluated straightforwardly. For convenience, we call this method the trace
amplitude method (TAM) hereafter. The TAM is different from the so called
helicity amplitude method (HAM), which has been proposed in Refs.
Kleiss:1985yh ; Xu:1986xb and generalized to NLO in Refs. Bern:1991aq ;
Bern:1994zx . The two methods are complementary: the results
obtained from the HAM are more compact, while the TAM is more transparent to
beginners and more convenient for realization in a computer algebra system.
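Eq. (1) itself is a pure linear-algebra identity: for any column spinors and any matrix $\Gamma$, the scalar $\bar{U}_1\Gamma U_2$ equals the trace of $\Gamma$ against the outer product $U_2\otimes\bar{U}_1$. A quick numerical check with random spinors (using only $\gamma^0$ from the Dirac representation to form the Dirac adjoint; all numerical values are arbitrary test inputs):

```python
import numpy as np

rng = np.random.default_rng(1)
g0 = np.diag([1, 1, -1, -1]).astype(complex)   # gamma^0, Dirac representation

u1 = rng.normal(size=4) + 1j * rng.normal(size=4)
u2 = rng.normal(size=4) + 1j * rng.normal(size=4)
Gamma = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

u1bar = u1.conj() @ g0                        # Dirac adjoint u1-bar = u1† gamma^0
lhs = u1bar @ Gamma @ u2                      # open fermion line
rhs = np.trace(Gamma @ np.outer(u2, u1bar))   # the same line closed into a trace
assert np.isclose(lhs, rhs)
```

Once the open line is rewritten as a trace, the whole amplitude reduces to traces of gamma-matrix strings, which is exactly the operation computer algebra systems handle well.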
Although the TAM was proposed long ago, its validity in higher-order
calculations has not been discussed before. Considering the fact that NLO
corrections are usually important in phenomenological studies, in this work we
discuss the generalization of the TAM to NLO QCD calculations.
The rest of the paper is organized as follows. In Sec. II, we review some
basic formulas used to derive the TAM; in Sec. III, we analyze the different
types of singularities encountered in NLO QCD calculations and present a
scheme that enables the application of the TAM in NLO QCD calculations; in
Sec. IV, illustrative examples, the NLO QCD corrections to the $g+g\to
t+\bar{t}$ and $q+\bar{q}\to t+\bar{t}$ processes, are presented. The last
section is reserved for a summary.
## II Spinor product
The key ingredient of the TAM is to re-express the spinor product
$U(p_{2},\lambda_{2})\otimes\bar{U}(p_{1},\lambda_{1})$ by basic Dirac gamma
matrices. This re-expression can be done through various means, such as
constructing the transformation matrix between spinors with different momenta
and polarization states Nam:1983gt , introducing auxiliary vectors
Kleiss:1985yh ; Chang:1992bb ; Ballestrero:1994jn ; Andreev:2001se ;
Qiao:2003ue , making use of an orthogonal basis of the $4$-dimensional spinor
space Yehudai:1992rt , making use of the Bouchiat-Michel identity
Vega:1995cc ; B-MIdentity , etc. As revealed in Ref. Bondarev:1997kf ,
all these approaches can in fact be attributed to the same mathematical scheme.
In this section, we follow the lines of the auxiliary vector approach and
present some basic formulas of the TAM.
Consider an (anti)fermion with momentum $p$, polarization vector $s$, and mass
$m$; the on-shell and polarization conditions require that
$p^{2}=m^{2},\quad s^{2}=-1,\quad p\cdot s=0.$ (2)
The corresponding spinor can be defined as the common eigenstate of the two
commuting operators $\not{p}$ and $\gamma_{5}\not{s}$:
$\displaystyle\not{p}U_{s}(p,\lambda)=MU_{s}(p,\lambda),$ (3)
$\displaystyle\gamma_{5}\not{s}U_{s}(p,\lambda)=\lambda U_{s}(p,\lambda).$ (4)
Here, for fermion $U_{s}(p,\lambda)=u_{s}(p,\lambda)$, $M=m$; for antifermion
$U_{s}(p,\lambda)=v_{s}(p,\lambda)$, $M=-m$; $\lambda=\pm 1$ denote the two
different polarization states.
The massive spinor $U_{s}(p,\lambda)$ can be constructed from a massless
spinor. By introducing two auxiliary vectors that fulfil the conditions
$k_{0}^{2}=0,\quad k_{1}^{2}=-1,\quad k_{0}\cdot k_{1}=0,$ (5)
one can construct a massless spinor $w(k_{0},\lambda)$ from the light-like
vector $k_{0}$, satisfying
$\displaystyle\not{k}_{0}w(k_{0},\lambda)=0,$ (6)
$\displaystyle\gamma_{5}w(k_{0},\lambda)=\lambda w(k_{0},\lambda).$ (7)
From the above two equations we have
$\displaystyle
w(k_{0},\lambda)\bar{w}(k_{0},\lambda)=\frac{1+\lambda\gamma_{5}}{2}\not{k}_{0},$
$\displaystyle\not{k}_{1}w(k_{0},\lambda)=\lambda w(k_{0},-\lambda).$ (8)
Here the relative phase between $w(k_{0},+)$ and $w(k_{0},-)$ may be fixed by
$k_{1}$.
With the massless spinor $w(k_{0},\lambda)$, the massive spinor
$U_{s}(p,\lambda)$ can be expressed as
$U_{s}(p,\lambda)=\frac{(\not{p}+M)(1+\not{s})}{2\sqrt{k_{0}\cdot(p+Ms)}}w(k_{0},-\lambda),$
(9)
which satisfies (3) and (4). The normalization factor here is fixed by the
fermion spin sum relation
$\sum_{\lambda}U_{s}(p,\lambda)\bar{U}_{s}(p,\lambda)=\not{p}+M.$ (10)
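As a sanity check, the construction (9) can be verified numerically. The sketch below works in the Dirac representation at an arbitrary kinematic point (all explicit matrices and numerical values are illustrative choices of ours, not from the paper): it builds $w(k_0,\lambda)$ from the null space of $\not{k}_0$, forms $U_s(p,\lambda)$ via Eq. (9), and confirms Eqs. (3), (4), and the spin-sum relation (10).

```python
import numpy as np

# Dirac representation, metric signature (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array(m, dtype=complex) for m in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gi = [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = np.block([[Z2, I2], [I2, Z2]]).astype(complex)

def slash(v):
    return v[0] * g0 - sum(v[k + 1] * gi[k] for k in range(3))

def mink(a, b):
    return a[0] * b[0] - np.dot(a[1:], b[1:])

def w(k0, lam):
    """Massless spinor of Eqs. (6)-(7): k0-slash w = 0, gamma5 w = lam w."""
    _, sv, vh = np.linalg.svd(slash(k0))
    null = vh[sv < 1e-10].conj().T                 # 2-dim kernel of k0-slash
    cand = ((np.eye(4) + lam * g5) / 2) @ null     # chirality projection
    col = cand[:, np.argmax(np.linalg.norm(cand, axis=0))]
    return col * np.sqrt(2 * k0[0]) / np.linalg.norm(col)   # w† w = 2 k0^0

def U(p, lam, s, M, k0):
    """Massive spinor of Eq. (9)."""
    num = (slash(p) + M * np.eye(4)) @ (np.eye(4) + slash(s)) @ w(k0, -lam)
    return num / (2 * np.sqrt(mink(k0, p + M * s)))

# a massive fermion and a polarization vector obeying Eq. (2)
m = 1.5
pv = np.array([0.3, -0.4, 1.2])
p = np.array([np.sqrt(m**2 + pv @ pv), *pv])
s = np.array([np.linalg.norm(pv) / m, *(p[0] / m * pv / np.linalg.norm(pv))])
k0 = np.array([1.0, 0.0, 0.0, 1.0])

spin_sum = np.zeros((4, 4), dtype=complex)
for lam in (+1, -1):
    u = U(p, lam, s, m, k0)
    assert np.allclose(slash(p) @ u, m * u)          # Eq. (3)
    assert np.allclose(g5 @ slash(s) @ u, lam * u)   # Eq. (4)
    spin_sum += np.outer(u, u.conj() @ g0)           # u ⊗ u-bar
assert np.allclose(spin_sum, slash(p) + m * np.eye(4))   # Eq. (10)
```

Note that the overall phase of `w` is left unfixed here; the eigenvalue and spin-sum checks are phase independent, which is why no explicit $k_1$ is needed for this test.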
Combining Eqs. (8) and (9), one readily gets the desired spinor product:
$U_{s_{1}}(p_{1},\lambda_{1})\otimes\bar{U}_{s_{2}}(p_{2},\lambda_{2})=\frac{({\not{p}}_{1}+M_{1})(1+{\not{s}}_{1})\Lambda(\lambda_{1},\lambda_{2}){\not{k}}_{0}(1+{\not{s}}_{2})({\not{p}}_{2}+M_{2})}{8\sqrt{k_{0}\cdot(p_{1}+M_{1}s_{1})}\sqrt{k_{0}\cdot(p_{2}+M_{2}s_{2})}},$
(11)
with
$\displaystyle\Lambda(\lambda,\lambda)=1-\lambda\gamma_{5},$
$\displaystyle\Lambda(\lambda,-\lambda)=\not{k}_{1}(\lambda+\gamma_{5}).$ (12)
The polarization vector of a fermion can be expressed through the momentum of
the fermion as
$s=\frac{(p\cdot q)p-m^{2}q}{m\sqrt{(p\cdot q)^{2}-m^{2}q^{2}}},$ (13)
where $q$ can be an arbitrary vector, except for those parallel to the momentum
$p$. For $q=(1,\vec{0})$, the polarization vector is found to be
$s=(\frac{|\vec{p}|}{m},\frac{E}{m}\frac{\vec{p}}{|\vec{p}|})$, which
indicates that the corresponding spinor is in a helicity eigenstate. In
computation, it is more convenient to take $q=\frac{M}{m}k_{0}$, the so-called
Kleiss-Stirling (KS) Kleiss:1985yh polarization basis. In this basis, Eq.
(11) can be simplified to
$U_{\rm KS}(p_{1},\lambda_{1})\otimes\bar{U}_{\rm
KS}(p_{2},\lambda_{2})=\frac{({\not{p}}_{1}+M_{1})\Lambda(\lambda_{1},\lambda_{2}){\not{k}}_{0}({\not{p}}_{2}+M_{2})}{4\sqrt{k_{0}\cdot
p_{1}}\sqrt{k_{0}\cdot p_{2}}}.$ (14)
Note that in phenomenological studies, other choices of polarization basis may
be more convenient. The transformation rule between spinors in different
polarization bases can be obtained by taking an explicit representation of the
Dirac matrices.
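Eq. (13) can be checked directly: for any admissible $q$ it produces a spin vector satisfying the conditions (2), and for $q=(1,\vec{0})$ it reduces to the helicity form quoted above. A small numerical verification (the kinematic point is an arbitrary test value):

```python
import numpy as np

def mink(a, b):
    """Minkowski product with signature (+,-,-,-)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

def pol_vector(p, m, q):
    """Spin vector of Eq. (13), built from the momentum p and auxiliary q."""
    pq = mink(p, q)
    return (pq * p - m**2 * q) / (m * np.sqrt(pq**2 - m**2 * mink(q, q)))

m = 0.5
pv = np.array([1.0, 2.0, -0.5])
p = np.array([np.sqrt(m**2 + pv @ pv), *pv])
s = pol_vector(p, m, np.array([1.0, 0.0, 0.0, 0.0]))   # q = (1, 0-vector)

assert np.isclose(mink(s, s), -1.0)   # s^2 = -1, Eq. (2)
assert np.isclose(mink(p, s), 0.0)    # p·s = 0, Eq. (2)
# helicity basis: s = (|p|/m, (E/m) p/|p|)
assert np.isclose(s[0], np.linalg.norm(pv) / m)
assert np.allclose(s[1:], p[0] / m * pv / np.linalg.norm(pv))
```

The same function accepts any non-parallel $q$, so switching to the KS choice $q=\frac{M}{m}k_0$ only changes the input vector, not the construction.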
In general, arbitrary vectors $k_{0}$ and $k_{1}$ introduce extra complication
into the resulting amplitude. To avoid this, in actual computation, one may
either specify $k_{0}$ and $k_{1}$ explicitly, or construct $k_{0}$ and
$k_{1}$ from the external momenta, as demonstrated in Ref. Chang:1992bb .
## III Singularity structure of NLO QCD calculation
In this section, we analyze the singularity structure of NLO QCD calculations.
The dimensional regularization with space-time dimension $D=4-2\epsilon$ is
used to regularize both ultraviolet (UV) and infrared (IR) singularities.
Although the results are well known Kunszt:1994np ; Catani:2000ef , we discuss
them in detail for a twofold reason: (i) they are essential to the NLO
generalization of TAM, and (ii) we provide a new perspective on this subject.
Specifically, in Ref. Catani:2000ef , the singular terms of the virtual loop
corrections are derived from those of the real corrections, by exploiting the
fact that the IR singularities of the virtual and real corrections cancel each
other. Here, by contrast, we derive these singular terms through direct
loop-integral analysis, and show that they are exactly canceled by their
counterparts in the real corrections.
### III.1 Singular terms in virtual corrections
The one-loop virtual corrections contain UV and IR singularities, which appear
as $\frac{1}{\epsilon^{n}}$ poles under dimensional regularization. In a
renormalizable theory like QCD, UV singularities are contained in the diagrams
or subdiagrams with a small number of external legs, and can be removed by the
renormalization procedure. In renormalized perturbation theory, the
renormalized UV-finite one-loop amplitude $\tilde{\mathcal{M}}^{\rm loop}$ is
defined as
$\tilde{\mathcal{M}}^{\rm loop}=\mathcal{M}^{\rm loop}+\mathcal{M}^{\rm CT},$
(15)
where $\mathcal{M}^{\rm CT}$ denotes the amplitudes of counterterms.
To study the IR singularity structure of $\tilde{\mathcal{M}}^{\rm loop}$, we use lightcone gauge, in which the gluon propagator is
$D_{\mu\nu}^{ab}(p)=\delta^{ab}\frac{i\Pi_{\mu\nu}(p)}{p^{2}+i\varepsilon},$
(16)
with
$\Pi_{\mu\nu}(p)=-g_{\mu\nu}+\frac{r_{\mu}p_{\nu}+r_{\nu}p_{\mu}}{r\cdot p},$
(17)
where $r$ is a light-like vector. Lightcone gauge is a physical gauge: when the gluon is on its mass shell, the numerator reduces to a sum over the physical transverse polarization states:
$\Pi_{\mu\nu}(p)\overset{p^{2}=0}{\longrightarrow}\sum_{i=1,2}\epsilon_{\mu}^{(i)}(p,r)\epsilon_{\nu}^{(i)*}(p,r).$
(18)
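As a quick numerical sanity check of Eqs. (17) and (18), one can verify in ordinary 4-dimensional kinematics that the lightcone-gauge numerator reduces to the transverse polarization sum on the mass shell (a minimal sketch; the specific momenta and polarization vectors below are illustrative choices, not taken from the text):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric g_{mu nu}

def lower(v):
    return g @ v

def dot(a, b):
    return a @ g @ b

E = 2.5
p = np.array([E, 0.0, 0.0, E])        # on-shell gluon momentum, p^2 = 0
r = np.array([1.0, 0.0, 0.0, -1.0])   # light-like gauge vector with r.p != 0

# Eq. (17): Pi_{mu nu}(p) = -g_{mu nu} + (r_mu p_nu + r_nu p_mu)/(r.p)
p_lo, r_lo = lower(p), lower(r)
Pi = -g + (np.outer(r_lo, p_lo) + np.outer(p_lo, r_lo)) / dot(r, p)

# Two transverse polarization vectors for p along the z-axis
pol_sum = sum(np.outer(lower(e), lower(e))
              for e in (np.array([0.0, 1.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0, 0.0])))

assert np.allclose(Pi, pol_sum)   # Eq. (18): polarization sum on the mass shell
```

With $p$ along the $z$-axis and $r$ the opposite light-like direction, $\Pi_{\mu\nu}$ becomes ${\rm diag}(0,1,1,0)$, exactly the sum over the two transverse polarizations.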
For a one-loop amplitude without soft or mutually collinear external lines, soft singularities originate from the exchange of a soft gluon between two on-shell legs. To isolate the soft singularities, we impose a cutoff $\delta_{0}$ on all components of the loop momentum $k$, that is
$|k^{\mu}|<\delta_{0}\ll\text{particle masses or other kinematic scales}.$ (19)
This region will be referred to as the soft region.
The structure of an external leg attached to a soft gluon can be approximated as
$\vbox{\hbox{\includegraphics[scale={0.5}]{eikonal.eps}}}\simeq-
g_{s}T^{a}_{c_{i}c_{j}}\frac{1}{k^{2}+2p\cdot
k}\left(2p^{\mu}\vbox{\hbox{\includegraphics[scale={0.5}]{eikonalB.eps}}}+\mathcal{O}(|k|)\right),$
(20)
with
$T^{a}_{c_{i}c_{j}}=\begin{cases}t^{a}_{c_{i}c_{j}},i=\text{outgoing quark or
incoming antiquark}\\\ -t^{a}_{c_{j}c_{i}},i=\text{outgoing antiquark or
incoming quark}\\\ -if^{ac_{i}c_{j}},i=\text{gluon}\end{cases}.$ (21)
Here the dashed line denotes either a quark (massive or massless) or a gluon, $c_{i}$ denotes the color index of parton $i$, and $t^{a}_{c_{i}c_{j}}$ and $f^{ac_{i}c_{j}}$ are the generators of the $SU(3)$ fundamental and adjoint representations, respectively. By default, the momentum of parton $i$ is defined as outgoing; for an incoming parton, one should make the replacement $p\to-p$.
For the structure of two external legs connected by one soft gluon, we have
$\displaystyle\vbox{\hbox{\includegraphics[scale={0.5}]{softloop.eps}}}\simeq$
$\displaystyle
ig_{s}^{2}T^{a}_{c_{i}c_{i^{\prime}}}T^{a}_{c_{j}c_{j^{\prime}}}\mu^{4-D}\int\limits_{|k^{\mu}|<\delta_{0}}\frac{d^{D}k}{(2\pi)^{D}}\frac{1}{k^{2}(k^{2}+2k\cdot
p_{i})(k^{2}-2k\cdot p_{j})}$ (22)
$\displaystyle\times\left(4p_{i}\cdot\Pi(k)\cdot
p_{j}\vbox{\hbox{\includegraphics[scale={0.5}]{softloopB.eps}}}+\mathcal{O}(|k|)\right).$
Here, all three propagators have poles in the region $|k^{\mu}|<\delta_{0}$. However, at NLO, only the poles of $1/k^{2}$ are relevant. For the case where both partons are incoming or both outgoing, the poles of $1/(k^{2}+2k\cdot p_{i})$ and $1/(k^{2}-2k\cdot p_{j})$ lead to purely imaginary singular terms111These terms can be obtained through the Cutkosky rules Cutkosky:1960sp ., which eventually cancel each other between $\tilde{\mathcal{M}}^{\rm loop}(\mathcal{M}^{\rm tree})^{*}$ and $(\tilde{\mathcal{M}}^{\rm loop})^{*}\mathcal{M}^{\rm tree}$. For the case where one parton is outgoing while the other is incoming, no imaginary singularities arise. In fact, the poles of $1/(k^{2}+2k\cdot p_{i})$ and $1/(k^{2}-2k\cdot p_{j})$ are located in the region where $k^{+}k^{-}\ll|\vec{k}_{T}|^{2}$ 222Here we work in the $p_{i}+p_{j}$ center-of-mass frame. We take lightcone coordinates so that the momenta are $p_{i}=(p_{i}^{+},m_{i}^{2}/(2p_{i}^{+}),\vec{0}_{T})$, $p_{j}=(m_{j}^{2}/(2p_{j}^{-}),p_{j}^{-},\vec{0}_{T})$, $k=(k^{+},k^{-},\vec{k}_{T})$., and we can perform contour deformations on both $k^{+}$ and $k^{-}$ to move out of this region Collins:2011zzd . Note that, since we work in lightcone gauge, the singularities at $r\cdot k=0$ may obstruct the contour deformations. In our simple case, this can be overcome by an appropriate choice of $r$; for example, we may choose a generic $r$ that is not parallel to any $p_{i}$.
When both partons $i$ and $j$ are massive, we can deform the integral to a contour on which all components of $k$ are comparable. The asymptotic behavior of the loop momentum is then $|k^{\mu}|\sim\kappa^{2}$ as $\kappa\to 0$, so we can neglect the $k^{2}$ term compared to $k\cdot p_{i}$ or $k\cdot p_{j}$ (eikonal approximation). It can also be seen that the $\mathcal{O}(|k|)$ term does not contribute to the soft singularity, as it leads to terms scaling like $\kappa^{2}$ or higher. When either or both of $i$ and $j$ are massless, there is an overlapping soft-collinear region, where the scaling behavior of $k$ is $|k^{+}|\sim\kappa$, $|k^{-}|\sim\kappa^{3}$ and $|\vec{k}_{T}|\sim\kappa^{2}$ (or $|k^{+}|\sim\kappa^{3}$, $|k^{-}|\sim\kappa$ and $|\vec{k}_{T}|\sim\kappa^{2}$). The eikonal approximation still holds in this region. Thus for both the massive and massless cases, we have
$\displaystyle\vbox{\hbox{\includegraphics[scale={0.5}]{softLoop.eps}}}\overset{\rm
soft}{\sim}-ig_{s}^{2}T^{a}_{c_{i}c_{i^{\prime}}}T^{a}_{c_{j}c_{j^{\prime}}}\mu^{4-D}\int\limits_{|k^{\mu}|<\delta_{0}}\frac{d^{D}k}{(2\pi)^{D}}\frac{p_{i}\cdot\Pi(k)\cdot
p_{j}}{k^{2}(k\cdot p_{i})(k\cdot
p_{j})}\vbox{\hbox{\includegraphics[scale={0.5}]{softLoopB.eps}}}$
$\displaystyle\overset{\rm
soft}{\sim}-g_{s}^{2}T^{a}_{c_{i}c_{i^{\prime}}}T^{a}_{c_{j}c_{j^{\prime}}}\mu^{4-D}\int\limits_{|\vec{k}|<\delta_{0}}\frac{d^{D-1}k}{2k_{0}(2\pi)^{D-1}}\frac{p_{i}\cdot\Pi(k)\cdot
p_{j}}{(k\cdot p_{i})(k\cdot
p_{j})}\bigg{|}_{k_{0}=|\vec{k}|}\vbox{\hbox{\includegraphics[scale={0.5}]{softLoopB.eps}}}$
$\displaystyle\ .$ (23)
Here, the symbol “$\overset{\rm soft}{\sim}$” denotes that the real parts of the soft singular terms on each side are equal.
Besides soft gluon exchange between external legs, soft singularities also come from the on-shell renormalization constants. The corresponding terms can be re-expressed as self-energy insertions on the external lines. For this case, we have (see Appendix A for a detailed derivation):
$\displaystyle\frac{1}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{softSE.eps}}}\overset{\rm
soft}{\sim}$
$\displaystyle-\frac{1}{2}g_{s}^{2}T^{a}_{c_{i}c_{i^{\prime\prime}}}T^{a}_{c_{i^{\prime\prime}}c_{i^{\prime}}}\mu^{4-D}\int\limits_{|\vec{k}|<\delta_{0}}\frac{d^{D-1}k}{2k_{0}(2\pi)^{D-1}}\frac{p_{i}\cdot\Pi(k)\cdot
p_{i}}{(k\cdot p_{i})^{2}}\bigg{|}_{k_{0}=|\vec{k}|}$
$\displaystyle\times\vbox{\hbox{\includegraphics[scale={0.5}]{softSEB.eps}}}.$
(24)
Summing up all configurations in which a gluon connects two external legs, together with the self-energy corrections to each external line, we obtain the complete soft singularities of the one-loop amplitude:
$\displaystyle\tilde{\mathcal{M}}^{\rm loop}_{c_{1}\cdots c_{n}}\overset{\rm
soft}{\sim}$
$\displaystyle-\frac{1}{2}g_{s}^{2}\sum_{i,j}^{n}\mu^{4-D}\int\limits_{|\vec{k}|<\delta_{0}}\frac{d^{D-1}k}{2k_{0}(2\pi)^{D-1}}\frac{p_{i}\cdot\Pi(k)\cdot
p_{j}}{(k\cdot p_{i})(k\cdot p_{j})}\bigg{|}_{k_{0}=|\vec{k}|}$
$\displaystyle\times\left(\boldsymbol{T}\cdot\boldsymbol{T}\mathcal{M}^{\rm
tree}\right)_{c_{1}\cdots c_{i}\cdots c_{j}\cdots c_{n}},$ (25)
where
$\left(\boldsymbol{T}\cdot\boldsymbol{T}\mathcal{M}^{\rm
tree}\right)_{c_{1}\cdots c_{i}\cdots c_{j}\cdots
c_{n}}=\begin{cases}T^{a}_{c_{i}c_{i^{\prime}}}T^{a}_{c_{j}c_{j^{\prime}}}\mathcal{M}^{\rm
tree}_{c_{1}\cdots c_{i^{\prime}}\cdots c_{j^{\prime}}\cdots c_{n}},&i\neq
j\\\
T^{a}_{c_{i}c_{i^{\prime\prime}}}T^{a}_{c_{i^{\prime\prime}}c_{i^{\prime}}}\mathcal{M}^{\rm
tree}_{c_{1}\cdots c_{i^{\prime}}\cdots c_{n}},&i=j\end{cases}$ (26)
is the color connected Born amplitude.
The collinear singularities arise when the virtual gluon is collinear to a massless external momentum. Here we work in lightcone coordinates, in which the loop momentum $k$ and the external momentum $p$ can be expressed as
$\displaystyle k=(k^{+},k^{-},\vec{k}_{T}),$ $\displaystyle
p=(p^{+},0,\vec{0}_{T}).$ (27)
Since the soft-collinear singularities have already been incorporated in Eq. (25), to avoid double counting we consider only the hard-collinear singularities. The corresponding scaling behavior of $k$ is $|k^{+}|\sim\kappa^{0}$, $|k^{-}|\sim\kappa^{2}$ and $|\vec{k}_{T}|\sim\kappa$. The involved propagators $1/k^{2}$ and $1/(k\pm p)^{2}$ then scale like $\kappa^{-2}$, while propagators with non-collinear external momenta scale like $\kappa^{0}$. The hard-collinear integration region for $k$ is defined as
$|k^{+}|>\frac{p^{+}}{p^{0}}\delta_{0},\quad\quad|\vec{k}_{T}|<\delta_{T},\quad\quad|k^{-}|\sim\frac{|\vec{k}_{T}|^{2}}{|k^{+}|},$
(28)
with the cut on $|k^{+}|$ excluding the soft-collinear region. The explicit integration range of $|k^{-}|$ does not affect the collinear singular terms. Note that, for the above scaling behavior of $k$ to be valid, the soft cutoff parameter should be much larger than the collinear cutoff parameter: $\delta_{T}\ll\delta_{0}$.
In lightcone gauge, the structures in which the virtual gluon $k$ connects the external leg $p$ to the hard part (or to another external leg) scale like $\kappa$:
$\vbox{\hbox{\includegraphics[scale={0.5}]{colLoop.eps}}}\simeq\mathcal{O}(\kappa).$
(29)
Therefore, the only collinear singularities come from the self-energy corrections to the external legs. We have (see Appendix B for a detailed derivation):
$\displaystyle\frac{1}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{colSEQuark.eps}}}\overset{\rm
coll}{\sim}$
$\displaystyle-\frac{g_{s}^{2}}{16\pi^{2}}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\frac{1}{\epsilon}C_{F}\left(\frac{3+\epsilon}{2}+2\ln\frac{\delta_{0}}{p_{0}}\right)$
$\displaystyle\times\vbox{\hbox{\includegraphics[scale={0.5}]{colSEQuarkB.eps}}},$
$\displaystyle\frac{n_{lf}}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{colSEGluonF.eps}}}\overset{\rm
coll}{\sim}$
$\displaystyle-\frac{g_{s}^{2}}{16\pi^{2}}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\frac{1}{\epsilon}n_{lf}\left(-\frac{1-\epsilon}{3-2\epsilon}\right)$
$\displaystyle\times\vbox{\hbox{\includegraphics[scale={0.5}]{colSEGluonB.eps}}},$
$\displaystyle\frac{1}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{colSEGluonG.eps}}}\overset{\rm
coll}{\sim}$
$\displaystyle-\frac{g_{s}^{2}}{16\pi^{2}}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\frac{1}{\epsilon}C_{A}\left(\frac{11}{6}+2\ln\frac{\delta_{0}}{p_{0}}\right)$
$\displaystyle\times\vbox{\hbox{\includegraphics[scale={0.5}]{colSEGluonB.eps}}}.$
(30)
Here, $n_{lf}=3$ denotes the number of light quark flavors, and $C_{A}=3$, $C_{F}=4/3$ are QCD color factors; the symbol “$\overset{\rm coll}{\sim}$” means that the collinear singular terms on each side are equal. The collinear singular terms of the one-loop amplitude then have the form
$\tilde{\mathcal{M}}^{\rm loop}_{c_{1}\cdots c_{n}}\overset{\rm
coll}{\sim}\frac{g_{s}^{2}}{16\pi^{2}}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\sum_{i}\left(\frac{\gamma(i)}{\epsilon}\right)\mathcal{M}^{\rm
tree}_{c_{1}\cdots c_{n}},$ (31)
where
$\displaystyle\gamma(q)=-C_{F}\left(\frac{3+\epsilon}{2}+2\ln\frac{\delta_{0}}{p_{0}}\right),$
$\displaystyle\gamma(g)=n_{lf}\frac{1-\epsilon}{3-2\epsilon}-C_{A}\left(\frac{11}{6}+2\ln\frac{\delta_{0}}{p_{0}}\right).$
(32)
### III.2 Dimensional regularization prescriptions and singular terms in real
corrections
The analysis in the previous subsection is based on dimensional regularization. The key ingredient of dimensional regularization is the continuation of the loop momentum from $4$ to $D=4-2\epsilon$ dimensions. For the treatment of external momenta and gluon polarizations, however, one is left with some freedom, which results in different variants of dimensional regularization. Two commonly used variants are333Another commonly used regularization scheme is dimensional reduction Siegel:1979wq ; Bern:1991aq ; Stockinger:2005gx , where a quasi-4-dimensional space should be introduced Stockinger:2005gx . The transition rules between dimensional reduction and dimensional regularization are discussed in Refs. Kunszt:1994np ; Catani:1996pk ; Signer:2008va ; Catani:2000ef .:
* •
‘t Hooft-Veltman (HV) tHooft:1972tcz scheme: Loop momenta are treated as $D$-dimensional, while external momenta are treated as $4$-dimensional. Gluons inside the loop have $D-2$ polarization states, while all other gluons have 2.
* •
Conventional dimensional regularization (CDR) scheme: All momenta are treated
as $D$-dimensional, and all gluons have $D-2$ polarization states.
The analysis in the previous subsection is legitimate in both the HV and CDR schemes. In Eqs. (25) and (31), the only quantity that depends on the choice of HV or CDR is the tree-level amplitude $\mathcal{M}^{\rm tree}$.
In general, only the combination of the virtual loop corrections and the real emission contributions leads to IR-finite results. Their dependence on the regularization prescription must be canceled along with the IR singularities. This cancellation is achieved only if the regularization prescriptions employed in the virtual and real corrections are consistent, as required by unitarity. Hence, in the real corrections, the emitted soft or collinear particles should be treated in the same way as the particles inside the loop in the virtual corrections. As an example, Fig. 1 shows the case of a gluon splitting into soft or collinear gluons under the HV and CDR schemes separately.
Figure 1: Gluon splitting into soft or collinear gluons under the HV and CDR schemes. Here the label $D$ ($4$) indicates that the momentum of the corresponding gluon is $D$- ($4$-)dimensional, and the number of its polarization states is $D-2$ ($2$).
There are essentially two types of approaches to evaluating the cross sections of real emission processes: one based on the phase-space slicing method Fabricius:1981sx ; Kramer:1986mc ; Harris:2001sx , and the other based on the subtraction method Ellis:1980wv ; Catani:1996vz ; Phaf:2001gc . In both approaches, the IR singular terms are isolated, and the remaining finite parts can be calculated numerically in 4-dimensional space-time. To match our analysis of the virtual corrections, here we take the two-cutoff phase-space slicing method. As the corresponding implementation is described in detail in Ref. Harris:2001sx , we only summarize the main results here.
Consider the real emission process
$p_{a}+p_{b}\to p_{1}+\cdots+p_{n}+p_{n+1}\ ,$ (33)
where particle $(n+1)$ is the “additional” particle that may be soft or collinear to another massless external line. By introducing a soft cut $\delta_{0}$ and a collinear cut $\delta_{T}$444The cutoff parameters used here differ from those in Ref. Harris:2001sx ; they are related by $\delta_{0}=\frac{\sqrt{s_{12}}}{2}\delta_{s}$ and $\delta_{T}^{2}=z(1-z)\delta_{c}\,s_{12}$., which satisfy $\delta_{0}\gg\delta_{T}$, the phase space can be separated into three regions:
* •
soft: $p_{n+1}^{0}<\delta_{0}$;
* •
hard-collinear (HC): $p_{n+1}^{0}>\delta_{0}$ and $|\vec{p}_{n+1\
T}|<\delta_{T}$;
* •
hard-non-collinear (HNC): $p_{n+1}^{0}>\delta_{0}$ and $|\vec{p}_{n+1\
T}|>\delta_{T}$.
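The three slicing regions above can be summarized in a small classifier (a hypothetical helper for illustration; the function name and the sample cutoff values are our own):

```python
def region(energy, pT, delta0, deltaT):
    """Classify the additional parton (n+1) into the three slicing regions.

    energy : p_{n+1}^0
    pT     : |p_{n+1,T}| relative to the parent parton i'
    """
    assert deltaT < delta0          # slicing requires delta_T << delta_0
    if energy < delta0:
        return "soft"
    if pT < deltaT:
        return "hard-collinear"
    return "hard-non-collinear"

delta0, deltaT = 1e-3, 1e-6         # illustrative cutoff values
assert region(5e-4, 1e-7, delta0, deltaT) == "soft"
assert region(2e-3, 1e-7, delta0, deltaT) == "hard-collinear"
assert region(2e-3, 1e-2, delta0, deltaT) == "hard-non-collinear"
```

The soft condition is checked first, so the hard-collinear region automatically carries the $p_{n+1}^{0}>\delta_{0}$ constraint.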
Here, the transverse momentum $|\vec{p}_{n+1\ T}|$ is defined relative to the “parent” particle $i^{\prime}$, which splits into $i$ and $(n+1)$: $i^{\prime}\to i+(n+1)$. The cross section of the real emission process can be written as
$\sigma_{\rm real}=\sigma_{\rm real}^{\rm soft}+\sigma_{\rm real}^{\rm
HC}+\sigma_{\rm real}^{\rm HNC}.$ (34)
After neglecting terms of order $\delta_{0}$ and $\delta_{T}$, the soft and hard-collinear pieces take the form555Here we present the results for the indistinguishable final state case only. The results for other cases, such as tagged final states or hadrons in the initial state, can be found in Ref. Harris:2001sx .
$\displaystyle\sigma_{\rm real}^{\rm soft}=$
$\displaystyle\frac{1}{2\Phi}\int\limits_{\rm
S}d\Gamma_{n+1}\overline{\sum}|\mathcal{M}^{\rm real}_{c_{1}\cdots
c_{n+1}}|^{2}$ $\displaystyle=$ $\displaystyle
g_{s}^{2}\mu^{4-D}\int\limits_{|\vec{p}_{n+1}|<\delta_{0}}\frac{d^{D-1}p_{n+1}}{2p_{n+1}^{0}(2\pi)^{D-1}}\sum_{i,j}^{n}\bigg{\\{}\frac{p_{i}\cdot\Pi(p_{n+1})\cdot
p_{j}}{(p_{i}\cdot p_{n+1})(p_{j}\cdot p_{n+1})}$
$\displaystyle\frac{1}{2\Phi}\int
d\Gamma_{n}\overline{\sum}\left[\mathcal{M}^{\rm tree}_{c_{1}\cdots
c_{i^{\prime}}\cdots c_{j}\cdots
c_{n}}T^{a}_{c_{i}c_{i^{\prime}}}\right]\left[\mathcal{M}^{\rm
tree}_{c_{1}\cdots c_{i}\cdots c_{j^{\prime}}\cdots
c_{n}}T^{a}_{c_{j}c_{j^{\prime}}}\right]^{*}\bigg{\\}},$ (35)
and
$\displaystyle\sigma_{\rm real}^{\rm HC}=$
$\displaystyle\frac{1}{2\Phi}\int\limits_{\rm
HC}d\Gamma_{n+1}\overline{\sum}|\mathcal{M}^{\rm real}_{c_{1}\cdots
c_{n+1}}|^{2}$ $\displaystyle=$
$\displaystyle-\frac{g_{s}^{2}}{8\pi^{2}}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\left(\sum_{i}\frac{\gamma(i)}{\epsilon}\right)\frac{1}{2\Phi}\int
d\Gamma_{n}\overline{\sum}|\mathcal{M}^{\rm tree}_{c_{1}\cdots c_{n}}|^{2},$
(36)
where $\Phi$ is the flux factor, $d\Gamma_{n}$ and $d\Gamma_{n+1}$ stand for the $n$- and $(n+1)$-body phase space, respectively, and $\gamma(i)$ has been defined in Eq. (32).
Comparing Eqs. (35) and (36) with Eqs. (25) and (31), we can see that the IR singularities in the real corrections are canceled explicitly against their counterparts in the virtual corrections, as expected from the Kinoshita-Lee-Nauenberg (KLN) theorem Kinoshita:1962ur ; Lee:1964is . Their dependence on the dimensional regularization prescription is also canceled, as long as the corresponding $\gamma(i)$ and $\mathcal{M}^{\rm tree}$ are obtained in the same scheme.
## IV Generalizing the TAM to NLO QCD calculations
In this section, we discuss how to apply the TAM in NLO QCD calculations. We also compare the results obtained from the TAM with those from the CAS.
### IV.1 Calculation scheme
In fact, the application of the TAM in the calculation of one-loop helicity amplitudes is transparent and does not lead to any additional technical difficulty. The only concern is that one should find a proper scheme for the real corrections that guarantees unitarity.
In Section III.1, we analyzed the singularity structure of the one-loop amplitude. That analysis is performed at the amplitude level and is independent of the treatment of spinors. Therefore, for one-loop helicity amplitudes calculated through the TAM, the formulas (15), (25) and (31) still hold. In the real corrections, however, the singularity formulas (35) and (36) are at the cross section (decay width) level. We can decompose $\sigma_{\rm real}^{\rm soft}$ and $\sigma_{\rm real}^{\rm HC}$ according to the helicity states of the $1$st to $n$-th particles, which leads to
$\displaystyle\sum_{\lambda_{n+1}}\sigma_{\rm real}^{\rm
soft}(\lambda_{1},\cdots,\lambda_{n},\lambda_{n+1})$ $\displaystyle=$
$\displaystyle
g_{s}^{2}\mu^{4-D}\int\limits_{|\vec{p}_{n+1}|<\delta_{0}}\frac{d^{D-1}p_{n+1}}{2p_{n+1}^{0}(2\pi)^{D-1}}\sum_{i,j}^{n}\bigg{\\{}\frac{p_{i}\cdot\Pi(p_{n+1})\cdot
p_{j}}{(p_{i}\cdot p_{n+1})(p_{j}\cdot p_{n+1})}$
$\displaystyle\frac{1}{2\Phi}\int d\Gamma_{n}\left[\mathcal{M}^{\rm
tree}_{c_{1}\cdots c_{i^{\prime}}\cdots c_{j}\cdots
c_{n}}(\lambda_{1},\cdots,\lambda_{n})T^{a}_{c_{i}c_{i^{\prime}}}\right]\left[\mathcal{M}^{\rm
tree}_{c_{1}\cdots c_{i}\cdots c_{j^{\prime}}\cdots
c_{n}}(\lambda_{1},\cdots,\lambda_{n})T^{a}_{c_{j}c_{j^{\prime}}}\right]^{*}\bigg{\\}},$
(37)
and
$\displaystyle\sum_{\lambda_{n+1}}\sigma_{\rm real}^{\rm
HC}(\lambda_{1},\cdots,\lambda_{n},\lambda_{n+1})$ $\displaystyle=$
$\displaystyle-\frac{g_{s}^{2}}{8\pi^{2}}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\left(\sum_{i}\frac{\gamma(i)}{\epsilon}\right)\frac{1}{2\Phi}\int
d\Gamma_{n}|\mathcal{M}^{\rm tree}_{c_{1}\cdots
c_{n}}(\lambda_{1},\cdots,\lambda_{n})|^{2}\ .$ (38)
Here, $\lambda_{i}$ denotes the helicity state of the $i$-th particle, and $\mathcal{M}^{\rm tree}(\lambda_{1},\cdots,\lambda_{n})$ denotes the tree-level helicity amplitude, which should be calculated through the TAM to guarantee unitarity.
With the above preparation, we obtain the scheme to be used in NLO QCD calculations: the one-loop helicity amplitude can be calculated straightforwardly by using the TAM, while the soft and hard-collinear pieces of the real corrections should be calculated through Eqs. (37) and (38). Several points are worth noting in practice:
1) As discussed in Sec. III.2, this calculation scheme is legitimate under both the CDR and HV schemes.
2) In the TAM, an additional $\gamma_{5}$ is introduced in Eq. (12). Although $\gamma_{5}$ is always a difficult object to deal with in dimensional regularization, the $\gamma_{5}$ here causes no additional trouble, because it is concerned only with the treatment of spinors, which is independent of the singularity structure. Hence, our scheme is valid under any self-consistent $\gamma_{5}$ prescription, such as the ‘t Hooft-Veltman-Breitenlohner-Maison (HVBM) prescription tHooft:1972tcz ; Breitenlohner:1976te or the Kreimer-Korner prescription Kreimer:1989ke ; Korner:1991sx .
3) The integrand in Eq. (37) contains the factor
$\frac{p_{i}\cdot\Pi(p_{n+1})\cdot p_{j}}{(p_{i}\cdot p_{n+1})(p_{j}\cdot
p_{n+1})}=\frac{-p_{i}\cdot p_{j}}{(p_{i}\cdot p_{n+1})(p_{j}\cdot
p_{n+1})}+\frac{1}{r\cdot p_{n+1}}\left(\frac{r\cdot p_{i}}{p_{i}\cdot
p_{n+1}}+\frac{r\cdot p_{j}}{p_{j}\cdot p_{n+1}}\right).$ (39)
By using the color conservation relation $\sum_{i=1}^{n}T^{a}_{c_{i}c_{i^{\prime}}}\mathcal{M}^{\rm tree}_{c_{1}\cdots c_{i^{\prime}}\cdots c_{n}}=0$ Catani:2000ef , we can see that the second term vanishes after summing over $i$ and $j$. Hence, in actual calculations, we can replace $p_{i}\cdot\Pi(p_{n+1})\cdot p_{j}$ by $-p_{i}\cdot p_{j}$.
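The color conservation relation used above can be checked numerically for the simplest case, a color-singlet $q\bar{q}$ amplitude $\mathcal{M}_{c_{1}c_{2}}=\delta_{c_{1}c_{2}}$, using the sign conventions of Eq. (21) (an illustrative sketch; the singlet amplitude is our own toy example, not taken from the text):

```python
import numpy as np

# SU(3) fundamental generators t^a = lambda^a/2 (Gell-Mann matrices)
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3),
]
t = [l / 2 for l in lam]

# Toy color-singlet amplitude for an outgoing q qbar pair: M_{c1 c2} = delta_{c1 c2}
M = np.eye(3, dtype=complex)

for ta in t:
    charge_q = ta @ M        # outgoing quark:     +t^a_{c1 c1'} M_{c1' c2}
    charge_qbar = -(M @ ta)  # outgoing antiquark: -t^a_{c2' c2} M_{c1 c2'}
    # Color conservation: the color charges of all external legs sum to zero
    assert np.allclose(charge_q + charge_qbar, 0)
```

For the singlet amplitude the relation reduces to the vanishing of the commutator $[t^{a},\mathbb{1}]$, which is trivially exact for every generator.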
### IV.2 Consistency between TAM and CAS
To compare the results obtained from the TAM with those from the CAS, we introduce the quantity $\mathcal{F}(\Gamma_{A},\Gamma_{B})$:
$\mathcal{F}(\Gamma_{A},\Gamma_{B})=\sum_{\lambda_{1},\lambda_{2}}\bar{U}(p_{2},\lambda_{2})\Gamma_{A}U(p_{1},\lambda_{1})\bar{U}(p_{1},\lambda_{1})\Gamma_{B}U(p_{2},\lambda_{2})\
,$ (40)
where $\Gamma_{A}$ and $\Gamma_{B}$ denote strings of Dirac gamma matrices. This quantity can be calculated through either the CAS or the TAM:
$\displaystyle\mathcal{F}_{\rm CAS}(\Gamma_{A},\Gamma_{B})=$
$\displaystyle{\rm
tr}[\Gamma_{A}(\not{p}_{1}+M_{1})\Gamma_{B}(\not{p}_{2}+M_{2})]\ ,$
$\displaystyle\mathcal{F}_{\rm TAM}(\Gamma_{A},\Gamma_{B})=$
$\displaystyle\frac{1}{16(k_{0}\cdot p_{1})(k_{0}\cdot
p_{2})}\sum_{\lambda_{1},\lambda_{2}}\Big{\\{}{\rm
tr}[\Gamma_{A}(\not{p}_{1}+M_{1})\Lambda(\lambda_{1},\lambda_{2})(\not{p}_{2}+M_{2})]$
$\displaystyle{\rm
tr}[\Gamma_{B}(\not{p}_{2}+M_{2})\Lambda(\lambda_{2},\lambda_{1})(\not{p}_{1}+M_{1})]\Big{\\}}\
.$ (41)
In 4 dimensions, the gamma matrices can be represented explicitly by $4\times 4$ matrices, and the construction (9) is compatible with the fermion spin-sum relation (10), which means that $\mathcal{F}_{\rm TAM}(\Gamma_{A},\Gamma_{B})$ is equivalent to $\mathcal{F}_{\rm CAS}(\Gamma_{A},\Gamma_{B})$. In general $D$ dimensions, however, this equivalence no longer holds666For example, consider $\Gamma_{A}=\gamma^{\mu}$, $\Gamma_{B}=\gamma_{\mu}$. By using the HVBM $\gamma_{5}$-scheme, we obtain $\mathcal{F}_{\rm CAS}(\gamma^{\mu},\gamma_{\mu})=4DM_{1}M_{2}-(4D-8)p_{1}\cdot p_{2}$, and $\mathcal{F}_{\rm TAM}(\gamma^{\mu},\gamma_{\mu})=16M_{1}M_{2}-8p_{1}\cdot p_{2}$.. The subtle difficulty is the inconsistency between the continuous space-time dimension and the fixed dimension of the spinor space, as revealed in Ref. Siegel:1980qs . Therefore, we may write
$\mathcal{F}_{\rm CAS}(\Gamma_{A},\Gamma_{B})-\mathcal{F}_{\rm
TAM}(\Gamma_{A},\Gamma_{B})=\mathcal{O}(\epsilon).$ (42)
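The 4-dimensional value quoted in footnote 6, $\mathcal{F}_{\rm CAS}(\gamma^{\mu},\gamma_{\mu})=16M_{1}M_{2}-8p_{1}\cdot p_{2}$ at $D=4$, can be reproduced numerically with an explicit Dirac representation (a sketch; the sample momenta are arbitrary, since this trace identity holds off shell as well):

```python
import numpy as np

# Dirac representation of the 4-dimensional gamma matrices
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(p):
    # p-slash = p_mu gamma^mu = sum_mu g_{mu mu} p^mu gamma^mu
    return sum(metric[m, m] * p[m] * gamma[m] for m in range(4))

M1, M2 = 1.3, 0.7
p1 = np.array([1.0, 0.2, -0.3, 0.5])
p2 = np.array([0.8, -0.1, 0.4, 0.2])

# F_CAS(gamma^mu, gamma_mu) = sum_mu g_{mu mu} tr[gamma^mu (p1s+M1) gamma^mu (p2s+M2)]
A = slash(p1) + M1 * np.eye(4)
B = slash(p2) + M2 * np.eye(4)
F = sum(metric[m, m] * np.trace(gamma[m] @ A @ gamma[m] @ B) for m in range(4))

assert np.isclose(F.real, 16 * M1 * M2 - 8 * (p1 @ metric @ p2))
```

The check rests on the 4-dimensional contraction identities $\gamma^{\mu}\gamma^{\nu}\gamma_{\mu}=-2\gamma^{\nu}$ and $\gamma^{\mu}\gamma_{\mu}=4$, which fail at $D\neq 4$ and produce the $\mathcal{O}(\epsilon)$ mismatch of Eq. (42).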
At NLO, one may worry that the $\mathcal{O}(\epsilon)$ term will interfere with the UV or IR singularities, eventually leading to terms that are finite or even divergent as $\epsilon\to 0$. In fact, the $\mathcal{O}(\epsilon)$ term arises from the different treatment of spinors, and its effect is eliminated once all pieces of the NLO corrections are summed up, as discussed in the preceding subsection.
In the HV dimensional regularization scheme, all momenta and gluons outside the loop are treated as 4-dimensional. The product of the one-loop amplitude and the Born amplitude takes the form $\mathcal{F}(\Gamma_{A},\overline{\Gamma}_{B})$, where $\overline{\Gamma}_{B}$ denotes a string of 4-dimensional Dirac gamma matrices coming from the Born amplitude. In this case, we have
$\mathcal{F}_{\rm CAS}(\Gamma_{A},\overline{\Gamma}_{B})-\mathcal{F}_{\rm TAM}(\Gamma_{A},\overline{\Gamma}_{B})=0,$ (43)
which indicates that consistency is obtained even without combining the virtual and real corrections.
## V Example
To demonstrate the calculation scheme proposed in Sec. IV, in this section we apply it to the calculation of the NLO QCD corrections to the $g+g\to t+\bar{t}$ and $q+\bar{q}\to t+\bar{t}$ processes. We find that the final results are consistent with those from the CAS. Another successful application of the TAM at one-loop level can be found in Ref. Chen:2012ju , where the Higgs boson decay to $l\bar{l}\gamma$ was studied.
The momenta and polarization states of incoming and outgoing particles are
denoted as:
$\displaystyle g(p_{1},\lambda_{1})+g(p_{2},\lambda_{2})\to
t(p_{3},\lambda_{3})+\bar{t}(p_{4},\lambda_{4}),$ $\displaystyle
q(p_{1},\lambda_{1})+\bar{q}(p_{2},\lambda_{2})\to
t(p_{3},\lambda_{3})+\bar{t}(p_{4},\lambda_{4}).$ (44)
Here, the initial and final state particles are all on their mass shells: $p_{1}^{2}=p_{2}^{2}=0$ and $p_{3}^{2}=p_{4}^{2}=m^{2}_{t}$. The gluons’ polarization vectors are denoted as $\epsilon_{1}^{(\lambda_{1})}$ and $\epsilon_{2}^{(\lambda_{2})}$, which satisfy the constraints $\epsilon_{1}^{(\lambda_{1})}\cdot\epsilon_{1}^{(\lambda_{1})*}=\epsilon_{2}^{(\lambda_{2})}\cdot\epsilon_{2}^{(\lambda_{2})*}=-1$ and $p_{1}\cdot\epsilon_{1}^{(\lambda_{1})}=p_{2}\cdot\epsilon_{2}^{(\lambda_{2})}=0$.
In the center-of-mass system, the momenta and gluons’ polarization vectors are
chosen as:
$\displaystyle p_{1}=\frac{\sqrt{s}}{2}(1,0,0,1),\quad
p_{2}=\frac{\sqrt{s}}{2}(1,0,0,-1),$ $\displaystyle
p_{3}=\frac{\sqrt{s}}{2}(1,0,r_{y},r_{z}),\quad
p_{4}=\frac{\sqrt{s}}{2}(1,0,-r_{y},-r_{z}),$ (45)
and
$\epsilon_{1}^{(1)}=\epsilon_{2}^{(1)}=(0,1,0,0),\quad\epsilon_{1}^{(2)}=\epsilon_{2}^{(2)}=(0,0,1,0),$
(46)
where $s=(p_{1}+p_{2})^{2}$, and the on-shell condition gives the constraint $r_{y}^{2}+r_{z}^{2}=1-4m_{t}^{2}/s$. The helicity states of the fermions are defined in the KS basis with the auxiliary vectors chosen as:
$k_{0}=(1,1,0,0),\quad k_{1}=(0,0,0,1).$ (47)
Then the tree-level and one-loop helicity amplitudes can be calculated
straightforwardly by using the spinor product formula (14).
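The kinematic setup of Eqs. (44)-(47) can be verified numerically (a minimal sketch using the sample point $\sqrt{s}=500$, $m_{t}=173$, $r_{z}=0.6$ quoted later in this section):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])

def dot(a, b):
    return a @ g @ b

sqrt_s, mt, rz = 500.0, 173.0, 0.6
s = sqrt_s**2
ry = np.sqrt(1.0 - 4.0 * mt**2 / s - rz**2)  # from r_y^2 + r_z^2 = 1 - 4 m_t^2/s

E = sqrt_s / 2.0
p1 = np.array([E, 0.0, 0.0,  E])             # Eq. (45)
p2 = np.array([E, 0.0, 0.0, -E])
p3 = np.array([E, 0.0,  E * ry,  E * rz])
p4 = np.array([E, 0.0, -E * ry, -E * rz])

eps1 = np.array([0.0, 1.0, 0.0, 0.0])        # Eq. (46), shared by both gluons
eps2 = np.array([0.0, 0.0, 1.0, 0.0])

assert np.isclose(dot(p1, p1), 0.0) and np.isclose(dot(p2, p2), 0.0)
assert np.isclose(dot(p3, p3), mt**2) and np.isclose(dot(p4, p4), mt**2)
assert np.allclose(p1 + p2, p3 + p4)         # momentum conservation
assert dot(eps1, eps1) == -1.0 and dot(p1, eps1) == 0.0 and dot(p2, eps2) == 0.0
```

The on-shell checks $p_{3}^{2}=p_{4}^{2}=m_{t}^{2}$ follow directly from the constraint $r_{y}^{2}+r_{z}^{2}=1-4m_{t}^{2}/s$.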
In the computation of the one-loop amplitudes, HV dimensional regularization is adopted to regularize the UV and IR singularities. For the $\gamma_{5}$ introduced in Eq. (14), the HVBM prescription is adopted. The UV singularities are removed by the renormalization procedure. The renormalization constants include $Z_{2}$, $Z_{m}$, $Z_{l}$, $Z_{3}$ and $Z_{g}$, corresponding to the heavy quark field, heavy quark mass, light quark fields, gluon field and strong coupling constant, respectively. We define $Z_{2}$, $Z_{m}$, $Z_{l}$ and $Z_{3}$ in the on-shell (OS) scheme and $Z_{g}$ in the modified minimal-subtraction ($\overline{\rm MS}$) scheme. The corresponding counterterms are
$\displaystyle\delta Z_{2}^{\rm OS}=$ $\displaystyle-
C_{F}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm
UV}}+\frac{2}{\epsilon_{\rm
IR}}-3\gamma_{E}+3\ln\frac{4\pi\mu^{2}}{m_{t}^{2}}+4\right],$
$\displaystyle\delta Z_{m}^{\rm OS}=$
$\displaystyle-3C_{F}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm
UV}}-\gamma_{E}+\ln\frac{4\pi\mu^{2}}{m_{t}^{2}}+\frac{4}{3}\right],$
$\displaystyle\delta Z_{l}^{\overline{\rm OS}}=$ $\displaystyle-
C_{F}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm
UV}}-\frac{1}{\epsilon_{\rm IR}}\right],$ $\displaystyle\delta
Z_{3}^{\overline{\rm OS}}=$
$\displaystyle\frac{\alpha_{s}}{4\pi}\left[(\beta^{\prime}_{0}-2C_{A})\left(\frac{1}{\epsilon_{\rm
UV}}-\frac{1}{\epsilon_{\rm
IR}}\right)-\frac{4}{3}T_{F}\sum_{i=c,b,t}\left(\frac{1}{\epsilon_{\rm
UV}}-\gamma_{E}+\ln\frac{4\pi\mu^{2}}{m_{i}^{2}}\right)\right],$
$\displaystyle\delta Z_{g}^{\overline{\rm MS}}=$
$\displaystyle-\frac{\beta_{0}}{2}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm
UV}}-\gamma_{E}+\ln(4\pi)\right].$ (48)
Here, $\mu$ is the renormalization scale and $\gamma_{E}$ is Euler’s constant; $C_{A}=3$, $C_{F}=4/3$ and $T_{F}=1/2$ are QCD color factors; $\beta_{0}=(11/3)C_{A}-(4/3)T_{F}n_{f}$ is the one-loop coefficient of the QCD beta function, in which $n_{f}=6$ is the number of active quark flavors; $\beta^{\prime}_{0}=(11/3)C_{A}-(4/3)T_{F}n_{lf}$, with $n_{lf}=3$ the number of light quark flavors.
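For the numerical inputs above, the two beta-function coefficients evaluate as follows (a trivial check with exact rational arithmetic):

```python
from fractions import Fraction as Fr

# QCD color factors and flavor numbers from the text
CA, TF = Fr(3), Fr(1, 2)
nf, nlf = 6, 3

beta0 = Fr(11, 3) * CA - Fr(4, 3) * TF * nf    # one-loop QCD beta coefficient
beta0p = Fr(11, 3) * CA - Fr(4, 3) * TF * nlf  # light-flavor-only variant

assert beta0 == 7    # (11/3)*3 - (4/3)*(1/2)*6
assert beta0p == 9   # (11/3)*3 - (4/3)*(1/2)*3
```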
In the real corrections, IR singularities arise from the phase-space integration over the additional emitted gluon, whose momentum is denoted by $p_{5}$ hereafter. To isolate the singularities, we follow the lines of Ref. Harris:2001sx . By introducing two cutoff parameters $\delta_{s}$ and $\delta_{c}$, the real correction phase space is split into three regions:
* •
soft: $p_{5}^{0}\leq\delta_{s}\sqrt{s}/2$;
* •
hard-collinear: including the collinear-to-$p_{1}$ region where $p_{5}^{0}>\delta_{s}\sqrt{s}/2$ and $(p_{1}+p_{5})^{2}\leq\delta_{c}s$, and the collinear-to-$p_{2}$ region where $p_{5}^{0}>\delta_{s}\sqrt{s}/2$ and $(p_{2}+p_{5})^{2}\leq\delta_{c}s$;
* •
hard-non-collinear: $p_{5}^{0}>\delta_{s}\sqrt{s}/2$ and
$(p_{1}+p_{5})^{2}>\delta_{c}s$ and $(p_{2}+p_{5})^{2}>\delta_{c}s$.
The soft and hard-collinear contributions can be obtained through Eqs. (2.22) and (2.74) of Ref. Harris:2001sx , with the unpolarized Born cross section $\sigma^{0}$ replaced by the helicity cross section $\sigma^{0}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$, which should be calculated through the TAM. The remaining hard-non-collinear contribution is IR finite and can be calculated numerically in 4 dimensions. After summing up these three pieces, the dependence on the technical cuts is eliminated, as expected.
The total NLO corrections are obtained by summing up the virtual and real corrections. We verify that all singularities cancel exactly. By taking the same renormalization scheme, we find that our results agree with Fig. 3 and Fig. 4 of Ref. Nason:1987xz within 1% accuracy.
For illustrative purposes, we present the numerical values of $\mathcal{A}^{\rm loop}$, $\mathcal{A}^{\rm CT}$, $\mathcal{A}^{\rm soft}_{\rm real}$ and $\mathcal{A}^{\rm HC}_{\rm real}$, which are defined as:
$\displaystyle\mathcal{A}^{\rm loop}=2\sum_{\lambda}{\rm
Re}\left[\mathcal{M}^{\rm loop}(\mathcal{M}^{\rm tree})^{*}\right],$
$\displaystyle\mathcal{A}^{\rm CT}=2\sum_{\lambda}{\rm
Re}\left[\mathcal{M}^{\rm CT}(\mathcal{M}^{\rm tree})^{*}\right],$
$\displaystyle\mathcal{A}^{\rm soft}_{\rm
real}=g_{s}^{2}\mu^{2\epsilon}\int\limits_{p_{5}^{0}<\delta_{s}\sqrt{s}/2}\frac{d^{D-1}p_{5}}{2p_{5}^{0}(2\pi)^{D-1}}\sum_{i,j=1}^{4}\frac{p_{i}\cdot
p_{j}}{(p_{i}\cdot p_{5})(p_{j}\cdot p_{5})}$
$\displaystyle\quad\quad\quad\sum_{\lambda}\left[\mathcal{M}^{\rm
tree}_{c_{1}\cdots c_{i^{\prime}}\cdots c_{j}\cdots
c_{4}}T^{a}_{c_{i}c_{i^{\prime}}}\right]\left[\mathcal{M}^{\rm
tree}_{c_{1}\cdots c_{i}\cdots c_{j^{\prime}}\cdots
c_{4}}T^{a}_{c_{j}c_{j^{\prime}}}\right]^{*},$ $\displaystyle\mathcal{A}^{\rm
HC}_{\rm
real}=\frac{g_{s}^{2}}{4\pi^{2}}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\mu_{f}^{2}}\right)\frac{A_{1}}{\epsilon}\sum_{\lambda}|\mathcal{M}^{\rm
tree}|^{2}\ .$ (49)
Here, $\mu_{f}$ is the initial-state factorization scale; for the gluon-gluon channel $A_{1}=-n_{lf}/3+C_{A}(11/6+\ln\delta_{s})$, and for the quark-antiquark channel $A_{1}=C_{F}(3/2+2\ln\delta_{s})$. Note that, for convenience of comparison, the hard-collinear contribution $\mathcal{A}^{\rm HC}_{\rm real}$ here includes only the singular or approach-dependent (TAM or CAS) terms.
The heavy-quark masses are taken as $m_{t}=173$, $m_{b}=4.9$, $m_{c}=1.5$,
and the strong coupling $g_{s}$ is set to one. The numerical results for the
$g+g\to t+\bar{t}$ and $q+\bar{q}\to t+\bar{t}$ processes at the point
$s=500^{2}$, $r_{z}=0.6$, $\mu_{r}=\mu_{f}=500$ are given in Table 1 and Table
2, respectively. For comparison, we also list the results obtained from the
CAS, where the CDR scheme is employed. It can be seen that for each piece of
the NLO corrections the results from the TAM and the CAS are generally
different. However, after summing all pieces, consistent results are obtained.
These numerical results confirm our statement in Sec. IV.2.
Table 1: Numerical results for $g+g\to t+\bar{t}$ process. | TAM with HV | CAS with CDR
---|---|---
$\mathcal{A}^{\rm loop}$ | $\scriptstyle-10.08118054436082\epsilon^{-2}-3.437853927720160\epsilon^{-1}$ $\scriptstyle+75.75337372837709$ | $\scriptstyle-10.08118054436082\epsilon^{-2}+17.83506953428062\epsilon^{-1}$ $\scriptstyle+69.74008746354726$
$\mathcal{A}^{\rm CT}$ | $\scriptstyle-20.99049132259232\epsilon^{-1}-87.04718302407158$ | $\scriptstyle-20.47530745414479\epsilon^{-1}-41.20686937309857$
$\mathcal{A}^{\rm soft}_{\rm real}$ | $\scriptstyle 10.08118054436082\epsilon^{-2}+9.306574433771248\epsilon^{-1}$ $\scriptstyle-20.16236108872165\ln\delta_{s}\epsilon^{-1}-20.38707598960415$ $\scriptstyle+20.16236108872165\ln^{2}\delta_{s}-18.61314886754250\ln\delta_{s}$ | $\scriptstyle 10.08118054436082\epsilon^{-2}-12.48153289667707\epsilon^{-1}$ $\scriptstyle-20.16236108872165\ln\delta_{s}\epsilon^{-1}-27.53194238007485$ $\scriptstyle+20.16236108872165\ln^{2}\delta_{s}+24.96306579335413\ln\delta_{s}$
$\mathcal{A}^{\rm HC}_{\rm real}$ | $\scriptstyle 20.16236108872165\ln\delta_{s}\epsilon^{-1}+15.12177081654124\epsilon^{-1}$ $\scriptstyle+39.39339412989338\ln\delta_{s}+29.54504559742003$ | $\scriptstyle 20.16236108872165\ln\delta_{s}\epsilon^{-1}+15.12177081654124\epsilon^{-1}$ $\scriptstyle-4.182820531003251\ln\delta_{s}-3.137115398252439$
sum | $\scriptstyle 20.78024526235088\ln\delta_{s}+20.16236108872165\ln^{2}\delta_{s}$ $\scriptstyle-2.1358396878786$ | $\scriptstyle 20.78024526235088\ln\delta_{s}+20.16236108872165\ln^{2}\delta_{s}$ $\scriptstyle-2.1358396878786$
Table 2: Numerical results for $q+\bar{q}\to t+\bar{t}$ process. | TAM with HV | CAS with CDR
---|---|---
$\mathcal{A}^{\rm loop}$ | $\scriptstyle-0.3998042041986813\epsilon^{-2}+0.6577945202719476\epsilon^{-1}$ $\scriptstyle+6.474889692802957$ | $\scriptstyle-0.3998042041986813\epsilon^{-2}+1.198174166364416\epsilon^{-1}$ $\scriptstyle+5.585807571271599$
$\mathcal{A}^{\rm CT}$ | $\scriptstyle-1.649192342319560\epsilon^{-1}-5.294770961065026$ | $\scriptstyle-1.649192342319560\epsilon^{-1}-3.065704920933595$
$\mathcal{A}^{\rm soft}_{\rm real}$ | $\scriptstyle 0.399804204198681\epsilon^{-2}+0.3916915157495908\epsilon^{-1}$ $\scriptstyle-0.7996084083973625\ln\delta_{s}\epsilon^{-1}-1.043296995492568$ $\scriptstyle+0.7996084083973625\ln^{2}\delta_{s}-0.7833830314991816\ln\delta_{s}$ | $\scriptstyle 0.399804204198681\epsilon^{-2}-0.1486881303428773\epsilon^{-1}$ $\scriptstyle-0.7996084083973625\ln\delta_{s}\epsilon^{-1}-1.572711444953939$ $\scriptstyle+0.7996084083973625\ln^{2}\delta_{s}+0.2973762606857546\ln\delta_{s}$
$\mathcal{A}^{\rm HC}_{\rm real}$ | $\scriptstyle 0.7996084083973625\ln\delta_{s}\epsilon^{-1}+0.5997063062980219\epsilon^{-1}$ $\scriptstyle+1.562281770620308\ln\delta_{s}+1.171711327965231$ | $\scriptstyle 0.7996084083973625\ln\delta_{s}\epsilon^{-1}+0.5997063062980219\epsilon^{-1}$ $\scriptstyle+0.4815224784353714\ln\delta_{s}+0.3611418588265285$
sum | $\scriptstyle 0.7788987391211260\ln\delta_{s}+0.7996084083973625\ln^{2}\delta_{s}$ $\scriptstyle+1.30853306421059$ | $\scriptstyle 0.7788987391211260\ln\delta_{s}+0.7996084083973625\ln^{2}\delta_{s}$ $\scriptstyle+1.30853306421059$
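The cancellation pattern visible in the tables can be checked mechanically. The sketch below sums the Laurent and $\ln\delta_{s}$ coefficients of the four TAM pieces from Table 1 (plain Python; the $\ln\delta_{s}$ coefficient of $\mathcal{A}^{\rm HC}_{\rm real}$ is entered with the $+$ sign that consistency with the sum row requires):

```python
# Coefficients {eps^-2, eps^-1, finite, ln(d_s)/eps, ln(d_s), ln^2(d_s)}
# of the four NLO pieces for g+g -> t+tbar (TAM column of Table 1).
pieces = {
    "loop": [-10.08118054436082, -3.437853927720160, 75.75337372837709,
             0.0, 0.0, 0.0],
    "CT":   [0.0, -20.99049132259232, -87.04718302407158,
             0.0, 0.0, 0.0],
    "soft": [10.08118054436082, 9.306574433771248, -20.38707598960415,
             -20.16236108872165, -18.61314886754250, 20.16236108872165],
    "HC":   [0.0, 15.12177081654124, 29.54504559742003,
             20.16236108872165, 39.39339412989338, 0.0],
}

total = [sum(p[i] for p in pieces.values()) for i in range(6)]

# The 1/eps^2, 1/eps and ln(d_s)/eps terms cancel in the sum ...
assert all(abs(total[i]) < 1e-9 for i in (0, 1, 3))
# ... leaving only the technical-cut logs and the finite remainder
# quoted in the "sum" row of Table 1.
assert abs(total[2] - (-2.1358396878786)) < 1e-9
assert abs(total[4] - 20.78024526235088) < 1e-9
assert abs(total[5] - 20.16236108872165) < 1e-9
```

The same check goes through for the CAS column and for Table 2.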
## VI Summary
The helicity amplitude method is not merely a technique for carrying out
perturbative calculations of Feynman diagrams; in phenomenological studies it
also provides more information than the conventional amplitude-squaring
approach. In this paper, we reviewed the basic idea of the TAM and discussed
how to generalize this method to NLO QCD calculations. By analyzing the
singularity structures of the virtual and real corrections, we proposed a
scheme that guarantees unitarity in the NLO QCD calculation. This scheme is
legitimate under both the CDR and HV schemes, and is compatible with any
self-consistent $\gamma_{5}$ prescription. We also provided an illustrative
example of this scheme. Another noteworthy aspect of this work is that,
instead of relying on the KLN theorem, we showed the cancellation of the soft
and collinear singularities explicitly, at least at NLO, by using the power
counting technique.
Finally, it should be mentioned that the cancellation of IR divergences at
next-to-next-to-leading order (NNLO) is much more complicated than at NLO. To
tackle this issue, the techniques developed in Refs. Grammer:1973db ;
Catani:1998bh ; Becher:2009kw ; Becher:2009qa ; Feige:2014wja may be useful.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of
China (NSFC) under Grants 11975236, 11635009, and 12047553.
## References
* (1) P. De Causmaecker, R. Gastmans, W. Troost and T. T. Wu, Nucl. Phys. B 206, 53-60 (1982)
* (2) F. A. Berends, R. Kleiss, P. De Causmaecker, R. Gastmans, W. Troost and T. T. Wu, Nucl. Phys. B 206, 61-89 (1982)
* (3) K. Nam and M. J. Moravcsik, J. Math. Phys. 25, 820 (1984)
* (4) R. Kleiss and W. Stirling, Nucl. Phys. B 262, 235-262 (1985)
* (5) Z. Xu, D. H. Zhang and L. Chang, Nucl. Phys. B 291, 392-428 (1987)
* (6) S. J. Parke and T. Taylor, Phys. Rev. Lett. 56, 2459 (1986)
* (7) F. A. Berends and W. Giele, Nucl. Phys. B 306, 759-808 (1988)
* (8) C. H. Chang and Y. Q. Chen, Phys. Rev. D 46, 3845 (1992); Phys. Rev. D 50, 6013 (1994) (erratum).
* (9) E. Yehudai, FERMILAB-PUB-92-256-T, [arXiv:hep-ph/9209293 [hep-ph]].
* (10) A. Ballestrero and E. Maina, Phys. Lett. B 350, 225-233 (1995) [arXiv:hep-ph/9403244 [hep-ph]].
* (11) R. Vega and J. Wudka, Phys. Rev. D 53, 5286-5292 (1996) [erratum: Phys. Rev. D 56, 6037-6038 (1997)] doi:10.1103/PhysRevD.56.6037 [arXiv:hep-ph/9511318 [hep-ph]].
* (12) A. L. Bondarev, [arXiv:hep-ph/9710398 [hep-ph]].
* (13) V. Andreev, Phys. Rev. D 62, 014029 (2000) [arXiv:hep-ph/0101140 [hep-ph]].
* (14) C. F. Qiao, Phys. Rev. D 67, 097503 (2003) [arXiv:hep-ph/0302128 [hep-ph]].
* (15) F. Cachazo, P. Svrcek and E. Witten, JHEP 09, 006 (2004) [arXiv:hep-th/0403047 [hep-th]].
* (16) R. Britto, F. Cachazo and B. Feng, Nucl. Phys. B 715, 499-522 (2005) [arXiv:hep-th/0412308 [hep-th]].
* (17) C. Schwinn and S. Weinzierl, JHEP 05, 006 (2005) [arXiv:hep-th/0503015 [hep-th]].
* (18) Z. Bern and D. A. Kosower, Nucl. Phys. B 379, 451-561 (1992)
* (19) Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower, Nucl. Phys. B 425, 217-260 (1994) [arXiv:hep-ph/9403226 [hep-ph]].
* (20) Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower, Nucl. Phys. B 435, 59-101 (1995) [arXiv:hep-ph/9409265 [hep-ph]].
* (21) A. Brandhuber, B. J. Spence and G. Travaglini, Nucl. Phys. B 706, 150-180 (2005) [arXiv:hep-th/0407214 [hep-th]].
* (22) M. x. Luo and C. k. Wen, JHEP 11, 004 (2004) [arXiv:hep-th/0410045 [hep-th]].
* (23) I. Bena, Z. Bern, D. A. Kosower and R. Roiban, Phys. Rev. D 71, 106010 (2005) [arXiv:hep-th/0410054 [hep-th]].
* (24) C. Quigley and M. Rozali, JHEP 01, 053 (2005) [arXiv:hep-th/0410278 [hep-th]].
* (25) J. Bedford, A. Brandhuber, B. J. Spence and G. Travaglini, Nucl. Phys. B 706, 100-126 (2005) [arXiv:hep-th/0410280 [hep-th]].
* (26) R. Britto, F. Cachazo and B. Feng, Nucl. Phys. B 725, 275-305 (2005) [arXiv:hep-th/0412103 [hep-th]].
* (27) R. Roiban, M. Spradlin and A. Volovich, Phys. Rev. Lett. 94, 102002 (2005) [arXiv:hep-th/0412265 [hep-th]].
* (28) Z. Bern, L. J. Dixon and D. A. Kosower, Phys. Rev. D 71, 105013 (2005) [arXiv:hep-th/0501240 [hep-th]].
* (29) S. J. Bidder, N. E. J. Bjerrum-Bohr, D. C. Dunbar and W. B. Perkins, Phys. Lett. B 612, 75-88 (2005) [arXiv:hep-th/0502028 [hep-th]].
* (30) M. L. Mangano and S. J. Parke, Phys. Rept. 200, 301-367 (1991) [arXiv:hep-th/0509223 [hep-th]].
* (31) L. J. Dixon, [arXiv:hep-ph/9601359 [hep-ph]].
* (32) Z. Bern et al. [NLO Multileg Working Group], [arXiv:0803.0494 [hep-ph]].
* (33) H. Elvang and Y. t. Huang, [arXiv:1308.1697 [hep-th]].
* (34) L. J. Dixon, [arXiv:1310.5353 [hep-ph]].
* (35) C. Bouchiat and L. Michel, Nucl. Phys. 5, 416 (1958).
* (36) Z. Kunszt, A. Signer and Z. Trocsanyi, Nucl. Phys. B 420, 550-564 (1994) [arXiv:hep-ph/9401294 [hep-ph]].
* (37) S. Catani, S. Dittmaier and Z. Trocsanyi, Phys. Lett. B 500, 149-160 (2001) [arXiv:hep-ph/0011222 [hep-ph]].
* (38) J. Collins, Camb. Monogr. Part. Phys. Nucl. Phys. Cosmol. 32, 1-624 (2011)
* (39) R. E. Cutkosky, J. Math. Phys. 1, 429-433 (1960)
* (40) G. ’t Hooft and M. J. G. Veltman, Nucl. Phys. B 44, 189-213 (1972)
* (41) W. Siegel, Phys. Lett. B 84, 193-196 (1979)
* (42) D. Stockinger, JHEP 03, 076 (2005) [arXiv:hep-ph/0503129 [hep-ph]].
* (43) S. Catani, M. H. Seymour and Z. Trocsanyi, Phys. Rev. D 55, 6819-6829 (1997) [arXiv:hep-ph/9610553 [hep-ph]].
* (44) A. Signer and D. Stockinger, Nucl. Phys. B 808, 88-120 (2009) [arXiv:0807.4424 [hep-ph]].
* (45) K. Fabricius, I. Schmitt, G. Kramer and G. Schierholz, Z. Phys. C 11, 315 (1981)
* (46) G. Kramer and B. Lampe, Fortsch. Phys. 37, 161 (1989) DESY-86-119.
* (47) B. W. Harris and J. F. Owens, Phys. Rev. D 65, 094032 (2002) [arXiv:hep-ph/0102128 [hep-ph]].
* (48) R. K. Ellis, D. A. Ross and A. E. Terrano, Nucl. Phys. B 178, 421-456 (1981)
* (49) S. Catani and M. H. Seymour, Nucl. Phys. B 485, 291-419 (1997) [erratum: Nucl. Phys. B 510, 503-504 (1998)] [arXiv:hep-ph/9605323 [hep-ph]].
* (50) L. Phaf and S. Weinzierl, JHEP 04, 006 (2001) [arXiv:hep-ph/0102207 [hep-ph]].
* (51) T. Kinoshita, J. Math. Phys. 3, 650-677 (1962)
* (52) T. D. Lee and M. Nauenberg, Phys. Rev. 133, B1549-B1562 (1964)
* (53) P. Breitenlohner and D. Maison, Commun. Math. Phys. 52, 11-75 (1977)
* (54) D. Kreimer, Phys. Lett. B 237, 59-62 (1990)
* (55) J. G. Korner, D. Kreimer and K. Schilcher, Z. Phys. C 54, 503-512 (1992)
* (56) W. Siegel, Phys. Lett. B 94, 37-40 (1980)
* (57) L. B. Chen, C. F. Qiao and R. L. Zhu, Phys. Lett. B 726, 306-311 (2013) [erratum: Phys. Lett. B 808, 135629 (2020)] [arXiv:1211.6058 [hep-ph]].
* (58) P. Nason, S. Dawson and R. K. Ellis, Nucl. Phys. B 303, 607-633 (1988)
* (59) G. Grammer, Jr. and D. R. Yennie, Phys. Rev. D 8, 4332-4344 (1973)
* (60) S. Catani, Phys. Lett. B 427, 161-171 (1998) [arXiv:hep-ph/9802439 [hep-ph]].
* (61) T. Becher and M. Neubert, Phys. Rev. D 79, 125004 (2009) [erratum: Phys. Rev. D 80, 109901 (2009)] [arXiv:0904.1021 [hep-ph]].
* (62) T. Becher and M. Neubert, JHEP 06, 081 (2009) [erratum: JHEP 11, 024 (2013)] [arXiv:0903.1126 [hep-ph]].
* (63) I. Feige and M. D. Schwartz, Phys. Rev. D 90, no.10, 105020 (2014) [arXiv:1403.6472 [hep-ph]].
* (64) S. Mandelstam, Nucl. Phys. B 213, 149-168 (1983)
* (65) G. Leibbrandt, Phys. Rev. D 29, 1699 (1984)
## Appendix A: Soft singularities of external self-energy diagrams
In this appendix, we present the derivation of Eq. (24). Since we work in
lightcone gauge, a proper prescription must be introduced to treat the
unphysical pole $(r\cdot k)^{-1}$. It has been shown that the usual principal-
value prescription is incompatible with the Wick rotation and leads to a
violation of the Ward identity Leibbrandt:1983pj . These difficulties can be
avoided by using the Mandelstam-Leibbrandt (ML) prescription Mandelstam:1982cb
; Leibbrandt:1983pj :
$\frac{1}{k\cdot r}\to\frac{k\cdot r^{*}}{(k\cdot r^{*})(k\cdot
r)+i\varepsilon}=\frac{1}{k\cdot r+i\varepsilon\ {\rm sign}(r^{*}\cdot k)}\ ,$
(50)
where $r^{*}=(r^{0},-\vec{r})$. Although our analysis here does not involve
any explicit computation of Feynman integrals, we nevertheless use the ML
prescription.
The self-energy correction to the external massive quark takes the form
$\frac{1}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{softSEQuark.eps}}}=\frac{1}{2}\bar{u}(p)[-i\Sigma_{c_{i}c_{i^{\prime}}}(p)]\frac{i(\not{p}+m)}{p^{2}-m^{2}}\Gamma_{c_{i^{\prime}}}(p)\bigg{|}_{p_{0}\to\omega_{p}},$
(51)
where $\omega_{p}=\sqrt{|\vec{p}|^{2}+m^{2}}$, and
$-i\Sigma_{c_{i}c_{i^{\prime}}}(p)=g_{s}^{2}C_{F}\delta_{c_{i}c_{i^{\prime}}}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{\gamma^{\mu}(\not{p}-\not{k}+m)\gamma^{\nu}}{k^{2}[(p-k)^{2}-m^{2}]}\Pi_{\mu\nu}(k).$
(52)
By performing the Passarino-Veltman tensor reduction,
$-i\Sigma_{c_{i}c_{i^{\prime}}}(p)$ can be reduced to the form (the vector
$r^{*}$ is introduced by the ML prescription; since our derivation does not
involve explicit integral computation, the final expression should be
prescription independent, and as a check we performed an analysis without any
$r^{*}$ involved and obtained a consistent result):
$-i\Sigma_{c_{i}c_{i^{\prime}}}(p)=g_{s}^{2}C_{F}\delta_{c_{i}c_{i^{\prime}}}[f_{1}m+f_{2}(\not{p}-m)+f_{3}\not{r}+f_{4}\not{r}^{*}],$
(53)
where the $f_{i}$ should be expanded near $p_{0}=\omega_{p}$:
$f_{i}=f_{i}^{(0)}+f_{i}^{(1)}(p_{0}-\omega_{p})+\mathcal{O}\left((p_{0}-\omega_{p})^{2}\right),$
(54)
with
$f_{i}^{(n)}=\frac{d^{n}f_{i}}{dp_{0}^{n}}\bigg{|}_{p_{0}=\omega_{\vec{p}}}.$
(55)
By rescaling the loop momentum as $k\to\kappa^{2}k$, we can perform power
counting to pick out the terms that are potentially soft divergent:
$\displaystyle f_{1}^{(0)}\overset{\rm soft}{\sim}$ $\displaystyle 0,\quad
f_{2}^{(0)}\overset{\rm soft}{\sim}0\ ,\quad f_{3}^{(0)}\overset{\rm
soft}{\sim}0\ ,\quad f_{4}^{(0)}\overset{\rm soft}{\sim}0,$ $\displaystyle
f_{1}^{(1)}\overset{\rm soft}{\sim}$ $\displaystyle\frac{2(p\cdot
r)}{m^{2}(r\cdot r^{*})-2(p\cdot r)(p\cdot r^{*})}[\omega_{p}(r\cdot
r^{*})I_{1}-(p\cdot r^{*})I_{2}-(p\cdot r)I_{3}]-2I_{2},$ $\displaystyle
f_{3}^{(1)}\overset{\rm soft}{\sim}$ $\displaystyle\frac{2(p\cdot
r)}{m^{2}(r\cdot r^{*})-2(p\cdot r)(p\cdot
r^{*})}\bigg{\\{}\bigg{[}\frac{\omega_{p}m^{2}(r\cdot r^{*})}{p\cdot
r}-3\omega_{p}(p\cdot r^{*})\bigg{]}I_{1}+\frac{(p\cdot r^{*})^{2}}{r\cdot
r^{*}}I_{2}$ $\displaystyle+\bigg{[}m^{2}-\frac{(p\cdot r)(p\cdot
r^{*})}{r\cdot r^{*}}\bigg{]}I_{3}\bigg{\\}},$ $\displaystyle
f_{4}^{(1)}\overset{\rm soft}{\sim}$ $\displaystyle\frac{2(p\cdot
r)}{m^{2}(r\cdot r^{*})-2(p\cdot r)(p\cdot r^{*})}\bigg{\\{}-\omega_{p}(p\cdot
r)I_{1}+\bigg{[}m^{2}-\frac{(p\cdot r)(p\cdot r^{*})}{r\cdot
r^{*}}\bigg{]}I_{2}$ $\displaystyle+\frac{(p\cdot r)^{2}}{r\cdot
r^{*}}I_{3}\bigg{\\}},$ (56)
where
$\displaystyle
I_{1}=\int\frac{d^{D}k}{(2\pi)^{D}}\frac{1}{k^{2}[(p-k)^{2}-m^{2}](k\cdot
r)}\bigg{|}_{p_{0}=\omega_{p}}\overset{\rm
soft}{\sim}-\frac{1}{2}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{1}{k^{2}(k\cdot
p)(k\cdot r)}\ ,$ $\displaystyle
I_{2}=\frac{d}{dp_{0}}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{1}{k^{2}[(p-k)^{2}-m^{2}]}\bigg{|}_{p_{0}=\omega_{p}}\overset{\rm
soft}{\sim}-\frac{\omega_{p}}{2}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{1}{k^{2}(k\cdot
p)^{2}}\ ,$ $\displaystyle
I_{3}=\frac{d}{dp_{0}}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{k\cdot
r^{*}}{k^{2}[(p-k)^{2}-m^{2}](k\cdot
r)}\bigg{|}_{p_{0}=\omega_{p}}\overset{\rm
soft}{\sim}-\frac{\omega_{p}}{2}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{k\cdot
r^{*}}{k^{2}(k\cdot p)^{2}(k\cdot r)}\ .$ (57)
Note that, in order to regularize the soft divergence by dimensional
regularization, the derivative and the integral must be performed in the
following order:
$\frac{d}{dp_{0}}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{\cdots}{\cdots}\bigg{|}_{p_{0}=\omega_{p}}\to\int\frac{d^{D}k}{(2\pi)^{D}}\left(\frac{d}{dp_{0}}\frac{\cdots}{\cdots}\bigg{|}_{p_{0}=\omega_{p}}\right).$
(58)
Finally, we obtain the desired soft singular term:
$\displaystyle\frac{1}{2}\bar{u}(p)[-i\Sigma_{c_{i}c_{i^{\prime}}}(p)]\frac{i(\not{p}+m)}{p^{2}-m^{2}}\Gamma_{c_{i^{\prime}}}(p)\bigg{|}_{p_{0}\to\omega_{p}}$
$\displaystyle\overset{\rm soft}{\sim}$
$\displaystyle\frac{i}{2}g_{s}^{2}C_{F}\frac{m^{2}f_{1}^{(1)}+(p\cdot
r)f_{3}^{(1)}+(p\cdot
r^{*})f_{4}^{(1)}}{\omega_{p}}\bar{u}(p)\Gamma_{c_{i}}(p)$
$\displaystyle\overset{\rm soft}{\sim}$
$\displaystyle-\frac{i}{2}g_{s}^{2}C_{F}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{p\cdot\Pi(k)\cdot
p}{k^{2}(k\cdot p)^{2}}\bar{u}(p)\Gamma_{c_{i}}(p).$ (59)
The result for a massless quark can be obtained by simply setting $m=0$; we
have checked that the soft-collinear region does not produce new terms.
The self-energy correction to the external gluon is of the form
$\frac{1}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{softSEGluon.eps}}}=\frac{1}{4}\epsilon^{*\mu}(p)[-i\Sigma^{c_{i}c_{i^{\prime}}}_{\mu\nu}(p)]\frac{i\Pi^{\nu\nu^{\prime}}(p)}{p^{2}}\Gamma^{c_{i^{\prime}}}_{\nu^{\prime}}(p)\bigg{|}_{p_{0}\to|\vec{p}|},$
(60)
where
$\displaystyle-i\Sigma_{\mu\nu}^{c_{i}c_{i^{\prime}}}(p)=-C_{A}\delta^{c_{i}c_{i^{\prime}}}g_{s}^{2}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}$
$\displaystyle\frac{i\Pi^{\alpha\rho}(k)i\Pi^{\beta\sigma}(p-k)}{k^{2}(p-k)^{2}}$
$\displaystyle[g_{\rho\sigma}(2k-p)_{\mu}+g_{\sigma\mu}(2p-k)_{\rho}+g_{\mu\rho}(-p-k)_{\sigma}]$
$\displaystyle[g_{\nu\beta}(2p-k)_{\alpha}+g_{\beta\alpha}(2k-p)_{\nu}+g_{\alpha\nu}(-k-p)_{\beta}]\
.$ (61)
By performing power counting, we obtain the terms that are potentially soft or
soft-collinear divergent:
$\displaystyle\frac{1}{4}\epsilon^{*\mu}(p)[-i\Sigma^{c_{i}c_{i^{\prime}}}_{\mu\nu}(p)]\frac{i\Pi^{\nu\nu^{\prime}}(p)}{p^{2}}\Gamma^{c_{i^{\prime}}}_{\nu^{\prime}}(p)\bigg{|}_{p_{0}\to|\vec{p}|}$
$\displaystyle\overset{\rm soft}{\sim}$
$\displaystyle-\frac{i}{2}g_{s}^{2}C_{A}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{p\cdot\Pi(k)\cdot
p}{k^{2}(p\cdot k)^{2}}\Gamma^{c_{i}}(p)\cdot\epsilon^{*}(p).$ (62)
Note that a factor of $2$ is included to incorporate the contribution from the
$(k-p)\to 0$ region.
## Appendix B: Hard-collinear singularities of external self-energy diagrams
In this appendix, we present the derivation of Eq. (30), the hard-collinear
singular terms of the external self-energy diagrams. We decompose the loop
momentum as
$k^{\mu}=zp^{\mu}+k^{-}\bar{p}^{\mu}+k_{T}^{\mu},$ (63)
where $p$ is the external momentum and $\bar{p}=(p_{0},-\vec{p})$. Then the
propagator and the integral measures take the forms
$\displaystyle k^{2}=4p_{0}^{2}zk^{-}-\vec{k}_{T}^{2},$
$\displaystyle(p-k)^{2}=-4p_{0}^{2}(1-z)k^{-}-\vec{k}_{T}^{2},$ $\displaystyle
d^{D}k=2p_{0}^{2}\ dz\ dk^{-}\ d^{D-2}k_{T}.$ (64)
For the quantity $(r\cdot k)^{-1}$, we use the ML prescription as in Appendix
A. To reduce the number of independent vectors, we set $r$ parallel to
$\bar{p}$ (then $r^{*}$ is parallel to $p$). In fact, these choices have
little bearing on the final result, since $(r\cdot k)^{-1}$ is nonvanishing in
the hard-collinear region (except when choosing $r\varpropto p$).
The self-energy correction to the external massless quark is of the form
$\frac{1}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{softSEQuark.eps}}}=\frac{1}{2}\bar{u}(p)[-i\Sigma_{c_{i}c_{i^{\prime}}}(p)]\frac{i\not{p}}{p^{2}}\Gamma_{c_{i^{\prime}}}(p)\bigg{|}_{p_{0}\to|\vec{p}|},$
(65)
with
$-i\Sigma_{c_{i}c_{i^{\prime}}}(p)=g_{s}^{2}C_{F}\delta_{c_{i}c_{i^{\prime}}}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{\gamma^{\mu}(\not{p}-\not{k})\gamma^{\nu}}{k^{2}(p-k)^{2}}\Pi_{\mu\nu}(k).$
(66)
In the hard-collinear region, the scaling behaviors of the lightcone
components are $z\sim\kappa^{0}$, $k^{-}\sim\kappa^{2}$,
$|\vec{k}_{T}|\sim\kappa$. Keeping the terms that scale like
$\mathcal{O}(\kappa^{0})$, we have
$\displaystyle\frac{1}{2}\bar{u}(p)[-i\Sigma_{c_{i}c_{i^{\prime}}}(p)]\frac{i\not{p}}{p^{2}}\Gamma_{c_{i^{\prime}}}(p)\bigg{|}_{p_{0}\to|\vec{p}|}\overset{\rm
coll}{\sim}ig_{s}^{2}C_{F}\mu^{4-D}\int\frac{2p_{0}^{2}dzd^{D-2}k_{T}}{(2\pi)^{D-1}}\frac{(D-10)z+8}{4z}$
$\displaystyle\times\int\frac{dk^{-}}{2\pi}\frac{1}{[4p_{0}^{2}zk^{-}-\vec{k}_{T}^{2}+i\varepsilon][-4p_{0}^{2}(1-z)k^{-}-\vec{k}_{T}^{2}+i\varepsilon]}\bar{u}(p)\Gamma_{c_{i}}(p).$
(67)
It can be seen that there are two poles, located at
$k^{-}=\frac{\vec{k}_{T}^{2}-i\varepsilon}{4p_{0}^{2}z}$ and
$k^{-}=\frac{\vec{k}_{T}^{2}-i\varepsilon}{-4p_{0}^{2}(1-z)}$. In the region
$z<0$ or $z>1$, both poles lie in the same half-plane, and no singular term
arises. In the region $0<z<1$, we have
$\displaystyle\overset{\rm
coll}{\sim}-\frac{g_{s}^{2}}{4\pi}C_{F}\mu^{4-D}\int_{\frac{\delta_{0}}{p_{0}}}^{1}dz\frac{(D-10)z+8}{4z}\int_{|\vec{k}_{T}|<\delta_{T}}\frac{d^{D-2}k_{T}}{(2\pi)^{D-2}}\frac{1}{\vec{k}_{T}^{2}}\
\bar{u}(p)\Gamma_{c_{i}}(p)$ $\displaystyle\overset{\rm
coll}{\sim}-\frac{g_{s}^{2}}{8\pi^{2}}C_{F}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\frac{1}{2\epsilon}\left(\frac{3+\epsilon}{2}+2\ln\frac{\delta_{0}}{p_{0}}\right)\
\bar{u}(p)\Gamma_{c_{i}}(p).$ (68)
Here, the cutoff parameter $\delta_{0}$ is introduced to exclude the soft-
collinear region.
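The $z$-integration that leads from Eq. (67) to Eq. (68) can be verified symbolically. A minimal sketch with sympy (we write $a=\delta_{0}/p_{0}$ and keep only the terms that survive as $a\to 0$):

```python
import sympy as sp

z, eps, a = sp.symbols('z epsilon a', positive=True)
D = 4 - 2*eps

# z-dependent factor of the collinear integrand in Eq. (67)
integrand = ((D - 10)*z + 8) / (4*z)

F = sp.integrate(integrand, z)      # antiderivative in z
I = F.subs(z, 1) - F.subs(z, a)     # integral over [delta_0/p_0, 1]

# As a -> 0 this reduces to -[(3 + eps)/2 + 2*ln(a)], the bracket
# (with its overall minus sign) appearing in Eq. (68).
assert sp.limit(I + (3 + eps)/2 + 2*sp.log(a), a, 0) == 0
```

The remaining transverse integral over $|\vec{k}_{T}|<\delta_{T}$ supplies the $(4\pi\mu^{2}/\delta_{T}^{2})^{\epsilon}/(2\epsilon)$ prefactor.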
The fermion loop self-energy correction to the external gluon takes the form:
$\frac{n_{lf}}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{softSEGluonFermionLoop.eps}}}=\frac{n_{lf}}{2}\epsilon^{*\mu}(p)[-i\Sigma^{c_{i}c_{i^{\prime}}}_{\mu\nu}(p)]\frac{i\Pi^{\nu\nu^{\prime}}(p)}{p^{2}}\Gamma^{c_{i^{\prime}}}_{\nu^{\prime}}(p)\bigg{|}_{p_{0}\to|\vec{p}|}\
,$ (69)
where
$\displaystyle-i\Sigma_{\mu\nu}^{c_{i}c_{i^{\prime}}}(p)=-\frac{g_{s}^{2}}{2}\delta^{c_{i}c_{i^{\prime}}}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{{\rm
Tr}[\gamma^{\nu}\cdot(\not{k}-\not{p})\cdot\gamma^{\mu}\cdot\not{k}]}{k^{2}(p-k)^{2}}.$
(70)
Similarly, we have
$\displaystyle\frac{n_{lf}}{2}\epsilon^{*\mu}(p)[-i\Sigma^{c_{i}c_{i^{\prime}}}_{\mu\nu}(p)]\frac{i\Pi^{\nu\nu^{\prime}}(p)}{p^{2}}\Gamma^{c_{i^{\prime}}}_{\nu^{\prime}}(p)\bigg{|}_{p_{0}\to|\vec{p}|}$
$\displaystyle\overset{\rm coll}{\sim}$ $\displaystyle
i\frac{n_{lf}}{2}g_{s}^{2}\frac{D-2}{D-1}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{1}{k^{2}(p-k)^{2}}\
\Gamma_{c_{i}}(p)\cdot\epsilon^{*}(p)$ $\displaystyle\overset{\rm coll}{\sim}$
$\displaystyle-
n_{lf}\frac{g_{s}^{2}}{8\pi}\frac{D-2}{D-1}\mu^{4-D}\int_{|\vec{k}_{T}|<\delta_{T}}\frac{d^{D-2}k_{T}}{(2\pi)^{D-2}}\frac{1}{\vec{k}_{T}^{2}}\
\Gamma_{c_{i}}(p)\cdot\epsilon^{*}(p)$ $\displaystyle\overset{\rm coll}{\sim}$
$\displaystyle
n_{lf}\frac{g_{s}^{2}}{8\pi^{2}}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\frac{1}{2\epsilon}\frac{1-\epsilon}{3-2\epsilon}\
\Gamma_{c_{i}}(p)\cdot\epsilon^{*}(p).$ (71)
The gluon loop self-energy correction to the external gluon takes the form:
$\frac{1}{2}\vbox{\hbox{\includegraphics[scale={0.5}]{softSEGluon.eps}}}=\frac{1}{4}\epsilon^{*\mu}(p)[-i\Sigma^{c_{i}c_{i^{\prime}}}_{\mu\nu}(p)]\frac{i\Pi^{\nu\nu^{\prime}}(p)}{p^{2}}\Gamma^{c_{i^{\prime}}}_{\nu^{\prime}}(p)\bigg{|}_{p_{0}\to|\vec{p}|},$
(72)
where
$\displaystyle-i\Sigma_{\mu\nu}^{c_{i}c_{i^{\prime}}}(p)=-C_{A}\delta^{c_{i}c_{i^{\prime}}}g_{s}^{2}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}$
$\displaystyle\frac{i\Pi^{\alpha\rho}(k)i\Pi^{\beta\sigma}(p-k)}{k^{2}(p-k)^{2}}$
$\displaystyle[g_{\rho\sigma}(2k-p)_{\mu}+g_{\sigma\mu}(2p-k)_{\rho}+g_{\mu\rho}(-p-k)_{\sigma}]$
$\displaystyle[g_{\nu\beta}(2p-k)_{\alpha}+g_{\beta\alpha}(2k-p)_{\nu}+g_{\alpha\nu}(-k-p)_{\beta}].$
(73)
We have
$\displaystyle\frac{1}{4}\epsilon^{*\mu}(p)[-i\Sigma^{c_{i}c_{i^{\prime}}}_{\mu\nu}(p)]\frac{i\Pi^{\nu\nu^{\prime}}(p)}{p^{2}}\Gamma^{c_{i^{\prime}}}_{\nu^{\prime}}(p)\bigg{|}_{p_{0}\to|\vec{p}|}$
$\displaystyle\overset{\rm coll}{\sim}$ $\displaystyle
ig_{s}^{2}C_{A}\mu^{4-D}\int\frac{d^{D}k}{(2\pi)^{D}}\frac{(z^{2}-z+1)^{2}}{z(1-z)}\frac{1}{k^{2}(p-k)^{2}}\
\Gamma_{c_{i}}(p)\cdot\epsilon^{*}(p)$ $\displaystyle\overset{\rm coll}{\sim}$
$\displaystyle-\frac{g_{s}^{2}}{4\pi}C_{A}\mu^{4-D}\int_{\frac{\delta_{0}}{p_{0}}}^{1-\frac{\delta_{0}}{p_{0}}}dz\frac{(z^{2}-z+1)^{2}}{z(1-z)}\int_{|\vec{k}_{T}|<\delta_{T}}\frac{d^{D-2}k_{T}}{(2\pi)^{D-2}}\frac{1}{\vec{k}_{T}^{2}}\
\Gamma_{c_{i}}(p)\cdot\epsilon^{*}(p)$ $\displaystyle\overset{\rm coll}{\sim}$
$\displaystyle-\frac{g_{s}^{2}}{8\pi^{2}}C_{A}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\delta_{T}^{2}}\right)^{\epsilon}\frac{1}{2\epsilon}\left(\frac{11}{6}+2\ln\frac{\delta_{0}}{p_{0}}\right)\
\Gamma_{c_{i}}(p)\cdot\epsilon^{*}(p).$ (74)
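The coefficient $11/6+2\ln(\delta_{0}/p_{0})$ in Eq. (74) stems from the $z$-integral of the gluon splitting factor $(z^{2}-z+1)^{2}/[z(1-z)]$; it can be verified the same way (a sympy sketch, with $a=\delta_{0}/p_{0}$):

```python
import sympy as sp

z, a = sp.symbols('z a', positive=True)

# Partial fractions of the gluon splitting factor in Eq. (74):
# (z^2 - z + 1)^2 / (z (1-z)) = 1/z + 1/(1-z) - z^2 + z - 2
integrand = 1/z + 1/(1 - z) - z**2 + z - 2
assert sp.simplify(integrand - (z**2 - z + 1)**2/(z*(1 - z))) == 0

# Antiderivative valid on 0 < z < 1
F = sp.log(z) - sp.log(1 - z) - z**3/3 + z**2/2 - 2*z
assert sp.simplify(sp.diff(F, z) - integrand) == 0

I = F.subs(z, 1 - a) - F.subs(z, a)   # integral over [a, 1-a]

# As a -> 0 the integral behaves as -(11/6 + 2*ln(a)),
# matching the bracket in Eq. (74).
assert sp.limit(I + sp.Rational(11, 6) + 2*sp.log(a), a, 0) == 0
```

The quark-channel constant $3/2$ in Eq. (68) and the gluon-channel constant $11/6$ here are exactly the constant parts of $A_{1}$ quoted below Eq. (49).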
# Szasz’s theorem and its generalizations
Gérard Bourdaud
###### Abstract
We establish the most general Szasz type estimates for homogeneous Besov and
Lizorkin-Triebel spaces, and their realizations.
Keywords: Fourier transformation, homogeneous Besov spaces, homogeneous
Lizorkin-Triebel spaces, realizations. 2010 Mathematics Subject
Classification: 46E35, 42A38.
## 1 Introduction
What we call a Szasz type theorem is the following estimate:
$\left(\int_{{\mathbb{R}}^{n}}|\xi|^{\theta
p}|\mathcal{F}(f)(\xi)|^{p}\,\mathrm{d}\xi\right)^{1/p}\leq
c\,\|f\|_{\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})}\,,$ (1.1)
where $\mathcal{F}$ is the Fourier transformation, and $c$ depends only on the
fixed parameters $s,p,q,r,n,\theta$. Here
$\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})$ denotes the homogeneous Besov space
$\dot{B}_{{r},{q}}^{s}({\mathbb{R}}^{n})$ or the Lizorkin-Triebel space
$\dot{F}_{{r},{q}}^{s}({\mathbb{R}}^{n})$, and the parameters satisfy
$s\in\mathbb{R}\,,\quad p,q,r\in]0,\infty]\,,$ (1.2)
see Section 2 for the detailed definitions. By an easy homogeneity argument,
the value of $\theta$ is necessarily
$\theta=s+n-\frac{n}{p}-\frac{n}{r}\,.$ (1.3)
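Indeed, writing $f_{\lambda}(x):=f(x/\lambda)$, one has $\mathcal{F}(f_{\lambda})(\xi)=\lambda^{n}\mathcal{F}(f)(\lambda\xi)$, and the two sides of (1.1) scale as

```latex
% change of variables eta = lambda xi in the left-hand side of (1.1)
\left(\int_{{\mathbb{R}}^{n}}|\xi|^{\theta p}\,
  \lambda^{np}\,|\mathcal{F}(f)(\lambda\xi)|^{p}\,\mathrm{d}\xi\right)^{1/p}
=\lambda^{\,n-\frac{n}{p}-\theta}
 \left(\int_{{\mathbb{R}}^{n}}|\eta|^{\theta p}\,
  |\mathcal{F}(f)(\eta)|^{p}\,\mathrm{d}\eta\right)^{1/p},
\qquad
\|f_{\lambda}\|_{\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})}
  \approx\lambda^{\frac{n}{r}-s}\,\|f\|_{\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})}.
```

Equating the powers of $\lambda$ forces $n-\frac{n}{p}-\theta=\frac{n}{r}-s$, which is (1.3).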
This number $\theta$ will be referred to as the Szasz exponent associated with
$(s,p,q,r,n)$. A quick reading of the estimate (1.1) might suggest the
following statement: for all $f\in\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})$,
$\mathcal{F}(f)$ is a locally integrable function on ${\mathbb{R}}^{n}$
satisfying (1.1). But this is plainly inexact: if $f$ is a nonzero
polynomial, the r.h.s. of (1.1) is $0$, while $\mathcal{F}(f)$ is a nonzero
distribution supported by $\\{0\\}$, which does not belong to
$L_{1,loc}({\mathbb{R}}^{n})$.
To obtain a correct formulation, it is necessary to work with a modified
Fourier transformation. Recall that $\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})$ is a
subspace of $\mathcal{S}^{\prime}_{\infty}({\mathbb{R}}^{n})$, the space of
tempered distributions modulo polynomials. We then have the following
statement, whose easy proof is left to the reader:
###### Proposition 1.1.
There exists a unique one-to-one continuous linear mapping
$\dot{\mathcal{F}}:\mathcal{S}^{\prime}_{\infty}({\mathbb{R}}^{n})\rightarrow\mathcal{D}^{\prime}({\mathbb{R}}^{n}\setminus\\{0\\})$
s.t., for all $f\in\mathcal{S}^{\prime}({\mathbb{R}}^{n})$,
$\dot{\mathcal{F}}([f]_{\infty})$ is the restriction of $\mathcal{F}(f)$ to
${\mathbb{R}}^{n}\setminus\\{0\\}$, where $[f]_{\infty}$ denotes the
equivalence class of $f$ modulo polynomials.
With the help of $\dot{\mathcal{F}}$, we can give a more precise formulation
of the estimate (1.1):
###### Definition 1.2.
The system $(s,p,q,r,n)$ satisfies the property (w-SB) (resp. (w-SF)) if there
exists $c=c(s,p,q,r,n)>0$ s.t., for all
$u\in\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})$, the distribution
$\dot{\mathcal{F}}(u)$ is a locally integrable function on
${\mathbb{R}}^{n}\setminus\\{0\\}$ s.t.
$\left(\int_{{\mathbb{R}^{n}}\setminus\\{0\\}}|\xi|^{\theta
p}|\dot{\mathcal{F}}(u)(\xi)|^{p}\,\mathrm{d}\xi\right)^{1/p}\leq
c\,\|u\|_{\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})}\,,$ (1.4)
where $\theta$ is given by (1.3), and $A$ stands for $B$ or $F$, respectively.
We refer to both (w-SB) and (w-SF) as the “weak Szasz properties”. The
property (w-SB) was proved classically in a number of cases, some of which
appeared in the book of J. Peetre [7]. They concern the case $\theta=0$:
* •
$r=2$, $0<p\leq 2$, $q=p$; see thm. 4, p. 119, there called the “Szasz
theorem”. Szasz's original result concerns periodic Besov spaces in dimension
$n=1$, and the particular case $p=1$ is due to Bernstein, see thm. 3, p. 119.
* •
$1\leq r\leq 2$, $p=q=1$; see (7’), p. 120.
* •
$0<r\leq 1$, $p=q=\infty$; see cor. 3, p. 251, first stated in (3), p. 116,
for $r=1$.
Then B. Jawerth [3, thm. 3.1] proved the property (w-SF) in the case
$s=0\,,\quad\theta=n\left(1-\frac{2}{p}\right)\,,\quad 1<r=p<2\,.$
The case $p=q=r=2$, $\theta=s=0$ is just Plancherel's theorem. Jawerth's
theorem had been proved earlier in the case $q=2$, under the name “Hardy
inequality” (Paley, Hardy and Littlewood in the periodic case, Fefferman and
Björk for $p=1$, Peetre for $0<p<1$).
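The Plancherel case admits a quick numerical illustration through its discrete analogue, Parseval's identity (a numpy sketch; the normalization is that of numpy's FFT, not of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(512)
fhat = np.fft.fft(f)

# Discrete Parseval identity: sum |f|^2 = (1/N) sum |fhat|^2,
# the discrete analogue of the case p = q = r = 2, theta = s = 0.
assert np.isclose(np.sum(np.abs(f)**2), np.sum(np.abs(fhat)**2) / f.size)
```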
Peetre stated his Szasz type theorems in the following form:
$\mathcal{F}\,:\,\dot{B}^{s}_{r,q}\quad\rightarrow\quad L_{p}\,,$
without taking care of polynomials. He was aware of the difficulty, but
preferred not to give too many details, trusting that any careful reader would
interpret his statements correctly, i.e. according to Definition 1.2.
In the present paper, we first prove the Szasz property in the most general
possible case. Then, under supplementary conditions on the parameters, we
improve it by making use of the classical notion of realization for
homogeneous function spaces:
###### Definition 1.3.
The system $(s,p,q,r,n)$ satisfies the property (s-SB) (resp. (s-SF)) if there
exist a realization
$\sigma:\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})\rightarrow\mathcal{S}^{\prime}({\mathbb{R}^{n}})$
and $c=c(s,p,q,r,n)>0$ s.t.
$\mathcal{F}(\sigma(\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})))\subset
L_{1,loc}({\mathbb{R}}^{n})$ (1.5)
and s.t. the estimate (1.1) is satisfied for every
$f\in\sigma(\dot{A}^{s}_{r,q}({\mathbb{R}^{n}}))$, where $\theta$ is given by
(1.3), and $A$ stands for $B$ or $F$, respectively. Both properties are
referred to as the “strong Szasz properties”.
###### Remark 1.4.
The strong Szasz property implies the weak one. Conversely, under condition
(1.5), the weak Szasz property implies the strong one.
We will prove the strong Szasz property in the most general possible case, and
observe that some systems $(s,p,q,r,n)$ satisfy the weak Szasz property but
not the strong one.
## 2 Preliminaries
### 2.1 Notation
${\mathbb{N}}$ denotes the set of natural numbers, including $0$. All
distribution spaces in this work are defined on ${\mathbb{R}^{n}}$, unless
otherwise specified. We set $r^{\prime}:=r/(r-1)$ if $1<r\leq\infty$ and
$r^{\prime}:=\infty$ if $0<r\leq 1$. The symbol $\hookrightarrow$ indicates a
continuous embedding. We denote by ${\mathcal{S}}$ the Schwartz class and by
${\mathcal{S}}^{\prime}$ its topological dual, the space of tempered
distributions, endowed with the $\ast$-weak topology. For $0<p\leq\infty$,
$\|\cdot\|_{p}$ denotes the $L_{p}$ quasi-norm w.r.t. Lebesgue measure. For
$a\in{\mathbb{R}^{n}}$, the translation operator $\tau_{a}$ acts on functions
according to the formula $(\tau_{a}f)(x):=f(x-a)$ for all
$x\in{\mathbb{R}^{n}}$. For $\lambda>0$, the dilation operator $h_{\lambda}$
acts on functions according to the formula $(h_{\lambda}f)(x):=f(x/\lambda)$
for all $x\in{\mathbb{R}^{n}}$. The Fourier transform of
$f\in{\mathcal{S}}^{\prime}$ is denoted by $\widehat{f}$. We denote by
${\mathcal{P}}_{{\infty}}$ the set of all polynomials on ${\mathbb{R}^{n}}$.
We denote by ${\mathcal{S}}_{{\infty}}$ the set of all
$\varphi\in{\mathcal{S}}$ such that $\langle u,\varphi\rangle=0$ for all
$u\in{\mathcal{P}}_{{\infty}}$, and by ${\mathcal{S}}^{\prime}_{{\infty}}$ its
topological dual endowed with the $\ast$-weak topology. For all
$f\in{\mathcal{S}}^{\prime}$, $[f]_{\infty}$ denotes the equivalence class of
$f$ modulo ${\mathcal{P}}_{{\infty}}$. The mapping which takes any
$[f]_{\infty}$ to the restriction of $f$ to ${\mathcal{S}}_{{\infty}}$ turns
out to be an isomorphism from
${\mathcal{S}}^{\prime}/{\mathcal{P}}_{{\infty}}$ onto
${\mathcal{S}}^{\prime}_{{\infty}}$. The constants $c,c_{1},\ldots$ are
strictly positive and depend only on the fixed parameters $n,s,p,q,r$; their
values may change from one line to another.
### 2.2 Homogeneous Besov and Lizorkin-Triebel spaces
The usual definition of $\dot{A}^{s}_{r,q}$ is given via the Littlewood-Paley
decomposition, which we briefly recall. We consider a $C^{\infty}$ function
$\gamma$, supported by $1/2\leq|\xi|\leq 3/2$, s.t.
$\sum_{j\in{\mathbb{Z}}}\gamma(2^{j}\xi)=1\,,\quad\forall\xi\in{\mathbb{R}^{n}}\setminus\\{0\\}\,,$
and define the operators $Q_{j}$ $(j\in{\mathbb{Z}})$ by
$\widehat{Q_{j}f}:=\gamma(2^{-j}(\cdot))\widehat{f}$. Then for all
$f\in{\mathcal{S}}^{\prime}_{{\infty}}$, it holds
$f=\sum_{j\in{\mathbb{Z}}}Q_{j}f$ in ${\mathcal{S}}^{\prime}_{{\infty}}$.
###### Definition 2.1.
Let $s\in{\mathbb{R}}$ and $0<r,q\leq\infty$, with $r<\infty$ in the $F$-case.
* (i)
The homogeneous Besov space $\dot{B}^{s}_{r,q}$ is the set of
$f\in{\mathcal{S}}^{\prime}_{{\infty}}$ s.t.
$\|f\|_{\dot{B}^{s}_{r,q}}:=\,\bigl{(}\sum_{j\in{{\mathbb{Z}}}}2^{jsq}\|{Q}_{j}f\|_{r}^{q}\bigr{)}^{1/q}<\infty\,.$
* (ii)
The homogeneous Lizorkin-Triebel space $\dot{F}^{s}_{r,q}$ is the set of
$f\in{\mathcal{S}}^{\prime}_{{\infty}}$ s.t.
$\|f\|_{\dot{F}^{s}_{r,q}}:=\bigl{\|}\bigl{(}\sum_{j\in{\mathbb{Z}}}2^{jsq}|{Q}_{j}f|^{q}\bigr{)}^{1/q}\bigr{\|}_{r}<\infty\,.$
We recall the Nikol’skij type estimates, see [2, prop. 4], [5, prop. 3.4], [6,
props. 2.15, 2.17], [8, prop. 2.3.2/1] and [11] for the proofs.
###### Proposition 2.2.
Let $0<a<b$. Let $(u_{j})_{j\in{\mathbb{Z}}}$ be a sequence in
${\mathcal{S}}^{\prime}$ s.t.
* •
$\widehat{u_{j}}$ is supported by the annulus $a2^{j}\leq|\xi|\leq b2^{j}$,
for all $j\in{\mathbb{Z}}$,
* •
$M:=\big{(}\sum_{j\in\mathbb{Z}}(2^{js}\|u_{j}\|_{r})^{q}\big{)}^{1/q}<\infty$
in the $B$-case,
* •
$M:=\big{\|}\big{(}\sum_{j\in\mathbb{Z}}(2^{js}|u_{j}|)^{q}\big{)}^{1/q}\big{\|}_{r}<\infty$
in the $F$-case,
Then the series $\sum_{j\in\mathbb{Z}}u_{j}$ converges in
${\mathcal{S}}^{\prime}_{{\infty}}$ and
$\|\sum_{j\in\mathbb{Z}}u_{j}\|_{\dot{A}^{s}_{r,q}}\leq cM$, where the
constant $c$ depends only on $n,s,r,q,a,b$.
###### Proposition 2.3.
For all $f\in\dot{A}^{s}_{r,q}$, there exists a set of polynomials $w_{j}$,
$j\in{\mathbb{Z}}$, such that the series
$\sum_{j\in{\mathbb{Z}}}(Q_{j}f-w_{j})$ is convergent in
${\mathcal{S}}^{\prime}$. The sum of that series is a tempered distribution
$g$ s.t. $[g]_{\infty}=f$.
###### Proof.
In the case of the Besov space with $r=q=\infty$, see e.g. [1, rem. 4.9]. The
general case follows from the embedding
$\dot{A}^{s}_{r,q}\hookrightarrow\dot{B}^{s-(n/r)}_{\infty,\infty}$. ∎
## 3 General Szasz’s theorem
### 3.1 Statements of the results
###### Theorem 3.1.
Let $s,p,q,r$ s.t. (1.2).
* •
The system $(s,p,q,r,n)$ satisfies (w-SB) iff
$0<r\leq 2\quad\mathrm{and}\quad 0<q\leq p\leq r^{\prime}\,.$ (3.1)
* •
The system $(s,p,q,r,n)$ satisfies (w-SF) iff
$0<r\leq 2\quad\mathrm{and}\quad\left(\,r\leq
p<r^{\prime}\quad\mathrm{or}\quad q\leq p=r^{\prime}\,\right)\,.$ (3.2)
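The two parameter regions of Theorem 3.1 can be encoded as boolean predicates; a small sketch (the function names `w_SB` and `w_SF` are ours, mirroring conditions (3.1) and (3.2)):

```python
import math

def conjugate(r):
    # r' = inf for 0 < r <= 1, r/(r-1) otherwise (r = inf gives r' = 1)
    if r <= 1:
        return math.inf
    return 1.0 if math.isinf(r) else r / (r - 1)

def w_SB(r, p, q):
    # condition (3.1): 0 < r <= 2 and 0 < q <= p <= r'
    return 0 < r <= 2 and 0 < q <= p <= conjugate(r)

def w_SF(r, p, q):
    # condition (3.2): 0 < r <= 2 and (r <= p < r' or q <= p = r')
    rp = conjugate(r)
    return 0 < r <= 2 and ((r <= p < rp) or (q <= p and p == rp))
```

For instance, $(r,p,q)=(2,2,1)$ lies in both regions, while any $r>2$ lies in neither, reflecting the necessity of $r\leq 2$ proved in Substep 2.1.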
The above theorem has a counterpart for the usual (inhomogeneous) Besov and
Lizorkin-Triebel spaces:
###### Theorem 3.2.
Let $s,p,q,r$ s.t. (1.2), and $\theta$ the associated Szasz exponent. Let us
consider the following property: $(\cal{Z})$ There exists $c>0$ s.t., for all
$f\in A^{s}_{r,q}$, $\mathcal{F}(f)$ is a locally integrable function on
${\mathbb{R}^{n}}$ satisfying the estimate
$\left(\int_{{\mathbb{R}}^{n}}(1+|\xi|)^{\theta
p}|\mathcal{F}(f)(\xi)|^{p}\,\mathrm{d}\xi\right)^{1/p}\leq
c\,\|f\|_{A^{s}_{r,q}}\,,$
where $A^{s}_{r,q}$ stands for $B^{s}_{r,q}$ (resp. $F^{s}_{r,q}$). Then
$(\cal{Z})$ holds iff (3.1) (resp. (3.2)).
The above property $(\cal{Z})$ has been classically established in particular
cases, see e.g. [9, p.55].
### 3.2 Proof
We limit ourselves to Theorem 3.1. The same arguments work for Theorem 3.2, up
to minor modifications.
Step 1. We first prove that the various conditions on $r,p,q$ imply the weak
Szasz property.
Substep 1.1. Let us assume $1\leq r\leq 2$, $0<p\leq r^{\prime}$ and
$p<\infty$. We introduce the annulus
$U_{j}:=\\{\xi\in{\mathbb{R}^{n}}\,:\,2^{j}\leq|\xi|\leq
2^{j+1}\,\\}\,,j\in{\mathbb{Z}}\,.$
Let $u\in\dot{B}^{s}_{r,p}$. By the Hausdorff–Young theorem, it holds
$\|\widehat{Q_{j}u}\|_{r^{\prime}}\leq c\,\|Q_{j}u\|_{r}\,.$
Thanks to condition $p\leq r^{\prime}$, we can apply Hölder’s inequality and
deduce
$\int_{U_{j}}\left|\widehat{Q_{j}u}(\xi)\right|^{p}\,\mathrm{d}\xi\leq
c\,2^{jn(1-(p/r^{\prime}))}\|Q_{j}u\|^{p}_{r}\,.$ (3.3)
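For the reader's convenience, the Hölder step behind (3.3) can be spelled out: since $|U_{j}|\leq c\,2^{jn}$, Hölder's inequality with exponents $r'/p$ and $(1-p/r')^{-1}$, followed by the Hausdorff–Young estimate above, gives

```latex
\int_{U_{j}}\left|\widehat{Q_{j}u}(\xi)\right|^{p}\,\mathrm{d}\xi
\leq \Bigl(\int_{U_{j}}\mathrm{d}\xi\Bigr)^{1-\frac{p}{r'}}
\Bigl(\int_{U_{j}}\left|\widehat{Q_{j}u}(\xi)\right|^{r'}\mathrm{d}\xi\Bigr)^{\frac{p}{r'}}
\leq c\,2^{jn\left(1-\frac{p}{r'}\right)}\,\bigl\|\widehat{Q_{j}u}\bigr\|_{r'}^{p}
\leq c\,2^{jn\left(1-\frac{p}{r'}\right)}\,\|Q_{j}u\|_{r}^{p}\,.
```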
By definition of $Q_{j}$, the function $\widehat{Q_{j}u}$ is supported by the
annulus
$2^{j-1}\leq|\xi|\leq 3\cdot 2^{j-1}\,.$
Thus the function
$v(\xi):=\sum_{j\in{\mathbb{Z}}}\widehat{Q_{j}u}(\xi)$
is well defined and locally integrable on ${\mathbb{R}^{n}}\setminus\\{0\\}$.
We claim that
$\left(\int_{{\mathbb{R}}^{n}\setminus\\{0\\}}|\xi|^{\theta
p}|v(\xi)|^{p}\,\mathrm{d}\xi\right)^{1/p}\leq
c\,\|u\|_{{\dot{B}}^{s}_{r,p}}\,.$ (3.4)
We observe that, for $\xi\in U_{j}$, it holds $\widehat{Q_{k}u}(\xi)=0$ for
$k<j$ or $k>j+1$. Hence, by (3.3) and by definition of $\theta$ (see (1.3)),
we obtain
$\int_{{\mathbb{R}}^{n}\setminus\\{0\\}}|\xi|^{\theta
p}|v(\xi)|^{p}\,\mathrm{d}\xi=\sum_{j\in{\mathbb{Z}}}\int_{U_{j}}|\xi|^{\theta
p}|v(\xi)|^{p}\,\mathrm{d}\xi$ $\leq c_{1}\,\sum_{j\in{\mathbb{Z}}}2^{j\theta
p}\int_{U_{j}}|\widehat{Q_{j}u}(\xi)|^{p}\mathrm{d}\xi\leq
c_{2}\,\sum_{j\in{\mathbb{Z}}}2^{jsp}\|Q_{j}u\|_{r}^{p}\,.$
This completes the proof of the claim. By Proposition 2.3, there exists a
convergent series $\sum_{j\in{\mathbb{Z}}}f_{j}$ in $\mathcal{S}^{\prime}$
s.t. $f_{j}-Q_{j}u$ is a polynomial for each $j\in{\mathbb{Z}}$. By setting
$f:=\sum_{j\in{\mathbb{Z}}}f_{j}$, we obtain a tempered distribution s.t.
$[f]_{\infty}=u$. The restriction of $\widehat{f}$ to
${\mathbb{R}^{n}}\setminus\\{0\\}$ coincides with $v$. Thus the estimate (3.4)
is precisely (1.4) for the function space $\dot{B}^{s}_{r,p}$. We conclude
that $(s,p,p,r,n)$ has the property (w-SB).
###### Remark 3.3.
The above proof works as well in case $r=1$ and $p=\infty$, up to minor
changes.
Substep 1.2. Let us assume $0<r<1$. By a classical embedding, see e.g. [3,
thm. 2.1], it holds
$\dot{B}^{s}_{r,p}\quad\hookrightarrow\quad\dot{B}^{s+n-(n/r)}_{1,p}\,.$
Both systems $(s,p,p,r,n)$ and $(s+n-(n/r),p,p,1,n)$ have the same Szasz
exponent. By Substep 1.1 (with $r=1$), we conclude that $(s,p,p,r,n)$ has the
property (w-SB).
Substep 1.3. Under the conditions $0<r\leq 2$ and $q\leq p\leq r^{\prime}$,
we have the embedding $\dot{B}^{s}_{r,q}\,\hookrightarrow\,\dot{B}^{s}_{r,p}$,
with the same Szasz exponent. By the preceding substeps, the system
$(s,p,q,r,n)$ has the property (w-SB).
Substep 1.4. For Lizorkin-Triebel spaces, we argue similarly, by using
embeddings into Besov spaces, without change of Szasz exponent. We first
consider the embedding
$\dot{F}^{s}_{r,q}\,\hookrightarrow\,\dot{B}^{s}_{r,\max(r,q)}$, see [3,
(1.2)]. We conclude that $(s,p,q,r,n)$ has the property (w-SF) if
$0<r\leq 2\,,\quad\max(r,q)\leq p\leq r^{\prime}\,.$
Substep 1.5. Now we assume that $0<r<2$ and $r\leq p<r^{\prime}$. Then we
choose $r_{1}>r$ sufficiently close to $r$ so that
$0<r<r_{1}\leq 2\,,\quad r\leq p\leq r_{1}^{\prime}\,.$
By setting
$s_{1}:=s+\frac{n}{r_{1}}-\frac{n}{r}\,,$
we obtain the embedding
$\dot{F}^{s}_{r,q}\,\hookrightarrow\,\dot{B}^{s_{1}}_{r_{1},r}$, see [3, thm.
2.1]. We conclude that $(s,p,q,r,n)$ has the property (w-SF).
Step 2. Now we prove that the Szasz property implies the various conditions on
$r,p,q$.
Substep 2.1. To prove the necessity of condition $r\leq 2$, we use the
following statement:
###### Lemma 3.4.
For all $2<r\leq\infty$, there exist a compact subset $K$ of
${\mathbb{R}^{n}}$, with Lebesgue measure equal to $0$, s.t. $0\notin K$, and
a nonzero Borel measure $\mu$ on ${\mathbb{R}^{n}}$, supported by $K$, s.t.
${\mathcal{F}}^{-1}\mu\in L_{r}({\mathbb{R}^{n}})$.
###### Proof.
Let us fix a number $\beta$ s.t.
$\frac{1}{r}<\beta<\frac{1}{2}\,.$
Consider first the case $n=1$. According to Kaufmann’s theorem [4], see also
[10, thm. 9A2], there exist a compact subset $C$ of ${\mathbb{R}}$, with
Lebesgue measure equal to $0$, and a nonzero Borel measure $\mu$ on
${\mathbb{R}}$, supported by $C$, s.t. the function $g:={\mathcal{F}}^{-1}\mu$
satisfies the estimate
$|g(x)|\leq c\,(1+|x|)^{-\beta}\,,$ (3.5)
hence $g$ belongs to $L_{r}({\mathbb{R}})$. By a translation of $C$ and $\mu$,
if necessary, we have also $0\notin C$. In higher dimension $n$, we define
$\mu_{n}:=\mu\otimes\cdots\otimes\mu$ ($n$ factors). Then $\mu_{n}$ is a Borel
measure supported by $K:=C^{n}$, and $f:={\mathcal{F}}^{-1}\mu_{n}$ is given
by $f(x_{1},\ldots,x_{n})=g(x_{1})\cdots g(x_{n})$. Then $f\in
L_{r}({\mathbb{R}^{n}})$ follows by (3.5). ∎
Let us keep the same notation as in the above Lemma and proof. Since $K$ is a
compact subset of ${\mathbb{R}^{n}}\setminus\\{0\\}$, it holds
$f=\sum_{j=N}^{M}Q_{j}f$
for some $N,M\in{\mathbb{Z}}$. Hence $f$ belongs as well to
$\dot{A}^{s}_{r,q}$, whatever $s,q$ may be. Since the Lebesgue measure of $K$ is
equal to $0$, $\widehat{f}$ is not locally integrable on
${\mathbb{R}^{n}}\setminus\\{0\\}$. Hence the Szasz property cannot hold.
Substep 2.2. Assume $r^{\prime}<p\leq\infty$. Let $\varphi\in{\mathcal{S}}$
s.t. $\widehat{\varphi}$ is supported by the ball $|\xi|\leq 1/2$. Let us set
$f_{k}(x):={\rm e}^{2^{k}ix_{1}}\varphi(x)$, $k=1,2,\ldots$, and
$f:=\sum_{k=1}^{\infty}k2^{-k\theta}f_{k}\,.$
Since $\widehat{f_{k}}$ is supported by the annulus
$C_{k}:=\\{\xi\,:\,\frac{3}{4}2^{k}\leq|\xi|\leq\frac{5}{4}2^{k}\\}$, we can
apply Proposition 2.2 and conclude that
$\|f\|_{\dot{B}^{s}_{r,q}}\leq
c\,\left(\sum_{k=1}^{\infty}(2^{ks}\|k2^{-k\theta}f_{k}\|_{r})^{q}\right)^{1/q}=c\|\varphi\|_{r}\,\left(\sum_{k=1}^{\infty}(k2^{k(s-\theta)})^{q}\right)^{1/q}\,.$
Here we have
$s-\theta=\frac{n}{p}-\frac{n}{r^{\prime}}<0\,,$
hence $f\in\dot{B}^{s}_{r,q}$. The series which defines $f$ converges as well
in ${\mathcal{S}}^{\prime}$ and
$\widehat{f}=\sum_{k=1}^{\infty}k2^{-k\theta}\,\tau_{2^{k}e_{1}}\widehat{\varphi}$
in ${\mathcal{S}}^{\prime}$, where $e_{1}:=(1,0,\ldots,0)$. For every $k$, we
have
$\left(\int_{{\mathbb{R}^{n}}\setminus\\{0\\}}|\xi|^{\theta
p}|\widehat{f}(\xi)|^{p}\,\mathrm{d}\xi\right)^{1/p}\geq
c\,k\left(\int_{C_{k}}|\widehat{\varphi}(\xi-2^{k}e_{1})|^{p}\,\mathrm{d}\xi\right)^{1/p}=c\,k\|\widehat{\varphi}\|_{p}\,.$
Thus the property (w-SB) does not hold.
Substep 2.3. Assume $p<q$. Let $\psi\in{\mathcal{S}}$ s.t. $\widehat{\psi}$ is
supported by the annulus $C_{0}$. Let us set
$f:=\sum_{k=1}^{\infty}k^{-1/p}2^{k((n/r)-s)}h_{2^{-k}}\psi\,.$
The above series converges in $\mathcal{S}^{\prime}$, and, by Proposition 2.2,
$\|[f]_{\infty}\|_{\dot{B}^{s}_{r,q}}$ is less than
$c\,\left(\sum_{k=1}^{\infty}(2^{ks}k^{-1/p}2^{k((n/r)-s)}\|h_{2^{-k}}\psi\|_{r})^{q}\right)^{1/q}=c\|\psi\|_{r}\,\left(\sum_{k=1}^{\infty}k^{-q/p}\right)^{1/q}\,,$
hence $[f]_{\infty}\in\dot{B}^{s}_{r,q}$. By definition of $\theta$, it holds
$\widehat{f}(\xi)=\sum_{k=1}^{\infty}k^{-1/p}2^{k((n/p)-\theta)}\widehat{\psi}(2^{-k}\xi)\,.$
Hence $\int_{{\mathbb{R}}^{n}\setminus\\{0\\}}|\xi|^{\theta
p}|\widehat{f}(\xi)|^{p}\,\mathrm{d}\xi$ is greater than
$\sum_{k=1}^{\infty}\int_{C_{-k}}\left|k^{-1/p}2^{k((n/p)-\theta)}\widehat{\psi}(2^{-k}\xi)\right|^{p}\,|\xi|^{\theta
p}\,\mathrm{d}\xi\geq c\|\widehat{\psi}\|^{p}_{p}\sum_{k=1}^{\infty}k^{-1}\,.$
Thus the property (w-SB) does not hold.
Substep 2.4. Assume $p<r$. Let us take $r_{1},s_{1}$ s.t.
$p<r_{1}<r\,,\quad s_{1}-\frac{n}{r_{1}}=s-\frac{n}{r}\,.$
According to [3, (1.3) and thm. 2.1 (ii)], we have the embedding
$\dot{B}^{s_{1}}_{r_{1},r_{1}}=\dot{F}^{s_{1}}_{r_{1},r_{1}}\hookrightarrow\dot{F}^{s}_{r,q}$
(3.6)
with the same Szasz exponent. By condition $p<r_{1}$ and Substep 2.3, the
system $(s_{1},p,r_{1},r_{1},n)$ does not have the property (w-SB). By (3.6),
the system $(s,p,q,r,n)$ does not have the property (w-SF).
Substep 2.5. Assume $p>r^{\prime}$. Let us take $r_{1},s_{1}$ s.t.
$p>r^{\prime}_{1}>r^{\prime}\,,\quad s_{1}-\frac{n}{r_{1}}=s-\frac{n}{r}\,.$
Then the embedding (3.6) holds again, without change of Szasz exponent. By
condition $p>r^{\prime}_{1}$ and Substep 2.2, the system
$(s_{1},p,r_{1},r_{1},n)$ does not have the property (w-SB). Hence the system
$(s,p,q,r,n)$ does not have the property (w-SF).
Substep 2.6. Assume $p=r^{\prime}<q$. Let us take the same functions $f_{k}$
as in Substep 2.2, and define
$f:=\sum_{k=1}^{\infty}k^{-1/p}2^{-k\theta}f_{k}\,.$
Here we have $s=\theta$. Then $\|f\|^{r}_{\dot{F}^{s}_{r,q}}$ is less than
$c\,\int_{{\mathbb{R}^{n}}}\left(\sum_{k=1}^{\infty}(k^{-1/p}|f_{k}(x)|)^{q}\right)^{r/q}\mathrm{d}x=c\,\int_{{\mathbb{R}^{n}}}\left(\sum_{k=1}^{\infty}k^{-q/p}\right)^{r/q}\,|\varphi(x)|^{r}\mathrm{d}x\,,$
hence $f\in\dot{F}^{s}_{r,q}$. It holds
$\int_{{\mathbb{R}^{n}}\setminus\\{0\\}}|\xi|^{\theta
p}|\widehat{f}(\xi)|^{p}\,\mathrm{d}\xi\geq
c\,\|\widehat{\varphi}\|^{p}_{p}\sum_{k=1}^{\infty}k^{-1}\,.$
We conclude that the property (w-SF) does not hold.
## 4 Szasz’s theorem for the realized function spaces
### 4.1 Generalities on realizations
The proof of Theorem 1.1 relies upon the choice of a specific representative
for each member of $\dot{A}^{s}_{r,q}$, with the help of Proposition 2.3. This
choice of representative may be organized in a coherent way, through the
notion of realization, that we recall here.
###### Definition 4.1.
Let $E$ be a quasi-Banach space continuously embedded into
$\mathcal{S}^{\prime}_{\infty}$. A realization of $E$ is a continuous linear
mapping $\sigma:E\rightarrow\mathcal{S}^{\prime}$ s.t.
$[\sigma(u)]_{\infty}=u$ for every $u\in E$.
In case $E$ is invariant by translations (i.e. if $\tau_{a}(E)\subset E$ for
all $a\in{\mathbb{R}^{n}}$), we say that a realization $\sigma$ of $E$
commutes with translations if $\sigma\circ\tau_{a}=\tau_{a}\circ\sigma$ for
all $a\in{\mathbb{R}^{n}}$; this property holds iff $\sigma(E)$ is invariant
by translations. Similar considerations apply to dilation invariance.
###### Proposition 4.2.
Let $E$ be a quasi-Banach space continuously embedded into
$\mathcal{S}^{\prime}_{\infty}$. Assume that $E$ admits a realization $\sigma$
s.t.
$\mathcal{F}(\sigma(E))\subset L_{1,loc}({\mathbb{R}}^{n})\,.$ (4.1)
Then
$\sigma(E)=\\{f\in\mathcal{S}^{\prime}\,:\,[f]_{\infty}\in
E\quad\mathrm{and}\quad\widehat{f}\in L_{1,loc}({\mathbb{R}}^{n})\\}\,.$ (4.2)
###### Proof.
Let $V$ denote the right-hand side of (4.2). Then $\sigma(E)\subset V$ follows
immediately by (4.1). In the opposite sense, if we take $f\in V$, then
$f-\sigma([f]_{\infty})\in{\mathcal{P}}_{\infty}$, thus
$\mathcal{F}\left(f-\sigma([f]_{\infty})\right)$ is an integrable function
near $0$, supported by $\\{0\\}$. One concludes that $f=\sigma([f]_{\infty})$,
i.e. $f\in\sigma(E)$. ∎
###### Corollary 4.3.
Let $E$ be a quasi-Banach space continuously embedded into
$\mathcal{S}^{\prime}_{\infty}$. Then $E$ admits at most one realization
$\sigma$ s.t. (4.1).
###### Proposition 4.4.
Let $E$ be a quasi-Banach space continuously embedded into
$\mathcal{S}^{\prime}_{\infty}$. Assume that $E$ is invariant by translations
(resp. dilations). If $E$ admits a realization $\sigma$ s.t. (4.1), then
$\sigma$ commutes with translations (resp. dilations).
###### Proof.
By (4.2), $\sigma(E)$ is invariant by translations (resp. dilations). ∎
###### Proposition 4.5.
Let $E$ be a quasi-Banach space, continuously embedded into
$\mathcal{S}^{\prime}_{\infty}$, s.t.
$\dot{{\mathcal{F}}}(E)\subset L_{1,loc}({\mathbb{R}^{n}}\setminus\\{0\\})\,.$
Then the following properties are equivalent:
* (i)
for every $R>0$, there exists $c_{R}>0$ s.t., for all $u\in E$,
$\int_{0<|\xi|\leq R}\left|\dot{{\mathcal{F}}}(u)(\xi)\right|\mathrm{d}\xi\leq
c_{R}\|u\|_{E}\,,$ (4.3)
* (ii)
there exists a realization $\sigma$ of $E$ s.t. $\mathcal{F}(\sigma(E))\subset
L_{1,loc}({\mathbb{R}}^{n})$.
###### Proof.
Step 1: (i) $\Rightarrow$ (ii). Let $u\in E$: by extending
$\dot{\mathcal{F}}(u)$ arbitrarily at $0$, we obtain a function, say $T(u)$, in
$L_{1,loc}({\mathbb{R}^{n}})\cap\mathcal{S}^{\prime}$. By property (4.3), the
mapping $T$, defined in such a way, is linear and continuous from $E$ to
$\mathcal{S}^{\prime}$. By setting $\sigma:=\mathcal{F}^{-1}\circ T$ we obtain
a realization of $E$ s.t. $\mathcal{F}(\sigma(E))\subset
L_{1,loc}({\mathbb{R}}^{n})$.
Step 2: (ii) $\Rightarrow$ (i). Let us endow $\sigma(E)$ with the norm
$\|\cdot\|_{E}$. By the Closed Graph theorem, the mapping
$\mathcal{F}:\sigma(E)\rightarrow L_{1,loc}({\mathbb{R}}^{n})$ is continuous.
The estimate (4.3) follows. ∎
### 4.2 The strong Szasz property
By Proposition 4.4, the strong Szasz property is related to the existence of
translation commuting realizations. Let us recall the following statement:
###### Proposition 4.6.
The space $\dot{B}^{s}_{r,q}({\mathbb{R}^{n}})$ admits a translation commuting
realization iff
$s<n/r\quad\mathrm{or}\quad(\,s=n/r\quad\mathrm{and}\quad q\leq 1\,)\,.$ (4.4)
The space $\dot{F}^{s}_{r,q}({\mathbb{R}^{n}})$ admits a translation commuting
realization iff
$s<n/r\quad\mathrm{or}\quad(\,s=n/r\quad\mathrm{and}\quad r\leq 1\,)\,.$ (4.5)
###### Proof.
Step 1. Assume that the conditions (4.4) and (4.5) hold, respectively. For any
$u\in\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})$, the series
$\sum_{j\in{\mathbb{Z}}}Q_{j}u$ converges as well in ${\mathcal{S}}^{\prime}$,
see [1, prop. 4.6] in case $\min(r,q)\geq 1$ and [6, thm. 4.1] in the general
case; denoting by $\sigma_{0}(u)$ its sum in ${\mathcal{S}}^{\prime}$, we
obtain a realization
$\sigma_{0}:\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})\rightarrow{\mathcal{S}}^{\prime}$,
which clearly commutes with translations (indeed $\sigma_{0}$ commutes with
dilations as well, see [1, prop. 4.6 (2)] and [6, thm. 1.2]).
Step 2. Assume that the conditions (4.4) and (4.5) do not hold,
respectively. In case $\min(r,q)\geq 1$, we proved that, for some integer
$\nu\geq 1$, $\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})$ admits no translation
commuting realization in ${\mathcal{S}}^{\prime}_{{\nu-1}}$, the space of
tempered distributions modulo polynomials of degree less than $\nu-1$, see [1,
thm. 4.2 (2)]. It is not difficult to see that this proof works as well for
any $r,q>0$. A fortiori $\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})$ admits no
translation commuting realization in ${\mathcal{S}}^{\prime}$. ∎
###### Theorem 4.7.
Let $s,p,q,r$ s.t. (1.2). Then the system $(s,p,q,r,n)$ satisfies the property
(s-SB) (resp. (s-SF)) iff both conditions (3.1) and (4.4) (resp. conditions
(3.2) and (4.5) ) hold.
###### Proof.
The necessity of the various conditions follows by Propositions 4.4 and 4.6,
and Theorem 3.1. We turn now to the proof of sufficiency. According to Remark
1.4, it will suffice to find a realization $\sigma$ of
$\dot{A}^{s}_{r,q}({\mathbb{R}^{n}})$ s.t. (1.5). We limit ourselves to Besov
spaces. Similar arguments work in case of Lizorkin-Triebel spaces.
Step 1. Assume $0<r\leq 2$, $s<n/r$ and $q\leq r^{\prime}$. According to
Theorem 3.1, the system $(s,r^{\prime},q,r,n)$ has the property (w-SB) with
Szasz exponent
$\theta=s+n-\frac{n}{r^{\prime}}-\frac{n}{r}<n\left(1-\frac{1}{r^{\prime}}\right)\,.$
By Hölder’s inequality, it holds
$\int_{0<|\xi|\leq R}\left|f(\xi)\right|\mathrm{d}\xi\leq
c\,R^{n\left(1-\frac{1}{r^{\prime}}\right)-\theta}\,\left(\int_{{\mathbb{R}^{n}}\setminus\\{0\\}}|\xi|^{\theta
r^{\prime}}|f(\xi)|^{r^{\prime}}\,\mathrm{d}\xi\right)^{1/r^{\prime}}$
for every $R>0$ and every $f\in L_{1,loc}({\mathbb{R}^{n}}\setminus\\{0\\})$.
Then we conclude with the aid of Proposition 4.5.
Step 2. Assume $0<r\leq 2$, $s=n/r$, $q\leq 1$. The system $(s,1,q,r,n)$ has
the property (w-SB) with Szasz exponent equal to $0$. That means:
$\int_{{\mathbb{R}^{n}}\setminus\\{0\\}}|\dot{\mathcal{F}}(u)(\xi)|\,\mathrm{d}\xi\leq
c\,\|u\|_{\dot{B}^{s}_{r,q}}\,.$
We can apply again Proposition 4.5. ∎
###### Remark 4.8.
By known results on the uniqueness of translation or dilation commuting
realizations, see [1, thms. 4.1, 4.2], the realization obtained in Theorem 4.7
coincides with the “standard” realization $\sigma_{0}$ introduced in the proof
of Proposition 4.6.
### Acknowledgment
I thank Madani Moussai, Hervé Quéffelec and Winfried Sickel for useful
discussions in the preparation of the paper.
## References
* [1] G. Bourdaud. Realizations of homogeneous Besov and Lizorkin-Triebel spaces. Math. Nachr. 286 (2013), 476–491.
* [2] G. Bourdaud, M. Moussai, W. Sickel. Composition operators in Lizorkin-Triebel spaces. J. Funct. Anal. 259 (2010), 1098–1128.
* [3] B. Jawerth. Some observations on Besov and Lizorkin-Triebel spaces. Math. Scand. 40 (1977), 94–104.
* [4] R. Kaufmann. On the theorem of Jarník and Besicovitch. Acta Arithmetica 39 (1981), 265–267.
* [5] M. Moussai. Composition operators on Besov algebras. Revista Mat. Iberoamer. 28 (2012), 239–272.
* [6] M. Moussai. Realizations of homogeneous Besov and Triebel-Lizorkin spaces and an application to pointwise multipliers. Anal. Appl. (Singap.) 13 (2015), 149–183.
* [7] J. Peetre. New Thoughts on Besov Spaces. Duke Univ. Math. Series I, Durham, N.C., 1976.
* [8] T. Runst & W. Sickel. Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations. de Gruyter, Berlin, 1996.
* [9] H.J. Schmeisser & H. Triebel. Topics in Fourier analysis and function spaces. Geest & Portig, Leipzig 1987, Wiley, Chichester 1987.
* [10] T-H. Wolff. Lectures on Harmonic Analysis. University Lecture Series. AMS Press, 2003.
* [11] M. Yamazaki. A quasi-homogeneous version of paradifferential operators, I: Boundedness on spaces of Besov type. J. Fac. Sci. Univ. Tokyo, Sect. IA Math. 33 (1986), 131–174. II: A Symbolic calculus. ibidem 33 (1986), 311–345.
Université de Paris, I.M.J. - P.R.G (UMR 7586)
Bâtiment Sophie Germain
Case 7012
F 75205 Paris Cedex 13
[email protected]
# Quadratic relations of the deformed $W$-superalgebra ${\cal
W}_{q,t}\bigl{(}A(M,N)\bigr{)}$
Takeo KOJIMA
###### Abstract
We find the free field construction of the basic $W$-current and screening
currents for the deformed $W$-superalgebra ${\cal
W}_{q,t}\bigl{(}A(M,N)\bigr{)}$ associated with Lie superalgebra of type
$A(M,N)$. Using this free field construction, we introduce the higher
$W$-currents and obtain a closed set of quadratic relations among them. These
relations are independent of the choice of Dynkin-diagrams for the Lie
superalgebra $A(M,N)$, though the screening currents are not. This allows us
to define ${\cal W}_{q,t}\bigl{(}A(M,N)\bigr{)}$ by generators and relations.
Department of Mathematics and Physics, Faculty of Engineering, Yamagata
University,
Jonan 4-chome 3-16, Yonezawa 992-8510, JAPAN
## 1 Introduction
The deformed $W$-algebra ${\cal W}_{q,t}\bigl{(}\mathfrak{g}\bigr{)}$ is a two
parameter deformation of the classical $W$-algebra ${\cal W}(\mathfrak{g})$.
Shiraishi et al. [1] obtained a free field construction of the deformed
Virasoro algebra ${\cal W}_{q,t}\bigl{(}\mathfrak{sl}(2)\bigr{)}$, which is a
one-parameter deformation of the Virasoro algebra, to construct a deformation
of the correspondence between conformal field theory and the Calogero-
Sutherland model. The theory of the deformed $W$-algebras ${\cal
W}_{q,t}(\mathfrak{g})$ has been developed in papers [2, 3, 4, 5, 6, 7, 8, 9,
10, 11]. However, in comparison with the conformal case, the theory of the
deformed $W$-algebra is still not fully developed and understood. It is
therefore worthwhile to construct ${\cal W}_{q,t}({\mathfrak{g}})$ concretely
in each case. This paper is a continuation of the
paper [11] for ${\cal W}_{q,t}\bigl{(}A(1,0)\bigr{)}$. The purpose of this
paper is to generalize the result of case $A(1,0)$ to $A(M,N)$.
We follow the method of [10], where a free field construction is found for the
deformed $\mathcal{W}_{q,t}\bigl{(}\mathfrak{sl}(3)\bigr{)}$ and
$\mathcal{W}_{q,t}\bigl{(}A(1,0)\bigr{)}$. Starting from a $W$-current given
as a sum of three vertex operators
$\displaystyle T_{1}(z)=\Lambda_{1}(z)+\Lambda_{2}(z)+\Lambda_{3}(z)\,,$
and two screening currents $S_{j}(z)$ given by a vertex operator, the authors
of [10] determined them simultaneously by demanding that $T_{1}(z)$ and
$S_{j}(w)$ commute up to a total difference. Higher currents $T_{i}(z)$ are
defined inductively by the fusion relation
$\displaystyle\mathop{\mathrm{Res}}_{w=x^{i}z}T_{1}(w)T_{i-1}(z)=c_{i}T_{i}(x^{i-1}z)$
with appropriate constants $x$ and $c_{i}$. In the case of
$\mathcal{W}_{q,t}\bigl{(}\mathfrak{sl}(3)\bigr{)}$, it is known that they
truncate, i.e. $T_{3}(z)=1$ and $T_{i}(z)=0$ ($i\geq 4$), and that $T_{1}(z)$
and $T_{2}(z)$ satisfy the quadratic relations [2, 3]
$\displaystyle
f_{1,1}\left(\frac{z_{2}}{z_{1}}\right)T_{1}(z_{1})T_{1}(z_{2})-f_{1,1}\left(\frac{z_{1}}{z_{2}}\right)T_{1}(z_{2})T_{1}(z_{1})=c\left(\delta\left(\frac{x^{-2}z_{2}}{z_{1}}\right)T_{2}(x^{-1}z_{2})-\delta\left(\frac{x^{2}z_{2}}{z_{1}}\right)T_{2}(xz_{2})\right),$
$\displaystyle
f_{1,2}\left(\frac{z_{2}}{z_{1}}\right)T_{1}(z_{1})T_{2}(z_{2})-f_{2,1}\left(\frac{z_{1}}{z_{2}}\right)T_{2}(z_{2})T_{1}(z_{1})=c\left(\delta\left(\frac{x^{-3}z_{2}}{z_{1}}\right)-\delta\left(\frac{x^{3}z_{2}}{z_{1}}\right)\right),$
$\displaystyle
f_{2,2}\left(\frac{z_{2}}{z_{1}}\right)T_{2}(z_{1})T_{2}(z_{2})-f_{2,2}\left(\frac{z_{1}}{z_{2}}\right)T_{2}(z_{2})T_{2}(z_{1})=c\left(\delta\left(\frac{x^{-2}z_{2}}{z_{1}}\right)T_{1}(x^{-1}z_{2})-\delta\left(\frac{x^{2}z_{2}}{z_{1}}\right)T_{1}(xz_{2})\right)$
with appropriate constants $x,c$, and functions $f_{i,j}(z)$. In the case of
$\mathcal{W}_{q,t}\bigl{(}A(1,0)\bigr{)}$, it was shown in [11] that such
truncation for $T_{i}(z)$ does not take place and that an infinite number of
quadratic relations are satisfied by an infinite number of currents
$T_{i}(z)$. In the present paper, we extend this result to general $A(M,N)$.
Following the method of [10], we construct the basic $W$ current $T_{1}(z)$
together with the screening currents $S_{j}(w)$ for
$\mathcal{W}_{q,t}\bigl{(}A(M,N)\bigr{)}$ (See (8) and (9)). We introduce the
higher $W$-currents $T_{i}(z)$ (See (185)) and obtain a closed set of
quadratic relations among them (See (187)). We show further that these
relations are independent of the choice of Dynkin-diagrams for the
superalgebra $A(M,N)$, though the screening currents are not. This allows us
to define ${\cal W}_{q,t}\bigl{(}A(M,N)\bigr{)}$ by generators and relations.
The text is organized as follows. In Section 2, we prepare the notation and
formulate the problem. In Section 3, we give a free field construction of the
basic $W$-current $T_{1}(z)$ and the screening currents $S_{j}(w)$ for the
deformed $W$-algebra ${\cal W}_{q,t}\bigl{(}A(M,N)\bigr{)}$. In Section 4, we
introduce higher $W$-currents $T_{i}(z)$ and present a closed set of quadratic
relations among them. We show that these quadratic relations are independent
of the choice of the Dynkin-diagram for the superalgebra $A(M,N)$. We also obtain
the $q$-Poisson algebra in the classical limit. Section 5 is devoted to
conclusion and discussion.
## 2 Preliminaries
In this section we prepare the notation and formulate the problem. Throughout
this paper, we fix a real number $r>1$ and a complex number $x$ with
$0<|x|<1$.
### 2.1 Notation
In this section we use complex numbers $a$, $w$ ($w\neq 0$), $q$ ($q\neq 0,\pm
1$), and $p$ with $|p|<1$. For any integer $n$, define the $q$-integer
$\displaystyle[n]_{q}=\frac{q^{n}-q^{-n}}{q-q^{-1}}.$
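The $q$-integer can be checked exactly with rational arithmetic; a small sketch (our encoding):

```python
from fractions import Fraction

def q_integer(n: int, q: Fraction) -> Fraction:
    # [n]_q = (q^n - q^{-n}) / (q - q^{-1}), computed exactly over the rationals
    return (q**n - q**(-n)) / (q - q**(-1))
```

For instance, $[3]_{q}=q^{2}+1+q^{-2}$, and $[-n]_{q}=-[n]_{q}$.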
We use symbols for infinite products
$\displaystyle(a;p)_{\infty}=\prod_{k=0}^{\infty}(1-ap^{k}),~{}~{}~{}(a_{1},a_{2},\ldots,a_{N};p)_{\infty}=\prod_{i=1}^{N}(a_{i};p)_{\infty}$
for complex numbers $a_{1},a_{2},\ldots,a_{N}$. The following standard
formulae are useful.
$\displaystyle\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}a^{m}\right)=1-a,~{}~{}~{}\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{a^{m}}{1-p^{m}}\right)=(a;p)_{\infty}.$
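Both standard formulae can be verified numerically by truncating the sums and products; a sketch (truncation depths chosen ad hoc):

```python
import math

def qpochhammer(a, p, K=200):
    # truncated (a; p)_infty = prod_{k>=0} (1 - a p^k)
    prod = 1.0
    for k in range(K):
        prod *= 1.0 - a * p**k
    return prod

def exp_sum_plain(a, M=400):
    # exp(-sum_{m>=1} a^m / m), which should equal 1 - a for |a| < 1
    return math.exp(-sum(a**m / m for m in range(1, M + 1)))

def exp_sum_poch(a, p, M=400):
    # exp(-sum_{m>=1} a^m / (m (1 - p^m))), which should equal (a; p)_infty
    return math.exp(-sum(a**m / (m * (1 - p**m)) for m in range(1, M + 1)))
```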
We use the elliptic theta function $\Theta_{p}(w)$ and the compact notation
$\Theta_{p}(w_{1},w_{2},\ldots,w_{N})$ as
$\displaystyle\Theta_{p}(w)=(p,w,pw^{-1};p)_{\infty},~{}~{}~{}\Theta_{p}(w_{1},w_{2},\ldots,w_{N})=\prod_{i=1}^{N}\Theta_{p}(w_{i})$
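The truncated infinite products give a quick numerical handle on $\Theta_{p}(w)$; the sketch below also checks the quasi-periodicity $\Theta_{p}(pw)=-w^{-1}\Theta_{p}(w)$ and inversion $\Theta_{p}(w^{-1})=-w^{-1}\Theta_{p}(w)$, classical identities that are not stated in the text:

```python
def qpoch(a, p, K=200):
    # truncated infinite product (a; p)_infty
    r = 1.0
    for k in range(K):
        r *= 1.0 - a * p**k
    return r

def theta(p, w, K=200):
    # Theta_p(w) = (p; p)_inf (w; p)_inf (p w^{-1}; p)_inf
    return qpoch(p, p, K) * qpoch(w, p, K) * qpoch(p / w, p, K)
```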
for complex numbers $w_{1},w_{2},\ldots,w_{N}\neq 0$. Define $\delta(z)$ by
the formal series
$\displaystyle\delta(z)=\sum_{m\in{\mathbf{Z}}}z^{m}.$
### 2.2 Dynkin-diagram of $A(M,N)$
In this section we introduce Dynkin-diagrams of the Lie superalgebra $A(M,N)$.
We fix integers $M,N\geq 0$ with $M+N\geq 1$ and set $L=M+N+1$. Let
$\varepsilon_{1},\varepsilon_{2},\ldots,\varepsilon_{M+1}$ and
$\delta_{1},\delta_{2},\ldots,\delta_{N+1}$ be a basis of $\mathbf{R}^{L+1}$
with an inner product $(~{},~{})$ such that
$\displaystyle(\varepsilon_{i},\varepsilon_{j})=\delta_{i,j}~{}~{}~{}(1\leq
i,j\leq M+1),~{}~{}~{}(\delta_{i},\delta_{j})=-\delta_{i,j}~{}~{}~{}(1\leq
i,j\leq N+1),$
$\displaystyle(\varepsilon_{i},\delta_{j})=(\delta_{j},\varepsilon_{i})=0~{}~{}~{}(1\leq
i\leq M+1,1\leq j\leq N+1).$
The standard fundamental system $\Pi^{st}$ for the Lie superalgebra $A(M,N)$
is given as
$\displaystyle\Pi^{st}=\\{\alpha_{i}=\varepsilon_{i}-\varepsilon_{i+1},\alpha_{M+1}=\varepsilon_{M+1}-\delta_{1},\alpha_{M+1+j}=\delta_{j}-\delta_{j+1}|1\leq
i\leq M,1\leq j\leq N\\}.$
The standard Dynkin-diagram $\Phi^{st}$ for the Lie superalgebra $A(M,N)$ is
given as
[Diagram $\Phi^{st}$: a chain of $L$ nodes labelled $\alpha_{1},\ldots,\alpha_{M},\alpha_{M+1},\alpha_{M+2},\ldots,\alpha_{M+N+1}$, in which the node $\alpha_{M+1}$ is a crossed circle.]
Here a circle represents an even simple root and a crossed circle represents
an odd isotropic simple root.
The Dynkin-diagram of the Lie superalgebra $A(M,N)$ is not uniquely
determined; this indeterminacy arises from the fundamental reflections
$r_{\alpha_{i}}$. For a fundamental system $\Pi$, the fundamental reflection
$r_{\alpha_{i}}$ $(\alpha_{i}\in\Pi)$ satisfies
$\displaystyle
r_{\alpha_{i}}(\alpha_{j})=\left\\{\begin{array}[]{cc}-\alpha_{i}&{\rm
if}~{}~{}~{}j=i,\\\ \alpha_{i}+\alpha_{j}&{\rm if}~{}~{}~{}j\neq
i,~{}(\alpha_{i},\alpha_{j})\neq 0,\\\ \alpha_{j}&{\rm if}~{}~{}~{}j\neq
i,~{}(\alpha_{i},\alpha_{j})=0.\end{array}\right.$ (4)
For an odd isotropic root $\alpha_{i}$, we call the fundamental reflection
$r_{\alpha_{i}}$ an odd reflection. For an even root $\alpha_{i}$, we call it
a real reflection. The Dynkin-diagram transformed by $r_{\alpha_{i}}$ is
denoted by $r_{\alpha_{i}}(\Phi)$. Real reflections do not change the
Dynkin-diagram. We illustrate the notion of odd reflections as follows.
[Diagram: an odd reflection $r_{\alpha_{i}}$ replaces the labels $(\alpha_{i-1},\alpha_{i},\alpha_{i+1})$ by $(\alpha_{i-1}+\alpha_{i},-\alpha_{i},\alpha_{i}+\alpha_{i+1})$; at the ends of the diagram, $r_{\alpha_{1}}$ replaces $(\alpha_{1},\alpha_{2})$ by $(-\alpha_{1},\alpha_{1}+\alpha_{2})$ and $r_{\alpha_{L}}$ replaces $(\alpha_{L-1},\alpha_{L})$ by $(\alpha_{L-1}+\alpha_{L},-\alpha_{L})$.]
Example $A(1,0)$ and $A(0,1)$
[Diagram: the fundamental systems of $A(1,0)$ are connected by the odd reflections $r_{\delta_{1}-\varepsilon_{1}}$ and $r_{\delta_{1}-\varepsilon_{2}}$.]
Here
$\Pi_{1}=\\{\delta_{1}-\varepsilon_{1},\varepsilon_{1}-\varepsilon_{2}\\}$ and
$\Pi_{2}=\\{\varepsilon_{1}-\delta_{1},\delta_{1}-\varepsilon_{2}\\}$ are the
other fundamental systems.
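As a consistency check, rule (4) reproduces the fundamental system $\Pi_{2}$ of the $A(1,0)$ example; a sketch with roots written as coordinate vectors in the basis $(\varepsilon_{1},\varepsilon_{2},\delta_{1})$ (our encoding):

```python
def ip(u, v):
    # inner product on R^3 in the basis (eps_1, eps_2, delta_1), signature (+, +, -)
    return u[0] * v[0] + u[1] * v[1] - u[2] * v[2]

def reflect(Pi, i):
    # fundamental reflection r_{alpha_i} acting on the fundamental system Pi, rule (4)
    a = Pi[i]
    new = []
    for j, b in enumerate(Pi):
        if j == i:
            new.append([-x for x in a])          # alpha_i -> -alpha_i
        elif ip(a, b) != 0:
            new.append([x + y for x, y in zip(a, b)])  # alpha_j -> alpha_i + alpha_j
        else:
            new.append(list(b))                  # alpha_j unchanged
    return new
```

Starting from $\Pi^{st}=\\{\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\delta_{1}\\}$, the odd reflection in $\varepsilon_{2}-\delta_{1}$ yields $\\{\varepsilon_{1}-\delta_{1},\delta_{1}-\varepsilon_{2}\\}=\Pi_{2}$, and reflecting again returns $\Pi^{st}$.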
Example $A(1,1)$
[Diagram: the standard system $\\{\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\delta_{1},\delta_{1}-\delta_{2}\\}$ is related to $\Pi_{1}$ by the odd reflection $r_{\varepsilon_{2}-\delta_{1}}$, and $\Pi_{1}$ to $\Pi_{2}$ by $r_{\varepsilon_{1}-\delta_{1}}$.]
Here
$\Pi_{1}=\\{\varepsilon_{1}-\delta_{1},\delta_{1}-\varepsilon_{2},\varepsilon_{2}-\delta_{2}\\}$
and
$\Pi_{2}=\\{\delta_{1}-\varepsilon_{1},\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\delta_{2}\\}$
are the other fundamental systems.
Example $A(2,0)$ and $A(0,2)$
[Diagram: the standard system $\\{\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\varepsilon_{3},\varepsilon_{3}-\delta_{1}\\}$ is related to $\Pi_{1}$ by the odd reflection $r_{\varepsilon_{3}-\delta_{1}}$, $\Pi_{1}$ to $\Pi_{2}$ by $r_{\varepsilon_{2}-\delta_{1}}$, and $\Pi_{2}$ to $\Pi_{3}$ by $r_{\varepsilon_{1}-\delta_{1}}$.]
Here
$\Pi_{1}=\\{\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\delta_{1},\delta_{1}-\varepsilon_{3}\\}$,
$\Pi_{2}=\\{\varepsilon_{1}-\delta_{1},\delta_{1}-\varepsilon_{2},\varepsilon_{2}-\varepsilon_{3}\\}$,
and
$\Pi_{3}=\\{\delta_{1}-\varepsilon_{1},\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\varepsilon_{3}\\}$
are the other fundamental systems.
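The odd-reflection chains in the examples above can be checked mechanically: for an isotropic root $\alpha$, the odd reflection $r_{\alpha}$ sends $\alpha\mapsto-\alpha$ and $\beta\mapsto\beta+\alpha$ whenever $(\alpha,\beta)\neq 0$. A minimal sketch (hypothetical helper names, using the standard bilinear form $(\varepsilon_{i},\varepsilon_{j})=\delta_{ij}$, $(\delta_{i},\delta_{j})=-\delta_{ij}$, $(\varepsilon_{i},\delta_{j})=0$) verifying the $A(1,1)$ chain:

```python
# Hypothetical helpers: a root is a dict of integer coefficients over the basis
# symbols 'e1','e2',... (norm +1) and 'd1','d2',... (norm -1).

def form(a, b):
    # Standard bilinear form: (eps_i,eps_j)=+delta_ij, (del_i,del_j)=-delta_ij.
    return sum(c * b.get(s, 0) * (1 if s.startswith('e') else -1)
               for s, c in a.items())

def add(a, b, sign=1):
    out = dict(a)
    for s, c in b.items():
        out[s] = out.get(s, 0) + sign * c
        if out[s] == 0:
            del out[s]
    return out

def root(plus, minus):   # the root plus - minus
    return add(plus, minus, sign=-1)

def odd_reflect(pi, alpha):
    # r_alpha: alpha -> -alpha; beta -> beta + alpha when (alpha, beta) != 0.
    assert form(alpha, alpha) == 0, "alpha must be isotropic"
    return [{s: -c for s, c in alpha.items()} if beta == alpha
            else add(beta, alpha) if form(alpha, beta) != 0
            else dict(beta)
            for beta in pi]

e1, e2, d1, d2 = {'e1': 1}, {'e2': 1}, {'d1': 1}, {'d2': 1}

# Example A(1,1): Pi_st -> Pi_1 -> Pi_2 via r_{e2-d1} and then r_{e1-d1}.
pi_st = [root(e1, e2), root(e2, d1), root(d1, d2)]
pi_1 = odd_reflect(pi_st, root(e2, d1))
pi_2 = odd_reflect(pi_1, root(e1, d1))
assert pi_1 == [root(e1, d1), root(d1, e2), root(e2, d2)]   # Pi_1 above
assert pi_2 == [root(d1, e1), root(e1, e2), root(e2, d2)]   # Pi_2 above
```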
### 2.3 Ding-Feigin’s construction
We introduce the Heisenberg algebra with generators $a_{i}(m)$, $Q_{i}$
$(m\in{\mathbf{Z}},1\leq i\leq L)$ satisfying
$\displaystyle[a_{i}(m),a_{j}(n)]=\frac{1}{m}A_{i,j}(m)\delta_{m+n,0}~{}~{}(m,n\neq
0,1\leq i,j\leq L),$ $\displaystyle[a_{i}(0),Q_{j}]=A_{i,j}(0)~{}~{}(1\leq
i,j\leq L).$
The remaining commutators vanish. We impose the following conditions on the
parameters $A_{i,j}(m)\in\mathbf{C}$:
$\displaystyle A_{i,i}(m)=1~{}(m\neq 0,1\leq i\leq
L),~{}~{}~{}A_{i,j}(m)=A_{j,i}(-m)~{}(m\in\mathbf{Z},1\leq i\neq j\leq L),$
$\displaystyle\det\left(\left(A_{i,j}(m)\right)_{i,j=1}^{L}\right)\neq
0~{}~{}(m\in\mathbf{Z}).$
We use the normal ordering symbol $:~{}:$ that satisfies
$\displaystyle:a_{i}(m)a_{j}(n):=\left\\{\begin{array}[]{cc}a_{i}(m)a_{j}(n)&(m<0),\\\
a_{j}(n)a_{i}(m)&(m\geq 0)\end{array}\right.~{}~{}~{}(m,n\in{\mathbf{Z}},1\leq
i,j\leq L),$ (7)
$\displaystyle:a_{i}(0)Q_{j}:=:Q_{j}a_{i}(0):=Q_{j}a_{i}(0)~{}~{}~{}(1\leq
i,j\leq L).$
Next, we work on the Fock space of the free field. Let $T_{1}(z)$ be a sum of
vertex operators
$\displaystyle
T_{1}(z)=g_{1}\Lambda_{1}(z)+g_{2}\Lambda_{2}(z)+\cdots+g_{L+1}\Lambda_{L+1}(z),$
(8)
$\displaystyle{\Lambda}_{i}(z)=e^{\sum_{j=1}^{L}\lambda_{i,j}(0)a_{j}(0)}:\exp\left(\sum_{j=1}^{L}\sum_{m\neq
0}\lambda_{i,j}(m)a_{j}(m)z^{-m}\right):~{}~{}(1\leq i\leq L+1).$
We call $T_{1}(z)$ the basic $W$-current. We introduce the screening currents
$S_{j}(w)$ $(1\leq j\leq L)$ as
$\displaystyle
S_{j}(w)=w^{\frac{1}{2}A_{j,j}(0)}e^{Q_{j}}w^{a_{j}(0)}:\exp\left(\sum_{m\neq
0}s_{j}(m)a_{j}(m)w^{-m}\right):~{}~{}(1\leq j\leq L).$ (9)
The parameters $A_{i,j}(m)$, $\lambda_{i,j}(m),s_{j}(m)$ and $g_{i}$ are to be
determined through the construction given below.
Quite generally, given two vertex operators $V(z)$, $W(w)$, their product has
the form
$\displaystyle
V(z)W(w)=\varphi_{V,W}\left(z,w\right):V(z)W(w):~{}~{}~{}(|z|\gg|w|)$
with some formal power series $\varphi_{V,W}(z,w)\in\mathbf{C}[[w/z]]$. The
vertex operators $V(z)$ and $W(w)$ are said to be mutually local if the
following two conditions hold.
$\displaystyle\mathrm{(i)}~{}~{}\varphi_{V,W}(z,w)~{}{\rm
and}~{}\varphi_{W,V}(w,z)~{}{\rm
converge~{}to~{}rational~{}functions},~{}~{}~{}$
$\displaystyle\mathrm{(ii)}~{}~{}\varphi_{V,W}(z,w)=\varphi_{W,V}(w,z).$
Under this setting, we are going to determine the $W$-current $T_{1}(z)$ and
the screening currents $S_{j}(w)$ that satisfy the following mutual locality
(10), commutativity (11), and symmetry (12).
Mutual Locality $\Lambda_{i}(z)$ $(1\leq i\leq L+1)$ and $S_{j}(w)$ $(1\leq
j\leq L)$ are mutually local and the operator product expansion has at most
one pole and one zero.
$\displaystyle\varphi_{\Lambda_{i},S_{j}}(z,w)=\varphi_{S_{j},\Lambda_{i}}(w,z)={\displaystyle\frac{w-\frac{z}{p_{i,j}}}{w-\frac{z}{q_{i,j}}}}~{}~{}~{}(1\leq
i\leq L+1,1\leq j\leq L).$ (10)
We allow the possibility $p_{i,j}=q_{i,j}$, in which case
$\Lambda_{i}(z)S_{j}(w)=S_{j}(w)\Lambda_{i}(z)=:\Lambda_{i}(z)S_{j}(w):$.
Commutativity $T_{1}(z)$ commutes with $S_{j}(w)$ $(1\leq j\leq L)$ up to a
total difference
$\displaystyle[T_{1}(z),S_{j}(w)]=B_{j}(z)\left(\delta\left(\frac{q_{j,j}w}{z}\right)-\delta\left(\frac{q_{j+1,j}w}{z}\right)\right)~{}~{}~{}(1\leq
j\leq L)\,,$ (11)
with some currents $B_{j}(z)$ $(1\leq j\leq L)$.
Symmetry For $\widetilde{S}_{j}(w)=e^{-Q_{j}}S_{j}(w)$ $(1\leq j\leq L)$, we
impose
$\displaystyle\varphi_{\widetilde{S}_{k},\widetilde{S}_{l}}(w,z)=\varphi_{\widetilde{S}_{l},\widetilde{S}_{k}}(w,z)~{}~{}~{}(1\leq
k,l\leq L),$
$\displaystyle\varphi_{\widetilde{S}_{k},\widetilde{S}_{l}}(w,z)=1~{}~{}~{}(|k-l|\geq
2,1\leq k,l\leq L).$ (12)
For simplicity, we further impose the following conditions.
$\displaystyle q_{i,j}~{}~{}(1\leq i\leq L+1,1\leq j\leq L)~{}~{}{\rm
are~{}distinct},$ (13)
$\displaystyle\left|\frac{q_{j+1,j}}{q_{j,j}}\right|\neq 1~{}~{}~{}(1\leq
j\leq L),~{}~{}~{}-1<A_{k,k+1}(0)<0~{}~{}~{}(1\leq k\leq L-1).$ (14)
It can be seen from elementary considerations that there are three kinds of
freedom in choosing the parameters.
(i) Rearranging indices
$\displaystyle\Lambda_{i}(z)\mapsto\Lambda_{i^{\prime}}(z),~{}~{}~{}S_{j}(w)\mapsto
S_{j^{\prime}}(w).$ (15)
(ii) Scaling variables: $\Lambda_{i}(z)\mapsto\Lambda_{i}(sz)$, i.e.
$\displaystyle\lambda_{i,j}(m)\mapsto
s^{m}\lambda_{i,j}(m)~{}~{}~{}q_{i,j}\mapsto sq_{i,j},~{}~{}~{}p_{i,j}\mapsto
sp_{i,j}~{}~{}~{}(m\neq 0,1\leq i\leq L+1,1\leq j\leq L).$ (16)
(iii) Scaling free fields: The free field can be rescaled as
$a_{j}(m)\mapsto\alpha_{j}(m)^{-1}a_{j}(m)$ $(m>0,1\leq j\leq L)$, i.e.
$\displaystyle\begin{array}[]{cc}\begin{array}[]{c}a_{j}(m)\mapsto\alpha_{j}(m)^{-1}a_{j}(m),~{}~{}s_{j}(m)\mapsto\alpha_{j}(m)s_{j}(m),\\\
\lambda_{i,j}(m)\mapsto\lambda_{i,j}(m)\alpha_{j}(m)\end{array}&(m\neq 0,1\leq
i\leq L+1,1\leq j\leq L),\\\
A_{i,j}(m)\mapsto\alpha_{i}(m)^{-1}A_{i,j}(m)\alpha_{j}(m)&(m\neq 0,1\leq
i,j\leq L),\end{array}$ (21)
where we set $\alpha_{j}(m)\neq 0$ $(m>0,1\leq j\leq L)$ and
$\alpha_{j}(-m)=\alpha_{j}(m)^{-1}$ $(m>0,1\leq j\leq L)$.
## 3 Free field construction
In this section we give a free field construction of the basic $W$-current and
the screening currents for ${\cal W}_{q,t}\bigl{(}A(M,N)\bigr{)}$.
### 3.1 Free field construction
In Ding-Feigin’s construction [10], there are $2^{L}$ cases to be considered
separately according to the values of $A_{j,j}(0)$ $(1\leq j\leq L)$. We fix
integers $j_{1},j_{2},\ldots,j_{K}$ $(1\leq K\leq L)$ satisfying
$1\leq j_{1}<j_{2}<\cdots<j_{K}\leq L$. Hereafter, we study the case where the
following conditions on $A_{j,j}(0)$ $(1\leq j\leq L)$ are satisfied.
$\displaystyle~{}A_{j,j}(0)=1~{}~{}~{}{\rm
if}~{}~{}~{}j=j_{1},j_{2},\ldots,j_{K},~{}~{}A_{j,j}(0)\neq 1~{}~{}~{}{\rm
if}~{}~{}~{}j\neq j_{1},j_{2},\ldots,j_{K}.$
First, we prepare the parameters $A_{i,j}(0)$ needed for the free field
construction. We have already introduced the $L\times L$ symmetric matrix
$(A_{i,j}(0))_{i,j=1}^{L}$ as parameters of the Heisenberg algebra. To write
$p_{i,j}$, $q_{i,j}$, $A_{i,j}(m)$, $s_{j}(m)$, and $\lambda_{i,j}(m)$
explicitly, it is convenient to introduce the $(L+1)\times(L+1)$ symmetric
matrix $(A_{i,j}(0))_{i,j=0}^{L}$ uniquely extended from
$(A_{i,j}(0))_{i,j=1}^{L}$ as follows.
$\displaystyle A_{0,1}(0)=\left\\{\begin{array}[]{cc}A_{1,2}(0)&{\rm
if}~{}~{}j_{1}\neq 1,\\\ -1-A_{1,2}(0)&{\rm
if}~{}~{}j_{1}=1,\end{array}\right.~{}A_{0,L}(0)=\left\\{\begin{array}[]{cc}A_{L,L-1}(0)&{\rm
if}~{}~{}j_{K}\neq L,\\\ -1-A_{L,L-1}(0)&{\rm
if}~{}~{}j_{K}=L,\end{array}\right.$ (26) $\displaystyle
A_{0,0}(0)=\left\\{\begin{array}[]{cc}-2A_{0,L}(0)&{\rm if}~{}~{}K={\rm
even},\\\ 1&{\rm if}~{}~{}K={\rm
odd},\end{array}\right.~{}~{}A_{0,i}(0)=0~{}~{}(i\neq 0,1,L).$ (29)
The extended matrix $\left(A_{i,j}(0)\right)_{i,j=0}^{L}$ is explicitly
written in terms of $\beta=A_{1,2}(0)$ as follows (see Lemma 3.10).
$\displaystyle A_{i,i}(0)=\left\\{\begin{array}[]{cc}1&{\rm
if}~{}~{}i\in\widehat{J},\\\ -2\beta&{\rm
if}~{}~{}i\notin\widehat{J},~{}i\in\widehat{I}(\beta),\\\ 2(1+\beta)&{\rm
if}~{}~{}i\notin\widehat{J},~{}i\in\widehat{I}(-1-\beta)\end{array}\right.~{}~{}(1\leq
i\leq L+1),$ (33) $\displaystyle
A_{j-1,j}(0)=A_{j,j-1}(0)=\left\\{\begin{array}[]{cc}\beta&{\rm
if}~{}~{}j\in\widehat{I}(\beta),\\\ -1-\beta&{\rm
if}~{}~{}j\in\widehat{I}(-1-\beta)\end{array}\right.~{}~{}(1\leq j\leq L+1),$
(36) $\displaystyle A_{k,l}(0)=A_{l,k}(0)=0~{}~{}\left(|k-l|\geq 2,1\leq
k,l\leq L~{}~{}{\rm or}~{}~{}k=0,~{}l\neq 0,1,L\right).$ (37)
Here we set
$\displaystyle\widehat{J}=\left\\{\begin{array}[]{cc}\\{j_{1},j_{2},\ldots,j_{K}\\}&{\rm
if}~{}~{}K={\rm even},\\\ \\{j_{1},j_{2},\ldots,j_{K},L+1\\}&{\rm
if}~{}~{}K={\rm odd},\end{array}\right.~{}~{}~{}\widehat{I}(\delta)=\\{1\leq
j\leq L+1|A_{j-1,j}(0)=\delta\\}.$ (40)
We understand the subscripts of $A_{i,j}(0)$ modulo $L+1$, i.e.
$A_{0,0}(0)=A_{L+1,L+1}(0)$. We note
$\widehat{I}(\beta)\cup\widehat{I}(-1-\beta)=\\{1,2,\ldots,L+1\\}$.
Next, we introduce the two parameters $x$ and $r$ defined as
$\displaystyle
x^{2r}=\frac{q_{2,1}}{q_{1,1}},~{}~{}~{}r=\left\\{\begin{array}[]{cc}{\displaystyle\frac{1}{1+\beta}}&{\rm
for}~{}~{}~{}|\widehat{I}(\beta)|>|\widehat{I}(-1-\beta)|,\\\
{\displaystyle-\frac{1}{\beta}}&{\rm
for}~{}~{}~{}|\widehat{I}(\beta)|\leq|\widehat{I}(-1-\beta)|,\end{array}\right.$
(43)
where $|\widehat{I}(\delta)|$ represents the number of elements in
$\widehat{I}(\delta)$. By this parametrization, we have (83). From (14) and
$q_{i,j}\neq 0$, we obtain $|x|\neq 0,1$ and $r>1$. In this paper, we restrict
our attention to
$\displaystyle 0<|x|<1,~{}~{}~{}r>1.$
For the case of $|x|>1$, we obtain the same results under the change $x\to
x^{-1}$.
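Both candidate values of $r$ in (43) are indeed greater than $1$ for $-1<\beta<0$, as assumed in (14); a trivial numerical check (sample values only):

```python
# Both 1/(1+beta) and -1/beta exceed 1 for -1 < beta < 0; sample values only.
for beta in (-0.9, -0.5, -0.1):
    assert 1 / (1 + beta) > 1
    assert -1 / beta > 1
```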
To give the free field construction, we set $D(k,l;\Phi)$ as
$\displaystyle
D(k,l;{\Phi})=\left\\{\begin{array}[]{cc}(r-1)\left|\widehat{I}\left(k+1,l+1;-\frac{1}{r},\widehat{\Phi}\right)\right|+\left|\widehat{I}\left(k+1,l+1;\frac{1-r}{r},\widehat{\Phi}\right)\right|&(0\leq
k\leq l\leq L),\\\ 0&(0\leq l<k\leq L),\end{array}\right.$ (46)
$\displaystyle\widehat{I}(k,l;\delta,\widehat{\Phi})=\\{1\leq j\leq L+1|k\leq
j\leq l,A_{j-1,j}(0)=\delta\\}~{}~{}~{}(1\leq k\leq l\leq L+1).$ (47)
$D(k,l;\Phi)$ is given in terms of the matrix $(A_{i,j}(0))_{i,j=0}^{L}$. The
matrix $(A_{i,j}(0))_{i,j=0}^{L}$ can be constructed from the Dynkin-diagrams
$\Phi$ and $\widehat{\Phi}$, which we will introduce below.
Example We fix integers $M,N$ $(M\geq N\geq 0,M+N\geq 1)$. We set $K=1$,
$L=M+N+1$, and $j_{1}=M+1$. We have
$\displaystyle A_{i,i}(0)=\left\\{\begin{array}[]{cc}\frac{2(r-1)}{r}&{\rm
if}~{}~{}~{}1\leq i\leq M,\\\ 1&{\rm if}~{}~{}~{}i=0,M+1,\\\
\frac{~{}2~{}}{r}&{\rm if}~{}~{}~{}M+2\leq i\leq L,\end{array}\right.$ (51)
$\displaystyle
A_{i,i-1}(0)=A_{i-1,i}(0)=\left\\{\begin{array}[]{cc}\frac{1-r}{r}&{\rm
if}~{}~{}~{}1\leq i\leq M+1,\\\ -\frac{~{}1~{}}{r}&{\rm if}~{}~{}~{}M+2\leq
i\leq L+1,\end{array}\right.$ (54) $\displaystyle
A_{k,l}(0)=0~{}~{}\left(|k-l|\geq 2,1\leq k,l\leq L~{}~{}{\rm
or}~{}~{}k=0,~{}l\neq 0,1,L\right).$
$\displaystyle\widehat{I}\left(\frac{1-r}{r}\right)=\\{1,2,\ldots,M+1\\},~{}~{}~{}\widehat{I}\left(\frac{1}{r}\right)=\\{M+2,\ldots,L+1\\}.$
We picture the $L\times L$ matrix $(A_{i,j}(0))_{i,j=1}^{L}$ as the standard
Dynkin-diagram $\Phi^{st}$ of $A(M,N)$ in Section 2. We picture the
$(L+1)\times(L+1)$ matrix $(A_{i,j}(0))_{i,j=0}^{L}$ as the Dynkin-diagram
$\widehat{\Phi}^{st}$ as follows.
The diagram $\widehat{\Phi}^{st}$ is the cycle on the nodes $\alpha_{1},\ldots,\alpha_{L}$ and $\alpha_{0}=\alpha_{L+1}$, in which $\alpha_{0}$ and $\alpha_{M+1}$ are crossed circles and the remaining nodes are circles; the segment joining $\alpha_{j-1}$ and $\alpha_{j}$ carries the label $\frac{1-r}{r}$ for $1\leq j\leq M+1$ and $-\frac{1}{r}$ for $M+2\leq j\leq L+1$.
Here a circle represents an even simple root $(\alpha_{i},\alpha_{i})=2$ and a
crossed circle represents an odd isotropic simple root
$(\alpha_{i},\alpha_{i})=0$. The inner product $(\alpha_{i},\alpha_{j})$ of
the roots and the parameters $A_{i,j}(0)$ correspond as
$(\alpha_{i},\alpha_{i})=2\Leftrightarrow A_{i,i}(0)\neq 1$,
$(\alpha_{i},\alpha_{i})=0\Leftrightarrow A_{i,i}(0)=1$,
$(\alpha_{i},\alpha_{j})=-1\Leftrightarrow A_{i,j}(0)\neq 0$ $(i\neq j)$. As
additional information, the values of the parameters $A_{j-1,j}(0)$ are
written beside the line segment connecting $\alpha_{j-1}$ and $\alpha_{j}$. We
have
$\displaystyle D(0,L;\Phi^{st})=(N+1)r+M-N.$
Example For $L=3$, $K=2$, $j_{1}=1,j_{2}=3$, we have
$\displaystyle\left(A_{i,j}(0)\right)_{i,j=0}^{3}=\left(\begin{array}[]{cccc}\frac{2(r-1)}{r}&\frac{1-r}{r}&0&\frac{1-r}{r}\\\
\frac{1-r}{r}&1&-\frac{1}{r}&0\\\ 0&-\frac{1}{r}&\frac{2}{r}&-\frac{1}{r}\\\
\frac{1-r}{r}&0&-\frac{1}{r}&1\end{array}\right),~{}~{}~{}\widehat{I}\left(-\frac{1}{r}\right)=\\{2,3\\},~{}~{}~{}\widehat{I}\left(\frac{1-r}{r}\right)=\\{1,4\\}=\\{0,1\\}.$
(59)
Here we understand the subscripts of $A_{i,j}(0)$ modulo $4$, i.e.
$A_{0,3}(0)=A_{4,3}(0)$. We picture the $3\times 3$ matrix
$(A_{i,j}(0))_{i,j=1}^{3}$ as a nonstandard Dynkin-diagram of $A(1,1)$. We
picture the $4\times 4$ matrix $(A_{i,j}(0))_{i,j=0}^{3}$ as the Dynkin-diagram
$\widehat{\Phi}$ as follows.
The diagram $\Phi$ is the chain on $\alpha_{1},\alpha_{2},\alpha_{3}$, in which $\alpha_{1}$ and $\alpha_{3}$ are crossed circles and $\alpha_{2}$ is a circle. The diagram $\widehat{\Phi}$ is the cycle on $\alpha_{1},\alpha_{2},\alpha_{3}$ and $\alpha_{0}=\alpha_{4}$; the segments joining $\alpha_{1},\alpha_{2}$ and $\alpha_{2},\alpha_{3}$ carry the label $-\frac{1}{r}$, while those joining $\alpha_{3},\alpha_{0}$ and $\alpha_{0},\alpha_{1}$ carry $\frac{1-r}{r}$.
We have
$\displaystyle
D(0,3;\Phi)=2r,~{}~{}~{}D(1,1;\Phi)=r-1,~{}~{}~{}D(1,2;\Phi)=2r-2.$
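Since each value of $D(k,l;\Phi)$ is an integer combination $ar+b$, it can be evaluated purely combinatorially from the edge labels; a small sketch (hypothetical helper names `D`, `standard_edges`) checking the values quoted in both examples:

```python
# Hypothetical helpers: edges[j] encodes A_{j-1,j}(0) for 1 <= j <= L+1,
# 'odd' for the value (1-r)/r and 'even' for -1/r.  D(k,l;Phi) of (46) is
# returned as a pair (a, b) standing for a*r + b.

def D(k, l, edges):
    if k > l:
        return (0, 0)
    n_even = sum(1 for j in range(k + 1, l + 2) if edges[j] == 'even')  # weight r-1
    n_odd = sum(1 for j in range(k + 1, l + 2) if edges[j] == 'odd')    # weight 1
    return (n_even, n_odd - n_even)

def standard_edges(M, N):
    # Standard diagram of A(M,N): (1-r)/r on edges 1..M+1, -1/r on edges M+2..L+1.
    L = M + N + 1
    return {j: 'odd' if j <= M + 1 else 'even' for j in range(1, L + 2)}

# First example: D(0,L;Phi_st) = (N+1)r + M - N.
for M, N in [(1, 0), (1, 1), (2, 1), (3, 2)]:
    L = M + N + 1
    assert D(0, L, standard_edges(M, N)) == (N + 1, M - N)

# Second example: L=3, I((1-r)/r) = {1,4}, I(-1/r) = {2,3}.
edges = {1: 'odd', 2: 'even', 3: 'even', 4: 'odd'}
assert D(0, 3, edges) == (2, 0)    # D(0,3;Phi) = 2r
assert D(1, 1, edges) == (1, -1)   # D(1,1;Phi) = r - 1
assert D(1, 2, edges) == (2, -2)   # D(1,2;Phi) = 2r - 2
```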
###### Theorem 3.1
Assume (10), (11), (12), (13) and (14). Then, up to the freedom (15), (16) and
(21), the parameters $p_{i,j},q_{i,j}$, $A_{i,j}(m)$, $s_{i}(m)$,
$\lambda_{i,j}(m)$, $g_{i}$, and the current $B_{j}(z)$ are uniquely
determined as follows. Conversely, by choosing these parameters, (10), (11),
and (12) are satisfied.
$\displaystyle
q_{j,j}=x^{D(1,j-1;\Phi)},~{}q_{j+1,j}=x^{2r+D(1,j-1;\Phi)}~{}(1\leq j\leq
L),$ $\displaystyle p_{1,1}=\left\\{\begin{array}[]{cc}x^{2}&{\rm
if}~{}~{}1\in\widehat{I}\left(-\frac{1}{r}\right),\\\ x^{2r-2}&{\rm
if}~{}~{}1\in\widehat{I}\left(\frac{1-r}{r}\right),\end{array}\right.$ (62)
$\displaystyle
p_{j,j}=x^{D(1,j-2;\Phi)}\times\left\\{\begin{array}[]{cc}x^{r+1}&{\rm
if}~{}~{}j\in\widehat{I}(-\frac{1}{r}),\\\ x^{2r-1}&{\rm
if}~{}~{}j\in\widehat{I}(\frac{1-r}{r})\end{array}\right.~{}(2\leq j\leq L),$
(65) $\displaystyle
p_{j,j-1}=x^{D(1,j-2;\Phi)}\times\left\\{\begin{array}[]{cc}x^{2r-2}&{\rm
if}~{}~{}j\in\widehat{I}(-\frac{1}{r}),\\\ x^{2}&{\rm
if}~{}~{}j\in\widehat{I}(\frac{1-r}{r})\end{array}\right.~{}(2\leq j\leq
L+1),$ (68) $\displaystyle p_{k,l}=q_{k,l}~{}~{}~{}(k\neq l,l+1,1\leq k\leq
L+1,1\leq l\leq L).$ (69)
$\displaystyle\begin{array}[]{c}s_{j}(m)=1~{}~{}~{}(m>0,1\leq j\leq L),\\\
s_{j}(-m)=\left\\{\begin{array}[]{cc}-1&{\rm if}~{}~{}j\in\widehat{J},\\\
-\displaystyle{\frac{[m]_{x}[2(r-1)m]_{x}}{[rm]_{x}[(r-1)m]_{x}}}&{\rm
if}~{}~{}j\notin\widehat{J},j\in\widehat{I}(-\frac{1}{r}),\\\
-\displaystyle{\frac{[(r-1)m]_{x}[2m]_{x}}{[rm]_{x}[m]_{x}}}&{\rm
if}~{}~{}j\notin\widehat{J},j\in\widehat{I}(\frac{1-r}{r})\end{array}\right.\end{array}~{}(m>0,1\leq
j\leq L).$ (75) $\displaystyle A_{i,i}(0)=\left\\{\begin{array}[]{cc}1&{\rm
if}~{}i\in\widehat{J},\\\ \displaystyle{\frac{~{}2~{}}{r}}&{\rm
if}~{}~{}i\notin\widehat{J},~{}i\in\widehat{I}(-\frac{1}{r}),\\\
\displaystyle{\frac{2(r-1)}{r}}&{\rm
if}~{}~{}i\notin\widehat{J},~{}i\in\widehat{I}(\frac{1-r}{r})\end{array}\right.~{}~{}~{}(1\leq
i\leq L+1),$ (79) $\displaystyle
A_{j-1,j}(0)=A_{j,j-1}(0)=\left\\{\begin{array}[]{cc}\displaystyle{-\frac{~{}1~{}}{r}}&{\rm
if}~{}~{}j\in\widehat{I}(-\frac{1}{r}),\\\ \displaystyle{\frac{1-r}{r}}&{\rm
if}~{}~{}j\in\widehat{I}(\frac{1-r}{r})\end{array}\right.~{}(1\leq j\leq
L+1),$ (82) $\displaystyle A_{k,l}(0)=A_{l,k}(0)=0~{}(|k-l|\geq 2,1\leq
k<l\leq L~{}~{}{\rm or}~{}~{}k=0,l\neq 0,1,L).$ (83) $\displaystyle
A_{j,j}(m)=1~{}(m\neq 0,1\leq j\leq L),$ $\displaystyle A_{k,l}(m)=0~{}(m\neq
0,|k-l|\geq 2,1\leq k,l\leq L),$
$\displaystyle\begin{array}[]{cc}A_{j-1,j}(m)=\displaystyle{\frac{[m]_{x}}{[rm]_{x}}}\times\left\\{\begin{array}[]{cc}\displaystyle{\frac{1}{s_{j}(-m)}}&(m>0),\\\
\displaystyle{\frac{1}{s_{j-1}(m)}}&(m<0)\end{array}\right.&{\rm
if}~{}~{}j\in\widehat{I}\left(-\frac{1}{r}\right),\\\
A_{j-1,j}(m)=\displaystyle{\frac{[(r-1)m]_{x}}{[rm]_{x}}}\times\left\\{\begin{array}[]{cc}\displaystyle{\frac{1}{s_{j}(-m)}}&(m>0),\\\
\displaystyle{\frac{1}{s_{j-1}(m)}}&(m<0)\end{array}\right.&{\rm
if}~{}~{}j\in\widehat{I}\left(\frac{1-r}{r}\right)\end{array}(2\leq j\leq L),$
(90)
$\displaystyle\begin{array}[]{c}A_{j-1,j}(-m)=A_{j,j-1}(m),~{}~{}A_{j,j-1}(-m)=A_{j-1,j}(m)\end{array}(m>0,2\leq
j\leq L).$ (92) $\displaystyle\lambda_{i,j}(0)=\frac{2r\log
x}{D(0,L;\Phi)}\times\left\\{\begin{array}[]{cc}D(0,j-1;\Phi)&{\rm
if}~{}~{}1\leq j\leq i-1,\\\ -D(j,L;\Phi)&{\rm if}~{}~{}i\leq j\leq
L\end{array}\right.~{}(1\leq i\leq L+1).$ (95)
$\displaystyle\frac{\lambda_{i,j}(m)}{s_{j}(m)}=\frac{[rm]_{x}(x-x^{-1})}{[D(0,L;\Phi)m]_{x}}\times\left\\{\begin{array}[]{cc}\displaystyle{-x^{(r+D(1,L;\Phi))m}[D(0,j-1;\Phi)m]_{x}}&{\rm
if}~{}~{}1\leq j\leq i-1,\\\
\displaystyle{x^{(r-D(0,0;\Phi))m}[D(j,L;\Phi)m]_{x}}&{\rm if}~{}~{}i\leq
j\leq L\end{array}\right.$ (98) $\displaystyle(m\neq 0,1\leq i\leq L+1).$ (99)
$\displaystyle g_{i}=g\times\left\\{\begin{array}[]{cc}[r-1]_{x}&{\rm
if}~{}~{}i\in\widehat{I}(-\frac{1}{r}),\\\ 1&{\rm
if}~{}~{}i\in\widehat{I}(\frac{1-r}{r})\end{array}\right.~{}(1\leq i\leq
L+1).$ (102) $\displaystyle
B_{j}(z)=g_{j}\left(\frac{q_{j,j}}{p_{j,j}}-1\right):\Lambda_{j}(z)S_{j}(q_{j,j}^{-1}z):~{}(1\leq
j\leq L).$ (103)
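As a consistency check, the closed forms for $q_{j,j}$, $q_{j+1,j}$, and $p_{j,j}$ above can be tested numerically against the relations (141) and (146) derived in Section 3.2; a sketch on the standard diagram of $A(M,N)$ (the values of $M$, $N$, $r$, $x$ are arbitrary test choices; `edge`, `diag`, `D` are hypothetical helper names):

```python
import math

# Standard diagram of A(M,N): K=1, j_1 = M+1, L = M+N+1.  Sample values only.
M, N, r, x = 2, 1, 2.5, 0.7
L = M + N + 1
J_hat = {M + 1}
# Edge labels A_{j-1,j}(0) from (54) and diagonal entries A_{j,j}(0) from (51).
edge = {j: (1 - r) / r if j <= M + 1 else -1 / r for j in range(1, L + 2)}
diag = {j: (2 * (r - 1) / r if j <= M else 1.0 if j == M + 1 else 2 / r)
        for j in range(1, L + 1)}

def D(k, l):
    # D(k,l;Phi) of (46): each -1/r edge contributes r-1, each (1-r)/r edge 1.
    if k > l:
        return 0.0
    return sum((r - 1) if math.isclose(edge[j], -1 / r) else 1.0
               for j in range(k + 1, l + 2))

# Closed forms of Theorem 3.1.
q = {j: x ** D(1, j - 1) for j in range(1, L + 1)}                   # q_{j,j}
p = {j: x ** D(1, j - 2) * x ** ((r + 1) if math.isclose(edge[j], -1 / r)
                                 else (2 * r - 1))
     for j in range(2, L + 1)}                                       # p_{j,j}

for k in range(1, L):        # the relations (141)
    assert math.isclose(q[k + 1], q[k] * x ** ((1 + edge[k + 1]) * r))
    assert math.isclose(p[k + 1], q[k] * x ** ((1 - edge[k + 1]) * r))
for j in range(2, L + 1):    # the relation (146) for j outside J_hat
    if j not in J_hat:
        assert math.isclose(p[j], q[j] * x ** (diag[j] * r))
```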
###### Proposition 3.2
The $\Lambda_{i}(z)$’s satisfy the commutation relations
$\displaystyle\Lambda_{k}(z_{1})\Lambda_{l}(z_{2})=\frac{\Theta_{x^{2a}}\left(x^{2}\frac{z_{2}}{z_{1}},~{}~{}x^{-2r}\frac{z_{2}}{z_{1}},~{}~{}x^{2r-2}\frac{z_{2}}{z_{1}}\right)}{\Theta_{x^{2a}}\left(x^{-2}\frac{z_{2}}{z_{1}},~{}~{}x^{2r}\frac{z_{2}}{z_{1}},~{}~{}x^{-2r+2}\frac{z_{2}}{z_{1}}\right)}\Lambda_{l}(z_{2})\Lambda_{k}(z_{1})~{}~{}~{}(1\leq
k,l\leq L+1)\,,$ (104)
where $a=D(0,L;\Phi)$. We understand (104) in the sense of analytic
continuation.
###### Proposition 3.3
The $S_{j}(w)$’s satisfy the commutation relations
$\displaystyle
S_{j}(w_{1})S_{j}(w_{2})=S_{j}(w_{2})S_{j}(w_{1})\times\left\\{\begin{array}[]{cc}-1&{\rm
if}~{}~{}~{}j\in\widehat{J},\\\
\displaystyle{-\left(\frac{w_{1}}{w_{2}}\right)^{\frac{2}{r}-1}\frac{\Theta_{x^{2r}}\left(x^{2}\frac{w_{1}}{w_{2}}\right)}{\Theta_{x^{2r}}\left(x^{2}\frac{w_{2}}{w_{1}}\right)}}&{\rm
if}~{}~{}j\notin\widehat{J},~{}j\in\widehat{I}(-\frac{1}{r}),\\\
\displaystyle{-\left(\frac{w_{1}}{w_{2}}\right)^{1-\frac{2}{r}}\frac{\Theta_{x^{2r}}\left(x^{2}\frac{w_{2}}{w_{1}}\right)}{\Theta_{x^{2r}}\left(x^{2}\frac{w_{1}}{w_{2}}\right)}}&{\rm
if}~{}~{}j\notin\widehat{J},~{}j\in\widehat{I}(\frac{1-r}{r})\end{array}\right.~{}~{}~{}(1\leq
j\leq L),$ (108) $\displaystyle
S_{j-1}(w_{1})S_{j}(w_{2})=S_{j}(w_{2})S_{j-1}(w_{1})\times\left\\{\begin{array}[]{cc}\displaystyle{\left(\frac{w_{1}}{w_{2}}\right)^{-\frac{1}{r}}\frac{\Theta_{x^{2r}}\left(x^{r+1}\frac{w_{2}}{w_{1}}\right)}{\Theta_{x^{2r}}\left(x^{r+1}\frac{w_{1}}{w_{2}}\right)}}&{\rm
if}~{}~{}j\in\widehat{I}(-\frac{1}{r}),\\\
\displaystyle{\left(\frac{w_{1}}{w_{2}}\right)^{\frac{1}{r}-1}\frac{\Theta_{x^{2r}}\left(x^{2r-1}\frac{w_{2}}{w_{1}}\right)}{\Theta_{x^{2r}}\left(x^{2r-1}\frac{w_{1}}{w_{2}}\right)}}&{\rm
if}~{}~{}j\in\widehat{I}(\frac{1-r}{r})\end{array}\right.~{}~{}~{}(2\leq j\leq
L),$ (111) $\displaystyle
S_{k}(w_{1})S_{l}(w_{2})=S_{l}(w_{2})S_{k}(w_{1})~{}~{}~{}(|k-l|\geq 2,1\leq
k,l\leq L).$ (112)
We understand (112) in the sense of analytic continuation.
In fact, the stronger relation
$\displaystyle
S_{j}(w_{1})S_{j}(w_{2})=(w_{1}-w_{2}):S_{j}(w_{1})S_{j}(w_{2}):~{}~{}~{}(j\in\widehat{J})$
holds. This means that the screening currents $S_{j}(w)$ $(j\in\widehat{J})$
are ordinary fermions.
### 3.2 Proof of Theorem 3.1
In this section, we show Theorem 3.1 and Proposition 3.3.
###### Lemma 3.4
For $\Lambda_{i}(z)$ and $S_{j}(w)$, we obtain
$\displaystyle\varphi_{\Lambda_{i},S_{j}}(z,w)=e^{\sum_{k=1}^{L}\lambda_{i,k}(0)A_{k,j}(0)}\exp\left(\sum_{k=1}^{L}\sum_{m=1}^{\infty}\frac{1}{m}\lambda_{i,k}(m)A_{k,j}(m)s_{j}(-m)\left(\frac{w}{z}\right)^{m}\right),$
(113)
$\displaystyle\varphi_{S_{j},\Lambda_{i}}(w,z)=\exp\left(\sum_{k=1}^{L}\sum_{m=1}^{\infty}\frac{1}{m}s_{j}(m)A_{j,k}(m)\lambda_{i,k}(-m)\left(\frac{z}{w}\right)^{m}\right)~{}~{}~{}(1\leq
i\leq L+1,1\leq j\leq L),$ (114)
$\displaystyle\varphi_{\widetilde{S}_{k},\widetilde{S}_{l}}(w_{1},w_{2})=\exp\left(\sum_{m=1}^{\infty}\frac{1}{m}s_{k}(m)A_{k,l}(m)s_{l}(-m)\left(\frac{w_{2}}{w_{1}}\right)^{m}\right)~{}~{}~{}(1\leq
k,l\leq L).$ (115)
Assuming (10), we obtain
$\displaystyle\varphi_{\Lambda_{k},\Lambda_{l}}(z_{1},z_{2})=\exp\left(\sum_{i=1}^{L}\sum_{m=1}^{\infty}\frac{1}{m}\frac{\lambda_{k,i}(m)}{s_{i}(m)}(q_{l,i}^{-m}-p_{l,i}^{-m})\left(\frac{z_{2}}{z_{1}}\right)^{m}\right)~{}~{}~{}(1\leq
k,l\leq L+1).$ (116)
###### Lemma 3.5
Mutual locality (10) holds, if and only if (117) and (118) are satisfied.
$\displaystyle\sum_{k=1}^{L}\lambda_{i,k}(0)A_{k,j}(0)=\log\left(\frac{q_{i,j}}{p_{i,j}}\right)~{}~{}~{}(1\leq
i\leq L+1,1\leq j\leq L),$ (117)
$\displaystyle\sum_{k=1}^{L}\lambda_{i,k}(m)A_{k,j}(m)s_{j}(-m)=q_{i,j}^{m}-p_{i,j}^{m}~{}~{}~{}(m\neq
0,1\leq i\leq L+1,1\leq j\leq L).$ (118)
Proof of Lemmas 3.4 and 3.5. Using the standard formula
$\displaystyle e^{A}e^{B}=e^{[A,B]}e^{B}e^{A}~{}~{}~{}([[A,B],A]=0~{}{\rm
and}~{}[[A,B],B]=0),$
we obtain (113), (114), (115), and
$\displaystyle\varphi_{\Lambda_{k},\Lambda_{l}}(z_{1},z_{2})=\exp\left(\sum_{i,j=1}^{L}\sum_{m=1}^{\infty}\frac{1}{m}\lambda_{k,i}(m)A_{i,j}(m)\lambda_{l,j}(-m)\left(\frac{z_{2}}{z_{1}}\right)^{m}\right)~{}~{}~{}(1\leq
k,l\leq L+1).$ (119)
Considering (113), (114), and the expansions
$\displaystyle{\displaystyle\frac{w-p_{i,j}^{-1}z}{w-q_{i,j}^{-1}z}}=\exp\left(\log\left(\frac{q_{i,j}}{p_{i,j}}\right)-\sum_{m=1}^{\infty}\frac{1}{m}(p_{i,j}^{m}-q_{i,j}^{m})\left(\frac{w}{z}\right)^{m}\right)~{}~{}~{}(|z|\gg|w|),$
(120)
$\displaystyle\frac{w-p_{i,j}^{-1}z}{w-q_{i,j}^{-1}z}=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}(p_{i,j}^{-m}-q_{i,j}^{-m})\left(\frac{z}{w}\right)^{m}\right)~{}~{}~{}(|w|\gg|z|),$
(121)
we obtain (117) and (118) from (10). Substituting (118) into (119), we have
(116).
Conversely, if we assume (117) and (118), we obtain (10) from (113), (114),
(120), and (121). $\Box$
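The expansion (120) is elementary but is used repeatedly; a quick numerical sanity check of it (arbitrary sample values):

```python
import math

# Check (120): for |z| >> |w|,
#   (w - z/p)/(w - z/q) = exp( log(q/p) - sum_{m>=1} (p^m - q^m) (w/z)^m / m ).
# p, q, z, w below are arbitrary sample values (not from the paper).
p, q, z, w = 0.7, 0.5, 1.0, 0.03
lhs = (w - z / p) / (w - z / q)
rhs = math.exp(math.log(q / p)
               - sum((p ** m - q ** m) * (w / z) ** m / m for m in range(1, 60)))
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```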
From the linear equations (117) and (118), $\lambda_{i,j}(m)$ are expressed in
terms of the other parameters.
###### Lemma 3.6
We assume (10) and (13). The commutativity (11) holds, if and only if (122),
(123), (124), and (125) are satisfied.
$\displaystyle p_{k,l}=q_{k,l}~{}~{}~{}(k\neq l,l+1,1\leq k\leq L+1,1\leq
l\leq L),$ (122) $\displaystyle
q_{k,k}^{\frac{1}{2}A_{k,k}(0)}:\Lambda_{k}(z)S_{k}\left(q_{k,k}^{-1}z\right):=q_{k+1,k}^{\frac{1}{2}A_{k,k}(0)}:\Lambda_{k+1}(z)S_{k}\left(q_{k+1,k}^{-1}z\right):~{}~{}~{}(1\leq
k\leq L),$ (123)
$\displaystyle\frac{g_{k+1}}{g_{k}}=-\left(\frac{q_{k+1,k}}{q_{k,k}}\right)^{\frac{1}{2}A_{k,k}(0)}\frac{\frac{q_{k,k}}{p_{k,k}}-1}{\frac{~{}q_{k+1,k}}{p_{k+1,k}}-1~{}}~{}~{}~{}(1\leq
k\leq L),$ (124) $\displaystyle
B_{k}(z)=g_{k}\left(\frac{q_{k,k}}{p_{k,k}}-1\right):\Lambda_{k}(z)S_{k}(q_{k,k}^{-1}z):~{}~{}~{}(1\leq
k\leq L).$ (125)
Proof of Lemma 3.6. From (10), we obtain
$\displaystyle[\Lambda_{i}(z),S_{j}(w)]=\left(\frac{q_{i,j}}{p_{i,j}}-1\right)\delta\left(\frac{q_{i,j}w}{z}\right):\Lambda_{i}(z)S_{j}(q_{i,j}^{-1}z):~{}~{}(1\leq
i\leq L+1,1\leq j\leq L).$ (126)
Considering (13) and (126), we know that (11) holds, if and only if (122) and
$\displaystyle
B_{j}(z)=g_{j}\left(\frac{q_{j,j}}{p_{j,j}}-1\right):\Lambda_{j}(z)S_{j}(q_{j,j}^{-1}z):=-g_{j+1}\left(\frac{q_{j+1,j}}{p_{j+1,j}}-1\right):\Lambda_{j+1}(z)S_{j}(q_{j+1,j}^{-1}z):~{}(1\leq
j\leq L)$ (127)
are satisfied. (127) holds, if and only if (123), (124), and (125) are
satisfied. Hence, we obtain this lemma. $\Box$
We use the abbreviation $h_{k,l}(w)$ $(1\leq k,l\leq L)$ as
$\displaystyle
h_{k,l}\left(\frac{w_{2}}{w_{1}}\right)=\varphi_{\widetilde{S}_{k},\widetilde{S}_{l}}(w_{1},w_{2}).$
(128)
###### Lemma 3.7
We assume (10) and (123). Then, $h_{k,l}(w)$ in (128) satisfy the
$q$-difference equations
$\displaystyle\begin{array}[]{c}{\displaystyle\frac{w-p_{k,k}^{-1}}{w-q_{k,k}^{-1}}h_{k,k}\left(q_{k,k}^{-1}w\right)=\frac{w-p_{k+1,k}^{-1}}{w-q_{k+1,k}^{-1}}h_{k,k}\left(q_{k+1,k}^{-1}w\right)},\\\
{\displaystyle\left(\frac{q_{k+1,k}}{q_{k,k}}\right)^{A_{k,k}(0)-1}\frac{p_{k+1,k}}{p_{k,k}}\frac{1-p_{k,k}w}{1-q_{k,k}w}h_{k,k}\left(q_{k,k}w\right)=\frac{1-p_{k+1,k}w}{1-q_{k+1,k}w}h_{k,k}\left(q_{k+1,k}w\right)}\end{array}~{}(1\leq
k\leq L),$ (131)
and
$\displaystyle\begin{array}[]{c}{\displaystyle\frac{h_{k,k+1}(q_{k,k}w)}{h_{k,k+1}(q_{k+1,k}w)}=\frac{q_{k+1,k+1}}{p_{k+1,k+1}}\left(\frac{q_{k,k}}{q_{k+1,k}}\right)^{A_{k,k+1}(0)}\frac{1-p_{k+1,k+1}w}{1-q_{k+1,k+1}w}},\\\
{\displaystyle\frac{h_{k,k+1}\left(q_{k+1,k+1}^{-1}w\right)}{h_{k,k+1}\left(q_{k+2,k+1}^{-1}w\right)}=\frac{1-q_{k+1,k}^{-1}w}{1-p_{k+1,k}^{-1}w}},\\\
{\displaystyle\frac{h_{k+1,k}(q_{k+2,k+1}w)}{h_{k+1,k}(q_{k+1,k+1}w)}=\frac{q_{k+1,k}}{p_{k+1,k}}\left(\frac{q_{k+2,k+1}}{q_{k+1,k+1}}\right)^{A_{k,k+1}(0)}\frac{1-p_{k+1,k}w}{1-q_{k+1,k}w}},\\\
{\displaystyle\frac{h_{k+1,k}\left(q_{k,k}^{-1}w\right)}{h_{k+1,k}\left(q_{k+1,k}^{-1}w\right)}=\frac{1-p_{k+1,k+1}^{-1}w}{1-q_{k+1,k+1}^{-1}w}}\end{array}~{}(1\leq
k\leq L-1).$ (136)
Proof of Lemma 3.7. Multiplying (123) by the screening currents on the left or
right and considering the normal orderings, we obtain (131) and (136) as
necessary conditions. $\Box$
###### Lemma 3.8
The relations (141) and (146) hold, if (10), (12), (14), and (123) are
satisfied.
$\displaystyle\begin{array}[]{c}q_{k+1,k+1}=q_{k,k}x^{(1+A_{k,k+1}(0))r},\\\
q_{k+1,k}=q_{k,k}x^{2r},\\\ p_{k+1,k+1}=q_{k,k}x^{(1-A_{k,k+1}(0))r},\\\
p_{k+1,k}=q_{k,k}x^{2(1+A_{k,k+1}(0))r}\end{array}~{}~{}~{}(1\leq k\leq L-1).$
(141)
$\displaystyle\begin{array}[]{cc}\begin{array}[]{c}p_{k,k}=q_{k,k}x^{A_{k,k}(0)r},\\\
p_{k+1,k}=q_{k,k}x^{(2-A_{k,k}(0))r}\end{array}&{\rm
if}~{}~{}~{}k\notin\widehat{J},\\\ p_{k+1,k}=p_{k,k}&{\rm
if}~{}~{}~{}k\in\widehat{J}\end{array}~{}~{}~{}(1\leq k\leq L).$ (146)
###### Lemma 3.9
The relation (153) holds, if (10), (12), (14), and (123) are satisfied.
$\displaystyle s_{k}(m)s_{k}(-m)=\left\\{\begin{array}[]{cc}-1&{\rm
if}~{}~{}~{}k\in\widehat{J},\\\
\displaystyle{-\frac{[\frac{1}{2}A_{k,k}(0)rm]_{x}[(2-A_{k,k}(0))rm]_{x}}{[\frac{1}{2}(2-A_{k,k}(0))rm]_{x}[rm]_{x}}}&{\rm
if}~{}~{}~{}k\notin\widehat{J}\end{array}\right.~{}(m>0,1\leq k\leq L),$ (149)
$\displaystyle\begin{array}[]{c}s_{k}(m)A_{k,k+1}(m)s_{k+1}(-m)=\displaystyle{-\frac{[A_{k,k+1}(0)rm]_{x}}{[rm]_{x}}},\\\
s_{k+1}(m)A_{k+1,k}(m)s_{k}(-m)=\displaystyle{-\frac{[A_{k+1,k}(0)rm]_{x}}{[rm]_{x}}}\end{array}~{}~{}(m>0,1\leq
k\leq L-1),$ (152) $\displaystyle A_{k,l}(m)=0~{}~{}~{}(m>0,|k-l|\geq 2,1\leq
k,l\leq L).$ (153)
Proof of Lemmas 3.8 and 3.9. From Lemma 3.7, we obtain the $q$-difference
equations (131) and (136). From (115) and (128), the constant term of
$h_{k,l}(w)$ is 1. Comparing the Taylor expansions for both sides of (131) and
(136), we obtain
$\displaystyle\frac{p_{k+1,k}}{p_{k,k}}\left(\frac{q_{k+1,k}}{q_{k,k}}\right)^{A_{k,k}(0)-1}=1~{}~{}~{}(1\leq
k\leq L),$ (154)
$\displaystyle\frac{q_{k+1,k+1}}{p_{k+1,k+1}}\left(\frac{q_{k,k}}{q_{k+1,k}}\right)^{A_{k,k+1}(0)}=1,~{}~{}\frac{q_{k+1,k}}{p_{k+1,k}}\left(\frac{q_{k+2,k+1}}{q_{k+1,k+1}}\right)^{A_{k,k+1}(0)}=1~{}~{}~{}(1\leq
k\leq L-1).$ (155)
First, we study the $q$-difference equations in (136). Upon the specialization
(155), we obtain solutions of (136) as
$\displaystyle
h_{k,k+1}(w)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{\left(\frac{p_{k+1,k+1}}{q_{k,k}}\right)^{m}-\left(\frac{q_{k+1,k+1}}{q_{k,k}}\right)^{m}}{1-\left(\frac{q_{k+1,k}}{q_{k,k}}\right)^{m}}w^{m}\right)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{\left(\frac{q_{k+2,k+1}}{p_{k+1,k}}\right)^{m}-\left(\frac{q_{k+2,k+1}}{q_{k+1,k}}\right)^{m}}{1-\left(\frac{q_{k+2,k+1}}{q_{k+1,k+1}}\right)^{m}}w^{m}\right),$
(156) $\displaystyle
h_{k+1,k}(w)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{\left(\frac{q_{k+1,k}}{q_{k+1,k+1}}\right)^{m}-\left(\frac{p_{k+1,k}}{q_{k+1,k+1}}\right)^{m}}{1-\left(\frac{q_{k+2,k+1}}{q_{k+1,k+1}}\right)^{m}}w^{m}\right)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{\left(\frac{q_{k+1,k}}{q_{k+1,k+1}}\right)^{m}-\left(\frac{q_{k+1,k}}{p_{k+1,k+1}}\right)^{m}}{1-\left(\frac{q_{k+1,k}}{q_{k,k}}\right)^{m}}w^{m}\right).$
(157)
Here we used $|q_{k+1,k}/q_{k,k}|\neq 1$ $(1\leq k\leq L)$ assumed in (14).
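The solutions (156) and (157) follow the standard power-series ansatz for such $q$-difference equations: writing $h(w)=\exp(-\sum_{m\geq 1}c_{m}w^{m}/m)$ and matching coefficients in $h(aw)/h(bw)=(1-pw)/(1-qw)$ gives $c_{m}=(p^{m}-q^{m})/(a^{m}-b^{m})$. A minimal numerical sketch of this method (arbitrary parameter values, hypothetical name `h`):

```python
import math

# The q-difference equation  h(a*w)/h(b*w) = (1 - p*w)/(1 - q*w), h(0) = 1,
# has the unique power-series solution
#   h(w) = exp( - sum_{m>=1} (1/m) * (p^m - q^m)/(a^m - b^m) * w^m ),
# provided |b/a| != 1.  All numerical values below are arbitrary samples.
a, b, p, q = 1.0, 0.6, 0.8, 0.3

def h(w, terms=200):
    return math.exp(-sum((p ** m - q ** m) / (a ** m - b ** m) * w ** m / m
                         for m in range(1, terms)))

w = 0.2
assert math.isclose(h(a * w) / h(b * w), (1 - p * w) / (1 - q * w), rel_tol=1e-10)
```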
From the compatibility of the two formulae for $h_{k,k+1}(w)$ in (156) [or
$h_{k+1,k}(w)$ in (157)], there are two possible choices for $q_{k,k}$,
$q_{k+1,k+1}$, $q_{k+1,k}$, and $q_{k+2,k+1}$.
$\displaystyle\mathrm{(i)}~{}~{}\frac{q_{k+1,k}}{q_{k,k}}=\frac{q_{k+2,k+1}}{q_{k+1,k+1}}~{}~{}~{}{\rm
or}~{}~{}~{}\mathrm{(ii)}~{}~{}\frac{q_{k+1,k}}{q_{k,k}}=\frac{q_{k+1,k+1}}{q_{k+2,k+1}}.$
(158)
First, we consider the case of $\mathrm{(ii)}$
$q_{k+1,k}/q_{k,k}=q_{k+1,k+1}/q_{k+2,k+1}$ in (158). From the compatibility
of the two formulae for $h_{k,k+1}(w)$ in (156) [and $h_{k+1,k}(w)$ in (157)],
we obtain
$\displaystyle\left(\frac{p_{k+1,k+1}}{q_{k,k}}\right)^{m}+\left(\frac{q_{k+1,k+1}}{p_{k+1,k}}\right)^{m}=\left(\frac{q_{k+1,k+1}}{q_{k+1,k}}\right)^{m}+\left(\frac{q_{k+1,k+1}}{q_{k,k}}\right)^{m}~{}~{}(m\neq
0).$ (159)
From (159) for $m=1,2$, we obtain
$p_{k+1,k+1}/p_{k+1,k}=q_{k+1,k+1}/q_{k+1,k}$. Combining (159) for $m=1$ and
$p_{k+1,k+1}/p_{k+1,k}=q_{k+1,k+1}/q_{k+1,k}$, we obtain $q_{k,k}=p_{k+1,k}$
or $q_{k+1,k+1}=p_{k+1,k+1}$. For the case of $q_{k,k}=p_{k+1,k}$, we obtain
$A_{k,k+1}(0)=1$ from (155). For the case of $q_{k+1,k+1}=p_{k+1,k+1}$, we
obtain $A_{k,k+1}(0)=0$ from (155). $A_{k,k+1}(0)=0$ and $A_{k,k+1}(0)=1$
contradict $-1<A_{k,k+1}(0)<0$ assumed in (14). Hence, the case of
$\mathrm{(ii)}~{}q_{k+1,k}/q_{k,k}=q_{k+1,k+1}/q_{k+2,k+1}$ is impossible.
Next, we consider the case of $\mathrm{(i)}$
$q_{k+1,k}/q_{k,k}=q_{k+2,k+1}/q_{k+1,k+1}$ in (158). Since case
$\mathrm{(ii)}$ is excluded, the parametrization (43) allows us to set
$\displaystyle\frac{q_{2,1}}{q_{1,1}}=\frac{q_{3,2}}{q_{2,2}}=\cdots=\frac{q_{L+1,L}}{q_{L,L}}=x^{2r}.$
(160)
From the compatibility of the two formulae for $h_{k,k+1}(w)$ in (156) [and
$h_{k+1,k}(w)$ in (157)], we obtain
$\displaystyle
h_{k,k+1}(w)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{[A_{k,k+1}(0)rm]_{x}}{[rm]_{x}}x^{-(A_{k,k+1}(0)+1)rm}\left(\frac{q_{k+1,k+1}}{q_{k,k}}\right)^{m}w^{m}\right),$
(161) $\displaystyle
h_{k+1,k}(w)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{[A_{k,k+1}(0)rm]_{x}}{[rm]_{x}}x^{(A_{k,k+1}(0)+1)rm}\left(\frac{q_{k,k}}{q_{k+1,k+1}}\right)^{m}w^{m}\right).$
(162)
We used (155) and (160). From $h_{k,k+1}(w)=h_{k+1,k}(w)$ assumed in (12), we
obtain
$\displaystyle\frac{q_{k+1,k+1}}{q_{k,k}}=x^{(A_{k,k+1}(0)+1)r}.$ (163)
Considering (155), (160), and (163), we obtain (141). From (161), (162), and
(163), we obtain
$\displaystyle
h_{k,k+1}(w)=h_{k+1,k}(w)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{[A_{k,k+1}(0)rm]_{x}}{[rm]_{x}}w^{m}\right).$
(164)
Considering (115), (128), and (164), we obtain the second half of (153).
Next, we study the $q$-difference equations in (131). Upon the specialization
(154), the compatibility condition of the equations in (131) is
$\displaystyle(p_{k,k}-p_{k+1,k})(p_{k,k}p_{k+1,k}-q_{k,k}q_{k+1,k})=0~{}~{}~{}(1\leq
k\leq L).$ (165)
First, we study the case of $A_{k,k}(0)=1$. We obtain $p_{k,k}=p_{k+1,k}$ in
the second half of (146) from (154). Solving (131) upon $p_{k,k}=p_{k+1,k}$,
we obtain $h_{k,k}(w)=1-w$. Considering (115) and (128), we obtain
$s_{k}(m)s_{k}(-m)=-1$ $(m>0)$ in the first half of (153).
Next, we study the case of $A_{k,k}(0)\neq 1$. We obtain $p_{k,k}\neq
p_{k+1,k}$ from (14) and (154). Then, we obtain
$p_{k,k}p_{k+1,k}=q_{k,k}q_{k+1,k}$ from (165). Combining
$p_{k,k}p_{k+1,k}=q_{k,k}q_{k+1,k}$ and (154), we obtain (146). Solving (131),
we obtain
$\displaystyle
h_{k,k}(w)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}{\displaystyle\frac{\left[\frac{1}{2}A_{k,k}(0)rm\right]_{x}\left[(2-A_{k,k}(0))rm\right]_{x}}{\left[\frac{1}{2}(2-A_{k,k}(0))rm\right]_{x}[rm]_{x}}}w^{m}\right).$
We used $|q_{k+1,k}/q_{k,k}|\neq 1$ $(1\leq k\leq L)$ in (14) and
$q_{k+1,k}/q_{k,k}=x^{2r}$ $(1\leq k\leq L)$ in (160). Considering (115) and
(128), we obtain the first half of (153). $\Box$
###### Lemma 3.10
The relation (37) holds for $(A_{i,j}(0))_{i,j=0}^{L}$, if (10), (11), (12),
(13), and (14) are satisfied.
Proof of Lemma 3.10. We obtain $A_{k,l}(0)=0$ $(|k-l|\geq 2,1\leq k,l\leq L)$
from (12). From Lemma 3.6, we have (123). From Lemma 3.8, we have (141) and
(146). From the compatibility of (141) and (146), we obtain the following
relations for $(A_{i,j}(0))_{i,j=1}^{L}$.
$\displaystyle A_{k+1,k}(0)=\left\\{\begin{array}[]{cc}A_{k-1,k}(0)&{\rm
if}~{}~{}k\notin\widehat{J},\\\ -1-A_{k-1,k}(0)&{\rm
if}~{}~{}k\in\widehat{J}\end{array}\right.~{}~{}~{}(2\leq k\leq L-1),$ (168)
$\displaystyle A_{k,k}(0)=\left\\{\begin{array}[]{cc}-2A_{k-1,k}(0)&{\rm
if}~{}~{}k\notin\widehat{J},\\\ 1&{\rm
if}~{}~{}k\in\widehat{J}\end{array}\right.~{}(2\leq k\leq
L),~{}~{}~{}A_{1,1}(0)=\left\\{\begin{array}[]{cc}-2A_{1,2}(0)&{\rm
if}~{}~{}1\notin\widehat{J},\\\ 1&{\rm
if}~{}~{}1\in\widehat{J},\end{array}\right.$ (173) $\displaystyle
A_{k,l}(0)=0~{}~{}~{}(|k-l|\geq 2,1\leq k,l\leq L).$ (174)
Solving these equations, we obtain (37) for $1\leq i\leq L$, $2\leq j\leq L$,
and $1\leq k,l\leq L$. The extension to $(A_{i,j}(0))_{i,j=0}^{L}$ is a direct
consequence of the definition (29). $\Box$
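The recursions (168) and (173) determine all remaining entries once $A_{1,2}(0)$ and the set $\widehat{J}$ are fixed, using the $m=0$ symmetry $A_{k,k+1}(0)=A_{k+1,k}(0)$. A minimal sketch in Python, with a hypothetical starting value $\alpha$ and the illustrative case $\widehat{J}=\emptyset$, in which every subdiagonal entry stays $\alpha$ and every diagonal entry becomes $-2\alpha$:

```python
from fractions import Fraction

def solve_offdiag(L, alpha, J):
    """Iterate (168) and (173): sub[k] = A_{k+1,k}(0), diag[k] = A_{k,k}(0),
    starting from A_{2,1}(0) = A_{1,2}(0) = alpha (the m = 0 symmetry)."""
    sub = {1: alpha}
    for k in range(2, L):                     # (168): 2 <= k <= L-1
        sub[k] = sub[k - 1] if k not in J else -1 - sub[k - 1]
    diag = {1: -2 * alpha if 1 not in J else Fraction(1)}
    for k in range(2, L + 1):                 # (173): 2 <= k <= L
        diag[k] = -2 * sub[k - 1] if k not in J else Fraction(1)
    return sub, diag

alpha = Fraction(1, 3)
sub, diag = solve_offdiag(6, alpha, J=set())  # J empty: constant solution
```

With $\widehat{J}=\{2\}$, for instance, the entry $A_{3,2}(0)$ flips to $-1-\alpha$ and $A_{2,2}(0)=1$, matching the second branches of (168) and (173).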
###### Proposition 3.11
The relations (10), (11), and (12) hold, if the parameters $p_{i,j},q_{i,j}$,
$A_{i,j}(m)$, $s_{i}(m)$, $g_{i}$, and $\lambda_{i,j}(m)$ are determined by
(83), (117), (118), (122), (124), (141), (146), and (153).
Proof of Proposition 3.11. First, we will show the formulae (69), (75), (92),
(95), (99), and (102) in Theorem 3.1. Let $q_{1,1}=s$. From (83), (122),
(141), and (146), we have $q_{j,j}=sx^{D(1,j-1;\Phi)}$,
$q_{j+1,j}=sx^{2r+D(1,j-1;\Phi)}~{}(1\leq j\leq L)$,
$p_{1,1}=s\times\left\\{\begin{array}[]{cc}x^{2}&{\rm
if}~{}~{}1\in\widehat{I}\left(-\frac{1}{r}\right),\\\ x^{2r-2}&{\rm
if}~{}~{}1\in\widehat{I}\left(\frac{1-r}{r}\right)\end{array}\right.$,
$p_{j,j}=sx^{D(1,j-2;\Phi)}\times\left\\{\begin{array}[]{cc}x^{r+1}&{\rm
if}~{}~{}j\in\widehat{I}(-\frac{1}{r}),\\\ x^{2r-1}&{\rm
if}~{}~{}j\in\widehat{I}(\frac{1-r}{r})\end{array}\right.$ $(2\leq j\leq L)$,
$p_{j,j-1}=sx^{D(1,j-2;\Phi)}\times\left\\{\begin{array}[]{cc}x^{2r-2}&{\rm
if}~{}~{}j\in\widehat{I}(-\frac{1}{r}),\\\ x^{2}&{\rm
if}~{}~{}j\in\widehat{I}(\frac{1-r}{r})\end{array}\right.$ $(2\leq j\leq
L+1)$, $p_{k,l}=q_{k,l}$ $(k\neq l,l+1,1\leq k\leq L+1,1\leq l\leq L)$. Upon
the specialization $s=1$, we have (69). From (69), (83), and (124), we have
(102). From (153), we have
$\displaystyle
s_{k}(-m)=-\frac{1}{\alpha_{k}(m)}\times\left\\{\begin{array}[]{cc}1&{\rm
if}~{}~{}k\in\widehat{J},\\\
\displaystyle{\frac{[\frac{1}{2}A_{k,k}(0)rm]_{x}[(2-A_{k,k}(0))rm]_{x}}{[\frac{1}{2}(2-A_{k,k}(0))rm]_{x}[rm]_{x}}}&{\rm
if}~{}~{}k\notin\widehat{J}\end{array}~{}~{}(m>0),\right.$ (177)
$\displaystyle A_{k\pm 1,k}(m)=\frac{\alpha_{k}(m)}{\alpha_{k\pm
1}(m)}\frac{[A_{k\pm
1,k}(0)rm]_{x}}{[rm]_{x}}\times\left\\{\begin{array}[]{cc}1&{\rm
if}~{}~{}k\in\widehat{J},\\\
\displaystyle{\frac{[rm]_{x}[\frac{1}{2}(2-A_{k,k}(0))rm]_{x}}{[\frac{1}{2}A_{k,k}(0)rm]_{x}[(2-A_{k,k}(0))rm]_{x}}}&{\rm
if}~{}~{}k\notin\widehat{J}\end{array}~{}~{}(m>0),\right.$ (180)
$\displaystyle A_{k,k\pm 1}(-m)=A_{k\pm 1,k}(m)~{}~{}(m>0).$
The signs in these formulae are taken in the same order. Here we set
$s_{k}(m)=\alpha_{k}(m)$ $(m>0,1\leq k\leq L)$. Setting $\alpha_{k}(m)=1$
$(m>0,1\leq k\leq L)$ provides (75) and (92). Solving (117) and (118), we
obtain $\lambda_{i,j}(m)$ in (95) and (99). Solving (117) and (118) for
arbitrary $\alpha_{k}(m)$, we obtain $\lambda_{i,j}(m)\alpha_{j}(m)$. Now we
obtained the formulae (69), (75), (92), (95), (99), and (102). As a by-product
of this calculation, we have proved that there is no indeterminacy in the free
field realization except for (15), (16), and (21), which is part of Theorem 3.1.
Next, we will derive (10), (11), and (12). From (75) and (92), we obtain the
symmetry (12) by direct calculation. Because $\lambda_{i,j}(m)$ are determined
by (117) and (118), the mutual locality (10) holds from Lemma 3.5. From (75)
and (99), we have (123) by direct calculation. From (10), (122), (123), and
(124), we obtain
$[T_{1}(z),S_{j}(w)]=\left(\frac{q_{j,j}}{p_{j,j}}-1\right):\Lambda_{j}(z)S_{j}(q_{j,j}^{-1}z):\left(\delta\left(\frac{q_{j,j}w}{z}\right)-\delta\left(\frac{q_{j+1,j}w}{z}\right)\right)$
$(1\leq j\leq L)$. Hence, we have the commutativity (11) upon the condition
(103). We derived (10), (11), and (12). $\Box$
Proof of Theorem 3.1. We assume the relations (10), (11), (12), (13), and
(14). From Lemmas 3.5, 3.6, 3.8, 3.9, and 3.10, we obtain the relations (83),
(117), (118), (122), (124), (141), (146), and (153). In the proof of Proposition
3.11, we have obtained $p_{i,j}$, $q_{i,j}$, $s_{j}(m)$, $A_{i,j}(m)$,
$g_{i}$, $\lambda_{i,j}(m)$, and $B_{j}(z)$ in (69), (75), (92), (95), (99),
(102), and (103) from the relations (83), (117), (118), (122), (124), (141),
(146), and (153). Moreover, in the proof of Proposition 3.11, we have proved that
there is no indeterminacy in the free field realization except for (15), (16),
and (21).
Conversely, in the proof of Proposition 3.11, we have proved that the relations
(10), (11), and (12) hold, if the relations (69), (75), (83), (92), (95),
(99), (102), and (103) are satisfied. $\Box$
Proof of Proposition 3.3. Using $h_{k,l}(w)$ in (128), we obtain
$\displaystyle
S_{k}(w_{1})S_{l}(w_{2})=\left(\frac{w_{1}}{w_{2}}\right)^{A_{k,l}(0)}\frac{h_{k,l}\left(\frac{w_{2}}{w_{1}}\right)}{h_{l,k}\left(\frac{w_{1}}{w_{2}}\right)}S_{l}(w_{2})S_{k}(w_{1})~{}~{}~{}(1\leq
k,l\leq L).$
Using (75), (83) and (92), we obtain (112). $\Box$
By direct calculation, we have the following lemma.
###### Lemma 3.12
The determinants of $(A_{i,j}(m))_{i,j=1}^{L}$ in (83) and (92) are given by
$\displaystyle\det\left(\left(A_{i,j}(m)\right)_{i,j=1}^{L}\right)=(-1)^{L}\frac{\displaystyle[D(0,L;\Phi)m]_{x}[(r-1)m]_{x}^{|\widehat{I}(\frac{1-r}{r})|-1}[m]_{x}^{|\widehat{I}(-\frac{1}{r})|-1}}{\displaystyle[rm]_{x}^{L}\prod_{j=1}^{L}s_{j}(m)s_{j}(-m)}~{}~{}~{}(m\neq
0),$
$\displaystyle\det\left(\left(A_{i,j}(0)\right)_{i,j=1}^{L}\right)=r^{-L}(r-1)^{|\widehat{I}(\frac{1-r}{r})|-1}D(0,L;\Phi).$
Hence, the condition $\det\left(\left(A_{i,j}(m)\right)_{i,j=1}^{L}\right)\neq
0$ $(m\in\mathbf{Z})$ is satisfied.
## 4 Quadratic relation
In this section, we introduce the higher $W$-currents $T_{i}(z)$ and obtain a
set of quadratic relations of $T_{i}(z)$ for the deformed $W$-superalgebra
${\cal W}_{q,t}\bigl{(}A(M,N)\bigr{)}$. We show that these relations are
independent of the choice of Dynkin-diagrams.
### 4.1 Quadratic relation
We define the functions $\Delta_{i}(z)$ $(i=0,1,2,\ldots)$ as
$\displaystyle\Delta_{i}(z)=\frac{(1-x^{2r-i}z)(1-x^{-2r+i}z)}{(1-x^{i}z)(1-x^{-i}z)}.$
We have
$\displaystyle\Delta_{i}(z)-\Delta_{i}({z}^{-1})=\frac{[r]_{x}[r-i]_{x}}{[i]_{x}}(x-x^{-1})(\delta(x^{-i}z)-\delta(x^{i}z))~{}~{}(i=1,2,3,\ldots).$
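The identity above can be verified from the partial-fraction decomposition $\Delta_{i}(z)=C_{0}+\frac{A}{1-x^{i}z}+\frac{B}{1-x^{-i}z}$: the difference of the two expansions is supported on the delta functions, with $B=-A=\frac{[r]_{x}[r-i]_{x}}{[i]_{x}}(x-x^{-1})$. A numerical sketch in Python, assuming the convention $[n]_{x}=(x^{n}-x^{-n})/(x-x^{-1})$:

```python
import math

def qnum(n, x):
    # [n]_x = (x^n - x^{-n}) / (x - x^{-1}), the convention assumed here
    return (x**n - x**(-n)) / (x - 1 / x)

x, r = 0.8, 1.7  # arbitrary test values
for i in (1, 2, 3):
    # partial-fraction coefficients of Delta_i(z) at 1/(1 - x^i z), 1/(1 - x^{-i} z)
    A = (1 - x**(2*r - 2*i)) * (1 - x**(-2*r)) / (1 - x**(-2*i))
    B = (1 - x**(2*r)) * (1 - x**(-2*r + 2*i)) / (1 - x**(2*i))
    C = qnum(r, x) * qnum(r - i, x) / qnum(i, x) * (x - 1 / x)
    assert math.isclose(B, C, rel_tol=1e-12) and math.isclose(A, -C, rel_tol=1e-12)
```

The coefficient $C$ computed here is exactly the prefactor $\frac{[r]_{x}[r-i]_{x}}{[i]_{x}}(x-x^{-1})$ multiplying the delta functions.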
We define the structure functions $f_{i,j}(z;a)$ $(i,j=0,1,2,\ldots)$ as
$\displaystyle
f_{i,j}(z;a)=\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{[(r-1)m]_{x}[rm]_{x}[{\rm
Min}(i,j)m]_{x}[(a-{\rm
Max}(i,j))m]_{x}}{[m]_{x}[am]_{x}}(x-x^{-1})^{2}z^{m}\right).$ (181)
In the case of $a=D(0,L;\Phi)$, the ratio of the structure function
$\displaystyle\frac{f_{1,1}(z^{-1};a)}{f_{1,1}(z;a)}=\frac{\Theta_{x^{2a}}(x^{2}z,x^{-2r}z,x^{2r-2}z)}{\Theta_{x^{2a}}(x^{-2}z,x^{2r}z,x^{-2r+2}z)}$
coincides with that of (104).
We introduce the higher $W$-currents $T_{i}(z)$ and give the quadratic
relations. From now on, we set $g=1$ in (102), but this is not an essential
limitation. Hereafter, we use the abbreviations
$\displaystyle
c(r,x)=[r]_{x}[r-1]_{x}(x-x^{-1}),~{}~{}~{}d_{j}(r,x)=\left\\{\begin{array}[]{cc}\displaystyle{\prod_{l=1}^{j}\frac{[r-l]_{x}}{[l]_{x}}}&(j\geq
1),\\\ 1&(j=0).\end{array}\right.$ (184)
We introduce the $W$-currents $T_{i}(z)$ $(i=0,1,2,\ldots)$ as
$\displaystyle T_{0}(z)$
$\displaystyle=1,~{}~{}~{}T_{1}(z)=\sum_{k\in\widehat{I}(\frac{1-r}{r})}\Lambda_{k}(z)+d_{1}(r,x)\sum_{k\in\widehat{I}(-\frac{1}{r})}\Lambda_{k}(z),$
$\displaystyle T_{i}(z)$
$\displaystyle=\sum_{(m_{1},m_{2},\ldots,m_{L+1})\in\hat{N}(\Phi)\atop{m_{1}+m_{2}+\cdots+m_{L+1}=i}}\prod_{k\in\hat{I}(-\frac{1}{r})}d_{m_{k}}(r,x)~{}\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z)~{}~{}(i=2,3,4,\ldots).$
(185)
Here we set
$\displaystyle\Lambda_{m_{1},m_{2},\cdots,m_{L+1}}^{(i)}(z)$
$\displaystyle=:\prod_{k\in\widehat{I}(\frac{1-r}{r})}\Lambda_{k}^{(m_{k})}(x^{-i+1+2(m_{1}+\cdots+m_{k-1})}z)$
$\displaystyle\times\prod_{k\in\widehat{I}(-\frac{1}{r})}\Lambda_{k}^{(m_{k})}(x^{-i+1+2(m_{1}+\cdots+m_{k-1})}z):~{}~{}{\rm
for}~{}~{}(m_{1},m_{2},\ldots,m_{L+1})\in\hat{N}(\Phi),$
where
$\displaystyle\Lambda_{k}^{(0)}(z)=1,~{}~{}~{}\Lambda_{k}^{(m)}(z)=:\Lambda_{k}(z)\Lambda_{k}(x^{2}z)\cdots\Lambda_{k}(x^{2m-2}z):,$
$\displaystyle\hat{N}(\Phi)=\left\\{(m_{1},m_{2},\ldots,m_{L+1})\in{\mathbf{N}}^{L+1}\left|0\leq
m_{k}\leq 1~{}~{}{\rm
if}~{}~{}k\in\hat{I}\left(\frac{1-r}{r}\right),~{}m_{k}\geq 0~{}~{}{\rm
if}~{}~{}k\in\hat{I}\left(-\frac{1}{r}\right)\right.\right\\}.$ (186)
We have $T_{i}(z)\neq 1$ $(i=1,2,3,\ldots)$ and $T_{i}(z)\neq T_{j}(z)$
$(i\neq j)$.
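The index set $\hat{N}(\Phi)$ in (186) is easy to enumerate for small examples. A sketch in Python for the hypothetical choice $L=2$ with $\hat{I}(\frac{1-r}{r})=\{1,3\}$ and $\hat{I}(-\frac{1}{r})=\{2\}$ (so $m_{1},m_{3}\in\{0,1\}$ and $m_{2}\geq 0$), listing the tuples that contribute to $T_{i}(z)$ in (185):

```python
from itertools import product

def N_hat_terms(i, fermionic, bosonic):
    """Tuples (m_1, ..., m_{L+1}) in N_hat(Phi) with m_1 + ... + m_{L+1} = i:
    indices in `fermionic` are restricted to m_k in {0, 1}, indices in
    `bosonic` allow any m_k >= 0 (bounded by i since the entries sum to i)."""
    slots = sorted(fermionic | bosonic)
    ranges = [range(2) if k in fermionic else range(i + 1) for k in slots]
    return [m for m in product(*ranges) if sum(m) == i]

terms1 = N_hat_terms(1, {1, 3}, {2})  # (1,0,0), (0,1,0), (0,0,1)
terms2 = N_hat_terms(2, {1, 3}, {2})  # (1,1,0), (1,0,1), (0,2,0), (0,1,1)
```

So $T_{1}(z)$ has three summands and $T_{2}(z)$ has four in this example; the tuple $(0,2,0)$ is allowed only because the index $2$ lies in $\hat{I}(-\frac{1}{r})$.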
The following is the main theorem of this paper.
###### Theorem 4.1
For the deformed $W$-superalgebra ${\cal W}_{q,t}\bigl{(}A(M,N)\bigr{)}$, the
$W$-currents $T_{i}(z)$ satisfy the set of quadratic relations
$\displaystyle
f_{i,j}\left(\frac{z_{2}}{z_{1}};a\right)T_{i}(z_{1})T_{j}(z_{2})-f_{j,i}\left(\frac{z_{1}}{z_{2}};a\right)T_{j}(z_{2})T_{i}(z_{1})$
$\displaystyle=$
$\displaystyle~{}c(r,x)\sum_{k=1}^{i}\prod_{l=1}^{k-1}\Delta_{1}(x^{2l+1})\left(\delta\left(\frac{x^{-j+i-2k}z_{2}}{z_{1}}\right)f_{i-k,j+k}(x^{j-i};a)T_{i-k}(x^{k}z_{1})T_{j+k}(x^{-k}z_{2})\right.$
$\displaystyle-$
$\displaystyle\left.\delta\left(\frac{x^{j-i+2k}z_{2}}{z_{1}}\right)f_{i-k,j+k}(x^{-j+i};a)T_{i-k}(x^{-k}z_{1})T_{j+k}(x^{k}z_{2})\right)~{}~{}~{}(j\geq
i\geq 1).$ (187)
Here we use $f_{i,j}(z;a)$ in (181) with the specialization $a=D(0,L;\Phi)$.
The quadratic relations (187) are independent of the choice of Dynkin-diagrams
for the Lie superalgebra $A(M,N)$.
In view of Theorem 4.1, we arrive at the following definition.
###### Definition 4.2
Set $T_{i}(z)=\sum_{m\in{\mathbf{Z}}}T_{i}[m]z^{-m}$ $(i=1,2,3,\ldots)$. The
deformed $W$-superalgebra ${\cal W}_{q,t}\bigl{(}A(M,N)\bigr{)}$ is an
associative algebra over ${\mathbf{C}}$ with the generators $T_{i}[m]$
$(m\in{\mathbf{Z}},i=1,2,3,\ldots)$ and the defining relations (187).
### 4.2 Proof of Theorem 4.1
###### Proposition 4.3
The $\Lambda_{i}(z)$’s satisfy
$\displaystyle~{}f_{1,1}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{k}(z_{1})\Lambda_{l}(z_{2})=\Delta_{1}\left(\frac{x^{-1}z_{2}}{z_{1}}\right):\Lambda_{k}(z_{1})\Lambda_{l}(z_{2}):,$
$\displaystyle
f_{1,1}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{l}(z_{1})\Lambda_{k}(z_{2})=\Delta_{1}\left(\frac{xz_{2}}{z_{1}}\right):\Lambda_{l}(z_{1})\Lambda_{k}(z_{2}):~{}~{}~{}(1\leq
k<l\leq L+1),$ $\displaystyle
f_{1,1}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{i}(z_{1})\Lambda_{i}(z_{2})=:\Lambda_{i}(z_{1})\Lambda_{i}(z_{2}):~{}~{}~{}{\rm
if}~{}~{}i\in\hat{I}\left(\frac{1-r}{r}\right),$ $\displaystyle
f_{1,1}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{i}(z_{1})\Lambda_{i}(z_{2})=\Delta_{2}\left(\frac{z_{2}}{z_{1}}\right):\Lambda_{i}(z_{1})\Lambda_{i}(z_{2}):~{}~{}~{}{\rm
if}~{}~{}i\in\hat{I}\left(-\frac{1}{r}\right)$ (188)
where we set $a=D(0,L;\Phi)$.
Proof of Proposition 4.3. Substituting (69) and (99) into
$\varphi_{\Lambda_{k},\Lambda_{l}}(z_{1},z_{2})$ in (116), we obtain (188).
$\Box$
Proof of Proposition 3.2. Using
$\varphi_{\Lambda_{k},\Lambda_{l}}(z_{1},z_{2})$ in (116), we obtain
$\displaystyle\Lambda_{k}(z_{1})\Lambda_{l}(z_{2})=\frac{\varphi_{\Lambda_{k},\Lambda_{l}}\left(z_{1},z_{2}\right)}{\varphi_{\Lambda_{l},\Lambda_{k}}\left(z_{2},z_{1}\right)}\Lambda_{l}(z_{2})\Lambda_{k}(z_{1})~{}~{}~{}(1\leq
k,l\leq L+1).$
Using the explicit formulae of
$\varphi_{\Lambda_{k},\Lambda_{l}}(z_{1},z_{2})$, we obtain (104). $\Box$
###### Lemma 4.4
The $D(0,L;\Phi)$ given in (47) is independent of the choice of the Dynkin-
diagrams for the Lie superalgebra $A(M,N)$.
$\displaystyle
D(0,L;\Phi)=D(0,L;r_{\alpha_{i}}(\Phi))~{}~{}~{}(\alpha_{i}\in\Pi).$ (189)
Here $\Pi$ is a fundamental system.
Proof of Lemma 4.4. We show (189) by checking all cases. We set the
Dynkin-diagrams $\Phi_{j}$ $(1\leq j\leq 8)$ as follows. Let $K=K(\Phi_{j})$ be
the number of odd isotropic roots, i.e. roots with $(\alpha_{i},\alpha_{i})=0$,
in the Dynkin-diagram $\Phi_{j}$. We set
[Dynkin-diagrams $\Phi_{1}$ and $\Phi_{2}$, differing at the nodes $\alpha_{1},\alpha_{2}$, and $\Phi_{3}$ and $\Phi_{4}$, differing at the nodes $\alpha_{L-1},\alpha_{L}$, are displayed here.]
For $2\leq i\leq L-1$, we set
[Dynkin-diagrams $\Phi_{5}$, $\Phi_{6}$, $\Phi_{7}$, and $\Phi_{8}$, differing at the nodes $\alpha_{i-1},\alpha_{i},\alpha_{i+1}$, are displayed here.]
We have $r_{\alpha_{1}}(\Phi_{1})=\Phi_{2}$,
$r_{\alpha_{L}}(\Phi_{3})=\Phi_{4}$, $r_{\alpha_{i}}(\Phi_{5})=\Phi_{6}$, and
$r_{\alpha_{i}}(\Phi_{7})=\Phi_{8}$.
The affinized Dynkin-diagrams $\widehat{\Phi}_{j}$ obtained from $\Phi_{j}$
$(1\leq j\leq 8)$ are given as
$\widehat{\Phi}_{j}=\left\\{\begin{array}[]{cc}\widehat{\Phi}_{j,1}&{\rm
if}~{}K(\Phi_{j})={\rm even},\\\ \widehat{\Phi}_{j,2}&{\rm
if}~{}K(\Phi_{j})={\rm odd}.\end{array}\right.$
[The affinized Dynkin-diagrams $\widehat{\Phi}_{j,1}$ and $\widehat{\Phi}_{j,2}$ $(1\leq j\leq 8)$, with $\alpha_{0}=\alpha_{L+1}$ and the values $\delta$ and $-1-\delta$ of $A_{j-1,j}(0)$ attached to the edges, are displayed here.]
The values of $A_{j-1,j}(0)$ are written beside the line segment connecting
$\alpha_{j-1}$ and $\alpha_{j}$. We have
$|\widehat{I}(1,L+1;\delta,\widehat{\Phi}_{2j-1,1})|=|\widehat{I}(1,L+1;\delta,\widehat{\Phi}_{2j,2})|,~{}~{}~{}|\widehat{I}(1,L+1;\delta,\widehat{\Phi}_{2j-1,2})|=|\widehat{I}(1,L+1;\delta,\widehat{\Phi}_{2j,1})|~{}~{}(1\leq
j\leq 2),$
$|\widehat{I}(1,L+1;\delta,\widehat{\Phi}_{2j-1,1})|=|\widehat{I}(1,L+1;\delta,\widehat{\Phi}_{2j,1})|,~{}~{}~{}|\widehat{I}(1,L+1;\delta,\widehat{\Phi}_{2j-1,2})|=|\widehat{I}(1,L+1;\delta,\widehat{\Phi}_{2j,2})|~{}~{}(3\leq
j\leq 4),$
where $\delta=\frac{1-r}{r}$ or $-\frac{1}{r}$. Hence we have
$D(0,L;\Phi_{2j-1})=D(0,L;\Phi_{2j})~{}~{}~{}(1\leq j\leq 4).$
In other words, we have
$\displaystyle D(0,L;\Phi_{j})=D(0,L;r_{\alpha_{1}}(\Phi_{j}))~{}~{}(1\leq
j\leq 2),$ $\displaystyle
D(0,L;\Phi_{j})=D(0,L;r_{\alpha_{L}}(\Phi_{j}))~{}~{}(3\leq j\leq 4),$
$\displaystyle D(0,L;\Phi_{j})=D(0,L;r_{\alpha_{i}}(\Phi_{j}))~{}~{}(5\leq
j\leq 8,2\leq i\leq L-1).$
Now we have proved (189). $\Box$
###### Lemma 4.5
$\Delta_{i}(z)$ and $f_{i,j}(z;a)$ satisfy the following fusion relations.
$\displaystyle
f_{i,j}(z;a)=f_{j,i}(z;a)=\prod_{k=1}^{i}f_{1,j}(x^{-i-1+2k}z;a)~{}~{}~{}(1\leq
i\leq j),$ (190) $\displaystyle
f_{1,i}(z;a)=\left(\prod_{k=1}^{i-1}\Delta_{1}(x^{-i+2k}z)\right)^{-1}\prod_{k=1}^{i}f_{1,1}(x^{-i-1+2k}z;a)~{}~{}~{}(i\geq
2),$ (191) $\displaystyle
f_{1,i}(z;a)f_{j,i}(x^{\pm(j+1)}z;a)=\left\\{\begin{array}[]{cc}f_{j+1,i}(x^{\pm
j}z;a)\Delta_{1}(x^{\pm i}z)&(1\leq i\leq j),\\\ f_{j+1,i}(x^{\pm
j}z;a)&(1\leq j<i),\end{array}\right.$ (194) $\displaystyle
f_{1,i}(z;a)f_{1,j}(x^{\pm(i+j)}z;a)=f_{1,i+j}(x^{\pm j}z;a)\Delta_{1}(x^{\pm
i}z)~{}~{}(i,j\geq 1),$ (195) $\displaystyle
f_{1,i}(z;a)f_{1,j}(x^{\pm(i-j-2k)}z;a)=f_{1,i-k}(x^{\mp
k}z;a)f_{1,j+k}(x^{\pm(i-j-k)}z;a)~{}~{}(i,j,i-k,j+k\geq 1),$ (196)
$\displaystyle\Delta_{i+1}(z)=\left(\prod_{k=1}^{i-1}\Delta_{1}(x^{-i+2k}z)\right)^{-1}\prod_{k=1}^{i}\Delta_{2}(x^{-i-1+2k}z)~{}~{}~{}(i\geq
2).$ (197)
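These fusion relations are identities of convergent power series, so they can be spot-checked numerically. A sketch in Python for (197) and (191) at $i=2$, truncating the exponent series in (181); the values of $x$, $r$, $a$, $z$ are arbitrary test points, and $[n]_{x}=(x^{n}-x^{-n})/(x-x^{-1})$ is the assumed convention:

```python
import math

def qnum(n, x):
    # [n]_x = (x^n - x^{-n}) / (x - x^{-1}), the convention assumed here
    return (x**n - x**(-n)) / (x - 1 / x)

def Delta(i, z, r, x):
    return ((1 - x**(2*r - i) * z) * (1 - x**(-2*r + i) * z)
            / ((1 - x**i * z) * (1 - x**(-i) * z)))

def f(i, j, z, a, r, x, terms=300):
    # structure function (181), exponent series truncated after `terms` terms
    s = sum(qnum((r - 1)*m, x) * qnum(r*m, x) * qnum(min(i, j)*m, x)
            * qnum((a - max(i, j))*m, x) / (qnum(m, x) * qnum(a*m, x))
            * (x - 1 / x)**2 * z**m / m
            for m in range(1, terms + 1))
    return math.exp(-s)

x, r, a, z = 0.9, 1.3, 3.4, 0.02
# (197) at i = 2: Delta_3(z) = Delta_1(z)^{-1} Delta_2(x^{-1} z) Delta_2(x z)
assert math.isclose(Delta(3, z, r, x),
                    Delta(2, z / x, r, x) * Delta(2, x * z, r, x) / Delta(1, z, r, x),
                    rel_tol=1e-12)
# (191) at i = 2: f_{1,2}(z; a) = Delta_1(z)^{-1} f_{1,1}(x^{-1} z; a) f_{1,1}(x z; a)
assert math.isclose(f(1, 2, z, a, r, x),
                    f(1, 1, z / x, a, r, x) * f(1, 1, x * z, a, r, x) / Delta(1, z, r, x),
                    rel_tol=1e-8)
```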
Proof of Lemma 4.5. We obtain (190) and (197) by straightforward calculation
from the definitions. We show (191) here. From definitions, we have
$\displaystyle\left(\prod_{k=1}^{i-1}\Delta_{1}(x^{-i+2k}z)\right)^{-1}\prod_{k=1}^{i}f_{1,1}(x^{-i-1+2k}z;a)$
$\displaystyle=$
$\displaystyle\exp\left(-\sum_{m=1}^{\infty}\frac{1}{m}\frac{[rm]_{x}[(r-1)m]_{x}}{[am]_{x}}(x-x^{-1})^{2}\left([(a-1)m]_{x}\sum_{k=1}^{i}x^{(-i+2k-1)m}-[am]_{x}\sum_{k=1}^{i-1}x^{(-i+2k)m}\right)z^{m}\right).$
Using the relation
$\displaystyle[(a-1)m]_{x}\sum_{k=1}^{i}x^{(-i+2k-1)m}-[am]_{x}\sum_{k=1}^{i-1}x^{(-i+2k)m}=[(a-i)m]_{x},$
we have $f_{1,i}(z;a)$. Using (190) and (191), we obtain the relations (194),
(195), and (196). $\Box$
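The summation identity used in this proof, $[(a-1)m]_{x}\sum_{k=1}^{i}x^{(-i+2k-1)m}-[am]_{x}\sum_{k=1}^{i-1}x^{(-i+2k)m}=[(a-i)m]_{x}$, follows by telescoping from $[p]_{x}(x^{q}+x^{-q})=[p+q]_{x}+[p-q]_{x}$. A quick numerical spot-check in Python, assuming $[n]_{x}=(x^{n}-x^{-n})/(x-x^{-1})$:

```python
import math

def qnum(n, x):
    # [n]_x = (x^n - x^{-n}) / (x - x^{-1}), the convention assumed here
    return (x**n - x**(-n)) / (x - 1 / x)

x, a = 0.73, 4.6  # arbitrary test values
for i in (2, 3, 5):
    for m in (1, 2, 3):
        lhs = (qnum((a - 1) * m, x) * sum(x**((-i + 2*k - 1) * m) for k in range(1, i + 1))
               - qnum(a * m, x) * sum(x**((-i + 2*k) * m) for k in range(1, i)))
        assert math.isclose(lhs, qnum((a - i) * m, x), rel_tol=1e-12)
```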
The following relations (198), (199), and (200) are special cases of (187).
###### Lemma 4.6
The $T_{i}(z)$’s satisfy the fusion relation
$\displaystyle\lim_{z_{1}\to
x^{\pm(i+j)}z_{2}}\left(1-x^{\pm(i+j)}\frac{z_{2}}{z_{1}}\right)f_{i,j}\left(\frac{z_{2}}{z_{1}};a\right)T_{i}(z_{1})T_{j}(z_{2})$
$\displaystyle=\mp c(r,x)\prod_{k=1}^{{\rm
Min}(i,j)-1}\Delta_{1}(x^{2k+1})T_{i+j}(x^{\pm i}z_{2})~{}~{}~{}(i,j\geq 1).$
(198)
Here we set $a=D(0,L;\Phi)$.
Proof of Lemma 4.6. Summing up the relations (A 1)–(A 4) in Appendix A gives
(198). $\Box$
###### Lemma 4.7
The $T_{i}(z)$’s satisfy the exchange relation as meromorphic functions
$\displaystyle
f_{i,j}\left(\frac{z_{2}}{z_{1}};a\right)T_{i}(z_{1})T_{j}(z_{2})=f_{j,i}\left(\frac{z_{1}}{z_{2}};a\right)T_{j}(z_{2})T_{i}(z_{1})~{}~{}~{}(j\geq
i\geq 1).$ (199)
Both sides are regular except for possible poles at
$z_{2}/z_{1}=x^{\pm(j-i+2k)}$ $(1\leq k\leq i)$. Here we set $a=D(0,L;\Phi)$.
Proof of Lemma 4.7. Using the commutation relation (104) repeatedly, (199) is
obtained except for poles in both sides. Using Proposition 4.3, we identify
the pole position as $z_{2}/z_{1}=x^{\pm(j-i+2k)}$ $(1\leq k\leq i)$. $\Box$
###### Lemma 4.8
The $T_{i}(z)$’s satisfy the quadratic relations
$\displaystyle
f_{1,i}\left(\frac{z_{2}}{z_{1}};a\right)T_{1}(z_{1})T_{i}(z_{2})-f_{i,1}\left(\frac{z_{1}}{z_{2}};a\right)T_{i}(z_{2})T_{1}(z_{1})$
$\displaystyle=$
$\displaystyle~{}c(r,x)\left(\delta\left(\frac{x^{-i-1}z_{2}}{z_{1}}\right)T_{i+1}(x^{-1}z_{2})-\delta\left(\frac{x^{i+1}z_{2}}{z_{1}}\right)T_{i+1}(xz_{2})\right)~{}~{}(i\geq
1).$ (200)
Here we set $a=D(0,L;\Phi)$.
Proof of Lemma 4.8. Summing up the relations (B 2)–(B 6) in Appendix B gives
(200). $\Box$
Proof of Theorem 4.1. We prove Theorem 4.1 by induction. Lemma 4.8 is the
basis of induction for the proof. In what follows we set $a=D(0,L;\Phi)$.
We define ${\rm LHS}_{i,j}$ and ${\rm RHS}_{i,j}(k)$ with $(1\leq k\leq i\leq
j)$ as
$\displaystyle{\rm LHS}_{i,j}$
$\displaystyle=f_{i,j}\left(\frac{z_{2}}{z_{1}};a\right)T_{i}(z_{1})T_{j}(z_{2})-f_{j,i}\left(\frac{z_{1}}{z_{2}};a\right)T_{j}(z_{2})T_{i}(z_{1}),$
$\displaystyle{\rm RHS}_{i,j}(k)$
$\displaystyle=c(r,x)\prod_{l=1}^{k-1}\Delta_{1}(x^{2l+1})\left(\delta\left(\frac{x^{-j+i-2k}z_{2}}{z_{1}}\right)f_{i-k,j+k}(x^{j-i};a)T_{i-k}(x^{k}z_{1})T_{j+k}(x^{-k}z_{2})\right.$
$\displaystyle-\left.\delta\left(\frac{x^{j-i+2k}z_{2}}{z_{1}}\right)f_{i-k,j+k}(x^{-j+i};a)T_{i-k}(x^{-k}z_{1})T_{j+k}(x^{k}z_{2})\right)~{}~{}(1\leq
k\leq i-1),$ $\displaystyle{\rm RHS}_{i,j}(i)$
$\displaystyle=c(r,x)\prod_{l=1}^{i-1}\Delta_{1}(x^{2l+1})\left(\delta\left(\frac{x^{-j-i}z_{2}}{z_{1}}\right)T_{j+i}(x^{-i}z_{2})-\delta\left(\frac{x^{j+i}z_{2}}{z_{1}}\right)T_{j+i}(x^{i}z_{2})\right).$
We prove the following relation by induction on $i$ $(1\leq i\leq j)$.
$\displaystyle{\rm LHS}_{i,j}=\sum_{k=1}^{i}{\rm RHS}_{i,j}(k).$ (201)
The base case $i=1\leq j$ was proven in Lemma 4.8. We
assume that the relation (201) holds for $i$ $(1\leq i<j)$, and we show ${\rm
LHS}_{i+1,j}=\sum_{k=1}^{i+1}{\rm RHS}_{i+1,j}(k)$ from this assumption.
Multiplying ${\rm LHS}_{i,j}$ by
$f_{1,i}\left(z_{1}/z_{3};a\right)f_{1,j}\left(z_{2}/z_{3};a\right)T_{1}(z_{3})$
on the left and using the quadratic relation
$f_{1,j}\left(z_{2}/z_{3};a\right)T_{1}(z_{3})T_{j}(z_{2})=f_{j,1}\left(z_{3}/z_{2};a\right)T_{j}(z_{2})T_{1}(z_{3})+\cdots$,
along with the fusion relation (194) gives
$\displaystyle
f_{1,j}\left(\frac{z_{2}}{z_{3}};a\right)f_{i,j}\left(\frac{z_{2}}{z_{1}};a\right)f_{1,i}\left(\frac{z_{1}}{z_{3}};a\right)T_{1}(z_{3})T_{i}(z_{1})T_{j}(z_{2})$
$\displaystyle-$ $\displaystyle
f_{j,1}\left(\frac{z_{3}}{z_{2}};a\right)f_{j,i}\left(\frac{z_{1}}{z_{2}};a\right)T_{j}(z_{2})f_{1,i}\left(\frac{z_{1}}{z_{3}};a\right)T_{1}(z_{3})T_{i}(z_{1})$
$\displaystyle-$
$\displaystyle~{}c(r,x)\delta\left(\frac{x^{-j-1}z_{2}}{z_{3}}\right)\Delta_{1}\left(\frac{x^{-i}z_{1}}{z_{3}}\right)f_{j+1,i}\left(\frac{x^{-j}z_{1}}{z_{3}};a\right)T_{j+1}(x^{j}z_{3})T_{i}(z_{1})$
$\displaystyle+$
$\displaystyle~{}c(r,x)\delta\left(\frac{x^{j+1}z_{2}}{z_{3}}\right)\Delta_{1}\left(\frac{x^{i}z_{1}}{z_{3}}\right)f_{j+1,i}\left(\frac{x^{j}z_{1}}{z_{3}};a\right)T_{j+1}(x^{-j}z_{3})T_{i}(z_{1}).$
(202)
Taking the limit $z_{3}\to x^{-i-1}z_{1}$ of (202) multiplied by
$c(r,x)^{-1}\left(1-x^{-i-1}z_{1}/z_{3}\right)$ and using the fusion relation
(198) along with the relation $\lim_{z_{3}\to
x^{-i-1}z_{1}}\left(1-x^{-i-1}z_{1}/z_{3}\right)\Delta_{1}\left(x^{-i}z_{1}/z_{3}\right)=c(r,x)$
gives
$\displaystyle
f_{1,j}\left(\frac{x^{i+1}z_{2}}{z_{1}};a\right)f_{i,j}\left(\frac{z_{2}}{z_{1}};a\right)T_{i+1}(x^{-1}z_{1})T_{j}(z_{2})-f_{j,1}\left(\frac{x^{-i-1}z_{1}}{z_{2}};a\right)f_{j,i}\left(\frac{z_{1}}{z_{2}};a\right)T_{j}(z_{2})T_{i+1}(x^{-1}z_{1})$
$\displaystyle-$
$\displaystyle~{}c(r,x)\delta\left(\frac{x^{i-j}z_{2}}{z_{1}}\right)f_{j+1,i}(x^{i-j+1};a)T_{j+1}(x^{j-i-1}z_{1})T_{i}(z_{1})$
$\displaystyle-$
$\displaystyle~{}c(r,x)\delta\left(\frac{x^{i+j+2}z_{2}}{z_{1}}\right)\prod_{l=1}^{i}\Delta_{1}(x^{2l+1})T_{i+j+1}(x^{i+1}z_{2}).$
Using the fusion relation (194) and
$f_{j+1,i}(x^{i-j+1};a)T_{j+1}(x^{j-i-1}z_{1})T_{i}(z_{1})=f_{i,j+1}(x^{j-i-1};a)T_{i}(z_{1})T_{j+1}(x^{j-i-1}z_{1})$
in (199) gives
$\displaystyle
f_{i+1,j}\left(\frac{xz_{2}}{z_{1}};a\right)T_{i+1}(x^{-1}z_{1})T_{j}(z_{2})-f_{j,i+1}\left(\frac{x^{-1}z_{1}}{z_{2}};a\right)T_{j}(z_{2})T_{i+1}(x^{-1}z_{1})$
$\displaystyle-$
$\displaystyle~{}c(r,x)\delta\left(\frac{x^{i-j}z_{2}}{z_{1}}\right)f_{i,j+1}(x^{-i+j+1};a)T_{i}(z_{1})T_{j+1}(x^{-1}z_{2})$
$\displaystyle+$
$\displaystyle~{}c(r,x)\delta\left(\frac{x^{i+j+2}z_{2}}{z_{1}}\right)\prod_{l=1}^{i}\Delta_{1}(x^{2l+1})T_{i+j+1}(x^{i+1}z_{2}).$
(203)
Multiplying ${\rm RHS}_{i,j}(i)$ by
$f_{1,i}\left(z_{1}/z_{3};a\right)f_{1,j}\left(z_{2}/z_{3};a\right)T_{1}(z_{3})$
from the left and using the fusion relation (195) gives
$\displaystyle
c(r,x)\prod_{l=1}^{i-1}\Delta_{1}(x^{2l+1})\left(\delta\left(\frac{x^{-i-j}z_{2}}{z_{1}}\right)f_{1,i+1}\left(\frac{x^{j}z_{1}}{z_{3}};a\right)\Delta_{1}\left(\frac{x^{i}z_{1}}{z_{3}}\right)T_{1}(z_{3})T_{i+j}(x^{j}z_{1})\right.$
$\displaystyle-$
$\displaystyle~{}\left.\delta\left(\frac{x^{i+j}z_{2}}{z_{1}}\right)f_{1,i+1}\left(\frac{x^{-j}z_{1}}{z_{3}};a\right)\Delta_{1}\left(\frac{x^{-i}z_{1}}{z_{3}}\right)T_{1}(z_{3})T_{i+j}(x^{-j}z_{1})\right).$
(204)
Taking the limit $z_{3}\to x^{-i-1}z_{1}$ of (204) multiplied by
$c(r,x)^{-1}\left(1-x^{-i-1}z_{1}/z_{3}\right)$ and using the fusion relation
(198) along with the relation $\lim_{z_{3}\to
x^{-i-1}z_{1}}\left(1-x^{-i-1}z_{1}/z_{3}\right)\Delta_{1}\left(x^{-i}z_{1}/z_{3}\right)=c(r,x)$
gives
$\displaystyle
c(r,x)\delta\left(\frac{x^{-i-j}z_{2}}{z_{1}}\right)\prod_{l=1}^{i}\Delta_{1}(x^{2l+1})T_{i+j+1}(x^{-i-1}z_{2})$
$\displaystyle-~{}$ $\displaystyle
c(r,x)\delta\left(\frac{x^{i+j}z_{2}}{z_{1}}\right)\prod_{l=1}^{i-1}\Delta_{1}(x^{2l+1})f_{1,i+j}(x^{i-j+1};a)T_{1}(x^{-i-1}z_{1})T_{i+j}(x^{i}z_{2}).$
(205)
Multiplying ${\rm RHS}_{i,j}(k)$ $(1\leq k\leq i-1)$ by
$f_{1,i}\left(z_{1}/z_{3};a\right)f_{1,j}\left(z_{2}/z_{3};a\right)T_{1}(z_{3})$
from the left and using the fusion relation (196) along with
$f_{i-k,j+k}(x^{j-i};a)T_{i-k}(x^{k}z_{1})T_{j+k}(x^{j-i+k}z_{1})=f_{j+k,i-k}(x^{i-j};a)T_{j+k}(x^{j-i+k}z_{1})T_{i-k}(x^{k}z_{1})$
in (199) gives
$\displaystyle c(r,x)\prod_{l=1}^{k-1}\Delta_{1}(x^{2l+1})$ (206)
$\displaystyle\times$
$\displaystyle\left(\delta\left(\frac{x^{-j+i-2k}z_{2}}{z_{1}}\right)f_{1,i-k}\left(\frac{x^{k}z_{1}}{z_{3}};a\right)f_{j+k,i-k}(x^{i-j};a)f_{1,j+k}\left(\frac{x^{-i+j+k}z_{1}}{z_{3}};a\right)T_{1}(z_{3})T_{j+k}(x^{j-i+k}z_{1})T_{i-k}(x^{k}z_{1})\right.$
$\displaystyle-\left.\delta\left(\frac{x^{j-i+2k}z_{2}}{z_{1}}\right)f_{1,i-k}\left(\frac{x^{-k}z_{1}}{z_{3}};a\right)f_{i-k,j+k}(x^{i-j};a)f_{1,j+k}\left(\frac{x^{i-j-k}z_{1}}{z_{3}};a\right)T_{1}(z_{3})T_{i-k}(x^{-k}z_{1})T_{j+k}(x^{k}z_{2})\right).$
Taking the limit $z_{3}\to x^{-i-1}z_{1}$ of (206) multiplied by
$c(r,x)^{-1}\left(1-x^{-i-1}z_{1}/z_{3}\right)$ and using the fusion relations
(194) and (198) along with
$f_{i-k+1,j+k}(x^{i-j+1};a)T_{i-k+1}(x^{-k-1}z_{1})T_{j+k}(x^{-j+i-k}z_{1})=f_{j+k,i-k+1}(x^{j-i-1};a)T_{j+k}(x^{-j+i-k}z_{1})T_{i-k+1}(x^{-k-1}z_{1})$
in (199) gives
$\displaystyle
c(r,x)\prod_{l=1}^{k}\Delta_{1}(x^{2l+1})\delta\left(\frac{x^{-j+i-2k}z_{2}}{z_{1}}\right)f_{j+k-1,i-k}(x^{i-j+1};a)T_{i-k}(x^{k}z_{1})T_{j+k+1}(x^{-k-1}z_{2})$
$\displaystyle-~{}$ $\displaystyle
c(r,x)\prod_{l=1}^{k-1}\Delta_{1}(x^{2l+1})\delta\left(\frac{x^{j-i+2k}z_{2}}{z_{1}}\right)f_{i-k+1,j+k}(x^{i-j+1};a)T_{i-k+1}(x^{-k-1}z_{1})T_{j+k}(x^{k}z_{2}).$
(207)
Summing (203), (205), and (207) for $1\leq k\leq i-1$ and shifting the
variable $z_{1}\mapsto xz_{1}$ gives ${\rm LHS}_{i+1,j}=\sum_{k=1}^{i+1}{\rm
RHS}_{i+1,j}(k)$. By induction on $i$, we have shown the quadratic relation
(187).
The quadratic relations (187) are independent of the choice of Dynkin-diagrams
for the Lie superalgebra $A(M,N)$, because $a=D(0,L;\Phi)$ is independent of
the choice of Dynkin-diagrams. See Lemma 4.4. $\Box$
### 4.3 Classical limit
The deformed $W$-algebra ${\cal W}_{q,t}\bigl{(}{\mathfrak{g}}\bigr{)}$
includes the $q$-Poisson $W$-algebra as a special case. As an application of
the quadratic relations (187), we obtain the $q$-Poisson $W$-algebra [6, 12,
13]. We study ${\cal W}_{q,t}(A(M,N))$ $(M\geq N\geq 0,M+N\geq 1)$. We set
parameters $q=x^{2r}$ and $\beta=(r-1)/r$. We define the $q$-Poisson bracket
$\\{,\\}$ by taking the classical limit $\beta\to 0$ with $q$ fixed as
$\displaystyle\\{T_{i}^{{\rm PB}}[m],T_{j}^{{\rm PB}}[n]\\}=-\lim_{\beta\to
0}\frac{1}{\beta\log q}[T_{i}[m],T_{j}[n]].$
Here, we set $T_{i}^{PB}[m]$ as
$T_{i}(z)=\sum_{m\in{\mathbf{Z}}}T_{i}[m]z^{-m}\longrightarrow
T_{i}^{PB}(z)=\sum_{m\in{\mathbf{Z}}}T_{i}^{PB}[m]z^{-m}$ $(\beta\to
0,~{}q~{}\rm{fixed})$. The $\beta$-expansions of the structure functions are
given as
$\displaystyle f_{i,j}(z;a)=1+\beta\log
q\sum_{m=1}^{\infty}\frac{\left[\frac{1}{2}{\rm
Min}(i,j)m\right]_{q}\left[\left(\frac{1}{2}({\rm
Max}(i,j)-M-1)\right)m\right]_{q}}{[\frac{1}{2}(M+1)m]_{q}}(q-q^{-1})+O(\beta^{2})~{}~{}(i,j\geq
1),$ $\displaystyle c(r,x)=-\beta\log q+O(\beta^{2}),$
where $a=D(0,M+N+1;\Phi)=(N+1)r+M-N$.
###### Proposition 4.9
For the $q$-Poisson $W$-superalgebra for $A(M,N)$ $(M\geq N\geq 0,M+N\geq 1)$,
the generating functions $T_{i}^{PB}(z)$ satisfy
$\displaystyle\\{T_{i}^{PB}(z_{1}),T_{j}^{PB}(z_{2})\\}$
$\displaystyle=(q-q^{-1})C_{i,j}\left(\frac{z_{2}}{z_{1}}\right)T_{i}^{PB}(z_{1})T_{j}^{PB}(z_{2})$
$\displaystyle+\sum_{k=1}^{i}\delta\left(\frac{q^{\frac{-j+i}{2}-k}z_{2}}{z_{1}}\right)T_{i-k}^{PB}(q^{\frac{k}{2}}z_{1})T_{j+k}^{PB}(q^{-\frac{k}{2}}z_{2})$
$\displaystyle-\sum_{k=1}^{i}\delta\left(\frac{q^{\frac{j-i}{2}+k}z_{2}}{z_{1}}\right)T_{i-k}^{PB}(q^{-\frac{k}{2}}z_{1})T_{j+k}^{PB}(q^{\frac{k}{2}}z_{2})~{}~{}(1\leq
i\leq j).$
Here we set the structure functions $C_{i,j}(z)$ $(i,j\geq 1)$ as
$\displaystyle C_{i,j}(z)=\sum_{m\in{\mathbf{Z}}}\frac{\left[\frac{1}{2}{\rm
Min}(i,j)m\right]_{q}\left[\frac{1}{2}({\rm
Max}(i,j)-M-1)m\right]_{q}}{[\frac{1}{2}(M+1)m]_{q}}z^{m}~{}~{}(i,j\geq 1).$
The structure functions satisfy $C_{i,M+1}(z)=C_{M+1,i}(z)=0$ $(1\leq i\leq
M+1)$.
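The vanishing of $C_{i,M+1}(z)$ is immediate from the factor $[\frac{1}{2}({\rm Max}(i,j)-M-1)m]_{q}$, which is $[0]_{q}=0$ whenever ${\rm Max}(i,j)=M+1$. A numerical sketch of the Laurent coefficients of $C_{i,j}(z)$ in Python (skipping $m=0$, where the summand is of the indeterminate form $0/0$), assuming $[n]_{q}=(q^{n}-q^{-n})/(q-q^{-1})$:

```python
def qnum(n, q):
    # [n]_q = (q^n - q^{-n}) / (q - q^{-1}), the convention assumed here
    return (q**n - q**(-n)) / (q - 1 / q)

def C_coeff(i, j, m, q, M):
    # coefficient of z^m in C_{i,j}(z)
    return (qnum(0.5 * min(i, j) * m, q) * qnum(0.5 * (max(i, j) - M - 1) * m, q)
            / qnum(0.5 * (M + 1) * m, q))

q, M = 1.4, 3  # arbitrary q, example rank M
for i in range(1, M + 2):
    for m in (-3, -2, -1, 1, 2, 3):
        assert C_coeff(i, M + 1, m, q, M) == 0.0  # Max(i, M+1) = M+1 kills the coefficient
assert abs(C_coeff(1, 1, 1, q, M)) > 0            # a generic coefficient survives
```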
## 5 Conclusion and Discussion
In this paper, we found the free field construction of the basic $W$-current
$T_{1}(z)$ (See (95) and (99)) and the screening currents $S_{j}(w)$ (See
(75)) for the deformed $W$-superalgebra ${\cal
W}_{q,t}\bigl{(}A(M,N)\bigr{)}$. Using the free field construction, we
introduced the higher $W$-currents $T_{i}(z)$ (See (185)) and obtained a
closed set of quadratic relations among them (See (187)). These relations are
independent of the choice of Dynkin-diagrams for the Lie superalgebra
$A(M,N)$.
Recently, Feigin, Jimbo, Mukhin, and Vilkoviskiy [9] introduced the free field
construction of the basic $W$-current and the screening currents in types
$A,B,C,D$ including twisted and supersymmetric cases in terms of the quantum
toroidal algebras. Their motivation is to understand a commutative family of
integrals of motion associated with affine Dynkin-diagrams [14, 15]. In the
case of type $A$, their basic $W$-current $T_{1}(z)$ satisfies
$\displaystyle
T_{1}(z_{1})T_{1}(z_{2})=\frac{\Theta_{\mu}\left(q_{1}\frac{z_{2}}{z_{1}},~{}q_{2}\frac{z_{2}}{z_{1}},~{}q_{3}\frac{z_{2}}{z_{1}}\right)}{\Theta_{\mu}\left(q_{1}^{-1}\frac{z_{2}}{z_{1}},~{}q_{2}^{-1}\frac{z_{2}}{z_{1}},~{}q_{3}^{-1}\frac{z_{2}}{z_{1}}\right)}T_{1}(z_{2})T_{1}(z_{1})~{}~{}~{}(q_{1}q_{2}q_{3}=1)$
(208)
in the sense of analytic continuation. Upon the specialization
$q_{1}=x^{2},q_{2}=x^{-2r},q_{3}=x^{2r-2},\mu=x^{2a}$ $(a=D(0,L;\Phi))$, their
commutation relation (208) coincides with that of this paper (See (104)). In
the case of $\mathfrak{sl}(N)$, their basic $W$-current $T_{1}(z)$ coincides
with that of [14, 15], which gives a one-parameter deformation of ${\cal
W}_{q,t}(\mathfrak{sl}(N))$ in Ref.[2, 3]. In the case of $A(M,N)$, their
basic $W$-current $T_{1}(z)$ gives a one-parameter deformation of that of
${\cal W}_{q,t}\bigl{(}A(M,N)\bigr{)}$ in this paper.
It is still an open problem to find quadratic relations of the deformed
$W$-algebra ${\cal W}_{q,t}({\mathfrak{g}})$, except for
${\mathfrak{g}}=\mathfrak{sl}(N)$, $A_{2}^{(2)}$, and $A(M,N)$. It seems
possible to extend Ding-Feigin’s construction to other Lie superalgebras and
to obtain their quadratic relations.
ACKNOWLEDGMENTS
The author would like to thank Professor Michio Jimbo very much for carefully
reading the manuscript and for giving much useful advice. This work is
supported by the Grant-in-Aid for Scientific Research C (26400105) from the
Japan Society for the Promotion of Science.
## Appendix A Fusion relation
In this appendix we summarize the fusion relations of $\Lambda_{i}(z)$. We use
the abbreviation
$\displaystyle
F_{i,j}^{(\pm)}(z;a)=(1-x^{\pm(i+j)}z)f_{i,j}(z;a)~{}~{}~{}(a=D(0,L;\Phi)).$
For
$(m_{1},m_{2},\ldots,m_{L+1}),(n_{1},n_{2},\ldots,n_{L+1})\in\hat{N}(\Phi)$
defined in (186), we set $i=m_{1}+m_{2}+\cdots+m_{L+1}$ and
$j=n_{1}+n_{2}+\cdots+n_{L+1}$.
$\bullet$ If ${\rm Max}\\{1\leq k\leq L+1|n_{k}\neq 0\\}<{\rm Min}\\{1\leq
k\leq L+1|m_{k}\neq 0\\}$ holds, we have
$\displaystyle\lim_{z_{1}\to
x^{i+j}z_{2}}F_{i,j}^{(+)}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{1})\Lambda_{n_{1},n_{2},\ldots,n_{L+1}}^{(j)}(z_{2})$
$\displaystyle=-c(r,x)\prod_{l=1}^{{\rm
Min}(i,j)-1}\Delta_{1}(x^{2l+1})\Lambda_{m_{1}+n_{1},m_{2}+n_{2},\ldots,m_{L+1}+n_{L+1}}^{(i+j)}(x^{i}z_{2}).$
(A 1)
$\bullet$ If ${\rm Max}\\{1\leq k\leq L+1|m_{k}\neq 0\\}<{\rm Min}\\{1\leq
k\leq L+1|n_{k}\neq 0\\}$ holds, we have
$\displaystyle\lim_{z_{1}\to
x^{-(i+j)}z_{2}}F_{i,j}^{(-)}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{1})\Lambda_{n_{1},n_{2},\ldots,n_{L+1}}^{(j)}(z_{2})$
$\displaystyle=c(r,x)\prod_{l=1}^{{\rm
Min}(i,j)-1}\Delta_{1}(x^{2l+1})\Lambda_{m_{1}+n_{1},m_{2}+n_{2},\ldots,m_{L+1}+n_{L+1}}^{(i+j)}(x^{-i}z_{2}).$
(A 2)
$\bullet$ If $l$ satisfies $l\in\widehat{I}(-\frac{1}{r})$ and $l={\rm
Max}\\{1\leq k\leq L+1|n_{k}\neq 0\\}={\rm Min}\\{1\leq k\leq L+1|m_{k}\neq
0\\}$, we have
$\displaystyle\lim_{z_{1}\to
x^{i+j}z_{2}}F_{i,j}^{(+)}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{1})\Lambda_{n_{1},n_{2},\ldots,n_{L+1}}^{(j)}(z_{2})$
$\displaystyle=-\frac{c(r,x)d_{m_{l}+n_{l}}(r,x)}{d_{m_{l}}(r,x)d_{n_{l}}(r,x)}\prod_{l=1}^{{\rm
Min}(i,j)-1}\Delta_{1}(x^{2l+1})\Lambda_{m_{1}+n_{1},m_{2}+n_{2},\ldots,m_{L+1}+n_{L+1}}^{(i+j)}(x^{i}z_{2}).$
(A 3)
$\bullet$ If $l$ satisfies $l\in\widehat{I}(-\frac{1}{r})$ and $l={\rm
Max}\\{1\leq k\leq L+1|m_{k}\neq 0\\}={\rm Min}\\{1\leq k\leq L+1|n_{k}\neq
0\\}$, we have
$\displaystyle\lim_{z_{1}\to
x^{-(i+j)}z_{2}}F_{i,j}^{(-)}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{1})\Lambda_{n_{1},n_{2},\ldots,n_{L+1}}^{(j)}(z_{2})$
$\displaystyle=\frac{c(r,x)d_{m_{l}+n_{l}}(r,x)}{d_{m_{l}}(r,x)d_{n_{l}}(r,x)}\prod_{l=1}^{{\rm
Min}(i,j)-1}\Delta_{1}(x^{2l+1})\Lambda_{m_{1}+n_{1},m_{2}+n_{2},\ldots,m_{L+1}+n_{L+1}}^{(i+j)}(x^{-i}z_{2}).$
(A 4)
The remaining fusions vanish.
## Appendix B Exchange relation
In this appendix we give the exchange relations of $\Lambda_{i}(z)$ and
$\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z)$, which are obtained from
Proposition 4.3. For $(m_{1},m_{2},\ldots,m_{L+1})\in\hat{N}(\Phi)$ in (186),
we set $i=m_{1}+m_{2}+\cdots+m_{L+1}$. We assume $i\geq 1$. We calculate
$\displaystyle
f_{1,i}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{l}(z_{1})\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{2})-f_{i,1}\left(\frac{z_{1}}{z_{2}};a\right)\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{2})\Lambda_{l}(z_{1}),$
(B 1)
where $a=D(0,L;\Phi)$.
$\bullet$ If $l$ satisfies $m_{l}\neq 0$ and $l\in\widehat{I}(\frac{1-r}{r})$,
(B 1) is deformed as
$\displaystyle
f_{1,i}\left(\frac{z_{2}}{z_{1}};a\right)\Lambda_{l}(z_{1})\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{2})-f_{i,1}\left(\frac{z_{1}}{z_{2}};a\right)\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{2})\Lambda_{l}(z_{1})=0.$
(B 2)
$\bullet$ If $l$ satisfies $m_{l}\neq 0$ and $l\in\widehat{I}(-\frac{1}{r})$,
(B 1) is deformed as
$\displaystyle\frac{c(r,x)d_{m_{l}+1}(r,x)}{d_{1}(r,x)d_{m_{l}}(r,x)}:\Lambda_{l}(z_{1})\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{2}):$
$\displaystyle\times\left(\delta\left(x^{-i-1+2(m_{1}+m_{2}+\cdots+m_{l-1})}\frac{z_{2}}{z_{1}}\right)-\delta\left(x^{i+1-2(m_{l+1}+m_{l+2}+\cdots+m_{L+1})}\frac{z_{2}}{z_{1}}\right)\right).$
(B 3)
$\bullet$ If $l$ satisfies $l<{\rm Min}\\{1\leq k\leq L+1|m_{k}\neq 0\\}$, (B
1) is deformed as
$\displaystyle
c(r,x):\Lambda_{l}(z_{1})\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{2}):\left(\delta\left(\frac{x^{-i-1}z_{2}}{z_{1}}\right)-\delta\left(\frac{x^{-i+1}z_{2}}{z_{1}}\right)\right).$
(B 4)
$\bullet$ If $l$ satisfies $l>{\rm Max}\\{1\leq k\leq L+1|m_{k}\neq 0\\}$, (B
1) is deformed as
$\displaystyle
c(r,x):\Lambda_{l}(z_{1})\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{2}):\left(\delta\left(\frac{x^{i-1}z_{2}}{z_{1}}\right)-\delta\left(\frac{x^{i+1}z_{2}}{z_{1}}\right)\right).$
(B 5)
$\bullet$ If $l$ satisfies $m_{l}=0$ and ${\rm Min}\\{1\leq k\leq
L+1|m_{k}\neq 0\\}<l<{\rm Max}\\{1\leq k\leq L+1|m_{k}\neq 0\\}$, (B 1) is
deformed as
$\displaystyle
c(r,x):\Lambda_{l}(z_{1})\Lambda_{m_{1},m_{2},\ldots,m_{L+1}}^{(i)}(z_{2}):$
$\displaystyle\times\left(\delta\left(x^{-i+1+2(m_{1}+m_{2}+\cdots+m_{l-2})}\frac{z_{2}}{z_{1}}\right)-\delta\left(x^{i+1-2(m_{l}+m_{l+1}+\cdots+m_{L+1})}\frac{z_{2}}{z_{1}}\right)\right).$
(B 6)
## References
* [1] J. Shiraishi, H. Kubo, H. Awata, and S. Odake, A quantum deformation of the Virasoro algebra and the Macdonald symmetric functions, Lett. Math. Phys. 38, 33-51 (1996)
* [2] H. Awata, H. Kubo, S. Odake, and J. Shiraishi, Quantum ${\cal W}_{N}$ algebras and Macdonald polynomials, Commun. Math. Phys. 179, 401-416 (1996)
* [3] B. Feigin and E. Frenkel, Quantum $\cal{W}$-algebras and elliptic algebras, Commun. Math. Phys. 178, 653-678 (1996)
* [4] V. Brazhnikov and S. Lukyanov, Angular quantization and form-factors in massive integrable models, Nucl. Phys. B512, 616-636 (1998).
* [5] Y. Hara, M. Jimbo, H. Konno, S. Odake, and J. Shiraishi, Free field approach to the dilute $A_{L}$ models, J. Math. Phys. 40, 3791-3826 (1999)
* [6] E. Frenkel and N. Reshetikhin, Deformations of $\cal{W}$ algebras associated to simple Lie algebras, Commun. Math. Phys. 197, 1-31 (1998)
* [7] A. Sevostyanov, Drinfeld-Sokolov reduction for quantum groups and deformations of $W$-algebras, Selecta Math. 8, 637-703 (2002)
* [8] S. Odake, Comments on the deformed ${W}_{N}$ algebra, Int. J. Mod. Phys. B16, 2055-2064 (2002)
* [9] B. Feigin, M. Jimbo, E. Mukhin, and I. Vilkoviskiy, Deformation of $W$ algebras via quantum toroidal algebras, arXiv: 2003.04234 (2020)
* [10] J. Ding and B. Feigin, Quantized $W$-algebra of $\mathfrak{sl}(2,1)$: A construction from the quantization of screening operators, Contemp. Math. 248, 83-108 (1998)
* [11] T. Kojima, Quadratic relations of the deformed $W$-superalgebra ${\cal W}_{q,t}\bigl{(}\mathfrak{sl}(2|1)\bigr{)}$, arXiv:1912.03096 (2019)
* [12] E. Frenkel and N. Reshetikhin, Quantum affine algebras and deformation of the Virasoro algebra and $\cal{W}$-algebra, Commun. Math. Phys. 178, 237-264 (1996)
* [13] E. Frenkel, N. Reshetikhin, and M. Semenov-Tian-Shansky, Drinfeld-Sokolov reduction for difference operators and deformation of $\cal{W}$-algebras I. The case of Virasoro algebra, Commun. Math. Phys. 192, 605-629 (1998)
* [14] B. Feigin, T. Kojima, J. Shiraishi, and H. Watanabe, The integrals of motion for the deformed $W$-algebra ${\cal W}_{q,t}\bigl{(}\widehat{\mathfrak{sl}}_{N}\bigr{)}$, Proceedings of Symposium on Representation Theory 2006, 102-114 (2006), ISBN4-9902328-2-8, arXiv: 0705.0627v1
* [15] T. Kojima and J. Shiraishi, The integrals of motion for the deformed $W$-algebra ${\cal W}_{q,t}\bigl{(}\widehat{\mathfrak{gl}}_{N}\bigr{)}$. II. Proof of the commutation relations, Commun. Math. Phys. 283, 795-851 (2008)
# Congruences modulo powers of $5$
for the rank parity function
Dandan Chen School of Mathematical Sciences, East China Normal University,
Shanghai, People’s Republic of China [email protected] , Rong Chen
School of Mathematical Sciences, East China Normal University, Shanghai,
People’s Republic of China [email protected] and Frank Garvan
Department of Mathematics, University of Florida, Gainesville, FL 32611-8105
[email protected]
(Date: December 24, 2020)
###### Abstract.
It is well known that Ramanujan conjectured congruences modulo powers of $5$,
$7$, and $11$ for the partition function. These were subsequently proved by
Watson (1938) and Atkin (1967). In 2009 Choi, Kang, and Lovejoy proved
congruences modulo powers of $5$ for the crank parity function. The generating
function for the rank parity function is $f(q)$, which is the first example of
a mock theta function that Ramanujan mentioned in his last letter to Hardy. We
prove congruences modulo powers of $5$ for the rank parity function.
###### Key words and phrases:
partition congruences, Dyson’s rank, mock theta functions, modular functions
###### 2020 Mathematics Subject Classification:
05A17, 11F30, 11F37, 11P82, 11P83
The first and second authors were supported in part by the National Natural
Science Foundation of China (Grant No. 11971173) and an ECNU Short-term
Overseas Research Scholarship for Graduate Students (Grant no. 201811280047).
The third author was supported in part by a grant from the Simons Foundation
(#318714).
## 1\. Introduction
Let $p(n)$ be the number of unrestricted partitions of $n$. Ramanujan
discovered and later proved that
(1.1) $\displaystyle p(5n+4)$ $\displaystyle\equiv 0\pmod{5},$ (1.2)
$\displaystyle p(7n+5)$ $\displaystyle\equiv 0\pmod{7},$ (1.3) $\displaystyle
p(11n+6)$ $\displaystyle\equiv 0\pmod{11}.$
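The three congruences (1.1)–(1.3) are easy to check numerically. The following sketch (our illustration, not part of the paper) computes $p(n)$ via Euler's pentagonal number recurrence and verifies them for small $n$:

```python
def partition_numbers(N):
    """Return [p(0), ..., p(N)] via Euler's pentagonal number recurrence."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        k, total = 1, 0
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partition_numbers(200)
assert all(p[5 * n + 4] % 5 == 0 for n in range(30))     # (1.1)
assert all(p[7 * n + 5] % 7 == 0 for n in range(27))     # (1.2)
assert all(p[11 * n + 6] % 11 == 0 for n in range(17))   # (1.3)
```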
In 1944 Dyson [16] defined the rank of a partition as the largest part minus
the number of parts, and conjectured that the residue of the rank mod $5$
(resp. mod $7$) divides the partitions of $5n+4$ (resp. $7n+5$) into $5$
(resp. $7$) equal classes, thus giving combinatorial explanations of
Ramanujan’s partition congruences mod $5$ and $7$. Dyson’s rank conjectures
were proved by Atkin and
Swinnerton-Dyer [7]. Dyson also conjectured the existence of another statistic
he called the crank which would likewise explain Ramanujan’s partition
congruence mod $11$. The crank was found by Andrews and the third author [4]
who defined the crank as the largest part, if the partition has no ones, and
otherwise as the difference between the number of parts larger than the number
of ones and the number of ones.
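Dyson's rank conjecture is easy to observe for small cases. The brute-force tally below (our illustration; the names are hypothetical) lists the partitions of $5n+4$ and counts them by rank residue mod $5$:

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def rank_classes_mod5(n):
    """Count partitions of n by the residue mod 5 of their rank."""
    counts = [0] * 5
    for lam in partitions(n):
        counts[(lam[0] - len(lam)) % 5] += 1   # rank = largest part - #parts
    return counts

# p(4) = 5 and p(9) = 30 split into five equal rank classes:
assert rank_classes_mod5(4) == [1, 1, 1, 1, 1]
assert rank_classes_mod5(9) == [6, 6, 6, 6, 6]
```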
Let $M_{e}(n)$ (resp. $M_{o}(n)$) denote the number of partitions of $n$ with
even (resp. odd) crank. Choi, Kang and Lovejoy [12] proved congruences modulo
powers of $5$ for the difference, which we call the crank parity function.
###### Theorem 1.1 (Choi, Kang and Lovejoy [12, Theorem 1.1]).
For all $\alpha\geq 0$ we have
$M_{e}(n)-M_{o}(n)\equiv 0\pmod{5^{\alpha+1}},\qquad\mbox{if $24n\equiv
1\pmod{5^{2\alpha+1}}$}.$
This gave a weak refinement of Ramanujan’s partition congruence modulo powers
of $5$:
$p(n)\equiv 0\pmod{5^{\alpha}},\qquad\mbox{if $24n\equiv 1\pmod{5^{\alpha}}$}.$
This was proved by Watson [30].
In this paper we prove an analogue of Theorem 1.1 for the rank parity
function. Analogous to $M_{e}(n)$ and $M_{o}(n)$ we let $N_{e}(n)$ (resp.
$N_{o}(n)$) denote the number of partitions of $n$ with even (resp. odd) rank.
It is well known that the difference is related to Ramanujan’s mock theta
function $f(q)$. This is the first example of a mock theta function that
Ramanujan gave in his last letter to Hardy. Let
$\displaystyle f(q)$
$\displaystyle=\sum_{n=0}^{\infty}a_{f}(n)q^{n}=1+\sum_{n=1}^{\infty}\frac{q^{n^{2}}}{(1+q)^{2}(1+q^{2})^{2}\cdots(1+q^{n})^{2}}$
$\displaystyle=1+q-2\,{q}^{2}+3\,{q}^{3}-3\,{q}^{4}+3\,{q}^{5}-5\,{q}^{6}+7\,{q}^{7}-6\,{q}^{8}+6\,{q}^{9}-10\,{q}^{10}+12\,{q}^{11}-11\,{q}^{12}+\cdots.$
This function has been studied by many authors. Ramanujan conjectured an
asymptotic formula for the coefficients $a_{f}(n)$. Dragonette [15] improved
this result by finding a Rademacher-type asymptotic expansion for the
coefficients. The error term was subsequently improved by Andrews [3],
Bringmann and Ono [10], and Ahlgren and Dunn [1]. We have
$a_{f}(n)=N_{e}(n)-N_{o}(n),$
for $n\geq 0$.
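As a sanity check (ours, not the paper's), the printed expansion of $f(q)$ can be recomputed from the series definition and compared with a brute-force tally of $N_{e}(n)-N_{o}(n)$:

```python
PREC = 13   # work modulo q^13, enough to cover the printed expansion

def mul(a, b):
    c = [0] * PREC
    for i, ai in enumerate(a):
        if ai:
            for j in range(PREC - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    """Inverse of a power series with constant term 1, truncated at PREC."""
    c = [0] * PREC
    c[0] = 1
    for n in range(1, PREC):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1))
    return c

# f(q) = 1 + sum_{k>=1} q^{k^2} / ((1+q)(1+q^2)...(1+q^k))^2
a_f = [0] * PREC
a_f[0] = 1
den = [0] * PREC
den[0] = 1
k = 1
while k * k < PREC:
    fac = [0] * PREC
    fac[0], fac[k] = 1, 1             # (1 + q^k)
    den = mul(den, mul(fac, fac))     # ((1+q)...(1+q^k))^2
    term = inv(den)
    for m in range(PREC - k * k):
        a_f[m + k * k] += term[m]
    k += 1
assert a_f == [1, 1, -2, 3, -3, 3, -5, 7, -6, 6, -10, 12, -11]

# compare with N_e(n) - N_o(n) computed directly from partitions
def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for n in range(1, PREC):
    parity = sum(1 if (lam[0] - len(lam)) % 2 == 0 else -1
                 for lam in partitions(n))
    assert parity == a_f[n]
```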
Our main theorem is
###### Theorem 1.2.
For all $\alpha\geq 3$ and all $n\geq 0$ we have
(1.4)
$a_{f}(5^{\alpha}n+\delta_{\alpha})+a_{f}(5^{\alpha-2}n+\delta_{\alpha-2})\equiv
0\pmod{5^{\left\lfloor\tfrac{1}{2}\alpha\right\rfloor}},$
where $\delta_{\alpha}$ satisfies $0<\delta_{\alpha}<5^{\alpha}$ and
$24\delta_{\alpha}\equiv 1\pmod{5^{\alpha}}$.
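Theorem 1.2 can be spot-checked numerically for small cases. The sketch below (our illustration; it proves nothing) computes $a_{f}(n)$ from the series for $f(q)$ and tests (1.4) for $\alpha=3$ and $n=0,1$, where $\delta_{1}=4$ and $\delta_{3}=99$:

```python
PREC = 225   # enough terms to reach a_f(5^3*1 + 99) = a_f(224)

def mul(a, b):
    c = [0] * PREC
    for i, ai in enumerate(a):
        if ai:
            for j in range(PREC - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    c = [0] * PREC
    c[0] = 1                       # assumes a[0] == 1
    for n in range(1, PREC):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1))
    return c

# coefficients a_f(n) of f(q) = 1 + sum q^{k^2}/((1+q)...(1+q^k))^2
a_f = [0] * PREC
a_f[0] = 1
den = [0] * PREC
den[0] = 1
k = 1
while k * k < PREC:
    fac = [0] * PREC
    fac[0], fac[k] = 1, 1
    den = mul(den, mul(fac, fac))
    term = inv(den)
    for m in range(PREC - k * k):
        a_f[m + k * k] += term[m]
    k += 1
assert a_f[:6] == [1, 1, -2, 3, -3, 3]

def delta(alpha):
    """The unique 0 < d < 5^alpha with 24*d == 1 (mod 5^alpha)."""
    return pow(24, -1, 5 ** alpha)

assert delta(1) == 4 and delta(3) == 99
for n in range(2):   # alpha = 3: modulus is 5^{floor(3/2)} = 5
    assert (a_f[125 * n + delta(3)] + a_f[5 * n + delta(1)]) % 5 == 0
```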
Below in Section 3.1 we show that the generating function for
$a_{f}(5n-1)+a_{f}(n/5),$
is a linear combination of two eta-products. See Theorem 3.1. This enables us
to use the theory of modular functions to obtain congruences. Our presentation
and method are similar to those of Paule and Radu [25], who solved a difficult
conjecture of Sellers [28] on congruences modulo powers of $5$ for Andrews’s
two-colored generalized Frobenius partitions [2]. In Section 2 we include the
necessary background and algorithms from the theory of modular functions for
proving identities. In Section 3 we apply the theory of modular functions to
prove our main theorem. In Section 4 we conclude the paper by discussing
congruences modulo powers of $7$ for both the rank and crank parity functions.
### Some Remarks and Notation
Throughout this paper we use the standard $q$-notation. For finite products we
use
$(z;q)_{n}=(z)_{n}=\begin{cases}{\displaystyle\prod_{j=0}^{n-1}(1-zq^{j})},&n>0\\\
1,&n=0.\end{cases}$
For infinite products we use
$(z;q)_{\infty}=(z)_{\infty}=\lim_{n\to\infty}(z;q)_{n}=\prod_{n=1}^{\infty}(1-zq^{(n-1)}),$
$(z_{1},z_{2},\dots,z_{k};q)_{\infty}=(z_{1};q)_{\infty}(z_{2};q)_{\infty}\cdots(z_{k};q)_{\infty},$
$[z;q]_{\infty}=(z;q)_{\infty}(z^{-1}q;q)_{\infty}=\prod_{n=1}^{\infty}(1-zq^{(n-1)})(1-z^{-1}q^{n}),$
$[z_{1},z_{2},\dots,z_{k};q]_{\infty}=[z_{1};q]_{\infty}[z_{2};q]_{\infty}\cdots[z_{k};q]_{\infty},$
for $\lvert q\rvert<1$ and $z$, $z_{1}$, $z_{2}$,…, $z_{k}\neq 0$. For
$\theta$-products we use
$J_{a,b}=(q^{a},q^{b-a},q^{b};q^{b})_{\infty},\quad\mbox{and}\quad
J_{b}=(q^{b};q^{b})_{\infty},$
and as usual
(1.5) $\eta(\tau)=\exp(\pi i\tau/12)\prod_{n=1}^{\infty}(1-\exp(2\pi
in\tau))=q^{1/24}\prod_{n=1}^{\infty}(1-q^{n}),$
where $\operatorname{Im}(\tau)>0$.
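The product in (1.5) (with the prefactor $q^{1/24}$ removed) is Euler's function, whose sparse expansion is governed by the pentagonal number theorem $\prod_{n\geq 1}(1-q^{n})=\sum_{k\in\mathbb{Z}}(-1)^{k}q^{k(3k-1)/2}$. A quick check (our illustration):

```python
N = 60
coef = [0] * (N + 1)
coef[0] = 1
for n in range(1, N + 1):            # multiply by (1 - q^n), in place
    for m in range(N, n - 1, -1):
        coef[m] -= coef[m - n]

# pentagonal number theorem: exponents k(3k-1)/2 over all integers k
expected = [0] * (N + 1)
for k in range(-7, 8):
    e = k * (3 * k - 1) // 2
    if 0 <= e <= N:
        expected[e] += 1 if k % 2 == 0 else -1
assert coef == expected
```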
Throughout this paper we let $\lfloor x\rfloor$ denote the largest integer
less than or equal to $x$, and let $\lceil x\rceil$ denote the smallest
integer greater than or equal to $x$.
We need some notation for formal Laurent series. See the remarks at the end of
[25, Section 1, p.823]. Let $R$ be a ring and $q$ be an indeterminate. We let
$R((q))$ denote the formal Laurent series in $q$ with coefficients in $R$.
These are series of the form
$f=\sum_{n\in\mathbb{Z}}a_{n}\,q^{n},$
where $a_{n}\neq 0$ for at most finitely many $n<0$. For $f\neq 0$ we define
the order of $f$ (with respect to $q$) as the smallest integer $N$ such that
$a_{N}\neq 0$ and write $N=\operatorname{ord}_{q}(f)$. We note that if $f$ is
a modular function this coincides with $\operatorname{ord}(f,\infty)$. See
equation (2.1) below for this other notation. Suppose $t$ and $f\in R((q))$
and the composition $f\circ t$ is well-defined as a formal Laurent series.
This is the case if $\operatorname{ord}_{q}(t)>0$. The $t$-order of
$F=f\circ t=\sum_{n\in\mathbb{Z}}a_{n}\,t^{n},$
where $t=\sum_{n\in\mathbb{Z}}b_{n}\,q^{n}$, is defined to be the smallest
integer $N$ such that $a_{N}\neq 0$ and write $N=\operatorname{ord}_{t}(F)$.
For example, if
$f={q}^{-1}+1+2\,q+\cdots,\qquad t=q^{2}+3q^{3}+5q^{4}+\cdots,$
then
$F=f\circ t={t}^{-1}+1+2\,t+\cdots=q^{-2}-3{q}^{-1}+5+\cdots,$
so that $\operatorname{ord}_{q}(f)=-1$, $\operatorname{ord}_{q}(t)=2$,
$\operatorname{ord}_{t}(F)=-1$ and $\operatorname{ord}_{q}(F)=-2$.
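The little example above can be reproduced mechanically. The sketch below (hypothetical helper code of ours, with all series truncated at a fixed relative precision) recovers the displayed expansion of $F=f\circ t$:

```python
PREC = 5   # relative precision: keep the first PREC coefficients

def mul(a, b):
    c = [0] * PREC
    for i, ai in enumerate(a):
        if ai:
            for j in range(PREC - i):
                c[i + j] += ai * b[j]
    return c

def inv_unit(a):
    """Inverse of a series with a[0] == 1, truncated at PREC."""
    c = [0] * PREC
    c[0] = 1
    for n in range(1, PREC):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1))
    return c

# t = q^2 + 3q^3 + 5q^4 + ... = q^2 * (1 + 3q + 5q^2 + ...)
t_unit = [1, 3, 5, 0, 0]
tinv = inv_unit(t_unit)        # so t^{-1} = q^{-2} * tinv
assert tinv[:3] == [1, -3, 4]

# F = f o t = t^{-1} + 1 + 2t; store F[i] = coefficient of q^{i-2}
F = tinv[:]
F[2] += 1                      # the constant 1 sits at q^0, i.e. index 2
F[4] += 2                      # 2t contributes 2q^2 (higher terms exceed PREC)
assert F[:3] == [1, -3, 5]     # F = q^{-2} - 3q^{-1} + 5 + ...
```

so indeed $\operatorname{ord}_{q}(F)=-2$ while $\operatorname{ord}_{t}(F)=-1$, as claimed.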
## 2\. Modular Functions
In this section we present the needed theory of modular functions which we use
to prove identities. A general reference is Rankin’s book [26].
### 2.1. Background theory
Our presentation is based on [8, pp.326-329]. Let
$\mathscr{H}=\\{\tau\,:\,\operatorname{Im}(\tau)>0\\}$ (the complex upper
half-plane). For each $M=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in
M_{2}^{+}(\mathbb{Z})$, the set of integer $2\times 2$ matrices with positive
determinant, the bilinear transformation $M(\tau)$ is defined by
determinant, the bilinear transformation $M(\tau)$ is defined by
$M\tau=M(\tau)=\frac{a\tau+b}{c\tau+d}.$
The stroke operator is defined by
$\left({f}\,\left\arrowvert\,M\right.\right)(\tau)=f(M\tau),$
and satisfies
${f}\,\left\arrowvert\,MS\right.={{f}\,\left\arrowvert\,M\right.}\,\left\arrowvert\,S\right..$
The modular group $\Gamma(1)$ is defined by
$\Gamma(1)=\left\\{\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in
M_{2}^{+}(\mathbb{Z})\,:\,ad-bc=1\right\\}.$
We consider the following subgroups $\Gamma$ of the modular group with finite
index
$\Gamma_{0}(N)=\left\\{\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\in\Gamma(1)\,:\,c\equiv 0\pmod{N}\right\\},$
$\Gamma_{1}(N)=\left\\{\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\in\Gamma(1)\,:\,\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\equiv\begin{pmatrix}1&*\\\
0&1\end{pmatrix}\pmod{N}\right\\}.$
Such a group $\Gamma$ acts on $\mathscr{H}\cup\mathbb{Q}\cup\\{\infty\\}$ by
the transformation $V(\tau)$, for $V\in\Gamma$, which induces an equivalence
relation. We call a set
$\mathscr{F}\subseteq\mathscr{H}\cup\mathbb{Q}\cup\\{\infty\\}$ a fundamental
set for $\Gamma$ if it contains one element of each equivalence class. The
finite set $\mathscr{F}\cap\left(\mathbb{Q}\cup\\{\infty\\}\right)$ is called
the complete set of inequivalent cusps.
A function $f\,:\,\mathscr{H}\longrightarrow\mathbb{C}$ is a modular function
on $\Gamma$ if the following conditions hold:
1. (i)
$f$ is holomorphic on $\mathscr{H}$.
2. (ii)
$\displaystyle{f}\,\left\arrowvert\,V\right.=f$ for all $V\in\Gamma$.
3. (iii)
For every $A\in\Gamma(1)$ the function ${f}\,\left\arrowvert\,A^{-1}\right.$
has an expansion
$({f}\,\left\arrowvert\,A^{-1}\right.)(\tau)=\sum_{m=m_{0}}^{\infty}b(m)\exp(2\pi
i\tau m/\kappa)$
on some half-plane $\left\\{\tau\,:\,\operatorname{Im}\tau>h\geq 0\right\\}$,
where $T=\begin{pmatrix}1&1\\\ 0&1\end{pmatrix}$ and
$\kappa=\min\left\\{k>0\,:\,\pm A^{-1}T^{k}A\in\Gamma\right\\}.$
The positive integer $\kappa=\kappa(\Gamma;\zeta)$ is called the fan width of
$\Gamma$ at the cusp $\zeta=A^{-1}\infty$. If $b(m_{0})\neq 0$, then we write
$\operatorname{Ord}(f,\zeta,\Gamma)=m_{0}$
which is called the order of $f$ at $\zeta$ with respect to $\Gamma$. We also
write
(2.1)
$\operatorname{ord}(f;\zeta)=\frac{m_{0}}{\kappa}=\frac{m_{0}}{\kappa(\Gamma,\zeta)},$
which is called the invariant order of $f$ at $\zeta$. For each
$z\in\mathscr{H}$, $\operatorname{ord}(f;z)$ denotes the order of $f$ at $z$
as an analytic function of $z$, and the order of $f$ with respect to $\Gamma$
is defined by
$\operatorname{Ord}(f,z,\Gamma)=\frac{1}{\ell}\operatorname{ord}(f;z)$
where $\ell$ is the order of $z$ as a fixed point of $\Gamma$. We note
$\ell=1$, $2$ or $3$. Our main tool for proving modular function identities is
the valence formula [26, Theorem 4.1.4, p.98]. If $f\neq 0$ is a modular
function on $\Gamma$ and $\mathscr{F}$ is any fundamental set for $\Gamma$
then
(2.2) $\sum_{z\in\mathscr{F}}\operatorname{Ord}(f,z,\Gamma)=0.$
### 2.2. Eta-product identities
We will consider eta-products of the form
(2.3) $f(\tau)=\prod_{d\mid N}\eta(d\tau)^{m_{d}},$
where $N$ is a positive integer, each $d>0$ and $m_{d}\in\mathbb{Z}$.
#### Modularity
Newman [24] has found necessary and sufficient conditions under which an eta-
product is a modular function on $\Gamma_{0}(N)$.
###### Theorem 2.1 ([24, Theorem 4.7]).
The function $f(\tau)$ (given in (2.3)) is a modular function on
$\Gamma_{0}(N)$ if and only if
1. (1)
$\displaystyle\sum_{d\mid N}m_{d}=0$,
2. (2)
$\displaystyle\sum_{d\mid N}dm_{d}\equiv 0\pmod{24}$,
3. (3)
$\displaystyle\sum_{d\mid N}\frac{Nm_{d}}{d}\equiv 0\pmod{24}$, and
4. (4)
$\displaystyle\prod_{d\mid N}d^{|m_{d}|}$ is a square.
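Newman's four conditions are easy to mechanize. The following sketch (ours, not from the paper) checks them for the eta-product $\eta(\tau)^{2}\eta(10\tau)^{4}\eta(2\tau)^{-4}\eta(5\tau)^{-2}$, which appears below as the Hauptmodul $t$ of (2.9):

```python
from math import isqrt, prod

def is_modular_on_Gamma0(N, m):
    """Newman's conditions (Theorem 2.1) for prod_d eta(d*tau)^{m[d]};
    m is a dict d -> exponent with every key d dividing N."""
    c1 = sum(m.values()) == 0
    c2 = sum(d * e for d, e in m.items()) % 24 == 0
    c3 = sum((N // d) * e for d, e in m.items()) % 24 == 0
    P = prod(d ** abs(e) for d, e in m.items())
    c4 = isqrt(P) ** 2 == P
    return c1 and c2 and c3 and c4

# the Hauptmodul t of (2.9): eta(tau)^2 eta(10tau)^4 / (eta(2tau)^4 eta(5tau)^2)
assert is_modular_on_Gamma0(10, {1: 2, 2: -4, 5: -2, 10: 4})
# dropping the eta(10tau)^4 factor already violates condition (1):
assert not is_modular_on_Gamma0(10, {1: 2, 2: -4, 5: -2})
```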
#### Orders at cusps
Ligozat [21] has computed the invariant order of an eta-product at the cusps
of $\Gamma_{0}(N)$.
###### Theorem 2.2 ([21, Theorem 4.8]).
If the eta-product $f(\tau)$ (given in (2.3)) is a modular function on
$\Gamma_{0}(N)$, then its order at the cusp $\zeta=\frac{b}{c}$ (assuming
$(b,c)=1$) is
(2.4) $\operatorname{ord}(f(\tau);\zeta)=\sum_{d\mid
N}\frac{(d,c)^{2}m_{d}}{24d}.$
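Formula (2.4) depends only on the denominator $c$ of the cusp, so it is a one-line computation. The sketch below (ours) reproduces, for the Hauptmodul $t$ of (2.9) on $\Gamma_{0}(10)$, the invariant orders that appear later in the tables:

```python
from math import gcd
from fractions import Fraction

def order_at_cusp(m, c):
    """Ligozat's formula (2.4): invariant order of prod_d eta(d*tau)^{m[d]}
    at a cusp b/c with (b,c) = 1; exact rational arithmetic."""
    return sum(Fraction(gcd(d, c) ** 2 * e, 24 * d) for d, e in m.items())

t_exps = {1: 2, 2: -4, 5: -2, 10: 4}
# orders of t at the cusps 0, 1/2, 1/5, 1/10 of Gamma_0(10):
assert [order_at_cusp(t_exps, c) for c in (1, 2, 5, 10)] == \
       [0, Fraction(-1, 5), 0, 1]
```

Note that the value $-1/5$ at the cusp $1/2$ matches the table in the proof of Theorem 2.6.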
Chua and Lang [14] have found a set of inequivalent cusps for $\Gamma_{0}(N)$.
###### Theorem 2.3 ([14, p.354]).
Let N be a positive integer and for each positive divisor $d$ of $N$ let
$e_{d}=(d,N/d)$. Then the set
$\Delta={\underset{d\mid N}{\cup}}\,S_{d}$
is a complete set of inequivalent cusps of $\Gamma_{0}(N)$ where
$S_{d}=\\{x_{i}/d\,:\,(x_{i},d)=1,\quad 0\leq x_{i}\leq d-1,\quad
x_{i}\not\equiv x_{j}\pmod{e_{d}}\\}.$
Biagioli [9] has found the fan width of the cusps of $\Gamma_{0}(N)$.
###### Lemma 2.4 ([9, Lemma 4.2]).
If $(r,s)=1$, then the fan width of $\Gamma_{0}(N)$ at $\frac{r}{s}$ is
$\kappa\left(\Gamma_{0}(N);\frac{r}{s}\right)=\frac{N}{(N,s^{2})}.$
#### An application of the valence formula
Since eta-products have no zeros or poles in $\mathscr{H}$ the following
result follows easily from the valence formula (2.2).
###### Theorem 2.5.
Let $f_{1}(\tau)$, $f_{2}(\tau)$, …, $f_{n}(\tau)$ be eta-products that are
modular functions on $\Gamma_{0}(N)$. Let $\mathcal{S}_{N}$ be a set of
inequivalent cusps for $\Gamma_{0}(N)$. Define the constant
(2.5) $B=\sum_{\begin{subarray}{c}\zeta\in\mathcal{S}_{N}\\\
\zeta\neq\infty\end{subarray}}\mbox{min}(\left\\{\operatorname{Ord}(f_{j},\zeta,\Gamma_{0}(N))\,:\,1\leq
j\leq n\right\\}),$
and consider
(2.6)
$g(\tau):=\alpha_{1}f_{1}(\tau)+\alpha_{2}f_{2}(\tau)+\cdots+\alpha_{n}f_{n}(\tau),$
where each $\alpha_{j}\in\mathbb{C}$. Then
$g(\tau)\equiv 0$
if and only if
(2.7) $\operatorname{Ord}(g(\tau),\infty,\Gamma_{0}(N))>-B.$
An algorithm for proving eta-product identities.
STEP 0. Write the identity in the following form:
(2.8)
$\alpha_{1}f_{1}(\tau)+\alpha_{2}f_{2}(\tau)+\cdots+\alpha_{n}f_{n}(\tau)=0,$
where each $\alpha_{i}\in\mathbb{C}$ and each $f_{i}(\tau)$ is an eta-product
of level $N$.
STEP 1. Use Theorem 2.1 to check that $f_{j}(\tau)$ is a modular function on
$\Gamma_{0}(N)$ for each $1\leq j\leq n$.
STEP 2. Use Theorem 2.3 to find a set $\mathcal{S}_{N}$ of inequivalent cusps
for $\Gamma_{0}(N)$ and the fan width of each cusp.
STEP 3. Use Theorem 2.2 to calculate the order of each eta-product
$f_{j}(\tau)$ at each cusp of $\Gamma_{0}(N)$.
STEP 4. Calculate
$B=\sum_{\begin{subarray}{c}\zeta\in\mathcal{S}_{N}\\\
\zeta\neq\infty\end{subarray}}\mbox{min}(\left\\{\operatorname{Ord}(f_{j},\zeta,\Gamma_{0}(N))\,:\,1\leq
j\leq n\right\\}).$
STEP 5. Show that
$\operatorname{Ord}(g(\tau),\infty,\Gamma_{0}(N))>-B$
where
$g(\tau)=\alpha_{1}f_{1}(\tau)+\alpha_{2}f_{2}(\tau)+\cdots+\alpha_{n}f_{n}(\tau).$
Theorem 2.5 then implies that $g(\tau)\equiv 0$ and hence the eta-product
identity (2.8).
The third author has written a MAPLE package called ETA which implements this
algorithm. See
http://qseries.org/fgarvan/qmaple/ETA/
#### A modular equation
Define
(2.9) $\displaystyle t$
$\displaystyle:=t(\tau):=\frac{\eta(\tau)^{2}\eta(10\tau)^{4}}{\eta(2\tau)^{4}\eta(5\tau)^{2}}$
$\displaystyle=q-2\,{q}^{2}+3\,{q}^{3}-6\,{q}^{4}+11\,{q}^{5}-16\,{q}^{6}+24\,{q}^{7}-38\,{q}^{8}+57\,{q}^{9}-82\,{q}^{10}+117\,{q}^{11}+\cdots.$
We note that $t(\tau)$ is a Hauptmodul for $\Gamma_{0}(10)$ [22]. As an
application of our algorithm we prove the following theorem which will be
needed later.
###### Theorem 2.6.
Let
(2.10) $\displaystyle\sigma_{0}(\tau)$ $\displaystyle=-t,$ (2.11)
$\displaystyle\sigma_{1}(\tau)$ $\displaystyle=-5t^{2}+2\cdot 5t,$ (2.12)
$\displaystyle\sigma_{2}(\tau)$ $\displaystyle=-5^{2}t^{3}+2\cdot
5^{2}t^{2}-7\cdot 5t,$ (2.13) $\displaystyle\sigma_{3}(\tau)$
$\displaystyle=-5^{3}t^{4}+2\cdot 5^{3}t^{3}-7\cdot 5^{2}t^{2}+12\cdot 5t,$
(2.14) $\displaystyle\sigma_{4}(\tau)$ $\displaystyle=-5^{4}t^{5}+2\cdot
5^{4}t^{4}-7\cdot 5^{3}t^{3}+12\cdot 5^{2}t^{2}-11\cdot 5t,$
where $t=t(\tau)$ is defined in (2.9). Then
(2.15) $t(\tau)^{5}+\sum_{j=0}^{4}\sigma_{j}(5\tau)\,t(\tau)^{j}=0.$
###### Proof.
From Theorem 2.1 we find that $t(\tau)$ is a modular function on
$\Gamma_{0}(10)$ and $t(5\tau)$ is a modular function on $\Gamma_{0}(50)$.
Hence each term on the left side of (2.15) is a modular function on
$\Gamma_{0}(50)$. For convenience we divide by $t(\tau)^{5}$ and let
(2.16) $g(\tau)=1+\sum_{j=0}^{4}\sigma_{j}(5\tau)\,t(\tau)^{j-5}.$
From Theorem 2.3, Lemma 2.4 and Theorem 2.2 we have the following table of fan
widths for the cusps of $\Gamma_{0}(50)$, with the orders and invariant orders
of both $t(\tau)$ and $t(5\tau)$.
$\begin{array}[]{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hrule\cr\zeta&0&1/2&1/5&2/5&3/5&4/5&1/10&3/10&7/10&9/10&1/25&1/50\\\
\hrule\cr\kappa(\Gamma_{0}(50),\zeta)&50&25&2&2&2&2&1&1&1&1&2&1\\\
\hrule\cr\operatorname{ord}(t(\tau),\zeta)&0&-1/5&0&0&0&0&1&1&1&1&0&1\\\
\hrule\cr\operatorname{Ord}(t(\tau),\zeta,\Gamma_{0}(50))&0&-5&0&0&0&0&1&1&1&1&0&1\\\
\hrule\cr\operatorname{ord}(t(5\tau),\zeta)&0&-1/25&0&0&0&0&-1&-1&-1&-1&0&5\\\
\hrule\cr\operatorname{Ord}(t(5\tau),\zeta,\Gamma_{0}(50))&0&-1&0&0&0&0&-1&-1&-1&-1&0&5\\\
\hrule\cr\end{array}$
Expanding the right side of (2.16) gives $16$ terms of the form
$t(5\tau)^{k}t(\tau)^{j-5}$ with $1\leq k\leq j+1$ where $0\leq j\leq 4$,
together with $(k,j)=(0,5)$. We calculate the order of each term at each cusp
$\zeta$ of $\Gamma_{0}(50)$, thus obtaining the following lower bounds for
$\operatorname{Ord}(g(\tau),\zeta,\Gamma_{0}(50))$ at each cusp.
$\begin{array}[]{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hrule\cr\zeta&0&1/2&1/5&2/5&3/5&4/5&1/10&3/10&7/10&9/10&1/25&1/50\\\
\hrule\cr\operatorname{Ord}(g(\tau),\zeta,\Gamma_{0}(50))\geq&0&0&0&0&0&0&-6&-6&-6&-6&0&0\\\
\hrule\cr\end{array}$
Thus the constant $B$ in Theorem 2.5 is $B=-24$. It suffices to show that
$\operatorname{Ord}(g(\tau),\infty,\Gamma_{0}(50))>24.$
This is easily verified. Thus by Theorem 2.5 we have $g(\tau)\equiv 0$ and the
result follows. ∎
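Theorem 2.6 can also be confirmed numerically: expanding $t(\tau)$ and $t(5\tau)$ as $q$-series and substituting into (2.15), every coefficient up to the working precision must vanish. A sketch of such a check (ours, in exact integer arithmetic; not a replacement for the proof above):

```python
PREC = 40

def mul(a, b):
    c = [0] * PREC
    for i, ai in enumerate(a):
        if ai:
            for j in range(PREC - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    c = [0] * PREC
    c[0] = 1                       # assumes a[0] == 1
    for n in range(1, PREC):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1))
    return c

def power(a, k):
    r = [0] * PREC
    r[0] = 1
    for _ in range(k):
        r = mul(r, a)
    return r

def P(d):
    """prod_{n>=1} (1 - q^{d n}), truncated at PREC."""
    a = [0] * PREC
    a[0] = 1
    for n in range(d, PREC, d):
        for m in range(PREC - 1, n - 1, -1):
            a[m] -= a[m - n]
    return a

# t = q * P(1)^2 P(10)^4 / (P(2)^4 P(5)^2); the eta prefactors give q^{24/24}
A = mul(mul(power(P(1), 2), power(P(10), 4)),
        mul(power(inv(P(2)), 4), power(inv(P(5)), 2)))
t = [0] + A[:PREC - 1]
assert t[1:7] == [1, -2, 3, -6, 11, -16]       # matches (2.9)

t5 = [0] * PREC                                # t(5*tau): q -> q^5
for i, c in enumerate(t):
    if 5 * i < PREC:
        t5[5 * i] = c

# sigma_j as coefficient lists [c_1, c_2, ...] of t, t^2, ... from (2.10)-(2.14)
sigmas = [[-1],
          [10, -5],
          [-35, 50, -25],
          [60, -175, 250, -125],
          [-55, 300, -875, 1250, -625]]

def eval_poly(cs, s):
    out = [0] * PREC
    sk = s[:]
    for c in cs:
        out = [o + c * v for o, v in zip(out, sk)]
        sk = mul(sk, s)
    return out

lhs = power(t, 5)
for j in range(5):
    term = mul(eval_poly(sigmas[j], t5), power(t, j))
    lhs = [x + y for x, y in zip(lhs, term)]
assert all(c == 0 for c in lhs)                # (2.15) holds through q^39
```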
### 2.3. The $U_{p}$ operator
Let $p$ be prime and
$f=\sum_{m=m_{0}}^{\infty}a(m)q^{m}$
be a formal Laurent series. We define $U_{p}$ by
(2.17) $U_{p}(f):=\sum_{pm\geq m_{0}}a(pm)q^{m}.$
If $f$ is a modular function (with $q=\exp(2\pi i\tau)$),
(2.18)
$U_{p}(f)=\frac{1}{p}\sum_{j=0}^{p-1}{f}\,\left\arrowvert\,\begin{pmatrix}{1/p}&{j/p}\\\
{0}&{1}\end{pmatrix}\right.=\frac{1}{p}\sum_{j=0}^{p-1}f\left(\frac{\tau+j}{p}\right).$
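On coefficients, definition (2.17) simply selects every $p$-th term. A minimal sketch (ours) on truncated series with $m_{0}=0$:

```python
def U(p, coeffs):
    """U_p on a truncated q-series: coeffs[m] is the coefficient of q^m
    (so m_0 = 0 is assumed); returns {m: a(p*m)} over nonzero values."""
    return {i // p: c for i, c in enumerate(coeffs) if i % p == 0 and c}

# every 5th coefficient of the Hauptmodul t of (2.9):
t_coeffs = [0, 1, -2, 3, -6, 11, -16, 24, -38, 57, -82, 117]
assert U(5, t_coeffs) == {1: 11, 2: -82}
```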
By [5, Lemma 7, p.138] we have
###### Theorem 2.7.
Let $p$ be prime. If $f$ is a modular function on $\Gamma_{0}(pN)$ and $p\mid
N$, then $U_{p}(f)$ is a modular function on $\Gamma_{0}(N)$.
Gordon and Hughes [19, Theorem 4, p.336] have found lower bounds for the
invariant orders of $U_{p}(f)$ at cusps. Let $\nu_{p}(n)$ denote the $p$-adic
order of an integer $n$; i.e. the exponent of the highest power of $p$ that
divides $n$.
###### Theorem 2.8 ([19, Theorem 4]).
Suppose $f(\tau)$ is a modular function on $\Gamma_{0}(pN)$, where $p$ is
prime and $p\mid N$. Let $r=\frac{\beta}{\delta}$ be a cusp of
$\Gamma_{0}(N)$, where $\delta\mid N$ and $(\beta,\delta)=1$. Then
$\operatorname{Ord}(U_{p}(f),r,\Gamma_{0}(N))\geq\begin{cases}\frac{1}{p}\operatorname{Ord}(f,r/p,\Gamma_{0}(pN))&\mbox{if
$\nu_{p}(\delta)\geq\frac{1}{2}\nu_{p}(N)$}\\\
\operatorname{Ord}(f,r/p,\Gamma_{0}(pN))&\mbox{if
$0<\nu_{p}(\delta)<\frac{1}{2}\nu_{p}(N)$}\\\ \underset{0\leq k\leq
p-1}{\min}\operatorname{Ord}(f,(r+k)/p,\Gamma_{0}(pN))&\mbox{if
$\nu_{p}(\delta)=0$}.\end{cases}$
Theorems 2.5, 2.7 and 2.8 give the following algorithm.
#### An algorithm for proving $U_{p}$ eta-product identities
STEP 0. Write the identity in the form
(2.19)
${U_{p}}\left(\alpha_{1}g_{1}(\tau)+\alpha_{2}g_{2}(\tau)+\cdots+\alpha_{k}g_{k}(\tau)\right)=\alpha_{1}f_{1}(\tau)+\alpha_{2}f_{2}(\tau)+\cdots+\alpha_{n}f_{n}(\tau),$
where $p$ is prime, $p\mid N$, each $g_{j}(\tau)$ is an eta-product and a
modular function on $\Gamma_{0}(pN)$, and each $f_{j}(\tau)$ is an eta-product
and modular function on $\Gamma_{0}(N)$.
STEP 1. Use Theorem 2.1 to check that $f_{j}(\tau)$ is a modular function on
$\Gamma_{0}(N)$ for each $1\leq j\leq n$, and $g_{j}(\tau)$ is a modular
function on $\Gamma_{0}(pN)$ for each $1\leq j\leq k$.
STEP 2. Use Theorem 2.3 to find a set $\mathcal{S}_{N}$ of inequivalent cusps
for $\Gamma_{0}(N)$ and the fan width of each cusp.
STEP 3a. Compute $\operatorname{Ord}(f_{j},\zeta,\Gamma_{0}(N))$ for each $j$
at each cusp $\zeta$ of $\Gamma_{0}(N)$ apart from $\infty$.
STEP 3b. Use Theorem 2.8 to find lower bounds $L(g_{j},\zeta,N)$ for
$\operatorname{Ord}({U_{p}}(g_{j}),\zeta,\Gamma_{0}(N))$
for each cusp $\zeta$ of $\Gamma_{0}(N)$, and each $1\leq j\leq k$.
STEP 4. Calculate
(2.20) $B=\sum_{\begin{subarray}{c}\zeta\in\mathcal{S}_{N}\\\
\zeta\neq\infty\end{subarray}}\mbox{min}(\left\\{\operatorname{Ord}(f_{j},\zeta,\Gamma_{0}(N))\,:\,1\leq
j\leq n\right\\}\cup\\{L(g_{j},\zeta,N)\,:\,1\leq j\leq k\\}).$
STEP 5. Show that
$\operatorname{Ord}(h(\tau),\infty,\Gamma_{0}(N))>-B$
where
$h(\tau)={U_{p}}\left(\alpha_{1}g_{1}(\tau)+\alpha_{2}g_{2}(\tau)+\cdots+\alpha_{k}g_{k}(\tau)\right)-(\alpha_{1}f_{1}(\tau)+\alpha_{2}f_{2}(\tau)+\cdots+\alpha_{n}f_{n}(\tau)).$
Theorem 2.5 then implies that $h(\tau)\equiv 0$ and hence the $U_{p}$ eta-
product identity (2.19).
The third author has included an implementation of this algorithm in his ETA
MAPLE package.
As an application of our algorithm we sketch the proof of
(2.21) ${U_{5}}(g)=5\,f_{1}(\tau)+2\,f_{2}(\tau),$
where
$g(\tau)={\frac{\eta\left(50\,\tau\right)^{5}\eta\left(5\,\tau\right)^{4}\eta\left(4\,\tau\right)^{3}\eta\left(2\,\tau\right)^{3}}{\eta\left(100\,\tau\right)^{3}\eta\left(25\,\tau\right)^{2}\eta\left(10\,\tau\right)^{8}\eta\left(\tau\right)^{2}}},$
$f_{1}(\tau)={\frac{\eta\left(10\,\tau\right)^{8}\eta\left(\tau\right)^{4}}{\eta\left(5\,\tau\right)^{4}\eta\left(2\,\tau\right)^{8}}},\quad
f_{2}(\tau)={\frac{\eta\left(10\,\tau\right)^{5}\eta\left(\tau\right)^{2}}{\eta\left(20\,\tau\right)^{3}\eta\left(5\,\tau\right)^{2}\eta\left(4\,\tau\right)\eta\left(2\,\tau\right)}}.$
We use Theorem 2.1 to check that $f_{j}(\tau)$ is a modular function on
$\Gamma_{0}(20)$ for each $1\leq j\leq 2$, and $g(\tau)$ is a modular function
on $\Gamma_{0}(100)$. We use Theorem 2.3 to find a set $\mathcal{S}_{20}$ of
inequivalent cusps for $\Gamma_{0}(20)$ and the fan width of each cusp. By
Theorems 2.3, 2.2 and Lemma 2.4 we have the following table of orders.
$\begin{array}[]{|c|c|c|c|c|c|c|}\hrule\cr\zeta&0&1/2&1/4&1/5&1/10&1/20\\\
\hrule\cr\operatorname{Ord}(f_{1}(\tau),\zeta,\Gamma_{0}(20))&0&-2&-2&0&2&2\\\
\hrule\cr\operatorname{Ord}(f_{2}(\tau),\zeta,\Gamma_{0}(20))&1&0&-1&0&1&-1\\\
\hrule\cr\end{array}$
Using Theorems 2.3, 2.2, 2.8 and some calculation we have the following table
of lower bounds $L(g,\zeta,20)$.
$\begin{array}[]{|c|c|c|c|c|c|c|}\hrule\cr\zeta&0&1/2&1/4&1/5&1/10&1/20\\\
\hrule\cr\operatorname{Ord}({U_{5}}(g),\zeta,\Gamma_{0}(20))\geq&0&-2&-2&-1/5&3/5&-6/5\\\
\hrule\cr\end{array}$
Thus the constant $B$ in (2.20) is $B=-18/5$. It suffices to show that
$\operatorname{Ord}(h(\tau),\infty,\Gamma_{0}(20))\geq 4,$
where
$h(\tau)={U_{5}}(g)-(5f_{1}(\tau)+2f_{2}(\tau)).$
This is easily verified. Thus by Theorem 2.5 we have $h(\tau)\equiv 0$ and the
result (2.21) follows.
### 2.4. Generalized eta-functions
The generalized Dedekind eta function is defined to be
(2.22)
$\eta_{\delta,g}(\tau)=q^{\frac{\delta}{2}P_{2}(g/\delta)}\prod_{m\equiv\pm
g\pmod{\delta}}(1-q^{m}),$
where $P_{2}(t)=\\{t\\}^{2}-\\{t\\}+\tfrac{1}{6}$ is the second periodic
Bernoulli polynomial, $\\{t\\}=t-[t]$ is the fractional part of $t$,
$g,\delta,m\in\mathbb{Z}^{+}$ and $0<g<\delta$. The function
$\eta_{\delta,g}(\tau)$ is a modular function on $\mbox{SL}_{2}(\mathbb{Z})$
with a multiplier system. Let $N$ be a fixed positive integer. A generalized
Dedekind eta-product of level $N$ has the form
(2.23) $f(\tau)=\prod_{\begin{subarray}{c}\delta\mid N\\\
0<g<\delta\end{subarray}}\eta_{\delta,g}^{r_{\delta,g}}(\tau),$
where
(2.24) $r_{\delta,g}\in\begin{cases}\frac{1}{2}\mathbb{Z}&\mbox{if
$g=\delta/2$},\\\ \mathbb{Z}&\mbox{otherwise}.\end{cases}$
Robins [27] has found sufficient conditions under which a generalized eta-
product is a modular function on $\Gamma_{1}(N)$.
###### Theorem 2.9 ([27, Theorem 3]).
The function $f(\tau)$, defined in (2.23), is a modular function on
$\Gamma_{1}(N)$ if
1. (i)
$\displaystyle\sum_{\begin{subarray}{c}\delta\mid N\\\ g\end{subarray}}\delta
P_{2}(\textstyle{\frac{g}{\delta}})r_{\delta,g}\equiv 0\pmod{2}$, and
2. (ii)
$\displaystyle\sum_{\begin{subarray}{c}\delta\mid N\\\
g\end{subarray}}\frac{N}{\delta}P_{2}(0)r_{\delta,g}\equiv 0\pmod{2}$.
Cho, Koo and Park [13] have found a set of inequivalent cusps for
$\Gamma_{1}(N)\cap\Gamma_{0}(mN)$. The group $\Gamma_{1}(N)$ corresponds to
the case $m=1$.
###### Theorem 2.10 ([13, Corollary 4, p.930]).
Let $a$, $c$, $a^{\prime}$, $c^{\prime}\in\mathbb{Z}$ with
$(a,c)=(a^{\prime},c^{\prime})=1$.
1. (i)
The cusps $\frac{a}{c}$ and $\frac{a^{\prime}}{c^{\prime}}$ are equivalent mod
$\Gamma_{1}(N)$ if and only if
$\begin{pmatrix}a^{\prime}\\\
c^{\prime}\end{pmatrix}\equiv\pm\begin{pmatrix}a+nc\\\ c\end{pmatrix}\pmod{N}$
for some integer $n$.
2. (ii)
The following is a complete set of inequivalent cusps mod $\Gamma_{1}(N)$.
$\displaystyle\mathcal{S}$
$\displaystyle=\left\\{\frac{y_{c,j}}{x_{c,i}}\,:\,0<c\mid
N,\,0<s_{c,i},\,a_{c,j}\leq N,\,(s_{c,i},N)=(a_{c,j},N)=1,\right.$
$\displaystyle\qquad s_{c,i}=s_{c,i^{\prime}}\iff s_{c,i}\equiv\pm
s_{c,i^{\prime}}\pmod{{\textstyle\frac{N}{c}}},$ $\displaystyle\qquad
a_{c,j}=a_{c,j^{\prime}}\iff\begin{cases}a_{c,j}\equiv\pm
a_{c,j^{\prime}}\pmod{c},&\mbox{if $c=\frac{N}{2}$ or $N$},\\\ a_{c,j}\equiv
a_{c,j^{\prime}}\pmod{c},&\mbox{otherwise},\end{cases}$
$\displaystyle\left.\vphantom{\frac{y_{c,j}}{x_{c,i}}}x_{c,i},y_{c,j}\in\mathbb{Z}\,\mbox{chosen
s.th.}\,x_{c,i}\equiv cs_{c,i},\,y_{c,j}\equiv
a_{c,j}\pmod{N},\,(x_{c,i},y_{c,j})=1\right\\},$
3. (iii)
and the fan width of the cusp $\frac{a}{c}$ is given by
$\kappa({\textstyle\frac{a}{c}},\Gamma_{1}(N))=\begin{cases}1,&\mbox{if $N=4$
and $(c,4)=2$},\\\ \frac{N}{(c,N)},&\mbox{otherwise}.\end{cases}$
In this theorem, it is understood as usual that the fraction $\frac{\pm 1}{0}$
corresponds to $\infty$.
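The fan width in part (iii) depends only on $c$ and $N$, so it is a one-line computation; a sketch (function name is ours):

```python
from math import gcd

def fan_width(c, N):
    """Fan width of the cusp a/c on Gamma_1(N), per Theorem 2.10(iii);
    it depends only on c and N.  The cusp infinity corresponds to c = 0."""
    if N == 4 and gcd(c, 4) == 2:
        return 1
    return N // gcd(c, N)
```

For example, on $\Gamma_{1}(20)$ the cusp $\infty=\frac{1}{0}$ has width $1$ and a cusp with $c=2$ has width $10$, while the exceptional case $N=4$, $(c,4)=2$ gives width $1$.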
Robins [27] has calculated the invariant order of $\eta_{\delta,g}(\tau)$ at
any cusp. This gives a method for calculating the invariant order at any cusp
of a generalized eta-product.
###### Theorem 2.11 ([27]).
The order at the cusp $\zeta=\frac{a}{c}$ (assuming $(a,c)=1$) of the
generalized eta-function $\eta_{\delta,g}(\tau)$ (defined in (2.22) and
assuming $0<g<\delta$) is
(2.25)
$\operatorname{ord}(\eta_{\delta,g}(\tau);\zeta)=\frac{\varepsilon^{2}}{2\delta}\,P_{2}\left(\frac{ag}{\varepsilon}\right),$
where $\varepsilon=(\delta,c)$.
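Formula (2.25) is likewise mechanical to evaluate with exact rationals; a small Python sketch (names are ours):

```python
from fractions import Fraction
from math import gcd

def P2(t):
    """Second periodic Bernoulli polynomial: P2(t) = {t}^2 - {t} + 1/6."""
    frac = t - (t.numerator // t.denominator)
    return frac * frac - frac + Fraction(1, 6)

def ord_eta(delta, g, a, c):
    """Invariant order of eta_{delta,g} at the cusp a/c, (a,c)=1, via (2.25).
    The cusp infinity is a/c = 1/0, for which gcd(delta, 0) = delta."""
    eps = gcd(delta, c)
    return Fraction(eps * eps, 2 * delta) * P2(Fraction(a * g, eps))
```

For example, $\operatorname{ord}(\eta_{5,2};\infty)=\tfrac{25}{10}P_{2}(\tfrac{2}{5})=-\tfrac{11}{60}$.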
#### An algorithm for proving generalized eta-product identities
We note that the analog of Theorem 2.5 holds for generalized eta-products
which are modular functions on $\Gamma_{1}(N)$, and follows easily from the
valence formula (2.2).
###### Theorem 2.12 ([17, Cor.2.5]).
Let $f_{1}(\tau)$, $f_{2}(\tau)$, …, $f_{n}(\tau)$ be generalized eta-products
that are modular functions on $\Gamma_{1}(N)$. Let $\mathcal{S}_{N}$ be a set
of inequivalent cusps for $\Gamma_{1}(N)$. Define the constant
(2.26) $B=\sum_{\begin{subarray}{c}s\in\mathcal{S}_{N}\\\
s\neq\infty\end{subarray}}\mbox{min}(\left\\{\operatorname{Ord}(f_{j},s,\Gamma_{1}(N))\,:\,1\leq
j\leq n\right\\}\cup\\{0\\}),$
and consider
(2.27)
$g(\tau):=\alpha_{1}f_{1}(\tau)+\alpha_{2}f_{2}(\tau)+\cdots+\alpha_{n}f_{n}(\tau)+1,$
where each $\alpha_{j}\in\mathbb{C}$. Then
$g(\tau)\equiv 0$
if and only if
(2.28) $\operatorname{Ord}(g(\tau),\infty,\Gamma_{1}(N))>-B.$
The algorithm for proving generalized eta-product identities is completely
analogous to the method for proving eta-product identities described in
Section 2.2. To prove an identity in the form
$\alpha_{1}f_{1}(\tau)+\alpha_{2}f_{2}(\tau)+\cdots+\alpha_{n}f_{n}(\tau)+1=0,$
the algorithm simply involves calculating the constant $B$ in (2.26) and then
calculating enough coefficients to show that the inequality (2.28) holds. A
more complete description is given in [17].
The third author has written a MAPLE package called thetaids which implements
this algorithm. See
http://qseries.org/fgarvan/qmaple/thetaids/
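The two-step structure of the algorithm (compute the cusp constant, then check finitely many coefficients) can be sketched as follows. This is an illustrative skeleton under the simplifying assumption that the expansion of $g$ at $\infty$ starts at $q^{0}$; it is not an implementation of the thetaids package.

```python
import math

def valence_bound(cusp_orders):
    """B of (2.26): sum over cusps s != infinity of
    min({Ord(f_j, s) : j} together with 0); each summand is <= 0."""
    return sum(min(list(orders) + [0]) for orders in cusp_orders.values())

def identity_proved(g_coeffs, B):
    """Theorem 2.12: g vanishes identically iff Ord(g, infinity) > -B.
    g_coeffs[n] is the coefficient of q^n (expansion assumed to start at
    q^0), so it suffices that the coefficients of q^0, ..., q^floor(-B)
    all vanish."""
    bound = math.floor(-B)
    return all(c == 0 for c in g_coeffs[:bound + 1])
```

With $B=-18/5$ as in the example above, `identity_proved` checks that the coefficients of $q^{0},\dots,q^{3}$ vanish, i.e. that the order at $\infty$ is at least $4$.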
## 3\. The rank parity function modulo powers of $5$
### 3.1. A Generating Function
In this section we prove an identity for the generating function of
$a_{f}(5n-1)+a_{f}(n/5),$
where it is understood that $a_{f}(n)=0$ if $n$ is not a non-negative integer.
Our proof depends on some results of Mao [23] who found $5$-dissection results
for the rank modulo $10$.
###### Theorem 3.1.
(3.1)
$\sum_{n=0}^{\infty}(a_{f}(5n-1)+a_{f}(n/5))q^{n}=\frac{J_{2}^{4}J_{10}^{2}}{J_{1}J_{4}^{3}J_{20}}-4q\frac{J_{1}^{2}J_{4}^{3}J_{5}J_{20}}{J_{2}^{5}J_{10}}.$
###### Proof.
From Watson [29, p.64] we have
(3.2) $\displaystyle f(q)$
$\displaystyle=\frac{2}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2}}{1+q^{n}}.$
We find that
(3.3)
$\displaystyle\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2+4n}}{1+q^{5n}}$
$\displaystyle=\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2}}{1+q^{5n}},$
$\displaystyle\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2+3n}}{1+q^{5n}}$
$\displaystyle=\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2+n}}{1+q^{5n}}.$
By [23, Lemma 3.1] we have
(3.4)
$\displaystyle\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2}}{1+q^{5n}}$
$\displaystyle=P(q^{5},-q^{5};q^{25})-\frac{P(q^{10},-q^{5};q^{25})}{q^{3}}+\frac{(q;q)_{\infty}}{J_{25}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{75n(n+1)/2+5}}{1+q^{25n+5}},$
(3.5)
$\displaystyle\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2+n}}{1+q^{5n}}$
$\displaystyle=P(q^{10},-q^{10};q^{25})-q^{3}P(q^{5},-q^{10};q^{25})-\frac{(q;q)_{\infty}}{J_{25}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{75n(n+1)/2+8}}{1+q^{25n+10}},$
(3.6)
$\displaystyle\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2+2n}}{1+q^{5n}}$
$\displaystyle=\frac{P(q^{5},-1;q^{25})}{q^{6}}-\frac{P(q^{10},-1;q^{25})}{q^{9}}-\frac{(q;q)_{\infty}}{J_{25}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{25n(3n+1)/2-1}}{1+q^{25n}},$
where
(3.7)
$P(a,b;q)=\frac{[a,a^{2};q]_{\infty}(q;q)_{\infty}^{2}}{[b/a,ab,b;q]_{\infty}}.$
From (3.2)-(3.6), and noting that
$P(q^{5},-q^{5};q^{25})=P(q^{10},-q^{10};q^{25})$ we have
(3.8) $\displaystyle f(q)=$
$\displaystyle\frac{2}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2}}{1+q^{n}}$
$\displaystyle=$
$\displaystyle\frac{2}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2}(1-q^{n}+q^{2n}-q^{3n}+q^{4n})}{1+q^{5n}}$
$\displaystyle=$
$\displaystyle\frac{2}{(q;q)_{\infty}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{n(3n+1)/2}(2-2q^{n}+q^{2n})}{1+q^{5n}}$
$\displaystyle=$
$\displaystyle\frac{2}{J_{1}}\bigg{\\{}2q^{3}P(q^{5},-q^{10};q^{25})-\frac{2P(q^{10},-q^{5};q^{25})}{q^{3}}+\frac{P(q^{5},-1;q^{25})}{q^{6}}-\frac{P(q^{10},-1;q^{25})}{q^{9}}\bigg{\\}}$
$\displaystyle+\frac{4}{J_{25}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{75n(n+1)/2+5}}{1+q^{25n+5}}+\frac{4}{J_{25}}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}q^{75n(n+1)/2+8}}{1+q^{25n+10}}-\frac{1}{q}f(q^{25}).$
We let
(3.9)
$g(q)=\frac{2}{J_{1}}\bigg{\\{}2q^{3}P(q^{5},-q^{10};q^{25})-\frac{2P(q^{10},-q^{5};q^{25})}{q^{3}}+\frac{P(q^{5},-1;q^{25})}{q^{6}}-\frac{P(q^{10},-1;q^{25})}{q^{9}}\bigg{\\}},$
and write the $5$-dissection of $g(q)$ as
(3.10) $\displaystyle
g(q)=g_{0}(q^{5})+q\,g_{1}(q^{5})+q^{2}\,g_{2}(q^{5})+q^{3}\,g_{3}(q^{5})+q^{4}\,g_{4}(q^{5}).$
From (3.2), (3.9) and (3.10), after dividing both sides by $q^{4}$ and
replacing $q^{5}$ by $q$, we have
(3.11) $\sum_{n=0}^{\infty}a_{f}(5n+4)q^{n}=-\frac{1}{q}f(q^{5})+g_{4}(q).$
The $5$-dissection of $J_{1}$ is well known:
(3.12) $J_{1}=J_{25}\bigg{(}B(q^{5})-q-q^{2}\frac{1}{B(q^{5})}\bigg{)},$
where
$B(q)=\frac{J_{2,5}}{J_{1,5}}.$
See for example [18, Lemma (3.18)].
From (3.9), (3.10) and (3.12)
(3.13) $\displaystyle
J_{25}\,(g_{0}(q^{5})+q\,g_{1}(q^{5})+q^{2}\,g_{2}(q^{5})+q^{3}\,g_{3}(q^{5})+q^{4}\,g_{4}(q^{5}))\bigg{(}B(q^{5})-q-q^{2}\frac{1}{B(q^{5})}\bigg{)}$
$\displaystyle=4q^{3}P(q^{5},-q^{10};q^{25})-\frac{4P(q^{10},-q^{5};q^{25})}{q^{3}}+\frac{2P(q^{5},-1;q^{25})}{q^{6}}-\frac{2P(q^{10},-1;q^{25})}{q^{9}}.$
By expanding the left side of (3.13) and comparing both sides according to the
residue of the exponent of $q$ modulo 5, we obtain 5 equations:
(3.14) $\displaystyle B(q)g_{0}-q^{5}g_{4}-\frac{q^{5}}{B(q)}g_{3}=0,$ (3.15)
$\displaystyle
B(q)g_{1}-g_{0}-\frac{q}{B(q)}g_{4}=-\frac{2P(q^{2},-1;q^{5})}{q^{2}J_{5}},$
(3.16) $\displaystyle
B(q)g_{2}-g_{1}-\frac{1}{B(q)}g_{0}=-\frac{4P(q^{2},-q;q^{5})}{qJ_{5}},$
(3.17) $\displaystyle
B(q)g_{3}-g_{2}-\frac{1}{B(q)}g_{1}=\frac{4P(q,-q^{2};q^{5})}{J_{5}},$ (3.18)
$\displaystyle
B(q)g_{4}-g_{3}-\frac{1}{B(q)}g_{2}=\frac{2P(q,-1;q^{5})}{q^{2}J_{5}},$
where $g_{j}=g_{j}(q)$ for $0\leq j\leq 4$.
Solving these equations we find that
(3.19) $\displaystyle g_{4}(q)$
$\displaystyle=\frac{1}{J_{5}(B^{5}-11q-q^{2}/B^{5})}\bigg{(}\frac{2}{q^{2}}X_{2}B^{4}-\frac{2}{q}X_{1}B^{-4}+4X_{4}B^{3}+4X_{3}B^{-3}$
$\displaystyle-\frac{8}{q}X_{3}B^{2}+8qX_{4}B^{-2}-\frac{6}{q^{2}}X_{1}B-\frac{6}{q}X_{2}B^{-1}\bigg{)},$
where $B:=B(q)$ and
$\displaystyle X_{1}=$ $\displaystyle
P(q^{2},-1;q^{5})=\frac{q^{2}J_{1,10}J_{2,10}^{3}J_{3,10}^{3}J_{5,10}^{2}}{2J_{10}^{6}J_{4,10}},\quad
X_{2}=P(q,-1;q^{5})=\frac{qJ_{1,10}^{3}J_{3,10}J_{4,10}^{3}J_{5,10}^{2}}{2J_{10}^{6}J_{2,10}},$
$\displaystyle X_{3}=$ $\displaystyle
P(q^{2},-q;q^{5})=\frac{qJ_{1,10}^{3}J_{3,10}^{2}J_{4,10}^{2}J_{5,10}}{J_{10}^{6}},\quad
X_{4}=P(q,-q^{2};q^{5})=\frac{J_{1,10}^{2}J_{2,10}^{2}J_{3,10}^{3}J_{5,10}}{J_{10}^{6}}.$
The following identity is also well known:
(3.20) $\frac{J_{1}^{6}}{J_{5}^{6}}=B^{5}-11q-q^{2}\frac{1}{B^{5}}.$
See for example [20, Lemma (2.5)].
By (3.19) and (3.20), we have
(3.21) $\displaystyle g_{4}(q)=$
$\displaystyle\frac{J_{1,10}^{3}J_{3,10}J_{4,10}^{3}J_{5,10}^{2}J_{2,5}^{4}J_{5}^{5}}{qJ_{2,10}J_{1,5}^{4}J_{10}^{6}J_{1}^{6}}-\frac{qJ_{1,10}J_{2,10}^{3}J_{3,10}^{3}J_{5,10}^{2}J_{1,5}^{4}J_{5}^{5}}{J_{4,10}J_{2,5}^{4}J_{10}^{6}J_{1}^{6}}+4\frac{J_{1,10}^{2}J_{2,10}^{2}J_{3,10}^{3}J_{5,10}J_{2,5}^{3}J_{5}^{5}}{J_{1,5}^{3}J_{10}^{6}J_{1}^{6}}$
$\displaystyle+4\frac{qJ_{1,10}^{3}J_{3,10}^{2}J_{4,10}^{2}J_{5,10}J_{1,5}^{3}J_{5}^{5}}{J_{2,5}^{3}J_{10}^{6}J_{1}^{6}}-8\frac{J_{1,10}^{3}J_{3,10}^{2}J_{4,10}^{2}J_{5,10}J_{2,5}^{2}J_{5}^{5}}{J_{1,5}^{2}J_{10}^{6}J_{1}^{6}}+8\frac{qJ_{1,10}^{2}J_{2,10}^{2}J_{3,10}^{3}J_{5,10}J_{1,5}^{2}J_{5}^{5}}{J_{2,5}^{2}J_{10}^{6}J_{1}^{6}}$
$\displaystyle-3\frac{J_{1,10}J_{2,10}^{3}J_{3,10}^{3}J_{5,10}^{2}J_{2,5}J_{5}^{5}}{J_{4,10}J_{1,5}J_{10}^{6}J_{1}^{6}}-3\frac{J_{1,10}^{3}J_{3,10}J_{4,10}^{3}J_{5,10}^{2}J_{1,5}J_{5}^{5}}{J_{2,10}J_{2,5}J_{10}^{6}J_{1}^{6}}.$
We prove
(3.22)
$g_{4}(q)=-4\frac{J_{1}^{2}J_{4}^{3}J_{5}J_{20}}{J_{2}^{5}J_{10}}+\frac{1}{q}\frac{J_{2}^{4}J_{10}^{2}}{J_{1}J_{4}^{3}J_{20}},$
using the algorithm described in Section 2.4. We first use (3.21) to rewrite
(3.22) as the following modular function identity for generalized eta-products
on $\Gamma_{1}(20)$.
(3.23) $\displaystyle 0=$ $\displaystyle
1-{\frac{\eta_{{10,1}}^{6}\eta_{{10,4}}^{4}}{\eta_{{10,2}}^{4}\eta_{{10,3}}^{6}}}+4\,{\frac{\eta_{{10,2}}^{2}\eta_{{10,3}}}{\eta_{{10,4}}^{2}\eta_{{10,5}}}}+4\,{\frac{\eta_{{10,1}}^{7}\eta_{{10,4}}^{6}}{\eta_{{10,2}}^{6}\eta_{{10,3}}^{6}\eta_{{10,5}}}}-8\,{\frac{\eta_{{10,1}}^{2}\eta_{{10,4}}}{\eta_{{10,2}}\eta_{{10,3}}\eta_{{10,5}}}}+8\,{\frac{\eta_{{10,1}}^{5}\eta_{{10,4}}^{3}}{\eta_{{10,2}}^{3}\eta_{{10,3}}^{4}\eta_{{10,5}}}}$
$\displaystyle-3\,{\frac{\eta_{{10,1}}\eta_{{10,2}}}{\eta_{{10,3}}\eta_{{10,4}}}}-3\,{\frac{\eta_{{5,1}}^{5}}{\eta_{{5,2}}^{5}}}+4\,{\frac{\eta_{{20,1}}^{9}\eta_{{20,3}}^{3}\eta_{{20,4}}^{7}\eta_{{20,6}}^{4}\eta_{{20,7}}^{3}\eta_{{20,8}}^{3}\eta_{{20,9}}^{9}}{\eta_{{20,10}}^{2}}}-{\frac{\eta_{{20,1}}^{6}\eta_{{20,2}}^{6}\eta_{{20,4}}^{7}\eta_{{20,6}}^{10}\eta_{{20,8}}^{3}\eta_{{20,9}}^{6}\eta_{{20,10}}^{2}}{\eta_{{20,5}}^{4}}}.$
We use Theorem 2.9 to check that each generalized eta-product is a
modular function on $\Gamma_{1}(20)$. We then use Theorems 2.10 and 2.11 to
calculate the order of each generalized eta-product at each cusp of
$\Gamma_{1}(20)$. We calculate the constant in equation (2.26) to find that
$B=-24$. We let $g(\tau)$ be the right side of (3.23) and easily show that
$\operatorname{Ord}(g(\tau),\infty,\Gamma_{1}(20))>24$. The required identity
follows by Theorem 2.12.
From (3.11) and (3.22) we have
(3.24)
$\sum_{n=0}^{\infty}a_{f}(5n-1)q^{n}+f(q^{5})=q\,g_{4}(q)=\frac{J_{2}^{4}J_{10}^{2}}{J_{1}J_{4}^{3}J_{20}}-4q\,\frac{J_{1}^{2}J_{4}^{3}J_{5}J_{20}}{J_{2}^{5}J_{10}},$
which is our result (3.1). ∎
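Theorem 3.1 can also be checked numerically with truncated integer q-series. The sketch below assumes the standard expansion $f(q)=\sum_{n\geq 0}q^{n^{2}}/(-q;q)_{n}^{2}$ for Ramanujan's third order mock theta function, so that $a_{f}(n)$ is its $n$-th coefficient; all helper names are ours.

```python
TRUNC = 101   # enough to read off a_f(5n-1) for n < 20

def mul(a, b):
    out = [0] * TRUNC
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), TRUNC - i)):
                out[i + j] += ai * b[j]
    return out

def inv(a):
    # power-series inverse; every series used here has constant term 1
    out = [0] * TRUNC
    out[0] = 1
    for n in range(1, TRUNC):
        out[n] = -sum(a[k] * out[n - k] for k in range(1, n + 1))
    return out

def J(k):
    # J_k = (q^k; q^k)_infinity, truncated
    c = [0] * TRUNC
    c[0] = 1
    for m in range(k, TRUNC, k):
        for n in range(TRUNC - 1, m - 1, -1):
            c[n] -= c[n - m]
    return c

# coefficients a_f(n) of f(q) = sum_{n>=0} q^{n^2} / (-q; q)_n^2
af = [0] * TRUNC
poch = [0] * TRUNC
poch[0] = 1                                   # (-q; q)_n, starting at n = 0
n = 0
while n * n < TRUNC:
    if n:
        poch = [poch[i] + (poch[i - n] if i >= n else 0) for i in range(TRUNC)]
    term = inv(mul(poch, poch))               # 1 / (-q; q)_n^2
    for i in range(TRUNC - n * n):
        af[n * n + i] += term[i]              # shifted by q^{n^2}
    n += 1

# left side of (3.1)
lhs = [af[5 * m - 1] + (af[m // 5] if m % 5 == 0 else 0) for m in range(1, 20)]
lhs = [1] + lhs                               # m = 0 term: a_f(-1) + a_f(0) = 1

# right side of (3.1)
J1, J2, J4, J5, J10, J20 = J(1), J(2), J(4), J(5), J(10), J(20)
def quotient(nums, dens):
    r = [1] + [0] * (TRUNC - 1)
    for s in nums:
        r = mul(r, s)
    for s in dens:
        r = mul(r, inv(s))
    return r
rhs = quotient([J2] * 4 + [J10] * 2, [J1] + [J4] * 3 + [J20])
second = quotient([J1] * 2 + [J4] * 3 + [J5, J20], [J2] * 5 + [J10])
for m in range(1, 20):
    rhs[m] -= 4 * second[m - 1]               # the -4q(...) term

assert af[:5] == [1, 1, -2, 3, -3]
assert lhs == rhs[:20]
```

This agrees with the theorem through $q^{19}$; raising `TRUNC` extends the check.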
### 3.2. A Fundamental Lemma
We need the following fundamental lemma, whose proof follows easily from
Theorem 2.6.
###### Lemma 3.2 (A Fundamental Lemma).
Suppose $u=u(\tau)$, and $j$ is any integer. Then
$\displaystyle{U_{5}}(u\,t^{j})=-\sum_{l=0}^{4}\sigma_{l}(\tau)\,{U_{5}}(u\,t^{j+l-5}),$
where $t=t(\tau)$ is defined in (2.9) and the $\sigma_{j}(\tau)$ are given in
(2.10)–(2.14).
###### Proof.
The result follows easily from (2.15) by multiplying both sides by
$u\,t^{j-5}$ and applying $U_{5}$. ∎
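Concretely, on q-expansions the operator $U_{5}$ extracts every fifth coefficient, $U_{5}\big(\sum a(n)q^{n}\big)=\sum a(5n)q^{n}$, which on a coefficient list is a one-liner (sketch; the name is ours):

```python
def U5(coeffs):
    """Atkin U_5 on a q-expansion stored as a coefficient list
    [a(0), a(1), ...]: keep the coefficients a(5n)."""
    return coeffs[::5]
```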
###### Lemma 3.3.
Let $u=u(\tau)$, and $l\in\mathbb{Z}$. Suppose for $l\leq k\leq l+4$ there
exist Laurent polynomials $p_{u,k}(t)\in\mathbb{Z}[t,t^{-1}]$ such that
(3.25) $\displaystyle U_{5}(u\,t^{k})=v\,p_{u,k}(t),$
and
(3.26) $\displaystyle
ord_{t}(p_{u,k}(t))\geq\left\lceil\frac{k+s}{5}\right\rceil,$
for a fixed integer $s$, where $t=t(\tau)$ is defined in (2.9) and where
$v=v(\tau)$. Then there exists a sequence of Laurent polynomials
$p_{u,k}(t)\in\mathbb{Z}[t,t^{-1}]$, $k\in\mathbb{Z}$, such that (3.25) and
(3.26) hold for all $k\in\mathbb{Z}$.
###### Proof.
We proceed by induction on $k$. Let $N>l+4$ and assume the result holds for
$l\leq k\leq N-1$. Then by Lemma 3.2 we have
$U_{5}(u\,t^{N})=-\sum_{j=0}^{4}\sigma_{j}(\tau)\,U_{5}(u\,t^{N+j-5})=-v\,\sum_{j=0}^{4}\sigma_{j}(\tau)\,p_{u,N+j-5}(t)=v\,p_{u,N}(t),$
where
$p_{u,N}(t)=-\sum_{j=0}^{4}\sigma_{j}(\tau)\,p_{u,N+j-5}(t)\in\mathbb{Z}[t,t^{-1}],$
and
$\displaystyle\operatorname{ord}_{t}(p_{u,N}(t))$
$\displaystyle\geq\underset{0\leq j\leq
4}{\min}(\operatorname{ord}_{t}(\sigma_{j})+\operatorname{ord}_{t}(p_{u,N+j-5}(t)))$
$\displaystyle\geq\underset{0\leq j\leq
4}{\min}\left(1+\left\lceil\frac{N+j+s-5}{5}\right\rceil\right)=\left\lceil\frac{N+s}{5}\right\rceil.$
The result for all $k\geq l$ follows. The induction proof for $k<l$ is
similar. ∎
###### Lemma 3.4.
Let $u=u(\tau)$, and $l\in\mathbb{Z}$. Suppose for $l\leq k\leq l+4$ there
exist Laurent polynomials $p_{u,k}(t)\in\mathbb{Z}[t,t^{-1}]$ such that
(3.27) $\displaystyle U_{5}(u\,t^{k})=v\,p_{u,k}(t),$
where
$p_{u,k}(t)=\sum_{n}c_{u}(k,n)\,t^{n},\quad\nu_{5}(c_{u}(k,n))\geq\left\lfloor\frac{3n-k+r}{4}\right\rfloor$
for a fixed integer $r$, where $t=t(\tau)$ is defined in (2.9) and where
$v=v(\tau)$. Then there exists a sequence of Laurent polynomials
$p_{u,k}(t)\in\mathbb{Z}[t,t^{-1}]$, $k\in\mathbb{Z}$, such that (3.27) holds
for $k>l+4$, where
$p_{u,k}(t)=\sum_{n}c_{u}(k,n)\,t^{n},\quad\mbox{and}\quad\nu_{5}(c_{u}(k,n))\geq\left\lfloor\frac{3n-k+r+2}{4}\right\rfloor.$
###### Remark 3.5.
Recall that $\nu_{p}(n)$ denotes the $p$-adic order of an integer $n$; i.e.,
the exponent of the highest power of $p$ that divides $n$.
###### Proof.
We proceed by induction on $k$. Let $N>l+4$ and assume (3.27) holds for $l\leq
k\leq N-1$ where
$p_{u,k}(t)=\sum_{n}c_{u}(k,n)\,t^{n},\quad\nu_{5}(c_{u}(k,n))\geq\left\lfloor\frac{3n-k+r}{4}\right\rfloor.$
As in the proof of Lemma 3.3 we have
$U_{5}(u\,t^{N})=v\,p_{u,N}(t),$
where
$p_{u,N}(t)=-\sum_{j=0}^{4}\sigma_{j}(\tau)\,p_{u,N+j-5}(t)\in\mathbb{Z}[t,t^{-1}].$
From Lemma 3.2 we have
$\sigma_{j}(t)=\sum_{l=1}^{j+1}s(j,l)\,t^{l}\in\mathbb{Z}[t],$
where
$\nu_{5}(s(j,l))\geq\left\lfloor\frac{3l+j}{4}\right\rfloor,$
for $1\leq l\leq j+1$, $0\leq j\leq 4$. Therefore
$p_{u,N}(t)=-\sum_{j=0}^{4}\sum_{l=1}^{j+1}s(j,l)\sum_{m}c_{u}(N+j-5,m)t^{m+l}=\sum_{n}c_{u}(N,n)\,t^{n},$
where
$c_{u}(N,n)=-\sum_{j=0}^{4}\sum_{l=1}^{j+1}s(j,l)\,c_{u}(N+j-5,n-l),$
and
$\displaystyle\nu_{5}(c_{u}(N,n))$
$\displaystyle\geq\underset{\begin{subarray}{c}1\leq l\leq j+1\\\ 0\leq j\leq
4\end{subarray}}{\min}\bigg{(}\nu_{5}(s(j,l))+\nu_{5}(c_{u}(N+j-5,n-l)\bigg{)}$
$\displaystyle\geq\underset{\begin{subarray}{c}1\leq l\leq j+1\\\ 0\leq j\leq
4\end{subarray}}{\min}\bigg{(}\left\lfloor\frac{3l+j}{4}\right\rfloor+\left\lfloor\frac{3(n-l)-(N+j-5)+r}{4}\right\rfloor\bigg{)}$
$\displaystyle\geq\underset{\begin{subarray}{c}1\leq l\leq j+1\\\ 0\leq j\leq
4\end{subarray}}{\min}\left\lfloor\frac{3l+j+3(n-l)-(N+j-5)+r-3}{4}\right\rfloor\geq\left\lfloor\frac{3n-N+r+2}{4}\right\rfloor.$
The result follows. ∎
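The last chain of inequalities rests on two elementary facts: $\lfloor a/4\rfloor+\lfloor b/4\rfloor\geq\lfloor(a+b-3)/4\rfloor$ for all integers $a,b$, and the observation that the combined numerator collapses to $3n-N+r+2$ independently of $l$ and $j$. Both are quick to confirm by brute force (sketch):

```python
def fl(x):
    return x // 4    # Python floor division, correct for negative x too

# floor(a/4) + floor(b/4) >= floor((a+b-3)/4) for all integers a, b
assert all(fl(a) + fl(b) >= fl(a + b - 3)
           for a in range(-20, 21) for b in range(-20, 21))

# 3l + j + 3(n-l) - (N+j-5) + r - 3 == 3n - N + r + 2, for all l, j
for j in range(5):
    for l in range(1, j + 2):
        for (n, N, r) in [(0, 7, -2), (3, 10, 0), (-4, 12, 5)]:
            assert 3*l + j + 3*(n - l) - (N + j - 5) + r - 3 == 3*n - N + r + 2
```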
We define the following functions which will be needed in the proof of Theorem
1.2.
(3.28)
$P_{A}:=\frac{J_{10}^{2}J_{5}J_{2}^{6}}{J_{20}J_{4}^{3}J_{1}^{5}}-4\frac{qJ_{20}J_{5}^{2}J_{4}^{3}}{J_{10}J_{2}^{3}J_{1}^{2}},\quad
P_{B}:=\frac{J_{10}^{6}J_{2}^{2}J_{1}}{qJ_{20}^{3}J_{5}^{5}J_{4}}+4\frac{qJ_{20}^{3}J_{4}J_{1}^{2}}{J_{10}^{3}J_{5}^{2}J_{2}},\quad
A:=\frac{J_{50}^{2}J_{1}^{4}}{J_{25}^{4}J_{2}^{2}},\quad
B:=\frac{qJ_{25}}{J_{1}}.$
For $f=f(\tau)$ we define
(3.29) $U_{A}(f):=U_{5}(A\,f),\qquad U_{B}(f):=U_{5}(B\,f).$
First we need some initial values of $U_{A}(P_{A}\,t^{k})$ and
$U_{B}(P_{B}\,t^{k})$.
###### Lemma 3.6.
Group I $\displaystyle U_{A}(P_{A})=P_{B}(5^{4}t^{5}-7\cdot 5^{3}t^{4}+14\cdot
5^{2}t^{3}-2\cdot 5^{2}t^{2}+t),$ $\displaystyle U_{A}(P_{A}t^{-1})=-P_{B}t,$
$\displaystyle U_{A}(P_{A}t^{-2})=-5P_{B}t^{2},$ $\displaystyle
U_{A}(P_{A}t^{-3})=-5^{2}P_{B}t^{3},$ $\displaystyle
U_{A}(P_{A}t^{-4})=-5^{3}P_{B}t^{4}.$ Group II $\displaystyle
U_{B}(P_{B})=P_{A},$ $\displaystyle U_{B}(P_{B}t^{-1})=P_{A}(-5t+2),$
$\displaystyle U_{B}(P_{B}t^{-2})=P_{A}(5^{2}t^{2}-8\cdot 5t+8),$
$\displaystyle U_{B}(P_{B}t^{-3})=P_{A}(5^{3}t^{3}-34\cdot 5t+34),$
$\displaystyle U_{B}(P_{B}t^{-4})=P_{A}(-5^{4}t^{4}+16\cdot 5^{3}t^{3}-36\cdot
5^{2}t^{2}-128\cdot 5t+6\cdot 5^{2}).$
###### Proof.
We use the algorithm described in Section 2.3 to prove each of these
identities. The identities take the form
$U_{5}(g)=f,$
where $f$, $g$ are linear combinations of eta-products. For each identity we
check that $f$ is a linear combination of eta-products which are modular
functions on $\Gamma_{0}(100)$ and that $g$ is a linear combination of eta-
products which are modular functions on $\Gamma_{0}(20)$. For each of the
identities we follow the 5 steps in the algorithm given after Theorem 2.8. We
note that the smallest value of $B$ encountered is $B=-14$. These steps have
been carried out with the help of MAPLE, including all necessary verifications
so that the results are proved. ∎
Following [25], a map
$a\,:\,\mathbb{Z}\times\mathbb{Z}\longrightarrow\mathbb{Z}$ is called a
discrete array if for each $i$ the map
$a(i,-)\,:\,\mathbb{Z}\longrightarrow\mathbb{Z}$ given by $j\mapsto a(i,j)$
has finite support.
###### Lemma 3.7.
There exist discrete arrays $a$ and $b$ such that for $k\geq 1$
(3.30) $\displaystyle U_{A}(P_{A}\,t^{k})$
$\displaystyle=P_{B}\,\sum_{n\geq\left\lceil(k+5)/5\right\rceil}a(k,n)\,t^{n},\quad\mbox{where}\quad\nu_{5}(a(k,n))\geq\left\lfloor\frac{3n-k}{4}\right\rfloor,$
(3.31) $\displaystyle U_{B}(P_{B}\,t^{k})$
$\displaystyle=P_{A}\,\sum_{n\geq\left\lceil
k/5\right\rceil}b(k,n)\,t^{n},\quad\mbox{where}\quad\nu_{5}(b(k,n))\geq\left\lfloor\frac{3n-k+2}{4}\right\rfloor.$
###### Proof.
From Lemma 3.6, Group I we find there is a discrete array $a$ such that
$U_{A}(P_{A}\,t^{k})=P_{B}\,\sum_{n\geq\left\lceil(k+5)/5\right\rceil}a(k,n)\,t^{n},\quad\mbox{where}\quad\nu_{5}(a(k,n))\geq\left\lfloor\frac{3n-k-2}{4}\right\rfloor,$
for $-4\leq k\leq 0$. Lemma 3.3 (with $s=4$) and Lemma 3.4 (with $r=-2$) imply
(3.30) for $k\geq 1$. From Lemma 3.6, Group II we find there is a discrete
array $b$ such that
$U_{B}(P_{B}\,t^{k})=P_{A}\,\sum_{n\geq\left\lceil
k/5\right\rceil}b(k,n)\,t^{n},\quad\mbox{where}\quad\nu_{5}(b(k,n))\geq\left\lfloor\frac{3n-k}{4}\right\rfloor,$
for $-4\leq k\leq 0$. Lemma 3.3 (with $s=0$) and Lemma 3.4 (with $r=0$) imply
(3.31) for $k\geq 1$. ∎
### 3.3. Proof of Theorem 1.2
For $\alpha\geq 1$ define $\delta_{\alpha}$ by $0<\delta_{\alpha}<5^{\alpha}$
and $24\delta_{\alpha}\equiv 1\pmod{5^{\alpha}}$. Then
$\delta_{2\alpha}=\frac{23\times
5^{2\alpha}+1}{24},\qquad\delta_{2\alpha+1}=\frac{19\times
5^{2\alpha+1}+1}{24}.$
We let
$\lambda_{2\alpha}=\lambda_{2\alpha+1}=\frac{5}{24}(1-5^{2\alpha}).$
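The stated closed forms for $\delta_{\alpha}$, and the integrality of $\lambda_{2\alpha}$, can be confirmed directly (a sketch; Python 3.8+ for the modular inverse via `pow`, and the function name is ours):

```python
def delta(alpha):
    """The inverse of 24 mod 5^alpha, normalized to 0 < delta < 5^alpha."""
    return pow(24, -1, 5 ** alpha)

for a in range(1, 8):
    m = 5 ** a
    assert 0 < delta(a) < m and (24 * delta(a)) % m == 1

for a in range(1, 5):
    # closed forms for even and odd index
    assert delta(2 * a) == (23 * 5 ** (2 * a) + 1) // 24
    assert delta(2 * a + 1) == (19 * 5 ** (2 * a + 1) + 1) // 24
    # lambda_{2a} = (5/24)(1 - 5^{2a}) is an integer since 5^{2a} = 1 mod 24
    assert (5 * (1 - 5 ** (2 * a))) % 24 == 0
```

For example $\delta_{2}=24$, $\delta_{3}=99$, and $\lambda_{2}=-5$.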
For $n\geq 0$ we define
(3.32) $c_{f}(n):=a_{f}(5n-1)+a_{f}(n/5).$
We find that for $\alpha\geq 3$
(3.33)
$\sum_{n=0}^{\infty}\left(a_{f}(5^{\alpha}n+\delta_{\alpha})+a_{f}(5^{\alpha-2}n+\delta_{\alpha-2})\right)q^{n+1}=\sum_{n=1}^{\infty}c_{f}(5^{\alpha-1}n+\lambda_{\alpha-1})q^{n}.$
We define the sequence of functions $(L_{\alpha})_{\alpha=0}^{\infty}$ by
$L_{0}:=P_{A}$ and for $\alpha\geq 0$
$L_{2\alpha+1}:=U_{A}(L_{2\alpha}),\qquad\mbox{and}\qquad
L_{2\alpha+2}:=U_{B}(L_{2\alpha+1}).$
###### Lemma 3.8.
For $\alpha\geq 0$,
$L_{2\alpha}=\frac{J_{5}J_{2}^{2}}{J_{1}^{4}}\sum_{n=0}^{\infty}c_{f}(5^{2\alpha}n+\lambda_{2\alpha})q^{n},$
and
$L_{2\alpha+1}=\frac{J_{10}^{2}J_{1}}{J_{5}^{4}}\sum_{n=0}^{\infty}c_{f}(5^{2\alpha+1}n+\lambda_{2\alpha+1})q^{n}.$
###### Proof.
$\displaystyle L_{0}$
$\displaystyle=P_{A}=\frac{J_{10}^{2}J_{5}J_{2}^{6}}{J_{20}J_{4}^{3}J_{1}^{5}}-4q\frac{J_{20}J_{5}^{2}J_{4}^{3}}{J_{10}J_{2}^{3}J_{1}^{2}}=\frac{J_{5}J_{2}^{2}}{J_{1}^{4}}\left(\frac{J_{10}^{2}J_{2}^{4}}{J_{20}J_{4}^{3}J_{1}}-4q\frac{J_{20}J_{5}J_{4}^{3}J_{1}^{2}}{J_{10}J_{2}^{5}}\right)$
$\displaystyle=\frac{J_{5}J_{2}^{2}}{J_{1}^{4}}\sum_{n=0}^{\infty}(a_{f}(5n-1)+a_{f}(n/5))q^{n}=\frac{J_{5}J_{2}^{2}}{J_{1}^{4}}\sum_{n=0}^{\infty}c_{f}(n+\lambda_{0})q^{n}.$
This is the first equation with $\alpha=0$. The general result follows by a
routine induction argument. ∎
Our main result for the rank parity function modulo powers of $5$ is the
following theorem.
###### Theorem 3.9.
There exists a discrete array $\ell$ such that for $\alpha\geq 1$
(3.34) $\displaystyle L_{2\alpha}$ $\displaystyle=P_{A}\,\sum_{n\geq
1}\ell(2\alpha,n)\,t^{n},\quad\mbox{where}\quad\nu_{5}(\ell(2\alpha,n))\geq\alpha+\left\lfloor\frac{3n-3}{4}\right\rfloor,$
(3.35) $\displaystyle L_{2\alpha+1}$ $\displaystyle=P_{B}\,\sum_{n\geq
2}\ell(2\alpha+1,n)\,t^{n},\quad\mbox{where}\quad\nu_{5}(\ell(2\alpha+1,n))\geq\alpha+1+\left\lfloor\frac{3n-6}{4}\right\rfloor.$
###### Proof.
We define the discrete array $\ell$ recursively. Define
$\displaystyle\ell(1,1)=1,\,\ell(1,2)=-2\cdot 5^{2},\,\ell(1,3)=14\cdot
5^{2},\,\ell(1,4)=-7\cdot 5^{3},\,\ell(1,5)=5^{4},\,$
$\displaystyle\mbox{and}\quad\ell(1,k)=0,\quad\mbox{for $k\geq 6$}.$
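The base-case bound $\nu_{5}(\ell(1,n))\geq\lfloor(3n-2)/4\rfloor$ used below can be read off from these values directly (sketch; names are ours):

```python
def nu5(n):
    """5-adic order of a nonzero integer."""
    v = 0
    while n % 5 == 0:
        n //= 5
        v += 1
    return v

ell1 = {1: 1, 2: -2 * 5**2, 3: 14 * 5**2, 4: -7 * 5**3, 5: 5**4}
for n, c in ell1.items():
    assert nu5(c) >= (3 * n - 2) // 4
```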
For $\alpha\geq 1$ define
(3.36) $\ell(2\alpha,n)=\sum_{k\geq
1}\ell(2\alpha-1,k)\,b(k,n)\qquad\mbox{(for $n\geq 1$)},$
and
(3.37) $\ell(2\alpha+1,n)=\sum_{k\geq
1}\ell(2\alpha,k)\,a(k,n)\qquad\mbox{(for $n\geq 2$),}$
where $a$ and $b$ are the discrete arrays given in Lemma 3.7. From Lemma 3.6,
Group I and by Lemma 3.7 and equation (3.36) we have
$L_{1}=U_{A}(L_{0})=U_{A}(P_{A})=P_{B}\sum_{n=1}^{5}\ell(1,n)\,t^{n},\quad\mbox{where}\quad\nu_{5}(\ell(1,n))\geq\left\lfloor\frac{3n-2}{4}\right\rfloor.$
$\displaystyle L_{2}$
$\displaystyle=U_{B}(L_{1})=\sum_{n=1}^{5}\ell(1,n)U_{B}(P_{B}t^{n}),$
$\displaystyle=\sum_{n=1}^{5}\ell(1,n)P_{A}\sum_{k\geq 1}b(n,k)t^{k}$
$\displaystyle=P_{A}\sum_{n\geq 1}\sum_{k=1}^{5}\ell(1,k)b(k,n)t^{n}$
$\displaystyle=P_{A}\sum_{n\geq 1}\ell(2,n)t^{n},$
where
$\displaystyle\nu_{5}(\ell(2,n))$ $\displaystyle\geq\underset{1\leq k\leq
5}{\min}\bigg{(}\nu_{5}(\ell(1,k))+\nu_{5}(b(k,n)\bigg{)}\geq\underset{1\leq
k\leq
5}{\min}\bigg{(}\left\lfloor\frac{3k-2}{4}\right\rfloor+\left\lfloor\frac{3n-k+2}{4}\right\rfloor\bigg{)}$
$\displaystyle=\left\lfloor\frac{3n+1}{4}\right\rfloor,$
since when $k=1$,
$\left\lfloor\frac{3k-2}{4}\right\rfloor+\left\lfloor\frac{3n-k+2}{4}\right\rfloor=\left\lfloor\frac{3n+1}{4}\right\rfloor$,
and for $k\geq 2$,
$\left\lfloor\frac{3k-2}{4}\right\rfloor+\left\lfloor\frac{3n-k+2}{4}\right\rfloor\geq\left\lfloor\frac{3n+2k-3}{4}\right\rfloor\geq\left\lfloor\frac{3n+1}{4}\right\rfloor.$
Thus the result holds for $L_{2\alpha}$ when $\alpha=1$. We proceed by
induction. Suppose the result holds for $L_{2\alpha}$ for a given $\alpha\geq
1$. Then by Lemma 3.7 and equation (3.37) we have
$\displaystyle L_{2\alpha+1}$ $\displaystyle=U_{A}(L_{2\alpha})=\sum_{n\geq
1}\ell(2\alpha,n)U_{A}(P_{A}t^{n}),$ $\displaystyle=\sum_{n\geq
1}\ell(2\alpha,n)P_{B}\sum_{k\geq 2}a(n,k)t^{k}$
$\displaystyle=P_{B}\sum_{n\geq 2}\sum_{k\geq 1}\ell(2\alpha,k)a(k,n)t^{n}$
$\displaystyle=P_{B}\sum_{n\geq 2}\ell(2\alpha+1,n)t^{n},$
where
$\displaystyle\nu_{5}(\ell(2\alpha+1,n))$ $\displaystyle\geq\underset{1\leq
k}{\min}\bigg{(}\nu_{5}(\ell(2\alpha,k))+\nu_{5}(a(k,n)\bigg{)}\geq\underset{1\leq
k}{\min}\bigg{(}\alpha+\left\lfloor\frac{3k-3}{4}\right\rfloor+\left\lfloor\frac{3n-k}{4}\right\rfloor\bigg{)}$
$\displaystyle\geq\alpha+1+\left\lfloor\frac{3n-6}{4}\right\rfloor,$
since when $k=1$,
$\left\lfloor\frac{3k-3}{4}\right\rfloor+\left\lfloor\frac{3n-k}{4}\right\rfloor=1+\left\lfloor\frac{3n-5}{4}\right\rfloor$,
and for $k\geq 2$,
$\left\lfloor\frac{3k-3}{4}\right\rfloor+\left\lfloor\frac{3n-k}{4}\right\rfloor\geq\left\lfloor\frac{3n+2k-6}{4}\right\rfloor\geq
1+\left\lfloor\frac{3n-6}{4}\right\rfloor.$
Thus the result holds for $L_{2\alpha+1}$. Suppose the result holds for
$L_{2\alpha+1}$ for a given $\alpha\geq 1$. Then again by Lemma 3.7 and
equation (3.36) we have
$\displaystyle L_{2\alpha+2}$ $\displaystyle=U_{B}(L_{2\alpha+1})=\sum_{n\geq
2}\ell(2\alpha+1,n)U_{B}(P_{B}t^{n}),$ $\displaystyle=\sum_{n\geq
2}\ell(2\alpha+1,n)P_{A}\sum_{k\geq 1}b(n,k)t^{k}$
$\displaystyle=P_{A}\sum_{n\geq 1}\sum_{k\geq 2}\ell(2\alpha+1,k)b(k,n)t^{n}$
$\displaystyle=P_{A}\sum_{n\geq 1}\ell(2\alpha+2,n)t^{n},$
where $\ell(2\alpha+1,1)=0$. Here
$\displaystyle\nu_{5}(\ell(2\alpha+2,n))$ $\displaystyle\geq\underset{2\leq
k}{\min}\bigg{(}\nu_{5}(\ell(2\alpha+1,k))+\nu_{5}(b(k,n)\bigg{)}\geq\underset{2\leq
k}{\min}\bigg{(}\alpha+1+\left\lfloor\frac{3k-6}{4}\right\rfloor+\left\lfloor\frac{3n-k+2}{4}\right\rfloor\bigg{)}$
$\displaystyle\geq\underset{2\leq
k}{\min}\bigg{(}\alpha+1+\left\lfloor\frac{3n+2k-7}{4}\right\rfloor\bigg{)}=\alpha+1+\left\lfloor\frac{3n-3}{4}\right\rfloor.$
Thus the result holds for $L_{2\alpha+2}$, and the result holds in general by
induction. ∎
###### Corollary 3.10.
For $\alpha\geq 1$ and all $n\geq 0$ we have
(3.38) $\displaystyle c_{f}(5^{2\alpha}n+\lambda_{2\alpha})$
$\displaystyle\equiv 0\pmod{5^{\alpha}},$ (3.39) $\displaystyle
c_{f}(5^{2\alpha+1}n+\lambda_{2\alpha+1})$ $\displaystyle\equiv
0\pmod{5^{\alpha+1}}.$
###### Proof.
The congruences follow immediately from Lemma 3.8 and Theorem 3.9. ∎
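Congruence (3.38) can be spot-checked numerically for $\alpha=1$, where $\lambda_{2}=-5$, so the claim is $c_{f}(25k-5)\equiv 0\pmod{5}$ for $k\geq 1$. The sketch below again assumes the standard expansion $f(q)=\sum_{n\geq 0}q^{n^{2}}/(-q;q)_{n}^{2}$ for $a_{f}$; helper names are ours.

```python
TRUNC = 226   # enough for a_f up to 5*(25*2 - 5) - 1 = 224

def mul(a, b):
    out = [0] * TRUNC
    for i, ai in enumerate(a):
        if ai:
            for j in range(TRUNC - i):
                out[i + j] += ai * b[j]
    return out

def inv(a):
    out = [0] * TRUNC
    out[0] = 1                               # all series here start with 1
    for n in range(1, TRUNC):
        out[n] = -sum(a[k] * out[n - k] for k in range(1, n + 1))
    return out

# a_f(n): coefficients of f(q) = sum_{n>=0} q^{n^2} / (-q; q)_n^2
af = [0] * TRUNC
poch = [0] * TRUNC
poch[0] = 1
n = 0
while n * n < TRUNC:
    if n:
        poch = [poch[i] + (poch[i - n] if i >= n else 0) for i in range(TRUNC)]
    term = inv(mul(poch, poch))
    for i in range(TRUNC - n * n):
        af[n * n + i] += term[i]
    n += 1

def c_f(m):                                   # definition (3.32)
    return af[5 * m - 1] + (af[m // 5] if m % 5 == 0 else 0)

lam2 = 5 * (1 - 5 ** 2) // 24                 # lambda_2 = -5
for k in (1, 2):
    assert c_f(25 * k + lam2) % 5 == 0        # (3.38) with alpha = 1
```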
In view of (3.33) and Corollary 3.10 we obtain (1.4). This completes the proof
of Theorem 1.2.
## 4\. Further results
The methods of this paper can be extended to study congruences mod powers of
$7$ for both the rank and crank parity functions. We describe some of these
results, which we will prove in a subsequent paper [11]. Analogous to (3.1) we
find that
(4.1)
$\sum_{n=0}^{\infty}(a_{f}(n/7)-a_{f}(7n-2))q^{n}=\frac{J_{7}^{3}}{J_{2}^{2}}\left(\frac{J_{1}^{3}J_{7}^{3}}{J_{2}^{3}J_{14}^{3}}+6q^{2}\frac{J_{14}^{4}J_{1}^{4}}{J_{2}^{4}J_{7}^{4}}\right),$
which leads to the following theorem.
###### Theorem 4.1.
For all $\alpha\geq 3$ and all $n\geq 0$ we have
(4.2)
$a_{f}(7^{\alpha}n+\delta_{\alpha})-a_{f}(7^{\alpha-2}n+\delta_{\alpha-2})\equiv
0\pmod{7^{\left\lfloor\tfrac{1}{2}(\alpha-1)\right\rfloor}},$
where $\delta_{\alpha}$ satisfies $0<\delta_{\alpha}<7^{\alpha}$ and
$24\delta_{\alpha}\equiv 1\pmod{7^{\alpha}}$.
It turns out that for the crank parity function, congruences mod powers of
$7$ are more difficult. Define the crank parity function
(4.3) $\beta(n)=M_{e}(n)-M_{o}(n),$
for all $n\geq 0$. The following is our analog of Choi, Kang and Lovejoy’s
Theorem 1.1.
###### Theorem 4.2.
For each $\alpha\geq 1$ there is an integral constant $K_{\alpha}$ such that
(4.4) $\beta(49n-2)\equiv K_{\alpha}\,\beta(n)\pmod{7^{\alpha}},\qquad\mbox{if
$24n\equiv 1\pmod{7^{\alpha}}$}.$
This gives a weak refinement of Ramanujan’s partition congruence modulo powers
of $7$:
$p(n)\equiv 0\pmod{7^{\left\lfloor\tfrac{\alpha+2}{2}\right\rfloor}},\qquad\mbox{if $24n\equiv
1\pmod{7^{\alpha}}$}.$
This was also proved by Watson [30]. Atkin and O’Brien [6] obtained
congruences mod powers of $13$ for the partition function similar to (4.4).
## References
* [1] Scott Ahlgren and Alexander Dunn, _Maass forms and the mock theta function $f(q)$_, Math. Ann. 374 (2019), no. 3-4, 1681–1718. MR3985121
* [2] George E. Andrews, _Generalized Frobenius partitions_ , Mem. Amer. Math. Soc. 49 (1984), no. 301, iv+44. MR743546
* [3] George E. Andrews, _On the theorems of Watson and Dragonette for Ramanujan’s mock theta functions_ , Amer. J. Math. 88 (1966), 454–490. MR200258
* [4] George E. Andrews and F. G. Garvan, _Dyson’s crank of a partition_ , Bull. Amer. Math. Soc. (N.S.) 18 (1988), no. 2, 167–171. MR929094
* [5] A. O. L. Atkin and J. Lehner, _Hecke operators on $\Gamma_{0}(m)$_, Math. Ann. 185 (1970), 134–160. MR0268123
* [6] A. O. L. Atkin and J. N. O’Brien, _Some properties of $p(n)$ and $c(n)$ modulo powers of $13$_, Trans. Amer. Math. Soc. 126 (1967), 442–459. MR214540
* [7] A. O. L. Atkin and P. Swinnerton-Dyer, _Some properties of partitions_ , Proc. London Math. Soc. (3) 4 (1954), 84–106. MR0060535
* [8] Bruce C. Berndt, _Ramanujan’s notebooks. Part III_ , Springer-Verlag, New York, 1991. MR1117903
* [9] Anthony J. F. Biagioli, _A proof of some identities of Ramanujan using modular forms_ , Glasgow Math. J. 31 (1989), no. 3, 271–295. MR1021804
* [10] Kathrin Bringmann and Ken Ono, _The $f(q)$ mock theta function conjecture and partition ranks_, Invent. Math. 165 (2006), no. 2, 243–266. MR2231957
* [11] Dandan Chen, Rong Chen, and Frank Garvan, _Congruences modulo powers of $7$ for the rank and crank parity functions_, (2020), in preparation.
# Product and complex structures on 3-Bihom-Lie algebras
Juan Li, Ying Hou, Liangyun Chen
( School of Mathematics and Statistics, Northeast Normal University,
Changchun 130024, China )
###### Abstract
In this paper, we first introduce the notion of a product structure on a
$3$-Bihom-Lie algebra, which is a Nijenhuis operator satisfying certain
conditions. We prove that a $3$-Bihom-Lie algebra admits a product structure
if and only if it is the direct sum, as a vector space, of two Bihom
subalgebras. We then give four special conditions, each of which yields a
special decomposition of the $3$-Bihom-Lie algebra. Similarly, we introduce
the definition of a complex structure on a $3$-Bihom-Lie algebra and describe
four special types of complex structures. Finally, we study the relation
between complex structures and product structures.
Keywords: 3-Bihom-Lie algebras, product structures, complex structures,
complex product structures
MSC(2020): 17A40, 17B61, 17D99.
000Corresponding author (L. Chen): [email protected]. Supported by NNSF
of China (Nos. 11771069 and 12071405).
## 1 Introduction
In [8], the authors introduced the notion of Bihom-algebras in their study of
generalized Hom-structures. There are two twisting maps $\alpha$ and $\beta$
in this class of algebras. When $\alpha=\beta$, Bihom-algebras reduce to
Hom-algebras, and when $\alpha=\beta=\rm Id$, they reduce to ordinary
algebras. Since then, many authors have taken an interest in Bihom-algebras,
such as Bihom-Lie algebras [7], Bihom-Lie superalgebras [11], Bihom-Lie
colour algebras [1], BiHom-bialgebras [12] and so on. In particular, the
definitions of $n$-Bihom-Lie algebras and $n$-Bihom-associative algebras were
given in [9]. For the case $n=3$, some constructions of $3$-Bihom-Lie
algebras were given in [10]. Moreover, the notion of a Nijenhuis operator on a
$3$-BiHom-Lie superalgebra was introduced in [6], where the authors studied
the relation between Nijenhuis operators and Rota-Baxter operators.
Product structures and complex structures satisfying certain conditions are
Nijenhuis operators. Many authors have studied product structures and complex
structures on Lie algebras from various points of view [2, 3, 4, 5, 14]. In
particular, product structures and complex structures on 3-Lie algebras were
studied in [15], and subsequently on involutive Hom-3-Lie algebras in [13].
Motivated by these works, we consider product structures and complex
structures on 3-Bihom-Lie algebras.
This paper is organized as follows. In Section 2, we recall the definitions of
3-Bihom-Lie algebras and of Nijenhuis operators on 3-Bihom-Lie algebras. In
Section 3, we introduce the notion of a product structure on a 3-Bihom-Lie
algebra and show that a 3-Bihom-Lie algebra admits a product structure if and
only if it is the direct sum (as vector spaces) of two Bihom subalgebras. We
then give four special product structures. In Section 4, we introduce the
notion of a complex structure on a 3-Bihom-Lie algebra, again with four
special conditions. We also introduce the notion of a complex product
structure on a 3-Bihom-Lie algebra, which is a product structure and a
complex structure satisfying a compatibility condition, and we give a
necessary and sufficient condition for a 3-Bihom-Lie algebra to admit a
complex product structure.
## 2 Preliminary
###### Definition 2.1.
[9] A $3$-Bihom-Lie algebra over a field $\mathbb{K}$ is a $4$-tuple
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$, which consists of a vector space $L$,
a $3$-linear operator $[\cdot,\cdot,\cdot]\colon L\times L\times L\rightarrow
L$ and two linear maps $\alpha,\beta\colon L\rightarrow L$, such that
$\forall\,x,y,z,u,v\in L$,
1. (1)
$\alpha\beta=\beta\alpha$,
2. (2)
$\alpha([x,y,z])=[\alpha(x),\alpha(y),\alpha(z)]$ and
$\beta([x,y,z])=[\beta(x),\beta(y),\beta(z)]$,
3. (3)
Bihom-skewsymmetry:
$[\beta(x),\beta(y),\alpha(z)]=-[\beta(y),\beta(x),\alpha(z)]=-[\beta(x),\beta(z),\alpha(y)],$
4. (4)
$3$-BiHom-Jacobi identity:
$\displaystyle[\beta^{2}(u),\beta^{2}(v),[\beta(x),\beta(y),\alpha(z)]]$
$\displaystyle=$
$\displaystyle[\beta^{2}(y),\beta^{2}(z),[\beta(u),\beta(v),\alpha(x)]]-[\beta^{2}(x),\beta^{2}(z),[\beta(u),\beta(v),\alpha(y)]]$
$\displaystyle+[\beta^{2}(x),\beta^{2}(y),[\beta(u),\beta(v),\alpha(z)]].$
A $3$-Bihom-Lie algebra is called a regular $3$-Bihom-Lie algebra if $\alpha$
and $\beta$ are algebra automorphisms.
###### Definition 2.2.
[9] A sub-vector space $\eta\subseteq L$ is a Bihom subalgebra of
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ if $\alpha(\eta)\subseteq\eta$,
$\beta(\eta)\subseteq\eta$ and $[x,y,z]\in\eta,~{}\forall\,x,y,z\in\eta$. It
is said to be a Bihom ideal of $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ if
$\alpha(\eta)\subseteq\eta$, $\beta(\eta)\subseteq\eta$ and
$[x,y,z]\in\eta,~{}\forall\,x\in\eta,y,z\in L.$
###### Definition 2.3.
[6] Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a $3$-Bihom-Lie algebra. A
linear map $N:L\rightarrow L$ is called a Nijenhuis operator if for all
$x,y,z\in L$, the following conditions hold:
$\displaystyle\alpha N=$ $\displaystyle N\alpha,~{}\beta N=N\beta,$
$\displaystyle[Nx,Ny,Nz]=$ $\displaystyle N[Nx,Ny,z]+N[Nx,y,Nz]+N[x,Ny,Nz]$
$\displaystyle-N^{2}[Nx,y,z]-N^{2}[x,Ny,z]-N^{2}[x,y,Nz]+N^{3}[x,y,z].$
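As a quick sanity check (our own illustration, not from the paper), any scalar multiple of the identity is a Nijenhuis operator for an arbitrary trilinear bracket, since both sides of the defining identity reduce to $\lambda^{3}[x,y,z]$. The sketch below verifies this numerically for a randomly generated bracket; the structure constants and the choice $N=2\,\mathrm{Id}$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 3
# random structure constants: bracket(e_i, e_j, e_k) = C[i, j, k, :]
C = rng.standard_normal((dim, dim, dim, dim))

def bracket(x, y, z):
    # trilinear extension of the structure constants
    return np.einsum('i,j,k,ijkl->l', x, y, z, C)

N = 2.0 * np.eye(dim)  # candidate Nijenhuis operator N = 2*Id

def nijenhuis_defect(x, y, z):
    Nx, Ny, Nz = N @ x, N @ y, N @ z
    lhs = bracket(Nx, Ny, Nz)
    rhs = (N @ (bracket(Nx, Ny, z) + bracket(Nx, y, Nz) + bracket(x, Ny, Nz))
           - N @ N @ (bracket(Nx, y, z) + bracket(x, Ny, z) + bracket(x, y, Nz))
           + N @ N @ N @ bracket(x, y, z))
    return np.max(np.abs(lhs - rhs))

basis = np.eye(dim)
defect = max(nijenhuis_defect(basis[i], basis[j], basis[k])
             for i in range(dim) for j in range(dim) for k in range(dim))
print(defect < 1e-9)  # True: N = 2*Id satisfies the Nijenhuis identity
```

Here $N=\lambda\,\mathrm{Id}$ commutes with any $\alpha,\beta$, so only the bracket identity needs checking.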
## 3 Product structures on $3$-Bihom-Lie algebras
In this section, we introduce the notion of a product structure on a 3-Bihom-
Lie algebra via Nijenhuis operators. We find four special conditions, each of
which gives a special decomposition of the $3$-Bihom-Lie algebra. At the end
of this section, we give some examples.
###### Definition 3.1.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a 3-Bihom-Lie algebra. An almost
product structure on $L$ is a linear map $E:L\rightarrow L$ satisfying
$E^{2}=\rm Id$ $(E\neq\pm\rm Id)$, $\alpha E=E\alpha$ and $\beta E=E\beta$.
An almost product structure is called a product structure if $\forall x,y,z\in
L,$
$\displaystyle E[x,y,z]=$ $\displaystyle[Ex,Ey,Ez]+[Ex,y,z]+[x,Ey,z]+[x,y,Ez]$
$\displaystyle-E[Ex,Ey,z]-E[x,Ey,Ez]-E[Ex,y,Ez].$ (1)
###### Remark 3.2.
If $E$ is a Nijenhuis operator on a 3-Bihom-Lie algebra with $E^{2}=\rm Id$,
then $E$ is a product structure.
###### Theorem 3.3.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a 3-Bihom-Lie algebra. Then
there is a product structure on $L$ if and only if $L=L_{+}\oplus L_{-},$
where $L_{+}$ and $L_{-}$ are Bihom subalgebras of $L$.
###### Proof.
Let $E$ be a product structure on $L$. By $E^{2}=\rm Id$, we have that
$L=L_{+}\oplus L_{-}$, where $L_{+}$ and $L_{-}$ are eigenspaces of $L$
associated to the eigenvalues 1 and -1 respectively, i.e., $L_{+}=\\{x\in
L|Ex=x\\},~{}L_{-}=\\{x\in L|Ex=-x\\}$. For all $x_{1},x_{2},x_{3}\in L_{+}$,
we can show
$\displaystyle E[x_{1},x_{2},x_{3}]$
$\displaystyle=[Ex_{1},Ex_{2},Ex_{3}]+[Ex_{1},x_{2},x_{3}]+[x_{1},Ex_{2},x_{3}]+[x_{1},x_{2},Ex_{3}]$
$\displaystyle\quad-E[Ex_{1},Ex_{2},x_{3}]-E[x_{1},Ex_{2},Ex_{3}]-E[Ex_{1},x_{2},Ex_{3}]$
$\displaystyle=4[x_{1},x_{2},x_{3}]-3E[x_{1},x_{2},x_{3}].$
Thus, we have $E[x_{1},x_{2},x_{3}]=[x_{1},x_{2},x_{3}]$, which implies that
$[x_{1},x_{2},x_{3}]\in L_{+}$. Since $E\alpha(x_{1})=\alpha
E(x_{1})=\alpha(x_{1})$, we get $\alpha(x_{1})\in L_{+}$. Similarly, we have
$\beta(x_{1})\in L_{+}$. So $L_{+}$ is a Bihom subalgebra. In the same way, we
can show that $L_{-}$ is also a Bihom subalgebra.
Conversely, define a linear map $E:L\rightarrow L$ by
$E(x+y)=x-y,~{}\forall x\in L_{+},~{}y\in L_{-}.$ (2)
Obviously, $E^{2}=\rm Id$. Since $L_{+}$ and $L_{-}$ are Bihom subalgebras of
$L$, for all $x\in L_{+},y\in L_{-}$, we have
$E\alpha(x+y)=E(\alpha(x)+\alpha(y))=\alpha(x)-\alpha(y)=\alpha(x-y)=\alpha
E(x+y),$
which implies that $E\alpha=\alpha E$. Similarly, we have $E\beta=\beta E$. In
addition, for all $x_{i}\in L_{+},~{}y_{j}\in L_{-},~{}i,j=1,2,3$, we can
obtain
$\displaystyle[E(x_{1}+y_{1}),E(x_{2}+y_{2}),E(x_{3}+y_{3})]+[E(x_{1}+y_{1}),x_{2}+y_{2},x_{3}+y_{3}]$
$\displaystyle+[x_{1}+y_{1},E(x_{2}+y_{2}),x_{3}+y_{3}]+[x_{1}+y_{1},x_{2}+y_{2},E(x_{3}+y_{3})]$
$\displaystyle-E([E(x_{1}+y_{1}),E(x_{2}+y_{2}),x_{3}+y_{3}]+[E(x_{1}+y_{1}),x_{2}+y_{2},E(x_{3}+y_{3})]$
$\displaystyle+[x_{1}+y_{1},E(x_{2}+y_{2}),E(x_{3}+y_{3})])$
$\displaystyle=[x_{1}-y_{1},x_{2}-y_{2},x_{3}-y_{3}]+[x_{1}-y_{1},x_{2}+y_{2},x_{3}+y_{3}]+[x_{1}+y_{1},x_{2}-y_{2},x_{3}+y_{3}]$
$\displaystyle+[x_{1}+y_{1},x_{2}+y_{2},x_{3}-y_{3}]-E([x_{1}-y_{1},x_{2}-y_{2},x_{3}+y_{3}]+[x_{1}-y_{1},x_{2}+y_{2},x_{3}-y_{3}]$
$\displaystyle+[x_{1}+y_{1},x_{2}-y_{2},x_{3}-y_{3}])$
$\displaystyle=4[x_{1},x_{2},x_{3}]-4[y_{1},y_{2},y_{3}]-E(3[x_{1},x_{2},x_{3}]-[x_{1},x_{2},y_{3}]-[x_{1},y_{2},x_{3}]-[x_{1},y_{2},y_{3}]$
$\displaystyle-[y_{1},x_{2},x_{3}]-[y_{1},x_{2},y_{3}]-[y_{1},y_{2},x_{3}]+3[y_{1},y_{2},y_{3}])$
$\displaystyle=E([x_{1},x_{2},x_{3}]+[x_{1},x_{2},y_{3}]+[x_{1},y_{2},x_{3}]+[x_{1},y_{2},y_{3}]+[y_{1},x_{2},x_{3}]+[y_{1},x_{2},y_{3}]$
$\displaystyle+[y_{1},y_{2},x_{3}]+[y_{1},y_{2},y_{3}])$
$\displaystyle=E([x_{1}+y_{1},x_{2}+y_{2},x_{3}+y_{3}]).$
Therefore, $E$ is a product structure on $L$. ∎
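Numerically, the eigenspace decomposition used in Theorem 3.3 is computed from the projections $\frac{1}{2}(\mathrm{Id}\pm E)$. The sketch below (our illustration; the particular involution is an arbitrary choice) checks this splitting for a linear map with $E^{2}=\rm Id$, $E\neq\pm\rm Id$:

```python
import numpy as np

# an involution E (E^2 = Id, E != ±Id) on R^3, e.g. a reflection
E = np.diag([1.0, 1.0, -1.0])
I = np.eye(3)
assert np.allclose(E @ E, I)

# projections onto the eigenspaces L_+ and L_-
P_plus, P_minus = (I + E) / 2, (I - E) / 2

x = np.array([1.0, 2.0, 3.0])
x_plus, x_minus = P_plus @ x, P_minus @ x

print(np.allclose(x, x_plus + x_minus))      # True: L = L_+ ⊕ L_-
print(np.allclose(E @ x_plus, x_plus))       # True: x_+ has eigenvalue 1
print(np.allclose(E @ x_minus, -x_minus))    # True: x_- has eigenvalue -1
print(np.allclose(E @ x, x_plus - x_minus))  # True: E(x_+ + x_-) = x_+ - x_-
```

The last line is exactly formula (2) from the converse direction of the proof.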
###### Proposition 3.4.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a $3$-Bihom-Lie algebra with
$\alpha,\beta$ surjective and $E$ be an almost product structure on $L$. If
$E$ satisfies the following equation
$E[x,y,z]=[Ex,y,z],~{}\forall x,y,z\in L,$ (3)
then $E$ is a product structure on $L$ such that
$[L_{+},L_{+},L_{-}]=[L_{-},L_{-},L_{+}]=0$, i.e., $L$ is the direct sum of
$L_{+}$ and $L_{-}$.
###### Proof.
By $(\ref{23})$ and $E^{2}=\rm Id$, we have
$\displaystyle[Ex,Ey,Ez]+[Ex,y,z]+[x,Ey,z]+[x,y,Ez]$
$\displaystyle-E[Ex,Ey,z]-E[x,Ey,Ez]-E[Ex,y,Ez]$ $\displaystyle=$
$\displaystyle[Ex,Ey,Ez]+E[x,y,z]+[x,Ey,z]+[x,y,Ez]$
$\displaystyle-[E^{2}x,Ey,z]-[Ex,Ey,Ez]-[E^{2}x,y,Ez]$ $\displaystyle=$
$\displaystyle E[x,y,z].$
Thus $E$ is a product structure on $L$. By Theorem 3.3, we have $L=L_{+}\oplus
L_{-}$, where $L_{+}$ and $L_{-}$ are Bihom subalgebras. For all
$x_{1},x_{2}\in L_{+},x_{3}\in L_{-}$, on one hand we have
$E[x_{1},x_{2},x_{3}]=[Ex_{1},x_{2},x_{3}]=[x_{1},x_{2},x_{3}].$
On the other hand, since $\alpha,\beta$ are surjective, there exist
$\tilde{x}_{1},\tilde{x}_{2}\in L_{+},\tilde{x}_{3}\in L_{-}$ such that
$x_{1}=\beta(\tilde{x}_{1}),x_{2}=\beta(\tilde{x}_{2})$ and
$x_{3}=\alpha(\tilde{x}_{3})$. So we can get
$\displaystyle
E[x_{1},x_{2},x_{3}]=E[\beta(\tilde{x}_{1}),\beta(\tilde{x}_{2}),\alpha(\tilde{x}_{3})]=E[\beta(\tilde{x}_{3}),\beta(\tilde{x}_{1}),\alpha(\tilde{x}_{2})]$
$\displaystyle=$
$\displaystyle[E\beta(\tilde{x}_{3}),\beta(\tilde{x}_{1}),\alpha(\tilde{x}_{2})]=-[\beta(\tilde{x}_{3}),\beta(\tilde{x}_{1}),\alpha(\tilde{x}_{2})]=-[\beta(\tilde{x}_{1}),\beta(\tilde{x}_{2}),\alpha(\tilde{x}_{3})]$
$\displaystyle=$ $\displaystyle-[x_{1},x_{2},x_{3}].$
Thus, we obtain $[L_{+},L_{+},L_{-}]=0$. Similarly, we have
$[L_{-},L_{-},L_{+}]=0$. The proof is finished. ∎
###### Definition 3.5.
An almost product structure $E$ on a $3$-Bihom Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is called a strict product structure if
$(\ref{23})$ holds.
###### Corollary 3.6.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a $3$-Bihom Lie algebra with
$\alpha,\beta$ surjective. Then $L$ has a strict product structure if and only
if $L=L_{+}\oplus L_{-}$, where $L_{+}$ and $L_{-}$ are Bihom subalgebras of
$L$ such that $[L_{+},L_{+},L_{-}]=0$ and $[L_{-},L_{-},L_{+}]=0$.
###### Proof.
Let $E$ be a strict product structure on $L$. By Proposition 3.4 and Theorem
3.3, we can obtain $L=L_{+}\oplus L_{-}$, where $L_{+}$ and $L_{-}$ are Bihom
subalgebras of $L$ such that $[L_{+},L_{+},L_{-}]=0$ and
$[L_{-},L_{-},L_{+}]=0$.
Conversely, we can define a linear endomorphism $E$ by $(\ref{E defin})$.
Since $\alpha,\beta$ are surjective and
$[L_{+},L_{+},L_{-}]=[L_{-},L_{-},L_{+}]=0$, we also have
$[L_{+},L_{-},L_{+}]=[L_{-},L_{+},L_{+}]=[L_{+},L_{-},L_{-}]=[L_{-},L_{+},L_{-}]=0$.
For all $x_{i}\in L_{+},y_{j}\in L_{-},i,j=1,2,3$, we can show
$\displaystyle E[x_{1}+y_{1},x_{2}+y_{2},x_{3}+y_{3}]$ $\displaystyle=$
$\displaystyle E([x_{1},x_{2},x_{3}]+[y_{1},y_{2},y_{3}])$ $\displaystyle=$
$\displaystyle[x_{1},x_{2},x_{3}]-[y_{1},y_{2},y_{3}]$ $\displaystyle=$
$\displaystyle[x_{1}-y_{1},x_{2}+y_{2},x_{3}+y_{3}]$ $\displaystyle=$
$\displaystyle[E(x_{1}+y_{1}),x_{2}+y_{2},x_{3}+y_{3}].$
Thus $E$ is a strict product structure on $L$. ∎
###### Proposition 3.7.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a $3$-Bihom Lie algebra and $E$
be an almost product structure on $L$. If $E$ satisfies the following equation
$[x,y,z]=-[x,Ey,Ez]-[Ex,y,Ez]-[Ex,Ey,z],\forall x,y,z\in L,$ (4)
then $E$ is a product structure on $L$.
###### Proof.
Using $(\ref{26})$ and $E^{2}=\rm Id$, we have
$\displaystyle[Ex,Ey,Ez]+[Ex,y,z]+[x,Ey,z]+[x,y,Ez]$
$\displaystyle-E[Ex,Ey,z]-E[x,Ey,Ez]-E[Ex,y,Ez]$ $\displaystyle=$
$\displaystyle-[Ex,E^{2}y,E^{2}z]-[E^{2}x,Ey,E^{2}z]-[E^{2}x,E^{2}y,Ez]$
$\displaystyle+[Ex,y,z]+[x,Ey,z]+[x,y,Ez]+E[x,y,z]$ $\displaystyle=$
$\displaystyle E[x,y,z].$
Thus $E$ is a product structure on $L$. ∎
###### Definition 3.8.
An almost product structure $E$ on a $3$-Bihom Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is called an abelian product structure
if $(\ref{26})$ holds.
###### Corollary 3.9.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a $3$-Bihom Lie algebra. Then
there is an abelian product structure on $L$ if and only if $L=L_{+}\oplus
L_{-},$ where $L_{+}$ and $L_{-}$ are abelian Bihom subalgebras of $L$.
###### Proof.
Let $E$ be an abelian product structure on $L$. By Theorem 3.3 and Proposition
3.7, we only need to show that $L_{+}$ and $L_{-}$ are abelian Bihom
subalgebras of $L$. For all $x_{1},x_{2},x_{3}\in L_{+}$, we have
$\displaystyle[x_{1},x_{2},x_{3}]$
$\displaystyle=-[Ex_{1},Ex_{2},x_{3}]-[x_{1},Ex_{2},Ex_{3}]-[Ex_{1},x_{2},Ex_{3}]$
$\displaystyle=-3[x_{1},x_{2},x_{3}],$
which implies that $[x_{1},x_{2},x_{3}]=0$. Similarly,
$[y_{1},y_{2},y_{3}]=0,\forall y_{1},y_{2},y_{3}\in L_{-}$. Thus, $L_{+}$ and
$L_{-}$ are abelian Bihom subalgebras.
Conversely, define a linear endomorphism $E:L\rightarrow L$ by $(\ref{E
defin})$. Then for all $x_{i}\in L_{+},y_{j}\in L_{-},i,j=1,2,3$,
$\displaystyle-[x_{1}+y_{1},E(x_{2}+y_{2}),E(x_{3}+y_{3})]-[E(x_{1}+y_{1}),E(x_{2}+y_{2}),x_{3}+y_{3}]$
$\displaystyle-[E(x_{1}+y_{1}),x_{2}+y_{2},E(x_{3}+y_{3})]$ $\displaystyle=$
$\displaystyle-[x_{1}+y_{1},x_{2}-y_{2},x_{3}-y_{3}]-[x_{1}-y_{1},x_{2}-y_{2},x_{3}+y_{3}]-[x_{1}-y_{1},x_{2}+y_{2},x_{3}-y_{3}]$
$\displaystyle=$
$\displaystyle[x_{1},x_{2},y_{3}]+[x_{1},y_{2},x_{3}]+[x_{1},y_{2},y_{3}]+[y_{1},x_{2},x_{3}]+[y_{1},x_{2},y_{3}]+[y_{1},y_{2},x_{3}]$
$\displaystyle=$ $\displaystyle[x_{1}+y_{1},x_{2}+y_{2},x_{3}+y_{3}],$
i.e., $E$ is an abelian product structure on $L$. ∎
###### Proposition 3.10.
Let $E$ be an almost product structure on a $3$-Bihom Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. If $E$ satisfies the following
equation, for all $x,y,z\in L$,
$[x,y,z]=E[Ex,y,z]+E[x,Ey,z]+E[x,y,Ez],$ (5)
then $E$ is an abelian product structure on $L$ such that
$[L_{+},L_{+},L_{-}]\subseteq L_{+},[L_{-},L_{-},L_{+}]\subseteq L_{-}.$
###### Proof.
By $(\ref{30})$ and $E^{2}=\rm Id$ we have
$\displaystyle[Ex,Ey,Ez]+[Ex,y,z]+[x,Ey,z]+[x,y,Ez]$
$\displaystyle-E[Ex,Ey,z]-E[x,Ey,Ez]-E[Ex,y,Ez]$ $\displaystyle=$
$\displaystyle E[x,Ey,Ez]+E[Ex,y,Ez]+E[Ex,Ey,z]+E[x,y,z]$
$\displaystyle-E[Ex,Ey,z]-E[x,Ey,Ez]-E[Ex,y,Ez]$ $\displaystyle=$
$\displaystyle E[x,y,z].$
Thus, we obtain that $E$ is a product structure on $L$. For all
$x_{1},x_{2},x_{3}\in L_{+}$, by $(\ref{30})$, we have
$\displaystyle[x_{1},x_{2},x_{3}]$
$\displaystyle=E[Ex_{1},x_{2},x_{3}]+E[x_{1},Ex_{2},x_{3}]+E[x_{1},x_{2},Ex_{3}]$
$\displaystyle=3E[x_{1},x_{2},x_{3}]=3[x_{1},x_{2},x_{3}].$
So we get $[L_{+},L_{+},L_{+}]=0$. Similarly, we have $[L_{-},L_{-},L_{-}]=0$.
By Corollary 3.9, $E$ is an abelian product structure on $L$. Moreover, for
all $x_{1},x_{2}\in L_{+},y_{1}\in L_{-}$, we have
$\displaystyle[x_{1},x_{2},y_{1}]$
$\displaystyle=E[Ex_{1},x_{2},y_{1}]+E[x_{1},Ex_{2},y_{1}]+E[x_{1},x_{2},Ey_{1}]$
$\displaystyle=E[x_{1},x_{2},y_{1}],$
which implies that $[L_{+},L_{+},L_{-}]\subseteq L_{+}$. Similarly, we have
$[L_{-},L_{-},L_{+}]\subseteq L_{-}$. ∎
###### Remark 3.11.
In Proposition 3.10 we can also obtain that $[L_{+},L_{-},L_{+}]\subseteq
L_{+}$, $[L_{-},L_{+},L_{+}]\subseteq L_{+}$, $[L_{-},L_{+},L_{-}]\subseteq
L_{-}$, $[L_{+},L_{-},L_{-}]\subseteq L_{-}$.
###### Definition 3.12.
An almost product structure $E$ on a $3$-Bihom Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is called a strong abelian product
structure if $(\ref{30})$ holds.
###### Corollary 3.13.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a $3$-Bihom Lie algebra. Then
there is a strong abelian product structure on $L$ if and only if
$L=L_{+}\oplus L_{-},$ where $L_{+}$ and $L_{-}$ are abelian Bihom subalgebras
of $L$ such that $[L_{+},L_{+},L_{-}]\subseteq L_{+}$,
$[L_{+},L_{-},L_{+}]\subseteq L_{+},[L_{-},L_{+},L_{+}]\subseteq
L_{+},[L_{-},L_{+},L_{-}]\subseteq L_{-},[L_{+},L_{-},L_{-}]\subseteq L_{-}$
and $[L_{-},L_{-},L_{+}]\subseteq L_{-}$.
###### Proposition 3.14.
Let $E$ be an almost product structure on a $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. If $E$ satisfies the following
equation
$E[x,y,z]=[Ex,Ey,Ez],~{}\forall x,y,z\in L,$ (6)
then $E$ is a product structure on $L$ such that $[L_{+},L_{+},L_{-}]\subseteq
L_{-}$, $[L_{-},L_{-},L_{+}]\subseteq L_{+}$.
###### Proof.
By $(\ref{31})$ and $E^{2}=\rm Id$ we have
$\displaystyle[Ex,Ey,Ez]+[Ex,y,z]+[x,Ey,z]+[x,y,Ez]$
$\displaystyle-E[Ex,Ey,z]-E[x,Ey,Ez]-E[Ex,y,Ez]$ $\displaystyle=$
$\displaystyle E[x,y,z]+[Ex,y,z]+[x,Ey,z]+[x,y,Ez]$
$\displaystyle-[E^{2}x,E^{2}y,Ez]-[Ex,E^{2}y,E^{2}z]-[E^{2}x,Ey,E^{2}z]$
$\displaystyle=$ $\displaystyle E[x,y,z].$
Thus, $E$ is a product structure on $L$. Moreover, for all $x_{1},x_{2}\in
L_{+},y_{1}\in L_{-}$, we have
$\displaystyle
E[x_{1},x_{2},y_{1}]=[Ex_{1},Ex_{2},Ey_{1}]=-[x_{1},x_{2},y_{1}],$
which implies that $[L_{+},L_{+},L_{-}]\subseteq L_{-}$. Similarly, we have
$[L_{-},L_{-},L_{+}]\subseteq L_{+}$. ∎
###### Remark 3.15.
In Proposition 3.14 we can also obtain that
$[L_{+},L_{-},L_{+}]\subseteq L_{-}$, $[L_{-},L_{+},L_{+}]\subseteq L_{-}$,
$[L_{-},L_{+},L_{-}]\subseteq L_{+}$, $[L_{+},L_{-},L_{-}]\subseteq L_{+}$.
###### Definition 3.16.
An almost product structure $E$ on a $3$-Bihom Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is called a perfect product structure
if $(\ref{31})$ holds.
###### Corollary 3.17.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a $3$-Bihom Lie algebra. Then
$L$ has a perfect product structure if and only if $L=L_{+}\oplus L_{-},$
where $L_{+}$ and $L_{-}$ are Bihom subalgebras of $L$ such that
$[L_{+},L_{+},L_{-}]\subseteq L_{-}$, $[L_{+},L_{-},L_{+}]\subseteq L_{-}$,
$[L_{-},L_{+},L_{+}]\subseteq L_{-}$, $[L_{-},L_{-},L_{+}]\subseteq L_{+}$,
$[L_{-},L_{+},L_{-}]\subseteq L_{+}$ and $[L_{+},L_{-},L_{-}]\subseteq L_{+}$.
###### Corollary 3.18.
A strict product structure on a $3$-Bihom-Lie algebra is a perfect product
structure.
###### Example 3.19.
Let $L$ be a $3$-dimensional vector space with basis
$\\{e_{1},e_{2},e_{3}\\}$, with the non-zero brackets and the maps
$\alpha,\beta$ given by
$[e_{1},e_{2},e_{3}]=[e_{1},e_{3},e_{2}]=[e_{2},e_{3},e_{1}]=e_{2},~{}[e_{2},e_{1},e_{3}]=[e_{3},e_{1},e_{2}]=[e_{3},e_{2},e_{1}]=-e_{2},$
$\alpha=\rm{Id},~{}\beta=\left(\begin{array}[]{ccc}-1&0&0\\\ 0&1&0\\\
0&0&-1\end{array}\right).$
Then $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is a $3$-Bihom Lie algebra.
Moreover, $E=\left(\begin{array}[]{ccc}-1&0&0\\\ 0&1&0\\\
0&0&-1\end{array}\right)$ is a perfect product structure and an abelian
product structure, while $E=\left(\begin{array}[]{ccc}-1&0&0\\\ 0&1&0\\\
0&0&1\end{array}\right)$ and $E=\left(\begin{array}[]{ccc}1&0&0\\\ 0&1&0\\\
0&0&-1\end{array}\right)$ are strong abelian product structures.
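Example 3.19 can be verified by brute force over the basis. The sketch below is our own check (not part of the paper): it encodes the structure constants, tests all the 3-Bihom-Lie axioms of Definition 2.1, and then checks the perfect condition (6) and abelian condition (4) for the first matrix $E$, and the strong abelian condition (5) for the other two.

```python
import numpy as np
from itertools import product

d = 3
C = np.zeros((d, d, d, d))  # bracket(e_i, e_j, e_k) = C[i, j, k, :]
e2 = np.array([0.0, 1.0, 0.0])  # the basis vector e_2
for (i, j, k), sgn in {(0, 1, 2): 1, (0, 2, 1): 1, (1, 2, 0): 1,
                       (1, 0, 2): -1, (2, 0, 1): -1, (2, 1, 0): -1}.items():
    C[i, j, k] = sgn * e2

def br(x, y, z):
    # trilinear bracket determined by the structure constants
    return np.einsum('i,j,k,ijkl->l', x, y, z, C)

alpha = np.eye(d)
beta = np.diag([-1.0, 1.0, -1.0])
B = np.eye(d)  # standard basis as rows
ok = lambda a, b: np.allclose(a, b)

assert ok(alpha @ beta, beta @ alpha)
for x, y, z in product(B, repeat=3):
    # alpha, beta multiplicative; Bihom-skewsymmetry
    assert ok(alpha @ br(x, y, z), br(alpha @ x, alpha @ y, alpha @ z))
    assert ok(beta @ br(x, y, z), br(beta @ x, beta @ y, beta @ z))
    assert ok(br(beta @ x, beta @ y, alpha @ z), -br(beta @ y, beta @ x, alpha @ z))
    assert ok(br(beta @ x, beta @ y, alpha @ z), -br(beta @ x, beta @ z, alpha @ y))
b2 = beta @ beta
for u, v, x, y, z in product(B, repeat=5):
    # 3-BiHom-Jacobi identity
    lhs = br(b2 @ u, b2 @ v, br(beta @ x, beta @ y, alpha @ z))
    rhs = (br(b2 @ y, b2 @ z, br(beta @ u, beta @ v, alpha @ x))
           - br(b2 @ x, b2 @ z, br(beta @ u, beta @ v, alpha @ y))
           + br(b2 @ x, b2 @ y, br(beta @ u, beta @ v, alpha @ z)))
    assert ok(lhs, rhs)

E1 = np.diag([-1.0, 1.0, -1.0])
E2, E3 = np.diag([-1.0, 1.0, 1.0]), np.diag([1.0, 1.0, -1.0])
for x, y, z in product(B, repeat=3):
    # E1 is perfect (6) and abelian (4)
    assert ok(E1 @ br(x, y, z), br(E1 @ x, E1 @ y, E1 @ z))
    assert ok(br(x, y, z),
              -br(x, E1 @ y, E1 @ z) - br(E1 @ x, y, E1 @ z) - br(E1 @ x, E1 @ y, z))
    # E2 and E3 are strong abelian (5)
    for E in (E2, E3):
        assert ok(br(x, y, z),
                  E @ br(E @ x, y, z) + E @ br(x, E @ y, z) + E @ br(x, y, E @ z))
print("Example 3.19 verified")
```

Because every condition is (tri)linear, checking it on basis triples suffices.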
###### Example 3.20.
Let $L$ be a $4$-dimensional $3$-Lie algebra with basis
$\\{e_{1},e_{2},e_{3},e_{4}\\}$ and non-zero brackets
$[e_{1},e_{2},e_{3}]=e_{4},~{}[e_{2},e_{3},e_{4}]=e_{1},~{}[e_{1},e_{3},e_{4}]=e_{2},~{}[e_{1},e_{2},e_{4}]=e_{3}.$
We can define maps $\alpha,\beta:L\rightarrow L$ by
$\alpha=\left(\begin{array}[]{cccc}-1&0&0&0\\\ 0&1&0&0\\\ 0&0&1&0\\\
0&0&0&-1\end{array}\right)$ and $\beta=\left(\begin{array}[]{cccc}-1&0&0&0\\\
0&-1&0&0\\\ 0&0&1&0\\\ 0&0&0&1\end{array}\right).$ Then
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is a $3$-Bihom Lie algebra. Thus
$E=\left(\begin{array}[]{cccc}-1&0&0&0\\\ 0&-1&0&0\\\ 0&0&1&0\\\
0&0&0&1\end{array}\right),E=\left(\begin{array}[]{cccc}-1&0&0&0\\\ 0&1&0&0\\\
0&0&1&0\\\ 0&0&0&-1\end{array}\right),E=\left(\begin{array}[]{cccc}-1&0&0&0\\\
0&1&0&0\\\ 0&0&-1&0\\\ 0&0&0&1\end{array}\right)$
are perfect and abelian product structures on $L$.
## 4 Complex structures on $3$-Bihom-Lie algebras
In this section, we introduce the notion of a complex structure on a real
$3$-Bihom-Lie algebra. There are also some special complex structures, which
parallel the case of product structures.
###### Definition 4.1.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a $3$-Bihom-Lie algebra. An
almost complex structure on $L$ is a linear map $J:L\rightarrow L$ satisfying
$J^{2}=-\rm Id$, $J\alpha=\alpha J$ and $J\beta=\beta J$. An almost complex
structure is called a complex structure if the following condition holds$:$
$\displaystyle J[x,y,z]=$
$\displaystyle-[Jx,Jy,Jz]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz]$
$\displaystyle+J[Jx,Jy,z]+J[x,Jy,Jz]+J[Jx,y,Jz].$ (7)
###### Remark 4.2.
A complex structure $J$ on a $3$-Bihom-Lie algebra $L$ means that $J$ is a
Nijenhuis operator satisfying $J^{2}=-\rm Id$.
###### Remark 4.3.
We can also use Definition 4.1 to define a complex structure on a complex
$3$-Bihom-Lie algebra by taking $J$ to be $\mathds{C}$-linear. However, this
is not very interesting, because for a complex $3$-Bihom-Lie algebra there is
a one-to-one correspondence between $\mathds{C}$-linear complex structures
and product structures (see Proposition 4.23).
Now consider the complexification
$L_{\mathds{C}}=L\otimes_{\mathds{R}}\mathds{C}=\\{x+iy|x,y\in L\\}$ of a
real $3$-Bihom-Lie algebra $L$. Obviously
$(L_{\mathds{C}},[\cdot,\cdot,\cdot]_{L_{\mathds{C}}},\alpha_{\mathds{C}},\beta_{\mathds{C}})$
is a $3$-Bihom-Lie algebra, where $[\cdot,\cdot,\cdot]_{L_{\mathds{C}}}$ is
defined by extending the bracket on $L$ complex trilinearly, and
$\alpha_{\mathds{C}}(x+iy)=\alpha(x)+i\alpha(y),\beta_{\mathds{C}}(x+iy)=\beta(x)+i\beta(y)$
for all $x,y\in L$. Let $\sigma$ be the conjugation map in $L_{\mathds{C}}$,
that is $\sigma(x+iy)=x-iy,~{}\forall x,y\in L.$ Then, $\sigma$ is a complex
antilinear, involutive automorphism of the complex vector space
$L_{\mathds{C}}$.
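The eigenspace splitting used below can be illustrated concretely: for any real matrix $J$ with $J^{2}=-\mathrm{Id}$, the vectors $x-iJx$ and $x+iJx$ span the $\pm i$ eigenspaces in the complexification, and the conjugation $\sigma$ swaps them. A small numerical illustration (ours, not from the paper; the choice of $J$ as a rotation is arbitrary):

```python
import numpy as np

# an almost complex structure on R^2: rotation by 90 degrees
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(J @ J, -np.eye(2))

x = np.array([3.0, 5.0])
v_plus = x - 1j * (J @ x)   # element of L_i
v_minus = x + 1j * (J @ x)  # element of L_{-i} = sigma(L_i)

print(np.allclose(J @ v_plus, 1j * v_plus))     # True: eigenvalue +i
print(np.allclose(J @ v_minus, -1j * v_minus))  # True: eigenvalue -i
print(np.allclose(np.conj(v_plus), v_minus))    # True: sigma swaps the eigenspaces
print(np.allclose(v_plus + v_minus, 2 * x))     # True: the real part recovers x
```

Indeed $J(x-iJx)=Jx-iJ^{2}x=Jx+ix=i(x-iJx)$, which is the computation behind the description of $L_{i}$ in Theorem 4.5.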
###### Remark 4.4.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra and
$J$ be a complex structure on $L$. We extend $J$ complex linearly; the
extension is denoted $J_{\mathds{C}}$, i.e.,
$J_{\mathds{C}}:L_{\mathds{C}}\rightarrow L_{\mathds{C}}$ is defined by
$J_{\mathds{C}}(x+iy)=Jx+iJy,~{}\forall x,y\in L.$
Then $J_{\mathds{C}}$ is a complex linear endomorphism on $L_{\mathds{C}}$
such that
$J_{\mathds{C}}\alpha_{\mathds{C}}=\alpha_{\mathds{C}}J_{\mathds{C}}$,
$J_{\mathds{C}}\beta_{\mathds{C}}=\beta_{\mathds{C}}J_{\mathds{C}}$,
$J_{\mathds{C}}^{2}=-{\rm{Id}}_{L_{\mathds{C}}}$ and $(\ref{eq4.1})$ holds,
i.e., $J_{\mathds{C}}$ is a complex structure on $L_{\mathds{C}}$.
###### Theorem 4.5.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra.
Then there is a complex structure on $L$ if and only if
$L_{\mathds{C}}=Q\oplus P,$ where $Q$ and $P=\sigma(Q)$ are Bihom subalgebras
of $L_{\mathds{C}}$.
###### Proof.
Suppose $J$ is a complex structure on $L$. By Remark 4.4 $J_{\mathds{C}}$ is a
complex structure on $L_{\mathds{C}}$. Denote by $L_{\pm i}$ the corresponding
eigenspaces of $L_{\mathds{C}}$ associated to the eigenvalues $\pm i$ and so
$L_{\mathds{C}}=L_{i}\oplus L_{-i}$. We can also get
$\displaystyle L_{i}$ $\displaystyle=\\{x\in
L_{\mathds{C}}|J_{\mathds{C}}(x)=ix\\}=\\{x-iJx|x\in L\\},$ $\displaystyle
L_{-i}$ $\displaystyle=\\{x\in
L_{\mathds{C}}|J_{\mathds{C}}(x)=-ix\\}=\\{x+iJx|x\in L\\}.$
So we have $L_{-i}=\sigma(L_{i})$, $\alpha_{\mathds{C}}(L_{i})\subseteq
L_{i}$, $\beta_{\mathds{C}}(L_{i})\subseteq L_{i}$,
$\alpha_{\mathds{C}}(L_{-i})\subseteq L_{-i}$ and
$\beta_{\mathds{C}}(L_{-i})\subseteq L_{-i}$. Next, for all $X,Y,Z\in L_{i}$,
we have
$\displaystyle J_{\mathds{C}}[X,Y,Z]_{L_{\mathds{C}}}=$
$\displaystyle-[J_{\mathds{C}}X,J_{\mathds{C}}Y,J_{\mathds{C}}Z]_{L_{\mathds{C}}}+[J_{\mathds{C}}X,Y,Z]_{L_{\mathds{C}}}+[X,J_{\mathds{C}}Y,Z]_{L_{\mathds{C}}}+[X,Y,J_{\mathds{C}}Z]_{L_{\mathds{C}}}$
$\displaystyle+J_{\mathds{C}}[J_{\mathds{C}}X,J_{\mathds{C}}Y,Z]_{L_{\mathds{C}}}+J_{\mathds{C}}[X,J_{\mathds{C}}Y,J_{\mathds{C}}Z]_{L_{\mathds{C}}}+J_{\mathds{C}}[J_{\mathds{C}}X,Y,J_{\mathds{C}}Z]_{L_{\mathds{C}}}$
$\displaystyle=$ $\displaystyle
4i[X,Y,Z]_{L_{\mathds{C}}}-3J_{\mathds{C}}[X,Y,Z]_{L_{\mathds{C}}}.$
Thus, $[X,Y,Z]_{L_{\mathds{C}}}\in L_{i}$, which implies $L_{i}$ is a Bihom
subalgebra. Similarly $L_{-i}$ is also a Bihom subalgebra.
Conversely, we define a complex linear map
$J_{\mathds{C}}:L_{\mathds{C}}\rightarrow L_{\mathds{C}}$ by
$J_{\mathds{C}}(X+\sigma(Y))=iX-i\sigma(Y),~{}~{}\forall X,Y\in Q.$ (8)
Since $\sigma$ is a complex antilinear and involutive automorphism of
$L_{\mathds{C}}$, we have
$\displaystyle
J_{\mathds{C}}^{2}(X+\sigma(Y))=J_{\mathds{C}}(iX-i\sigma(Y))=J_{\mathds{C}}(iX+\sigma(iY))=i(iX)-i\sigma(iY)=-X-\sigma(Y),$
i.e., $J_{\mathds{C}}^{2}=-\rm{Id}$. And
$\displaystyle\alpha_{\mathds{C}}J_{\mathds{C}}(X+\sigma(Y))$
$\displaystyle=\alpha_{\mathds{C}}(iX-i\sigma(Y))=i\alpha_{\mathds{C}}(X)-i\alpha_{\mathds{C}}(\sigma(Y))=J_{\mathds{C}}(\alpha_{\mathds{C}}(X)+\alpha_{\mathds{C}}(\sigma(Y)))$
$\displaystyle=J_{\mathds{C}}\alpha_{\mathds{C}}(X+\sigma(Y)),$
i.e., $J_{\mathds{C}}\alpha_{\mathds{C}}=\alpha_{\mathds{C}}J_{\mathds{C}}$.
Similarly,
$J_{\mathds{C}}\beta_{\mathds{C}}=\beta_{\mathds{C}}J_{\mathds{C}}$. Further
we can show that $J_{\mathds{C}}$ satisfies $(\ref{eq4.1})$. Since
$L_{\mathds{C}}=Q\oplus P$, for all $X,Y\in Q$ we have
$\displaystyle
J_{\mathds{C}}\sigma(X+\sigma(Y))=J_{\mathds{C}}(Y+\sigma(X))=iY-i\sigma(X)=\sigma(iX-i\sigma(Y))=\sigma
J_{\mathds{C}}(X+\sigma(Y)),$
which implies that $J_{\mathds{C}}\sigma=\sigma J_{\mathds{C}}$. Moreover,
since $\sigma(X)=X$ is equivalent to $X\in L$, we know that the set of fixed
points of $\sigma$ is the real vector space $L$. By
$J_{\mathds{C}}\sigma=\sigma J_{\mathds{C}}$, there is a well-defined
$J\in\rm{End}(L)$ given by $J\triangleq J_{\mathds{C}}|_{L}$. Since
$J_{\mathds{C}}^{2}=-\rm{Id}$,
$J_{\mathds{C}}\alpha_{\mathds{C}}=\alpha_{\mathds{C}}J_{\mathds{C}}$,
$J_{\mathds{C}}\beta_{\mathds{C}}=\beta_{\mathds{C}}J_{\mathds{C}}$ and
$J_{\mathds{C}}$ satisfies $(\ref{eq4.1})$, $J$ is a complex structure on $L$.
∎
###### Proposition 4.6.
Let $J$ be an almost complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. If $J$ satisfies
$J[x,y,z]=[Jx,y,z],\forall x,y,z\in L,$ (9)
then $J$ is a complex structure on $L$.
###### Proof.
Since $J^{2}=-\rm{Id}$ and $(\ref{32})$, we have
$\displaystyle-[Jx,Jy,Jz]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz]$
$\displaystyle+J[Jx,Jy,z]+J[x,Jy,Jz]+J[Jx,y,Jz]$ $\displaystyle=$
$\displaystyle-[Jx,Jy,Jz]+J[x,y,z]+[x,Jy,z]+[x,y,Jz]$
$\displaystyle+[J^{2}x,Jy,z]+[Jx,Jy,Jz]+[J^{2}x,y,Jz]$ $\displaystyle=$
$\displaystyle J[x,y,z].$
Thus, $J$ is a complex structure on $L$. ∎
###### Definition 4.7.
An almost complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is called a strict complex structure if
$(\ref{32})$ holds.
###### Corollary 4.8.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra with
$\alpha,\beta$ surjective. Then there is a strict complex structure on $L$ if
and only if $L_{\mathds{C}}=Q\oplus P,$ where $Q$ and $P=\sigma(Q)$ are Bihom
subalgebras of $L_{\mathds{C}}$ such that $[Q,Q,P]_{L_{\mathds{C}}}=0$ and
$[P,P,Q]_{L_{\mathds{C}}}=0$.
###### Proof.
Let $J$ be a strict complex structure on $L$. Then, $J_{\mathds{C}}$ is a
strict complex structure on the complex $3$-Bihom-Lie algebra
$(L_{\mathds{C}},[\cdot,\cdot,\cdot]_{L_{\mathds{C}}},\alpha_{\mathds{C}},\beta_{\mathds{C}})$.
By Proposition 4.6 and Theorem 4.5, we only need to show that
$[L_{i},L_{i},L_{-i}]_{L_{\mathds{C}}}=0$ and
$[L_{-i},L_{-i},L_{i}]_{L_{\mathds{C}}}=0$. For all $X,Y\in L_{i}$ and $Z\in
L_{-i}$, on one hand we have
$J_{\mathds{C}}[X,Y,Z]_{L_{\mathds{C}}}=[J_{\mathds{C}}X,Y,Z]_{L_{\mathds{C}}}=i[X,Y,Z]_{L_{\mathds{C}}}.$
On the other hand, since $\alpha,\beta$ are surjective, we have
$\alpha_{\mathds{C}},\beta_{\mathds{C}}$ are surjective. So there are
$\tilde{X},\tilde{Y}\in L_{i}$ and $\tilde{Z}\in L_{-i}$ such that
$X=\beta_{\mathds{C}}(\tilde{X}),Y=\beta_{\mathds{C}}(\tilde{Y})$ and
$Z=\alpha_{\mathds{C}}(\tilde{Z})$. We can get
$\displaystyle J_{\mathds{C}}[X,Y,Z]_{L_{\mathds{C}}}$
$\displaystyle=J_{\mathds{C}}[\beta_{\mathds{C}}(\tilde{X}),\beta_{\mathds{C}}(\tilde{Y}),\alpha_{\mathds{C}}(\tilde{Z})]_{L_{\mathds{C}}}=J_{\mathds{C}}[\beta_{\mathds{C}}(\tilde{Z}),\beta_{\mathds{C}}(\tilde{X}),\alpha_{\mathds{C}}(\tilde{Y})]_{L_{\mathds{C}}}$
$\displaystyle=[J_{\mathds{C}}\beta_{\mathds{C}}(\tilde{Z}),\beta_{\mathds{C}}(\tilde{X}),\alpha_{\mathds{C}}(\tilde{Y})]_{L_{\mathds{C}}}=-i[\beta_{\mathds{C}}(\tilde{Z}),\beta_{\mathds{C}}(\tilde{X}),\alpha_{\mathds{C}}(\tilde{Y})]_{L_{\mathds{C}}}$
$\displaystyle=-i[\beta_{\mathds{C}}(\tilde{X}),\beta_{\mathds{C}}(\tilde{Y}),\alpha_{\mathds{C}}(\tilde{Z})]_{L_{\mathds{C}}}=-i[X,Y,Z]_{L_{\mathds{C}}}.$
Thus, we obtain $[L_{i},L_{i},L_{-i}]_{L_{\mathds{C}}}=0$. Similarly,
$[L_{-i},L_{-i},L_{i}]_{L_{\mathds{C}}}=0$.
Conversely, we define a complex linear endomorphism
$J_{\mathds{C}}:L_{\mathds{C}}\rightarrow L_{\mathds{C}}$ by $(\ref{jc})$. By
the proof of Theorem 4.5, $J_{\mathds{C}}$ is an almost complex structure on
$L_{\mathds{C}}$. Because $\alpha_{\mathds{C}},\beta_{\mathds{C}}$ are
surjective and $[Q,Q,P]_{L_{\mathds{C}}}=[P,P,Q]_{L_{\mathds{C}}}=0$, we also
have
$[Q,P,Q]_{L_{\mathds{C}}}=[P,Q,Q]_{L_{\mathds{C}}}=[P,Q,P]_{L_{\mathds{C}}}=[Q,P,P]_{L_{\mathds{C}}}=0$.
Thus for all $X_{1},X_{2},X_{3}\in L_{i}$, $Y_{1},Y_{2},Y_{3}\in L_{-i}$, we
can show that
$\displaystyle
J_{\mathds{C}}[X_{1}+Y_{1},X_{2}+Y_{2},X_{3}+Y_{3}]_{L_{\mathds{C}}}$
$\displaystyle=$ $\displaystyle
J_{\mathds{C}}([X_{1},X_{2},X_{3}]_{L_{\mathds{C}}}+[Y_{1},Y_{2},Y_{3}]_{L_{\mathds{C}}})=i([X_{1},X_{2},X_{3}]_{L_{\mathds{C}}}-[Y_{1},Y_{2},Y_{3}]_{L_{\mathds{C}}})$
$\displaystyle=$
$\displaystyle([X_{1}-Y_{1},X_{2}+Y_{2},X_{3}+Y_{3}]_{L_{\mathds{C}}})=[iX_{1}-iY_{1},X_{2}+Y_{2},X_{3}+Y_{3}]_{L_{\mathds{C}}}$
$\displaystyle=$
$\displaystyle[J_{\mathds{C}}(X_{1}+Y_{1}),X_{2}+Y_{2},X_{3}+Y_{3}]_{L_{\mathds{C}}}.$
By the proof of Theorem 4.5, we obtain that $J\triangleq J_{\mathds{C}}|_{L}$
is a strict complex structure on the real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. The proof is finished. ∎
Let $J$ be an almost complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. We can define a complex vector space
structure on the real vector space $L$ by
$(a+bi)x\triangleq ax+bJx,\forall a,b\in\mathds{R},x\in L.$ (10)
Define two maps $\phi:L\rightarrow L_{i}$ and $\psi:L\rightarrow L_{-i}$ given
by $\phi(x)=\frac{1}{2}(x-iJx),\psi(x)=\frac{1}{2}(x+iJx)$. Clearly, $\phi$ is
a complex linear isomorphism and $\psi=\sigma\phi$ is a complex antilinear
isomorphism of complex vector spaces.
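As a quick check (ours, not spelled out in the text), complex linearity of $\phi$ with respect to the scalar multiplication (10) can be verified directly, using $J^{2}=-\mathrm{Id}$:

```latex
\begin{aligned}
\phi\big((a+bi)x\big) &= \phi(ax+bJx)
  = \tfrac{1}{2}\big(ax+bJx-iJ(ax+bJx)\big)
  = \tfrac{1}{2}\big(ax+bJx+i(bx-aJx)\big),\\
(a+bi)\phi(x) &= \tfrac{1}{2}(a+ib)(x-iJx)
  = \tfrac{1}{2}\big(ax+bJx+i(bx-aJx)\big),
\end{aligned}
```

so the two sides agree. The antilinearity of $\psi=\sigma\phi$ then follows, since $\sigma$ conjugates the complex scalar.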
Let $J$ be a strict complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ with $\alpha,\beta$ surjective. Then
with the complex vector space structure defined above,
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is a complex $3$-Bihom-Lie algebra. In
fact, using $(\ref{32})$ and $(\ref{complex})$ we can obtain
$\displaystyle[(a+bi)x,y,z]$ $\displaystyle=[ax+bJx,y,z]=a[x,y,z]+b[Jx,y,z]$
$\displaystyle=a[x,y,z]+bJ[x,y,z]=(a+bi)[x,y,z].$
Since $\alpha,\beta$ are surjective, the bracket $[\cdot,\cdot,\cdot]$ is
complex trilinear.
Let $J$ be a complex structure on $L$. Define a new bracket
$[\cdot,\cdot,\cdot]_{J}:\wedge^{3}L\rightarrow L$ by
$[x,y,z]_{J}=\frac{1}{4}([x,y,z]-[x,Jy,Jz]-[Jx,y,Jz]-[Jx,Jy,z]),\forall
x,y,z\in L.$ (11)
###### Proposition 4.9.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra and
$J$ be a complex structure on $L$. Then
$(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ is a real $3$-Bihom-Lie algebra.
Moreover, $J$ is a strict complex structure on
$(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$. When $\alpha,\beta$ are
surjective, the corresponding complex $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ is isomorphic to the complex
$3$-Bihom-Lie algebra $L_{i}$.
###### Proof.
First we can show that $(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ is a real
$3$-Bihom-Lie algebra. By $(\ref{eq4.1})$, for all $x,y,z\in L$, we have
$\displaystyle[\phi(x),\phi(y),\phi(z)]_{L_{\mathds{C}}}$ $\displaystyle=$
$\displaystyle\frac{1}{8}[x-iJx,y-iJy,z-iJz]_{L_{\mathds{C}}}$
$\displaystyle=$
$\displaystyle\frac{1}{8}([x,y,z]-[x,Jy,Jz]-[Jx,y,Jz]-[Jx,Jy,z])-\frac{1}{8}i([x,y,Jz]+[x,Jy,z]$
$\displaystyle+[Jx,y,z]-[Jx,Jy,Jz])$ $\displaystyle=$
$\displaystyle\frac{1}{8}([x,y,z]-[x,Jy,Jz]-[Jx,y,Jz]-[Jx,Jy,z])-\frac{1}{8}iJ([x,y,z]-[x,Jy,Jz]$
$\displaystyle-[Jx,y,Jz]-[Jx,Jy,z])$ $\displaystyle=$
$\displaystyle\frac{1}{2}[x,y,z]_{J}-\frac{1}{2}iJ[x,y,z]_{J}$
$\displaystyle=$ $\displaystyle\phi[x,y,z]_{J}.$ (12)
Thus, we have
$[x,y,z]_{J}=\phi^{-1}[\phi(x),\phi(y),\phi(z)]_{L_{\mathds{C}}}$.
In addition, we also have $\phi\beta=\beta_{\mathds{C}}\phi$ and
$\phi\alpha=\alpha_{\mathds{C}}\phi$. Since $J$ is a complex structure, by
Theorem 4.5, $L_{i}$ is a $3$-Bihom-Lie subalgebra. Therefore,
$(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ is a real $3$-Bihom-Lie algebra.
Using $(\ref{eq4.1})$, for all $x,y,z\in L$, we have
$\displaystyle J[x,y,z]_{J}$
$\displaystyle=\frac{1}{4}J([x,y,z]-[x,Jy,Jz]-[Jx,y,Jz]-[Jx,Jy,z])$
$\displaystyle=\frac{1}{4}(-[Jx,Jy,Jz]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz])$
$\displaystyle=[Jx,y,z]_{J},$
which implies that $J$ is a strict complex structure on
$(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$.
Since $\alpha,\beta$ are surjective, the same argument as above shows that
$(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ is a complex $3$-Bihom-Lie
algebra. By $(\ref{fai})$, $\phi\beta=\beta_{\mathds{C}}\phi$ and
$\phi\alpha=\alpha_{\mathds{C}}\phi$, so $\phi$ is a complex $3$-Bihom-Lie
algebra isomorphism. The proof is finished. ∎
###### Proposition 4.10.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra
with $\alpha,\beta$ surjective and $J$ be a complex structure on $L$. Then $J$
is a strict complex structure on $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ if and
only if $[\cdot,\cdot,\cdot]_{J}=[\cdot,\cdot,\cdot]$.
###### Proof.
Because $J$ is a strict complex structure on $L$ and $\alpha,\beta$ are
surjective, we have $J[x,y,z]=[Jx,y,z]=[x,Jy,z]=[x,y,Jz]$. So for all
$x,y,z\in L$ we have
$\displaystyle[x,y,z]_{J}$
$\displaystyle=\frac{1}{4}([x,y,z]-[x,Jy,Jz]-[Jx,y,Jz]-[Jx,Jy,z])$
$\displaystyle=\frac{1}{4}([x,y,z]-J^{2}[x,y,z]-J^{2}[x,y,z]-J^{2}[x,y,z])$
$\displaystyle=[x,y,z].$
Conversely, if $[\cdot,\cdot,\cdot]_{J}=[\cdot,\cdot,\cdot]$, we have
$4[x,y,z]_{J}=[x,y,z]-[x,Jy,Jz]-[Jx,y,Jz]-[Jx,Jy,z]=4[x,y,z].$
Then $[x,Jy,Jz]+[Jx,y,Jz]+[Jx,Jy,z]=-3[x,y,z]$. Thus we obtain
$\displaystyle 4J[x,y,z]=4J[x,y,z]_{J}$ $\displaystyle=$ $\displaystyle
J([x,y,z]-[x,Jy,Jz]-[Jx,y,Jz]-[Jx,Jy,z])$ $\displaystyle=$
$\displaystyle-[Jx,Jy,Jz]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz]$ $\displaystyle=$
$\displaystyle 3[Jx,y,z]+[Jx,y,z]$ $\displaystyle=$ $\displaystyle 4[Jx,y,z],$
i.e., $J[x,y,z]=[Jx,y,z]$; an analogous computation gives $J[x,y,z]=[x,Jy,z]=[x,y,Jz]$. Therefore the proposition holds. ∎
###### Proposition 4.11.
Let $J$ be an almost complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. If $J$ satisfies the following
equation
$[x,y,z]=[x,Jy,Jz]+[Jx,y,Jz]+[Jx,Jy,z],\forall x,y,z\in L,$ (13)
then $J$ is a complex structure on $L$.
###### Proof.
By $(\ref{35})$ and $J^{2}=-\rm Id$, we have
$\displaystyle-[Jx,Jy,Jz]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz]+J[Jx,Jy,z]$
$\displaystyle+J[x,Jy,Jz]+J[Jx,y,Jz]$ $\displaystyle=$
$\displaystyle-[Jx,J^{2}y,J^{2}z]-[J^{2}x,Jy,J^{2}z]-[J^{2}x,J^{2}y,Jz]+[Jx,y,z]+[x,Jy,z]$
$\displaystyle+[x,y,Jz]+J[x,y,z]$ $\displaystyle=$ $\displaystyle J[x,y,z].$
Thus, $J$ is a complex structure on $L$. ∎
###### Definition 4.12.
An almost complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is called an abelian complex structure
if $(\ref{35})$ holds.
###### Remark 4.13.
Let $J$ be an abelian complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. Then
$(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ is an abelian $3$-Bihom-Lie
algebra.
###### Corollary 4.14.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra
with $\alpha,\beta$ surjective. Then there is an abelian complex structure on
$L$ if and only if $L_{\mathds{C}}=Q\oplus P,$ where $Q$ and $P=\sigma(Q)$ are
abelian Bihom subalgebras of $L_{\mathds{C}}$.
###### Proof.
Let $J$ be an abelian complex structure on $L$. Because $\alpha,\beta$ are
surjective, by Proposition 4.9 we obtain that $\phi$ is a complex
$3$-Bihom-Lie algebra isomorphism from
$(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ to
$(L_{i},[\cdot,\cdot,\cdot]_{L_{\mathds{C}}},\alpha_{\mathds{C}},\beta_{\mathds{C}})$.
Using Remark 4.13, $(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ is an abelian
$3$-Bihom-Lie algebra. Thus, $Q=L_{i}$ is an abelian Bihom subalgebra of
$L_{\mathds{C}}$. Since $P=L_{-i}=\sigma(L_{i})$, for all
$x_{1}+iy_{1},x_{2}+iy_{2},x_{3}+iy_{3}\in L_{i}$, we have
$\displaystyle[\sigma(x_{1}+iy_{1}),\sigma(x_{2}+iy_{2}),\sigma(x_{3}+iy_{3})]_{L_{\mathds{C}}}$
$\displaystyle=$
$\displaystyle[x_{1}-iy_{1},x_{2}-iy_{2},x_{3}-iy_{3}]_{L_{\mathds{C}}}$
$\displaystyle=$
$\displaystyle[x_{1},x_{2},x_{3}]-[x_{1},y_{2},y_{3}]-[y_{1},x_{2},y_{3}]-[y_{1},y_{2},x_{3}]-i([x_{1},x_{2},y_{3}]+[x_{1},y_{2},x_{3}]$
$\displaystyle+[y_{1},x_{2},x_{3}]-[y_{1},y_{2},y_{3}])$ $\displaystyle=$
$\displaystyle\sigma([x_{1},x_{2},x_{3}]-[x_{1},y_{2},y_{3}]-[y_{1},x_{2},y_{3}]-[y_{1},y_{2},x_{3}]+i([x_{1},x_{2},y_{3}]+[x_{1},y_{2},x_{3}]$
$\displaystyle+[y_{1},x_{2},x_{3}]-[y_{1},y_{2},y_{3}]))$ $\displaystyle=$
$\displaystyle\sigma[x_{1}+iy_{1},x_{2}+iy_{2},x_{3}+iy_{3}]_{L_{\mathds{C}}}$
$\displaystyle=$ $\displaystyle 0.$
Therefore, $P$ is an abelian Bihom subalgebra of $L_{\mathds{C}}$.
Conversely, by Theorem 4.5, $L$ has a complex structure $J$. Moreover, by
Proposition 4.9, we have a complex $3$-Bihom-Lie algebra isomorphism $\phi$
from $(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ to
$(Q,[\cdot,\cdot,\cdot]_{L_{\mathds{C}}},\alpha_{\mathds{C}},\beta_{\mathds{C}})$.
So $(L,[\cdot,\cdot,\cdot]_{J},\alpha,\beta)$ is an abelian $3$-Bihom-Lie
algebra. By the definition of $[\cdot,\cdot,\cdot]_{J}$, we obtain that $J$ is
an abelian complex structure on $L$. The proof is finished. ∎
###### Proposition 4.15.
Let $J$ be an almost complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. If $J$ satisfies the following
equation
$[x,y,z]=-J[Jx,y,z]-J[x,Jy,z]-J[x,y,Jz],\forall x,y,z\in L,$ (14)
then $J$ is a complex structure on $L$.
###### Proof.
By $(\ref{38})$ and $J^{2}=-\rm Id$ we have
$\displaystyle-[Jx,Jy,Jz]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz]$
$\displaystyle+J[Jx,Jy,z]+J[x,Jy,Jz]+J[Jx,y,Jz]$ $\displaystyle=$
$\displaystyle J[J^{2}x,Jy,Jz]+J[Jx,J^{2}y,Jz]+J[Jx,Jy,J^{2}z]+J[x,y,z]$
$\displaystyle+J[Jx,Jy,z]+J[x,Jy,Jz]+J[Jx,y,Jz]$ $\displaystyle=$
$\displaystyle J[x,y,z].$
Thus $J$ is a complex structure on $L$. ∎
###### Definition 4.16.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra. An
almost complex structure $J$ on $L$ is called a strong abelian complex
structure if $(\ref{38})$ holds.
###### Corollary 4.17.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra.
Then there is a strong abelian complex structure on $L$ if and only if
$L_{\mathds{C}}=Q\oplus P,$ where $Q$ and $P=\sigma(Q)$ are abelian Bihom
subalgebras of $L_{\mathds{C}}$ such that $[Q,Q,P]_{L_{\mathds{C}}}\subseteq
Q,[Q,P,Q]_{L_{\mathds{C}}}\subseteq Q,[P,Q,Q]_{L_{\mathds{C}}}\subseteq
Q,[P,P,Q]_{L_{\mathds{C}}}\subseteq P,[P,Q,P]_{L_{\mathds{C}}}\subseteq P$ and
$[Q,P,P]_{L_{\mathds{C}}}\subseteq P$.
###### Proposition 4.18.
Let $J$ be an almost complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. If $J$ satisfies the following
equation
$J[x,y,z]=-[Jx,Jy,Jz],\forall x,y,z\in L,$ (15)
then $J$ is a complex structure on $L$.
###### Proof.
By $(\ref{39})$ and $J^{2}=-\rm Id$, we have
$\displaystyle-[Jx,Jy,Jz]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz]+J[Jx,Jy,z]$
$\displaystyle+J[x,Jy,Jz]+J[Jx,y,Jz]$ $\displaystyle=$
$\displaystyle+J[x,y,z]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz]-[J^{2}x,J^{2}y,Jz]$
$\displaystyle-[Jx,J^{2}y,J^{2}z]-[J^{2}x,Jy,J^{2}z]$ $\displaystyle=$
$\displaystyle J[x,y,z].$
Thus, $J$ is a complex structure on $L$. ∎
###### Definition 4.19.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra. An
almost complex structure $J$ on $L$ is called a perfect complex structure if
$(\ref{39})$ holds.
###### Corollary 4.20.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra.
Then there is a perfect complex structure on $L$ if and only if
$L_{\mathds{C}}=Q\oplus P,$ where $Q$ and $P=\sigma(Q)$ are Bihom subalgebras
of $L_{\mathds{C}}$ such that $[Q,Q,P]_{L_{\mathds{C}}}\subseteq
P,[Q,P,Q]_{L_{\mathds{C}}}\subseteq P,[P,Q,Q]_{L_{\mathds{C}}}\subseteq
P,[P,P,Q]_{L_{\mathds{C}}}\subseteq Q,[P,Q,P]_{L_{\mathds{C}}}\subseteq Q$ and
$[Q,P,P]_{L_{\mathds{C}}}\subseteq Q$.
###### Corollary 4.21.
Let $J$ be a strict complex structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ with $\alpha,\beta$ surjective. Then
$J$ is a perfect complex structure on $L$.
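Corollary 4.21 can be checked in one line; the following sketch (ours, not given in the text) uses only the strictness identities $J[x,y,z]=[Jx,y,z]=[x,Jy,z]=[x,y,Jz]$ and $J^{2}=-\mathrm{Id}$:

```latex
[Jx,Jy,Jz]=J[x,Jy,Jz]=J^{2}[x,y,Jz]=J^{3}[x,y,z]=-J[x,y,z],
```

so $J[x,y,z]=-[Jx,Jy,Jz]$, which is exactly the perfect condition $(\ref{39})$.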
###### Example 4.22.
Let $L$ be a $4$-dimensional vector space with basis
$\{e_{1},e_{2},e_{3},e_{4}\}$, and let the non-zero brackets and the maps
$\alpha,\beta$ be given by
$\displaystyle[e_{1},e_{2},e_{3}]$
$\displaystyle=[e_{1},e_{3},e_{2}]=[e_{2},e_{3},e_{1}]=e_{4},~{}[e_{2},e_{1},e_{3}]=[e_{3},e_{1},e_{2}]=[e_{3},e_{2},e_{1}]=-e_{4},$
$\displaystyle[e_{1},e_{4},e_{2}]$
$\displaystyle=[e_{2},e_{1},e_{4}]=[e_{2},e_{4},e_{1}]=e_{3},~{}[e_{1},e_{2},e_{4}]=[e_{4},e_{1},e_{2}]=[e_{4},e_{2},e_{1}]=-e_{3},$
$\displaystyle[e_{3},e_{1},e_{4}]$
$\displaystyle=[e_{3},e_{4},e_{1}]=[e_{4},e_{1},e_{3}]=e_{2},~{}[e_{1},e_{3},e_{4}]=[e_{1},e_{4},e_{3}]=[e_{4},e_{3},e_{1}]=-e_{2},$
$\displaystyle[e_{3},e_{2},e_{4}]$
$\displaystyle=[e_{4},e_{2},e_{3}]=[e_{4},e_{3},e_{2}]=e_{1},~{}[e_{2},e_{3},e_{4}]=[e_{2},e_{4},e_{3}]=[e_{3},e_{4},e_{2}]=-e_{1},$
$\alpha=\left(\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{array}\right),\quad\beta=\mathrm{Id}.$
Then $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ is a $3$-Bihom-Lie algebra. The maps
$J_{1}=\left(\begin{array}{cccc}0&0&-1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&-1&0&0\end{array}\right)\quad\text{and}\quad J_{2}=\left(\begin{array}{cccc}0&0&-1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&1&0&0\end{array}\right)$
are strong abelian complex structures, and
$J_{3}=\left(\begin{array}{cccc}1&0&1&0\\ 0&1&0&1\\ -2&0&-1&0\\ 0&-2&0&-1\end{array}\right),\quad J_{4}=\left(\begin{array}{cccc}1&0&-1&0\\ 0&-1&0&2\\ 2&0&-1&0\\ 0&-1&0&1\end{array}\right)\quad\text{and}\quad J_{5}=\left(\begin{array}{cccc}-1&0&-1&0\\ 0&1&0&2\\ 2&0&1&0\\ 0&-1&0&-1\end{array}\right)$
are abelian complex structures.
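Such claims are easy to verify numerically. The following sketch (ours; the example only states the result) encodes the structure constants of Example 4.22 and checks that $J_{1}$ satisfies $J^{2}=-\mathrm{Id}$, commutes with $\alpha$ (and trivially with $\beta=\mathrm{Id}$), and obeys the strong abelian condition (14) on all basis triples; by trilinearity this suffices.

```python
import itertools

# Basis e1..e4 -> indices 0..3.  Non-zero brackets of Example 4.22:
# NZ[(i, j, k)] = (sign, m) means [e_{i+1}, e_{j+1}, e_{k+1}] = sign * e_{m+1}.
NZ = {
    (0, 1, 2): (1, 3), (0, 2, 1): (1, 3), (1, 2, 0): (1, 3),
    (1, 0, 2): (-1, 3), (2, 0, 1): (-1, 3), (2, 1, 0): (-1, 3),
    (0, 3, 1): (1, 2), (1, 0, 3): (1, 2), (1, 3, 0): (1, 2),
    (0, 1, 3): (-1, 2), (3, 0, 1): (-1, 2), (3, 1, 0): (-1, 2),
    (2, 0, 3): (1, 1), (2, 3, 0): (1, 1), (3, 0, 2): (1, 1),
    (0, 2, 3): (-1, 1), (0, 3, 2): (-1, 1), (3, 2, 0): (-1, 1),
    (2, 1, 3): (1, 0), (3, 1, 2): (1, 0), (3, 2, 1): (1, 0),
    (1, 2, 3): (-1, 0), (1, 3, 2): (-1, 0), (2, 3, 1): (-1, 0),
}

def bracket(x, y, z):
    """Trilinear extension of the structure constants."""
    out = [0] * 4
    for (i, j, k), (sign, m) in NZ.items():
        out[m] += sign * x[i] * y[j] * z[k]
    return out

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

J1 = [[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]]
ALPHA = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]
E = [[1 if r == c else 0 for c in range(4)] for r in range(4)]  # basis vectors

# J1^2 = -Id and J1 commutes with alpha (beta = Id is trivial).
for v in E:
    assert mat_vec(J1, mat_vec(J1, v)) == [-c for c in v]
    assert mat_vec(J1, mat_vec(ALPHA, v)) == mat_vec(ALPHA, mat_vec(J1, v))

# Strong abelian condition (14): [x,y,z] = -J[Jx,y,z] - J[x,Jy,z] - J[x,y,Jz].
for x, y, z in itertools.product(E, repeat=3):
    lhs = bracket(x, y, z)
    rhs = [-(a + b + c) for a, b, c in zip(
        mat_vec(J1, bracket(mat_vec(J1, x), y, z)),
        mat_vec(J1, bracket(x, mat_vec(J1, y), z)),
        mat_vec(J1, bracket(x, y, mat_vec(J1, z))))]
    assert lhs == rhs

print("J1 is a strong abelian complex structure on L")
```

The same loop with $J_{2}$ in place of $J_{1}$, or with condition (13) in place of (14) for $J_{3},J_{4},J_{5}$, checks the remaining claims.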
At the end of this section, we give a relation between complex structures and
product structures on a $3$-Bihom-Lie algebra, which leads to the definition
of a complex product structure. We also show the relation between a complex
structure and a product structure on a complex $3$-Bihom-Lie algebra.
###### Proposition 4.23.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a complex $3$-Bihom-Lie algebra.
Then $E$ is a product structure on $L$ if and only if $J=iE$ is a complex
structure on $L$.
###### Proof.
Let $E$ be a product structure on $L$. Then we have $J^{2}=i^{2}E^{2}=-\rm
Id$, $J\alpha=iE\alpha=\alpha iE=\alpha J$ and $J\beta=\beta J$, i.e., $J$ is
an almost complex structure on $L$. Moreover, by $(\ref{21})$ we can get
$\displaystyle J[x,y,z]=iE[x,y,z]$ $\displaystyle=$ $\displaystyle
i([Ex,Ey,Ez]+[Ex,y,z]+[x,Ey,z]+[x,y,Ez]-E[Ex,Ey,z]$
$\displaystyle-E[x,Ey,Ez]-E[Ex,y,Ez])$ $\displaystyle=$
$\displaystyle-[iEx,iEy,iEz]+[iEx,y,z]+[x,iEy,z]+[x,y,iEz]+iE[iEx,iEy,z]$
$\displaystyle+iE[x,iEy,iEz]+iE[iEx,y,iEz]$ $\displaystyle=$
$\displaystyle-[Jx,Jy,Jz]+[Jx,y,z]+[x,Jy,z]+[x,y,Jz]+J[Jx,Jy,z]$
$\displaystyle+J[x,Jy,Jz]+J[Jx,y,Jz].$
Thus, $J$ is a complex structure on $L$.
Conversely, since $J=iE$, we obtain $E=-iJ$. The same computation as above
then shows that $E$ is a product structure on $L$. ∎
###### Definition 4.24.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra. A
complex product structure on $L$ is a pair $(J,E)$ consisting of a complex
structure $J$ and a product structure $E$ such that $JE=-EJ$.
###### Remark 4.25.
Let $(J,E)$ be a complex product structure on a real $3$-Bihom-Lie algebra
$(L,[\cdot,\cdot,\cdot],\alpha,\beta)$. For all $x\in L_{+}$, we have
$E(Jx)=-J(Ex)=-Jx$, which implies that $J(L_{+})\subset L_{-}$. Similarly,
$J(L_{-})\subset L_{+}$. Thus, we get $J(L_{-})=L_{+}$ and
$J(L_{+})=L_{-}$.
###### Theorem 4.26.
Let $(L,[\cdot,\cdot,\cdot],\alpha,\beta)$ be a real $3$-Bihom-Lie algebra.
Then $L$ has a complex product structure if and only if $L$ has a complex
structure $J$ and $L=L_{+}\oplus L_{-},$ where $L_{-}$ and $L_{+}$ are Bihom
subalgebras of $L$ and $J(L_{+})=L_{-}$.
###### Proof.
Let $(J,E)$ be a complex product structure and $L_{\pm}$ be the eigenspaces
corresponding to the eigenvalues $\pm 1$ of $E$. By Theorem 3.3,
$L=L_{+}\oplus L_{-}$ and $L_{-}$, $L_{+}$ are Bihom subalgebras of $L$. By
Remark 4.25, we have $J(L_{+})=L_{-}$.
Conversely, we can define a linear map $E:L\rightarrow L$ by
$E(x+y)=x-y,\forall x\in L_{+},y\in L_{-}.$
By Theorem 3.3, $E$ is a product structure on $L$. By $J(L_{+})=L_{-}$ and
$J^{2}=-\rm{Id}$, we have $J(L_{-})=L_{+}$. So $\forall x\in L_{+},y\in
L_{-}$,
$E(J(x+y))=E(J(x)+J(y))=-J(x)+J(y)=-J(E(x+y)),$
i.e., $EJ=-JE$. Thus, $(J,E)$ is a complex product structure on $L$. ∎
###### Example 4.27.
Consider the complex structures on the 4-dimensional $3$-Bihom-Lie algebra in
Example 4.22. We have
$E_{1}=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1\end{array}\right),\quad E_{2}=\left(\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{array}\right),\quad E_{3}=\left(\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&1\end{array}\right)$
are perfect and abelian product structures. Then $(J_{1},E_{1})$,
$(J_{2},E_{1})$, $(J_{1},E_{3})$ and $(J_{2},E_{3})$ are complex product
structures.
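The compatibility $JE=-EJ$ for the listed pairs is a plain matrix identity; the following check (ours, added for illustration) verifies it, together with $E^{2}=\mathrm{Id}$ and $J^{2}=-\mathrm{Id}$:

```python
def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

J1 = [[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]]
J2 = [[0, 0, -1, 0], [0, 0, 0, -1], [1, 0, 0, 0], [0, 1, 0, 0]]
E1 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
E3 = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]
ID = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
NEG_ID = [[-v for v in row] for row in ID]

for e in (E1, E3):
    assert mat_mul(e, e) == ID        # E^2 = Id
for j in (J1, J2):
    assert mat_mul(j, j) == NEG_ID    # J^2 = -Id
for j in (J1, J2):
    for e in (E1, E3):                # JE = -EJ for all four pairs
        assert mat_mul(j, e) == [[-v for v in row] for row in mat_mul(e, j)]

print("(J1,E1), (J2,E1), (J1,E3), (J2,E3) anticommute as required")
```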
# A Method for a Pseudo-Local Measurement of the Galactic Magnetic Field
Steven R. Spangler Department of Physics and Astronomy, University of Iowa
###### Abstract
Much of the information about the magnetic field in the Milky Way and other
galaxies comes from measurements which are path integrals, such as Faraday
rotation and the polarization of synchrotron radiation of cosmic ray
electrons. The measurement made at the radio telescope results from the
contributions of volume elements along a long line of sight. The inferred
magnetic field is therefore some sort of average over a long line segment. A
magnetic field measurement at a given spatial location is of much more
physical significance. In this paper, we point out that HII regions
fortuitously offer such a “point” measurement, albeit of one component of the
magnetic field, and averaged over the sightline through the HII region.
However, the line of sight (LOS) through an HII region is much smaller (e.g.
30 - 50 pc) than one through the entire Galactic disk, and thus constitutes a
“pseudo-local” measurement. We use published HII region Faraday rotation
measurements to provide a new constraint on the magnitude of
magnetohydrodynamic (MHD) turbulence in the Galaxy, as well as to raise
intriguing speculations about the modification of the Galactic field during
the star formation process.
interstellar medium — interstellar magnetic fields — interstellar plasma
## 1 Introduction
It has been known for decades that the interstellar medium is threaded by a
magnetic field. This is not surprising, since almost all phases of the
interstellar medium (ISM) have a sufficient level of ionization to satisfy the
condition on the plasma parameter, $\Lambda_{e}\gg 1$ (a large number of
particles per Debye sphere; Nicholson, 1983), and thus constitute plasmas. Currents
flowing in these plasmas generate a magnetic field. Measurements that provide
information on the magnitude and functional form of the Galactic field include
Faraday rotation of radio sources, both in the Galaxy and beyond, Zeeman
splitting of magnetically-sensitive transitions of the hydrogen atom and the
hydroxyl, methanol and water molecules, polarization of the Galactic
nonthermal synchrotron radiation, and polarization of visible and infrared
light due to the alignment of interstellar grains in the Galactic magnetic
field. Recent reviews on the Galactic field include Ferrière (2011) and Han
(2017).
Studies extending back over several decades indicate that the Galactic field
consists, in part, of a Galaxy-wide, large scale field that can be described,
at least empirically, by analytic functions of galactocentric coordinates, and
a superposed, spatially-random component that is plausibly interpreted as
magnetohydrodynamic (MHD) turbulence, presumably similar to that which exists
in the solar wind. An example of a study that prescribes the form of the large
scale, “deterministic” component of the field is van Eck et al. (2011).
Reviews which consider the state of knowledge of both the large scale and
turbulent components are given in Ferrière (2011) and Han (2017). A very
recent review which summarizes what we know and can speculate about the
turbulent component is Ferrière (2020).
It is probably safe to say that all observational investigations have
concluded that the turbulent component is comparable in magnitude to the large
scale component. The exact value of the magnitude of the turbulent component
depends on what is assumed for the outer scale of the turbulence. Values for
the ratio $\frac{\delta b}{B_{0}}$, where $\delta b$ is the rms value of the
turbulent component and $B_{0}$ is the magnitude of the large scale component,
range from a few tens of percent to a number in excess of unity. For example,
Ferrière (2011) and Ferrière (2020) review our knowledge of the Galactic
field, and pay attention to the relative magnitudes of the regular and
turbulent fields, citing results of $\frac{\delta b}{B_{0}}\geq 3$ from two
independent investigations. Interestingly, Beck (2016), in a review of what is
known about magnetic fields in external galaxies similar to the Milky Way,
argues that the turbulent component is substantially larger than the large
scale, deterministic component. In Section 6 of his review, Beck examines the
state of knowledge of the Galactic field in the context of results for
external galaxies. He concludes that the amplitude of isotropic magnetic
fluctuations is $5\mu G$, that of anisotropic fluctuations $2\mu G$, and that
of the regular component (ordered over Galactic scales) is $2\mu G$. It should
be emphasized that these estimates are dependent on assumptions on the outer
scale of the turbulence, i.e. the scale which distinguishes a Galactic-scale
“deterministic” component from a turbulent component. Nonetheless, these
reviews make clear that there is substantial observational evidence for a
turbulent component to the Galactic magnetic field that is comparable in
magnitude, or even larger than an ordered, large scale field. In this paper,
we will consider the dimensionless turbulent amplitude $\frac{\delta
b}{B_{0}}$ to be a free parameter, subject to the observational constraints
cited above. Knowledge of the magnitude and spectrum of the Galactic MHD
turbulence is important to Galactic astrophysics, and astrophysics in general,
since this turbulence contributes to processes such as heating of the ISM
through dissipation of turbulence, confinement and propagation of the cosmic
rays, turbulent transport of heavy elements, and modification or regulation of
the star formation process.
One of the difficulties in determining the relative roles of a large scale and
a turbulent component of the Galactic field is the fact that some of the main
diagnostic techniques such as Faraday rotation, polarization of Galactic
synchrotron radiation, and dust polarization consist of path integrals over
long lines of sight through the Galaxy. What this means is that we get some
sort of average of the Galactic field over a long line of sight (LOS) through
the Galaxy, rather than the value at a set of points along that line of sight.
Zeeman splitting of OH and methanol maser transitions is not subject to this
limitation, since the region of emission is very localized (Crutcher and
Kemball, 2019). However, as discussed in Crutcher and Kemball (2019), regions
of maser emission have very high densities compared to the general ISM, and
are products of the star formation process. We will return to this point in
Section 4.
In this paper, we discuss a type of observation which utilizes the Faraday
rotation technique to obtain an estimate of the Galactic field (strictly
speaking one component of the field) in a localized region, rather than an
entire line of sight through the Galaxy. The localized region is comprised of
the volume of an HII region. In this sense, the technique provides a “local”
measurement. In what follows, we first summarize the basis of the technique of
Faraday rotation, then discuss recent observational results which allow it to
be used in a “local” measurement of the Galactic field.
Faraday rotation is a fundamental plasma process which consists of rotation in
the plane of polarization of radio waves due to the presence of a magnetized
plasma (Nicholson, 1983). It is used as a diagnostic technique in laboratory
plasmas as well as in radio astronomy. In the present context, it is applied
to radio waves emitted by extragalactic radio sources and which propagate
through the plasma of the interstellar medium (ISM). The change in the
position angle relative to what would be measured in the absence of the
plasma, $\Delta\chi$, is given by
$\Delta\chi=\left[\left(\frac{e^{3}}{2\pi
m_{e}^{2}c^{4}}\right)\int_{L}n\vec{B}\cdot\vec{ds}\right]\lambda^{2}$ (1)
In Equation (1) $e,m_{e},\mbox{ and }c$ are the customary fundamental physical
constants, $n$ is the electron density, $\vec{B}$ is the vector magnetic
field, both functions of position along the line of sight, and $\vec{ds}$ is
an increment in the line of sight from the source to the radio telescope.
Faraday rotation is a path integral measurement rather than a point
measurement, so there are contributions from plasma throughout the ISM along the
line of sight. The quantity in square brackets in Equation (1) is termed the
rotation measure (RM), and is usually quoted in SI units of radians/m$^{2}$. For
most lines of sight through the ISM, the magnetic field measured is not only
an average over a long line through the ISM, it is also an average weighted by
the plasma density. Information on many lines of sight around the sky is used
to obtain estimates of the large scale structure of the Galactic field, as
well as the contribution of the turbulent component.
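To fix orders of magnitude, Equation (1) is often evaluated in the standard practical form $\mathrm{RM}=0.812\int n_{e}B_{\parallel}\,dl$ rad m$^{-2}$, with $n_{e}$ in cm$^{-3}$, $B_{\parallel}$ in $\mu$G, and $dl$ in pc. The sketch below (ours; the numerical inputs are typical warm-ionized-medium values, not taken from the paper) evaluates the RM and the 21 cm position angle rotation for a uniform line of sight:

```python
RM_COEFF = 0.812  # rad m^-2 per (cm^-3 * microgauss * pc); standard practical constant

def rotation_measure(n_e_cm3, b_par_uG, path_pc):
    """RM for a uniform slab: 0.812 * n_e * B_parallel * L."""
    return RM_COEFF * n_e_cm3 * b_par_uG * path_pc

def position_angle_change(rm, wavelength_m):
    """Delta-chi = RM * lambda^2, as in Equation (1)."""
    return rm * wavelength_m ** 2

# Illustrative assumptions: n_e = 0.03 cm^-3, B_parallel = 2 microgauss,
# L = 1 kpc = 1000 pc, observed at 21 cm.
rm = rotation_measure(0.03, 2.0, 1000.0)
dchi = position_angle_change(rm, 0.21)
print(f"RM = {rm:.1f} rad/m^2, position angle rotation at 21 cm = {dchi:.2f} rad")
```

As noted in Section 1, RMs for sources viewed through the Rosette Nebula and W4 are typically much larger than such Galactic-background values, which is why the HII-region contribution dominates the total.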
The observational impetus for this paper is a set of studies of the Rosette
Nebula and W4 HII regions by A. Costa and coworkers (Savage et al., 2013;
Costa et al., 2016; Costa and Spangler, 2018). Costa and co-workers carried
out Faraday rotation measurements with the Very Large Array (VLA) of
background radio sources viewed through these two HII regions. The main
results of these investigations were: (1) rotation measures (RM) for sources
viewed through the HII regions were typically much larger than for nearby
sources whose lines of sight probed the Galaxy, but not the HII regions, (2)
although there were large variations in the RM from source-to-source when the
LOS passed through the HII region, there was a well-defined average RM
(magnitude and sign) for the HII region, particularly the Rosette, (3) the
sign of the HII region-induced RM (which dominated the total RM) was
consistent with the sign of the large scale Galactic field in that part of the
sky. The RM due to the Rosette Nebula (Galactic longitude = 206.5 degrees) was
dominantly positive, while that for W4 (Galactic longitude = 135 degrees) was
dominantly negative.
The important point is that the magnetic field in the “RM anomaly” associated
with the HII region is confined to the volume of the HII region rather than
being an average over a line of sight of many kiloparsecs in the Galactic
plane. The radii of both HII regions are approximately 20-25 parsecs, so the
magnetic field is “local” in this sense. As discussed in the text earlier in
this section, in view of the apparent large amplitude, turbulent component of
the Galactic field, a reasonable a-priori expectation would be that the
magnetic field at a point in the Galactic plane, and averaged over a volume of
radius 20 - 25 parsecs, would have poor correlation with the ordered, large
scale Galactic field. The observations of the Rosette Nebula and W4 do not
bear out this expectation. In this paper, we explore the consequences of this
result for our inference about the Galactic magnetic field.
## 2 Additional Observational Results on Faraday Rotation through HII Regions
Two measurements are, of course, very poor statistics, even by the base
standards of astronomy. There is a need for additional measurements of the
sort presented by Costa and collaborators. Such measurements were provided by
the observations of Harvey-Smith et al. (2011), who carried out similar
observations, for similar goals, of somewhat larger HII regions like the
$\lambda$ Orionis HII region. In collecting HII regions from the sample of
Harvey-Smith et al. (2011), we restricted attention to objects in the outer
two quadrants of the Galactic plane, i.e. those with $90^{\circ}\leq l\leq
$270^{\circ}$. Harvey-Smith et al. (2011) reported the average value of the
line-of-sight component of the magnetic field $<B_{\parallel}>$ for each HII
region, and this quantity was also available from the analyses of Savage et
al. (2013); Costa et al. (2016); Costa and Spangler (2018).
In the studies of Costa and coworkers, the mean LOS component of the field,
$<B_{\parallel}>$, was not explicitly measured; analyses of the data
concentrated on fitting plasma shell models to the sets of RM values for each
HII region. In addition to other parameters, these models were dependent on
the magnitude and orientation of the magnetic field outside the HII region
(see Section 5.1 of Costa et al. (2016)). A limiting type of model was one in
which there was no amplification of the external magnetic field in the HII
region (referred to by Costa as the “Harvey-Smith model”). In this case, the
model fits give exactly the same quantity as the parameter $<B_{\parallel}>$
in Harvey-Smith et al. (2011). We took values from Table 4 of Costa et al.
(2016) and Table 6 of Costa and Spangler (2018) to obtain values of
$<B_{\parallel}>$, which should be directly comparable to this quantity in
Harvey-Smith et al. (2011). Once again, $<B_{\parallel}>$ has a sign and a
magnitude. The values of $<B_{\parallel}>$ from Harvey-Smith et al. (2011),
Costa et al. (2016), and Costa and Spangler (2018) are plotted in Figure 1,
and given in Table 1.
Figure 1: Measurements of $<B_{\parallel}>$ from Faraday rotation measurements through 6 HII regions in the 2nd and 3rd Galactic quadrants. Solid (blue) symbols indicate $<B_{\parallel}>\,>0$ (field pointing towards the observer), and open (red) symbols indicate $<B_{\parallel}>\,<0$ (field pointing away from the observer). The size of the plotted symbol is proportional to $\sqrt{|<B_{\parallel}>|}$. The locations of the 6 sources used are plotted in Galactic coordinates. In the simplest case of a purely azimuthal field, the projection of the large scale Galactic field on the LOS would change sign at $l=180^{\circ}$. In all cases, the polarity of $<B_{\parallel}>$ is the same as that for the large scale Galactic field in that quadrant.

Table 1: HII Region Average Magnetic Field Components

HII region | l | b | $<B_{\parallel}>$ ($\mu$G) | Ref.
---|---|---|---|---
Rosette | 206.5 | -2.1 | +2.73 | Costa et al. (2016)
W4 | 135 | +1.0 | -2.29 | Costa and Spangler (2018)
Sh2-264 | 195 | -12 | +2.2 | Harvey-Smith et al. (2011)
Sivan 3 | 144.5 | +14 | -2.5 | Harvey-Smith et al. (2011)
Sh2-171 | 118.0 | +6 | -2.3 | Harvey-Smith et al. (2011)
Sh2-220 | 161.3 | -12.5 | -6.3 | Harvey-Smith et al. (2011)
The clear result from Figure 1 is that for all 6 HII regions, the “local”
magnetic field, defined by the average over the volume of the HII region, has
the same polarity as the Galactic-scale, mean magnetic field. In none of these
6 cases are fluctuations on scales of a few tens of parsecs capable of making
the “local” polarity opposite to the average over several kiloparsecs.
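This consistency can be checked directly against the entries of Table 1; a minimal sketch in Python, using the quadrant rule stated in the Figure 1 caption (sign change of the LOS projection at $l=180^{\circ}$ for an azimuthal field):

```python
import math

# (name, Galactic longitude l [deg], <B_parallel> [uG]) from Table 1
regions = [
    ("Rosette",  206.5, +2.73),
    ("W4",       135.0, -2.29),
    ("Sh2-264",  195.0, +2.2),
    ("Sivan 3",  144.5, -2.5),
    ("Sh2-171",  118.0, -2.3),
    ("Sh2-220",  161.3, -6.3),
]

def expected_sign(l_deg):
    """Sign of <B_parallel> for a purely azimuthal (clockwise) Galactic
    field: negative in the 2nd quadrant (l < 180 deg), positive in the
    3rd quadrant (l > 180 deg)."""
    return -1 if l_deg < 180.0 else +1

for name, l, b_par in regions:
    assert math.copysign(1.0, b_par) == expected_sign(l), name
print("all 6 HII regions match the large-scale field polarity")
```

All six entries pass the check, which is the content of the statement above.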
## 3\. Statistics of a Local, Line of Sight Measurement of $\vec{B}$ in a Turbulent Medium
In this section, we discuss the statistics of the LOS component of the
magnetic field at a given point, when the magnetic field consists of the
vector sum of a large scale, ordered magnetic field $\vec{B_{0}}$, and a
superposed, spatially random turbulent field $\vec{\delta b}$. For ease of
following the discussion, the Appendix gives a glossary of mathematical
symbols and variables used in the following.
### 3.1 Geometry of the Line of Sight and the Galactic Magnetic Field
The geometry of the situation is shown in Figure 2. The z axis is defined by
the path from the radio source through the HII region and to the observer,
with the positive direction towards the observer. The x axis is in the plane
of the sky, and points toward decreasing galactic longitude. We also simplify
the situation by adopting the good approximation that the x-z plane coincides
with the Galactic plane. Finally, the y axis is perpendicular to the Galactic
plane and completes a right-handed coordinate system.
Figure 2: Cartoon illustrating geometry of observer, polarized extragalactic
radio source, and HII region. A Galactic magnetic field is illustrated,
consisting of a large scale, ordered component $\vec{B}_{0}$ inclined at an
angle $\Theta$ with respect to the line of sight, and a superposed turbulent
component $\vec{\delta b}$. The plane of the paper is the Galactic plane, the
z axis is defined by a ray from the radio source to the observer. The
coordinates x and y complete a right-handed coordinate system. In this
illustration, only one component of the turbulent field is shown, $\delta
b_{p}$, which is the component perpendicular to the large scale field and in
the Galactic plane. To lend realism, the trace shown represents 6 hours of
turbulent solar wind magnetic fluctuations measured by the WIND spacecraft on
September 25, 2020.
The observational results of Harvey-Smith et al. (2011); Savage et al. (2013);
Costa et al. (2016); Costa and Spangler (2018) show that there is a well-
defined mean field throughout the HII regions, with a magnitude equal to or
greater than the general Galactic field. Given the known presence of plasma
turbulence in the interstellar medium (ISM), we interpret this average HII
region field to be the vector sum of the large scale galactic field
$\vec{B_{0}}$ and a turbulent field $\vec{\delta b}$. As indicated in Figure
2, the angle between the line-of-sight (LOS) and the large scale field is
$\Theta$. We also assume, for this analysis, that the large-scale field is in
the Galactic plane.
Given the configuration illustrated in Figure 2, we define a second coordinate
system which is convenient for describing the turbulence. The $\parallel$
direction is in the direction of the mean field, the $p$ direction is
perpendicular to the mean field, and in the Galactic plane, and the $\perp$
direction completes the right-hand coordinate system and is perpendicular to
the Galactic plane. These three directions are defined by the unit vectors
$\hat{e}_{\parallel}$, $\hat{e}_{p}$, and $\hat{e}_{\perp}$.
### 3.2 Model for the Magnetic Field in the Vicinity of an HII Region
Given this latter coordinate system, we express the magnetic field (mean field
plus turbulence) by
$\vec{B}=(B_{0}+\delta b_{\parallel})\hat{e}_{\parallel}+\delta
b_{p}\hat{e}_{p}+\delta b_{\perp}\hat{e}_{\perp}$ (2)
In analogy with the Alfvénic turbulence in the solar wind, we expect $<(\delta
b_{p})^{2}>=<(\delta b_{\perp})^{2}>$ and $<(\delta b_{p})^{2}>\gg<(\delta
b_{\parallel})^{2}>$.
The model we adopt is that $\delta b_{\parallel},\delta b_{p},\mbox{ and
}\delta b_{\perp}$ are random, zero-mean quantities with an amplitude dominated
by fluctuations on the outer scale l, $l\geq 2R$, where $R$ is the radius of
the HII region. In the cases of the Rosette Nebula (Costa et al., 2016) and W4
(Costa and Spangler, 2018), the values of $2R$ are 38 and 50 parsecs,
respectively. These values are comparable to, or slightly less than the outer
scale of the 2D component of interstellar turbulence claimed by Minter and
Spangler (1996).
We adopt a very simple model for the average observed RM to a set of
extragalactic radio sources viewed through an HII region,
$RM_{obs}=RM_{bck}+RM_{neb}$ (3)
where $RM_{bck}$ is the background RM due to the plasma in the Galaxy which
would be observed in the absence of the nebula, and $RM_{neb}$ is the average
RM for lines of sight through the HII region. $RM_{neb}$ is given by the
simple expression
$RM_{neb}=2CnRAB_{z}$ (4)
where $C$ is the set of fundamental constants in curved brackets in Equation
(1), $n$ is the mean electron density in the ionized portion of the HII region
(determined by independent measurements such as intensity of thermal radio
emission in Savage et al. (2013); Costa et al. (2016); Costa and Spangler
(2018)), $R$ again is the radius of the HII region, and $B_{z}$ is the z
component of the interstellar field (mean field plus turbulence) at the
location of the HII region. The variable $A\geq 1$ is a factor describing
whether the magnetic field within the HII region is amplified over the value
in the surrounding ISM. Harvey-Smith et al. (2011) assumed that it was not
($A=1$), while Costa et al. (2016); Costa and Spangler (2018) considered
physically-based models in which $A\leq 4$. The case $A=1$ was considered as
well.
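For orientation, Equation (4) can be evaluated numerically. The sketch below uses the standard Faraday-rotation constant $C\approx 0.81$ rad m$^{-2}$ (cm$^{-3}\,\mu$G pc)$^{-1}$; the density $n$ and field $B_{z}$ are assumed, illustrative values, not taken from the cited papers, while $R=19$ pc follows from the $2R=38$ pc quoted above for the Rosette Nebula:

```python
# Illustrative evaluation of Eq. (4): RM_neb = 2 C n R A B_z.
# C = 0.81 rad m^-2 (cm^-3 uG pc)^-1 is the standard Faraday-rotation
# constant; n and B_z are assumed values for illustration only.
C = 0.81    # rad m^-2 per (cm^-3 uG pc)
n = 10.0    # mean electron density, cm^-3 (assumed)
R = 19.0    # HII region radius, pc (2R = 38 pc for the Rosette)
A = 1.0     # no field amplification (the Harvey-Smith case)
B_z = 4.0   # LOS field component, uG (assumed)

RM_neb = 2 * C * n * R * A * B_z
print(f"RM_neb = {RM_neb:.0f} rad/m^2")
# comes out on the order of 1000 rad/m^2, comparable to the magnitudes
# reported through the Rosette Nebula and W4
```
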
$B_{z}$ is obtained from Equation (2) as
$B_{z}=\vec{B}\cdot\hat{e}_{z}$ (5)
Substituting Equation (2) into this expression and evaluating the dot products
of unit vectors gives
$B_{z}=(B_{0}+\delta b_{\parallel})\cos\Theta-\delta b_{p}\sin\Theta$ (6)
The $\delta b_{\perp}$ component of the turbulent field makes no contribution
because, for lines of sight in or close to the Galactic plane, it is
perpendicular to the LOS. The negative sign in the second term arises because
positive $\delta b_{p}$ is pointing away from the observer (Figure 2).
Equation (6) shows the important, though obvious fact that the polarity of
$B_{z}$ (i.e. the same as $\vec{B_{0}}$ or opposite) depends on both the
amplitude of the fluctuations relative to the mean field as well as the angle
$\Theta$ between the large scale field and the LOS.
This expression can be rewritten in terms of dimensionless turbulence
amplitudes (turbulent “modulation indices”) as
$\displaystyle B_{z}=B_{0}(\cos\Theta+x\cos\Theta-y\sin\Theta)$ (7)
$\displaystyle x\equiv\frac{\delta b_{\parallel}}{B_{0}}$ (8) $\displaystyle
y\equiv\frac{\delta b_{p}}{B_{0}}$ (9)
where $x,y$ are dimensionless, zero-mean random variables with assumed properties
$\sqrt{<x^{2}>},\sqrt{<y^{2}>}\leq 1$. With all of this, the observed RM is
then
$\displaystyle
RM_{obs}=RM_{bck}+2CnRAB_{0}(\cos\Theta+x\cos\Theta-y\sin\Theta)$ (10)
$\displaystyle RM_{obs}=RM_{bck}+RM_{n}(\cos\Theta+x\cos\Theta-y\sin\Theta)$
(11)
with an obvious definition of the net nebular $RM_{n}$. It is worth
emphasizing that $RM_{n}$ is a property of the nebula itself, independent of
the ISM or the geometry of the line of sight.
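As a sanity check, the algebraic equivalence of Equation (6) and the dimensionless form in Equation (7) can be verified numerically; the field values below are arbitrary test inputs:

```python
import math
import random

random.seed(1)
B0 = 4.0                       # large scale field, arbitrary test value
Theta = math.radians(60.0)     # arbitrary test angle
for _ in range(1000):
    db_par = random.gauss(0.0, 0.5)   # delta b_parallel
    db_p = random.gauss(0.0, 2.0)     # delta b_p
    # Eq. (6): B_z in field-aligned components
    Bz_6 = (B0 + db_par) * math.cos(Theta) - db_p * math.sin(Theta)
    # Eq. (7): the same quantity via the dimensionless amplitudes x, y
    x, y = db_par / B0, db_p / B0
    Bz_7 = B0 * (math.cos(Theta) + x * math.cos(Theta) - y * math.sin(Theta))
    assert math.isclose(Bz_6, Bz_7, rel_tol=1e-9, abs_tol=1e-9)
```
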
The question considered in this paper then is whether the second term has the
same sign as the first, and if not, if it is large enough to cause the
observed RM to have the opposite sign as $RM_{bck}$. Equation (11) shows that
this depends on two factors.
1. The first is the relative magnitudes of $RM_{bck}$ and $RM_{n}$. Savage et al.
(2013); Costa et al. (2016); Costa and Spangler (2018) found that for both the
Rosette Nebula and W4, the largest absolute magnitudes of the RMs through the
nebula were of the order of 1000-1500 rad/m2, whereas the background Galactic
RMs in those parts of the sky had absolute magnitudes of the order of 100 -
150 rad/m2. This indicates that the magnitude of $RM_{n}$ could be as high as
10 times that of $RM_{bck}$.
2. The statistics of the quantity
$\xi=\cos\Theta+x\cos\Theta-y\sin\Theta$ (12)
are obviously crucial. For the observed RM to be of the opposite polarity to
$RM_{bck}$, it is necessary that $\xi<0$, and in fact $\xi<-\Delta$ where
$\Delta$ is defined as the ratio of background to nebular RM.
For the remainder of this section, we turn to a discussion of the statistics
of $\xi$.
### 3.3 Statistics of the Random Variable $\xi$
We begin by making a further simplification and ignoring the x term in $\xi$.
The justification for this is the assumption stated above that for Alfvénic
turbulence, the fluctuations in the parallel component (to the mean field) are
small compared to the transverse components. In addition, both the mean field
and the parallel component contributions to the stochastic variable $\xi$ are
proportional to $\cos\Theta$. We therefore proceed to consider the statistics
of the variable
$\xi=\cos\Theta-y\sin\Theta$ (13)
Only $y$ is a random variable, so we model it as a Gaussian-distributed,
random variable with zero mean, with a probability distribution function
$p(y)=\frac{1}{\sqrt{2\pi}\sigma}\exp(-y^{2}/2\sigma^{2})$ (14)
The quantity $\sigma$ is the dimensionless amplitude of the fluctuations of
the p component of the turbulence, and is a crucial parameter in our analysis.
Given this expression we can generate the probability distribution function
(pdf) of the stochastic variable $\xi$
$p(\xi)=\frac{1}{\sqrt{2\pi}\sigma}\frac{1}{\sin\Theta}\exp\left(-\frac{(\cos\Theta-\xi)^{2}}{2\sigma^{2}\sin^{2}\Theta}\right)$
(15)
For economy of notation, let $a\equiv\cos\Theta$, $b\equiv\sin\Theta$, with
the obvious identity $b=\sqrt{1-a^{2}}$, so that
$\displaystyle p(\xi)=\frac{1}{\sqrt{2\pi}\sigma
b}\exp\left(-\frac{(\xi-a)^{2}}{2\sigma^{2}b^{2}}\right)$ (16) $\displaystyle
p(\xi)=\frac{1}{\sqrt{2\pi}d}\exp\left(-\frac{(\xi-a)^{2}}{2d^{2}}\right)$
(17)
with $d\equiv\sigma b$. The distribution function of $\xi$ is also Gaussian,
with a nonzero mean value $<\xi>=a=\cos\Theta$.
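This change of variables is easy to confirm by simulation: drawing Gaussian $y$ and forming $\xi=\cos\Theta-y\sin\Theta$ reproduces a Gaussian with mean $\cos\Theta$ and standard deviation $d=\sigma\sin\Theta$. A minimal sketch, with parameter values chosen to match Figure 3:

```python
import math
import random

random.seed(0)
sigma, Theta = 0.70, math.radians(70.0)   # parameters of Figure 3
N = 200_000
samples = [math.cos(Theta) - random.gauss(0.0, sigma) * math.sin(Theta)
           for _ in range(N)]
mean = sum(samples) / N
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / N)
# mean should be close to cos(70 deg) ~ 0.342,
# std close to sigma * sin(70 deg) ~ 0.658
print(f"mean = {mean:.3f}, std = {std:.3f}")
```
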
A sample plot of $p(\xi)$ is shown in Figure 3. This figure shows that for
sufficiently large-amplitude turbulence ($\sigma=0.70$ in this case), and
sufficiently large angles of the mean field with respect to the line of sight
($\Theta=70^{\circ}$) the probability of the local field being reversed with
respect to the mean field is substantial.
Figure 3: The probability distribution function of the variable $\xi$ for the
case in which the dimensionless amplitude $\sigma$ of the magnetic
fluctuations in the p component of the magnetic field is $\sigma=0.70$ and the
angle $\Theta$ between the line of sight and the mean magnetic field is 70
degrees. In this case, the probability of $\xi$ being negative is substantial.
### 3.4 Expression for $P_{-}$, the Probability of a Polarity Reversal
Given the expression in Equation (16) or (17) for the probability distribution
function of the parameter $\xi$, we have the quantity of interest: the total
probability that the line-of-sight component of the magnetic field at a given
location in the ISM will be reversed from that of the mean field. We denote
this quantity by $P_{-}$, and the obvious expression for it is
$P_{-}=\int_{-\infty}^{-\Delta}d\xi p(\xi)$ (18)
By inspection, it can be seen that this integral is the same as
$P_{-}=\int_{-\infty}^{-(\Delta+a)}dy\,p_{G}(y,d)=\int_{\Delta+a}^{\infty}dy\,p_{G}(y,d)$
(19)
where $p_{G}(y,d)$ is a zero mean, normalized Gaussian pdf with standard
deviation $d$.
If we make a final change of variables, $y\rightarrow
t\equiv\frac{y}{\sqrt{2}d}$, then Equation (19) becomes
$\displaystyle P_{-}=\frac{1}{\sqrt{\pi}}\int_{X}^{\infty}dte^{-t^{2}}$ (20)
$\displaystyle
X\equiv\frac{\Delta+a}{\sqrt{2}d}=\frac{(\Delta+\cos\Theta)}{\sqrt{2}\sigma\sin\Theta}$
(21)
Equation (20) is very close to the standard form of the Error Function
(Beckmann, 1967)
$erf(X)\equiv\frac{2}{\sqrt{\pi}}\int_{0}^{X}dte^{-t^{2}}$ (22)
Comparison of Equations (20) and (22) allows the final, compact formula for
the total probability of polarity reversal,
$P_{-}=\frac{1}{2}\left[1-erf(X)\right]=\frac{1}{2}erfc(X)$ (23)
where $erfc(X)$ is the complement of the error function.
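As a check on the derivation, the closed form in Equation (23) can be compared against a direct numerical integration of Equation (18), using the Gaussian pdf of Equation (17); a minimal sketch, with parameter values matching the example of Figure 3:

```python
import math

# Parameters matching the example of Figure 3
sigma, Theta, Delta = 0.70, math.radians(70.0), 0.1
a = math.cos(Theta)
d = sigma * math.sin(Theta)

def p_xi(xi):
    """Gaussian pdf of xi, Eq. (17)."""
    return math.exp(-(xi - a) ** 2 / (2 * d * d)) / (math.sqrt(2 * math.pi) * d)

# Eq. (18) by trapezoidal integration; the tail below -10 is negligible
n_steps = 200_000
lo, hi = -10.0, -Delta
h = (hi - lo) / n_steps
integral = h * (0.5 * (p_xi(lo) + p_xi(hi))
                + sum(p_xi(lo + i * h) for i in range(1, n_steps)))

# Eq. (23) via Eq. (21), using the standard-library erfc
X = (Delta + a) / (math.sqrt(2.0) * d)
closed_form = 0.5 * math.erfc(X)
assert math.isclose(integral, closed_form, rel_tol=1e-6)
```

The two evaluations agree, as they must.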
Equation (23) is plotted in Figure 4.
Figure 4: The probability that the line of sight component of the magnetic
field at the location of an HII region is opposite to that of the global,
Galactic field. The probability $P_{-}$ is a function of the parameter
$X=\frac{(\Delta+\cos\Theta)}{\sqrt{2}\sigma\sin\Theta}$
.
As the argument $X\rightarrow 0$, the probability of reversal
$P_{-}\rightarrow 0.5$. This makes complete sense, since small argument $X$
corresponds to strong turbulence and/or a mean field nearly perpendicular to
the line of sight. In this case, the local magnetic field is dominated by the
turbulent component $\delta b_{p}$, which has a 50 percent chance of being
positive and a 50 percent chance of being negative.
The case of $X\rightarrow\infty$ corresponds to weak turbulence, and/or a mean
field increasingly directed towards the observer. In this case, the
probability of a reversal in the line-of-sight component of the magnetic field
approaches zero.
The probability of a reversal in the line of sight component is determined by
the parameter $X$, and it is of interest to calculate this quantity for some
observationally-relevant cases.
### 3.5 Calculations Relevant to Observations of HII Regions
In this section, we briefly apply these formulas to the observations presented
in Section 2. A total of 6 HII regions distributed over a range of Galactic
longitudes precludes a convincing and sophisticated statistical analysis.
However, the results presented in Figure 1 suggest that theoretical values of
$P_{-}\geq 0.20$ would begin to have an uncomfortable confrontation with the
observations. On the other hand, if characteristics of the HII regions,
amplitude of ISM turbulence and direction of the line of sight indicate a
value of $P_{-}\sim 0.01$, our calculations are completely consistent with the
observations discussed in Section 2. The formula given in Eq. (21) shows that
the parameter $X$ depends on $\Theta$, $\sigma$, and $\Delta$. In what
follows, we adopt a value of $\Delta=0.1$, which is in accord with the
observations of the Rosette Nebula and W4 discussed above.
Savage et al. (2013); Costa et al. (2016), in their study of the Rosette
Nebula, found that reasonable fits of simple magnetized shell models to the
set of RM measurements gave a value of $\Theta\simeq 70^{\circ}$ if the
interstellar magnetic field is amplified in the shell by shock compression,
and $\Theta\simeq 50^{\circ}$ if there were no change in the magnitude of the
magnetic field in the shell, with respect to the general ISM (see Table 5 of
Costa et al., 2016). Savage et al. (2013) noted that an independent estimate,
based on models of the Galactic magnetic field, would give $\Theta\simeq
60^{\circ}$ at the location of the Rosette. Costa and Spangler (2018) used a
value of $\Theta=55^{\circ}$ for W4, if the Galactic magnetic field is
approximated as azimuthal. Thus values of $50^{\circ}\leq\Theta\leq
70^{\circ}$ are relevant to the results shown in Figure 1.
The amplitude of the turbulent magnetic field on spatial scales of tens of
parsecs is not well known, but results from the literature discussed in
Section 1 indicate that it is of order the mean, large scale field. Thus our
parameter $\sigma$ is probably of the order of unity, or a large fraction
thereof. A value of $\sigma=0.70\simeq\frac{1}{\sqrt{2}}$ would have the
magnitude of fluctuating field (quadratic sum of $\delta b_{p}$ and $\delta
b_{\perp}$ components) equal to the magnitude of the large scale field.
A value of $\sigma=0.70$ and $\Theta=70^{\circ}$ gives $X=0.48$; reference to
Figure 4 shows that the probability of reversal $P_{-}$ is about 24 %. In such
a case it is mildly surprising that all 6 HII regions discussed in Section 2
produce “Faraday rotation anomalies” with the same polarity as the large scale
field. On the other hand, a slightly smaller value of $\sigma=0.50$ and
$\Theta=50^{\circ}$, also consistent with the Costa et al. (2016) results for
the Rosette Nebula and W4, would have $X=1.37$. In this case, Figure 4 shows
that the probability of a polarity reversal for a single source is less than 5
%, and the results of Figure 1 are unremarkable.
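The two cases above can be reproduced with a few lines of Python using the standard-library complementary error function, with $\Delta=0.1$ as adopted above:

```python
import math

def P_minus(sigma, theta_deg, delta=0.1):
    """X from Eq. (21) and P_- = 0.5*erfc(X) from Eq. (23)."""
    theta = math.radians(theta_deg)
    X = (delta + math.cos(theta)) / (math.sqrt(2.0) * sigma * math.sin(theta))
    return X, 0.5 * math.erfc(X)

# The two parameter sets discussed above; X comes out near 0.48 and 1.37,
# matching the quoted values.
for sigma, theta in [(0.70, 70.0), (0.50, 50.0)]:
    X, P = P_minus(sigma, theta)
    print(f"sigma = {sigma}, Theta = {theta} deg: X = {X:.2f}, P_- = {100*P:.1f}%")
```

The computed probabilities come out near 25% and 3%, in line with the approximate values quoted above.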
This discussion shows that future investigations on more HII regions could
provide more stringent limits on the large scale turbulent fluctuations in the
Galactic magnetic field. The formulas presented in this section provide a
mathematical vocabulary for discussing current as well as future observations.
For example, it is clear that the probability of a reversal becomes much
greater for $\Theta\sim\frac{\pi}{2}$. HII regions in the close vicinity of
the Galactic anticenter would provide particularly sensitive diagnostics for
the amplitude of large scale turbulence.
## 4\. Conclusions from Zeeman Effect Measurements of Star Formation Region Masers
The analysis in Section 3 assumes that HII regions occur in the presence of an
unmodified Galactic magnetic field. In the simplified stellar bubble models
presented in Savage et al. (2013); Costa et al. (2016); Costa and Spangler
(2018) the expanding bubble modifies the Galactic field, but the RM
enhancement due to the bubble is proportional to the Galactic field (mean
field plus turbulent contribution at the location of the HII region) in the
absence of the HII region. In the model considered by Harvey-Smith et al.
(2011) the HII region consists of an ionization of the interstellar gas with
no modification of the Galactic field. Harvey-Smith et al. (2011) describe
this as the HII region “lighting up” the Galactic field that exists at the
location of the HII region.
These viewpoints may be physically naive. HII regions are produced by hot,
massive stars that are products of the star formation process. Star formation
regions, in turn, result from contraction of gas from the general ISM, its
conversion to molecular form, and an associated increase in gas density. It
therefore seems likely that the process of forming dense clumps in molecular
clouds from which stars form results in major modification of the Galactic
field, perhaps to the extent of totally randomizing the vector field from its
value in the absence of the star formation region. In this light, the
observational results presented in Section 2 are even more remarkable; the
preservation of the polarity of the large scale field occurs in spite of both
the presence of natural turbulence in the Galactic field, as well as the
possibly major variations to the field induced by the star formation process.
The HII region results presented in Section 2 are not the only “local”
measurements of the magnetic field in the ISM. Hydroxyl, methanol, and water
masers in star formation regions display the Zeeman effect, permitting
measurements of the magnitude and sign of the LOS component of the magnetic
field in the maser region (Crutcher and Kemball, 2019). These masers occur in
regions of very high density compared to the ISM far from star formation
regions, and thus probe gas that has been substantially modified in density,
and presumably magnetic field, by the star formation process (Crutcher and
Kemball, 2019).
In spite of this, observations of the polarity of the magnetic field in maser
regions show results similar to those presented in Section 2. Fish et al.
(2003) examined Zeeman effect measurements for more than 50 massive star
formation regions. Although they did not find a general Galactic-wide magnetic
field direction, they did find that “ …in the solar neighborhood the magnetic
field outside the solar circle is oriented clockwise as viewed from the north
Galactic pole …”. This is the same sense as shown in Figure 1 on the basis of
the HII region results. A subsequent study by Green et al. (2012) yielded
similar results. Green et al. (2012) reported measurements of the magnetic
field (sign and magnitude of the line-of-sight component of the magnetic
field) for 14 Zeeman pairs in 6 high mass star formation regions in the
Carina-Sagittarius arm (4th Galactic quadrant). Their results show a dominant
magnetic polarity in this spiral arm (see Figure 7 of Green et al. (2012)).
The significance of these observational results was summarized in Han (2017),
who noted that they implied “turbulence in molecular clouds and violent star-
forming regions seem not to alter the mean magnetic field direction, although
the field strength is much enhanced from a few $\mu$ G to a few mG ”.
## 5\. Speculations from the Study of Hydrodynamical Turbulence
The results for the HII regions presented above, and illustrated in Figure 1
indicate, at the very least, potentially significant limits on the magnitude
of turbulent magnetic field fluctuations in the Galactic field. The results on
star formation region OH masers from Fish et al. (2003) and Green et al.
(2012) indicate, and the HII region results may indicate, a far stronger and
more mysterious physical process: a memory of the large scale polarity of the
Galactic field as it is modified during the contraction and increase in
density associated with star formation.
The obvious question which then arises is the meaning of “the persistence of
memory” in the large scale, Galactic magnetic field. The problem arises
because of the assumed isotropic nature of turbulent fluctuations on scales
much smaller than the outer scale of the turbulence. Such fluctuations, deep
in the inertial subrange or close to the dissipation range, should have no
tendency to respect large scale asymmetries in the Galactic plasma. It is
worthwhile to look for insight from the study of hydrodynamical turbulence.
There are obvious physical differences between the turbulence in the diffuse
plasmas of the interstellar medium and the turbulence in gases and liquids in
laboratory and industrial settings. However, there are important similarities,
too, and both the quality of the diagnostics and the sheer amount of effort
expended in the study of hydrodynamic turbulence exceeds that which is
possible for astrophysical turbulence.
A review of our understanding of “passive scalars” in hydrodynamic turbulence
is presented in Warhaft (2000). Passive scalars are quantities which are
convected by fluid turbulence, without affecting the flow field. Warhaft
(2000) considered as a prime example temperature fluctuations in hydrodynamic
turbulence. An astrophysical example of a passive scalar could be the plasma
density fluctuations responsible for interstellar radio scintillations.
Warhaft (2000) presents an illuminating discussion of the differences between
the engineering and physics views of turbulence, and offers an admonition that
is certainly relevant for Galactic astronomers as well: “ The physicists, with
their quest for universality, have concentrated on inertial and dissipation
scales, the hope being, if the Reynolds (Re) and Peclet (Pe) numbers are high
enough, that the small-scale behavior will be independent of the large scales.
We address both the large and small scales in this review and argue that this
division is artificial and dangerous”. Warhaft (2000) then goes on to present
laboratory and theoretical evidence that small scale passive scalar
fluctuations (primarily temperature fluctuations) are anisotropic with an
anisotropy that reflects large scale asymmetry. In the cases discussed by
Warhaft (2000) this large scale asymmetry was most typically a transverse
temperature gradient imposed on a turbulent shear flow. Warhaft (2000) also
shows that the small scale passive scalar fluctuations possess a skewness and
kurtosis that exceed those present in the fundamental turbulent velocity
field.
It is not clear to me if the clear and insightful presentation of Warhaft
(2000) holds a key to understanding the mysterious observation described in
this paper, or if any such comparison would be a hallucinogenic extrapolation
of laboratory results far outside their range of applicability. While
Warhaft’s review does describe a documented persistence of large scale
anisotropy on small turbulent scales, as appears to be indicated in our
Galactic ISM observations, it must be emphasized that the discussion of
Warhaft (2000) is concerned with passive scalars in the turbulence. In the
Galactic ISM context, that would be plasma density, temperature, ionization
fraction, and metallicity. The magnetic field, on the other hand, should share
status with fluid velocity as a “primary excitation” of the turbulence, if the
ISM turbulence is Alfvénic. Nonetheless, it would be worthwhile to consider
further if the extensive, laboratory based studies of turbulence from an
engineering perspective can illuminate the nature of plasma turbulence in the
interstellar medium.
## 6\. Summary and Conclusions
1. In the case of the Galactic-anticenter-direction HII regions Rosette Nebula
and W4, the mean magnetic field in the HII region has the same polarity as the
general Galactic field in that part of space, despite the presence of strong
MHD turbulence which would be expected to randomize the polarity. An
additional 4 HII regions studied by Harvey-Smith et al. (2011) share this
property.
2. We derive formulas for the probability of the “point” magnetic field having
the opposite polarity to the general field, which depends on properties of the
turbulent magnetic fluctuations as well as the details of the line of sight.
turbulent magnetic fluctuations as well as the details of the line of sight.
For reasonable estimates, this probability could be non-negligible, being as
high as 25 % for plausible characteristics. As a result, the observation of 6
anticenter-direction HII regions, all having the same magnetic polarity as the
general field, becomes mildly curious.
3. The results on HII regions support similar results on the polarity of star
formation OH masers presented by Fish et al. (2003) and Green et al. (2012).
In those studies, the polarity of the magnetic field in maser regions is
correlated among star formation regions that are spatially separated, and the
polarity is apparently determined by that of the large scale Galactic field.
The maser region results not only limit the properties of turbulent
fluctuations in the Galactic field, but also restrict the field randomization
that presumably occurs in the assembly of star formation cores during the
process of star formation.
4. We point out that studies of turbulence in laboratory settings also show a
persistence of large scale asymmetries in small and inertial scale
fluctuations, which could be termed “the persistence of memory” in the
Galactic magnetic field. These results are discussed in Warhaft (2000), and
refer to passive scalars in hydrodynamic turbulence. Although it is unclear if
the results discussed in Warhaft (2000) are applicable to Galactic MHD
turbulence, it would appear to be a worthwhile path to investigate.
## References
* Beck (2016) Beck, R. 2016, A&A Rev., 24, 4
* Beckmann (1967) Beckmann, P. 1967, Probability in Communication Engineering, Harcourt, Brace & World, p. 36
* Costa et al. (2016) Costa, A.H., Spangler, S.R., Sink, J.R. et al. 2016, ApJ, 821, 92
* Costa and Spangler (2018) Costa, A.H. and Spangler, S.R. 2018, ApJ, 865, 65
* Crutcher and Kemball (2019) Crutcher, R.M. and Kemball, A.J. 2019, Frontiers in Astronomy and Space Science, 6, 66
* Ferrière (2011) Ferrière, K. 2011, Mem. Soc. Astron. Italiana, 82, 824
* Ferrière (2020) Ferrière, K. 2020, Plasma Physics and Controlled Fusion, 62, 014014
* Fish et al. (2003) Fish, V.L., Reid, M.J., Argon, A.L., and Menten, K.M. 2003, ApJ, 596, 328
* Green et al. (2012) Green, J.A., McClure-Griffiths, N.M., Caswell, J.L. et al. 2012, MNRAS, 425, 2530
* Han (2017) Han, J.L. 2017, ARA&A, 55, 111
* Harvey-Smith et al. (2011) Harvey-Smith, L., Madsen, G.J., and Gaensler, B.M. 2011, ApJ, 736, 83
* Minter and Spangler (1996) Minter, A.H. and Spangler, S.R. 1996, ApJ, 458, 194
* Nicholson (1983) Nicholson, D.R. 1983, Introduction to Plasma Theory, John Wiley & Sons, p. 3
* Savage et al. (2013) Savage, A.H., Spangler, S.R., and Fischer, P.D. 2013, ApJ, 765, 42
* van Eck et al. (2011) van Eck, C.L., Brown, J.C., Stil, J.M. et al. 2011, ApJ, 728, 97
* Warhaft (2000) Warhaft, Z. 2000, Annual Review of Fluid Mechanics, 32, 203
## Appendix A Glossary of Mathematical Variables
Variable | Definition
---|---
$\Delta\chi$ | change in polarization position angle due to Faraday rotation (Eq.1)
$\vec{B}$ | total magnetic field in ISM at location of HII region (Eq.2)
$\vec{B}_{0}$ | large scale, ordered component of Galactic magnetic field
$\vec{\delta b}$ | turbulent component of Galactic magnetic field
$\Theta$ | angle between LOS and large scale magnetic field $\vec{B_{0}}$ at location of HII region (Fig. 2)
$(\hat{e}_{\parallel},\hat{e}_{p},\hat{e}_{\perp})$ | unit vectors defining coordinate system aligned with large scale field $\vec{B_{0}}$
$(\delta b_{\parallel},\delta b_{p},\delta b_{\perp})$ | components of the turbulent magnetic field in a coordinate system aligned with large scale field $\vec{B_{0}}$
$RM_{obs}$ | observed RM for a background source observed through an HII region (Eq.3)
$RM_{bck}$ | RM due to ISM along LOS, but excluding HII region (Eq.3)
$RM_{neb}$ | RM due to plasma in HII region (Eq.3)
$C$ | set of atomic constants appearing in expression for Faraday rotation (Eq. 1)
$n$ | electron density in plasma causing Faraday rotation (Eq. 1)
$R$ | radius of HII region
$B_{z}$ | LOS component of Galactic magnetic field at location of HII region
$A$ | factor ($\geq 1$) determining whether Galactic field is amplified within HII region. Model dependent (Eq. 4)
$\hat{e}_{z}$ | unit vector in direction of LOS
$x$ | dimensionless amplitude of turbulent magnetic fluctuations in the direction of the large scale field (Eq. 8)
$y$ | dimensionless amplitude of turbulent magnetic fluctuations $\perp$ to large scale field (Eq. 9)
$\xi$ | derived variable dependent on $\Theta$, $x$, and $y$ (Eq. 12)
$\Delta$ | threshold value of $\xi$ such that sign of RM reversed for $\xi<-\Delta$
$\sigma$ | rms value of Gaussian-distributed $y$ (Eq. 14)
$a$ | secondary variable $a\equiv\cos\Theta$ (Eq. 16)
$b$ | secondary variable $b\equiv\sin\Theta$ (Eq. 16)
$d$ | secondary variable $d\equiv\sigma b$ (Eq. 17)
$p(\xi)$ | normalized, Gaussian pdf of $\xi$ (Eq. 15-17)
$p_{G}(\xi)$ | shifted, zero-mean version of $p(\xi)$ (Eq. 19)
$P_{-}$ | probability that turbulent fluctuations cause reversal of LOS magnetic polarity (Eq. 18)
$t$ | secondary variable $t\equiv\frac{y}{\sqrt{2}d}$ (Eq. 20)
$X$ | single parameter determining $P_{-}$ (Eq. 21)
# A fibering theorem for 3-manifolds
Jordan A. Sahattchieve Formerly Department of Mathematics, University of
Michigan in Ann Arbor [email protected]
###### Abstract.
This paper generalizes results of M. Moon on the fibering of certain compact
3-manifolds over the circle. It also generalizes a theorem of H. B. Griffiths
on the fibering of certain 2-manifolds over the circle.
###### Key words and phrases:
3-manifolds, fiber bundles, Bass-Serre trees
## 1\. Introduction
Consider a 3-manifold $M$ which fibers over $\mathbb{S}$ with fiber a compact
surface $F$. The bundle projection $\eta$ induces a homomorphism of
$\pi_{1}(M)$ onto $\mathbb{Z}$ whose kernel is precisely $\pi_{1}(F)$. One can
alternatively think of $M$ as the mapping torus of $F$ under the automorphism
of $F$ given by the monodromy of the bundle. We thus have a short exact
sequence
$1\rightarrow\pi_{1}(F)\rightarrow\pi_{1}(M)\rightarrow\mathbb{Z}\rightarrow
1$. Stallings proved in [15] a converse to this:
###### Theorem 1.
(Stallings, Theorems 1 and 2 [15], 1961) Let $M$ be a compact, irreducible
3-manifold. Suppose that there is a surjective homomorphism
$\pi_{1}(M)\rightarrow\mathbb{Z}$ whose kernel $G$ is finitely generated and
not of order 2. Then, $M$ fibers over $\mathbb{S}$ with fiber a compact
surface $F$ whose fundamental group is isomorphic to $G$.
In [6] Hempel and Jaco prove that a compact 3-manifold fibers under somewhat
relaxed assumptions:
###### Theorem 2.
(Hempel-Jaco, Theorem 3 [6], 1972) Let $M$ be a compact 3-manifold. Suppose
that there is an exact sequence $1\rightarrow N\rightarrow\pi_{1}(M)\rightarrow
Q\rightarrow 1$, where $N$ is a nontrivial, finitely presented, normal
subgroup of $\pi_{1}(M)$ with infinite quotient $Q$. If $N\neq\mathbb{Z}$ and
$M$ contains no 2-sided projective plane, then $\hat{M}=M_{1}\\#\Sigma$, where
$\Sigma$ is a homotopy 3-sphere and $M_{1}$ is either:
(i) a fiber bundle over $\mathbb{S}$ with fiber a compact 2-manifold $F$, or
(ii) the union of two twisted I-bundles over a compact manifold $F$ which meet
at the corresponding 0-sphere bundles. In either case, $N$ is a subgroup of
finite index of $\pi_{1}(F)$ and $Q$ is an extension of a finite group by
either $\mathbb{Z}$ (case (i)), or $\mathbb{Z}_{2}*\mathbb{Z}_{2}$ (case (ii)).
The manifold $\hat{M}$ is obtained from $M$ by attaching a 3-ball to every
2-sphere in the boundary of $M$. The reader should be aware of some advances
in the field made after Theorem 2 appeared in print, which have direct bearing
on the manner in which it can be applied, as well as on its conclusion: In
[13] Scott proves that every finitely generated 3-manifold group is finitely
presented - thus, it suffices to assume that $N$ is only finitely generated.
Also, in view of the proof of the Poincaré Conjecture by Perelman, one has a
conclusion about $\hat{M}$, as there are no homotopy 3-spheres other than
$\mathbb{S}^{3}$, which is the identity for the operation of forming a
connected sum. An assumption of irreducibility on $M$ allows us to obtain
conclusions about $M$ since in that case $\hat{M}=M$.
One dimension lower, in the case when $M$ is a 2-manifold, Griffiths proved
the following fibering theorem in [3]:
###### Theorem 3.
(Griffiths, Theorem [3], 1962) Let $M$ be a 2-manifold whose fundamental group
$G$ contains a finitely generated subgroup $U$ of infinite index, which
contains a non-trivial normal subgroup $N$ of $G$. Then, $M$ is homeomorphic
to either the torus or the Klein bottle.
Theorem 3 is the starting point for my investigation as well as the work in
[8]. While the above result does not use the word bundle explicitly, it is
easy to see that both the torus and the Klein bottle are $\mathbb{S}$-bundles
over either $\mathbb{S}$ or the 1-dimensional orbifold
$\mathbb{S}/\mathbb{Z}_{2}$, where $\mathbb{Z}_{2}$ acts on $\mathbb{S}$ by
reflection. As Moon notes in [8], if we adopt this point of view, the above
fibering theorems suggest a result analogous to Griffiths’ theorem for
3-manifolds. Indeed, in [8] Moon proves such a result for compact, geometric
manifolds and their torus sums:
###### Theorem 4.
(Moon, Corollary 2.11 [8], 2005) Let $M$ be an irreducible, compact,
orientable 3-manifold, which is either a torus sum $X_{1}\bigcup_{T}X_{2}$, or
$X_{1}\bigcup_{T}$, where each $X_{i}$ is either a Seifert fibered space or a
hyperbolic manifold. If $G=\pi_{1}(M)$ contains a finitely generated subgroup
$U$ of infinite index which contains a non-trivial normal subgroup $N$ of $G$,
which intersects non-trivially the fundamental group of the splitting torus,
and such that $N\cap\pi_{1}(X_{i})$ is not isomorphic to $\mathbb{Z}$, then
$M$ has a finite cover which is a bundle over $\mathbb{S}$ with fiber a
compact surface $F$, and $\pi_{1}(F)$ is commensurable with $U$.
On the other hand, Elkalla showed in [2] that if one additionally assumes that
$G$ is $U$-residually finite (that is, for every $g\in G-U$ one
can find a finite index subgroup $U_{1}$ of $G$ which contains $U$ but not
$g$) and that $M$ is $P^{2}$-irreducible, then one can replace the normal
$N$ with a subnormal one:
###### Theorem 5.
(Elkalla, Theorem 3.7 [2], 1983) Let $M$ be a $P^{2}$-irreducible, compact and
connected 3-manifold. If $G=\pi_{1}(M)$ contains a non-trivial subnormal
subgroup $N$ such that $N$ is contained in an indecomposable and finitely
generated subgroup $U$ of infinite index in $G$, and if $G$ is $U$-residually
finite, then either (i) the Poincaré associate of $M$ is finitely covered by a
manifold, which is a fiber bundle over $\mathbb{S}$ with fiber a compact
surface $F$, such that there is a subgroup $V$ of finite index in both
$\pi_{1}(F)$ and $U$, or (ii) $N$ is isomorphic to $\mathbb{Z}$.
Again, one should view the conclusions of Theorem 5 in the context of the
Poincaré Conjecture, which is now a theorem; thus, under the assumptions made
on $M$, Theorem 5 implies that $M$ is itself finitely covered by a bundle over
$\mathbb{S}$.
Throughout this paper we shall use the notation $N\triangleleft_{s}G$ to stand
for the relationship of subnormality of a subgroup to the ambient group.
These results suggest the following:
###### Conjecture 6.
(Scott, 2010) Every irreducible, compact 3-manifold whose fundamental group
$G$ contains a subnormal subgroup $N\neq\mathbb{Z}$ contained in a finitely
generated subgroup $U$ of infinite index in $G$, virtually fibers over
$\mathbb{S}$ with fiber a compact surface $F$ such that $N$ is commensurable
with $\pi_{1}(F)$.
With a view towards this conjecture, we aim to generalize Theorem 4 to the
case where $N$ is a subnormal subgroup of $G$ and to make the natural
inductive argument Theorem 4 suggests in light of the Geometrization Theorem.
Our main result is the following:
###### Theorem 28.
Let $M$ be a compact 3-manifold with empty or toroidal boundary. If
$G=\pi_{1}(M)$ contains a finitely generated subgroup $U$ of infinite index in
$G$ which contains a nontrivial subnormal subgroup $N$ of $G$, then: (a) $M$
is irreducible, (b) if further:
(1) $N$ has a subnormal series of length $n$ in which $n-1$ terms are finitely
generated,
(2) $N$ intersects nontrivially the fundamental groups of the splitting tori of
some decomposition $\mathfrak{D}$ of $M$ into geometric pieces, and
(3) the intersections of $N$ with the fundamental groups of the geometric pieces
are not isomorphic to $\mathbb{Z}$,
then, $M$ has a finite cover which is a bundle over $\mathbb{S}$ with fiber a
compact surface $F$ such that $\pi_{1}(F)$ and $U$ are commensurable.
Many of the proofs in [8] lend themselves to being generalized, and this paper
carries out that generalization. In doing so, I have omitted proofs in all
cases where results from [8] generalize verbatim without any need for
additional arguments.
The reader will notice that in the process of proving Theorem 28, I have
verified that Conjecture 6 holds for the class of geometric manifolds:
###### Theorem 17.
Let $M$ be a compact geometric manifold and let $G=\pi_{1}(M)$. Suppose that
$U$ is a finitely generated subgroup of $G$ with $|G:U|=\infty$, and suppose
that $U$ contains a subnormal subgroup $N\triangleleft_{s}G$. If $N$ is not
infinite cyclic, then $M$ is finitely covered by a bundle over $\mathbb{S}$
with fiber a compact surface $F$ such that $\pi_{1}(F)$ is commensurable with
$U$.
My proof of Theorem 28 parallels the arguments in [8] and I first prove a
generalization of Theorem 3 for subnormal subgroups $N$:
###### Theorem 8.
Let $S$ be a surface whose fundamental group contains a finitely generated
subgroup $U$ of infinite index, which contains a non-trivial subnormal
subgroup $N$ of $\pi_{1}(S)$. Then, $S$ is the torus or the Klein bottle.
Theorem 8 is a consequence of the following theorem of Griffiths:
###### Theorem 7.
(Griffiths, Theorem 14.7 [4], 1967) Let $G$ be the fundamental group of an
ordinary Fuchsian space (such as a compact orientable surface with or without
boundary). Suppose that $G$ is infinite, not abelian and not isomorphic to
$\mathfrak{P}=\mathbb{Z}_{2}*\mathbb{Z}_{2}=\left\langle
a,b:a^{2}=1,b^{2}=1\right\rangle$ or
$\mathfrak{M}=\left\langle\mathfrak{P}*\mathfrak{P}:a_{1}b_{1}a_{2}b_{2}=1\right\rangle$,
and suppose that $U$ is a finitely generated subgroup of $G$. If $U$ contains
a non-trivial subnormal subgroup of $G$, then $U$ is of finite index in $G$.
###### Remark 1.1.
In what follows I shall assume that all 3-manifolds under consideration are
orientable.
The reader will no doubt notice that my arguments run in parallel to Moon’s.
Thus I shall first prove that Conjecture 6 is true for geometric manifolds.
Hyperbolic manifolds are handled by Theorem 1.10 in [8], which the reader can
verify for himself. In certain non-hyperbolic cases the result is a
consequence of the corresponding statement being true for compact manifolds
with the property that every subgroup of their fundamental group is finitely
generated. The remaining non-hyperbolic cases are settled by arguments using
Theorem 1 and certain facts about orbifolds proved in Section 3.1. The next
step is a generalization of Theorems 2.4 and 2.9 in [8] to handle a subnormal
$N$ which has a composition series consisting entirely of finitely generated
terms. Finally, I conclude with the inductive argument which was not possible
prior to the proof of the Geometrization Theorem. The reader will notice that
the proof of Theorem 14 is borrowed verbatim from Moon’s paper [8]; only very
minor changes are needed to achieve the desired generalization, and I have
included Moon’s proof for completeness and readability.
## 2\. An Extension of Griffiths’ Theorem
In this section I generalize Griffiths’ Theorem 3 to handle the case when $N$
is subnormal rather than normal. Namely, I prove:
###### Theorem 8.
Let $S$ be a surface whose fundamental group contains a finitely generated
subgroup $U$ of infinite index, which contains a non-trivial subnormal
subgroup $N$ of $\pi_{1}(S)$. Then, $S$ is the torus or the Klein bottle.
###### Proof 2.1.
First, we note that if $S$ is an open surface or if $\partial S\neq\emptyset$,
then $\pi_{1}(S)$ is free; see, for example, Theorem 3.3 in [3]. Then,
Theorem 1.5 in [2] shows that $\pi_{1}(S)$ must be isomorphic to $\mathbb{Z}$.
This is impossible as $\mathbb{Z}$ does not have any nontrivial infinite index
subgroups. Therefore, $S$ is a closed surface. Suppose that $S$ is not the
torus. If $S$ is orientable, since $\pi_{1}(S)$ is infinite, not abelian, and
not isomorphic to $\mathfrak{P}$ or $\mathfrak{M}$, Griffiths’ Theorem 7 shows
that $U$ is of finite index in $\pi_{1}(S)$, which is a contradiction.
Therefore, $S$ is non-orientable. Let $S^{\prime}$ be its orientable double
cover. Suppose that $S^{\prime}$ is not the torus. We have
$\pi_{1}(S^{\prime})\triangleright\pi_{1}(S^{\prime})\cap
N_{k}\triangleright...\triangleright\pi_{1}(S^{\prime})\cap N_{0}$. As
$\pi_{1}(S^{\prime})\cap U$ is of index at most 2 in $U$,
$\pi_{1}(S^{\prime})\cap U$ is finitely generated. Since $\pi_{1}(S)$ is
torsion free, we have $\pi_{1}(S^{\prime})\cap N_{0}\neq\\{1\\}$. Griffiths’
Theorem 7 shows, then, that $\pi_{1}(S^{\prime})\cap U$ must be of finite
index in $\pi_{1}(S^{\prime})$ and therefore also in $\pi_{1}(S)$. This,
however, implies that $U$ is itself of finite index in $\pi_{1}(S)$, which
yields a contradiction. Therefore, $S^{\prime}$ must be the torus, but in this
case computing the relevant Euler characteristics shows that $S$ is the Klein
bottle.
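The Euler characteristic computation alluded to at the end of the proof can be recorded explicitly; it uses only the multiplicativity of $\chi$ under finite covers:

```latex
% For the orientable double cover S' -> S:
\chi(S^{\prime}) = 2\,\chi(S).
% S' is the torus, so \chi(S') = 0, whence \chi(S) = 0.
% The closed surfaces with vanishing Euler characteristic are precisely
% the torus and the Klein bottle; since S is non-orientable, S must be
% the Klein bottle.
```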
While this result may be of independent interest, I shall only use it in
Section 3.1 to generalize Theorem 1.5 in [8] to the case of a subnormal
subgroup $N$.
## 3\. Geometric Manifolds
### 3.1. Compact Seifert Fibered Spaces
My goal in this section is to adapt Theorem 1.5 in [8] to prove that a compact
Seifert fibered space is finitely covered by a surface bundle over the circle
if its fundamental group contains a finitely generated subgroup of infinite
index which contains a subnormal subgroup not isomorphic to $\mathbb{Z}$. In
order to accomplish this, I will
first generalize a few results about 2-orbifolds in [8].
###### Proposition 9.
Suppose $X$ is a good, closed, hyperbolic 2-orbifold, then
$G=\pi_{1}^{orb}(X)$ contains no finite subnormal subgroups.
###### Proof 3.1.
Suppose that $N$ is a finite subnormal subgroup, so that $N=N_{0}\triangleleft
N_{1}\triangleleft...\triangleleft N_{n-1}\triangleleft G$. Let $X_{0}$ be the
orbifold cover of $X$ such that $\pi_{1}^{orb}(X_{0})=N_{0}$. Then, $N_{0}$
acts on $\mathbb{H}$ by isometries. By II-Corollary 2.8 in [1], $N_{0}$ has a
fixed point. Let $Fix(N_{0})$ denote the subset of $\mathbb{H}$ fixed
pointwise by $N_{0}$. If $Fix(N_{0})=\left\\{p_{0}\right\\}$ consists of a
single point, then since $N_{0}$ is normal in $N_{1}$, $N_{1}$ leaves
$Fix(N_{0})$ invariant and $p_{0}$ is a fixed point for the action of $N_{1}$
on $\mathbb{H}$. Then, $N_{1}$ is itself finite. If $|Fix(N_{0})|>1$,
$Fix(N_{0})$ must be a geodesic line $l$ and $N_{0}=\mathbb{Z}_{2}$ is
generated by a single reflection about $l$. In this case, since $N_{0}$ is
normal in $N_{1}$, every $g\in N_{1}$ must leave $l$ invariant. The line $l$
separates $\mathbb{H}$ into two half-spaces; let $H$ be the subgroup of
$N_{1}$ which preserves them. We see that $N_{0}$ is central in $N_{1}$ and
that $N_{1}=N_{0}\times H$. Since every $h\in H$ leaves $l$ invariant, $h$
must restrict to either a translation or reflection about a point on $l$. Let
$H_{0}$ be the subgroup of $H$ consisting of orientation preserving isometries
of $l$; $H_{0}$ is a subgroup of index at most 2 in $H$. It is easy to see
that if $H$ contains an orientation reversing isometry $h$, then $h^{2}=1$ and
$h$ does not commute with any element of $H_{0}$. Thus, if $H_{0}$ is
nontrivial, then $N_{0}$ is characteristic in $N_{1}$ as it is the unique
central subgroup of order 2 of $N_{1}$. Hence $N_{0}\triangleleft N_{2}$. If
$H_{0}$ is trivial, then $N_{1}$ is finite. In either case we conclude that
$N_{2}$ contains a finite normal subgroup. Proceeding inductively, we conclude
that $G$ must be finite or leave $l$ invariant and be isomorphic to
$\mathbb{Z}_{2}\times H$ as above. This is impossible as the quotient of
$\mathbb{H}$ by such a group is not compact.
###### Lemma 10.
Let $X$ be a good closed 2-dimensional orbifold. If $U$ is a finitely
generated, infinite index subgroup of $\pi_{1}^{orb}(X)$ and $U$ contains a
non-trivial $N\triangleleft_{s}\pi_{1}^{orb}(X)$, then $X$ has a finite
orbifold cover $X_{1}$ which is a $\mathbb{S}$-bundle over either $\mathbb{S}$
or the orbifold $\mathbb{S}/\mathbb{Z}_{2}$.
###### Proof 3.2.
We proceed in a manner identical to Moon’s arguments in [8] but instead use
Proposition 9 and Theorem 7 to reach the desired conclusion.
Our hypothesis on $\pi_{1}^{orb}(X)$ tells us that $X$ is not a spherical
2-dimensional orbifold, as these have a finite orbifold fundamental group. Thus
we shall assume that $X$ is a good, closed, 2-dimensional, hyperbolic
orbifold. Therefore $X$ has a finite cover which is a closed hyperbolic
surface (see [12]), and we conclude that $\pi_{1}^{orb}(X)$ contains the
fundamental group $\Gamma$ of a closed surface as a finite index subgroup.
Hence $|\Gamma:\Gamma\cap U|=\infty$. In view of Proposition 9, we must have
$\Gamma\cap N\neq 1$: To prove this, let
$Core(\Gamma)=\bigcap_{g\in\pi_{1}^{orb}(X)}g\Gamma g^{-1}$
denote the normal core of $\Gamma$ in $\pi_{1}^{orb}(X)$. We note that
$|\pi_{1}^{orb}(X):Core(\Gamma)|<\infty$ and
$Core(\Gamma)\triangleleft\pi_{1}^{orb}(X)$, and further that $\Gamma\cap N=1$
would imply that $N$ embeds into the finite quotient
$\pi_{1}^{orb}(X)/Core(\Gamma)$, thus contradicting Proposition 9. Now Theorem
7 implies that $|\Gamma:\Gamma\cap U|<\infty$, which is a contradiction.
Therefore $X$ must be a Euclidean orbifold and the conclusion follows.
###### Lemma 11.
Let $X$ be a 2-dimensional compact orbifold with nonempty boundary whose
singular points are cone points in $Int(X)$. If a finitely generated subgroup
$U$ of $\pi_{1}^{orb}(X)$ contains a nontrivial subnormal subgroup $N$ of
$\pi_{1}^{orb}(X)$, then $U$ is of finite index in $\pi_{1}^{orb}(X)$.
###### Proof 3.3.
As $X$ has nonempty boundary, $\pi_{1}^{orb}(X)$ is a free product of cyclic
groups. Suppose, for the sake of contradiction, that the index of $U$ in
$\pi_{1}^{orb}(X)$ is infinite. The hypotheses of Theorem 1.5 in [2] are
satisfied and we conclude that $\pi_{1}^{orb}(X)$ is indecomposable, hence
cyclic. That is impossible as $\mathbb{Z}$ has no nontrivial infinite index
subgroups.
###### Lemma 12.
Suppose that $G$ is a cyclic extension of $H$: $1\rightarrow\left\langle
t\right\rangle\rightarrow G\rightarrow H\rightarrow 1$. Suppose that $U$ is a
subgroup of $G$; then the centralizer of $\left\langle t\right\rangle$ in $U$,
$C_{U}(t)=\left\\{u\in U:[u,t]=1\right\\}$ is normal in $U$ and is of index at
most 2.
###### Proof 3.4.
The conjugation of $\left\langle t\right\rangle$ by elements of $G$,
$g\rightarrow\psi_{g}$, where $\psi_{g}(t)=gtg^{-1}$, defines a homomorphism
of $G$ to $Aut(\mathbb{Z})\cong\mathbb{Z}/2\mathbb{Z}$. Therefore, the kernel
of this homomorphism, which is precisely $K=\left\\{g\in G:[g,t]=1\right\\}$,
is of index at most 2 in $G$. The observation $C_{U}(t)=U\cap K$ concludes the
proof.
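As a concrete illustration of Lemma 12 (not needed in the sequel), take $G$ to be the Klein bottle group, a cyclic extension of $\mathbb{Z}$:

```latex
G=\left\langle a,b:aba^{-1}b=1\right\rangle,\qquad
1\rightarrow\left\langle b\right\rangle\rightarrow G\rightarrow
\mathbb{Z}\rightarrow 1.
% Here aba^{-1}=b^{-1}, so conjugation by a inverts b, while a^{2}ba^{-2}=b.
% Taking U=G and t=b, one finds
C_{G}(b)=\left\langle a^{2},b\right\rangle,
% which is normal of index exactly 2 in G, as Lemma 12 predicts.
```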
###### Proposition 13.
Let $M$ be a compact 3-manifold whose fundamental group $\pi_{1}(M)$ has the
property that all of its subgroups are finitely generated. If $\pi_{1}(M)$
contains a finitely generated, infinite index subgroup $U$ which contains a
non-trivial subgroup $N\neq\mathbb{Z}$ subnormal in $\pi_{1}(M)$, then a
finite cover of $M$ fibers over $\mathbb{S}$ with fiber a compact surface $F$,
and $\pi_{1}(F)$ is commensurable with $U$.
###### Proof 3.5.
Let $N=N_{0}\triangleleft N_{1}\triangleleft...\triangleleft
N_{k-1}\triangleleft N_{k}=\pi_{1}(M)$. By assumption, each $N_{i}$ is
finitely generated. Let $i_{0}$ be the largest index such that
$|\pi_{1}(M):N_{i_{0}}|=\infty$ and $|\pi_{1}(M):N_{j}|<\infty$, for all
$j>i_{0}$. Then, $N_{i_{0}}$ is a normal infinite index subgroup of
$N_{i_{0}+1}$. Note that since $M$ is compact, Theorem 2.1 in [13] shows that
$N_{i_{0}}$ is finitely presented. Let $M_{N_{i_{0}+1}}$ be the finite cover
of $M$ whose fundamental group is $N_{i_{0}+1}$. We now have $1\rightarrow
N_{i_{0}}\rightarrow\pi_{1}(M_{N_{i_{0}+1}})\rightarrow Q\rightarrow 1$, where
$N_{i_{0}}$ is finitely presented and $Q$ infinite. Applying Theorem 3 of
Hempel and Jaco in [6], we conclude that $M_{N_{i_{0}+1}}$ has a finite cover
which is a bundle over $\mathbb{S}$ with fiber a compact surface $F$ and that
$N_{i_{0}}$ is a subgroup of finite index in $\pi_{1}(F)$.
Next, we show that $|\pi_{1}(F):N_{0}|<\infty$. Consider the finite cover
$F_{N_{i_{0}}}$ of $F$ whose fundamental group is $N_{i_{0}}$. Suppose that
$|N_{i_{0}}:N_{0}|=\infty$, then Theorem 8 implies that $F_{N_{i_{0}}}$ is the
torus or the Klein bottle. If $F_{N_{i_{0}}}$ is the torus,
$N_{i_{0}}=\mathbb{Z}^{2}$ and if $F_{N_{i_{0}}}$ is the Klein bottle
$N_{i_{0}}=\left\langle
a,b:aba^{-1}b=1\right\rangle\cong\mathbb{Z}\rtimes\mathbb{Z}$. In either case,
any subgroup of $N_{i_{0}}$ is either trivial, isomorphic to $\mathbb{Z}$, or
of finite index in $N_{i_{0}}$. This is a contradiction since the hypothesis
on $N_{0}$ rules out the first two possibilities and we argued assuming
$|N_{i_{0}}:N_{0}|=\infty$. Hence we must have $|N_{i_{0}}:N_{0}|<\infty$ and
therefore $|\pi_{1}(F):N_{0}|<\infty$.
Now $\pi_{1}(M)$ is virtually an extension of $\pi_{1}(F)$ by $\mathbb{Z}$;
since $|\pi_{1}(M):U|=\infty$, $|\pi_{1}(F):N_{0}|<\infty$, and $N_{0}<U$,
it follows that $|U:N_{0}|<\infty$, showing that $U$ is commensurable with
$\pi_{1}(F)$, as desired.
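The index bookkeeping behind the last sentence can be spelled out as follows (a routine verification, recorded for readability):

```latex
% From N_{0}\le U\cap\pi_{1}(F)\le\pi_{1}(F) and |\pi_{1}(F):N_{0}|<\infty:
|\pi_{1}(F):U\cap\pi_{1}(F)|<\infty,\qquad |U\cap\pi_{1}(F):N_{0}|<\infty.
% If U\cap\pi_{1}(F) had infinite index in U, the image of U in the
% (virtual) quotient \mathbb{Z} would be infinite, hence of finite index;
% combined with the first inequality this would make U of finite index in
% \pi_{1}(M), contradicting |\pi_{1}(M):U|=\infty. Hence
|U:N_{0}|=|U:U\cap\pi_{1}(F)|\cdot|U\cap\pi_{1}(F):N_{0}|<\infty.
```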
The following theorem was first proved in [8] in the context of $N$ being a
normal subgroup of $\pi_{1}(M)$; the proof given below is a modification of
the proof therein.
###### Theorem 14.
Let $Y$ be a compact Seifert fibered space and let $U$ be a finitely
generated, infinite index subgroup of $\pi_{1}(Y)$. Suppose, further, that $U$
contains a non-trivial subnormal subgroup $N\neq\mathbb{Z}$ of $\pi_{1}(Y)$.
Then, $Y$ is finitely covered by a compact 3-manifold $Y_{1}$, which is a
bundle over $\mathbb{S}^{1}$ with fiber a compact surface $F$, and
$\pi_{1}(F)$ is commensurable with $U$.
###### Proof 3.6.
The fundamental group of a Seifert fibered space fits into the short exact
sequence $1\rightarrow\left\langle
t\right\rangle\rightarrow\pi_{1}(Y)\xrightarrow[]{\phi}\pi_{1}^{orb}(X)\rightarrow
1$, where $\left\langle t\right\rangle$ is the cyclic group generated by a
regular fiber. The proof, as in [8], proceeds in two cases:
#### Case 1:
The orbifold $X$ is orbifold covered by an orientable surface other than the
torus.
First, we show that $|\pi_{1}(Y):\phi^{-1}(\phi(U))|<\infty$. Since
$N\neq\mathbb{Z}$, $\pi_{1}^{orb}(X)\triangleright_{s}\phi(N)\neq 1$. Since we
also have $\phi(N)<\phi(U)<\pi_{1}^{orb}(X)$, and $\phi(U)$ is finitely
generated, we can apply Lemma 10 and Lemma 11 to conclude that
$|\pi_{1}^{orb}(X):\phi(U)|<\infty$. Therefore,
$|\pi_{1}(Y):\phi^{-1}(\phi(U))|<\infty$ as desired. Next, we show that
$\phi|_{U}$ is a monomorphism. If some power of $t$ is in $U$, then $U$ will
be of finite index in $\phi^{-1}(\phi(U))$. Since $\phi^{-1}(\phi(U))$ was
shown to be of finite index in $\pi_{1}(Y)$, we conclude that $U$ must be of
finite index in $\pi_{1}(Y)$ contrary to the assumptions in the statement of
the theorem. Now, since $\phi|_{U}$ is a monomorphism, $\phi(U)$ is torsion
free and therefore the fundamental group of a compact surface. Let
$C_{U}(t)=\left\\{u\in U:[u,t]=1\right\\}$, and let $G$ be the subgroup of
$\pi_{1}(Y)$ generated by $C_{U}(t)$ and $t$. Note that $C_{U}(t)\triangleleft
U$ and that the index of $C_{U}(t)$ in $U$ is at most 2 by Lemma 12. Hence
$|\phi^{-1}(\phi(U)):G|\leq 2$. Thus $G$ is a finite index subgroup of
$\pi_{1}(Y)$. Now, take the cover $Y_{1}$ of $Y$ corresponding to
$G\leq\pi_{1}(Y)$. Since $G=\left\langle C_{U}(t),t\right\rangle$ with
$C_{U}(t)\cap\left\langle t\right\rangle=1$, we conclude that $G\cong
C_{U}(t)\times\mathbb{Z}$ and also that $Y_{1}\cong F\times\mathbb{S}$, where
$F$ is a compact surface whose fundamental group is isomorphic to $C_{U}(t)$.
This concludes Case 1.
#### Case 2:
The orbifold $X$ is orbifold covered by the torus.
Let $\Gamma\cong\mathbb{Z}^{2}$ be the fundamental group of the torus which
orbifold covers $X$. We proceed as in Moon [8]. In view of Proposition 13,
we only need to show that every subgroup $H$ of $\pi_{1}(Y)$ is finitely
generated.
We have the short exact sequence $1\rightarrow\left\langle t\right\rangle\cap
H\rightarrow H\rightarrow\phi(H)\rightarrow 1$. Since $\Gamma$ is of finite
index in $\pi_{1}^{orb}(X)$, $\phi(H)\cap\Gamma$ is of finite index in
$\phi(H)$. The intersection $\phi(H)\cap\Gamma$ is finitely generated as it is
a subgroup of $\Gamma$, hence $\phi(H)$ is finitely generated. Because the
kernel of $\phi|_{H}$ is a subgroup of $\mathbb{Z}$ and is therefore trivially
finitely generated, we conclude that $H$ is itself finitely generated as
needed.
### 3.2. Non-SFS Geometric Manifolds
We now handle the remaining two types of geometric manifolds which are not
Seifert fibered spaces in a manner analogous to the proofs in [8].
###### Theorem 15.
Let $M$ be a closed Sol manifold, such that $\pi_{1}(M)$ contains a finitely
generated, infinite index subgroup $U$ which contains a non-trivial subgroup
$N\neq\mathbb{Z}$ subnormal in $\pi_{1}(M)$. Then, a finite cover of $M$ is a
fiber bundle over $\mathbb{S}$ whose fiber is a compact surface $F$ and
$\pi_{1}(F)$ is commensurable with $U$.
###### Proof 3.7.
The fundamental group $\pi_{1}(M)$ of a closed Sol manifold satisfies
$1\rightarrow\mathbb{Z}^{2}\rightarrow\pi_{1}(M)\overset{\phi}{\rightarrow}\mathbb{Z}\rightarrow
1.$
Suppose $H$ is a subgroup of $\pi_{1}(M)$, then we have
$1\rightarrow\mathbb{Z}^{2}\cap H\rightarrow H\rightarrow\phi(H)\rightarrow
1$. Since $\mathbb{Z}^{2}\cap H$ and $\phi(H)$ are finitely generated, $H$ is
also finitely generated. Now, Proposition 13 yields the desired conclusion.
###### Theorem 16.
(Moon, Theorem 1.10 [8], 2005) Let $M$ be a complete hyperbolic manifold of
finite volume whose fundamental group $G$ contains a finitely generated
subgroup of infinite index $U$ which contains a non-trivial subgroup $N$
subnormal in $G$. Then, $M$ has a finite covering space $M_{1}$ which is a
bundle over $\mathbb{S}$ with fiber a compact surface $F$, and $\pi_{1}(F)$ is
a subgroup of finite index in $U$.
###### Proof 3.8.
The proof in [8] generalizes verbatim to the case when $N$ is subnormal in
$G$.
The above results are summarized in:
###### Theorem 17.
Let $M$ be a compact geometric manifold and let $G=\pi_{1}(M)$. Suppose that
$U$ is a finitely generated subgroup of $G$ with $|G:U|=\infty$, and suppose
that $U$ contains a subnormal subgroup $N\triangleleft_{s}G$. If $N$ is not
infinite cyclic, then $M$ is finitely covered by a bundle over $\mathbb{S}$
with fiber a compact surface $F$ such that $\pi_{1}(F)$ is commensurable with
$U$.
## 4\. Torus Sums
We now consider manifolds which split along an incompressible torus
$\mathcal{T}$. In these cases the fundamental group $G$ of $M$ splits as a
free product with amalgamation over the group carried by the splitting torus
$\mathcal{T}$. Whenever one has such a splitting, one has an action of $G$ on
a tree with quotient an edge, which is called the Bass-Serre tree of $G$. Our
proof, as in [8], splits into two cases. In the first case, the
quotient of the Bass-Serre tree of $G$ corresponding to the splitting of $M$
by the action of $U$ is a graph of groups of infinite diameter; in the second
case it is a graph of finite diameter. The proof of Theorem 2.9 in [8] is
easily seen to handle a subnormal $N$ in the case of a finite diameter
quotient. Thus, our efforts are focused on generalizing Theorem 2.4 in [8].
We begin with a simple lemma showing that if $N$ is a nontrivial subnormal
subgroup, not isomorphic to $\mathbb{Z}$, of a 3-manifold group, then all
finitely generated terms of its subnormal series appear to the right of the
terms which are not finitely generated:
###### Lemma 18.
Let $G$ be a finitely generated 3-manifold group. Let $N=N_{0}\triangleleft
N_{1}\triangleleft...\triangleleft N_{n-1}\triangleleft N_{n}=G$ be a
subnormal subgroup of $G$ such that $N\neq\left\\{1\right\\}$ and
$N\neq\mathbb{Z}$. Then there is an index $0\leq i_{0}\leq n$ such that
$N_{i}$ is finitely generated for all $i\geq i_{0}$ and $N_{i}$ is not
finitely generated for any $i<i_{0}$.
###### Proof 4.1.
Suppose, for the purpose of obtaining a contradiction, that there exists an
occurrence of an "inversion": $N_{i_{0}}\triangleleft N_{i_{0}+1}$ with
$N_{i_{0}}$ finitely generated while $N_{i_{0}+1}$ is not finitely generated.
Since every subgroup of a 3-manifold group is obviously itself a 3-manifold
group, we can apply Proposition 2.2 in [2] to conclude that $N_{i_{0}}$ must
be isomorphic to $\mathbb{Z}$; but then $N_{0}$, being a subgroup of
$N_{i_{0}}\cong\mathbb{Z}$, is trivial or isomorphic to $\mathbb{Z}$, contrary
to assumption. Therefore, such an inversion is not possible and the conclusion
follows.
The following results about Bass-Serre trees will allow us to apply Theorem
2.4 in [8] to the graph of groups corresponding to finite covers of $M$. The
hypothesis of Theorem 2.4 requires a splitting of $M$ along an incompressible
torus and further assumes that the quotient of the Bass-Serre tree of the
corresponding splitting of $\pi_{1}(M)$ by the action of $U$ is a graph of
infinite diameter. The arguments below show that the fundamental group of any
finite cover of $M$ will also have this property.
For completeness, I include a proof of the following standard fact which is
often left as an exercise in expository texts. I have borrowed it from Henry
Wilton’s unpublished notes titled Group actions on trees:
###### Lemma 19.
Let $\chi$ be a tree and let $\alpha,\beta\in Aut(\chi)$ be a pair of elliptic
automorphisms. If $Fix(\alpha)\cap Fix(\beta)=\emptyset$, then $\alpha\beta$
is a hyperbolic element of $Aut(\chi)$.
###### Proof 4.2.
It suffices to construct an axis for $\alpha\beta$ on which it acts by
translation, which we now do.
First, note that $Fix(\alpha)$ and $Fix(\beta)$ are closed subtrees of $\chi$.
Let $x\in Fix(\alpha)$ be the unique point in $\chi$ closest to $Fix(\beta)$
and $y\in Fix(\beta)$ be the unique point in $\chi$ closest to $Fix(\alpha)$.
Then, the unique geodesic from $y$ to $\alpha\beta\cdot y$ is
$[x,y]\cup\alpha\cdot[x,y]$ since there are no points in the interior of
$[x,y]$ fixed by $\alpha$ so that the concatenation of the geodesic segments
above is a geodesic segment. Hence, $d(y,\alpha\beta\cdot y)=d(y,\alpha\cdot
y)=2d(x,y)$. Similarly, the geodesic from $y$ to $(\alpha\beta)^{2}\cdot y$ is
$[x,y]\cup\alpha\cdot[x,y]\cup\alpha\beta\cdot[x,y]\cup\alpha\beta\alpha\cdot[x,y]$,
so that $d(y,(\alpha\beta)^{2}\cdot y)=4d(x,y)=2d(y,\alpha\beta\cdot y)$ thus
establishing the existence of an axis for $\alpha\beta$, which finishes the
proof.
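A minimal instance of Lemma 19, taking for $\chi$ the simplicial line with vertices at the integers:

```latex
% Let \alpha and \beta be the reflections about 0 and 1, respectively:
\alpha(x)=-x,\qquad \beta(x)=2-x,\qquad
Fix(\alpha)=\{0\},\quad Fix(\beta)=\{1\}.
% The fixed sets are disjoint, and
(\alpha\beta)(x)=\alpha(2-x)=x-2,
% so \alpha\beta is hyperbolic, translating the line by 2=2\,d(0,1),
% in agreement with d(y,\alpha\beta\cdot y)=2\,d(x,y) in the proof.
```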
###### Proposition 20.
Suppose $G$ is a free product with amalgamation $G=A*_{C}B$, with $G\neq A$,
and $G\neq B$, or $G=A*_{C}$, with $A\neq C$, and suppose $\chi$ is the Bass-
Serre tree for $G$. Then, if $H\leq G$ is a subgroup of finite index in $G$,
the action of $H$ on $\chi$ is minimal and $\chi$ is the Bass-Serre tree for a
graph of groups whose fundamental group is $H$.
###### Proof 4.3.
Since $G$ acts simplicially on $\chi$, $G$ acts by isometries on the CAT(0)
space $\chi$. Therefore, every $g\in G$ acts by an elliptic or hyperbolic
isometry according to whether $g$ fixes a vertex or realizes a non-zero
minimal translation distance $g_{min}=\inf\\{d(x,g\cdot x):x\in\chi\\}$. In
the latter case, $g$ leaves a subspace of $\chi$ isometric to $\mathbb{R}$
invariant, and acts on this subspace, called an axis for $g$, as translation
by $g_{min}$. See [1] for an account of the classification of the isometries
of a CAT(0) space.
First, we show that there exists an element $g\in G$ which acts on $\chi$ as a
hyperbolic isometry; such an isometry is sometimes called loxodromic in the
context of group actions on trees. Let $g_{1}\in A-C$ and $g_{2}\in B-C$; such
elements can clearly be found as $G\neq A$ and $G\neq B$. Every automorphism
of $\chi$ is either elliptic or hyperbolic, so for $i=1,2$, $g_{i}$ is either
elliptic or hyperbolic. If one of these is hyperbolic, we are done. If $g_{1}$
and $g_{2}$ are both elliptic, then they stabilize two adjacent vertices
$v_{1}$ and $v_{2}$, respectively. Now, suppose that $g_{1}$ and $g_{2}$ both
fix a point $p\in\chi$; we can easily see that $g_{1}$ fixes the geodesic
segment $\left[v_{1},p\right]$ and similarly that $g_{2}$ fixes
$\left[v_{2},p\right]$. Because every edge of $\chi$ disconnects $\chi$, at
least one of $g_{1}$ and $g_{2}$ fixes the unique edge connecting the two
vertices $e(v_{1},v_{2})$, and therefore is in $C$. This is a contradiction
and therefore $Fix(g_{1})\cap Fix(g_{2})=\emptyset$, hence $g_{1}g_{2}$ is a
hyperbolic isometry by Lemma 19. The case of an HNN extension is similar.
Next, we show that every point of $\chi$ lies on the axis for some hyperbolic
isometry $g\in G$. Suppose that $g_{0}\in G$ is an element which acts as a
hyperbolic isometry on $\chi$, whose axis is $\gamma_{0}$. Suppose $e_{0}$ is
any edge of $\gamma_{0}$. Since $g\cdot\gamma_{0}$ is an axis for
$gg_{0}g^{-1}$, it follows that $g\cdot e_{0}$ is contained in the axis for
$gg_{0}g^{-1}$. But the action of $G$ is transitive on the set of edges of
$\chi$, therefore $\chi=\bigcup\limits_{g\in G}g\cdot\gamma_{0}$. Moreover, the
union on the right-hand side is exactly the union of the axes for the elements
of $G$ which are conjugates of $g_{0}$, which proves our first claim.
Finally, we show that every axis $\gamma$ for a hyperbolic element $g\in G$ is
an axis for some $h\in H$ as follows: Consider the cosets
$H,gH,g^{2}H,...,g^{n}H$, where $|G:H|=n$. These $n+1$ cosets cannot all be
distinct, so $g^{j}H=g^{k}H$ for some $0\leq j<k\leq n$, and hence $g^{i}\in H$
for $i=k-j$ with $1\leq i\leq n$. However,
$\gamma$ is an axis for $g^{i}$ as well, hence our second claim has been
established. Now, we see that $\chi$ is a union of the axes for the hyperbolic
elements of $H$. From this we deduce that every edge of $\chi$ lies on an axis
of a hyperbolic element of $H$ and therefore $H$ cannot leave invariant any
subtree of $\chi$. This shows that the action of $H$ on $\chi$ is minimal and
that $\chi$ is, therefore, the Bass-Serre tree for $H$.
###### Lemma 21.
Let $M$ be a compact 3-manifold such that $\pi_{1}(M)=X*_{\mathcal{T}}$, where
$\mathcal{T}$ is an incompressible torus, so that $G=A*_{C}$, where
$G=\pi_{1}(M)$ and $A=C=\mathbb{Z}^{2}$. If $U$ is a finitely generated
subgroup of $G$ which contains a nontrivial subnormal subgroup
$N\neq\mathbb{Z}$, then $M$ has a finite cover which is a bundle over
$\mathbb{S}$ with fiber a torus $T$, and $\pi_{1}(T)$ is commensurable with $U$.
###### Proof 4.4.
By assumption, $G$ satisfies $1\rightarrow\mathbb{Z}^{2}\rightarrow
G\rightarrow\mathbb{Z}\rightarrow 1$, so by Theorem 1, $M$ is a torus bundle
over $\mathbb{S}$. However, torus bundles over the circle are geometric and
the conclusion follows from Theorem 17.
Given a graph of groups $\Gamma$, we can form a different graph of groups
$\Gamma^{\prime}$ by collapsing an edge in $\Gamma$ to a single vertex with
vertex group $G_{v_{1}}*_{G_{e}}G_{v_{2}}$, or $G_{v}*_{G_{e}}$ if the edge
collapsed joins a single vertex $v$. To obtain a graph of groups structure, we
define the inclusion maps of all the edges meeting the new vertex in
$\Gamma^{\prime}$ to be the compositions of the injections indicated by
$\Gamma$ followed by the canonical inclusion of $G_{i}\hookrightarrow
G_{v_{1}}*_{G_{e}}G_{v_{2}}$ for $i=1,2$, or $G_{v}\hookrightarrow
G_{v}*_{G_{e}}$ as appropriate. It is obvious that
$\pi_{1}(\Gamma)\cong\pi_{1}(\Gamma^{\prime})$. If $\Gamma$ is a finite graph
of groups, given an edge $e\in Edge(\Gamma)$, we can proceed to collapse all
the edges of $\Gamma$ except for $e$ in this way to obtain a new graph of
groups $\Gamma_{e}$ which has a single edge. Thus, $\Gamma_{e}$ gives the
structure of a free product with amalgamation or an HNN extension to $G$. We
shall call this process of obtaining $\Gamma_{e}$ from $\Gamma$ collapsing
around $e$.
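As a concrete illustration (our own example, not from the text): suppose $\Gamma$ is a segment with three vertices carrying groups $A$, $B$, $C$ and two edges carrying groups $D$ (joining $A$ to $B$) and $E$ (joining $B$ to $C$). Collapsing around the second edge collapses the first, producing a single-edge graph $\Gamma_{e}$ with vertex groups $A*_{D}B$ and $C$ and edge group $E$, so that $G\cong(A*_{D}B)*_{E}C$; here the inclusion $E\hookrightarrow B$ has been replaced by its composition with the canonical inclusion $B\hookrightarrow A*_{D}B$.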
###### Proposition 22.
Let $\Gamma$ be a finite graph of groups with fundamental group
$\pi_{1}(\Gamma)=G$. Let $T$ be the Bass-Serre tree of $\Gamma$ and let $U$ be
a finitely generated subgroup of $G$ such that $U\backslash T$ has infinite
diameter. Then, for some edge $e\in Edge(\Gamma)$ the quotient of the
Bass-Serre tree of the graph $\Gamma_{e}$ by $U$ has infinite diameter.
###### Proof 4.5.
For $e\in Edge(\Gamma)$, let $Q_{e}$ be the graph obtained from $T$ by
collapsing every edge which is not a preimage of $e$ under the quotient map
$T\rightarrow G\backslash T=\Gamma$. One easily verifies that $Q_{e}$ is a
tree and that the action of $G$ on $T$ descends to an action of $G$ on
$Q_{e}$. The quotient of $Q_{e}$ by $G$ has a single edge and thus gives $G$
the structure of an amalgamated free product or an HNN extension. Further, the
stabilizers of the vertices of $Q_{e}$ are precisely conjugates of the vertex
groups of $\Gamma_{e}$, and the stabilizers of edges are conjugates of the
edge group of $\Gamma_{e}$: this can easily be verified by observing that for
every $v^{\prime}\in T$, the quotient map $T\rightarrow Q_{e}$ collapses the
tree $T_{v^{\prime}}$, which consists of all edge paths starting at
$v^{\prime}$ not containing preimages of $e$. Thus, $Stab(w^{\prime})\subseteq
G$ for $w^{\prime}\in Q_{e}$ is precisely the subgroup
$Inv(T_{v^{\prime}})\subseteq G$ which leaves $T_{v^{\prime}}$ invariant. It
is easy to see that $Inv(T_{v^{\prime}})$ contains every vertex stabilizer
subgroup of $G$ for vertices in $T_{v^{\prime}}$ and that $G$ splits as the
free product with amalgamation
$Inv(T_{v_{1}^{\prime}})*_{Stab(e^{\prime})}Inv(T_{v_{2}^{\prime}})$ for two
adjacent vertices $v_{1}^{\prime},v_{2}^{\prime}\in T$ joined by the edge
$e^{\prime}$ (or an HNN extension). Thus, we conclude that $Q_{e}$ is the
Bass-Serre tree for $\Gamma_{e}$.
Finally, to prove that $U\backslash Q_{e}$ has infinite diameter for some $e$,
we argue by contradiction. Suppose $U\backslash Q_{e}$ has finite diameter for
every $e\in Edge(\Gamma)$. First, observe that there is a bijection between
the edges of $Q_{e}$ and the edges of $T$ which are not collapsed. Because
$Q_{e}$ is the Bass-Serre tree of an amalgamated free product or an HNN
extension, Lemma 2.1 in [8] shows that since $U\backslash Q_{e}$ is assumed to
have finite diameter, it must be a finite graph of groups. This, however,
shows that there are finitely many $U$-orbits of edges in $Q_{e}$, hence there
are finitely many $U$-orbits of edges in $T$ lying above $e\in Edge(\Gamma)$.
Because this is true by assumption for every edge $e\in Edge(\Gamma)$ and
because $Edge(\Gamma)$ is finite, we conclude that there are finitely many
$U$-orbits of edges in $T$. This means that the quotient $U\backslash T$ must
be a finite graph which contradicts the assumption that $U\backslash T$ has
infinite diameter. Therefore, there exists an edge $e\in Edge(\Gamma)$ such
that $U\backslash Q_{e}$ has infinite diameter.
Propositions 20 and 22 allow us to prove the generalization of Theorem 2.4 in
[8] promised at the beginning of the section:
###### Theorem 23.
Let $M$ be a compact 3-manifold with $\pi_{1}(M)=G$, and suppose that $M$
splits along an incompressible torus $\mathcal{T}$,
$M=X_{1}\cup_{\mathcal{T}}X_{2}$, or $M=X_{1}\cup_{\mathcal{T}}$. Suppose
that:
1. (1)
$G$ contains a non-trivial subnormal subgroup
$N=N_{0}\triangleleft...\triangleleft N_{n-1}\triangleleft N_{n}=G$ such that
$N\neq\mathbb{Z}$,
2. (2)
at least $n$ terms in the subnormal series
$N=N_{0}\triangleleft...\triangleleft N_{n-1}\triangleleft N_{n}=G$ are
finitely generated,
3. (3)
$G$ contains a finitely generated subgroup $U$ of infinite index in $G$ such
that $N<U$.
If the graph of groups $\mathcal{U}$ corresponding to $U$ has infinite
diameter, then $M$ is finitely covered by a torus bundle over $\mathbb{S}^{1}$
with fiber $T$, and $U$ and $\pi_{1}(T)$ are commensurable.
###### Proof 4.6.
First, we note that if the splitting of $M$ induces an HNN extension
structure on $G$ of the form $A*_{C}$ with $A=C$, then the conclusion follows
from Lemma 21; in what follows we shall therefore assume that this is not the
case.
From Lemma 18 we conclude that our assumption that all but one of the $N_{i}$
are finitely generated is actually equivalent to the assumption that every
$N_{i}$ for $i>0$ is finitely generated. We consider two cases: either
$|N_{i}:N_{i-1}|<\infty$ for all $i>0$, or there exists at least one
occurrence of $|N_{i}:N_{i-1}|=\infty$ where $i>0$.
#### Case 1:
$|N_{i}:N_{i-1}|<\infty$ for all $i>0$.
In this case, all the $N_{i}$ have finite index in $G$, except for $N$, in
particular $|G:N_{1}|<\infty$, and $N\triangleleft N_{1}<G$. Consider the
finite cover $M_{N_{1}}$ of $M$ whose fundamental group is $N_{1}$. This cover
will be made up of covers for the pieces $X_{1}$ and $X_{2}$ glued along
covers of the splitting torus $\mathcal{T}$. Let $\chi$ be the Bass-Serre tree
of the graph of groups induced by the splitting of $M$ along $\mathcal{T}$
which gives $G$ the structure of an amalgamated free product or HNN extension
over $\mathbb{Z}^{2}$. In view of Proposition 20, $\chi$ is the Bass-Serre
tree of $N_{1}$. Then, $N_{1}$ acts on $\chi$ with quotient the graph of
groups $N_{1}\backslash\chi$, which is also the graph of groups decomposition
of $N_{1}$ obtained from the splitting of fundamental group of the cover
$M_{N_{1}}$ along the fundamental groups corresponding to lifts of the
splitting torus $\mathcal{T}$ of $M$.
Consider now the subgroup $U\cap N_{1}$; we will show that $|U:U\cap
N_{1}|<\infty$ from which it will follow that $U\cap N_{1}$ is a finitely
generated, infinite index subgroup of $N_{1}$, which contains $N$: There is a
well-defined map $\psi$ from the coset space $U/U\cap N_{1}$ to the coset
space $G/N_{1}$ given by $\psi(u\,(U\cap N_{1}))=uN_{1}$. This map is easily
seen to be injective, hence $|U:U\cap N_{1}|\leq|G:N_{1}|<\infty$.
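The injectivity of $\psi$ can be spelled out in one line: if $\psi(u\,(U\cap N_{1}))=\psi(u^{\prime}\,(U\cap N_{1}))$, then $uN_{1}=u^{\prime}N_{1}$, so $u^{-1}u^{\prime}\in N_{1}$; since $u^{-1}u^{\prime}\in U$ as well, $u^{-1}u^{\prime}\in U\cap N_{1}$, that is, $u\,(U\cap N_{1})=u^{\prime}\,(U\cap N_{1})$.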
Note that $U\cap N_{1}$ acts on $\chi$ with an infinite diameter quotient: The
map $\left[x\right]_{U\cap
N_{1}\backslash\chi}\rightarrow\left[x\right]_{U\backslash\chi}$ from $U\cap
N_{1}\backslash\chi$ to $U\backslash\chi$ is onto. Hence, if $U\cap
N_{1}\backslash\chi$ had finite diameter $L$, we would conclude that any two
vertices $v_{1},v_{2}\in U\backslash\chi$ would be at a distance at most $L$
as we can find an edge path in $U\cap N_{1}\backslash\chi$ of length at most
$L$ between any two preimages of $v_{1}$ and $v_{2}$. This path projects to a
path of length at most $L$ between $v_{1}$ and $v_{2}$.
Since the finite index subgroup $U\cap N_{1}$ of $U$ also acts on $\chi$ with
an infinite diameter quotient, Proposition 22 shows that there is an edge in
$N_{1}\backslash\chi$, such that the splitting of $N_{1}$ along the
corresponding edge group has a Bass-Serre tree whose quotient by $U\cap N_{1}$
has infinite diameter. The edge groups of $N_{1}\backslash\chi$ are all
isomorphic to $\mathbb{Z}^{2}$ since they are the fundamental groups of lifts
of the splitting torus $\mathcal{T}$ of $M$. Therefore the cover $M_{N_{1}}$
is a torus sum and we can apply Theorem 2.4 in [8] to conclude that a finite
cover of $M_{N_{1}}$ fibers in the desired way, and that $U\cap N_{1}$, and
therefore also $U$, is commensurable with the fundamental group of the fiber.
#### Case 2:
$|N_{i}:N_{i-1}|=\infty$ for some $i>0$.
Let $i_{0}$ be the largest integer index for which
$|N_{i_{0}}:N_{i_{0}-1}|=\infty$. In this case, we consider the finite cover
$M_{N_{i_{0}}}$ whose fundamental group is $N_{i_{0}}$. Since $N_{i_{0}-1}$ is
assumed to be finitely generated, it is also finitely presented by Theorem 2.1
in [13]. Now, $M_{N_{i_{0}}}$ fibers in the desired way by Theorem 3 of [6],
and further $N_{i_{0}-1}$ is a subgroup of finite index in $\pi_{1}(T)$.
Finally, we show that $U$ is commensurable with $\pi_{1}(T)$. Consider $U\cap
N_{i_{0}-1}$; this group is a subgroup of $\mathbb{Z}^{2}$, therefore it is
either trivial, $\mathbb{Z}$ or $\mathbb{Z}^{2}$. Since $U\cap N_{i_{0}-1}$
contains the non-trivial $N\neq\mathbb{Z}$, we must have $U\cap
N_{i_{0}-1}\cong\mathbb{Z}^{2}$. Because the finite cover $M_{N_{i_{0}}}$ of
$M$ fibers over the circle, we have $|G:\pi_{1}(T)\rtimes\mathbb{Z}|<\infty$.
If $U\cap N_{i_{0}-1}$ were not of finite index in $U$, then $U$ would
obviously be of finite index in $G$, which contradicts the assumptions on $U$.
Therefore, we conclude that $U$ is commensurable with $\pi_{1}(T)$, as
desired.
The last step towards proving our main theorem is a restatement of Theorem 2.9
in [8]. While the proof in [8] only treats the case of $N$ being a normal
subgroup of $\pi_{1}(M)$, it is obvious that it applies verbatim to the case
of $N\triangleleft_{s}\pi_{1}(M)$.
###### Theorem 24.
(Moon, Theorem 2.9 [8], 2005) Let $M$ be a compact 3-manifold with
$M=X_{1}\cup_{\mathcal{T}}X_{2}$ or $M=X_{1}\cup_{\mathcal{T}}$. Suppose that
$X_{i}$ satisfies the following condition for $i=1,2$:
1. (1)
if $\pi_{1}(X_{i})$ contains a finitely generated subgroup $U_{i}$ with
$|\pi_{1}(X_{i}):U_{i}|=\infty$ such that $U_{i}$ contains a nontrivial
subnormal subgroup $\mathbb{Z}\neq N_{i}\triangleleft_{s}\pi_{1}(X_{i})$, then
a finite cover of $X_{i}$ fibers over $\mathbb{S}$ with fiber a compact
surface $F_{i}$ and $\pi_{1}(F_{i})$ is commensurable with $U_{i}$.
Suppose, further, that $G$ contains a finitely generated subgroup $U$ of
infinite index in $G$ which contains a nontrivial subnormal subgroup $N$ of
$G$, and that $N$ intersects nontrivially the fundamental group of the
splitting torus and $N\cap\pi_{1}(X_{i})\neq\mathbb{Z}$. If the graph of
groups $\mathcal{U}$ corresponding to $U$ is of finite diameter, then $M$ has
a finite cover $\widetilde{M}$ which is a bundle over $\mathbb{S}$ with fiber
a compact surface $F$, and $\pi_{1}(F)$ is commensurable with $U$.
For the sake of brevity, let us introduce the following terminology: We shall
say that a compact manifold $M$ has property (A) if whenever $\pi_{1}(M)$
contains a finitely generated subgroup $U$ of infinite index, and a nontrivial
subnormal subgroup $\mathbb{Z}\neq N\triangleleft_{s}G$ which has a subnormal
series in which all but one of the terms are finitely generated, and such that
$N<U$, then $M$ fibers over $\mathbb{S}$ with fiber a compact surface $F$ such
that $\pi_{1}(F)$ is commensurable with $U$.
Combining Theorem 23 and Theorem 24, we obtain
###### Theorem 25.
Let $M$ be a compact 3-manifold with $M=X_{1}\cup_{\mathcal{T}}X_{2}$ or
$M=X_{1}\cup_{\mathcal{T}}$, and suppose that the following conditions are
satisfied:
1. (1)
each $X_{i}$, for $i=1,2$, has property (A),
2. (2)
$G=\pi_{1}(M)$ contains a finitely generated subgroup $U$ of infinite index,
which contains a nontrivial subnormal subgroup $N$ of $G$,
3. (3)
$N$ has a subnormal series in which all the terms, except for $N$, are
finitely generated,
4. (4)
$N$ intersects nontrivially the fundamental group of the splitting torus
$\mathcal{T}$,
5. (5)
$N\cap\pi_{1}(X_{i})\neq\mathbb{Z}$, for $i=1,2$.
Then, $M$ has a finite cover which is a bundle over $\mathbb{S}$ with fiber a
compact surface $F$, and $\pi_{1}(F)$ is commensurable with $U$.
## 5\. The main theorem
I am now ready to prove my main result. Recall the Geometrization Theorem
proved by Perelman in 2003:
###### Theorem 26.
(Perelman, Geometrization Theorem [9], [10], [11], 2003) Let $M$ be an
irreducible compact 3-manifold with empty or toroidal boundary. Then there
exists a collection of disjointly embedded incompressible tori
$\mathcal{T}_{1},...,\mathcal{T}_{k}$ such that each component of $M$ cut
along $\mathcal{T}_{1}\cup...\cup\mathcal{T}_{k}$ is geometric. Furthermore,
any such collection with a minimal number of components is unique up to
isotopy.
To improve the exposition of the proof of the main theorem below, we define
property $(A^{\prime})$: Let $M$ be a compact 3-manifold with empty or
toroidal boundary which has a decomposition into geometric pieces
$\mathfrak{D}=\left(M_{1},...,M_{m};\mathcal{T}_{1},...,\mathcal{T}_{p}\right)$,
where each $M_{i}$ is a compact geometric submanifold of $M$ with toroidal or
empty boundary and each $\mathcal{T}_{i}$ is an incompressible torus. We shall
say that $\left(M,\mathfrak{D}\right)$ has property $(A^{\prime})$ if whenever
$\pi_{1}(M)$ contains a finitely generated subgroup $U$ with
$|\pi_{1}(M):U|=\infty$ and a nontrivial subnormal subgroup $\mathbb{Z}\neq
N\triangleleft_{s}\pi_{1}(M)$ which has a subnormal series in which all but
one of the terms are finitely generated, such that $N<U$, such that $N$
intersects nontrivially the fundamental groups of the splitting tori
$\mathcal{T}_{i}$ in $\mathfrak{D}$, and such that
$N\cap\pi_{1}(M_{i})\neq\mathbb{Z}$ for all $M_{i}\in\mathfrak{D}$, then $M$
fibers over $\mathbb{S}$ with fiber a compact surface $F$ such that
$\pi_{1}(F)$ is commensurable with $U$.
Using this notation, Theorem 23 and the proof of Theorem 24 in [8] together
yield:
###### Proposition 27.
Let $\left(X_{1},\mathfrak{D}_{1}\right)$ and
$\left(X_{2},\mathfrak{D}_{2}\right)$ each be a compact manifold along with a
decomposition into geometric pieces along incompressible tori. Suppose that
$\left(X_{1},\mathfrak{D}_{1}\right)$ and
$\left(X_{2},\mathfrak{D}_{2}\right)$ have property $(A^{\prime})$. If
$M=X_{1}\cup_{\mathcal{T}}X_{2}$ or $M=X_{1}\cup_{\mathcal{T}}$, where
$\mathcal{T}$ is an incompressible torus disjoint from the tori in
$\mathfrak{D}_{1}$ and $\mathfrak{D}_{2}$, and if the tori and geometric
pieces from $\mathfrak{D}_{1}$ and $\mathfrak{D}_{2}$ along with $\mathcal{T}$
together give a geometric decomposition $\mathfrak{D}$ for $M$, then
$\left(M,\mathfrak{D}\right)$ has property $(A^{\prime})$.
###### Theorem 28.
Let $M$ be a compact 3-manifold with empty or toroidal boundary. If
$G=\pi_{1}(M)$ contains a finitely generated subgroup $U$ of infinite index in
$G$ which contains a nontrivial subnormal subgroup $N$ of $G$, then: (a) $M$
is irreducible, (b) if further:
1. (1)
$N$ has a subnormal series of length $n$ in which $n-1$ terms are finitely
generated,
2. (2)
$N$ intersects nontrivially the fundamental groups of the splitting tori of
some decomposition $\mathfrak{D}$ of $M$ into geometric pieces, and
3. (3)
the intersections of $N$ with the fundamental groups of the geometric pieces
are not isomorphic to $\mathbb{Z}$,
then, $M$ has a finite cover which is a bundle over $\mathbb{S}$ with fiber a
compact surface $F$ such that $\pi_{1}(F)$ and $U$ are commensurable.
###### Proof 5.1.
First we prove (a): By Theorem 1 in [7] $M\cong M_{1}\sharp
M_{2}\sharp...\sharp M_{p}$, where each $M_{i}$ is a prime manifold. Then, we
have $G=G_{1}*G_{2}*...*G_{p}$, where $G_{i}=\pi_{1}(M_{i})$. By Theorem 1.5
in [2], we must have $G_{i}=\left\{1\right\}$ for $i\geq 2$, after possibly
reindexing the terms. Therefore, the Poincaré Conjecture implies that
$M_{i}\cong\mathbb{S}^{3}$ for all $i\geq 2$, and we conclude that $M$ is a
prime manifold. Therefore $M$ must be irreducible, for if it were not, then
$M\cong\mathbb{S}^{2}\times\mathbb{S}$, hence $G=\mathbb{Z}$ and
$U=\left\{1\right\}$, contradicting the hypothesis of the theorem. To prove
(b), we make an inductive argument to prove that every connected submanifold
$M^{\prime}\subseteq M$ which is a union of $X_{i}\in\mathfrak{D}$ has
property $(A^{\prime})$ with respect to its decomposition
$\mathfrak{D}^{\prime}$ into geometric pieces inherited from $\mathfrak{D}$.
We proceed by induction on the number $m$ of geometric pieces $X_{i}$ in
$M^{\prime}$. It is clearly true that if $M^{\prime}=X_{i}$ for some $i$, then
$M^{\prime}$ has property $(A^{\prime})$ by Theorem 17. Suppose that all
submanifolds which are a union of at most $m-1$ geometric pieces from
$\mathfrak{D}$ have property $(A^{\prime})$ with respect to their geometric
decompositions along incompressible tori from $\mathfrak{D}$, and suppose that
$M^{\prime}$ is a union of $m$ geometric pieces from $\mathfrak{D}$ so that
$\mathfrak{D}^{\prime}=\left(X_{i_{1}},...,X_{i_{m}};\mathcal{T}_{i_{1}},...,\mathcal{T}_{i_{s}}\right)$.
If we cut $M^{\prime}$ along all tori in $\mathfrak{D}^{\prime}$ which are in
$X_{i_{m}}$, $M^{\prime}$ will be decomposed into connected submanifolds
$N_{1},...,N_{w},X_{i_{m}}$, in such a way that each of the $N_{i}$ has a
geometric decomposition $\mathfrak{D}_{i}$ which consists of at most $m-1$
pieces from $\mathfrak{D}$, and tori also from $\mathfrak{D}$. Thus, it
follows that each of the submanifolds $N_{1},...,N_{w},X_{i_{m}}$ with its
geometric decomposition inherited from $\mathfrak{D}$ has property
$(A^{\prime})$. Since $M^{\prime}$ can be obtained from the pieces
$N_{1},...,N_{w},X_{i_{m}}$ by performing a finite number of torus sums of the
form $M_{1}\cup_{\mathcal{T}}M_{2}$ or $M_{1}\cup_{\mathcal{T}}$ for
$M_{i}\in\left\{N_{1},...,N_{w},X_{i_{m}}\right\}$ and
$\mathcal{T}\in\mathfrak{D}$, $M^{\prime}$ also has property $(A^{\prime})$
with respect to its geometric decomposition along tori from $\mathfrak{D}$ by
Proposition 27. By induction, $M$ also has property $(A^{\prime})$ hence it
fibers in the required way.
## 6\. Acknowledgments
I wish to thank Prof. Peter Scott, who was my Ph.D. thesis adviser at The
University of Michigan in Ann Arbor, for introducing me to the problem of
the fibering of compact 3-manifolds over the circle and for the many years of
patient discussions and invaluable advice. Notably, I am indebted to Prof.
Scott for sketching an idea for the proof of Proposition 20 during one of our
many e-mail discussions. I would also like to thank the Department of
Mathematics at The University of Michigan for giving me the opportunity to
work with Prof. Peter Scott in the capacity of a Visiting Scholar and for
granting me remote access to the university’s library resources in the period
of 2012 to 2014. Finally, I would like to thank the anonymous referee for the
many helpful suggestions, which have greatly contributed to the readability of
this research article.
## References
* [1] M. Bridson, A. Haefliger. Metric spaces of non-positive curvature. Springer Verlag, 1999.
* [2] H. Elkalla. Subnormal subgroups in 3-manifold groups. J. London Math. Soc., 2(30), no. 2:342–360, 1984.
* [3] H. Griffiths. The fundamental group of a surface, and a theorem of Schreier. Acta mathematica, 110:1–17, 1963.
* [4] H. Griffiths. A covering-space approach to theorems of Greenberg in Fuchsian, Kleinian and other groups. Communications on Pure and Applied Mathematics, 20:365–399, 1967.
* [5] H. Griffiths. Correction to “A covering-space approach to theorems of Greenberg in Fuchsian, Kleinian and other groups”. Communications on Pure and Applied Mathematics, 21:521–522, 1968.
* [6] J. Hempel, W. Jaco. Fundamental Groups of 3-Manifolds which are Extensions. The Annals of Mathematics, Second Series, 95(1):86–98, 1972.
* [7] J. Milnor. A unique decomposition theorem for 3-manifolds. American Journal of Mathematics, 84(1):1–7, 1962.
* [8] M. Moon. A generalization of a theorem of Griffiths to 3-manifolds. Topology and its Applications, 149:17–32, 2005.
* [9] G. Perelman. The entropy formula for the Ricci flow and its geometric applications. arXiv pre-print, 1–39, 2002.
* [10] G. Perelman. Ricci flow with surgery on three-manifolds. arXiv pre-print, 1–22, 2003.
* [11] G. Perelman. Finite extinction time for the solutions to the Ricci flow on certain three-manifolds. arXiv pre-print, 1–7, 2003.
* [12] G. P. Scott. The geometries of 3-manifolds. Bull. London Math. Soc., 15:401–487, 1983.
* [13] G. P. Scott. Finitely generated 3-manifold groups are finitely presented. J. London Math. Soc., 6(2):437–440, 1973.
* [14] G. P. Scott, C. T. C. Wall. Topological methods in group theory. Homological group theory (Proc. Sympos., Durham), 137–203, 1977.
* [15] J. Stallings. On fibering certain 3-manifolds. Topology of 3-manifolds, and Related Topics; Proc. of the University of Georgia Institute, GA, 95–100, 1961.
arXiv:2101.01120
# Casimir light in dispersive nanophotonics
Jamison Sloan1, Nicholas Rivera2, John D. Joannopoulos2, and Marin Soljačić2
1 Department of Electrical Engineering and Computer Science, Massachusetts
Institute of Technology, Cambridge, MA 02139, United States
2 Department of Physics, Massachusetts Institute of Technology, Cambridge, MA
02139, United States
###### Abstract
Time-varying optical media, whose dielectric properties are actively modulated
in time, introduce a host of novel effects in the classical propagation of
light, and are of intense current interest. In the quantum domain,
time-dependent media can be used to convert vacuum fluctuations (virtual photons)
into pairs of real photons. We refer to these processes broadly as “dynamical
vacuum effects” (DVEs). Despite interest for their potential applications as
sources of quantum light, DVEs are generally very weak, providing many
opportunities for enhancement through modern techniques in nanophotonics, for
example by using media which support excitations such as plasmon and phonon
polaritons. Here, we present a theory of DVEs in arbitrary nanostructured,
dispersive, and dissipative systems. A key element of our framework is the
simultaneous incorporation of time-modulation and “dispersion” through
time-translation-breaking linear response theory. We propose a highly efficient
scheme for generating entangled surface polaritons based on time-modulation of
the optical phonon frequency of a polar insulator. We show that the high
density of states, especially in hyperbolic polaritonic media, may enable
high-efficiency generation of entangled phonon-polariton pairs. More broadly,
our theoretical framework enables the study of quantum light-matter
interactions in time-varying media, such as spontaneous emission, and energy
level shifts.
The nonvanishing zero-point energy of quantum electrodynamics leads to a
variety of observable consequences such as atomic energy shifts Bethe (1947),
spontaneous emission Purcell (1946); Gérard and Gayral (1999), forces
Lamoreaux (1997), and non-contact friction Kardar and Golestanian (1999);
Pendry (1997). Perhaps the most famously cited consequence of vacuum
fluctuations is the Casimir effect Mohideen and Roy (1998); Klimchitskaya et
al. (2009); Bordag et al. (2001); Plunien et al. (1986), which predicts that
two uncharged conducting plates, when placed close together, experience mutual
attraction (or repulsion, in some cases Munday et al. (2009); Kenneth et al.
(2002); Zhao et al. (2009)) due to the fluctuating electromagnetic fields
between the plates. The character of any fluctuation-based phenomenon is
determined by the electromagnetic modes which exist around the structure of
interest. As a result, the last two decades have provided promising insights
about how nanostructured composites of existing and emerging optical materials
can be used to modify observable effects of zero-point fluctuations.
In time-varying systems, electromagnetic vacuum fluctuations can lead to the
production of real photons. Famously, the “dynamical Casimir effect” predicts
how a cavity with rapidly oscillating boundaries produces entangled photon
pairs Moore (1970). Other related phenomena include photon emission from
rotating bodies Maghrebi et al. (2012), spontaneous parametric down-conversion
in nonlinear materials Boyd (2019), the Unruh effect for relativistically
accelerating bodies Yablonovitch (1989); Crispino et al. (2008); Fulling and
Davies (1976); Unruh and Wald (1984), Hawking radiation from black holes
Hawking (1975); Unruh (1976), and even particle production in the early
universe Shtanov et al. (1995). The close connections among these phenomena
are discussed in Nation et al. (2012). These “dynamical vacuum effects” (DVEs)
have been studied in depth since the 1960s for their relation to fundamental
questions about the quantum vacuum, and for their potential applications as
quantum light sources Glauber and Lewenstein (1991); Walls and Milburn (2007);
Scully and Zubairy (1999). Specifically, these processes are known to produce
squeezed light (which is entangled if more than one mode is involved) Loudon
and Knight (1987); Breitenbach et al. (1997) which enjoys applications in
quantum information Ralph and Lam (1998), spectroscopy Polzik et al. (1992),
and enhancing phase sensitivity at LIGO Aasi et al. (2013). Despite high
interest, these DVEs are very weak, with the first direct observation of the
dynamical Casimir effect occurring as recently as 2011 Wilson et al. (2011).
The strength of these effects can, in theory, be enhanced by nanostructured
optical composites, and polaritonic materials with strong resonances, as has
been seen with other fluctuation-based phenomena Purcell (1946); Rodriguez et
al. (2011); Volokitin and Persson (2007). However, considering DVEs in such
materials is complicated by the subtleties associated with describing the
optical properties of materials which are simultaneously dispersive and
time-dependent. Beyond this fundamental issue, there is not yet a general framework
which describes these emission effects in arbitrary nanostructured materials
Dodonov (2010). Such a framework is of paramount importance if modern material
and nanofabrication platforms are to be used to optimize these effects to make
them practical for potential applications in quantum information,
spectroscopy, imaging, and sensing.
In this Letter, we present a theoretical framework, based on macroscopic
quantum electrodynamics (MQED), for describing DVEs in arbitrary
nanostructured, dispersive, and dissipative time-dependent systems. We apply
our theory to describe two-photon emission processes from time-varying media.
As an example, we show that phonon-polariton pairs can be generated on thin
films of polar insulators (e.g., silicon carbide and hexagonal boron nitride),
whose transverse optical (TO) phonon frequency is rapidly modulated in time.
We find that the high density of states of surface phonon-polariton modes, in
conjunction with dispersive resonances, leads to phonon-polariton pair
generation efficiencies which are orders of magnitude higher than traditional
parametric down conversion. Our results are particularly relevant in the
context of recent experiments, which have observed parametric amplification of
optical phonons in the presence of a strong driving field, which effectively
causes the TO phonon frequency to vary in time Cartella et al. (2018).
Figure 1: Photon pair emission from arbitrary time-dependent dielectric media.
(a) A dispersive dielectric $\varepsilon(\mathbf{r},\omega)$ subject to an
arbitrary time modulation can be described as having the more general
dielectric function $\varepsilon(\mathbf{r},\omega,\omega^{\prime})$ which
encodes both time dependence and dispersion. (b) A schematic of a thin film of
polar insulator with a small top layer that undergoes a time modulation.
As a result, surface phonon-polariton pairs are produced with frequencies
$\omega,\omega^{\prime}$, and wavevectors $q,q^{\prime}$.
There are inherent subtleties associated with describing the optical response
of time-modulated dielectrics which are already dispersive. When the
modulation frequencies are far from any transition frequencies of the system,
one can consider an “adiabatic” description of the time-dependent
material. In this case, the permittivity can be taken as
$\varepsilon(\omega;t)$, or simply $\varepsilon(t)$, as is done in many
theoretical and experimental studies Law (1994); Lustig et al. (2018);
Zurita-Sánchez et al. (2009); Chu and Tamir (1972); Harfoush and Taflove (1991);
Fante (1971); Holberg and Kunz (1966). In cases where the adiabatic
approximation breaks down (e.g. in dispersive systems with similar modulation
and transition frequencies), we must revert to the most general dielectric
function allowed by linear response theory. In the absence of time-translation
invariance, the polarization $\mathbf{P}(t)$ is connected to the applied field
$\mathbf{E}(t)$ through a susceptibility with two time arguments,
$\mathbf{P}(t)=\varepsilon_{0}\int_{-\infty}^{\infty}dt^{\prime}\,\chi(t,t^{\prime})\mathbf{E}(t^{\prime})$.
Consequently, the frequency response must be characterized by a two-frequency
susceptibility
$\chi(\omega,\omega^{\prime})\equiv\int_{-\infty}^{\infty}dt\,dt^{\prime}\,\chi(t,t^{\prime})e^{i\omega
t}e^{-i\omega^{\prime}t^{\prime}}.$ The two Fourier transforms are defined
with opposing sign conventions so that for a time-independent material,
$\chi(\omega,\omega^{\prime})=2\pi\delta(\omega-\omega^{\prime})\chi(\omega)$.
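This reduction can be checked directly: for a stationary medium, $\chi(t,t^{\prime})=\chi(t-t^{\prime})$, and the substitution $\tau=t-t^{\prime}$ gives $\chi(\omega,\omega^{\prime})=\int dt\,d\tau\,\chi(\tau)\,e^{i(\omega-\omega^{\prime})t}\,e^{i\omega^{\prime}\tau}=2\pi\delta(\omega-\omega^{\prime})\chi(\omega^{\prime})$, where $\chi(\omega^{\prime})$ may be replaced by $\chi(\omega)$ on the support of the delta function.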
The corresponding permittivity is defined by
$\varepsilon(\omega,\omega^{\prime})=2\pi\delta(\omega-\omega^{\prime})+\chi(\omega,\omega^{\prime})$,
as illustrated in Fig. 1a. In this case, the displacement field $\mathbf{D}$
is connected to the electric field $\mathbf{E}$ as
$\mathbf{D}(\omega)=\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{2\pi}\varepsilon(\omega,\omega^{\prime})\mathbf{E}(\omega^{\prime}).$
(1)
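As a numerical sanity check of this formalism (our own sketch, not from the paper; the memory kernel, the modulation, and all parameter values below are arbitrary assumptions), one can discretize the double Fourier transform for a causal response $\chi_{0}(\tau)$ whose strength is modulated in time and verify that $\chi(\omega,\omega^{\prime})$ concentrates on the diagonal $\omega=\omega^{\prime}$, together with modulation sidebands at $\omega-\omega^{\prime}=\pm\Omega$:

```python
import numpy as np

# Illustrative model: chi(t, t') = chi0(t - t') * (1 + delta*cos(Omega*t)),
# with a causal exponential kernel chi0. All values here are arbitrary.
gamma, delta, Omega = 1.0, 0.3, 4.0

dt = 0.05
t = np.arange(0.0, 40.0, dt)                    # absolute times t and t'
T, Tp = np.meshgrid(t, t, indexing="ij")
chi0 = np.where(T - Tp >= 0, np.exp(-gamma * (T - Tp)), 0.0)  # causal kernel
chi_tt = chi0 * (1.0 + delta * np.cos(Omega * T))             # modulated response

w = np.linspace(-8.0, 8.0, 161)                 # uniform frequency grid
dw = w[1] - w[0]
# chi(w, w') = ∫ dt dt' chi(t, t') e^{+i w t} e^{-i w' t'}
E_t = np.exp(1j * np.outer(w, t))               # e^{+i w t}
E_tp = np.exp(-1j * np.outer(t, w))             # e^{-i w' t'}
chi_ww = (E_t @ chi_tt @ E_tp) * dt**2

# Total weight at each offset w - w' (constant along matrix diagonals):
offsets = np.arange(-(len(w) - 1), len(w))
weight = np.array([np.abs(np.diagonal(chi_ww, k)).sum() for k in offsets])

main = offsets[np.argmax(weight)]               # dominant offset: w = w'
far = np.abs(offsets * dw) > 2.0                # look away from the diagonal
side = offsets[far][np.argmax(weight[far])]     # strongest sideband
print(main * dw, abs(side * dw))                # ~0 and ~Omega
```

With these conventions, an unmodulated medium ($\delta=0$) would leave only the diagonal; the modulation transfers weight of order $\delta/2$ into each sideband, which is the discrete analogue of the statement in the text.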
We now use this apparatus to parameterize the types of time-modulated
materials we consider in our theory of DVEs. Consider a photonic structure (of
arbitrary geometry and material composition), with a local dispersive
dielectric function $\varepsilon_{\text{bg}}(\mathbf{r},\omega)$. Then we
impart some spatiotemporal change to the susceptibility
$\Delta\chi(\mathbf{r},\omega,\omega^{\prime})$, so that the total
permittivity is
$\varepsilon(\mathbf{r},\omega,\omega^{\prime})=\varepsilon_{\text{bg}}(\mathbf{r},\omega)[2\pi\delta(\omega-\omega^{\prime})]+\Delta\chi(\mathbf{r},\omega,\omega^{\prime}).$
(2)
Our theory of DVEs in systems described by the general form of Eq. 2 is based
on a Hamiltonian description of the electromagnetic field subject to interactions
in general time-varying media. We use macroscopic quantum electrodynamics
(MQED) Scheel and Buhmann (2008); Rivera and Kaminer (2020) to quantize the
electromagnetic field in the background structure
$\varepsilon_{\text{bg}}(\mathbf{r},\omega)$. In this framework, the
Hamiltonian of the bare electromagnetic field is
$H_{\text{EM}}=\int_{0}^{\infty}d\omega\int
d^{3}r\,\hbar\omega\,\mathbf{f}^{\dagger}(\mathbf{r},\omega)\cdot\mathbf{f}(\mathbf{r},\omega),$
(3)
where $\mathbf{f}^{(\dagger)}(\mathbf{r},\omega)$ is the annihilation
(creation) operator for a quantum harmonic oscillator at position $\mathbf{r}$
and frequency $\omega$. In such a medium, the electric field operator in the
interaction picture is given as
$\begin{split}\mathbf{E}(\mathbf{r},t)=i\sqrt{\frac{\hbar}{\pi\varepsilon_{0}}}\int_{0}^{\infty}&d\omega\,\frac{\omega^{2}}{c^{2}}\int d^{3}r^{\prime}\,\sqrt{\operatorname{Im}\varepsilon_{\text{bg}}(\mathbf{r}^{\prime},\omega)}\\ &\times\left(\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime},\omega)\mathbf{f}(\mathbf{r}^{\prime},\omega)e^{-i\omega t}-\text{h.c.}\right).\end{split}$ (4)
Here, $\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime},\omega)$ is the
electromagnetic Green’s function of the background which satisfies
$\left(\nabla\times\nabla\times-\varepsilon_{\text{bg}}(\mathbf{r},\omega)\frac{\omega^{2}}{c^{2}}\right)\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime},\omega)=\delta(\mathbf{r}-\mathbf{r}^{\prime})I$,
where $I$ is the $3\times 3$ identity matrix. We assume that the permittivity
change described by Eq. 2 creates a change to the polarization density
$\mathbf{P}(\mathbf{r},t)$, interacting with the electric field via
$V(t)=-\int d^{3}r\,\mathbf{P}(\mathbf{r},t)\cdot\mathbf{E}(\mathbf{r},t)$
Boyd (2019). Then by relating the polarization to the electric field through
linear response, we find the interaction Hamiltonian
$V(t)=-\varepsilon_{0}\int
d^{3}r\,dt^{\prime}\,\Delta\chi_{ij}(\mathbf{r},t,t^{\prime})E_{j}(\mathbf{r},t^{\prime})E_{i}(\mathbf{r},t),$
(5)
where we have used repeated index notation. If we work in the regime where
$\Delta\chi$ is small, then the electric field operator is well-approximated
by that of the unperturbed field, given in Eq. 4. To compute rates of two-
photon emission, we consider scattering matrix elements that connect the
electromagnetic vacuum state to final states which contain two photons. Taking
the S-matrix elements to first order in perturbation theory (see S.I.), the
probability of two-photon emission is given by
$\begin{split}P=\frac{1}{2\pi^{2}c^{4}}&\int_{0}^{\infty}d\omega\,d\omega^{\prime}\,(\omega\omega^{\prime})^{2}\int d^{3}r\,d^{3}r^{\prime}\\ &\times\operatorname{Tr}\left[\Delta\chi(\mathbf{r},\omega,-\omega^{\prime})\operatorname{Im}G(\mathbf{r},\mathbf{r}^{\prime},\omega^{\prime})\,\Delta\chi^{\dagger}(\mathbf{r}^{\prime},\omega,-\omega^{\prime})\operatorname{Im}G(\mathbf{r}^{\prime},\mathbf{r},\omega)\right],\end{split}$ (6)
where $\Delta\chi^{\dagger}$ is the matrix conjugate transpose of the tensor
$\Delta\chi$. The two instances of $\operatorname{Im}G$ indicate that there
are two quanta emitted — one at frequency $\omega$, and the other at
$\omega^{\prime}$. The Green’s function encodes everything about the
structure, dispersion, and dissipation of the background structure, and its
imaginary part is closely related to the local density of states, suggesting
that emission probabilities can be increased when more modes are available,
much as in the well-known case of single-photon Purcell enhancement. Similar
results have also been seen with two-photon emission from atoms Rivera et al.
(2017). Meanwhile, the tensor $\Delta\chi$ encodes everything about the
imposed time dependence of the material. This separation makes the computation
of emission rates in photonic nanostructures highly modular, and may provide
future opportunities for numerical implementations in cases where analytical
results are not feasible.
We now show how our theoretical framework accounts for dispersion and loss in
time-modulated thin films which generate pairs of entangled surface
polaritons. Surface polaritons have enjoyed a myriad of applications due to
their ability to maintain high confinement, and relatively low loss Chen et
al. (2012); Dai et al. (2014, 2015); Basov et al. (2016). Specifically, we
examine surface phonon-polaritons (SPhPs) on thin films of the polar
insulators silicon carbide (SiC) and hexagonal boron nitride (hBN).
Controllable sources of entangled surface polaritons are notably lacking, and
could prove important for applications in quantum information and imaging. In
the infrared, the dielectric response of polar insulators is well-described by
the resonance of transverse optical (TO) phonon modes. The permittivity in
this frequency range is given by the Lorentz oscillator
$\varepsilon_{\text{bg}}(\omega)=\varepsilon_{\infty}+\omega_{p}^{2}/(\omega_{0}^{2}-\omega^{2}-i\omega\Gamma),$
where $\varepsilon_{\infty}$ is the permittivity at high frequencies,
$\omega_{0}$ is the TO phonon frequency, $\omega_{p}$ is the plasma frequency,
and $\Gamma$ is the damping rate. Phonon-polaritons are supported above the
resonance at $\omega_{0}$, where
$\text{Re}\,\varepsilon_{\text{bg}}(\omega)<-1$, which is referred to as the
Reststrahlen band, or “RS band” (Fig. 2a).
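As a numerical illustration, the RS band can be located directly from the Lorentz model. In the sketch below, $\omega_{0}$ is the SiC value quoted in Fig. 2; $\varepsilon_{\infty}$, $\omega_{p}$, and $\Gamma$ are typical literature values for SiC that we assume only for illustration:

```python
import numpy as np

# Locate the Reststrahlen band of SiC from the Lorentz-oscillator model.
# omega_0 matches Fig. 2a; eps_inf, omega_p, Gamma are typical literature
# values assumed here for illustration.
eps_inf = 6.56
omega_0 = 1.49e14      # TO phonon frequency (rad/s)
omega_p = 2.68e14      # plasma frequency (rad/s), assumed
gamma = 8.9e11         # phonon damping rate (1/s), assumed

def eps_bg(w):
    return eps_inf + omega_p**2 / (omega_0**2 - w**2 - 1j * w * gamma)

w = np.linspace(1.0e14, 2.2e14, 4000)
rs = w[eps_bg(w).real < -1.0]          # frequencies with Re(eps) < -1
print(f"RS band: {rs.min():.3e} to {rs.max():.3e} rad/s")
```

With these parameters the band opens just above $\omega_{0}$ and closes near the LO phonon frequency, consistent with Fig. 2a.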
Figure 2: Dynamical Casimir effect for silicon carbide phonon-polaritons. (a)
Dispersion relation of phonon-polaritons on $d_{\text{slab}}=100$ nm thick
slab of SiC. Dotted lines mark the edges of the RS band. Inset shows the
Lorentz oscillator permittivity around $\omega_{0}=1.49\times 10^{14}$ rad/s.
(b) Schematic representation of nondispersive time modulations of the
permittivity, versus dispersive modulations of the transverse optical phonon
frequency $\omega_{0}$. (c-f) Differential rate per unit area
$(1/A)d\Gamma/d\omega$ for phonon-polariton pair production for various
values of $\Omega_{0}/\omega_{0}=\{2.01,2.1,2.3,2.4\}$, as well as short
($T=80$ fs) and long ($T=5$ ns) pulses. The modulated region is assumed to be
$d=10$ nm thick. Panels (c, d) show a nondispersive modulation with
$\delta\varepsilon=10^{-3}$. Panels (e, f) show a dispersive modulation with
$\delta\omega=10^{-3}$.
To highlight the interplay between dispersion and time dependence in two-
polariton spontaneous emission, we compare two different modulations of the
polar insulator structures (Fig. 2b). The first is a nondispersive modulation,
where a layer of thickness $d$ has its index perturbed by a constant amount as
$\varepsilon(t)=\varepsilon_{\text{bg}}(1+\delta\varepsilon\,f(t))$. In this
case, we have
$\Delta\chi(\omega,\omega^{\prime})=\delta\varepsilon\,f(\omega-\omega^{\prime})$,
where $f(\omega)$ is the Fourier transform of the modulation profile. If the
change in index is caused by a nonlinear layer with $\chi^{(2)}=100$ pm/V,
then an electric field strength of $10^{7}$ V/m gives
$\delta\varepsilon=10^{-3}$. The second is a dispersive modulation, where over
a thickness $d$, the transverse optical phonon frequency $\omega_{0}$ is
modulated to deviate from its usual value as a function of time as
$\omega^{2}(t)=\omega_{0}^{2}(1+\delta\omega\,f(t))$. In this case,
$\Delta\chi(\omega,\omega^{\prime})=\delta\omega\,\omega_{0}^{2}\omega_{p}^{2}f(\omega-\omega^{\prime})/(Q(\omega)Q(\omega^{\prime}))$
to first order in $\delta\omega$, where
$Q(\omega)\equiv\omega_{0}^{2}-\omega^{2}-i\omega\Gamma$ (see S.I.). From the
experimental models presented in Cartella et al. (2018) for SiC, we estimate
that an applied field strength of 1 GV/m gives rise to a frequency shift of
the order $\delta\omega=10^{-3}$. We will compare the two modulation types
with the same fractional change in parameter
$\delta\varepsilon=\delta\omega=10^{-3}$ to highlight that around
$\omega_{0}$, a fractional change $\delta\omega$ causes much stronger effects
than $\delta\varepsilon$. Later, we comment on efficiencies given the same
applied field strength.
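Since the pulse spectrum $f(\omega-\omega^{\prime})$ is common to both modulation types, the relative strength at equal fractional change reduces to the resonant factor $\omega_{0}^{2}\omega_{p}^{2}/|Q(\omega)Q(\omega^{\prime})|$. A quick numerical check, using SiC-like parameter values that are our own assumptions:

```python
import numpy as np

# Ratio |Delta chi_dispersive| / |Delta chi_nondispersive| at equal fractional
# modulation delta_omega = delta_eps; the common pulse factor f(w - w')
# cancels, leaving omega_0^2 * omega_p^2 / |Q(w) Q(w')|.
# SiC-like parameters (assumed for illustration).
omega_0, omega_p, gamma = 1.49e14, 2.68e14, 8.9e11

def Q(w):
    return omega_0**2 - w**2 - 1j * w * gamma

def enhancement(w, wp):
    return omega_0**2 * omega_p**2 / np.abs(Q(w) * Q(wp))

print(enhancement(omega_0, omega_0))              # ~1e5 at exact resonance
print(enhancement(1.2 * omega_0, 1.2 * omega_0))  # far smaller off resonance
```

At degenerate emission right at $\omega_{0}$, where $|Q(\omega_{0})|=\omega_{0}\Gamma$, the dispersive modulation wins by several orders of magnitude; away from the phonon resonance the advantage largely disappears.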
Figure 3: Achieving strong DVEs through dispersive modulations. (a)
Dispersion of surface phonon-polaritons on a thin layer of hBN
($d_{\text{slab}}=100$ nm) in the upper RS band ($\omega_{0}=2.56\times
10^{14}$ rad/s). (b, c) Differential rate per unit area $(1/A)d\Gamma/d\omega$
for phonon-polariton pair production for various values of
$\Omega_{0}/\omega_{0}=\{2,2.05,2.1,2.15\}$, as well as short ($T=80$ fs)
and long ($T=5$ ns) pulses. (d) Total emission rate per area of phonon-
polariton pairs as a function of pulse duration $T$ and frequency
$\Omega_{0}$. (e-g) Same as (b-d), except that the modulation is dispersive.
Panel (g) shows the strong enhancement which occurs for monochromatic
modulations when $\Omega_{0}/\omega_{0}=2$, corresponding to enhancement of
DVEs by dispersive parametric amplification.
We modulate the surface layer with perturbations of the form
$f(t)=\cos(\Omega_{0}t)e^{-t^{2}/2T^{2}}$. This enables us to consider
modulations across many timescales, from ultrashort pulses, to nearly
monochromatic (CW) modulations. Applying our formalism to the geometry
depicted in Fig. 1b, we find that the probability of two-polariton emission
per unit frequency $\omega$ and $\omega^{\prime}$ is given as
$\begin{split}\frac{1}{A}\frac{dP}{d\omega\,d\omega^{\prime}}=\frac{|\Delta\chi(\omega,-\omega^{\prime})|^{2}}{16\pi^{3}}&\int_{0}^{\infty}dq\,q\left(1-e^{-2qd}\right)^{2}\\ &\times\operatorname{Im}r_{p}(\omega,q)\operatorname{Im}r_{p}(\omega^{\prime},q).\end{split}$ (7)
Here, $r_{p}(\omega,q)$ is the p-polarized reflectivity associated with the
interface, and $A$ is the sample area. This equation encodes the frequency
correlations between the two emitted quanta $\omega$ and $\omega^{\prime}$.
Once $\Delta\chi$ is chosen, Eq. 7 can be integrated over $\omega^{\prime}$
and normalized by the pulse duration $T$ to obtain an area-normalized rate per
frequency $(1/A)d\Gamma/d\omega$. This quantity represents the emission rate
which is detected classically at frequency $\omega$, and thus no longer
discriminates between the two photons of the emitted pair.
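As a rough numerical sketch of this procedure, the $q$-integral in Eq. 7 can be evaluated with a model reflectivity. The quasistatic free-standing-slab coefficient used below is a stand-in of our own choosing (the paper's actual layered-structure $r_{p}$ is not reproduced in this excerpt), and all material parameters are assumed SiC-like values:

```python
import numpy as np

# Sketch of the q-integral in Eq. (7). In place of the full layered-structure
# reflectivity, we use the quasistatic coefficient of a free-standing slab,
#   r_p = r01 (1 - e^{-2 q d_slab}) / (1 - r01^2 e^{-2 q d_slab}),
# with r01 = (eps - 1)/(eps + 1). All material parameters are assumed.
eps_inf, omega_0, omega_p, gamma = 6.56, 1.49e14, 2.68e14, 8.9e11
d_slab, d_mod = 100e-9, 10e-9          # slab / modulated-layer thickness (m)

def eps_bg(w):
    return eps_inf + omega_p**2 / (omega_0**2 - w**2 - 1j * w * gamma)

def im_rp(w, q):
    r01 = (eps_bg(w) - 1.0) / (eps_bg(w) + 1.0)
    e = np.exp(-2.0 * q * d_slab)
    return (r01 * (1.0 - e) / (1.0 - r01**2 * e)).imag

def q_integral(w, wp):
    q = np.linspace(1e4, 2e9, 20000)
    dq = q[1] - q[0]
    integrand = q * (1 - np.exp(-2 * q * d_mod))**2 * im_rp(w, q) * im_rp(wp, q)
    return float(np.sum(integrand) * dq)

inside = q_integral(1.6e14, 1.6e14)    # degenerate pair inside the RS band
outside = q_integral(1.0e14, 1.0e14)   # below the RS band
print(inside, outside)
```

Frequencies inside the RS band, where the slab supports polariton branches, dominate the integral by orders of magnitude over frequencies below it, in line with the density-of-states argument above.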
Using this method, we obtain results for SiC which is modulated both
dispersively and nondispersively. In Fig. 2a, we see the phonon-polariton
dispersion for a 100 nm layer of SiC (dielectric parameters taken from Le Gall
et al. (1997)). Figs. 2c-f show the corresponding rate distribution
$(1/A)d\Gamma/d\omega$ for each of the marked modulation frequencies. We first
note the difference between nearly monochromatic modulations and short pulses.
For a long pulse (Fig. 2c), the two emitted polaritons are subject to the
energy conservation constraint $\omega+\omega^{\prime}\approx\Omega_{0}$. In
this regime, the behavior of the rate spectrum $d\Gamma/d\omega$ is determined
by where $\Omega_{0}/2$ lies in the RS band (see dashed lines on Fig. 2a). We
see that for various $\Omega_{0}$, the spectra are symmetrically peaked around
$\Omega_{0}/2$, with widths set by the loss. The strongest response is seen
around $\Omega_{0}/\omega_{0}=2.4$ where the density of states of SPhPs is
highest. At the slightly lower excitation frequency
$\Omega_{0}/\omega_{0}=2.3$, the central peak at $\Omega_{0}/2$ is flanked by
two symmetrical side peaks. These secondary peaks occur since $\Omega_{0}/2$
lies in between two bands of the dispersion, and thus one possibility for
satisfying the approximate energy conservation relation is that one polariton
is emitted into each band at the same wavevector $q$. Also notably, we see
that the modulation associated with $\Omega_{0}/\omega_{0}=2.01$ produces very
little response, owing to the low density of states at the bottom of the RS
band. For a short pulse (Fig. 2d), the general trend in magnitudes between the
excitation frequencies is the same. However, the maximum rate that can be
achieved is 10-100 times smaller, as the pulse is not long enough to establish
a well-defined frequency. Additionally, since a short pulse eliminates the strict
energy conservation condition, polaritons can be emitted at many frequency
pairs. As a result, the shape of the spectrum for most excitation frequencies
is peaked near the top of the RS band where the density of states is highest.
For dispersive modulations, many aspects of SPhP pair production remain the
same. However, several key changes emerge as a result of the difference in the
factor $|\Delta\chi|^{2}\propto 1/|Q(\omega)Q(\omega^{\prime})|^{2}$, which
becomes large when $\omega,\omega^{\prime}\approx\omega_{0}$. This condition
corresponds to parametric resonance of phonons which dictate the dielectric
response. While the behavior of the monochromatic modulation (Fig. 2e) for
higher frequencies $\Omega_{0}$ remains qualitatively the same, the magnitudes
of the peaks for $\Omega_{0}/\omega_{0}=2.1,2.01$ increase substantially.
Interestingly, for $\Omega_{0}/\omega_{0}=2.1$, this resonance amplifies the
tails of the frequency distribution, so that nondegenerate pair production is
actually slightly preferred. For short pulses (Fig. 2f), the density of states
behavior remains largely unchanged. However, the tails of the distribution at
the bottom of the RS band near $\omega_{0}$ are raised, in contrast to the
nondispersive behavior (Fig. 2d). There are two main factors which may cause
strong enhancement of the phonon emission spectrum: high density of states,
and parametric resonance around $\omega_{0}$. For SiC, these large dispersive
enhancements occur around $\omega_{0}$, which is actually at a point of very
low density of states in the dispersion. We can then reason that the strongest
emission should come from systems where the dispersive resonance overlaps more
strongly with the high density of states.
To this end, we elucidate how SPhPs on hBN, due to their multi-banded nature,
can enjoy much stronger enhancement through dispersive modulations. Unlike
SiC, hBN is an anisotropic polar insulator, with different transverse optical
phonon frequencies in the in-plane or out-of-plane directions. As a result,
hBN has two RS bands, and the dispersion relation is hyperbolic, being multi-
branched in each RS band Basov et al. (2016). The dispersion relation in the
RS band of thin hBN is seen in Fig. 3a (dielectric parameters taken from
Woessner et al. (2015); Cai et al. (2007)). In contrast to SiC, the density of
states of SPhPs is spread broadly across the upper RS band. Figs. 3b,c show
the emitted pair spectrum for a variety of driving frequencies, similarly to
SiC. The fringes seen in the emission spectra are a direct consequence of
interference between many possibilities for how two phonon-polaritons can
distribute themselves into many branches of the dispersion. Fig. 3d shows the
total rate of emission integrated over the upper RS band for a range of
modulation frequencies $\Omega_{0}$ and pulse durations $T$. Due to the
relatively even density of states, we see that the emission rate in the
nondispersive case is relatively uniform ($\Gamma/A\approx 10^{9}\,\mu\text{m}^{-2}\,\text{s}^{-1}$)
across a wide range of parameters.
For a dispersive modulation, the emission strengths are reordered entirely,
and the strongest emission occurs for degenerate production around
$\omega_{0}$ when the system is modulated at $2\omega_{0}$. These differences
manifest not only in the shape of the spectrum, but in the overall strength of
each process. Specifically, we see that around the point of strongest
enhancement (Fig. 3g), the emission rate is orders of magnitude higher than
for long pulses outside of the resonance around $\omega_{0}$. Even though
achieving $\delta\varepsilon=10^{-3}$ through a nonlinear substrate requires a
lower applied field than a TO phonon frequency shift of equivalent proportion,
the sensitive nature of the dispersive modulations provides opportunities for
improved efficiency. We estimate that with an applied field strength of 1
GV/m, the nondispersive modulation achieved through a thin nonlinear
($\chi^{(2)}=100$ pm/V) layer has a quantum efficiency of the order
$\eta\approx 10^{-9}$, while at the same field strength, the dispersive
modulation has $\eta\approx 10^{-5}$. Given that evidence of parametric
amplification of optical phonons in SiC has already been demonstrated Cartella
et al. (2018), we believe that efficient generation of SPhP pairs on SiC and
hBN by optical excitation should be feasible. We have also applied our
formalism to the generation of graphene plasmons on a nonlinear substrate, and
found this process could have an efficiency $\eta\approx 10^{-4}$ (see S.I.).
Such efficiencies could exceed the highest seen for pair generation to date
Bock et al. (2016). Potential applications will need to use or out-couple SPhP
pairs before they are attenuated.
We have provided a comprehensive Hamiltonian theory which governs photon
interactions in dispersive time-dependent dielectrics. Our work shows that the
role of dispersion is critical in describing and enhancing these phenomena, as
we showed for time-modulated polar insulators. Our framework is amenable to
design and optimization of complex structures for experiments and potential
devices. Our theory may also provide important insights about how enhanced
nonlinearities in epsilon-near-zero materials Caspani et al. (2016); Alam et
al. (2016) may present opportunities for enhancing DVEs by realizing large
relative changes in the permittivity. Beyond this, the Hamiltonian MQED
formalism we have presented can enable further studies of light-matter
interactions in arbitrary time-dependent materials. For example, one could
model how spontaneous emission and energy-level shifts of quantum emitters are
modified in the presence of time-modulation. Finally, this kind of formalism
could provide opportunities for studying the role that parametric
amplification of quasiparticles can play in exotic effects in solid-state
systems such as light-induced superconductivity Mitrano et al. (2016); Babadi
et al. (2017). Broadly, we anticipate that our framework will be of interest
for describing classical and quantum phenomena in many timely experimental
platforms featuring ultrafast optical modulation of materials.
###### Acknowledgements.
The authors thank Yannick Salamin and Prof. Ido Kaminer for helpful
discussions. This material is based upon work supported by the Defense
Advanced Research Projects Agency (DARPA) under Agreement No. HR00112090081.
This work was supported in part by the U.S. Army Research Office through the
Institute for Soldier Nanotechnologies under award number W911NF-18-2-0048.
J.S. was supported in part by NDSEG fellowship No. F-1730184536. N.R. was
supported by Department of Energy Fellowship DE-FG02-97ER25308.
## References
* Bethe (1947) H. A. Bethe, Physical Review 72, 339 (1947).
* Purcell (1946) E. Purcell, Physical Review 69, 681 (1946).
* Gérard and Gayral (1999) J.-M. Gérard and B. Gayral, Journal of Lightwave Technology 17, 2089 (1999).
* Lamoreaux (1997) S. K. Lamoreaux, Physical Review Letters 78, 5 (1997).
* Kardar and Golestanian (1999) M. Kardar and R. Golestanian, Reviews of Modern Physics 71, 1233 (1999).
* Pendry (1997) J. Pendry, Journal of Physics: Condensed Matter 9, 10301 (1997).
* Mohideen and Roy (1998) U. Mohideen and A. Roy, Physical Review Letters 81, 4549 (1998).
* Klimchitskaya et al. (2009) G. Klimchitskaya, U. Mohideen, and V. Mostepanenko, Reviews of Modern Physics 81, 1827 (2009).
* Bordag et al. (2001) M. Bordag, U. Mohideen, and V. M. Mostepanenko, Physics Reports 353, 1 (2001).
* Plunien et al. (1986) G. Plunien, B. Müller, and W. Greiner, Physics Reports 134, 87 (1986).
* Munday et al. (2009) J. N. Munday, F. Capasso, and V. A. Parsegian, Nature 457, 170 (2009).
* Kenneth et al. (2002) O. Kenneth, I. Klich, A. Mann, and M. Revzen, Physical Review Letters 89, 033001 (2002).
* Zhao et al. (2009) R. Zhao, J. Zhou, T. Koschny, E. Economou, and C. Soukoulis, Physical Review Letters 103, 103602 (2009).
* Moore (1970) G. T. Moore, Journal of Mathematical Physics 11, 2679 (1970).
* Maghrebi et al. (2012) M. F. Maghrebi, R. L. Jaffe, and M. Kardar, Physical Review Letters 108, 230403 (2012).
* Boyd (2019) R. W. Boyd, _Nonlinear optics_ (Academic press, 2019).
* Yablonovitch (1989) E. Yablonovitch, Physical Review Letters 62, 1742 (1989).
* Crispino et al. (2008) L. C. Crispino, A. Higuchi, and G. E. Matsas, Reviews of Modern Physics 80, 787 (2008).
* Fulling and Davies (1976) S. A. Fulling and P. C. Davies, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 348, 393 (1976).
* Unruh and Wald (1984) W. G. Unruh and R. M. Wald, Physical Review D 29, 1047 (1984).
* Hawking (1975) S. W. Hawking, Communications in Mathematical Physics 43, 199 (1975).
* Unruh (1976) W. G. Unruh, Physical Review D 14, 870 (1976).
* Shtanov et al. (1995) Y. Shtanov, J. Traschen, and R. Brandenberger, Physical Review D 51, 5438 (1995).
* Nation et al. (2012) P. Nation, J. Johansson, M. Blencowe, and F. Nori, Reviews of Modern Physics 84, 1 (2012).
* Glauber and Lewenstein (1991) R. J. Glauber and M. Lewenstein, Physical Review A 43, 467 (1991).
* Walls and Milburn (2007) D. F. Walls and G. J. Milburn, _Quantum optics_ (Springer Science & Business Media, 2007).
* Scully and Zubairy (1999) M. O. Scully and M. S. Zubairy, _Quantum optics_ (1999).
* Loudon and Knight (1987) R. Loudon and P. L. Knight, Journal of Modern Optics 34, 709 (1987).
* Breitenbach et al. (1997) G. Breitenbach, S. Schiller, and J. Mlynek, Nature 387, 471 (1997).
* Ralph and Lam (1998) T. C. Ralph and P. K. Lam, Physical Review Letters 81, 5668 (1998).
* Polzik et al. (1992) E. Polzik, J. Carri, and H. Kimble, Physical Review Letters 68, 3020 (1992).
* Aasi et al. (2013) J. Aasi, J. Abadie, B. Abbott, R. Abbott, T. Abbott, M. Abernathy, C. Adams, T. Adams, P. Addesso, R. Adhikari, et al., Nature Photonics 7, 613 (2013).
* Wilson et al. (2011) C. M. Wilson, G. Johansson, A. Pourkabirian, M. Simoen, J. R. Johansson, T. Duty, F. Nori, and P. Delsing, Nature 479, 376 (2011).
* Rodriguez et al. (2011) A. W. Rodriguez, F. Capasso, and S. G. Johnson, Nature Photonics 5, 211 (2011).
* Volokitin and Persson (2007) A. Volokitin and B. N. Persson, Reviews of Modern Physics 79, 1291 (2007).
* Dodonov (2010) V. Dodonov, Physica Scripta 82, 038105 (2010).
* Cartella et al. (2018) A. Cartella, T. F. Nova, M. Fechner, R. Merlin, and A. Cavalleri, Proceedings of the National Academy of Sciences 115, 12148 (2018).
* Law (1994) C. Law, Physical Review A 49, 433 (1994).
* Lustig et al. (2018) E. Lustig, Y. Sharabi, and M. Segev, Optica 5, 1390 (2018).
* Zurita-Sánchez et al. (2009) J. R. Zurita-Sánchez, P. Halevi, and J. C. Cervantes-Gonzalez, Physical Review A 79, 053821 (2009).
* Chu and Tamir (1972) R. Chu and T. Tamir, in _Proceedings of the Institution of Electrical Engineers_ (IET, 1972), vol. 119, pp. 797–806.
* Harfoush and Taflove (1991) F. Harfoush and A. Taflove, IEEE transactions on antennas and propagation 39, 898 (1991).
* Fante (1971) R. Fante, IEEE Transactions on Antennas and Propagation 19, 417 (1971).
* Holberg and Kunz (1966) D. Holberg and K. Kunz, IEEE Transactions on Antennas and Propagation 14, 183 (1966).
* Scheel and Buhmann (2008) S. Scheel and S. Y. Buhmann, Acta Physica Slovaca 58, 675 (2008).
* Rivera and Kaminer (2020) N. Rivera and I. Kaminer, Nature Reviews Physics 2, 538 (2020).
* Rivera et al. (2017) N. Rivera, G. Rosolen, J. D. Joannopoulos, I. Kaminer, and M. Soljačić, Proceedings of the National Academy of Sciences 114, 13607 (2017).
* Chen et al. (2012) J. Chen, M. Badioli, P. Alonso-González, S. Thongrattanasiri, F. Huth, J. Osmond, M. Spasenović, A. Centeno, A. Pesquera, P. Godignon, et al., Nature 487, 77 (2012).
* Dai et al. (2014) S. Dai, Z. Fei, Q. Ma, A. Rodin, M. Wagner, A. McLeod, M. Liu, W. Gannett, W. Regan, K. Watanabe, et al., Science 343, 1125 (2014).
* Dai et al. (2015) S. Dai, Q. Ma, M. Liu, T. Andersen, Z. Fei, M. Goldflam, M. Wagner, K. Watanabe, T. Taniguchi, M. Thiemens, et al., Nature Nanotechnology 10, 682 (2015).
* Basov et al. (2016) D. Basov, M. Fogler, and F. G. De Abajo, Science 354, aag1992 (2016).
* Le Gall et al. (1997) J. Le Gall, M. Olivier, and J.-J. Greffet, Physical Review B 55, 10105 (1997).
* Woessner et al. (2015) A. Woessner, M. B. Lundeberg, Y. Gao, A. Principi, P. Alonso-González, M. Carrega, K. Watanabe, T. Taniguchi, G. Vignale, M. Polini, et al., Nature Materials 14, 421 (2015).
* Cai et al. (2007) Y. Cai, L. Zhang, Q. Zeng, L. Cheng, and Y. Xu, Solid State Communications 141, 262 (2007).
* Bock et al. (2016) M. Bock, A. Lenhard, C. Chunnilall, and C. Becher, Optics Express 24, 23992 (2016).
* Caspani et al. (2016) L. Caspani, R. Kaipurath, M. Clerici, M. Ferrera, T. Roger, J. Kim, N. Kinsey, M. Pietrzyk, A. Di Falco, V. M. Shalaev, et al., Physical Review Letters 116, 233901 (2016).
* Alam et al. (2016) M. Z. Alam, I. De Leon, and R. W. Boyd, Science 352, 795 (2016).
* Mitrano et al. (2016) M. Mitrano, A. Cantaluppi, D. Nicoletti, S. Kaiser, A. Perucchi, S. Lupi, P. Di Pietro, D. Pontiroli, M. Riccò, S. R. Clark, et al., Nature 530, 461 (2016).
* Babadi et al. (2017) M. Babadi, M. Knap, I. Martin, G. Refael, and E. Demler, Physical Review B 96, 014512 (2017).
2101.01121
Local Competition and Stochasticity for Adversarial Robustness in Deep
Learning
Konstantinos P. Panousis† Sotirios Chatzis† Antonios Alexos‡ Sergios
Theodoridis§ †Cyprus University of Technology, Limassol, Cyprus ‡University of
California Irvine, CA, USA §National and Kapodistrian University of Athens,
Athens, Greece & Aalborg University, Denmark [email protected]
###### Abstract
This work addresses adversarial robustness in deep learning by considering
deep networks with stochastic local winner-takes-all (LWTA) activations. This
type of network units result in sparse representations from each model layer,
as the units are organized in blocks where only one unit generates a non-zero
output. The main operating principle of the introduced units lies on
stochastic arguments, as the network performs posterior sampling over
competing units to select the winner. We combine these LWTA arguments with
tools from the field of Bayesian non-parametrics, specifically the stick-
breaking construction of the Indian Buffet Process, to allow for inferring the
sub-part of each layer that is essential for modeling the data at hand. Then,
inference is performed by means of stochastic variational Bayes. We perform a
thorough experimental evaluation of our model using benchmark datasets. As we
show, our method achieves high robustness to adversarial perturbations, with
state-of-the-art performance in powerful adversarial attack schemes.
## 1 Introduction
Despite their widespread success, Deep Neural Networks (DNNs) are notorious
for being highly susceptible to adversarial attacks. Adversarial examples,
i.e. inputs comprising carefully crafted perturbations, are designed with the
aim of “fooling” a considered model into misclassification. It has been shown
that, even small perturbations in an original input, e.g. via an $\ell_{p}$
norm, may render a DNN vulnerable; this highlights the fragility
of common DNNs (Papernot et al., 2017). This vulnerability casts serious
doubts regarding the confident use of modern DNNs in safety-critical
applications, such as autonomous driving (Boloor et al., 2019; Chen et al.,
2015), video recognition (Jiang et al., 2019), healthcare (Finlayson et al.,
2019), and other real-world scenarios (Kurakin et al., 2016). To address these
concerns, significant research effort has been devoted to adversarially-robust
DNNs.
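To make the threat model concrete, the following toy sketch (ours, not from any of the cited works) shows the classic fast-gradient-sign construction of an $\ell_{\infty}$-bounded perturbation against a linear score function:

```python
import numpy as np

# Toy FGSM-style attack on a linear score (illustrative only): an l_inf-bounded
# perturbation x + eps * sign(grad_x loss) flips the predicted label.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.4])

def score(z):
    return float(w @ z)                 # classify by sign(score)

# Attacker minimizes the score, so loss = -score and grad_x loss = -w;
# the FGSM step is therefore x - eps * sign(w).
eps = 0.25
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))           # positive -> negative: label flips
```

Each coordinate moves by at most `eps`, yet the score shifts by `eps * ||w||_1`, which is why even imperceptibly small per-pixel perturbations can change a model's decision.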
Adversarial attacks, and the associated defense strategies, comprise many
different approaches sharing the same goal: making deep architectures more
reliable and robust. In general, adversarially-robust models are obtained via
the following types of strategies: (i) Adversarial Training, where a model is
trained with both original and perturbed data (Madry et al., 2017; Tramèr et
al., 2017; Shrivastava et al., 2017); (ii) Manifold Projections, where the
original data points are projected onto a different subspace, presuming that,
therein, the effects of the perturbations can be mitigated (Jalal et al.,
2017; Shen et al., 2017; Song et al., 2017); (iii) Stochastic Modeling,
where some randomization of the input data and/or the neuronal activations is
performed on each hidden layer (Prakash et al., 2018; Dhillon et al., 2018;
Xie et al., 2017); and (iv) Preprocessing, where some aspects of either the
data or the neuronal activations are modified to induce robustness (Buckman et
al., 2018; Guo et al., 2017; Kabilan et al., 2018).
Despite these advances, most of the currently considered approaches and
architectures are particularly tailored to the specific characteristics of a
considered type of attack. This implies that such a model may fail completely
if the adversarial attack patterns change in a radical manner. To
overcome this challenge, we posit that we need to devise an activation
function paradigm different from common neuronal activation functions,
especially ReLU.
Recently, the deep learning community has shown fresh interest in more
biologically plausible models. In this context, there is an increasing body of
evidence from Neuroscience that neurons with similar functions in a biological
system are aggregated together in blocks, and local competition takes place
therein for their activation (Local-Winner-Takes-All, LWTA, mechanism). Under
this scheme, in each block, only one neuron can be active at a given time,
while the rest are inhibited to silence. Crucially, it appears that this
mechanism is of stochastic nature, in the sense that the same system may
produce different neuron activation patterns when presented with exactly the
same stimulus at multiple times (Kandel et al., 2000; Andersen et al., 1969;
Stefanis, 1969; Douglas and Martin, 2004; Lansner, 2009). Previous
implementations of the LWTA mechanism in deep learning have shown that the
obtained sparse representations of the input are quite informative for
classification purposes (Lee and Seung, 1999; Olshausen and Field, 1996),
while exhibiting automatic gain control, noise suppression, and robustness to
catastrophic forgetting (Srivastava et al., 2013; Grossberg, 1982; Carpenter
and Grossberg, 1988). However, previous authors have not treated the LWTA
mechanism under a systematic stochastic modeling viewpoint, which is a key
component in actual biological systems.
Finally, recent works in the community, e.g. Verma and Swami (2019), have
explored the susceptibility of the conventional one-hot output encoding of
deep learning classifiers to adversarial attacks. By borrowing arguments from
coding theory, the authors examine the effect of encoding the deep learning
classifier output using error-correcting output codes in the adversarial
scenario. The experimental results suggest that such an encoding technique
enhances the robustness of a considered architecture, while retaining high
classification accuracy in the benign context.
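The intuition behind error-correcting output codes can be sketched as follows; the toy codebook below is our own, not the one used by Verma and Swami (2019):

```python
import numpy as np

# Minimal ECOC sketch: each class is assigned a binary codeword, and a
# (possibly corrupted) output bit vector is decoded to the nearest codeword.
codebook = np.array([
    [0, 0, 0, 0, 0],    # class 0
    [0, 1, 1, 0, 1],    # class 1
    [1, 0, 1, 1, 0],    # class 2
    [1, 1, 0, 1, 1],    # class 3
])

def decode(bits):
    """Map an output bit vector to the class with the nearest codeword."""
    dists = np.abs(codebook - bits).sum(axis=1)   # Hamming distance
    return int(np.argmin(dists))

clean = np.array([0, 1, 1, 0, 1])          # network emits class 1's codeword
corrupted = np.array([0, 1, 1, 1, 1])      # one bit flipped by a perturbation
print(decode(clean), decode(corrupted))    # both decode to class 1
```

Because the codewords are separated by a minimum Hamming distance greater than two, an adversarial perturbation must flip several output bits at once to change the decoded class, rather than nudging a single logit past a one-hot decision boundary.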
Drawing upon these insights, in this work we propose a new deep network design
scheme that is particularly tailored to address adversarially-robust deep
learning. Our approach falls under the stochastic modeling type of approaches.
Specifically, we propose a deep network configuration framework employing: (i)
stochastic LWTA activations; and (ii) an Indian Buffet process (IBP)-based
mechanism for learning which sub-parts of the network are essential for data
modeling. We combine this modeling rationale with Error Correcting Output
Codes (Verma and Swami, 2019) to further enhance performance.
We evaluate our approach using well-known benchmark datasets and network
architectures. We provide related source code at:
https://github.com/konpanousis/adversarial_ecoc_lwta. The obtained empirical
evidence vouches for the potency of our approach, yielding state-of-the-art
robustness against powerful benchmark attacks. The remainder of the paper is
organized as follows: In Section 2, we introduce the necessary theoretical
background. In Section 3, we introduce the proposed approach and describe its
rationale and inference algorithms. In Section 4, we perform extensive
experimental evaluations, providing insights into the behavior of the proposed
framework. In Section 5, we summarize the contribution of this work.
## 2 Technical Background
### 2.1 Indian Buffet Process
The Indian Buffet Process (IBP) (Ghahramani and Griffiths, 2006) defines a
probability distribution over infinite binary matrices. IBP can be used as a
flexible prior for latent factor models, allowing the number of involved
latent features to be unbounded and inferred in a data-driven fashion. Its
construction induces sparsity, while at the same time allowing for more
features to emerge as new observations appear. Here, we focus on the
stick-breaking construction of the IBP proposed by Teh et al. (2007), which
renders it amenable to Variational Inference. Let us consider $N$ observations
and a binary matrix $\boldsymbol{Z}=[z_{j,k}]_{j,k=1}^{N,K}$; each entry
therein indicates the existence of feature $k$ in observation $j$. Taking the
infinite limit $K\rightarrow\infty$, we can construct the following
hierarchical representation (Teh et al., 2007; Theodoridis, 2020):
$\displaystyle u_{k}\sim\mathrm{Beta}(\alpha,1),\
\pi_{k}=\prod_{i=1}^{k}u_{i},\ z_{j,k}\sim\mathrm{Bernoulli}(\pi_{k}),\
\forall j$
where $\alpha$ is a non-negative parameter, controlling the induced sparsity.
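For concreteness, the stick-breaking construction above can be sketched in a few lines of numpy; this is a truncated-at-$K$ illustration with function name, seed, and parameter values of our own choosing, not code from the paper:

```python
import numpy as np

def sample_ibp(N, K, alpha, rng):
    """Sample an N x K binary matrix from the truncated stick-breaking
    construction of the IBP: u_k ~ Beta(alpha, 1), pi_k = prod_{i<=k} u_i,
    z_{j,k} ~ Bernoulli(pi_k)."""
    u = rng.beta(alpha, 1.0, size=K)
    pi = np.cumprod(u)                       # non-increasing feature probabilities
    Z = (rng.random((N, K)) < pi).astype(int)
    return Z, pi

rng = np.random.default_rng(0)
Z, pi = sample_ibp(N=50, K=10, alpha=3.0, rng=rng)
assert Z.shape == (50, 10)
assert np.all(np.diff(pi) <= 0)              # later features are ever sparser
```

Because the $\pi_{k}$ are products of Beta variables, they decay with $k$, so higher-indexed features are used increasingly rarely; this is the sparsity-inducing behavior exploited later in the model.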
### 2.2 Local Winner-Takes-All
Let us assume a single layer of an LWTA-based network, comprising $K$ LWTA
blocks with $U$ competing units therein. Each block produces an output
$\boldsymbol{y}_{k}\in\mathrm{one\_hot}(U)$, $k=1,\dots,K$, given some input
$\boldsymbol{x}\in\mathbb{R}^{J}$. Each linear unit in each block computes its
activation $h_{k}^{u},\ u=1,\dots,U$, and the output of the block is decided
via competition among its units. Thus, for each block, $k$, and unit, $u$,
therein, the output reads:
$\displaystyle y_{k}^{u}=g(h_{k}^{1},\dots,h_{k}^{U})$ (1)
where $g(\cdot)$ is the competition function. The activation of each
individual unit follows the conventional inner product computation
$h_{k}^{u}=\boldsymbol{w}_{ku}^{T}\boldsymbol{x}$, where
$\boldsymbol{W}\in\mathbb{R}^{J\times K\times U}$ is the weight matrix of the
LWTA layer. In a conventional _hard_ LWTA network, the final output reads:
$\displaystyle y_{k}^{u}=\begin{cases}1,&\text{if }h_{k}^{u}\geq h_{k}^{i},\qquad\forall i=1,\dots,U,\ i\neq u\\ 0,&\text{otherwise}\end{cases}$ (2)
To bypass the restrictive assumption of binary output, more expressive
versions of the competition function have been proposed in the literature,
e.g., Srivastava et al. (2013). These generalized _hard_ LWTA networks
postulate:
$\displaystyle y_{k}^{u}=\begin{cases}h_{k}^{u},&\text{if }h_{k}^{u}\geq h_{k}^{i},\qquad\forall i=1,\dots,U,\ i\neq u\\ 0,&\text{otherwise}\end{cases}$ (3)
This way, only the unit with the strongest activation produces an output in
each block, while the others are inhibited to silence, i.e., the zero value.
The output of each layer thus constitutes a sparse representation determined
by the competition outcome within each block.
The above schemes do not respect a major aspect that is predominant in
biological systems, namely _stochasticity_. We posit that this aspect may be
crucial for endowing deep networks with adversarial robustness. To this end,
we adopt a scheme similar to Panousis et al. (2019), which proposed a novel
competitive random sampling procedure. We explain this scheme in detail in the
following section.
## 3 Model Definition
In this work, we consider a principled way of designing deep neural networks
that renders their inferred representations considerably more robust to
adversarial attacks. To this end, we utilize a novel stochastic LWTA type of
activation functions, and we combine it with appropriate sparsity-inducing
arguments from nonparametric Bayesian statistics.
Let us assume an input $\boldsymbol{X}\in\mathbb{R}^{N\times J}$ with $N$
examples, comprising $J$ features each. In conventional deep architectures,
each hidden layer comprises nonlinear units; the input is presented to the
layer, which then computes an affine transformation via the inner product of
the input with weights $\boldsymbol{W}\in\mathbb{R}^{J\times K}$, producing
outputs $\boldsymbol{Y}\in\mathbb{R}^{N\times K}$. The described computation
for each example $n$ yields
$\boldsymbol{y}_{n}=\sigma(\boldsymbol{W}^{T}\boldsymbol{x}_{n}+\boldsymbol{b})\in\mathbb{R}^{K},\
n=1,\dots,N$, where $\boldsymbol{b}\in\mathbb{R}^{K}$ is a bias factor and
$\sigma(\cdot)$ is a non-linear activation function, e.g. ReLU. An
architecture comprises intermediate and output layers of this type.
Under the proposed stochastic LWTA-based modeling rationale, individual units
are replaced by LWTA blocks, each containing a set of _competing linear_
units. Thus, the layer input is now presented to each different block and each
unit therein, via different weights. Letting $K$ be the number of LWTA blocks
and $U$ the number of competing units in each block, the weights are now
represented via a three-dimensional matrix
$\boldsymbol{W}\in\mathbb{R}^{J\times K\times U}$.
Drawing inspiration from Panousis et al., (2019), we postulate that the local
competition in each block is performed via a competitive random sampling
procedure. The higher the output of a competing unit, the higher the
probability of it being the winner. However, the winner is selected
stochastically.
In the following, we introduce a set of discrete latent vectors
$\boldsymbol{\xi}_{n}\in\mathrm{one\_hot}(U)^{K}$ to encode the outcome of the
local competition between the units in each LWTA block of a network layer. For
each data input $\boldsymbol{x}_{n}$, the non-zero entries of this one-hot
representation denote the winning units among the $U$ competitors in each of
the $K$ blocks of the layer.
To further enhance the stochasticity and regularization of the resulting
model, we turn to the nonparametric Bayesian framework. Specifically, we
introduce a matrix of latent variables $\boldsymbol{Z}\in\{0,1\}^{J\times K}$
to explicitly regularize the model by inferring whether each connection in
each layer is actually needed. Each entry $z_{j,k}$ therein is set to one if
the $j^{th}$ dimension of the input is presented to the $k^{th}$ block, and
$z_{j,k}=0$ otherwise. We impose the sparsity-inducing IBP prior over the
latent variables $z$ and perform inference over them.
Essentially, if all the connections leading to some block are set to
$z_{j,k}=0$, the block is effectively zeroed-out of the network. This way, we
induce sparsity in the network architecture.
On this basis, we now define the output of a layer of the considered model,
$\boldsymbol{y}_{n}\in\mathbb{R}^{K\cdot U}$, as follows:
$\displaystyle[\boldsymbol{y}_{n}]_{ku}=[\boldsymbol{\xi}_{n}]_{ku}\sum_{j=1}^{J}(w_{j,k,u}\cdot
z_{j,k})\cdot[\boldsymbol{x}_{n}]_{j}\in\mathbb{R}$ (4)
To facilitate a competitive random sampling procedure in a data-driven
fashion, the latent indicators $\boldsymbol{\xi}_{n}$ are drawn from a
posterior Categorical distribution: the higher the output of a linear
competing unit, the higher the probability of it being the winner. This
yields:
$\displaystyle q([\boldsymbol{\xi}_{n}]_{k})=\mathrm{Discrete}\left([\boldsymbol{\xi}_{n}]_{k}\Big|\mathrm{softmax}\left(\sum_{j=1}^{J}[w_{j,k,u}]_{u=1}^{U}\cdot z_{j,k}\cdot[\boldsymbol{x}_{n}]_{j}\right)\right)$ (5)
Further, we postulate that the latent variables $z$ are drawn from Bernoulli
posteriors, such that:
$\displaystyle q(z_{j,k})=\mathrm{Bernoulli}(z_{j,k}|\tilde{\pi}_{j,k})$ (6)
These are trained by means of variational Bayes, as we describe next, while we
resort to fixed-point estimation for the weight matrices $\boldsymbol{W}$.
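Eqs. (4)–(6) can be condensed into the following forward-pass sketch; it samples the winner indicators $\boldsymbol{\xi}$ from the softmax posterior of Eq. (5), with the binary $Z$ treated here as a given gating sample (names, seed, and toy dimensions are our own):

```python
import numpy as np

def stochastic_lwta(x, W, Z, rng):
    """Stochastic LWTA layer (Eqs. 4-5): connections are gated by the
    binary IBP variables Z (shape J x K), and the winner of each block
    is *sampled* from a softmax over the gated unit activations."""
    J, K, U = W.shape
    h = np.einsum('j,jku->ku', x, W * Z[:, :, None])  # gated activations
    y = np.zeros((K, U))
    for k in range(K):
        p = np.exp(h[k] - h[k].max())
        p /= p.sum()
        winner = rng.choice(U, p=p)        # xi_k ~ Discrete(softmax(h_k))
        y[k, winner] = h[k, winner]        # Eq. (4): winner passes its activation
    return y.reshape(-1)                   # layer output in R^{K*U}

rng = np.random.default_rng(2)
J, K, U = 6, 3, 2
x = rng.normal(size=J)
W = rng.normal(size=(J, K, U))
Z = (rng.random((J, K)) < 0.7).astype(float)  # stand-in for sampled IBP gates
y = stochastic_lwta(x, W, Z, rng)
assert y.shape == (K * U,)
```

Note that repeated calls on the same input may select different winners, which is precisely the biologically inspired stochasticity the model relies on.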
For the output layer of our approach, we perform the standard inner product
computation followed by a softmax, while imposing an IBP over the connections,
similar to the inner layers. Specifically, let us assume a $C$-unit
penultimate layer with input $\boldsymbol{X}\in\mathbb{R}^{N\times J}$ and
weights $\boldsymbol{W}\in\mathbb{R}^{J\times C}$. We introduce an auxiliary
matrix of latent variables $\boldsymbol{Z}\in\{0,1\}^{J\times C}$. Then, the
output $\boldsymbol{Y}\in\mathbb{R}^{N\times C}$ yields:
$\displaystyle y_{n,c}=\mathrm{softmax}\big(\sum_{j=1}^{J}\left(w_{j,c}\cdot z_{j,c}\right)\cdot\left[\boldsymbol{x}_{n}\right]_{j}\big)\in\mathbb{R}$ (7)
where the latent variables in $\boldsymbol{Z}$ are drawn independently from a
Bernoulli posterior:
$q\left(z_{j,c}\right)=\operatorname{Bernoulli}\left(z_{j,c}|\tilde{\pi}_{j,c}\right)$ (8)
To perform variational inference for the model latent variables, we impose a
symmetric Discrete prior over the latent indicators,
$[\boldsymbol{\xi}_{n}]_{k}\sim\mathrm{Discrete}(1/U)$. On the other hand, the
prior imposed over $\boldsymbol{Z}$ follows the stick-breaking construction of
the IBP, to facilitate data-driven sparsity induction.
The formulation of the proposed modeling approach is now complete. A graphical
illustration of the resulting model is depicted in Fig. 1(a).
### 3.1 Convolutional Layers
Further, to accommodate architectures comprising convolutional operations, we
devise a convolutional variant inspired by Panousis et al. (2019).
Specifically, let us assume input tensors
$\{\boldsymbol{X}_{n}\}_{n=1}^{N}\in\mathbb{R}^{H\times L\times C}$ at a
specific layer, where $H,L,C$ are the height, length and channels of the
input. We define a set of kernels, each with weights
$\boldsymbol{W}_{k}\in\mathbb{R}^{h\times l\times C\times U}$,
$k=1,\dots,K$, where $h,l,C,U$ are the kernel height, length, channels and
competing feature maps. Thus, analogously to the grouping of linear units in the dense
layers, in this case, local competition is performed among feature maps. Each
kernel is treated as an LWTA block with competing feature maps; each layer
comprises multiple kernels.
We additionally consider analogous auxiliary binary latent variables
$\boldsymbol{z}\in\{0,1\}^{K}$ to further regularize the convolutional
layers. Here, we retain or omit full LWTA blocks (convolutional kernels), as
opposed to single connections. This way, at a given layer of the proposed
convolutional variant, the output $\boldsymbol{Y}_{n}\in\mathbb{R}^{H\times
L\times K\cdot U}$ is obtained via concatenation along the last dimension of
the subtensors:
$\displaystyle[\boldsymbol{Y}_{n}]_{k}=[\boldsymbol{\xi}_{n}]_{k}\left((z_{k}\cdot\boldsymbol{W}_{k})\star\boldsymbol{X}_{n}\right)\in\mathbb{R}^{H\times L\times U}$ (9)
where $\boldsymbol{X}_{n}$ is the input tensor for the $n^{th}$ data point,
and “$\star$” denotes the convolution operation. Turning to the competition
function, we follow the same rationale, such that the sampling procedure is
driven from the outputs of the competing feature maps:
$\displaystyle q([\boldsymbol{\xi}_{n}]_{k})=\mathrm{Discrete}\Big([\boldsymbol{\xi}_{n}]_{k}\Big|\mathrm{softmax}\big(\textstyle\sum_{h^{\prime},l^{\prime}}[(z_{k}\cdot\boldsymbol{W}_{k})\star\boldsymbol{X}_{n}]_{h^{\prime},l^{\prime},u}\big)\Big)$
We impose an IBP prior over $\boldsymbol{z}$, while a posteriori drawing from
a Bernoulli distribution, such that,
$q(z_{k})=\mathrm{Bernoulli}(z_{k}|\tilde{\pi}_{k})$. We impose a symmetric
prior for the latent winner indicators
$[\boldsymbol{\xi}_{n}]_{k}\sim\mathrm{Discrete}(1/U)$. A graphical
illustration of the defined layer is depicted in Fig. 1(b).
Figure 1: (a) A graphical representation of our competition-based modeling
approach. Rectangles denote LWTA blocks, and circles the competing units
therein. The winner units are denoted with bold contours ($\xi=1$). Bold edges
denote retained connections ($z=1$). (b) The convolutional LWTA variant.
Competition takes place among feature maps. The winner feature map (denoted
with bold contour) passes its output to the next layer, while the rest are
zeroed out.
### 3.2 Training & Inference
To train the proposed model, we resort to maximization of the Evidence Lower
Bound (ELBO). To facilitate efficiency in the resulting procedures, we adopt
Stochastic Gradient Variational Bayes (SGVB) (Kingma and Welling,, 2014).
However, our model comprises latent variables that are not readily amenable to
the reparameterization trick of SGVB, namely, the discrete latent variables
$z$ and $\boldsymbol{\xi}$, and the Beta-distributed stick variables
$\boldsymbol{u}$. For the former, we utilize the continuous relaxation of
Discrete (Bernoulli) random variables based on the Gumbel-Softmax trick
(Maddison et al., 2016; Jang et al., 2017). For the latter, we employ the
Kumaraswamy distribution-based reparameterization trick (Kumaraswamy, 1980)
of the Beta distribution. These reparameterization tricks are only employed
during training, to ensure low-variance ELBO gradients.
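Both reparameterizations are standard and can be sketched directly; the function names, the chosen temperature, and the toy logits below are ours, shown only to make the two tricks concrete:

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Continuous relaxation of a Discrete sample: perturb the logits
    with Gumbel(0,1) noise and apply a softmax with temperature tau."""
    g = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0,1) noise
    z = (logits + g) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

def kumaraswamy_sample(a, b, rng):
    """Reparameterized Kumaraswamy(a, b) sample via the inverse CDF,
    a differentiable surrogate for the Beta-distributed stick variables."""
    eps = rng.random()
    return (1.0 - (1.0 - eps) ** (1.0 / b)) ** (1.0 / a)

rng = np.random.default_rng(0)
xi = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.67, rng=rng)
u = kumaraswamy_sample(a=4.0, b=1.0, rng=rng)
assert np.isclose(xi.sum(), 1.0) and np.all(xi >= 0)
assert 0.0 <= u <= 1.0
```

As $\tau\rightarrow 0$ the relaxed sample approaches a one-hot vector, recovering the discrete winner indicator; during training a moderate $\tau$ keeps gradients well behaved.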
At inference time, we _directly draw samples_ from the trained posteriors of
the _winner and network subpart selection latent variables_, $\boldsymbol{\xi}$
and $z$, respectively; this introduces _stochasticity to the network
activations and architecture_, respectively. Thus, differently from previous
work in the field, the stochasticity of the resulting model stems from two
different sampling processes. On the one hand, contrary to the deterministic
competition-based networks of Srivastava et al. (2013), we implement a
data-driven random sampling procedure to determine the winning units, by
sampling from $q(\boldsymbol{\xi})$. In addition, we infer which subparts of
the model must be used or omitted, again based on sampling from the trained
posteriors $q(z)$. (In detail, inference is performed by sampling the
$q(\boldsymbol{\xi})$ and $q(z)$ posteriors a total of $S=5$ times, and
averaging the corresponding $S=5$ sets of output logits. We have found that an
increased $S>5$ does not yield any further improvement.)
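The $S$-sample inference procedure amounts to simple Monte Carlo averaging of the output logits. A minimal sketch, using a noisy stand-in function in place of a trained stochastic network (names and values are ours):

```python
import numpy as np

def predict(stochastic_logits, x, S=5):
    """Inference as described above: draw S stochastic forward passes
    and average the resulting output logits before taking the argmax."""
    logits = np.mean([stochastic_logits(x) for _ in range(S)], axis=0)
    return int(np.argmax(logits))

rng = np.random.default_rng(0)
# stand-in for a stochastic network: true logits plus sampling noise
noisy = lambda x: x + rng.normal(scale=0.1, size=x.shape)
assert predict(noisy, np.array([0.1, 2.0, -0.5]), S=5) == 1
```

Averaging over a handful of samples suffices here because the per-sample logit noise shrinks as $1/\sqrt{S}$, consistent with the observation that $S>5$ brings no further gain.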
## 4 Experimental Evaluation
We evaluate the capacity of our proposed approach against various adversarial
attacks and under different setups. To this end, we follow the experimental
framework of Verma and Swami (2019). Specifically, we train networks which
either employ the standard one-hot representation to encode the output
variable of the classifier (predicted class), or the error-correcting output
strategy proposed in Verma and Swami (2019); the latter is based on Hadamard
coding. We try various activation functions for the classification layer of
the networks, and use either a single network or an ensemble, as described in
Verma and Swami (2019). The details of the considered networks are provided
in Table 1. To obtain comparative results, the considered networks are
formulated both under our novel deep learning framework and in a conventional
manner (i.e., with ReLU nonlinearities and SGD training).
Table 1: A summary of the considered networks. $\mathbf{I}_{k}$ denotes a
$k\times k$ identity matrix, while $\boldsymbol{H}_{k}$ a $k$-length Hadamard
code.
Model | Architecture | Code | Output Activation
---|---|---|---
Softmax | Standard | $\mathbf{I}_{10}$ | softmax
Logistic | Standard | $\mathbf{I}_{10}$ | logistic
LogisticEns10 | Ensemble | $\mathbf{I}_{10}$ | logistic
Tanh16 | Standard | $\mathbf{H}_{16}$ | tanh
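To make the Hadamard-coded output strategy of Table 1 concrete, the following sketch builds a $\boldsymbol{H}_{16}$-style codebook via the Sylvester construction and decodes by nearest codeword; the construction choice and all names are our own illustrative assumptions, not details taken from Verma and Swami (2019):

```python
import numpy as np

def hadamard(k):
    """Sylvester construction of a k x k {+1,-1} Hadamard matrix
    (k must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < k:
        H = np.block([[H, H], [H, -H]])
    return H

def decode(outputs, codebook):
    """Error-correcting decoding: assign each output vector to the
    class whose codeword is nearest in Euclidean distance."""
    d = ((outputs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

codebook = hadamard(16)[:10]            # one 16-bit codeword per class
rng = np.random.default_rng(0)
noisy = codebook + 0.3 * rng.normal(size=codebook.shape)
assert np.array_equal(decode(noisy, codebook), np.arange(10))
```

Because distinct Hadamard rows are mutually orthogonal, the codewords sit far apart, so moderate perturbations of the network outputs still decode to the correct class; this redundancy is what the coding-theoretic argument exploits.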
Table 2: Classification accuracy on MNIST.
Model | Params | Benign | PGD | CW | BSA | Rand
---|---|---|---|---|---|---
Softmax (U=4) | 327,380 | .9613 | .865 | .970 | .950 | .187
Softmax (U=2) | 327,380 | .992 | .935 | .990 | 1.0 | .961
Logistic (U=2) | 327,380 | .991 | .901 | .990 | .990 | .911
LogEns10 (U=2) | 205,190 | .993 | .889 | .980 | .970 | 1.0
Madry | 3,274,634 | .9853 | .925 | .84 | .520 | .351
TanhEns16 | 401,168 | .9948 | .929 | 1.0 | 1.0 | .988
### 4.1 Experimental Setup
We consider two popular benchmark datasets, namely MNIST (LeCun et al., 2010)
and CIFAR-10 (Krizhevsky, 2009). The details of the considered network
architectures are provided in the Supplementary.
To evaluate our modeling approach, all the considered networks, depicted in
Table 1, are evaluated by splitting the architecture into: (i) LWTA blocks
with $U=2$ competing units on each hidden layer, and (ii) blocks with 4
competing units. The total number of units on each layer (split into blocks of
$U=2$ or 4 units) remains the same.
We initialize the posterior parameters of the Kumaraswamy distribution to
$a=K,\ b=1$, where $K$ is the number of LWTA blocks, while using an
uninformative Beta prior, $\text{Beta}(1,1)$. For the Concrete relaxations, we
set the temperatures of the priors and posteriors to $0.5$ and $0.67$
respectively, as suggested in Maddison et al. (2016); we faced no convergence
issues with these selections.
Evaluation is performed by utilizing 4 different benchmark adversarial
attacks: (i) Projected Gradient Descent (PGD); (ii) Carlini and Wagner (CW);
(iii) Blind Spot Attack (BSA) (Zhang et al., 2019); and (iv) a random attack
(Rand) (Verma and Swami, 2019). For all attacks, we adopt exactly the same
experimental protocol as Verma and Swami (2019), both for transparency and
comparability.
Specifically, for the PGD attack, we use a common choice for the pixel-wise
distortion $\epsilon=0.3(0.031)$ for MNIST(CIFAR-10) with 500(200) attack
iterations. For the CW attack, the learning rate was set to $1e$-$3$,
utilizing 10 binary search steps. BSA performs the CW attack with a scaled
version of the input, $\alpha\boldsymbol{x}$; we set $\alpha=0.8$. In the
“Random” attack, we construct random inputs by independently and uniformly
generating pixels in $(0,1)$; we report the fraction of these that yield a
class probability below $0.9$, in order to assess the confidence of the
classifier, as suggested in Verma and Swami (2019).
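As a reference for the strongest of these attacks, an $L_\infty$ PGD step-project loop can be sketched as follows; for self-containment we attack a logistic-regression stand-in rather than a deep network, and the function name, step size, and seed are our own choices:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, step=0.01, iters=100):
    """L_inf PGD against a logistic-regression stand-in for a network:
    repeatedly ascend the sign of the loss gradient, then project back
    into the eps-ball around x and into the valid pixel range [0, 1]."""
    x_adv = x.copy()
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # model output (sigmoid)
        grad = (p - y) * w                           # d cross-entropy / d x
        x_adv = np.clip(x_adv + step * np.sign(grad), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0
x = np.clip(rng.random(5), 0.2, 0.8)
x_adv = pgd_attack(x, y=1.0, w=w, b=b)
assert np.max(np.abs(x_adv - x)) <= 0.3 + 1e-9       # stays in the eps-ball
```

The projection step is what distinguishes PGD from plain gradient ascent: the perturbation is constrained to the $\epsilon$-ball ($\epsilon=0.3$ for MNIST, $0.031$ for CIFAR-10 in our experiments) around the clean input.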
### 4.2 Experimental Results
MNIST. We train our model for a maximum of 100 epochs, using the same data
augmentation as in Verma and Swami (2019). In Table 2, we depict the
comparative results for each different network and adversarial attack.
Therein, we also compare to the best-performing models in Verma and Swami
(2019), namely the Madry model (Madry et al., 2017) and TanhEns16. As we
observe, our modeling approach yields considerable improvements over Madry et
al. (2017) and TanhEns16 (Verma and Swami, 2019) in three of the four
considered attacks, while imposing a lower memory footprint (fewer trainable
parameters).
Table 3: Classification accuracy on CIFAR-10.
Model | Params | Benign | PGD | CW | BSA | Rand
---|---|---|---|---|---|---
Tanh16(U=4) | 773,600 | .510 | .460 | .550 | .600 | .368
Softmax(U=2) | 772,628 | .869 | .814 | .860 | .870 | .652
Tanh16(U=2) | 773,600 | .872 | .826 | .830 | .830 | .765
LogEns10(U=2) | 1,197,998 | .882 | .806 | .830 | .800 | 1.0
Madry | 45,901,914 | .871 | .470 | .080 | 0 | .981
TanhEns64 | 3,259,456 | .896 | .601 | .760 | .760 | 1.0
CIFAR-10. For the CIFAR-10 dataset, we follow an analogous procedure. The
obtained comparative effectiveness of our approach is depicted in Table 3.
Therein, we also compare to the best-performing models in Verma and Swami
(2019), namely the Madry model (Madry et al., 2017) and TanhEns64. In this
case, the differences in computational burden are more evident, since the
trained networks are now based on a VGG-like architecture. Specifically, our
approach requires one to two orders of magnitude fewer parameters than the
best-performing alternatives in Verma and Swami (2019). At the same time, it
yields substantially superior accuracy under the considered attacks, while
retaining comparable performance in the benign case.
Note that the best-performing model of Verma and Swami (2019), namely
TanhEns64, adopts an architecture similar to the baseline Tanh network that we
compare with, yet it fits an ensemble of 4 different such networks. This
imposes almost 4 times the computational footprint of our approach, both in
terms of memory requirements and computational time, which explains its small
accuracy advantage in the benign case; our approach thus remains much more
preferable in terms of the offered computation/accuracy trade-off.
(a) Benign: U=2 (b) PGD: U=2 (c) Benign: U=4 (d) PGD: U=4
Figure 2: Winning probabilities of competing units in LWTA blocks, on an
intermediate layer of the Tanh16 network, for each class in the CIFAR-10
dataset. Figs. 2(a) and 2(b) depict the mean probability of activations per
input class on a layer with 2 competing units, for benign and PGD test
examples, respectively. Figs. 2(c) and 2(d) correspond to a network layer
comprising 4 competing units. Black denotes very high winning probability,
while white very low.
### 4.3 Further Insights
Table 4: Classification accuracy for all the attacks on the MNIST dataset
(with $U=2$, wherever applicable).
Model | Method | Benign | PGD | CW | BSA | RAND
---|---|---|---|---|---|---
Softmax | Baseline | .992 | .082 | .540 | .180 | .270
LWTAmax | .993 | .302 | .800 | .390 | .543
LWTAmax & IBP | .994 | .890 | .920 | .840 | .361
LWTA | .992 | .900 | .990 | 1.0 | .810
LWTA & IBP | .992 | .935 | .990 | 1.0 | .961
Logistic | Baseline | .993 | .093 | .660 | .210 | .684
LWTAmax | .993 | .388 | .780 | .420 | .700
LWTAmax & IBP | .993 | .894 | .960 | .950 | .230
LWTA | .991 | .856 | .990 | .990 | .982
LWTA & IBP | .991 | .901 | .990 | .990 | .981
LogisticEns10 | Baseline | .993 | .382 | .880 | .480 | .905
LWTAmax | .993 | .303 | .920 | .520 | .900
LWTAmax & IBP | .994 | .860 | .940 | .910 | .400
LWTA | .992 | .603 | .900 | .550 | .809
LWTA & IBP | .993 | .889 | .980 | .970 | 1.0
Tanh16 | Baseline | .993 | .421 | .790 | .320 | .673
LWTAmax | .992 | .462 | .910 | .420 | .573
LWTAmax & IBP | .994 | .898 | .960 | .940 | .363
LWTA | .992 | .862 | .990 | .980 | .785
LWTA & IBP | .990 | .900 | .990 | .980 | .785
#### 4.3.1 Ablation Study
Further, to assess the individual utility of LWTA-winner and architecture
sampling in the context of our model, we scrutinize the obtained performance
in both the benign case and the considered adversarial attacks. Specifically,
we consider two different settings: (i) utilizing only the proposed LWTA
mechanism in place of conventional ReLU activations; (ii) our full approach
combining LWTA units with IBP-driven architecture sampling. The comparative
results can be found in Tables 4 (MNIST) and 5 (CIFAR-10). Therein, “Baseline”
corresponds to ReLU-based networks, as reported in Verma and Swami (2019).
Our model’s results are obtained with LWTA blocks of $U=2$ competing units.
We begin with the MNIST dataset, where we examine two additional setups.
Specifically, we employ the deterministic LWTA competition function as defined
in (3) and examine its effect against adversarial attacks, both as a
standalone modification (denoted as LWTAmax) and also combined with the
IBP-driven mechanism (i.e., sampling the $z$ variables). The resulting
classification performance is reported in the second and third rows for each
network in Table 4. One can observe that using deterministic LWTA activations,
without stochastically sampling the winner, already yields an improvement,
which increases further with the incorporation of the IBP.
The fourth and fifth rows correspond to the proposed LWTA (stochastic)
activations, without or with architecture sampling (via the $z$ variables).
The experimental results suggest that the stochastic nature of the proposed
activation significantly contributes to the robustness of the model: it yields
significant gains under all adversarial attacks compared to both the baseline
and the deterministic LWTA adaptation. The performance improvement obtained by
employing our proposed LWTA units can be as high as _two orders of
magnitude_, as in the case of the powerful PGD attack. Finally, stochastic
architecture sampling via the IBP-induced mechanism further increases the
robustness.
The corresponding experimental results for CIFAR-10 are provided in Table 5.
Our approach yields a significant performance increase, which reaches up to
_three orders of magnitude_ (PGD attack, Logistic network).
Table 5: Classification accuracy for all the attacks on CIFAR-10 (with $U=2$,
wherever applicable).
Model | Method | Benign | PGD | CW | BSA | RAND
---|---|---|---|---|---|---
Softmax | Baseline | .864 | .070 | .080 | .040 | .404
LWTA | .867 | .804 | .820 | .870 | .701
LWTA & IBP | .869 | .814 | .860 | .870 | .652
Logistic | Baseline | .865 | .006 | .140 | .100 | .492
LWTA | .830 | .701 | .720 | .696 | .690
LWTA & IBP | .837 | .738 | .800 | .820 | .726
LogisticEns10 | Baseline | .877 | .100 | .240 | .140 | .495
LWTA | .879 | .712 | .750 | .750 | .803
LWTA & IBP | .882 | .806 | .830 | .800 | 1.0
Tanh16 | Baseline | .866 | .099 | .080 | .100 | .700
LWTA | .863 | .720 | .750 | .750 | .800
LWTA & IBP | .872 | .826 | .830 | .830 | .765
(a) Proposed model
(b) Conventional ReLU-based counterpart
Figure 3: Change of the output logit values under our proposed approach
(3(a)), and a ReLU-based (3(b)) counterpart (PGD attack, CIFAR-10 dataset,
Softmax network). The gradual, radical change of the logit values in the ReLU-
based network indicates that conventional ReLU activations allow the attacker
to successfully exploit gradient information. In contrast, under our proposed
framework, the gradient-based attacker completely fails to do so.
#### 4.3.2 LWTA Behavior
Here, we scrutinize the competition patterns within the blocks of our model,
in order to gain some further insights. First and foremost, we want to ensure
that competition does not collapse to singular “always-winning” units. To this
end, we choose a random intermediate layer of a Tanh16 network formulated
under our modeling approach. We consider layers comprising 8 or 4 LWTA blocks
of $U=2$ and 4 competing units, respectively, and focus on the CIFAR-10
dataset. The probabilities of unit activations for each class are depicted in
Fig. 2.
In the case of $U=2$ competing units, we observe that the unit activation
probabilities for each different setup (benign and PGD) are essentially the
same (Figs. 2(a) and 2(b)). This suggests that the LWTA mechanism succeeds in
encoding salient discriminative patterns in the data that are resilient to PGD
attacks. Thus, we obtain networks capable of defending against adversarial
attacks in a principled way.
On the other hand, in Figs. 2(c) and 2(d) we depict the corresponding
probabilities when employing 4 competing units. In this case, the competition
mechanism is uncertain about the winning unit in each block and for each
class; the average activation probability for each unit is approximately $25\%$.
Moreover, there are several differences in the activations between the benign
data and a PGD attack; these explain the significant drop in performance. This
behavior potentially arises due to the relatively small structure of the
network; from the 16 units of the original architecture, only 4 are active for
each input. Thus, in this setting, LWTA fails to encode the necessary
distinctive patterns of the data. Further results are provided in the
Supplementary.
Finally, we investigate how the classifier output logit values change in the
context of an adversarial (PGD) attack. We expect that this investigation may
shed more light on the distinctive properties of the proposed framework that
allow for the observed adversarial robustness. To this end, we consider the
Softmax network trained on the CIFAR-10 dataset; we use a single, randomly
selected example from the test set to base our attack on. In Fig. 3, we depict
how the logit values pertaining to the ten modeled classes vary as we stage
the PGD attack on the proposed model, as well as its conventional, ReLU-based
counterpart. As we observe, the conventional, ReLU-based network exhibits a
gradual, yet radical change in the logit values as the PGD attack progresses;
this suggests that finding an adversarial example constitutes an “easy” task
for the attacker. In stark contrast, our approach exhibits varying,
inconsistent, and steadily minor logit value changes as the PGD attack
unfolds; this is especially true for the dominant (correct) class.
This implies that, under our approach, the gradient-based attacker is
completely obstructed from successfully exploiting gradient information. This
non-smooth appearance of the logit outputs seems to be due to the doubly
stochastic nature of our approach, stemming from LWTA winner
($\boldsymbol{\xi}$) and network component ($z$) sampling. Due to this
stochasticity, different sampled parts of a network may contribute to the
logit outputs each time. This destroys, with high probability, the linearity
with respect to the input. Similar results on different randomly selected
examples are provided in the Supplementary.
#### 4.3.3 Effect on the Decision Boundaries
We now turn our focus to the classifier decision boundaries obtained under our
novel paradigm. Many recent works (Fawzi et al., 2017, 2018; Ortiz-Jimenez et
al., 2020b) have shown that the decision boundaries of trained deep networks
are usually highly sensitive to small perturbations of their training examples
along some particular directions. These are exactly the directions that an
adversary can exploit in order to stage a successful attack. In this context,
Ortiz-Jimenez et al. (2020a) showed that deep networks are especially
susceptible along directions of discriminative features: they tend to learn
low margins in their vicinity, while exhibiting high invariance along other
directions, where the obtained margins are substantially larger.
Figure 4: Mean margin over test examples of a LeNet-5 network trained on
MNIST.
First, we train a LeNet-5 network on the MNIST dataset, similar to
Ortiz-Jimenez et al. (2020a). We then extract the mean margin of the decision
boundary, using a subspace-constrained version of DeepFool
(Moosavi-Dezfooli et al., 2016). This measures the margin of $M$ samples on a
sequence of subspaces, generated using blocks of a 2-dimensional discrete
cosine transform (2D-DCT) (Ahmed et al., 1974). Our results are depicted in
Fig. 4. We train the network using our full model (dubbed “LWTA & IBP”), as
well as a version keeping only the proposed LWTA activations. As we observe,
our approach yields a significant increase of the margin in the low to medium
frequencies, exactly where the baseline is adversarially brittle. It is also
notable that the proposed IBP-driven architecture sampling mechanism is
complementary to the proposed (stochastic) LWTA-type activation functions.
Subsequently, we repeat our experiments using a VGG-like architecture trained
on CIFAR-10. We consider: (i) the standard, ReLU-based approach; (ii) our
model using the proposed (stochastic) LWTA units but without IBP-driven
architecture sampling; and (iii) the full model proposed in this work ("LWTA &
IBP"). The corresponding illustrations are depicted in Fig. 5. As we observe,
by using the proposed stochastic LWTA-type units, we obtain a very significant
increase of the margin across the frequency spectrum. Once again, the IBP-
driven architecture sampling process yields further improvements, especially
at the lower end of the spectrum, where it is most needed.
Figure 5: Mean margin over test examples on a VGG-like network trained on
CIFAR-10.
Table 6: Performance of a Softmax network (Verma and Swami, 2019) with and
without adversarial (FGSM-based) training for the MNIST dataset. Block size is
$U=2$, wherever applicable.
Method | Benign | PGD | CW | BSA | RAND
---|---|---|---|---|---
Baseline (ReLU) | .992 | .082 | .540 | .180 | .270
LWTA | .992 | .900 | .990 | 1.0 | .810
LWTA & IBP | .992 | .935 | .990 | 1.0 | .961
Baseline (ReLU) + FGSM | .992 | .755 | .820 | .130 | .835
LWTA + FGSM | .991 | .925 | .990 | 1.0 | .960
LWTA & IBP + FGSM | .991 | .970 | .990 | 1.0 | .985
Table 7: Performance of a Softmax network (Verma and Swami, 2019) with and
without adversarial (FGSM-based) training for the CIFAR-10 dataset. Block size
is $U=2$, wherever applicable.
Method | Benign | PGD | CW | BSA | RAND
---|---|---|---|---|---
Baseline (ReLU) | .864 | .070 | .080 | .040 | .404
LWTA | .867 | .804 | .820 | .870 | .701
LWTA & IBP | .869 | .814 | .860 | .870 | .652
Baseline (ReLU) + FGSM | .780 | .200 | .380 | .340 | .185
LWTA + FGSM | .820 | .812 | .810 | .870 | .440
LWTA & IBP + FGSM | .830 | .825 | .870 | .870 | .790
### 4.4 Adversarial Training
Finally, it is important to analyze the behavior of our model under an
adversarial training setup. The literature has shown that conventionally
formulated networks lose some test-set performance when trained with
adversarial examples; on the other hand, this kind of training renders them
more robust to adversarial attacks.
Therefore, it is interesting to examine how networks formulated under our
model perform in the context of an adversarial training setup. To this end,
and due to space limitations, we limit ourselves to FGSM-based (Goodfellow et
al., 2015) adversarial training of the Softmax network. For the MNIST dataset,
FGSM is run with $\epsilon=0.3$; for CIFAR-10, we set $\epsilon=0.031$. On
this basis, we repeat the experiments of Section 4.3.1 and assess model
performance under all the attacks therein, with the same configuration; the
only exception is the PGD attack, for which we use 40 and 20 PGD steps to
attack the networks trained on MNIST and CIFAR-10, respectively.
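As a minimal illustration of FGSM example crafting (the actual experiments attack deep networks; we use a binary logistic-regression model here so the sketch stays self-contained, and all names are ours):

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """FGSM examples x + eps * sign(grad_x loss) for a binary
    logistic-regression model p = sigmoid(x @ w + b), labels y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y)[:, None] * w[None, :]   # d(cross-entropy)/dx per example
    return x + eps * np.sign(grad_x)
```

During FGSM-based adversarial training, each minibatch is augmented with examples crafted this way against the current parameters before the gradient step.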
We depict the obtained results in Tables 6 and 7. Therein, the first three
lines pertain to training by only making use of "clean" training data; the
last three pertain to FGSM-based adversarial training, as described above. As
we observe, on both datasets our (stochastic) LWTA activations clearly
outperform the ReLU-based baselines, while IBP-driven architecture sampling
offers a further improvement in performance across all attacks.
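For intuition about the activation mechanism behind these results, the following Python sketch mimics a sampling-based LWTA layer with blocks of size $U$: one winner per block is sampled with probability given by the softmax of the block's inputs, passes its value through, and the remaining units output zero. The trained models rely on a Gumbel-Softmax relaxation of this hard sampling, so this is an illustration rather than the exact implementation.

```python
import numpy as np

def stochastic_lwta(h, U=2, rng=None):
    """Sampling-based LWTA sketch: group units into blocks of size U,
    sample one winner per block ~ softmax(h_block), zero out the rest."""
    if rng is None:
        rng = np.random.default_rng()
    blocks = h.reshape(-1, U)
    p = np.exp(blocks - blocks.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # per-block softmax
    winners = np.array([rng.choice(U, p=pi) for pi in p])
    out = np.zeros_like(blocks)
    rows = np.arange(blocks.shape[0])
    out[rows, winners] = blocks[rows, winners]  # only winners fire
    return out.ravel()
```

The stochastic winner selection is what makes the effective network different on every forward pass, which is one source of the robustness observed above.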
## 5 Conclusions
This work addressed adversarial robustness in deep learning. We introduced
novel network design principles founded upon stochastic units formulated
under a sampling-based Local-Winner-Takes-All mechanism. We combined these
with a network subpart omission mechanism driven by an IBP prior, which
further enhances the stochasticity of the model. Our experimental evaluations
provide strong empirical evidence for the efficacy of our approach:
we obtained substantial accuracy improvements under various kinds of
adversarial attacks, with considerably fewer trainable parameters. The
obtained performance gains persist under FGSM-based adversarial training.
Our future work targets novel methods for adversarially training deep
networks formulated under our modeling approach. We specifically target
scenarios that are not limited to the existing paradigm of gradient-based
derivation of the adversarial training examples.
## Acknowledgments
This work has received funding from the European Union’s Horizon 2020 research
and innovation program under grant agreement No 872139, project aiD.
## References
* Ahmed et al., (1974) Ahmed, N., Natarajan, T., and Rao, K. R. (1974). Discrete cosine transform. IEEE Trans. Comput., 23(1):90–93.
* Andersen et al., (1969) Andersen, P., Gross, G. N., Lomo, T., and Sveen, O. (1969). Participation of inhibitory and excitatory interneurones in the control of hippocampal cortical output. In UCLA forum in medical sciences, volume 11, page 415.
* Boloor et al., (2019) Boloor, A., He, X., Gill, C., Vorobeychik, Y., and Zhang, X. (2019). Simple physical adversarial examples against end-to-end autonomous driving models. In Proc. ICESS, pages 1–7. IEEE.
* Buckman et al., (2018) Buckman, J., Roy, A., Raffel, C., and Goodfellow, I. J. (2018). Thermometer encoding: One hot way to resist adversarial examples. In Proc. ICLR.
* Carpenter and Grossberg, (1988) Carpenter, G. A. and Grossberg, S. (1988). The art of adaptive pattern recognition by a self-organizing neural network. Computer, 21(3):77–88.
* Chen et al., (2015) Chen, C., Seff, A., Kornhauser, A., and Xiao, J. (2015). Deepdriving: Learning affordance for direct perception in autonomous driving. In Proc. ICCV, pages 2722–2730.
* Dhillon et al., (2018) Dhillon, G. S., Azizzadenesheli, K., Lipton, Z. C., Bernstein, J., Kossaifi, J., Khanna, A., and Anandkumar, A. (2018). Stochastic activation pruning for robust adversarial defense. arXiv preprint arXiv:1803.01442.
* Douglas and Martin, (2004) Douglas, R. J. and Martin, K. A. (2004). Neuronal circuits of the neocortex. Annu. Rev. Neurosci., 27:419–451.
* Fawzi et al., (2017) Fawzi, A., Moosavi-Dezfooli, S., and Frossard, P. (2017). The robustness of deep networks: A geometrical perspective. IEEE Signal Processing Magazine, 34(6):50–62.
* Fawzi et al., (2018) Fawzi, A., Moosavi-Dezfooli, S., Frossard, P., and Soatto, S. (2018). Empirical study of the topology and geometry of deep networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3762–3770.
* Finlayson et al., (2019) Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, A. L., and Kohane, I. S. (2019). Adversarial attacks on medical machine learning. Science, 363(6433):1287–1289.
* Ghahramani and Griffiths, (2006) Ghahramani, Z. and Griffiths, T. L. (2006). Infinite latent feature models and the indian buffet process. In Advances in neural information processing systems, pages 475–482.
* Goodfellow et al., (2015) Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples. In Proc. ICLR.
* Grossberg, (1982) Grossberg, S. (1982). Contour enhancement, short term memory, and constancies in reverberating neural networks. In Studies of mind and brain, pages 332–378. Springer.
* Guo et al., (2017) Guo, C., Rana, M., Cisse, M., and Van Der Maaten, L. (2017). Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117.
* Jalal et al., (2017) Jalal, A., Ilyas, A., Daskalakis, C., and Dimakis, A. G. (2017). The robust manifold defense: Adversarial training using generative models. arXiv preprint arXiv:1712.09196.
* Jang et al., (2017) Jang, E., Gu, S., and Poole, B. (2017). Categorical reparametrization with gumbel-softmax. In Proc ICLR.
* Jiang et al., (2019) Jiang, L., Ma, X., Chen, S., Bailey, J., and Jiang, Y.-G. (2019). Black-box adversarial attacks on video recognition models. In Proc. ACM International Conference on Multimedia, pages 864–872.
* Kabilan et al., (2018) Kabilan, V. M., Morris, B., and Nguyen, A. (2018). Vectordefense: Vectorization as a defense to adversarial examples. arXiv preprint arXiv:1804.08529.
* Kandel et al., (2000) Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., and Hudspeth, A. J. (2000). Principles of neural science, volume 4. McGraw-Hill, New York.
* Kingma and Welling, (2014) Kingma, D. P. and Welling, M. (2014). Auto-encoding variational bayes. In Bengio, Y. and LeCun, Y., editors, ICLR.
* Krizhevsky, (2009) Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Technical report.
* Kumaraswamy, (1980) Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(1):79 – 88.
* Kurakin et al., (2016) Kurakin, A., Goodfellow, I., and Bengio, S. (2016). Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
* Lansner, (2009) Lansner, A. (2009). Associative memory models: from the cell-assembly theory to biophysically detailed cortex simulations. Trends in neurosciences, 32(3):178–186.
* LeCun et al., (2010) LeCun, Y., Cortes, C., and Burges, C. (2010). MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
* Lee and Seung, (1999) Lee, D. D. and Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791.
* Maddison et al., (2016) Maddison, C. J., Mnih, A., and Teh, Y. W. (2016). The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712.
* Madry et al., (2017) Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
* Moosavi-Dezfooli et al., (2016) Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. (2016). Deepfool: A simple and accurate method to fool deep neural networks. In Proc. CVPR, pages 2574–2582.
* Olshausen and Field, (1996) Olshausen, B. A. and Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609.
* (32) Ortiz-Jimenez, G., Modas, A., Moosavi-Dezfooli, S.-M., and Frossard, P. (2020a). Hold me tight! Influence of discriminative features on deep network boundaries. In Advances in Neural Information Processing Systems 34.
* (33) Ortiz-Jimenez, G., Modas, A., Moosavi-Dezfooli, S.-M., and Frossard, P. (2020b). Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness.
* Panousis et al., (2019) Panousis, K., Chatzis, S., and Theodoridis, S. (2019). Nonparametric Bayesian deep networks with local competition. In Proc. ICML, pages 4980–4988.
* Papernot et al., (2017) Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., and Swami, A. (2017). Practical black-box attacks against machine learning. In Proc. ACM Asia conference on computer and communications security, pages 506–519.
* Prakash et al., (2018) Prakash, A., Moran, N., Garber, S., DiLillo, A., and Storer, J. (2018). Deflecting adversarial attacks with pixel deflection. In Proc. CVPR, pages 8571–8580.
* Shen et al., (2017) Shen, S., Jin, G., Gao, K., and Zhang, Y. (2017). Ape-gan: Adversarial perturbation elimination with gan. arXiv preprint arXiv:1707.05474.
* Shrivastava et al., (2017) Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., and Webb, R. (2017). Learning from simulated and unsupervised images through adversarial training. In Proc. CVPR, pages 2107–2116.
* Song et al., (2017) Song, Y., Kim, T., Nowozin, S., Ermon, S., and Kushman, N. (2017). Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766.
* Srivastava et al., (2013) Srivastava, R. K., Masci, J., Kazerounian, S., Gomez, F., and Schmidhuber, J. (2013). Compete to compute. In Advances in neural information processing systems, pages 2310–2318.
* Stefanis, (1969) Stefanis, C. (1969). Interneuronal mechanisms in the cortex. In UCLA forum in medical sciences, volume 11, page 497.
* Teh et al., (2007) Teh, Y. W., Görür, D., and Ghahramani, Z. (2007). Stick-breaking construction for the indian buffet process. In Proc. AISTATS, pages 556–563.
* Theodoridis, (2020) Theodoridis, S. (2020). Machine Learning: A Bayesian and Optimization Perspective. Academic Press, 2nd edition.
* Tramèr et al., (2017) Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. (2017). Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204.
* Verma and Swami, (2019) Verma, G. and Swami, A. (2019). Error correcting output codes improve probability estimation and adversarial robustness of deep neural networks. In Advances in Neural Information Processing Systems, pages 8643–8653.
* Xie et al., (2017) Xie, C., Wang, J., Zhang, Z., Ren, Z., and Yuille, A. (2017). Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991.
* Zhang et al., (2019) Zhang, H., Chen, H., Song, Z., Boning, D., Dhillon, I. S., and Hsieh, C.-J. (2019). The limitations of adversarial training and the blind-spot attack. arXiv preprint arXiv:1901.04684.
11institutetext: Kumar P. 22institutetext: Faculty of Civil and Environmental
Engineering,
Technion-Israel Institute of Technology, Haifa, Israel.
22email: [email protected]
Fernández E. 33institutetext: Department of Aerospace and Mechanical
Engineering,
University of Liège, 4000 Liège, Belgium.
33email: [email protected]
# A numerical scheme for filter boundary conditions in topology optimization
on regular and irregular meshes
Prabhat Kumar Eduardo Fernández
(Received: date / Accepted: date)
###### Abstract
In density-based topology optimization, design variables associated with the
boundaries of the design domain require special treatment to negate boundary
effects arising from the filtering technique. An effective approach to deal
with filter boundary conditions is to extend the design domain beyond the
borders using void densities. Although this approach is easy to implement, it
introduces the extra computational cost of storing the additional Finite
Elements (FEs). This work proposes a numerical technique for the density
filter that emulates an extension of the design domain; it thus avoids
boundary effects without demanding additional computational cost and is very
simple to implement on both regular and irregular meshes. The numerical
scheme is demonstrated on the compliance minimization problem for
two-dimensional design domains. In addition, this article discusses the use
of a real extension of the design domain in which the Finite Element Analysis
(FEA) and the volume restriction are affected. Through a quantitative study,
it is shown that affecting the FEA or the volume restriction with an
extension of the FE mesh promotes the disconnection of material from the
boundaries of the design domain, which is interpreted as a numerical
instability promoting convergence towards local optima.
###### Keywords:
Density filter · Robust design · Boundary padding · SIMP
††journal: PREPRINT
## 1 Introduction
Filtering techniques in density-based Topology Optimization (TO) are efficient
approaches to overcome numerical instabilities (Sigmund, 2007), such as the
presence of checkerboards and the mesh dependency (Sigmund and Petersson,
1998). In addition, due to their synergistic roles in the minimum feature size
control techniques, their usage has been very popular in the last decades
(Lazarov et al, 2016). In this context, the density filter (Bourdin, 2001;
Bruns and Tortorelli, 2001) stands out since, when combined with projection
techniques, it provides simultaneous control over the minimum size of the
solid and void phases (Wang et al, 2011).
Despite the widespread acceptance that the density filter has received in the
TO community, there still exist some issues of concern. For example, when
design variables associated to the boundaries of the design domain are
filtered, the filtering region splits (Clausen and Andreassen, 2017). This
introduces two major deficiencies in a density-based TO setting. First, the
minimum size at the boundaries of the design domain becomes smaller than the
desired value due to the reduced size of the filtering region. Second, per
the definition of the density filter (Bourdin, 2001; Bruns and Tortorelli,
2001), reducing the filtering region places more weight on design variables
near the edge, which promotes material accumulation at the boundaries of the
design domain (Clausen and Andreassen, 2017).
It has been shown that extending the design domain with void Finite Elements
(FEs) is an effective approach to avoid the splitting of filtering regions at
the edge of the design domain (Clausen and Andreassen, 2017), and thereby
avoiding the boundary issues. However, this extension of the design domain,
also known as boundary padding, introduces other practical difficulties. For
instance, the extension requires extra computer memory to store the
additional FEs, which becomes even more pronounced for large-scale 3D design
problems. For this reason, some authors have proposed numerical treatments
that simulate an extension of the design domain (Wallin et al, 2020). For
example, Fernández et al (2020) simulate the effect of extending the mesh by
modifying the weights of the weighted average defining the density filter.
Although this avoids the splitting of filtering regions at the edges, the
scheme proposed by Fernández et al (2020) is valid only for regular meshes.
Another issue stemming from the extension of the design domain involves the
FE Analysis (FEA) and the volume restriction. Due to the scarce discussion of
this subject in the literature, the consequences of applying the boundary
padding to the FEA and the volume restriction remain unclear; dedicated
discussions of these consequences are therefore desirable.
Inspired by the method of Fernández et al (2020), we present two methods that
simulate an extension of the design domain, thus no real extension is needed
to address boundary issues related to the density filter. These methods differ
in terms of implementation, but both can be used in regular and irregular
meshes. Then, this article presents a quantitative study regarding the effect
of applying a real extension of the design domain that affects the FEA and the
volume restriction. The study is carried out on the 2D MBB beam under the
minimization of compliance subject to a volume restriction. The set of MBB
beam designs shows that boundary padding schemes affecting the FEA and volume
restriction promote disconnection of the material from the boundaries of the
design domain. This observation is interpreted as a numerical instability that
leads topology optimization towards a local optimum.
The following section presents the topology optimization problem considered in
this paper. Section 3 details the two numerical schemes that simulate the
boundary padding on regular and irregular meshes. Section 4 presents a set of
numerical results to assess the effect of applying a boundary padding that
affects the FEA and/or the volume restriction. The conclusions of the paper
are presented in Section 5. Lastly, Section 6 provides the replication of
results for this work through a set of MATLAB codes.
## 2 Problem Definition
This paper considers a density-based TO framework in association with the
modified SIMP law (Sigmund, 2007). Herein, TO problems are regularized using a
three-field technique (Sigmund and Maute, 2013). The three fields, denoted as
$\bm{\rho}$, $\bm{\tilde{\rho}}$ and $\bm{\bar{\rho}}$, correspond to the
field of design variables, the filtered field, and the projected field,
respectively.
As the padding schemes presented in this paper are developed for the density
filter, the weighted average defining it is presented here for the sake of
completeness. The traditional density filter (Bourdin, 2001; Bruns and
Tortorelli, 2001) is defined as follows:
$\tilde{\rho}_{i}=\frac{\displaystyle\sum_{j=1}^{N}\rho_{j}\mathrm{v}_{j}\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j})}{\displaystyle\sum_{j=1}^{N}\mathrm{v}_{j}\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j})}\;,$
(1)
where ${\rho}_{i}$ and $\tilde{\rho}_{i}$ are the design variable and its
corresponding filtered variable for the $i^{\text{th}}$ FE, respectively. The
element volume is denoted by $\mathrm{v}$, and the weight of the design
variable $\rho_{j}$ is denoted by
$\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j})$. Here, the weighting function is
defined as
$\mathrm{w}(\mathrm{\mathbf{x}}_{i},\mathrm{\mathbf{x}}_{j})=\mathrm{max}\left(0\;,\;1-\frac{\|\mathrm{\mathbf{x}}_{i}-\mathrm{\mathbf{x}}_{j}\|}{\mathrm{r}_{\mathrm{fil}}}\right),$
(2)
where $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ indicate the centroids of the
$i^{\text{th}}$ and $j^{\text{th}}$ FEs, respectively, and $\|\cdot\|$
represents the Euclidean distance between the two points.
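For illustration, a dense Python sketch of Eqs. (1) and (2) follows. The replication codes attached to this work are in MATLAB; this fragment is only a sketch and builds the full $N\times N$ weight matrix, which a practical implementation would store sparsely.

```python
import numpy as np

def density_filter(rho, centroids, vol, r_fil):
    """Linear-hat density filter of Eqs. (1)-(2):
    rho_tilde_i = sum_j rho_j v_j w_ij / sum_j v_j w_ij."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    w = np.maximum(0.0, 1.0 - d / r_fil)   # Eq. (2), zero outside r_fil
    num = w @ (rho * vol)                  # numerator of Eq. (1)
    den = w @ vol                          # denominator of Eq. (1)
    return num / den
```

Note that a uniform density field is reproduced exactly, since the numerator is then a constant multiple of the denominator.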
To provide a comparative study in view of different padding schemes, we impose
simultaneous control over the minimum size of the solid and void phases. The
robust design approach based on the eroded $\bm{\bar{\rho}}^{\mathrm{ero}}$,
intermediate $\bm{\bar{\rho}}^{\mathrm{int}}$ and dilated
$\bm{\bar{\rho}}^{\mathrm{dil}}$ material descriptions is adopted (Sigmund,
2009; Wang et al, 2011). The fields involved in the robust formulation are
built from the filtered field using a smoothed Heaviside function
$H(\tilde{\rho},\,\beta,\,\eta)$, which is controlled by a steepness parameter
$\beta$, and a threshold $\eta$, exactly as described in numerous papers in
the literature (Wang et al, 2011; Amir and Lazarov, 2018). For the sake of
brevity and without losing generality, the thresholds that define the eroded,
intermediate and dilated designs are set to 0.75, 0.50 and 0.25, respectively.
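The exact form of the smoothed Heaviside function is not reproduced in this section; a common choice (Wang et al, 2011), which the following Python sketch assumes, is $H(\tilde{\rho},\beta,\eta)=\frac{\tanh(\beta\eta)+\tanh(\beta(\tilde{\rho}-\eta))}{\tanh(\beta\eta)+\tanh(\beta(1-\eta))}$.

```python
import numpy as np

def heaviside_projection(rho_t, beta, eta):
    """Smoothed Heaviside projection of the filtered field; the tanh-based
    form is an assumption here (common in the literature, Wang et al, 2011)."""
    return (np.tanh(beta * eta) + np.tanh(beta * (rho_t - eta))) / (
        np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta)))

def robust_fields(rho_t, beta):
    """Eroded / intermediate / dilated fields from one filtered field,
    using the thresholds 0.75, 0.50, 0.25 stated in the text."""
    return {th: heaviside_projection(rho_t, beta, th)
            for th in (0.75, 0.50, 0.25)}
```

As expected from the robust formulation, the eroded field is pointwise below the intermediate field, which is below the dilated field.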
We consider the minimization of compliance subject to a volume restriction. In
view of the robust formulation (Amir and Lazarov, 2018), the TO problem can be
written as:
$\displaystyle\begin{split}{\min_{\bm{\rho}}}&\quad
c(\bm{\bar{\rho}}^{\mathrm{ero}})\\\
&\quad\mathbf{v}^{\intercal}\bm{\bar{\rho}}^{\mathrm{dil}}\leq
V^{\mathrm{dil}}\left(V^{\mathrm{int}}\right)\\\
&\quad\bm{0}\leq\bm{\rho}\leq\bm{1}\end{split}$ (3)
where $c(\bm{\bar{\rho}}^{\mathrm{ero}})$ is the compliance of the eroded
design, $\mathbf{v}$ is the vector of elemental volumes, and
$V^{\mathrm{dil}}$ is the upper bound of the volume constraint, which is
scaled according to the desired volume $V^{\mathrm{int}}$ of the intermediate
design. The optimization problem is solved using the Optimality Criteria (OC)
algorithm within a Nested Analysis and Design (NAND) approach. Readers may
refer to Wang et al (2011); Amir and Lazarov (2018) for a detailed overview
of the optimization problem formulated in Eq. (3).
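The OC update itself is not detailed here; the following Python sketch shows the standard heuristic with bisection on the volume-constraint multiplier. Element volumes are assumed uniform, the bisection bounds are illustrative, and the move limit matches the value 0.05 stated later in this paper.

```python
import numpy as np

def oc_update(rho, dc, dv, v_target, move=0.05, rho_min=0.0, rho_max=1.0):
    """Standard Optimality Criteria update sketch (in the spirit of the
    88-line code): bisection on the Lagrange multiplier of the volume
    constraint, with a move limit on the design variables."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        scale = np.sqrt(np.maximum(-dc, 0.0) / (lmid * dv))
        rho_new = np.clip(rho * scale,
                          np.maximum(rho - move, rho_min),
                          np.minimum(rho + move, rho_max))
        if rho_new.mean() > v_target:   # too much material: raise multiplier
            l1 = lmid
        else:
            l2 = lmid
    return rho_new
```

Each outer iteration of the NAND loop performs an FEA, evaluates the sensitivities `dc` and `dv`, and applies one such update.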
The MBB beam design problem is chosen for the study. The design domain and its
extension to avoid filter boundary instabilities are displayed in Fig. 1.
$t_{\text{pad}}$ indicates the padding distance, which, in general, should be
kept greater than or equal to one filter radius (Clausen and Andreassen, 2017). It
is well known that due to an offset between the eroded and intermediate
designs, the eroded design may not reach the borders of the design domain and
thus, may get detached from the boundary conditions (Clausen and Andreassen,
2017). To avoid numerical instabilities due to such disconnections, local
modifications at the boundary conditions and external forces are required. For
that, one could either exclude the padding around the boundary conditions and
external forces or use solid FEs. In the MBB beam of Fig. 1, the boundary
padding is excluded at the line of symmetry, while solid FEs are placed around
the force and around the remaining boundary conditions. The size of each solid
region is equal to the minimum feature size desired for the intermediate
design.
Unless otherwise specified, the topology optimization problems are solved with
the OC method using a move limit for design variables of 0.05. The SIMP
penalty parameter is set to 3 and the $\beta$ parameter of the smoothed
Heaviside function is initialized at 1.5 and every 40 iterations is multiplied
by 1.5 until a maximum of 38 is reached. The MBB beam is discretized using
$300\times 100$ quadrilateral FEs for the regular mesh, and 30,000 polygonal
FEs for the irregular mesh. The minimum size of the solid and void phases is
defined by a circle of radius equal to 4/300$\times L$, where $L$ is the
length of the MBB beam. Thus, for the regular mesh cases, the minimum size
radius is set to 4 FEs.
Figure 1: Symmetric half MBB beam design domain and its extension. The size of
the design domain is 3 $\times$ 1 unit2, and $t_{\text{pad}}$ denotes the
padding for the domain. Fixed solid regions are shown in black boxes.
## 3 Boundary padding on regular and irregular meshes
This section presents boundary padding schemes aimed at avoiding filter
boundary effects. The schemes are proposed for both regular and irregular
meshes and do not require real extensions of the domain. For the regular mesh
cases, we use the 88-line MATLAB TO code (Andreassen et al, 2011), whereas
the PolyTop MATLAB TO code (Talischi et al, 2012) is used for the irregular
mesh cases. The implementation procedures for the presented padding schemes
are described below.
The numerical schemes are based on the method proposed by Fernández et al
(2020); however, they are generalized herein to extend their applicability to
irregular meshes as well. The schemes modify the density filter of Eq. (1) to
emulate an extension of the domain. To explain the rationale behind the
proposed schemes, we first analyze the effect on the density filter of
applying a real extension of the design domain.
Figure 2: The filtering regions on the interior and at the boundaries of the
design domain.
The extension of the design domain modifies the density filter in Eq. (1)
only for those variables whose filtering regions extend beyond the boundaries
of the design domain, as shown in Fig. 2. Numerically, the boundary padding
incorporates more elements within the filter, which increases the volume of
the filtering region. Thus, two key observations on the density filter (Eq.
1) can be made when extending the design domain using void densities:
* $\bullet$
The denominator of the density filter, which is
$\sum_{j=1}^{N}\mathrm{v}_{j}\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j})$, grows
for those FEs whose filtering region exceeds the boundaries of the domain.
* $\bullet$
The numerator of the density filter, which is
$\sum_{j=1}^{N}\rho_{j}\mathrm{v}_{j}\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j})$,
remains the same with or without boundary padding, because the extension is
performed with void densities ($\rho=0$).
These two observations are valid for both regular and irregular meshes, and
also for any design variable located at the edges of the design domain.
Therefore, to represent an extension of the design domain in the density
filter, it is sufficient to modify the denominator of Eq. (1). This principle
is used by Fernández et al (2020) for 2D and 3D design domains. The authors
simulate the extension of the design domain in the density filter by using
the same denominator throughout the design domain. The common denominator is
computed using a filtering region that is not split by the edges of the
domain; thus, it represents a boundary padding for design variables located
near the boundaries of the design domain. However, an irregular mesh does not
allow such a prescription of the size of the filtering regions influenced by
an extension of the design domain, so we propose two different approaches.
These approaches are illustrated in Fig. 3. The first is named the Mesh
Mirroring (MM) approach, in which the inner mesh is reflected with respect to
the boundaries of the design domain, permitting the simulation of an external
mesh when computing the density filter. The second is termed the Approximate
Volume (AV) approach, which assumes that the filter region has the size of a
full circle. It is shown hereafter that both approaches lead to similar
results but offer different implementation advantages. The numerical
treatments of the MM and AV approaches are described below.
Figure 3: The two approaches to extend the filtering region at the boundaries
of the design domain.
### 3.1 Mesh Mirroring
The Mesh Mirroring (MM) approach is suitable for design domains whose
boundaries are defined by straight and orthogonal lines. For such cases the
mirroring process becomes simple as it is sufficient to shift the center of
the FEs to obtain the required information for the numerical padding. For
instance, considering Fig. 3, the coordinate of the mirrored element can be
computed as:
$\mathbf{x}_{j}^{m}=\mathbf{x}_{j}+2(\mathbf{x}_{\mathrm{b}}-\mathbf{x}_{j}),$
(4)
where $\mathbf{x}_{j}^{m}$ is the coordinate of the mirrored copy of the
$j^{\text{th}}$ FE and $\mathbf{x}_{\mathrm{b}}$ is the point on the design
boundary closest to $\mathbf{x}_{j}$. To emulate the extension of the design
domain in the filter, the weights of the mirrored elements have to be
included in the denominator, which is performed as follows:
$\tilde{\rho}_{i}=\frac{\displaystyle\sum_{j=1}^{N}\rho_{j}\mathrm{v}_{j}\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j})}{\displaystyle\sum_{j=1}^{N}\mathrm{v}_{j}\left[\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j})+\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j}^{m})\right]}\;\;.$
(5)
Equation (5) is used instead of (1) when computing the filtered field using
the MM approach.
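A possible vectorized Python sketch of Eqs. (4) and (5) follows (the attached replication codes are in MATLAB, so the names here are ours). It assumes the caller supplies, for each centroid, its closest point on the design boundary.

```python
import numpy as np

def mm_filter(rho, centroids, vol, r_fil, boundary_proj):
    """Density filter with Mesh Mirroring padding (Eqs. (4)-(5)):
    mirrored (void) elements enlarge only the denominator.
    boundary_proj[i] is the boundary point closest to centroids[i]."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    w = np.maximum(0.0, 1.0 - d / r_fil)
    xm = centroids + 2.0 * (boundary_proj - centroids)   # Eq. (4): reflection
    dm = np.linalg.norm(centroids[:, None, :] - xm[None, :, :], axis=2)
    wm = np.maximum(0.0, 1.0 - dm / r_fil)               # weights of mirrors
    return (w @ (rho * vol)) / (w @ vol + wm @ vol)      # Eq. (5)
```

Mirrors of interior elements fall outside the filter radius, so only elements near the boundary are actually affected.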
### 3.2 Approximate Volume
When the boundaries of the design domain are defined by curved lines, the Mesh
Mirroring approach is no longer representative of an external mesh. In this
case, we propose the Approximate Volume (AV) approach, which defines the
density filter as follows:
$\tilde{\rho}_{i}=\frac{\displaystyle\sum_{j=1}^{N}\rho_{j}\mathrm{v}_{j}\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j})}{V_{\mathrm{fil}}}\;,$
(6)
where $V_{\mathrm{fil}}$ represents the volume of the cone defined by the
weighting function $w$, which is evaluated as
$V_{\mathrm{fil}}=\left\\{\begin{matrix}\displaystyle\frac{\pi\>r_{\mathrm{fil}}^{2}}{3}\;\quad,\;\;\quad\mathrm{if}\,\,\|\mathbf{x}_{i}-\mathbf{x}_{\mathrm{b}}\|<r_{\mathrm{fil}}\\\\[12.91663pt]
\displaystyle\sum_{j=1}^{N}\mathrm{v}_{j}\mathrm{w}(\mathbf{x}_{i},\mathbf{x}_{j}),\,\quad\text{otherwise}\end{matrix}\right.$
(7)
where the condition $\|\mathbf{x}_{i}-\mathbf{x}_{\mathrm{b}}\|<r_{\mathrm{fil}}$ means
that the filtering region reaches the boundaries of the design domain, as
shown in Fig. 4. The weighted volume $V_{\mathrm{fil}}$ in Eq. (7) represents
an extension at all boundaries of the design domain. Where local
modifications of the padding scheme are required, e.g., at the symmetry line
of the MBB beam, the volume of a sectioned cone should be computed instead,
as illustrated in Fig. 4. The expression to compute the volume of a sectioned
cone is provided in the attached MATLAB codes.
Although this manuscript is focused on the 2D case, it is worth mentioning
that in the 3D case the approximate volume corresponds to
$\pi\,r_{\mathrm{fil}}^{3}/3$ for the weighting function defined in Eq. (2).
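A possible Python sketch of Eqs. (6) and (7) follows (names are ours, not from the attached MATLAB codes). It assumes the caller supplies each element's distance to the nearest boundary, and it uses the full-cone volume for every boundary element, i.e., without the sectioned-cone modification.

```python
import numpy as np

def av_filter(rho, centroids, vol, r_fil, dist_to_boundary):
    """Approximate Volume padding (Eqs. (6)-(7)): elements whose filtering
    region reaches a boundary use the analytic cone volume pi*r^2/3 as
    denominator; interior elements keep the discretized sum."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    w = np.maximum(0.0, 1.0 - d / r_fil)
    den = w @ vol                          # discretized cone volume
    near = dist_to_boundary < r_fil        # filtering region is cut
    den[near] = np.pi * r_fil**2 / 3.0     # exact cone volume, Eq. (7)
    return (w @ (rho * vol)) / den         # Eq. (6)
```

For boundary elements, the analytic cone volume exceeds the deficient discrete sum, which is exactly the effect of padding with void densities.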
Figure 4: The Approximate Volume approach for the half MBB beam depicted in
(b). Here, $V_{\mathrm{fil}}$ represents (a) the volume of a sectioned cone,
(c) the volume of a cone, and (d) the volume of a cone discretized in the FE
mesh.
Figure 5: (a) Hook design domain provided by PolyTop. (b) The optimized
design using the default PolyFilter.m code,
$c(\bm{\bar{\rho}}^{\mathrm{int}})=1.00$. (c) The optimized design using the
PolyFilter_with_padding.m code, $c(\bm{\bar{\rho}}^{\mathrm{int}})=0.83$.
$c(\bm{\bar{\rho}}^{\mathrm{int}})$ represents the compliance evaluated for
the intermediate design.
To illustrate the robustness and efficacy of the AV approach, we optimize the
Hook design that comes in the PolyTop code (see Fig. 5(a)). The optimized Hook
designs without and with boundary padding (AV approach) are shown in Figs.
5(b) and 5(c), respectively. It is noticed that the optimized design including
boundary padding meets the imposed minimum size (Fig. 5(c)), which is not the
case with the design of Fig. 5(b), especially at the edges of the design
domain. It is recalled that the formulation of the optimization problems in
Figs. 5(b) and 5(c) is exactly the same; the latter only modifies the
denominator of the density filter as stated in Eqs. (6) and (7). Readers may
refer to the attached MATLAB code PolyFilter_with_padding.m for implementation
details of the Approximate Volume approach in the Hook domain of Fig. 5(a).
Code | No Treatment | Real Extension | Mesh Mirroring (MM) | Approximate Volume (AV)
---|---|---|---|---
top-88 | $c(\bm{\bar{\rho}}^{\mathrm{int}})=310.2$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=331.5$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=326.5$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=326.9$
PolyTop | $c(\bm{\bar{\rho}}^{\mathrm{int}})=311.0$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=329.1$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=327.8$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=327.4$
Table 1: Optimized MBB beams for compliance minimization using different
numerical treatments at the boundaries of the design domain. The problem is
solved using a regular mesh (quadrilateral FEs) and an irregular mesh
(polygonal FEs).
To validate the proposed padding schemes, the MBB beam is solved for a volume
constraint of $30\%$. Both regular and irregular meshes are considered. For
each discretization, four results are reported. One solution represents the
reference case where no boundary treatment is applied to the density filter.
The other three include boundary padding via the real extension of the design
domain, the Mesh Mirroring method, and the Approximate Volume method,
respectively. The results are summarized in Table 1.
Regular Mesh
---
FEs | Mirroring Mesh | Approximate Volume
$150\times 50$ | $c=328.1$ | $c=328.1$
$75\times 25$ | $c=377.3$ | $c=377.4$
Irregular Mesh
FEs | Mirroring Mesh | Approximate Volume
$7500$ | $c=364.5$ | $c=362.1$
$1875$ | $c=373.4$ | $c=374.6$
Table 2: MBB beam designs using the proposed boundary padding schemes on
coarser discretizations with respect to solutions shown in Table 1. The first
column indicates the number of FEs.
As noted by Clausen and Andreassen (2017) and Wallin et al (2020), the
reference solutions with no boundary treatment (Table 1 column 1) do not meet
the minimum length scale at the bottom edge of the design domain, and they
feature a material connection that is tangent to the boundaries of the design
domain. As expected, these shortcomings are mitigated by incorporating the
boundary padding on the filter (Table 1 columns 2-4).
The proposed padding schemes, i.e., the Mesh Mirroring (MM) and the
Approximate Volume (AV), give similar results in both discretizations (Table 1
columns 3 and 4). The main difference between the two methods is the ease of
implementation. This can be appreciated in the attached codes, where the MM
approach requires fewer lines of code than the AV approach for the MBB beam
design domain, because the latter needs to compute the volume of a sectioned
cone in the proximity of the symmetry axis, as illustrated in Fig. 4.
Figure 6: Percentage difference between the volume of a perfect cone (as in
Fig. 4(c)) and the volume of a discretized cone (using a regular mesh, as in
Fig. 4(d)).
To assess the mesh dependency of the proposed methods, the MBB beam is now
solved on coarser discretizations, specifically, using 4 and 16 times fewer
FEs than those in Table 1. This is done for both regular and irregular
meshing. The results are shown in Table 2. The solutions provided by the MM
and AV methods are practically the same in all the discretizations and in both
regular and irregular meshing. This shows that the proposed methods allow the
density filter to be manipulated at the boundaries of the design domain
without introducing mesh dependency. This is expected for the MM method, since
it simulates an external mesh accurately by reflecting the internal mesh; the
mesh independence of the AV method, on the other hand, is not obvious. It can
be explained by the fact that the grouping criterion defining the filtering
region is evaluated at the centers of the FEs (a midpoint Riemann sum). This
allows the volume of the discretized cone (Fig. 4(d)) to be approximated with
significant precision by the volume of a perfect cone (Fig. 4(c)). This can be
seen in the graph of Fig. 6, which plots the percentage difference between the
volume of a perfect cone (as in Fig. 4(c)) and the volume of a cone
discretized in a regular mesh (as in Fig. 4(d)). We note that a similar graph
is obtained for the 3D case.
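The midpoint-sum argument can be checked directly. The following Python sketch (an illustrative check in the spirit of Fig. 6, assuming the linear cone weighting; it does not reproduce the paper's exact curve) computes the percentage difference between the perfect cone volume $\pi\,r_{\mathrm{fil}}^{2}/3$ and its cell-centre (midpoint Riemann) sum for several filter radii measured in elements:

```python
import numpy as np

def pct_diff(r_elems):
    # Percentage difference between the perfect cone volume pi*r^2/3 and its
    # midpoint (cell-centre) Riemann sum on a regular unit grid, for a filter
    # radius of r_elems elements; linear cone weighting assumed.
    n = int(np.ceil(r_elems))
    off = np.arange(-n, n + 1)
    dx, dy = np.meshgrid(off, off, indexing="ij")
    d = np.hypot(dx, dy)
    discrete = np.maximum(0.0, 1.0 - d / r_elems).sum()  # unit cell volumes
    exact = np.pi * r_elems**2 / 3.0
    return 100.0 * abs(discrete - exact) / exact

print([round(pct_diff(r), 3) for r in (2, 4, 8, 16)])
```

Even for small radii the difference stays below one percent, which is consistent with the claim that evaluating the grouping criterion at element centers keeps the AV approximation essentially mesh-independent.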
An interesting observation from Table 1 is that, although the real extension
of the design domain (column 2) removes the boundary issues associated with
the density filter, it yields a different solution than the MM and AV methods.
This difference motivates the study presented in the next section.
$V^{\mathrm{int}}$ | (i) Filter | (ii) Filter + FEA | (iii) Filter + Volume | (iv) Filter+FEA+Volume
---|---|---|---|---
0.3 | $c={\color{blue}326.5}$, $\mathrm{V}=0.300$ | $c=331.2$, $\mathrm{V}=0.300$ | $c=330.0$, $\mathrm{V}=0.300$ | $c=330.1$, $\mathrm{V}=0.300$
0.4 | $c={\color{blue}240.6}$, $\mathrm{V}=0.400$ | $c=244.4$, $\mathrm{V}=0.400$ | $c=245.5$, $\mathrm{V}=0.400$ | $c=245.5$, $\mathrm{V}=0.400$
0.5 | $c={\color{blue}196.7}$, $\mathrm{V}=0.500$ | $c=199.7$, $\mathrm{V}=0.500$ | $c=205.4$, $\mathrm{V}=0.500$ | $c=205.3$, $\mathrm{V}=0.500$
Table 3: Optimized MBB beams for compliance minimization. Different padding
scenarios are considered. The compliance $c$ is evaluated on the intermediate
design (the compliance in blue is the lowest of the row).
## 4 Padding the finite element analysis and the volume constraint
The proposed boundary padding schemes, which compensate boundary effects due
to filtering, neither influence the FE model nor affect the design domain, as
the MM and AV approaches do not require introducing additional FEs beyond the
boundaries of the design domain. Thus, unlike most related works in the
literature, the FE Analysis (FEA) and the volume restriction are not affected
by the proposed boundary padding approaches. However, the consequences of
excluding the FEA and/or the volume restriction from the padding scheme are
not yet clear, since no work in the literature addresses this concern. For
this reason, in this section we compare results obtained with different
boundary padding scenarios.
As mentioned before, a real extension of the design domain is required for
applying the padding scheme to the FEA and/or volume constraint. In this work,
the real extension distance ($t_{\mathrm{pad}}$) is set equal to the dilation
distance, i.e., the distance between the intermediate and dilated designs.
This extension distance is chosen for the following two reasons.
First, as the dilation projection extends beyond the design domain, the
corresponding design should guide the dimension of the extension (Clausen and
Andreassen, 2017). Therefore, extending the design domain by the dilation
distance is sufficient to guarantee the contribution of the dilated design.
Second, any undesirable boundary effect is assumed to be proportional to the
extension distance. As the dilation distance is the smallest extension that
can be applied to project the dilated field, it is assumed to be the distance
that reduces any undesirable numerical effects that might emerge as a result
of the real extension of the design domain. Given the combination of
parameters chosen to project the eroded, intermediate and dilated designs, it
turns out that the dilation distance corresponds to
$t_{\mathrm{dil}}=0.6r_{\mathrm{min}}=0.3r_{\mathrm{fil}}$ (Fernández et al,
2020).
To understand the effect of a real extension of the design domain, a set of
solutions are generated considering four padding scenarios: (i) the filter,
(ii) the filter and the FEA, (iii) the filter and the dilated volume, and (iv)
the filter, the FEA and the dilated volume. Case (i) is performed using the MM
approach, while cases (ii)-(iv) use real extensions of the design domain.
Here, a regular mesh is considered and an MBB beam is discretized using
$300\times 100$ quadrilateral FEs. Three different volume restrictions
$V^{\mathrm{int}}$ are used. The set of twelve results (4 cases and 3 volume
restrictions) are summarized in Table 3. For each optimized design, the
compliance of the intermediate design ($c$) and the final volume ($V$) are
reported.
Optimality Criteria
---
move | (i) Filter | (iv) Filter+FEA+Volume
0.10 | $c={\color{blue}327.8}$ | $c=337.7$
0.15 | $c={\color{blue}320.9}$ | $c=333.4$
0.20 | $c={\color{blue}317.4}$ | $c=328.9$
MMA
move | (i) Filter | (iv) Filter+FEA+Volume
0.30 | $c={\color{blue}326.5}$ | $c=331.1$
0.50 | $c={\color{blue}314.4}$ | $c=330.0$
0.70 | $c={\color{blue}309.2}$ | $c=319.4$
Table 4: The MBB beam is solved for a volume limit of 30$\%$ and for 3
different external move limits using the optimality criteria and the MMA
methods.
The results in Table 3 show that the best-performing designs are obtained when
the padding scheme is applied only to the density filter (i). Given that all
designs in Table 3 meet the volume restriction precisely, the difference in
performance between cases (i) and (ii-iv) is exclusively attributed to the
arrangement of material within the design domain. Notably, cases (ii-iv)
generate designs that tend to separate from the boundaries of the design
domain, which would explain their reduced performance compared to case (i).
This performance penalty could be interpreted, at first glance, as a numerical
instability introduced by the void elements surrounding the design domain. To
rule out that this instability stems from deficiencies of the optimizer, a set
of results is obtained with different optimizer parameters.
In the following, cases (i) and (iv) are considered with a volume constraint
$V^{\mathrm{int}}=0.3$. The optimization problem is solved using the exact set
of parameters described above, but with variations in the optimizer. Table 4
summarizes the results for different move limits of the design variables, a
parameter denoted as move in the attached MATLAB code. Here, the OC
(Optimality Criteria) and the MMA (Method of Moving Asymptotes, see Svanberg,
1987) methods are used.
Pad. | Mesh | $\beta_{\mathrm{ini}}=1.0$ | $\beta_{\mathrm{ini}}=1.5$
---|---|---|---
(i) Filter | Regular | $c=1631.8$ | $c=1677.6$
(i) Filter | Irregular | $c=1649.2$ | $c=1718.7$
(iv) Filter+FEA+Volume | Regular | $c=1692.0$ | $c=1707.5$
(iv) Filter+FEA+Volume | Irregular | $c=1812.7$ | $c=1781.4$
Table 5: Long MBB beams obtained with different padding schemes, at different
starting values of the Heaviside parameter $\beta$, and using regular and
irregular meshes.
Table 4 shows the same trend as Table 3, i.e., when the extension of the
design domain is applied not only to the filter but also to the FEA and the
volume constraint, the optimized designs show structural features disconnected
from the boundaries of the design domain. The best result for a volume
constraint of $30\%$ is obtained with the MMA optimizer in conjunction with an
external move limit of 0.7. In this case, the design presents a topology that
makes sense from the structural point of view: horizontal bars appear at the
upper and lower zones of the design to increase the area moment of inertia and
thus the bending stiffness. These parallel bars are connected by three
diagonal bars, generating stiff triangular structures. This high-performance
MBB beam solution suggests that the other reported topologies, which feature
some separation of material from the upper and lower zones of the design
domain, are local optima.
If a numerical instability is introduced by the void FEs surrounding the
design domain, it should become more pronounced as the structural perimeter
increases. To substantiate this, we now consider an MBB beam with a length-to-
height ratio of 6:1 instead of 3:1 and solve cases (i) and (iv) with a regular
mesh (code top-88) and an irregular mesh (code PolyTop) using the Optimality
Criteria method. For each topology optimization problem, two different
continuation schemes are used: one initializes the Heaviside projection
parameter $\beta$ at 1.0 and the other at 1.5. A total of 8 long-MBB beam
designs are obtained (2 cases $\times$ 2 discretizations $\times$ 2
continuation schemes). The results are summarized in Table 5.
The following three observations can be noted from Table 5:
* The best structural performance is obtained when the extension of the design domain does not affect the FEA and the volume constraint. For case (i) with $\beta_{\mathrm{ini}}=1.0$, the optimized topology contains two long parallel bars placed at the top and bottom of the optimized design, which, as discussed previously, is sensible from a structural point of view.
* The continuation method plays a major role in the presence of members disconnected from the edges of the design domain. Even for case (i), disconnections are observed when $\beta_{\mathrm{ini}}=1.5$. However, for case (i), the disconnection of material from the boundaries of the design domain is associated with the non-linearity of the optimization problem rather than with a numerical instability introduced by the treatment of the filter at the boundaries of the design space; this is based on the fact that for $\beta_{\mathrm{ini}}=1.0$ no obvious material disconnections are observed in case (i).
* Case (i) generates very similar topologies in both discretizations (regular and irregular), which is not the case for (iv). This suggests that surrounding the design domain with void finite elements in order to treat the density filter at the boundaries may promote the disconnection of structural features from the boundaries of the design domain and/or introduce mesh dependency.
It should be noted that the three previous observations are obtained for the
particular case of the 2D MBB beam under compliance minimization subject to a
volume restriction, where the optimization problem is formulated according to
the robust design approach with the intermediate and dilated designs omitted
from the objective function. However, the same pattern is observed under other
variants of the optimization problem, other test cases, and other optimization
problems. For example, we have observed that including the eroded,
intermediate and dilated designs in the objective function does not change the
reported pattern, as shown in the first row of Table 6. Other test cases, such
as the cantilever beam, show similar behavior, as shown in the second row of
Table 6. In addition, other topology optimization problems follow the reported
pattern regarding the boundary padding, such as the thermal compliance
minimization problem shown in the third row of Table 6. On the other hand, the
force inverter design problem does not appear to be influenced by a real
extension of the design domain, as shown in the fourth row of Table 6,
possibly because the structural bars are diagonal with respect to the
boundaries of the design domain and remain distant from the edges. The
optimization parameters used in the examples of Table 6 are summarized in
Table 7, and the min-max problems of Table 6 have been formulated using an
aggregation function.
Design Domain | TO Problem | No treatment | Mesh Mirroring | Real extension
---|---|---|---|---
MBB beam | $\begin{split}{\min_{\bm{\rho}}}&\quad\mathrm{max}(c(\bm{\bar{\rho}}^{\mathrm{ero}}),c(\bm{\bar{\rho}}^{\mathrm{int}}),c(\bm{\bar{\rho}}^{\mathrm{dil}}))\\ &\quad\mathbf{v}^{\intercal}\bm{\bar{\rho}}^{\mathrm{dil}}\leq V^{\mathrm{dil}}\left(V^{\mathrm{int}}\right)\\ &\quad\bm{0}\leq\bm{\rho}\leq\bm{1}\end{split}$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=310.2$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=326.5$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=330.0$
Cantilever beam | $\begin{split}{\min_{\bm{\rho}}}&\quad c(\bm{\bar{\rho}}^{\mathrm{ero}})\\ &\quad\mathbf{v}^{\intercal}\bm{\bar{\rho}}^{\mathrm{dil}}\leq V^{\mathrm{dil}}\left(V^{\mathrm{int}}\right)\\ &\quad\bm{0}\leq\bm{\rho}\leq\bm{1}\end{split}$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=104.2$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=104.7$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=106.3$
Heat sink | $\begin{split}{\min_{\bm{\rho}}}&\quad c(\bm{\bar{\rho}}^{\mathrm{ero}})\\ &\quad\mathbf{v}^{\intercal}\bm{\bar{\rho}}^{\mathrm{dil}}\leq V^{\mathrm{dil}}\left(V^{\mathrm{int}}\right)\\ &\quad\bm{0}\leq\bm{\rho}\leq\bm{1}\end{split}$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=229.0$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=221.6$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=367.6$
Force inverter | $\begin{split}{\min_{\bm{\rho}}}&\quad\mathrm{max}(c(\bm{\bar{\rho}}^{\mathrm{ero}}),c(\bm{\bar{\rho}}^{\mathrm{dil}}))\\ &\quad\mathbf{v}^{\intercal}\bm{\bar{\rho}}^{\mathrm{dil}}\leq V^{\mathrm{dil}}\left(V^{\mathrm{int}}\right)\\ &\quad\bm{0}\leq\bm{\rho}\leq\bm{1}\end{split}$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=-0.1190$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=-0.1192$ | $c(\bm{\bar{\rho}}^{\mathrm{int}})=-0.1193$
Table 6: Different topology optimization problems (columns 1 and 2) solved without boundary treatment with respect to the density filter (column 3), with boundary padding through the Mesh Mirroring approach (column 4), and through a real extension of the design domain that affects the FEA and the volume restriction (column 5). In the force inverter and heat sink, $c$ represents the output displacement and the thermal compliance, respectively. In the heat sink, a grid pattern is used as the initial distribution of design variables. See Table 7 for more details.
Design Domain | $V^{\mathrm{int}}$ | FEs | $r_{\mathrm{fil}}$ (FEs) | SIMP exponent | $\beta_{\mathrm{ini}}:\beta_{\mathrm{max}}$ | Optimizer
---|---|---|---|---|---|---
MBB beam | 30$\%$ | $300\times 100$ | 8 | 3.0 | 1.5 : 38.4 | OC
Cantilever beam | 40$\%$ | $200\times 100$ | 8 | 3.0 | 1.5 : 38.4 | OC
Heat sink | 40$\%$ | $200\times 60$ | 10 | 3.0 | 1.0 : 38.4 | OC
Force inverter | 25$\%$ | $200\times 100$ | 6 | 3.0 | 1.5 : 20.0 | MMA
Table 7: Optimization parameters used in the TO problems of Table 6.
## 5 Closure
This paper presents two padding schemes to treat the density filter at the
boundaries of the design domain. The padding schemes, termed Mesh Mirroring
(MM) and Approximate Volume (AV), are illustrated on both regular and
irregular meshes. These padding schemes do not require a real extension of the
design domain and are easy to implement, which is their main advantage. The
efficacy and robustness of the padding methods are illustrated using a set of
2D MBB beams solved for compliance minimization. By and large, the optimized
results obtained with the proposed approaches perform better than those
obtained via a real extension of the domain.
A quantitative study of a real extension of the finite element mesh suggests
that when the FEA or the volume restriction is influenced by the padding
scheme, structural features can become disconnected from the borders of the
design domain. Rather than a rational material distribution of the MBB beam,
we consider this a numerical instability, since the disconnection of material
from the boundaries tends to increase the structural compliance.
## 6 Replication of results
This note provides three MATLAB codes. The first one, named
top88_with_padding.m, contains the implementations in the base code top88
(Andreassen et al, 2011). The implementations are: 1) the robust design
approach based on the eroded, intermediate and dilated designs, 2) the
numerical treatment of the density filter that simulates an extension of the
finite element mesh, and 3) the numerical treatment that applies a real
extension of the finite element mesh. The second code, named plot_BC.m, can be
used to plot the boundary conditions and non-optimizable regions defined in
the code top88_with_padding.m. The third code, named
PolyFilter_with_padding.m, is intended for the PolyTop code. It contains the
modifications of the density filter to simulate an extension of the finite
element mesh on the MBB beam and Hook domains.
## Acknowledgements
The authors are grateful to Prof. Krister Svanberg for providing the MATLAB
implementation of the Method of Moving Asymptotes.
## Conflict of interest
The authors state that there is no conflict of interest.
## References
* Amir and Lazarov (2018) Amir O, Lazarov BS (2018) Achieving stress-constrained topological design via length scale control. Structural and Multidisciplinary Optimization 58(5):2053–2071
* Andreassen et al (2011) Andreassen E, Clausen A, Schevenels M, Lazarov BS, Sigmund O (2011) Efficient topology optimization in MATLAB using 88 lines of code. Structural and Multidisciplinary Optimization 43(1):1–16
* Bourdin (2001) Bourdin B (2001) Filters in topology optimization. International journal for numerical methods in engineering 50(9):2143–2158
* Bruns and Tortorelli (2001) Bruns TE, Tortorelli DA (2001) Topology optimization of non-linear elastic structures and compliant mechanisms. Computer methods in applied mechanics and engineering 190(26-27):3443–3459
* Clausen and Andreassen (2017) Clausen A, Andreassen E (2017) On filter boundary conditions in topology optimization. Structural and Multidisciplinary Optimization 56(5):1147–1155
* Fernández et al (2020) Fernández E, Yang Kk, Koppen S, Alarcón P, Bauduin S, Duysinx P (2020) Imposing minimum and maximum member size, minimum cavity size, and minimum separation distance between solid members in topology optimization. Computer Methods in Applied Mechanics and Engineering 368:113157
* Lazarov et al (2016) Lazarov B, Wang F, Sigmund O (2016) Length scale and manufacturability in density-based topology optimization. Archive of Applied Mechanics 86:189–218
* Sigmund (2007) Sigmund O (2007) Morphology-based black and white filters for topology optimization. Structural and Multidisciplinary Optimization 33(4):401–424
* Sigmund (2009) Sigmund O (2009) Manufacturing tolerant topology optimization. Acta Mechanica Sinica 25(2):227–239
* Sigmund and Maute (2013) Sigmund O, Maute K (2013) Topology optimization approaches. Structural and Multidisciplinary Optimization 48(6):1031–1055
* Sigmund and Petersson (1998) Sigmund O, Petersson J (1998) Numerical instabilities in topology optimization: a survey on procedures dealing with checkerboards, mesh-dependencies and local minima. Structural optimization 16(1):68–75
* Svanberg (1987) Svanberg K (1987) The method of moving asymptotes—a new method for structural optimization. International journal for numerical methods in engineering 24(2):359–373
* Talischi C, Paulino GH, Pereira A, Menezes IF (2012) PolyTop: a MATLAB implementation of a general topology optimization framework using unstructured polygonal finite element meshes. Structural and Multidisciplinary Optimization 45(3):329–357
* Wallin M, Ivarsson N, Amir O, Tortorelli D (2020) Consistent boundary conditions for PDE filter regularization in topology optimization. Structural and Multidisciplinary Optimization
* Wang et al (2011) Wang F, Lazarov BS, Sigmund O (2011) On projection methods, convergence and robust formulations in topology optimization. Structural and Multidisciplinary Optimization 43(6):767–784
# Novel evaluation method for non-Fourier effects in heat pulse experiments
A. Fehér1, R. Kovács123 1Department of Energy Engineering, Faculty of
Mechanical Engineering, BME, Budapest, Hungary 2Department of Theoretical
Physics, Wigner Research Centre for Physics, Institute for Particle and
Nuclear Physics, Budapest, Hungary 3Montavid Thermodynamic Research Group
###### Abstract.
The heat pulse (flash) experiment is a well-known and widely accepted method
to measure the thermal diffusivity of a material. In recent years, it has been
observed that the thermal behavior of heterogeneous materials can deviate from
the classical Fourier equation, resulting in a different thermal diffusivity
and requiring further thermal parameters to be identified. Such heterogeneity
can be inclusions in metal foams, layered structures in composites, or even
cracks and porous parts in rocks. Furthermore, the next candidate, the
so-called Guyer-Krumhansl equation, has been tested on these experiments with
success. However, these recent evaluations required a computationally
intensive fitting procedure using countless numerical solutions, even when a
good initial guess for the parameters is found by hand. This paper presents a
Galerkin-type discretization for the Guyer-Krumhansl equation, which helped us
find a reasonably simple analytical solution for time-dependent boundary
conditions. Utilizing this analytical solution, we developed a new evaluation
technique to immediately estimate all the necessary thermal parameters from
the measured temperature history.
## 1\. Introduction
The engineering practice requires reliable ways to determine the necessary
parameters, which are enough to characterize the material behavior. In what
follows, we place our focus on the thermal description of materials,
especially on heterogeneous materials such as rocks and foams. In recent
papers [1, 2], it is reported that the presence of various heterogeneities can
result in a non-Fourier heat conduction effect on macro-scale under room
temperature conditions. A particular one is depicted in Fig. 1 for a capacitor
sample having a periodic layered structure. Such effects are observed in a so-
called flash (or heat pulse) experiment in which the front side of the
specimen is excited with a short heat pulse, and the temperature is measured
at the rear side. That temperature history is used to find the thermal
diffusivity in order to characterize the transient material behavior.
Figure 1. Measured rear side temperature history for the capacitor sample and
the prediction provided by Fourier’s theory.
This non-Fourier effect occurs in a specific time interval, as Fig. 1 shows
for a typical outcome of the flash experiments; this is called over-diffusion.
After that interval, the Fourier equation appears to be a suitable choice for
modeling, as the influence of the heterogeneities vanishes (later, we show
further examples). Also, there is no difference between the steady states of
the Fourier and non-Fourier heat equations. Based on our experimental
experience, the existence of over-diffusion depends on various factors, for
instance, sample thickness, characteristic parallel time scales, and
excitation (i.e., boundary conditions) [3].
The remainder of the paper is organized as follows. First, we briefly
introduce the two heat conduction models used to model the heat pulse
experiments and for the evaluations, together with a particular set of
dimensionless quantities. Second, we briefly present how the complete
evaluation with the Fourier heat equation can be conducted. Then, we describe
the evaluation procedure with the Guyer-Krumhansl equation. After that, we
demonstrate the benefits of this fitting procedure and revisit some previous
measurements. Furthermore, the derivation of the analytical solutions is
placed at the end of the paper as an Appendix. To the best of our knowledge,
the Galerkin method has not been used before for the Guyer-Krumhansl equation
and is thus a novel result in this respect; nevertheless, we want to keep the
focus on its practical utilization.
## 2\. Models for heat pulse experiments
Although numerous generalizations of Fourier’s law exist in the literature
[4], there is solely one of them, which indeed proved to be reasonable as the
next candidate beyond Fourier’s theory, this is called Guyer-Krumhansl (GK)
equation, this constitutive equation reads in one spatial dimension
$\displaystyle\tau_{q}\partial_{t}q+q+\lambda\partial_{x}T-\kappa^{2}\partial_{xx}q=0.$
(1)
Here, $\tau_{q}$ is the relaxation time for the heat flux $q$ and $\kappa^{2}$
is a kind of ‘dissipation parameter’, usually related to the mean free path.
Whereas it was first derived on the basis of kinetic theory [5], this model
also has a strong background in non-equilibrium thermodynamics with internal
variables (NET-IV) [6, 7]. While in the first case, one assumes an underlying
mechanism for phonons as the kinetic theory requires it, this is entirely
neglected in the case of NET-IV, leaving the coefficients to be free (however,
their sign is restricted by the II. law of thermodynamics). Eq. (1) is a time
evolution equation for the heat flux, and in order to have a mathematically
and physically complete system, we need the balance of internal energy $e$,
too,
$\displaystyle\rho c\partial_{t}T+\partial_{x}q=0,$ (2)
in which the equation of state $e=cT$ is used with $c$ being the specific heat
and $rho$ is the mass density. All these coefficients are constant, only rigid
bodies are assumed with no heat source.
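The structure of system (1)-(2) can be seen in a minimal explicit finite-difference sketch in Python (the staggered grid, insulated ends, and all parameter values below are assumptions of this illustration, not the Galerkin discretization developed in this paper):

```python
import numpy as np

# Explicit staggered-grid sketch of the 1D GK system (1)-(2):
#   tau_q*dq/dt + q + lam*dT/dx - kappa2*d2q/dx2 = 0
#   rho_c*dT/dt + dq/dx = 0
L, N = 1.0, 50
dx = L / N
lam, rho_c = 1.0, 1.0        # conductivity and volumetric heat capacity
tau_q, kappa2 = 0.1, 0.01    # relaxation time and dissipation parameter
dt = 5e-4                    # small step for explicit stability

x = (np.arange(N) + 0.5) * dx
T = np.exp(-100.0 * (x - 0.5) ** 2)   # initial temperature bump
q = np.zeros(N + 1)                   # flux on faces; q = 0 at insulated ends
mean0 = T.mean()

for _ in range(2000):                 # integrate to t = 1
    dTdx = (T[1:] - T[:-1]) / dx
    d2q = (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dx**2
    q[1:-1] += dt / tau_q * (-q[1:-1] - lam * dTdx + kappa2 * d2q)
    T -= dt / rho_c * (q[1:] - q[:-1]) / dx

print(T.max() - T.min())  # nearly uniform: the bump has diffused away
```

With insulated ends the update of $T$ telescopes, so the mean temperature is conserved to machine precision, mirroring the fact that (2) is a balance of internal energy.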
At this point, we owe an explanation of why we leave the Maxwell-Cattaneo-
Vernotte (MCV) equation out of sight.
1. Hyperbolicity vs. parabolicity. It is usually claimed that a heat equation should be hyperbolic, such as the MCV theory, describing a finite propagation speed. Indeed, this seems reasonable, but it does not help in practical applications under common conditions (room temperature, heterogeneous materials). The Fourier equation is still well-applicable in spite of its parabolic nature; therefore, we do not see hyperbolicity as a decisive property.
2. In low-temperature situations, the MCV model was useful, primarily due to the observed wave phenomenon in super-fluids, called second sound [8, 9]. Despite the GK equation's parabolic nature, it also helped researchers find the second sound in solids [10].
3. There is a significant effort to find the trace of wave propagation at room temperature (in macro-scale objects, so nano-structures do not count here), sadly with no success [11, 12].
4. There are higher-order models as well, such as ballistic-diffusive models [13, 14, 15], but they belong to a different research program, and for this work, which investigates macro-scale objects, they do not seem relevant.
5. On the analogy of the MCV model, the so-called dual-phase lag (DPL) equation [16] is used in many works as the best candidate after Fourier's law. Sadly, this model introduces two time constants in an ad hoc manner, violating basic physical principles [17, 18] and leading to mathematically ill-posed problems as well [19, 20].
Last but not least, we must also mention a relatively less-known model from
the literature, the Nyíri equation [21],
$\displaystyle q+\lambda\partial_{x}T-\kappa^{2}\partial_{xx}q=0,$ (3)
which is indeed similar to the Guyer-Krumhansl model but leaves the
time-lagging effects out of sight; hence it is a purely spatially nonlocal
heat equation. Testing its solutions with the method presented in the
Appendix, it unfortunately turned out to be inaccurate for the measurements.
Consequently, the GK model is indeed the simplest necessary extension of the
Fourier equation; neither the MCV nor the Nyíri model is capable of describing
these experiments accurately. In other words, the two new parameters
($\tau_{q}$ and $\kappa^{2}$) are truly needed.
### 2.1. T and q-representations
Depending on the purpose, it is useful to keep in mind that for such linear
models it is possible to choose a 'primary' field variable, which can ease
the definition of boundary conditions in some cases. For the GK equation, the
temperature $T$ and the heat flux $q$ are the candidates, and the
corresponding forms are
T-representation:
$\displaystyle\quad\tau_{q}\partial_{tt}T+\partial_{t}T-\alpha\partial_{xx}T-\kappa^{2}\partial_{txx}T=0,$ (4)
q-representation:
$\displaystyle\quad\tau_{q}\partial_{tt}q+\partial_{t}q-\alpha\partial_{xx}q-\kappa^{2}\partial_{txx}q=0.$ (5)
We note that in the $T$-representation, it is unknown how to define a boundary
condition for $q$, since it would require knowledge of $\partial_{xx}q$. On
the other hand, in the $q$-representation, it becomes meaningless to speak
about boundary conditions for $T$. In a previous analytical solution of the GK
equation [22], recognizing this difference was essential. In the present work,
we use the system (1)-(2). It is also interesting to note that the GK model
recovers the solution of the Fourier equation when
$\kappa^{2}/\tau_{q}=\alpha$; this is called Fourier resonance [1, 23].
Overall, the coefficients $\tau_{q}$, $\alpha$, and $\kappa^{2}$ must be
fitted to the given temperature history.
### 2.2. Dimensionless set of parameters
Following [1], we introduce these definitions for the dimensionless parameters
(quantities with hat):
time: $\hat{t}=\frac{t}{t_{p}}\quad\textrm{and}\quad\hat{x}=\frac{x}{L};$
thermal diffusivity: $\hat{\alpha}=\frac{\alpha t_{p}}{L^{2}}\quad\textrm{with}\quad\alpha=\frac{\lambda}{\rho c};$
temperature: $\hat{T}=\frac{T-T_{0}}{T_{\textrm{end}}-T_{0}}\quad\textrm{with}\quad T_{\textrm{end}}=T_{0}+\frac{\bar{q}_{0}t_{p}}{\rho cL};$
heat flux: $\hat{q}=\frac{q}{\bar{q}_{0}}\quad\textrm{with}\quad\bar{q}_{0}=\frac{1}{t_{p}}\int_{0}^{t_{p}}q_{0}(t)\,\textrm{d}t;$
heat transfer coefficient: $\hat{h}=h\frac{t_{p}}{\rho c};$ (6)
together with $\hat{\tau}_{q}=\frac{\tau_{q}}{t_{p}}$,
$\hat{\kappa}^{2}=\frac{\kappa^{2}}{L^{2}}$, where $\hat{t}$ differs from the
usual Fourier number in order to decouple the thermal diffusivity from the
time scale in the fitting procedure. Furthermore, $t_{p}$ denotes the constant
heat pulse duration, over which interval $\bar{q}_{0}$ averages the heat
transferred by the heat pulse defined by $q_{0}(t)$. Here, $L$ is equal to the
sample thickness, $T_{\textrm{end}}$ represents the adiabatic steady state,
and $T_{0}$ is the uniform initial temperature. In the rest of the paper, we
omit the hat notation; otherwise, we add the unit of the corresponding
quantity. Utilizing this set of definitions, one obtains the dimensionless GK
model:
$\displaystyle\partial_{t}T+\partial_{x}q=0,\qquad\tau_{q}\partial_{t}q+q+\alpha\partial_{x}T-\kappa^{2}\partial_{xx}q=0.$ (7)
The initial condition is zero for both fields. For further details, we refer
to the Appendix in which we present the analytical solution for the two heat
equations. This set of dimensionless parameters does not change the definition
of the Fourier resonance condition, i.e., it remains
$\hat{\kappa}^{2}/\hat{\tau}_{q}=\hat{\alpha}$.
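As a concrete illustration of this scaling, the snippet below maps a set of dimensional material parameters onto the hatted quantities of (6). The numerical values are illustrative assumptions (loosely inspired by the basalt measurements discussed later), not values prescribed by the text.

```python
# Sketch of the non-dimensionalization in Eq. (6). All numerical values below
# are illustrative assumptions, not measured data.

def dimensionless_params(alpha, tau_q, kappa2, h, rho_c, t_p, L):
    """Map dimensional quantities to the hatted parameters of Eq. (6)."""
    return {
        "alpha": alpha * t_p / L**2,    # thermal diffusivity, alpha_hat
        "tau_q": tau_q / t_p,           # relaxation time, tau_q_hat
        "kappa2": kappa2 / L**2,        # nonlocal parameter, kappa2_hat
        "h": h * t_p / rho_c,           # heat transfer coefficient, h_hat
    }

p = dimensionless_params(alpha=0.62e-6,   # [m^2/s]
                         tau_q=0.211,     # [s]
                         kappa2=0.168e-6, # [m^2]
                         h=30.0,          # [W/(m^2 K)], assumed
                         rho_c=2.1e6,     # [J/(m^3 K)], assumed
                         t_p=0.01,        # [s]
                         L=1.86e-3)       # [m]

# The Fourier resonance ratio kappa^2/(tau_q*alpha) is invariant under this
# scaling, as stated in the text
ratio = p["kappa2"] / (p["tau_q"] * p["alpha"])
```

The last line makes the invariance explicit: the ratio computed from the hatted quantities equals the one computed from the dimensional ones.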
## 3\. Evaluation with the Fourier theory
The analytical solution of the Fourier equation is found for the rear side in
the form of
$\displaystyle T(x=1,t)=Y_{0}\exp(-ht)-Y_{1}\exp(x_{F}t),\quad
x_{F}=-2h-\alpha\pi^{2},\quad t>30,$ (8)
where all the coefficients are expressed in detail in the Appendix. First, we
must estimate the heat transfer coefficient $h$ by arbitrarily choosing two
temperature values on the decreasing part of the temperature history. In this
region, $\exp(x_{F}t)\approx 0$, thus
$\displaystyle h=-\frac{\ln(T_{2}/T_{1})}{t_{2}-t_{1}}.$ (9)
For the Fourier theory, it is possible to express the thermal diffusivity
explicitly, i.e.,
$\displaystyle\alpha_{F}=1.38\frac{L^{2}}{\pi^{2}t_{1/2}},$ (10)
and after registering the half-rise time $t_{1/2}$, it can be determined
directly. The thermal diffusivity is the ratio of the thermal conductivity
$\lambda$ and the volumetric heat capacity $\rho c$. Then, the top of the
temperature history ($T_{\textrm{max}}$) follows by reading the time instant
($t_{\textrm{max}}$) when $T_{\textrm{max}}$ occurs. Figure 2 schematically
summarizes this procedure. Overall, we obtain the heat transfer coefficient,
the thermal diffusivity, and $T_{\textrm{max}}$, which are all used for the
Guyer-Krumhansl theory.
Figure 2. Schematically presenting the evaluation method using Fourier’s
theory.
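The two formulas above are simple enough to sketch in code. The following minimal illustration applies Eqs. (9) and (10) to a synthetic exponential tail; the values of $h$, $L$ and $t_{1/2}$ are assumptions made for the demonstration, not measured data.

```python
import numpy as np

def heat_transfer_coeff(T1, T2, t1, t2):
    """Eq. (9): h from two points on the decaying part, where exp(x_F*t) ~ 0."""
    return -np.log(T2 / T1) / (t2 - t1)

def fourier_diffusivity(L, t_half):
    """Eq. (10): thermal diffusivity from the half-rise time t_1/2."""
    return 1.38 * L**2 / (np.pi**2 * t_half)

# Synthetic decaying tail T = Y0*exp(-h*t) with an assumed dimensionless h
h_true, Y0 = 0.004, 1.2
t1, t2 = 200.0, 400.0
h_est = heat_transfer_coeff(Y0 * np.exp(-h_true * t1),
                            Y0 * np.exp(-h_true * t2), t1, t2)

# Assumed L = 3.84 mm and t_1/2 = 2.0 s (illustrative, not a measured value)
alpha_F = fourier_diffusivity(L=3.84e-3, t_half=2.0)   # [m^2/s]
```

On exact exponential data the estimator recovers $h$ to machine precision; on real data the two points should be taken well after $T_{\textrm{max}}$, where the $\exp(x_F t)$ term has indeed died out.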
## 4\. Evaluation with the Guyer-Krumhansl theory
The situation here becomes more difficult, since this non-Fourier theory
involves two 'time constants' ($x_{1}$ and $x_{2}$) instead of one ($x_{F}$ in
the Fourier theory). Consequently, it is not possible to find these exponents
without making simplifications, with which one must be immensely careful. We
prepared 'parameter maps' for all $\tau_{q}$ and $\kappa^{2}$ values that
could practically occur, and beyond, in order to check the effect of the
simplifications made in the following. However, we still had to restrict
ourselves to a domain, namely $3>\kappa^{2}/(\alpha\tau_{q})\geq 1$. Its lower
limit expresses the Fourier case, and any other combination falls in the
over-diffusive region. The highest experimentally observed ratio so far is
around $2.5$; thus we expect $3$ to be sufficient. For $\kappa^{2}$, we
consider $0.02<\kappa^{2}<1$. We want to emphasize that the GK theory itself
is not restricted to this domain; it would allow under-diffusive ('wave-like')
propagation as well [10]. However, for the present situation, we consider it
out of interest in the absence of experimental observations for
room-temperature experiments on macro-scale heterogeneous samples. In the GK
theory, we can express the rear-side temperature history as
$\displaystyle
T(x=1,t)=Y_{0}\exp(-ht)-Z_{1}\exp(x_{1}t)-Z_{2}\exp(x_{2}t),\quad
x_{1},x_{2}<0,$ (11)
For the detailed calculation and parameter definitions, we refer to the
Appendix again. This can be equivalently reformulated by realizing that
$Z_{2}=-P_{0}-Z_{1}$:
$\displaystyle
T(x=1,t)=Y_{0}\exp(-ht)-Z_{1}\big{(}\exp(x_{1}t)-\exp(x_{2}t)\big{)}+P_{0}\exp(x_{2}t),$
(12)
where merely one simplification becomes possible for all $\tau_{q}$ and
$\kappa^{2}$: $\exp(x_{1}t)\gg\exp(x_{2}t)$ when $t>60$, i.e.,
$\displaystyle
T(x=1,t>60)=Y_{0}\exp(-ht)-Z_{1}\exp(x_{1}t)+P_{0}\exp(x_{2}t).$ (13)
This form is more advantageous because $P_{0}$ remains practically constant
for a given boundary condition; thus its value can be assumed a priori, which
is exploited in the evaluation method. Now, let us present, step by step, the
determination of the GK parameters, depicted in Fig. 3.
* •
Step 1/A. We have to observe that the temperature predicted by Fourier's
theory always runs together with the measured one at the beginning and, after
that, rises faster towards the top. In other words, in this region the same
temperature value (usually around $0.7-0.95$) is reached sooner.
Mathematically, we can express this by formally writing the equations for the
Fourier and GK theories as follows,
$\displaystyle T_{F}=Y_{0}\exp(-ht_{F})-Y_{1}\exp(x_{F}t_{F});\quad
T_{GK}=Y_{0}\exp(-ht_{m})-Z_{1}\exp(x_{1}t_{m})+P_{0}\exp(x_{2}t_{m}),$ (14)
where the $t_{F}$ time instant is smaller than the measured $t_{m}$, and
$T_{F}=T_{GK}$ holds. Choosing two such temperatures arbitrarily and taking
their ratio yields
$\displaystyle\exp\big{(}x_{F}(t_{F1}-t_{F2})\big{)}=\exp\big{(}x_{1}(t_{m1}-t_{m2})\big{)}\frac{-Z_{1}+P_{0}\exp\big{(}(x_{2}-x_{1})t_{m1}\big{)}}{-Z_{1}+P_{0}\exp\big{(}(x_{2}-x_{1})t_{m2}\big{)}}$
(15)
where the fraction on the right-hand side is close to $1$, mostly between $1$
and $1.05$ for 'small' time intervals. It would be possible to introduce it as
a correction factor (denoted by $c$ below) for $x_{1}$ in an iterative
procedure if more were known about $\tau_{q}$ and $\kappa^{2}$. After
rearrangement, we obtain a closed-form formula for $x_{1}$:
$\displaystyle
x_{1}=\frac{\ln(1/c)}{t_{m1}-t_{m2}}+x_{F}\frac{t_{F1}-t_{F2}}{t_{m1}-t_{m2}}.$
(16)
Taking $c=1$ is equivalent to neglecting $\exp(x_{2}t)$ from the beginning,
around reaching $T_{\textrm{max}}$, and leads to the same expression.
Eventually, it introduces a correction for the Fourier exponent $x_{F}$ based
on the deviation from the measured data, with the possibility to apply further
corrections using $c$ if needed. Practically, we take $80$% and $90$% of
$T_{\textrm{max}}$ and the next $20$ subsequent measurement points, and then
consider their mean value to be $x_{1}$. From a mathematical point of view,
closer data point pairs should perform better, but they do not, owing to the
uncertainty in the measurement data. According to our experience, this choice
offers a more consistent value for $x_{1}$.
Figure 3. The schematic representation of the evaluation method using the
Guyer-Krumhansl theory. Here, the ’fitted curve’ belongs to the Fourier
equation.
* •
Step 1/B. In parallel with part A, we can determine the coefficient $Z_{1}$
for each $t_{m}$ and each corresponding $x_{1,m}$, that is,
$\displaystyle
Z_{1,m}=\exp(-x_{1,m}t_{m})\Big{(}T_{m}-Y_{0}\exp(-ht_{m})\Big{)},$ (17)
where the subscript $m$ denotes the value related to one measurement point.
Also, over $20$ subsequent points, we take the mean value of the set
$\{Z_{1,m}\}$.
* •
Step 2. At this point, we can exploit that $P_{0}$ is 'almost constant', i.e.,
$2<-P_{0}<2.03$ holds. Here, $2.03$ comes from the parameter sweep (we did not
observe higher values for $-P_{0}$), and it cannot be smaller than $2$. This
property allows us to assume its value a priori (such as $P_{0}=-2.015$); in a
later step, we must fine-tune it, since the overall outcome reacts sensitively
to it. Using $P_{0}$, we can obtain $Z_{2}=-P_{0}-Z_{1}$. In order to obtain
$x_{2}$, we can rearrange the equation
$\displaystyle T=Y_{0}\exp(-ht)-Z_{1}\exp(x_{1}t)-Z_{2}\exp(x_{2}t)$ (18)
for $x_{2}$ and calculate it as the mean value of the set $\{x_{2,m}\}$ filled
with values related to each $t_{m}$. With noisy data, this approach can
unfortunately result in positive $x_{2}$ values. These values must be
excluded; otherwise, they lead to instability and a meaningless outcome.
Careful data filtering helps to overcome this shortcoming, and in fact, we
used it to ease the calculation (the details are shown in the next section).
* •
Step 3. Now, having both exponents and coefficients, it is possible to
rearrange the analytical expressions explicitly for the GK parameters and
calculate $\alpha_{GK}$, $\tau_{q}$ and $\kappa^{2}$:
$\displaystyle x_{1},x_{2}\Rightarrow k_{1},k_{2};\quad Z_{1}\Rightarrow
DP_{0}\Rightarrow
M_{1},M_{2}\Rightarrow\tau_{q}\Rightarrow\alpha_{GK}\Rightarrow\kappa^{2}.$
(19)
For the detailed parameter definitions, we refer to the Appendix.
* •
Step 4. As mentioned in Step 2, the overall outcome is sensitive to $P_{0}$.
Therefore, we choose to sweep the possible interval with a step of $0.002$,
producing the temperature history for each set of parameters and
characterizing each with $R^{2}$, the coefficient of determination. Lastly, we
choose the best set.
Practically, this evaluation method reduces the number of 'fitted' parameters,
as only $P_{0}$ has to be fine-tuned at the end. Besides, it is constrained to
a relatively narrow range; consequently, the overall evaluation procedure
takes only a few seconds, instead of the hours required by computationally
intensive fitting algorithms.
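To make the pipeline concrete, here is a minimal numerical sketch of Steps 1/B and 2 on synthetic data generated from Eq. (11): given $x_{1}$ (assumed already known from Step 1/A) and an a priori $P_{0}$, it recovers the coefficient $Z_{1}$ and the exponent $x_{2}$. All parameter values are illustrative assumptions, not fitted quantities from the paper.

```python
import numpy as np

# Synthetic rear-side history from Eq. (11); illustrative parameter values.
h, Y0 = 0.004, 1.0
x1, x2 = -0.01, -0.05      # x1 decays slower, so exp(x1*t) >> exp(x2*t) later on
P0 = -2.015
Z1_true = 0.8
Z2_true = -P0 - Z1_true    # Z2 = -P0 - Z1
t = np.linspace(150.0, 400.0, 40)
T = Y0*np.exp(-h*t) - Z1_true*np.exp(x1*t) - Z2_true*np.exp(x2*t)

# Step 1/B, Eq. (17): pointwise values whose mean estimates Z1 (up to the sign
# convention of Eq. (11)) once exp(x2*t) has died out relative to exp(x1*t)
Z1_est = np.mean(-np.exp(-x1*t) * (T - Y0*np.exp(-h*t)))

# Step 2, Eq. (18): with P0 assumed a priori and Z1 known (the exact value is
# used here to keep the sketch transparent; in practice the Step 1/B mean is
# used, with positive x2 outliers filtered out as described in the text)
Z2 = -P0 - Z1_true
residual = Y0*np.exp(-h*t) - Z1_true*np.exp(x1*t) - T
x2_est = np.mean(np.log(residual / Z2) / t)
```

On noise-free data the recovery is essentially exact; with noise, the filtering of positive $x_{2,m}$ values mentioned in Step 2 becomes essential.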
## 5\. Comparison with foregoing experiments
First, we revisit the experiments presented in [3], since that set of data,
recorded on basalt rock samples with thicknesses of $1.86$, $2.75$ and $3.84$
mm, showed size dependence in both the thermal diffusivity and the non-Fourier
effects. However, the fitted parameters in [3] were found by hand and are thus
not exactly precise. Here, we aim to specify the exact quantities for the GK
model and to establish a more robust theoretical basis for the observations.
Second, we reevaluate the data recorded on a metal foam sample with $5.2$ mm
thickness. This belongs to the samples showing a potent non-Fourier effect,
presented first in [2]. Figure 4 shows these samples.
Figure 4. The magnified view of the metal foam specimen (center) and the
basalt rock sample (right).
In some cases, the available data is too noisy for such an evaluation method;
an example is presented in Fig. 5. That data is smoothed using the built-in
Savitzky-Golay algorithm of Matlab. Besides, we paid much attention not to
over-smooth it, in order to keep the physical content as untouched as
possible.
Figure 5. The effect of data smoothing, using $10$ neighboring points in the
Savitzky-Golay algorithm.
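For readers who prefer a non-Matlab route, the same kind of smoothing is available in SciPy. The sketch below applies it to a synthetic noisy rise; SciPy requires an odd window length, so an 11-point window stands in for the 10-neighbor setting, and all signal parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
clean = 1.0 - np.exp(-t)                  # stand-in rear-side temperature rise
noisy = clean + 0.02 * rng.standard_normal(t.size)

# Short window + low polynomial order: suppress the noise without flattening
# the physically meaningful shape of the curve
smooth = savgol_filter(noisy, window_length=11, polyorder=3)
```

A short window and low polynomial order are the code-level analogue of the caution expressed above: they reduce the noise while leaving the underlying curve essentially untouched.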
### 5.1. Basalt rock samples
Regarding the exact details of the measurements, we refer to [3]. Tables 1 and
2 summarize our findings using this evaluation algorithm. Comparing the
outcomes of the two fitting procedures, we find the thermal diffusivities to
be close to each other. However, this is not the case with the GK parameters
$\tau_{q}$ and $\kappa^{2}$: they differ significantly from the previous
values from [3]. Despite the large difference, the size dependence of both the
Fourier and non-Fourier behavior is nevertheless apparent. The fitted
temperature histories are depicted in Figs. 6, 7, 8 and 9 for each thickness,
respectively. Each figure shows the $R^{2}$ value for the fitted curve. For
the Fourier one, two values are given: $R_{t}^{2}$ represents the one found
without any fine-tuning of the thermal diffusivity, which is purely
theoretical, while the other $R^{2}$ stands for the fine-tuned $\alpha_{F}$.
In the first case ($L=1.86$ mm), although the difference between the Fourier
and non-Fourier fits seems negligible, it results in a $10$% difference in the
thermal diffusivity. This is more visible in Table 2, in which the Fourier
resonance condition spectacularly characterizes the deviation from Fourier's
theory; it decreases for thicker samples. Regarding the third sample ($L=3.84$
mm), Fourier's theory seems to be 'perfectly splendid', and the GK model
hardly improves on it. Indeed, the $0.94$ value for the Fourier resonance is
close enough to $1$ to consider this a Fourier-like propagation.
| Basalt rock samples | Findings in [3] | | | | Refined results | | | |
|---|---|---|---|---|---|---|---|---|
| | $\alpha_{F}$, $10^{-6}$ [m$^2$/s] | $\alpha_{GK}$, $10^{-6}$ [m$^2$/s] | $\tau_{q}$ [s] | $\kappa^{2}$, $10^{-6}$ [m$^2$] | $\alpha_{F}$, $10^{-6}$ [m$^2$/s] | $\alpha_{GK}$, $10^{-6}$ [m$^2$/s] | $\tau_{q}$ [s] | $\kappa^{2}$, $10^{-6}$ [m$^2$] |
| $1.86$ mm | $0.62$ | $0.55$ | $0.738$ | $0.509$ | $0.68$ | $0.61$ | $0.211$ | $0.168$ |
| $2.75$ mm | $0.67$ | $0.604$ | $0.955$ | $0.67$ | $0.66$ | $0.61$ | $0.344$ | $0.268$ |
| $3.84$ mm | $0.685$ | $0.68$ | $0.664$ | $0.48$ | $0.70$ | $0.68$ | $1$ | $0.65$ |

Table 1. Summarizing the fitted thermal parameters.

| $\frac{\kappa^{2}}{\tau_{q}\alpha}$ | Findings in [3] | Refined values |
|---|---|---|
| $1.86$ mm | $1.243$ | $1.295$ |
| $2.75$ mm | $1.171$ | $1.272$ |
| $3.84$ mm | $1.06$ | $0.94$ |

Table 2. Characterizing the non-Fourier behavior using the Fourier resonance
condition.

Figure 6. The rear side temperature history for the basalt rock sample with
$L=1.86$ mm.
Figure 7. The rear side temperature history for the basalt rock sample with
$L=2.75$ mm.
Figure 8. Demonstrating the complete fitting for the rear side temperature in
case of the basalt rock sample with $L=2.75$ mm.
Figure 9. The rear side temperature history for the basalt rock sample with
$L=3.84$ mm.
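As a quick sanity check, the refined column of Table 2 can be reproduced directly from the refined $\alpha_{GK}$, $\tau_{q}$ and $\kappa^{2}$ values of Table 1; agreement holds only up to the rounding of the tabulated values.

```python
# Fourier resonance ratio kappa^2/(tau_q * alpha_GK) from the refined values
# of Table 1; matches Table 2 only up to the rounding of the table entries.
refined = {  # L [mm]: (alpha_GK [1e-6 m^2/s], tau_q [s], kappa2 [1e-6 m^2])
    1.86: (0.61, 0.211, 0.168),
    2.75: (0.61, 0.344, 0.268),
    3.84: (0.68, 1.000, 0.650),
}
ratios = {L: k2 / (tq * a) for L, (a, tq, k2) in refined.items()}
```

The ratios come out around $1.31$, $1.28$ and $0.96$, i.e., within roughly $0.02$ of the tabulated $1.295$, $1.272$ and $0.94$.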
### 5.2. Metal foam
Regarding the extent of the non-Fourier effect, the situation becomes
remarkably different for the metal foam sample, presented first in [2]. The
millimeter-size inclusions can significantly influence the thermal behavior.
The outcome is plotted in Fig. 10, together with the corresponding $R^{2}$
values. Table 3 compares the fitted values found by Wolfram Mathematica to
ours. While the GK parameters are in good correspondence, the most notable
difference is, interestingly, in the thermal diffusivities. The Fourier
resonance parameter is found to be $2.395$ with our procedure, in contrast to
$3.04$ in [2]. Common to both cases, the ratio of $\alpha_{F}$ and
$\alpha_{GK}$ is found to be $1.28$-$1.29$, which represents an indeed
remarkable deviation from Fourier's theory.
| Metal foam sample | Findings in [2] with Wolfram Mathematica | | | | Present algorithm | | | |
|---|---|---|---|---|---|---|---|---|
| | $\alpha_{F}$, $10^{-6}$ [m$^2$/s] | $\alpha_{GK}$, $10^{-6}$ [m$^2$/s] | $\tau_{q}$ [s] | $\kappa^{2}$, $10^{-6}$ [m$^2$] | $\alpha_{F}$, $10^{-6}$ [m$^2$/s] | $\alpha_{GK}$, $10^{-6}$ [m$^2$/s] | $\tau_{q}$ [s] | $\kappa^{2}$, $10^{-6}$ [m$^2$] |
| $5.2$ mm | $3.04$ | $2.373$ | $0.402$ | $2.89$ | $3.91$ | $3.01$ | $0.304$ | $2.203$ |

Table 3. Summarizing the fitted thermal parameters for the metal foam sample.
Figure 10. The rear side temperature history for the metal foam sample with
$L=5.2$ mm.
## 6\. Discussion and summary
We developed an algorithm to efficiently evaluate room-temperature heat pulse
experiments in which a non-Fourier effect may be present. This effect, called
over-diffusive propagation, detunes the thermal diffusivity even when the
deviation seems small or negligible in the rear-side temperature history. The
presented method is based on the analytical solution of the Guyer-Krumhansl
equation, including a temperature-dependent convection boundary condition;
thus the heat transfer to the environment can be directly included in the
analysis. The reevaluation of preceding experiments showed a real size
dependence of all thermal parameters, especially of the GK coefficients
$\tau_{q}$ and $\kappa^{2}$. Furthermore, the outcome is essentially in
accordance with the result of the 'brute force' iterative fitting procedure of
Wolfram Mathematica, while using far less computational resources.
We plan to improve this procedure by including the investigation of the
front-side temperature history, too. Once $x_{1}$ is obtained from the rear
side, it can easily be used to describe the front side's thermal behavior. The
front side is much more sensitive to the initial time evolution right after
the excitation; therefore, it could serve as a better candidate for a more
precise and robust estimation of the $x_{2}$ exponent. Also, having two
temperature histories would be a remarkable step forward in ascertaining the
existence of non-Fourier heat conduction.
We believe that this procedure lays the foundations for more practical
engineering applications of non-Fourier models, especially for the best
candidate among them, the Guyer-Krumhansl equation. It sheds new light on the
classical and well-known flash experiments, and we provide the necessary tools
to find the additional thermal parameters needed for a better description of
heterogeneous materials. This becomes increasingly important with the spread
of composites and foams, and it helps to characterize 3D-printed samples with
various inclusions.
## 7\. Acknowledgement
The authors thank Tamás Fülöp, Péter Ván, and Mátyás Szücs for the valuable
discussions. We thank László Kovács (Kőmérő Kft., Hungary) and Tamás Bárczy
(Admatis Kft.) for producing the rock and metal foam samples.
The research reported in this paper and carried out at BME has been supported
by the grants National Research, Development and Innovation Office-NKFIH FK
134277, and by the NRDI Fund (TKP2020 NC, Grant No. BME-NC) based on the
charter of bolster issued by the NRDI Office under the auspices of the
Ministry for Innovation and Technology. This paper was supported by the János
Bolyai Research Scholarship of the Hungarian Academy of Sciences.
## 8\. Appendix: Galerkin-type solution of heat equations
Since the non-Fourier models are not well known in the general literature,
there are only a few available analytical and numerical methods to solve a
spatially nonlocal equation such as the Guyer-Krumhansl one. The nonlocal
property is a cornerstone of these models, since the usual boundary conditions
do not work in the same way. That can be a problem when the outcome seemingly
violates the maximum principle; see for instance [24], in which the
operational approach [25] is applied.
Another candidate originates from the spectral methods: the Galerkin method,
in which both the weight and the trial functions are the same. Fortunately,
following [22], we can safely apply sine and cosine trial functions in terms
of which the solution can be expressed. It is important to emphasize that we
deal with a system of partial differential equations in our case. The physical
(and mathematical) connection between the field variables restricts the family
of trial functions. Namely, even in the simplest case of the Fourier heat
equation,
$\displaystyle\partial_{t}T+\partial_{x}q=0,\quad q+\alpha\partial_{x}T=0,$
(20)
$q$ and $T$ are orthogonal to each other, and the trial functions must respect
this property. Our choice was found by the method of separation of variables;
a direct approach, however, results in an overly complicated outcome due to
the time-dependent boundary condition. In [22], the heat pulse is modeled with
a smooth $q_{0}(t)=1-\cos(2\pi t/t_{p})$ function on the $0<t\leq t_{p}$
interval. This is disadvantageous, since the most interesting part falls
beyond $t_{p}$, and the solution for $t_{p}<t$ must account for the state at
the time instant $t_{p}$ as an initial condition. Therefore, it results in
cumbersome expressions for the coefficients.
We overcome this difficulty by introducing a different function to model the
heat pulse, i.e., we use
$q_{0}(t)=-\big(\exp(-C_{1}t)-\exp(-C_{2}t)\big)/n$, where $C_{1}$ and $C_{2}$
are chosen so that $q_{0}$ is sufficiently small after $t_{p}$; hence the
values are $C_{1}=1/0.075$ and $C_{2}=6$. The coefficient
$n=(C_{1}-C_{2})/(C_{1}C_{2})$ normalizes the time integral of $q_{0}$ over
the pulse to $1$. For larger time instants, the front side becomes adiabatic.
Regarding the rear-side boundary condition, we account for heat convection in
both models.
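A quick numerical check of this pulse model (written here with a difference of exponentials, which is the form consistent with the stated normalization constant $n$) confirms that $q_{0}$ integrates to approximately one over the pulse and is already negligible shortly after $t_{p}$:

```python
import numpy as np

# Pulse model in dimensionless time (t_p = 1). The difference of exponentials
# is the form consistent with n = (C1 - C2)/(C1*C2); its integral over
# [0, infinity) is exactly 1.
C1, C2 = 1 / 0.075, 6.0
n = (C1 - C2) / (C1 * C2)

def q0(t):
    return -(np.exp(-C1 * t) - np.exp(-C2 * t)) / n

# Trapezoid rule over the pulse interval [0, 1]
t = np.linspace(0.0, 1.0, 10001)
vals = q0(t)
dt = t[1] - t[0]
integral = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # ~0.996

tail = q0(5.0)   # tiny well after t_p: front side effectively adiabatic
```

The pulse starts at zero, peaks, and decays, so almost all of the heat enters during $0<t\leq t_{p}$, matching the adiabatic front-side assumption afterwards.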
1. First, we restrict ourselves to the Fourier equation. According to our
previous experiments [1, 2, 3], one can safely use the Fourier equation where
cooling effects become significant. This solution is used to estimate the heat
transfer coefficient and the maximum temperature, and to give a first
approximation of the thermal diffusivity.
2. In the second step, we repeat the calculation for the GK model with the
same boundary conditions. We use the previously found Fourier parameters as
the input to estimate the GK parameters and fine-tune the thermal diffusivity.
The heat transfer coefficient and the temperature maximum can be kept the
same.
### 8.1. Step 1: solving the Fourier equation
While there are several available solutions in the literature, we want to see
how the Galerkin approach performs on this model with our set of dimensionless
parameters and boundary conditions. Consequently, we can keep our findings
consistent between the two heat equation models. Let us recall the
mathematical model for the sake of traceability. In the Fourier model, we have
$\displaystyle\partial_{t}T+\partial_{x}q=0,\quad q+\alpha\partial_{x}T=0,$
(21)
with
$\displaystyle
q_{0}(t)=-\frac{1}{n}\Big{(}\exp(-C_{1}t)-\exp(-C_{2}t)\Big{)},\quad
n=\frac{C_{1}-C_{2}}{C_{1}C_{2}},\quad q_{1}(t)=hT(x=1,t),$ (22)
in which all parameters are dimensionless, as presented in Sec. 2.2. The
initial conditions are $q(x,t=0)=0$ and $T(x,t=0)=0$; the conducting medium is
thermally relaxed. We emphasize that one does not need to specify boundary
conditions separately for the temperature field $T$. Regarding the heat flux
field $q$, we must separate the time-dependent part from the homogeneous one,
$\displaystyle q(x,t)=w(x,t)+\tilde{q}(x,t),\quad
w(x,t)=q_{0}(t)+x\left(q_{1}(t)-q_{0}(t)\right)$ (23)
with $\tilde{q}$ being the homogeneous field and $w$ inheriting the entire
time-dependent part from the boundary (and $x$ runs from $0$ to $1$). The
spectral decompositions of $\tilde{q}$ and $T$ are
$\displaystyle\tilde{q}(x,t)=\sum_{j=1}^{N}a_{j}(t)\phi_{j}(x),\quad
T(x,t)=\sum_{j=1}^{N}b_{j}(t)\varphi_{j}(x),$ (24)
with $\phi_{j}(x)=\sin(j\pi x)$ and $\varphi_{j}(x)=\cos(j\pi x)$. Revisiting
the boundary conditions, $q_{0}$ is trivial, and $q_{1}$ becomes
$q_{1}(t)=hT(x=1,t)=h\sum_{j=0}^{N}b_{j}(-1)^{j}$. Naturally, one also has to
represent $w(x,t)$ in the space spanned by $\phi_{j}(x)$. Once these
expressions are substituted into (21), multiplied by the corresponding weight
functions, and integrated with respect to $x$ from $0$ to $1$, one obtains a
system of ordinary differential equations (ODEs). Here, we exploit that the
squares of the trial functions, $\phi(x)^{2}$ and $\varphi(x)^{2}$, are both
integrable, and after integration they are equal to $1/2$. Since the $\cos$
series has a non-zero term for $j=0$, we handle it separately from the terms
corresponding to $j>0$.
* •
For $j=0$, we have
$\displaystyle\dot{b}_{0}+\partial_{x}w=0,\rightarrow\dot{b}_{0}=-hb_{0}+q_{0}$
(25)
with the upper dot denoting the time derivative, and $a_{0}=0$ identically.
* •
For $j>0$, we obtained
$\displaystyle\dot{b}_{j}+{j\pi}a_{j}$ $\displaystyle=0,$ (26) $\displaystyle
a_{j}$ $\displaystyle=\alpha{j\pi}b_{j}+\frac{2}{j\pi}(hb_{j}-q_{0}),$ (27)
where the term $2/(j\pi)$ comes from the $\sin$ series expansion of $w$. We
note that for $j>0$, $\partial_{x}w$ does not contribute to the time
evolution.
Such an ODE system can be solved easily, both numerically and analytically,
for suitable $q_{0}$ functions. Figure 11 shows the analytical solution,
programmed in Matlab, in order to demonstrate the convergence to the right
(physical) solution. In this respect, we refer to [26], in which a thorough
analysis is presented of the analytical and numerical solution of heat
equations beyond Fourier. Our interest is to utilize as few terms as possible
of the infinite series while still properly describing the rear-side
temperature history (i.e., the measured one) from a particular time instant
onwards. In other words, we want to simplify the complete solution as much as
possible while keeping its physical meaning.
Starting with the $j=0$ case, we find that the terms in the particular
solution with $\exp(-C_{1}t)$ and $\exp(-C_{2}t)$ decay very quickly; thus we
can safely neglect them, keeping $\exp(-ht)$ as the leading term throughout
the entire time interval we investigate. Briefly, $j=0$ yields
$\displaystyle b_{0}(t)=Y_{0}\exp(-ht),\quad
Y_{0}=(C_{1}-C_{2})/\big{(}n(C_{1}-h)(C_{2}-h)\big{)}.$ (28)
Continuing with $j=1$, we make the same simplifications, neglecting the same
exponential terms as previously after taking into account the initial
condition, and we find
$\displaystyle b_{1}(t)=Y_{1}\exp(x_{F}t),\quad
Y_{1}=2(C_{1}-C_{2})/\big{(}n(C_{1}+x_{F})(C_{2}+x_{F})\big{)},\quad
x_{F}=-2h-\alpha\pi^{2}.$ (29)
Based on the convergence analysis (Fig. 11), we suppose that these terms are
sufficient to properly describe the temperature history for $t>30$ (which is
equal to $0.3$ s if $t_{p}=0.01$ s). Finally, we can combine these solutions,
thus $T(x=1,t)=b_{0}-b_{1}$ (the alternating sign originates in $\cos(j\pi)$).
Figure 11. Convergence analysis for the Fourier equation in two different
cases on the rear side temperature history. The first one shows the adiabatic
limit, and the second one presents the case when the heat transfer coefficient
$h$ is not zero. In both cases, we applied $1$, $3$ and $20$ terms in the
spectral decomposition (24), with $\alpha=0.005$.
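The two-term rear-side solution (28)-(29) is easy to evaluate directly. The sketch below uses the $\alpha=0.005$ of the convergence study and checks the adiabatic limit, where the dimensionless rear-side temperature must approach $T_{\textrm{end}}=1$; the small $h$ of the cooled case is an illustrative assumption.

```python
import numpy as np

C1, C2 = 1 / 0.075, 6.0
n = (C1 - C2) / (C1 * C2)

def rear_side_fourier(t, alpha, h):
    """Two-term rear-side history T(1,t) = b0 - b1, Eqs. (28)-(29), valid t > 30."""
    xF = -2*h - alpha * np.pi**2
    Y0 = (C1 - C2) / (n * (C1 - h) * (C2 - h))
    Y1 = 2 * (C1 - C2) / (n * (C1 + xF) * (C2 + xF))
    return Y0 * np.exp(-h*t) - Y1 * np.exp(xF*t)

t = np.linspace(30.0, 800.0, 400)
T_adiabatic = rear_side_fourier(t, alpha=0.005, h=0.0)    # h = 0: no cooling
T_cooled = rear_side_fourier(t, alpha=0.005, h=0.001)     # assumed small h
```

Note that for $h=0$ the formula for $Y_{0}$ collapses to exactly $1$, so the adiabatic curve saturates at the dimensionless $T_{\textrm{end}}$, while any $h>0$ produces the decaying tail used in Eq. (9).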
### 8.2. Step 2: solving the Guyer-Krumhansl equation
Here, we repeat the calculations using the same set of trial and weight
functions for the GK model, that is, we solve
$\displaystyle\partial_{t}T+\partial_{x}q=0,\quad\tau_{q}\partial_{t}q+q+\alpha\partial_{x}T-\kappa^{2}\partial_{xx}q=0,$
(30)
with
$\displaystyle
q_{0}(t)=-\Big{(}\exp(-C_{1}t)-\exp(-C_{2}t)\Big{)}/n,\quad
q_{1}(t)=hT(x=1,t),\quad q(x,t=0)=0,\quad T(x,t=0)=0.$ (31)
Analogously to the Fourier case, we obtain a set of ODEs as follows.
* •
For $j=0$, we have
$\displaystyle\dot{b}_{0}+\partial_{x}w=0,\rightarrow\dot{b}_{0}=-hb_{0}+q_{0},$
(32)
which is the same as previously due to $a_{0}=0$ identically.
* •
For $j>0$, $a_{j}$ changes
$\displaystyle\dot{b}_{j}+j\pi a_{j}$ $\displaystyle=0,$ (33)
$\displaystyle\tau_{q}\dot{a}_{j}+\left(1+\kappa^{2}j^{2}\pi^{2}\right)a_{j}$
$\displaystyle=\alpha j\pi
b_{j}+\frac{2}{j\pi}\left[(hb_{j}-q_{0})+\tau_{q}(h\dot{b}_{j}-\dot{q}_{0})\right].$
(34)
Consequently, the zeroth term $b_{0}(t)$ remains the same, with the particular
solution omitted,
$\displaystyle b_{0}(t)=Y_{0}\exp(-ht),\quad
Y_{0}=(C_{1}-C_{2})/\big{(}n(C_{1}-h)(C_{2}-h)\big{)}.$ (35)
However, for $b_{1}(t)$, the particular solution $P(t)$ becomes more
important: its initial value influences the temperature history, i.e.,
$P_{0}=P(t=0)$ and $DP_{0}=\textrm{d}_{t}P(t=0)$ appear in the coefficients
$Z_{1}$ and $Z_{2}$, and the quantities $P_{0}$ and $DP_{0}$ are important in
the evaluation method, too. Thus $b_{1}(t)$ reads
$\displaystyle b_{1}(t)=Z_{1}\exp{(x_{1}t)}+Z_{2}\exp{(x_{2}t)}+P(t),\quad
Z_{1}=-\frac{DP_{0}-P_{0}x_{2}}{x_{1}-x_{2}},\quad
Z_{2}=-P_{0}+\frac{DP_{0}-P_{0}x_{2}}{x_{1}-x_{2}}.$ (36)
The exponents $x_{1}$ and $x_{2}$ depend on the GK parameters $\tau_{q}$ and
$\kappa^{2}$, and are obtained as the roots of the quadratic equation
$x_{j}^{2}+k_{1j}x_{j}+k_{2j}=0$:
$\displaystyle x_{1,2}=x_{j1,2}|j=1,\quad
x_{j1,2}=\frac{1}{2}\left(-k_{1j}\pm\sqrt{k_{1j}^{2}-4k_{2j}}\right),\quad
k_{1j}=\frac{1+\kappa^{2}j^{2}\pi^{2}}{\tau_{q}}+2h,\quad k_{2j}=\frac{\alpha
j^{2}\pi^{2}}{\tau_{q}}+\frac{2h}{\tau_{q}}.$ (37)
Furthermore, the particular solution reads as
$\displaystyle P_{j}(t)=M_{j1}\exp{(-C_{1}t)}+M_{j2}\exp{(-C_{2}t)},\quad
M_{j1}=\left(\frac{2C_{1}}{n}-\frac{2}{n\tau_{q}}\right)/\left[k_{2j}-k_{1j}C_{1}+C_{1}^{2}\right],$
$\displaystyle
M_{j2}=\left(-\frac{2C_{2}}{n}+\frac{2}{n\tau_{q}}\right)/\left[k_{2j}-k_{1j}C_{2}+C_{2}^{2}\right].$
(38)
Hence $P_{0}=M_{1}+M_{2}$ and $DP_{0}=-M_{1}C_{1}-M_{2}C_{2}$ appear in
$b_{1}(t)$ with $j=1$, too. After obtaining $Z_{1}$ and $Z_{2}$, $P(t)$ can be
neglected, since it becomes negligibly small for $t>t_{p}$. Finally, we
formulate the rear-side temperature history using $b_{0}(t)$ and $b_{1}(t)$ as
$\displaystyle
T(x=1,t)=b_{0}-b_{1}=Y_{0}\exp(-ht)-Z_{1}\exp{(x_{1}t)}-Z_{2}\exp{(x_{2}t)},$
(39)
for which Figure 12 shows the convergence property.
Figure 12. Convergence analysis for the Guyer-Krumhansl equation in two
different cases on the rear side temperature history. The first one shows the
adiabatic limit, and the second one presents the case when the heat transfer
coefficient $h$ is not zero. In both cases, we applied $1$, $3$ and $20$ terms
in the spectral decomposition (24), with $\alpha=0.005$, $\tau_{q}=1$, and
$\kappa^{2}=10\alpha\tau_{q}$.
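The structure of Eq. (37) can be checked numerically: at the Fourier resonance $\kappa^{2}/\tau_{q}=\alpha$, the quadratic factorizes so that one root reduces to the Fourier exponent $x_{F}=-2h-\alpha\pi^{2}$ (and the other to $-1/\tau_{q}$), which is a convenient consistency test. The sketch below, with illustrative parameter values, verifies this and shows the over-diffusive case with two distinct negative roots.

```python
import numpy as np

def gk_exponents(alpha, tau_q, kappa2, h, j=1):
    """Roots x_{j1,2} of x^2 + k_{1j}*x + k_{2j} = 0, Eq. (37)."""
    k1 = (1 + kappa2 * j**2 * np.pi**2) / tau_q + 2*h
    k2 = (alpha * j**2 * np.pi**2 + 2*h) / tau_q
    disc = np.sqrt(k1**2 - 4*k2)
    return (-k1 + disc) / 2, (-k1 - disc) / 2   # x1 (slow), x2 (fast)

alpha, tau_q, h = 0.005, 1.0, 0.002   # illustrative dimensionless values

# Resonance case kappa2 = alpha*tau_q: the GK exponent degenerates to Fourier
x1, x2 = gk_exponents(alpha, tau_q, kappa2=alpha * tau_q, h=h)
xF = -2*h - alpha * np.pi**2

# Over-diffusive case kappa2/(alpha*tau_q) = 2: two distinct negative roots
x1_over, x2_over = gk_exponents(alpha, tau_q, kappa2=2 * alpha * tau_q, h=h)
```

This degeneracy is exactly the Fourier resonance condition of Sec. 2.2 seen at the level of the exponents, and it explains why the evaluation method needs both $x_{1}$ and $x_{2}$ only away from resonance.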
## References
* [1] S. Both, B. Czél, T. Fülöp, Gy. Gróf, Á. Gyenis, R. Kovács, P. Ván, and J. Verhás. Deviation from the Fourier law in room-temperature heat pulse experiments. Journal of Non-Equilibrium Thermodynamics, 41(1):41–48, 2016.
* [2] P. Ván, A. Berezovski, T. Fülöp, Gy. Gróf, R. Kovács, Á. Lovas, and J. Verhás. Guyer-Krumhansl-type heat conduction at room temperature. EPL, 118(5):50005, 2017. arXiv:1704.00341v1.
* [3] T. Fülöp, R. Kovács, Á. Lovas, Á. Rieth, T. Fodor, M. Szücs, P. Ván, and Gy. Gróf. Emergence of non-Fourier hierarchies. Entropy, 20(11):832, 2018. arXiv:1808.06858.
* [4] P. Ván. Theories and heat pulse experiments of non-Fourier heat conduction. Communications in Applied and Industrial Mathematics, 7(2):150–166, 2016.
* [5] R. A. Guyer and J. A. Krumhansl. Solution of the linearized phonon Boltzmann equation. Physical Review, 148(2):766–778, 1966.
* [6] P. Ván. Weakly nonlocal irreversible thermodynamics – the Guyer-Krumhansl and the Cahn-Hilliard equations. Physic Letters A, 290(1-2):88–92, 2001.
* [7] P. Ván and T. Fülöp. Universality in heat conduction theory – weakly nonlocal thermodynamics. Annalen der Physik (Berlin), 524(8):470–478, 2012.
* [8] L. Tisza. Transport phenomena in Helium II. Nature, 141:913, 1938.
* [9] L. Landau. On the theory of superfluidity of Helium II. Journal of Physics, 11(1):91–92, 1947.
* [10] R. A. Guyer and J. A. Krumhansl. Thermal Conductivity, Second Sound, and Phonon Hydrodynamic Phenomena in Nonmetallic Crystals. Physical Review, 148:778–788, 1966.
* [11] K. Mitra, S. Kumar, A. Vedevarz, and M. K. Moallemi. Experimental evidence of hyperbolic heat conduction in processed meat. Journal of Heat Transfer, 117(3):568–573, 1995.
* [12] V. Józsa and R. Kovács. Solving Problems in Thermal Engineering: A Toolbox for Engineers. Springer, 2020.
* [13] R. Kovács and P. Ván. Generalized heat conduction in heat pulse experiments. International Journal of Heat and Mass Transfer, 83:613 – 620, 2015\.
* [14] W. Dreyer and H. Struchtrup. Heat pulse experiments revisited. Continuum Mechanics and Thermodynamics, 5:3–50, 1993.
* [15] I. Müller and T. Ruggeri. Rational Extended Thermodynamics. Springer, 1998.
* [16] D. Y. Tzou. Longitudinal and transverse phonon transport in dielectric crystals. Journal of Heat Transfer, 136(4):042401, 2014.
* [17] S. A. Rukolaine. Unphysical effects of the dual-phase-lag model of heat conduction. International Journal of Heat and Mass Transfer, 78:58–63, 2014\.
* [18] S. A. Rukolaine. Unphysical effects of the dual-phase-lag model of heat conduction: higher-order approximations. International Journal of Thermal Sciences, 113:83–88, 2017.
* [19] M. Fabrizio, B. Lazzari, and V. Tibullo. Stability and thermodynamic restrictions for a dual-phase-lag thermal model. Journal of Non-Equilibrium Thermodynamics, 2017. Published Online:2017/01/10.
* [20] M. Fabrizio and F. Franchi. Delayed thermal models: stability and thermodynamics. Journal of Thermal Stresses, 37(2):160–173, 2014.
* [21] B. Nyíri. On the entropy current. Journal of Non-Equilibrium Thermodynamics, 16(2):179–186, 1991\.
* [22] R. Kovács. Analytic solution of Guyer-Krumhansl equation for laser flash experiments. International Journal of Heat and Mass Transfer, 127:631–636, 2018\.
* [23] T. Fülöp, R. Kovács, and P. Ván. Thermodynamic hierarchies of evolution equations. Proceedings of the Estonian Academy of Sciences, 64(3):389–395, 2015.
* [24] K. Zhukovsky. Violation of the maximum principle and negative solutions for pulse propagation in Guyer–Krumhansl model. International Journal of Heat and Mass Transfer, 98:523–529, 2016\.
* [25] K. V. Zhukovsky. Exact solution of Guyer–Krumhansl type heat equation by operational method. International Journal of Heat and Mass Transfer, 96:132–144, 2016\.
* [26] Á. Rieth, R. Kovács, and T. Fülöp. Implicit numerical schemes for generalized heat conduction equations. International Journal of Heat and Mass Transfer, 126:1177 – 1182, 2018.
# Experimental demonstration of negative refraction with 3D locally resonant
acoustic metafluids
Benoit Tallon Univ. Bordeaux, CNRS, Bordeaux INP, ENSAM, I2M, UMR 5295,
F-33405, Talence, France Artem Kovalenko Univ. Bordeaux, CNRS, CRPP,
F-33600, Pessac, France Olivier Poncelet Univ. Bordeaux, CNRS, Bordeaux INP,
ENSAM, I2M, UMR 5295, F-33405, Talence, France Christophe Aristégui Univ.
Bordeaux, CNRS, Bordeaux INP, ENSAM, I2M, UMR 5295, F-33405, Talence, France
Olivier Mondain-Monval Univ. Bordeaux, CNRS, CRPP, F-33600, Pessac, France
Thomas Brunet Univ. Bordeaux, CNRS, Bordeaux INP, ENSAM, I2M, UMR 5295,
F-33405, Talence, France [email protected]
###### Abstract
Negative refraction of acoustic waves is demonstrated through underwater
experiments conducted at ultrasonic frequencies on a 3D locally resonant
acoustic metafluid made of soft porous silicone-rubber micro-beads suspended
in a yield-stress fluid. By measuring the refracted angle of the acoustic beam
transmitted through this metafluid shaped as a prism, we determine the
acoustic index relative to water according to Snell's law. These experimental
data are then compared, with excellent agreement, to calculations performed in
the framework of multiple scattering theory, showing that the emergence of
negative refraction depends on the volume fraction $\Phi$ of the resonant
micro-beads. For the diluted metafluid ($\Phi=3\%$), only positive refraction
occurs, whereas negative refraction is demonstrated over a broad frequency
band with the concentrated metafluid ($\Phi=17\%$).
## Introduction
Since the pioneering works reported by Liu et al.[1], locally resonant
acoustic metamaterials have been attracting great attention [2]. One of the
challenging issues has been the achievement of acoustic metamaterials with a
negative refractive index that offer new possibilities for acoustic imaging
materials and for the control of sound at sub-wavelength scales [3]. In
acoustics, the refractive index $n$ is proportional to $\sqrt{\rho/K}$, where
$K$ and $\rho$ are the bulk modulus and the mass density of the material. Many
works have been devoted to the study of double-negative metamaterials [4, 5,
6, 7, 8, 9, 10, 11, 12] for which the two constitutive parameters $K$ and
$\rho$ are simultaneously negative, thus leading to a negative index [13]. It
is worth noting that such a double-negativity condition is not required for
(real) dissipative metamaterials to exhibit a negative index [14]. When they
are non-negligible, losses may play an important role in the effective
acoustic properties of metamaterials, in such a way that the real part of the
acoustic index can be negative for a single-negative metamaterial, as
demonstrated in underwater ultrasonic experiments [15] and in air at audible
frequencies [16]. Although the latter work reported the experimental
observation of negative refraction effects within a 2D acoustic superlens,
negative refraction had never been observed with a 3D acoustic metamaterial
until now.
In that context, we proposed to use a "soft" approach, combining various soft-
matter techniques, to achieve soft 3D acoustic metamaterials with a negative
index, composed of resonant porous micro-beads randomly dispersed in a yield-
stress fluid [17]. By taking advantage of the strong low-frequency Mie-type
(monopolar and dipolar) resonances of these "ultra-slow" particles, single-
band [15] and dual-band [18] negative refractive indices were experimentally
demonstrated. The question of the experimental observation of negative
refraction was then raised for these water-based metamaterials [19], since the
energy attenuation might be significant in these metafluids due to the
intrinsic absorption in the porous micro-beads and to the strong resonant
scattering by the particles.
In this paper, we report on negative refraction experiments with metafluids,
composed of soft porous silicone-rubber micro-beads, exhibiting a negative
acoustic index at ultrasonic frequencies[15]. In these experiments, the
metafluid is confined in a prism-shaped box with a small angle
($\theta_{\rm{fluid}}=+2$°) and very thin plastic walls in order to
be acoustically transparent. As shown in Fig. 1, this metafluid is directly
deposited on a large ultrasonic immersion transducer operating in water over a
broad ultrasonic frequency range (from 15 kHz to 600 kHz). The large
dimensions of this transmitter ensure the generation of quasi-pure plane
waves propagating in the metafluid along the vertical $z$-axis, from the
bottom to the top. The transmitted acoustic beam refracted at the
metafluid/water interface is then scanned in the water tank by using a small
ultrasonic probe. Two samples, composed of micro-beads with similar mean
diameters $d$ of about 750 $\mu$m, are considered in this study with two
different volume fractions $\Phi$, referred to as the diluted metafluid
($\Phi=3\%$) and the concentrated metafluid ($\Phi=17\%$). The acoustic
properties of the micro-beads are given in a previous work [20]. One of the
particular features of these porous particles is their very low longitudinal
phase velocity $c_{L}$ ($=120$ m.s$^{-1}$), which is due to the softness of
the silicone-rubber material [21].
Figure 1: (left) Top view and (right) side view of the experimental setup. The
metafluid is confined in a prism-shaped box, with an angle
$\theta_{\rm{fluid}}=+2$°, that is deposited directly on a large broad-band
ultrasonic immersion transducer (150 mm x 40 mm). The acoustic waves propagate
from the bottom to the top of the water tank, along the vertical $z$-axis. The
acoustic field refracted from the prism is scanned in the $x$-$z$ plane (33 mm
x 33 mm) with a small ultrasonic probe.
## Results
When excited with a short electrical pulse, the large ultrasonic transducer
generates a short broad-band acoustic pulse propagating in the confined fluid
along the $z$-axis only. At the interface between the confined fluid and
water, the acoustic pulse is then refracted in the surrounding water with an
angle of refraction $\theta_{\rm{water}}$ shown in Fig. 1. By measuring
$\theta_{\rm{water}}$, the acoustic index $n_{\rm{fluid}}$ of the fluid
confined in the prism can be easily deduced from Snell's law as follows:
$n_{\rm{fluid}}=n_{\rm{water}}\frac{\rm{sin}(\theta_{\rm{water}})}{\rm{sin}(\theta_{\rm{fluid}})}$
(1)
with $n_{\rm{water}}=1$ since the acoustic index $n$ ($=c_{0}/c$) of a
material with the phase velocity $c$ is usually defined relatively to water
($c_{0}=c_{\textrm{water}}$) for underwater acoustics. In these experiments,
the acoustic index $n_{\rm{fluid}}$ cannot be directly retrieved from the
refracted transmitted temporal signals measured in the $x$-$z$ plane as
depicted in Fig. 1, since a short broad-band acoustic pulse has been used in
these pulsed ultrasonic experiments. In such a broad frequency range (from 15
kHz to 600 kHz), the acoustic index of a concentrated metafluid is expected to
exhibit strong variations, ranging from high positive values to negative
ones [15]. Because of these potentially strong dispersion effects, we focus here on a
harmonic spectral analysis of the refracted beams by performing Fourier
transforms over time and space. First, we performed time-domain Fourier
transforms of all the signals acquired at each position over a 2D spatial grid
(Fig. 2.a.) in order to obtain the spatial-field at each frequency component
(Fig. 2.b.). Then a 2D spatial Fourier transform is applied at each frequency
for getting the wavenumber spectrum of the harmonic beam (Fig. 2.c.). Since
the beams refracted in water are quasi-plane waves, it is straightforward to
extract accurately their direction of propagation from the (real-valued)
spatial Fourier components $k^{\textrm{r}}_{\textrm{x}}$ and
$k^{\textrm{r}}_{\textrm{z}}$ at the peak amplitude of the wavenumber
spectrum.
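The extraction pipeline described above (a Fourier transform at each scan position, then a 2D spatial Fourier transform, then peak-picking in the $k$-map) can be sketched on synthetic data as follows; the grid size, frequency and refraction angle below are illustrative choices, not the measured values.

```python
import numpy as np

# Synthetic scan: a harmonic plane wave refracted at 5 deg from the z-axis.
dx = 1e-3                                  # 1 mm scan step, as in Methods
nx = nz = 64
x = np.arange(nx) * dx
z = np.arange(nz) * dx
X, Z = np.meshgrid(x, z, indexing="ij")

theta = np.deg2rad(5.0)                    # assumed beam direction
k0 = 2 * np.pi * 175e3 / 1490.0            # wavenumber in water at 175 kHz
field = np.exp(1j * k0 * (np.sin(theta) * X + np.cos(theta) * Z))

# 2D spatial Fourier transform, zero-padded to refine the k-map grid
npad = 512
F = np.fft.fftshift(np.fft.fft2(field, s=(npad, npad)))
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(npad, d=dx))
kz = kx.copy()

# The peak of the wavenumber spectrum gives the propagation direction
ix, iz = np.unravel_index(np.argmax(np.abs(F)), F.shape)
theta_est = np.arctan2(kx[ix], kz[iz])     # recovered beam direction (rad)
```

In the experiment the time-domain Fourier transform is applied first at each scan position; here the synthetic field is already monochromatic, so only the spatial step is shown.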
Figure 2: (a) Snapshot of the measured pressure field refracted from the prism
filled with concentrated metafluid ($\Phi=17\%$). (b) Corresponding angular
phase, shown here at 175 kHz, obtained from the time-domain Fourier transforms
performed for each position over the square grid shown in (a). (c) 2D spatial
Fourier transform of the scanned field pattern at 175 kHz for the measurement
of the (real-valued) spatial Fourier components $k^{\textrm{r}}_{\textrm{x}}$
and $k^{\textrm{r}}_{\textrm{z}}$ of the refracted beam in water. Figure 3: 2D
Fourier transforms of the scanned field pattern performed at different
frequencies: 40, 130, 160, 200, 250 kHz (from top to bottom). The prism is
filled with either water (left), diluted metafluid (center) or concentrated
metafluid (right). On all maps, the white vertical axis ($k_{\textrm{x}}=0$)
corresponds to the case in which
$\theta_{\textrm{water}}=\theta_{\textrm{fluid}}$ meaning that
$n_{\textrm{fluid}}=+1$, whereas the tilted white axis
($\arctan(k_{\textrm{x}}/k_{\textrm{z}})=-\theta_{\rm{fluid}}$) corresponds to
the case in which $\theta_{\textrm{water}}=0$ meaning that
$n_{\textrm{fluid}}=0$. The Fourier transforms imply a plane wave in the
positive $z$ direction.
From the acoustic field map shown at a given time in Fig. 2.a, we can get the
corresponding map of the angular phase for each frequency component of the
refracted beam. As an example, Fig. 2.b shows the angular phase of the
refracted beam at $f=175$ kHz revealing an angle of refraction of a few
degrees from the initially vertical propagation direction. Such a deviation
can be estimated by measuring the tilted angle of these refracted wavefronts
in the $x$-$z$ plane, but this direct spatial measurement may suffer from
fluctuations observed on the wavefronts. As an alternative, we performed 2D
Fourier transforms of the scanned field patterns at different frequencies to
infer the $k^{\textrm{r}}_{\textrm{x}}$ and $k^{\textrm{r}}_{\textrm{z}}$
coordinates of the wave vector $\textbf{k}_{\textrm{water}}$ in the
$k_{\textrm{x}}$-$k_{\textrm{z}}$ plane. The values of these two components
are given by the location of the maximum spot intensity shown in Fig. 2.c.
Then, the angle of refraction $\theta_{\rm{water}}$ can be easily deduced for
each frequency as follows:
$\theta_{\rm{water}}=\arctan(\frac{k^{\textrm{r}}_{\textrm{x}}}{k^{\textrm{r}}_{\textrm{z}}})+\theta_{\rm{fluid}}$
(2)
In these experiments, the coordinate $k^{\textrm{r}}_{\textrm{z}}$ is
necessarily positive since the acoustic waves always propagate from the bottom
to the top of the water tank along the $z$-axis. However, the coordinate
$k^{\textrm{r}}_{\textrm{x}}$ may be either positive or negative depending on
the direction of the refracted beam. The acoustic index $n_{\rm{fluid}}$ of
the fluid confined in the prism can be deduced from the values of the angle
$\theta_{\rm{water}}$ by using Eq. 1. When the prism is filled with water
(Fig. 3, left), the spot moves along the vertical $k_{z}$-axis, for which
$k^{\textrm{r}}_{\textrm{x}}=0$ (white vertical line), as the frequency
increases, leading to $\theta_{\rm{water}}=\theta_{\rm{fluid}}$ according to
Eq. 2. Therefore, no refraction occurs in that case, as expected, since the
material is the same (water) on both sides of the interface. Note that if the
spot had moved along the white tilted line shown in Fig. 3, for which
$\arctan(k^{\textrm{r}}_{\textrm{x}}/k^{\textrm{r}}_{\textrm{z}})=-\theta_{\textrm{fluid}}$,
this would lead to $\theta_{\textrm{water}}=0$, thus corresponding to a zero-
index fluid confined in the prism.
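A minimal sketch of the index retrieval of Eqs. 1 and 2, with hypothetical Fourier components standing in for the measured peak position:

```python
import numpy as np

def acoustic_index(kx, kz, theta_fluid_deg=2.0, n_water=1.0):
    """Index retrieval from the spatial Fourier components of the refracted
    beam, combining Eq. (2) (angle) and Eq. (1) (Snell's law)."""
    theta_fluid = np.deg2rad(theta_fluid_deg)
    theta_water = np.arctan2(kx, kz) + theta_fluid              # Eq. (2)
    return n_water * np.sin(theta_water) / np.sin(theta_fluid)  # Eq. (1)

# Sanity checks against the two white reference axes of Fig. 3:
# kx = 0 (vertical axis)            -> theta_water = theta_fluid -> n = +1
# arctan(kx/kz) = -theta_fluid axis -> theta_water = 0           -> n = 0
# still more negative kx            -> theta_water < 0           -> n < 0
```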
When the prism is filled with the diluted metafluid ($\Phi=3\%$, $d=750$
$\mu$m with a size dispersion of 30%), the spot oscillates around the white
vertical axis as the frequency increases (Fig. 3, center). The values of
$n_{\textrm{fluid}}$ extracted from Eqs. 1 and 2 show that the acoustic index
of this diluted metafluid varies from high values at low frequencies
($n_{\textrm{fluid}}=+4$ at 40 kHz) to low values at intermediate frequencies
($n_{\textrm{fluid}}=+0.5$ around 130 kHz) before getting closer to +1 at high
frequencies as shown in Fig. 4. This strong dispersion is due to the micro-
bead resonances that occur around 150 kHz for that size of particles. Far away
from these low-frequency acoustic resonances, the acoustic index of the
metafluid is similar to that of the aqueous matrix[15].
We also produced a concentrated metafluid ($\Phi=17\%$, $d=700$ $\mu$m with a
size dispersion of 10%) in which negative refraction occurs for a certain
range of frequencies. Actually, Fig. 3 (right) shows that the spot can go
beyond the white tilted axis (corresponding to $n_{\textrm{fluid}}=0$) as
observed at 160 kHz. The experimental results are not shown at 40 kHz because
of the very high attenuation due to the strong (monopolar) low-frequency
resonances of the micro-beads [15]. The acoustic index is, however, negative
over a broad frequency range, as shown in Fig. 4. Note that this
’negative band’ is slightly shifted to higher frequencies compared to the
diluted metafluid because of the smaller size of the micro-beads of this
concentrated metafluid [22].
Figure 4: Acoustic index $n_{\textrm{fluid}}$ for different fluids confined in
the prism-shaped box and extracted as a function of frequency from the
$k$-maps. Dashed lines refer to calculations performed in the framework of
multiple scattering theory (see the text).
Finally, we compared our acoustical measurements to theoretical predictions
produced through multiple-scattering modeling, revealing good qualitative
agreement, as shown in Fig. 4. The theoretical acoustic indices of the
metafluids ($\Phi=3\%$ and $17\%$) are obtained from their effective
wavenumbers calculated using the Waterman-Truell formula [23]. The material
parameters of the soft porous silicone rubber used in these calculations were
$c_{L}$ = 120 m.s$^{-1}$ and $\alpha_{L}$ = 20 Np.m$^{-1}$.MHz$^{-2}$ (the
phase velocity and attenuation coefficient for longitudinal waves), $c_{T}$ =
40 m.s$^{-1}$ and $\alpha_{T}$ = 200 Np.m$^{-1}$.MHz$^{-2}$ (the phase
velocity and attenuation coefficient for shear waves), and $\rho_{1}$ = 760
kg.m$^{-3}$. The water-based gel matrix has the same properties as water
($\rho_{0}$ = 1000 kg.m$^{-3}$, $c_{0}$ = 1490 m.s$^{-1}$) since this host
Bingham fluid is essentially made of water with a very small amount of polymer
(Carbopol) added to prevent the creaming of the porous micro-beads.
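A sketch of the Waterman-Truell estimate behind the dashed curves of Fig. 4; the single-bead scattering amplitudes `f0` and `fpi` below are placeholders (in practice they come from the resonant-scattering solution for a soft porous sphere with the material parameters listed above).

```python
import numpy as np

def waterman_truell_keff(k0, n0, f0, fpi):
    """Waterman-Truell effective wavenumber of a random suspension:
    (k_eff/k0)^2 = [1 + 2*pi*n0*f(0)/k0^2]^2 - [2*pi*n0*f(pi)/k0^2]^2,
    with n0 the number density of scatterers and f(0), f(pi) the forward
    and backward scattering amplitudes of a single bead."""
    a = 2 * np.pi * n0 / k0**2
    # np.sqrt takes the principal branch; for a lossy medium one should
    # check that Im(k_eff) > 0 (decaying wave) and switch branch otherwise.
    return k0 * np.sqrt((1 + a * f0)**2 - (a * fpi)**2 + 0j)

k0 = 2 * np.pi * 175e3 / 1490.0      # host (water) wavenumber at 175 kHz
phi, d = 0.17, 700e-6                # concentrated sample of the paper
n0 = phi / (np.pi * d**3 / 6)        # beads per unit volume
f0 = fpi = -1e-4 + 2e-5j             # hypothetical resonant amplitudes (m)

keff = waterman_truell_keff(k0, n0, f0, fpi)
n_fluid = keff.real / k0             # acoustic index entering Fig. 4
```

With vanishing scattering amplitudes the formula reduces to the host wavenumber, which is a convenient consistency check.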
## Conclusion
In summary, we have reported an experimental demonstration of negative
refraction in a 3D acoustic metamaterial. The experiments have been conducted
at ultrasonic frequencies with a locally resonant metafluid composed of soft
porous silicone-rubber micro-beads, whose concentration must be high enough
for the refraction to be negative. However, the achievement of acoustic
devices based on negative refraction, such as the perfect lenses envisioned by
Pendry [24], seems unattainable with our metafluids because of their large
attenuation, which is mainly due to the strong resonant scattering [15]. As an
alternative to bulky and lossy 3D metamaterials, acoustic metasurfaces [25]
might be much more appropriate to manipulate acoustic wavefronts, as recently
demonstrated with soft gradient-index metasurfaces [26].
## Methods
In this study, we used a large ultrasonic transducer (150 mm x 40 mm), with a
central frequency of 150 kHz, to ensure the generation of quasi-pure plane
waves in the water tank (propagating along the vertical axis from the bottom
to the top of the water tank). This broad-band transducer was excited with a
short electrical pulse generated by a pulser/receiver (Olympus, 5077PR) that
was also used to amplify the electric signal recorded by the 1-inch-diameter
receiving transducer (Olympus V301) before its acquisition on a computer via a
waveform digitizer card (AlazarTech, ATS460). The angle of the prism has been
chosen here as low as possible ($\theta_{\rm{fluid}}=+2$°) in order to
guarantee that the refracted-beam amplitude does not vary too much along the
x-axis in spite of the slightly increasing depth of the prism. For the
experiments conducted with the prism filled with water (Fig. 3, left) or with
the diluted metafluid (Fig. 3, centre), the width of the scan area along the
x-axis was 60 mm. For the concentrated metafluid (Fig. 3, right), this width
was reduced to 30 mm, inducing a slight spread of the Fourier-transform
distribution along the $k_{x}$-axis. The acoustic fields refracted in water
were scanned on grids with a step of 1 mm, which is 3 times smaller than the
acoustic wavelength in water at 500 kHz ($\approx 3$ mm). Note that we also
applied zero-padding before computing the spatial Fast Fourier Transforms to
improve the resolution of the $k$-maps shown in Fig. 3.
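The effect of zero-padding on the $k$-map resolution can be checked with a two-line estimate; the padded length `n_pad` below is our own illustrative choice, while the 1 mm step and 33 mm scan width are taken from the Methods above.

```python
import numpy as np

dx = 1e-3                   # 1 mm scan step
n_meas, n_pad = 33, 512     # 33 mm scan area; padded length is our choice
dk_meas = 2 * np.pi / (n_meas * dx)   # k-grid step without padding (rad/m)
dk_pad = 2 * np.pi / (n_pad * dx)     # k-grid step after zero-padding
# Padding refines the k-grid by n_pad/n_meas (~15.5x here), sharpening the
# localisation of the spectral peak and hence the angle estimate.
```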
## References
* [1] Liu, Z. _et al._ Locally resonant sonic materials. _Science_ 289, 1734–1736 (2000).
* [2] Ma, G. & Sheng, P. Acoustic metamaterials: from local resonances to broad horizons. _Sci. Adv._ 2, e1501595 (2016).
* [3] Cummer, S. A., Christensen, J. & Alù, A. Controlling sound with acoustic metamaterials. _Nat. Rev. Mater._ 1, 16001 (2016).
* [4] Li, J. & Chan, C. T. Double-negative acoustic metamaterial. _Phys. Rev. E_ 70, 055602 (2004).
* [5] Lee, S. H., Park, C. M., Seo, Y. M., Wang, Z. G. & Kim, C. K. Composite acoustic medium with simultaneously negative density and modulus. _Phys. Rev. Lett._ 104, 054301 (2010).
* [6] Liang, Z., Willatzen, M., Li, J. & Christensen, J. Tunable acoustic double negativity metamaterial. _Sci. Rep._ 2, 859 (2012).
* [7] Yang, M., Ma, G., Yang, Z. & Sheng, P. Coupled membranes with doubly negative mass density and bulk modulus. _Phys. Rev. Lett._ 110, 134301 (2013).
* [8] Maurya, S. K., Pandey, A., Shukla, S. & Saxena, S. Double negativity in 3D space-coiling metamaterials. _Sci. Rep._ 6, 33683 (2016).
* [9] Lanoy, M. _et al._ Acoustic double negativity induced by position correlations within a disordered set of monopolar resonators. _Phys. Rev. B_ 96, 220201 (2017).
* [10] Zhou, Y., Fang, X., Li, D., Hao, T. & Li, Y. Acoustic multiband double negativity from coupled single-negative resonators. _Phys. Rev. Applied_ 10, 044006 (2018).
* [11] Wang, W., Bonello, B., Djafari-Rouhani, B., Pennec, Y. & Zhao, J. Double-negative pillared elastic metamaterial. _Phys. Rev. Applied_ 10, 064011 (2018).
* [12] Dong, H.-W., Zhao, S.-D., Wang, Y.-S., Cheng, L. & Zhang, C. Robust 2D/3D multi-polar acoustic metamaterials with broadband double negativity. _J. Mech. Phys. Solids_ 137, 103889 (2020).
* [13] Li, J., Fung, K. H., Liu, Z. Y., Sheng, P. & Chan, C. T. _Physics of Negative Refraction and Negative Index Materials_. edited by C. M. Krowne and Y. Zong (Springer, Berlin, 2007).
* [14] Brunet, T., Poncelet, O. & Aristégui, C. Negative-index metamaterials: is double negativity a real issue for dissipative media? _EPJ Appl. Metamat._ 2, 3 (2015).
* [15] Brunet, T. _et al._ Soft 3D acoustic metamaterial with negative index. _Nat. Mater._ 14, 384–388 (2015).
* [16] Kaina, N., Lemoult, F., Fink, M. & Lerosey, G. Negative refractive index and acoustic superlens from multiple scattering in single negative metamaterials. _Nature_ 525, 77–81 (2015).
* [17] Brunet, T., Leng, J. & Mondain-Monval, O. Soft acoustic metamaterials. _Science_ 342, 323–324 (2013).
* [18] Raffy, S., Mascaro, B., Brunet, T., Mondain-Monval, O. & Leng, J. A soft 3D acoustic metafluid with dual-band negative refractive index. _Adv. Mater._ 28, 1760–1764 (2016).
* [19] Popa, B.-I. & Cummer, S. A. Water-based metamaterials: Negative refraction of sound. _Nat. Mater._ 14, 363–364 (2015).
* [20] Kovalenko, A., Zimny, K., Mascaro, B., Brunet, T. & Mondain-Monval, O. Tailoring of the porous structure of soft emulsion-templated polymer materials. _Soft Matter_ 12, 5154–5163 (2016).
* [21] Ba, A., Kovalenko, A., Aristégui, C., Mondain-Monval, O. & Brunet, T. Soft porous silicone rubbers with ultra-low sound speeds in acoustic metamaterials. _Sci. Rep._ 7, 40106 (2017).
* [22] Brunet, T. _et al._ Sharp acoustic multipolar-resonances in highly monodisperse emulsions. _Appl. Phys. Lett._ 101, 011913 (2012).
* [23] Waterman, P. C. & Truell, R. Multiple scattering of waves. _J. Math. Phys._ 2, 512–537 (1961).
* [24] Pendry, J. B. Negative refraction makes a perfect lens. _Phys. Rev. Lett._ 85, 3966–3969 (2000).
* [25] Assouar, B. _et al._ Acoustic metasurfaces. _Nat. Rev. Mater._ 3, 460–472 (2018).
* [26] Jin, Y., Kumar, R., Poncelet, O., Mondain-Monval, O. & Brunet, T. Flat acoustics with soft gradient-index metasurfaces. _Nat. Commun._ 10, 143 (2019).
## Acknowledgements
This work was partially funded and performed within the framework of the Labex
AMADEUS ANR-10-LABEX-0042-AMADEUS with the help of the French state Initiative
d’Excellence IdEx ANR-10-IDEX-003-02 and project BRENNUS ANR-15-CE08-0024 (ANR
and FRAE funds).
## Author contributions statement
T.B. supervised the project, A.K. produced the soft porous silicone-rubber
micro-beads to achieve the metafluids under the guidance of O.M.-M., B.T. and
T.B. conducted the underwater experiments, B.T., T.B., C.A., and O.P analyzed
the results. All authors reviewed the manuscript.
## Additional information
Competing financial interests The authors declare no competing financial
interests.
# On relation between generalized diffusion equations and subordination
schemes
A. Chechkin [email protected] Institute of Physics and Astronomy,
Potsdam University, Karl-Liebknecht-Strasse 24/25, 14476 Potsdam-Golm, Germany
Akhiezer Institute for Theoretical Physics, Akademicheskaya Str. 1, 61108
Kharkow, Ukraine I.M. Sokolov [email protected] Institut für
Physik and IRIS Adlershof, Humboldt Universität zu Berlin, Newtonstraße 15,
12489 Berlin, Germany
###### Abstract
Generalized (non-Markovian) diffusion equations with different memory kernels
and subordination schemes based on random time change in the Brownian
diffusion process are popular mathematical tools for description of a variety
of non-Fickian diffusion processes in physics, biology and earth sciences.
Some such processes (notably, the fluid limits of continuous time random
walks) allow for either kind of description, but others do not. In the
present work we discuss the conditions under which a generalized diffusion
equation does correspond to a subordination scheme, and the conditions under
which a subordination scheme does possess a corresponding generalized
diffusion equation. Moreover, we discuss examples of random processes for
which only one, or both, kinds of description are applicable.
## I Introduction
In his seminal paper of 1961, Robert Zwanzig introduced a generalized non-
Markovian Fokker-Planck equation Zwanzig with a memory kernel, a GFPE in what
follows. The work was much cited because the projection operator formalism on
which the derivation of this equation is based has found applications in a
variety of problems in non-equilibrium statistical physics Grabert . The
equation itself was, however, hardly used, except for obtaining Markovian
approximations. The situation changed when generalized Fokker-Planck equations
with power-law memory kernels gained popularity in different fields. This kind
of GFPEs, called fractional Fokker-Planck equations (FFPEs), describes the
continuous (long time-space) limit of continuous time random walks (CTRWs)
with power-law waiting times Hilfer ; Compte ; MeerScheff2019 , which gives a
physical foundation for, and explains the broad applicability of, FFPEs. Such
equations have proved useful for the description of anomalous transport
processes in different media. Applications range from
charge transport in amorphous semiconductors to underground water pollution
and motion of subcellular units in biology, see, e.g., the reviews
MetzlerKlafter ; MeKla-2 ; PhysToday and references therein, as well as the
Chapters in collective monographs AnoTrans ; FracDyn . Further important
generalizations involve kernels consisting of mixtures of power laws, which
correspond to distributed-order fractional derivatives CheGorSok2002 ;
ChechkinFCAA ; CheKlaSok2003 ; APPB ; Naber2004 ; SoKla2005 ; SokChe2005 ;
UmaGor2005 ; MeerScheff2005 ; MeerScheff2006 ; Langlands ; Hanyga ; MaiPag2007
; MaiPaGo2007 ; CheGorSok2008 ; Kochubei2008 ; Meer2011 ; CheSoKla2012 , or
truncated (tempered) power laws SokCheKlaTruncated ; Stanislavsky1 ; MeerGRL ;
Baeumer . Other kernels in use include combinations of power laws with Mittag-
Leffler functions and with generalized Mittag-Leffler functions (Prabhakar
derivatives) TriChe2015 ; StanWer2016 ; Trifce1 ; Trifce2 ; StanWer2018 ;
Trifce3 ; StanWer2019 .
It is well known that the Markovian, "normal" Fokker-Planck equation can be
obtained from the Langevin equation, the stochastic differential equation for
Brownian motion under the action of an external force Chandra . Similarly, the
FFPE follows from two Langevin equations giving a parametric representation of
the time dependence of the coordinate. These equations describe the evolution
of the coordinate and of the physical time as functions of an internal
time-like variable (operational time) Fogedby ; Baule1 ; Baule2 ; Kleinhans ;
Hofmann ; Trifce4 . Such an approach is closely related to the concept of
subordination, i.e. a random time change in a random process: a process
$X(\tau(t))$ is said to be subordinated to the parent process $X(\tau)$ under
the operational time $\tau(t)$, the latter being a random process with
non-negative increments Feller . The FFPEs discussed above thus describe
Brownian motion, possibly
in a force field, under a random time change, i.e. a process subordinated to a
Brownian motion with or without drift Meerschaert1 ; Stanislavsky2 ; Gorenflo1
; Gorenflo2 ; Gorenflo . Subordination schemes not only deliver a method of
analytical solution of FFPE SaiZasl ; Eli ; Meerschaert4 ; Meerschaert2 or
GFPE SokSub by its integral transformation to the usual, Markovian,
counterpart, but also give the possibility of stochastic simulations of the
processes governed by GFPEs Kleinhans ; SaiUt1 ; MaiPa2003 ; Piryatinska ;
MaiPa2006 ; GoMai2007 ; Marcin1 ; Marcin2 ; Gajda ; Meerschaert3 ;
Stanislavsky3 ; Annunziato ; Marcin3 ; Marcin4 ; Stanislavsky4 ; Stanislavsky5
.
The subordination approach was also used in BNG ; Vittoria1 ; Vittoria2 to
describe, within the diffusing diffusivity model, the recently discovered but
widespread phenomenon of Brownian yet non-Gaussian diffusion: a kind of
diffusion process in which the mean squared displacement (MSD) grows linearly
in time, like in normal, Fickian diffusion, but the probability density of the
particles’ displacements shows a (double-sided) exponential rather than
Gaussian distribution, at least at short or intermediate times Wang1 ; Wang2 ;
Chub ; Sebastian ; Cherail ; Grebenkov1 ; Grebenkov2 ; Korean ; Sandalo ;
RalfEPJB .
This broad use of generalized (not necessarily fractional) Fokker-Planck
equations on the one hand, and of random processes subordinated to Brownian
motion on the other, prompts the question of how these two kinds of
description are related. In other words, the following questions arise: (i)
given a GFPE (with a certain memory kernel), can one find the corresponding
subordinator or show that none exists, and (ii) given a subordination scheme,
can one find the corresponding GFPE (if any), or show that none exists. These
two questions are addressed in our paper.
Our main statements are summarized as follows. Not all valid GFPEs correspond
to subordination schemes, and not all subordination schemes possess a
corresponding GFPE. In our paper we give criteria to check whether a
subordination scheme possesses a GFPE, and whether a particular GFPE
corresponds to a subordination scheme or not. We moreover discuss examples of
particular stochastic processes that are of interest in themselves and admit
one or the other kind of description, or both.
## II Generalized Fokker-Planck equations and subordination schemes
Let us first present the objects of our investigation: the GFPEs of a
specific form, and the two kinds of subordination schemes as discussed in the
literature cited above.
### II.1 Generalized Fokker-Planck equations
In the present paper we discuss equations of the form
$\frac{\partial}{\partial
t}P(\mathbf{x},t)=\hat{\Phi}\mathcal{L}P(\mathbf{x},t),$ (1)
with the linear integrodifferential operator $\hat{\Phi}$ acting on time
variable, and ${\cal L}$ is a time-independent linear operator acting on a
function of spatial variable(s) $\mathbf{x}$. We note that Eq.(1) is the most
popular, but not the most general form of such equations; in Ref. Zwanzig a
more general form was derived. This includes the possible additional
coordinate or time dependence of $\hat{\Phi}$ and ${\cal L}$, respectively. In
the case of time dependence of ${\cal L}$ or of position-dependence of
$\hat{\Phi}$ the operators may not commute. Such situations were discussed
e.g. in Refs. SokKla and Inhomogeneous , and are not a topic of the present
investigation.
In the time domain the corresponding equations are always representable as a
GFPE
$\frac{\partial}{\partial
t}P(\mathbf{x},t)=\int_{0}^{t}\Phi(t-t^{\prime})\mathcal{L}P(\mathbf{x},t^{\prime})dt^{\prime}$
(2)
where the memory kernel $\Phi(t-t^{\prime})$ can be a generalized function
(contain delta-functions or derivatives thereof). Sometimes the corresponding
equations come in a form
$\frac{\partial}{\partial t}P(\mathbf{x},t)=\frac{\partial}{\partial
t}\int_{0}^{t}M_{R}(t-t^{\prime}){\cal L}P(\mathbf{x},t^{\prime})dt^{\prime},$
(3)
or
$\int_{0}^{t}M_{L}(t-t^{\prime})\frac{\partial}{\partial
t^{\prime}}P(\mathbf{x},t^{\prime})dt^{\prime}={\cal L}P(\mathbf{x},t),$ (4)
however, Eqs. (3) and (4) can be reduced to Eq.(2). Indeed, in the Laplace
domain Eq.(2) reads
$u\tilde{P}(\mathbf{x},u)-P(\mathbf{x},0)=\tilde{\Phi}(u)\mathcal{L}\tilde{P}(\mathbf{x},u)$
with
$\tilde{\Phi}(u)=\begin{cases}u\tilde{M}_{R}(u)&\text{for the case of Eq.(3)}\\ 1/\tilde{M}_{L}(u)&\text{for the case of Eq.(4)},\end{cases}$ (5)
where $\tilde{M}_{...}(u)=\int_{0}^{\infty}M_{...}(t)e^{-ut}dt$. We note that
for special cases of integral kernels corresponding to fractional or
distributed-order derivatives Eqs.(3) and (4) were called “modified” and
“normal” forms of generalized Fokker-Planck equation, respectively APPB .
Essentially, in most cases the equation can be expressed in either form, with
the left or the right memory kernel, but sometimes one of the forms is
preferable APPB ; SoKla2005 ; CheSoKla2012 ; Trifce4 . Finally, one can
also consider schemes with integral kernels on both sides, for which case
$\tilde{\Phi}(u)=u\frac{\tilde{M}_{R}(u)}{\tilde{M}_{L}(u)}.$
From now on we will use one-dimensional notation for $x$. Generalization to
higher spatial dimensions is straightforward. In all our examples we will
concentrate mostly on the case of free diffusion
$\mathcal{L}=D\frac{\partial^{2}}{\partial x^{2}}.$ (6)
The coefficient $D$ has a dimension of the normal diffusion coefficient,
$[D]=[\mathrm{L}^{2}/\mathrm{T}]$, the operator $\hat{\Phi}$ is therefore
dimensionless, and its integral kernel has a dimension of the inverse time. We
note however that concentration on free diffusion is not a restriction for the
generality of our approach since it is only about temporal operators and
temporal parts of the subordination procedures.
### II.2 Kernel of GFPE uniquely defines MSD in free diffusion
Here we show that the form of the memory kernel uniquely defines the mean
squared displacement (MSD) in free diffusion, and vice versa (under mild
restrictions). Let us consider the MSD in free diffusion (i.e. without
external force and in absence of external boundaries). We multiply both parts
of Eq.(1) by $x^{2}$ and integrate over $x$ to get
$\frac{d}{dt}\int_{-\infty}^{\infty}x^{2}P(x,t)dx=D\hat{\Phi}\int_{-\infty}^{\infty}x^{2}\frac{\partial^{2}}{\partial
x^{2}}P(x,t)dx.$ (7)
Integrating the right hand side of Eq.(7) by parts twice and assuming the PDF
$P(x,t)$ to vanish at infinity together with its first derivative we get for
the r.h.s.
$\int_{-\infty}^{\infty}x^{2}\frac{\partial^{2}}{\partial x^{2}}P(x,t)dx=2,$
so that the evolution of the MSD is governed by
$\frac{d}{dt}\langle x^{2}(t)\rangle=2D\hat{\Phi}1,$ (8)
with operator $\hat{\Phi}$ acting on a numeric constant. Passing to the
Laplace representation we obtain
$u\langle x^{2}(u)\rangle=2D\tilde{\Phi}(u)\frac{1}{u}$
where we assume the process to start from an initial condition concentrated at
the origin, $\langle x^{2}(t=0)\rangle=0$. This uniquely defines $\tilde{\Phi}(u)$
via the MSD:
$\tilde{\Phi}(u)=\frac{1}{2D}u^{2}\langle x^{2}(u)\rangle.$ (9)
Let us consider our first example. If the MSD in free motion grows linearly in
time, $\langle x^{2}(t)\rangle=2Dt$, we have $\langle
x^{2}(u)\rangle=2D/u^{2}$ and therefore $\tilde{\Phi}(u)\equiv 1$ (a unit
operator, an integral operator with a $\delta$-functional kernel). This means
that the only GFPE leading to the linear growth of the MSD is a usual,
Fickian, diffusion equation, for which the PDF is Gaussian at all times.
Therefore, the GFPE is not a valid instrument to describe the BnG diffusion,
contrary to what is claimed in Korean . On the other hand, the subordination
schemes of Chub ; BNG ; Grebenkov1 do describe the phenomenon.
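A complementary worked example (added here for illustration) shows how Eq.(9) recovers the familiar fractional kernel from a power-law MSD:

```latex
% assumed subdiffusive MSD with generalized diffusion coefficient D_alpha
\langle x^{2}(t)\rangle=\frac{2D_{\alpha}}{\Gamma(1+\alpha)}\,t^{\alpha},
\qquad 0<\alpha<1,
\quad\Rightarrow\quad
\langle x^{2}(u)\rangle=\frac{2D_{\alpha}}{u^{1+\alpha}},
% so that Eq.(9) gives
\tilde{\Phi}(u)=\frac{u^{2}}{2D}\,\frac{2D_{\alpha}}{u^{1+\alpha}}
=\frac{D_{\alpha}}{D}\,u^{1-\alpha}.
```

This is the kernel of the Riemann–Liouville fractional derivative of order $1-\alpha$, so that Eq.(2) reduces to the standard fractional diffusion equation.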
Now we can turn to the main topic of our work: which processes can and which
can not be described by the GFPEs. To this end we first discuss a specific
integral representation of the solution to a GFPE and its relation to
subordination schemes.
### II.3 Subordination schemes
Let $X(\tau)$ be a random process parametrized by a “time-like” variable
$\tau$. Let $\tau$ by itself be a random process, parametrized by the physical
time, or clock time, $t$. The random process $X(\tau)$ is called the parent
process, the random variable $\tau$ is called the operational time, and the
random process $\tau(t)$ the directing process, or subordinator. The process
$t(\tau)$ is called the leading process of the subordination scheme; see
Gorenflo for a consistent explanation of the terminology used. The process
$X(\tau(t))$ is said to be subordinated to $X(\tau)$ under the operational
time $\tau$. The properties of $X(\tau)$ and $\tau(t)$ (or alternatively,
$t(\tau)$) fully define the properties of the composite process $X(\tau(t))$.
The fact that the variable $\tau$ is time-like means that the directing
process $\tau(t)$ preserves the causality: from $t_{2}>t_{1}$ it must follow
that $\tau(t_{2})\geq\tau(t_{1})$: the directing process of a subordination
scheme is increasing at least in a weak sense, that is, it possesses
non-negative increments. It is moreover assumed that $\tau(0)=0$: the count of operational
time starts together with switching the physical clock.
The composite random function $X(\tau(t))$ can be defined in two ways. The
first way is an explicit definition: one specifies the (stochastic)
equations governing $X(\tau)$ and $\tau(t)$. The second way defines the
function parametrically, so that the equations for $X(\tau)$ and $t(\tau)$ are
given.
A process subordinated to a Brownian motion with drift is a process whose
parent process is defined by a stochastic differential equation
$\frac{d}{d\tau}x(\tau)=F(x(\tau))+\sqrt{2D}\xi(\tau)$ (10)
with white Gaussian noise $\xi(\tau)$, with $\langle\xi(\tau)\rangle=0$ and
$\langle\xi(\tau_{1})\xi(\tau_{2})\rangle=\delta(\tau_{1}-\tau_{2})$, whose
strength is given by a diffusion coefficient $D$, and with deterministic drift
$F(x(\tau))$ which will be considered absent in our examples concentrating on
free diffusion.
As an example of the explicit, or direct, subordination scheme we name the
minimal diffusing-diffusivity model,
$\frac{dx(\tau)}{d\tau}=\sqrt{2}\,\xi(\tau),\qquad\frac{d\tau}{dt}=D(t),$
where the random diffusion coefficient $D(t)$ is a squared Ornstein-Uhlenbeck
process BNG .
In the parametric subordination scheme the dependence $t(\tau)$ is given,
again in the form of an SDE, or via an additional transform of its solution. The
classical Fogedby scheme Fogedby corresponds to a stochastic differential
equation
$\frac{dt}{d\tau}=\lambda(\tau)$ (11)
with $\lambda(\tau)$ being a one-sided Lévy noise. The clock time given by the
solution of this equation is
$t(\tau)=\int_{0}^{\tau}\lambda(\tau^{\prime})d\tau^{\prime}$, and the PDF
$q(t,\tau)$ of the process $t(\tau)$ is given by a one-sided $\alpha$-stable
Lévy law, such that its Laplace transform in $t$ variable reads
$\tilde{q}(u,\tau)=\exp(-u^{\alpha}\tau),0<\alpha<1$. This scheme corresponds
to a diffusive limit of CTRW with a power law waiting time distribution.
The correlated CTRW model of Ref. Hofmann is another example of the
parametric scheme where $t(\tau)$ is obtained by an additional integration of
$\lambda(\tau)$ above:
$\frac{dt}{d\tau}=\int_{0}^{\tau}\Psi(\tau-\tau^{\prime})\lambda(\tau^{\prime})d\tau^{\prime}.$
(12)
The previous case is restored if the kernel $\Psi(\tau)$ is a
$\delta$-function.
The attractiveness of subordination schemes lies in the fact that if the
solution for the PDFs of the parent process at a given operational time, and
of the directing process at a given physical time are known, the PDF $P(x,t)$
of the subordinated process can be obtained simply by applying the Bayes
formula. Let $f(x,\tau)$ be the PDF of $x=X(\tau)$ for a given value of the
operational time $\tau$, and $p(\tau,t)$ the PDF of the operational time for
the given physical time $t$. Then the PDF of $x=X(t)=X(\tau(t))$ is given by
$P(x,t)=\int_{0}^{\infty}f(x,\tau)p(\tau,t)d\tau,$ (13)
which in this context is called the integral formula of subordination Feller .
The PDF $p(\tau,t)$ of the operational time at a given clock time is delivered
immediately by explicit schemes, and can be obtained for parametric
subordination schemes by using an additional transformation Baule1 ; Gorenflo
, see below. Note that the PDF $f(x,\tau)$ for a process subordinated to a
Brownian motion always satisfies a usual, Markovian Fokker-Planck equation
$\frac{\partial}{\partial\tau}f(x,\tau)={\cal L}f(x,\tau),$ (14)
with $\mathcal{L}f(x,\tau)=-\frac{\partial}{\partial
x}F(x)f(x,\tau)+D\frac{\partial^{2}}{\partial x^{2}}f(x,\tau)$ by virtue of
Eq.(10) of the subordination scheme.
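The content of the subordination formula, Eq.(13), can be checked numerically. The sketch below is an illustration added here (not part of the original argument): it takes the parent process to be free Brownian motion and, as a purely illustrative assumption, an exponentially distributed operational time with mean $t$; the resulting mixture $P(x,t)$ is a Laplace (two-sided exponential) distribution, whose second and fourth moments are easily verified by sampling.

```python
import random
import math

# Toy check of the subordination integral formula, Eq.(13): the parent
# process is free Brownian motion, so f(x,tau) is Gaussian with variance
# 2*D*tau; as a purely illustrative assumption the operational time tau
# is taken exponentially distributed with mean t.  The mixture P(x,t) is
# then a Laplace distribution with E[x^2] = 2*D*t, E[x^4] = 24*D^2*t^2.
random.seed(42)
D, t, n = 0.5, 1.0, 200_000

samples = []
for _ in range(n):
    tau = random.expovariate(1.0 / t)               # operational time, mean t
    samples.append(random.gauss(0.0, math.sqrt(2.0 * D * tau)))

m2 = sum(x * x for x in samples) / n
m4 = sum(x ** 4 for x in samples) / n
print(m2, m4)   # close to 2*D*t = 1.0 and 24*D^2*t^2 = 6.0
```

The exact moments of the mixture follow from $\langle x^{2}\rangle=2D\langle\tau\rangle$ and $\langle x^{4}\rangle=3\langle(2D\tau)^{2}\rangle$.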
## III The sufficient condition for GFPE to have a subordination scheme
In Ref. SokSub it was shown that the formal solution of the GFPE (2) can be
obtained in a form of an integral decomposition
$P(x,t)=\int_{0}^{\infty}f(x,\tau)T(\tau,t)d\tau,$ (15)
where $f(x,\tau)$ is a solution of a Markovian FPE with the same Fokker-Planck
operator ${\cal L}$, Eq.(14), and for the same initial and boundary
conditions. Here the function $T(\tau,t)$ is normalized in its first variable
and connected with the memory kernel of the GFPE, as it is discussed in this
Section below. The corresponding form of solution was obtained in SaiZasl ;
Eli for the fractional diffusion and Fokker-Planck equations, and was applied
to the fractional Kramers equation in BarSil ; its more general discussion
followed in Sok1 . Equation (15) is akin to the integral formula of
subordination, Eq.(13). However, the PDF $P(x,t)$ in Eq. (15) may or may not
correspond to a PDF of a random process subordinated to the Brownian motion
with drift (as described by the ordinary FPE), since $T(\tau,t)$ may or may
not be a conditional probability density of $\tau$ at time $t$, e.g.
$T(\tau,t)$ may get negative. In Ref. SokSub the kernels corresponding to
subordination schemes with non-negative $T(\tau,t)$ were called “safe”, while
the kernels not corresponding to any subordination scheme, for which
$T(\tau,t)$ oscillates, were called “dangerous”. For safe kernels the non-
negativity of solutions of GFPE corresponding to non-negative initial
conditions is guaranteed by virtue of Eq.(13). The sufficient condition for
Eq. (15) to correspond to a subordination scheme will be considered later.
If one assumes that the solution of GFPE (2) can be obtained in the form of the
integral decomposition (15) and then inserts this form into Eq. (2), one gets the
Laplace transform of the function $\tilde{T}(\tau,t)$ in its second variable,
$\tilde{T}(\tau,u)=\int_{0}^{\infty}T(\tau,t)e^{-ut}dt$, as SokSub
$\tilde{T}(\tau,u)=\frac{1}{\tilde{\Phi}(u)}\exp\left[-\tau\frac{u}{\tilde{\Phi}(u)}\right].$
(16)
This, however, does not answer the questions under which conditions such a
solution holds and whether it is unique. Below we discuss these
issues in some detail by presenting an alternative derivation, i.e. explicitly
constructing the solution.
Let us start from our Eq.(2) and integrate both its parts over time, getting
$\displaystyle\int_{0}^{t}\frac{\partial}{\partial
t^{\prime\prime}}P(x,t^{\prime\prime})dt^{\prime\prime}=P(x,t)-P(x,0)$
$\displaystyle\qquad=\int_{0}^{t}dt^{\prime\prime}\int_{0}^{t^{\prime\prime}}\Phi(t^{\prime\prime}-t^{\prime}){\cal
L}P(x,t^{\prime})dt^{\prime}.$
Now we exchange the sequence of integrations in $t^{\prime}$ and
$t^{\prime\prime}$ on the r.h.s.,
$\displaystyle\int_{0}^{t}dt^{\prime\prime}\int_{0}^{t^{\prime\prime}}\Phi(t^{\prime\prime}-t^{\prime}){\cal
L}P(x,t^{\prime})dt^{\prime}$
$\displaystyle\qquad=\int_{0}^{t}dt^{\prime}{\cal
L}P(x,t^{\prime})\int_{t^{\prime}}^{t}\Phi(t^{\prime\prime}-t^{\prime})dt^{\prime\prime},$
getting the integral form
$P(x,t)-P(x,0)=\int_{0}^{t}K(t-t^{\prime}){\cal L}P(x,t^{\prime})dt^{\prime},$
(17)
with the integral kernel
$K(t)=\int_{0}^{t}\Phi(t^{\prime\prime})dt^{\prime\prime}$ whose Laplace
transform is equal to $\tilde{\Phi}(u)/u$. Using the condition $\tau(t=0)=0$
we substitute the assumed solution form, Eq.(15), into Eq. (17):
$\displaystyle\int_{0}^{\infty}f(x,\tau)T(\tau,t)d\tau-P(x,0)=$
$\displaystyle\qquad\int_{0}^{t}dt^{\prime}K(t-t^{\prime})\int_{0}^{\infty}d\tau\mathcal{L}f(x,\tau)T(\tau,t^{\prime}).$
Now we use the assumption that $f(x,\tau)$ is the solution of a Markovian
Fokker-Planck equation, and make the substitution
$\mathcal{L}f(x,\tau)=\frac{\partial}{\partial\tau}f(x,\tau)$. We get:
$\displaystyle\int_{0}^{\infty}f(x,\tau)T(\tau,t)d\tau-P(x,0)=$
$\displaystyle\qquad\int_{0}^{t}dt^{\prime}K(t-t^{\prime})\int_{0}^{\infty}d\tau\left(\frac{\partial}{\partial\tau}f(x,\tau)\right)T(\tau,t^{\prime}).$
Performing partial integration in the inner integral on the r.h.s., and
interchanging the sequence of integrations in $t^{\prime}$ and in $\tau$ in
the integral which appears in the r.h.s. we arrive at the final expression
$\displaystyle\int_{0}^{\infty}f(x,\tau)T(\tau,t)d\tau-P(x,0)=$
$\displaystyle\qquad\int_{0}^{t}K(t-t^{\prime})\left[f(x,\infty)T(\infty,t^{\prime})-f(x,0)T(0,t^{\prime})\right]dt^{\prime}$
$\displaystyle\qquad-\int_{0}^{\infty}d\tau
f(x,\tau)\frac{\partial}{\partial\tau}\int_{0}^{t}K(t-t^{\prime})T(\tau,t^{\prime})dt^{\prime}.$
Now we request that the l.h.s. and the r.h.s. are equal at any time $t$ for
all admissible functions $f(x,\tau)$ satisfying the Fokker-Planck equation
$\frac{\partial}{\partial\tau}f(x,\tau)=\mathcal{L}f(x,\tau)$ irrespective of
the particular form of the linear operator $\mathcal{L}$ and of the boundary
and initial conditions. This gives us three conditions:
$\displaystyle\int_{0}^{\infty}f(x,\tau)T(\tau,t)d\tau=$
$\displaystyle\qquad-\int_{0}^{\infty}d\tau
f(x,\tau)\frac{\partial}{\partial\tau}\int_{0}^{t}K(t-t^{\prime})T(\tau,t^{\prime})dt^{\prime},$
$\displaystyle-P(x,0)=-f(x,0)\int_{0}^{t}K(t-t^{\prime})T(0,t^{\prime})dt^{\prime},$
$\displaystyle 0=\int_{0}^{t}K(t-t^{\prime})f(x,\infty)T(\infty,t^{\prime})dt^{\prime},$
which can be rewritten as conditions on $T(\tau,t)$ only:
$\displaystyle
T(\tau,t)=-\frac{\partial}{\partial\tau}\int_{0}^{t}K(t-t^{\prime})T(\tau,t^{\prime})dt^{\prime},$
(18) $\displaystyle\int_{0}^{t}K(t-t^{\prime})T(0,t^{\prime})dt^{\prime}=1,$
(19)
$\displaystyle\int_{0}^{t}K(t-t^{\prime})T(\infty,t^{\prime})dt^{\prime}=0.$
(20)
In the Laplace domain Eq.(18) turns to a simple linear ODE
$-\frac{\tilde{\Phi}(u)}{u}\frac{\partial}{\partial\tau}\tilde{T}(\tau,u)=\tilde{T}(\tau,u)$
whose general solution is
$\tilde{T}(\tau,u)=C\cdot\exp\left(-\tau\frac{u}{\tilde{\Phi}(u)}\right)$
with the integration constant $C$. This integration constant is set by the
second equation, Eq.(19), which in the Laplace domain reads
$\frac{\tilde{\Phi}(u)}{u}\tilde{T}(0,u)=\frac{1}{u},$
so that $C=1/\tilde{\Phi}(u)$, and therefore
$\tilde{T}(\tau,u)=\frac{1}{\tilde{\Phi}(u)}\exp\left(-\tau\frac{u}{\tilde{\Phi}(u)}\right),$
which is our Eq.(16). The function $T(\tau,t)$ is normalized in its first
variable, which follows by the direct integration of Eq.(16):
$\int_{0}^{\infty}\tilde{T}(\tau,u)d\tau=\frac{1}{u},$ (21)
so that its inverse Laplace transform to the time domain is unity:
$\int_{0}^{\infty}T(\tau,t)d\tau=1$ (22)
for any $t>0$.
The third condition, Eq.(20), is fulfilled automatically provided
$T(\infty,t^{\prime})=0$ which implies $\tilde{T}(\infty,u)=0$. This is e.g.
always the case for non-negative kernels $\Phi(t)$ (as encountered in all our
examples) whose Laplace transform $\tilde{\Phi}(u)$ is positive for all $u$.
For non-positive kernels the property has to be checked explicitly.
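As a numerical illustration (added here, with the fractional kernel as an assumed example): for $\tilde{\Phi}(u)=u^{1-\alpha}$ with $\alpha=1/2$, Eq.(16) gives $\tilde{T}(\tau,u)=u^{-1/2}e^{-\tau\sqrt{u}}$, whose inverse is the known density $T(\tau,t)=(\pi t)^{-1/2}\exp(-\tau^{2}/4t)$ of the inverse 1/2-stable subordinator. The sketch below checks this Laplace pair by direct quadrature.

```python
import math

# Illustrative check of Eq.(16) for the fractional kernel
# Phi~(u) = u**(1 - alpha) with alpha = 1/2, for which
# T~(tau,u) = u**(-1/2)*exp(-tau*sqrt(u)) and the known inverse transform
# is T(tau,t) = exp(-tau**2/(4*t))/sqrt(pi*t).  We verify the pair by
# computing the Laplace transform in t with a trapezoidal rule.
tau, u = 1.0, 2.0

def T(t):
    return math.exp(-tau * tau / (4.0 * t)) / math.sqrt(math.pi * t)

eps, tmax, steps = 1e-9, 40.0, 200_000
h = (tmax - eps) / steps
total = 0.5 * (T(eps) * math.exp(-u * eps) + T(tmax) * math.exp(-u * tmax))
for k in range(1, steps):
    t = eps + k * h
    total += T(t) * math.exp(-u * t)
numeric = h * total

exact = math.exp(-tau * math.sqrt(u)) / math.sqrt(u)
print(numeric, exact)   # the two values agree to high accuracy
```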
Let us stress again that the solution in the form of Eq.(16), which, as we have
seen, is applicable for a wide range of memory kernels $\Phi$, may or may not
correspond to some subordination scheme. We note, however, that the fact that
the kernel does not correspond to a subordination scheme does not devaluate the
corresponding GFPE by itself, and does not mean that it leads to negative
probability densities. As an example of a “dangerous” kernel let us consider a
simple exponential kernel, $\Phi(t)=re^{-rt}$ (where the prefactor $r$ of the
exponential is added to keep the correct dimension of $\Phi$). For example, a
generalized diffusion equation with an exponential kernel,
$\frac{\partial}{\partial
t}p(x,t)=\int_{0}^{t}re^{-r(t-t^{\prime})}D\frac{\partial^{2}}{\partial
x^{2}}p(x,t^{\prime})dt^{\prime},$ (23)
is essentially the Cattaneo equation for the diffusion with finite propagation
speed, i.e. a kind of a telegrapher’s equation, as can be seen by taking a
derivative of its both sides w.r.t. $t$:
$\frac{\partial^{2}}{\partial t^{2}}p(x,t)=r\frac{\partial}{\partial
t}p(x,t)+rD\frac{\partial^{2}}{\partial x^{2}}p(x,t).$ (24)
The solutions to Eq.(24) for non-negative initial conditions are known to be
non-negative on the whole real line, but changing the operator ${\cal L}$ from
a diffusion to a more general one (e.g. to diffusion in presence of the
constant force) may lead to oscillating solutions SokSub .
The reason for this, within our line of argumentation, is that the function
$\tilde{T}(\tau,u)$ for equation (23),
$\tilde{T}(\tau,u)=(1+ur^{-1})\exp[-\tau u(1+r^{-1}u)]$ (25)
is not a Laplace transform of a non-negative function. We remind that the
function $\tilde{\phi}(u),0\leq u\leq\infty$, is a Laplace transform of a non-
negative function $\phi(t)$ defined on the non-negative half-axis, if and only
if $\tilde{\phi}(u)$ is completely monotone, i.e. its derivatives satisfy
$(-1)^{n}\tilde{\phi}^{(n)}(u)\geq 0$
for $n=0,1,2,...$ Feller . On the other hand, it is easy to see that the
second derivative of $\tilde{T}(\tau,u)$ changes its sign. Moreover, using the
mean value theorem, it is not hard to show that the Laplace transform of any
non-negative function integrable to unity cannot decay for $u\to\infty$ faster
than exponentially (see Appendix A), which is not true for Eq.(25).
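This failure of complete monotonicity is easy to verify numerically; the following sketch (illustrative, with $r=\tau=1$) evaluates a finite-difference second derivative of $\tilde{T}(\tau,u)$ from Eq.(25) and exhibits the sign change.

```python
import math

# Finite-difference check that the second u-derivative of T~(tau,u) from
# Eq.(25) changes sign (illustrative parameters r = tau = 1), so that
# T~ is not completely monotone and hence not the Laplace transform of a
# non-negative function.
r, tau = 1.0, 1.0

def T_tilde(u):
    return (1.0 + u / r) * math.exp(-tau * u * (1.0 + u / r))

def d2(u, h=1e-4):
    return (T_tilde(u + h) - 2.0 * T_tilde(u) + T_tilde(u - h)) / (h * h)

print(d2(0.1), d2(1.0))   # negative near u = 0.1, positive near u = 1
```

Complete monotonicity would require $(-1)^{2}\tilde{T}^{(2)}(\tau,u)\geq 0$ everywhere, which the sign change rules out.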
Summarizing the result of this Section, we see that not all GFPEs can
correspond to subordination schemes. The sufficient condition to have such a
scheme is the following: the kernel $\Phi(t)$ in the GFPE (2) is such that the
function $\tilde{T}(\tau,u)$ given by Eq.(16) is completely monotone as a
function of $u$. In this case the GFPE always corresponds to a subordination
scheme, for which
$T(\tau,t)$ can be interpreted as a probability density function of the
operational time $\tau$ for given physical time $t$, $T(\tau,t)\equiv
p(\tau,t)$.
## IV Which subordination schemes have a GFPE?
Let us first perform some simple manipulations while assuming that the
function $T(\tau,t)$ in the integral decomposition formula (15) does
correspond to a subordination scheme, and thus has a meaning of the PDF of
operational time $\tau$ for a given physical time $t$, $T(\tau,t)\equiv
p(\tau,t)$. Then, in the Laplace domain the function $\tilde{p}(\tau,u)$,
Eq.(16), can be represented as
$\tilde{p}(\tau,u)=-\frac{d}{d\tau}u^{-1}\exp\left[-\tau\frac{u}{\tilde{\Phi}(u)}\right],$
(26)
so that in the time domain we have
$p(\tau,t)=-\frac{d}{d\tau}\int_{0}^{t}q(t^{\prime},\tau)dt^{\prime},$
where the function $q(t,\tau)$ is given by the inverse Laplace transform of
$\tilde{q}(u,\tau)=\exp\left[-\tau\frac{u}{\tilde{\Phi}(u)}\right].$ (27)
Thus
$\int_{\tau_{0}}^{\infty}p(\tau,t_{0})d\tau=\int_{0}^{t_{0}}q(t,\tau_{0})dt.$
(28)
Now we proceed to show that, as discussed already in Baule1 ; Gorenflo , the
function $q(t,\tau)$ has a clear physical meaning: this is namely the PDF of
clock times corresponding to the given operational time $\tau$. We note here
that in spite of the fact that Eq.(28) was obtained for a specific form of the
PDF $p(\tau,t)$ given by Eq.(26), it is more general and gives the relation
between the PDFs of a (weakly) increasing process and its inverse.
Indeed, let us consider a set of monotonically non-decaying functions, either
continuous (diffusive limit) or of càdlàg type (genuine continuous time random
walks) on a $(t,\tau)$ plane, see Fig. 1. The integral on the l.h.s. of
Eq.(28) counts all functions (with their probability weights) which cross the
horizontal segment $t\in[0,t_{0}),\tau=\tau_{0}$, the integral on the r.h.s.
counts all functions crossing the semi-infinite vertical segment
$t=t_{0},\tau\in[\tau_{0},\infty)$. The set of these functions is the same:
any monotonically non-decaying function crossing the horizontal segment, or
passing from one of its sides to the other in a jump, has to cross the vertical
one. No non-decaying function which never crossed the horizontal segment can
cross the vertical one. Therefore such a monotonicity implies that
$\textrm{Prob}(\tau>\tau_{0}|t_{0})=\textrm{Prob}(t<t_{0}|\tau_{0}),$
where the probabilities are defined on the set of the corresponding
trajectories. The physical meaning of the functions
$\textrm{Prob}(\tau>\tau(t))$ and $\textrm{Prob}(t<t(\tau))$ is that they
represent the survival probability and the cumulative distribution function
for the operational time and for the clock time, respectively. In the
continuous case the PDF of a clock time given operational time is then given
by
$\displaystyle q(t,\tau)$ $\displaystyle=$
$\displaystyle\frac{d}{dt}\textrm{Prob}(t<t(\tau))=\frac{d}{dt}\textrm{Prob}(\tau>\tau(t))$
(29) $\displaystyle=$
$\displaystyle\frac{d}{dt}\int_{\tau}^{\infty}p(\tau^{\prime},t)d\tau^{\prime}.$
This statement allows for immediate transition between direct (random variable
change $\tau(t)$) and parametric (inverse) subordination schemes. This also
gives a necessary condition for a process obeying GFPE to be a subordinated
one.
Figure 1: A schematic picture explaining the nature of Eq.(28). Black lines:
the case of continuous time traces. Here all monotonically non-decaying traces
crossing the horizontal segment $t\in[0,t_{0})$ at $\tau=\tau_{0}$ (shown in
blue online) also cross the vertical one $\tau\in[\tau_{0},\infty)$ at
$t=t_{0}$ (shown in red online), and
therefore contribute equally to the r.h.s. and to the l.h.s. of Eq.(28). A
gray dashed line shows exemplary a càdlàg piecewise constant function with
jumps, the genuine operational time of a CTRW. Any càdlàg trace passing at a
jump from below to above the horizontal segment has to cross the vertical
segment during the waiting time.
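The counting argument behind Eq.(28) can be illustrated by a toy leading process (an assumption made only for this sketch): a renewal process $t(\tau)$ with unit-rate exponential waiting times. The event that at least $n$ renewals occurred by clock time $t_{0}$ coincides with the event that the $n$-th renewal time $t(n)$ does not exceed $t_{0}$, so two independent Monte Carlo estimates of these probabilities must agree.

```python
import random

# Monte Carlo illustration of the inversion identity behind Eq.(28) for a
# toy leading process t(tau) (an assumption made for this sketch only):
# a renewal process with unit-rate exponential waiting times.  The event
# {tau(t0) >= n} ("at least n renewals by clock time t0") coincides with
# {t(n) <= t0} ("the n-th renewal happens by t0").
random.seed(7)
n, t0, trials = 5, 4.0, 100_000

left = 0    # estimates Prob(t(n) <= t0)
for _ in range(trials):
    if sum(random.expovariate(1.0) for _ in range(n)) <= t0:
        left += 1

right = 0   # estimates Prob(tau(t0) >= n), from independent trajectories
for _ in range(trials):
    t, count = 0.0, 0
    while t <= t0:
        t += random.expovariate(1.0)
        count += 1
    if count - 1 >= n:   # completed renewals up to clock time t0
        right += 1

print(left / trials, right / trials)   # the two estimates agree
```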
To obtain such a necessary condition, let us fix two subsequent non-
intersecting operational time intervals $\tau_{1}$ and $\tau_{2}$
corresponding to the physical time intervals $t_{1}$ and $t_{2}$. Then
$t(\tau_{1}+\tau_{2})=t(\tau_{1})+t(\tau_{2})=t_{1}+t_{2}$. Then it follows
from Eq.(27) that
$\tilde{q}(u,\tau_{1}+\tau_{2})=\exp\left[-\tau_{1}\frac{u}{\tilde{\Phi}(u)}\right]\cdot\exp\left[-\tau_{2}\frac{u}{\tilde{\Phi}(u)}\right],$
or, denoting the Laplace characteristic functions of $t_{1}$ and $t_{2}$ by
$\tilde{\theta}_{t_{1}}(u)$ and $\tilde{\theta}_{t_{2}}(u)$, and the
characteristic function of their sum by $\tilde{\theta}_{t_{1}+t_{2}}(u)$ we
get
$\tilde{\theta}_{t_{1}+t_{2}}(u)=\tilde{\theta}_{t_{1}}(u)\cdot\tilde{\theta}_{t_{2}}(u),$
(30)
that is the random variables $t_{1}$ and $t_{2}$ are sub-independent Hamedani
. This property imposes a necessary condition on the possibility to describe the
subordination scheme by a generalized FPE. The condition is always fulfilled
when $t_{1}$ and $t_{2}$ are independent (e.g. for a parametric Fogedby scheme
Fogedby ).
Therefore, the following statement can be made: Only subordination schemes in
which the increments of physical time $t(\tau)$ are sub-independent can be
described by GFPEs.
Combining Eqs.(27) and (30) we find the final criteria for the existence of
GFPE for a given subordination scheme.
* •
A direct subordination scheme possesses a GFPE only if the Laplace transform
of the PDF $p(\tau,t)$ in its $t$ variable has the form
$\tilde{p}(\tau,u)=\frac{f(u)}{u}\exp(-\tau f(u)).$ (31)
The kernel of the ensuing GFPE is then $\tilde{\Phi}(u)=u/f(u)$. Here it might
be easier to check that the double Laplace transform has a form
$\tilde{\tilde{p}}(s,u)=\int_{0}^{\infty}d\tau\int_{0}^{\infty}dte^{-s\tau}e^{-ut}p(\tau,t)=\frac{f(u)}{u[s+f(u)]},$
(32)
i.e. the function $F(s,u)=1/\tilde{\tilde{p}}(s,u)$ is a linear function in
$s$: $F(s,u)=a(u)s+1$. Using this criterion one can show that the BnG model of
Ref. BNG does not possess a corresponding GFPE, see Appendix B.
* •
A parametric subordination scheme possesses a GFPE only if the
characteristic function (Laplace transform) of the PDF $q(t,\tau)$ in its $t$
variable has a form
$\tilde{q}(u,\tau)=\exp(-\tau f(u)),$ (33)
i.e. the $\tau$-dependence of this function must be simple exponential. The
$t$-variables corresponding to non-intersecting $\tau$ intervals are thus sub-
independent. The kernel of the ensuing GFPE is then $\tilde{\Phi}(u)=u/f(u)$.
Using the last criterion one can easily show that there is no GFPE
corresponding to correlated CTRW of Ref. Hofmann . The distribution of $t$ as
a function of $\tau$ in this model has a Laplace characteristic function
$\tilde{q}(u,\tau)=\exp\left[-u^{\alpha}\phi(\tau)\right]$ (34)
with
$\phi(\tau)=\int_{0}^{\tau}d\tau^{\prime}\left[\int_{\tau^{\prime}}^{\tau}d\tau^{\prime\prime}\Psi(\tau^{\prime\prime}-\tau^{\prime})\right]^{\alpha}$
where $\Psi(\tau)$ denotes the memory function for waiting times along the
trajectory expressed as a function of the number of steps, cf. Eq.(26) of Ref.
Hofmann . To correspond to any GFPE the argument of the exponential in Eq.(34)
has to be linear in $\tau$, i.e. $\phi(\tau)=a\tau$, where $a=\mathrm{const}$. This
means that the square bracket in the expression for $\phi$ must be a constant
(equal to $a$) and therefore $\Psi(\tau)=a^{1/\alpha}\delta(\tau)$, which
corresponds to a standard, non-correlated CTRW.
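As a consistency check (added here as an illustration), applying criterion (33) to the parametric Fogedby scheme recovers the expected fractional kernel:

```latex
% Fogedby: q~(u,tau) is exponential in tau, so criterion (33) holds with
\tilde{q}(u,\tau)=e^{-\tau u^{\alpha}}
\quad\Rightarrow\quad
f(u)=u^{\alpha},
\qquad
\tilde{\Phi}(u)=\frac{u}{f(u)}=u^{1-\alpha},
```

i.e. the Riemann–Liouville kernel of the fractional Fokker–Planck equation, in line with the scheme being the diffusive limit of a decoupled CTRW with a power-law waiting time distribution.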
## V Conclusions
Growing awareness of the complexity of non-Markovian diffusion processes in
physics, earth sciences and biology has sparked interest in mathematical tools
capable of describing such non-standard diffusion processes beyond Fick’s law.
Generalized diffusion equations on one hand, and
subordination schemes, on the other hand, are the two classes of such
instruments, which were successfully used for investigation of a broad variety
of anomalous diffusion processes. For several situations, notably, for
decoupled continuous time random walks, both are applicable, and stand in the
same relation to each other as the Fokker-Planck equation and the Langevin
equation do for the case of normal, Fickian diffusion. In the present work we
address the question of whether this is always the case. The answer to this
question is negative: some processes described by the generalized diffusion
equations do not possess an underlying subordination scheme, i.e. cannot be
described by a random time change in the normal diffusion process. On the
other hand, many subordination schemes do not possess the corresponding
generalized diffusion equation. The example of the first situation is the
Cattaneo equation, which can be represented as a generalized diffusion equation
with an exponential memory kernel, for which no subordination scheme exists.
Examples of the second situation are the minimal model of Brownian yet
non-Gaussian diffusion and the correlated CTRW, subordination schemes for which
we show that no corresponding generalized diffusion equation can be written
down. We
discuss the conditions under which one or the other description is applicable,
i.e. what are the properties of the memory kernel of the diffusion equation
sufficient for its relation with subordination, and what are the properties of
the random time change in the subordination scheme necessary for existence of
the corresponding generalized diffusion equation.
## VI Acknowledgements
AC acknowledges support from the NAWA Project PPN/ULM/2020/1/00236.
## Appendix A Laplace transform of a non-negative function
Let us show that the Laplace transform $\widetilde{\phi}(u)$ of any non-
negative function $\phi(t)$ integrable to a constant, i.e. with $\phi(t)\geq
0$ and with $0<\int_{0}^{\infty}\phi(t)dt<\infty$, cannot decay faster than
exponentially with the Laplace variable $u$:
$\widetilde{\phi}(u)=\int_{0}^{\infty}\phi(t)e^{-ut}dt\geq Ae^{-Bu},$
with a positive constant $A$ and a non-negative constant $B$.
To see this we use the following chain of relations:
$\displaystyle\int_{0}^{\infty}\phi(t)e^{-ut}dt$ $\displaystyle\geq$
$\displaystyle\int_{0}^{C}\phi(t)e^{-ut}dt$ $\displaystyle=$ $\displaystyle
e^{-Bu}\int_{0}^{C}\phi(t)dt=Ae^{-Bu}.$
Here $C$ is some cut-off value which is chosen such that
$\int_{0}^{C}\phi(t)dt>0$, which is always possible due to our assumptions
about the integrability of $\phi(t)$ to a positive constant. The inequality
follows from the mean value theorem for integrals: by assumption $\phi(t)$ is
non-negative and integrable, and $e^{-ut}$ is, evidently, continuous. The value
of $B$ then satisfies the inequality $0\leq B\leq C$, i.e. it is non-negative.
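For a concrete (illustrative) choice $\phi(t)=e^{-t}$, so that $\tilde{\phi}(u)=1/(1+u)$, the cut-off $C=1$ gives $A=1-e^{-1}$, and the elementary estimate $e^{-ut}\geq e^{-uC}$ on $[0,C]$ yields $B=C=1$; the resulting bound $1/(1+u)\geq Ae^{-u}$ can be checked on a grid.

```python
import math

# Concrete check of the bound for the illustrative choice phi(t) = exp(-t),
# whose Laplace transform is 1/(1+u).  With the cut-off C = 1 one gets
# A = 1 - exp(-1), and the estimate exp(-u*t) >= exp(-u*C) on [0, C]
# gives B = C = 1, so 1/(1+u) >= A*exp(-u) for all u >= 0.
A = 1.0 - math.exp(-1.0)

ok = all(1.0 / (1.0 + u) >= A * math.exp(-u)
         for u in (k * 0.01 for k in range(5001)))   # grid over [0, 50]
print(ok)   # True
```

Indeed, $e^{u}/(1+u)$ is increasing with value $1>A$ at $u=0$, so the bound holds for all $u\geq 0$.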
For our function $T(\tau,t)$, Eq.(25), the Laplace transform
$\tilde{T}(\tau,u)$ is defined at $u=0$, so that
$\int_{0}^{\infty}T(\tau,t)dt=1>0$ for any $\tau$.
Eq.(25) essentially corresponds to a Laplace transform of a function strongly
oscillating at small $t$.
## Appendix B The minimal model of BNG diffusion
In the BnG model of Ref. BNG the PDF $p(\tau,t)$ (denoted there as
$T(\tau,t)$) is defined via its Laplace transform in the $\tau$ variable,
$\displaystyle\tilde{p}(s,t)=$
$\displaystyle\frac{e^{t/2}}{\sqrt{\frac{1}{2}\left(\sqrt{1+2s}+\frac{1}{\sqrt{1+2s}}\right)\mathrm{sinh}(t\sqrt{1+2s})+\mathrm{cosh}(t\sqrt{1+2s})}}.$
Let us take the double Laplace transform
$\tilde{\tilde{p}}(s,u)=\int_{0}^{\infty}\tilde{p}(s,t)e^{-ut}dt$ (35)
and check whether it has the form of Eq.(32).
We first denote $\alpha=\sqrt{1+2s}>1$ and rewrite the function
$\tilde{p}(s,t)$ as
$\displaystyle\tilde{p}(s,t)$ $\displaystyle=$
$\displaystyle\frac{e^{t/2}}{\sqrt{\frac{1}{2}\left(\alpha+\frac{1}{\alpha}\right)\mathrm{sinh}(\alpha
t)+\mathrm{cosh}(\alpha t)}}$ $\displaystyle=$
$\displaystyle\frac{2\sqrt{\alpha}e^{-\frac{\alpha-1}{2}t}}{(\alpha+1)^{2}}\left[1+\left(\frac{\alpha-1}{\alpha+1}\right)^{2}e^{-2\alpha
t}\right]^{-\frac{1}{2}}.$
Denoting $A=2\sqrt{\alpha}/(\alpha+1)^{2}$ and
$\zeta=\left[(\alpha-1)/(\alpha+1)\right]^{2}$ we get
$\tilde{p}(s,t)=Ae^{-\frac{\alpha-1}{2}t}\left(1+\zeta e^{-2\alpha
t}\right)^{-\frac{1}{2}}.$
Substituting this expression into Eq. (35) and denoting
$\beta=(\alpha-1+2u)/2$, we obtain
$\tilde{\tilde{p}}(s,u)=A\int_{0}^{\infty}\left(1+\zeta e^{-2\alpha
t}\right)^{-\frac{1}{2}}e^{-\beta t}dt.$
Now we change the variable of integration to $x=e^{-2\alpha t}$ to arrive at
the expression
$\tilde{\tilde{p}}(s,u)=\frac{A}{2\alpha}\int_{0}^{1}\left(1+\zeta
x\right)^{-\frac{1}{2}}x^{\frac{\beta}{2\alpha}-1}dx.$
This can be compared with the integral representation of the hypergeometric
function:
$\;{}_{2}F_{1}(a,b;c;z)=$
$\displaystyle\qquad\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_{0}^{1}x^{b-1}(1-x)^{c-b-1}(1-xz)^{-a}dx,$
Eq.(15.3.1) of Ref. AbraSteg , from which we get $a=\frac{1}{2}$,
$b=\frac{\beta}{2\alpha}$, $c=b+1$, and $z=-\zeta$, so that
$\tilde{\tilde{p}}(s,u)=\frac{A}{\beta}\;{}_{2}F_{1}\left(\frac{1}{2},\frac{\beta}{2\alpha};1+\frac{\beta}{2\alpha};-\zeta\right).$
Substituting the values of parameters we obtain:
$\displaystyle\tilde{\tilde{p}}(s,u)=\frac{4(1+2s)^{\frac{1}{4}}}{(1+\sqrt{1+2s})^{2}(\sqrt{1+2s}+2u-1)}\times$
$\qquad{}_{2}F_{1}\left[\frac{1}{2},\frac{\sqrt{1+2s}-1+2u}{4\sqrt{1+2s}},\frac{5\sqrt{1+2s}-1+2u}{4\sqrt{1+2s}},\right.$
$\displaystyle\qquad\qquad\left.-\left(\frac{\sqrt{1+2s}-1}{\sqrt{1+2s}+1}\right)^{2}\right].$
The function $F(s,u)=1/\tilde{\tilde{p}}(s,u)$ is not a linear function of $s$
for fixed $u$. This can be clearly seen by plotting the $s$-derivative of
this function for fixed $u$ with the help of Mathematica; see Fig. 2.
Figure 2: The $s$-derivative of the function $F(s,u)$ for $u=0.5,1$ and $2$,
shown by solid, dashed and dotted lines, respectively.
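The nonlinearity claim can also be cross-checked numerically. The sketch below is not part of the paper: it assumes only NumPy, evaluates the double Laplace transform of Eq. (35) by direct trapezoidal quadrature of $\tilde{p}(s,t)e^{-ut}$ as defined above, and compares finite-difference slopes of $F(s,u)=1/\tilde{\tilde{p}}(s,u)$ at two values of $s$.

```python
# Numerical sanity check (a sketch, not from the paper): the s-derivative of
# F(s, u) = 1 / p~~(s, u) is not constant, so F is not linear in s.
import numpy as np

def p_tilde(s, t):
    a = np.sqrt(1.0 + 2.0 * s)  # alpha = sqrt(1 + 2s) > 1
    return np.exp(t / 2.0) / np.sqrt(
        0.5 * (a + 1.0 / a) * np.sinh(a * t) + np.cosh(a * t))

def F(s, u, t_max=80.0, n=200_001):
    # p~(s,t) e^{-ut} decays like exp(-((alpha-1)/2 + u) t), so a finite
    # upper limit suffices for the moderate s, u used here.
    t = np.linspace(0.0, t_max, n)
    f = p_tilde(s, t) * np.exp(-u * t)
    integral = np.sum((f[1:] + f[:-1]) * 0.5 * (t[1] - t[0]))
    return 1.0 / integral

u = 1.0
slope_lo = (F(1.0, u) - F(0.5, u)) / 0.5  # finite-difference dF/ds near s = 0.75
slope_hi = (F(2.0, u) - F(1.5, u)) / 0.5  # finite-difference dF/ds near s = 1.75
print(slope_lo, slope_hi)  # the two slopes differ, so F(s, u) is not linear in s
```

The slopes come out clearly unequal, in line with the behaviour of the $s$-derivative shown in Fig. 2.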
## References
* (1) R. Zwanzig, Memory Effects in Irreversible Thermodynamics, Phys. Rev. 124, 983 (1961).
* (2) H. Grabert, Projection operator techniques in nonequilibrium statistical mechanics, Springer, Berlin, 1982.
* (3) R. Hilfer and L. Anton, Fractional master equations and fractal time random walks, Phys. Rev. E 51, R848 (1995).
* (4) A. Compte, Stochastic foundations of fractional dynamics, Phys. Rev. E 53 (4), 4191 (1996).
* (5) M. Meerschaert and H.P. Scheffler, Continuous time random walks and space-time fractional differential equations. In: A. Kochubei, Yu. Luchko (eds.) Handbook of Fractional Calculus with Applications. Volume 2. Fractional Differential Equations (De Gruyter, Berlin, 2019), pp. 385-406.
* (6) R. Metzler and J. Klafter, The random walk’s guide to anomalous diffusion: a fractional dynamics approach, Phys. Reports, 339, 1-78 (2000).
* (7) I.M. Sokolov, J. Klafter and A. Blumen, Fractional kinetics, Physics Today 55, 48-54 (2002).
* (8) R. Metzler and J. Klafter, The restaurant at the end of the random walk: recent developments in the description of anomalous transport by fractional dynamics, J. Phys. A: Math. Gen. 37, R161–R208 (2004).
* (9) R. Klages, G. Radons and I.M. Sokolov (eds.), Anomalous Transport: Foundations and Applications (Wiley VCH - Verlag, Weinheim, 2004).
* (10) J. Klafter, S.C. Lim and R. Metzler (eds.), Fractional dynamics: recent advances (World Scientific, Singapore, 2012).
* (11) A.V. Chechkin, R. Gorenflo, and I.M. Sokolov, Retarding subdiffusion and accelerating superdiffusion governed by distributed-order fractional diffusion equations, Phys. Rev. E 66, 046129 (2002).
* (12) A.V. Chechkin, R. Gorenflo, I. M. Sokolov, and V. Y. Gonchar, Distributed order time fractional diffusion equation, Fractional Calculus and Applied Analysis 6, 259-280 (2003).
* (13) A. V. Chechkin, J. Klafter and I. M. Sokolov, Fractional Fokker-Planck equation for ultraslow kinetics, Europhys. Lett. 63, 326-332 (2003).
* (14) I. M. Sokolov, A. V. Chechkin, and J. Klafter, Distributed-order fractional kinetics, Acta Physica Polonica B 35, 1323 (2004).
* (15) M. Naber, Distributed order fractional sub-diffusion, Fractals 12, 23-32 (2004).
* (16) I. M. Sokolov and J. Klafter, From diffusion to anomalous diffusion: A century after Einstein’s Brownian motion, Chaos 15, 026103 (2005).
* (17) I. M. Sokolov and A. V. Chechkin, Anomalous diffusion and generalized diffusion equations, Fluct. Noise Lett. 5, L275-L282 (2005).
* (18) S. Umarov and R. Gorenflo, Cauchy and Nonlocal Multi-Point Problems for Distributed Order Pseudo-Differential Equations, Part One, J. Anal. Appl. 24, 449-466 (2005).
* (19) M. M. Meerschaert and H.-P. Scheffler, Limit theorems for continuous time random walks with slowly varying waiting times, Statistics Prob. Lett. 71, 15-22 (2005).
* (20) M. M. Meerschaert and H.-P. Scheffler, Stochastic model for ultraslow diffusion, Stoch. Proc. Appl. 116, 1215-1235 (2006).
* (21) T. A. M. Langlands, Solution of a modified fractional diffusion equation, Physica A 367, 136-144 (2006).
* (22) A. Hanyga, Anomalous diffusion without scale invariance, J. Phys. A: Math. Theor. 40, 5551 (2007).
* (23) F. Mainardi and G. Pagnini, The role of the Fox–Wright functions in fractional sub-diffusion of distributed order, J. Comput. Appl. Math. 207, 245-257 (2007).
* (24) F. Mainardi, G. Pagnini and R. Gorenflo, Some aspects of fractional diffusion equations of single and distributed order, Appl. Math. Comput. 187, 295 (2007).
* (25) A.V. Chechkin, V.Yu. Gonchar, R. Gorenflo, N. Korabel and I.M. Sokolov, Generalized fractional diffusion equations for accelerating subdiffusion and truncated Lévy flights, Phys. Rev. E 78, 021111 (2008).
* (26) A. N. Kochubei, Distributed order calculus and equations of ultraslow diffusion, J. Math. Anal. Appl. 340, 252-281 (2008).
* (27) M. M Meerschaert, E. Nane, P. Vellaisamy, Distributed-order fractional diffusions on bounded domains. Journal of Mathematical Analysis and Applications 379, 216-228 (2011).
* (28) A. V. Chechkin, I. M. Sokolov and J. Klafter, Natural and Modified Forms of Distributed-Order Fractional Diffusion Equations. In FracDyn , Chapter 5, pp. 107-127.
* (29) I. M. Sokolov, A. V. Chechkin, and J. Klafter, Fractional diffusion equation for a power-law-truncated Lévy process, Physica A 336, 245-251 (2004).
* (30) A. Stanislavsky, K. Weron, and A. Weron, Diffusion and relaxation controlled by tempered $\alpha$-stable processes, Phys. Rev. E 78, 061106 (2008).
* (31) M. M. Meerschaert, Y. Zhang, B. Baeumer, Tempered anomalous diffusion in heterogeneous systems. Geophys. Res. Lett. 35, L17403 (2008).
* (32) B. Baeumer, M. M. Meerschaert, Tempered stable Lévy motion and transient super-diffusion. J. Comp. Appl. Math. 233, 2438-2448 (2010).
* (33) T. Sandev, A. Chechkin, H. Kantz, R. Metzler, Diffusion and Fokker-Planck equations with Generalized Memory Kernels (Survey paper). Frac. Calc. Appl. Anal. 18 (4), 1006-1038 (2015).
* (34) A. Stanislavsky, K. Weron, Atypical case of the dielectric relaxation responses and its fractional kinetic equation. Frac. Calc. Appl. Anal. 19, 212 (2016).
* (35) T. Sandev, I.M. Sokolov, R. Metzler, A. Chechkin, Beyond monofractional kinetics, Chaos, Solitons and Fractals, 102, 210-217 (2017).
* (36) T. Sandev, R. Metzler, A. Chechkin, From continuous time random walks to the generalized diffusion equation. Frac. Calc. Appl. Anal. (Review paper) 21 (1), 10-28 (2018).
* (37) A. Stanislavsky and A. Weron, Transient anomalous diffusion with Prabhakar-type memory. J. Chem. Phys. 149, 044107 (2018).
* (38) T. Sandev, Z. Tomovski, J. L. A. Dubbeldam, and A. Chechkin, Generalized diffusion-wave equation with memory kernel, J. Phys. A: Math. Theor. 52, 015201 (2019)
* (39) A. Stanislavsky, A. Weron, Control of the transient subdiffusion exponent at short and long times, Phys. Rev. Research 1, 023006 (2019).
* (40) S. Chandrasekhar, Stochastic Problems in Physics and Astronomy, Rev. Mod. Phys. 15, 1 (1943).
* (41) H. Fogedby, Langevin equations for continuous time Lévy flights, Phys. Rev. E 50, 1657 (1994).
* (42) A. Baule and R. Friedrich, Joint probability distributions for a class of non-Markovian processes, Phys. Rev. E 71, 026101 (2005)
* (43) A. Baule and R. Friedrich, A fractional diffusion equation for two-point probability distributions of a continuous-time random walk. Eur. Phys. Lett. 77, 10002 (2007).
* (44) D. Kleinhans and R. Friedrich, Continuous-time random walks: Simulation of continuous trajectories. Phys. Rev. E 76, 061102 (2007).
* (45) A. V. Chechkin, M. Hofmann, and I. M. Sokolov, Continuous-time random walk with correlated waiting times, Phys. Rev. E 80, 031112 (2009).
* (46) T. Sandev, A. V. Chechkin, N. Korabel, H. Kantz, I. M. Sokolov, and R. Metzler, Distributed-order diffusion equations and multifractality: models and solutions, Phys. Rev. E 92, 042117 (2015)
* (47) W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1 and 2, John Wiley & Sons, 1968
* (48) M. M. Meerschaert, D. A. Benson, H.-P. Scheffler, and B. Baeumer, Stochastic solution of space-time fractional diffusion equations, Phys. Rev. E 65, 041103 (2002).
* (49) A. A. Stanislavsky, Subordinated Brownian Motion and its Fractional Fokker–Planck Equation, Phys. Scr. 67, 265 (2003).
* (50) R. Gorenflo, F. Mainardi, and A. Vivoli, Continuous-time random walk and parametric subordination in fractional diffusion, Chaos, Solitons and Fractals 34, 87 - 103 (2007).
* (51) R. Gorenflo and F. Mainardi, Subordination pathways to fractional diffusion, Eur. Phys. J.: Special Topics 193, 119 - 132 (2011).
* (52) R. Gorenflo and F. Mainardi, Parametric subordination in fractional diffusion processes, in: S.C. Lim, J. Klafter and R. Metzler, eds., Fractional Dynamics, Recent Advances, World Scientific, Singapore, 2012 (Ch.10, pp. 227-261)
* (53) A.I. Saichev and G.M. Zaslavsky, Fractional kinetic equations: Solutions and applications, Chaos, 7, 753 - 764 (1997).
* (54) E. Barkai, Fractional Fokker-Planck equation, solution, and application. Phys. Rev. E 63, 046118 (2001).
* (55) B. Baeumer and M. M. Meerschaert, Stochastic solutions for fractional Cauchy Problems, Frac. Calc. Appl. Anal, 4, 481-500 (2001).
* (56) B. Baeumer, D. A. Benson, M. M. Meerschaert, and S. W. Wheatcraft, Subordinated advection-dispersion equation for contaminant transport, Water. Res. Research 37, 1543 - 1550 (2001).
* (57) I.M. Sokolov, Solutions of a class of non-Markovian Fokker-Planck equations, Phys. Rev. E 66, 041101 (2002).
* (58) F. Mainardi, G. Pagnini, and R. Gorenflo, Mellin transform and subordination laws in fractional diffusion processes, Fract. Calc. Appl. Anal., 6, 441 (2003).
* (59) A.I. Saichev and S.G. Utkin, Random Walks with Intermediate Anomalous-Diffusion Asymptotics, J. Exp. Theor. Phys., 99, 443 - 448 (2004)
* (60) A. Piryatinska, A.I. Saichev, and W.A. Woyczynski, Models of anomalous diffusion: the subdiffusive case, Physica A 349, 375 - 420 (2005).
* (61) F. Mainardi, G. Pagnini, and R. Gorenflo, Mellin convolution for subordinated stable processes, J. Math. Sci. 132 637 (2006).
* (62) R. Gorenflo, F. Mainardi, A. Vivoli, Continuous-time random walk and parametric subordination in fractional diffusion, Chaos, Solitons & Fractals, 34 87 (2007).
* (63) M. Magdziarz, Langevin Picture of Subdiffusion with Infinitely Divisible Waiting Times, J. Stat. Phys. 135, 763–772 (2009).
* (64) M. Magdziarz, Stochastic representation of subdiffusion processes with time-dependent drift, Stoch. Proc. Appl. 119, 3238–3252 (2009).
* (65) J. Gajda and M. Magdziarz, Fractional Fokker-Planck equation with tempered $\alpha$-stable waiting times: Langevin picture and computer simulation, Phys. Rev. E 82, 011117 (2010).
* (66) M. M. Meerschaert, P. Straka, Inverse Stable Subordinators, Math. Model. Nat. Phenom. 8, 1-16 (2013).
* (67) A. Stanislavsky, K. Weron, and A. Weron, Anomalous diffusion with transient subordinators: A link to compound relaxation laws, J. Chem. Phys. 140, 054113 (2014).
* (68) M. Annunziato, A. Borzi, M. Magdziarz, and A. Weron, A fractional Fokker–Planck control framework for subdiffusion processes, Optim. Control. Appl. Meth. 37, 290-304 (2015).
* (69) M. Magdziarz and R. L. Schilling, Asymptotic properties of Brownian motion delayed by inverse subordinators, Proc. Amer. Math. Soc. 143, 4485–4501 (2015).
* (70) M. Magdziarz and T. Zorawik, Stochastic representation of fractional subdiffusion equation. The case of infinitely divisible waiting times, Lévy noise and space-time-dependent coefficients, Proc. Amer. Math. Soc. 144, 1767-1778 (2016).
* (71) A. Stanislavsky and A. Weron, Control of the transient subdiffusion exponent at short and long times, Phys. Rev. Research 1, 023006 (2019).
* (72) A. Stanislavsky and A. Weron, Accelerating and retarding anomalous diffusion: A Bernstein function approach, Phys. Rev. E 101, 052119 (2020).
* (73) A.V. Chechkin, F. Seno, R. Metzler, and I.M. Sokolov, Brownian yet Non-Gaussian Diffusion: From Superstatistics to Subordination of Diffusing Diffusivities, Phys. Rev. X 7, 021002 (2017).
* (74) V. Sposini, A.V. Chechkin, F. Seno, G. Pagnini, R. Metzler, Random diffusivity from stochastic equations: comparison of two models for Brownian yet non-Gaussian diffusion, New J. Phys. 20, 043044 (2018).
* (75) V. Sposini, A. Chechkin and R. Metzler, First passage statistics for diffusing diffusivity, J. Phys. A: Math. Theor. 52, 04LT01 (2019).
* (76) B. Wang, S.M. Antony, S.C. Bae and S. Granick, Anomalous yet Brownian, PNAS 106, 15160 (2009).
* (77) B. Wang, J. Kuo, S.C. Bae and S. Granick, When Brownian diffusion is not Gaussian, Nature Materials 11, 481 (2012).
* (78) M.V. Chubynsky and G.W. Slater, Diffusing Diffusivity: A Model for Anomalous, yet Brownian, Diffusion, Phys. Rev. Lett. 113, 098302 (2014).
* (79) R. Jain and K.L. Sebastian, Diffusion in a Crowded, Rearranging Environment, J. Phys. Chem. B 120, 3988 (2016).
* (80) N. Tyagi and B. J. Cherayil, Non-Gaussian Brownian diffusion in dynamically disordered thermal environments, J. Phys. Chem. B 121, 7204 (2017).
* (81) Y. Lanoiselee and D. S. Grebenkov, A model of non-Gaussian diffusion in heterogeneous media, J. Phys. A: Math. Theor. 51, 145602 (2018).
* (82) Y. Lanoiselee, N. Moutal, and D.S. Grebenkov, Diffusion-limited reactions in dynamic heterogeneous media, Nature Comm. 9, 4398 (2018).
* (83) S. Song, S. J. Park, M. Kim, J. S. Kim, B. J. Sung, S. Lee, J.-H. Kim, and J. Sung, Transport dynamics of complex fluids, PNAS 116 (26), 12733-12742 (2019).
* (84) J. M. Miotto, S. Pigolotti, A. V. Chechkin, and S. Roldan-Vargas, Length scales in Brownian yet non-Gaussian dynamics, arXiv:1911.07761.
* (85) R. Metzler, Superstatistics and non-Gaussian diffusion, Eur. Phys. J.: Spec. Top. 229, 711-728 (2020).
* (86) I. M. Sokolov and J. Klafter, Field-induced dispersion in subdiffusion, Phys. Rev. Lett. 97, 140602 (2006).
* (87) A. V. Chechkin, R. Gorenflo, and I. M. Sokolov, Fractional diffusion in inhomogeneous media, J. Phys. A: Math. and Gen. 38 L679 (2005).
* (88) E. Barkai and R. J. Silbey, Fractional Kramers Equation, J. Phys. Chem. B 104, 3866-3874 (2000).
* (89) I. M. Sokolov, Thermodynamics and fractional Fokker-Planck equations, Phys. Rev. E 63, 056111, (2001).
* (90) I. M. Sokolov, Lévy flights from a continuous-time process, Phys. Rev. E 63, 011104 (2000).
* (91) Equation (30) is always fulfilled for independent random variables. However, there exist the examples when it is also valid for dependent variables, see e.g. G. G. Hamedani, Sub-Independence - An explanatory Perspective, Commun. in Stat. Theory and Methods, 42, 3615-3638 (2013).
* (92) M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1972.
# The OxyContin Reformulation Revisited: New Evidence From Improved
Definitions of Markets and Substitutes
Shiyu Zhang (California Institute of Technology; corresponding author, email
[email protected]) and Daniel Guth (California Institute of Technology)
The opioid epidemic began with prescription pain relievers. In 2010 Purdue
Pharma reformulated OxyContin to make it more difficult to abuse. OxyContin
misuse fell dramatically, and concurrently heroin deaths began to rise.
Previous research overlooked generic oxycodone and argued that the
reformulation induced OxyContin users to switch directly to heroin. Using a
novel and fine-grained source of all oxycodone sales from 2006-2014, we show
that the reformulation led users to substitute from OxyContin to generic
oxycodone, and the reformulation had no overall impact on opioid or heroin
mortality. In fact, generic oxycodone, instead of OxyContin, was the driving
factor in the transition to heroin. Finally, we show that by omitting generic
oxycodone we recover the results of the literature. These findings highlight
the important role generic oxycodone played in the opioid epidemic and the
limited effectiveness of a partial supply-side intervention.
Keywords: Opioids, Drug Overdoses, Heroin, Drug Distributors
JEL Codes: I11, I12, I18
## 1 Introduction
Since 1999, the opioid epidemic has claimed more than 415,000 American lives
(CDC Wonder). What started with fewer than 6,000 opioid-related deaths in 1999
grew steadily every year until fatalities reached 47,573 deaths in 2017.
Following a small decline in fatal drug overdoses in 2018, deaths continue to
rise. Over the past two decades, millions of Americans have misused
prescription opioids or progressed to more potent opioids, first heroin and
later fentanyl. Many social scientists have tried to understand how this
crisis has grown over two decades despite significant public health efforts to
the contrary.
Doctors and health economists have long argued that the drug most responsible
for prescription opioid overdose deaths, and the key to understanding the
transition from prescription opioids to heroin starting in 2010, was
OxyContin. Previous research [41], court proceedings [27], and books ([28],
[24]) have documented how Purdue Pharma’s marketing campaign for OxyContin
downplayed the risk of addiction starting in 1996. Since then, according to
the National Survey on Drug Use and Health (NSDUH), millions of Americans have
misused it. A key question in this area is whether or not making
prescription opioids, especially OxyContin, more difficult to abuse will
reduce overdose deaths.
In this paper, we show that restricting access to OxyContin led many users to
switch to generic oxycodone but had no impact on opioid or heroin mortality.
Earlier analyses attributing opioid overdose deaths in the late 2000s and the
subsequent rise in heroin deaths to OxyContin are incomplete because they omit
generic oxycodone. Our analysis shows that the misuse of generic oxycodone was
prevalent before the reformulation that restricted OxyContin access, and was
even more so afterward. We also show that heroin overdose deaths increased in
areas with high generic oxycodone exposure, not high OxyContin exposure, two
years after the OxyContin reformulation. In addition, we show that omitting
generic oxycodone in our regressions recovers the results of the literature.
This analysis was not possible until one year ago when The Washington Post won
a court order and published the complete Automation of Reports and
Consolidated Orders System (ARCOS). The ARCOS tracks the manufacturer, the
distributor and the pharmacy of every pain pill sold in the United States. The
newly released data allow us to analyze what happened to sales of generic
oxycodone and OxyContin when OxyContin suddenly became more difficult to
abuse. The previous literature focused on analyzing OxyContin because of
Purdue’s notorious role in the opioid crisis. However, the new data show that
OxyContin sales were only a small part of all prescription opioid sales: in
terms of the number of pills, OxyContin accounted for 3% of all oxycodone
pills sold from 2006 to 2012; in terms of morphine milligram equivalents
(MME), OxyContin had closer to a 20% market share over this period. The new
transaction-level ARCOS data allow us to track the sales of generic oxycodone
and fill in the narrative gaps of how the opioid crisis progressed in the
United States.
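To illustrate why the pill share (3%) and the MME share (roughly 20%) diverge, a minimal MME computation can be sketched as follows. This is not the paper's code: the transactions are hypothetical, and the oxycodone-to-morphine factor of 1.5 is the commonly used CDC conversion factor (whether the authors use exactly this factor is our assumption).

```python
# Illustrative sketch (not the paper's code): converting oxycodone pill counts
# to morphine milligram equivalents (MME). 1.5 is the standard CDC
# oxycodone-to-morphine conversion factor; the transactions are hypothetical.
OXYCODONE_MME_FACTOR = 1.5

def transaction_mme(strength_mg: float, n_pills: int) -> float:
    """Total MME of one ARCOS-style transaction of oxycodone pills."""
    return strength_mg * n_pills * OXYCODONE_MME_FACTOR

# An 80 mg OxyContin pill carries 8x the MME of a 10 mg generic pill, which is
# how a small pill share can translate into a much larger MME share.
oxycontin = transaction_mme(80.0, 100)  # 100 pills of 80 mg -> 12000.0 MME
generic = transaction_mme(10.0, 100)    # 100 pills of 10 mg -> 1500.0 MME
print(oxycontin / generic)              # 8.0
```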
Following [5], [17] and [11], we treat the introduction of an abuse-deterrent
formulation (ADF) of OxyContin as an exogenous shock that should only affect
people who seek to bypass the extended-release mechanism for a more immediate
high. We construct measures of exposure by combining ARCOS sales and the NSDUH
data on drug misuse. The NSDUH is the best survey of people who use drugs at
the state level, and by combining it with local sales we can capture variation
in drug use within the state. We leverage this variation in OxyContin and
generic oxycodone exposure to examine how the reformulation affected OxyContin
sales, generic oxycodone sales, opioid mortality, and heroin mortality. Our
first contribution is that we fix the omitted-variable problem by
differentiating between OxyContin and generic oxycodone, and we show that this
leads to different conclusions than what previous literature suggests. Our
second contribution is disaggregating the data to the metropolitan statistical
area (MSA) level, which allows us to address endogeneity at the state level.
To preview our results, we find strong evidence of substitution from OxyContin
to generic oxycodone immediately after the reformulation. This substitution
was larger in places that had more OxyContin misuse pre-reform, which is
consistent with our hypothesis that users would switch between oxycodones
rather than move on to heroin. Because this substitution should be
concentrated among people misusing OxyContin, the results imply large changes
in consumption at the individual level. A back-of-the-envelope calculation
suggests that 68% of the decline in OxyContin sales was substituted to generic
oxycodone in MSAs with high OxyContin misuse. The findings are consonant with
surveys such as [19], [14], and [9], which all document substitution to generic oxycodone after
the reformulation by people seeking to bypass the ADF. We also find suggestive
evidence of substitution from generic oxycodone to OxyContin after the
reformulation in places where generic oxycodone misuse was high, a channel
that has been unexplored in previous research.
Our event study approach also shows that generic oxycodone exposure is
predictive of future heroin overdose deaths whereas OxyContin exposure is not.
The results are not contingent on methodology or our construction of exposure
measures. Crucially, if we run the same exact regressions at the state or MSA
level and omit generic oxycodone, we recover the results of the literature
where OxyContin misuse appears to be significantly predictive of future heroin
overdose deaths. We find that every standard deviation increase in generic
oxycodone exposure pre-reformulation is associated with a 40.8% increase in
heroin mortality in 2012 from the 2009 baseline level. As further evidence
against the argument that there was immediate substitution from OxyContin to
heroin after the reformulation, we note that in all of our regressions the
increase in heroin deaths was not statistically significant until 2012. As
suggested in [30], the rise in heroin deaths can be attributed in part to an
increase in the supply of heroin as well as the introduction of fentanyl into
heroin doses.
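The event-study logic described above can be sketched as follows. This is synthetic and schematic, not the paper's specification: the sample sizes, controls, effect sizes, and coefficients below are invented for illustration, and the true estimates (e.g. the 40.8% figure) come from the paper's own regressions.

```python
# Schematic event-study sketch on synthetic data (not the paper's estimates):
# regress log heroin mortality on year dummies and exposure-x-year interactions.
import numpy as np

rng = np.random.default_rng(0)
n_msa, years = 200, np.arange(2006, 2015)
exposure = rng.normal(size=n_msa)  # standardized pre-reform exposure (synthetic)

rows, outcomes = [], []
for j, yr in enumerate(years):
    # Invented data-generating process: exposure raises mortality only after
    # the 2010 reformulation, with a gradually growing effect.
    effect = 0.4 * max(int(yr) - 2010, 0) / 4.0
    outcomes.append(np.log(50.0) + effect * exposure
                    + 0.1 * rng.normal(size=n_msa))
    dummies = np.zeros((n_msa, len(years)))
    dummies[:, j] = 1.0
    rows.append(np.column_stack([dummies, dummies * exposure[:, None]]))

X, y = np.vstack(rows), np.concatenate(outcomes)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
interactions = beta[len(years):]  # exposure-x-year coefficients
print(dict(zip(years.tolist(), np.round(interactions, 2).tolist())))
```

In this synthetic setup the interaction coefficients are near zero before 2010 and grow afterward, mirroring the pattern the paper reports for generic oxycodone exposure.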
Our findings highlight the pitfalls of omitting important substitutes to
OxyContin in analyzing the prescription opioid crisis. Purdue Pharma has
received well-deserved attention over the years for its role in igniting the
crisis. The company has been involved in many lawsuits over the years, but
perhaps the most damaging were lesser-known cases that involved losing its
patent in 2004 (see the federal ruling and the risk-management plan proposals
for generic oxycodone), which cleared the way for a rapid increase in generic
oxycodone sales in the early 2000s. While Purdue Pharma was being sued and
scrutinized, several manufacturers took the opportunity to fill the gap left
by OxyContin.
By 2006, generic oxycodone outsold OxyContin by more than 3-to-1 after
accounting for pill dosage differences. This paper sheds light on the role
generic oxycodone played and continues to play in the opioid crisis and helps
policy makers update their picture of the opioid use disorder (OUD) landscape.
The paper also calls attention to the limited effectiveness of a partial
supply-side intervention to curb OUD. Purdue Pharma was once a dominant player
in the opioid market, but by the time of the reformulation, that dominance had
vanished and it was only one of the many manufacturers whose drugs were
actively misused by Americans. Purdue was the first company to include an
abuse-deterrent formulation (ADF) in its opioids, and it was not until recent
years that other brands started adding abuse-deterrent compounds to their
products [32]. When substitution to other abusable opioids is easy, cutting
the supply of one kind is less effective.
The rest of the paper proceeds as follows. Section 2 gives more background on the
opioid crisis and explains how previous research has characterized the
OxyContin reformulation. In Section 3 we describe the new ARCOS sales
database, the NSDUH misuse data, the NVSS mortality data, as well as our
constructed misuse measure and descriptive statistics. Section 4 describes our
empirical strategy for testing our hypotheses. Section 5 discusses our results
and what they mean for our understanding of the transition between illicit
drugs, and Section 6 concludes.
## 2 Background and Literature Review
This section proceeds in chronological order. First, we provide a history of
oxycodone and its most important formulation, OxyContin. We then describe the
OxyContin reformulation in 2010 and what it meant for prescription opioid
misuse, as well as how the previous literature analyzed the reformulation.
Next, we present the nascent research on substitution between different
opioids and how our contribution fits in this strain of work. We conclude with
a summary of the literature on heroin mortality in the early 2010s and its
link with the prescription opioid crisis.
Oxycodone was first marketed in the United States as Percodan by DuPont
Pharmaceuticals in 1950. It was quickly found to be as addictive as morphine
[7], and in 1965 California placed it on the triplicate prescription form
[33]. (Triplicate programs required pharmacists to send a copy of each
prescription to the government, and [3] show that these had a persistent
effect on reducing the number of opioid prescriptions.) Before the 1990s,
doctors were hesitant to
prescribe oxycodone to non-terminally ill patients due to its high abuse
potential [16]. The sales of oxycodone-based pain relievers did not take off
until the mass marketing of OxyContin, Purdue’s patented oxycodone-based
painkiller. OxyContin was first approved by the FDA in 1995. The drug’s
innovation was an ‘extended-release’ formula, which allowed the company to
pack a higher concentration of oxycodone into each OxyContin pill and patients
to take the pills every 12 hours instead of every 8. OxyContin’s original
label, approved by the FDA, stated that the “delayed absorption, as provided
by OxyContin tablets, is believed to reduce the abuse liability of a drug.” In
2001, the FDA changed OxyContin’s label to include stronger warnings about the
potential for abuse, and Purdue agreed to implement a Risk Management Program
to try to reduce OxyContin misuse (see the FDA Opioid Timeline).
OxyContin was one of the first opioids marketed specifically for non-cancer
pain. In the early 1990s, pain started to enter the medical discussion as the
‘fifth vital sign’ and something to be managed. As described in [28], [41],
and elsewhere, Purdue’s sales representatives pushed OxyContin and were told
to downplay the risk of addiction. [34] describes how Purdue cited a short
1980 letter published in the New England Journal of Medicine reporting
extremely low rates of opioid addiction among hospitalized patients, but the
company repeatedly implied this result extended to the
general population or to individuals who left the hospital with take-home
prescriptions of OxyContin. The short letter was uncritically or incorrectly
cited 409 times as evidence that addiction was rare with long-term opioid
therapy ([23]). As a result of Purdue’s aggressive marketing and downplaying
of the drug’s abuse potential, OxyContin was a huge financial success and
effectively catalyzed the prescription opioid crisis.
In May 2007, Purdue signed a guilty plea for misleading the public about the
risk of OxyContin and paid more than $600 million in fines. Less than six
months later, the company applied to the FDA for approval of a new
reformulated version of OxyContin that included a chemical to make it more
difficult to crush and misuse [35]. Although not completely effective in
reducing misuse, it was approved by the FDA and after August 2010 accounted
for all OxyContin sales in the United States. Until 2016, when Xtampza ER was
approved, Purdue was the only prescription opioid manufacturer to make
abuse-deterrent oxycodone pills. The majority of all oxycodone sold over this
time was generic oxycodone that remained abusable. (Many other companies
attempted to make abuse-deterrent opioid pills at the same time, as shown in
[42], but Purdue was the first to market; [2] and [32] list other opioids with
an ADF.)
Most research shows that OxyContin misuse fell following the reformulation. As
described in [11], although some users were able to circumvent the abuse-
deterrent formulation (ADF) to inject or ingest, the reformulation did reduce
misuse. [17] finds that the reformulation coincided almost exactly with a
structural break in aggregate oxycodone sales, which had previously been
increasing. Shortly after the OxyContin reformulation was implemented,
researchers began to notice illicit drug use moving towards other drugs such
as heroin or generic oxycodone ([12], [14], [4], [17], [19], [9]). Our paper
extends the analysis of the impact of reformulation on opioid use by
separately identifying the shifts in OxyContin and generic oxycodone misuse.
We build upon a rich literature that studies opioid misuse through surveys or
analysis of the aggregated ARCOS reports. Surveys mostly polled either
informants or users themselves (for details see [21]). The best surveys have
been of users in smaller samples at individual treatment facilities, like in
[20] and [39]. However, selection bias is a problem for surveying treatment
facilities, as that is a specific subset of patients whose habits may be
different from the overall drug-using population (particularly because they
are seeking treatment). Some researchers have also used the quarterly ARCOS
reports to study national trends in consumption, like in [5], [25], and [6].
The quarterly ARCOS reports have no information on the market share of each
brand of prescription opioid, thereby restricting any analysis to the
aggregate level only. Our work is closely connected to the second set of
papers, but we are able to leverage ARCOS’s transaction level data to
distinguish sales of OxyContin from generic oxycodone.
This newly released ARCOS data allows us to make two methodological
improvements. First, the literature treats the OxyContin reformulation as an
exogenous shock at the state level. This assumption is problematic because
each state’s dependency on OxyContin as well as exposure to the reformulation
is the result of the state’s regulatory environment ([3]). These regulatory
factors could have an impact on how people react to the reformulation, and
thus create a hidden link between OxyContin exposure and the reformulation
outcomes. Using the new ARCOS data, we can disaggregate to Metropolitan
Statistical Areas (MSAs), which allows our model to identify drug
substitutions using within-state variations in opioid sales and mortality
while controlling for across-state variations in policies and drug
enforcement.
The second benefit of the new ARCOS data set is that it allows us to
disaggregate different kinds of prescription opioid sales on a national scale.
Previous national studies were unable to distinguish between these drugs due
to limitations in existing data. The NSDUH survey, the primary data source for
drug misuse at the national level, only documented past year use of OxyContin.
Death certificates do not distinguish between OxyContin and generic oxycodone.
The aggregate ARCOS sales group all oxycodone sales into one bin. Because of
OxyContin’s unique role in fomenting the opioid epidemic, it has received most
of the attention of researchers. The literature has assumed that studying
OxyContin was equivalent to studying all oxycodone. As a result, although
non-OxyContin oxycodone misuse is significant in size, it has been
understudied. One notable exception is [31], which notes that non-OxyContin
oxycodone was a better predictor of state opioid deaths than OxyContin.
The previous literature also attempts to link the misuse of prescription
opioids to the rise in heroin misuse. [38] are the first to suggest the
pathway from prescription opioids to heroin, and they further note a reversal
of this trend where heroin users switched to prescription opioids when heroin was
unavailable. [13] describes how by the 21st century people who initiated
heroin use were very likely to have started by using prescription opioids non-
medically. The most recent works on OxyContin reformulation suggest that the
reformulation played an important part in reigniting the heroin epidemic since
2010. [11] and [26], who rely on smaller surveys, find the predominant drug
people switched to after reformulation was heroin. [17] identifies a
structural break in heroin deaths in August 2010 that was accompanied by
higher growth in heroin deaths in areas with greater pre-reformulation access
to heroin and opioids. Similarly, [5] shows that the rise in heroin deaths was
larger in places with higher OxyContin misuse pre-reformulation. However, the
evidence linking the reformulation to the rise in heroin death is not
conclusive: other researchers suggest the sharp rise in heroin use may have
predated the OxyContin reformulation by a few years ([15], [9]). With the new
ARCOS data, we are able to examine the claim that the OxyContin reformulation
caused the subsequent heroin epidemic in more detail. In particular, we
separate the impact of the reformulation on heroin use from the gradual shifts
in oxycodone misuse that are independent of the reformulation.
## 3 Data and Descriptive Statistics
To estimate the impact of the OxyContin reformulation on opioid use and
mortality, we combine several data sources including sales of OxyContin and
non-OxyContin alternatives from ARCOS, opioid and heroin mortality from the
NVSS, and self-reported OxyContin and Percocet misuse from the NSDUH. Our main
regression leverages variations in pre-reform exposure to OxyContin and
generic oxycodone to identify the impact of the reformulation on opioid sales
and mortality. We define a new measure of exposure by interacting the state-
level self-reported opioid misuse and MSA-level opioid sales. In this section,
we describe the three sources of data, the market definition, the construction
of the OxyContin and generic oxycodone exposure measure, and present summary
statistics of our data.
### 3.1 Data
#### 3.1.1 ARCOS and the Sales of Prescription Opioid
As part of the Controlled Substances Act, distributors and manufacturers of
controlled substances are required to report all transactions to the DEA. This
Automation of Reports and Consolidated Orders System (ARCOS) database contains
the record of every pain pill sold in the United States. The complete database
from 2006 to 2014 was recently released by a federal judge as a result of an
ongoing trial in Ohio against opioid manufacturers. (Footnote 5: Link to the ARCOS Data published by the Washington Post.)
The ARCOS database has been used previously to study opioids, but only using
the publicly available quarterly aggregate weight of drugs sold ([6]) or via
special request to the DEA ([29]). The newly released full database reports
the manufacturer and the distributor for every pharmacy order. These data
allow us to track different brands of prescription opioids separately, and
calculate what fraction of oxycodone sold is OxyContin at any level of
geographic aggregation. We can thus construct what we believe is the first
public time-series of OxyContin and generic oxycodone sales from 2006-2014.
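The brand-level aggregation described above is a simple group-and-divide computation. A minimal sketch in Python follows; the record fields (`msa`, `year`, `brand`, `mme`) are hypothetical stand-ins for the actual ARCOS schema, not its real column names:

```python
from collections import defaultdict

def oxycontin_share(transactions):
    """Aggregate pharmacy-order records into OxyContin's share of all
    oxycodone sold, keyed by (MSA, year). Field names are hypothetical
    stand-ins for the ARCOS schema."""
    total = defaultdict(float)      # all oxycodone weight per (msa, year)
    oxycontin = defaultdict(float)  # OxyContin weight per (msa, year)
    for t in transactions:
        key = (t["msa"], t["year"])
        total[key] += t["mme"]
        if t["brand"] == "OxyContin":
            oxycontin[key] += t["mme"]
    return {k: oxycontin[k] / total[k] for k in total}
```

The same two-accumulator pass works at any level of geographic aggregation, since only the grouping key changes.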
Figure 1: Growth of oxycodone and OxyContin sales. Note: We supplemented the 2006 to 2014 data with publicly available aggregate data from 2000 to 2005; the aggregate data does not break down oxycodone sales by manufacturer.
As we can see from Figure 1, total oxycodone sales increased substantially
from 2000 to 2010, with per-person sales nearly quadrupling over that
ten-year period. From 2010 to 2015, sales of oxycodone declined as a result of
aggressive measures taken by the states and the federal government to counter
opioid addiction ([22]).
The newly available ARCOS data suggest that the commonly held belief about
OxyContin's dominance in the prescription opioid market at the time of
reformulation is incorrect. The last time OxyContin's market share was
estimated was in 2002 by [31], who acquired from the DEA a year's worth of ARCOS data
aggregated at the state level. In that year, OxyContin was 68% of all
oxycodone sales by active ingredient weight and scholars have assumed that
Purdue’s market share stayed high until the OxyContin reformulation. However,
as Figure 1 shows, by 2006 when our data starts, OxyContin sales only
accounted for 18% of all oxycodone sold by weight and never got above 35%
during this period. The share is even smaller if we count the number of pills
sold, since the average OxyContin active ingredient weight is 5 to 10 times
higher than that of oxycodone from other brands. The share of OxyContin
decreased dramatically from 2002 to 2006 because Purdue lost the patent rights
in 2004. As a result, non-OxyContin oxycodone sales grew much faster in the
early 2000s than OxyContin sales. Figure 7 in the Appendix presents the market
share of all oxycodone manufacturers by dosage strength; Purdue Pharma is
dominant only at higher dosages ($\geq$ 40mg). The overestimation of
OxyContin’s importance in the pre-reform period explains why the previous
literature overlooked the role generic oxycodone played in the opioid
epidemic.
The ARCOS sales data are the primary variables in our main regressions. We
aggregate sales by MSA, year, and brand. To focus on the impact of the
reformulation on OxyContin and non-OxyContin alternatives, we group all
alternative oxycodone products into one measure, which we will refer to as
generic oxycodone for the rest of the analysis. (Footnote 6: We acknowledge
that some non-OxyContin alternatives are branded and non-generic (e.g.
Percocet and Percodan, or later Roxicodone), but the majority of them are
generic products. Generic oxycodone in this paper should be interpreted as all
non-OxyContin oxycodone products.)
#### 3.1.2 NVSS Mortality Data
The second outcome of interest in our main regression is opioid mortality. We
use the restricted-use multiple-cause mortality data from the National Vital
Statistics System (NVSS) to track opioid and heroin overdose. The dataset
covers all deaths in the United States from 2006-2014. We follow the
literature’s two step procedure to identify opioid-related deaths. First, we
code deaths with ICD-10 external cause of injury codes: X40–X44 (accidental
poisoning), X60–64 (intentional self-poisoning), X85 (assault by drugs), and
Y10–Y14 (poisoning) as overdose deaths. Second, we use the drug identification
codes, which provide information about the substances found in the body at
death, to restrict non-synthetic opioid fatalities to those with ICD-10 code
T40.2, and heroin deaths to those with code T40.1. Figure 2 shows the trend
over our period of study for the two series.
Figure 2: Mortality trend
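The two-step coding procedure can be sketched directly from the ICD-10 codes listed above. This is an illustration of the selection logic only, not the exact NVSS processing pipeline:

```python
# Step 1: external-cause codes that mark a death as an overdose.
OVERDOSE_CAUSES = (
    {f"X{i}" for i in range(40, 45)}    # X40-X44 (accidental poisoning)
    | {f"X{i}" for i in range(60, 65)}  # X60-X64 (intentional self-poisoning)
    | {"X85"}                           # X85 (assault by drugs)
    | {f"Y{i}" for i in range(10, 15)}  # Y10-Y14 (poisoning)
)

def classify_death(cause, drug_codes):
    """Step 2: among overdoses, use the drug identification codes for the
    substances found in the body at death. Returns a subset of
    {'opioid', 'heroin'} (possibly both, or empty)."""
    if cause not in OVERDOSE_CAUSES:
        return set()
    kinds = set()
    if "T40.2" in drug_codes:  # non-synthetic opioids (e.g. oxycodone)
        kinds.add("opioid")
    if "T40.1" in drug_codes:  # heroin
        kinds.add("heroin")
    return kinds
```

A single death can count toward both series when both T-codes are present on the certificate.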
The number of opioid fatalities grew over our sample period, from an average
of 600 deaths per month to 1,000 per month. The number of heroin deaths was stable
from 2006 to 2009 at about 200 deaths per month, and then it rose sharply from
2011 to 2015. As we’ve stated in the literature review section, the cause of
the increase in heroin mortality is unclear. While some papers blame the
OxyContin reformulation, there is evidence indicating the availability of
heroin increased substantially after 2010 [30].
Since the number of drug overdose deaths with no drug specified accounts for
between one-fifth and one-quarter of overdose cases ([36]), our measures
of opioid and heroin deaths likely underestimate the true number of deaths.
(Footnote 7: Specifically, we omit ICD-10 code T50.9 (unspecified poisoning)
from our analysis, and some fraction of these deaths are due to opioids or
heroin but were not diagnosed or recorded as such.) However, the
underestimation would not pose a problem for our regressions. There are
variations in how coroners attribute the cause of death across states, but
such variation would be captured by the state fixed effects. In addition, we
do not anticipate systematic changes to each state’s practices due to the
reformulation.
#### 3.1.3 NSDUH and Measuring Misuse
We use state-level data from the National Survey on Drug Use and Health
(NSDUH) to measure nonmedical use of opioids. The NSDUH publishes an annual
measure of OxyContin misuse, asking the respondents whether they have ever
used OxyContin “only for the experience or feeling they caused” (NSDUH
Codebook). As first described in [5], the advantage of the NSDUH misuse
measure is that it separates misuse from medical use. However, only
OxyContin is reported in the NSDUH and there is no equivalent measure for
generic oxycodone.
Fortunately, the NSDUH reports PERCTYL2, which asks whether individuals ever
misused Percocet, Percodan, or Tylox. (Footnote 8: Percocet Drug Information.
Tylox was discontinued in 2012 following the FDA regulations limiting
acetaminophen.)
These drugs are oxycodone hydrochloride with acetaminophen and have a maximum
dosage of 10mg of oxycodone per pill. The three drugs were popular among users
in the pre-OxyContin era [28]. In the present day, the PERCTYL2 variable
captures misuse of not only the three branded drugs but also other generic
oxycodone products that are popular on the street.
The most direct evidence supporting this claim is the fact that generic
oxycodone pills have often been referred to as ‘Percs’ colloquially in the
last decade. Many news reports indicated that generic oxycodone has the street
name ‘Perc 30’ but is in fact not Percocet. The Patriot Ledger reported in a
2011 article (Footnote 9: Patriot Ledger Link. Other references to generic
non-OxyContin oxycodone as Perc 30s: Phoenix House, Washington State Patrol,
The Boston Globe, The Salem News, Massachusetts Court Filing, Cape Cod Times,
Pocono Record, Bangor Daily News, Patch, CNN Op-Ed.) that ‘Perc 30s’ were the
newest drug of choice on the South Shore of Massachusetts, saying:
> ‘Perc 30s are not Percocet — the brand name for oxycodone mixed with
> acetaminophen, the main ingredient in Tylenol — but a generic variety of
> quick-release oxycodone made by a variety of manufacturers. They are
> sometimes referred to as “roxys” after Roxane Laboratories, the first
> company to make the drug, or “blueberries,” because of their color.’
Since many generic oxycodone users would not know the name of the drug they
use other than by its street name, but could distinguish between
immediate-release oxycodone and extended-release OxyContin, it is likely that
they answer affirmatively to misusing Percocet when they are, in fact, using
generic oxycodone. (Footnote 10: In the ARCOS dataset these pills are simply
listed as ‘Oxycodone Hydrochloride 30mg’.)
There are also several empirical observations that support this claim. The
first is that we continue to see increases in the lifetime misuse of Percocet,
Percodan, and Tylox even after they were replaced by OxyContin as the
preferred prescription opioid to misuse. The misuse rate of Percocet,
Percodan, and Tylox increased 30% from 4.1% to 5.6% from 2002 to 2009 (see
Figure 8 in Appendix), which would not have been possible if these drugs, or
what people believed were ‘Percs’, were not actively misused by new users
post-introduction of OxyContin.
The second observation is that, based on the average sales data from 2006 to
2014, a disproportionate number of people have reported misusing Percocet,
Percodan, or Tylox relative to the actual sales of the three drugs. The sales
of Endo Pharma, the manufacturer of Percocet and Percodan (Footnote 11: Tylox
is not included since it was discontinued.), are orders of magnitude less than
the sales of Purdue, while more than twice as many people reported misusing
the three drugs as compared to OxyContin (see Figure 9 in Appendix). A
back-of-the-envelope calculation shows that if PERCTYL2 misuse captures only
the misuse of Percocet and Percodan, then the proportion of pills misused out
of all pills sold is 29 times higher for Percocet and Percodan than the same
proportion for OxyContin (Footnote 12: In terms of the number of pills
circulated, OxyContin is 12.1 times Percocet and Percodan from 2006 to 2014.
In terms of misuse, OxyContin is 41% of Percocet and Percodan in the same
period.), a very unlikely situation given the popularity of OxyContin on the
street.
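The footnote's back-of-the-envelope calculation follows from the two reported ratios alone:

```python
# Ratios reported in the text for 2006-2014:
pills_ratio = 12.1   # OxyContin pills circulated / Percocet+Percodan pills
misuse_ratio = 0.41  # OxyContin misusers / Percocet+Percodan misusers

# (misusers per pill sold) for Percocet/Percodan, relative to the same
# proportion for OxyContin:
relative_misuse_per_pill = pills_ratio / misuse_ratio
print(round(relative_misuse_per_pill, 1))  # 29.5, the "29 times" in the text
```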
This deduction is further supported by misuse data reported in the NSDUH. We
know that generic oxycodone is commonly misused. (Footnote 13: Law enforcement
and journalists have previously identified the 30mg oxycodone pill as the most
commonly trafficked opioid; see DEA Link, ICE Link, and Palm Beach Post Link.)
If oxycodone has any other drug names, the popularity of that drug name in the
NSDUH surveys should increase to reflect the increase in misuse in recent
years. In addition to inquiring about popular brands, the NSDUH survey asks
respondents to list any other prescription oxycodone that they have misused
before. Dozens of pain relievers are reported, but in 2010 “oxycodone or
unspecified oxycodone products” was named by only 0.10% of the respondents.
(Footnote 14: NSDUH Codebook variables ANALEWA through ANALEWE list the other
pain relievers reported. Even if we assumed all 2.49% of respondents saying
they took a prescription pain reliever not listed had taken generic oxycodone,
it would still be less than half of the reported Percocet misuse.) No other
brand of oxycodone pills is reported as commonly misused. We know from the
reports in the press and
documents in court that generic oxycodone is a popular opioid on the street,
and we know that Percocet is the only other commonly misused opioid documented
in the NSDUH survey. Thus, the only way to reconcile the discrepancy between
these two sources is that people mistakenly perceive generic oxycodone as
Percocet or respond to the NSDUH as if they do. Thus, we use lifetime
OxyContin and lifetime Percocet misuse for the construction of OxyContin and
generic oxycodone exposure measures in Section 3.3.
### 3.2 Market Definition and Endogeneity Problems
Previous studies of the OxyContin reformulation depend on state-level
variation to causally identify the impact of the reformulation. Treating
OxyContin reformulation as an exogenous shock at the state level is
potentially problematic. Although the timing of the reformulation is
exogenous, each state’s exposure to it is a result of a combination of the
state’s regulatory environment and Purdue’s initial marketing strategy ([3]).
These factors have substantial impact on how people in a state respond to the
reformulation, creating a hidden link between exposure to the reformulation,
the identifying variation, and subsequent drug use, the outcome variable.
One can limit the impact of endogenous regulation by disaggregation, but only
if there is substantial intra-state variation in exposure to the
reformulation. Both the ARCOS database and the NVSS mortality data have great
geographic detail. Conducting our analysis on metropolitan statistical areas
(MSAs), we find large variation in both OxyContin use and opioid mortality
across MSAs in the same state. At the aggregate level in 2009, the average
OxyContin market share in a state is 35.6%. 65 of the 379 MSAs (17.1%) in our
sample have an OxyContin market share that is 10% greater or smaller than
their state average. The average opioid mortality is 0.343 deaths per 100,000
population in 2008. The variation in deaths is even more significant: more
than 310 MSAs (83%) have a mortality rate 20% higher or lower than their state
average, and more than 192 (51%) have a mortality rate 50% higher or lower
than their state average. We present the full distribution of deviations of
OxyContin market share and opioid mortality from state average in Figure 10
and Figure 11 in the Appendix.
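The dispersion statistics above reduce to a simple computation; here is a sketch with plain dictionaries as hypothetical inputs:

```python
from statistics import mean

def deviation_share(values_by_msa, state_of, threshold):
    """Fraction of MSAs whose value deviates from their state average by
    more than `threshold` (as a relative deviation). `values_by_msa` maps
    MSA -> value; `state_of` maps MSA -> state."""
    by_state = {}
    for msa, v in values_by_msa.items():
        by_state.setdefault(state_of[msa], []).append(v)
    state_avg = {s: mean(vals) for s, vals in by_state.items()}
    deviating = [
        msa for msa, v in values_by_msa.items()
        if abs(v - state_avg[state_of[msa]]) > threshold * state_avg[state_of[msa]]
    ]
    return len(deviating) / len(values_by_msa)
```

Running this on MSA-level mortality with `threshold=0.2` and `threshold=0.5` yields the 83% and 51% shares quoted above.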
Disaggregating to the MSA-level allows us to control for the state’s
regulatory environment and hence eliminate the most problematic source of
endogeneity. We use intra-state variation in exposure to the reformulation for
identification. Intra-state heterogeneity in opioid use is associated with
past economic conditions ([8]), location of hospitals and treatment centers
([40]), preferences of local physicians ([37]), and local policy, some of
which could still be correlated with the locality’s response to the
reformulation. Analysis at the MSA level clearly allows us to make a much
stronger claim than analysis at the state level.
In addition, as we will show in the next sections, the disaggregation
increases the statistical power of our regressions beyond the impact of the
tripled sample size. Our results indicate that defining the market at the MSA
level better captures the interaction between drug use and mortality than the
state level. The important variations in drug use, for example between Los
Angeles-Long Beach-Santa Ana at 4.4% nonmedical use of pain relievers and
San Francisco-Oakland-Fremont at 5.6%, disappear when the data are aggregated
to the state level [1].
### 3.3 OxyContin and Non-OxyContin Oxycodone Exposure
Since the OxyContin reformulation was a national event independent of local
conditions, we can estimate its impact by comparing the outcomes in areas of
high prior exposure to opioids with outcomes in areas of low exposure.
Ideally, we want to quantify exposure using the volume of OxyContin misused in
each region pre-reform while controlling for the volume of generic oxycodone
misused. In practice, we do not observe these quantities. The best proxy in
the literature is the self-reported misuse rate from the NSDUH.
Based on the NSDUH misuse, we create a new measure of OxyContin and non-
OxyContin oxycodone exposure by combining the NSDUH state-level misuse rate
with ARCOS MSA-level sales. Specifically, for each drug, we calculate:
$\text{Exposure}_{m}^{\text{pre-reform}}=\text{Lifetime Misuse}_{s}^{2004-2009}\times\text{Sales}_{m}^{2009}$ (1)
Our measure is the interaction term of sales of OxyContin/generic oxycodone in
an MSA and the lifetime misuse rate of that drug in the corresponding state.
This new measure has two advantages over the conventional misuse rate from
NSDUH: it captures intra-state variation in misuse and it more accurately
reflects the current misuse of both OxyContin and generic oxycodone.
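Equation (1) is a straightforward interaction between a state-level rate and MSA-level sales. A minimal sketch, with all input containers hypothetical:

```python
def exposure(misuse_rate_by_state, sales_by_msa, state_of):
    """Pre-reform exposure per Equation (1): for each drug, the MSA's 2009
    sales interacted with the 2004-2009 lifetime misuse rate of the MSA's
    state. Computed separately for OxyContin and generic oxycodone."""
    return {
        msa: misuse_rate_by_state[state_of[msa]] * sales
        for msa, sales in sales_by_msa.items()
    }
```

Because the state-level rate is constant within a state, all cross-MSA variation in exposure within a state comes from the sales term, exactly as described above.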
The NSDUH surveys approximately 70,000 respondents every year and uses
sophisticated reweighting techniques to get accurate state level estimates.
Once we get to the MSA level, the number of people surveyed as well as the
number of positive responses to questions on opioid misuse are extremely
small. As a result, most of the outcomes at the MSA level are censored by the
NSDUH to protect individual privacy. Using only the survey data means that we
would use the same state misuse value for all MSAs and therefore forgo any
intra-state variation in drug use. In comparison, our proposed measure relies on
deviations from normal sales patterns to generate variations in exposure rates
for the MSAs. Our definition assumes that the percentage of people who report
misusing a particular drug in a state is equivalent to the proportion of sales
that are being misused. In a state where all the MSAs have identical sales,
all the MSAs will have identical exposure rates by definition. However, if one
MSA has higher sales of OxyContin compared with the rest of the state, our
OxyContin exposure measure in that MSA will be higher than the rest of the
state. This construction of exposure mirrors our intuitive understanding that
the misuse of a drug in a locality is a function of the overall misuse and the
availability of that particular drug in the area.
The NSDUH survey (Footnote 15: In all surveys prior to 2014.) reports
past-year misuse of OxyContin but only lifetime misuse of generic oxycodone.
Previous studies did not focus on generic oxycodone misuse, so they rely on
the past-year OxyContin misuse rate. In our case, to disentangle substitution among
prescription opioids, we have to make the comparison between OxyContin and
generic oxycodone equal. Resorting to lifetime misuse rates for both series
sacrifices the timely nature of the NSDUH misuse rates. By combining the
lifetime misuse rates with sales in the year before reformulation, we capture
recent changes in use of both drugs. To make our results comparable with
previous studies, in the Appendix section, we repeat our entire analysis with
OxyContin last-year misuse and generic oxycodone lifetime misuse. Most of our
conclusions stand despite giving OxyContin a more favorable treatment.
To construct our measure, we follow the precedent set in the literature by
using a six-year average state-level lifetime misuse rate pre-reform (2004 -
2009) and sales in 2009. The goal of the time average is to reduce the
variance of the state-level misuse rates. We check the validity of our measure
by regressing opioid death on it and compare the results with the same
regressions on either only ARCOS sales or only NSDUH misuse. Results are
summarized in Table 2 in Appendix. The fit of the generic oxycodone regression
is much improved with the interacted variable ($R^{2}=0.187$) relative to
using only one with NSDUH misuse ($R^{2}=0.062$) or sales ($R^{2}=0.176$). The
improvement is even larger for the OxyContin regression ($R^{2}=0.128$)
relative to using only one with NSDUH ($R^{2}=0.084$) or with sales
($R^{2}=0.086$).
### 3.4 Descriptive Statistics
Table 1: Summary Statistics

| | All MSAs | MSAs with low OxyContin exposure | MSAs with high OxyContin exposure | MSAs with low oxycodone exposure | MSAs with high oxycodone exposure
---|---|---|---|---|---
| NSDUH lifetime misuse rates (2004-2009) | | | | |
| OxyContin misuse rate (%) | 2.22 | 1.88 | 2.56 | 1.87 | 2.56
| Oxycodone misuse rate (%) | 5.19 | 4.22 | 6.17 | 3.75 | 6.64
| Annual ARCOS sales (all sample period) | | | | |
| OxyContin sales per person | 65.71 | 43.47 | 88.06 | 50.70 | 80.79
| Oxycodone sales per person | 181.84 | 112.50 | 251.55 | 99.24 | 264.88
| Annual deaths per 100,000 (all sample period) | | | | |
| Opioid | 0.32 | 0.23 | 0.41 | 0.23 | 0.42
| Heroin | 0.13 | 0.09 | 0.16 | 0.10 | 0.16
| Census demographics (2009) | | | | |
| Number of MSAs | 379 | 190 | 189 | 190 | 189
| Population | 679,878 | 745,327 | 614,082 | 663,740 | 696,101
| Age | 36.13 | 34.68 | 37.59 | 34.84 | 37.43
| Male (%) | 49.24 | 49.35 | 49.13 | 49.40 | 49.08
| Separated (%) | 18.83 | 18.24 | 19.42 | 18.32 | 19.34
| High school and above (%) | 84.20 | 82.79 | 85.61 | 83.68 | 84.72
| Bachelor and above (%) | 25.36 | 24.77 | 25.96 | 24.85 | 25.87
| Mean income | 64,213 | 63,414 | 65,016 | 63,058 | 65,374
| Low income (%) | 35.38 | 35.79 | 34.98 | 35.90 | 34.86
| White (%) | 82.17 | 79.99 | 84.36 | 81.22 | 83.12
| Black (%) | 11.20 | 13.09 | 9.30 | 11.80 | 10.60
| Asian (%) | 3.03 | 3.47 | 2.60 | 3.52 | 2.54
| Native American (%) | 0.18 | 0.20 | 0.17 | 0.20 | 0.17

Note: Simple averages, not weighted by population.
Table 1 reports summary statistics for five groups of MSAs: All MSAs, MSAs
with high OxyContin exposure, MSAs with low OxyContin exposure, MSAs with high
generic oxycodone exposure, and MSAs with low generic oxycodone exposure. MSAs
with high OxyContin exposure and MSAs with high generic oxycodone exposure
have similar demographic summary statistics. These two groups of MSAs are also
not statistically different in their heroin mortality. Disentangling the
impact of various opioids on the rise in heroin mortality is impossible with
nationally aggregated or state level data due to the high correlation in
misuse. The high correlation also implies that regressing heroin death on
OxyContin without controlling for generic oxycodone use will likely lead to an
overestimation of OxyContin’s impact.
MSAs with high misuse differ from MSAs with low misuse. High misuse MSAs
have higher sales of both types of prescription opioids (twice as much for
both types of opioids), higher mortality rate (twice as much for both opioid
and heroin overdose), smaller population, higher average age, higher median
income, higher percentage of white population, and lower percentage of black
population. The differences in racial composition repeat well established
findings in the literature: prescription opioid misuse was originally
concentrated among white users, and by 2010 new heroin users were almost
entirely white [10]. These differences in demographic variables motivate the
inclusion of control variables in our main regressions.
## 4 Empirical Strategies
Our goal is to investigate two questions. First, what was the reformulation’s
immediate impact on OxyContin and generic oxycodone use? Second, what was the
reformulation’s long-run effect on opioid mortality, heroin mortality, and on
the progression of opioid addiction?
We follow the event study framework from [5] to estimate the causal impact of
the OxyContin reformulation on OxyContin and generic oxycodone sales and
opioid and heroin mortality. We exploit variation in MSAs’ exposure to the
reformulation due to the differences in their pre-reform OxyContin use while
controlling for pre-reform generic oxycodone use. Our approach is similar to
[18], where the OxyContin reformulation has more “bite”, or more of an effect,
in areas where OxyContin misuse was higher than in places where generic
oxycodone was the preferred drug. The approach allows us to measure whether
MSAs with higher exposure to OxyContin experienced larger declines in
OxyContin sales, larger increases in alternative oxycodone, or larger
increases in opioid and heroin mortality. The empirical framework is:
$\begin{split}Y_{mt}&=\alpha_{s}+\delta_{t}+\sum_{i=2006}^{2014}\mathbbm{1}\{i=t\}\beta_{i}^{1}\times\text{OxyContin Exp}^{Pre}_{m}\\ &+\sum_{i=2006}^{2014}\mathbbm{1}\{i=t\}\beta_{i}^{2}\times\text{Oxycodone Exp}^{Pre}_{m}+X^{\prime}_{mt}\gamma+\epsilon_{mt}\end{split}$ (2)
where $Y_{mt}$ are the outcome variables of interest in MSA $m$ at year $t$;
$\text{OxyContin Exp}^{Pre}_{m}$ and $\text{Oxycodone Exp}^{Pre}_{m}$ are
time-invariant measures of OxyContin and oxycodone exposure before the
reformulation (see Section 3.3 for construction), and are interacted with a
set of $\beta_{t}^{1}$ and $\beta_{t}^{2}$ for each year. We include state
fixed effects to control for regulatory differences among states and year
fixed effects to control for national changes in drug use. We also include a
full set of MSA-level demographic variables. We weight the regression by
population and exclude Florida. (Footnote 16: The literature excludes Florida
because it underwent massive increases in oxycodone sales over this period,
some of which was trafficked to other states.) We show the full set of $\beta_{t}$
estimates graphically, normalizing by the 2009 coefficient. The $\beta_{t}$
coefficients identify the differences in sales and deaths across MSAs due to
their higher or lower pre-reform OxyContin or oxycodone exposure. Standard errors are
clustered at the MSA level to account for serial correlation. In the Appendix
section, we present beta estimations from variations of our base model, which
include (1) using an MSA fixed effect instead of a state fixed effect, (2)
replacing the OxyContin lifetime misuse rate with the OxyContin last-year
misuse rate, and (3) regressing at the state level, and show that our
conclusions are insensitive to most of these variations.
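A stripped-down version of Equation (2) can be estimated with ordinary least squares. The sketch below omits the state and year fixed effects, demographic controls, population weights, and clustered standard errors of the full specification; it only shows how the year-by-exposure interaction terms are built and the base year normalized:

```python
import numpy as np

def event_study_betas(year, oxycontin_exp, oxycodone_exp, y, base_year=2009):
    """Regress the outcome on year dummies interacted with the two
    time-invariant exposure measures, omitting the base year so its
    coefficients are normalized to zero. Minimal illustration of
    Equation (2), not the paper's full specification."""
    year = np.asarray(year)
    oxycontin_exp = np.asarray(oxycontin_exp, float)
    oxycodone_exp = np.asarray(oxycodone_exp, float)
    cols, labels = [np.ones(len(year))], []
    for t in sorted(set(year.tolist())):
        if t == base_year:  # omitted year: normalizes the coefficients
            continue
        d = (year == t).astype(float)
        cols += [d * oxycontin_exp, d * oxycodone_exp]
        labels += [(t, "oxycontin"), (t, "oxycodone")]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return dict(zip(labels, coef[1:]))
```

In practice these interaction columns would be appended to the fixed-effect and control design matrix before estimation.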
To complement our results, we also use a strict difference-in-differences
framework to estimate the effect of the reformulation conditional on OxyContin
and non-OxyContin oxycodone exposure levels. Our specification is:
$\begin{split}Y_{mt}&=\alpha_{s}+\gamma_{t}+\delta_{1}\mathbbm{1}\{t>2010\}+\delta_{2}\mathbbm{1}\{m\in HighOxyContin\}+\delta_{3}\mathbbm{1}\{m\in HighOxycodone\}\\ &+\delta_{4}\mathbbm{1}\{t>2010\}\times\mathbbm{1}\{m\in HighOxyContin\}+\delta_{5}\mathbbm{1}\{t>2010\}\times\mathbbm{1}\{m\in HighOxycodone\}+X^{\prime}_{mt}\beta+\epsilon_{mt}\end{split}$ (3)
where $HighOxyContin$ and $HighOxycodone$ are the set of MSAs with higher than
median pre-reform exposure to OxyContin and oxycodone respectively. We
restrict the regression to include only the three years prior (2008 to 2010)
and the three years after (2011 to 2013) the reformulation. The advantage of
this specification is that it does not assume that OxyContin or oxycodone
exposure affects the outcome variable linearly. Instead of having a flexible
$\delta$ for each year, we have only one $\delta$ for each of the pre- or
post-reform period. In this specification, we simply test whether higher
exposure MSAs reacted differently to the reformulation as compared to lower
exposure MSAs (if $\delta_{4}$ and $\delta_{5}$ are significant). We include
state fixed effects to control for state-level heterogeneity, year fixed
effects for national trend, and a set of time-varying MSAs level covariates.
Again, standard errors are clustered at the MSA level.
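The difference-in-differences terms of Equation (3) can be sketched the same way; again this omits the fixed effects and covariates of the full specification and serves only to show which interaction coefficients carry the test:

```python
import numpy as np

def did_deltas(post, high_oxycontin, high_oxycodone, y):
    """OLS on the indicator structure of Equation (3), without fixed
    effects or covariates. Returns delta_4 and delta_5, the
    post-reform x high-exposure interaction coefficients."""
    post = np.asarray(post, float)
    hc = np.asarray(high_oxycontin, float)
    hx = np.asarray(high_oxycodone, float)
    X = np.column_stack(
        [np.ones_like(post), post, hc, hx, post * hc, post * hx]
    )
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return {"delta_4": coef[4], "delta_5": coef[5]}
```

Significant `delta_4` or `delta_5` estimates indicate that high-exposure MSAs reacted differently to the reformulation than low-exposure MSAs, as described above.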
## 5 Results
We proceed in two steps. First, we provide direct evidence that the OxyContin
reformulation caused OxyContin sales to decrease and generic oxycodone sales
to increase, and that the changes in sales are proportional to the pre-
reformulation level of OxyContin exposure. Second, we estimate the impact of
the reformulation on opioid and heroin mortality. We find that high pre-
reformulation levels of OxyContin exposure were not associated with high
opioid deaths, but there was a strong positive effect from generic oxycodone
exposure in both the pre- and post-reform period. We find that higher pre-
reform OxyContin and pre-reform oxycodone exposure were both positively but
not significantly associated with later heroin deaths, but the oxycodone
coefficient is larger. If we run the heroin regression separately with only
OxyContin exposure we recover the results of the literature, but running the
heroin regression with only oxycodone exposure better fits the data.
### 5.1 Reformulation’s Impact on Opioid Sales
We begin by showing graphically that OxyContin sales decreased and generic
oxycodone sales increased in high OxyContin misuse MSAs immediately after
reformulation. Figure 3 and Figure 4 present the full set of coefficients from
estimating the event study framework on OxyContin and generic oxycodone sales.
Each data point in the figure is the coefficient of the interactive term of
misuse and sale, which we call exposure, for OxyContin or generic oxycodone in
a specific year, and it captures any additional change in sales in that year
driven by high OxyContin or oxycodone exposure. In Figure 3, we observe a
larger decrease in OxyContin sales post-reform in MSAs with higher pre-reform
OxyContin exposure. As Figure 4 shows, higher OxyContin exposure MSAs saw
greater increases in generic oxycodone sales post-reform. The effects are well
identified at the 95% confidence level. A one standard deviation increase in
OxyContin exposure translates into an additional 21.2 MME decrease in
per-person OxyContin sales and an 11.8 MME increase in per-person oxycodone
sales in 2011. These changes represent a 24% decrease in OxyContin sales and
an 8.8% increase in oxycodone sales from the 2009 level. The effects are economically
significant especially given that the reformulation should only affect the
population abusing OxyContin, so this drop in sales is driven by a fraction of
all users. The two observations combined support the hypothesis that
reformulation caused substantial substitution from OxyContin to generic
oxycodone.
Figure 3: Main regression on OxyContin sales. Shaded regions are the 95
percent confidence intervals with standard errors clustered at the MSA level.
Figure 4: Main regression on generic oxycodone sales. Shaded regions are the
95 percent confidence intervals with standard errors clustered at the MSA
level.
Figure 3 also documents that high pre-reform oxycodone misuse MSAs saw large
increases in OxyContin sales right after the reformulation. This phenomenon
has not been reported previously, but would be consistent with [37]’s physician
benevolence hypothesis where good physicians switch patients from oxycodone to
reformulated OxyContin to lower the future risk of abuse. Although the switch
toward OxyContin is smaller in magnitude than the switch from OxyContin, this
increase is the first documented positive impact of the OxyContin
reformulation in the literature. It seems both physicians and users saw the
two types of drugs as substitutes. Unfortunately, there are not enough MSAs
where the switch toward OxyContin is significant enough that it cancels out
the switch away from OxyContin to examine the possible substitution channel in
the other direction.
Because we include both OxyContin and generic oxycodone misuse in the same
regression, we can separate out the increases in oxycodone sales due to its
own popularity from the increases due to spillover effects from the OxyContin
reformulation. Figure 4 shows increasing growth in oxycodone sales in MSAs
with higher oxycodone misuse until 2011, and the growth rate declined after.
The smoothness of the oxycodone curve indicates that the OxyContin
reformulation had no impact on how oxycodone misuse affected oxycodone sales.
This trend corresponds well with many states tightening control over opioid
prescription policies in 2011 and 2012 in response to rising sales and
increased awareness of opioid misuse.
Another way of estimating the impact of the reformulation is through
difference-in-difference regressions. Column (1) of Table 3 in the Appendix shows
the regression on OxyContin sales. OxyContin sales in all MSAs decreased by
8.05 MME post-reform, a 9.4% decrease with respect to the average per person
sales of 85.6 MME in 2009. High OxyContin misuse MSAs had a higher level of
OxyContin sales to start with, but experienced an additional 15.1 MME drop (an
additional 17% decrease) post-reform. Given that only 2.46% of the population
ever misused OxyContin (NSDUH, 2010) and the reformulation only affected the
people misusing it, a 17% additional decrease in all OxyContin sales would
translate into a very significant decrease in sales to the population that
misuses it. The negative and significant Post $\times$ High OxyContin
coefficient confirms previous findings that high OxyContin exposure MSAs saw
larger decreases in OxyContin sales post-reform.
Column (2) of the same table reports the regression on generic oxycodone
sales. Generic oxycodone sales per person increased 41.7 MME in the post
period, a 31.2% increase with respect to the average per person alternative
oxycodone sales of 133.5 MME in 2009. High OxyContin misuse MSAs experienced
an additional 10.3 MME increase, which translates to a 68% conversion from
OxyContin to generic oxycodone in those areas. Combining the findings from
columns (1) and (2), we see direct substitution from OxyContin to generic
oxycodone in local sales immediately after reformulation, and the substitution
pattern is more pronounced in MSAs with high OxyContin exposure as expected.
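The core of the Post $\times$ High OxyContin comparison can be sketched as a minimal 2x2 difference-in-difference: the change in mean sales for high-exposure MSAs minus the change for low-exposure MSAs. This is illustrative only; the paper's actual estimation adds MSA-level controls, state and year fixed effects, and MSA-clustered standard errors.

```python
from statistics import mean

def did_estimate(rows):
    """rows: (sales, post, high) tuples, with post and high in {0, 1}."""
    def cell(p, h):
        # Mean outcome within one of the four post/high cells.
        return mean(s for s, post, high in rows if post == p and high == h)
    # (change for treated group) minus (change for comparison group).
    return (cell(1, 1) - cell(0, 1)) - (cell(1, 0) - cell(0, 0))

# Toy data (hypothetical): the high-exposure group drops 15 MME more post-reform.
toy = [(100.0, 0, 1), (77.0, 1, 1), (60.0, 0, 0), (52.0, 1, 0)]
print(did_estimate(toy))  # -15.0
```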
To help our readers visualize the trend of OxyContin and alternative oxycodone
sales, in Figure 12 in the Appendix, we break all MSAs into three bins by the
magnitude of the observed drop in OxyContin sales due to the reform. Then, we
plot the per person OxyContin and generic oxycodone sales for each of the
three groups. By definition, the high empirical drop group experienced the
largest decreases in OxyContin sales from 2009 to 2011 (-29%) and the low drop
group experienced an increase in OxyContin sales (+15%). Sales of generic
oxycodone started at different levels, but shared the same growth rate until
the reformulation in 2010. Since 2010, the higher the empirical drop in
OxyContin, the faster the growth in generic oxycodone. The high group saw a
72 MME increase (46% from 2009) in generic oxycodone sales, while the low
group saw only a 29 MME increase (29% from 2009). The high growth rate of
generic oxycodone in high-drop MSAs supports the substitution story. The post-
reform level of OxyContin sales converges to the same level for all three
groups, suggesting that the remaining sales most likely represent non-
replaceable demand for medical OxyContin use.
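The grouping behind this exercise can be sketched as follows (assumed data layout, not the authors' code): rank MSAs by the observed 2009-to-2011 drop in per-person OxyContin sales and split them into three equal-sized bins.

```python
def tercile_bins(drops):
    """drops: dict mapping MSA -> (sales_2009 - sales_2011); larger = bigger drop."""
    ranked = sorted(drops, key=drops.get, reverse=True)  # largest drop first
    n = len(ranked)
    bins = {}
    for i, msa in enumerate(ranked):
        bins[msa] = "high" if i < n // 3 else ("mid" if i < 2 * n // 3 else "low")
    return bins

# Toy example with six hypothetical MSAs, two per bin.
drops = {"a": 30, "b": 25, "c": 10, "d": 5, "e": -2, "f": -10}
```

Group means of OxyContin and generic oxycodone sales by year, computed within each bin, would then reproduce the series plotted in Figure 12.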
### 5.2 Reformulation and Opioid and Heroin Mortality
Next, we estimate the impact of the reformulation on overdose mortality. In
Figure 5, we report the full set of coefficients from estimating the event
study framework on opioid mortality. Each data point in the figure is the
coefficient of the interactive term of misuse and sale for OxyContin or
generic oxycodone in a specific year, and it captures any additional change in
opioid mortality in that year driven by high OxyContin or oxycodone exposure.
The OxyContin coefficients are never significant, suggesting higher pre-reform
OxyContin misuse is not predictive of either higher or lower opioid death
post-reform. The lack of any trend indicates that any benefit of the OxyContin
reformulation on reducing OxyContin consumption is offset by the substitution
to generic oxycodone. In aggregate, the reformulation had no impact on non-
heroin opioid deaths.
Figure 5: Main regression on opioid mortality. Shaded regions are the 95
percent confidence intervals with standard errors clustered at the MSA level.
Figure 6: Main regression on heroin mortality. Shaded regions are the 95
percent confidence intervals with standard errors clustered at the MSA level.
In Figure 6, we report the event study coefficients on heroin mortality. Again
the OxyContin coefficients are tiny and insignificant, while the oxycodone
coefficients grow over time but never reach statistical significance at
conventional levels. The lack of statistical significance is due to the small
number of heroin mortalities in the whole sample and the high correlation
between OxyContin and oxycodone exposure. When we run the OxyContin and
oxycodone regressions separately (see Figure 34 and Figure 38 in the Appendix),
oxycodone exposure has a much larger and more significant impact on heroin
mortality. The results provide tentative evidence that the higher the generic
oxycodone exposure in an MSA, the higher the increases in heroin mortality.
However, the results do not support the alternative hypothesis that the
OxyContin reformulation was solely responsible for the increase in heroin
mortality.
The difference-in-difference results mirror our findings from the event study
framework. Column (3) of Table 3 in the Appendix suggests that opioid deaths are
0.08 higher in high oxycodone exposure MSAs, which is equivalent to 27% of the
average opioid overdose of 0.29 per 10,000 people in 2009. Opioid mortality is
0.05 lower (17% of the 2009 average) in higher OxyContin exposure MSAs after
controlling for oxycodone use. Higher OxyContin exposure does not lead to
higher or lower opioid overdose post-reform, while higher generic oxycodone
exposure is associated with 0.06 (20.6% of the 2009 average) more opioid
deaths in the post period.
Column (4) of the same table reports the difference-in-difference regression
on heroin death. Heroin mortality has increased by 0.14 in the post period in
all MSAs, which is equivalent to a 111% increase from the average 2009 level
of 0.126 heroin deaths per 10,000 people. High OxyContin exposure MSAs did
not experience additional jumps in heroin mortality, while high oxycodone
exposure MSAs did experience an additional 0.07 (56% with respect to the 2009
average) increase in deaths. Again, the evidence from the difference-in-
difference regressions indicates that OxyContin was not responsible for the
rise in heroin mortality.
In Figure 13 in the Appendix, we show the average trend of the opioid and
heroin mortality for groups with high, medium and low observed drop in
Oxycontin sales. If the reformulation was responsible for the subsequent
heroin epidemic, then the MSAs most likely to have additional jumps in
heroin mortality would be the MSAs with the largest OxyContin drop. As shown
in the figure, the three groups went through the same explosive growth in
heroin mortality (around 38% from 2009 to 2011, and similar rate afterward),
indicating the rise in heroin was independent of the decrease in OxyContin
sales. This evidence conclusively rejects the hypothesis that the OxyContin
reformulation is solely responsible for the subsequent heroin epidemic.
### 5.3 Discussion
(A) The Reformulation’s Impact on Opioid Mortality
Until now, the literature has found mixed results for the effects of the
OxyContin reformulation on opioid mortality. In contrast to previous work, we
find no statistically significant impact of the reformulation on opioid
mortality as a result of substantial substitutions from OxyContin to generic
oxycodone post-reform. Increases in generic oxycodone sales compensated for
55% of the drop in OxyContin sales in high OxyContin misuse MSAs in our event
study framework, and 68% in our difference-in-difference estimation. Opioid
mortality continued to increase in the post-reform period, but the increase
was not driven by high OxyContin exposure.
(B) The Reformulation’s Impact on Heroin Mortality
Our results stand in direct contrast to the findings of the literature.
Instead of being the event that precipitated the heroin epidemic, the
OxyContin reformulation shifted misuse to other opioids, of which heroin was
only one. We cannot refute the hypothesis that some OxyContin users switched
to heroin due to the reformulation. Our analysis refutes the hypothesis that
the reformulation was the sole cause of the heroin epidemic. Instead of
OxyContin misuse, we identified generic oxycodone misuse as a much more
powerful driver of increases in heroin mortality post-2011. What prompted the
increases in heroin use is still an unresolved question. Previous research has
suggested an increase in the supply of heroin ([30]) around this time, as well
as crackdowns in Florida on pill-mills reducing the supply of oxycodone
([22]).
(C) Bridging the Differences between our Findings and the Literature
One contribution of this paper is to shed light on a hidden source of opioid
misuse: the misuse of generic oxycodone. This segment of
prescription opioids was overlooked by other scholars because of OxyContin’s
dominance in opioid misuse in the early years as well as, we argue, the lack
of identifiable brand names for the generic products. Empirical studies based
on market data or interviews of opioid users noted that many people misused
generic oxycodone products ([31], [21]). Leaving out oxycodone misuse, an
important driver of opioid and heroin mortality that is positively correlated
with OxyContin misuse, would produce spurious regression results.
To show that the difference in findings is not driven by our constructed
misuse measure, or our choice of framework, we test whether we can reproduce
findings in the literature by running all of our regressions using only
OxyContin (see Section 7.4.4 in the Appendix). Our OxyContin misuse exposure
individually predicts an increase in opioid and heroin mortality post-reform
as the literature claims. This finding is the basis of previous studies
supporting the claim that the OxyContin reformulation is the main cause of the
subsequent heroin epidemic. However, when we run the same set of regressions
using only generic oxycodone (see Section 7.4.5), we reproduce the same
findings. The only way to differentiate the impact of OxyContin from that
of generic oxycodone is to include both in the same regressions. Variations in
local OxyContin and oxycodone exposure allow us to identify the impact of both
series, if any exist. As we’ve shown in our main regressions, the impact of
OxyContin on heroin disappears after controlling for the effect of generic
oxycodone.
(D) Market Definition
A second contribution of this paper is a finer definition of the opioid
market. It is important to consider what we gain from disaggregating to
the MSA level. The specific OxyContin market share in a state is endogenous to
a great many things, including advertising ([41]) and triplicate status ([3]).
Although the OxyContin reformulation was an exogenous shock, its
interpretation is made very complicated because its impact depended on each
state’s regulatory history and prescribing environment. We do our regressions
at the MSA level, where there are unobserved local conditions that affected
sales of OxyContin and generic oxycodone, while controlling for state-level
laws and restrictions. By comparing two different MSAs with the same
regulatory environment but different exposures to the reformulation, we can
get at the marginal effects of OxyContin and generic oxycodone exposure.
Compared with the state-level regression estimates (see Section 7.4.3), our
main results are larger in magnitude and more statistically significant, and
the MSA-level estimates of the effect of exposure on mortality are more
stable.
(E) Definition of OxyContin Misuse
The literature relies on NSDUH’s OxyContin past-year misuse. To make our
findings comparable with previous studies and robust to the choice of misuse
measure, we repeat our entire analysis with OxyContin last-year misuse and
generic oxycodone lifetime misuse (see Section 7.4.2 for results). As noted in
Section 3.3, using last-year OxyContin misuse gives an unfair advantage to
OxyContin due to the timeliness of the measure. If our findings on oxycodone
persist despite the unequal treatment of the two misuse measures, then it is a
stronger indication of the essential role generic oxycodone played in the
opioid and heroin epidemic.
Comparing the two sets of results, we observe the same decline in OxyContin
sales and increase in generic oxycodone sales, although smaller in magnitude.
Both sets of coefficients on opioid mortality become positive but
insignificant. Finally, comparing the heroin result, at the state level we do
detect a positive effect on heroin mortality from OxyContin. In aggregate, our
results lose some significance when we replace lifetime OxyContin misuse with
last-year OxyContin misuse. The loss of significance, however, is in the
direction predicted by the unfair advantage given to OxyContin. This exercise
highlights the importance of treating the two misuse measures equally. When we
use measures that more accurately capture recent OxyContin misuse than recent
generic oxycodone misuse, we could mistakenly attribute effects of generic
oxycodone to OxyContin.
## 6 Conclusion
Researchers have attributed the prescription opioid crisis and the recent
increase in heroin use to OxyContin. Previous studies have documented how
Purdue Pharma’s marketing downplayed the risks of OxyContin’s abuse potential,
which fomented the prescription opioid crisis; recent studies identified the
OxyContin reformulation as the event that pushed users to switch to heroin,
which precipitated the recent increase in heroin use. This paper revisits the
roles OxyContin and the OxyContin reformulation played in the opioid crisis
with fine-grained sales data that includes OxyContin’s most immediate
substitute, generic oxycodone. We have three main findings.
First, we find direct evidence of substitution from OxyContin to generic
oxycodone post-reformulation. Our difference-in-difference estimation
indicates a 68% substitution from OxyContin to generic oxycodone due to the
reform. Looking at the decline in OxyContin sales and rise in generic
oxycodone sales from 2002-2006, we believe this substitution (for different
reasons, namely Purdue’s loss of its patent) also happened years before the
reformulation. The size of this substitution, and indeed the size of the
generic oxycodone market pre-reform, may come as a surprise to researchers.
[31] estimate that in 2002 OxyContin’s market share was 68%. By the time of
the reformulation in 2010, it had fallen by more than half. OxyContin played
an essential part in igniting the prescription opioid crisis but, after losing
its patent in 2004, other companies took up the torch and surpassed Purdue by
selling generic oxycodone.
Our second main finding is that the OxyContin reformulation had no overall
effect on opioid mortality. In our estimation, the OxyContin coefficients are
not significant in the entire sample period, suggesting that higher OxyContin
exposure is not predictive of either higher or lower opioid death. The lack of
any trend indicates that the benefits of the OxyContin reformulation, if any
exist, are offset by substitution to oxycodone. In addition, we do find that
high oxycodone exposure is predictive of a rise in opioid mortality from 2011,
confirming the increasingly important role of generic oxycodone in the recent
prescription opioid crisis.
Third and most importantly, we show that the heroin overdose deaths after 2010
were predicted by generic oxycodone exposure, not OxyContin exposure. Our main
event-study model shows positive and significant effects from oxycodone
exposure on heroin deaths after 2012, but OxyContin exposure is not predictive
of heroin deaths once we control for oxycodone. The difference-in-difference
results are similar, showing that oxycodone exposure was predictive of heroin
deaths before or after the reformulation, and OxyContin exposure after the
reformulation is weakly positive but not statistically significant. We also do
not observe an additional rise in heroin deaths immediately after
reformulation in areas where OxyContin sales declined the most post-
reformulation. In particular, without including generic oxycodone in the
analysis, we recover the same results from the literature that OxyContin was
responsible for the rise in heroin deaths. The evidence shows that omitting
oxycodone, an important substitute to OxyContin, produces erroneous results.
This paper demonstrates the pernicious effects of generic oxycodone, which
had escaped scrutiny until the Washington Post acquired the sales data and
reported on it.
## References
* [1] “2005-2010 NSDUH MSA Detailed Tables”, 2012 URL: https://web.archive.org/web/20130616122357/http://www.samhsa.gov/data/NSDUHMetroBriefReports/index.aspx
* [2] Jeremy A Adler and Theresa Mallick-Searle “An overview of abuse-deterrent opioids and recommendations for practical patient care” In _Journal of multidisciplinary healthcare_ 11 Dove Press, 2018, pp. 323
* [3] Abby Alpert, William N. Evans, Ethan M.J. Lieber and David Powell “Origins of the Opioid Crisis and Its Enduring Impacts”, Working Paper Series 26500, 2019 DOI: 10.3386/w26500
* [4] Abby Alpert, David Powell and Rosalie Liccardo Pacula “Supply-Side Drug Policy in the Presence of Substitutes: Evidence from the Introduction of Abuse-Deterrent Opioids”, Working Paper Series 23031, 2017 DOI: 10.3386/w23031
* [5] Abby Alpert, David Powell and Rosalie Liccardo Pacula “Supply-side drug policy in the presence of substitutes: Evidence from the introduction of abuse-deterrent opioids” In _American Economic Journal: Economic Policy_ 10.4, 2018, pp. 1–35
* [6] Sairam Atluri, G Sundarshan and Laxmaiah Manchikanti “Assessment of the trends in medical use and misuse of opioid analgesics from 2004 to 2011” In _ASIPP_ 17, 2014, pp. E119–E28
* [7] Edward R Bloomquist “The addiction potential of oxycodone (Percodan®)” In _California medicine_ 99.2 BMJ Publishing Group, 1963, pp. 127
* [8] Christopher S Carpenter, Chandler B McClellan and Daniel I Rees “Economic conditions, illicit drug use, and substance use disorders in the United States” In _Journal of Health Economics_ 52 Elsevier, 2017, pp. 63–73
* [9] Theresa A Cassidy et al. “Changes in prevalence of prescription opioid abuse after introduction of an abuse-deterrent opioid formulation” In _Pain Medicine_ 15.3, 2014, pp. 440–451
* [10] Theodore J Cicero, Matthew S Ellis, Hilary L Surratt and Steven P Kurtz “The changing face of heroin use in the United States: a retrospective analysis of the past 50 years” In _JAMA psychiatry_ 71.7 American Medical Association, 2014, pp. 821–826
* [11] Theodore Cicero and Matthew Ellis “Abuse-Deterrent Formulations and the Prescription Opioid Abuse Epidemic in the United States: Lessons Learned From OxyContin” In _JAMA Psychiatry_ 72.5, 2015, pp. 424–430 DOI: 10.1001/jamapsychiatry.2014.3043
* [12] Theodore Cicero, Matthew Ellis and Hilary Surratt “Effect of abuse-deterrent formulation of OxyContin” In _New England Journal of Medicine_ 367.2 Mass Medical Soc, 2012, pp. 187–189
* [13] Wilson M Compton, Christopher M Jones and Grant T Baldwin “Relationship between nonmedical prescription-opioid use and heroin use” In _New England Journal of Medicine_ 374.2 Mass Medical Soc, 2016, pp. 154–163
* [14] Paul M Coplan et al. “Changes in oxycodone and heroin exposures in the National Poison Data System after introduction of extended-release oxycodone with abuse-deterrent characteristics” In _Pharmacoepidemiology and drug safety_ 22.12 Wiley Online Library, 2013, pp. 1274–1282
* [15] Nabarun Dasgupta et al. “Observed transition from opioid analgesic deaths toward heroin” In _Drug and alcohol dependence_ 145 Elsevier, 2014, pp. 238–241
* [16] Sarah DeWeerdt “Tracing the US opioid crisis to its roots.” In _Nature_ 573.7773 Nature Publishing Group, 2019, pp. S10–S10
* [17] William Evans, Ethan Lieber and Patrick Power “How the reformulation of OxyContin ignited the heroin epidemic” In _Review of Economics and Statistics_ 101.1 MIT Press, 2019, pp. 1–15
* [18] Amy Finkelstein “The aggregate effects of health insurance: Evidence from the introduction of Medicare” In _The quarterly journal of economics_ 122.1 MIT Press, 2007, pp. 1–37
* [19] Jennifer R Havens et al. “The impact of a reformulation of extended-release oxycodone designed to deter abuse in a sample of prescription opioid abusers” In _Drug and alcohol dependence_ 139 Elsevier, 2014, pp. 9–17
* [20] Lon R Hays “A profile of OxyContin addiction” In _Journal of Addictive Diseases_ 23.4 Taylor & Francis, 2004, pp. 1–9
* [21] James Inciardi et al. “The “black box” of prescription drug diversion” In _Journal of addictive diseases_ 28.4 Taylor & Francis, 2009, pp. 332–347
* [22] Alene Kennedy-Hendricks et al. “Opioid overdose deaths and Florida’s crackdown on pill mills” In _American journal of public health_ 106.2 American Public Health Association, 2016, pp. 291–297
* [23] Pamela TM Leung et al. “A 1980 letter on the risk of opioid addiction” In _New England Journal of Medicine_ 376.22 Mass Medical Soc, 2017, pp. 2194–2195
* [24] Beth Macy “Dopesick: Dealers, doctors, and the drug company that addicted America” Little, Brown, 2018
* [25] Justine Mallatt “The effect of prescription drug monitoring programs on opioid prescriptions and heroin crime rates” In _Available at SSRN 3050692_ , 2018
* [26] Sarah G Mars et al. ““Every ‘never’ I ever said came true”: transitions from opioid pills to heroin injecting” In _International Journal of Drug Policy_ 25.2 Elsevier, 2014, pp. 257–266
* [27] Barry Meier “In guilty plea, OxyContin maker to pay $600 million” In _New York Times_ 10, 2007
* [28] Barry Meier “Pain Killer: A ‘Wonder’ Drug’s Trail of Addiction and Death” Rodale, 2003
* [29] F Modarai et al. “Relationship of opioid prescription sales and overdoses, North Carolina” In _Drug and alcohol dependence_ 132.1-2 Elsevier, 2013, pp. 81–86
* [30] Julie O’Donnell, Matthew Gladden and Puja Seth “Trends in deaths involving heroin and synthetic opioids excluding methadone, and law enforcement drug product reports, by census region—United States, 2006–2015” In _MMWR. Morbidity and mortality weekly report_ 66.34 Centers for Disease Control and Prevention, 2017, pp. 897
* [31] Leonard J Paulozzi and George W Ryan “Opioid analgesics and rates of fatal drug poisoning in the United States” In _American journal of preventive medicine_ 31.6 Elsevier, 2006, pp. 506–511
* [32] Joseph Pergolizzi, Robert Raffa, Robert Taylor Jr and Steven Vacalis “Abuse-deterrent opioids: an update on current approaches and considerations” In _Current medical research and opinion_ 34.4 Taylor & Francis, 2018, pp. 711–723
* [33] Wm F Quinn “Percodan on Triplicate—The Background of the New Law” In _California medicine_ 103.3 BMJ Publishing Group, 1965, pp. 212
* [34] Sam Quinones “Dreamland: The true tale of America’s opiate epidemic” Bloomsbury Publishing USA, 2015
* [35] Bob Rappaport “APPLICATION NUMBER: 22-272 - MEDICAL REVIEW(S)”, 2009 URL: https://www.accessdata.fda.gov/drugsatfda_docs/nda/2010/022272s000MedR.pdf
* [36] Christopher J Ruhm “Geographic variation in opioid and heroin involved drug poisoning mortality rates” In _American journal of preventive medicine_ 53.6 Elsevier, 2017, pp. 745–753
* [37] Molly Schnell “Physician behavior in the presence of a secondary market: The case of prescription opioids”, 2017
* [38] Harvey A Siegal, Robert G Carlson, Deric R Kenne and Maria G Swora “Probable relationship between opioid abuse and heroin use” In _American family physician_ 67.5, 2003, pp. 942
* [39] Beth Sproule, Bruna Brands, Selina Li and Laura Catz-Biro “Changing patterns in opioid addiction: characterizing users of oxycodone and other opioids” In _Canadian Family Physician_ 55.1 The College of Family Physicians of Canada, 2009, pp. 68–69
* [40] Isaac D Swensen “Substance-abuse treatment and mortality” In _Journal of Public Economics_ 122 Elsevier, 2015, pp. 13–30
* [41] Art Van Zee “The promotion and marketing of oxycontin: commercial triumph, public health tragedy” In _American journal of public health_ 99.2 American Public Health Association, 2009, pp. 221–227
* [42] Lynn Webster “Update on abuse-resistant and abuse-deterrent approaches to opioid formulations” In _Pain Medicine_ 10.suppl_2 Blackwell Publishing Inc Malden, USA, 2009, pp. S124–S133
## 7 Appendix
### 7.1 Additional Information
Note: We compute market share based on the average of 2006-2014 sales data. We
kept only the top twenty manufacturers for better readability of the table.
The remaining 35 manufacturers combined contribute 0.18% of total sales.
During this sample period, Purdue Pharma was the dominant manufacturer of
high-dosage oxycodone pills ($\geq$ 40mg). In the lower-dosage market, three
manufacturers (SpecGx, Actavis Pharma and Par Pharmaceutical) had higher
shares of the market than Purdue Pharma.
Figure 7: Market share of different opioid manufacturers
Note: The figure shows the misuse rate of OxyContin (OXYFLAG or OXYCONT2) and
the misuse rate of Percocet, Percodan and Tylox (PERCTYL2). Data obtained from
annual NSDUH. Percocet was a popular prescription oxycodone to misuse in the
pre-OxyContin period. We see in this graph that the PERCTYL2 misuse rate
increased 30% from 2002 to 2009, suggesting that the lifetime misuse rate
captures more than historical Percocet, Percodan and Tylox misuse.
Figure 8: NSDUH national lifetime misuse rate
Note: This graph shows the difference in oxycodone sales between Purdue and
Endo Pharma. The small market share of Endo Pharma leads us to believe that
individuals misreport the drugs they consume on the NSDUH.
Figure 9: Comparison of Sales of Purdue and Endo Pharma
Note: Left is the absolute difference in market share (0.1 means that MSA
share is 10% higher than the state average) and right is percentage difference
(10% means that MSA share is 1.1 times the state average).
Figure 10: Within-state variation in OxyContin Market Share
Note: Left is the absolute difference in opioid mortality (0.1 means that MSA
mortality per 10,000 people is 0.1 higher than the state average) and right is
percentage difference (10% means that MSA mortality per 10,000 people is 1.1
times the state average)
Figure 11: Within-state variation in opioid mortality
Note: We categorized all MSAs into high, mid, and low by the drop in the
observed per person OxyContin sales from 2009 to 2011. The series are
population weighted and Florida is excluded. The high group saw a 30% drop in
OxyContin sales, mid group a 3.9% drop, and low group a 15% increase. The high
group experienced a 46% increase in generic oxycodone sales, mid group a 34%
increase, and low group a 29% increase. The three groups share similar
oxycodone growth trends until the reformulation.
Figure 12: Opioid sales by empirical OxyContin drop
Note: As in the previous figure, we categorized all MSAs into high, mid, and
low groups by the drop in observed per person OxyContin sales from 2009 to
2011. The series are population weighted and Florida is excluded. There is no
trend break in opioid mortality in the high-drop group. The high group saw a
35% increase in heroin mortality, the mid group 38%, and the low group 37%.
The similar increases in heroin mortality post-reform indicate that drops in
OxyContin use post-reform did not lead to additional increases in heroin use.
Figure 13: Opioid mortality by empirical OxyContin drop
### 7.2 Tables
Table 2: Testing Constructed Exposure Measure Against Opioid Mortality
| Opioid overdose deaths per 100,000
---|---
| OxyContin | Generic Oxycodone
| (1) | (2) | (3) | (4) | (5) | (6)
NSDUH misuse | 10.235 | | | 2.909 | |
| (1.719) | | | (0.570) | |
ARCOS sales | | 0.001 | | | 0.001 |
| | (0.0002) | | | (0.0001) |
Combined exposure | | | 0.093 | | | 0.087
| | | (0.012) | | | (0.009)
Number of observations | 379 | 379 | 379 | 379 | 379 | 379
R-square | 0.086 | 0.089 | 0.130 | 0.065 | 0.178 | 0.189
Adjusted R-square | 0.084 | 0.086 | 0.128 | 0.062 | 0.176 | 0.187
Notes: Standard errors are in parentheses. We report coefficients from OLS
regressions of opioid mortality on misuse, sales, or exposure. NSDUH misuse
rate is the 6-year average OxyContin or Percocet lifetime misuse rate from the
pre-reform period (2004-2009). ARCOS sales is OxyContin or generic oxycodone
sales per person in 2009. Combined exposure is the product of the previous
two measures, normalized (see equation 1). Overdose deaths are from 2009.
Regressions are weighted by MSA population.
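The combined exposure measure described in the notes can be sketched as follows. This is a hedged illustration: the product of the NSDUH misuse rate and ARCOS per-person sales is taken from the notes, but the exact normalization in equation (1) is not reproduced here, and a z-score standardization across MSAs is assumed purely for illustration.

```python
from statistics import mean, pstdev

def combined_exposure(misuse_rates, sales_per_person):
    # Product of the two pre-reform measures, one value per MSA...
    products = [m * s for m, s in zip(misuse_rates, sales_per_person)]
    # ...then standardized so coefficients are per standard deviation
    # (assumed normalization, not necessarily the paper's equation (1)).
    mu, sd = mean(products), pstdev(products)
    return [(p - mu) / sd for p in products]
```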
Table 3: Difference in difference regression results
| Opioid sales per person | Overdose per 10,000
---|---|---
| OxyContin | Oxycodone | Opioid | Heroin
| (1) | (2) | (3) | (4)
Post | -8.05 | 41.74 | 0.01 | 0.14
| (2.86) | (4.92) | (0.02) | (0.02)
High OxyContin | 47.24 | 56.46 | -0.05 | -0.07
| (5.78) | (13.36) | (0.03) | (0.02)
High Oxycodone | 26.84 | 95.90 | 0.14 | 0.08
| (6.66) | (15.47) | (0.04) | (0.05)
Post x High OxyContin | -15.14 | 10.30 | 0.02 | 0.03
| (6.39) | (8.90) | (0.02) | (0.02)
Post x High Oxycodone | -2.33 | 33.99 | 0.06 | 0.07
| (6.37) | (8.80) | (0.02) | (0.02)
Number of observations | 2148 | 2148 | 2148 | 2148
R-square | 0.665 | 0.737 | 0.517 | 0.469
Adjusted R-square | 0.654 | 0.728 | 0.501 | 0.452
Notes: We report coefficients from the difference-in-difference estimation
(see equation (3)). All MSAs in Florida are excluded. In all specifications,
we include MSA-level control variables, state fixed effects and year fixed
effects. Standard errors are clustered at the MSA level.
### 7.3 Map
Note: Data from the 2004-2009 NSDUH lifetime OxyContin misuse rate (NSDUH
ticker OXXYR). A value of 0.01 means that 1% of the state population has ever
misused OxyContin.
Figure 14: OxyContin lifetime misuse rate at state level
Note: Data from the 2004-2009 NSDUH lifetime Percocet, Percodan, and Tylox
misuse rate (NSDUH ticker PERCTYL2). A value of 0.01 means that 1% of the
state population has ever misused one of the three drugs. The Percocet
lifetime misuse rate is on average much higher than the OxyContin lifetime
misuse rate.
Figure 15: Percocet lifetime misuse rate at state level
Note: The figure plots the absolute difference in percentile ranking of the
two state-level lifetime misuse rates. A value of 0.1 should be interpreted
as a 10% difference in percentile ranking between the OxyContin and Percocet
lifetime misuse rates. For example, Colorado’s OxyContin misuse rate is
0.0063 (42nd percentile) and its Percocet misuse rate is 0.092 (97th
percentile), a 55% difference in percentile ranking. We rely on the
difference between the two misuse rates to separately identify the impact of
OxyContin and oxycodone.
Figure 16: Difference in state level misuse rates
Note: This figure shows OxyContin exposure by MSA. We show Florida here, which
had very low OxyContin exposure/sales, but omit it from analysis because it
had abnormally high generic oxycodone sales with large amounts being
trafficked to other states.
Figure 17: OxyContin exposure at MSA level
Note: Florida is excluded in this analysis. MSAs grouped by high vs low
OxyContin exposure and high vs low generic oxycodone exposure.
Figure 18: Diff-in-diff regression categories
### 7.4 Alternative Regression Specifications
#### 7.4.1 MSA FE
Figure 19: Regression on OxyContin sales with MSA FE. Shaded regions are the
95 percent confidence intervals with standard errors clustered at the MSA
level.
Figure 20: Regression on oxycodone sales with MSA FE. Shaded regions are the
95 percent confidence intervals with standard errors clustered at the MSA
level.
Figure 21: Regression on opioid mortality with MSA FE. Shaded regions are the
95 percent confidence intervals with standard errors clustered at the MSA
level.
Figure 22: Regression on heroin mortality with MSA FE. Shaded regions are the
95 percent confidence intervals with standard errors clustered at the MSA
level.
#### 7.4.2 Last Year OxyContin Misuse
Figure 23: Regression on OxyContin sales with last-year OxyContin. Shaded
regions are the 95 percent confidence intervals with standard errors clustered
at the MSA level.
Figure 24: Regression on oxycodone sales with last-year OxyContin. Shaded
regions are the 95 percent confidence intervals with standard errors clustered
at the MSA level.
Figure 25: Regression on opioid mortality with last-year OxyContin. Shaded
regions are the 95 percent confidence intervals with standard errors clustered
at the MSA level.
Figure 26: Regression on heroin mortality with last-year OxyContin. Shaded
regions are the 95 percent confidence intervals with standard errors clustered
at the MSA level.
#### 7.4.3 State Level Regression
Figure 27: Regression on OxyContin sales at state level. Shaded regions are
the 95 percent confidence intervals with standard errors clustered at the MSA
level.
Figure 28: Regression on oxycodone sales at state level. Shaded regions are
the 95 percent confidence intervals with standard errors clustered at the MSA
level.
Figure 29: Regression on opioid mortality at state level
Figure 30: Regression on heroin mortality at state level. Shaded regions are
the 95 percent confidence intervals with standard errors clustered at the MSA
level.
#### 7.4.4 OxyContin Only
Figure 31: Regression on OxyContin sales with OxyContin only. Shaded regions
are the 95 percent confidence intervals with standard errors clustered at the
MSA level.
Figure 32: Regression on oxycodone sales with OxyContin only. Shaded regions
are the 95 percent confidence intervals with standard errors clustered at the
MSA level.
Figure 33: Regression on opioid mortality with OxyContin only. Shaded regions
are the 95 percent confidence intervals with standard errors clustered at the
MSA level.
Figure 34: Regression on heroin mortality with OxyContin only. Shaded regions
are the 95 percent confidence intervals with standard errors clustered at the
MSA level.
#### 7.4.5 Oxycodone Only
Figure 35: Regression on OxyContin sales with oxycodone only. Shaded regions
are the 95 percent confidence intervals with standard errors clustered at the
MSA level.
Figure 36: Regression on oxycodone sales with oxycodone only. Shaded regions
are the 95 percent confidence intervals with standard errors clustered at the
MSA level.
Figure 37: Regression on opioid mortality with oxycodone only. Shaded regions
are the 95 percent confidence intervals with standard errors clustered at the
MSA level.
Figure 38: Regression on heroin mortality with oxycodone only. Shaded regions
are the 95 percent confidence intervals with standard errors clustered at the
MSA level.
arXiv:2101.01135
# Stochastic dynamics of a few sodium atoms in a cold potassium cloud
Rohit Prasad Bhatt Jan Kilinc Lilo Höcker Fred Jendrzejewski Universität
Heidelberg, Kirchhoff-Institut für Physik, Im Neuenheimer Feld 227, 69120
Heidelberg, Germany
###### Abstract
We report on the stochastic dynamics of a few sodium atoms immersed in a cold
potassium cloud. The studies are realized in a dual-species magneto-optical
trap by continuously monitoring the emitted fluorescence of the two atomic
species. We investigate the time evolution of sodium and potassium atoms in a
unified statistical language and study the detection limits. We resolve the
sodium atom dynamics accurately, which enables a fit-free analysis. This work
paves the path towards precise statistical studies of the dynamical properties
of few atoms immersed in complex quantum environments.
## I Introduction
The random evolution of a small system in a large bath can only be described
by its statistical properties. Such stochastic dynamics occur in a wide range
of settings including financial markets [1], biological systems [2], impurity
physics [3] and quantum heat engines [4]. Their evolution is hard to describe
from microscopic principles, stimulating strong efforts to realize highly
controlled model systems in optomechanics [5], cavity QED [6], superconducting
circuits [7], trapped ions [8] and cold atoms [9]. For cold atoms, the high
control is complemented by the access to a number of powerful statistical
approaches, like the precise analysis of higher-order correlation functions of
a many-body system [10] or the extraction of entanglement through fluctuations
[11, 12].
Cold atomic mixtures offer a natural mapping of physical phenomena involving
system-bath interactions, wherein one species realizes the bath, while the
other species represents the system. If a mesoscopic cloud of the first
species is immersed in a Bose-Einstein condensate formed by the second
species, it implements the Bose polaron problem [13, 14, 15, 16]. In recent
quantum simulators of lattice gauge theories, the small clouds of one species
emulate the matter field, which is properly coupled to the gauge field
realized by the second atomic species [17, 18, 19]. Proposed technologies even
go towards quantum error correction, where the logical qubits are implemented
in one atomic species and the second atomic species mediates entanglement
between them [20]. The feasibility of immersing a few atoms into a large cloud
was demonstrated in a dual-species magneto-optical trap (MOT) of rubidium and
cesium [21]. This was extended towards the study of position- and spin-
resolved dynamics with a single tracer atom acting as a probe [22, 23].
However, combining such experiments with a statistical description of system
and bath remains an open challenge.
Figure 1: Experimental platform for atom counting. The atoms are trapped and
laser cooled in a dual-species MOT inside the science chamber. The emitted
fluorescence is collected by a high-resolution imaging system onto the
cameras. We observe the stochastic dynamics of single sodium atoms (orange),
immersed in a large cloud of potassium atoms (blue).
In this work, we investigate the stochastic dynamics of few sodium atoms and a
large cloud of potassium atoms in a dual-species MOT. It builds upon atom
counting experiments with a single atomic species [24, 25, 26, 27] and atomic
mixtures of Rb and Cs [21, 28, 29]. The mixture of $^{23}$Na and $^{39}$K, as
employed in our experiment, has shown excellent scattering properties in the
degenerate regime [30]. It further provides the option to replace the bosonic
$^{39}$K by the fermionic $^{40}$K isotope through minimal changes in the
laser cooling scheme [31]. The two atomic species are cooled through standard laser cooling
techniques in a dual-species MOT, as shown in figure 1. In a MOT, cooling and
trapping are achieved through a combination of magnetic field gradients with
continuous absorption and emission of resonant laser light. We collect the
resulting fluorescence on a dedicated camera for each species and trace their
spatially integrated dynamics. We present a statistical analysis for the
dynamics of both species, which separates the fluctuations induced by the
statistical loading process from those caused by technical limitations.
Furthermore, we achieve single atom counting for sodium, which we employ to
study its full counting statistics.
The paper is structured as follows. In section II, we provide a detailed
discussion of the experimental apparatus, and how it is designed to fulfill
the requirements of modern quantum simulators. In section III, we study the
dynamics of the observed fluorescence signal for both atomic species. The
analysis of their mean and variance after an ensemble average is then employed
to statistically investigate the origin of different fluctuations. In section
IV, we leverage the single atom counting resolution of sodium to extract the
full counting statistics of atom load and loss events. In section V, we end
with an outlook of the next steps for the experimental platform.
## II Experimental Apparatus
In this section, we describe the different elements of our new experimental
apparatus. In the course of designing this machine, care was taken to
optimize the versatility and stability of the system. To achieve this, the
experimental setup was designed for a continuous development of the vacuum and
the laser system.
### II.1 Vacuum system
Ultracold atom experiments require an ultra-high vacuum (UHV) at pressures
below $10^{-11}\text{\,}\mathrm{mbar}$, in order to isolate the cold atoms from
the surrounding environment. Our group operates another machine for quantum
simulation experiments with atomic mixtures [13, 14, 19], which consists of a
dual-species oven and a Zeeman slower connected to a science chamber [32]. In
this apparatus the first cooling stages of the two species are highly coupled,
which renders the optimization of the system very complex.
In the new vacuum system, we decoupled the precooling stages of sodium and
potassium up to the science chamber, as sketched in figure 2. The compact
vacuum system contains two independent two-dimensional magneto-optical trap
(2D-MOT) chambers for sodium and potassium and a dual-species science chamber,
where experiments are performed [33, 34]. The two 2D-MOT chambers are
connected to the science chamber from the same side under a $12.5^{\circ}$
angle. The entire apparatus is mounted on a $600\text{\,}\mathrm{mm}\times
700\text{\,}\mathrm{mm}$ aluminium breadboard, which is fixed to a linear
translation stage (inspired by the approach of the programmable quantum
simulator machine in the Lukin group [52]). Therefore, we are able to move the
science chamber out of the assembly of magnetic field coils and optics.
This allows for independent improvement of the vacuum system and in-situ
characterization of the magnetic field at the position of the atoms.
Figure 2: Vacuum system. The separated 2D-MOT chambers are connected from the
same side to the dual-species science chamber. The vacuum pumps are shown in
red. The whole vacuum system is mounted on a translation stage, such that the
science chamber can be moved out of the region of the 3D-MOT coils and optics.
2D-MOT. The design of the 2D-MOT setup is inspired by ref. [34]. The chamber
body is manufactured from titanium (fabricated by SAES Getters), where optical
access is ensured by standard CF40 fused silica viewports with broadband anti-
reflection coating (BBR coating). The 2D-MOT region has an oven containing a
$1\text{\,}\mathrm{g}$ atomic ingot ampoule. The oven is heated to
$160\,^{\circ}$C ($70\,^{\circ}$C) for sodium (potassium), thereby increasing
the pressure to $10^{-8}\text{\,}\mathrm{mbar}$ in this region. To maintain a
UHV in the science chamber, a differential pumping stage separates the two
vacuum regions from each other. Two gate valves ensure full decoupling of the
two atomic species by isolating different chambers. Each region is pumped with
its own ion getter pump from SAES Getters (a NEXTorr Z100 for the 2D-MOT and
a NEXTorr D500 for the 3D-MOT). We employed four stacks of nine
(four) neodymium bar magnets to generate the required magnetic quadrupole
field inside the sodium (potassium) 2D-MOT chamber.
3D-MOT. The rectangular titanium science chamber is designed such that the two
atomic beams from the 2D-MOT chambers intersect in the center. Optical access
for various laser beams and a high-resolution imaging system is maximized by
four elongated oval viewports (fused silica, BBR coating), which are sealed
using indium wire.
The quadrupole magnetic field required for the 3D-MOT is produced by the MOT
coils, which are placed on the sides of the science chamber. Applying a
current of $20\text{\,}\mathrm{A}$ to the coils results in a magnetic field
gradient of $17$ G/cm. The fast control of the current in the coils, required
during an experimental sequence, is achieved through an insulated-gate bipolar
transistor (IGBT) switching circuit. In order to cancel stray fields in the
vicinity of the atomic clouds, we use three independent pairs of Helmholtz
coils carrying small currents ($<1$ A).
Figure 3: Sketch of the optical setup for laser cooling sodium and potassium
atoms. The laser light is split into different paths, enabling the individual
control of laser power and frequency for the 2D-MOT, the 3D-MOT, and the push
beam. The frequency and intensity of these beams is controlled with the help
of acousto-optic modulators (AOMs) in double-pass configuration. The rf-
frequencies for AOMs and EOMs are given in MHz. Na: The repumping light for
the 2D- and 3D-MOT is generated by electro-optic modulators (EOMs). K: In the
2D- and 3D-MOT paths the green AOM controls the $^{39}$K cooling frequency
and the blue AOM is responsible for the creation of the $^{39}$K repumping
light. The repumping light for $^{40}$K is generated by EOMs.
### II.2 Laser cooling
In order to cool and trap the atoms, the laser light is amplified and
frequency-stabilized on a dedicated optical table for each atomic species. The
light is transferred to the main experiment table via optical fibers. The
layout of the laser systems for both species is shown in figure 3.
Sodium. Laser cooling and trapping of sodium atoms is achieved using the
D2-line at 589 nm, which is obtained from a high-power, frequency-doubled
diode laser (TA-SHG pro, from Toptica Photonics). The laser light is
stabilized to the excited-state crossover transition of the D2-line using
saturated absorption spectroscopy (SAS) and Zeeman modulation locking [37].
The modulated SAS signal is fed into a digital lock-in amplifier and PI-
controller, which are programmed on a STEMLab 125-14 board from Red Pitaya
using the Pyrpl module [38].
Potassium. Laser cooling and trapping of potassium atoms is achieved using the
D2-line at $767\text{\,}\mathrm{nm}$. The light is obtained from a master-
slave laser configuration (both DL pro, from Toptica Photonics). The master
laser frequency is locked to the ground-state crossover transition of the
D2-line of $^{39}$K with a scheme similar to sodium. The slave laser is
frequency-stabilized through an offset beat lock ($405\text{\,}\mathrm{MHz}$) and its
output is amplified to a power of $800\text{\,}\mathrm{mW}$, using a home-
built tapered amplifier (TA) module. This light is used to supply all the
cooling and trapping beams. The offset locking scheme also facilitates
switching between the two isotopes, $^{39}$K and $^{40}$K. To cool the fermionic $^{40}$K,
the slave laser frequency is increased by approximately
$810\text{\,}\mathrm{MHz}$ via the offset lock and the blue acousto-optic
modulators (see figure 3) are turned off.
3D-MOT. On the experiment table, the light from the optical fibers is
distributed into three independent paths for the operation of the dual-species
MOT in a retro-reflected configuration. For both species the number of atoms
loaded into the 3D-MOT can be tuned in a controlled way by adjusting the
2D-MOT beam power and the oven temperature. The pre-cooled atoms in the 2D-MOT
region are transported to the 3D-MOT with a push beam. For accurate atom
counting of sodium, we use $1.3\text{\,}\mathrm{mW}$ power in each 3D-MOT beam
and a beam diameter of about $2\text{\,}\mathrm{mm}$, while the push and
2D-MOT beams are turned off. This helps in reducing the loading rate as well
as the stray light. Furthermore, the sodium oven is kept at a relatively low
temperature of $80\,^{\circ}$C, which increases the lifetime of atoms in the 3D-MOT
due to better vacuum.
### II.3 Fluorescence imaging
The cold atoms are characterized by collecting their fluorescence through an
imaging system with a high numerical aperture (NA) onto a camera (fig. 1). The
imaging setup comprises an apochromatic high-resolution objective, which
features an NA of 0.5 and chromatic focal correction in the wavelength range
$589-767$ nm (fabricated by Special Optics). The fluorescence of sodium and
potassium is separated by a dichroic mirror, built into a cage system, which
is mounted on stages for x-, y- and z-translation along with tip-tilt
adjustment.
Both imaging paths contain a secondary lens and an additional relay telescope.
This allows us to do spatial filtering with an iris in the intermediate image
plane of the secondary lens and achieve a magnification of 0.75 (0.25) for
sodium (potassium). For imaging the sodium atoms we use an sCMOS camera (Andor
ZYLA 5.5) [39, 40], while for the potassium atoms we use an EMCCD camera (NuVu
H-512). In total, we estimate the conversion efficiency from photons to camera
counts to be $0.2$% ($0.02$%) for sodium (potassium).
## III Atom dynamics
Our experimental sequence to investigate the atom dynamics is shown in figure
4 A. We start the atom dynamics by switching on the MOT magnetic field (with a
gradient of $21$ G/cm) and then monitor the fluorescence in
$N_{\text{img}}=200$ images. Each image has an integration time
$\tau=75\text{\,}\mathrm{ms}$, such that the camera counts overcome the
background noise. Since the motion of the atoms during the integration time
washes out any spatial information, we sum up the counts over the entire MOT
region for each image. This results in a time trace of camera counts
$N_{\text{c}}$, as shown in figure 4 B. Each experimental run is preceded and
succeeded by a series of 100 reference images to quantify the background noise
$\Delta_{\text{bg}}$, induced by the fluctuations in the stray light from the
MOT beams.
Figure 4: Experimental sequence. A: A series of images (black) is taken. While
the MOT beams (red) are always on, the magnetic field (green) is switched off
for reference images marked in grey. B: Typical time trace from the series of
images of sodium (orange) and potassium (blue).
The camera counts for sodium exhibit random jumps, corresponding to single
atom load and loss events. The stochastic nature of the observed signal and
large relative fluctuations require a statistical analysis of the dynamics in
terms of expectation values. The single atom resolution provides additional
access to the full counting statistics, which is discussed in section IV. The
few sodium atoms are immersed in a cloud of potassium atoms, which we pre-load
for $5\text{\,}\mathrm{s}$ to ensure large atom numbers. In contrast to
sodium, we do not observe discrete jumps, but rather a continuous loading
curve with higher counts and smaller relative fluctuations. These are typical
features of a bath, which can be characterized by its mean and variance.
To extract expectation values through an ensemble average, we perform 100
repetitions of the previously described experimental sequence for sodium and
potassium independently (given the smaller size of the dataset, heating of
the MOT coils was less of a limiting factor, so we increased the integration
time $\tau$ for sodium to $200\text{\,}\mathrm{ms}$ for the data shown in
figures 5 and 6). To further access the small atom regime for potassium in
this analysis, we reduce the 2D-MOT power and do not perform pre-loading. The
observed dynamics are shown in figure 5 A. We calculate the mean
$\overline{N}_{c}$ and standard deviation $\Delta_{\text{c}}$ of counts at
each image index [42, 43]. For the case of sodium, the dynamics is extremely
slow and never reaches a stationary regime. Furthermore, the amplitude of the
fluctuations is comparable to the average camera counts throughout the entire
observation. For potassium, the stationary situation is achieved on average
after a few seconds of loading. Once again, we observe a strong dependence of
the standard deviation on the average atom counts.
Figure 5: Characterization of atom number fluctuations for sodium (left) and
potassium (right). A: Hundred time traces of sodium and potassium with mean
and error band (shown as thick lines with shaded region around them). B:
Dependence of variance on mean camera counts. For sodium (left) the inset
shows the background noise level.
To study this dependence quantitatively, we trace the variance
$\Delta_{\text{c}}^{2}$ as a function of the average counts $\overline{N}_{c}$
in figure 5 B. For sodium the variance shows a linear dependence on the
average counts with an intercept. This behavior can be understood by
considering two independent noise sources. The first one is a background noise
$\Delta_{\text{bg}}$, which is independent of the atom number and adds a
constant offset to the variance. It originates from the readout noise of the
camera and intensity-varying stray light. The second noise source is the atom
shot noise, which describes the random variations due to the counting of atoms
loaded until a given image index in the time trace. Its variance is equal to
the average atom number. The recorded camera signal is directly proportional
to the atom number $N_{\text{c}}=C\,N_{\text{at}}$, leading through error
propagation to a variance of $C\,\overline{N}_{\text{c}}$. The two independent
noise sources add up in their variances
$\Delta_{\text{c}}^{2}=C\,\overline{N}_{\text{c}}+\Delta_{\text{bg}}^{2}\,.$
(1)
This theoretical prediction agrees well with the experimental observations.
The calibration constant $C_{\text{Na}}=1.15(5)\times 10^{4}$ and the
background noise $\Delta_{\text{bg,Na}}=2201(2)$ were independently extracted
from a histogram plot, as described in section IV. This validates our
assumption that background and shot noise are the dominating noise sources for
sodium. Converting the camera counts back into atom numbers, we obtain a
resolution of $0.20(1)\,$atoms, quantifying the quality of the observed single
atom resolution (the connection of this resolution to the detection fidelity
can be extracted directly from the histogram shown in figure 6).
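The noise model in equation (1) can be checked with a short Monte Carlo sketch (purely illustrative Python, not the authors' analysis code; the atom number is drawn from a Poisson distribution, and the calibration constant and background width are only of the order quoted for sodium):

```python
import math
import random
from statistics import mean, pvariance

def sample_poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Illustrative parameters, of the order reported for sodium.
C, bg_sigma, lam = 1.15e4, 2201.0, 3.0
rng = random.Random(0)

# Camera counts: C * N_at (atom shot noise) plus Gaussian background noise.
counts = [C * sample_poisson(lam, rng) + rng.gauss(0.0, bg_sigma)
          for _ in range(100_000)]

mean_c = mean(counts)
var_c = pvariance(counts)
predicted = C * mean_c + bg_sigma**2  # equation (1)
print(var_c / predicted)  # ratio close to 1
```

Because the two noise sources are independent, their variances add, and the measured variance tracks the linear prediction $C\,\overline{N}_{\text{c}}+\Delta_{\text{bg}}^{2}$.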
For potassium, we observe a more complex behavior of the variance. In the
regime of few counts the variance is again dominated by the background noise
and the atom shot noise. With the noise model (1), validated for sodium, we
perform a fit to extract the calibration factor $C_{\text{K}}=560(140)$ and
the background noise $\Delta_{\text{bg,K}}=2450(140)$. The resulting atom
resolution of $4.3(1.1)\,$atoms is similar to that achieved in precision
experiments with Bose-Einstein condensates [11, 45].
For higher atom numbers, we observe a non-linear dependence, which we
attribute to technical fluctuations of the MOT. The MOT properties can be
parameterized by the loading rate $\Gamma_{\text{load}}$ and loss rate
$\Gamma_{\text{loss}}$. Considering single atom load and loss only, they are
connected to the atom dynamics through
$N_{\text{at}}(t)=\frac{\Gamma_{\text{load}}}{\Gamma_{\text{loss}}}\big{[}1-\exp(-\Gamma_{\text{loss}}t)\big{]}\,.$
(2)
We fit each time trace with this solution and, hence, extract the distribution
of $\Gamma_{\text{load}}$ and $\Gamma_{\text{loss}}$ across different runs.
The variance in the atom dynamics, resulting from these fluctuations, is
traced as the dash-dotted curve in figure 5 B. In the high atom number regime
it agrees well with our experimental observation. We expect to substantially
reduce these fluctuations in the future by improving the stability of
intensity, frequency, and magnetic field.
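Equation (2) describes the ensemble-averaged dynamics of the underlying stochastic load-loss process. A minimal simulation sketch (hypothetical per-image probabilities; illustrative Python, not the authors' code) shows the averaged birth-death process saturating at $\Gamma_{\text{load}}/\Gamma_{\text{loss}}$:

```python
import random
from statistics import mean

def simulate_trace(n_steps, p_load, p_loss, rng):
    """Birth-death process: each image, every trapped atom is lost
    independently with probability p_loss, then at most one atom is
    loaded with probability p_load."""
    n, trace = 0, []
    for _ in range(n_steps):
        n -= sum(rng.random() < p_loss for _ in range(n))
        if rng.random() < p_load:
            n += 1
        trace.append(n)
    return trace

rng = random.Random(1)
p_load, p_loss, n_steps = 0.2, 0.05, 400
traces = [simulate_trace(n_steps, p_load, p_loss, rng) for _ in range(500)]

# After roughly 1/p_loss steps the ensemble mean saturates at
# p_load / p_loss = 4 atoms, as in the exponential solution of equation (2).
final_mean = mean(t[-1] for t in traces)
print(final_mean)  # close to 4
```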
## IV Full counting statistics of sodium
Going one step beyond the statistical analysis of ensemble averages, we use
the single atom resolution of sodium to extract its full counting statistics
[46, 47]. This requires the digitization of camera counts into discrete atom
numbers [24], as presented in figure 6. For this, we aggregate the camera
counts of 100 runs into one histogram, which shows distinct atom number peaks.
The calibration from camera counts to atom counts is accomplished through
Gaussian fits to individual single atom peaks. The distance between
consecutive peaks corresponds to the calibration factor
$C_{\text{Na}}=1.15(5)\times 10^{4}$. The width of the zero atom signal sets
the background noise limit $\Delta_{\text{bg,Na}}=2201(2)$ (these values are
used in section III in the analysis of the variance as a function of the mean
camera counts $\overline{N}_{\text{c}}$). From the overlap of the peaks, we
estimate the detection fidelity of atoms to $96(3)$%. With this calibration,
we convert the time traces of camera counts into digitized atom count
dynamics, as shown in figure 6 B.
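The digitization step amounts to dividing the (background-subtracted) counts by the calibration factor and rounding to the nearest integer. A sketch with made-up count values:

```python
def digitize(counts, calibration, offset=0.0):
    """Convert summed camera counts into an integer atom number."""
    return round((counts - offset) / calibration)

C_na = 1.15e4  # counts per atom, from the histogram calibration
trace = [1800.0, 12500.0, 23800.0, 11900.0, 300.0]  # hypothetical counts
atoms = [digitize(c, C_na) for c in trace]
print(atoms)  # [0, 1, 2, 1, 0]
```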
Figure 6: Accurate atom counting of sodium. A: Histogram of recorded camera
counts. The calibration from camera counts to atom number is accomplished
through Gaussian fits to distinct single atom peaks. Insets show average
images of zero and one atom. B: Example time trace before and after
digitization.
Each change in atom counts corresponds to a load or loss event with one or
more atoms, as shown in figure 7 A. We observe that the dynamics are dominated
by single atom events, as only $3$% involve two or more atoms. Therefore, we
neglect them in the following. We count the number of single atom events in
each time trace and summarize them in a histogram, shown in figure 7 B.
Figure 7: Counting statistics of sodium with and without potassium atoms
present. A: Digitized example time trace of sodium with single atom load
(loss) events marked with up (down) arrows. Only jumps during the MOT loading
stage are taken into account. B: Histogram of the number of single atom losses
and loads per time trace. The dashed lines show Poisson distributions with
mean $\overline{N}_{\text{loss}}$ and $\overline{N}_{\text{load}}$ (extracted
from the counting statistics).
On average we observe $\overline{N}_{\text{load}}=2.02(6)$ loading events per
time trace, which is much smaller than the total number of images
$N_{\text{img}}=200$ taken per time trace. Given that the atoms come from a
large reservoir, namely the oven region, the loading rate is independent of
the number of loaded atoms. From these observations, we describe the loading
process statistically as a series of independent Bernoulli trials with a
success probability $p_{\text{load}}$. Therefore, the single atom loading
probability is given by
$p_{\text{load}}=\frac{\overline{N}_{\text{load}}}{N_{\text{img}}}\,.$ (3)
The large number of images and the low loading probability means that the
number of loading events $N_{\text{load}}$ converges towards a Poisson
distribution with mean $\overline{N}_{\text{load}}$. This stands in full
agreement with the experimental observation.
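The convergence argument can be made concrete: with $N_{\text{img}}=200$ Bernoulli trials and $p_{\text{load}}\approx 0.01$, the binomial distribution of $N_{\text{load}}$ is already numerically very close to a Poisson distribution with the same mean (a small check, illustrative only):

```python
import math

def binom_pmf(k, n, p):
    """Probability of k successes in n Bernoulli trials."""
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam**k / math.factorial(k)

n_img, mean_load = 200, 2.02          # values from the text
p_load = mean_load / n_img            # equation (3)
max_diff = max(abs(binom_pmf(k, n_img, p_load) - poisson_pmf(k, mean_load))
               for k in range(11))
print(max_diff)  # a few times 1e-3
```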
Once an atom is present, it can be lost from the MOT with a probability
$p_{\text{loss}}$. We observe an average number of
$\overline{N}_{\text{loss}}=1.29(5)$ loss events per time trace. Since we do
not distinguish between atoms, the number of atoms lost in each time step can
be described by a binomial distribution. Therefore, the average number of
single atom loss events per time trace $\overline{N}_{\text{loss}}$ enables us
to extract the loss probability
$p_{\text{loss}}=\frac{\overline{N}_{\text{loss}}}{\sum_{i}\overline{N}_{i}}\,.$
(4)
The normalization factor is the sum of the average number of atoms present in each
image $i$. Similar to the loading case, we observe a Poisson distribution for
the loss events with mean $\overline{N}_{\text{loss}}$, which can be
attributed to the occurrence of only a few loss events over a large set of
images.
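Putting equations (3) and (4) together, both probabilities can be read off a digitized trace by counting unit jumps. The following is a sketch for a single trace with made-up data; the paper averages $\overline{N}_{\text{load}}$, $\overline{N}_{\text{loss}}$ and the atom numbers $\overline{N}_{i}$ over the ensemble of runs:

```python
def load_loss_probabilities(trace, n_img=None):
    """Estimate per-image load and loss probabilities from a digitized
    atom-number trace, counting only single-atom jumps."""
    n_img = n_img if n_img is not None else len(trace)
    jumps = [b - a for a, b in zip(trace, trace[1:])]
    n_load = sum(1 for j in jumps if j == 1)
    n_loss = sum(1 for j in jumps if j == -1)
    p_load = n_load / n_img       # equation (3)
    p_loss = n_loss / sum(trace)  # equation (4): normalize by atoms present
    return p_load, p_loss

trace = [0, 0, 1, 1, 1, 2, 2, 1, 1, 0]  # hypothetical digitized trace
p_load, p_loss = load_loss_probabilities(trace)
print(p_load, p_loss)
```

Here two load and two loss events over ten images, with nine atom-images in total, give $p_{\text{load}}=0.2$ and $p_{\text{loss}}=2/9$.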
Table 1: Comparison of load and loss probabilities in a few atom sodium MOT with and without the presence of a potassium cloud. The uncertainties were obtained through bootstrap resampling. | $p_{\text{load}}$ [%] | $p_{\text{loss}}$ [%]
---|---|---
Without K | 1.06(3) | 2.76(23)
With K | 1.02(3) | 2.47(24)
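The uncertainties in table 1 were obtained by bootstrap resampling over the ensemble of time traces. A generic sketch of that procedure (hypothetical per-trace event counts; not the authors' code):

```python
import random
from statistics import mean, stdev

def bootstrap_se(values, estimator, n_boot=2000, seed=0):
    """Bootstrap standard error: resample the per-trace values with
    replacement and take the spread of the re-computed estimator."""
    rng = random.Random(seed)
    stats = [estimator([rng.choice(values) for _ in values])
             for _ in range(n_boot)]
    return stdev(stats)

# Hypothetical numbers of single-atom load events per time trace.
loads_per_trace = [1, 2, 3, 2, 1, 2, 4, 0, 2, 3] * 10
se = bootstrap_se(loads_per_trace, mean)
print(se)  # roughly the standard error of the mean
```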
To study the influence of the large potassium cloud on the dynamics of the few
sodium atoms, we compare the load and loss statistics of the sodium atom
counts with and without potassium atoms present (see fig. 7 B). The extracted
load and loss probabilities are summarized in table 1. The values
corresponding to the absence and presence of potassium are indistinguishable
within roughly five percent. To exclude experimental errors, we repeated
the analysis for various configurations of relative positions of the two
clouds, magnetic field gradients and laser detunings. All results were
compatible with our observation of no influence of potassium on the sodium
atom dynamics. We attribute these results to the extremely low density of the
atomic clouds. To increase the density of both clouds in future studies, we
plan to work at higher magnetic field gradients with water-cooled coils [49].
At higher densities, we expect to observe inter-species interaction, which
should influence the loading dynamics similar to previous studies [50, 21].
## V Outlook
In this work, we presented a detailed experimental study of the stochastic
dynamics of a few cold $^{23}$Na atoms immersed in a cloud of $^{39}$K atoms in a MOT.
The experimental setup is designed to be directly extendable towards quantum
degenerate gases through evaporative cooling [34]. Defect-free optical tweezer
arrays will provide high control over single atoms [51, 52] and their repeated
observation, as recently demonstrated for strontium atoms [53].
Our study opens the path for investigating time-resolved dynamics, including
transport and thermalization, in atomic mixtures over a wide range of
parameters. The emergence of ergodicity will become directly observable as the
equivalence of the ensemble averages with time averages for individual time
traces for sufficiently long times. Optimizing the photon detection should
further allow us to reduce imaging times sufficiently to reach position-
resolution [54, 55] without a pinning lattice [22]. This will extend our work
towards the quantum regime, in which we might continuously monitor
thermalization of impurity atoms in a Bose-Einstein condensate [28, 56].
## Acknowledgement
The authors are grateful for fruitful discussions and experimental help from
Apoorva Hegde, Andy Xia, Alexander Hesse, Helmut Strobel, Valentin Kasper,
Lisa Ringena and all the members of SynQS. We thank Giacomo Lamporesi as well
as Gretchen K. Campbell and her team for valuable input on the design of the
experimental setup.
This work is part of and supported by the DFG Collaborative Research Centre
“SFB 1225 (ISOQUANT)”. F. J. acknowledges the DFG support through the project
FOR 2724, the Emmy Noether grant (Project-ID 377616843) and support by the
Bundesministerium für Wirtschaft und Energie through the project “EnerQuant”
(Project-ID 03EI1025C).
## References
* Cont and Bouchaud [2000] R. Cont and J.-P. Bouchaud, Herd behavior and aggregate fluctuations in financial markets, Macroeconomic Dynamics 4, 170 (2000).
* Kucsko _et al._ [2013] G. Kucsko, P. C. Maurer, N. Y. Yao, M. Kubo, H. J. Noh, P. K. Lo, H. Park, and M. D. Lukin, Nanometre-scale thermometry in a living cell, Nature 500, 54 (2013).
* Grusdt and Demler [2016] F. Grusdt and E. Demler, New theoretical approaches to Bose polarons, in _Quantum matter at ultralow temperatures_ (2016) p. 325.
* Gluza _et al._ [2020] M. Gluza, J. Sabino, N. H. Y. Ng, G. Vitagliano, M. Pezzutto, Y. Omar, I. Mazets, M. Huber, J. Schmiedmayer, and J. Eisert, Quantum field thermal machines (2020), arXiv:2006.01177 [quant-ph] .
* Gröblacher _et al._ [2009] S. Gröblacher, K. Hammerer, M. R. Vanner, and M. Aspelmeyer, Observation of strong coupling between a micromechanical resonator and an optical cavity field, Nature 460, 724 (2009).
* Gleyzes _et al._ [2007] S. Gleyzes, S. Kuhr, C. Guerlin, J. Bernu, S. Deléglise, U. Busk Hoff, M. Brune, J.-M. Raimond, and S. Haroche, Quantum jumps of light recording the birth and death of a photon in a cavity, Nature 446, 297 (2007).
* Saira _et al._ [2012] O.-P. Saira, Y. Yoon, T. Tanttu, M. Möttönen, D. V. Averin, and J. P. Pekola, Test of the Jarzynski and Crooks fluctuation relations in an electronic system, Phys. Rev. Lett. 109, 180601 (2012).
* Maier _et al._ [2019] C. Maier, T. Brydges, P. Jurcevic, N. Trautmann, C. Hempel, B. P. Lanyon, P. Hauke, R. Blatt, and C. F. Roos, Environment-assisted quantum transport in a 10-qubit network, Phys. Rev. Lett. 122, 050501 (2019).
* Chiu _et al._ [2019] C. S. Chiu, G. Ji, A. Bohrdt, M. Xu, M. Knap, E. Demler, F. Grusdt, M. Greiner, and D. Greif, String patterns in the doped Hubbard model, Science 365, 251 (2019).
* Schweigler _et al._ [2017] T. Schweigler, V. Kasper, S. Erne, I. Mazets, B. Rauer, F. Cataldini, T. Langen, T. Gasenzer, J. Berges, and J. Schmiedmayer, Experimental characterization of a quantum many-body system via higher-order correlations, Nature 545, 323 (2017).
* Strobel _et al._ [2014] H. Strobel, W. Muessel, D. Linnemann, T. Zibold, D. B. Hume, L. Pezzè, A. Smerzi, and M. K. Oberthaler, Fisher information and entanglement of non-Gaussian spin states, Science 345, 424 (2014).
* Lukin _et al._ [2019] A. Lukin, M. Rispoli, R. Schittko, M. E. Tai, A. M. Kaufman, S. Choi, V. Khemani, J. Léonard, and M. Greiner, Probing entanglement in a many-body-localized system, Science 364, 256 (2019).
* Scelle _et al._ [2013] R. Scelle, T. Rentrop, A. Trautmann, T. Schuster, and M. K. Oberthaler, Motional Coherence of Fermions Immersed in a Bose Gas, Physical Review Letters 111, 070401 (2013).
* Rentrop _et al._ [2016] T. Rentrop, A. Trautmann, F. A. Olivares, F. Jendrzejewski, A. Komnik, and M. K. Oberthaler, Observation of the phononic Lamb shift with a synthetic vacuum, Physical Review X 6, 041041 (2016).
* Jørgensen _et al._ [2016] N. B. Jørgensen, L. Wacker, K. T. Skalmstang, M. M. Parish, J. Levinsen, R. S. Christensen, G. M. Bruun, and J. J. Arlt, Observation of attractive and repulsive polarons in a Bose-Einstein condensate, Physical Review Letters 117, 055302 (2016).
* Yan _et al._ [2020] Z. Z. Yan, Y. Ni, C. Robens, and M. W. Zwierlein, Bose polarons near quantum criticality, Science 368, 190 (2020).
* Zohar _et al._ [2015] E. Zohar, J. I. Cirac, and B. Reznik, Quantum simulations of lattice gauge theories using ultracold atoms in optical lattices, Reports on Progress in Physics 79, 014401 (2015).
* Kasper _et al._ [2017] V. Kasper, F. Hebenstreit, F. Jendrzejewski, M. K. Oberthaler, and J. Berges, Implementing quantum electrodynamics with ultracold atomic systems, New Journal of Physics 19, 023030 (2017).
* Mil _et al._ [2020] A. Mil, T. V. Zache, A. Hegde, A. Xia, R. P. Bhatt, M. K. Oberthaler, P. Hauke, J. Berges, and F. Jendrzejewski, A scalable realization of local U(1) gauge invariance in cold atomic mixtures, Science 367, 1128 (2020).
* Kasper _et al._ [2020] V. Kasper, D. González-Cuadra, A. Hegde, A. Xia, A. Dauphin, F. Huber, E. Tiemann, M. Lewenstein, F. Jendrzejewski, and P. Hauke, Universal quantum computation and quantum error correction with ultracold atomic mixtures (2020), arXiv:2010.15923 [cond-mat.quant-gas] .
* Weber _et al._ [2010] C. Weber, S. John, N. Spethmann, D. Meschede, and A. Widera, Single Cs atoms as collisional probes in a large Rb magneto-optical trap, Phys. Rev. A 82, 042722 (2010).
* Hohmann _et al._ [2017] M. Hohmann, F. Kindermann, T. Lausch, D. Mayer, F. Schmidt, E. Lutz, and A. Widera, Individual Tracer Atoms in an Ultracold Dilute Gas, Physical Review Letters 118, 1 (2017).
* Bouton _et al._ [2020a] Q. Bouton, J. Nettersheim, D. Adam, F. Schmidt, D. Mayer, T. Lausch, E. Tiemann, and A. Widera, Single-Atom Quantum Probes for Ultracold Gases Boosted by Nonequilibrium Spin Dynamics, Physical Review X 10, 011018 (2020a).
* Choi _et al._ [2007] Y. Choi, S. Yoon, S. Kang, W. Kim, J.-H. Lee, and K. An, Direct measurement of loading and loss rates in a magneto-optical trap with atom-number feedback, Phys. Rev. A 76, 013402 (2007).
* Ueberholz _et al._ [2002] B. Ueberholz, S. Kuhr, D. Frese, V. Gomer, and D. Meschede, Cold collisions in a high-gradient magneto-optical trap, Journal of Physics B: Atomic, Molecular and Optical Physics 35, 4899 (2002).
* Wenz _et al._ [2013] A. N. Wenz, G. Zurn, S. Murmann, I. Brouzos, T. Lompe, and S. Jochim, From Few to Many: Observing the Formation of a Fermi Sea One Atom at a Time, Science 342, 457 (2013).
* Hume _et al._ [2013] D. B. Hume, I. Stroescu, M. Joos, W. Muessel, H. Strobel, and M. K. Oberthaler, Accurate atom counting in mesoscopic ensembles, Physical Review Letters 111, 253001 (2013).
* Hohmann _et al._ [2016] M. Hohmann, F. Kindermann, T. Lausch, D. Mayer, F. Schmidt, and A. Widera, Single-atom thermometer for ultracold gases, Phys. Rev. A 93, 043607 (2016).
* Schmidt _et al._ [2016] F. Schmidt, D. Mayer, M. Hohmann, T. Lausch, F. Kindermann, and A. Widera, Precision measurement of the $^{87}$Rb tune-out wavelength in the hyperfine ground state F=1 at 790 nm, Physical Review A 93, 022507 (2016).
* Schulze _et al._ [2018] T. Schulze, T. Hartmann, K. K. Voges, M. W. Gempel, E. Tiemann, A. Zenesini, and S. Ospelkaus, Feshbach spectroscopy and dual-species Bose-Einstein condensation of $^{23}$Na-$^{39}$K mixtures, Physical Review A 97, 023623 (2018).
* Park _et al._ [2012] J. W. Park, C. H. Wu, I. Santiago, T. G. Tiecke, S. Will, P. Ahmadi, and M. W. Zwierlein, Quantum degenerate Bose-Fermi mixture of chemically different atomic species with widely tunable interactions, Physical Review A 85, 051602(R) (2012).
* Stan and Ketterle [2005] C. A. Stan and W. Ketterle, Multiple species atom source for laser-cooling experiments, Review of Scientific Instruments 76, 6 (2005).
* Tiecke _et al._ [2009] T. G. Tiecke, S. D. Gensemer, A. Ludewig, and J. T. M. Walraven, High-flux two-dimensional magneto-optical-trap source for cold lithium atoms, Physical Review A 80, 013409 (2009).
* Lamporesi _et al._ [2013] G. Lamporesi, S. Donadello, S. Serafini, and G. Ferrari, Compact high-flux source of cold sodium atoms, Review of Scientific Instruments 84, 063102 (2013).
* Note [1] Inspired by the approach of the programmable quantum simulator machine in the Lukin group [52].
* Note [2] We use NEXTorr Z100 for the 2D-MOT and NEXTorr D500 for the 3D-MOT.
* Weis and Derler [1988] A. Weis and S. Derler, Doppler modulation and zeeman modulation: laser frequency stabilization without direct frequency modulation, Applied Optics 27, 2662 (1988).
* Neuhaus _et al._ [2017] L. Neuhaus, R. Metzdorff, S. Chua, T. Jacqmin, T. Briant, A. Heidmann, P.-F. Cohadon, and S. Deléglise, Pyrpl (python red pitaya lockbox) — an open-source software package for fpga-controlled quantum optics experiments, in _2017 Conference on Lasers and Electro-Optics Europe European Quantum Electronics Conference (CLEO/Europe-EQEC)_ (2017) pp. 1–1.
* Picken _et al._ [2017] C. J. Picken, R. Legaie, and J. D. Pritchard, Single atom imaging with an sCMOS camera, Applied Physics Letters 111, 164102 (2017).
* Schlederer _et al._ [2020] M. Schlederer, A. Mozdzen, T. Lompe, and H. Moritz, Single atom counting in a two-color magneto-optical trap (2020), arXiv:2011.10081 [cond-mat.quant-gas] .
* Note [3] Given the smaller size of the dataset, it was possible to increase the integration time $\tau$, as the heating of the MOT coils was less of a limiting factor. Therefore, we increased the integration time for sodium to $200\text{\,}\mathrm{ms}$ for the data shown in figure 5 and 6.
* Muessel _et al._ [2013] W. Muessel, H. Strobel, M. Joos, E. Nicklas, I. Stroescu, J. Tomkovič, D. B. Hume, and M. K. Oberthaler, Optimized absorption imaging of mesoscopic atomic clouds, Applied Physics B 113, 69 (2013).
* Kang _et al._ [2006] S. Kang, S. Yoon, Y. Choi, J.-H. Lee, and K. An, Dependence of fluorescence-level statistics on bin-time size in a few-atom magneto-optical trap, Phys. Rev. A 74, 013409 (2006).
* Note [4] The connection of this resolution to the detection fidelity can be directly extracted from the histogram, shown in figure 6.
* Muessel _et al._ [2014] W. Muessel, H. Strobel, D. Linnemann, D. Hume, and M. Oberthaler, Scalable Spin Squeezing for Quantum-Enhanced Magnetometry with Bose-Einstein Condensates, Physical Review Letters 113, 103004 (2014).
* Bouton _et al._ [2020b] Q. Bouton, J. Nettersheim, S. Burgardt, D. Adam, E. Lutz, and A. Widera, An endoreversible quantum heat engine driven by atomic collisions (2020b), arXiv:2009.10946 [quant-ph] .
* Esposito _et al._ [2009] M. Esposito, U. Harbola, and S. Mukamel, Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems, Rev. Mod. Phys. 81, 1665 (2009).
* Note [5] These values are used in section III in the analysis of the variance as a function of the mean camera counts $\overline{N}_{\text{c}}$.
* Roux _et al._ [2019] K. Roux, B. Cilenti, V. Helson, H. Konishi, and J.-P. Brantut, Compact bulk-machined electromagnets for quantum gas experiments, SciPost Physics 6, 048 (2019).
* Castilho _et al._ [2019] P. C. M. Castilho, E. Pedrozo-Peñafiel, E. M. Gutierrez, P. L. Mazo, G. Roati, K. M. Farias, and V. S. Bagnato, A compact experimental machine for studying tunable bose-bose superfluid mixtures, Laser Physics Letters 16, 035501 (2019).
* Barredo _et al._ [2016] D. Barredo, S. de Léséleuc, V. Lienhard, T. Lahaye, and A. Browaeys, An atom-by-atom assembler of defect-free arbitrary two-dimensional atomic arrays, Science 354, 1021 (2016).
* Endres _et al._ [2016] M. Endres, H. Bernien, A. Keesling, H. Levine, E. R. Anschuetz, A. Krajenbrink, C. Senko, V. Vuletic, M. Greiner, and M. D. Lukin, Atom-by-atom assembly of defect-free one-dimensional cold atom arrays, Science 354, 1024 (2016).
* Covey _et al._ [2019] J. P. Covey, I. S. Madjarov, A. Cooper, and M. Endres, 2000-Times Repeated Imaging of Strontium Atoms in Clock-Magic Tweezer Arrays, Physical Review Letters 122, 173201 (2019).
* Bergschneider _et al._ [2018] A. Bergschneider, V. M. Klinkhamer, J. H. Becher, R. Klemt, G. Zürn, P. M. Preiss, and S. Jochim, Spin-resolved single-atom imaging of $^{6}$Li in free space, Physical Review A 97, 063613 (2018).
* Bergschneider _et al._ [2019] A. Bergschneider, V. M. Klinkhamer, J. H. Becher, R. Klemt, L. Palm, G. Zürn, S. Jochim, and P. M. Preiss, Experimental characterization of two-particle entanglement through position and momentum correlations, Nature Physics 15, 640 (2019).
* Mehboudi _et al._ [2019] M. Mehboudi, A. Lampo, C. Charalambous, L. A. Correa, M. A. García-March, and M. Lewenstein, Using polarons for sub-nk quantum nondemolition thermometry in a bose-einstein condensate, Phys. Rev. Lett. 122, 030403 (2019).
Centralizers of Rank One in the First Weyl Algebra
Leonid MAKAR-LIMANOV ab
a) Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
b) Department of Mathematics & Computer Science, The Weizmann Institute of
Science, Rehovot 76100, Israel
[email protected]
Received January 07, 2021, in final form May 12, 2021; Published online May
19, 2021
Centralizers of rank one in the first Weyl algebra have genus zero.
Weyl algebra; centralizers
16S32
## 1 Introduction
Take an element $a$ of the first Weyl algebra $A_{1}$. The rank of the
centralizer $C(a)$ of this element is the greatest common divisor of the
orders of the elements of $C(a)$ (orders as differential operators).
This note contains a proof of the following
###### Theorem 1.1.
If the centralizer $C(a)$ of an element $a\in A_{1}\setminus K$, where $A_{1}$
is the first Weyl algebra defined over a field $K$ of characteristic zero, has
rank $1$, then $C(a)$ can be embedded into a polynomial ring $K[z]$.
The classical works of Burchnall and Chaundy, in which the systematic study of
commuting differential operators was initiated, are also devoted primarily to
the case of rank $1$, but the coefficients of the operators they considered
are analytic functions. Burchnall and Chaundy treated only monic differential
operators, which does not restrict generality when the coefficients are
analytic functions. The situation is completely different when the
coefficients are polynomials.
## 2 First Weyl algebra $\boldsymbol{A_{1}}$ and its skew field of fractions
$\boldsymbol{D_{1}}$
Before we proceed with a proof, here is a short refresher on the first Weyl
algebra.
###### Definition 2.1.
The first Weyl algebra $A_{1}$ is an algebra over a field $K$ generated by two
elements (denoted here by $x$ and $\partial$) which satisfy a relation
$\partial x-x\partial=1$.
When the characteristic of $K$ is zero, $A_{1}$ has a natural representation on
the ring of polynomials $K[x]$ by the operator of multiplication by $x$ and the
derivative $\partial$ with respect to $x$. Hence the elements of the Weyl algebra
can be thought of as differential operators with polynomial coefficients. They
can be written as ordinary polynomials
$a=\sum c_{i,j}x^{i}\partial^{j},\qquad c_{i,j}\in K$
with ordinary addition but a more complicated multiplication.
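The normal-form multiplication is easy to experiment with, since only the relation $\partial x-x\partial=1$ is involved. The following minimal sketch (the dict encoding and the name `weyl_mul` are our illustration, not the paper's notation) multiplies elements written as $\sum c_{i,j}x^{i}\partial^{j}$:

```python
from math import comb, perm

def weyl_mul(p, q):
    """Multiply two elements of A_1 written in normal form.

    An element sum c_ij x^i d^j is encoded as {(i, j): c_ij}.
    Moving d^b past x^c uses d^b x^c = sum_k C(b,k) c!/(c-k)! x^(c-k) d^(b-k),
    which follows from the defining relation d x - x d = 1.
    """
    out = {}
    for (a, b), cu in p.items():
        for (c, d), cv in q.items():
            for k in range(min(b, c) + 1):
                key = (a + c - k, b + d - k)
                out[key] = out.get(key, 0) + cu * cv * comb(b, k) * perm(c, k)
    return {mono: coef for mono, coef in out.items() if coef != 0}

x = {(1, 0): 1}   # the generator x
d = {(0, 1): 1}   # the generator d (written "partial" in the text)

# the defining relation: d*x - x*d = 1
dx, xd = weyl_mul(d, x), weyl_mul(x, d)
comm = {m: dx.get(m, 0) - xd.get(m, 0) for m in set(dx) | set(xd)}
comm = {m: c for m, c in comm.items() if c != 0}
print(comm)  # {(0, 0): 1}
```

The "more complicated multiplication" is thus entirely captured by the single reordering rule in the docstring.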
The algebra $A_{1}$ is rather small: its Gelfand–Kirillov dimension is $2$, hence
it is a two-sided Ore ring. Because of that it can be embedded in a skew field
$D_{1}$. A detailed discussion of skew fields related to Weyl algebras and
their skew fields of fractions, as well as a definition of the Gelfand–Kirillov
dimension, can be found in [11].
If the characteristic of $K$ is zero, then the centralizer $C(a)$ of any element
$a\in A_{1}\setminus K$ is a commutative subalgebra of $A_{1}$ of
transcendence degree one. This theorem, which was first proved by Issai Schur
in 1904 (see [37]) and later by Shimshon Amitsur by purely algebraic methods (see
[1]), has a somewhat entertaining history, which is described in [17].
###### Definition 2.2.
The rank of a centralizer is the greatest common divisor of the orders of the
elements of $C(a)$ considered as differential operators, i.e., of the degrees of
the elements of $C(a)$ relative to $\partial$.
## 3 Leading forms of elements of $\boldsymbol{A_{1}}$
Given $\rho,\sigma\in\mathbb{Z}$ it is possible to define a weight degree
function $w$ on $A_{1}$ by
$\displaystyle w(x)=\rho,\qquad w(\partial)=\sigma,\qquad
w\big{(}x^{i}\partial^{j}\big{)}=\rho i+\sigma j,$
$\displaystyle w(a)=\max\big{\\{}w\big{(}x^{i}\partial^{j}\big{)}\mid
c_{i,j}\neq 0\big{\\}}\qquad\text{for}\quad a=\sum c_{i,j}x^{i}\partial^{j}.$
###### Definition 3.1.
The leading form $\bar{a}$ of $a$ is
$\bar{a}=\sum c_{i,j}x^{i}\partial^{j}\mid
w\big{(}x^{i}\partial^{j}\big{)}=w(a).$
One of the nice properties of $A_{1}$ which was used by Dixmier in his seminal
research of the first Weyl algebra (see [8], Lemma 2.7) is the following
property of the leading forms of elements of $A_{1}$:
* •
if $\rho+\sigma>0$ then $\overline{[a,b]}=\big{\\{}\bar{a},\bar{b}\big{\\}}$
for $a,b\in A_{1}$, where $[a,b]=ab-ba$ and
$\big{\\{}\bar{a},\bar{b}\big{\\}}=\bar{a}_{\partial}\bar{b}_{x}-\bar{a}_{x}\bar{b}_{\partial}$
is the standard Poisson bracket of $\bar{a}$, $\bar{b}$ as commutative
polynomials ($\bar{a}_{\partial}$ etc. are the corresponding partial
derivatives), provided $\\{\bar{a},\bar{b}\\}\neq 0$;
* •
if $\big{\\{}\bar{a},\bar{b}\big{\\}}=0$ and $w(a)\neq 0$ then $\bar{b}$ is
proportional over $K$ to a fractional power of $\bar{a}$.
The main ingredient of the considerations below is this property of the
leading forms.
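On a small example one can watch the first property, $\overline{[a,b]}=\big\{\bar{a},\bar{b}\big\}$, in action; the sketch below (our encoding, with sympy assumed for the commutative Poisson bracket) takes $a=\partial^{2}$, $b=x^{3}$ and $\rho=\sigma=1$:

```python
from math import comb, perm
import sympy as sp

def weyl_mul(p, q):
    # normal-form product in A_1 ({(i, j): c} stands for sum c x^i d^j)
    out = {}
    for (a, b), cu in p.items():
        for (c, d), cv in q.items():
            for k in range(min(b, c) + 1):
                key = (a + c - k, b + d - k)
                out[key] = out.get(key, 0) + cu * cv * comb(b, k) * perm(c, k)
    return {m: c for m, c in out.items() if c != 0}

def commutator(p, q):
    pq, qp = weyl_mul(p, q), weyl_mul(q, p)
    out = {m: pq.get(m, 0) - qp.get(m, 0) for m in set(pq) | set(qp)}
    return {m: c for m, c in out.items() if c != 0}

def leading_form(p, rho, sigma):
    w = max(rho * i + sigma * j for (i, j) in p)
    return {(i, j): c for (i, j), c in p.items() if rho * i + sigma * j == w}

X, P = sp.symbols('X P')   # commutative stand-ins for x and partial

def as_poly(p):
    return sum(c * X**i * P**j for (i, j), c in p.items())

a = {(0, 2): 1}            # a = d^2
b = {(3, 0): 1}            # b = x^3
rho = sigma = 1            # rho + sigma > 0, as required

lead_comm = as_poly(leading_form(commutator(a, b), rho, sigma))
abar = as_poly(leading_form(a, rho, sigma))
bbar = as_poly(leading_form(b, rho, sigma))
poisson = sp.diff(abar, P) * sp.diff(bbar, X) - sp.diff(abar, X) * sp.diff(bbar, P)
print(sp.expand(lead_comm - poisson))  # 0
```

Here $[\partial^{2},x^{3}]=6x^{2}\partial+6x$, whose leading form $6x^{2}\partial$ agrees with the Poisson bracket of $\partial^{2}$ and $x^{3}$.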
To make the considerations clearer the reader may use the Newton polygons of
elements of $A_{1}$. The Newton polygon of $a\in A_{1}$ is the convex hull of
those points $(i,j)$ on the plane for which $c_{i,j}\neq 0$. The Newton
polygons of elements of $A_{1}$ are less canonical than the Newton polygons of
polynomials in two variables because they depend on the way one chooses to
write the elements of $A_{1}$, but only those edges which are independent of the
choice will be used.
## 4 Proof of the theorem
### Case 1: $\boldsymbol{\displaystyle
a=\partial^{n}+\sum\limits_{i=1}^{n}a_{i}(x)\partial^{n-i}}$
If $a\in K[\partial]$ then $C(a)=K[\partial]$, a ring of polynomials in one
variable. Otherwise consider the leading form $\alpha$ of $a$ which contains
$\partial^{n}$ and is not a monomial. This form has a non-zero weight, and the
corresponding $\rho$, $\sigma$ satisfy the conditions of Dixmier's lemma (both
$\rho$ and $\sigma$ are positive).
The leading forms of the elements from $C(a)$ are Poisson commutative with
$\alpha$ since these elements commute with $a$. Therefore they are
proportional over $K$ to the fractional powers of $\alpha$ (as a commutative
polynomial). Because the rank of $C(a)$ is $1$ we should have
$\alpha=c\big{(}\partial+c_{1}x^{k}\big{)}^{n}$.
A mapping $\phi$ of $A_{1}$ to $A_{1}$ defined by
$x\rightarrow x,\qquad\partial\rightarrow\partial-c_{1}x^{k}$
is an automorphism of $A_{1}$. It is easy to see that the Newton polygon of
$\phi(a)$ is contained in the Newton polygon of $a$ and has a smaller area.
Hence there exists an automorphism
$\psi\colon\quad x\rightarrow x,\qquad\partial\rightarrow\partial+p(x)$
such that $\psi(a)\in K[\partial]$. Therefore
$C(a)=K\big{[}\psi^{-1}(\partial)\big{]}$, a polynomial ring in one variable.
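That maps of this shape are indeed automorphisms can be checked by verifying that the images of $x$ and $\partial$ still satisfy the defining relation; a quick symbolic sketch (the sample values $c_{1}=3$, $k=2$ are our choice, sympy assumed):

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
c1, k = 3, 2                      # sample parameters for the automorphism

# image of partial under phi: partial -> partial - c1 x^k,
# realized as an operator acting on functions of x
Phi = lambda v: sp.diff(v, x) - c1 * x**k * v

# the pair (x, Phi) still satisfies the Weyl relation [Phi, x] = 1
assert sp.simplify(Phi(x * u) - x * Phi(u) - u) == 0
print("phi preserves the relation  partial x - x partial = 1")
```

Since $x^{k}$ commutes with $x$, the extra term drops out of the commutator, which is why any $\partial\rightarrow\partial+p(x)$ works.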
### Case 2: $\boldsymbol{\displaystyle
a=x^{m}\partial^{n}+\sum\limits_{i=1}^{n}a_{i}(x)\partial^{n-i},\ m>0}$
As above, the leading forms of elements of $C(a)$ are proportional to the
fractional powers of the leading form $\alpha$ of $a$ (as a commutative
polynomial) as long as $\alpha$ is the leading form of $a$ relative to weights
$\rho$, $\sigma$ of $x$ and $\partial$, provided $\rho+\sigma>0$ and the weight
of $\alpha$ is not zero. Because of that and since the rank is assumed to be
one, $n$ divides $m$.
The Newton polygon ${\mathcal{N}}(a)$ of $a$ has the vertex $(m,n)$. If we
picture ${\mathcal{N}}(a)$ on a plane where the $x$ axis is horizontal and the
$\partial$ axis is vertical then $(m,n)$ belongs to two edges: the left and the right.
It is also possible that these edges coincide, or that ${\mathcal{N}}(a)$
consists just of the vertex $(m,n)$.
If the left edge exists, i.e., is not just the vertex $(m,n)$, then the ray
with the vertex $(m,n)$ containing this edge cannot intersect the $\partial$ axis
above the origin. Indeed, assume that this is the case and the point of
intersection is $(0,\mu)$, where $\mu$ is a positive rational number, which
must be smaller than $n$.
If we take weights $\rho=\mu-n$, $\sigma=m$ then $\rho+\sigma=\mu-n+m>0$ since
$m\geq n$, and we can apply Dixmier's lemma to the corresponding leading
form of $a$. But then
$\bar{a}=\big{(}x^{d}\partial+cx^{s}\big{)}^{n},\qquad\text{where}\quad
s=\frac{mn}{\mu-n},$
which is impossible since $s<0$.
Therefore $a$ has a non-trivial leading form of zero weight relative to the
weights $\rho=-1$, $\sigma=d$, where $d=\frac{m}{n}$. This form can be the
monomial $x^{m}\partial^{n}$, or a polynomial in $x^{d}\partial$.
###### Lemma 4.1.
If $a$ has the leading form of weight zero relative to the weights
$\rho<0<\sigma,\qquad\rho+\sigma\geq 0$
then $C(a)$ is a subring of a ring of polynomials in one variable. $($Here the
rank of $C(a)$ is not essential.$)$
###### Proof.
Any $b\in C(a)$ has a non-zero leading form $\bar{b}$ of weight zero (relative
to $\rho$, $\sigma$). Indeed, if $\rho+\sigma>0$ and $w(b)\neq 0$ then
$\big{\\{}\bar{a},\bar{b}\big{\\}}\neq 0$ by Dixmier's lemma.
If $\rho+\sigma=0$ then $\bar{a}\in K[x\partial]$ and only elements of
$K[x\partial]$ commute with it (see the remark below).
Hence the map $b\rightarrow\bar{b}$ is an isomorphism onto its image. The algebra
generated by all $\bar{b}$ is a subalgebra of $K[x^{\sigma}\partial^{-\rho}]$
if we assume that $\rho,\sigma\in\mathbb{Z}$ are relatively prime. ∎
###### Remark 4.2.
The equality
$x^{i}\partial^{i}=(t-i+1)(t-i+2)\cdots(t-1)t,$
where $t=x\partial$ is the Euler operator, is easy to check since
$xt=xx\partial=x(\partial x-1)=tx-x=(t-1)x$
and thus $xp(t)\partial=p(t-1)t$ (see similar computations in [4]).
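Both relations can be confirmed by letting the operators act on monomials, where $t\,x^{k}=k\,x^{k}$; a small check (our encoding, sympy assumed):

```python
import sympy as sp
from math import prod

x = sp.symbols('x')

# x^i d^i acts on x^k as multiplication by k(k-1)...(k-i+1),
# which is exactly (t-i+1)...(t-1)t evaluated at t = k
for k in range(6):
    for i in range(k + 1):
        lhs = sp.expand(x**i * sp.diff(x**k, x, i))
        falling = prod(k - j for j in range(i))   # (k-i+1)...(k-1)k
        assert lhs == falling * x**k

# the commutation rule x t = (t-1) x, checked on monomials (t acts as k)
t = lambda v: x * sp.diff(v, x)
for k in range(6):
    assert sp.expand(x * t(x**k)) == sp.expand(t(x**(k + 1)) - x**(k + 1))
print("Euler-operator identities verified on monomials")
```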
### Case 3: $\boldsymbol{\displaystyle
a=a_{0}(x)\partial^{n}+\sum\limits_{i=1}^{n}a_{i}(x)\partial^{n-i}}$
In this case $a_{0}=\alpha^{n}$, $\alpha\in K[x]$. We may assume that
$\alpha\not\in K$ and that $\alpha(0)=0$ (applying an automorphism
$x\rightarrow x+c$, $\partial\rightarrow\partial$ if necessary). We may also
assume that the origin is a vertex of ${\mathcal{N}}(a)$ since we can replace
$a$ by $a+c$, where $c$ is any element of $K$. Then ${\mathcal{N}}(a)$ has the
horizontal edge with the right vertex $(m,n)$ and the left vertex
$(m^{\prime},n)$ where $m^{\prime}$ is divisible by $n$. As above, the edge
with vertices $(m^{\prime},n)$ and $(0,0)$ corresponds to the leading form of
$a$ of zero weight and Lemma 4.1 shows that $C(a)$ is isomorphic to a subring
of $K[x^{d^{\prime}}\partial]$, where $d^{\prime}=\frac{m^{\prime}}{n}$.
This finishes the proof of the theorem.
Since the proof of the theorem turned out to be rather simple and short, we
can complement it by an attempt to describe the rank one centralizers more
precisely. In the first case this is already done: the centralizer is isomorphic
to $K[z]$, where $z=\partial+p(x)$ for some $p(x)\in K[x]$.
It would be interesting to describe $a$ for which $C(a)$ is not isomorphic to
a polynomial ring. The second case described above provides us with examples
of this phenomenon.
## 5 Centralizers in Case 2
Let us call $(m,n)$ the leading vertex of ${\mathcal{N}}(a)$ and the edges
containing this vertex the leading edges. If the extension of the right
leading edge intersects the $x$ axis at the point $(\nu,0)$, where $\nu>m-n$,
and we take $\rho=n$, $\sigma=\nu-m$, then $\rho+\sigma=n+\nu-m>0$ and by
Dixmier's lemma the leading form $\bar{a}$ which corresponds to this weight is
$\big{(}x^{d}\partial+cx^{k}\big{)}^{n}$, where $k\geq d$. Then, similarly to
Case 1, we will make an automorphism
$x\to x,\qquad\partial\to\partial-cx^{k-d},$
which will collapse the right leading edge to the leading vertex.
After several steps like that we will obtain $\phi(a)$, where $\phi$ is an
automorphism of $A_{1}$, such that the right leading edge of
${\mathcal{N}}(\phi(a))$ is parallel to the bisectrix of the first quadrant.
For this edge $\rho+\sigma=0$ and we cannot apply Dixmier's lemma to the
corresponding leading form.
Since $C(a)$ and $C(\phi(a))$ are isomorphic we will assume that the right
leading edge of ${\mathcal{N}}(a)$ is parallel to the bisectrix.
If the left leading edge of ${\mathcal{N}}(a)$ is not parallel to the
bisectrix we can consider the centralizer of $a$ as a subalgebra of
$K\big{[}\partial,x,x^{-1}\big{]}$ and proceed with automorphisms
$x\rightarrow x,\qquad\partial\rightarrow\partial-c_{1}x^{k-d},$
where $k-d<-1$, since Dixmier's lemma will be applicable to the
corresponding leading forms.
Hence there exists an automorphism $\psi$,
$x\rightarrow x,\qquad\partial\rightarrow\partial+q(x),$
of $K[\partial,x,x^{-1}]$ such that $\psi(a)=x^{m-n}p(t)$, where $t=x\partial$
(here $q(x)$ is a Laurent polynomial while $p(t)$ is a polynomial).
We see that centralizers of elements with the leading vertex $(m,n)$ are
isomorphic to centralizers of elements $x^{m-n}p(t)$, $p(t)\in K[t]$,
$\deg_{t}(p(t))=n$. If $a=x^{m-n}p(t)$ and $m-n=0$ then $C(a)=K[t]$. Assume
now that $m>n$.
If $b\in C(a)$ then we can present $b$ as the sum of forms, homogeneous
relative to the weight $w$ given by
$w(x)=1,\qquad w(\partial)=-1\colon\quad b=\sum_{i}x^{i}b_{i}(t).$
Since
$[a,b]=\sum_{i}\big{[}a,x^{i}b_{i}(t)\big{]}=\sum_{i}x^{m-n+i}\big{(}p(t-i)b_{i}(t)-b_{i}(t-m+n)p(t)\big{)}=0$
all $x^{i}b_{i}(t)\in C(a)$. Hence $C(a)$ is a linear span of elements,
homogeneous relative to the weight $w$.
###### Lemma 5.1.
If $b\in C(a)$ is a $w$-homogeneous element then $b^{\nu}=ca^{\mu}$ for some
relatively prime integers $\mu$ and $\nu$, and $c\in K$.
###### Proof.
The leading vertex of $b$ is $\lambda(m,n)$, where $\lambda=\frac{\mu}{\nu}$,
$\mu,\nu\in\mathbb{Z}$, $(\mu,\nu)=1$, since we can apply Dixmier's lemma
to the leading forms of $a$ and $b$ relative to the weight $w_{1}(x)=1$,
$w_{1}(\partial)=1$. Therefore $b^{\nu}$ and $a^{\mu}$ have the same leading
vertex $\mu(m,n)$ and $\deg_{x}(b^{\nu}-ca^{\mu})<\deg_{x}(b^{\nu})$ with the
appropriate choice of $c\in K$. If $b_{1}=b^{\nu}-ca^{\mu}\neq 0$ then $b_{1}$
is a homogeneous element of $C(a)$ and its leading vertex must be proportional
to the leading vertex of $a$. This is impossible since $w(b_{1})=\mu
w(a)=\mu(m-n)$: if $\xi(m,n)$ is the leading vertex of $b_{1}$ then
$w(b_{1})=\xi(m-n)=\mu(m-n)$ and $\xi=\mu$. ∎
Since the rank of $C(a)$ is $1$ we can find two elements
$b_{1}=x^{\beta_{1}}q_{1}(t),\qquad
b_{2}=x^{\beta_{2}}q_{2}(t)\qquad\text{such
that}\quad\deg_{t}(b_{2})=\deg_{t}(b_{1})+1.$
Then $b=b_{2}b_{1}^{-1}$ belongs to $D_{1}$ (the skew field of fractions of
$A_{1}$), and commutes with $a$. Using the relation $xt=(t-1)x$ we can write
that
$b=x^{\beta_{2}}q_{2}(t)\big{(}x^{\beta_{1}}q_{1}(t)\big{)}^{-1}=x^{\beta_{2}}q_{2}(t)q_{1}(t)^{-1}x^{-\beta_{1}}=x^{\beta_{2}-\beta_{1}}r(t),$
where $r(t)\in K(t)$.
The leading vertex of $b$ can be defined as the difference of the leading
vertices of $b_{2}$ and $b_{1}$ and is proportional to the leading vertex of
$a$. Since $\deg_{t}(r)=\deg_{t}(b_{2})-\deg_{t}(b_{1})=1$ this vertex in
coordinates $x,t$ is $(d-1,1)$. (Recall that $d=\frac{m}{n}$.)
If $r$ is a polynomial then $C(a)=K\big{[}x^{d-1}r\big{]}$; if $d=1$ then
$C(a)=K[t]$; if $r$ is not a polynomial and $d>1$ then some powers of
$x^{d-1}r$ are polynomials: say, $a=cb^{n}$ with $c\in K$, because the considerations
of Lemma 5.1 are applicable to $w$-homogeneous elements of $D_{1}$ commuting
with $a$.
In the last case $r(t)\in K(t)$ but $r(t)r(t+d-1)\cdots r(t+(k-1)(d-1))\in
K[t]$ (observe that $tx=x(t+1)$). We can reduce this to $r(t)r(t+1)\cdots
r(t+k-1)\in K[t]$ by rescaling $t$ and $r$.
By shifting $t$ if necessary we may assume that one of the roots of $r(t)$ is
$0$ and represent $r$ as a product $r_{0}r_{1}$, where all roots and poles of
$r_{0}$ are in $\mathbb{Z}$ and all roots and poles of $r_{1}$ are not in
$\mathbb{Z}$. It is clear that
$r_{0}(t)r_{0}(t+1)\cdots r_{0}(t+k-1)\in K[t]\qquad\text{and}\qquad
r_{1}(t)r_{1}(t+1)\cdots r_{1}(t+k-1)\in K[t].$
Since $\deg(r)=1$ and $\deg(r_{i})\geq 0$ (because $k\deg(r_{i})\geq 0$), the
degree of one of the $r_{i}$ is equal to zero and $r_{i}(t)r_{i}(t+1)\cdots
r_{i}(t+k-1)\in K$ for this $r_{i}$. But then $r_{i}(t)=r_{i}(t+k)$, which is
impossible for a non-constant rational function. Since $r_{0}(0)=0$ and $r\neq
0$ we see that $r_{1}$ is a constant and all roots and poles of $r$ are in
$\mathbb{Z}$.
We can assume now that $0$ is the largest root of $r$ and write
$\displaystyle r=ts(t),\qquad\text{where}\quad
s(t)=\frac{\prod_{i=1}^{p}(t+\lambda_{i})}{\prod_{i=1}^{p}(t+\mu_{i})}\in
K(t)\setminus K,$
$\displaystyle\lambda_{i}\in\mathbb{Z},\qquad\mu_{i}\in\mathbb{Z},\qquad
0\leq\lambda_{1}\leq\lambda_{2}\leq\dots\leq\lambda_{p},\qquad\mu_{1}\leq\mu_{2}\leq\cdots\leq\mu_{p}.$
If $\mu_{1}<0$ then $r(t)r(t+1)\cdots r(t+k-1)$ would have a pole at
$t=-\mu_{1}$. Hence $\mu_{1}>0$ and all poles of $s(t)$ are negative integers
while all zeros of $s(t)$ are non-positive integers.
A fraction $\frac{t+\lambda_{i}}{t+\mu_{i}}$ can be presented as
$\frac{f_{i}(t)}{f_{i}(t+1)}$ if $\lambda_{i}<\mu_{i}$ or as
$\frac{f_{i}(t+1)}{f_{i}(t)}$ if $\lambda_{i}>\mu_{i}$: indeed,
$\frac{t+d}{t}=\frac{(t+1)(t+2)\cdots(t+d)}{t(t+1)\cdots(t+d-1)}\qquad\text{if}\quad
d>0,$
take the reciprocal fraction if $d<0$. Because of that $s(t)$ can be written
as
$\frac{s_{1}(t)s_{2}(t+1)}{s_{1}(t+1)s_{2}(t)},\qquad s_{i}(t)\in K[t].$
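The telescoping identity above is easy to confirm symbolically (a sketch, sympy assumed):

```python
import sympy as sp

t = sp.symbols('t')

# (t+d)/t = f(t+1)/f(t) with f(t) = t(t+1)...(t+d-1), by telescoping
for d in range(1, 6):
    f = sp.Mul(*[t + j for j in range(d)])        # f(t) = t(t+1)...(t+d-1)
    ratio = sp.cancel(f.subs(t, t + 1) / f)
    assert sp.simplify(ratio - (t + d) / t) == 0
print("telescoping representation (t+d)/t = f(t+1)/f(t) verified for d = 1..5")
```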
Write $s_{1}(t)=s_{3}(t)s_{4}(t)$, $s_{2}(t)=s_{4}(t)s_{5}(t)$, where
$s_{4}(t)$ is the greatest common divisor of $s_{1}(t)$ and $s_{2}(t)$. Then
$s(t)=\frac{s_{3}(t)s_{4}(t)s_{4}(t+1)s_{5}(t+1)}{s_{3}(t+1)s_{4}(t+1)s_{4}(t)s_{5}(t)}=\frac{s_{3}(t)s_{5}(t+1)}{s_{3}(t+1)s_{5}(t)}.$
All roots of $s_{3}(t)$ must be non-positive: otherwise the largest positive
root of $s_{3}(t)$ would be a root of $s(t)$ (which does not have positive
roots), since this root could not be canceled by a root of $s_{5}(t)$ or
$s_{3}(t+1)$.
Now,
$q(t)=r(t)r(t+1)\cdots
r(t+k-1)=t(t+1)\cdots(t+k-1)\frac{s_{3}(t)s_{5}(t+k)}{s_{3}(t+k)s_{5}(t)}$
is a polynomial. If $s_{3}(t)\not\in K$ and its smallest root is $i$, where
$i\leq 0$, then the denominator of $q(t)$ has a zero at $i-k$, which is less
than $1-k$ and cannot be canceled by a zero of the numerator since
$s_{3}(t+k)$ and $s_{5}(t+k)$ are relatively prime.
Hence
$s_{3}(t)\in K,\qquad s(t)=\frac{s_{5}(t+1)}{s_{5}(t)},\qquad\text{and}\qquad
q(t)=t(t+1)\cdots(t+k-1)\frac{s_{5}(t+k)}{s_{5}(t)}.$
We can uniquely write
$s_{5}(t)=\prod_{i\in
I}\phi_{k,p_{i}}(t+i),\qquad\text{where}\quad\phi_{k,p}(t)=\prod_{j=0}^{p}(t+jk),$
and all $p_{i}$ are chosen as large as possible. Then
$\frac{s_{5}(t+k)}{s_{5}(t)}=\prod_{i\in
I}\frac{t+i+p_{i}k+k}{t+i}\qquad\text{and}\qquad t+i_{1}\neq
t+i_{2}+p_{i_{2}}k+k$
for all $i_{1},i_{2}\in I$ because of the maximality of $p_{i}$. Hence
$I\subset\\{1,\dots,k-1\\}$ and each $i$ is used at most once.
As we have seen, all roots of $s_{5}(t)$ are of multiplicity $1$ and since
$(xr)^{N}=x^{N}\prod_{i=0}^{N-1}(t+i)\frac{s_{5}(t+N)}{s_{5}(t)}$
the elements $(xr)^{N}\in A_{1}$ for sufficiently large $N$. Therefore the rank
of $C(xr)$ is one. In fact, we see that if $s_{5}$ is any polynomial with only
simple roots then the elements of $A_{1}$ which commute with
$xt\frac{s_{5}(t+1)}{s_{5}(t)}$ form a centralizer of rank one.
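The fact that $(xr)^{N}$ has a polynomial $t$-symbol for large $N$ can be checked directly from the displayed formula for $(xr)^{N}$; below we take the sample choice $s_{5}(t)=t+1$ (a single simple root, our choice; sympy assumed):

```python
import sympy as sp

t = sp.symbols('t')
s5 = t + 1    # sample s5 with one simple root, at t = -1

def symbol_of_power(N):
    # (x r)^N = x^N * prod_{i=0}^{N-1}(t+i) * s5(t+N)/s5(t)
    return sp.cancel(sp.Mul(*[t + i for i in range(N)]) * s5.subs(t, t + N) / s5)

assert not symbol_of_power(1).is_polynomial(t)    # xr itself is not in A_1
for N in range(2, 7):
    assert symbol_of_power(N).is_polynomial(t)    # but (xr)^N is, for N >= 2
print("(xr)^N has a polynomial t-symbol for N = 2..6")
```

For $N=2$, for instance, the symbol $t(t+1)(t+3)/(t+1)$ cancels to the polynomial $t(t+3)$.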
Observe that the rank is not stable under automorphisms: the rank of
$C(\phi(xr))$, where $\phi(x)=x+t^{M}$, $\phi(t)=t$, is $M+1$.
Let us return now to $C(a)$, where $a=x^{m-n}p(t)$, $\deg_{t}(p(t))=n$. In
this case
$\displaystyle b=x^{d-1}t\frac{s_{5}(t+d-1)}{s_{5}(t)},\qquad
b^{n}=x^{m-n}t(t+d-1)\cdots(t+(n-1)(d-1))\frac{s_{5}(t+n(d-1))}{s_{5}(t)}$
and all roots of $s_{5}(t)$ belong to $\\{1-d,2(1-d),\dots,(n-1)(1-d)\\}$.
Additionally we can replace $t$ by $t-c$, $c\in K$.
## 6 Cases 2 and 3
We have described the structure of $C(a)$ when $a=x^{m-n}p(t)$. Are there
substantially different examples of centralizers of rank one which are not
isomorphic to a polynomial ring?
Consider the case of an order $2$ element commuting with an order $3$ element,
which was completely investigated in the work [3] of Burchnall and Chaundy for
analytic coefficients. (To be on more familiar ground, in this section the
field $K$ is the field $\mathbb{C}$ of complex numbers.) They showed (and for
this case it is a straightforward computation) that monic commuting operators
of orders $2$ and $3$ can be reduced to
$A=\partial^{2}-2\psi(x),\qquad
B=\partial^{3}-3\psi(x)\partial-\frac{3}{2}\psi^{\prime}(x),$
where $\psi^{\prime\prime\prime}=12\psi\psi^{\prime}$, i.e.,
$\psi^{\prime\prime}=6\psi^{2}+c_{1}$ and
$(\psi^{\prime})^{2}=4\psi^{3}+c_{1}\psi+c_{2}$ (a Weierstrass function). The
only rational (even algebraic) solution in this case is (up to a substitution)
$\psi=x^{-2}$ when $c_{1}=c_{2}=0$. (If $\psi$ is a rational function then the
curve parameterized by $\psi$, $\psi^{\prime}$ has genus zero, so
$4\psi^{3}+c_{1}\psi+c_{2}=4(\psi-\lambda)^{2}(\psi-\mu)$ and $\psi$ is not an
algebraic function of $x$ if $\lambda\neq\mu$.) The corresponding operator
$A=x^{-2}(t-2)(t+1)=\bigg{(}x^{-1}\frac{t^{2}-1}{t}\bigg{)}^{2}$
is homogeneous.
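For the rational solution $\psi=x^{-2}$ (so $c_{1}=c_{2}=0$) the relations above can be confirmed by letting $A$ and $B$ act on a generic function (a sympy sketch; the operator realization is ours):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
psi = x**-2          # the rational solution, c1 = c2 = 0

A = lambda g: sp.diff(g, x, 2) - 2 * psi * g
B = lambda g: (sp.diff(g, x, 3) - 3 * psi * sp.diff(g, x)
               - sp.Rational(3, 2) * sp.diff(psi, x) * g)

# psi''' = 12 psi psi'  (the condition on psi)
assert sp.simplify(sp.diff(psi, x, 3) - 12 * psi * sp.diff(psi, x)) == 0

# [A, B] = 0 on a generic function
assert sp.simplify(A(B(f)) - B(A(f))) == 0

# A x^k = (k-2)(k+1) x^(k-2), i.e. A = x^-2 (t-2)(t+1) on monomials
for k in range(3, 8):
    assert sp.expand(A(x**k) - (k - 2) * (k + 1) * x**(k - 2)) == 0
print("[A, B] = 0 for psi = x**-2, and A = x**-2 (t-2)(t+1)")
```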
In our case we have
$A=f(x)^{2}\partial^{2}+f_{1}(x)\partial+f_{2}(x)$
since the leading form for $w(x)=0$, $w(\partial)=1$ must be the square of a
polynomial.
Here are the computations for this case:
$\displaystyle
A=(f\partial)^{2}-ff^{\prime}\partial+f_{1}(x)\partial+f_{2}(x)$
$\displaystyle\phantom{A}{}=\bigg{[}f\partial+\frac{1}{2}\bigg{(}\frac{f_{1}}{f}-f^{\prime}\bigg{)}\bigg{]}^{2}-\frac{1}{2}\bigg{(}\frac{f_{1}}{f}-f^{\prime}\bigg{)}^{\prime}f-\frac{1}{4}\bigg{(}\frac{f_{1}}{f}-f^{\prime}\bigg{)}^{2}+f_{2}.$
Denote $f\partial+\frac{1}{2}\big{(}\frac{f_{1}}{f}-f^{\prime}\big{)}$ by $D$.
Then
$A=D^{2}-2\phi(x),\qquad\text{where}\quad\phi=\frac{1}{4}\bigg{(}\frac{f_{1}}{f}-f^{\prime}\bigg{)}^{\prime}f+\frac{1}{8}\bigg{(}\frac{f_{1}}{f}-f^{\prime}\bigg{)}^{2}-\frac{1}{2}f_{2}\in\mathbb{C}(x).$
Analogously to Burchnall and Chaundy, if there is an operator of order $3$
commuting with $A$ then it can be written as
$B=D^{3}-3\phi D-\frac{3}{2}\phi^{\prime}f$
(this follows from [3] but will be clear from the condition $[A,B]=0$ as
well). In order to find an equation for $\phi$ we should compute $[A,B]$.
Observe that
$\displaystyle[D,g(x)]=g^{\prime}f,\qquad\big{[}D^{2},g\big{]}=2g^{\prime}fD+(g^{\prime}f)^{\prime}f,$
$\displaystyle\big{[}D^{3},g\big{]}=3g^{\prime}fD^{2}+3(g^{\prime}f)^{\prime}fD+((g^{\prime}f)^{\prime}f)^{\prime}f.$
Hence
$\displaystyle[A,B]=-3\bigg{[}D^{2},\phi
D+\frac{1}{2}\phi^{\prime}f\bigg{]}+2[D^{3}-3\phi D,\phi]$
$\displaystyle\hphantom{[A,B]}{}=-3\bigg{[}(2\phi^{\prime}fD+(\phi^{\prime}f)^{\prime}f)D+(\phi^{\prime}f)^{\prime}fD+\frac{1}{2}((\phi^{\prime}f)^{\prime}f)^{\prime}f\bigg{]}$
$\displaystyle\hphantom{[A,B]=}{}+2[3\phi^{\prime}fD^{2}+3(\phi^{\prime}f)^{\prime}fD+((\phi^{\prime}f)^{\prime}f)^{\prime}f]-6\phi\phi^{\prime}f$
$\displaystyle\hphantom{[A,B]}{}=(-6\phi^{\prime}f+6\phi^{\prime}f)D^{2}+(-6(\phi^{\prime}f)^{\prime}f+6(\phi^{\prime}f)^{\prime}f)D-\frac{3}{2}((\phi^{\prime}f)^{\prime}f)^{\prime}f+2((\phi^{\prime}f)^{\prime}f)^{\prime}f-6\phi\phi^{\prime}f$
$\displaystyle\hphantom{[A,B]}{}=\frac{1}{2}((\phi^{\prime}f)^{\prime}f)^{\prime}f-6\phi\phi^{\prime}f.$
Therefore
$\displaystyle((\phi^{\prime}f)^{\prime}f)^{\prime}=12\phi\phi^{\prime},\qquad(\phi^{\prime}f)^{\prime}f=6\phi^{2}+c_{1},$
$\displaystyle(\phi^{\prime}f)^{\prime}\phi^{\prime}f=6\phi^{2}\phi^{\prime}+c_{1}\phi^{\prime},\qquad(\phi^{\prime}f)^{2}=4\phi^{3}+2c_{1}\phi+c_{2}$
and we have a parameterization of an elliptic curve. Since
$f,\phi\in\mathbb{C}(x)$ this curve must have genus $0$, i.e.,
$4\phi^{3}+2c_{1}\phi+c_{2}=4(\phi-\lambda)^{2}(\phi-\mu)\qquad\text{and}\qquad(\phi^{\prime}f)^{2}=4(\phi-\lambda)^{2}(\phi-\mu).$
Take $z=\frac{\phi^{\prime}f}{2(\phi-\lambda)}$. Then $\phi-\mu=z^{2}$ and
$\phi^{\prime}f=2z\big{(}z^{2}-\delta^{2}\big{)}$, where
$\delta^{2}=\lambda-\mu$. Hence $\phi^{\prime}=2zz^{\prime}$,
$2zz^{\prime}f=2z\big{(}z^{2}-\delta^{2}\big{)}$ and
$z^{\prime}f=z^{2}-\delta^{2}$.
Assume that $\delta\neq 0$. Since we can re-scale $f$ and $z$ as $f\rightarrow
2\delta f$, $z\rightarrow\delta z$, let us further assume that $\delta^{2}=1$.
Then
$z^{\prime}f=z^{2}-1\qquad\text{and}\qquad\int\frac{{\rm d}z}{z^{2}-1}=\int\frac{{\rm d}x}{f}.$
Recall that $f\in\mathbb{C}[x]$. Since
$2\int\frac{{\rm d}z}{z^{2}-1}=\ln\frac{z-1}{z+1}$
all zeros of $f$ have multiplicity $1$ and
$\int\frac{{\rm d}x}{f}=\ln\bigg{(}\prod_{i}(x-\nu_{i})^{c_{i}}\bigg{)},$
where $\{\nu_{i}\}$ are the roots of $f$ and
$c_{i}=(f^{\prime}(\nu_{i}))^{-1}$. Therefore
$\frac{z-1}{z+1}=c\prod_{i}(x-\nu_{i})^{2c_{i}},\qquad\text{where}\quad c\neq
0\quad\text{and}\quad
z=\frac{1+c\prod_{i}(x-\nu_{i})^{2c_{i}}}{1-c\prod_{i}(x-\nu_{i})^{2c_{i}}}.$
Now it is time to recall that
$\phi=\frac{1}{4}\bigg{(}\frac{f_{1}}{f}-f^{\prime}\bigg{)}^{\prime}f+\frac{1}{8}\bigg{(}\frac{f_{1}}{f}-f^{\prime}\bigg{)}^{2}-\frac{1}{2}f_{2},$
where $f,f_{1},f_{2}\in\mathbb{C}[x]$ and thus
$f^{2}\phi=f^{2}\big{(}z^{2}+\mu\big{)}\in\mathbb{C}[x]$. Because of that
$zf=c_{1}\frac{1+c\prod_{i}(x-\nu_{i})^{2c_{i}}}{1-c\prod_{i}(x-\nu_{i})^{2c_{i}}}\prod_{i}(x-\nu_{i})\in\mathbb{C}[x],$
which is possible only if the rational function
$1-c\prod_{i}(x-\nu_{i})^{2c_{i}}$ doesn’t have zeros.
We can write $\prod_{i}(x-\nu_{i})^{2c_{i}}$ as
$\frac{\prod_{j}(x-\nu_{j})^{2c_{j}}}{\prod_{k}(x-\nu_{k})^{2c_{k}}}$, where
$2c_{j},2c_{k}\in\mathbb{Z}^{+}$. Then
$\prod_{k}(x-\nu_{k})^{2c_{k}}-c\prod_{j}(x-\nu_{j})^{2c_{j}}\in\mathbb{C},$
which is possible only if $c=1$.
Since
$z=\frac{\prod_{k}(x-\nu_{k})^{2c_{k}}+\prod_{j}(x-\nu_{j})^{2c_{j}}}{\prod_{k}(x-\nu_{k})^{2c_{k}}-\prod_{j}(x-\nu_{j})^{2c_{j}}}$
we see that $z\in\mathbb{C}[x]$. So to produce a $2,3$ commuting pair we
should find a polynomial solution to $f=\frac{z^{2}-1}{z^{\prime}}$. If $f$,
$z$ are given then
$A=(f\partial+\psi)^{2}-2\big{(}z^{2}+\mu\big{)},\qquad
B=(f\partial+\psi)^{3}-3\big{(}z^{2}+\mu\big{)}(f\partial+\psi)-3zz^{\prime}f$
is a commuting pair for any $\psi\in\mathbb{C}[x]$ (indeed,
$f\psi\in\mathbb{C}[x]$ and $\psi^{2}+f\psi^{\prime}\in\mathbb{C}[x]$, hence
$\psi\in\mathbb{C}[x]$). The constant $\mu$ equals $-\frac{2}{3}$: we assumed that
$\lambda-\mu=1$, and $2\lambda+\mu=0$ because the equation
$(\phi^{\prime}f)^{2}=4\phi^{3}+2c_{1}\phi+c_{2}$ has no $\phi^{2}$ term.
Here is a series of examples:
$z=1+x^{n},\qquad
f=\frac{x}{n}\big{(}2+x^{n}\big{)},\qquad\phi=\big{(}1+x^{n}\big{)}^{2}-\frac{2}{3}=x^{n}(2+x^{n})+\frac{1}{3},$
which correspond to
$A=\bigg{[}\frac{x}{n}(2+x^{n})\partial+\psi\bigg{]}^{2}-2\bigg{(}x^{n}(2+x^{n})+\frac{1}{3}\bigg{)}.$
Even the simplest one,
$A=[x(2+x)\partial]^{2}-2\bigg{[}x(2+x)+\frac{1}{3}\bigg{]}$
cannot be made homogeneous.
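Commutativity of the pair built from this $A$ can again be confirmed mechanically. Here is a SymPy sketch for the simplest example ($n=1$, $\psi=0$), writing $D=f\partial$ and using $B=D^{3}-3\phi D-3zz^{\prime}f$ as above:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

z = 1 + x
f = x*(2 + x)                    # satisfies z'*f = z**2 - 1
phi = z**2 - sp.Rational(2, 3)   # phi = z^2 + mu with mu = -2/3

def D(g):                        # D = f * d/dx applied to g
    return f*sp.diff(g, x)

def A(g):
    return D(D(g)) - 2*phi*g

def B(g):
    return D(D(D(g))) - 3*phi*D(g) - 3*z*sp.diff(z, x)*f*g

# the pair commutes on a generic test function u(x)
assert sp.expand(A(B(u)) - B(A(u))) == 0
```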
It seems that a complete classification of $(2,3)$ pairs is a daunting task.
Our condition on $z$ is that $z$ assumes values $\pm 1$ when $z^{\prime}=0$.
Let us call such a polynomial admissible. We can look only at reduced monic
polynomials $z(x)=x^{n}+a_{2}x^{n-2}+\cdots$ because a substitution
$x\rightarrow ax+b$ preserves admissibility. Also
$\lambda^{n}z(\lambda^{-1}x)$ preserves admissibility if $\deg(z)=n$ and
$\lambda^{n}=1$.
The examples above cover just the one-value case. Say, an admissible cubic polynomial is
$x^{3}-3\cdot 2^{-\frac{2}{3}}x$. If $z=(x-\nu)^{i}(x+\nu)^{j}+1$ then it is
admissible when
$\nu^{i+j}=(-1)^{i-1}2^{1-i-j}\frac{(i+j)^{i+j}}{i^{i}j^{j}}.$
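For instance, with $i=1$, $j=2$ the displayed condition gives $\nu^{3}=\frac{27}{16}$, and a quick SymPy check (a sketch) confirms that the resulting polynomial takes only the values $\pm 1$ at its critical points:

```python
import sympy as sp

x = sp.symbols('x')
i, j = 1, 2
nu = sp.Rational(27, 16)**sp.Rational(1, 3)   # nu^(i+j) from the displayed condition
z = (x - nu)**i * (x + nu)**j + 1

# critical points of z: here x = -nu and x = nu/3
crit = sp.solve(sp.diff(z, x), x)
vals = {sp.simplify(z.subs(x, c)) for c in crit}
assert vals <= {1, -1}     # z is admissible
```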
If a composition $h(g(x))$ is admissible then $g^{\prime}=0$ and
$h^{\prime}=0$ should imply that $h(g(x))=\pm 1$. Hence $h(x)$ should be an
admissible function. As far as $g$ is concerned $g^{\prime}=0$ should imply
that the value of $g$ belongs to the preimage of $\pm 1$ for $h$ which is less
restrictive if this preimage is large. Because of that it is hard to imagine a
reasonable classification of all admissible polynomials.
On the other hand
$z^{2}\equiv 1\pmod{z^{\prime}}\qquad\text{for}\quad
z=x^{n}+a_{2}x^{n-2}+\cdots+a_{n}$
leads to $n-1$ equations in $n-1$ variables with an apparently finite number of
solutions for each $n$. Say, for $n=4$ all admissible polynomials are
$x^{4}\pm 1;\qquad x^{4}+ax^{2}+\frac{1}{8}a^{2},\quad a^{4}=64;\qquad
x^{4}-3a^{2}x^{2}+2\sqrt{2}a^{3}x+\frac{21}{8}a^{4},\quad 337a^{8}=64.$
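These solutions can be checked mechanically. For example, for $a=2\sqrt{2}$ (so that $a^{4}=64$) SymPy confirms that $z^{2}\equiv 1\pmod{z^{\prime}}$ (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
a = 2*sp.sqrt(2)                            # a**4 == 64
z = x**4 + a*x**2 + sp.Rational(1, 8)*a**2  # the second family above

# admissibility: z^2 - 1 is divisible by z'
assert sp.simplify(sp.rem(z**2 - 1, sp.diff(z, x), x)) == 0
```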
###### Remark 6.1.
The number of admissible polynomials of a given degree is finite. Indeed,
consider the first $n-2$ homogeneous equations on the coefficients
$a_{2},\dots,a_{n}$. They are satisfied if $z^{2}\equiv c\pmod{z^{\prime}}$,
where $c\in\mathbb{C}$. If one of the components of the variety defined by
these equations is more than one-dimensional then (by affine dimension
theorem) its intersection with the hypersurface given by the last homogeneous
equation will be at least one-dimensional while condition $z^{2}\equiv
0\pmod{z^{\prime}}$ is satisfied only by $z=x^{n}$ (recall that we are
considering only reduced monic polynomials).
If $\delta=0$ then $(\phi^{\prime}f)^{2}=4(\phi-\lambda)^{3}$ and
$(\phi-\lambda)^{-1/2}=\pm\int\frac{{\rm d}x}{f}$ is a rational function.
###### Lemma 6.2.
If $f\in\mathbb{C}[x]$ and $\int\frac{{\rm d}x}{f}$ is a rational
function then $f$ is a monomial, i.e., $f=a(x-b)^{d}$. (I was unable to
find a published proof of this observation; the proof below is a result of
discussions with J. Bernstein and A. Volberg.)
###### Proof.
If $g^{\prime}=\frac{1}{f}$ for $g\in\mathbb{C}(x)$ then $g=\frac{h}{f}$,
$h\in\mathbb{C}[x]$ since the poles of $g$ are the zeros of $f$ and if the
multiplicity of a zero of $f$ is $d$ then the corresponding pole of $g$ has
the multiplicity $d-1$. An equality $g^{\prime}=\frac{1}{f}$ can be rewritten
as $h^{\prime}f-hf^{\prime}=f$. If $\deg(h)>1$ then
$\deg(h^{\prime}f)>\deg(f)$. Hence the leading coefficients of polynomials
$h^{\prime}f$ and $hf^{\prime}$ are the same. This is possible only when
$\deg(h)=\deg(f)$. Therefore there exists a $c\in\mathbb{C}$ for which
$\deg(h-cf)<\deg(f)$. Since $(h-cf)^{\prime}f-(h-cf)f^{\prime}=f$ we can
conclude that $\deg(h_{1})=1$ for $h_{1}=h-cf$. Changing the variable we may
assume that $h_{1}=c_{1}x$ and then $c_{1}(f-xf^{\prime})=f$ which is possible
only if $f=ax^{d}$. ∎
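The lemma can be illustrated (not proved) with SymPy: for $f=x^{2}$ the antiderivative of $1/f$ is rational, while an $f$ with two distinct roots forces logarithmic terms:

```python
import sympy as sp

x = sp.symbols('x')

# monomial f = x**2: the antiderivative of 1/f is the rational function -1/x
assert not sp.integrate(1/x**2, x).has(sp.log)

# f with two distinct simple roots: logarithms necessarily appear
assert sp.integrate(1/(x*(x - 1)), x).has(sp.log)
```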
Hence when $\delta=0$ we may assume that $f=x^{d}$. If $d=0$ then this is the
first case and $A$ is a homogeneous operator up to an automorphism. If $d>0$
then this is the second case and $A$ is a homogeneous operator up to an
automorphism of $\mathbb{C}\big{[}x^{-1},x,\partial\big{]}$.
These computations show that describing the structure of rank one centralizers
in $A_{1}$ is quite challenging. Can the ring of regular
functions of a genus zero curve with one place at infinity be realised as a
centralizer of an element of $A_{1}$? Here is a more approachable related
question: is there an element $A\in D_{1}\setminus A_{1}$ for which
$p(A)\in A_{1}$ for a given polynomial $p(x)\in\mathbb{C}[x]$?
## 7 Historical remarks
Apparently the first work devoted to the study of commuting
differential operators is Georg Wallenberg's “Über die
Vertauschbarkeit homogener linearer Differentialausdrücke” (see [40]), in
which he studied the problem of classifying pairs of commuting ordinary
differential operators. He did not work with the Weyl algebra, though:
the differential operators he considered have coefficients which are
“abstract” differentiable functions.
He mentioned that this problem did not seem to have been studied before, even in the
fundamental work of Gaston Floquet, “Sur la théorie des équations
différentielles linéaires” (see [10]). He credited Floquet with the case of two
operators of order one.
Wallenberg started from this point. Then he gave a complete description of
commuting operators $P$ and $Q$ when
$\operatorname{ord}P=\operatorname{ord}Q=2$ and when $\operatorname{ord}P=1$,
$\operatorname{ord}Q=n$. So far everything is easy. Then he studied the case
of $\operatorname{ord}P=2$ and $\operatorname{ord}Q=3$, and noticed that a
Weierstrass elliptic function appears in the coefficients of these operators.
He dealt with a few more examples, such as orders $2$ and $5$, but did not obtain
any general theorems.
Issai Schur read the paper of Wallenberg and in 1904 published the paper “Über
vertauschbare lineare Differentialausdrücke” [37], which was already mentioned. He
proved that centralizers of differential operators with differentiable
coefficients are commutative by introducing pseudo-differential operators
approximately fifty years before the notion appeared under this name.
The first general results toward the classification of commuting pairs of
differential operators which were reported to the London Mathematical Society
on June 8, 1922, were established about 20 years later by Joseph Burchnall and
Theodore Chaundy (see [3]).
Burchnall and Chaundy published two more papers [4] and [5] devoted to this
topic. Curiously enough, they did not know about Wallenberg's paper and
rediscovered his classification of $(2,3)$ commuting pairs, as well as the fact
that the ring of operators commuting with such a pair is isomorphic to the
ring of regular functions of an elliptic curve.
Arguably the most important fact obtained by them is that an algebraic curve
can be related to a pair of commuting operators.
After these works the question about commuting pairs of operators didn’t
attract much attention until the work of Jacques Dixmier [8] which appeared in
1968. He found elements in $A_{1}$ with centralizers which are also isomorphic
to the ring of regular functions of an elliptic curve. Unlike examples by
Wallenberg and Burchnall and Chaundy, where it was a question of rather
straightforward computations, Dixmier’s example required ingenuity.
After another gap of approximately ten years the question on pairs of
commuting operators was raised in the context of solutions of some important
partial differential equations. It seems that Igor Krichever was the first to
write on this topic in [14], which appeared in 1976. He also rediscovered that
an algebraic curve can be associated to a pair of commuting operators (he
attributes the first observation of this kind to A. Shabat) and mentioned that
operators commuting with a given operator commute with each other.
Then Vladimir Drinfeld in [9] gave an algebro-geometric interpretation of
Krichever's results. This approach was further elucidated in a report of David
Mumford [30] (the D. Kajdan mentioned at the end of the report is David Kazhdan).
Later Krichever wrote an important survey [15] devoted to applications of
algebraic geometry to solutions of nonlinear PDEs.
These works were primarily concerned with centralizers of rank one.
In [16] Krichever considered centralizers of an arbitrary rank and proved that
for any centralizer $A$ of a (non-constant) differential operator there exists
a marked algebraic curve $(\gamma,P)$ such that $A$ is isomorphic to a ring of
meromorphic functions on $\gamma$ with poles in $P$. He remarked that $\gamma$
is non-singular for a “general position” centralizer.
Motohico Mulase in [29] generalized the results of Krichever in [14],
Drinfeld, and Mumford to the case of arbitrary rank. His main theorem is
similar to the theorem of Krichever cited above. Apparently he didn’t know
about [16].
It remains to mention the works devoted to the centralizers of elements of the
first Weyl algebra or its skew field of fractions. They primarily provide
constructions of examples of centralizers which correspond to curves of
high genus.
Here is a partial list: [6, 7, 12, 13, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 31, 32, 33, 34, 36, 44, 45].
The interested reader will also benefit from looking at the lectures by Emma
Previato [35] and the paper [43] by George Wilson, as well as the papers
[38, 39, 42] and [2].
Lastly, it seems that the definition of rank of a centralizer as the greatest
common divisor of the orders of its elements first appeared in a work of
Wilson [41]. A similar definition of rank formulated slightly differently can
be found in the work [9] of Drinfeld.
### Acknowledgements
The author is grateful to the Max-Planck-Institut für Mathematik in Bonn, where
he worked on this project in July–August of 2019. He was also supported by a
FAPESP grant awarded by the State of São Paulo, Brazil. The author also
greatly benefited from remarks by the referees.
## References
* [1] Amitsur S.A., Commutative linear differential operators, Pacific J. Math. 8 (1958), 1–10.
* [2] Burban I., Zheglov A., Fourier–Mukai transform on Weierstrass cubics and commuting differential operators, Internat. J. Math. 29 (2018), 1850064, 46 pages, arXiv:1602.08694.
* [3] Burchnall J.L., Chaundy T.W., Commutative ordinary differential operators, Proc. London Math. Soc. 21 (1923), 420–440.
* [4] Burchnall J.L., Chaundy T.W., Commutative ordinary differential operators, Proc. Roy. Soc. London A 118 (1928), 557–583.
* [5] Burchnall J.L., Chaundy T.W., Commutative ordinary differential operators II. The identity $P^{n}=Q^{m}$, Proc. Roy. Soc. London A 134 (1931), 471–485.
* [6] Davletshina V.N., Mironov A.E., On commuting ordinary differential operators with polynomial coefficients corresponding to spectral curves of genus two, Bull. Korean Math. Soc. 54 (2017), 1669–1675, arXiv:1606.01346.
* [7] Dehornoy P., Opérateurs différentiels et courbes elliptiques, Compositio Math. 43 (1981), 71–99.
* [8] Dixmier J., Sur les algèbres de Weyl, Bull. Soc. Math. France 96 (1968), 209–242.
* [9] Drinfel’d V.G., Commutative subrings of certain noncommutative rings, Funct. Anal. Appl. 11 (1977), 9–12.
* [10] Floquet G., Sur la théorie des équations différentielles linéaires, Ann. Sci. École Norm. Sup. (2) 8 (1879), 3–132.
* [11] Gelfand I.M., Kirillov A.A., Sur les corps liés aux algèbres enveloppantes des algèbres de Lie, Inst. Hautes Études Sci. Publ. Math. (1966), 5–19.
* [12] Grinevich P.G., Rational solutions for the equation of commutation of differential operators, Funct. Anal. Appl. 16 (1982), 15–19.
* [13] Grünbaum F.A., Commuting pairs of linear ordinary differential operators of orders four and six, Phys. D 31 (1988), 424–433.
* [14] Krichever I.M., An algebraic-geometric construction of the Zakharov–Shabat equations and their periodic solution, Sov. Math. Dokl. 17 (1976), 394–397.
* [15] Krichever I.M., Methods of algebraic geometry in the theory of non-linear equations, Russian Math. Surveys 32 (1977), no. 6, 185–213.
* [16] Krichever I.M., Commutative rings of ordinary linear differential operators, Funct. Anal. Appl. 12 (1978), 175–185.
* [17] Makar-Limanov L., Centralizers in the quantum plane algebra, in Studies in Lie Theory, Progr. Math., Vol. 243, Birkhäuser Boston, Boston, MA, 2006, 411–416.
* [18] Mironov A.E., A ring of commuting differential operators of rank 2 corresponding to a curve of genus 2, Sb. Math. 195 (2004), 103–114.
* [19] Mironov A.E., Commuting rank 2 differential operators corresponding to a curve of genus 2, Funct. Anal. Appl. 39 (2005), 240–243.
* [20] Mironov A.E., On commuting differential operators of rank 2, Sib. Electr. Math. Rep. 6 (2009), 533–536.
* [21] Mironov A.E., Commuting higher rank ordinary differential operators, in European Congress of Mathematics, Eur. Math. Soc., Zürich, 2013, 459–473, arXiv:1204.2092.
* [22] Mironov A.E., Self-adjoint commuting ordinary differential operators, Invent. Math. 197 (2014), 417–431.
* [23] Mironov A.E., Self-adjoint commuting differential operators of rank two, Russian Math. Surveys 71 (2016), 751–779.
* [24] Mironov A.E., Zheglov A.B., Commuting ordinary differential operators with polynomial coefficients and automorphisms of the first Weyl algebra, Int. Math. Res. Not. 2016 (2016), 2974–2993, arXiv:1503.00485.
* [25] Mokhov O.I., Commuting ordinary differential operators of rank 3 corresponding to an elliptic curve, Russian Math. Surveys 37 (1982), no. 4, 129–130.
* [26] Mokhov O.I., Commuting differential operators of rank 3 and nonlinear equations, Math. USSR-Izv. 53 (1989), 629–655.
* [27] Mokhov O.I., On commutative subalgebras of the Weyl algebra related to commuting operators of arbitrary rank and genus, Math. Notes 94 (2013), 298–300, arXiv:1201.5979.
* [28] Mokhov O.I., Commuting ordinary differential operators of arbitrary genus and arbitrary rank with polynomial coefficients, in Topology, Geometry, Integrable Systems, and Mathematical Physics, Amer. Math. Soc. Transl. Ser. 2, Vol. 234, Amer. Math. Soc., Providence, RI, 2014, 323–336, arXiv:1303.4263.
* [29] Mulase M., Category of vector bundles on algebraic curves and infinite-dimensional Grassmannians, Internat. J. Math. 1 (1990), 293–342.
* [30] Mumford D., An algebro-geometric construction of commuting operators and of solutions to the Toda lattice equation, Korteweg–de Vries equation and related nonlinear equation, in Proceedings of the International Symposium on Algebraic Geometry (Kyoto Univ., Kyoto, 1977), Kinokuniya Book Store, Tokyo, 1978, 115–153.
* [31] Oganesyan V.S., Commuting differential operators of rank 2 and arbitrary genus $g$ with polynomial coefficients, Russian Math. Surveys 70 (2015), 165–167.
* [32] Oganesyan V.S., Commuting differential operators of rank 2 with polynomial coefficients, Funct. Anal. Appl. 50 (2016), 54–61, arXiv:1409.4058.
* [33] Oganesyan V.S., An alternative proof of Mironov’s results on self-adjoint commuting operators of rank 2, Sib. Math. J. 59 (2018), 102–106.
* [34] Oganesyan V.S., Commuting differential operators of rank 2 with rational coefficients, Funct. Anal. Appl. 52 (2018), 203–213, arXiv:1608.05146.
* [35] Previato E., Seventy years of spectral curves: 1923–1993, in Integrable Systems and Quantum Groups (Montecatini Terme, 1993), Lecture Notes in Math., Vol. 1620, Springer, Berlin, 1996, 419–481.
* [36] Previato E., Rueda S.L., Zurro M.A., Commuting ordinary differential operators and the Dixmier test, SIGMA 15 (2019), 101, 23 pages, arXiv:1902.01361.
* [37] Schur I., Über vertauschbare lineare Differentialausdrücke, Sitzungsber. Berl. Math. Ges. (1905), 2–8.
* [38] Segal G., Wilson G., Loop groups and equations of KdV type, Inst. Hautes Études Sci. Publ. Math. (1985), 5–65.
* [39] Verdier J.-L., Équations différentielles algébriques, in Séminaire Bourbaki, 30e année (1977/78), Lecture Notes in Math., Vol. 710, Springer, Berlin, 1979, Exp. No. 512, 101–122.
* [40] Wallenberg G., Über die Vertauschbarkeit homogener linearer Differentialausdrücke, Arch. Math. Phys. 3 (1903), 252–268.
* [41] Wilson G., Algebraic curves and soliton equations, in Geometry Today (Rome, 1984), Progr. Math., Vol. 60, Birkhäuser Boston, Boston, MA, 1985, 303–329.
* [42] Wilson G., Bispectral commutative ordinary differential operators, J. Reine Angew. Math. 442 (1993), 177–204.
* [43] Wilson G., Collisions of Calogero–Moser particles and an adelic Grassmannian (with an appendix by I.G. Macdonald), Invent. Math. 133 (1998), 1–41.
* [44] Zheglov A.B., Mironov A.E., On commuting differential operators with polynomial coefficients corresponding to spectral curves of genus one, Dokl. Math. 91 (2015), 281–282.
* [45] Zheglov A.B., Mironov A.E., Saparbaeva B.T., Commuting Krichever–Novikov differential operators with polynomial coefficients, Sib. Math. J. 57 (2016), 819–823.
LaTeX2ε SVMono Document Class Version 5.x
Reference Guide
for
Monographs
© 2018, Springer Nature
All rights reserved.
## 1 Introduction
This reference guide gives a detailed description of the LaTeX2ε SVMono
document class Version 5.x and its special features designed to facilitate the
preparation of scientific books for Springer Nature. It always comes as part
of the SVMono tool package and should not be used on its own.
The components of the SVMono tool package are:
* •
The Springer LaTeX class `SVMono.cls`, MakeIndex styles svind.ist, svindd.ist,
BibTeX styles spmpsci.bst, spphys.bst, spbasic.bst as well as the templates
with preset class options, packages and coding examples;
* Tip: Copy all these files to your working directory, run LaTeX2ε, BibTeX and MakeIndex (as applicable) and produce your own example *.dvi file; rename the template files as you see fit and use them for your own input.
* •
Author Instructions with style and coding instructions.
* Tip: Follow these instructions to set up your files, to type in your text and to obtain a consistent formal style in line with the Springer Nature layout specifications; use these pages as checklists before you submit your manuscript data.
* •
The Reference Guide describing SVMono features with regards to their
functionality.
* Tip: Use it as a reference if you need to alter or enhance the default settings of the SVMono document class and/or the templates.
The documentation in the Springer SVMono tool package is not intended to be a
general introduction to LaTeX2ε or TeX. For this we refer you to [1–3].
Should we refer in this tool package to standard tools or packages that are
not installed on your system, please consult the Comprehensive TeX Archive
Network (CTAN) at [4–6].
SVMono was derived from the LaTeX2ε book.cls and article.cls.
The main differences from the standard document classes `article.cls` and
`book.cls` are the presence of
* •
multiple class options,
* •
a number of newly built-in environments for individual text structures like
theorems, exercises, lemmas, proofs, etc.,
* •
enhanced environments for the layout of figures and captions, and
* •
new declarations, commands and useful enhancements of standard environments to
facilitate your math and text input and to ensure their output is in line with
the Springer Nature layout standards.
Nevertheless, text, formulae, figures, and tables are typed using the standard
LaTeX2ε commands. The standard sectioning commands are also used.
Always give a `\label` where possible and use `\ref` for cross-referencing.
Such cross-references may then be converted to hyperlinks in any electronic
version of your book.
The `\cite` and `\bibitem` mechanism for bibliographic references is also
obligatory.
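A minimal illustration of this mechanism (the label and citation key below are, of course, only placeholders):

```latex
\section{Results}\label{sec:results}
The data are discussed in Sect.~\ref{sec:results} and in \cite{knuth84}.
...
\begin{thebibliography}{9}
\bibitem{knuth84} Knuth D.E., The \TeX book, Addison-Wesley, Reading, MA, 1984.
\end{thebibliography}
```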
## 2 SVMono Class Features
### 2.1 Initializing the SVMono Class
To use the document class, enter
`\documentclass [`$\langle$options$\rangle$`] {svmono}`
at the beginning of your input.
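A minimal input file might look as follows. This is a sketch only: the class options shown are examples, and the templates shipped with the SVMono tool package should be preferred for real front matter.

```latex
\documentclass[graybox,envcountchap]{svmono}
\begin{document}
\author{A.~Author}
\title{An Example Monograph}
\maketitle
\tableofcontents
\chapter{Introduction}
Body text, typed with standard \LaTeXe{} commands.
\end{document}
```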
### 2.2 SVMono Class Options
Choose from the following list of SVMono class options if you need to alter
the default layout settings of the SVMono document class. Please note that the
optional features should only be chosen if instructed so by the editor of your
book.
Page Style
[norunningheads]
default
twoside, single-spaced output, contributions starting always on a recto page
referee
produces double-spaced output for proofreading
footinfo
generates a footline with name, date, $\ldots$ at the bottom of each page
norunningheads
suppresses any headers and footers
N.B. If you want to use both options, you must type referee before footinfo.
Body Font Size
[11pt, 12pt]
default
10 pt
11pt, 12pt
are ignored
Language for Fixed LaTeX Texts
In the SVMono class we have changed a few standard LaTeX texts (e.g. Figure to
Fig. in figure captions) and assigned names to newly defined theorem-like
environments so that they conform with Springer Nature style requirements.
[francais]
default
English
deutsch
translates fixed LaTeX texts into their German equivalent
francais
same as above for French
Text Style
[graybox]
default
plain text
graybox
automatically activates the packages `color` and `framed`
and places a box with 15 percent gray shade in the background
of the text when you use the SVMono environment
`\begin{svgraybox}...\end{svgraybox}`, see Sects. 2.3, 2.4.
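For instance, with the class option [graybox] selected:

```latex
% requires \documentclass[graybox]{svmono}
\begin{svgraybox}
Text to be highlighted on a 15~percent gray background.
\end{svgraybox}
```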
Equations Style
[vecarrow]
default
centered layout, vectors boldface (math style)
vecphys
produces boldface italic vectors (physics style)
when `\vec`-command is used
vecarrow
depicts vectors with an arrow above when the `\vec` command is used
Numbering and Layout of Headings
[nochapnum]
default
all section headings down to subsubsection level are numbered, second and
subsequent lines in a multiline numbered heading are indented; Paragraph and
Subparagraph headings are displayed but not numbered; figures, tables and
equations are numbered chapterwise, individual theorem-like environments are
counted consecutively throughout the book.
nosecnum
suppresses any section numbering; figures, tables and equations are counted
chapterwise displaying the chapter counter, if applicable.
nochapnum
suppresses the chapter numbering only, subsequent section headings as well as
figures, tables and equations are numbered chapterwise but without chapter
counter.
nonum
suppresses any numbering of any headings; tables, figures, equations are
counted consecutively throughout the book.
`\chapter*`
must not be used since all subsequent numbering will go bananas $\ldots$ (Warning!)
Numbering of Figures, Tables and Equations
[numart]
default
chapter-wise numbering
numart
numbers figures, tables, equations consecutively (not chapterwise) throughout
the whole text, as in the standard article document class
Numbering and Counting of Built-in Theorem-Like Environments
[envcountresetchap]
default
each built-in theorem-like environment gets its own counter without any
chapter or section prefix and is counted consecutively throughout the book
envcountchap
Each built-in environment gets its own counter and is numbered chapterwise. To
be selected as default setting for a book with numbered chapters.
envcountsect
each built-in environment gets its own counter and is numbered sectionwise
envcountsame
all built-in environments follow a single counter without any chapter or
section prefix, and are counted consecutively throughout the book
envcountresetchap
each built-in environment gets its own counter without any chapter or section
prefix but with the counter reset for each chapter
envcountresetsect
each built-in environment gets its own counter without any chapter or section
prefix but with the counter reset for each section
N.B.1 When the option envcountsame is combined with the options envcountresetchap
or envcountresetsect, all predefined environments get the same
counter, but the counter is reset for each chapter or section.
N.B.2 When the option envcountsame is combined with the options envcountchap
or envcountsect all predefined environments get a common counter with a
chapter or section prefix; but the counter is reset for each chapter or
section.
N.B.3 We have designed a new easy-to-use mechanism to define your own
environments.
N.B.4 Be careful not to use layout options that contradict the parameter of
the selected environment option and vice versa. (Warning!)
Use the Springer class option
[nospthms]
nospthms
only if you want to suppress all defined theorem-like environments and use the
theorem environments of original LaTeX package or other theorem packages
instead. (Please check this with your editor.)
References
[chaprefs]
default
the list of references is set as an unnumbered chapter starting on a new recto
page, with automatically correct running heads and an entry in the table of
contents. The list itself is set in small print and numbered with ordinal
numbers.
sectrefs
sets the reference list as an unnumbered section, e.g. at the end of a chapter
natbib
sorts reference entries in the author–year system (make sure that you have the
natbib package by Patrick W. Daly installed; otherwise it can be found at the
Comprehensive TeX Archive Network,
CTAN…tex-archive/macros/latex/contrib/supported/natbib/), see [4–6]
Use the Springer class option
[chaprefs]
oribibl
only if you want to set reference numbers in square brackets without automatic
TOC entry etc., as is the case in the original LaTeX bibliography environment.
But please note that most page layout features are nevertheless adjusted to
Springer Nature requirements. (Please check usage of this option with your
editor.)
### 2.3 Required and Recommended Packages
SVMono document class has been tested with a number of Standard LaTeX tools.
Below we list and comment on a selection of recommended packages for preparing
fully formatted book manuscripts for Springer Nature. If not installed on your
system, the source of all standard LaTeX tools and packages is the
Comprehensive TeX Archive Network (CTAN) at [4–6].
Font Selection
default | Times font family as default text body font together with Helvetica clone as sans serif and Courier as typewriter font.
---|---
newtxtext.sty and newtxmath.sty | Supports roman text font provided by a Times clone, sans serif based on a Helvetica clone, typewriter faces, plus math symbol fonts whose math italic letters are from a Times Italic clone
If the packages ‘newtxtext.sty and newtxmath.sty’ are not already installed
with your LaTeX they can be found at https://ctan.org/tex-archive/fonts/newtx
at the Comprehensive TeX Archive Network (CTAN), see [4–6].
If Times Roman is not available on your system you may revert to CM fonts.
However, the SVMono layout requires font sizes which are not part of the
default set of the computer modern fonts.
[type1cm.sty]
type1cm.sty
The type1cm package enhances this default by enabling scalable versions of the
(Type 1) CM fonts. If not already installed with your LaTeX it can be found at
../tex-archive/macros/latex/contrib/type1cm/ at the Comprehensive TeX Archive
Network (CTAN), see [4–6].
Body Text
When you select the SVMono class option [graybox] the packages framed and
color are required, see Sect. 2.2
[framed.sty]
framed.sty
makes it possible for framed or shaded regions to break across pages.
color.sty
is part of the graphics bundle and makes it possible to select the color and
define the percentage for the background of the box.
Equations
A useful package for subnumbering each line of an equation array can be found
at ../tex-archive/macros/latex/contrib/supported/subeqnarray/ at the
Comprehensive TeX Archive Network (CTAN), see [4–6].
subeqnarray.sty
defines the subeqnarray and subeqnarray* environments, which behave like the
equivalent eqnarray and eqnarray* environments, except that the individual
lines are numbered as 1a, 1b, 1c, etc.
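A minimal sketch of its use (the labels and equation content are illustrative; `\slabel` is the package's command for labelling individual sub-lines):

```latex
% Sketch: each line of the array receives a subnumber (1a, 1b, ...)
\begin{subeqnarray}
  \slabel{eq:motion-a} \dot{x} & = & v \\
  \slabel{eq:motion-b} \dot{v} & = & -\omega^2 x
\end{subeqnarray}
```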
Footnotes
footmisc.sty
used with style option [bottom] places all footnotes at the bottom of the
page.
Figures
graphicx.sty
tool for including graphics files (preferably eps files)
References
default
Reference lists are numbered with the references being cited in the text by
their reference number
natbib.sty
sorts reference entries in the author–year system (among other features). N.B.
This style must be installed when the class option natbib is used, see Sect.
2.2.
cite.sty
generates compressed, sorted lists of numerical citations: e.g. [8,11–16];
preferred style for books published in a print version only
Index
makeidx.sty
provides and interprets the command `\printindex` which “prints” the
externally generated index file *.ind.
multicol.sty
balances out multiple columns on the last page of your subject index, glossary
or the like
N.B. Use the MakeIndex program together with one of the following styles
svind.ist
for English texts
svindd.ist
for German texts
to generate a subject index automatically in accordance with Springer Nature
layout requirements. For a detailed documentation of the program and its usage
we refer you to [1].
### 2.4 SVMono Commands and Environments in Text Mode
Use the environment syntax
`\begin{dedication}`
---
$\langle text\rangle$
`\end{dedication}`
to typeset a dedication or quotation at the very beginning of the book in the
preferred Springer layout.
Use the new commands `\foreword` and `\preface`
to typeset a Foreword or Preface with automatically generated running heads.
Use the new commands `\extrachap{`$\langle heading\rangle$`}`
`\Extrachap{`$\langle heading\rangle$`}`
to typeset an extra unnumbered chapter, in the front or back matter of the
book, with your preferred heading and automatically generated running heads.
`\Extrachap` furthermore generates an automated TOC entry.
Use the new command
`\partbacktext{`$\langle text\rangle$`}`
to typeset a text on the back side of a part title page.
Use the new command
`\chapsubtitle[`$\langle subtitle\rangle$`]`
to typeset a possible subtitle to your chapter title. Beware that this
subtitle is not transferred automatically to the table of contents.
The command must be placed before the `\chapter` command.
Alternatively, use the `\chapter` command to typeset your subtitle together
with the chapter title and separate the two titles by a period or an en-dash.
Use the new command
`\chapauthor[`$\langle name\rangle$`]`
to typeset the author name(s) beneath your chapter title. Beware that the
author name(s) are not transferred automatically to the table of contents.
The command must be placed before the `\chapter` command.
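Taken together, a subtitle and author line could be set as follows (titles and names are placeholders; the bracket syntax follows the descriptions above):

```latex
% Sketch: both commands precede the \chapter command
\chapsubtitle[A Possible Subtitle]
\chapauthor[First Author and Second Author]
\chapter{Chapter Title}
```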
Alternatively, if the book has the character of a contributed volume rather
than that of a monograph, you may want to use a document class whose features
better suit those specific requirements.
Use the new commands
`\chaptermark{}`
---
`\sectionmark{}`
to alter the text of the running heads.
Use the new command `\motto{`$\langle{\it text}\rangle$`}`
to include special text, e.g. mottos, slogans, between the chapter heading and
the actual content of the chapter in the preferred Springer layout.
The argument `{`$\langle{\it text}\rangle$`}` contains the text of your
inclusion. It may not contain any empty lines. To introduce vertical spaces
use `\\[height]`.
If needed, you may indicate an alternative width in the optional argument.
N.B. The command must be placed before the relevant `heading`-command.
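A minimal sketch (the motto text is a placeholder):

```latex
% Sketch: the motto is set between the chapter heading and the chapter text
\motto{Well begun is half done.\\[2pt] (Proverb)}
\chapter{Chapter Title}
```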
Use the new commands
`\abstract{`$\langle{\it text}\rangle$`}`
---
`\abstract*{`$\langle{\it text}\rangle$`}`
to typeset an abstract at the beginning of a chapter.
The text of `\abstract*` will not appear in the printed version of the
book, but will be used for compiling `html` abstracts for the online
publication of the individual chapters at `www.SpringerLink.com`.
N.B. Do not use the standard LaTeX environment
`\begin{abstract}...\end{abstract}` – it will be ignored when used with the
SVMono document class!
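A sketch of both variants at the start of a chapter (the abstract text is a placeholder):

```latex
\chapter{Chapter Title}
% Printed abstract
\abstract{This chapter introduces the basic concepts and notation.}
% Online-only abstract for SpringerLink; not depicted in print
\abstract*{This chapter introduces the basic concepts and notation.}
```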
Use the new commands
`\runinhead[`$\langle{\it title}\rangle$`]`
---
`\subruninhead[`$\langle{\it title}\rangle$`]`
when you want to use unnumbered run-in headings to structure your text.
Use the new environment command `\begin{svgraybox}` $\langle text\rangle$
`\end{svgraybox}`
to typeset complete paragraphs within a box showing a 15 percent gray shade.
N.B. Make sure to select the SVMono class option `[graybox]` in order to have
all the required style packages available, see Sects. 2.2, 2.3.
Use the new environment command
`\begin{petit}`
---
$\langle text\rangle$
`\end{petit}`
to typeset complete paragraphs in small print.
Use the enhanced environment command
`\begin{description}[`$\langle{\it largelabel}\rangle$`]`
---
`\item[`$\langle{\it label1}\rangle$`]` $\langle\textit{text1}\rangle$
`\item[`$\langle{\it label2}\rangle$`]` $\langle\textit{text2}\rangle$
`\end{description}`
for your individual itemized lists.
The new optional parameter `[`$\langle{\it largelabel}\rangle$`]` lets you
specify the largest item label, up to two levels deep, to appear within the
list. The texts of all items are indented by the width of
$\langle largelabel\rangle$ and the item labels are typeset flush left within
this space. Note that the optional parameter works only two levels deep.
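For example, assuming `Maximum` is the widest label in the list:

```latex
% Sketch: all item texts are indented by the width of the largest label
\begin{description}[Maximum]
  \item[Min] The smallest value in the data set.
  \item[Maximum] The largest value in the data set.
\end{description}
```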
Use the commands
`\setitemindent{`$\langle{\it largelabel}\rangle$`}`
---
`\setitemitemindent{`$\langle{\it largelabel}\rangle$`}`
if you need to customize the indention of your “itemized” or “enumerated”
environments.
### 2.5 SVMono Commands in Math Mode
Use the new or enhanced symbol commands provided by the SVMono document class:
`\D` | upright d for differential d
---|---
`\I` | upright i for imaginary unit
`\E` | upright e for exponential function
`\tens` | depicts tensors as sans serif upright
`\vec` | depicts vectors as boldface characters instead of the arrow accent
N.B. By default the SVMono document class depicts Greek letters as italics
because they are mostly used to symbolize variables. However, when used as
operators, abbreviations, physical units, etc. they should be set upright.
All upright upper-case Greek letters have been defined in the SVMono document
class and are taken from the TeX alphabet.
Use the command prefix
`\var...`
with the upper-case name of the Greek letter to set it upright, e.g.
`\varDelta`.
Many upright lower-case Greek letters have been defined in the SVMono document
class and are taken from the PostScript Symbol font.
Use the command prefix
`\u...`
with the lower-case name of the Greek letter to set it upright, e.g. `\umu`.
If you need to define further commands use the syntax below as an example:
`\newcommand{\ualpha}{\allmodesymb{\greeksym}{a}}`
### 2.6 SVMono Theorem-Like Environments
For individual text structures such as theorems, definitions, and examples,
the SVMono document class provides a number of pre-defined environments which
conform with the specific Springer Nature layout requirements.
Use the environment command
`\begin{`$\langle{\it name~{}of~{}environment}\rangle$`}[`$\langle{\it
optional~{}material}\rangle$`]`
---
$\langle{\it text~{}for~{}that~{}environment}\rangle$
`\end{`$\langle{\it name~{}of~{}environment}\rangle$`}`
for the newly defined environments.
Unnumbered environments will be produced by
`claim` and `proof`.
Numbered environments will be produced by
case, conjecture, corollary, definition, example, exercise, lemma, note,
problem, property, proposition, question, remark, solution, and theorem.
The optional argument `[`$\langle{\it optional~{}material}\rangle$`]` lets you
specify additional text which will follow the environment caption and counter.
N.B. We have designed a new easy-to-use mechanism to define your own
environments.
Use the new symbol command `\qed`
to produce an empty square at the end of your proof.
In addition, use the new declaration
`\smartqed`
to move the position of the predefined qed symbol to be flush right (in text
mode). If you want to use this feature throughout your book the declaration
must be set in the preamble, otherwise it should be used individually in the
relevant environment, i.e. proof.
Example
`\begin{proof}`
`\smartqed`
`Text`
`\qed`
`\end{proof}`
Furthermore the functions of the standard `\newtheorem` command have been
enhanced to allow a more flexible font selection. All standard functions
though remain intact (e.g. adding an optional argument specifying additional
text after the environment counter).
Use the mechanism
`\spdefaulttheorem{`$\langle{\it env~{}name}\rangle$`}``{`$\langle
caption\rangle$`}``{`$\langle{\it cap~{}font}\rangle$`}``{`$\langle{\it
body~{}font}\rangle$`}`
to define an environment compliant with the selected class options (see Sect.
2.2) and designed as the predefined theorem-like environments.
The argument `{`$\langle{\it env~{}name}\rangle$`}` specifies the environment
name; `{`$\langle{\it caption}\rangle$`}` specifies the environment’s heading;
`{`$\langle{\it cap~{}font}\rangle$`}` and `{`$\langle{\it
body~{}font}\rangle$`}` specify the font shape of the caption and the text
body.
N.B. If you want to use optional arguments in your definition of a
theorem-like environment, as done in the standard `\newtheorem` command, see
below.
Use the mechanism `\spnewtheorem{`$\langle{\it
env~{}name}\rangle$`}``[`$\langle{\it
numbered~{}like}\rangle$`]``{`$\langle{\it caption}\rangle$`}``{`$\langle{\it
cap~{}font}\rangle$`}``{`$\langle{\it body~{}font}\rangle$`}`
to define an environment that shares its counter with another predefined
environment `[`$\langle{\it numbered~{}like}\rangle$`]`.
The optional argument `[`$\langle{\it numbered~{}like}\rangle$`]` specifies
the environment with which to share the counter.
N.B. If you select the class option “envcountsame” the only valid “numbered
like” argument is `[theorem]`.
Use the defined mechanism
`\spnewtheorem{`$\langle{\it env~{}name}\rangle$`}``{`$\langle{\it
caption}\rangle$`}``[`$\langle{\it within}\rangle$`]``{`$\langle{\it
cap~{}font}\rangle$`}``{`$\langle{\it body~{}font}\rangle$`}`
to define an environment whose counter is prefixed by either the chapter or
section number (use `[chapter]` or `[section]` for `[`$\langle{\it
within}\rangle$`]`).
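For instance, a hypothetical `algorithm` environment numbered within chapters might be defined and used as follows (environment name and fonts are illustrative choices):

```latex
% In the preamble: caption in boldface, body in upright roman
\spnewtheorem{algorithm}{Algorithm}[chapter]{\bfseries}{\rmfamily}
% In the text: yields a heading such as "Algorithm 2.1 (Euclid)"
\begin{algorithm}[Euclid]
Repeatedly replace the larger of two integers by the remainder
of dividing it by the smaller one.
\end{algorithm}
```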
Use the defined mechanism
`\spnewtheorem*{`$\langle{\it env~{}name}\rangle$`}``{`$\langle{\it
caption}\rangle$`}``{`$\langle{\it cap~{}font}\rangle$`}``{`$\langle{\it
body~{}font}\rangle$`}`
to define an unnumbered environment such as the pre-defined unnumbered
environments claim and proof.
Use the defined declaration `\nocaption`
in the argument `{`$\langle{\it caption}\rangle$`}` if you want to skip the
environment caption and use an environment counter only.
Use the defined environment `\begin{theopargself}` `...` `\end{theopargself}`
as a wrapper to any theorem-like environment defined with the mechanism. It
suppresses the brackets of the optional argument specifying additional text
after the environment counter.
### 2.7 SVMono Commands for the Figure and Table Environments
Use the new declaration
`\sidecaption[`$\langle pos\rangle$`]`
to move the figure caption from beneath the figure (default) to the lower
left-hand side of the figure.
The optional parameter `[t]` moves the figure caption to the upper left-hand
side of the figure.
N.B.1 (1) Make sure the declaration `\sidecaption` follows the
`\begin{figure}` command, and (2) remember to use the standard `\caption{}`
command for your caption text.
N.B.2 This declaration works only if the figure width is less than 7.8 cm. The
caption text will be set raggedright if the width of the caption is less than
3.4 cm.
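A minimal sketch (the file name is a placeholder; note the figure width stays below 7.8 cm):

```latex
\begin{figure}
\sidecaption
\includegraphics[width=6cm]{myfigure.eps}
\caption{A caption set at the lower left-hand side of the figure.}
\label{fig:sidecap}
\end{figure}
```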
Use the new declaration
`\samenumber`
within the figure and table environment – directly after the `\begin{`$\langle
environment\rangle$`}` command – to give the caption concerned the same
counter as its predecessor (useful for long tables or figures spanning more
than one page; see also the declaration `\subfigures` below).
To arrange multiple figures in a single environment use the newly defined
commands
`\leftfigure[`$\langle pos\rangle$`]` and `\rightfigure[`$\langle
pos\rangle$`]`
within a `{minipage}{\textwidth}` environment. To allow enough space between
two horizontally arranged figures use `\hspace{\fill}` to separate the
corresponding `\includegraphics{}` commands. The required space between
vertically arranged figures can be controlled with `\\[12pt]`, for example.
The default position of the figures within their predefined space is flush
left. The optional parameter `[c]` centers the figure, whereas `[r]` positions
it flush right – use the optional parameter only if you need to specify a
position other than flush left.
Use the newly defined commands
`\leftcaption{}` and `\rightcaption{}`
outside the `minipage` environment to put two figure captions next to each
other.
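One possible arrangement of two side-by-side figures, assuming the commands act as declarations as described above (file names are placeholders):

```latex
\begin{figure}
\begin{minipage}{\textwidth}
  \leftfigure[c]
  \includegraphics[width=5cm]{left.eps}
  \hspace{\fill}
  \rightfigure[c]
  \includegraphics[width=5cm]{right.eps}
\end{minipage}
\leftcaption{Caption of the left figure.}
\rightcaption{Caption of the right figure.}
\end{figure}
```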
Use the newly defined command
`\twocaptionwidth{`$\langle width\rangle$`}``{`$\langle width\rangle$`}`
to overrule the default horizontal space of 5.4 cm provided for each of the
above-described caption commands. The first argument corresponds to
`\leftcaption` and the second to `\rightcaption`.
Use the new declaration
`\subfigures`
within the figure environment – directly after the `\begin{figure}` command –
to subnumber multiple captions alphabetically within a single figure-
environment.
N.B.: When used in combination with `\samenumber` the main counter remains the
same and the alphabetical subnumbering is continued. It works properly only
when you stick to the sequence `\samenumber\subfigures`.
If you do not include your figures as electronic files use the defined command
`\mpicplace{`$\langle width\rangle$`}{`$\langle height\rangle$`}`
to leave the desired amount of space for each figure. This command draws a
vertical line of the height you specified.
Use the new command `\svhline`
for setting in tables the horizontal line that separates the table header from
the table content.
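A table sketch in the corresponding layout (the `\noalign{\smallskip}` spacing is a common Springer convention; the data are placeholders):

```latex
\begin{table}
\caption{Physical quantities and their units}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
Quantity & Symbol & Unit \\
\noalign{\smallskip}\svhline\noalign{\smallskip}
Length & $l$ & m \\
Mass   & $m$ & kg \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
```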
### 2.8 SVMono Environments for Exercises, Problems and Solutions
Use the environment command
`\begin{prob}`
---
`\label{`$\langle problem{:}key\rangle$`}`
$\langle problem~{}text\rangle$
`\end{prob}`
to typeset and number each problem individually.
To facilitate the correct numbering of the solutions we have also defined a
solution environment, which takes the problem’s key, i.e. $\langle
problem{:}key\rangle$ (see above) as argument.
Use the environment syntax `\begin{sol}{`$\langle problem{:}key\rangle$`}`
$\langle solution~{}text\rangle$ `\end{sol}`
to get the correct (i.e. problem =) solution number automatically.
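A minimal sketch of a matching problem/solution pair (key and texts are placeholders):

```latex
\begin{prob}
\label{prob:even-sum}
Show that the sum of two even integers is even.
\end{prob}
% Later, e.g. in a solutions chapter:
\begin{sol}{prob:even-sum}
Write the integers as $2m$ and $2n$; their sum is $2(m+n)$.
\end{sol}
```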
### 2.9 SVMono Special Elements
Use the environment syntax
`\begin{`$\langle{\it name}\rangle$`}{`$\langle{\it heading}\rangle$`}`
---
`...`
`\end{`$\langle{\it name}\rangle$`}`
if you want to emphasize complete paragraphs of text as one of the following
special elements, each typeset with the given heading:

`trailer` | `Trailer Head`
---|---
`question` | `Questions`
`important` | `Important`
`warning` | `Attention`
`programcode` | `Program Code`
`tips` | `Tips`
`overview` | `Overview`
`backgroundinformation` | `Background Information`
`legaltext` | `Legal Text`
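For example, a warning-type element could be set as follows (the heading and text are placeholders):

```latex
\begin{warning}{Attention}
Select the required class options before compiling the final version.
\end{warning}
```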
### 2.10 SVMono Commands for Styling References
The command
`\biblstarthook{`$\langle text\rangle$`}`
allows the inclusion of explanatory text between the bibliography heading and
the actual list of references. The command must be placed before the
`thebibliography` environment.
### 2.11 SVMono Commands for Styling the Index
The declaration
`\threecolindex`
sets the next index following the `\threecolindex` declaration in three
columns.
The Springer declaration
`\indexstarthook{`$\langle text\rangle$`}`
allows the inclusion of explanatory text between the index heading and the
actual index. The command must be placed before the `theindex`
environment.
### 2.12 SVMono Commands for Styling the Table of Contents
Use the command
`\setcounter{tocdepth}{number}`
to alter the numerical depth of your table of contents.
Use the macro
`\calctocindent`
to recalculate the horizontal spacing for large section numbers in the table
of contents set with the following variables:
`\tocchpnum` | chapter number
---|---
`\tocsecnum` | section number
`\tocsubsecnum` | subsection number
`\tocsubsubsecnum` | subsubsection number
`\tocparanum` | paragraph number
Set the sizes of the variables concerned at the maximum numbering appearing in
the current document.
In the preamble set, e.g.:
`\settowidth{\tocchpnum}{36.\enspace}`
---
`\settowidth{\tocsecnum}{36.10\enspace}`
`\settowidth{\tocsubsecnum}{99.88.77}`
`\calctocindent`
## References
1. L. Lamport: LaTeX: A Document Preparation System, 2nd ed. (Addison-Wesley,
Reading, MA 1994)
2. M. Goossens, F. Mittelbach, A. Samarin: The LaTeX Companion
(Addison-Wesley, Reading, MA 1994)
3. D. E. Knuth: The TeXbook (Addison-Wesley, Reading, MA 1986), revised to
cover TeX3 (1991)
4. TeX Users Group (TUG), http://www.tug.org
5. Deutschsprachige Anwendervereinigung TeX e.V. (DANTE), Heidelberg, Germany,
http://www.dante.de
6. UK TeX Users’ Group (UK-TuG), http://uk.tug.org
# Advanced Machine Learning Techniques for Fake News (Online Disinformation)
Detection: A Systematic Mapping Study
Michał Choraś [email protected] Konstantinos Demestichas [email protected]
Agata Giełczyk [email protected] Álvaro Herrero [email protected] Paweł
Ksieniewicz [email protected] Konstantina Remoundou
[email protected] Daniel Urda [email protected] Michał Woźniak
[email protected] UTP University of Science and Technology, Poland
National Technical University of Athens, Greece Grupo de Inteligencia
Computacional Aplicada (GICAP), Departamento de Ingeniería Informática,
Escuela Politécnica Superior, Universidad de Burgos, Av. Cantabria s/n, 09006,
Burgos, Spain. Wrocław University of Science and Technology, Poland
###### Abstract
Fake news has now grown into a big problem for societies and also a major
challenge for people fighting disinformation. This phenomenon plagues
democratic elections, reputations of individual persons or organizations, and
has negatively impacted citizens, (e.g., during the COVID-19 pandemic in the
US or Brazil). Hence, developing effective tools to fight this phenomenon by
employing advanced Machine Learning (ML) methods poses a significant
challenge. The following paper presents the current body of knowledge on the
application of such intelligent tools in the fight against disinformation. It
starts by showing the historical perspective and the current role of fake news
in the information war. Proposed solutions based solely on the work of experts
are analysed and the most important directions of the application of
intelligent systems in the detection of misinformation sources are pointed
out. Additionally, the paper presents some useful resources (mainly datasets
useful when assessing ML solutions for fake news detection) and provides a
short overview of the most important R&D projects related to this subject. The
main purpose of this work is to analyse the current state of knowledge in
detecting fake news; on the one hand to show possible solutions, and on the
other hand to identify the main challenges and methodological gaps to motivate
future research.
###### keywords:
Fake news , Machine Learning , Social media , Media content manipulation ,
Disinformation detection
††journal: Applied Soft Computing
## 1 Introduction
Let us start with a strong statement: the fake news phenomenon is currently a
big problem for societies, nations and individual citizens. Fake news has
already plagued democratic elections, reputations of individual persons or
organizations, and has negatively impacted citizens in the COVID-19 pandemic
(e.g., fake news on alleged medicines in the US or in Brazil). It is clear we
need agile and reliable solutions to fight and counter the fake news problem.
Therefore, this article provides a critical scrutiny of the present level
of knowledge in fake news detection, on the one hand to show possible
solutions and on the other to motivate future research in this domain.
Fake news is a tough challenge to overcome; however, there are some efforts
from the Machine Learning (ML) community to stand up to this harmful
phenomenon. In this mapping study, we present such efforts, solutions and
ideas. As presented in Fig. 1, fake news detection may be performed by
analysing several types of digital content such as images, text and network
data, as well as the author/source reputation.
Figure 1: The types of digital content that are analysed so as to detect fake
news in an automatic manner: the author’s/source’s reputation, network
metadata, text analysis (NLP, psycholinguistic, non-linguistic) and image
analysis (context analysis, manipulation detection)
This survey is not the first one in the domain of fake news. Another major
comprehensive work addressing the ways to approach fake news detection (mainly
text analysis-based) and mainstream fake news datasets is [1]. According to
it, the _state-of-the-art_ approaches for this kind of analysis may be
classified into five general groups with methods relying upon: (_i_)
linguistic features, (_ii_) deception modelling, (_iii_) clustering, (_iv_)
predictive modelling and (_v_) content cues. With regard to the text
characteristics, style-based and pattern-based detection methods are also
presented in [2]. Those methods rely on the analysis of specific language
attributes and the language structure. The analyzed attributes found by the
authors of the survey include such features as: quantity of the language
elements (e.g. verbs, nouns, sentences, paragraphs), statistical assessment of
language complexity, uncertainty (e.g. number of quantifiers, generalizations,
question marks in the text), subjectivity, non-immediacy (such as the count of
rhetorical questions or passive voice), sentiment, diversity, informality and
specificity of the analyzed text. Paper [3] surveys several approaches to
assessing fake news, which stem from two primary groups: linguistic cue
approaches (applying ML) as well as network analysis approaches.
Yet another category of solutions is network-based analysis. In [4], two
distinct categories are mentioned: (_i_) social network behavior analysis to
authenticate the news publisher’s social media identity and to verify their
trustworthiness and (_ii_) scalable computational fact-checking methods based
on knowledge networks. Beside text-based and network-based analysis, some
other approaches are reviewed. For example, [5] attempts to survey
identification and mitigation techniques in combating fake news and discusses
feedback-based identification approaches.
Crowd-signal based methods are also reported in [6], while content propagation
modelling for fake news detection purposes, alongside credibility assessment
methods, are discussed in [2]. Such credibility-based approaches are
categorized here into four groups: evaluation of news headlines, news source,
news comments and news spreaders/re-publishers. In addition, in some surveys,
content-based approaches using non-text analysis are discussed. The most
common ones are based on image analysis [1, 5].
Complementary to the mentioned surveys, the present paper is unique in
capturing a very different angle on fake news detection methods (focused on
advanced ML approaches). Moreover, in addition to overviewing current methods,
we propose our own analysis criterion and categorization. We also suggest
expanding the context of methods applicable for such a task and describe the
datasets, initiatives and current projects, as well as the future challenges.
The remainder of the paper is structured in the following manner: in Section
1, previous surveys are overviewed and the historic evolution of fake news is
presented, along with its current impact and the problem of definitions. In
Section 2, we present current activities to address the fake news detection
problem as well as technological and educational actions. Section 3
constitutes the in-depth systematic mapping of ML based fake news detection
methods focused on the analysis of text, images, network data and reputation.
Section 4 describes the relevant datasets used nowadays. In the final part of
the paper we present some most emerging challenges in the discussed domain and
we draw the main conclusions.
### 1.1 A historic perspective
Even though the fake news problem has lately become increasingly important, it
is not a recent phenomenon. According to different experts [7], its origins
are in ancient times. The oldest recorded case of spreading lies to gain some
advantage is the disinformation campaign that took place on the eve of the
_Battle of Kadesh_ , dated around 1280 B.C., where the Hittite Bedouins
deliberately got arrested by the Egyptians in order to tell _Pharaoh Ramses
II_ the wrong location of the _Muwatallis II_ army [8].
A long time after that, in the mid-15th century, the Gutenberg printing press
was invented. This
event is widely acknowledged as a keystone in the history of news and press
media, as it revolutionized this field. As a side effect, the dis- and
misinformation campaigns had immeasurably intensified. As an example, it is
worth mentioning the _Great Moon Hoax_, dating back to 1835. This term refers
to a series of six articles published in _The Sun_, a newspaper from New
York. These articles concerned the ways life and culture had allegedly been
found on the Moon.
More recently, fake news and disinformation played a crucial role in World War
I and II. On the one hand, British propaganda during World War I was aimed at
demonising German enemies, accusing them of using the remains of their troops
to obtain bone meal and fats, and then feeding the rest to swine. As a
negative consequence of that, Nazi atrocities during World War II were
initially doubted [9].
On the other hand, fake news was also generated by Nazis for sharing
propaganda. Joseph Goebbels, who cooperated closely with Hitler and was
responsible for German Reich’s propaganda, performed a deciding role in the
news media of Germany. He ordered the publication of a paper _The Attack_ ,
which was then used to disseminate brainwashing information. By means of
untruth and misinformation, the opinion of the public was being persuaded to
be in favour of the dreadful actions of the Nazis. Furthermore, according to
[10], to this day, it has been the most disreputable propaganda campaign ever
mounted.
Since the Internet and the social media that come with it became massively
popularized, fake news has disseminated at an unprecedented scale. It is
increasingly impacting presidential elections, celebrities, the climate
crisis, healthcare and many other topics. This rising popularity of fake news
may be
easily observed in Fig.2. It presents the count of records in the _Google
Scholar_ database appearing year by year (since 2004), related to the term
"fake news".
Figure 2: Evolution of the number of publications _per year_ retrieved from
the keyword "_fake news_" according to _Google Scholar_. For the year 2020,
the status is as of September 8th.
### 1.2 Overview of definitions: what is meant by fake news?
Defining what fake news really is poses a significant challenge. As a
starting point, it is worth mentioning that Olga Tokarczuk, during her 2018
Nobel lecture
(https://www.nobelprize.org/prizes/literature/2018/tokarczuk/104871-lecture-english/),
said:
> _"Information can be overwhelming, and its complexity and ambiguity give
> rise to all sorts of defense mechanisms—from denial to repression, even to
> escape into the simple principles of simplifying, ideological, party-line
> thinking. The category of fake news raises new questions about what fiction
> is. Readers who have been repeatedly deceived, misinformed or misled have
> begun to slowly acquire a specific neurotic idiosyncrasy."_
There are many definitions of fake news [11]. In [12], it is defined as
follows: ’the news articles that are intentionally and verifiably false, and
could mislead readers’. Quoting _Wikipedia_, which is more verbose and less
precise, it is: ’a type of yellow journalism or propaganda that consists of
deliberate misinformation or hoaxes spread via traditional print and broadcast
news media or online social media’. On the other hand, in Europe, the European
Union Agency for Cybersecurity (_ENISA_) uses the term ’_online
disinformation_’ when talking about fake news
(https://www.enisa.europa.eu/publications/enisa-position-papers-and-opinions/fake-news).
The _European Commission_, on its websites, describes
the fake news problem as _’verifiably false or misleading information created,
presented and disseminated for economic gain or to intentionally deceive the
public’_
(https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation).
This lack of a standardized and widely accepted interpretation
of the fake news term has been already mentioned in some previous papers [13].
It is important to remember that conspiracy theories are not always considered
as fake news. In such cases, the text or images found in the considered piece
of information have to be taken into account along with the motivation of the
author/source.
In classification tasks it is very important to distinguish between deliberate
deception (actual fake news) and irony or satire that are close to it, but
completely different in the author’s intention. The difference is so blurred
that it is sometimes difficult even for people (especially those without a
specific sense of humor), so it is a particular problem for automatic
recognition systems. The definition is therefore difficult to establish, but
indeed fake news is a concept related to information; therefore, we tried to
position it within some other information concepts, as presented in Fig. 3.
Figure 3: Fake news in the context of information, positioned along two axes
– veracity and the author’s involvement – relative to factual news, hoax,
propaganda and irony
Factual news is based on facts concerned with actual details or information
rather than ideas or feelings about it.
### 1.3 Why is fake news dangerous?
During the pandemic of Coronavirus (COVID-19) in 2020 we have had the
opportunity to experience the disinformation power of fake news in all its
infamous glory. The _World Health Organization_ called this side phenomenon
the ’_infodemic_’ – an overwhelming quantity of material in social media
and websites. As a representative example, one of those news items claimed
that the 5G mobile network ‘_causes Coronavirus by sucking oxygen out of
your lungs_’
(https://www.mirror.co.uk/tech/coronavirus-hoax-claims-5g-causes-21620766).
Another group said that the virus comes from bat soup
(www.foreignpolicy.com/2020/01/27/dont-blame-bat-soup-for-the-wuhan-virus/),
while others pointed to labs as places where the virus was actually
created as part of the conspiracy. According to the _’studies’_ published this
way, there are also some pieces of advice on how to cure the virus – by
gargling vinegar and rosewater, vinegar and salt666www.bbc.com/news/world-
middle-east-51677530 or – as a classic for plague anxiety – colloidal silver
or garlic.
False news may also serve a political agenda, as for instance during the 2016 US election. One misinformation example in that campaign was Eric Tucker's case777https://www.nytimes.com/2016/11/20/business/media/how-fake-news-spreads.html. Tucker, finding it an uncommon occurrence, photographed a large number of buses he had spotted in the city centre of Austin. He had also seen the announcements of demonstrations protesting against President-elect Donald J. Trump taking place there, and concluded that there must be a connection between the two. Tucker tweeted the photos, commenting: '_Anti-Trump protestors in Austin today are not as organic as they seem. Here are the busses they came in. #fakeprotests #trump2016 #austin_'. The original post was retweeted at least sixteen thousand times; _Facebook_ users shared it over 350,000 times. Later, it turned out that the buses had nothing to do with the protests in Austin; in fact, they had been hired by _Tableau Software_, a company that was organising a summit for over 13 thousand people at the time. As a result, Tucker deleted the original post from Twitter and instead published a picture of it labelled as '_false_'.
## 2 Current initiatives worldwide to fight against disinformation
The fake news problem is especially visible in all kinds of social media. In an online report888https://techxplore.com/news/2019-12-nato-social-media.html, NATO-affiliated researchers claimed that social media platforms fail to stop online manipulation. According to the report, '_Overall social media companies are experiencing significant challenges in countering the malicious use of their platforms_'. First of all, the researchers were easily able to buy tens of thousands of likes, comments and views on Facebook, Twitter, YouTube and Instagram. Moreover, Facebook has recently been flooded with fake accounts: it claimed to have disabled 2.2 billion fake accounts in the first quarter of 2019 alone. An interesting approach to fake news detection is to involve the community, which is why the reporting tool in the Facebook mobile application has become more visible lately.
This is just one example showing how serious and far-reaching the issue may be. The pervasiveness of the problem has already led to a number of fake news prevention initiatives, of both political and non-political character; some are local and some international in scope. The following subsections present some initiatives of considerable significance.
### 2.1 Large-scale political initiatives addressing the issue of fake news
From a worldwide perspective, the _International Grand Committee_ (_IGC_) _on Disinformation and Fake News_ can be considered the broadest governmental initiative at present. The _IGC_ was founded by the governments of Argentina, Belgium, Brazil, Canada, France, Ireland, Latvia, Singapore, and the United Kingdom. Its inaugural meeting999https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/declaration-internet-17-19/ was held in the UK in November 2018, and follow-up meetings have been held in Canada (May 2019) and Ireland (November 2019). Elected representatives of Finland, Georgia, the USA, Estonia, and Australia also attended the last meeting. In addition to general reflections on the topics under analysis, this international board focused specifically on technology and media companies, demanding liability and accountability from them. One of the conclusions of the latest _IGC_ meeting101010https://www.oireachtas.ie/en/press-centre/press-releases/20191107-update-international-grand-committee-on-disinformation-and-fake-news-proposes-moratorium-on-misleading-micro-targeted-political-ads-online/ was that '_global technology firms cannot on their own be responsible in combating harmful content, hate speech and electoral interference_'. As a result, the committee concluded that self-regulation is insufficient.
At the national level, a wide range of actions has been taken. In Australia, interference in political or governmental processes has been one of the main concerns of the Government111111https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BriefingBook46p/FakeNews. As a result, the _Electoral Integrity Assurance Taskforce_ (_EIAT_) was created in 2018 to handle risks of cyber interference to the integrity of the Australian electoral system. Moreover, in June 2020 the Australian Communications and Media Authority published a paper121212https://www.acma.gov.au/sites/default/files/2020-06/Misinformation%20and%20news%20quality%20position%20paper.pdf highlighting that '_48% of Australians rely on online news or social media as their main source of news, but 64% of Australians remain concerned about what is real or fake on the internet_'. The paper discusses the potentially harmful effects of fake news and disinformation on users and governments at different levels, providing two clear and recent examples from Australia during the first half of 2020: the bushfire season and the COVID-19 pandemic. In general, two responses to misinformation are pointed out: one considering international regulatory responses, and another coming from online platforms in terms of how they tackle misbehaving users and how they address problematic content.
Similarly, in October 2019 the _US Department of Homeland Security_ (_DHS_) published a report on _Combatting Targeted Disinformation Campaigns_ 131313https://www.dhs.gov/sites/default/files/publications/ia/ia_combatting-targeted-disinformation-campaigns.pdf. In this report, the _DHS_ highlights how easy it is nowadays to spread false news through online sources and argues that '_disinformation campaigns should be viewed as a whole-of-society problem requiring action by government stakeholders, commercial entities, media organizations, and other segments of civil society_'. The report points out the growth of disinformation campaigns since the 2016 US presidential election, as the US and nations worldwide became more aware of the potential damage of these campaigns to the economy, politics and society in general. Furthermore, better and broader actions are nowadays carried out in real time, i.e. while a disinformation campaign is ongoing, compared with the first years (until 2018), when most of '_the work on disinformation campaigns was post-mortem, i.e. after the campaign had nearly run its course_'. The report summarizes several recommendations to combat disinformation campaigns, such as '_government legislation, funding and support of research efforts that bridge the commercial and academic sectors (e.g., development of technical tools), sharing and analysing information between public and private entities, providing media literacy resources to users and enhancing the transparency of content distributors, building societal resilience and encouraging the adoption of healthy skepticism_'. In any case, the importance of '_private and public sector cooperation to address targeted disinformation campaigns on the next five years_' is highlighted.
In Europe, the _EU Commission_ has recently updated (July, 2020) a previous
report providing a clear set of actions, which are easy to understand and/or
apply, in order to fight against fake news and online
disinformation141414https://ec.europa.eu/digital-single-market/en/tackling-
online-
disinformation,151515https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56166.
The action plan consists of the four following pillars:
1.
_Improving the capabilities of Union institutions to detect, analyse and
expose disinformation_. This action implies better communication and
coordination among the EU Member States and their institutions. In principle,
it aims to provide EU members with ’ _specialised experts in data mining and
analysis to gather and process all the relevant data_ ’. Moreover, it refers
to ’ _contracting media monitoring services as crucial in order to cover a
wider range of sources and languages_ ’. Additionally, it also highlights the
need to ’ _invest in developing analytical tools which may help to mine,
organise and aggregate vast amounts of digital data_ ’.
2.
_Stronger cooperation and joint responses to threats_. Since the period right after false news is published is the most critical, this action aims to establish a '_Rapid Alert System to provide alerts on disinformation campaign in real-time_'. For this purpose, each EU Member State should '_designate contact points which would share alerts and ensure coordination without prejudice to existing competences and/or national laws_'.
3.
_Enhancing collaboration with online platforms and industry to tackle disinformation_. This action aims to mobilise the private sector and give it an active role. Disinformation is most often released on large online platforms owned by the private sector; therefore, they should be able to '
_close down fake accounts active on their service, identify automated bots and
label them accordingly, and collaborate with the national audio-visual
regulators and independent fact-checkers and researchers to detect and flag
disinformation campaigns_ ’.
4.
_Raising awareness and improving societal resilience_. This action aims to
increase public awareness and resilience by activities related to ’ _media
literacy in order to empower EU citizens to better identify and deal with
disinformation_ ’. In this sense, the development of critical thinking and the
use of independent fact-checkers are highlighted to play a key role in ’
_providing new and more efficient tools to the society in order to understand
and combat online disinformation_ ’.
Overall, any activity or action proposed in the above-mentioned cases (_IGC_, Australia, US and EU) can be understood from three different angles: technological, legislative and educational. For instance, technology-wise, the four pillars proposed by the EU Commission mention the use of several kinds of tools (analytical tools, fact-checkers, etc.), which arise as the key component of current and future initiatives. From the legislative point of view, pillars 1 and 2 also show the need for a normative framework that facilitates coordination and communication among different countries. Finally, education-wise, pillars 3 and 4 underline the importance of media literacy and the development of critical thinking in society. Similarly, the activities, actions and recommendations of the _IGC_, the Australian government and the _DHS_ can be directly linked to these three concepts (technology, legislation and education). Furthermore, the _BBC_ has an available collection of tools, highlights and global media education161616https://www.bbc.co.uk/academy/en/collections/fake-news which can be directly linked to the angles described above, thus supporting this division into three categories.
The U.S. Department of Defence has invested about $70 M to deploy military technology for detecting fake content, given its impact on national security171717https://www.cbc.ca/news/technology/fighting-fake-images-military-1.4905775. This _Media Forensics Program_ at the _Defence Advanced Research Projects Agency_ (_DARPA_) started four years ago.
Public bodies are also devoting effort to the technological component. That is the case of the Australian _EIAT_, which was created to provide the _Australian Electoral Commission_ with '_technical advice and expertise_' concerning potential digital disturbance of electoral processes181818https://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;query=Id%3A%22media%2Fpressclp%2F6016585%22. The _Australian Cyber Security Centre_, among other governmental institutions, was in charge of providing this technological advice.
In the case of Europe, a high-level group of experts (_HLGE_) was appointed in 2018 by the European Commission to give advice on this topic. Although these experts were not against regulation in some cases, they proposed [14] mainly non-regulatory, specific-purpose actions. The focus of the proposal was collaboration between different stakeholders to support digital media companies in combating disinformation. By contrast, the _Standing Committee on Access to Information, Privacy and Ethics_ (_SCAIPE_) established by the Parliament of Canada suggested in 2019 imposing legal restrictions on media companies, requiring more transparency and forcing them to remove illegal content [15]. Similarly, the _Digital, Culture, Media and Sport Committee_ (_DCMSC_) created by the Parliament of the United Kingdom strongly encouraged legal action [16]. More precisely, it proposed a mandatory ethical code for media enterprises, overseen by an independent body, forcing these companies to remove content considered potentially dangerous or coming from proven sources of disinformation. Furthermore, the _DCMSC_ suggested modifying laws on electoral communications to ensure their transparency in online media.
Regarding electoral processes, the Australian initiative is worth mentioning: previously unprecedented offences of foreign interference have been added to the Commonwealth Criminal Code by means of the _National Security Legislation Amendment (Espionage and Foreign Interference) Act 2018_ 191919https://www.legislation.gov.au/Details/C2018C00506. It defines these foreign offences as ones that '_influence a political or governmental process of the Commonwealth or a State or Territory or influence the exercise (whether or not in Australia) of an Australian democratic or political right or duty_'.
In the case of India, the Government issued a notice to _Whatsapp_ in 2018 because at least 18 people had been killed that year in separate incidents after false information was shared through the app202020https://www.cbc.ca/news/world/india-child-kidnap-abduction-video-rumours-killings-1.4737041. The Minister of Electronics and Information Technology stated that the Indian Government '_was committed to freedom of speech and privacy as enshrined in the constitution of India_'; as a result, posts published to social networks are not subject to governmental regulations. He claimed, however, that '_these social media have also to follow Article 19(2) of the Constitution and ensure that their platforms are not used to commit and provoke terrorism, extremism, violence and crime_'212121https://www.thehindu.com/news/national/fake-news-safety-measures-by-whatsapp-inadequate-says-ravi-shankar-prasad/article24521167.ece. Moreover, India's Government is working on a new IT Act to provide a stronger framework for dealing with cybercrimes222222https://www.thehindu.com/business/Industry/centre-to-revamp-it-act/article30925140.ece.
Lastly, it should be mentioned that the third (and latest) meeting of the _IGC_ was aimed at advancing international collaboration in the regulation of fake news and disinformation. Experts highlighted that there are conflicting principles regarding the regulation of the internet, including protecting freedom of speech (in accordance with national laws) while simultaneously combating disinformation. Thus, this is still an open challenge.
Finally, from the education perspective, it is worth mentioning that the _EU-HLGE_ recommended implementing broad education programs on media and information, in order to educate not only professional users of media platforms but the public in general. Similar recommendations were made by the Canadian _SCAIPE_, which focused on awareness-raising campaigns and literacy programs for the whole of society.
### 2.2 Other noteworthy initiatives and solutions
It should also be mentioned that systematic social activity battling misinformation has recently appeared and is becoming more intense. For instance, there is a group of Lithuanian volunteers called the '_elves_'. Their main aim is to counter _Kremlin_ propaganda: they scan social media (_Instagram, Facebook, Twitter_) and report any fake information they find on a daily basis.
From the technological perspective, it is worth highlighting that numerous online tools have been developed for misinformation detection; some available approaches were presented in [17]. This technological development to combat disinformation is led by tech and media companies. This is the case of _Facebook_, which quite recently (May 2020) informed232323https://spectrum.ieee.org/view-from-the-valley/artificial-intelligence/machine-learning/how-facebook-is-using-ai-to-fight-covid19-misinformation that it is combating fake news by means of its _Multimodal Content Analysis Tools_, in addition to 35 thousand human supervisors. This AI-driven set of tools is being applied to identify fake or abusive content related to Coronavirus. The image-processing system extracts objects that are known to violate its policy; the objects are then stored and searched for in new ads published by users. _Facebook_ claims that this solution, based on supervised classifiers, does not suffer from the limitations of similar tools when facing images created by common adversarial modification techniques.
At the _BlackHat Europe 2018_ event held in London, the _Symantec Corporation_ displayed a demo of its deepfake detector242424https://i.blackhat.com/eu-18/Thu-Dec-6/eu-18-Thaware-Agnihotri-AI-Gone-Rogue-Exterminating-Deep-Fakes-Before-They-Cause-Menace.pdf. In 2019, Facebook put 10 million dollars252525https://www.reuters.com/article/us-facebook-microsoft-deepfakes/facebook-microsoft-launch-contest-to-detect-deepfake-videos-idUSKCN1VQ2T5 into the _Deepfake Detection Challenge_ [18], aimed at measuring progress in the technology available to detect deepfakes. The winning model of this contest (the best in terms of the precision metric on published data) performed quite poorly (65.18% precision) when validated on new data. This means that robust fake detection technology is still an open challenge requiring a great research effort. In addition to these huge companies, some startups are developing anti-fake technologies, such as _DeepTrace_, based in the Netherlands, which aims at building the '_antivirus for deepfakes_'262626https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/will-deepfakes-detection-be-ready-for-2020.
Other technological projects are currently under way; the _AI Foundation_ raised 10 million dollars to develop the _Guardian AI_ technologies, a set of tools comprising _Reality Defender_ 272727https://aifoundation.com/responsibility/. This intelligent software is intended to support users in identifying fake content while consuming digital resources (such as web browsing). No further technical details are available yet.
## 3 A systematic overview of ML approaches for fake news detection
A comprehensive, critical analysis of previous ML-based approaches to false
news detection is presented in the following part of the paper. As already
mentioned in Section 1, and as graphically presented in Fig. 1, these methods
can analyze different types of digital content. According to that, in this
section we will overview the methods for (_i_) text-based and Natural Language
Processing (NLP) analysis, (_ii_) reputation analysis, (_iii_) network
analysis, and (_iv_) image-manipulation recognition.
### 3.1 Text analysis
Intuitively, the most obvious approach to automatically recognizing fake news is NLP. Although the social context of a message conveyed in electronic media is a very important factor, the basic source of information for building a reliable pattern recognition system is the extraction of features directly from the content of the analyzed article. Several main trends may be distinguished within the work carried out in this area. Theoretically, the simplest is the analysis of text representation without linguistic context, most often in the form of _bag-of-words_ (first mentioned in 1954 by Harris [19]) or _N-grams_; the analysis of psycholinguistic factors and syntactic and semantic analysis are also commonly used.
#### 3.1.1 NLP-based data representation
As rightly pointed out by Saquete et al. [20], each of the subtasks within the domain of detecting false news is based on tools offered by NLP. Basic solutions are built on _bag-of-words_, which counts the occurrences of particular terms within the text. A slightly more sophisticated development of this idea is _N-grams_, which tokenizes not individual words but sequences of them; by definition, _bag-of-words_ corresponds to _N-grams_ with $n$ equal to one (_unigrams_). It also has to be noted that the raw number of _N-grams_ in a document depends strongly on its length, so for the construction of pattern recognition systems it should be normalized to the document (_Term frequency_: TF [21]) or to the set of documents used in the learning process (_Term frequency - inverse document frequency_: TF-IDF [22]).
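The representations above can be sketched in a few lines of plain Python; this is a minimal illustration of TF and TF-IDF over combined unigrams and bigrams, not the implementation used in any of the cited works.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return all n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tf(tokens, nmax=2):
    """Term frequency of all 1..nmax-grams, normalised by their total count."""
    grams = [g for n in range(1, nmax + 1) for g in ngrams(tokens, n)]
    counts = Counter(grams)
    total = len(grams)
    return {g: c / total for g, c in counts.items()}

def tf_idf(docs, nmax=2):
    """TF-IDF vectors for a corpus of tokenised documents."""
    tfs = [tf(d, nmax) for d in docs]
    n_docs = len(docs)
    df = Counter(g for t in tfs for g in t)  # document frequency of each gram
    return [{g: w * math.log(n_docs / df[g]) for g, w in t.items()}
            for t in tfs]

docs = [["fake", "news", "spreads", "fast"],
        ["real", "news", "spreads", "slowly"]]
vectors = tf_idf(docs)
# "news" appears in both documents, so idf = log(2/2) = 0 and its weight is 0
```

Terms that occur in every document receive zero weight, which is exactly the normalisation effect across the document collection described above.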
Despite the simplicity and age of these solutions, they are successfully utilized for detecting false news. A good example of such an application is the work of Hassan et al. [23], which compares the effectiveness of _Twitter_ disinformation classification using five base classifiers (_Lin-SVM, RF, LR, NB and KNN_) combined with TF and TF-IDF attribute extraction over _N-grams_ of various lengths. On the basis of the PHEME [24] dataset, they showed the effectiveness of methods using several lengths of word sequences simultaneously, combining _unigrams_ with _bigrams_ in the extraction. A similar approach is a common starting point for most analyses [25, 26], enabling further lines of investigation through an in-depth review of base classifiers [27] or the use of ensemble methods [28].
Other interesting trends within this type of analysis include the detection of unusual tokens, for example insistently repeated denials and curses, or question marks, emoticons and multiple exclamation marks, which are most often discarded at the preprocessing stage [29, 30, 31]. As in many other application fields, deep neural networks are a promising branch of Artificial Intelligence; hence they also play a role here, as a popular alternative to classic models [32].
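Counting such unusual tokens is straightforward; the sketch below shows one possible feature extractor (the feature set and regular expressions are illustrative choices, not taken from the cited works).

```python
import re

def stylistic_features(text):
    """Counts of tokens often stripped during preprocessing but potentially
    useful as fake-news signals: punctuation bursts, emoticons, shouting."""
    return {
        "exclamations": text.count("!"),
        "questions": text.count("?"),
        "emoticons": len(re.findall(r"[:;]-?[()DP]", text)),
        "all_caps": len(re.findall(r"\b[A-Z]{2,}\b", text)),
    }

features = stylistic_features("SHOCKING!!! You won't BELIEVE this :) Really??")
# → {'exclamations': 3, 'questions': 2, 'emoticons': 1, 'all_caps': 2}
```

Such counts can simply be appended to a TF-IDF vector before training any of the base classifiers mentioned above.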
#### 3.1.2 Psycholinguistic features
The psycholinguistic analysis of texts published on the Internet is particularly difficult due to the restriction of messages to their verbal part only and the peculiar characteristics of such documents. Existing and widely cited studies [33, 34] allow us to conclude that messages that try to mislead us are characterized, for example, by the enormous length of some of their sentences combined with lexical limitation, increased repetition of key theses, or a reduced formality of the language used. These are often extractable and measurable factors that can be used, more or less successfully, in the design of a pattern recognition system.
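Two of those measurable factors, sentence length and lexical limitation, can be approximated with very simple surface statistics; the following is a toy sketch of that idea, not a replacement for a validated psycholinguistic tool such as LIWC.

```python
import re

def psycholinguistic_profile(text):
    """Surface cues cited in deception research: average sentence length
    and lexical diversity (type-token ratio; low values suggest a
    lexically limited, repetitive text)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

profile = psycholinguistic_profile(
    "The claim is repeated. The claim is repeated again.")
# 9 words over 2 sentences, only 5 distinct word types
```

A deceptive text with long sentences and heavy repetition would show a high average sentence length together with a low type-token ratio.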
An interesting work giving a proper overview of psycholinguistic data extraction is the detection of character assassination attempts by troll actions, performed by El Marouf et al. [35]. Six different feature extraction tools were used in the construction of the dataset for supervised learning. The first is _Linguistic Inquiry and Word Count_ (_LIWC_) [36], which in its latest version provides 93 individual characteristics of the analyzed text, including both simple measurements, such as the typical word count within a given phrase, and complex analysis of the grammar used, or even a description of the author's emotions and cognitive processes. It is a tool that has been widely used for many years in a variety of problems [37, 38, 39, 40], fitting well for detecting fake news. Additionally, the authors of [41] presented a sentiment analysis proposal that classifies a sample text as irony or not.
Another tool is _POS_ tagging, which assigns individual words to _Parts-of-speech_ and returns their percentage share in the document [42]. The basic information about the grammar of the text and the presumed emotions of the author obtained in this way is supplemented with knowledge acquired from _SlangNet_ [43], _Colloquial WordNet_ [44], _SentiWordNet_ [45] and _SentiStrength_ [46], which return, in turn, information about slang and colloquial expressions, indicating the author's sentiment and classifying it as positive or negative. The system proposed by the authors, using 19 of the available attributes and _Multinomial Naive Bayes_ as a base classifier, achieved a score of 90% on the _F-score_ metric.
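The percentage-share representation produced by POS tagging reduces to a simple normalised count over the tagger's output; a minimal sketch, assuming (token, tag) pairs from any off-the-shelf tagger:

```python
from collections import Counter

def pos_shares(tagged_tokens):
    """Percentage share of each part-of-speech tag in a document,
    given (token, tag) pairs produced by any POS tagger."""
    counts = Counter(tag for _, tag in tagged_tokens)
    total = len(tagged_tokens)
    return {tag: 100.0 * c / total for tag, c in counts.items()}

# Illustrative input; real tags would come from a tagger such as the
# one described in [42].
tagged = [("the", "DET"), ("story", "NOUN"), ("is", "VERB"), ("false", "ADJ")]
shares = pos_shares(tagged)  # each tag accounts for 25.0% here
```

The resulting fixed-length vector of tag percentages can be concatenated with lexicon-based features before classification.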
An extremely interesting trend in this type of feature extraction is the use of behavioral information that is not directly contained in the entered text but can be obtained by analyzing the way it was entered [47, 48].
#### 3.1.3 Syntax-based
The aforementioned methods of obtaining useful information from text are based on analysing the word sequences present in it or on recognizing the author's emotions hidden in the words. However, a full analysis of _natural language_ requires an additional aspect: processing the syntactic structure of the expressed sentences. Ideas expressed in simple data representation methods, such as _N-grams_, were developed into _Probabilistic Context-Free Grammars_ [49], which build distributed trees describing the syntactic structure of the text.
In syntactic analysis, we do not have to limit ourselves to the structure of a single sentence; in the case of social networks, extraction can be performed by building the structure of the whole discussion, as Kumar and Carley show [50]. Extraction of this type [51] seems to be one of the most promising tools in the fight against fake news propagation [52].
#### 3.1.4 Non-linguistic methods
Fake news classification is not limited to the linguistic analysis of documents. Studies analysing the different kinds of attributes that may be of use in this setting [53] are interesting. Among the typical methods such studies comprise are analyses of the creator and reader of the message, as well as of the contents of the document and its positioning within the social media outlets being verified [54]. Another promising method is image analysis; this approach concerns fake news in the form of video material [55]. The study in [3] is equally thought-provoking; it proposes to divide the methods into linguistic and social analyses. The former group of models encompasses semantic, rhetorical, discourse and simple probabilistic recognition; the latter comprises analysing how the person conveying the message behaves in social media and what context their entries build. In [30], the design of the recognition models is based on the behavior of the authors, where the context of a post (both posted and forwarded ones) depends on its body while also referring to other texts. Diverse representations of data were analysed in [56], whilst [57] examined various variants of stylometric metrics.
Numerous issues relating to fake news detection were studied in [5], which indicated that the _Scientific Content Analysis_ (SCAN) approach can be applied to tackle the matter. In [58], an effective method was advanced for verifying news automatically; this approach departs from analysing texts and heads for image data. On the other hand, [54] suggests analysing posts from social networks in the context of data streaming, arguing that this approach addresses their dynamic character. A Support Vector Machine (SVM) was utilised in [29] to recognize which messages are genuine, false or satirical in nature. A similar type of classifier was employed in [59] as well; in their work, semantic analysis and behavioural feature descriptors are used to uncover media entries which may be false.
A comparison was made in [60] to assess a number of classification methods based on linguistic features. According to its outcomes, successful detection of false information can be based on already familiar classifier models (particularly ensembles). In [61], it was indicated that the issue of detecting false information is generally reduced to a classification task, although anomaly detection and clustering approaches might also be utilised for it. Lastly, in [62], an NLP tools-based method was proposed for analysing Twitter posts; each post was assigned a credibility value, since the researchers viewed the issue as a regression task.
### 3.2 Reputation analysis
The reputation282828https://www.definitions.net/definition/REPUTATION of an individual, a social group, a company or even a location is understood as the estimation in which it is held; it usually results from determined criteria which influence social assessment. Within a natural society, reputation is a mechanism of social control characterised by high efficiency, ubiquity and spontaneity. The social, management and technological sciences all investigate this matter. It must be acknowledged that reputation influences communities, markets, companies and institutions alike; in other words, it has an influence over both competitive and cooperative contexts. Its influence may extend as far as the relations between whole countries; the idea's importance is appreciated in politics, business, education, online networks and innumerable other domains. Thus, reputation can be regarded as reflecting the identity of a given social entity.
In the technological sciences, the reputation of a product or a site often needs to be measured; therefore a reputation score, which represents it numerically, is computed. The calculation may be performed by means of a centralized reputation server or distributively, by means of local or global trust metrics [63]. This evaluation may be of use when supporting entities in deciding whether they should rely on someone or buy a given product.
The general concept behind reputation systems is to allow entities to evaluate one another or to assess an object of interest (such as articles, publishers, etc.), and subsequently to utilize the collected evaluations to obtain trust or reputation scores for the sources and items within the system. For systems to act in this way, reputation analysis techniques are used, which support the ability to establish automatically how various keywords, phrases, themes or user-created contents are assessed across mentions of diverse origin. More specifically, in the news industry there are two kinds of sources of reputation for an article or publisher [64]: reputation from content and reviews/feedback on the one hand, and reputation from IP or domain on the other.
Given current technology, many kinds of data can be collected: the type of
comments, their scope, keywords, etc. In the news industry in particular, a
few key characteristics can differentiate a trustworthy article from a fake
one. A common characteristic is the anonymity that fake news publishers choose
to hide behind their domain. The survey in [64] showed that whenever a
publisher wishes to protect their anonymity, the online WHOIS record lists the
proxy as the registering organisation, whereas renowned, widespread newspapers
usually register under their actual company names. Another indicative feature
of fake news is the length of time publishers spend running websites intended
to disseminate false information: it is often rather brief compared to real
news publishers. Domain popularity is also a good reputation indicator, as it
measures the daily views a website receives per visitor. It stands to reason
that a well-known website attracts more views per person, as visitors tend to
devote more time to browsing its content and its various sub-pages. It has
been shown in [64] that the domains of trustworthy web pages publishing
genuine information are far more popular than those spreading falsehoods,
since most web pages that publish false news either stop publishing very soon,
or their readers spend much less time browsing them.
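The three indicators above (WHOIS anonymity, publisher lifetime, views per visitor) can be folded into a toy domain reputation score. The weights and saturation thresholds below are purely illustrative assumptions, not values from [64]:

```python
def domain_reputation(anonymous_whois, domain_age_days, views_per_visitor):
    """Toy reputation score in [0, 1]; the weights are illustrative only."""
    score = 0.0
    if not anonymous_whois:                              # real company in WHOIS
        score += 0.4
    score += 0.3 * min(domain_age_days / 3650.0, 1.0)    # long-lived publisher
    score += 0.3 * min(views_per_visitor / 10.0, 1.0)    # engaged readership
    return score

trusted = domain_reputation(False, 7300, 8.0)   # old, transparent, popular
shady   = domain_reputation(True, 60, 1.2)      # new, anonymous, shallow
```

A real system would learn such weights from labelled data rather than fix them by hand; the sketch only shows how the qualitative indicators become a single comparable number.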
Reputation scoring thus ranks fake news based on suspicious domain names, IP
addresses and reader reviews/feedback, providing a measure of whether a
specific website's reputation is high or low. Different scoring techniques
have been proposed in the literature. [65] describes the application of a
_Maximum Entropy Discrimination_ (MED) classifier to score the reputation of
web pages on the basis of a reputation vector. The vector includes multiple
factors for the online resource: the country in which the domain was
registered, where the service is hosted, the location of the IP address block,
and the registration date of the domain. It also considers popularity, IP
address, number of hosts, top-level domain, several run-time behaviours,
_JavaScript_ block count, image count, immediate redirects, and response
latency. The paper [66] also addresses this issue: its authors created the
_Notos_ reputation system, which exploits unique DNS characteristics to filter
out malevolent domains based on their past involvement in harmful or
legitimate online services. For every domain, the authors applied clustering
analysis over network-based, zone-based and evidence-based features to obtain
a reputation score; since most such methods do not perform the scoring online,
processing-intensive approaches can be used. An evaluation on real-world data,
including traffic from large ISP networks, showed that _Notos_ is highly
accurate in recognising emerging malevolent domains in monitored DNS query
traffic, with a true positive rate of 96.8% and a false positive rate of
0.38%.
In light of the above, a reputation analysis system is effectively a
cross-checking system that automatically examines a set of trustworthy
blacklist databases, usually employing ML techniques that recognise malicious
IP addresses and domains based on their reputation scores.
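The cross-checking step itself is simple set membership; a minimal sketch (the blacklist names and domains are invented for illustration):

```python
def cross_check(domain, blacklists):
    """Return the names of the blacklists that flag the given domain."""
    return [name for name, entries in blacklists.items() if domain in entries]

# Hypothetical blacklist feeds; real systems query DNSBL services instead.
blacklists = {
    "dns_abuse": {"malware.example", "phish.example"},
    "spam_feed": {"spam.example", "malware.example"},
}
hits = cross_check("malware.example", blacklists)
```

In practice the number of agreeing blacklists (here `len(hits)`) would feed into the ML reputation score alongside the features discussed above.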
More specifically, [67] presents an ML model based on a deep neural
architecture, trained on a large passive DNS database supplied by _Mnemonic_
(https://www.mnemonic.no/). The raw dataset consisted of 567 million
aggregated DNS queries gathered over almost half a decade. Each entry is
defined by the record type, the recorded query, the response to it, a
_Time-to-Live_ (TTL) value for the query–answer pair, a timestamp of when the
pair first occurred, and the total number of times the pair occurred within a
given period. The method is capable of pinpointing 95% of suspicious hosts
with a false positive rate of 1:1000. Nevertheless, training time turned out
to be exceptionally high because of the vast amount of data required, and the
delay introduced by the approach was not assessed.
_Segugio_ , an innovative behaviour-based defence system, is introduced in
[68]. It can efficiently track the appearance of new malware-control domain
names within huge ISP networks, with a true positive rate (TP) reaching 85%
and a false positive rate (FP) below 0.1%. Nevertheless, both TP and FP were
computed over only 53 new domains, and proving correctness on such a small set
may be questionable.
Lastly, [69] presents a novel granular SVM with a boundary alignment algorithm
(GSVM-BA), which repeatedly removes positive support vectors from the training
dataset in order to find the optimal decision boundary. To this end, two
groups of feature vectors, called breadth and spectral vectors, are extracted
from the data.
### 3.3 Network data analysis
Network analysis builds on network theory, which studies graphs as
representations of symmetric or asymmetric relations between discrete objects.
The theory has been applied in many fields, including statistical physics,
particle physics, computer science, electrical engineering, biology and
economics, with applications comprising the World Wide Web (WWW), the
Internet, and logistical, social, epistemological and gene regulatory
networks, among others. In computer and network science, network theory is
part of graph theory: a network can be defined as a graph in which nodes and
edges have attributes (such as names).
Network-based detection of false news exploits the social-context data
revealed during news propagation. Generally speaking, it examines two kinds of
networks: homogeneous and heterogeneous. Homogeneous networks (such as
friendship, diffusion and credibility networks) contain a single type of node
and edge. In credibility networks, for instance, people express their views on
the original news items through social media posts; these posts may share the
same opinions (and thus support one another) or conflicting opinions (which in
turn may lower their credibility scores). By modelling these relations, a
credibility network can be used to assess the veracity of news pieces,
leveraging the credibility score of every social network post related to the
news item. Heterogeneous networks, on the other hand, have multiple types of
nodes or edges; their main benefit is the capability to represent and encode
data and relations from various perspectives. Well-known networks used for
detecting false information are knowledge, stance and interaction networks.
The first type incorporates linked open data, such as DBdata and the Google
Relation Extraction Corpus (GREC), as a heterogeneous network topology. When
fact-checking by means of a knowledge graph, one can verify whether the
content of a news item can be derived from the facts present in the knowledge
network, whilst stances (viewpoints) represent people's attitudes towards the
information, i.e. in favour, conflicting, etc. [70].
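The credibility-network idea sketched above can be illustrated with a tiny iterative update: supporting posts pull each other's credibility scores together, while conflicting posts erode one another's scores. This is a hand-rolled toy, not the algorithm of any cited work; the learning rate, erosion factor and post identifiers are all assumptions:

```python
def update_credibility(cred, edges, lr=0.3, iters=10):
    """Iteratively reconcile post credibilities over signed opinion edges.

    cred  : dict post -> initial credibility in [0, 1]
    edges : list of (post_a, post_b, sign), sign +1 supporting, -1 conflicting
    """
    cred = dict(cred)
    for _ in range(iters):
        new = dict(cred)
        for a, b, s in edges:
            if s > 0:   # supporting posts converge toward their mean
                mid = (cred[a] + cred[b]) / 2
                new[a] += lr * (mid - cred[a])
                new[b] += lr * (mid - cred[b])
            else:       # conflicting posts erode each other's score
                new[a] -= lr * 0.1 * cred[b]
                new[b] -= lr * 0.1 * cred[a]
        cred = {k: min(1.0, max(0.0, v)) for k, v in new.items()}
    return cred

posts = {"p1": 0.9, "p2": 0.8, "p3": 0.2}
edges = [("p1", "p2", +1), ("p1", "p3", -1)]
final = update_credibility(posts, edges)
```

After a few iterations the two mutually supporting posts agree closely, while the post conflicting with a highly credible one loses further credibility, which is the behaviour the prose describes.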
In false news detection, network analysis is performed to evaluate the truth
value of a news item; this can be formalised as a classification problem that
requires extracting relevant features and building models. During feature
extraction, the distinguishing qualities of news items are captured to create
effective representations; on this basis, models are built that learn to
transform the features. Recent advances in network representation learning,
e.g. network embeddings and deep neural networks, make it possible to capture
the features of news items better from auxiliary data such as friendship
networks, temporal user engagements and interaction networks. Moreover,
knowledge networks as secondary data can make it easier to check the
truthfulness of news pieces through network matching operations such as path
finding and flow optimisation.
At the network level, social media data on news propagation and news spreaders
has not yet been examined to a significant degree, nor has it been used much
for explainable false information detection. The authors of [71] proposed a
network-based, pattern-driven model for detecting false information that
proved robust against manipulation of news items by malicious actors. The
model exploits patterns in the dissemination of fake content within social
networks: compared with true news, false news tends to (_i_) spread further
and (_ii_) engage more spreaders, who often prove to be (_iii_) more strongly
engaged with the news and (_iv_) more densely connected within the network.
Features representing these patterns were developed at several network levels
(node-, ego-, triad-, community- and network-level) and can be used in a
supervised ML framework for false news detection. The patterns studied concern
the spread of the news items, the users responsible for spreading them, and
the relations among those spreaders. Another example of network analysis for
false news detection can be found in [72], where a framework based on a
tri-relationship model (TriFN) among news article, spreader and publisher is
proposed. For such a network, a hybrid framework is composed of three major
parts contributing to detection: (_i_) entity embedding and representation,
(_ii_) relation modelling, and (_iii_) semi-supervised learning. The model has
four meaningful parameters: $\alpha$ and $\beta$ control the contributions of
social relationships and user–news relations, $\gamma$ controls the
contribution of publisher partisanship, and $\eta$ controls the contribution
of the semi-supervised classifier. According to [72], TriFN performs well even
at the early stages of news dissemination.
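Two of the propagation patterns mentioned above, how far a cascade spreads and how many spreaders it engages, reduce to simple graph traversals over the re-share tree. A minimal sketch (the tree structure and user names are invented for illustration):

```python
from collections import deque

def cascade_features(root, children):
    """Depth and spreader count of a news cascade.

    root     : the original poster
    children : dict mapping a user to the users who re-shared from them
    """
    depth, nodes = 0, 0
    queue = deque([(root, 0)])
    while queue:
        user, d = queue.popleft()
        depth = max(depth, d)
        nodes += 1
        for nxt in children.get(user, []):
            queue.append((nxt, d + 1))
    return depth, nodes - 1   # exclude the original poster from the spreaders

children = {"src": ["u1", "u2"], "u1": ["u3"], "u3": ["u4"]}
depth, n_spreaders = cascade_features("src", children)
```

Such node-/network-level numbers are exactly the kind of interpretable features a supervised framework like the one in [71] can consume.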
### 3.4 Image based analysis and detection of image manipulations
Over the last decade, digital images have thoroughly replaced conventional
photographs. Photos can now be taken not only with cameras but also with
smartphones, tablets, smart watches and even eyeglasses; as a result,
thousands of billions of digital photos are taken annually. The immense
popularity of visual information has fostered the development of image-editing
tools such as _Photoshop_ or _Affinity Photo_. Such software lets users
manipulate real-life pictures at every level, from low-level adjustments (e.g.
brightness) to high-level semantic content (e.g. replacing a person in a
family photograph). The possibilities offered by photo manipulation tools are,
however, a double-edged sword: they make pictures more pleasing to the eye and
inspire users to express and share their visual ideas, but they also make it
easier to forge photo content without leaving visible traces, and hence to
spread fake news. Over time, researchers have therefore developed methods for
detecting photo tampering; these techniques focus on copy-move forgeries,
splice forgeries, inpainting, image-wise adjustments (such as resizing,
histogram equalisation, cropping, etc.) and others.
So far, numerous researchers have presented approaches for detecting fake news
and image tampering. In [73], the authors proposed an algorithm able to
recognise whether and where a picture has been altered, exploiting the
characteristic footprints that different camera models leave in images. It
relies on the fact that all pixels of an untampered picture should appear as
if they had been captured by a single device; if the image has been composed
from multiple pictures, footprints of several devices can be found. In this
algorithm, a _Convolutional Neural Network_ (CNN) extracts the characteristics
typical of a specific camera model from the studied image. The same algorithm
was also used in [74].
The authors of [75] used the _Structural Similarity Index Measure_ (SSIM)
instead of more classical modification detection measures such as the _mean
squared error_ (MSE) and the _peak signal-to-noise ratio_ (PSNR). Moreover,
they moved the whole solution to cloud computing, which is claimed to provide
security, faster deployment, and more accessible and usable data.
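The three measures compared above can be written down directly. The sketch below implements MSE, PSNR and a single-window (global) SSIM in NumPy; note that real SSIM implementations slide a local window over the image and average, so this global variant is a simplification for illustration:

```python
import numpy as np

def mse(x, y):
    return float(np.mean((x - y) ** 2))

def psnr(x, y, peak=255.0):
    m = mse(x, y)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

def ssim_global(x, y, peak=255.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM between two grey-level images."""
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
tampered = img.copy()
tampered[16:32, 16:32] = 0   # simulate a crude splice
```

An identical pair yields SSIM = 1 and infinite PSNR, while the spliced region pushes SSIM below 1, which is the drop such detectors threshold on.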
The technique presented in [76] uses the chrominance of the _YCbCr_ colour
space, which is considered to detect the distortion resulting from forgery
more efficiently than the other colour components. During feature extraction,
the selected channel is segmented into overlapping blocks. The authors
introduce the _Otsu-based Enhanced Local Ternary Pattern_ (OELTP), an
innovative visual descriptor for extracting features from these blocks; it
extends the _Enhanced Local Ternary Pattern_ (ELTP), which codes the
neighbourhood over a range of three values (-1, 0, 1). Subsequently, the
energy of the OELTP features is assessed to reduce the feature dimensionality,
and the resulting features are used to train an SVM classifier. Lastly, the
image is labelled as either genuine or manipulated.
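The three-valued neighbourhood coding that ELTP builds on can be sketched in a few lines: each 3x3 neighbour is coded +1, 0 or -1 depending on whether it exceeds, matches or falls below the centre pixel within a threshold `t`. This is a basic local ternary pattern, not the full OELTP descriptor (which adds Otsu-based thresholding and energy selection); the threshold value and example block are assumptions:

```python
import numpy as np

def ternary_codes(block, t=5):
    """Code each 3x3 neighbourhood of `block` over {-1, 0, 1}:
    +1 if neighbour > centre + t, -1 if neighbour < centre - t, else 0."""
    h, w = block.shape
    codes = np.zeros((h - 2, w - 2, 8), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = block[1:h - 1, 1:w - 1]
    for k, (dy, dx) in enumerate(offsets):
        nb = block[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes[..., k] = np.where(nb > centre + t, 1,
                                 np.where(nb < centre - t, -1, 0))
    return codes

block = np.array([[10, 10, 10],
                  [10, 10, 30],
                  [10,  2, 10]])
codes = ternary_codes(block)
```

Histograms of these ternary codes over the overlapping chrominance blocks would then serve as the per-block feature vectors that the SVM consumes.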
The _YCbCr_ colour space was also used in [77]. The presented approach
exploits the fact that, during splicing, at least two pictures are involved in
copying and pasting. When tampering with JPEG images, the forgery might not
follow the same compression pattern: a piece of an uncompressed picture may be
pasted into a compressed JPEG file, or the other way round. Re-saving such
manipulated images in JPEG format with different quality factors can produce
double quantization artefacts in the DCT coefficients.
In the method proposed in [78], the input file is pre-processed by converting
its colour space from RGB to grey level; then the image features (_Histogram
of Oriented Gradients_ (HOG), _Discrete Wavelet Transform_ (DWT) and _Local
Binary Patterns_ (LBP)) are extracted from the grey-level image. This fitting
group of features is merged into a single feature vector, and a _Logistic
Regression_ classifier is used to build a model discriminating manipulated
from authentic pictures. The suggested technique improves detection accuracy
by combining spatial- and frequency-based features.
Two types of features were also used in [79]. In this work, the authors
convert the source image to grey scale and apply the _Haar Wavelet Transform_;
feature vectors are then computed using HOG and _Local Binary Patterns
Variance_. The classification step is based on the Euclidean distance.
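Euclidean-distance classification of such fused feature vectors amounts to nearest-neighbour assignment. A minimal sketch, where the reference descriptors stand in for hypothetical HOG + LBP-variance histograms (the values are invented, not taken from [79]):

```python
import numpy as np

def classify(feature, references):
    """Assign the label of the Euclidean-nearest reference vector."""
    best_label, best_dist = None, float("inf")
    for label, ref in references:
        d = float(np.linalg.norm(feature - ref))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical fused descriptors (e.g. HOG bins + LBP-variance bins).
references = [
    ("genuine",  np.array([0.9, 0.1, 0.2, 0.0])),
    ("tampered", np.array([0.1, 0.8, 0.7, 0.5])),
]
label = classify(np.array([0.8, 0.2, 0.1, 0.1]), references)
```

The appeal of this classifier is that it needs no training beyond storing labelled reference vectors, at the cost of sensitivity to feature scaling.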
In [80], the _Speeded Up Robust Features_ (SURF) and _Binary Robust Invariant
Scalable Keypoints_ (BRISK) descriptors were used to reveal and localise
single and multiple copy-move forgeries. SURF features prove robust against
diverse post-processing attacks such as rotation, blurring and additive noise,
while BRISK features appear equally robust for detecting scale-invariant
forged regions and poorly localised object keypoints within the forged image.
However, _state-of-the-art_ techniques deal not only with copy-move forgeries
but with other types of modification, too. The paper [81] presents contrast
enhancement detection based on numerical measures of images: the proposed
method divides the image into non-overlapping blocks and then computes the
mean, variance, skewness and kurtosis of each block; it also uses DWT
coefficients and an SVM for original/tampered classification. In [82], the
authors claim that real and fake colorized images differ in the _Hue_ and
_Saturation_ channels of the _Hue-Saturation-Value_ (HSV) colour space; the
presented approach, using histogram equalisation and further statistical
features, enables authenticity verification in the colorization domain.
An overview of all the feature extraction tools for fake news detection
mentioned in this section is presented in Table 1.
Table 1: Overview of ML extractors for fake news detection.

category | extractor | context of extraction
---|---|---
nlp | _N-grams_ | Primitive tool of tokenization.
 | _TF_ | Normalization of tokens to the documents.
 | _TF-IDF_ | Normalization of tokens to the corpora.
psycholinguistic | _LIWC_ | Collection of 93 individual characteristics from word counts to grammar analysis.
 | _POS Tags_ | Assignment of words to parts of speech.
 | _SlangNet_ , _ColloquialWordNet_ , _SentiWordNet_ , _SentiStrength_ | Various models for recognising slang, colloquial and sentiment expressions.
syntax | _PCFG_ | Construction of distributed trees describing the syntactic structure of text.
 | _Tree LSTMs_ | Application of Long Short-Term Memory networks to the analysis of content structure.
non-linguistic | _SCAN_ | Scientific content analysis.
reputation | _MED_ | Reputation vector of a publisher.
 | _Notos_ | Reputation vector from DNS characteristics.
 | _Segugio_ | Tracking the appearance of new malware-control domain names within huge ISP networks.
image | _CNNs_ | Primitive tool of feature extraction from digital signals.
 | _SSIM_ | Modification detection metric.
 | _OELTP_ | Visual block descriptor.
 | _HOG, DWT, LBP_ | Primitive tools of feature extraction.
 | _SURF, BRISK_ | Detection of copy-move forgeries.
## 4 Research interest and popular datasets used for detecting fake news
### 4.1 Is fake news an important issue for the research community?
The rising interest in the fake news detection domain can easily be noticed by
checking how many scientific articles have addressed this topic in commonly
used, relevant databases. These metrics are shown in Figure 4, which depicts
the number of publications on fake news detection per year and database. In
the _Scopus_ database there are 5 articles associated with the '_fake news
detection_' keyword published in 2016, 44 in 2017, 150 in 2018 and 371 in
2019. In the _Web of Science_ database there are 4 articles published in 2016,
24 in 2017, 62 in 2018 and 86 in 2019. A similar trend appears in the
_IEEExplore_ database, with 3, 16, 59 and 133 articles published in 2016,
2017, 2018 and 2019 respectively.
Figure 4: Evolution of the number of publications _per year_ retrieved from
the keyword "_fake news detection_ " according to _Web of Science_ , _Scopus_
and _IEEExplore_.
Besides published papers, another key metric of the interest of the research
community and funding agencies in a certain topic is the number of funded
projects in competitive calls. Accordingly, a list of EU-funded research
projects is shown in Table 2. Data were compiled from the CORDIS database
(https://cordis.europa.eu/projects/enl, July 2020) by searching for the terms
'_fake news_', '_disinformation_' and '_deepfake_'.
Table 2: EU-funded research projects. Data extracted from the CORDIS database in July 2020.

year | number | project acronyms
---|---|---
2014 | 1 | pheme
2015 | 0 | —
2016 | 5 | comprop, debunker, dante, encase, _InVID_
2017 | 1 | botfind
2018 | 7 | dynnet, fandango, _GoodNews_ , jolt, _SocialTruth_ , soma, _WeVerify_
2019 | 8 | _Factmata_ , fakeology, digiact, _Media and Conspiracy_ , newtral, qi, rusinform, truthcheck
2020 | 5 | diced, _mistrust_ , _News in groups_ , printout, radicalisation
Among other EU projects, the _Social Truth_ project can be highlighted. Funded
under the _Horizon 2020_ R&D programme, it addresses the burning issue of fake
news. Its purpose is to deal with this matter in a way that avoids vendor
lock-in, to build up trust and reputation by means of _blockchain_ technology,
to incorporate _Lifelong Learning Machines_ capable of spotting paradigm
changes in false information, and to provide a handy digital companion that
supports individuals in verifying the services they use from within their web
browsers. To meet this objective, ICT engineers, data scientists and
_blockchain_ experts from both industry and academia, together with end-users
and use-case providers, have formed a consortium to combine their efforts.
Further details may be found in [83].
It can be concluded that the European Union (EU) works hard to combat online
misinformation and to educate society; as a result, increasing amounts of
economic resources are being invested.
### 4.2 Image tampering datasets
There are multiple datasets of modified images available online. One of them
is the CASIA dataset (in fact, two versions: CASIA ITDE _v1.0_ and CASIA ITDE
_v2.0_), described in [84] and [85]. The ground truth input pictures are
sourced from the CASIA image tampering detection evaluation (ITDE) _v1.0_
database; it comprises images of eight categories (animal, architecture,
article, character, nature, plant, scene and texture), sized 384x256 or
256x384. Compared with CASIA ITDE _v1.0_, the newer CASIA ITDE _v2.0_ is more
challenging and comprehensive: it applies post-processing, such as blurring or
filtering of the tampered parts, to make the manipulated images look realistic
to the eye, and each genuine image may have several tampered versions. In
CASIA ITDE _v2.0_, the manipulated images are created by applying
crop-and-paste operations in _Adobe Photoshop_ to the genuine pictures, and
the altered areas may have irregular shapes and various sizes, rotations or
distortions.
Among the datasets of modified images available online, the one proposed in
[86] should also be mentioned. It contains unmodified/original images,
original images with JPEG compression, 1-to-1 splices (i.e. direct copies of a
snippet into an image), splices with added Gaussian noise, splices with added
compression artefacts, rotated copies, scaled copies, combined effects, and
copies pasted multiple times. Most subsets also exist in downscaled versions.
Two image formats are available, JPEG and TIFF, although the TIFF variant can
reach a size of up to 30 GB.
Another dataset was provided by the _CVIP Group_ at the _Department of
Industrial and Digital Innovation_ (_DIID_) of the University of Palermo [87].
It comprises medium-sized images (most of them 1000x700 or 700x1000) and is
divided into several subsets (_D0, D1, D2_). The first subset, _D0_, is
composed of 50 uncompressed images with simply translated copies. For the
remaining two subsets (_D1, D2_), 20 uncompressed images showing simple scenes
(a single object on a simple background) were selected; _D1_ was created by
copy-pasting rotated elements, and _D2_ by copy-pasting scaled ones.
The next dataset, proposed in [88], is the _CG-1050_ dataset, comprising 100
original images, 1050 tampered images and their corresponding masks. It is
organised into four directories: original, tampered and mask images, plus a
description file. The directory of original images contains 15 colour and 85
grayscale images; the directory of tampered images comprises 1050 images
obtained by one of the following tampering methods: copy-move, cut-paste,
retouching and colorizing.
There are also datasets consisting of videos, such as the _Deepfake Detection
Challenge Dataset_ [18]. It contains 124k videos modified using eight facial
modification algorithms and is useful for studying deepfake video
manipulation.
### 4.3 Fake news datasets
The LIAR dataset, introduced in [89], contains almost thirteen thousand
manually labelled short statements in varied contexts, taken from the website
_politifact.com_. The data were collected over a span of a decade and labelled
as _pants-on-fire_ , _false_ , _barely-true_ , _half-true_ , _mostly-true_ ,
and _true_. The label distribution is fairly well balanced: besides 1,050
_pants-on-fire_ cases, there are between 2,063 and 2,638 examples of each
label.
The dataset used in [29] in fact consists of three datasets: (_i_) _Buzzfeed_
– data collected from _Facebook_ concerning the US Presidential Election;
(_ii_) _Political news_ – news taken from trusted sources (_The Guardian_ ,
_BBC_ , etc.), fake sources (_Ending the Fed_ , _Infowars_ , etc.) and satire
(_The Onion_ , _SatireWire_ , etc.); and (_iii_) _Burfoot and Baldwin_ – a
dataset proposed in 2009, containing mostly real news stories. Unfortunately,
the combined dataset is not well balanced: it contains 4,111 real, 110 fake
and 308 satire news items.
The CREDBANK dataset was introduced in [90]. It is an extensive crowd-sourced
dataset containing about 60 million tweets covering 96 days, starting from
October 2015. The tweets relate to more than 1,000 news events, each checked
for credibility by 30 editors from _Amazon Mechanical Turk_.
The periodically updated _FakeNewsNet_ repository was proposed by the authors
of [91]. This dataset combines news content (source, body, multimedia) and
social context (user profile, followers/followees) for fake and truthful
material gathered from _Snopes_ and _BuzzFeed_ that has been reposted and
shared on _Twitter_.
The next dataset is the _ISOT Fake News Dataset_ , described in [48]. It is
very well balanced, containing over 12,600 false and over 12,600 true news
items. The dataset was gathered from real-world outlets: the truthful items
were collected by crawling articles from _Reuters.com_ , whereas the false
articles were picked up from diverse unreliable web pages flagged by
_Politifact_ (a US-based fact-checking organisation) and _Wikipedia_.
A multimodal method for detecting fake news was presented in [92]. The authors
define the features that may be used in the analysis: textual features
(statistical or semantic), visual features (image analysis) and social context
features (followers, hashtags, retweets). In the presented approach, textual
and visual features were used for fake news detection.
Table 3: The review of existing datasets.

dataset | quantity | category | citation
---|---|---|---
LIAR | 1,050 | _pants-on-fire_ | [89]
 | 2,063 – 2,638 | _others_ |
Buzzfeed dataset + Political news dataset + Burfoot and Baldwin dataset | 4,111 | _real news_ | [29]
 | 110 | _fake_ |
 | 308 | _satire_ |
CREDBANK | _60 million tweets_ | | [90]
_FakeNewsNet_ | _no data_ | | [91]
_ISOT Fake News Dataset_ | > 12,600 | _real news_ | [48]
 | > 12,600 | _fake_ |
_Twitter + Weibo_ | 12,647 | _real news_ | [92]
 | 10,805 | _fake_ |
 | 10,042 | _images_ |
## 5 Conclusions, further challenges and way forward
This section draws the main conclusions of the present research, regarding the
application of advanced ML techniques. Additionally, open challenges in the
disinformation arena are pointed out.
### 5.1 Streaming nature of fake news
It should be underlined that most papers addressing fake news detection ignore
the streaming nature of the task. The profile of items labelled as fake news
may shift over time, because spreaders of false news are aware that automatic
detection systems could identify them; consequently, they try to avoid having
their messages flagged by changing some of their characteristics. To detect
them continuously, ML-driven systems must react to such changes, known as
_concept drift_ [93], which requires equipping detection systems with
adaptation mechanisms. Only a few papers have attempted to develop fake news
detection algorithms that account for the streaming nature of the data [28].
Although a number of researchers have noted that social media should be
treated as data streams, only Wang and Terano [94] used appropriate data
stream analysis techniques; however, their method is restricted to rather
short streams and probably does not reflect the non-stationary nature of the
data. Ksieniewicz et al. [95] employed NLP techniques and treated incoming
messages as a non-stationary data stream; computer experiments on real-life
false news datasets prove the usefulness of the suggested approach.
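A minimal way to make a detector drift-aware is to monitor its windowed accuracy on the stream and raise a flag when it falls well below the best level seen so far, a simplified variant of standard drift-detection rules (the window size, threshold ratio and synthetic stream below are illustrative assumptions, not taken from the cited works):

```python
from collections import deque

class DriftMonitor:
    """Flag concept drift when windowed accuracy drops below a fraction
    of the best windowed accuracy seen so far (a simplified DDM-style rule)."""
    def __init__(self, window=100, ratio=0.8):
        self.hits = deque(maxlen=window)
        self.best = 0.0
        self.ratio = ratio

    def add(self, correct):
        """Record one prediction outcome; return True if drift is signalled."""
        self.hits.append(1.0 if correct else 0.0)
        acc = sum(self.hits) / len(self.hits)
        if len(self.hits) == self.hits.maxlen:
            self.best = max(self.best, acc)
            return acc < self.ratio * self.best
        return False

monitor = DriftMonitor(window=50)
drift_at = None
for t in range(300):
    correct = t < 150 or t % 3 == 0   # detector accuracy collapses after t=150
    if monitor.add(correct) and drift_at is None:
        drift_at = t
```

On this synthetic stream the flag fires shortly after the simulated shift at t = 150; in a deployed system the signal would trigger retraining or adaptation of the detector.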
### 5.2 Lifelong learning solutions
Lifelong ML systems may transcend the limitations of canonical learning
algorithms, which require a substantial set of training samples and are suited
to isolated single-task learning [96]. Key capabilities that such systems need
in order to exploit previously learned knowledge include feature modelling,
retaining what has been learnt from past tasks, transferring knowledge to
upcoming learning tasks, updating previously learnt knowledge, and
incorporating user feedback. Additionally, the notion of a '_task_', present
in several conventional definitions of lifelong ML models [97], proves
difficult to pin down in many real-life setups (it is often hard to tell where
one task ends and the next begins). One of the major difficulties is the
_stability–plasticity_ dilemma, i.e. the trade-off a learning system must
strike between acquiring new information and not forgetting the old [98]. It
manifests in the catastrophic forgetting phenomenon, in which a neural network
entirely forgets previously learned information after being exposed to new
information.
We believe that lifelong learning systems and methods would perfectly fit the
fake news problem, where the content, style, language and types of fake news
change rapidly.
### 5.3 Explainability of ML-based fake news detection systems
Additional point which must be considered at present time is the
explainability of ML and ML-based fake news detection methods and systems.
Unfortunately, numerous scientists and systems architects utilise deep-
learning capacities (along with other black-box ML techniques) in performing
detecting or prediction assignments. However, the outcome produced by the
algorithms is given with no explanation. Explainability concerns the extent to
which a human is able to comprehend and explain (in a literal way) the
internal mechanics driving the AI/ML systems.
Indeed, for ML-based fake news detection methods to be successful and widely
trusted by different communities (journalism, security, etc.), the relevant
decision-makers in a realistic environment need an answer to the following
question: why does the system give the answers it does [99]?
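One widely used model-agnostic way to begin answering that question is to measure how much a detector's accuracy relies on each input feature. The sketch below (illustrative; plain NumPy, with a hypothetical stand-in for a trained black-box detector) implements permutation feature importance: shuffle one feature at a time and record the resulting drop in accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the true label depends only on feature 0 (e.g. a single stylistic cue).
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def predict(X):
    # hypothetical stand-in for any trained black-box detector
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, rng):
    # importance of feature j = drop in accuracy after destroying it by shuffling
    base = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - np.mean(predict(Xp) == y))
    return np.array(drops)

imp = permutation_importance(predict, X, y, rng)
# imp[0] is large (the model relies on feature 0); imp[1] and imp[2] are ~0
```

Reporting such importances alongside a prediction is a modest but practical step toward the explainability the communities above demand.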
### 5.4 The emergence of deepfakes
It is worth mentioning that, going one step further in this subject, a new
phenomenon has recently appeared, referred to as _deepfakes_. Initially,
these could be defined as hyper-realistic videos applying face swaps that
leave little trace of having been tampered with [100]. This ultimate form of
manipulation now consists in the generation of fake media resources using AI
face-swap technology. Graphical deepfakes (both pictures and videos) mainly
depict people whose faces have been substituted; there are also deepfake
recordings in which people's voices are simulated. Although there can be
productive uses of deepfakes [101], they may also imply negative economic
effects [102], as well as severe legal ones
(https://spectrum.ieee.org/tech-talk/computing/software/what-are-deepfakes-how-are-they-created).
Although an audit has revealed that the software to generate deepfake videos
is still hard to use
(https://spectrum.ieee.org/tech-talk/computing/software/the-worlds-first-audit-of-deepfake-videos-and-tools-on-the-open-web),
such fake content is on the rise, affecting not only celebrities
(https://www.bbc.com/news/av/technology-40598465) but also less-known people
(https://www.bbc.co.uk/bbcthree/article/779c940c-c6c3-4d6b-9104-bef9459cc8bd,
https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them).
As some authors have previously stated, technology will play a keystone role
in fighting deepfakes [103]. In this sense, the authors of [104] have very
recently presented an approach that accurately detects fake portrait videos
($97.29\%$ accuracy) and identifies the particular generative model
underlying a deepfake from spatiotemporal patterns in biological signals,
under the assumption that a synthetic person does not, for instance, show the
same heartbeat pattern as a real one. Nevertheless, contributions are also
required from other fields, such as the legal, educational and political ones
[101, 105].
As with other open challenges in cybersecurity, fake news and deepfakes
require increased resources for detection technology; identification rates
must improve even as the sophistication of disinformation continuously grows
[102].
### 5.5 Final remarks
This work has presented the results of a comprehensive and systematic study
of research papers, projects and initiatives concerning the detection of fake
news (online disinformation). Our goal was to show current and possible
trends in this much-needed area of computer science research, driven by the
demands of societies worldwide. Additionally, the resources (methods,
datasets, etc.) available for research on this topic have been thoroughly
analysed.
Beyond analysing previous work, the present study aims to motivate
researchers to take up challenges in this domain, which increasingly impacts
current societies. More precisely, we identify the challenges still to be
addressed and propose exploring them.
## Acknowledgement
This work is supported by the SocialTruth project (http://socialtruth.eu),
which has received funding from the European Union's Horizon 2020 research and
innovation programme under grant agreement No. 825477.
## References
* Parikh and Atrey [2018] S. B. Parikh, P. K. Atrey, Media-rich fake news detection: A survey, in: 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), IEEE, 2018, pp. 436–441.
* Zhou and Zafarani [2018] X. Zhou, R. Zafarani, Fake news: A survey of research, detection methods, and opportunities, 2018. arXiv:1812.00315.
* Conroy et al. [2015] N. K. Conroy, V. L. Rubin, Y. Chen, Automatic deception detection: Methods for finding fake news, Proceedings of the Association for Information Science and Technology 52 (2015) 1–4.
* Zubiaga et al. [2018] A. Zubiaga, A. Aker, K. Bontcheva, M. Liakata, R. Procter, Detection and resolution of rumours in social media: A survey, ACM Computing Surveys (CSUR) 51 (2018) 1–36.
* Sharma et al. [2019] K. Sharma, F. Qian, H. Jiang, N. Ruchansky, M. Zhang, Y. Liu, Combating fake news: A survey on identification and mitigation techniques, ACM Transactions on Intelligent Systems and Technology (TIST) 10 (2019) 1–42.
* Tschiatschek et al. [2018] S. Tschiatschek, A. Singla, M. Gomez Rodriguez, A. Merchant, A. Krause, Fake news detection in social networks via crowd signals, in: Companion Proceedings of the The Web Conference 2018, 2018, pp. 517–524. doi:10.1145/3184558.3188722.
* Posetti and Matthews [2018] J. Posetti, A. Matthews, A short guide to the history of 'fake news' and disinformation, International Center for Journalists 7 (2018).
* Cline [2015] E. H. Cline, 1177 BC: The year civilization collapsed, Princeton University Press, 2015.
* Neander and Marlin [2010] J. Neander, R. Marlin, Media and propaganda: The northcliffe press and the corpse factory story of world war i, Global Media Journal 3 (2010) 67.
* Herzstein [1978] R. Herzstein, The most infamous propaganda campaign in history, GP Putnam & Sons (1978).
* Zhang and Ghorbani [2020] X. Zhang, A. A. Ghorbani, An overview of online fake news: Characterization, detection, and discussion, Information Processing & Management 57 (2020) 102025.
* Allcott and Gentzkow [2017] H. Allcott, M. Gentzkow, Social Media and Fake News in the 2016 Election, Working Paper 23089, National Bureau of Economic Research, 2017. URL: http://www.nber.org/papers/w23089. doi:10.3386/w23089.
* Kula et al. [2020] S. Kula, M. Choraś, R. Kozik, P. Ksieniewicz, M. Woźniak, Sentiment analysis for fake news detection by means of neural networks, in: International Conference on Computational Science, Springer, 2020, pp. 653–666.
* de Cock Buning [2018] M. de Cock Buning, A multi-dimensional approach to disinformation: Report of the independent High level Group on fake news and online disinformation, Publications Office of the European Union, 2018.
* Canada. Parliament. House of Commons. Standing Committee on Access to Information, Privacy and Ethics [2018] B. Zimmer, Democracy under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly: Report of the Standing Committee on Access to Information, Privacy and Ethics, House of Commons = Chambre des communes, Canada, 2018.
* Collins et al. [2019] D. Collins, C. Efford, J. Elliot, P. Farrelly, S. Hart, J. Knight, G. Watling, Disinformation and 'fake news': Final report, 2019.
* Giełczyk et al. [2019] A. Giełczyk, R. Wawrzyniak, M. Choraś, Evaluation of the existing tools for fake news detection, in: IFIP International Conference on Computer Information Systems and Industrial Management, Springer, 2019, pp. 144–151.
* Dolhansky et al. [2020] B. Dolhansky, J. Bitton, B. Pflaum, J. Lu, R. Howes, M. Wang, C. C. Ferrer, The deepfake detection challenge dataset, 2020. arXiv:2006.07397.
* Harris [1954] Z. S. Harris, Distributional structure, Word 10 (1954) 146–162.
* Saquete et al. [2020] E. Saquete, D. Tomás, P. Moreda, P. Martínez-Barco, M. Palomar, Fighting post-truth using natural language processing: A review and open challenges, Expert Systems with Applications 141 (2020). doi:10.1016/j.eswa.2019.112943.
* Luhn [1957] H. P. Luhn, A statistical approach to mechanized encoding and searching of literary information, IBM Journal of research and development 1 (1957) 309–317.
* Jones [1972] K. S. Jones, A statistical interpretation of term specificity and its application in retrieval, Journal of documentation (1972).
* Hassan et al. [2020] N. Hassan, W. Gomaa, G. Khoriba, M. Haggag, Credibility detection in twitter using word n-gram analysis and supervised machine learning techniques, International Journal of Intelligent Engineering and Systems 13 (2020) 291–300. doi:10.22266/ijies2020.0229.27.
* Zubiaga et al. [2016] A. Zubiaga, G. W. S. Hoi, M. Liakata, R. Procter, Pheme dataset of rumours and non-rumours, Figshare. Dataset (2016).
* Bharadwaj and Shao [2019] P. Bharadwaj, Z. Shao, Fake news detection with semantic features and text mining, International Journal on Natural Language Computing (IJNLC) Vol 8 (2019).
* Wynne and Wint [2019] H. Wynne, Z. Wint, Content based fake news detection using n-gram models, 2019. doi:10.1145/3366030.3366116.
* Kaur et al. [2020] S. Kaur, P. Kumar, P. Kumaraguru, Automating fake news detection system using multi-level voting model, Soft Computing 24 (2020) 9049–9069.
* Ksieniewicz et al. [2019] P. Ksieniewicz, M. Choraś, R. Kozik, M. Woźniak, Machine learning methods for fake news classification, in: H. Yin, D. Camacho, P. Tiño, A. J. Tallón-Ballesteros, R. Menezes, R. Allmendinger (Eds.), Intelligent Data Engineering and Automated Learning - IDEAL 2019 - 20th International Conference, Manchester, UK, November 14-16, 2019, Proceedings, Part II, volume 11872 of Lecture Notes in Computer Science, Springer, 2019, pp. 332–339.
* Horne and Adali [2017] B. D. Horne, S. Adali, This just in: fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news, in: Eleventh International AAAI Conference on Web and Social Media, 2017.
* Castillo et al. [2011] C. Castillo, M. Mendoza, B. Poblete, Information credibility on twitter, in: Proceedings of the 20th International Conference on World Wide Web, ACM, 2011, pp. 675–684.
* Telang et al. [2019] H. Telang, S. More, Y. Modi, L. Kurup, An empirical analysis of classification models for detection of fake news articles, 2019. doi:10.1109/ICECCT.2019.8869504.
* Kong et al. [2020] S. Kong, L. Tan, K. Gan, N. Samsudin, Fake news detection using deep learning, 2020, pp. 102–107. doi:10.1109/ISCAIE47305.2020.9108841.
* Zhou and Zhang [2008] L. Zhou, D. Zhang, Following linguistic footprints: Automatic deception detection in online communication, Communications of the ACM 51 (2008) 119–122.
* Zhang et al. [2016] D. Zhang, L. Zhou, J. L. Kehoe, I. Y. Kilic, What online reviewer behaviors really matter? effects of verbal and nonverbal behaviors on detection of fake online reviews, Journal of Management Information Systems 33 (2016) 456–481.
* Marouf et al. [2019] A. Marouf, R. Ajwad, A. Ashrafi, Looking behind the mask: A framework for detecting character assassination via troll comments on social media using psycholinguistic tools, 2019. doi:10.1109/ECACE.2019.8679154.
* Pennebaker et al. [2001] J. W. Pennebaker, M. E. Francis, R. J. Booth, Linguistic inquiry and word count: Liwc 2001, Mahway: Lawrence Erlbaum Associates 71 (2001) 2001.
* Ott et al. [2011] M. Ott, Y. Choi, C. Cardie, J. T. Hancock, Finding deceptive opinion spam by any stretch of the imagination, arXiv preprint arXiv:1107.4557 (2011).
* Robinson et al. [2013] R. L. Robinson, R. Navea, W. Ickes, Predicting final course performance from students’ written self-introductions, Journal of Language and Social Psychology 32 (2013) 469–479. doi:10.1177/0261927x13476869.
* Huang et al. [2012] C.-L. Huang, C. K. Chung, N. Hui, Y.-C. Lin, Y.-T. Seih, B. C. Lam, W.-C. Chen, M. H. Bond, J. W. Pennebaker, The development of the chinese linguistic inquiry and word count dictionary., Chinese Journal of Psychology (2012).
* del Pilar Salas-Zárate et al. [2014] M. del Pilar Salas-Zárate, E. López-López, R. Valencia-García, N. Aussenac-Gilles, Á. Almela, G. Alor-Hernández, A study on LIWC categories for opinion mining in spanish reviews, Journal of Information Science 40 (2014) 749–760. doi:10.1177/0165551514547842.
* Farías et al. [2020] D. I. H. Farías, R. Prati, F. Herrera, P. Rosso, Irony detection in twitter with imbalanced class distributions, Journal of Intelligent & Fuzzy Systems (2020) 1–17.
* Stoick et al. [2019] B. Stoick, N. Snell, J. Straub, Fake news identification: A comparison of parts-of-speech and n-grams with neural networks, volume 10989, 2019. doi:10.1117/12.2521250.
* Dhuliawala et al. [2016] S. Dhuliawala, D. Kanojia, P. Bhattacharyya, Slangnet: A wordnet like resource for english slang, in: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), 2016, pp. 4329–4332.
* McCrae et al. [2017] J. P. McCrae, I. Wood, A. Hicks, The colloquial wordnet: Extending princeton wordnet with neologisms, in: International Conference on Language, Data and Knowledge, Springer, 2017, pp. 194–202.
* Baccianella et al. [2010] S. Baccianella, A. Esuli, F. Sebastiani, Sentiwordnet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining., in: Lrec, volume 10, 2010, pp. 2200–2204.
* Thelwall [2017] M. Thelwall, The heart and soul of the web? sentiment strength detection in the social web with sentistrength, in: Cyberemotions, Springer, 2017, pp. 119–134.
* Ahmed and Traore [2013] A. A. Ahmed, I. Traore, Biometric recognition based on free-text keystroke dynamics, IEEE transactions on cybernetics 44 (2013) 458–472.
* Ahmed et al. [2018] H. Ahmed, I. Traore, S. Saad, Detecting opinion spams and fake news using text classification, Security and Privacy 1 (2018) e9.
* Stolcke and Segal [1994] A. Stolcke, J. Segal, Precise n-gram probabilities from stochastic context-free grammars, arXiv preprint cmp-lg/9405016 (1994).
* Kumar and Carley [2020] S. Kumar, K. Carley, Tree lstms with convolution units to predict stance and rumor veracity in social media conversations, 2020, pp. 5047–5058.
* Tai et al. [2015] K. S. Tai, R. Socher, C. D. Manning, Improved semantic representations from tree-structured long short-term memory networks, arXiv preprint arXiv:1503.00075 (2015).
* Zubiaga et al. [2016] A. Zubiaga, E. Kochkina, M. Liakata, R. Procter, M. Lukasik, Stance classification in rumours as a sequential task exploiting the tree structure of social media conversations, arXiv preprint arXiv:1609.09028 (2016).
* Shu et al. [2017] K. Shu, A. Sliva, S. Wang, J. Tang, H. Liu, Fake news detection on social media: A data mining perspective, SIGKDD Explor. Newsl. 19 (2017) 22–36. doi:10.1145/3137597.3137600.
* Zhang and Ghorbani [2019] X. Zhang, A. A. Ghorbani, An overview of online fake news: Characterization, detection, and discussion, Information Processing & Management (2019).
* Choraś et al. [2018] M. Choraś, A. Giełczyk, K. Demestichas, D. Puchalski, R. Kozik, Pattern recognition solutions for fake news detection, in: IFIP International Conference on Computer Information Systems and Industrial Management, Springer, 2018, pp. 130–139.
* Ferrara et al. [2016] E. Ferrara, O. Varol, C. Davis, F. Menczer, A. Flammini, The rise of social bots, Commun. ACM 59 (2016) 96–104. doi:10.1145/2818717.
* Afroz et al. [2012] S. Afroz, M. Brennan, R. Greenstadt, Detecting hoaxes, frauds, and deception in writing style online, in: Proceedings of the 2012 IEEE Symposium on Security and Privacy, SP ’12, IEEE Computer Society, 2012, pp. 461–475. doi:10.1109/SP.2012.34.
* Jin et al. [2017] Z. Jin, J. Cao, Y. Zhang, J. Zhou, Q. Tian, Novel visual and statistical image features for microblogs news verification, Trans. Multi. 19 (2017) 598–608. doi:10.1109/TMM.2016.2617078.
* Chen et al. [2011] C. Chen, K. Wu, S. Venkatesh, X. Zhang, Battling the internet water army: Detection of hidden paid posters, 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013) (2011) 116–120.
* Gravanis et al. [2019] G. Gravanis, A. Vakali, K. Diamantaras, P. Karadais, Behind the cues: A benchmarking study for fake news detection, Expert Systems with Applications 128 (2019) 201 – 213.
* Bondielli and Marcelloni [2019] A. Bondielli, F. Marcelloni, A survey on fake news and rumour detection techniques, Information Sciences 497 (2019) 38 – 55.
* Atodiresei et al. [2018] C.-S. Atodiresei, A. Tănăselea, A. Iftene, Identifying fake news and fake users on twitter, Procedia Computer Science 126 (2018) 451 – 461. Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 22nd International Conference, KES-2018, Belgrade, Serbia.
* Buford et al. [2009] J. Buford, H. Yu, E. K. Lua, P2P networking and applications, Morgan Kaufmann, 2009.
* Xu et al. [2019] K. Xu, F. Wang, H. Wang, B. Yang, Detecting fake news over online social media via domain reputations and content understanding, Tsinghua Science and Technology 25 (2019) 20–27.
* Hegli et al. [2013] R. Hegli, H. Lonas, C. K. Harris, System and method for developing a risk profile for an internet service, 2013. US Patent 8,438,386.
* Antonakakis et al. [2010] M. Antonakakis, R. Perdisci, D. Dagon, W. Lee, N. Feamster, Building a dynamic reputation system for dns., in: USENIX security symposium, 2010, pp. 273–290.
* Lison and Mavroeidis [2017] P. Lison, V. Mavroeidis, Neural reputation models learned from passive dns data, in: 2017 IEEE International Conference on Big Data (Big Data), IEEE, 2017, pp. 3662–3671.
* Rahbarinia et al. [2015] B. Rahbarinia, R. Perdisci, M. Antonakakis, Segugio: Efficient behavior-based tracking of malware-control domains in large isp networks, in: 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, IEEE, 2015, pp. 403–414.
* Tang et al. [2006] Y. Tang, S. Krasser, P. Judge, Y.-Q. Zhang, Fast and effective spam sender detection with granular svm on highly imbalanced mail server behavior data, in: 2006 International Conference on Collaborative Computing: Networking, Applications and Worksharing, IEEE, 2006, pp. 1–6.
* Shu et al. [2019] K. Shu, H. R. Bernard, H. Liu, Studying fake news via network analysis: detection and mitigation, in: Emerging Research Challenges and Opportunities in Computational Social Network Analysis and Mining, Springer, 2019, pp. 43–65.
* Zhou and Zafarani [2019] X. Zhou, R. Zafarani, Network-based fake news detection: A pattern-driven approach, ACM SIGKDD Explorations Newsletter 21 (2019) 48–60.
* Shu et al. [2017] K. Shu, S. Wang, H. Liu, Exploiting tri-relationship for fake news detection, arXiv preprint arXiv:1712.07709 8 (2017).
* Bondi et al. [2017] L. Bondi, S. Lameri, D. Güera, P. Bestagini, E. J. Delp, S. Tubaro, Tampering detection and localization through clustering of camera-based cnn features, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE, 2017, pp. 1855–1864.
* Zhang and Ni [2020] R. Zhang, J. Ni, A dense u-net with cross-layer intersection for detection and localization of image forgery, in: ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2020, pp. 2982–2986.
* James et al. [2019] A. James, E. B. Edwin, M. Anjana, A. M. Abraham, H. Johnson, Image forgery detection on cloud, in: 2019 2nd International Conference on Signal Processing and Communication (ICSPC), IEEE, 2019, pp. 94–98.
* Kanwal et al. [2020] N. Kanwal, A. Girdhar, L. Kaur, J. S. Bhullar, Digital image splicing detection technique using optimal threshold based local ternary pattern, Multimedia Tools and Applications (2020) 1–18.
* Dua et al. [2020] S. Dua, J. Singh, H. Parthasarathy, Detection and localization of forgery using statistics of dct and fourier components, Signal Processing: Image Communication (2020) 115778.
* Jaiswal and Srivastava [2020] A. K. Jaiswal, R. Srivastava, A technique for image splicing detection using hybrid feature set, Multimedia Tools and Applications (2020) 1–24.
* Jothi and Letitia [2020] J. N. Jothi, S. Letitia, Tampering detection using hybrid local and global features in wavelet-transformed space with digital images, Soft Computing 24 (2020) 5427–5443.
* Bilal et al. [2019] M. Bilal, H. A. Habib, Z. Mehmood, T. Saba, M. Rashid, Single and multiple copy–move forgery detection and localization in digital images based on the sparsely encoded distinctive features and dbscan clustering, Arabian Journal for Science and Engineering (2019) 1–18.
* Suryawanshi et al. [2019] P. Suryawanshi, P. Padiya, V. Mane, Detection of contrast enhancement forgery in previously and post compressed jpeg images, in: 2019 IEEE 5th International Conference for Convergence in Technology (I2CT), IEEE, 2019, pp. 1–4.
* Guo et al. [2018] Y. Guo, X. Cao, W. Zhang, R. Wang, Fake colorized image detection, IEEE Transactions on Information Forensics and Security 13 (2018) 1932–1944.
* Choraś et al. [2019] M. Choraś, M. Pawlicki, R. Kozik, K. Demestichas, P. Kosmides, M. Gupta, Socialtruth project approach to online disinformation (fake news) detection and mitigation, in: Proceedings of the 14th International Conference on Availability, Reliability and Security, 2019, pp. 1–10.
* Dong et al. [2013] J. Dong, W. Wang, T. Tan, Casia image tampering detection evaluation database, in: 2013 IEEE China Summit and International Conference on Signal and Information Processing, IEEE, 2013, pp. 422–426.
* Zheng et al. [2019] Y. Zheng, Y. Cao, C.-H. Chang, A puf-based data-device hash for tampered image detection and source camera identification, IEEE Transactions on Information Forensics and Security 15 (2019) 620–634.
* Christlein et al. [2012] V. Christlein, C. Riess, J. Jordan, C. Riess, E. Angelopoulou, An evaluation of popular copy-move forgery detection approaches, IEEE Transactions on information forensics and security 7 (2012) 1841–1854.
* Ardizzone et al. [2015] E. Ardizzone, A. Bruno, G. Mazzola, Copy–move forgery detection by matching triangles of keypoints, IEEE Transactions on Information Forensics and Security 10 (2015) 2084–2094.
* Castro et al. [2020] M. Castro, D. M. Ballesteros, D. Renza, A dataset of 1050-tampered color and grayscale images (cg-1050), Data in brief 28 (2020) 104864.
* Wang [2017] W. Y. Wang, "liar, liar pants on fire": A new benchmark dataset for fake news detection, arXiv preprint arXiv:1705.00648 (2017).
* Mitra and Gilbert [2015] T. Mitra, E. Gilbert, Credbank: A large-scale social media corpus with associated credibility annotations., in: ICWSM, 2015, pp. 258–267.
* Shu et al. [2020] K. Shu, D. Mahudeswaran, S. Wang, D. Lee, H. Liu, Fakenewsnet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media, Big Data 8 (2020) 171–188.
* Wang et al. [2018] Y. Wang, F. Ma, Z. Jin, Y. Yuan, G. Xun, K. Jha, L. Su, J. Gao, Eann: Event adversarial neural networks for multi-modal fake news detection, in: Proceedings of the 24th acm sigkdd international conference on knowledge discovery & data mining, 2018, pp. 849–857.
* Krawczyk et al. [2017] B. Krawczyk, L. Minku, J. Gama, J. Stefanowski, M. Wozniak, Ensemble learning for data stream analysis: A survey, Information Fusion 37 (2017) 132–156. doi:10.1016/j.inffus.2017.02.004.
* Wang et al. [2015] S. Wang, L. L. Minku, X. Yao, Resampling-based ensemble methods for online class imbalance learning, IEEE Trans. Knowl. Data Eng. 27 (2015) 1356–1368.
* Ksieniewicz et al. [2020] P. Ksieniewicz, Z. Paweł, C. Michał, K. Rafał, G. Agata, W. Michał, Fake news detection from data streams, in: Proceedings of the International Joint Conference on Neural Networks, 2020.
* Chen et al. [2018] Z. Chen, B. Liu, R. Brachman, P. Stone, F. Rossi, Lifelong Machine Learning, 2nd ed., Morgan & Claypool Publishers, 2018.
* Pentina and Lampert [2015] A. Pentina, C. H. Lampert, Lifelong learning with non-i.i.d. tasks, in: Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, MIT Press, Cambridge, MA, USA, 2015, p. 1540–1548.
* Jin et al. [2004] Y. Jin, et al., Neural network regularization and ensembling using multi-objective evolutionary algorithms, in: Proceedings of the Congress on Evolutionary Computation (CEC'04), volume 1, 2004, pp. 1–8. doi:10.1109/CEC.2004.1330830.
* Choraś et al. [2020] M. Choraś, M. Pawlicki, D. Puchalski, R. Kozik, Machine learning–the results are not the only thing that matters! what about security, explainability and fairness?, in: International Conference on Computational Science, Springer, 2020, pp. 615–628.
* Chawla [2019] R. Chawla, Deepfakes: How a pervert shook the world, International Journal of Advance Research and Development 4 (2019) 4–8.
* Westerlund [2019] M. Westerlund, The emergence of deepfake technology: A review, Technology Innovation Management Review 9 (2019).
* Kwok and Koh [2020] A. O. J. Kwok, S. G. M. Koh, Deepfake: a social construction of technology perspective, Current Issues in Tourism 0 (2020) 1–5. doi:10.1080/13683500.2020.1738357.
* Maras and Alexandrou [2019] M.-H. Maras, A. Alexandrou, Determining authenticity of video evidence in the age of artificial intelligence and in the wake of deepfake videos, The International Journal of Evidence & Proof 23 (2019) 255–262. doi:10.1177/1365712718807226.
* Ciftci et al. [2020] U. A. Ciftci, I. Demir, L. Yin, How do the hearts of deep fakes beat? deep fake source detection via interpreting residuals with biological signals, 2020. arXiv:2008.11363.
* Pitt [2019] J. Pitt, Deepfake videos and ddos attacks (deliberate denial of satire) [editorial], IEEE Technology and Society Magazine 38 (2019) 5–8.
# Gauge invariance of light-matter interactions in first-principle tight-
binding models
Michael Schüler [email protected] Stanford Institute for Materials and
Energy Sciences (SIMES), SLAC National Accelerator Laboratory, Menlo Park, CA
94025, USA Jacob A. Marks Stanford Institute for Materials and Energy
Sciences (SIMES), SLAC National Accelerator Laboratory, Menlo Park, CA 94025,
USA Physics Department, Stanford University, Stanford, CA 94305, USA Yuta
Murakami Department of Physics, Tokyo Institute of Technology, Meguro, Tokyo
152-8551, Japan Chunjing Jia Stanford Institute for Materials and Energy
Sciences (SIMES), SLAC National Accelerator Laboratory, Menlo Park, CA 94025,
USA Thomas P. Devereaux Stanford Institute for Materials and Energy Sciences
(SIMES), SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
Department of Materials Science and Engineering, Stanford University,
Stanford, California 94305, USA
###### Abstract
We study the different ways of introducing light-matter interaction in first-
principle tight-binding (TB) models. The standard way of describing optical
properties is the velocity gauge, defined by linear coupling to the vector
potential. In finite systems a transformation to represent the electromagnetic
radiation by the electric field instead is possible, albeit subtleties arise
in periodic systems. The resulting dipole gauge is a multi-orbital
generalization of Peierl’s substitution. In this work, we investigate accuracy
of both pathways, with particular emphasis on gauge invariance, for TB models
constructed from maximally localized Wannier functions. Focusing on
paradigmatic two-dimensional materials, we construct first-principle models
and calculate the response to electromagnetic fields in linear response and
for strong excitations. Benchmarks against fully converged first-principle
calculations allow for ascertaining the accuracy of the TB models. We find
that the dipole gauge provides a more accurate description than the velocity
gauge in all cases. The main deficiency of the velocity gauge is an imperfect
cancellation of the paramagnetic and diamagnetic currents. Formulating a
corresponding sum rule, however, provides a way to enforce this cancellation
explicitly. This procedure corrects the TB models in the velocity gauge,
yielding excellent agreement with the dipole gauge and thus restoring gauge invariance.
## I Introduction
The impressive progress in tailoring ultrafast laser pulses has led to a surge
of advanced spectroscopies on and control of condensed matter systems Basov
_et al._ (2017). Prominent examples of intriguing phenomena beyond linear
response include nonlinear Bloch oscillations Schubert _et al._ (2014);
Reimann _et al._ (2018), and photo-dressing the electronic structure in
Floquet bands Wang _et al._ (2013); Mahmood _et al._ (2016); De Giovannini
_et al._ (2016); Hübener _et al._ (2017); Schüler _et al._ (2020). Another
recent pathway to controlling the properties of materials is exploiting the
quantum nature of the electromagnetic fields in cavities, thus creating novel
light-matter systems Ruggenthaler _et al._ (2018); Mazza and Georges (2019).
Simulating the response of complex materials to (possibly strong) external
fields proves challenging. Density functional theory (DFT) or time-dependent
DFT (TDDFT) provides a path to treat materials including electronic
correlations, although the accuracy is limited by the inevitable
approximations to the exchange-correlation functional. Depending on the choice
of the basis, incorporating electromagnetic fields via the minimal coupling
$\hat{\mathbf{p}}\rightarrow\hat{\mathbf{p}}-q\mathbf{A}(\mathbf{r},t)$
($\hat{\mathbf{p}}$ denotes the momentum operator, $\mathbf{A}(\mathbf{r},t)$
the vector potential) is straightforward. Upon converging with respect to the
basis, this approach provides a first-principle route to optical properties
Pemmaraju _et al._ (2018) and nonlinear phenomena De Giovannini _et al._
(2016); Tancogne-Dejean and Rubio (2018); Tancogne-Dejean _et al._ (2018).
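For reference, within the dipole approximation (spatially uniform $\mathbf{A}(t)$) the minimal-coupling kinetic term expands into the standard velocity-gauge Hamiltonian,
$\displaystyle\hat{H}(t)=\frac{\left(\hat{\mathbf{p}}-q\mathbf{A}(t)\right)^{2}}{2m}+\hat{V}(\hat{\mathbf{r}})=\hat{H}_{0}-\frac{q}{m}\,\mathbf{A}(t)\cdot\hat{\mathbf{p}}+\frac{q^{2}\mathbf{A}(t)^{2}}{2m}\ ,$
where $\hat{H}_{0}$ is the field-free Hamiltonian; the term linear in $\mathbf{A}$ generates the paramagnetic current, while the $\mathbf{A}^{2}$ term gives rise to the diamagnetic contribution.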
However, there are many scenarios where a reduced set of bands is preferable,
for instance when many-body techniques beyond DFT are employed. Typical
examples are strongly correlated systems Golež _et al._ (2019a); Petocchi
_et al._ (2019), excitonic effects Attaccalite _et al._ (2011); Perfetto _et
al._ (2019), or systems where scattering events (like electron-phonon) play a
crucial role in the dynamics Sentef _et al._ (2013); Molina-Sánchez _et al._
(2016); Schüler _et al._ (2020). The canonical way of introducing a small
subspace is the tight-binding (TB) approximation. TB models are typically
constructed by fitting a parameterization to a DFT calculation, or by
constructing Wannier functions. While the former approach is straightforward,
the empirical TB models thus obtained lack information on the underlying
orbitals and hence on the light-matter coupling. In this context, the Peierls
substitution Peierls (1933); Ismail-Beigi _et al._ (2001) is often used to
incorporate the external field. However, this approach neglects local inter-
orbital transitions. Introducing matrix elements of the light-matter coupling
in the minimal coupling scheme (velocity gauge) as fitting parameters is
possible, but does not provide a way to construct them and generally breaks
gauge invariance Foreman (2002).
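As a minimal numerical illustration of the Peierls substitution (not taken from this paper; a single-band 1D chain with hypothetical parameters, units $\hbar=1$ and one common sign convention), attaching a phase to the hopping is equivalent to shifting the crystal momentum by a spatially uniform vector potential:

```python
import numpy as np

t, a, qA = 1.0, 1.0, 0.3  # hopping, lattice constant, q*A (hypothetical values, hbar = 1)

def h_k(k, A=0.0):
    # single-band 1D chain; the Peierls substitution attaches the phase exp(i q A a)
    # to the nearest-neighbour hopping, giving H(k, A) = -2 t cos((k + qA) a)
    phase = np.exp(1j * (k + A) * a)
    return (-t * phase - t * np.conj(phase)).real

ks = np.linspace(-np.pi, np.pi, 201)
eps_with_A = np.array([h_k(k, qA) for k in ks])
eps_shifted = np.array([h_k(k + qA, 0.0) for k in ks])
# a spatially uniform vector potential merely shifts the crystal momentum, k -> k + qA
```

In a single-band model this shift is the entire effect of the field; the local inter-orbital transitions that the bare Peierls substitution neglects only appear once several orbitals per cell are kept.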
In contrast, Wannierization of a subspace of the DFT electronic structure – if
possible – provides a systematic way of constructing first-principle TB
Hamiltonians including the orbital information. Expressing the Bloch states
$|\psi_{\mathbf{k}\alpha}\rangle$ in terms of the Wannier functions makes it
possible to calculate the matrix elements of $\hat{\mathbf{p}}$ (velocity matrix elements)
directly. However, typically the momentum operator is replaced in favor of the
position operator Yates _et al._ (2007) by employing the commutation relation
$\displaystyle\hat{\mathbf{p}}=\frac{m}{i\hbar}[\hat{\mathbf{r}},\hat{H}]\ ,$
(1)
as the matrix elements of $\hat{\mathbf{r}}$ in the Wannier basis are directly
obtained from the standard Wannierization procedure. Furthermore, Wannier
models provide a straightforward way to express the Hamiltonian at any point
in momentum space by Wannier interpolation, which greatly facilitates the
otherwise costly calculation of optical transition matrix elements on dense
grids.
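A quick numerical check of Eq. (1) (illustrative two-band matrices at a single k-point; the numbers are hypothetical, units $m=\hbar=1$):

```python
import numpy as np

hbar = m = 1.0  # illustrative units

# hypothetical 2-band Hamiltonian and dipole (position) matrix at one k-point
H = np.array([[0.0, 0.2],
              [0.2, 1.5]])
r = np.array([[0.1, 0.4],
              [0.4, -0.1]])

# Eq. (1): p = (m / (i hbar)) [r, H]
p = (m / (1j * hbar)) * (r @ H - H @ r)

# In the band (eigen)basis the identity reads p_mn = (m / (i hbar)) (eps_n - eps_m) r_mn,
# i.e. interband velocity matrix elements follow from the dipoles and band-energy differences.
eps, U = np.linalg.eigh(H)
pE = U.conj().T @ p @ U
rE = U.conj().T @ r @ U
expected = (m / (1j * hbar)) * (eps[np.newaxis, :] - eps[:, np.newaxis]) * rE
```

Since the commutator of Hermitian matrices is anti-Hermitian, the resulting `p` is Hermitian, and its band-diagonal (intraband) elements vanish at a single point unless supplemented by the $k$-derivative of the Hamiltonian, which is what Wannier interpolation supplies on dense grids.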
If the matrix elements of $\hat{\mathbf{r}}$ (dipole matrix elements) are
treated as the more fundamental quantity, it is advantageous to express the
Hamiltonian directly in terms of the dipoles instead of taking the detour via
Eq. (1). In finite systems and within the dipole approximation (neglecting the
spatial dependence of the field), this is achieved by the Power-Zienau-Woolley
transformation to the dipole gauge, resulting in the light-matter interaction
of form $\hat{H}_{\mathrm{LM}}=-q\mathbf{E}(t)\cdot\hat{\mathbf{r}}$. In
periodic systems, the operator $\hat{\mathbf{r}}$ is ill-defined in the Bloch
basis, but a multi-center generalization of the Power-Zienau-Woolley
transformation can be constructed Golež _et al._ (2019b); Li _et al._
(2020); Mahon _et al._ (2019) as detailed below. Working within a localized
Wannier basis also provides a natural way to capture the magnetoelectric
response of solids Mahon and Sipe (2020a, b).
In principle, all of the mentioned schemes for incorporating light-matter
interaction are equivalent and thus gauge invariant. In practice, however,
breaking the completeness of the band space by truncation introduces artifacts
and a dependence on gauge. In this work, we compare the schemes of introducing
light-matter coupling to TB models – (i) in the dipole gauge (TB-DG), and (ii)
in the velocity gauge (TB-VG). In particular we focus on the current as a
fundamental observable determining the optical properties. We study the
optical conductivity within the linear response formalism and, furthermore,
the resonant excitations beyond linear response. All results are benchmarked
against accurate first-principle calculations in either the plane-wave or the
real-space representation of the Bloch wave-functions (which does not invoke
any approximation with respect to the basis if converged with respect to the
grid spacing).
This paper is organized as follows. In Sec. II we introduce the light-matter
interaction in the different gauges. Starting from the velocity gauge (Sec.
II.1) we work out the transformation to the dipole gauge for completeness
(Sec. II.2), with particular emphasis on the gauge invariance. In Sec. III we
systematically investigate the accuracy of gauges when applied to first-
principle TB models. We restrict our focus to typical two-dimensional systems,
and calculate the optical conductivity (Sec. III.1) and Berry curvature (Sec.
III.2). Finally, we study nonlinear excitations (Sec. III.3). We use atomic
units (a.u.) throughout the paper unless stated otherwise.
## II Light-matter interaction in periodic systems
### II.1 Light-matter interaction in the velocity gauge
Here we recapitulate the form of the light-matter interaction arising from the
minimal coupling principle. Let us consider a crystalline solid with the
periodic (single-particle) potential $v(\mathbf{r})$, which we take to be the
Kohn-Sham potential obtained from DFT in the examples below. The Hamiltonian
$\hat{h}=\frac{\hat{\mathbf{p}}^{2}}{2}+v(\mathbf{r})$ defines the eigenstates
$\hat{h}|\psi_{\mathbf{k}\alpha}\rangle=\varepsilon_{\alpha}(\mathbf{k})|\psi_{\mathbf{k}\alpha}\rangle$.
By virtue of the Bloch theorem, the periodic part
$u_{\mathbf{k}\alpha}(\mathbf{r})$ is defined by
$\psi_{\mathbf{k}\alpha}(\mathbf{r})=e^{i\mathbf{k}\cdot\mathbf{r}}u_{\mathbf{k}\alpha}(\mathbf{r})$.
Introducing the Bloch Hamiltonian
$\hat{h}(\mathbf{k})=e^{-i\mathbf{k}\cdot\mathbf{r}}\hat{h}e^{i\mathbf{k}\cdot\mathbf{r}}$
the periodic functions are obtained from
$\hat{h}(\mathbf{k})|u_{\mathbf{k}\alpha}\rangle=\varepsilon_{\alpha}(\mathbf{k})|u_{\mathbf{k}\alpha}\rangle$.
An electromagnetic wave interacting with the electrons in the sample can be
represented by the vector potential $\mathbf{A}(t)$, which we assume to be
spatially homogeneous. This is known as the dipole approximation, which holds
as long as the wavelength of the light is significantly larger than the
extent of a unit cell. The minimal coupling
$\hat{\mathbf{p}}\rightarrow\hat{\mathbf{p}}-q\mathbf{A}(t)$ ($q=-e$ is the
charge of an electron) gives rise to the time-dependent Hamiltonian
$\displaystyle\hat{h}(t)=\frac{1}{2}(\hat{\mathbf{p}}-q\mathbf{A}(t))^{2}+v(\mathbf{r})\
.$ (2)
If the Bloch wave-functions $\psi_{\mathbf{k}\alpha}(\mathbf{r})$ for
(partially) occupied bands $\alpha$ are known, the time-dependent wave-
functions can directly be obtained from the time-dependent Schrödinger
equation (TDSE)
$i\partial_{t}\phi_{\mathbf{k}\alpha}(\mathbf{r},t)=\hat{h}(t)\phi_{\mathbf{k}\alpha}(\mathbf{r},t)$
with
$\phi_{\mathbf{k}\alpha}(\mathbf{r},t=0)=\psi_{\mathbf{k}\alpha}(\mathbf{r})$.
The averaged electronic current is calculated from the kinematic momentum
operator $\hat{\mathbf{p}}_{\mathrm{kin}}=\hat{\mathbf{p}}-q\mathbf{A}(t)$:
$\displaystyle\mathbf{J}(t)=\frac{1}{N}\sum_{\mathbf{k}\alpha}f_{\alpha}(\mathbf{k})\langle\phi_{\mathbf{k}\alpha}(t)|\hat{\mathbf{p}}-q\mathbf{A}(t)|\phi_{\mathbf{k}\alpha}(t)\rangle\
,$ (3)
where $f_{\alpha}(\mathbf{k})$ denotes the occupation of the corresponding
Bloch state; $N$ is the number of momentum points (or supercells,
equivalently). In absence of spin-orbit coupling (SOC), Eq. (3) represents the
current per spin, while $|\phi_{\mathbf{k}\alpha}(t)\rangle$ should be
understood as a spinor in the case of SOC.
Provided the time-dependent Bloch wave-functions are represented on a dense
enough grid and the TDSE is solved with sufficient accuracy, the current (3)
is the exact (independent particle) current. Let us now introduce a finite
reduced band basis. All operators are expressed in the basis of the
corresponding Bloch states $|\psi_{\mathbf{k}\alpha}\rangle$. The matrix
elements of the time-dependent Hamiltonian (2)
$h_{\alpha\alpha^{\prime}}(\mathbf{k},t)=\langle\psi_{\mathbf{k}\alpha}|\hat{h}(t)|\psi_{\mathbf{k}\alpha^{\prime}}\rangle$
are given by
$\displaystyle
h_{\alpha\alpha^{\prime}}(\mathbf{k},t)=\varepsilon_{\alpha}(\mathbf{k})\delta_{\alpha\alpha^{\prime}}-q\mathbf{A}(t)\cdot\mathbf{v}_{\alpha\alpha^{\prime}}(\mathbf{k})+\frac{q^{2}}{2}\mathbf{A}(t)^{2}\delta_{\alpha\alpha^{\prime}}\
.$ (4)
Here, the last term denotes the diamagnetic coupling, which reduces to a pure
phase factor in the dipole approximation. In Eq. (4) we have introduced the
velocity matrix elements
$\displaystyle\mathbf{v}_{\alpha\alpha^{\prime}}(\mathbf{k})$
$\displaystyle=\langle\psi_{\mathbf{k}\alpha}|\hat{\mathbf{p}}|\psi_{\mathbf{k}\alpha^{\prime}}\rangle=-i\langle\psi_{\mathbf{k}\alpha}|[\hat{\mathbf{r}},\hat{h}]|\psi_{\mathbf{k}\alpha^{\prime}}\rangle$
$\displaystyle=\langle
u_{\mathbf{k}\alpha}|\nabla_{\mathbf{k}}\hat{h}(\mathbf{k})|u_{\mathbf{k}\alpha^{\prime}}\rangle\
.$ (5)
Although a direct calculation of the velocity matrix elements (5) is
possible, in practical calculations (especially in the context of first-
principle treatment) it is convenient to split them into intra- and interband
contributions. One can show Yates _et al._ (2007) that Eq. (5) is
equivalent to
$\displaystyle\mathbf{v}_{\alpha\alpha^{\prime}}(\mathbf{k})=\nabla_{\mathbf{k}}\varepsilon_{\alpha}(\mathbf{k})\delta_{\alpha\alpha^{\prime}}-i\left(\varepsilon_{\alpha^{\prime}}(\mathbf{k})-\varepsilon_{\alpha}(\mathbf{k})\right)\mathbf{A}_{\alpha\alpha^{\prime}}(\mathbf{k})\
.$ (6)
Here, $\mathbf{A}_{\alpha\alpha^{\prime}}(\mathbf{k})=i\langle
u_{\mathbf{k}\alpha}|\nabla_{\mathbf{k}}u_{\mathbf{k}\alpha^{\prime}}\rangle$
denotes the Berry connection. Note that the equivalence of Eq. (5) and Eq.
(6) is, strictly speaking, an approximation assuming a complete set of Bloch
states. In the Bloch (band) basis, the total current is obtained by combining
the paramagnetic and diamagnetic current:
$\displaystyle\mathbf{J}^{\mathrm{VG}}(t)$
$\displaystyle=\frac{q}{N}\sum_{\mathbf{k}}\sum_{\alpha\alpha^{\prime}}\left(\mathbf{v}_{\alpha\alpha^{\prime}}(\mathbf{k})-q\mathbf{A}(t)\delta_{\alpha\alpha^{\prime}}\right)\rho_{\alpha^{\prime}\alpha}(\mathbf{k},t)$
(7)
$\displaystyle\equiv\mathbf{J}^{\mathrm{p}}(t)+\mathbf{J}^{\mathrm{dia}}(t)\
.$
Here, $\rho_{\alpha\alpha^{\prime}}(\mathbf{k},t)$ denotes the single-particle
density matrix (SPDM), which is defined by the initial condition
$\rho_{\alpha\alpha^{\prime}}(\mathbf{k},t=0)=f_{\alpha}(\mathbf{k})\delta_{\alpha\alpha^{\prime}}$
and the standard equation of motion.
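As an illustration of this propagation scheme, the following sketch evolves the SPDM of a hypothetical two-level system at a single $\mathbf{k}$ point under the Hamiltonian (4) and accumulates the current of Eq. (7); the parameters and the function name are purely illustrative, not taken from any code used in this work:

```python
import numpy as np

def propagate_vg(eps, v, A, dt, f0, q=-1.0):
    """Propagate the SPDM at one k point in the velocity gauge,
    h(t) = diag(eps) - q A(t) v  (the diamagnetic A^2 term is a global
    phase and is dropped), and record the kinematic current
    J(t) = q Tr[(v - q A(t) 1) rho(t)], cf. Eq. (7)."""
    n = len(eps)
    rho = np.diag(f0).astype(complex)   # initial condition rho(0) = f
    h0 = np.diag(eps).astype(complex)
    J = np.empty(len(A))
    for i, a in enumerate(A):
        h = h0 - q * a * v
        J[i] = q * np.real(np.trace((v - q * a * np.eye(n)) @ rho))
        # one exact step for the piecewise-constant Hamiltonian
        w, P = np.linalg.eigh(h)
        U = (P * np.exp(-1j * w * dt)) @ P.conj().T
        rho = U @ rho @ U.conj().T
    return J

# toy two-level example: filled lower band, weak vector potential
eps = np.array([-1.0, 1.0])
v = np.array([[0.2, 0.5], [0.5, -0.2]])
A = 0.01 * np.sin(0.3 * np.arange(200) * 0.1)
J = propagate_vg(eps, v, A, dt=0.1, f0=np.array([1.0, 0.0]))
```

In a full calculation this loop runs over all $\mathbf{k}$ points and the currents are averaged as in Eq. (7).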
#### II.1.1 Wannier representation
Calculating the Berry connections
$\mathbf{A}_{\alpha\alpha^{\prime}}(\mathbf{k})$ is numerically challenging,
as derivatives with respect to $\mathbf{k}$ are often ill-defined on a coarse
grid of the Brillouin zone. This problem can be circumvented by switching to
the Wannier representation
$\displaystyle|\psi_{\mathbf{k}\alpha}\rangle=\frac{1}{\sqrt{N}}\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\sum_{m}C_{m\alpha}(\mathbf{k})|m\mathbf{R}\rangle\
,$ (8)
where $w_{m}(\mathbf{r}-\mathbf{R})=\langle\mathbf{r}|m\mathbf{R}\rangle$
denote the Wannier functions (WFs). At this point we invoke an important
assumption: the WFs are assumed to be sufficiently localized, such that $\int
d\mathbf{r}\,|\mathbf{r}w_{m}(\mathbf{r})|^{2}$ remains finite. As detailed in
Ref. Yates _et al._ (2007), the Berry connection can then be expressed as
$\displaystyle\mathbf{A}_{\alpha\alpha^{\prime}}(\mathbf{k})=\sum_{mm^{\prime}}C^{*}_{m\alpha}(\mathbf{k})\left[\mathbf{D}_{mm^{\prime}}(\mathbf{k})+i\nabla_{\mathbf{k}}\right]C_{m^{\prime}\alpha^{\prime}}(\mathbf{k})\
.$ (9)
The derivative in Eq. (9) can then be replaced by an equivalent sum-over-
states expression Yates _et al._ (2007). Here we have defined the Fourier-
transformed dipole operator
$\displaystyle\mathbf{D}_{mm^{\prime}}(\mathbf{k})=\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\mathbf{D}_{m0m^{\prime}\mathbf{R}}\
,$ (10)
where $\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}=\langle
m\mathbf{R}|\mathbf{r}-\mathbf{R}|m^{\prime}\mathbf{R}^{\prime}\rangle$ define
the cell-centered dipole matrix elements. Note that they are well defined for
sufficiently localized WFs.
Eq. (7) is independent of the choice of the band basis; hence, the Bloch
bands can be replaced by the basis spanned by the Wannier orbitals via the
substitution $\alpha\rightarrow m$. Note that the velocity matrix elements (6) transform
according to
$\mathbf{v}_{mm^{\prime}}(\mathbf{k})=\sum_{\alpha\alpha^{\prime}}C_{m\alpha}(\mathbf{k})\mathbf{v}_{\alpha\alpha^{\prime}}(\mathbf{k})C^{*}_{m^{\prime}\alpha^{\prime}}(\mathbf{k})$,
while the intraband current and the Berry connection term individually cannot
be transformed by a unitary transformation due to the derivative in momentum
space. Without loss of generality, we assume the WFs to be orthogonal.
#### II.1.2 Static limit of the current response
Both the paramagnetic and the diamagnetic current contribute to the gauge-
invariant total current. For an insulator, the total current in the linear-
response regime in the direct-current (DC) limit must vanish in the zero-
temperature limit, which amounts to paramagnetic and diamagnetic contributions
canceling out. This defines an important sum rule for the velocity matrix
elements (6). Let us consider the paramagnetic current-current response
function
$\displaystyle\chi^{\mathrm{p}}_{\mu\nu}(t)=-i\langle\left[\hat{J}^{\mathrm{p}}_{\mu}(t),\hat{J}^{\mathrm{p}}_{\nu}(0)\right]\rangle\
,$ (11)
where the operators on the right-hand side are understood in the Heisenberg
picture ($\mu,\nu=x,y,z$ are the Cartesian directions). The response function
(11) defines the paramagnetic current by
$\displaystyle
J^{\mathrm{p}}_{\mu}(t)=\sum_{\nu}\int^{t}_{-\infty}\\!dt^{\prime}\,\chi^{\mathrm{p}}_{\mu\nu}(t-t^{\prime})A_{\nu}(t^{\prime})\
,$ (12)
while the diamagnetic current becomes
$J^{\mathrm{dia}}_{\mu}(t)=-nq^{2}A_{\mu}(t)$ in linear response ($n$ is the
number of particles per unit cell). Fourier transforming and requiring the
total current to vanish, $J_{\mu}(\omega=0)=0$, yields the sum rule
$\displaystyle\sum_{\mu}\chi^{\mathrm{p}}_{\mu\mu}(\omega=0)=-nq^{2}\ .$ (13)
The sum rule (13) holds for the fully interacting system. For noninteracting
electrons Eq. (13) reduces to
$\displaystyle
f\equiv\frac{2}{N}\sum_{\mathbf{k}}\sum_{\alpha\neq\alpha^{\prime}}f_{\alpha}(\mathbf{k})(1-f_{\alpha^{\prime}}(\mathbf{k}))\frac{|\mathbf{v}_{\alpha\alpha^{\prime}}(\mathbf{k})|^{2}}{\varepsilon_{\alpha^{\prime}}(\mathbf{k})-\varepsilon_{\alpha}(\mathbf{k})}=n\
.$ (14)
The relation Eq. (14) provides an important criterion for the velocity matrix
elements for assessing the completeness of the band space. Furthermore, the
violation of the sum rule (14) and thus of Eq. (13) gives rise to spurious
behavior of the optical conductivity, which is obtained from
$\displaystyle\sigma_{\mu\nu}(\omega)=\frac{1}{i\omega}\left(\chi^{\mathrm{p}}_{\mu\nu}(\omega)-nq^{2}\delta_{\mu\nu}\right)\
.$ (15)
In particular, $\mathrm{Im}[\sigma_{\mu\nu}(\omega)]\propto 1/\omega$ for
$\omega\rightarrow 0$ if $f\neq n$. In general, sum-over-states expressions such
$\omega\rightarrow 0$ if $f\neq n$. In general, sum-of-states expressions such
as Eq. (14) are slowly converging with respect to the number of bands
included. Below we will exemplify this behavior and discuss how to cure this
artifact of an (inevitably) incomplete Bloch basis.
### II.2 Light-matter interaction in the dipole gauge
In finite systems, the dipole gauge is obtained by a unitary transformation of
the type $\hat{U}(t)=\exp[-iq\mathbf{A}(t)\cdot\mathbf{r}]$. Applying this
time-dependent transformation to the Hamiltonian (2), we obtain
$\displaystyle\hat{h}_{\mathrm{LG}}(t)$
$\displaystyle=\hat{U}(t)\hat{h}(t)\hat{U}^{\dagger}(t)+(i\partial_{t}\hat{U}(t))\hat{U}^{\dagger}(t)$
$\displaystyle=\frac{\hat{\mathbf{p}}^{2}}{2}+v(\mathbf{r})-q\mathbf{E}(t)\cdot\mathbf{r}\
,$ (16)
where $\mathbf{E}(t)=-\dot{\mathbf{A}}(t)$ denotes the electric field. The
extension to periodic systems and the corresponding Bloch states requires a
few modifications. There is one subtle point which has to be taken care of:
the dipole operator $\mathbf{r}$ (and any spatial operator without cell
periodicity) is ill-defined with respect to the Bloch basis. However, the
dipole operator with respect to WFs
($\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}$) – which defines the
Berry connection via Eq. (10) and Eq. (9) – is well defined due to the
localized nature of the WFs. Thus,
$\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}$ and the Hamiltonian in
Wannier representation $T_{m\mathbf{R}n\mathbf{R}^{\prime}}=\langle
m\mathbf{R}|\hat{h}|n\mathbf{R}^{\prime}\rangle$ will be the constituents of
the dipole gauge formulation.
Figure 1: Calculated band structures along typical paths in the respective
Brillouin zone for the four considered systems. The energy scale is chosen
relative to the Fermi energy $E_{F}$ (red dashed line). The inset for FeSe
illustrates the geometry and chosen unit cell.
#### II.2.1 Transformation to the dipole gauge
Based on the dipole operator in Wannier representation we can define a similar
unitary transformation as above. In the Wannier basis, we define
$\displaystyle U_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}(t)=\langle
m\mathbf{R}|e^{-\mathrm{i}q\mathbf{A}(t)\cdot(\mathbf{r}-\mathbf{R})}|m^{\prime}\mathbf{R}^{\prime}\rangle\
.$ (17)
Note that for Eq. (17) to be unitary, we assume
$\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}=\mathbf{D}^{*}_{m^{\prime}\mathbf{R}^{\prime}m\mathbf{R}}$.
Transforming the time-dependent Hamiltonian using the transformation (17)
yields
$\displaystyle\widetilde{h}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}(t)=e^{iq\mathbf{A}(t)\cdot(\mathbf{R}-\mathbf{R}^{\prime})}\left[T_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}-q\mathbf{E}(t)\cdot\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}\right]\
.$ (18)
Details are presented in Appendix A. The additional phase factor in front of
the field-free Wannier Hamiltonian
$T_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}$ is the usual Peierls phase
factor Peierls (1933). Fourier transforming to momentum space, we obtain
$\displaystyle\widetilde{h}_{mm^{\prime}}(\mathbf{k},t)=T_{mm^{\prime}}(\mathbf{k}-q\mathbf{A}(t))-q\mathbf{E}(t)\cdot\mathbf{D}_{mm^{\prime}}(\mathbf{k}-q\mathbf{A}(t))\
.$ (19)
Here, $T_{mm^{\prime}}(\mathbf{k})$ is the Fourier-transformed Hamiltonian
$T_{mm^{\prime}}(\mathbf{k})=\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\langle
m0|\hat{h}|m^{\prime}\mathbf{R}\rangle$. Eq. (19) can be understood as a
generalization of the Peierls substitution for multiband systems. The density
matrix in dipole gauge obeys the equation of motion according to the
Hamiltonian (19) with the initial condition
$\widetilde{\rho}_{mm^{\prime}}(\mathbf{k},t=0)=\sum_{\alpha}C_{m\alpha}(\mathbf{k})f_{\alpha}(\mathbf{k})C^{*}_{m^{\prime}\alpha}(\mathbf{k})$.
For vanishing field $\mathbf{A}(t)$ the density matrix in the different gauges
is identical:
$\widetilde{\rho}_{mm^{\prime}}(\mathbf{k},t)=\rho_{mm^{\prime}}(\mathbf{k},t)$.
For $\mathbf{A}(t)\neq 0$ this equivalence is broken. In particular, the
orbital occupations differ:
$\rho_{mm}(\mathbf{k},t)\neq\widetilde{\rho}_{mm}(\mathbf{k},t)$. This also
leads to differences in the band occupations when transforming into the band
basis. This gauge dependence of the density matrix does not affect any
observables.
Note that any additional spatial operators entering the Hamiltonian are
invariant under this unitary transformation. In particular, the Coulomb
interaction is unaffected, which can be shown by carrying out the analogous
steps on the level of the many-body Hamiltonian.
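A minimal sketch of Eq. (19) (toy matrices, one Cartesian field component; all parameters and names are purely illustrative) shows how the Peierls substitution is recovered when the dipole matrix vanishes:

```python
import numpy as np

def h_dipole_gauge(k, A, E, Tk, Dk, q=-1.0):
    """Dipole-gauge Hamiltonian of Eq. (19), one Cartesian component:
    h~(k, t) = T(k - q A(t)) - q E(t) D(k - q A(t)).
    Tk, Dk are callables returning the Fourier-transformed Wannier
    Hamiltonian and dipole matrix at a given momentum."""
    ks = k - q * A
    return Tk(ks) - q * E * Dk(ks)

# toy two-orbital model on a 1D lattice (illustrative parameters only)
Tk = lambda k: np.array([[-2.0 * np.cos(k), 0.3],
                         [0.3, 1.0 - 0.5 * np.cos(k)]])
Dk = lambda k: np.array([[0.0, 0.1], [0.1, 0.0]])  # local dipole only
h = h_dipole_gauge(0.4, A=0.05, E=0.02, Tk=Tk, Dk=Dk)
```

Setting $\mathbf{D}\equiv 0$ reduces the expression to $T(\mathbf{k}-q\mathbf{A}(t))$, i.e. the single-band Peierls substitution, while the $-q\mathbf{E}\cdot\mathbf{D}$ term restores the local inter-orbital transitions.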
#### II.2.2 Total current in the dipole gauge
The expression for the current in the dipole gauge can be derived from the
minimal coupling formulation (7). As for the Hamiltonian, the strategy is to
express the momentum operator as $\hat{\mathbf{p}}=-i[\mathbf{r},\hat{h}(t)]$
and express the position operator in the Wannier representation,
$\mathbf{r}\rightarrow\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}$.
The derivation is presented in Appendix A.1. One obtains
$\displaystyle\mathbf{J}^{\mathrm{LG}}(t)=\mathbf{J}^{\mathrm{disp}}(t)+\mathbf{J}^{\mathrm{dip}}(t)\
,$ (20)
where
$\displaystyle\mathbf{J}^{\mathrm{disp}}(t)=\frac{q}{N}\sum_{\mathbf{k}}\sum_{mm^{\prime}}\nabla_{\mathbf{k}}\widetilde{h}_{mm^{\prime}}(\mathbf{k},t)\widetilde{\rho}_{m^{\prime}m}(\mathbf{k},t)$
(21)
is the contribution related to the dispersion of the time-dependent
Hamiltonian (19). The second contribution arises from temporal variation of
the polarization
$\displaystyle\mathbf{P}(t)=\frac{q}{N}\sum_{\mathbf{k}}\sum_{mm^{\prime}}D_{mm^{\prime}}(\mathbf{k}-q\mathbf{A}(t))\widetilde{\rho}_{m^{\prime}m}(\mathbf{k},t)\
,$ (22)
by $\mathbf{J}^{\mathrm{dip}}(t)=d\mathbf{P}(t)/dt$. Under the assumptions
stated above, gauge invariance is guaranteed, i.e.,
$\mathbf{J}^{\mathrm{VG}}(t)=\mathbf{J}^{\mathrm{LG}}(t)$. For an incomplete
set of WFs, the equivalence of Eq. (20) and Eq. (7) is only approximate. In
contrast to the velocity gauge, the cancellation of paramagnetic and
diamagnetic current (which cannot be separated in the dipole gauge) for an
insulator at zero temperature is built in. Indeed, it can be shown (see
Appendix A.2) that $\mathbf{J}^{\mathrm{LG}}(\omega=0)=0$ in linear response
to a DC field is fulfilled by construction.
## III First principle examples
In principle, the current within the velocity gauge (7) and the dipole gauge
(20) is identical. In practice, truncating the number of bands introduces
artifacts, which result in differences between the gauges and deviations from
the exact dynamics. _A priori_ it is not clear which gauge is more accurate
upon reducing the number of bands. Hence, we investigate the performance of
both the dipole gauge and the velocity gauge in the context of TB Hamiltonians,
which are derived from first-principle calculations. This route also allows
for comparing to converged first-principle treatment as a benchmark.
For simplicity, we focus on a range of two-dimensional (2D) materials, although
there is no inherent restriction. We start from graphene as the paradigm
example of 2D systems and a Dirac semimetal. Substituting one carbon atom per
unit cell breaks inversion symmetry and opens a gap Novoselov _et al._
(2005); Geim and Novoselov (2007), making the system a (topologically trivial)
insulator. As another example, we study SnC, which is thermally stable as a
monolayer Hoat _et al._ (2019). This material is also in the spotlight for
the possibility to engineer the gap by strain Lü _et al._ (2012). We also
consider monolayer WSe2 as a prominent example of transition metal
dichalcogenides (TMDCs). Finally, we study a monolayer of FeSe as a
representative of a non-hexagonal structure. While free-standing FeSe is not
stable, the layered structure renders a monolayer a good approximation to thin
films, which are a prominent example of a high-temperature superconductor Lee
_et al._ (2014); Guterding _et al._ (2017); Sentef _et al._ (2018).
We performed first-principle DFT calculations based on the local-density
approximation (LDA) using the Quantum ESPRESSO code Giannozzi _et al._
(2009), and separately with the Octopus code Andrade _et al._ (2015);
Tancogne-Dejean _et al._ (2020). The consistency of the results has been
checked. We used optimized norm-conserving pseudopotentials from the
PseudoDojo project van Setten _et al._ (2018). In all cases, the self-
consistent DFT calculation was performed with a $12\times 12$ Monkhorst-Pack
sampling of the Brillouin zone. For the calculations with Quantum ESPRESSO we
used a supercell of 50 a.u. in the perpendicular direction, ensuring
convergence of the relevant bands. Similarly, the Octopus calculations were
performed with periodic boundary conditions in the plane and a 50 a.u. long
simulation box with open boundary conditions in the perpendicular direction.
For constructing a first-principle TB model, we used the Wannier90 code
Mostofi _et al._ (2014) to obtain maximally localized WFs (MLWFs) and a
corresponding Wannier Hamiltonian for each system. For graphene, we include
the $sp^{2}$, $p_{z}$ and a subset of $d$ orbitals, which allows us to
approximate 9 bands well (see Fig. 1). The analogous set of orbitals is chosen
for SnC. A reduced model can be obtained by omitting the $d$ orbitals. For WSe2
we included the W-$d$ orbitals and the Se-$p$ orbitals; excluding the latter
orbitals defines the reduced model. Similarly, the TB model for FeSe is
constructed by choosing $d$ orbitals on Fe and $p$ orbitals on Se sites. For
clarity we focus on the extended TB models; results for the reduced models are
shown in Appendix B. Fig. 1 compares the first-principle band structure to the
thus obtained TB models.
We study optical properties and nonlinear dynamics. As we focus on the light-
matter interaction itself, we treat the electrons as independent particles at
this stage, thus excluding excitonic features. We also exclude any SOC. The
dipole matrix elements $\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}$
are directly obtained from the output of Wannier90. For calculating the
velocity matrix elements according to Eq. (6), we extracted the calculation of
the Berry connection (9) from internal subroutines of Wannier90 into a custom
code, taking $\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}$ and the
Wannier Hamiltonian as input.
Figure 2: Longitudinal optical conductivity of the considered 2D systems
obtained from time propagation of the TB Hamiltonian in dipole (TB-DG) and
velocity gauge (TB-VG), respectively. We propagated until a maximum time of
$T_{\mathrm{max}}=8000$ a.u. and used a $256\times 256$ sampling of the
Brillouin zone, ensuring convergence. For the calculation of the conductivity
with DFT we used a $200\times 200$ grid of the Brillouin zone. We checked the
convergence of the spectra in the considered frequency range with respect to
the number of included bands.
### III.1 Optical conductivity
In the linear response regime, the current flowing through the system upon
irradiation with light is fully determined by the optical conductivity
$\sigma_{\mu\nu}(\omega)$. Solving the equation of motion for the SPDM in the
velocity ($\boldsymbol{\rho}(\mathbf{k},t)$) and the dipole gauge
($\widetilde{\boldsymbol{\rho}}(\mathbf{k},t)$) and calculating the
corresponding currents (7) and (20) provides a direct route to computing the
optical conductivity. To this end, we apply a short pulse of the form
$\displaystyle\mathbf{E}(t)=\mathbf{e}\frac{F_{0}}{\sqrt{2\pi\tau^{2}}}e^{-\frac{t^{2}}{2\tau^{2}}}\
,$ (23)
where $\mathbf{e}$ denotes the polarization vector. In the limit
$\tau\rightarrow 0$, the pulse (23) becomes
$\mathbf{E}(t)=\mathbf{e}F_{0}\delta(t)$, containing all frequencies.
Exploiting the linear relation between $\sigma_{\mu\nu}(\omega)$ and the
electric field (23) upon $F_{0}\rightarrow 0$, the optical conductivity is
obtained by
$\displaystyle\sigma_{\mu\nu}(\omega)=\frac{1}{F_{0}}e^{\omega^{2}\tau^{2}/2}\int^{\infty}_{0}\\!dt\,e^{i\omega
t}e^{-\eta t}J_{\mu}(t)\ ,$ (24)
where $J_{\mu}(t)$ is the current in direction $\mu$ induced by choosing the
polarization $\mathbf{e}$ along direction $\nu$. Here we focus on the
longitudinal conductivity
$\displaystyle\sigma(\omega)=\sigma_{xx}(\omega)+\sigma_{yy}(\omega)\ .$ (25)
The damping factor $\eta$ is introduced for convergence, giving rise to
Lorentzian broadening of the resulting spectra.
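The post-processing step from $J(t)$ to $\sigma(\omega)$ in Eq. (24) reduces to a damped Fourier integral. A simple discretized sketch (left-endpoint Riemann sum on a uniform time grid; function and argument names are hypothetical):

```python
import numpy as np

def conductivity_from_current(J, dt, F0, tau, eta, omega):
    """Optical conductivity from the induced current, Eq. (24):
    sigma(w) = (1/F0) exp(w^2 tau^2 / 2) * int_0^inf dt exp(i w t - eta t) J(t),
    discretized as a Riemann sum. J: (Nt,) current samples; omega: (Nw,)
    frequencies. Returns a complex array of shape (Nw,)."""
    t = dt * np.arange(len(J))
    kernel = np.exp(1j * np.outer(omega, t)) * np.exp(-eta * t)
    return np.exp(omega ** 2 * tau ** 2 / 2) / F0 * (kernel @ J) * dt
```

The damping $\eta$ must be chosen consistently with the propagation time $T_{\mathrm{max}}$, since the integrand has to decay before the time grid ends.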
As a benchmark reference we calculated the optical conductivity using the
program epsilon.x from the Quantum ESPRESSO package, which calculates the
velocity matrix elements (5) directly from the plane-wave representation of
the Bloch wave-functions. Note that this procedure omits pseudopotential
contributions to the velocity operator (which are neglected throughout this
paper). We used Lorentzian smearing for both interband and intraband
transitions, matching the parameter $\eta$ from the TB calculations. This
procedure yields the dielectric function $\epsilon_{\mu\nu}(\omega)$, from
which we calculate the longitudinal conductivity via
$\sigma(\omega)=-i\omega(\epsilon_{xx}(\omega)+\epsilon_{yy}(\omega)-2)$. This
procedure amounts to the independent-particle approximation to the response
properties.
#### III.1.1 Conductivity within the velocity gauge vs. dipole gauge
We solved the equation of motion for the SPDM with the Hamiltonian (4) and
computed the current according to Eq. (7). The velocity matrix elements were
computed from the Wannier input via Eq. (6). We refer to the thus obtained
results in the velocity gauge as TB-VG. Analogously, we have propagated the
SPDM with the Hamiltonian (19) and computed the current according to Eq. (20).
This defines the TB dipole gauge (TB-DG).
In Fig. 2 we compare the optical conductivity from the TB models to the first-
principle spectra. In general, the agreement for low-energy features (for
which the TB models have been optimized) is very good for the real part. The
major difference between the TB-DG and TB-VG models is the unphysical
behavior of $\mathrm{Im}[\sigma(\omega)]$ for $\omega\rightarrow 0$ in the
velocity gauge. The TB-VG model displays an $\omega^{-1}$ behavior (albeit less
pronounced for WSe2). This artifact can be traced back to the violation of the
sum rule (14). Larger deviations from $f=n$ lead to larger deviations from the
reference conductivity. To check this behavior, we have evaluated $f$
according to Eq. (14) (see Tab. 1). Including more empty bands leads to an
improvement in the sum rule and thus in the behavior at small frequencies.
Inspecting the band structure (Fig. 1), we see that including even more bands
above the Fermi energy in the TB models is not feasible, as higher excited
states can hardly be described by localized WFs. In particular, for energies
larger than the continuum threshold, the Bloch states are entirely
delocalized. Achieving convergence of $\mathrm{Im}[\sigma(\omega)]$ within the
TB-VG model is out of reach.
Table 1: Overview of the TB models, number of electrons per unit cell (per spin) $n$, and the sum $f$ calculated from Eq. (14). For FeSe, the values have been obtained from the weight of the $\omega^{-1}$ term.

system | # of bands | $n$ | $f$
---|---|---|---
graphene | 5 | 4 | 0.64
| 9 | 4 | 2.06
SnC | 6 | 4 | 1.94
| 8 | 4 | 2.45
WSe2 | 5 | 1 | 1.11
| 11 | 7 | 4.18
FeSe | 10 | 6 | 2.10∗
| 16 | 12 | 3.06∗
However, imposing the correct $\omega^{-1}$ behavior is possible. Note that
the divergence at small frequencies is solely due to the diamagnetic current,
which is not canceled by the paramagnetic current. The cancellation (and thus
the sum rule (14)) can be enforced by replacing
$\displaystyle\mathbf{J}^{\mathrm{dia}}(t)=-qn\mathbf{A}(t)\rightarrow\mathbf{J}^{\mathrm{dia,c}}(t)=-qf\mathbf{A}(t)\
.$ (26)
Calculating the thus corrected current in the velocity gauge
$\mathbf{J}^{\mathrm{VG,c}}(t)=\mathbf{J}^{\mathrm{p}}(t)+\mathbf{J}^{\mathrm{dia,c}}(t)$
defines the corrected TB-VG model. The corrected model leads to excellent
agreement between the dipole and the velocity gauge and cures the spurious
$\omega^{-1}$ behavior of $\mathrm{Im}[\sigma(\omega)]$ in all cases. There is no
influence on $\mathrm{Re}[\sigma(\omega)]$. While the sum rule (14) applies to
insulators, incomplete cancellation of the paramagnetic and the diamagnetic
current will also affect $\mathrm{Im}[\sigma(\omega)]$ for metallic systems
like FeSe. In this case, we determine the $\omega^{-1}$ weight by
$\omega\mathrm{Im}[\sigma(\omega)]\rightarrow 0$ and determine $f$
accordingly.
#### III.1.2 Tight-binding vs. first-principle conductivity
Inspecting the real part of the conductivity for graphene, we notice excellent
agreement of the TB results with the first-principle spectrum, especially for
energies $\omega<10$ eV. For larger energies, the differences in the band
dispersions give rise to shifted spectra. Note that
$\mathrm{Re}[\sigma(\omega)]\rightarrow 0$ for $\omega\rightarrow 0$ is the
exact behavior Stauber _et al._ (2008), although the transition from almost
constant $\mathrm{Re}[\sigma(\omega)]$ to 0 as $\omega\rightarrow 0$ is very abrupt and
easily masked by smearing. Capturing this subtle feature is especially hard
when calculating the conductivity from the time evolution of the current, as
zero-frequency behavior is only accessible in the limit $t\rightarrow\infty$.
We note that TB-VG and TB-DG are in excellent agreement.
For SnC, all methods agree very well over the entire frequency range
considered. Note that the system is an insulator (at low temperature), so
$\mathrm{Re}[\sigma(\omega)]\rightarrow 0$ for $\omega\rightarrow 0$. This is
not exactly reproduced by the TB models (TB-VG is slightly worse); however,
this can be cured by systematically increasing $T_{\mathrm{max}}$ and reducing
the broadening $\eta$. Note that this procedure also requires finer sampling
of the Brillouin zone. Besides the real part, also the imaginary part with the
TB-DG and corrected TB-VG are in excellent agreement with the first-principle
calculation.
For WSe2, the main absorption peak is well captured by the TB models (the TB-
DG in particular). Similar to SnC, $\mathrm{Re}[\sigma(\omega)]$ does not tend
to zero exactly for $\omega\rightarrow 0$. This behavior is consistently more
pronounced with the TB-VG model. There are larger deviations of the imaginary
part for $\omega>3$ eV, which is to be expected from differences in peak
structure of the real part due to the Kramers-Kronig relation.
In contrast to the previous examples, FeSe is a metal. Due to the broadening
used for all methods (which acts as a generic damping mechanism), the Drude
peak is smeared out, giving rise to finite $\mathrm{Re}[\sigma(\omega)]$ for
$\omega\rightarrow 0$. Again, the behavior for very small frequencies is well
captured by the TB-DG model, while the TB-VG has difficulties for the chosen
$\eta$ and the propagation time $T_{\mathrm{max}}$. Apart from the range
$\omega\approx 0$, both TB models produce almost identical results, especially
for the imaginary part (using the corrected TB-VG model).
We have also computed $\sigma(\omega)$ for the reduced TB models (dashed lines
in Fig. 1), presented in Appendix B. Comparing full and reduced models one
finds that the artificial finite value of $\mathrm{Re}[\sigma(\omega)]$ for
$\omega\rightarrow 0$ for insulating systems (within the TB-VG model) is less
pronounced if $f\approx n$. Especially for WSe2 ($f=1.1$ within the reduced
model, see Tab. 1), the TB-DG and TB-VG models are almost identical.
### III.2 Berry curvature
Figure 3: Total Berry curvature of SnC (left) and WSe2 (right panel) along a
characteristic path in the Brillouin zone with the TB-DG model (Eq.
(29)–(30)), TB-VG model (Eq. (28) and Eq. (6)), and directly from the Bloch
states (DFT). The turquoise line corresponds to the dipole contribution (30).
The described way of obtaining the optical conductivity can, of course, also
be applied to the transverse response. In general, the Hall conductance
$\sigma_{H}=\sigma_{xy}(\omega=0)$ of insulating systems contains information
about their topological state due to its close connection to the Berry
curvature Yao _et al._ (2004):
$\displaystyle\sigma_{H}=\frac{e^{2}}{\hbar}\int_{\mathrm{BZ}}\frac{d\mathbf{k}}{(2\pi)^{2}}f_{\alpha}(\mathbf{k})\Omega_{\alpha}(\mathbf{k})\
.$ (27)
Here, $\Omega_{\alpha}(\mathbf{k})$ is the Berry curvature of band $\alpha$.
Exploiting Eq. (27) and working out the paramagnetic linear response function
(11) explicitly (in the velocity gauge) yields the Kubo formula for the Berry
curvature Thouless _et al._ (1982) in terms of the velocity matrix elements
(we restrict ourselves to the nondegenerate case here; the corresponding
non-abelian expressions can be derived analogously Gradhand _et al._ (2012)):
$\displaystyle\Omega_{\alpha}(\mathbf{k})=-2\mathrm{Im}\sum_{\alpha^{\prime}\neq\alpha}\frac{v^{x}_{\alpha\alpha^{\prime}}(\mathbf{k})v^{y}_{\alpha^{\prime}\alpha}(\mathbf{k})}{(\varepsilon_{\alpha}(\mathbf{k})-\varepsilon_{\alpha^{\prime}}(\mathbf{k}))^{2}}\
.$ (28)
In practice, the velocity matrix elements are usually computed from the
Wannier representation and Eq. (6). However, the formulation of the real-time
dynamics in terms of the dipole gauge provides an alternative route. To this
end we evaluate the current (20) in linear response. Inserting into the
current-current response function and evaluating the corresponding
conductivity, one obtains two distinct contributions:
$\Omega_{\alpha}(\mathbf{k})=\Omega^{\mathrm{disp}}_{\alpha}(\mathbf{k})+\Omega^{\mathrm{dip}}_{\alpha}(\mathbf{k})$.
This is in direct analogy to the current contributions (21) and (22). The
dispersion part reads
$\displaystyle\Omega^{\mathrm{disp}}_{\alpha}(\mathbf{k})=-2\mathrm{Im}\sum_{\alpha^{\prime}\neq\alpha}\frac{\mathbf{C}^{\dagger}_{\alpha}(\mathbf{k})\partial_{k_{x}}\widetilde{\mathbf{h}}(\mathbf{k})\mathbf{C}_{\alpha^{\prime}}(\mathbf{k})\mathbf{C}^{\dagger}_{\alpha^{\prime}}(\mathbf{k})\partial_{k_{y}}\widetilde{\mathbf{h}}(\mathbf{k})\mathbf{C}_{\alpha}(\mathbf{k})}{(\varepsilon_{\alpha}(\mathbf{k})-\varepsilon_{\alpha^{\prime}}(\mathbf{k}))^{2}}\
,$ (29)
while for the dipole part one finds
$\displaystyle\Omega^{\mathrm{dip}}_{\alpha}(\mathbf{k})=2\mathrm{Re}\sum_{\alpha^{\prime}\neq\alpha}\left(\frac{\mathbf{C}^{\dagger}_{\alpha}(\mathbf{k})\partial_{k_{x}}\widetilde{\mathbf{h}}(\mathbf{k})\mathbf{C}_{\alpha^{\prime}}(\mathbf{k})}{\varepsilon_{\alpha}(\mathbf{k})-\varepsilon_{\alpha^{\prime}}(\mathbf{k})}D^{y}_{\alpha\alpha^{\prime}}(\mathbf{k})-\frac{\mathbf{C}^{\dagger}_{\alpha}(\mathbf{k})\partial_{k_{y}}\widetilde{\mathbf{h}}(\mathbf{k})\mathbf{C}_{\alpha^{\prime}}(\mathbf{k})}{\varepsilon_{\alpha}(\mathbf{k})-\varepsilon_{\alpha^{\prime}}(\mathbf{k})}D^{x}_{\alpha\alpha^{\prime}}(\mathbf{k})\right)\
.$ (30)
For brevity, we have introduced the vector notation
$[\mathbf{C}_{\alpha}(\mathbf{k})]_{m}=C_{m\alpha}(\mathbf{k})$, while
$D^{\mu}_{\alpha\alpha^{\prime}}(\mathbf{k})=\sum_{mn}C^{*}_{m\alpha}(\mathbf{k})D^{\mu}_{mn}(\mathbf{k})C_{n\alpha}(\mathbf{k})$
denotes the dipole matrix elements in the Bloch basis. The expressions (29)
and (30) are equivalent to Eq. (71)–(72) from Ref. Gradhand _et al._ (2012).
Assuming a complete basis of WFs, one can also obtain Eq. (29)–(30) from Eq.
(28) by inserting Eq. (6) and (9). For an incomplete basis the equivalence is
only guaranteed if
$\sum_{n}D^{\mu}_{mn}(\mathbf{k})D^{\nu}_{nm^{\prime}}(\mathbf{k})=\sum_{n}D^{\nu}_{mn}(\mathbf{k})D^{\mu}_{nm^{\prime}}(\mathbf{k})$,
i.e., if the dipole operators with respect to orthogonal directions commute.
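As an illustration of Eq. (28), the following sketch evaluates the Kubo formula for a two-band Hamiltonian at a single $\mathbf{k}$ point. It uses the massive Dirac model (a toy model, not one of the TB models constructed in this work), for which the valence-band curvature is known in closed form:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def berry_curvature_kubo(H, vx, vy):
    """Band-resolved Berry curvature from the Kubo formula, Eq. (28):
    Omega_a = -2 Im sum_{a' != a} v^x_{aa'} v^y_{a'a} / (eps_a - eps_a')^2,
    with the velocity matrix elements taken in the band (eigen)basis."""
    eps, C = np.linalg.eigh(H)        # eigenvalues ascending; columns = eigenvectors
    vxb = C.conj().T @ vx @ C         # transform velocities to the band basis
    vyb = C.conj().T @ vy @ C
    n = len(eps)
    omega = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if b != a:
                omega[a] -= 2 * np.imag(vxb[a, b] * vyb[b, a]) / (eps[a] - eps[b]) ** 2
    return eps, omega

# toy check: massive Dirac model H(k) = kx*sx + ky*sy + m*sz with v = (sx, sy);
# with the sign convention of Eq. (28), the valence-band curvature is
# Omega_- = m / (2 (kx^2 + ky^2 + m^2)^{3/2})
kx, ky, m = 0.3, -0.2, 0.5
eps, omega = berry_curvature_kubo(kx * sx + ky * sy + m * sz, sx, sy)
analytic = m / (2 * (kx**2 + ky**2 + m**2) ** 1.5)
```

For the two-band case the curvatures of the two bands are equal and opposite, which the Kubo expression reproduces by construction.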
Figure 4: Current induced by a few cycle pulse (electric field shown in top
panels) in the case of weak (middle) and strong driving (bottom panels).
Calculations were performed with a $48\times 48$ sampling of the Brillouin
zone for all cases. For better readability, $J_{x}$ has been multiplied by the
factor $10^{3}$.
We have calculated the Berry curvature for the two systems that break
inversion symmetry – SnC and WSe2 – (i) from Eq. (28) inserting the velocity
matrix elements (6) from the respective TB model, (ii) from Eq. (29)–(30), and
(iii) from Eq. (28) based on velocity matrix elements calculated from the
Bloch states directly. We have used the Octopus code to compute the matrix
elements from the real-space representation of the
$\psi_{\mathbf{k}\alpha}(\mathbf{r})$ and the momentum operator
$\hat{\mathbf{p}}=-i\nabla_{\mathbf{r}}$. Converging the obtained Berry
curvature with respect to the number of bands thus serves as a benchmark.
In Fig. 3 we compare the different models for calculating the total Berry
curvature
$\Omega_{\mathrm{tot}}(\mathbf{k})=\sum_{\alpha}f_{\alpha}(\mathbf{k})\Omega_{\alpha}(\mathbf{k})$.
For SnC, $\Omega_{\mathrm{tot}}(\mathbf{k})$ is almost identical within the
TB-DG and TB-VG model. Both models agree qualitatively with the DFT
calculation, albeit the magnitude of the Berry curvature is slightly
overestimated, especially in the vicinity of the K and K′ points. This is
because all (dipole-allowed) bands contribute to the Berry curvature, in
particular the higher conduction bands that are missing in the TB models. The
picture is similar for WSe2. Interestingly, the peak of the Berry curvature
between K (K′) and $\Gamma$ (called $\Sigma$ ($\Sigma^{\prime}$) valley) is
well reproduced by both gauges. We also show the dipole contribution (30).
While for both materials the dispersion part (29) dominates, for WSe2 the
dipole part is the predominant contribution close to the $\Sigma^{(\prime)}$
valley. Note that this feature would be missed by the usual TB models Fang
_et al._ (2015) that are constructed without the dipole matrix elements.
### III.3 Nonlinear dynamics
We proceed to investigate the nonlinear response. To this end we simulated
the dynamics upon a short laser pulse, defined by
$\displaystyle\mathbf{A}(t)=\mathbf{e}_{x}A_{0}\exp\left(-a\left(\frac{t-t_{0}}{\tau}\right)^{2}\right)\cos[\omega_{0}(t-t_{0})]\
.$ (31)
Here, $\mathbf{e}_{x}$ denotes the unit vector in $x$ direction. Choosing the
parameters $a=4.6$ and $\tau=2\pi n_{c}/\omega_{0}$, the vector potential (31)
represents an $n_{c}$-cycle pulse. We choose $n_{c}=2$ and determine
$\omega_{0}$ to drive typical excitations within the band manifold spanned by
the TB models. For the pulse strength $A_{0}$ we consider two scenarios: (i)
weak driving (but beyond linear response), and (ii) strong excitation. We have
chosen $A_{0}$ to obtain representative examples of the dynamics. Tab. 2 lists
the pulse parameters for all systems considered.
Table 2: Pulse parameters defining the pulse (31) used for the simulations. $\omega_{0}$ is given in units of eV, while atomic units are used for $A_{0}$.

system | $\omega_{0}$ | $A_{0}$ (weak) | $A_{0}$ (strong)
---|---|---|---
graphene | 2.0 | 0.025 | 0.075
SnC | 2.5 | 0.035 | 0.125
WSe2 | 2.0 | 0.025 | 0.125
FeSe | 1.5 | 0.025 | 0.1125
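For reference, the pulse (31) is straightforward to implement. The sketch below (our own function names; unit conversions between eV and atomic units are omitted) returns the $x$ component of $\mathbf{A}(t)$ and the corresponding field $\mathbf{E}(t)=-\partial_{t}\mathbf{A}(t)$:

```python
import numpy as np

def vector_potential(t, A0, omega0, n_c=2, a=4.6, t0=0.0):
    """Few-cycle pulse of Eq. (31):
        A(t) = A0 * exp(-a ((t - t0)/tau)^2) * cos(omega0 (t - t0)),
    with tau = 2*pi*n_c/omega0 so that the envelope spans n_c cycles."""
    tau = 2 * np.pi * n_c / omega0
    return A0 * np.exp(-a * ((t - t0) / tau) ** 2) * np.cos(omega0 * (t - t0))

def electric_field(t, A0, omega0, n_c=2, a=4.6, t0=0.0, dt=1e-4):
    """E(t) = -dA/dt, evaluated here by a central finite difference."""
    Ap = vector_potential(t + dt, A0, omega0, n_c, a, t0)
    Am = vector_potential(t - dt, A0, omega0, n_c, a, t0)
    return -(Ap - Am) / (2 * dt)
```

With the weak-driving graphene parameters of Tab. 2 ($A_{0}=0.025$, $\omega_{0}=2.0$), the vector potential peaks at $t=t_{0}$ and is negligible a few cycles away.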
To obtain an accurate benchmark, we have simulated the dynamics with the TDDFT
code Octopus. The Kohn-Sham potential was frozen as the ground-state
potential, so that all calculations are performed on equal footing
(independent particle approximation). The current was calculated (ignoring
pseudopotential corrections) from Eq. (3).
Figure 5: Current induced by a few cycle pulse as in Fig. 4 for longer times
$t\gg\tau$ where $\mathbf{A}(t)\approx 0$. The color coding is consistent with
Fig. 4.
In Fig. 4 we present the induced current $J_{x}$ along with the shape of the
pulse (31) for all systems. Comparing the TB-VG model to the TDDFT results one
finds pronounced differences, especially for graphene and FeSe. The
oscillations of $J_{x}$ seem out of phase. This behavior is due to the
incomplete cancellation of paramagnetic and diamagnetic current, similar to
the linear response case. It is less severe for SnC and WSe2, where the sum
rule $f=n$ (cf. Eq. (13)) is violated to a lesser extent. Following the same
procedure as in Section III.1, we replaced $n\rightarrow f$ when calculating
the diamagnetic current (26). The thus obtained corrected TB-VG model yields
almost identical results as the TB-DG model in the regime of weak driving
(albeit it is beyond linear response). Even for strong driving, the corrected
TB-VG model is in good agreement with the TB-DG model, although deviations
become apparent when the field peaks. The TB-DG model reproduces the TDDFT
current better than the TB-VG model. Even for strong excitations, the TB-DG
model is in remarkably good agreement.
For all systems, both TB models yield a very good approximation to the current
$J_{x}(t)$ obtained from TDDFT. The agreement is particularly good for SnC and
WSe2. For FeSe, the magnitude of the current is slightly overestimated
(inspecting the velocity matrix elements obtained from the Bloch states
directly and from the TB models reveals some qualitative discrepancies).
We have also performed analogous simulations for the reduced TB models. As
expected, for weak excitation the results are almost identical to Fig. 4,
while deviations are more pronounced for stronger driving. This is
particularly the case for WSe2: the lack of the Se $p$ bands (see Fig. 1)
limits the nonresonant optical transitions, and the $p$-$d$ hybridization of
the $d_{xz}$ and $d_{yz}$ orbitals is missing. For graphene and SnC, excluding
higher bands has only a minor effect, as the additional bands are strongly
off-resonant. For FeSe, excluding the lower-lying $p$ bands has almost no
noticeable effect for the same reason.
There is still current flowing in the systems after the pulse is essentially
zero ($t\gg\tau$), which is mostly due to the induced oscillations of the
dipole moments. Fig. 5 shows the current corresponding to Fig. 4 in this
field-free regime. As $\mathbf{A}(t)$ is vanishingly small, there is no
diamagnetic contribution spoiling the TB-VG model. Both the TB-DG and TB-VG
model are in very good agreement with the TDDFT calculation, albeit the TB-DG
model seems to have a slight edge over the TB-VG model.
## IV Conclusions
We have studied light-induced dynamics in 2D systems in the linear response
regime and beyond, focusing on the different ways of introducing light-matter
interaction. While all gauges of light-matter coupling are connected by a
unitary transformation of the Hamiltonian and are thus equivalent, from a
practical point of view it is pertinent to assess the accuracy of each gauge.
This is particularly important when working with TB models to capture low-
energy excitations, which inherently breaks the completeness relation. We have
introduced the standard velocity gauge from the minimal coupling scheme and
presented the transformation to the dipole gauge, which yields a multi-band
extension of the Peierls substitution. To systematically investigate the
performance of the dipole and velocity gauges in a reduced basis, we have
constructed first-principles TB models including dipole transition matrix
elements in the Wannier basis. As an accurate reference method, we performed
TDDFT simulations with a converged plane-wave (for linear response) or
real-space (for nonlinear dynamics) basis.
Linear response properties – we focused on the optical conductivity – are well
captured by the TB models in their corresponding energy range. The TB-VG
model, however, shows spurious $\omega^{-1}$ behavior for the imaginary part
of the conductivity, which can be traced back to a violation of a sum rule of
the paramagnetic response function. In contrast, the TB-DG model captures the
correct low-frequency behavior by construction. Correcting the TB-VG model by
hand is possible by enforcing the sum rule. Instead of correcting the TB-VG
model, convergence of the low-energy behavior can be achieved by (i)
systematically increasing the number of conduction bands, and (ii) excluding
lower-lying valence bands that are not participating in the dynamics. This
strategy is exemplified by WSe2. However, the delocalized nature of higher
conduction bands renders (i) impractical. Thus, enforcing the paramagnetic sum
rule is a more efficient way of systematically improving the imaginary part of
the conductivity. This procedure will also be important when investigating
light-matter interaction beyond the dipole approximation (like Raman or X-ray
scattering), where the diamagnetic term is responsible for the excitations.
TB models are also a convenient way to calculate topological properties like
the Berry curvature. With an accurate Wannier representation to the Bloch
states, the Berry curvature is almost identical within the TB-VG and TB-DG
model. The dipole-gauge formulation furthermore allows one to disentangle
orbital hybridization and dipole couplings. The latter contribution, which is
often ignored in the TB framework, can be important, as demonstrated for WSe2.
Nonlinear excitations can also be captured accurately within the TB models.
Similar to the linear response case, the lack of cancellation of paramagnetic
and diamagnetic current within the TB-VG model gives rise to a strongly
overestimated total current. Remarkably, enforcing the cancellation on the
linear-response level cures these deficiencies even for strong excitations.
The TB-DG model provides a more accurate description, especially for strong
pulses.
In summary, both the TB-VG and the TB-DG model provide an excellent
description of light-induced dynamics (as long as the relevant bands are
included), along with all the advantages of TB models: simplicity, low
computational cost, straightforward interpolation to any momentum grid, and
the possibility to include many-body effects with quantum kinetic methods.
## Acknowledgments
We acknowledge insightful discussions with Denis Golež, Brian Moritz and C.
Das Pemmaraju. We also thank the Stanford Research Computing Center for
providing computational resources. Data used in this manuscript is stored on
Stanford’s Sherlock computing cluster. Supported by the U.S. Department of
Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences
and Engineering, under contract DE-AC02-76SF00515. M. S. thanks the Alexander
von Humboldt Foundation for its support with a Feodor Lynen scholarship. Y. M.
acknowledges the support by a Grant-in-Aid for Scientific Research from JSPS,
KAKENHI Grant Nos. JP19K23425, JP20K14412, JP20H05265 and JST CREST Grant No.
JPMJCR1901.
## Appendix A Transformation to the dipole gauge
For completeness we present the detailed derivation of the dipole gauge in
this appendix. We start from the minimal-coupling Hamiltonian in Wannier
representation, defined by
$\displaystyle
T_{m\mathbf{R}n\mathbf{R}^{\prime}}(t)=\big{\langle}m\mathbf{R}\big{|}\frac{1}{2}(\hat{\mathbf{p}}-q\mathbf{A}(t))^{2}+\hat{v}\big{|}n\mathbf{R}^{\prime}\big{\rangle}\
.$ (32)
For finite systems the unitary transformation is constructed from the
generator $\hat{S}(t)=-iq\mathbf{A}(t)\cdot\mathbf{r}$, i.e.,
$\hat{U}(t)=e^{\hat{S}(t)}$. Expressing the dipole operator in the Wannier
basis, the generalization of this generator to periodic systems is defined by
$\displaystyle
S_{m\mathbf{R}n\mathbf{R}^{\prime}}(t)=-\mathrm{i}q\mathbf{A}(t)\cdot\mathbf{D}_{m\mathbf{R}n\mathbf{R}^{\prime}}\
.$ (33)
Collecting orbital and site indices in compact matrix notation, the generator
(33) defines the unitary transformation $\mathbf{U}(t)=e^{\mathbf{S}(t)}$ by
its matrix elements (cf. Eq. (17)). The generator must obey
$\mathbf{S}(t)=-\mathbf{S}^{\dagger}(t)$ to define a unitary transformation,
which is fulfilled if
$\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}=\mathbf{D}^{*}_{m^{\prime}\mathbf{R}^{\prime}m\mathbf{R}}$.
In analogy to Eq. (II.2), the dipole-gauge Hamiltonian in Wannier
representation is obtained by transforming Eq. (32):
$\displaystyle\widetilde{h}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}(t)=[\mathbf{U}(t)(\mathbf{T}(t)+\mathrm{i}\partial_{t}\mathbf{S}(t))\mathbf{U}^{\dagger}(t)]_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}\
.$ (34)
The first term gives rise to the Peierls phase factor
$\displaystyle\widetilde{T}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}(t)$
$\displaystyle=[\mathbf{U}(t)\mathbf{T}(t)\mathbf{U}^{\dagger}(t)]_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}=\sum_{\mathbf{R}_{1}\mathbf{R}_{2}}\sum_{n_{1}n_{2}}\langle
m\mathbf{R}|e^{-iq\mathbf{A}(t)\cdot(\mathbf{r}-\mathbf{R})}|n_{1}\mathbf{R}_{1}\rangle
T_{n_{1}\mathbf{R}_{1}n_{2}\mathbf{R}_{2}}(t)\langle
n_{2}\mathbf{R}_{2}|e^{iq\mathbf{A}(t)\cdot(\mathbf{r}-\mathbf{R}^{\prime})}|m^{\prime}\mathbf{R}^{\prime}\rangle$
$\displaystyle=e^{\mathrm{i}q\mathbf{A}(t)\cdot(\mathbf{R}-\mathbf{R}^{\prime})}\langle
m\mathbf{R}|e^{-\mathrm{i}q\mathbf{A}(t)\cdot\mathbf{r}}\left(\frac{1}{2}\left(\hat{\mathbf{p}}-q\mathbf{A}(t)\right)^{2}+\hat{v}\right)e^{\mathrm{i}q\mathbf{A}(t)\cdot\mathbf{r}}|m^{\prime}\mathbf{R}^{\prime}\rangle=e^{\mathrm{i}q\mathbf{A}(t)\cdot(\mathbf{R}-\mathbf{R}^{\prime})}T_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}\
,$
while the second term arises due to the time-dependence of the generator (33).
Following similar steps as above, we can show
$\displaystyle[\mathbf{U}(t)\mathrm{i}\partial_{t}\mathbf{S}(t)\mathbf{U}^{\dagger}(t)]_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}=-q\mathbf{E}(t)\cdot
e^{\mathrm{i}q\mathbf{A}(t)\cdot(\mathbf{R}-\mathbf{R}^{\prime})}\mathbf{D}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}\
.$
Hence, the unitary transformation (34) gives rise to the Wannier Hamiltonian
(18).
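In momentum space the resulting Hamiltonian combines the Peierls-shifted hopping matrix with the dipole coupling to the electric field, $\widetilde{h}(\mathbf{k},t)=T(\mathbf{k}-q\mathbf{A}(t))-q\mathbf{E}(t)\cdot\mathbf{D}(\mathbf{k}-q\mathbf{A}(t))$ (cf. the replacement noted after Eq. (39)). A minimal sketch (toy one-dimensional parameters and our own function names, not the Wannier-based setup of this work):

```python
import numpy as np

def bloch_matrix(M_R, k):
    """Bloch sum of Wannier matrix elements, M(k) = sum_R e^{i k.R} M[R]."""
    out = 0
    for R, M in M_R.items():
        out = out + np.exp(1j * np.dot(k, R)) * M
    return out

def dipole_gauge_hamiltonian(T_R, D_R, k, A_t, E_t, q=-1.0):
    """Dipole-gauge Bloch Hamiltonian: Peierls-shifted hoppings plus the
    dipole coupling, h~(k,t) = T(k - qA(t)) - q E(t).D(k - qA(t))."""
    kA = np.asarray(k, dtype=float) - q * np.asarray(A_t, dtype=float)
    h = bloch_matrix(T_R, kA)
    for mu, E_mu in enumerate(E_t):
        D_mu = {R: D[..., mu] for R, D in D_R.items()}
        h = h - q * E_mu * bloch_matrix(D_mu, kA)
    return h

# toy 1D two-band chain (hypothetical parameters; R in units of the lattice vector)
t_hop = np.array([[0.5, 0.1], [0.2, -0.3]], dtype=complex)
T_R = {(0,): np.diag([1.0, -1.0]).astype(complex),
       (1,): t_hop, (-1,): t_hop.conj().T}     # T[-R] = T[R]^dagger for hermiticity
D_R = {(0,): np.array([[0.0, 0.3], [0.3, 0.0]], dtype=complex)[..., None]}
h = dipole_gauge_hamiltonian(T_R, D_R, k=(0.4,), A_t=(0.02,), E_t=(0.01,))
```

Since the hoppings satisfy $T_{-\mathbf{R}}=T_{\mathbf{R}}^{\dagger}$ and the dipole matrices are Hermitian, $\widetilde{h}(\mathbf{k},t)$ remains Hermitian for any real field.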
It is straightforward to show that the SPDM obeys the transformed equation of
motion
$\displaystyle\frac{d}{dt}\widetilde{\boldsymbol{\rho}}(\mathbf{k},t)=-i\left[\widetilde{\mathbf{h}}(\mathbf{k},t),\widetilde{\boldsymbol{\rho}}(\mathbf{k},t)\right]\
.$ (35)
The dipole-gauge SPDM $\widetilde{\boldsymbol{\rho}}(\mathbf{k},t)$ transforms
according to
$\widetilde{\rho}_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}(t)=[\mathbf{U}(t)\boldsymbol{\rho}(t)\mathbf{U}^{\dagger}(t)]_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}$.
Here $[\boldsymbol{\rho}(t)]_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}$
denotes the velocity-gauge SPDM in Wannier representation.
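The equation of motion (35) is an ordinary commutator equation per $\mathbf{k}$ point and can be integrated with any standard scheme. A sketch using a fixed-step RK4 integrator (our choice for illustration; the time stepping used in this work may differ):

```python
import numpy as np

def propagate_spdm(rho0, h_of_t, t_grid):
    """Propagate the SPDM via Eq. (35), d rho/dt = -i [h~(k,t), rho],
    with a classical fixed-step RK4 integrator. h_of_t(t) must return
    the (Hermitian) Bloch Hamiltonian matrix at time t."""
    def rhs(t, rho):
        h = h_of_t(t)
        return -1j * (h @ rho - rho @ h)
    rho = rho0.astype(complex)
    for i in range(len(t_grid) - 1):
        t, dt = t_grid[i], t_grid[i + 1] - t_grid[i]
        k1 = rhs(t, rho)
        k2 = rhs(t + dt / 2, rho + dt / 2 * k1)
        k3 = rhs(t + dt / 2, rho + dt / 2 * k2)
        k4 = rhs(t + dt, rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

# two-level check: a static Hamiltonian conserves the trace, hermiticity,
# and the purity Tr(rho^2) of the propagated SPDM
h = np.array([[0.0, 0.4], [0.4, 1.0]], dtype=complex)
rho0 = np.diag([1.0, 0.0]).astype(complex)
t = np.linspace(0.0, 10.0, 2001)
rho = propagate_spdm(rho0, lambda s: h, t)
```

The commutator structure conserves the trace exactly at every RK4 stage; the purity is conserved only up to the integrator's truncation error, which provides a convenient step-size check.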
### A.1 Total current in the dipole gauge
Figure 6: Optical conductivity as in Fig. 2 (using the same parameters), but
for the reduced TB models.
To derive the expression for the current in the dipole gauge, we start from
the minimal coupling formulation (7) and require gauge invariance. Switching
to the Wannier basis, the expectation value of total current (7) reads
$\displaystyle\mathbf{J}^{\mathrm{VG}}(t)=\frac{q}{N}\sum_{\mathbf{R}\mathbf{R}^{\prime}}\sum_{mm^{\prime}}\langle
m\mathbf{R}|\hat{\mathbf{p}}-q\mathbf{A}(t)|m^{\prime}\mathbf{R}^{\prime}\rangle\rho_{m^{\prime}\mathbf{R}^{\prime}m\mathbf{R}}(t)\
,$
where
$\rho_{m^{\prime}\mathbf{R}^{\prime}m\mathbf{R}}(t)=\sum_{\mathbf{k}}\rho_{mm^{\prime}}(\mathbf{k},t)e^{-i\mathbf{k}\cdot(\mathbf{R}-\mathbf{R}^{\prime})}$
denotes the SPDM in Wannier basis. Exploiting the cyclic invariance of the
trace, we insert the unitary transformation (17) to transform the momentum
matrix elements and SPDM to the dipole gauge. One finds
$\displaystyle\mathbf{J}^{\mathrm{LG}}(t)=\frac{q}{N}\sum_{\mathbf{R}\mathbf{R}^{\prime}}\sum_{mm^{\prime}}e^{-iq\mathbf{A}(t)\cdot(\mathbf{R}-\mathbf{R}^{\prime})}\langle
m\mathbf{R}|\hat{\mathbf{p}}|m^{\prime}\mathbf{R}^{\prime}\rangle\widetilde{\rho}_{m^{\prime}\mathbf{R}^{\prime}m\mathbf{R}}(t)\
.$ (36)
If the matrix elements of the momentum operator are available in the Wannier
basis, Eq. (36) provides a direct way of obtaining the (gauge-invariant) total
current. However, it is typically more convenient to calculate dipole matrix
elements instead. Note that this is also how the Berry connection (9) is computed
based on WFs Yates _et al._ (2007). Therefore, we replace the momentum
operator by $\hat{\mathbf{p}}=-i[\mathbf{r},\hat{h}(t)]$.
Using the cell-centered dipole matrix elements (10), one finds
$\displaystyle\langle
m\mathbf{R}|\hat{\mathbf{p}}|m^{\prime}\mathbf{R}^{\prime}\rangle=-i(\mathbf{R}-\mathbf{R}^{\prime})T_{m\mathbf{R}m^{\prime}\mathbf{R}^{\prime}}-i\sum_{\mathbf{R}_{1},n_{1}}\Big{(}D_{m\mathbf{R}n_{1}\mathbf{R}_{1}}T_{n_{1}\mathbf{R}_{1}m^{\prime}\mathbf{R}^{\prime}}-T_{m\mathbf{R}n_{1}\mathbf{R}_{1}}D_{n_{1}\mathbf{R}_{1}m^{\prime}\mathbf{R}^{\prime}}\Big{)}\
.$
The structure suggests two distinct terms which contribute to the current:
$\mathbf{J}(t)=\mathbf{J}^{(1)}(t)+\mathbf{J}^{(2)}(t)$. Fourier transforming
the first term and the SPDM to momentum space, the first contribution
simplifies to
$\displaystyle\mathbf{J}^{(1)}(t)=\frac{q}{N}\sum_{\mathbf{k}}\sum_{mm^{\prime}}\nabla_{\mathbf{k}}T_{mm^{\prime}}(\mathbf{k}-q\mathbf{A}(t))\widetilde{\rho}_{m^{\prime}m}(\mathbf{k},t)\
.$ (37)
Similarly, the second contribution after switching to momentum space is given
by
$\displaystyle\mathbf{J}^{(2)}(t)=\frac{q}{N}\sum_{\mathbf{k}}\sum_{mm^{\prime}}\boldsymbol{\mathcal{P}}_{mm^{\prime}}(\mathbf{k}-q\mathbf{A}(t))\widetilde{\rho}_{m^{\prime}m}(\mathbf{k},t)\
,$ (38)
where
$\displaystyle\boldsymbol{\mathcal{P}}_{mm^{\prime}}(\mathbf{k})=-i\sum_{n}\left(\mathbf{D}_{mn}(\mathbf{k})T_{nm^{\prime}}(\mathbf{k})-T_{mn}(\mathbf{k})\mathbf{D}_{nm^{\prime}}(\mathbf{k})\right)\
.$ (39)
We note that $T_{mm^{\prime}}(\mathbf{k}-q\mathbf{A}(t))$ can be replaced by
$\widetilde{h}_{mm^{\prime}}(\mathbf{k},t)$ in the commutator (39). Using the
identity $\mathrm{Tr}([A,B]C)=\mathrm{Tr}(A[B,C])$ one thus obtains
$\displaystyle\mathbf{J}^{(2)}(t)=\frac{q}{N}\sum_{\mathbf{k}}\sum_{mm^{\prime}}D_{mm^{\prime}}(\mathbf{k}-q\mathbf{A}(t))\frac{d}{dt}\widetilde{\rho}_{m^{\prime}m}(\mathbf{k},t)\
.$ (40)
We can move the time derivative from the density matrix to the whole
expression by compensating for the derivative acting on
$\mathbf{D}_{mm^{\prime}}(\mathbf{k}-q\mathbf{A}(t))$. One thus obtains
$\mathbf{J}^{\mathrm{LG}}(t)=\mathbf{J}^{\mathrm{disp}}(t)+\mathbf{J}^{\mathrm{dip}}(t)$,
where the two contributions are defined by Eq. (21) and Eq. (22) (by
$\mathbf{J}^{\mathrm{dip}}(t)=\dot{\mathbf{P}}(t)$), respectively.
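The dispersion contribution (37) amounts to a trace over band indices and an average over the $\mathbf{k}$ grid, with the $\mathbf{k}$-derivative of the hopping matrix available analytically in the Wannier representation, $\partial_{k}T(k)=\sum_{R}\mathrm{i}R\,e^{\mathrm{i}kR}\,T_{R}$ in one dimension. A one-dimensional sketch (hypothetical helper names):

```python
import numpy as np

def dispersion_current(T_R, rho_k, k_grid, A_t, q=-1.0):
    """Dispersion part of the dipole-gauge current, Eq. (37):
        J^(1)(t) = (q/N) sum_k Tr[ grad_k T(k - qA(t)) rho~(k,t) ],
    with the k-derivative taken analytically in the Wannier representation
    (1D sketch; T_R maps integer lattice vectors R to hopping matrices)."""
    J = 0.0
    for k, rho in zip(k_grid, rho_k):
        kA = k - q * A_t
        dT = sum(1j * R * np.exp(1j * kA * R) * M for R, M in T_R.items())
        J += np.trace(dT @ rho).real
    return q * J / len(k_grid)

# filled-band check: a completely filled band carries no current,
# independent of the vector potential
T_R = {1: np.array([[0.5]], dtype=complex), -1: np.array([[0.5]], dtype=complex)}
k_grid = np.linspace(-np.pi, np.pi, 64, endpoint=False)
rho_k = [np.array([[1.0]], dtype=complex)] * 64
J_filled = dispersion_current(T_R, rho_k, k_grid, A_t=0.3)
```

The filled-band result vanishes because $\partial_{k}T$ integrates to zero over the Brillouin zone, a useful sanity check for any implementation of Eq. (37).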
### A.2 Static limit
For an insulating system at zero temperature, the DC current response
vanishes. This property is fulfilled by construction in the dipole gauge. We
note that the displacement current does not contribute to the DC current:
$\mathbf{J}^{\prime}_{2}(\omega)=-i\omega\mathbf{P}(\omega)\rightarrow 0$ for
$\omega\rightarrow 0$, since $\mathbf{P}(\omega\rightarrow 0)$ stays finite.
To show that the static contribution
$\mathbf{J}^{\prime}_{1}(\omega\rightarrow 0)$ vanishes, it is convenient to
switch to a band basis:
$\displaystyle\mathbf{J}^{\prime}_{1}(\omega)=\frac{q}{N}\int^{\infty}_{-\infty}\\!dt\,e^{i\omega
t}\sum_{\mathbf{k}}\sum_{\alpha\in\mathrm{occ}}\langle\psi_{\mathbf{k}\alpha}(t)|\nabla_{\mathbf{k}}\widetilde{\mathbf{h}}(\mathbf{k},t)|\psi_{\mathbf{k}\alpha}(t)\rangle$
(41)
Applying first-order time-dependent perturbation theory to the time-dependent
Bloch states $|\psi_{\mathbf{k}\alpha}(t)\rangle$ and assuming a quasi-static
electric field, one finds that only valence bands appear in the expansion
$|\psi_{\mathbf{k}\alpha}(t)\rangle=\sum_{\nu}C_{\alpha\nu}(\mathbf{k},t)|\psi_{\mathbf{k}\nu}\rangle$.
Using this property and expanding Eq. (41) up to linear order in the external
fields, one finds $\mathbf{J}^{\prime}_{1}(\omega)\rightarrow 0$ for
$\omega\rightarrow 0$, similar to the single-band Peierls substitution.
## Appendix B Conductivity within reduced tight-binding models
We have computed the optical conductivity $\sigma(\omega)$ for the reduced TB
models (see dashed lines in Fig. 1) for all systems by the same procedure as
for the full models (see Sec. III.1). The result is shown in Fig. 6.
## References
* Basov _et al._ (2017) D. N. Basov, R. D. Averitt, and D. Hsieh, Nat. Mater. 16, 1077 (2017).
* Schubert _et al._ (2014) O. Schubert, M. Hohenleutner, F. Langer, B. Urbanek, C. Lange, U. Huttner, D. Golde, T. Meier, M. Kira, S. W. Koch, and R. Huber, Nat Photon 8, 119 (2014).
* Reimann _et al._ (2018) J. Reimann, S. Schlauderer, C. P. Schmid, F. Langer, S. Baierl, K. A. Kokh, O. E. Tereshchenko, A. Kimura, C. Lange, J. Güdde, U. Höfer, and R. Huber, Nature 562, 396 (2018).
* Wang _et al._ (2013) Y. H. Wang, H. Steinberg, P. Jarillo-Herrero, and N. Gedik, Science 342, 453 (2013).
* Mahmood _et al._ (2016) F. Mahmood, C.-K. Chan, Z. Alpichshev, D. Gardner, Y. Lee, P. A. Lee, and N. Gedik, Nature Phys. 12, 306 (2016).
* De Giovannini _et al._ (2016) U. De Giovannini, H. Hübener, and A. Rubio, Nano Lett. 16, 7993 (2016).
* Hübener _et al._ (2017) H. Hübener, M. A. Sentef, U. D. Giovannini, A. F. Kemper, and A. Rubio, Nat Commun 8, 1 (2017).
* Schüler _et al._ (2020) M. Schüler, U. De Giovannini, H. Hübener, A. Rubio, M. A. Sentef, T. P. Devereaux, and P. Werner, Phys. Rev. X 10, 041013 (2020).
* Ruggenthaler _et al._ (2018) M. Ruggenthaler, N. Tancogne-Dejean, J. Flick, H. Appel, and A. Rubio, Nature Reviews Chemistry 2, 0118 (2018).
* Mazza and Georges (2019) G. Mazza and A. Georges, Phys. Rev. Lett. 122, 017401 (2019).
* Pemmaraju _et al._ (2018) C. D. Pemmaraju, F. D. Vila, J. J. Kas, S. A. Sato, J. J. Rehr, K. Yabana, and D. Prendergast, Comp. Phys. Commun. 226, 30 (2018).
* Tancogne-Dejean and Rubio (2018) N. Tancogne-Dejean and A. Rubio, Science Advances 4, eaao5207 (2018).
* Tancogne-Dejean _et al._ (2018) N. Tancogne-Dejean, M. A. Sentef, and A. Rubio, Phys. Rev. Lett. 121, 097402 (2018).
* Golež _et al._ (2019a) D. Golež, L. Boehnke, M. Eckstein, and P. Werner, Phys. Rev. B 100, 041111 (2019a).
* Petocchi _et al._ (2019) F. Petocchi, S. Beck, C. Ederer, and P. Werner, Phys. Rev. B 100, 075147 (2019).
* Attaccalite _et al._ (2011) C. Attaccalite, M. Gruening, and A. Marini, Phys. Rev. B 84, 245110 (2011).
* Perfetto _et al._ (2019) E. Perfetto, D. Sangalli, A. Marini, and G. Stefanucci, Phys. Rev. Materials 3, 124601 (2019).
* Sentef _et al._ (2013) M. Sentef, A. F. Kemper, B. Moritz, J. K. Freericks, Z.-X. Shen, and T. P. Devereaux, Phys. Rev. X 3, 041033 (2013).
* Molina-Sánchez _et al._ (2016) A. Molina-Sánchez, M. Palummo, A. Marini, and L. Wirtz, Phys. Rev. B 93, 155435 (2016).
* Peierls (1933) R. Peierls, Z. Physik 80, 763 (1933).
* Ismail-Beigi _et al._ (2001) S. Ismail-Beigi, E. K. Chang, and S. G. Louie, Phys. Rev. Lett. 87, 087402 (2001).
* Foreman (2002) B. A. Foreman, Phys. Rev. B 66, 165212 (2002).
* Yates _et al._ (2007) J. R. Yates, X. Wang, D. Vanderbilt, and I. Souza, Phys. Rev. B 75, 195121 (2007).
* Golež _et al._ (2019b) D. Golež, M. Eckstein, and P. Werner, Phys. Rev. B 100, 235117 (2019b).
* Li _et al._ (2020) J. Li, D. Golez, G. Mazza, A. Millis, A. Georges, and M. Eckstein, arXiv:2001.09726 [cond-mat] (2020).
* Mahon _et al._ (2019) P. T. Mahon, R. A. Muniz, and J. E. Sipe, Phys. Rev. B 99, 235140 (2019).
* Mahon and Sipe (2020a) P. T. Mahon and J. E. Sipe, Phys. Rev. Research 2, 043110 (2020a).
* Mahon and Sipe (2020b) P. T. Mahon and J. E. Sipe, Phys. Rev. Research 2, 033126 (2020b).
* Novoselov _et al._ (2005) K. S. Novoselov, D. Jiang, F. Schedin, T. J. Booth, V. V. Khotkevich, S. V. Morozov, and A. K. Geim, Proc. Natl. Acad. Sci. U. S. A. 102, 10451 (2005).
* Geim and Novoselov (2007) A. K. Geim and K. S. Novoselov, Nat. Mater. 6, 183 (2007).
* Hoat _et al._ (2019) D. M. Hoat, M. Naseri, R. Ponce-Péreze, N. N. Hieu, J. F. Rivas-Silva, T. V. Vu, H. D. Tong, and G. H. Cocoletzi, Mater. Res. Express 7, 015013 (2019).
* Lü _et al._ (2012) T.-Y. Lü, X.-X. Liao, H.-Q. Wang, and J.-C. Zheng, J. Mater. Chem. 22, 10062 (2012).
* Lee _et al._ (2014) J. J. Lee, F. T. Schmitt, R. G. Moore, S. Johnston, Y.-T. Cui, W. Li, M. Yi, Z. K. Liu, M. Hashimoto, Y. Zhang, D. H. Lu, T. P. Devereaux, D.-H. Lee, and Z.-X. Shen, Nature 515, 245 (2014).
* Guterding _et al._ (2017) D. Guterding, H. O. Jeschke, and R. Valentí, Phys. Rev. B 96, 125107 (2017).
* Sentef _et al._ (2018) M. A. Sentef, M. Ruggenthaler, and A. Rubio, Science Advances 4, eaau6969 (2018).
* Giannozzi _et al._ (2009) P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, Davide Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. d. Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, Anton Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, Stefano Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, J. Phys.: Condens. Matter 21, 395502 (2009).
* Andrade _et al._ (2015) X. Andrade, D. Strubbe, U. D. Giovannini, A. H. Larsen, M. J. T. Oliveira, J. Alberdi-Rodriguez, A. Varas, I. Theophilou, N. Helbig, M. J. Verstraete, L. Stella, F. Nogueira, A. Aspuru-Guzik, A. Castro, M. A. L. Marques, and A. Rubio, Phys. Chem. Chem. Phys. (2015), 10.1039/C5CP00351B.
* Tancogne-Dejean _et al._ (2020) N. Tancogne-Dejean, M. J. T. Oliveira, X. Andrade, H. Appel, C. H. Borca, G. Le Breton, F. Buchholz, A. Castro, S. Corni, A. A. Correa, U. De Giovannini, A. Delgado, F. G. Eich, J. Flick, G. Gil, A. Gomez, N. Helbig, H. Hübener, R. Jestädt, J. Jornet-Somoza, A. H. Larsen, I. V. Lebedeva, M. Lüders, M. A. L. Marques, S. T. Ohlmann, S. Pipolo, M. Rampp, C. A. Rozzi, D. A. Strubbe, S. A. Sato, C. Schäfer, I. Theophilou, A. Welden, and A. Rubio, J. Chem. Phys. 152, 124119 (2020).
* van Setten _et al._ (2018) M. van Setten, M. Giantomassi, E. Bousquet, M. Verstraete, D. Hamann, X. Gonze, and G.-M. Rignanese, Comp. Phys. Commun. 226, 39 (2018).
* Mostofi _et al._ (2014) A. A. Mostofi, J. R. Yates, G. Pizzi, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comp. Phys. Commun. 185, 2309 (2014).
* Stauber _et al._ (2008) T. Stauber, N. M. R. Peres, and A. K. Geim, Phys. Rev. B 78, 085432 (2008).
* Yao _et al._ (2004) Y. Yao, L. Kleinman, A. H. MacDonald, J. Sinova, T. Jungwirth, D.-s. Wang, E. Wang, and Q. Niu, Phys. Rev. Lett. 92, 037204 (2004).
* Note (1) We restrict ourselves to the nondegenerate case here. The corresponding non-abelian expressions can be derived analogously Gradhand _et al._ (2012).
* Thouless _et al._ (1982) D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 49, 405 (1982).
* Gradhand _et al._ (2012) M. Gradhand, D. V. Fedorov, F. Pientka, P. Zahn, I. Mertig, and B. L. Györffy, J. Phys.: Condens. Matter 24, 213202 (2012).
* Fang _et al._ (2015) S. Fang, R. K. Defo, S. N. Shirodkar, S. Lieu, G. A. Tritsaris, and E. Kaxiras, Phys. Rev. B 92 (2015), 10.1103/PhysRevB.92.205108.
* Note (2) Inspecting the velocity matrix elements obtained from the Bloch states directly and from the TB models shows some qualitative discrepencies.
# Clan Embeddings into Trees, and Low Treewidth Graphs
Arnold Filtser The research was supported by the Simons Foundation. Columbia
University, [email protected] Hung Le The research was supported by the
start-up grant of UMass Amherst. University of Massachusetts at Amherst,
[email protected]
###### Abstract
In low distortion metric embeddings, the goal is to embed a host “hard” metric
space into a “simpler” target space while approximately preserving pairwise
distances. A highly desirable target space is that of a tree metric.
Unfortunately, such an embedding will result in a huge distortion. A celebrated
bypass to this problem is stochastic embedding with logarithmic expected
distortion. Another bypass is Ramsey-type embedding, where the distortion
guarantee applies only to a subset of the points. However, both these
solutions fail to provide an embedding into a single tree with a worst-case
distortion guarantee on all pairs. In this paper, we propose a novel third
bypass called _clan embedding_. Here each point $x$ is mapped to a subset of
points $f(x)$, called a _clan_ , with a special _chief_ point $\chi(x)\in
f(x)$. The clan embedding has multiplicative distortion $t$ if for every pair
$(x,y)$ some copy $y^{\prime}\in f(y)$ in the clan of $y$ is close to the
chief of $x$: $\min_{y^{\prime}\in f(y)}d(y^{\prime},\chi(x))\leq t\cdot
d(x,y)$. Our first result is a clan embedding into a tree with multiplicative
distortion $O(\frac{\log n}{\epsilon})$ such that each point has $1+\epsilon$
copies (in expectation). In addition, we provide a “spanning” version of this
theorem for graphs and use it to devise the first compact routing scheme with
constant size routing tables.
We then focus on minor-free graphs of diameter $D$, which were
known to be stochastically embeddable into bounded treewidth graphs with
expected additive distortion $\epsilon D$. We devise Ramsey-type embedding and
clan embedding analogs of the stochastic embedding. We use these embeddings to
construct the first (bicriteria quasi-polynomial time) approximation scheme
for the metric $\rho$-dominating set and metric $\rho$-independent set
problems in minor-free graphs.
###### Contents
1 Introduction
  1.1 Applications
  1.2 Paper Overview
  1.3 Related Work
2 Preliminaries
  2.1 Metric Embeddings
  2.2 Minor Structure Theorem
3 Clan embedding into an ultrametric (Theorem 1)
  3.1 Constructive Proof of Theorem 1
4 Clan Embedding into a Spanning Tree (Theorem 3)
  4.1 Petal Decomposition Framework
  4.2 Choosing a Radius
  4.3 Proof of Lemma 5: the distributional case
  4.4 Missing proofs from the create-petal procedure (Algorithm 3)
  4.5 Grand finale: proof of Theorem 3
5 Lower Bound for Clan Embeddings into Trees (Theorem 2)
6 Ramsey Type Embedding for Minor-Free Graphs (Theorem 4)
7 Clan Embedding for Minor-Free Graphs (Theorem 5)
8 Applications
  8.1 Metric $\rho$-Independent Set (Theorem 7)
  8.2 Metric $\rho$-Dominating Set (Theorem 8)
  8.3 Compact Routing Scheme (Theorem 6)
A Path Distortion of Clan embeddings into ultrametrics
B Local Search Algorithms
  B.1 Local search for $\rho$-dominating set under uniform measure
  B.2 Local search for $\rho$-independent set under uniform measure
## 1 Introduction
Low distortion metric embeddings provide a powerful algorithmic toolkit, with
applications ranging from approximation/sublinear/online/distributed
algorithms [LLR95, AMS99, BCL+18, KKM+12] to machine learning [GKK17], biology
[HBK+03], and vision [AS03]. Classically, we say that an embedding $f$ from a
metric space $(X,d_{X})$ to a metric space $(Y,d_{Y})$ has multiplicative
distortion $t$, if for every pair of points $u,v\in X$ it holds that
$d_{X}(u,v)\leq d_{Y}(f(u),f(v))\leq t\cdot d_{X}(u,v)$. Typical applications
of metric embeddings naturally have the following structures: take some
instance of a problem in a “hard” metric space $(X,d_{X})$; embed $X$ into a
“simple” metric space $(Y,d_{Y})$ via a low-distortion metric embedding $f$;
solve the problem in $Y$, and “pull-back” the solution in $X$. Thus, the
objectives are low distortion and “simple” target space.
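The classical definition above can be checked mechanically on finite metric spaces. The following sketch is our own illustration (function and variable names are ours, not the paper's):

```python
from itertools import combinations

def multiplicative_distortion(points, d_X, d_Y, f):
    """Smallest t with d_X(u,v) <= d_Y(f(u),f(v)) <= t*d_X(u,v) for all
    pairs, or None if the embedding is not dominating (non-contractive)."""
    t = 1.0
    for u, v in combinations(points, 2):
        if d_Y(f(u), f(v)) < d_X(u, v):
            return None  # some pair is contracted
        t = max(t, d_Y(f(u), f(v)) / d_X(u, v))
    return t

# The 4-cycle C_4 embedded into the line by cutting the edge {v_3, v_0}:
cycle = lambda u, v: min(abs(u - v), 4 - abs(u - v))  # cycle distance
line = lambda x, y: abs(x - y)                        # line distance
t = multiplicative_distortion(range(4), cycle, line, lambda v: v)
```

Here the pair $(0,3)$ has cycle distance $1$ but line distance $3$, so `t` comes out as $3$; every other pair is preserved exactly.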
Simple target spaces that immediately come to mind are Euclidean space and
tree metric, or, even better, an ultrametric. (An ultrametric is a metric
space satisfying a strong form of the triangle inequality:
$d(x,z)\leq\max\{d(x,y),d(y,z)\}$ for all $x,y,z$. Ultrametrics
embed isometrically into both Euclidean space [Lem03] and tree metrics. See
Definition 1.) In a celebrated result, Bourgain [Bou85] showed that every
$n$-point metric space embeds into Euclidean space with multiplicative
distortion $O(\log n)$ (which is tight [LLR95]). On the other hand, any
embedding of the $n$-vertex cycle graph $C_{n}$ into a tree metric will incur
multiplicative distortion $\Omega(n)$ [RR98]. Karp [Kar89] observed that
deleting a random edge from $C_{n}$ results in an embedding into a line with
expected distortion $2$ (see Figure 1(a)). This idea was developed by Bartal
[Bar96, Bar98] (improving over [AKPW95]), and culminating in the celebrated
work of Fakcharoenphol, Rao, and Talwar [FRT04] (see also [Bar04]) who showed
that every $n$-point metric space stochastically embeds into trees (actually
ultrametrics) with expected multiplicative distortion $O(\log n)$.
Specifically, there is a distribution $\mathcal{D}$ over dominating metric
embeddings (a metric embedding $f:X\rightarrow Y$ is dominating if $\forall
u,v\in X$, $d_{X}(u,v)\leq d_{Y}(f(u),f(v))$) into trees (ultrametrics), such
that $\forall u,v\in X$,
$\mathbb{E}_{(f,T)\sim\mathcal{D}}d_{T}(f(u),f(v))\leq O(\log n)\cdot
d_{X}(u,v)$. The $O(\log n)$ multiplicative distortion is known to be optimal
[Bar96]. Stochastic embeddings into trees are widely successful and have found
numerous applications (see e.g. [Ind01]).
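Karp's observation admits a short exact computation. The following sketch is our own (names are hypothetical): it averages the path distance of a pair over all $n$ choices of the deleted cycle edge.

```python
from fractions import Fraction

def expected_path_distortion(n, u, v):
    """Expected multiplicative distortion of the pair (u, v) of C_n when
    a uniformly random edge {i, (i+1) mod n} is deleted and distances
    are taken along the resulting path."""
    d_cycle = min(abs(u - v), n - abs(u - v))
    total = Fraction(0)
    for i in range(n):  # delete the edge between i and (i+1) % n
        # relabel vertices so the path starts right after the deleted edge
        pu, pv = (u - i - 1) % n, (v - i - 1) % n
        total += Fraction(abs(pu - pv), d_cycle)
    return total / n
```

For a neighbouring pair this evaluates to $\frac{n-1}{n}\cdot 1+\frac{1}{n}\cdot(n-1)=\frac{2n-2}{n}<2$, matching the calculation in Figure 1(a); in general a pair at cycle distance $d$ gets expected distortion $\frac{2(n-d)}{n}<2$.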
In many applications of metric embeddings, a worst-case distortion guarantee
is required. A different type of compromise (compared to expected distortion)
is provided by _Ramsey-type_ embeddings. The classical Ramsey problem for
metric spaces was introduced by Bourgain et al. [BFM86], and is concerned with
finding “nice” structures in arbitrary metric spaces. Following [BBM06,
BLMN05a], Mendel and Naor [MN07] showed that for every integer parameter
$k\geq 1$, every $n$-point metric $(X,d)$ has a subset $M\subseteq X$ of size
at least $n^{1-1/k}$ that embeds into a tree (ultrametric) with multiplicative
distortion $O(k)$ (see [NT12, BGS16, ACE+20] for improvements). In fact, the
embedding has multiplicative distortion $O(k)$ for any pair in $M\times X$. We
say that the vertices in $M$ are _satisfied_ (see Figure 1(b) for an
illustration). As a corollary, every $n$-point metric space $(X,d_{X})$ admits
a collection ${\cal T}$ of $k\cdot n^{1/k}$ dominating trees over $X$ and a
mapping $\mbox{\bf home}:X\to{\cal T}$, such that for every $x,y\in X$, it
holds that $d_{\mbox{\bf home}(x)}(x,y)\leq O(k)\cdot d_{X}(x,y)$. These are
called Ramsey trees, and they have found applications to online algorithms
[BBM06], approximate distance oracles [MN07, Che15], and routing [ACE+20].
Figure 1: Three different types of embeddings of the cycle graph $C_{n}$ into
a tree. (a) The left panel illustrates a stochastic embedding, created by
deleting an edge $\{v_{i},v_{i+1}\}$ uniformly at random. The expected
multiplicative distortion of a pair of neighboring vertices $v_{j},v_{j+1}$ is
$\mathbb{E}[d_{T}(v_{j},v_{j+1})]=\frac{n-1}{n}\cdot
1+\frac{1}{n}\cdot(n-1)=\frac{2n-2}{n}<2$. By the triangle inequality and
linearity of expectation, the expected multiplicative distortion of every pair
is $\leq 2$.
(b) The middle panel illustrates a Ramsey-type embedding: an arbitrary edge
$\{v_{i},v_{i+1}\}$ is deleted. The vertices in the subset $M$ (on the thick
red line), which constitutes a $(1-2\epsilon)$ fraction of the vertex set, are
satisfied; that is, they suffer a multiplicative distortion of at most
$\frac{1}{\epsilon}$ w.r.t. any other vertex.
(c) The right panel illustrates a clan embedding, where $i$ is chosen uniformly
at random. The chief of a vertex $v_{j}$ is denoted $\tilde{v}_{j}$. Each vertex
$v_{j}\in\{v_{i+1-\epsilon n},\dots,v_{i+\epsilon n}\}$ has an additional copy
$v^{\prime}_{j}$; thus the probability that a vertex has two copies is
$2\epsilon$, implying that $\mathbb{E}[|f(v_{a})|]=1+2\epsilon$. The
distortion is
$\min\{d(\tilde{v}_{a},\tilde{v}_{b}),d(v^{\prime}_{a},\tilde{v}_{b})\}\leq\frac{1}{\epsilon}\cdot
d_{C_{n}}(v_{a},v_{b})$.
##### A new type of embedding: clan embedding
Recall that our initial goal was to embed a general metric space into a
“simple” target space, specifically a tree metric. A drawback of both the
stochastic embedding and the Ramsey-type embedding is that the embeddings are
actually into a collection of trees rather than into a single one; thus the
target space is not as simple as one might desire. Each embedding type makes a
different type of compromise: the distortion guaranteed in stochastic
embedding is only in expectation, while in the Ramsey-type embedding, only a
subset of the vertices enjoys a bounded distortion guarantee. In this paper,
we propose a novel type of compromise, which we call _clan embedding_. Here we
will have a single embedding with a worst-case guarantee on all vertex pairs.
The caveat is that each vertex might be mapped to multiple copies. This
violates the classical paradigm of having a one-to-one relationship between
the source and target spaces. However, we obtain a map into a single tree with
a worst-case guarantee; this is beneficial and opens a new array of
possibilities.
A _one-to-many_ embedding $f:X\rightarrow 2^{Y}$ maps each point $x$ into a
subset $f(x)\subseteq Y$ called the _clan_ of $x$. Each vertex $x^{\prime}\in
f(x)$ is called a _copy_ of $x$ (see Definition 2). A clan embedding is a pair
$(f,\chi)$, where $f$ is a one-to-many embedding, and $\chi:X\rightarrow Y$
maps each vertex $x$ to a special vertex $\chi(x)\in f(x)$ called the _chief_.
Clan embeddings are _dominating_ , that is, for every $x,y\in X$, the distance
between every two copies is at least the original distance:
$\min_{x^{\prime}\in f(x),y^{\prime}\in f(y)}d_{Y}(x^{\prime},y^{\prime})\geq
d_{X}(x,y)$. $(f,\chi)$ has multiplicative distortion $t$, if for every
$x,y\in X$, some vertex in the clan of $x$ is close to the chief of $y$:
$\min_{x^{\prime}\in f(x)}d_{Y}(x^{\prime},\chi(y))\leq t\cdot d_{X}(x,y)$
(see Definition 3). See Figure 1(c) for an illustration.
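The construction of Figure 1(c) can be made concrete. The sketch below is our own toy version for one fixed cut edge (the names and the exact placement of copies are our assumptions, not the paper's construction from Theorem 1); it brute-forces the worst-case distortion and the average clan size.

```python
from itertools import permutations

def clan_embedding_of_cycle(n, w, i):
    """Cut edge {v_i, v_{i+1}} and lay the cycle on a line as
    v_{i+1}, ..., v_i (positions 0..n-1).  The w vertices nearest each
    side of the cut get an extra copy past the opposite end of the line.
    Returns (clans, chiefs): vertex -> copy positions, vertex -> chief."""
    chiefs = {(i + 1 + p) % n: p for p in range(n)}
    clans = {v: [chiefs[v]] for v in range(n)}
    for j in range(1, w + 1):
        clans[(i + j) % n].append(n - 1 + j)   # copies past the right end
        clans[(i + 1 - j) % n].append(-j)      # copies before the left end
    return clans, chiefs

def worst_distortion(n, clans, chiefs):
    """max over ordered pairs (a, b) of
       min_{a' in f(a)} |a' - chief(b)| / d_cycle(a, b)."""
    d_c = lambda a, b: min(abs(a - b), n - abs(a - b))
    return max(min(abs(c - chiefs[b]) for c in clans[a]) / d_c(a, b)
               for a, b in permutations(range(n), 2))

n, eps = 16, 0.25
clans, chiefs = clan_embedding_of_cycle(n, int(eps * n), i=0)
avg_clan = sum(map(len, clans.values())) / n   # 1 + 2*eps = 1.5
worst = worst_distortion(n, clans, chiefs)     # stays below 1/eps = 4
```

Pairs whose cycle geodesic avoids the cut are preserved exactly; pairs crossing the cut either reach a nearby copy at their true distance or pay at most $\frac{n-d}{d}\leq\frac{1}{\epsilon}$ via the chiefs.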
##### Clan embeddings into trees
One can easily construct an isometric clan embedding into a tree by allowing
$n$ copies for each vertex. On the other hand, with a single copy per vertex,
the clan embedding becomes a classic embedding, which requires a
multiplicative distortion of $\Omega(n)$. Our goal is to construct a low
distortion clan embedding, while keeping the number of copies each vertex has
as small as possible. To this end, we construct a distribution over clan
embeddings, where all the embeddings in the support have a worst-case
distortion guarantee; however, the expected number of copies each vertex has
is bounded by a constant arbitrarily close to $1$.
###### Theorem 1 (Clan embedding into ultrametric).
Given an $n$-point metric space $(X,d_{X})$ and parameter $\epsilon\in(0,1]$,
there is a uniform distribution $\mathcal{D}$ over $O(n\log n/\epsilon^{2})$
clan embeddings $(f,\chi)$ into ultrametrics with multiplicative distortion
$O(\frac{\log n}{\epsilon})$ such that for every point $x\in X$,
$\mathbb{E}_{f\sim\mathcal{D}}[|f(x)|]\leq 1+\epsilon$.
In addition, for every $k\in\mathbb{N}$, there is a uniform distribution
$\mathcal{D}$ over $O(n^{1+\frac{2}{k}}\log n)$ clan embeddings $(f,\chi)$
into ultrametrics with multiplicative distortion $16k$ such that for every
point $x\in X$, $\mathbb{E}_{f\sim\mathcal{D}}[|f(x)|]=O(n^{\frac{1}{k}})$.
We first show, via the minimax theorem, that there exists a distribution
$\mathcal{D}$ of clan embeddings with the stated distortion and expected clan size. We then
use the multiplicative weights update (MWU) method to explicitly construct a
uniform distribution $\mathcal{D}$ of polynomial support as specified by
Theorem 1.
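The existential step admits a compact game-theoretic phrasing; the following is our own sketch of the argument outlined above, not the paper's exact formulation. View the construction as a zero-sum game: the minimizing player picks a clan embedding $(f,\chi)$ of $X$ into an ultrametric with distortion $O(\frac{\log n}{\epsilon})$, the maximizing player picks a point $x\in X$, and the payoff is the clan size $|f(x)|$.

```latex
% Minimax view (sketch): exchanging the order of the players,
\min_{\mathcal{D}} \max_{x\in X} \mathbb{E}_{f\sim\mathcal{D}}\bigl[|f(x)|\bigr]
  \;=\; \max_{\mu} \min_{(f,\chi)} \mathbb{E}_{x\sim\mu}\bigl[|f(x)|\bigr],
% so it suffices to exhibit, for each fixed measure \mu over X, a single
% clan embedding with \mathbb{E}_{x\sim\mu}[|f(x)|] \le 1 + \epsilon.
```

The MWU method then sparsifies the resulting distribution into a uniform one of polynomial support, as stated in Theorem 1.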
Our clan embedding into ultrametric is asymptotically tight (up to a constant
factor in the distortion), and cannot be improved even if we embed into a
general tree (rather than to the much more restricted structure of an
ultrametric). Additionally, our lower bound implies that the ultra-sparse
spanner construction of Elkin and Neiman [EN19] is asymptotically tight.
(Elkin and Neiman [EN19] constructed a spanner with stretch $O(\frac{\log
n}{\epsilon})$ and $(1+\epsilon)n$ edges; see Remark 2 for further details.)
###### Theorem 2 (Lower bound for clan embedding into a tree).
For every fixed $\epsilon\in(0,1)$ and large enough $n$, there is an $n$-point
metric space $(X,d_{X})$ such that for every clan embedding $(f,\chi)$ of $X$
into a tree with multiplicative distortion $O(\frac{\log n}{\epsilon})$, it
holds that $\sum_{x\in X}|f(x)|\geq(1+\epsilon)n$.
Furthermore, for every $k\in\mathbb{N}$, there is an $n$-point metric space
$(X,d_{X})$ such that for every clan embedding $(f,\chi)$ of $X$ into a tree
with multiplicative distortion $O(k)$, it holds that $\sum_{x\in
X}|f(x)|\geq\Omega(n^{1+\frac{1}{k}})$.
Often, we are given a weighted graph $G=(V,E,w)$, and the goal is to embed the
shortest path metric of the graph $d_{G}$ into a tree $T$. However, if, for
example, one is required to construct a network while using only pre-existing
edges from $E$, it is desirable that the tree $T$ will be a subgraph of $G$,
called a spanning tree. Abraham and Neiman [AN19] (improving over [EEST08])
constructed a stochastic embedding of general graphs into spanning trees with
expected distortion $O(\log n\log\log n)$ (losing a $\log\log n$ factor
compared to general trees [FRT04]). Later, Abraham et al. [ACE+20] constructed
Ramsey spanning trees, showing that for every $k\in\mathbb{N}$, every graph
can be embedded into a spanning tree with a subset $M$ of at least
$n^{1-\frac{1}{k}}$ satisfied vertices which suffers a distortion at most
$O(k\log\log n)$ w.r.t. any other vertex (again losing a $\log\log n$ factor
compared to general trees). Here we provide a “spanning” analog of Theorem 1.
Similar to [AN19, ACE+20], we also lose a $\log\log n$ factor compared to
general trees (see the introduction to Section 4 for further discussion). In
particular, by Theorem 2, our spanning clan embedding is optimal up to second-
order terms. As an application, we construct the first compact routing scheme
with routing tables of constant size in expectation; see Section 1.1.1. We say
that a clan embedding $(f,\chi)$ of a graph $G$ into a graph $H$ is _spanning_
if $f(V(G))=V(H)$ (i.e., every vertex in $H$ is an image of a vertex in $G$)
and for every edge $\\{v^{\prime},u^{\prime}\\}\in E(H)$ where $v^{\prime}\in
f(v),u^{\prime}\in f(u)$, it holds that $\\{v,u\\}\in E(G)$ (see Definitions 2
and 3).
###### Theorem 3 (Spanning clan embedding into trees).
Given an $n$-vertex weighted graph $G=(V,E,w)$ and parameter
$\epsilon\in(0,1]$, there is a distribution $\mathcal{D}$ over spanning clan
embeddings $(f,\chi)$ into trees with multiplicative distortion $O(\frac{\log
n\log\log n}{\epsilon})$ such that for every vertex $v\in V$,
$\mathbb{E}_{f\sim\mathcal{D}}[|f(v)|]\leq 1+\epsilon$.
In addition, for every $k\in\mathbb{N}$, there is a distribution $\mathcal{D}$
over spanning clan embeddings $(f,\chi)$ into trees with multiplicative
distortion $O(k\log\log n)$, where for every vertex $v\in V$,
$\mathbb{E}_{f\sim\mathcal{D}}[|f(v)|]=O(n^{\frac{1}{k}})$.
##### Clan embedding of minor-free graphs into bounded treewidth graphs
As [Bou85] and [FRT04] are tight, a natural question arises: by embedding from
a simpler space (than general $n$-point metric space) into a richer space
(than trees), could the distortion be reduced? The family of low-treewidth
graphs is an excellent candidate for a target space: it is a much more
expressive target space than trees, while many hard problems remain tractable.
Unfortunately, by the work of Chakrabarti et al. [CJLV08] (see also [CG04]),
there are $n$-vertex planar graphs such that every (stochastic) embedding into
$o(\sqrt{n})$-treewidth graphs must incur expected multiplicative distortion
$\Omega(\log n)$. Bypassing this roadblock, Fox-Epstein et al. [FKS19]
(improving over [EKM14]), showed how to embed planar metrics into bounded
treewidth graphs while incurring only a small _additive_ distortion.
Specifically, given a planar graph $G$ and a parameter $\epsilon$, they
constructed a deterministic dominating embedding $f$ into a graph $H$ of
treewidth $\mathrm{poly}(\frac{1}{\epsilon})$, such that $\forall u,v\in G$,
$d_{H}(f(u),f(v))\leq d_{G}(u,v)+\epsilon D$, where $D$ is the diameter of
$G$. While $\epsilon D$ looks like a crude additive bound, it suffices to
obtain approximation schemes for several classic problems: $k$-center, vehicle
routing, metric $\rho$-dominating set, and metric $\rho$-independent set.
Following the success in planar graphs, Cohen-Addad et al. [CFKL20] wanted to
generalize to minor-free graphs. Unfortunately, they showed that already
obtaining additive distortion $\frac{1}{20}D$ for $K_{6}$-minor-free graphs
requires the host graph to have treewidth $\Omega(\sqrt{n})$. Inspired by the
case of trees, [CFKL20] bypass this barrier by constructing a stochastic
embedding from $K_{r}$-minor-free $n$-vertex graphs into a distribution over
treewidth-$O_{r}(\frac{\log n}{\epsilon^{2}})$ graphs with expected additive
distortion $\epsilon D$ (the notation $O_{r}$ hides some function depending
only on $r$; that is, there is some function
$\chi:\mathbb{N}\rightarrow\mathbb{N}$ such that $O_{r}(x)\leq\chi(r)\cdot
x$); that is, $\forall u,v\in G$,
$\mathbb{E}_{(f,H)\sim\mathcal{D}}[d_{H}(f(u),f(v))]\leq d_{G}(u,v)+\epsilon
D$. Similar to the case in planar graphs, Cohen-Addad et al. [CFKL20] used
their embedding to construct an approximation scheme for the capacitated
vehicle routing problem in $K_{r}$-minor-free graphs. However, due to the
stochastic nature of the embedding, it was not strong enough to imply any
results for the metric $\rho$-dominating/independent problems in minor-free
graphs, which, prior to our work, remain wide open.
In this paper, similar to the case of trees, we construct Ramsey-type and clan
embedding analogs to the stochastic embedding of [CFKL20]. Our Ramsey-type
embedding bypasses the lower bound of $\Omega(\sqrt{n})$ from [CFKL20] while
guaranteeing a worst-case distortion (for a large random subset of vertices).
As an application, we obtain a bicriteria quasi-polynomial time approximation
scheme (QPTAS) for the metric
$\rho$-independent set problem in minor-free graphs (see Section 1.1.2).
###### Theorem 4 (Ramsey-type embedding for minor-free graphs).
Given an $n$-vertex $K_{r}$-minor-free graph $G=(V,E,w)$ with diameter $D$ and
parameters ${\epsilon\in(0,\frac{1}{4})}$, $\delta\in(0,1)$, there is a
distribution over dominating embeddings $g:G\rightarrow H$ into graphs of
treewidth $O_{r}(\frac{\log^{2}n}{\epsilon\delta})$, such that there is a
subset $M\subseteq V$ of vertices for which the following claims hold:
1. 1.
For every $u\in V$, $\Pr[u\in M]\geq 1-\delta$.
2. 2.
For every $u\in M$ and $v\in V$, $d_{H}(g(u),g(v))\leq d_{G}(u,v)+\epsilon D$.
By setting $\delta=\frac{1}{2}$ and repeating $\log n$ times, a
straightforward corollary is the following.
###### Corollary 1.
Given a $K_{r}$-minor-free $n$-vertex graph $G=(V,E,w)$ with diameter $D$ and
parameter $\epsilon\in(0,\frac{1}{4})$, there are $\log n$ dominating
embeddings $g_{1},\dots,g_{\log n}$ into graphs of treewidth
$O_{r}(\frac{\log^{2}n}{\epsilon})$, such that for every vertex $v$, there is
some embedding $g_{i_{v}}$, such that
$\forall u\in V,\qquad d_{H_{i_{v}}}(g_{i_{v}}(u),g_{i_{v}}(v))\leq
d_{G}(u,v)+\epsilon D.$
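The amplification behind the corollary can be checked numerically. A minimal sketch (our own; it assumes independent draws from the Theorem 4 distribution, whereas the corollary itself is an existential statement):

```python
import math

def failure_bound(n, delta=0.5, trials=None):
    """Upper bound, by independence, on the probability that a fixed
    vertex lies outside the satisfied set M in every one of `trials`
    independent draws (each draw fails with probability at most delta)."""
    if trials is None:
        trials = math.ceil(math.log2(n))
    return delta ** trials
```

With $\delta=\frac{1}{2}$ and $\log_2 n$ repetitions, each vertex is unsatisfied in all draws with probability at most $\frac{1}{n}$, so in expectation every vertex finds some embedding $g_{i_{v}}$ in which it is satisfied.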
While Ramsey-type embedding is sufficient for the metric $\rho$-independent
set problem (as we can restrict our search to independent sets in $M$), we
cannot use it for the metric $\rho$-dominating set problem (as every good
solution might contain vertices outside $M$). To resolve this issue, we
construct a clan embedding of minor-free graphs into bounded treewidth graphs.
As we have a worst-case distortion guarantee for all vertex pairs, we obtain a
QPTAS for the metric $\rho$-dominating
set problem in minor-free graphs (see Section 1.1.2).
###### Theorem 5 (Clan embedding for minor-free graphs).
Given a $K_{r}$-minor-free $n$-vertex graph $G=(V,E,w)$ of diameter $D$ and
parameters $\epsilon\in(0,\frac{1}{4})$, $\delta\in(0,1)$, there is a
distribution $\mathcal{D}$ over clan embeddings $(f,\chi)$ with additive
distortion $\epsilon D$ into graphs of treewidth
$O_{r}(\frac{\log^{2}n}{\delta\epsilon})$ such that for every $v\in V$,
$\mathbb{E}[|f(v)|]\leq 1+\delta$.
### 1.1 Applications
#### 1.1.1 Compact Routing Scheme
A _routing scheme_ in a network is a mechanism that allows packets to be
delivered from any node to any other node. The network is represented as a
weighted undirected graph, and each node can forward incoming data by using
local information stored at the node, called a _routing table_ , and the
(short) packet’s _header_. The routing scheme has two main phases: in the
preprocessing phase, each node is assigned a routing table and a short _label_
; in the routing phase, when a node receives a packet, it should make a local
decision, based on its own routing table and the packet’s header (which may
contain the label of the destination, or a part of it), of where to send the
packet. The stretch of a routing scheme is the worst-case ratio between the
length of a path on which a packet is routed to the shortest possible path.
Compact routing schemes were extensively studied [PU89, ABLP90, AP92, Cow01,
EGP03, TZ01, Che13, ACE+20], starting with Peleg and Upfal [PU89]. Using
$\tilde{O}(n^{\frac{1}{k}})$ table size, Awerbuch et al. [ABLP90] obtained
stretch $O(k^{2}9^{k})$, which was improved later to $O(k^{2})$ by Awerbuch
and Peleg [AP92]. In their celebrated compact routing scheme, Thorup and Zwick
[TZ01] obtained stretch $4k-5$ while using $O(k\cdot n^{1/k})$ size tables and
labels of size $O(k\log n)$. (Unless stated otherwise, we measure space in
machine words; each word is $\Theta(\log n)$ bits.) The stretch was improved to
roughly $3.68k$ by Chechik [Che13], using a scheme similar to [TZ01] (while
keeping all other parameters intact). Recently, Abraham et al. [ACE+20] devised
a compact routing scheme (using Ramsey spanning trees) with labels of size
$O(\log n)$, tables of size $O(k\cdot n^{1/k})$, and stretch $O(k\log\log n)$.
In all previous works, the guarantees on the table size are worst case. That
is, the table size of every node in the network is bounded by a certain
parameter. Here our guarantee is only in expectation. Note that such an
expected guarantee makes a lot of sense for a central planner constructing a
routing scheme for a network where the goal is to minimize the total amount of
resources rather than the maximal amount of resources in a single spot. Even
though previous works analyzed worst-case guarantees, if one analyzes
their bounds in expectation per vertex, the guarantees do not improve. Our
contribution is the following:
Routing scheme | Stretch | Label | Table
---|---|---|---
1. [TZ01] | $4k-5$ | $O(k\log n)$ | $O(kn^{1/k})$
2. [Che13] | $3.68k$ | $O(k\log n)$ | $O(kn^{1/k})$
3. [ACE+20] | $O(k\log\log n)$ | $O(\log n)$ | $O(kn^{1/k})$
4. Thm. 6 | $O(k\log\log n)$ | $O(\log n)$ | $O(n^{1/k})^{(*)}$
5. [TZ01] | $O(\log n)$ | $O(\log^{2}n)$ | $O(\log n)$
6. [Che13] | $O(\log n)$ | $O(\log^{2}n)$ | $O(\log n)$
7. [ACE+20] | $O(\log n)$ | $O(\log n)$ | $O(\log^{2}n)$
8. [ACE+20] | $\widetilde{O}(\log n)$ | $O(\log n)$ | $O(\log n)$
9. Thm. 6 | $O(\log n)$ | $O(\log n)$ | $O(\log n)^{(*)}$
10. Thm. 6 | $\widetilde{O}(\log n)$ | $O(\log n)$ | $\boldsymbol{O(1)}^{(*)}$
Table 1: The table compares various routing schemes for $n$-vertex graphs. In
rows 1-4, we compare different schemes in their full generality, here $k$ is
an integer parameter. In rows 5,6,8,10, we fix $k=\log n$, while in rows 7 and
9, we fix $k=\frac{\log n}{\log\log n}$. Note that our result in line 9 is
superior to all previous results: it has reduced label size compared to lines
5-6, reduced table size compared to line 7, and reduced stretch compared to
line 8. Our result in line 10 is the first to obtain a constant table size.
The sizes of the table and label are measured in words, each word is $O(\log
n)$ bits. The header size is asymptotically equal to the label size in all the
compared routing schemes. The main caveat is that, while in all previous
results the table size is analyzed w.r.t. a worst-case guarantee, we only
provide bounds in expectation (marked by (*)). The label size (as well as the
stretch) is a worst-case guarantee in our work as well.
###### Theorem 6 (Compact routing scheme).
Given a weighted graph $G=(V,E,w)$ on $n$ vertices and integer parameter
$k>1$, there is a compact routing scheme with stretch $O(k\log\log n)$ that
has (worst-case) labels (and headers) of size $O(\log n)$, and the expected
size of the routing table of each vertex is $O(n^{1/k})$.
See Table 1 for comparison of our and previous results. We mainly focus on the
very compact regime where all the parameters are at most poly-logarithmic. A
key result in [TZ01] is a stretch $1$ routing scheme for the special case of a
tree, where a routing table has constant size, and logarithmic label size (see
Theorem 13). All the previous works are based on constructing a collection of
trees. Specifically, in [TZ01, Che13], there are $n$ trees, where each vertex
belongs to $O(\log n)$ trees, and for each pair of nodes, there is a tree that
guarantees a small stretch. Routing is then done in that tree. This is the
reason for their large label size of $\log^{2}n$ (as a label consists of $\log
n$ labels in different trees). [ACE+20] constructs $\log n$ (Ramsey spanning)
trees in total, where each vertex $v$ has a home tree $T_{v}$, such that $v$
enjoys a small stretch w.r.t. any other vertex in $T_{v}$. The label then
consists of the name of $T_{v}$ and the label of $v$ in $T_{v}$. However, the
routing table is still somewhat large as one needs to store the routing
information in $\log n$ different trees.
In contrast, our construction is based on the spanning clan embedding
$(f,\chi)$ of Theorem 3 into a single tree $T$, where the clan of each vertex
consists of $O(1)$ copies (in expectation). The label of each vertex $v$ is
simply the label of $\chi(v)$ in $T$. The routing table of $v$ contains the
routing tables of all the corresponding copies in $f(v)$.
#### 1.1.2 Metric Baker Problems in Minor-free graphs
Baker [Bak94] introduced a “layering” technique in order to construct
efficient polynomial time approximation schemes (EPTAS; a polynomial time
approximation scheme (PTAS) is an algorithm that, for any fixed
$\epsilon\in(0,1)$, provides a $(1+\epsilon)$-approximation in polynomial
time; a PTAS is an _efficient_ polynomial time approximation scheme (EPTAS) if
its running time is of the form $n^{O(1)}\cdot f(\epsilon)$ for some function
$f(\cdot)$ depending on $\epsilon$ only; a quasi-polynomial time approximation
scheme (QPTAS) has running time $2^{\mathrm{polylog}(n)}$ for every fixed
$\epsilon$) for many “local” problems in planar graphs such as minimum-measure
_dominating set_ and maximum-measure _independent set_. The key observation is
that planar graphs have the “bounded local treewidth” property. Baker showed
that for some problems solvable on bounded treewidth graphs, one can construct
efficient approximation schemes for graphs possessing the bounded local
treewidth property. This approach was generalized by Demaine et al. [DHK05] to
minor-free graphs.
Eisenstat et al. [EKM14] proposed metric generalizations of Baker problems:
minimum measure $\rho$_-dominating set_ , and maximum measure
$\rho$_-independent set_. Given a metric space $(X,d_{X})$, a
$\rho$-independent set is a subset $S\subseteq X$ of points such that for
every $x,y\in S$, $d_{X}(x,y)>\rho$. Similarly, a $\rho$-dominating set is a
subset $S\subseteq X$ such that for every $x\in X$, there exists $y\in S$,
such that $d_{X}(x,y)\leq\rho$. Given a measure
$\mu:X\rightarrow\mathbb{R}_{+}$, the goal of the metric $\rho$-dominating
(resp. independent) set problem is to find a $\rho$-dominating (resp.
independent) set of minimum (resp. maximum) measure. It is often the case that
metric Baker problems are much easier under the uniform measure. Sometimes, in
addition, we are given a set of terminals ${\cal K}\subseteq X$, and only the
terminals are required to be dominated ($\forall x\in{\cal K}$, $\exists y\in
S$ s.t. $d_{X}(x,y)\leq\rho$). Note that the metric generalization of Baker
problems in structured graphs (e.g. planar) is considerably harder than the
non-metric problems. This is because the graph describing the
dominance/independence relations no longer possesses the original structure
(e.g. planarity).
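Both definitions can be checked by brute force on small instances; a minimal sketch with names of our own choosing:

```python
from itertools import combinations

def is_rho_independent(S, d, rho):
    """Every two points of S are at distance strictly more than rho."""
    return all(d(x, y) > rho for x, y in combinations(S, 2))

def is_rho_dominating(S, X, d, rho, terminals=None):
    """Every point of X (or every terminal, if a terminal set is given)
    has some point of S within distance rho."""
    targets = X if terminals is None else terminals
    return all(any(d(x, y) <= rho for y in S) for x in targets)

# Toy metric: five points on a line with d(x, y) = |x - y|.
X = [0, 1, 2, 3, 4]
d = lambda x, y: abs(x - y)
# {0, 2, 4} is 1-dominating (points 1 and 3 are within 1 of it) and
# 1-independent (pairwise gaps are 2 > 1); {0, 4} leaves point 2 undominated.
```

The optimization versions then ask for a minimum-measure $\rho$-dominating set and a maximum-measure $\rho$-independent set over such feasible sets.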
An approximation scheme for the $\rho$-dominating (resp. independent) set
problem returns a $\rho$-dominating (resp. independent) set $S$ such that for
every $\rho$-dominating (resp. independent) set $S^{\prime}$ it holds that
$\mu(S)\leq(1+\epsilon)\mu(S^{\prime})$ (resp.
$\mu(S)\geq(1-\epsilon)\mu(S^{\prime})$). A bicriteria approximation scheme
for the $\rho$-dominating (resp. independent) set problem returns a
$(1+\epsilon)\rho$-dominating (resp. $(1-\epsilon)\rho$-independent) set $S$
such that for every $\rho$-dominating (resp. independent) set $S^{\prime}$ it
holds that $\mu(S)\leq(1+\epsilon)\mu(S^{\prime})$ (resp.
$\mu(S)\geq(1-\epsilon)\mu(S^{\prime})$).
For unweighted graphs with treewidth $\mathrm{tw}$, Borradaile and Le [BL16]
provided an exact algorithm for the $\rho$-dominating set problem with
$O((2\rho+1)^{\mathrm{tw}+1}n)$ running time (see also [DFHT05]). For general
treewidth $\mathrm{tw}$ graphs, using dynamic programming technique,
Katsikarelis et al. [KLP19] designed a fixed parameter tractable (FPT)
approximation algorithm for the metric $\rho$-dominating set problem with
$(\mathrm{tw}/\epsilon)^{O(\mathrm{tw})}\cdot\mathrm{poly}(n)$ runtime that
returns a $(1+\epsilon)\rho$-dominating set $S$, such that for every
$\rho$-dominating set $S^{\prime}$ it holds that $\mu(S)\leq\mu(S^{\prime})$.
A similar result was also obtained for the metric $\rho$-independent set
problem [KLP20]. In particular, for the very basic case of bounded treewidth
graphs, no true approximation scheme (even with quasi-polynomial time) is
known for these problems. Additional evidence was provided by Marx and
Pilipczuk [MP15] (see also [FKS19]), who showed that the existence of an
EPTAS for either
$\rho$-dominating/independent set problem in planar graphs would refute the
exponential-time hypothesis (ETH). Given this evidence, it is natural to
settle for bicriteria approximation.
For unweighted planar graphs and constant $\rho$, there are linear time
approximation schemes (not bicriteria) for the metric
$\rho$-independent/dominating set problems [EILM16, DFHT05]. In weighted
planar graphs, under the uniform measure, Marx and Pilipczuk [MP15] gave an
exact $n^{O(\sqrt{k})}$-time algorithm for both metric
$\rho$-dominating/independent set problems, provided that the solution is
guaranteed to be of size at most $k$.
Using their embedding of planar graphs into $\epsilon^{-O(1)}\log n$-treewidth
graphs with additive distortion $\epsilon D$, Eisenstat et al. [EKM14]
provided a bicriteria PTAS for both
metric $\rho$-independent/dominating set problems in planar graphs. Later, by
constructing an improved embedding into $\epsilon^{-O(1)}$-treewidth graphs,
Fox-Epstein et al. [FKS19] obtained a bicriteria
EPTAS.
Reference | Family | Result | Technique
---|---|---|---
1. [MP15] | planar | No EPTAS under ETH |
2. [KLP19, KLP20] | treewidth | FPT with approx. $(1+\epsilon)\rho$ | Dynamic programming
3. [EKM14] | planar | Bicriteria PTAS | Deterministic embedding
4. [FKS19] | planar | Bicriteria EPTAS | Deterministic embedding
5. Theorems 17&18 | minor-free | PTAS (uniform measure) | Local search
6. Theorems 7&8 | minor-free | Bicriteria QPTAS | Clan/Ramsey type embedding
Table 2: The table compares different approximation schemes for metric Baker
problems on weighted graphs. All compared results apply to both metric
$\rho$-dominating/independent set problems. All the results (other than in
line 5) apply to the general measure case.
Finally, we turn to the most challenging case of minor-free graphs. For the
restricted uniform measure case, using local search (similarly to [CKM19]), we
construct a PTAS for both metric $\rho$-dominating/independent set problems.
See Theorems 17 and 18 in Appendix B for details. However, the local search
approach seems to be hopeless for general measures. Alternatively, one can try
the metric embedding approach (for which bicriteria approximation is
unavoidable). Unfortunately, unlike the deterministic embeddings of [EKM14,
FKS19], Cohen-Addad et al. [CFKL20] provided only a stochastic embedding with
an _expected distortion_ guarantee. Such a stochastic guarantee is not strong
enough to construct approximation schemes for the metric
$\rho$-independent/dominating set problems. Using our clan and Ramsey-type
embeddings, we are able to provide the first bicriteria QPTAS for these
problems. See Table 2 for a summary of previous and current results.
###### Theorem 7 (Metric $\rho$-independent set).
There is a bicriteria quasi-polynomial time approximation scheme (QPTAS) for
the metric $\rho$-independent set problem in $K_{r}$-minor-free graphs.
Specifically, given a weighted $n$-vertex $K_{r}$-minor-free graph
$G=(V,E,w)$, measure $\mu:V\rightarrow\mathbb{R}_{+}$ and parameters
$\epsilon\in(0,\frac{1}{4})$, $\rho>0$, in
$2^{\tilde{O}_{r}(\frac{\log^{2}n}{\epsilon^{2}})}$ time, one can find a
$(1-\epsilon)\rho$-independent set $S\subseteq V$ such that for every
$\rho$-independent set $\tilde{S}$, $\mu(S)\geq(1-\epsilon)\mu(\tilde{S})$.
###### Theorem 8 (Metric $\rho$-dominating set).
There is a bicriteria quasi-polynomial time approximation scheme (QPTAS) for
the metric $\rho$-dominating set problem in $K_{r}$-minor-free graphs.
Specifically, given a weighted $n$-vertex $K_{r}$-minor-free graph
$G=(V,E,w)$, measure $\mu:V\rightarrow\mathbb{R}_{+}$, a subset of terminals
${\cal K}\subseteq V$, and parameters $\epsilon\in(0,\frac{1}{4})$, $\rho>0$,
in $2^{\tilde{O}_{r}(\frac{\log^{2}n}{\epsilon^{2}})}$ time, one can find a
$(1+\epsilon)\rho$-dominating set $S\subseteq V$ for ${\cal K}$ such that for
every $\rho$-dominating set $\tilde{S}$ of ${\cal K}$ ,
$\mu(S)\leq(1+\epsilon)\mu(\tilde{S})$.
### 1.2 Paper Overview
This overview uses terminology introduced in the preliminaries (Section 2).
##### Clan embedding into ultrametric
The main task is to prove a “distributional” version of Theorem 1.
Specifically, given a parameter $k$, and a measure
$\mu:X\rightarrow\mathbb{R}_{\geq 1}$, we construct a clan embedding with
distortion $16k$ such that $\sum_{x\in
X}\mu(x)\cdot|f(x)|\leq\mu(X)^{1+\frac{1}{k}}$, where $\mu(X)=\sum_{x\in
X}\mu(x)$ (Lemma 2). We show that the distributional version implies Theorem 1
by using the minimax theorem.
The algorithm to construct the distributional version is a deterministic
recursive ball growing algorithm, which is somewhat similar to previous
deterministic algorithms constructing Ramsey trees [Bar11, ACE+20]. Let $D$ be
the diameter of the metric space. We grow a ball $B(v,R)$ around a point $v$
and partition the space into two clusters: the interior $B(v,R+\frac{D}{16k})$
and exterior $X\setminus B(v,R-\frac{D}{16k})$ of the ball, while points at
distance $\frac{D}{16k}$ from the boundary of the ball belong to both
clusters. We then recursively create a clan embedding into ultrametrics for
each of the two clusters. These two embeddings are later combined into a
single ultrametric where the root has label $D$. See Figure 2 for an
illustration. The $16k$ distortion guarantee follows from the wide “belt”
around the boundary of the ball belonging to both clusters. Note that the
images of vertices in this “belt” contain copies in the clan embeddings of
both clusters, while “non-belt” points have copies in a single embedding only.
Each of the two clusters, however, has cardinality strictly smaller than $|X|$. The key is to
carve the partition while guaranteeing that the relative measure of points
belonging to both clusters will be small compared to the reduction in
cardinality.
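The recursive ball-growing scheme above can be sketched in code. The following toy Python sketch (function name and parameter choices are ours, not the paper's) is illustrative only: it assumes all points are distinct, and the center and radius below are naive stand-ins for the measure-driven choices of Claim 1 that control the total measure of the belt. It does show the essential mechanism: belt points are duplicated into both recursive clan embeddings.

```python
def clan_ultrametric(points, dist, k):
    """Toy sketch of the recursive ball-growing construction.

    Returns ("leaf", x) or ("node", label, [left, right]); labels
    decrease along root-leaf paths, so the result is an ultrametric.
    Assumes all points are distinct. The center and radius here are
    naive stand-ins for the measure-driven choices of the paper.
    """
    if len(points) == 1:
        return ("leaf", points[0])
    D = max(dist(u, v) for u in points for v in points)  # diameter
    belt = D / (16 * k)
    v, R = points[0], D / 4.0        # simplified center/radius choice
    # Interior and exterior overlap in a "belt" around the ball's
    # boundary; belt points get copies in BOTH recursive embeddings.
    interior = [x for x in points if dist(v, x) <= R + belt]
    exterior = [x for x in points if dist(v, x) > R - belt]
    return ("node", D,
            [clan_ultrametric(interior, dist, k),
             clan_ultrametric(exterior, dist, k)])
```

Both recursive calls act on strictly smaller point sets (a far point is missing from the interior, the center from the exterior), so the recursion terminates; points in the belt end up with multiple copies, exactly as in the text.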
##### Spanning clan embedding into trees
In Theorem 3, the spanning version, we try to imitate the approach of Theorem
1. However, we cannot simply carve balls and continue recursively. The reason
is that the diameter of a cluster could grow unboundedly after deleting some
vertices. In particular, there is no clear upper bound on the distance between
separated points.
To imitate the ball growing approach nonetheless, we use the petal-
decomposition framework that was previously applied to create stochastic
embedding into spanning trees [AN19], and Ramsey spanning trees [ACE+20]. The
petal decomposition framework enables one to iteratively construct a spanning
tree for a given graph. In each level, the current cluster is partitioned into
smaller diameter pieces (called _petals_), which have properties resembling
balls. The algorithm continues recursively on the petals. Later, the petals
are connected back to create a spanning tree. The key property is that while
creating a petal, we have a certain degree of freedom to choose its “radius”,
which enables us to use the ball growing approach from above. Crucially, the
framework guarantees that for every choice of radii (within the specified
limits), the diameter of the resulting tree will be only constant times larger
than that of the original graph. However, the petal decomposition framework
does not provide us with the freedom to choose the center of the petal. This
makes the task of controlling the number of copies more subtle.
##### Lower bound for clan embedding into a tree
We provide here a proof sketch for the first assertion in Theorem 2. We begin
by constructing an $n$-vertex graph $G=(V,E)$ with $(1+\epsilon)n$ edges and
girth $g=\Omega(\frac{\log n}{\epsilon})$; the girth is the length of the
shortest cycle. Consider an arbitrary clan embedding of $G$ into a tree $T$
with distortion $\frac{g}{c}=O(\frac{\log n}{\epsilon})$ (for some constant
$c$) and $\kappa$ copies overall. We create a new graph $H$ by merging all the
copies of each vertex into a single vertex. There is a naturally defined
classic embedding from $G$ to $H$ with distortion $\leq\frac{g}{c}$. The Euler
characteristic of the graph $G$ equals $\chi(G)=|E|-|V|+1=\epsilon n+1$, while
the Euler characteristic of $H$ is at most $\chi(H)\leq\kappa-n$. On the other
hand, Rabinovich and Raz [RR98] showed that, if an embedding from a girth-$g$ graph
$G$ has distortion $\leq\frac{g}{c}$, the host graph must have the Euler
characteristic at least as large as that of $G$. Thus, we conclude that
$\kappa\geq(1+\epsilon)n+1$ as required.
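For completeness, the bound $\chi(H)\leq\kappa-n$ holds because $H$ has $n$ vertices and at most $\kappa-1$ edges (each edge of $H$ comes from an edge of the tree $T$, which has $\kappa$ vertices). The final counting step is then the single chain

```latex
\kappa - n \;\geq\; \chi(H) \;\geq\; \chi(G)
           \;=\; |E| - |V| + 1
           \;=\; (1+\epsilon)n - n + 1
           \;=\; \epsilon n + 1,
```

and rearranging gives $\kappa\geq(1+\epsilon)n+1$.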
##### Ramsey type embedding for minor-free graphs
The structure theorem of Robertson and Seymour [RS03] states that every minor-
free graph can be decomposed into a collection of graphs embedded on the
surface of constant genus (with some vortices and apices), glued together into
a tree structure by taking clique-sums. The stochastic embedding of minor free
graphs into a distribution over bounded treewidth graphs by Cohen-Addad et al.
[CFKL20] was constructed according to the layers of the structure theorem.
First, they constructed an embedding for a planar graph with a single vortex.
Then, they generalized it to planar graphs with multiple vortices,
subsequently to graphs embedded on the surface of constant genus with multiple
vortices, and to surface embeddable graphs with multiple vortices and apices.
Finally, they incorporated cliques-sums and generalized to minor-free graphs.
Most crucially for this paper, the only step requiring randomness was the
incorporation of apices. Specifically, [CFKL20] constructed a deterministic
embedding for graphs embedded on the surface of constant genus with multiple
vortices. This is the starting point of our embeddings.
Our first step is to incorporate apices; however, instead of guaranteeing that
the distance of each pair is distorted by $\epsilon D$ in expectation, we will
show that each vertex with probability $1-\delta$ enjoys a small distortion
w.r.t. any other vertex. We begin by deleting all the apices $\Psi$ and
obtaining a surface embeddable graph with multiple vortices
$G^{\prime}=G[V\setminus\Psi]$. However, the diameter of the resulting graph
is essentially unbounded. Pick an arbitrary vertex $r$, and partition
$G^{\prime}$ into layers of width $O(\frac{D}{\delta})$ w.r.t. distances from
$r$ with a random shift. (Footnote: alternatively, one could use here a strong
padded decomposition [Fil19] (as in [CFKL20]) into clusters of diameter
$O_{r}(\frac{D}{\delta})$ such that each radius-$D$ ball is fully contained in
a single cluster with probability $1-\delta$. However, this approach will not
work for our clan embedding, as there is no bound on the number of copies we
would need for failed vertices. We use the layering approach for Theorem 4 as
well, to keep the proofs of Theorems 4 and 5 similar.) It follows that every
vertex $v$ is $2D$-padded (that is, the ball $B(v,2D)$ is fully contained in a
single layer) with probability $1-\delta$. The set $M$ of _satisfied_ vertices
is defined to be the set of all $D$-padded vertices. We then use the
deterministic embedding from [CFKL20] on every layer with distortion parameter
$\epsilon^{\prime}=\Theta(\epsilon\delta)$ to incur additive distortion
$\epsilon D$. Finally, we combine all these embeddings together into a single
embedding, which also contains the apices.
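The random-shift layering step can be sketched concretely. In the sketch below (our own illustration, not the paper's construction), the layer width $W=4D/\delta$ is one assumed choice consistent with the $O(\frac{D}{\delta})$ bound; under it, a vertex is $2D$-padded exactly when its distance interval avoids a layer boundary, which happens with probability exactly $1-\delta$.

```python
import math
import random

def layering(dists, D, delta, rng):
    """Random-shift layering sketch.

    dists: dict vertex -> distance from the root r.
    With width W = 4*D/delta (an illustrative choice), a vertex v is
    2D-padded iff the interval [d(v)-2D, d(v)+2D] lies in one layer;
    a uniform shift makes this happen with probability 1 - delta.
    """
    W = 4 * D / delta
    shift = rng.uniform(0, W)
    layer = {v: math.floor((d + shift) / W) for v, d in dists.items()}
    padded = {v for v, d in dists.items()
              if math.floor((d - 2 * D + shift) / W)
              == math.floor((d + 2 * D + shift) / W)}
    return layer, padded
```

Running many independent shifts confirms the padding probability empirically: with $\delta=\frac{1}{2}$, roughly half the shifts leave a given vertex padded.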
The next step is to incorporate clique-sums. This is done recursively w.r.t.
the clique-sum decomposition tree $\mathbb{T}$. In each step, we pick a
central piece $\tilde{G}\in\mathbb{T}$ such that
$\mathbb{T}\setminus\tilde{G}$ breaks into connected components
$\mathbb{T}_{1},\mathbb{T}_{2},\dots$, where each $\mathbb{T}_{i}$ contains at
most $|\mathbb{T}|/2$ pieces. We construct a Ramsey-type embedding for
$\tilde{G}$ using the lemma above and obtain a set $\tilde{M}$ of satisfied
vertices. Recursively, we construct a Ramsey-type embedding for each
$\mathbb{T}_{i}$ and obtain a set $M_{i}$ of satisfied vertices. We ensure
that all these embeddings are clique-preserving. Thus, even though we will
eventually obtain a one-to-one embedding, during the process we keep the
embeddings _one-to-many and clique-preserving_. This provides us with a
natural way to combine all the embeddings of
$\tilde{G},\mathbb{T}_{1},\mathbb{T}_{2},\dots$ into a single embedding into a
graph of bounded treewidth (by identifying vertices of respective clique
copies). All the vertices in $\tilde{M}$ will be satisfied. A vertex
$v\in\mathbb{T}_{i}$ will be satisfied if $v\in M_{i}$ and all the vertices in
the clique $Q_{i}$, used in the clique-sum of $\tilde{G}$ with
$\mathbb{T}_{i}$, are satisfied, i.e., $Q_{i}\subseteq\tilde{M}$. Analyzing the
entire process, we show that each vertex is satisfied with probability at
least $(1-\delta)^{\log n}$. The theorem follows by setting the parameter
$\delta^{\prime}=\Theta(\frac{\delta}{\log n})$.
##### Clan embedding for minor-free graphs
The construction here follows similar lines to our Ramsey-type embedding.
However, we cannot simply “give up” on vertices, as we are required to provide a
worst-case distortion guarantee on all vertex pairs. Similarly to the Ramsey-
type case, we build on the deterministic embedding of surface embeddable
graphs with vortices from [CFKL20], and generalize it to a clan embedding of
graphs including the apices. However, there is one crucial difference in
creating the “layering” (with the random shift). In the Ramsey-type embedding,
vertices near the boundary between two layers simply failed and did not join
$M$. Here, instead, the layers will somewhat overlap such that copies of
vertices near boundary areas will be split into two unrelated sets. In
particular, cliques that lie near boundary areas will have two separated
clique copies w.r.t. each corresponding layer (at most two). Even though each
vertex will actually have an essentially unbounded number of copies (due to
the clique-preservation requirement), the copies of each vertex will be
divided into either one or two sets, such that in the final embedding it will
be enough to pick an arbitrary single copy from each set. The copies of a
vertex will split into two sets only if it lies in the boundary area, the
probability of which is bounded by $\delta$.
The generalization to clique-sums also follows similar lines to the Ramsey-
type embedding. We create a clan embedding for $\tilde{G}$ into treewidth
graph $\tilde{H}$ as above, and recursively clan embeddings
$H_{1},H_{2},\dots$ for $\mathbb{T}_{1},\mathbb{T}_{2},\dots$. For each
$\mathbb{T}_{i}$, we will make the vertices of the clique $Q_{i}$, used for
the clique-sum between $\tilde{G}$ and $\mathbb{T}_{i}$, into apices, thereby
ensuring that $H_{i}$ will succeed on $Q_{i}$. In particular, every vertex
$v\in Q_{i}$ will have a single copy in $H_{i}$. When combining $H_{i}$ with
$\tilde{H}$, there are two cases. If the embedding $\tilde{H}$ was successful
w.r.t. $Q_{i}$, we simply identify the two clique copies and are done.
Otherwise, $\tilde{H}$ will contain two vertex-disjoint clique copies
$\tilde{Q}_{i}^{1},\tilde{Q}_{i}^{2}$ of $Q_{i}$. We will create two disjoint
copies of the embedding $H_{i}$: $H_{i}^{1},H_{i}^{2}$, and identify the two
copies of $Q_{i}$ in $H_{i}^{1},H_{i}^{2}$ with
$\tilde{Q}_{i}^{1},\tilde{Q}_{i}^{2}$, respectively. It follows that for a
vertex $v\in\mathbb{T}_{i}$, with probability at least $1-\delta$, the number
of copies it will have is the same as in $H_{i}$, while with probability at
most $\delta$ it will be doubled. Analyzing the entire process (and picking a
single copy from each relevant set as above), we show that each vertex is
expected to have at most $(1+\delta)^{\log n}$ copies. The theorem follows by
using the parameter $\delta^{\prime}=\Theta(\frac{\delta}{\log n})$.
### 1.3 Related Work
Path distortion. A closely related notion to clan embeddings is the
multi-embedding studied by Bartal and Mendel [BM04]. A multi-embedding is a dominating one-to-
many embedding. The distortion guarantee, however, is very different. We say
that a multi-embedding $f:X\rightarrow 2^{Y}$ between metric spaces
$(X,d_{X})$, $(Y,d_{Y})$ has _path distortion_ $t$, if for every “path” in
$X$, i.e., a sequence of points $x_{0},x_{1},\dots,x_{q}$, there are copies
$x^{\prime}_{i}\in f(x_{i})$ such that
$\sum_{i=0}^{q-1}d_{Y}(x^{\prime}_{i},x^{\prime}_{i+1})\leq
t\cdot\sum_{i=0}^{q-1}d_{X}(x_{i},x_{i+1})$. For an $n$-point metric space
$(X,d)$ with aspect ratio $\Phi$ (the aspect ratio of a metric space $(X,d)$
is the ratio between the maximal and minimal distances,
$\frac{\max_{x,y}d(x,y)}{\min_{x\not=y}d(x,y)}$) and parameter $k\geq 1$,
Bartal and Mendel [BM04] constructed a multi-embedding into ultrametric with
$O(n^{1+\frac{1}{k}})$ vertices and distortion $O(k\cdot\min\\{\log
n\cdot\log\log n,\log\Phi\cdot\log\log\Phi\\})$. Formally, path distortion and
the multiplicative distortion of clan embeddings are incomparable: a clan
embedding guarantees small distortion with respect to a single chief vertex
(which is crucial to our applications), while the multi-embedding distortion
guarantee of [BM04] is w.r.t. arbitrary copies, but preserves entire “paths”.
Interestingly, a small modification to our clan embedding provides the path
distortion guarantee as well! See Theorem 15 in Appendix A. Specifically, we
obtain an embedding into an ultrametric with $O(n^{1+\frac{1}{k}})$ (resp.
$(1+\epsilon)n$) vertices and distortion $O(k\cdot\min\\{\log n,\log\Phi\\})$
(resp. $O(\frac{\log n}{\epsilon}\cdot\min\\{\log n,\log\Phi\\})$), shaving a
$\log\log$ factor compared with [BM04]. In a private communication, Bartal
told us that he obtained the exact same path distortion guarantees more than a
decade ago; Bartal’s manuscript was recently made public [Bar21].
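For a concrete finite multi-embedding, the best choice of copies along a path can be found by a simple dynamic program over the path; the sketch below is our illustration of the path-distortion definition, not part of any cited construction.

```python
def best_path_cost(path, f, dY):
    """Minimal total Y-length over choices of copies along a path.

    path: a sequence x_0,...,x_q of points of X; f[x]: list of copies
    of x in Y; dY: metric on Y. The DP keeps, for each copy of the
    current point, the cheapest way to reach it from some copy of x_0.
    """
    cost = {c: 0.0 for c in f[path[0]]}
    for x_prev, x_cur in zip(path, path[1:]):
        cost = {c: min(cost[p] + dY(p, c) for p in f[x_prev])
                for c in f[x_cur]}
    return min(cost.values())
```

Dividing this quantity by $\sum_{i}d_{X}(x_{i},x_{i+1})$, and maximizing over paths, gives the path distortion of the multi-embedding.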
In a concurrent paper, Haeupler et al. [HHZ21] studied a closely related
notion of tree embeddings with copies. They construct a one-to-many embedding
of a graph $G$ into a tree $T$ where every vertex has at most $O(\log n)$
copies, and such that every connected subgraph $H$ of $G$ has a connected copy
$H^{\prime}$ in $T$ of weight at most $O(\log^{2}n)\cdot w(H)$. Using the
path distortion guarantee in our embedding (or [Bar21]), one can obtain an
embedding such that every connected subgraph $H$ of $G$ has a connected copy
$H^{\prime}$ in $T$ of weight at most $O(\log n)\cdot w(H)$; however, the
bound on the maximal number of copies will be only polynomial.
Tree covers. The constructions of Ramsey trees are asymptotically tight
[BBM06]. Furthermore, Bartal et al. [BFN19] showed that they cannot be
substantially improved even for planar graphs with a constant doubling
dimension. (Specifically, for every $\alpha>0$, [BFN19] constructed a planar
graph with constant doubling dimension such that, for every tree embedding,
the subset of vertices enjoying distortion $\leq\alpha$ is of size at most
$n^{1-\Omega(\frac{1}{\alpha\log\alpha})}$, which is almost as bad as for
general graphs.) Therefore, [BFN19] suggested studying a weaker guarantee
provided by tree covers. Here the goal is to construct a small collection of
dominating embeddings into trees such that every pair of vertices has a small
distortion in some tree in the collection. For $n$-vertex minor-free graphs,
[BFN19] constructed $(1+\epsilon)$-tree covers of size
$O_{r}(\frac{\log^{2}n}{\epsilon^{2}})$ (or $O(1)$-distortion tree covers of
$O(1)$ size). For metrics with doubling dimension $d$, [BFN19] constructed
$(1+\epsilon)$-tree covers of size $(\frac{1}{\epsilon})^{O(d)}$. Recently,
the authors [FL21] showed that for doubling metrics, the trees can be replaced
by ultrametrics.
Minor-free graphs. Different types of embeddings have been studied for
minor-free graphs. $K_{r}$-minor-free graphs embed into $\ell_{p}$ space with
multiplicative distortion $O_{r}(\log^{\min\\{\frac{1}{2},\frac{1}{p}\\}}n)$
[Rao99, KLMN05, AGG+19, AFGN18]. In particular, they embed into
$\ell_{\infty}$ of dimension $O_{r}(\log^{2}n)$ with a constant multiplicative
distortion. They also admit spanners with multiplicative distortion
$1+\epsilon$ and lightness $\tilde{O}_{r}(\epsilon^{-3})$ [BLW17]. On the
other hand, there are other graph families that embed well into bounded
treewidth graphs. Talwar [Tal04] showed that graphs with doubling dimension
$d$ and aspect ratio $\Phi$ stochastically embed into graphs with treewidth
$\epsilon^{-O(d\log d)}\cdot\log^{d}\Phi$ with expected distortion
$1+\epsilon$. Similar embeddings are known for graphs with highway dimension
$h$ [FFKP18] (into treewidth $(\log\Phi)^{O(\log^{2}\frac{h}{\epsilon})}$
graphs), and graphs with correlation dimension $k$ [CG12] (into treewidth
$\tilde{O}_{k,\epsilon}(\sqrt{n})$ graphs).
## 2 Preliminaries
$\tilde{O}$ notation hides poly-logarithmic factors, that is
$\tilde{O}(g)=O(g)\cdot\mathrm{polylog}(g)$, while $O_{r}$ notation hides
factors in $r$, e.g. $O_{r}(m)=O(m)\cdot f(r)$ for some function $f$ of $r$.
All logarithms are in base $2$ (unless specified otherwise).
We consider connected undirected graphs $G=(V,E)$ with edge weights
$w_{G}:E\to\mathbb{R}_{\geq 0}$. A graph is called unweighted if all its edges
have unit weight. Additionally, we denote $G$’s vertex set and edge set by
$V(G)$ and $E(G)$, respectively. Often, we will abuse notation and write $G$
instead of $V(G)$. $d_{G}$ denotes the shortest path metric in $G$, i.e.,
$d_{G}(u,v)$ is the shortest-path distance between $u$ and $v$ in $G$. Note that
every metric space can be represented as the shortest path metric of a
weighted complete graph. We will use the notions of metric spaces, and
weighted graphs interchangeably. When the graph is clear from the context, we
might use $w$ to refer to $w_{G}$, and $d$ to refer to $d_{G}$. $G[S]$ denotes
the induced subgraph by $S$. The diameter of $S$, denoted by
$\mathrm{diam}(S)$, is $\max_{u,v\in S}d_{G[S]}(u,v)$. (This is often called
the _strong_ diameter. A related notion is the _weak_ diameter of a cluster
$S$, defined to be $\max_{u,v\in S}d_{G}(u,v)$. Note that for a metric space,
weak and strong diameters are equivalent.)
An ultrametric $\left(X,d\right)$ is a metric space satisfying a strong form
of the triangle inequality, that is, for all $x,y,z\in X$,
$d(x,z)\leq\max\left\\{d(x,y),d(y,z)\right\\}$. The following definition is
known to be an equivalent one (see [BLMN05b]).
###### Definition 1.
An ultrametric is a metric space $\left(X,d\right)$ whose elements are the
leaves of a rooted labeled tree $T$. Each $z\in T$ is associated with a label
$\ell\left(z\right)\geq 0$ such that if $x\in T$ is a descendant of $z$ then
$\ell\left(x\right)\leq\ell\left(z\right)$ and $\ell\left(x\right)=0$ iff $x$
is a leaf. The distance between leaves $x,y\in X$ is defined as
$d_{T}(x,y)=\ell\left(\mbox{lca}\left(x,y\right)\right)$ where
$\mbox{lca}\left(x,y\right)$ is the least common ancestor of $x$ and $y$ in
$T$.
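Definition 1 can be made concrete with a small sketch (class and function names are ours): distances in an ultrametric are read off the label of the least common ancestor of the two leaves.

```python
class UltraNode:
    def __init__(self, label, children=(), point=None):
        self.label = label            # l(z) >= 0; 0 iff leaf
        self.children = list(children)
        self.point = point            # leaf payload, None for internal

def ultra_dist(root, x, y):
    """d_T(x, y) = label of the least common ancestor of leaves x, y."""
    def holds(node, p):               # does the subtree contain leaf p?
        if node.point == p:
            return True
        return any(holds(c, p) for c in node.children)
    node = root
    while True:                       # descend while both leaves remain
        nxt = next((c for c in node.children
                    if holds(c, x) and holds(c, y)), None)
        if nxt is None:
            return node.label         # this node is the LCA
        node = nxt
```

Since labels decrease along root-leaf paths, $d_{T}(x,z)\leq\max\\{d_{T}(x,y),d_{T}(y,z)\\}$ holds automatically: the LCA of $x,z$ is an ancestor of the LCA of $x,y$ or of the LCA of $y,z$.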
### 2.1 Metric Embeddings
Classically, a metric embedding is defined as a function $f:X\rightarrow Y$
between the points of two metric spaces $(X,d_{X})$ and $(Y,d_{Y})$. A metric
embedding $f$ is said to be _dominating_ if for every pair of points $x,y\in
X$, it holds that $d_{X}(x,y)\leq d_{Y}(f(x),f(y))$. The distortion of a
dominating embedding $f$ is $\max_{x\not=y\in
X}\frac{d_{Y}(f(x),f(y))}{d_{X}(x,y)}$. Here we will study a more permissive
generalization of metric embeddings, introduced by Cohen-Addad et al. [CFKL20],
called a _one-to-many embedding_.
###### Definition 2 (One-to-many embedding).
A _one-to-many embedding_ is a function $f:X\rightarrow 2^{Y}$ from the points
of a metric space $(X,d_{X})$ into non-empty subsets of points of a metric
space $(Y,d_{Y})$, where the subsets $\\{f(x)\\}_{x\in X}$ are disjoint.
$f^{-1}(x^{\prime})$ denotes the unique point $x\in X$ such that
$x^{\prime}\in f(x)$. If no such point exists, $f^{-1}(x^{\prime})=\emptyset$.
A point $x^{\prime}\in f(x)$ is called a _copy_ of $x$, while $f(x)$ is called
the _clan_ of $x$. For a subset $A\subseteq X$ of vertices, denote
$f(A)=\cup_{x\in A}f(x)$.
We say that $f$ is _dominating_ if for every pair of points $x,y\in X$, it
holds that $d_{X}(x,y)\leq\min_{x^{\prime}\in f(x),y^{\prime}\in
f(y)}d_{Y}(x^{\prime},y^{\prime})$. We say that $f$ has multiplicative
distortion $t$, if it is dominating and $\forall x,y\in X$, it holds that
$\max_{x^{\prime}\in f(x),y^{\prime}\in f(y)}d_{Y}(x^{\prime},y^{\prime})\leq
t\cdot d_{X}(x,y)$. Similarly, $f$ has additive distortion $\epsilon D$ if $f$
is dominating and $\forall x,y\in X$, $\max_{x^{\prime}\in f(x),y^{\prime}\in
f(y)}d_{Y}(x^{\prime},y^{\prime})\leq d_{X}(x,y)+\epsilon D$.
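On finite examples, the dominating property and the multiplicative distortion of Definition 2 can be checked directly; the helper below is our illustrative sketch, with $f$ given as a dict from points to lists of copies.

```python
from itertools import combinations

def multiplicative_distortion(points, dX, f, dY):
    """Distortion of a dominating one-to-many embedding f.

    Checks d_X(x,y) <= min over copy pairs (domination), and returns
    the max over pairs of (max over copy pairs of d_Y) / d_X(x,y).
    """
    t = 1.0
    for x, y in combinations(points, 2):
        ds = [dY(a, b) for a in f[x] for b in f[y]]
        assert min(ds) >= dX(x, y), "embedding is not dominating"
        t = max(t, max(ds) / dX(x, y))
    return t
```

Note that the maximum runs over *all* pairs of copies, so a single far-away copy of $y$ already inflates the multiplicative distortion; the clan-embedding definition below relaxes exactly this point.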
A stochastic one-to-many embedding is a distribution $\mathcal{D}$ over
dominating one-to-many embeddings. We say that a stochastic one-to-many
embedding has expected multiplicative distortion $t$ if $\forall x,y\in X$,
$\mathbb{E}[\max_{x^{\prime}\in f(x),y^{\prime}\in
f(y)}d_{Y}(x^{\prime},y^{\prime})]\leq t\cdot d_{X}(u,v)$. Similarly, $f$ has
expected additive distortion $\epsilon D$, if $\forall x,y\in X$,
$\mathbb{E}[\max_{x^{\prime}\in f(x),y^{\prime}\in
f(y)}d_{Y}(x^{\prime},y^{\prime})]\leq d_{X}(x,y)+\epsilon D$.
For a one-to-many embedding $f$ between weighted graphs $G=(V,E,w)$ and
$H=(V^{\prime},E^{\prime},w^{\prime})$, we say that $f$ is spanning if
$V^{\prime}=f(V)$ (i.e. $f$ is “onto”), and for every edge $(u,v)\in
E^{\prime}$, it holds that $\left(f^{-1}(u),f^{-1}(v)\right)\in E$ and
$w^{\prime}(u,v)=w\left(f^{-1}(u),f^{-1}(v)\right)$.
This paper is mainly devoted to the new notion of clan embeddings.
###### Definition 3 (Clan embedding).
A clan embedding from metric space $(X,d_{X})$ into a metric space $(Y,d_{Y})$
is a pair $(f,\chi)$ where $f:X\rightarrow 2^{Y}$ is a dominating one-to-many
embedding, and $\chi:X\rightarrow Y$ is a classic embedding. For every $x\in
X$, we have that $\chi(x)\in f(x)$; here $f(x)$ is called the clan of $x$, while
$\chi(x)$ is referred to as the chief of the clan of $x$ (or simply the chief
of $x$).
We say that clan embedding $f$ has multiplicative distortion $t$ if for every
$x,y\in X$, $\min_{y^{\prime}\in f(y)}d_{Y}(y^{\prime},\chi(x))\leq t\cdot
d_{X}(x,y)$. Similarly, $f$ has additive distortion $\epsilon D$ if for every
$x,y\in X$, $\min_{y^{\prime}\in f(y)}d_{Y}(y^{\prime},\chi(x))\leq
d_{X}(x,y)+\epsilon D$.
A clan embedding $(f,\chi)$ is said to be spanning if $f$ is a spanning one-
to-many embedding.
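The clan-embedding distortion of Definition 3 differs from Definition 2 in that the minimum runs over the copies of $y$, measured against the chief of $x$. A direct check for finite examples (our own sketch, mirroring the definition):

```python
def clan_distortion(points, dX, f, chief, dY):
    """Multiplicative distortion of a clan embedding (f, chi):
    max over ordered pairs x != y of
    min over y' in f[y] of d_Y(y', chief[x]) / d_X(x, y).
    """
    for x in points:                  # Definition 3: chief is a copy
        assert chief[x] in f[x]
    t = 1.0
    for x in points:
        for y in points:
            if x == y:
                continue
            best = min(dY(yc, chief[x]) for yc in f[y])
            t = max(t, best / dX(x, y))
    return t
```

In the test below, $y$ has a copy at distance $10$ from the chief of $x$, yet the distortion is only $2$, because one *close* copy suffices, unlike in the one-to-many distortion of Definition 2.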
We will construct embeddings for minor-free graphs using a divide-and-conquer
approach. First, we will construct an embedding for each piece (see below). Then,
in order to combine different embeddings into a single one, it will be
important that these embeddings are _clique-preserving_.
###### Definition 4 (Clique-copy).
Consider a one-to-many embedding $f:G\rightarrow 2^{H}$, and a clique $Q$ in
$G$. A subset $Q^{\prime}\subseteq f(Q)$ is called clique copy of $Q$ if
$Q^{\prime}$ is a clique in $H$, and for every vertex $v\in Q$,
$Q^{\prime}\cap f(v)$ is a singleton.
###### Definition 5 (Clique-preserving embedding).
A one-to-many embedding $f:G\rightarrow 2^{H}$ is called clique-preserving
embedding if for every clique $Q$ in $G$, $f(Q)$ contains a clique copy of
$Q$. A clan embedding $(f,\chi)$ is clique-preserving if $f$ is clique
preserving.
### 2.2 Robertson-Seymour Structure Theorem
In this section, we review notation used in graph minor theory by Robertson
and Seymour. Informally speaking, the celebrated theorem of Robertson and
Seymour (Theorem 9, [RS03]) states that every minor-free graph can be decomposed
into a collection of graphs _nearly embeddable_ in the surface of constant
genus, glued together into a tree structure by taking _clique-sum_. To
formally state the Robertson-Seymour decomposition, we need additional
notation.
###### Definition 6 (Tree/Path decomposition).
A tree decomposition of $G(V,E)$, denoted by $\mathcal{T}$, is a tree
satisfying the following conditions:
1. 1.
Each node $i\in V(\mathcal{T})$ corresponds to a subset of vertices $X_{i}$ of
$V$ (called bags), such that $\cup_{i\in V(\mathcal{T})}X_{i}=V$.
2. 2.
For each edge $uv\in E$, there is a bag $X_{i}$ containing both $u,v$.
3. 3.
For a vertex $v\in V$, all the bags containing $v$ make up a subtree of
$\mathcal{T}$.
The _width_ of a tree decomposition $\mathcal{T}$ is $\max_{i\in
V(\mathcal{T})}|X_{i}|-1$ and the treewidth of $G$, denoted by $\mathrm{tw}$,
is the minimum width among all possible tree decompositions of $G$. A _path
decomposition_ of a graph $G(V,E)$ is a tree decomposition where the
underlying tree is a path. The pathwidth of $G$, denoted by $\mathrm{pw}$, is
defined accordingly.
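The three conditions of Definition 6 can be verified mechanically on small examples. The sketch below is our illustration; it assumes the graph has vertices $0,\dots,n-1$ and that the decomposition tree is given by its edge list on bag indices.

```python
def is_tree_decomposition(n, edges, bags, tree_edges):
    """Verify Definition 6 for a graph on vertices 0..n-1."""
    # 1. Every vertex appears in some bag.
    if set().union(*bags) != set(range(n)):
        return False
    # 2. Every graph edge lies inside some bag.
    if any(not any({u, v} <= X for X in bags) for u, v in edges):
        return False
    # 3. The bags containing v form a connected subtree: check that
    #    {i : v in X_i} induces a connected subgraph of the tree.
    adj = {i: set() for i in range(len(bags))}
    for i, j in tree_edges:
        adj[i].add(j); adj[j].add(i)
    for v in range(n):
        nodes = {i for i, X in enumerate(bags) if v in X}
        stack, seen = [min(nodes)], set()
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            stack.extend(adj[i] & nodes - seen)
        if seen != nodes:
            return False
    return True

def width(bags):
    """Width of a tree decomposition: max bag size minus one."""
    return max(len(X) for X in bags) - 1
```

For the path $0-1-2$, the bags $\\{0,1\\},\\{1,2\\}$ joined by one tree edge form a valid width-$1$ decomposition, while dropping $1$ from the second bag violates condition 2.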
A _vortex_ is a graph $W$ equipped with a path decomposition
$\\{X_{1},X_{2},\ldots,X_{t}\\}$ and a sequence of $t$ designated vertices
$x_{1},\ldots,x_{t}$, called the _perimeter_ of $W$, such that $x_{i}\in
X_{i}$ for all $1\leq i\leq t$. The _width_ of the vortex is the width of its
path decomposition. We say that a vortex $W$ is _glued_ to a face $F$ of a
surface embedded graph $G$ if $W\cap F$ is the perimeter of $W$ whose vertices
appear consecutively along the boundary of $F$.
##### Nearly $h$-embeddability
A graph $G$ is nearly $h$-embeddable if there is a set of at most $h$ vertices
$A$, called _apices_ , such that $G\setminus A$ can be decomposed as
$G_{\Sigma}\cup\\{W_{1},W_{2},\ldots,W_{h}\\}$ where $G_{\Sigma}$ is
(cellularly) embedded on a surface $\Sigma$ of genus at most $h$ and each
$W_{i}$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$.
##### $h$-Clique-sum
A graph $G$ is an $h$-clique-sum of two graphs $G_{1},G_{2}$, denoted by
$G=G_{1}\oplus_{h}G_{2}$, if there are two cliques of size exactly $h$ each,
such that $G$ can be obtained by identifying the vertices of the two cliques
and removing some clique edges of the resulting identification.
Note that clique-sum is not a well-defined operation since the clique-sum of
two graphs is not unique, due to the clique-edge deletion step. We are now
ready to state the decomposition theorem.
###### Theorem 9 (Theorem 1.3 [RS03]).
There is a constant $h=O_{r}(1)$ such that any $K_{r}$-minor-free graph $G$
can be decomposed into a tree $\mathbb{T}$ where each node of $\mathbb{T}$
corresponds to a nearly $h$-embeddable graph such that $G=\cup_{X_{i}X_{j}\in
E(\mathbb{T})}X_{i}\oplus_{h}X_{j}$.
The graphs corresponding to the nodes in the clique-sum decomposition above
are referred to as _pieces_. Note that the pieces in $\mathbb{T}$ may not be
subgraphs of $G$: in a clique-sum, some edges of the nearly $h$-embeddable
graph associated with a node may not be present in $G$. We will slightly
modify the graph to ensure that this never
happens. Specifically, for any pair $u,v$ of vertices used in a clique-sum for
a piece $X$ of $\mathbb{T}$, that are not present in $G$, we add an edge
$(u,v)$ to $G$ and set its weight to be $d_{G}(u,v)$. In the decomposition of
the resulting graph, the clique-sum operation does not remove any edge. Note
that this operation does not change the Robertson-Seymour decomposition of the
graph, nor its shortest path metric. Thus from a metric point of view, the two
graphs are equivalent.
Cohen-Addad et al. [CFKL20] showed that every $n$-vertex $K_{r}$-minor free
graph has a stochastic one-to-many embedding with expected additive distortion
$\epsilon D$ into a graph with treewidth $O(\frac{\log n}{\epsilon^{2}})$. The
only reason [CFKL20] used randomness is the presence of apices. The following lemma
[CFKL20] states that nearly $h$-embeddable graphs without apices embed
deterministically into bounded treewidth graphs. We will use this embedding in
a black box manner.
###### Lemma 1 (Multiple Vortices and Genus, [CFKL20]).
Consider a graph $G=G_{\Sigma}\cup W_{1}\cup\dots\cup W_{h}$ of diameter $D$,
where $G_{\Sigma}$ is (cellularly) embedded on a surface $\Sigma$ of genus
$h$, and each $W_{i}$ is a vortex of width at most $h$ glued to a face of
$G_{\Sigma}$. There is a one-to-many clique-preserving embedding $f$ from $G$
to a graph $H$ of treewidth at most $O_{h}\left(\frac{\log
n}{\epsilon}\right)$ with additive distortion $\epsilon D$.
## 3 Clan embedding into an ultrametric
This section is devoted to proving Theorem 1. We restate it for convenience.
See Theorem 1.
First, we will prove a “distributional” version of Theorem 1. That is, we will
receive a distribution $\mu$ over the points, and deterministically construct
a single clan embedding $(f,\chi)$ such that $\sum_{x\in X}\mu(x)|f(x)|$ will
be bounded. Later, we will use the minimax theorem to conclude Theorem 1. We
begin with some definitions: a _measure_ over a finite set $X$, is simply a
function $\mu:X\rightarrow\mathbb{R}_{\geq 0}$. The measure of a subset
$A\subseteq X$, is $\mu(A)=\sum_{x\in A}\mu(x)$. Given some function
$f:X\rightarrow\mathbb{R}$, its expectation w.r.t. $\mu$ is
$\mathbb{E}_{x\sim\mu}[f]=\sum_{x\in X}\mu(x)\cdot f(x)$. We say that $\mu$ is
a _probability measure_ if $\mu(X)=1$. We say that $\mu$ is a _$(\geq 1)$
-measure_ if for every $x\in X$, $\mu(x)\geq 1$.
###### Lemma 2.
Given an $n$-point metric space $(X,d_{X})$, $(\geq 1)$-measure
$\mu:X\rightarrow\mathbb{R}_{\geq 1}$, and integer parameter $k\geq 1$, there
is a clan embedding $(f,\chi)$ into an ultrametric with multiplicative
distortion $16k$ such that
$\mathbb{E}_{x\sim\mu}[|f(x)|]\leq\mu(X)^{1+\frac{1}{k}}$.
_Proof._ Our proof is inspired by Bartal’s lecture notes [Bar11], which
provide a deterministic construction of Ramsey trees. Specifically, Claim 1
below is due to [Bar11]. Lemma 2 could also be proved using the techniques of
Abraham et al. [ACE+20] (and indeed we will use their approach for our clan
embedding into a spanning tree, see Lemma 5); however, the proof based on
[Bar11] that we present here is shorter. For a subset $A\subseteq X$, denote
by $B_{A}(x,r)\coloneqq
B_{X}(x,r)\cap A$ the ball in the metric space $(X,d_{X})$ restricted to $A$.
Set $\mu^{*}(A)\coloneqq\max_{x\in
A}\mu\left(B_{A}(x,\frac{\mathrm{diam}(A)}{4})\right)$. Note that $\mu^{*}$ is
monotone: i.e. $A^{\prime}\subseteq A$ implies
$\mu^{*}(A^{\prime})\leq\mu^{*}(A)$, and $\forall A,$ $\mu^{*}(A)\leq\mu(A)$.
The following claim is crucial for our construction; its proof appears below.
See Figure 2 for an illustration of the claim.
###### Claim 1.
There is a point $v\in X$ and radius $R\in(0,\frac{\mathrm{diam}(X)}{2}]$,
such that the sets ${P=B_{X}(v,R+\frac{1}{8k}\cdot\mathrm{diam}(X))}$,
$Q=B_{X}(v,R)$, and $\bar{Q}=X\setminus Q$ satisfy
$\mu(P)\leq\mu(Q)\cdot\left(\frac{\mu^{*}(X)}{\mu^{*}(P)}\right)^{\frac{1}{k}}$.
The construction of the embedding is by induction on $n$, the number of points
in the metric space. We assume that for a metric space $X$ with strictly less
than $n$ points, and arbitrary $(\geq 1)$-measure $\mu$, we can construct a
clan embedding $(f,\chi)$ with distortion $16k$, such that
$\mathbb{E}_{x\sim\mu}[|f(x)|]\leq\mu(X)\mu^{*}(X)^{\frac{1}{k}}\leq\mu(X)^{1+\frac{1}{k}}$.
Find sets $P,Q,\bar{Q}\subseteq X$ using Claim 1. Let $\mu_{P}$ (resp.
$\mu_{\bar{Q}}$) be the $(\geq 1)$-measure $\mu$ restricted to $P$ (resp.
$\bar{Q}$). Using the induction hypothesis, construct clan embeddings
$(f_{P},\chi_{P})$ for $P$, and $(f_{\bar{Q}},\chi_{\bar{Q}})$ for $\bar{Q}$
into ultra-metrics $U_{P},U_{\bar{Q}}$ respectively. Construct a new
ultrametric $U$ by combining $U_{P}$ and $U_{\bar{Q}}$ by adding a new root
node $r_{U}$ with label $\mathrm{diam}(X)$ and making roots of $U_{P}$ and
$U_{\bar{Q}}$ children of $r_{U}$. For every $x\in X$ set $f(x)=f_{P}(x)\cup
f_{\bar{Q}}(x)$. If $d_{X}(v,x)\leq R+\frac{1}{16k}\cdot\mathrm{diam}(X)$ set
$\chi(x)=\chi_{P}(x)$, otherwise set $\chi(x)=\chi_{\bar{Q}}(x)$. This
finishes the construction; see Figure 2 for an illustration.
Figure 2: On the left, we illustrate the clusters $P,Q,\bar{Q}$ from Claim 1. On the
right we illustrate the clan embedding of the metric space $(X,d_{X})$ into
ultrametric $U$. $r_{U}$ is the root of $U$, and its children are the roots of
the ultrametrics $U_{P},U_{\bar{Q}}$ which were constructed recursively. The
point $x\in P\cap Q$ has $f(x)=f_{P}(x)$ and $\chi(x)=\chi_{P}(x)$ (where
$|f(x)|=2$). The point $y$ is in $\bar{Q}\setminus P$ and thus
$f(y)=f_{\bar{Q}}(y)$ and $\chi(y)=\chi_{\bar{Q}}(y)$ (there is a single copy
of $y$). The point $z$ belongs to $P\cap\bar{Q}$, where
$d_{X}(v,z)>R+\frac{1}{16k}\cdot\mathrm{diam}(X)$, hence $f(z)=f_{P}(z)\cup
f_{\bar{Q}}(z)$ and $\chi(z)=\chi_{\bar{Q}}(z)$. Note that
$|f_{P}(z)|=|f_{\bar{Q}}(z)|=2$, and hence $|f(z)|=4$.
Next, we argue that the clan embedding $(f,\chi)$ has multiplicative
distortion $16k$. Consider a pair of points $x,y\in X$. We will show that
$\min_{y^{\prime}\in f(y)}d_{U}(y^{\prime},\chi(x))\leq 16k\cdot d_{X}(x,y)$.
Suppose first that $d_{X}(v,x)\leq R+\frac{1}{16k}\cdot\mathrm{diam}(X)$. If
$y\in P$, then by the induction hypothesis
$\min_{y^{\prime}\in f(y)}d_{U}(y^{\prime},\chi(x))\leq\min_{y^{\prime}\in
f_{P}(y)}d_{U_{P}}(y^{\prime},\chi_{P}(x))\leq 16k\cdot d_{P}(x,y)=16k\cdot
d_{X}(x,y)\leavevmode\nobreak\ .$
Else, $y\notin P$, and then $d_{X}(v,y)>R+\frac{1}{8k}\cdot\mathrm{diam}(X)$.
Using the triangle inequality, $d_{X}(x,y)\geq
d_{X}(v,y)-d_{X}(v,x)\geq\frac{\mathrm{diam}(X)}{16k}$. Note that the label of
$r_{U}$ is $\mathrm{diam}(X)$, implying that $\min_{y^{\prime}\in
f(y)}d_{U}(y^{\prime},\chi(x))\leq\mathrm{diam}(X)\leq 16k\cdot d_{X}(x,y)$.
The case where $d_{X}(v,x)>R+\frac{1}{16k}\cdot\mathrm{diam}(X)$ is symmetric
(using $\bar{Q}$ instead of $P$).
Next, we bound the weighted number of leaves in the ultrametric. Note that the
process is deterministic and there is no probability involved. Using the
induction hypothesis, it holds that
$\displaystyle\mathbb{E}_{x\sim\mu}[|f(x)|]$ $\displaystyle=\sum_{x\in
X}\mu(x)\cdot\left(|f_{P}(x)|+|f_{\bar{Q}}(x)|\right)$
$\displaystyle=\mathbb{E}_{x\sim\mu_{P}}[|f_{P}(x)|]+\mathbb{E}_{x\sim\mu_{\bar{Q}}}[|f_{\bar{Q}}(x)|]$
$\displaystyle\leq\mu_{P}(P)\mu_{P}^{*}(P)^{\frac{1}{k}}+\mu_{\bar{Q}}(\bar{Q})\mu_{\bar{Q}}^{*}(\bar{Q})^{\frac{1}{k}}$
$\displaystyle\leq\mu(P)\mu^{*}(P)^{\frac{1}{k}}+\mu(\bar{Q})\mu^{*}(\bar{Q})^{\frac{1}{k}}$
$\displaystyle\overset{(*)}{\leq}\mu(Q)\mu^{*}(X)^{\frac{1}{k}}+\mu(\bar{Q})\mu^{*}(X)^{\frac{1}{k}}=\mu(X)\mu^{*}(X)^{\frac{1}{k}}\leavevmode\nobreak\
,$
where inequality $(*)$ is due to Claim 1 and the fact that
$\mu^{*}(\bar{Q})\leq\mu^{*}(X)$. ∎
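The recursive construction above, together with the radius selection of Claim 1, can be sketched in code. This is a minimal sketch, not the paper's implementation: it assumes a finite metric given as a nested dict `d` and a $(\geq 1)$-measure `mu`, and the names (`clan_embed`, `Node`, etc.) and the tree representation are our own. On a toy example one can then check both guarantees of Lemma 2: $\min_{y'\in f(y)}d_U(y',\chi(x))\leq 16k\cdot d_X(x,y)$ for every pair, and $\mathbb{E}_{x\sim\mu}[|f(x)|]\leq\mu(X)^{1+1/k}$.

```python
from itertools import product

class Node:
    """Ultrametric tree node; leaves carry the point they are a copy of."""
    def __init__(self, label, children=(), point=None):
        self.label, self.children, self.point = label, list(children), point

def ball(S, d, v, r):
    return frozenset(x for x in S if d[v][x] <= r)

def diameter(S, d):
    return max(d[u][w] for u in S for w in S)

def clan_embed(X, d, mu, k):
    """Returns (root, f, chi): f maps each point to its list of leaf copies,
    chi maps each point to its chief leaf (a sketch of Lemma 2)."""
    X = frozenset(X)
    if len(X) == 1:
        (x,) = X
        leaf = Node(0.0, point=x)
        return leaf, {x: [leaf]}, {x: leaf}
    D = diameter(X, d)
    # Claim 1: choose the center v minimising mu(B(v, D/4)) / mu(B(v, D/8)) ...
    v = min(X, key=lambda u: sum(mu[x] for x in ball(X, d, u, D / 4))
                           / sum(mu[x] for x in ball(X, d, u, D / 8)))
    rho = D / (8 * k)
    Q = [ball(X, d, v, D / 8 + i * rho) for i in range(k + 1)]
    # ... and the ring index i minimising the growth ratio mu(Q_{i+1})/mu(Q_i).
    i = min(range(k), key=lambda j: sum(mu[x] for x in Q[j + 1])
                                  / sum(mu[x] for x in Q[j]))
    R = D / 8 + i * rho
    P, Qbar = ball(X, d, v, R + rho), X - ball(X, d, v, R)
    # Recurse on the two (overlapping) parts, join them under a root labelled D.
    TP, fP, cP = clan_embed(P, d, mu, k)
    TQ, fQ, cQ = clan_embed(Qbar, d, mu, k)
    root = Node(D, [TP, TQ])
    f = {x: fP.get(x, []) + fQ.get(x, []) for x in X}
    # Chief: the P-copy for points close to v (threshold R + diam/(16k)).
    chi = {x: (cP[x] if d[v][x] <= R + rho / 2 else cQ[x]) for x in X}
    return root, f, chi

def ultra_dists(root):
    """In an ultrametric tree, d_U(leaf a, leaf b) = label of their LCA."""
    dist = {}
    def rec(node):
        if node.point is not None:
            return [node]
        groups = [rec(c) for c in node.children]
        for ga, gb in product(groups, repeat=2):
            if ga is not gb:
                for a, b in product(ga, gb):
                    dist[(id(a), id(b))] = node.label
        return [leaf for g in groups for leaf in g]
    rec(root)
    return dist

# Toy check: 8 points on a line, unit (>= 1)-measure, k = 2.
pts = list(range(8))
d = {a: {b: float(abs(a - b)) for b in pts} for a in pts}
mu = {x: 1.0 for x in pts}
k = 2
root, f, chi = clan_embed(pts, d, mu, k)
dU = ultra_dists(root)
worst = max(min(dU[(id(yc), id(chi[x]))] for yc in f[y]) / d[x][y]
            for x in pts for y in pts if x != y)
copies = sum(mu[x] * len(f[x]) for x in pts)
assert worst <= 16 * k + 1e-9                       # multiplicative distortion
assert copies <= sum(mu.values()) ** (1 + 1 / k)    # E_mu[|f(x)|] bound
```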
###### Proof of Claim 1.
Let $v$ be the point minimizing the ratio
$\frac{\mu\left(B_{X}(v,\frac{\mathrm{diam}(X)}{4})\right)}{\mu\left(B_{X}(v,\frac{\mathrm{diam}(X)}{8})\right)}$.
Set $\rho=\frac{\mathrm{diam}(X)}{8k}$, and for $i\in[0,k]$ let
$Q_{i}=B_{X}(v,\frac{\mathrm{diam}(X)}{8}+i\cdot\rho)$. Let $i\in[0,k-1]$ be
the index minimizing $\frac{\mu(Q_{i+1})}{\mu(Q_{i})}$. Then,
$\left(\frac{\mu(Q_{k})}{\mu(Q_{0})}\right)^{\frac{1}{k}}=\left(\frac{\mu(Q_{1})}{\mu(Q_{0})}\cdot\frac{\mu(Q_{2})}{\mu(Q_{1})}\cdots\frac{\mu(Q_{k})}{\mu(Q_{k-1})}\right)^{\frac{1}{k}}\geq\left(\frac{\mu(Q_{i+1})}{\mu(Q_{i})}\right)^{k\cdot\frac{1}{k}}=\frac{\mu(Q_{i+1})}{\mu(Q_{i})}\leavevmode\nobreak\
.$
Set $R=\frac{\mathrm{diam}(X)}{8}+i\cdot\rho$, then $P=B_{X}(v,R+\rho)$,
$Q=B_{X}(v,R)$, $\bar{Q}=X\setminus Q$. Note that $\mathrm{diam}(P)\leq
2\cdot(\frac{\mathrm{diam}(X)}{8}+k\cdot\rho)=\frac{\mathrm{diam}(X)}{2}$. Let
$u_{P}$ be the point defining $\mu^{*}(P)$, that is
$\mu^{*}(P)=\mu\left(B_{P}(u_{P},\frac{\mathrm{diam}(P)}{4})\right)\leq\mu\left(B_{P}(u_{P},\frac{\mathrm{diam}(X)}{8})\right)$.
Using the minimality of $v$, it holds that
$\frac{\mu(P)}{\mu(Q)}\leq\left(\frac{\mu(Q_{k})}{\mu(Q_{0})}\right)^{\frac{1}{k}}=\left(\frac{\mu\left(B_{X}(v,\frac{\mathrm{diam}(X)}{4})\right)}{\mu\left(B_{X}(v,\frac{\mathrm{diam}(X)}{8})\right)}\right)^{\frac{1}{k}}\stackrel{{\scriptstyle(*)}}{{\leq}}\left(\frac{\mu\left(B_{X}(u_{P},\frac{\mathrm{diam}(X)}{4})\right)}{\mu\left(B_{X}(u_{P},\frac{\mathrm{diam}(X)}{8})\right)}\right)^{\frac{1}{k}}\leq\left(\frac{\mu^{*}\left(X\right)}{\mu^{*}\left(P\right)}\right)^{\frac{1}{k}}\leavevmode\nobreak\
,$
where $(*)$ is due to the choice of $v$. ∎
Next, we translate the language of $(\geq 1)$-measures used in Lemma 2 to
probability measures:
###### Lemma 3.
Given an $n$-point metric space $(X,d_{X})$, and probability measure
$\mu:X\rightarrow\mathbb{R}_{\geq 0}$, we can construct the two following clan
embeddings $(f,\chi)$ into ultrametrics:
1. 1.
For every parameter $k\geq 1$, multiplicative distortion $16k$ such that
$\mathbb{E}_{x\sim\mu}[|f(x)|]\leq O(n^{\frac{1}{k}})$.
2. 2.
For every parameter $\epsilon\in(0,1]$, multiplicative distortion
$O(\frac{\log n}{\epsilon})$ such that $\mathbb{E}_{x\sim\mu}[|f(x)|]\leq
1+\epsilon$.
###### Proof.
We define the following probability measure $\widetilde{\mu}$: $\forall x\in
X$, $\widetilde{\mu}(x)=\frac{1}{2n}+\frac{1}{2}\mu(x)$. Set the following
$(\geq 1)$-measure $\widetilde{\mu}_{\geq 1}(x)=2n\cdot\widetilde{\mu}(x)$. Note
that $\widetilde{\mu}_{\geq 1}(X)=2n$. We execute Lemma 2 w.r.t. the $(\geq
1)$-measure $\widetilde{\mu}_{\geq 1}$, and integer parameter
$\frac{1}{\delta}\in\mathbb{N}$ (playing the role of $k$) to be determined later. It holds that
$\widetilde{\mu}_{\geq
1}(X)\cdot\mathbb{E}_{x\sim\widetilde{\mu}}[|f(x)|]=\mathbb{E}_{x\sim\widetilde{\mu}_{\geq
1}}[|f(x)|]\leq\widetilde{\mu}_{\geq 1}(X)^{1+\delta}=\widetilde{\mu}_{\geq
1}(X)\cdot(2n)^{\delta}\leavevmode\nobreak\ ,$
implying
$(2n)^{\delta}\geq\mathbb{E}_{x\sim\widetilde{\mu}}[|f(x)|]=\frac{1}{2}\cdot\mathbb{E}_{x\sim\mu}[|f(x)|]+\frac{\sum_{x\in
X}|f(x)|}{2n}\geq\frac{1}{2}\cdot\mathbb{E}_{x\sim\mu}[|f(x)|]+\frac{1}{2}\leavevmode\nobreak\
.$
1. 1.
Set $\delta=\frac{1}{k}$, then we have multiplicative distortion
$\frac{16}{\delta}=16k$, and $\mathbb{E}_{x\sim\mu}[|f(x)|]\leq
2\cdot(2n)^{\delta}=O(n^{\frac{1}{k}})$.
2. 2.
Choose $\delta\in(0,1]$ such that
$\frac{1}{\delta}=\left\lceil\frac{\ln(2n)}{\ln(1+\epsilon/2)}\right\rceil$,
note that $\delta\leq\frac{\ln(1+\epsilon/2)}{\ln(2n)}$. Then we have
multiplicative distortion $O(\frac{1}{\delta})=O(\frac{\log n}{\epsilon})$,
and $\mathbb{E}_{x\sim\mu}[|f(x)|]\leq 2\cdot(2n)^{\delta}-1\leq 1+\epsilon$.
∎
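The choice of $\delta$ in item 2 can be sanity-checked numerically. The following is a small sketch (the helper name is ours): it verifies, over a few values of $n$ and $\epsilon$, that with $\frac{1}{\delta}=\lceil\frac{\ln(2n)}{\ln(1+\epsilon/2)}\rceil$ the bound $2(2n)^{\delta}-1$ indeed collapses to at most $1+\epsilon$.

```python
import math

def delta_for(n, eps):
    """delta in (0,1] with 1/delta = ceil(ln(2n) / ln(1 + eps/2)), as in item 2."""
    return 1.0 / math.ceil(math.log(2 * n) / math.log(1 + eps / 2))

for n in (10, 1000, 10 ** 6):
    for eps in (0.1, 0.5, 1.0):
        delta = delta_for(n, eps)
        # delta <= ln(1 + eps/2) / ln(2n), hence (2n)^delta <= 1 + eps/2,
        # and the copy bound 2 * (2n)^delta - 1 is at most 1 + eps:
        assert 2 * (2 * n) ** delta - 1 <= 1 + eps + 1e-9
```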
###### Remark 1.
Following the proof of Lemma 3, note that for the clan embedding $(f,\chi)$
returned by Lemma 3 for input $k$, it holds that
$|f(X)|\leq\widetilde{\mu}_{\geq 1}(X)^{1+\frac{1}{k}}=(2n)^{1+\frac{1}{k}}$.
In particular, every $x\in X$ has at most $(2n)^{1+\frac{1}{k}}$ copies.
Similarly, for input $\epsilon$, $|f(X)|\leq\widetilde{\mu}_{\geq
1}(X)^{1+\delta}\leq(2n)^{1+\frac{\ln(1+\epsilon/2)}{\ln
2n}}=2n\cdot(1+\frac{\epsilon}{2})$. As for every $y\in X$,
$f(y)\neq\emptyset$, it follows that for every $x\in X$, its number of copies
is bounded by $|f(x)|=|f(X)|-|f(X\setminus\{x\})|\leq
2n\cdot(1+\frac{\epsilon}{2})-(n-1)=(1+\epsilon)n+1$.
Using the minimax theorem, as shown below, we show that there exists a
distribution $\mathcal{D}$ of clan embeddings with distortion and expected
clan size as specified by Theorem 1. Afterwards, in Section 3.1, using the
multiplicative weights update (MWU) method, we explicitly construct such
distributions efficiently, and with small support size.
###### Proof of Theorem 1 (existential argument).
Let $\mu$ be an arbitrary probability measure over the points, and
$\mathcal{D}$ be any distribution over clan embeddings $(f,\chi)$ of
$(X,d_{X})$ into trees with multiplicative distortion $O(\frac{\log
n}{\epsilon})$. Using Lemma 3 and the minimax theorem we have that
$\min_{\mathcal{D}}\max_{\mu}\mathbb{E}_{(f,\chi)\sim\mathcal{D},x\sim\mu}[|f(x)|]=\max_{\mu}\min_{(f,\chi)}\mathbb{E}_{x\sim\mu}[|f(x)|]\leq
1+\epsilon\leavevmode\nobreak\ .$
Let $\mathcal{D}$ be the distribution from above, denote by $\mu_{z}$ the
probability measure where $\mu_{z}(z)=1$ (and $\mu_{z}(y)=0$ for $y\neq z$).
Then, for every $z\in X$,
$\mathbb{E}_{(f,\chi)\sim\mathcal{D}}[|f(z)|]=\mathbb{E}_{(f,\chi)\sim\mathcal{D},x\sim\mu_{z}}[|f(x)|]\leq\max_{\mu}\mathbb{E}_{(f,\chi)\sim\mathcal{D},x\sim\mu}[|f(x)|]\leq
1+\epsilon\leavevmode\nobreak\ .$
The second claim of Theorem 1 can be proven using exactly the same argument.
∎
### 3.1 Constructive Proof of Theorem 1
In this section, we efficiently construct a uniform distribution $\mathcal{D}$
as stated in Theorem 1. Our construction relies on the multiplicative weights
update (MWU) method (for an excellent introduction to the MWU method and its
historical account, see the survey by Arora, Hazan and Kale [AHK12]) and
the notion of a $(\rho,\alpha,\beta)$-bounded Oracle.
###### Definition 7 ($(\rho,\alpha,\beta)$-bounded Oracle).
Given a probability measure $\mu$ over the metric points, a
$(\rho,\alpha,\beta)$-bounded Oracle returns a clan embedding $(f,\chi)$ with
multiplicative distortion $\beta$ such that:
1. 1.
$\mathbb{E}_{x\sim\mu}[|f(x)|]\leq\alpha$.
2. 2.
$\max_{x\in X}|f(x)|\leq\rho$.
In Lemma 4 below, we show that one can construct a uniform distribution
$\mathcal{D}$ by making a polynomial number of oracle calls.
###### Lemma 4.
Given a $(\rho,\alpha,\beta)$-bounded Oracle, and parameter
$\epsilon\in(0,\frac{1}{2})$ one can construct a uniform distribution
$\mathcal{D}$ over $O(\frac{\rho\alpha\log(n)}{\epsilon^{2}})$ clan embeddings
with multiplicative distortion $\beta$ such that:
$\mbox{For every }x\in X,\leavevmode\nobreak\ \leavevmode\nobreak\
\mathbb{E}_{(f,\chi)\sim\mathcal{D}}[|f(x)|]\leq\alpha+\epsilon$
Furthermore, the construction only makes
$O(\frac{\rho\alpha\log(n)}{\epsilon^{2}})$ queries to the
$(\rho,\alpha,\beta)$-bounded Oracle.
###### Proof.
Let $\mathcal{O}$ be a $(\rho,\alpha,\beta)$-bounded Oracle and
$\mathcal{O}(\mu)$ be the clan embedding returned by the oracle given a
probability measure $\mu$. We follow the standard setup of MWU: we have $n$
“experts” where the $i$-th expert is associated with the $i$-th point
$x_{i}\in X$. The construction happens in $T$ rounds. At the beginning of
round $t$, we have a weight vector
$\mathbf{w}^{t}=(w_{1}^{t},\ldots,w_{n}^{t})^{\intercal}$; at the first round,
$\mathbf{w}^{1}=(1,1,\ldots,1)^{\intercal}$.
The weight vector $\mathbf{w}^{t}$ induces a probability measure
$\mu^{t}=(\frac{w^{t}_{1}}{W^{t}},\ldots,\frac{w^{t}_{n}}{W^{t}})$ where
$W^{t}=\sum_{i=1}^{n}w^{t}_{i}$. We construct a clan embedding
$(f^{t},\chi^{t})=\mathcal{O}(\mu^{t})$ by making an oracle call to
$\mathcal{O}$ with $\mu^{t}$ as input. Let
$g^{t}_{i}=\frac{|f^{t}(x_{i})|}{\rho}$, and
$\mathbf{g}^{t}=(g^{t}_{1},\ldots,g^{t}_{n})^{\intercal}$ be the “penalty”
vector for the set of $n$ points (or experts). We then update:
$w^{t+1}_{i}=(1+\delta)^{g^{t}_{i}}w^{t}_{i}\qquad\forall x_{i}\in X,$ (1)
for some small parameter $\delta$ chosen later.
The penalty of each point is proportional to the number of copies it received
in the clan embedding constructed in the current step. Consequently, in the
next round, the measure of points with a large number of copies increases, so
the oracle is “motivated” to reduce the number of copies of these points in
the next clan embedding it outputs.
After $T$ rounds, we have a collection
$\mathcal{D}_{T}=\{(f^{1},\chi^{1}),\ldots,(f^{T},\chi^{T})\}$ of $T$ clan
embeddings. The distribution $\mathcal{D}$ is constructed by sampling an
embedding from $\mathcal{D}_{T}$ uniformly at random. Note that the distortion
bound follows directly from the fact that the distortion of every clan
embedding returned by the oracle is $\beta$. Our goal is to show that, by
setting $T=O(\frac{\rho\alpha\log(n)}{\epsilon^{2}})$, we have:
$\frac{1}{T}\cdot\sum_{t=1}^{T}|f^{t}(x_{i})|\leq\alpha+\epsilon\qquad\forall x_{i}\in X$ (2)
To that end, we first observe that:
$W^{t+1}\leavevmode\nobreak\ =\leavevmode\nobreak\
\sum_{i=1}^{n}w_{i}^{t+1}\leavevmode\nobreak\ =\leavevmode\nobreak\
\sum_{i=1}^{n}(1+\delta)^{g_{i}^{t}}w_{i}^{t}\leavevmode\nobreak\
\stackrel{{\scriptstyle(*)}}{{\leq}}\leavevmode\nobreak\
\sum_{i=1}^{n}(1+\delta g_{i}^{t})w_{i}^{t}\leavevmode\nobreak\
=\leavevmode\nobreak\ (1+\sum_{i=1}^{n}\delta
g_{i}^{t}\mu_{i}^{t})W^{t}\leavevmode\nobreak\ \leq\leavevmode\nobreak\
e^{\delta\langle\mathbf{g}^{t},\mu^{t}\rangle}W^{t}$
where inequality $(*)$ follows from the fact that $(1+x)^{r}\leq 1+rx$ for any
$x\geq 0$ and $r\in[0,1]$. Thus, we have:
$W^{T+1}\leavevmode\nobreak\ \leq\leavevmode\nobreak\
e^{\delta\sum_{t=1}^{T}\langle\mathbf{g}^{t},\mu^{t}\rangle}W^{1}\leavevmode\nobreak\
=\leavevmode\nobreak\
e^{\delta\sum_{t=1}^{T}\langle\mathbf{g}^{t},\mu^{t}\rangle}n$ (3)
Observe that $W^{T+1}\geq
w^{T+1}_{i}=(1+\delta)^{\sum_{t=1}^{T}g^{t}_{i}}w^{1}_{i}=(1+\delta)^{\sum_{t=1}^{T}g^{t}_{i}}$
and that:
$\sum_{t=1}^{T}\langle\mathbf{g}^{t},\mu^{t}\rangle\leavevmode\nobreak\
=\leavevmode\nobreak\ \sum_{t=1}^{T}\sum_{x\in
X}\frac{|f^{t}(x)|}{\rho}\cdot\mu^{t}(x)\leavevmode\nobreak\
=\leavevmode\nobreak\
\frac{1}{\rho}\cdot\sum_{t=1}^{T}\mathbb{E}_{x\sim\mu^{t}}[|f^{t}(x)|]\leavevmode\nobreak\
\leq\leavevmode\nobreak\ \frac{T\alpha}{\rho}\leavevmode\nobreak\ .$
Thus, by equation (3), it holds that:
$\displaystyle(1+\delta)^{\sum_{t=1}^{T}g^{t}_{i}}\leavevmode\nobreak\
\leq\leavevmode\nobreak\ e^{\frac{\delta T\alpha}{\rho}}n\leavevmode\nobreak\
.$ (4)
Taking the natural logarithm of both sides, we obtain that $\frac{\delta
T\alpha}{\rho}+\ln
n\geq\sum_{t=1}^{T}g_{i}^{t}\cdot\ln(1+\delta)=\frac{\ln(1+\delta)}{\rho}\cdot\sum_{t=1}^{T}|f^{t}(x_{i})|$,
and thus
$\frac{1}{T}\cdot\sum_{t=1}^{T}|f^{t}(x_{i})|\leavevmode\nobreak\
\leq\leavevmode\nobreak\
\frac{\rho}{T\cdot\ln(1+\delta)}\cdot\left(\frac{\delta T\alpha}{\rho}+\ln
n\right)\leavevmode\nobreak\ =\leavevmode\nobreak\
\frac{\delta\alpha}{\ln(1+\delta)}+\frac{\rho\cdot\ln
n}{T\cdot\ln(1+\delta)}\leavevmode\nobreak\ \leq\leavevmode\nobreak\
\alpha(1+\frac{\delta}{2})+\frac{2\rho\cdot\ln
n}{T\cdot\delta}\leavevmode\nobreak\ ,$
where the last inequality follows as
$\frac{\delta}{\ln(1+\delta)}\leq(1+\frac{\delta}{2})$ and
$\ln(1+\delta)\geq\frac{\delta}{2}$ for $\delta\in(0,\frac{1}{2})$. By
choosing $T=\frac{4\rho\alpha\ln n}{\epsilon^{2}}=O(\frac{\rho\alpha\log
n}{\epsilon^{2}})$ and $\delta=\sqrt{\frac{4\rho\ln
n}{T\alpha}}=\sqrt{\frac{\epsilon^{2}}{\alpha^{2}}}=\frac{\epsilon}{\alpha}<\frac{1}{2}$,
we obtain that
$\frac{1}{T}\cdot\sum_{t=1}^{T}|f^{t}(x_{i})|\leavevmode\nobreak\
\leq\leavevmode\nobreak\
\alpha+\frac{\delta\cdot\alpha}{2}+\frac{\epsilon^{2}}{2\alpha\cdot\delta}\leavevmode\nobreak\
=\leavevmode\nobreak\ \alpha+\epsilon\leavevmode\nobreak\ ,$
satisfying equation (2), which completes our proof. ∎
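The MWU loop of this proof can be sketched as follows. This is a sketch under assumptions, not the paper's implementation: the oracle below is a mock stand-in that simply gives one extra copy to the currently lightest point (which makes it $(2,1+1/n,\cdot)$-bounded, since $\mathbb{E}_{\mu}[\text{copies}]=1+\min_i\mu_i\leq 1+1/n$), rather than the actual clan-embedding oracle, and all names are ours.

```python
import math

def mwu_uniform_distribution(n, oracle, rho, alpha, eps):
    """Lemma 4 sketch: query the oracle T times on the measure induced by the
    current weights, penalising points that received many copies (rule (1))."""
    T = math.ceil(4 * rho * alpha * math.log(n) / eps ** 2)
    delta = eps / alpha                       # requires delta in (0, 1/2)
    w = [1.0] * n
    collection = []                           # D is uniform over this list
    for _ in range(T):
        W = sum(w)
        mu = [wi / W for wi in w]
        copies = oracle(mu)                   # copies[i] plays |f^t(x_i)|
        collection.append(copies)
        for i in range(n):
            w[i] *= (1 + delta) ** (copies[i] / rho)   # the update rule (1)
    return collection

# Mock (rho, alpha, beta)-bounded oracle: one extra copy for the lightest
# point, so E_mu[copies] = 1 + min(mu) <= 1 + 1/n = alpha, max copies = 2 = rho.
n, rho, eps = 5, 2, 0.3
alpha = 1 + 1 / n
def mock_oracle(mu):
    j = min(range(n), key=lambda i: mu[i])
    return [2 if i == j else 1 for i in range(n)]

coll = mwu_uniform_distribution(n, mock_oracle, rho, alpha, eps)
avg = [sum(c[i] for c in coll) / len(coll) for i in range(n)]
assert max(avg) <= alpha + eps + 1e-9   # guarantee (2) of Lemma 4
```

The guarantee in the final assertion holds for any bounded oracle by the proof above, regardless of how the oracle's answers interact with the weight dynamics.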
Observe that Lemma 3, combined with Remark 1, provides an
$(O(n),1+\frac{\epsilon}{2},O(\frac{\log n}{\epsilon}))$-bounded Oracle (when
we apply Lemma 3 with parameter $\frac{\epsilon}{2}$). Using Lemma 4 with
parameter $\frac{\epsilon}{2}$ provides us with an efficiently computable
distribution over clan embeddings with support size $O(\frac{n\log
n}{\epsilon^{2}})$, distortion $O(\frac{\log n}{\epsilon})$, and such that for
every $x\in X$, $\mathbb{E}_{(f,\chi)\sim\mathcal{D}}[|f(x)|]\leq 1+\epsilon$.
Similarly, by applying Lemma 3 with parameter $k$, we get an
$(O(n^{1+\frac{1}{k}}),O(n^{\frac{1}{k}}),16k)$-bounded Oracle. Thus Lemma 4
will produce an efficiently computable distribution over clan embeddings with
support size $O(n^{1+\frac{2}{k}}\log n)$, distortion $16k$, and such that for
every $x\in X$,
$\mathbb{E}_{(f,\chi)\sim\mathcal{D}}[|f(x)|]=O(n^{\frac{1}{k}})$. Theorem 1
now follows.
## 4 Clan Embedding into a Spanning Tree
This section is devoted to proving Theorem 3. We restate it for convenience.
See 3
In this section, we construct spanning clan embeddings into trees. We will use
the framework of petal decomposition proposed by Abraham and Neiman [AN19],
who originally used it to construct a stochastic embedding of a graph into
spanning trees with a bounded expected distortion. The framework was also
previously used by Abraham et al. [ACE+20] to construct Ramsey spanning trees.
The petal decomposition is an iterative method to build a spanning tree of a
given graph. At each level, the current graph is partitioned into smaller
diameter pieces (called _petals_), and a single central piece (called
_stigma_), which are then connected by edges in a tree structure. Each of the
petals is a ball in a certain cone metric. When creating a petal from a
cluster of diameter $\Delta$, one has the freedom to choose a radius from an
interval of length $\Omega(\Delta)$. The crucial property is that, regardless
of the radii choices during the execution of the algorithm, the framework
guarantees that the diameter of the resulting tree will be $O(\Delta)$.
However, as we are constructing a clan embedding rather than a classical one,
some vertices will have multiple copies. As a result, some mild changes will
be introduced to the construction of [AN19]. Once we establish the petal
decomposition framework for clan embeddings, the proof of Theorem 3 will
follow lines similar to the proof of Theorem 1. The additional $\log\log n$ factor is a
phenomenon also appearing in previous uses of the petal decomposition
framework [AN19, ACE+20]. The reason is that, while similar embeddings into
ultrametrics create clusters by growing balls around smartly chosen centers
(e.g. [Bar04, Bar11] and Theorem 1), in the petal decomposition framework, we
lack the freedom to choose the center of the petal.
##### Organization:
In Section 4.1, we describe the petal decomposition framework in general. In
Section 4.2, we describe our specific usage of it, i.e. the algorithm choosing
the radii (with some leftovers in Section 4.4). Then, in Section 4.3, we prove
Lemma 5, which appears below. Lemma 5 is a “distributional” version of Theorem
3 and plays a role parallel to that of Lemma 2 in Section 3. Finally, in Section 4.5,
we will deduce Theorem 3 using Lemma 5.
###### Lemma 5.
Given an $n$-vertex weighted graph $G=(V,E,w)$, $(\geq 1)$-measure
$\mu:V\rightarrow\mathbb{R}_{\geq 1}$, and integer parameter $k\geq 1$, there
is a spanning clan embedding $(f,\chi)$ into a tree with multiplicative
distortion $O(k\log\log\mu(V))$ such that
$\mathbb{E}_{v\sim\mu}[|f(v)|]\leq\mu(V)^{1+\frac{1}{k}}$.
### 4.1 Petal Decomposition Framework
We begin with some notation specific to this section. For a subset
$S\subseteq G$ and a center vertex $x_{0}\in S$, the radius of $S$ w.r.t
$x_{0}$, $\Delta_{x_{0}}(S)$, is the minimal $\Delta$ such that
$B_{G[S]}(x_{0},\Delta)=S$. (If for every $\Delta$,
$B_{G[S]}(x_{0},\Delta)\neq S$ — this can happen iff $G[S]$ is not connected —
we say that $\Delta_{x_{0}}(S)=\infty$.) When the center $x_{0}$ is clear from
the context or is not relevant, we will omit it. Given two vertices $u,v$,
$P_{u,v}(X)$ denotes the shortest path between them in $G[X]$, the graph
induced by $X$ (we will assume that every pair has a unique shortest path;
this can be arranged by a tiny perturbation of the edge weights).
Given a graph $G=(V,E,w)$ and a cluster $A\subseteq V$ (with center $x_{0}$),
we say that a vertex $y\in A$ is $\rho$-padded by the cluster
$A^{\prime}\subseteq A$ (w.r.t. $A$) if $B_{G}(y,\Delta_{x_{0}}(A)/\rho)\subseteq
A^{\prime}$. See an illustration on the right.
Next, we provide a concise description of the petal decomposition algorithm,
focusing on the main properties we will use. For proofs and further details,
we refer readers to [AN19]. The presentation here differs slightly from [AN19]
as our goal is to construct a spanning clan embedding into a tree rather than
a classic one. However, the changes are straightforward, and no new ideas are
required.
The hierarchical-petal-decomposition (see Algorithm 1) is a recursive
algorithm. The input is $G[X]$ (a graph $G=(V,E,w)$ induced over a set of
vertices $X\subseteq V$), a center $x_{0}\in X$, a target $t\in X$, and the
radius $\Delta=\Delta_{x_{0}}(X)$. (Rather than inferring
$\Delta=\Delta_{x_{0}}(X)$ from $G[X]$ and $x_{0}$ as in [AN19], we will
follow [ACE+20] and think of $\Delta$ as part of the input. We shall allow any
$\Delta\geq\Delta_{x_{0}}(X)$. We stress that, in fact, in the algorithm we
always use $\Delta_{x_{0}}(X)$, and consider this degree of freedom only in
the analysis.) The algorithm invokes the petal-decomposition procedure to
create clusters $\widetilde{X}_{0},\widetilde{X}_{1},\dots,\widetilde{X}_{s}$
of $X$ (for some integer $s$), and also provides a set of edges
$\{(x_{1},y_{1}),\dots,(x_{s},y_{s})\}$ and targets
$t_{0},t_{1},\dots,t_{s}$. The hierarchical-petal-decomposition algorithm now
recurses on each
$(G[\widetilde{X}_{j}],x_{j},t_{j},\Delta_{x_{j}}(\widetilde{X}_{j}))$ for
$0\leq j\leq s$, to get trees $\{T_{j}\}_{0\leq j\leq s}$ (and clan
embeddings $\{(f_{j},\chi_{j})\}_{0\leq j\leq s}$), which are then connected
by the edges $\{(x_{j},y_{j})\}_{1\leq j\leq s}$ to form a tree $T$ (the
recursion ends when $X_{j}$ is a singleton). The one-to-many embedding $f$
is simply defined as the union of the one-to-many embeddings $\{f_{j}\}_{0\leq
j\leq s}$. Note, however, that the clusters
$\widetilde{X}_{0},\widetilde{X}_{1},\dots,\widetilde{X}_{s}$ are not
disjoint. Therefore, in addition, for each cluster $\widetilde{X}_{j}$ the
petal-decomposition procedure will also provide us with sub-clusters
$\underline{X}_{j}\subseteq X_{j}\subseteq\widetilde{X}_{j}$ that will be used
to determine the chiefs (i.e. $\chi$ part) of the clan embedding.
1 if _$|X|=1$_ then
2  return $G[X]$
3 Let $\left(\left\{\underline{X}_{j},X_{j},\widetilde{X}_{j},x_{j},t_{j},\Delta_{j}\right\}_{j=0}^{s},\left\{(y_{j},x_{j})\right\}_{j=1}^{s}\right)=\texttt{petal-decomposition}(G[X],x_{0},t,\Delta)$
4 for _each $j\in[0,\dots,s]$_ do
5  $(T_{j},f_{j},\chi_{j})=\texttt{hierarchical-petal-decomposition}(G[\widetilde{X}_{j}],x_{j},t_{j},\Delta_{j})$
7 for _each $z\in X$_ do
8  Set $f(z)=\cup_{j=0}^{s}f_{j}(z)$
9  if _$\exists j>0$ such that $z\in X_{j}$_ then
10  Let $j>0$ be the minimal index such that $z\in X_{j}$. Set $\chi(z)=\chi_{j}(z)$
11 else
12  Set $\chi(z)=\chi_{0}(z)$
14 Let $T$ be the tree formed by connecting $T_{0},\dots,T_{s}$ using the edges $\{\chi(y_{1}),\chi(x_{1})\},\dots,\{\chi(y_{s}),\chi(x_{s})\}$
15 return $(T,f,\chi)$
Algorithm 1 $(T,f,\chi)=\texttt{hierarchical-petal-decomposition}(G[X],x_{0},t,\Delta)$
Next, we describe the petal-decomposition procedure (see Algorithm 2).
Initially it sets $Y_{0}=X$, and for $j=1,2,\dots,s$, it carves out the petal
$\widetilde{X}_{j}$ from the graph induced on $Y_{j-1}$, and sets
$Y_{j}=Y_{j-1}\backslash\underline{X}_{j}$, where $\underline{X}_{j}$ is a
sub-petal of $\widetilde{X}_{j}$, consisting of all the vertices which are
padded by $\widetilde{X}_{j}$. The idea is that $Y_{j}$ is defined w.r.t. to a
smaller set than the petal itself; thus, by duplicating some vertices, we will
be able to guarantee that each vertex is padded somewhere. In order to control
the radius increase, the first petal might be carved using different
parameters (see [AN19] for details and an explanation of this subtlety. One
may notice that in Algorithm 2 of the petal-decomposition procedure, the
weight of some edges is changed by a factor of 2. This can happen at most once
for each copy of every edge throughout the hierarchical-petal-decomposition
execution, and thus it may affect the padding parameter by a factor of at most 2.
This re-weighting is ignored here for simplicity; we again refer readers to
[AN19] for details and further explanation.) The definition of a petal
guarantees that the radius $\Delta_{x_{0}}(Y_{j})$ is non-increasing; when
at step $s$ it becomes at most $3\Delta/4$, the algorithm defines $X_{0}=Y_{s}$
and the petal-decomposition routine ends. When carving the petal
$\widetilde{X}_{j}\subseteq Y_{j-1}$, the algorithm chooses an arbitrary
target $t_{j}\in Y_{j-1}$ (at distance at least $3\Delta/4$ from $x_{0}$) and
a range $[\mathrm{lo},\mathrm{hi}]$ of size
$\mathrm{hi}-\mathrm{lo}\in\{\Delta/8,\Delta/4\}$, which are passed to the
sub-routine create-petal.
1 Let $Y_{0}=X$
2 Set $j=1$
4 if _$d_{X}(x_{0},t)\geq\Delta/2$_ then
5  Let $(\underline{X}_{1},X_{1},\widetilde{X}_{1})=\texttt{create-petal}(G[Y_{0}],[d_{X}(x_{0},t)-\Delta/2,d_{X}(x_{0},t)-\Delta/4],x_{0},t)$
6  $Y_{1}=Y_{0}\backslash\underline{X}_{1}$
7  Let $\{x_{1},y_{1}\}$ be the unique edge on the shortest path $P_{x_{0}t}$ from $x_{0}$ to $t$ in $Y_{0}$, where $x_{1}\in X_{1}$ and $y_{1}\in Y_{1}$
8  Set $t_{0}=y_{1}$, $t_{1}=t$; $j=2$
9 else
10  Set $t_{0}=t$
12 while _$Y_{j-1}\backslash B_{X}(x_{0},\frac{3}{4}\Delta)\neq\emptyset$_ do
13  Let $t_{j}\in Y_{j-1}$ be an arbitrary vertex satisfying $d_{X}(x_{0},t_{j})>\frac{3}{4}\Delta$
14  Let $(\underline{X}_{j},X_{j},\widetilde{X}_{j})=\texttt{create-petal}(G[Y_{j-1}],[0,\Delta/8],x_{0},t_{j})$
15  $Y_{j}=Y_{j-1}\backslash\underline{X}_{j}$
16  Let $\{x_{j},y_{j}\}$ be the unique edge on the shortest path $P_{x_{0}t_{j}}$ from $x_{0}$ to $t_{j}$ in $Y_{j-1}$, where $x_{j}\in\widetilde{X}_{j}$ and $y_{j}\in Y_{j}$
17  Consider $G_{j}=G[\widetilde{X}_{j}]$, the graph induced by $\widetilde{X}_{j}$. For each edge $e\in P_{x_{j}t_{j}}(\widetilde{X}_{j})$, set its weight to be $w(e)/2$
18  Let $j=j+1$
21 Let $s=j-1$
22 Let $\underline{X}_{0}=X_{0}=\widetilde{X}_{0}=Y_{s}$
23 return $\left(\left\{\underline{X}_{j},X_{j},\widetilde{X}_{j},x_{j},t_{j},\Delta_{x_{j}}(\widetilde{X}_{j})\right\}_{j=0}^{s},\left\{(y_{j},x_{j})\right\}_{j=1}^{s}\right)$
Algorithm 2 $\left(\left\{\underline{X}_{j},X_{j},\widetilde{X}_{j},x_{j},t_{j},\Delta_{j}\right\}_{j=0}^{s},\left\{(y_{j},x_{j})\right\}_{j=1}^{s}\right)=\texttt{petal-decomposition}(G[X],x_{0},t,\Delta)$
Both hierarchical-petal-decomposition and petal-decomposition are essentially
the algorithms that appeared in [AN19]. The only technical difference is that
in [AN19], $\widetilde{X}_{j}=\underline{X}_{j}$ for every $j$ (as they
actually created a spanning tree, while we are constructing a clan embedding). The more
important difference lies in the create-petal procedure, depicted in Algorithm
3. It carefully selects a radius $r\in[\mathrm{lo},\mathrm{hi}]$, which
determines the petal $\widetilde{X}_{j}$ together with a connecting edge
$(x_{j},y_{j})\in E$, where $x_{j}\in\widetilde{X}_{j}$ is the center of
$\widetilde{X}_{j}$ and $y_{j}\in Y_{j}$. It is important to note that the
target $t_{0}\in X_{0}$ of the central cluster $X_{0}$ is determined during
the creation of the first petal $X_{1}$. The petals are created using an
alternative metric on the graph, known as the cone-metric:
###### Definition 8 (Cone-metric).
Given a graph $G=(V,E)$, a subset $X\subseteq V$ and points $x,y\in X$, define
the cone-metric $\rho=\rho(X,x,y):X^{2}\to\mathbb{R}^{+}$ as
$\rho(u,v)=\left|\left(d_{X}(x,u)-d_{X}(y,u)\right)-\left(d_{X}(x,v)-d_{X}(y,v)\right)\right|$.
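Definition 8 can be exercised on a small example: for fixed $x,y$, we have $\rho(u,v)=|g(u)-g(v)|$ with $g(u)=d_{X}(x,u)-d_{X}(y,u)$, which makes the (pseudo-)metric axioms immediate. A minimal sketch (the four-point metric below is made up; `u` and `v` are placed symmetrically with respect to `A` and `B`):

```python
def cone_rho(d, x, y):
    """The cone-metric rho(X, x, y) of Definition 8, for a metric d given
    as a dict of dicts."""
    g = lambda u: d[x][u] - d[y][u]
    return lambda u, v: abs(g(u) - g(v))

# A 4-point metric where u and v are symmetric w.r.t. A and B.
pts = ["A", "B", "u", "v"]
d = {p: {q: 0.0 for q in pts} for p in pts}
def setd(a, b, val):
    d[a][b] = d[b][a] = val
setd("A", "B", 2.0); setd("u", "v", 2.0)
for p in ("u", "v"):
    setd("A", p, 1.0); setd("B", p, 1.0)

rho = cone_rho(d, "A", "B")
assert all(rho(p, p) == 0 for p in pts)                       # reflexivity
assert all(rho(p, q) == rho(q, p) for p in pts for q in pts)  # symmetry
assert all(rho(p, r) <= rho(p, q) + rho(q, r) + 1e-9
           for p in pts for q in pts for r in pts)            # triangle ineq.
assert rho("u", "v") == 0  # distinct points at cone-distance 0: pseudo-metric
```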
The cone-metric is in fact a pseudo-metric, i.e., distances between distinct
points are allowed to be 0. The ball $B_{(X,\rho)}(y,r)$ in the cone-metric
$\rho=\rho(X,x,y)$, contains all vertices $u$ whose shortest path to $x$ is
increased (additively) by at most $r$ if forced to go through $y$. In the
create-petal algorithm, while working in a subgraph $G[Y]$ with two specified
vertices: a center $x_{0}$ and a target $t$, we define
$W_{r}\left(Y,x_{0},t\right)=\bigcup_{p\in P_{x_{0}t}:\ d_{Y}(p,t)\leq
r}B_{(Y,\rho(Y,x_{0},p))}(p,\frac{r-d_{Y}(p,t)}{2})$, which is a union of balls
in the cone-metric, where every vertex $p$ on the shortest path from $x_{0}$ to
$t$ at distance at most $r$ from $t$ is the center of a ball of radius
$\frac{r-d_{Y}(p,t)}{2}$. See Figure 3 for an illustration. The parameters
$(Y,x_{0},t)$ are usually clear from the context and hence omitted. The
following fact from [AN19] demonstrates that petals are similar to balls.
Figure 3: On the left, we illustrate the ball $B_{(X,\rho)}(t,r)$ in the cone-
metric $\rho=\rho(X,x,t)$ containing all vertices $u$ whose shortest path to
$x$ is increased (additively) by at most $r$ if forced to go through $t$. The
red vertex $z$ joins $B_{(X,\rho)}(t,r)$ as $d_{X}(z,t)+d_{X}(t,x)\leq
d_{X}(z,x)+r$. The blue point $q$ on the path $P_{t,x}$ at distance
$\frac{r}{2}$ from $t$ is the last point on $P_{t,x}$ to join
$B_{(X,\rho)}(t,r)$.
On the right, we illustrate the petal $W_{r}\left(X,x,t\right)=\bigcup_{p\in
P_{xt}:\ d_{X}(p,t)\leq r}B_{(X,\rho(X,x,p))}(p,\frac{r-d_{X}(p,t)}{2})$. In
the illustration, the point $p_{i}$ is at distance $\frac{i}{4}r$ from $t$,
and is the center of a ball of radius $\frac{4-i}{8}r$ in the respective cone
metric.
###### Fact 1 ([AN19]).
For every $y\in W_{r}\left(Y,x_{0},t\right)$ and $l\geq 0$,
$B_{G[Y]}(y,l)\subseteq W_{r+4l}\left(Y,x_{0},t\right)$.
Note that Fact 1 implies that $W_{r}$ is monotone in $r$, i.e., for $r\leq
r^{\prime}$, it holds that $W_{r}\subseteq W_{r^{\prime}}$.
For each $j$, the clusters $\underline{X}_{j},X_{j},\widetilde{X}_{j}$
returned by the create-petal procedure executed on
$(G[Y_{j-1}],[\mathrm{lo},\mathrm{hi}],x_{0},t_{j})$ will all be petals of the
form $W_{r}(Y_{j-1},x_{0},t_{j})$ for $r\in[\mathrm{lo},\mathrm{hi}]$.
Specifically, we will choose some
$r_{1},r_{2},r_{3}\in[\mathrm{lo},\mathrm{hi}]$ such that
$\underline{X}_{j}=W_{r_{1}}(Y_{j-1},x_{0},t_{j})$,
$X_{j}=W_{r_{2}}(Y_{j-1},x_{0},t_{j})$ and
$\widetilde{X}_{j}=W_{r_{3}}(Y_{j-1},x_{0},t_{j})$ while
$r_{2}-r_{1}=r_{3}-r_{2}=\Theta(\frac{\mathrm{hi}-\mathrm{lo}}{k\log\log\mu(Y_{j-1})})$.
The following facts were proven in [AN19] regarding the petal-decomposition
procedure. They hold in our version of the algorithm using exactly the same
proofs.
###### Fact 2 ([AN19]).
Consider the petal-decomposition procedure executed on $X$ with center
$x_{0}$, target $t$ and radius $\Delta$. It creates clusters
$(\underline{X_{0}},X_{0},\widetilde{X_{0}}),(\underline{X_{1}},X_{1},\widetilde{X_{1}}),\dots,(\underline{X_{s}},X_{s},\widetilde{X_{s}})$.
During the process, we had intermediate clusters $Y_{0}=X$ and
$Y_{j}=Y_{j-1}\backslash\underline{X_{j}}$. For $j\geq 1$, the cluster
$\widetilde{X}_{j}$ had center $x_{j}$ connected to $y_{j}\in Y_{j}$ and
target $t_{j}\in\widetilde{X}_{j}$. Throughout the execution, the following
hold:
1.
For every $j$ and $z\in Y_{j}$, $P_{z,x_{0}}(X)\subseteq G[Y_{j}]$. In
particular, the radius of the $Y_{j}$’s is monotonically non-increasing:
$\Delta_{x_{0}}(Y_{0})\geq\Delta_{x_{0}}(Y_{1})\geq\dots\geq\Delta_{x_{0}}(Y_{s})$.
Moreover, $X_{0}$ is a connected cluster with radius at most $3\Delta/4$.
2.
For each $j\geq 0$, $\widetilde{X}_{j}$ is a connected cluster with center
$x_{j}$, target $t_{j}$ such that $\Delta_{x_{j}}(\widetilde{X}_{j})\leq 3\Delta/4$. In
particular, the entire shortest path from $x_{j}$ to $t_{j}$ (in $Y_{j-1}$) is
in $\widetilde{X}_{j}$.
3.
If a special first cluster is created, then $y_{1}\in X_{0}$ and
$P_{x_{0},t}(X)\subseteq G[X_{0}\cup X_{1}]$. If no special first cluster is
created, then $P_{x_{0},t}(X)\subseteq G[X_{0}]$.
Next, we cite the relevant properties regarding the hierarchical-petal-
decomposition procedure. The proofs follow almost the same lines as [AN19],
with slight and natural adaptations due to the embedding being a clan
embedding with duplicate copies for some vertices. In any case, no new ideas
are required, and we omit the proofs.
###### Fact 3 ([AN19]).
Consider the hierarchical-petal-decomposition procedure executed on $X$ with
center $x_{0}$, target $t$ and radius $\Delta$. The following properties hold:
1.
The algorithm returns a spanning clan embedding into a tree $T$.
2.
The tree $T$ has radius at most $4\Delta_{x_{0}}(X)$. That is
$\Delta_{x_{0}}(T)\leq 4\Delta_{x_{0}}(X)\leavevmode\nobreak\ .$
Note that it follows from Fact 3 that the distance between every pair of vertices
in the tree $T$ is at most $8\Delta_{x_{0}}(X)$.
We will need the following observation. Roughly speaking, it says that when
the petal-decomposition algorithm is carving out
$(\underline{X}_{j+1},X_{j+1},\widetilde{X}_{j+1})$, it is oblivious to the
past petals, edges and targets – it only cares about $Y_{j}$ and the original
diameter $\Delta$.
###### Observation 1.
Assume that petal-decomposition on input
$(G\left[X\right],x_{0},t,\Delta_{x_{0}}(X))$ returns as output
$\left(\left\\{\underline{X}_{j},X_{j},\widetilde{X}_{j},x_{j},t_{j},\Delta_{j}\right\\}_{j\in\\{0,\dots,s\\}},\left\\{(y_{j},x_{j})\right\\}_{j\in\\{1,\dots,s\\}}\right)$.
Then running petal-decomposition on input
$(G\left[Y_{l}\right],x_{0},t_{0},\Delta_{x_{0}}(X))$ will output
$\left(\left\\{\underline{X}_{j},X_{j},\widetilde{X}_{j},x_{j},t_{j},\Delta_{j}\right\\}_{j\in\\{0,l+1,\dots,s\\}},\left\\{(y_{j},x_{j})\right\\}_{j\in\\{l+1,\dots,s\\}}\right)$.
### 4.2 Choosing a Radius
Fix some $1\leq j\leq s$, and consider carving the petal
$(\underline{X}_{j},X_{j},\widetilde{X}_{j})$ from the graph induced on
$Y=Y_{j-1}$. Our choice of radius bears similarities to the one in [ACE+20].
The properties of the petal decomposition described above (in Section 4.1),
together with Facts 2 and 3, hold for any radius picked from a given interval. We
will now describe the method to select a radius that suits our needs. The
petal-decomposition algorithm provides an interval $[\mathrm{lo},\mathrm{hi}]$
of size at least $\Delta/8$, and for each $r\in[\mathrm{lo},\mathrm{hi}]$ let
$W_{r}(Y,x_{0},t)\subseteq Y$ denote the petal of radius $r$ (usually we will
omit $(Y,x_{0},t)$).
Our algorithm will return three clusters: $\underline{X}_{j}\subseteq
X_{j}\subseteq\widetilde{X}_{j}$ which will correspond to three petals
$W_{r-\frac{R}{4Lk}}\subseteq W_{r}\subseteq W_{r+\frac{R}{4Lk}}$
respectively, where
$\frac{R}{4Lk}=\Theta(\frac{\mathrm{hi}-\mathrm{lo}}{k\log\log\mu(Y)})=\Theta(\frac{\Delta}{k\log\log\mu(Y)})$.
The algorithm will be executed recursively on $\widetilde{X}_{j}$, while
$\underline{X}_{j}$ will be removed from $Y$. The cluster $X_{j}$ will only be
used in order to define $\chi$ (during the hierarchical-petal-decomposition
procedure). Fact 1 implies that the vertices in $X_{j}$ are padded by
$\widetilde{X}_{j}$, while the vertices in $Y\backslash X_{j}$ are padded by
$Y\backslash\underline{X}_{j}$. If a pair of vertices $u,v$ do not belong to
the same cluster (e.g. $u\in\underline{X}_{j}$ and $v\notin\widetilde{X}_{j}$)
then $d_{Y}(u,v)=\Omega(\frac{\Delta}{k\log\log\mu(Y)})$. By Fact 3, the diameter
of the final tree will be $O(\Delta)$. In particular, the distance in the
embedded tree between every copy of $u$ and $v$ will be bounded by
$O(\Delta)=O(k\log\log\mu(Y))d_{Y}(u,v)$. Note that only the vertices in
$\widetilde{X}_{j}\backslash\underline{X}_{j}$ are duplicated. Thus, our goal
is to choose a radius $r$ such that the measure of the duplicated vertices
would be small.
Our algorithm to select a radius is based on region-growing techniques as in
[ACE+20], and is more involved than the region growing in Theorem 1. In the
petal decomposition framework, we cannot pick as the center a vertex
maximizing the “small ball” (as the target $t_{j}$ must be at distance
$\frac{3}{4}\Delta$ from $x_{0}$). We first choose an appropriate range that
mimics that choice (see the choice of $[a,b]$ in Algorithm 3); this is the
reason for the
extra factor of $\log\log\mu(Y)$. The basic idea in region growing is to
charge the measure of the duplicated vertices (i.e.
$\widetilde{X}_{j}\backslash\underline{X}_{j}$), to all the vertices in the
cluster $\widetilde{X}_{j}$. In order to avoid a range in
$[\mathrm{lo},\mathrm{hi}]$ that contains more than half of the measure, we
will cut either in $[\mathrm{lo},\mathrm{mid}]$ or in
$[\mathrm{mid},\mathrm{hi}]$ where $\mathrm{mid}=(\mathrm{hi}+\mathrm{lo})/2$.
Specifically, in the case where $W_{\mathrm{mid}}$ has measure at least
$\mu(Y)/2$, we “cut backward” in the regime $[\mathrm{mid},\mathrm{hi}]$, and
charge the measure of duplicated vertices to the remaining graph $Y_{j}$,
rather than to $\widetilde{X}_{j}$.
1: $L=\lceil 1+\log\log\mu(Y)\rceil$
2: $R=\mathrm{hi}-\mathrm{lo}$; $\mathrm{mid}=(\mathrm{lo}+\mathrm{hi})/2=\mathrm{lo}+R/2$
3: For every $r$, denote $W_{r}=W_{r}(Y,x_{0},t)$, $w_{r}=\mu(W_{r})$
4: if $w_{\mathrm{mid}}\leq\frac{\mu(Y)}{2}$ then
5: Choose $\left[a,b\right]\subseteq\left[\mathrm{lo},\mathrm{mid}\right]$ such that $b-a=\frac{R}{2L}$ and $w_{a}\geq w_{b}^{2}/\mu(Y)$ // see Lemma 8
6: Pick $r\in\left[a+\frac{b-a}{2k},b-\frac{b-a}{2k}\right]$ such that $w_{r+\frac{b-a}{2k}}\leq w_{r-\frac{b-a}{2k}}\cdot\left(\frac{w_{b}}{w_{a}}\right)^{\frac{1}{k}}$ // see Lemma 9
7: else
8: For every $r\in[\mathrm{lo},\mathrm{hi}]$, denote $q_{r}=\mu(Y\backslash W_{r})$
9: Choose $\left[b,a\right]\subseteq\left[\mathrm{mid},\mathrm{hi}\right]$ such that $a-b=\frac{R}{2L}$ and $q_{a}\geq q_{b}^{2}/\mu(Y)$ // see Lemma 10
10: Pick $r\in\left[b+\frac{a-b}{2k},a-\frac{a-b}{2k}\right]$ such that $q_{r-\frac{a-b}{2k}}\leq q_{r+\frac{a-b}{2k}}\cdot\left(\frac{q_{b}}{q_{a}}\right)^{1/k}$ // see Lemma 11
11: return $(W_{r-\frac{R}{4Lk}},W_{r},W_{r+\frac{R}{4Lk}})$
Algorithm 3 $(\underline{X},X,\widetilde{X})=\texttt{create-petal}(G[Y],\mu,[\mathrm{lo},\mathrm{hi}],x_{0},t)$
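To make the two-phase selection concrete, here is a small Python sketch of the radius selection in Algorithm 3. It is purely illustrative: the function names and the grid discretization are our own, the petal measure is supplied as a black-box monotone function $w(r)$, and a fine grid stands in for the continuum guaranteed by Lemmas 8–11.

```python
import math

def frange(start, stop, step):
    """Yield start, start+step, ... up to (and including) stop."""
    r = start
    while r <= stop + 1e-9:
        yield r
        r += step

def create_petal_radii(w, lo, hi, mu_Y, k, grid=1e-3):
    """Illustrative sketch of the radius selection in Algorithm 3.

    w(r): measure of the petal W_r, assumed monotone non-decreasing in r.
    Returns the radii (r - s, r, r + s) of the three nested petals, where
    s = R/(4Lk), R = hi - lo and L = ceil(1 + loglog mu(Y))."""
    L = math.ceil(1 + math.log2(max(1.0, math.log2(mu_Y))))
    R = hi - lo
    mid = lo + R / 2
    width = R / (2 * L)            # length |b - a| of the chosen sub-interval
    s = width / (2 * k)            # = R / (4Lk)
    q = lambda r: mu_Y - w(r)      # measure of the complement Y \ W_r

    if w(mid) <= mu_Y / 2:         # Case 1: "cut forward"
        # Lemma 8: some [a, b] in [lo, mid] has w(a) >= w(b)^2 / mu(Y).
        a = next(x for x in frange(lo, mid - width, grid)
                 if w(x) >= w(x + width) ** 2 / mu_Y)
        b = a + width
        # Lemma 9: some r has w(r+s) <= w(r-s) * (w(b)/w(a))^(1/k).
        ratio = (w(b) / w(a)) ** (1 / k)
        r = next(x for x in frange(a + s, b - s, grid)
                 if w(x + s) <= w(x - s) * ratio)
    else:                          # Case 2: "cut backward"
        # Lemma 10: some [b, a] in [mid, hi] has q(a) >= q(b)^2 / mu(Y).
        b = next(x for x in frange(mid, hi - width, grid)
                 if q(x + width) >= q(x) ** 2 / mu_Y)
        a = b + width
        # Lemma 11: some r has q(r-s) <= q(r+s) * (q(b)/q(a))^(1/k).
        ratio = (q(b) / q(a)) ** (1 / k)
        r = next(x for x in frange(b + s, a - s, grid)
                 if q(x - s) <= q(x + s) * ratio)
    return r - s, r, r + s
```

For example, with $\mu(Y)=256$, $[\mathrm{lo},\mathrm{hi}]=[0,8]$, $k=2$ and the artificial measure $w(r)=\min(2^{r},256)$, Case 1 applies and the three returned radii differ by exactly $s=R/(4Lk)$.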
### 4.3 Proof of Lemma 5: the distributional case
Let $u,v\in V$ be a pair of vertices, and let $(f,\chi)$ be the spanning clan
embedding into a tree $T$ returned by calling hierarchical-petal-decomposition
on $(G[V],z,z,\Delta_{z}(V))$ for an arbitrary $z\in V$.
###### Lemma 6.
The clan embedding $(f,\chi)$ has distortion $O(\rho)=O(k\log\log\mu(V))$.
###### Proof.
The proof is by induction on the radius $\Delta$ of the graph (w.r.t. the
center). The base case, where the graph is a singleton and $\Delta=0$, is
trivial. For the general case, consider a pair of vertices $u,v$. Let
$\left(\left\\{\underline{X}_{j},X_{j},\widetilde{X}_{j},x_{j},t_{j},\Delta_{j}\right\\}_{j=0}^{s},\left\\{(y_{j},x_{j})\right\\}_{j=1}^{s}\right)$
be the output of the call to the petal-decomposition procedure on $X,x_{0}$.
For each $j\geq 1$, let $Y_{j-1}$ be the graph held during the $j$’th stage of
the algorithm. Note that $Y_{s}=X_{0}$. Then we created the petals
$(\underline{X}_{j},X_{j},\widetilde{X}_{j})=(W_{r_{j}-\frac{R}{4Lk}},W_{r_{j}},W_{r_{j}+\frac{R}{4Lk}})$,
and $Y_{j}=Y_{j-1}\backslash\underline{X}_{j}$, where $L=\lceil
1+\log\log\mu(Y_{j-1})\rceil$, and $R\geq\frac{\Delta}{8}$. Set
$\rho=128\left\lceil 1+\log\log\mu(V)\right\rceil\cdot k=O(k\log\log\mu(V))$.
Note that for every execution of the create-petal procedure at this stage, it
holds that $\frac{\Delta}{\rho}\leq\frac{1}{4}\cdot\frac{R}{4Lk}$.
First, consider the case where $d_{G}(u,v)\geq\frac{\Delta}{\rho}$. By Fact 3, the
distance between any pair of vertices in $T$ is $O(\Delta)$. In particular
$\min_{v^{\prime}\in f(v)}d_{T}(v^{\prime},\chi(u))=O(\Delta)=O(\rho)\cdot
d_{G}(u,v)\leavevmode\nobreak\ .$
Otherwise, $d_{G}(u,v)<\frac{\Delta}{\rho}$. Let
$B=B_{X}(u,\frac{\Delta}{\rho})$. For ease of notation, set
$\underline{X}_{s+1}=X_{s+1}=\widetilde{X}_{s+1}=X_{0}=Y_{s}$. Let
$j_{u}\in[1,s+1]$ be the minimal index such that $u\in X_{j_{u}}$. We argue that
$B\subseteq Y_{j_{u}-1}$. Assume otherwise, and let $j\in[1,j_{u}-1]$ be the
minimal index such that $B\nsubseteq Y_{j}$. Thus, there is a vertex
$u^{\prime}\in B\cap\underline{X}_{j}\subseteq W_{r_{j}-\frac{R}{4Lk}}$, while
by the minimality of $j$, it holds that $B\subseteq Y_{j-1}$. Using Fact 1, it
follows that
$u\in B_{Y_{j-1}}(u^{\prime},\frac{\Delta}{\rho})\subseteq
W_{r_{j}-\frac{R}{4Lk}+4\cdot\frac{\Delta}{\rho}}\subseteq
W_{r_{j}}=X_{j}\leavevmode\nobreak\ ,$
a contradiction to the minimality of $j_{u}$.
Next, we argue that $B\subseteq\widetilde{X}_{j_{u}}$. If $j_{u}=s+1$, then we
have $B\subseteq Y_{s}=X_{0}=\widetilde{X}_{s+1}$ and we are done. Otherwise, as
$u\in X_{j_{u}}=W_{r_{j_{u}}}$, using Fact 1 again we obtain
$B=B_{X}(u,\frac{\Delta}{\rho})=B_{Y_{j_{u}-1}}(u,\frac{\Delta}{\rho})\subseteq
W_{r_{j_{u}}+4\cdot\frac{\Delta}{\rho}}\subseteq
W_{r_{j_{u}}+\frac{R}{4Lk}}=\widetilde{X}_{j_{u}}\leavevmode\nobreak\ .$
In the hierarchical-petal-decomposition algorithm, we create a clan embedding
$(f_{j_{u}},\chi_{j_{u}})$ of $\widetilde{X}_{j_{u}}$ into a tree $T_{j_{u}}$.
The tree $T_{j_{u}}$ is incorporated into a global tree $T$, where
$f(u)=\cup_{j}f_{j}(u)$, $f(v)=\cup_{j}f_{j}(v)$, and
$\chi(u)=\chi_{j_{u}}(u)$ by the definition of $j_{u}$. As
$d_{G}(u,v)<\frac{\Delta}{\rho}$, it holds that $v\in B$. In particular, the
shortest path from $v$ to $u$ in $G$ belongs to $B$, thus
$d_{G[\widetilde{X}_{j_{u}}]}(u,v)=d_{G}(u,v)$. By Fact 2, the radius of
$\widetilde{X}_{j_{u}}$ is at most $\frac{3}{4}\Delta$; hence, using the
induction hypothesis, we conclude that:
$\min_{v^{\prime}\in f(v)}d_{T}(v^{\prime},\chi(u))\leq\min_{v^{\prime}\in
f_{j_{u}}(v)}d_{T_{j_{u}}}(v^{\prime},\chi_{j_{u}}(u))=O(\rho)\cdot
d_{G[\widetilde{X}_{j_{u}}]}(u,v)=O(\rho)\cdot d_{G}(u,v)\leavevmode\nobreak\ .$
∎
###### Lemma 7.
$\mathbb{E}_{v\sim\mu}[|f(v)|]\leq\mu(V)^{1+1/k}$.
###### Proof.
We prove by induction on $|X|$ and $\Delta$ that the one-to-many embedding $f$
constructed using the hierarchical-petal-decomposition algorithm w.r.t. any
$(\geq 1)$-measure $\mu$ fulfills
$\mathbb{E}_{v\sim\mu}[|f(v)|]\leq\mu(X)^{1+1/k}$. The base case where $X$ is
a singleton is trivial. For the inductive step, assume we call petal-
decomposition on $(G[X],x_{0},t,\Delta)$ with $\Delta\geq\Delta_{x_{0}}(X)$
and measure $\mu$.
Assume that the petal-decomposition algorithm does a non-trivial clustering of
$X$ to $\widetilde{X}_{0},\widetilde{X}_{1},\dots,\widetilde{X}_{s}$. (If it
is the case that all vertices are sufficiently close to $x_{0}$, then no petal
will be created, and the hierarchical-petal-decomposition will simply recurse
on $(G[X],x_{0},t,\Delta_{x_{0}}(X))$, so we can ignore this case.) Let
$\widetilde{X}_{1}=W_{r+\frac{R}{4Lk}}$ be the first petal created by the
petal-decomposition algorithm, and $Y_{1}=X\backslash\underline{X}_{1}$, where
$\underline{X}_{1}=W_{r-\frac{R}{4Lk}}$. Denote by $\mu_{\widetilde{X}_{j}}$
the measure $\mu$ restricted to $\widetilde{X}_{j}$, and by
$f_{\widetilde{X}_{j}}$ the one-to-many embedding our algorithm constructs for
$\widetilde{X}_{j}$.
By Observation 1, we can consider the remaining execution of petal-decomposition on
$Y_{1}$ as a new recursive call of petal-decomposition with input
$(G[Y_{1}],x_{0},t_{0},\Delta)$. In particular, the recursive calls on
$\widetilde{X}_{0},\widetilde{X}_{2},\dots,\widetilde{X}_{s}$ are completely
independent of $\widetilde{X}_{1}$. Denote
$f_{Y_{1}}=\cup_{j=0,2,\dots,s}f_{\widetilde{X}_{j}}$, and by $\mu_{{Y}_{1}}$
the measure $\mu$ restricted to $Y_{1}$. Since
$|\widetilde{X}_{1}|,|Y_{1}|<|X|$, the induction hypothesis implies that
$\mathbb{E}_{v\sim\mu_{\widetilde{X}_{1}}}[|f_{\widetilde{X}_{1}}(v)|]\leq\mu_{\widetilde{X}_{1}}(\widetilde{X}_{1})^{1+\frac{1}{k}}=\mu(\widetilde{X}_{1})^{1+\frac{1}{k}}$
and
$\mathbb{E}_{v\sim\mu_{Y_{1}}}[|f_{Y_{1}}(v)|]\leq\mu_{Y_{1}}(Y_{1})^{1+\frac{1}{k}}=\mu(Y_{1})^{1+\frac{1}{k}}$.
Note that by our construction,
$\mathbb{E}_{v\sim\mu}[|f(v)|]=\sum_{j=0}^{s}\mathbb{E}_{v\sim\mu_{\widetilde{X}_{j}}}[|f_{\widetilde{X}_{j}}(v)|]=\mathbb{E}_{v\sim\mu_{\widetilde{X}_{1}}}[|f_{\widetilde{X}_{1}}(v)|]+\mathbb{E}_{v\sim\mu_{Y_{1}}}[|f_{Y_{1}}(v)|]\leavevmode\nobreak\
.$
The rest of the proof is by case analysis according to the choice of radii in
Algorithm 3. Recall that $w_{r^{\prime}}=\mu(W_{r^{\prime}})$ and
$q_{r^{\prime}}=\mu(Y\setminus W_{r^{\prime}})=\mu(X\setminus W_{r^{\prime}})$
for every parameter $r^{\prime}$.
1.
Case 1: $w_{\mathrm{mid}}\leq\mu(X)/2$. In this case, we pick
$[a,b]\subseteq[\mathrm{lo},\mathrm{mid}]$ where $b-a=R/(2L)$, and
$r\in\left[a+\frac{b-a}{2k},b-\frac{b-a}{2k}\right]$ such that
$w_{a}\geq w_{b}^{2}/\mu(X)\qquad\qquad\mbox{and}\qquad\qquad
w_{r+\frac{b-a}{2k}}\leq
w_{r-\frac{b-a}{2k}}\cdot\left(\frac{w_{b}}{w_{a}}\right)^{1/k}\leavevmode\nobreak\
.$
Here $\widetilde{X}_{1}=W_{r+\frac{b-a}{2k}}$, while
$Y_{1}=X\backslash\underline{X}_{1}=X\backslash W_{r-\frac{b-a}{2k}}$. Using
these two inequalities, we have that
$\mu(\widetilde{X}_{1})^{1+\frac{1}{k}}=w_{r+\frac{b-a}{2k}}\cdot
w_{r+\frac{b-a}{2k}}^{\frac{1}{k}}\leq
w_{r-\frac{b-a}{2k}}\cdot\left(\frac{w_{b}}{w_{a}}\right)^{\frac{1}{k}}\cdot
w_{r+\frac{b-a}{2k}}^{\frac{1}{k}}\leq
w_{r-\frac{b-a}{2k}}\cdot\left(\frac{\mu(X)}{w_{b}}\right)^{\frac{1}{k}}\cdot
w_{r+\frac{b-a}{2k}}^{\frac{1}{k}}\leq
w_{r-\frac{b-a}{2k}}\cdot\mu(X)^{\frac{1}{k}}\leavevmode\nobreak\ ,$
where we used the fact that $r+\frac{b-a}{2k}\leq b$ (and that $w_{r}$ is
monotone). Using the induction hypothesis, we conclude that
$\displaystyle\mathbb{E}_{x\sim\mu}[|f(x)|]$
$\displaystyle=\mathbb{E}_{x\sim\mu_{\widetilde{X}_{1}}}[|f_{\widetilde{X}_{1}}(x)|]+\mathbb{E}_{x\sim\mu_{Y_{1}}}[|f_{Y_{1}}(x)|]$
$\displaystyle\leq\mu(\widetilde{X}_{1})^{1+\frac{1}{k}}+\mu(Y_{1})^{1+\frac{1}{k}}$
$\displaystyle\leq
w_{r-\frac{b-a}{2k}}\cdot\mu(X)^{\frac{1}{k}}+\mu(Y_{1})\cdot\mu(X)^{\frac{1}{k}}$
$\displaystyle=\left(\mu(W_{r-\frac{b-a}{2k}})+\mu(X\backslash
W_{r-\frac{b-a}{2k}})\right)\cdot\mu(X)^{\frac{1}{k}}=\mu(X)^{1+\frac{1}{k}}\leavevmode\nobreak\
,$
where the second inequality is because $\mu(Y_{1})\leq\mu(X)$.
2.
Case 2: $w_{\mathrm{mid}}>\mu(X)/2$. This case is completely symmetric.
Denoting $q_{r}=\mu(X\setminus W_{r})$, we picked
$[b,a]\subseteq[\mathrm{mid},\mathrm{hi}]$ so that $a-b=R/(2L)$ and
$r\in\left[b+\frac{a-b}{2k},a-\frac{a-b}{2k}\right]$ such that
$q_{a}\geq q_{b}^{2}/\mu(X)\qquad\qquad\mbox{and}\qquad\qquad
q_{r-\frac{a-b}{2k}}\leq
q_{r+\frac{a-b}{2k}}\cdot\left(\frac{q_{b}}{q_{a}}\right)^{1/k}\leavevmode\nobreak\
.$
Here $\widetilde{X}_{1}=W_{r+\frac{a-b}{2k}}$, while $Y_{1}=X\backslash
W_{r-\frac{a-b}{2k}}$. Note that $\mu(Y_{1})=q_{r-\frac{a-b}{2k}}$ while
$\mu(\widetilde{X}_{1})=\mu(X)-q_{r+\frac{a-b}{2k}}$. Using these two
inequalities, we have that
$\mu(Y_{1})^{1+\frac{1}{k}}=q_{r-\frac{a-b}{2k}}\cdot
q_{r-\frac{a-b}{2k}}^{\frac{1}{k}}\leq
q_{r+\frac{a-b}{2k}}\cdot\left(\frac{q_{b}}{q_{a}}\right)^{\frac{1}{k}}\cdot
q_{r-\frac{a-b}{2k}}^{\frac{1}{k}}\leq
q_{r+\frac{a-b}{2k}}\cdot\left(\frac{\mu(X)}{q_{b}}\right)^{\frac{1}{k}}\cdot
q_{r-\frac{a-b}{2k}}^{\frac{1}{k}}\leq
q_{r+\frac{a-b}{2k}}\cdot\mu(X)^{\frac{1}{k}}\leavevmode\nobreak\ ,$
where we used the fact that $b\leq r-\frac{a-b}{2k}$ (and that $q_{r}$ is
monotonically non-increasing in $r$). Following previous calculations, we
conclude that:
$\displaystyle\mathbb{E}_{x\sim\mu}[|f(x)|]$
$\displaystyle\leq\mu(\widetilde{X}_{1})^{1+\frac{1}{k}}+\mu(Y_{1})^{1+\frac{1}{k}}$
$\displaystyle\leq\mu(\widetilde{X}_{1})\cdot\mu(X)^{\frac{1}{k}}+q_{r+\frac{a-b}{2k}}\cdot\mu(X)^{\frac{1}{k}}$
$\displaystyle=\left(\mu(W_{r+\frac{a-b}{2k}})+\mu(X\backslash
W_{r+\frac{a-b}{2k}})\right)\cdot\mu(X)^{\frac{1}{k}}=\mu(X)^{1+\frac{1}{k}}\leavevmode\nobreak\
.$
∎
Lemma 5 follows by the combination of Lemma 6 and Lemma 7.
### 4.4 Missing proofs from the create-petal procedure (Algorithm 3)
In this section we prove that the choices made in the create-petal procedure
are all legal. In all lemmas in this section, we shall use the notation in
Algorithm 3.
###### Lemma 8.
If $w_{\mathrm{mid}}\leq\mu(Y)/2$ then there is
$\left[a,b\right]\subseteq\left[\mathrm{lo},\mathrm{mid}\right]$ such that
$b-a=\frac{R}{2L}$ and $w_{a}\geq w_{b}^{2}/\mu(Y)$.
###### Proof.
Seeking a contradiction, assume that for every such $a,b$ with
$b-a=\frac{R}{2L}$ it holds that $w_{b}>\sqrt{\mu(Y)\cdot w_{a}}$. Applying
this on $b=\mathrm{mid}-\frac{iR}{2L}$ and $a=\mathrm{mid}-\frac{(i+1)R}{2L}$
for every $i=0,1,\dots,L-2$, we have that
$w_{\mathrm{mid}}>\mu(Y)^{1/2}\cdot
w_{\mathrm{mid}-\frac{R}{2L}}^{1/2}>\dots>\mu(Y)^{1-2^{-(L-1)}}\cdot
w_{\mathrm{mid}-\frac{(L-1)R}{2L}}^{2^{-(L-1)}}\geq\mu(Y)\cdot 2^{-1}\cdot
w_{\mathrm{lo}}^{2^{-(L-1)}}\geq\frac{\mu(Y)}{2}\leavevmode\nobreak\ ,$
where we used that $\log\log\mu(Y)\leq L-1$ and
$\mathrm{mid}=\mathrm{lo}+R/2$. In the last inequality, we also used that
$W_{\mathrm{lo}}$ contains at least one vertex, thus $w_{\mathrm{lo}}\geq 1$.
The contradiction follows. ∎
###### Lemma 9.
There is $r\in\left[a+\frac{b-a}{2k},b-\frac{b-a}{2k}\right]$ such that
$w_{r+\frac{b-a}{2k}}\leq
w_{r-\frac{b-a}{2k}}\cdot\left(\frac{w_{b}}{w_{a}}\right)^{\frac{1}{k}}$.
###### Proof.
Seeking a contradiction, assume there is no such choice of $r$. Then applying
the inequality for $r=b-(i+1/2)\cdot\frac{b-a}{k}$ for $i=0,1,\dots,k-1$ we
get
$w_{b}>w_{b-\frac{b-a}{k}}\cdot\left(\frac{w_{b}}{w_{a}}\right)^{1/k}>\cdots>w_{b-k\cdot\frac{b-a}{k}}\cdot\left(\frac{w_{b}}{w_{a}}\right)^{k/k}=w_{a}\cdot\frac{w_{b}}{w_{a}}=w_{b}\leavevmode\nobreak\
,$
a contradiction. ∎
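The averaging argument above can be phrased as a finite search over the $k$ candidate radii $r_{i}=b-(i+\frac{1}{2})\cdot\frac{b-a}{k}$; by the lemma, the search can never fail. A small Python sketch (the function name and float tolerance are our own, illustrative choices):

```python
def find_good_r(w, a, b, k, tol=1e-12):
    """Search version of Lemma 9: among the k candidates
    r_i = b - (i + 1/2) * (b - a) / k, return one satisfying
    w(r + s) <= w(r - s) * (w(b) / w(a))**(1/k), where s = (b - a) / (2k).
    If none existed, chaining the k reversed inequalities would yield the
    contradiction w(b) > w(b), so the search always succeeds."""
    s = (b - a) / (2 * k)
    ratio = (w(b) / w(a)) ** (1.0 / k)
    for i in range(k):
        r = b - (2 * i + 1) * s
        if w(r + s) <= w(r - s) * ratio + tol:
            return r
    raise AssertionError("unreachable by Lemma 9")
```

For instance, for the linear measure $w(x)=x+1$ on $[a,b]=[0,1]$ with $k=4$, the first candidate $r=0.875$ already satisfies the inequality.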
The following two lemmas are symmetric to the two lemmas above.
###### Lemma 10.
If $w_{\mathrm{mid}}>\frac{\mu(Y)}{2}$ (which implies
$q_{\mathrm{mid}}\leq\frac{\mu(Y)}{2}$), then there is
$\left[b,a\right]\subseteq\left[\mathrm{mid},\mathrm{hi}\right]$ such that
$a-b=\frac{R}{2L}$ and $q_{a}\geq q_{b}^{2}/\mu(Y)$.
###### Lemma 11.
There is $r\in\left[b+\frac{a-b}{2k},a-\frac{a-b}{2k}\right]$ such that
$q_{r-\frac{a-b}{2k}}\leq
q_{r+\frac{a-b}{2k}}\cdot\left(\frac{q_{b}}{q_{a}}\right)^{1/k}$.
### 4.5 Grand finale: proof of Theorem 3
The proof of Theorem 3 using Lemma 5 follows the same lines as the proof of
Theorem 1 from Lemma 2. First, we translate the language of $(\geq
1)$-measures into that of probability measures.
###### Lemma 12.
Given an $n$-point weighted graph $G=(V,E,w)$ and probability measure
$\mu:V\rightarrow\mathbb{R}_{\geq 0}$, we can construct the two following
spanning clan embeddings $(f,\chi)$ into a tree:
1. 1.
For integer $k\geq 1$, multiplicative distortion $O(k\log\log n)$ such that
$\mathbb{E}_{x\sim\mu}[|f(x)|]\leq O(n^{\frac{1}{k}})$.
2. 2.
For $\epsilon\in(0,1]$, multiplicative distortion $O(\frac{\log n\log\log
n}{\epsilon})$ such that $\mathbb{E}_{x\sim\mu}[|f(x)|]\leq 1+\epsilon$.
The proof of Lemma 12 is identical to that of Lemma 3, and we skip it. The
only subtlety to note is that the $(\geq 1)$-measure $\widetilde{\mu}_{\geq
1}$ constructed during the proof of Lemma 3 fulfills $\widetilde{\mu}_{\geq
1}(V)=2n$, and thus the multiplicative distortion guarantee from Lemma 5 will
be $O(k\log\log n)$. Theorem 3 now follows from the minimax theorem (in the
exact same way as the proof of Theorem 1).
## 5 Lower Bound for Clan Embeddings into Trees
This section is devoted to proving Theorem 2, which we restate below.
The girth of an unweighted graph $G$ is the length of the shortest cycle in
$G$. The Erdős girth conjecture states that for any $g$ and $n$, there exists
an $n$-vertex graph with girth $g$ and $\Omega(n^{1+\frac{2}{g-2}})$ edges.
The conjecture is known to hold for $g=4,6,8,12$ (see [Ben66, Wen91]).
However, the best known lower bound for general $g$ is due to Lazebnik et al.
[LUW95].
###### Theorem 10 ([LUW95]).
For every even $g$, and $n$, there exists an unweighted graph with girth $g$
and $\Omega(n^{1+\frac{4}{3}\cdot\frac{1}{g-2}})$ edges.
From the upper bound perspective, the (generalized) Moore bound [AHL02,
BR10] states that every $n$-vertex graph with girth $g$ has at most
$n^{1+\frac{2}{g-2}}$ edges for $g\leq 2\log n$, and at most
$n\left(1+(1+o(1))\frac{\ln(m-n+1)}{g}\right)$ edges for larger $g$; here $m$
is the number of edges.
We will be able to use Theorem 10 to prove the second assertion in Theorem 2.
That is, any clan embedding into a tree with distortion $O(k)$ must have
$\sum_{x\in X}|f(x)|\geq\Omega(n^{1+\frac{1}{k}})$. However, the first
assertion requires a much tighter lower bound of $(1+\epsilon)n$ on the number
of edges. Therefore, the asymptotic nature of Theorem 10 is unfortunately not
strong enough for our needs. We begin by showing that for large enough $n$ and
$\epsilon\in(0,1)$, there exists an $n$-vertex graph with $(1+\epsilon)n$
edges and girth $\Omega(\frac{\log n}{\epsilon})$. We are not aware of this
basic fact having previously appeared in the literature. Note that Lemma 13
matches Moore’s upper bound (up to a constant dependency on the girth $g$).
###### Lemma 13.
For every fixed $\epsilon\in(0,1)$ and large enough $n$, there exists a graph
with at least $(1+\epsilon)n$ edges and girth $\Omega(\frac{\log
n}{\epsilon})$.
###### Remark 2 (Ultra sparse spanners).
Given a graph $G=(V,E,w)$, a $t$-spanner is a subgraph $H$ of $G$ such that
for every pair of vertices $u,v\in V$, $d_{H}(u,v)\leq t\cdot d_{G}(u,v)$. For
every fixed $\epsilon\in(0,1)$, Elkin and Neiman [EN19] constructed ultra-
sparse spanners with $(1+\epsilon)n$ edges and stretch $O(\frac{\log
n}{\epsilon})$. Even though they noted that the sparsity of their spanner
matches Moore's bound, it remained open whether one can construct better
spanners. As the only $(g-2)$-spanner of a graph with girth $g$ is the graph
itself, Lemma 13 implies that the ultra sparse spanner from [EN19] is tight
(up to a constant in the stretch).
For the case of girth $\Omega(\log n)$, the first step is to replace the
asymptotic notation in the lower bound on the number of edges from Theorem 10
with an explicit bound.
###### Claim 2.
For every $n\in\mathbb{N}$, there exists an $n$-vertex graph with $2n$ edges and girth
$\Omega(\log n)$.
###### Proof.
Set $p=\frac{4n}{{n\choose 2}}=\frac{8}{n-1}$. Consider a graph $G=(V,E)$
sampled according to $G(n,p)$ (that is, each potential edge is included in $G$
independently with probability $p$). It holds that $\mathbb{E}[|E|]={n\choose
2}\cdot p=4n$. By the Chernoff bound,
$\Pr\left[|E|<3n\right]\leq
e^{-\frac{1}{32}\mathbb{E}[|E|]}=e^{-\frac{n}{8}}\leavevmode\nobreak\ .$
On the other hand, for $t\geq 3$, denote by $C_{t}$ the set of cycles of
length exactly $t$. Then,
$\mathbb{E}\left[|C_{t}|\right]\leq n(n-1)\cdots(n-t+1)\cdot
p^{t}=\frac{n(n-1)\cdots(n-t+1)}{(n-1)^{t}}\cdot 8^{t}\leq 2\cdot
8^{t}\leavevmode\nobreak\ .$
Denote by $\mathcal{C}$ the set of all cycles of length smaller than
$\frac{1}{4}\log n$. Then
$\mathbb{E}\left[|\mathcal{C}|\right]=\sum_{t=3}^{\frac{1}{4}\log
n-1}\mathbb{E}\left[|C_{t}|\right]\leq\sum_{t=3}^{\frac{1}{4}\log
n-1}2\cdot 8^{t}<8^{\frac{1}{4}\log n}=n^{\frac{3}{4}}\leavevmode\nobreak\ .$
By Markov's inequality, $\Pr\left[|\mathcal{C}|\geq
n\right]\leq\frac{\mathbb{E}\left[|\mathcal{C}|\right]}{n}\leq
n^{-\frac{1}{4}}<\frac{1}{2}$ for large enough $n$.
By the union bound, there exists a graph $G$ with at least $3n$ edges, and at
most $n$ cycles of length less than $\frac{1}{4}\log n$. Let $G^{\prime}$ be
the graph obtained by deleting an arbitrary single edge from each such cycle,
and then deleting further edges until $G^{\prime}$ has exactly $2n$ edges. We
conclude that $G^{\prime}$ has $2n$ edges and girth at least $\frac{1}{4}\log
n$, as required. ∎
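The probabilistic construction behind Claim 2 can be simulated directly. The following Python sketch is our own, illustrative code: for small $n$ we pass a fixed girth bound rather than the asymptotic $\Theta(\log n)$ one, and we omit the final trimming to exactly $2n$ edges. It samples $G(n,p)$ with $p=\frac{8}{n-1}$ and deletes one edge from every short cycle.

```python
import random
from collections import deque

def short_cycle_edge(adj, g):
    """Return an edge lying on some cycle of length < g, or None.
    BFS from every vertex; a visited non-parent neighbor v of u certifies
    a cycle of length at most dist[u] + dist[v] + 1 through the edge (u, v)."""
    for s in adj:
        dist, parent, queue = {s: 0}, {s: None}, deque([s])
        while queue:
            u = queue.popleft()
            if 2 * dist[u] >= g:       # deeper levels cannot close a short cycle
                break
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    queue.append(v)
                elif parent[u] != v and dist[u] + dist[v] + 1 < g:
                    return (u, v)
    return None

def high_girth_subgraph(n, g, seed=0):
    """Simulation of the proof of Claim 2: sample G(n, p) with p = 8/(n-1),
    then delete one edge from each cycle shorter than g until none remain."""
    rng = random.Random(seed)
    p = 8 / (n - 1)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    while (e := short_cycle_edge(adj, g)) is not None:
        u, v = e
        adj[u].discard(v)
        adj[v].discard(u)
    return adj
```

On the toy graph consisting of a 5-cycle with one chord, `short_cycle_edge` with bound 4 reports an edge of the unique triangle; after `high_girth_subgraph` returns, no short cycle remains by construction.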
###### Proof of Lemma 13.
Fix $\delta=\frac{1-\epsilon}{2\epsilon}$. Set $n^{\prime}=\epsilon
n=\frac{n}{1+2\delta}$. We ignore issues of integrality during the proof. Such
issues could be easily fixed as we don’t state an explicit bound on the girth.
Using Claim 2, construct a graph $G^{\prime}$ with $n^{\prime}$ vertices,
$2n^{\prime}$ edges, and girth $\Omega(\log n^{\prime})$.
Let $G$ be the graph obtained from $G^{\prime}$ by replacing each edge with a
path of length $\delta+1$. Then:
$\displaystyle|V(G)|$
$\displaystyle=|V(G^{\prime})|+\delta\cdot|E(G^{\prime})|=n^{\prime}+\delta\cdot
2n^{\prime}=n^{\prime}(1+2\delta)=n$ $\displaystyle|E(G)|$
$\displaystyle=(\delta+1)\cdot|E(G^{\prime})|=(\delta+1)\cdot
2n^{\prime}=n\cdot\frac{2(1+\delta)}{1+2\delta}=(1+\epsilon)n\leavevmode\nobreak\
,$
where the last equality follows by the definition of $\delta$. Note that the
girth of $G$ is at least $\Omega((1+\delta)\log
n^{\prime})=\Omega(\frac{\log(\epsilon n)}{\epsilon})=\Omega(\frac{\log
n}{\epsilon})$, for $n$ large enough. ∎
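The subdivision step can be sketched in a few lines of Python (illustrative code; the vertex-naming convention is our own), which also lets one verify the counts $|V(G)|=n^{\prime}+\delta\cdot|E(G^{\prime})|$ and $|E(G)|=(\delta+1)\cdot|E(G^{\prime})|$ used above:

```python
def subdivide_edges(n_vertices, edges, delta):
    """Replace every edge of G' by a path of length delta + 1, i.e. with
    delta fresh internal vertices.  Original vertices keep their names
    0..n_vertices-1; internal vertices get fresh integer ids.
    Returns (number of vertices, edge list) of the subdivided graph G."""
    new_edges, next_id = [], n_vertices
    for (u, v) in edges:
        prev = u
        for _ in range(delta):       # delta internal vertices per edge
            new_edges.append((prev, next_id))
            prev, next_id = next_id, next_id + 1
        new_edges.append((prev, v))  # close the path at v
    return next_id, new_edges
```

For instance, subdividing the 6 edges of $K_{4}$ with $\delta=2$ yields $4+2\cdot 6=16$ vertices and $3\cdot 6=18$ edges, and every cycle length is multiplied by $\delta+1=3$.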
The _Euler characteristic_ of a graph $G$ is defined as
$\chi(G)\coloneqq|E(G)|-|V(G)|+1$. Our lower bound is based on the following
theorem by Rabinovich and Raz [RR98].
###### Theorem 11 ([RR98]).
Consider an unweighted graph $G$ with girth $g$, and consider a (classic)
embedding $f:G\rightarrow H$ of $G$ into a weighted graph $H$, such that
$\chi(H)<\chi(G)$. Then $f$ has multiplicative distortion at least
$\frac{g}{4}-\frac{3}{2}$.
Next, we translate the statement of Theorem 11 from the language of classic
embeddings into graphs to that of clan embeddings into trees.
###### Lemma 14.
Consider an unweighted, $n$-vertex graph $G=(V,E)$ with girth $g$, and let
$(f,\chi)$ be a clan embedding of $G$ into a tree $T$ with multiplicative
distortion $t<\frac{g}{4}-\frac{3}{2}$. Then necessarily $\sum_{v\in
V}|f(v)|\geq n+\chi(G)$.
###### Proof.
Let $H$ be the graph obtained from $T$ by merging all the copies of each
vertex. Specifically, arbitrarily order the vertices in $V$:
$v_{1},v_{2},\dots,v_{n}$. Iteratively construct a series of graphs
$H_{0}=T,H_{1},H_{2},\dots,H_{n}$ with one-to-many embeddings
$f_{i}:G\rightarrow H_{i}$. In the $i$’th iteration, we create $H_{i},f_{i}$
out of $H_{i-1},f_{i-1}$ by replacing all the vertices in $f_{i-1}(v_{i})$ by
a single vertex $\tilde{v}_{i}$. For a vertex $u\in H_{i-1}$, we add an edge
from $u$ to $\tilde{v}_{i}$ if there was an edge from $u$ to some vertex in
$f_{i-1}(v_{i})$. If an edge $\\{u,\tilde{v}_{i}\\}$ is added, its weight is
defined to be $\min_{v^{\prime}\in f_{i-1}(v_{i})}w_{H_{i-1}}(v^{\prime},u)$. Set
$H=H_{n}$, and $\tilde{f}=f_{n}$. Clearly, distances in $H$ can only decrease
compared to $T$. This is because for every $u,v\in V$,
$d_{H}(\tilde{u},\tilde{v})\leq\min_{u^{\prime}\in f(u),\leavevmode\nobreak\
v^{\prime}\in f(v)}d_{T}(u^{\prime},v^{\prime})\leq\min_{u^{\prime}\in
f(u)}d_{T}(u^{\prime},\chi(v))\leq t\cdot d_{G}(u,v)$. On the other hand, by
induction (and the triangle inequality), since $f$ is a dominating embedding,
one can show that $\tilde{f}$ is also dominating. That is, for all $u,v\in V$,
$d_{H}(\tilde{u},\tilde{v})\geq d_{G}(u,v)$.
We conclude that $\tilde{f}$ is a classic embedding of $G$ with a
multiplicative distortion at most $t<\frac{g}{4}-\frac{3}{2}$. By Theorem 11,
it follows that $\chi(H)\geq\chi(G)$. For every $i$, it holds that
$\chi(H_{i})=|E(H_{i})|-|V(H_{i})|+1\leq|E(H_{i-1})|-\left(|V(H_{i-1})|-|f(v_{i})|+1\right)+1=\chi(H_{i-1})+|f(v_{i})|-1$
As the Euler characteristic of a tree equals $0$, we obtain
$\chi(G)\leq\chi(H)=\chi(H_{n})\leq\sum_{i}(|f(v_{i})|-1)+\chi(T)=\sum_{v\in
V}|f(v)|-n\leavevmode\nobreak\ ,$
as desired. ∎
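The merging argument can be checked mechanically. Below is an illustrative Python sketch (names are ours; edge weights are omitted, since only the Euler characteristic matters) that contracts all copies $f(v)$ of each vertex and compares $\chi(H)$ with $\sum_{v}|f(v)|-n$:

```python
def euler_characteristic(num_vertices, edge_set):
    """chi(G) = |E(G)| - |V(G)| + 1, as defined above."""
    return len(edge_set) - num_vertices + 1

def merge_clan_copies(tree_edges, clan):
    """Sketch of the merging step in the proof of Lemma 14: contract all
    copies f(v) of each original vertex v of G into a single vertex.
    tree_edges: edges of T over copy names; clan: dict v -> copies of v.
    Returns (|V(H)|, edge set of H); self-loops and parallel edges are
    dropped, which can only decrease the Euler characteristic."""
    owner = {c: v for v, copies in clan.items() for c in copies}
    merged = set()
    for (x, y) in tree_edges:
        u, v = owner[x], owner[y]
        if u != v:
            merged.add((min(u, v), max(u, v)))
    return len(clan), merged

# Example: triangle G on {0, 1, 2}; T is a path on four copies, vertex 0
# being duplicated (copies 10 and 11).
tree_edges = [(10, 20), (20, 30), (30, 11)]
clan = {0: [10, 11], 1: [20], 2: [30]}
nH, eH = merge_clan_copies(tree_edges, clan)
chi_H = euler_characteristic(nH, eH)
```

Here $\chi(T)=0$, $\sum_{v}|f(v)|-n=4-3=1$, and indeed the merged graph is the triangle itself, with $\chi(H)=1$.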
We are now ready to prove Theorem 2.
###### Proof of Theorem 2.
For the first assertion, using Lemma 13, let $G$ be an unweighted graph with
girth $g=\Omega(\frac{\log n}{\epsilon})$ and $(1+\epsilon)n$ edges. Consider
a clan embedding of $G$ into a tree with distortion smaller than
$\frac{g}{4}-\frac{3}{2}=\Omega(\frac{\log n}{\epsilon})$. By Lemma 14, it
holds that
$\sum_{v\in V}|f(v)|\geq n+\chi(G)=|E(G)|+1>(1+\epsilon)n\leavevmode\nobreak\
.$
The second assertion follows similar lines. Set
$g=2\cdot\left\lfloor\frac{\frac{4}{3}k+2}{2}\right\rfloor$. Note that $g$ is
the largest even number up to $\frac{4}{3}k+2$. Using Theorem 10, let $G$ be an
unweighted graph with girth $g$ and
$\Omega(n^{1+\frac{4}{3}\cdot\frac{1}{g-2}})\geq\Omega(n^{1+\frac{1}{k}})$
edges. Consider a clan embedding of $G$ into a tree with distortion smaller
than $\frac{g}{4}-\frac{3}{2}=\Omega(k)$. By Lemma 14, it holds that
$\sum_{v\in V}|f(v)|\geq
n+\chi(G)=|E(G)|+1=\Omega(n^{1+\frac{1}{k}})\leavevmode\nobreak\ .$
∎
## 6 Ramsey Type Embedding for Minor-Free Graphs
This section is devoted to proving Theorem 4. We begin by
proving Theorem 4 for the special case of nearly-$h$-embeddable graphs.
###### Lemma 15.
Given a nearly $h$-embeddable $n$-vertex graph $G=(V,E,w)$ of diameter $D$,
and parameters $\epsilon\in(0,\frac{1}{4})$, $\delta\in(0,1)$, there is a
distribution over one-to-many, clique preserving, dominating embeddings $f$
into treewidth $O_{h}(\frac{\log n}{\epsilon\delta})$ graphs, such that there
is a subset $M\subseteq V$ of vertices for which the following claims hold:
1.
For every clique $Q\subseteq V$, $\Pr[Q\subseteq M]\geq 1-\delta$.
2.
For every $u\in M$ and $v\in V$, $\max_{u^{\prime}\in f(u),v^{\prime}\in
f(v)}d_{H}(u^{\prime},v^{\prime})\leq d_{G}(u,v)+\epsilon D$.
###### Proof.
Consider a nearly $h$-embedded graph $G=(V,E,w)$. Assume w.l.o.g. that
$D=1$; otherwise, we scale accordingly. We also assume that $1/\delta$ is an
integer; otherwise, we solve for $\delta^{\prime}$ such that
$\frac{1}{\delta^{\prime}}=\lceil\frac{1}{\delta}\rceil$. Let $\Psi$ be the
set of apices. We will construct $q=\frac{5}{\delta}$ embeddings, all
satisfying property (2) of Lemma 15. The final embeddings will be obtained by
choosing one of these embeddings uniformly at random. We first create a new
graph $G^{\prime}=G[V\setminus\Psi]$ by deleting all the apex vertices
$\Psi$. In the tree decomposition of $H$ to be constructed, the set $\Psi$
will belong to all the bags (with edges towards all the vertices). Thus we can
assume that $G^{\prime}$ is connected, since otherwise, we can simply solve
the problem on each connected component separately and combine the solutions
by taking the union of all graphs/embeddings.
Let $r\in G^{\prime}$ be an arbitrary vertex. For
$\sigma\in\left\\{1,\dots,\frac{5}{\delta}\right\\}$ set
$\mathcal{I}_{-1,\sigma}=[0,\sigma]$,
$\mathcal{I}_{-1,\sigma}^{+}=[0,\sigma+1]$, and
$\mbox{for $j\geq 0$,
set}\qquad\mathcal{I}_{j,\sigma}=\left[\frac{5j}{\delta}+\sigma,\frac{5(j+1)}{\delta}+\sigma\right),\quad\mbox{and}\quad\mathcal{I}_{j,\sigma}^{+}=\left[\frac{5j}{\delta}+\sigma-1,\frac{5(j+1)}{\delta}+\sigma+1\right).$
Set $U_{j,\sigma}=\left\\{v\in G^{\prime}\mid
d_{G^{\prime}}(r,v)\in\mathcal{I}_{j,\sigma}\right\\}$ and similarly
$U_{j,\sigma}^{+}$ w.r.t. $\mathcal{I}_{j,\sigma}^{+}$. Let $G_{j,\sigma}$ be the graph
induced by $U^{+}_{j,\sigma}$, plus the vertex $r$. In addition, for every
vertex $v\in U^{+}_{j,\sigma}$ that has a neighbor in
$\cup_{j^{\prime}<j}U^{+}_{j^{\prime},\sigma}\setminus U_{j,\sigma}^{+}$, we
add an edge to $r$ of weight $d_{G}(v,r)$. Equivalently, $G_{j,\sigma}$ can be
constructed by taking the graph induced by $\cup_{j^{\prime}\leq
j}U^{+}_{j^{\prime},\sigma}$, and contracting all the internal edges out of
$U_{j,\sigma}^{+}$ into $r$. See Figure 4 (in Section 7) for an illustration.
Note that all the edges towards $r$ have weight at most $D=1$, thus
$G_{j,\sigma}$ is a nearly $h$-embedded graph with diameter at most
$2\cdot(\frac{5}{\delta}+3)=O(\frac{1}{\delta})$ and no apices.
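To illustrate the construction so far, the annuli $U_{j,\sigma}$ and their padded versions can be computed from the distances $d_{G^{\prime}}(r,\cdot)$ alone. A small Python sketch follows (our own, illustrative code; `width` stands for $\frac{5}{\delta}$, and the half-open boundary conventions are one possible reading of the intervals):

```python
def ring_clusters(dist_from_r, width, sigma):
    """Assign each vertex of G' to its ring U_{j,sigma} and to the padded
    rings U^+_{j,sigma} it belongs to, based on its distance from r.
    Core rings: I_{-1} = [0, sigma), I_j = [width*j + sigma, width*(j+1) + sigma).
    Padded rings extend each core interval by 1 on both sides (clipped at 0)."""
    U, U_plus = {}, {}
    for v, d in dist_from_r.items():
        j = -1 if d < sigma else int((d - sigma) // width)
        U.setdefault(j, set()).add(v)
        for jj in (j - 1, j, j + 1):   # a vertex can lie in at most 2 padded rings
            if jj < -1:
                continue
            lo = 0 if jj == -1 else width * jj + sigma - 1
            hi = sigma + 1 if jj == -1 else width * (jj + 1) + sigma + 1
            if lo <= d < hi:
                U_plus.setdefault(jj, set()).add(v)
    return U, U_plus
```

For example, with `width = 5`, `sigma = 2` and distances `{0: 0, 1: 1, 2: 2, 3: 6, 4: 7, 5: 12}`, the core rings are $\{0,1\},\{2,3\},\{4\},\{5\}$ for $j=-1,0,1,2$, while the padded ring $U^{+}_{0,\sigma}$ additionally picks up the boundary vertices 1 and 4.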
Fix some $\sigma$ and $j$. Using Lemma 1 with parameter
$\Theta(\epsilon\cdot\delta)$, we construct a one-to-many embedding
$f_{j,\sigma}$, of $G_{j,\sigma}$ into a graph $H_{j,\sigma}$ with treewidth
$O_{h}(\frac{\log n}{\epsilon\cdot\delta})$, such that $f_{j,\sigma}$ is
clique preserving and has additive distortion
$\Theta(\epsilon\cdot\delta)\cdot O(\frac{1}{\delta})=\epsilon$. After the
application of Lemma 1, we will merge all copies of $r$, and add edges from
$r$ to all the other vertices (where the weight of a new edge
$\left(r,v\right)$ is $d_{G}(r,v)$). Note that this increases the treewidth by
at most $1$. Furthermore, we will assume that there is a bag containing only
the vertex $r$ (as we can simply add such a bag). Next, fix $\sigma$. Let
$H^{\prime}_{\sigma}$ be a union of the graphs $\cup_{j\geq-1}H_{j,\sigma}$.
We identify all the copies of the vertex $r$ with each other, while for every
other vertex that participates in more than one graph, its copies in the
different graphs remain separate. Formally, we define a one-to-many embedding $f_{\sigma}$, where
$f_{\sigma}(r)$ equals the unique vertex $r$, and for every other vertex $v\in
V\setminus\Psi$, $f_{\sigma}(v)=\bigcup_{j\geq-1}f_{j,\sigma}(v)$. Note that
$H^{\prime}_{\sigma}$ has a tree decomposition of width $O_{h}(\frac{\log
n}{\epsilon\cdot\delta})$, by identifying the bag containing only $r$ in all
the graphs. Finally, we create the graph $H_{\sigma}$ by adding the set $\Psi$
with edges towards all the vertices in $H^{\prime}_{\sigma}$, where the weight
of a new edge $\left(u^{\prime},v\right)$, for $u^{\prime}\in f_{\sigma}(u)$ and $v\in\Psi$, is $d_{G}(u,v)$. For $v\in\Psi$, set
$f_{\sigma}(v)=\\{v\\}$. As $\Psi=O_{h}(1)$, $H_{\sigma}$ has treewidth
$O_{h}(\frac{\log n}{\epsilon\cdot\delta})$. Finally, set
$M_{j,\sigma}=\left\\{v\in G^{\prime}\mid
d_{G^{\prime}}(r,v)\in\big{[}\frac{5j}{\delta}+\sigma+2,\frac{5(j+1)}{\delta}+\sigma-2\big{)}\right\\}$,
and $M_{\sigma}=\Psi\cup\\{r\\}\cup\bigcup_{j\geq-1}M_{j,\sigma}$. This
finishes the construction.
Observe that the one-to-many embedding $f_{\sigma}$ is dominating. This
follows from the triangle inequality since every edge
$\\{u^{\prime},v^{\prime}\\}$ for $u^{\prime}\in f_{\sigma}(u),v^{\prime}\in
f_{\sigma}(v)$ in the graph has weight $d_{G}(u,v)$. Next we argue that
$f_{\sigma}$ is clique-preserving. Consider a clique $Q$ in $G$, and let
$\tilde{Q}=Q\setminus\Psi$ be the non-apex vertices in $Q$. We will show that
$f_{\sigma}$ contains a clique copy of $\tilde{Q}$. As the apices have edges
towards all the other vertices, it will imply that $f_{\sigma}$ is clique-
preserving. Let $v\in\tilde{Q}$ be some arbitrary vertex, and $j$ be the
unique index such that $v\in U_{j,\sigma}$. For every $u\in\tilde{Q}$,
$d_{G^{\prime}}(v,u)=d_{G}(v,u)\leq 1$, implying $u\in U^{+}_{j,\sigma}$. We
conclude that all $\tilde{Q}$ vertices belong to $G_{j,\sigma}$. As
$f_{j,\sigma}$ is clique-preserving, it follows that there is a bag in
$H_{j,\sigma}$, and thus also in $H_{\sigma}$, containing a clique copy of
$\tilde{Q}$.
Next, we argue that property (1) holds. We say that $f$ fails on a vertex
$v\in V$ if $v\notin M$, and we say that $f$ fails on a clique $Q$ if
$Q\nsubseteq M$. Consider some clique $Q$; we can assume w.l.o.g. that $Q$
does not contain any apex vertices (as $f$ never fails on an apex vertex). Let
$s_{Q},t_{Q}\in Q$ be the closest and farthest vertices from $r$ in
$G^{\prime}$, respectively. Then
$d_{G^{\prime}}(r,t_{Q})-d_{G^{\prime}}(r,s_{Q})\leq
$d_{G^{\prime}}(s_{Q},t_{Q})\leq 1$. $f_{\sigma}$ fails on $Q$ iff there is a
non-empty intersection between the interval
$[d_{G^{\prime}}(r,s_{Q}),d_{G^{\prime}}(r,t_{Q})]$ and the interval
$[\frac{5j}{\delta}+\sigma-2,\frac{5j}{\delta}+\sigma+2)$ for some $j$. Note
that there are at most $5$ values of $\sigma$ for which this intersection is
non-empty. As we constructed $q=\frac{5}{\delta}$ embeddings,
$\Pr_{\sigma}\left[Q\subseteq
M_{\sigma}\right]=\frac{\left|\left\\{\sigma\in[q]\mid Q\subseteq
M_{\sigma}\right\\}\right|}{q}\geq\frac{q-5}{q}=1-\delta\leavevmode\nobreak\ .$
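The counting behind this bound can be checked numerically. The following hypothetical sketch (not part of the paper) counts the shifts $\sigma\in\\{1,\dots,q\\}$, $q=\frac{5}{\delta}$, for which a given interval of length at most $1$ meets a guard window $[\frac{5j}{\delta}+\sigma-2,\frac{5j}{\delta}+\sigma+2)$; it always reports at most $5$ bad shifts:

```python
def bad_shifts(a, b, delta):
    """Count shifts sigma in {1, ..., q}, q = 5/delta, for which the
    interval [a, b] (with b - a <= 1) meets some guard window
    [q*j + sigma - 2, q*j + sigma + 2) -- i.e. the shifts on which a
    clique spanning distances [a, b] from r would fail."""
    q = round(5 / delta)
    bad = 0
    for sigma in range(1, q + 1):
        j = 0
        while True:
            lo, hi = q * j + sigma - 2, q * j + sigma + 2
            if lo > b:          # windows only move right; stop
                break
            if a < hi:          # [a, b] meets [lo, hi)
                bad += 1
                break
            j += 1
    return bad
```

Since each length-$4$ guard window forbids a length-$5$ range of shifts and consecutive windows are $\frac{5}{\delta}$ apart, at most $5$ of the $q$ shifts can fail any fixed clique, matching the bound $\frac{q-5}{q}=1-\delta$ above.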
Finally, we show that $f_{\sigma}$ has additive distortion $\epsilon D$ w.r.t.
$M_{\sigma}$. Consider a pair of vertices $u\in M_{\sigma}$ and $v\in V$. If
one of $u,v$ belongs to $\Psi\cup\\{r\\}$ then for every $u^{\prime}\in
f_{\sigma}(u)$ and $v^{\prime}\in f_{\sigma}(v)$,
$d_{H_{\sigma}}(u^{\prime},v^{\prime})=d_{G}(u,v)$. Otherwise, if
$d_{G^{\prime}}(u,v)>d_{G}(u,v)$, then it must be that the shortest path
between $u$ and $v$ in $G$ goes through an apex vertex $z\in\Psi$. In
$H_{\sigma}$, $f_{\sigma}(z)$ is a singleton that has an edge towards every
other vertex. It follows that $\max_{u^{\prime}\in f_{\sigma}(u),v^{\prime}\in
f_{\sigma}(v)}d_{H_{\sigma}}(u^{\prime},v^{\prime})\leq\max_{u^{\prime}\in
f_{\sigma}(u),v^{\prime}\in
f_{\sigma}(v)}d_{H_{\sigma}}(u^{\prime},f_{\sigma}(z))+d_{H_{\sigma}}(f_{\sigma}(z),v^{\prime})=d_{G}(u,z)+d_{G}(z,v)=d_{G}(u,v)\leavevmode\nobreak\
.$
Else, $d_{G^{\prime}}(u,v)=d_{G}(u,v)\leq D=1$. Let $j$ be the unique index
such that $u\in U_{j,\sigma}$. As $u\in M_{j,\sigma}$, it implies that there
is no index $j^{\prime}\neq j$ such that $v\in U^{+}_{j^{\prime},\sigma}$. In
particular, all the vertices on the shortest path between $u$ and $v$ in $G$
are in $U_{j,\sigma}$. Thus, we have
$\max_{u^{\prime}\in f_{\sigma}(u),v^{\prime}\in
f_{\sigma}(v)}d_{H_{\sigma}}(u^{\prime},v^{\prime})\leq\max_{u^{\prime}\in
f_{j,\sigma}(u),v^{\prime}\in
f_{j,\sigma}(v)}d_{H_{j,\sigma}}(u^{\prime},v^{\prime})\leq
d_{G_{j,\sigma}}(u,v)+\epsilon D=d_{G}(u,v)+\epsilon D\leavevmode\nobreak\ ,$
as desired. ∎
Consider a $K_{r}$-minor-free graph $G$, and let $\mathbb{T}$ be its clique-
sum decomposition. That is, $G=\cup_{(G_{i},G_{j})\in
E(\mathbb{T})}G_{i}\oplus_{h(r)}G_{j}$ where each $G_{i}$ is a nearly
$h(r)$-embeddable graph. We call the clique involved in the clique-sum of
$G_{i}$ and $G_{j}$ the _joint set_ of the two graphs. Here $h(r)$ is a
function depending on $r$ only. Let $\phi_{h}$ be some function depending only
on $h$ such that the treewidth of the graphs constructed in Lemma 15 is
bounded by $\phi_{h}\cdot\frac{\log n}{\epsilon\cdot\delta}$.
The embedding of $G$ is defined recursively, where some vertices from former
levels will be added to future levels as apices. In order to control the
number of such apices, we will use the following concept.
###### Definition 9 (Enhanced minor-free graph).
A graph $G$ is called an $(r,s,t)$_-enhanced minor-free graph_ if there is a set
$S$ of at most $s$ vertices, called elevated vertices, such that every
elevated vertex $u\in S$ has edges towards all the other vertices and
$G\setminus S$ is a $K_{r}$-minor-free graph that has a clique-sum
decomposition with $t$ pieces.
We will prove the following claim by induction on $t$:
###### Lemma 16.
Given an $n$-vertex $(r,s,t)$-enhanced minor-free graph $G$ of diameter $D$
with a set $S$ of elevated vertices, and parameters
$\epsilon\in(0,\frac{1}{4})$, $\delta\in(0,1)$, there is a distribution over one-to-many,
clique-preserving, dominating embeddings $f$ into graphs $H$ of treewidth
$\phi_{h(r)}\cdot\frac{\log n}{\epsilon\cdot\delta}+s+h(r)\cdot\log t$, such
that there is a subset $M\subseteq V$ of vertices for which the following
hold:
1. 1.
For every $v\in V$, $\Pr[v\in M]\geq 1-\delta\cdot\log 2t$.
2. 2.
For every $u\in M$ and $v\in V$, $\max_{u^{\prime}\in f(u),v^{\prime}\in
f(v)}d_{H}(u^{\prime},v^{\prime})\leq d_{G}(u,v)+\epsilon D$.
We now show that Lemma 16 implies Theorem 4:
###### Proof of Theorem 4.
Note that every $K_{r}$-minor-free graph is $(r,0,n)$-enhanced minor-free.
Apply Lemma 16 using parameters $\epsilon$ and
$\delta^{\prime}=\frac{\delta}{\log 2n}$ to obtain a distribution of
embeddings. For each embedding $f$ in the distribution, define another
embedding $g$ by setting $g(v)$ for each $v\in V$ to be an arbitrary vertex
from $f(v)$. We obtain a distribution over embeddings into treewidth
$\phi_{h(r)}\cdot\frac{\log n}{\epsilon\cdot\delta^{\prime}}+0+h(r)\cdot\log
n=O_{r}(\frac{\log^{2}n}{\epsilon\delta})$ graphs with distortion $\epsilon
D$, such that for every vertex $v\in V$, $\Pr[v\in M]\geq
1-\delta^{\prime}\cdot\log 2n=1-\delta$. ∎
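The parameter bookkeeping in this proof can be made concrete with a small sketch. The helper below is hypothetical (not part of the paper); `phi` and `h_r` stand for the constants $\phi_{h(r)}$ and $h(r)$, and the lemma is invoked with $t=n$ and $\delta^{\prime}=\frac{\delta}{\log 2n}$:

```python
import math

def theorem4_params(n, eps, delta, phi, h_r):
    """Parameter bookkeeping for deriving Theorem 4 from Lemma 16:
    invoke the lemma with delta' = delta / log2(2n) and t = n.
    Returns the resulting treewidth bound
      phi * log2(n) / (eps * delta') + h_r * log2(n)
    (which is O_r(log^2(n) / (eps * delta))) and the overall failure
    probability delta' * log2(2n), which equals delta exactly."""
    dprime = delta / math.log2(2 * n)
    treewidth = phi * math.log2(n) / (eps * dprime) + h_r * math.log2(n)
    fail = dprime * math.log2(2 * n)
    return treewidth, fail
```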
###### Proof of Lemma 16.
It follows from Lemma 15 that the claim holds for the base case $t=1$. We now
turn to the induction step. Consider an $(r,s,t)$-enhanced minor-free graph
$G$. Let $G^{\prime}$ be a $K_{r}$-minor-free graph obtained from $G$ by
removing a set $S$ of elevated vertices. Let $\mathbb{T}$ be the clique-sum
decomposition of $G^{\prime}$ with $t$ pieces. We use the following lemma to
pick a central piece $\tilde{G}$ of $\mathbb{T}$.
###### Lemma 17 ([Jor69]).
Given a tree $T$ of $n$ vertices, there is a vertex $v$ such that every
connected component of $T\setminus\\{v\\}$ has at most $\frac{n}{2}$ vertices.
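Such a vertex (a centroid) can be found in linear time by walking from an arbitrary root toward the heaviest subtree. A minimal sketch with hypothetical helper names (not from the paper), for a tree given as an adjacency dictionary:

```python
def centroid(adj):
    """Find a vertex whose removal leaves components of size <= n/2
    (Jordan's lemma). adj: dict mapping each vertex of a tree to its
    list of neighbors."""
    n = len(adj)
    root = next(iter(adj))
    # BFS order and parents, then subtree sizes by reverse traversal
    parent = {root: None}
    order = [root]
    for v in order:
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                order.append(u)
    size = {v: 1 for v in adj}
    for v in reversed(order[1:]):
        size[parent[v]] += size[v]
    # walk from the root toward the heaviest component until balanced
    v = root
    while True:
        heavy = max((u for u in adj[v] if u != parent[v]),
                    key=lambda u: size[u], default=None)
        if heavy is not None and size[heavy] > n // 2:
            # moving into heavy flips the edge: update the bookkeeping
            size[v] = n - size[heavy]
            parent[heavy], parent[v] = None, heavy
            v = heavy
        else:
            return v
```

On a path with $5$ vertices this returns the middle vertex; each move strictly decreases the size of the heaviest remaining component, so the walk terminates.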
Let $G_{1},\dots,G_{p}$ be the neighbors of $\tilde{G}$ in $\mathbb{T}$. Note
that $\mathbb{T}\setminus\tilde{G}$ contains $p$ connected components
$\mathbb{T}_{1},\dots,\mathbb{T}_{p}$, where $G_{i}\in\mathbb{T}_{i}$, and
$\mathbb{T}_{i}$ contains at most $|\mathbb{T}|/2=t/2$ pieces. Let $Q_{i}$ be
the clique used in the clique-sum of $G_{i}$ with $\tilde{G}$ in $\mathbb{T}$.
For every $i$, we will add edges from the $Q_{i}$ vertices to all the vertices
in $\mathbb{T}_{i}$. That is, we add $Q_{i}$ to the set of elevated vertices
in the graph induced by pieces in $\mathbb{T}_{i}$. Every new edge $\\{u,v\\}$
will have the weight $d_{G}(u,v)$. Let $\mathcal{G}_{i}$ be the graph induced
on vertices of $\mathbb{T}_{i}\cup S$ (and the newly added edges). Note that
$\mathcal{G}_{i}$ is an $(r,s^{\prime},t^{\prime})$-enhanced minor-free graph
for $t^{\prime}\leq\frac{t}{2}$ and $s^{\prime}\leq|S|+|Q_{i}|\leq s+h(r)$.
Furthermore, for every $u,v\in\mathcal{G}_{i}$, it holds that
$d_{\mathcal{G}_{i}}(u,v)=d_{G}(u,v)$. Thus, each $\mathcal{G}_{i}$ has
diameter at most $D$. Using the inductive hypothesis on $\mathcal{G}_{i}$, we
sample a dominating embedding $f_{i}$ into $H_{i}$, and a subset
$M_{i}\subseteq\mathcal{G}_{i}$ of vertices. Note that properties (1)-(2)
hold, and $H_{i}$ has treewidth $\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta}+s^{\prime}+h(r)\cdot\log
2t^{\prime}\leq\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta}+s+h(r)\cdot\log 2t$.
Let $\tilde{\mathcal{G}}$ be the graph induced on $\tilde{G}\cup S$. Note that
$\tilde{\mathcal{G}}$ has diameter at most $D$. We apply Lemma 15 to
$\tilde{\mathcal{G}}$ to sample a dominating embedding $\tilde{f}$ into
$\tilde{H}$, and a subset $\tilde{M}$ of vertices. Note that properties
(1)-(2) hold, in particular, the treewidth of $\tilde{H}$ is bounded by
$\phi_{h(r)}\cdot\frac{\log n}{\epsilon\cdot\delta}+s$ (as the construction
first will delete the elevated vertices and eventually add them to all the
bags).
As the embeddings $\tilde{f},f_{1},\dots,f_{p}$ are clique-preserving
embeddings into $\tilde{H},H_{1},\dots,H_{p}$, there is a natural way to
combine them into a single graph $H$ of treewidth $\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta}+s+h(r)\cdot\log 2t$. In more detail, initially, we
just take a disjoint union of all the graphs $\tilde{H},H_{1},\dots,H_{p}$,
keeping all copies of the different vertices separately. Next, we identify all
the copies of the elevated vertices. Finally, for each $i$, as both
$\tilde{f}$ and $f_{i}$ are clique-preserving, we simply take two clique
copies of $Q_{i}$ from $\tilde{f}$ and $f_{i}$, and identify the respective
vertices in these two clique copies. Note that every vertex $v\in Q_{i}$ is an
elevated vertex in $\mathcal{G}_{i}$, and thus $f_{i}(v)$ is unique. The
embedding $f$ is defined as follows: For $v\in\tilde{\mathcal{G}}$, set
$f(v)=\tilde{f}(v)$, while for
$v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$ for some $i$, set
$f(v)=f_{i}(v)$.
We now define the subset $M\subseteq V$. Every vertex $v\in\tilde{M}$ joins
$M$. A vertex $v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$ joins $M$ if
and only if $v\in M_{i}$ and $Q_{i}\subseteq\tilde{M}$. Note that for vertices
in $\tilde{\mathcal{G}}$, property (1) holds trivially, while for
$v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$, using the induction
hypothesis and a union bound,
$\Pr\left[v\notin M\right]\leq\Pr\left[v\notin
M_{i}\right]+\Pr\left[Q_{i}\nsubseteq\tilde{M}\right]\leq\delta\cdot\log
2t^{\prime}+\delta\leq\delta\cdot\log 2t\leavevmode\nobreak\ .$
Hence property (1) holds. Note that $f$ is clique-preserving as every clique
must be contained in either $\tilde{\mathcal{G}}$ or some $\mathcal{G}_{i}$.
Finally, we show that property (2) holds. Consider a vertex $u\in M$ and $v\in
V$. We proceed by case analysis.
* •
Suppose a shortest path from $u$ to $v$ goes through a vertex $z\in S$ (this
includes the case where either $u$ or $v$ is in $S$). Then for every
$u^{\prime}\in f(u)$ and $v^{\prime}\in f(v)$, it holds that
$d_{H}(u^{\prime},v^{\prime})\leq
d_{H}(u^{\prime},f(z))+d_{H}(f(z),v^{\prime})=d_{G}(u,z)+d_{G}(z,v)=d_{G}(u,v)$.
* •
Else, if both $u,v\in\tilde{G}$, then by Lemma 15, $\max_{u^{\prime}\in
f(u),v^{\prime}\in
f(v)}d_{H}(u^{\prime},v^{\prime})\leq\max_{u^{\prime}\in\tilde{f}(u),v^{\prime}\in\tilde{f}(v)}d_{\tilde{H}}(u^{\prime},v^{\prime})\leq
d_{\mathcal{\tilde{G}}}(u,v)+\epsilon D=d_{G}(u,v)+\epsilon D$.
* •
Else, if there is an $i\in[p]$ such that both
$u,v\in\mathcal{G}_{i}\setminus\tilde{G}$, then by the induction hypothesis
$\max_{u^{\prime}\in f(u),v^{\prime}\in
f(v)}d_{H}(u^{\prime},v^{\prime})\leq\max_{u^{\prime}\in
f_{i}(u),v^{\prime}\in f_{i}(v)}d_{H_{i}}(u^{\prime},v^{\prime})\leq
d_{\mathcal{G}_{i}}(u,v)+\epsilon D=d_{G}(u,v)+\epsilon D$.
* •
Else, if $u\in\tilde{G}$ and there is an $i\in[p]$ such that
$v\in\mathcal{G}_{i}\setminus\tilde{G}$. There is necessarily a vertex $x\in Q_{i}$ such that
there is a shortest path from $u$ to $v$ in $G$ going through $x$. Let
$\hat{x}$ be the copy of $x$ used to connect between $\tilde{H}$ and $H_{i}$.
Note that there is an edge between $\hat{x}$ to every copy $v^{\prime}\in
f_{i}(v)$ in $H_{i}$. In addition, as $u\in\tilde{M}$, by the second case it
holds that $\max_{u^{\prime}\in
f(u)}d_{H}(u^{\prime},\hat{x})\leq\max_{u^{\prime}\in f(u),x^{\prime}\in
f(x)}d_{H}(u^{\prime},\hat{x})\leq d_{G}(u,x)+\epsilon D$. We conclude
$\displaystyle\max_{u^{\prime}\in f(u),v^{\prime}\in
f(v)}d_{H}(u^{\prime},v^{\prime})$ $\displaystyle\leq\max_{u^{\prime}\in
f(u)}d_{H}(u^{\prime},\hat{x})+\max_{v^{\prime}\in
f(v)}d_{H}(\hat{x},v^{\prime})$ $\displaystyle\leq d_{G}(u,x)+\epsilon
D+d_{G}(x,v)=d_{G}(u,v)+\epsilon D\leavevmode\nobreak\ .$ (5)
* •
Else, if $v\in\tilde{G}$ and there is an $i\in[p]$ such that
$u\in\mathcal{G}_{i}\setminus\tilde{G}$. There is necessarily a vertex $x\in
Q_{i}$ such that there is a shortest path from $u$ to $v$ in $G$ going through
$x$. As $u\in M$ it follows that $x\in\tilde{M}\subseteq M$. Let $\hat{x}$ be
the copy of $x$ used to connect between $\tilde{H}$ and $H_{i}$; we observe
that inequality (5) holds in this case.
* •
Else, there are $i\neq j$ such that $u\in\mathcal{G}_{i}\setminus\tilde{G}$
and $v\in\mathcal{G}_{j}\setminus\tilde{G}$. There is necessarily a vertex
$x\in Q_{i}$ such that there is a shortest path from $u$ to $v$ in $G$ going
through $x$. As $u\in M$ it follows that $x\in\tilde{M}\subseteq M$. Let
$\hat{x}$ be the copy of $x$ used to connect between $\tilde{H}$ and $H_{i}$.
By the fourth case, it holds that $\max_{x^{\prime}\in f(x),v^{\prime}\in
f(v)}d_{H}(x^{\prime},v^{\prime})\leq d_{G}(x,v)+\epsilon D$. Thus,
$\displaystyle\max_{u^{\prime}\in f(u),v^{\prime}\in
f(v)}d_{H}(u^{\prime},v^{\prime})$ $\displaystyle\leq\max_{u^{\prime}\in
f(u)}d_{H}(u^{\prime},\hat{x})+\max_{v^{\prime}\in
f(v)}d_{H}(\hat{x},v^{\prime})$ $\displaystyle\leq
d_{G}(u,x)+d_{G}(x,v)+\epsilon D=d_{G}(u,v)+\epsilon D\leavevmode\nobreak\ .$
∎
## 7 Clan Embedding for Minor-Free Graphs
This section is devoted to proving Theorem 5 (restated below for convenience).
The proof of Theorem 5 builds on an approach similar to that of Theorem 4;
however, it is more delicate and considerably more involved. We present the
proof here without assuming familiarity with the proof of Theorem 4.
Nonetheless, we recommend that the reader first understand the proof of
Theorem 4 before reading
this section. See 5
###### Remark 3.
Note that Theorem 5 implies a weak version of Theorem 4, where the distortion
guarantee holds for pairs $u,v\in M$ rather than for $u\in M$ and $v\in V$:
simply use the chief part $\chi$ as a Ramsey type embedding and set
$M=\\{v\mid|f(v)|=1\\}$. Interestingly, this weaker version is still
sufficient for our application to the metric $\rho$-independent set problem
(Theorem 7).
We begin with Lemma 18, which handles the special case of nearly-embeddable graphs.
Later, we will generalize to minor-free graphs via clique-sums. Specifically,
inductively we will use Lemma 18 for each piece, and integrate it to the
general embedding. However, for this integration to go through, we will need
the intermediate embedding to be clique-preserving. As a consequence, we will
not attempt to bound the size of $f$ directly. Instead, for every vertex $v$,
$f(v)$ will be the union of two sets $\chi(v)$ and $\psi(v)$. Eventually, for
the clan embedding, we will take _one copy from each set_. We will say that
the embedding succeeds on a vertex $v$ if $\psi(v)=\emptyset$. (In the
following lemma, $\bigcupdot$ denotes the disjoint-union operation.)
###### Lemma 18.
Consider a nearly $h$-embeddable $n$-vertex graph $G=(V,E,w)$ with set of
apices $\Psi$, diameter $D$, and parameters $\epsilon\in(0,\frac{1}{4})$,
$\delta\in(0,1)$. Then there is a distribution over one-to-many, dominating
embeddings $f$ into treewidth $O_{h}(\frac{\log n}{\epsilon\delta})$ graphs,
such that for every vertex $v\in V$, $f(v)$ can be partitioned into sets
$\chi(v),\psi(v)$ where $\chi(v)\bigcupdot\psi(v)=f(v)$. It holds that:
1. 1.
For every pair of vertices $u,v$ (note that $\psi(v)$ might be an empty
set; a maximum over an empty set is defined to be $\infty$),
$\min\left\\{\max_{u^{\prime}\in\chi(u),v^{\prime}\in\chi(v)}d_{H}(u^{\prime},v^{\prime}),\max_{u^{\prime}\in\psi(u),v^{\prime}\in\chi(v)}d_{H}(v^{\prime},u^{\prime})\right\\}\leq
d_{G}(u,v)+\epsilon D\leavevmode\nobreak\ .$ (6)
2. 2.
We say that $f$ fails on a vertex $v$ if $\psi(v)\neq\emptyset$. For a clique
$Q\subseteq V$, we say that $f$ fails on $Q$ if it fails on some vertex in
$Q$. For every clique $Q\subseteq V$, $\Pr[f\mbox{ fails on }Q]\leq\delta$.
3. 3.
Consider a clique $Q$, one of the following holds:
1. (a)
$f$ succeeds on $Q$. In particular $\chi(Q)$ contains a clique copy of $Q$.
2. (b)
$f$ fails on $Q$, and $\chi(Q)$ contains a clique copy of $Q$. In addition,
consider the set
$Q^{F}=\\{v\in Q\mid\psi(v)\neq\emptyset\\}$, then $\psi(Q^{F})$ contains a clique
copy of $Q^{F}$.
3. (c)
$f$ fails on $Q$, and $f(Q)$ contains two clique copies $Q^{1},Q^{2}$ of $Q$
such that for every vertex $v\in Q\setminus\Psi$, both $\chi(v)\cap(Q^{1}\cup
Q^{2})$ and $\psi(v)\cap(Q^{1}\cup Q^{2})$ are singletons. In this case, in
addition to equation (6), it also holds that for every $u\in V$ and $v\in
Q\setminus\Psi$,
$\min\left\\{\max_{u^{\prime}\in\chi(u),v^{\prime}\in\psi(v)}d_{H}(u^{\prime},v^{\prime}),\max_{u^{\prime}\in\psi(u),v^{\prime}\in\psi(v)}d_{H}(u^{\prime},v^{\prime})\right\\}\leq
d_{G}(u,v)+\epsilon D\leavevmode\nobreak\ .$ (7)
###### Proof.
Consider a nearly $h$-embedded graph $G=(V,E,w)$. Assume w.l.o.g. that $D=1$,
otherwise we can scale accordingly. We assume that $1/\delta$ is an integer;
otherwise, we may replace $\delta$ with $\delta^{\prime}$ satisfying
$\frac{1}{\delta^{\prime}}=\lceil\frac{1}{\delta}\rceil$. We will construct
$q=\frac{8}{\delta}$ embeddings satisfying property (1) of Lemma 18. The final
embedding will be obtained by choosing one of these embeddings uniformly at
random. Denote by $G^{\prime}=G[V\setminus\Psi]$ the induced subgraph obtained
by removing the apices. In the tree decomposition of $H$ that we will construct,
the set $\Psi$ will belong to all the bags (with edges towards all the
vertices). Thus we can assume that $G^{\prime}$ is connected, as otherwise we
can simply solve the problem on each connected component separately, and
combine the solutions by taking the union of all graphs/embeddings.
Let $r\in G^{\prime}$ be an arbitrary vertex. For
$\sigma\in\left\\{4,\dots,\frac{8}{\delta}\right\\}$, set
$\mathcal{I}_{-1,\sigma}=[0,\sigma]$,
$\mathcal{I}_{-1,\sigma}^{+}=[0,\sigma+2]$, and for $j\geq 0$, set
$\mathcal{I}_{j,\sigma}=\left[\frac{8j}{\delta}+\sigma,\frac{8(j+1)}{\delta}+\sigma\right)$,
and
$\mathcal{I}_{j,\sigma}^{+}=\left[\frac{8j}{\delta}+\sigma-2,\frac{8(j+1)}{\delta}+\sigma+2\right)$.
Set $U_{j,\sigma}=\left\\{v\in G^{\prime}\mid
d_{G^{\prime}}(r,v)\in\mathcal{I}_{j,\sigma}\right\\}$ and similarly
$U_{j,\sigma}^{+}$ w.r.t. $I_{j,\sigma}^{+}$. Note that by the triangle
inequality, for every pair of neighboring vertices $u,v$ it holds that
$d_{G}(u,v)\leq D=1$; thus, $u\in U_{j,\sigma}$ implies $v\in
U^{+}_{j,\sigma}$. Let $G_{j,\sigma}$ be the graph induced by
$U^{+}_{j,\sigma}$, plus the vertex $r$. In addition, we add edges from the
vertex $r$ towards all the vertices with neighbors in
$(\cup_{j^{\prime}<j}U^{+}_{j^{\prime},\sigma})\setminus U^{+}_{j,\sigma}$ (where the weight of
a new edge $\left(r,v\right)$ is $d_{G}(r,v)$). Equivalently, $G_{j,\sigma}$
can be constructed by taking the graph induced by $\cup_{j^{\prime}\leq
j}U^{+}_{j^{\prime},\sigma}$ and contracting all the internal edges out of
$U_{j,\sigma}^{+}$ into $r$. Note that all the edges towards $r$ have weight
at most $D=1$. Furthermore, for every vertex $v\in G_{j,\sigma}$,
$d_{G_{j,\sigma}}(v,r)<1+\frac{8}{\delta}+4$. Thus $G_{j,\sigma}$ is a nearly
$h$-embedded graph with diameter at most
$\frac{16}{\delta}+10=O(\frac{1}{\delta})$ and no apices. See Figure 4 for an
illustration.
Figure 4: On the left is the graph $G^{\prime}$. $r$ is the big black vertex
in the middle. The dashed orange lines separate between the layers of
$U_{-1,\sigma},U_{0,\sigma},U_{1,\sigma},\dots$. The two blue lines are the
boundaries of $U^{+}_{0,\sigma}$. All the vertices in $U^{+}_{0,\sigma}$ (and
the edges between them) are black, while all other vertices (and the edges
incident on them) are gray. On the right is the graph $G_{0,\sigma}$ with
vertex set $U^{+}_{0,\sigma}\cup\\{r\\}$, where the edges added from $r$ to
vertices with neighbors in $U^{+}_{-1,\sigma}\setminus U^{+}_{0,\sigma}$ are
marked in red.
Fix some $\sigma$ and $j$. Using Lemma 1 with parameter
$\Theta(\epsilon\cdot\delta)$, we construct a dominating one-to-many embedding
$f_{j,\sigma}$, of $G_{j,\sigma}$ into a graph $H_{j,\sigma}$ with treewidth
$O_{h}(\frac{\log n}{\epsilon\cdot\delta})$, such that $f_{j,\sigma}$ is
clique preserving and has additive distortion
$\Theta(\epsilon\cdot\delta)\cdot O(\frac{1}{\delta})=\epsilon$. After the
application of Lemma 1, we will add edges from $r$ to all the other vertices
(where the weight of a new edge $\left(r,v\right)$ is $d_{G}(r,v)$). Note that
this increases the treewidth by at most $1$. Further, we will assume that
there is a bag containing only the vertex $r$ (as we can simply add such a
bag). Next, fix $\sigma$. Let $H^{\prime}_{\sigma}$ be a union of the graphs
$\cup_{j\geq-1}H_{j,\sigma}$. We identify all the copies of the vertex $r$
with each other, while the copies of every other vertex that participates in
more than a single graph remain separate. Formally, we define a one-to-many embedding $f_{\sigma}$,
where $f_{\sigma}(r)$ equals the unique vertex $r$, and for every other
vertex $v\in V\setminus\Psi$,
$f_{\sigma}(v)=\bigcup_{j\geq-1}f_{j,\sigma}(v)$. Note that
$H^{\prime}_{\sigma}$ has a tree decomposition of width $O_{h}(\frac{\log
n}{\epsilon\cdot\delta})$, by identifying the bag containing only $r$ in all
the graphs. Finally, we create the graph $H_{\sigma}$ by adding the set $\Psi$
with edges towards all the vertices in $H^{\prime}_{\sigma}$, where the weight
of a new edge $\left(u^{\prime},v\right)$ for $u^{\prime}\in f_{\sigma}(u)$ and
$v\in\Psi$ is $d_{G}(u,v)$. For $v\in\Psi$, set $f_{\sigma}(v)=\\{v\\}$. As
$\Psi=O_{h}(1)$, $H_{\sigma}$ has treewidth $O_{h}(\frac{\log
n}{\epsilon\cdot\delta})$. The one-to-many embedding $f_{\sigma}$ is
dominating. This follows by the triangle inequality as every edge $\\{u,v\\}$
in the graph has weight $d_{G}(u,v)$. Finally, the embedding $f$ is chosen to
equal $f_{\sigma}$, for $\sigma$ chosen uniformly at random. This concludes
the definition of the embedding $f$.
Next, we define the partition $\chi_{\sigma}(v)\bigcupdot\psi_{\sigma}(v)$ of
$f_{\sigma}(v)$ for each vertex $v\in V$ as follows:
* •
If $v\in\Psi\cup\\{r\\}$, then there is a single copy of $v$ in $f_{\sigma}$.
Set $\chi_{\sigma}(v)=f_{\sigma}(v)$ and $\psi_{\sigma}(v)=\emptyset$.
* •
Else, let $j$ be the unique index such that $v\in U_{j,\sigma}$. Set
$\chi_{\sigma}(v)=f_{j,\sigma}(v)$. If there is another index $j^{\prime}$
such that $v\in U^{+}_{j^{\prime},\sigma}$, set
$\psi_{\sigma}(v)=f_{j^{\prime},\sigma}(v)$, otherwise set
$\psi_{\sigma}(v)=\emptyset$.
Clearly, as there are at most $2$ indices $j$ such that $v\in
U^{+}_{j,\sigma}$, $\chi_{\sigma}(v)\bigcupdot\psi_{\sigma}(v)=f_{\sigma}(v)$.
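The $\chi/\psi$ split above can be illustrated with a small sketch, assuming the distance $d=d_{G^{\prime}}(r,v)$ is given. `split_copies` is a hypothetical helper (not from the paper) returning the ring index of the $\chi$ copy (the home ring) and of the at most one adjacent padded ring contributing a $\psi$ copy:

```python
def split_copies(d, delta, sigma):
    """For a vertex at distance d from r, return the ring indices of its
    chi copies (home ring j with d in I_{j,sigma}) and its psi copies
    (an adjacent padded ring I+_{j',sigma} with j' != j that also
    contains d, if any). Rings have width w = 8/delta and padded rings
    extend by 2 on both sides; the vertex fails iff psi is non-empty."""
    w = 8.0 / delta
    home = -1 if d <= sigma else int((d - sigma) // w)
    chi, psi = [home], []
    if home != -1 and d <= sigma + 2:   # also in I+_{-1,sigma} = [0, sigma + 2]
        psi.append(-1)
    for j in range(max(0, home - 1), home + 2):
        if j != home and w * j + sigma - 2 <= d < w * (j + 1) + sigma + 2:
            psi.append(j)
    return chi, psi
```

For instance, with $\delta=1$ and $\sigma=4$, a vertex at distance $7$ lies only in its home ring (the embedding succeeds on it), while a vertex at distance $5$ or $11$ also falls into an adjacent padded ring and receives a $\psi$ copy there.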
Next, we prove property (1), the stretch bound. Consider a pair of vertices
$u,v\in V$. If $v\in\Psi\cup\\{r\\}$ then $f_{\sigma}(v)$ is a singleton with
an edge towards every copy of $u$, thus property (1) holds. The same argument
holds also if $u\in\Psi\cup\\{r\\}$. Otherwise, if
$d_{G^{\prime}}(u,v)>d_{G}(u,v)$, then the shortest path between $u$ and $v$ in
$G$ goes through an apex vertex $z\in\Psi$. In particular, $f_{\sigma}(z)$ is
a singleton with an edge towards every other vertex. It follows that in
$H_{\sigma}$, the distance between every two copies in $f_{\sigma}(v)$ and
$f_{\sigma}(u)$ is exactly $d_{G}(u,z)+d_{G}(z,v)=d_{G}(u,v)$. Else,
$d_{G^{\prime}}(u,v)=d_{G}(u,v)$. Let $j$ be the unique index such that $v\in
U_{j,\sigma}$, then $u\in U^{+}_{j,\sigma}$. Furthermore,
$d_{G_{j,\sigma}}(u,v)=d_{G^{\prime}}(u,v)$ since the entire shortest path
between them is in $U^{+}_{j,\sigma}$. By Lemma 1,
$\displaystyle\min\left\\{\max_{u^{\prime}\in\chi_{\sigma}(u),v^{\prime}\in\chi_{\sigma}(v)}d_{H_{\sigma}}(u^{\prime},v^{\prime}),\max_{u^{\prime}\in\psi_{\sigma}(u),v^{\prime}\in\chi_{\sigma}(v)}d_{H_{\sigma}}(v^{\prime},u^{\prime})\right\\}$
$\displaystyle\qquad\leq\max_{u^{\prime}\in f_{j,\sigma}(u),v^{\prime}\in
f_{j,\sigma}(v)}d_{H_{j,\sigma}}(u^{\prime},v^{\prime})\leavevmode\nobreak\
\leq\leavevmode\nobreak\ d_{G_{j,\sigma}}(u,v)+\epsilon D\leavevmode\nobreak\
=\leavevmode\nobreak\ d_{G^{\prime}}(u,v)+\epsilon D\leavevmode\nobreak\
=\leavevmode\nobreak\ d_{G}(u,v)+\epsilon D\leavevmode\nobreak\ .$
Next we argue property (2), the failure probability of a clique. Recall that
$f,\chi,\psi$ will equal $f_{\sigma},\chi_{\sigma},\psi_{\sigma}$ for
$\sigma\in\\{4,\dots,\frac{8}{\delta}\\}$ chosen uniformly at random. Consider
some clique $Q$, we can assume w.l.o.g. that $Q$ does not contain any apex
vertices (as $f$ never fails on an apex vertex). Let $s_{Q},t_{Q}\in Q$ be the
closest and farthest vertices from $r$ in $G^{\prime}$, respectively. Then
$d_{G^{\prime}}(r,t_{Q})-d_{G^{\prime}}(r,s_{Q})\leq
$d_{G^{\prime}}(s_{Q},t_{Q})\leq D=1$. $f_{\sigma}$ fails on $Q$ iff there is a
non-empty intersection between the interval
$[d_{G^{\prime}}(r,s_{Q}),d_{G^{\prime}}(r,t_{Q})]$ (of length at most $1$)
and the interval $[\frac{8j}{\delta}+\sigma-2,\frac{8j}{\delta}+\sigma+2)$ for
some $j$. Note that there are at most $5$ choices of $\sigma$ for which this
happens. We conclude that $\Pr[f\mbox{ fails on
}Q]\leq\frac{5}{\nicefrac{{8}}{{\delta}}-3}\leq\delta$.
Figure 5: Illustration of the different cases in property (3). The green area
marks all the vertices in $U^{+}_{j,\sigma}$. The vertices in $U_{j,\sigma}$
are enclosed between the two black semicircles. The vertices in
$U^{+}_{j,\sigma}\cap U^{+}_{j+1,\sigma}$ (resp. $U^{+}_{j-1,\sigma}\cap
U^{+}_{j,\sigma}$) are enclosed between the red (resp. orange) dashed
semicircles. In the first case (a), all the vertices of $Q$ are in
$U_{j,\sigma}$ and no vertex failed. In the second case (b), all the vertices
of $Q$ are in $U_{j,\sigma}$ and some vertices failed. In the third case (c),
the vertices of $Q$ non-trivially partitioned between $U_{j,\sigma}$ and
$U_{j+1,\sigma}$, and all of them failed.
Finally, we prove property (3), clique preservation. Consider a clique $Q$;
note that we can assume that $Q\subseteq G^{\prime}$, as $f_{\sigma}$ will not
fail on any apex. Furthermore, if $r\in Q$, then no vertex in $Q$ fails as
$Q\subseteq B_{G^{\prime}}(r,1)\subseteq U_{-1,\sigma}\setminus
U^{+}_{0,\sigma}$. Thus we can assume that $r\notin Q$. We proceed by case
analysis; the cases are illustrated in Figure 5.
* (a)
if $f_{\sigma}$ succeeds on $Q$, then $f_{\sigma}(Q)=\chi_{\sigma}(Q)$. In
particular there is a unique $j$ such that $Q\subseteq U_{j,\sigma}$. As
$f_{j,\sigma}$ is clique-preserving, it contains a clique copy of $Q$. In
particular, $\chi_{\sigma}(Q)$ contains a clique copy of $Q$.
Otherwise, $f_{\sigma}$ fails on $Q$. Then, there is a unique index $j$ such
that the intersection of $Q$ with both $U^{+}_{j,\sigma}$ and
$U^{+}_{j+1,\sigma}$ is non-empty.
* (b)
First, consider the case that $Q\subseteq U_{j,\sigma}$ (the case $Q\subseteq
U_{j+1,\sigma}$ is symmetric). Here $\chi_{\sigma}(Q)=f_{j,\sigma}(Q)$, and
$\psi_{\sigma}(Q)=\psi_{\sigma}(Q^{F}_{\sigma})=f_{j+1,\sigma}(Q^{F}_{\sigma})$,
where $Q^{F}_{\sigma}=\\{v\in Q\mid\psi_{\sigma}(v)\neq\emptyset\\}$. As
$f_{j,\sigma}$ and $f_{j+1,\sigma}$ are clique-preserving, $\chi_{\sigma}(Q)$
contains a clique copy of $Q$, while $\psi_{\sigma}(Q^{F}_{\sigma})$ contains a
clique copy of $Q^{F}_{\sigma}$.
* (c)
Finally, consider the case where $Q$ intersects both $U_{j,\sigma}$ and
$U_{j+1,\sigma}$. It holds that
$d_{G^{\prime}}(r,s_{Q})<\frac{8(j+1)}{\delta}+\sigma\leq
d_{G^{\prime}}(r,t_{Q})$, hence $\frac{8(j+1)}{\delta}+\sigma-1\leq
d_{G^{\prime}}(r,s_{Q})$ and
$d_{G^{\prime}}(r,t_{Q})<\frac{8(j+1)}{\delta}+\sigma+1$ (here $s_{Q},t_{Q}\in
Q$ are the closest and farthest vertices from $r$, respectively). Necessarily,
$Q\subseteq U^{+}_{j,\sigma}\cap U^{+}_{j+1,\sigma}$. In particular, as
$f_{j,\sigma}(Q)$, and $f_{j+1,\sigma}(Q)$ are clique-preserving, they contain
clique copies $Q^{1},Q^{2}$ of $Q$ (respectively). Furthermore,
$Q^{1},Q^{2}\subseteq f_{\sigma}(Q)$, and for every vertex $v\in Q$, both
$\chi(v)\cap(Q^{1}\cup Q^{2})$ and $\psi(v)\cap(Q^{1}\cup Q^{2})$ are
singletons.
It remains to prove the additional stretch guarantee. Consider a vertex $v\in
Q$, and suppose that $v\in U_{j,\sigma}$ (the case $v\in U_{j+1,\sigma}$ is
symmetric). Here $\chi_{\sigma}(v)=f_{j,\sigma}(v)$ and
$\psi_{\sigma}(v)=f_{j+1,\sigma}(v)$. Consider some vertex $u\in V$; in a
similar manner to the general distortion argument, if either
$u\in\Psi\cup\\{r\\}$, or the shortest path from $u$ to $v$ in $G$ goes
through $\Psi\cup\\{r\\}$, then the distance between every two copies in
$f_{\sigma}(v)$ and $f_{\sigma}(u)$ is exactly $d_{G}(u,v)$, and hence
equation (7) holds. Else, $d_{G^{\prime}}(u,v)=d_{G}(u,v)$, and it holds that
$d_{G^{\prime}}(r,u)\geq d_{G^{\prime}}(r,v)-d_{G^{\prime}}(u,v)\geq
d_{G^{\prime}}(r,s_{Q})-1\geq\frac{8(j+1)}{\delta}+\sigma-2$, thus $u\in
U^{+}_{j+1,\sigma}$. Furthermore,
$d_{G_{j+1,\sigma}}(u,v)=d_{G^{\prime}}(u,v)$ (as the entire shortest path
between them is in $U^{+}_{j+1,\sigma}$). By Lemma 1,
$\displaystyle\min\left\\{\max_{u^{\prime}\in\chi(u),v^{\prime}\in\psi(v)}d_{H}(u^{\prime},v^{\prime}),\max_{u^{\prime}\in\psi(u),v^{\prime}\in\psi(v)}d_{H}(u^{\prime},v^{\prime})\right\\}$
$\displaystyle\quad\leq\max_{u^{\prime}\in f_{j+1,\sigma}(u),v^{\prime}\in
f_{j+1,\sigma}(v)}d_{H_{j+1,\sigma}}(u^{\prime},v^{\prime})\,\leq\,d_{G_{j+1,\sigma}}(u,v)+\epsilon
D\,=\,d_{G^{\prime}}(u,v)+\epsilon D\,=\,d_{G}(u,v)+\epsilon
D\leavevmode\nobreak\ .$
∎
Consider a $K_{r}$-minor-free graph $G$, and let $\mathbb{T}$ be its clique-
sum decomposition. That is, $G=\bigcup_{(G_{i},G_{j})\in
E(\mathbb{T})}G_{i}\oplus_{h}G_{j}$, where each $G_{i}$ is a nearly
$h(r)$-embeddable graph. We call the clique involved in the clique-sum of
$G_{i}$ and $G_{j}$ the _joint set_ of the two graphs. Let $\phi_{h}$ be some
function depending only on $h$ such that the treewidth of the graphs
constructed in Lemma 18 is bounded by $\phi_{h}\cdot\frac{\log
n}{\epsilon\cdot\delta}$. The embedding of $G$ is defined recursively, where
some vertices from former levels will be added to future levels as apices. In
order to control the number of such apices, we will use the concept of enhanced
minor-free graphs introduced in Definition 9 in Section 6. We will prove the
following lemma by induction on $t$:
###### Lemma 19.
Given an $(r,s,t)$-enhanced minor-free graph $G$ of diameter $D$ with a
specified set $S$ of elevated vertices, and parameters
$\epsilon\in(0,\frac{1}{4})$, $\delta\in(0,1)$, there is a distribution over
one-to-many, clique-preserving, dominating embeddings $f$ into graphs of
treewidth $\phi_{h(r)}\cdot\frac{\log n}{\epsilon\cdot\delta}+s+h(r)\cdot\log
t$, such that for every vertex $v\in V$, $f(v)$ can be partitioned into sets
$g_{1}(v),g_{2}(v),\dots$ where $\bigcupdot_{j\geq 1}g_{j}(v)=f(v)$.
Furthermore,
1. 1.
For every $v\in V$, let $q_{v}$ be the maximal index $j$ such that
$g_{j}(v)\neq\emptyset$, then $\mathbb{E}[q_{v}]\leq(1+\delta)^{\log 2t}$. In
addition, if $v\in S$ then $|f(v)|=1$ and thus $q_{v}=1$.
2. 2.
For every pair of vertices $u,v$, $\min_{j}\max_{u^{\prime}\in
g_{j}(u),v^{\prime}\in g_{1}(v)}d_{H}(u^{\prime},v^{\prime})\leq
d_{G}(u,v)+\epsilon D$.
Assuming Lemma 19, Theorem 5 easily follows.
###### Proof of Theorem 5.
Note that every $K_{r}$-minor-free graph is $(r,0,n)$-enhanced minor-free. We
apply Lemma 19 using parameters $\epsilon$ and
$\delta^{\prime}=\frac{\delta}{2\log 2n}$. For every vertex $v\in V$, let
$g(v)\subseteq f(v)$ be a set containing a single copy from each non-empty set
$g_{j}(v)$. Let $\chi(v)=g(v)\cap g_{1}(v)$ be the copy in $g(v)$ from
$g_{1}(v)$. The distortion guarantee is straightforward to verify. The
treewidth of the resulting graph is $\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta^{\prime}}+0+h(r)\cdot\log
n=O_{r}(\frac{\log^{2}n}{\epsilon^{2}})$. Finally, for every vertex $v\in V$,
it holds that $\mathbb{E}[|g(v)|]\leq(1+\frac{\delta}{2\log 2n})^{\log
2n}<e^{\frac{\delta}{2}}<1+\delta$. ∎
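For completeness, the last chain of inequalities in the proof above can be verified directly. Using $1+x<e^{x}$ for $x>0$, and $e^{x}\leq 1+x+x^{2}$ for $x\in(0,1)$,
$\left(1+\frac{\delta}{2\log 2n}\right)^{\log 2n}<e^{\frac{\delta}{2}}\leq 1+\frac{\delta}{2}+\frac{\delta^{2}}{4}<1+\frac{3\delta}{4}<1+\delta\leavevmode\nobreak\ ,$
where the third step uses $\delta<1$.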
The rest of the section is devoted to proving Lemma 19.
###### Proof of Lemma 19.
The claim is proved by induction on $t$. It follows from Lemma 18 that Lemma
19 holds for the base case $t=1$; the treewidth will be
$\phi_{h(r)}\cdot\frac{\log n}{\epsilon\cdot\delta}+s$ since we add all
elevated vertices to every bag.
We now turn to the induction step. Consider an $(r,s,t)$-enhanced minor-free
graph $G$. Let $G^{\prime}$ be a $K_{r}$-minor-free graph obtained from $G$ by
removing the set $S$ (of size at most $s$). Let $\mathbb{T}$ be the clique-sum
decomposition of $G^{\prime}$ with $t$ pieces. Using Lemma 17, choose a
central piece $\tilde{G}$ of $\mathbb{T}$. Let
$G_{1},\dots,G_{p}$ be the neighbors of $\tilde{G}$ in $\mathbb{T}$. Note that
$\mathbb{T}\setminus\tilde{G}$ contains $p$ connected components
$\mathbb{T}_{1},\dots,\mathbb{T}_{p}$, where $G_{i}\in\mathbb{T}_{i}$, and
$\mathbb{T}_{i}$ contains at most $|\mathbb{T}|/2=t/2$ pieces. Let $Q_{i}$ be
the clique used in the clique-sum of $G_{i}$ with $\tilde{G}$ in $\mathbb{T}$.
For every $i$, we will add edges from the vertices of $Q_{i}$ to all the
vertices in $\mathbb{T}_{i}$; this is equivalent to turning the vertices of
$Q_{i}$ into apices. Every new edge $\\{u,v\\}$ will have weight $d_{G}(u,v)$. Let
$\mathcal{G}_{i}$ be the graph induced on the vertices of $\mathbb{T}_{i}\cup
S$ (and the newly added edges). Note that $\mathcal{G}_{i}$ is an
$(r,s^{\prime},t^{\prime})$-enhanced minor-free graph for
$t^{\prime}\leq\frac{t}{2}$ and $s^{\prime}\leq s+|Q_{i}|\leq s+h(r)$.
Further, for every $u,v\in\mathcal{G}_{i}$, it holds that
$d_{\mathcal{G}_{i}}(u,v)=d_{G}(u,v)$, and thus $\mathcal{G}_{i}$ has diameter
at most $D$. Applying the inductive hypothesis to $\mathcal{G}_{i}$, we sample
a dominating embedding $f_{i}$ into $H_{i}$, such that for every
$v\in\mathcal{G}_{i}$ we have $f_{i}(v)=\bigcupdot_{j\geq 1}g_{i,j}(v)$. We
denote by $q_{v}^{i}$ the maximal index such that
$g_{i,q_{v}^{i}}(v)\neq\emptyset$. Note that properties (1) and (2) hold and
furthermore, $H_{i}$ has treewidth $\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta}+s^{\prime}+h(r)\cdot\log
2t^{\prime}\leq\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta}+s+h(r)\cdot\log 2t$. In addition, for a vertex $v\in
S\cup Q_{i}$, $|f_{i}(v)|=1$ (thus $q_{v}^{i}=1$), while for every vertex
$v\in V$, $\mathbb{E}[q_{v}^{i}]\leq(1+\delta)^{\log
2t^{\prime}}\leq(1+\delta)^{\log t}$.
Let $\tilde{\mathcal{G}}$ be the graph induced on $\tilde{G}\cup S$. We apply
Lemma 18 to $\tilde{\mathcal{G}}$ to sample a dominating one-to-many embedding
$\tilde{f}$ into $\tilde{H}$, such that for each vertex
$v\in\tilde{\mathcal{G}}$, $\tilde{f}(v)$ is partitioned into
$\tilde{\chi}(v)$ and $\tilde{\psi}(v)$. $\tilde{H}$ has treewidth
$\phi_{h(r)}\cdot\frac{\log n}{\epsilon\cdot\delta}+s$ (this follows from Lemma 18:
we first remove all apices and then add them back). Note also that properties
(1), (2), and (3) hold.
We next describe how to combine the different parts into a single embedding.
The graph (and the induced embedding) will be created by identifying some
vertices in $\tilde{H}$ with vertices in each $H_{i}$. Some of the graphs
$H_{i}$ will be duplicated, and we will have two copies of them (depending on
whether $\tilde{f}$ failed on $Q_{i}$). Note that the set $S$ has a single
copy everywhere, and thus for every $v\in S$, we will simply identify all the
vertices $\tilde{f}(v),f_{1}(v),\dots,f_{p}(v)$.
$\mbox{For a vertex}\leavevmode\nobreak\ v\in\tilde{\mathcal{G}}\mbox{, set
}\quad g_{1}(v)=\tilde{\chi}(v)\quad\mbox{ and }\quad
g_{2}(v)=\tilde{\psi}(v)\leavevmode\nobreak\ .$
Consider some $i\in[p]$. Note that the clique $Q_{i}$ belongs to the elevated set of $\mathcal{G}_{i}$. In
particular, for every vertex $v\in Q_{i}$, $f_{i}(v)$ is a singleton, and
$f_{i}(Q_{i})$ is a clique. We continue w.r.t. the $3$ cases in Lemma 18 (see
Figure 5 for an illustration of the cases):
* •
$\tilde{f}$ succeeds on $Q_{i}$: Here $\tilde{\psi}(Q_{i})=\emptyset$, and
$\tilde{\chi}(Q_{i})=g_{1}(Q_{i})$ contains a clique copy
$Q^{1}_{i}\subseteq\tilde{\chi}(Q_{i})$ of $Q_{i}$. We simply identify each
vertex in $f_{i}(Q_{i})$ with the corresponding copy in $Q^{1}_{i}$. We will
abuse notation and refer to $H_{i}$ as $H_{i}^{1}$, to $f_{i}$ as $f_{i}^{1}$,
and to $g_{i,j}$ as $g^{1}_{i,j}$.
$\mbox{For a vertex}\leavevmode\nobreak\
v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}},\quad\mbox{ for every }j\geq
1\quad\mbox{ set }\quad g_{j}(v)=g^{1}_{i,j}(v)\leavevmode\nobreak\ .$
* •
$\tilde{f}$ fails on $Q_{i}$, and $\tilde{\chi}(Q_{i})$ contains a clique copy
of $Q_{i}$: Denote by $Q_{i}^{1}\subseteq\tilde{\chi}(Q_{i})$ the promised
clique copy of $Q_{i}$. In addition, $\tilde{\psi}(Q^{F}_{i})$ is guaranteed
to contain a clique copy $Q_{i}^{2}$ of $Q^{F}_{i}=\\{v\in
Q_{i}\mid\tilde{\psi}(v)\neq\emptyset\\}$. We duplicate $H_{i}$ into two
graphs $H_{i}^{1}$ and $H_{i}^{2}$ with respective duplicate embeddings
$f_{i}^{1},f_{i}^{2}$. However, the vertices of $Q_{i}\setminus Q_{i}^{F}$ are
removed from $H_{i}^{2}$ and $f_{i}^{2}$. We combine $\tilde{H}$ with
$H_{i}^{1}$ (resp. $H_{i}^{2}$) by combining a clique copy from
$\tilde{\chi}(Q_{i})$ (resp. $\tilde{\psi}(Q^{F}_{i})$) with the corresponding
vertices from $f_{i}^{1}(Q_{i})$ (resp. $f_{i}^{2}(Q^{F}_{i})$) (recall that
they are apices and thus have a single copy).
* –
For every vertex $v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$, let
$q_{v}^{i}$ be the maximal index $j$ such that $g_{i,j}(v)\neq\emptyset$. For
every $j\in[1,q_{v}^{i}]$, set $g_{j}(v)=g^{1}_{i,j}(v)$ to be the
corresponding copies from $f^{1}_{i}(v)$, and
$g_{q_{v}^{i}+j}(v)=g^{2}_{i,j}(v)$ to be the corresponding copies from
$f^{2}_{i}(v)$.
* •
$\tilde{f}$ fails on $Q_{i}$, and $\tilde{f}(Q_{i})$ contains two clique
copies $Q_{i}^{1},Q_{i}^{2}$ of $Q_{i}$ such that for every $v\in Q_{i}$,
$Q_{i}^{1}\cup Q_{i}^{2}$ intersects both $\tilde{\chi}(v)$ and
$\tilde{\psi}(v)$: We duplicate $H_{i}$ into two graphs $H_{i}^{1}$ and
$H_{i}^{2}$ with respective duplicate embeddings $f_{i}^{1},f_{i}^{2}$. We
combine $\tilde{H}$ with $H_{i}^{1}$ (resp. $H_{i}^{2}$) by identifying
$Q_{i}^{1}$ (resp. $Q_{i}^{2}$) with $f_{i}^{1}(Q_{i})$ (resp.
$f_{i}^{2}(Q_{i})$) (recall that they are apices and thus have a single copy).
* –
For every vertex $v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$, let
$q_{v}^{i}$ be the maximal index $j$ such that $g_{i,j}(v)\neq\emptyset$. For
every $j\in[1,q_{v}^{i}]$, set $g_{j}(v)=g^{1}_{i,j}(v)$ to be the corresponding
copies from $f^{1}_{i}(v)$, and $g_{q_{v}^{i}+j}(v)=g^{2}_{i,j}(v)$ to be the
corresponding copies from $f^{2}_{i}(v)$.
We claim next that $f,g_{1},g_{2},\dots$ fulfill all the required properties.
First, note that $f$ is clique-preserving as every clique must be contained in
either $\tilde{\mathcal{G}}$ or some $\mathcal{G}_{i}$. Second, clearly $f$ is
dominating as the weight of every edge between a vertex in $f(v)$ and $f(u)$
is $d_{G}(u,v)$. Third, as we only identify between cliques, the graph $H$ has
treewidth
$\max\left\\{\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta}+s+h(r)\cdot\log
2t\quad,\quad\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta}+s\right\\}\quad=\quad\phi_{h(r)}\cdot\frac{\log
n}{\epsilon\cdot\delta}+s+h(r)\cdot\log 2t$
Fourth, it holds by definition that for every vertex $v\in V$,
$f(v)=\bigcupdot_{j}g_{j}(v)$.
Next, we prove property (1). Clearly, for a vertex $v\in S$, we identify
all its copies, and thus $f(v)$ is a singleton. Consider a vertex $v\in
V$. If $v\in\tilde{\mathcal{G}}$, then by Lemma 18
$\mathbb{E}[q_{v}]=1+\Pr\left[\tilde{f}\text{ fails on }v\right]\leq
1+\delta\leavevmode\nobreak\ .$
Else, consider $v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$ for some $i$,
and denote by $q_{v}^{i}$ the maximal index $j$ such that $g_{i,j}$ is non-
empty. We have
$\displaystyle\mathbb{E}[q_{v}]$
$\displaystyle=\mathbb{E}[q_{v}^{i}]\cdot\Pr\left[\tilde{f}\text{ succeeds on
}Q_{i}\right]+\mathbb{E}[2q_{v}^{i}]\cdot\Pr\left[\tilde{f}\text{ fails on
}Q_{i}\right]$
$\displaystyle=\mathbb{E}[q_{v}^{i}]\cdot\left(1+\Pr\left[\tilde{f}\text{
fails on }Q_{i}\right]\right)$ $\displaystyle\leq(1+\delta)^{\log
2\cdot\frac{t}{2}}\cdot(1+\delta)=(1+\delta)^{\log 2t}\leavevmode\nobreak\ ,$
where the first equality is because we have two copies of $H_{i}$ iff
$\tilde{f}$ fails on $Q_{i}$. The second equality is because $\Pr\left[\tilde{f}\text{
succeeds on }Q_{i}\right]=1-\Pr\left[\tilde{f}\text{ fails on }Q_{i}\right]$. The
final inequality follows by the induction hypothesis and Lemma 18.
Finally, we prove property (2). Consider a pair of vertices $u,v\in V$. We
proceed by case analysis.
* •
If a shortest path from $u$ to $v$ goes through a vertex $z\in S$ (this
includes the case where either $u$ or $v$ is in $S$): Then
$\displaystyle\min_{j}\max_{u^{\prime}\in g_{j}(u),v^{\prime}\in
g_{1}(v)}d_{H}(u^{\prime},v^{\prime})\quad\leq\quad\max_{u^{\prime}\in
f(u),v^{\prime}\in f(v)}d_{H}(u^{\prime},v^{\prime})$
$\displaystyle\qquad\quad\leq\quad\max_{u^{\prime}\in f(u),v^{\prime}\in
f(v)}d_{H}(u^{\prime},f(z))+d_{H}(f(z),v^{\prime})\quad=\quad
d_{G}(u,z)+d_{G}(z,v)\quad=\quad d_{G}(u,v)\leavevmode\nobreak\ .$
For the remaining cases, we assume that $d_{G^{\prime}}(u,v)=d_{G}(u,v)$
(recall that $G^{\prime}=G[V\setminus S]$).
* •
Else, if both $u,v\in\tilde{\mathcal{G}}$: Then by Lemma 18,
$\displaystyle\min_{j}\max_{u^{\prime}\in g_{j}(u),v^{\prime}\in
g_{1}(v)}d_{H}(u^{\prime},v^{\prime})$
$\displaystyle\leq\min\left\\{\max_{u^{\prime}\in\tilde{\chi}(u),v^{\prime}\in\tilde{\chi}(v)}d_{H}(u^{\prime},v^{\prime}),\max_{u^{\prime}\in\tilde{\psi}(u),v^{\prime}\in\tilde{\chi}(v)}d_{H}(u^{\prime},v^{\prime})\right\\}$
$\displaystyle\leq d_{G}(u,v)+\epsilon D\leavevmode\nobreak\ .$
* •
Else, if $u\in\tilde{\mathcal{G}}$ and there is an $i\in[p]$ such that
$v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$: There is necessarily a
vertex $x\in Q_{i}$ such that there is a shortest path from $u$ to $v$ in $G$
going through $x$. Note by the construction that (a) the copy $g_{1}(v)$
belongs to $H^{1}_{i}$ (a copy of $H_{i}$), (b) there is an edge from
$g_{1}(v)$ to a copy of $x$ in $f_{i}^{1}(Q_{i})$ and (c) a clique copy
$Q_{i}^{1}\subseteq\tilde{f}(Q_{i})$ of $Q_{i}$ is identified with $f_{i}^{1}(Q_{i})$
(a set of singletons). We continue by case analysis:
* –
If either $\tilde{f}$ succeeds on $Q_{i}$, or
$Q_{i}^{1}\subseteq\tilde{\chi}(Q_{i})$: then there is a copy $\hat{x}$ of $x$
in $g_{1}(x)\cap Q_{i}^{1}$. It holds that
$\displaystyle\min_{j}\max_{u^{\prime}\in g_{j}(u),v^{\prime}\in
g_{1}(v)}d_{H}(u^{\prime},v^{\prime})$
$\displaystyle\leq\min_{j}\left(\max_{u^{\prime}\in
g_{j}(u)}d_{H}(u^{\prime},\hat{x})+\max_{v^{\prime}\in
g_{1}(v)}d_{H}(\hat{x},v^{\prime})\right)$ $\displaystyle\leq
d_{G}(u,x)+\epsilon D+d_{G}(x,v)=d_{G}(u,v)+\epsilon D\leavevmode\nobreak\ .$
(8)
where the second inequality follows from the second case (as
$x\in\tilde{\mathcal{G}}$), and the fact that there is an edge in $H$ between
$\hat{x}$ and every vertex in $g_{1}(v)$.
* –
Else, $\tilde{f}(Q_{i})$ contains two clique copies $Q_{i}^{1},Q_{i}^{2}$ of
$Q_{i}$. Note that $\hat{x}$ can belong to either $g_{1}(x)=\tilde{\chi}(x)$
or $g_{2}(x)=\tilde{\psi}(x)$. Nevertheless, by using either equation (6) or
(7) we have that $\min_{j}\max_{u^{\prime}\in
g_{j}(u)}d_{H}(u^{\prime},\hat{x})\leq d_{G}(u,x)+\epsilon D$. As there is an
edge in $H$ between $\hat{x}$ and every vertex in $g_{1}(v)$, we conclude that
equation (8) holds.
* •
Else, if $v\in\tilde{\mathcal{G}}$ and there is an $i\in[p]$ such that
$u\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$: There is necessarily a vertex
$x\in Q_{i}$ such that there is a shortest path from $u$ to $v$ in $G$ going
through $x$. By the second case, there is an index $j^{\prime}$ such that
$\max_{x^{\prime}\in g_{j^{\prime}}(x),v^{\prime}\in
g_{1}(v)}d_{H}(x^{\prime},v^{\prime})\leq d_{G}(x,v)+\epsilon D$. As
$x\in\tilde{\mathcal{G}}$, $j^{\prime}\in\\{1,2\\}$. In any case, a copy of
$H_{i}$ was assigned to $\tilde{H}$ by identifying clique vertices. In
particular, some vertex $\hat{x}\in g_{j^{\prime}}(x)$ was identified with the
apex vertex $f_{i}(x)$ (from the relevant copy). Therefore there is an index
$j^{\prime\prime}$ such that $\hat{x}$ has edges towards all the vertices in
$g_{j^{\prime\prime}}(u)$. We conclude,
$\displaystyle\min_{j}\max_{u^{\prime}\in g_{j}(u),v^{\prime}\in
g_{1}(v)}d_{H}(u^{\prime},v^{\prime})$ $\displaystyle\leq\max_{u^{\prime}\in
g_{j^{\prime\prime}}(u),v^{\prime}\in
g_{1}(v)}d_{H}(u^{\prime},\hat{x})+d_{H}(\hat{x},v^{\prime})$
$\displaystyle\leq d_{G}(u,x)+\max_{x^{\prime}\in
g_{j^{\prime}}(x),v^{\prime}\in g_{1}(v)}d_{H}(x^{\prime},v^{\prime})$
$\displaystyle\leq d_{G}(u,x)+d_{G}(x,v)+\epsilon D=d_{G}(u,v)+\epsilon
D\leavevmode\nobreak\ .$
* •
Else, if there is an $i\in[p]$ such that
$u,v\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$: There is a copy of $H_{i}$
which is embedded as is into $H$ and contains all the vertices in $g_{1}(v)$. By
the induction hypothesis
$\min_{j}\max_{u^{\prime}\in g_{j}(u),v^{\prime}\in
g_{1}(v)}d_{H}(u^{\prime},v^{\prime})\leq\min_{j}\max_{u^{\prime}\in
g_{i,j}(u),v^{\prime}\in g_{i,1}(v)}d_{H_{i}}(u^{\prime},v^{\prime})\leq
d_{\mathcal{G}_{i}}(u,v)+\epsilon D=d_{G}(u,v)+\epsilon D\leavevmode\nobreak\
.$
* •
Else, there are $i\neq i^{\prime}\in[p]$ such that
$u\in\mathcal{G}_{i}\setminus\tilde{\mathcal{G}}$ and
$v\in\mathcal{G}_{i^{\prime}}\setminus\tilde{\mathcal{G}}$: There are necessarily
vertices $y\in Q_{i}$ and $x\in Q_{i^{\prime}}$ such that there is a shortest
path from $u$ to $v$ in $G$ going through $y$ and $x$. Note that the copy
$H^{1}_{i^{\prime}}$ of $H_{i^{\prime}}$ containing $g_{1}(v)$ was added to
$H$ by identifying $f^{1}_{i^{\prime}}(Q_{i^{\prime}})$ with a clique copy
$Q_{i^{\prime}}^{1}$ of $Q_{i^{\prime}}$. In particular, there is a copy
$\hat{x}\in Q_{i^{\prime}}^{1}$ of $x$ which has edges towards all the
vertices in $g_{1}(v)$. There are two cases:
* –
If $\hat{x}\in g_{1}(x)$, then by the third case there is an index $j$ such
that $\max_{u^{\prime}\in
g_{j}(u)}d_{H}(u^{\prime},\hat{x})\leq\max_{u^{\prime}\in
g_{j}(u),x^{\prime}\in g_{1}(x)}d_{H}(u^{\prime},x^{\prime})\leq
d_{G}(u,x)+\epsilon D$. As there is an edge from $\hat{x}$ to every copy of
$v$ in $g_{1}(v)$, we conclude that $\max_{u^{\prime}\in
g_{j}(u),v^{\prime}\in
g_{1}(v)}d_{H}(u^{\prime},v^{\prime})\leq\max_{u^{\prime}\in
g_{j}(u)}d_{H}(u^{\prime},\hat{x})+\max_{v^{\prime}\in
g_{1}(v)}d_{H}(\hat{x},v^{\prime})\leq d_{G}(u,x)+\epsilon
D+d_{G}(x,v)=d_{G}(u,v)+\epsilon D$.
* –
Else, $\hat{x}\in g_{2}(x)$. Necessarily $\tilde{f}$ failed on
$Q_{i^{\prime}}$ and $\tilde{f}(Q_{i^{\prime}})$ contains two clique copies
$Q_{i^{\prime}}^{1},Q_{i^{\prime}}^{2}$ of $Q_{i^{\prime}}$. It holds that
$g_{2}(x)=\tilde{\psi}(x)$, thus by Lemma 18 (case 3.(c)) there is an index
$j\in\\{1,2\\}$ such that $\max_{y^{\prime}\in
g_{j}(y)}d_{H}(y^{\prime},\hat{x})\leq\max_{y^{\prime}\in
g_{j}(y),x^{\prime}\in\tilde{\psi}(x)}d_{H}(y^{\prime},x^{\prime})\leq
d_{G}(x,y)+\epsilon D$. Let $\hat{y}\in Q_{i}^{j}\subseteq g_{j}(Q_{i})$ be
the copy of $y$ from the corresponding clique copy. Note that there is an edge
from $\hat{x}$ to every copy of $v$ in $g_{1}(v)$. Further, there is an index
$j^{\prime\prime}$ such that $\hat{y}$ has edges towards all the vertices in
$g_{j^{\prime\prime}}(u)$. We conclude,
$\displaystyle\min_{j}\max_{u^{\prime}\in g_{j}(u),v^{\prime}\in
g_{1}(v)}d_{H}(u^{\prime},v^{\prime})$ $\displaystyle\leq\max_{u^{\prime}\in
g_{j^{\prime\prime}}(u)}d_{H}(u^{\prime},\hat{y})+d_{H}(\hat{y},\hat{x})+\max_{v^{\prime}\in
g_{1}(v)}d_{H}(\hat{x},v^{\prime})$ $\displaystyle\leq
d_{G}(u,y)+\max_{y^{\prime}\in g_{j}(y),x^{\prime}\in
g_{2}(x)}d_{H}(y^{\prime},x^{\prime})+d_{G}(x,v)$ $\displaystyle\leq
d_{G}(u,y)+d_{G}(y,x)+\epsilon D+d_{G}(x,v)=d_{G}(u,v)+\epsilon
D\leavevmode\nobreak\ .$
∎
###### Remark 4.
The clan embedding in Theorem 5 directly implies a weaker version of Theorem
4, where the only difference is that the distortion is only for pairs where
both $u,v\in M$ and not only $u\in M$. Note that this weaker version is still
strong enough for our application to the $\rho$-independent set problem in
Theorem 7.
Sketch: sample a clan embedding $(f,\chi)$ using Theorem 5. Return $g=\chi$
with the set $M=\\{v\in V\leavevmode\nobreak\ \mid\leavevmode\nobreak\
|f(v)|=1\\}$. The weaker distortion guarantee and failure probability are
straightforward.
## 8 Applications
Organization: in Sections 8.1, 8.2 and 8.3 we provide the algorithms (and
proofs) for our QPTAS ${}^{\ref{foot:approximationSchemes}}$ for the metric
$\rho$-independent set problem, our QPTAS for the metric $\rho$-dominating set
problem, and our compact routing scheme, respectively.
We begin with a discussion on approximation schemes for metric
$\rho$-dominating/independent set problems in bounded treewidth graphs. In the
$(k,r)$-center problem we are given a graph $G=(V,E,w)$, and the goal is to
find a set $S$ of centers of cardinality at most $k$ such that every vertex
$v\in V$ is at distance at most $r$ from some center $u\in S$. Katsikarelis,
Lampis and Paschos [KLP19] provided a PTAS
${}^{\ref{foot:approximationSchemes}}$ for the $(k,r)$-center problem in
treewidth $\mathrm{tw}$ graphs using a dynamic programming approach.
Specifically, for any parameters $k,r\in\mathbb{N}$ and $\epsilon\in(0,1)$,
they provided an algorithm running in
$O(\frac{\mathrm{tw}}{\epsilon})^{\mathrm{tw}}\cdot\mathrm{poly}(n)$ time that
either returns a solution to the $(k,(1+\epsilon)r)$-center problem, or
(correctly) declares that there is no valid solution to the $(k,r)$-center
problem in $G$. This dynamic programming can be easily generalized to the case
where there is a measure $\mu:V\rightarrow\mathbb{R}^{+}$, and terminal set
${\cal K}\subset V$. Specifically, the algorithm will either return a set $S$
of measure $\mu(S)\leq k$, such that every vertex $v\in{\cal K}$ is at
distance at most $(1+\epsilon)r$ from $S$, or will declare there is no set $S$
of measure at most $k$ at distance at most $r$ from every vertex in ${\cal
K}$.
As was observed by Fox-Epstein et al. [FKS19], using [KLP19] one can construct
a bicriteria PTAS for the metric $\rho$-dominating set problem in treewidth
$\mathrm{tw}$ graphs with
$O(\frac{\mathrm{tw}}{\epsilon})^{\mathrm{tw}}\cdot\mathrm{poly}(n)$ running
time. [FKS19] studied the basic version (with uniform measure and ${\cal
K}=V$); however, this observation holds for the general case as well. In a
follow-up paper, Katsikarelis et al. [KLP20] constructed a similar dynamic
programming for the $\rho$-independent set problem with the same
$O(\frac{\mathrm{tw}}{\epsilon})^{\mathrm{tw}}\cdot\mathrm{poly}(n)$ running
time. It could also be generalized to work with a measure $\mu$. This dynamic
programming was also promised to appear in the full version of [FKS19]. We
conclude this discussion:
###### Theorem 12 ([KLP19, KLP20]).
There is a bicriteria polynomial-time approximation scheme (PTAS) for both the metric
$\rho$-independent set and $\rho$-dominating set problems in treewidth
$\mathrm{tw}$ graphs with running time
$O(\frac{\mathrm{tw}}{\epsilon})^{\mathrm{tw}}\cdot\mathrm{poly}(n)$.
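To make the two feasibility notions above concrete, the following is a toy Python sketch (a brute-force check, not the dynamic programming of [KLP19, KLP20]) that verifies whether a candidate set is a $\rho$-independent set or a $\rho$-dominating set on a small weighted graph; the example graph, the measure, and the "distance at least $\rho$" convention are illustrative assumptions.

```python
import itertools

def all_pairs_sp(n, edges):
    """Floyd-Warshall all-pairs shortest paths on a small weighted
    undirected graph (toy helper; edges are (u, v, w) triples)."""
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def is_rho_independent(d, S, rho):
    # every two chosen vertices are at distance at least rho
    return all(d[u][v] >= rho for u, v in itertools.combinations(S, 2))

def is_rho_dominating(d, S, rho, terminals):
    # every terminal has some vertex of S within distance rho
    return all(min(d[v][u] for u in S) <= rho for v in terminals)
```

On a unit-weight path $0{-}1{-}2{-}3$, the set $\{0,3\}$ is $3$-independent but not $4$-independent, and $\{1\}$ is $1$-dominating for terminals $\{0,1,2\}$ but not once $3$ is a terminal.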
### 8.1 QPTAS for the $\rho$-Independent Set Problem in Minor-Free Graphs
This subsection is devoted to proving Theorem 7.
###### Proof.
Create a new graph $G^{\prime}$ from $G$ by adding a single vertex $\psi$ at
distance $\frac{3}{4}\rho$ from all the other vertices. $G^{\prime}$ is
$K_{r+1}$-minor-free. Note that for every $u,v\in V$, it holds that
$d_{G^{\prime}}(u,v)=\min\\{\frac{3}{2}\rho,d_{G}(u,v)\\}$. Thus $G^{\prime}$
has diameter at most $\frac{3}{2}\rho$. Furthermore, for every
$\rho^{\prime}\in(0,\frac{3}{2}\rho)$, a set $S\subseteq V$ is a
$\rho^{\prime}$-independent set in $G$ if and only if $S$ is a
$\rho^{\prime}$-independent set in $G^{\prime}$. Using Theorem 4 with
parameters $\epsilon^{\prime}=\frac{\epsilon}{2}$ and
$\delta=\frac{\epsilon}{4}$, let $g$ be an embedding of $G^{\prime}$ into a
treewidth-$O_{r}(\frac{\log^{2}n}{\epsilon^{2}})$ graph $H$ with a set
$M\subseteq V\cup\\{\psi\\}$ such that (1) for every $u,v\in M$,
$d_{H}(g(u),g(v))\leq
d_{G^{\prime}}(u,v)+\frac{\epsilon}{2}\cdot\frac{3}{2}\rho<d_{G^{\prime}}(u,v)+\epsilon\rho$,
and (2) for every $v\in V$, $\Pr[v\in M]\geq 1-\frac{\epsilon}{4}$.
Define a new measure $\mu_{H}$ in $H$, where for each $v^{\prime}\in H$,
$\mu_{H}(v^{\prime})=\begin{cases}0&v^{\prime}\notin g(V\cap M)\\\
\mu(v)&\text{else, }g(v)=v^{\prime}\text{ for some }v\in
M\setminus\\{\psi\\}\end{cases}\qquad.$
In particular, $\mu_{H}(g(\psi))=0$. Using Theorem 12, we find a
$(1-\frac{\epsilon}{2})\rho$-independent set $S_{H}$ w.r.t. $\mu_{H}$, such
that for every $\rho$-independent set $\tilde{S}$ in $H$ it holds that
$\mu_{H}(S_{H})\geq(1-\frac{\epsilon}{2})\mu_{H}(\tilde{S})$. We can assume
that $S_{H}\subseteq g(M)$, as the measure of all vertices out of $g(M)$ is
$0$. We will return $S=g^{-1}(S_{H})$; note that $S\subseteq M$. First, we
argue that $S$ is a $(1-\epsilon)\rho$-independent set. For every $u,v\in S$,
$g(u),g(v)\in S_{H}$, thus
$(1-\frac{\epsilon}{2})\rho\leq d_{H}(g(u),g(v))\leq
d_{G^{\prime}}(u,v)+\frac{\epsilon}{2}\rho\leq
d_{G}(u,v)+\frac{\epsilon}{2}\rho\leavevmode\nobreak\ ,$
implying $d_{G}(u,v)\geq(1-\epsilon)\rho$.
Let $S_{\mathrm{opt}}$ be a $\rho$-independent set w.r.t. $d_{G}$ of maximal
measure. As $g$ is a dominating embedding, $g(S_{\mathrm{opt}}\cap M)$ is a
$\rho$-independent set in $H$. By linearity of expectation
${\mathbb{E}[\mu(S_{\mathrm{opt}}\setminus M)]=\sum_{v\in
S_{\mathrm{opt}}}\mu(v)\cdot\Pr\left[v\notin
M\right]\leq\frac{\epsilon}{4}\cdot\mu(S_{\mathrm{opt}})}$. Using Markov's
inequality,
$\Pr\left[\mu(S_{\mathrm{opt}}\cap
M)<(1-\frac{\epsilon}{2})\mu(S_{\mathrm{opt}})\right]=\Pr\left[\mu(S_{\mathrm{opt}}\setminus
M)\geq\frac{\epsilon}{2}\mu(S_{\mathrm{opt}})\right]\leq\frac{\mathbb{E}[\mu(S_{\mathrm{opt}}\setminus
M)]}{\frac{\epsilon}{2}\mu(S_{\mathrm{opt}})}\leq\frac{1}{2}\leavevmode\nobreak\
.$
Thus, with probability at least $\frac{1}{2}$, $H$ contains a
$\rho$-independent set $g(S_{\mathrm{opt}}\cap M)$ of measure
${\mu_{H}(g(S_{\mathrm{opt}}\cap M))=\mu(S_{\mathrm{opt}}\cap
M)\geq(1-\frac{\epsilon}{2})\mu(S_{\mathrm{opt}})}$. If this event indeed
occurs, the independent set $S_{H}$ returned by the algorithm of Theorem 12 will be of
measure greater than
$(1-\frac{\epsilon}{2})(1-\frac{\epsilon}{2})\mu(S_{\mathrm{opt}})>(1-\epsilon)\mu(S_{\mathrm{opt}})$.
High probability can be obtained by repeating the above algorithm $O(\log
n)$ times and returning the independent set of maximal measure among the
observed solutions. ∎
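The amplification step at the end of the proof can be sketched generically. A minimal Python sketch, where `randomized_solver` is a hypothetical black box returning a feasible solution whose measure is near-optimal with constant probability:

```python
import math

def amplify(randomized_solver, n, measure):
    """Boost a constant success probability to high probability:
    run the solver O(log n) times independently and keep the
    solution of maximal measure among the observed ones."""
    best, best_mu = None, float("-inf")
    for _ in range(max(1, math.ceil(2 * math.log2(max(2, n))))):
        s = randomized_solver()
        mu = measure(s)
        if mu > best_mu:
            best, best_mu = s, mu
    return best
```

If each run succeeds with probability at least $\frac{1}{2}$, the probability that all $O(\log n)$ independent runs fail is at most $n^{-\Omega(1)}$.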
###### Remark 5.
The algorithm above can be derandomized as follows: first note that the
algorithm from Theorem 12 is deterministic. Next, during the construction in
the proof of Theorem 4, each time we execute Lemma 15 we pick $\sigma$
uniformly at random from a set of $O(\frac{1}{\delta})$ values, where
$\delta=\Theta(\frac{\epsilon}{\log n})$. As we bound
$\Pr[v\notin M]$ using a simple union bound, the bound still holds if we pick the
same $\sigma$ in all the executions of Lemma 15. We conclude that we can
sample the embedding of Theorem 4 from a distribution with support size
$O(\frac{\log n}{\epsilon})$. A derandomization follows.
### 8.2 QPTAS for the $\rho$-Dominating Set Problem in Minor-Free Graphs
This subsection is devoted to proving Theorem 8.
###### Proof.
Similarly to Theorem 7, we start by constructing an auxiliary graph
$G^{\prime}$ from $G$ by adding a single vertex $\psi$ at distance $2\rho$
from all the other vertices. Extend the measure $\mu$ to $\psi$ by setting
$\mu(\psi)=\infty$. For every $u,v\in V$ it holds that
$d_{G^{\prime}}(u,v)=\min\\{4\rho,d_{G}(u,v)\\}$. It follows that $G^{\prime}$
is a $K_{r+1}$-minor-free graph with diameter bounded by $4\rho$. In
particular, for every $\rho^{\prime}\in(0,2\rho)$, a set $S\subseteq V$ is a
$\rho^{\prime}$-dominating set (w.r.t. ${\cal K}$) in $G$ if and only if $S$
is a $\rho^{\prime}$-dominating set in $G^{\prime}$ (w.r.t. ${\cal K}$). Using
Theorem 5 with parameters $\epsilon^{\prime}=\frac{\epsilon}{12}$ and
$\delta=\frac{\epsilon}{6}$, let $(f,\chi)$ be a clan embedding of
$G^{\prime}$ into a treewidth $O_{r}(\frac{\log^{2}n}{\epsilon^{2}})$ graph
$H$ with additive distortion $\epsilon^{\prime}\cdot
4\rho=\frac{\epsilon}{3}\rho$. Define a new measure $\mu_{H}$ in $H$, where
for each $v^{\prime}\in H$,
$\mu_{H}(v^{\prime})=\begin{cases}\infty&v^{\prime}\notin f(V)\\\
\mu(v)&v^{\prime}\in f(v)\end{cases}$
Set also ${\cal K}_{H}=\chi({\cal K})\subseteq H$ to be our set of terminals.
Using Theorem 12, we find a $(1+\frac{\epsilon}{3})^{2}\rho$-dominating set
$A_{H}$, such that for every $\chi(v)\in{\cal K}_{H}$,
$d_{H}(\chi(v),A_{H})\leq(1+\frac{\epsilon}{3})^{2}\rho$, and for every
$(1+\frac{\epsilon}{3})\rho$-dominating set $\tilde{A}$ w.r.t. ${\cal K}_{H}$
it holds that $\mu_{H}(A_{H})\leq(1+\frac{\epsilon}{3})\mu_{H}(\tilde{A})$. We
can assume that $A_{H}$ contains only vertices from $f(V)$ (as all other
vertices have measure $\infty$, while ${\cal K}_{H}$ itself is a legal solution
of finite measure). We will return $A=f^{-1}(A_{H})=\\{u\in V\mid f(u)\cap
A_{H}\neq\emptyset\\}$.
First, we argue that $A$ is a $(1+\epsilon)\rho$-dominating set. For every
vertex $v\in{\cal K}$, $\chi(v)\in{\cal K}_{H}$. Therefore there is a vertex
$\hat{u}\in A_{H}$ such that
$d_{H}(\chi(v),\hat{u})\leq(1+\frac{\epsilon}{3})^{2}\rho$. In particular, our
solution $A$ contains the vertex $u$ such that $\hat{u}\in f(u)$. As
$(f,\chi)$ is a dominating embedding, we conclude
$d_{G}(u,v)\leq\min_{u^{\prime}\in f(u)}d_{H}(u^{\prime},\chi(v))\leq
d_{H}(\hat{u},\chi(v))\leq(1+\frac{\epsilon}{3})^{2}\rho<(1+\epsilon)\rho\leavevmode\nobreak\
.$
Second, we argue that $A$ has nearly optimal measure. Let $A_{\mathrm{opt}}$
be a $\rho$-dominating set in $G$ w.r.t. ${\cal K}$ of minimal measure. As
$(f,\chi)$ has additive distortion $\frac{\epsilon}{3}\rho$,
$f(A_{\mathrm{opt}})$ is a $(1+\frac{\epsilon}{3})\rho$-dominating set in $H$
(w.r.t. ${\cal K}_{H}$). Indeed, consider a vertex $\chi(v)\in{\cal K}_{H}$
(for $v\in{\cal K}$). There is a vertex $u\in A_{\mathrm{opt}}$ such that
$d_{G}(u,v)\leq\rho$. It holds that
$d_{H}(f(A_{\mathrm{opt}}),\chi(v))\leq\min_{u^{\prime}\in
f(u)}d_{H}(u^{\prime},\chi(v))\leq
d_{G}(u,v)+\frac{\epsilon}{3}\rho\leq(1+\frac{\epsilon}{3})\rho$
By Theorem 12, we will find a $(1+\frac{\epsilon}{3})^{2}\rho$-dominating set
of measure at most $(1+\frac{\epsilon}{3})\mu_{H}(f(A_{\mathrm{opt}}))$ in
$H$. By linearity of expectation,
$\mathbb{E}\left[\mu_{H}(f(A_{\mathrm{opt}}))\right]=\sum_{u\in
A_{\mathrm{opt}}}\mu(u)\cdot\mathbb{E}\left[\left|f(u)\right|\right]\leq(1+\frac{\epsilon}{6})\cdot\mu(A_{\mathrm{opt}})\leavevmode\nobreak\
.$
On the other hand,
$\mu_{H}(f(A_{\mathrm{opt}}))\geq\mu_{H}(\chi(A_{\mathrm{opt}}))=\mu(A_{\mathrm{opt}})$.
Using Markov's inequality,
$\displaystyle\Pr\left[\mu_{H}(f(A_{\mathrm{opt}}))\geq(1+\frac{\epsilon}{3})\cdot\mu(A_{\mathrm{opt}})\right]$
$\displaystyle=\Pr\left[\mu_{H}(f(A_{\mathrm{opt}}))-\mu(A_{\mathrm{opt}})\geq\frac{\epsilon}{3}\mu(A_{\mathrm{opt}})\right]$
$\displaystyle\leq\frac{\mathbb{E}[\mu_{H}(f(A_{\mathrm{opt}}))-\mu(A_{\mathrm{opt}})]}{\frac{\epsilon}{3}\mu(A_{\mathrm{opt}})}\leq\frac{\frac{\epsilon}{6}}{\frac{\epsilon}{3}}=\frac{1}{2}\leavevmode\nobreak\
.$
Thus, with probability at least $\frac{1}{2}$, $H$ contains a
$(1+\frac{\epsilon}{3})\rho$-dominating set of measure at most
$(1+\frac{\epsilon}{3})\mu(A_{\mathrm{opt}})$. If this event indeed occurs,
the dominating set $A_{H}$ returned by Theorem 12 will be of measure at most
$(1+\frac{\epsilon}{3})^{2}\mu(A_{\mathrm{opt}})<(1+\epsilon)\mu(A_{\mathrm{opt}})$.
High probability can be obtained by repeating the algorithm above $O(\log
n)$ times and returning the set of minimum measure among the observed
dominating sets. ∎
### 8.3 Compact Routing Scheme
We restate the main theorem of this subsection for convenience. We begin by
presenting a result of Thorup and Zwick [TZ01] regarding routing in a tree.
###### Theorem 13 ([TZ01]).
For any tree $T=(V,E)$ (where $|V|=n$), there is a routing scheme with stretch
$1$ that has routing tables of size $O(1)$ and labels (and headers) of size
$O(\log n)$.
Recall that we measure space in machine words, where each word is $\Theta(\log
n)$ bits. We stress the extremely short routing table size obtained in
[TZ01]. Note that when a vertex receives a packet with a header, it makes the
routing decision based only on its routing table, and does not require any
knowledge of its own label. In particular, the routing table contains a
unique identifier of the vertex.
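For intuition only, stretch-$1$ routing in a tree can be sketched with classic DFS-interval routing. This toy Python sketch uses tables of size proportional to the degree (unlike the $O(1)$-size tables of [TZ01]); all names are illustrative.

```python
def dfs_intervals(adj, root=0):
    """Euler intervals on a tree given by adjacency lists: the subtree
    of v is exactly the set of vertices u with tin[v] <= tin[u] < tout[v]."""
    n = len(adj)
    tin, tout, parent = [0] * n, [0] * n, [-1] * n
    seen = [False] * n
    timer = 0
    stack = [(root, False)]
    while stack:
        v, done = stack.pop()
        if done:
            tout[v] = timer
            continue
        seen[v] = True
        tin[v] = timer
        timer += 1
        stack.append((v, True))
        for u in adj[v]:
            if not seen[u]:
                parent[u] = v
                stack.append((u, False))
    return tin, tout, parent

def route_step(v, target_label, adj, tin, tout, parent):
    """One routing decision: forward to the unique child whose interval
    contains the target's label, otherwise go up to the parent."""
    if tin[v] == target_label:
        return v  # packet arrived
    for u in adj[v]:
        if u != parent[v] and tin[u] <= target_label < tout[u]:
            return u
    return parent[v]
```

Each forwarding decision inspects only the local table (the children's intervals and the parent pointer) and the target's label, and the packet follows the unique tree path.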
An additional ingredient that our construction requires is a
_distance labeling scheme_ for trees. A _distance labeling_ assigns to each
point $x\in X$ a label $l(x)$, and there is an algorithm $\mathcal{A}$
(oblivious to $(X,d)$) that, provided the labels $l(x),l(y)$ of arbitrary $x,y\in
X$, can estimate $d(x,y)$. Specifically, a distance labeling is said to
have _stretch_ $t\geq 1$ if
$\forall x,y\in X,\qquad d(x,y)\leq\mathcal{A}\left(l(x),l(y)\right)\leq
t\cdot d(x,y).$
We refer to [FGK20] for an overview of distance labeling schemes in different
regimes (and comparison with metric embedding, see also [Pel00, GPPR04, TZ05,
EFN18]). Exact distance labeling on an $n$-vertex tree requires $\Theta(\log
n)$ words [AGHP16], which is already larger than the routing table size we are
aiming for. Nonetheless, Freedman et al. [FGNW17] (improving upon [AGHP16,
GKK+01]) showed that for any $n$-vertex unweighted tree and
$\epsilon\in(0,1)$, one can construct a $(1+\epsilon)$-stretch labeling scheme
with labels of size $O(\log\frac{1}{\epsilon})$ words.
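For intuition, here is a minimal Python sketch of a classical *exact* distance labeling for unweighted trees via centroid decomposition: each label stores $O(\log n)$ (centroid, distance) pairs, and a query returns the minimum of $d(x,c)+d(y,c)$ over shared centroids. This only illustrates the concept; the $O(\log\frac{1}{\epsilon})$-word scheme of [FGNW17] uses different, more involved techniques.

```python
from collections import defaultdict

def centroid_labels(adj):
    """Exact distance labels for an unweighted tree: recursively split at
    a centroid, and record in every vertex's label its distance to each
    centroid above it in the decomposition (O(log n) entries per vertex)."""
    n = len(adj)
    labels = defaultdict(list)
    removed = [False] * n

    def sizes_and_parents(root):
        parent, order, stack = {root: None}, [], [root]
        while stack:
            v = stack.pop()
            order.append(v)
            for u in adj[v]:
                if not removed[u] and u != parent[v]:
                    parent[u] = v
                    stack.append(u)
        size = {}
        for v in reversed(order):
            size[v] = 1 + sum(size[u] for u in adj[v]
                              if not removed[u] and parent.get(u) == v)
        return size, parent

    def find_centroid(root):
        size, parent = sizes_and_parents(root)
        total, v = size[root], root
        while True:  # walk toward the heavy child until balanced
            heavy = next((u for u in adj[v]
                          if not removed[u] and parent.get(u) == v
                          and size[u] > total // 2), None)
            if heavy is None:
                return v
            v = heavy

    def decompose(root):
        c = find_centroid(root)
        dist, frontier = {c: 0}, [c]
        while frontier:  # BFS distances from the centroid inside the piece
            nxt = []
            for v in frontier:
                for u in adj[v]:
                    if not removed[u] and u not in dist:
                        dist[u] = dist[v] + 1
                        nxt.append(u)
            frontier = nxt
        for v, d in dist.items():
            labels[v].append((c, d))
        removed[c] = True
        for u in adj[c]:
            if not removed[u]:
                decompose(u)

    decompose(0)
    return labels

def query(lx, ly):
    """d(x, y) equals the minimum of d(x, c) + d(y, c) over shared centroids."""
    dy = dict(ly)
    return min(d + dy[c] for c, d in lx if c in dy)

# toy usage on the path 0-1-2-3-4-5-6
adj = {i: [] for i in range(7)}
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]:
    adj[a].append(b)
    adj[b].append(a)
L = centroid_labels(adj)
```

The key property making the query exact is that the path between $x$ and $y$ passes through the shallowest centroid in the decomposition that separates them.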
###### Theorem 14 ([FGNW17]).
For any $n$-vertex tree $T=(V,E)$ with polynomial aspect ratio
${}^{\ref{foot:aspectRatio}}$, and parameter $\epsilon\in(0,1)$, there is a
distance labeling scheme with stretch $1+\epsilon$, and
$O(\log\frac{1}{\epsilon})$ label size.
We will use Theorem 14 with a fixed $\epsilon$. In this case, the theorem
extends to weighted trees with polynomial aspect ratio (by subdividing
edges).
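The reduction from weighted to unweighted trees by subdividing edges can be sketched as follows (assuming integer edge weights; with polynomial aspect ratio the blow-up in the number of vertices stays polynomial):

```python
def subdivide(edges, n):
    """Replace each tree edge (u, v, w) of integer weight w by a path of
    w unit-weight edges, preserving all tree distances exactly.
    `n` is the number of original vertices; new ids start at n."""
    new_edges, fresh = [], n
    for u, v, w in edges:
        prev = u
        for _ in range(w - 1):
            new_edges.append((prev, fresh, 1))
            prev, fresh = fresh, fresh + 1
        new_edges.append((prev, v, 1))
    return new_edges
```

Since distances are preserved exactly, a distance labeling of the subdivided tree restricted to the original vertices is a distance labeling of the weighted tree.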
###### Proof of Theorem 6.
We combine Theorem 3 with Theorem 13 and Theorem 14 to construct a compact
routing scheme. We begin by sampling a spanning clan embedding $(f,\chi)$ of
$G$ into a tree $T$ with distortion $O(k\log\log n)$ such that for every
vertex $v\in V$, $\mathbb{E}[|f(v)|]\leq n^{1/k}$. Using Theorem 14, we
construct a distance labeling scheme for $T$ with stretch at most $2$ and
$O(1)$ label size. That is, each vertex $v^{\prime}\in T$ has a label $l_{\rm
dl}(v^{\prime})$ of constant size, such that for every pair
$v^{\prime},u^{\prime}\in T$,
$d_{T}(v^{\prime},u^{\prime})\leq\mathcal{A}\left(l_{\rm
dl}(v^{\prime}),l_{\rm dl}(u^{\prime})\right)\leq 2\cdot
d_{T}(v^{\prime},u^{\prime})$ (${\rm dl}$ stands for distance labeling).
Using Theorem 13, we construct a compact routing scheme for $T$, such that
each $v^{\prime}\in T$ has a label $\ell_{\rm crs}(v^{\prime})$ of size
$O(\log|T|)=O(\log n)$, and routing table $\tau_{\rm crs}(v^{\prime})$ of size
$O(1)$ (${\rm crs}$ stands for compact routing scheme). We construct a compact
routing scheme for $G$ as follows: for every vertex $v\in V$, its label is
defined to be $\ell_{G}(v)=\left(\ell_{\rm crs}(\chi(v)),l_{\rm
dl}(\chi(v))\right)$, and its table $\tau_{G}(v)$ to be the concatenation of
$\left\\{\left(\tau_{\rm crs}(v^{\prime}),l_{\rm
dl}(v^{\prime})\right)\right\\}_{v^{\prime}\in f(v)}$. In words, the label
$\ell_{G}(v)$ consists of the routing label $\ell_{\rm crs}(\chi(v))$ and the
distance label $l_{\rm dl}(\chi(v))$ of the chief $\chi(v)$ in $T$, while the
routing table $\tau_{G}(v)$ consists of the routing table $\tau_{\rm
crs}(v^{\prime})$ and distance label $l_{\rm dl}(v^{\prime})$ of all the
copies $v^{\prime}$ in the clan $f(v)$. Clearly, the size of the label is
$O(\log n)+O(1)=O(\log n)$, while the expected size of the routing table is
$\mathbb{E}[\sum_{v^{\prime}\in
f(v)}O(1)]=O(1)\cdot\mathbb{E}[|f(v)|]=O(n^{\frac{1}{k}})$.
Consider a node $v$ that wants to send a packet to a node $u$, while
possessing the routing label $\ell_{G}(u)$ of $u$. The node $v$ goes over all
the copies $v^{\prime}\in f(v)$, and chooses the copy $v_{u}$ that minimizes
the estimated distance $\mathcal{A}\left(l_{\rm dl}(v^{\prime}),l_{\rm
dl}(\chi(u))\right)$. Then, using the routing table $\tau_{\rm crs}(v_{u})$ of
$v_{u}$, $v$ makes a routing decision and transfers the packet to the
first vertex $z^{\prime}\in T$ on the shortest path from $v_{u}$ to $\chi(u)$
in $T$. $v$ transfers this packet with a header consisting of the label
of $u$ and the name of $z^{\prime}$. This somewhat longer routing decision
process occurs only when a delivery is initiated. In any other step, a node
$z$ receives a packet with a header containing the routing label of the
destination $\ell_{G}(u)$ and the name of a copy $z^{\prime}\in f(z)$. Then $z$
uses the routing table $\tau_{\rm crs}(z^{\prime})$ of $z^{\prime}$ to make a
routing decision and transfers the packet to the first vertex $q^{\prime}\in
T$ on the shortest path from $z^{\prime}$ to $\chi(u)$ in $T$. As previously,
$z$ transfers the packet with a header consisting of the label of $u$ and
the name of $q^{\prime}$. Clearly, the size of the header is $O(\log n)$. Note
that other than the first decision, each decision is made in constant time
(while the first decision is made in expected $O(n^{\frac{1}{k}})$ time).
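The source-side decision — scanning the clan $f(v)$ for the copy with the smallest estimated distance to the chief $\chi(u)$ — can be sketched as follows. The record layout, the labels, and the estimator $\mathcal{A}$ here are illustrative stand-ins for the structures defined above:

```python
def choose_copy(clan_of_v, dl_label_of_chief_u, A):
    """Among the copies v' in f(v), pick the one minimising the estimated
    tree distance A(l_dl(v'), l_dl(chi(u))); its routing table is then
    used for the first routing decision."""
    return min(clan_of_v, key=lambda copy: A(copy["dl"], dl_label_of_chief_u))

# toy instance: distance labels are coordinates on a path-shaped tree,
# and the estimator A is exact (stretch 1)
A = lambda l1, l2: abs(l1 - l2)
clan = [{"name": "v1", "dl": 0, "crs_table": "table-1"},
        {"name": "v2", "dl": 7, "crs_table": "table-2"}]
best = choose_copy(clan, dl_label_of_chief_u=9, A=A)
```

The scan over $f(v)$ is what makes the first decision cost expected $O(n^{1/k})$ time, while every subsequent hop uses a single copy's table and runs in constant time.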
Finally, when routing a package starting at $v$ towards $u$, the path
corresponds exactly to a path in $T$ from a copy $v_{u}\in f(v)$ to $\chi(u)$.
The length of this path is bounded by
$\displaystyle d_{T}(v_{u},\chi(u))$
$\displaystyle\leq\mathcal{A}\left(l_{{\rm dl}}(v_{u}),l_{{\rm
dl}}(\chi(u))\right)=\min_{v^{\prime}\in f(v)}\mathcal{A}\left(l_{{\rm
dl}}(v^{\prime}),l_{{\rm dl}}(\chi(u))\right)$
$\displaystyle\leq\min_{v^{\prime}\in f(v)}2\cdot
d_{T}(v^{\prime},\chi(u))=O(k\log\log n)\cdot d_{G}(v,u)\leavevmode\nobreak\
.$
∎
## Acknowledgments
The authors are grateful to Philip Klein for suggesting the metric
$\rho$-dominating/independent set problems, which eventually led to this
project. We thank Vincent Cohen-Addad for useful conversations and for
pointing out the proof of Theorem 17 to the first author. The first author
would like to thank Alexandr Andoni for helpful discussions. The second author
would like to thank Michael Lampis for discussing dynamic programming
algorithms for metric independent set/dominating set on bounded treewidth
graphs.
## References
* [ABLP90] B. Awerbuch, A. Bar-Noy, N. Linial, and D. Peleg. Improved routing strategies with succinct tables. J. Algorithms, 11(3):307–341, 1990, doi:10.1016/0196-6774(90)90017-9.
* [ACE+20] I. Abraham, S. Chechik, M. Elkin, A. Filtser, and O. Neiman. Ramsey spanning trees and their applications. ACM Trans. Algorithms, 16(2):19:1–19:21, 2020. preliminary version published in SODA 2018, doi:10.1145/3371039.
* [AFGN18] I. Abraham, A. Filtser, A. Gupta, and O. Neiman. Metric embedding via shortest path decompositions. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pages 952–963, 2018. full version: https://arxiv.org/abs/1708.04073, doi:10.1145/3188745.3188808.
* [AGG+19] I. Abraham, C. Gavoille, A. Gupta, O. Neiman, and K. Talwar. Cops, robbers, and threatening skeletons: Padded decomposition for minor-free graphs. SIAM J. Comput., 48(3):1120–1145, 2019. preliminary version published in STOC 2014, doi:10.1137/17M1112406.
* [AGHP16] S. Alstrup, I. L. Gørtz, E. B. Halvorsen, and E. Porat. Distance labeling schemes for trees. In 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy, pages 132:1–132:16, 2016, doi:10.4230/LIPIcs.ICALP.2016.132.
* [AHK12] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012, doi:10.4086/toc.2012.v008a006.
* [AHL02] N. Alon, S. Hoory, and N. Linial. The moore bound for irregular graphs. Graphs Comb., 18(1):53–57, 2002, doi:10.1007/s003730200002.
* [AKPW95] N. Alon, R. M. Karp, D. Peleg, and D. B. West. A graph-theoretic game and its application to the k-server problem. SIAM J. Comput., 24(1):78–100, 1995. preliminary version published in On-Line Algorithms 1991, doi:10.1137/S0097539792224474.
* [AMS99] N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments. J. Comput. Syst. Sci., 58(1):137–147, 1999. preliminary version published in STOC 1996, doi:10.1006/jcss.1997.1545.
* [AN19] I. Abraham and O. Neiman. Using petal-decompositions to build a low stretch spanning tree. SIAM J. Comput., 48(2):227–248, 2019. preliminary version published in STOC 2012, doi:10.1137/17M1115575.
* [AP92] B. Awerbuch and D. Peleg. Routing with polynomial communication-space tradeoff. SIAM J. Discrete Mathematics, 5:151–162, 1992.
* [AS03] V. Athitsos and S. Sclaroff. Database indexing methods for 3d hand pose estimation. In Gesture-Based Communication in Human-Computer Interaction, 5th International Gesture Workshop, GW 2003, Genova, Italy, April 15-17, 2003, Selected Revised Papers, pages 288–299, 2003, doi:10.1007/978-3-540-24598-8_27.
* [AST90] N. Alon, P. D. Seymour, and R. Thomas. A separator theorem for graphs with an excluded minor and its applications. In Proceedings of the 22nd Annual ACM Symposium on Theory of Computing, May 13-17, 1990, Baltimore, Maryland, USA, pages 293–299, 1990, doi:10.1145/100216.100254.
* [Bak94] B. S. Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal of the ACM, 41(1):153–180, 1994. preliminary version published in FOCS 1983, doi:10.1145/174644.174650.
* [Bar96] Y. Bartal. Probabilistic approximations of metric spaces and its algorithmic applications. In 37th Annual Symposium on Foundations of Computer Science, FOCS ’96, Burlington, Vermont, USA, 14-16 October, 1996, pages 184–193, 1996, doi:10.1109/SFCS.1996.548477.
* [Bar98] Y. Bartal. On approximating arbitrary metrices by tree metrics. In Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, Dallas, Texas, USA, May 23-26, 1998, pages 161–168, 1998, doi:10.1145/276698.276725.
* [Bar04] Y. Bartal. Graph decomposition lemmas and their role in metric embedding methods. In Algorithms - ESA 2004, 12th Annual European Symposium, Bergen, Norway, September 14-17, 2004, Proceedings, pages 89–97, 2004, doi:10.1007/978-3-540-30140-0_10.
* [Bar11] Y. Bartal. Lecture notes in metric embedding theory and its algorithmic applications, 2011. URL: http://moodle.cs.huji.ac.il/cs10/file.php/67720/GM_Lecture6.pdf.
* [Bar21] Y. Bartal. Advances in metric ramsey theory and its applications. CoRR, abs/2104.03484, 2021, arXiv:2104.03484.
* [BBM06] Y. Bartal, B. Bollobás, and M. Mendel. Ramsey-type theorems for metric spaces with applications to online problems. J. Comput. Syst. Sci., 72(5):890–921, 2006. Special Issue on FOCS 2001, doi:10.1016/j.jcss.2005.05.008.
* [BCL+18] S. Bubeck, M. B. Cohen, Y. T. Lee, J. R. Lee, and A. Madry. k-server via multiscale entropic regularization. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pages 3–16, 2018, doi:10.1145/3188745.3188798.
* [Ben66] C. T. Benson. Minimal regular graphs of girths eight and twelve. Canadian Journal of Mathematics, 18:1091–1094, 1966, doi:10.4153/CJM-1966-109-8.
* [BFM86] J. Bourgain, T. Figiel, and V. Milman. On Hilbertian subsets of finite metric spaces. Israel J. Math., 55(2):147–152, 1986, doi:10.1007/BF02801990.
* [BFN19] Y. Bartal, N. Fandina, and O. Neiman. Covering metric spaces by few trees. In 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019, July 9-12, 2019, Patras, Greece, pages 20:1–20:16, 2019, doi:10.4230/LIPIcs.ICALP.2019.20.
* [BGS16] G. E. Blelloch, Y. Gu, and Y. Sun. A new efficient construction on probabilistic tree embeddings. CoRR, abs/1605.04651, 2016, arXiv:1605.04651.
* [BL16] G. Borradaile and H. Le. Optimal dynamic program for r-domination problems over tree decompositions. In 11th International Symposium on Parameterized and Exact Computation, IPEC 2016, August 24-26, 2016, Aarhus, Denmark, pages 8:1–8:23, 2016, doi:10.4230/LIPIcs.IPEC.2016.8.
* [BLMN05a] Y. Bartal, N. Linial, M. Mendel, and A. Naor. On metric Ramsey-type dichotomies. Journal of the London Mathematical Society, 71(2):289–303, 2005, doi:10.1112/S0024610704006155.
* [BLMN05b] Y. Bartal, N. Linial, M. Mendel, and A. Naor. Some low distortion metric ramsey problems. Discret. Comput. Geom., 33(1):27–41, 2005, doi:10.1007/s00454-004-1100-z.
* [BLW17] G. Borradaile, H. Le, and C. Wulff-Nilsen. Minor-free graphs have light spanners. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science, FOCS ’17, pages 767–778, 2017, doi:10.1109/FOCS.2017.76.
* [BM04] Y. Bartal and M. Mendel. Multiembedding of metric spaces. SIAM J. Comput., 34(1):248–259, 2004. preliminary version published in SODA 2003, doi:10.1137/S0097539703433122.
* [Bou85] J. Bourgain. On lipschitz embedding of finite metric spaces in hilbert space. Israel Journal of Mathematics, 52(1-2):46–52, 1985, doi:10.1007/BF02776078.
* [BR10] A. Babu and J. Radhakrishnan. An entropy based proof of the moore bound for irregular graphs. CoRR, abs/1011.1058, 2010, arXiv:1011.1058.
* [CFKL20] V. Cohen-Addad, A. Filtser, P. N. Klein, and H. Le. On light spanners, low-treewidth embeddings and efficient traversing in minor-free graphs. CoRR, abs/2009.05039, 2020. To appear in FOCS 2020, https://arxiv.org/abs/2009.05039, arXiv:2009.05039.
* [CG04] D. E. Carroll and A. Goel. Lower bounds for embedding into distributions over excluded minor graph families. In Algorithms - ESA 2004, 12th Annual European Symposium, Bergen, Norway, September 14-17, 2004, Proceedings, pages 146–156, 2004, doi:10.1007/978-3-540-30140-0_15.
* [CG12] T. H. Chan and A. Gupta. Approximating TSP on metrics with bounded global growth. SIAM J. Comput., 41(3):587–617, 2012. preliminary version published in SODA 2008, doi:10.1137/090749396.
* [Che13] S. Chechik. Compact routing schemes with improved stretch. In ACM Symposium on Principles of Distributed Computing, PODC ’13, Montreal, QC, Canada, July 22-24, 2013, pages 33–41, 2013, doi:10.1145/2484239.2484268.
* [Che15] S. Chechik. Approximate distance oracles with improved bounds. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 1–10, 2015, doi:10.1145/2746539.2746562.
* [CJLV08] A. Chakrabarti, A. Jaffe, J. R. Lee, and J. Vincent. Embeddings of topological graphs: Lossy invariants, linearization, and 2-sums. In 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008, October 25-28, 2008, Philadelphia, PA, USA, pages 761–770, 2008, doi:10.1109/FOCS.2008.79.
* [CKM19] V. Cohen-Addad, P. N. Klein, and C. Mathieu. Local search yields approximation schemes for k-means and k-median in euclidean and minor-free metrics. SIAM J. Comput., 48(2):644–667, 2019. preliminary version published in FOCS 2016, doi:10.1137/17M112717X.
* [Cow01] L. Cowen. Compact routing with minimum stretch. J. Algorithms, 38(1):170–183, 2001. preliminary version published in SODA 1999, doi:10.1006/jagm.2000.1134.
* [DFHT05] E. D. Demaine, F. V. Fomin, M. T. Hajiaghayi, and D. M. Thilikos. Fixed-parameter algorithms for (k, r)-center in planar graphs and map graphs. ACM Trans. Algorithms, 1(1):33–47, 2005. preliminary version published in ICALP 2003, doi:10.1145/1077464.1077468.
* [DHK05] E. D. Demaine, M. Hajiaghayi, and K. Kawarabayashi. Algorithmic graph minor theory: Decomposition, approximation, and coloring. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, pages 637–646, 2005, doi:10.1109/SFCS.2005.14.
* [EEST08] M. Elkin, Y. Emek, D. A. Spielman, and S. Teng. Lower-stretch spanning trees. SIAM J. Comput., 38(2):608–628, 2008. preliminary version published in STOC 2005, doi:10.1137/050641661.
* [EFN18] M. Elkin, A. Filtser, and O. Neiman. Prioritized metric structures and embedding. SIAM J. Comput., 47(3):829–858, 2018. preliminary version published in STOC 2015, doi:10.1137/17M1118749.
* [EGP03] T. Eilam, C. Gavoille, and D. Peleg. Compact routing schemes with low stretch factor. J. Algorithms, 46(2):97–114, 2003. preliminary version published in PODC 1998, doi:10.1016/S0196-6774(03)00002-6.
* [EILM16] H. Eto, T. Ito, Z. Liu, and E. Miyano. Approximability of the distance independent set problem on regular graphs and planar graphs. In Combinatorial Optimization and Applications - 10th International Conference, COCOA 2016, Hong Kong, China, December 16-18, 2016, Proceedings, pages 270–284, 2016, doi:10.1007/978-3-319-48749-6_20.
* [EKM14] D. Eisenstat, P. N. Klein, and C. Mathieu. Approximating k-center in planar graphs. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 617–627, 2014, doi:10.1137/1.9781611973402.47.
* [EN19] M. Elkin and O. Neiman. Efficient algorithms for constructing very sparse spanners and emulators. ACM Trans. Algorithms, 15(1):4:1–4:29, 2019. preliminary version published in SODA 2017, doi:10.1145/3274651.
* [FFKP18] A. E. Feldmann, W. S. Fung, J. Könemann, and I. Post. A (1+$\epsilon$)-embedding of low highway dimension graphs into bounded treewidth graphs. SIAM J. Comput., 47(4):1667–1704, 2018. preliminary version published in ICALP 2015, doi:10.1137/16M1067196.
* [FGK20] A. Filtser, L. Gottlieb, and R. Krauthgamer. Labelings vs. embeddings: On distributed representations of distances. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 1063–1075, 2020, doi:10.1137/1.9781611975994.65.
* [FGNW17] O. Freedman, P. Gawrychowski, P. K. Nicholson, and O. Weimann. Optimal distance labeling schemes for trees. In Proceedings of the ACM Symposium on Principles of Distributed Computing, PODC 2017, Washington, DC, USA, July 25-27, 2017, pages 185–194, 2017, doi:10.1145/3087801.3087804.
* [Fil19] A. Filtser. On strong diameter padded decompositions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019, September 20-22, 2019, Massachusetts Institute of Technology, Cambridge, MA, USA, pages 6:1–6:21, 2019, doi:10.4230/LIPIcs.APPROX-RANDOM.2019.6.
* [FKS19] E. Fox-Epstein, P. N. Klein, and A. Schild. Embedding planar graphs into low-treewidth graphs with applications to efficient approximation schemes for metric problems. In Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ‘19, page 1069–1088, 2019, doi:10.1137/1.9781611975482.66.
* [FL21] A. Filtser and H. Le. Reliable spanners: Locality-sensitive orderings strike back. CoRR, abs/2101.07428, 2021, arXiv:2101.07428.
* [Fre87] G. N. Frederickson. Fast algorithms for shortest paths in planar graphs, with applications. SIAM J. Comput., 16(6):1004–1022, 1987, doi:10.1137/0216064.
* [FRT04] J. Fakcharoenphol, S. Rao, and K. Talwar. A tight bound on approximating arbitrary metrics by tree metrics. J. Comput. Syst. Sci., 69(3):485–497, November 2004. preliminary version published in STOC 2003, doi:10.1016/j.jcss.2004.04.011.
* [GKK+01] C. Gavoille, M. Katz, N. A. Katz, C. Paul, and D. Peleg. Approximate distance labeling schemes. In Algorithms - ESA 2001, 9th Annual European Symposium, Aarhus, Denmark, August 28-31, 2001, Proceedings, pages 476–487, 2001, doi:10.1007/3-540-44676-1_40.
* [GKK17] L. Gottlieb, A. Kontorovich, and R. Krauthgamer. Efficient regression in metric spaces via approximate lipschitz extension. IEEE Trans. Inf. Theory, 63(8):4838–4849, 2017. preliminary version published in SIMBAD 2013, doi:10.1109/TIT.2017.2713820.
* [GPPR04] C. Gavoille, D. Peleg, S. Pérennes, and R. Raz. Distance labeling in graphs. J. Algorithms, 53(1):85–112, 2004. preliminary version published in SODA 2001, doi:10.1016/j.jalgor.2004.05.002.
* [HBK+03] E. Halperin, J. Buhler, R. M. Karp, R. Krauthgamer, and B. Westover. Detecting protein sequence conservation via metric embeddings. Bioinformatics, 19(suppl 1):i122–i129, 2003, doi:10.1093/bioinformatics/btg1016.
* [HHZ21] B. Haeupler, D. E. Hershkowitz, and G. Zuzic. Deterministic tree embeddings with copies for algorithms against adaptive adversaries. CoRR, abs/2102.05168, 2021, arXiv:2102.05168.
* [Ind01] P. Indyk. Algorithmic applications of low-distortion geometric embeddings. In 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, 14-17 October 2001, Las Vegas, Nevada, USA, pages 10–33, 2001, doi:10.1109/SFCS.2001.959878.
* [Jor69] C. Jordan. Sur les assemblages de lignes. Journal für die reine und angewandte Mathematik, 70:185–190, 1869.
* [Kar89] R. M. Karp. A 2k-competitive algorithm for the circle. Manuscript, August 5, 1989.
* [KKM+12] M. Khan, F. Kuhn, D. Malkhi, G. Pandurangan, and K. Talwar. Efficient distributed approximation algorithms via probabilistic tree embeddings. Distributed Comput., 25(3):189–205, 2012. preliminary version published in PODC 2008, doi:10.1007/s00446-012-0157-9.
* [KLMN05] R. Krauthgamer, J. R. Lee, M. Mendel, and A. Naor. Measured descent: a new embedding method for finite metrics. Geometric and Functional Analysis, 15(4):839–858, 2005. preliminary version published in FOCS 2004, doi:10.1007/s00039-005-0527-6.
* [KLP19] I. Katsikarelis, M. Lampis, and V. T. Paschos. Structural parameters, tight bounds, and approximation for (k, r)-center. Discret. Appl. Math., 264:90–117, 2019. preliminary version published in ISAAC 2017, doi:10.1016/j.dam.2018.11.002.
* [KLP20] I. Katsikarelis, M. Lampis, and V. T. Paschos. Structurally parameterized d-scattered set. Discrete Applied Mathematics, 2020. preliminary version published in WG 2018, doi:10.1016/j.dam.2020.03.052.
* [Le18] H. Le. Structural Results and Approximation Algorithms in Minor-free Graphs. PhD thesis, Oregon State University, 2018.
* [Lem03] A. Lemin. On ultrametrization of general metric spaces. Proceedings of the American mathematical society, 131(3):979–989, 2003, doi:10.1090/S0002-9939-02-06605-4.
* [LLR95] N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. Comb., 15(2):215–245, 1995. preliminary version published in FOCS 1994, doi:10.1007/BF01200757.
* [LUW95] F. Lazebnik, V. A. Ustimenko, and A. J. Woldar. A new series of dense graphs of high girth. Bulletin of the American mathematical society, 32(1):73–79, 1995, doi:10.1090/S0273-0979-1995-00569-0.
* [MN07] M. Mendel and A. Naor. Ramsey partitions and proximity data structures. Journal of the European Mathematical Society, 9(2):253–275, 2007. preliminary version published in FOCS 2006, doi:10.4171/JEMS/79.
* [MP15] D. Marx and M. Pilipczuk. Optimal parameterized algorithms for planar facility location problems using voronoi diagrams. In Algorithms - ESA 2015 - 23rd Annual European Symposium, Patras, Greece, September 14-16, 2015, Proceedings, pages 865–877, 2015, doi:10.1007/978-3-662-48350-3_72.
* [NT12] A. Naor and T. Tao. Scale-oblivious metric fragmentation and the nonlinear dvoretzky theorem. Israel Journal of Mathematics, 192(1):489–504, 2012, doi:10.1007/s11856-012-0039-7.
* [Pel00] D. Peleg. Proximity-preserving labeling schemes. J. Graph Theory, 33(3):167–176, 2000. preliminary version published in WG 1999, doi:10.1002/(SICI)1097-0118(200003)33:3<167::AID-JGT7>3.0.CO;2-5.
* [PU89] D. Peleg and E. Upfal. A trade-off between space and efficiency for routing tables. J. ACM, 36(3):510–530, 1989, doi:10.1145/65950.65953.
* [Rao99] S. Rao. Small distortion and volume preserving embeddings for planar and Euclidean metrics. In Proceedings of the Fifteenth Annual Symposium on Computational Geometry, Miami Beach, Florida, USA, June 13-16, 1999, pages 300–306, 1999, doi:10.1145/304893.304983.
* [RR98] Y. Rabinovich and R. Raz. Lower bounds on the distortion of embedding finite metric spaces in graphs. Discret. Comput. Geom., 19(1):79–94, 1998, doi:10.1007/PL00009336.
* [RS03] N. Robertson and P. D. Seymour. Graph minors. XVI. Excluding a non-planar graph. Journal of Combinatoral Theory Series B, 89(1):43–76, 2003, doi:10.1016/S0095-8956(03)00042-X.
* [Tal04] K. Talwar. Bypassing the embedding: algorithms for low dimensional metrics. In STOC '04: Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pages 281–290. ACM Press, 2004, doi:10.1145/1007352.1007399.
* [TZ01] M. Thorup and U. Zwick. Compact routing schemes. In Proceedings of the Thirteenth Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA 2001, Heraklion, Crete Island, Greece, July 4-6, 2001, pages 1–10, 2001, doi:10.1145/378580.378581.
* [TZ05] M. Thorup and U. Zwick. Approximate distance oracles. J. ACM, 52(1):1–24, 2005, doi:10.1145/1044731.1044732.
* [Wen91] R. Wenger. Extremal graphs with no C4's, C6's, or C10's. Journal of Combinatorial Theory, Series B, 52(1):113–116, 1991, doi:10.1016/0095-8956(91)90097-4.
## Appendix A Path Distortion of Clan embeddings into ultrametrics
In this section we briefly provide the modifications and missing details
required to obtain the path distortion property for our clan embedding into
ultrametrics.
###### Definition 10 (Path-distortion).
We say that the one-to-many embedding $f:X\rightarrow 2^{Y}$ from
$(X,d_{X})$ to $(Y,d_{Y})$ has _path-distortion_ $t$ if for every sequence
$\left(x_{0},x_{1},\dots,x_{m}\right)$ in $X$ there is a sequence
$x^{\prime}_{0},\dots,x^{\prime}_{m}$ in $Y$ with $x^{\prime}_{i}\in
f(x_{i})$, such that
$\sum_{i=0}^{m-1}d_{Y}(x^{\prime}_{i},x^{\prime}_{i+1})\leq
t\cdot\sum_{i=0}^{m-1}d_{X}(x_{i},x_{i+1})$.
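The left-hand side of this definition — the best choice of one copy per point — can be evaluated by a simple dynamic program over copies. The toy one-to-many embedding below (copies as coordinates on the real line) is purely illustrative:

```python
def best_copy_path(seq, f, dY):
    """Minimise sum_i dY(x'_i, x'_{i+1}) over all choices of copies
    x'_i in f(x_i), by a Viterbi-style DP along the sequence."""
    cost = {c: 0.0 for c in f[seq[0]]}
    for b in seq[1:]:
        cost = {c2: min(cost[c1] + dY(c1, c2) for c1 in cost)
                for c2 in f[b]}
    return min(cost.values())

# toy one-to-many embedding into the real line: copies are coordinates
f = {0: [0.0, 10.0], 1: [2.0], 2: [3.0, 9.5]}
dY = lambda a, b: abs(a - b)
total = best_copy_path([0, 1, 2], f, dY)
```

The path-distortion guarantee asserts that this minimum is within a factor $t$ of the length of the original sequence in $(X,d_X)$.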
To obtain a clan embedding $(f,\chi)$ as in Lemma 2, the only modification
required is to use the following strengthened version of Claim 1 (the proof of
which appears below).
###### Claim 3.
There is a point $v\in X$ and radius $R\in(0,\frac{\mathrm{diam}(X)}{2}]$,
such that the sets ${P=B_{X}(v,R+\frac{1}{8(k+1)}\cdot\mathrm{diam}(X))}$,
$Q=B_{X}(v,R)$, and $\bar{Q}=X\setminus Q$ satisfy the following properties:
1. 1.
$\min\\{\mu(P),\mu(\bar{Q})\\}\leq\frac{2}{3}\cdot\mu(X)$, and
$\mathrm{diam}(P)\leq\frac{1}{2}\cdot\mathrm{diam}(X)$.
2. 2.
$\mu(P)\leq\mu(Q)\cdot\left(\frac{\mu^{*}(X)}{\mu^{*}(P)}\right)^{\frac{1}{k}}$.
As a result, the distortion guarantee we obtain will be $16(k+1)$ instead
of $16k$. However, we will be guaranteed that in each recursive step one of
the two created clusters has measure at most $\frac{2}{3}\mu(X)$, and also
that the diameter of the first cluster is bounded by half the diameter of
$X$. These are the only properties used in the proof of [BM04] to obtain the
path distortion guarantee. In particular, the exact same argument as in
[BM04] implies the following result:
###### Lemma 20.
Given an $n$-point metric space $(X,d_{X})$ with aspect
ratio${}^{\ref{foot:aspectRatio}}$ $\Phi$, $(\geq 1)$-measure
$\mu:X\rightarrow\mathbb{R}_{\geq 1}$, and integer parameter $k\geq 1$, there
is a clan embedding $(f,\chi)$ into an ultrametric with multiplicative
distortion $16(k+1)$, path distortion $O(k)\cdot\min\\{\log n,\log\Phi\\}$,
and such that $\mathbb{E}_{x\sim\mu}[|f(x)|]\leq\mu(X)^{1+\frac{1}{k}}$.
Using the exact same arguments used to obtain Theorem 1 from Lemma 2, we
conclude:
###### Theorem 15 (Clan embedding into ultrametric).
Given an $n$-point metric space $(X,d_{X})$ with aspect ratio $\Phi$, and
parameter $\epsilon\in(0,1]$, there is a uniform distribution $\mathcal{D}$
over $O(n\log n/\epsilon^{2})$ clan embeddings $(f,\chi)$ into ultrametrics
with multiplicative distortion $O(\frac{\log n}{\epsilon})$, path distortion
$O(\frac{\log n}{\epsilon})\cdot\min\\{\log n,\log\Phi\\}$, and such that for
every point $x\in X$, $\mathbb{E}_{f\sim\mathcal{D}}[|f(x)|]\leq 1+\epsilon$.
In addition, for every $k\in\mathbb{N}$, there is a uniform distribution
$\mathcal{D}$ over $O(n^{1+\frac{2}{k}}\log n)$ clan embeddings $(f,\chi)$
into ultrametrics with multiplicative distortion $16(k+1)$, path distortion
$O(k)\cdot\min\\{\log n,\log\Phi\\}$, and such that for every point $x\in X$,
$\mathbb{E}_{f\sim\mathcal{D}}[|f(x)|]=O(n^{\frac{1}{k}})$.
###### Remark 6.
The spanning clan embedding construction for Theorem 3 actually provides the
path-distortion guarantee without modification. This is because in the
create-petal procedure (Algorithm 3), we always create a petal (cluster) with
measure at most $\frac{1}{2}\mu(Y)$ (and bounded radius).
###### Proof of Claim 3.
Let $v$ be the point minimizing the ratio
$\frac{\mu\left(B_{X}(v,\frac{\mathrm{diam}(X)}{4})\right)}{\mu\left(B_{X}(v,\frac{\mathrm{diam}(X)}{8})\right)}$.
Set $\rho=\frac{\mathrm{diam}(X)}{8(k+1)}$, and for $i\in[0,k+1]$ let
$Q_{i}=B_{X}(v,\frac{\mathrm{diam}(X)}{8}+i\cdot\rho)$. Let
$i^{\prime}\in[0,k-1]$ be the index minimizing
$\frac{\mu(Q_{i^{\prime}+1})}{\mu(Q_{i^{\prime}})}$. Then,
$\left(\frac{\mu(Q_{k+1})}{\mu(Q_{0})}\right)^{\frac{1}{k}}\geq\left(\frac{\mu(Q_{k})}{\mu(Q_{0})}\right)^{\frac{1}{k}}=\left(\frac{\mu(Q_{1})}{\mu(Q_{0})}\cdot\frac{\mu(Q_{2})}{\mu(Q_{1})}\cdots\frac{\mu(Q_{k})}{\mu(Q_{k-1})}\right)^{\frac{1}{k}}\geq\left(\frac{\mu(Q_{i^{\prime}+1})}{\mu(Q_{i^{\prime}})}\right)^{k\cdot\frac{1}{k}}=\frac{\mu(Q_{i^{\prime}+1})}{\mu(Q_{i^{\prime}})}\leavevmode\nobreak\
.$
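The telescoping step above is just the pigeonhole fact that the smallest consecutive ratio is at most the geometric mean of all the ratios; a quick numerical sanity check with an arbitrary positive sequence:

```python
def min_ratio_and_geo_mean(mu):
    """For a positive non-decreasing sequence mu[0..k], compare the minimum
    consecutive ratio mu[i+1]/mu[i] with the geometric mean
    (mu[k]/mu[0])^(1/k); the minimum never exceeds the geometric mean."""
    k = len(mu) - 1
    min_ratio = min(mu[i + 1] / mu[i] for i in range(k))
    geo_mean = (mu[-1] / mu[0]) ** (1.0 / k)
    return min_ratio, geo_mean

mn, gm = min_ratio_and_geo_mean([1.0, 3.0, 3.5, 10.0, 40.0])
```

In the proof, the sequence is $\mu(Q_0),\dots,\mu(Q_k)$, and the index $i^{\prime}$ attaining the minimum ratio is the one used to place the cut.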
If $\mu(Q_{i^{\prime}+1})\leq\frac{2}{3}\mu(X)$ or
$\mu(Q_{i^{\prime}})\geq\frac{1}{3}\mu(X)$, fix $i=i^{\prime}$. Otherwise, fix
$i=i^{\prime}+1$. Note that $i\in[0,k]$. Set
$R=\frac{\mathrm{diam}(X)}{8}+i\cdot\rho$, and $P=B_{X}(v,R+\rho)$,
$Q=B_{X}(v,R)$, $\bar{Q}=X\setminus Q$. Note that $\mathrm{diam}(P)\leq
2\cdot(\frac{\mathrm{diam}(X)}{8}+(k+1)\cdot\rho)=\frac{\mathrm{diam}(X)}{2}$.
If $i=i^{\prime}$, then clearly
$\frac{\mu(P)}{\mu(Q)}\leq\left(\frac{\mu(Q_{k+1})}{\mu(Q_{0})}\right)^{\frac{1}{k}}$
and $\min\\{\mu(P),\mu(\bar{Q})\\}\leq\frac{2}{3}\cdot\mu(X)$. Otherwise,
$i=i^{\prime}+1$, thus $\mu(Q_{i^{\prime}+1})>\frac{2}{3}\mu(X)$ and
$\mu(Q_{i^{\prime}})<\frac{1}{3}\mu(X)$, implying that
$\frac{\mu(Q_{i^{\prime}+1})}{\mu(Q_{i^{\prime}})}>2$ and thus
$\frac{\mu(P)}{\mu(Q)}=\frac{\mu(Q_{i+1})}{\mu(Q_{i})}=\frac{\mu(Q_{i^{\prime}+2})}{\mu(Q_{i^{\prime}+1})}\leq\frac{\mu(X)}{\frac{2}{3}\mu(X)}=\frac{3}{2}<\frac{\mu(Q_{i^{\prime}+1})}{\mu(Q_{i^{\prime}})}\leq\left(\frac{\mu(Q_{k+1})}{\mu(Q_{0})}\right)^{\frac{1}{k}}\leavevmode\nobreak\
.$
Furthermore, $\mu(\bar{Q})=\mu(X)-\mu(Q_{i^{\prime}+1})<\frac{1}{3}\mu(X)$. In
both cases we obtain that
$\min\\{\mu(P),\mu(\bar{Q})\\}\leq\frac{2}{3}\cdot\mu(X)$ and
$\frac{\mu(P)}{\mu(Q)}\leq\left(\frac{\mu(Q_{k+1})}{\mu(Q_{0})}\right)^{\frac{1}{k}}$.
It remains to prove the second required property.
Let $u_{P}$ be the point defining $\mu^{*}(P)$, that is,
$\mu^{*}(P)=\mu\left(B_{P}(u_{P},\frac{\mathrm{diam}(P)}{4})\right)\leq\mu\left(B_{P}(u_{P},\frac{\mathrm{diam}(X)}{8})\right)$.
Using the minimality of $v$, it holds that
$\frac{\mu(P)}{\mu(Q)}\leq\left(\frac{\mu(Q_{k+1})}{\mu(Q_{0})}\right)^{\frac{1}{k}}=\left(\frac{\mu\left(B_{X}(v,\frac{\mathrm{diam}(X)}{4})\right)}{\mu\left(B_{X}(v,\frac{\mathrm{diam}(X)}{8})\right)}\right)^{\frac{1}{k}}\stackrel{{\scriptstyle(*)}}{{\leq}}\left(\frac{\mu\left(B_{X}(u_{P},\frac{\mathrm{diam}(X)}{4})\right)}{\mu\left(B_{X}(u_{P},\frac{\mathrm{diam}(X)}{8})\right)}\right)^{\frac{1}{k}}\leq\left(\frac{\mu^{*}\left(X\right)}{\mu^{*}\left(P\right)}\right)^{\frac{1}{k}}\leavevmode\nobreak\
,$
where $(*)$ is due to the choice of $v$. ∎
## Appendix B Local Search Algorithms for Metric Becker Problems
In this section we present polynomial-time approximation schemes
(PTASs)${}^{\ref{foot:approximationSchemes}}$ for the metric
$\rho$-dominating/independent set problems under the uniform measure. Both
algorithms are local search algorithms. The analysis of the algorithm for the
metric $\rho$-dominating set problem was presented in [Le18]; it uses
techniques similar to those used in [CKM19] to construct a PTAS for the
$k$-means and $k$-median problems in minor-free graphs. The analysis for the
metric $\rho$-independent set problem is new (though similar).
In both proofs we will use $r$-divisions. The following theorem follows from
[Fre87, AST90] (see [CKM19] for details).
###### Theorem 16 ([Fre87, AST90]).
For every graph $H$, there is an absolute constant $c_{H}$ such that for
every $r\in\mathbb{N}$ and every $n$-vertex $H$-minor-free graph $G=(V,E)$,
the vertices of $G$ can be divided into clusters $\mathcal{R}$ such that:
1. 1.
For every edge $\\{u,v\\}\in E$, there is a cluster $C\in\mathcal{R}$ such
that $u,v\in C$.
2. 2.
For every $C\in\mathcal{R}$, $|C|\leq r^{2}$.
3. 3.
Let $\mathcal{B}$ be the set of vertices appearing in more than a single
cluster (called boundary vertices); then
$\sum_{C\in\mathcal{R}}|C\cap\mathcal{B}|\leq c_{H}\cdot\frac{n}{r}$.
### B.1 Local search for $\rho$-dominating set under uniform measure
We state and prove the theorem here for the case where the terminal set is
$\mathcal{K}=V$; however, it can easily be adapted to handle a general
terminal set.
###### Theorem 17.
There is a polynomial-time approximation scheme (PTAS) for the metric
$\rho$-dominating set problem in $H$-minor-free graphs under the uniform
measure.
Specifically, given a weighted $n$-vertex $H$-minor-free graph $G=(V,E,w)$,
and parameters $\epsilon\in(0,\frac{1}{2})$, $\rho>0$, in
$n^{O_{|H|}(\epsilon^{-2})}$ time, one can find a $\rho$-dominating set
$S\subseteq V$ such that for every $\rho$-dominating set $\tilde{S}$,
$|S|\leq(1+\epsilon)|\tilde{S}|$.
input : $n$-vertex graph $G=(V,E,w)$, parameters $\rho,s$
output : $\rho$-dominating set $S$
1. $S\leftarrow V$
2. while _there exists a $\rho$-dominating set $S^{\prime}\subseteq V$ such that $|S^{\prime}|<|S|$ and $|S\setminus S^{\prime}|+|S^{\prime}\setminus S|\leq s$_ do
3. $\quad S\leftarrow S^{\prime}$
4. return $S$

Algorithm 4 Local search algorithm for metric $\rho$-dominating set
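A minimal executable sketch of Algorithm 4 on a toy instance (the brute-force enumeration of swaps and the line metric below are assumptions of this sketch, chosen only to keep it self-contained):

```python
import itertools

def is_dominating(S, V, d, rho):
    """True iff every vertex of V is within distance rho of some vertex of S."""
    return all(any(d(u, v) <= rho for v in S) for u in V)

def improve(S, V, d, rho, s):
    """Return a strictly smaller rho-dominating set within swap distance s
    of S, or None if S is locally optimal."""
    rest = set(V) - S
    for out in range(1, s + 1):
        for k in range(min(out, s - out + 1)):        # k = |S' \ S| < out
            for removed in itertools.combinations(S, out):
                for added in itertools.combinations(rest, k):
                    Sp = (S - set(removed)) | set(added)
                    if is_dominating(Sp, V, d, rho):
                        return Sp
    return None

def local_search_dominating(V, d, rho, s):
    """Algorithm 4: start from S = V, apply improving swaps of size <= s."""
    S = set(V)                                        # trivially dominating
    while (Sp := improve(S, V, d, rho, s)) is not None:
        S = Sp
    return S

# Toy metric (an assumption of this sketch): six points on a line,
# d(i, j) = |i - j|, with rho = 1; the optimum {1, 4} has size 2.
V = set(range(6))
d = lambda u, v: abs(u - v)
S = local_search_dominating(V, d, rho=1, s=3)
assert is_dominating(S, V, d, 1) and len(S) <= 3
```

On this instance every local optimum is a minimal dominating set of the path, so the algorithm always stops at size 2 or 3, within the promised factor of the optimum 2.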
###### Proof.
Set $r=\frac{4c_{H}}{\epsilon}$, where $c_{H}$ is the constant from Theorem 16
w.r.t. $H$. Let $S$ be the set returned by the local search Algorithm 4 with
parameters $\rho$ and $s=r^{2}=O_{H}(\frac{1}{\epsilon^{2}})$. Clearly $S$ is
a $\rho$-dominating set. Each step of the while loop takes at most ${n\choose
s}^{2}\cdot\mathrm{poly}(n)=n^{O_{|H|}(\epsilon^{-2})}$ time, and there are at
most $n$ iterations, so the stated running time follows. Let
$S_{\mathrm{opt}}$ be a $\rho$-dominating set of minimum cardinality; it
remains to prove that $|S|\leq(1+\epsilon)|S_{\mathrm{opt}}|$.
Let $\tilde{V}=S\cup S_{\mathrm{opt}}$, and let $\mathcal{P}$ be a partition
of the vertices in $V$ w.r.t. the Voronoi cells with $\tilde{V}$ as centers.
Specifically, for each vertex $u\in V$, $u$ joins the cluster $P_{v}$ of a
vertex $v\in\tilde{V}$ at minimal distance $\min_{v\in\tilde{V}}d_{G}(u,v)$.
(For simplicity, we assume that all pairwise distances are unique;
alternatively, one can break ties in a consistent way, i.e. w.r.t. some total
order.) Let $\tilde{G}$ be the graph obtained from $G$ by contracting the
internal edges in each Voronoi cell (and keeping only a single copy of each
edge). Alternatively, one can define $\tilde{G}$ with $\tilde{V}$ as vertex
set such that $v,u\in\tilde{V}$ are adjacent iff there is an edge in $G$
between a vertex in $P_{u}$ and a vertex in $P_{v}$. Note that $\tilde{G}$ is
a minor of $G$, and hence is $H$-minor-free.
Next, we use Theorem 16 on $\tilde{G}$ to obtain an $r$-division $\mathcal{R}$,
with $\mathcal{B}$ as boundary vertices. Consider a cluster $C\in\mathcal{R}$,
and let $C^{\prime}=C\cap(\mathcal{B}\cup S_{\mathrm{opt}})$. Fix
$S^{\prime}=(S\setminus C)\cup C^{\prime}$.
###### Claim 4.
$S^{\prime}$ is a $\rho$-dominating set.
###### Proof.
Consider a vertex $u\in V$; we will argue that $u$ is at distance at most
$\rho$ from some vertex in $S^{\prime}$. Let $v_{1}\in S$ (resp. $v_{2}\in
S_{\mathrm{opt}}$) be the closest vertex to $u$ in $S$ (resp. in
$S_{\mathrm{opt}}$). It holds that $d_{G}(u,v_{1}),d_{G}(u,v_{2})\leq\rho$. If
either $v_{1}\notin C$, $v_{1}\in C\cap\mathcal{B}$, or $v_{2}\in C$, then
$S^{\prime}$ contains at least one of $v_{1},v_{2}$ and we are done. Thus we
can assume that $v_{1}\in C\setminus\mathcal{B}$ and $v_{2}\notin C$. Let
$\Pi=\\{v_{1}=z_{0},z_{1},\dots,z_{a},u,w_{0},w_{1},\dots,w_{b}=v_{2}\\}$ be
the unique shortest path from $v_{1}$ to $v_{2}$ that goes through $u$.
Assume first that $u$ belongs to the Voronoi cell $P_{v_{1}}$ of $v_{1}$. For
every index $i$ and every $v^{\prime}\in\tilde{V}\setminus\\{v_{1}\\}$
it holds that $d_{G}(v^{\prime},z_{i})\geq
d_{G}(v^{\prime},u)-d_{G}(u,z_{i})>d_{G}(v_{1},u)-d_{G}(u,z_{i})=d_{G}(v_{1},z_{i})$.
It follows that all the vertices $\\{z_{0},z_{1},\dots,z_{a}\\}$ belong to the
Voronoi cell $P_{v_{1}}$. As $v_{1}\in C\setminus\mathcal{B}$ and
$v_{2}\notin C$, there must be some index $j$ such that $w_{j}$ belongs to the
Voronoi cell $P_{v_{3}}$ of some $v_{3}\in C\cap\mathcal{B}$ (as otherwise
there would be an edge in $\tilde{G}$ between a vertex in
$C\setminus\mathcal{B}$ and a vertex not in $C$). It holds that
$d_{G}(u,v_{3})\leq d_{G}(u,w_{j})+d_{G}(w_{j},v_{3})\leq d_{G}(u,w_{j})+d_{G}(w_{j},v_{2})=d_{G}(u,v_{2})\leq\rho,$
where the first inequality follows by the triangle inequality, the second
since $w_{j}\in P_{v_{3}}$, and the equality since $w_{j}$ lies on the
shortest path from $u$ to $v_{2}$. As $v_{3}\in C\cap\mathcal{B}$, it follows
that $v_{3}\in S^{\prime}$, and we are done. The case $u\in P_{v_{2}}$ is
symmetric. ∎
It holds that $|S^{\prime}\setminus S|+|S\setminus S^{\prime}|\leq|C|\leq
r^{2}=s$. Thus $|S^{\prime}|\geq|S|$, since otherwise Algorithm 4 would not
have returned the set $S$. Hence $|C\cap(\mathcal{B}\cup
S_{\mathrm{opt}})|=|C^{\prime}|\geq|C\cap S|$. As the same argument applies to
every cluster $C\in\mathcal{R}$, and as
$|\tilde{V}|\leq|S|+|S_{\mathrm{opt}}|\leq 2|S|$, we conclude that
$|S|=\sum_{C\in\mathcal{R}}|C\cap S|\leq\sum_{C\in\mathcal{R}}|C\cap(\mathcal{B}\cup S_{\mathrm{opt}})|\leq|S_{\mathrm{opt}}|+\sum_{C\in\mathcal{R}}|C\cap\mathcal{B}|\leq|S_{\mathrm{opt}}|+c_{H}\cdot\frac{|\tilde{V}|}{r}\leq|S_{\mathrm{opt}}|+2c_{H}\cdot\frac{|S|}{r}.$
But this implies
$|S_{\mathrm{opt}}|\geq(1-\frac{2c_{H}}{r})|S|=(1-\frac{\epsilon}{2})|S|$,
thus
$|S|\leq\frac{1}{1-\frac{\epsilon}{2}}|S_{\mathrm{opt}}|\leq(1+\epsilon)|S_{\mathrm{opt}}|$.
∎
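The final step rests on the elementary inequality $\frac{1}{1-\epsilon/2}\leq 1+\epsilon$ for $\epsilon\in(0,\frac{1}{2})$; a quick numeric check:

```python
# 1/(1 - eps/2) <= 1 + eps is equivalent to eps^2/2 <= eps/2, i.e. eps <= 1,
# so it holds on all of (0, 1/2); we spot-check a grid of values.
for i in range(1, 50):
    eps = i / 100                      # eps ranges over (0, 0.5)
    assert 1 / (1 - eps / 2) <= 1 + eps
```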
### B.2 Local search for $\rho$-independent set under uniform measure
###### Theorem 18.
There is a polynomial-time approximation scheme (PTAS) for the metric
$\rho$-independent set problem in $H$-minor-free graphs under the uniform
measure.
Specifically, given a weighted $n$-vertex $H$-minor-free graph $G=(V,E,w)$,
and parameters $\epsilon\in(0,\frac{1}{2})$, $\rho>0$, in
$n^{O_{|H|}(\epsilon^{-2})}$ time, one can find a $\rho$-independent set
$S\subseteq V$ such that for every $\rho$-independent set $\tilde{S}$,
$|S|\geq(1-\epsilon)|\tilde{S}|$.
input : $n$-vertex graph $G=(V,E,w)$, parameters $\rho,s$
output : $\rho$-independent set $S$
1. $S\leftarrow\emptyset$
2. while _there exists a $\rho$-independent set $S^{\prime}\subseteq V$ such that $|S^{\prime}|>|S|$ and $|S\setminus S^{\prime}|+|S^{\prime}\setminus S|\leq s$_ do
3. $\quad S\leftarrow S^{\prime}$
4. return $S$

Algorithm 5 Local search algorithm for metric $\rho$-independent set
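Mirroring the sketch for Algorithm 4, a minimal executable sketch of Algorithm 5 (the brute-force swap search and the line metric are again assumptions of this sketch):

```python
import itertools

def is_independent(S, d, rho):
    """True iff all pairwise distances within S are at least rho."""
    return all(d(u, v) >= rho for u, v in itertools.combinations(S, 2))

def improve(S, V, d, rho, s):
    """Return a strictly larger rho-independent set within swap distance s
    of S, or None if S is locally optimal."""
    rest = set(V) - S
    for out in range(0, s):                     # |S \ S'|
        for k in range(out + 1, s - out + 1):   # |S' \ S| > |S \ S'|
            for removed in itertools.combinations(S, out):
                for added in itertools.combinations(rest, k):
                    Sp = (S - set(removed)) | set(added)
                    if is_independent(Sp, d, rho):
                        return Sp
    return None

def local_search_independent(V, d, rho, s):
    """Algorithm 5: start from the empty set, apply improving swaps."""
    S = set()                                   # trivially independent
    while (Sp := improve(S, V, d, rho, s)) is not None:
        S = Sp
    return S

# Toy metric (an assumption of this sketch): six points on a line with
# rho = 2; an optimal rho-independent set such as {0, 2, 4} has size 3.
V = set(range(6))
d = lambda u, v: abs(u - v)
S = local_search_independent(V, d, rho=2, s=3)
assert is_independent(S, d, 2) and len(S) == 3
```

On this instance the only maximal set of size 2, namely $\{1,4\}$, admits an improving swap (drop $1$, add $0$ and $2$), so the local search always ends at an optimal set of size 3.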
###### Proof.
Set $r=\frac{4c_{H}}{\epsilon}$, where $c_{H}$ is the constant from Theorem 16
w.r.t. $H$. Let $S$ be the set returned by the local search Algorithm 5 with
parameters $\rho$ and
$s=r^{2}=\frac{16c_{H}^{2}}{\epsilon^{2}}=O_{H}(\frac{1}{\epsilon^{2}})$.
Clearly $S$ is a $\rho$-independent set. Each step of the while loop takes at
most ${n\choose s}^{2}\cdot\mathrm{poly}(n)=n^{O_{|H|}(\epsilon^{-2})}$ time,
and there are at most $n$ iterations, so the stated running time follows. Let
$S_{\mathrm{opt}}$ be a $\rho$-independent set of maximum cardinality; it
remains to prove that $|S|\geq(1-\epsilon)|S_{\mathrm{opt}}|$.
Construct a graph $\tilde{G}=(\tilde{V},\tilde{E})$ with
$\tilde{V}=S\cup S_{\mathrm{opt}}$ as vertex set, where $u,v\in\tilde{V}$ are
adjacent iff $d_{G}(u,v)<\rho$. Clearly all the edges are from $S\times
S_{\mathrm{opt}}$ (as both $S,S_{\mathrm{opt}}$ are $\rho$-independent sets).
Note that $\tilde{G}$ is a minor of $G$. This is because the shortest paths
$P_{u,v}$ for $\\{u,v\\}\in\tilde{E}$ do not intersect. To see this, assume
for contradiction that there are distinct pairs $u,u^{\prime}\in
S_{\mathrm{opt}}$, $v,v^{\prime}\in S$ such that
$\\{u,v\\},\\{u^{\prime},v^{\prime}\\}\in\tilde{E}$, and there is some vertex
$z$ such that $z\in P_{u,v}\cap P_{u^{\prime},v^{\prime}}$. W.l.o.g. assume
that $d_{G}(u,z)+d_{G}(u^{\prime},z)\leq d_{G}(z,v)+d_{G}(z,v^{\prime})$.
Using the triangle inequality, it follows that
$\displaystyle d_{G}(u,u^{\prime})\leq d_{G}(u,z)+d_{G}(u^{\prime},z)$
$\displaystyle\leq\frac{1}{2}\cdot\left(d_{G}(u,z)+d_{G}(z,v)+d_{G}(u^{\prime},z)+d_{G}(z,v^{\prime})\right)$
$\displaystyle=\frac{1}{2}\cdot\left(d_{G}(u,v)+d_{G}(u^{\prime},v^{\prime})\right)<\rho,$
contradicting the $\rho$-independence of $S_{\mathrm{opt}}$.
Next, we apply Theorem 16 to $\tilde{G}$ to obtain an $r$-division $\mathcal{R}$,
with $\mathcal{B}$ as boundary vertices. Consider a cluster $C\in\mathcal{R}$,
and let $C^{\prime}=(C\cap S_{\mathrm{opt}})\setminus\mathcal{B}$. Fix
$S^{\prime}=(S\setminus C)\cup C^{\prime}$.
###### Claim 5.
$S^{\prime}$ is a $\rho$-independent set.
###### Proof.
Consider a pair of vertices $u,v\in S^{\prime}$; we will show that
$d_{G}(u,v)\geq\rho$. If both $u,v$ belong to $S$, then since $S$ is a
$\rho$-independent set, it follows that $d_{G}(u,v)\geq\rho$. The same
argument holds if both $u,v$ belong to $S_{\mathrm{opt}}$. We thus can assume
w.l.o.g. that $u\in S\setminus S_{\mathrm{opt}}$ and $v\in
S_{\mathrm{opt}}\setminus S$. It follows that $u\notin C$ while $v\in C$.
Moreover, as $v\in C\cap S^{\prime}$, necessarily $v\notin\mathcal{B}$. The
only vertices in $C$ with edges towards vertices outside of $C$ are in
$\mathcal{B}$. It follows that $\\{u,v\\}$ is not an edge of $\tilde{G}$,
implying $d_{G}(u,v)\geq\rho$. ∎
It holds that $|S^{\prime}\setminus S|+|S\setminus S^{\prime}|\leq|C|\leq
r^{2}=s$. Thus $|S^{\prime}|\leq|S|$, as otherwise Algorithm 5 would not have
returned the set $S$. Hence $|(C\cap
S_{\mathrm{opt}})\setminus\mathcal{B}|=|C^{\prime}|\leq|C\cap S|$. As the same
argument applies to every cluster $C\in\mathcal{R}$, and as
$|\tilde{V}|\leq|S|+|S_{\mathrm{opt}}|$, we conclude that
$|S|=\sum_{C\in\mathcal{R}}|C\cap S|\geq\sum_{C\in\mathcal{R}}|(C\cap S_{\mathrm{opt}})\setminus\mathcal{B}|\geq|S_{\mathrm{opt}}|-\sum_{C\in\mathcal{R}}|C\cap\mathcal{B}|\geq|S_{\mathrm{opt}}|-c_{H}\cdot\frac{|\tilde{V}|}{r}\geq|S_{\mathrm{opt}}|-\frac{c_{H}}{r}\cdot\left(|S|+|S_{\mathrm{opt}}|\right).$
Rearranging,
$(1+\frac{c_{H}}{r})|S|\geq(1-\frac{c_{H}}{r})|S_{\mathrm{opt}}|$; since
$\frac{c_{H}}{r}=\frac{\epsilon}{4}$, this gives
$|S|\geq\frac{1-\frac{\epsilon}{4}}{1+\frac{\epsilon}{4}}|S_{\mathrm{opt}}|\geq(1-\epsilon)|S_{\mathrm{opt}}|.$
∎
# A remark on the Strichartz Inequality in one dimension
Ryan Frier and Shuanglin Shao
###### Abstract.
In this paper, we study the extremal problem for the Strichartz inequality for
the Schrödinger equation on $\mathbb{R}^{2}$. We show that the solutions to
the associated Euler-Lagrange equation are exponentially decaying in the
Fourier space and thus can be extended to be complex analytic. Consequently,
we provide a new proof of the characterization of the extremal functions: the
only extremals are Gaussian functions, which was investigated previously by
Foschi [7] and Hundertmark-Zharnitsky [11].
## 1\. Introduction
To begin, we note that the Strichartz inequality for an arbitrary dimension
$d$ is
$\begin{split}\|e^{it\Delta}f\|_{2+4/d}\leq C_{d}\|f\|_{2},\end{split}$
where $\|\ f\|_{p}=\left(\int_{\mathbb{R}^{d}}|f(x)|^{p}\ dx\right)^{1/p}$,
and
$e^{it\Delta}f=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}e^{ix\cdot\xi+i|\xi|^{2}t}\widehat{f}(\xi)d\xi$,
see e.g., [12, 18]. The Strichartz inequality has long been studied; the
original proof is due to Robert Strichartz [17] in 1977.
Define the Fourier transform as
$\widehat{f}(\xi)=\int_{\mathbb{R}^{d}}e^{-ix\cdot\xi}f(x)dx$ and the space-
time Fourier transform as
$\tilde{F}(\tau,\xi)=\int_{\mathbb{R}\times\mathbb{R}^{d}}e^{i(\tau
t+x\cdot\xi)}F(t,x)\ dt\ dx$. Note that in the case of $d=1$ we have
(1) $\begin{split}\left\|e^{it\Delta}f\right\|_{6}&\leq C_{1}\|f\|_{2};\\\
e^{it\Delta}f&=\frac{1}{2\pi}\int_{\mathbb{R}}e^{ix\xi+i\xi^{2}t}\widehat{f}(\xi)d\xi.\end{split}$
Define
(2)
$\begin{split}C_{1}&=\sup\left\\{\frac{\left\|e^{it\Delta}f\right\|_{6}}{\|f\|_{2}}:f\in
L^{2},f\neq 0\right\\}.\end{split}$
We say that $f$ is an extremizer or a maximizer of the Strichartz inequality
if $f\neq 0$ and $\|e^{it\Delta}f\|_{6}=C_{1}\|f\|_{2}$. The extremal problem
for the Strichartz inequality (1) asks: (a) Does there exist an extremizer for
(1)? (b) If it exists, what are the characterizations of extremizers, e.g.,
continuity and differentiability? What is the explicit form of extremizers?
Are they unique up to the symmetries of the inequality? In this note, we are
mainly concerned with question (b).
By D. Foschi’s 2007 paper [7] we know that the maximizers are Gaussian
functions of the form $f(x)=e^{Ax^{2}+Bx+C}$, where $A,B,C\in\mathbb{C}$, and
$\Re\\{A\\}<0$, up to the symmetries of the Strichartz inequality. In
particular, according to Foschi, $f(x)=e^{-|x|^{2}}$ is a maximizer in
dimension $1$; such an $f$ satisfies (1) with equality for the constant $C_{1}$.
Hundertmark and Zharnitsky in [11] showed a new representation using an
orthogonal projection operator for dimension $1$ and $2$. The representation
that was found is
$\begin{split}\int_{\mathbb{R}}\int_{\mathbb{R}}\left|e^{it\Delta}f(x)\right|^{6}\
dxdt&=\frac{1}{2\sqrt{3}}\langle f\otimes f\otimes f,P_{1}(f\otimes f\otimes
f)\rangle_{L^{2}(\mathbb{R}^{3})},\\\
\int_{\mathbb{R}}\int_{\mathbb{R}^{2}}\left|e^{it\Delta}f(x)\right|^{4}\
dxdt&=\frac{1}{4}\langle f\otimes f,P_{2}(f\otimes
f)\rangle_{L^{2}(\mathbb{R}^{4})}\end{split}$
for dimensions $d=1$ and $d=2$, respectively, where $P_{1},P_{2}$ are certain
projection operators. Using this, they were able to obtain the same results.
In [13] Kunze showed that such a maximizer exists in dimension $1$. In [14],
the second author showed the existence of a maximizer in all dimensions for
the Strichartz inequalities for the Schrödinger equation. Likewise, in [16],
Brocchi, Silva, and Quilodrán investigated sharp Strichartz inequalities for
fractional and higher order Schrödinger equations. There they discussed the
rapid $L^{2}$ decay of extremizers, which we will also discuss and use to
establish a characterization of extremizers.
We will take inspiration from [15] to show a different method of proving that
extremizers are Gaussians. More precisely, in this note, we are interested in
the problem of how to characterize extremals for (1) via the study of the
associated Euler-Lagrange equation. We show that the solutions of this
generalized Euler-Lagrange equation enjoy a fast decay in the Fourier space
and thus can be extended to be complex analytic, see Theorem 1.1. Then as an
easy consequence, we give an alternative proof that all extremal functions to
(1) are Gaussians based on solving a functional equation of extremizers
derived in Foschi [7], see (5) and Theorem 1.2. The functional equality (5) is
a key ingredient in Foschi’s proof in [7]. To prove that $f$ in (5) is a
Gaussian function, local integrability of $f$ is assumed in [7]; this
assumption is further relaxed to mere measurability in Charalambides [2].
Let $f$ be an extremal function to (1) with the constant $C_{1}$. Then $f$
satisfies the following generalized Euler-Lagrange equation,
(3) $\omega\langle g,f\rangle=\mathcal{Q}(g,f,f,f,f,f),\text{for all }g\in
L^{2},$
where $\omega=\mathcal{Q}(f,f,f,f,f,f)/\|f\|_{L^{2}}^{2}>0$ and
$\mathcal{Q}(f_{1},f_{2},f_{3},f_{4},f_{5},f_{6})$ is the integral
(4)
$\begin{split}&\int_{\mathbb{R}^{6}}\overline{\widehat{f_{1}}}(\xi_{1})\overline{\widehat{f_{2}}}(\xi_{2})\overline{\widehat{f_{3}}}(\xi_{3})\widehat{f_{4}}(\xi_{4})\widehat{f_{5}}(\xi_{5})\widehat{f_{6}}(\xi_{6})\delta(\xi_{1}+\xi_{2}+\xi_{3}-\xi_{4}-\xi_{5}-\xi_{6})\\\
&\qquad\qquad\qquad\times\delta(\xi_{1}^{2}+\xi_{2}^{2}+\xi_{3}^{2}-\xi_{4}^{2}-\xi_{5}^{2}-\xi_{6}^{2})d\xi_{1}d\xi_{2}d\xi_{3}d\xi_{4}d\xi_{5}d\xi_{6},\end{split}$
for $f_{i}\in L^{2}(\mathbb{R})$, $1\leq i\leq 6$,
$\delta(\xi)=(2\pi)^{-d}\int_{\mathbb{R}^{d}}e^{i\xi\cdot x}dx$ in the
distribution sense, $d=1,2$. The proof of (3) is standard; see e.g. [6, p.
489] or [9, Section 2] for similar derivations of Euler-Lagrange equations.
###### Theorem 1.1.
If $f$ solves the generalized Euler-Lagrange equation (3) for some $\omega>0$,
then there exists $\mu>0$ such that
$e^{\mu|\xi|^{2}}\widehat{f}\in L^{2}(\mathbb{R}).$
Furthermore $f$ can be extended to be complex analytic on $\mathbb{C}$.
To prove this theorem, we follow the argument in [10]. Similar reasoning has
appeared previously in [5, 8]. It relies on a multilinear weighted Strichartz
estimate and a continuity argument. See Lemma 3.2 and Lemma 3.3, respectively.
Next we prove that the extremals to (1) are Gaussian functions. We start with
the study of the functional equation derived in [7]. In [7], the functional
equation reads
(5) $f(x)f(y)f(z)=f(a)f(b)f(c),$
for any $x,y,z,a,b,c\in\mathbb{R}$ such that
(6) $x+y+z=a+b+c,\quad x^{2}+y^{2}+z^{2}=a^{2}+b^{2}+c^{2}.$
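As a numeric sanity check (the specific Gaussian parameters below are arbitrary choices, not from the paper), any Gaussian $f(x)=e^{Ax^{2}+Bx+C}$ satisfies (5) on triples obeying (6), since the exponent of $f(x)f(y)f(z)$ depends only on $x+y+z$ and $x^{2}+y^{2}+z^{2}$:

```python
import cmath
import random

# Arbitrary Gaussian parameters with Re(A) < 0 (assumptions of this check).
A, B, C = complex(-1.0, 0.3), complex(0.2, -0.5), complex(0.1, 0.7)
f = lambda x: cmath.exp(A * x * x + B * x + C)

# (x, -x, x) and (phi*x, psi*x, 0) share the same sum and sum of squares,
# where phi, psi = (1 +- sqrt(5))/2 are the roots of t^2 - t - 1.
phi, psi = (1 + 5 ** 0.5) / 2, (1 - 5 ** 0.5) / 2
random.seed(0)
for _ in range(100):
    x = random.uniform(-2.0, 2.0)
    assert abs(x + (-x) + x - (phi * x + psi * x + 0)) < 1e-12
    assert abs(3 * x * x - ((phi * x) ** 2 + (psi * x) ** 2)) < 1e-9
    lhs = f(x) * f(-x) * f(x)
    rhs = f(phi * x) * f(psi * x) * f(0)
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(lhs))
```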
In [7], it is proven that $f\in L^{2}$ satisfies (5) if and only if $f$ is an
extremal function to (1). Basically, this comes from two observations. One is
that in Foschi’s proof of the sharp Strichartz inequality, the only inequality
used, at a single place, is the Cauchy-Schwarz inequality, while every other
step is an equality. So equality in the Strichartz inequality (1), or
equivalently equality in Cauchy-Schwarz, yields the same functional equation
as (5) with $f$ replaced by $\hat{f}$. The other is that the Strichartz norm
for the Schrödinger equation enjoys the identity
(7)
$\|e^{it\Delta}f\|_{L^{6}(\mathbb{R}^{2})}=C\|e^{it\Delta}f^{\vee}\|_{L^{6}(\mathbb{R}^{2})}$
for some $C>0$.
In [7], Foschi is able to show that all the solutions to (5) are Gaussians
under the assumption that $f$ is a locally integrable function. In [15], Jiang
and the second author studied the two dimensional case of (5) and proved that
the solutions are Gaussian functions. These can be viewed as investigations of
the Cauchy functional equations (5) for functions supported on the
paraboloids. To characterize the extremals for the Tomas-Stein inequality for
the sphere in $\mathbb{R}^{3}$, Christ and the second author in [4] study a
functional equation of similar type for functions supported on the sphere and
prove that they are exponentially affine functions. In [2], Charalambides
generalizes the analysis in [4] to some general hyper-surfaces in
$\mathbb{R}^{n}$ that include the sphere, paraboloids and cones as special
examples and proves that the solutions are exponentially affine functions. In
[2, 4], the functions are assumed to be measurable functions.
By the analyticity established in Theorem 1.1, Equations (5) and (6) have the
following easy consequence, which recovers the result in [7, 11].
###### Theorem 1.2.
Suppose that $f$ is an extremal function to (1). Then
(8) $f(x)=e^{A|x|^{2}+B\cdot x+C},$
where $A,B,C\in\mathbb{C}$ and $\Re(A)<0$.
## 2\. Developing the Extremizer
We want to show that if $f$ solves the generalized Euler-Lagrange equation
(3), then there exists some $\mu>0$ such that
$\begin{split}e^{\mu|\xi|^{2}}\widehat{f}\in L^{2}.\end{split}$
Furthermore, we can extend $f$ to be entire. To begin, we note that by
Foschi’s paper [7], we have for a maximizer $f$,
$f(x)f(y)f(z)=F(x^{2}+y^{2}+z^{2},x+y+z)$. Thus for any
$(x,y,z),(a,b,c)\in\mathbb{R}^{3}$ such that
(9) $\begin{split}x+y+z&=a+b+c\end{split}$
and
(10) $\begin{split}x^{2}+y^{2}+z^{2}&=a^{2}+b^{2}+c^{2},\end{split}$
then
$\begin{split}f(x)f(y)f(z)&=F(x^{2}+y^{2}+z^{2},x+y+z)\\\
&=F(a^{2}+b^{2}+c^{2},a+b+c)\\\ &=f(a)f(b)f(c).\end{split}$
That is,
(11) $\begin{split}f(x)f(y)f(z)&=f(a)f(b)f(c).\end{split}$
Let us assume that $f$ has an entire extension. Then $f$ restricted to
$\mathbb{R}$ is real analytic. By [7, Lemma 7.9], such a nontrivial $f\in
L^{2}$ is nowhere zero. We prove the following theorem.
###### Theorem 2.1.
If $f$ is a maximizer for $(1)$, then $f(x)=e^{Ax^{2}+Bx+C}$, where
$A,B,C\in\mathbb{C}$.
###### Proof.
Consider $\varphi(x)=\log(f(x))$. We know from [7, Lemma 7.9] that $f$ is
nowhere $0$, so $\varphi$ is well defined. Since $f$ is analytic, then so is
$\varphi$. Hence by the power series expansion we have
$\varphi(x)=\varphi(0)+\varphi^{\prime}(0)x+\frac{\varphi^{\prime\prime}(0)}{2}x^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}x^{k}.$
In particular, the expansion holds at $a,b,c,d,e,g$, where $(a,b,c),(d,e,g)$
satisfy equations (9) and (10). That is,
$\begin{split}\varphi(a)&=\varphi(0)+\varphi^{\prime}(0)a+\frac{\varphi^{\prime\prime}(0)}{2}a^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}a^{k},\\\
\varphi(b)&=\varphi(0)+\varphi^{\prime}(0)b+\frac{\varphi^{\prime\prime}(0)}{2}b^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}b^{k},\\\
\varphi(c)&=\varphi(0)+\varphi^{\prime}(0)c+\frac{\varphi^{\prime\prime}(0)}{2}c^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}c^{k},\\\
\varphi(d)&=\varphi(0)+\varphi^{\prime}(0)d+\frac{\varphi^{\prime\prime}(0)}{2}d^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}d^{k},\\\
\varphi(e)&=\varphi(0)+\varphi^{\prime}(0)e+\frac{\varphi^{\prime\prime}(0)}{2}e^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}e^{k},\\\
\varphi(g)&=\varphi(0)+\varphi^{\prime}(0)g+\frac{\varphi^{\prime\prime}(0)}{2}g^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}g^{k}.\end{split}$
By equation (11) we know that
$\varphi(a)+\varphi(b)+\varphi(c)=\varphi(d)+\varphi(e)+\varphi(g)$. Thus by
using the power series expansions and equations (9), (10) and (11) we have
that
$\begin{split}0&=\varphi(a)+\varphi(b)+\varphi(c)-\varphi(d)-\varphi(e)-\varphi(g)\\\
&=\varphi(0)+\varphi^{\prime}(0)a+\frac{\varphi^{\prime\prime}(0)}{2}a^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}a^{k}\\\
&+\varphi(0)+\varphi^{\prime}(0)b+\frac{\varphi^{\prime\prime}(0)}{2}b^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}b^{k}\\\
&+\varphi(0)+\varphi^{\prime}(0)c+\frac{\varphi^{\prime\prime}(0)}{2}c^{2}+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}c^{k}\\\
&-\varphi(0)-\varphi^{\prime}(0)d-\frac{\varphi^{\prime\prime}(0)}{2}d^{2}-\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}d^{k}\\\
&-\varphi(0)-\varphi^{\prime}(0)e-\frac{\varphi^{\prime\prime}(0)}{2}e^{2}-\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}e^{k}\\\
&-\varphi(0)-\varphi^{\prime}(0)g-\frac{\varphi^{\prime\prime}(0)}{2}g^{2}-\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}g^{k}\\\ &=\varphi^{\prime}(0)(a+b+c-d-
e-g)+\frac{\varphi^{\prime\prime}(0)}{2}(a^{2}+b^{2}+c^{2}-d^{2}-e^{2}-g^{2})\\\
&\qquad\qquad+\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}(a^{k}+b^{k}+c^{k}-d^{k}-e^{k}-g^{k})\\\
&=\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}(a^{k}+b^{k}+c^{k}-d^{k}-e^{k}-g^{k}).\end{split}$
That is,
(12) $\begin{split}\sum_{k\geq
3}\frac{\varphi^{(k)}(0)}{k!}(a^{k}+b^{k}+c^{k}-d^{k}-e^{k}-g^{k})&=0,\end{split}$
where $(a,b,c),(d,e,g)\in\mathbb{R}^{3}$ satisfy equations (9) and (10).
Consider $a=x,\ b=-x,\ c=x,\ g=0$. By solving the equations
$\begin{split}d+e&=x,\\\ d^{2}+e^{2}&=3x^{2},\end{split}$
we obtain $d=\frac{1+\sqrt{5}}{2}x,\ e=\frac{1-\sqrt{5}}{2}x$. Then
$a^{k}+b^{k}+c^{k}-d^{k}-e^{k}-g^{k}=2x^{k}+(-x)^{k}-\left(\frac{1+\sqrt{5}}{2}x\right)^{k}-\left(\frac{1-\sqrt{5}}{2}x\right)^{k}.$
When $k$ is even, the coefficient of $x^{k}$ satisfies
$-\left(3-(\frac{1+\sqrt{5}}{2})^{k}-(\frac{1-\sqrt{5}}{2})^{k}\right)\geq(\frac{3}{2})^{k}-3>0$
for $k\geq 3$. When $k$ is odd, it satisfies
$-\left(1-(\frac{1+\sqrt{5}}{2})^{k}+(\frac{\sqrt{5}-1}{2})^{k}\right)\geq(\frac{3}{2})^{k}-2>0$
for $k\geq 3$. Since these coefficients are nonzero, (12) forces
$\varphi^{(k)}(0)=0$ for all $k\geq 3$. Thus $\varphi(x)=Ax^{2}+Bx+C$, and
therefore $f(x)=e^{Ax^{2}+Bx+C}$. ∎
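The two coefficient bounds used in this proof can be checked numerically (writing $p=\frac{1+\sqrt{5}}{2}$, $q=\frac{1-\sqrt{5}}{2}$):

```python
# With p = (1+sqrt(5))/2 and q = (1-sqrt(5))/2, the coefficient multiplying
# varphi^(k)(0)/k! in (12) for the chosen triples is
#   c_k = 3 - p**k - q**k  (k even),   c_k = 1 - p**k - q**k  (k odd),
# and the text's lower bounds on -c_k hold for every k >= 3.
p, q = (1 + 5 ** 0.5) / 2, (1 - 5 ** 0.5) / 2
assert abs(p + q - 1) < 1e-12 and abs(p * p + q * q - 3) < 1e-12  # d, e eqs
for k in range(3, 40):
    c_k = (3 if k % 2 == 0 else 1) - p ** k - q ** k
    bound = (3 / 2) ** k - (3 if k % 2 == 0 else 2)
    assert -c_k >= bound > 0          # hence varphi^(k)(0) must vanish
```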
## 3\. Establishing the Exponential Decay in Fourier Space
Consider the integral
$\begin{split}&Q(f_{1},f_{2},f_{3},f_{4},f_{5},f_{6})=\\\
&\int_{\mathbb{R}^{6}}\overline{\widehat{f_{1}}}(\xi_{1})\overline{\widehat{f_{2}}}(\xi_{2})\overline{\widehat{f_{3}}}(\xi_{3})\widehat{f_{4}}(\xi_{4})\widehat{f_{5}}(\xi_{5})\widehat{f_{6}}(\xi_{6})\delta(\xi_{1}+\xi_{2}+\xi_{3}-\xi_{4}-\xi_{5}-\xi_{6})\times\\\
&\delta(\xi_{1}^{2}+\xi_{2}^{2}+\xi_{3}^{2}-\xi_{4}^{2}-\xi_{5}^{2}-\xi_{6}^{2})d\xi_{1}d\xi_{2}d\xi_{3}d\xi_{4}d\xi_{5}d\xi_{6}.\end{split}$
Notice that if $f$ is an extremal function to the Strichartz inequality (1),
then $f$ must satisfy the generalized Euler-Lagrange equation
(13) $\begin{split}\omega\langle g,f\rangle&=Q(g,f,f,f,f,f)\end{split}$
for all $g\in L^{2}$, where $\omega=\frac{1}{||f||^{2}_{2}}Q(f,f,f,f,f,f)>0$,
and $\delta(\xi)=\frac{1}{2\pi}\int_{\mathbb{R}}e^{i\xi x}dx$ in the
distribution sense.
Define
$\begin{split}\eta&:=(\eta_{1},\eta_{2},\eta_{3},\eta_{4},\eta_{5},\eta_{6})\in\mathbb{R}^{6},\\\
a(\eta)&:=\eta_{1}+\eta_{2}+\eta_{3}-\eta_{4}-\eta_{5}-\eta_{6},\\\
b(\eta)&:=\eta_{1}^{2}+\eta_{2}^{2}+\eta_{3}^{2}-\eta_{4}^{2}-\eta_{5}^{2}-\eta_{6}^{2}.\end{split}$
The functions $a(\eta)$ and $b(\eta)$ are useful since, when
$a(\eta)=b(\eta)=0$, the tuples
$(\eta_{1},\eta_{2},\eta_{3}),(\eta_{4},\eta_{5},\eta_{6})\in\mathbb{R}^{3}$
satisfy equations (9) and (10). For $\varepsilon\geq 0,\ \mu\geq 0,$ and
$\xi\in\mathbb{R}$ define
$\begin{split}F(\xi)&:=F_{\mu,\varepsilon}(\xi)=\frac{\mu\xi^{2}}{1+\varepsilon\xi^{2}}.\end{split}$
For $h_{i}\in L^{2}(\mathbb{R}),\ 1\leq i\leq 6$, define the weighted
multilinear integral $M_{F}$ as
$\begin{split}M_{F}(h_{1},h_{2},h_{3},h_{4},h_{5},h_{6})&=\int_{\mathbb{R}^{6}}e^{F(\eta_{1})-\sum_{k=2}^{6}F(\eta_{k})}\Pi_{k=1}^{6}|h_{k}(\eta_{k})|\delta(a(\eta))\delta(b(\eta))d\eta.\end{split}$
It is easy to see that
(14)
$\begin{split}M_{F}(h_{1},h_{2},h_{3},h_{4},h_{5},h_{6})&\leq\int_{\mathbb{R}^{6}}\Pi_{k=1}^{6}|h_{k}(\eta_{k})|\delta(a(\eta))\delta(b(\eta))d\eta.\end{split}$
Indeed, on the support of $\delta(a(\eta))\delta(b(\eta))$ we have
$a(\eta)=b(\eta)=0$, and hence
$\eta_{1}^{2}\leq\sum_{i=2}^{6}\eta_{i}^{2}.$
We also note that $F(\xi)$ is increasing in $|\xi|$, and $F(\xi)\geq 0$ for
all $\xi,\ \mu$, and $\varepsilon$. So equation (14) can be derived by
$\begin{split}\left|M_{F}(h_{1},h_{2},h_{3},h_{4},h_{5},h_{6})\right|&=\left|\int_{\mathbb{R}^{6}}e^{F(\eta_{1})-\sum_{k=2}^{6}F(\eta_{k})}\Pi_{k=1}^{6}|h_{k}(\eta_{k})|\delta(a(\eta))\delta(b(\eta))d\eta\right|\\\
&\leq\int_{\mathbb{R}^{6}}\left|e^{F(\eta_{1})-\sum_{k=2}^{6}F(\eta_{k})}\right|\Pi_{k=1}^{6}\left|h_{k}(\eta_{k})\right|\delta(a(\eta))\delta(b(\eta))d\eta\\\
&\leq\int_{\mathbb{R}^{6}}\Pi_{k=1}^{6}\left|h_{k}(\eta_{k})\right|\delta(a(\eta))\delta(b(\eta))d\eta.\end{split}$
We state the following key lemma, which is established by using the Hausdorff-
Young inequality. The analogous two-dimensional estimate, due to Bourgain [1],
is much harder.
###### Lemma 3.1.
(15)
$\begin{split}\left\|e^{it\Delta}h_{1}e^{it\Delta}h_{2}\right\|_{L^{3}_{t,x}}&\leq
CN^{-1/6}\|h_{1}\|_{L^{2}}\|h_{2}\|_{L^{2}},\end{split}$
where $h_{1}\in L^{2}$ has Fourier support in $\\{|\xi|\leq s\\}$ and
$h_{2}\in L^{2}$ has Fourier support in $\\{|\eta|\geq Ns\\}$, for $N\gg 1$
and $s\gg 1$.
Equation (15) has been established in [16]. We provide a proof for
completeness. Let $\widehat{f}$ be supported on $|\xi|\leq s$ and
$\widehat{g}$ be supported on $|\eta|\geq Ns$ where $N\gg 1$ and $s\gg 1$, and
note that
$\begin{split}e^{it\Delta}fe^{it\Delta}g&=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}}\int_{\mathbb{R}}e^{ix\xi+it\xi^{2}}\widehat{f}(\xi)e^{ix\eta+it\eta^{2}}\widehat{g}(\eta)d\eta
d\xi\\\
&=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}}\int_{\mathbb{R}}e^{ix(\xi+\eta)+it(\xi^{2}+\eta^{2})}\widehat{f}(\xi)\widehat{g}(\eta)d\eta
d\xi.\end{split}$
Consider the change of variables $\gamma=\xi+\eta$ and
$\tau=\xi^{2}+\eta^{2}$. Note that $2\tau-\gamma^{2}=(\xi-\eta)^{2}$, so the
corresponding Jacobian factor is $|J|=\frac{1}{\sqrt{2\tau-\gamma^{2}}}$. For
$|\xi|\leq s$ and $|\eta|\geq Ns$ with $N>1$: if $\eta>0$, then $\eta>\xi$,
and if $\eta<0$, then $\eta<\xi$. In either case $\xi\neq\eta$, so the
Jacobian is well defined.
Likewise, by considering $2^{k}Ns\leq|\eta|\leq 2^{k+1}Ns$, we see that
$|J|\lesssim(2^{k}Ns)^{-1}$. Let
$\begin{split}G(\gamma,\tau)&:=\widehat{f}\left(\frac{\gamma+\sqrt{2\tau-\gamma^{2}}}{2}\right)\widehat{g}\left(\frac{\gamma-\sqrt{2\tau-\gamma^{2}}}{2}\right)|J|.\end{split}$
Then we have
$\begin{split}e^{it\Delta}fe^{it\Delta}g&=\frac{1}{(2\pi)^{2}}\int
e^{ix\gamma+it\tau}G(\gamma,\tau)d(\gamma\times\tau)\\\
&=\frac{1}{(2\pi)^{2}}\widetilde{G}(x,t).\end{split}$
Thus by the Hausdorff-Young inequality and change of variables, we have
$\displaystyle\left\|e^{it\Delta}fe^{it\Delta}g\right\|_{L^{3}_{x,t}}$
$\displaystyle=\left\|\tilde{G}(x,t)\right\|_{L^{3}_{x,t}}\leq\left\|G(\gamma,\tau)\right\|_{L^{3/2}_{\gamma,\tau}}=\left(\int|G(\gamma,\tau)|^{3/2}d(\gamma\times\tau)\right)^{\frac{2}{3}}$
$\displaystyle=\left(\int\left|\widehat{f}\left(\frac{\gamma\pm\sqrt{2\tau-\gamma^{2}}}{2}\right)\right|^{\frac{3}{2}}\left|\widehat{g}\left(\frac{\gamma\mp\sqrt{2\tau-\gamma^{2}}}{2}\right)\right|^{\frac{3}{2}}\left|J\right|^{\frac{3}{2}}d(\gamma\times\tau)\right)^{\frac{2}{3}}$
$\displaystyle=\left(\int\left|\widehat{f}(\xi)\right|^{\frac{3}{2}}\left|\widehat{g}(\eta)\right|^{\frac{3}{2}}|J|^{\frac{3}{2}}|J|^{-1}d(\xi\times\eta)\right)^{\frac{2}{3}}.$
The above continues to equal
$\displaystyle\left(\sum_{k=0}^{\infty}\int_{|\xi|\leq s,2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}\left|\widehat{f}(\xi)\right|^{\frac{3}{2}}\left|\widehat{g}(\eta)\right|^{\frac{3}{2}}|J|^{\frac{1}{2}}d(\xi\times\eta)\right)^{\frac{2}{3}}$
$\displaystyle\leq\sum_{k=0}^{\infty}\left(\int_{|\xi|\leq
s,2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}\left|\widehat{f}(\xi)\right|^{\frac{3}{2}}\left|\widehat{g}(\eta)\right|^{\frac{3}{2}}|J|^{\frac{1}{2}}d(\xi\times\eta)\right)^{\frac{2}{3}}$
$\displaystyle\lesssim\sum_{k=0}^{\infty}\left(\int\left|\widehat{f}(\xi)\right|^{\frac{3}{2}}\left|\widehat{g}(\eta)\right|^{\frac{3}{2}}(2^{k}sN)^{-\frac{1}{2}}d(\xi\times\eta)\right)^{\frac{2}{3}}$
$\displaystyle=(sN)^{-\frac{1}{3}}\sum_{k=0}^{\infty}2^{-\frac{k}{3}}\left(\int_{|\xi|\leq
s,2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}\left|\widehat{f}(\xi)\right|^{\frac{3}{2}}\left|\widehat{g}(\eta)\right|^{\frac{3}{2}}d(\xi\times\eta)\right)^{\frac{2}{3}}$
$\displaystyle=(sN)^{-\frac{1}{3}}\left(\int_{|\xi|\leq
s}|\widehat{f}(\xi)|^{\frac{3}{2}}d\xi\right)^{\frac{2}{3}}\sum_{k=0}^{\infty}2^{-\frac{k}{3}}\left(\int_{2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}|\widehat{g}(\eta)|^{\frac{3}{2}}d\eta\right)^{\frac{2}{3}}.$
For $\left(\int_{|\xi|\leq s}|\widehat{f}(\xi)|^{3/2}\right)^{2/3}$, we wish
to show that $\int_{|\xi|\leq s}\left|\widehat{f}(\xi)\right|^{3/2}\lesssim
s^{1/4}\|f\|_{2}^{3/2}$. Consider Hölder’s inequality, for $p=4$ and
$q=\frac{4}{3}$. Then
$\begin{split}\int_{|\xi|\leq
s}\left|\widehat{f}(\xi)\right|^{3/2}&=\int_{|\xi|\leq
s}1\cdot\left|\widehat{f}(\xi)\right|^{3/2}\\\ &\leq\left(\int_{|\xi|\leq
s}1^{4}\right)^{1/4}\left(\int_{|\xi|\leq s}\left|\
\widehat{f}(\xi)\right|^{2}\right)^{3/4}\\\
&=(2s)^{1/4}\left(\left(\int_{|\xi|\leq
s}\left|\widehat{f}(\xi)\right|^{2}\right)^{1/2}\right)^{3/2}\\\
&\leq(2s)^{1/4}\|\widehat{f}\|_{2}^{3/2}=(2s)^{1/4}\|f\|_{2}^{3/2},\end{split}$
where the final step is a consequence of Plancherel’s theorem. Hence
$\left(\int_{|\xi|\leq
s}\left|\widehat{f}(\xi)\right|^{3/2}\right)^{2/3}\leq(2s)^{1/6}\|f\|_{2}.$
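The Hölder step above can be sanity-checked on a discretization (the grid and the stand-in samples for $|\widehat{f}|$ below are arbitrary assumptions of this check, not data from the paper):

```python
import random

# Discrete Hoelder on [-s, s] with exponents p = 4, q = 4/3:
#   (sum |g|^{3/2} dx)^{2/3} <= (2s)^{1/6} (sum |g|^2 dx)^{1/2}.
random.seed(2)
s, n = 4.0, 2000
dx = 2 * s / n
g = [random.uniform(0.0, 3.0) for _ in range(n)]   # stands in for |f-hat|
lhs = (sum(v ** 1.5 for v in g) * dx) ** (2 / 3)
rhs = (2 * s) ** (1 / 6) * (sum(v * v for v in g) * dx) ** 0.5
assert lhs <= rhs + 1e-9
```

Equality would require $|\widehat{f}|$ constant on $[-s,s]$; for the random samples above the inequality is strict with a comfortable margin.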
As for $\left(\int_{2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}|\widehat{g}(\eta)|^{3/2}\right)^{2/3}$, we use a similar technique
as we did for $\widehat{f}$. Specifically,
$\begin{split}\int_{2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}\left|\widehat{g}(\eta)\right|^{3/2}&\leq\left(\int_{2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}1^{4}\right)^{1/4}\left(\int_{2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}\left|\widehat{g}(\eta)\right|^{2}\right)^{3/4}\\\
&=\left(2^{k}Ns\right)^{1/4}\left(\left(\int_{2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}\left|\widehat{g}(\eta)\right|^{2}\right)^{1/2}\right)^{3/2}\\\
&\leq\left(2^{k}Ns\right)^{1/4}\|\widehat{g}\|_{2}^{3/2}\\\
&=\left(2^{k}Ns\right)^{1/4}\|g\|_{2}^{3/2}.\end{split}$
Hence $\left(\int_{2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}\left|\widehat{g}(\eta)\right|^{3/2}\right)^{2/3}\leq(2^{k}Ns)^{1/6}\|g\|_{2}$.
By pairing this with the above we have
$\begin{split}\left\|e^{it\Delta}fe^{it\Delta}g\right\|_{L^{3}_{x,t}}&\lesssim(sN)^{-\frac{1}{3}}\left(\int_{|\xi|\leq
s}|\widehat{f}(\xi)|^{\frac{3}{2}}d\xi\right)^{\frac{2}{3}}\\\
&\qquad\times\sum_{k=0}^{\infty}2^{-\frac{k}{3}}\left(\int_{2^{k}Ns\leq|\eta|\leq
2^{k+1}Ns}|\widehat{g}(\eta)|^{\frac{3}{2}}d\eta\right)^{\frac{2}{3}}\\\
&\leq(sN)^{-1/3}(2s)^{1/6}\|f\|_{2}\sum_{k=0}^{\infty}2^{-k/3}(2^{k}Ns)^{1/6}\|g\|_{2}\\\
&=2^{1/6}N^{-1/6}\|f\|_{2}\|g\|_{2}\sum_{k=0}^{\infty}2^{-k/6}\\\
&=CN^{-1/6}\|f\|_{2}\|g\|_{2}.\end{split}$
Pairing the estimate in Lemma 3.1 with Hölder’s inequality and the Strichartz
inequality (1), we get the following lemma.
###### Lemma 3.2.
Let $h_{k}\in L^{2}(\mathbb{R}),\ 1\leq k\leq 6$, and $s\gg 1,\ N\gg 1$.
Suppose that the Fourier transform of $h_{1}$ is supported on
$\\{\xi:|\xi|\leq s\\}$ and the Fourier transform of $h_{2}$ is supported on
$\\{|\xi|\geq Ns\\}$. Then
$\begin{split}M_{F}(h_{1},h_{2},h_{3},h_{4},h_{5},h_{6})&\leq
CN^{-1/6}\prod_{k=1}^{6}\|h_{k}\|_{2}.\end{split}$
Next we focus on establishing Theorem 1.1. If it can be shown that
$e^{\mu\xi^{2}}\widehat{f}\in L^{2}$ for some $\mu>0$, then
$e^{\lambda|\xi|^{2}}\widehat{f}\in L^{1}$ for some $0<\lambda<\mu$. Then by
the Fourier inversion equation we have that
$f(z)=\frac{1}{2\pi}\int_{\mathbb{R}}e^{iz\xi-\lambda|\xi|^{2}}e^{\lambda|\xi|^{2}}\widehat{f}(\xi)d\xi$.
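The $L^{1}$ claim follows from the Cauchy–Schwarz inequality; spelled out, with the same $\mu$ and $\lambda$ as above:

```latex
\begin{split}
\int_{\mathbb{R}} e^{\lambda|\xi|^{2}}\left|\widehat{f}(\xi)\right| d\xi
  &= \int_{\mathbb{R}} e^{(\lambda-\mu)|\xi|^{2}}\, e^{\mu|\xi|^{2}}\left|\widehat{f}(\xi)\right| d\xi\\
  &\leq \left\|e^{(\lambda-\mu)|\cdot|^{2}}\right\|_{2}\left\|e^{\mu|\cdot|^{2}}\widehat{f}\right\|_{2}<\infty,
\end{split}
```

since $\lambda<\mu$ makes the Gaussian factor $e^{(\lambda-\mu)|\xi|^{2}}$ square-integrable.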
Thus
$\begin{split}\partial_{\overline{z}}f(z)&=\partial_{\overline{z}}\left(\frac{1}{2\pi}\int_{\mathbb{R}}e^{iz\xi-\lambda|\xi|^{2}}e^{\lambda|\xi|^{2}}\widehat{f}(\xi)d\xi\right)\\\
&=\int_{\mathbb{R}}\partial_{\overline{z}}\left(e^{iz\xi-\lambda|\xi|^{2}}\right)e^{\lambda|\xi|^{2}}\widehat{f}(\xi)d\xi=0.\end{split}$
So $f$ extends to a complex analytic function on $\mathbb{C}$. To prove
Theorem 1.1, we establish the following.
###### Lemma 3.3.
Let $f$ solve the generalized Euler-Lagrange equation $(7)$ for $\omega$ as
defined just below equation $(7)$, $\|f\|_{2}=1$, and define
$\widehat{f}_{>}:=\widehat{f}1_{|\xi|\geq s^{2}}$ for $s>0$. Then there exists
some $s\gg 1$ such that for $\mu=s^{-4}$,
(16) $\begin{split}\omega\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}&\leq
o_{1}(1)\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}+C\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{2}+C\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|^{3}_{2}\\\
&\qquad+C\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{4}+C\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{5}+o_{2}(1),\end{split}$
where $\lim_{s\rightarrow\infty}o_{i}(1)=0$ uniformly for all $\varepsilon>0,\
i=1,2$, and the constant $C$ is independent of $\varepsilon$ and $s$.
###### Proof.
For this proof we follow the proof of Lemma 2.2 in [15]. Note that
$\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{2}=\langle
e^{F(\cdot)}\widehat{f}_{>},e^{F(\cdot)}\widehat{f}_{>}\rangle=\langle
e^{2F(\cdot)}\widehat{f}_{>},\widehat{f}\rangle=\langle
e^{2F(\cdot)}f_{>},f\rangle$. So by equation $(7)$ we have that
$\begin{split}\omega&\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{2}=Q(e^{2F(\cdot)}f_{>},f,f,f,f,f)\\\
&\quad=\int_{\mathbb{R}^{6}}e^{F(\xi_{1})-\sum_{k=2}^{6}F(\xi_{k})}h_{>}(\xi_{1})h(\xi_{2})h(\xi_{3})h(\xi_{4})h(\xi_{5})h(\xi_{6})\delta(a(\xi))\delta(b(\xi))d\xi,\end{split}$
where $\xi=(\xi_{1},\xi_{2},\xi_{3},\xi_{4},\xi_{5},\xi_{6})$, and
$h(\xi_{i}):=e^{F(\xi_{i})}\widehat{f}(\xi_{i})$ and
$h_{>}(\xi_{i}):=e^{F(\xi_{i})}\widehat{f}_{>}(\xi_{i})$ for $1\leq i\leq 6$.
Thus $\omega\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{2}\leq
M_{F}(h_{>},h,h,h,h,h)$. Define $h_{\ll}:=h1_{|\xi|<s}$,
$h_{\sim}:=h1_{s\leq|\xi|\leq s^{2}}$, and $h_{<}:=h1_{|\xi|<s^{2}}$, so that
$h_{<}=h_{\ll}+h_{\sim}$. Decomposing each of the last five arguments of
$M_{F}$ as $h=h_{<}+h_{>}$, we obtain
$\begin{split}M_{F}(h_{>},h,h,h,h,h)&=M_{F}(h_{>},h_{<},...,h_{<})+\sum_{j_{2},...,j_{6}}M_{F}(h_{>},h_{j_{2}},...,h_{j_{6}})\\\
&:=A+B,\end{split}$
where $j_{i}$ is either $<$ or $>$, and at least one of the subscripts is $>$.
We break up $A$ into
$M_{F}(h_{>},h_{\ll},h_{<},...,h_{<})+M_{F}(h_{>},h_{\sim},h_{<},...,h_{<}):=A_{1}+A_{2}$.
By Lemma 3.1 we know that $A_{1}\lesssim
s^{-1/6}\|h_{>}\|_{2}\|h_{\ll}\|_{2}\|h_{<}\|_{2}^{4}$. Note that
$\begin{split}\|h_{<}\|^{2}_{2}&=\int\left|e^{F(\xi)}\widehat{f}(\xi)1_{|\xi|<s^{2}}\right|^{2}=\int
e^{2\frac{\mu\xi^{2}}{1+\varepsilon\xi^{2}}}\left|\widehat{f}\right|^{2}1_{|\xi|<s^{2}}\\\
&\leq e^{2\mu s^{4}}\|f\|_{2}^{2}=e^{2\mu s^{4}}.\end{split}$
So $\|h_{<}\|_{2}\leq e^{\mu s^{4}}$. Likewise, $\|h_{\ll}\|_{2}\leq e^{\mu
s^{2}}$. As for $A_{2}$, if we similarly define $f_{\sim}$, we have
$\|h_{\sim}\|_{2}\leq e^{\mu s^{4}}\|f_{\sim}\|_{2}$. To see that
$\|f_{\sim}\|_{2}\rightarrow 0$ as $s\rightarrow\infty$, recall that
$\begin{split}\|f\|_{2}^{2}&=\sum_{k=0}^{\infty}\int_{x_{k}\leq|\xi|\leq
x_{k+1}}|\widehat{f}|^{2},\end{split}$
where $\\{x_{k}\\}$ is a strictly increasing sequence with $x_{0}=0$. Since
$\|f\|_{2}=1$, the series converges, so
$\lim_{k\rightarrow\infty}\int_{x_{k}\leq|\xi|\leq x_{k+1}}|\widehat{f}|^{2}=0$.
Thus $\|f_{\sim}\|_{2}\rightarrow 0$ as $s\rightarrow\infty$. So
$\begin{split}A&=A_{1}+A_{2}\\\ &\lesssim
s^{-1/6}\|h_{>}\|_{2}\|h_{\ll}\|_{2}\|h_{<}\|_{2}^{4}+\|h_{>}\|_{2}\|h_{\sim}\|_{2}\|h_{<}\|_{2}^{4}\\\
&\leq s^{-1/6}\|h_{>}\|_{2}e^{\mu s^{2}}e^{\mu s^{4}}+\|h_{>}\|_{2}e^{\mu
s^{4}}\|f_{\sim}\|_{2}e^{\mu s^{4}}\\\ &=e^{2\mu
s^{4}}\|h_{>}\|_{2}\left(s^{-1/6}e^{\mu s^{2}-\mu
s^{4}}+\|f_{\sim}\|_{2}\right)\\\
&=o_{1}(1)\|h_{>}\|_{2}=o_{1}(1)\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2},\end{split}$
where $o_{1}(1)=e^{2\mu s^{4}}\left(s^{-1/6}e^{\mu s^{2}-\mu
s^{4}}+\|f_{\sim}\|_{2}\right)$. Thus $o_{1}(1)\rightarrow 0$ as
$s\rightarrow\infty$.
As for $B$, for $1\leq k\leq 5$ let $B_{k}$ denote the sum of those terms
$M_{F}(h_{>},h_{j_{2}},...,h_{j_{6}})$ in which precisely $k$ of the
subscripts $j_{2},...,j_{6}$ equal $>$. For example,
$\begin{split}B_{1}&=M_{F}(h_{>},h_{>},h_{<},h_{<},h_{<},h_{<})+M_{F}(h_{>},h_{<},h_{>},h_{<},h_{<},h_{<})\\\
&+M_{F}(h_{>},h_{<},h_{<},h_{>},h_{<},h_{<})+M_{F}(h_{>},h_{<},h_{<},h_{<},h_{>},h_{<})\\\
&+M_{F}(h_{>},h_{<},h_{<},h_{<},h_{<},h_{>})\\\
&=CM_{F}(h_{>},h_{<},h_{>},h_{<},h_{<},h_{<})\\\
&=CM_{F}(h_{>},h_{\ll},h_{>},h_{<},h_{<},h_{<})+CM_{F}(h_{>},h_{\sim},h_{>},h_{<},h_{<},h_{<}).\end{split}$
By the same argument as for $A$, we see that
$\begin{split}B_{1}&\lesssim
s^{-1/6}\|h_{>}\|_{2}^{2}\|h_{\ll}\|_{2}\|h_{<}\|_{2}^{3}+\|h_{>}\|_{2}^{2}\|h_{\sim}\|_{2}\|h_{<}\|_{2}^{3}\\\
&=o_{2}(1)\|h_{>}\|_{2}^{2}.\end{split}$
Hence $B_{1}\lesssim
o_{2}(1)\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{2}$ where
$o_{2}(1)\rightarrow 0$ as $s\rightarrow\infty$. Following a similar process,
we find that
$B_{k}\lesssim\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{k+1}$. By
setting $\mu=s^{-4}$ we have $e^{4\mu s^{4}}=e^{4}$. Thus we have
(17)
$\begin{split}\omega\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{2}&\leq
o_{1}(1)\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}+o_{2}(1)\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{2}+C\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{3}+C\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{4}\\\
&\qquad+\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{5}+\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}^{6}.\end{split}$
Dividing both sides of inequality $(17)$ by
$\left\|e^{F(\cdot)}\widehat{f}_{>}\right\|_{2}$ we obtain the desired result.
∎
To see that $e^{\mu\xi^{2}}\widehat{f}\in L^{2}$ as required in Theorem 1.1,
we also follow a discussion in [15]. Define
$\begin{split}H(\varepsilon)&=\left(\int_{|\xi|\geq
s^{2}}\left|e^{F_{s^{-4},\varepsilon}(\xi)}\widehat{f}\right|^{2}d\xi\right)^{1/2}.\end{split}$
Note here that $s$ is fixed, but we have control over that term. By the
dominated convergence theorem, $H(\varepsilon)$ is continuous on
$(0,\infty)$, so its image $H((0,\infty))$ is connected. To see that $H$ is
bounded uniformly on $(0,\infty)$, consider the function
$G(x)=\frac{\omega}{2}x-Cx^{2}-Cx^{3}-Cx^{4}-Cx^{5}$ on $(0,\infty)$ (refer to
Lemma 3.3 and choose $s$ large enough that $o_{1}(1)\leq\frac{\omega}{2}$).
Since $G$ is a polynomial with negative leading coefficient, it is bounded
above. Let $M=\sup_{x\in[0,\infty)}G(x)$. Notice that $G$ is concave, so the
line $y=\frac{M}{2}$ intersects the graph of $G$ in at least two places; call
the first two $x_{0}$ and $x_{1}$; clearly $x_{0}>0$. By Lemma 3.3 (after
absorbing the $o_{1}(1)$ term), $G(H(\varepsilon))\leq\frac{M}{2}$ for $s$
sufficiently large, so the image of $H$ is contained in the set $\\{x\geq
0:G(x)\leq\frac{M}{2}\\}$, which splits into the two pieces $[0,x_{0}]$ and
$[x_{1},\infty)$. Since the image of $H$ is connected, it lies entirely in one
of these pieces. When $s$ is sufficiently large and $\varepsilon=1$,
$H(1)<x_{0}$, so the image of $H$ is contained in $[0,x_{0}]$. This implies
that $H(\varepsilon)$ is uniformly bounded on $(0,\infty)$. Thus by Fatou’s
lemma or the monotone convergence theorem, $e^{\mu\xi^{2}}\widehat{f}\in
L^{2}$ for $\mu=s^{-4}$. This finishes the proof of Theorem 1.1.
## References
* [1] J. Bourgain. Refinements of Strichartz’ inequality and applications to $2$D-NLS with critical nonlinearity. Internat. Math. Res. Notices (IMRN), (5): 253–283, 1998.
* [2] M. Charalambides. On restricting Cauchy-Pexider equations to submanifolds. Aequationes Math., 86: 231–253, 2013.
* [3] M. Christ and S. Shao. Existence of extremals for a Fourier restriction inequality. Analysis & PDE, 5(2): 261–312, 2012.
* [4] M. Christ and S. Shao. On the extremisers of an adjoint Fourier restriction inequality. Advances in Math., 230(2): 957–977, 2012.
* [5] B. Erdoğan, D. Hundertmark, and Y. R. Lee. Exponential decay of dispersion managed solitons for vanishing average dispersion. Math. Res. Lett., 18(1): 11–24, 2011.
* [6] L. Evans. Partial differential equations. Graduate Studies in Mathematics 19, American Mathematical Society, Providence, RI.
* [7] D. Foschi. Maximizers for the Strichartz inequality. J. Eur. Math. Soc. (JEMS), 9(4): 739–774, 2007.
* [8] D. Hundertmark and Y. R. Lee. Decay estimates and smoothness for solutions of the dispersion managed non-linear Schrödinger equation. Comm. Math. Phys., 286(3): 851–873, 2009.
* [9] D. Hundertmark and Y. R. Lee. On non-local variational problems with lack of compactness related to non-linear optics. J. Nonlinear Sci., 22(1): 1–38, 2012.
* [10] D. Hundertmark and S. Shao. Analyticity of extremals to the Airy-Strichartz inequality. Bull. London Math. Soc., 44(2): 336–352, 2012.
* [11] D. Hundertmark and V. Zharnitsky. On sharp Strichartz inequalities in low dimensions. Int. Math. Res. Not., Art. ID 34080, 18 pp., 2006.
* [12] M. Keel and T. Tao. Endpoint Strichartz estimates. Amer. J. Math., 120(5): 955–980, 1998.
* [13] M. Kunze. On the existence of a maximizer for the Strichartz inequality. Comm. Math. Phys., 243: 137–162, 2003.
* [14] S. Shao. Maximizers for the Strichartz and the Sobolev-Strichartz inequalities for the Schrödinger equation. Electron. J. Differential Equations, 3: 1–13, 2009.
* [15] J. Jiang and S. Shao. On characterization of the sharp Strichartz inequality for the Schrödinger equation. Analysis & PDE, 9(2): 353–361, 2016.
* [16] G. Brocchi, D. Oliveira e Silva, and R. Quilodrán. Sharp Strichartz inequalities for fractional and higher order Schrödinger equations. Analysis & PDE, 13(2): 477–526, 2020.
* [17] R. Strichartz. Restrictions of Fourier transforms to quadratic surfaces and decay of solutions of wave equations. Duke Math. J., 44(3): 705–714, 1977.
* [18] T. Tao. Nonlinear dispersive equations: local and global analysis. CBMS Regional Conference Series in Mathematics, Volume 106, 2006.
# High-resolution land cover change from low-resolution labels: Simple
baselines for the 2021 IEEE GRSS Data Fusion Contest
Kolya Malkin
Yale University
Caleb Robinson
Microsoft AI for Good
Nebojsa Jojic
Microsoft Research
[email protected]
###### Abstract
We present simple algorithms for land cover change detection in the 2021 IEEE
GRSS Data Fusion Contest [2]. The task of the contest is to create high-
resolution (1m / pixel) land cover change maps of a study area in Maryland,
USA, given multi-resolution imagery and label data. We study several baseline
models for this task and discuss directions for further research.
See https://dfc2021.blob.core.windows.net/competition-data/dfc2021_index.txt
for the data and https://github.com/calebrob6/dfc2021-msd-baseline for an
implementation of these baselines.
## 1 Introduction
We describe a possible road map towards estimating high-resolution (e.g., 1m)
land cover change, using high-resolution multitemporal input imagery and low-
resolution (e.g., 30m) noisy land cover labels for one or more time points.
The examples here use the data from the multitemporal semantic change
detection track of the 2021 IEEE Geoscience and Remote Sensing Society’s Data
Fusion Contest (DFC-MSD) [2]. The input imagery is from the National
Agriculture Imagery Program (NAIP) for the years 2013 and 2017, captured at 1m
resolution, and the available labels are from the 30m-resolution National Land
Cover Database (NLCD) for the years 2013 and 2016. The NLCD labels are derived
from Landsat imagery, and the contest also provides multiyear Landsat data for
the period between 2013 and 2017. All the data is limited to Maryland, USA.
The task exemplifies a situation commonly found worldwide: new imagery comes
in faster than high-quality, high-resolution labels can be created. However,
some older, noisy, low-resolution labels are often available, e.g., 30m
NLCD in the United States, or 500m MODIS land cover available worldwide. How
could one use machine learning to build models that predict high-resolution
change without a large set of high-resolution change examples? The hope
springs from the success of weakly supervised segmentation and label
super-resolution research, e.g. [8, 7], which demonstrated that it is possible to
train high-resolution label predictors for high-resolution input imagery using
regional supervision, where a large block of land (e.g. 30m$\times$30m) is
labeled with a single class, which often designates an area where land cover
follows particular mixing proportions. Weak supervision was the topic of the
2020 IEEE GRSS Data Fusion Contest [1].
Taking the next step to predict land cover _change_ could be as
straightforward as applying these techniques separately for different time
points and comparing the super-resolved labels to estimate change, and in this
paper we evaluate what such simple baselines could accomplish. We demonstrate
some simple lessons in terms of scope of training and model selection that
could aid in development of techniques specifically built for high-resolution
_change_ detection. Such techniques would presumably not use the multiyear
images in isolation, but analyze them jointly to take advantage of the fact
that most land cover _does not_ change from year to year.
The paper is organized as follows. We first describe the data in the DFC-MSD
task and illustrate the types of changes that are being scored in the contest.
Then we study training neural networks to approximate functions $y=f(x)$ where
$x$ are high-resolution (1m) image patches and $y$ are upsampled $30$m labels,
and discuss how the choices for model complexity and scope of training data
can affect the results. In particular, a highly expressive and well-trained
function $f$ could learn to predict blurry labels, because the targets $y$
come in $30\times 30$ blocks, while very simple color and texture models,
e.g., [7, 9], tend to super-resolve them at 1m, as they are unable to assign
the same label to $30\times 30$ blocks of pixels with varying color and
texture. Finally, we discuss possible approaches that could use all the data
in concert to build more accurate prediction models.
All data can be downloaded from the addresses listed at
https://dfc2021.blob.core.windows.net/competition-data/dfc2021_index.txt;
alternative fast download instructions can be found on the DFC-MSD webpage
[2]. Example code is available in the accompanying code repository at
https://github.com/calebrob6/dfc2021-msd-baseline.
## 2 The change detection task
### 2.1 Data
The input data in the DFC-MSD comprises 9 layers covering the study area of
the state of Maryland in the United States ($\sim$35,000 km$^{2}$). All layers are
upsampled from their native resolutions to 1m / pixel and provided as 2250
aligned _tiles_ of dimensions not exceeding $4000\times 4000$. The layers are
the following:
1. (1)
NAIP (2 layers): 1m-resolution 4-band (red, green, blue, and near-infrared)
aerial imagery from the US Department of Agriculture’s National Agriculture
Imagery Program (NAIP) from two points in time: 2013 and 2017.
2. (2)
Landsat (5 layers): 30m-resolution 9-band imagery from the Landsat-8 satellite
from five time points: 2013, 2014, 2015, 2016, and 2017. Each of these images
is a median-composite from all cloud and cloud-shadow masked surface-
reflectance scenes intersecting Maryland.
3. (3)
NLCD (2 layers): 30m-resolution coarse land cover labels from the US
Geological Survey’s National Land Cover Database [5], in 15 classes (see Table
1) from two time points: 2013 and 2016. These labels were created in a semi-
automatic way, with Landsat imagery as the principal input.
An example of the input layers is shown in Figure 1.
Figure 1: Four of the input data layers in the DFC-MSD. Above: NAIP imagery
from 2013 and 2017. Below: Coarse NLCD land cover labels from 2013 and 2016.
See Table 1 for the color scheme.
### 2.2 Task
The goal of the DFC-MSD is to classify pixels in the study area as belonging
to classes describing land cover change between the times when NAIP 2013 and
2017 aerial photographs were taken. Specifically, we wish to detect the loss
or gain of four land cover classes in a simplified scheme based on that of the
NLCD: water, tree canopy, low vegetation, and impervious surfaces, as well as
the absence of land cover change. An approximate correspondence between NLCD
classes and the target classes is shown in Table 1.
NLCD class | Target class | W% | TC% | LV% | I%
---|---|---|---|---|---
Open Water | water | 98 | 2 | 0 | 0
Developed, Open Space | – | 0 | 39 | 49 | 12
Developed, Low Intensity | – | 0 | 31 | 34 | 35
Developed, Medium Intensity | impervious | 1 | 13 | 22 | 64
Developed, High Intensity | impervious | 0 | 3 | 7 | 90
Barren Land (Rock/Sand/Clay) | – | 5 | 13 | 43 | 40
Deciduous Forest | tree canopy | 0 | 93 | 5 | 0
Evergreen Forest | tree canopy | 0 | 95 | 4 | 0
Mixed Forest | tree canopy | 0 | 92 | 7 | 0
Shrub/Scrub | tree canopy | 0 | 58 | 38 | 4
Grassland/Herbaceous | low vegetation | 1 | 23 | 54 | 22
Pasture/Hay | low vegetation | 0 | 12 | 83 | 3
Cultivated Crops | low vegetation | 0 | 5 | 92 | 1
Woody Wetlands | tree canopy | 0 | 94 | 5 | 0
Emergent Herbaceous Wetlands | tree canopy | 8 | 86 | 5 | 0
Table 1: The approximate correspondence between the NLCD classes and the four
target classes. The last four columns show _estimated_ frequencies of target
classes of pixels labeled with each NLCD class, computed from a high-
resolution land cover product, specific to Maryland, in a similar class scheme
[3]. The colors next to NLCD classes are those used in Figure 1. The two
colors next to the target classes are those used to show loss and gain
respectively in Figures 2 and 4.
We require that any pixel classified as a gain of one land cover class must be
simultaneously classified as a loss of another class. Thus, the task is
equivalent to creating two aligned land cover maps in the four target classes
for the two time points and computing their disagreement: pixels assigned
class $c_{1}$ in 2013 and $c_{2}$ in 2017 are considered loss of $c_{1}$ and
gain of $c_{2}$; pixels in which the maps agree are considered no change. See
Figure 2 for examples of the desired outputs. For example, in the second row,
a part of the forest is cut (loss of tree canopy). In its place we see some
gain in impervious surfaces (buildings and roads), as well as some cleared
fields, which are placed in the low vegetation category. In a few places, new
trees were planted, resulting in no change in land cover between the two time
points. In the first row, where forest is also cut, most of it is still
undeveloped and consists of cleared fields. The undisclosed evaluation data
consists of many small areas of interest such as the ones in the figure, as
well as a set of areas where there has been no change. Ambiguous areas are
blocked from evaluation. These include shadows and areas which humans have
difficulty recognizing.
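The bookkeeping described above (two aligned per-year maps, disagreements scored as loss/gain pairs) can be sketched in a few lines. This is an illustrative toy, not contest code; the `change_labels` helper and the list representation are ours:

```python
# Derive loss/gain change labels from two aligned per-year land cover maps,
# as described in the task definition. Maps are flat lists of class names.
CLASSES = ["water", "tree canopy", "low vegetation", "impervious"]

def change_labels(map_2013, map_2017):
    """For each pixel, report (loss_class, gain_class), or None for no change."""
    changes = []
    for c1, c2 in zip(map_2013, map_2017):
        if c1 == c2:
            changes.append(None)          # no change
        else:
            changes.append((c1, c2))      # loss of c1, gain of c2
    return changes

m13 = ["tree canopy", "tree canopy", "water"]
m17 = ["impervious", "tree canopy", "water"]
print(change_labels(m13, m17))  # -> [('tree canopy', 'impervious'), None, None]
```

Note that this encoding automatically enforces the constraint above: every gain of one class is paired with a loss of another.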
This problem is made difficult by the very different spectral characteristics
of the two NAIP layers (due to differences in sensors, seasons, and time of
day when they were created), as well as the lack of high-resolution label
data: the only available training labels are the NLCD layers, which are
coarse, created for different points in time than NAIP, and have a different
class scheme.
Figure 2: Examples of paired NAIP 2013 and NAIP 2017 imagery and the desired
predictions. See Table 1 for the color scheme. Pixels with no change are shown
in black.
### 2.3 Contest description
The DFC-MSD is organized in two phases: the _validation_ phase, in which
contestants have 100 attempts to submit their proposed land cover change maps
for a subset of the study area to an evaluation server and receive feedback on
their performance, and the _test_ phase, in which contestants have 10 attempts
to submit change maps for a different subset of the study area.
For the validation phase, a set of 50 out of the 2250 tiles was selected for
evaluation. Within these tiles, regions of interest were identified and
high-resolution change maps were created by the contest organizers.
interest contain pixels with land cover change, while others have only the no
change label. While participants submit predictions for all 50 tiles,
evaluation is performed only over the areas of interest; predictions for
pixels outside areas of interest are ignored. The class statistics for the
validation set are shown in Table 2.
The scoring metric is the average intersection over union (IoU) between the
predicted and ground truth labels for eight classes (loss and gain of each of
the four target classes, excluding no change). (Recall that the IoU for a
class $c$ is defined as
$\frac{\text{\\# pixels labeled $c$ in prediction \emph{and} in ground
truth}}{\text{\\# pixels labeled $c$ in prediction \emph{or} in ground
truth}},$ (1)
or, equivalently, $\frac{f}{2-f}$, where $f$ is the F1 score for class $c$.)
Notice that while IoU for the no change class is not computed, predictions of
change for pixels with no change still contribute to the denominator in (1)
and decrease the score. The evaluation server outputs the IoU for each class
upon submission.
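The definition in (1) and the stated equivalence with the F1 score are easy to check numerically; the helper names below are ours:

```python
# Per-class IoU as in equation (1), and the F1 score, for flat label lists.
def iou(pred, truth, c):
    inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
    union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
    return inter / union

def f1(pred, truth, c):
    tp = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
    fp = sum(1 for p, t in zip(pred, truth) if p == c and t != c)
    fn = sum(1 for p, t in zip(pred, truth) if p != c and t == c)
    return 2 * tp / (2 * tp + fp + fn)

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
i = iou(pred, truth, 1)
f = f1(pred, truth, 1)
assert abs(i - f / (2 - f)) < 1e-12   # IoU = f / (2 - f)
print(i)  # intersection 2, union 4 -> 0.5
```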
For the test phase, a different set of 50 tiles and regions of interest is
selected. The submission and evaluation formats remain the same.
Class | water | tree canopy | low vegetation | impervious
---|---|---|---|---
loss | 0.70% | 8.12% | 10.20% | 1.33%
gain | 0.86% | 1.05% | 7.55% | 10.90%
Table 2: Class statistics for the validation set. The total number of pixels
in regions of interest is 13,496,574, or about 1.7% of the total area of the
validation tiles; 59.29% are labeled as no change.
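As a rough consistency check on the figures in Table 2 (assuming the 50 validation tiles are close to the maximal $4000\times 4000$ size stated in Section 2.1):

```python
# 50 tiles of at most 4000 x 4000 pixels give an upper bound of 800M pixels;
# the 13,496,574 ROI pixels are then ~1.7% of the tile area, as quoted.
roi_pixels = 13_496_574
max_area = 50 * 4000 * 4000
print(round(100 * roi_pixels / max_area, 2))  # -> 1.69
```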
## 3 Methods
### 3.1 Baselines
We study four baseline algorithms, each of which takes as input the 2013 and
2017 NAIP layers and the 2013 and 2016 NLCD layers.
1. (1)
NLCD diff: We use solely the NLCD layers to assign each pixel a high-
resolution class according to the second column of Table 1; the Barren Land
and Developed, Low Intensity classes are assigned impervious and the
Developed, Open Space class is assigned low vegetation. This results in two
(coarse) land cover maps in the four target classes. The map derived from NLCD
2013 is used as the prediction for 2013; the map derived from NLCD 2016 is
used as the prediction for 2017. Land cover change is computed from these two
maps.
2. (2)
We train small fully convolutional neural networks (5 layers of 64-filter
$3\times 3$ convolutions with ReLU activation and a logistic regression layer;
151k parameters) to predict the NLCD labels from NAIP imagery. Two FCNs are
trained: one for NAIP 2013, targeting NLCD 2013, and one for NAIP 2017,
targeting NLCD 2016.
We found the following scheme to work best for creating target labels from the
(probabilistic) FCN predictions: the predicted probabilities (at each pixel)
of the classes Developed, Open Space, Developed, Low Intensity, and Barren
Land, which do not have a dominant high-resolution target class, are set to 0,
and the probabilities are renormalized. The remaining classes are then mapped
to the four target labels according to the second column of Table 1: the
output probability of each target label is the sum of the probabilities of all
NLCD classes mapping to it. The target class with highest probability is then
taken as the prediction. The land cover change is computed from the two
resulting prediction layers.
1. (a)
FCN / tile: Two separate FCNs are trained for each of the validation tiles.
Overfitting models with a small receptive field to the coarse NLCD labels and
evaluating them results in a super-resolution of these labels: the FCNs cannot
detect the 30m block structure in the NLCD labels they were trained to
predict, so colors and textures are mapped to the classes of the low-
resolution blocks in which they are most likely to appear.
2. (b)
FCN / all: The same as (a), but a single FCN is trained on the entire
dataset. This may give an advantage due to the much larger training data
offering more potential for generalization. However, it may also hurt
predictions if pixels with similar appearance tend to belong to different
classes in distant parts of the study area.
3. (3)
U-Net / all: The same as (2b), but with a model of the U-Net family [10],
specifically, a U-Net with a ResNet-18 [4] encoder structure where the first 3
blocks in the ResNet are used in the downsampling path, and convolutions with
128, 64, and 64 filters are used in the corresponding upsampling layers (1.2M
total parameters). We use an implementation from the PyTorch Segmentation
Models library (https://github.com/qubvel/segmentation_models.pytorch); see
the accompanying GitHub repo for details.
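As a sanity check on the FCN size quoted in (2), the parameter count can be reproduced by hand, assuming 4-band NAIP input and a $1\times 1$ classification layer over the 15 NLCD classes (our reading of the "logistic regression layer"):

```python
# Parameter count for the small FCN described above: five 3x3 convolutions
# with 64 filters each, then a 1x1 per-pixel classifier over 15 NLCD classes.
def conv_params(in_ch, out_ch, k):
    return in_ch * out_ch * k * k + out_ch  # weights + biases

total = conv_params(4, 64, 3)         # first conv: 4 NAIP bands -> 64
total += 4 * conv_params(64, 64, 3)   # four more 64-filter 3x3 convs
total += conv_params(64, 15, 1)       # 1x1 classifier over NLCD classes
print(total)  # -> 151055, i.e. the ~151k parameters quoted in the text
```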
The IoU scores for each of these models are shown in Table 3. Some examples of
their predictions appear in Figure 4.
Algorithm | $-$W | $-$TC | $-$LV | $-$I | $+$W | $+$TC | $+$LV | $+$I | avg.
---|---|---|---|---|---|---|---|---|---
NLCD diff | 0.148 | 0.167 | 0.282 | 0.014 | 0.031 | 0.001 | 0.106 | 0.362 | 0.139
FCN / tile | 0.641 | 0.436 | 0.407 | 0.091 | 0.255 | 0.073 | 0.302 | 0.528 | 0.342
FCN / all | 0.584 | 0.644 | 0.601 | 0.400 | 0.292 | 0.181 | 0.607 | 0.716 | 0.503
U-Net / all | 0.325 | 0.484 | 0.476 | 0.307 | 0.237 | 0.205 | 0.342 | 0.517 | 0.361
Table 3: Baseline IoUs. $-$C and $+$C denote loss and gain of class C,
respectively.
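The label-remapping scheme described in Section 3.1 (zero out the three ambiguous NLCD classes, renormalize, sum probabilities into the target classes, take the argmax) can be sketched as follows. The mapping is abridged from Table 1 and all helper names are ours:

```python
# Per-pixel remapping of (probabilistic) NLCD predictions to target classes.
AMBIGUOUS = {"Developed, Open Space", "Developed, Low Intensity",
             "Barren Land (Rock/Sand/Clay)"}
TARGET_OF = {
    "Open Water": "water",
    "Developed, Medium Intensity": "impervious",
    "Developed, High Intensity": "impervious",
    "Deciduous Forest": "tree canopy",
    "Pasture/Hay": "low vegetation",
    # ...remaining NLCD classes map to targets as in Table 1
}

def remap(probs):
    """probs: dict of NLCD class -> predicted probability at one pixel."""
    kept = {c: p for c, p in probs.items() if c not in AMBIGUOUS}
    z = sum(kept.values())                     # renormalization constant
    target = {}
    for c, p in kept.items():
        t = TARGET_OF[c]
        target[t] = target.get(t, 0.0) + p / z
    return max(target, key=target.get)         # most probable target class

pixel = {"Open Water": 0.1, "Developed, Open Space": 0.5,
         "Deciduous Forest": 0.3, "Pasture/Hay": 0.1}
print(remap(pixel))  # -> tree canopy (0.3/0.5 dominates after renormalizing)
```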
### 3.2 Discussion
Figure 3: Distribution of changes between 2013 and 2016 NLCD layers. Above:
Distribution of 2013 classes (left axis) and distribution of 2016 classes for
pixels of each 2013 class. Below: The same, restricted to the set of pixels
whose labels differ between 2013 and 2016.
From Table 3, we see that NLCD difference alone is a poor predictor of land
cover change. There may be several reasons for this. First, this algorithm
gives only coarse change predictions, in 30m blocks, that miss fine details;
many changes are in small, thin regions (such as new houses or roads). Second,
the 2013 and 2016 NLCD layers were created in a joint pipeline in which
certain kinds of change involving the Developed classes were explicitly
forbidden [5] (see Figure 3). Third, the NLCD difference simply misses some
changes (e.g., in the lower right corner of Figure 1), either because of
errors made by the algorithm that produced the NLCD labels, or because NLCD
and NAIP represent different time points.
Interestingly, fitting FCNs to predict the coarse NLCD labels is a strong
baseline and an efficient way to super-resolve the NLCD labels to 1m
resolution. In some cases, the FCN trained on a single tile simply ignores
changes in areas that have no change in NLCD: it has overfit to the local
correspondences of (incorrect) NLCD labels and textures that appear. For
example, in Figure 4(d), it predicts the construction area as tree canopy –
the label suggested by both 2013 and 2016 NLCD labels. However, in other
cases, the FCN trained on one tile correctly classifies areas that appear
ambiguous to other models: in Figure 4(e), the silty part of the body of water
is predicted correctly as water in both years, as suggested by the Open Water
label in both NLCD layers, and thus shown as no change. However, to the FCN
trained on all tiles, the local information that this shade of water is indeed
water is trumped by evidence from other tiles that this color and texture
tends to be low vegetation, resulting in an incorrect prediction.
Differences between class appearances across geographic regions motivated
prior work on alternative loss functions for weakly supervised segmentation
(label super-resolution) [8]; it has also been found that tile-by-tile
algorithms sometimes perform better than those trained on a large dataset [7]
– in segmentation problems with wide variation of appearances over space, it
is sometimes the case that “less is more”. We believe that methods that
combine global models with tile-by-tile analysis warrant further study.
We also attempted to train models targeting _high-resolution_ labels from the
Chesapeake Conservancy land cover dataset [3], which use a class scheme
slightly different from ours. While the use of these labels is not allowed in
the DFC-MSD, as they are not available outside of a small geographic region,
we found that they do not improve the predictions: surprisingly, models
trained on these high-resolution labels over the entire study area perform no
better than those trained on coarse NLCD labels.
The last two rows of Table 3 suggest that sometimes “less is more” when it
comes to neural architectures as well. The U-Net has about $80\times$ more
parameters than the FCN. Figure 5 shows the raw predictions of NLCD labels
from the two models, both trained on the entire dataset. The FCN, which has a
small ($5\times 5$) receptive field, is unable to learn that NLCD labels come
in $30\times 30$ blocks, while the U-Net has a large receptive field and
learns to make blurry predictions. (In fact, the winning weak supervision
entry in the 2020 Data Fusion Contest [9] used very simple texture/color
mixture models to super-resolve MODIS labels, followed by smoothing with a
small FCN.)
We also found that targeting the same year of labels for both 2013 and 2017
networks does not change the results much, as the models primarily learn their
textures from the vast area where there is no change anyway. In fact, we
should note that training separate models for different time points is a
starting point, but we expect that the best approaches will analyze imagery
jointly, since most pixels do not change their class from one year to the
next. Such situations are often exploited in computer vision, e.g., in
background subtraction or co-segmentation. The probabilistic index map model
[6], for example, learns to assign a distribution over labels to each pixel in
an aligned image stack assuming that within each image assignments are
consistent, but the palette of assigned colors (or textures) to each index can
freely change.
Figure 4: Examples of land cover change predictions by baseline models.
Figure 5: Outputs of an FCN and a U-Net trained on the entire dataset to
predict NLCD 2013 classes (bottom left) from NAIP 2013 imagery (top left).
## 4 Conclusion
In this paper, we have described simple algorithms for land cover change
detection and raised questions for future research to answer: How can machine
learning models optimally combine noisy, low-resolution labels for detection
and classification of change in high-resolution images? How should they strike
the balance between local analysis and models that work over a geographically
diverse study area? How can low-resolution (Landsat) imagery aid in change
detection? The 2021 IEEE GRSS Data Fusion Contest will help to answer these
questions, fostering innovation in weakly supervised segmentation and change
detection in this important application domain.
## References
* [1] 2020 IEEE GRSS Data Fusion Contest, 2020. http://www.grss-ieee.org/community/technical-committees/data-fusion/2020-ieee-grss-data-fusion-contest/.
* [2] 2021 IEEE GRSS Data Fusion Contest Track MSD, 2021. http://www.grss-ieee.org/community/technical-committees/data-fusion/2021-ieee-grss-data-fusion-contest-track-msd/.
* [3] Chesapeake Conservancy. Land Cover Data Project, 2017. https://chesapeakeconservancy.org/wp-content/uploads/2017/01/LandCover101Guide.pdf.
* [4] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. Computer Vision and Pattern Recognition (CVPR), 2016.
* [5] C. Homer, J. Dewitz, S. Jin, G. Xian, C. Costello, P. Danielson, L. Gass, M. Funk, J. Wickham, S. Stehman, R. Auch, and K. Riitters. Conterminous United States land cover change patterns 2001–2016 from the 2016 National Land Cover Database. ISPRS Journal of Photogrammetry and Remote Sensing, 162:184–199, 2020.
* [6] N. Jojic and Y. Caspi. Capturing image structure with probabilistic index maps. Computer Vision and Pattern Recognition (CVPR), 2004.
* [7] N. Malkin, A. Ortiz, and N. Jojic. Mining self-similarity: Label super-resolution with epitomic representations. European Conference on Computer Vision (ECCV), 2020.
* [8] N. Malkin, C. Robinson, L. Hou, R. Soobitsky, J. Czawlytko, D. Samaras, J. Saltz, L. Joppa, and N. Jojic. Label super-resolution networks. International Conference on Learning Representations (ICLR), 2019.
* [9] C. Robinson, N. Malkin, L. Hu, B. Dilkina, and N. Jojic. Weakly supervised semantic segmentation in the 2020 IEEE GRSS Data Fusion Contest. International Geoscience and Remote Sensing Symposium (IGARSS), 2020.
* [10] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015.
# Bus operators in competition: a directed location approach
Fernanda Herrera${}^{\dagger}$ and Sergio I. López
School of Global Policy and Strategy, University of California San Diego, CA 92093
Departamento de Matemáticas, Facultad de Ciencias, UNAM, C.P. 04510, Ciudad de México, México
[email protected] [email protected]
###### Abstract.
We present a directed variant of Salop’s (1979) model to analyze bus transport
dynamics. Players are operators competing in both cooperative and non-
cooperative games. As in most bus concession schemes in emerging countries,
utility is proportional to the total fare collection. Competition for
picking up passengers leads to well documented and dangerous driving practices
that cause road accidents, traffic congestion and pollution. We obtain
theoretical results that support the existence and implementation of such
practices, and give a qualitative description of how they come to occur. In
addition, our results allow us to compare the base transport system with a more
cooperative one.
###### Key words and phrases:
Transport, Bus, Location games, Nash equilibrium, Mixed Strategies
###### 1991 Mathematics Subject Classification:
C710, C720, C730, R410
Fernanda Herrera gratefully acknowledges financial support from the University
of California Institute for Mexico and the United States (UC MEXUS). Sergio I.
López was partially funded by Conacyt-SNI 215989 grant.
## 1\. Introduction
In this work, we model the competition of bus operators for passengers in a
public transport concession scheme. The models, which are directed variants of
the Salop model [18], itself a circuit adaptation of the classic Hotelling
model [13], are a characterization of Mexico City's transport system.
According to a 2017 survey, 74.1% of the trips made in Mexico City by public
transport are carried out on buses with concession contracts [15].
Much like in other Latin American cities, the contracts that lay out the
responsibilities, penalties, and service areas are rarely enforced by the
corresponding authorities; in these instances, the main driver determining
planning and operations tends to be the operator's profit margins [12, p. 9].
Leaving the task to companies, or even to the drivers themselves, has led to
what [10] refer to as curious old practices: driving habits adopted by bus
operators, whose salary is proportional to the fare collection, to maximize
the number of users boarding the unit. While these practices were observed and
recorded in the United Kingdom in the 1920s, they are very much present today,
particularly in cities with emerging economies and suboptimal concession
plans. The practices listed in [10] pertaining to driving are:
1. (1)
Hanging back or Crawling. Operators drive slowly to pick up as many people as
possible. The idea is that long waiting times increase the number of
passengers waiting at stops. A variant is to stop altogether until the bus is
fully loaded, or until the next bus catches up.
2. (2)
Racing. An operator deems that the number of passengers waiting at a stop is
not worth stopping for, and continues driving in the hope of collecting more
users ahead.
3. (3)
Overtaking, Tailing or Chasing. Attempting to pass the bus ahead in order to
cut in and pick up the passengers further along.
4. (4)
Turning. When an empty or nearly empty bus turns around before the end of the
route, and drives back in the opposite direction.
Many of these practices have negative consequences on the service provided to
users, and as a byproduct, on the perception of public transport. In the 2019
survey on victimization in public transport [5], carried out in Mexico City
and its metropolitan area, 50% of the interviewees deemed the quality of
concession transport to be bad, and 15% very bad. Moreover, 27% considered
that traveling in concession transport was somewhat dangerous, and 60% very
dangerous. In both of these dimensions, concession public transport did worse
than any other form of transport, including public and private types. The
matter is pressing enough that the current administration of Mexico City
stressed in its Strategic Mobility Plan of 2019 [19, p. 9]: "The business
model that governs this (transport) sector (…) produces competition in the
streets for users, which results in the pick-up and drop-off of passengers in
unauthorized places, increased congestion and a large number of traffic
incidents each year."
A solution to these problems may involve the deregulation of public transport
to increase competition between providers, and to create incentives for
providing a differentiated product, namely better service in the form of
shorter waiting times, and safer driving practices. As an example, Margaret
Thatcher introduced the Transport Act 1985 [1], which led to the
privatization of bus services, higher competition between companies, and a set
of norms to abide by, such as keeping vehicles in good condition, avoiding
dangerous driving, and establishing routes and publishing timetables. However
successful, this type of measure seems unlikely for Mexico and other
developing Latin American countries, both for legislative reasons and because
of corruption in the implementation. With this work, we therefore aim to shed
light on
the implications of a transport system where operators compete for passengers
without regulation.
To be specific, we model the situation where bus operators compete to maximize
their utility, which is proportional to the number of passengers boarding the
units. As a proxy for the number of passengers collected, we use the road
ahead up to the next bus. The strategies available to drivers are the driving
speeds. Time, like the strategies themselves, is continuous. For simplicity,
we do not allow drivers to change speed whenever they want; instead, we assume
that they maintain a chosen speed for a given time interval and may change it
in the next. While practical, the assumption also reflects the empirical observation
that bus drivers make strategic stops along the road, where they obtain
information on the game. More precisely, they pay agents that collect the
arrival times of previous buses to that particular stop, and even the identity
of the drivers themselves. This way, the operators realize whether they are
competing against known drivers, and more importantly, whether they changed
their speed. With this information, they make their decision for the next part
of the route. We obtain a simple interpretation of the results that is
consistent with the driving practices mentioned above.
To the best of our knowledge, our approach is novel, and it allows us to model
a variety of scenarios and obtain explicit descriptions of equilibria.
Furthermore, we are able to explore the time evolution of the adopted
strategies. All the results are expressed in terms of the behavior of the
operators. Given the tractability of our models, some natural theoretical
questions emerge.
Relevant literature on transport problems includes the modeling in [17] of the
optimal headway bus service from the point of view of a central dispatcher. In
the historical context of the Transport Act 1985 [1], several scientific
articles analyzed the effect of the privatization. Under the assumption of the
existence of an economic equilibrium in the competition system, [10] classify
the driving practices into two categories: those consistent with the
equilibrium, and those that are not. They analyzed the expected timetables in
the deregulated scenario. In [9] a comparative analysis of fare and timetable
allocation in competition, monopoly and net benefit maximization (both
restricted and unrestricted to a zero profit) models is presented. Building on
this, [16] introduces the consumer's perspective and obtains the
equilibria prices and number of services offered by transport companies. The
possibility of predatory behavior between two enterprises competing through
fares and service level is analyzed by [7], using data from the city of
Inverness. In [8] the authors study the optimal policies of competing
enterprises in terms of fares, and the bus service headway, in a unique bus
stop and destination scenario. They also introduce the concept of demand
coordination which can be implemented through timetables. Assuming a spatial
directed model with a single enterprise, [6] finds the timetable that
minimizes the costs associated with service delays. The work of [4] analyzes
flight time data and finds empirical evidence to support Hotelling models.
From a non-economic perspective, [3] models competing buses in a circuit
behaving like random particles with repulsion between them (meaning they could
not pass each other). A contemporary review on transport market models using
game theory is given by [2], and a general review of control problems which
arise in buses transport systems is presented in [14].
This paper is organized as follows. In Section 2, we present the general
model, and the single and two player games. Relevant definitions, notation and
interpretations are introduced. In Section 3 we present the solutions to the
games and include in Subsection 3.3 the evolution of the strategies adopted by
the operators. That is, we look at the long-run equilibria of the games. We
also introduce a natural extension of the two player games and present the
results in Subsection 3.4. Concluding remarks are in Section 4, and proofs are
in Appendix A.
## 2\. The model
The assumptions of the game are the following. There are $n\leq 2$ buses, each
driven by one of $n$ operators along a route. There is only one type of bus
and one type of driver, meaning that the buses have identical features, and
that the drivers are homogeneous in terms of skill and other relevant
characteristics.
The speed of a bus, denoted by $v$, is bounded throughout every time and place
of the road by:
(2.1) $0<v_{min}\leq v\leq v_{max},$
where the constants $v_{min}$ and $v_{max}$ are fixed, and determined by
exogenous factors like the condition of the bus, Federal and State laws and
regulations, the infrastructure of the road, etc.
Drivers can pick up passengers along any point on the route at any given time.
In other words, there are no designated bus stations, nor interval-based time
schedules in place. This scenario is an approximation to a route with a large
number of homogeneously distributed bus stops.
We allow for infinite bus capacity, so drivers can pick up any number of
passengers they come across. Alternatively, one can assume that passengers
alight from the bus almost right after boarding it, so the bus is virtually
empty and ready to pick up users at any given time. The important point to
note is that passengers who have boarded a bus will not hop on the next one,
either because they never alighted in the first place, or because, if they did
alight, they had already reached their final destination.
Bus users reach their pick up point at random times, so demand for transport
is proportional to the time elapsed between bus arrivals. Let $\lambda>0$
denote the mean number of passengers boarding a bus per unit of time, and let
$p\geq 0$ denote the fixed fare paid by each user. We assume that there is a
fixed driving cost $c\geq 0$ per unit of time. This cost summarizes fuel
consumption, maintenance, protection insurance for the bus and passengers,
etc.
The operators get a share of the total revenue, and consequently seek to
maximize it. Since they cannot control the number of passengers on the route,
the fare, or the driving costs, the only resource available to them is to set
the driving speed, which we assume remains constant throughout the time
interval $[0,T]$, with $T>0$. The strategy space of a bus driver is then
(2.2) $\Gamma=\\{v\geq 0:v_{min}\leq v\leq v_{max}\\},$
where $v_{min}$ and $v_{max}$ are given in (2.1). We define a mixed strategy,
$X$ or $Y$, to be a random variable taking values in the space $\Gamma$.
In what follows we define the expected utility of drivers given a set of
assumptions on the number of players and their starting positions, the fixed
variables of the models, and route characteristics. Relevant notation and
concepts are introduced when deemed necessary.
### 2.1. Single player games
We first consider a game with only one driver picking up passengers along the
road. Importantly, the fact that only one bus covers the route implies that
commuters have no option but to wait for its arrival; the player is aware of
this.
* •
Fixed-distance game
A single bus departs the origin of a route of length $D$. We adopt the
convention that the initial time is whenever the bus departs the origin. We
define the expected utility of driving at a given speed $v$ to be
(2.3) $u(v):=(p\lambda)T-cT,$
where $T=\frac{D}{v}$ is the time needed to travel the distance $D$ at speed
$v$.
Note that since there is no other bus picking up passengers, the expected
number of people waiting for the bus in a fixed interval of the road increases
proportionally with time. From this, one infers that the expected total number
of passengers taking the bus is proportional to the time it takes the bus to
reach its final destination. (This justifies the first summand in (2.3); the
conclusion and its implication can be expressed rigorously using a space-time
Poisson process, see for example [11], pp. 283-296.)
* •
Fixed-time game
Suppose now that the driver chooses a constant speed $v$ satisfying (2.1) in
order to drive for $T$ units of time. The bus then travels the distance
$D=Tv$, which clearly depends on $v$. We define the expected utility of
driving at a given speed $v$ to be
(2.4) $u(v):=(p\lambda)D-cT.$
The underlying assumption is that for sufficiently small $T$, there are
virtually no new arrivals of commuters to the route, so effectively, the
number of people queuing for the bus remains the same as that of the previous
instant. The requirement is that $T$ is small compared to the expected
interarrival times of commuters.
It follows that the total amount of money collected by the driver is
proportional to the total distance traveled by the bus.
### 2.2. Two-player games
There are two buses picking up passengers along a route, which we assume is a
one-way traffic circuit. An advantageous feature of circuits is that buses
that return from any point on the route to the initial stop may remain in
service; this is generally not the case in other types of routes. In
particular, we assume that the circuit is a one-dimensional torus of length
$D$. For illustration purposes and without loss of generality, from now on we
require the direction of traffic to be clockwise.
We define the $D$-module of any real number $r$ as
$(r)_{mod\,D}:=r-D\Big{\lfloor}\frac{r}{D}\Big{\rfloor},$
where $\lfloor z\rfloor$ is the greatest integer less than or equal to $z$.
The interpretation of $(r)_{mod\,D}$ is the following: if starting from the
origin, a bus travels the total distance $r$, then $(r)_{mod\,D}$ denotes its
relative position on the torus. Indeed, $r$ may be such that the bus loops
around the circuit many times, nonetheless $(r)_{mod\,D}$ is in $[0,D)$ for
all $r$. We refer to $r$ as the absolute position of the bus, and to
$(r)_{mod\,D}$ as the relative position (with respect to the torus). Note that
the origin and the end of the route share the same relative position, since
$(0)_{mod\,D}=0=(D)_{mod\,D}$.
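The $D$-module above is ordinary modular reduction of an absolute position into $[0,D)$. A minimal sketch in Python (the circuit length and travelled distances are illustrative values, not taken from the paper):

```python
import math

def rel_pos(r, D):
    """D-module of r: the relative position r - D*floor(r/D), always in [0, D)."""
    return r - D * math.floor(r / D)

D = 10.0
# A bus that has driven 23.5 units on a circuit of length 10 sits at 3.5,
# having looped around the circuit twice.
print(rel_pos(23.5, D))                  # 3.5
# The origin and the end of the route share the same relative position.
print(rel_pos(0.0, D), rel_pos(D, D))    # 0.0 0.0
```

Since `math.floor` rounds toward minus infinity, the same expression also handles negative absolute positions correctly.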
Let ${\mathbf{x}}$ and ${\mathbf{y}}$ denote the two players of the game, and
let $x,y$ be their respective relative positions. The directed distance
function $d_{{\mathbf{x}}}$ is given by
(2.7) $\displaystyle
d_{{\mathbf{x}}}(x,y):=\left\\{\begin{array}[]{ll}y-x&\textrm{ if }x\leq y,\\\
D+y-x&\textrm{ if }x>y.\end{array}\right.$
Equation (2.7) has a key geometrical interpretation: it gives the distance
from $x$ to $y$ considering that traffic is one-way. The interest of this is
that the potential amount of commuters ${\mathbf{x}}$ picks up is proportional
to the distance between $x$ and $y$, namely $d_{{\mathbf{x}}}(x,y)$. See
Figure 1.
A straightforward observation is that for any real number $r$, we have
(2.8) $d_{{\mathbf{x}}}((x+r)_{mod\,D},(y+r)_{mod\,D})=d_{{\mathbf{x}}}(x,y).$
This asserts that if we shift the relative position of the two players by $r$
units (either clockwise or counterclockwise, depending on the sign of $r$),
then the directed distance $d_{{\mathbf{x}}}$ is unchanged.
One can define the directed distance $d_{{\mathbf{y}}}$ analogously,
$d_{{\mathbf{y}}}(x,y):=\left\\{\begin{array}[]{ll}x-y&\textrm{ if }y\leq
x,\\\ D+x-y&\textrm{ if }y>x.\end{array}\right.$
By definition, there is an intrinsic symmetry between $d_{{\mathbf{x}}}$ and
$d_{{\mathbf{y}}}$: we have $d_{{\mathbf{x}}}(x,y)=d_{{\mathbf{y}}}(y,x)$ and
$d_{{\mathbf{y}}}(x,y)=d_{{\mathbf{x}}}(y,x)$. Roughly speaking, this means
that if we were to swap all the labels, namely ${\mathbf{x}}$ to
${\mathbf{y}}$ and $x$ to $y$ (importantly, this switches the relative
positions of the players), then it suffices to plug the new labels into the
previous definitions to obtain the directed distances.
Another immediate observation is that for any pair of different positions
$(x,y)$, the sum of the two directed distances gives the total length of the
circuit,
(2.9) $d_{{\mathbf{x}}}(x,y)+d_{{\mathbf{y}}}(x,y)=D.$
This is portrayed in Figure 1.
Figure 1. Directed distances
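The two directed distances and the identities (2.8) and (2.9) can be checked directly; a short Python sketch with an illustrative circuit length:

```python
def d_x(x, y, D):
    """Clockwise distance player x must cover to reach y, as in (2.7)."""
    return y - x if x <= y else D + y - x

def d_y(x, y, D):
    """Clockwise distance player y must cover to reach x."""
    return x - y if y <= x else D + x - y

D = 10.0
x, y = 7.0, 2.0
print(d_x(x, y, D), d_y(x, y, D))        # 5.0 5.0
# (2.9): for distinct positions, the directed distances partition the circuit.
assert d_x(x, y, D) + d_y(x, y, D) == D
# (2.8): shifting both players by the same amount leaves d_x unchanged.
assert d_x((x + 3.0) % D, (y + 3.0) % D, D) == d_x(x, y, D)
# Intrinsic symmetry: d_x(x, y) == d_y(y, x).
assert d_x(x, y, D) == d_y(y, x, D)
```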
Let us assume that players ${\mathbf{x}}$ and ${\mathbf{y}}$ have starting
positions $x_{0}$ and $y_{0}$ in $[0,D)$. The initial minimal distance is
defined to be
(2.10)
$d_{0}:=\min\\{d_{{\mathbf{x}}}(x_{0},y_{0}),d_{{\mathbf{y}}}(x_{0},y_{0})\\}.$
Now suppose that starting from $x_{0}$ and $y_{0}$, the operators drive at the
respective speeds $v_{{\mathbf{x}}}$ and $v_{{\mathbf{y}}}$, with
$v_{{\mathbf{x}}},v_{{\mathbf{y}}}$ in $\Gamma$, for $T$ units of time. Their
final relative positions are then
$x_{T}=(x_{0}+Tv_{{\mathbf{x}}})_{mod\,D}\qquad\text{and}\qquad
y_{T}=(y_{0}+Tv_{{\mathbf{y}}})_{mod\,D}.$
We bound the maximum displacement of the buses by requiring $Tv_{max}$, with
$v_{max}$ given in (2.1), to be small compared to $D$. The reason for this is
to be consistent with our assumption of constant speed strategies, since they
are short-term. More precisely, we require
(2.11) $Tv_{max}<\frac{D}{2}.$
Lastly, we define the escape distance by
(2.12) $d:=T(v_{max}-v_{min}).$
This gives a threshold such that if the distance between the players is
shorter than $d$, then the buses can catch up to each other, given the
appropriate pair of speeds. If the distance is greater than $d$, this cannot
occur.
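The role of the escape distance can be illustrated with a quick numeric check, using hypothetical $v_{min}$, $v_{max}$, $T$ and $D$ chosen to satisfy (2.11):

```python
# Hypothetical parameters satisfying (2.11): T*v_max < D/2.
v_min, v_max, T, D = 20.0, 60.0, 0.05, 10.0
d = T * (v_max - v_min)        # escape distance (2.12): here 2.0
assert T * v_max < D / 2

def gap_after(gap0, v_front, v_rear, T):
    """Gap from the rear bus to the one ahead after both drive for T units."""
    return gap0 + T * (v_front - v_rear)

# A gap shorter than d can be closed (rear bus at v_max, front bus at v_min)...
assert gap_after(1.5, v_min, v_max, T) <= 0
# ...but a gap longer than d cannot, whatever feasible speeds are chosen.
assert gap_after(2.5, v_min, v_max, T) > 0
```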
We now proceed to define the expected utility of players given the type of
game being played, namely, whether it is cooperative or non-cooperative.
* •
Non-cooperative game
We define the utility of ${\mathbf{x}}$ given the initial positions of players
$x_{0}$ and $y_{0}$, and the strategies $v_{{\mathbf{x}}}$ and
$v_{{\mathbf{y}}}$, to be
(2.13)
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}}):=\left\\{\begin{array}[]{cc}p\lambda\,d_{{\mathbf{x}}}(x_{T},y_{T})-cT&\textrm{
if }x_{T}\neq y_{T},\\\ p\lambda\,\frac{D}{2}-cT&\textrm{ if
}x_{T}=y_{T}.\end{array}\right.$
The definition above includes two summands: the first one gives the (gross)
expected income of ${\mathbf{x}}$, since the factor $p\lambda$ is the expected
income per unit of distance. The second term gives the total driving cost.
It is worth pointing out that for simplicity, we have assumed that the
expected income depends only on the relative final positions $x_{T}$ and
$y_{T}$. A more precise account would consider the entire trajectory of the
buses. Nevertheless, even if this could be described with mathematical
precision, the model would grow greatly in complexity without adding to its
economic interpretation.
Similarly, we define
(2.14)
$u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}}):=\left\\{\begin{array}[]{ll}p\lambda\,d_{{\mathbf{y}}}(x_{T},y_{T})-cT&\textrm{
if }x_{T}\neq y_{T},\\\ p\lambda\,\frac{D}{2}-cT&\textrm{ if
}x_{T}=y_{T}.\end{array}\right.$
By equation (2.9) and the definition of the utility functions (2.13), (2.14),
the sum $u_{{\mathbf{x}}}+u_{{\mathbf{y}}}$ is a constant that does not depend
on the driving speeds nor on the initial positions. For this reason, we
analyze the game as a zero-sum game.
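The utilities (2.13)-(2.14) and the constant-sum property can be sketched in Python (all parameter values are hypothetical):

```python
def d_x(x, y, D):
    return y - x if x <= y else D + y - x   # directed distance (2.7)

def u_pair(x0, vx, y0, vy, p, lam, c, T, D):
    """Stage utilities of players x and y, following (2.13) and (2.14)."""
    xT, yT = (x0 + T * vx) % D, (y0 + T * vy) % D
    if xT == yT:
        half = p * lam * D / 2 - c * T
        return half, half
    ux = p * lam * d_x(xT, yT, D) - c * T
    uy = p * lam * d_x(yT, xT, D) - c * T   # d_y(x, y) = d_x(y, x)
    return ux, uy

# Whatever speeds are played, u_x + u_y = p*lam*D - 2*c*T, a constant:
p, lam, c, T, D = 5.0, 2.0, 30.0, 0.05, 10.0
for vx, vy in [(20.0, 60.0), (60.0, 20.0), (40.0, 40.0)]:
    ux, uy = u_pair(1.0, vx, 3.0, vy, p, lam, c, T, D)
    assert abs((ux + uy) - (p * lam * D - 2 * c * T)) < 1e-9
```

The check confirms why the game is strategically equivalent to a zero-sum game: only the split of the fixed total is at stake.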
* •
Cooperative game
Players aim to maximize the collective payoff, and this amounts to solving the
global optimization of the sum $u_{{\mathbf{x}}}+u_{{\mathbf{y}}}$ of the
utility functions of the non-cooperative game, (2.13) and (2.14).
Since the non-cooperative game is a zero-sum game, we introduce an extra term
in the utility, which gives the discomfort players derive from payoff
inequality. This assumption can be imagined in a situation where equity in
payments is desirable, especially since players have complete information.
We define the utility function to be
(2.15)
$u(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}}):=u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})+u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})-k|u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})-u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})|,$
where $k$ is a non-negative constant, and all the other elements are the same
as in the non-cooperative game.
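The cooperative objective (2.15) can be sketched the same way; with the inequality penalty $k>0$, spreading the buses evenly over the circuit removes the penalty entirely (hypothetical values again):

```python
def d_x(x, y, D):
    return y - x if x <= y else D + y - x   # directed distance (2.7)

def u_coop(x0, vx, y0, vy, p, lam, c, T, D, k):
    """Cooperative utility (2.15): joint payoff minus k times the payoff gap."""
    xT, yT = (x0 + T * vx) % D, (y0 + T * vy) % D
    if xT == yT:
        ux = uy = p * lam * D / 2 - c * T
    else:
        ux = p * lam * d_x(xT, yT, D) - c * T
        uy = p * lam * d_x(yT, xT, D) - c * T
    return ux + uy - k * abs(ux - uy)

# Diametrically opposite final positions split the circuit evenly, so the
# inequality penalty vanishes and only the constant joint payoff remains.
p, lam, c, T, D, k = 5.0, 2.0, 30.0, 0.05, 10.0, 1.0
balanced = u_coop(0.0, 40.0, D / 2, 40.0, p, lam, c, T, D, k)
skewed = u_coop(0.0, 40.0, 1.0, 40.0, p, lam, c, T, D, k)
assert balanced > skewed
```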
#### 2.2.1. Mixed strategies and $\varepsilon$-equilibria
For the solution of two player games, it is convenient to define the expected
utility of randomizing over the set of strategies. We also introduce the
definition of $\varepsilon$-equilibrium.
Suppose that players ${\mathbf{x}}$ and ${\mathbf{y}}$ use the mixed
strategies $X$ and $Y$ (recall that a mixed strategy is a random variable
taking values in the set $\Gamma=\\{v\geq 0:v_{min}\leq v\leq v_{max}\\}$). We
define the utility of player ${\mathbf{x}}$ to be
$U_{{\mathbf{x}}}(x_{0},X,y_{0},Y):=\mathbb{E}[u_{{\mathbf{x}}}(x_{0},X,y_{0},Y)].$
An analogous definition can be derived for player ${\mathbf{y}}$.
Let $\varepsilon>0$. We say that a pair of pure strategies
$(v^{*}_{{\mathbf{x}}},v^{*}_{{\mathbf{y}}})$ is an $\varepsilon$-equilibrium
if for every $v_{{\mathbf{x}}}$ and $v_{{\mathbf{y}}}$ we have
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v^{*}_{{\mathbf{y}}})\leq
u_{{\mathbf{x}}}(x_{0},v^{*}_{{\mathbf{x}}},y_{0},v^{*}_{{\mathbf{y}}})+\varepsilon,$
and
$u_{{\mathbf{y}}}(x_{0},v^{*}_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})\leq
u_{{\mathbf{y}}}(x_{0},v^{*}_{{\mathbf{x}}},y_{0},v^{*}_{{\mathbf{y}}})+\varepsilon.$
This means that any unilateral deviation from the equilibrium strategy leads
to a gain of no more than $\varepsilon$; this is why an
$\varepsilon$-equilibrium is also called near-Nash equilibrium. Note that in
particular, an $\varepsilon$-equilibrium with $\varepsilon=0$ gives the
standard definition of Nash equilibrium. However, an $\varepsilon$-equilibrium
for arbitrarily small $\varepsilon$ need not be a Nash equilibrium, especially
if the utility function is discontinuous, as in our case.
A mixed strategies $\varepsilon$-equilibrium $(X,Y)$ is similarly defined by
replacing the utility functions with the expected utility functions in the
last definition.
## 3\. Results
In what follows, we analyze the speeds that drivers choose, both in the short
and the long run. The short-term results are crucial to the analysis, as
implementing the optimal short-term strategies over a long period of time
gives the long-term solution to the games.
### 3.1. Single player games
The single player games have pure strategy Nash equilibria. Although the
results are immediate, we include them in the analysis for completeness and
ease of interpretation.
###### Proposition 1.
Let $v^{*}$ in $\Gamma$ be the driving speed that maximizes the utility of the
driver. We provide an explicit description of $v^{*}$.
1. a)
Fixed-distance game. Given the utility function defined in (2.3), we have
$\displaystyle v^{*}=\begin{cases}v_{min}&\text{if $p\lambda>c$},\\\
v_{min}\leq v\leq v_{max}&\text{if $p\lambda=c$},\\\ v_{max}&\text{if
$p\lambda<c$}.\end{cases}$
2. b)
Fixed-time game. Given the utility function defined in (2.4), we have
$v^{*}=v_{max}$.
###### Proof.
Note that in the fixed-distance game, $p\lambda-c$ gives the driver’s expected
net income per unit of time. If this amount is positive, then the player
maximizes her utility by driving for the longest time, or equivalently, by
driving at the lowest possible speed. Conversely, a negative expected net
income leads to driving at the highest speed. Lastly, a null expected income
makes the driver indifferent between any given speed in the range.
In the fixed-time game, the total revenue is proportional to the traveled
distance, so the driver maximizes her utility by driving at the highest speed.
∎
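Proposition 1 can be confirmed by a grid search over the feasible speeds; a sketch with hypothetical parameter values:

```python
def u_fixed_distance(v, p, lam, c, D):
    T = D / v
    return p * lam * T - c * T        # utility (2.3)

def u_fixed_time(v, p, lam, c, T):
    return p * lam * T * v - c * T    # utility (2.4), with D = T*v

def argmax(u, grid):
    return max(grid, key=u)

v_min, v_max = 20.0, 60.0
grid = [v_min + (v_max - v_min) * i / 200 for i in range(201)]

# Fixed distance: crawl when p*lam > c, speed up when p*lam < c.
assert argmax(lambda v: u_fixed_distance(v, 5.0, 2.0, 3.0, 10.0), grid) == v_min
assert argmax(lambda v: u_fixed_distance(v, 1.0, 2.0, 3.0, 10.0), grid) == v_max
# Fixed time: the fastest feasible speed always wins.
assert argmax(lambda v: u_fixed_time(v, 1.0, 2.0, 3.0, 0.05), grid) == v_max
```

The knife-edge case $p\lambda=c$, where every speed is optimal, is omitted from the asserts since the grid search has no unique maximizer there.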
### 3.2. Two-player games
The strategies adopted by the players strongly depend on the initial minimal
distance defined in (2.10). We cover all cases.
###### Theorem 1.
Non-cooperative game. Without loss of generality we can assume
$d_{0}=d_{{\mathbf{x}}}(x_{0},y_{0})$.
1. a)
If $d_{0}=0$, that is, if the initial positions of the players are the same,
then the pair of strategies $(v_{max},v_{max})$ is the only Nash equilibrium.
2. b)
If $0<d_{0}<d<d_{{\mathbf{y}}}(x_{0},y_{0})$, with $d$ the escape distance in
(2.12), then for sufficiently small $\varepsilon$, the mixed strategy
$\varepsilon$-equilibrium $(X,Y)$ is
$\displaystyle X=\begin{cases}v_{min}&\text{with probability
}1-\frac{d-d_{0}}{D},\\\ U&\text{with probability
}\frac{d-d_{0}}{D}\end{cases}$ and $\displaystyle
Y=\begin{cases}v_{min}&\text{with probability }q_{1},\\\ V&\text{with
probability }q_{2},\\\
v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}&\text{with probability
}1-\frac{d}{D},\end{cases}$
where $U$ is a uniform random variable on
$\Big{(}v_{min}+\frac{d_{0}}{T},v_{max}\Big{)}$, $q_{1}$ and $q_{2}$ are non-
negative numbers such that $q_{1}+q_{2}=\frac{d}{D}$ and
$q_{2}\leq\frac{d-d_{0}}{D}$, and $V$ is a uniform random variable on
$\Big{(}v_{max}-\frac{d_{0}}{T}-q_{2}\frac{D}{T},\,v_{max}-\frac{d_{0}}{T}\Big{)}$.
In other words, $X$ has an atom at $v_{min}$, $Y$ has two atoms, at
$v_{min}$ and $v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}$, and both are
otherwise uniformly distributed over their respective intervals.
3. c)
If $0<d=d_{0}<d_{{\mathbf{y}}}(x_{0},y_{0})$, then for sufficiently small
$\varepsilon$, the mixed strategy $\varepsilon$-equilibrium is
$\displaystyle X=\begin{cases}v_{min}&\text{with probability
}1-\frac{2\varepsilon}{D},\\\ v_{max}&\text{with probability
}\frac{2\varepsilon}{D}\end{cases}$ and $\displaystyle
Y=\begin{cases}v_{min}&\text{with probability }\frac{2d}{D},\\\
v_{min}+\frac{\varepsilon}{T}&\text{with probability
}1-\frac{2d}{D}.\end{cases}$
4. d)
If $d<d_{0}$, then the pair of strategies $(v_{min},v_{min})$ is the unique
Nash equilibrium.
###### Proof.
The proof is in Appendix A. ∎
By assumption (2.11), this result covers all the possible initial positions
$(x_{0},y_{0})$, so we have a complete and explicit characterization of the
equilibria. Simply put, the theorem asserts that if the players have the same
starting point, they drive at the maximum speed. If their positions differ by
at most the escape distance, then they play mixed strategies. Lastly, if the
distance between them is greater than the escape one, they drive at the
minimum speed. See Figure 2 for an illustration of the result and its cases.
Figure 2. On the rightmost side of each graph are the final positions of
players, blue for ${\mathbf{x}}$ and red for ${\mathbf{y}}$, after driving at
the optimal speed for $T$ units of time. Points represent probability mass
atoms, while continuous bars give the intervals in which the locations may be.
###### Theorem 2.
Cooperative game. Without loss of generality we assume that
$d_{0}=d_{{\mathbf{x}}}(x_{0},y_{0})$.
1. a)
If $d_{0}=0$, then the optimal pairs of driving speeds are $(v_{min},v_{max})$
and $(v_{max},v_{min})$.
2. b)
If $0<d_{0}$ and $d_{0}+d<\frac{D}{2}$, then the only optimal strategies are
$(v_{min},v_{max})$.
3. c)
If $d_{0}+d>\frac{D}{2}$, then any pair $(v_{{\mathbf{x}}},v_{{\mathbf{y}}})$
such that $T(v_{{\mathbf{y}}}-v_{{\mathbf{x}}})=\frac{D}{2}$ is an optimal
strategy.
###### Proof.
The proof is direct. Since the sum
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})+u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})$
is equal to a constant for any pair $(v_{{\mathbf{x}}},v_{{\mathbf{y}}})$, the
only quantity left to optimize is
$-k|u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})-u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})|$.
Minimization occurs when the distance between the final positions $x_{T}$ and
$y_{T}$ is the greatest possible. It is easy to check that the driving speeds
listed above do just this. ∎
An important observation is that in the case where $d_{0}=\frac{D}{2}$, which
is accounted for in c), all the optimal strategies are of the form $(v,v)$ for
a feasible speed $v$. Intuitively, this means that if the players have
diametrically opposite initial positions, then any speed is optimal, as long
as both adopt it.
### 3.3. Long-run analysis
Let us recall that the previous results are obtained for small enough $T$, the
formal requirement being stated in (2.11). It is of interest to know what
happens in longer time periods, and in particular, in the long-run. To this
end, we repeat the games infinitely many times, implementing the optimal
strategies in each stage. Of course, the strategies depend on the distance
between players, which is given by the implementation of the optimal
strategies in the previous period. It is thus convenient to define a recursive
process, and to introduce a few variables.
Consider the initial positions of ${\mathbf{x}}$ and ${\mathbf{y}}$, namely
$(x_{0},y_{0})$, with $d_{0}$ defined in (2.10). Let
$\\{(x_{n},y_{n})\\}_{n\geq 1}$ be a stochastic process with the following
property: the pair $(x_{k+1},y_{k+1})$ gives the final locations of the
players after they play their optimal strategies, taking $(x_{k},y_{k})$ as
their starting positions. It is worth noting that since the equilibria in
Theorem 1 involve mixed strategies, randomness is very much present in the
process.
We define the distance between the buses at any (non-negative integer) time
as:
(3.1)
$d_{n}:=\min\\{d_{{\mathbf{x}}}(x_{n},y_{n}),d_{{\mathbf{y}}}(x_{n},y_{n})\\}\qquad\forall\enspace
n\geq 0.$
We also define the first time in which $d_{n}$ exceeds the escape distance $d$
(given in (2.12)), denoted by $N$, as follows
$N=\min\\{n\geq 0:d_{n}>d\\}.$
###### Theorem 3.
Non-cooperative game. If $d_{0}\neq 0$ and $d_{0}\neq d$, we have
$\mathbb{P}(N>k)\leq\Big{(}\frac{d}{D}\Big{)}^{k}\quad\text{for all $k\geq
1$}.$
If $d_{0}=d$, then there exists a geometrically distributed random time $M$
with parameter $1-(1-\frac{2\varepsilon}{D})(\frac{2d}{D})$, taking values in
the natural numbers, with $\varepsilon$ satisfying the
$\varepsilon$-equilibrium conditions in Theorem 2, with the property that
$d_{k}=d$ for all $k<M$, and
$d_{M}=\left\\{\begin{array}[]{ll}0&\textrm{ with probability
}\qquad\frac{4\varepsilon
d}{D^{2}\Big{(}1-\Big{(}1-\frac{2\varepsilon}{D}\Big{)}\Big{(}\frac{2d}{D}\Big{)}\Big{)}},\\\
>d&\textrm{ with complementary probability.}\\\ \end{array}\right.$
###### Proof.
For the proof we refer the reader to Appendix A. ∎
Explicitly, this means that for most starting points, playing the game
repeatedly leads to a bus gap greater than the escape distance in a finite and
geometrically distributed time. From Theorem 2, we conclude that in this case,
drivers end up driving at the minimum speed. There are two exceptions to this:
if the drivers have the same starting position, or if the initial distance
between them is exactly the escape distance. In the former case, the drivers choose
to go at the maximum speed forever, and in the latter, they maintain their
distance for some random time, and from then on reach the escape distance, and
drive at the minimum speed. It is with very little probability (proportional
to $\varepsilon$) that this scenario does not occur. Figure 3 shows the
evolution of the distance process $\\{d_{n}:n\geq 0\\}$ given a few initial
distances $d_{0}$.
Figure 3. Evolution of the process $\\{d_{n}:n\geq 0\\}$ for different initial
positions, non-cooperative game.
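A small Monte Carlo sketch can reproduce this qualitative behaviour. The per-stage escape probability below follows the proof of Theorem 3; resampling the distance uniformly on $(0,d)$ after a non-escaping round is a simplifying assumption, since the exact post-round distribution induced by the mixed strategies is not tracked here.

```python
import random

def sample_N(d0, d, D, rng):
    """Stages until the inter-bus distance first exceeds the escape distance d.
    Each round escapes with probability (1 - (d - d_k)/D)(1 - d_k/D), as in the
    proof of Theorem 3; after a non-escaping round we resample d_k uniformly on
    (0, d) -- an assumption, not the exact strategy-induced distribution."""
    dk, n = d0, 0
    while True:
        n += 1
        if rng.random() < (1 - (d - dk) / D) * (1 - dk / D):
            return n  # d_n > d for the first time
        dk = rng.uniform(0.0, d)

rng = random.Random(0)
d, D, d0 = 2.0, 10.0, 1.0
samples = [sample_N(d0, d, D, rng) for _ in range(50_000)]
# Empirical check of the tail bound P(N > k) <= (d/D)^k, with Monte Carlo slack:
for k in (1, 2, 3):
    tail = sum(s > k for s in samples) / len(samples)
    assert tail <= (d / D) ** k + 0.02
```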
###### Theorem 4.
Cooperative game. For all $d_{0}\geq 0$ we have
$N\leq\lceil\frac{D}{2d}\rceil$, where $N=\min\Big{\\{}n\geq
0:d_{n}=\frac{D}{2}\Big{\\}}$, and $\lceil z\rceil$ is the least integer
greater than or equal to the real number $z$.
###### Proof.
First note that $N$ gives the time in which the buses reach diametrically
opposite positions in the circuit. Also, playing the optimal strategies in
Theorem 2 increases the distance between the buses by $d$. Hence, repeating
the game eventually leads to reaching the diametric distance. This means that
$N$ is at most the number of steps of size $d$ necessary to go over
$\frac{D}{2}$. Once diametrical positions are reached, the distance is
preserved forever. ∎
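The counting argument in this proof can be checked with a short sketch, assuming (as in the proof) that each stage of the cooperative game adds exactly $d$ to the gap, capped at $\frac{D}{2}$:

```python
import math

def cooperative_rounds(d0, d, D):
    """Stages until the buses are diametrically opposite, assuming each stage
    of the cooperative game increases the gap by d (capped at D/2), as in the
    proof of Theorem 4."""
    dist, n = d0, 0
    while dist < D / 2:
        dist = min(dist + d, D / 2)
        n += 1
    return n

# The bound N <= ceil(D / (2d)) from Theorem 4, for a few initial gaps:
D, d = 10.0, 1.5
for d0 in (0.0, 0.7, 3.0):
    assert cooperative_rounds(d0, d, D) <= math.ceil(D / (2 * d))
```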
### 3.4. Extension
It is possible to account for perturbations like traffic lights, congestion,
or accidents by introducing a random noise to the displacement of the buses. One
could do this by defining
(3.2) $x_{T}=(x_{0}+Tv_{{\mathbf{x}}}+\sigma
Z_{x})_{mod\,D}\quad\text{and}\quad y_{T}=(y_{0}+Tv_{{\mathbf{y}}}+\sigma
Z_{y})_{mod\,D},$
where $Z_{x}$ and $Z_{y}$ are independent standard normal random variables and
$\sigma\geq 0$ is a fixed parameter.
Then, the following results would be observed.
* •
Non-cooperative game. Given that the expected value of the final positions is
unchanged, Theorem 1 remains valid. However, the repetition of this new game
leads to a new result. Since the probability of maintaining a null distance, or
the escape distance $d$, at any positive time is zero, the long-run analysis
reduces to two distinct cases: $0<d_{0}<d$ and $d<d_{0}$. Arguments similar to those in
the proof of Theorem 3 show that if $0<d_{0}<d$, we have $d_{N}\geq d$ in an
exponentially fast time $N$. If $d<d_{0}$, then the distance process
$\\{d_{n}\\}_{n\geq 1}$ remains above $d$ for a random time $M$, but
eventually falls below it. The expected time above is inversely proportional
to $\sigma$.
* •
Cooperative game. The analysis collapses to the cases b) and c) of Theorem 2.
So, while the players try to reach diametrically opposite positions, with
probability one they never do.
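The perturbed displacement (3.2) is straightforward to simulate; a minimal sketch (parameter values are illustrative):

```python
import random

def noisy_final_positions(x0, y0, vx, vy, T, D, sigma, rng):
    """Final positions under (3.2): deterministic displacement plus independent
    Gaussian noise, wrapped around the circuit of length D."""
    xT = (x0 + T * vx + sigma * rng.gauss(0.0, 1.0)) % D
    yT = (y0 + T * vy + sigma * rng.gauss(0.0, 1.0)) % D
    return xT, yT

rng = random.Random(0)
# With sigma = 0 the model reduces to the unperturbed displacement:
assert noisy_final_positions(0.0, 3.0, 2.0, 1.0, 1.0, 10.0, 0.0, rng) == (2.0, 4.0)
```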
## 4\. Concluding remarks
Our theoretical results are consistent with the driving practices mentioned in
the Introduction. In particular, Theorem 1.$a$ induces (2) Racing, Theorem
1.$b$, $c$ lead to (2) Racing and (3) Overtaking, Tailing or Chasing, and
Theorem 1.$d$ to (1) Hanging back or Crawling. It is worth noting that all of
the aforementioned are short-term strategies. As far as the time-evolution of
the game goes, Theorem 3 asserts that in the long run and with high
probability, both operators end up hanging back. Theorems 2 and 4 are intended
to contrast the drivers’ optimal strategies and ultimately the equilibria when
cooperation is desired.
In subsection 3.4, we extended the model to allow for randomness in
displacement. In this scenario no equilibrium is lasting, so the operators
alternate between racing, hanging back and chasing from time to time. We
believe this is precisely what happens in Mexico City, although proving this
would require a data-driven analysis.
There are a few open problems worth exploring. First, one could increase the
number of players, and investigate whether equilibria still exist, and if so,
try to characterize them. Second, one may vary the distribution of the
passengers along the route, dispensing with the homogeneous assumption. Along
these lines, one may introduce traffic congestion by making the utility
function depend on space in a non-homogeneous manner. This would potentially
require strategies to depend on the player’s position. Lastly, one could
introduce decision variables like tariffs and timetables; doing so would allow
to compare the results with some that have already been addressed in the
literature.
### Declaration of interest
None.
## Appendix A Computations
To prove Theorem 1, it is convenient to introduce the following Lemma.
###### Lemma 1.
Let $X$ be a mixed strategy of ${\mathbf{x}}$ and $Y$ be a mixed strategy of
${\mathbf{y}}$. We define $Z$ to be a mixed random variable in the
probability-theory sense: it has both discrete and continuous components. In particular,
$Z$ is of the form
$\displaystyle Z=\left\\{\begin{array}[]{cc}z_{i}&\textrm{ with probability
}p_{i},\textrm{ for }i\in I,\\\ W&\textrm{ with probability }1-\sum_{i\in
I}p_{i},\end{array}\right.$
where $I$ is a finite or numerable set, and $W$ is a continuous random
variable with density $f_{W}(t)$ on its support, denoted by $supp(f_{W})$.
Then,
(A.2) $\displaystyle U_{\mathbf{x}}(x_{0},X,y_{0},Y)$ $\displaystyle=$
$\displaystyle\sum_{i\in
I}\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|Z=z_{i})\,p_{i}$ $\displaystyle+$
$\displaystyle\Big{(}1-\sum_{i\in
I}p_{i}\Big{)}\int_{supp(f_{W})}\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|Z=w)f_{W}(w)dw.$
If $Z=X$ and $(X,Y)$ is a mixed strategy Nash equilibrium, then
(A.3)
$\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|X=z_{i})=\int_{supp(f_{W})}\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|X=w)f_{W}(w)dw\qquad\forall
i\in I,$
and
(A.4)
$\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|X=w_{1})=\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|X=w_{2})\qquad\forall
w_{1},w_{2}\in supp(f_{W}).$
###### Proof.
Equation (A.2) is obtained directly by computing the conditional
expectation of the random variable $u_{\mathbf{x}}(x_{0},X,y_{0},Y)$ given the
values of $Z$.
Note that if (A.3) does not occur, then there exist two different values
$z_{i}$ and $z_{j}$, such that
$\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|X=z_{i})\neq\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|X=z_{j})$.
This means that $U_{{\mathbf{x}}}$ can be increased by placing all the
probability on the value that gives the highest expectation. This leads to a
contradiction with the form of the mixed strategy $X$. Similar arguments apply
to the case where (A.3) is violated through the continuous component.
Likewise, if condition (A.4) is not fulfilled, then there are two values
$w_{1}$ and $w_{2}$ for which the expectations
$\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|X=w_{i})$, $i=1,2$, differ. Then,
$U_{{\mathbf{x}}}$ can be increased by restricting the support of $f_{W}$ to
the points where the maximum of the function
$g(w)=\mathbb{E}(u_{\mathbf{x}}(x_{0},X,y_{0},Y)|X=w)$ is attained, again
contradicting the form of the mixed strategy $X$. ∎
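Decomposition (A.2) is the law of total expectation applied to a mixed random variable. A quick numeric sanity check, using an illustrative mixture rather than the game's strategies:

```python
def mixture_expectation(g, atoms, density, support, n=100_000):
    """E[g(Z)] for Z with discrete atoms (z_i, p_i) and a continuous part with
    the given density on `support`, as in (A.2). The integral is approximated
    with the midpoint rule."""
    disc = sum(g(z) * p for z, p in atoms)
    p_cont = 1.0 - sum(p for _, p in atoms)
    a, b = support
    h = (b - a) / n
    integral = sum(g(a + (i + 0.5) * h) * density(a + (i + 0.5) * h)
                   for i in range(n)) * h
    return disc + p_cont * integral

# Z = 0 with probability 0.3, otherwise uniform on (0, 1); g(z) = z^2,
# so E[g(Z)] = 0.3 * 0 + 0.7 * (1/3):
val = mixture_expectation(lambda z: z * z, [(0.0, 0.3)], lambda w: 1.0, (0.0, 1.0))
assert abs(val - 0.7 / 3) < 1e-6
```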
Proof of Theorem 1:
First, note that for optimizing the utility functions (2.13), (2.14), the terms
$p\lambda$ and $c$ are irrelevant, since the optimizers of any
function are invariant under increasing affine transformations. Thus, there is no loss of
generality in assuming that $p\lambda=1$ and $c=0$.
By equation (2.8), we may actually assume that $0=x_{0}\leq y_{0}<D$. We then
have
$d_{0}=d_{{\mathbf{x}}}(x_{0},y_{0})=y_{0}\quad\text{and}\quad
d_{{\mathbf{y}}}(x_{0},y_{0})=D-y_{0}.$
Under the above assumption and using (2.1), (2.11) in cases a), b), c) and d),
it happens that $0<x_{T},y_{T}<D$, so we can get rid of all the $D$-modules in
the computations.
For computing the $\varepsilon$-equilibrium, we will consider the
$\varepsilon$-best reply, defined as follows. Let $\varepsilon$ be a positive
number. We say that a strategy $v^{*}_{{\mathbf{x}}}$ is ${\mathbf{x}}$’s
$\varepsilon$-best reply to ${\mathbf{y}}$’s strategy $v_{{\mathbf{y}}}$, if
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})\leq
u_{{\mathbf{x}}}(x_{0},v^{*}_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})+\varepsilon,$
for all strategies $v_{{\mathbf{x}}}$.
To simplify notation, we write
$u_{{\mathbf{x}}}(v_{{\mathbf{x}}},v_{{\mathbf{y}}})$ and
$u_{{\mathbf{x}}}(X,Y)$ in the case of mixed strategies, instead of
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})$ and
$u_{{\mathbf{x}}}(x_{0},X,y_{0},Y)$ if the computations do not depend on the
fixed initial positions.
* •
Case a)
We assume that $x_{0}=y_{0}=0$. Let player ${\mathbf{y}}$ pick the strategy
$v_{{\mathbf{y}}}=v_{max}$. Then,
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{max})=d_{{\mathbf{x}}}(Tv_{{\mathbf{x}}},Tv_{max})=Tv_{max}-Tv_{{\mathbf{x}}}\leq
Tv_{max}.$
Using (2.11), we obtain the bound
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{max})\leq\frac{D}{2}=u_{{\mathbf{x}}}(x_{0},v_{max},y_{0},v_{max}).$
Explicitly, this means that the strategy $v_{{\mathbf{x}}}=v_{max}$ is the
best reply to $v_{{\mathbf{y}}}=v_{max}$. By symmetry, we conclude that
$(v_{max},v_{max})$ is a Nash equilibrium.
To check the uniqueness of the equilibrium, we note that ${\mathbf{y}}$’s
$\varepsilon$-best reply to a given speed $v_{{\mathbf{x}}}<v_{max}$ chosen by
${\mathbf{x}}$ is $v_{{\mathbf{y}}}=v_{{\mathbf{x}}}+\varepsilon$ for
sufficiently small $\varepsilon$. On the other hand, ${\mathbf{x}}$’s
$\varepsilon$-best reply to $v_{{\mathbf{y}}}=v_{{\mathbf{x}}}+\varepsilon$ is
$v_{{\mathbf{x}}}=v_{{\mathbf{y}}}+\varepsilon$. Therefore the only
equilibrium is $(v_{max},v_{max})$.
* •
Case b)
Let us denote by $B_{{\mathbf{x}}}(v)$ the best reply of ${\mathbf{x}}$ when
${\mathbf{y}}$ plays $v$. It is straightforward to show that
$\displaystyle
B_{{\mathbf{x}}}(v)=\left\\{\begin{array}[]{ll}v+\frac{d_{0}}{T}+\frac{\varepsilon}{T}&\textrm{
if }v_{min}\leq v<v_{max}-\frac{d_{0}}{T},\\\ v_{max}&\textrm{ if
}v=v_{max}-\frac{d_{0}}{T},\\\ v_{min}&\textrm{ if
}v_{max}-\frac{d_{0}}{T}<v,\end{array}\right.$
and
$\displaystyle B_{{\mathbf{y}}}(v)=\left\\{\begin{array}[]{ll}v_{min}&\textrm{
if }v<v_{min}+\frac{d_{0}}{T},\\\
v-\frac{d_{0}}{T}+\frac{\varepsilon}{T}&\textrm{ if
}v_{min}+\frac{d_{0}}{T}\leq v\leq v_{max},\end{array}\right.$
under hypothesis $b)$.
If $(X,Y)$ is a mixed strategy Nash equilibrium, then the support of the
random variable $X$ should be contained in the set of ${\mathbf{x}}$’s best
replies, and the corresponding statement holds for $Y$. In this particular case,
$X$ has support on
$\\{v_{min}\\}\cup(v_{min}+\frac{d_{0}}{T},v_{max})\cup\\{v_{max}\\}$, while
$Y$ has support on
$\\{v_{min}\\}\cup(v_{min},v_{max}-\frac{d_{0}}{T})\cup\\{v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}\\}$.
Hence, a mixed strategy $X$ with the support obtained is of the form
$\displaystyle X=\left\\{\begin{array}[]{ll}v_{min}&\textrm{ with probability
}p_{1},\\\ U&\textrm{ with probability }p_{2},\\\ v_{max}&\textrm{ with
probability }1-p_{1}-p_{2},\end{array}\right.$
where $p_{1},p_{2}\in[0,1]$, and $U$ is a continuous random variable with
density $f_{U}(u)$ and support contained in
$(v_{min}+\frac{d_{0}}{T},v_{max})$. Similarly, a mixed strategy $Y$ with the
desired support is
$\displaystyle Y=\left\\{\begin{array}[]{ll}v_{min}&\textrm{ with probability
}q_{1},\\\ V&\textrm{ with probability }q_{2},\\\
v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}&\textrm{ with probability
}1-q_{1}-q_{2},\end{array}\right.$
where $q_{1},q_{2}\in[0,1]$, and $V$ is a continuous random variable with
density $f_{V}(v)$ with support contained in
$(v_{min},v_{max}-\frac{d_{0}}{T})$.
To compute the density of $U$, we apply (A.4) to $Y$. Let us compute
${\mathbb{E}}(u_{{\mathbf{y}}}(X,Y)|Y=v)$ when
$v\in(v_{min}-\frac{d_{0}}{T},v_{max}-\frac{d_{0}}{T})$:
$\displaystyle{\mathbb{E}}(u_{{\mathbf{y}}}(X,Y)|Y=v)$ $\displaystyle=$
$\displaystyle{\mathbb{E}}(u_{{\mathbf{y}}}(X,v))$ $\displaystyle=$
$\displaystyle
p_{1}u_{{\mathbf{y}}}(v_{min},v)+p_{2}{\mathbb{E}}(u_{{\mathbf{y}}}(U,v))+(1-p_{1}-p_{2})u_{{\mathbf{y}}}(v_{max},v)$
$\displaystyle=$ $\displaystyle p_{1}(D+Tv_{min}-Tv-d_{0})$ $\displaystyle+$
$\displaystyle p_{2}\Big{[}\int_{v_{min}}^{v+\frac{d_{0}}{T}}(D+Tu-Tv-
d_{0})f_{U}(u)du+\int_{v+\frac{d_{0}}{T}}^{v_{max}}(Tu-Tv-
d_{0})f_{U}(u)du\Big{]}$ $\displaystyle+$
$\displaystyle(1-p_{1}-p_{2})(Tv_{max}-Tv-d_{0})$ $\displaystyle=$
$\displaystyle
p_{1}D+p_{2}DF_{U}\Big{(}v+\frac{d_{0}}{T}\Big{)}+p_{1}Tv_{min}+(1-p_{1}-p_{2})Tv_{max}+p_{2}T{\mathbb{E}}(U)-Tv-
d_{0},$
where $F_{U}(u)$ is the cumulative probability distribution function of the
random variable $U$.
By (A.4), we have
(A.9)
$p_{1}D+p_{2}DF_{U}\Big{(}v+\frac{d_{0}}{T}\Big{)}+p_{1}Tv_{min}+(1-p_{1}-p_{2})Tv_{max}+p_{2}T{\mathbb{E}}(U)-Tv-
d_{0}=k,$
for some constant $k$.
Since $F_{U}(v_{max})=1$, when we plug $v=v_{max}-\frac{d_{0}}{T}$, we obtain
its value
(A.10)
$k=(p_{1}+p_{2})D+p_{1}Tv_{min}-(p_{1}+p_{2})Tv_{max}+p_{2}T{\mathbb{E}}(U).$
On substituting $k$ into (A.9) we obtain
$F_{U}\Big{(}v+\frac{d_{0}}{T}\Big{)}=1-\frac{T(v_{max}-v)-d_{0}}{p_{2}D}.$
Let $u=v+\frac{d_{0}}{T}$. Then, $u\in(v_{min}+\frac{d_{0}}{T},v_{max})$ and
$F_{U}(u)=1-\frac{T(v_{max}-u)}{p_{2}D}$. From this we have
$u^{*}=v_{max}-\frac{p_{2}D}{T}$ is the value such that $F_{U}(u^{*})=0$.
The conclusion is that $U$ is uniformly distributed on the interval
$(v_{max}-\frac{p_{2}D}{T},v_{max})$, thus
(A.11) ${\mathbb{E}}(U)=v_{max}-\frac{p_{2}D}{2T}.$
In the same manner we can see that $V$ has uniform distribution on the
interval $(v_{max}-\frac{d_{0}}{T}-\frac{q_{2}D}{T},v_{max}-\frac{d_{0}}{T})$,
with expectation given by
(A.12) ${\mathbb{E}}(V)=v_{max}-\frac{d_{0}}{T}-\frac{q_{2}D}{2T}.$
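Both conclusions can be sanity-checked numerically: the cumulative distribution function $F_{U}$ obtained above is linear, vanishes at the left endpoint of the support, equals one at $v_{max}$, and the uniform means match (A.11) and (A.12). The parameter values below are illustrative only:

```python
T, v_max, D, d0, p2, q2 = 1.0, 4.0, 10.0, 1.0, 0.2, 0.15

# U is uniform on (v_max - p2*D/T, v_max); its CDF from the text is linear:
F_U = lambda u: 1.0 - T * (v_max - u) / (p2 * D)
lo_U = v_max - p2 * D / T
assert abs(F_U(lo_U)) < 1e-12 and abs(F_U(v_max) - 1.0) < 1e-12
assert abs((lo_U + v_max) / 2 - (v_max - p2 * D / (2 * T))) < 1e-12  # (A.11)

# V is uniform on (v_max - d0/T - q2*D/T, v_max - d0/T):
hi_V = v_max - d0 / T
lo_V = hi_V - q2 * D / T
assert abs((lo_V + hi_V) / 2 - (v_max - d0 / T - q2 * D / (2 * T))) < 1e-12  # (A.12)
```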
To compute the values of $p_{1}$ and $p_{2}$ necessary for the
$\varepsilon$-equilibrium, we use (A.3). We first compute the conditional
expectation of $u_{{\mathbf{y}}}(X,Y)$ given $Y$,
$\displaystyle{\mathbb{E}}(u_{{\mathbf{y}}}(X,Y)|Y=v_{min})$ $\displaystyle=$
$\displaystyle
p_{1}u_{{\mathbf{y}}}(v_{min},v_{min})+p_{2}{\mathbb{E}}(u_{{\mathbf{y}}}(U,v_{min}))+(1-p_{1}-p_{2})u_{{\mathbf{y}}}(v_{max},v_{min})$
$\displaystyle=$ $\displaystyle
p_{1}(D-d_{0})+p_{2}\Big{[}\int_{v_{max}-\frac{p_{2}D}{T}}^{v_{max}}(Tu-
Tv_{min}-d_{0})f_{U}(u)\,du\Big{]}$ $\displaystyle+$
$\displaystyle(1-p_{1}-p_{2})(Tv_{max}-Tv_{min}-d_{0})$ $\displaystyle=$
$\displaystyle
p_{1}(D-d_{0})+p_{2}(T{\mathbb{E}}(U)-Tv_{min}-d_{0})+(1-p_{1}-p_{2})(Tv_{max}-Tv_{min}-d_{0}).$
By (A.11), we have
(A.13)
${\mathbb{E}}(u_{{\mathbf{y}}}(X,Y)|Y=v_{min})=p_{1}D-d_{0}+(1-p_{1})(Tv_{max}-Tv_{min})-p_{2}^{2}\frac{D}{2}.$
Computing ${\mathbb{E}}(u_{{\mathbf{y}}}(X,Y)|Y=V)$ yields
${\mathbb{E}}(u_{{\mathbf{y}}}(X,Y)|Y=V)=\int_{v_{max}-\frac{d_{0}}{T}-\frac{q_{2}D}{T}}^{v_{max}-\frac{d_{0}}{T}}{\mathbb{E}}(u_{{\mathbf{y}}}(X,Y)|Y=v)f_{V}(v)\,dv.$
Since we know that the integrand is constant and its value is given by
equations (A.10) and (A.11), we directly obtain
(A.14)
${\mathbb{E}}(u_{{\mathbf{y}}}(X,Y)|Y=V)=(p_{1}+p_{2})D-p_{1}(Tv_{max}-Tv_{min})-p_{2}^{2}\frac{D}{2}.$
We are left with the task of determining the expected value of
$u_{{\mathbf{y}}}(X,Y)$ conditioned on the value
$Y=v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}$,
(A.15)
$\displaystyle{\mathbb{E}}\Big{(}u_{{\mathbf{y}}}(X,Y)|Y=v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}\Big{)}$
$\displaystyle=$ $\displaystyle
p_{1}u_{{\mathbf{y}}}\Big{(}v_{min},v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}\Big{)}+p_{2}{\mathbb{E}}\Big{(}u_{{\mathbf{y}}}\Big{(}U,v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}\Big{)}\Big{)}$
$\displaystyle+$
$\displaystyle(1-p_{1}-p_{2})u_{{\mathbf{y}}}\Big{(}v_{max},v_{max}-\frac{d_{0}}{T}+\frac{\varepsilon}{T}\Big{)}$
$\displaystyle=$ $\displaystyle p_{1}(D-T(v_{max}-v_{min})-\varepsilon)$
$\displaystyle+$ $\displaystyle
p_{2}\int_{v_{max}-\frac{p_{2}D}{T}}^{v_{max}}(D-T(v_{max}-u)-\varepsilon)f_{U}(u)\,du+(1-p_{1}-p_{2})(D-\varepsilon)$
$\displaystyle=$ $\displaystyle p_{1}(D-T(v_{max}-v_{min})-\varepsilon)$
$\displaystyle+$ $\displaystyle
p_{2}(D-Tv_{max}+T{\mathbb{E}}(U)-\varepsilon)+(1-p_{1}-p_{2})(D-\varepsilon)$
$\displaystyle=$ $\displaystyle D-\varepsilon-
p_{1}T(v_{max}-v_{min})-p_{2}^{2}\frac{D}{2},$
where we used (A.11) in the last equality.
Equation (A.3) of Lemma 1 implies that, in order to have an $\varepsilon$-equilibrium, the
expressions (A.13), (A.14) and (A.15) must be equal. This system of equations
has the unique solution
$p_{1}=1-\frac{T(v_{max}-v_{min})-d_{0}}{D},\qquad
p_{2}=\frac{T(v_{max}-v_{min})-d_{0}}{D},\qquad 1-p_{1}-p_{2}=0.$
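With illustrative parameter values one can verify numerically that this solution equalizes (A.13) and (A.14) exactly, and matches (A.15) up to the $\varepsilon$ allowance built into an $\varepsilon$-equilibrium:

```python
# Illustrative values satisfying 0 < T(v_max - v_min) - d0 < D:
T, v_max, v_min, d0, D, eps = 1.0, 4.0, 1.0, 1.0, 10.0, 0.01
delta = v_max - v_min
p2 = (T * delta - d0) / D
p1 = 1.0 - p2

E13 = p1 * D - d0 + (1 - p1) * T * delta - p2**2 * D / 2   # (A.13)
E14 = (p1 + p2) * D - p1 * T * delta - p2**2 * D / 2       # (A.14)
E15 = D - eps - p1 * T * delta - p2**2 * D / 2             # (A.15)

assert abs(E13 - E14) < 1e-12         # exactly equal
assert abs(E14 - E15 - eps) < 1e-12   # equal up to the eps allowance
```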
We now apply this argument again, to obtain the expectation of the random
variable $u_{{\mathbf{x}}}(X,Y)$ conditioned on the values of $X$, as well as
the values $q_{1},q_{2}$ necessary to have an $\varepsilon$-equilibrium. In
this case, there are many solutions. Indeed, any combination $q_{1},q_{2}$
satisfying
$0\leq q_{1},q_{2},\qquad q_{1}+q_{2}=\frac{T(v_{max}-v_{min})}{D},\qquad
1-q_{1}-q_{2}=1-\frac{T(v_{max}-v_{min})}{D},$
fulfills equation (A.3).
Given that the support of $V$ is
$\Big{(}v_{max}-\frac{d_{0}}{T}-q_{2}\frac{D}{T},\,v_{max}-\frac{d_{0}}{T}\Big{)}\subseteq\Big{(}v_{min},v_{max}-\frac{d_{0}}{T}\Big{)}$,
it is necessary to impose the condition $q_{2}\leq\frac{d-d_{0}}{D}$.
* •
Case c)
From the conditions stated in $c)$, it follows that
$\displaystyle B_{{\mathbf{x}}}(v)=\left\\{\begin{array}[]{ll}v_{max}&\textrm{
if }v=v_{min},\\\ v_{min}&\textrm{ if }v>v_{min}.\end{array}\right.$
Intuitively, under hypothesis $c)$, it always happens that $x_{T}\leq y_{T}$
for every pair of strategies $v_{{\mathbf{x}}},v_{{\mathbf{y}}}$. Equality
holds only when $v_{{\mathbf{x}}}=v_{max}$ and $v_{{\mathbf{y}}}=v_{min}$.
Similarly, one can check that
$\displaystyle B_{{\mathbf{y}}}(v)=\left\\{\begin{array}[]{ll}v_{min}&\textrm{
if }v<v_{max},\\\ v_{max}+\frac{\varepsilon}{T}&\textrm{ if
}v=v_{max},\end{array}\right.$
where the last case is an $\varepsilon$-best reply.
To find the $\varepsilon$-equilibria, we define $X$ to be a random variable
such that
$\mathbb{P}(X=v_{min})=p,\qquad\mathbb{P}(X=v_{max})=1-p,\quad\text{for some
probability $p\in[0,1]$.}$
Similarly, we define a random variable $Y$ such that
$\mathbb{P}(Y=v_{min})=q,\qquad\mathbb{P}\Big{(}Y=v_{min}+\frac{\varepsilon}{T}\Big{)}=1-q,\quad\text{for
$q\in[0,1]$.}$
An $\varepsilon$-equilibrium requires
$\mathbb{E}(u_{{\mathbf{x}}}(v_{min},Y))=\mathbb{E}(u_{{\mathbf{x}}}(v_{max},Y))$,
which is exactly the condition (A.3) when there is no continuous part for $X$.
Since
$\mathbb{E}(u_{{\mathbf{x}}}(v_{min},Y))=qu_{{\mathbf{x}}}(v_{min},v_{min})+(1-q)u_{{\mathbf{x}}}\Big{(}v_{min},v_{min}+\frac{\varepsilon}{T}\Big{)}=qd+(1-q)(d+\varepsilon),$
and
$\mathbb{E}(u_{{\mathbf{x}}}(v_{max},Y))=qu_{{\mathbf{x}}}(v_{max},v_{min})+(1-q)u_{{\mathbf{x}}}\Big{(}v_{max},v_{min}+\frac{\varepsilon}{T}\Big{)}=q\Big{(}\frac{D}{2}\Big{)}+(1-q)(\varepsilon),$
we can equate the two expressions and solve to obtain $q=\frac{2d}{D}$. Note
that (2.1) implies that $0<q<1$.
Similarly, we should have
$\mathbb{E}(u_{{\mathbf{y}}}(X,v_{min}))=\mathbb{E}\Big{(}u_{{\mathbf{y}}}\Big{(}X,v_{min}+\frac{\varepsilon}{T}\Big{)}\Big{)}$.
The explicit formulas being
$\mathbb{E}(u_{{\mathbf{y}}}(X,v_{min}))=pu_{{\mathbf{y}}}(v_{min},v_{min})+(1-p)u_{{\mathbf{y}}}(v_{max},v_{min})=p(D-d)+(1-p)\frac{D}{2},$
and
$\mathbb{E}\Big{(}u_{{\mathbf{y}}}\Big{(}X,v_{min}+\frac{\varepsilon}{T}\Big{)}\Big{)}=p\,u_{{\mathbf{y}}}\Big{(}v_{min},v_{min}+\frac{\varepsilon}{T}\Big{)}+(1-p)u_{{\mathbf{y}}}\Big{(}v_{max},v_{min}+\frac{\varepsilon}{T}\Big{)}=p(D-d-\varepsilon)+(1-p)(D-\varepsilon).$
Equating and solving the two yields $0<1-p=\frac{2\varepsilon}{D}<1$.
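Both indifference conditions can be checked numerically with illustrative values of $d$, $D$ and $\varepsilon$; note that $\varepsilon$ cancels in ${\mathbf{x}}$'s condition, so $q=\frac{2d}{D}$ is exact:

```python
d, D, eps = 2.0, 10.0, 0.01
q = 2 * d / D
p = 1.0 - 2 * eps / D

# x's indifference between v_min and v_max against Y:
lhs = q * d + (1 - q) * (d + eps)
rhs = q * (D / 2) + (1 - q) * eps
assert abs(lhs - rhs) < 1e-12

# y's indifference between v_min and v_min + eps/T against X:
lhs = p * (D - d) + (1 - p) * (D / 2)
rhs = p * (D - d - eps) + (1 - p) * (D - eps)
assert abs(lhs - rhs) < 1e-12
```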
* •
Case d)
Assume that player ${\mathbf{y}}$ chooses strategy $v_{{\mathbf{y}}}$
satisfying (2.1). Then
(A.18)
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})=d_{{\mathbf{x}}}(Tv_{{\mathbf{x}}},y_{0}+Tv_{{\mathbf{y}}}).$
By assumption $d)$, we have
$T(v_{{\mathbf{x}}}-v_{{\mathbf{y}}})\leq
T(v_{max}-v_{min})<d_{{\mathbf{x}}}(x_{0},y_{0})=y_{0},$
so $y_{0}+Tv_{{\mathbf{y}}}-Tv_{{\mathbf{x}}}>0$ for every
$v_{{\mathbf{x}}},v_{{\mathbf{y}}}$. Then, (A.18) is equal to
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})=y_{0}+T(v_{{\mathbf{y}}}-v_{{\mathbf{x}}}),$
which is bounded by
$u_{{\mathbf{x}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})\leq
y_{0}+T(v_{{\mathbf{y}}}-v_{min})=u_{{\mathbf{x}}}(x_{0},v_{min},y_{0},v_{{\mathbf{y}}}).$
We conclude that $v_{{\mathbf{x}}}=v_{min}$ is ${\mathbf{x}}$’s best reply to
any strategy $v_{{\mathbf{y}}}$ played by ${\mathbf{y}}$.
Similarly, if ${\mathbf{x}}$ chooses strategy $v_{{\mathbf{x}}}$, then
(A.19)
$u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})=d_{{\mathbf{y}}}(Tv_{{\mathbf{x}}},y_{0}+Tv_{{\mathbf{y}}}).$
We have already proven that $y_{0}+Tv_{{\mathbf{y}}}-Tv_{{\mathbf{x}}}>0$ for
every $v_{{\mathbf{x}}},v_{{\mathbf{y}}}$, so (A.19) is equal to
$u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})=D+Tv_{{\mathbf{x}}}-y_{0}-Tv_{{\mathbf{y}}}.$
We can bound the last expression by
$u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{{\mathbf{y}}})=D-y_{0}+T(v_{{\mathbf{x}}}-v_{{\mathbf{y}}})\leq
D-y_{0}+T(v_{{\mathbf{x}}}-v_{min})=u_{{\mathbf{y}}}(x_{0},v_{{\mathbf{x}}},y_{0},v_{min}).$
This implies $v_{{\mathbf{y}}}=v_{min}$ is ${\mathbf{y}}$’s best reply to any
strategy $v_{{\mathbf{x}}}$ played by ${\mathbf{x}}$. The conclusion is that
$(v_{min},v_{min})$ is the unique Nash equilibrium.
$\square$
Proof of Theorem 3:
First, note that $d_{0}>d$ implies $N\equiv 0$, and the result holds
trivially.
Assume that $0<d_{0}<d$, and suppose that $0<d_{k}<d$ for some $k\geq 0$.
Then, the strategies
$(U,v_{min}),(U,V),(U,v_{max}-\frac{d_{k}}{T}+\frac{\varepsilon}{T}),(v_{min},v_{min}),(v_{min},V)$
lead to $0<d_{k+1}<d$ with probability one.
If the strategies of ${\mathbf{x}}$ and ${\mathbf{y}}$ are instead
$(v_{min},v_{max}-\frac{d_{k}}{T}+\frac{\varepsilon}{T})$, then
$d_{k+1}=d+\varepsilon$. We can uniformly bound from below the probability
that the players adopt these strategies by
$\mathbb{P}\Big{(}(X,Y)=\Big{(}v_{min},v_{max}-\frac{d_{k}}{T}+\frac{\varepsilon}{T}\Big{)}\Big{)}=\Big{(}1-\frac{d-d_{k}}{D}\Big{)}\Big{(}1-\frac{d_{k}}{D}\Big{)}\geq\Big{(}1-\frac{d}{D}\Big{)},\quad\forall\enspace
0<d_{k}<d,$
where the inequality can be obtained by calculus (or by noting that this
probability is an inverted parabola, as a function of $d_{k}$). Therefore,
$\mathbb{P}(N>k)\leq\mathbb{P}(G>k),$
where $G$ is a geometric random variable with parameter $1-\frac{d}{D}$, and
the result follows.
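The inequality used above follows since the product expands to $1-\frac{d}{D}+\frac{d_{k}(d-d_{k})}{D^{2}}$ and the extra term is non-negative for $0<d_{k}<d$; a grid check with illustrative values:

```python
# Verify (1 - (d - dk)/D) * (1 - dk/D) >= 1 - d/D on a grid over (0, d):
D, d = 10.0, 3.0
for i in range(1, 1000):
    dk = d * i / 1000
    assert (1 - (d - dk) / D) * (1 - dk / D) >= 1 - d / D
```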
Finally, assume that $d_{0}=d$. If players ${\mathbf{x}}$ and ${\mathbf{y}}$
choose $(v_{min},v_{min})$, then $d_{1}=d$. Any other strategy choice yields
$d_{1}\neq d$.
Define $M=\min\\{n\geq 1:d_{n}\neq d\\}$. By the above remark, $M$ has
geometric distribution on the natural numbers with parameter
$1-\Big{(}1-\frac{2\varepsilon}{D}\Big{)}\Big{(}\frac{2d}{D}\Big{)}$. After
$M$ trials, we are on the conditional space where ${\mathbf{x}}$ and
${\mathbf{y}}$ do not play $(v_{min},v_{min})$, instead they choose
$\displaystyle(v_{max},v_{min})$ $\displaystyle\textrm{ with probability
}\frac{\Big{(}\frac{2\varepsilon}{D}\Big{)}\Big{(}\frac{2d}{D}\Big{)}}{1-\Big{(}1-\frac{2\varepsilon}{D}\Big{)}\Big{(}\frac{2d}{D}\Big{)}},$
$\displaystyle(v_{min},v_{min}+\frac{\varepsilon}{T})$ $\displaystyle\textrm{
with probability
}\frac{\Big{(}1-\frac{2\varepsilon}{D}\Big{)}\Big{(}1-\frac{2d}{D}\Big{)}}{1-\Big{(}1-\frac{2\varepsilon}{D}\Big{)}\Big{(}\frac{2d}{D}\Big{)}},$
$\displaystyle(v_{max},v_{min}+\frac{\varepsilon}{T})$ $\displaystyle\textrm{
with probability
}\frac{\Big{(}\frac{2\varepsilon}{D}\Big{)}\Big{(}1-\frac{2d}{D}\Big{)}}{1-\Big{(}1-\frac{2\varepsilon}{D}\Big{)}\Big{(}\frac{2d}{D}\Big{)}}.$
The first choice leads to $d_{M+1}=0$, while the other two give $d_{M+1}>d$.
This concludes the proof.
$\square$
# Partially observed Markov processes with spatial structure via the R package spatPomp

Kidus Asfaw (University of Michigan), Joonha Park (University of Kansas), Allister Ho (University of Michigan), Aaron A. King (University of Michigan), Edward L. Ionides (University of Michigan)

###### Abstract
We address inference for a partially observed nonlinear non-Gaussian latent
stochastic system comprised of interacting units. Each unit has a state, which
may be discrete or continuous, scalar or vector valued. In biological
applications, the state may represent a structured population or the
abundances of a collection of species at a single location. Units can have
spatial locations, allowing the description of spatially distributed
interacting populations arising in ecology, epidemiology and elsewhere. We
consider models where the collection of states is a latent Markov process, and
a time series of noisy or incomplete measurements is made on each unit. A
model of this form is called a spatiotemporal partially observed Markov
process (SpatPOMP). The R package spatPomp provides an environment for
implementing SpatPOMP models, analyzing data, and developing new inference
approaches. We describe the spatPomp implementations of some methods with
scaling properties suited to SpatPOMP models. We demonstrate the package on a
simple Gaussian system and on a nontrivial epidemiological model for measles
transmission within and between cities. We show how to construct
user-specified SpatPOMP models within spatPomp. This document is provided
under the Creative Commons Attribution License.

###### Keywords:
Markov processes, hidden Markov model, state space model, stochastic dynamical
system, maximum likelihood, plug-and-play, spatiotemporal data, mechanistic
model, sequential Monte Carlo, R

Contact addresses:
Kidus Asfaw, Edward Ionides, Allister Ho: Department of Statistics, University of Michigan, 48109 Michigan, United States of America. URL: http://www.stat.lsa.umich.edu/~ionides/
Aaron A. King: Department of Ecology & Evolutionary Biology, Center for the Study of Complex Systems, University of Michigan, 48109 Michigan, United States of America. URL: http://kinglab.eeb.lsa.umich.edu/
Joonha Park: Department of Mathematics, University of Kansas, 66045 Kansas, United States of America. URL: http://people.ku.edu/~j139p002
## 1 Introduction
A partially observed Markov process (POMP) model consists of incomplete and
noisy measurements of a latent Markov process. A POMP model in which the
latent process has spatial as well as temporal structure is called a
spatiotemporal POMP or SpatPOMP. Many biological, social and physical systems
have the spatiotemporal structure, dynamic stochasticity and imperfect
observability that characterize SpatPOMP models. The spatial structure of
SpatPOMPs adds complexity to the problems of likelihood estimation, parameter
inference and model selection for nonlinear and non-Gaussian systems. The
objective of the spatPomp package (Asfaw _et al._ , 2021) is to facilitate
model development and data analysis in the context of the general class of
SpatPOMP models, enabling scientists to separate the scientific task of model
development from the statistical task of providing inference tools. Thus,
spatPomp brings together general purpose methods for carrying out Monte Carlo
statistical inference for such systems. More generally, spatPomp provides an
abstract representation for specifying SpatPOMP models. This ensures that
SpatPOMP models formulated with the package can be investigated using a range
of methods, and that new methods can be readily tested on a range of models.
In its current manifestation, spatPomp is appropriate for data analysis with a
moderate number of spatial units (up to around 100 spatial units) having
nonlinear and non-Gaussian dynamics. In particular, spatPomp is not targeted
at very large spatiotemporal systems such as those that arise in geophysical
data assimilation (Anderson _et al._ , 2009) though some of the tools
developed in that context can be applied to smaller models and are provided by
the package. Spatiotemporal systems with Gaussian dynamics can be investigated
with spatPomp, but a variety of alternative methods and software are available
in this case (Wikle _et al._ , 2019; Sigrist _et al._ , 2015; Cappello _et
al._ , 2020).
The spatPomp package builds on the pomp package described by King _et al._
(2016). Mathematically, a SpatPOMP model is also a POMP model, and this
property is reflected in the object-oriented design of spatPomp: The package
is implemented using S4 classes (Chambers, 1998; Genolini, 2008; Wickham,
2019) and the basic class ‘spatPomp’ extends the class ‘pomp’ provided by
pomp. Among other things, this allows us to test new methods against
extensively tested methods in low-dimensional settings, use existing
convenience functions, and apply methods for POMP models on SpatPOMP models.
However, standard Monte Carlo statistical inference methods for nonlinear POMP
models break down with increasing spatial dimension (Bengtsson _et al._ ,
2008), a manifestation of the _curse of dimensionality_. Therefore, effective
inference approaches must, in practice, take advantage of the special
structure of SpatPOMP models. Figure 1 illustrates the use case of the
spatPomp package relative to the pomp package and methods that use Gaussian
approximations to target models with massive dimensionality. The difficulty of
statistical inference for a dynamic model often comes from a combination of
its nonlinearity and its dimensionality, so methods for nonlinear dynamics of
small dimensions sometimes work well for linear and Gaussian problems of
relatively high dimensions. This means that the boundaries of the methods and
packages in Figure 1 can extend beyond the edges shown in the figure.
Nevertheless, it is useful to situate models in this nonlinearity-
dimensionality problem space so that suitable candidate methods and software
packages become easier to identify.
Figure 1: The use case for the
spatPomp package. For statistical inference of models that are approximately
linear and Gaussian, the Kalman Filter (KF) is an appropriate method. If the
nonlinearity in the problem increases moderately but the dimension of the
problem is very large (e.g. geophysical models), the ensemble Kalman Filter
(EnKF) and similar methods are useful. In low-dimensional but very nonlinear
settings, the particle filter (PF) and related methods are useful and the pomp
package targets such problems. The spatPomp package and the methods
implemented in it are intended for statistical inference for nonlinear models
that are of moderate dimension. The nonlinearity in these models (e.g.
epidemiological models) is problematic for Gaussian approximations and the
dimensionality is large enough to cause the particle filter to be unstable.
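The instability of the particle filter with growing dimension can be seen in a few lines of code. The sketch below (in Python, independent of the package; the model, particle count, and seed are illustrative assumptions) draws one-step importance weights for a product of $U$ standard-normal unit measurement densities and shows the effective sample size collapsing as $U$ grows, which is the curse of dimensionality mentioned above.

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS = (sum w)^2 / sum(w^2), computed stably from log-weights."""
    lw = log_weights - log_weights.max()
    w = np.exp(lw)
    return w.sum() ** 2 / (w * w).sum()

rng = np.random.default_rng(1)
J = 1000  # particles
for U in (1, 10, 100):
    # Particles X ~ N(0, I_U); data y* = 0; measurement Y_u | X_u ~ N(X_u, 1).
    X = rng.normal(size=(J, U))
    # log f(y* | X) = sum_u log N(0; X_u, 1); additive constants cancel in the ESS.
    log_weights = -0.5 * (X ** 2).sum(axis=1)
    print(U, round(effective_sample_size(log_weights)))
```

With $U=1$ most of the $J$ particles retain appreciable weight, whereas at $U=100$ a handful of particles dominate, so resampling degenerates.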
A SpatPOMP model is characterized by the transition density for the latent
Markov process and unit-specific measurement densities. (We use the term
"density" in this article to encompass both the continuous and discrete cases;
when latent variables or measured quantities are discrete, one can replace
"probability density function" with "probability mass function".) Once
these elements are specified, calculating and simulating from all joint and
conditional densities are well defined operations. However, different
statistical methods vary in the operations they require. Some methods require
only simulation from the transition density whereas others require evaluation
of this density. Some methods avoid working with the model directly, replacing
it by an approximation, such as a linearization. For a given model, some
operations may be considerably easier to implement and so it is useful to
classify inference methods according to the operations on which they depend.
In particular, an algorithm is said to be _plug-and-play_ if it utilizes
simulation of the latent process but not evaluation of transition densities
(Bretó _et al._ , 2009; He _et al._ , 2010). The arguments for and against
plug-and-play methodology for SpatPOMP models are essentially the same as for
POMP models (He _et al._ , 2010; King _et al._ , 2016). Simulators are
relatively easy to implement for most SpatPOMP models; plug-and-play
methodology facilitates the investigation of a variety of models that may be
scientifically interesting but mathematically inconvenient. On the other hand,
approaches that leverage explicit transition densities are sometimes more
computationally efficient than those that rely on Monte Carlo methods.
Nevertheless, the utility of plug-and-play methods has been amply demonstrated
in scientific applications. In particular, plug-and-play methods implemented
using pomp have proved capable for state-of-the-art inference problems for
POMP models (e.g., King _et al._ , 2008; Bhadra _et al._ , 2011; Shrestha _et
al._ , 2011, 2013; Earn _et al._ , 2012; Roy _et al._ , 2013; Blackwood _et
al._ , 2013a, b; He _et al._ , 2013; Bretó, 2014; Blake _et al._ , 2014;
Martinez-Bakker _et al._ , 2015; Bakker _et al._ , 2016; Becker _et al._ ,
2016; Buhnerkempe _et al._ , 2017; Ranjeva _et al._ , 2017; Marino _et al._ ,
2019; Pons-Salort and Grassly, 2018; Becker _et al._ , 2019; Kain _et al._ ,
2020). Although the spatPomp package provides a general environment for
methods with and without the plug-and-play property, development of the
package to date has emphasized plug-and-play methods.
The remainder of this paper is organized as follows. Section 2 defines
mathematical notation for SpatPOMP models and relates this to their
representation as objects of class ‘spatPomp’ in the spatPomp package. Section
3 introduces simulation and several spatiotemporal filtering methods currently
implemented in spatPomp. Section 4 introduces some parameter estimation
algorithms currently implemented in spatPomp which build upon these simulation
and filtering techniques. Section 5 constructs a simple linear Gaussian
SpatPOMP model and uses this example to illustrate the statistical
methodology. Section 6 discusses the construction of spatially structured
compartment models for population dynamics, in the context of coupled measles
dynamics in UK cities; this demonstrates the kind of nonlinear stochastic
system primarily motivating the development of spatPomp. Finally, Section 7
discusses extensions and applications of spatPomp.
## 2 SpatPOMP models and their representation in spatPomp
We now set up notation for SpatPOMP models. This notation is similar to that
of POMP models (King _et al._ , 2016), but we provide all the details here for
completeness. A visually intuitive schematic illustrating SpatPOMPs is given
in Figure 2. The notation allows us to eventually define a SpatPOMP model in
terms of its three main components: a model for one-step transitions of the
latent states, a model for the measurements at an observation time conditional
on the latent states at that time, and an initializer for the latent state
process. Suppose there are $U$ units labeled $1:U=\{1,2,\dots,U\}$. Let
${\mathbb{T}}$ be the set of times at which the latent dynamic process is
defined. The SpatPOMP framework and the spatPomp package permit the timescale
${\mathbb{T}}$ to be a discrete or continuous subset of $\mathbb{R}$. In
either case, ${\mathbb{T}}$ must contain a set of $N$ observation times,
$t_{1:N}=\{t_{n}:n=1,\dots,N\}$, where $t_{1}<t_{2}<\dots<t_{N}$. In the
examples below, we take ${\mathbb{T}}=[t_{0},t_{N}]$, with $t_{0}\leq t_{1}$
being the time at which we begin modeling the dynamics of the latent Markov
process. The partially observed latent Markov process is written as
$\{\boldsymbol{X}(t;\theta):t\in{\mathbb{T}}\}$ where
$\boldsymbol{X}(t;\theta)=\big(X_{1}(t;\theta),\dots,X_{U}(t;\theta)\big)$,
with $X_{u}(t;\theta)$ taking values in some space $\mathbb{R}^{D_{X}}$ and
$\theta$ a $D_{\theta}$-dimensional real-valued parameter which we write as
$\theta=\theta_{1:D_{\theta}}\in\mathbb{R}^{D_{\theta}}$. For some purposes it
is adequate to consider the latent process only at the finite collection of
times at which it is observed. We write
$\boldsymbol{X}_{n}=\boldsymbol{X}(t_{n};\theta)$, noting that we sometimes
choose to suppress the dependence on $\theta$. We can also write
$\boldsymbol{X}_{n}=({X}_{1,n},\dots,{X}_{U,n})={X}_{1:U,n}$. The initial
value $\boldsymbol{X}_{0}=\boldsymbol{X}(t_{0};\theta)$ may be stochastic or
deterministic. The observable process
$\{\boldsymbol{Y}_{n}=Y_{1:U,n},n\in 1:N\}$ takes values in $\mathbb{R}^{U}$.
Figure 2: The structure of spatially coupled partially observed Markov process
(SpatPOMP) models. The latent dynamic model is
$\{\boldsymbol{X}(t;\theta):t\in{\mathbb{T}}\}$. At the observation times
$t_{1:N}=\{t_{n}:n=1,\dots,N\}$, the values of the latent process are denoted
$\boldsymbol{X}_{1},\dots,\boldsymbol{X}_{N}$. The partial and noisy
observations at these times are denoted
$\boldsymbol{Y}_{1},\dots,\boldsymbol{Y}_{N}$. A SpatPOMP model specifies that
$\boldsymbol{X}(t)$ is itself a collection of latent states from $U$ spatially
coupled processes that each produce observations at the observation times.
Ultimately, defining a SpatPOMP model is equivalent to defining three models:
a model for the one-step transition of the latent process, a model for each
spatial unit's measurements at an observation time conditional on the latent
states for that unit at that time, and a model for the initial latent state,
$\boldsymbol{X}_{0}$.
Observations are modeled as conditionally independent given the latent
process. This conditional independence of measurements applies over both space
and time, so $Y_{u,n}$ is conditionally independent of
$\\{Y_{\tilde{u},\tilde{n}},(\tilde{u},\tilde{n})\neq(u,n)\\}$ given
$X_{u,n}$. We suppose the existence of a joint density
$f_{\boldsymbol{X}_{0:N},\boldsymbol{Y}_{1:N}}$ of $X_{1:U,0:N}$ and
$Y_{1:U,1:N}$ with respect to some reference measure. We use the same
subscript notation to denote other marginal and conditional densities. The
data, $y^{*}_{1:U,1:N}=(\boldsymbol{y}_{1}^{*},\ \ldots,\
\boldsymbol{y}_{N}^{*})$, are modeled as a realization of this observation
process. Spatially or temporally dependent measurement errors can be modeled
by adding suitable latent variables.
The SpatPOMP structure permits a factorization of the joint density in terms
of the initial density, $f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0};\theta)$,
the conditional transition probability density,
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}\,|\,\boldsymbol{x}_{n-1};\theta)$,
and the unit measurement density,
$f_{Y_{u,n}|X_{u,n}}(y_{u,n}\,|\,x_{u,n};\theta)$ for $1\leq n\leq N$,
$1\leq u\leq U$, given by
$f_{\boldsymbol{X}_{0:N},\boldsymbol{Y}_{1:N}}(\boldsymbol{x}_{0:N},\boldsymbol{y}_{1:N};\theta)=f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0};\theta)\,\prod_{n=1}^{N}f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}\,|\,\boldsymbol{x}_{n-1};\theta)\,\prod_{u=1}^{U}f_{Y_{u,n}|X_{u,n}}(y_{u,n}\,|\,x_{u,n};\theta).$
In this framework, the transition density,
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}$, and unit-specific measurement
density, $f_{Y_{u,n}|X_{u,n}}$, can depend on $n$ and $u$, allowing for the
possibility of temporally and spatially inhomogeneous models.
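As a concrete check of this factorization, the following sketch (Python, purely illustrative and independent of spatPomp) evaluates the joint log-density of a toy SpatPOMP by summing the initial, transition, and unit-measurement log-density terms; the random-walk dynamics, Gaussian measurements, and fixed initial state are assumptions made only for this example.

```python
import numpy as np

def norm_logpdf(x, mean, sd):
    return -0.5 * np.log(2 * np.pi * sd ** 2) - 0.5 * ((x - mean) / sd) ** 2

def joint_logdensity(x, y, sd_proc=1.0, sd_meas=1.0):
    """Joint log-density log f_{X_{0:N}, Y_{1:N}} for a toy SpatPOMP:
    X_0 fixed at 0 (so the initial density contributes nothing),
    X_n | X_{n-1} ~ N(X_{n-1}, sd_proc^2) independently in each unit,
    Y_{u,n} | X_{u,n} ~ N(X_{u,n}, sd_meas^2).
    x has shape (N+1, U); y has shape (N, U)."""
    ll = 0.0
    for n in range(1, x.shape[0]):
        ll += norm_logpdf(x[n], x[n - 1], sd_proc).sum()  # transition factor
        ll += norm_logpdf(y[n - 1], x[n], sd_meas).sum()  # unit measurement factors
    return ll

rng = np.random.default_rng(0)
U, N = 3, 5
x = np.zeros((N + 1, U))
for n in range(1, N + 1):
    x[n] = x[n - 1] + rng.normal(size=U)
y = x[1:] + rng.normal(size=(N, U))
print(joint_logdensity(x, y))
```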
### 2.1 Implementation of SpatPOMP models
A SpatPOMP model is represented in spatPomp by an S4 object of class
‘spatPomp’. Slots in this object encode the components of the SpatPOMP model,
and can be filled or changed using the constructor function spatPomp() and
various other convenience functions. Methods for the class ‘spatPomp’ use
these components to carry out computations on the model. Table 1 gives the
mathematical notation corresponding to the elementary methods that can be
executed on a class ‘spatPomp’ object.
Method | Argument to spatPomp() | Mathematical terminology
---|---|---
dunit_measure | dunit_measure | Evaluate $f_{Y_{u,n}|X_{u,n}}(y_{u,n}\,|\,x_{u,n};\theta)$
runit_measure | runit_measure | Simulate from $f_{Y_{u,n}|X_{u,n}}(y_{u,n}\,|\,x_{u,n};\theta)$
eunit_measure | eunit_measure | Evaluate $\mathrm{e}_{u,n}(x,\theta)=\mathbb{E}[Y_{u,n}\,|\,X_{u,n}=x;\theta]$
vunit_measure | vunit_measure | Evaluate $\mathrm{v}_{u,n}(x,\theta)=\mathrm{Var}[Y_{u,n}\,|\,X_{u,n}=x;\theta]$
munit_measure | munit_measure | $\mathrm{m}_{u,n}(x,V,\theta)=\boldsymbol{\psi}$ solves $\mathrm{v}_{u,n}(x,\boldsymbol{\psi})=V$, $\mathrm{e}_{u,n}(x,\boldsymbol{\psi})=\mathrm{e}_{u,n}(x,\theta)$
rprocess | rprocess | Simulate from $f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}\,|\,\boldsymbol{x}_{n-1};\theta)$
dprocess | dprocess | Evaluate $f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}\,|\,\boldsymbol{x}_{n-1};\theta)$
rmeasure | rmeasure | Simulate from $f_{\boldsymbol{Y}_{n}|\boldsymbol{X}_{n}}(\boldsymbol{y}_{n}\,|\,\boldsymbol{x}_{n};\theta)$
dmeasure | dmeasure | Evaluate $f_{\boldsymbol{Y}_{n}|\boldsymbol{X}_{n}}(\boldsymbol{y}_{n}\,|\,\boldsymbol{x}_{n};\theta)$
rprior | rprior | Simulate from the prior distribution $\pi(\theta)$
dprior | dprior | Evaluate the prior density $\pi(\theta)$
rinit | rinit | Simulate from $f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0};\theta)$
timezero | t0 | $t_{0}$
time | times | $t_{1:N}$
obs | data | $\boldsymbol{y}^{*}_{1:N}$
states | — | $\boldsymbol{x}_{0:N}$
coef | params | $\theta$
Table 1: Model component methods for class ‘spatPomp’ objects and their
translation into mathematical notation for SpatPOMP models. For example, the
rprocess method is set using the rprocess argument to the spatPomp constructor
function.
Class ‘spatPomp’ inherits from the class ‘pomp’ defined by the pomp package.
One of the main ways in which spatPomp extends pomp is the addition of unit-
level specifications of the measurement model. This reflects the modeling
assumption that measurements are carried out independently in both space and
time, conditional on the current value of the latent process which is known as
the state of the dynamic system. There are five unit-level functionalities of
class ‘spatPomp’ objects: dunit_measure, runit_measure, eunit_measure,
vunit_measure and munit_measure. Each functionality corresponds to an S4
method. The set of instructions performed by each method are supplied by the
user via an argument to the spatPomp() constructor function of the same name.
(See Table 1 for details).
Only functionalities that are required to run an algorithm of interest need to
be supplied in advance. dunit_measure evaluates the probability density of the
measurement of a spatial unit given its latent state vector, whereas
runit_measure simulates from this conditional distribution. Given the latent
state, eunit_measure and vunit_measure give the expectation and variance of
the measurement, respectively. They are used by the ensemble Kalman filter
(EnKF, Section 3.3) and iterated EnKF (Section 4.2). munit_measure returns a
parameter vector corresponding to given moments (mean and variance), used by
one of the options for a guided particle filter (Section 3.1).
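For a Gaussian unit measurement model $Y_{u,n}\,|\,X_{u,n}=x \sim N(x,\mathrm{sd}^2)$, these moment-based functionalities take a particularly simple closed form. The sketch below (Python, illustrative only; not spatPomp code, and the Gaussian choice is an assumption for the example) shows the correspondence, including how munit_measure inverts vunit_measure while leaving the mean unchanged.

```python
import math

# For Y_{u,n} | X_{u,n} = x ~ N(x, sd^2), the unit mean is x, the unit
# variance is sd^2, and moment matching reduces to solving sd^2 = V.

def eunit(x, sd):
    """eunit_measure: E[Y_{u,n} | X_{u,n} = x]."""
    return x

def vunit(x, sd):
    """vunit_measure: Var[Y_{u,n} | X_{u,n} = x]."""
    return sd ** 2

def munit(x, V, sd):
    """munit_measure: parameter psi with vunit(x, psi) = V,
    leaving eunit(x, psi) = eunit(x, sd) unchanged."""
    return math.sqrt(V)
```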
### 2.2 Initial conditions
Specification of the initial condition
$\boldsymbol{X}_{0}=\boldsymbol{X}(t_{0};\theta)$ of a SpatPOMP model is
similar to that of a POMP model, and is carried out using the rinit argument
of the spatPomp() constructor function. The initial distribution
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0};\theta)$ may sometimes be a known
property of the system but in general it must be inferred. If the transition
density for the dynamic model,
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}\,|\,\boldsymbol{x}_{n-1};\theta)$,
does not depend on time and possesses a unique stationary distribution, it may
be natural to set $f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0};\theta)$ to be
this stationary distribution. When no clear scientifically motivated choice of
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0};\theta)$ exists, one can treat
$\boldsymbol{X}_{0}$ as a component of the parameter set to be estimated. In
this case, $f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0};\theta)$ concentrates at
a point which depends on $\theta$.
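As a concrete instance of the stationary-distribution option, suppose each unit followed an AR(1) transition $X_{u,n} = \rho X_{u,n-1} + \varepsilon$ with $\varepsilon \sim N(0,\sigma^2)$; the stationary law $N(0, \sigma^2/(1-\rho^2))$ is then a natural initializer. A minimal sketch (Python, illustrative; this model and the function name are assumptions, not part of the package):

```python
import numpy as np

def rinit_stationary(U, rho, sigma, rng):
    """Draw X_0 for U units from the stationary law of the AR(1) transition
    X_n = rho * X_{n-1} + eps, eps ~ N(0, sigma^2), i.e. N(0, sigma^2/(1-rho^2))."""
    sd0 = sigma / np.sqrt(1.0 - rho ** 2)
    return rng.normal(0.0, sd0, size=U)

x0 = rinit_stationary(U=5, rho=0.8, sigma=1.0, rng=np.random.default_rng(42))
```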
### 2.3 Covariates
Scientifically, one may be interested in the impact of a vector-valued
covariate process $\\{\mathcal{Z}(t)\\}$ on the latent dynamic system. For
instance, the transition density,
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}$, and the measurement density,
$f_{\boldsymbol{Y}_{n}|\boldsymbol{X}_{n}}$, can depend on this observed
process. The covariate process might be shared between units, or might take
unit-specific values. spatPomp allows modeling and inference for
spatiotemporal processes that depend on an observed covariate process. If such
a covariate exists the user can supply a class ‘data.frame’ object to the
covar argument of the spatPomp() constructor function. This data.frame will
have a column for time, spatial unit, and each of the covariates. If any of
the variables in the covariates data.frame is common among all units the user
must supply the variable names as class ‘character’ vectors to the
shared_covarnames argument of the spatPomp() constructor function. Otherwise,
all covariates are assumed to be unit-specific, meaning that they generally
take on different values for each unit. spatPomp manages the task of
presenting interpolated values of the covariates to the elementary model
functions at the time they are called. An example implementing a SpatPOMP
model with covariates is presented in Section 6.
### 2.4 Constructing the class ‘spatPomp’ object
There are four required arguments for creating a new class ‘spatPomp’ object
using the spatPomp() constructor. The first is a class ‘data.frame’ object
containing observations for each spatial unit at each time. This long-format
data.frame (e.g. a column for time, a column for spatial unit and a column for
a measurement) is supplied to the data argument of the spatPomp() constructor
function. The package does not require data for every unit at every
observation time. Missing data in some or all of the observations for any given
observation times are preserved, which allows the user to describe a
measurement model that handles missingness. In other words, the user can
provide data with NA encoding missing observations while also providing model
components for the measurement model that can describe the mechanism of
missingness. The second and third required arguments are the column names of
the data corresponding to observation times and unit names. These are supplied
to the times and units arguments, respectively. The last required argument is
t0, for which the user supplies the initial time of the dynamics in the time
unit of the measurements. The resulting class ‘spatPomp’ object stores the
unit names as a vector in a slot called unit_names. Internally, only the
positions of the names in the unit_names vector are used to keep track of
units. In other words, the units are labeled 1 through U just as in our
mathematical notation, and these labels can be used to construct measurement
or process models that differ between units. The text labels in unit_names are
re-attached for user-readable outputs like simulation plots and class
‘data.frame’ outputs.
There are more arguments to the spatPomp() constructor that allow us to
specify our SpatPOMP model components. Some of these are used to supply codes
that encode the model components in Table 1 while others are used to provide
information about the roles of the variables in these codes. One of the
arguments that plays the latter role is unit_statenames. This argument expects
a class ‘character’ vector of length $D_{X}$ containing names of the
components of any unit’s latent state at a given time. For example, to
implement a SpatPOMP model studying the population dynamics between frogs and
dragonflies in neighboring spatial units, the user can provide unit_statenames
= c(‘F’,‘D’). The package then expects that unit-level model components (e.g.
dunit_measure) will use ‘F’ and ‘D’ in their instructions. As discussed in the
next subsection, model components are provided via C codes, and arguments like
unit_statenames set up the variable definitions that allow these codes to
compile without errors. Another argument that is often needed to fully specify model
components is the paramnames argument. This argument expects a class
‘character’ vector of length $D_{\theta}$ containing variable names in the
model components that correspond to parameters.
Other name arguments, unit_obsnames and unit_covarnames, are used internally
to keep track of data and covariates that have corresponding values for each
unit. These need not be supplied in the spatPomp() constructor, however, since
the data and covariate data.frame objects provided by the user implicitly
supply them.
### 2.5 Specifying model components using C snippets
The spatPomp package extends the C snippet facility in pomp which allows users
to specify the model components in Table 1 using inline C code. The package is
therefore able to perform computationally expensive calculations in C while
outputting results in higher-level R objects. The code below illustrates the
creation of a C snippet unit measurement density using the Csnippet()
function.
R> example_dunit_measure <- Csnippet("
+ // define measurement standard deviation for first unit
+ double sd0 = sd*1.5;
+ // Quantities u, Y, X, sd and lik are available in the context
+ // in which this snippet is executed.
+ if (u==0) lik = dnorm(Y, X, sd0, 0);
+ else lik = dnorm(Y, X, sd, 0);
+ "
+ )
The example C snippet above gets executed when there is a need to evaluate the
unit measurement density at the data at an observation time conditional on the
latent states at that time. Y represents the data for a unit at an observation
time; X represents the latent state at the corresponding unit and time, sd
represents a standard deviation parameter in the unit measurement model for
all units except the first unit, for which the corresponding parameter is sd0;
lik is a pre-defined variable used to store the evaluated density or
likelihood. The example C snippet can now be provided to the dunit_measure
argument of the spatPomp() constructor along with a paramnames argument vector
which includes ‘sd’, a unit_statenames argument vector which includes ‘X’ and
a data argument data.frame with an observation column called Y. Since sd0 is
defined in the C snippet itself, it should not be included in paramnames.
The example C snippet above also illustrates a key difference in the practical
use of the five unit-level model components in Table 1 compared to the
components inherited from pomp that access the entire state and measurement
vectors. All the filtering and inference algorithms in spatPomp assume the
conditional independence of the spatial unit measurements given the state
corresponding to the unit:
$f_{\boldsymbol{Y}_{n}|\boldsymbol{X}_{n}}(\boldsymbol{y}_{n}\,|\,\boldsymbol{x}_{n};\theta)=\prod_{u=1}^{U}f_{Y_{u,n}|X_{u,n}}(y_{u,n}\,|\,x_{u,n};\theta).$
When we specify the unit-level model components we can thus assume that the
segments of the measurement and state vectors for the current unit are passed
in during the execution of the unit-level model component. This allows the
user to declare the unit-level model components by using the unit_statenames
(X in the above example) and unit_obsnames (Y in the above example) without
having to specify the names of the relevant components of the full latent
state at a given time (which would be c(‘X1’, ‘X2’, ‘X3’) for three coupled
latent dynamics where each has one latent state X, for instance) and
observation at a given time (which would be c(‘Y1’, ‘Y2’, ‘Y3’) for three
coupled latent dynamics that each produce a partial observation Y, for
instance).
The variable u, which takes a value between 0 and U-1, is passed in to each
unit-level model component. This allows the user to specify heterogeneity in
the unit-level model components across units. Since C uses zero-based
numbering, a user interested in introducing model heterogeneity for a unit
must find the position of the unit in the unit_names slot and subtract one to
get the corresponding value of u. The user can then use standard conditioning
logic to specify the heterogeneity. For instance, when the
example_dunit_measure C snippet above is executed for spatial unit 1, the u is
passed in as 0, Y will have the value of the measurement from unit 1 (or Y1)
and X will have the value of the latent state in unit 1 (or X1). This example
C snippet is coded so that the first unit has a measurement error inflated
relative to that of the other units. The last argument to dnorm specifies
whether the desired output should be on log scale; the same convention is
followed by all base R probability distribution functions.
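The logic of example_dunit_measure can be restated in a few lines of plain Python, purely to make the conditioning explicit (the package performs the actual computation in compiled C, and the give_log argument here mirrors the last argument of dnorm):

```python
import math

def dunit_measure(y, x, sd, u, give_log=False):
    """Gaussian unit measurement density; unit u == 0 (the first unit) uses an
    inflated standard deviation sd0 = 1.5 * sd, as in example_dunit_measure."""
    s = 1.5 * sd if u == 0 else sd
    log_lik = -0.5 * math.log(2 * math.pi * s * s) - 0.5 * ((y - x) / s) ** 2
    return log_lik if give_log else math.exp(log_lik)
```

Calling dunit_measure(y, x, sd, u=0) yields a wider, flatter density than for any other unit, which is exactly the heterogeneity encoded by the conditioning on u in the C snippet.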
Not all of the model components need to be supplied for any specific
computation. In particular, plug-and-play methodology by definition never uses
dprocess. An empty dprocess slot in a class ‘spatPomp’ object is therefore
acceptable unless a non-plug-and-play algorithm is attempted. In the package,
the data and corresponding measurement times and units are considered
necessary parts of a class ‘spatPomp’ object whilst specific values of the
parameters and latent states are not.
### 2.6 Examples included in the package
The construction of a new class ‘spatPomp’ object is illustrated in Section 6.
To provide some examples of class ‘spatPomp’ objects, the spatPomp package
includes functions bm(), lorenz() and measles(). These functions create class
‘spatPomp’ objects with user-specified dimensions for a correlated Brownian
motion model, the Lorenz-96 atmospheric model (Lorenz, 1996), and a
spatiotemporal measles SEIR model, respectively. These examples can be used to
better understand the components of class ‘spatPomp’ objects as well as to
test filtering and inference algorithms for future development.
For instance, we can create four correlated Brownian motions each with ten
time steps as follows. The correlation structure and other model details are
discussed in Section 5.
R> U <- 4; N <- 10
R> bm4 <- bm(U = U, N = N)
The above code results in the creation of the bm4 object of class ‘spatPomp’
with simulated data. This is done by bringing together pre-specified C
snippets and adapting them to a four-dimensional process. One can inspect many
aspects of bm4, some of which are listed in Table 1, by using the
corresponding accessor functions:
R> obs(bm4)
R> unit_names(bm4)
R> states(bm4)
R> as.data.frame(bm4)
R> plot(bm4)
R> timezero(bm4)
R> time(bm4)
R> coef(bm4)
R> rinit(bm4)
The measles() example is described in Section 6 to demonstrate user-specified
model construction.
## 3 Simulation and filtering: Elementary SpatPOMP methods
Once the user has encoded one or more SpatPOMP models as objects of class
‘spatPomp’, the package provides a variety of algorithms to assist with data
analysis. Methods can be divided into two categories: _elementary_ operations
that investigate a model with a fixed set of parameter values, and _inference_
operations that estimate parameters. This section considers the first of these
categories.
A basic operation on a SpatPOMP model is to simulate a stochastic realization
of the latent process and the resulting data. This requires specifications of
rprocess and rmeasure. Applying the simulate function to an object of class
‘spatPomp’ by default returns another object of class ‘spatPomp’, within which
the data $\boldsymbol{y}^{*}_{1:N}$ have been replaced by a stochastic
realization of $\boldsymbol{Y}_{1:N}$. The corresponding realization of
$\boldsymbol{X}_{0:N}$ is accessible via the states slot, and the params slot
is filled with the value of $\theta$ used in the simulation. Optionally,
simulate can be made to return a class ‘data.frame’ object by supplying the
argument format=‘data.frame’ in the call to simulate. Section 6 provides an
example of constructing a class ‘spatPomp’ object and simulating from it.
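The simulate operation can be sketched generically: draw $\boldsymbol{X}_0$ from rinit, propagate with rprocess, then draw $\boldsymbol{Y}_n$ from rmeasure. Below is a minimal illustration (Python, toy model, not the package's implementation) for $U$ Brownian-motion units observed with Gaussian error; the nearest-neighbour correlation structure and all constants are assumptions made for this example.

```python
import numpy as np

def simulate_spatpomp(U, N, dt=1.0, rho=0.4, sd_meas=1.0, seed=0):
    """Return latent paths X, shape (N+1, U), and data Y, shape (N, U), for
    U Brownian-motion units with nearest-neighbour correlated increments,
    observed with independent Gaussian measurement error."""
    rng = np.random.default_rng(seed)
    cov = np.eye(U)                      # increment covariance: 1 on the
    for u in range(U - 1):               # diagonal, rho between neighbours
        cov[u, u + 1] = cov[u + 1, u] = rho
    X = np.zeros((N + 1, U))             # rinit: start every unit at 0
    for n in range(1, N + 1):
        X[n] = X[n - 1] + rng.multivariate_normal(np.zeros(U), dt * cov)  # rprocess
    Y = X[1:] + rng.normal(scale=sd_meas, size=(N, U))                    # rmeasure
    return X, Y

X, Y = simulate_spatpomp(U=4, N=10)
```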
Evaluating the conditional distribution of latent process variables given
currently available data is an elementary operation called _filtering_.
Filtering also provides an evaluation of the likelihood function for a fixed
parameter vector. The curse of dimensionality associated with spatiotemporal
models can make filtering for SpatPOMP models computationally challenging. A
widely used time-series filtering technique is the particle filter, available
as pfilter in the pomp package. However, most particle filter algorithms scale
poorly with dimension (Bengtsson _et al._ , 2008; Snyder _et al._ , 2015).
Thus, in the spatiotemporal context, successful particle filtering requires
state-of-the-art algorithms. Currently the spatPomp package contains
implementations of five such algorithms, four of which are described below.
Spatiotemporal data analysis using mechanistic models is a nascent topic, and
future methodological developments are anticipated. Since the mission of
spatPomp is to be a home for such analyses, the package developers welcome
contributions and collaborations to further expand the functionality of
spatPomp. In the remainder of this section, we describe and discuss some of
the filtering methods currently implemented in the package.
### 3.1 The guided intermediate resampling filter (GIRF)
The guided intermediate resampling filter (GIRF, Park and Ionides, 2020) is an
extension of the auxiliary particle filter (APF, Pitt and Shephard, 1999). GIRF
is appropriate for moderately high-dimensional SpatPOMP models with a
continuous-time latent process. All particle filters compute importance
weights for proposed particles and carry out resampling to focus computational
effort on particles consistent with the data (see reviews by Arulampalam _et
al._ , 2002; Doucet and Johansen, 2011; Kantas _et al._ , 2015). In the
context of pomp, the pfilter function is discussed by King _et al._ (2016).
GIRF combines two techniques for improved scaling of particle filters: the use
of a guide function and intermediate resampling.
The guide function steers particles using importance weights that anticipate
upcoming observations. Future measurements are considered up to a lookahead
horizon, $L$. APF corresponds to a lookahead horizon $L=2$, and a basic
particle filter has $L=1$. Values $L\leq 3$ are typical for GIRF.
Intermediate resampling breaks each observation interval into $S$ sub-
intervals, and carries out reweighting and resampling on each sub-interval.
Perhaps surprisingly, intermediate resampling can facilitate some otherwise
intractable importance sampling problems (Del Moral and Murray, 2015). APF and
the basic particle filter correspond to $S=1$, whereas choosing $S=U$ gives
favorable scaling properties (Park and Ionides, 2020).
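As a point of reference for these scaling considerations, the following sketch shows the basic particle filter (the $L=1$, $S=1$ special case discussed above) on a hypothetical one-dimensional linear-Gaussian model; it is written in Python for self-containment and is not part of pomp or spatPomp. The exact Kalman filter likelihood is included for comparison; all names and model choices are illustrative assumptions.

```python
import numpy as np

def bootstrap_pf(y, J, phi=0.8, sig_x=1.0, sig_y=1.0, seed=1):
    """Basic bootstrap particle filter (the L = 1, S = 1 special case) for the
    hypothetical toy model X_n = phi * X_{n-1} + eps_n, Y_n = X_n + eta_n."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sig_x, size=J)         # draw from the initial distribution
    loglik = 0.0
    for yn in y:
        x = phi * x + rng.normal(0.0, sig_x, size=J)   # simulate the process model
        logw = -0.5 * ((yn - x) / sig_y) ** 2 - np.log(sig_y * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                 # conditional log likelihood
        x = rng.choice(x, size=J, p=w / w.sum())       # multinomial resampling
    return loglik

def kalman_loglik(y, phi=0.8, sig_x=1.0, sig_y=1.0):
    """Exact log likelihood of the same linear-Gaussian model, for comparison."""
    m, P, ll = 0.0, sig_x ** 2, 0.0
    for yn in y:
        m, P = phi * m, phi ** 2 * P + sig_x ** 2      # one-step prediction
        S = P + sig_y ** 2                             # forecast variance
        ll += -0.5 * ((yn - m) ** 2 / S + np.log(2 * np.pi * S))
        K = P / S                                      # Kalman gain
        m, P = m + K * (yn - m), (1 - K) * P           # measurement update
    return ll
```

For moderate $J$ the particle estimate agrees closely with the exact likelihood in one dimension; it is this agreement that degrades rapidly as the dimension $U$ grows.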
input: simulator for
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}{\,|\,}\boldsymbol{x}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$ and
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; evaluator for
$f_{{Y}_{u,n}|{X}_{u,n}}({y}_{u,n}{\,|\,}{x}_{u,n}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$, and $\boldsymbol{\mu}(\boldsymbol{x},s,t{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$; data, $\boldsymbol{y}^{*}_{1:N}$;
parameter, $\theta$; number of particles, $J$; number of guide simulations,
$K$; number of intermediate timesteps, $S$; number of lookahead lags, $L$.
1 initialize: simulate
$\boldsymbol{X}_{0,0}^{F,j}\sim{f}_{\boldsymbol{X}_{0}}({\,\cdot\,}{\hskip
1.42262pt;\hskip 1.42262pt}{\theta})$ and set $g^{F,j}_{0,0}=1$ for $j$ in
${1}\\!:\\!{J}$
2 for _$n\,\,\mathrm{in}\,\,{0}\\!:\\!{N-1}$_ do
3 sequence of guide forecast times, $\mathbb{L}={(n+1)}\\!:\\!{\min(n+L,N)}$
4 guide simulations,
$\boldsymbol{X}_{\mathbb{L}}^{G,j,k}\sim{f}_{\boldsymbol{X}_{\mathbb{L}}|\boldsymbol{X}_{n}}\big{(}{\,\cdot\,}|\boldsymbol{X}_{n,0}^{F,j}{\hskip
1.42262pt;\hskip 1.42262pt}{\theta}\big{)}$ for $j$ in ${1}\\!:\\!{J}$, $k$ in
${1}\\!:\\!{K}$
5 guide residuals,
$\boldsymbol{\epsilon}^{j,k}_{0,\ell}=\boldsymbol{X}_{\ell}^{G,j,k}-\boldsymbol{\mu}\big{(}\boldsymbol{X}^{F,j}_{n},t_{n},t_{\ell}{\hskip
1.42262pt;\hskip 1.42262pt}{\theta}\big{)}$ for $j$ in ${1}\\!:\\!{J}$, $k$ in
${1}\\!:\\!{K}$, $\ell$ in $\mathbb{L}$
6 for _$s\,\,\mathrm{in}\,\,{1}\\!:\\!{S}$_ do
7 prediction simulations,
${\boldsymbol{X}}_{n,s}^{P,j}\sim{f}_{{\boldsymbol{X}_{n,s}}|{\boldsymbol{X}_{n,s-1}}}\big{(}{\,\cdot\,}|{\boldsymbol{X}^{F,j}_{n,s-1}}{\hskip
1.42262pt;\hskip 1.42262pt}{\theta}\big{)}$ for $j$ in ${1}\\!:\\!{J}$
8 deterministic trajectory,
$\boldsymbol{\mu}^{P,j}_{n,s,\ell}=\boldsymbol{\mu}\big{(}\boldsymbol{X}^{P,j}_{n,s},t_{n,s},t_{\ell}{\hskip
1.42262pt;\hskip 1.42262pt}\theta\big{)}$ for $j$ in ${1}\\!:\\!{J}$, $\ell$
in $\mathbb{L}$
9 pseudo guide simulations,
$\hat{\boldsymbol{X}}^{j,k}_{n,s,\ell}=\boldsymbol{\mu}^{P,j}_{n,s,\ell}+\boldsymbol{\epsilon}^{j,k}_{s-1,\ell}-\boldsymbol{\epsilon}^{j,k}_{s-1,n+1}+{\textstyle\sqrt{\frac{t_{n+1}-t_{n,s}}{t_{n+1}-t_{n,0}}}}\,\boldsymbol{\epsilon}^{j,k}_{s-1,n+1}$
for $j$ in ${1}\\!:\\!{J}$, $k$ in ${1}\\!:\\!{K}$, $\ell$ in $\mathbb{L}$
10 discount factor,
$\eta_{n,s,\ell}=1-(t_{n+\ell}-t_{n,s})/\\{(t_{n+\ell}-t_{\max(n+\ell-L,0)})\cdot(1+\mathbbm{1}_{L=1})\\}$
11 $\displaystyle
g^{P,j}_{n,s}=\prod_{\ell\,\mathrm{in}\,\mathbb{L}}\,\prod_{u=1}^{U}\left[\frac{1}{K}\sum_{k=1}^{K}f_{Y_{u,\ell}|X_{u,\ell}}\Big{(}y^{*}_{u,\ell}{\,|\,}\hat{X}^{j,k}_{u,n,s,\ell}{\hskip
1.42262pt;\hskip 1.42262pt}\theta\Big{)}\right]^{\eta_{n,s,\ell}}$ for $j$ in
${1}\\!:\\!{J}$
12 for $j$ in ${1}\\!:\\!{J}$,
$w^{j}_{n,s}=\left\\{\begin{array}[]{ll}f_{\boldsymbol{Y}_{n}|\boldsymbol{X}_{n}}\big{(}\boldsymbol{y}_{n}{\,|\,}\boldsymbol{X}^{F,j}_{n,s-1}{\hskip
1.42262pt;\hskip
1.42262pt}\theta\big{)}\,\,g^{P,j}_{n,s}\Big{/}g^{F,j}_{n,s-1}&\mbox{if $s=1$
and $n\neq 0$}\\\
g^{P,j}_{n,s}\Big{/}g^{F,j}_{n,s-1}&\mbox{else}\end{array}\right.$
13 log likelihood component,
$c_{n,s}=\log\Big{(}J^{-1}\,\sum_{q=1}^{J}\\!w^{q}_{n,s}\Big{)}$
14 normalized weights,
$\tilde{w}^{j}_{n,s}=w^{j}_{n,s}\Big{/}\sum_{q=1}^{J}w^{q}_{n,s}$ for $j$ in
${1}\\!:\\!{J}$
15 select resample indices, $r_{1:J}$ with
$\mathbb{P}\left[r_{j}=q\right]=\tilde{w}^{q}_{n,s}$ for $j$ in
${1}\\!:\\!{J}$
16 $\boldsymbol{X}_{n,s}^{F,j}=\boldsymbol{X}_{n,s}^{P,r_{j}}\,$,
$\;g^{F,j}_{n,s}=g^{P,r_{j}}_{n,s}\,$,
$\;\boldsymbol{\epsilon}^{j,k}_{s,\ell}=\boldsymbol{\epsilon}^{r_{j},k}_{s-1,\ell}$
for $j$ in ${1}\\!:\\!{J}$, $k$ in ${1}\\!:\\!{K}$, $\ell$ in $\mathbb{L}$
17
18 end
19 set $\boldsymbol{X}^{F,j}_{n+1,0}=\boldsymbol{X}^{F,j}_{n,S}$ and
$g^{F}_{n+1,0,j}=g^{F}_{n,S,j}$ for $j$ in ${1}\\!:\\!{J}$
20
21 end
output: log likelihood,
$\lambda^{\mbox{\tiny{GIRF}}}=\sum_{n=0}^{N-1}\sum_{s=1}^{S}c_{n,s}$, and
filter particles, $\boldsymbol{X}^{F,1:J}_{N,0}$
complexity: $\mathcal{O}\big{(}{JLUN(K+S)}\big{)}$
Algorithm 1 girf(P, Np = $J\\!$, Ninter = $S\\!$, Nguide = $K\\!$, Lookahead =
$L$), using notation from Table 1 where P is a class ‘spatPomp’ object with
definitions for rprocess, dunit_measure, rinit, skeleton, obs and coef.
In Algorithm 1 the $F$, $G$ and $P$ superscripts indicate filtered, guide and
proposal particles, respectively. The goal for the pseudocode in Algorithm 1,
and subsequent algorithms in this paper, is a succinct description of the
logic of the procedure rather than a complete recipe for efficient coding. The
code in spatPomp takes advantage of memory overwriting and vectorization
opportunities that are not represented in this pseudocode.
We call the guide in Algorithm 1 a bootstrap guide function since it is based
on resampling the Monte Carlo residuals calculated in step 5. Another option
for the guide function in girf is the simulated moment guide function developed
by Park and Ionides (2020), which uses the eunit_measure, vunit_measure and
munit_measure model components together with simulations to calculate the
guide. The expectation of Monte Carlo likelihood estimates does not depend on
the guide function, so an inexact guide approximation may lead to a loss of
numerical efficiency but does not affect the consistency of the procedure.
The intermediate resampling is represented in Algorithm 1 by the loop over
$s=1,\dots,S$ in step 6. The intermediate times are defined by
$t_{n,s}=t_{n}+(t_{n+1}-t_{n})\,\cdot s\big{/}S$ and we write
$\boldsymbol{X}_{n,s}=\boldsymbol{X}(t_{n,s})$. The resampling weights (step
12) are defined in terms of guide function evaluations $g^{P,j}_{n,s}$. The
only requirement for the guide function to achieve unbiased likelihood
estimates is that it satisfies $g^{F,j}_{0,0}=1$ and
$g^{P,j}_{N-1,S}=f_{\boldsymbol{Y}_{N}|\boldsymbol{X}_{N}}\big{(}\boldsymbol{y}^{*}_{N}{\,|\,}\boldsymbol{X}^{F,j}_{N-1,S}{\hskip
1.42262pt;\hskip 1.42262pt}\theta\big{)}$, which is the case in Algorithm 1.
The particular guide function calculated in step 11 evaluates particles using a
prediction centered on a function
$\boldsymbol{\mu}(\boldsymbol{x},s,t{\hskip 1.42262pt;\hskip
1.42262pt}\theta)\approx\mathbb{E}[\boldsymbol{X}(t){\,|\,}\boldsymbol{X}(s)=\boldsymbol{x}{\hskip
1.42262pt;\hskip 1.42262pt}\theta].$
We call $\boldsymbol{\mu}(\boldsymbol{x},s,t{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$ a deterministic trajectory associated with
$\boldsymbol{X}(t)$. For a continuous-time SpatPOMP model, this trajectory is
typically the solution to a system of differential equations that define a
vector field called the skeleton (Tong, 1990). In spatPomp, the argument to
skeleton is a map or vector field which is numerically solved to obtain
$\boldsymbol{\mu}(\boldsymbol{x},s,t{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$. It can be specified using compiled C code via a C snippet
argument to spatPomp, as demonstrated in Section 6. The forecast spread around
this point prediction is given by the simulated bootstrap residuals
constructed in step 5.
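The timing quantities above can be checked numerically. The following Python fragment (illustrative only, not part of the package; we treat $\ell$ as an absolute observation index, as suggested by the definition of $\mathbb{L}$ in Algorithm 1) computes the intermediate time grid $t_{n,s}$ and the discount factor $\eta_{n,s,\ell}$, confirming that the grid ends at $t_{n+1}$ and that $\eta=1$ at the final intermediate step, consistent with the unbiasedness requirement discussed above.

```python
import numpy as np

def intermediate_times(t, n, S):
    """Grid t_{n,s} = t_n + (t_{n+1} - t_n) * s / S for s = 0, ..., S."""
    return t[n] + (t[n + 1] - t[n]) * np.arange(S + 1) / S

def discount_factor(t, n, s, ell, L, S):
    """Discount factor eta_{n,s,ell}; ell is taken as an absolute observation
    index in (n+1):min(n+L, N) (an interpretive assumption for this sketch)."""
    t_ns = t[n] + (t[n + 1] - t[n]) * s / S
    return 1.0 - (t[ell] - t_ns) / ((t[ell] - t[max(ell - L, 0)]) * (1 + (L == 1)))
```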
### 3.2 Adapted bagged filter (ABF)
The adapted bagged filter (Ionides _et al._ , 2021) combines many independent
particle filters. This is called _bagging_ (_b_ootstrap _agg_regat_ing_)
since a basic particle filter is also called a bootstrap filter. The adapted
distribution is the conditional distribution of the latent process given its
current value and the subsequent observation (Johansen and Doucet, 2008). In
the adapted bagged filter, each bootstrap replicate makes a Monte Carlo
approximation to a draw from the adapted distribution. Thus, in the pseudocode
of Algorithm 2, $\boldsymbol{X}^{A,i}_{0:N}$ is a Monte Carlo sample targeting
the adapted sampling distribution,
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)\prod_{n=1}^{N}f_{\boldsymbol{X}_{n}|\boldsymbol{Y}_{n},\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}{\,|\,}\boldsymbol{y}^{*}_{n},\boldsymbol{x}_{n-1}\,;\theta).$
(1)
Each adapted simulation replicate is constructed by importance sampling using
proposal particles $\\{\boldsymbol{X}^{P,i,j}_{n}\\}$. The ensemble of adapted
simulation replicates is then weighted using data in a spatiotemporal
neighborhood of each observation to obtain a locally combined Monte Carlo
sample targeting the filter distribution, with some approximation error due to
the finite spatiotemporal neighborhood used. This local aggregation of the
bootstrap replicates also provides an evaluation of the likelihood function.
On a given bootstrap replicate $i$ at a given time $n$, all the adapted
proposal particles $\boldsymbol{X}^{P,i,1:J}_{n}$ in step 3 are necessarily
close to each other in state space because they share the parent particle
$\boldsymbol{X}^{A,i}_{n-1}$. This reduces imbalance in the adapted weights in
step 5, which helps to battle the curse of dimensionality that afflicts
importance sampling. The combination of the replicates for the filter estimate
in step 9 is carried out using only weights in a spatiotemporal neighborhood,
thus avoiding the curse of dimensionality. For any point $(u,n)$, the
neighborhood $B_{u,n}$ should be specified as a subset of
$A_{u,n}=\\{(\tilde{u},\tilde{n}):\mbox{$\tilde{n}<n$ or ($\tilde{u}<u$ and
$\tilde{n}=n$)}\\}$. If the model has a mixing property, meaning that
conditioning on the observations in the neighborhood $B_{u,n}$ is negligibly
different from conditioning on the full set $A_{u,n}$, then the approximation
involved in this localization is adequate.
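The constraint that $B_{u,n}$ be a subset of $A_{u,n}$ is easy to state programmatically. A minimal Python sketch follows (illustrative only, with 1-based unit and time indices; not part of the package), using the small neighborhood $B_{u,n}=\{(u-1,n),(u,n-1)\}$ that also appears as an R example below.

```python
def full_history(u, n, U):
    """A_{u,n} = {(u~, n~) : n~ < n, or (u~ < u and n~ = n)} for units 1..U
    and observation times 1..n."""
    return {(uu, nn)
            for nn in range(1, n + 1)
            for uu in range(1, U + 1)
            if nn < n or uu < u}

def example_nbhd(u, n):
    """The small neighborhood B_{u,n} = {(u-1, n), (u, n-1)}, dropping points
    that fall outside the observation window."""
    return {p for p in [(u - 1, n), (u, n - 1)] if p[0] >= 1 and p[1] >= 1}
```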
input: simulator for
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}{\,|\,}\boldsymbol{x}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$ and
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; evaluator for
$f_{{Y}_{u,n}|{X}_{u,n}}({y}_{u,n}{\,|\,}{x}_{u,n}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; data, $\boldsymbol{y}^{*}_{1:N}$; parameter, $\theta$;
number of particles per replicate, $J$; number of replicates, $\mathcal{I}$;
neighborhood structure, $B_{u,n}$
1 initialize adapted simulation, $\boldsymbol{X}^{A,i}_{0}\sim
f_{\boldsymbol{X}_{0}}(\cdot{\hskip 1.42262pt;\hskip 1.42262pt}\theta)$ for
$i$ in ${1}\\!:\\!{\mathcal{I}}$
2 for _$n\ \mathrm{in}\ {1}\\!:\\!{N}$_ do
3 proposals, $\boldsymbol{X}_{n}^{P,i,j}\sim
f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}\big{(}{\,\cdot\,}{\,|\,}\boldsymbol{X}^{A,i}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\theta\big{)}$ for $i$ in
${1}\\!:\\!{\mathcal{I}}$, $j$ in ${1}\\!:\\!{J}$
4
$w_{u,n}^{i,j}=f_{Y_{u,n}|X_{u,n}}\big{(}y^{*}_{u,n}{\,|\,}X^{P,i,j}_{u,n}{\hskip
1.42262pt;\hskip 1.42262pt}\theta\big{)}$ for $u$ in ${1}\\!:\\!{U}$, $i$ in
${1}\\!:\\!{\mathcal{I}}$, $j$ in ${1}\\!:\\!{J}$
5 adapted resampling weights, $w^{A,i,j}_{n}=\prod_{u=1}^{U}w^{i,j}_{u,n}$ for
$i$ in ${1}\\!:\\!{\mathcal{I}}$, $j$ in ${1}\\!:\\!{J}$
6 set $\boldsymbol{X}^{A,i}_{n}=\boldsymbol{X}^{P,i,j}_{n}$ with probability
$w^{A,i,j}_{n}\left(\sum_{q=1}^{J}w^{A,i,q}_{n}\right)^{-1}$ for $i$ in
${1}\\!:\\!{\mathcal{I}}$
7
$w^{P,i,j}_{u,n}=\displaystyle\prod_{\tilde{n}=1}^{n-1}\left[\frac{1}{J}\sum_{q=1}^{J}\hskip
5.69054pt\prod_{(\tilde{u},\tilde{n})\in
B_{u,n}}w_{\tilde{u},\tilde{n}}^{i,q}\right]\prod_{(\tilde{u},n)\in
B_{u,n}}w_{\tilde{u},n}^{i,j}$ for $u$ in ${1}\\!:\\!{U}$, $i$ in
${1}\\!:\\!{\mathcal{I}}$, $j$ in ${1}\\!:\\!{J}$
8 end
9 filter weights, $\displaystyle
w^{F,i,j}_{u,n}=\frac{w_{u,n}^{i,j}\,\,w^{P,i,j}_{u,n}}{\sum_{p=1}^{\mathcal{I}}\sum_{q=1}^{J}w^{P,p,q}_{u,n}}$
for $u$ in ${1}\\!:\\!{U}$, $n$ in ${1}\\!:\\!{N}$, $i$ in
${1}\\!:\\!{\mathcal{I}}$, $j$ in ${1}\\!:\\!{J}$
10 conditional log likelihood,
${\lambda}_{u,n}=\log\left(\sum_{i=1}^{\mathcal{I}}\sum_{j=1}^{J}w^{F,i,j}_{u,n}\right)$
for $u$ in ${1}\\!:\\!{U}$, $n$ in ${1}\\!:\\!{N}$
11 set $X^{F,j}_{u,n}={X}^{P,i,k}_{u,n}$ with probability
$w^{F,i,k}_{u,n}\,e^{-{\lambda}_{u,n}}$ for $u$ in ${1}\\!:\\!{U}$, $n$ in
${1}\\!:\\!{N}$, $j$ in ${1}\\!:\\!{J}$
output: filter particles, $\boldsymbol{X}^{F,1:J}_{n}$, for $n$ in
${1}\\!:\\!{N}$; log likelihood,
$\lambda^{\mbox{\tiny{ABF}}}=\sum_{n=1}^{N}\sum_{u=1}^{U}\lambda_{u,n}$
complexity: $\mathcal{O}(\mathcal{I}JUN)$
Algorithm 2 abf(P, replicates = $\mathcal{I}$, Np = $J$, nbhd=$B_{u,n}$),
using notation from Table 1 where P is a class ‘spatPomp’ object with
definitions for rprocess, dunit_measure, rinit, obs and coef.
Steps 3 through 7 do not involve interaction between replicates and therefore
iteration over $i$ can be carried out in parallel. If a parallel backend has
been set up by the user, the abf method will parallelize computations over the
replicates using multiple cores. The user can register a parallel backend
using the doParallel package (Wallig and Weston, 2020, 2019) prior to calling
abf.
R> library("doParallel")
R> registerDoParallel(3) # Parallelize over 3 cores
The neighborhood is supplied via the nbhd argument to abf as a function which
takes a point in space-time, $(u,n)$, and returns a list of points in space-
time which correspond to $B_{u,n}$. An example with
$B_{u,n}=\\{(u-1,n),(u,n-1)\\}$ follows.
R> example_nbhd <- function(object, unit, time){
+ nbhd_list <- list()
+ if(time>1) nbhd_list <- c(nbhd_list, list(c(unit, time-1)))
+ if(unit>1) nbhd_list <- c(nbhd_list, list(c(unit-1, time)))
+ return(nbhd_list)
+ }
ABF can be combined with the guided intermediate resampling technique used by
GIRF to give an algorithm called ABF-IR (Ionides _et al._ , 2021) implemented
as abfir.
### 3.3 The ensemble Kalman filter (EnKF)
input: simulator for
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}{\,|\,}\boldsymbol{x}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$ and
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; evaluator for $\mathrm{e}_{u}(X_{u,n},\theta)$ and
$\mathrm{v}_{u}(X_{u,n},\theta)$; parameter, $\theta$; data,
$\boldsymbol{y}^{*}_{1:N}$; number of particles, $J$.
1 initialize filter particles,
$\boldsymbol{X}_{0}^{F,j}\sim{f}_{\boldsymbol{X}_{0}}\left({\,\cdot\,}{\hskip
1.42262pt;\hskip 1.42262pt}{\theta}\right)$ for $j$ in ${1}\\!:\\!{J}$
2 for _$n\ \mathrm{in}\ {1}\\!:\\!{N}$_ do
3 prediction ensemble,
$\boldsymbol{X}_{n}^{P,j}\sim{f}_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}\big{(}{\,\cdot\,}|\boldsymbol{X}_{n-1}^{F,j};\theta\big{)}$
for $j$ in ${1}\\!:\\!{J}$
4 centered prediction ensemble,
$\tilde{\boldsymbol{X}}_{n}^{P,j}=\boldsymbol{X}_{n}^{P,j}-\frac{1}{J}\sum_{q=1}^{J}\boldsymbol{X}_{n}^{P,q}$
for $j$ in ${1}\\!:\\!{J}$
5 forecast ensemble,
$\hat{Y}^{j}_{u,n}=\mathrm{e}_{u}(X_{u,n}^{P,j},\theta)$ for $u$ in
${1}\\!:\\!{U}$, $j$ in ${1}\\!:\\!{J}$
6 forecast mean,
$\overline{\boldsymbol{Y}}_{\\!n}=\frac{1}{J}\sum_{j=1}^{J}\boldsymbol{\hat{Y}}^{j}_{\\!n}$
7 centered forecast ensemble,
$\boldsymbol{\tilde{Y}}^{j}_{n}=\boldsymbol{\hat{Y}}^{j}_{\\!n}-\overline{\boldsymbol{Y}}_{\\!n}$
for $j$ in ${1}\\!:\\!{J}$
8 forecast measurement variance,
$R_{u,\tilde{u}}=\mathbbm{1}_{u,\tilde{u}}\,\frac{1}{J}\sum_{j=1}^{J}\mathrm{v}_{u}\big{(}\boldsymbol{X}_{u,n}^{P,j},\theta\big{)}$
for $u,\tilde{u}$ in ${1}\\!:\\!{U}$
9 forecast estimated covariance,
$\Sigma_{Y}=\frac{1}{J-1}\sum_{j=1}^{J}(\boldsymbol{\tilde{Y}}^{j}_{\\!n})(\boldsymbol{\tilde{Y}}^{j}_{\\!n})^{T}+R$
10 prediction and forecast sample covariance,
$\Sigma_{XY}=\frac{1}{J-1}\sum_{j=1}^{J}(\tilde{\boldsymbol{X}}_{n}^{P,j})(\boldsymbol{\tilde{Y}}^{j}_{\\!n})^{T}$
11 Kalman gain, $K=\Sigma_{XY}\Sigma_{Y}^{-1}$
12 artificial measurement noise,
$\boldsymbol{\epsilon}_{n}^{j}\sim{\mathrm{Normal}}(\boldsymbol{0},R)$ for $j$
in ${1}\\!:\\!{J}$
13 errors,
$\boldsymbol{r}_{n}^{j}=\boldsymbol{y}^{*}_{n}-\boldsymbol{\hat{Y}}^{j}_{\\!n}$
for $j$ in ${1}\\!:\\!{J}$
14 filter update,
$\boldsymbol{X}_{n}^{F,j}=\boldsymbol{X}_{n}^{P,j}+K\big{(}\boldsymbol{r}_{n}^{j}+\boldsymbol{\epsilon}_{n}^{j}\big{)}$
for $j$ in ${1}\\!:\\!{J}$
15 $\lambda_{n}=\log\big{[}\phi\big{(}\boldsymbol{y}^{*}_{n}{\hskip
1.42262pt;\hskip
1.42262pt}\overline{\boldsymbol{Y}}_{\\!n},\Sigma_{Y}\big{)}\big{]}$ where
$\phi(\cdot{\hskip 1.42262pt;\hskip 1.42262pt}\boldsymbol{\mu},\Sigma)$ is the
${\mathrm{Normal}}(\boldsymbol{\mu},\Sigma)$ density.
16 end
output: filter sample, $\boldsymbol{X}^{F,1:J}_{n}$, for $n$ in
${1}\\!:\\!{N}$; log likelihood estimate,
$\lambda^{\mbox{\tiny{EnKF}}}=\sum_{n=1}^{N}\lambda_{n}$
complexity: $\mathcal{O}(JUN)$
Algorithm 3 enkf(P, Np = $J$), using notation from Table 1 where P is a class
‘spatPomp’ object with definitions for rprocess, eunit_measure, vunit_measure,
rinit, coef and obs.
Algorithm 3 is an implementation of the ensemble Kalman filter (EnKF)
(Evensen, 1994; Evensen and van Leeuwen, 1996). The EnKF makes a Gaussian
approximation to assimilate Monte Carlo simulations from a state prediction
model with data observed in the corresponding time step. The EnKF has two
steps: the prediction step and the filtering step; the latter is known as the
“analysis” step in
the geophysical data-assimilation literature. The prediction step advances
Monte Carlo particles to the next observation time step by using simulations
from a possibly non-linear transition model. In the filtering step, the sample
estimate of the state covariance matrix and the linear measurement model are
used to make a Gaussian approximation to the conditional distribution of the
state given the data.
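The analysis step can be sketched compactly. The following Python fragment (illustrative only, assuming an identity measurement map as a special case of $\mathrm{e}_{u}$; it is not the spatPomp implementation) carries out one perturbed-observation EnKF update of a $J\times U$ ensemble.

```python
import numpy as np

def enkf_update(X, y, R, rng):
    """One EnKF analysis step with perturbed observations, assuming an
    identity measurement map on the U-dimensional state (a special case of
    e_u; the package interface is more general).
    X : (J, U) prediction ensemble; y : (U,) data; R : (U, U) covariance."""
    J = X.shape[0]
    Yhat = X.copy()                                # forecast ensemble (identity map)
    Xc = X - X.mean(axis=0)                        # centered prediction ensemble
    Yc = Yhat - Yhat.mean(axis=0)                  # centered forecast ensemble
    Sig_Y = Yc.T @ Yc / (J - 1) + R                # forecast covariance estimate
    Sig_XY = Xc.T @ Yc / (J - 1)                   # prediction/forecast covariance
    K = Sig_XY @ np.linalg.inv(Sig_Y)              # Kalman gain
    eps = rng.multivariate_normal(np.zeros(len(y)), R, size=J)  # artificial noise
    r = y - Yhat                                   # innovations
    return X + (r + eps) @ K.T                     # filter update
```

The gain blends the prediction ensemble toward the data in proportion to the ratio of forecast to measurement uncertainty.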
In step 8 of Algorithm 3, the conditional variance of the measurement at the
current time step is approximated by constructing a diagonal covariance matrix
whose diagonal elements are the sample average of the theoretical unit
measurement variances at each unit. This is written using an indicator
function $\mathbbm{1}_{u,\tilde{u}}$ which takes value 1 if $u=\tilde{u}$ and
0 otherwise. The vunit_measure model component aids in this step whereas
eunit_measure specifies how we can construct forecast data (step 5) that can
be used later to update our prediction particles in step 14. In step 12 we add
artificial measurement error to arrive at a consistent sample covariance for
the filtering step (Evensen, 1994; Evensen and van Leeuwen, 1996), writing
${\mathrm{Normal}}(\boldsymbol{\mu},\Sigma)$ for independent draws from a
multivariate normal random variable with mean $\boldsymbol{\mu}$ and variance
matrix $\Sigma$.
EnKF achieves good dimensional scaling relative to a particle filter (by
avoiding the resampling step) at the expense of a Gaussian approximation in
the filtering update rule. Adding hierarchical layers to the model
representation can help to make the EnKF approximation applicable in non-
Gaussian contexts (Katzfuss _et al._ , 2020). Since we envisage spatPomp
primarily for situations with relatively low dimension, this implementation
does not engage in regularization issues required when the dimension of the
observation space exceeds the number of particles in the ensemble.
Our EnKF implementation supposes we have access to the functions
$$\mathrm{e}_{u,n}(x,\theta)=\mathbb{E}\big[Y_{u,n}{\,|\,}X_{u,n}{=}x{\hskip
1.42262pt;\hskip
1.42262pt}\theta\big],\qquad\mathrm{v}_{u,n}(x,\theta)=\mathrm{Var}\big(Y_{u,n}{\,|\,}X_{u,n}{=}x{\hskip
1.42262pt;\hskip 1.42262pt}\theta\big).$$
If the measurements are unbiased, $\mathrm{e}_{u,n}(x_{u},\theta)$ will simply
extract the measured components of $x_{u}$. The measurements in a SpatPOMP do
not necessarily correspond to specific components of the state vector, and
enkf permits arbitrary relationships subject to the constraint that the user
can provide the necessary $\mathrm{e}_{u,n}(x,\theta)$ and
$\mathrm{v}_{u}(x,\theta)$ functions. These functions can be defined during
the construction of a class ‘spatPomp’ object by supplying C snippets to the
arguments eunit_measure and vunit_measure respectively. For common choices of
measurement model, such as Gaussian or negative binomial, $\mathrm{e}_{u,n}$
and $\mathrm{v}_{u,n}$ are readily available. In general, the functional forms
of $\mathrm{e}_{u,n}$ and $\mathrm{v}_{u,n}$ may depend on $u$ and $n$. Users
can specify more general functional forms in spatPomp since the variables u
and t are defined for the C snippets. Similarly, both the mathematical
notation in Section 2 and the spatPomp implementation permit arbitrary
dependence on covariate time series.
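As an illustration of how such moments look in closed form, the sketch below gives $\mathrm{e}$ and $\mathrm{v}$ for a Gaussian measurement with standard deviation $\tau$ and for a negative binomial measurement parameterized by its mean and a dispersion parameter $k$; the parameterizations and names are assumptions for illustration, not the package's interface.

```python
def e_gauss(x, theta):
    """Conditional mean of Y ~ Normal(x, tau^2): the measurement is unbiased."""
    return x

def v_gauss(x, theta):
    """Conditional variance of the Gaussian measurement: constant tau^2."""
    return theta["tau"] ** 2

def e_negbin(x, theta):
    """Conditional mean of Y ~ NegBin(mean = x, dispersion = k)."""
    return x

def v_negbin(x, theta):
    """Conditional variance of the negative binomial: mean + mean^2 / k."""
    return x + x ** 2 / theta["k"]
```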
### 3.4 Block particle filter
input: simulator for
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}{\,|\,}\boldsymbol{x}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$ and
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; number of particles, $J$; evaluator for
$f_{{Y}_{u,n}|{X}_{u,n}}({y}_{u,n}{\,|\,}{x}_{u,n}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; data, $\boldsymbol{y}^{*}_{1:N}$; parameter, $\theta$;
blocks, $\mathcal{B}_{1:K}$;
1 initialization,
$\boldsymbol{X}_{0}^{F,j}\sim{f}_{\boldsymbol{X}_{0}}\left({\,\cdot\,}{\hskip
1.42262pt;\hskip 1.42262pt}{\theta}\right)$ for $j$ in ${1}\\!:\\!{J}$
2 for _$n\ \mathrm{in}\ {1}\\!:\\!{N}$_ do
3 prediction,
$\boldsymbol{X}_{n}^{P,j}\sim{f}_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}\big{(}{\,\cdot\,}|\boldsymbol{X}_{n-1}^{F,j};{\theta}\big{)}$
for $j$ in ${1}\\!:\\!{J}$
4 block weights, $\displaystyle
w_{k,n}^{j}=\prod_{u\in\mathcal{B}_{k}}f_{Y_{u,n}|X_{u,n}}\big{(}y^{*}_{u,n}{\,|\,}X^{P,j}_{u,n}{\hskip
1.42262pt;\hskip 1.42262pt}\theta\big{)}$ for $j$ in ${1}\\!:\\!{J}$, $k$ in
${1}\\!:\\!{K}$
5 resampling indices, $r^{j}_{k,n}$ with
$\mathbb{P}\left[r^{j}_{k,n}=i\right]=w^{i}_{k,n}\Big{/}\sum_{q=1}^{J}w^{q}_{k,n}$
for $j$ in ${1}\\!:\\!{J}$, $k$ in ${1}\\!:\\!{K}$
6 resample, $X_{\mathcal{B}_{k},n}^{F,j}=X_{\mathcal{B}_{k},n}^{P,r^{j}_{k,n}}$
for $j$ in ${1}\\!:\\!{J}$, $k$ in ${1}\\!:\\!{K}$
7
8 end
output: log likelihood,
$\lambda^{\mbox{\tiny{BPF}}}(\theta)=\sum_{n=1}^{N}\sum_{k=1}^{K}\log\Big{(}\frac{1}{J}\sum_{j=1}^{J}w^{j}_{k,n}\Big{)}$,
filter particles $\boldsymbol{X}^{F,1:J}_{1:N}$
complexity: $\mathcal{O}(JUN)$
Algorithm 4 bpfilter(P, Np = $J\\!$, block_list = $\mathcal{B}$ ) using
notation from Table 1 where P is a class ‘spatPomp’ object with definitions
for rprocess, dunit_measure, rinit, obs, coef.
Algorithm 4 is an implementation of the block particle filter (BPF; Rebeschini
and van Handel, 2015), also called the factored particle filter (Ng _et al._ ,
2002). BPF partitions the units into a collection of blocks,
$\mathcal{B}_{1},\dots,\mathcal{B}_{K}$. Each unit is placed in exactly one
block. BPF generates proposal particles by simulating from the joint latent
process across all blocks, exactly as the particle filter does. However, the
resampling in the filtering step is carried out independently for each block,
using weights corresponding only to the measurements in the block. Different
proposal particles may be successful for different blocks, and the block
resampling allows the filter particles to paste together these successful
proposed blocks. BPF supposes that spatial coupling conditional on the data
occurs primarily within blocks, and is negligible between blocks.
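The block-wise resampling at the heart of BPF can be sketched as follows (Python, illustrative only; 0-based unit indices, and the blocks are assumed to partition the units). Each block resamples its own state components using only that block's measurement weights, pasting together successful blocks from different proposals.

```python
import numpy as np

def bpf_resample(X, logw_unit, blocks, rng):
    """One blockwise resampling step.
    X : (J, U) proposal particles; logw_unit : (J, U) per-unit log measurement
    weights; blocks : list of lists of 0-based unit indices partitioning 0..U-1."""
    J = X.shape[0]
    Xf = np.empty_like(X)
    loglik = 0.0
    for block in blocks:
        logw = logw_unit[:, block].sum(axis=1)            # block weights
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                    # block likelihood term
        idx = rng.choice(J, size=J, p=w / w.sum())        # block resampling indices
        Xf[:, block] = X[idx][:, block]                   # paste the resampled block
    return Xf, loglik
```

The per-block likelihood terms accumulate as in the log likelihood output of Algorithm 4.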
The user has a choice of specifying the blocks using either the block_list
argument or block_size, but not both. block_list takes a class ‘list’ object
where each entry is a vector representing the units in a block. block_size
takes an integer and evenly partitions $1{\hskip 1.70717pt:\hskip 1.70717pt}U$
into blocks of size approximately block_size. For example, if there are 4
units, executing bpfilter with block_size=2 is equivalent to setting
block_list=list(c(1,2),c(3,4)).
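The partition implied by block_size can be sketched as follows (Python, illustrative; the exact rounding used by bpfilter may differ).

```python
def partition_units(U, block_size):
    """Evenly partition units 1:U into contiguous blocks of size roughly
    block_size (a sketch of the block_size argument's behavior)."""
    n_blocks = max(1, round(U / block_size))
    cuts = [round(i * U / n_blocks) for i in range(n_blocks + 1)]
    return [list(range(cuts[i] + 1, cuts[i + 1] + 1)) for i in range(n_blocks)]
```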
## 4 Inference for SpatPOMP models
We focus on iterated filtering methods (Ionides _et al._ , 2015) which provide
a relatively simple way to coerce filtering algorithms to carry out parameter
inference, applicable to the general class of SpatPOMP models considered by
spatPomp. The main idea of iterated filtering is to extend a POMP model to
include dynamic parameter perturbations. Repeated filtering, with parameter
perturbations of decreasing magnitude, approaches the maximum likelihood
estimate. Here, we present iterated versions of GIRF, EnKF and the unadapted
bagged filter (UBF), a version of ABF with $J=1$. These algorithms are known
as IGIRF (Park and Ionides, 2020), IEnKF (Li _et al._ , 2020) and IUBF
(Ionides _et al._ , 2021) respectively. SpatPOMP model estimation is an active
area for research (for example, Katzfuss _et al._ , 2020) and spatPomp
provides a platform for developing and testing new methods, in addition to new
models and data analysis.
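The perturb-filter-cool idea can be seen on a deliberately trivial example: estimating the mean of Gaussian observations, where the maximum likelihood estimate is the sample mean. The following Python sketch illustrates the iterated filtering idea only (a static parameter, no latent dynamics) and is not the spatPomp implementation; all tuning values are arbitrary.

```python
import numpy as np

def toy_iterated_filtering(y, theta0, M=100, J=200, sigma=0.3, a=0.2, seed=0):
    """Toy iterated filtering for Y_n ~ Normal(theta, 1). A swarm of perturbed
    parameter particles is filtered through the data repeatedly, with the
    perturbations cooled geometrically, so the swarm concentrates near the
    maximum likelihood estimate (here, mean(y))."""
    rng = np.random.default_rng(seed)
    theta = np.full(J, theta0, dtype=float)
    for m in range(1, M + 1):
        sd = sigma * a ** (m / 50)                       # cooled perturbation scale
        for yn in y:
            theta = theta + rng.normal(0.0, sd, size=J)  # perturb parameters
            w = np.exp(-0.5 * (yn - theta) ** 2)         # measurement weights
            theta = rng.choice(theta, size=J, p=w / w.sum())  # resample swarm
    return theta.mean()
```

Each pass drags the perturbed swarm toward parameter values consistent with the data, and the geometric cooling lets the swarm settle near the maximum likelihood estimate.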
input: simulator for
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}{\,|\,}\boldsymbol{x}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$ and
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; evaluator for
$f_{{Y}_{u,n}|{X}_{u,n}}({y}_{u,n}{\,|\,}{x}_{u,n}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$, and $\boldsymbol{\mu}(\boldsymbol{x},s,t{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$; data, $\boldsymbol{y}^{*}_{1:N}$;
starting parameter, $\theta_{0}$; iterations, $M$; particles, $J$; guide
simulations, $K$; lookahead lags, $L$; intermediate timesteps, $S$; random
walk intensities, $\sigma_{0:N,1:D_{\theta}}$; cooling fraction in 50
iterations, $a$.
note: free indices are implicit ‘for’ loops, calculated for $j\ \text{in}\
{1}\\!:\\!{J}$, $k\ \text{in}\ {1}\\!:\\!{K}$, $\ell\ \text{in}\
{(n+1)}\\!:\\!{\min(n+L,N)}$, $u\ \text{in}\ {1}\\!:\\!{U}$,
$d_{\theta},d_{\theta}^{\prime}\ \text{in}\ {1}\\!:\\!{D_{\theta}}$.
1 initialize parameters, $\Theta^{F,0,j}_{N-1,S}=\theta_{0}$
2 for _$m\ \mathrm{in}\ {1}\\!:\\!{M}$_ do
3 initialize parameters,
$\Theta^{F,m,j}_{0,0}\sim{\mathrm{Normal}}\big{(}\Theta^{F,m-1,j}_{N-1,S}\,,a^{2m/50}\,\Sigma_{\text{ivp}}\big{)}$
for
$\big{[}\Sigma_{\text{ivp}}\big{]}_{d_{\theta},d_{\theta}^{\prime}}=\sigma_{\text{ivp},d_{\theta}}^{2}\mathbbm{1}_{d_{\theta}=d_{\theta}^{\prime}}$
4 initialize filter particles, simulate
$\boldsymbol{X}_{0,0}^{F,j}\sim{f}_{\boldsymbol{X}_{0}}\left({\,\cdot\,}{\hskip
1.42262pt;\hskip 1.42262pt}{\Theta^{F,m,j}_{0,0}}\right)$ and set
$g^{F,j}_{0,0}=1$
5 for _$n\ \mathrm{in}\ {0}\\!:\\!{N-1}$_ do
6 guide simulations,
$\boldsymbol{X}_{\ell}^{G,j,k}\sim{f}_{\boldsymbol{X}_{\ell}|\boldsymbol{X}_{n}}\big{(}{\,\cdot\,}|\boldsymbol{X}_{n,0}^{F,j}{\hskip
1.42262pt;\hskip 1.42262pt}{\Theta_{n,0}^{F,m,j}}\big{)}$
7 guide residuals,
$\boldsymbol{\epsilon}^{j,k}_{0,\ell}=\boldsymbol{X}_{\ell}^{G,j,k}-\boldsymbol{\mu}\big{(}\boldsymbol{X}^{F,j}_{n,0},t_{n},t_{\ell}{\hskip
1.42262pt;\hskip 1.42262pt}\Theta^{F,m,j}_{n,0}\big{)}$
8 for _$s\ \mathrm{in}\ {1}\\!:\\!{S}$_ do
9 perturb parameters,
$\Theta^{P,m,j}_{n,s}\sim{\mathrm{Normal}}\big{(}\Theta^{F,m,j}_{n,s-1}\,,a^{2m/50}\,\Sigma_{n}\big{)}$
for
$\big{[}\Sigma_{n}\big{]}_{d_{\theta},d_{\theta}^{\prime}}=\sigma_{n,d_{\theta}}^{2}\mathbbm{1}_{d_{\theta}=d_{\theta}^{\prime}}/S$
10 prediction simulations,
${\boldsymbol{X}}_{n,s}^{P,j}\sim{f}_{{\boldsymbol{X}}_{n,s}|{\boldsymbol{X}}_{n,s-1}}\big{(}{\,\cdot\,}|{\boldsymbol{X}}_{n,s-1}^{F,j}{\hskip
1.42262pt;\hskip 1.42262pt}{\Theta_{n,s}^{P,m,j}}\big{)}$
11 deterministic trajectory,
$\boldsymbol{\mu}^{P,j}_{n,s,\ell}=\boldsymbol{\mu}\big{(}\boldsymbol{X}^{P,j}_{n,s},t_{n,s},t_{\ell}{\hskip
1.42262pt;\hskip 1.42262pt}\Theta_{n,s}^{P,m,j}\big{)}$
12 pseudo guide simulations,
$\hat{\boldsymbol{X}}^{j,k}_{n,s,\ell}=\boldsymbol{\mu}^{P,j}_{n,s,\ell}+\boldsymbol{\epsilon}^{j,k}_{s-1,\ell}-\boldsymbol{\epsilon}^{j,k}_{s-1,n+1}+{\textstyle\sqrt{\frac{t_{n+1}-t_{n,s}}{t_{n+1}-t_{n,0}}}}\,\boldsymbol{\epsilon}^{j,k}_{s-1,n+1}$
13 discount factor,
$\eta_{n,s,\ell}=1-(t_{n+\ell}-t_{n,s})/\\{(t_{n+\ell}-t_{\max(n+\ell-L,0)})\cdot(1+\mathbbm{1}_{L=1})\\}$
14 $\displaystyle
g^{P,j}_{n,s}=\prod_{\ell=n+1}^{\min(n+L,N)}\prod_{u=1}^{U}\left[\frac{1}{K}\sum_{k=1}^{K}f_{Y_{u,\ell}|X_{u,\ell}}\Big{(}y^{*}_{u,\ell}{\,|\,}\hat{X}^{j,k}_{u,n,s,\ell}{\hskip
1.42262pt;\hskip
1.42262pt}\Theta_{n,s}^{P,m,j}\Big{)}\right]^{\eta_{n,s,\ell}}$
15
$w^{j}_{n,s}=\left\\{\begin{array}[]{ll}f_{\boldsymbol{Y}_{n}|\boldsymbol{X}_{n}}\big{(}\boldsymbol{y}_{n}{\,|\,}\boldsymbol{X}^{F,j}_{n,s-1}{\hskip
1.42262pt;\hskip
1.42262pt}\Theta_{n,s-1}^{F,m,j}\big{)}\,\,g^{P,j}_{n,s}\Big{/}g^{F,j}_{n,s-1}&\mbox{if
$s=1$, $n\neq 0$}\\\
g^{P,j}_{n,s}\Big{/}g^{F,j}_{n,s-1}&\mbox{else}\end{array}\right.$
16 normalized weights,
$\tilde{w}^{j}_{n,s}=w^{j}_{n,s}\Big{/}\sum_{q=1}^{J}w^{q}_{n,s}$
17 resampling indices, $r_{1:J}$ with
$\mathbb{P}\left[r_{j}=q\right]=\tilde{w}^{q}_{n,s}$
18 set $\boldsymbol{X}_{n,s}^{F,j}=\boldsymbol{X}_{n,s}^{P,r_{j}}\,$,
$\;g^{F,j}_{n,s}=g^{P,r_{j}}_{n,s}\,$,
$\;\boldsymbol{\epsilon}^{j,k}_{s,\ell}=\boldsymbol{\epsilon}^{r_{j},k}_{s-1,\ell}$,
$\;\Theta_{n,s}^{F,m,j}=\Theta_{n,s}^{P,m,r_{j}}$
19 end
20
21 end
22
23 end
output: Iterated GIRF parameter swarm, $\Theta_{N-1,S}^{F,M,1:J}$
Monte Carlo maximum likelihood estimate:
$\frac{1}{J}\sum_{j=1}^{J}\Theta^{F,M,j}_{N-1,S}$
complexity: $\mathcal{O}\big{(}{MJLUN(K+S)}\big{)}$
Algorithm 5 igirf(P, params = $\theta_{0}$, Ngirf = $M$, Np = $J\\!$, Ninter =
$S\\!$, Nguide = $K\\!$, Lookahead = $L\\!$ ), rw.sd =
$\sigma_{0:N,1:D_{\theta}}$, cooling.fraction.50 = $a$ using notation from
Table 1 where P is a class ‘spatPomp’ object with definitions for rprocess,
dunit_measure, skeleton, rinit and obs
### 4.1 Iterated GIRF for parameter estimation
Algorithm 5 describes igirf(), the spatPomp implementation of IGIRF. This
algorithm carries out the IF2 algorithm of Ionides _et al._ (2015) with
filtering carried out by GIRF; accordingly, its implementation combines the
mif2 function in pomp with girf (Algorithm 1). For Algorithm 5, we unclutter the
pseudocode by using a subscript and superscript notation for free indices,
meaning subscripts and superscripts for which a value is not explicitly
specified in the code. We use the convention that a free subscript or
superscript is evaluated for all values in its range, leading to an implicit
‘for’ loop. This does not hold for capitalized subscripts and superscripts,
which describe the purpose of a Monte Carlo particle, matching usage in
Algorithm 1.
The quantity $\Theta^{P,m,j}_{n,s}$ gives a perturbed parameter vector for
$\theta$ corresponding to particle $j$ on iteration $m$ at the $s^{\text{th}}$
intermediate time between $n$ and $n+1$. The perturbations in Algorithm 5 are
taken to follow a multivariate normal distribution, with a diagonal covariance
matrix scaled by $\sigma_{n,d_{\theta}}$. Normal perturbations are not
theoretically required, but this is a common choice in practice. The igirf
function permits perturbations to be carried out on a transformed scale,
specified using the partrans argument, to accommodate situations where
normally distributed perturbations are more natural on the log or inverse-
logistic scale, or any other user-specified scale. For regular parameters,
i.e. parameters that are not related to the initial conditions of the
dynamics, it may be appropriate to set the perturbation scale independent of
$n$. If parameters are transformed so that a unit scale is relevant, for
example using a logarithmic transform for non-negative parameters, a simple
choice such as $\sigma_{n,d_{\theta}}=0.02$ may be effective. Initial value
parameters (IVPs) are those that determine only the latent state at time
$t_{0}$, and these should be perturbed only at the beginning of each iteration
$m$. The matrix $\sigma_{0:N,1:D_{\theta}}$ can be constructed using the rw.sd
function, which simplifies the construction for regular parameters and IVPs.
The cooling.fraction.50 argument takes the fraction of rw.sd by which to
perturb the parameters after 50 iterations of igirf. If set to 0.5, for
instance, the default behavior is to lower the perturbation standard deviation
geometrically so that it is halved by the 50th iteration of igirf.
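For illustration, the geometric cooling schedule can be written out directly: the perturbation standard deviation at iteration $m$ is the base rw.sd value multiplied by $a^{m/50}$, so the variance appearing in the algorithms is scaled by $a^{2m/50}$. A minimal sketch of this arithmetic (Python, not package code):

```python
def geometric_cooling_sd(base_sd, m, a=0.5):
    """Perturbation standard deviation after m iterations under geometric
    cooling: the base rw.sd is scaled by a**(m/50), so it reaches a fraction
    `a` of its starting value at iteration 50."""
    return base_sd * a ** (m / 50)

# With cooling.fraction.50 = 0.5, the standard deviation halves by iteration 50.
print(geometric_cooling_sd(0.02, 0))    # 0.02
print(geometric_cooling_sd(0.02, 50))   # 0.01
```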
### 4.2 Iterated EnKF for parameter estimation
input: simulator for
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}{\,|\,}\boldsymbol{x}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$ and
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; evaluator for $\mathrm{e}_{u,n}(x,\theta)$ and
$\mathrm{v}_{u,n}(x,\theta)$; data, $\boldsymbol{y}^{*}_{1:N}$; number of
particles, $J$; number of iterations, $M$; starting parameter, $\theta_{0}$;
random walk intensities, $\sigma_{0:N,1:D_{\theta}}$; cooling fraction in 50
iterations, $a$.
note: free indices are implicit ‘for’ loops, calculated for $j\ \text{in}\
{1}\\!:\\!{J}$, $u$ and $\tilde{u}$ in ${1}\\!:\\!{U}$, $d_{\theta}$ and
$d_{\theta}^{\prime}$ in ${1}\\!:\\!{D_{\theta}}$.
1 initialize parameters, $\Theta^{F,0,j}_{N}=\theta_{0}$
2 for _$m\ \mathrm{in}\ {1}\\!:\\!{M}$_ do
3 initialize parameters,
$\Theta^{F,m,j}_{0}\sim{\mathrm{Normal}}\big{(}\Theta^{F,m-1,j}_{N}\,\,,\,a^{2m/50}\,\Sigma_{0}\big{)}$
for
$\big{[}\Sigma_{n}\big{]}_{d_{\theta},d_{\theta}^{\prime}}=\sigma_{n,d_{\theta}}^{2}\mathbbm{1}_{d_{\theta}=d_{\theta}^{\prime}}$
4 initialize filter particles, simulate
$\boldsymbol{X}_{0}^{F,j}\sim{f}_{\boldsymbol{X}_{0}}\left({\,\cdot\,}{\hskip
1.42262pt;\hskip 1.42262pt}{\Theta^{F,m,j}_{0}}\right)$.
5 for _$n\ \mathrm{in}\ {1}\\!:\\!{N}$_ do
6 perturb parameters,
$\Theta_{n}^{P,m,j}\sim{\mathrm{Normal}}\big{(}\Theta^{F,m,j}_{n-1}\,\,,\,a^{2m/50}\,\Sigma_{n}\big{)}$
7 prediction ensemble,
$\boldsymbol{X}_{n}^{P,j}\sim{f}_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}\big{(}{\,\cdot\,}|\boldsymbol{X}_{n-1}^{F,j};{\Theta_{n}^{P,m,j}}\big{)}$
8 process and parameter ensemble,
$\boldsymbol{Z}_{n}^{P,j}=\begin{pmatrix}\boldsymbol{X}^{P,j}_{n}\\\
\Theta^{P,m,j}_{n}\end{pmatrix}$
9 centered process and parameter ensemble,
$\tilde{\boldsymbol{Z}}_{n}^{P,j}=\boldsymbol{Z}_{n}^{P,j}-\frac{1}{J}\sum_{q=1}^{J}\boldsymbol{Z}_{n}^{P,q}$
10 forecast ensemble,
${\hat{Y}}_{u,n}^{j}=\mathrm{e}_{u,n}(X_{u,n}^{P,j},\Theta_{n}^{P,m,j})$
11 centered forecast ensemble,
$\boldsymbol{\tilde{Y}}_{n}^{j}=\boldsymbol{\hat{Y}}_{n}^{j}-\frac{1}{J}\sum_{q=1}^{J}\boldsymbol{\hat{Y}}_{n}^{q}$
12 forecast measurement variance,
$R_{u,\tilde{u}}=\mathbbm{1}_{u=\tilde{u}}\,\frac{1}{J}\sum_{j=1}^{J}\mathrm{v}_{u,n}(X_{u,n}^{P,j},\Theta_{n}^{P,m,j})$
13 forecast sample covariance,
$\Sigma_{Y}=\frac{1}{J-1}\sum_{j=1}^{J}(\boldsymbol{\tilde{Y}}_{n}^{j})(\boldsymbol{\tilde{Y}}_{n}^{j})^{T}+R$
14 prediction and forecast sample covariance,
$\Sigma_{ZY}=\frac{1}{J-1}\sum_{j=1}^{J}(\tilde{\boldsymbol{Z}}_{n}^{P,j})(\boldsymbol{\tilde{Y}}_{n}^{j})^{T}$
15 Kalman gain: $K=\Sigma_{ZY}\Sigma_{Y}^{-1}$
16 artificial measurement noise,
$\boldsymbol{\epsilon}_{n}^{j}\sim{\mathrm{Normal}}(\boldsymbol{0},R)$
17 errors,
$\boldsymbol{r}_{n}^{j}=\boldsymbol{y}^{*}_{n}-\boldsymbol{\hat{Y}}_{n}^{j}$
18 filter update:
$\boldsymbol{Z}_{n}^{F,j}=\begin{pmatrix}\boldsymbol{X}^{F,j}_{n}\\\
\Theta^{F,m,j}_{n}\end{pmatrix}=\boldsymbol{Z}_{n}^{P,j}+K\big{(}\boldsymbol{r}_{n}^{j}+\boldsymbol{\epsilon}_{n}^{j}\big{)}$
19 end
20
21 end
22 set $\theta_{M}=\frac{1}{J}\sum_{j=1}^{J}\Theta^{F,M,j}_{N}$
output: Monte Carlo maximum likelihood estimate, $\theta_{M}$.
complexity: $\mathcal{O}(MJUN)$
Algorithm 6 ienkf(P, params = $\theta_{0}$, Nenkf = $M$, Np = $J$,
cooling.fraction.50 = $a$, rw.sd = $\sigma_{0:N,1:D_{\theta}}$), using
notation from Table 1 where P is a class ‘spatPomp’ object with definitions
for rprocess, eunit_measure, vunit_measure, rinit, and obs.
Algorithm 6 is an implementation of the iterated ensemble Kalman filter
(IEnKF) which extends the IF2 approach for parameter estimation by replacing a
particle filter with an ensemble Kalman filter. As described in Section 4.1,
we employ a free index notation whereby superscripts and subscripts that are
not otherwise specified have an implicit ‘for’ loop.
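The ensemble update at the heart of Algorithm 6 (the stacked process-and-parameter ensemble, the gain, and the filter update) can be sketched self-containedly in NumPy. This is our illustrative transcription, not package code, using the standard stochastic EnKF convention in which the gain is applied to the innovations $\boldsymbol{y}^{*}+\boldsymbol{\epsilon}-\boldsymbol{\hat{Y}}$:

```python
import numpy as np

def enkf_update(Z, Yhat, R, y_obs, rng):
    """One stochastic EnKF update, in the spirit of Algorithm 6.
    Z     : (d, J) prediction ensemble (latent state stacked with parameters)
    Yhat  : (U, J) forecast ensemble for the observations
    R     : (U, U) forecast measurement variance
    y_obs : (U,)   data y*_n
    """
    J = Z.shape[1]
    Zt = Z - Z.mean(axis=1, keepdims=True)          # centered ensemble
    Yt = Yhat - Yhat.mean(axis=1, keepdims=True)    # centered forecasts
    Sigma_Y = Yt @ Yt.T / (J - 1) + R               # forecast sample covariance
    Sigma_ZY = Zt @ Yt.T / (J - 1)                  # cross covariance
    K = Sigma_ZY @ np.linalg.inv(Sigma_Y)           # Kalman gain
    eps = rng.multivariate_normal(np.zeros(len(y_obs)), R, size=J).T
    innov = y_obs[:, None] - Yhat                   # innovations y* - Yhat
    return Z + K @ (innov + eps)                    # filter update
```

On a scalar Gaussian toy example, the updated ensemble mean moves from the prior mean toward the observation, with the parameter rows of the ensemble pulled along through the cross covariance.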
We note a caveat in using IEnKF. If the forecast mean $\hat{\boldsymbol{Y}}$
is not dependent on a parameter component, that component of the parameter is
not updated by the Kalman gain on average. For example, in Brownian motion,
the forecast $\boldsymbol{\hat{Y}}$ is independent of the measurement variance
parameter $\tau$, and so IEnKF is ineffective in estimating $\tau$. By
contrast, for geometric Brownian motion, which is obtained by exponentiating
Brownian motion, IEnKF can estimate $\tau$ because high values of $\tau$ lead
to higher values of $\boldsymbol{\hat{Y}}$ on average. In this case, if the
average forecast is different from the observed data, the $\tau$ parameter
gets updated accordingly to reduce the error. Therefore, IEnKF may need to be
coupled with other parameter estimation methods (such as IGIRF) to estimate
parameters that do not affect the forecast mean.
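This caveat is visible directly in the cross covariance that forms the Kalman gain: if the forecast does not depend on a parameter component, the sample covariance between that parameter's ensemble and the forecast ensemble is zero on average, so the corresponding row of the gain vanishes. A small NumPy illustration (the setup and numbers are ours, chosen to mimic the two cases above):

```python
import numpy as np

rng = np.random.default_rng(0)
J = 10_000                                  # ensemble size (illustrative)

tau = rng.normal(1.0, 0.3, J)               # perturbed measurement parameter
x = np.exp(rng.normal(0.0, 1.0, J))         # positive latent state, GBM-like

# Brownian motion: the forecast mean of Y is the latent state itself,
# independent of tau, so the tau row of Sigma_ZY (hence of the gain) is ~0.
yhat_bm = np.log(x)
# Geometric Brownian motion with lognormal measurement error: the forecast
# mean E[Y | X, tau] = X * exp(tau^2 / 2) increases with tau.
yhat_gbm = x * np.exp(tau ** 2 / 2)

cov_bm = np.cov(tau, yhat_bm)[0, 1]         # ~0: no signal to update tau
cov_gbm = np.cov(tau, yhat_gbm)[0, 1]       # clearly positive: tau is updated
print(cov_bm, cov_gbm)
```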
### 4.3 Iterated UBF for parameter estimation
input: simulator for
$f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}(\boldsymbol{x}_{n}{\,|\,}\boldsymbol{x}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\theta)$ and
$f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; evaluator for
$f_{{Y}_{u,n}|{X}_{u,n}}({y}_{u,n}{\,|\,}{x}_{u,n}{\hskip 1.42262pt;\hskip
1.42262pt}\theta)$; data, $\boldsymbol{y}^{*}_{1:N}$; starting parameter,
$\theta_{0}$; number of replicates per parameter, $\mathcal{I}$; number of
parameters, $K$; neighborhood structure, $B_{u,n}$; number of iterations, $M$;
resampling proportion, $p$; random walk intensities,
$\sigma_{0:N,1:D_{\theta}}$; cooling fraction in 50 iterations, $a$.
1 initialize parameters, $\Theta^{F,0,k}_{N}=\theta_{0}$
2 for _$m\ \mathrm{in}\ {1}\\!:\\!{M}$_ do
3 initialize parameters, $\Theta^{F,m,k}_{0}=\Theta^{F,m-1,k}_{N}$ for $k$ in
${1}\\!:\\!{K}$
4 initialize filter particles,
$\boldsymbol{X}_{0}^{F,m,k,i}\sim{f}_{\boldsymbol{X}_{0}}\left({\,\cdot\,}{\hskip
1.42262pt;\hskip 1.42262pt}{\Theta^{F,m,k}_{0}}\right)$ for $k$ in
${1}\\!:\\!{K}$, for $i$ in ${1}\\!:\\!{\mathcal{I}}$.
5 for _$n\ \mathrm{in}\ {1}\\!:\\!{N}$_ do
6 perturb parameters,
$\Theta_{n}^{P,m,k,i}\sim{\mathrm{Normal}}\big{(}\Theta^{F,m,k}_{n-1}\,\,,\,a^{2m/50}\,\Sigma_{n}\big{)}$
for $k$ in ${1}\\!:\\!{K}$, $i$ in ${1}\\!:\\!{\mathcal{I}}$, where
$\big{[}\Sigma_{n}\big{]}_{d_{\theta},d_{\theta}^{\prime}}=\sigma_{n,d_{\theta}}^{2}\mathbbm{1}_{d_{\theta}=d_{\theta}^{\prime}}$
7 proposals, $\boldsymbol{X}_{n}^{P,m,k,i}\sim
f_{\boldsymbol{X}_{n}|\boldsymbol{X}_{n-1}}\big{(}{\,\cdot\,}{\,|\,}\boldsymbol{X}^{F,m,k,i}_{n-1}{\hskip
1.42262pt;\hskip 1.42262pt}\Theta_{n}^{P,m,k,i}\big{)}$ for $k$ in
${1}\\!:\\!{K}$, $i$ in ${1}\\!:\\!{\mathcal{I}}$
8
9
$w_{u,n}^{k,i}=f_{Y_{u,n}|X_{u,n}}\big{(}y^{*}_{u,n}{\,|\,}X^{P,m,k,i}_{u,n}{\hskip
1.42262pt;\hskip 1.42262pt}\Theta_{n}^{P,m,k,i}\big{)}$ for $u$ in
${1}\\!:\\!{U}$, $k$ in ${1}\\!:\\!{K}$, $i$ in ${1}\\!:\\!{\mathcal{I}}$
10 $w^{P,k,i}_{u,n}=\displaystyle\prod_{(\tilde{u},\tilde{n})\in
B_{u,n}}w_{\tilde{u},\tilde{n}}^{k,i}$ for $u$ in ${1}\\!:\\!{U}$, $k$ in
${1}\\!:\\!{K}$, $i$ in ${1}\\!:\\!{\mathcal{I}}$
11
12 parameter log likelihoods, $\displaystyle
r^{k}_{n}=\sum_{u=1}^{U}\mathrm{log}\Bigg{(}\frac{\sum_{i=1}^{\mathcal{I}}w_{u,n}^{k,i}\,\,w^{P,k,i}_{u,n}}{\sum_{\tilde{i}=1}^{\mathcal{I}}w^{P,k,\tilde{i}}_{u,n}}\Bigg{)}$
for $k$ in ${1}\\!:\\!{K}$,
13 Select the highest $pK$ weights: find $s$ with
$\\{s(1),\dots,s(pK)\\}=\big{\\{}k:\sum_{\tilde{k}=1}^{K}{\mathbf{1}}\\{r^{\tilde{k}}_{n}>r^{k}_{n}\\}<pK\big{\\}}$
14
15 Make $1/p$ copies of successful parameters,
$\Theta^{F,m,k}_{n}=\Theta^{F,m,s(\lceil pk\rceil)}_{n}$ for $k$ in
${1}\\!:\\!{K}$
16
17 Set $\boldsymbol{X}^{F,m,k,i}_{n}=\boldsymbol{X}_{n}^{P,m,s(\lceil
pk\rceil),i}$
18
19 end
20
21 end
output: Iterated UBF parameter swarm: $\Theta^{F,M,1:K}_{N}$
Monte Carlo maximum likelihood estimate:
$\frac{1}{K}\sum_{k=1}^{K}\Theta^{F,M,k}_{N}$.
complexity: $\mathcal{O}(MK\mathcal{I}UN)$
22
Algorithm 7 iubf(P, params = $\theta_{0}$, Nubf = $M$, Nrep_per_param =
$\mathcal{I}$, Nparam = $K$, nbhd=$B_{u,n}$, prop=$p$, cooling.fraction.50 =
$a$, rw.sd = $\sigma_{0:N,1:D_{\theta}}$), using notation from Table 1 where P
is a class ‘spatPomp’ object with definitions for rprocess, dunit_measure,
rinit, obs and coef.
Algorithm 7 also extends the IF2 approach by using an ABF-inspired particle
filter. We start with $K$ copies of our starting parameter set and iteratively
perturb the parameter set and evaluate a conditional likelihood at each
observation time using ABF with $J=1$ (also called the unadapted bagged
filter, or UBF). The parameter sets yielding the top $p$ quantile of the
likelihoods are resampled for perturbation and likelihood evaluation in the
next time step.
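The selection step of Algorithm 7 keeps the parameter sets whose log likelihoods $r^{k}_{n}$ fall in the top proportion $p$ and makes $1/p$ copies of each survivor. A direct NumPy transcription of that index arithmetic (illustrative; the package's internal implementation may differ, and ties are ignored):

```python
import numpy as np

def resample_top(r, p):
    """Given K log likelihoods r, return the survivor index each new parameter
    copies: s(ceil(p*k)), where s lists the top p*K scorers, as in Algorithm 7."""
    K = len(r)
    n_keep = int(p * K)                    # number of survivors, pK
    # s(1..pK): indices whose rank (count of strictly larger values) < pK
    s = [k for k in range(K) if np.sum(r > r[k]) < n_keep]
    # the k-th new parameter copies survivor s(ceil(p*k)), with 1-based k
    return [s[int(np.ceil(p * (k + 1))) - 1] for k in range(K)]

r = np.array([-3.0, -1.0, -4.0, -2.0])     # K = 4 log likelihoods
print(resample_top(r, 0.5))                # [1, 1, 3, 3]: two best, copied twice each
```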
### 4.4 Inference algorithms inherited from pomp
Objects of class ‘spatPomp’ inherit methods for inference from class ‘pomp’
objects implemented in the pomp package. As discussed earlier, IF2 (Ionides
_et al._ , 2015) enables parameter estimation in the frequentist sense and has
been used in numerous applications. It can be used to check the capabilities
of newer and more scalable inference methods on smaller examples for which IF2
is known to be effective. Extensions for Bayesian inference of the currently
implemented high-dimensional particle filter methods (GIRF, ABF, EnKF, BPF)
are not yet available. Bayesian inference is available in spatPomp using the
approximate Bayesian computation (ABC) method inherited from pomp, abc(). ABC
has previously been used for spatiotemporal inference (Brown _et al._ , 2018)
and can also serve as a baseline method. However, ABC is a feature-based
method that may lose substantial information compared to full-information
methods that work with the full likelihood function.
## 5 Demonstrating data analysis tools on a toy model
We illustrate key capabilities of spatPomp using a model for correlated
Brownian motions. This allows us to demonstrate a data analysis in a simple
context where we can compare results with a standard particle filter as well
as validate all methods against the exact solutions which are analytically
available. Here we defer the details of model construction by using a model
pre-specified within the package. Section 6 proceeds to develop a model
exemplifying the kinds of nonlinear, non-Gaussian spatiotemporal dynamics of
moderately high dimension, which are the target of spatPomp. Consider spatial
units $1,\dots,U$ located evenly around a circle, with
$\mathrm{dist}(u,\tilde{u})$ being the circle distance,
$\mathrm{dist}(u,\tilde{u})=\min\big{(}|u-\tilde{u}|,|u-\tilde{u}+U|,|u-\tilde{u}-U|\big{)}.$
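In code, this circle distance is simply (illustrative Python):

```python
def circle_dist(u, v, U):
    """Distance between units u and v placed evenly around a circle of U units."""
    return min(abs(u - v), abs(u - v + U), abs(u - v - U))

print(circle_dist(1, 10, 10))  # 1: units 1 and 10 are neighbors on the circle
print(circle_dist(2, 7, 10))   # 5: maximal separation for U = 10
```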
We investigate a SpatPOMP where the latent process is a $U$-dimensional
Brownian motion $\boldsymbol{X}(t)$ having correlation that decays with
distance. Specifically,
$dX_{u}(t)=\sum_{\tilde{u}=1}^{U}\rho^{\mathrm{dist}(u,\tilde{u})}dW_{\tilde{u}}(t),$
where $W_{1}(t),\dots,W_{U}(t)$ are independent Brownian motions with
infinitesimal variance $\sigma^{2}$, and $|\rho|<1$. Using the notation in
Section 2, we suppose our measurement model for discrete-time observations of
the latent process is
$Y_{u,n}=X_{u,n}+\eta_{u,n}$
where $\eta_{u,n}\overset{\text{iid}}{\sim}{\mathrm{Normal}}(0,\tau^{2})$. The
model specification is completed by specifying the initial conditions,
$\\{X_{u}(0),u\in 1:U\\}$. A class ‘spatPomp’ object that simulates from this
model for $U=10$, with $N=20$ evenly-spaced observations one unit of time
apart, can be constructed using bm() and plotted using plot(), yielding
Figure 3:
R> bm10 <- bm(U=10, N=20)
R> plot(bm10)
Figure 3: Output of the plot() method on a class ‘spatPomp’ object
representing a simulation from a 10-dimensional correlated Brownian motions
model with 20 observations that are one unit time apart.
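For intuition, the same model is straightforward to simulate outside the package. A NumPy sketch under the parameter values used to simulate bm10 in this example ($\rho=0.4$, $\sigma=\tau=1$); the function name and structure are ours, purely for illustration:

```python
import numpy as np

def simulate_bm(U=10, N=20, rho=0.4, sigma=1.0, tau=1.0, seed=1):
    """Simulate U correlated Brownian motions observed with Gaussian error:
    dX_u = sum_v rho**dist(u,v) dW_v, with unit time between observations."""
    rng = np.random.default_rng(seed)
    # circle distances between units 0..U-1
    diff = np.subtract.outer(np.arange(U), np.arange(U))
    dist = np.minimum.reduce([np.abs(diff + k * U) for k in (-1, 0, 1)])
    A = rho ** dist                        # mixing matrix for the increments
    dW = rng.normal(0.0, sigma, (N, U))    # independent BM increments
    X = np.cumsum(dW @ A.T, axis=0)        # latent states at times 1..N
    Y = X + rng.normal(0.0, tau, (N, U))   # measurements Y_{u,n} = X_{u,n} + eta
    return X, Y
```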
Such plots can help the user qualitatively assess dynamics within and between
the units. plot() visualizes the results of coercing bm10 into a class
‘data.frame’ object by using the as.data.frame(bm10) method. More customized
plots can thus be created by using the many plotting options in R for class
‘data.frame’ objects. A detailed description of the components of the bm10
object can be obtained by invoking the spy() method from pomp as follows (the
output is suppressed to conserve space):
R> spy(bm10)
### 5.1 Computing the likelihood
bm10 contains all the necessary model components for likelihood estimation
using the algorithms discussed in Section 3. The standard particle filter,
GIRF, ABF, EnKF and BPF can be run to estimate the likelihood of the data at a
given parameter set. Here, we use the parameter set that was used to simulate
bm10 and show one likelihood evaluation for each method.
R> theta <- coef(bm10)
R> theta
rho sigma tau X1_0 X2_0 X3_0 X4_0 X5_0 X6_0 X7_0 X8_0
0.4 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
X9_0 X10_0
0.0 0.0
R> logLik(pfilter(bm10, params=theta, Np=1000))
[1] -391.7841
R> logLik(girf(bm10, params=theta, Np=100, Nguide=10, Ninter=10, lookahead=1))
[1] -381.0272
R> logLik(abf(bm10, params=theta, Nrep=100, Np=10))
[1] -391.0623
R> logLik(enkf(bm10, params=theta, Np=1000))
[1] -374.0955
R> logLik(bpfilter(bm10, params=theta, Np=1000, block_size=2))
[1] -379.5812
Figure 4: A: RMSE of log likelihood estimates from ABF, BPF, EnKF, GIRF and
particle filter on correlated Brownian motions of various dimensions. For a
given dimension, we run each method 5 times and calculate the RMSE against the
exact log likelihood (obtained via Kalman filter). Error bars represent the
variability of the RMSE. B: Log likelihood estimates from ABF, BPF, EnKF, GIRF
and particle filter compared to the exact likelihood obtained via Kalman
filter (in black).
We see considerable differences in these initial log likelihood estimates.
These might be explained by Monte Carlo variability or bias, and additional
investigation is needed to make an assessment. Both Monte Carlo uncertainty
and bias are typical of spatiotemporal filtering applications because of two
main factors. First, approximations that give a filtering method scalability
have non-negligible impact, primarily bias, on the resulting likelihood
estimates. Second, in high-dimensional filtering problems, a Monte Carlo
filter can be expected to have non-negligible uncertainty in the likelihood
estimate even for methods designed to be scalable. This variability translates
to a negative bias in log likelihood estimates, even for methods that are
unbiased for the likelihood in the natural scale, due to Jensen’s inequality.
Overall, all the filters we study have negative bias because they make
probabilistic forecasts which involve some approximation to the true forecast
distribution under the postulated model. The log likelihood is a proper
scoring rule for forecasts, meaning that the exact probabilistic forecast has
a higher expected log likelihood than an approximation, if the model is
correctly specified (Gneiting _et al._ , 2007).
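The Jensen's-inequality effect is easy to reproduce numerically: an estimator that is unbiased for the likelihood on the natural scale is biased downward on the log scale, by approximately half the log-scale variance. A small simulation with illustrative numbers of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
true_loglik = -100.0

# Unbiased likelihood estimates: L_hat = L * exp(Z - s^2/2) with Z ~ N(0, s^2)
# gives E[L_hat] = L exactly, but E[log L_hat] = log L - s^2/2.
s = 1.0
log_lhat = true_loglik + rng.normal(-s**2 / 2, s, size=100_000)

print(np.mean(log_lhat))   # close to -100.5: negatively biased on the log scale
```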
In practice, we execute multiple runs of each Monte Carlo filtering algorithm
to assess the Monte Carlo variance. Bias is harder to assess, except in toy
models when a precise likelihood evaluation is computationally tractable.
Figure 4 illustrates the result of a more practical exercise of likelihood
evaluation. We often start with a model for a small $U$ and evaluate the
likelihood many times for each algorithm to quantify the Monte Carlo
variability. We then develop our model for increasing $U$. As $U$ grows the
relative performances of the algorithms can vary, so we evaluate the
likelihood using all possible methods with several repetitions for a fixed
$U$. On this toy problem with analytically evaluable likelihood, we can
compare all likelihood evaluations with the exact likelihood computed using
the Kalman filter. As can be seen from Figure 4, as the difficulty of the
problem increases through the increase in the value of $U$, we quickly enter
the regime in which the particle filter does worse than the methods designed
specifically for SpatPOMP models. ABF trades off a slowly growing bias for a
reduced variance by conditioning on a local neighborhood; GIRF reduces
variance by using guide functions at intermediate time points to guide
prediction particles towards regions of high probability, but this can be
computationally costly; BPF approximates the joint filter distribution by
resampling independently between blocks of units; EnKF uses a Gaussian-
inspired update rule to improve computational efficiency, and on this Gaussian
problem the Gaussian approximation made by EnKF is valid, leading to strong
performance. In general, since each filtering method has its strengths and
limitations, it is worthwhile on a new problem to try them all.
Table 2: Comparison of computational resources of the filtering algorithms

Method | Resources (core-minutes) | Particles (per replicate) | Replicates | Guide particles | Lookahead
---|---|---|---|---|---
Particle Filter | 1.02 | 2000 | - | - | -
ABF | 28.67 | 30 | 500 | - | -
GIRF | 111.82 | 500 | - | 40 | 1
EnKF | 0.82 | 1000 | - | - | -
BPF | 1.06 | 1000 | - | - | -
Users will also need to keep in mind considerations about computational
resources used up by the available algorithms. Computing resources used by
each algorithm for Figure 4 are given in Table 2. Each algorithm was allowed
to use 8 CPU cores to evaluate all the likelihoods and the algorithmic
settings were fixed as shown in the table. The time-complexity of GIRF is
quadratic in $U$, due to the intermediate time step loop shown in the
pseudocode in Section 3.1, whereas the other algorithms scale linearly with
$U$ for a fixed algorithmic setting. In addition, GIRF is less scalable than
the other filter methods designed for SpatPOMP models. However, a positive
feature of GIRF is that it shares with PF the property that it targets the
exact likelihood, i.e., it is consistent for the exact log likelihood as the
number of particles grows and the Monte Carlo variance approaches zero. GIRF
may be a practical algorithm when the number of units prohibits PF but permits
effective use of GIRF. ABF is implemented such that each bootstrap replicate
is run on a CPU core and the results are combined at the end. Since the result
from each core is a $U\times N$ matrix, the user should supply more memory if
$U$ and/or $N$ are very large. EnKF and BPF generally run the quickest and
require the least memory. However, the Gaussian and independent blocks
assumptions, respectively, of the two algorithms must be reasonable to obtain
likelihood estimates with low bias.
### 5.2 Parameter inference
The correlated Brownian motions example also serves to illustrate parameter
inference using IGIRF. Suppose we have data from the correlated 10-dimensional
Brownian motions model discussed above. We are interested in estimating the
model parameters $\sigma$, $\tau$, $\rho$. The initial conditions,
$\\{{X}_{u}(0),u\in 1{\hskip 1.70717pt:\hskip 1.70717pt}U\\}$, can be
considered to be known such that these parameters will not undergo
perturbations in IGIRF.
We must construct a starting parameter set for our search.
R> start_params <- c(rho = 0.8, sigma = 0.4, tau = 0.2,
+ X1_0 = 0, X2_0 = 0, X3_0 = 0, X4_0 = 0, X5_0 = 0,
+ X6_0 = 0, X7_0 = 0, X8_0 = 0, X9_0 = 0, X10_0 = 0)
We can now run igirf(). Note that we set the parameter perturbation standard
deviation to zero for the initial conditions, which allows us to only estimate
our parameters of interest.
R> igirf_out <- igirf(
+ bm10,
+ params=start_params,
+ Ngirf=30,
+ Np=1000,
+ Ninter=10,
+ lookahead=1,
+ Nguide=50,
+ rw.sd=rw.sd(rho=0.02, sigma=0.02, tau=0.02,
+ X1_0=0, X2_0=0, X3_0=0, X4_0=0,
+ X5_0=0, X6_0=0, X7_0=0, X8_0=0, X9_0=0, X10_0=0),
+ cooling.type = "geometric",
+ cooling.fraction.50=0.5
+ )
The output of igirf() is an object of class ‘igirfd_spatPomp’. We can view the
final parameter estimate and obtain a likelihood evaluation at this estimate.
R> coef(igirf_out)[c(’rho’,’sigma’,’tau’)]
rho sigma tau
0.5560766 0.9642862 1.2031939
R> logLik(igirf_out)
[1] -383.996
To get a more accurate likelihood evaluation at the final estimate, the user
can run the filtering algorithms with greater computational effort. Since our
model is linear and Gaussian, the maximum likelihood estimate of our model and
the likelihood at this estimate can be found analytically. The maximum log
likelihood is -373.02. An enkf run at our igirf() parameter estimate yields a
log likelihood estimate of -380.91. This shortfall is a reminder that Monte
Carlo optimization algorithms should usually be replicated, and may be best
used with inference methodology that accommodates Monte Carlo error, as
discussed in Section 5.3.
A useful diagnostic of the parameter search is the record of improvement of
our parameter estimates during the course of an igirf() run. Each iteration
within an igirf run provides a parameter estimate and a likelihood evaluation
at that estimate. The plot method for a class ‘igirfd_spatPomp’ object shows
the convergence record of parameter estimates and their likelihood
evaluations.
R> plot(igirf_out, params = c("rho", "sigma", "tau"), ncol = 2)
Figure 5: The output of the plot() method on the object of class
‘igirfd_spatPomp’ that encodes our model for correlated Brownian motions
produces convergence traces for $\rho$, $\sigma$ and $\tau$, and the
corresponding log likelihoods. Over 30 iterations igirf() has allowed us to
get within a neighborhood of the maximum likelihood.
As shown in Figure 5, igirf() has allowed us to explore the parameter space
and climb significantly up the likelihood surface to within a small
neighborhood of the maximum likelihood. The run took 5.88 minutes on one CPU
core for this example with 10 spatial units. For larger models, one may need
to start multiple searches of the parameter space at various starting points,
using parallel runs of igirf() on a larger machine with multiple cores.
### 5.3 Monte Carlo profiles
Proper interpretation of a parameter estimate requires uncertainty estimates.
For instance, we may be interested in estimating confidence intervals for the
coupling parameters of spatiotemporal models. These are parameters that
influence the strength of the dependence between the latent dynamics in
different spatial units. In our correlated Brownian motions example, $\rho$
plays this role. The dependence between any two units is moderated by the
distance between the units and the value of $\rho$.
We can often estimate confidence intervals for parameters like $\tau$ and
$\sigma$ which drive the dynamics of each spatial unit. However, coupling
parameters can be hard to detect because any signal can be overwhelmed by the
inevitably high variance estimates of high-dimensional models. Full-
information inference methods like IGIRF which are able to mitigate high
variance issues in the filtering step can allow us to extract what limited
information is available on coupling parameters like $\rho$. Here we will
construct a profile likelihood for $\rho$ with a 95% confidence interval that
adjusts for Monte Carlo error.
A profile over a model parameter is a collection of maximized likelihood
evaluations at a range of values of the profiled parameter. For each fixed
value of this parameter, we maximize the likelihood over all the other
parameters. We often use multiple different starting points for each fixed
value of the profiled parameter.
Let us first design our profile over $\rho$ by setting the bounds over all
other model parameters from which we will draw starting values for likelihood
maximization. It can sometimes be useful to transform the other parameters to
an unconstrained scale by using pomp::partrans(). For instance, parameters
whose natural values are constrained to the non-negative real numbers can be
log-transformed to maximize them over the unconstrained real line.
R> # center of our hyperbox of starting parameter sets
R> theta <- c("rho" = 0.7, "sigma"=0.7, "tau"=0.6,
+ "X1_0"=0, "X2_0"=0, "X3_0"=0, "X4_0"=0, "X5_0"=0,
+ "X6_0"=0, "X7_0"=0, "X8_0"=0, "X9_0"=0, "X10_0"=0)
R> # set bounds of hyperbox of starting parameter sets for
R> # all non-profiled parameters (use estimation scale to set this)
R> estpars <- setdiff(names(theta),c("rho"))
R> theta_t <- pomp::partrans(bm10,theta,"toEst")
R> theta_t_hi <- theta_t_lo <- theta_t
R> theta_t_lo[estpars] <- theta_t[estpars] - log(2) # lower bound
R> theta_t_hi[estpars] <- theta_t[estpars] + log(2) # upper bound
R> theta_lo <- pomp::partrans(bm10, theta_t_lo, "fromEst")
R> theta_hi <- pomp::partrans(bm10, theta_t_hi, "fromEst")
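On the estimation (log) scale, the $\pm\log 2$ bounds correspond to halving and doubling each parameter on the natural scale. A quick check of this arithmetic (NumPy, illustrative; pomp::partrans() performs the equivalent transforms in the code above):

```python
import numpy as np

theta = np.array([0.7, 0.6])            # e.g. sigma and tau, natural scale
theta_t = np.log(theta)                 # "toEst": log transform
theta_lo = np.exp(theta_t - np.log(2))  # "fromEst" of the lower bound
theta_hi = np.exp(theta_t + np.log(2))  # "fromEst" of the upper bound

print(theta_lo)  # each parameter halved
print(theta_hi)  # each parameter doubled
```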
theta_lo and theta_hi effectively specify a “hyperbox” from which we can draw
starting parameter sets for our maximization. Next, we use
pomp::profile_design() to sample our starting points from this hyperbox. The
first argument is the name of the parameter to be profiled over and is set to
a range of feasible values for that parameter. The second and third arguments
take the hyperbox bounds and the final argument is used to determine how many
unique starting positions must be drawn for each value of $\rho$.
R> pomp::profile_design(
+ rho=seq(from=0.2,to=0.5,length=10),
+ lower=theta_lo,
+ upper=theta_hi,
+ nprof=3
+ ) -> pd
pd is now a class ‘data.frame’ object representing random starting positions
for our maximizations. Since we draw 3 starting points for each of the 10
different values of $\rho$, we expect 30 rows in pd.
R> dim(pd)
[1] 30 14
R> head(pd)[c(’rho’,’sigma’,’tau’)]
rho sigma tau
1 0.2000000 0.8949604 0.4172217
2 0.2000000 0.5589302 0.8485934
3 0.2000000 0.9093869 0.9267436
4 0.2333333 0.9045623 0.3579629
5 0.2333333 0.9893592 1.1255378
6 0.2333333 0.8490907 0.8455325
We can now run igirf() at each starting point. These jobs can be run in
parallel using foreach and %dopar% from the foreach package (Wallig and
Weston, 2020), collecting all the results together using bind_rows from dplyr
(Wickham _et al._ , 2020). Once we get a final parameter estimate from each
igirf run, we can estimate the likelihood at this point by running, say,
enkf(), 10 times and appropriately averaging the resulting log likelihoods.
R> foreach (
+ p=iter(pd,"row"),
+ .combine=dplyr::bind_rows
+ ) %dopar% {
+ library(spatPomp)
+ igirf_out <- igirf(bm10,
+ params=p,
+ Ngirf=bm_prof_ngirf,
+ Np=1000,
+ Nguide = 30,
+ rw.sd=rw.sd(sigma=0.02, tau=0.02),
+ cooling.type = "geometric",
+ cooling.fraction.50=0.5)
+
+ ## 10 EnKF log likelihood evaluations
+ ef <- replicate(10,
+ enkf(igirf_out,
+ Np = 2000))
+ ll <- sapply(ef,logLik)
+ ## logmeanexp to average log likelihoods
+ ## se=TRUE to estimate Monte Carlo variability
+ ll <- logmeanexp(ll, se = TRUE)
+
+ # Each igirf job returns one row
+ data.frame(
+ as.list(coef(igirf_out)),
+ loglik = ll[1],
+ loglik.se = ll[2]
+ )
+ } -> rho_prof
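The appropriate average of replicated log likelihood estimates is taken on the likelihood scale, i.e. $\log\big(\frac{1}{R}\sum_{r}\exp(\ell_{r})\big)$, computed stably by shifting by the maximum. A minimal Python re-implementation of what pomp's logmeanexp computes (ours, for illustration; it omits the se calculation):

```python
import math

def logmeanexp(ll):
    """Numerically stable log of the mean of exp(ll)."""
    m = max(ll)
    return m + math.log(sum(math.exp(x - m) for x in ll) / len(ll))

# Averaging log(1) and log(3) on the likelihood scale gives log(2),
# not the arithmetic mean of the logs.
print(logmeanexp([math.log(1), math.log(3)]))  # ≈ 0.693 = log 2
```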
rho_prof now contains parameter estimates that result from running igirf on
each starting parameter in pd and the corresponding log likelihood estimates.
R> dim(rho_prof)
[1] 30 17
R> print(head(rho_prof)[c("rho","sigma","tau","loglik")], row.names = F)
rho sigma tau loglik
0.2000000 1.1821368 0.9660797 -379.4979
0.2000000 0.8896483 1.2138036 -382.4032
0.2000000 1.0181503 0.9965749 -378.5496
0.2333333 1.5835910 0.7610113 -383.6311
0.2333333 1.0532616 0.8215984 -380.4950
0.2333333 1.1643568 0.9578773 -377.6221
We can now use the Monte Carlo adjusted profile confidence interval
methodology of Ionides _et al._ (2017) to construct a 95% confidence interval
for $\rho$.
R> rho_prof_mcap <- mcap(
+ lp=rho_prof[,"loglik"],
+ parameter=rho_prof[,"rho"]
+ )
R> rho_prof_mcap$ci
[1] 0.2663664 0.4879880
The 95% estimated confidence interval for $\rho$ is, therefore,
$($0.266,0.488$)$. Note that the data in bm10 are generated from a model with
$\rho=0.4$.
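For intuition, absent Monte Carlo error a profile confidence interval retains the profiled values whose maximized log likelihood lies within the $\chi^{2}_{1}$ cutoff of $1.92$ of the overall maximum; mcap() additionally fits a smoother through the noisy profile points and widens this cutoff to account for their Monte Carlo error. The following Python sketch implements only the unadjusted cutoff rule, not MCAP itself:

```python
import numpy as np

def profile_ci(param, loglik, cutoff=1.92):
    """Profile CI without Monte Carlo adjustment: take the best log likelihood
    at each profiled value, then retain values within `cutoff` of the maximum."""
    param, loglik = np.asarray(param), np.asarray(loglik)
    best = {p: loglik[param == p].max() for p in np.unique(param)}
    top = max(best.values())
    keep = [p for p, ll in best.items() if ll >= top - cutoff]
    return min(keep), max(keep)

# Synthetic quadratic profile peaked at rho = 0.4 (illustrative numbers).
rho = np.linspace(0.2, 0.6, 9)
ll = -380 - 100 * (rho - 0.4) ** 2
print(profile_ci(rho, ll))
```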
## 6 A spatiotemporal model of measles transmission
A complete spatPomp workflow involves roughly two major steps. The first is to
obtain data, postulate a class of models that could have generated the data
and bring these two pieces together via a call to spatPomp(). The second step
involves evaluating the likelihood at specific parameter sets and/or
maximizing likelihoods under the postulated class of models and/or
constructing a Monte Carlo adjusted confidence interval and/or performing a
hypothesis test by comparing maximized likelihoods in a constrained region of
parameter space with maximized likelihoods in the unconstrained parameter
space. We have shown examples of the second major step in a spatPomp workflow
in Section 5. We now show how to bring our data and models together via a
compartment model for coupled measles dynamics in the 5 largest cities in
England in the pre-vaccine era.
Compartment models for population dynamics divide up the population into
categories (called compartments) which are modeled as homogeneous. The rate of
flow of individuals between a pair of compartments may depend on the count of
individuals in other compartments. Compartment models have widespread
scientific applications, especially in the biological and health sciences
(Bretó _et al._ , 2009). Spatiotemporal compartment models can be called patch
models or metapopulation models in an ecological context, since the full
population is divided into a “population” of sub-populations. We develop a
spatiotemporal model for disease transmission dynamics of measles within and
between multiple cities, based on the model of Park and Ionides (2020) which
adds spatial interaction to the compartment model presented by He _et al._
(2010). We use this example to demonstrate how to construct spatiotemporal
compartment models in spatPomp. The measles() function in spatPomp constructs
such an object, and here we consider the key steps in this construction.
Beyond the examples in the pomp and spatPomp packages, previous analyses using
pomp with published open-source code provide additional examples of
compartment models (Marino _et al._ , 2019).
### 6.1 Mathematical model for the latent process
Before discussing how to code the model, we first define it mathematically in
the time scale ${\mathbb{T}}=[0,T]$. First we describe how we model the
coupling (via travel) between cities. Let $v_{u\tilde{u}}$ denote the number
of travelers from city $u$ to $\tilde{u}$. Here, $v_{u\tilde{u}}$ is
constructed using the gravity model of Xia _et al._ (2004):
$v_{u\tilde{u}}=G\cdot\frac{\;\overline{\mathrm{dist}}\;}{\bar{P}^{2}}\cdot\frac{P_{u}\cdot
P_{\tilde{u}}}{\mathrm{dist}(u,\tilde{u})},$
where $\mathrm{dist}(u,\tilde{u})$ denotes the distance between city $u$ and
city $\tilde{u}$, $P_{u}$ is the average population for city $u$ across time,
$\bar{P}$ is the average population across cities, and
$\overline{\mathrm{dist}}$ is the average distance between a randomly chosen
pair of cities. In this version of the model, we model $v_{u\tilde{u}}$ as
fixed through time and symmetric between any two arbitrary cities, though a
natural extension would allow for temporal variation and asymmetric movement
between two cities. The gravitation constant $G$ is scaled with respect to
$\bar{P}$ and $\overline{\mathrm{dist}}$. The measles model divides the
population of each city into susceptible ($S$), exposed ($E$), infectious
($I$), and recovered/removed ($R$) compartments.
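The gravity flows are a deterministic function of the city populations and pairwise distances. A direct NumPy transcription of the formula above (the function and variable names are ours, not from the package):

```python
import numpy as np

def gravity_flows(G, pop, dist):
    """v[u, v] = G * (mean_dist / mean_pop**2) * pop_u * pop_v / dist(u, v),
    following the gravity model of Xia et al. (2004); the diagonal
    (self-travel) is left at zero."""
    U = len(pop)
    off = ~np.eye(U, dtype=bool)                   # off-diagonal mask
    mean_dist = dist[off].mean()                   # average pairwise distance
    v = np.zeros((U, U))
    v[off] = G * mean_dist / pop.mean() ** 2 * \
        np.outer(pop, pop)[off] / dist[off]
    return v
```

Since the distance matrix is symmetric and populations are fixed, the resulting flows are symmetric and constant in time, matching the modeling assumptions stated above.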
Next we discuss the dynamics within each city, including where the
$v_{u\tilde{u}}$ terms eventually feature in (2). The latent state process is
$\\{\boldsymbol{X}(t{\hskip 1.42262pt;\hskip
1.42262pt}\theta),t\in{\mathbb{T}}\\}=\\{\big{(}X_{1}(t{\hskip
1.42262pt;\hskip 1.42262pt}\theta),\dots,X_{U}(t{\hskip 1.42262pt;\hskip
1.42262pt}\theta)\big{)},t\in{\mathbb{T}}\\}$ with $X_{u}(t{\hskip
1.42262pt;\hskip 1.42262pt}\theta)=\big{(}S_{u}(t{\hskip 1.42262pt;\hskip
1.42262pt}\theta),E_{u}(t{\hskip 1.42262pt;\hskip
1.42262pt}\theta),I_{u}(t{\hskip 1.42262pt;\hskip
1.42262pt}\theta),R_{u}(t{\hskip 1.42262pt;\hskip 1.42262pt}\theta)\big{)}$.
The numbers of individuals in the compartments for city $u$ at time $t$ are
denoted by $S_{u}(t)$, $E_{u}(t)$, $I_{u}(t)$, and $R_{u}(t)$. The population
dynamics are described by the following set of stochastic differential
equations:
$\left.\begin{array}{lllllll}
dS_{u}(t)&=&dN_{BS,u}(t)&-&dN_{SE,u}(t)&-&dN_{SD,u}(t)\\
dE_{u}(t)&=&dN_{SE,u}(t)&-&dN_{EI,u}(t)&-&dN_{ED,u}(t)\\
dI_{u}(t)&=&dN_{EI,u}(t)&-&dN_{IR,u}(t)&-&dN_{ID,u}(t)
\end{array}\right\\}\quad\text{for }u=1,\dots,U.$
Here, $N_{SE,u}(t)$, $N_{EI,u}(t)$, and $N_{IR,u}(t)$ denote the cumulative
number of transitions, between the compartments identified by the subscripts,
up to time $t$ in city $u$. When these are modeled as integer-valued, the
system of differential equations has step function solutions. The recruitment
of susceptible individuals into city $u$ is denoted by the counting process
$N_{BS,u}(t)$. Here, $B$ denotes a source of individuals that can enter the
susceptible population, primarily modeling births. Each compartment also has
an outflow, written as a transition to $D$, primarily representing death,
which occurs at a constant per-capita rate $\mu$. The number of recovered
individuals $R_{u}(t)$ in city $u$ is defined implicitly given the known
census population $P_{u}(t)=S_{u}(t)+E_{u}(t)+I_{u}(t)+R_{u}(t)$. $R_{u}(t)$
plays no direct role in the dynamics, beyond accounting for individuals not in
any of the other classes.
Pseudocode for one rprocess Euler time step for the measles spatPomp object.
For $u$ in $1:U$:
i. Draw unbiased, independent, multiplicative noise for each
$\mu_{SE,u}(s_{m+1})$:
$\Delta\Gamma_{Q_{1}Q_{2},u}\sim\mathrm{Gamma}\big(\frac{\delta}{\sigma_{Q_{1}Q_{2}}^{2}},\sigma_{Q_{1}Q_{2}}^{2}\big)$.
Define $\Delta\Gamma_{Q_{i}Q_{j},u}=\delta$ for $(i,j)\neq(1,2)$.
ii. Draw one-step transitions from $\tilde{S}_{u}(s_{m})$, $\tilde{E}_{u}(s_{m})$
and $\tilde{I}_{u}(s_{m})$:
$\Big(\Delta\tilde{N}_{Q_{1}Q_{2},u},\Delta\tilde{N}_{Q_{1}Q_{5},u},\tilde{S}_{u}(s_{m})-\Delta\tilde{N}_{Q_{1}Q_{2},u}-\Delta\tilde{N}_{Q_{1}Q_{5},u}\Big)\sim\mathrm{Multinomial}\Big(\tilde{S}_{u}(s_{m}),p_{Q_{1}Q_{2},u},p_{Q_{1}Q_{5},u},1-p_{Q_{1}Q_{2},u}-p_{Q_{1}Q_{5},u}\Big)$;
$\Big(\Delta\tilde{N}_{Q_{2}Q_{3},u},\Delta\tilde{N}_{Q_{2}Q_{5},u},\tilde{E}_{u}(s_{m})-\Delta\tilde{N}_{Q_{2}Q_{3},u}-\Delta\tilde{N}_{Q_{2}Q_{5},u}\Big)\sim\mathrm{Multinomial}\Big(\tilde{E}_{u}(s_{m}),p_{Q_{2}Q_{3},u},p_{Q_{2}Q_{5},u},1-p_{Q_{2}Q_{3},u}-p_{Q_{2}Q_{5},u}\Big)$;
$\Big(\Delta\tilde{N}_{Q_{3}Q_{4},u},\Delta\tilde{N}_{Q_{3}Q_{5},u},\tilde{I}_{u}(s_{m})-\Delta\tilde{N}_{Q_{3}Q_{4},u}-\Delta\tilde{N}_{Q_{3}Q_{5},u}\Big)\sim\mathrm{Multinomial}\Big(\tilde{I}_{u}(s_{m}),p_{Q_{3}Q_{4},u},p_{Q_{3}Q_{5},u},1-p_{Q_{3}Q_{4},u}-p_{Q_{3}Q_{5},u}\Big)$,
where
$\displaystyle p_{Q_{i}Q_{j},u}=\Big(1-\exp\big(-{\textstyle\sum_{k}}\mu_{Q_{i}Q_{k},u}(s_{m+1})\Delta\Gamma_{Q_{i}Q_{k},u}\big)\Big)\frac{\mu_{Q_{i}Q_{j},u}(s_{m+1})\Delta\Gamma_{Q_{i}Q_{j},u}}{\sum_{k}{\mu_{Q_{i}Q_{k},u}(s_{m+1})\Delta\Gamma_{Q_{i}Q_{k},u}}}.$
iii. New entries into the susceptible class via birth:
$\Delta\tilde{N}_{BQ_{1},u}\sim\mathrm{Poisson}(\mu_{BQ_{1},u}(s_{m+1})\cdot\delta)$.
iv. Update compartments by the one-step transitions:
$\tilde{S}_{u}(s_{m+1})=\tilde{S}_{u}(s_{m})-\Delta\tilde{N}_{Q_{1}Q_{2},u}-\Delta\tilde{N}_{Q_{1}Q_{5},u}+\Delta\tilde{N}_{BQ_{1},u}$
$\tilde{E}_{u}(s_{m+1})=\tilde{E}_{u}(s_{m})-\Delta\tilde{N}_{Q_{2}Q_{3},u}-\Delta\tilde{N}_{Q_{2}Q_{5},u}+\Delta\tilde{N}_{Q_{1}Q_{2},u}$
$\tilde{I}_{u}(s_{m+1})=\tilde{I}_{u}(s_{m})-\Delta\tilde{N}_{Q_{3}Q_{4},u}-\Delta\tilde{N}_{Q_{3}Q_{5},u}+\Delta\tilde{N}_{Q_{2}Q_{3},u}$
$\tilde{R}_{u}(s_{m+1})=P_{u}(s_{m+1})-\tilde{S}_{u}(s_{m+1})-\tilde{E}_{u}(s_{m+1})-\tilde{I}_{u}(s_{m+1})$
Box 1: One increment, from time $s_{m}=m\delta$ to $s_{m+1}=s_{m}+\delta$, of
the Euler scheme whose limit as $\delta$ approaches zero is our
continuous-time measles latent process model. For notational
convenience, $Q_{1}$, $Q_{2}$, $Q_{3}$, $Q_{4}$ and $Q_{5}$ represent
susceptible (S), exposed (E), infectious (I), recovered (R) and natural death
(D) statuses, respectively. We keep track of changes to $\tilde{S}_{u}$,
$\tilde{E}_{u}$, $\tilde{I}_{u}$ and $\tilde{R}_{u}$, the numerical analogues
of $S_{u}$, $E_{u}$, $I_{u}$ and $R_{u}$ in our mathematical model, by
updating each compartment using dynamic rates and our population covariate.
The dynamics are coupled via $\mu_{SE,u}$ terms that incorporate travel of
infectives from other units. Here, $\mathrm{Gamma}(\alpha,\beta)$ is the gamma
distribution with mean $\alpha\beta$ and variance $\alpha\beta^{2}$. More
information about the gamma, multinomial and Poisson distributions can be
found in Casella and Berger (1990). The instructions in this box are encoded
in the measles_rprocess C snippet in the following subsection.
A continuous time latent process model is defined as the limit of the Euler
scheme in Box 1 as the Euler time increment approaches zero. We use tildes to
distinguish the numerical solution from the continuous time model. The scheme
involves initializing numerical analogues $\tilde{S}_{u}(0)$,
$\tilde{E}_{u}(0)$, $\tilde{I}_{u}(0)$,
$\tilde{R}_{u}(0)=P_{u}(0)-\tilde{S}_{u}(0)-\tilde{E}_{u}(0)-\tilde{I}_{u}(0)$
and executing the one-step transitions in Box 1 at time increments of $\delta$
until time $T$. In the limit as $\delta$ approaches zero, this results in a
model with infinitesimal mean and variance given by
$\begin{split}\mathbb{E}\left[N_{SE,u}(t+dt)-N_{SE,u}(t)\right]&\approx\mu_{SE,u}(t)\,S_{u}(t)\,dt+o(dt)\\
\mathbb{V}\left[N_{SE,u}(t+dt)-N_{SE,u}(t)\right]&\approx\big[\mu_{SE,u}(t)\,S_{u}(t)+\mu^{2}_{SE,u}(t)\,S^{2}_{u}(t)\,\sigma^{2}_{SE}\big]\,dt+o(dt),\\
\text{where }\mu_{SE,u}(t)&=\beta(t)\left[\frac{I_{u}}{P_{u}}+\sum_{\tilde{u}\neq u}\frac{v_{u\tilde{u}}}{P_{u}}\left\\{\frac{I_{\tilde{u}}}{P_{\tilde{u}}}-\frac{I_{u}}{P_{u}}\right\\}\right].\end{split}$
(2)
We use an integrated noise process with independent gamma-distributed
increments to model extrademographic stochasticity on the rate of transition
from the susceptible class to the exposed class, following Bretó _et al._
(2009). Extrademographic stochasticity permits overdispersion (McCullagh and
Nelder, 1989), which is often appropriate for stochastic models of discrete
populations. The ‘$\approx$’ in the two approximations above is a consequence
of the extrademographic noise; as $\sigma_{SE}$ becomes small, the
approximations approach equality (Bretó and Ionides, 2011). Here, $\beta(t)$
denotes the seasonal
transmission coefficient (He _et al._ , 2010).
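The gamma parameterization in Box 1 makes the noise unbiased: since $\mathrm{Gamma}(\alpha,\beta)$ has mean $\alpha\beta$ and variance $\alpha\beta^{2}$, an increment $\Delta\Gamma\sim\mathrm{Gamma}(\delta/\sigma^{2},\sigma^{2})$ has mean $\delta$ and variance $\sigma^{2}\delta$. A C sketch of this bookkeeping (function names are ours; pomp's rgammawn draws such increments):

```c
#include <assert.h>
#include <math.h>

/* Mean and variance of the gamma white-noise increment
   dGamma ~ Gamma(delta/sigma^2, sigma^2) used for extrademographic noise. */
double gamma_incr_mean(double delta, double sigma) {
    double shape = delta / (sigma * sigma), scale = sigma * sigma;
    return shape * scale;              /* equals delta: unbiased noise */
}
double gamma_incr_var(double delta, double sigma) {
    double shape = delta / (sigma * sigma), scale = sigma * sigma;
    return shape * scale * scale;      /* equals sigma^2 * delta */
}
```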
### 6.2 Mathematical model for the measurement process
The discrete set of observation times is $t_{1:N}=\\{t_{n},n=1,\dots,N\\}$.
The observations for city $u$ are bi-weekly new case counts. The observation
process $\\{\boldsymbol{Y}_{n}=Y_{1:U,n},n\in 1:N\\}$ can be written
$\\{\boldsymbol{Y}_{n}=\mathrm{cases}_{1:U,n},n\in 1:N\\}$. We denote the
number of true transitions from compartment I to compartment R accumulated
between an observation time and some time $t$ before the next observation
time by $C_{u}(t)=N_{IR,u}(t)-N_{IR,u}(\lfloor t\rfloor)$, where $\lfloor
t\rfloor$ is the greatest element of $t_{1:N}$ that is less than $t$.
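As a concrete reading of the $\lfloor t\rfloor$ notation, the following C sketch (ours, not spatPomp code) scans a sorted vector of observation times for the greatest one below $t$:

```c
#include <assert.h>

/* Greatest element of the sorted observation times t_{1:N} that is < t.
   Assumes t exceeds the first observation time. */
double obs_floor(const double *times, int N, double t) {
    double best = times[0];
    for (int n = 0; n < N; n++)
        if (times[n] < t) best = times[n];  /* last time still below t */
    return best;
}
```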
Our measurement model assumes that a certain fraction, $\rho$, called the
reporting probability, of the transitions from the infectious compartment to
the recovered compartment were, on average, counted as reported cases. Our
measurement model is:
$\mathrm{cases}_{u,n}{\,|\,}C_{u}(t_{n})=c\sim{\mathrm{Normal}}(\rho\,c,\rho\,(1-\rho)\,c+(\psi\,\rho\,c)^{2}),$
where $\psi$ is an overdispersion parameter that allows us to have measurement
variance that is greater than the variance of the binomial distribution with
number of trials $c$ and success probability $\rho$.
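The mean and variance of this measurement model can be computed directly; a self-contained C sketch (function names ours) mirroring the formula above:

```c
#include <assert.h>
#include <math.h>

/* cases | C = c  ~  Normal(rho*c, rho*(1-rho)*c + (psi*rho*c)^2) */
double meas_mean(double rho, double c) { return rho * c; }
double meas_var(double rho, double psi, double c) {
    double m = rho * c;
    return rho * (1.0 - rho) * c + psi * psi * m * m;
}
```

At $\psi=0$ the variance reduces to the binomial value $\rho(1-\rho)c$; any $\psi>0$ inflates it, which is the overdispersion described above.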
### 6.3 Construction of a measles spatPomp object
The construction of class ‘spatPomp’ objects is similar to the construction of
class ‘pomp’ objects discussed by King _et al._ (2016). Here, we focus on the
distinctive features of SpatPOMP models.
Suppose for our example below that we have bi-weekly measles case counts from
$U=5$ cities in England as reported by Dalziel _et al._ (2016) in the object
measles_cases of class ‘data.frame’. Each city has about 15 years (391 bi-
weeks) of data with no missing data. The first few rows of this data are shown
here. We see the column corresponding to time is called year and is measured
in years (two weeks is equivalent to 0.038 years).
year city cases
1950.000 LONDON 96
1950.000 BIRMINGHAM 179
1950.000 LIVERPOOL 533
1950.000 MANCHESTER 22
1950.000 LEEDS 17
1950.038 LONDON 60
We can construct a spatPomp object by supplying three minimal requirements in
addition to our data above: the column names corresponding to the spatial
units and times of each observation (‘city’ and ‘year’ in this case) and the
time at which the latent dynamics are supposed to begin. Here we set this to
two weeks before the first recorded observations.
R> measles5 <- spatPomp(
+ data=measles_cases,
+ units='city',
+ times='year',
+ t0=min(measles_cases$year)-1/26)
We can successively add each model component to measles5 with a call to
spatPomp() on measles5 with the argument for each component. To avoid
repetition, we will construct all of our model components and supply them all
at once in a later call to spatPomp().
First, we consider covariates. Suppose that we have covariate information for
each city at each observation time in a class ‘data.frame’ object called
measles_covar. In this case, we have census population and lagged birthrate
data. We consider lagged birthrate because we assume children enter the
susceptible pool when they are old enough to go to school.
year city pop lag_birthrate
1950.000 LONDON 3389306.0 70571.23
1950.000 BIRMINGHAM 1117892.5 24117.23
1950.000 LIVERPOOL 802064.9 19662.96
1950.000 MANCHESTER 704468.0 15705.46
1950.000 LEEDS 509658.5 10808.73
1950.038 LONDON 3388407.4 70265.20
If covariate information is not reported at the same frequency as the
measurement data, spatPomp will linearly interpolate the covariate data, as is
the default behavior in pomp.
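The interpolation itself is ordinary linear interpolation between the two covariate rows that bracket a requested time; a minimal C sketch (ours, not spatPomp's internal code):

```c
#include <assert.h>
#include <math.h>

/* Linear interpolation of a covariate value at time t, given bracketing
   reporting times t0 < t1 with covariate values y0 and y1. */
double covar_interp(double t, double t0, double t1, double y0, double y1) {
    return y0 + (y1 - y0) * (t - t0) / (t1 - t0);
}
```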
For ease of access, the spatial unit names are mapped to the entries
$1,\dots,U$. The mapping for each unit can be found by extracting the unit’s
position in:
R> unit_names(measles5)
We now move from preparing our covariates to writing our model components. We
shall use C snippets to specify our model components due to the computational
advantages discussed in Section 2.5. spatPomp compiles the C snippets when
building the class ‘spatPomp’ object. Before coding up our model components
let us specify some global variables in C that will be accessible to all model
components. The globals argument to a spatPomp() call can be used to supply
these. A global argument that is automatically created based on the units
column of our observed data is the U variable, which encodes the number of
spatial units. Since the movement matrix
$\big(v_{u,\tilde{u}}\big)_{u,\tilde{u}\in 1:U}$ is calculable up to the
parameter $G$, we can define a two-dimensional C array called v_by_g that
supplies each $\big(\frac{v_{u,\tilde{u}}}{G}\big)_{u,\tilde{u}\in 1:U}$ in a
C snippet called measles_globals.
R> measles_globals <- Csnippet("
+ const double v_by_g[5][5] = {
+ {0,2.205,0.865,0.836,0.599},
+ {2.205,0,0.665,0.657,0.375},
+ {0.865,0.665,0,1.118,0.378},
+ {0.836,0.657,1.118,0,0.580},
+ {0.599,0.375,0.378,0.580,0}
+ };
+ ")
We now construct a C snippet for initializing the latent process at time
$t_{0}$, which corresponds to the t0 argument above. This involves drawing
from $f_{\boldsymbol{X}_{0}}(\boldsymbol{x}_{0}\,;\theta)$. The parameter
vector $\theta$ includes initial proportions
of the population in each of the four compartments for each city. The names of
these initial value parameters (IVPs) will be passed in alongside other
parameters to the paramnames argument of the spatPomp() constructor with names
S1_0, $\dots$, S5_0, E1_0, $\dots$, E5_0, I1_0, $\dots$, I5_0 and R1_0,
$\dots$, R5_0. We can use spatPomp_Csnippet() to assign the latent states
$S_{u}(0)$, $E_{u}(0)$, $I_{u}(0)$, and $R_{u}(0)$ to the numbers implied by
the corresponding IVPs. We do this via the unit_statenames argument of
spatPomp_Csnippet(), which, in our example below, receives the vector c("S",
"E", "I", "R", "C", "W"). This function recognizes that all $S_{u}(0),u\in
1{\hskip 1.70717pt:\hskip 1.70717pt}U$ are stored contiguously in a C array
(with names S1, $\dots$, S5) and gives us access to S1 via S[0]. S2, $\dots$,
S5 can then be accessed as S[1], $\dots$, S[U-1]. Similarly, it provides us
access to E1, I1 and R1 via E[0], I[0] and R[0].
The unit_ivpnames argument of spatPomp_Csnippet() serves a similar purpose. If
the user provides paramnames to the spatPomp() constructor that includes IVP
names stored contiguously, their corresponding values are also stored in a C
array that can be traversed easily. Setting unit_ivpnames = c("S") then gives
us access to the initial value parameters corresponding to the susceptible
class for all units (i.e. S1_0, …, S5_0) via S_0[0], …, S_0[U-1].
Finally, the unit_covarnames argument of spatPomp_Csnippet() similarly allows
us to have access to pop1, which is the population covariate for our first
city, via pop[0]. The populations of other cities can then be found via
pop[1], …, pop[U-1].
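The pointer arithmetic behind these conveniences is simple: with statenames expanded as S1, …, S5, E1, …, E5, and so on, state group $k$ for unit $u$ sits at offset $kU+u$ of the flat array. A C sketch of the index rule (ours, illustrating the layout rather than spatPomp's internals):

```c
#include <assert.h>

/* Flat-array index of the u-th unit's entry in state group `group`
   (group 0 = S, 1 = E, ...), for num_units spatial units, when each
   group's per-unit values are stored contiguously. */
int flat_index(int num_units, int group, int u) {
    return group * num_units + u;
}
```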
These arguments to spatPomp_Csnippet() allow us to have a code argument that
focuses on specifying the model component. Here, we are able to write a few
lines relating the latent states for each city at the initial time to the
population in each city and the IVPs.
R> measles_rinit <- spatPomp_Csnippet(
+ unit_statenames = c("S", "E", "I", "R", "C", "W"),
+ unit_ivpnames = c("S", "E", "I", "R"),
+ unit_covarnames = c("pop"),
+ code = "
+ for (int u=0; u<U; u++) {
+ S[u] = nearbyint(pop[u]*S_0[u]);
+ E[u] = nearbyint(pop[u]*E_0[u]);
+ I[u] = nearbyint(pop[u]*I_0[u]);
+ R[u] = nearbyint(pop[u]*R_0[u]);
+ W[u] = 0;
+ C[u] = 0;
+ }
+ "
+ )
The array variable C corresponds to $C_{u}(t)$ defined in Section 6.2. The
W array variable corresponds to the integrated white noise process with
independent gamma increments that helps us model extrademographic
stochasticity in the latent process. We will later provide C and W to the
unit_accumvars argument of the spatPomp() constructor. In pomp parlance, C and
W are referred to as accumulator variables because they store changes over the
course of a measurement period instead of over the full time scale.
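The accumulator behavior can be pictured as follows: within a measurement period each Euler step adds its transitions, and the variable is zeroed when the next period begins. A C sketch (ours) of one period's bookkeeping:

```c
#include <assert.h>

/* Sum the per-step I-to-R transitions over one measurement period,
   starting from an accumulator that was reset to zero at period start. */
double period_incidence(const double *step_transitions, int n_steps) {
    double C = 0.0;                     /* reset at period start */
    for (int i = 0; i < n_steps; i++)
        C += step_transitions[i];       /* one Euler step's contribution */
    return C;
}
```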
The rprocess C snippet has to encode only a rule for a single Euler increment
from the process model. Further, spatPomp provides C definitions of all
parameters (e.g. amplitude) in addition to the state variables and covariates,
so the user need only define additional variables used.
R> measles_rprocess <- spatPomp_Csnippet(
+ unit_statenames = c("S", "E", "I", "R", "C", "W"),
+ unit_covarnames = c("pop", "lag_birthrate"),
+ code = "
+ double beta, br, seas, foi, dw, births;
+ double rate[6], trans[6];
+ int u,v;
+
+ // school term-time seasonality
+ t = (t-floor(t))*365.25;
+ if ((t>=7&&t<=100) || (t>=115&&t<=199) ||
+ (t>=252&&t<=300) || (t>=308&&t<=356))
+ seas = 1.0+amplitude*0.2411/0.7589;
+ else
+ seas = 1.0-amplitude;
+
+ // transmission rate
+ beta = R0*(gamma+mu)*seas;
+
+ for (u= 0 ; u < U ; u++) {
+ br = lag_birthrate[u];
+
+ // expected force of infection
+ foi = (I[u])/pop[u];
+ for (v=0; v < U ; v++) {
+ if(v != u)
+ foi += g * v_by_g[u][v] * (I[v]/pop[v] -
+ I[u]/pop[u]) / pop[u];
+ }
+
+ // white noise (extrademographic stochasticity)
+ dw = rgammawn(sigmaSE,dt);
+ rate[0] = beta*foi*dw/dt; // stochastic force of infection
+ rate[1] = mu; // natural S death
+ rate[2] = sigma; // rate of ending of latent stage
+ rate[3] = mu; // natural E death
+ rate[4] = gamma; // recovery
+ rate[5] = mu; // natural I death
+
+ // Poisson births
+ births = rpois(br*dt);
+
+ // transitions between classes
+ reulermultinom(2,S[u],&rate[0],dt,&trans[0]);
+ reulermultinom(2,E[u],&rate[2],dt,&trans[2]);
+ reulermultinom(2,I[u],&rate[4],dt,&trans[4]);
+
+ S[u] += births - trans[0] - trans[1];
+ E[u] += trans[0] - trans[2] - trans[3];
+ I[u] += trans[2] - trans[4] - trans[5];
+ R[u] = pop[u] - S[u] - E[u] - I[u];
+ W[u] += (dw - dt)/sigmaSE; // standardized i.i.d. white noise
+ C[u] += trans[4]; // true incidence
+ }
+ "
+ )
The measurement model is chosen to allow for overdispersion relative to the
binomial distribution with success probability $\rho$. Here, we show the C
snippet defining the unit measurement model. The lik variable is pre-defined
and is set to the evaluation of the unit measurement density in either the log
or natural scale depending on the value of give_log.
R> measles_dunit_measure <- spatPomp_Csnippet(
+ code = "
+ double m= rho*C;
+ double v = m*(1.0-rho+psi*psi*m);
+ lik = dnorm(cases,m,sqrt(v),give_log);
+ "
+ )
spatPomp will then multiply the unit measurement densities over $u\in 1:U$ to
compute the measurement density at each time.
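On the log scale this product is a sum of the unit log-densities, which is how such a density is typically coded; a minimal C sketch (ours):

```c
#include <assert.h>

/* Joint measurement log-density at one observation time: the sum over
   units of the unit measurement log-densities (a product on the natural
   scale). */
double joint_logdmeasure(const double *unit_logdens, int num_units) {
    double s = 0.0;
    for (int u = 0; u < num_units; u++) s += unit_logdens[u];
    return s;
}
```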
The user may instead directly supply a dmeasure that returns the product of
unit-specific measurement densities. We do so and store the resulting C
snippet in measles_dmeasure, but do not show the code here. This may be used,
for instance, to run pfilter in pomp. We use Csnippet() rather than
spatPomp_Csnippet() here since dmeasure is a model component inherited from
pomp rather than a unit-level component.
The runit_measure argument of the spatPomp() constructor can be supplied a C
snippet for generating data for a point in time and space given the latent
state at that point. This is useful for simulating data from a model. We
construct such a C snippet here.
R> measles_runit_measure <- Csnippet("
+ double cases;
+ double m= rho*C;
+ double v = m*(1.0-rho+psi*psi*m);
+ cases = rnorm(m,sqrt(v));
+ if (cases > 0.0) cases = nearbyint(cases);
+ else cases = 0.0;
+ ")
We construct the corresponding rmeasure and store it in the measles_rmeasure
variable. To run the methods EnKF, IEnKF, GIRF and IGIRF, we must supply more
specifications about the measurement model. The first two require
eunit_measure whereas the last two require skeleton and additionally
eunit_measure, vunit_measure, munit_measure when kind='moment' is the desired
kind of guide function for GIRF. As was the case with dunit_measure and
runit_measure, the C snippets for eunit_measure, vunit_measure and
munit_measure can be written assuming that the unit statenames and the u and t
variables have been pre-defined. Within the C snippet for eunit_measure, a
variable named ey is defined which should be coded to compute the quantity
$\mathbb{E}[Y_{u,n}{\,|\,}X_{u,n}]$ that eunit_measure is tasked to obtain. Similarly,
since vunit_measure computes a unit measurement variance given the parameter
set and the unit states, a variable named vc is pre-defined and should take
the value of the computed variance. Finally, munit_measure returns a moment-
matched parameter set given the existing parameter set, the unit states, and
an empirically computed variance. Variables with the names of the parameters
prefixed by M_ (e.g. M_tau) are pre-defined and assigned to the existing
parameter values. The user need only change the parameters that would take on
a new value after moment-matching.
For our measles example, eunit_measure multiplies the latent modeled cases by
the reporting rate.
R> measles_eunit_measure <- spatPomp_Csnippet(
+ code = "
+ ey = rho*C;
+ "
+ )
vunit_measure computes the variance of the unit observation given the unit
states and parameter set.
R> measles_vunit_measure <- spatPomp_Csnippet(
+ code = "
+ double m = rho*C;
+ vc = m*(1.0-rho+psi*psi*m);
+ "
+ )
munit_measure computes a moment-matched size parameter given an empirically
calculated variance.
R> measles_munit_measure <- spatPomp_Csnippet(
+ code = "
+ double binomial_var;
+ double m;
+ m = rho*C;
+ binomial_var = rho*(1-rho)*C;
+ if(vc > binomial_var) M_psi = sqrt(vc - binomial_var)/m;
+ "
+ )
The skeleton model component allows the user to specify a system of
differential equations, also called a vector field, which can be numerically
solved to evaluate a deterministic trajectory of the latent process at
requested times (King _et al._ , 2016). spatPomp_Csnippet() provides an
argument called unit_vfnames which provides pointers to vector field values
for the corresponding states. The time derivatives for the susceptible classes
for our five spatial units, DS1, $\dots$, DS5 can then be assigned using
DS[0], $\dots$, DS[U-1].
R> measles_skel <- spatPomp_Csnippet(
+ unit_statenames = c("S", "E", "I", "R", "C", "W"),
+ unit_vfnames = c("S", "E", "I", "R", "C", "W"),
+ unit_covarnames = c("pop", "lag_birthrate"),
+ code = "
+ double beta, br, seas, foi;
+ int u,v;
+
+ // term-time seasonality
+ t = (t-floor(t))*365.25;
+ if ((t>=7&&t<=100) || (t>=115&&t<=199) ||
+ (t>=252&&t<=300) || (t>=308&&t<=356))
+ seas = 1.0+amplitude*0.2411/0.7589;
+ else
+ seas = 1.0-amplitude;
+
+ // transmission rate
+ beta = R0*(gamma+mu)*seas;
+
+ // deterministic skeleton for each unit
+ for (u = 0 ; u < U ; u++) {
+ br = lag_birthrate[u];
+ foi = I[u]/pop[u];
+ for (v=0; v < U ; v++) {
+ if(v != u)
+ foi+=g*v_by_g[u][v]*(I[v]/pop[v]-I[u]/pop[u])/pop[u];
+ }
+
+ DS[u] = br - (beta*foi + mu)*S[u];
+ DE[u] = beta*foi*S[u] - (sigma+mu)*E[u];
+ DI[u] = sigma*E[u] - (gamma+mu)*I[u];
+ DR[u] = gamma*I[u] - mu*R[u];
+ DW[u] = 0;
+ DC[u] = gamma*I[u];
+ }
+ "
+ )
Finally we declare the names of states and parameters. This will allow the
compilation of the model components which refer to these names.
R> measles_unit_statenames <- c('S','E','I','R','C','W')
R> measles_covarnames <- paste0(rep(c("pop","lag_birthrate"),each=U),1:U)
R> measles_statenames <- paste0(rep(measles_unit_statenames,each=U),1:U)
As discussed above, some unit_statenames may be used to keep track of
accumulations of other unit_statenames over an observation time period. The
spatPomp() constructor provides an argument called unit_accumvars to handle
this behavior. Among other things, this extends pomp’s feature of resetting
such variables to zero at the beginning of a measurement period.
A parameter can sometimes be classified as an initial value parameter (IVP)
that determines only the initial condition, or a regular parameter (RP) that
contributes to the process or measurement model throughout the observed time
interval. This classification, when it exists, can be helpful since there are
inferential consequences. Precision on estimates of IVPs may not grow with
increasing number, $N$, of observations, whereas for RPs we expect increasing
information with increasing $N$.
R> measles_IVPnames <- paste0(measles_statenames[1:(4*U)],"_0")
R> measles_RPnames <- c("R0","amplitude","gamma","sigma","mu",
+ "sigmaSE","rho","psi","g")
R> measles_paramnames <- c(measles_RPnames,measles_IVPnames)
The pieces of the SpatPOMP model are now combined by a call to spatPomp().
R> measles5_full <- spatPomp(
+ data = measles5,
+ covar = measles_covar,
+ unit_statenames = measles_unit_statenames,
+ unit_accumvars = c("C","W"),
+ paramnames = measles_paramnames,
+ rinit = measles_rinit,
+ rprocess = euler(measles_rprocess, delta.t=2/365),
+ skeleton=vectorfield(measles_skel),
+ dunit_measure = measles_dunit_measure,
+ eunit_measure = measles_eunit_measure,
+ vunit_measure = measles_vunit_measure,
+ munit_measure = measles_munit_measure,
+ runit_measure = measles_runit_measure,
+ dmeasure = measles_dmeasure,
+ rmeasure = measles_rmeasure,
+ globals = measles_globals
+ )
### 6.4 Simulating measles data
Suppose we want to simulate data from our model for measles dynamics in the
$U=5$ cities and that we have a parameter set m5_params at which we are
simulating. We can compare our simulations to the data using the code below
and the plot() method on the class ‘spatPomp’ objects resulting from the
simulation and the measles5_full object (which includes the true observations)
respectively. For epidemiological settings, it helps to set the argument
log=TRUE of the plot() method to focus more on seasonal trends and less on
spikes in case counts. The resulting plots are shown in Figure 6. This figure
may indicate room for improvement in the current parameter vector or model
structure. As discussed before, such plots can be customized by working
directly with the class ‘data.frame’ output of as.data.frame().
R> m5_params
R0 amplitude gamma sigma mu sigmaSE rho
5.68e+01 5.54e-01 3.04e+01 2.89e+01 2.00e-02 2.00e-02 4.88e-01
psi g S1_0 S2_0 S3_0 S4_0 S5_0
1.16e-01 1.00e+02 2.97e-02 2.97e-02 2.97e-02 2.97e-02 2.97e-02
E1_0 E2_0 E3_0 E4_0 E5_0 I1_0 I2_0
5.17e-05 5.17e-05 5.17e-05 5.17e-05 5.17e-05 5.14e-05 5.14e-05
I3_0 I4_0 I5_0 R1_0 R2_0 R3_0 R4_0
5.14e-05 5.14e-05 5.14e-05 9.70e-01 9.70e-01 9.70e-01 9.70e-01
R5_0
9.70e-01
R> m5_sim <- simulate(measles5_full, params=m5_params)
Figure 6: A: Bi-weekly observed measles case counts in the five largest cities
in England. B: Simulations from the measles SEIR model encoded in the class
‘spatPomp’ object called measles5_full. The figure indicates that the parameter
vector and/or the model structure of our SEIR model need to be altered to get
patterns similar to those observed in the data.
## 7 Conclusion
The spatPomp package is both a tool for data analysis based on SpatPOMP models
and a principled computational framework for the ongoing development of
inference algorithms. The model specification language provided by spatPomp is
very general, and implementing a SpatPOMP model in spatPomp makes a wide range
of inference algorithms available. These two features facilitate objective
comparison of alternative models and methods.
As a development platform, spatPomp is particularly convenient for
implementing algorithms with the plug-and-play property, since models will
typically be defined by their rprocess simulator, together with rmeasure and
often dunit_measure, but can accommodate inference methods based on other
model components such as dprocess if they are available. As an open-source
project, the package readily supports expansion, and the authors invite
community participation in the spatPomp project in the form of additional
inference algorithms, improvements and extensions of existing algorithms,
additional model/data examples, documentation contributions and improvements,
bug reports, and feature requests.
Complex models and large datasets can challenge computational resources. With
this in mind, key components of the spatPomp package are written in C, and
spatPomp provides facilities for users to write models either in R or, for the
acceleration that typically proves necessary in applications, in C. Multi-
processor computing also becomes necessary for ambitious projects. The two
most common computationally intensive tasks are the assessment of Monte Carlo
variability and the investigation of the roles of starting values and other
algorithmic settings on optimization routines. These analyses require only
embarrassingly parallel computations and need no special discussion here.
Practical modeling and inference for spatiotemporal partially observed
systems, capable of handling scientifically motivated nonlinear, non-
stationary stochastic models, is the last open problem of the challenges
raised by Bjørnstad and Grenfell (2001). Recent studies have underscored the
need for deeper analyses of spatially coupled dynamics (Dalziel _et al._ ,
2016), more mechanistic spatial coupling models (Lau _et al._ , 2020), more
ways to incorporate covariate information of spatial coupling via cellular
data records (Wesolowski _et al._ , 2012, 2015) and more statistical inference
methodology that can handle increasing spatial dimension (Lee _et al._ ,
2020). The spatPomp package addresses these challenges by combining access to
modern algorithmic developments with a suitable framework for model
specification. The capability to carry out statistically efficient inference
for general spatiotemporal systems will promote the development, criticism,
refinement and validation of new spatiotemporal models. Nonlinear interacting
systems are hard to understand intuitively even when there are relatively few
units. Even the single-unit case, corresponding to a low-dimensional nonlinear
stochastic dynamic system with a low-dimensional observation process, has rich
mathematical theory. Statistically efficient inference for this low-
dimensional case was not generally available before the recent development of
iterated filtering and particle Markov Chain Monte Carlo methods, and
application of these methods has been assisted by their implementations in
pomp. We anticipate there is much to be gained scientifically by carrying out
modeling and inference for spatiotemporal processes with relatively few
spatial units but nevertheless surpassing the capabilities of previous
software. Facilitating this task is the primary goal of spatPomp.
## Acknowledgments
This work was supported by National Science Foundation grants DMS-1761603 and
DMS-1646108, and National Institutes of Health grants 1-U54-GM111274 and
1-U01-GM110712.
## References
* Anderson _et al._ (2009) Anderson J, Hoar T, Raeder K, Liu H, Collins N, Torn R, Avellano A (2009). “The data assimilation research testbed: A community facility.” _Bulletin of the American Meteorological Society_ , 90(9), 1283–1296. 10.1175/2009BAMS2618.1.
* Arulampalam _et al._ (2002) Arulampalam MS, Maskell S, Gordon N, Clapp T (2002). “A tutorial on particle filters for online nonlinear, non-Gaussian Bayesian tracking.” _IEEE Transactions on Signal Processing_ , 50, 174–188. 10.1109/78.978374.
* Asfaw _et al._ (2021) Asfaw K, Park J, Ho A, King AA, Ionides EL (2021). _spatPomp: Inference for Spatiotemporal Partially Observed Markov Processes_. R package version 0.21.0, URL https://CRAN.R-project.org/package=spatPomp.
arXiv:2101.01158
# A Hybrid Learner for Simultaneous Localization and Mapping
Thangarajah Akilan, Edna Johnson, Japneet Sandhu, Ritika Chadha, Gaurav Taluja
###### Abstract
Simultaneous localization and mapping (SLAM) is used to predict the dynamic
motion path of a moving platform based on the location coordinates and the
precise mapping of the physical environment. SLAM has great potential in augmented reality (AR), autonomous vehicles such as self-driving cars and drones, and autonomous navigation robots (ANR). This work introduces a hybrid learning
model that explores beyond feature fusion and conducts a multimodal weight
sewing strategy towards improving the performance of a baseline SLAM
algorithm. It carries out weight enhancement of the front-end feature extractor of the SLAM via mutation of different deep networks’ top layers. At
the same time, the trajectory predictions from independently trained models
are amalgamated to refine the location detail. Thus, the integration of the
aforesaid early and late fusion techniques under a hybrid learning framework
minimizes the translation and rotation errors of the SLAM model. This study
exploits some well-known deep learning (DL) architectures, including ResNet18,
ResNet34, ResNet50, ResNet101, VGG16, VGG19, and AlexNet for experimental
analysis. An extensive experimental analysis shows that the hybrid learner (HL)
achieves significantly better results than the unimodal approaches and
multimodal approaches with early or late fusion strategies. Moreover, to the best of our knowledge, the Apolloscape dataset used in this work has not previously been applied to SLAM with fusion techniques, which makes this work unique and insightful.
###### Index Terms:
SLAM, deep learning, hybrid learning
## I Introduction
SLAM is a technological process that enables a device to build a map of its environment while simultaneously computing its relative location on that map. It can be used for a range of applications, from self-driving vehicles (SDV) to space and maritime exploration, and from indoor positioning to search and rescue operations. The primary responsibility of a SLAM algorithm is to produce an understanding of a moving platform’s environment and the location of the vehicle by providing the value of its coordinates, thus improving the formation of a trajectory to determine the view at a particular instance. As SLAM is an emerging technology, numerous implementations have been introduced, but DL-based approaches surpass others through their efficiency in extracting the finest features and giving better results even in feature-scarce environments.
This study aims to improve the performance of a self-localization module based on the PoseNet [1] architecture through the concept of hybrid learning, which performs a multimodal weight mutation to enhance the weights of a feature extractor layer and refines the trajectory predictions by amalgamating multimodal scores. The ablation study is carried out on Apolloscape [2, 3]; to the best of our knowledge, no prior research has been performed on the self-localization repository of the Apolloscape dataset, on which the proposed HL is evaluated extensively. The experimental analysis presented in this work consists of three parts, in which the first two parts form the base for the third. The first part concentrates on an extensive evaluation of several DL models as feature extractors. The second part analyzes two proposed multimodal fusion approaches: (i) an early fusion via layer weight enhancement of the feature extractor, and (ii) a late fusion via score refinement of the trajectory (pose) regressor. Finally, the third part combines the early and late fusion models into a hybrid learner using an addition or multiplication operation. Here, the late fusion model harnesses five pretrained deep convolutional neural networks (DCNNs), viz. ResNet18, ResNet34, ResNet101, VGG16, and VGG19, as feature extractors for the pose regressor module. Meanwhile, the early fusion model and the HL focus on exploiting the best DCNNs, ResNet101 and VGG19, based on their individual performance on the Apolloscape self-localization dataset.
When analyzing the results of the early and late fusion models, it is observed
that the early fusion encompasses $14.842m$ of translation error and
$0.673^{\circ}$ of rotation error. On the other hand, the late fusion achieves
$9.763m$ of translation error and $0.945^{\circ}$ of rotation error. On
analyzing the hybrid learners, the additive hybrid learner (AHL) gets
$10.400m$ of translation error and $0.828^{\circ}$ of rotation error, whereas
the multiplicative hybrid learner (MHL) records $9.307m$ and $1.206^{\circ}$
of translation and rotation errors, respectively. Fusing the predictions of AHL and MHL, called hybrid learner full-fusion (HLFF), produces better results than all other models, with $7.762m$ and $0.886^{\circ}$ of translation and rotation errors, respectively.
The rest of the paper is organized as follows. Section II reviews relevant SLAM literature and provides basic details of PoseNet, unimodality, and multimodality. Section III elaborates on the proposed hybrid learner, including the required pre-processing operations. Section IV describes the experimental
setup and analyzes the obtained results from various models. Section V
concludes the research work.
## II Background
### II-A SLAM
Simultaneous localization and mapping is an active research domain in robotics
and artificial intelligence (AI). It enables a remotely automated moving
vehicle to be placed in an unknown environment and location. According to
Whyte et al. [4] and Montemerlo et al. [5], SLAM should build a consistent map
of this unknown environment and determine the location relative to the map.
Through SLAM, robots and vehicles can be truly and completely automated with minimal or no human intervention. However, map estimation involves various other challenges, such as large storage requirements and the need for precise location coordinates, which make SLAM a rather intriguing task, especially in the real-time domain.
Much research has been conducted worldwide to determine efficient methods to perform SLAM. In [6], Montemerlo et al. propose a model named FastSLAM as an
efficient solution to the problem. FastSLAM is a recursive algorithm that calculates the posterior distribution over the autonomous vehicle’s pose and landmark locations, and it scales logarithmically with the total number of landmarks. The algorithm relies on an exact factorization of the posterior
into a product of landmark distributions and a distribution over the paths of
the robot. Research on SLAM originates in the work of Smith and Cheeseman [7], who propose the use of the extended Kalman filter (EKF). It is based on
the notion that pose errors and errors in the map are correlated, and the
covariance matrix obtained by the EKF represents this covariance. There are
two main approaches for the localization of an autonomous vehicle: metric SLAM
and appearance-based SLAM [1]. This research focuses on appearance-based SLAM, which is trained on a set of visual samples collected at multiple discrete locations.
### II-B PoseNet
A neural network (NN) comprises several interconnected nodes and associated parameters, such as weights and biases. The weights are adjusted through a series of trials and experiments in the training phase so that the network can learn and can be used to predict outcomes at a later stage. There are various kinds of NNs available, for instance, the feed-forward neural network (FFNN), radial basis neural network (RBNN), DCNN, and recurrent neural network (RNN). Among them, DCNNs have been highly regarded for their adaptability and finer interpretability, with accurate and justifiable predictions in applications ranging from finance to medical analysis and from science to engineering. Thus, the PoseNet model for SLAM shown in Fig. 1 harnesses a DCNN to be robust against difficult lighting, blur, and varying camera intrinsics [1]. Figure 1 depicts the underlying architecture of
the PoseNet. It subsumes a front-end with a feature extractor and a back-end
with a regression subnetwork. The feature extractor can be a pretrained DCNN,
like ResNet$34$, VGG$16$, or AlexNet. The regression subnetwork consists of three stages: a dropout layer, an average pooling layer, and a dense layer, connected sequentially. It receives the high-dimensional vector from the feature
extractor. Through the average pooling and dropout layers, it is then reduced
to a lower dimension for generalization and faster computation [8]. The
predicted poses are in six degrees of freedom (6-DoF), which define the six
parameters in translation and rotation [1]. The translation consists of
forward-backward, left-right, and up-down parameters forming the axis of 3D
space as $x-axis$, $y-axis$, and $z-axis$, respectively. Likewise, the
rotation includes yaw, pitch, and roll parameters of the same 3D space noted
as $normal-axis$, $transverse-axis$, and $longitudinal-axis$, respectively.
Front-end: Feature Extractor (a pretrained CNN) $\rightarrow$ Back-end: Pose Regressor (Dropout $\rightarrow$ Pooling $\rightarrow$ Dense) $\rightarrow$ Output: Poses (Translation & Rotation)
Figure 1: PoseNet Architecture Subsuming a Feature Extractor and a Pose
Regressor Subnetwork.
Then, these six core parameters are converted to seven coordinates: the translation coordinates $x_{1}$, $x_{2}$, and $x_{3}$, and the rotation coordinates $x_{0},y_{1},y_{2},y_{3}$. This is because the actual rotation poses are in Euler angles; a pre-processing operation therefore converts the Euler angles into quaternions. A quaternion is a set of four values ($x_{0}$, $y_{1}$, $y_{2}$, and $y_{3}$), where $x_{0}$ represents a scalar rotation of the vector ($y_{1}$, $y_{2}$, $y_{3}$). This conversion is governed by the expressions
given in Eq. (1) - (4).
$x_{0}=(\sqrt{1+c_{1}c_{2}+c_{1}c_{3}-s_{1}s_{2}s_{3}+c_{2}c_{3}})/2,$ (1)
$y_{1}=(c_{2}s_{3}+c_{1}s_{3}+s_{1}s_{2}c_{3})/4x_{0},$ (2)
$y_{2}=(s_{1}c_{2}+s_{1}c_{3}+c_{1}s_{2}s_{3})/4x_{0},$ (3)
$y_{3}=(-s_{1}s_{3}+c_{1}s_{2}c_{3}+s_{2})/4x_{0},$ (4)
where $c_{1}$ = $\cos(roll/2)$, $c_{2}$ = $\cos(yaw/2)$, $c_{3}$ =
$\cos(pitch/2)$, $s_{1}$ = $\sin(roll/2)$, $s_{2}$ = $\sin(yaw/2)$, and
$s_{3}$ = $\sin(pitch/2)$.
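As a concrete illustration, the conversion of Eqs. (1)–(4) can be transcribed directly; the function name and argument order below are our own (hypothetical), and the formulas simply mirror the paper's expressions:

```python
import math

def euler_to_quaternion(roll, yaw, pitch):
    """Transcription of Eqs. (1)-(4): convert Euler angles (radians)
    to the quaternion (x0, y1, y2, y3) used as the rotation pose."""
    c1, c2, c3 = math.cos(roll / 2), math.cos(yaw / 2), math.cos(pitch / 2)
    s1, s2, s3 = math.sin(roll / 2), math.sin(yaw / 2), math.sin(pitch / 2)
    x0 = math.sqrt(1 + c1 * c2 + c1 * c3 - s1 * s2 * s3 + c2 * c3) / 2  # Eq. (1)
    y1 = (c2 * s3 + c1 * s3 + s1 * s2 * c3) / (4 * x0)                  # Eq. (2)
    y2 = (s1 * c2 + s1 * c3 + c1 * s2 * s3) / (4 * x0)                  # Eq. (3)
    y3 = (-s1 * s3 + c1 * s2 * c3 + s2) / (4 * x0)                      # Eq. (4)
    return x0, y1, y2, y3
```

For zero Euler angles this yields the identity quaternion $(1, 0, 0, 0)$, as expected.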
The pose regressor subnetwork is to be trained to minimize the translation and
rotation errors. These errors are combined into a single objective function,
$L_{\beta}$ as defined in Eq. (5) [9].
$L_{\beta}(I)=L_{x}(I)+\beta L_{q}(I),$ (5)
where $L_{x}$, $L_{q}$ are the losses of translation and rotation
respectively, and $I$ is the input vector representing the discrete location
in the map. $\beta$ is a scaling factor that is used to balance both the
losses and calculated using homoscedastic uncertainty that combines the losses
as defined in (6).
$L_{\sigma}(I)=\frac{L_{x}(I)}{\hat{\sigma}_{x}^{2}}+\log\hat{\sigma}_{x}^{2}+\frac{L_{q}(I)}{\hat{\sigma}_{q}^{2}}+\log\hat{\sigma}_{q}^{2},$
(6)
where $\hat{\sigma}_{x}$ and $\hat{\sigma}_{q}$ are the uncertainties for
translation and rotation respectively. Here, the regularizers
$\log\hat{\sigma}_{x}^{2}$ and $\log\hat{\sigma}_{q}^{2}$ prevent the values
from becoming too big [9]. It can be calculated using a more stable form as in
Eq. (7), which is very handy for training the PoseNet.
$L_{\sigma}(I)=L_{x}(I)\exp(-\hat{s}_{x})+\hat{s}_{x}+L_{q}(I)\exp(-\hat{s}_{q})+\hat{s}_{q},$
(7)
where the learning parameter $s=\log{\hat{\sigma}^{2}}$. Following [9], in
this work, $\hat{s_{x}}$ and $\hat{s_{q}}$ are set to $0$ and $-3.0$,
respectively.
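The stable form of the loss (with $s=\log\hat{\sigma}^{2}$, so that $1/\hat{\sigma}^{2}=\exp(-s)$) can be sketched in a few lines. In training, $\hat{s}_{x}$ and $\hat{s}_{q}$ would be learned parameters; here they are shown as fixed values at their stated initializations ($0$ and $-3.0$), and the function name is illustrative:

```python
import math

def pose_loss(l_x, l_q, s_x=0.0, s_q=-3.0):
    """Stable learned-weighting loss, Eq. (7):
    L = l_x * exp(-s_x) + s_x + l_q * exp(-s_q) + s_q,
    where s = log(sigma^2) is learned alongside the network weights."""
    return l_x * math.exp(-s_x) + s_x + l_q * math.exp(-s_q) + s_q
```

With the default $\hat{s}_{q}=-3.0$, the rotation loss is up-weighted by a factor of $e^{3}\approx 20$, reflecting the smaller numeric scale of quaternion errors relative to translation errors in metres.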
### II-C The Front-end Feature Extractor
As discussed earlier, PoseNet takes advantage of transfer learning (TL), whereby it uses a pretrained DCNN as a feature extractor. TL differs from traditional learning: in the latter, the models or tasks are isolated and function separately, retaining no knowledge, whereas TL learns from an older problem and leverages that knowledge for a new set of problems [10]. Thus, in this work, several versions of ResNet, two versions of VGG, and AlexNet are investigated. Some basic information on these DCNNs is given in the following subsections.
#### II-C1 AlexNet
AlexNet was the winner of the 2012 ImageNet Large Scale Visual Recognition Competition (ILSVRC’12) with a breakthrough performance [11]. It consists of five convolution (Conv) layers with up to 60 million trainable parameters and 650,000 neurons, making it one of the largest DCNN models. The first and second Conv layers are followed by a max pooling operation, while the third, fourth, and fifth Conv layers are connected directly; the final stage is a dense layer and a thousand-way Softmax layer. It was the first DCNN to adopt rectified linear units (ReLU) instead of the tanh activation function and to use a dropout layer to mitigate the overfitting issues of DL.
#### II-C2 VGG (16, 19)
Simonyan and Zisserman [12] proposed the first version of the VGG network, named VGG16, for ILSVRC’14. It placed second in the image classification challenge with a top-5 error of $7.32\%$. VGG16 and VGG19 consist of 16 and 19 weighted layers, respectively, with a max pooling layer after each set of two or three Conv layers. Similar to AlexNet, each comprises two fully connected layers and a thousand-way Softmax top layer. The main drawbacks of VGG models are their long training time and large number of network weights.
#### II-C3 ResNet (18, 34, 50, 101)
ResNet18 [13] was introduced to compete in ILSVRC’15, where it outperformed other models, such as VGG, GoogLeNet, and Inception. All the ResNet models used in this work are trained on the ImageNet database, which consists of more than a million images. Experiments have shown that even though ResNet18 is a subspace of ResNet34, its performance is more or less equivalent to that of ResNet34. ResNet18, 34, 50, and 101 consist of 18, 34, 50, and 101 layers, respectively. This paper first evaluates the performance of PoseNet individually using the above-mentioned ResNet models alongside the other feature extractors. It then chooses the best ones to be used in the fusion modalities and in the hybrid learner, thereby establishing a good trade-off between depth and performance. The ResNet models are built from residual blocks: ResNet18 and 34 use two-layer residual blocks, while ResNet50 and 101 use three-layer (bottleneck) residual blocks, organized into convolutional stages followed by an average pooling layer. Finally, each ResNet model has a fully connected layer followed by a thousand-way Softmax layer to generate the thousand class labels.
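The defining operation of a residual block is the identity shortcut, $y = F(x) + x$: the stacked layers learn only the residual $F(x)$. A minimal sketch (the function names and list-of-floats representation are illustrative, not the actual tensor implementation):

```python
def residual_block(x, f):
    """Conceptual residual block: output = F(x) + x.
    `x` is the input feature vector and `f` the learned transformation
    (in a real ResNet, a stack of Conv/BatchNorm/ReLU layers)."""
    return [fi + xi for fi, xi in zip(f(x), x)]
```

Because the shortcut passes the input through unchanged, gradients can flow directly to earlier layers, which is what lets ResNets train at depths of 50 or 101 layers.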
### II-D Multimodal Feature Fusion
Many existing studies have taken advantage of various strategies for feature extraction and fusion. For instance, Xu et al. [14] modify the Inception-ResNet-v1 model to have four layers followed by a fully connected layer in order to reduce the effect of overfitting, as their problem domain has few samples and fifteen classes. On the other hand, Akilan et al. [15] apply a TL technique to feature fusion, whereby they extract features using multiple DCNNs, namely AlexNet, VGG16, and Inception-v3. As these extractors produce varied feature dimensions and sub-spaces, feature space transformation and energy-level normalisation are performed to embed the features into a common sub-space using dimensionality reduction techniques like PCA. Finally, the features are fused together using fusion rules, such as concatenation, feature product, summation, mean value pooling, and maximum value pooling.
Fu et al. [16] also consider dimension normalization techniques to produce a uniformly dimensioned feature space. Their work presents supervised and unsupervised sub-space learning methods for dimensionality reduction and multimodal feature fusion. It also introduces a new technique called tensor-based discriminative sub-space learning. This technique gives better results, as it produces a final fused feature vector of adequate length, i.e., a longer vector if the number of features is large and a shorter vector if the number of features is small. In addition, Bahrampour et al. [17] introduce a multimodal task-driven dictionary learning algorithm for information obtained either homogeneously or heterogeneously. These multimodal task-driven dictionaries produce features from the input data for classification problems.
### II-E Hybrid Learning
Sun et al. [18] propose a hybrid convolutional neural network for face verification in wild conditions. Instead of extracting features separately from the two images, the features are jointly extracted by filter pairs. The extracted features are then processed through multiple layers of the DCNN to extract high-level, global features. The higher layers of the DCNN discussed in their work locally share weights, which is quite contrary to conventional CNNs. In this way, feature extraction and recognition are combined under the hybrid model.
Similarly, Pawar et al. [19] develop an efficient hybrid approach involving scale-invariant features for object recognition. In the feature extraction phase, invariant features such as color, shape, and texture are extracted and subsequently fused together to improve recognition performance. The fused feature set is then fed to pattern recognition algorithms, such as the support vector machine (SVM), discriminant canonical correlation, and locality preserving projections, which likely produce either three distinct or identical numbers of false positives. To hybridize the process entirely, a decision module is developed using NNs that takes the match values from the chosen pattern recognition algorithms as input and returns the result based on those match values.
Figure 2: Operational Flow of the Proposed Hybrid Learner with a Weight Sewing
Strategy and a Late Fusion Phase of the Predicted Poses Towards Improving the
Localization Capability of a SLAM Model - PoseNet.
The hybrid learner (Fig. 2) introduced in this work, however, goes beyond the existing hybrid fusion approaches. It focuses on enhancing and updating the weights of the pretrained unimodal models before using them as front-end feature extractors of the PoseNet. Besides that, it not only performs a mutation of the multimodal weights of the feature-extraction dense layer, but also fuses the predicted scores of the pose regressor.
## III Proposed Method
### III-A Hybrid Weight Sewing and Score Fusion Model
Fig. 2 shows a detailed flow diagram of the hybrid learner. It consists of two parts: the first part (Step 1 - weight enhancement) carries out an early fusion by layer weight enhancement of the feature extractor, and the second part (Step 3) performs a late fusion via score refinement of the models involved in the early fusion. The two best feature extractors, chosen based on their individual performances, are used to form the hybrid learner. The early fusion models are obtained by fusing the dense layer weights of ResNet101 and VGG19 through addition or multiplication. In late fusion, the predicted scores of multiple pose regressors built on the weight-enhanced feature extractors are amalgamated using average filtering to achieve better results.
### III-B Preprocessing
Before passing the images and poses to the PoseNet model, the data must be
preprocessed adequately. The preprocessing involves checking the consistency
of the images, resizing and center cropping the images, extracting the mean
and standard deviation, and normalizing the poses. The images are resized to
$260\times 260$ and center cropped to $250\times 250$. The translation values
are used to obtain the minimum, maximum, mean and standard deviation. The
rotation values are read as Euler angles, which suffer from wrap-around
infinities, gimbal lock and interpolation problems. To overcome these
challenges, the Euler rotations are converted to quaternions [20].
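The Euler-to-quaternion step can be sketched as follows; this is a minimal illustration using the common roll-pitch-yaw convention, and the function name and exact angle convention are assumptions, not taken from the paper's code.

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert Euler angles (radians) to a unit quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,   # w
            sr * cp * cy - cr * sp * sy,   # x
            cr * sp * cy + sr * cp * sy,   # y
            cr * cp * sy - sr * sp * cy)   # z

# The identity rotation maps to the identity quaternion (1, 0, 0, 0),
# and every output is a unit quaternion, avoiding gimbal lock.
```

Unlike Euler angles, the resulting unit quaternions interpolate smoothly and have no wrap-around discontinuities.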
### III-C Multimodal Weight Sewing via Early Fusion (EF)
The preprocessed data is fed to the feature extractors: ResNet101 and VGG19.
These two models are selected based on their individual performance on the
Apolloscape test dataset. With these two feature extractors, the PoseNet
produces the minimum translation and rotation errors, as recorded in Table I. The
weights of the top feature extracting dense layers of the two feature
extractors are fused via addition or multiplication operation. The fused
values are used to update the weights of the respective dense layer of the
ResNet101 and VGG19 feature extractors. The updated models are then used as
new feature extractors for the regressor subnetwork. Then, the regressor is
trained on the training dataset.
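The weight-sewing step above can be sketched as follows; the flattened weight vectors and the `fuse_weights` helper are illustrative assumptions, since the paper fuses the dense-layer weights of the actual ResNet101 and VGG19 models.

```python
def fuse_weights(w_a, w_b, op="add"):
    """Element-wise fusion of two same-shaped dense-layer weight vectors."""
    if op == "add":
        return [a + b for a, b in zip(w_a, w_b)]
    if op == "mul":
        return [a * b for a, b in zip(w_a, w_b)]
    raise ValueError("op must be 'add' or 'mul'")

# Hypothetical flattened dense-layer weights of the two extractors.
resnet101_dense = [0.25, -0.5, 0.125]
vgg19_dense = [0.5, 0.25, -0.25]

# The fused values overwrite the corresponding dense layer in BOTH
# extractors before the regressor subnetwork is retrained on them.
fused = fuse_weights(resnet101_dense, vgg19_dense, op="add")
```

Addition and multiplication of the same pair of layers yield the two early fusion variants (AEF and MEF) evaluated later.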
### III-D Pose Refinement via Late Fusion (LF)
The models trained with the updated ResNet101 and VGG19, using early fusion
with multiplication and addition operations, are moved to the late fusion
phase shown in Step 3 of Fig. 2, where the loaded weight-enhanced early
fusion models simultaneously predict the poses for each input visual. The
predicted scores from these models (in this case, ResNet101 and VGG19) are
amalgamated with average filtering; this kind of fusion is denoted as AHL.
Similarly, the predicted scores of the early fusion models can be refined
using multiplication, denoted as MHL. Finally, the predicted scores of the
four early fusion models using addition and multiplication are fused by
averaging to obtain the predicted scores of the full hybrid fusion model,
referred to earlier (Section I) as HLFF.
These predicted poses are then compared with the ground truth poses to
calculate the mean and median of translation and rotational errors.
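The score-level fusion can be sketched as below; fusing the 7-D poses (translation plus quaternion) component-wise is an assumption about how the amalgamation is applied, since proper quaternion averaging would additionally require renormalization.

```python
def late_fuse(predictions, op="avg"):
    """Fuse per-model pose predictions component-wise (AHL/HLFF: avg, MHL: mul)."""
    fused = []
    for components in zip(*predictions):
        if op == "avg":
            fused.append(sum(components) / len(components))
        elif op == "mul":
            value = 1.0
            for c in components:
                value *= c
            fused.append(value)
        else:
            raise ValueError("op must be 'avg' or 'mul'")
    return fused

# Two hypothetical 3-D translation predictions from the early fusion models.
pose_resnet = [2.0, 4.0, 1.0]
pose_vgg = [4.0, 2.0, 3.0]
print(late_fuse([pose_resnet, pose_vgg]))  # [3.0, 3.0, 2.0]
```

With four early fusion models as input and `op="avg"`, the same function yields the HLFF predictions.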
## IV Experimental Setup and Results
### IV-A Dataset
The Apolloscape dataset is used for many computer vision tasks related to
autonomous driving. It consists of modules including instance segmentation,
scene parsing, lanemark parsing and self-localization that help a vehicle to
perceive, reason and act accordingly [3, 2]. The Apolloscape dataset for
self-localization is made up of images and
poses on different roads at discrete locations. The images and the poses are
created from the recordings of the videos. Each record of the dataset has
multiple images and poses corresponding to every image. The road chosen for
the ablation study of this research is zpark. It consists of a total of 3000
stereo vision road scenes. For each image, there is a ground truth pose with
6-DoF. From this dataset, mutually exclusive training and test sets are
created with a ratio of $3:1$.
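The $3:1$ split can be sketched as follows; the shuffling and seed are illustrative assumptions, as the paper does not state how samples are assigned to the two sets.

```python
import random

def split_indices(n_samples, train_parts=3, test_parts=1, seed=0):
    """Mutually exclusive train/test index sets in a train:test ratio."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    cut = n_samples * train_parts // (train_parts + test_parts)
    return idx[:cut], idx[cut:]

train, test = split_indices(3000)
# The 3000 zpark scenes split 3:1 -> 2250 training and 750 test samples.
```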
### IV-B Evaluation Metric
Measuring the performance of the machine learning model is pivotal to
comparing the various CNN models. Since every CNN model is trained and tested
on different datasets with varied hyperparameters, it is necessary to choose
the right evaluation metric. As the domain of this work is a regression
problem, the mean absolute error (MAE) is used to measure the performance of
the set of models ranging from unimodals to the proposed hybrid learner. MAE
is a linear score, which is calculated as an average of the absolute
difference between the target variables and the predicted variables using the
formula given in Eq. (8).
$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|x_{i}-\hat{x}_{i}\right|,$ (8)
where $n$ is the total number of samples in the validation dataset, and
$x_{i}$ and $\hat{x}_{i}$ are the predicted and ground truth poses,
respectively. Since it is an average measure, all distinct values are
weighted equally, without any bias.
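Eq. (8) translates directly into code; applying it to flat lists of pose components, as below, is an assumption about how the per-pose values are aggregated.

```python
def mean_absolute_error(predicted, targets):
    """MAE of Eq. (8): average absolute difference over all samples."""
    assert len(predicted) == len(targets)
    return sum(abs(p - t) for p, t in zip(predicted, targets)) / len(predicted)

# Every sample is weighted equally, with no bias toward large errors.
print(mean_absolute_error([1.0, 2.0, 4.0], [1.5, 2.0, 3.0]))  # 0.5
```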
Model Name | Median $\mathbf{e_{t}(m)}$ | Mean $\mathbf{e_{t}(m)}$ | Median $\mathbf{e_{r}(^{\circ})}$ | Mean $\mathbf{e_{r}(^{\circ})}$ | MAPST $\mathbf{(s)}$
---|---|---|---|---|---
M$1$ \- ResNet$18$ | $21.194$ | $24.029$ | $0.778$ | $0.900$ | $0.092$
M$2$ \- ResNet$34$ | $20.990$ | $23.597$ | $0.673$ | $0.824$ | $0.093$
M$3$ \- ResNet$50$ | $18.583$ | $20.803$ | $0.903$ | $1.434$ | $0.095$
M$4$ \- ResNet$101$ | $16.227$ | $19.427$ | $0.966$ | $1.230$ | $0.098$
M$5$ \- VGG$16$ | $17.150$ | $21.571$ | $1.079$ | $1.758$ | $0.103$
M$6$ \- VGG$19$ | $16.820$ | $19.935$ | $0.899$ | $1.378$ | $0.111$
M$7$ \- AlexNet | $46.992$ | $53.004$ | $4.282$ | $7.177$ | $0.108$
M$8$ \- LF | $9.763$ | $10.561$ | $0.945$ | $4.645$ | $0.146$
M$9$ \- AEFResNet101 | $14.870$ | $18.256$ | $0.673$ | $0.784$ | $0.134$
M$10$ \- MEFResNet101 | $14.842$ | $18.013$ | $0.779$ | $0.977$ | $0.131$
M$11$ \- AEFVGG19 | $11.047$ | $13.840$ | $0.742$ | $1.024$ | $0.137$
M$12$ \- MEFVGG19 | $10.730$ | $14.181$ | $0.756$ | $1.141$ | $0.135$
M$13$ \- AHL | $10.400$ | $12.193$ | $0.828$ | $5.155$ | $0.142$
M$14$ \- MHL | $9.307$ | $11.420$ | $1.206$ | $5.455$ | $0.142$
M$15$ \- HLFF | $7.762$ | $8.829$ | $1.008$ | $4.618$ | $0.144$
TABLE I: Performance Analysis of Various Models: $e_{t}$ \- translation error,
$e_{r}$ \- rotation error, MAPST - mean average per sample processing time.
### IV-C Performance Analysis
This section elaborates on the results obtained from each of the models
introduced earlier in this paper.
#### IV-C1 Translation and Rotation Errors
Table I tabulates the performance of the PoseNet with various front-end
unimodal and multimodal feature extractors, along with the proposed hybrid
learners. The results in this table fall into three sections. The first
section, from M$1$ to M$7$, presents the outcomes of the unimodal PoseNet
with unimodality-based feature extractors. The second section, extending
from M$8$ to M$12$ depicts the performances of five multimodality-based
learners. M$8$ represents late fusion (LF), M$9$ (AEFResNet101) and M$10$
(MEFResNet101) represent early fusion on ResNet101 as feature extractor with
addition and multiplication, respectively. M$11$ (AEFVGG19) and M$12$
(MEFVGG19) are the results for early fusion on VGG$19$ with addition and
multiplication, respectively. The third section consists of the proposed
hybrid learners: M$13$ (AHL) combines the early fusion models M$9$ and
M$11$; M$14$ (MHL) combines the early fusion models M$10$ and M$12$; and
M$15$ (HLFF) is obtained by averaging the predicted scores of the four
models M$9$, M$10$, M$11$, and M$12$. The results are computed as
the mean and median values of translation and rotation errors. The translation
errors are measured in terms of meters ($m$), while the rotation error is
measured in degrees $(^{\circ})$.
(a) Error in Terms of Translation.
(b) Error in Terms of Rotation.
(c) Average Processing Time per Sample.
Figure 3: Performance Analysis of All Different Models.
Considering the unimodal-based PoseNet implementations, it is apparent that
ResNet$101$ and VGG$19$ yield the two best outcomes. In terms of translation
error, the late fusion model performs better than the unimodality-based
learners, but not as well as the early fusion models. We pick the
ResNet$101$-based PoseNet as the baseline model for the rest of the
comparative analysis because it has the best performance among all the
unimodals. Here, the late fusion shows a $66$% decrease in the translation's
median error and an $84$% decrease in the translation's mean error when
compared to ResNet$101$ (M$4$). The median of rotation errors in the late
fusion model shows a decrease of $2$%, but the mean of rotation errors
increases by $73$%.
Comparing the baseline model (M$4$) with the early fusion model using
addition and ResNet101 as feature extractor (M$9$) shows a $9$% decrease in
the translation's median error and a $6$% decrease in its mean error, while
the median of the rotation error decreases by $43$% and the mean by $41$%.
The comparison with the early fusion model using multiplication on VGG$19$
(M$12$) exhibits a $51$% decrease in the translation's median error and a
$37$% decrease in its mean error, while the rotation errors drop by $28$% in
the median and by $8$% in the mean.
It is evident from Table I that the hybrid learners perform much better than
the unimodal and early fusion models. The full hybrid learner using average
filtering (HLFF) shows a $109$% decrease in the translation's median error
and a $120$% decrease in its mean error when compared to the
ResNet$101$-based PoseNet, while for the rotation errors there is a $4$%
increase in the median and a $73$% increase in the mean.
In a holistic analysis, the late fusion shows an improvement of $37$% in
translation and $3$% in rotation. The early fusion model using ResNet as
feature extractor improves translation by $6$% and rotation by $31$%, while
the early fusion using VGG$19$ improves translation and rotation by $31$%
and $24$%, respectively. It is evident from Table II that the proposed HLFF
shows a slight degradation of $4$% in rotation; nevertheless, it is the best
model overall, given its substantial $50$% improvement in translation across
all the models.
Model Name | Improvement in $e_{t}$ (%) | Improvement in $e_{r}$ (%) | Timing Overhead $(ms)$
---|---|---|---
LF | $37$ | $3$ | $48$
EFResNet | $6$ | $31$ | $36$
EFVGG | $31$ | $24$ | $39$
HLFF | $50$ | $-4$ | $46$
TABLE II: Performance Improvement of Proposed Hybrid Learner When Compared to
the Baseline PoseNet with ResNet101 as Front-end.
### IV-D Timing Analysis
The timing analysis is conducted on Google Colaboratory with an Intel Core
i5 processor, a Tesla K80 GPU with 2496 CUDA cores, 319 GB of disk space and
12.6 GB of RAM. Table I shows the
mean average processing time calculated for processing a batch of ten samples.
As seen from Table I and Figure 3, the early, late, and hybrid fusion models
take slightly more time than the unimodality-based baseline PoseNet. The
late fusion model (M8) takes the most processing time of all the models, as
it uses five pretrained modalities that are trained and tested individually,
increasing the time overhead. The early fusion models also show an increase
in processing time compared to the unimodals, but less than the late fusion
model, since training the pretrained model after weight enhancement takes
more time. The hybrid learners show the same trend because they combine the
early and late fusion methods: they employ weight-enhanced early fusion
models, which adds to the time overhead, and additionally fuse the scores
from different models after validation.
Note that the hyper-parameters have been fixed throughout the experimental
analysis on various models to avoid uncertainties in the comparative study.
The learning rate used in all the models is $0.01$, dropout rate for the
dropout layer of the PoseNet is set to $0.5$, and the batch size during
training is fixed to $34$. Every model is trained for $1000$ epochs with the
Adam optimizer.
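For reference, this fixed configuration can be collected in one place; the dictionary form is illustrative, not taken from the paper's code.

```python
# Hyperparameters held fixed across all models in the comparative study.
HYPERPARAMS = {
    "learning_rate": 0.01,
    "dropout": 0.5,        # dropout layer of the PoseNet
    "batch_size": 34,
    "epochs": 1000,
    "optimizer": "Adam",
}
```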
## V Conclusion
This work introduces a hybrid learner to improve the localization accuracy
of a pose regressor model for SLAM. The hybrid learner combines multimodal
early and late fusion algorithms to harness the best properties of both.
Extensive experiments on the Apolloscape self-localization dataset show that
the proposed hybrid learner reduces the translation error by nearly $50$%,
while the rotation error worsens by a negligible $4$%, compared to the
unimodal PoseNet with ResNet$101$ as feature extractor.
Future work aims at minimizing the rotation errors and reducing the small
overhead in processing time.
## Acknowledgment
The authors thank Google for generously providing HPC resources on the
Colab machine learning platform, and the organizers of the ApolloScape
dataset.
## References
* [1] A. Kendall, M. Grimes, and R. Cipolla, “Posenet: A convolutional network for real-time 6-dof camera relocalization,” in Proceedings of the IEEE international conference on computer vision, pp. 2938–2946, 2015.
* [2] P. Wang, R. Yang, B. Cao, W. Xu, and Y. Lin, “Dels-3d: Deep localization and segmentation with a 3d semantic map,” in CVPR, pp. 5860–5869, 2018.
* [3] P. Wang, X. Huang, X. Cheng, D. Zhou, Q. Geng, and R. Yang, “The apolloscape open dataset for autonomous driving and its application,” IEEE transactions on pattern analysis and machine intelligence, 2019.
* [4] T. Bailey and H. Durrant-Whyte, “Simultaneous localization and mapping (slam): Part ii,” IEEE robotics & automation magazine, vol. 13, no. 3, pp. 108–117, 2006.
* [5] M. Montemerlo, S. Thrun, D. Koller, B. Wegbreit, et al., “Fastslam 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges,” in IJCAI, pp. 1151–1156, 2003.
* [6] M. Montemerlo, S. Thrun, D. Koller, B. Wegbreit, et al., “Fastslam: A factored solution to the simultaneous localization and mapping problem,” Aaai/iaai, vol. 593598, 2002.
* [7] R. C. Smith and P. Cheeseman, “On the representation and estimation of spatial uncertainty,” The international journal of Robotics Research, vol. 5, no. 4, pp. 56–68, 1986.
* [8] F. Walch, C. Hazirbas, L. Leal-Taixé, T. Sattler, S. Hilsenbeck, and D. Cremers, “Image-based localization with spatial lstms,” CoRR, vol. abs/1611.07890, 2016.
* [9] A. Kendall and R. Cipolla, “Geometric loss functions for camera pose regression with deep learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5974–5983, 2017.
* [10] M. E. Taylor and P. Stone, “Transfer learning for reinforcement learning domains: A survey,” Journal of Machine Learning Research, vol. 10, no. Jul, pp. 1633–1685, 2009.
* [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, pp. 1097–1105, 2012.
* [12] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015.
* [13] P. Napoletano, F. Piccoli, and R. Schettini, “Anomaly detection in nanofibrous materials by cnn-based self-similarity,” Sensors (MDPI), vol. 18, no. 1, 2018.
* [14] J. Xu, Y. Zhao, J. Jiang, Y. Dou, Z. Liu, and K. Chen, “Fusion model based on convolutional neural networks with two features for acoustic scene classification,” in Proc. of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017), Munich, Germany, 2017.
* [15] T. Akilan, Q. M. J. Wu, and H. Zhang, “Effect of fusing features from multiple dcnn architectures in image classification,” The Institution of Engineering and Technology, Feb 2018.
* [16] Y. Fu, L. Cao, G. Guo, and T. S. Huang, “Multiple feature fusion by subspace learning,” in Proceedings of the 2008 international conference on Content-based image and video retrieval, pp. 127–134, 2008.
* [17] S. Bahrampour, N. Nasrabadi, A. Ray, and W. Jenkins, “Multimodal task-driven dictionary learning for image classification,” The IEEE Transactions on Image Processing, 2015.
* [18] Y. Sun, X. Wang, and X. Tang, “Hybrid deep learning for face verification,” in Proceedings of the IEEE international conference on computer vision, pp. 1489–1496, 2013.
* [19] V. Pawar and S. Talbar, “Hybrid machine learning approach for object recognition: Fusion of features and decisions,” Machine Graphics and Vision, vol. 19, no. 4, pp. 411–428, 2010.
* [20] P. Bouthellier, “Rotations and orientations in r3,” 27th International Conference on Technology in Collegiate Mathematics, vol. 27, 2015.
# New Directions in Cloud Programming
Alvin Cheung Natacha Crooks Joseph M. Hellerstein Matthew Milano
akcheung,ncrooks,hellerstein,[email protected] UC Berkeley
###### Abstract.
Nearly twenty years after the launch of AWS, it remains difficult for most
developers to harness the enormous potential of the cloud. In this paper we
lay out an agenda for a new generation of cloud programming research aimed at
bringing research ideas to programmers in an evolutionary fashion. Key to our
approach is a separation of distributed programs into a PACT of four facets:
Program semantics, Availability, Consistency and Targets of optimization. We
propose to migrate developers gradually to PACT programming by lifting
familiar code into our more declarative level of abstraction. We then propose
a multi-stage compiler that emits human-readable code at each stage that can
be hand-tuned by developers seeking more control. Our agenda raises numerous
research challenges across multiple areas including language design, query
optimization, transactions, distributed consistency, compilers and program
synthesis.
## 1. Introduction
It is easy to take the public clouds for granted, but we have barely scratched
the surface of their potential. These are the largest computing platforms ever
assembled, and among the easiest to access. Prior generations of architectural
revolutions led to programming models that unlocked their potential:
minicomputers led to C and the UNIX shell, personal computers to graphical
“low-code” programming via LabView and Hypercard, smartphones to Android and
Swift. To date, the cloud has yet to inspire a programming environment that
exposes the inherent power of the platform.
Initial commercial efforts at a programmable cloud have started to take wing
recently in the form of “serverless” Functions-as-a-Service. FaaS offerings
allow developers to write sequential code and upload it to the cloud, where it
is executed in an independent, replicated fashion at whatever scale of
workload it attracts. First-generation FaaS systems have well-documented
limitations (hellerstein2019serverless, ), which are being addressed by newer
prototypes with more advanced FaaS designs (e.g., (akkus2018sand, ;
sreekanti2020cloudburst, )). But fundamentally, even “FaaS done right” is a
low-level assembly language for the cloud, a simple infrastructure for
launching sequential code, a UDF framework without a programming model to host
it.
As cloud programming matures, it seems inevitable that it will depart from
traditional sequential programming. The cloud is a massive, globe-spanning
distributed computer made up of heterogeneous multicore machines. Parallelism
abounds at all scales, and the distributed systems challenges of non-
deterministic network interleavings and partial failures exist at most of
those scales. Creative programmers are held back by the need to account for
these complexities using legacy sequential programming models originally
designed for single-processor machines.
We need a programming environment that addresses these complexities directly,
but without requiring programmers to radically change behavior. The next
generation of technology should _evolutionize_ the way developers program:
allow them to address distributed concerns gradually, working with the
assistance of new automation technologies, but retaining the ability to
manually override automated decisions over time.
### 1.1. A New PACT for Cloud Programming
Moving forward, we envision decoupling cloud programming into four separate
concerns, each with an independent language facet: Program semantics,
Availability, Consistency and Targets for optimization (PACT).
Program Semantics: Lift and Support. A programmer’s primary goal is to specify
the intended functionality of their program. Few programmers can correctly
write down their program semantics in a sequential language while also
accounting for parallel interleavings, message reordering, partial failures
and dynamically autoscaling deployment. This kind of “hand-crafted”
distributed programming is akin to assembly language for the cloud.
Declarative specifications offer a very different solution, shielding the
programmer from implementation and deployment details. Declarative programming
environments for distributed computing have emerged in academia and industry
over the past decade (alvaro2011consistency, ; datomic12, ; grangerlive2018,
), but adoption of these “revolutionary” approaches has been limited. Moving
forward, we advocate an evolutionary _Lift and Support_ approach: given a
program specification written in a familiar style, automatically lift as much
as possible to a higher-level declarative Intermediate Representation (IR)
used by the compiler, and encapsulate what remains in UDFs (i.e., FaaS
Functions).
Availability Specification. Availability is one of the key advantages of the
cloud. Cloud vendors offer hardware and networking to deploy services
redundantly across multiple relatively independent failure domains.
Traditionally, though, developers have had to craft custom solutions to ensure
that their code and deployments take advantage of this redundancy efficiently
and correctly. Availability protocols are frequently interleaved into program
logic in ways that make them tricky to test and evolve. We envision a
declarative facet here as well, allowing programmers to specify the
availability they wish to offer independent from their program semantics. A
compiler stage must then synthesize code to provide that availability
guarantee efficiently.
Consistency Guarantees. Many of the hardest challenges of distributed
programming involve consistency guarantees. “Sophisticated” distributed
programs are often salted with programmer-designed mechanisms to maintain
consistency. We advocate a programming environment that separates consistency
specifications into a first-class program facet, separated from the basic
functionality. A compiler stage can then generate custom code to guarantee
that clients see the desired consistency subject to availability guarantees.
Disentangling consistency invariants from code makes two things explicit: the
desired common-case sequential semantics, and the relaxations of those
semantics that are to be tolerated in the distributed setting. This faceting
makes it easier for compilers to guarantee correctness and achieve efficiency,
it allows enforcement across compositions of multiple distributed libraries,
and allows developers to easily understand and modify the consistency
guarantees of their code.
Targets for Dynamic Optimization. In the modern cloud, code is not just
compiled; it must be deployed as a well-configured service across multiple
machines. It also must be able to redeploy itself dynamically— _autoscale_ —to
work efficiently as workloads grow and shrink by orders of magnitude, from a
single multicore box to a datacenter to the globe. We believe cloud frameworks
inevitably must lighten this load for general-purpose developers. We envision
an environment where programmers can specify multi-objective performance
targets for execution, e.g., tradeoffs between billing costs, latency and
availability. From there, a number of implementation and deployment decisions
must be made. This includes compilation logic like choosing the right data
structures and algorithms for “local,” sequential code fragments, as well as
protocols for message-passing for distributed functionality. It also includes
the partitioning, replication and placement of code and data across machines
with potentially heterogeneous resources. Finally, the binary executables we
generate need to include dynamic runtime logic that monitors and adapts the
deployment in the face of shifting workloads.
For all these facets, we envision a _gradual_ approach to bring programmers on
board in an evolutionary manner. Today’s developers should be able to get
initial success by writing simple familiar programs, and entrusting everything
else to a compiler. In turn, this compiler should generate human-centric code
in well-documented internal languages, suitable for eventual refinement by
programmers. As a start, we believe an initial compiler should be able to
achieve performance and cost at the level of FaaS offerings that users
tolerate today (hellerstein2019serverless, ), with the full functionality of
PACT programming. Programmers can then improve the generated programs
incrementally by modifying the lower-level facets or “hinting” the compiler
via constraints.
### 1.2. Sources of Inspiration and Confidence
Our goals for the next generation of cloud programming are ambitious, but work
over the last decade gives us confidence that we can take significant strides
in this direction. A number of ideas from the past decade inform our approach:
Monotonic Distributed Programming. Monotonicity—the property that a program’s
output grows with its input—has emerged as a key foundation for efficient,
available distributed programs (hellerstein2020keeping, ). The roots of this
idea go back to Helland and Campbell’s crystallization of coordination-free
distributed design patterns as ACID 2.0: Associative, Commutative, Idempotent
and Distributed (helland2009building, ). Subsequently, CRDTs
(shapiro2011conflict, ) were proposed as data types with ACI methods,
observing that the ACI properties are those of join-semilattices: algebraic
structures that grow monotonically. The connection between monotonicity and
order-independence turns out to be fundamental. The CALM Theorem
(hellerstein2010declarative, ; ameloot2011relational, ) proved that programs
produce deterministic outcomes without coordination _if and only if_ they are
monotonic. Hence monotonic code can run coordination-free without any need for
locking, barriers, commit, consensus, etc. At the same time, our Bloom
language (alvaro2011consistency, ; conway2012logic, ) adopted declarative
logic programming for distributed computing, with a focus on a monotonic core,
and coordination only for non-monotone expressions. Various monotonic
distributed language proposals have followed (kuper2013lvars, ;
meiklejohn2015lasp, ; milano2019tour, ). Monotonic design patterns have led to
clean versions of complex distributed applications like collaborative editing
(weiss2009logoot, ), and high-performance, consistency-rich autoscaling
systems like the Anna KVS (wu2019anna, ).
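As a toy illustration of the semilattice view (plain Python, not Bloom or Hydro code), consider a grow-only set CRDT whose merge is the set-union join:

```python
def merge(a: frozenset, b: frozenset) -> frozenset:
    """Join of two grow-only sets: union is the least upper bound."""
    return a | b

# Hypothetical update sets observed at three replicas.
x = frozenset({"put:a"})
y = frozenset({"put:a", "put:b"})
z = frozenset({"put:c"})

# ACI properties: replicas converge regardless of message
# order, grouping, or duplication, so no coordination is needed.
assert merge(merge(x, y), z) == merge(x, merge(y, z))  # associative
assert merge(x, y) == merge(y, x)                      # commutative
assert merge(x, x) == x                                # idempotent
```

Because union is monotone, this program is deterministic without locks or consensus, exactly the class of computations the CALM Theorem characterizes.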
Dataflow and Reactive Programming. Much of the code in a distributed
application involves data that flows between machines, and event-handling at
endpoints. Distributed dataflow is a notable success story in parallel and
distributed computing, from its roots in 1980s parallel databases
(dewitt1992parallel, ) through to recent work on richer models like Timely
Dataflow (mcsherry2017modular, ) and efforts to autoscale dataflow in the
cloud (kalavri2018three, ). For event handling, reactive programming libraries
like React.js (chedeau2014react, ) and Rx (meijer2010reactive, ) provide a
different dataflow model for handling events and mutating state. Given these
successes and our experience with dataflow backends for low-latency settings
(loo2009declarative, ; loo2005implementing, ; alvaro2010boom, ) we are
optimistic that a combination of dataflow and reactivity would provide a good
general-purpose runtime target for services and protocols in the cloud. We are
also encouraged by the general popularity of libraries like Spark and
React.js—evidence that advanced programmers will be willing to customize low-
level IR code in that style.
Faceted Languages. The success of LLVM (lattner2004llvm, ) has popularized the
idea of multi-stage compilation with explicit internal representation (IR)
languages. We are inspired by the success of faceted languages and separation
of concerns in systems design, with examples such as the model-view-controller
design pattern for building user interfaces (design-patterns, ), the three-
tier architecture for web applications (ror, ; django, ), and domain-specific
languages such as Halide for image processing pipelines (ragan2013halide, ).
Dissecting an application into facets enables the compiler designer and
runtime developer to choose different algorithms to translate and execute
different parts of the application. For instance, Halide decouples algorithm
specification from execution strategy, but keeps both as syntactic constructs
for either programmer control or compiler autotuning. This decoupling has led
Halide to outperform expert hand-tuned code that took far longer to develop,
and its outputs are now used in commercial image processing software. Image
processing is particularly inspiring, given its requirements for highly
optimized code including parallelism and locality in combinations of CPUs and
GPUs.
Verified Lifting. Program synthesis is one of the most influential and promising practical breakthroughs in modern programming systems research (synthesisSurvey). Verified lifting is a technique we developed that uses program synthesis to formulate code translation as code search. We have applied verified lifting to translate code across different domains, e.g., translating imperative Java to declarative SQL (qbs) and functional Spark (casper; casper2), and translating imperative C to CUDA kernels, to Halide (dexter), and to hardware description languages (domino). Our translated Halide code is now shipping in commercial products. Verified lifting cannot handle arbitrary sequential code, but our Lift and Support approach should allow us to use it as a powerful programmer aid.
Client-Centric and Mixed Consistency. Within the enormous literature on
consistency and isolation, two recent thrusts are of particular note here. Our
recent work on _client-centric_ consistency steps away from traditional low-
level histories to offer guarantees about what could be observed by a calling
client. This has led to a new understanding of the connections between
transactional isolation and distributed consistency guarantees (crooks2020client). Another theme across a number of our recent results is the _composition_ of services that offer different consistency guarantees (milano2019tour; milano2018mixt; crooks2016tardis). The composition of
multiple services with different consistency guarantees is a signature of
modern cloud computing that needs to be brought more explicitly into
programming frameworks.
### 1.3. Outline
In this paper we elaborate on our vision for cloud-centric programming
technologies. We are exploring our ideas by building a new language stack
called Hydro that we introduce next. The development of Hydro is part of our
methodology, but we believe that the problems we are addressing can inform
other efforts towards a more programmable cloud.
In Section 2 we provide a high-level overview of the Hydro stack and a
scenario that we use as a running example in the paper. In Section 3 we
present our ideas for HydroLogic’s program semantics facet, and in Section 4
we back up and explore the challenge of lifting from multiple distributed
programming paradigms into HydroLogic. Section 5 discusses the data model of
HydroLogic and our ability to use program synthesis to automatically choose
data representations to meet performance goals. Section 6 sketches our first
explicitly distributed language facet: control over Availability, while
Section 7 covers the challenges in the correlated facet of Consistency. In
Section 8 we discuss lowering HydroLogic to the corresponding Hydroflow
algebra on a single node. Finally, in Section 9 we address the distributed
aspects of optimizing and deploying Hydroflow, subject to multi-objective
goals for cost and performance.
## 2. Hydro’s Languages
Figure 1. The Hydro stack.
Hydro consists of a faceted, three-stage compiler that takes programs in one
or more distributed DSLs and compiles them to run on a low-level, autoscaling
distributed deployment of local programs in the Hydroflow runtime.
To bring a distributed library or DSL to the Hydro platform, we need to lift
it to Hydro’s declarative Intermediate Representation (IR)
language—HydroLogic. Hence our first stage is the Hydraulic Verified Lifting
facility (Section 4), which automates that lifting as much as it can,
encapsulating whatever logic remains in UDFs.
### 2.1. The Declarative Layer: HydroLogic
The HydroLogic IR (Section 3) is itself multifaceted as seen in Figure 1. The
core of the IR allows _Program Semantics_ P to be captured in a declarative
fashion, without recourse to implementation details regarding deployment or
other physical optimizations. For fragments of program logic that fail to lift
to HydroLogic, the language supports legacy sequential code via UDFs executed
inline or asynchronously in a FaaS style. The _Availability_ facet A allows a
programmer to ensure that each network endpoint in the application can remain
available in the face of $f$ failures across specified failure domains (VMs,
data centers, availability zones, etc.). In the _Consistency_ facet C we allow
users to specify their desires for replica consistency and transactional
properties; we use the term “consistency” to cover both. Specifically, we
allow service endpoints to specify the consistency semantics that senders can
expect to see. The final high-level _Targets for Optimization_ facet T allows
the developer to specify multiple objectives for performance, including
latency distributions, billing costs, downtime tolerance, etc.
Given a specification of the above facets, we have enough information to
compile an executable cloud deployment. HydroLogic’s first three facets
identify a finite space of satisfying distributed programs, and the fourth
provides performance objectives for optimization in that space.
### 2.2. Compilation of HydroLogic
The next challenge is to compile a HydroLogic specification into an executable
program deployable in the cloud. Rather than generating binaries to be
deployed directly on different cloud platforms, we will instead compile
HydroLogic specifications into programs written against APIs exposed by the
Hydroflow runtime (to be discussed in Section 2.3). Doing so allows
experienced developers to fine-tune different aspects of a deployment while
simplifying code generation. We are currently designing the Hydroflow APIs; we
envision them to cover different primitives that can be used to implement the
HydroLogic facets, such as:
* The choice of data structures for collection types and concrete physical implementations (e.g., join algorithm) to implement the semantics facet running as a local data flow on a single node.
* Partitioning (“sharding”) strategies for data and flows among multiple nodes, based on the data model facet.
* Mechanisms that together form the isolation and replica consistency protocols specific to the application.
* Scheduling and coordination primitives to execute data flows across multiple nodes, such as spawning and terminating Hydroflow threads on VMs.
* Monitoring hooks inserted into each local data flow to trigger adaptive reoptimization as needed during execution.
These primitives cover a lot of design ground, and we are still exploring
their design. A natural initial approach is to provide a finite set of choices
as different API calls, and combine API calls into libraries that provide
similar functionalities for the compiler or developer to invoke (e.g.,
different data partitioning mechanisms). We imagine that the Hydrolysis
compiler will analyze multiple facets to determine which APIs to invoke for a
given application, for instance combining the program semantics and targets
for optimization facets to determine which data structures and physical
implementations to use. In subsequent sections we illustrate one or more
choices for each. Readers familiar with these areas will hopefully begin to
see the larger optimization space we envision, by substituting in prior work
in databases, dataflow systems, and distributed storage. Note also that many
of these features can be bootstrapped in HydroLogic, e.g., by adapting prior
work on distributed logic for transaction and consensus protocols
(alvaro2010boom; alvaro2010declare), query compilation (condie2008evita), and distributed data structures (loo2005implementing).
We are designing a compiler called Hydrolysis to take a HydroLogic
specification and generate programs to be executed on the Hydroflow runtime.
As mentioned, our initial goal for Hydrolysis is to guarantee correctness
while meeting the performance of deployments on commercial FaaS pipelines. Our
next goal is to explore different compilation strategies for Hydrolysis,
ranging from syntax-directed, cost-driven translation (similar to a typical
SQL optimizer), to utilizing program synthesis and machine learning for
compilation. The faceted design of HydroLogic makes it easy to explore this
space: each facet can be compiled independently using (a combination of)
different strategies, and the generated code can then be combined and further
optimized with low-level, whole program transformation passes.
### 2.3. The Executable Layer: Hydroflow
The Hydroflow runtime (Section 8) is a strongly-typed single-node flow runtime
implemented in Rust. It subsumes ideas from both the dataflow engines common
in data processing, and the reactive programming engines more commonly used in
event-driven UI programming. Hydroflow provides an event-driven, flow-based
execution model, with operators that produce and consume various types
including collections (sets, relations, tensors, etc.), lattices (counters,
vector clocks, etc.) and traditional mutable scalar variables.
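As a rough illustration of the lattice types mentioned above, the following Python sketch (our own, not the Hydroflow API) shows two lattices whose merge is commutative, associative, and idempotent—the properties that let operators consume them without coordination:

```python
# Illustrative lattices (not Hydroflow code): each type carries a merge that
# is commutative, associative, and idempotent.

class MaxCounter:
    """A max lattice over integers (e.g., a monotonically growing counter)."""
    def __init__(self, value=0):
        self.value = value

    def merge(self, other):
        self.value = max(self.value, other.value)
        return self

class SetLattice:
    """A grow-only set lattice; merge is set union."""
    def __init__(self, items=()):
        self.items = set(items)

    def merge(self, other):
        self.items |= other.items
        return self

a = SetLattice({"alice"})
b = SetLattice({"bob"})
# Merge order does not matter: union is commutative and idempotent.
left = SetLattice(a.items).merge(b)
right = SetLattice(b.items).merge(a)
assert left.items == right.items == {"alice", "bob"}
```

Because merges commute, operators over such values can be freely reordered and replicated, which is what the runtime exploits.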
Hydroflow executes within a transducer network (ameloot2011relational) (Section 3.1). This event model allows for very high efficiency: as in the high-performance Anna KVS (wu2019anna), all state is thread local and
Hydroflow does not require any locks, atomics, or other coordination for its
own execution. Another advantage of the transducer model is the clean temporal
semantics. As discussed in Section 3.1, all state updates are deferred to end-
of-tick and applied atomically, so that handlers do not experience race
conditions within a tick. Non-deterministic ordering arises only via explicit
asynchronous messages.
### 2.4. A Running Example
As a running example, we start with a simplified backend for a COVID-19
tracking app. We assume a front-end application that generates pairwise
contact traces, allows medical organizations to report positive diagnoses, and
alerts users to the risk of infection. Sequential pseudocode is in Figure 2.
Figure 2. A simple COVID-19 tracking application (pseudocode).
The application logic starts with basic code to add an entry to the set
people. The add_contact function records the arrival of a new contact pair in
the contacts list of both people involved. The utility function trace returns
the transitive closure of a person’s contacts. Upon diagnosis, the diagnosed
function updates the state and sends an alert to the app for every person
transitively in contact. Next up is the likelihood function, which allows
recipients of an alert to synchronously invoke an imported black-box ML model
covid_predict, which returns a likelihood that the virus propagated to them
through the contact graph.
Our final function allocates a vaccine from inventory to a particular person.
We will revisit this example shortly, lifted into HydroLogic.
## 3. The Program Semantics Facet
In Hydro, our “evolutionary” approach is to accept programs written in
sequential code or legacy distributed frameworks like actors and futures. In a
best-effort fashion, we lift these programs into a higher-level intermediate representation (IR) language called HydroLogic. Over time we envision a desire
among some programmers for a more “revolutionary” approach involving user-
friendly syntax that maps fairly directly to HydroLogic or Hydroflow and their
more optimizable constructs. The IR syntax we present here is preliminary and
designed for exposition; we leave the full design of HydroLogic syntax for
future work.
We want our IR to be a target that is _optimizable_ , _general_ and
_programmer-friendly_. In the next few sections we introduce the IR and the
ways in which it is amenable to distributed optimizations. In Appendix A we
demonstrate generality by showing how various distributed computing models can
compile to HydroLogic.
Figure 3 shows our running example in a Pythonic version of HydroLogic. The
data model is presented in lines LABEL:line:datamodel through
LABEL:line:vaccine_count, discussed further in Section 5. The program
semantics (Section 3.1) are specified in lines LABEL:line:startofapp through
LABEL:line:endofapp, with the consistency facet (Section 7) declared inline
for the handler at Line LABEL:line:vaccinate that does not use the default of
eventual. Availability (Section 6) and Target facets (Section 9) appear at the
end of the example.
### 3.1. HydroLogic Semantics
Figure 3. A simple COVID-19 tracking application in a Pythonic HydroLogic syntax. Each on handler has faceted specifications of consistency, availability and deployment, either in the definition (as is done here with consistency specs) or in a separate block.
HydroLogic’s program semantics begin with its event loop, which is based on the transducer model in Bloom (alvaro2011consistency). HydroLogic’s event
loop considers the current _snapshot_ of program state, which includes any new
inbound messages to be handled. Each iteration (“tick”) of the loop uses the
developer’s program specification to compute new results from the snapshot,
and atomically updates state at the end of the tick. All computation within
the tick is done to fixpoint. The snapshot and fixpoint semantics together
ensure that the results of a tick are independent of the order in which
statements appear in the program.
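The tick discipline described above can be sketched as follows; this is an illustrative model, not Hydro code. Handlers read only a frozen snapshot and write only deferred mutations, so the outcome is independent of the order in which they run within a tick:

```python
# Illustrative model of tick semantics: snapshot in, compute, atomic
# end-of-tick state update. Names and shapes here are our own.

def run_tick(state, inbox, handlers):
    """state: dict of named sets; inbox: list of (mailbox, message) pairs."""
    # Freeze a snapshot of program state, including new inbound messages.
    snapshot = {k: frozenset(v) for k, v in state.items()}
    pending = {k: set() for k in state}          # deferred mutations
    for mailbox, msg in inbox:
        for handler in handlers.get(mailbox, ()):
            handler(snapshot, msg, pending)      # may only write to pending
    # End of tick: apply all mutations together, atomically.
    for k, delta in pending.items():
        state[k] |= delta
    return state
```

Since every handler sees the same snapshot and mutations land only at end of tick, reordering the inbox or the handler list cannot change the result of the tick.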
The notion of endpoints and events should be familiar to developers of
microservices or actors. Unlike microservices, actors or Bloom, HydroLogic’s
application semantics provide a simple “single-node” model—a global view of
state, and a single event loop providing a single sequence (clock) of
iterations (ticks). This single-node metaphor is part of the facet’s
declarative nature—it ignores issues of data placement, replication, message
passing, distributed time and consistency, deferring them to separable facets
of the stack.
Basic statements in HydroLogic’s program semantics facet come in a few forms:
_— Queries_ derive information from the current snapshot. Queries are named
and referenceable, like SQL views, and defined over various lattice types,
including relational tables. Line LABEL:line:transitive1 represents a simple
query returning pairs of Persons, the second of whom is a contact of the first. As in Datalog, multiple queries can have the same name, implicitly defining a merge of results across them. Lines LABEL:line:transitive1 and LABEL:line:transitive2 are an example, defining the base case and inductive case, respectively, for graph transitive closure. (HydroLogic supports recursion and non-monotonic operations, with stratified negation, for both relations and lattices. These features are based on BloomL; the interested reader is referred to (conway2012logic) for details.) A query $q$ may have
the same name as a data variable $q$, in which case the contents of data
variable $q$ are implicitly included in the query result.
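The base-case/inductive-case query pair can be read as a fixpoint computation. A naive Python rendering over a relation of (person, contact) pairs (the relation and function names are ours, not HydroLogic syntax):

```python
# Naive fixpoint evaluation of the transitive-closure query pair.

def transitive_contacts(contacts):
    """contacts: set of (a, b) pairs. Returns the transitive closure."""
    closure = set(contacts)                  # base case
    changed = True
    while changed:                           # iterate to fixpoint
        changed = False
        for (a, b) in list(closure):
            for (b2, c) in contacts:
                if b == b2 and (a, c) not in closure:
                    closure.add((a, c))      # inductive case
                    changed = True
    return closure
```

Because both rules are monotonic, the fixpoint exists and is unique, regardless of the order in which pairs are derived.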
_— Mutations_ are requests to modify data variables based on the current
contents of the snapshot. Following the transducer model, mutations are
deferred until the end of a clock “tick”—they become visible together,
atomically, once the tick completes. Mutations take three forms. A lattice
merge mutation, as in lines LABEL:line:addperson1, LABEL:line:addcontact1, LABEL:line:addcontact2, or LABEL:line:vaccinate1, monotonically “merges in” the lattice value of its
argument. The traditional bare assignment operator :=, as in line
LABEL:line:vaccinate2 represents an arbitrary, likely non-monotonic update. A
query $q$ with the same name as a data variable $q$ implicitly replaces
(mutates) $q$ at end of tick; this mutation is monotonic iff the query is
monotonic.
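The two mutation forms and their end-of-tick visibility can be modeled as follows (an illustrative sketch of the stated semantics, not HydroLogic itself):

```python
# Sketch contrasting a monotonic lattice merge with a bare assignment,
# both deferred until the tick commits. Names are ours.

class Tick:
    def __init__(self, state):
        self.state = dict(state)     # the visible snapshot
        self.merges = {}             # var -> values to merge in
        self.assigns = {}            # var -> replacement value

    def merge(self, var, values):
        self.merges.setdefault(var, set()).update(values)

    def assign(self, var, value):
        self.assigns[var] = value    # arbitrary, possibly non-monotonic

    def commit(self):
        new = dict(self.state)
        for var, vals in self.merges.items():
            new[var] = new.get(var, set()) | vals   # set-union lattice merge
        new.update(self.assigns)                    # := replaces wholesale
        return new

t = Tick({"people": {"alice"}, "vaccine_count": 10})
t.merge("people", {"bob"})
t.assign("vaccine_count", 9)
assert t.state["people"] == {"alice"}        # not visible within the tick
committed = t.commit()
assert committed["people"] == {"alice", "bob"}
assert committed["vaccine_count"] == 9
```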
_— Handlers_ begin with the keyword on, and model reactions to messages. Seen
within the confines of a tick, though, a handler is simply syntactic sugar for
Hydro statements mapped over a _mailbox_ of messages corresponding to the
handler’s name. The body of a handler is a collection of HydroLogic
statements, each quantified by the particular message being mapped. For
example, the add_person handler on Line LABEL:line:addperson is syntactic
sugar for the HydroLogic statements:
```
people.merge(Person(a.pid) for a in add_person)
send add_person<response>(message_id: int, payload: Status):
    {(a.message_id, OK) for a in add_person}
```
The implicit mailbox add_person<response> is used to send results to the
caller of the add_person API—e.g., to send the HTTP status response to a REST
call.
_— UDFs_ are black-box functions, and may keep internal state across
invocations. An example UDF, covid_predict, can be seen in the likelihood
handler of line LABEL:line:likelihood. UDFs cannot access HydroLogic variables
and should avoid any other external, globally-visible data storage. Because
UDFs can be stateful and non-idempotent, each UDF is invoked once per input
per tick (memoized by the runtime), in arbitrary order.
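The once-per-input-per-tick contract for UDFs can be sketched as a per-tick memo table (our illustration of the stated semantics, not the runtime’s implementation):

```python
# Per-tick memoization: a stateful UDF is invoked at most once per input
# per tick; repeated references within the tick reuse the memoized result.

def per_tick_memo(udf):
    cache = {}
    def begin_tick():
        cache.clear()                # memo is scoped to a single tick
    def call(arg):
        if arg not in cache:
            cache[arg] = udf(arg)    # at most one invocation per input
        return cache[arg]
    return begin_tick, call

calls = []
def covid_predict(pid):              # stand-in stateful black-box UDF
    calls.append(pid)
    return 0.5

begin_tick, predict = per_tick_memo(covid_predict)
begin_tick()
predict(7); predict(7)               # second reference served from the memo
assert calls == [7]
begin_tick()
predict(7)                           # new tick: the UDF is invoked again
assert calls == [7, 7]
```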
_— Send_ is an asynchronous merge into a mailbox. As with mutations, sends are
not visible during the current tick. Unlike mutations, sends might not appear
atomically—each individual object sent from a given tick may be “delayed” an
unbounded number of ticks, appearing non-deterministically in the specified
mailbox at any later tick. Sends capture the semantics of unbounded network
delay. Line LABEL:line:diagnosed2 provides an internal example, letting the
compiler know that we expect alerts to be delivered asynchronously. As another
example, we can rewrite the likelihood handler of line LABEL:line:likelihood
to use a remote FaaS service. This requires sending a request to the service
and handling a response:
```
on async_likelihood(pid: int, isolation=snapshot):
    send FaaS((covid_predict, handler.message_id, find_person(pid)))

on covid_predict<response>(al_message_id: int, result: bool):
    send async_likelihood<response>((handler.message_id,
                                     al_message_id, result))
```
HydroLogic statements can be bundled into _blocks_ of multiple statements, as
in the bodies of the add_contact and vaccinate handlers. Blocks can be
declared as object-like _modules_ with methods to scope naming and allow
reuse. Blocks and modules are purely syntactic sugar and we do not describe
them further here.
## 4. Lifting to HydroLogic
We aim for HydroLogic to be an evolutionary, general-purpose IR that can be
targeted from a range of legacy design patterns and languages, while pointing
the way toward coding styles that take advantage of more recent research.
Our goal in the near term is not to convert any arbitrary piece of code into
an elegant, easily-optimized HydroLogic program. In particular, we do not
focus on lifting existing “hand-crafted” distributed programs to HydroLogic.
We have a fair bit of experience (and humility!) about such a general goal.
Instead we focus on two scenarios for lifting:
Lifting single-threaded applications to the cloud: Many applications consist
largely of single-threaded logic, but would benefit from scaling—and
autoscaling—in the cloud. In our earlier work, we have had success using
verified lifting to convert sequential imperative code of this sort into
declarative frameworks like SQL (qbs, ), Spark (casper, ; casper2, ) and
Halide (dexter, ). One advantage of sequential programs—as opposed to hand-
coded multi-threaded or distributed “assembly code”—is that we do not have to
reverse-engineer consistency semantics from ad hoc patterns of messaging or
concurrency control in shared memory. Some interesting corpora of applications
are already written in opinionated frameworks that assist our goals. For
example, applications that are built on top of object-relational mapping (ORM)
libraries such as Rails (ror) and Django (django) are essentially built on top of data definition languages (e.g., ActiveRecord (activerecord)), which
makes it easy to lift the data model, and sometimes explicit transactional
semantics as well. ORM-based applications also often serve as backends for
multiple clients and need to scale over time—Twitter is a notorious example of
a Rails app that had to be rewritten for scalability and availability.
Evolving a breadth of distributed programming frameworks: There are existing
distributed programming frameworks that are fairly popular, and our near-term
goal is to embrace these programming styles. Simple examples include FaaS
interfaces and big-data style functional dataflow like Spark. Other popular
examples for asynchronous distributed systems include actor libraries (e.g.,
Erlang (erlang), Akka (akka), Orleans (bykov2011orleans)), libraries for distributed promises/futures (e.g., Ray (moritz2018ray) and Dask (dask) for Python), and collective communication libraries like that of MPI (mpi-collective). Programs written with these libraries adhere to fairly stylized
uses of distributed state and computation, which we believe we can lift
relatively cleanly to HydroLogic. In Appendix A we share our initial thoughts
and examples in this direction.
Our goals for lifting also offer validation baselines for the rest of our
research. If we can lift code from popular frameworks, we can auto-generate a
corpus of test cases. Hydro should aim to compete with the native runtimes for
these test cases. In addition, lifting to HydroLogic will hopefully illustrate
the additional flexibility Hydro offers via faceted re-specification of
consistency, availability and performance goals. And finally, success here
across different styles of frameworks will demonstrate the viability of our
stack as a common cloud runtime for multiple styles of distributed
programming, old and new.
## 5. HydroLogic’s Data Modeling
HydroLogic data models consist of four components: 1) a class hierarchy that
describes how persistent data is structured, 2) relational constraints, such
as functional dependencies, 3) persistent collection abstractions like
relations, ordered lists, sets, and associative arrays, and 4) declarations
for data placement across nodes in distributed deployments.
For instance, Lines LABEL:line:datamodel-LABEL:line:vaccine_count in Figure 3
show an example of persistent data specification for our Covid application.
The data is structured as Person objects, each storing an integer pid that
serves as a unique id (key), along with a set of references to other Persons
that they have been in contact with. Line LABEL:line:partition illustrates an
optional partition value to suggest how Person objects should be partitioned
across multiple nodes. (HydroLogic uses the class’s unique id to partition by
default). Line LABEL:line:people then prescribes that the Persons are to be
collectively stored in a table keyed on each person’s pid that is publicly
accessible by all functions in the program.
Partitioning allows developers to hint at ways to scatter data; a similar
syntax for locality hints is available. These hints are not required, however:
HydroLogic programmers can define their data model without needing to know how
their data will be stored in the cloud. The goal of Hydro is to take such
user-provided specifications and generate a concrete implementation
afterwards.
### 5.1. Design Space
As part of compilation, we need to choose an implementation of the data model
facet. For example in our Covid tracker we might store Person objects in
memory using an associative array indexed on each person’s pid, with each
person’s contacts field stored as a list with only the pids of the Person
objects. Obviously this particular implementation choice has tradeoffs with
more normalized choices, depending on workload.
In general, a concrete data structure implementation consists of two
components: choosing the container(s) to store persistent data (e.g., a
B+-tree indexed on a field declared in one of the persistent classes), and
determining the access path(s) given the choices for containers (e.g., an
index or full container scan) when looking up a specific object.
We envision that there will be multiple algorithms to generate concrete
implementations. These can range from a rule-driven approach that directly
matches on specific forms of queries and determines the corresponding
implementation (e.g., for programs with many lookup queries based on id, use
an associative array to store Person objects), to a synthesis-driven approach
that enumerates different implementations based on a grammar of basic data
structure implementations (stratos-data-structures) and a cost model. Access
paths can then be determined based on how the containers are selected.
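As a toy illustration of the cost-driven end of this spectrum, the following sketch picks a container from a workload profile. The candidate set and the cost numbers are invented for exposition; they are not Hydro’s cost model:

```python
# Toy cost-model-driven container selection from a workload profile.

def choose_container(n_objects, lookups_by_id, scans):
    candidates = {
        # (per-lookup cost, per-scan cost): rough relative estimates only
        "assoc_array": (1, n_objects * 1.5),
        "ordered_list": (n_objects / 2, n_objects),
        "btree_index": (max(1, n_objects.bit_length()), n_objects),
    }
    def cost(name):
        lookup, scan = candidates[name]
        return lookups_by_id * lookup + scans * scan
    return min(candidates, key=cost)   # cheapest candidate for this workload

# A lookup-heavy workload favors the associative array:
assert choose_container(10_000, lookups_by_id=1_000, scans=1) == "assoc_array"
```

A rule-driven approach would instead pattern-match on the query forms directly (e.g., “many lookups by id” implies an associative array), skipping the cost comparison.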
### 5.2. Promise and Challenges
We have designed a data structure synthesizer called Chestnut in our earlier work (chestnut; chestnut-demo), focusing on database-backed web
applications that are built using ORM libraries. Similar to HydroLogic,
Chestnut takes in a user-provided data model and workload specification, and
synthesizes data structures to store persistent data once it is loaded into
memory. Synthesis is done using an enumeration-based approach based on a set
of provided primitives. For example, if a persistently stored class contains N
attributes, Chestnut would consider storing all objects in an ordered list, or
in an associative array keyed on any of the N unique attributes, or split the
N attributes into a subset that is stored in a list, and the rest stored in a
B+-tree index. The corresponding access path is generated for each
implementation. Choosing among the different options is guided by a cost model
that estimates the cost of each query that can potentially be issued by the
application. Evaluations using open-source web apps showed that Chestnut can
improve query execution by up to 42$\times$.
Searching for the optimal data representation is reminiscent of the physical
design problem in data management research, and there has been a long line of
work on that front (autoadmin) that we can leverage. There has also been work done on data structure synthesis in the programming systems research community (cozy; aiken-data-structures) that focuses on the single-node
setting, with the goal to organize program state as relations and persistently
store them as such.
Synthesizing data structures based on HydroLogic specifications will raise new
challenges. First, we will need to design a data structure programming
interface that is expressive enough for the program specifications that users
will write. Next, we will need a set of data structure “building blocks” that
the synthesizer can utilize to implement program specifications. Such building
blocks must be composable, so that new structures can be designed, yet not so low-level that it becomes difficult to verify whether a synthesized implementation satisfies the provided specifications.
In addition, synthesizing distributed data structures will require new
innovations in devising cost models for data transfer and storage costs, and
reasoning about data placement and lookup mechanisms. New synthesis and
verification algorithms will need to be devised in order to handle both
aspects efficiently. Finally, workload changes (both client request rates and
cloud service pricing) motivate incremental synthesis, where initial data
structures are generated when the program is deployed, and gradually refined
or converted to other implementations based on runtime properties.
## 6. The Availability Facet
The availability facet starts with a simple programmer contract: ensure that
each application endpoint remains available in the face of $f$ independent
failures. In this discussion we assume that failures are non-Byzantine. The
definition of independence here is tied to a user-selected notion of _failure
domains_ : two failures are considered independent if they are in different
failure domains. Typical choices for failure domains include virtual machines,
racks, data centers, or availability zones (AZs). In line
LABEL:line:availability1 of Figure 3 we specify that our handlers should
tolerate faults across 2 AZs. In line LABEL:line:availability2 we override
that spec for the case of the likelihood handler, an ML routine that requires
expensive GPU reservations, for which we trade off availability to save cost.
### 6.1. Design Space
The natural methodology for availability is to replicate service
endpoints—execution and state—across failure domains. This goes back to the
idea of process pairs in the Tandem computing systems, followed by the Borg
and Lamport notions of state machine replication, and many distributed systems
built thereafter. For example, when compiling the handler for the add_contact
endpoint in line LABEL:line:addcontact of Figure 3, we can interpose
HydroLogic implementing a load-balancing client proxy module that tracks
replicas of the endpoint, forwards requests on to $f+1$ of them, and makes
sure that a response gets to the client.
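The client-proxy pattern can be sketched as follows; the placement and send functions are our own stand-ins, not Hydro APIs:

```python
# Sketch of a load-balancing client proxy: place f+1 replicas in distinct
# failure domains, forward each request to all of them, return any response.

def place_replicas(domains, f):
    """Pick f+1 replicas, each in a different failure domain."""
    assert len(domains) >= f + 1, "need f+1 distinct failure domains"
    return [domains[i][0] for i in range(f + 1)]   # one node per domain

def forward(request, replicas, send):
    """Send to all f+1 replicas; any single response suffices."""
    responses = [send(node, request) for node in replicas]
    ok = [resp for resp in responses if resp is not None]
    assert ok, "all f+1 replicas failed: more than f failures occurred"
    return ok[0]

# Two availability zones, f = 1: tolerate one zone failing entirely.
zones = [["az1-node1", "az1-node2"], ["az2-node1"]]
replicas = place_replicas(zones, f=1)
def send(node, req):
    return None if node.startswith("az1") else ("ok", req)  # az1 is down
assert forward("add_contact", replicas, send) == ("ok", "add_contact")
```

With f+1 replicas in f+1 independent failure domains, any f failures still leave at least one live replica to answer the client.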
Another standard approach is for backend logic to replicate its internal
state—often by generating logs or lineage of state mutation events for
subsequent replay. We could do this naively in our example by matching each
mutation statement with a log statement.
### 6.2. Promise and Challenges
Whether using replication or replay, availability is fundamentally achieved by
_redundancy_ of state and computation. The design of that redundancy is
typically complicated by two issues. The first is cost. In the absence of
failure, redundancy logic can increase latency. Worse, running an identical
replica of a massive service could be massively expensive. As a result, some
replication schemes reduce the cost of replicas by having them perform logic
that is different from—but semantically equivalent to—state change at the main
service. A standard example is to do logical logging at the storage level,
without redundantly performing application behavior. In general, it would of
course be challenging to synthesize sophisticated database logging and
recovery protocols from scratch. But simpler uses of activity logs for state
replication are an increasingly common design pattern for distributed
architectures (kafka; corfu; ambrosia), and use of these basic log-
shipping patterns and services could offer a point in the optimization space
of latency, throughput and resource consumption that differs from application-
level redundancy.
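A minimal model of this log-shipping pattern (our sketch, not a production protocol): the primary appends logical mutation records, and a replica replays them rather than redundantly re-executing application logic:

```python
# Logical log shipping: the primary logs mutation records; a replica
# reconstructs state by replaying the log.

def apply_record(state, record):
    op, key, value = record
    if op == "merge":
        state.setdefault(key, set()).update(value)   # monotonic merge
    elif op == "assign":
        state[key] = value                           # wholesale replacement
    return state

primary, log = {}, []

def mutate(record):
    apply_record(primary, record)
    log.append(record)   # shipped to replicas instead of re-running app logic

mutate(("merge", "people", {"alice"}))
mutate(("merge", "people", {"bob"}))
mutate(("assign", "vaccine_count", 9))

replica = {}
for record in log:       # replay is cheaper than redundant app execution
    apply_record(replica, record)
assert replica == primary
```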
The second complication that arises immediately from availability is the issue
of consistency across redundant state and computation, which we address next
with its own facet.
## 7. The Consistency Facet
The majority of distributed systems work has relegated consistency issues to
the storage or memory layer. But the past decade has seen a variety of clever
applications (shopping carts (decandia2007dynamo), collaborative editing systems (weiss2009logoot), gradient descent (recht2011hogwild), etc.) that
have demonstrated massive performance and availability benefits by customizing
consistency at the application layer. In Hydro we aim to take full
programs—including compositions of multiple independent modules—and
automatically generate similarly clever, “just right” code to meet
_application-specific_ consistency specifications.
The idea of raising transactional consistency from storage to the programming
language level is familiar from object databases (atkinson1990object) and distributed object systems. Liskov’s Argus language (liskov88argus) is a
canonical example, with each distributed method invoked as an isolated
(nested) transaction, strictly enforced via locking and two-phase commit. This
provides strong correctness properties—unnecessarily strong, since not every
method call in a distributed application requires strong consistency or
perfect isolation. From our perspective today, Argus and its peers passed up
the biggest question they raised: if all the application code is available,
how _little_ enforcement can the compiler use to provide those semantics? And
what if those semantics are weaker than serializability?
As seen in the example of Figure 3, HydroLogic allows consistency to be
specified at the level of the client API handlers. Like all our facets,
consistency can be specified inline with the handler definition (as in Figure
3), or in a separate consistency block. In practice, applications are built
from code written by different parties for potentially different purposes. As
a result the original consistency specs provided for different handlers may be
heterogeneous within a single application. What matters in the end is to
respect the (possibly heterogeneous) consistency that clients of the
application can observe from its public interfaces.
In Figure 3, the add_person handler uses default eventual consistency. This
ensures that if the two people in the arguments are not physically co-located,
then each person (and each replica) can be updated without waiting for any
others.
As a different example, the vaccinate handler specifies serializability and a
non-negative vaccine_count constraint. We might be concerned that
serializability for this handler will require strong consistency from other
handlers. Close analysis shows this is not the case: vaccinate is the only
handler that references vaccine_count, and all references to people are
monotonic and hence reorderable, including the mutation in vaccinate. As a
result, if vaccinate completes for some pid in any history, there is an
equivalent serial history in which vaccinate(pid) runs successfully with the
same initial value of vaccine_count and the same resulting values of both
vaccine_count and people.
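To make the reorderability argument concrete, the following Python sketch (our own toy model, not HydroLogic syntax) replays the two handlers under every possible operation order. Because add_person is a monotone set union and vaccinate is the sole handler touching vaccine_count, all orderings reach the same final state:

```python
import itertools

def apply_op(state, op):
    people, count = state
    kind, arg = op
    if kind == "add_person":       # monotone: the set of people only grows
        return (people | {arg}, count)
    if kind == "vaccinate":        # non-monotone: guarded decrement
        assert count > 0, "invariant: vaccine_count >= 0"
        return (people | {arg}, count - 1)

ops = [("add_person", "ann"), ("add_person", "bob"), ("vaccinate", "ann")]
finals = set()
for order in itertools.permutations(ops):
    state = (frozenset(), 1)       # (people, vaccine_count)
    for op in order:
        state = apply_op(state, op)
    finals.add(state)

assert len(finals) == 1            # every ordering is equivalent to a serial one
```

Every interleaving yields the same people set and the same final vaccine_count, which is exactly the equivalent-serial-history claim above.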
### 7.1. Design Space
In HydroLogic we enable two different types of consistency specifications:
traditional history-based guarantees, and application-centric _invariants_.
History-based guarantees are prevalent today, with widely agreed-upon
semantics. For example, serializability, linearizability, sequential
consistency, causal consistency, and others specifically constrain the
ordering of conflicting operations and in turn define “anomalies” that
applications can observe. The second type of consistency annotation we allow
is application-centric, and makes use of HydroLogic’s declarative formulation.
Past work has demonstrated that invariants are a powerful way for developers
to precisely specify what guarantees are necessary at application level
(hellerstein2020keeping, ; bailis2014coordination, ; whittaker2018interactive,
; magrino2019warranties, ; roy2015homeostatis, ; shapiro15invariant, ;
sivaramakrishnan2015declarative, ; crooks2017seeing, ). These include
monotonicity invariants that guarantee convergent outcomes, and isolation
invariants for predicates on visible states—e.g., positive bank accounts or
referential integrity.
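The flavor of these invariant-based analyses can be sketched with a toy "invariant confluence" style test (a heavy simplification of the cited work; the operations and numbers are invented): two updates need no coordination if, from every sampled invariant-satisfying state, applying them in either order preserves the invariant.

```python
def invariant(balance):
    return balance >= 0                       # e.g., a bank account never goes negative

def confluent(op_a, op_b, states):
    # From each invariant-satisfying start state we sample, do the two
    # updates preserve the invariant when applied in either order?
    return all(invariant(second(first(s)))
               for s in states if invariant(s)
               for first, second in ((op_a, op_b), (op_b, op_a)))

deposit  = lambda b: b + 50
withdraw = lambda b: b - 80

assert confluent(deposit, deposit, range(200))        # deposits commute safely
assert not confluent(deposit, withdraw, range(200))   # a withdrawal can violate >= 0
```

Deposits can run without coordination under this invariant; withdrawals cannot, because a reordering can drive the balance negative.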
### 7.2. Promise and Challenges
Many challenges fall out from an agenda of compiling arbitrary distributed
code to efficiently enforce consistency invariants. Based on work to date, we
believe the field is ripe for innovation. Here we highlight some key
challenges and our reasons for optimism.
Metaconsistency Analysis: Servicing a single public API call may require
crossing multiple internal endpoints with different consistency
specifications. This entails two challenges: identifying the possible
composition paths, and ensuring _metaconsistency_ : the consistency of
heterogeneous consistency specs along each path. The first problem amounts to
dataflow analysis across HydroLogic handlers; this is easy to do
conservatively in a static analysis of a HydroLogic program, though we may
desire more nuanced conditional solutions enforced at runtime. The question of
metaconsistency is related to our prior work on mixed consistency of black-box
services (milano2019tour, ; milano2018mixt, ; crooks2016tardis, ). In the
Hydro context we may use third-party services, but we also expect to have
plenty of white-box HydroLogic code, where we have the flexibility to adjust
the consistency specs across modules so that they compose correctly with the
specs at the public endpoints. Our recent work on client-centric
consistency offers a unified framework for reasoning about both transactional
isolation _and_ distributed consistency guarantees (crooks2020client, ).
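A toy version of this dataflow analysis (our simplification; the handler names and consistency specs below are invented) enumerates call paths across handlers and flags those that mix consistency levels and therefore need a metaconsistency rule:

```python
calls = {                  # handler -> handlers it invokes (invented example)
    "checkout": ["reserve", "notify"],
    "reserve":  [],
    "notify":   [],
}
spec = {"checkout": "serializable",
        "reserve":  "serializable",
        "notify":   "eventual"}

def paths(h, prefix=()):
    prefix = prefix + (h,)
    if not calls[h]:       # leaf handler: a complete composition path
        yield prefix
    for callee in calls[h]:
        yield from paths(callee, prefix)

mixed = [p for p in paths("checkout")
         if len({spec[h] for h in p}) > 1]
assert mixed == [("checkout", "notify")]   # this path mixes consistency levels
```

A conservative static analysis would reject or repair the flagged path, e.g., by upgrading notify's spec or inserting a reconciliation boundary.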
Consistency Mechanisms: Given a consistency requirement, we need to synthesize
code to enforce it. There are three broad approaches to choose from. The first
is to recognize when no enforcement is required for a particular code
block—examples include the monotonicity and invariant confluence analyses
mentioned above. Another is for the compiler to wrap or “encapsulate” state
with lattice metadata that allows for local (coordination-free) consistency
enforcement at each endpoint—this is the approach in our work on the
Cloudburst FaaS (sreekanti2020cloudburst, ) and Hydrocache
(wu2020transactional, ). The third approach is the traditional “heavyweight”
use of coordination protocols, including barriers, transaction protocols,
consensus-based logs for state-machine replication and so on. The space of
enforcement mechanisms is wide, but there are well-known building blocks in
the literature that we can use to start on our software synthesis agenda here.
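The second, lattice-encapsulation approach can be illustrated with a minimal sketch (our toy, not the Cloudburst or Hydrocache API): because the lattice join is commutative, associative, and idempotent, replicas converge regardless of message delivery order, with no coordination.

```python
class MaxLattice:                      # join = max: commutative, associative, idempotent
    def __init__(self, v):
        self.v = v
    def merge(self, other):
        return MaxLattice(max(self.v, other.v))

updates = [MaxLattice(3), MaxLattice(7), MaxLattice(5)]

replica_a = MaxLattice(0)
for u in updates:                      # one delivery order
    replica_a = replica_a.merge(u)

replica_b = MaxLattice(0)
for u in reversed(updates):            # a different delivery order
    replica_b = replica_b.merge(u)

assert replica_a.v == replica_b.v == 7 # replicas converge without coordination
```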
Consistency Placement: Understanding consistency specs and mechanisms is not
enough—we can also reason about where to invoke the mechanism in the program,
and how the spec is kept invariant downstream. This flexibility arises when we
consider consistency at an application level rather than as a storage
guarantee. As a canonical example, the original work on Dynamo’s shopping
carts was coordination-free _except_ for “sealing” the final cart contents for
checkout (decandia2007dynamo, ; helland2009building, ; alvaro2011consistency,
). Conway (conway2012logic, ) shifted the sealing to the end-user’s browser
code where it is decided unilaterally (for “free”) in an unreplicated stage of
the application. When shopping ends, the browser ships a compressed _manifest_
summarizing the final content of the cart. Maintaining the final cart state at
the replicas then becomes coordination-free as well: each replica can eagerly
move to checkout once its contents match the manifest. Alvaro systematized
this sealing idea in Blazes (alvaro2014blazes, ); more work is needed to
address the variety of coordination guarantees we wish to enforce and
maintain.
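A minimal sketch of the manifest idea (names ours, not the cited systems' code): the client ships a manifest of the final cart contents, and each replica decides checkout locally once its replicated state matches, with no cross-replica coordination.

```python
def ready_for_checkout(local_cart, manifest):
    # The "seal": once local replicated state matches the client's manifest,
    # checkout can proceed unilaterally at this replica.
    return local_cart == manifest

manifest = {"book", "mug"}             # shipped by the browser at end of session
replica = set()
for item in ["mug", "book"]:           # cart updates may arrive in any order
    replica.add(item)

assert ready_for_checkout(replica, manifest)
```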
Clearly these issues are correlated, so a compiler will have to explore the
space defined by their combinations.
## 8\. The Hydroflow IR
Like many declarative languages, to execute HydroLogic we translate it down to
a lower-level algebra of operators that can be executed in a flow style on a
single node, or partitioned and pipelined across multiple nodes (Section 9).
Most of these operators are familiar from relational algebra and functional
libraries like Spark and Pandas. Here we focus on the unique aspects of the
HydroLogic algebra.
### 8.1. Design Space
The Hydroflow algebra has to handle all the constructs of HydroLogic’s event
loop. One of the key goals of the Hydroflow algebra design is a _unification
of dataflow, lattices and reactive programming._ Typical runtimes implement a
dataflow model of operators over streaming collections of individual items.
This assumes that collection types and their operators are the primary types
in any program. We want to accommodate lattices beyond collection types. For
example, a COUNT query takes a set lattice as input and produces an integer
lattice as output; we need the output of that query to “pipeline” in the same
fashion as a set. In addition, to capture state mutation we want to adapt
reactive programming models (e.g., React.js and Rx) that provide ordered
streams propagating changes to individual values over time.
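The COUNT case can be sketched as a monotone morphism between lattices (toy Python of our own, not Hydroflow code): the input set only grows under union, and the output integer only grows under max, so downstream operators can consume the count incrementally, just like a set-valued stream.

```python
seen = set()               # set lattice: grows under union
count = 0                  # integer lattice: grows under max

def ingest(batch):
    global count
    seen.update(batch)                 # set-lattice merge
    count = max(count, len(seen))      # monotone morphism into the max lattice
    return count

assert ingest({"a", "b"}) == 2
assert ingest({"b", "c"}) == 3         # output only grows, so it can pipeline
```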
In deployment, a Hydro program involves Hydroflow algebra fragments running at
multiple nodes in a network, communicating via messages. Inbound messages
appear at Hydroflow ingress operators, and outbound messages are produced by
egress operators. These operators are agnostic to networking details like
addressing and queueing, which are parameterized by the target facet. However,
as a working model we can consider that a network egress point in Hydroflow
can be parameterized to do explicit point-to-point networking, or a content-
hash-based style of addressing. As a result, local Hydroflow algebra programs
can participate as fragments of a wide range of deployment models, including
parallel intra-operator partitioning (a la Exchange or MapReduce) as well as
static dataflows across algebraic operators, or dynamic invocations of on-
demand operators.
### 8.2. Promise and Challenges
A program in HydroLogic can be lowered (compiled) to a set of single-node
Hydroflow algebra expressions in a straightforward fashion, much as one can
compile SQL to relational algebra. Our concern at this stage of lowering is
scoped to single-node, in-memory performance; issues of distributed computing
are deferred to Section 9. The design space here is similar to that of
traditional query optimization, and classical methods such as Cascades are a
plausible approach (cascades, ). We are also considering more recent results
in program synthesis here, since they have shown promise in traditional query
optimization (qbs, ; statusquo, ).
The design of the Hydroflow algebra is a work in progress, and achieving a
semantics that unifies all of its aspects is non-trivial. In addition, two
other challenges arise naturally.
Figure 4. On the trickiness of manual checks for monotonicity. The full thread
includes pseudocode and fixes (kleppmanntweet, ).
Monotonicity typechecking: Current models for monotonic programming like CRDT
libraries expect programmers to guarantee the monotonicity of their code
manually. This is notoriously tricky—see Figure 4. BloomL attempted to
simplify this problem by replacing monolithic CRDTs with monotone compositions
of simpler lattices, but correctness was still assumed for the basic lattices
and composition functions. We wish to go further, providing an explicit
monotone type modifier, and a compiler that can typecheck monotonicity.
Guarantees of monotonicity can be exploited to ensure guarantees from the
consistency facet (Section 7) as part of the Optimization facet (Section 9).
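What "typecheck monotonicity" means can be approximated dynamically (a property test over small inputs, not the static type system we envision; the function names are ours): a function over sets is monotone if growing the input can never shrink the output.

```python
from itertools import combinations

def looks_monotone(f, universe):
    # Enumerate all subsets and check: x subset-of y implies f(x) <= f(y).
    subsets = [frozenset(c) for r in range(len(universe) + 1)
               for c in combinations(sorted(universe), r)]
    return all(f(x) <= f(y) for x in subsets for y in subsets if x <= y)

size = len                                   # monotone: never shrinks as input grows
has_even_size = lambda s: len(s) % 2 == 0    # not monotone: flips as input grows

assert looks_monotone(size, {1, 2, 3})
assert not looks_monotone(has_even_size, {1, 2, 3})
```

A compiler would establish the same property statically, so that downstream guarantees never rest on a programmer's manual (and, per Figure 4, error-prone) reasoning.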
Representation of flows beyond collections: Algebras defined for a collection
type C<T> (e.g., relational algebra on set<tuple>) are often implemented in a
dataflow of operators over the underlying element type T, or over incremental
batches of elements. This _differential_ approach is well-suited for operators
on C<T> that have stateless implementations over T—e.g., map, select and
project. Other operators require stateful implementations that use ad-hoc
internal memory management to rebuild collections of type C<T> across
invocations over type T. This makes it difficult for a compiler to check
properties like determinism or monotonicity. Moreover, in Hydroflow we want to
expand flow computation beyond collection types to lattices and reactive
scalar values. Hence we need to support operators that view inputs
differentially or all-at-once, providing clear semantics for both cases, and
allowing properties like monotonicity to be statically checked by a compiler.
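The contrast can be sketched in Python (toy operator interfaces of our own): a stateless operator like map processes each batch independently, while distinct must carry ad-hoc internal state across invocations, which is exactly what makes properties like determinism harder for a compiler to check.

```python
def map_op(batch, f):          # stateless over elements: no memory across batches
    return [f(x) for x in batch]

class DistinctOp:              # stateful: must remember everything it has emitted
    def __init__(self):
        self.seen = set()
    def push(self, batch):
        out = []
        for x in batch:
            if x not in self.seen:
                self.seen.add(x)
                out.append(x)
        return out

d = DistinctOp()
assert map_op([1, 2, 2], lambda x: x * 10) == [10, 20, 20]
assert d.push([1, 2, 2]) == [1, 2]    # dedup within the batch
assert d.push([2, 3]) == [3]          # and across batches, via internal state
```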
Copy efficiency: In many modern applications and systems, the majority of
compute time is spent in copying and formatting data. Developers who have
built high-performance query engines know that it is relatively easy to build
a simple dataflow prototype, and quite hard to build one that makes efficient
use of memory and minimizes the cost of data copying and redundant work.
Taking a cue from recent systems like Timely Dataflow (mcsherry2017modular, ),
we use the ownership properties of the Rust language to help us carefully
control how data is managed in memory along our flows.
## 9\. The Target Facet
After specifying various semantic aspects of the application, the final facet
describes the targets for optimization that the cloud runtime should achieve,
as described in Section 1.1. Such targets can include a cost budget to spend
on running the application on the cloud, maximum number of machines to
utilize, specific capabilities of the hosted machines (e.g., GPU on board),
latency requirements for any of the handlers, etc. We imagine that the user
will provide a subset of these configuration parameters and leave the rest to
be determined by Hydrolysis.
For example, the deployment lines in Figure 3 show the targets for our COVID
application. One line specifies the default latency/cost goals for handlers;
another specializes this for the machine-learning-based likelihood handler,
dictating the use of GPU-class machines with a higher budget per call.
Compared to the current practice of deployment configurations spread across
platform-specific scripts (aws-gateway, ; azure-management, ), program
annotations (cloud-annotations, ), and service level agreements, HydroLogic
allows developers to consolidate deployment-related targets in an isolated
program facet. This allows developers to easily see and change the
cost/performance profiles of their code, and enables HydroLogic applications
to be deployed across different cloud platforms with different
implementations.
### 9.1. Design Space
Given a HydroLogic specification, the Hydrolysis compiler will attempt to find
an implementation that satisfies the provided constraints subject to the
desired overall objectives. As discussed in Section 2, Hydro will first
generate an initial implementation of the application based on the previously
described facets. The initial implementation would have various aspects of the
application determined: algorithms for data-related operations, replication
and consistency protocols, etc. What remains are the runtime deployment
aspects such as mapping of functions and data to available machines.
For instance, given the code in Figure 3, Hydro can formulate the runtime
mapping problem as an integer programming problem, based on our prior work
(pyxis, ; quro, ). Such a mapping problem can be formulated as a dynamic
program partitioning problem. Suppose at any given time we have $M$ different
types of machine configurations to choose from, and $n_{i}$ represents the
number of instances we will use for machine of type $i$. We then have the
following constraints:
* $latency(\texttt{add\_person}, n_{i}) \leq 100ms$. The latency incurred by hosting add_person on $n_{i}$ instances of type $i$ machines must not exceed the specified value. We have one constraint for each pair of handler and machine type; Figure 3 shows a shortcut using the default construct while overriding it for the likelihood handler.
* $cost(\texttt{add\_person}, n_{i}) \leq 0.01$. The cost of running add_person on $n_{i}$ instances of type $i$ machines must not exceed the specified value. The value can either be specified by the end user or provided by the hosting platform.
* $\sum_{i} n_{i} > 0$. Allocate some machines to fulfill the workload.
The overall objective depends on the user specification, for instance
minimizing the total number of machines used ($\sum_{i}n_{i}$), or maximizing
overall throughput of each handler $f_{j}$ while executed on $n_{i}$ machines
($\sum_{i,j}tput(f_{j},n_{i})$). More sophisticated objectives are possible,
for instance incurring up to a fixed cost over a time period (aws-budgets, ).
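As an illustration of this formulation (a brute-force stand-in for a real integer-programming solver; every machine type, latency model, and price below is invented for the example): we search over instance counts per machine type, keep only assignments satisfying the latency, cost, and non-emptiness constraints, and minimize total machines.

```python
from itertools import product

machine_types = {          # type -> (latency in ms at n=1, dollars per call at n=1)
    "small": (150, 0.005),
    "gpu":   (60,  0.02),
}

def latency(mtype, n):     # crude model: latency shrinks as instances are added
    return machine_types[mtype][0] / n

def cost(mtype, n):
    return machine_types[mtype][1] * n

best = None
for counts in product(range(5), repeat=len(machine_types)):
    if sum(counts) == 0:
        continue           # constraint: allocate some machines
    assignment = dict(zip(machine_types, counts))
    feasible = all(n == 0 or (latency(m, n) <= 100 and cost(m, n) <= 0.03)
                   for m, n in assignment.items())
    if feasible and (best is None or sum(counts) < sum(best.values())):
        best = assignment  # objective: minimize total machines used

print(best)                # {'small': 0, 'gpu': 1}
```

A production version would hand these constraints to an ILP solver and re-solve as cost models or workloads drift, as discussed below.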
As formulated above, our integer programming problem relies on having models
to estimate latency, throughput, and cost of running each function given
machine type and number of instances. Solving the problem gives us the values
of each $n_{i}$, i.e., the number of instances to allocate for each machine
configuration. Given that program objectives or resource costs can vary as the
application executes in the cloud, we might need to periodically reformulate
the problem based on the data available. Predicting or detecting when a
reformulation is needed will be interesting future work.
Note that the above integer program might not have a solution, e.g., if the
initial implementations were too costly to meet the given targets. If this
arises, Hydro can ask previous components to choose other implementations and
reiterate the mapping procedure. This iterative process is simplified by
decomposing the application into facets, allowing Hydro to revert to a
previous search state during compilation.
### 9.2. Promise and Challenges
Our problem formulation above is inspired by prior work in multi-query
optimization (mqo, ), multi-objective optimization (multi-objective-qo, ) and
scheduling tasks for serverless platforms (serverless-scheduling, ). Our
faceted setting, however, also raises new challenges.
Cost modeling: Our integer programming problem formulation relies on having
accurate cost models for different aspects of program execution on the cloud
(e.g., latency prediction). While cost prediction has been a classic research
topic in data management, much of the prior work has focused on single and
distributed relational databases. The cloud presents new challenges as
functions can move across heterogeneous machines, and aspects such as machine
prices and network latencies can vary at any time.
Solution enumeration: As mentioned earlier, our faceted approach allows Hydro
to easily backtrack during compilation, should an initial strategy turn out to
be infeasible given the configuration constraints. Implementing backtracking
will rely on an efficient way to enumerate different implementations based on
the previously described facets, and being able to do so efficiently in real
time. This depends on the algorithms used to generate the initial
implementations, for instance by considering types of query plans that were
previously bypassed during code generation, or asking solvers to generate
another satisfiable solution if formal methods-based algorithms are used. We
will also need feedback mechanisms to interact with the user, should the
provided specifications prove too stringent.
Adaptive optimization: One of the reasons for deploying applications on the
cloud is to leverage the cloud’s elasticity. As a consequence, the
implementation generated by Hydro will likely need to change over time. While
Hydro’s architecture is designed to tackle that aspect by not having hard-
wired rules for code generation, we will also devise new runtime monitoring
and adaptive code generation techniques, in the spirit of prior work (pyxis, ;
eddies, ; aqp-survey, ).
## 10\. Conclusion
We are optimistic that the time is ripe for innovation and adoption of new
technologies for end-user distributed programming. This is based not only on
our assessment of research progress and potential, but also the emerging
competition among cloud vendors to bring third-party developers to their
platforms. We are currently implementing the Hydroflow runtime, and exploring
different algorithms to lift legacy design patterns to HydroLogic. Our next
goals are to design the compilation strategies from HydroLogic to Hydroflow
programs, and to explore the compilation of application-specific availability
and consistency protocols. We are also contemplating related research agendas
in security and developer experience including debugging and monitoring. There
are many research challenges ahead, but we believe they can be addressed
incrementally and in parallel, and quickly come together in practical forms.
## 11\. Acknowledgments
This work is supported in part by the National Science Foundation through
grants CNS-1730628, IIS-1546083, IIS-1955488, IIS-2027575, CCF-1723352, and
DOE award DE-SC0016260; the Intel-NSF CAPA center, and gifts from Adobe,
Amazon, Ant Group, Ericsson, Facebook, Futurewei, Google, Intel, Microsoft,
NVIDIA, Scotiabank, Splunk and VMware.
## References
* [1] Active Record. https://github.com/rails/rails/tree/master/activerecord.
* [2] G. Agha. Concurrent object-oriented programming. Communications of the ACM, 33(9):125–141, 1990.
* [3] S. Agrawal, S. Chaudhuri, and V. R. Narasayya. Automated selection of materialized views and indexes in sql databases. In VLDB, 2000.
* [4] M. Ahmad and A. Cheung. Leveraging parallel data processing frameworks with verified lifting. In Proceedings Fifth Workshop on Synthesis, SYNT@CAV 2016, pages 67–83, 2016.
* [5] M. Ahmad et al. Automatically leveraging mapreduce frameworks for data-intensive applications. In SIGMOD, pages 1205–1220, 2018.
* [6] M. Ahmad et al. Automatically translating image processing libraries to halide. ACM Trans. Graph., 38(6):204:1–204:13, 2019.
* [7] Akka homepage, Aug. 2020. https://akka.io, retrieved 8/28/2020.
* [8] I. E. Akkus, R. Chen, I. Rimac, M. Stein, K. Satzke, A. Beck, P. Aditya, and V. Hilt. SAND: Towards high-performance serverless computing. In USENIX ATC, pages 923–935, 2018.
* [9] P. Alvaro, T. Condie, N. Conway, K. Elmeleegy, J. M. Hellerstein, and R. Sears. Boom analytics: exploring data-centric, declarative programming for the cloud. In Proceedings of the 5th European conference on Computer systems, pages 223–236, 2010.
* [10] P. Alvaro, T. Condie, N. Conway, J. M. Hellerstein, and R. Sears. I do declare: consensus in a logic language. ACM SIGOPS Operating Systems Review, 43(4):25–30, 2010.
* [11] P. Alvaro et al. Consistency Analysis in Bloom: a CALM and Collected Approach. In CIDR, pages 249–260, 2011.
* [12] P. Alvaro et al. Blazes: Coordination analysis for distributed programs. In ICDE, pages 52–63. IEEE, 2014.
* [13] Amazon. Amazon API gateway. https://aws.amazon.com/api-gateway/.
* [14] Amazon. Aws budgets update – track cloud costs and usage. https://aws.amazon.com/blogs/aws/aws-budgets-update-track-cloud-costs-and-usage.
* [15] T. J. Ameloot, F. Neven, and J. Van den Bussche. Relational transducers for declarative networking. In Principles of Database Systems (PODS), pages 283––292, June 2011.
* [16] M. Atkinson, D. Dewitt, D. Maier, F. Bancilhon, K. Dittrich, and S. Zdonik. The object-oriented database system manifesto. In Deductive and object-oriented databases, pages 223–240. Elsevier, 1990.
* [17] R. Avnur and J. M. Hellerstein. Eddies: Continuously adaptive query processing. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, May 16-18, 2000, Dallas, Texas, USA, pages 261–272. ACM, 2000.
* [18] P. Bailis, A. Fekete, M. J. Franklin, A. Ghodsi, J. M. Hellerstein, and I. Stoica. Coordination avoidance in database systems. Proc. VLDB Endow., 8(3):185–196, Nov. 2014.
* [19] H. C. Baker and C. Hewitt. The incremental garbage collection of processes. SIGPLAN Not., 12(8):55–59, Aug. 1977.
* [20] M. Balakrishnan, D. Malkhi, V. Prabhakaran, T. Wobber, M. Wei, and J. D. Davis. CORFU: A shared log design for flash clusters. In S. D. Gribble and D. Katabi, editors, Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2012, San Jose, CA, USA, April 25-27, 2012, pages 1–14. USENIX Association, 2012.
* [21] V. Balegas, N. Preguiça, R. Rodrigues, S. Duarte, C. Ferreira, M. Najafzadeh, and M. Shapiro. Putting consistency back into eventual consistency. In EuroSys, pages 6:1–6:16, Bordeaux, France, Apr. 2015. Indigo.
* [22] S. Bykov, A. Geller, G. Kliot, J. R. Larus, R. Pandya, and J. Thelin. Orleans: cloud computing for everyone. In SoCC, pages 1–14, 2011.
* [23] C. Chedeau. React’s architecture. In OSCON, July 2014.
* [24] A. Cheung, O. Arden, S. Madden, A. Solar-Lezama, and A. C. Myers. StatusQuo: Making familiar abstractions perform using program analysis. In CIDR, 2013.
* [25] A. Cheung, S. Madden, O. Arden, and A. C. Myers. Automatic partitioning of database applications. PVLDB, 5(11):1471–1482, 2012.
* [26] A. Cheung, A. Solar-Lezama, and S. Madden. Optimizing database-backed applications with query synthesis. In PLDI, pages 3–14, 2013.
* [27] T. Condie, D. Chu, J. M. Hellerstein, and P. Maniatis. Evita raced: metacompilation for declarative networks. Proceedings of the VLDB Endowment, 1(1):1153–1165, 2008.
* [28] N. Conway et al. Logic and lattices for distributed programming. In SoCC, pages 1–14, 2012.
* [29] N. Crooks. A client-centric approach to transactional datastores. PhD thesis, University of Texas, Austin, 2020.
* [30] N. Crooks, Y. Pu, L. Alvisi, and A. Clement. Seeing is believing: A client-centric specification of database isolation. In Proceedings of the ACM Symposium on Principles of Distributed Computing, PODC ’17, page 73–82, New York, NY, USA, 2017. Association for Computing Machinery.
* [31] N. Crooks, Y. Pu, N. Estrada, T. Gupta, L. Alvisi, and A. Clement. Tardis: A branch-and-merge approach to weak consistency. In Proceedings of the 2016 International Conference on Management of Data, SIGMOD ’16, page 1615–1628, New York, NY, USA, 2016. Association for Computing Machinery.
* [32] Dask parallel computing library. https://dask.org/.
* [33] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels. Dynamo: amazon’s highly available key-value store. ACM SIGOPS operating systems review, 41(6):205–220, 2007.
* [34] A. Deshpande, Z. G. Ives, and V. Raman. Adaptive query processing. Foundations and Trends in Databases, 1(1):1–140, 2007.
* [35] D. DeWitt and J. Gray. Parallel database systems: the future of high performance database systems. Communications of the ACM, 35(6):85–98, 1992.
* [36] Django. https://www.djangoproject.com/.
* [37] The Erlang programming language. https://www.erlang.org.
* [38] E. Gamma, R. Helm, R. Johnson, and J. M. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Professional, 1 edition, 1994.
* [39] J. Goldstein, A. S. Abdelhamid, M. Barnett, S. Burckhardt, B. Chandramouli, D. Gehring, N. Lebeck, C. Meiklejohn, U. F. Minhas, R. Newton, R. Peshawaria, T. Zaccai, and I. Zhang. A.M.B.R.O.S.I.A: providing performant virtual resiliency for distributed applications. Proc. VLDB Endow., 13(5):588–601, 2020.
* [40] G. Graefe. The cascades framework for query optimization. IEEE Data Eng. Bull., 18(3):19–29, 1995.
* [41] C. Granger. Against the current: What we learned from eve. In Future of Coding LIVE Conference, 2018. https://futureofcoding.org/notes/live/2018.
* [42] S. Gulwani. Dimensions in program synthesis. In T. Kutsia, W. Schreiner, and M. Fernández, editors, PPoPP, pages 13–24, 2010.
* [43] P. Helland and D. Campbell. Building on quicksand. In CIDR, 2009.
* [44] J. M. Hellerstein. The declarative imperative: experiences and conjectures in distributed logic. ACM SIGMOD Record, 39(1):5–19, 2010.
* [45] J. M. Hellerstein and P. Alvaro. Keeping calm: when distributed consistency is easy. Communications of the ACM, 63(9):72–81, 2020.
* [46] J. M. Hellerstein, J. Faleiro, J. E. Gonzalez, J. Schleier-Smith, V. Sreekanti, A. Tumanov, and C. Wu. Serverless computing: One step forward, two steps back. In CIDR, 2019.
* [47] C. Hewitt. Viewing control structures as patterns of passing messages. Artificial intelligence, 8(3):323–364, 1977.
* [48] S. Idreos, K. Zoumpatianos, M. Athanassoulis, N. Dayan, B. Hentschel, M. S. Kester, D. Guo, L. M. Maas, W. Qin, A. Wasay, and Y. Sun. The periodic table of data structures. IEEE Data Eng. Bull., 41(3):64–75, 2018.
* [49] K. Kaffes, N. J. Yadwadkar, and C. Kozyrakis. Centralized core-granular scheduling for serverless functions. In Proceedings of the ACM Symposium on Cloud Computing, SoCC 2019, Santa Cruz, CA, USA, November 20-23, 2019, pages 158–164. ACM, 2019.
* [50] Kafka. Apache kafka. https://kafka.apache.org.
* [51] V. Kalavri, J. Liagouris, M. Hoffmann, D. Dimitrova, M. Forshaw, and T. Roscoe. Three steps is all you need: fast, accurate, automatic scaling decisions for distributed streaming dataflows. In 13th $\\{$USENIX$\\}$ Symposium on Operating Systems Design and Implementation ($\\{$OSDI$\\}$ 18), pages 783–798, 2018.
* [52] M. Kleppmann. Twitter thread, Nov. 2020. https://twitter.com/martinkl/status/1327020435419041792.
* [53] L. Kuper and R. R. Newton. Lvars: lattice-based data structures for deterministic parallelism. In ACM SIGPLAN, pages 71–84, 2013.
* [54] C. Lattner and V. Adve. LLVM: A compilation framework for lifelong program analysis & transformation. In International Symposium on Code Generation and Optimization, CGO 2004, pages 75–86. IEEE, 2004.
* [55] W. Lee, M. Papadakis, E. Slaughter, and A. Aiken. A constraint-based approach to automatic data partitioning for distributed memory execution. In M. Taufer, P. Balaji, and A. J. Peña, editors, Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2019, Denver, Colorado, USA, November 17-19, 2019, pages 45:1–45:24. ACM, 2019.
* [56] B. Liskov. Distributed programming in argus. Commun. ACM, 31(3):300–312, Mar. 1988.
* [57] C. Loncaric, E. Torlak, and M. D. Ernst. Fast synthesis of fast collections. In C. Krintz and E. Berger, editors, Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2016, Santa Barbara, CA, USA, June 13-17, 2016, pages 355–368. ACM, 2016.
* [58] B. T. Loo, T. Condie, M. Garofalakis, D. E. Gay, J. M. Hellerstein, P. Maniatis, R. Ramakrishnan, T. Roscoe, and I. Stoica. Declarative networking. Communications of the ACM, 52(11):87–95, 2009.
* [59] B. T. Loo, T. Condie, J. M. Hellerstein, P. Maniatis, T. Roscoe, and I. Stoica. Implementing declarative overlays. In Proceedings of the twentieth ACM symposium on Operating systems principles, pages 75–90, 2005.
* [60] T. Magrino, J. Liu, N. Foster, J. Gehrke, and A. C. Myers. Efficient, consistent distributed computation with predictive treaties. In Proceedings of the Fourteenth EuroSys Conference 2019, EuroSys ’19, New York, NY, USA, 2019. Association for Computing Machinery.
* [61] F. McSherry et al. A modular implementation of timely dataflow in Rust, 2017. https://github.com/TimelyDataflow/timely-dataflow, retrieved 12/23/2020.
* [62] E. Meijer. Reactive extensions (rx) curing your asynchronous programming blues. In ACM SIGPLAN Commercial Users of Functional Programming, pages 1–1. ACM, 2010.
* [63] C. Meiklejohn and P. Van Roy. Lasp: A language for distributed, coordination-free programming. In Principles and Practice of Declarative Programming, pages 184–195, 2015.
* [64] Message Passing Interface Forum. Mpi-2: Extensions to the message-passing interface, 1997. https://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-2.0/mpi2-report.htm, retrieved 12/16/2020.
* [65] Metadata Partners, LLC. Datomic: Technical overview, 2012. https://web.archive.org/web/20120324023546/http://datomic.com/docs/datomic-whitepaper.pdf. Accessed: Dec. 13 2020.
* [66] Microsoft. Azure management API. https://azure.microsoft.com/en-us/services/api-management.
* [67] M. Milano and A. C. Myers. Mixt: A language for mixing consistency in geodistributed transactions. SIGPLAN Notices, 53(4):226–241, 2018.
* [68] M. Milano, R. Recto, T. Magrino, and A. C. Myers. A tour of gallifrey, a language for geodistributed programming. In SNAPL, 2019.
* [69] P. Moritz et al. Ray: A distributed framework for emerging AI applications. In OSDI, pages 561–577, 2018.
* [70] MPI Collective Functions. https://docs.microsoft.com/en-us/message-passing-interface/mpi-collective-functions.
* [71] J. Ragan-Kelley, C. Barnes, A. Adams, S. Paris, F. Durand, and S. Amarasinghe. Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. Acm Sigplan Notices, 48(6):519–530, 2013.
* [72] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. Advances in Neural Information Processing Systems, 24:693–701, 2011.
* [73] D. Rogora, S. Smolka, A. Carzaniga, A. Diwan, and R. Soulé. Performance annotations for cloud computing. In E. de Lara and S. Sundararaman, editors, 9th USENIX Workshop on Hot Topics in Cloud Computing, HotCloud. USENIX Association, 2017.
* [74] S. Roy, L. Kot, G. Bender, B. Ding, H. Hojjat, C. Koch, N. Foster, and J. Gehrke. The homeostasis protocol: Avoiding transaction coordination through program analysis. In SIGMOD ’15, page 1311–1326, New York, NY, USA, 2015. Association for Computing Machinery.
* [75] Ruby on Rails. http://rubyonrails.org/.
* [76] M. Samuel, C. Yan, and A. Cheung. Demonstration of chestnut: An in-memory data layout designer for database applications. In D. Maier, R. Pottinger, A. Doan, W. Tan, A. Alawini, and H. Q. Ngo, editors, Proceedings of the 2020 International Conference on Management of Data, SIGMOD Conference 2020, online conference [Portland, OR, USA], June 14-19, 2020, pages 2813–2816. ACM, 2020.
* [77] T. K. Sellis. Multiple-query optimization. ACM Trans. Database Syst., 13(1):23–52, 1988.
* [78] M. Shapiro, N. Preguiça, C. Baquero, and M. Zawirski. Conflict-free replicated data types. In Symposium on Self-Stabilizing Systems, pages 386–400. Springer, 2011.
* [79] K. C. Sivaramakrishnan, G. Kaki, and S. Jagannathan. Declarative programming over eventually consistent data stores. ACM SIGPLAN Notices, 50(6):413–424, 2015.
* [80] A. Sivaraman et al. Packet transactions: High-level programming for line-rate switches. In SIGCOMM, pages 15–28, 2016.
* [81] V. Sreekanti et al. Cloudburst: Stateful functions-as-a-service. PVLDB, 13(11):2438–2452, 2020.
* [82] I. Trummer and C. Koch. Multi-objective parametric query optimization. SIGMOD Rec., 45(1):24–31, 2016.
* [83] S. Weiss, P. Urso, and P. Molli. Logoot: A scalable optimistic replication algorithm for collaborative editing on p2p networks. In 2009 29th IEEE International Conference on Distributed Computing Systems, pages 404–412. IEEE, 2009.
* [84] M. Whittaker and J. M. Hellerstein. Interactive checks for coordination avoidance. Proc. VLDB Endow., 12(1):14–27, Sept. 2018.
* [85] C. Wu, J. Faleiro, Y. Lin, and J. Hellerstein. Anna: A KVS for any scale. IEEE TKDE, 2019.
* [86] C. Wu, V. Sreekanti, and J. M. Hellerstein. Transactional causal consistency for serverless computing. In SIGMOD, pages 83–97, 2020.
* [87] C. Yan and A. Cheung. Leveraging lock contention to improve OLTP application performance. In PVLDB, pages 444–455, 2016.
* [88] C. Yan and A. Cheung. Generating application-specific data layouts for in-memory databases. Proc. VLDB Endow., 12(11):1513–1525, 2019.
## Appendix A Lifting Legacy Design Patterns
We aim for HydroLogic to be a general-purpose IR that can be targeted from a
range of input languages. In this section, we provide initial evidence that
HydroLogic can be a natural target for code written in a variety of well-established
distributed design patterns: actors, futures, and MPI. We provide working
implementations of all code from the paper at https://github.com/hydro-project/cidr2021.
### A.1. Actors
The Actor model has a long history [47]. In a nutshell, an actor is an object
with three basic primitives [2]: (a) exchange messages with other actors, (b)
update local state, and (c) spawn additional actors. Like other object
abstractions, actors have private, encapsulated state. Actors are often
implemented in a very lightweight fashion running in a single OS process;
actor runtimes like Erlang's can run hundreds of thousands of actors per
machine. At the same time, the asynchronous semantics of actors makes it
simple to distribute them across machines.
Actors are like objects: they encapsulate state and handlers. HydroLogic does
not bind handlers to objects, but we can enforce that when lifting by
generating a HydroLogic program in which we have an Actor class keyed by
actor_id, and each handler’s first argument identifies an actor_id that
associates the inbound message with a particular Actor instance. To emulate
spawning an actor, the generated HydroLogic simply creates a new Actor instance
with a unique ID and runs any initialization code to associate initial state
with the actor. In keeping with other actor implementations, each actor is
very lightweight. HydroLogic allows us to optionally specify availability,
consistency and deployment for our actors’ handlers. Hydrolysis can then choose
how to partition and replicate actors across machines.
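The HydroLogic listings for this appendix are not reproduced in this text, but the lifting scheme just described can be sketched in plain Python. All names below (ActorTable, spawn, handle, do_count) are hypothetical illustrations, not the paper's syntax: actor state lives in a table keyed by actor_id, and each handler's first argument selects the Actor instance it runs against.

```python
import itertools

class ActorTable:
    """Hypothetical sketch: actors as rows in a table keyed by actor_id."""

    def __init__(self):
        self.actors = {}             # actor_id -> private, encapsulated state
        self._ids = itertools.count()

    def spawn(self, init_state=None):
        # Emulates spawning: create a new instance with a unique ID
        # and run any initialization code to associate initial state.
        actor_id = next(self._ids)
        self.actors[actor_id] = dict(init_state or {})
        return actor_id

    def handle(self, actor_id, handler, msg):
        # The first argument associates the inbound message with a
        # particular Actor instance.
        return handler(self.actors[actor_id], msg)

# Example handler: update local state and produce a reply.
def do_count(state, msg):
    state["count"] = state.get("count", 0) + 1
    return ("counted", state["count"])
```

Spawning is just insertion of a fresh ID, so a runtime is free to partition or replicate the table across machines.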
Actor frameworks provide event loops, and at first blush it is straightforward
to map an actor method into HydroLogic. Consider an actor method do_foo(msg)
with an RPC-like behavior in Erlang style:
[Listing elided: an Erlang-style RPC actor method.]

This translates literally into a HydroLogic handler:

[Listing elided: "A Simple Actor Method in HydroLogic".]

RPC is a good match to the transducer model, where code fragments
are bracketed by send and receive. But actors are not transducers. In
particular, they can issue blocking requests for messages at any point in
their control flow. In the next listing, note that the actor first runs the
function m_pre(msg), then waits to receive a message in the mybox mailbox,
after which it runs m_post on the message it finds:

[Listing elided: an Erlang-style method that blocks mid-flow on a receive.]

We can translate this
into two separate handlers in HydroLogic, but we need to make sure that (a)
the state of the computation (heap and stack) after m_pre runs is preserved,
(b) m_post can run from that same state of computation, and (c) that the
handler doesn’t do anything else while waiting for newmsg. _Coroutines_ are a
common language construct that provides convenient versions of (a) and (b),
and are found in many host languages for actors (including C# for Orleans and
Scala for Akka). The third property (c) can be enforced across HydroLogic
ticks by a status variable in the actor (footnote: the attentive reader will
note that we have elided a bit of bookkeeping here that buffers new inbound
messages to m while the actor is waiting):

[Listing elided: "Mid-Method Message Handling in HydroLogic".]
Note that this HydroLogic has to use non-monotonic mutation to capture the
(arguably undesirable!) notion of blocking that is implicit in a synchronous
receive call.
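A minimal Python sketch of this two-handler split follows. The names (make_actor, m, mybox) are hypothetical stand-ins, not the paper's HydroLogic: a status variable blocks further work while waiting, and the buffering of new inbound messages to m is the bookkeeping mentioned above.

```python
# Hypothetical sketch: a blocking mid-method receive split into two handlers.

def make_actor():
    return {"status": "ready", "saved": None, "buffered": [], "log": []}

def m(actor, msg):
    # Handler 1: run m_pre, then wait for a message in the mybox mailbox.
    if actor["status"] != "ready":
        actor["buffered"].append(msg)   # bookkeeping: buffer requests mid-wait
        return
    pre = ("m_pre", msg)                # stands in for running m_pre(msg)
    actor["saved"] = pre                # (a) preserve the computation's state
    actor["status"] = "waiting"         # (c) do nothing else while waiting

def mybox(actor, newmsg):
    # Handler 2: resume with m_post from the saved state.
    if actor["status"] != "waiting":
        return
    pre = actor["saved"]                # (b) m_post runs from that same state
    actor["log"].append(("m_post", pre, newmsg))
    actor["status"] = "ready"
    actor["saved"] = None
```

Note that the status mutation is exactly the non-monotonic blocking the text warns about.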
### A.2. Promises and Futures
Another common language pattern for distributed messaging is Promises and
Futures; this has roots in the actor literature as well [19], but often
appears independently. The basic idea is to spawn an asynchronous function
call with a handle for the computation called a _Promise_ , and a handle for
the result called a _Future_. In the basic form, sequential code generates
pairs of Promises and Futures, effectively launching the computation of each
Promise in a separate thread of execution (perhaps on a remote machine), and
continuing to process locally until the Future needs to be resolved. We take
an example from the Ray framework in Python:
[Listing elided: a Ray promises/futures example in Python.]

The function f is invoked as a promise for the numbers 0 through 3 via Ray’s
f.remote syntax; four futures are immediately stored in the array futures. The
function g() then runs locally while the promises execute concurrently and
remotely. After g() completes, the futures are resolved (in batch, in this
case) by the ray.get syntax. In this simple example, futures are little more
than syntactic sugar for keeping track of asynchronous promise invocations.
The translation to HydroLogic is straightforward. It could be sugared
similarly to Ray if desired, but we show it in full below. Much like our mid-
method receipt for actors, we implement waiting across HydroLogic ticks with a
condition variable.

[Listing elided: "Promises/Futures in HydroLogic".]

Promise/Future libraries
vary in their semantics, and it’s relatively easy to generate HydroLogic code
for each of these semantics. For example, note that promises and futures are
data, so we can implement semantics where they can be sent or copied to
different logical agents (like our actors above). Similarly, we can support a
variety of “kickoff” semantics for promises. Our example above eagerly
executes promises, but we could easily implement a lazy model, where pending
promises are placed in a table until future requests come along.
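The launch-work-resolve pattern described for Ray can be sketched with Python's standard concurrent.futures as an analogy; this is not the paper's listing, and f, g, and run are illustrative names only.

```python
from concurrent.futures import ThreadPoolExecutor

def f(i):
    # The "promise" body: asynchronous work, perhaps on a remote machine.
    return i * i

def g():
    # Local work that proceeds while the promises execute concurrently.
    return "local"

def run():
    with ThreadPoolExecutor() as pool:
        # Launch four promises; their futures are stored immediately,
        # analogous to futures = [f.remote(i) for i in range(4)] in Ray.
        futures = [pool.submit(f, i) for i in range(4)]
        local = g()                      # runs while the promises execute
        # Resolve the futures in batch, analogous to ray.get(futures).
        return local, [fut.result() for fut in futures]
```

Here pool.submit plays the role of eager kickoff; a lazy model would instead record pending promises and run them only when result() is requested.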
### A.3. MPI Collective Communications
MPI is a standard Message Passing Interface for scientific computing and
supercomputers [64]. While this domain per se is not a primary target for
HydroLogic, the “collective communication” patterns defined for MPI are a good
suite of functionality that any distributed programming environment should
support.
The MPI standard classifies these patterns into the following categories:
One-to-All, in which one agent contributes to the result, but all agents
receive the result. The two basic patterns are Bcast (which takes a value and
sends a copy of it to a set of agents) and Scatter (which takes an array and
partitions it, sending each chunk to a separate agent).
All-to-One, in which all agents contribute to the result, but one agent
receives the result. The basic patterns are Gather (which assembles data from
all the agents into a dense array) and Reduce (which computes an aggregate
function across data from all agents).
All-to-All, in which all agents contribute to _and_ receive the result. This
includes Allgather (similar to gather but all agents receive all the data and
assemble the result array), Allreduce (similar to reduce except the result
arrives at all agents) and Alltoall (all processes send and receive the same
amount of data to each other).
These operations map very naturally into HydroLogic. Assume we start with a
static table agents containing the set of relevant agentIDs.
[Listing elided: "MPI collective communication in HydroLogic".]

Note that these are naive specifications, and there are various well-known
optimizations that can be employed by Hydrolysis, including tree-based or
ring-based mechanisms.
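As a rough illustration of the same naive specifications, the collective patterns can be simulated in single-process Python over a static set of agent IDs. All names below are hypothetical, each "mailbox" is just a dict entry rather than a message, and the round-robin scatter is a simplification of MPI's contiguous chunking.

```python
from functools import reduce as _reduce

# A static table of the relevant agent IDs.
AGENTS = ["a0", "a1", "a2", "a3"]

def bcast(value, agents=AGENTS):
    # One-to-All: every agent receives a copy of the value.
    return {a: value for a in agents}

def scatter(array, agents=AGENTS):
    # One-to-All: partition the array, one chunk per agent
    # (round-robin here; MPI Scatter uses contiguous chunks).
    n = len(agents)
    return {a: array[i::n] for i, a in enumerate(agents)}

def gather(mailboxes, agents=AGENTS):
    # All-to-One: assemble data from all agents into a dense array.
    return [mailboxes[a] for a in agents]

def mpi_reduce(mailboxes, op, agents=AGENTS):
    # All-to-One: compute an aggregate across data from all agents.
    return _reduce(op, (mailboxes[a] for a in agents))

def allreduce(mailboxes, op, agents=AGENTS):
    # All-to-All: the reduced result arrives at every agent.
    return bcast(mpi_reduce(mailboxes, op, agents), agents)
```

Allgather composes the same way (bcast of gather), which is why these patterns are natural to express over a table of agents.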